Field of Science

The anatomy of peer review: Why airing dirty laundry in public is important

Since everyone is talking about Ron Breslow, I thought I might bring readers' attention to a truly fascinating (from the point of view of the sociology of science) article published in Nature in 1992 regarding two of Breslow's papers. The article was written by Prof. Fred Menger (Emory) and Prof. Albert Haim (Stony Brook) and details what happened when the two tried to publish a rebuttal addressing errors in two Breslow articles from the Journal of the American Chemical Society (JACS).

First I want to emphasize the reason why I am writing about this episode. I write about it not because I want to add to Breslow's troubles. I don't want it to sound like I am kicking someone when they are down, nor do I think that I will ever have the capability to do this to someone like Breslow. As I have reiterated in other places, neither this account nor the recent controversy should blind us to an unambiguous fact: Breslow's contributions to research, service and education have been outstanding by any standards. Most scientists would be lucky if they could accomplish half of what he has done during an unusually long and productive career that has still not slowed down. We can all hope that people will continue to remember him by his superlative accomplishments.

So I am not writing this post to add further criticism of Breslow. It's because I have always thought that the Nature article is a unique document in the history and sociology of chemistry, a great illustration of the pitfalls and promises of peer review that deserves a wider audience. I think laymen (who don't have journal access) will find its contents very interesting, and there are lessons in there for fellow scientists too. I do want to add a disclaimer: Prof. Menger was on my Ph.D. committee and I have enormous respect for his research, but as someone interested in the way science functions, I would have found this paper equally fascinating even if I had absolutely no connection to him. There were several other papers related to this incident, but the one in Nature stands out as a unique example of scrutiny.

The article is one of the most remarkable publications I have ever read, for two reasons. Firstly, because it presents a rare glimpse into what we might call the "anatomy" of peer review; and it does this in excruciating detail, with warts and all exposed. Secondly, because it makes you wonder whether a journal like Nature would ever again have the inclination to publish something like this.

The whole episode germinated when Breslow and a pair of graduate students (Eric Anslyn and Deeng-Lih Huang) published two papers in JACS dealing with the kinetics of an imidazole-catalyzed nucleotide cleavage reaction. This is standard stuff in physical organic and bioorganic chemistry and is part of a large body of work going back 50 years. What was interesting was that the authors seemed to derive negative rate constants for some of the reactions. Now, even college students will recognize this as odd; a rate constant is supposed to indicate the speed of a reaction. If it's positive, the reaction proceeds. If it's zero, the reaction halts. But what is a negative rate constant supposed to mean? Of course the original authors had their own interpretation of their numbers. Independently, Menger and Haim investigated this question and found a few rather significant problems with the two papers. The technical details can only be appreciated by a physical organic chemist, but the main problem seemed to lie in the treatment of the background reactions with water in the absence of imidazole. It seemed that at least some of the "negative" rate constants were artifacts of the data fitting.

The publication saga started when Menger and Haim independently submitted papers to JACS criticizing the original paper and offering corrections. The editor of JACS at the time was Alan Bard, an internationally renowned chemist who served a distinguished stint as journal editor for several years. Menger's paper was rejected by the reviewers with some odd and rather self-contradictory commentary. On one hand, the reviewers acknowledged the errors in the Breslow papers, but on the other hand they inexplicably chose to reject the manuscript, "hoping" that Breslow and Huang would publish a more detailed explanation and correction. It wasn't clear why they would not let Menger himself publish a correction in the journal.

Haim's paper was also rejected by the journal with similar comments. At this point both Menger and Haim wrote to Bard and the associate editor in charge of the manuscripts. They appealed to JACS's editorial policy which encouraged the submission of manuscripts detailing major errors in published material. This appeal did not have much effect. Something strange also transpired at this juncture; as Menger details, Breslow sent a rather interesting note to Haim, saying: "I don't know what you are so excited about. Are you being led astray by a notoriously unstable individual?". 

Following their unsuccessful attempt at publication in JACS, both Menger and Haim did what scientists always do: try to publish elsewhere. Haim first sent his manuscript to the Journal of Physical Chemistry (JPC). JPC's response was about as strange as JACS's; while acknowledging the problems with the original papers, they too chose not to publish Haim's manuscript. Menger in turn sent his paper to the Journal of Organic Chemistry (JOC). He seems to have found a much more sympathetic audience in the journal's chief and associate editors, Clayton Heathcock and Andrew Streitwieser, both leading researchers. The paper was accepted without much ado by Streitwieser, who wondered how the original paper had made it past the JACS referees.

The story does not end there. Menger's JOC paper stimulated a number of largely favorable responses. An especially noteworthy response was from a Nobel Laureate, who sent a note to Bard saying, 

"It seems to me there is a long-established scientific etiquette which says that papers pointing out errors should be published in the same journal in which the original article appeared". 

But these encouraging exchanges were followed by a few calls for retraction of the JOC paper, one of these being from Breslow. At least some of these raised legitimate scientific questions. This prompted Heathcock to ask for clarification, which Menger duly provided. Streitwieser accepted Menger's responses and the paper was published. To the journal's credit, they even permitted a footnote in which Menger said that his attempts to publish his paper in JACS had been rejected.

Meanwhile, Haim was still trying to publish his correction in JACS. He sent a revised, more thorough manuscript to Bard, asking that the original associate editor who had rejected the manuscript be replaced by a fresh pair of eyes. Bard did not agree to this request, but asked Haim to recommend ten referees for the paper. The article was sent to four of these reviewers. Two of the reviewers again agreed that the original science contained errors but again asked that Breslow and his collaborators be given a chance to publish corrections. One reviewer's response was especially interesting; he or she thought that the matter should simply be dropped, ostensibly to protect the reputation of the author:
"It could lead to a reputation, rightly or wrongly, of the author being a nitpicker and Breslow would certainly fight back loudly. Who needs such things"

I find the part about being a nitpicker especially interesting. Firstly, the criticism was not just about a few minor details; it was about rather fundamental analyses and conclusions in the paper. But more importantly, a lot of science in fact is nitpicking because it's through nitpicking that one often uncovers the really important things. Science especially should provide a welcome refuge for nitpickers. 

In any case, after yet another rejection, Haim submitted another revised manuscript. It's worth noting that most reviewers' comments during the 11 months that Haim had tried to get his manuscript published had been favorable, and nobody had ever called Haim's basic analysis into serious question. Yet the paper kept on getting rejected for various reasons. Finally, Haim appealed to Bard and the paper suddenly and inexplicably got published.

Menger ends his account of the long saga with the following words:

"As the dust settles, it is comforting to reflect that the system ultimately worked. After all, both of us succeeded in getting our papers published. Yet this was accomplished only at the cost of considerable anguish to us. Few people, we presume, would be willing to go through this experience...Two problems are involved here. First is the mishandling of the original publications, which many people have come to regard as substandard. The second is the position taken by the associate editor after flaws were pointed out. That position can only be described as evasive and defensive. Without attributing motivation for his actions, we simply state that we believe them to have been inimical to the best interests of science"

He ends by hoping that the fear of open criticism would encourage scientists to police themselves better (at this point science bloggers should let out a collective hurrah, for reasons that will become apparent below). There is a postscript: Breslow replied to this lengthy report shortly afterwards (and gratifyingly, his response was published in the same journal) and described experiments that would clarify his earlier work, but he did not address the many questions about peer review raised in Menger's communication.

This fascinating account raises many important issues. For one thing, it was quite clear that the original paper had problems; even the reviewers consistently agreed with this part of the story. Thus the science seems pretty clear, and the ambiguity in the situation came from the human element. We will never know what went on behind the scenes when the manuscripts were rejected even after the reviewers agreed with the rebuttal. Unfortunately motivations are hard to unravel, but one cannot help suspecting that prestige and influence were at work here, thwarting efficient and open scientific revision. The fact that powerful people from the chemistry community (especially Breslow and Bard) were involved cannot be an inconsequential factor.

Secondly, it does seem important to me (although this is a relatively minor issue in my mind) for journals to publish corrections to papers in their own pages; at the very least, this underscores a culture of responsibility on the part of the journal and sends out a positive message. However, this practice involves some interesting operational questions. Should the journal first allow the original authors to publish a correction? If so, how long should it wait before doing this? It seems clear to me that legitimate corrections should be published immediately, irrespective of the source.

The most remarkable fact about this account is that Nature published it, and in writing it Menger performed a unique and valuable public service. Personally I have never seen such a detailed dissection of peer review described in a major journal. Some people would deplore this public airing of dirty laundry. They would say that none of this can undo what happened, and that the only effect of such articles is bad blood and destroyed reputations. I happen to disagree. I think journals should occasionally publish such analyses, because they alert us to the very human aspect of science. It demonstrates to the public what science is truly like, how scientists can make mistakes, and how they can react when they are corrected or challenged. It sheds important light on the limitations of the peer review process, but also reaffirms faith in its ultimately self-correcting nature. Some people might think that this is a great example of how peer review should not be, but I would like to think that this is in fact exactly how the process works in the vast majority of cases: imperfect, ambiguous, influenced by human factors like reputations, biases and beliefs. If we want to understand science, we need to acknowledge its true workings instead of trying to fit it into our idealized worldview of perfect peer review.

In this day and age, blogs are performing the exact same function as Nature did in 1992, and this is clearly apparent from the latest Breslow brouhaha. Menger and Haim in 2012 would not have to test their patience by trying to publish in JACS for 11 months; instead they could upload their correction on a website and let the wonder of instant online dissemination work its magic. Blogs may not yet be as respectable as JACS, but the recent incident shows that they can be perfectly respectable outlets of criticism as long as the criticism is fair and rigorous. The growing ascendancy of blogs and their capacity to inflict instant harm on sloppy or unscrupulous science should hopefully result in much better self-policing, leading authors to be more careful about what they publish in "more respectable" venues. Thus, quite paradoxically, blogs could lead to the publication of better science in the very official sources which have largely neglected them until now. This would be a delightful irony.

Perhaps the greatest message that the public can take home from such incidents is that even great scientists can make mistakes and remain great scientists, and that science continues to progress in one way or another. No matter how bad this kind of stuff sounds, it's actually business as usual for the scientific process, and there's nothing wrong with it.

Would Ron Breslow's dinosaurs be typing this post?

Much has been written about a recent perspective in JACS written by Ronald Breslow on the emergence of homochirality during the origin of life. There's excellent commentary on the topic from See Arr Oh and Paul@Chembark. Briefly, Breslow's paper describes some pretty interesting research from his and other groups establishing a possible mechanism for the transfer of chirality from alpha-methyl amino acids to standard amino acids, followed by the amplification of that small chiral excess through a variety of plausible mechanisms involving the concentration of the dominant enantiomer.

The paper would have remained an interesting chemical curiosity about the origin of life, one that could even have served to remind the public that the origin of life is chemistry's Big Question, had it not been for two lines at the end of the piece:

"An implication of this work is that elsewhere in the universe there could exist life forms based on D amino acids and L sugars...Such life forms could even be advanced versions of dinosaurs,  if mammals did not have the good fortune to have the dinosaurs wiped out by an asteroidal collision, as on Earth. We would be better off not meeting them."

What was interesting was that when I first came across the paper, I spent about two seconds on this line and moved on. The line is an amusing attempt at humor. You usually don't see humor in a technical paper, but in fact I am all for it; I think we need to spice up our otherwise dry scientific literature with the occasional joke. The content of the paper obviously had nothing to do with dinosaurs; it was about a specific technical chemical puzzle in the origins of life. And nothing would have come of it had the ACS PR office not created a sensationalized news piece wrongly centered on these two lines. Scant attention was paid to the scientific substance of the paper, and it didn't help when other popular venues like The Huffington Post also questioned Breslow about it and received the following answer:

"From there, Breslow makes the jump to advanced dinosaurs. But why might extraterrestrial life be in that form? “Because mammals survived and became us only because the dinosaurs were wiped out by an asteroid, so on a planet similar to ours without the asteroid collision it is unlikely that human types would be there, more probably advanced lizards (dinosaurs),” Dr. Breslow told The Huffington Post in an email."

This set of events led to some unfortunate consequences. For one thing, the undue emphasis on dinosaurs at the expense of homochirality was another nail in the coffin of the public communication of chemistry. Here was a chance to explain to the public why the origin of life is chemically fascinating, but instead the chemical substance got overwhelmed by the precipitate of publicity surrounding dinosaurs. If the ACS is wondering why chemistry is having such a PR problem, now would be the time to look in the mirror.

The situation was exacerbated by more serious matters. Following a tip from some commentators, Stu Cantrill of Nature Chemistry looked up two old Breslow papers on the same topic and found an extreme case of self-plagiarism; most of the paper seems to have been copied verbatim from these earlier sources. Breslow should not be blamed for inserting that little joke at the end - it was the media which sensationalized it - but he cannot be excused for the gratuitous self-plagiarism.

That's about all I want to say about this unfortunate episode, since others have covered it extensively, but I do want to focus on Breslow's reply to The Huffington Post. Some have chided him for it, but the statement is actually not as absurd as it sounds, since Breslow is asking a famous, age-old question in evolutionary theory: if the tape of evolution were re-run, would it again produce dinosaurs, Breslow and ACS editors? Or in other words, how predetermined is evolution, and how dependent is it on accidents? This question is a profound one, since if the answer is even a qualified "yes", it has serious implications not just for science but for theology, philosophy and the whole puzzle of human existence.

Stephen Jay Gould was a powerful advocate of contingency in evolution, and his argument is easy to appreciate. Evolution has been shaped by so many quirks of environment and the fates of individual organisms and species that it would be naive to think that chance did not play a role in it. A single piece of wood accidentally drifting away and carrying a few individuals to an isolated island can sculpt the evolution of a species. And we know for a fact that more massive events like volcanic eruptions and earthquakes certainly did this. In fact it was geologist Charles Lyell's descriptions of such seismic events that started Darwin down the path to evolution and natural selection. It seems, then, that if one could hypothetically run "what if" scenarios, it's very unlikely that anything approximating modern humans or dinosaurs would ever arise.

But this answer is not as obvious as it sounds. The biologist Simon Conway Morris has put forth a competing scenario in which certain universal features of evolution guarantee the presence of common adaptations during the evolution of species. This argument is based on what's called "convergent evolution", which essentially refers to the existence of common solutions to diverse evolutionary problems. A typical example would be all kinds of mammals, fish, amphibians and reptiles whose bodies are adapted to swimming. In most of these creatures you see similarly shaped, streamlined bodies, muscles and bones which are suited for swimming. Another principle concerns homologous structures (and not convergent evolution, as a commentator reminded me) like the digits of the hand, whose basic plan seems to be conserved across species. Indeed, homologous structures provide some of the strongest pieces of evidence for common evolutionary origins. Thus Morris's argument is that even if the evolutionary tape were run again, something similar to humans, dinosaurs, frogs and eagles (although the details would certainly differ) would be seen if the process were left to itself for a few billion years. This interpretation acquires even greater significance when applied to humans: would such an intelligent, successful and self-centered species as Homo sapiens have evolved in an alternative evolutionary universe?

There is a lot of interesting discussion to be had about this topic. It's equally fascinating when applied to chemistry and leads to similar questions. For instance, what are the chances that the foundational compounds of life - DNA, RNA, amino acids, sugars, ATP - would have formed had evolution been left to run again with different tweaks and quirks of fate? Personally I find the questions somewhat easier to answer in the case of chemistry, since the formation of many of these compounds is governed by relatively simple energetic arguments. ATP's express purpose is to make otherwise unfavorable reactions possible by driving them downhill through high-energy bonds, and had ATP not existed, it is hard to see why some other compound performing the same function would not have evolved. A great example of an attempt to answer these questions is Frank Westheimer's classic paper "Why Nature Chose Phosphates", in which he points to the unique properties of phosphate that make it such a dominant presence in life's workings, both in metabolism and heredity.

Breslow's question is therefore quite sensible and its implications are fascinating to ponder. What would 2012 have looked like had the dinosaurs not been wiped out by an asteroid? Would they still have been alive, and would humans have had the unfortunate fate of co-existing with them? Would they be as smart as humans? Naturally such scenarios would have profoundly affected the evolution and character of our civilization. Or would the dinosaurs have precluded the rise of Homo sapiens, perhaps by nipping our scarce population in the bud and making us extinct? Or would they have become extinct themselves through some other cause, perhaps extreme climate change? What indeed would planet earth have looked like had it still been ruled by dinosaurs?

Naturally we don't know the answers to these questions. But Breslow's little joke at the end, while sounding silly, inadvertently asks a very important and thought-provoking question. Too bad it was all obscured by the charges of self-plagiarism.

Modeling magic methyls


Shamans have their magic mushrooms, we medicinal chemists have our magic methyls. The 'magic methyl effect' refers to the sometimes large and unexpected change in drug potency resulting from the addition of a single methyl group to a molecule (for laymen, a methyl group is a single carbon with three hydrogens, and you don't expect spectacular effects from the addition of such a small modification to a drug). This is part of a broader paradigm in chemistry in which small changes in molecular structure can bring about large changes in properties, especially biological activity.

In this study, researchers from Bill Jorgensen's group at Yale ask what exactly it is that a methyl group can do to a biologically active molecule. They look at more than 2000 cases of purported activity changes induced by methyl groups, drawn from two leading medicinal chemistry journals, and their work highlights some unexpected effects of methyls. The techniques used are free-energy perturbation, Monte Carlo and molecular dynamics simulations, applied to compare methylated and non-methylated versions of published inhibitors in an effort to gain insight into the factors dictating potency.

Firstly, they find that for all their reputation, methyls mostly confer a modest increase in potency. The greatest increase is about 3 kcal/mol in free energy, which, considering the exponential relationship between free energy and binding constant, is actually quite substantial. But this happens in a small minority of cases; as they find, a 10-fold boost in potency with a methyl is seen in only 8% of the cases, while a 100-fold difference is seen in only 0.4%.
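The exponential relationship between free energy and binding constant is worth making explicit: the ratio of binding constants scales as exp(ΔΔG/RT). Here is a minimal back-of-the-envelope sketch; the temperature and the helper-function names are my own illustrative assumptions, not anything from the paper:

```python
import math

R = 1.987e-3   # gas constant in kcal/(mol*K)
T = 298.0      # assumed room temperature in kelvin

def fold_change(ddg_kcal):
    """Fold improvement in the binding constant for a free-energy
    gain of ddg_kcal (kcal/mol): K_new/K_old = exp(ddG / RT)."""
    return math.exp(ddg_kcal / (R * T))

def ddg_for_fold(fold):
    """Free-energy gain (kcal/mol) needed for a given fold boost."""
    return R * T * math.log(fold)

# A 10-fold potency boost needs only ~1.4 kcal/mol, while the rare
# ~3 kcal/mol cases correspond to a boost of well over 100-fold.
print(ddg_for_fold(10))   # ~1.36 kcal/mol
print(fold_change(3.0))   # ~160-fold
```

Seen this way, even "modest" free-energy gains from a single methyl translate into potency differences a medicinal chemist would care about.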

So what does a methyl do? For starters, a methyl is simply a nice, small, lipophilic group, so you would expect it to give you some advantage simply by fitting snugly into an otherwise unoccupied binding pocket. But the real advantage of a methyl is thought to come from kicking out 'unhappy' water molecules; often a small protein pocket is occupied by a highly constrained water molecule that is desperate to join its free brethren in the bulk. A methyl group is usually only too happy to oblige and kick the water out. Now, as the crystallographer Jack Dunitz demonstrated more than a decade ago, the maximum free-energy gain you could estimate from displacing a water molecule is about 2 kcal/mol. Considered in this light, you would expect a gain of at most that much from a hydrogen-to-methyl change, so the very rare 3 kcal/mol cases seem to call this belief into question. Clearly the common wisdom about methyls displacing waters is not telling us the entire story.

As the authors demonstrate, the common wisdom may indeed point to uncommon cases. They look at five cases where methyls give the greatest potency boost, and in no case do they find evidence for displacement of a water molecule. So where's the potency gain coming from? It turns out that it may be coming from a common but often underappreciated factor: conformational reorganization. When a ligand binds to a protein, it exists in several - often hundreds - of conformations in solution. How tightly the protein can bind the ligand depends in part on how much energy must be expended to twist and turn these conformations into the single bound conformation. You would expect that the more similar a molecule's unbound conformation in solution is to its protein-bound conformation, the easier it would be for the protein to latch on to it.

And indeed, that's what they find. Most of the cases they look at concern potency gains coming from putting methyls at the ortho position of a biaryl ring. Organic chemists are quite familiar with the steric, planarity-disrupting effects of ortho substituents on biaryl rings; in fact it's a tried and tested strategy to improve solubility by disrupting crystal packing. It turns out that the bound structures of the molecules present a twisted, non-planar conformation. In the absence of methyls, the rings would prefer to stay almost coplanar (or at least less non-planar) in the unbound conformation. But putting methyl groups on twists the rings in the unbound conformation into a form that's similar to the bound one; basically there's more overlap between the solution and bound conformations in the case of the methylated versions compared to the non-methylated ones. Consequently, the protein has to expend less energy to turn an already similar conformation into its bound counterpart. This becomes clear simply by comparing the single dihedral angle bridging the two rings in the bound and unbound conformations of one particular molecule (as shown in the figure above).
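This reorganization picture can be made concrete with a toy two-conformer Boltzmann calculation; note that the energies below are invented purely for illustration and are not taken from the study:

```python
import math

RT = 0.593  # kcal/mol at ~298 K

def populations(energies):
    """Boltzmann populations for a list of conformer energies (kcal/mol)."""
    weights = [math.exp(-e / RT) for e in energies]
    total = sum(weights)
    return [w / total for w in weights]

def reorganization_penalty(energies, bound_index):
    """Free-energy cost, -RT*ln(p), of selecting the bound-like
    conformer (at bound_index) out of the solution ensemble."""
    p = populations(energies)[bound_index]
    return -RT * math.log(p)

# Hypothetical biaryl: without the ortho methyl, the bound-like twisted
# conformer sits 1.0 kcal/mol above the planar solution ground state;
# with the methyl, the twisted conformer IS the solution ground state.
unmethylated = reorganization_penalty([0.0, 1.0], bound_index=1)
methylated = reorganization_penalty([2.0, 0.0], bound_index=1)
print(unmethylated - methylated)  # the methyl saves ~1 kcal/mol of strain
```

Even this crude two-state model shows how pre-organizing the solution conformation toward the bound geometry buys potency without any new protein-ligand contacts.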

The study imparts an important general lesson: when designing ligands or drugs to bind a protein, it is as important to take the solution conformations into account as the bound ones. It's not just how the protein interacts with the drug; what the drug is doing before it meets the protein is equally important.

A citation against citations

In the latest issue of Angewandte Chemie, Stanford chemistry professor Richard Zare has some cogent words of advice for assessing young faculty members when they are up for tenure. Zare has written the article partly as a response to what he sees as an excessive use and abuse of citation indices throughout the world in judging tenure-worthy achievements.

As Zare notes, the h-index, which has been adopted partly because it seems to measure both quality and quantity, is particularly ill-equipped to measure early success in research. This is mostly because (and this is one of the biggest arguments against any citation metric) the significance of most research does not become obvious until much later. Zare cites the example of physicist Steven Weinberg's paper unifying the electromagnetic and weak forces; while it was cited only a few times in the first few years, it is now one of the most highly cited papers in the history of physics. Another obvious example is Watson and Crick's DNA paper, which gathered very few citations in its fledgling years. Thus citations, if they make sense at all, make much more sense later in one's career.
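For readers who haven't met it, the h-index is simple to state: it is the largest h such that the author has h papers with at least h citations each. A quick sketch (the citation counts are my own toy numbers) also shows one of its quirks:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A landmark paper moves the h-index no more than a routine one:
print(h_index([900, 8, 5, 4]))  # 4 - the 900 citations count the same as 5
```

This is precisely the kind of blindness to outsized, slow-burning impact that makes the metric a poor guide to early careers.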

Yet there is a disturbing trend of universities worldwide adopting citation statistics to drive tenure decisions. In some countries like China, researchers are even awarded cash prizes for trying to publish large numbers of papers in both leading and minor science journals. As a recent article bemoaned, this has led to journals like Nature and Science being flooded by papers of dubious quality from certain countries. Not only can such practices create bias against papers from these countries, but they harm the global enterprise of science as a whole by emphasizing publication at the expense of genuinely interesting work.

Far better is to try to objectively judge the promise of young investigators by evaluating their impact on specific subfields. To this end Zare describes the system adopted by the Stanford chemistry department, which puts the highest emphasis on 10 to 15 letters of recommendation from around the world. The letter writers are essentially asked to answer the simple question, "Has the investigator's work in his/her chosen area led to new understanding and directions in the field?". Everything else comes second, including the number and authorship rank of papers along with various indices. As Zare says,

We do not look into how much funding the candidate has brought to the university in the form of grants. We do not count the number of published papers; we also do not rank publications according to authorship order. We do not use some elaborate algorithm that weighs publications in journals according to the impact factor of the journal. We seldom discuss h-index metrics, which aim to measure the impact of a researcher's publications. We simply ask outside experts, as well as our tenured faculty members, whether a candidate has significantly changed how we understand chemistry.


I find it very surprising - and encouraging - that the amount of money brought in does not play a major role in determining tenure. If true, this seems to go against disturbing current trends that are geared toward evaluating professors as if they were sales managers at Macy's or hedge fund managers on Wall Street.


There is one caveat to what otherwise seems to be a cogent and role model-worthy tenure policy adopted by the Stanford chemistry department. Just as the true impact of research does not become clear until years later, the true value of ideas also does not become clear by asking 10, 15 or even 100 referees. Although this helps, it can mask the fact that most original scientific contributions at least partly challenge conventional wisdom, and keepers of the faith are almost always reluctant to endorse such contributions. What should really matter is not whether a young scientist's work has led to new truths, but whether it has been interesting enough to spark a flurry of research activity that in turn may lead to minor or major truths. In science, being interesting is more important than being right, and tenure committees should take this fact into account.

On Freeman Dyson, cadmium estimation and the joy of chemistry

Freeman Dyson, who is a hero of mine, is someone who has done lots of interesting things during a long and fruitful life. If you look at his writings or talks, it is easy to mistake him for a philosopher of science. But as he himself said to me during a long, one-on-one, intellectually sparkling lunch discussion two years ago, a lot of people think that talking about the big picture automatically makes one a philosopher. Dyson has indeed written about a lot of big-picture topics, from the origins of life to the colonization of space. But he also maintains that he has always been first and foremost a problem solver, someone who is much more interested in details than in grand theorizing. Whatever philosophy he manages to weave is built on a foundation of solving specific technical problems. This is partly evidenced by his work on highly technical engineering projects ranging from nuclear spaceships to nuclear reactors.

This quality would make Dyson quite comfortable in the company of chemists, since chemistry by its very nature is more a problem-solving discipline rather than a philosophical pursuit like cosmology. I was curious to know Dyson's views on chemistry since he has not really written anything of note on the subject except a review in Physics Today of Roald Hoffmann's fine book "The Same and Not the Same".

Then I remembered a great collection of interviews with Dyson that I found a few years ago on a website called Web of Stories. This website is a must-see for history of science enthusiasts. It features interviews with scores of famous scientists, humanists and artists from diverse disciplines. The best thing about these interviews is that they are long and detailed instead of two-minute sound bites; each interview runs for a few hours in total and covers most significant events in the interviewee's life, so you really get a close look at the thought processes of leading thinkers. In one of Dyson's interviews I was delighted to find this:

"I was going to say about chemistry that Roald Hoffmann whom I got to know quite recently, who is a chemist who writes poetry and is a great character, he has the same attitude toward chemistry that I do. I mean it is the beauty of the details rather than any over-arching theory. In that way it's very different from physics, and I had a taste for it. My taste is always more for the details than for the big picture...

...I learned chemistry from Christopher Longuet-Higgins who was already much more of an expert and more excited about chemistry than Eric James. And I remember Christopher bringing to Winchester some crystals of stannic iodide which he had made, which is the most marvelous stuff. It is a brilliant scarlet colour and it makes these beautiful scarlet crystals, and they're also extremely heavy. If you have a little bottle full of it, it feels like lead. So that kind of chemistry I found delightful, just the sort of details of the actual stuff, rather than the theory that lay behind it.

I remember the joy when, here in Princeton, Willard Libby came on a visit once and brought along another little bottle of chemicals, which also was very heavy, and that was barium xenate, which was barium xenon oxide, which of course was an absolute revelation because nobody imagined that xenon could have compounds, being an inert gas. And it was sometime in the 1950s these compounds were discovered, and barium xenate is just such an ordinary stuff. It's a sort of heavy white crystals which are completely stable, they don't show any signs of anything strange and there it is. If you heat them up of course the xenon comes bubbling off..."

So there, I think Dyson would have felt right at home as a chemist. And the point he makes is an obvious and important one that is often lost among the clichéd caricatures of chemists bubbling frothy liquids and crystallizing colorful solids that appear in literature and cinema. But it's precisely these bubbling liquids and colorful solids that provide chemistry with a palpable reality that's often missing from more theoretical sciences.

A personal digression. I remember an episode from an undergraduate chemistry lab where we were supposed to estimate two unknown metal ions from a solution. After trying out every test in the book we could only detect copper. The other ion remained a mystery and we finally threw up our hands. That's when the instructor revealed his trick. It turns out that the method of copper estimation that we were using involved making the solution highly acidic with hydrochloric acid. With a smile on his face, the instructor put a single drop of the concentrated acidic solution in a large flask and then filled the flask to the brim with water, diluting the initial solution by at least a factor of ten thousand. Our eyes were glued to the flask as he passed hydrogen sulfide gas into the solution. And then, starting from the bottom and rising to the top, the flask filled up with the most beautiful yellow color that I have ever seen; it's a sight that I will never forget. What happens is that cadmium is precipitated as cadmium sulfide only in dilute acidic solution, while copper sulfide precipitates even from concentrated acid. All our tests for detecting copper in concentrated acid missed the hidden cadmium, until it was ready to be unmasked by simple dilution.
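The scale of that dilution is easy to check with back-of-the-envelope arithmetic. A minimal sketch, with drop and flask volumes that are my own illustrative assumptions rather than figures from the actual demonstration:

```python
# Back-of-the-envelope check of the dilution factor.
# Assumed volumes (illustrative only): a single drop is roughly
# 0.05 mL (about 20 drops per mL), and a large flask filled to
# the brim holds about 1 L.
drop_volume_ml = 0.05
flask_volume_ml = 1000.0

dilution_factor = flask_volume_ml / drop_volume_ml
print(f"Dilution factor: {dilution_factor:,.0f}x")  # Dilution factor: 20,000x

# Even a 500 mL flask would give a factor of 10,000, consistent
# with "at least a factor of ten thousand".
```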

Every chemist among us is familiar with this feeling of discovering something unknown, no matter how trivial or important, that actually exists; all the better if it has a brilliant scarlet or full-throated yellow color, as is often the case in chemistry. Dyson is right that there is something unmistakably reassuring, in-your-face - real - about holding a vial of something that was previously considered impossible.

Physicists often like to tell the story of how Einstein felt that "something had snapped inside" him when he saw the predictions of general relativity confirmed by observations of the perihelion of Mercury. He surely must have felt the rare, once-in-a-lifetime satisfaction of a great theoretical construct being validated by a real observation that could be boiled down to a single number. We tend to think of Einstein as the great scientist-philosopher, but there he was, being ecstatic about a technical detail that was a crucial part of his magnum opus. Observing barium xenate or cadmium is not quite as momentous as confirming the theory of relativity, but I can readily imagine feeling a shiver down my spine if I had been presented with that sort of chemical evidence: evidence of something previously thought impossible, which I could nonetheless hold in my hand and keep in my closet. That's the joy of chemistry.

A molecular modeler's view of molecular beauty

The last few months have seen discussion on the definition of "beauty" as applied to molecular structures. As a molecular modeler, I ended up ruminating on what I think are the features of molecules that I would call beautiful.

This kind of exercise may seem pointless since it's highly subjective. And yet I think that exploring these definitions is valuable because at least a few of the conflicts that occasionally erupt between groups of interdisciplinary scientists seem to reside in divergent definitions of molecular beauty. A typical scenario is when a modeler designs a molecule that seems to fit perfectly in a protein binding pocket, only to have it dismissed by the medicinal chemist because it's too hard to make. Or because it looks like a detergent. In this case, structural beauty has not necessarily translated to practical beauty - always a consideration when one is talking about molecules which ultimately have to be synthesized. In fact, projects can often progress when different scientists agree - often unconsciously - on a notion of beauty or utility, and they can falter when these ideas lead different scientists down very different paths.

So what kind of molecules does a modeler find pleasing? Let's first look at structure-based design, which in its broadest sense entails acquiring crystal structures of proteins and finding and tailoring molecules to fit the binding sites of those proteins. Beauty in structure-based design is not hard to appreciate since it inherently involves protein structures, paragons of symmetry and parsimonious elegance. In this sense the modeler has much more in common with crystallographers than with medicinal chemists. He or she is always struck by how precisely oriented the amino acid residues in a protein's binding site are. Accordingly, it's not surprising that for a modeler, a particularly beautiful molecule may be one that makes interactions with most or all of the residues, especially in the form of hydrogen bonds with geometries so perfect that they might have fallen from the sky. This preference for geometric complementarity is extremely general. A particularly pleasing manipulation for a modeler may be one that leads to a perfect fit for a given protein cavity, with the contours of the ligand's molecular surface following those of the protein.
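Those "fallen from the sky" hydrogen-bond geometries can be caricatured in a few lines of code. A minimal sketch of the kind of geometric check a modeler has in mind, using common rules of thumb (a donor-acceptor distance under roughly 3.5 Å and a donor-H...acceptor angle above roughly 120°); the cutoffs and coordinates below are illustrative assumptions, and real scoring functions are far more nuanced:

```python
import math

def hbond_quality(donor, hydrogen, acceptor,
                  max_da_dist=3.5, min_dha_angle=120.0):
    """Toy hydrogen-bond check using common geometric rules of thumb:
    a donor-acceptor distance under ~3.5 Angstrom and a
    donor-H...acceptor angle above ~120 degrees. Cutoffs are
    illustrative; real scoring functions are far more nuanced."""
    def angle_deg(a, b, c):
        # Angle at vertex b between points a, b, c, in degrees.
        v1 = [a[i] - b[i] for i in range(3)]
        v2 = [c[i] - b[i] for i in range(3)]
        dot = sum(x * y for x, y in zip(v1, v2))
        n1 = math.sqrt(sum(x * x for x in v1))
        n2 = math.sqrt(sum(x * x for x in v2))
        return math.degrees(math.acos(dot / (n1 * n2)))

    da = math.dist(donor, acceptor)          # donor-acceptor distance
    dha = angle_deg(donor, hydrogen, acceptor)  # D-H...A angle
    return da <= max_da_dist and dha >= min_dha_angle

# A short, near-linear contact counts as a hydrogen bond...
print(hbond_quality((0, 0, 0), (1.0, 0, 0), (2.9, 0, 0)))   # True
# ...while a long, sharply bent one does not.
print(hbond_quality((0, 0, 0), (1.0, 0, 0), (0.5, 3.0, 0)))  # False
```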

Yet there are limits to how far this beauty-from-complementarity can take you. Designing a molecule with interactions that are too tightly knit and precise may leave very little room for error. And considering the inherent inaccuracy of many modeling algorithms, it may be best to always be forgiving and give the design some space to move around in the binding site, so that slight reorganizations of both protein and ligand can still result in reasonably tight binding.

While we are on the subject of shape complementarity, let's not forget a factor which has only recently begun to be taken into account - water molecules filling the active site. These water molecules, which are often highly constrained, can have a perverse beauty of their own; they are sometimes trapped in both an entropic (no freedom of movement) and enthalpic (unable to satisfy their optimal four hydrogen bonds) cage, and yet they are the only things keeping the protein from what technically must be called a vacuum. These water molecules deserve to be freed, and there is a particularly deep kind of satisfaction in adding a functional group to a ligand that would displace them from the protein's tight embrace and enable them to join the ranks of their hydrogen-bonded brethren. A modeler could regard a modification precisely tailored to improve affinity by freeing up a water molecule as a particularly beautiful case of molecular engineering.

Yet as we have sometimes seen, the beauty of filling protein pockets with perfectly matched shapes may lead to a conflict with other practitioners of the art. For starters, a relentless march toward adding functionality tends to inflate two important properties - size and lipophilicity - the twin hallmarks of "molecular obesity". Dents in beauty can also come from too many chiral centers, aromatic rings or quaternary carbons. Real beauty in structure-based design arises from a precise balance of complementarity and simplicity in the design. And of course there is the always important consideration of synthetic accessibility, which is why every modeler should ideally take a basic course in organic synthesis.

Beyond structure-based design, there are other considerations that endear molecules to modelers. Protein structures are not always available, and in the absence of such structures, molecular similarity alone is sometimes used to find novel active molecules. The question is simple: given a known molecule of high potency, how will you discover molecules similar to it in structure? The answer is complicated. The key thing to realize here is that this similarity must ultimately be defined in three dimensions, since all molecular interactions are in 3D. Conformational similarity is one of the most elegant considerations for a modeler, and one that synthetic chemists often don't contemplate. Very few things provide as much satisfaction to a modeler as the observation that two molecules with very different 2D structures seem to adopt the same biologically active 3D conformation in solution and in the binding pocket. And a particularly gratifying and rare find - for both modelers and synthetic chemists - is a molecule which nature has sterically constrained to a single dominant conformation by the judicious placement of certain functional elements. Discodermolide, a microtubule-binding agent which I worked on in graduate school, belongs to this category.
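For readers curious about what a similarity calculation even looks like, here is a minimal sketch of the Tanimoto coefficient, the workhorse of 2D similarity searching. The feature sets below are invented for illustration; real implementations compare bit-vector fingerprints, and, as argued above, 2D similarity is only a starting point before 3D comparison:

```python
def tanimoto(a, b):
    """Tanimoto (Jaccard) coefficient between two sets of structural
    features: shared features divided by all features present in
    either molecule. Ranges from 0 (nothing shared) to 1 (identical)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical feature sets for two molecules (purely illustrative):
mol1 = {"phenyl", "amide", "hydroxyl", "methyl"}
mol2 = {"phenyl", "amide", "carboxyl"}
print(round(tanimoto(mol1, mol2), 2))  # 0.4

# The catch: two molecules with a low 2D score like this can still
# adopt nearly identical biologically active 3D conformations, which
# is why shape and pharmacophore comparisons matter to the modeler.
```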

Those, then, are the criteria that come to my mind for defining molecular elegance - perfect (but not too perfect) shape complementarity to a precisely structured binding site, a well-defined 3D conformation often shared with a completely different molecule, and simplicity of structure in spite of these other demanding features. Conflicts arise when modelers' and chemists' notions of molecular pulchritude diverge. And it's a happy coincidence when these sensibilities overlap, which happens more often than you might think.

On physics envy and drug discovery

In a recent New York Times article, two prominent social scientists lament the epidemic of physics envy that has infected their ranks, and they implore their colleagues to take a more observation-based, utilitarian approach to addressing the most pressing problems of social science.

We natural scientists should empathize. Physics envy is the name of a disease that afflicts many scientists at various stages of their careers. Its main symptom is an overwhelming desire to see their science - whatever it may be - become as precise and predictable as particle physics. The victim of physics envy thinks wistfully of the glorious days of quantum mechanics and molecular biology and believes that his or her science can achieve the same six-decimal precision in its measurements and predictions. The victims may be natural or social scientists, although the disease takes on a particularly nasty form when it affects economists, as recounted by the physicist-turned-financial modeler Emanuel Derman.

Physics envy is so widespread that even physicists are affected by it. Some theoretical physicists for instance want to reduce all the world's complexities to an all-encompassing theory of "everything", preferably a single equation that would describe everything from black holes to romantic love. Presumably this will help us truly understand, from first principles, why nations go to war or why election outcomes depend on Ohio. This is in part because physics envy is closely tied to its cousin reductionism which provided untold dividends in twentieth century science. But the scientific world in the twenty-first century is a different creature. Physics envy during our times can cause tunnel vision, an exaggerated belief in the power of mathematics, and not infrequently, the loss of billions of dollars. Perhaps the worst thing about this malady may be its focused transmission to new generations of students and scientists, thus ensuring its long life and continued dominance.

Sadly, this disease is not unknown among drug discovery scientists, and I dare say that I have suffered from it myself. Drug discovery is a complex, multidisciplinary field where luck and intuition play as great a role as any rational approach. Drug hunters study complex systems that are almost always refractory to any one approach from any one science. Yet physics envy exists, implicitly or explicitly. You see it in the modeler who thinks he can find the next revolutionary drug simply by optimizing his compound's affinity for his protein, or the synthetic chemist who thinks that he can produce an army of molecular analogs that can dissect a complete biological pathway, or the biologist who thinks that inhibiting his pet protein will be all that it takes to disable a complicated biochemical pathway.

A more serious case is the scientist who thinks that if only we had knowledge of every single biological and chemical building block, if only we could map out every gene, every protein, every small molecule interacting with all these genes and proteins and present these interactions on a wall like a subway map, we would be able to understand and treat all diseases. These scientists, who often but not always go by the name of systems biologists, try to produce precisely this kind of map and to predict an output from an input. The input may be the activation of a gene or the inhibition of a protein by a small molecule; the output can be an upregulated protein or the manifestation of a phenotype. The systems biologists think that what's necessary (and perhaps sufficient) for predicting responses in a biological system is a map.
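The "subway map" ambition can be caricatured in code. Here is a toy sketch of a signed interaction map in which a perturbation is propagated from input to output; every name and edge is invented for illustration, and the messiness of real biology is precisely what such a toy leaves out:

```python
# A toy signed interaction map: each edge (source, target) carries a
# sign, +1 for activation and -1 for inhibition. All names and edges
# are hypothetical, for illustration only.
EDGES = {
    ("geneA", "proteinX"): +1,     # geneA activates proteinX
    ("proteinX", "proteinY"): -1,  # proteinX inhibits proteinY
    ("proteinY", "phenotype"): +1, # proteinY drives the phenotype
}

def propagate(node, state, effects=None):
    """Naively propagate a perturbation (state = +1 for up, -1 for
    down) through the map, multiplying signs along each edge."""
    if effects is None:
        effects = {}
    effects[node] = state
    for (src, dst), sign in EDGES.items():
        if src == node and dst not in effects:
            propagate(dst, state * sign, effects)
    return effects

# Input: knock down geneA. Output: the predicted phenotype direction.
result = propagate("geneA", -1)
print(result["phenotype"])  # 1 (phenotype predicted to go up)
```

The map says: less geneA means less proteinX, which means proteinY is disinhibited, so the phenotype goes up. The article's point is that real biology routinely defies this kind of tidy sign-chasing.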

Yet as the Nobel Prize-winning biologist Sydney Brenner once wrote in a very readable article, he has been practicing "systems biology" all his life, except that in his time it was called "physiology". Brenner is supposedly practicing systems biology without a license, and yet he and many of his fellow classical physiologists seem to have been both remarkably successful and strangely immune to physics envy. Their job was to study the responses of biological systems using every tool at their disposal. It did not matter if they didn't have an overarching theoretical framework to tie together their diverse observations. As the authors of the NYT article indicate, the lack of deep theory does not preclude either understanding or utility. An example of this principle would be the school of pharmacology starting with Steve Brodie at the NIH and culminating with Solomon Snyder at Johns Hopkins. As illustrated in Robert Kanigel's book, these scientists made important discoveries in basic pharmacology even when detailed knowledge of genes and proteins was unavailable. They did not need the pharmacological equivalent of a "theory of everything" to proceed. In fact they did not even need a theory in some cases.

The same goes for drug discovery in general. Think of the currently fashionable paradigm of phenotypic screening which involves discovering new compounds by looking at their effects on simple phenotypic traits like locomotion or heart rate. These approaches hark back to the old days of drug discovery when the responses could be purely clinical, anything from increased urination to dry mouth to flushed faces. This approach is very far from the target-based reductionist approaches that have become popular in the last twenty years, yet nobody can deny its value.

But neither are target-based approaches useless. If you are dealing with HIV protease which can yield copious crystal structures, a structure-based approach might be, and indeed was, very fruitful. But think of a protein whose binding pocket is unknown, which is full of flexible regions and whose functional form consists of oligomers of unknown composition, and structure-based approaches might be completely useless. It might then be best to proceed based on the biology alone or by educated guesswork guided by SAR trends.

The point is, it all depends on the specific case. And therein lies the rub of drug discovery and the problem of physics envy. As I mentioned in a previous post, physics searches for general principles while drug discovery largely thrives on exceptions. The question in drug discovery is not what overarching principle can be applied to all cases, but what exact mix of different techniques would work for a given case. It's very different from physics, where the goal is to search for one equation to rule them all.

Fortunately physics envy has a potent antidote, both in physics and in drug discovery. It goes by the name of "nature". Whenever physics envy tries to go on a rampage, nature invariably steps in and straps on the straitjacket. In just the last few years we have seen nature generously cutting us down to size. From resveratrol-based SIRT inhibitors for aging to recently tainted PARP inhibitors for cancer, we have seen how nature (which in most of these cases means "biology") is smarter than us. Every time we break down a system into its constituent parts and pin down what we think is the operative entity, nature keeps reminding us that there's something else out there which is equally important which we have missed. In some sense nature is mocking us for bringing our biases to bear on what in her scheme of things is only one important component of a grand dance. In nature's eyes the rule is simple; if we want to participate, we better make sure we know who our partner is. And leave physics envy at the door.