Field of Science

Chemistry and Biology: Kuhnian or Galisonian?


Peter Galison, who has emphasized the dominance of experimental techniques in engineering scientific revolutions (Image: BNL).
First published on the Scientific American Blog Network.

Freeman Dyson has a perspective in this week's Science magazine in which he provides a summary of a theme he has explored in his book "The Sun, the Genome and the Internet". Dyson's central thesis is that scientific revolutions are driven as much or even more by tools than by ideas. This view runs somewhat contrary to the generally accepted belief in the dominance of Kuhnian revolutions - described famously by Thomas Kuhn in his seminal book "The Structure of Scientific Revolutions" - which are engineered by ideas and shifting paradigms. In contrast, citing Harvard University historian of science Peter Galison and his book "Image and Logic", Dyson emphasizes the importance of Galisonian revolutions, which are driven mainly by experimental tools.

As a chemist I find myself in almost complete agreement with the idea of tool-driven Galisonian revolutions. Chemistry as a discipline rose from the ashes of alchemy, a thoroughly experimental activity. Since then there have been four revolutions in chemistry that can be called Kuhnian. One was the attempt by Lavoisier, Priestley and others in the late 18th century to systematize elements, compounds and mixtures to free chemistry from the shackles of alchemical mystique. The second was the synthesis of urea by Friedrich Wohler in 1828; this was a paradigm shift in the true sense of the term since it placed substances from living organisms into the same realm as those from non-living matter. The third revolution was the conception of the periodic table by Mendeleev, although this was more of a classification akin to the classification of elementary particles by Murray Gell-Mann and others during the 1960s. A minor revolution accompanying Mendeleev's invention that was paramount for organic chemistry was the development of the structural theory by von Liebig, Kekule and others which led the way to structure determination of molecules. The fourth revolution was the application of quantum mechanics to chemistry and the elucidation of the chemical bond by Pauling, Slater, Mulliken and others. All these advances blazed new trails, but none were as instrumental or overarching as the corresponding revolutions in physics by Newton (mechanics), Carnot, Clausius and others (thermodynamics), Maxwell and Faraday (electromagnetism), Einstein (relativity) and Einstein, Planck and others (quantum mechanics).

Why does chemistry seem more Galisonian and physics seem more Kuhnian? One point that Dyson does not allude to but which I think is cogent concerns the complexity of the science. Physics can be very hard, but chemistry is more complex in that it deals with multilayered, emergent systems that do not yield themselves to reductionist, first-principles approaches. This kind of complexity is also apparent in the branches of physics typically subsumed under the title of "many-body interactions". Many-body interactions range from the behavior of particles in a superconductor to the behavior of stars condensing into galaxies under the influence of their mutual gravitational interaction. There are of course highly developed theoretical frameworks to describe both kinds of interactions, but they involve several approximations and simplifications, resulting in models rather than theories. My contention is that the explanation of more complex systems, being less amenable to theorizing, is driven by Galisonian rather than Kuhnian revolutions.

Chemistry is a good case in point. Linus Pauling's chemical theory arose from the quantum mechanical treatment of molecules, and more specifically the theory of the simplest molecule, the hydrogen molecular ion which consists of one electron interacting with two nuclei. The parent atom, hydrogen, is the starting point for the discipline of quantum chemistry. Open any quantum chemistry textbook and what follows from this simple system is a series of approximations that allow one to apply quantum mechanics to complex molecules. Today quantum chemistry and more generally theoretical chemistry are highly refined techniques that allow one to explain and often predict the behavior of molecules with hundreds of atoms.

And yet if you look at the insights gained into molecular structure and bonding over the past century, they have come from a handful of key experimental approaches. Foremost among these are x-ray diffraction, which Dyson also mentions, and Nuclear Magnetic Resonance (NMR) spectroscopy, also the basis of MRI. It is hard to overstate the impact that these techniques have had on the determination of the structure of literally millions of molecules spanning an astonishing diversity, from table salt to the ribosome. X-ray diffraction and NMR have provided us not only with the locations of the atoms in a molecule, but also with invaluable insights into the bonding and energetic features of the arrangements. Along with other key spectroscopic methods like infrared spectroscopy, neutron diffraction and fluorescence spectroscopy, x-rays and magnetic resonance have not just revolutionized the practice of chemical science but have also led to the most complete understanding we have yet of chemical bonding. Contrast this wealth of data with attempts to use purely theoretical techniques, which can in principle also be used to predict the structures, properties and functions of molecules. Progress in this area has been remarkable and promising, but it's still orders of magnitude harder to predict, say, the most stable configuration of a simple molecule in a crystal than to actually crystallize the chemical even by trial and error. From materials for solar cells to those for organ transplants, experimental structure determination in chemistry has fast outpaced theoretical prediction.

What about biology? The Galisonian approach in the form of x-ray diffraction and NMR has been spectacularly successful in the application of chemistry to biological systems that culminated in the advent of molecular biology in the twentieth century. Starting with Watson and Crick's solution of the structure of DNA, x-ray diffraction basically helped formulate the theory of nucleic acid and protein structure. Particularly noteworthy is the Sanger method of gene sequencing - an essentially chemical technique - which has had a profound and truly revolutionary impact on genetics and medicine that we are only beginning to appreciate. Yet we are still far from a theory of protein structure in the form of protein folding; that Kuhnian revolution is yet to come. The dominance of Galisonian approaches to biochemistry raises the question of the validity of Kuhnian thinking in the biological sciences. This is an especially relevant question because the last Kuhnian revolution in biology - a synthesis of known facts leading to a general explanatory theory that could encapsulate all of biology - was engineered by Charles Darwin more than 150 years ago. Since then nothing comparable has happened in biological science; as indicated earlier, the theoretical understanding of the genetic code and the central dogma came from experiment rather than from the very general synthesis in terms of replicators, variation and fitness that Darwin put together for living organisms. Interestingly, in his later years (and only a year before the discovery of the structure of DNA) the great mathematician John von Neumann put forward a Darwin-like, general theoretical framework that explained how replication and metabolism could be coupled to each other, but this was largely neglected and certainly did not come to the attention of practicing chemists and biologists.

Neither Dyson's essay nor the history of science necessarily asserts that the view of science in terms of Kuhnian revolutions is misguided and that the view in terms of Galisonian revolutions is justified. It's rather that complex systems are often more prone to Galisonian advances because the theoretical explanations are simply too complicated. Another viewpoint driven home by Dyson is that Kuhnian and Galisonian approaches alternate and build on each other. It is very likely that after a few Galisonian spells a field becomes ripe for a Kuhnian consolidation.
Biology is going to be especially interesting in this regard. 

The most exciting areas in current biology are considered to be neuroscience, systems biology and genomics. These fields have been built up from an enormous number of experimentally determined facts but they are in search of general theories. However, it is very likely that a general theoretical understanding of the cell or the brain will come from approaches very different from the reductionist ones that were so astonishingly successful in the last two hundred years. A Kuhnian revolution to understand biology could likely borrow from its most illustrious practitioner - Charles Darwin. One of the signature features of Darwin's theory is that it seeks to provide a unified understanding that transcends multiple levels of biological organization, from individual to society. Our twenty-first-century view of biology adds two pieces, genes and culture, to opposite ends of the ladder. It is time to integrate these pieces - obtained by hard, creative Galisonian science - into the Kuhnian edifice of biology.

Nobel Week Dialogue

There hasn't been much blogging over the last two weeks, mainly because I have been blogging for a special event preceding the official Nobel Prize ceremony in Stockholm. 

"Nobel Week Dialogue" was a day-long symposium organized by the Nobel committee and other sponsors on December 9. The topic was "The Genomic Revolution and its Impact on Society" and it featured many science and policy leaders including Eric Lander, Steven Chu, James Watson and Craig Mello. 

I was invited to write for this event and I have four posts, mainly historical and philosophical, contemplating aspects of the genomic revolution.

Why the same can be different: The case of the two enantiomers

The R enantiomer (green) allows Tyr337 to adopt
two different orientations. The S (yellow) does not.
Since we were discussing thermodynamics in biological systems the other day, here's a neat example from Angewandte Chemie of a system where thermodynamics reveals something surprising. The authors from Umeå University in Sweden were looking at two enantiomers of a ligand binding acetylcholinesterase. It's a robust, well-studied system and you don't really expect anything unexpected.

Except that it does do something unexpected. The first surprise was that both enantiomers bound with the same affinity. This observation violates a central tenet of biochemistry, namely that since ligands and receptors are both chiral, enantiomeric ligands will bind differently. The second surprise was that when they dissected the similar free energy of binding into entropic and enthalpic components, they found that the S enantiomer had a much more unfavorable entropy (1.5 e.u.) than the R (8.5 e.u.). Since the free energies were the same, this meant that there was enthalpy-entropy compensation, which meant in turn that the S enantiomer must have the more favorable enthalpy.
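The arithmetic behind this compensation is worth making concrete. Here's a quick back-of-the-envelope sketch (with made-up numbers, not the paper's actual values) of how two enantiomers with different enthalpic and entropic contributions can end up with essentially identical free energies of binding:

```python
# Illustration of enthalpy-entropy compensation in ligand binding.
# The numbers below are hypothetical, chosen only to show how two
# enantiomers with different enthalpic and entropic terms can end
# up with the same free energy of binding.

T = 298.15  # temperature in kelvin

def delta_g(delta_h_kcal, delta_s_eu):
    """Free energy (kcal/mol) from dG = dH - T*dS.
    delta_s_eu is in entropy units (cal/mol/K), hence the /1000."""
    return delta_h_kcal - T * delta_s_eu / 1000.0

# "R" enantiomer: weaker enthalpy, less unfavorable entropy
dg_R = delta_g(-10.0, -5.0)
# "S" enantiomer: stronger enthalpy, more unfavorable entropy
dg_S = delta_g(-12.0, -11.7)

print(f"dG(R) = {dg_R:.2f} kcal/mol, dG(S) = {dg_S:.2f} kcal/mol")
```

Change the numbers and the coincidence disappears; the free energies only match when the extra enthalpic gain exactly offsets the extra entropic penalty.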

To investigate the origins of these differences, the two enantiomers were crystallized with the protein. Observation of the binding site indicated something interesting; the R enantiomer bound in a way that allowed a critical tyrosine residue (Tyr337) to adopt two different orientations. However, the S enantiomer shoved an ethyl group next to the tyrosine, essentially precluding this movement. Greater conformational flexibility for the tyrosine translated to greater disorder, hence the more favorable entropy for the R. What about enthalpy? Here it turns out that the S enantiomer, while sacrificing entropic freedom for the tyrosine, compensates by making stronger interactions with it. This was analyzed by quantum chemical calculations on a "reduced" version of the protein. Interestingly, the interactions are not "normal", respectable hydrogen bonds but "unnatural" C-H---O hydrogen bonds. For the S enantiomer, even these relatively weak interactions were enough to confer an enthalpic advantage that offset the entropic disadvantage.

This is why chemistry in general and biochemistry in particular are endlessly interesting; conventional wisdom is always being challenged even in well-studied systems, weak can be important, every example is unique and best of all, surprises lurk around almost every corner. As Arthur Kornberg put it, "I never met a dull enzyme".

Angewandte Chemie retracts hexacyclinol paper. Sort of


So it seems that the infamous hexacyclinol saga has been finally put to rest and Angewandte Chemie has retracted the paper. For those chemists who might still be unfamiliar with it, it's not hard to explain: Total synthesis paper published in 2006 with more holes than the vacuum of deep space. Multiple blog postings and papers demolish the claim within months. Journal does not retract the paper for six years.

Well, now the journal has published the retraction. Here's what it has to say:


The following article from Angewandte Chemie International Edition, “Total Syntheses of Hexacyclinol, 5-epi-Hexacyclinol, and Desoxohexacyclinol Unveil an Antimalarial Prodrug Motif” by James J. La Clair, published online on February 9, 2006 in Wiley Online Library (http://onlinelibrary.wiley.com), has been retracted by agreement between the author, the journal Editor in Chief, Peter Gölitz, and Wiley-VCH Verlag GmbH & Co. KGaA. The retraction has been agreed due to lack of sufficient Supporting Information. In particular, the lack of experimental procedures and characterization data for the synthetic intermediates as well as copies of salient NMR spectra prevents validation of the synthetic claims. The author acknowledges this shortcoming and its potential impact on the community.


What I find disappointing about this retraction is that it's just not strong enough in denouncing the paper. It's not just that the procedures were irreproducible or that the supporting information was incomplete, it's that the whole synthesis was essentially...make believe. This was made clear by papers published later (re-synthesizing the natural product and calculating and comparing NMR spectra) which demonstrated beyond any shade of reasonable doubt that whatever was supposedly synthesized in the paper simply couldn't correspond to the structure of hexacyclinol as we know it. 

I think this is an important difference that the retraction does not acknowledge; it's the difference between saying "we think this could be wrong but we can't be sure since we can't reproduce the data" and "we are almost certain this is wrong since independent studies have convincingly demonstrated its utter implausibility".

Update: Carmen Drahl from C&EN has a superb Storify summary of the hexacyclinol saga over the last six years which features some of the blog posts commenting on the debacle. Carmen was also kind enough to post a picture of my cherished hexacyclinol t-shirt which I am still eager to break out; as I said in my email to her, I am still waiting to wear it at a big party where fellow t-shirters get together, laugh with sadistic glee, and mock the scattered bones of hexacyclinol's atomic constituents.

Occam, me and a conformational medley

Originally posted on the Scientific American Blog Network.


William of Occam, whose principle of parsimony has been used and misused (Image: WikiCommons)
The philosopher and writer Jim Holt, who has written the sparkling new book “Why Does The World Exist?”, recently wrote an op-ed column in the New York Times, gently reprimanding physicists and urging them to stop being ‘churlish’ and to appreciate the centuries-old interplay between physics and philosophy. Holt’s point was that science and philosophy have always co-existed, even if their relationship has been more of an uneasy truce rather than an enthusiastic embrace. Some of the greatest physicists including Bohr and Einstein were also great philosophers.

Fortunately – or unfortunately – chemistry has had little to say about philosophy compared to physics. Chemistry is essentially an experimental science and for the longest time, theoretical chemistry had much less to contribute to chemistry than theoretical physics had to physics. This is now changing; people like Michael Weisberg, Eric Scerri and Roald Hoffmann proclaim themselves to be bona fide philosophers of chemistry and bring valuable ideas to the discussion.

But the interplay between chemistry and philosophy is a topic for another post. In this post I want to explore one of the very few philosophical principles that chemists have embraced so wholeheartedly that they speak of it with the same familiar nonchalance with which they would toss around facts about acids and bases. This principle is Occam’s Razor, a sort of guiding vehicle that allows chemists to pick between competing explanations for a phenomenon or observation. Occam’s Razor owes its provenance to William of Occam, a 14th century Franciscan friar who dabbled in many branches of science and philosophy. Fully stated, the proposition tells us that “entities should not be multiplied unnecessarily”, or that the fewer the assumptions and hypotheses underlying a particular analysis, the more that analysis is to be preferred over others of equal explanatory power. More simply put, simple explanations are to be preferred over complex ones.

Sadly, the multiple derivative restatements of Occam’s Razor combined with our tendency to look for simple explanations can sometimes lead to erroneous results. Part of the blame lies not with Occam’s razor but with its interpreters; the main problem is that it’s not clear what “simple” and “complex” mean when applied to a natural law or phenomenon. In addition, nature does not really care about what we perceive as simple or complex, and what may seem complex to us may appear perfectly simple to nature because it’s…real. This was driven home to me early on in my career.

Most of my research in graduate school was concerned with finding out the many conformations that complex organic molecules adopt in solution. Throw an organic molecule like ibuprofen in water and you don’t get a static picture of the molecule standing still; instead, there is free rotation about single bonds joining various atoms leading to multiple, rapidly interconverting shapes, or conformations, that are buffeted around by water like ships on the high seas. The exact percentage of each conformation in this dance is dictated by its energy; low-energy conformations are more prevalent than high-energy ones.
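This energy-to-population relationship is just the Boltzmann distribution, and it's easy to play with. A minimal sketch with hypothetical conformer energies (the labels and values are made up for illustration):

```python
import math

# Boltzmann populations of hypothetical conformers. Energies are
# illustrative relative energies (kcal/mol) above the global minimum.
RT = 0.593  # kcal/mol at ~298 K

energies = {"A": 0.0, "B": 0.5, "C": 1.8}

# Boltzmann weight of each conformer, then normalize by the sum
weights = {c: math.exp(-e / RT) for c, e in energies.items()}
Z = sum(weights.values())
populations = {c: w / Z for c, w in weights.items()}

for c, p in populations.items():
    print(f"conformer {c}: {100 * p:.1f}%")
```

A conformer only 0.5 kcal/mol up the ladder is still heavily populated, while one 1.8 kcal/mol up has already dwindled to a few percent - which is exactly why the low-energy shapes dominate the dance.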
Different shapes of conformations of cyclohexane - a ring of six carbon atoms - ranked by energy (Image: Mcat review)

Since the existence of multiple conformations enabled by rotation around single bonds is a logical consequence of the basic principles of molecular structure, it would seem that this picture would be uncontroversial. Surprisingly though, it’s not always appreciated. The reason has to do with the fact that measurements of conformations by experimental techniques like nuclear magnetic resonance (NMR) spectroscopy always result in averages. This is because the time-scales for most of these techniques are longer than the time-scales needed for interconversion between conformations and therefore they cannot make out individual differences. The best analogy is that of a ceiling fan; when the fan is rotating fast, all we see is a contiguous disk because of the low time resolution of our eye. But we know that in reality, there are separate individual blades (see figure at end of post). NMR is like the eye that sees the disk and mistakes it for the fan.

Such is the problem with using experimental techniques to determine individual conformations of molecules. Their long time scales lead to average data to which a single, average structure is assigned. Clearly this is a flawed interpretation, but partly because of entrenched beliefs and partly because of a lack of methods to tease apart individual conformations, scientists through the years have routinely published single structures as representing a more complex distribution of conformers. Such structures are sometimes called “virtual structures”, a moniker that reflects their illusory – essentially non-existent – nature. A lot of my work in graduate school was to use a method called NAMFIS (NMR Analysis of Molecular Flexibility In Solution) that combined average NMR data with theoretically calculated conformations to tease apart the data into individual conformations. There are other such methods. Here's an article on NAMFIS that I wrote for college students.
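To make the underlying idea concrete (this is a toy sketch, not the actual NAMFIS program): given observables predicted for each candidate conformer and the experimentally measured averages, find the population weights that best reproduce those averages. With two hypothetical conformers, even a brute-force search works:

```python
# Toy sketch of the deconvolution idea behind NAMFIS (not the real
# implementation). Predicted coupling constants (Hz) for two
# hypothetical conformers:
conf1 = [2.0, 9.5, 4.0]
conf2 = [10.0, 3.0, 7.5]

# "Experimental" averages, constructed here as a known 70:30 mixture
observed = [0.7 * a + 0.3 * b for a, b in zip(conf1, conf2)]

def error(p):
    """Sum of squared deviations for population p of conformer 1."""
    return sum((p * a + (1 - p) * b - o) ** 2
               for a, b, o in zip(conf1, conf2, observed))

# Scan candidate populations in 1% steps and keep the best fit
best_p = min((i / 100 for i in range(101)), key=error)
print(f"best-fit population of conformer 1: {best_p:.2f}")
```

The real method juggles many conformers, many observables and experimental noise with proper fitting routines, but the principle - deconvoluting an average into a population-weighted mixture of individual conformations - is the same.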

When the time came to give a talk on this research, a very distinguished scientist in the audience told me that he found it hard to digest this complicated picture of multiple conformations vying for a spot on the energy ladder. Wouldn’t the assumption of a single, clean, average structure be more pleasing? Wouldn’t Occam’s Razor favor this interpretation of the data? That was when I realized the limitations of Occam’s principle. The “complicated” picture of the multiple conformations was the real one in this case, and the simple picture of a single average conformation was unreal. In this case, it was the complicated and not the simple explanation that turned out to be the right one. This interpretation was validated when I also managed to find, among the panoply of conformations, one which bound to a crucial protein in the body and turned the molecule into a promising anticancer drug. The experience again drove home the point that nature often doesn’t care about what we scientists find simple or complex.

Recently Occam made another appearance, again in the context of molecular conformations. This time I was studying the diffusion of organic molecules through cell membranes, a process that’s of great significance in drug discovery since even your best test-tube drug is useless if it cannot get into a cell. A chemist from San Francisco has come up with a method to calculate different conformations of molecules. By looking at the lowest-energy conformation, he then predicts whether that conformation will be stable inside the lipid-rich cell membrane. Based on this he predicts whether the molecule will make it across. Now for me this posed a conundrum and I found myself in the shoes of my old inquisitor; we know that molecules have several conformations, so how can only the single, lowest-energy conformation matter in predicting membrane permeability?

I still don’t know the answer, but a couple of months ago another researcher did a more realistic calculation in which she did take all these other conformations into consideration. Her conclusion? More often than not the accuracy of the prediction becomes worse because by including more conformations, we are also including more noise. Someday perhaps we can take all those conformations into account without the accompanying noise. Would we then be both more predictive and more realistic? I don’t know.

These episodes from my own research underscore the rather complex and subtle nature of Occam’s Razor and its incarnation in scientific models. In the first case, the assumption of multiple conformations is both realistic and predictive. In the second, the assumption of multiple conformations is realistic but not predictive because the multiple-conformation model is not good enough for calculation. In the first case, a simple application of Occam’s razor is flawed while in the second, the flawed simple assumption actually leads to better predictions. Thus, sometimes simple assumptions can work not because the more complex ones are wrong, but because we simply lack the capacity to implement the more complex ones.

I am glad that my work with molecular conformations invariably led me to explore the quirky manifestations of Occam’s razor. And I am thankful to a well-known biochemist who put it best: “Nature doesn’t always shave with Occam’s Razor”. In science as in life, simple can be quite complicated, and complicated can turn out to be refreshingly simple.

A rotating ceiling fan - Occam's razor might lead us to think that the fan is a contiguous disk, but we know better.

Does modern day college and graduate education in chemistry sacrifice rigor for flexibility?

This is what I love about blogging; it's the classic "one thing leads to another" device. The previous discussion on the paucity of thermodynamics in college coursework led to a more general and important exchange in the comments section that basically asked: Are we sacrificing rigor for flexibility by giving students too much freedom to pick and choose their courses?

The following sentiments (or variations thereof) were expressed:

- There should be a core curriculum for chemistry students that exposes them to mandatory courses in general, organic and physical chemistry at the very least. These requirements seem to be more widespread among physics departments. To my knowledge, Caltech is one of the few schools with a general core curriculum for all science majors. How many other schools have this?

- Part of the lack of exposure to important topics in grad school results from emphasizing research at the cost of coursework. And this is related to a more widespread sentiment of woe: it's become all too easy to get a PhD, partly because of the curse of academia that encourages one to become a glorified technician at the cost of instilling creative scientific thought. The belief is that many professors (and there are many exceptions) would rather produce well-trained manual laborers who contribute to the Grant and Paper Production Factory than independent scientific thinkers who can assimilate ideas from diverse scientific fields. You shouldn't really get a Ph.D. just for putting in 80-hour weeks.

We need to hold students to higher standards, but I think this is not going to happen until the publish-or-perish culture is fundamentally transformed and the movers and shakers of academic research take a hard look at what they are doing to their graduate students.

- Many textbooks are mired in the age-old, classical presentation of thermodynamics that emphasizes Carnot cycles and Maxwell relations much more than any semi-quantitative feel for the operation of thermodynamics in practical chemical and biological systems. We are just not doing a good job communicating the real-world importance of topics like thermodynamics; add to this students who are not going to study something if it's not required and we are in a real bind.

- Physical organic chemistry - the one discipline that can naturally build bridges between physical and organic chemistry - is disappearing from the curriculum. Those who intellectually matured in its heyday were naturally exposed to thermodynamics and kinetics. Graduate students in organic chemistry shouldn't be able to get away with just synthesis and spectroscopy courses.

- Matt, who unlike most of us armchair philosophers is a live professor at an actual research institution, makes the point that we should do an outstanding job of emphasizing thermodynamics in the freshman general chemistry class. We should do such a good job that students should always be able to connect those concepts to anything else that they study later. As Matt recommends, we could include the more qualitative, important real-life applications of thermodynamics (and not just to antiquated heat engines) like those in drug discovery in this gen chem class.

All great points in my opinion. I have strong feelings about all this myself, but I have not done any detailed study of college curricula so my opinions are mostly anecdotal. Feel free to chime in with actual data or more opinions in the comments section.

Who's afraid of Big Bad Thermodynamics?

George Whitesides, in a trademark outfit.
In a talk at Northeastern University yesterday, George Whitesides asked the students in the audience if they had ever studied thermodynamics. Not a single hand went up.

Even accounting for the fact that some students might have been reluctant to flag themselves in a large audience, I find this staggering, especially if these students are planning to go into basic drug discovery research. But I can’t completely blame them. I happened to take one (mandatory) thermodynamics and one (non-mandatory) basic statistical mechanics class in college and was exposed to thermodynamics in graduate school through my work on conformational equilibria and NMR. But most of my fellow graduate students in organic and biochemistry had little inkling of thermodynamics; it certainly wasn't a part of their standard intellectual toolkit.

The problem’s made worse by misunderstandings about thermodynamics that seem to linger in students’ heads even later in their career. These misunderstandings stem from a larger rift between organic and physical chemistry; the latter is supposed to be highly mathematical and abstract and rather irrelevant to the former. This is in spite of the overwhelming importance of concepts from p-chem in classical physical organic chemistry. Sadly, classical physical organic chemistry itself is disappearing from the college and grad school curriculum, and an argument in favor of emphasizing thermodynamics is also a plug for not letting physical organic chemistry become a relic of the past. No other topic gives you as good a basic feel for structure, function and reactivity in organic chemistry.

But coming back to thermodynamics, this impression that many students have about thermodynamics being all about Maxwell relations and Carnot cycles and virial theorems is rather misleading. It’s not that these things are not important, it’s just that that’s not the kind of thermodynamics Whitesides had in mind when talking about drug discovery. Thermodynamics in drug discovery is often much less complicated than bona fide textbook thermodynamics. And it’s of foundational importance in the field since at its core, drug discovery is about understanding molecular recognition, which is completely a thermodynamic phenomenon governed by free energy.

For a drug designer, the key thermodynamic circus to understand is the interplay between G, H, and S as manifested in the classic equation ∆G = ∆H – T∆S. It’s key to get a feel for how opposing values of H and S can lead to the same value of G, since this is at the heart of protein-drug recognition. It’s also important to know the different features of water, protein and solvent that contribute to changes in these parameters. Probably the most important thermodynamic effect that drug designers need to be aware of is the hydrophobic effect. They need to know that the hydrophobic effect is largely an entropic effect arising from the release of bound waters, although as Whitesides has himself demonstrated, reality can be more complicated. But the fact is that we simply cannot understand water without thermodynamics, and we cannot understand drug action without understanding water.

Also paramount is to understand the relationship between thermodynamics and kinetics, something that again benefits from studying reactions under thermodynamic and kinetic control and things like the Curtin-Hammett principle in classical physical organic chemistry. It’s crucial to know the difference between thermodynamic and kinetic stability, especially when one is dealing with small molecule and protein conformations. Finally, it’s enormously valuable to have a feel for a few key numbers, foremost among which may be the relationship between equilibrium constant and free energy; knowing this tells you for instance that it takes only a difference of 1.8 kcal/mol of free energy between two conformers to almost completely shift the conformational equilibrium on to the side of the more stable one. And when that difference is 3 kcal/mol, the higher-energy conformation is all gone, well beyond the detection limits of techniques like NMR. Speaking of which, a good understanding of thermodynamics also tells you why it’s incorrect to rely on average NMR data to tease apart the details of multiple conformations in solution.
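Those numbers are easy to check from ∆G = -RT ln K. A quick sketch, assuming a simple two-state conformational equilibrium at room temperature:

```python
import math

# Fraction of the more stable conformer in a two-state equilibrium
# separated by a free-energy difference dg_kcal (kcal/mol),
# from dG = -RT ln K at ~298 K.
RT = 0.593  # kcal/mol

def major_fraction(dg_kcal):
    K = math.exp(dg_kcal / RT)  # equilibrium constant favoring the stable form
    return K / (1 + K)

print(f"1.8 kcal/mol -> {100 * major_fraction(1.8):.1f}% major conformer")
print(f"3.0 kcal/mol -> {100 * major_fraction(3.0):.1f}% major conformer")
```

The 1.8 kcal/mol case comes out to roughly a 95:5 mixture, and at 3 kcal/mol the minor conformer sinks below one percent - the kind of back-of-the-envelope feel for numbers this post is arguing for.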

All this knowledge about thermodynamics is ingrained more easily than complicated mathematical derivations of configuration integrals in free-energy perturbation theory. Students need to realize that the thermodynamics that they need to tackle drug discovery is a semi-quantitative blend of ideas relying much more on a rough feel for numbers and competing entropic and enthalpic effects. This kind of feel can lead to some very useful insights, for instance regarding relationships between G, H and S in the evolution of new drugs.

It’s time to incorporate this more general thermodynamics outlook in drug discovery classes and even in regular chemistry classes. It’s simple enough to be taught to undergraduates and bypasses the more sophisticated and intimidating ideas of statistical mechanics.

In yesterday’s conference, the chairman had the last laugh. Half-jokingly he emphasized that Northeastern’s chemistry course is ACS certified, which means that one semester of p-chem and thermodynamics are mandatory. Apparently Harvard’s is not. To which Whitesides replied that he can guarantee that you will find students at Harvard who are also not familiar with thermodynamics.

Whitesides's appeal to give pharmaceutical scientists-in-training a firm grounding in thermodynamics applies across the board. Above the entrance to Plato’s Academy, a sign was rumored to read “Let no one ignorant of geometry enter”. Perhaps one can make a similar case for thermodynamics in the pharmaceutical industry?

Note: 

I have often written about thermodynamics in drug design on my blog. A few potentially useful posts:


Some useful references:

George Whitesides - Designing ligands to bind tightly to proteins (book chapter PDF): Includes much of the material from Whitesides's talk.
Jonathan Chaires - Calorimetry and Thermodynamics in Drug Design.

Live blogging the Northeastern Drug Discovery conference

So I figured that since I am attending the drug discovery conference at Northeastern University I might as well jot down a few thoughts about the talks.

Leroy Hood: Institute for Systems Biology (Seattle).


Lots of optimism, which is usually the case with systems biology talks; being able to distinguish "wellness" genes from "disease" genes for everybody in about ten years, being able to map all disease-related biomarkers from blood analysis etc. But there were some interesting tidbits:


- Noise - especially biological noise - cannot be handled by traditional machine learning approaches. Signal to noise ratio is very low especially when picking biomarkers.

- SysBio can help pharma pick targets (something pharma is getting increasingly worse at).
- Cost can be minimized in optimal cases; e.g. the FDA approved Herceptin, specific for 20% of patients, based on a sample of only 40 patients (Genentech).
- Descriptive and graphical models can be enormously useful; in fact complexity often precludes mathematical modeling.
- Example of prions injected into mouse: expression of 33% genes changed. Biological noise can be “subtracted” by judiciously picking strains that get rid of one effect and preserve others.

My own take on systems biology has always been that, while it is likely to become quite significant at some point:


a. It's going to take longer than we think.

b. Separating signal from noise and homing in on the handful of approaches which will be robust and meaningful is going to give us a lot of grief. This will likely be Darwinian selection at its best.


Patricia Hurter (Vertex): Formulation

For people like me in discovery, formulation is a whole new world. Compaction, rolling, powder flow, force-response curves; engineers would feel right at home with these concepts, and in fact they do. And of course, you don’t talk about anything less than 25 kilograms.


Eric Olson (Vertex): Cystic Fibrosis

- Most common mutation is F508del (targets 88% of patients)

- Two classes of potential drugs: potentiators (for restoring channel function) and correctors (for trafficking the protein from the ER to the membrane surface).
- However, only potentiators needed for G551D mutation (targets 4% of patients). Ivacaftor increases probability of channel being open; more beating cilia (nice video).
- Development challenges: little CF expertise, limited patient pop, no defined preclinical and regulatory path, outcomes for proof-of-concept and phase 3 not well established for mechanism-of-action.

I thought that the development of Vertex's CF drugs is a model example of charting out drug development in a novel, unexplored area.



Arun Ghosh (Purdue): Darunavir

From a medicinal chemistry standpoint this was probably the most impressive. Ghosh is one of the very few academic scientists with a marketed drug (darunavir) to his name. He described the evolution of darunavir from the key idea of targeting the backbone of HIV protease; the belief was that while side chains differ between HIV mutants, the backbone stays constant, and therefore a compound binding to the backbone would be effective against resistant strains.

This idea turned out to be remarkably productive, and Ghosh described a series of studies that just kept on improving potencies against virtually any mutant HIV strain that the biologists threw at the compound. It was a medicinal chemist’s dream; there was a wealth of crystal structure data, compounds routinely turned out to have picomolar potencies, and almost every single modification that the chemists designed worked exactly as expected. Some of this success was of course good luck, but that’s something that’s usually a given in drug discovery. Darunavir and its analogs got fast-track FDA approval against HIV strains that had failed to respond to every other medication. Ghosh’s study was a powerful reminder that the right kind of design principle can lead to exceptional success, even against a target that's been beaten to death.

George Whitesides (Harvard): Challenges

Interesting talk by Whitesides. A pretty laid back speaker. The first half was a general rumination on the state of pharma and drug discovery ("the current model of capitalism is not working"; "the FDA has become unreasonable"; "if the best we can do in cancer is to invent a drug that gives someone 3 extra months with a lot of side effects, then we are doing something wrong").

The second half concerned his work on the hydrophobic effect. The papers deal with ligand binding to carbonic anhydrase. Basically he found out that the so-called entropic signature of the hydrophobic effect (an increase in entropy from the release of bound water molecules) is more complicated than the textbook picture suggests.

A few notes:


- Designing drugs is hard because we are robust, highly multiplexed complex systems.
- Cost of healthcare in the US is ~17% of GDP; also, there is no correlation between health cost and quality, as evidenced by the low standing of the US in quality metrics.
- Quoted Anna Karenina’s happy and unhappy families in connection with drug development: every successful drug is alike in its success; every unsuccessful drug is unsuccessful in its own way.
- The pharmaceutical crisis has nothing to do per se with science, and everything to do with costs.

Finally, he made an important point: biochemists have always done experiments in dilute phosphate buffer. Interior of cell is anything but.

Favorite quote, regarding the limitations of animal models: "Whatever else you may think of me, I am not a large, hairless mouse.”

Book review: Benoit Mandelbrot's "The Fractalist"

"My life", says Benoit Mandelbrot in the introduction to his memoir, "reminds me of that fairy tale in which the hero finds a hitherto unseen thread, and as he unravels the thread it leads him to unimaginable and unknown wonders". Mandelbrot not only found those wonders, but bequeathed to us the thread which will continue to lead us to more wondrous discoveries.

Mandelbrot was one of those chosen few scientists in history who are generalists, people whose ideas impact a vast landscape of fields. A maverick in the best sense of the term, he even went one step further and created his own field of fractal geometry. In a nutshell, he developed a "theory of roughness", and the fractals which represent this roughness are now household names, even making it into "Jurassic Park". Today fractals are known to manifest themselves in a staggering range of phenomena; the rhythms of the heart, the distribution of galaxies, market fluctuations, the rise and fall of species populations, the shapes of blood vessels, earthquakes, and the weather. Before Mandelbrot scientists liked to deal with smooth averages and equilibria, assuming that the outliers, the anomalies, the sudden jumps from normalcy were rare and could be ignored. Mandelbrot proved that they can't and found methods to tame them and bring them into the mainstream. His insights into this new view of nature effected minor and major revolutions in fields as diverse as economics, astronomy, physiology and fluid dynamics. More than almost any other thinker he was responsible for teaching natural and social scientists to model the world as it is rather than the abstraction which they want it to be.

In this memoir Mandelbrot describes his immensely eventful and somewhat haphazard journey to these revelations. The volume is quirky, charming, wide-ranging, often lingering on self-similar themes, much like his fractals. It is divided into three parts. The first deals with family history and childhood influences. The second deals with a peripatetic, broad scientific education. The third details Mandelbrot's great moments of discovery, the ones he calls "Keplerian moments" in homage to the great astronomer who realized the power of abstract mathematical notions to illuminate reality.

Mandelbrot grew up in a Lithuanian family first in Warsaw and then in France. He came from an educated and intellectually alert household. His most formative influences were his garment-maker father and dentist mother and especially his mathematician uncle Szolem. The parents had acquired great reserves of tenacity, having been uprooted from one place to another at least six times because of the depression. Szolem had toured the great centers of European mathematics and knew quite a few famous mathematicians himself. Mandelbrot grew up steeped in the mathematical beauty and folklore which Szolem vividly imparted to him. A dominant theme in the household was self-improvement, constantly challenging oneself to do better. This theme served Benoit well.

Mandelbrot's early years were marked by the rise of Nazism. After the fall of France his family fled Paris, taking refuge in the south of France before the country was liberated. There were dangerous moments, like his father narrowly escaping a strafing and Benoit and his cousin being interrogated by the Vichy police. After the war Mandelbrot studied at the prestigious École Polytechnique. At this point his central character started to reveal itself; an intellectual restlessness that inspired forays into diverse fields, a thirst for knowledge that would take him to many corners of the globe, a tendency to question orthodox wisdom and most importantly, an unwillingness to be a specialist. All these traits would turn out to be paramount in his future discoveries. Throughout his life Mandelbrot was known as a sometimes cantankerous and difficult person, but while there is a trace of these qualities in his memoir, most of the volume is generous in acknowledging the influence of family, friends, colleagues and institutions.

His intellectual restlessness led him across the Atlantic to major centers of scientific research including Caltech, MIT and the Institute for Advanced Study in Princeton, where he was the last postdoc of the great mathematician John von Neumann. Part of the joy of the book comes from Mandelbrot's accounts of encounters with a veritable who's who of late twentieth century science including von Neumann, Oppenheimer, Wiener, Feynman, Chomsky and Stephen Jay Gould. A particularly memorable incident has him flabbergasted by a penetrating comment from an audience member and Oppenheimer and von Neumann coming to his defense to explain his ideas even better than he could. At all these institutions Mandelbrot worked on a remarkable variety of problems, from aircraft design to linguistics, and acquired a rare, extremely broad education that would serve him in good stead.

As he explains, the trajectory of Mandelbrot's life was irrevocably changed when his uncle Szolem introduced him to a law named Zipf's Law that deals with the frequencies of words in various languages. Mandelbrot discovered that Zipf's law led to some counterintuitive and universal results that could only be explained by non-standard distributions; this was when he discovered the high prevalence of what many had previously considered to be "rare" events. His work in this area as well as some preliminary work in economics led him to a highly productive position at IBM. Mandelbrot describes IBM's remarkable scientific culture that allowed scientists like him to pursue unfettered basic scientific research; sadly that culture has now all but vanished in many organizations. During this time he stayed in touch with academia, giving seminars at many leading universities. Ironically, it was Mandelbrot's lack of specialization that made universities reluctant to hire him; implicitly, his experience is also a critique of an academic system that discourages broad thinkers and generalists. The difficulty of pinning down an unconventional thinker like Mandelbrot is reflected in the fact that Chicago found his interests too spread out while Harvard thought them too narrow!

But IBM was more than happy to support his multiple intellectual forays, and in addition to his own explorations the book also includes accounts of IBM's pioneering work in software and graphics design. It was while at IBM that Mandelbrot discovered what he is most famous for - fractals. As the book recounts, the work arose partly from analyzing price and market fluctuations. Mandelbrot was struck by the uncanny similarity of disparate price and income curves and realized that the equilibrium model that economists had been relying on for decades was of little use in analyzing real-world jumps, which tended to be much more frequent than normal distributions would indicate. In a set of stunning and sweeping intellectual insights engendered by his broad scientific background, Mandelbrot realized that the math underlying an astonishing range of phenomena, from economic fluctuations to geographic coastlines, is the same. His work in this area was seminal by any standard, but it was not adopted by economists partly because they found it difficult to use and partly because the field was entrenched in established ideas from equilibrium models. It was only in the 1980s that his insights became accepted into the mainstream, and the global recession in 2008 and the shocks to the economy have soundly validated his fractal fluctuation models. Outliers are not so rare after all, and as Nassim Taleb has documented, their impact can be tremendous and unpredictable. The parts of the book charting the road leading to fractals are fascinating and clearly detail the advantage of having a broad scientific education.

In spite of the lukewarm reception by economists Mandelbrot persevered along his general line of thinking, and in the late 1970s he discovered the iconic Mandelbrot set which made him a household name. Starting from an almost laughably simple formula, one quickly generates what has been called the most complex object in mathematics. The stunning geometry of the set today dots everything from murals to coffee mugs and there are hundreds of websites on which you can generate the set and examine it. Zooming in on the picture reveals a thick and endlessly complex jungle of self-similar geometric shapes and convolutions; one can gaze at this mesmerizing creature for hours.

Mandelbrot retired from IBM in the 80s and his career culminated in his appointment as Sterling professor at Yale University. His eventful journey, from Warsaw to New Haven, holds many key lessons for us. He taught us to celebrate diversity and broad interests in an era of specialization. He shifted the focus of scientists from the idealized experiments of their laboratories to the messy world of reality. And he made it clear that many of the most penetrating insights into nature, like fractals, emerge from asking simple questions and exploring the obvious: What's the length of Britain's coastline? What's the shape of clouds? How does the heart beat?

It is hard to think of a twentieth century thinker whose ideas have influenced so many disciplines, and the fruits of Mandelbrot's labors promise continuing revelations long after his death in 2010. His memoir makes a resounding case for the virtues of indulging in, in Feynman's words, "perfectly reasonable deviations from the beaten track".



How to run a world-class lab

One of this year's Nobel laureates in physics, Serge Haroche, has a few words of wisdom for fostering a good research environment.

Our experiments could only have succeeded with the reliable financial support provided by the institutions that govern our laboratory, supplemented by international agencies inside and outside Europe. European mobility programs also opened our laboratory to foreign visitors, bringing expertise and scientific culture to complement our own. During this long adventure in the micro-world, my colleagues and I have retained the freedom to choose our path without having to justify it with the promise of possible applications. 



Unfortunately, the environment from which I benefited is less likely to be found by young scientists embarking on research now, whether in France or elsewhere in Europe. Scarcity of resources due to the economic crisis, combined with the requirement to find scientific solutions to practical problems of health, energy and the environment, tend to favour short-term, goal-oriented projects over long-term basic research. Scientists have to describe in advance all their research steps, to detail milestones and to account for all changes in direction. This approach, if extended too far, is not only detrimental to curiosity-driven research. It is also counterproductive for applied research, as most practical devices come from breakthroughs in basic research and would never have been developed out of the blue.

Haroche’s quip about short-termism being bad even for applied research is especially worth noting, since applied research is supposedly what short-termism seeks to encourage. The point is that the path of science is almost always unexpected and complex, and most applied research is the illegitimate albeit charming and often spectacularly successful offspring of blue-sky basic research. Neglect of this foundation is one major flaw I see with the whole concept of “translational medicine” which seems to lack an accurate appreciation of the haphazard way in which basic scientific principles have actually translated to practical medical therapies. Unless we know the underlying biology of disease, which even now is quite complex for us to grasp, it’s not going to be possible to have scientists sit in a room and think up treatments for Alzheimer’s disease and diabetes.
On a related note, an article in Nature explores the phenomenal success of the MRC’s Laboratory of Molecular Biology at Cambridge which has produced 9 Nobel Laureates, the latest one in 2009. The piece also talks about similar successful experiments, for instance at Justus von Liebig’s laboratory in Germany or Ivan Pavlov’s laboratory in Russia and places a significant share of the productivity in successful labs on the shoulders of their leaders. The MRC’s leaders led less and interacted more. Tea was a daily tradition and Nobel Laureates sat at the same table with graduate students and postdocs during lunch. Everyone was encouraged to speak up and no one was afraid to ask what could be perceived as a stupid question; Tom Steitz who was awarded a prize for work on the ribosome remembers a meeting where director Max Perutz asked about the difference between prokaryotes and eukaryotes. The ideal leader directed less vertically and more horizontally.
Some of the Nobel Laureates at the MRC

A similar tradition was carried out in many other outstanding institutes producing famous scientists; these included the Institute for Advanced Study at Princeton (where Robert Oppenheimer used to say that “tea is where we explain to each other what we don’t understand”), Niels Bohr’s institute in Copenhagen which nurtured the founders of quantum mechanics, and the forerunner of the MRC, Ernest Rutherford and Lawrence Bragg’s Cavendish Laboratory, which discovered both the neutron and the structure of DNA. The same principle applied to industrial labs like Bell Labs and IBM; as Jon Gertner’s book on Bell Labs chronicles, its legendary leader Mervin Kelly gave his scientists the same freedom. This freedom manifested itself even in the physical layout of the buildings, which featured movable panels that allowed experimental and theoretical sanctums to connect. And it goes without saying that Kelly and most other successful directors were world-class scientists themselves, or at least people with a considerable scientific background. Contrast that with much of today’s corporate research enterprise where scientific leaders at the top have been replaced with lawyers and MBAs.
It’s also worth noting that these scientific leaders never made the mistake of equating quality with quantity; Rutherford’s lab even had a rule that forbade work after 6 PM except in rare cases. There’s a huge lesson there for professors and departments who insist that their students spend 12 or 14 hour days at the bench. As history has adequately demonstrated, it’s very much possible to work a productive 9 AM – 6 PM workday and still achieve significant results, and I have been told this is the way it still largely works in countries like Germany. The key lies in culture, collaboration and focus, not raw work hours. It’s really not that hard to understand that the best results arise when scientists are supplied with a general overarching plan but are otherwise left free to work out their own details for implementing it. And often the best short-term research is long-term research.
A friend of mine tells the story of her father who was working at a well-known government institution in the US. He quit when they started circulating forms that asked the scientists what they thought they would discover next year. “How the hell should I know what I am going to discover next year?”, wrote my friend’s father on the form before he stormed out.

Post first published on the Scientific American blog network.

ChemCoach Carnival: What I do

I am late to the party, but SeeArrOh's ChemCoach Carnival has given me a chance to indulge in some narcissistic self-promotion. There are many great entries on his blog, so you should take a look. Here's my pitch.


Your current job.

I am an organic chemist turned molecular modeler at a small biotech startup in Cambridge, MA. I spend as much time looking at synthetic strategies, building block procurement, target selection and assays as I spend building models. I also spend a lot of time thinking about how my work fits within the broader boundaries of science.

What you do in a standard "work day."

As a lot of other scientists on this thread have emphasized, one of the great things about our job is that there is no “standard work day”. I am the lone modeler in a small startup, so I have to wear several hats. I am as involved in discussing synthesis and assays as I am in docking small molecules to proteins or running molecular dynamics simulations. In addition I also need to occasionally look up building block availability, talk to database and informatics specialists and arrange for presentations from outside vendors. The point is that in drug discovery, and especially in a small outfit, you must be adaptable and able to accomplish multiple, diverse tasks. This kind of capability makes you a valued member of the team, especially in a small company where your voice will be heard by everyone, from the intern to the CEO. It’s also a terrific learning experience in general.

What kind of schooling / training / experience helped you get there?

I have a doctorate in organic chemistry, although frankly that is just the means to an end. I switched from synthesis to modeling because I was clumsy in the lab and because I was interested in many different fields of science. I don’t regret my choice at all. Modeling allowed me to indulge interests in physics, chemistry, computer science and biology. I would say that if you have diverse scientific interests, modeling and simulation in a general sense would be excellent career choices for you. If you are planning for a career in drug discovery or biotechnology, I would encourage you to soak up as much knowledge from diverse fields of chemistry and biology as possible. You won’t regret it.

This would also be the place for me to sneak in my favorite pitch regarding the history and philosophy of science. A simple piece of advice: study it. Science is done by fascinating human beings with all their flaws and triumphs. Your experiments are not being done in a vacuum. Reading up on the history of your discipline will give you the feeling of participating in a grand, unbroken thread of discovery going back to the Greeks. Even if you may not be a world-famous scientist or are not doing earth-shattering research, the simple fact that you are exploring the same laws of physics and chemistry that world-famous scientists once did will put you in their league and inspire a feeling of kinship. Studying the history of science will convince you that there are many who empathize and who have shared the same sense of despair and triumph that you do. Study the history of science, and you will know that you are not alone.

How does chemistry inform your work?

It is all-pervasive in my work. When I say I am a “molecular modeler”, I mean that in the broadest sense of the term. For me all of chemistry is largely about models, whether the models consist of structures scribbled on a hood or three-dimensional protein images built on a computer screen. A lot of people think computation in drug discovery is all about building regression models and writing fancy algorithms. But what it’s really about is data interpretation, and pretty much all this data is chemical. I cannot stress how important it is for a molecular modeler to understand chemistry, especially organic and physical chemistry. Problems that seemed intractable have often proved amenable to solutions grounded in the principles of basic organic chemistry. In addition you have to have a real feel for structure-activity relationships and the basic physicochemical properties of functional groups. Useful numbers from thermodynamics and kinetics should ideally roll off your tongue like French verbs. A knowledge of statistics is also important. My background in organic chemistry is much more important than any facility with programming or knowledge of particular software that I may have picked up on the way. Those things you can learn, but the bedrock of your work will always be chemistry, even when it's operating behind the scenes.

Finally, a unique, interesting, or funny anecdote about your career*

Well, when I said I was clumsy in the lab I was thinking about the time I actually dropped a rotavap on the floor and broke it. My advisor, a generous man, said that maybe I was not quite cut out for working in the lab. It was the only time in my life that an embarrassing accident gently pointed out by a wise future advisor has fortuitously decided the trajectory of my career.