Field of Science

Einstein, Oppenheimer, relativity and black holes: The curse of fundamentalitis

Image: J. Robert Oppenheimer and Albert Einstein at the Institute for Advanced Study in Princeton
A hundred years ago, in November 1915, Albert Einstein sent a paper to the Prussian Academy of Sciences which was to become one of the great scientific papers of all time. In this paper Einstein published the full treatment of his so-called field equations, which describe how matter curves spacetime; the paper heralded the culmination of his general theory of relativity.
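
For readers who want to see what that short paper boiled down to, the field equations can be written in modern notation (the cosmological constant term came later) as

$$ R_{\mu\nu} - \frac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu} $$

where the left-hand side encodes the curvature of spacetime and the right-hand side the energy and momentum of the matter within it.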

Forty years later, when Einstein died, the implications of that paper had completely changed our view of the cosmos. They had explained the anomalous precession of Mercury, predicted the bending of starlight and, most importantly, the expansion of the universe. Einstein enthusiastically accepted all these conclusions. One conclusion, however, that he neither accepted nor even seemed interested in was what the equations implied for regions of the cosmos where gravity is so strong that not even light can escape - a black hole. Today we know that black holes showcase Einstein's general theory of relativity in all its incandescent glory. In addition, black holes have become profound playgrounds for some of the deepest mysteries of the universe, including quantum mechanics, information theory and quantum gravity.

And yet Einstein seemed almost pathologically uninterested in them. He had heard about them from many of his colleagues; in particular his Princeton colleague John Wheeler had taken it upon himself to fully understand these strange objects. But Einstein stayed aloof. There was another physicist of the same persuasion whose office was only one floor away from his - J. Robert Oppenheimer, the architect of the atomic bomb and the Delphic director of the Institute for Advanced Study where Einstein worked. Oppenheimer in fact had been the first to describe these black holes mathematically, in a seminal paper in 1939. Unfortunately Oppenheimer's paper was published on the same day that Hitler attacked Poland. In addition, its importance was eclipsed by another article in the same issue of the journal Physical Review: an article by Niels Bohr and John Wheeler describing the mechanism of nuclear fission, a topic that would soon herald urgent and ominous portents for the fate of the world.

The more general phenomena of gravitational contraction and collapse that black holes exhibit seemed doomed to obscurity; in a strange twist of fate, those who truly appreciated them stayed obscure, while those who were influential ignored them. Among the former were Subrahmanyan Chandrasekhar and Fritz Zwicky; among the latter were Oppenheimer, Einstein and Arthur Eddington. By 1935, Chandrasekhar had discovered a limiting formula for white dwarfs beyond which a white dwarf could no longer thwart its inward gravitational pull. He was roundly scolded by Eddington, one of the leading astronomers of his time, who stubbornly refused to believe that nature would behave in such a pathological manner. Knowing Eddington's influence in the international community of astronomers, Chandrasekhar wisely abandoned his pursuit until others validated it much later.

The Swiss astronomer Fritz Zwicky was a more pugnacious character, and in the 1930s he and his Caltech colleague Walter Baade published an account of what we now call a neutron star as a plausible explanation for the tremendous energy powering the luminous explosion of a supernova. Zwicky's prickly and slightly paranoid personality led to his distancing from other mainstream scientists, and his neutron stars were taken seriously by only a few scientists, among them the famous Soviet physicist Lev Landau. It was building on Landau's work in 1938 and 1939 that Oppenheimer and his students published three landmark papers which pushed the envelope on neutron stars and asked what would be the logical, extreme conclusion for a star completely unable to support itself against its own gravity. In the 1939 paper in particular, Oppenheimer and his student Hartland Snyder presented several innovations, among them the difference between time as measured by an external observer outside a black hole's so-called event horizon and by a freely falling observer inside it.

Then World War 2 intervened. Einstein got busy signing letters to President Franklin Roosevelt warning him of Germany's efforts to acquire nuclear weapons while Oppenheimer got busy leading the Manhattan Project. When 1945 dawned both of them had forgotten about the key theoretical insights regarding black holes which they had produced before the war. It was a trio of exceptional scientists - Dennis Sciama in the UK, John Wheeler at Princeton and Yakov Zeldovich in the USSR - who got interested in black holes after the war and pioneered research into them.

What is strangest about the history of black holes is Einstein and Oppenheimer's utter indifference to their existence. What exactly happened? Oppenheimer’s lack of interest wasn’t just because he despised the free-thinking and eccentric Zwicky who had laid the foundations for the field through the discovery of black holes' parents - neutron stars. It wasn’t even because he achieved celebrity status after the war, became the most powerful scientist in the country and spent an inordinate amount of time consulting in Washington until his carefully orchestrated downfall in 1954. All these factors contributed, but the real reason was something else entirely – Oppenheimer simply wasn’t interested in black holes. Even after his downfall, when he had plenty of time to devote to physics, he never talked or wrote about them. He spent countless hours thinking about quantum field theory and particle physics, but not a minute thinking about black holes. The creator of black holes basically did not think they mattered.

Oppenheimer’s rejection of one of the most fascinating implications of modern physics and one of the most enigmatic objects in the universe - and one he sired - is documented well by Freeman Dyson who tried to initiate conversations about the topic with him. Every time Dyson brought it up Oppenheimer would change the subject, almost as if he had disowned his own scientific children.

The reason, as attested to by Dyson and others who knew him, was that in his last few decades Oppenheimer was stricken by a disease which I call “fundamentalitis”. Fundamentalitis is a serious condition that causes its victims to believe that the only thing worth thinking about is the deep nature of reality as manifested through the fundamental laws of physics.
As Dyson put it:
“Oppenheimer in his later years believed that the only problem worthy of the attention of a serious theoretical physicist was the discovery of the fundamental equations of physics. Einstein certainly felt the same way. To discover the right equations was all that mattered. Once you had discovered the right equations, then the study of particular solutions of the equations would be a routine exercise for second-rate physicists or graduate students.”
Thus for Oppenheimer, black holes, which were particular solutions of general relativity, were mundane; the general theory itself was the real deal. In addition they were anomalies, ugly exceptions which were best ignored rather than studied. As Dyson mentions, unfortunately Oppenheimer was not the only one affected by this condition. Einstein, who spent his last few years in a futile search for a grand unified theory, was another. Like Oppenheimer he was uninterested in black holes, but he also went a step further by not believing in quantum mechanics. Einstein’s fundamentalitis was quite pathological indeed.
History proved that both Oppenheimer and Einstein were deeply mistaken about black holes and fundamental laws. The greatest irony is not just that black holes turned out to be very interesting; it is that in the last few decades the study of black holes has shed light on the very same fundamental laws that Einstein and Oppenheimer believed to be the only things worth studying. The disowned children have come back to haunt the ghosts of their parents.
As mentioned earlier, black holes took off after the war largely due to the efforts of a handful of scientists in the United States, the Soviet Union and England. But it was experimental developments which truly brought their study to the forefront. The new science of radio astronomy showed us that, far from being anomalies, black holes litter the landscape of the cosmos, including the center of the Milky Way. A few years after Oppenheimer's death, the Israeli theorist Jacob Bekenstein uncovered a very deep relationship between thermodynamics and black hole physics. Stephen Hawking and Roger Penrose showed that black holes contain singularities; far from being ugly anomalies, black holes thus demonstrated Einstein's general theory of relativity in all its glory. They also realized that a true understanding of singularities would involve the marriage of quantum mechanics and general relativity, a paradigm that's as fundamental as any other in physics.
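
That relationship can be stated in a single line (quoted here for context, not from the original post): the entropy of a black hole is proportional to the area of its event horizon,

$$ S_{\mathrm{BH}} = \frac{k_B\, c^3 A}{4 G \hbar} $$

a formula in which the constants of thermodynamics ($k_B$), relativity ($c$ and $G$) and quantum mechanics ($\hbar$) all appear together.
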
In perhaps the most exciting development in the field, Leonard Susskind, Hawking and others have found intimate connections between information theory and black holes, leading to the fascinating black hole firewall paradox that forges very deep connections between thermodynamics, quantum mechanics and general relativity. Black holes are even providing insights into computer science and computational complexity. The study of black holes is today as fundamental as the study of elementary particles in the 1950s.
Einstein and Oppenheimer could scarcely have imagined that this cornucopia of discoveries would come from an entity that they despised. But their wariness toward black holes is not only an example of missed opportunities or the fact that great minds can sometimes suffer from tunnel vision. I think the biggest lesson from the story of Oppenheimer and black holes is that what is considered ‘applied’ science can actually turn out to harbor deep fundamental mysteries. Both Oppenheimer and Einstein considered the study of black holes to be too applied, an examination of anomalies and specific solutions unworthy of thinkers thinking deep thoughts about the cosmos. But the delicious irony was that black holes in fact contained some of the deepest mysteries of the cosmos, forging unexpected connections between disparate disciplines and challenging the finest minds in the field. If only Oppenheimer and Einstein had been more open-minded.
The discovery of fundamental science in what is considered applied science is not unknown in the history of physics. For instance Max Planck was studying blackbody radiation, a relatively mundane and applied topic, but it was in blackbody radiation that the seeds of quantum theory were found. Similarly it was spectroscopy or the study of light emanating from atoms that led to the modern framework of quantum mechanics in the 1920s. Scores of similar examples abound in the history of physics; in a more recent case, it was studies in condensed matter physics that led physicist Philip Anderson to make significant contributions to symmetry breaking and the postulation of the existence of the Higgs boson. And in what is perhaps the most extreme example of an applied scientist making fundamental contributions, it was the investigation of heat engines by the French engineer Sadi Carnot that led to a foundational law of science – the second law of thermodynamics.
Today many physicists are again engaged in a search for ultimate laws, with at least some of them thinking that these ultimate laws would be found within the framework of string theory. These physicists probably regard other parts of physics, and especially the applied ones, as unworthy of their great theoretical talents. For these physicists the story of Oppenheimer and black holes should serve as a cautionary tale. Nature is too clever to be constrained into narrow bins, and sometimes it is only by poking around in the most applied parts of science that one can see the gleam of fundamental principles.
As Einstein might have said had he known better, the distinction between the pure and the applied is often only a "stubbornly persistent illusion". It's an illusion that we must try hard to dispel.
This is a revised version of an old post which I wrote on the occasion of the one-hundredth anniversary of the publication of Einstein's field equations.

The death of new reactions in medicinal chemistry?

Image: JFK to medicinal chemists: Get out of your comfort zone and try out new reactions; not because it's easy, but because it's hard
Since I was discussing the "death of medicinal chemistry" the other day (what's the use of having your own blog if you cannot enjoy some dramatic license every now and then), here's a very interesting and comprehensive analysis in J. Med. Chem. which bears directly on that discussion. The authors, Dean Brown and Jonas Boström of AstraZeneca, have done a study of the most common reactions used by medicinal chemists, based on a representative set of papers published in the Journal of Medicinal Chemistry in the years 1984 and 2014. Their depressing conclusion is that about 20 reactions populate the toolkit of medicinal chemists in both years. In other words, if you can run those 20 chemical reactions well, then you could be as competent a medicinal chemist in the year 2015 as in 1984, at least on a synthetic level.

In fact the picture is probably more depressing than that. The main difference between the medicinal chemistry toolkit in 1984 and in 2014 is the rise of the Suzuki-Miyaura cross-coupling reaction and of amide bond formation reactions, which together dominate modern medicinal chemistry. The authors also look at overall reactions vs "production reactions", that is, the final steps which generate the product of interest in a drug discovery project. Again, most of the production reactions are dominated by the Suzuki reaction and the Buchwald-Hartwig reaction. Reactions like phenol alkylation, which is used more frequently in 2014 than in 1984, partly point to the fact that we are now more attuned to unfavorable metabolic reactions like glucuronidation which necessitate the capping of free phenolic hydroxyl groups.

There is a lot of material to chew upon in this analysis and it deserves a close look. Not surprisingly, a host of important and interesting factors like reagent and raw material availability, ease of synthesis (especially outsourcing) and better (or flawed and exaggerated) understanding of druglike character have dictated the relatively small differences in reaction use over the last thirty years. In addition there is also a thought-provoking analysis of differences in reactions used for making natural products vs druglike compounds. Surprisingly, the authors find that reactions like cross-coupling which heavily populate the synthesis of druglike compounds are not as frequently encountered in natural product synthesis; among the top 20 reactions used in medicinal chemistry, few are used in natural product synthesis.

There is a thicket of numbers and frequency analyses of changes in reaction type and functional group type showcased in the paper. But none of that should blind us to the central take-home message: in terms of innovation, at least as measured by new reaction development and use, medicinal chemistry has been rather stagnant over the last thirty years. Why would this be so? Several factors come to mind, some of them discussed in the paper, and most of them don't speak well of the synthetic aspects of the drug discovery enterprise.

As the authors point out, cross-coupling reactions are easy to set up and run, and there is a wide variety of catalytic reagents that allows for robust reaction conditions and substrate variability. Not surprisingly, these reactions are also disproportionately easy to outsource. This means that they can produce a lot of molecules fast, but as common sense indicates and the paper confirms, more is not better. In my last post I talked about the fact that one reason wages have been stagnant in medicinal chemistry is precisely because so much of medicinal chemistry synthesis has become cheap and easy, and coupling chemistry is a big part of why this is so.

One factor that the paper does not explicitly talk about but which I think is relevant is the focus on certain target classes which has dictated the choice of specific reactions over the last two decades or so. For example, a comprehensive and thought-provoking analysis by Murcko and Walters from 2012 speculated that the big emphasis on kinase inhibitors in the last fifteen years or so has led to a proliferation of coupling reactions, since biaryls are quite common among kinase inhibitor scaffolds. The current paper validates this speculation and in fact demonstrates that para-disubstituted biphenyls are among the most common motifs in modern medicinal chemistry compounds.

Another damning critique that the paper points to in its discussion of the limited toolkit of medicinal chemistry reactions is our obsession with druglike character and with this rule and that metric for defining such character, a community pastime we have been collectively preoccupied with roughly since 1997 (when Lipinski published his paper). The fact of the matter is that the 20 reactions which medicinal chemists hold so dear are quite amenable to producing their favorite definition of druglike molecules: flat, relatively characterless, high-throughput synthesis-friendly and cheap. Once you narrowly define what your target or compound space is, you also limit the number of ways to access that space.

That problem becomes clear when the authors compare their medicinal chemistry space to natural product space, both in terms of the reactions used and the final products. It's well known that natural products have more sp3 character and more chiral centers, and reactions like the Suzuki coupling are not going to make too many of those. In addition, the authors perform a computational analysis of 3D shapes on their typical medicinal chemistry dataset. This analysis can have a subjective component to it, but what's clear not just from this calculation but from previous ones is that what we call druglike molecules occupy a very different shape distribution from more complex natural products.
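
For readers curious what such a shape analysis looks like in practice, here is a minimal sketch (my own illustration of the general technique, not the authors' actual workflow) that uses the open-source RDKit toolkit to compute normalized principal moment of inertia ratios, the descriptors commonly used to place molecules on a rod-disc-sphere triangle; the example molecules are arbitrary choices of mine:

```python
# Minimal sketch of a rod/disc/sphere shape analysis with RDKit.
# Illustration of the general technique only, not the paper's actual protocol.
from rdkit import Chem
from rdkit.Chem import AllChem, Descriptors3D

# Arbitrary example molecules: a para-disubstituted biphenyl (rod-like)
# and camphor, a small natural product with more three-dimensional character.
examples = {
    "para-disubstituted biphenyl": "Nc1ccc(-c2ccc(C#N)cc2)cc1",
    "camphor": "CC1(C)C2CCC1(C)C(=O)C2",
}

for name, smiles in examples.items():
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    # Generate a single 3D conformer; a real analysis would average over many.
    AllChem.EmbedMolecule(mol, randomSeed=42)
    AllChem.MMFFOptimizeMolecule(mol)
    # Normalized principal moment of inertia ratios:
    # rods sit near (0, 1), discs near (0.5, 0.5), spheres near (1, 1).
    npr1 = Descriptors3D.NPR1(mol)
    npr2 = Descriptors3D.NPR2(mol)
    print(f"{name}: NPR1 = {npr1:.2f}, NPR2 = {npr2:.2f}")
```

In a plot of NPR1 against NPR2, rod-like biphenyls cluster near one corner of the triangle while more three-dimensional natural products spread toward the disc and sphere corners - essentially the pattern that shape analyses of this kind reveal.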

For instance, a paper also from AZ that just came out demonstrated that many compounds occupying "non-Lipinski" space have spherical and disc-like shapes that are not seen in more linear compounds. In that context, the para-disubstituted biphenyls which dot the landscape of modern druglike molecules are the epitome of linear compounds. As the authors show us, there is thus a direct correlation between the kinds of reactions used commonly at the bench today and the shapes and character of the compounds they result in. And all this constrained thinking is producing a very decided lack of diversity in the kinds of compounds that we are shuttling into clinical trials. The focus here may be on synthetic reactions in particular, but it's affecting all of us and is at least a part of the answer to why medicinal chemists don't seem to see better days.

Taken together, the analyses in this review throw down the gauntlet to the modern medicinal chemist and ask a provocative question: "Why are you taking the easy way out and making compounds that are easy to make? Why aren't you trying to expand the scope of novel reactions and trying to explore uncharted chemical space?" To which we may also add, "Why are you letting your constrained views of druglike space and metrics dictate the kinds of reactions you use and the molecules they result in?"

As they say, however, it's always better to light a candle than to just curse the darkness (which can be quite valuable in itself). The authors showcase several new and interesting reactions - ring-closing cross-metathesis, C-H arylation, fluorination, photoredox catalysis - which can produce a wide variety of interesting and novel compounds that challenge traditional druglike space and promise to interrogate novel classes of targets. Expanding the scope of these reactions is not easy and will almost certainly result in some head scratchers, but that may be the only way we innovate. I might also add that the advent of new technologies such as DNA-encoded libraries also promises to change the fundamental character of our compounds.

This paper is clearly a challenge to medicinal chemists and in fact is pointing out an embarrassing truth for our entire community: cost, convenience, job instability, poor management and plain malaise have made us take the easy way out and keep on circling back to a limited palette of chemical reactions that ultimately impact every aspect of the drug discovery enterprise. Some of these factors are unfortunate and understandable, but others are less so, especially if they're negatively affecting our ability to hit new targets, to explore novel chemical space, and ultimately to discover new drugs for important diseases which kill people. What the paper is saying is that we can do better.

More than fifty years ago John F. Kennedy made a plea to the country to work hard on getting a man to the moon and bringing him back, "not because it's easy, but because it's hard". I would say analyses like this one ask the same of medicinal chemists and drug discovery scientists in general. It's a plea we should try to take to heart - perhaps the rest of JFK's exhortation will motivate us:

"We choose to go to the moon. We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win."

The death of medicinal chemistry?

Image: E. J. Corey's rational methods of chemical synthesis revolutionized organic chemistry, but they also may have been responsible for setting off unintended explosions in the medicinal chemistry job market
Chemjobber points us to a discussion hosted by Michael Gilman, CEO of Padlock Therapeutics, on Reddit, in which he laments the fact that medicinal chemistry has now become so commoditized that it's going to be unlikely for wages to rise in that field. Here's what he had to say.
"I would add that, unfortunately, medicinal chemistry is increasingly regarded as a commodity in the life sciences field. And, worse, it's subject to substantial price competition from CROs in Asia. That -- and the ongoing hemorrhaging of jobs from large pharma companies -- is making jobs for bench-level chemists a bit more scarce. I worry, though, because it's the bench-level chemists who grow up and gather the experience to become effective managers of out-sourced chemistry, and I'm concerned that we may be losing that next general of great drug discovery chemists."
I think he's absolutely right, and that's partly what has been responsible for the woes of the pharmaceutical and biotech industry over the last few years. But as I noted in the comments section of CJ's blog, the historian of science in me thinks that this is, ironically, a validation of organic synthesis as a highly developed field whose methods and ideas have now become so standardized that you need very few specialized practitioners to put them into practice.

I have written about this historical aspect of the field before. The point is that synthesis was undeveloped, virgin territory when scientists like R B Woodward, E J Corey and Carl Djerassi worked in it in the 1950s and 60s. They were spectacularly successful. For instance, when Woodward synthesized complex substances like strychnine (strychnine!) and reserpine, many chemists around the world had not believed that we could actually make molecules as complicated as these. Forget about standardization; even creative chemists found it quite hard to make molecules like nucleic acids and peptides which we take for granted now.

It was hard work by brilliant individuals like Woodward, combined with the amazing proliferation of techniques for structure determination and purification (NMR, crystallography, HPLC etc.), that led to the vast majority of molecules falling under the purview of chemists who were distinctly non-Woodwardian in their abilities and creative reach. Corey especially turned the field into a more or less precisely predictive science that could yield to rational analysis. In the 1990s and 2000s, with the advent of palladium-catalyzed coupling chemistry, more sophisticated instrumentation and combinatorial chemistry, even callow chemists could make molecules which would have taken their highly capable peers months or years to make in the 60s. As just one example, today in small biotech companies interns can learn to make in three months the same molecules that bench chemists with PhDs are making. The bench PhDs presumably have better powers of critical thinking and planning, but the gap has still narrowed significantly. The situation may reach a fever pitch with the development of automated methods of synthesis. The bottom line is that synthesis is no longer the stumbling block for the discovery of new drugs; the stumbling block is largely our understanding of biology and toxicity.

Because organic synthesis and much of medicinal chemistry have now become victims of their own success - tame creatures which can be harnessed into workable products even by modestly trained chemists in India or China - the prevailing scenario, as pointed out by Dr. Gilman, now involves a few creative and talented medicinal chemists at the top directing the work of a large number of less talented chemists around the world (that's certainly the case at my workplace). From an economic standpoint it makes sense that only these few people at the top command the highest wages while those under them make a more modest living; the average wage has thus been lowered. That's great news for the average bench chemist in Bangalore but not for the ambitious medicinal chemist in Boston. And as Dr. Gilman says, considering the layoffs in pharma and biotech it's also not great news for the field in general.

It's interesting to contemplate how this situation mirrors the one in computer science, especially concerning the development of the customized code that powers our laptops and workstations; it's precisely why companies like Microsoft and Google can outsource so much of their software development to other countries. Coding has become quite standardized, and while there will always be a small niche demand for novel code, this will be limited to a small fraction at the top who can then shower the hoi polloi with the fruits of their labors. The vast masses who do coding, meanwhile, will never make the kind of money which the skill set commanded fifteen years ago. Ditto for med chem. Whenever a discipline becomes too mature it sadly becomes a victim of its own success. That's why it's best to enter a field when the supply is still tight and the low-hanging fruit is still ripe for the taking. In the tech sector data science is such a field right now, but you can bet that even the hallowed position of data scientist is not going to stay golden for too long once that skill set too becomes largely automated and standardized.

What, then, will happen to the discipline of medicinal chemistry? The simple truth is that when it comes to cushy positions that pay extremely well, we'll still need medicinal chemists, but only a few. In addition, medicinal chemists will have to shift their focus from synthesis to a much more holistic approach; thus medicinal chemistry, at least as traditionally conceived with a focus on synthesis and rapid access to chemical analogs, will be seeing its demise soon. Most medicinal chemists are still reluctant to think of themselves as anything other than synthetic chemists, but this situation will have to change. Ironically Wikipedia seems to be ahead of the times here, since its entry on medicinal chemistry encompasses pharmacology, toxicology, structural and chemical biology and computer-aided drug design. It would be a good blueprint for the future:

"In particular, medicinal chemistry in its most common practice - focusing on small organic molecules - encompasses synthetic organic chemistry and aspects of natural products and computational chemistry in close combination with chemical biology, enzymology and structural biology, together aiming at the discovery and development of new therapeutic agents. Practically speaking, it involves chemical aspects of identification, and then systematic, thorough synthetic alteration of new chemical entities to make them suitable for therapeutic use. It includes synthetic and computational aspects of the study of existing drugs and agents in development in relation to their bioactivities (biological activities and properties), i.e., understanding their structure-activity relationships (SAR). Pharmaceutical chemistry is focused on quality aspects of medicines and aims to assure fitness for purpose of medicinal products."

To escape the tyranny of the success of synthetic chemistry, the accomplished medicinal chemist of the future will thus likely be someone whose talents are not just limited to synthesis but whose skill set more broadly encompasses molecular design and properties. While synthesis has become standardized, many other disciplines in drug discovery like computer-aided drug design, pharmacology, assay development and toxicology have not. There is still plenty of scope for original breakthroughs and standardization in these unruly areas, and there's even more scope for traditional medicinal chemists to break off chunks of those fields and weave them into the fabric of their own philosophy in novel ways, perhaps by working with these other practitioners to incorporate "higher-level" properties like metabolic stability, permeability and clearance into their own early designs. This takes me back to a post I wrote on an article by George Whitesides which argued that chemists should move "beyond the molecule" and toward uses and properties: Whitesides could have been talking about contemporary medicinal chemistry here.

The integration of downstream drug discovery disciplines into the early stages of synthesis and hit and lead discovery will itself be a novel kind of science and art whose details need to be worked out; that art by itself holds promising dividends for adventurous explorers. But the mandate for the 20th-century medicinal chemist working in the 21st still rings true. Medicinal chemists who can borrow from myriad other disciplines and use that knowledge in their synthetic schemes, thus broadening their expertise beyond the tranquil waters of pure synthesis into the roiling seas of biological complexity, will be in far greater demand both professionally and financially. Following Darwin, the adage they should adopt is to be the ones who are not the strongest or the quickest synthetically but the ones who are most adaptable and responsive to change.

For medicinal chemistry to thrive, its very definition will have to change.

Beware of von Neumann's elephants using bulldozers to model quarks

The other day I wrote about the late physicist Leo Kadanoff who captured one of the key caveats of models with a seriously useful piece of advice - "Do not model bulldozers with quarks". Kadanoff was talking about the problems that arise when we fail to use the right resolution and tools to model a specific system. While reading Kadanoff's warnings I also remembered one of John von Neumann's equally witty portents for flawed modeling - "With four parameters I can fit an elephant, and with five I can make him wiggle his trunk".

It strikes me that between them Kadanoff and von Neumann capture almost all the cardinal sins of modeling. The other day I was having a conversation about modeling with a leading industrial molecular modeler, and he made the very cogent point that it is imperative to keep the resolution of a particular system and the data it presents in mind when modeling it. My colleague could well have been channeling Kadanoff. This point is actually simple enough to understand (although hard enough to always keep in mind when obeying institutional mandates in a shortsighted environment which thrives on unrealistic short-term goals). 

If you are doing structure-based drug design, for instance, it's dangerous to try to read too much atomic detail into a 3 angstrom protein-ligand structure. Divining fine details of halogen substitutions, amide flips and water molecules from such a structure can easily get you in trouble. If a 3 angstrom structure is the best you have, your optimum strategy would be to try rough designs of molecules - a hydrophobic extension here, a basic amine there - without getting too fine-grained about it. What you should aim for is maximum diversity accessible with minimal synthetic effort - libraries of small peptides might be suitable candidates in such cases. After that, let the chemical matter guide you. Once you have a hit, that's when you want to get more detailed, although even then the low resolution of the structure may be at odds with the high resolution of your thinking.

An equally good or even better strategy to adopt in such cases might be a purely ligand-based assault on the target. There might be similar ligands hitting similar proteins which you are aware of, or even in the case of de novo ligand design you might want to push for purely ligand-based diversity. But this is where you now have to start listening to von Neumann. You may try to fit potential activities of ligands to a few parameters, or build a QSAR model. What you might really be doing, however, is building not a QSAR model but a house of cards supporting a castle in the air - in other words an overfit model with scant connection to chemically intuitive reality. In that case rest assured - von Neumann's elephant would be quite willing to crash his way in and tear apart your castle.
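
To make von Neumann's warning concrete, here is a small, self-contained sketch (a deliberately cartoonish stand-in for a real QSAR workflow, with made-up descriptor and activity values) showing how a model with nearly as many parameters as data points fits its training set perfectly and then stumbles on points it has not seen:

```python
# A cartoon of QSAR overfitting: fit 8 noisy "activity" measurements with a
# 7-parameter polynomial (perfect training fit) vs. a 2-parameter line.
# Illustrative only; all numbers are made up.
import numpy as np

rng = np.random.default_rng(0)

# Pretend 'x' is a single molecular descriptor and 'y' a measured activity
# that is really just a noisy linear trend.
x_train = np.linspace(0.0, 1.0, 8)
y_train = 2.0 * x_train + 0.5 + rng.normal(scale=0.1, size=x_train.size)

# "New compounds" drawn from the same underlying trend.
x_test = np.linspace(0.05, 0.95, 50)
y_test = 2.0 * x_test + 0.5

for degree in (1, 7):
    coeffs = np.polyfit(x_train, y_train, deg=degree)
    train_rmse = np.sqrt(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    test_rmse = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    print(f"degree {degree}: train RMSE = {train_rmse:.3f}, test RMSE = {test_rmse:.3f}")

# The degree-7 fit has essentially zero training error but typically much
# larger error on the unseen points - von Neumann's elephant in action.
```

Swap "polynomial degree" for "number of descriptors" and you have the standard recipe for a QSAR castle in the air.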

Kadanoff's admonition to not model bulldozers with quarks is a good rule for structure-based design. Von Neumann's elephants are good portents to keep in mind for ligand-based drug design. Together the two can hopefully keep you from falling into the abyss and getting crushed under the elephant and the bulldozer.

Linus Pauling's last laugh? Vitamin C might be bad news for mutant colorectal cancer


Image: Linus Pauling holding enough rope to make sure we can hang ourselves with it if we don't run the right statistically validated experiments
During the last few decades of his life, Linus Pauling (in)famously began a crusade to convince the general public of the miraculous benefits of vitamin C for curing every potential malady, from the common cold to cancer. Pauling’s work on ascorbic acid resulted in many collaborations, dozens of papers and at least two best-selling books.

The general reaction to his results and studies ranged from “interesting” to “hey, where are the proper controls and statistical validation?” Over the years none of his work has been definitively validated, but vitamin C itself has continued to be interesting, partly because of its cheap availability and ubiquitous nature in our diet and partly because of its antioxidant properties that seem to many people to be “obviously” beneficial (although there’s been plenty of criticism of antioxidants in general in recent years). Personally I have always put vitamin C in the “interesting and should be further investigated” drawer, partly because oxidation and reduction are such elemental cellular phenomena that anything that seeks to perturb such fundamental events deserves to be further looked at.

Now here’s an interesting paper in Science that validates the potential benefits of ascorbic acid in a very specific but well-defined case study. It’s worth noting at the outset that the word ‘potential’ should be highlighted in giant, size 24 bold font. The authors, who are part of a multi-organization consortium, look at the effects of high doses of the compound on colorectal cancer cells with mutations in two ubiquitous and important proteins – KRAS and BRAF. KRAS and BRAF are both part of key signaling networks in cells. Mutations in these proteins are seen in up to 40% of colorectal cancers, so both proteins have unsurprisingly been very high-profile targets of interest in cancer therapy for several decades. The mutations are additionally important because it turns out that cancers bearing them show poor response to anti-EGFR therapies.

One of the hallmarks of cancer cells which has been teased out in fascinating detail in the last few years is their increased metabolism, and especially their dependence on glucose metabolism pathways such as glycolysis that allow them to feed hungrily on this crucial substance. The current study took off from the observation that a glucose transporter protein called GLUT1 is overexpressed in these mutant cancer cells. Incidentally this transporter protein also transports vitamin C, but in its oxidized form (dehydroascorbate – DHA). Presumably the authors put two and two together and wondered whether oxidized vitamin C might be more rapidly absorbed by the mutant cancer cells and mess up the oxidation-reduction machinery inside.

It turns out that it does. Firstly, the authors confirmed by the addition of reducing agents that it’s the oxidized form of vitamin C that interferes with the cancer cells’ survival. Secondly, they looked at mutant vs wild-type cells and found that the mutant cells are indeed much more efficient at taking up the oxidized vitamin. Thirdly, they looked at various markers for cell death like apoptosis signals and found that these were indeed more pronounced in the KRAS and BRAF mutant cells (addition of a reducing agent rescued these cells, again attesting to the function of DHA rather than reduced vitamin C). Fourthly, mice with known as well as transgenic KRAS mutations showed favorable tumor reduction when vitamin C was administered intravenously.

Fifth and most interesting, they performed a metabolite analysis of the cells after treatment with vitamin C and found a significant accumulation of the chemical intermediates which serve as substrates for the enzyme glyceraldehyde-3-phosphate dehydrogenase (GAPDH). GAPDH is a central enzyme of the glycolytic pathway and its inhibition would unsurprisingly lead to cell starvation and death. Lastly, they were able to make a statement about the mechanism of action of vitamin C on GAPDH by determining that it might act through post-translational modification of the protein and through depletion of NAD+.

The authors end with some ruminations on the history of vitamin C therapy for cancer and the usual qualifications which should apply to any such study. As they note, vitamin C has a checkered history in the treatment of cancer, but most studies which failed to show benefits involved only large oral doses of the vitamin (Pauling himself was rumored to ingest up to 50 g of the substance a day). Intravenous administration, however, can reach the far higher blood concentrations that may be required for effective results. And of course, this study was done in mice, and time after time we have seen that such studies cannot be measurably extrapolated to human beings without a lot of additional work, so you should pause a bit before you rush off and try to inject yourself with Emergen-C solution.

Nonetheless, I think the detail-oriented and relatively clear nature of the study makes it a good starting point. Google searches of vitamin C and colorectal cancer bring up at least a few tantalizing clues as to its potential efficacy (along with a lot of New Age, feel-good piffle). As usual the key goal here is to separate out the wheat from the chaff, the sloppy anecdotal evidence from the careful statistical validation and the detailed mechanistic rationales from the stratospheric theorizing. When the dust settles we would hopefully have a clearer picture. And who knows, maybe the ghost of Linus Pauling might then even allow himself the last laugh, or at least an imperceptible smile.

Physicist Leo Kadanoff on reductionism and models: "Don't model bulldozers with quarks."

I have been wanting to write about Leo Kadanoff who passed away a few weeks ago. Among other things Kadanoff made seminal contributions to statistical physics, specifically the theory of phase transitions, that were undoubtedly Nobel caliber. But he should also be remembered for something else - a cogent and very interesting attack on 'strong reductionism' and a volley in support of emergence, topics about which I have written several times before.

Kadanoff introduced and clarified what can be called the "multiple platform" argument. The multiple platform argument is a response to physicists like Steven Weinberg who believe that higher-order phenomena like chemistry and biology have a strict one-to-one relationship with lower-order physics, most notably quantum mechanics. Strict reductionists like Weinberg tell us that "the explanatory arrows always point downward". But leading emergentist physicists like P W Anderson and Robert Laughlin have objected to this interpretation. Some of their counterarguments deal with very simple examples of emergence; for instance, a collection of gold atoms has a property (the color yellow) that does not directly flow from the quantum properties of individual gold atoms.

Kadanoff further revealed the essence of this argument by demonstrating that the Navier-Stokes equations, which are the fundamental classical equations of fluid flow, do not need quantum mechanics to account for them. Even today one cannot directly derive these equations from the Schrodinger equation, but what Kadanoff demonstrated is that even a simple 'toy model' in which classical particles move around on a hexagonal grid can give rise to fluid behavior described by the Navier-Stokes equations. There clearly isn't just one 'platform' (quantum mechanics) that can account for fluid flow. The complexity theorist Stuart Kauffman captures this well in his book "Reinventing the Sacred".
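
For concreteness, the equations in question - here in their standard incompressible form, not as written in any particular paper of Kadanoff's - are

$$ \rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = -\nabla p + \mu \nabla^2 \mathbf{u} + \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0 $$

where $\mathbf{u}$ is the fluid velocity, $p$ the pressure, $\rho$ the density, $\mu$ the viscosity and $\mathbf{f}$ any external force; nothing in them refers to quantum mechanics.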



Others have demonstrated that a simple 'bucket brigade' toy model, in which empty and filled buckets corresponding to binary 1s and 0s (which in turn can be linked to well-defined quantum properties) are passed around, can account for computation. Thus, as real as the electrons obeying quantum mechanics which flow through the semiconducting chips of a computer are, we do not need to invoke their specific properties in order to account for a computer's behavior. A simple toy model can do equally well.

Kadanoff's explanatory device is in a way an appeal to the great utility of models which capture the essential features of a complicated phenomenon. But at a deeper level it's also a strike against strong reductionism. Note that nobody is saying that a toy model of classical particles is a more accurate and fundamental description of reality than quantum mechanics, but what Kadanoff and others are saying is that the explanatory arrows going from complex phenomena to simpler ones don't strictly flow downward; in fact the details of such a flow cannot even be truly demonstrated.

In some of his other writings Kadanoff makes a very clear appeal based on such toy models for understanding complex systems. Two of his statements provide the very model of pithiness when it comes to using and building models:

"1. Use the right level of description to catch the phenomena of interest. Don't model bulldozers with quarks.

2. Every good model starts from a question. The modeler should always pick the right level of detail to answer the question."


"This lesson applies with equal strength to theoretical work aimed at understanding complex systems. Modeling complex systems by tractable closure schemes or complicated free-field theories in disguise does not work. These may yield a successful description of the small-scale structure, but this description is likely to be irrelevant for the large-scale features. To get these gross features, one should most often use a more phenomenological and aggregated description, aimed specifically at the higher level. 

Thus, financial markets should not be modeled by simple geometric Brownian motion based models, all of which form the basis for modern treatments of derivative markets. These models were created to be analytically tractable and derive from very crude phenomenological modeling. They cannot reproduce the observed strongly non-Gaussian probability distributions in many markets, which exhibit a feature so generic that it even has a whimsical name, fat tails. Instead, the modeling should be driven by asking what are the simplest non-linearities or non-localities that should be present, trying to separate universal scaling features from market specific features. The inclusion of too many processes and parameters will obscure the desired qualitative understanding."
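
As a rough illustration of the fat-tail point made above, here is a toy sketch of my own (not Kadanoff's; the parameters are arbitrary) comparing the Gaussian log-returns of a geometric Brownian motion with a heavy-tailed Student-t alternative:

```python
# Toy comparison of geometric Brownian motion (Gaussian log-returns) with a
# fat-tailed return model. Parameters are arbitrary illustration values.
import numpy as np

rng = np.random.default_rng(1)
n, dt = 100_000, 1.0 / 252  # "daily" steps
mu, sigma = 0.05, 0.2

# Geometric Brownian motion: log-returns are Gaussian by construction.
gbm_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)

# A crude fat-tailed alternative: Student-t shocks (df = 5), rescaled to the
# same mean and standard deviation as the GBM returns.
t_shocks = rng.standard_t(df=5, size=n)
fat_returns = gbm_returns.mean() + gbm_returns.std() * t_shocks / t_shocks.std()

def excess_kurtosis(x):
    # Zero for a Gaussian; large and positive for fat-tailed distributions.
    z = (x - x.mean()) / x.std()
    return np.mean(z**4) - 3.0

print(f"GBM log-returns:        excess kurtosis = {excess_kurtosis(gbm_returns):+.2f}")
print(f"Fat-tailed log-returns: excess kurtosis = {excess_kurtosis(fat_returns):+.2f}")
```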

Kadanoff's last paragraph captures as well as anything else why chemistry requires its own language, rules and analytical devices for understanding its details, and why biology and psychology require their own similar implements. Not everything can be understood through quantum mechanics, because as you try to get more and more fundamental, true understanding might simply slip away from between your fingers.

RIP, Leo Kadanoff.

(Ir)rational drug design and the history of 20th century science


Here is an excellent overview of the hopes and foibles of "rational" drug design by Brooke Magnanti (Hat tip: Pete Kenny) which touches on several themes and names that would be familiar to those in the field: Ant Nicholls and OpenEye, Dave Weininger and Daylight fingerprints, Barry Werth's "The Billion Dollar Molecule" and Vertex, the inflated hopes of structure-based design, cheminformatics and screening etc. 

Those who are heroic survivors of that period would probably start by looking back with dewy eyes, followed by groans of disappointment. The bottom line in that article and several similar ones is that rational drug design and all that it entails (crystallography and molecular modeling in particular) has clearly not lived up to the hype. It's also clear that the swashbuckling scientists portrayed by Werth in his book, for instance, were more brilliant than successful. It's a tape of hope and woe that has played before, over and over again in fact.

It's clear that much of the faith in rational drug design until now has had a healthy component of irrational exuberance to it. Looking back at the inflated expectations of the 1980s and early 90s for designing drugs atom by atom, followed by the disappointing failures and massive attrition which rapidly succeeded these expectations, makes me wonder what it was exactly that got everyone into trouble. There was a constellation of factors of course, but the historian of science in me thinks that a major part of at least the psychological (and by extension, organizational) side of the issue has to do with the stupendous successes of twentieth century science, which generated a mountain of optimism that skeptics are still trying to chip away at.

It's quite clear that as far as scientific progress goes, the 20th century was the mother of all centuries. Very significant scientific advances (Newton, Maxwell, Darwin, Mendel) had undoubtedly occurred in earlier times, but the sheer rate at which science advanced in the last one hundred years far outstripped scientific progress in all previous centuries. Just consider the roster of both idea-based and tool-based scientific revolutions that we witnessed in the past century: x-rays, the atomic nucleus, relativity, quantum mechanics, nuclear fission, the laws of heredity, the structure of biomolecules, particle physics, lasers, computers, organic synthesis, gene editing...and we are just getting warmed up here.

By the 1980s this amazing collection of scientific gems had reached a crescendo, especially in the biomedical sciences. The rise of recombinant DNA technology, protein structure determination, and improved hardware, software and visualization virtually ensured that scientists started feeling very good indeed about designing drugs to block particular proteins at the molecular level. Philosophically too they were highly primed by the astounding reductionist successes of the past one hundred years. After all, reductionism had uncovered the cosmic microwave background radiation from the Big Bang, given us the structures of elemental proteins of life like hemoglobin and the photosynthetic complex, split the atom, doubled the number of transistors on a chip every eighteen months and taught us how to copy and paste genes. Designing drugs would be a natural extension, if not a job for graduate students, after all this success.

But what happened instead was that both scientifically and philosophically we ran into a wall. What we found out scientifically was that we still understand only a fraction of the complexity of biological systems that we would need to understand in order to perturb them with the fine scalpels of small organic molecules. Philosophically we found out that biological systems are emergent and contingent, so all the reductionist success of the past century is still not enough to understand them. In fact beyond a certain point reductionism would fundamentally put us on the wrong track. The past hundred years made us believers in Moore's Law, but what we got instead was Eroom's Law - Moore spelled backwards - which describes the steady exponential decline in the number of new drugs approved per dollar of R&D spending. Moore's Law is what reduces my running time from 12 mins/mile to 8:30 mins/mile in a year. Eroom's Law is what keeps it from reducing much further. Exponential technological success is not axiomatic and self-fulfilling.

I thus see a very strong influence of the success of twentieth century science in steering the wildly optimistic hopes of drug discovery scientists beginning in the 1980s. Hopefully we are wiser now, but institutional forces and biases still keep us from improving on our failures. As Pete Kenny says in his post for instance, obsession with specific technologies rather than a combined application of several technologies still biases scientists and managers in biotech and pharmaceutical organizations. The rise and ebb (did you just say "rise"?) of economic forces makes the job environment unstable and discourages scientists from pushing bold ideas that promise to break free from reductionist approaches. And much of our science is still based on sloppy theorizing without proper recourse to statistics and controls, not to mention an unbiased look at what the experiments truly are and are not telling us. 

Santayana told us that we are condemned to relive history if we forget it. But when it comes to the promises of rational drug design, what we should do perhaps is to purge our minds of the successes of the 20th century and remember Francis Bacon's exhortation from the 17th century instead: "All depends upon keeping the eye steadily fixed on the facts of nature. For God forbid that we should give out a dream of our own for a pattern of the world."

Image: "Cognition enhancer" (Source: Brooke Magnanti, Garrett Vreeland)

"The Hunt for Vulcan": Theory, experiment, and the origin of scientific revolutions

Image: Urbain Le Verrier: The force of his personality and his spectacular prediction of Neptune solidified faith in the existence of Vulcan
In his book "The Hunt for Vulcan", MIT science writing professor Thomas Levenson tackles one of the most central questions in all of science - what do you do when a fact of nature disagrees with your theory? In this particular case the fact of nature was an anomaly in the orbit of Mercury around the sun. The theory was Newton's successful theory of gravitation, which had reigned supreme for two hundred years in explaining the motion of everything from rocks to the moon. Levenson’s book looks at this question through the lens of an important case study. His writing is clear, often elegant and impressionistic, and he does a good job driving home the nature of science as a human activity, with all its triumphs and follies.

The physical entity invoked to explain the anomalies in Mercury's orbit - a small planet close to the sun which would usually be too small, and too lost in the sun's glare, to be seen - was called Vulcan. The idea was that Vulcan's gravitational tug on Mercury would cause its orbit to stray from the expected path. The hypothesis had much merit to it, since it was similar theorizing about the anomalies in the predicted orbit of Uranus that had resulted in the discovery of Neptune. The man who proposed both Neptune and Vulcan was Urbain Jean Joseph Le Verrier, the most important French astronomer of his day and one of the most important of the 19th century. The successful prediction of Neptune and its dazzlingly swift observational validation were a resounding tribute both to Le Verrier’s acumen and to Newton’s understanding of the universe. Not surprisingly, Le Verrier's prediction of Vulcan was taken seriously.

The book recounts how, partly because of the past successes of Newton's theories and partly because of the force of Le Verrier’s personality, astronomers spent the next half century and more unsuccessfully looking for Vulcan. The searchers and spectators included a host of well-known astronomers and amateurs, Thomas Edison among them. The search was peppered by expeditions to exotic places like Rhodesia and Wyoming. Occasionally the newspapers would ridicule Vulcan-chasers, but no one could disprove the planet's existence conclusively. This fact raises an important point: as far as scientific theories go Vulcan was a good theory since it was testable, but because its existence really strained the limits of astronomical technique as it existed at the time, it did not really satisfy the criteria for being a cleanly falsifiable theory. This gave the Vulcan hypothesis enough wiggle room for people to get away with explaining away the lack of observation as bad technique or faulty equipment.

As Levenson describes in the latter half of the book, the culmination of the hunt for Vulcan came in the early part of the twentieth century with Einstein’s theory of relativity, which did away with Vulcan for good. Levenson spends a good deal of time on Einstein's background and his mathematical preparation; there's a lucid description of the special theory of relativity. Vulcan was almost an afterthought in Einstein's intellectual development, but when he realized that his own theory could explain Mercury's anomalous orbit as an effect of the curvature of spacetime, the realization left him feeling as if "something had snapped inside him". When finished, his general theory of relativity demonstrated one of the most fascinating features of scientific discoveries – sometimes tiny anomalies in observation point not just to the reworking of an existing theory but to a complete overhaul of our understanding of nature. In this case the dramatic change was an appreciation of gravity not as a force but as a curvature of spacetime itself.
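
For context (this is the standard textbook result, not something quoted from the book), the extra perihelion advance that general relativity predicts for each orbit is

$$ \Delta\phi = \frac{6\pi G M_\odot}{c^2\, a\,(1 - e^2)} $$

where $M_\odot$ is the mass of the sun and $a$ and $e$ are the semi-major axis and eccentricity of Mercury's orbit; plugging in the numbers gives about 43 arcseconds per century, precisely the residual that had resisted Newtonian explanation.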


It is also instructive to apply lessons from Vulcan to my own fields of drug discovery and biochemistry. Often when a drug does not work it seems convenient to invoke the existence of hitherto unobserved entities (specific proteins, artifacts, side products from organic reactions etc.) to explain the anomalies or failures. Vulcan tells us that while it is prudent to look for these entities experimentally, it's also worth giving a thought to whether the anomalies might instead be explained by tweaks - or in rare cases significant overhauls - of existing theories of biological signaling or drug action. This might especially be true in the case of neurological disorders like Alzheimer's disease where the causes are ill-understood and the underlying theories (the amyloid hypothesis for instance) are constantly being subjected to revision.

Levenson’s book is a tribute to how science actually works as opposed to how it's thought to work. It's also a good instruction manual for how science works when experiment disagrees with theory. In such cases the theory can then be slightly amended, radically amended or replaced. In Vulcan’s case Newtonian gravity was not really replaced, but the amendment required was so drastic that it led to a new epoch in our view of our cosmos. The story of Vulcan is a story for our scientific times.