2015 Nobel Prize predictions


The nice thing about Nobel Prizes is that predicting them gets easier every year, simply because most of the people you pick don't win and automatically carry over as candidates for the next year (note, however, that I said "easier to predict", not "easier to predict correctly").

Having said that, there is a Bayesian quality to the predictions since the previous year's prize does compel you to tweak your priors, even if ever so slightly. Last year's award was for a biophysical instrumental technique so that probably rules out similar awards this year. 
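To make that Bayesian tweak concrete, here's a toy sketch in Python - the numbers are pure illustration, not my actual estimates - showing how observing last year's instrumental-technique prize shifts a prior over broad prize categories.

```python
# A toy illustration of the "Bayesian tweak" described above. All numbers
# are invented for illustration, not actual estimates.

# Prior probabilities for broad categories of this year's chemistry prize.
priors = {"instrumental technique": 0.30,
          "specific discovery": 0.50,
          "lifetime achievement": 0.20}

# Assumed likelihood of each category winning this year, given that last
# year's prize went to a technique (the committee rarely repeats itself).
likelihood_given_last_year = {"instrumental technique": 0.3,
                              "specific discovery": 1.0,
                              "lifetime achievement": 1.0}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnormalized = {k: priors[k] * likelihood_given_last_year[k] for k in priors}
total = sum(unnormalized.values())
posterior = {k: v / total for k, v in unnormalized.items()}

for category, p in posterior.items():
    print(f"{category}: {p:.2f}")   # technique prizes get discounted
```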

This year I have decided to separate the prizes into lifetime achievement awards and specific discoveries. There have been fewer of the former in Nobel history, and I have only three in mind myself, although the ones that do stand out are no lightweights - for instance, R B Woodward, E J Corey, Linus Pauling and Martin Karplus were all essentially lifetime achievement awardees. If you had to place a bet, though, statistically speaking you would bet on specific discoveries, since there have been many more of these. So here goes:

Lifetime achievement awards

Harry Gray and Steve Lippard: For their pioneering and foundational work in the field of bioinorganic chemistry; work which has illuminated the workings of an untold number of enzymatic and biological processes, including electron transfer.

Stuart Schreiber and Peter Schultz: For their founding of the field of modern chemical genetics and their impact on the various ramifications of this field in chemistry, biology and medicine.

Robert Langer for his extensive contributions to drug delivery: Much of what Langer does is actually chemistry, but his practical impact has been on medicine so a prize for him would lie more squarely in medicine. It's clear though that he deserves some kind of lifetime recognition.

Specific awards

John Goodenough and Stanley Whittingham for lithium-ion batteries: This has been on my list for a long time. Very few science-based innovations have revolutionized our basic standard of living the way lithium-ion batteries have. However, prizes for devices have been few, with the charge-coupled device (CCD) and the integrated circuit being exceptions. More importantly, a device prize was given out just last year in physics (for blue light-emitting diodes), so the Bayesian argument stated above makes it a bit unlikely for another device-based invention to win this year.

Franz-Ulrich Hartl and Arthur Horwich for their discovery of chaperones: This is clearly a discovery which has had a huge impact on our understanding of both basic biological processes and their therapeutic relevance. However, as often happens with the chemistry prize, this one could also go to medicine.

Alexander Pines for solid-state NMR: Another technique that has clearly come of age. Using our Bayesian ruler again though, last year's chemistry prize went to an instrumental technique (super-resolution fluorescence microscopy), so it seems unlikely that the same type of prize would be given out this year.

Krzysztof Matyjaszewski for atom-transfer radical polymerization, Barry Sharpless for click chemistry, Chi-Huey Wong for oligosaccharide synthesis and Marvin Caruthers for DNA synthesis: It's highly unlikely that these four gentlemen will receive any prize together, but I am grouping them under the general title of "organic and polymer synthesis" for convenience.

Matyjaszewski's name has been tossed around for a while, and while I am no expert in the field, it seems that his ATRP method has had enough of a practical and commonplace impact to be a serious contender; plus an award for polymer chemistry is long overdue. Click chemistry has also been extensively applied, although I am less certain of its industrial use compared to, say, the undoubted applications of palladium-catalyzed chemical reactions.

In the world of biopolymers, oligosaccharide synthesis has always been an important field which in my opinion has received the short end of the stick (compared to the glamorous world of proteins and nucleic acids, lipids and carbohydrates have always been the black sheep) so recognizing Wong might be a kind of redemption. On the other hand, recognizing Caruthers for DNA synthesis (perhaps along with Leroy Hood who automated the process) seems to be an obvious honor in the Age of Genomics.

The medicine prize

As is traditionally the case, several of the above discoveries and inventions could be contenders for the medicine prize. However, we have so far left out potentially the biggest contender of all.

Jennifer Doudna, Emmanuelle Charpentier and Feng Zhang for CRISPR-Cas9: I don't think there is a reasonable soul who thinks CRISPR-Cas9 does not deserve a Nobel Prize. In terms of revolutionary impact and ubiquitous use it almost certainly belongs on the same shelf that houses PCR and Sanger sequencing.

There are two questions I have about it though. The first is whether an award for it would still be premature. While there is no doubt as to the broad applicability of CRISPR, it also seems to me that it's rather hard right now to apply it with complete confidence to a wide variety of systems. I haven't seen numbers describing the percentage of times that CRISPR works reliably, and one would think that those kinds of statistics would be important for anyone wanting to reach an informed decision on the matter (I would be happy to have someone point me to such numbers). While that infamous Chinese embryo study that made the headlines recently was quite flawed, it also exposed the problems with efficacy and specificity that still bedevil CRISPR (problems analogous to the efficacy and off-target problems that bedevil drugs). My personal take is that we might have to wait just a few more years before the technique becomes robust and reliable enough to thoroughly cross over from the realm of possibility into that of reality.

The second question I have about it is the whole patent controversy. Generally speaking, Nobel Prizes try to steer clear of controversy, and one would think that the Nobel committee would be especially averse to sullying its hands with a commercial one. The lack of a clear assignment of priority now being played out in the courts not only tarnishes the intellectual purity of the discovery, but on a more practical level it also makes the decision to award the prize to all three major contenders (Doudna, Charpentier and Zhang) difficult. Hopefully, as would be fitting for a good novel, the allure of a Nobel Prize will make the three protagonists settle their differences over a few beers. But that could still take some time.

The bottom line in my mind: CRISPR definitely deserves a prize, and its past results and tremendous future potential may very well tip the balance this year, but the lack of robust public vindication of the method, together with the patent controversy, could make the recognition seem premature and delay the actual award.

Craig Venter, Francis Collins, Eric Lander and others for genomics and sequencing: The split here may be pretty hard and they might have to rope in a few consortiums, but as incomplete and even misleading as the sequencing of the human genome might have been, there is little doubt that it was a signal scientific achievement deserving of a Nobel Prize.

Alec Jeffreys for DNA fingerprinting and assorted applications: Alec Jeffreys is another perpetual favorite on the list and one whose invention has had a huge societal impact.

Karl Deisseroth, Ed Boyden and others for optogenetics: Optogenetics is another invention that will almost certainly get a prize; its methodology is fascinating and its potential applications for neuroscience are amazing. But its validation seems even more incomplete to me than CRISPR's, so it would be rather stunning if they got the prize this year. (On a side note: I am probably among the minority who think that the 2006 prize for RNA interference was also awarded too early.)

Ronald Evans for nuclear receptors: It would be odd if a major class of proteins and therapeutic drug targets went unrecognized. I worked a bit on nuclear receptors myself during a postdoc and appreciate the amazing complexity and importance of their signaling roles.

Bert Vogelstein, Robert Weinberg and others for cancer genes: This again seems like a no-brainer to me. Several medicine prizes have been awarded for cancer genetics, so this certainly wouldn't be a novel idea, and it's also clear that Vogelstein and Weinberg have done more than almost anyone else in identifying rogue cancer genes and their key roles in health and disease.

A brief note on the physics prize: There is no doubt in my mind that the Nobel committee needs to give the prize this year to the ATLAS and CMS collaborations at the LHC, which discovered the Higgs boson. A prize for them would emphasize several things: it would put experiment at the center of this important scientific discovery (there would have been no 2013 Nobel Prize without the LHC), and it would herald a new and necessary tradition of awarding the prize to teams rather than individuals, reflecting the reality of contemporary science.

So that's it from my side. Let the games commence!

Note: Reader artqtarks has a list, with a thoughtful analysis of potential prizes for CRISPR and tumorigenic genes.

3 Quarks Daily annual science writing prize

I am honored and frankly a bit stunned to hear that my post on the "fundamental philosophical dilemma" of chemistry was awarded first place in the annual 3 Quarks Daily science writing prize contest. I am deeply thankful to the editors of the site, to those who nominated the post and to Prof. Nick Lane who judged the final nine entries.

I am enormously gratified that my entry was selected by Nick Lane, biochemist and science writer extraordinaire, whose latest book "The Vital Question" proposes a novel theory of the origins of life based on the genesis and evolution of the energy-generating apparatus in cells. I have long admired Nick's books both for their originality and their clarity, so I was especially thrilled that he picked my post. And I am stunned because my entry was part of a roster that included some very fine writing indeed by writers whose work I have respected for a long time, so I didn't expect my own entry to win. Congratulations especially to my fellow prizewinners, Aatish Bhatia and Nadia Drake, for their incisive and highly readable pieces. The list of finalists is also very much worth taking a look at.

Here's what Nick had to say about the post and about his own views on writing:

When I read a blog, I'm not really looking for a beautiful piece of writing, or stunning visuals, or links to amazing videos, even though these things make a great post. I'm looking for a personal point of view, usually from someone with a particular vantage point, whether scientific or journalistic. I'm looking for something that I couldn't find so easily in the mainstream media, grounded in personal experience, and more idiosyncratic than most magazines would allow you to get away with. (That's one of the things I like about writing books too.)

I don't really know where to draw the line between a blog and a news story, or a feature article, or even a short story. Some of the finalists here did not really write blog posts at all, in my view, but achieved a higher calling, works of art in their own right. So with all that in mind, here goes:

The winner is Ashutosh (Ash) Jogalekar. I loved this post. It is personal and authoritative, and grows from what starts out as a quirky irritation in the day job into a profound commentary on the limits of the controlled experiment in chemistry, stemming from fundamental physics. Ash begins with the different interactions between atoms in molecules – electrical charges, hydrophobic interactions and the rest – and shows them to be different aspects of the same fundamental electromagnetic force, making it impossible to achieve any independent changes in a molecule. He finishes with a lovely twist, justifying the thrill of experiment as the only way to explore design in chemistry, making the subject endlessly fascinating. 

Ash's writing style is crisp and clean, admirably precise without being patronising, even in the use of italics, which can easily feel preachy. Not here. I followed the links for genuine interest, and there was a great discussion in the comments pointing out an equivalent problem in biology, in the use of knockout models. In an age when science is being pushed towards supposedly managed outcomes, this is a refreshing reminder of why it can't be planned.

Many thanks to Nick for his very thoughtful appraisal and appreciation of the piece. I am especially thrilled to see writing about fundamental chemistry - a topic that doesn't usually get much billing in the popular science literature - being recognized. The limits of controlled experiments in science are a topic close to my heart, and I'm glad to see that others appreciate the problem too.

Occasionally we'll hear drumbeats about the "end of science" which proclaim the complete ascendancy of knowledge in one field or another. This kind of proclamation ignores the simple fact that progress in different fields is not created equal (as a commenter on a blog recently put it, "we can land a probe on a comet at 17,000 miles/hr but we still don't know if butter's bad for you"). But I think it's also important to recognize the more fundamental, epistemological limits to knowledge that arise from the kinds of constraints on measuring basic atomic and molecular interactions that I was talking about in my post.

But crucially, while some may see such limits as heralding "the end", I see them as heralding endless opportunities and fascinating discoveries which will forever remain open-ended. If that's not the opposite of "the end" I don't know what is.

Agonists and antagonists, and why drug discovery is hard (again)

Here's a valuable and comprehensive review on one of the most glaring pieces of evidence for why drug discovery is so hard - the fact that very small structural changes in molecules can lead to drastic changes in their biological activity.

I particularly like this review because it's absolutely chock-full of examples of small structural changes which not only change the magnitude of binding of a small molecule to a receptor protein but invert its functional activity - that is, change an agonist into an antagonist. And the receptor family in this case is the GPCRs, so it's not like we're talking about a minor rash of examples in a scientifically insignificant and financially paltry domain.

Here's one of my favorite examples from the dozens showcased in the piece; in this case a set of small molecules targeting the nociceptin receptor, which is being studied as a potential target for treating heart failure and depression.



At first sight it's puzzling how such similar groups as a cyclooctyl, a cyclooctyl-methyl and a phenyl can lead to complete inversion of activity, from 200 nM agonism to 1.5 nM antagonism. Thinking in 3D, however, makes the observation a bit more comprehensible. The N-cyclooctyl group is going to have a very well-defined conformational preference, pointing pretty much straight in one direction. The cyclooctyl-methyl, on the other hand, is going to have much more conformational freedom. It's also going to occupy much more space than the phenyl group.
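To make the conformational argument concrete, here's a minimal RDKit sketch (assuming the rdkit package is installed) that embeds conformer ensembles for two simplified stand-in amines and compares their conformational spread. The SMILES are illustrative stand-ins, not the actual nociceptin ligands from the review.

```python
# A minimal sketch of the conformational argument above, using RDKit.
# The molecules are simplified stand-ins, not the review's actual ligands.
from rdkit import Chem
from rdkit.Chem import AllChem

def conformer_spread(smiles, n_confs=50):
    """Embed multiple conformers and return the maximum pairwise heavy-atom
    RMSD as a crude measure of conformational freedom."""
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    params = AllChem.ETKDGv3()
    params.randomSeed = 42
    cids = list(AllChem.EmbedMultipleConfs(mol, numConfs=n_confs, params=params))
    AllChem.MMFFOptimizeMoleculeConfs(mol)
    mol = Chem.RemoveHs(mol)  # conformer coordinates are retained
    rmsds = [AllChem.GetConformerRMS(mol, i, j)
             for i in cids for j in cids if i < j]
    return max(rmsds)

# N-cyclooctyl piperidine vs. N-(cyclooctylmethyl) piperidine: the extra
# methylene should show up as a larger spread of conformers.
for name, smi in [("N-cyclooctyl", "C1CCCCCCC1N1CCCCC1"),
                  ("N-cyclooctylmethyl", "C1CCCCCCC1CN1CCCCC1")]:
    print(name, round(conformer_spread(smi), 2), "A max RMSD")
```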

Now this kind of retrospective analysis may well be the explanation, but very few medicinal chemists would have been able to predict this complete inversion in activity at the outset (as a medicinal chemist recently quipped at a Gordon Conference, "We medicinal chemists are very good at predicting the past.")

Here's a more diabolical example that would have been even harder to predict. In this case the target concerns two subtypes of the mGlu (metabotropic glutamate) receptor, which is involved, among other things, in Parkinson's disease and anxiety.



In this case, not only does that 'magic' methyl group and its precise stereochemistry change an antagonist into an agonist, but it even changes the agonism/antagonism mix at two separate receptor subtypes. Try explaining that, even in retrospect.

These kinds of well-known activity cliffs reinforce the non-linear nature of medicinal chemistry, a quality that is essentially emergent since it arises from the interaction of small molecules with a highly non-linear biological system. Neither experimental chemistry nor computational modeling currently allows us to predict activity cliffs like these, because neither has the sensitivity to resolve the small differences involved.
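For readers curious how such cliffs are quantified, below is a small sketch using the structure-activity landscape index (SALI) of Guha and Van Drie, which blows up when very similar molecules show very different activities. The SMILES and pKi values are illustrative stand-ins loosely inspired by the numbers above, not the review's actual data.

```python
# A sketch of how activity cliffs are often quantified, via the SALI index:
# SALI = |delta pKi| / (1 - Tanimoto similarity). Illustrative data only.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

compounds = {
    # name: (SMILES, pKi) -- hypothetical analogs differing in one group
    "cyclooctyl analog": ("C1CCCCCCC1N1CCC(O)CC1", 6.7),  # ~200 nM
    "phenyl analog":     ("c1ccccc1N1CCC(O)CC1",   8.8),  # ~1.5 nM
}

fps = {name: AllChem.GetMorganFingerprintAsBitVect(
           Chem.MolFromSmiles(smi), 2, nBits=2048)
       for name, (smi, _) in compounds.items()}

(n1, (_, p1)), (n2, (_, p2)) = compounds.items()
sim = DataStructs.TanimotoSimilarity(fps[n1], fps[n2])
# SALI grows without bound as near-identical pairs diverge in activity.
sali = abs(p1 - p2) / (1 - sim)
print(f"Tanimoto: {sim:.2f}  |dpKi|: {abs(p1 - p2):.1f}  SALI: {sali:.1f}")
```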

It's things like these which I always think really need to be communicated to laypeople to impress upon them the staggering difficulty of drug design - most of the time we are simply ignorant when it comes to designing molecules like the ones above with any kind of predictive power, and we can only find out about their fickle properties in retrospect. Perhaps then we would get less heat from the public for why we sometimes have to spend (and charge) so much money for our products.

Prof. Erland Stevens's edX med chem class

Just as he did previously, Prof. Erland Stevens of Davidson College is teaching a comprehensive med chem edX class that would be useful for anyone wanting to dive into the field. The course attracted 25,000 students last year. Here's the syllabus and list of topics - it certainly looks like something I would enthusiastically check out if I were starting out in the field, or even if I had been in it for a while.

Information on the course:

  • The course: D001x Medicinal Chemistry
  • Host platform: edX
  • Date: starts 10/5/15, but enrollment is open until 12/11/15
  • Length: 8 weeks
  • Cost: free
  • Time: 6-8 h/wk for all content, 1 h/wk to peruse the video lectures
  • Prerequisites: chemistry (organic functional groups, line-angle structures), biology (parts of a cell), math (logarithms & exponents)
  • Topics (~1 wk per topic):
      • Drug approval (early drugs, regulatory process, cost, IP concerns)
      • Enzymes & receptors (inhibition, Ki, ligand types, Kd)
      • Pharmacokinetics (Vd, clearance, compartment models; see the toy example after this list)
      • Metabolism (phase I & II reactions, CYP450 isoforms, prodrugs)
      • Molecular diversity (binding, drug space, combi chem, libraries)
      • Lead discovery (screening, filtering hits, drug metrics)
      • Lead optimization (functional group replacements, isosteres, peptidomimetics)
      • Case studies on selected drug classes
  • Bonus features:
      • Interviews with pharma professionals, including scientists from Novartis (a partner on the course)
      • Virtual labs involving online tools for predicting drug-relevant activity
  • Target audience: pre-med students, graduate students, recent pharma hires, research assistants
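As a taste of the pharmacokinetics material, here's a minimal one-compartment sketch of my own (not course material) showing how volume of distribution (Vd) and clearance (CL) set the elimination rate and half-life after an IV bolus. The dose and parameter values are arbitrary illustrations.

```python
# A toy one-compartment IV-bolus model illustrating the Vd/clearance/
# half-life relationships covered in the pharmacokinetics week.
import math

def concentration(t_h, dose_mg=100.0, vd_L=50.0, cl_L_per_h=5.0):
    """Plasma concentration (mg/L) at time t for a one-compartment model:
    C(t) = (Dose/Vd) * exp(-k*t), with elimination rate k = CL/Vd."""
    k = cl_L_per_h / vd_L
    return (dose_mg / vd_L) * math.exp(-k * t_h)

k = 5.0 / 50.0                                   # 1/h
print(f"half-life: {math.log(2) / k:.1f} h")     # t1/2 = ln(2)/k = 6.9 h
for t in (0, 2, 6, 12, 24):
    print(f"t = {t:2d} h   C = {concentration(t):.2f} mg/L")
```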


The 111 Nobel Prize nominations of Robert Burns Woodward

As Nobel season dawns upon us, Stu Cantrill points me to an endlessly interesting link on the Nobel website which lists nomination information for various scientists up to 1964 (names of nominees and nominators cannot be revealed for 50 years). Since many more deserving scientists never win the prize than actually do, this list makes for especially readable material.

For instance, Carl Djerassi, who never won the prize, was nominated three times (only until 1964 though, so he was likely nominated many more times after that). In the peace category Franklin Roosevelt was nominated 5 times. And Lise Meitner was nominated 47 times across the physics and chemistry categories without winning.

The astonishing statistics are for everybody's favorite chemistry demigod, R B Woodward. Woodward was nominated a record 111 times from 1937 until 1965, when he finally won. What's even more stunning is the year of his first nomination - 1937. That can't quite be right, since Woodward was 20 years old then and about to finish his PhD at MIT. Interestingly, there is no name in front of the nomination, so this could be a mistake. But there's little doubt that nominating even the precocious Woodward at age 20 would have been premature, to say the least. (Note: Woodward famously finished both college and graduate school in a total of four years, and had to drop out for one semester for neglecting other subjects.)

The first more authentic nomination comes in 1946, when he was still only 29: this time he was nominated, along with his colleague Bill Doering, by the astronomer Harlow Shapley. The nomination was clearly for the Woodward-Doering breakthrough synthesis of quinine. After 1946 Woodward was nominated pretty much every single year by multiple people. In fact, looking at the list, what's astonishing is that he didn't win the prize until 1965.

You can have more fun looking at the list, especially searching for other famous chemists who should have gotten a Nobel Prize but never did. For instance, Gilbert Newton Lewis is widely considered to be the greatest American chemist never to have won, and he was nominated 41 times, so one wonders what exactly kept him from winning. C K Ingold, one of the fathers of physical organic chemistry, also never won despite being nominated 63 times. On the other hand, Robert Robinson, with whom Ingold enjoyed a friendly rivalry, was nominated 51 times and did win.

Another interesting fact to be gleaned from the database is the number of times a particular Nobel laureate nominated other scientists. In what is a testimony to his well-known generosity of spirit, Niels Bohr, for instance, nominated other scientists 25 times (this included multiple nominations for Lise Meitner, who unfortunately never won).

Woodward, on the other hand, nominated someone only once - Linus Pauling, in 1949. Interestingly, Woodward had applied for an instructorship at Caltech in 1942 when Pauling was the chairman of the department, but as the letter below indicates, Pauling didn't seem too interested; one wonders what the course of American (and Caltech) chemistry would have been had both Woodward and Pauling reigned over the world of chemistry from the same department.


Source: Angew. Chem. Int. Ed. 2007, 1378


In any case, the nomination website makes for very intriguing browsing, and you can play around with it for a long time. The one thing it makes clear is what we already know - that the number of outstanding Nobel-caliber scientists who will never win the prize far exceeds the number who actually do. That fact should put the nature of the prize in the right perspective.

Peter Thiel on biotechnology again: "Get rid of the randomness"

Peter Thiel has some provocative thoughts on biotechnology again, this time in an interview for Technology Review. I had a post earlier about Thiel's view of biotechnology which included some hardheaded, sensible thoughts and some more questionable skepticism. This interview projects a similar combination.

The interview is really about Thiel's investment in Stemcentrx, a company utilizing our increasing knowledge of stem-cell-driven tumor evolution to create targeted therapies. As Derek noted in his post on the company, it's an interesting approach, but there are still a lot of holes that need to be plugged before it becomes reliable. Thiel's own take on the company is unsurprisingly positive, although it's not clear why he thinks the company's approach makes it particularly Thiel-worthy (he is known to be very judicious when it comes to funding startups). He seems to think that Stemcentrx's use of human xenografts in mice is something "very unusual", but drug discovery scientists have been using human xenografts in mice for decades, so it's certainly not a novel idea. Also, as Derek says in his post, just because a drug works in human xenografts does not mean it will work in humans.

The real rub lies in Thiel's thoughts on "getting rid of randomness":

"It’s interesting that a lot of technology outfits are getting into biology. Google has announced a number of plans. You have invested in longevity research. What do you think makes actual programmers want to start programming biology?"
 “The big picture is the question of whether biological science can be transformed into an information science. Can something that seems chaotic, fractal, and generally random be transformed into something more deterministic and more controlled? 
I think of aging and maybe just mortality as random things that go wrong. The older you get, the more random things happen, the more breaks. If it’s not cancer, you could get hit by an asteroid. So on some level, technology is trying to overcome the randomness that is nature. That is a question on the level of a company. Can you get rid of randomness in building a company? But the philosophical version of the question is whether we can get rid of randomness in its entirety and overcome the randomness that I think of as the evil part of nature.”

There are a couple of thoughts I have on this. Firstly, biological science has been increasingly transformed into an information science for about fifty years now, so that's not a brand-new development. The problem is that just because it's an information science does not mean it's deterministic; Thiel of all people should know how messy, incomplete and random information can be, and unlike him I certainly don't see randomness as "evil" (Darwinian evolution, anyone?). And even when the information is available it can be pretty hard to get a handle on it. People who do cheminformatics or bioinformatics, for instance, are well aware of the kinds of false and messy correlations they can find even in information-rich datasets.
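To see how easily such false correlations arise, here's a quick numerical sketch of my own: with many descriptors and few samples, even completely random "descriptors" will correlate impressively with a completely random "activity" endpoint.

```python
# With enough descriptors and few samples, purely random features will
# correlate "impressively" with a purely random endpoint.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 50, 10_000
X = rng.normal(size=(n_samples, n_features))   # random "descriptors"
y = rng.normal(size=n_samples)                 # random "activity" endpoint

# Pearson correlation of each feature with the endpoint.
Xc = (X - X.mean(axis=0)) / X.std(axis=0)
yc = (y - y.mean()) / y.std()
corrs = Xc.T @ yc / n_samples

# Expect a "best" correlation around 0.5-0.6 despite zero real signal.
print(f"best |r| among random features: {np.abs(corrs).max():.2f}")
```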

There is no doubt that we can make headway into this problem with better algorithms and computing power, but to believe that somehow injecting enough programmers into biology will enable a quantum leap on the information analysis problem strikes me as a bit naive. Consider Thiel's thoughts on aging: aging is indeed a result of many imperceptible and perceptible random events. But the issue in addressing aging is not the lack of adequate computing approaches that would transform the randomness into predictability; it's our ignorance of the sources of that randomness that shackles our understanding in the first place.

A good analogy is provided by the random motion of water molecules in a beaker. We know the movement is random, but we also know enough about this randomness to use the laws of quantum mechanics and statistical thermodynamics to provide accurate and predictable macroscopic descriptions of it. Unlike this scenario, we don't have a complete picture of the randomness in aging (or in cancer for that matter), and we don't know what causes the randomness in the first place. The problem is not one of technology, as Thiel seems to think; it's one of basic understanding. It's one of simple ignorance.
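Here's a small numerical illustration of that point: individual molecular speeds sampled from the Maxwell-Boltzmann distribution are random, yet their average is sharply predictable from theory. (The simulation below is a minimal sketch for water vapor at 298 K; liquid water is of course more complicated, but the random-to-predictable contrast is the same.)

```python
# Individual molecular speeds are random, but their average is sharply
# predictable: a sketch using the Maxwell-Boltzmann distribution.
import numpy as np

kB = 1.380649e-23           # Boltzmann constant, J/K
T = 298.0                   # temperature, K
m = 18.015e-3 / 6.022e23    # mass of one water molecule, kg

rng = np.random.default_rng(42)
# Each velocity component is Gaussian with variance kB*T/m; the speed is
# the norm of the 3D velocity vector.
v = rng.normal(0.0, np.sqrt(kB * T / m), size=(1_000_000, 3))
speeds = np.linalg.norm(v, axis=1)

mean_theory = np.sqrt(8 * kB * T / (np.pi * m))   # analytic mean speed
print(f"simulated mean speed: {speeds.mean():.1f} m/s")
print(f"analytic mean speed:  {mean_theory:.1f} m/s")   # ~592 m/s
```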

The other peeve I have with thoughts like this is the implicit belief that somehow before the advent of programmers all the scientists working in biotechnology and drug discovery did not have either the ability or the inclination to get rid of randomness (It’s similar to the reaction we have about ‘rational drug design’ – does that mean everyone else was irrational before?). The fact of the matter is that chemists and biologists have been well aware of the randomness in biological systems for decades, and in fact one of the reasons we take random approaches (say phenotypic screening or diverse library design) to try to discover new drugs is precisely because we have studied the randomness and discovered it’s not particularly amenable to more rational approaches. In such cases the trial-and-error process which Thiel dislikes is the rational one; just consider the riches unearthed through pseudo-random directed evolution for instance.

Sadly, Thiel's thinking is not uncommon among technology entrepreneurs and is part of what Derek has called the "Andy Grove fallacy". The Andy Grove fallacy holds that the problem with drug discovery is a lack of technology and computing power; it's what some people have called "technological solutionism". But a lack of technology and a lack of basic understanding are two very different things. There is no doubt that we should use all the technology at our disposal to try to make the drug discovery process more rational and amenable to control. But we should also not believe that any particular discipline is going to transform this landscape, especially when we don't even have a good understanding of where its peaks and valleys lie.

Cryo-electron microscopy: A prime example of a tool-driven scientific revolution

Last week I again had the immense pleasure of having lunch with Freeman Dyson in Princeton. One of the myriad topics on the platter of intellectual treats on the table was the idea of scientific revolutions as tool-driven rather than idea-driven. The framework was fleshed out in detail by Harvard historian of science Peter Galison in his highly readable book "Image and Logic" and was popularized by Dyson in his own book and article. I wrote a post on that particular paradigm last year.

Since physics had profited immensely from idea-driven revolutions in the 20th century (most notably relativity and quantum theory) that were enshrined by Thomas Kuhn in his idea of paradigm shifts, it took physicists some time to appreciate how tools like the cyclotron, the cloud chamber, the CCD and the laser have played an equal part in their revolutionary history. But as I told Dyson, chemists on the other hand have absolutely no problem accepting the idea of tool-driven revolutions. Chemistry more than physics is an experimental science where first principles theories are often too complicated to put into practice. Chemists have thus benefited much more from experimental toys rather than fancy theorizing, and in no other case has the ascendancy of such toys been more prominent than in the case of x-ray crystallography and NMR spectroscopy. It's hard to overstate how much these two techniques have revolutionized not just our understanding of the world of molecules but of other domains, like biology and engineering. Last year's Nobel Prize for microscopy was likewise a fitting tribute to the supremacy of tools in chemical and biological research.

Now a new technique joins this arsenal of structural weapons, and I have little doubt that it too is going to be part of a revolution: cryo-electron microscopy. ACS has a nice article on how much the technique has advanced in the last decade and how prominently it is poised to be applied to structural problems that have been recalcitrant to the old approaches. During the last few years use of the technique has skyrocketed: as this Nature article compellingly describes, cryo-EM can acquire structures of ribosomes in weeks or months that took Nobel Prize-winning scientists years to solve. And as the article says, even this revolution has benefited from a crucial tool-within-a-tool:
Over the years, gradual progress in computational power and microscope quality has yielded higher and higher resolution structures. Up until the past few years, most cryo-EM structures clocked in at well above 10-Å resolution, about the size of an amino acid. Between 2002 and 2012, only 14 structures determined by EM crossed the 4-Å threshold, dipping a toe in high-resolution territory. But a true breakthrough came in 2012 when a new toy—the direct electron detector—opened the gates, allowing for a flood of high-resolution cryo-EM structures. In 2014 alone, 27 structures have reached sub-4-Å resolution, and scientists keep pushing the boundaries. “The direct electron detector has been the biggest game changer for the electron microscopy field,” says Melanie D. Ohi of Vanderbilt University.
The direct electron detector joins a long list of specialized instruments like the Bunsen burner, the Kirchhoff spectroscope, the scintillation counter and the Geiger counter, all of which proved to be key appendages of the larger technologies they enabled. A good counterpart to the direct electron detector would be the CCD, which revolutionized tools like cameras and telescopes and whose inventors were awarded a Nobel Prize a few years ago.

Cryo-EM will almost certainly make a big splash in the world of drug discovery in the upcoming decades. However, better experimental tools alone won't suffice for this revolution. It's sometimes underappreciated how important software and hardware were in enabling the routine application of NMR and crystallography to tough biological problems in drug design. The advent of cryo-EM similarly opens up attractive opportunities for the development of specialized software and hardware that can handle the often fuzzy, low-resolution images coming out of cryo-EM. This will be especially important for multiprotein assemblies like modular enzymes and ribosomes, where multiple solutions exist for a given dataset and where computational model building will be paramount.
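As a toy illustration of that multiple-solutions problem (my own sketch, not actual cryo-EM software), consider two different one-dimensional "structures" blurred to low resolution: the resulting maps become nearly indistinguishable, which is exactly why model building needs computational help.

```python
# Two different 1D "structures" (atom positions) blurred to low resolution
# give almost identical density profiles; high resolution tells them apart.
import numpy as np

x = np.linspace(0, 10, 1000)

def density(atom_positions, width):
    """Sum of Gaussians: each 'atom' blurred by the map resolution."""
    return sum(np.exp(-((x - p) ** 2) / (2 * width ** 2))
               for p in atom_positions)

model_a = [4.0, 5.0, 6.0]    # three evenly spaced atoms
model_b = [4.1, 5.0, 5.9]    # a subtly different arrangement

for width, label in [(1.5, "low resolution "), (0.2, "high resolution")]:
    da, db = density(model_a, width), density(model_b, width)
    diff = np.abs(da - db).max() / da.max()
    print(f"{label}: max relative map difference = {diff:.3f}")
```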

As the technique proliferates, so will the data it unearths. Someone will then have to make sense of all this data, and scientific and financial rewards will await those who have the courage and foresight to found companies making specialized software for analyzing cryo-EM images. The founding of these companies, with their custom hardware and software, will itself be a paean to the tool-driven revolution in science, in this case one led by the computer. One tool both piggybacking on and enabling another tool: that's how science progresses.

Added: Here's a nice application of cryo-EM in resolving crystals of the protein alpha-synuclein that are essentially 'invisible'.


Why foxes are more important than hedgehogs in drug discovery

A while ago I wrote about hedgehogs and foxes in chemistry. Hedgehogs love to drill deep into one topic; foxes love to leap over interdisciplinary fences. Both creatures have been key to the progress of science: Einstein was a hedgehog, von Neumann was a fox; Darwin was a hedgehog, Crick was a fox. Many chemists are both hedgehogs and foxes, but my reading of the recent history of chemistry told me that in chemistry hedgehogs have had the greater impact.

I was reminded of hedgehogs and foxes again, this time in the context of drug discovery, as I was reading psychologist Philip Tetlock's book on prediction, "Superforecasting: The Art and Science of Prediction". The book describes a giant prediction project called the Good Judgment Project which Tetlock created to test people's powers of intuiting the future. The project recruited thousands of people, both renowned experts and amateurs, and asked them to make important predictions related to finance, politics and human society in general. Participants were asked to predict the fates of particular currencies or countries, to forecast terrorist attacks, or to anticipate the major effects of climate change.

The results caused quite a stir because several amateurs handily beat experts at their own game. But another conclusion, and one that caught my eye (it also echoed Tetlock's earlier twenty-year study of expert judgment, published in 2005), was that the best predictors were all foxes rather than hedgehogs. That's because hedgehogs tend to fall in love with one big idea and try to apply it to most problems, while foxes are more modest and try several ideas, keeping the ones that work. Hedgehogs may be deep, but foxes are nimble, and it seems that when it comes to complicated predictions flexibility is more important than depth.
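Tetlock's projects scored forecasters with Brier scores: the mean squared difference between stated probabilities and what actually happened (lower is better). Here's a toy sketch, with invented forecasts and outcomes, of why a well-calibrated, hedging fox beats an overconfident hedgehog on this metric.

```python
# A toy Brier-score comparison: an overconfident "hedgehog" vs. a hedged,
# better-calibrated "fox". All forecasts and outcomes are invented.
import numpy as np

def brier(forecasts, outcomes):
    """Mean squared difference between forecast probabilities (0-1)
    and binary outcomes (0 or 1); lower is better."""
    f, o = np.asarray(forecasts), np.asarray(outcomes)
    return np.mean((f - o) ** 2)

outcomes = [1, 0, 1, 1, 0, 0, 1, 0]                      # what happened
hedgehog = [0.95, 0.9, 0.95, 0.1, 0.9, 0.05, 0.9, 0.9]   # confident, often wrong
fox      = [0.7, 0.3, 0.65, 0.6, 0.35, 0.3, 0.7, 0.4]    # hedged, calibrated

print(f"hedgehog Brier score: {brier(hedgehog, outcomes):.3f}")  # ~0.41
print(f"fox Brier score:      {brier(fox, outcomes):.3f}")       # ~0.12
```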

It struck me that this is exactly how it should be in drug discovery, and how it often isn’t. The reason that foxes outperformed hedgehogs in the Good Judgment Project was because multifactorial, complex systems like the climate and the stock market are seldom amenable to a select few Big Ideas. What they benefit most from instead are modest but useful ideas from several different approaches – rational thinking, seat-of-your-pants intuition, rock-solid experience – that are cobbled together to produce a workable recipe.

Now drug discovery should be the poster boy for a multifactorial, complex system whose fruits are ripe for plucking by foxes, but the history of the field actually demonstrates how hedgehogs have often held it back, sometimes by their own designs but more often through blind acceptance by their followers. Consider all the Big Ideas that have permeated drug discovery in the past three decades: structure-based design, molecular modeling, HTS, combinatorial chemistry, DOS, the Rule of 5. In each of these cases the relevant Big Idea usually came from a single hedgehog or a few hedgehogs. Sometimes these individuals themselves touted the idea beyond its modest but real reach, but often it was other scientists or business leaders who hyped the idea way beyond its domain of applicability (remember that "Designing Drugs Without Chemicals" headline from 1981?). The Big Idea escaped from its ground reality of bounded rationality and entered the stratospheric domain of Six Sigma efficiency and business mantras. The same goes for many other smaller but still influential ideas in the history of drug discovery, including druglike metrics, molecular dynamics and animal models.

None of these hedgehog-born ideas is useless, and in some cases they can be very useful, but the key thing to realize is that they were the products of hedgehog minds which sought to revolutionize the process of drug discovery and fold its problems under one big tent. Ironically, the very reason these ideas have found use is that they were largely adopted by foxes, who used them as part of a flexible and versatile toolkit. Equally ironically, while the hedgehogs have undoubtedly contributed to an understanding of the field, often that understanding has come from foxes who illuminated the limitations of the hedgehogs' ideas (the Ro5 is a notorious example).

It's also interesting to note that being a fox or a hedgehog can sometimes be a matter of time: as our understanding of complex systems grows, we could transition a bit from foxdom to hedgehogdom. Consider all the attempts to correlate toxicity - a very complex and heterogeneous variable - with simple metrics like molecular lipophilicity, flexibility, the number of sp3 carbons and so on. These attempts represent classic hedgehog-like strategies to usher a complex variable into the tent of a simple rule; no wonder they have been found woefully lacking in utility. However, as our understanding of the determinants of toxicity grows over the next few decades, we may reach a tipping point where we can come up with general rules that are at least somewhat broadly applicable. The times may then be better for hedgehogs to make general predictions.

Consider the big names in the history of drug discovery – James Black, Gertrude Elion, Paul Janssen, Albert Hofmann, Paul Ehrlich, Robert Koch – and you will find that they were either exclusively or mostly foxes. What they stressed was plurality of judgment and the exploration of alternative ideas. The same quality applies to some leading drug hunters whom I personally know. Most of these individuals don't have one Big Thought attached to their name; rather, they conceived several ideas that led to an integrated approach to drug discovery. They will not go down in history as originators of rules and principles, but they will go down in history as actual drug inventors with their names on several key patents.

My guess is that the reason these people worked so well as foxes rather than hedgehogs is that they realized how complex biological systems are and how ignorant we all are when it comes to perturbing these systems with small molecules. I think the feature that most distinguishes a fox mindset from a hedgehog mindset is an appreciation of the complexity of the system under consideration. When you know how incomplete your understanding of biological targets or the properties of successful drugs is, you quickly realize how hard it would be to apply a hedgehog-like, one-size-fits-all approach to any problem in drug discovery. It is far better to diversify your portfolio of tools and ideas and improvise. The best drug hunters are foxes not always because they want to be, but because they realize that they have to be. While some of them understand the time-dependence of hedgehogish predictions noted above, they are also sober enough to realize that we aren't there yet.

As the future of drug discovery unfolds before our eyes, I do not see hedgehogs eclipsing foxes in relevance. Both are necessary and hedgehogs will continue to come up with big ideas, but these ideas can likely mislead the field if foxes do not reveal their strengths and limitations. Hedgehogs and foxes can both co-exist and thrive in drug discovery, but only if they let each other explore their favorite haunts.