Physics and the search for fundamental laws: Is physics turning into biology?

The Standard Model of particle physics contains many fundamental constants whose values may be a result of pure accident
Physics, unlike biology or geology, was not considered a historical science until now. Physicists have prided themselves on being able to derive the vast bulk of phenomena in the universe from first principles. Biology - and chemistry, as a matter of fact - is different. Chance and contingency play an important role in the evolution of chemical and biological phenomena, and scientists in these disciplines have realized that beyond a point it is futile to ask questions about origins and first principles.

The overriding "fundamental law" in biology is that of evolution by natural selection. But while the law is fundamental on a macro scale, its details at a micro level don't lend themselves to real explanation in terms of origins. For instance the bacterial flagellum is a product of accident and time, a key structure involved in locomotion, feeding and flight that resulted from gene sharing, recombination and selective survival of certain species spread over billions of years. While one can speculate, it is impossible to know for certain all the details that led to the evolution of this marvelous molecular motor. Thus biologists have accepted history and accident as integral parts of their fundamental laws.

Physics was different until now. Almost everything in the universe could be explained in terms of fundamental laws like Einstein's theory of general relativity or the laws of quantum mechanics. If you wanted to explain the shape and structure of a galaxy you could seek the explanation in the precise motion of the various particles governed by the laws of gravity. If you wanted to explain why water is H2O and not H3O you could seek the explanation in the principles of quantum mechanics that in turn dictate the laws of chemical bonding.

But beyond this wildly successful level of explanation seems to lie an impasse. The problem arises when you try to explain one of the most profound facts of nature: that the fundamental constants of nature are fine-tuned to a fault, and that the universe as we know it would not exist if these constants had even slightly different values. For instance, it is impossible to imagine life existing had the strength of the strong force binding nuclei together been even a few percent smaller or larger. Scientists have struggled for decades to explain why other numbers, like the value of Planck's constant or the electron's mass, are what they are. In fact this is one of the biggest gaps in the Standard Model of particle physics, an otherwise spectacularly successful paradigm that nonetheless contains arbitrary constants that defy an origin story. It seems now that physicists are giving up trying to explain conundrums like these, or at least giving up trying to do it the way they always have.

Two books that I read recently drove the point home to me. One was Max Tegmark's "Our Mathematical Universe". In the book Tegmark takes us on a dizzying journey through modern physics that ends in the fanciful realm of multiple universes. It's hardly the first book to do so. Multiple universes have been invoked to explain many problems in physics, but their most common use is to try to explain (or explain away, as some seem to rightly think) the problem of the fundamental constants. The purported "solution" sounds simple: we can stop wondering why the fundamental constants have the precise values that they do if we assume the existence of a potentially infinite number of universes, each of which has a different set of values for the constants. Our universe just happens to have the right combination that allows sentient life to arise and ask such questions in the first place.

Leaving aside the fact that multiple universes still belong to speculation and science fiction rather than science, what is really striking about them to me is that they have finally transported physics into the realm of biology. What physicists are essentially saying is that there have been several universes in the past and there are likely several universes in the present, and our unique universe with its specific combination of fundamental constants is an accident. The multiple universe argument closely parallels the argument establishing evolution by natural selection as the centerpiece of biology: there have been several species with several genotypic and phenotypic features, and our own human species is a result of contingency and historical accident. This is not so much an explanation as an admission of incomplete knowledge, but biologists are fine with this since it does not violate any natural law and is still part of a satisfying overarching theory.

It looks like with the postulation of multiple universes physicists too have stepped over from the land of fundamental explanatory laws into the land of historical accident and contingency. This is a radical shift in the way physics has been done until now and a rather painful blow to the physicist's view of nature. One might also say that biology is having the last laugh. In the sixteenth and seventeenth centuries, when biology was still doing the messy job of cataloging data and trying to make sense of the mess, physics was marching on, discovering precise regularities and generalities in nature's offerings. Since then several sciences including biology and economics have suffered from "physics envy". But now, ironically, it looks like physics' successful run at predicting everything from first principles might have become a victim of its own success. It may be the case that physicists' spectacular findings themselves have illuminated their own limitations. In his book "The Accidental Universe", physicist and writer Alan Lightman puts it thus:

"Dramatic developments in cosmological findings and thought have led some of the world's premier physicists to propose that our universe is only one of an enormous number of universes, with wildly varying properties, and that some of the most basic features of our particular universe are mere accidents - random throws of the cosmic dice. In which case, there is no hope of ever explaining these features in terms of fundamental causes and principles."

Lightman also quotes the doyen of physicists, Steven Weinberg, who recognizes this watershed in the history of his discipline:

"We now find ourselves at a historic fork in the road we travel to understand the laws of nature. If the multiverse idea is correct, the style of fundamental physics will be radically changed."

Although Weinberg does not say this, what's depressing about the multiverse is that its existence might always remain postulated and never proven. This is an even worse scenario, because the only thing that a scientist hates more than an unpleasant answer to a question is no answer at all. It's not inaccurate to say that many physicists - and especially those like Weinberg who have been part of the spectacular revolution in physics during the 60s and 70s - are distressed by this fact.

The metamorphosis of physics into a historical science means that many of the facts that have troubled the field's foremost practitioners may be a product of chance and fundamentally unexplainable in terms of more basic laws. I must emphasize that this is not some kind of "end of physics" scenario that I am imagining here (unlike my colleague John Horgan); there are still plenty of very challenging questions dealing with the application of the fundamental laws that will keep physicists occupied for decades. Foremost among these may be the conundrum of emergent phenomena which themselves are very fundamental in fields like neuroscience and economics. I am also not implying that physicists should simply give up looking for fundamental laws. But their methodological take on finding these laws may have to change. As far as the deep question of why certain building blocks of the universe seem to exist within very narrow constraints is concerned, physicists might simply have to accept that there is no true causal explanation for the fact.

String theory, which at one point was considered a promising strategy to unify quantum mechanics and gravity and possibly explain such problems, has been severely floundering for the last few years. Bereft of testable predictions, some of its proponents have now resorted to thinking of the theory as a good example of "non-empirical" science. This kind of thinking seems to me to be an alarming cop-out. As the physicist Carlo Rovelli explained in a recent talk, many predictions, like antimatter and the bending of starlight by gravity, started out as non-empirical predictions based on pure thought. But they were not regarded as a legitimate part of science until they were experimentally verified, and the predictions themselves were testable in a very well defined manner. To regard a theoretical framework as a perpetually legitimate part of science on "non-empirical" grounds would be contrary to all the science that we know and love.

Are physicists justified in feeling despondent because they seem to be tapping the bottom of the barrel in their search for fundamental laws and because their efforts to explain these laws at a truly basic level have not borne fruit? I don't think so. Biologists have known about contingency and accident ever since Darwin wrote his great book, but this has neither made them emotionally unstable nor kept them from making spectacular discoveries in their discipline. Just because a system of laws might have a historical origin based on accident does not mean that there are no great truths about the system still waiting to be discovered. But more importantly, perhaps physicists need to embrace contingency as a law as fundamental as any other. Biologists know this; in fact they know that there would be no evolution in the first place without contingency, and they know that it is thanks to historical accident that they get to study the incredibly rich cornucopia of living structures that the earth has presented to them.

The best thing would be for physicists to realize that just because the ultimate laws of their discipline might have a fundamentally accidental origin, it does not mean that the manifestations of those laws are any less important or useful. The most important page they should lift out of the biologists' playbook is very simple; when ideas about a field evolve, it is best for the practitioners of the field to evolve too.

This is a revised and updated version of an older post.

2016 Nobel Prize picks

The nice thing about Nobel Prizes is that it gets easier to predict them every year, simply because most of the people you nominate don't win and automatically become candidates for the next year (note however that I said "easier to predict", not "easier to correctly predict"). That's why every year you can carry over much of the same list of likely candidates as before.

Having said that, there is a Bayesian quality to the predictions since the previous year's prize does compel you to tweak your priors, even if ever so slightly. Recent developments and a better understanding of scientific history also might make you add or subtract from your choices. For instance, last year the chemistry prize was awarded for the discovery of DNA repair systems, so that might make it a bit less likely for a biological discovery to be recognized this year. 
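
For what it's worth, the "tweak your priors" remark can be made concrete with a toy Bayes-style update. The categories, prior weights and "no repeat" likelihoods below are entirely invented for illustration; this is a sketch of the reasoning, not a serious forecasting model.

```python
# Toy sketch of "tweaking your priors": if last year's chemistry prize went
# to a biological discovery, slightly downweight that category this year.
# All numbers here are made up purely for illustration.

priors = {"biological": 0.5, "organic/polymer": 0.3, "physical/materials": 0.2}

# Assumed "no repeats" likelihood: a category that just won is a bit less
# likely to win again the following year.
likelihood = {"biological": 0.6, "organic/polymer": 1.0, "physical/materials": 1.0}

unnormalized = {k: priors[k] * likelihood[k] for k in priors}
total = sum(unnormalized.values())
posteriors = {k: v / total for k, v in unnormalized.items()}

print(posteriors)
# "biological" drops from 0.50 to ~0.38; the other categories rise slightly,
# which is all the "ever so slight" tweak in the text amounts to.
```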


This time as in previous years, I have decided to separate the prizes into lifetime achievement awards and specific discoveries. There have been fewer of the former in Nobel history and I have only three in mind myself, although the ones that do stand out are no lightweights - for instance R B Woodward, E J Corey, Linus Pauling and Martin Karplus were all lifetime achievement awardees. If you had to place a bet though, then statistically speaking you would bet on specific discoveries since there have been many more of these. So here goes:

Lifetime achievement awards

Harry Gray and Steve Lippard: For their pioneering and foundational work in the field of bioinorganic chemistry; work which has illuminated the workings of an untold number of enzymatic and biological processes, including electron transfer.

Stuart Schreiber and Peter Schultz: For their founding of the field of modern chemical genetics and their impact on the various ramifications of this field in chemistry, biology and medicine. Schreiber has already received the Wolf Prize this year so that improves his chances for the Nobel. The only glitch with this kind of recognition is that a lot of people contributed to the founding of chemical biology in the 1980s and 90s, so it might be a bit controversial to single out Schreiber and Schultz. The Thomson Reuters website has a Schreiber prediction, but for rapamycin and mTOR; in my opinion that contribution, while noteworthy, would be too narrow and probably not sufficient for a prize.

Robert Langer for his extensive contributions to drug delivery: Much of what Langer does is actually chemistry, but his practical impact has been on medicine so a prize for him would lie more squarely in medicine. It's clear though that he deserves some kind of lifetime recognition.

Specific awards

John Goodenough and Stanley Whittingham for lithium-ion batteries: This has been on my list for a very long time. Very few science-based innovations have revolutionized our basic standard of living the way lithium-ion batteries have. Generally speaking, recognition for the invention of specific devices has been rather rare, with the charge-coupled device (CCD) and the integrated circuit being exceptions. More importantly, a device prize was given out just two years ago in physics (for blue light-emitting diodes), so based on the Bayesian argument stated above it might be a bit unlikely for another device-based invention to win this year. Nonetheless, a prize for lithium-ion batteries, more than most other inventions, would conform to the line in Alfred Nobel's will about the discovery that has "conferred the greatest benefits on mankind."

Franz-Ulrich Hartl and Arthur Horwich for their discovery of chaperones: This is clearly a discovery which has had a huge impact on our understanding of both basic biological processes and their therapeutic relevance. However, as often happens with the chemistry prize, this one could also go to medicine.


Krzysztof Matyjaszewski for atom-transfer radical polymerization, Barry Sharpless for click chemistry, Chi-Huey Wong for oligosaccharide synthesis and Marvin Caruthers for DNA synthesis: It's highly unlikely that these four gentlemen will receive any prize together, but I am grouping them under the general title of "organic and polymer synthesis" for convenience.

Matyjaszewski's name has been tossed around for a while, and while I am no expert in the field it seems that his ATRP method has had enough of a practical and commonplace impact to be a serious contender; plus an award for polymer chemistry is long overdue. Click chemistry has also been extensively applied, although I am less certain of its industrial use compared to, say, the undoubted applications of palladium-catalyzed chemical reactions.

In the world of biopolymers, oligosaccharide synthesis has always been an important field which in my opinion has received the short end of the stick (compared to the glamorous world of proteins and nucleic acids, lipids and carbohydrates have always been the black sheep) so recognizing Wong might be a kind of redemption. On the other hand, recognizing Caruthers for DNA synthesis (perhaps along with Leroy Hood who automated the process) seems to be an obvious honor in the Age of Genomics. Hood has also been highlighted in the public eye recently through a new biography.

The medicine prize

As is traditionally the case, several of the above discoveries and inventions can be contenders for the medicine prize. However, we have so far left out potentially the biggest contender of all.

Jennifer Doudna, Emmanuelle Charpentier and Feng Zhang for CRISPR-Cas9: I don't think there is a reasonable soul who thinks CRISPR-Cas9 does not deserve a Nobel Prize. In terms of revolutionary impact and ubiquitous use it almost certainly belongs on the same shelf that houses PCR and Sanger sequencing.

There are two sets of questions I have about it though. Firstly, whether an award for it would still be rather premature. While there is no doubt as to the broad applicability of CRISPR, it also seems to me that it's rather hard right now to apply it with complete confidence to a wide variety of systems. I haven't seen numbers describing the percentage of times that CRISPR works reliably, and one would think that kind of statistic would be important for anyone wanting to reach an informed decision on the matter (I would be happy to have someone point me to such numbers). While that infamous Chinese embryo study that made the headlines last year was quite flawed, it also exposed the problems with efficacy and specificity that still bedevil CRISPR (problems analogous to the twin hurdles of efficacy and safety that drugs face). My personal take is that we might have to wait just a few more years before the technique becomes robust and reliable enough to pass from the realm of possibility into the realm of reality.

The second question I have about it is the whole patent controversy, which if anything seems to have become even more acrimonious since last year - in fact it has reached worthy-of-optioning-movie-rights levels of acrimony. Generally speaking Nobel Prizes try to steer clear of controversy, and one would think that the Nobel committee would be especially averse to sullying its hands with a commercial one. The lack of clear assignment of priority being played out in the courts right now not only tarnishes the intellectual purity of the discovery, but on a more practical level it also makes the decision to award the prize to all three major contenders (Doudna, Charpentier and Zhang) difficult. Hopefully, as would be fitting for a good novel, the allure of a Nobel Prize will make the three protagonists settle their differences over a few beers. But that could still take some time. A different way to look at the whole issue, however, is to say that the Nobel committee could actually heal the divisions by awarding the prize to the trio. Either way, a recognition of CRISPR is likely going to be one of the most publicly debated prizes of recent times.


It's also interesting to note that the folks at Thomson Reuters have cited only George Church and Feng Zhang in their picks. A prize for that duo alone, leaving out the Berkeley scientists, would likely ignite a bitter controversy that might make the fight over the MRI prize pale in comparison. I don't think any CRISPR recognition that cites only Church and Zhang would be good for the reputation of either the Nobel Prize or science as a whole.

The bottom line in my mind: CRISPR definitely deserves a prize, and its past results and tremendous future potential may very well tip the balance this year; but the lack of robust, public vindication of the method, together with the patent controversy, could make the recognition seem premature and delay the actual award.

Craig Venter, Francis Collins, Eric Lander, Leroy Hood and others for genomics and sequencing: The split here may be pretty hard and they might have to rope in a few consortiums, but as incomplete and even misleading as the sequencing of the human genome might have been, there is little doubt that it was a signal scientific achievement deserving of a Nobel Prize.

Alec Jeffreys for DNA fingerprinting and assorted applications: Alec Jeffreys is another perpetual favorite on the list and one whose invention has had a huge societal impact. I have never really understood why he has never been recognized.

Karl Deisseroth, Ed Boyden and others for optogenetics: Optogenetics is another invention that will almost certainly get a prize; its methodology is fascinating and its potential applications for neuroscience are amazing. But its validation seems even more incomplete to me than CRISPR's, so it would be rather stunning if they got the prize this year. (On a side note: I am probably among the minority who think that the prize for RNA interference, awarded less than a decade after the discovery, was also quite premature.)

Ronald Evans for nuclear receptors: It would be odd if a major class of proteins and therapeutic drug targets went unrecognized.

Bert Vogelstein, Robert Weinberg and others for cancer genes: This again seems like a no-brainer to me. Several medicine prizes have been awarded for cancer genetics, so this certainly wouldn't be a novel idea, and it's also clear that Vogelstein and Weinberg have done more than almost anyone else in identifying rogue cancer genes and their key roles in health and disease.


The Thomson Reuters team has cancer immunotherapy on their shortlist, which I think is another good choice.

The physics prize: There is no doubt in my mind that this year's Nobel Prize in physics will be awarded to Kip Thorne, Rainer Weiss and Ron Drever for their decades-long dogged leadership and work that culminated in this year's breakthrough discovery of gravitational waves by the LIGO observatory. It's a shoo-in. Drever sadly suffers from dementia, but that certainly should not preclude the Nobel committee from honoring him. For those wanting to know more about the kind of dedication and personality clashes these three men brought to the project, Janna Levin's book which came out earlier this year is a great source.


There is another recognition that I have always thought has been due: a recognition of the ATLAS-CMS collaboration at the LHC which discovered the Higgs boson. A prize for them would emphasize two things: it would put experiment at the center of this important scientific discovery (there would have been no 2013 Nobel Prize without the LHC) and it would herald a new and necessary tradition of awarding the prize to teams rather than individuals, reflecting the reality of contemporary science.

The Thomson Reuters team predicts a chaos theory prize for the inventors of the OGY method. However it seems to me that a Nobel Prize for chaos theory and the study of dynamical systems - a field that surprisingly has not been recognized yet - should include any number of pioneers featured, for instance, in James Gleick's amazing book "Chaos", most notably Mitchell Feigenbaum.

So that's it from my side. Let the bloodbath games commence!


Other predictions: Thomson Reuters, artkqtarks, Everyday Scientist, In the Pipeline

Why drug discovery is hard, part 2: On the unpredictable and complicated origins of drug species

The Brazilian pit viper, the fascinating and unlikely source of the venom that led to captopril, one of the world's bestselling blood pressure-lowering drugs (Source: venomstodrugs)
Drugs from the forest. Drugs from the sea. Drugs from every conceivable natural source ranging from fungi to frogs; that's much of the history of drug discovery. In the first part of this series we looked at the initial steps in drug discovery, from identifying key target proteins involved in a disease to trying to make sure that these proteins can be "drugged" with a small molecule. But let's say you have now identified a few promising proteins malfunctioning in Alzheimer's disease. How do you even begin to discover drugs that modulate these proteins? Or more generally, where do new drugs come from?

In fact the ancients knew quite well where drugs came from. At a time when even the rudiments of science were barely known, our South American ancestors were cheerfully chewing on coca leaves for stimulation and energy, and the Greek physician Hippocrates was prescribing a bitter powder made from willow bark that could ease fevers and aches. Similar narratives permeate the traditions of cultures around the world, with Chinese and Indian traditions playing an especially prominent role in the history of medicine. The Bolivians had no idea that coca leaves contained cocaine, and Hippocrates had no idea that willow bark contained salicin, the source of the salicylic acid from which aspirin was later made; but they all knew that there was something in plant and animal extracts that could mitigate a variety of ills.

Fast forward two thousand years and the picture has not changed. Nature continues to be an enormously valuable source of new drugs, or at least of compounds that can be turned into new drugs. In fact about 50% of all medicines on the market are derived from what are called natural products. The term "natural product" means something quite different to a chemist than what it might to a layman. For laymen the phrase might conjure up images of bottles of herbal medicines lined up on the shelves of the nearest health food store, but for chemists it refers to molecules produced by living organisms for a variety of functions, from feeding to mating to defense against predators. These molecules are also called secondary metabolites to distinguish them from "primary" metabolites, namely nucleic acids, lipids, amino acids and sugars which are essential for life's functioning.

Quite fascinatingly, it turns out that these natural products can be astonishingly potent at serving a variety of functions sought by drug discoverers, most commonly killing other cells. This is perhaps not surprising, considering that defense against predators was a constant preoccupation for most organisms throughout evolution. Fortunately for us, it also turns out that some of the most potent leads for new drugs are found in the lowliest of organisms - bacteria, fungi and protists - since these organisms in particular are constantly engaged in chemical warfare with multiple other organisms in their environment (including human beings). It's again no surprise that the most successful antibiotics, such as penicillin and streptomycin, have come from molds and bacteria.

And thus it comes to pass that some of the most important drugs on the market have been derived from the humblest of creatures, to which we owe millions of lives. Three examples will suffice. Captopril is a bestselling blood pressure medicine that was originally derived from the venom of the Brazilian pit viper. Taxol, one of the world's bestselling anticancer drugs, comes from the bark of the Pacific yew tree. And rapamycin, a significant immunosuppressant that allows millions to survive organ transplants without violently rejecting them, came from a soil bacterium on distant Easter Island (Rapa Nui). The potential for discovering new drugs from natural organisms is as compelling an argument for preserving our biodiversity as any; for instance, marine sponges are an unusually fertile source of promising new drugs, and their homes in coral reefs therefore need to be preserved.

Nature has thus forged an intimate link with human life and death through its production of novel drugs. And the fact that a fungus which evolved eons before humans appeared, and had absolutely no contact with the human race, produces a molecule that saves a young girl's life is as poignant and fascinating a fact as any I have encountered in all of science.

However it's easy to point out these molecules and even easier to overlook how hard it is to discover them. In the post-war boom in pharmaceutical research, industrial, academic and government labs sent out legions of scientists to scoop up samples of soil and bring them back to their labs. Sometimes a glass vial would be thrust into the hands of a scientist who was leaving for a relaxing vacation in an exotic locale, just in case. The collected samples were then screened against different kinds of cells. Any kind of effect on the cells was carefully noted, and compounds which seemed to inhibit cell growth were selected (it's interesting to note that one can also discover drugs this way, simply by throwing molecules at cells without any knowledge of the protein target; more on this later). But the success rate from such screening was quite low, and only a fraction of the extracts or molecules screened showed promising activity. In fact one of the reasons we are facing such a big threat from antibiotic-resistant bacteria is that it's been extremely hard to find novel antibiotics using the traditional methods that worked so well before. For every Taxol or rapamycin or erythromycin, there were hundreds of thousands of extracts that did not deliver anything. Hundreds of millions of dollars were sunk into the collection, purification and testing of these natural sources. Most were either too weak or too indiscriminately toxic, completely killing any cells they encountered.

But nature, as inventive as its evolutionary processes are, cannot supply us with all the drugs we need. This is where the ingenuity of chemists comes in. The major triumph of chemistry, one which makes it unique among all sciences, is its ability to discover, design and synthesize molecules that don't exist in nature. Chemists can either tinker with existing molecules or create new ones from scratch by arranging atoms in specific configurations, a feature that makes chemistry an art akin to architecture. The employment of chemistry in the service of medicine has been one of the most successful scientific stories in history. Not only has it allowed us to discover molecules that never existed before, but it has also helped us preserve biodiversity; for instance, once chemists figured out how to cheaply make the anticancer drug Taxol from abundant starting materials, they no longer had to depend on the loss of thousands of yew trees for delivering the drug.

Over the years chemists have finely honed their capacity to make millions of compounds rapidly, efficiently and in pure form. They can test these millions of compounds to see whether any of them bind to protein targets, a feat aided to no small extent by automation and robotics. This process is called high-throughput screening (HTS), which as the name indicates can test millions of compounds against proteins or cells in short order. When it became fashionable in the 80s and 90s, HTS was regarded as something revolutionary; after all, if you ended up testing tens of millions of molecules against any disease or protein, surely you would find at least dozens of promising leads. Sadly that dream has not come true, and while HTS is valuable it has turned up very few leads that were then optimized into drugs. As with natural product screening, HTS hit rates can be quite low (about 0.5%).
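
To put that number in perspective, here is a back-of-the-envelope sketch of what a 0.5% hit rate implies. The library size and the downstream attrition figures are illustrative assumptions, not industry statistics:

```python
# Back-of-the-envelope arithmetic for a high-throughput screen.
library_size = 2_000_000     # compounds in a large screening deck (assumed)
primary_hit_rate = 0.005     # ~0.5% show activity in the assay, per the text
confirmation_rate = 0.3      # fraction of primary hits that reproduce (assumed)
tractable_fraction = 0.05    # fraction of confirmed hits worth optimizing (assumed)

primary_hits = library_size * primary_hit_rate
confirmed_hits = primary_hits * confirmation_rate
starting_points = confirmed_hits * tractable_fraction

print(f"{primary_hits:,.0f} primary hits -> {confirmed_hits:,.0f} confirmed "
      f"-> {starting_points:,.0f} tractable starting points")
# 10,000 -> 3,000 -> 150 starting points out of two million compounds screened,
# and not one of these is yet a drug.
```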

Why is this the case? Well, one simple reason why HTS has not worked out is that the theoretical number of druglike molecules is astronomically large - by some estimates around 10^60 - so even a library consisting of millions of compounds barely scratches the surface of this unimaginably vast space. Another reason is a fact mentioned in the previous post, namely that nature had very little evolutionary incentive to create proteins that would bind to synthetic drug molecules appearing on the scene billions of years later. Yet another reason is that you may be testing the wrong molecules and trying to put a square peg in a round hole; in that case quantity will never trump quality. Also, as the previous post made clear, not all proteins are created equal, and it can therefore be much harder to find hits for certain proteins than for others. What is worse is that it's often very difficult to gauge this success rate beforehand. Thus, as often turns out to be the case in science, nature is a very harsh taskmaster, yielding her secrets with great reluctance. If you want to find a small molecule that binds to an important protein, you are going to have to work for it.

A third strategy for finding drugs comes from studying the physiological life of whatever protein you are interested in. Most proteins already bind to a small molecule in the body which modulates their activity, for instance a hormone, neurotransmitter, peptide or some other signaling molecule. For example, proteins in the brain work their magic by binding to small molecules like dopamine and serotonin. These molecules are very potent, but they lack the properties that would allow one to transform them into a neat white pill that can be taken once a day. But they at least provide a springboard; why start from scratch when nature has already given you clues? Thus, any of these small molecules can be a starting point for modification, a scaffold whose structure can be tweaked by imaginative chemists. Sadly this strategy also often fails, for the simple reason that changing the structure of a molecule even a tiny bit can completely change its properties. In mathematical terms, the optimization landscape of the drug's structure-activity relationships (SAR) is rugged. This is a general property of molecules that plagues every chemist, and drug discoverers especially have a hard time circumventing it. It's one of the key reasons why drug discovery is unpredictable.
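
A toy model makes it clear why a rugged landscape stymies optimization. The made-up one-dimensional function below stands in for real SAR, with a smooth trend overlaid by high-frequency "activity cliffs"; greedy optimization from different starting points gets stranded on different local peaks. Real SAR landscapes are vastly higher-dimensional, which only makes matters worse.

```python
import math
import random

# Toy 1-D "SAR landscape": a smooth trend (the Gaussian) overlaid with
# high-frequency ruggedness (the sine term), standing in for activity cliffs.
def activity(x):
    return math.exp(-(x - 5.0) ** 2) + 0.4 * math.sin(20 * x)

# Greedy optimization: take a small "structural" step only if it improves activity.
def greedy_hill_climb(x, step=0.05, iters=200):
    for _ in range(iters):
        x = max((x - step, x, x + step), key=activity)
    return x

random.seed(0)
for start in (random.uniform(0, 10) for _ in range(5)):
    peak = greedy_hill_climb(start)
    print(f"start {start:.2f} -> stuck at {peak:.2f} (activity {activity(peak):.2f})")
# Each run gets stranded on a nearby local peak; few ever find the
# global optimum near x = 5.
```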

A great example of how difficult the process is concerns molecules called enkephalins. Enkephalins are naturally occurring peptide molecules which produce the same potent painkilling effects as morphine, and yet in spite of decades of trying, nobody has been able to turn them into drugs. In addition, not everything that comes from nature or HTS or physiological molecules is a perfectly formed drug that falls into your lap from heaven, and that leads us into a discussion of another important reason why drug discovery is hard. Almost every single time, irrespective of the starting source, a promising newly discovered molecule is what's called a hit. A hit is to a drug what a freshly minted West Point graduate is to a four-star general. It is weak and unpolished in its interactions with biological systems and it can often be too toxic. It may be poorly absorbed or it may hang around in the body for much too long. It may be impossible to press it into a pill and it may be impossible to simply get it into cells in the first place. In short, it may have a lot of potential but very few real credentials. With some effort a hit may be turned into a lead, which is a better version of a hit but still inadequate. Turning a hit or lead into a drug occupies the minds of the best scientists in academia and industry, and even after decades of effort there is no general formula which will achieve this. But not for lack of trying.

In 1997 a scientist named Chris Lipinski came up with a set of four rules - now famous as the "rule of five" - that would apparently allow us to predict whether any given molecule would be druglike or not. Each rule deals with a fundamental property of molecules and brackets it within numerical limits: for instance the number of atoms that form hydrogen bonds (which tether a drug to its protein target), the hydrophobicity or "greasiness" of the molecule (which allows it to get through greasy cell membranes), and the molecular weight, which is a rough measure of size. After analyzing hundreds of drugs, Lipinski came up with ranges for these properties that he thought were featured in the world's bestselling medicines. Since then, "Lipinski's rules" have been used by many leading pharmaceutical companies to constrain the kind of features that their screening collections should have, presumably to bias the chances of success. And yet there is still no proof that adopting these rules has actually led to a higher drug discovery rate. The other strike against Lipinski's rules is that almost none of the natural products described above obey them; and yet natural products like rapamycin are potent and widely prescribed drugs. The bottom line is that in spite of some guidelines, we still don't know what truly makes a molecule druglike, and therefore we don't know how to fine-tune the properties of a hit and turn it into a drug. There are too many exceptions that fall through the sieve constructed by any general rules, and learning about these exceptions is a big goal of drug discovery scientists.
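
The four cutoffs are simple enough to express in a few lines of code. This is a minimal sketch: in practice the property values would be computed by a cheminformatics toolkit such as RDKit, and the approximate values used below for aspirin and rapamycin are for illustration only.

```python
# Minimal sketch of a Lipinski "rule of five" filter. The cutoffs are the
# published ones (each a multiple of five); property values passed in would,
# in practice, come from a cheminformatics toolkit.
def passes_rule_of_five(mol_weight, logp, h_bond_donors, h_bond_acceptors):
    """Allow at most one violation, per Lipinski's original formulation."""
    violations = [
        mol_weight > 500,        # size
        logp > 5,                # hydrophobicity ("greasiness")
        h_bond_donors > 5,       # hydrogen-bond donors
        h_bond_acceptors > 10,   # hydrogen-bond acceptors
    ]
    return sum(violations) <= 1

# Aspirin (MW ~180, logP ~1.2, 1 donor, ~4 acceptors) passes easily:
print(passes_rule_of_five(180, 1.2, 1, 4))    # True
# Rapamycin (MW ~914, ~13 acceptors) fails with two violations, despite being
# a potent, widely prescribed drug - exactly the kind of exception noted above:
print(passes_rule_of_five(914, 4.3, 3, 13))   # False
```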

This concludes the second part of the series. Drug discovery is hard because it is very rare to discover a molecule - either natural or artificial - that is a hit against a protein target implicated in a disease. Hit rates from screening even millions of molecules can be very low. And even if you discover such a hit it can be very difficult to turn it into a drug, partly because our definitions of what a drug truly is are still hazy. In the next part we will consider something very simple that a drug has to do, namely get into a cell, and we will find that predicting even such a simple process is fraught with complications.

Summary: Why is drug discovery hard?

Reason 1: Drugs work by modulating the function of proteins. It’s difficult to find out exactly which proteins are involved in a disease. Even if these proteins are found, it is difficult then to know if their activity can be controlled by a small molecule drug.

Reason 2: Since nature has not really optimized its proteins for binding to drugs, it is very difficult to find a hit for a protein even after searching through millions of molecules, either natural or artificial. And even when a hit is discovered, we don't know for sure how to turn it into a drug with favorable properties.

Let's embrace this new era of private science funding

This week, Mark Zuckerberg and his wife Priscilla Chan announced an initiative to give $3 billion for biomedical research, starting with a research hub at UCSF. The tagline accompanying the funding, in which they promised to "cure, prevent or manage all diseases in our children's lifetime", drew scorn from scientists, but the bigger message of their philanthropy should not be lost on us. In an era where public funding of science has been steadily flagging and more and more researchers are finding it depressingly hard not just to fund their own research but even to contemplate pursuing basic research in the first place, initiatives like the Chan-Zuckerberg gift are not just helpful but essential. Even if the research arising from the funds does not cure a single disease, by recruiting influential researchers and giving them money to explore their favorite areas in basic science, there is little doubt that the funding will have an impact on biomedical research. The most important discoveries arising from this initiative will be ones that cannot be anticipated, and that's what makes it especially important.

Private funding of science ideally should not raise any eyebrows; it only does so because most of us are young enough to have lived in an era of mainly publicly funded research. In fact private funding of science has a glorious history. Just to quote some specific examples: William Keck was an oil magnate who made very significant contributions to astronomy by funding the Keck Telescopes. Gordon Moore was a computer magnate who made significant contributions to information technology and proposed Moore's Law; along with the Keck Foundation, his organization has been funding the BICEP experiments. And Fred Kavli started the Kavli Foundation a few years ago, which has backed everything from the BRAIN Initiative to astrophysics to nanoscience professorships at research universities.

A few years ago, science writer William Broad wrote an article in the New York Times describing the private funding of research. Broad talked about how a variety of billionaire entrepreneurs ranging from the Moores (Intel) to Larry Ellison and his wife (Oracle) to Paul Allen (Microsoft) have spent hundreds of millions of dollars in the last two decades to fund a variety of scientific endeavors ranging from groundbreaking astrophysics to nanoscience. For these billionaires a few million dollars is not too much, but for a single scientific project hinging on the vicissitudes of government funding it can be a true lifeline. The article talked about how science will come to rely on such private funding in the near future in the absence of government support, and personally I think this funding is going to do a very good job of stepping in where the government has failed.

The public does not often realize that for most of its history, science was in fact privately funded. During the early scientific revolution in Europe, important research often came from what we might call self-philanthropy, exemplified by rich men like Henry Cavendish and Antoine Lavoisier who essentially did science as a hobby and made discoveries that are now part of textbook science. The Cavendish family fortune funded the famed Cavendish Laboratory in Cambridge, where J. J. Thomson discovered the electron and Watson and Crick discovered the structure of DNA. This trend continued for much of the nineteenth and early twentieth centuries. The current era of reliance on government grants from the NIH, the NSF and other agencies is essentially a post-World War 2 phenomenon.

Before the war a lot of very important science, as well as science education, was funded by trust funds set up by rich businessmen. During the 1920s, when the center of physics research was in Europe, the Rockefeller and Guggenheim foundations gave postdoctoral fellowships to brilliant young scientists like Linus Pauling, Robert Oppenheimer and Isidor Rabi to travel to Europe and study with masters like Bohr, Born and Sommerfeld. It was these fellowships that crucially allowed young American physicists to ferry their knowledge of the new quantum mechanics back to America. It was partly this largesse that allowed Oppenheimer to create a school of physics that equaled the great European centers.

Perhaps nobody exemplified the bond between philanthropy and research better than Ernest Lawrence, who was as much an astute businessman as an accomplished experimental physicist. Lawrence came up with his breakthrough idea for the cyclotron in the early 30s, but it was the support of rich California businessmen - several of whom he regularly took on tours of his Radiation Lab at Berkeley - that allowed him to secure funding for cyclotrons of increasing size and power. It was Lawrence's cyclotrons that allowed physicists to probe the inner structure of the nucleus, construct theories explaining this structure and enrich uranium for the atomic bombs used during the war. There were other notable examples of philanthropic science funding during the 30s, the most prominent being the Institute for Advanced Study at Princeton, which was bankrolled by the Bamberger brother-sister duo.

As the New York Times article notes, during the last three decades private funding has expanded to include cutting-edge biological and earth sciences research. The Allen Institute for Brain Science in Seattle, for example, is making a lot of headway in understanding neuronal connectivity and how it gives rise to thoughts and feelings; just two months ago they released a treasure trove of data about visual processing in the mouse cortex, an announcement that gave some academic scientists heartache. The research funded by twenty-first century billionaires ranges across the spectrum and comes from a mixture of curiosity about the world and personal interest. The personal interest is especially reflected in funding for rare and neurodegenerative diseases; even the richest people in the world know that they are not immune from cancer and Alzheimer's disease so it's in their own best interests to fund research in such areas. For instance Larry Page of Google has a speaking problem while Sergey Brin carries a gene that predisposes him to Parkinson's; no wonder Page is interested in a new institute for aging research.

However the benefits that accrue from such research will aid everyone, not just the very rich. For instance the Cystic Fibrosis Foundation, which was funded by well-to-do individuals whose children were stricken by the devastating disease, gave about $70 million to Vertex Pharmaceuticals. The infusion partly allowed Vertex to create Kalydeco, the first truly breakthrough drug for a disease where there were essentially no options before. The drug is not cheap, but there is no doubt that it has completely changed people's lives.

But the billionaires are not just funding disease research. As Broad puts it in his article, they are funding almost every imaginable field, from astronomy to paleontology:

"They have mounted a private war on disease, with new protocols that break down walls between academia and industry to turn basic discoveries into effective treatments. They have rekindled traditions of scientific exploration by financing hunts for dinosaur bones and giant sea creatures. They are even beginning to challenge Washington in the costly game of big science, with innovative ships, undersea craft and giant telescopes — as well as the first private mission to deep space."

That part about challenging government funding really puts this development in perspective. It's hardly news that government support for basic science has steadily declined during the last decade, and a sclerotic Congress that seems perpetually unable to agree on anything means that the problem will endure for a long time. As Francis Collins notes in the article, 2013 saw an all time funding low in NIH grants, and it’s not gotten much better since then. In the face of such increasing withdrawal by the government from basic scientific research, it can only be good news that someone else is stepping up to the plate. Angels step in sometimes where fools fear to tread. And in an age when it is increasingly hard for this country to be proud of its public funding it can at least be proud of its private funding; no other country can claim to showcase this magnitude of science philanthropy.

There has been some negative reaction to news like this. The responses come mostly from those who think science is being "privatized" and that these large infusions of cash will fund only trendy research. Some negative reactions have also come from those who find it hard to keep their disapproval of what they see as certain billionaires' insidious political machinations - those of the Koch brothers for instance - separate from their support of science. There is also a legitimate concern that at least some of this funding will go to diseases affecting rich, white people rather than minorities.

I have three responses to this criticism. Firstly, funding trendy research is still better than funding no research at all; in addition, many of the diseases being explored by this funding affect all of us and not just rich people - for instance, the Chan-Zuckerberg funding is geared toward infectious diseases. Secondly, we need to keep raw cash for political manipulation separate from raw cash for genuinely important research. Thirdly, believing that these billionaires somehow "control" the science they fund strikes me as a little paranoid. For instance, a stone's throw from where I live sits the Broad Institute, a $700 million endeavor funded by Eli Broad. The Broad Institute is affiliated with both Harvard and MIT. During the last decade it has made important contributions to basic research including genomics and chemical biology. Its scientists have published in basic research journals and have shared their data. The place has largely functioned like an academic institution, with no billionaire around to micromanage the scientists' everyday work. The same goes for other institutes like the Allen Institute. Unlike some critics, I don't see the levers of these institutes being routinely pulled by their benefactors at all. The Bambergers never told Einstein what to do.

Ultimately I am both a human being and a scientist, so I don't care as much about where science funding comes from as about whether it benefits our understanding of life and the universe and leads to advances improving the quality of life of our fellow human beings. From everything that I have read, private funding for science during the last two decades has eminently achieved both these goals. I hope it endures.

Note: Derek has some optimistic thoughts on the topic.

This is a revised and updated version of an older post.

Why drugs are expensive: Follow the science (and not just the money)

The RAS protein: A famously 'undruggable' drug target
Two years ago, on another blog, I started writing a series of posts on the scientific challenges inherent in drug discovery. Recently a handful of miscreants have again put a glaring spotlight on pharmaceutical research, for reasons that have nothing to do with pharmaceutical research per se. So I decided to resurrect, revise and add to that old series of posts.

Often you will hear people talking about why drugs are expensive: it's the greedy pharmaceutical companies, the patent system, the government, capitalism itself. All these factors can contribute to increasing the price of a drug, but one very important factor often gets entirely overlooked in all the public discussion: Drugs are expensive because the science of drug discovery is hard.

And it's just getting harder. In fact, purely on a scientific level, taking a drug all the way from initial discovery to market is considered harder than putting a man on the moon, and there's more than a shred of truth to this contention. It can easily take ten years and, by some estimates, up to $5 billion to discover a new breakthrough drug, or even one that's marginally better than an existing one. In this series of posts I will try to highlight some of the purely scientific challenges inherent in the discovery of new medicines. I am hoping that this will help laymen appreciate a little better why the cost of drugs has less to do with profit and power and a lot to do with scientific ignorance and difficulty; as one leading scientist I know quips, "Drugs are not expensive because we are evil, they are expensive because we are stupid."

I could actually end this post right here by stating one simple, predominant reason why the science of drug discovery is so tortuous: biology is complex and ill understood. Biological systems are highly non-linear and emergent; large changes can result from small perturbations to them. The second reason is that we are dealing with a classic multiple-variable optimization problem, except that the variables to be optimized pertain to a very poorly understood, complex and unpredictable system.

The longer answer will be more interesting. The simple fact is that we still haven't figured out the workings of biological systems - the human body in this case - to an extent that allows us to rationally and predictably modify, mitigate or cure their ills using small organic molecules. That we have been able to do so to an unusually successful degree is a tribute to both human ingenuity and plain good luck. But there's still a very long way to go; there are very few diseases for which we truly have drugs that are almost always efficacious and have little to no side effects. Most important diseases like cancer and Alzheimer's disease are still problems looking for solutions, and even after a century of extraordinary progress in biology, chemistry and medicine the solutions seem a long way off.

That, then, is the simple reason why discovering drugs is hard: because we are dealing with a biological system that still escapes our rational understanding, and because we are trying to engineer a molecule that perturbs this incompletely understood system, all while being forced to satisfy multiple constraints. It's like being asked to throw a ball at a black cat in the dark, with the added constraint that one of your feet is bound to the top of your head. And you only get three tries.

The rest of this series will be devoted to a discussion of specific factors that contribute to this lack of understanding. The goal is not to list all possible complications in the discovery of new drugs but to give a flavor of the major challenges that drug scientists face at a very fundamental level, several of which have been known for decades and are still not circumvented. It is to drive home the fact that even on a basic level we are still groping in the dark. This forces us to often simply try out things, to navigate our way through the process by clumsy Edisonian trial and error, to try a hundred approaches before finding one that succeeds. If there is one word that can be applied to the whole drug discovery and development process, it is "attrition": roughly 95% of candidates entering clinical trials fail, most commonly because of lack of efficacy, followed by unacceptable side effects. Plain ignorance and attrition play a huge role in discovering new drugs (or rather, in not discovering them). Most of the stuff that drug researchers try fails, and the stuff that works then has to absorb all the sunk costs inherent in these failures. No wonder drug discovery is expensive.
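
The arithmetic of attrition is worth spelling out. The per-stage survival rates below are illustrative assumptions chosen to land near the ~95% overall failure rate mentioned above; actual published rates vary considerably by disease area:

```python
# Compounding attrition: per-stage survival rates (assumed, for illustration).
stage_survival = {
    "Phase 1":  0.60,
    "Phase 2":  0.25,
    "Phase 3":  0.45,
    "Approval": 0.85,
}

overall = 1.0
for stage, p in stage_survival.items():
    overall *= p
    print(f"after {stage:<9} {overall:.1%} of entrants remain")

print(f"Candidates entering the clinic per approved drug: ~{1 / overall:.0f}")
# With these numbers ~5.7% survive overall - i.e. roughly 95% fail - and every
# approved drug carries the sunk costs of the ~17 that didn't make it.
```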

To appreciate the scientific challenges confronting drug designers it is important to understand at a basic level how drugs work. Almost all drugs are what are called "small molecules", that is, small organic compounds like aspirin containing a few dozen atoms connected by bonds, often arranged in rings like benzene rings. Recently there has been a resurgence of "large molecules" like antibodies, but for now we will focus on small molecules. For the purposes of this discussion the mechanism behind small molecule drugs can be boiled down to one statement: drugs work by binding to proteins and modifying their function. As we all know, proteins are the workhorses of living systems, performing every single important function from growth and repair to response and attack. No matter what physiological process you are talking about, from launching an immune response to thinking creative thoughts, there will be a handful of key proteins involved in mediating that response. Not surprisingly, a fine balance between the activities of the hundreds of thousands of proteins in the body is necessary for good health and, equally unsurprisingly, any breakdown in this balance causes disease. While in theory the entire network of proteins in the human body gets perturbed in some way or another in a disease state (a problem that is of great interest to the discipline of systems biology), fortunately for drug designers it's usually a handful of key proteins that are the major rogue players in any disease.
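
That binding event can be described quantitatively with the textbook occupancy relation: the fraction of target protein bound is [D] / ([D] + Kd), where Kd is the dissociation constant of the drug-protein complex. A minimal sketch, with generic illustrative numbers rather than data for any particular drug:

```python
# Fraction of target protein occupied by drug at equilibrium:
# occupancy = [D] / ([D] + Kd), where Kd is the dissociation constant.
# The 10 nM Kd below is a generic illustration, not any particular drug.
def fraction_occupied(drug_conc_nm, kd_nm):
    return drug_conc_nm / (drug_conc_nm + kd_nm)

for conc in (1, 10, 100, 1000):  # free drug concentration in nM
    print(f"[D] = {conc:>4} nM -> {fraction_occupied(conc, kd_nm=10.0):.0%} occupied")
# Even a potent (10 nM) binder needs ~100x its Kd in free drug to approach
# full occupancy - one reason potency alone doesn't make a molecule a medicine.
```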

Depending on the disease, the protein may be malfunctioning in different ways. In cancer, for instance, there's typically an overproduction of proteins involved in cell growth. There may also be an underproduction of proteins involved in slowing down cell growth. This most commonly happens through mutations in the genes encoding these proteins - mutations being a natural byproduct of cell division and an unfortunate side effect of the wonders of evolution. The overproduction of specific proteins is in fact a common determinant in most major diseases. The solution then sounds simple: discover a small molecule which binds to and blocks such proteins, which in the parlance of drug discovery are regarded as drug "targets".

But this is where our troubles begin. Firstly, it takes a lot of sleuthing and arduous biochemical and genetic experimentation to find out if a particular protein is in fact a major contributor to a disease. One of the major reasons why drugs fail in clinical trials is because the protein that is targeted by the drug doesn't turn out to be that important for the disease, especially in large populations. There are several ways to probe the relevance of a protein to a particular disease state. Sometimes accidental clues come from natural genetic ‘experiments’ in human populations in which the effects of incidental mutations in that protein can be observed; for instance one of the hottest recent targets in heart disease is a protein called PCSK9, and its significance was realized in part through the discovery of a young aerobics instructor in Texas with mutations in the protein and incredibly low cholesterol levels. Sometimes insights emerge from so-called ‘inborn errors of metabolism’ in which specific proteins are mutated or silenced, leading to serious diseases. But such cases are rare; more often than not scientists have to artificially silence the function of a protein using genetic engineering or other approaches to find out whether it truly contributes to a specific disease state or a lack thereof.

But even if the protein's role in causing disease is established, not every protein can then actually bind to a synthetic small molecule and be modulated by it, for the simple reason that evolution had absolutely no reason to cause it to do so. For instance the heart drug Lipitor (atorvastatin) binds to and blocks the action of a protein called hydroxymethylglutaryl-coenzyme A (HMG-CoA) reductase, a key protein involved in the initial steps of cholesterol synthesis. Cholesterol is one of the most important structural and signaling molecules occurring in living systems, and the assembly line of proteins and genes for making it was put in place by evolution billions of years ago. There was no plausible reason why natural selection should have engineered HMG-CoA reductase to bind a bestselling drug that appeared on the scene a billion years later. And yet here we are, beneficiaries of the ingenuity of both chemists and nature, in possession of a drug that is considered to be the most important heart disease medicine in history. HMG-CoA reductase does bind Lipitor, but many other proteins don't.

The binding of HMG-CoA reductase to Lipitor is what makes it "druggable". Many other proteins, however, are considered "undruggable", and decades of attempts to "drug" them with small molecules have failed; an excellent example is a protein called Ras, which is mutated and overproduced in one out of five cancers. PCSK9, which was noted above, has also so far proved undruggable by small molecules. In fact a widespread belief holds that drug discovery is much harder now because most of the druggable proteins were picked off in the 80s and 90s; this is the so-called "low-hanging fruit" theory of drug decline. There are several reasons why a protein might not be druggable, but one of the most common is this: druggable proteins have deep, small, well-shaped pockets that can embrace a small molecule the way a lock embraces a key. Undruggable proteins on the other hand have shallow grooves spread across an extended area; a small molecule trying to bind such a surface faces a challenge similar to that confronting a climber trying to grab a foothold on a giant rock face. However it must also be remembered that designating a protein "undruggable" may be nothing more than a provisional admission of ignorance; future advances in technology may well make the protein druggable. A protein which is shown to be both a major causal component in a disease and druggable is called a "validated target", and is now ripe for drug discovery.

In any case, the first problem in drug discovery is that even if a particular protein is implicated in a particular disease, it may not be druggable. In addition, even if we were to successfully drug that protein, other proteins involved in the disease may compensate for its loss of function by being overproduced. This routinely happens in cancer, and that is why cancer patients often become resistant to one particular drug; when you block one protein with a drug, other proteins which are also mutated and over-expressed take over, like an alternative pathway in an electrical circuit. This also happens frequently in the case of antibiotics, where bacteria can compensate for a drug target by producing other disease-causing proteins, or sometimes even by producing proteins which can destroy the drug. It is almost impossible for now to predict such alternative rewiring, a factor that significantly adds to the lack of predictive power in drug discovery.

This concludes the first part of the series. Drug discovery is difficult for two initial reasons; it is difficult to find out which proteins are involved in a disease, and even if you find them they may not be druggable and able to bind to a small molecule drug. In the next post we will see how, if we do find such proteins, we then find the drugs targeting them. In other words, where do drugs come from?

Summary: Why is drug discovery hard?

Reason 1: Drugs work by modulating the function of proteins. It's difficult to find out exactly which proteins are involved in a disease. Even if these proteins are found, it is difficult then to know if their activity can be controlled by a small molecule drug.

Select references:

1. The Quest for the Cure - Brent Stockwell: An excellent account of many modern concepts in drug discovery including genomics and undruggable proteins.
2. The Billion-Dollar Molecule - Barry Werth: A swashbuckling ride through the exciting and high-pressure world of a pharmaceutical startup (Vertex) which has now grown into one of the world's most innovative pharmaceutical companies. The only book on drug discovery I know which reads like a combination of a fast-paced thriller and an epic romantic novel.
3. Real World Drug Discovery - Robert Rydzewski: A succinct and yet comprehensive guide to all aspects of the science, art and business of drug discovery.
4. Druglike Properties – Edward Kerns and Li Di: This is a professional reference for students and scientists, but it gives a great flavor of the number of variables that have to be optimized in a good drug, and strategies to do this.
5. Natural Obsessions – Natalie Angier: A fly-on-the-wall account of drug discovery at its most basic level. Angier spent a year as an observer in the lab of Robert Weinberg of MIT, a pioneer in discovering cancer-causing genes. This work is not drug discovery per se but is a splendid account of the basic science and human stories that leads to drug development.