Drug discovery 2.0: Rise of the biologics? Not so fast

Along with ominous drumbeats heralding the decline of the pharmaceutical industry, another recurring lament we hear these days concerns the decline of small molecules and the rise of biologics like antibodies and other proteins. The last decade has seen several successful antibody drugs, especially against cancer. Based on this success, the prevailing wisdom (which never tires of making its presence known) seems to suggest that traditional small molecule drugs are being shown the door even as biologics, with their much higher specificities and lack of side effects, are being welcomed in.

Not so fast. A rather refreshing opinion piece by B. Meunier of the CNRS in this week's Angewandte Chemie makes a strong case that rumors of the small molecule's death are grossly exaggerated. Along the way Meunier also takes a swipe at what he considers Europe's stifling innovation culture; his point is that since the "new" model of pharmaceutical research will entail "Big" Pharma licensing compounds in from small companies, the model can only thrive in countries where a healthy startup culture already exists. And Meunier does not see European countries steering that boat. Of course he does not discuss the rather baleful VC funding environment in the US, but that's a tale of woe for another article.


Meunier points out some rather obvious but important problems with biologics that their upbeat proponents sometimes cheerfully dismiss. Foremost is the price tag: a course of a monoclonal antibody costs tens of thousands of dollars at a minimum, and many cost hundreds of thousands. Secondly, antibodies have had their biggest successes against cancer, where the high cost has to be balanced against the terminal nature of the disease, the little time that patients have and the possibility of complete remission. Contrast that with chronic diseases like diabetes, heart disease and arthritis, where people have to stay on therapy for years. Antibody treatment would be prohibitively expensive in such cases unless some revolutionary new way of lowering production costs emerges; needless to say, no such breakthrough is visible around the corner.


Thirdly, antibodies and protein therapeutics have obvious clinical limitations: they cannot cross the blood-brain barrier and are therefore of little use against neurodegenerative disorders like Alzheimer's, and they mostly attach only to extracellular targets and may therefore be useless against intracellular proteins. There's also the perpetually inconvenient fact that they have to be injected, another stumbling block to using them against chronic diseases. Finally, Meunier thinks that the whole economic basis for biologics (namely the belief that generics will take a very long time to hit the market because of the production challenges) may well be refuted sooner than we think. If there's one lesson we have learnt from the history of innovation, from the development of paper to the atomic bomb, it's that other countries don't lack smart innovators, and given enough time they will come up with novel ideas to speed up production and lower costs. Putting all your eggs in the "it will be decades before India and China can come up with generic versions of biologics" basket is a dangerous strategy.


Ultimately, small molecules have one thing going for them: they have worked ever since the dawn of drug discovery, and they continue to be robust, versatile and easy to make. As Meunier points out, occasional competitors like oligonucleotide therapies have largely bitten the dust. And of course the sheer number of ways in which you can combine six elements to build a drug is astronomical; we have barely scratched the surface of chemical diversity, and there are still myriad scaffolds to plumb. Meunier's message is thus to keep up the good work with small molecules, employing the whole toolkit of chemistry, comprising not just synthesis but also analysis and modeling. Biologics may stridently stake out certain areas of the drug kingdom, but small molecules will continue to rule for the foreseeable future.


Latest Scientific American posts

Not to beat a dead Sprague Dawley rat, but I have another post on Sci Am comparing airplane and drug design, this time discussing some of the actual complexity of protein-drug binding.

On the occasion of a special "Beginnings" blog festival on multiple sites, I make a case for why the Origin of Life is chemistry's grand question.

And here are some thoughts on the chemistry of Curiosity.

Again, drug design and airplane design are not the same

A while back I had a post about an article that compared airplane design to drug design. I discussed the challenges in drug design compared to airplane design and why the former is much less predictable than the latter, the short answer being "biological complexity".

Now the analogy surfaces again in a different context. C&EN has an interview with Kiran Mazumdar-Shaw, CEO of India's largest biopharmaceutical company, Biocon. Mazumdar-Shaw is an accomplished woman who does not hold back when she laments the current depressing state of drug development. I think many of us would share her disappointment at the increasing regulatory hurdles that new drugs have to face. But at one point she says something that I don't quite agree with:

Mazumdar-Shaw dismisses the argument that drugs create a public safety imperative mandating stricter oversight than many other regulated products. “So you think passenger safety is any less important than patient safety?” she asked. Yet aircraft makers don’t face a 12-year, all-or-nothing proposition when designing, developing, and commercializing an airplane. Nor, she added, does Boeing have to prove that it is making something fundamentally different than what Airbus already has on the market.

To which my counter-question would be: "What do you think is the probability of unforeseen problems showing up in aircraft design compared to drug design?" I see a rather clear flaw in the analogy; aircraft design is not as tightly regulated because most aircraft work as designed and the attrition rate in their development is quite low. The number of failures in aircraft development pales in comparison with the number of Phase II failures in drug development. In fact, as the article quoted in my previous post described, these days you can almost completely simulate an aircraft on a computer. Regulatory agencies can thus be much more confident, even insouciant, about approving a new airplane.

This is far from the case for drugs. First of all, there is no clear path to a drug's development, and in the initial stages most people don't have a clue what the final product is going to look like. But more importantly, designing drugs is just so much riskier than designing aircraft that regulatory agencies have to be more circumspect. How many times do drugs show side effects that would never have been predicted at the outset? How many times do they show side effects that could not even have been imagined? It's this almost complete lack of predictability, driven by the sheer complexity of biology, that distinguishes drugs from airplanes.

In another part of the interview Mazumdar-Shaw voices her impatience with regulators' reluctance to adopt new technologies. The example she gives is Hawk-Eye, the computer system that tracks a tennis ball's trajectory and makes it easier to call a disputed bounce. Just as sports authorities are reluctant to use such technologies to override the flaws of human judgement, Mazumdar-Shaw thinks regulators are reluctant to use new technologies to overcome the limitations of human judgement. The point is not irrelevant, but the truth is that decision making in drug development is far more complex than decision making in tennis tournaments. For Hawk-Eye, tracking a tennis ball is a simple matter of physics, and it can do so with high accuracy. Contrast this with drug development, where the "event" to be analyzed is not the bounce of a ball but the efficacy of a drug in a large clinical trial, assessed by a variety of complex statistical measures. In addition, approving a drug is inherently more subjective, depending on the efficacies of existing therapies, the exact size of the extra benefit, cost and patient populations. Good luck writing a computer program that could assess this morass of sometimes conflicting information and reach an informed judgement.

I think many of us are frustrated with the increasing regulatory hurdles that new drugs face, and we all wish that the process were smoother. Personally I don't think the FDA's system for assessing risks is as finely attuned to potential benefits as it should be. But I don't find myself following Mazumdar-Shaw in advocating for drug approvals that are as easy as aircraft approvals. Aircraft design is science and engineering. Drug design is science with a healthy dose of intuition and art. And some black magic.

Protein-ligand crystal structures: WYSI(N)WYG

Crystal structures of proteins bound to small molecules have become ubiquitous in drug discovery. These structures are routinely used for docking, homology modeling and lead optimization. Yet as several authors have shown over the years, many of these structures hide flaws that become apparent only when the underlying data are carefully analyzed. Worse still, some flaws never become apparent because nobody takes the trouble to look at the original data.

A recent paper from the group at OpenEye offers a pretty useful analysis of flaws in crystal structures, carrying on a tradition most prominently exemplified by Gerard Kleywegt at Uppsala. The authors describe the common metrics used for picking crystal structures from the PDB and demonstrate their limitations, propose new metrics to remedy these problems, and then apply them to about 700 crystal structures used in structure-based drug design and software validation. They find that only about 120 structures pass the rigorous criteria for distinguishing good structures from bad ones.

The most important general message in the article concerns the difference between accuracy and completeness of data, and the caveat that any structure on a computer screen is a model, not reality. Even very accurate-looking data may be incomplete, a flaw often neglected by modelers and medicinal chemists when picking crystal structures. For instance, resolution is often used as a criterion for selecting one among many structures of the same protein in the PDB. Yet resolution only tells you how closely spaced features in the electron density can be distinguished; it does not tell you whether the data are complete to begin with. A crystallographer can collect only 80% of the theoretically possible data and still report a resolution of 1.5 Å, in which case the structure is clearly incomplete and possibly flawed for use in structure-based drug design. Another important metric is R-free, which is calculated from a small subset of reflections set aside and never used in refinement, so that it measures how well the model predicts data it has not seen. An R-free above roughly 0.45, or a resolution of 3.5 Å or worse, is a red flag, as is a large gap between R-free and the conventional R-factor (which measures the agreement between the model and the full working set of data).
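To make the arithmetic of such filters concrete, here is a minimal Python sketch of how one might screen a list of PDB entries on resolution, data completeness and R-factors. The field names, thresholds and example entries are illustrative assumptions for this post, not the actual criteria or code from the OpenEye paper.

```python
# Minimal sketch: screening PDB entries on simple crystallographic quality metrics.
# Thresholds and field names are illustrative assumptions, not the OpenEye criteria.

QUALITY_THRESHOLDS = {
    "max_resolution": 2.5,     # in Angstroms; lower is better
    "min_completeness": 0.90,  # fraction of theoretically possible reflections collected
    "max_r_free": 0.45,        # a very high R-free suggests a poor model
    "max_r_gap": 0.05,         # a large R-free minus R-work gap suggests overfitting
}

def passes_quality_filters(entry, thresholds=QUALITY_THRESHOLDS):
    """Return True if a structure's deposited metrics clear all the cutoffs."""
    return (
        entry["resolution"] <= thresholds["max_resolution"]
        and entry["completeness"] >= thresholds["min_completeness"]
        and entry["r_free"] <= thresholds["max_r_free"]
        and (entry["r_free"] - entry["r_work"]) <= thresholds["max_r_gap"]
    )

if __name__ == "__main__":
    # Hypothetical entries; in practice these numbers would come from the PDB
    # header or the validation report accompanying each deposited structure.
    entries = [
        {"pdb_id": "AAAA", "resolution": 1.5, "completeness": 0.98,
         "r_work": 0.18, "r_free": 0.21},
        {"pdb_id": "BBBB", "resolution": 1.5, "completeness": 0.80,  # high resolution...
         "r_work": 0.19, "r_free": 0.24},                            # ...but incomplete data
        {"pdb_id": "CCCC", "resolution": 3.6, "completeness": 0.95,
         "r_work": 0.28, "r_free": 0.46},
    ]
    good = [e["pdb_id"] for e in entries if passes_quality_filters(e)]
    print(f"{len(good)}/{len(entries)} structures pass: {good}")
```

Note that the second hypothetical entry illustrates the point above: a nominally high-resolution structure can still fail on completeness alone.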

The OpenEye authors instead discuss a variety of measures that say much more about the fidelity of the data than resolution does. The true measure of the data is of course the actual electron density: any structure you see on a computer screen results from fitting a model to that density. While proteins are usually fit fairly well, the placement of ligands is often more ambiguous, partly because protein crystallographers are not always as interested in the small molecules. The paper documents several examples of electron density that was either not fit or incorrectly fit by ligand atoms; in some cases sparse or even non-existent density was fit by guesswork. All these badly fit atoms show up in the final structure, but only an expert would know it. The only way to catch such problems is to look at the original electron density, a task that most medicinal chemists and modelers are unfortunately ill-equipped for.
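As a rough illustration of what "looking at the density" means quantitatively, here is a small numpy sketch that computes a real-space correlation between experimental and model-calculated density values sampled around a ligand. The arrays and sampling here are stand-ins I am assuming for the example; a real analysis would use crystallographic software to generate the maps and report statistics of this kind.

```python
import numpy as np

def real_space_correlation(observed, calculated):
    """Pearson correlation between observed and model-calculated density samples.

    Both inputs are 1-D arrays of density values sampled at the same grid points
    around the ligand. A value near 1 means the modeled ligand atoms sit in real
    density; a low value suggests the ligand was fit into weak or absent density.
    """
    observed = np.asarray(observed, dtype=float)
    calculated = np.asarray(calculated, dtype=float)
    return float(np.corrcoef(observed, calculated)[0, 1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Hypothetical "calculated" density implied by the modeled ligand atoms.
    calc = rng.uniform(0.2, 1.0, size=200)

    # Case 1: the experimental map largely supports the model (plus noise).
    obs_supported = calc + rng.normal(0.0, 0.1, size=200)

    # Case 2: the ligand was modeled into essentially featureless density.
    obs_unsupported = rng.normal(0.1, 0.1, size=200)

    print("well-supported ligand:", round(real_space_correlation(obs_supported, calc), 2))
    print("poorly supported ligand:", round(real_space_correlation(obs_unsupported, calc), 2))
```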

The worst problem with these crystal structures is that, because they look so accurate and tidy on a computer screen, they may fall into Donald Rumsfeld's category of "unknown unknowns". But even the "known unknowns" paint a disturbing picture. When the authors apply all their filters and metrics to a set of 728 protein-ligand structures frequently used for benchmarking docking programs and in actual structure-based drug design projects, only about 17% (121 structures) make it through. They collect these structures into a database called Iridium, which can be downloaded from their website. But the failure of the remaining 600 or so structures used by leading software vendors and pharmaceutical companies to pass important filters leaves you wondering how many resources we may have wasted by using them and how much uncertainty is out there.

Something to think about, especially when considering the 50,000 or so protein structures in the PDB. What you see may not be what you get.