Thanks to Derek I became familiar with an article in a recent issue of Nature Reviews Drug Discovery which addresses the existential question asked by so many plaintive members of the scientific community: why has pharmaceutical productivity declined over the last two decades, with no end to the attrition in sight?
The literature has been awash in articles discussing this topic, but this piece presents one of the most perceptive and well-thought-out analyses that I have read recently. The paper posits a law called "Eroom's Law", the opposite of Moore's Law, which charts the decline in drug approvals and novel medicines in contrast to Moore's rosy vision of technological progress. The authors wisely avoid recommending solutions, but they do cite four principal causes for the decline. Derek is planning to write a series of undoubtedly insightful posts on these causes. But here I want to list them, especially for those who may not have access to the journal, and discuss one of them in some detail.
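For the quantitatively inclined, here is a minimal back-of-the-envelope sketch (in Python; the code is mine, not the authors') of what an Eroom's-Law-style decline looks like, assuming the roughly nine-year halving time in new drugs approved per billion inflation-adjusted R&D dollars that the paper reports. The baseline figure below is a made-up round number, purely for illustration.

```python
# Illustrative only: Eroom's Law as an exponential decline in R&D efficiency.
# The ~9-year halving time for new drugs approved per billion (inflation-adjusted)
# R&D dollars comes from the paper; the 1950 baseline below is a hypothetical
# round number chosen for illustration, not data from the article.

HALVING_TIME_YEARS = 9.0
BASELINE_YEAR = 1950
BASELINE_DRUGS_PER_BILLION = 100.0  # hypothetical starting efficiency


def drugs_per_billion(year: int) -> float:
    """Drugs approved per billion R&D dollars under a constant halving time."""
    elapsed = year - BASELINE_YEAR
    return BASELINE_DRUGS_PER_BILLION * 0.5 ** (elapsed / HALVING_TIME_YEARS)


if __name__ == "__main__":
    for year in range(1950, 2011, 10):
        print(f"{year}: {drugs_per_billion(year):7.2f} new drugs per $1B of R&D")
```

Whatever the starting number, a constant halving time compounds relentlessly: over six decades the same research dollar buys roughly a hundred times fewer approved drugs.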
The first cause is named the 'Better than The Beatles' effect. The title is self-explanatory: if every new drug has to be better than a predecessor that has achieved Beatles-like medical status, then the bar for its acceptance is going to be very high, leading to an expensive and resource-intensive discovery process. An example would be a new drug for stomach ulcers, which would have to top the outstanding success of ranitidine and omeprazole; unlikely to happen. Naturally the bar is very high for certain areas like heart disease with its statins and hypertension with its phalanx of therapies, but the downside is that it stops novel medications from ever seeing the light of day. The Better-than-The-Beatles bar is understandably lower for a disease like Alzheimer's where there are no effective existing therapies, so perhaps drug developers should focus on such areas. The same goes for orphan drugs which target rare diseases.
The second reason is the 'cautious regulator', with the title again being self-explanatory. The thalidomide disaster of the 1960s led to a body of regulatory schedules and frameworks that today impose stringent standards for efficacy and toxicity on new drugs. This is not bad in itself, except that it often leads to the elimination of potential candidates (whose efficacy and toxicity could be modulated later) very early in the process. The crushing magnitude of the regulatory burden is illustrated by the example of a new oncology drug whose documentation, if piled in a single stack, would top the height of the Empire State Building. With this kind of regulation, scientists never tire of pointing out that many of the path-breaking drugs approved in the 50s and 60s would never survive the FDA's gauntlet today. There's a lesson in there somewhere; it does not mean that every new compound should be tested directly on humans, but it does suggest that compounds which initially appear problematic should perhaps be allowed to stay in the race a little longer without having to pass litmus tests. It's also clear that, as with the Beatles problem, the regulatory bar is going to be lower for unmet needs and rare but important diseases. An interesting fact cited in the article is the relatively relaxed standards applied to HIV drugs in the 90s, which were partly a result of intense lobbying in Washington.
The third reason cited in the article concerns the 'throw money at it' tendency. The authors don't really delve into this, partly because the problem is rather obvious: you cannot solve a complex, multifaceted puzzle like the discovery of a new drug simply by pumping in financial and human resources.
It's the fourth problem that I want to talk about. The authors call it the 'basic science-brute force' problem, and it points to something rather paradoxical: the increasingly basic-science-driven and data-driven approaches adopted by the pharmaceutical industry over the last twenty years might actually have hampered progress.
The paradox is perhaps not as hard to understand as it looks if we realize just how complex the human body and its interactions with small molecules are. This was well understood in the fifties and sixties, and it led to the evaluation of small molecules largely through their effects on actual living systems (which these days is called phenotypic screening) instead of by validating their action at the molecular level. A promising new therapeutic would often be tested directly on a mouse; at a time when very little was known about protein structure and enzyme mechanisms, this seemed to be the reasonable thing to do. Surprisingly, it was perhaps also the smart thing to do. As molecular biology, organic chemistry and crystallography provided new, high-throughput techniques to study the molecular mechanisms of drugs, focus shifted from the somewhat risky whole-animal testing methods of the 60s to target-based approaches in which you tried to decipher the interaction of drugs with their target proteins.
As the article describes, this thinking led to a kind of molecular reductionism, in which optimizing the affinity of a ligand for its protein target appeared to be the key to developing a successful drug. The philosophy was only buttressed by the development of rapid synthesis techniques like combinatorial chemistry. With thousands of new compounds and fantastic new ways to study their interactions at the molecular level, what could go wrong?
A lot, as it turns out. The complexity of biological systems ensures that the one-target, one-disease model more often than not fails. We now appreciate more than ever that new drugs, and especially ones that target complex diseases like Alzheimer's or diabetes, might need to interact with multiple proteins to be effective. As the article notes, the advent of rational approaches and cutting-edge basic science might have led companies to industrialize and unduly systematize the wrong part of the drug discovery process: the early one. The paradigm only gathered steam with the brute-force approaches enabled by combinatorial chemistry and the rapid screening of millions of compounds. The whole philosophy of finding the proverbial needle in the haystack ignored the possible utility of the haystack itself.
This misplaced systematization eliminated potentially promising compounds with multiple modes of action whose interactions could not easily be studied by traditional target-based methods. Not surprisingly, compounds with nanomolar affinity and apparently promising properties often went on to fail in clinical trials. Put more simply, the emphasis on target-based drug discovery and its attendant technologies might have produced lots of high-affinity, tight-binding ligands, but few drugs.
Although the authors don't discuss it, we continue to harbor such misplaced beliefs today in thinking that genomics and everything it entails could help us rapidly discover new drugs. Constraining ourselves to the accurate, narrowly defined features of biological systems deflects our attention from the less precise but broader and more relevant ones. The lesson here is simple: we are turning into the guy who looks for his keys under the streetlight only because it's easier to see there.
The authors of the article don't suggest simple solutions because there aren't any. But there is a hint of a solution in their recommendation of a new post in pharmaceutical organizations, colorfully titled the Chief Dead Drug Officer (CDDO), whose sole job would be to document and analyze the reasons for drug failures. Refreshingly, the authors suggest that the CDDO's remuneration could come in the form of delayed gratification a few years down the line, once his analysis has been validated. The hope is that the understanding emerging from such analyses would lead to some simple but effective guidelines. In the context of the 'basic science-brute force' problem, such guidelines might help us decide when to use ultra-rational target-based approaches and when to use phenotypic screening or whole-animal studies.
At least in some cases the right solution seems clear. For instance, we have known for years that neurological drugs hit multiple targets in the brain. Fifty years of psychiatrists prescribing drugs for psychosis, depression and bipolar disorder have done nothing to change the fact that even today we treat many psychiatric drugs as black boxes. With multiple subtypes of histamine, dopamine and serotonin receptors activated through all kinds of diverse, ill-understood mechanisms, it's clear that single-target approaches to CNS drug discovery are going to be futile, while multi-target approaches are simply going to be too complicated for the near future. In this situation, phenotypic screening, animal studies, and careful observation of patient populations are the way to go in prioritizing and developing new psychiatric medications.
Ultimately the article illuminates a simple fact: we just don't understand biological systems well enough to discover drugs through a few well-defined approaches. And in the face of ignorance, both rational and "irrational" approaches are going to be valuable in their own right. As usual, knowing which ones to use when is going to be the trick.