The unstoppable Moore hits the immovable Eroom

Thanks to Derek I became familiar with an article in the recent issue of Nature Reviews Drug Discovery which addresses that existential question asked by so many plaintive members of the scientific community: why has pharmaceutical productivity declined over the last two decades, with no end to the attrition in sight?

The literature has been awash in articles discussing this topic, but this piece presents one of the most perceptive and well-thought-out analyses that I have recently read. The paper posits a law called "Eroom's Law" (Moore's Law spelled backwards), which charts the regress in drug approvals and novel medicines, in contrast to Moore's rosy vision of ever-accelerating technological progress. The authors wisely avoid recommending solutions, but they do cite four principal causes for the decline. Derek is planning to write a series of undoubtedly insightful posts on these causes, but here I want to list them, especially for those who may not have access to the journal, and discuss one of them in some detail.
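
For readers who like to see the two exponentials side by side, here is a back-of-the-envelope sketch rather than anything from the paper itself: Moore's Law doubles transistor counts roughly every two years, while Eroom's Law, going by the figure usually quoted from the article, halves the number of new drugs approved per inflation-adjusted billion dollars of R&D roughly every nine years. The starting values in the little Python script below are arbitrary and purely illustrative.

```python
# Toy comparison of Moore's Law and Eroom's Law.
# Assumptions, for illustration only: transistor counts double every 2 years;
# new drugs approved per (inflation-adjusted) billion R&D dollars halve every
# 9 years, the figure usually quoted from the NRDD paper. Starting values are
# arbitrary and chosen just to make the trend visible.

def moore(year, base_year=1970, base_transistors=2300):
    """Transistors per chip, doubling every two years (illustrative)."""
    return base_transistors * 2 ** ((year - base_year) / 2)

def eroom(year, base_year=1950, base_drugs_per_billion=30.0):
    """New drugs per billion R&D dollars, halving every nine years (illustrative)."""
    return base_drugs_per_billion * 0.5 ** ((year - base_year) / 9)

for year in range(1950, 2011, 10):
    line = f"{year}: ~{eroom(year):4.1f} new drugs per $1B of R&D"
    if year >= 1970:
        line += f", ~{moore(year):,.0f} transistors per chip"
    print(line)
```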

The first cause is named the 'Better than The Beatles' effect. The title is self-explanatory: if every new drug has to be better than a predecessor that has achieved Beatles-like medical status, then the bar for its acceptance is going to be very high, leading to an expensive and resource-intensive discovery process. An example would be a new drug for stomach ulcers, which would have to top the outstanding success of ranitidine and omeprazole; unlikely to happen. Naturally the bar is very high in areas like heart disease with its statins and hypertension with its phalanx of therapies, but the downside is that it keeps novel medication from ever seeing the light of day. The Better-than-The-Beatles bar is understandably lower for a disease like Alzheimer's, where there are no effective existing therapies, so perhaps drug developers should focus on such areas. The same goes for orphan drugs which target rare diseases.

The second reason is the 'cautious regulator', with the title again being self-explanatory. The thalidomide disaster in the 1960s led to a body of regulatory schedules and frameworks that today impose severe standards for efficacy and toxicity on new drugs. This is not bad in itself, except that it often leads to the elimination of potential candidates (whose efficacy and toxicity could be modulated later) very early in the process. The stupendously crushing magnitude of the regulatory schedule is illustrated by the example of a new oncology drug whose documentation, if piled in a single stack, would top the height of the Empire State Building. Scientists never tire of pointing out that with this kind of regulation, many of the path-breaking drugs approved in the 50s and 60s would never survive the FDA's gauntlet today. There's a lesson in there somewhere; it does not mean that every new compound should be tested directly on humans, but it does suggest that compounds which initially appear problematic should be allowed to compete in the race a little longer without having to pass litmus tests. It's also clear that, as with the Beatles problem, the regulatory bar is going to be lower for unmet needs and rare but important diseases. An interesting fact cited by the article is the relatively low bar for HIV drugs in the 90s, which was partly a result of intense lobbying in Washington.

The third reason cited in the article concerns the 'throw money at it' tendency. The authors don't really delve into this, partly because the problem is rather obvious: you cannot solve a complex, multifaceted puzzle like the discovery of a new drug simply by pumping in financial and human resources.

It's the fourth problem that I want to talk about. The authors call it the 'basic science-brute force' problem, and it points to a paradox: the increasingly basic-science-driven and data-driven approaches adopted by the pharmaceutical industry over the last twenty years might actually have hampered progress.

The paradox is perhaps not as hard to understand as it looks if we realize just how complex the human body and its interactions with small molecules are. This was well understood in the fifties and sixties, and it led to the evaluation of small molecules largely through their effect on actual living systems (what these days is called phenotypic screening) rather than by validating their action at the molecular level. A promising new therapeutic would often be tested directly on a mouse; at a time when very little was known about protein structure and enzyme mechanisms, this seemed the reasonable thing to do. Surprisingly, it was also perhaps the smart thing to do. As molecular biology, organic chemistry and crystallography provided us with new, high-throughput techniques to study the molecular mechanisms of drugs, focus shifted from the somewhat risky whole-animal testing methods of the 60s to target-based approaches in which you tried to decipher the interaction of drugs with their target proteins.

As the article describes, this thinking led to a kind of molecular reductionism in which optimizing the affinity of a ligand for its protein target appeared to be the key to developing a successful drug. The philosophy was further buttressed by the development of rapid molecular synthesis techniques like combinatorial chemistry. With thousands of new compounds and fantastic new ways to study their interactions at the molecular level, what could go wrong?

A lot, as it turns out. The complexity of biological systems ensures that the one target-one disease correlation more often than not fails. We now appreciate more than ever that new drugs, especially those that target complex diseases like Alzheimer's or diabetes, might need to interact with multiple proteins to be effective. As the article notes, the advent of rational approaches and cutting-edge basic science might have led companies to industrialize and unduly systematize the wrong part of the drug discovery process: the early one. The paradigm only gathered steam with the brute-force approaches enabled by combinatorial chemistry and the rapid screening of millions of compounds. The whole philosophy of finding the proverbial needle in the haystack ignored the possible utility of the haystack itself.

This misplaced systematization eliminated potentially promising compounds with multiple modes of action whose interactions could not be easily studied by traditional target-based methods. Not surprisingly, compounds with nanomolar affinity and apparently promising properties often went on to fail in clinical trials. Put more simply, the whole emphasis on target-based drug discovery and its attendant technologies might have delivered lots of high-affinity, tight-binding ligands, but few drugs; the toy sketch below makes the mismatch concrete.
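
Here is the deliberately crude toy model I have in mind, with entirely hypothetical numbers and nothing taken from the paper: each imaginary compound gets random affinities for three targets, the target-based screen ranks the library by affinity for target A alone, and a mock 'phenotypic' score rewards balanced engagement of all three targets.

```python
import random

random.seed(42)
TARGETS = ("A", "B", "C")

def make_compound(i):
    """A made-up compound with a pKd-like affinity (higher = tighter) for each target."""
    return {"name": f"cmpd_{i:03d}",
            "affinity": {t: random.uniform(5.0, 9.0) for t in TARGETS}}

def phenotypic_score(compound):
    """Mock whole-system readout: efficacy requires engaging all three targets,
    so the weakest of the three affinities limits the score."""
    return min(compound["affinity"].values())

library = [make_compound(i) for i in range(1000)]

# The target-based screen ranks by affinity for target A alone;
# the 'phenotypic' screen ranks by the systems-level score.
best_single = max(library, key=lambda c: c["affinity"]["A"])
best_pheno = max(library, key=phenotypic_score)

print("Tightest binder for target A:", best_single["name"], best_single["affinity"])
print("Best phenotypic score       :", best_pheno["name"], best_pheno["affinity"])
```

The numbers mean nothing in themselves; the point is only that ranking by a single dimension (affinity for one target) and ranking by a systems-level readout need not pick the same compounds.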

Although the authors don't discuss it, we continue to hold such misplaced beliefs today when we think that genomics and all that it entails could help us rapidly discover new drugs. By constraining ourselves to accurate but narrowly defined features of biological systems, we deflect our attention from the less accurate but broader and more relevant ones. The lesson here is simple: we are turning into the guy who looks for his keys under the streetlight only because it's easier to see there.

The authors of the article don't suggest simple solutions because there aren't any. But there is a hint of a solution in their recommendation of a new post in pharmaceutical organizations, colorfully titled the Chief Dead Drug Officer (CDDO), whose sole job would be to document and analyze the reasons for drug failures. Refreshingly, the authors suggest that the CDDO's remuneration could come in the form of delayed gratification a few years down the line, once his analysis has been validated. The hope is that the understanding emerging from such an analysis would lead to some simple but effective guidelines. In the context of the 'basic science-brute force' problem, such guidelines might help us decide when to use ultra-rational target-based approaches and when to use phenotypic screening or whole-animal studies.
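
If you want a whimsical picture of what the CDDO's ledger might boil down to, here is a sketch with invented compound names and failure reasons; the job, at its core, is disciplined bookkeeping followed by counting.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class DeadDrug:
    """One entry in a hypothetical 'dead drug' ledger (all values invented)."""
    name: str
    phase: str    # e.g. "preclinical", "Phase II"
    reason: str   # e.g. "efficacy", "toxicity", "commercial"

ledger = [
    DeadDrug("cmpd-001", "Phase II", "efficacy"),
    DeadDrug("cmpd-002", "Phase I", "toxicity"),
    DeadDrug("cmpd-003", "Phase II", "efficacy"),
    DeadDrug("cmpd-004", "Phase III", "efficacy"),
    DeadDrug("cmpd-005", "preclinical", "toxicity"),
]

# The CDDO's report in miniature: where and why projects die, counted up.
for (phase, reason), n in Counter((d.phase, d.reason) for d in ledger).most_common():
    print(f"{n} failure(s) in {phase} due to {reason}")
```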

At least in some cases the right solution seems clear. For instance, we have known for years that neurological drugs hit multiple targets in the brain. Fifty years of psychiatrists prescribing drugs for psychosis, depression and bipolar disorder have not changed the fact that even today we treat many psychiatric drugs as black boxes. With multiple subtypes of histamine, dopamine and serotonin receptors activated through all kinds of diverse, ill-understood mechanisms, it's clear that single target-based approaches for CNS drug discovery are going to be futile, while deliberately multi-target approaches are simply going to be too complicated in the near future. In this situation, phenotypic screening, animal studies, and careful observation of patient populations are the way to go in prioritizing and developing new psychiatric medication.

Ultimately the article illuminates a simple fact: we just don't understand biological systems well enough to discover drugs through a few well-defined approaches. And in the face of this ignorance, both rational and "irrational" approaches are going to be valuable in their own right. As usual, knowing which ones to use when is going to be the trick.

5 comments:

  1. Well written, but there is a much, much simpler explanation for that 'Law of Eroom', which is the already well-known fact that communist economies fail. As a matter of fact, in all developed countries, health care in general and Big Pharma in particular are run on communist principles, which are characterized by inefficiency and exploding costs.

    For instance, in a communist economy it is much more profitable and less risky to bribe officials in order to 'convince' them to buy products that already exist at an exorbitant price and, on the other hand, to tighten regulations so that no competing products reach the market. Much of the R&D money is in fact camouflaged bribery.

    Also, once competition is turned off, the Peter Principle sets in. The result is expensive mistakes. The 'throw money at it' tendency spells uncreative morons in charge.

    The 'Law of Moore' only holds because of fierce competition. As soon as governmental regulations are introduced for the production of integrated circuits, the 'Law of Eroom' will steadily take hold there too.

  2. " it's clear that single target-based approaches for CNS drug discovery are going to be futile," Correct, but we were led down this path, by several outstanding successes 50 years ago -- what could be simpler and more targeted than L-DOPA for the dopamine deficiency of parkinsonism. The results were spectacular. Ditto for the dopamine blocking neuroleptics -- quite true that now we know they interact with far more receptors than anyone dreamed of back then, but compared to what we had the results were spectacular. We were equally certain that we'd have drugs for addiction when the enkephalins and endorphins were discovered.

    "The complexity of biological systems ensures that the one target-one disease correlation more often than not fails." I would argue that this doesn't go far enough. We simply don't understand the workings of the systems we're trying to change. Certainly we now know more than we ever did, but consider what we didn't know about gene expression back than which we know now -- introns and exons, splicing enhancers and inhibitors, microRNAs, competitive endogenous RNAs. The list goes on, and that's just gene expression, not how the products interact with each other. I doubt that we've exhausted the list of cellular mechanisms to discover.

    For why big Pharma is shedding chemists (from lack of results), with 21 posts underlining how little we knew until the work came out, see #21: https://luysii.wordpress.com/2012/03/07/why-drug-discovery-is-so-hard-reason-21-rna-sequences-wont-help-you-determine-function/.

    For an annotated list of the first 20 reasons see https://luysii.wordpress.com/category/aargh-big-pharma-sheds-chemists-why/

    Retread/Luysii

  3. I always wonder if L-DOPA was a stroke of luck and an exception. Plus, Parkinson's is still with us, so future therapies will likely need to hit multiple targets.

  4. Well, it may have been lucky that L-DOPA worked so very well, but it wasn't luck that the drug was found. It was based on a very clear idea of what was wrong in Parkinsonism.

    Luysii/Retread

  5. @healthcarebubble, The over-regulation of pharmaceuticals and Big Pharma lobbying to make it difficult for smaller pharmaceutical companies to gain any traction have stymied studies, and I would say are the largest factor behind Eroom's law. Just look at statins. Even this article says they are a Beatles drug, but big data is finding that the outcomes for patients on statins are no better than for patients who did not take statins, which is telling because the only heart patients with high cholesterol who don't take statins are probably those who can't afford them. What does that say? Basically that although the initial effects of statins are positive, they do nothing to heal the system and deliver long-term effects. Lower cholesterol does not actually equal better outcomes. Fortunately, big data and better testing techniques (like the realization that mice are in many ways not good testing subjects for efficacy) will increase progress. Unfortunately, more progress will probably be made in areas outside the US due to the tight regulation by the FDA.
