Nimbus Therapeutics, which as far as I know is the only drug company based on a purely computational model of drug discovery (with all experiments outsourced), just handed over one of their key programs to Gilead for a good sum of money. The program was aimed at discovering inhibitors of the protein acetyl-CoA carboxylase (ACC), which is implicated in non-alcoholic steatohepatitis (NASH), and it went from start to finish in about 18 months.
To discover this inhibitor, Nimbus brought all of Schrodinger's resources and computing capability (they essentially have unlimited access to Schrodinger's licenses and hardware) to bear on the problem. They (presumably) used techniques like virtual screening of millions of compounds, molecular dynamics, mapping of thermodynamically unstable water molecules, and Schrodinger's newest tool, free-energy perturbation (FEP), which in its ideal incarnation lets you rank-order compounds by their free energy of binding to a protein target.
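For context, the core identity behind FEP is not proprietary; it is the textbook Zwanzig relation, which expresses the free energy difference between two states A and B as an average evaluated in state A. In practice the method strings together many small alchemical steps that mutate one ligand into another, and a thermodynamic cycle converts the result into a relative binding free energy. The following is the generic formulation, not a statement about how Schrodinger's particular implementation works:

```latex
% Zwanzig exponential averaging between neighboring alchemical states A and B
\Delta G_{A \to B} = -k_{B}T \, \ln \left\langle \exp\!\left[ -\left( U_{B} - U_{A} \right) / k_{B}T \right] \right\rangle_{A}

% Relative binding free energy from the standard thermodynamic cycle
\Delta\Delta G_{\mathrm{bind}}(A \to B) = \Delta G_{A \to B}^{\mathrm{complex}} - \Delta G_{A \to B}^{\mathrm{solvent}}
```

A negative ΔΔG means the proposed modification B is predicted to bind more tightly than A, which is exactly the rank-ordering information a chemist deciding what to synthesize next wants.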
I think this is a very promising development for the application of computation to drug discovery, but as a scientist in the field myself I am even more excited about the volume of useful data this effort must have generated. This is simply because, based on their model, it seems that every molecule Nimbus prioritizes or discards necessarily goes through their computational pipeline, which would be rather unique. The corpus of data this process generates is presumably locked inside Nimbus's virtual vaults, but I think they and Schrodinger should release at least a high-level abstraction of it to let everyone figure out what worked and what did not. At the very least, it would transform Nimbus's success from an impressive black box into a comprehensible landscape of computational workflows.
There are several questions whose answers are always valuable when assessing the success of any computational drug discovery protocol. One of the most important is to get an idea of the domain of applicability of particular techniques and to tease apart the general features of the problems on which they seem to work. And since any computational protocol is a model, one also wants to know how well it compares to other models, preferably simpler ones. If a particular technique turns out to be general, accurate, robust, and consistently better than the alternatives, then we're in business.
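To make that comparison concrete, here is a minimal sketch in Python of how such a benchmark is usually scored, with invented numbers standing in for a hypothetical ligand series; none of this is Nimbus or Schrodinger data:

```python
# Benchmarking an expensive affinity-prediction method against a cheap baseline.
# All values are invented for illustration only.
from scipy.stats import spearmanr, kendalltau

# Experimental binding free energies (kcal/mol) for a hypothetical ligand series
experimental = [-9.1, -8.4, -10.2, -7.6, -8.9, -9.8]

# Predictions from an expensive, physics-heavy method (e.g. an FEP-style calculation)
expensive = [-9.4, -8.1, -10.5, -7.9, -8.6, -9.5]

# Predictions from a simpler, cheaper scoring method
baseline = [-8.8, -9.0, -9.9, -8.2, -8.5, -8.7]

for name, predicted in [("expensive method", expensive), ("simple baseline", baseline)]:
    rho, _ = spearmanr(experimental, predicted)
    tau, _ = kendalltau(experimental, predicted)
    print(f"{name}: Spearman rho = {rho:.2f}, Kendall tau = {tau:.2f}")
```

The expensive method earns its keep only if its rank correlation with experiment is consistently and meaningfully better than the cheap baseline's across many targets, not just on one favorable series.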
Here are a few questions which apply not only to Nimbus but which I think can be asked of any project in which computational methods are thought to play a predominant role.
1. Did they have a crystal structure or did they build a homology model? If they built a model, what was the sequence identity with the template protein?
2. Was the protein flexible or fairly rigid? Were there missing loops or other parts, and did they have to build these computationally? If loops were actually reconstructed, how long were they?
3. What was the success rate of their virtual screening campaign? How many of the top-ranked molecules were actually made by the chemists? (A short sketch of how such success is usually quantified follows this list.)
4. How much did Schrodinger's WaterMap protocol help with improving binding affinity? Were the key displaceable or replaceable water molecules buried deep in hydrophobic cavities, or were there also a few non-intuitive water molecules at the protein-solvent interface?
5. Did they test any of the predicted negatives (molecules the calculations rejected) to check for false negatives and find out whether the accuracy was really what they thought it was?
6. How well did the new FEP algorithm work at rank-ordering binding affinities? Did they compare it with simpler methods like MM-GBSA?
7. In what way, if at all, did molecular dynamics help in the discovery of these inhibitors? How did MD compare to simpler techniques?
8. How well did methods for predicting ADME properties work relative to methods for predicting binding affinity? (The latter are usually far more accurate.)
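As noted under question 3 above, the "success rate" of a virtual screen has a standard quantitative form: the hit rate among the compounds actually made and tested, and the enrichment over random selection from the library. Here is a minimal sketch with invented numbers (the enrichment factor is, strictly speaking, a retrospective metric, since it assumes you know how many actives the library contains):

```python
# Hit rate and enrichment factor for a virtual screening campaign.
# All numbers are invented for illustration only.

def hit_rate(n_hits: int, n_tested: int) -> float:
    """Fraction of the experimentally tested compounds that turned out to be active."""
    return n_hits / n_tested

def enrichment_factor(n_hits: int, n_tested: int,
                      n_actives_total: int, n_library: int) -> float:
    """How much better the screen did than picking compounds from the library at random."""
    return (n_hits / n_tested) / (n_actives_total / n_library)

# Hypothetical campaign: 2,000,000 compounds scored, the top 100 synthesized or
# purchased and assayed, 12 confirmed actives; assume ~500 actives in the whole library.
print(f"hit rate: {hit_rate(12, 100):.1%}")
print(f"enrichment factor: {enrichment_factor(12, 100, 500, 2_000_000):.0f}x")
```

Numbers like these, reported alongside which computational filters each compound passed through, would go a long way toward answering the questions above.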
I think it's great that a purely computation-driven company like Nimbus can discover a lead for a tough target and an important disease in a reasonable amount of time. But it would be even better to distill general lessons from their success that we could apply to the discovery of drugs for other important diseases.