
The human problems with molecular modeling

Molecular modeling and computational chemistry are the neglected stepchildren of pharmaceutical and biotech research. In almost every company, whether large or small, they are treated as "support" disciplines, peripheral to the main line of research and never at the core. At the core instead are synthetic chemistry, biology and pharmacology, with ancillary fields like formulations and process chemistry becoming increasingly important as the path to a drug progresses.

In this post I will explore two contentions:

1. Unless its technical and human problems are addressed, molecular modeling and simulation will remain a peripheral rather than a core discipline in drug discovery.

2. The overriding problem with molecular modeling is the lack of a good fit between tools and problems. If this problem is addressed, molecular modeling stands a real chance of moving from the periphery to, if not the very core, at least close to the core of drug discovery.

There are two kinds of challenges with molecular modeling that practitioners have known about for a long time - technical and human. The technical problems are well known: although great progress has been made, we still can't model the details of biochemical systems very accurately. Key aspects of these systems, like protein motion and the behavior of water molecules, remain hard to capture, and - in the case of approaches like machine learning - the lack of adequate benchmarks and datasets continues to thwart the field.

However, in this piece I will focus on the human problems and explore potential ways of mitigating them. My main contention is that modeling often works so poorly in a pharmaceutical setting because the incentives of modelers and other scientists are fundamentally misaligned.

In a nutshell, a modeler has two primary objectives - to make predictions about active, druglike molecules and to validate the models they are using. The second objective is actually a prerequisite for the first: without proper validation, a modeler cannot know whether the problem space they are applying their models to is a valid application of their techniques. For proper validation, two things are necessary:

1. That the synthetic chemist actually makes the molecules the modeler suggests.

2. That the synthetic chemist does not make molecules the modeler hasn't suggested.

In reality, the synthetic chemist who takes up the modeler's suggestions has little to no interest in model validation. As anyone who has done modeling knows, when a modeler suggests ten compounds to a synthetic chemist, the synthetic chemist will typically pick perhaps two to five of those ten. In addition, the synthetic chemist might pick five other compounds which the modeler never recommended. The modeler also typically has no control or authority over ordering compounds themselves.

The end result of this patchwork implementation of the modeler's predictions is that they never know whether their model really worked. Negative data is an especially acute problem, since synthetic chemists are almost never going to make molecules that the modeler thinks will be inactive. You are therefore left with a scenario in which neither the synthetic chemist nor the modeler knows, or is satisfied with, the utility of the models. No wonder the modeler is relegated to the back of the room during project discussions.
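To see why partial testing is so corrosive to validation, consider a toy sketch (all compound names and outcomes below are invented purely for illustration): when only the model's favorite picks get made, the modeler can compute at best a precision figure on a small, biased subset, and the false-negative rate - the very thing that would distinguish a useful model from a lucky one - is simply undefined.

```python
# Toy illustration (all names and outcomes invented): the model scores ten compounds,
# but the chemist makes only three of the model's picks plus two compounds of their own.
predicted_active = {"C1", "C2", "C3", "C4", "C5"}      # model: "make these"
predicted_inactive = {"C6", "C7", "C8", "C9", "C10"}   # model: "skip these"

made_and_tested = {"C1": True, "C3": False, "C4": True,   # a subset of the model's picks
                   "X1": True, "X2": False}               # compounds the model never scored

# The only statistic the modeler can compute is precision on the tested overlap...
tested_picks = [c for c in made_and_tested if c in predicted_active]
precision = sum(made_and_tested[c] for c in tested_picks) / len(tested_picks)
print(f"Precision on tested picks: {precision:.2f} ({len(tested_picks)}/{len(predicted_active)} picks tested)")

# ...but no predicted-inactive compound was ever made, so recall and the
# false-negative rate cannot be computed and the model is never actually validated.
tested_negatives = [c for c in made_and_tested if c in predicted_inactive]
print(f"Predicted-inactive compounds tested: {len(tested_negatives)}")
```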

There is another fundamental problem which the modeler faces, one which actually applies more broadly to drug discovery scientists. In one sense, not just modeling but all of drug discovery - including devices, assays, reagents and models - can be considered a glorious application of tools. Tools only work if they are suited to the problem. If a practitioner thinks the tool will be unsuited, they need to be able to say so and decline to use it. Unfortunately, incentive structures in organizations are rarely set up for employees to say "no". Such a refusal is often regarded as an admission of defeat or an unwillingness to help out. This is a big mistake. Modelers in particular should always be rewarded if they decline to use modeling and can give good reasons for doing so. As it stands, because they are expected to be "useful", most modelers end up indiscriminately applying their tools to problems, no matter what the quality of the data or the probability of success is. This means that quite often they are simply using the wrong tool for the problem at hand. Add to this the aforementioned unwillingness of synthetic chemists to validate the models, and it's little surprise that modeling so often fails to have an impact and is relegated to the periphery.

How does one address this issue? In my opinion, it can be mitigated to a significant extent if modelers know something about the system they are modeling and the synthesis that will yield the molecules they are predicting. If a modeler can give sound reasons based on assays and synthesis - perhaps the protein construct they are using for docking is different from the one in the assay, perhaps the benchmarks are inadequate, or perhaps the compounds they are suggesting won't be amenable to easy synthesis because of a weird ring system - other scientists are more likely both to take their suggestions seriously and to respect their unwillingness to use modeling for a particular problem. The overriding philosophy a modeler works by should be captured not in the question "What's the best modeling tool for this problem?" but in "Is modeling the right tool for this problem?". So the first thing a modeler should know is whether modeling would even work; if it won't, they will go a long way toward gaining the respect of their organization if they can say at least a few intelligent things about alternative experimental approaches or the experimental data. There is no excuse for a computational chemist not to be a chemist in the first place.

More significantly, my opinion is that this mismatch will not be addressed until modelers themselves are in the driver's seat - until they can ensure that their predictions are tested in their entirety. Unfortunately, modelers have little control over testing their models; much of it simply depends on how much the synthetic chemists trust the modelers, a relationship driven as much by personality and experience as by modeling success. Even today, modelers usually can't simply order their compounds for synthesis from internal or external teams.

Fortunately there are two very significant recent developments that promise modelers an unprecedented degree of control and validation. One is the availability of inexpensive CROs like WuXi and Enamine which can make many of the compounds predicted by modeling. These CROs have driven the cost down so significantly that, importantly, even negative predictions can now be tested. In general, the big advantage of external CROs relative to internal chemists is that you can dictate what the CROs should and shouldn't make - they won't make compounds you don't recommend and they will make every compound you do; the whims of personal relationships don't make a difference in a fee-for-service structure.

More tantalizingly, there have now been a few success stories of fully computationally driven pipelines, most notably Nimbus and Morphic Therapeutic and, more recently, Silicon Therapeutics. When I say "fully computationally driven" I don't mean that synthetic chemists don't have any input - the inaccuracy of computational techniques precludes fully automated molecule selection from a model - what I mean is that every compound is a modeled compound. In these organizations the relationship between modeling and other disciplines is reversed: computation is front and center - at the core - and it's synthetic chemistry and biology, in the form of CROs, that are at the periphery. These organizations can ensure that every single prediction made by modelers is made and tested, or conversely, that no molecule that is made and tested fails to go through the computational pipeline. At the very least, you can then keep a detailed bookkeeping record of how designed molecules perform and therefore validate the models; at best, as some of these organizations have shown, you can discover viable druglike leads and development candidates.
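As a minimal sketch of what that bookkeeping could look like (the class and field names here are hypothetical, not any particular company's system), the idea is simply to log every designed compound with its prediction before synthesis and join the assay outcomes back in later, so that performance is computed over the full design set rather than a chemist-filtered subset:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DesignRecord:
    compound_id: str
    predicted_active: bool                       # model's call, logged before synthesis
    predicted_affinity: Optional[float] = None   # e.g. predicted pIC50, if available
    measured_active: Optional[bool] = None       # filled in when the assay data returns

class DesignLedger:
    """Track every modeled compound so the model can be judged on the full design set."""

    def __init__(self) -> None:
        self.records: dict[str, DesignRecord] = {}

    def log_prediction(self, compound_id: str, predicted_active: bool,
                       predicted_affinity: Optional[float] = None) -> None:
        self.records[compound_id] = DesignRecord(compound_id, predicted_active, predicted_affinity)

    def log_assay_result(self, compound_id: str, measured_active: bool) -> None:
        self.records[compound_id].measured_active = measured_active

    def summary(self) -> dict:
        tested = [r for r in self.records.values() if r.measured_active is not None]
        return {
            "tested": len(tested),
            "untested": len(self.records) - len(tested),  # should be zero before judging the model
            "true_pos": sum(r.predicted_active and r.measured_active for r in tested),
            "false_pos": sum(r.predicted_active and not r.measured_active for r in tested),
            "false_neg": sum(not r.predicted_active and r.measured_active for r in tested),
            "true_neg": sum(not r.predicted_active and not r.measured_active for r in tested),
        }
```

The code itself is trivial; the point is the discipline it enforces - the "untested" count should read zero before anyone declares that the model did or didn't work, which is exactly the guarantee a computation-first organization can make and a traditional one usually can't.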

Computational chemistry and modeling have come a long way, but they have a long way to go in terms of both technical and organizational challenges. Even if the technical challenges are solved, the human challenges are significant and will hobble the influence computation has on drug discovery. Unless incentive structures are aligned, these fields will continue to have poor impact and remain at the periphery. The only way for them to progress is for computation to be in the driver's seat and for computational chemists to be as informed as possible. Fortunately, with the commodification of synthesis and the increased funding and interest in computationally driven drug pipelines, it seems we may have a chance to find out how well these techniques work after all.


Comments:

  1. It is simply not the case that synthetic and medicinal chemists do not make negative compounds to test hypotheses. I believe modellers may tell themselves this as a comfort blanket when their model fails: it's not my fault, as they did not make all the compounds the model suggested. The synthetic chemist has to balance the resources they have with what can be achieved. They might pick 5 out of 10 because those five can sensibly be made that month. If the other 5 are all 20 steps with 5 chiral centres and no common intermediate, they are not getting made. My suggestion for moving computational chemistry up the pecking order would be that modellers need to understand synthetic chemistry better. Too many crazy suggestions from the model and people lose confidence. The second is that the model needs proper testing with a go/no-go decision. It should be treated like a human medicinal chemist and judged by the same standards.

  2. Do *some* modelers tell themselves that as a comfort story? Definitely. But I think your quips about synthetic chemists are very organization-specific. In cases where modeling plays a dominant role they are more likely to pursue multiple suggestions, but what I said about the incentive structures not being aligned is still true (in the next post I will try to explore how those structures *could* be aligned). I definitely agree with you - and allude to it above - that any knowledge of synthesis the modeler brings to the table is very much valued. One of the most valuable things I have been able to say to a synthetic chemist is "I think both of these molecules look good in terms of the model, but this one can be made using a combined Suzuki-Buchwald reaction."

  3. I enjoyed the premise and discussion. However, as a computational chemist I would argue that the problem might be neither the practicing computational nor the synthetic chemists, but rather the lack of rigor, reproducibility and/or "real world" testing of the computational methods on the part of the developers. John Chodera gave a talk at the D3R Grand Challenge workshop (August 2019) on the non-reproducibility of many (if not most) of the computational methods in use today. We wouldn't expect a self-driving car to work if it is only tested on the Daytona International Speedway oval track. We shouldn't expect computational methods to work either, as the testing rigor for most computational algorithms isn't even an oval track - it is the equivalent of a 1/4-mile drag racing strip.

    Replies
    1. I would say it depends on what problem you are solving. For instance, pose prediction using docking for well-characterized targets works about 70% of the time. But that's exactly why you need to use docking for pose prediction and not for affinity prediction, and why you need to use it for well-characterized targets - this is precisely what I was saying about fitting the tool to the problem. But generally I agree that computational methods need to be made much more rigorous and validated. I fear that with all this hype about AI and ML we are again going in the opposite direction.

    2. One brief counter - 70% only when docking cognate ligands. When cross-docking, the best performance is 50%, with 25% being more typical using the standard ≤2 Å cutoff. I'm not sure we can even pose ligands robustly, and by robustly I mean >50% success at a threshold four times the typical precision of the experimental coordinates.

