Boston is a beautiful city, and I am staying in a lively and merry spot, Faneuil Market: lots of good grub, a charming old marketplace with cobbled stones, and history written into many places around it. Boston Harbor is just across one road. Lovely.
But anyway, on to the conference highlights. Since I am a little pressed for time over these two or three days, I will briefly mention some of the more interesting stuff and observations and leave the details and links (of which there are many) for later.
1. There was almost unanimous agreement about the central role of modeling in modern structure-based drug design (SBDD). Some rightly questioned the exact value and utility of different kinds of modeling, but no one thought it was unhelpful. The real problem is not so much synthetic chemists failing to appreciate modelers (although that is a problem in some cases) as an educational gap between modelers and chemists. The consensus was that the two camps simply should not see each other as competitors or as witch doctors; there should be vigorous discussion between them, especially outside formal channels. I don't think there were more than one or two drug design projects presented that did not involve some component of modeling. It's a pretty encouraging scenario, but of course there's still a long way to go, as there always seems to be in drug discovery.
2. One of the most enlightening sessions for me was a roundtable with one of the leading computational chemists in the world, probably someone more familiar with docking and other drug discovery related computational methods than almost anyone else: Richard Friesner of Columbia and Schrodinger. Friesner expressed surprise that large companies are not investing more in computational resources because they somehow consider it too "risky". He pointed out that the cost of implementing even a big computing grid is probably a fraction of what is invested in HTS, RNAi and suchlike, many of which also turn out to be big risks. The take-home message is that experimentalists should be bold and come forward to test docking programs, for example. Friesner also cited the success that pharma has had with Schrodinger programs run on their own compound libraries. Unfortunately, this knowledge is proprietary; more of it needs to come forth for academia and collaboration.
3. David Borhani from Abbott gave a nice talk about their Lck kinase inhibitors, which also led to the discovery of selective Hck inhibitors. A single hydrogen-bonding interaction was responsible for conferring the selectivity.
4. Mark Murcko, CTO of Vertex, gave a general overview of SBDD and how far it has come. He pointed to some of his favourite examples, including carbonic anhydrase, HIV protease, and of course Vertex's newest HCV protease inhibitors.
5. Arthur Doweyko from BMS invented a whole new solvent system cryptically named "Q" for selecting good poses. He rightly opined that it is actually good to separate the docking and scoring problems and address them separately. His "Q" essentially calculates the hydrophobic effect based on hydrophobic solvent accessible surface area (SASA). He showed cases where the "simple" correlation between SASA and predicted affinity (or ΔG) failed. This is because in traditional SASA calculations the probe chosen is usually water, with a radius of 1.4 A, which misses some of the fine structure of the lipophilic surface. Doweyko mentioned that in some cases, simply shrinking the probe radius to 0.5 A gave better correlation (see the sketch after this list for the basic idea).
6. Gergely Toth from Sunesis talked about their well-known disulfide tethering approach, combined with computational methods that included searching the tether's conformational space and MD.
7. Not surprisingly, there was lots of discussion about kinase inhibitors, with many technologies and protocols directed towards finding selective compounds. While allosteric inhibitors promise new frontiers, traditional ATP-competitive inhibitors are still a popular approach.
8. Other speakers included structural biologists, chemists, modelers, crystallographers, and "informaticians" from Novartis, Bayer, Lilly, and Pfizer, among others. Much discussion and musing, especially on modeling, HTS, and crystallography.
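As an aside on point 5: the probe-radius effect is easy to play with numerically. Below is a minimal Shrake-Rupley-style SASA sketch (my own illustration of the general idea, not Doweyko's actual "Q" method; the coordinates and van der Waals radii are made-up toy values) showing how the computed surface area depends on the probe radius.

```python
import numpy as np

def sphere_points(n=256):
    """Roughly uniform points on a unit sphere (golden-spiral method)."""
    i = np.arange(n) + 0.5
    phi = np.arccos(1 - 2 * i / n)        # polar angle
    theta = np.pi * (1 + 5 ** 0.5) * i    # azimuthal angle
    return np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=1)

def sasa(coords, vdw_radii, probe_radius=1.4, n_points=256):
    """Shrake-Rupley style SASA: for each atom, place test points on a
    sphere of radius (vdw + probe) and keep only those not buried inside
    any neighbouring atom's expanded sphere."""
    coords = np.asarray(coords, dtype=float)
    radii = np.asarray(vdw_radii, dtype=float) + probe_radius
    unit = sphere_points(n_points)
    areas = np.zeros(len(coords))
    for i in range(len(coords)):
        pts = coords[i] + radii[i] * unit           # test points for atom i
        buried = np.zeros(len(pts), dtype=bool)
        for j in range(len(coords)):
            if i == j:
                continue
            d = np.linalg.norm(pts - coords[j], axis=1)
            buried |= d < radii[j]
        exposed_frac = 1.0 - buried.mean()
        areas[i] = exposed_frac * 4.0 * np.pi * radii[i] ** 2
    return areas  # per-atom SASA in square angstroms

# Toy example: two carbon-like atoms (vdw ~1.7 A) placed 3 A apart.
xyz = [[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]]
vdw = [1.7, 1.7]
print(sasa(xyz, vdw, probe_radius=1.4).sum())  # water-sized probe
print(sasa(xyz, vdw, probe_radius=0.5).sum())  # smaller probe
```

A smaller probe rolls into crevices that a water-sized probe skips over, which is presumably why a 0.5 A probe recovered the correlation in some of the cases Doweyko showed.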
All in all, I am having a nice time. Tomorrow's speakers include Rich Friesner and Roderick Hubbard (Vernalis), among others.
Hey...I got your note! Sorry I missed you, I'd have given you the grand tour.
Glad you're enjoying the conference! Boston is indeed a fun city.
The key to good collaboration between synthetic chemists and modelers is that each needs to have realistic expectations of what the other can deliver as members of the team. Each should try to demystify their discipline and take the time to explain the underlying assumptions.
One comment on computational methods used in drug discovery: it is extremely difficult for a person intimately associated with a software company to be more familiar than anyone else with drug discovery computational methods. The reason is that people in one software company don't get to see the offerings of other companies in much detail, and certainly not in the 'battlefield' situation of live drug discovery projects, where apparently trivial weaknesses are exposed without mercy.
The issue of pharmaceutical companies not investing more in computational resources is not just a case of being risk averse. There is a lot that commercial software cannot do, or does so poorly that it is effectively useless. The software (which can sometimes be quite expensive) has to pay its way and demonstrate added value. One risk pharmaceutical companies do face, however, is becoming overly dependent on a single vendor.
I think you are quite right... one of the problems is that experimental scientists cannot put their finger on the exact value that modeling brings to their work and to SBDD. But Friesner's point was that experimental scientists have to be bold and willing to test new programs extensively; that's the way any computer program is developed. What he was saying is that pharma cannot expect good predictive quality from the programs if it is also averse to trying them out. As we all know, this kind of testing is essential for development. You also made an excellent point about software being useless if it's not correctly used. Docking is not like doing single-point energy evaluations on two conformers at a high level of theory; in that case, a monkey really can press a button and usually (with lots of exceptions!) get good results. But pharma cannot expect novices to press a button and get good leads out of docking programs. Sometimes you get the feeling that that's what they want: novices getting good results simply by pressing buttons. That's not going to happen, and we will always need people who have a good knowledge of the programs, scoring functions, benchmarking sets, and hit lists.
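To make the "testing" concrete, here is the sort of toy retrospective check I have in mind (my own sketch, not Friesner's protocol; the scores and activity labels are invented placeholders): seed known actives among decoys, rank everything by docking score, and see how strongly the actives are enriched near the top of the list.

```python
def enrichment_factor(scores, is_active, fraction=0.01):
    """EF at a given fraction of the ranked list: (hit rate in the top
    slice) / (hit rate in the whole library). Lower score = better pose,
    as in most docking programs."""
    ranked = sorted(zip(scores, is_active), key=lambda x: x[0])
    n_top = max(1, int(len(ranked) * fraction))
    hits_top = sum(active for _, active in ranked[:n_top])
    hits_all = sum(is_active)
    return (hits_top / n_top) / (hits_all / len(ranked))

# Hypothetical docking scores for 8 compounds, 3 of them known actives.
scores = [-9.2, -6.1, -8.7, -5.0, -7.9, -4.4, -6.8, -5.5]
actives = [1, 0, 1, 0, 1, 0, 0, 0]
print(enrichment_factor(scores, actives, fraction=0.25))  # top 2 of 8 -> ~2.7
```

An enrichment factor well above 1 in the top few percent of a benchmark run is exactly the kind of evidence that should make an experimentalist take a docking program seriously; random ranking gives an EF of about 1.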
On the subject of being bold in testing new programs, I'll mention that my boldness depends on the degree to which I believe in the underlying science. Obviously with other things to do, I like the evaluation to be easy (properly documented code, few bugs) and free (charges for evaluation licenses are a big turnoff). Some sort of partnership between the software supplier and pharma might be in order although to make this work well there would probably need to be some pricing advantage for the pharma partner.
Some of the software companies can be less than helpful with their licensing models. Small pharma companies tend to suffer disproportionately because they can't afford annual maintenance for something they might use two or three weeks a year. Sometimes the software vendor reckons that the pharma company has become overly dependent on their software, which leads to brinkmanship and a non-fairytale ending for both parties.
My comment about some commercial software being effectively useless was not qualified by 'when not used correctly'. Some commercial (and in-house) software is effectively useless even when used correctly. Sad but true!
gmc, that's interesting... can you mention some specific programs you have in mind? The thing is, as you know, every program has some utility in some context. Again, we cannot expect novices to get great results with them... the context, system specs and limitations need to be understood.
Also, I don't understand why even small pharma companies should really suffer... for example, a three-year license from Schrodinger for all their programs (which is a pretty good deal and close to the best on offer right now, even though it will not solve all problems) is about 20-30K. Is that really too much compared to the investment in other technologies?
I don't want to make a pitch for Schrodinger, but I think that their underlying science for Glide is as sound as you can get right now, and I believe it would be worth being bold and testing those programs. Moreover, they are including some really neat developments in the next Glide XP version. Their 2006 paper in J. Med. Chem. (Friesner et al.) is really worth checking out.
One point I totally agree with is the cost... as you know, it has been shown repeatedly that some of the best programs are usually the cheapest or free ones (such as Gaussian).
I'm not going to be drawn into naming specific programs that I believe to be virtually useless. Sometimes software will have been built to apply a predictive model that is fundamentally flawed. I will address this in more detail in future GMC posts.
However, I should point out that my earlier comments were not about either of the two software packages that you mention in your previous comment. I am familiar with both and have found them useful in my own work. That said, the license fees you quote do seem rather low.
My point about the problems facing small companies is that it is more difficult to justify a diverse software portfolio when you've only got one or two modelers. This is because the licensing arrangements typically don't scale particularly well and there are usually significant fixed costs. With one or two modelers, some software might only get used for the equivalent of a couple of weeks a year. Another area where licensing arrangements can get sticky is when large Linux clusters are used. Sometimes vendors try to charge by the number of processors available, even though the client would be happy to run the software on a much smaller number.
That's OK, no sweat! I am not trying to draw you into naming names... just for the record, I don't have any axe to grind; I was just curious. You are probably right that some software may get used only a couple of weeks per year, but the question is how much it contributes to lead optimization, and how much other effort it saves. I think it's still hard to assess this qualitatively, let alone quantitatively, so it's going to take some time.