
CARDD: Relatively simple this time

Whenever I see the title "Computer-Aided Rational Drug Design" in some paper, my doubts are not about the CADD part but about the R part, that is, how rational the drug design has actually been. More simply put, is it "rational" or "rational in retrospect"?

But sometimes things are simple, and there is obvious rationality in the approach. Like this J. Med. Chem. study on indole-3-carbinol (I3C), a common dietary constituent from cruciferous vegetables, from which analogs that inhibited Akt kinase were made. I3C apparently forms oligomers in the stomach, depending on the pH. These oligomers are thought to be the dominant active species, although their proportion is only ca. 20%. In the paper, the authors decided to study the oligomers depicted below and tried to find common elements that would help them design better analogs whose proportion would naturally be greater.

[Image: the I3C oligomers studied]

The CADD part consisted of simple minimization of the structures and looking at the N-N distance in the oligomers. The hypothesis was that if one constructed other compounds with similar N-N distances, maybe the activity would be the same.
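For what it's worth, this kind of check is trivial to sketch today. Here is a minimal example (not the authors' protocol) using RDKit to embed and minimize 3,3'-diindolylmethane, one of the I3C condensation products, and measure the N-N distance; the SMILES and force field choice are my own assumptions.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# 3,3'-diindolylmethane (DIM), one of the I3C condensation products (SMILES assumed)
smiles = "c1ccc2c(c1)c(c[nH]2)Cc1c[nH]c2ccccc12"
mol = Chem.AddHs(Chem.MolFromSmiles(smiles))

AllChem.EmbedMolecule(mol, randomSeed=42)   # generate a 3D conformer
AllChem.MMFFOptimizeMolecule(mol)           # crude force-field minimization

# measure the distance between the two indole nitrogens
n_idx = [a.GetIdx() for a in mol.GetAtoms() if a.GetSymbol() == "N"]
conf = mol.GetConformer()
dist = conf.GetAtomPosition(n_idx[0]).Distance(conf.GetAtomPosition(n_idx[1]))
print(f"N-N distance: {dist:.2f} A")
```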


I have to say that in the absence of anything indicating otherwise, I always find such an assumption simplistic. That's because proteins are funny entities, and molecules and active sites are often promiscuous, with similar molecules binding in totally different ways. This promiscuity has negated many previously accepted hypotheses about "common pharmacophores". People lay molecules on top of each other, look for common elements, and design new compounds containing those elements in the right places. However, nature often has other plans in mind, and we get bamboozled when it turns out that these "common elements" bind in quite different places. Also, minimized structures depend on the force field being used, and different force fields give different structures. Lastly, there is no a priori reason to assume that proteins will bind conformations corresponding even to local minima, let alone global ones. All this is precisely why de novo ligand design is so tricky.

In this case though, the hypothesis is much more reasonable because the molecules are simple, planar, and not very flexible. One may expect them to have a very limited number of conformations, and this is probably how it turned out. The authors made some simple modifications, indicated below, including rigidifying the linker between the two indole rings, and came up with analogs retaining the N-N distance. The analogs worked, further modifications added polar electronegative substituents to enhance potency and selectivity, and everyone was reasonably happy in the end.

[Image: analog modifications retaining the N-N distance]

It's interesting that they did not find it useful to do some docking and active-site examination. Methinks that could have helped weed out some of the analogs with added polarity but with the "wrong" substituents. In any case, I think this is a good example of how computers can help in the simplest of ways to guide drug design. Sorry...I mean ligand design.

Can water stand the heat of a hydrophobic carbon nanotube?



"If you cannot stand the lipophilia, get out of the nanotube"- Anon

Yes. And this sort of ties in with the recent discussions of water in hydrophobic channels and cavities that I have been reading, discussions that can have a direct impact on ligand design (and also "rational" drug design). In at least one case, a simulation of water inside a carbon nanotube, the authors find that water does indeed get inside this greasy den. This study is part of many recent studies that I believe challenge our basic notions of the "hydrophobicity" of surfaces and cavities.

I was not aware of this CNT paper, which was published in Nature in 2001 (Nature 414, 188-190, 8 November 2001). The paper involved simulating water around a CNT. The researchers found that a few water molecules do enter the CNT in single file and exit, as the CNT is wide enough to accommodate only a single water molecule across its diameter. The interesting explanation for why the waters ever get inside such an unwelcoming environment comes from the energetics: apparently the waters can form two hydrogen bonds inside the nanotube, while fluctuations in the number of hydrogen bonds in bulk mean that they are incompletely hydrogen bonded out in the bulk as well. Part of the explanation may also be that the hydrogen bonds inside the tube have better geometrical characteristics, although I am not sure how much enthalpic advantage such slight changes in geometry provide to a hydrogen bond.
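As an aside, "hydrogen bonds" in such simulations are usually counted with a simple geometric criterion. Here is a minimal sketch of the kind of definition typically used; the distance and angle cutoffs are common textbook choices, not necessarily those of the Nature paper.

```python
import numpy as np

def is_hbonded(donor_o, donor_h, acceptor_o, r_cut=3.5, angle_cut=30.0):
    """Geometric hydrogen-bond criterion: O...O distance below r_cut (Angstrom)
    and H-O...O angle below angle_cut (degrees)."""
    r = np.linalg.norm(acceptor_o - donor_o)
    oh = donor_h - donor_o
    oa = acceptor_o - donor_o
    cos_theta = np.dot(oh, oa) / (np.linalg.norm(oh) * np.linalg.norm(oa))
    angle = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    return (r <= r_cut) and (angle <= angle_cut)

# toy example: a roughly linear O-H...O arrangement about 2.9 A apart
print(is_hbonded(np.array([0.0, 0.0, 0.0]),
                 np.array([0.96, 0.0, 0.0]),
                 np.array([2.9, 0.1, 0.0])))   # True
```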

This is a very significant fact in my opinion, and it does not completely tie in with the study of Friesner et al., where they say that expelling a water molecule from a protein cavity into bulk may be enthalpically favourable because in bulk the water is assumed to form its full complement of hydrogen bonds. I think the verdict may still be out as far as the enthalpic gain of waters expelled from hydrophobic protein active sites is concerned. In the nanotube study, on the other hand, the waters are certainly entropically constrained inside the tube, which means that an enthalpic advantage is what drives them in, even if they don't stay there forever.

However, the flags that are always raised in my mind when I read such a study pertain to the method dependence of the results. After all, any model is only as good as the parameters in it. For example, the very reason people simulate water under confinement is that they cannot easily study it by experiment, yet at the same time they are using bulk or gas-phase parameters to represent the water molecules. In this study, the authors use Bill Jorgensen's TIP3P water model, a very good model that is nonetheless parametrized to reproduce bulk and gas-phase properties. The fact that the simulation results depend on the model parametrization becomes clear when the authors change the depth of the Lennard-Jones energy well by a mere 0.05 kcal/mol: they observe a drastic change in the wetting behaviour, with a two-step wetting-dewetting transition on a nanosecond timescale. The question that arises is: what if they had used another value for the well depth? Would they have then observed no wetting at all? And would that have been a faithful representation of the real world? As usual, the question here is one of the transferability of parameters, in this case whether parameters derived for the bulk apply under confinement. In the absence of a better hypothesis, there may be good reason to believe in this transferability, but as usual, with what confidence the model represents the "real world" is another question.
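To put that 0.05 kcal/mol in perspective, here is a toy sketch of a 12-6 Lennard-Jones carbon-oxygen interaction with the well depth reduced by that amount; the sigma and epsilon values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def lennard_jones(r, epsilon, sigma):
    """12-6 Lennard-Jones energy (kcal/mol) at separation r (Angstrom)."""
    return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

sigma = 3.3            # Angstrom, illustrative carbon-oxygen value
eps_original = 0.11    # kcal/mol, illustrative
eps_reduced = eps_original - 0.05

r = np.linspace(3.0, 8.0, 500)
u_orig = lennard_jones(r, eps_original, sigma)
u_red = lennard_jones(r, eps_reduced, sigma)
print(f"Well depth: {u_orig.min():.3f} -> {u_red.min():.3f} kcal/mol")
```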

On a related note, is anyone aware of studies in which such confined-water parameters have been experimentally obtained?

How much water does it hold?

The last couple of days, in connection with the behaviour of water in protein active sites, I have been reading about water and the hydrophobic effect in general, and I find it fascinating how little we really know about either. For one thing, we still seem to have miles to go in understanding ice, bulk water, and the transition between the two. There was a debate between Richard Saykally of Cal Berkeley and Anders Nilsson of Stanford about the number of hydrogen bonds that a water molecule makes in bulk water. One would think that something as simple as this would have been unraveled by now, but nothing about water is simple. Nilsson published the pretty amazing contention that many water molecules in bulk liquid water at room temperature form only two hydrogen bonds, which would mean that those hydrogen bonds are unusually strong. The paper sparked heated debate, and Saykally was a vocal opponent, countering with his own experiments and contending that it would be a walk across the street to Stockholm if Nilsson's viewpoint turned out to be true. I am no water expert, but the best value for the energy of an average hydrogen bond in bulk water that I have come across seems to be 1.5 kcal/mol (determined by Saykally), which can definitely be less than that in a protein active site. At least for now, this discussion of "average" hydrogen bonds generally sounds a little slippery to me.

I think this whole issue of how much a hydrogen bond in bulk water is worth could affect how much energy a water molecule displaced from a protein active site might gain. Friesner et al. in their recent publications indicate that water molecules which cannot form their full complement of hydrogen bonds in a protein active site because of confinement could get an enthalpic advantage if they are pushed out into solvent. I think there's much less ambiguity about the entropic advantage that such a water molecule could have. Then there's also Dunitz's whole argument about entropy-enthalpy compensation (see previous post) which could factor in...I am still really groping about for coherence in this landscape.

As far as hydrophobic interactions are concerned, while the general idea that the hydrophobic effect is entropically driven seems correct, it also seems situation-dependent, varying especially with temperature. One basic but important point is that the enthalpy of transfer of a nonpolar solute into water is close to zero, and can even be slightly favourable. But when two nonpolar surfaces aggregate in water, it's the entropy gained by disordering the waters clustered around the solutes that is very favourable. As usual, even these seemingly simple issues are draped in subtleties. Among others, Themis Lazaridis has explored this very interestingly in a review (2001), in which he cements the "classical" view of the hydrophobic effect. He also counters arguments based on the similar energy of cavity creation in water compared to some other solvents by saying that while this may be true, the decomposition of the factors contributing to that energy might be different in the two cases.

And finally, and I should have posted about this a long time back, Water In Biology, a great blog all about the molecular intricacies of water, from Philip Ball, staff writer for Nature and author of many excellent books, including H2O: A biography of water.

Water really seems like a testament to that quote by T.S. Eliot about coming back to the beginning again and again, and newly getting to know a place every time. There are just so many things about it we don't understand.

Amidine-aminoisoquinoline substitution


A nice replacement that preserves overall electronic characteristics such as hydrogen bonding, while increasing absorption and lipophilicity and decreasing basicity. From a nice review on Factor Xa inhibitor development in Structure-Based Drug Discovery.

The strange case of entropy-enthalpy compensation

My original intention was to read about the role water molecules play in active sites, but one thing led to another, and I ended up spending more time on the fascinating topic of entropy-enthalpy compensation. I drifted from water molecules to this topic primarily because of a 1995 paper by Jack Dunitz, in which he derives the conclusion that for a typical hydrogen bond of ~5 kcal/mol there is entropy-enthalpy compensation, which means that the typical free energy of transfer of a hydrogen bond from bulk water to a protein active site can be close to zero. There were some assumptions in this paper, but the concept mostly seems to hold.

Entropy-enthalpy compensation (EEC) is actually a pretty logical concept. Let's say you are designing a ligand to bind a protein and want to increase the binding affinity by adding hydrophobic groups to it. As you add these groups, the ligand will (usually) bind tighter because of increased vdW contacts as well as the hydrophobic effect, but the word "tighter" already indicates that it will do so with a loss in entropy. Thus, the gain in enthalpy of binding is offset by a loss in entropy. So even if you modify the ligand this way, the resulting change in the free energy of binding may be close to zero, or at least the free energy will stay roughly constant, because of these two opposing quantities.
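A toy numerical illustration of the idea (all numbers invented): as a hypothetical ligand series gains binding enthalpy, the entropic penalty grows in step, and ∆G = ∆H - T∆S barely moves.

```python
T = 298.15  # K

# (analog, dH in kcal/mol, dS in cal/mol/K) - invented numbers for illustration
series = [
    ("parent",       -6.0,  -5.0),
    ("+ methyl",     -7.2,  -9.0),
    ("+ ethyl",      -8.5, -13.4),
    ("+ cyclohexyl", -10.1, -18.8),
]

for name, dH, dS in series:
    TdS = T * dS / 1000.0   # convert cal to kcal
    dG = dH - TdS
    print(f"{name:>12s}: dH = {dH:6.1f}, -TdS = {-TdS:5.1f}, dG = {dG:5.1f} kcal/mol")
```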

However, there are some pretty striking exceptions, and this came to light when I read a paper by George Whitesides in which his group studied ligand binding to carbonic anhydrase (CA). Surprisingly, as they added more hydrophobic groups to some sulfonamide ligands by extending the side chain, they observed almost no change in the free energy of binding. In fact, they saw this effect with different classes of side chains. Clearly, EEC was taking place. But remarkably, the effect was due to exactly the opposite of what usually happens: the enthalpy was becoming more unfavourable and the entropy more favourable. Needless to say, this is not what one expects. The authors proposed a nice model in which they believe there is some sort of negative cooperativity; as you add more atoms to the side chain, the binding of the initial atoms, which previously bound better, is somehow weakened. This worsens the enthalpy of binding and improves the entropy, because the ligand becomes free to wiggle around more. Even though this model is consistent with what is happening, the exact details of how it happens are not clear.

Clearly, EEC is an important concept in rational ligand and drug design. Formerly it was thought to be a "phantom phenomenon", an artifact of experimental measurements and errors. But Whitesides contends that with the advent of isothermal titration calorimetry (ITC), it has become recognized as a very real phenomenon. Practically, it means that small molecules with relatively rigid structures could have the best potency and binding affinity, because we would then get a good and favourable increase in binding enthalpy without having to pay the corresponding cost in entropy.

However, as I was reading this, I realised that while the chemist would aim to design such rigid, high-affinity ligands, nature already seems to have solved the EEC problem. Consider the various kinds of protein-protein interactions, where highly flexible loops serve as recognition elements. Would a chemist ever design a loop for molecular recognition? Yes and no. Yes, because designing such a loop would build in the versatility and flexibility to explore conformational space. No, because of the entropic cost noted above. So how does nature circumvent this problem? Clearly, there must be a way in which nature pays the cost of unfavourable entropy. A couple of ways come to mind:

1. Through the very existence of the protein! Consider how much of a penalty nature pays in synthesizing and compactly folding the protein in the first place. By pre-paying this entropic cost, nature makes the later binding event entropically less costly.

2. Through 'steric confinement'. Loops are only a small part of a giant protein surface. The coming together of these surfaces is hydrophobically driven by the expulsion of water. It is then relatively easy for loops to be recognised, as they are already close to the other surface. Again, the entropic cost has been paid by the rest of the protein surface.

3. Through optimizing the binding enthalpy so much that the unfavourable entropy is not so much of an issue. This is of course what chemists try to do all the time, but nature does it elegantly. Think of the umpteen cyclic peptides and macrocycles that nature uses for molecular recognition. Admittedly, one of the ways nature solves the EEC problem is by designing, through evolution, ultrapotent ligands in which a favourable ∆H compensates for an unfavourable ∆S.

Once again, nature rules by striking the right balance through relentless optimization, and we have much to learn from it for tackling EEC.

Culprit?

Mark Murcko, the CTO of Vertex, gave a nice keynote presentation on SBDD at the CHI conference. While talking about our general ability to design drugs, he said that a famous scientist he knew had once told him that we really know only how to design ligands, not drugs, in a rational way. Ligand design by itself is quite challenging, and it is largely a chemical problem in the broadest sense; drug design is a much more difficult and multidisciplinary endeavor. By the way, Murcko did not entirely agree with him.

But Murcko thought it prudent to not name the scientist!

Yesterday, while randomly flipping through a review by a well-known scientist, I came across this statement:
"The ability to design drugs (so-called ‘rational drug design’) has been one of the long-term objectives of chemistry for 50 years. It is an exceptionally difficult problem, and many of its parts lie outside the expertise of chemistry. The much more limited problem – how to design tight-binding ligands (rational ligand design) – would seem to be one that chemistry could solve, but has also proved remarkably recalcitrant. The question is ‘Why is it so difficult?’ and the answer is ‘We still don’t entirely know’."
Needless to say, I get the very strong feeling that Murcko was talking about this particular scientist. I have to say that my own opinions swing toward those enumerated by the scientist. Care to guess who? (One hint: conference and company location...)

I am not the only one! Getting past the original GBSA

The Generalized Born Surface Area (GBSA) implicit solvation model is one of the most important continuum solvation models used for deciphering protein-ligand interactions in drug discovery from an energetic standpoint. For a pretty long time, I had considerable trouble, to put it mildly, understanding the original JACS paper by Clark Still. A couple of days ago, I gave up all pretense of understanding the math and decided to think purely from a physical and logical viewpoint, which should be natural for any chemist. To my surprise, I found that it then became rather easy to understand the crux of the model.

Then yesterday, I came across notes prepared by Matt Jacobson for his class at UCSF. Here is his opening statement about the GB model:
"Introduction to GB"
• Originally developed by Clark Still.
Original paper is virtually impossible to understand;
GB approximation remains a bit mysterious.
Ha! I stand vindicated. The original paper also contains a term which Still seems to have plucked out of nowhere, and Jacobson also mentions this in his notes. Needless to say, the notes are pretty good...

Anyway, I thought I would put down my two cents' worth and my qualitative understanding of the model, without mentioning any equation (although one of the equations is "just" Coulomb's law).

The basic goal is simple: to ask what the solvation energy of an ion or solute in solution (most commonly water) is. This in turn is related to the energy needed to transfer that ion from vacuum into the solvent.
First, let's neglect the charge of the solute. Now for transferring anything into a solvent, we obviously need to create a cavity in the solvent. Thus there needs to be a term that represents the free energy of cavity formation.

Call this ∆G (cav)

Then there is also the van der Waals interaction that the atoms of the solute have with the solvent atoms lining this cavity, which contributes its own free energy term.

Call this ∆G (vdw)

Now, intuitively, both these quantities will depend on the surface area of the solute, or more accurately, the solvent-exposed surface area. The greater the surface area, the bigger the cavity and the larger ∆G (cav). Likewise, the greater the surface area, the more vdW contact with the solvent atoms, and the larger ∆G (vdw).

Thus, ∆G (cav) + ∆G (vdw) is proportional to the solvent-exposed surface area (SA).

So ∆G (cav) + ∆G (vdw) = k * SA

where k is a constant of proportionality. Of course, the term on the right is summed over atoms (as are all the other terms). The constant k can be found from a controlled experiment: transfer uncharged solutes, for example various hydrocarbons, into the solvent and measure the free energy changes. So there, we have k.
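As a toy illustration of how such a constant might be extracted (the numbers below are invented, not experimental data), one can do a simple linear fit of transfer free energies against surface areas:

```python
import numpy as np

# Invented illustrative data: transfer free energies (kcal/mol) of small
# hydrocarbons into water vs. their solvent-accessible surface areas (A^2)
sasa = np.array([190.0, 220.0, 250.0, 280.0])
dG = np.array([1.9, 2.1, 2.4, 2.6])

k, b = np.polyfit(sasa, dG, 1)   # fit dG = k*SA + b
print(f"k = {k * 1000:.1f} cal/mol/A^2")
```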

Now let's turn to the other component, the interaction due to the charge. Naturally, the term that comes to mind is a Coulombic energy term that goes as 1/r, where r is the solvent atom-solute atom distance.

Call this ∆G (coul), which depends on the charges and on the distance between the solute and solvent particles.

But that cannot be all. If the Coulombic energy were the only term for the charge interaction, then, since it depends only on the distance between the two particles, it would make no provision for the size of the particles. But we know that how well an ion is solvated depends crucially on its ionic radius (Li+ vs Cs+, for example), with small ions getting better solvated. So there needs to be a term that takes into account the ionic radius and the charge on the ion (because what really matters is the charge/size ratio). This is the Born energy term. It also includes another factor: since what we really care about is the change in free energy, it depends upon the dielectric constant of the solvent, with a high-dielectric solvent like water solvating ions better.

Call this ∆G (Born), which depends on the ionic charge, the ionic radius, and the dielectric constant of the solvent.

Therefore, the total energy of solvation,
∆G (solv) = ∆G (cav) + ∆G (vdw) + ∆G (coul) + ∆G (Born)
with the proportionalities and parameters as enunciated above.
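For the mathematically curious, here is a minimal sketch of how the charge part is commonly written down in Still-style generalized Born models; this is the textbook form of the GB expression rather than Still's original code, and the Gaussian damping inside f_GB is the term that seems "plucked out of nowhere".

```python
import numpy as np

def gb_polarization_energy(q, r, born_radii, eps=78.5):
    """Generalized Born polarization energy (kcal/mol).
    q: atomic charges (e), r: NxN distance matrix (Angstrom),
    born_radii: effective Born radii (Angstrom), eps: solvent dielectric."""
    energy = 0.0
    for i in range(len(q)):
        for j in range(len(q)):
            aij = born_radii[i] * born_radii[j]
            # f_GB interpolates between the Coulomb limit (large r) and the
            # Born limit (i == j); the exponential damping is the empirical part
            f_gb = np.sqrt(r[i, j] ** 2 + aij * np.exp(-r[i, j] ** 2 / (4.0 * aij)))
            energy += q[i] * q[j] / f_gb
    return -166.0 * (1.0 - 1.0 / eps) * energy

# For a single ion this reduces to the classical Born formula, -166*q^2/a*(1 - 1/eps)
print(gb_polarization_energy(np.array([1.0]), np.array([[0.0]]), np.array([2.0])))
```

For a +1 ion with a 2 Å radius in water, this gives roughly -82 kcal/mol, the familiar Born solvation energy.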

There it is, the GBSA model of implicit solvation, widely used in molecular mechanics and MD protocols. Similar continuum solvation models are also used in quantum mechanical calculations. The model obviously has shortcomings. It neglects the nuances arising from local changes in the behaviour of individual water molecules. The Born energy term is also an approximation; a more complete treatment would include higher-order terms and long-range solvation effects.

But given its relative crudeness, GBSA is surprisingly accurate quite often, such as in the MM-GBSA protocol of Schrodinger. Looking at the terms, it is not surprising to see why this is so, since the model does a good job of capturing the essential physics of solvation.

Charity begins in the university

I mentioned in the last post how the transition time from academic science ---> industrial technology needs to be shortened. It struck me how many of the things being said at the conference by pharma scientists originally came from academia, and I cannot help but think of the technologies that people in pharma currently rave about, all of which were developed in academic laboratories.

Consider the recent use of NMR spectroscopy in studying the interaction of drugs with proteins, a development that has really taken place in the last five to ten years. NMR is essentially an academic field which has been around for almost fifty years now, originally developed by physicists who worked on radar and the bomb, and then bequeathed to chemists. It is the humdrum tool that every chemist uses to determine the structure of molecules, and in the last twenty years it was also expanded into a powerful tool for studying biomolecules. What if pharma had actually gone to the doorstep of the NMR pioneers twenty years back, and asked them to develop NMR especially as a tool for drug discovery? What if pharma had funded a few students to focus on such an endeavor, and promised general funding for the lab? What if Kurt Wuthrich had been offered such a prospect in the early 90s? I don't think he would have been too averse to the idea. There could then have been substantial funding to specially focus on the application of NMR to drug-protein binding, and who knows, maybe we could have had NMR as a practical tool for drug discovery ten years ago, if not as sophisticated as it is now.

Or think of the recent computational advances used to study protein-ligand interactions. One of the most important advances in this area has been docking, in which one calculates the interactions that a potential drug has with a target in the body, and then thinks of ways to improve those interactions based on the structure of the drug bound to the protein. These docking programs are not perfect, but they are getting better every day, and are now at a stage where they are realistically useful for many problems. Docking protocols are based on force fields, and the paradigm in which force fields are built, molecular mechanics, was developed by Norman Allinger at UGA and then improved by many other academic scientists. Only one very effective force field was developed by an industrial scientist, Thomas Halgren at Merck. During the 80s and 90s, force fields were regularly used to calculate the energies of simple organic molecules. One can argue that at that point they simply lacked the sophistication to tackle problems in drug discovery. But what if pharmaceutical companies had then channeled millions of dollars into these academic laboratories specifically to focus on adapting these force fields for drug-like molecules and biomolecules? It is very likely that academic scientists would have been more than eager to make use of those funding opportunities and dedicate some of their time to exploring this particular aspect of force fields. The knowledge from this specific application could have been used in a mutually beneficial, cyclic manner to improve the basic characteristics of the force fields, and perhaps we could have had good docking programs based on force fields in the late 90s. Pharma could also have funded computer scientists in academia to develop parallel processing platforms specifically for these applications, since much of the progress in the last ten years has been possible because of the exponential rise in software and hardware capability.

There are many other such technologies (fabrication, microfluidics, single-molecule spectroscopy) that can potentially revolutionize drug discovery. All these technologies are being pursued in universities at a basic level. As far as I know, pharma is not providing significant funding to universities specifically to adapt these technologies to its benefit. There are of course a few very distinguished academic scientists who are focused on shortening the science ---> technology timeframe; George Whitesides at Harvard and Robert Langer at MIT immediately come to mind. But not everybody is a Whitesides or a Langer, both of whom have massive funding from every imaginable source. There are lesser-known scientists at lesser-known universities who may also be doing research that could be revolutionary for pharma. Whitesides recently agreed to license his lab's technologies to the company Nano-Terra; Nano-Terra would get the marketing rights, and Harvard would get the royalties. There are certainly a few such examples. But I don't know of many where pharma is pouring money into academic laboratories to accelerate the transformation of science into enabling technology.

In retrospect, it's actually not surprising that future technologies are being developed in universities; in fact, it has almost always been the case. Even now-ubiquitous industrial research tools like x-ray crystallography, sequencing, and nuclear technology were originally products of academic research. Their great utility immediately catapulted these technologies into industrial environs. But we are in a new age now, when many complex problems suddenly seem within reach of our efforts and intellect. More than at any other time, we need to shorten the transition time between science and technology. To do this, industry needs to draw up a list of academic scientists and labs doing promising research, and try to strike deals with them to channel their research acumen into tweaking their pet projects to deliver tangible and practical results. There would of course be new problems to solve. But such an approach would in general be immensely and mutually satisfying, with pharma possibly getting products on its tables in five years instead of ten, and academia getting funded for doing the work. The transition time may not always be sped up immensely. But in drug discovery, even saving five years can mean potentially saving millions of lives. And that's always a good cause, isn't it?

Boston ahoy. Pharma ahoy.

(Above: The view across the bridge at the World Trade Center and Below: The always scenic view across the Charles in front of MIT)

I am finally back from a trip that was both professionally and personally immensely satisfying. I could keep on talking about how great a place Boston is (the place where I stayed and the historic-places cruise I took around the harbor were just fantastic), but my praise has been somewhat tempered by two realisations. Firstly, the boss paid the tab, which makes it a little easier to have a good time. Secondly, everybody says that it's only in these four months that Boston is the best place on earth; any time after that, the enjoyment quickly starts to dwindle because of the pretty nasty weather. So if I could get a part-time dream job where I worked in Boston for only four months, that really would be it. Dream on.

One of the good things about this conference was that our patent on a new (potential) anti-cancer compound got filed just days before it began. That made it possible for me to present the work. The work was well received, although, as is always the case, there are miles to go before we can possibly sleep.

The conference itself was great, and it was held in a scenic location, the World Trade Center by the side of the harbor (although almost everything in Boston seems to be harbor-side). It was the first time I got a preview of what it's like to work in industry. I was happy to see that a camaraderie similar to that among academic scientists exists in the pharmaceutical industry too. However, I also got the feeling that this camaraderie is more guarded, and a little more exclusive. I may have been the only graduate student there among about a hundred participants. I was also surprised to see, though perhaps it's not so surprising in retrospect, that folks in industry do almost exactly the same kind of work that we do, at least in the very initial stages of drug design. But where they really get a head start is in validating early models, since they have massive in-house facilities and personnel for things like pharmacokinetics (investigating the properties of the drug in the body) and x-ray crystallography (obtaining a structure of the drug bound to the protein it is supposed to inhibit). So they can decide relatively early on whether to pursue or drop a prospective candidate. We are now planning to put our own compounds into animals, and I would have given anything to have had a crystal structure and pharmacokinetic data in the early stages when we had the lead. Pharma can do this, and they learn a lot from it.

The downside of working in pharma? You cannot talk! About 60% of the presentations at the conference did not have a single chemical structure in them. In most cases the only structure displayed was an already well-known one. It's really frustrating to be a chemist and not see the structural characteristics that lead to all those tantalizing pieces of biological and clinical data. And it looks like it's only going to get more proprietary. That's the only thing that makes me a little wary of working in pharma: the fact that you often cannot talk to people outside, even if you know that they could have the answers to your questions. Also, many of the technologies that are now roaring in pharma have their origin in basic science developed in academic labs. I keep imagining how much the science--->technology transit time could have been reduced if there had been collaboration between pharma and those academic labs in the initial stages. Of course there are IP issues, but one cannot help thinking about this.

But all in all, a very fruitful experience. Unfortunately I missed getting aboard Paul's grand tour of Harvard Chemistry (although I heard some really good piano in the chemistry lounge), but I look forward to seeing more of everything next year.

SBDD Boston 1


Boston is a beautiful city, and I am staying in a beautiful and merry location, Faneuil Market. Lots of good grub, a charming old marketplace with cobblestones, and history written all around. Boston Harbor is right across the road. Lovely.

But anyway, on to the conference highlights. Since I am a little pressed for time over these two or three days, I will simply briefly mention some of the more interesting stuff and observations and leave the details and links (of which there are many) for later.

1. There was almost unanimous agreement about the role of modeling in modern structure-based drug design (SBDD). There were some who rightly questioned the exact value and utility of different kinds of modeling, but no one thought it was not helpful. The real problem is not so much that synthetic chemists don't appreciate modelers (although that is a problem in some cases), but that there is something of an educational gap between modelers and chemists. The consensus was that the two camps simply should not see each other as competitors and/or as witch doctors; there should be vigorous discussion between them, especially outside formal channels. I don't think there were more than one or two drug design projects that did not involve some component of modeling. It's a pretty encouraging scenario, but of course there's still a long way to go, as there always seems to be in drug discovery.

2. One of the most enlightening sessions for me was a roundtable with one of the leading computational chemists in the world, probably someone more familiar with docking and other drug-discovery-related computational methods than almost anyone else: Richard Friesner of Columbia and Schrodinger. Friesner expressed surprise that large companies do not invest more in computational resources because they somehow think it's so "risky". He pointed out that the cost of implementing even a big computing grid is probably a fraction of the cost invested in HTS, RNAi and suchlike, many of which also turn out to be big risks. The take-home message really is that experimentalists should be bold and come forward to test docking programs, for example. Friesner also cited the success that pharma has had with Schrodinger programs using their libraries. Unfortunately, this knowledge is proprietary; more needs to come forth for academia and collaboration.

3. David Borhani from Abbott gave a nice talk about their Lck kinase inhibitors, which also led to the discovery of selective Hck inhibitors. A single hydrogen-bonding interaction was responsible for conferring the selectivity.

4. Mark Murcko, CTO of Vertex, gave a general overview of SBDD and how far it has come. He pointed to some of his favourite examples, including carbonic anhydrase, HIV protease, and of course Vertex's newest HCV protease inhibitors.

5. Arthur Doweyko from BMS invented a whole new solvation model cryptically named "Q" for selecting good poses. He rightly opined that it's actually good to separate the docking and scoring problems and address them individually. His "Q" basically deals with calculating the hydrophobic effect based on hydrophobic solvent-accessible surface area (SASA). He showed cases where the "simple" correlation between SASA and affinity prediction (or deltaG) failed. This was because in traditional SASA calculations the probe chosen is often water, with a radius of 1.4 A, which misses some of the finer features of the lipophilic surface. Doweyko mentioned that in some cases, simply changing the probe radius to 0.5 A gave better correlations (a minimal sketch of the probe-radius idea follows this list).

6. Gergely Toth from Sunesis talked about their well-known disulfide tethering approach, combined with computational approaches that included searching the tether's conformational space and MD.

7. Not surprisingly, there was lots of discussion about kinase inhibitors, with many technologies and protocols directed towards finding selective compounds. While allosteric inhibitors promise new frontiers, traditional ATP-competitive inhibitors are still widely pursued.

8. Other speakers included structural biologists, chemists, modelers, crystallographers, and "informaticians" from Novartis, Bayer, Lilly, and Pfizer, among others. Much discussion and musing, especially on modeling, HTS, and crystallography.
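As promised above, here is a minimal sketch of the probe-radius idea from Doweyko's talk. This is just a generic Shrake-Rupley SASA calculation with two probe sizes via MDTraj, not his "Q" method, and the input file name is hypothetical.

```python
import mdtraj as md

traj = md.load("ligand_pose.pdb")   # hypothetical input structure

# Shrake-Rupley SASA with a water-sized probe (1.4 A = 0.14 nm) and a 0.5 A probe
sasa_water = md.shrake_rupley(traj, probe_radius=0.14).sum()
sasa_small = md.shrake_rupley(traj, probe_radius=0.05).sum()
print(f"SASA with 1.4 A probe: {sasa_water:.2f} nm^2")
print(f"SASA with 0.5 A probe: {sasa_small:.2f} nm^2")
```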

All in all, I am having a nice time. Tomorrow's speakers include Rich Friesner and Roderick Hubbard (Vernalis), among others.

Pretty basic stuff

One of the more enlightening facts about conformational analysis that I have learnt in the last couple of years relates to a simple question: "What is the proportion of the axial conformer in (protonated) 3-fluoropiperidine?" Contrary to what most of our knowledge about six-membered-ring conformational analysis would suggest, the answer is:

100%. The axial conformer is present to the extent of 100%; the equatorial, 0%.


The situation arises because of a stabilizing C-F...N-H+ dipole effect that also raises the pKa of this molecule over what we would assume. This analysis was done in a series of papers by Lankin, Snyder et al.
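Out of curiosity, this is the kind of thing one can sketch in a few lines with RDKit: embed conformers of protonated 3-fluoropiperidine, minimize them with MMFF, and use the F...N distance to tell axial from equatorial. Whether a general-purpose force field actually reproduces the experimental preference is another matter; the SMILES and the distance-based interpretation are my own assumptions.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# protonated 3-fluoropiperidine (SMILES assumed)
mol = Chem.AddHs(Chem.MolFromSmiles("FC1CCC[NH2+]C1"))
cids = AllChem.EmbedMultipleConfs(mol, numConfs=20, randomSeed=42)
results = AllChem.MMFFOptimizeMoleculeConfs(mol)   # list of (convergence flag, energy)

f_idx = next(a.GetIdx() for a in mol.GetAtoms() if a.GetSymbol() == "F")
n_idx = next(a.GetIdx() for a in mol.GetAtoms() if a.GetSymbol() == "N")

for cid, (_, energy) in zip(cids, results):
    conf = mol.GetConformer(cid)
    d = conf.GetAtomPosition(f_idx).Distance(conf.GetAtomPosition(n_idx))
    # a short F...N distance (roughly 3 A) corresponds to the axial conformer
    print(f"conformer {cid}: E = {energy:.2f} kcal/mol, F...N = {d:.2f} A")
```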

This study is covered in a recent ChemMedChem review that tackles the wide range of amine basicities in drug-like molecules and the factors that influence them. I have now learnt to mentally protonate the requisite nitrogens when I see them in a drug, almost as a reflex action, but I still get a little bamboozled sometimes. While the influence of inductive and polar effects on the lowering of amine pKas is well known, effects like the one noted above are more subtle and unexpected, and such trends are discussed in the review.

For example, the 1,3-syn kind of fluorine-amine interaction noted above also extends to acyclic systems, and nicely explains the differences in basicity among the simplest of compounds: mono-, di-, and trifluoroethylamines (whose pKas drop progressively from around 10-11).


One of those sets of principles worth keeping in mind.