Intramolecular hydrogen bonds in medicinal chemistry

In the latest issue of J. Med. Chem., researchers from Roche in Basel present a nice analysis of intramolecular hydrogen bonds in druglike molecules. An internal hydrogen bond can intuitively confer an important property on a drug: it can make the drug more lipophilic by shielding the hydrogen-bonding groups from solvent. Thus, intramolecular hydrogen bonding has emerged as a useful strategy for improving membrane permeability.

The authors look at both the CSD and the PDB and carry out a reasonably exhaustive analysis of the hydrogen-bond (HB) motifs of all ligands in these two important databases. They find that internal hydrogen bonds in six-membered rings are the most common, followed by five-membered and then seven- and eight-membered rings. The HBs in five-membered rings lie at the edge of accepted geometric definitions of hydrogen bonding, which illustrates how difficult it is to define an HB by strict geometric criteria. The strongest HBs in six-membered rings are, not surprisingly, those between NH and C=O groups, followed by those between NH and sp2 nitrogen; in fact, nitrogen acceptors seem to be almost as common as carbonyl acceptors. The authors also find that particularly strong HBs form when the donating group is part of a resonance substructure such as an amide linkage (resonance-assisted hydrogen bonds). The percentages of hydrogen-bonded forms for various structural motifs are tabulated, which could be useful when deliberately designing in such HBs.
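
To give a flavor of what a geometric definition involves, here is a minimal sketch that embeds a small HB-capable molecule in 3D and checks the H···acceptor distance and donor-H···acceptor angle against loose, commonly quoted cutoffs. The molecule, the cutoff values and the workflow are illustrative assumptions, not the paper's actual protocol.

```python
# Minimal sketch, not the authors' protocol: flag a putative intramolecular
# hydrogen bond in a single conformer using simple geometric cutoffs.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

# Salicylamide as an illustrative HB-capable molecule (phenol O-H donor,
# amide C=O acceptor, closing a six-membered ring).
mol = Chem.AddHs(Chem.MolFromSmiles("NC(=O)c1ccccc1O"))
AllChem.EmbedMolecule(mol, randomSeed=42)
AllChem.MMFFOptimizeMolecule(mol)

# Locate the phenol oxygen, its hydrogen, and the carbonyl oxygen.
phenol_o = next(atom for atom in mol.GetAtoms() if atom.GetSymbol() == "O"
                and any(n.GetSymbol() == "H" for n in atom.GetNeighbors()))
h_atom = next(n for n in phenol_o.GetNeighbors() if n.GetSymbol() == "H")
carbonyl_o = next(atom for atom in mol.GetAtoms() if atom.GetSymbol() == "O"
                  and atom.GetIdx() != phenol_o.GetIdx())

pos = mol.GetConformer().GetPositions()
d_xyz = pos[phenol_o.GetIdx()]
h_xyz = pos[h_atom.GetIdx()]
a_xyz = pos[carbonyl_o.GetIdx()]

dist = np.linalg.norm(a_xyz - h_xyz)                 # H...A distance (Angstrom)
v1, v2 = d_xyz - h_xyz, a_xyz - h_xyz
angle = np.degrees(np.arccos(np.clip(
    np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)), -1.0, 1.0)))

# Loose illustrative cutoffs; HBs in five-membered rings often fall outside them.
print(f"H...A = {dist:.2f} A, D-H...A = {angle:.0f} deg, "
      f"HB? {dist < 2.5 and angle > 120.0}")
```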

After examining various features of these HBs across ring sizes, including length, angle and torsional dependence, the authors analyze the effects of internal HBs on membrane permeability. For this they synthesize four matched pairs of model compounds (eight compounds in all), in each of which one member can form an internal HB and the other cannot (usually because the donor H is replaced with a methyl group). They then measure parameters such as PAMPA permeability and logD and calculate clogP values, which serve as indicators of permeability and lipophilicity. They discover that, because of the fragment-based rules used in calculating clogP, computer programs often cannot capture the increase in lipophilicity resulting from internal HBs. This is a valuable finding that could be turned into a correction term in programs that calculate clogP.
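
A rough illustration of that blind spot, using an additive logP estimator on a hypothetical matched pair (these are illustrative stand-in structures, not the compounds from the paper):

```python
# Fragment/atom-additive logP estimators assign essentially additive values to an
# HB-capable molecule and its methyl-capped analogue; any extra lipophilicity
# conferred by the internal hydrogen bond in the former is invisible to the scheme.
from rdkit import Chem
from rdkit.Chem import Crippen

pair = {
    "free OH (internal HB possible)":    "CC(=O)c1ccccc1O",   # o-hydroxyacetophenone
    "OH capped as OMe (no internal HB)": "CC(=O)c1ccccc1OC",  # its methyl ether
}
for label, smi in pair.items():
    mol = Chem.MolFromSmiles(smi)
    print(f"{label:35s} calculated logP = {Crippen.MolLogP(mol):.2f}")
# The calculated values differ only by additive group contributions; the measured
# logD gap caused by the shielded, hydrogen-bonded conformation is not captured.
```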

The authors find that although internal hydrogen bonds do seem to improve logD, the relationship is not completely straightforward. A hydrogen-bond-capable compound can exist in closed (hydrogen-bonded) and open conformations, and the closed form is only appreciably populated if it is a relatively low-energy conformation compared to the open one. The authors show how quantum chemical calculations can be useful for qualitatively rationalizing such energy differences; in one case, for instance, the closed form was calculated to be too high in energy and was therefore not easily populated. Because equilibrium populations depend sharply on free energy differences, I would think that the closed form should not lie more than about 1.8 kcal/mol above the open form, at which point its population would only be about 5%.
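
A quick back-of-the-envelope check of that number, assuming nothing more than a two-state Boltzmann equilibrium at room temperature:

```python
# Population of the higher-energy conformer in a two-state (open/closed)
# equilibrium, from the Boltzmann factor at ~298 K.
import math

RT = 0.593  # kcal/mol at ~298 K

def minor_population(dG_kcal):
    """Fractional population of the conformer lying dG_kcal above the other."""
    k = math.exp(-dG_kcal / RT)
    return k / (1.0 + k)

print(f"{minor_population(1.8):.1%}")  # ~4.6%, i.e. roughly the 5% quoted above
```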

This overview should be useful in designing specific internal hydrogen bonds for use in drug design programs.

Kuhn, B., Mohr, P., & Stahl, M. (2010). Intramolecular Hydrogen Bonding in Medicinal Chemistry. Journal of Medicinal Chemistry. DOI: 10.1021/jm100087s

Quo Vadis, natural science?

On Wednesday last week the town where I live got 20 inches of snow in a twenty-four-hour period. I got an unexpected, happy day off work. Bizarrely, southern regions like DC and Baltimore got much more than northern ones; Baltimore got 40 inches and Philadelphia got about the same. Records were set in both places for the snowiest winters in recent history. People were left wondering and reeling at this capriciousness of the Norse gods.

So what could be the reason for this sudden onslaught of severe weather? That's akin to asking what could be the reason for cancer suddenly emerging in someone's body, or for a particular drug showing a slew of side effects. The reasons are non-obvious, often counterintuitive, complex, multifactorial and extremely hard to pin down. And that is also what one should say when asked to explain a particularly snowy winter.

But human beings don't work that way. Immediately there sprang up a debate about whether global warming could be responsible for the increased snow. Engaging in the common and never-dying fallacy of equating weather with climate, climate change skeptics declared the cold to be a slap in the face of AGW proponents. On the other side, while most climate scientists are pointing out that single weather events have scant connection with global warming, some proponents are also saying that this is actually a good instance of the effects of global warming, that global warming does predict extreme weather events, and that all this is simply part of the connected whole. More water vapor in the air, El Niño and other factors have been suggested as plausible contributors.

Now let's step back a little and think about this from the educated layperson's perspective. Less snow has commonly been predicted to be a consequence of global warming, but now the same explanation is being offered for lots of snow. The layperson can be excused for being skeptical of a model that seems to explain diametrically opposite events equally well. Of course, as mentioned before, more or less snow neither "proves" nor "disproves" global warming. But to me this is yet another reminder of why I don't say much about the topic these days; the whole damn thing has gotten so politicized that each side feels compelled to say something non-scientific just to make the other side shut up.

However, from a scientific perspective too, this issue illustrates the pitfalls that natural science faces in the twenty-first century. When I discussed it the other day with my father, who is an economics professor, he said, "Welcome to the social sciences." Social scientists face such problems all the time. What happens when a model becomes so complex that it can explain virtually any observation you throw at it? (To begin with, the model also becomes so complex that you stop truly understanding it; case in point: derivatives on Wall Street.) There surely seems to be a problem with a model that is invoked to explain both more precipitation and its absence. That would be akin to a molecular model that offers the same explanation for compounds with both high and low potencies against a protein target.
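
A toy numerical illustration of the point, with entirely made-up data and a deliberately over-parametrized fit:

```python
# With enough adjustable parameters a curve can be made to pass through
# essentially any set of observations; in-sample "explanation" then says
# nothing about genuine explanatory or predictive power.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)  # noisy toy "data"

for degree in (1, 3, 9):
    coeffs = np.polyfit(x, y, degree)
    residual = np.sum((np.polyval(coeffs, x) - y) ** 2)
    print(f"degree {degree}: sum of squared residuals = {residual:.4f}")
# The 10-parameter (degree-9) polynomial reproduces all ten points almost
# exactly, yet it would predict nothing reliable about an eleventh.
```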

The hallmark of a judicious model is that it is neither so spare as to be useless nor so full of parameters and variables as to fit almost any data point. A model that seems to explain everything (as sometimes seems to be the case with global warming) is a bad scientific model because in principle it's hard to see how it could be falsified (Popper again). In addition, you should always ask how many data points are enough to build confidence in a model; statisticians have struggled with this sampling problem for decades, and there is no straightforward general answer. Sadly, it's not climate scientists who first raised such issues through their model-building. That dubious honor belongs to evolutionary biologists.

Evolutionary biology is notorious for advancing adaptationist explanations that can account for almost any observed trait. For instance, polar bears are white because they are supposed to blend into their surroundings, but penguins are black because their skin should absorb enough sunlight to keep them warm. Now why does the first explanation not apply to the second case, and vice versa? Well, in the case of evolutionary biology the short answer given is "trade-offs". Depending on the details of the problem (in this case the species, its bodily requirements, genetic makeup and so on), in the first case the ability to camouflage won out over the need to remain warm, and vice versa in the second. But beyond a certain point it can be impossible to actually explain such trade-offs, since so much in evolution is a matter of contingency. And one can conveniently invoke trade-offs (Shazam!) as a magical whitewashing word for almost any trait. That's hardly an actual explanation.

Nevertheless, such ingenious explanations have often been advocated by evolutionary biologists. In a classic article, Stephen Jay Gould and Richard Lewontin shot down this relentless urge to wrap everything into an adaptationist program; their main point was that not every trait is the consequence of adaptation and natural selection, and some traits can simply be carried along for the ride with others without conferring any evolutionary benefit. The main merit of adaptationist explanations is their internal logic. However, internal logic by itself, no matter how tempting, does not make an explanation. In the absence of experimental data, such hypotheses about evolutionary adaptations are just that: good hypotheses waiting to be validated by good data. A professor of mine got so fed up with these ingenious evolutionary explanations for everything from homosexuality to sloths coming down from trees to bury their feces that he wrote a highly readable book about it. Again, it's not that these ideas are bad, but in the absence of causal evidence they can only remain respectable armchair speculation. So how then do we come up with explanations?

Sadly, this is where fields like evolution and climate science run into fundamental roadblocks of the kind faced by social scientists: the sheer complexity of the system thwarts attempts at clean experiments. The big problem with fields like psychology, sociology and economics is that it is often difficult or even impossible to perform controlled experiments. Admittedly the situation there is worse than in climate science, since the data themselves are variable and represent a moving target (was the state of mind of your experimental subjects the same on Monday as on Tuesday?). Consider the hundreds of pop science books on neuroscience claiming that things like fMRI scans can "explain" emotions like hate and jealousy, and even spiritual and religious experiences. Other problems notwithstanding, how on earth do we know that, at the very least, we are not influencing what we want to observe, much as observation shapes what is observed in quantum mechanics? But even in the apparently more rigorous discipline of climate science, models are built on data collected under less-than-ideal, non-isolated conditions from thousands of places over dozens of years. Who can guarantee that all of this data is of a consistent standard, let alone that none of it is contradictory? To be fair to climate scientists, they usually perform stringent checks on the validity of their models, but no checks can duplicate the fine differences in experimental conditions spread over thousands of data points and long time frames.

Lest one think that only the "softer" sciences face these problems, witness the current debate about string theory. Skeptics say that about the only reason the framework is so highly regarded is that it seems logically self-consistent, is mathematically elegant and tantalizingly seems to "feel right". All these qualities can be respectable, but I suspect that the pioneers of modern science in the eighteenth and nineteenth centuries would not have been happy with this state of affairs. From what I have read, there seem to be no hard experimental tests that could provide strong support for string theory.

That, then, is in my opinion the dilemma the natural sciences find themselves in, a dilemma that the social sciences have faced for centuries. In fact one can argue that the dilemma has been caused by the social sciences finally intersecting with the natural sciences, as their integrated whole has become more and more complex and now tackles extremely convoluted territory like the brain, the climate, the universe, human behavior, the economy, evolution and the mechanisms of drug action and disease. Faced with this kind of complexity, scientists have been resigned to picking between two quite unsatisfactory choices: either no explanation at all, or an "explanation" based on models, internal logical consistency, "aesthetics" and elegance (case in point: string theory) and ingenious-sounding armchair arguments. In many cases the underlying systems are simply so dense that scientists are forced to perform extensive parametrization and model building. There is probably an equation somewhere relating excessive parametrization to the risk of model failure.

Nonetheless, in the absence of controlled experiments, there is not much that science can lean on at this point. In fact one can argue that science actually proceeds in this way, by tentatively accepting hypotheses. As long as it's kept in mind that the hypotheses are only hypotheses, we still have a wall to grasp as we grope around in the dark. If we start regarding the hypotheses as explanations and facts, we will leave the safety of that frail wall to grasp at imaginary will-o'-the-wisps, at our own peril.

An alternative BBC list for the "educated" mind

So there's this little blurb going around on Facebook in which the BBC has listed 100 books written over the last 200 years or so and asked people how many they and their friends have read. The books are diverse and include everything from Jane Austen to J D Salinger to Harry Potter.

Obviously the BBC thinks this list is important in some way or that people who have read some of these books are educated or well-informed. There is a note informing us that most people would have read only 6 out of those 100 books. Perhaps this is startling.

But what is startling by orders of magnitude is that this list of 100 books does not include a single scientific work. Now of course people would not be expected to have read the Principia. But what about Darwin's "The Origin of Species"? Or, looking at something more modern and still pivotal, Thomas Kuhn's "The Structure of Scientific Revolutions"? These volumes are comparable to many of the books listed by the BBC, certainly in terms of accessibility, and almost certainly in terms of importance.

Most prominently, what about C P Snow's "The Two Cultures", which lamented the rift between science and the humanities? You want to see a classic example of this rift? Witness the BBC list! Snow would have nodded his head vigorously, especially and most ironically because the exclusion of his own volume from the list makes his point resoundingly clear.

So, dear BBC, if I were to draw up my own short and admittedly limited list of scientific works that surely deserve as much of a place in the "educated" man's mind as the august books you present, I would cite the following. I haven't read all of these works, but I have at least a passing familiarity with all of them, and some I have read more seriously. Let's even forget Newton's "Principia" for now and focus on the last 200 years as the BBC mostly has, and even just on the 20th century. Of course some of the following are more important than others; some are popular treatments while others are defining and fundamental volumes for their respective fields. But one can still come up with a highly readable list, which in my opinion would enrich the mind of any human being.

1. The Origin of Species- Charles Darwin

2. The Structure of Scientific Revolutions- Thomas Kuhn

3. The Logic of Scientific Discovery- Karl Popper

4. Silent Spring- Rachel Carson

5. Science and the Common Understanding- J. Robert Oppenheimer

6. Principia Mathematica- Bertrand Russell and Alfred North Whitehead

7. Physics and Philosophy- Werner Heisenberg

8. Flatland: A Romance of Many Dimensions- Edwin Abbott

9. On Growth and Form- D'Arcy Thompson

10. What is Life?- Erwin Schrödinger

11. Men of Mathematics- E T Bell

12. Microbe Hunters- Paul De Kruif

13. The Mismeasure of Man- Stephen Jay Gould

14. The Selfish Gene- Richard Dawkins

15. Sociobiology- E O Wilson

16. Mr. Tompkins- George Gamow

17. The Double Helix- James Watson

18. The Nature of the Chemical Bond- Linus Pauling

19. Chaos- James Gleick

20. Advice to a Young Scientist- Peter Medawar

and finally

21. The Two Cultures- C P Snow

Consider the diverse and varying importance of these works. Kuhn and Popper are defining volumes in the philosophy of science. Darwin needs no explanation. Schrödinger inspired a generation of physicists like Francis Crick to change fields and initiate a revolution in biology. E O Wilson's book started a fierce chapter in the "nature vs nurture" debate whose ramifications can still be felt. In one fell swoop Gould demolished the foundations of scientific racism and eugenics. Pauling's book is one of the most important scientific works of all time and redefined chemistry. D'Arcy Thompson's beautiful volume established the mathematical foundations of developmental biology. Bell and De Kruif inspired dozens of famous scientists, among them André Weil and John Nash, who went on to do groundbreaking, prize-winning work. Russell and Whitehead's book was a landmark attempt to provide a logical foundation for all of mathematics. Watson's book is considered the archetype of how real science is done, warts and all. Carson became the godmother of the modern environmental movement. On a more limited but important level, Gleick, Gamow and Dawkins made chaos theory, quantum physics and selfish genes comprehensible to the layman. And Medawar, Oppenheimer and Snow wrote deeply thoughtful volumes on the relationship between science, society and culture.

Now I suppose it would not be too presumptuous to ask the question; how many of these have the BBC list-makers read?

The writings of John Cassidy

For the last several weeks I have been enjoying John Cassidy's "How Markets Fail". I am almost done with the volume and have to say that it is one of the best and most balanced critiques of markets that I have read. Cassidy, who was educated at Oxford and certainly knows his economics, carefully documents the history of how academic mathematical theories like Arrow's impossibility theorem and Robert Lucas's theory of rational expectations came to be mistaken for practical rules applicable to the free market, when they were really supposed to be little more than ideal mathematical constructs. He also documents very well how most free market theorists ignored the behavioral approaches to economics pioneered by psychologists.

Even more enlightening is Cassidy's series of interviews with Chicago school economists like Richard Posner, Gary Becker and Eugene Fama. It is heartening to see how most of these people, once die-hard free marketeers, are now taking a more moderate stance and accepting the limitations of things like rational expectations and the efficient market hypothesis. All except Eugene Fama, who in his interview appears to be as stubborn a free market "fundamentalist" as anyone, the last man manning the fort, keeping a brave face and clinging to the flag of the efficient market hypothesis, with smoke and mirrors as his main weapon of combat. Behavioral economists who were once despised at Chicago are now part of the establishment there.

Very enlightening and more than a little gratifying. I would strongly recommend the book, and especially the interviews.

Scaling further GPCR summits

There's a nice review on GPCRs and their continuing challenges in the British Journal of Pharmacology this month. The authors focus on both the structural and the functional challenges in characterizing this most important class of signaling proteins. As is well known, drugs targeting GPCRs generate the highest revenue among all drug classes. And given their basic roles in signal transduction, GPCRs are also clearly very important from an academic standpoint. Yet a wall of obstacles confronts us.

For starters there are the well-known crystallization problems that plague all membrane proteins, GPCRs included. So far only four GPCRs (rhodopsin, the beta1 and beta2 adrenergic receptors and the A2a adenosine receptor) have been crystallized, and the publication of each structure was considered a breakthrough. As the review mentions, the proteins are unstable outside the membrane, and conditions for stabilization and crystallization are frequently incompatible; for instance, stabilization is often effected by long-chain detergents while the opposite is true for crystallization. Circumventing these problems required clever strategies, immense trial and error, and hard work. The rhodopsin and adrenergic receptor structures were obtained with the help of point mutations and special techniques; in one case an antibody was tethered to the protein, and in another a fusion protein was attached to stabilize a domain.

It's when we enter the dense jungle of GPCR biology that the crystallization problems almost start to sound trivial. GPCRs respond to a variety of ligands, including the well-known biogenic amines (like adrenaline and serotonin), peptides, proteins and nucleotides. Where it starts to become complex is in the kind of response these ligands elicit, which can be full agonism, partial agonism, inverse agonism or full antagonism.

What structural features distinguish these different responses from one another? This is a key question in GPCR biology. Not only can ligands be agonists or antagonists, they can also act in different ways on the same GPCR, activating different pathways. The case of partial agonists is especially interesting, and more receptor-partial agonist structures would be quite valuable.

The traditional model of receptor activation assumes two dominant states, inactive and active. Agonists stabilize the active state, antagonists stabilize both states, and inverse agonists stabilize the inactive state. But, as the authors say, this traditional model is slowly undergoing revision:

The concept of a receptor existing in a simple pair of active and inactive states (R and R*) is no longer sufficient to explain the observations of pharmacology. Agonists vary considerably in their efficacy and how this relates to the bound conformational states is unclear. A partial agonist with 50% efficacy could fully activate 50% of the receptors or could activate 100% of the receptor by 50%. Alternatively, a partial agonist might stabilize a different form of the receptor to a full agonist state and this different conformation might activate the G protein with a lower efficiency. The study of rhodopsin suggests that activation of the receptor involves the release of key structural constraints within the E/DRY and NPxxY regions. Energy provided by agonist binding must be sufficient to break these constraints and stabilize the new active conformation. In the case of rhodopsin, whether this transition is complete or partial depends on the chemical nature of the ligand (Fritze et al., 2003). The retinal analogue 9-demethyl-retinal is a partial agonist of rhodopsin which only poorly activates G protein in response to light. Spin-labeling studies (Knierim et al., 2008) suggest that in the presence of this ligand, only a small proportion of receptors are in the active conformation equivalent to all-trans-retinal. However, this can also result in a new state that is not formed with the full agonist. Therefore, rhodopsin studies suggest that partial agonism may result in either a reduced number of fully active receptors or conformations which are not capable of fully engaging the signal transduction process. Structures of other GPCRs in complex with partial agonists are required to determine their effects on conformation.
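
To make the efficacy distinction in that passage concrete, here is a minimal sketch of the classical two-state (R/R*) scheme; the equilibrium and binding constants are arbitrary illustrative numbers, not values from the review.

```python
# Two-state receptor model: R <-> R* with basal constant L = [R*]/[R], and a
# ligand A that binds R with dissociation constant Kd and R* with Kd_star.
# An agonist binds R* preferentially (Kd_star < Kd); the weaker that
# preference, the more "partial" the agonism in this simple scheme.

def fraction_active(A, L=0.01, Kd=1e-6, Kd_star=1e-9):
    """Fraction of receptors in the active state at free ligand concentration A (M)."""
    r       = 1.0               # unbound inactive receptor (reference amount)
    r_star  = L                 # unbound active receptor
    ar      = A / Kd            # ligand-bound inactive receptor
    ar_star = L * A / Kd_star   # ligand-bound active receptor
    return (r_star + ar_star) / (r + r_star + ar + ar_star)

print(f"basal activity:  {fraction_active(0.0):.2f}")
print(f"full agonist:    {fraction_active(1e-5, Kd_star=1e-9):.2f}")
print(f"partial agonist: {fraction_active(1e-5, Kd_star=1e-8):.2f}")
# In this scheme the "50% efficacy" ligand leaves about half the receptors in
# the same fully active R* state; it does not create a distinct half-active
# conformation, which is precisely the ambiguity the quoted passage raises.
```
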
An example makes the hideous complexity clear. The mu-opioid receptor is activated by several ligands, including morphine, etorphine and fentanyl. However, morphine acts only as a partial agonist with respect to a phosphorylation endpoint, whereas the other two act as full agonists. It gets more interesting. While morphine effects phosphorylation of the kinase ERK through activation of PKC (protein kinase C), etorphine also activates ERK, but via beta-arrestin. Thus the same endpoint can be reached through different pathways. And it doesn't even stop there. Morphine causes the phosphorylated ERK to stay in the cytoplasm, while etorphine causes ERK to translocate to the nucleus. Not done yet: in addition, morphine can reverse its role and act as a full agonist on the adenylyl cyclase pathway.

Thus, the same ligand adopts different roles when activating different pathways. To begin with it's not even clear which pathway is activated under what circumstances. And the problem is only accentuated by the participation of different G proteins in inducing different responses.

Another dense layer of complexity is added by the fact that GPCRs have been found to dimerize and oligomerize. Crystallography can be treacherous in studying these dimers, since there are several documented reports of dimers that turned out to be artifacts of the crystallization conditions.

Apart from the stated problems, there are even more differences in further downstream signaling and receptor internalization induced by oligomerization. It's clearly a jungle out there. No wonder the design of drugs targeting GPCRs needs a measure of faith. For instance consider the various drugs targeting CNS proteins. CNS drug discovery has long been considered a black box for a good reason. Once a drug enters the brain, one can imagine it not only targeting a diverse subset of GPCRs (and even other classes of proteins) but, given the above complexities, also acting separately as agonist and antagonist at the various receptors. We clearly have a long way to go before we can prospectively design a CNS drug that will do all this on cue.

It would be a tall order trying to explain all these differences simply through structural modifications induced by the ligands. Yet whatever signal is eventually transmitted to the G proteins must begin with a crucial structural movement. It seems that elucidating the differences in helix and loop movements induced by partial and full agonists, inverse agonists and antagonists is a tantalizing part of the GPCR puzzle.

Since crystal structure data on GPCRs are scarce, modeling approaches, especially homology modeling, have proved fruitful. Earlier attempts were all based on the single rhodopsin template; since then the higher-resolution adrenergic and adenosine receptor structures have provided significant insight. But here again numerous caveats abound. Modeling the helices is relatively easy, since all GPCRs share the same general, highly conserved 7TM helix topology, but modeling the fine differences between helices that lead to structural changes upon ligand binding is harder. Most difficult and important of all is modeling the extracellular loops, which actually bind the ligands. Subtle changes in loop movement, salt-bridge breakage, hydrophobic effects and the interaction of loops with helices are difficult to model, and a change in conformation of a single residue can be enough to throw the modeling off balance. Nonetheless, the paucity of structural data means that modeling, when done right, will continue to be valuable. In the absence of structural data, computational ligand-based approaches, which search for ligands similar to known compounds, could also be useful.
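
As a flavor of what such a ligand-based search looks like in practice, here is a minimal sketch using 2D fingerprint similarity; the reference ligand and candidate structures are illustrative placeholders, not actual screening compounds.

```python
# Rank candidate molecules by Morgan-fingerprint Tanimoto similarity to a known
# GPCR ligand (adrenaline used here purely as an illustrative reference).
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

query = Chem.MolFromSmiles("CNC[C@H](O)c1ccc(O)c(O)c1")   # adrenaline
candidates = {
    "isoprenaline-like":  "CC(C)NC[C@H](O)c1ccc(O)c(O)c1",
    "unrelated scaffold": "c1ccc2ccccc2c1",                # naphthalene
}

fp_query = AllChem.GetMorganFingerprintAsBitVect(query, 2, nBits=2048)
for name, smi in candidates.items():
    fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smi), 2, nBits=2048)
    sim = DataStructs.TanimotoSimilarity(fp_query, fp)
    print(f"{name}: Tanimoto similarity = {sim:.2f}")
```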

We have made a lot of progress in understanding the structure and function of these key proteins. But investigations seem to have unearthed more questions than answers. Which is always good for science since then it can have more choice fodder for contemplation.

Congreve, M., & Marshall, F. (2009). The impact of GPCR structures on pharmacology and structure-based drug design. British Journal of Pharmacology. DOI: 10.1111/j.1476-5381.2009.00476.x

Zheng, H., Loh, H., & Law, P. (2010). Agonist-selective signaling of G protein-coupled receptor: Mechanisms and implications. IUBMB Life. DOI: 10.1002/iub.293

Linkland

1. A crystal structure of the important PI3 kinase delta isoform, which may provide insight into designing selective inhibitors of this key kinase implicated in cancer.

2. The discovery that a class of sandalwood odorants targets not only the traditional GPCR odorant receptors but also, unexpectedly, the estrogen receptor (ER). The targeting of these two apparently functionally unrelated proteins may suggest roles for ORs beyond olfaction.

3. And speaking of smell, the identification of odorant receptors in malaria-transmitting mosquitoes. The researchers identify receptors responding to specific body odors that could help the insects home in. Maybe they can also identify dietary components that produce or eliminate these odors?

4. And finally, the world's first 1 GHz NMR spectrometer, in France. Pacemakers beware.

Indian village has unusually low rates of Alzheimer's disease

This caught my attention recently.
As the sun breaks through the morning mist in Ballabgarh, the elders of the village make their way to their regular meeting spot to exchange stories and share a traditional hookah pipe.

These men are in their sixties and seventies. While their faces bear the evidence of years of hard work in the fields, their minds are still sharp.

In other parts of the world, people of their age would be at some risk of developing dementia. But here, Alzheimer's disease is rare. In fact, scientists believe recorded rates of the condition in this small community are lower than anywhere else in the world.
Apparently the villagers here, mostly farmers, were tested for the ApoE4 allele, which has been implicated as a risk factor for Alzheimer's. The ApoE4 frequency was the same as in a population of farmers in rural Pennsylvania. Unfortunately, the explanations suggested (vegetarian diet, lack of obesity, low cholesterol levels, physically fit farmers) do not seem to be unique to this farming community.
In contrast with lives in Pennsylvania and other parts of the world, the people of Ballabgarh are unusually healthy. It is a farming community, so most of them are very physically active and most eat a low-fat, vegetarian diet. Obesity is virtually unheard of.

Life in this fertile farming community is also low in stress, and family support is still strong, unlike in other, more urban parts of India.

"It all leads to a happy body, and a happy mind and hopefully a happy brain," says Dr Chandra. "Cholesterol levels here are much lower. We believe that is what is protecting the community."
There must surely be other farming communities in India and other places whose residents have a "happy body and a happy brain". I need to look up the original reference.

Simulations long enough to...put you to sleep

For something used as widely and for as long as general anesthetics (GAs), one would think that their molecular mechanism of action would be fairly well understood by now. Far from it.

From Linus Pauling's theory of gases like xenon acting at high concentrations by forming clathrates to more recent theories of GA action on lipids and now on proteins, tantalizing clues have emerged, but speculation remains rife.

In a recent Acc. Chem. Res. review, a group of researchers discusses some recent studies on GA action. Now there's a field that seems to me primed for computational studies, for two reasons.

Firstly, experimental information on GAs is hard to come by. Consider their chemical features: halogenated, apolar small molecules lacking polar hydrogen-bonding and other strong interactions, binding to their targets with low affinity (it's interesting that halogenation seems to be a key requirement for GA action). In addition, most GAs do not bind to a highly specific active site but instead influence protein function indirectly. Such features make any kind of NMR or X-ray structure determination an enormous challenge.

Secondly, molecular dynamics simulations (MDS) have come of age. With recent programs augmented by tremendous gains in hardware and software, microsecond to millisecond simulations have gradually become a reality. This particular field provides a classic and worthy challenge for MDS, since GAs seem to interact indirectly and subtly with proteins, influencing their local and global dynamics rather than binding to well-defined pockets. Such dynamic perturbations would not be captured on the picosecond-to-nanosecond timescales typically sampled by MDS. For instance, the most prevalent belief right now is that GAs interact with ligand-gated ion channels like the GABA and NMDA receptors and with potassium ion channels. One hypothesis for the mode of action of halothane is that it binds to the open conformation of a potassium ion channel. The channel stays open for milliseconds, thwarting experimental study; but a millisecond transition provides a robust and respectable challenge for long-timescale MD simulations.
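
For orientation, here is a minimal sketch of what a plain MD run looks like in OpenMM, assuming a pre-equilibrated, solvated system is already available as "system.pdb" (a hypothetical input); the point is simply how far even a routine nanosecond run falls short of millisecond gating events.

```python
# Minimal OpenMM MD sketch; settings are illustrative, not a production protocol.
from openmm import LangevinMiddleIntegrator, unit
from openmm.app import PDBFile, ForceField, Simulation, DCDReporter, PME, HBonds

pdb = PDBFile("system.pdb")                                  # hypothetical input
forcefield = ForceField("amber14-all.xml", "amber14/tip3p.xml")
system = forcefield.createSystem(pdb.topology, nonbondedMethod=PME,
                                 nonbondedCutoff=1.0 * unit.nanometer,
                                 constraints=HBonds)
integrator = LangevinMiddleIntegrator(300 * unit.kelvin, 1.0 / unit.picosecond,
                                      0.002 * unit.picoseconds)
sim = Simulation(pdb.topology, system, integrator)
sim.context.setPositions(pdb.positions)
sim.minimizeEnergy()
sim.reporters.append(DCDReporter("traj.dcd", 5000))

sim.step(500_000)   # 500,000 steps x 2 fs = 1 ns of sampling
# Even this run is roughly six orders of magnitude short of the millisecond
# channel-gating events mentioned above, which is the sampling gap described here.
```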

At the same time, caveats abound in the field. For instance, it's easy to infer from a simulation that a GA molecule binds to a certain site and obstructs the motion of a tyrosine residue, thereby lending support to fluorescence quenching and other studies. But the results of such studies, as well as the all-important site-directed mutagenesis experiments, are notoriously hard to interpret; indirect influences on protein motion may be construed as direct binding to particular sites. Plus, it seems to me that one can read too much into the rather obvious observation that a molecule binding to a protein site inhibits the motion of some residues; whether that observation translates into a functionally relevant phenomenon is much harder to glean.
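
As an illustration of how such inferences are typically drawn (and of how indirect they are), here is a sketch of a per-residue flexibility comparison between hypothetical apo and anesthetic-bound trajectories; the file names are assumptions for illustration.

```python
# Compare per-residue C-alpha fluctuations (RMSF) with and without the bound
# anesthetic; a drop in RMSF near a residue is suggestive of contact or damping,
# but, as noted above, it is not by itself proof of a functionally relevant site.
import mdtraj as md
import numpy as np

def per_residue_rmsf(traj_file, top_file):
    traj = md.load(traj_file, top=top_file)
    ca = traj.topology.select("name CA")
    traj = traj.atom_slice(ca)
    traj.superpose(traj, 0)                 # remove overall rotation/translation
    mean_xyz = traj.xyz.mean(axis=0)
    return np.sqrt(((traj.xyz - mean_xyz) ** 2).sum(axis=2).mean(axis=0))  # nm

rmsf_apo = per_residue_rmsf("apo.dcd", "protein.pdb")        # hypothetical files
rmsf_bound = per_residue_rmsf("bound.dcd", "protein.pdb")
damped = np.argsort(rmsf_apo - rmsf_bound)[-5:]              # most-damped residues
print("Residues with the largest RMSF decrease on binding:", damped)
```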

So yes, it seems that GA action provides a fertile field for computer simulation. Long MD simulations have generally seemed to me to be a solution looking for problems; after all, most interesting molecular interactions in the body take place on the microsecond-to-millisecond timescale, and a huge number of important problems are waiting to be tackled with such tools. However, interpretation of the results will always have to be guided by the sure hand of experiment, with the ever-important caveat that when it comes to interpretation, one computational study and one experiment can have several offspring.

Vemparala, S., Domene, C., & Klein, M. (2010). Computational Studies on the Interactions of Inhalational Anesthetics with Proteins. Accounts of Chemical Research, 43 (1), 103-110. DOI: 10.1021/ar900149j