Almost two years ago, I relished giving a departmental seminar on a novel theory of smell by Luca Turin, which proposed that we smell molecules not by their shape, but by their vibrations. The seminar was largely inspired by this book, which then encouraged me to explore the fascinating literature on smell. Turin's theory was largely discounted at the time, although it predicted that dimethyl sulfide and dimethyl sulfide-d6 would smell different because of the different vibrations of the C-H and C-D bonds. I tested this hypothesis myself and indeed could detect a slight difference in their smell. So could all ten of my test subjects.
A critical editorial in Nature Neuroscience, based on an experiment conducted at Rockefeller University, dismissed the theory with what I thought was a little too much chutzpah. But now, in an article published in Physical Review Letters, Turin's theory seems to receive support. I still have to read the details; not that the equations of quantum physics are exactly at the tip of my tongue, but still.
Turin has also come out with a book about smell and the science behind it. I just got a copy from Amazon and have started on it. One thing that you have to appreciate about the man is his fine perception of smell, as both science and art, as well as his wide-ranging knowledge. His descriptions of smell are sometimes poetry exemplified, and his ability to nail down a smell in the weirdest description is uncanny ("...smells like the breath of a newborn infant mixed with its mother's hair spray").
On the other hand, the science in the earlier book was sometimes pretty sketchy, and Turin's argument for why the holy Angstrom is an appealingly natural unit is not entirely scientifically convincing. He says that the Angstrom seems very natural because a C-C bond length is about 1 A. Well, a C-C bond length is 1.54 A, very different from 1 A as chemists will realise, and saying that a C-C bond length is 'about' 1 A is alarming. On a similar note, the difference between a C-C and a C=C is 'only' 0.12 A, and yet it makes a world of difference in the chemistry. As they say, chemistry (and biology) are worlds encapsulated within 0.5 A and 2 kcal/mol.
Frankly, I have always thought that there's definitely much more to smell than shape. And as far as the difference in smell of deuterated compounds was concerned, I thought the vibration theory bore good weight. The problem is that smell is not quantifiable the way the effect of a drug is, through quantitative dose-response curves. I have to admit that SAR for smell looks even more bizarre than SAR for drugs, which is bizarre enough sometimes. The Nobel Prize awarded for smell two years ago was really about the biology, and not about the molecular recognition part. So we definitely have a long way to go in deciphering smell. Smell is fascinating by any standards, no doubt about that.
pKa bamboozle
In a recent publication by a famous chemist, I came across this illustration in which they have calculated the pKa values of guanine and xanthine, among other bases.
Not only does the NH in the five-membered ring in xanthine have a lower pKa than the NH next to the carbonyl in the six-membered ring, but the pKa of the NH next to the carbonyl in the six-membered ring does not change at all when you go from guanine to xanthine.
Maybe I have just had a long day, but this does not make sense to me at all. For the love of god, explain.
Look ma, pet dinosaur
An enormously fun debate is going on in the pages of Nature (subscriber link), initiated after the magazine published the 'Creationism in Europe' article which prompted me to write this post a couple of days ago. After that article was published, a Polish gentleman named Maciej Giertych of the Institute of Dendrology of the Polish Academy of Sciences sent a letter to Nature, in which he questioned the validity of evolution, apparently without citing a single reference, although he did cite his seemingly impressive credentials from the universities of Oxford and Toronto. Giertych said:
"I believe that, as a result of media bias, there seems to be total ignorance of new scientific evidence against the theory of evolution. Such evidence includes race formation (microevolution), which is not a small step in macroevolution because it is a step towards a reduction of genetic information and not towards its increase. It also includes formation of geological strata sideways rather than vertically, archaeological and palaeontological evidence that dinosaurs coexisted with humans, a major worldwide catastrophe in historical times, and so on."
What on earth (pun intended)?! First of all, assuming that what he means by "microevolution" is evolution on the scale of genes and biomolecules, such microevolution has been demonstrated thousands of times, and in fact it enormously supports and widens the purview of evolution. Secondly, he actually has the audacity to suggest that humans and dinosaurs may have walked together on the earth!
Quite appropriately, this rash letter provoked a series of no fewer than eight rebuttals in the latest correspondence section of Nature. There are those who have criticised Nature for publishing such a hack letter, but most have directly condemned Giertych's views. There is the correspondent from the Institute of Dendrology who is quick to dissociate his institute from Giertych's views, and then there are those who lambast him directly for his opinions and deplore his failure to cite references. But there are also two correspondents who say
"The very fact that his letter was published shows that Nature has no bias against critics of evolution."
This is an interesting point. Should scientific journals publish letters and so-called articles from people like Giertych? On one hand, we may think that this is necessary to prove that scientific journals have no bias in publication; thus, creationists cannot accuse them of actively suppressing evidence. On the other hand, it is not the responsibility of scientific journals to refute every unscientific creationist assertion.
I don't know whether Nature published Giertych's letter to allow dissent (no matter how misguided and unsubstantiated) or to actually publish a serious opposing point of view. It surely cannot be the latter, and I am convinced it is the former. But as a matter of principle, I completely agree that scientific journals have absolutely no obligation to publish pseudoscientific criticism of sound scientific facts, let alone dissenting correspondence. If pseudoscientists cry foul, it's quite clear they are really crying sour grapes. It's one thing to be a valid scientific critic of evolution, but it's quite another to be a pseudoscientific opponent of evolution who cites not one scientific reference. Since it's really the creationists who assert that the earth was created six thousand years ago, the onus of proof has always been on them to prove their assertions, and no journal needs to pander to dissenting views that don't have an iota of scientific basis to support them.
The other point is related to Dawkins's stance that we can never disprove the existence of god and creation. Naturally, the creationists tout that as proof of their contentions. Scientific journals also don't have any duty whatsoever to publish assertions that are not disproved. Because in science and the reality seeking world, innocent until proven guilty is a non-existent principle.
On a different note, Poland is a staunchly Catholic country, and I won't be surprised if they start teaching creationism in schools as an "alternative theory". The only condition should be that they should teach all "theories" of creation, including that of the Flying Spaghetti Monster and the Dozing Fatty Spinster.
"I believe that, as a result of media bias, there seems to be total ignorance of new scientific evidence against the theory of evolution. Such evidence includes race formation (microevolution), which is not a small step in macroevolution because it is a step towards a reduction of genetic information and not towards its increase. It also includes formation of geological strata sideways rather than vertically, archaeological and palaeontological evidence that dinosaurs coexisted with humans, a major worldwide catastrophe in historical times, and so on."
What on earth (pun intended)?! First of all, assuming that what he means by "microevolution" is evolution on the scale of genes and biomolecules, such microevolution has been demonstrated thousands of times, in fact thus enormously supporting and widening the purview of evolution. Secondly, he actually has the audacity to suggest that humans and dinosaurs may have walked together on the earth!
Quite appropriately, this rash letter invoked a series of no less than eight rebuttals in the latest correspondence section of Nature. There are those who have also criticised Nature for publishing such a hack letter, but most have directly condemned Giertych's views. There is the correspondent from the Institute of Dendrology who is prompt to dissociate his institute's views from Giertych's views, and then there are those who lambast him directly for his opinions and deplore his lack of reference citing. But there are also two correspondents who say
"The very fact that his letter was published shows that Nature has no bias against critics of evolution."
This is an interesting point. Should scientific journals publish letters and so-called articles from people like Giertych? At one end, we may think that this is necessary to prove that scientific journals have no bias in publication. Thus, creationists cannot accuse them of actively suppressing evidence. On the other hand, it is not the responsibility of scientific journals to refute every hack creationist unscientific assertion.
I don't know whether Nature published Giertych's letter to allow dissent (no matter how misguided and unsubstantiated) or to actually publish a serious opposing point of view. It surely cannot be the latter, and I am convinced it is the former reason. But as a matter of principle, I completely agree that scientific journals have absolutely no obligation to publish any pseudoscientific cricticism of sound scientific facts, let alone dissenting correpondence. If pseudoscientists cry foul, it's quite clear they are really crying sour grapes. It's one thing to be a valid scientific critic of evolution, but it's quite another to be a pseudoscientific opponent of evolution who cites not one scientific reference. Since it's really the creationists who assert that the earth was created six thousand years ago, the onus of proof has always been on them to prove their assertions, and no journal needs to pander to their dissenting views that don't have an iota of scientific basis to support them.
The other point is related to Dawkins's stance that we can never disprove the existence of god and creation. Naturally, the creationists tout that as proof of their contentions. Scientific journals also don't have any duty whatsoever to publish assertions that are not disproved. Because in science and the reality seeking world, innocent until proven guilty is a non-existent principle.
On a different note, Poland is a staunchly Catholic country, and I won't be surprised if they start teaching creationism in schools as an "alternative theory". The only condition should be that they should teach all "theories" of creation, including that of the Flying Spaghetti Monster and the Dozing Fatty Spinster.
Strontium ranelate
Came across this nifty site that features many old and new drugs, with brief and accessible descriptions of their basic properties and their licensing and patent information.
That's how I got to know about strontium ranelate, which I had never heard of before.
Clerical Chemistry
I want to lay claim to a new field: clerical chemistry. Unfortunately, I can't, because millions before me seem to have already exploited it and sucked it dry. In one sentence, clerical chemistry is the art and science of lists. Lists which may not indicate anything. Lists which may aspire to but never reach a Voila! conclusion. And lists that are included for no reason except to clutter up Powerpoint presentations.
Case in point: a med chem colleague gave a presentation today about her efforts to develop a new molecule for some receptor (what else). It is admirable that she synthesized 175 molecules without having any idea of what they were doing, but does she have to list all her 170 failed molecules on every slide? Not only does it make the head spin, but it deprives one of any inkling of rational drug design. No offense here, but sure, you may even have synthesized 1075 molecules separately, but please give us the bottom line. We admire your heroic efforts sincerely even without you listing all of them till our eyes water. Plus, we waited for the SAR for about 30 minutes, and in the end, the SAR was something any one of us could have guessed.
Med chem presentations like this one really put me off. I dread it when someone has SAR in the title. Of course, SAR is the heart of med chem, but what we want is a tidy little package that gives us the details. You may have really synthesized every molecule out there with all possible permutations and combinations including one with Uglium, but we already know that, and we really don't care about every single possibility that did not work.
And did I mention that this was a practice talk for a job interview in pharma? Best of luck to her.
Eternal unanswered riddles
Yesterday, while having a project discussion, we got into asking how strong a salt bridge is, and realised that we were trying to answer one of the perpetually alive and kicking questions of chemistry. I then realised that this question belongs to a class of other PAQs (Perpetually Alive Questions):
1. How strong is a hydrogen bond?
2. Do "low-barrier, strong" hydrogen bonds exist?
3. How do enzymes exactly stabilize transition states and bring about such enormous stabilization? What forces contribute to this?
4. How do you distinguish between a 'weak' hydrogen bond and a van der Waals contact?
5. Why do molecules adopt one crystal structure and not another equienergetic one?
6. What is the origin of the rotation barrier in ethane?
Some of these questions (such as 4.) depend as much on convenience and arbitrary definition as on having definite answers. There are also ones where one can make good general guesses and yet lack predictive ability (such as 5.). The protein folding problem also falls into this category.
Many of these questions concern my favourite topics, especially those related to hydrogen bonds. While hyperconjugation has been advanced as the source of the rotation barrier in ethane, proton sponges have been postulated as model systems for demonstrating "strong" hydrogen bonds. According to Dunitz, crystal structure prediction really boils down to choosing between equienergetic possibilities rather than asking why one of them exists. As for enzymes, Kendall Houk seems to think that efficiencies above a certain extent may imply covalent rather than non-covalent binding.
All these questions make for exciting discussion and much fun, and the great thing is that even partial answers make for great intellectual debate and even scientific advancement. Roll on! Chemistry remains alive because of such questions. But PhDs may get prolonged, an emphatic disadvantage.
More such questions?
Of course I can google it, but...
"If someone is interested in the details, I will be happy to talk to them later"...these words of mine in a group meeting presentation were met with amusement and subdued smirking. I was puzzled. I remember hearing these words often a few years ago in talks. They made me feel happy, because they seemed to indicate that the speaker was genuinely interested in explaining the fine points, and indeed even the general points of his talk later to those who were interested. So what had changed between then and now? Two factors I think among others: Google, and Powerpoint.
Powerpoint allows you to display a long list of references with the tacit assumption that the audience will scan and memorize them instantaneously. Surely all the references you need will be in there. So for details, just look into those.
Google has made avoiding human communication even easier. Some of my own experience supports this. Let's say someone was talking about a project that involved RNA interference (RNAi). I would cringe asking them "What is RNAi?", because more than once I have received the response, "O RNAi...that's...why don't you google it?" Well, of course I can google it, but it's not a crime to sometimes yearn for human communication. In 'older' times, the speaker knew that you would probably have to go to the library and browse through books to get such a question answered. To save you that trouble, he or she would take out a few minutes to answer your question. Even today, there are a few speakers who are gracious enough to be patient and try to answer even a general question by taking a few minutes. But the percentage is dwindling alarmingly, even among those who are willing to talk to you in detail later. If you want to ask them about the direct details of their research, fine. If it's something general, you can always...
I understand, of course, the enormous benefits of having Google and the internet at your fingertips, which in fact allow you to instantly access such information. Interestingly, it works both ways; today in a presentation, a colleague highlighted a drug for tuberculosis, a well-known antibiotic. I was tempted to ask her which protein in the tuberculosis bacterium it targets. But I was stricken with the 'information at your fingertips syndrome'; why should I ask her that if I could get the information right away from Al Gore's information superhighway? (This syndrome has also led to more people googling during presentations than paying attention to the talk.)
Naturally, Google is God. But I wonder if human communication in presentations has been stifled because of the tacit assumption on the part of both speaker and audience, that they can always google it. As for me, I still love to say "If someone is interested in the details, I will be happy to talk to them later" as a catch-all phrase, and I think I am going to continue doing so. For the sake of good old fashioned banter, if not anything else.
Features of selective kinase inhibitors
It was quite recently that I came across this fantastic review of kinase inhibitors from 2005 by Kevan Shokat. The reason I missed it is that it was published in a journal that is usually not in people's top ten list: Chemistry & Biology. So I am putting it into mine from now onwards.
In any case, I think this review should be read by anyone concerned with either the experimental or computational design and testing of selective kinase inhibitors. Even now, the holy grail of kinase inhibitor development is selectivity, and Shokat gives a succinct account of what we know so far about designing such molecules. I thought there were a few points especially crucial to keep in mind.
1. IC50 is not equal to Ki...usually:
This is a central if simple fact that should always guide computational as well as experimental scientists in their evaluation. For a competitive inhibitor, the IC50 and Ki values are generally related by the so-called Cheng–Prusoff equation: Ki = IC50 / (1 + [S]/Km). Here, Km and [S] are the Km and concentration of the protein's natural substrate, ATP in this case, which is usually competitively displaced by inhibitors.
What does this mean on a practical basis? Let me take my own example, in which this principle helped a lot. We are trying to design a selective kinase inhibitor, and found that a compound we had showed some selectivity for one kinase over the other. To investigate the source of this selectivity, we started looking at the interactions of the inhibitor with the two kinase pockets; presumably, the better the interactions, the more they would contribute to a smaller IC50. Or would they? No! The better the interactions, the more they contribute to a smaller Ki. The point is, only the Ki has to do with how effectively the inhibitor interacts with the active site. The IC50 is an experimental number which, as the above equation indicates, also depends on how well the natural substrate, in this case ATP, binds to the protein. So if the Km of the protein for ATP is really small, ATP binds very well, and even a compound with a low Ki will have a relatively large IC50 and will look like a poor inhibitor. So just looking at the active site interactions does not rationalize anything about the IC50; what must also be known is how well the competitor ATP binds to the site. The bottom line is that in kinase assays, IC50s can only be compared across enzymes, or used as stand-ins for Ki, if the ratio [S]/Km is kept constant. Otherwise it's not a controlled experiment.
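To make the arithmetic concrete, here is a minimal sketch in Python of the Cheng–Prusoff relation, with made-up numbers purely for illustration (the Ki, Km and ATP concentration below are hypothetical, not taken from any real assay):

```python
def ic50_from_ki(ki, s, km):
    """Cheng-Prusoff relation for a competitive inhibitor: IC50 = Ki * (1 + [S]/Km)."""
    return ki * (1 + s / km)

def ki_from_ic50(ic50, s, km):
    """Inverse relation: Ki = IC50 / (1 + [S]/Km)."""
    return ic50 / (1 + s / km)

ki = 10e-9         # hypothetical inhibitor with Ki = 10 nM against both kinases
atp = 100e-6       # assay run at 100 uM ATP

km_tight = 5e-6    # kinase A binds ATP tightly (Km = 5 uM)
km_loose = 200e-6  # kinase B binds ATP loosely (Km = 200 uM)

print(ic50_from_ki(ki, atp, km_tight))  # ~2.1e-7 M, i.e. ~210 nM
print(ic50_from_ki(ki, atp, km_loose))  # ~1.5e-8 M, i.e. ~15 nM
```

Same Ki against both kinases, yet more than a tenfold difference in measured IC50, purely because of the ATP Km; which is exactly why comparing raw IC50s across kinases can mislead.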
2. There is a minimum threshold of potency below which an inhibitor cannot be selective, irrespective of the in vitro data:
Another important point. If the inhibitor is extremely lousy in the first place, then the dose needed to inhibit its target is going to be so high that selectivity goes out the window. On a practical basis, as Shokat says, "more potent compounds are more selective because they can be used at a lower dose". What I take this to mean is that if your compound is extremely potent, then you can essentially use it at such a low concentration that it binds to only one protein and is denied to the others. What could be the 'threshold' for a kinase inhibitor? Well, it depends on what kind of clinical target you are going after, but I would think that anything weaker than about a micromolar Ki would be enough to raise serious doubts about selectivity.
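A back-of-the-envelope way to see the dose argument (a toy calculation of my own with hypothetical numbers, not something taken from Shokat's review) is to compare fractional occupancies. Assume a typical unintended kinase binds an ATP-competitive scaffold with a Ki of around 1 uM, and dose each compound at ten times its own Ki:

```python
def occupancy(conc, ki):
    """Simple equilibrium fractional occupancy at free inhibitor concentration conc (same units as Ki)."""
    return conc / (conc + ki)

OFF_TARGET_KI = 1e-6  # assumed Ki of a typical unintended kinase for the scaffold (hypothetical)

for name, on_target_ki in [("potent, Ki = 1 nM", 1e-9), ("weak, Ki = 1 uM", 1e-6)]:
    dose = 10 * on_target_ki  # dose at 10x Ki to get ~90% occupancy of the intended target
    print(name,
          "| target:", round(occupancy(dose, on_target_ki), 2),
          "| off-target:", round(occupancy(dose, OFF_TARGET_KI), 2))

# potent, Ki = 1 nM | target: 0.91 | off-target: 0.01  -> effectively selective at that dose
# weak, Ki = 1 uM   | target: 0.91 | off-target: 0.91  -> the dose needed hits everything else too
```

The weak compound needs a dose so high that it saturates the off-target as well, which is the whole point.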
3. Common features of kinase inhibitors:
This could be educated observation and guesswork at best, but Shokat says many inhibitors show dramatic SAR relationships. The hydrogen bonds between the adenine ring nitrogens and a crucial backbone residue are duplicated by many inhibitors for example. I can vouch for the ubiquity of this particular interaction, as it has shown up even in docking poses. This is what can be called a 'correlated' pair of hydrogen bonds, one which is strong and conserved. The other point about kinase inhibitors being usually relatively rigid and entropically constrained is also interesting. One thing is for sure; kinase inhibitors seem to promise yet another bounty for heterocyclic chemists (We who criticize 'flatland' should quietly slink away now...)
And of course, this is only for ATP-competitive inhibitors. Allosteric inhibition will be quite another unexplored terrain. Overall, a highly informative and practically useful review. It helped me ask our biologists questions they wouldn't have expected from a modeler. The search for selective inhibitors is surely one of the most vigorously explored areas of med chem. The dozens of publications literally every week on Src, PKC, p38 MAP, CDK, and Bcr-Abl kinases represent only a fraction of the research currently being done in pharma as well as academia.
Discodermolide unraveled?
Drugs affecting microtubule dynamics are familiar chemical players in med chem by now. First came Taxol, then the epothilones, then discodermolide, and the list continues with peloruside, eleutherobin, and dictyostatin, to name a few of the better known entities.
As with other drugs, one of the major questions asked about these molecules is how they bind to their target. Taxol and epothilone have been subjected to immense SAR and analog preparation by some of the hard hitters in the synthetic arena. Their binding conformations have been postulated with reasonable confidence. The common pharmacophore hypothesis, tempting but misleading and not true in this case, has been convincingly questioned. But for discodermolide, the binding conformation is not yet known. Now, groups from Spain and the UK have applied the "INPHARMA" NMR methodology to probe the interaction of disco with tubulin.
Admittedly, INPHARMA is a nifty technique; here is the original reference. It relies on magnetization transfer to a protein proton from a proton of a molecule that binds in the active site. This magnetization is then transferred again from the protein proton to a proton of another molecule that binds to the same site. For this to happen, the ligands' residence times in the binding site have to be much shorter than the relaxation times of the protons, that is, exchange has to be fast on the relaxation timescale.
Thus, the magnetization transfer sequence for two ligands A and B that bind to the same active site is H(A) → H(protein) → H(B).
Naturally, this happens if both H(A) and H(B) are close to the same protein proton. Thus you see cross peaks between two protons A and B of two different ligands, mediated by a protein proton. Information from many such cross peaks allows us to map the protein protons to the ligand protons that are near them. In the end, not only does a picture emerge of the binding conformation of both ligands separately, but this information also allows us to suggest a common pharmacophore for the two ligands. And Paterson has now used this technique for disco and epothilone.
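Just to illustrate the bookkeeping behind that last step (a toy sketch with invented proton labels, nothing to do with the actual tubulin assignments or the real NMR processing): if we know, for each ligand, which protein protons sit near which ligand protons, the expected inter-ligand cross peaks are simply the pairs of ligand protons that share a mediating protein proton.

```python
# Hypothetical proximity maps: protein proton -> nearby ligand protons (invented labels)
near_A = {"HN_Thr274": ["H3_A", "H7_A"], "H_Leu215": ["H12_A"]}
near_B = {"HN_Thr274": ["H5_B"], "H_Gln281": ["H9_B"]}

# An INPHARMA-type cross peak between a proton of ligand A and a proton of ligand B
# is expected whenever both lie close to the same protein proton, since the
# magnetization is relayed through that shared proton.
cross_peaks = [(ha, hb)
               for prot, a_protons in near_A.items()
               for ha in a_protons
               for hb in near_B.get(prot, [])]

print(cross_peaks)  # [('H3_A', 'H5_B'), ('H7_A', 'H5_B')]
```

Run in reverse, that is the mapping exercise: observed cross peaks constrain which ligand protons of A and B face the same patch of protein, and hence how the two binding conformations overlay.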
I am sure the technique has to be done carefully and that it was, and I also don't doubt the postulated conformation of disco. Most of the paper is really interesting and it's a neat study. But what concerns me is the fact that the end result, the binding conformation of disco, can be mapped onto the x-ray conformation of disco proposed earlier, as well as the solution conformation of dictyostatin. Where my mind snags is in accepting this conclusion, because a single or even one dominant conformation derived in solution for a flexible molecule is unrealistic. It's what is called a 'virtual' conformation. It's virtual simply because it's an average conformation. And since the average is a juxtaposition of all possible individual conformations, it simply does not exist in solution by itself. It's like saying that the continuous disc of fan blades you see when a fan is moving very fast actually exists. It does not, because it is an average, and the resolution time of our eye is not short enough to capture individual positions of the fan blades.
So I wonder how the binding conformation of disco could be mapped onto one x-ray conformation and one single dominant solution conformation. Now I am sure there is more to this story, and I am still exploring the paper, but for commonsense reasons, a little red light in my brain always turns on (or at least should turn on) when a single or dominant conformation is postulated for a highly flexible molecule in solution.
More cogitations to come soon.
Protein-protein interactions and academic bounties
Whistling's post on the neat article published by researchers at Sunesis Pharma on tethering as a strategy to discover caspase inhibitors reminded me of Jim Wells, who had come to give a talk at Emory a couple of months ago. Wells moved from being president of Sunesis to UCSF. At UCSF, he commands a formidable repertoire of resources, including NMR, X-ray and high-resolution mass spectrometry, as well as synthesis and molecular biology facilities. Who else in academia can compete with such an immense wall of capability? I am sure Wells must have been offered great incentives, including these facilities at UCSF, to ease his transition from industry to academia. His move is symbolic of the power that academia has now started to command. Part of this power no doubt comes from universities being allowed to hold patents on drugs, from which they can earn considerable money through royalties. My own advisor, Prof. Dennis Liotta, earned Emory $500 million in royalties as one of the co-discoverers of the anti-HIV drug Emtricitabine (Emtriva®). That is a good sign, because researchers would gladly move back into the intellectually more stimulating environment of academics if they were also provided good incentives and facilities.
But coming back to the scientific side, Wells is one of the pioneers in developing small molecule inhibitors for disrupting protein-protein interactions, a notoriously tricky endeavor. Proteins can interact with other proteins in as many ways as small molecules can interact with them. Predicting a protein-protein interaction is not simply a matter of finding a good complementary fit, but is much more complicated, because a protein essentially interacts with another protein through flexible maneuvering. It does not simply slide into a hydrophobic complementary site; it can also catch hold of loops, cause immense conformational changes in them, and only then be in a comfortable position to dock with the other protein. Needless to say, programs which depend on rigid body protein docking often fail miserably, like ClusPro, which gave me horrendous results on my system. Also, proteins may not always dock in a theoretically optimum manner in real systems, but only in an orientation that is optimum to cause further action.
Protein-protein docking will remain a holy grail for both experimentalists and computational scientists, more so with the huge number of protein-protein interactions now implicated in diseases. This whole discussion reminded me of two excellent reviews on protein-protein interactions, which give a succinct view of the field.
1. Michelle R. Arkin and James A. Wells, "Small-molecule inhibitors of protein-protein interactions: progressing towards the dream", Nature Reviews Drug Discovery 3, 301-317 (2004)
2. Hang Yin and Andrew D. Hamilton, "Strategies for Targeting Protein–Protein Interactions With Synthetic Agents", Angewandte Chemie International Edition 44 (27), 4130-4163 (2005)
Low is still better
One of the pieces of news (NYT link) making waves is the finding that resveratrol, a substance in red wine, can offset the effects of a high-calorie diet and extend lifespan...in mice at least. But the graph is revealing, and also relieving, because it emphatically shows that wine enthusiasts gleefully running out to buy (and justify) large stores of Chianti should pause for thought. The graph clearly shows that a standard low-calorie diet is still better than a high-calorie one fortified with resveratrol, at least in the long term. So think again before you douse yourself with wine and cheese.
Original Nature article
Now, let me get back to my creme-filled donut
Fishy no more
I'd better fry that fish for dinner today instead of waiting for the weekend. How many times do you see a front page news headline on the BBC saying "Only 50 years left for sea fish"? But that's what it says, and this is one of the scarier changes that's going to take place as we irreversibly modify our planet. No need to have Halloween as a special celebration anymore.
This is not surprising. In an earlier post, I already commented about precipitous amphibian declines orchestrated by environmental damage. Now it's the fish, and in fact marine life in general. And no wonder; of all systems, marine systems are probably the most delicate on earth. In fact, we haven't even understood the complex symphony involving fish, algae, other sea denizens and chemicals that takes place below the water's surface. As one researcher said, marine biosystems are like a pack of cards, so inextricably linked with each other that if you disturb one, you turn the others topsy-turvy. But doesn't the pack of cards go further in even more ways? After all, the oceans are the great equalizers of the planet, absorbing CO2 and being key to maintaining temperature. One of the scarier scenarios for global warming concerns the perturbation of the North Atlantic circulation, which would throw Europe and the US into a new age of climate, possibly an ice age.
In the last two decades or so, we have started to see the effects of climate change on biodiversity in a very real way, with not only loss of habitats but also the spread of disease vectors that thrive in warmer conditions. Finally, as I have already said, it's going to be the disruption of daily life that is going to be the final wake-up call for people. The only critical question is whether it will be too late by then, and the answer increasingly seems to be yes. This is no longer a matter that needs to appeal only to morality and preserving the beauty of nature. This has to do with our modern way of life, and once we take a look, we realise that the matter of biodiversity destruction is linked to many of our other grotesque nemeses, including the oil crisis and religious and political conflict. The pack of cards packs deep indeed.
Critics of global warming who said that taking action against it would adversely affect economies need to open their eyes. The naysayers who don't wish to preserve the environment for its own sake could at least preserve it for their own sake. How many people's livelihoods depend on seafood collection and processing? And again, how much of the world's economic capital rests on providing seafood to populations? If this has nothing to do with economics, then I don't see what has. The fact that I may not get that stuffed pomfret on a lazy weekend will be the most trivial of all consequences.
I firmly believe that if humanity's end comes, it will not be because it lacked the technology and capability for solving problems, but because the problems were so intractably connected to each other and humans' way of life, that even solving one problem would make the entire system collapse. It would be the ultimate irony; the system's sheer complexity and overbearing influence precluding even the realistic solution of a problem, even when it is at hand.
Smith's Interestingolide is wrong
Imagine that your name was Smith, and you had synthesized a molecule named Interestingolide. Now, if someone published a paper with the title "Reassignment of the Structure of Smith's Interestingolide", how would you feel? Well, that's what has happened to 'Mehta and Kundu's Spiculoic Acid' whose published structure has been refuted by Oxford's Jack Baldwin. What is interesting is that the original group seems to have incorrectly predicted the stereochemistry of the product of a Sharpless epoxidation. The substrate is simple, so is the product, and it should not have been difficult to use Sharpless's mnemonic device to do this prediction. As is clear from the diagram below, the Sharpless mnemonic device makes it clear that holding the allylic alcohol in the orientation shown and using D (-) DET, you should get the alpha epoxide.
The rest of Baldwin's analysis follows, but this initial incorrect prediction sets the stage for all that. Maybe the grad student did the stereochemical prediction, and the professor either trusted him, or did not look closely enough at his analysis. Well, better luck next time, and I will stop speculating.
* Incorrect prediction:
* Correct prediction:
* Sharpless mnemonic device:
* Reference:
Org. Lett. 2006, ASAP; DOI: 10.1021/ol062361a
It's the simple things that...
I learnt my lesson. The problematic molecule for which our NMR data was not making sense turned out to have some protons with really long relaxation times. I had read a line in some well known NMR book which said "For any detailed study, it's best to measure T1 relaxation times before the experiment". I read it, and I forgot it. Well, not forgot it entirely, but it's still true that for most molecules, relaxation times are similar for all the protons and not measuring T1 does not really make a difference. In our case, unfortunately, it was the reference protons for the NOESY distance calculation that turned out to have looong T1s. Simple solution: measure T1, and then set the relaxation delay (d1 on Varian spectrometers) equal to 5 x T1. I am in the process of rerunning the NOESY. But from next time, I am never going to forget doing a T1 measurement before anything else, even for simple and 'obvious' molecules. It's a humbling lesson well learnt.
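For what it's worth, the bookkeeping afterwards is trivial; here is a minimal sketch in Python, assuming you have already extracted per-proton T1 values from the inversion-recovery fit (the proton labels and numbers below are hypothetical):

```python
# Hypothetical measured T1 values in seconds for the protons of interest
t1_values = {"H2'": 0.8, "H5": 1.1, "OCH3": 4.7}  # the methyl turns out to relax slowly

# Set the relaxation delay to five times the longest T1 so that even the slowest
# proton recovers essentially fully (>99%) between scans; on Varian spectrometers
# this delay is the d1 parameter.
d1 = 5 * max(t1_values.values())
print(f"d1 = {d1:.1f} s")  # d1 = 23.5 s
```

The price, of course, is a much longer experiment, which is exactly why it's tempting to skip the T1 measurement in the first place.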
Beyond the rhetoric
I have been reading George Olah's 'Beyond Oil and Gas: The Methanol Economy' and even without having gotten to the part about the methanol economy, I can heartily recommend the book. At first sight, the book looks technical, but it is actually extremely accessible to the layman. The first half of the book is an extremely lucid and comprehensive account of the history, geopolitics, technology and future of oil, natural gas, and coal, and also discusses the hydrogen economy and alternative fuel sources including atomic energy. The book is very much worth reading and buying for this half alone. All three of these commodities have become the Big Brothers of our lives, seemingly munificent, indispensable, and revolutionary. Yet all three, and especially oil, have made us utterly dependent on them in a morbid manner. This is true in the many obvious ways in which we use oil for transportation, electricity and heating, but also in the not so obvious and yet ubiquitous ways in which oil-based products are the basis of every part of modern life, from plastics to pharmaceuticals. Our dependence on them is appalling indeed, as demonstrated in the book.
These three commodities are like the thieves in movie scenes; the moment the thieves run in some direction where they think they have a safe haven, some insurmountable obstacle materializes. And so it is for fossil fuels. Whatever optimistic estimates and facts we discover about them are almost immediately thwarted by serious problems. Oil is convenient for transportation, exists in large reserves, and is the most versatile fossil fuel. Yet it is riddled with exponentially increasing demand and production costs, locations in regions of political instability, and most importantly, environmental problems. Wars and political regimes are made and broken over oil, and leaders will go to any lengths to disguise their aspirations for oil and the actions resulting from them. Non-conventional sources of oil need much energy input from natural gas, and again contribute to environmental CO2 levels. This last problem is gigantic for coal. Natural gas has momentous transportation and safety problems. So does the much-touted Hydrogen Economy of Bush. Ethanol from corn seemingly needs more energy from fossil fuels than it actually saves and produces, and may not be worth it. The bottom line is that any fossil fuel source, and many non-conventional energy sources we can consider, have such intractable problems that we cannot think of depending upon them for eternity or even for the foreseeable future. Put the rhetoric aside and focus on the facts. Any decision about energy has already been made very difficult because of our aspiration to a high standard of living, our reluctance to give up creature comforts, and political lobbying which traps decisions in a cycle of profit and one-upmanship, and the last thing we need is trendy slogans about unlikely energy sources.
It does not matter what the reserves are; after all, predictions about oil reserves have always turned out to be underestimates and false alarms until now. But current predictions seem more credible, and in the end, what will be the last straw is the simple dominance of demand over supply. It will not matter whether we have reserves then; the production costs and the resulting oil price driven by demand will be so high that they will lead to a virtual breakdown in social infrastructure. If oil prices reach $150 a barrel, nobody will care how much proven reserves we have. And it is likely that this will happen very soon, with much of the developing world aspiring to US SUV standards of existence. What's important is that because of our utter dependence on oil, such a situation will entail a fundamental shift in our standard of living, especially in the US, and we simply lack the social and mental capacity to make this shift overnight.
I haven't gotten to the part about methanol yet, so I will refrain from commenting on it until later, but what is clear is that oil and fossil fuels have to go, in some way or another. I have said it before and I will say it again: nuclear energy is the cheapest, safest, and most efficient energy source that we can use in the near future. What to do about terrorists hitting a nuclear power plant is a complex problem, but surely Bush can take care of that, with all the extreme measures he takes for enforcing national security.
In any case, it is eminently worth taking a look at 'Beyond Oil and Gas' if you get a chance.
The same and the not same
One of the most important questions that someone in the early stages of drug discovery can ask is: Do similar ligands bind in a similar manner? Ambiguities riddle this seemingly commonsense question, right from the definition of 'similar'. Countless drug discovery projects must have been made or broken because of attention or lack of attention to this central principle.
If I had to give a one-word answer to the question, it would be no; not because it's the right answer, but because I would be playing safe by saying that. It can be a very big mistake to assume that similar ligands will bind in the same way in a protein binding pocket, or even in the same pocket, and medicinal chemists on both the experimental and computational sides are aware of what a wide and disparate range of SARs similar ligands can have. This extends very much to binding modes too.
On more than one occasion, medicinal chemists have taken the known binding conformation of a ligand, and then twisted and turned another 'similar' ligand into that same conformation. The two structures are then superimposed. They look so similar, with the hydrophobic and polar groups in the same places. Ergo, they must overlap in their binding mode. Big mistake. While this can turn out to be the case many times, it's just wrong on a philosophical basis to assume it. That's because ligands have their own personalities, and each one of them can interact quite differently with a protein binding pocket. The problem is that superposition of ligands can always be justified in retrospect if your ligand shows activity. But that does not make your assumption true.
One of the better-known cases concerns the search for a 'common pharmacophore' for Taxol, epothilone, and other ligands which bind to the same pocket in tubulin. A common pharmacophore is a minimal set of common structural features which will cause bioactivity. In typical fashion, researchers compared the various parts of the molecules and twisted them to overlap with each other (PNAS paper). Based on this superposition, they then designed analogs. Result: most of the analogs turned out to be inactive. Thus, there was no 'common pharmacophore'. In this context, Snyder and others published a model (SCIENCE paper) in which, by doing meticulous electron density fitting for Taxol and epothilone, they demonstrated that each of these molecules explores the binding pocket of tubulin in a unique way, utilizing unique interactions. Thus, the ligands are 'promiscuous'.
Many times, modeling assumes that similar ligands bind in the same conformation, and docking programs dock them in the same conformation. What docking programs mainly fail to take into account is protein flexibility, which accounts as much for ligand binding as ligand flexibility does. Then, when X-ray structures of those 'similar' ligands bound to the same protein are obtained, they reveal that either the protein underwent a conformational change that altered the binding modes of the ligands, or that even without this, the ligands bound in a dissimilar manner. Ligand binding is a 1 kcal/mol energy window game, and there's no telling how each ligand will exploit this window.
So it's timely that a Swedish team has published a paper in J. Med. Chem. that tries to tackle the question: do similar ligands bind in the same way? The team used the Tanimoto index to measure the similarity of ligands, and then looked at many examples from the PDB to gauge the binding of similar ligands to the same protein. They use three main criteria of difference:
1. The position of water molecules
2. The movement of protein side-chain atoms
3. The movement of backbone atoms.
Significantly, the first factor turned out to be the most variable for their examined cases, and it is one which is not always paid attention to. Water is a fickle guest for a protein host, and it can mediate interactions differently even for ligands with slight structural differences. The second factor is also significant, as side-chain conformational differences induced by particular ligand features can greatly change the electrostatic environment of the protein; as illustrated above from the paper, the conformation of a single Met residue changes the electrostatics. The third factor, backbone movement, is relatively unchanged and a benign variable.
The bottom line is, it may be an ok working hypothesis to assume that because your known ligand binds in a known conformation, other very similar ligands or even known actives will bind the same way. But start taking it as an obvious rule, and you can always expect trouble.
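For those who haven't run into the Tanimoto index before, here is a minimal sketch of how it is computed from binary fingerprints. The bit indices below are made up purely for illustration; a real workflow would generate the fingerprints with a cheminformatics toolkit rather than by hand.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprints given as collections of 'on' bit indices."""
    a, b = set(fp_a), set(fp_b)
    common = len(a & b)
    # |A intersect B| / |A union B|; identical fingerprints give 1.0, disjoint ones 0.0
    return common / (len(a) + len(b) - common) if (a or b) else 1.0

# hypothetical bit vectors for two 'similar' ligands
lig1 = [3, 17, 42, 77, 128, 255]
lig2 = [3, 17, 42, 90, 128, 255]
print(f"Tanimoto similarity: {tanimoto(lig1, lig2):.2f}")  # 5 shared bits out of 7 total, ~0.71
```

A common (though arbitrary) convention is to call two ligands 'similar' above some threshold such as 0.7 or 0.85; the paper's exact cutoff is whatever the authors chose, not something this sketch can tell you.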
Bizarre Halloween paper of the year
If there were a prize for "Bizarre paper published to celebrate Halloween", this paper might get it. On a more serious note, it's an interesting paper published in this week's ACIE that tackles a very common phenomenon: the metallic odor of iron that we all perceive. As the authors demonstrate, the odor is more likely to be from carbonyl compounds produced by the reduction of lipoperoxides in the skin by Fe2+. They also analyze the garlicky odor produced when phosphorus-containing iron is 'pickled' in acid, commonly attributed to phosphine (PH3), and argue that this odor too is due to organophosphines rather than phosphine itself. Some of their statements and conclusions (not to mention the title: The Two Odors of Iron when Touched or Pickled: (Skin) Carbonyl Compounds and Organophosphines) really merit Transylvanian attention:
"Humans are perplexed by the metallic odor from touching iron metal objects, such as tools, cutlery, railings, door handles, firearms, jewelry, and coins. Phosphorus-containing iron which is under acid attack gives rise to a different carbide or garlic odor which metallurgists have attributed to the gas phosphine (PH3); however, we found that purified PH3 at breathable dilution has hardly any odor. The aim of our research is to understand the chemical causes of these two iron smells in our engineered metal environment."
"1) Ironically, the iron odor on skin contact is a type of human body odor"
"Blood iron: Blood of one of the authors rubbed onto his own skin resulted in similar metallic odor and the same odorants (78±7 nmol dm-2, 4 repetitions) as in the above experiments. Controls by addition of FerroZine which is a blood-iron chelator suppressed the reaction (4±0.4). Aerated and homogenized blood also developed metallic odor on its own. There are reports that blood iron[10] can decompose blood lipidperoxides and that FerroZine inhibits this reaction.[11] This finding confirms that blood iron can trigger the metallic odor on skin or in blood itself."
"Sweaty skin corrodes iron metal to form reactive Fe2+ ions that are oxidized within seconds to Fe3+ ions while simultaneously reducing and decomposing existing skin lipidperoxides to odorous carbonyl hydrocarbons that are perceived as a metallic odor. This fast reaction creates the sensory illusion that it is the metal in itself that we smell right after touching it."
Several of the experiments made me a little edgy: breathing phosphine to detect its odor (I have always been looking for a reference to the man or woman who first described the taste of KCN) and blood rubbed onto skin. These guys' work sure is interesting; what I am not sure of is whether I want to do a postdoc with them. Interestingly, pure phosphine, just like pure methane, is odorless.
Cholesterol in plants fun fact
One of the questions I have always asked myself is: do plants contain cholesterol? I was under the impression that they don't, and that this is precisely the difference between the sterol pathways in plants and animals; lanosterol goes to cholesterol in animals, and to stigmasterol, ergosterol, etc. in plants and fungi. Interestingly, none of the 'popular' biochem books I read elaborated on this question or answered it in the affirmative.
So it was interesting when I came across this J. Chem. Ed. article which talks about the existence of cholesterol in plants. Apparently, the amount, as expected, is quite low. But what is more interesting is the authors' survey of popular biochem books (Lehninger, Stryer, Garrett and Grisham, Voet, etc.), in which they confirm that this fact is not mentioned.
But the most interesting fact may be that the USDA does not require cholesterol to be declared when it is less than 2 mg per serving, which is the case with plant products. I wonder what other compounds Uncle Sam allows to be left off product labels when they fall below a certain threshold. Or maybe the labels should at least state what exceeding such limits can do, as with that compound in Chicken McNuggets which can kill you when it's more than 1 g.
Hiatus
Real life has finally intervened, and I have been busy with it for the past couple of days. The hiatus will continue till the weekend, after which I should hopefully fill in a backlog.
'But is it chemistry?' hits Nature
There it is. Nature has picked up on the 'Is it chemistry?' thread. Awareness and contributions from the good folks at the Skeptical Chymist were no doubt responsible for the article. The article does not echo bloggers' views directly, but echoes the views of previous Nobelists, which in turn echo bloggers' views. There is the one who thinks (as I and some others do) that they should have a separate award for biology. There are those who point to the preponderance of bioscientists on the Nobel chemistry committee, and there are those who think that such biology-oriented chemistry awards are inevitable in the future. Like it or not, I think we have to agree with this last prediction. This year's Nobel does not represent a major advance in the fundamental understanding of structure, reactivity, or synthesis. But then, are many such advances possible in the near future? And in the absence of such advances, awards such as the one given this year are inevitable.
Just one problem with the article. Richard Schrock became Robert Schrock.
Overheard from the 2007 But Is It Chemistry (BIIC) Conference at Tierra del Fuego:
* Physicist, biologist, medical researcher and engineer: Hey, chemist...too chicken to compete for the prize?
* Chemist: What are you talking about? You are all doing chemistry.
Well, maybe it's not that simple, but close.
Wrong data makes me happy
Well, it's not that simple, but a couple of months ago I did a NOESY analysis of an alkaloid. I acquired the spectrum, integrated the peaks, and got the interproton distances, which looked fine. Then I tried to do a similar analysis for the salt. That's it, nothing much different, just the salt. And now, try as I might, I could never get decent distances. Most of my distances seemed too short (<2.4 Å). I reran the NOESY with different parameters, integrated the peaks tediously, and everything looked fine. But again, when I calculated the distances from the intensities, everything was messed up. Finally, after a quick cost-benefit analysis of spending more time on the compound myself versus outsourcing it, it was decided that it would be best to send it to an expert.
A couple of days ago, we got the intensities of the peaks from the expert, and guess what. The distances are still terrible, at least most of them. So that means that maybe it was not my fault. Maybe it wasn't me, it was you (you = the machine, the parameters, the molecule itself). I am not blaming the expert, because there seems to be some unique feature of the molecule itself that is causing trouble. So in some ways, I am happy that it wasn't just me. But I am also sad because this means that our analysis is prolonged even further. This is a project which I want to get done with and publish, and it never seems to get any closer to that goal.
But this also makes the situation more chemically interesting. Why would a simple transformation from the alkaloid to its salt make the analysis so much more difficult? The 1D spectra look ok, so we are pretty sure no decomposition or chemical transformation has taken place. And yet the problem suddenly became thorny because of such a simple change. It seems that the vagaries of SAR are reflected in SNR (structure-NMR relationships) too.
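For readers unfamiliar with how distances fall out of NOESY intensities: under the usual isolated spin-pair approximation the cross-peak intensity scales as r^-6, and distances are calibrated against a reference proton pair of known separation. Here is a minimal sketch of that arithmetic; the intensities and the 1.78 Å geminal reference below are made up for illustration, and this is not the actual workup used for this alkaloid.

```python
def noe_distance(i_cross, i_ref, r_ref):
    """Estimate an interproton distance from a NOESY cross-peak intensity using
    the isolated spin-pair approximation: I is proportional to r**-6,
    so r = r_ref * (I_ref / I) ** (1/6)."""
    return r_ref * (i_ref / i_cross) ** (1.0 / 6.0)

# calibrate against a hypothetical geminal pair fixed at ~1.78 Angstrom
r_ref, i_ref = 1.78, 100.0
for intensity in (60.0, 25.0, 5.0):
    print(f"I = {intensity:5.1f}  ->  r = {noe_distance(intensity, i_ref, r_ref):.2f} A")
```

The sixth-root dependence is also why modest errors in integration or calibration get strongly compressed in the distances, which makes a whole set of distances coming out systematically short all the more puzzling.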
In the beginning...
Once in a while, you come across a paper whose basic premise (and length) is so short that you can read it in ten minutes. You can talk about it to a college student, and its conciseness and simplicity provide a refreshing change from the usual technical articles that assail your faculties every day.
Such is the latest missive from Ronald Breslow. One of the big puzzles in the origin of life is the origin of homochirality (why L-amino acids?). Breslow does not solve this problem, but provides a possible solution for the amplification of existing homochirality that is simplicity exemplified.
The premise? Racemates of amino acids have a lower solubility than the pure enantiomers.
The experiment? Evaporate a slightly enantiomerically enriched solution of amino acids twice.
The result? The ee increases from 10% to about 91%, and from 1% to about 87%.
Here are all the results:
These experiments were inspired in part by the remarkable discovery of slightly enantiomerically enriched, non-racemizable amino acids in meteorites, most famously the Murchison meteorite (e.g. S-alpha-methylvaline) (see the linked ACR 2006 review).
These L-amino acids in sufficient concentration could then catalyse the reactions of other organic and biomolecules, such as has been demonstrated with sugars.
Very nice, and it could have been entirely plausible on the early earth. The problem of course is how the chiral excess, no matter how small, could have arisen in the first place. The real problem is this: how can chirality arise from random processes, which produce both enantiomers in equal parts?
When I started thinking about this question, I realised what a fickle beast the word 'random' is. Random processes of course cannot give rise to a particular bias. But now, consider a random process which happens extremely rarely. For example, the odds of getting heads and tails for a coin are 50:50. However, flip a coin only 10 times. Random variables such as air flow could very reasonably give you six heads and four tails. Now, if there were some process that could take advantage of whichever face lands more often (heads in this case), then that process could essentially start off with more heads, and then capitalise on the even more heads produced, and so on. If this process were autocatalytic, as is an important condition for early life-forming processes, then it would immediately take advantage of the excess heads, and then produce even more heads, and... there: you have your L-amino acids (a toy simulation below makes this concrete). Of course, it can be noted that there was no particular reason to choose L-amino acids. That choice was entirely random, but the above analysis shows that such a choice can be made in the first place.
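Here is a deliberately crude sketch of that argument in code. All the numbers are invented: take the six-heads, four-tails outcome above as the starting pools, assume a second-order autocatalysis in which each pool grows in proportion to the square of its own size, and watch the initial 20% excess run away. This is only a caricature of autocatalytic amplification, not Breslow's mechanism.

```python
def toy_homochirality(L=6.0, D=4.0, cycles=6, k=0.05):
    """Toy illustration only (made-up rate constant): start from the 6-heads,
    4-tails outcome of ten coin flips and let a crude second-order autocatalysis
    (each pool grows by k * pool**2 per cycle) amplify whichever 'enantiomer'
    happened to start ahead."""
    for _ in range(cycles):
        L, D = L + k * L * L, D + k * D * D
    return abs(L - D) / (L + D)   # enantiomeric excess: 0 = racemic, 1 = homochiral

print(f"starting ee = 20%, ee after amplification = {100 * toy_homochirality():.0f}%")
```

Had the flips come up 5-5, the same loop would leave the mixture exactly racemic forever; the whole point is that a rare random event plus a nonlinear amplifier is enough to break the tie.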
Based on this argument, I can envisage a process where, say, a rock surface forms in a biased way; that is, its surface is more suitable for anchoring the L-amino acids than the racemate or the D-amino acids. In an amino acid 'soup' formed in a crater in such a rock, it is then quite plausible that evaporation would take away the racemate simply because it cannot hold on to the surface as well as the L. In such a case, the final 'soup' will be enriched in L-amino acids.
Again, the hypothesis breaks down if one imagines millions of such rocks, in which case there is no reason for any bias. But if our rock is formed by a cometary impact, for example, then I can imagine a comet carrying amino acids slamming into a giant rock and producing a skewed surface. Such impacts are very rare, as we know. So by the time the next comet comes around, there already exists an asymmetrical surface which can introduce a bias in the evaporation of Breslow's amino acid pool, and the L-amino acids will have taken over, aided by the biased surface produced by the 'random' impact of the first comet.
It's all about natural selection. Even a random process can produce bias if it happens only once, and then allows nature to capitalise on the asymmetry produced by that one time event. Statistics be praised.
* Reference: Ronald Breslow and Mindy S. Levine, Amplification of enantiomeric concentrations under credible prebiotic conditions, PNAS 2006 103: 12979-12980
The Trouble With Everything
I am looking forward to reading Lee Smolin's new and hot 'The Trouble with Physics', in which he lambasts string theory as a science that is being pursued for the sake of elegance and beauty instead of agreement with experiment, which is the bedrock of the scientific method. Well, it's probably wrong to say that theoretical physicists are avoiding predictions that agree with experiments. What Smolin seems to think is that the theory has made very few experimental predictions, which have not been verified either, and so the main justification for pursuing it seems to be mathematical elegance. Also, the predictions that have been made seem to be testable only at very high energies, not achievable in the near future. A physicist friend of mine tells me that the alternative is to look back at the early universe and gather 'observational' instead of 'experimental' evidence for the predictions of string theory, something like what was done for the Big Bang, which won a Nobel this year.
That is why chemistry appeals to me more, although I find physics very interesting too. In chemistry, theory necessarily has to be much closer to experiment. That's why Woodward chose chemistry over mathematics, I believe.
Anyway, more after I actually read the book.
Chartelline and copper catalyzed decarboxylation
A former mentor-student duo has published successive papers in JACS.
Total Synthesis of Antheliolide A
Chandra Sekhar Mushti, Jae-Hun Kim, and E. J. Corey
http://dx.doi.org/10.1021/ja066336b
Total Synthesis of (±)-Chartelline C
Phil S. Baran and Ryan A. Shenvi
http://dx.doi.org/10.1021/ja0659673
First Corey publishes a synthesis of antheliolide, and then Baran publishes chartelline. Corey uses several nice transformations, including a [2+2] ketene cycloaddition. In one step, he uses LiOH/MeOH instead of TBAF to deprotect a TMS-protected alkyne, because TBAF would also remove the TBDPS group. Why does LiOH cleave only the TMS and not the TBDPS?
Also, in another step, he wanted to convert a (methylated) lactol to a lactone. But he had a sensitive caryophyllene-like moiety in the ring, which is notoriously sensitive to acid and oxidants. So he takes the indirect route: he first converts the OMe to a phenyl selenide, then turns it into a hydroxyl with AgNO3, and finally uses mild oxidation with TPAP to oxidise the hydroxyl. Neat.
Baran's synthesis is more elegant and, as usual, based on biosynthetic proposals. He does some neat transformations, including a cascade-like reaction to effect a nice ring closure.
However, it is the last step which I found interesting, in which he was trying to decarboxylate a vinyl carboxylate. Copper-catalyzed decarboxylation failed, but it turned out that the carboxylate was quite sensitive and labile to heat, so simple heating worked; the proposed mechanism involves a proton serving a function akin to that of copper, as noted in the reference below. He also suggested transient positive charge stabilization by chlorine. I know fluorine can do that well; chlorine would be less efficient, but could still do it, I suppose.
This led me to look up some back references on vinyl decarboxylation. I found a reference by Theodore Cohen of Pittsburgh (J. Am. Chem. Soc. 1970, 92 (10), 3189-3190), who proposed the following schematic to explain stabilization of the TS by copper.
He was trying to explain some copper-catalyzed quinoline decarboxylations that went back to 1930. I also came upon a 1951 paper by Frank Westheimer (J. Am. Chem. Soc. 1951, 73 (1), 429-435) in which he tried to explain the decarboxylation of alpha-keto acids with copper, a classic reaction which he used as an enzyme model system.
Schrodinger Webinars 2006. And the importance of being polarized
Like last year, Schrodinger is hosting webinars on new application developments, one every week. Here is the schedule for those who may be interested. Registration is easy. Our group is thinking of doing most of them, especially the docking sessions.
They did the QM-polarized docking module seminar today. Current docking routines often fail to give good results because of a lack of treatment of polarization effects, which can ultimately be traced back to the charges on the ligand atoms. Polarization effects are crucial in calculating transition states for enzyme reactions, for example.
Charges on ligand atoms constitute a big topic in themselves, and many methods have been developed to get accurate charges, including semi-empirical and ab initio methods. Probably the best method I know involves calculating the electrostatic potential and then fitting to the atoms the charges which best reproduce that potential, a method which I believe was developed by the late Peter Kollman (whose papers are still getting published five years after he passed away). Charges on atoms forming h-bonds, for example, can greatly change h-bond stabilization energies.
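To make that fitting idea concrete, here is a minimal, self-contained sketch of ESP charge fitting as a linear least-squares problem, in the spirit of the Merz-Kollman approach. The geometry, the grid, and the 'QM' potential below are synthetic, and the net-charge constraint is imposed only approximately through a heavily weighted extra equation; real programs handle this (and grid selection) far more carefully.

```python
import numpy as np

def fit_esp_charges(atom_xyz, grid_xyz, esp_values, total_charge=0.0, w=1e3):
    """Least-squares fit of atomic point charges to reproduce an electrostatic
    potential sampled on grid points (atomic units, V = sum_i q_i / r_i).
    The net-charge constraint is added as a heavily weighted extra row."""
    # design matrix: A[k, i] = 1 / |grid_k - atom_i|
    diff = grid_xyz[:, None, :] - atom_xyz[None, :, :]
    A = 1.0 / np.linalg.norm(diff, axis=2)
    A_c = np.vstack([A, w * np.ones(len(atom_xyz))])   # sum(q_i) ~ total_charge
    b_c = np.append(esp_values, w * total_charge)
    charges, *_ = np.linalg.lstsq(A_c, b_c, rcond=None)
    return charges

# synthetic example (made-up geometry): two 'atoms' carrying +0.4 / -0.4;
# recover the charges from the potential they generate on random grid points
rng = np.random.default_rng(0)
atoms = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 2.0]])
true_q = np.array([0.4, -0.4])
grid = rng.normal(scale=4.0, size=(200, 3)) + np.array([0.0, 0.0, 1.0])
esp = (true_q[None, :] / np.linalg.norm(grid[:, None, :] - atoms[None, :, :], axis=2)).sum(axis=1)
print(fit_esp_charges(atoms, grid, esp))   # recovers approximately [0.4, -0.4]
```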
Cute electrostatic potential factoid: ESP studies can often reveal features of molecules that are not obvious upon inspection. For example, the ESP well is much deeper at the N3 of cytosine than at the O8. 'Inspection' may suggest that it's the O8 that is the most nucleophilic, but experiment shows that alkylation and protonation take place on N3, as suggested by the ESP.
In any case, the Schrodinger group takes poses from regular docking with their module Glide, and then uses a program called QSite to do a single-point energy calculation with a good basis set, which endows the ligand atoms with new charges. The poses are then redocked. Using this methodology, they were able to get good docked poses for many proteins that were outliers with regular docking (RMSD > 2 Å). Top of the outlier list was HIV-1 protease, and I suspect that the hydrophobic nature of the site raises special issues. Regular Glide probably cannot find the correct pose for compounds in this site because many compounds bind tightly to the protease purely on the basis of the hydrophobic effect, without any significant hydrogen bonding.
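As an aside, the 2 Å yardstick above is simply the root-mean-square deviation over matched atom coordinates between the docked and crystallographic poses. A minimal sketch, assuming the two poses are already in the same frame (no superposition step) and using made-up coordinates:

```python
import numpy as np

def pose_rmsd(coords_docked, coords_xtal):
    """RMSD between matched atom coordinates of a docked pose and the
    crystallographic pose (both N x 3 arrays in the same frame; no
    superposition is performed, as is usual when the receptor is fixed)."""
    diff = np.asarray(coords_docked) - np.asarray(coords_xtal)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

# made-up coordinates for a 4-atom fragment, perturbed by up to ~1 Angstrom
xtal = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [2.3, 1.2, 0.0], [3.8, 1.2, 0.4]])
docked = xtal + np.array([[0.5, -0.3, 0.2], [0.9, 0.1, -0.4], [1.1, 0.6, 0.3], [0.2, -0.7, 0.5]])
print(f"RMSD = {pose_rmsd(docked, xtal):.2f} A   (counted as an outlier if > 2 A)")
```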
Reference: "Importance of Accurate Charges in Molecular Docking: Quantum Mechanical/Molecular Mechanical (QM/MM) Approach"- AE Cho, V Guallar, BJ Berne, R Friesner - Journal of Computational Chemistry, 2005
Whither chemistry?
This year's chemistry Nobel again raises the question about the nature of chemistry as a central science. An article by Philip Ball in Nature a few months ago tried to explore whether there are any great questions in pure chemistry that are exclusively chemical questions. As an aside, I must say that Ball has been doing a truly admirable job of popularizing chemistry over the years.
In my opinion, the problem is that the nature of chemistry by definition is both a blessing and a curse for our science. That's because chemistry is all about understanding molecules, and in our lives, it's molecules that are involved in all the real and good stuff, including biology, medicine, and engineering. So what happens is that a chemist may develop a drug, but then a doctor uses it to cure a disease. The credit thus goes mainly to the doctor. Similarly, a chemist discovers a lubricant, but then engineers use it to revolutionize the automobile industry. There goes the credit to the engineers. Therefore, the problem is not really with recognising chemistry, but with recognizing chemists. The problem with recognizing chemistry is of a different nature though; chemistry is so ubiquitous that most people take it for granted. Also, because most people's perception of chemistry is on a practical basis, for them chemistry is more synonymous with industry and manufacturing than with basic scientific research. As noted above, when it comes to basic research, for the general public, chemistry seems to mostly manifest itself through medicine and engineering, which are the actual faces of the product.
In my opinion, there is one question out of all the ones that Ball cites which is both inherently chemical as well as all-pervading, and that is self-assembly, in the broadest sense possible (so that question also ends up encompassing some of his other questions, such as the origin of life). I also think that this problem largely captures the unique nature of chemistry. Self-assembly needs an understanding of forces between molecules that is a hodgepodge of qualitative and quantitative comprehension. A physics-based approach might turn out to be too quantitative, a biology-based approach too qualitative. Understanding the physics of hydrogen bonds, for example, would help but would not be enough for understanding their role in self-assembly. It's only such understanding, coupled with considerable empirical knowledge of hydrogen bonding in real systems, that will serve as a true guide. It's only chemists who can bring the right amount of quantitative analysis and empirical data to bear on such problems. Of course, that does not mean others cannot do this if they tried. But then, anyone can do almost anything if he tries hard enough; that hardly discounts the value of specific expertise.
The main feature of chemistry, in my opinion, is this fine balance between analytical or mathematical thinking based on first principles, and empirical thinking based on real-life data and experiments. This approach makes the science unique, I think, and Linus Pauling was probably the best example of someone who embodied it in the right proportion. The practical feature that makes chemistry unique, of course, is that chemists create new stuff; but without this kind of understanding, that would not be possible on a grand scale.
Chemistry Nobel for Biologist
So there we are; Roger Kornberg, following in his father's footsteps, has won the Nobel for chemistry. This prize also seems to have been a quickie; Kornberg's pivotal papers seem to have been published after 2000. I am sure Kornberg deserved it, but like Paul, who stayed up all night looking at the announcements, I too don't feel really excited about it. I wouldn't have predicted this, as I am not so familiar with molecular biologists, but I was surprised to see that absolutely no blog I read had this name in a post or in the comments. There is no problem, of course, with molecular biologists getting the chemistry prize. In a way, it's a triumph for chemistry because it shows the vast scope and purview of the science. Maybe the committee decided to balance the 'pure' chemistry that was honoured last year with something more interdisciplinary. But it's always more exciting to see the prize awarded to someone whose work you are familiar with and who is more from your general field, as happened last year.
Nitrone-cyclopropane cycloaddition
Here's a use of a methodology I haven't really heard about in a long time: the nitrone-cyclopropane cycloaddition. Kerr and coworkers now use it in the synthesis of one of those indolizidine alkaloids, phyllantidine. It's a fairly straightforward synthesis, although again, I would always pay attention to forming an N-O bond at an early stage. One of my favourite name reactions is used: the Krapcho decarboxylation, the concomitant hydrolysis and decarboxylation of an ester using LiCl in DMSO.
I am too lazy to look up the literature, but why doesn't the nitrone add to the double bond?
Also, the nitrone-cyclopropane addition should be nonconcerted, since we have a nicely stabilised ring-opened intermediate.
The other interesting thing is the selectivity of the enolate oxidation.
At first I was not satisfied, but then I thought that their explanation for it sounds ok. It does look a little crafty, though, when you draw the vinyl group conveniently pointing in a sterically hindered direction, and make that the only structure with the group in that position! The difference between the group in that position and pointing away from it is 2 kcal/mol, by the way; at least that's what MMFF says for a model compound.
According to the latest edition of Eliel, the A value for OH ranges from 0.6 to 1.04 in different solvents, while that for CO2CH3 is 1.2-1.3. Also, MMFF gives a difference of 1 kcal/mol for the product with OH axial. So not much difference there, but it is still worth keeping in mind that the product with the CO2CH3 axial dominates. I wonder how much the reaction is product-controlled here, though. Such analyses depend on the thermodynamic nature of the reaction; in this case, the fact that the oxaziridine ring opens probably means that the products as a whole are lower in energy. If that is so, then the TS should resemble the reactant, but I would never place my bets on such rationalizing, as subtle effects can change things in the TS. There are also other factors here, especially the facile nature of topside attack.
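For a feel of what 1-2 kcal/mol buys you, here is a quick back-of-the-envelope conversion of an energy gap into a two-state Boltzmann population at room temperature; it assumes nothing beyond the simple two-state picture and is independent of the specific system discussed above.

```python
import math

def population_ratio(delta_e_kcal, temp_k=298.15):
    """Two-state Boltzmann ratio (minor/major) for an energy gap in kcal/mol."""
    R = 1.987e-3  # gas constant in kcal/(mol*K)
    return math.exp(-delta_e_kcal / (R * temp_k))

for de in (0.5, 1.0, 2.0):
    ratio = population_ratio(de)
    major = 1.0 / (1.0 + ratio)
    print(f"dE = {de:.1f} kcal/mol  ->  roughly {100 * major:.0f}:{100 * (1 - major):.0f} at 298 K")
```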
Ref: Total Synthesis of (+)-Phyllantidine, Cheryl A. Carson, Michael A. Kerr
Published Online: 13 Sep 2006
DOI: 10.1002/anie.200602569
Corwin Hansch
In all the Nobel betting, I think we forgot someone significant: Corwin Hansch, whose QSAR has now become ubiquitous in medicinal chemistry and biology, both theoretical and experimental. Also, I think Hansch was one of the first, if not the first, to apply classical physical organic chemistry (Hammett etc.) to medicinal chemistry and bioactivity.
Updates: New thoughts.
Albert Overhauser: Nuclear Overhauser Effect
Norman Allinger: Molecular Mechanics
Solvent assisted olefin epoxidation: fluorine emerges a winner
A nice application of both computational techniques and kinetics (entropy of activation etc.) in elucidating the role of hydrogen bonded networks of H2O2 and hexafluoroisopropanol (HFIP) in the epoxidation of olefins. Some of the ordered clusters look almost like the active site of an enzyme; no wonder the barriers for oxygen transfer decrease. Also, fluorinated solvents are highly hydrophobic, so I would not be surprised if that property alone contributes to a very specific degree of ordering. All those h-bonds made me giddy.
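Since the kinetics here lean on activation parameters, a brief aside: the Eyring equation is how an enthalpy and entropy of activation translate into a rate constant. The numbers in this sketch are purely illustrative and are not taken from the paper; the point is simply that a strongly negative entropy of activation, of the kind an ordered hydrogen-bonded cluster implies, carries a real cost in rate that must be paid back in enthalpy.

```python
import math

def eyring_rate(dH_kcal, dS_cal, temp_k=298.15):
    """Eyring rate constant k = (kB*T/h) * exp(-dG/RT), with dG = dH - T*dS.
    dH in kcal/mol, dS in cal/(mol*K)."""
    kB_over_h = 2.084e10          # Boltzmann constant over Planck constant, 1/(K*s)
    R = 1.987e-3                  # gas constant, kcal/(mol*K)
    dG = dH_kcal - temp_k * dS_cal / 1000.0
    return kB_over_h * temp_k * math.exp(-dG / (R * temp_k))

# an ordered, 'enzyme-like' transition state (very negative dS) vs a loose one
print(f"{eyring_rate(15.0, -30.0):.2e} 1/s  vs  {eyring_rate(15.0, 0.0):.2e} 1/s")
```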
HFIP and related solvents have interesting effects on both organic reactions and the conformations of biomolecules. For example, addition of TFE to peptide and protein solutions can increase helicity. A simple and nice model proposed by Balaram explains this effect by the fact that, since F is a poor h-bond acceptor, the NH of the amide backbone no longer has to sacrifice its usual NH---O=C h-bond for one with the solvent (as it does with water), and thus can engage better in helix formation.
In this account, some of the C-H---F distances are right outside the sum of the vdW radii of the atoms. Weak C-H h-bonds are interesting entities, and unlike 'normal' h-bonds, their scope and definition are much looser and are emphatically not restricted to contacts within the sum of the vdW radii; the h-bond potential minimum is shallower and of lesser energy. In such cases, comparative studies of geometry and energy have to be performed to assess whether these are 'true' h-bonds (a crude distance screen is sketched below). The boundary is thin and nebulous, but there is an excellent book on this state of affairs, as well as on the whole gamut of h-bonding interactions: The Weak Hydrogen Bond in Structural Chemistry and Biology. I find this slippery boundary of h-bond definition, in many cases, a delicious example of the inexact yet rationalizable nature of chemistry.
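The crude first filter alluded to above is just a distance check against the sum of van der Waals radii. Here is a minimal sketch, assuming Bondi radii, with the caveat that passing or failing it says nothing by itself about whether a contact is a 'true' h-bond; geometry (angles) and energetics still have to be examined.

```python
# Bondi van der Waals radii (Angstrom) for the atoms of interest here
VDW = {"H": 1.20, "C": 1.70, "N": 1.55, "O": 1.52, "F": 1.47}

def within_vdw_sum(elem1, elem2, distance, slack=0.0):
    """Crude first filter: is a contact shorter than the sum of the two
    van der Waals radii (plus an optional slack, since weak C-H h-bonds
    are not strictly required to fall inside the sum)?"""
    return distance <= VDW[elem1] + VDW[elem2] + slack

print(within_vdw_sum("H", "F", 2.55))   # inside the 2.67 A sum for H...F
print(within_vdw_sum("H", "F", 2.80))   # just outside, like many C-H...F contacts
```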
One of those factoids which looks deceptively obvious, and hence can be misconstrued, is that organic fluorine is a good h-bond acceptor. Not so! Jack Dunitz has written a neat article ("Organic Fluorine Hardly Ever Accepts Hydrogen Bonds") in which he did a comparative study of the CSD (Cambridge Structural Database) and concluded that out of some 6000 structures with fluorine, only 30 or so had features which could perhaps indicate h-bonding with fluorine. In the absence of further data, it's best to assume that a fluorine in a molecule will not h-bond. As usual, the case could be quite different in a crystal.
Another thing to note again is the authors' use of MP2, which handles dispersion much better than DFT.
Reference: Albrecht Berkessel and Jens A. Adrio, J. Am. Chem. Soc., ASAP Article 10.1021/ja0620181
The Penicillin before Penicillin
We who live in the era of so many effective antibiotics would find it hard to imagine a time when even a simple cut or abscess could frequently prove fatal. It's hard to imagine the distress of doctors and family members when they saw a patient simply die of such an apparently minor affliction. The story of penicillin, which finally emerged to fight these infections, has become the stuff of legend. What's probably still living in the shadows is the equally crucial discovery of sulfa drugs, which were the penicillins of their time: perhaps not as effective, but almost miraculous by virtue of being first.
Now Thomas Hager has come out with a book that should rescue these heroic stories from being forgotten. Hager is a fine writer, and I have read his comprehensive biography of Linus Pauling. He has now written 'The Demon under the Microscope', a history of the sulfa drugs discovered by German chemists in the 1920s and 30s. The New York Times gave it a favourable review, and I am looking forward to reading it. The NYT reviewer compared it to 'Microbe Hunters', a classic which has inspired many famous scientists, including Nobel laureates, in their childhoods. I too was quite intrigued by that book, which reads like a romantic account of microbiology. Of course, the truth is always harsher than such accounts, but it does no harm to initiate a child into science with them.
It was interesting for me to read that the German chemists had taken out a patent on the azo part of the first sulfa drug. They did not know that it was in fact the sulfa part which conferred activity, and they were soon scooped by French chemists who discovered that even sulfanilamide alone has potency.
Sulfa drugs of course inhibit dihydropteroate synthase, an enzyme of the bacterial folate pathway that ultimately feeds nucleotide synthesis, and they come quite close to the ideal of the 'magic bullet': a molecule that is potent, has few to zero side effects and, most importantly, is selective for the microorganism. In this case, the target enzyme is found only in bacteria. That does not necessarily mean that there will be no human side effects (after all, every molecule can hit more than one protein), but it seems to work well in this particular case. The sulfa drugs also spurred further research on folate metabolism and dihydrofolate reductase, which eventually led to methotrexate, a compound that is even today a standard component of anti-cancer therapy.
Dock dock, who is there?
Docking is one of the holy grails of computational chemistry and the pharmaceutical industry. But it is also a big, currently unsolved problem. The process refers to the placement of an inhibitor or small molecule in the active site of a protein, and then assessing its interactions with the active site, thereby reaching a conclusion about whether it will bind strongly or weakly. This process, if perfected, will naturally be very valuable for finding new leads or testing yet-untested compounds against pharmaceutical targets, and most importantly, for high-throughput screening. Two subprocedures have to be honed when doing this: there first needs to be a way of placing the inhibitor in the site and exploring the various orientations in which this can be done, and once the ligand is placed in the site, there then needs to be some way of evaluating whether its interaction with the site is 'good' or 'bad'.
The most popular way in which this is done is by using a 'scoring function', which is simply a sum of different interaction energies due to hydrogen bonds, electrostatics, van der Waals interactions, and hydrophobic interactions, to name a few. The number that comes out of this sum for a particular compound is anything but reliable, and scoring functions in general correlate very poorly with experimentally determined free energies of binding. The most reliable way of estimating free energies computationally is by free energy perturbation (FEP). Yet scoring functions can be reasonably good on a relative basis, and offer the fastest way of doing an evaluation. However, what we are essentially trying to do is evaluate the free energy of interaction, which is inherently, diabolically convoluted, and consists of complicated entropy and enthalpy terms. These include terms for the protein, the ligand, and the complex that is formed. Enormously complicating the matter is the fact that both the protein and the ligand are solvated, and displacement of water and desolvation effects will massively affect the net interaction of the ligand with the active site. In addition, conformational changes take place in the ligand when it binds, and in many cases in the protein as well. Needless to say, the general process of a ligand binding to a protein is extremely complicated to understand, let alone computationally evaluate.
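Just to make the notion of a scoring function concrete, here is a toy additive score. Every term and weight is invented for illustration and belongs to no real program; the second function simply shows the thermodynamic relation between a binding free energy and a dissociation constant (ΔG = RT ln Kd) that such scores try, and usually fail, to approximate.

```python
import math

def toy_score(n_hbonds, buried_hydrophobic_area, electrostatic_e, desolvation_penalty):
    """A deliberately naive additive 'scoring function' in kcal/mol; all weights
    are invented for illustration and are not taken from any real program."""
    return (-1.0 * n_hbonds                     # reward per ligand-protein h-bond
            - 0.03 * buried_hydrophobic_area    # reward per buried nonpolar square Angstrom
            + electrostatic_e                   # screened Coulomb term (negative = favourable)
            + desolvation_penalty)              # cost of stripping water off ligand and site

def kd_from_dg(dg_kcal, temp_k=298.15):
    """Dissociation constant implied by a binding free energy via dG = RT * ln(Kd)."""
    R = 1.987e-3  # kcal/(mol*K)
    return math.exp(dg_kcal / (R * temp_k))

score = toy_score(n_hbonds=3, buried_hydrophobic_area=150.0,
                  electrostatic_e=-1.5, desolvation_penalty=2.0)
print(f"toy score = {score:.1f} kcal/mol, implied Kd = {kd_from_dg(score):.1e} M")
```

The brutal part is that a single kcal/mol of error, which is well within the noise of every term above, already shifts the implied Kd by a factor of about five at room temperature.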
And yet there are programs out there, such as Glide, DOCK, FlexX, and GOLD, to name a few, which attempt to dock ligands into active sites. The whole enterprise has been a big saga, with an article related to docking appearing in J. Med. Chem. practically every week. Many of these programs use scoring functions whose terms have been parametrized from data and from observations grounded in the basic physical principles of intermolecular interactions. The programs don't work very well in general, but they can work for the interaction of one inhibitor with homologous proteins, or for different inhibitors against the same protein (a more tenuous application). I have personally used only Glide, and in my specific project it has given impressive results.
Any docking program needs to accomplish two goals:
1. Find the bioactive conformation of the ligand when supplied with the protein and ligand structure.
2. Evaluate whether similar/dissimilar ligands will show activity or not.
In practice, every docking run returns a list of different conformations of the ligand (and sometimes the protein), known as 'poses', ranked by their estimated free energies of interaction. Looking at just the top pose and concluding that it is the bioactive pose is a big mistake. If, however, essentially the same pose is repeatedly found among the top ten results, one might hypothesize that it is in fact the bioactive pose; in my case, that did turn out to be true. It must also be noted that such programs can be parametrized for particular proteins and ligands, where the ligands are known to have very specific interactions. It is then relatively easy for the program to find similar ligands, but given the practically infinite ways in which ligands can bind to proteins, even this sometimes fails.
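A minimal sketch of that sanity check, using hypothetical data structures rather than any program's actual output format: rank the poses by score and ask how often a geometry close to the best-scoring pose recurs near the top of the list.

    # Sketch: rank docking poses by score and check whether the top pose's
    # geometry recurs among the ten best-scoring poses (a crude consensus test).
    # Pose coordinates and scores here are placeholders, not real program output.
    import numpy as np

    def rmsd(a: np.ndarray, b: np.ndarray) -> float:
        """Plain coordinate RMSD between two N x 3 arrays (no superposition)."""
        return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

    def top_pose_recurrence(poses, scores, cutoff=2.0, top_n=10):
        """poses: list of N x 3 arrays; scores: lower = better.
        Count how many of the top_n poses lie within `cutoff` A of the best one."""
        order = np.argsort(scores)
        best = poses[order[0]]
        return sum(1 for i in order[:top_n] if rmsd(poses[i], best) <= cutoff)

    # Toy usage with random 'poses' of a 20-atom ligand:
    rng = np.random.default_rng(0)
    poses = [rng.normal(scale=1.0, size=(20, 3)) for _ in range(50)]
    scores = rng.normal(size=50)
    print(top_pose_recurrence(poses, scores))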
One of the big challenges in docking is modeling protein conformational changes, the classic induced-fit mechanism of biochemistry. Glide has an induced-fit module which has again given me favourable results in many cases. Induced-fit docking remains an elusive general goal, however.
Solvation, as mentioned above, is probably the biggest problem. For the same protein and a set of ligands with known IC50 values, the Glide scoring function seldom reproduces the experimental ranking in terms of free energy of binding. However, the MM-GBSA approach, which rescores poses using molecular mechanics plus a continuum (generalized Born) solvation model, gave me good results that neither regular nor induced-fit docking did.
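The bookkeeping behind MM-GBSA rescoring is simple even if the individual terms are not. Here is a minimal sketch with invented numbers, and with the entropy term neglected as is often done in quick rescoring:

    # MM-GBSA-style rescoring: dG_bind ~ G(complex) - G(protein) - G(ligand),
    # where each G is a molecular-mechanics energy plus a continuum solvation term.
    # The numbers below are placeholders purely to show the arithmetic.

    def g_total(e_mm: float, g_solv: float) -> float:
        """Total energy of one species, in kcal/mol (entropy neglected)."""
        return e_mm + g_solv

    g_complex = g_total(e_mm=-5238.0, g_solv=-830.0)
    g_protein = g_total(e_mm=-5070.0, g_solv=-845.0)
    g_ligand  = g_total(e_mm=-120.0,  g_solv=-18.0)

    dg_bind = g_complex - (g_protein + g_ligand)
    print(f"Estimated binding free energy: {dg_bind:.1f} kcal/mol")  # -15.0 here

The point of the continuum solvation terms is exactly the issue raised above: the cost of stripping water from the ligand and the site is built into each G, instead of being ignored.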
Docking programs continue to be improved. The group at Schrodinger which developed Glide is doing some solid and impressive work. In their latest paper in J. Med. Chem., they discuss a further refinement of Glide called Extra Precision (XP) Glide. Essentially, the program works on the basis of 'penalties' and 'rewards' for bonds and interactions, depending on their nature. The main difference between successive versions of docking programs is, not surprisingly, improvement of the terms in the scoring functions by modification and addition, together with rigorous parametrization of those terms using hundreds of known protein-ligand complexes as training sets. In this particular paper, the Schrodinger team has included some very realistic modifications to the hydrophobic and hydrogen-bonding terms.
In general, how does one evaluate the energy of a hydrogen bond between a ligand atom and a protein atom, an evaluation that is obviously crucial for assessing the ligand-protein interaction? It depends on several factors: the nature and charge of the atoms, the nature of the binding cavity where the bond is formed (polar or hydrophobic) as well as its exact geometry, and the relative propensity of water to form hydrogen bonds in that cavity. This last factor is particularly important. A ligand-protein hydrogen bond will be favourable only if water does not itself form very favourable hydrogen bonds in the cavity, and if the desolvation penalty for the ligand is not excessive. The Glide team has come up with a protocol for assessing the relative ease of hydrogen-bond formation for both water and the ligand in the active site, and then deciding for which one it is more favourable. Hydrogen bonds formed between ligand and protein where the water in the active site is not 'comfortable', because the site is hydrophobic and water cannot form its full complement of hydrogen bonds there, are rewarded as especially favourable. The group cites the program's ability to reproduce such a situation, which contributes significantly to the extraordinary affinity of streptavidin for biotin, the strongest such interaction known. In that case, four correlated hydrogen bonds in just such an environment provide solid binding interactions. The group says that theirs is the first scoring function to capture this unique experimental result.
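A minimal caricature of that logic, using my own toy rules and numbers rather than the actual XP terms: reward a ligand-protein hydrogen bond more when the water it displaces could not have hydrogen-bonded well at that spot anyway.

    # Toy version of an 'environment-aware' hydrogen-bond reward, a caricature of
    # the idea behind XP Glide (rules and numbers invented for illustration).

    def hbond_reward(n_ligand_protein_hbonds: int,
                     water_hbonds_possible_at_site: int) -> float:
        """Return a score contribution (negative = favourable), arbitrary units.

        water_hbonds_possible_at_site: roughly how many of its ~4 hydrogen bonds
        a water molecule could form if it sat in this sub-pocket instead of the
        ligand atom.
        """
        base = -0.7 * n_ligand_protein_hbonds          # generic h-bond reward
        frustration = max(0, 4 - water_hbonds_possible_at_site)
        bonus = -0.5 * frustration * min(n_ligand_protein_hbonds, 1)
        return base + bonus

    # A polar, water-friendly pocket: modest reward.
    print(hbond_reward(2, water_hbonds_possible_at_site=4))   # -1.4
    # A hydrophobic pocket where water is 'uncomfortable': extra reward.
    print(hbond_reward(2, water_hbonds_possible_at_site=1))   # -2.9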
The other significant modification to the program is a better representation of the hydrophobic effect, which again is quite complicated and depends on the binding of the ligand itself as well as on the displacement of water. The hydrophobic effect is extremely important; I remember one case in which a ligand bound to HIV-1 protease showed great binding affinity without forming a single hydrogen bond, purely on the basis of the hydrophobic nature of the binding site! The group has cleverly tried to include the effect of not just the lipophilicity but the exact geometry of the hydrophobic site. A 'hydrophobic enclosure', as the group calls it, is particularly favourable for lipophilic parts of the ligand and is rewarded in the scoring function. Balancing this reward is the desolvation penalty for the ligand, since stripping bound water from the ligand is enthalpically unfavourable for both the ligand and the water.
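To give a geometric feel for what 'enclosure' might mean, here is a rough sketch using my own simplistic criterion, not the paper's actual algorithm: a lipophilic ligand atom counts as enclosed if it has lipophilic protein atoms on roughly opposite sides.

    # Crude 'hydrophobic enclosure' test: a lipophilic ligand atom is called
    # enclosed if two lipophilic protein atoms within contact range sit on
    # roughly opposite sides of it (angle between contact vectors > 120 deg).
    # This is an illustrative criterion, not XP Glide's actual definition.
    import numpy as np
    from itertools import combinations

    def is_enclosed(ligand_atom: np.ndarray, protein_atoms: np.ndarray,
                    contact_cutoff: float = 4.5, min_angle_deg: float = 120.0) -> bool:
        vecs = protein_atoms - ligand_atom
        dists = np.linalg.norm(vecs, axis=1)
        close = vecs[dists < contact_cutoff]
        for u, v in combinations(close, 2):
            cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            if np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) > min_angle_deg:
                return True
        return False

    # Toy example: a ligand carbon flanked on both sides by protein carbons.
    ligand_c = np.array([0.0, 0.0, 0.0])
    protein_cs = np.array([[3.8, 0.0, 0.0], [-3.9, 0.5, 0.0], [0.0, 6.0, 0.0]])
    print(is_enclosed(ligand_c, protein_cs))  # True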
The new modifications also seem to include accommodations for pi-pi stacking and cation-pi interactions, which can contribute significantly in certain cases.
Overall, the scoring functions and the program are getting better as the group parametrizes them against commonly occurring structural motifs and better interaction terms. The nice thing is that these modifications are ultimately grounded in sound physical principles of ligand-protein binding, principles that are complicated to untangle but rest on the fundamentals of physical organic chemistry: the hydrophobic effect, solvation and desolvation, hydrogen bonding, and other intermolecular interactions. In the end, it is the chemistry that matters most.
Docking may never be a solve-everything technique, and it may never work universally, but with this kind of experiment-grounded development going on, I feel confident that it will become a major guiding, if not outright predictive, tool in both academic labs and pharma. As usual, the goal remains to balance accuracy with speed, a balance that is invaluable for high-throughput screening. For more details, refer to the paper, which is detailed indeed.
Reference: Friesner, R. A.; Murphy, R. B.; Repasky, M. P.; Frye, L. L.; Greenwood, J. R.; Halgren, T. A.; Sanschagrin, P. C.; Mainz, D. T. "Extra Precision Glide: Docking and Scoring Incorporating a Model of Hydrophobic Enclosure for Protein-Ligand Complexes", J. Med. Chem. 2006, ASAP Article; DOI: 10.1021/jm051256o
Obviously Elusive
For some reason, we always have a knack of missing the simple things, and each of us misses a different one. For me, the question which many synthetic chemists seem to miss is: how can a flexible molecule have a single conformation in solution? And yet many synthetic chemists publish one solution conformation for their pet macrolide in a good journal, and the referees accept it without comment. The conformation is based on NMR coupling constants (Js) and distances (ds) from NOESY spectra, which are averages over all the conformations present. The average structure obtained from these values does not exist in solution at all, and publishing such a structure is basically publishing a 'virtual' structure. Indeed, take this structure and minimize it with a good force field, and it will fall by 10-12 kcal/mol in energy; it simply cannot be the 'low-energy' conformer that an NMR structure is touted to be.
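A quick numerical illustration of why averaged NMR observables can point to a structure that isn't there, using a generic Karplus relation, 3J = A cos^2(theta) + B cos(theta) + C, with textbook-scale coefficients; the dihedral angles and populations are mine, not from any of the papers discussed:

    # If a molecule spends half its time at a dihedral of 60 degrees and half at
    # 180, the observed (population-averaged) 3J maps back, via the Karplus curve,
    # to an apparent dihedral near 138 degrees that neither conformer ever adopts.
    # Karplus coefficients below (A=7.76, B=-1.10, C=1.40 Hz) are common defaults.
    import numpy as np

    A, B, C = 7.76, -1.10, 1.40  # Hz

    def karplus(theta_deg):
        t = np.radians(theta_deg)
        return A * np.cos(t) ** 2 + B * np.cos(t) + C

    j_avg = 0.5 * karplus(60.0) + 0.5 * karplus(180.0)  # population-weighted average
    thetas = np.linspace(90, 180, 9001)                 # back-calculate 'apparent' angle
    apparent = thetas[np.argmin(np.abs(karplus(thetas) - j_avg))]
    print(f"J(60)={karplus(60.0):.2f} Hz, J(180)={karplus(180.0):.2f} Hz, "
          f"<J>={j_avg:.2f} Hz, apparent dihedral ~{apparent:.0f} deg")

The 'structure' consistent with the averaged coupling constant corresponds to a geometry the molecule never actually visits, which is exactly the virtual-structure problem.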
I am not very keen on pointing out specific cases, but Amos Smith's "Solution Structure of (+)-Discodermolide" (Org. Lett. 2001, 3, 695-698; DOI: 10.1021/ol006967p) is a good example. For such a flexible molecule there can never be one single structure in solution, and it is a little intriguing to me how this keeps happening. For a rejoinder to Smith's paper, one which makes use of a nifty conformer-deconvolution method called NAMFIS, see Snyder's "Conformations of Discodermolide in DMSO" (J. Am. Chem. Soc. 2001, 123, 6929-6930). Note the use of the plural. Such cases abound.
A simple rule of thumb for judging when a molecule will be especially conformationally mobile in solution is to count its rotatable single bonds. For a molecule with, say, 15 such bonds, there won't even be one dominant (meaning more than 50%) conformation in solution. For example, Snyder's analysis shows that the 'dominant' conformation of discodermolide has a population of only 24% in solution.
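As a side note, counting rotatable bonds is trivial with a cheminformatics toolkit; here is a small sketch using RDKit. The SMILES string below is an invented flexible polyketide-like fragment, not discodermolide itself.

    # Count rotatable bonds as a crude flexibility estimate (RDKit).
    # The SMILES below is an invented flexible chain, used only for illustration.
    from rdkit import Chem
    from rdkit.Chem import Descriptors

    smiles = "CC(O)CC(C)C=CC(C)C(O)CC(=O)OC"   # hypothetical polyketide-like fragment
    mol = Chem.MolFromSmiles(smiles)
    n_rot = Descriptors.NumRotatableBonds(mol)
    print(f"{n_rot} rotatable bonds -> expect many populated conformers in solution")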
I am working with NAMFIS myself, and, not out of favouritism but by objective assessment, I want to say that it is a nifty method. It was developed by an Italian group, and the name stands for "NMR Analysis of Molecular Flexibility in Solution" (J. Am. Chem. Soc. 1995, 117, 1027-1033). It takes the averaged NMR data (Js and ds from NOESY) and matches them against a family of conformers obtained from a good conformational search, often done with multiple force fields whose results are then combined. It calculates the deviation of each structure's back-calculated Js and ds from the averaged data and chooses the best fit as the most dominant structure in solution, with decreasing proportions assigned to the worse-fitting ones. Note that the 'dominant' conformation neither exceeds 50% nor matches all the data; it is simply the one that gives the best fit, judged here by the sum of squared deviations between calculated and experimental NOE distances and Js. NAMFIS has been applied to Taxol and laulimalide in addition to discodermolide. It was also applied to a seven-residue peptide which supposedly formed an alpha helix in solution. The results? Not only did the peptide exist in many conformations, but the alpha helix did not even appear as a minor one among them! This was "On the Stability of a Single-Turn α-Helix: The Single versus Multiconformation Problem" (J. Am. Chem. Soc. 2003, 125, 632-633).
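The core numerical idea, stripped of all the real method's care, is a constrained least-squares fit of conformer populations. Here is a toy sketch with synthetic data, using a plain non-negative least-squares fit as a stand-in for NAMFIS's actual procedure:

    # Toy 'NAMFIS-like' deconvolution: find non-negative conformer populations
    # whose weighted-average Js and NOE distances best match the experimental
    # averages. Data are synthetic; the real method has many more subtleties.
    import numpy as np
    from scipy.optimize import nnls

    # Rows = observables (3 J values, 2 NOE distances); columns = 4 conformers.
    calc = np.array([
        [2.8, 10.1, 6.0, 4.5],    # J1 (Hz) calculated for each conformer
        [9.5,  3.2, 7.1, 8.0],    # J2
        [5.0,  5.5, 2.1, 9.9],    # J3
        [2.4,  3.8, 2.9, 4.4],    # d1 (A)
        [4.1,  2.5, 3.6, 2.2],    # d2
    ])
    true_pops = np.array([0.45, 0.30, 0.15, 0.10])
    observed = calc @ true_pops            # what the averaged NMR data would show

    pops, _ = nnls(calc, observed)         # non-negative least squares
    pops /= pops.sum()                     # normalize to populations
    print(np.round(pops, 2))               # close to [0.45 0.30 0.15 0.10]

Note that no single column of the matrix reproduces the observed averages by itself; only the ensemble does, which is the whole point.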
The reason synthetic organic chemists often don't pay close attention to conformation is simply that the knowledge is seldom useful to them; for them, the principal function of NMR spectroscopy is to assign configuration. However, if they want to go ahead and publish a conformational analysis of their molecule in solution, they would do well to step back and consider the simple fact that if the molecule is even reasonably flexible, it will have multiple conformations, without even a single dominant (more than 50%) one. In my view, publishing no conformational analysis at all for a highly flexible molecule is better than publishing a single conformation. In very special cases the molecule may be constrained in some way, and then the average conformation may indeed approach a single dominant one. Even then there still cannot be only one conformation, as was the case for the 'constrained' alpha-helical peptide cited above. A JOC paper, "A Test of the Single-Conformation Hypothesis in the Analysis of NMR Data for Small Polar Molecules: A Force Field Comparison" (J. Org. Chem. 1999, 64, 3979-3986), nicely explores NAMFIS and this question for a Diels-Alder adduct. But cases of truly constrained molecules having only one conformation are rare, and chemists' antennae should go up when someone publishes a single conformation for the average flexible 'small' molecule.
Unfortunately, for the most part, they don't seem to have.
Panek's Leucascandrolide A
A colleague presented Panek's leucascandrolide A (LA) synthesis at a group meeting. The synthesis includes several interesting steps, and several interesting questions came up during the discussion.
For example, in the [4+2] allylsilane cycloaddition step, the transition state that gives the 'right' product has the ester group in an axial disposition. When the bulk of the alkyl group on the ester is increased, the proportion of this product actually goes up.
A larger alkyl group will naturally have a larger A value, so why should it be tolerated, let alone preferred, in an axial disposition? My surmise: as the group grows, the ester increasingly prefers a conformation in which its carbonyl is directed inward, relieving the unfavourable interaction. It is interesting how steric hindrance can cause a group to orient itself in a way that makes the reaction more favourable.
(In the accompanying figure, now omitted, the conformation shown on the left is the one increasingly preferred as R becomes larger.)
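For a sense of scale, an A value is just the equatorial-axial free-energy difference, so the axial population follows directly from Boltzmann statistics. A quick back-of-the-envelope calculation, using generic A values rather than anything specific to Panek's substrate:

    # Axial population implied by an A value (the equatorial preference in
    # kcal/mol) at room temperature, from a simple two-state Boltzmann model.
    # The A values used here are generic, not those of Panek's system.
    import math

    R = 1.987e-3   # kcal/(mol*K)
    T = 298.0      # K

    def axial_fraction(a_value: float) -> float:
        k = math.exp(-a_value / (R * T))   # axial/equatorial ratio
        return k / (1.0 + k)

    for a in (1.0, 1.8, 2.5):
        print(f"A = {a:.1f} kcal/mol -> {100 * axial_fraction(a):.1f}% axial")
    # About 16%, 4.6%, and 1.4% axial: ordinarily a bulkier group should 'hate'
    # the axial position, which is what makes the observed trend so interesting.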
The synthesis also uses Kozmin's spontaneous macrolactonization, an entropically unfavourable event. In another step, a secondary alcohol is oxidised in the presence of a primary one using a tungsten catalyst. How does this selectivity arise? My guess: a radical mechanism may be involved, in which case the secondary radical would obviously be more stable than the primary one.
Overall, an interesting, if somewhat long, synthesis.
Reference: J. Org. Chem. 2006, ASAP Article; DOI: 10.1021/jo0610412; web release date: September 1, 2006.