Imagine that your name was Smith and you had synthesized a molecule named Interestingolide. Now, if someone published a paper titled "Reassignment of the Structure of Smith's Interestingolide", how would you feel? Well, that's what has happened to 'Mehta and Kundu's Spiculoic Acid', whose published structure has been refuted by Oxford's Jack Baldwin. What is interesting is that the original group seems to have incorrectly predicted the stereochemistry of the product of a Sharpless epoxidation. The substrate is simple, and so is the product, and it should not have been difficult to make this prediction with Sharpless's mnemonic device. As the diagram below makes clear, holding the allylic alcohol in the orientation shown and using D-(−)-DET should give the alpha epoxide.
The rest of Baldwin's analysis follows, but this initial incorrect prediction sets the stage for all of it. Maybe the grad student did the stereochemical prediction, and the professor either trusted him or did not look closely enough at his analysis. Well, better luck next time, and I will stop speculating.
* Incorrect prediction:
* Correct prediction:
* Sharpless mnemonic device:
* Reference:
Org. Lett. 2006, ASAP, DOI: 10.1021/ol062361a
It's the simple things that...
I learnt my lesson. The problematic molecule for which our NMR data was not making sense turned out to have some protons with really long relaxation times. I had read a line in some well known NMR book which said "For any detailed study, it's best to measure T1 relaxation times before the experiment". I read it, and I forgot it. Well, not forgot it entirely, but it's still true that for most molecules, relaxation times are similar for all the protons, and not doing a T1 measurement does not really make a difference. In our case, unfortunately, it was the reference protons for the NOESY distance calculation that turned out to have loooong T1s. Simple solution: measure T1, and then set the relaxation delay (d1 on Varian spectrometers) equal to 5T1. I am in the process of rerunning the NOESY. But from next time, I am never going to forget doing a T1 measurement before anything else, even for simple and 'obvious' molecules. It's a humbling lesson well learnt.
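To make the arithmetic concrete, here's a minimal sketch of why 5T1 is the standard rule of thumb (the T1 values below are made up for illustration): z-magnetization recovers as 1 − exp(−t/T1), so waiting 5T1 recovers more than 99% of it, and it's the slowest-relaxing proton that sets the delay.

```python
import numpy as np

def relaxation_delay(t1_values, factor=5.0):
    """Relaxation delay (d1 on Varian spectrometers) for quantitative
    work: the longest T1 in the molecule times a safety factor."""
    return factor * max(t1_values)

def recovered_fraction(t, t1):
    """Fraction of z-magnetization recovered after waiting t seconds."""
    return 1.0 - np.exp(-t / t1)

# Hypothetical T1s (seconds) from an inversion-recovery experiment;
# one proton relaxes unusually slowly, as happened with our molecule.
t1s = [0.8, 1.1, 4.5]
d1 = relaxation_delay(t1s)                    # 22.5 s, set by the slowest proton
print(d1, recovered_fraction(d1, max(t1s)))   # ~0.993 of Mz recovered at 5*T1
```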
Beyond the rhetoric
I have been reading George Olah's 'Beyond Oil and Gas: The Methanol Economy', and even without having gotten to the part about the methanol economy, I can heartily recommend the book. At first sight the book looks technical, but it is actually extremely accessible to the layman. The first half is an extremely lucid and comprehensive account of the history, geopolitics, technology, and future of oil, natural gas, and coal, and it also discusses the hydrogen economy and alternative fuel sources, including atomic energy. The book is very much worth reading and buying for this half alone. All three of these commodities have become the Big Brothers of our lives: seemingly munificent, indispensable, and revolutionary. Yet all three, and especially oil, have made us utterly dependent on them in a morbid manner. This is true in the many obvious ways in which we use oil for transportation, electricity, and heating, but also in the not-so-obvious and yet ubiquitous ways in which oil-based products form the basis of every part of modern life, from plastics to pharmaceuticals. Our dependence on them is appalling indeed, as the book demonstrates.
These three commodities are like the thieves in movie scenes: the moment the thieves run in some direction where they think they have a safe haven, some insurmountable obstacle materializes. And so it is for fossil fuels. Whatever optimistic estimates and facts we discover about them are almost immediately thwarted by serious problems. Oil is convenient for transportation, exists in large reserves, and is the most versatile fossil fuel. Yet it is riddled with exponentially increasing demand and production costs, locations in regions of political instability, and most importantly, environmental problems. Wars and political regimes are made and broken over oil, and leaders will go to any lengths to disguise their aspirations for oil and the actions resulting from them. Non-conventional sources of oil need a large energy input from natural gas, and again contribute to environmental CO2 levels. This last problem is gigantic for coal. Natural gas has momentous transportation and safety problems. So does Bush's much touted Hydrogen Economy. Ethanol from corn seemingly needs more energy from fossil fuels than it actually saves and produces, and may not be worth it. The bottom line is, any fossil fuel source and many non-conventional energy sources that we can consider have such intractable problems that we cannot depend upon them for eternity, or even for the comfortable future. Put the rhetoric aside and focus on the facts. Any decision about energy has already been made very difficult by our aspiration to a high standard of living, our reluctance to give up creature comforts, and political lobbying which traps decisions in a cycle of profit and one-upmanship; the last thing we need is trendy slogans about unlikely energy sources.
It does not matter what the reserves are; after all, predictions of oil running out have until now always turned out to be false alarms, and reserve estimates to be underestimates. But current predictions seem more credible, and in the end, the last straw will be the simple dominance of demand over supply. It will not matter how much oil we have in the ground then; production costs and the resulting demand-driven price will be so high that they will lead to a virtual breakdown in social infrastructure. If oil reaches $150 a barrel, nobody will care how much proven reserve we have. And it is likely to happen very soon, with much of the developing world aspiring to US SUV standards of existence. What's important is that because of our utter dependence on oil, such a situation will entail a fundamental shift in our standard of living, especially in the US, and we simply lack the social and mental capacity to make this shift overnight.
I haven't gotten to the part about methanol yet, so I will refrain from commenting on it until later, but what is clear is that oil and fossil fuels have to go, in one way or another. I have said it before and I will say it again: nuclear energy is the cheapest, safest, and most efficient energy source that we can use in the near future. What to do about terrorists hitting a nuclear power plant is a complex problem, but surely Bush can take care of that, with all the extreme measures he takes for enforcing national security.
In any case, it is eminently worth taking a look at 'Beyond Oil and Gas' if you get a chance.
The same and the not same
One of the most important questions that someone in the early stages of drug discovery can ask is: do similar ligands bind in a similar manner? Ambiguities riddle this seemingly commonsense question, right from the definition of 'similar'. Countless drug discovery projects must have been made or broken by attention, or lack of it, to this central principle.
If I had to give a one-word answer to the question, it would be no; not because it's the right answer, but because I would be playing safe by saying it. It can be a very big mistake to assume that similar ligands will bind the same way in a protein binding pocket, or even in the same pocket, and medicinal chemists on both the experimental and computational sides know what a wide and disparate range of SARs similar ligands can have. This extends very much to similar binding too.
On more than one occasion, medicinal chemists have taken the known binding conformation of a ligand, and then twisted and turned another 'similar' ligand into that same conformation. The two structures are then superimposed. They look so similar, with the hydrophobic and polar groups in the same places. Ergo, they must overlap in their binding mode. Big mistake. While this can turn out to be the case many times, it's just wrong on a philosophical basis to assume it. That's because ligands have their own personalities, and each one of them can interact quite differently with a protein binding pocket. The problem is: superposition of ligands can always be justified in retrospect if your ligand shows activity. But that does not make your assumption true.
One of the better known cases concerns the search for a 'common pharmacophore' for Taxol, epothilone, and other ligands which bind to the same pocket in tubulin. A common pharmacophore is a minimal set of common structural features which will cause bioactivity. In typical fashion, researchers compared the various parts of the molecules and twisted them to overlap with each other (PNAS paper). Based on this superposition, they then designed analogs. Result: most of the analogs turned out to be inactive. Thus, there was no 'common pharmacophore'. In this context, Snyder and others published a model (Science paper) in which, by doing meticulous electron density fitting for Taxol and epothilone, they demonstrated that each of these molecules explores the binding pocket of tubulin in a unique way, utilizing unique interactions. Thus, the ligands are 'promiscuous'.
Modeling often assumes that similar ligands bind in the same conformation, and docking programs dock them in the same conformation. What docking programs mainly fail to take into account is protein flexibility, which accounts as much for ligand binding as ligand flexibility does. Then, when X-ray structures of those 'similar' ligands bound to the same protein are obtained, they reveal that either the protein underwent a conformational change that altered the binding modes of the ligands, or that even without this, the ligands bound in a dissimilar manner. Ligand binding is a 1 kcal/mol energy window game, and there's no telling how each ligand will exploit this window.
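To see why a 1 kcal/mol window matters so much, here's a back-of-envelope calculation (assuming 298 K; the numbers are mine, not from any paper):

```python
import math

RT = 0.593  # kcal/mol at 298 K (R = 1.987e-3 kcal/mol/K)

def kd_fold_change(ddg_kcal):
    """Fold-change in binding constant for a free-energy difference
    of ddg_kcal, from dG = -RT ln K."""
    return math.exp(ddg_kcal / RT)

print(kd_fold_change(1.0))  # ~5.4-fold shift in Kd from just 1 kcal/mol
print(kd_fold_change(1.4))  # ~10-fold, a full order of magnitude
```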
So it's timely that a Swedish team has published a paper in J. Med. Chem. that tries to tackle the question: do similar ligands bind in the same way? The team used the Tanimoto index to measure the similarity of ligands, and then looked at many examples from the PDB to gauge the binding of similar ligands to the same protein. They use three main criteria of difference:
1. The position of water molecules
2. The movement of protein side-chain atoms
3. The movement of backbone atoms
The first factor, significantly, turned out to be the most different across their examined cases, and it is one that is not always paid attention to. Water is a fickle guest for a protein host, and it can mediate interactions differently even for ligands with slight structural differences. The second factor is also significant, as side-chain conformational differences induced by particular ligand features can greatly change the electrostatic environment of the protein; as illustrated above from the paper, the conformation of a single Met residue changes the electrostatics. The third factor, backbone movement, turned out to be relatively unchanged and a benign variable.
The bottom line is, it may be an OK working hypothesis to assume that because your known ligand binds in a known conformation, other very similar ligands or even known actives will bind the same way. But start taking it as an obvious rule, and you can always expect trouble.
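For readers who want to play with the similarity side of this, here is a minimal sketch of a Tanimoto calculation using RDKit Morgan fingerprints. This is only one of many possible fingerprint choices, and I don't know which descriptors the Swedish team actually used, so treat it as illustrative:

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def tanimoto(smiles1, smiles2, radius=2, nbits=2048):
    """Tanimoto coefficient Tc = |A & B| / |A or B| between two
    molecules, using Morgan (circular) bit-vector fingerprints."""
    fps = []
    for smi in (smiles1, smiles2):
        mol = Chem.MolFromSmiles(smi)
        fps.append(AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=nbits))
    return DataStructs.TanimotoSimilarity(fps[0], fps[1])

# Toluene vs. phenol: 'similar' to the eye, but Tc is low --
# a reminder that 'similar' depends entirely on the descriptor.
print(tanimoto("Cc1ccccc1", "Oc1ccccc1"))
```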
Bizarre halloween paper of the year
If there were a prize for "Bizarre paper published to celebrate Halloween", this paper might get it. On a more serious note, it's an interesting paper published in this week's ACIE that tackles a very common phenomenon: the metallic odor of iron that we all perceive. As the authors demonstrate, the odor is more likely to be from carbonyl compounds produced by the reduction of lipoperoxides in the skin by Fe2+. They also analyze the garlicky odor produced when iron is 'pickled' with phosphorus, commonly attributed to phosphine (PH3), and argue that this odor too is due to organophosphines rather than phosphine itself. Some of their statements and conclusions (not to mention the title: The Two Odors of Iron when Touched or Pickled: (Skin) Carbonyl Compounds and Organophosphines) really merit Transylvanian attention:
"Humans are perplexed by the metallic odor from touching iron metal objects, such as tools, cutlery, railings, door handles, firearms, jewelry, and coins. Phosphorus-containing iron which is under acid attack gives rise to a different carbide or garlic odor which metallurgists have attributed to the gas phosphine (PH3); however, we found that purified PH3 at breathable dilution has hardly any odor. The aim of our research is to understand the chemical causes of these two iron smells in our engineered metal environment."
"1) Ironically, the iron odor on skin contact is a type of human body odor"
"Blood iron: Blood of one of the authors rubbed onto his own skin resulted in similar metallic odor and the same odorants (78±7 nmol dm-2, 4 repetitions) as in the above experiments. Controls by addition of FerroZine which is a blood-iron chelator suppressed the reaction (4±0.4). Aerated and homogenized blood also developed metallic odor on its own. There are reports that blood iron[10] can decompose blood lipidperoxides and that FerroZine inhibits this reaction.[11] This finding confirms that blood iron can trigger the metallic odor on skin or in blood itself."
"Sweaty skin corrodes iron metal to form reactive Fe2+ ions that are oxidized within seconds to Fe3+ ions while simultaneously reducing and decomposing existing skin lipidperoxides to odorous carbonyl hydrocarbons that are perceived as a metallic odor. This fast reaction creates the sensory illusion that it is the metal in itself that we smell right after touching it."
Several of the experiments made me a little edgy: breathing phosphine to detect its odor (I have always been looking for a reference to the man or woman who first described the taste of KCN), and blood rubbed onto skin. These guys' work sure is interesting; what I am not sure of is whether I want to do a postdoc with them. Interestingly, pure phosphine, just like pure methane, is odorless.
Cholesterol in plants fun fact
One of the questions I have always asked myself is "Do plants contain cholesterol?". I was under the impression that they don't, and that this is precisely the difference between the sterol pathways in plants and animals: lanosterol goes to cholesterol in animals, and to stigmasterol, ergosterol, etc. in plants and fungi. Interestingly, none of the 'popular' biochem books I read elaborated on this question or answered it in the affirmative.
So it was interesting when I came across this J. Chem. Ed. article which talks about the existence of cholesterol in plants. Apparently, the amount, as expected, is quite low. But what is more interesting is the authors' survey of popular biochem textbooks (Lehninger, Stryer, Garrett and Grisham, Voet, etc.), in which they confirm that this fact is not mentioned.
But the most interesting fact may be that the USDA does not require cholesterol to be declared when it is present at less than 2 mg per serving, which is the case with plant products. I wonder what other compounds Uncle Sam allows to be left off product labels when they fall below a certain threshold. Or maybe the labels should at least state what the upper limits can do, as in the case of that compound in Chicken McNuggets which can kill you when it's more than 1 g.
Hiatus
Real life has finally intervened, and I have been busy with it for the past couple of days. The hiatus will continue until the weekend, after which I should hopefully fill in the backlog.
'But is it chemistry?' hits Nature
There it is. Nature has picked up on the 'Is it chemistry?' thread. Awareness and contributions from the good folks at the Skeptical Chymist were no doubt responsible for the article. The article does not echo bloggers' views directly, but it echoes the views of previous Nobelists, which in turn echo bloggers' views. There's the one who thinks (as I and some others do) that they should have a separate award for biology. There are those who point to the disproportionate majority of bioscientists on the Nobel chemistry committee, and there are those who think that such biology-oriented chemistry awards are inevitable in the future. Like it or not, I think we have to agree with this last prediction. This year's Nobel makes no major advances in the fundamental understanding of structure, reactivity, or synthesis. But then, are many such advances possible in the near future? And in the absence of such advances, awards such as the one given this year are inevitable.
Just one problem with the article. Richard Schrock became Robert Schrock.
Overheard from the 2007 But Is It Chemistry (BIIC) Conference at Tierra del Fuego:
* Physicist, biologist, medical researcher and engineer: Hey, chemist...too chicken to compete for the prize?
* Chemist: What are you talking about? You are all doing chemistry.
Well, maybe it's not that simple, but close.
Wrong data makes me happy
Well, it's not that simple, but a couple of months ago, I did a NOESY analysis of an alkaloid. I acquired the spectrum, integrated the peaks, and got the interproton distances, which looked fine. Then I tried to do a similar analysis for the salt. That's it, nothing much different, just the salt. And now, try as I might, I could never get decent distances; most of them seemed too short (<2.4 Å). I reran the NOESY with different parameters, integrated the peaks tediously, and everything looked fine. But again, when I calculated the distances from the intensities, everything was messed up. Finally, we weighed the cost of spending more time on the compound against farming it out, and quickly decided that it would be best to send it to an expert.
A couple of days ago, we got the intensities of the peaks from the expert, and guess what. The distances are still terrible, at least most of them. So maybe it was not my fault. Maybe it wasn't me, it was you (you = the machine, the parameters, the molecule itself). I am not blaming the expert, because there seems to be some unique feature of the molecule itself that's causing trouble. So in some ways, I am happy that it wasn't just me. But I am also sad that this means our analysis is prolonged even further. This is a project which I want to get done with and publish, and it never seems to be getting closer to that goal.
But this also makes the situation more chemically interesting. Why would a simple transformation from the alkaloid to its salt make the analysis so much more difficult? The 1D spectra look OK, so we are pretty sure no decomposition or chemical transformation has taken place. And yet the problem suddenly became thorny because of such a simple change. It seems that the vagaries of SAR reflect in SNR (Structure-NMR Relationships) too.
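For the record, the distance calculation at the heart of all this is the isolated spin pair approximation: cross-peak intensity falls off as r^-6, so each unknown distance is referenced to a known one. A minimal sketch (the volumes below are invented for illustration):

```python
import numpy as np

def noe_distances(volumes, ref_volume, ref_distance):
    """Isolated spin pair approximation (ISPA): NOE intensity scales
    as r**-6, so r_ij = r_ref * (I_ref / I_ij)**(1/6). Breaks down
    when relaxation is non-uniform -- exactly our problem above."""
    I = np.asarray(volumes, dtype=float)
    return ref_distance * (ref_volume / I) ** (1.0 / 6.0)

# Hypothetical cross-peak volumes, referenced to a geminal pair at 1.78 A:
print(noe_distances([12.5, 4.2, 1.1], ref_volume=25.0, ref_distance=1.78))
```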
In the beginning...
Once in a while, you come across a paper whose basic premise (and length) is so short that you can read it in 10 minutes. You can talk about such a paper to a college student, and its conciseness and simplicity provide a refreshing change from the usual technical articles that assail your faculties every day.
Such is the latest missive from Ronald Breslow. One of the big puzzles in the origin of life is the origin of homochirality (why L-amino acids?). Breslow does not solve this problem, but provides a possible solution for the amplification of existing homochirality that is simplicity exemplified.
The premise? Racemates of amino acids have a lower solubility than the pure enantiomers.
The experiment? Evaporate a slightly enantiomerically enriched solution of amino acids twice.
The result? The ee increases from 10% to about 91%, and from 1% to about 87%.
Here are all the results:
These experiments were inspired in part by the remarkable discovery of slightly enantiomerically enriched, non-racemizable amino acids in meteorites, most famously the Murchison meteorite (e.g. S-alpha-methylvaline) (see the linked ACR 2006 review).
These L-amino acids in sufficient concentration could then catalyse the reactions of other organic and biomolecules, such as has been demonstrated with sugars.
Very nice, and it could have been entirely plausible on the early earth. The problem of course is how the chiral excess, no matter how small, could have arisen in the first place. The real problem is: how can chirality arise from random processes, which produce either enantiomer in equal parts?
When I started thinking about this question, I realised what a fickle beast the word 'random' is. Random processes of course cannot give rise to a particular bias. But now, consider a random process which happens extremely rarely. For example, the odds of getting heads or tails for a coin are 50:50. However, flip a coin only 10 times, and random variables such as air flow could very reasonably give you six heads and four tails. Now, if there were some process that could take advantage of whichever face lands more often (heads in this case), then that process could essentially start off with more heads, and then capitalise on the even greater number of heads produced, and so on. If this process were autocatalytic, an important condition for early life-forming processes, it would immediately take advantage of the excess heads, produce even more heads, and there you have your L-amino acids. Of course, there was no particular reason to choose L-amino acids; that choice was entirely random. But the above analysis shows that such a choice can be made in the first place.
Based on this argument, I can envisage a process where, say a rock surface forms in a biased way, that is, its surface is more suitable for anchoring the L-amino acids rather than the racemates or the D-amino acids. In an amino acid 'soup' formed in a crater in such a rock then, it is quite plausible that evaporation would take away the racemate simply because it cannot hold on to the surface as well as the L. In such a case, the final 'soup' will then be enriched in L-amino acids.
Again, the hypothesis breaks down if one imagines millions of such rocks, in which case there is no reason for any bias. But if our rock is formed by a cometary impact, for example, then I can imagine a comet with amino acids slamming into a giant rock and producing a skewed surface. Planetary impacts, as we know, are very rare. So by the time the next comet comes around, there already exists an asymmetrical surface which can bias the evaporation of Breslow's amino acid pool, and the L-amino acids have taken over, aided by the skewed surface produced by the 'random' impact of the first comet.
It's all about natural selection. Even a random process can produce bias if it happens only once, and then allows nature to capitalise on the asymmetry produced by that one time event. Statistics be praised.
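Since the whole argument hinges on a rare random event plus autocatalysis, it's easy to simulate. Here's a toy model of my own (not from Breslow's paper): seed the pool with ten 'coin flips', then let each new molecule's handedness be chosen autocatalytically, with probability proportional to the current counts raised to a power greater than one. Nonlinear feedback of this kind drives almost every run to near-homochirality, with the winning hand chosen at random:

```python
import random

def amplify(n_seed=10, n_steps=100_000, order=2):
    """Toy model of chiral amplification: seed handedness randomly,
    then grow the pool with P(next is L) ~ L**order. For order > 1
    (nonlinear autocatalysis) any small initial excess gets locked in."""
    L = sum(random.random() < 0.5 for _ in range(n_seed))
    D = n_seed - L
    for _ in range(n_steps):
        if random.random() < L**order / (L**order + D**order):
            L += 1
        else:
            D += 1
    return (L - D) / (L + D)   # enantiomeric excess, from -1 to +1

random.seed(1)
print([round(amplify(), 2) for _ in range(5)])  # most runs end near +/-1
```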
* Reference: Ronald Breslow and Mindy S. Levine, "Amplification of enantiomeric concentrations under credible prebiotic conditions", PNAS 2006, 103, 12979-12980
The Trouble With Everything
I am looking forward to reading Lee Smolin's new and hot 'The Trouble with Physics', in which he lambasts string theory as a science pursued for the sake of elegance and beauty instead of agreement with experiment, which is the bedrock of the scientific method. Well, it's probably wrong to say that theoretical physicists are avoiding predictions that agree with experiments. What Smolin seems to think is that the theory has made very few experimental predictions, none of which have been verified, so the main justification for pursuing it seems to be mathematical elegance. Also, the predictions that have been made seem to be verifiable only at very high energies, not achievable in the near future. A physicist friend of mine tells me that the alternative is to look back to the early universe and gather 'observational' rather than 'experimental' evidence for the predictions of string theory, something like what was done for the Big Bang, which won a Nobel this year.
That is why chemistry appeals to me more, although I find physics very interesting too. In chemistry, theory necessarily has to be much closer to experiment. That's why Woodward chose chemistry over mathematics, I believe.
Anyway, more after I actually read the book.
Chartelline and copper catalyzed decarboxylation
A former mentor-student duo has published successive papers in JACS.
Total Synthesis of Antheliolide A
Chandra Sekhar Mushti, Jae-Hun Kim, and E. J. Corey
http://dx.doi.org/10.1021/ja066336b
Total Synthesis of (±)-Chartelline C
Phil S. Baran and Ryan A. Shenvi
http://dx.doi.org/10.1021/ja0659673
First, Corey publishes a synthesis of Antheliolide, and then Baran publishes Chartelline. Corey uses several nice transformations, including a [2+2] ketene cycloaddition. In one step, he uses LiOH/MeOH instead of TBAF to deprotect a TMS-protected alkyne, because TBAF would also remove the TBDPS group. Why does LiOH remove only the TMS and not the TBDPS?
Also, in another step, he wanted to convert a (methylated) lactol to a lactone. But he had a caryophyllene-like moiety in the ring, which would have been notoriously sensitive to acid and oxidants. So he takes the indirect route: he first converts the OMe to a phenyl seleno group, then turns it into a hydroxyl with AgNO3, and finally uses mild oxidation with TPAP to oxidise the hydroxyl. Neat.
Baran's synthesis is more elegant and, as usual, based on biosynthetic proposals. He does some neat transformations, including a cascade-like reaction to effect a nice ring closure.
However, it is the last step which I found interesting, in which he was trying to decarboxylate a vinyl carboxylate. Copper catalyzed decarboxylation failed, but it turned out that the carboxylate was quite sensitive and labile to heat, so simple heating worked; the mechanism involved a proton that seemed to be serving a function akin to copper as noted in the reference below. He also suggested transient positive charge stabilization by chlorine. I know fluorine can do it well; chlorine would be less efficient but could still do it I suppose.
This led me to look up some back references on vinyl decarboxylation. I found a paper by Theodore Cohen of Pittsburgh (J. Am. Chem. Soc. 1970, 92, 3189-3190) who proposed the following scheme for explaining stabilization of the TS by copper.
He was trying to explain some copper-catalyzed quinoline decarboxylations that go back to 1930. I also came upon a 1951 paper by Frank Westheimer (J. Am. Chem. Soc. 1951, 73, 429-435) in which he tried to explain the decarboxylation of alpha-keto acids with copper, a classic reaction which he used as an enzyme model system.
Schrodinger Webinars 2006. And the importance of being polarized
Like last year, Schrodinger is hosting webinars on new application developments, one every week. Here is the schedule for those who may be interested; registration is easy. Our group is thinking of attending most of them, especially the docking sessions.
They did the QM-polarized docking module seminar today. Current docking routines often fail to give good results because they lack treatment of polarization effects, which can ultimately be traced back to the charges on the ligand atoms. Polarization effects are crucial in calculating transition states for enzyme reactions, for example.
Charges on ligand atoms constitute a big topic in themselves, and many methods have been developed to get accurate charges, including semi-empirical and ab initio methods. Probably the best method I know involves calculating the electrostatic potential and then fitting to the atoms those charges which best reproduce the potential, a method which I believe was developed by the late Peter Kollman (whose papers are still getting published five years after he passed away). Charges on atoms forming h-bonds, for example, can greatly change h-bond stabilization energies.
Cute electrostatic potential factoid: ESP studies can often reveal features of molecules that are not obvious upon inspection. For example, the potential well is much deeper for the N3 of cytosine than for the O8. 'Inspection' may suggest that the O8 is the most nucleophilic site, but experiment shows that alkylation and protonation take place on N3, just as the ESP suggests.
In any case, the Schrodinger group takes poses from regular docking with their module Glide, and then uses a program called QSite to do a single-point energy calculation with a good basis set, which endows the ligand atoms with charges. The poses are then redocked. Using this methodology, they were able to get good docked poses for many proteins that were outliers with regular docking (RMSD > 2 Å). Top of the outlier list was HIV-1 protease, and I suspect that the hydrophobic nature of the site raises special issues. Regular Glide probably cannot find the correct pose for compounds in this site because many compounds bind tightly to the protease purely on the basis of the hydrophobic effect, without any significant hydrogen bonding.
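The Kollman-style ESP fit mentioned above is, at bottom, a constrained least-squares problem: find point charges at the nuclei that best reproduce the QM potential on a grid of points, with the total charge fixed. A bare-bones sketch in atomic units (no restraints, no symmetry equivalencing, so this is a caricature of production RESP codes, not any vendor's implementation):

```python
import numpy as np

def fit_esp_charges(atom_xyz, grid_xyz, grid_esp, total_charge=0.0):
    """Fit atomic point charges to a QM electrostatic potential.
    Atomic units: V_k = sum_i q_i / |r_k - R_i|. The total-charge
    constraint is enforced with a Lagrange multiplier."""
    # A[k, i] = 1 / |r_k - R_i|, so that V = A @ q
    A = 1.0 / np.linalg.norm(grid_xyz[:, None, :] - atom_xyz[None, :, :], axis=2)
    n = len(atom_xyz)
    M = np.zeros((n + 1, n + 1))      # normal equations plus constraint row
    M[:n, :n] = A.T @ A
    M[:n, n] = 1.0
    M[n, :n] = 1.0
    b = np.concatenate([A.T @ grid_esp, [total_charge]])
    return np.linalg.solve(M, b)[:n]

# Tiny demo: recover charges that generated a synthetic potential.
atoms = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
grid = np.random.RandomState(0).uniform(-4, 4, size=(200, 3))
true_q = np.array([0.4, -0.4])
esp = (true_q / np.linalg.norm(grid[:, None, :] - atoms[None, :, :], axis=2)).sum(axis=1)
print(fit_esp_charges(atoms, grid, esp))  # ~[0.4, -0.4]
```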
Reference: "Importance of Accurate Charges in Molecular Docking: Quantum Mechanical/Molecular Mechanical (QM/MM) Approach"- AE Cho, V Guallar, BJ Berne, R Friesner - Journal of Computational Chemistry, 2005
Whither chemistry?
This year's chemistry Nobel again raises the question of the nature of chemistry as a central science. An article by Philip Ball in Nature a few months ago tried to explore whether there are any great questions in pure chemistry that are exclusively chemical questions. As an aside, I must say that Ball has been doing a truly admirable job of popularizing chemistry over the years.
In my opinion, the problem is that the nature of chemistry is by definition both a blessing and a curse for our science. That's because chemistry is all about understanding molecules, and in our lives, it's molecules that are involved in all the real and good stuff, including biology, medicine, and engineering. So what happens is that a chemist may develop a drug, but then a doctor uses it to cure a disease. The credit thus goes mainly to the doctor. Similarly, a chemist discovers a lubricant, but then engineers use it to revolutionize the automobile industry. There goes the credit to the engineers. Therefore, the problem is not really with recognizing chemistry, but with recognizing chemists. The problem with recognizing chemistry is of a different nature: chemistry is so ubiquitous that most people take it for granted. Also, because most people's perception of chemistry is practical, for them chemistry is more synonymous with industry and manufacturing than with basic scientific research. As noted above, when it comes to basic research, chemistry seems to the general public to manifest itself mostly through medicine and engineering, which are the actual faces of the product.
In my opinion, there is one question among all the ones that Ball cites which is both inherently chemical and all-pervading, and that is self-assembly, in the broadest sense possible (so that question also ends up encompassing some of his other questions, such as the origin of life). I also think this problem largely captures the unique nature of chemistry. Self-assembly needs an understanding of forces between molecules that is a hodgepodge of qualitative and quantitative comprehension. A physics-based approach might turn out to be too quantitative, a biology-based approach too qualitative. Understanding the physics of hydrogen bonds, for example, would help but would not be enough for understanding their role in self-assembly; only such understanding coupled with considerable empirical knowledge of hydrogen bonding in real systems will serve as a true guide. It's chemists who can bring the right mix of quantitative analysis and empirical data to bear on such problems. Of course, that does not mean others could not do this if they tried. But then, anyone can do almost anything if he tries hard enough; that hardly discounts the value of specific expertise.
The main feature of chemistry, in my opinion, is this fine balance between analytical or mathematical thinking based on first principles, and empirical thinking based on real-life data and experiments. This approach makes the science unique, I think, and Linus Pauling was probably the prime example of someone who embodied it in the right proportion. The practical feature that makes chemistry unique, of course, is that chemists create new stuff; but without this kind of understanding, that would not be possible on a grand scale.
Chemistry Nobel for Biologist
So there we are: Roger Kornberg, following in line behind his father, has won the Nobel for chemistry. This prize also seems to have been a quickie; Kornberg's pivotal papers seem to have been published after 2000. I am sure Kornberg deserved it, but like Paul, who stayed up all night looking at the announcements, I don't feel really excited about it. I wouldn't have predicted this, as I am not so familiar with molecular biologists, but I was surprised to see that absolutely no blog I read had this name in a post or in the comments. There's no problem, of course, with molecular biologists getting the chemistry prize. In a way, it's a triumph for chemistry, because it shows the vast scope and purview of the science. Maybe the committee decided to balance the 'pure' chemistry that was honoured last year with something more interdisciplinary. But it's always more exciting to see the prize awarded to someone whose work you are familiar with and who is more from your general field, as happened last year.
Nitrone-cyclopropane cycloaddition
Here's a use of a methodology I haven't heard about in a long time: the nitrone-cyclopropane cycloaddition. Kerr and coworkers now use it in the synthesis of one of those indolizidine alkaloids, phyllantidine. A fairly straightforward synthesis, although again, I would always pay attention to forming an N-O bond at an early stage. One of my favourite name reactions is used: the Krapcho decarboxylation, the concomitant hydrolysis and decarboxylation of an ester using LiCl in DMSO.
I am too lazy to look up the literature, but why doesn't the nitrone add to the double bond?
Also, the nitrone-cyclopropane addition should be nonconcerted, since we have a nicely stabilised ring-opened intermediate.
The other interesting thing is the selectivity of the enolate oxidation.
At first I was not satisfied, but on reflection their explanation sounds OK. Still, it looks a little crafty when you draw the vinyl group conveniently pointing in a sterically hindered direction, and make that the only structure with the group in that position! The difference between the group in that position and pointing away from it is 2 kcal/mol, by the way; at least that's what MMFF says for a model compound.
According to the latest edition of Eliel, the A value for OH ranges from 0.6 to 1.04 kcal/mol in different solvents, while that for CO2CH3 is 1.2-1.3. Also, MMFF gives a difference of 1 kcal/mol for the product with the OH axial. So not much difference there, but it is still worth keeping in mind that the product with the CO2CH3 axial dominates. I wonder how much the reaction is product-controlled here, though. Such analyses depend on the thermodynamics of the reaction; in this case, the fact that the oxaziridine ring opens probably means that the products as a whole are lower in energy. If so, the TS should resemble the reactant, but I would never place my bets on such rationalizing, as subtle effects can change things in the TS. There are other factors here too, especially the facile nature of topside attack.
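To translate those energy differences into populations, here's a quick Boltzmann weighting at room temperature (my numbers, just restating the MMFF differences above): a conformer 2 kcal/mol uphill is nearly gone, while one at 1 kcal/mol still has a respectable population.

```python
import math

RT = 0.593  # kcal/mol at 298 K

def populations(energies_kcal):
    """Boltzmann populations for a list of relative conformer energies."""
    w = [math.exp(-e / RT) for e in energies_kcal]
    return [x / sum(w) for x in w]

print(populations([0.0, 2.0]))  # ~97:3  -- the vinyl-group case
print(populations([0.0, 1.0]))  # ~84:16 -- the OH-axial case
```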
Ref: Total Synthesis of (+)-Phyllantidine, Cheryl A. Carson and Michael A. Kerr, published online 13 Sep 2006, DOI: 10.1002/anie.200602569
Corwin Hansch
In all the Nobel betting, I think we forgot someone significant: Corwin Hansch, whose QSAR has now become ubiquitous in medicinal chemistry and biology, both theoretical and experimental. Also, I think Hansch was one of the first, if not the first, to apply classical physical organic chemistry (Hammett etc.) to medicinal chemistry and bioactivity.
Updates: New thoughts.
Albert Overhauser: Nuclear Overhauser Effect
Norman Allinger: Molecular Mechanics
Solvent assisted olefin epoxidation: fluorine emerges a winner
A nice application of both computational techniques and kinetics (entropy of activation etc.) in elucidating the role of hydrogen-bonded networks of H2O2 and hexafluoroisopropanol (HFIP) in the epoxidation of olefins. Some of the ordered clusters look almost like the active site of an enzyme; no wonder the barriers for oxygen transfer decrease. Also, fluorinated solvents are highly hydrophobic, so I would not be surprised if that property alone contributes to a very specific degree of ordering. All those h-bonds made me giddy.
HFIP and related solvents have interesting effects on both organic reactions and the conformations of biomolecules. For example, addition of TFE to peptide and protein solutions can increase helicity. A simple and nice model was proposed by Balaram, which explains this effect by noting that since F is a poor h-bond acceptor, the NH of the amide backbone no longer has to sacrifice its usual NH---C=O h-bond for one with the solvent (as it does with water) and thus can engage better in helix formation.
In this account, some of the C-H---F distances are right outside the sum of the vdW radii of the atoms. Weak C-H h-bonds are interesting entities; unlike 'normal' h-bonds, their scope and definition are much looser, and are emphatically not restricted to contacts within the sum of the vdW radii; the h-bond potential minimum is shallower and of lower energy. In such cases, comparative studies of geometry and energy have to be performed to assess whether these are 'true' h-bonds. The boundary is thin and nebulous, but there is an excellent book on this state of affairs, as well as on the whole gamut of h-bonding interactions: The Weak Hydrogen Bond in Structural Chemistry and Biology. I find this slippery boundary of h-bond definition, in many cases, a delicious example of the inexact yet rationalizable nature of chemistry.
One of those factoids which looks deceptively obvious, and hence can be misconstrued, is that organic fluorine is a good h-bond acceptor. Not so! Jack Dunitz has written a neat article ("Organic Fluorine Hardly Ever Accepts Hydrogen Bonds") in which he did a comparative study of the CSD (Cambridge Structural Database) and concluded that out of some 6000 structures with fluorine, only 30 or so had features which might indicate h-bonding with fluorine. In the absence of further data, it's best to assume that a fluorine in a molecule will not h-bond. As usual, the case could be quite different in a crystal.
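The geometric screen at issue is trivial to state in code. Here's a sketch using Bondi vdW radii (standard reference values, though the paper may use slightly different ones), with the caveat from above that falling outside the cutoff doesn't by itself rule out a weak h-bond:

```python
# Bondi van der Waals radii in Angstroms (common reference values).
VDW = {"H": 1.20, "C": 1.70, "N": 1.55, "O": 1.52, "F": 1.47}

def within_vdw_sum(a1, a2, distance):
    """Naive criterion: is the contact shorter than the sum of vdW radii?
    Weak C-H...F contacts can fail this test and still be argued to be
    h-bonds; geometry alone does not settle the question."""
    return distance < VDW[a1] + VDW[a2]

print(within_vdw_sum("H", "F", 2.45))  # True: inside the 2.67 A sum
print(within_vdw_sum("H", "F", 2.80))  # False: 'right outside', as in the paper
```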
Another thing to note again is the authors' use of MP2, which handles dispersion much better than DFT.
Reference: Albrecht Berkessel and Jens A. Adrio, J. Am. Chem. Soc., ASAP, DOI: 10.1021/ja0620181