The Penicillin before Penicillin
Those of us living in an era of so many effective antibiotics find it hard to imagine a time when even a simple cut or abscess could turn fatal, and hard to imagine the distress of doctors and family members watching a patient die of such an apparently minor affliction. The story of penicillin, which finally emerged to fight these infections, has become the stuff of legend. What still lives in the shadows is the equally crucial discovery of the sulfa drugs, which were the penicillins of their time; perhaps not as effective, but almost miraculous by virtue of being the first.
Now Thomas Hager has come out with a book that should rescue these heroic stories from being forgotten. Hager is a fine writer; I have read his comprehensive biography of Linus Pauling. His new book, 'The Demon under the Microscope', is a history of the sulfa drugs discovered by German chemists in the 1920s and 30s. The New York Times gave it a favourable review, and I am looking forward to reading it. The NYT reviewer compared it to 'Microbe Hunters', a classic that inspired many famous scientists, including Nobel laureates, in their childhood. That book intrigued me too; it reads like a romantic account of microbiology. Of course, the truth is always harsher than such accounts, but it does no harm to initiate a child into science with them.
It was interesting for me to read that the German chemists had taken out a patent on the azo part of the first sulfa drug. They did not know that it was in fact the sulfa part which conferred activity, and they were soon scooped by French chemists who discovered that sulfanilamide alone has potency.
Sulfa drugs inhibit dihydropteroate synthase, an enzyme in the bacterial folate pathway that feeds into nucleotide synthesis, and they come quite close to the ideal of the 'magic bullet': a molecule that is potent, has few to zero side effects, and most importantly, is selective for the microorganism. In this case the target enzyme is found only in bacteria, since humans obtain folate from the diet rather than synthesizing it. That does not necessarily mean there will be no human side effects- after all, every molecule can target more than one protein- but the selectivity works well in this particular case. Sulfa drugs also spurred research on the downstream enzyme dihydrofolate reductase (DHFR), which led to methotrexate, a compound that is even today a standard component of anti-cancer therapy.
Dock dock, who is there?
Docking is one of the holy grails of computational chemistry and the pharmaceutical industry. But it is also a big, currently unsolved problem. The process refers to placing an inhibitor or small molecule in the active site of a protein and then assessing its interactions with the site, thereby reaching a conclusion about whether it will bind strongly or weakly. This process, if perfected, would naturally be very valuable for finding new leads, for testing as-yet-untested compounds against pharmaceutical targets, and most importantly, for high-throughput screening. Two subprocedures have to be honed: there first needs to be a way of placing the inhibitor in the site and exploring the various orientations in which this can be done, and once the ligand is placed, there needs to be some way of evaluating whether its interaction with the site is 'good' or 'bad'.
The most popular way of doing the second step is a 'scoring function', which is simply a sum of different interaction energies: hydrogen bonding, electrostatics, van der Waals interactions, and hydrophobic interactions, to name a few. The number that comes out of this sum for a particular compound is anything but reliable, and scoring functions in general correlate very poorly with experimentally determined free energies of binding. The most reliable way of estimating binding free energies computationally is Free Energy Perturbation (FEP); still, scoring functions can be reasonably good on a relative basis, and offer the fastest way of doing an evaluation. What we are essentially trying to evaluate is the free energy of interaction, which is diabolically convoluted, consisting of complicated entropy and enthalpy terms for the protein, the ligand, and the complex that is formed. Enormously complicating the matter is the fact that both protein and ligand are solvated, and displacement of water and desolvation effects massively affect the net interaction of the ligand with the active site. In addition, conformational changes take place in the ligand when it binds, and in many cases in the protein too. Needless to say, the general process of a ligand binding to a protein is extremely complicated to understand, let alone to evaluate computationally.
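To make the idea concrete, here is a minimal Python sketch of what an empirical scoring function boils down to: a weighted sum of interaction terms. The term names and weights below are my own inventions for illustration, not the terms of any actual program.

```python
from dataclasses import dataclass

@dataclass
class InteractionTerms:
    """Per-pose interaction terms, assumed precomputed elsewhere."""
    hbond: float          # sum of hydrogen-bond terms
    electrostatic: float  # Coulombic interaction energy
    vdw: float            # van der Waals energy
    hydrophobic: float    # hydrophobic contact term
    desolvation: float    # cost of stripping water off ligand and site

# Hypothetical weights; real programs fit these against training sets
# of protein-ligand complexes with known affinities.
WEIGHTS = {"hbond": -1.0, "electrostatic": -0.5, "vdw": -0.3,
           "hydrophobic": -0.7, "desolvation": +1.2}

def score(terms: InteractionTerms) -> float:
    """Lower (more negative) scores mimic stronger predicted binding."""
    return sum(WEIGHTS[name] * getattr(terms, name) for name in WEIGHTS)

print(score(InteractionTerms(hbond=3.0, electrostatic=2.0,
                             vdw=5.0, hydrophobic=4.0, desolvation=2.5)))
```

The difficulty, of course, is not the sum but the terms: everything hard about binding free energy hides inside those precomputed numbers and fitted weights.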
And yet, there are programs out there like Glide, DOCK, FlexX, and GOLD, to name a few, which have repeatedly attempted to dock ligands into active sites. The whole enterprise has been a big saga, with an article related to docking appearing in J. Med. Chem. practically every week. Many of these programs use scoring functions whose terms have been parametrized from data and from observations related to the basic physical principles of intermolecular interactions. The programs don't work very well in general, but can work for the interaction of one inhibitor with homologous proteins, or for different inhibitors against the same protein (a more tenuous application). I have personally used only Glide, and in my specific project it has provided impressive results.
Any docking program needs to accomplish two goals:
1. Find the bioactive conformation of the ligand when supplied with the protein and ligand structure.
2. Evaluate whether similar/dissimilar ligands will show activity or not.
In practice, every docking run gives a list of different conformations of the ligand and protein, known as 'poses', ranked by their predicted free energies of interaction. Looking at just the top pose and concluding that it is the bioactive pose is a big mistake. But if essentially the same pose is found repeatedly among the top ten results, one might hypothesize that it may in fact be the bioactive pose (a sketch of such a consensus check appears below). In my case, that did turn out to be so. It must also be noted that such programs can be parametrized for particular proteins and ligands where the ligands are known to make very specific interactions. It is then relatively easy for the program to find similar ligands, but given the practically infinite ways in which ligands can bind to proteins, even this sometimes fails.
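As a toy illustration of that consensus idea, here is a hedged Python sketch: given docked poses as coordinate arrays (invented data here; real poses would come from the docking program's output), it counts how often the top-ranked pose recurs, within an RMSD tolerance, among the top ten.

```python
import numpy as np

def rmsd(a: np.ndarray, b: np.ndarray) -> float:
    """Plain coordinate RMSD between two (n_atoms, 3) arrays.
    Assumes the poses share a reference frame, as docking poses
    in a fixed receptor usually do."""
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

def consensus_count(poses: list, tol: float = 2.0) -> int:
    """How many of the top ten poses fall within `tol` A of pose #1."""
    top = poses[0]
    return sum(rmsd(top, p) <= tol for p in poses[:10])

# Hypothetical: ten 20-atom poses jittered around two distinct geometries.
rng = np.random.default_rng(0)
base_a, base_b = rng.normal(size=(20, 3)), rng.normal(size=(20, 3)) + 5.0
poses = [base_a + rng.normal(scale=0.3, size=(20, 3)) for _ in range(6)] + \
        [base_b + rng.normal(scale=0.3, size=(20, 3)) for _ in range(4)]
print(consensus_count(poses))  # 6 of 10 poses cluster with the top one
```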
One of the big challenges of docking is to model protein conformational changes- the classic induced fit mechanism in biochemistry. Glide has an induced fit module which has again given me favourable results in many cases. Induced fit docking remains an elusive general goal, however.
Solvation, as mentioned above, is probably the biggest problem. For the same protein and a set of ligands known to bind with certain IC50 values, the Glide scoring function seldom reproduces the experimental ranking of binding free energies. However, the MM-GBSA model, which uses continuum solvation, gave me good results where neither regular nor induced fit docking did.
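The MM-GBSA idea itself is simple to state, even if the solvation model underneath is not: the binding free energy is estimated from three separate calculations, each a molecular-mechanics energy plus a generalized Born solvation term. A back-of-the-envelope sketch, with all energies as placeholder numbers:

```python
def mmgbsa_dG(E_complex: float, E_protein: float, E_ligand: float) -> float:
    """MM-GBSA-style estimate: each E is an MM energy plus a GB
    solvation term for that species, computed separately (typically
    on minimized or sampled snapshots)."""
    return E_complex - (E_protein + E_ligand)

# Hypothetical numbers (kcal/mol) purely for illustration:
print(mmgbsa_dG(E_complex=-5230.4, E_protein=-4890.1, E_ligand=-328.7))
# -> -11.6, i.e. favourable predicted binding
```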
Docking programs continue to be improved. The group at Schrödinger which developed Glide is doing some solid and impressive work. In their latest paper in J. Med. Chem., they discuss a further refinement of Glide called Extra Precision (XP) Glide. Essentially, the program works on the basis of 'penalties' and 'rewards' for bonds and interactions depending on their nature. The main difference between succeeding versions of docking programs is, not surprisingly, improvement of the terms in the scoring function by modification and addition, together with attempts to rigorously parametrize those terms using hundreds of known protein-ligand complexes as training sets. In this particular paper, the Schrödinger team has included some very realistic modifications of the hydrophobic and hydrogen bonding terms.
In general, how does one evaluate the energy of a hydrogen bond between a ligand atom and a protein atom, an evaluation that is obviously crucial for assessing the ligand-protein interaction? It depends on several factors: the nature of the atoms and their charges, the nature of the binding cavity where the bonds are formed (polar or hydrophobic) as well as its exact geometry, and the relative propensity of water to form hydrogen bonds in that cavity. This last factor is particularly important. Hydrogen bonds will be favourable only if water does not form very favourable hydrogen bonds in the cavity, and if the desolvation penalty for the ligand is not excessive. The Glide team has come up with a protocol for assessing the relative ease of h-bond formation for both water and the ligand in the active site, and then deciding for which one it will be more favourable. H-bonds formed between ligand and protein when water in the active site is not 'comfortable'- because the site is hydrophobic and water cannot form its full complement of h-bonds there- are especially favourable. The group cites the program's ability to reproduce exactly such a situation, which contributes significantly to the extraordinary affinity of streptavidin for biotin, the strongest such interaction known; in that case, four correlated hydrogen bonds provide solid binding interactions. The group says that theirs is the first scoring function to have explained this unique experimental result.
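Stripped of all real-world detail, the logic is a conditional reward: an H-bond counts for more when the water it displaces was 'uncomfortable'. A toy Python sketch of that decision follows; all names, thresholds, and numbers are my own inventions, not Glide's actual terms.

```python
def hbond_reward(site_is_hydrophobic: bool,
                 water_hbonds_in_site: int,
                 ligand_desolvation_penalty: float) -> float:
    """Toy environment-dependent H-bond term (arbitrary units).

    Bulk water makes roughly four H-bonds; a site where water manages
    far fewer is one where displacing water costs little, so a ligand
    H-bond formed there earns an extra reward.
    """
    base = -1.0  # generic H-bond contribution
    if site_is_hydrophobic and water_hbonds_in_site < 4:
        base += -0.5 * (4 - water_hbonds_in_site)  # bonus per 'missing' water H-bond
    return base + ligand_desolvation_penalty

# Streptavidin-biotin-like situation: hydrophobic pocket, frustrated water.
print(hbond_reward(True, 1, 0.4))   # -2.1, strongly rewarded
# Solvent-exposed polar site: no bonus, desolvation penalty dominates.
print(hbond_reward(False, 4, 0.4))  # -0.6, barely favourable
```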
The other significant modification to the program is a better representation of the hydrophobic effect, an effect which again is quite complicated and depends on the binding of the ligand itself as well as the displacement of water. The hydrophobic effect is extremely important; I remember one case in which a ligand bound to HIV-1 protease showed great binding affinity without forming a single h-bond, purely on the basis of the hydrophobic nature of the binding site! The group has cleverly tried to include the effect not just of lipophilicity but of the exact geometry of the hydrophobic site. A 'hydrophobic enclosure', as the group calls it, is particularly favourable for lipophilic parts of the ligand and is rewarded in the scoring function. Balancing this is the desolvation penalty: stripping water from the ligand is enthalpically unfavourable for both the ligand and the water bound to it.
The new modifications also make accommodations for pi-pi stacking and cation-pi interactions, which can contribute significantly in certain cases.
Overall, the scoring functions and the program are getting better as the group parametrizes them against commonly occurring structural motifs and better interaction terms. The nice thing is that these modifications are in the end based on sound physical principles of ligand-protein binding- principles that are complicated to understand, but that rest on the fundamentals of physical organic chemistry: the hydrophobic effect, solvation and desolvation, hydrogen bonding, and other intermolecular interactions. In the end, it's the chemistry that matters most.
Docking may never serve as a solve-all technique, and may never work universally for all situations, but with this kind of experiment-based development going on, I feel confident that it will become a major guiding, if not predictive, tool in both academic labs and pharma. As usual, the goal will remain to balance accuracy with speed, a balance that is invaluable for high-throughput screening. For more details, refer to the paper, which is detailed indeed.
Reference: Friesner, R. A.; Murphy, R. B.; Repasky, M. P.; Frye, L. L.; Greenwood, J. R.; Halgren, T. A.; Sanschagrin, P. C.; Mainz, D. T. "Extra Precision Glide: Docking and Scoring Incorporating a Model of Hydrophobic Enclosure for Protein-Ligand Complexes" J. Med. Chem. 2006, ASAP. DOI: 10.1021/jm051256o
Obviously Elusive
For some reason, we all have a knack for missing the simple things, and each of us misses a different one. For me, the question many synthetic chemists seem to miss is: how can a flexible molecule have a single conformation in solution? And yet many synthetic chemists publish one solution conformation for their pet macrolide in a good journal, and the referees accept it without comment. The conformation is based on NMR coupling constants (Js) and distances (ds) from NOESY spectra, which are average values. The 'average' structure obtained from these values does not actually exist in solution at all; publishing such a structure is publishing a 'virtual' structure. Take such a structure and minimize it with a good force field, and it will typically drop by 10-12 kcal/mol in energy- so it simply cannot be the 'low energy' conformer that an NMR structure is touted to be.
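A one-line worked example shows why the average can be 'virtual'. Take the Karplus relation in its common three-parameter form; the coefficients below are generic textbook-style values, not any specific parametrization.

```python
import math

def karplus_J(phi_deg: float, A=9.5, B=-1.6, C=1.8) -> float:
    """3J(H,H) from a dihedral angle: J = A*cos^2(phi) + B*cos(phi) + C.
    A, B, C are generic illustrative coefficients."""
    phi = math.radians(phi_deg)
    return A * math.cos(phi) ** 2 + B * math.cos(phi) + C

# Two conformers populated 50:50, one gauche (60 deg), one anti (180 deg):
J_gauche, J_anti = karplus_J(60), karplus_J(180)
J_obs = 0.5 * J_gauche + 0.5 * J_anti
print(round(J_gauche, 1), round(J_anti, 1), round(J_obs, 1))
# ~3.4 and ~12.9 Hz average to ~8.1 Hz, a coupling corresponding to a
# dihedral of roughly 25 or 140 degrees - an angle neither conformer has.
```

Fit a single structure to that 8.1 Hz coupling and you have built a geometry that no molecule in the flask ever adopts.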
I am not very keen on pointing out specific cases, but Amos Smith's "Solution Structure of (+)-Discodermolide" (Org. Lett. 2001, 3(5), 695-698. DOI: 10.1021/ol006967p) is a good example. For such a flexible molecule, there can never be one single structure in solution. It is a little intriguing to me how this keeps on happening. For a rejoinder to Smith's paper, which makes use of a nifty conformer deconvolution method called NAMFIS, see Snyder's "Conformations of Discodermolide in DMSO" (J. Am. Chem. Soc. 2001, 123, 6929-6930). Note the use of the plural. Many such cases abound.
A simple rule of thumb for when a molecule will be especially conformationally mobile in solution is to count its freely rotatable single bonds (a one-liner with a cheminformatics toolkit; see the sketch below). For a molecule with, say, 15 such bonds, there won't even be one dominant (meaning more than 50%) conformation in solution. For example, Snyder's analysis shows that the 'dominant' conformation of discodermolide has a population of just 24% in solution.
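Counting rotatable bonds is trivial if a toolkit such as RDKit happens to be available (an assumption on my part; any toolkit with a rotatable-bond descriptor will do):

```python
# Requires RDKit (e.g. `pip install rdkit`); assumed available here.
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

# n-decane as a simple stand-in for a flexible chain:
mol = Chem.MolFromSmiles("CCCCCCCCCC")
print(rdMolDescriptors.CalcNumRotatableBonds(mol))  # 7 rotatable C-C bonds
```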
I am working with NAMFIS myself, and I call it a nifty method not because it is my pet technique but from objective assessment. The method was developed by an Italian group, and the name stands for "NMR Analysis of Molecular Flexibility in Solution" (J. Am. Chem. Soc. 1995, 117, 1027-1033). It takes the average NMR data (Js and ds from NOESY) and matches them against a family of conformers obtained from a good conformational search, often done using multiple force fields and then combining the results. It then calculates the deviation of each structure's calculated Js and ds from the average data, and assigns the best fit as the most dominant structure in solution, with decreasing proportions for the worse-fitting ones. Note that the 'dominant' conformation is neither more than 50% populated nor a match for all the data; it is simply the one that gives the best fit, here the lowest sum of square deviations between calculated and experimental NOE ds and Js. NAMFIS has been applied to Taxol and laulimalide in addition to discodermolide. It was also applied to a seven-residue peptide which supposedly formed an alpha-helix in solution. The results? Not only did the peptide exist in many conformations, but the alpha-helix was not even a minor one among them! This was "On the Stability of a Single-Turn α-Helix: The Single versus Multiconformation Problem" (J. Am. Chem. Soc. 2003, 125, 632-633).
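The heart of this kind of deconvolution is a constrained least-squares fit: find conformer populations (non-negative, summing to one) that make the population-weighted calculated Js and NOE distances best match the measured averages. Here is a stripped-down sketch in that spirit, with all data invented purely for illustration (real inputs would be the conformational-search ensemble and the NMR observables); it is a caricature of the method, not the published implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical: 4 conformers x 5 observables (mixed Js and NOE distances).
calc = np.array([[ 3.4, 9.8, 2.5, 4.1, 11.2],
                 [ 7.0, 4.2, 3.1, 2.9,  6.5],
                 [10.1, 2.0, 2.2, 3.8,  3.0],
                 [ 5.5, 6.6, 2.8, 3.3,  8.0]])
observed = np.array([6.1, 5.9, 2.6, 3.5, 7.4])  # ensemble-averaged data

def sum_sq_dev(w):
    """Sum of square deviations between weighted-average and observed data."""
    return np.sum((w @ calc - observed) ** 2)

n = calc.shape[0]
result = minimize(sum_sq_dev, x0=np.full(n, 1 / n),
                  bounds=[(0, 1)] * n,
                  constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print(np.round(result.x, 2))  # fitted populations; 'dominant' need not exceed 50%
```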
The reason synthetic organic chemists often don't pay close attention to conformation is simply that the knowledge is seldom useful to them; for them, the principal function of NMR spectroscopy is to assign configuration. But if they do want to publish a conformational analysis of their molecule in solution, they would do well to step back and consider the simple fact that if the molecule is even reasonably flexible, it is going to have multiple conformations, and probably not even a single dominant (more than 50%) one. In my view, publishing no conformational analysis at all for a highly flexible molecule is better than publishing a single conformation. In very special cases the molecule may be constrained in some way, and then the average conformation may indeed approach a single dominant conformer. Even then there cannot be only one conformation, as was the case for the 'constrained' alpha-helical peptide cited above. A JOC paper, "A Test of the Single-Conformation Hypothesis in the Analysis of NMR Data for Small Polar Molecules: A Force Field Comparison" (J. Org. Chem. 1999, 64, 3979-3986), nicely explores NAMFIS and this question for a Diels-Alder adduct. But cases of truly constrained molecules having only one conformation are rare, and a chemist's antennae should go up whenever someone publishes one conformation for an average flexible 'small' molecule.
Unfortunately, they largely don't seem to have.
Panek's Leucascandrolide A
A colleague presented Panek's leucascandrolide A (LA) synthesis in a group meeting. The synthesis included several interesting steps, and several good questions came up.
For example, in the [4+2] allylsilane annulation, the transition state that gives the 'right' product has the ester group in an axial disposition. When the bulk of the alkyl group on the ester is increased, the proportion of this product actually goes up.
A larger alkyl group will naturally have a larger A value, so why should it be preferred in an axial disposition? My surmise: as the group grows, the ester increasingly prefers a conformation in which its carbonyl is directed inward, relieving the unfavourable interaction. It is interesting how steric hindrance can cause a group to orient itself in a way that makes the reaction more favourable. (A quick Boltzmann estimate of what an A value implies follows below.)
(The conformation to the left is preferred as R becomes larger.)
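To put a number on why an axial group 'should' be disfavoured: an A value is just the equatorial-axial free energy difference, and a Boltzmann factor converts it into populations. A quick Python sketch, with rounded textbook-style A values used only for illustration:

```python
import math

RT = 0.593  # kcal/mol at 298 K

def axial_fraction(A_value: float) -> float:
    """Fraction of the axial conformer for a substituent with the given
    A value (kcal/mol), from a two-state Boltzmann distribution."""
    K = math.exp(A_value / RT)  # equatorial/axial ratio
    return 1 / (1 + K)

for group, A in [("CO2Me (ester)", 1.2), ("Me", 1.7), ("tBu", 4.7)]:
    print(f"{group}: {100 * axial_fraction(A):.1f}% axial")
# roughly 12%, 5%, and 0.04% axial respectively
```

The point of the TS argument above is that the effective penalty an axial ester pays can shrink well below its nominal A value once the carbonyl rotates inward, which is exactly the loophole the bulky esters seem to exploit.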
He also used Kozmin's spontaneous macrolactonization, an entropically unfavourable event. In another step, a secondary alcohol is oxidised in the presence of a primary one using a tungsten catalyst. How does this happen? My guess: a radical mechanism may be involved, in which case the secondary radical would obviously be more stable than the primary one.
Overall an interesting, if a little long, synthesis.
J. Org. Chem. 2006, ASAP. DOI: 10.1021/jo0610412. Web Release Date: September 1, 2006.
Medals and Champagne Fizz
It's that time of the year again, when champagne floods the otherwise noxious chemical shelves in the stockroom. Yes, the Nobel Prize in Chemistry will be announced on October 4, and many hopefuls will be dreaming about it, although it will probably go to someone who is not. Harvard usually stocks champagne in the stockroom 'just in case', and not without realistic expectations this year, since three of their faculty have been slotted to win the prize for some time now.
So what do I predict about the whole chemistry prize festivities and deliberations? Here are my bets, placed while my computer calculation drones on, although I am not as sanguine about them as one may expect, because I don't win any prizes for predicting prizes.
1. Nanotechnology: George Whitesides, Harvard. Pioneer in everything nano, from enzymes to surface lithography, you name it. More than a thousand publications. Started out as a 'pure' chemist doing NMR spectroscopy at Caltech.
Also, J. Fraser Stoddart of UCLA, for his remarkable and dogged pursuit of nanomachines, including nanomotors and nanopropellers. Many have since jumped on this bandwagon, but no one has been as prolific as Stoddart. And it's not just about making fancy toys, but about generating prototypes for some very novel chemistry. In the process, Stoddart has created a 'new' type of bond, the 'mechanical bond'.
2. Chemical Biology/Bioorganic/Bioinorganic chemistry: Stuart Schreiber, Harvard. Pioneered the field of 'chemical genetics'- controlling the workings of genes and proteins at a fundamental level in the body using 'small' organic molecules. Schreiber metamorphosed from a purely synthetic organic chemist into a force to reckon with in chemical biology; there is a racy description of him in The Billion Dollar Molecule. Another of my favourites is Ronald Breslow at Columbia, who should get it for pioneering biomimetic chemistry, and especially the study of artificial enzymes.
Some other contenders: Peter Dervan of Caltech- did magic with DNA as a chemical. Harry Gray of the same institution- discovered untold riches in electron transfer in proteins. Stephen Lippard, MIT- did groundbreaking work in understanding metal mediated enzymatic reactions, especially the notorious methane monooxygenase, that converts methane to methanol. If we could do that on a large scale at room temperature, given the amount of methane around (the most abundant greenhouse gas), what more could humankind want? World Peace perhaps.
3. Organic Synthesis: If there's one metal that has had singular success in aiding the synthesis of complex molecules, it's palladium, for all its toxicity in everyday life. Three palladium-catalysed reactions- Heck, Suzuki, Sonogashira (there is another, except that its discoverer, Stille, is no more)- have become ubiquitous in the art and science of synthesis. If anybody should get a prize for metal-mediated reactions, it should be these three. I wouldn't be optimistic though, because it was just last year that a prize was given for a special kind of organic reaction mediated by ruthenium and tungsten catalysts.
4. X-ray crystallography/Biochemistry: One of the best ways to try to get a Nobel is to devote your life to solving the structure of some protein crucial to life; you may succeed, or you may spend your entire life at it and fail- that is the tradeoff. Many such Nobels have been awarded, the latest in 2003 to Roderick MacKinnon, who after painstaking work resolved the structure and action of the potassium channel, one of the fundamental workhorses of all living organisms, involved in ion conduction in the nervous system and elsewhere. I always get a kick out of such discoveries, because unlike even other Nobel discoveries, these involve truly looking into the heart of nature, as Watson and Crick did with DNA. If you are talking about fundamental science that lifts the veil from nature's face, this is as fundamental as it gets.
So my contenders for such work: 'Venki' Ramakrishnan (an Indian, I note) and Ada Yonath, who cracked the structure of the ribosome, a molecular machine that is, if anything, even more fundamental than the potassium channel.
5. Computational organic chemistry: Of course my favourite because of my own disposition. Kendall Houk of UCLA, who more than anyone else in the last twenty years has helped expand our knowledge of organic reactions and syntheses using computational insights.
6. Computer Simulation of Biomolecules: Again a personal favourite and a big contender- Martin Karplus of Harvard, the last student of two-time laureate Linus Pauling. The scope and depth of Karplus's computational work in chemistry, physics, and biology are almost comparable to Pauling's, and his work on simulating proteins with computers in particular has been pioneering in every sense. He has been a possibility for many years. I wrote about his visit to Emory a few months ago here.
Another big fish from the same pond is David Baker of the University of Washington at Seattle, who has probably come closest to a first solution of one of the greatest unsolved problems in biology- the protein folding problem. I presented a paper by Baker last year about a program called ROSETTA, which was used to predict the folding of a small protein from first principles, giving a result of extraordinary accuracy that agreed with experiment. It was the first time anyone had done something like that, and the work represented a triumph for computational scientists. The way in which Baker's program captured the complex interactions of protein folding was diabolically clever. It may be a little early for Baker to get felicitated, though the earlier the BaTer of course (that was a bad one).
One thing I think I can say for certain; the prize will definitely be connected with either biology or materials science, the two most fertile scientific paradigms of the twenty-first century.
I believe many of these scientists should get a Nobel Prize sometime in their lifetime, an opinion echoed by many in the scientific community. But that only shows how hung up we all are on the Nobel. Of course, everyone who gets the Nobel is brilliant. The real problem is that there are many times that number of scientists of Nobel caliber who never get it. In the scientific community their names are just as well recognized, but in the public eye they somehow rank lower in the genius category. This is surely unfair, because the Nobel Prize is, after all, a prize instituted by human beings, and it reflects personal preferences and tastes. It can be given to at most three people at a time, and the exclusion of the fourth person is often unfair. Many such cases abound, probably the best known in public memory being that of Rosalind Franklin, although she unfortunately died before the relevant Nobels were awarded. The bottom line is that, prestigious as the Nobels are and exceptional as the laureates are, there are many more fine researchers around who will never get one, yet are on par with their Nobel colleagues in intellect and achievement. All of us would do well to remember this. In the end it is a truism that getting a Nobel is as much a matter of timing and luck as of innate factors. That should make for a realistic appraisal of its prestige.
On a personal note, as you progress in your own field and career, it is a pleasure to see how you graduate from not ever having heard the names of that year’s laureates, to having actually studied their research in classes and otherwise, to having perhaps actually used their research in your own work. If I had been twenty-five in 1998 (ceteris paribus), I would have been pleased that I am actually using some of the methods developed by computational chemists John Pople and Walter Kohn who received that year’s prize, and I did study the reaction for which chemists received the prize last year. That is a very satisfying feeling, the feeling that one aspiring artisan is using the tools of another accomplished one.
Update: Paul Bracher, after much thought, comes up with a comprehensive list. He reminds me of two whom I did not think about; Stanley Miller and Leslie Orgel, who worked on the origin of life. The problem is that the field in general is quite speculative, and the one contribution which did help a lot, catalytic RNA, has already been awarded a prize.
More nitrile oxide histrionics
Nitrile oxide type cycloadditions seem to be the rage right now, as evidenced by TotSyn's account of Ley's bengazole synthesis. Here's a concise Philip Fuchs synthesis of the histrionical histrionicotoxin, one of those notorious frog alkaloids each of which seems to take the brain to some kind of high.
I liked the way the delicate N-O bond is formed at an early stage and kept intact until the end, when it is cleaved by Zn-AcOH. The formation of the two aldehydes leading to the two enynes was clearly planned from the start.
JACS (Communication) DOI: 10.1021/ja065015x
Glimpse into history
I would never have heard of a magazine called 'The Chemical Intelligencer' had I not been looking for some articles by the fiery and brilliant Michael Dewar (of tropolone fame) on SciFinder. An article penned by Dewar and Derek Barton caught my eye because it was of a kind I had never seen before: in the article, coauthored by a third scientist, the two chemists discuss why, in spite of similar intelligence and significant contributions, only Barton and not Dewar won the Nobel Prize. My curiosity was piqued and I ordered the entire 1996 volume from storage. Instead of a single article, I was treated to a variety of articles about chemists and history. The editor of the magazine is Istvan Hargittai, a chemist who has written beautiful books about symmetry and has collected the magazine's interviews into Candid Science, a nice series of books in which he interviews the premier chemists of their time from every subfield of the subject- Nobelists and non-Nobelists who did Nobel-calibre work. Now, some tidbits from the slew of articles:
1. Why did Barton and not Dewar get the Nobel? The author opines that Barton came from a more modest background and probably had to struggle harder, and thus also tried his hand at 'popular' or conventional topics. Dewar was much more prolific, poking into almost every nook and cranny of every field of chemistry. Dewar was also much more contentious, and often said unkind words to others that he later regretted. I personally believe that Barton's 1950 paper on conformational analysis is one of the most significant papers in chemical history, and would have been quite enough to get him the Nobel. While Dewar contributed very significantly to many areas of the subject, it is difficult to think of any single contribution of his that pervades so ubiquitous a landscape of chemical and biological ideas, both pure and applied. Later, Barton made two other important contributions- the Barton-McCombie reaction, and the nitrite photolysis reaction, a nice precursor of today's remote functionalization methods. Using the latter reaction, he synthesized some 60 g of aldosterone acetate at a time when the world supply was a few milligrams. For more, read the autobiographies of the two men in the splendid ACS series "Profiles, Pathways, and Dreams", edited by Jeffrey Seeman.
2. An interesting interview with H. C. Brown: notice how he cleverly dodges the 2-norbornyl cation issue by saying, "We experimented with many kinds of secondary and tertiary 2-norbornyl systems and concluded that the 2-norbornyl cation must consist of two equilibrating structures." But of course the cation is going to be stabilized by substituents like phenyl in the 2-position, and with that stabilization you really can get two equilibrating structures. What about the parent system, plain old 2-norbornyl cation itself? In any case, so much sweat and tears have been shed over the famous controversy that there is no more to say. I personally find it an absolutely fascinating episode in twentieth-century chemistry.
3. An interview with Paul Scheuer: yet another tribute to Woodward's legendary chalk-drawing performances. Scheuer was a first-rate marine natural products chemist- in Hawaii, where else.
4. Captain Nemo's chemistry: untangling the chemistry behind paragraphs from the classic Verne volume.
5. A portrait of P. D. Bartlett by Jack Roberts (Caltech): names like Bartlett have become ghosts for the new generation. But let us not forget that it was people such as these who laid the foundations for all the organic chemistry we do and study today.
WHO is DDT
WHO has endorsed the use of DDT in small amounts sprayed indoors. I think it's a sane decision; they have concluded that small amounts indoors do not pose a considerable hazard. For me, the case always comes down to the tradeoff between costs and benefits. Bioamplification is of course an important issue, but indoor spraying does not bathe some big swathe of species in the compound. On the other side of the ledger, the number of lives spared that would otherwise be snatched away by malaria- still one of the most pernicious diseases around- will be substantial.
Cation-pies in squalene synthases
Now here's a nice paper that ties together threads from two of the previous posts. A Japanese group has investigated the role of aromatic amino acids in stabilizing cationic intermediates in the active sites of squalene cyclases, which bring about one of the most beautiful and classic reaction cascades in biology. The cascades investigated especially included the squalene-hopene cascade, although the squalene-lanosterol-cholesterol pathway is probably better known to Homo sapiens.
By substituting Phe and Tyr in the active sites with O-methyl Tyr, Trp, and most importantly, mono-, di-, and trifluorophenylalanines, these workers found a clear correlation between rates of cyclization and cation-pi interaction energies. The Trp and O-methyl Tyr cases introduced an unwanted variable, sterics, which disorganized the active site at higher temperature; hence the use of the fluoro-Phes, which preserve van der Waals similarity. Fluorobenzene has a lower cation-pi stabilization energy than benzene, and the energy naturally drops further with more fluorine substitution. In this case, more fluorination led to decreased activity. I am sure there are other factors at play, but from their analysis, the cation-pi energy definitely seems to be an important determinant. The pioneer of cation-pi interactions in structural biology is of course Dennis Dougherty.
Ref: J. Am. Chem. Soc. 2006, ASAP article (web release 20-Sep-2006), DOI: 10.1021/ja063358p
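Just to make that kind of analysis concrete: the correlation the authors draw is essentially a fit of cyclization rate against cation-pi interaction energy. Here is a minimal sketch of such a fit in Python; the numbers are made up purely for illustration and merely mimic the reported trend, they are not from the paper.

```python
import numpy as np

# Illustrative numbers only, NOT from the paper: assumed cation-pi
# stabilization energies (kcal/mol) and relative cyclization rates for an
# active-site Phe bearing 0, 1, 2, and 3 fluorines. They just mimic the
# reported trend: weaker cation-pi stabilization, slower cyclization.
cation_pi_energy = np.array([-12.0, -9.5, -7.0, -4.5])
rel_rate = np.array([1.0, 0.45, 0.18, 0.06])

# Linear fit of log(rate) against interaction energy; a strong correlation
# of this sort is the kind of evidence used to implicate cation-pi
# stabilization of the cationic intermediates.
slope, intercept = np.polyfit(cation_pi_energy, np.log(rel_rate), 1)
r = np.corrcoef(cation_pi_energy, np.log(rel_rate))[0, 1]
print(f"slope = {slope:.3f} per kcal/mol, r = {r:.3f}")
```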
Stereochemistry by natural selection
Back in India when I was a school kid, I was obsessed with entomology, and used to spend much of my spare time catching insects and studying them in jars borrowed from our kitchen, to the consternation of my mother. One of my favourite insects was the 'walking stick' pictured above. The pleasure of catching this little critter used to be compounded because of the difficulty of detecting him in the grass; slender and brown, the walking stick is notorious for being able to almost perfectly camouflage himself to match the colour of wood or grass. But catch many walking sticks I did, and they made fascinating creatures for study.
In any case, I never met a walking stick who sprayed me with a noxious spray from his underbelly. Apparently, you do get walking sticks of the obnoxious kind in the US. And so grad students and a professor in Florida caught a few sticks and analysed the irritating secretion by NMR. I am sure this has been done before for other insects, but I doubt whether any of those researchers found such an interesting fact. It turns out that every walking stick individual has a unique mixture and ratio of the stereoisomers of the ten-carbon cyclopentane derivative that packs the punch in its secretion. There are three chiral centers, so 2^3 = 8 stereoisomers, which can be mixed together in myriad proportions. It's a fascinating find.
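The counting is simple enough to enumerate: each chiral center is independently R or S, so the combinations multiply out. A two-line sketch:

```python
from itertools import product

# Each of the 3 chiral centers can be R or S, giving 2**3 = 8 stereoisomers
# (assuming no internal symmetry collapses any pair into a meso form).
isomers = [''.join(c) for c in product('RS', repeat=3)]
print(len(isomers), isomers)
# 8 ['RRR', 'RRS', 'RSR', 'RSS', 'SRR', 'SRS', 'SSR', 'SSS']
```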
I am guessing that the reason for the evolution of such varying stereochemical cocktails could be survival of the fittest. What if your predator gradually got used to, and immune to, your special chemical warfare compound? It seems easier to modify the compound by changing the mixture of its isomers than to manufacture a totally new compound to ward off the predator, a process that would take too much time and energy. This variation could also help the insects keep each other at bay during courtship, mating, or aggression. Again, if everyone had the same composition, everyone could become immune to it, but with each individual having his own secret formula, his adversary can never guess what is going to emerge from his underside. What I am interested in knowing is whether this composition of stereoisomers changes during the individual's own lifetime. Now that would be evolution at its best. Let me check out the original article.
Update: The paper with the cute name, 'Single-Insect NMR: A New Tool To Probe Chemical Biodiversity', is from ACS Chemical Biology. Interestingly, the authors are chemists to the core; they don't seem to have speculated about the evolutionary or biological significance of their findings.
Vinylcyclobutane-cyclohexene reaction in natural products
I was not aware of the really neat paper by Baran and Houk in which they justified a dicationic diradical mechanism for the rearrangement of sceptrin to ageliferin. This may be the final nail in the coffin of the earlier assumed 'hymenidin-ageliferin' rearrangement, which was thought to be a Diels-Alder (DA) reaction catalysed by a novel Diels-Alderase.
This was of course the sequel to Baran's remarkable 2004 paper in ACIE, in which he convincingly questioned the DA reaction and instead proposed a vinylcyclobutane-cyclohexene (VC-CH) rearrangement. What I found neat was the simple reason why he questioned the hypothesis: when the natural products are isolated, ageliferin is always present in smaller amounts, whereas if both ageliferin and sceptrin were formed from hymenidin, then ageliferin, being thermodynamically more stable, should be the major product. Ergo, the only way it can be the minor product is if it is derived from sceptrin through an energetically uphill VC-CH rearrangement.
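To put a rough number on that thermodynamic argument: under thermodynamic control the product ratio follows a Boltzmann factor in the free-energy gap. A minimal sketch, with an assumed gap (the actual value is not quoted in the papers):

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)

def thermodynamic_ratio(delta_g_kcal, temp_k=298.0):
    """Equilibrium ratio [more stable]/[less stable] for a free-energy
    gap of delta_g_kcal (kcal/mol) under thermodynamic control."""
    return math.exp(delta_g_kcal / (R * temp_k))

# Assumed gap of 2 kcal/mol favouring ageliferin (illustrative only): if
# both products arose from hymenidin under thermodynamic control,
# ageliferin should dominate by roughly 29:1, not be the minor product.
print(f"ageliferin : sceptrin ~ {thermodynamic_ratio(2.0):.0f} : 1")
```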
Now, Houk's calculations seem to support the diradical mechanism. As a note, he finds UB3LYP more suitable for these calculations than the now popular mPW1 functional or CASSCF methods.
Refs:
1. Baran, Houk et al. Angew. Chem. Int. Ed. 2004, 43 (20), 2674-2677.
2. Baran et al. Angew. Chem. Int. Ed. 2006, 45 (25), 4126-4130.
Modern Physical Organic Chemistry
Phys. Org. Chem. has always been one of my favourite subjects. As I graduated from school and college into university for my master's, I began to realise that it represents not so much a separate topic as a philosophy and approach: to treat chemical and biological systems from the perspective of structure, conformation, and reactivity, which are after all the most fundamental aspects of any such system. I reached the conclusion that phys org is a truly interdisciplinary framework, and anyone who has a solid background in it can be a good computational chemist, synthetic organic chemist, and/or bioorganic chemist or biochemist.
Unfortunately, all the classic phys org books until now have been of the 'pure' kind, focusing on mechanism and reactivity but not on the interdisciplinary nature of the subject, especially for biological systems. My wait is over: Modern Physical Organic Chemistry by Dennis Dougherty and Eric Anslyn has completely and satisfactorily reinvented the phys org textbook. Now one can turn to a wholesome treatment of phys org as a multidisciplinary, fundamental, and exciting approach to both chemistry and biology. The book is worth its price and covers the gamut of topics, including basic ones like mechanism, interspersed with lots of boxes explaining the applications of basic phys org concepts to host-guest systems, proteins and nucleic acids, strained molecules, and materials science. A fantastic reference.
Dougherty of course is a great chemist doing some very interesting research ('physical organic chemistry on the brain', as he calls it), and he gives a swashbuckling talk ('Nicotine is the most common molecule for all SAR studies!'), as evidenced by his spiel at the Spring 2006 ACS meeting in Atlanta.
Incidentally, the book (and especially the section on electrophilic aromatic substitution) reminded me of a jolly good old thin British book by Peter Sykes, A Guidebook to Mechanism in Organic Chemistry. It's old, but worth every penny. I had digested it, and Sykes is the epitome of the British pedagogic tradition of explaining concisely and accurately in one or two statements what other textbooks take two paragraphs to say. A true vintage classic, and for its explanations of mechanisms, the best I have ever seen. I will never forget it. Happily, it is still available in its sixth edition on Amazon. Unfortunately, almost all Americans are unaware of it, and I am the Organator sent back from the future to introduce it to them.
College Chemistry Carbocation Cinch
Now here's a paper that reiterates one simple freshman organic chemistry principle: tertiary carbocations are more stable than secondary carbocations. Of course, that need not be the case inside enzymes at all, but here Tantillo's theoretical results (DFT studies using the popular mPW1PW91 functional) indicate that it may be. He finds that the bisabolyl cation, which was traditionally drawn as passing through a secondary carbocation intermediate on the way to the terpene trichodiene, actually follows a lower-energy pathway involving only tertiary carbocations. But he ends up proposing a novel proton transfer and a "temporary methyl shift", which is a different colour from all those hydride transfers we learn in terpene biosynthesis. It would be interesting to investigate experimentally what happens in the active site; enzymes can stabilize primary and secondary cations through cation-pi interactions, for example.
Ref: Org. Lett. 2006, ASAP article, DOI: 10.1021/ol061884f
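For a feel of what a 'lower-energy pathway' buys kinetically: in transition-state theory the Eyring prefactor cancels in a ratio of rates, so a modest difference in barrier heights translates into a large speedup. A sketch with an assumed barrier difference, not a number from Tantillo's paper:

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)

def rate_ratio(delta_dg_act_kcal, temp_k=298.0):
    """Ratio of rates for two pathways whose activation free energies
    differ by delta_dg_act_kcal; from k = (kB*T/h)*exp(-dG_act/(R*T)),
    the prefactor kB*T/h cancels in the ratio."""
    return math.exp(delta_dg_act_kcal / (R * temp_k))

# If the all-tertiary-cation route lies, say, 3 kcal/mol below the
# secondary-cation route (an assumed, illustrative number):
print(f"faster by ~{rate_ratio(3.0):.0f}x at 298 K")  # ~160x
```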
Incidentally, the muddle of terpene pathways in the paper brings back fond memories of a master's-level course on the topic taught by a really great senior professor. It was actually fun painfully deciphering the various combinations of methyl, hydride, and proton shifts and cyclizations that could get you from an intermediate to the product in those terpene biosyntheses.
New Look
I am giving this blog a slightly new look, and hope to breathe fresh life into it. I hope to post more often on technical matters in chemistry and related topics, and because of this emphasis on technical matters, this blog should complement my other, general blog. But since I have always been interested in other aspects of science, like its practice, social standing, and general philosophy, I will also comment on those aspects when I find them interesting. Some posts will be cross-posted on the general blog.
As far as the technical side goes, I hope to post on organic chemistry (including synthesis), computational chemistry/molecular modeling, drug discovery, and general topics at the chemistry-biology interface. But these are just general guidelines for myself based on my interests, not hard and fast rules, and posts will occasionally include anything scientific that I find interesting. So it should not be surprising if one suddenly discovers a post on astronomy or animal behaviour. I want to treat science on this blog as fun, and as a human endeavor; as something that has an exciting and interesting technical side, but also seamless connections within itself and with human beings and society. So here goes.
A word on copyrights: I will be posting images that I see on other websites, including professional publications like those from the ACS. Every image will be referenced, and you should not see any image that has not been linked to its source. If you do, please tell me about it, because I don't want to run into any legal issues and want to acknowledge the source of everything I post. Also, I will refrain from saying anything about my own research except in a very general way, again for proprietary reasons.
Killing the Hydra
As an addendum to the previous post, I want to note that the 'hockey stick' graph by Michael Mann and others which was so much in the spotlight recently has been endorsed in its general features by many bodies, including the National Academy of Sciences. (Link: Climateaudit)
Once somebody asked me a curious question: Mann's graph shows that the 20th century's temperature anomaly is the highest of the last 1000 years. But what if we are looking at a cycle that repeats itself in, say, 2000 years? Wouldn't the anomaly we see then be only part of that cycle?
I guess this is an objection that many people have about global warming: what if it is only a cycle? To my knowledge, the answer is now clear: computer models can reproduce the temperature that would have existed had mankind and its greenhouse gases not been around, and this natural variation is far smaller than the warming we actually observe. The simple fact that the temperature rise has been concomitant with the rise of CO2 is also a very telling one, however deceptively simple it may appear. And it is a simple law of nature that CO2 absorbs certain wavelengths of light. Taken together, these three facts constitute for me as good a chain of reasoning as any. The naysayer's objection is absurd for another reason: had the data been collected for 10,000 years, he could still have claimed that it was not collected for 20,000 and renewed his objection. I am not sure it makes sense to play these childish games till the world comes to an end. As I have noted before, do we really want to be one hundred percent certain about an event that could well mean the end of humanity? Maybe then we should also stop vaccinating ourselves.
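The CO2-absorption point can even be made semi-quantitative. A widely used simplified expression for CO2 radiative forcing is dF = 5.35 ln(C/C0) W/m^2 (Myhre et al., 1998); multiplying by a climate sensitivity parameter gives a ballpark warming. A sketch, where the sensitivity range is an assumed, commonly quoted one rather than a precise value:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing (Myhre et al. 1998), in W/m^2,
    relative to a pre-industrial baseline c0_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# A climate sensitivity of ~0.5-1.0 K per (W/m^2) is an assumed,
# commonly quoted range, not a precise value.
for c in (280, 380, 560):  # pre-industrial, ~present-day, doubled CO2
    f = co2_forcing(c)
    print(f"{c} ppm CO2: forcing {f:.2f} W/m^2 -> warming ~{0.5*f:.1f}-{1.0*f:.1f} K")
```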
Frankly, given our nature, I don't think any amount of moral reasoning, no matter how true, is going to sway public opinion soon. People don't even stop smoking cigarettes when they know cigarettes can kill them, so it may be naive to expect them to suddenly care about global warming, a phenomenon that probably won't directly affect them in their own lifetimes. No matter how lofty the moral pillars of reasoning seem, the one thing that can finally force people to pay attention is still the mundane allure of economic incentives. Consider this: from this semester onwards, almost nobody from my lab is going to drive to work; they are all going to take one of the three new shuttles that Emory University has started running. I don't believe for a moment that they are doing this for the environment. The simple reason is that Emory is also going to double the annual parking fee to $700 a year. If you can't show them the tree, just don't make it free. Do this more often, create new bus services, make wireless internet available on the shuttles, and make parking fees prohibitive, and people will obediently avail of the service. It's surprising how the simplest of daily incentives can change people's minds about the most profound objectives. But evidently, things like public transportation are not that simple, because they are not being implemented.
The Kyoto protocol is riddled with blame games, with the US saying it won't sign until India and China do, and India and China saying they won't sign until the US does. China is next only to the US in greenhouse emissions, and if anything, it's going to spew many times more into the atmosphere in the near future. (Link: BBC)
I for one think this is another childish game that can be played till eternity. Even though the arguments are valid, somebody has to get out of the loop. After all, deep down, does it really matter that the US has increased its emissions over the last few decades? It's we who have to suffer the consequences to our environment if we don't cut ours down. It is true that, as of now, we cannot achieve a high standard of living without conventional energy sources. But at some point we are going to run out of oil anyway. How does it harm us to start early and found our new standard of living on unconventional energy sources, an effort that could actually prove profitable in the event of a likely oil crisis? At least we have the nuclear deal with the US; let's avail of it. On the part of the US, I think that if they want us to sign, they should also be ready to sell us, cheaply, some of the research they have already done on unconventional energy sources, saving us the expense of doing that research from scratch. But finally, they must also be prepared to give their own citizens incentives to cut down on a standard of living that neither they nor the world can realistically sustain in the near future. I agree that governments are to blame, but ordinary Americans need to pitch in too, and they are going to do it only if they are offered incentives of the kind mentioned above for public transportation. As Al Gore says, we have the technology to stop global warming; in a different sense, I think that statement can also refer to the technology needed to give people incentives to combat global warming.
The media, as always, has a very important role to play in this situation. As noted below, the media has many times embellished global warming research by spuriously connecting it with specific environmental events. However, much-needed public awareness did come out of this misused connection. Now the media needs to highlight the general effects of global warming that are becoming so certain. For example, climate experts predicted that the intensity, not the number, of hurricanes would increase, and this has been borne out. A good connection has also been established between mean sea surface temperature and hurricane intensity. The media needs to highlight such facts, which have been extensively investigated using sound science. In the US, the media has a considerable hold on the public psyche. For once, it should exploit this hold for a good purpose.
Combating ignorance and galvanizing official policy and public opinion about global warming is like killing the Hydra: when one of its heads is cut off, another appears and grows in its place. But the truth remains that Hercules did kill the Hydra and win the battle, and so we too should know that we can, and fight it with all honesty and sincerity.
The Discovery of Global Warming
The Discovery of Global Warming - Spencer Weart
Any new scientific theory comes into the world kicking and fighting back. That's because scientists are inherently skeptical, and in the opinion of one of my colleagues, also inherently mean. When a revolutionary new fact is presented to them, their first reaction is incredulity, partly because skepticism is a reflex for them, and partly because another reflex causes them to be galled that they weren't the ones who came up with the new idea.
If 'pure' scientific ideas themselves have so much trouble coming up for air, what would the scenario be for a revolutionary new idea that also has a gory heap of political controversy written over it? Messy, to say the least. And so it is for the idea of global warming.
Spencer Weart has penned a lively, informative, and concise history of the discovery of global warming, one that demonstrates precisely how difficult it is for such an idea to take root in the public mind and affect public policy. What is more fascinating is how research in climate change was spurred on by unseemly government and military interests, and by misunderstood media coverage and inquiry. Weart starts with some old stalwarts from different fields in the nineteenth and early twentieth centuries, who were intrigued by a fascinating phenomenon, the ice ages, which served as the driving force for suspecting the role of greenhouse gases in changing the temperature of the planet. If one singular fact emerges from the history of global warming, it is how thoroughly the public underestimated humankind's ability to change the mighty earth's enormous environs, and how reluctant scientists were to accept that small changes, human or natural, could cause violent climate change ('The Day After Tomorrow' notwithstanding).
The discovery of global warming was a painful endeavor, often occupying entire scientific lifetimes. Almost everyone who wondered about it faced opposition in terms of opinion and funding. No one could prove global warming alone, without extensive collaboration; not surprising, given the interdisciplinary nature of climate. Scientists had to grudgingly forge alliances with other scientists whose fields they would hardly have considered respectable. They had to beseech the government for funding and support. One of the most interesting facts is that government funding of climate studies in the 50s and 60s was fuelled entirely by Cold War military interests. More than anyone else, defense forces were interested in controlling the weather for military purposes, and they couldn't have cared less about global warming. But this was one of those fortuitous times in history when a misguided venture proved beneficial for humanity. Just as building the atomic bomb produced a bonus of insights into the behaviour of matter as a side effect, so did the military's interest in the weather, absurd as it was in many ways, prove a godsend for scientists hungry for funding and facilities. Weart makes it quite clear how scientists found an unexpected asset in the military's interest in climate. Secretly, they must have laughed in the face of paranoid cold warriors. Publicly, they appeared most grateful, and in fact were, for the funding they got.
If the military unknowingly contributed to our knowledge of climate change by supporting dubious studies in the field, the media contributed by miscommunicating the facts on many occasions. At first the general public was unconcerned and did not believe in climate change, again because they could not believe that a puny entity such as mankind could disturb the grand equilibrium of nature. But as events such as hurricanes, floods, and droughts began to be linked with climate change in the 70s, the media began to pay more attention to scientific studies, and to exaggerate the connection between man's contribution to the environment and violent weather phenomena. As with the military's venture, even though this one was completely misguided (even today, we cannot attribute specific events to global warming), the unexpected effect of the media's spin doctoring was that people began to believe that man could change the climate. Of course, the media was also not afraid to point out, and again exaggerate, occasions when scientists' predictions and explanations failed, but for better or worse, people for the first time in history began to take serious notice of global warming and mankind's contribution to it. In the 1960s, Rachel Carson's 'Silent Spring' provided yet another impetus for the public to consider the general relationship between technology and its effects on the environment.
And yet, as Weart narrates, the road was tortuous. At every stage, speculative as the scientists' predictions were, they were opposed and overwhelmed by powerful lobbyists who had influence in Congress and much more money with which to thwart their opponents' efforts. Whenever a new study linked greenhouse gases with warming, industrial lobbyists would launch massive campaigns to rebut the scientists and reinforce public faith in the propriety of what they were doing. As Joel Bakan says in The Corporation, one of the main ways corporations maximize profits is to 'externalize' costs; suddenly being held responsible for environmental pollution that was previously externalized would put their profit-making dreams in jeopardy. Until the 80s, scientists could not do much, as there was not yet enough evidence for global warming, and computer models were not powerful and reliable enough to help them make their case. Matters were made worse by the Reagan administration, which has one of the worst track records in history when it comes to environmental legislation. So, unfortunately for scientists, just when their efforts and computer models were gaining credence, they were faced with a looming pall of government and corporate opposition against which their fight was feeble.
The scientists who researched climate change were, and are, an exemplary lot. They built computer models, wrote reams of code, and ran simulations for weeks and months. They went to the coldest parts of Antarctica and the deepest parts of the ocean to gather samples and climate 'proxies', such as pollen, ice cores, and tree rings, that record the climate of ages no thermometer ever measured. They spent lifetimes searching for the contribution of mankind's actions to climate change, even knowing that their results could disprove their convictions. As far as dedication to science and policy is concerned, you could not wish for a more dedicated lot of investigators.
Slowly, in the face of opposition, predictions grew more credible, and enough data accumulated to make reasonable analyses and forecasts. Public recognition of global warming really came in the late 90s, but the culmination of the scientific effort came in the late 80s. During those few years, droughts and rain deficits around the US again brought media attention to climate change. Computer models became much more reliable. When Mount Pinatubo erupted in 1991, computer models accurately predicted the drop in temperature caused by the accumulation of sulfate particles in the atmosphere (a drop that was more than compensated by the rise in greenhouse gases). Scientists began to appear before Congress to testify. The Intergovernmental Panel on Climate Change (IPCC) was created, and it produced authoritative reports on climate change and the 'anthropogenic' contribution to it. The evidence became too widespread to mock or reject outright; global warming had to be given at least serious consideration. However, because of the uncertainties inherent in predicting something as complex as the climate, government officials could always cherry-pick and convince the public of the speculative nature of the whole framework. Here they were making a fundamental mistake, of the kind that opponents of evolution make: just because a theory has uncertainties does not mean it is completely wrong, as these officials would have the public believe. Of course nothing is certain. But in the case of global warming, enough data had accumulated by the 90s to make one thing absolutely clear at a minimum: that we were altering the climate of the earth in unpredictable ways. Studies of past climates had also reinforced the conclusion (with some startling impetus from chaos theory) that very small perturbations in the earth's climate and ocean systems can have huge effects on the climate (the so-called 'butterfly effect'). Man's contributions to the earth's environment are now eminently more than a 'small perturbation'.
However, when it comes to the fickle palette of politics, every colour can be shaded to suit one's interests. There was, and will always be, great hope in the fact that the opposition against CFCs worked and the nations of the world successfully signed the Montreal Protocol. But the US Senate resolved in 1997 that it would not accept the Kyoto Protocol; Clinton and Gore (naturally) signed it, but it was never submitted for ratification. After this, it was but a formality for George W. Bush to continue the policy by rejecting Kyoto in 2001, citing the grave economic damage it would bring about.
Today, there is no doubt that global warming is real. It has been endorsed by every major scientific body in the world. Its effects are many, and each one of them is devastating. Enough data has now accumulated to reinforce the relation between greenhouse gases and global warming. Individual details do remain ambiguous in certain respects, but they will be quantified soon. And as I noted in this post, does it matter that we don't know everything with one hundred percent certainty? The repercussions of global warming are the biggest that mankind will ever face, and even a 30% certainty about them should be enough for us to make serious efforts to stop it. In my opinion, the unfortunate thing about global warming is that it is a relatively slow killer. And because individual events cannot be attributed to it, people are not going to be flustered even by Hurricane Katrina and think it was caused by global warming. They will just consider it an unfortunate incident and move on. If they knew for sure that Katrina was caused by global warming, they would be lined up on the steps of Capitol Hill in Washington. But what they want is certainty. Strange that they don't seem to want it when it comes to terrorist attacks.
Weart's book is not an eloquent appeal to stop global warming, and in one way that is what makes it striking: the facts, revealed by the dispassionate hand of science, make the phenomenon clear on their own. Yet it is also probably the only problem I have with the book. Weart is a good writer, but not a particularly poetic or eloquent one, and I believe he could have made the book more sobering and dramatic. He essentially weaves a history in the true sense of the word, even if it falls short of reading like a novel. The human drama is there, but kept to a minimum. He writes like a true scientist, making the facts matter. The science on global warming is now sound. What is not is human nature.
I cannot help putting in this cartoon again.