Some slices from the literature
Not too much time this week to dwell in detail on papers, but here are some that I found interesting:
Richard Friesner and his colleagues at Columbia investigate the entropic contribution of arginine side chains by measuring local conformational dynamics with NMR spin relaxation. They compare these results to detailed MD simulations and infer that the aliphatic part of the side chain retains enough entropy for the charged guanidinium group to fruitfully engage in rigid salt-bridge formation, thus minimizing the entropic penalty.
On the other side of the Atlantic, Angelo Vedani and his colleagues in Basel have derived a QSAR model for psychotropic drugs binding to the glucocorticoid receptor using their programs Yeti and QUASAR. The GR is a nuclear hormone receptor that can modulate a variety of key processes by binding small molecules. The model was validated on 110 compounds representing 4 chemical classes.
An interesting paper on the conformation of a common and important consensus tripeptide motif in a glycoprotein compares IR intensities and frequencies measured with a technique called ion-dip IR spectroscopy to frequencies calculated with high-level ab initio theory (MP2/6-311++G**). The Oxford authors carry out the comparison for the wild-type motif as well as for two mutants and ask a "why did nature select this specific motif" kind of question by looking at the viability of the resulting conformations. It looks fine, but I am a little skeptical about all those gamma turns that they see, a common artifact of inadequate treatment of solvation in both force field and ab initio approaches.
And again on the medicinal front, a group in Europe does some fragment-based computational design and finds a PPAR agonist. Detailed knowledge of the protein structure helps explain why one bioisostere works and another does not.
Author's delight
Now this is the kind of comment that I look forward to from a reviewer:
"Referee's report: This paper contains much that is new and much that is true. Unfortunately, that which is true is not new and that which is new is not true."Another memorable comment was kindly passed on to me by Prof. Tony Barrett of Imperial College, London.
In H. Eves, Return to Mathematical Circles, Boston: Prindle, Weber, and Schmidt, 1988.
"This paper should be reduced by 50% and oxidised by 150%"- Anon
PBS film on Oppenheimer
I stumbled upon this film about Oppenheimer on American Experience on PBS yesterday. It was a two-hour docu-drama, with David Strathairn playing Oppenheimer during his infamous trial. The trial was an opportunity for Oppenheimer to recapitulate his life, and that is what the docu-drama does. It will undoubtedly be repeated, and I would strongly encourage those unfamiliar with the man and this period in the country's history to watch it. It shows the rise of a truly brilliant and iconic character, followed by a fall orchestrated by a vindictive and self-serving government.
Given my long interest in Oppenheimer, there wasn't much in the film that was new for me. It was highly informative, poignant and fortunately well-grounded in facts and consensus. Interviewed among others were historians Richard Rhodes, Martin Sherwin, Priscilla McMillan and veteran scientists Harold Agnew, Herbert York, Nobel Laureate Roy Glauber and Marvin Goldberger. Prudently, the film does not speculate on Gregg Herken's belief that Oppenheimer was a member of the communist party; to my knowledge only Herken holds this opinion, and to be honest it does not even matter. But as the record shows, 30 years of constant investigation by the FBI turned up nothing that indicated party membership, and that says a lot.
The disturbing thing about the trial, of course, is that it is a textbook case of how the government can equate disagreement and dissent with disloyalty, a tale for our times whose essentially unlawful scare tactics have been repeated in numerous administrations, most recently when opposition to the Iraq war was often deemed "unpatriotic" during the Bush administration. The Oppenheimer story, one of the most shameful episodes in this country's history in my opinion, is a cautionary tale that should always be remembered as a reminder that in a democracy we need to be constantly alert and watch out for abuse of power by the government. The film does a good job of demonstrating this.
Fundamentals of Asymmetric Catalysis
Fundamentals of Asymmetric Catalysis
Patrick Walsh and Marisa C. Kozlowski
University Science Books, 2008
I browsed this yesterday in the library and it looks really good. An opinion from one of the field's godfathers:
It is my opinion that this text will be widely accepted by the organic chemical community. There is nothing like it currently on the market. It reminds me of the Morrison & Mosher text, Asymmetric Synthesis, that was published more than three decades ago. The authors have generated the New Bible of asymmetric catalysis. The results are truly impressive and the cases are both relevant and current. --David A. Evans, Harvard University
A beautiful, informal and comprehensive account of quantum history
Uncertainty: Einstein, Heisenberg, Bohr, and the Struggle for the Soul of Science
David Lindley
Anchor, 2008
One of the best informal histories of quantum physics and its makers that I have come across, and I have read many. Concisely and with passionate enthusiasm, Lindley weaves together the essential scientific discoveries, the scores of anecdotes about the famous participants including their personal conflicts and friendships, the philosophical and social implications of many quantum concepts, and the political and historical turmoil that accompanied these discoveries. While mainly focused on Einstein, Heisenberg and Bohr, Lindley draws vivid portraits of other pioneers such as Pauli, Dirac, Born, Kramers and Schrödinger. Quantum physics is perhaps the most paradoxical, beautiful, bizarre and important scientific paradigm ever developed, and the decade from 1920 to 1930 was undoubtedly the golden era of physics. Lindley succinctly and engagingly tells us how and why. A must-read for lovers of the history of physics.
Strain Energies in Ligand Binding: Round Two- Fight!
Or why to be wary of ligands in the PDB, force field energies, and anybody who tells you not to be wary of these two
One of the longstanding questions in protein-ligand binding has been: what is the energy penalty that has to be paid in order for a protein to bind a ligand? Another question is: what is the strain energy that the ligand pays in order to bind? Contrary to what one might initially think, the two questions are not the same. Strain energy is the price paid to twist the ligand into its binding conformation. The free energy of binding is the energy that the system pays in addition to this strain energy in order to form the complex.
A few years ago, this question shot into the limelight because of a publication in J. Med. Chem. by Perola et al. from Vertex. The authors did a meticulous study of hundreds of ligands in their protein-bound complexes, some from the PDB and others proprietary. They used force fields to estimate the difference between the energy of the bound conformation of the ligands and the nearest local energy minimum conformation- the strain energy penalty. For most ligands, they obtained strain energies ranging from 2-5 kcal/mol. But what raised eyebrows was that for a rather significant minority of ligands, the strain energies seemed to be more than 10 kcal/mol, and for some they seemed to be up to 20 kcal/mol.
These are extremely high numbers. To understand why, consider a fact that I have frequently emphasized on this blog: the population of a particular conformation in solution is virtually negligible if the free energy difference between it and the most stable conformation is only about 3 kcal/mol. For a conformation to pay that much of an energy penalty in order to transform itself into the bound conformation would already be a stretch, considering its low concentration. For a conformation to pay an energy penalty of 20 kcal/mol makes no sense at all in this light, since such a conformation should be essentially non-existent. Consider also that hydrogen bonds usually contribute about 5 kcal/mol, that thermal energy at room temperature is only about 0.6 kcal/mol, and that 20 kcal/mol exceeds the rotational barriers in most molecules; against these yardsticks the claimed strain energy penalty starts looking humongous. Where exactly would it come from?
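To put a number on just how negligible such populations are, here is a minimal back-of-the-envelope sketch (my own illustration, not from either paper) that evaluates the two-state Boltzmann factor at 298 K for a few free energy differences:

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)
T = 298.0     # room temperature in K

# Two-state picture: the population of a higher-energy conformation relative to
# the most stable one is roughly exp(-dG / RT), ignoring degeneracy.
for dG in (1.0, 3.0, 10.0, 20.0):  # free energy difference in kcal/mol
    ratio = math.exp(-dG / (R * T))
    print(f"dG = {dG:4.1f} kcal/mol -> relative population ~ {ratio:.1e}")
```

At 3 kcal/mol the conformation is already down to well under 1% of the population; at 20 kcal/mol it is down to roughly one part in 10^15, which is why such strain energies simply cannot be right.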
Perola's paper generated a lot of buzz- a good thing. It was discussed by speakers at a conference I attended in March last year. Now, a paper in J. Comp. Chem. seems to clear the air a little. In a nutshell, the authors conclude that the strain energies they measure seldom, if ever, surpass 2 kcal/mol. Needless to say, this is a huge difference compared to the earlier study.
Why such a startling difference? It seems that as always, the answer strongly depends on the method and the data.
First of all, the PDB is not as flawless as people assume. Most people who crystallize protein-ligand complexes are first and foremost interested in the structure of the protein. They often do a poor job of fitting ligands to the electron density; Gerard Kleywegt of Uppsala University has done some marvelous work on detecting errors in PDB ligands, and his review on the subject should be a must-read for all scientists even marginally connected with crystallography. Because of poor fits, conformations of ligands in PDB electron densities can be completely unrealistic or, at the very least, brutally strained. Amides can be cis or non-planar and, more rarely, planar aromatic rings can be deformed. There can be severe steric clashes that are not easily apparent. Quite naturally, such conformations when refined would show huge drops in energy. Therein lies the first source of the unrealistically large strain energies.
The second factor has to do with the vagaries and inadequacies of force fields, often unknown to crystallographers but well known to experienced computational chemists. Force fields are quite poor at determining energies, and their results are especially skewed by an overemphasis on electrostatic interactions, which the force fields are ill-equipped to damp. Now consider what happens when a PDB ligand that contains both a positively and a negatively charged group is optimized. If you relax it to the nearest local energy minimum, the two groups instantly snap together and form a very strong ionic bond. This leads to a huge overstabilization of the relaxed conformation, again giving the illusion of a large strain energy difference between the PDB conformation and the local minimum.
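To get a feel for the size of this artifact, here is a minimal sketch (my own illustration, not taken from either paper) of the bare Coulomb energy between two unit charges at a typical salt-bridge distance, with and without water-like dielectric screening; the dielectric constants used are simple assumptions for illustration:

```python
# Coulomb interaction between two point charges:
# E = 332 * q1 * q2 / (eps * r), with r in angstroms and E in kcal/mol
# (332 is the standard conversion factor used in molecular mechanics).
def coulomb_energy(q1, q2, r_angstrom, eps=1.0):
    return 332.0 * q1 * q2 / (eps * r_angstrom)

r = 3.0  # a typical N+...O- salt-bridge distance in angstroms
print(coulomb_energy(+1, -1, r, eps=1.0))   # ~ -111 kcal/mol: unscreened, in vacuo
print(coulomb_energy(+1, -1, r, eps=80.0))  # ~ -1.4 kcal/mol: water-like screening
```

The two orders of magnitude between the screened and unscreened numbers are exactly the kind of spurious stabilization that can masquerade as ligand strain.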
Finally, the devil is in the details. In doing the initial refinement of the conformation, the earlier study used a constraint called a flat-bottom potential when optimizing the PDB ligands in their bound state. However, the flat-bottom potential, which extracts no penalty for atomic movement within a certain short distance and then suddenly ramps up the penalty, is not physically realistic. A better method is to use a harmonic potential, which continuously and smoothly penalizes atomic displacement.
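For illustration, here is a minimal sketch of the two restraint shapes being contrasted; these are toy functions of my own, with a made-up force constant and flat-region width rather than the parameters used in either study:

```python
def harmonic_restraint(dx, k=10.0):
    """Penalty grows smoothly and continuously with displacement dx (in angstroms)."""
    return k * dx ** 2

def flat_bottom_restraint(dx, flat_width=0.3, k=10.0):
    """Zero penalty inside the flat region, then a quadratic ramp beyond it."""
    excess = max(abs(dx) - flat_width, 0.0)
    return k * excess ** 2

# Compare the two penalties for a few displacements (angstroms).
for dx in (0.1, 0.3, 0.5):
    print(dx, harmonic_restraint(dx), flat_bottom_restraint(dx))
```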
The present study takes all these factors into account and also replaces the force field results with well-established quantum chemical energies at the B3LYP/6-31G* level, which are used to calculate the energies of both the bound and the local-minimum conformations. Secondly, the authors use a well-established continuum solvation model (PCM), as implemented in the latest version of the Gaussian program, to incorporate the damping effects of solvation. Thirdly, as indicated above, they use a harmonic potential for the restrained optimization. Fourthly and most importantly, for the cases where the strain energy still seems unusually high (and they are quite strict about it- anything greater than 2 kcal/mol counts), the authors closely investigate the relevant PDB entries and find that, indeed, the ligands were not fit well into the electron density and had unrealistically strained conformations.
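Schematically, the resulting protocol boils down to simple bookkeeping. The sketch below is my own illustration; the energies are invented stand-ins for the B3LYP/6-31G*/PCM single-point values, and none of the numbers come from the paper:

```python
# Strain energy = E(ligand frozen in its deposited bound pose)
#               - E(ligand relaxed to the nearest local minimum),
# with both energies computed in the same solvation model.
# The energies below (kcal/mol) are invented purely for illustration.
ligands = {
    "ligand_A": {"E_bound": -1520.4, "E_min": -1521.1},
    "ligand_B": {"E_bound": -987.2, "E_min": -999.8},  # suspiciously large gap
}

THRESHOLD = 2.0  # kcal/mol; anything above this warrants a second look at the fit

for name, e in ligands.items():
    strain = e["E_bound"] - e["E_min"]
    verdict = "re-examine the electron density fit" if strain > THRESHOLD else "plausible"
    print(f"{name}: strain = {strain:.1f} kcal/mol ({verdict})")
```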
Once they tackled these problems, the strain energies all fell to between 0.5 and 2 kcal/mol, which seems a realistic penalty for a conformation with a respectable population in solution to pay. There is now a second question: what is the maximum strain energy penalty that a ligand can pay to be transformed into the bound conformation? The authors are working on this question, and we will await their answer.
But this study reiterates two important lessons that should be remembered by anyone dealing with structure at all times:
1. Don't trust the PDB
2. Don't trust force field energies
Better still, as old Fox Mulder said, trust no one and nothing.
References:
1. Butler, K. T., Luque, F. J., Barril, X. (2009). "Toward accurate relative energy predictions of the bioactive conformation of drugs." Journal of Computational Chemistry, 30(4), 601-610. DOI: 10.1002/jcc.21087
2. Perola, E., Charifson, P. S. (2004). "Conformational Analysis of Drug-Like Molecules Bound to Proteins: An Extensive Study of Ligand Reorganization upon Binding." Journal of Medicinal Chemistry, 47(10), 2499-2510. DOI: 10.1021/jm030563w
3. Davis, A., St-Gallay, S., Kleywegt, G. (2008). "Limitations and lessons in the use of X-ray structural information in drug design." Drug Discovery Today, 13(19-20), 831-841. DOI: 10.1016/j.drudis.2008.06.006
This is how science should be taught
The Viki Weisskopf way
From Jeremy Bernstein's review of noted physicist Victor Weisskopf's 1991 memoir. Bernstein first took a class from the utterly brilliant and impenetrable Nobel laureate Julian Schwinger at Harvard. After a couple of days of withstanding the barrage, Bernstein decided to attend Victor Weisskopf's class at MIT. The result is endearingly described:
My visits to Viki's class in quantum mechanics at MIT were, in every way, a culture shock. The class and the classroom were both huge—at least a hundred students. Weisskopf was also huge, at least he was tall compared to the diminutive Schwinger. I do not think he wore a jacket, or if he did, it must have been rumpled. Schwinger was what we used to call a spiffy dresser. The trick in any class is not to let the students know how much you know (the Schwinger technique) but to let them know how much you, and indeed everyone else, do not know.
Weisskopf's first remark on entering the classroom was, "Boys [there were no women in the class], I just had a wonderful night!" There were raucous catcalls of "Yeah Viki!" along with assorted outbursts of applause. When things had quieted down Weisskopf said, "No, no it's not what you think. Last night, for the first time, I really understood the Born approximation." This was a reference to an important approximation method in quantum mechanics that had been invented in the late 1920s by the German physicist Max Born, with whom Weisskopf studied in Göttingen. Weisskopf then proceeded to derive the principal formulas of the Born approximation, using notes that looked as if they had been written on the back of an envelope. Along the way, he got nearly every factor of two and pi wrong. At each of these mistakes there would be a general outcry from the class; at the end of the process, a correct formula emerged, along with the sense, perhaps illusory, that we were participating in a scientific discovery rather than an intellectual entertainment. Weisskopf also had wonderful insights into what each term in the formula meant for understanding physics. We were, in short, in the hands of a master teacher.
The wrath of lithium
Mitch alerts us to this extremely tragic story where a UCLA lab assistant has succumbed to burns caused by t-butyl lithium:
A 23 year old female research associate/laboratory technician intended to add an (unknown) aliquot of 1.6 M t-Bu-Li (in pentane) to a round bottom flask placed in a dry ice/acetone bath. She had been employed in the lab for about 3 months. The incident occurred on Dec. 29, during the UCLA holiday shutdown between Christmas and New Year's. Researchers are granted permission to work during the shutdown for "critical research needs." There were two postdoctoral researchers working in the lab and the adjacent lab, with limited English proficiency.
The principal investigator had trained the employee to slightly pressurize the bottle (an ~250 ml Aldrich Sure-Seal container) with argon and withdraw the desired aliquot using a 60 ml syringe fitted with a 20 gauge needle. The PI likes to use these particular syringes because they have a tight seal. There is no evidence that the employee used this method. Speculation: she may have just tried to pull up the aliquot in the syringe. Somehow, the syringe plunger popped out or was pulled out of the syringe barrel, splashing the employee with t-Bu-Li and pentane. The mixture caught fire upon contact with air. She was wearing nitrile gloves, safety glasses and a synthetic sweater. She was not wearing a lab coat. The fire ignited the gloves and the sweater.
Six feet from the fume hood was an emergency shower. When the employee's gloves and clothing caught fire, she ran from the area away from the shower. One of the post-docs used his lab coat to smother the flames. 911 was called, bringing UCLA Fire Dept. and emergency medical, Los Angeles City Fire, and Los Angeles County Haz Mat. The EMTs put the employee in the safety shower for gross decon and then transported her to the ER. She's currently in the Grossman burn unit in Sherman Oaks with second degree burns on her arms and third degree burns on her hands, a total of about 40% of her body. There was very little damage to the lab. Bill has not interviewed the employee.
I find it especially sad that she was doing a relatively routine procedure performed in hundreds of labs, and was wearing gloves and safety goggles, even if not a lab coat. This sobering incident should remind us that for all that we jest about laboratory procedures and reagents, working in a lab should be a deadly serious activity. Sometimes the monotony blurs the line between casual protocol and hellishly serious work precautions.
An Excellent Primer
The Bomb: A New History
Stephen Younger
Ecco, 2009
Stephen Younger's book on the bomb is a very good primer on nuclear weapons, though somewhat limited by its length. Younger, a veteran weapons designer and defense official, begins with a succinct history of nuclear weapons and then goes on to review the major weapons and delivery systems in the United States and other countries. He describes the US deterrence triad: bombers, ballistic missiles and especially submarine-based nuclear missiles, which can pack the biggest punch most efficiently. Also included are short discussions of developing and already developed arsenals in other countries including Russia, China, South Asia, France and Britain. Younger writes about Russia's ongoing modernization of its weapons and discusses the status of development in other countries. The discussion also includes a general overview of nuclear weapons effects, including thermal, blast, radiation and electromagnetic effects, and a chapter on 'soft' and 'hard' targets and their targeting. Younger contends that a weapon of about 10 kiloton yield would be sufficient to destroy or seriously damage most major cities and installations in the world, except extremely hardened underground facilities. Compare this with the W series of warheads in the US arsenal, many of which pack an explosive force equivalent to several hundred kilotons of TNT.
Younger also discusses nuclear proliferation and the problems inherent in terrorists constructing a bomb. His list of measures for combating such terrorism includes not just technical measures like missile defense and more efficient border security, but also an insightful paragraph on the valuable role of intelligence, especially human intelligence, in thwarting terrorists' attempts to secure a weapon or material in the first place. He also narrates the efforts of the Cooperative Threat Reduction Initiative in securing nuclear weapons and reactors in the former Soviet Union, efforts that also involve the dismantling of conventional weapons. While people constantly warn that terrorists might end up constructing a crude nuclear device, and while there is some merit in this warning, it's not as easy as it sounds. As Younger says, the devil is in the details, and while much of the general information on nuclear weapons is publicly available, it is far from trivial for any terrorist outfit to actually surmount the many intricate scientific and engineering problems encountered in actual weapons construction. The construction of a plutonium implosion weapon is especially daunting given the exacting conditions that the weapon's core and surrounding explosives have to satisfy. A more detailed discussion of dirty bombs is missing from the narrative. Also, while Younger's analysis of anti-nuclear-weapons measures is clear, what is missing is a crucial discussion of the countermeasures that can easily be developed against missile defense; such countermeasures have been convincingly demonstrated, time and time again, to be able to thwart even sophisticated missile defenses. In addition, new missiles such as the Russian SS-27 have apparently been designed to maneuver and baffle such defenses.
One of the most informative chapters in the book discusses replacing nuclear weapons with conventional weapons. With better targeting and accuracy, the need for megaton weapons is virtually non-existent. Pinpoint targeting can take out the most crucial command and control centers for nuclear weapons without causing high numbers of casualties. Many new conventional weapons can do the tasks previously reserved for nuclear weapons and thus lower the spectre of the nuclear threat. In fact, some tasks, like hitting biological weapons facilities, can be safely accomplished only with conventional weapons, since nuclear weapons might well disperse dangerous biological or chemical material into the surroundings. Even hardened bunkers can be destroyed by specially hardened warheads. In addition, replacing nuclear weapons with conventional weapons can go a long way toward nuclear disarmament.
Further on, Younger has a valuable analysis of the security and reliability of the US nuclear arsenal. This analysis made me realize that the problem is more complicated than it seems at first sight. The issue is simple. The US declared a moratorium on nuclear testing in 1992, and Congress cut funding for new nuclear weapons research. However, many of the weapons in the US arsenal have been kept well past their intended shelf lives, and it's not certain whether they would work as designed, an assurance that is crucial for deterrence. Doubts have especially been raised about the plutonium 'pits' at the center of implosion weapons. Computer simulations can aid in such predictions, but the only sure criterion for judging the workability of a design would be a test, an act that would have deep repercussions for non-proliferation. In addition, many of the production and manufacturing facilities that built these weapons have been shut down since 1992. Perhaps most importantly, talented personnel who were competent in nuclear weapons design are gradually fading away, with very few new recruits to replace them. It is easy to forget that, terribly destructive as they are, nuclear weapons present an immense and exciting scientific and engineering challenge for technical minds. To partly counter this, the US government has poured billions of dollars into the three national laboratories that still work on nuclear weapons- Los Alamos, Lawrence Livermore and Sandia. Massive basic science facilities have been developed at these three laboratories to retain personnel and attract new blood. Nonetheless, nobody really knows whether the nation could gear up to produce new weapons if it became necessary, and nobody has really been able to say when and why it would become necessary in the first place. The problem is a pressing one and the solution is not clear.
Finally, Younger talks about the future of nuclear weapons. He examines the three positions that have been taken on them. The abolitionist position was recently made popular by a panel of four experienced, non-partisan political leaders (Nunn, Perry, Kissinger and Shultz). While this position may be tenable in principle, in practice it would need constant and complete verification, which may be difficult. Then there are the minimalist and moderate positions. Younger himself adopts the moderate position, which calls for about 1000-2000 relatively low-yield non-strategic weapons on missiles and submarines. It is not easy to decide what number suffices for deterrence, partly because deterrence dictates that analyses of this number not be publicly disclosed in the first place! But whatever the number, Younger does not see nuclear weapons disappearing from the face of the earth anytime soon. As he concludes, the world can hopefully reach a state of security in which rogue states don't have weapons, bombs and material are secured, and deterrence works as planned. While this succinct primer does not answer whether such a state will actually be achieved, it provides a slim and solid introduction to the basic nuclear issues for lay readers, one that should make them think and decide for themselves.
Obama's threat reduction priorities
Plutonium may be deadly, but uranium is much more dangerous
I sincerely believe that because of its utterly devastating and game-changing implications, nuclear terrorism is one of the greatest threats the world faces. Even a crude nuclear weapon detonated in Mumbai, London, Tokyo or Los Angeles would cause the kind of destruction and havoc that is every citizen's worst nightmare. Such an event would significantly change the political and social landscape of a country for a long time to come, and probably for the worse. That's all anybody would talk about. In the case of nuclear terrorism, the adage that we have to succeed every single time while they have to succeed just once rings resoundingly true.
A recent Nature article emphasizes the steps that President-elect (for only 5 more days) Obama should take to keep nuclear weapons from falling into the hands of terrorists. According to the article, only about 0.2% of US defense spending is devoted to practical non-proliferation, an amount that has remained virtually unchanged for a decade. The new President's chief science advisor, John Holdren, has worked on these issues, having alerted the non-proliferation community to them back in 2002.
What needs very close attention is highly enriched uranium (HEU), not plutonium. Building a plutonium implosion weapon involves many intricate steps and would likely be beyond the reach of a terrorist outfit. Plutonium is a hideous element that is extremely difficult to work with, and the explosives arrangement around it needs to be machined to the finest tolerances in order to work as expected. By contrast, simple firing mechanisms can be used to detonate a uranium bomb (although I don't share the article's predilection for calling it "child's play"). One of the topics of discussion between Pakistani scientists and Osama Bin Laden in August 2001 apparently involved such firing mechanisms. As the article correctly notes, even a uranium weapon fizzle that delivers 1-5 kT in a place like Manhattan would be devastating.
Given this scenario, it is more than disconcerting that some 272 HEU reactors in 56 countries remain unsecured. Feedstock balances for many of these reactors are not meticulously accounted for. Some uranium can even be scraped from the insides of centrifuges or gaseous diffusion tubes and declared as wasted or not produced. Quiet and gradual extraction of tiny amounts could lead to the accumulation of tens of kilograms, a quantity sufficient for a crude explosive device.
Clearly, the focus of the new administration should be on securing such reactors in hot spots: in Pakistan, Iran and the former Soviet Union. Fortunately, one of the foremost policy actions that Barack Obama was involved in as a Senator was non-proliferation; he worked with Senator Richard Lugar to continue securing nuclear material from the former Soviet Union. Non-proliferation has always been one of his special concerns. Let's hope it stays that way.
Kinase substrates detected by dansylation
Detecting kinase substrates by labeling them is an important technique, as well as a future goal, for profiling novel kinases with important biological activity.
In this method, dansyl-ATP has been used as a co-substrate for kinases. The dansyl group is transferred to the substrate peptides and detected by MALDI-TOF. The authors further extend the method to FRET detection of dansyl-labeled substrates that have fluorophores attached to them.
The general idea seems to be to use gamma-phosphate-modified ATP analogues, which are accommodated in many kinase active sites. Dansylation itself is an old and well-known method for peptide sequencing.
Green, K. D., Pflum, M. K. H. (2009). "Exploring Kinase Cosubstrate Promiscuity: Monitoring Kinase Activity through Dansylation." ChemBioChem, 10(2), 234-237. DOI: 10.1002/cbic.200800393
Targeting early AD oligomers with a simple dipeptide
A couple of days ago, an American man in Switzerland became the first human being to have his assisted suicide filmed and broadcast on television. Craig Ewert was a 59 year old university professor who had been diagnosed with motor neuron disease in 2006. He had been given 3-5 years to live, but the disease took its terrible toll in just a few months. By the end of 2006 Ewert decided he had had enough. He contacted Dignitas, a Swiss organization started by a lawyer that has helped literally hundreds of people end their lives peacefully. On a quiet, sunny morning, Ewert sipped a lethal cocktail of barbiturates entirely of his own volition, and then bit on a switch that stopped his ventilator, again of his own volition. The film crew was allowed to film the entire procedure except for the moments after he passed away, when his wife wanted a few minutes of silence to herself.
Assisted suicide is an immensely controversial topic laden with moral issues. I have thought about it and discussed it with my friends many times, and I usually find myself ending up in favour of it at least in some cases where death is inevitable and the quality of life is going to assuredly become worse. Motor neuron disease is one such ailment. Alzheimer's disease is another. I personally don't think there's any other disease as cruel as AD, with loved ones trying to grasp at a person's identity as it slips away all too slowly and painfully. To me AD seems to be one of those unambiguous cases where a person who is still coherent should be able to exercise his right to die a dignified death. As horrible as it is to watch a loved one decide to willingly end his or her life, it is usually infinitely worse to watch them fade away into a wilderness of silence.
But the insidious pain of AD also makes the search for AD therapies a particularly compelling and desperate one. Some 15 million people are afflicted with AD, and in 20 years that burden is expected to double. Part of the trouble in targeting AD is a still incomplete understanding of its molecular basis. While the initial amyloid hypothesis, which posits the formation of insoluble amyloid protein fibrils, is still very much relevant, much less is known about its exact relation to the disease, whether as cause or symptom. The most important recent finding with respect to this hypothesis is that it is not the insoluble aggregates but soluble oligomers that are the toxic species.
A recent overview in Nature (The Plaque Plan, Nature, November 13 2008) talked about the disappointments and ambiguities facing researchers in AD. Amyloid burden does not always correlate with cognitive impairment. In addition, two big clinical trials designed to target the formation of amyloid have both ended in failure and confusion. It's back to the drawing board for many embattled scientists. A major problem inherent in testing AD therapies is the late stage at which they inevitably have to be initiated in the absence of an early test for diagnosis. Thus disease progression is already advanced and perhaps that is the reason the therapies don't work as planned. The future should be as much focused on early brain and cerebrospinal fluid imaging as on new therapies. And of course, we are still in the rudimentary stages of achieving that supreme goal of understanding how memories are formed and stored.
At any rate, the midnight oil keeps burning and the search goes on, and Ehud Gazit from Tel Aviv University has published a paper in Angewandte Chemie that features a simple dipeptide for targeting and disassembling early soluble amyloid assemblies. The dipeptide is D-Trp-Aib, consisting of D-tryptophan and the unnatural amino acid aminoisobutyric acid. Aib is rather unusual and has been extensively studied because it seems to lend pronounced alpha helicity to peptides. The running hypothesis behind its inclusion in the inhibitor is that it is a ß-breaker, an amino acid that helps break up the ß-sheets that are a signature of amyloid. The work also focuses on the central region of the amyloid Aß(1-42) peptide that forms its aromatic core. The sequence here is KLVFFAE, and the central Phe residues are thought to strongly influence the aggregation of the peptide. Hence the use of D-Trp, which would apparently help interfere with the aromatic assembly; in addition, the D-amino acid would escape degradation by ubiquitous proteases.
Being a simple uncapped dipeptide, the molecule's blood-brain barrier permeability is admittedly lousy (it has a depressing logP value of -0.8), though strategies could probably be worked out to enhance delivery by chemical modification. However, the other tests the authors ran attest to its efficacy as an amyloid-interfering agent. The compound inhibits amyloid oligomerization, as shown on an SDS-PAGE gel of oligomers stabilized with SDS. Interestingly, its inhibitory ability decreases at intermediate concentrations and then goes up again. The compound also dissociated amyloid fibrils, as shown in fluorescence assays with thioflavin-T. Lastly and most importantly, it seems to improve long-term potentiation (LTP) in mice, which may translate into better cognitive retention. The molecule also showed good bioavailability and low toxicity.
In addition, the researchers also did an NMR study of the interaction of D-Trp-Aib with the core of amyloid Aβ(1-42). Using TOCSY and other spectra they detected differences in the chemical shifts of the alpha protons of some key amino acids, and using NOE information they have proposed a family of solution structures representing the bound conformation of the dipeptide with the sequence. I am more skeptical about this result. Small peptides rapidly interconvert in solution and give averaged NMR signals, and the binding of the dipeptide to the sequence is probably an on-off event in which it preferentially stabilizes a particular conformation. A more detailed structural study combined with kinetics experiments would shed light on the key binding events.
How far such therapies would take us in tackling AD in human beings remains to be seen; in any case, maybe the trials should begin soon.
Other AD-related posts: water in amyloid, amyloid dimers as possible culprits, "seminal" truths about amyloid, and amyloid as a possible window into historical pathogen wars
References:
Anat Frydman-Marom, Meirav Rechter, Irit Shefler, Yaron Bram, Deborah E. Shalev, Ehud Gazit (2008). Cognitive-Performance Recovery of Alzheimer's Disease Model Mice by Modulation of Early Soluble Amyloidal Assemblies. Angewandte Chemie International Edition. DOI: 10.1002/anie.200802123
The Black Swans of Chemistry Models
And Nassim Nicholas Taleb
Nassim Nicholas Taleb's two books, Fooled by Randomness and The Black Swan, are undoubtedly two of the most provocative and interesting books I have come across. While I am still ploughing through them, the second book in particular, The Black Swan, has made waves and is apparently now cited as required reading on Wall Street. The books are highly eclectic and traverse a remarkably diverse landscape that includes psychology, finance, evolution, mathematics, philosophy, history, sociology and economics, among many other disciplines. It would be impossible to review them in a limited space. But if I wanted to capture their essence, it would be by saying that Taleb alerts readers to the patterns that human beings see in the randomness inherent in the world, and to the models, both mental and practical, that they build to account for this randomness.
Since his book came out, Taleb has become a minor celebrity and has been interviewed on Charlie Rose and Stephen Colbert. His books sell in large numbers in the far corners of the world. The reason Taleb has suddenly become such a big deal is in part that he at least philosophically seems to have predicted the financial crisis of 2008, which occurred two years after The Black Swan came out. One of the firms he advised turned a profit of more than 100 million dollars in 2008 when others were close to losing the clothes on their backs. Taleb has now emerged as one of the most profound soothsayers and philosophers of our times, a "scholar of randomness", although his message is actually more modest: models are not designed to account for rare, Black Swan events, which may have monumental impact. The name refers to the assured belief people once held that all swans are white. When the continent of Australia was discovered and black swans were observed in flocks (a fact which my dad, who is currently in Australia, corroborated), there was a paradigm shift. Similarly a model, any model, that is built only on White Swan events will fail to foresee Black Swans.
Unfortunately as Taleb explains, it's the Black Swans that dictate the direction that the world proceeds in. It's the rare event that is the watershed, the event that changes everything. And it's exactly the rare event that models don't encapsulate. And this fact spells their doom.
To augment his theory, Taleb cites many Black Swan events from history and politics. For example, if you lived in 1913 you would hardly have foreseen the gargantuan event of 1914 that would forever change the world; if you lived in 1988 you would scarcely have comprehended the epoch-making events of 1989. One of my favourite parts of the book concerns the Turkey Analogy. Imagine you are a turkey who is constantly fed by the butcher, 364 days a year. You are happy, you know the butcher loves you, your economics and accounts departments are happy, and they start to think this is the way of the world for you. Now comes the 365th day. You die. Just when your expectations reach their most optimistic, your destiny reaches its lowest point. Right before day 365, on day 364, you were 100% certain that you had a lifetime of bountiful happiness ahead of you. Day 365 was exactly contrary to the expectations of your finance department. Day 365 was the Black Swan which you did not anticipate. And yet it was that single Black Swan day that proved fateful for you, and not the earlier 364 days of well-fed bliss. According to Taleb, this is what most of us, and especially the derivatives wizards on Wall Street, are: happy, deluded turkeys.
In any case, one of the most important discussions in Taleb's books concerns the fallacy of model building. He claims that the models Wall Street used, the models that raked in billions and Nobel Prizes, were fundamentally flawed, in part because they were not built to accommodate Black Swans. That made me think about models in chemistry and how they relate to other models. In Taleb's mind, the derivatives and other frameworks that the genius quants used were like a plane whose workings they did not understand. When you build a plane, you should always keep the possibility of a storm, a rare event, in mind. Aerospace engineers do this. The quants apparently did not.
But let's face one big truth about models: most of them are in fact not designed to represent "reality". Models don't care as much about causation as they do about accurate correlation and prediction. While this may sound like shooting ourselves in the foot, it often saves us a lot of time, not to mention philosophizing. We use models not because they are "real" but because they work. I would not care if a model I had for predicting enzyme rates involved little gnomes rotating amino acid torsions in the active site and passing water molecules to each other; if the model could predict the catalysis rate for orotidine decarboxylase, that's all I care about. A model for representing water molecules may put unrealistic charges on the O and H atoms, but all I care about is whether it reproduces the dipole moment, density and heat capacity of bulk water. Thus, models are not necessarily a window into reality. They are very often a window in spite of reality.
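To make this concrete, here is a tiny sketch, using TIP3P-like point charges and geometry purely as an illustration (the numbers are assumptions of mine, not anything from a paper discussed here), showing how a crude three-point-charge picture of water nevertheless gives a perfectly sensible dipole moment.

```python
import numpy as np

# TIP3P-style rigid water: point charges placed on an idealized geometry.
# The charges are "unrealistic" in the sense that no quantum calculation would
# assign exactly these values, yet the model produces a reasonable dipole.
q_O, q_H = -0.834, 0.417          # partial charges in units of e (assumed, TIP3P-like)
r_OH, theta = 0.9572, 104.52      # O-H bond length (Angstrom) and H-O-H angle (degrees)

half = np.radians(theta / 2.0)
O  = np.array([0.0, 0.0, 0.0])
H1 = r_OH * np.array([ np.sin(half), np.cos(half), 0.0])
H2 = r_OH * np.array([-np.sin(half), np.cos(half), 0.0])

# Dipole moment = sum of charge * position; convert e*Angstrom to Debye
mu = q_O * O + q_H * H1 + q_H * H2
print("dipole =", round(np.linalg.norm(mu) * 4.803, 2), "D")   # ~2.35 D
```

The point charges are nobody's idea of the "real" electron distribution, yet the number that comes out (about 2.35 D, versus roughly 1.85 D for an isolated gas-phase molecule) is exactly the kind of enhanced condensed-phase dipole a useful water model should have.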
Also, models can always fit data if arbitrary changes are made to their parameters and enough parameters are used. As Retread quoted Von Neumann in a past comments thread, "Give me five parameters and I can fit an elephant to a curve. Give me six and I can make him dance". This is overfitting. In overfitting, models can do a stellar job of accounting for known data, but miserably fail to predict new data, which is after all what they should be able to do. Models built to predict structure-activity relationships (SAR) for biologically active molecules are a notorious example: in QSAR, one can almost always get a good correlation if enough parameters are used. Overfitting can be addictive and rampant. The fallacy of equating correlation with causation has been frequently asserted (how about the one in which the number of breeding storks correlates with the number of new births?), and there are few places where it manifests itself in all its glory more than in QSAR. While we know these pitfalls, in practice we are much less careful, since we want practical, applicable results and couldn't care less whether the model represents conditions on Mars.
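A toy illustration, not a real QSAR model (the "descriptors" here are just powers of a single variable), shows the essence of the problem: throwing in more parameters always improves the fit to the data you have, while quietly wrecking predictions on data you don't.

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

# The underlying "truth" is a simple line; the measurements carry noise.
def make_data(n):
    x = np.linspace(0, 1, n)
    return x, 2.0 * x + 1.0 + rng.normal(0, 0.2, n)

x_train, y_train = make_data(12)
x_test,  y_test  = make_data(50)

for degree in (1, 9):                       # 2 parameters vs 10 parameters
    coeffs = np.polyfit(x_train, y_train, degree)
    print(f"degree {degree}: "
          f"train RMSE = {rmse(y_train, np.polyval(coeffs, x_train)):.3f}, "
          f"test RMSE = {rmse(y_test, np.polyval(coeffs, x_test)):.3f}")
```

The ten-parameter fit hugs the training points almost perfectly and then falls apart on the new ones, which is exactly the QSAR trap described above.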
Lastly and most obviously, a model is only as good as the data that goes into it. The uncertainties in results obtained from the model cannot honestly be smaller than the errors in the experimental data. This is not something that has been lost on model builders. Molecular mechanics models, for example, frequently include charges derived from high-level first-principles quantum chemistry calculations. And yet even quantum chemistry calculations are hamstrung by limits on computational efficiency and are based on assumptions. One of the important lessons my advisor taught me was to always ask what assumptions went into a particular study. If the assumptions are suspect, then no matter how elegant and meticulous the study, its results are bound to be hazy.
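A minimal sketch of this point, with purely synthetic numbers: even a model with exactly the right functional form cannot predict new measurements more accurately than the noise in the data it was fit to.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.3                                    # "experimental" error in the training data

x = rng.uniform(0, 10, 200)
y = 0.5 * x + 2.0 + rng.normal(0, sigma, x.size)    # noisy measurements of a known line

slope, intercept = np.polyfit(x, y, 1)         # the model has the correct functional form

x_new = rng.uniform(0, 10, 1000)
y_new = 0.5 * x_new + 2.0 + rng.normal(0, sigma, x_new.size)
pred_rmse = np.sqrt(np.mean((slope * x_new + intercept - y_new) ** 2))

print(f"experimental noise = {sigma}, prediction RMSE = {pred_rmse:.3f}")  # ~0.3, never below sigma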
The result of all this ad hoc, utility-oriented model building is obvious: we often simply fail to include relevant physical phenomena when building a model. Now consider what happens when we encounter an outlier, a case where that neglected phenomenon dominates all others in influencing, say, the activity of a molecule or the stereoselectivity of a reaction. That's when we get a Black Swan, a rare event which the model, predictably, cannot predict, because the factors responsible for that event were never included in building it. Retrospectively a Black Swan should not be surprising. Prospectively we don't care much about it, because we are certainly not going to discard the model for the sake of one outlier.
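Here is the chemical Black Swan in miniature, using a completely made-up dataset in which a "hidden factor" stands in for whatever physical phenomenon the model builder left out: the model looks superb until the one compound where that factor dominates shows up.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two factors drive "activity"; the second is absent from every training compound,
# so a model built only on the first descriptor looks excellent.
n = 50
d1 = rng.normal(0, 1, n)             # a routine descriptor (e.g. lipophilicity)
hidden = np.zeros(n)                 # a rare effect (e.g. aggregation), never seen in training
activity = 3.0 * d1 + 10.0 * hidden + rng.normal(0, 0.1, n)

slope, intercept = np.polyfit(d1, activity, 1)
train_rmse = np.sqrt(np.mean((slope * d1 + intercept - activity) ** 2))
print("training RMSE:", round(train_rmse, 2))

# The Black Swan: a new compound where the hidden factor dominates.
d1_new, hidden_new = 0.5, 1.0
true_new = 3.0 * d1_new + 10.0 * hidden_new
pred_new = slope * d1_new + intercept
print("error on the outlier:", round(abs(pred_new - true_new), 2))   # off by ~10 units
```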
But choosing to retain the model in spite of it not being able to predict the rare event is not always an easy decision. What if the outlier is going to cost your employer millions? This is usually not very important in academic chemistry, but it almost always is in financial markets. In chemistry we have the luxury of simply retreating to our labs and computers and initiating more research that would investigate the basic factors that went into the models. One can argue about "short, strong hydrogen bonds" until the cows come home, and he or she (probably) won't get booted out. But in finance the rare outlier can, and does, mean retreating into a trailer without any savings.
The bottom line is, all of us are playing a game when we use models, in chemistry, finance or any other discipline. As in other games, we are fine as long as we win. One of Taleb's messages is that we should at least be able to assess the impact of losing, something which he asserts the quants have significantly underestimated. If the impact is a complete game changer, then we should know when to get out of the game. We tend to forget that the models that we have don't represent reality. We use them because they work, and it's the reality of utility that produces the illusion of reality. Slightly modifying a quote by the great Pablo Picasso, models then are the lies that help us to conceal the truth.
Note: The short Charlie Rose interview with Taleb is worth watching.
Heavy Me
An interesting NYT article about early black holes that may have been detected by their radio signature has this to say:
"Dust grows over time as stars manufacture heavy elements called metals, like carbon, silicon and oxygen, that make up dust and then spit them out into space."

If carbon and oxygen are heavy metals, then I am heavy indeed.
The Ghost of Alfred Kinsey?
I have seen a few outrageous papers during my time; I have to admit this ranks close to the top...
"Ovulatory cycle effects on tip earnings by lap dancers: economic evidence for human estrus?"
Read it here. I found out later that, not surprisingly, this was a 2008 Ig Nobel prize winner.
One thing is for sure: everything in this world lends itself to scientific investigation, and especially the raucous adventures of promiscuous men and women. That is the beauty of science.
"Ovulatory cycle effects on tip earnings by lap dancers: economic evidence for human estrus?"
Read it here. I found out later that not surprisingly, this was a 2008 Ig Nobel prize winner.
One thing is for sure: everything in this world lends itself to scientific investigation, and especially the raucous adventures of promiscuous men and women. That is the beauty of science.
Gupta for Surgeon General?
I stopped watching CNN when they called on Dr. Phil to proffer his deep insights on student psychology after the Virginia Tech shootings. Since then the quality of the channel has just kept on sliding in my opinion. Where Fox News brings you the best of cherry-picking and biased right-wing propaganda under the guise of being fair and balanced, CNN smothers you with either sensationalist news items, unqualified "experts" (Dr. Phil, Deepak Chopra and a horde of others), or mostly irrelevant entertainment covering the likes of Anna Nicole Smith. The Onion might be the only decent news source to follow now.
In spite of this I respect some of CNN's correspondents, especially Anderson Cooper who I think tries his best to do an objective and fair job. Another correspondent who I respect is Sanjay Gupta, and that's not just because he is a professor at my school. I have always been impressed with his sheer stamina and wide knowledge of issues, his capacity to traverse the globe and country and report on diverse stories, and his (generally) sincere and unbiased efforts to report accurately and present all the relevant sides. While he did have his bad moments (like the one with Michael Moore), Gupta usually does a good job. Most of his shortcomings I attribute to CNN, whose lazy buffoons like their correspondents to focus on sensationalist or unimportant news items.
So I personally feel satisfied upon hearing that Gupta has been tapped for US Surgeon General. The Surgeon General may not be the Secretary of State, but he has the power to do a significant amount of good...and bad. Remember this lady? (although I think her dismissal was completely unjustified). In an age when drug-resistant infectious diseases, tainted products from abroad, and controversies about AIDS, abortion and stem cell research riddle the front pages, I would think that the Surgeon General's post is one of the more important ones in the country.
That does not mean I am in complete agreement with Gupta's appointment; for instance I don't know how qualified he would be to make decisions at the highest level or fend off bureaucracy. But if not for any other reason, I feel gratified that the administration would at least pick an intelligent, young, knowledgeable and driven person who would ask the right questions and draw on his extensive background. Gupta knows about the important issues and because of his reporting knows the right sources to plumb regarding them. At least in principle he should be able to make some informed decisions.
Boulevard of Broken Dreams
The brilliant and tragic history of nuclear fusion
Sun in a Bottle: The Strange History of Fusion and the Science of Wishful Thinking
Charles Seife
Viking Adult, October 2008
Among all of humanity’s great quests to wrest control of nature and its own destiny, few have been as grand in scale and optimism as the quest for nuclear fusion. The fascinating history of nuclear fusion shows man’s relentless efforts to first understand and then gain power over the source of energy that makes the stars shine. This history has also been dotted with some of the most brilliant, colorful and tragic figures in science. Most importantly, fusion also demonstrates the dangers and pitfalls inherent in trying to seize nature’s greatest secrets from her.
In this engaging and informative history, Charles Seife tells us the story of trying to put the sun in a bottle, of the singular personalities who populate this history, of the monumental mistakes made in understanding and harnessing this awesome energy source, and of the wishful thinking that has pervaded the dream ever since its conception. Seife, who has bachelor’s and master’s degrees in mathematics from Princeton and is now a journalism professor at NYU, does a great job of clearly explaining the science behind fusion, and sprinkles his narrative with wit and gripping human drama.
These days fusion is mostly associated with hydrogen bombs that can obliterate entire cities and populations. And yet its story begins with the quest to understand one of the oldest and most profound questions that man has pondered: what makes the sun shine? Quite early on it was recognized that chemical reactions couldn’t sustain the tremendous power of the sun for so long. After many decades of effort, it was the great physicist Hans Bethe who finally cracked the secret of the stars’ luminous glow. Bethe worked out a cycle of reactions catalysed by carbon that achieves the transformation of four hydrogen nuclei into a helium nucleus. This mechanism, along with the proton-proton chain that dominates in lighter stars like the sun, was soon shown to underlie the production of energy in main-sequence stars.
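As a textbook aside of my own, not a detail from the book: the carbon acts only as a catalyst, and the net bookkeeping of the cycle is

$$ 4\,{}^{1}\mathrm{H} \;\longrightarrow\; {}^{4}\mathrm{He} + 2\,e^{+} + 2\,\nu_{e} + \text{roughly } 26.7\ \mathrm{MeV}, $$

a small part of which escapes with the neutrinos.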
It was with the entry of the United States into the Second World War, however, that a more sinister use for nuclear fusion was envisioned by the volatile, brilliant Hungarian physicist Edward Teller, a dark character whose shadow looms large over the history of fusion and nuclear weapons. Teller proposed setting off a then still conceptual atomic bomb to generate temperatures of tens of millions of degrees, like those at the center of the sun, which would ignite and hopefully propagate a fusion reaction in deuterium and tritium, isotopes of hydrogen that are easier to fuse than hydrogen itself. Achieving fusion is an enormously difficult endeavor; one has to overcome the intense repulsive barrier that keeps nuclei from approaching one another, and only temperatures of tens of millions of degrees can get the nuclei hot enough to fuse. And yet, as Seife explains, there is a fundamental paradox here: the very temperatures that can overcome the repulsive barrier between nuclei also tend to blow the hot fuel apart. It seems that in achieving nuclear fusion, we are constantly working against ourselves.
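A back-of-the-envelope estimate of my own, not Seife’s, shows just how lopsided the odds are. The Coulomb barrier between two hydrogen nuclei a few femtometers apart is roughly

$$ V \sim \frac{e^{2}}{4\pi\varepsilon_{0}\,r} \approx \frac{1.44\ \mathrm{MeV\cdot fm}}{3\ \mathrm{fm}} \approx 0.5\ \mathrm{MeV}, $$

while even at a hundred million kelvin the typical thermal energy is only $k_{B}T \approx 9\ \mathrm{keV}$; fusion happens at all only because the fastest nuclei in the Maxwell tail manage to tunnel through the barrier.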
The history of the US and Soviet thermonuclear weapons programs has been well documented in other sources (I have a summary in my last post). Seife succinctly enumerates this history and narrates the development of the genocidal megaton-yield hydrogen bombs that are now part of almost every nuclear arsenal.
It is in life, however, and not in death that fusion promises mankind eternal glory. Efforts to attain this glory bear the stamp of the quintessential Faustian bargain for knowledge, with men gambling their careers and reputations, not to mention billions of government dollars, in trying to secure their place in history and free mankind of its energy worries.
These efforts, while they taught us a lot about the workings of nuclei and electrons, have been riddled with tall claims and monumental failures. Seife recounts one program after another starting in the early 1950s that promised working fusion reactors in about twenty years. In Argentina and Britain, in Russia and the United States, claims about fusion regularly appeared and were hungrily lapped up by the popular press until a few months later, when the premature optimism came crashing down in the light of further investigations. In the first UN conference organized to discuss peaceful uses for atomic energy, Indian physicist Homi Bhabha talked about fusion becoming the practical solution to all our energy needs in three decades. And yet, effort after effort exposed fundamental problems in the system, hideously recalcitrant barriers that nature seemed to have erected to thwart us in our quest. The barriers still seem insurmountable.
On one hand, grandiose schemes using hydrogen bombs to excavate harbors, carve out canals, analyze moon dust and solve almost every conceivable problem were imagined by Edward Teller and his followers. None of them worked, and all of them would have produced dangerous radioactive fallout. On the other hand, scientists early on recognized a basic strategy for taming fusion: keeping the fusing deuterium or tritium nuclei, in an extremely hot plasma of electrons and nuclei, confined within a magnetic field. The field of plasma physics emerged, and this became the famous magnetic confinement approach to harnessing fusion, developed and tested throughout the 50s and 60s. Some schemes looked as if they were working; later it was found that not only were they producing less energy than what went in, but sometimes fusion was not even taking place and the neutrons that are a signature of the process were coming from elsewhere. A net gain of energy is called breakeven, and it is a fundamental requirement for any energy-generating source: you have got to get more energy out than you put in. Ever since then, fusion has been achieved on small scales, but breakeven has never been attained.
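Stated a little more quantitatively (my gloss, not the book’s): breakeven simply means $Q \equiv P_{\mathrm{fusion}} / P_{\mathrm{input}} \geq 1$, and for deuterium-tritium fuel a self-sustaining burn requires the density, temperature and energy confinement time to satisfy a Lawson-type condition, often quoted as roughly

$$ n\,T\,\tau_{E} \;\gtrsim\; 3 \times 10^{21}\ \mathrm{keV\cdot s\cdot m^{-3}}, $$

a target that has so far remained out of reach.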
Apart from magnetically confined plasma fusion, Seife also describes the second major approach, laser (or inertial confinement) fusion, which gradually arose as a competitor in the 1970s. In this process, intense lasers shine on a small pellet of a deuterium or tritium compound from many directions; in the center of the pellet, where unearthly temperatures and pressures are achieved, fusion takes place. This approach has been pursued in several grand schemes. One, called Shiva, used 20 laser beams converging from 20 different directions to squeeze a fuel pellet; its successor, Nova, used even more powerful lasers. Much of this work was closely guarded: a computer program called LASNEX, which aids the design of these experiments by simulating different fusion scenarios based on hundreds of variables and conditions, is highly classified. Billions of dollars were spent on these developments, and yet, as practical energy-producing devices, both Shiva and Nova now look like dead ends.
Why is this the case? Why has almost every attempt to tame fusion failed? The answer has to do simply with the magnitude of the problem, and with how little we still understand nature. Both laser fusion and magnetic confinement fusion suffer from fundamental and extremely complex problems that were discovered only when the experiments were underway. One problem has already been stated: the difficulty of confining such a hot plasma of particles. Another problem has to do with instability. As a hot plasma of deuterium and tritium circulates in an intense magnetic and electric field, inescapable local defects and asymmetries in the fields get quickly amplified and cause ‘kinks’ in the flow. The kinks grow bigger, like cracks in weak concrete, and finally bring the entire structure down, quickly dissipating the plasma and halting fusion. While impressive progress has been made in controlling the fine parameters of the magnetic and electric fields, the problem persists because of its basic nature. Another problem was that the electrons were getting heated much faster than the nuclei, so that the nuclei, the real target, would stay relatively cool. A third serious problem was the onset of Rayleigh-Taylor instabilities, little whirlpools and tongues that develop when a less dense material presses against a more dense material. Interestingly, it’s Rayleigh-Taylor instabilities and not gravity that explain why water escapes from an overturned glass. Rayleigh-Taylor instabilities develop in laser fusion when less dense photons of light try to compress a denser pellet of deuterium, and they quickly destroy the fine balance of the fusion process. The process is exquisitely sensitive to the finest of defects, like nanoscopic dimples on the surface of the pellet, and solving this problem requires the best of physics and engineering.
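For the record, and as a standard textbook result rather than anything specific to Seife’s account, a small Rayleigh-Taylor ripple of wavenumber $k$ grows exponentially at the classical rate

$$ \gamma = \sqrt{A\,g\,k}, \qquad A = \frac{\rho_{\mathrm{heavy}} - \rho_{\mathrm{light}}}{\rho_{\mathrm{heavy}} + \rho_{\mathrm{light}}}, $$

whenever a light fluid pushes against a heavy one; since shorter wavelengths grow fastest, even nanoscopic dimples on the pellet surface get amplified ruinously during the implosion.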
All these problems still plague fusion, and billions of dollars, thousands of brilliant scientists and hundreds of innovative ideas later, fusion still remains a dream. It has been achieved many times, and neutrons have been observed, but breakeven is still a land that’s far, far away.
But scientists don’t give up. And while legitimate scientific efforts on the two ‘hot fusion’ approaches continue, there have been cases where some scientists believed they were observing fusion a tad too easily under circumstances that were too good to believe. These events saw their careers being destroyed and the promise of fusion again mangled. The events refer to the infamous cases of ‘cold fusion’ which constitute the last and most important part of Seife’s book. Seife weaves a riveting tale around these events, partly since he was a participant in one of the debacles.
The story of Pons and Fleischmann’s 1989 cold fusion disaster at the University of Utah is well known. The two took the unusual step of announcing their results in a press conference before getting them peer-reviewed and published, and their experiments were later shown to be essentially irreproducible. Seife recounts in detail the developments that gradually cast a black cloud over this claim. One of the characters in this story is Steve Jones, a physicist who has recently gained notoriety as a 9/11 conspiracy theorist.
But I was particularly interested in the next story since I had actually met and talked to one of the characters in the cold fusion catastrophe many years ago. Rusi Taleyarkhan, an Indian scientist, happened to come to our University in 2002 to give a talk. Just a few months before, he and his colleagues had published a paper in the prestigious journal Science, which if true would herald one of the greatest breakthroughs in scientific history. Taleyarkhan and his group claimed to have observed fusion in the most disarmingly simple experiment. They had taken a solution of deuterated acetone (acetone with its hydrogen atoms replaced by deuterium) and had bombarded it with neutrons that caused giant bubbles to form in the solution. They had then exposed the solution to intense acoustic waves, thus causing the bubbles to violently collapse. The phenomenon was well known and is called sonoluminescence, a name alluding to the light that is often given off because of these violent collapses. But what was Taleyarkhan’s claim? That the immense pressures and temperatures generated at the center of the bubbles caused nuclear fusion of the deuterium in the acetone, essentially in a tabletop apparatus at room temperature. Why acetone? This was the question I asked Taleyarkhan when I met him in 2002. He did not know, and he sounded sincere about it.
But this was before the storm was unleashed and the controversy erupted. In this case, unlike the previous one, the work had been peer reviewed by one of the most famous and stringent journals in the world. But curiously, further investigation by Seife and others revealed that the paper had been published by Science’s editor in spite of objections from the reviewers. This was highly unusual, to say the least. What was more disturbing was that concomitant experiments done at Oak Ridge National Laboratory, Taleyarkhan’s home turf at the time, gave negative results. Once the results were announced, researchers across the world, including some at prestigious institutions, scurried to repeat the experiments using more sophisticated detectors and apparatus. Fusion produces signature neutrons of a specific energy (about 2.45 MeV for deuterium-deuterium fusion), and the more sophisticated apparatus failed to detect them. In the earlier cold fusion debacle there had been doubt about the energy peaks of the neutrons, and similar doubts started surfacing in this case. Questions were also raised about the possibly shoddy nature of the experiments, including the absence of control experiments. Taleyarkhan later moved to Purdue, and Purdue initially defended the experiments. But the story remained murky. Some ‘independently’ published later articles turned out not to be so independent after all. Gradually, just as it had before, the great edifice turned into a crumbling structure and came down. As a reporter for Science at the time, Seife personally covered these events. Purdue reinvestigated the matter, and as of 2008 Taleyarkhan is forbidden from serving as a regular Ph.D. student advisor at Purdue. Even though he was not convicted of deliberate fraud, his reputation has come crashing down.
This then is the history of fusion: episode after episode of wishful thinking aimed at solving the biggest problem in the history of mankind. A fusion reactor may someday be possible, but nothing until now suggests that it will be. It’s hard to trust a technology that has consistently failed to deliver on its promise time after time; after all this, even the mention of ‘cheap, abundant and universal energy’ should raise our eyebrows. In the afterword, Seife discusses the rather harsh nature of the scientific process, where skepticism is everyone’s best friend and results are intensely vetted, something that is nonetheless necessary to keep science and scientists in line. Fusion seems to be one of those endeavors where tall claims have been proclaimed more consistently than perhaps in any other branch of science, undoubtedly because of the earth-shattering implications of a true practical fusion reactor and the fame it would bring its inventor. Even with such a reactor, our problems may not be over. First of all, fusion is not as clean as it is made out to be; copious amounts of neutrons, gamma rays and other radiation are released in the process. Secondly, even with mass production, fusion reactors may cost no less than tens of millions of dollars. Even as Seife writes, the world’s economies have pooled their resources into ITER, an international thermonuclear project that promises to be the biggest of its kind so far. The United States did not support the project earlier and it had to be scaled back; now the US seems to be contributing again to a more modest version of the vision. As with other matters, the politics of fusion seems to be even more elusive than the science of fusion. Gratifyingly, Seife thinks that our best current bet for solving the energy problem is nuclear fission: it emits no carbon dioxide, provides the biggest bang for the buck, and, most importantly, unlike fusion it is already here. Compared to the will-o’-the-wisps of fusion, the very real strands of fission can solve many of our real problems. Ironically, controlled fusion is still a distant dream while very tangible thermonuclear bombs sit securely in the arsenals of so many nations.
In the end, one factor which Seife should have appreciated more in my opinion is the immense knowledge that has been gained from so many years of fusion research. That is one of the great virtues of science, that even failed endeavors can contribute key insights into the workings of nature and uncover new principles. Fusion might be wishful thinking, a grandiose and tragic scheme to put the sun in a bottle, but science always wins. And if not for anything else, for that we should always be grateful.