It's Clay Time: The Origins of "Silicon-Based" Life

Every thinking man or woman seems to have their own favourite theory of the origins of life. As with clothes fashions and hairstyles, one is often reduced to mere irrational favoritism in the end, devoid of real substantive logic. The reason is straightforward: the origin of life may be the biggest unsolved problem in chemistry and biology, but it deals with events billions of years in the past that we can hardly mimic, let alone observe directly. Modern life with its machinery of DNA, RNA and proteins provides tantalizing clues and yet no answers. Which came first: genes, proteins, or something else?

For many years now, I have placed my own bets on an origins theory about which I first read in mathematician John Casti's sweeping survey of the big problems facing modern science, Paradigms Lost. The theory Casti says is his favourite is mine too, for a sane reason that Casti provides: the venerable (perhaps too venerable) principle in science called Ockham's Razor, which simply says that "entities should not be unnecessarily multiplied", or, "simple is best". When one is confronted with several explanations that lead to the same conclusion, the simplest one is likely to be correct. Well, not really, but if we are going to proceed on hunches anyway, why not choose the simplest one? The premise here is simple: DNA, RNA and proteins are too complicated for us to imagine how they could have arisen on our primordial planet. Better to start with simple, possibly inorganic substances that were abundant on the early earth.

Enter the British chemist Alexander Graham Cairns-Smith (CS from now on), who came up with a "life from clay" theory. CS conjectured that it makes much more sense to think of life evolving from simple inorganic materials, especially crystals, than from organic molecules. His hypothesis was straightforward and rested on two familiar properties of crystals that closely parallel what we think are essential properties of anything we would call "living": reproduction and natural selection. Crystals by their very nature are periodic, extending regularly in planes and "reproducing", if you will, in three dimensions. Crystals are also subject to defects and impurities; indeed, imperfections of one kind or another always creep in during crystallization. The key principle CS recognized was that these impurities or defects, if they confer some benefit like better 'stickiness' or mechanical properties, would be propagated in the way that beneficial mutations are propagated through evolution. Gradually the old, decrepit crystal would be left behind and the impure crystal with its superior properties would take over. Ergo, crystal evolution.

Which crystal would have been abundant enough on the primitive earth to do such a job? Why, ordinary clay of course: silicate minerals in their myriad manifestations. For one thing, they have been extremely abundant on earth for billions of years. For another, they come in an amazing variety of polymorphs and geometries. Silicon is a wonderful element. It can mix and match with an untold number of cations and anions and form exotic 3D structures. It can act as a scaffold for many other substances. One can spend a lifetime studying silicates. Science fiction writers have long fantasized about a silicon-based biosphere, but here silicon has been theorized to be the seed of life in quite another fashion.

According to CS, clay crystals readily harbor organic impurities. With time, these impurities grow along with the crystal. At some point, as noted above, the organic molecules will confer an evolutionary benefit, possibly because of their greater flexibility, branching power, stickiness due to hydrophobic interactions and multiple bonding characteristics. Gradually, like a snake shedding its old skin, the organic molecules will simply grow faster and stronger and leave their old silicate parentage behind. In time, what you will have is an organic crystal. Now think about DNA, RNA, proteins and suchlike forming from such organic entities. At least it's a little easier than before, when one had to conjure up these biochemical wonders from inorganic gunk.

CS now has an article (cited below) expounding upon his "life from clay" theory. The most compelling idea I found in it is that if crystals are to serve as templates for making copies, replication would have to proceed edgewise and not in other directions. CS cites DNA as an example and considers a simplified version of it with the sugar-phosphate backbone and the hydrogen bonds removed. What remains is just a stack of colored plates, with each color corresponding to a base. It's pretty obvious that copying can take place only along the edge, as CS's figure shows:

[Figure from the paper: edgewise copying of a stack of colored plates representing simplified DNA]


CS then gives some examples of silicate minerals that could have such edges acting as templates. Edges could come together because of electrostatic or non-polar interactions. They could be jagged or smooth. Finally, you need not have only one kind of edge; a whole panoply of silicates with their varied edges could compete for sheet formation and consequent duplication. After that, the aforementioned impurities and defects could ensure natural selection and organic crystal formation. Life could then piggyback on the surfaces of these crystals.

CS also ponders whether one could have inorganic enzymes that could speed up the arrival of life. One does not imagine such enzymes having the flexibility of organic ones. On the other hand, as CS notes, inorganic catalysts are already used to great effect in industry. Clay crystals could similarly act as catalysts and speed up all kinds of reactions. By the way, CS is of the "genes first" camp as opposed to the "metabolism first" camp; however, his idea is one of metabolism facilitated by "inorganic genes" that replicate and improve their stock. I personally find the idea of crystal surfaces very alluring; after all, so many reactions are sped up on surfaces- Haber's ammonia synthesis, which partly led to the latest Nobel Prize for Gerhard Ertl, immediately comes to mind. If one is looking for simple chemical entities that could kick-start and speed up reactions, inorganic crystal surfaces certainly seem to be good candidates.

CS's most tantalizing thought is that such clay-based, life-inducing reactions may be happening even today, quietly in the nooks and crannies of nature. Such reactions would be hard to discover, and on the scale of our life spans they would seem trivial and temporary, but it is fascinating to think that the echoes of the origins of life still resonate all around us. Score one for Si.

Cairns-Smith, A. (2008). Chemistry and the Missing Era of Evolution. Chemistry - A European Journal, 14(13), 3830-3839. DOI: 10.1002/chem.200701215

Force field dependence of conformational energies

This paper explores the pitfalls of determining conformational energies for polar organic molecules from molecular mechanics force fields. Using Taxol as a test case, it shows how different force fields can produce downright contradictory energetic rankings of Taxol conformations.

The bottom line is simple: do NOT trust energies from force fields; trust geometries. In the case of energies, force fields usually overemphasize electrostatic interactions because of the lack of an explicit solvent representation. Sometimes even geometries can be warped because electrostatics overwhelms the optimization. The one thing force fields are good at calculating, on the other hand, is sterics.

Running a "complete" conformational search with multiple force fields will usually give you completely different geometries for the global minimum, or at least slightly different ones (depending on the molecule). Thus, trusting the global minimum conformation from any one force field is a big fallacy. Thinking that that global minimum will be the true global minimum in solution is nothing short of blasphemy. And for a bioactive molecule, thinking that the global minimum from a force field search will be the bioactive conformation is just...well, that just means you have been seduced by the dark side of the force field.

Lakdawala, A., Wang, M., Nevins, N., Liotta, D.C., Rusinska-Roszak, D., Lozynski, M., Snyder, J.P. (2001). Calculated conformer energies for organic molecules with multiple polar functionalities are method dependent: Taxol (case study). BMC Chemical Biology, 1(1), 2. DOI: 10.1186/1472-6769-1-2

Magic without Magic: John Archibald Wheeler (1911-2008)

[Photograph of John Archibald Wheeler. Image copyright: NNDB, Soylent Communications (2008)]

When I heard from a friend about John Wheeler's death this morning, I grimaced and actually let out a loud exclamation of pain and sadness. Not only was Wheeler one of the most distinguished physicists of the century, but with his passing, the golden era of physics- the one that gave us relativity, quantum theory and the atomic age- finally recedes into history. The one consolation is that he lived a long and satisfying life, passing away at the ripe age of 96. Just a few weeks ago I asked a cousin of mine who did his PhD at the University of Texas at Austin whether he ever ran into Wheeler there. My cousin, who is himself in his fifties, said that Wheeler arrived just as he was finishing, after retiring from Princeton University.

Wheeler was the last survivor of that heroic age that changed the world, and he worked with some true prima donnas. He was an unusually imaginative physicist who made excursions into exotic realms: particles traveling backwards in time, black holes, time travel. A list of his collaborators and friends includes the scientific superstars of the century- Niels Bohr, Albert Einstein, Enrico Fermi, Edward Teller and Richard Feynman, to name a few. To the interested lay public, he is probably best known as Richard Feynman's PhD advisor at Princeton.

Wheeler is famous for many things: mentor to brilliant students, originator of outrageous ideas, coiner of the phrase "black hole", outstanding teacher and writer. My most enduring memory of him is from John Gribbin's biography of Feynman. Gribbin recounts how Wheeler in his pinstriped suits used to look like a conservative banker, a look that belied one of the most creative scientific minds of his time. The fond incident concerns the playful rogue Feynman being summoned into Wheeler's office for the first time. In order to underscore the value of his time, Wheeler laid out an expensive pocket watch in front of Feynman. Feynman, who had a congenital aversion to perceived or real pomposity, took note of this and during their next meeting laid out a dirt-cheap watch on the table. After a moment of stunned silence, both professor and student burst into loud laughter, laughter that almost always accentuated their discussions of physics and life thereafter. Feynman and Wheeler together developed a novel time-symmetric formulation of electrodynamics in which radiation travels both forward and backward in time, work that fed into Feynman's later reformulation of quantum mechanics. Wheeler also initiated the discussion of the notorious sprinkler problem described by Feynman in Surely You're Joking, Mr. Feynman!

John Wheeler was born in Florida to strong-willed, working-class parents. After obtaining his PhD from Johns Hopkins at the age of 21, he joined Princeton in 1938, where he remained for all of his working life. Princeton in 1938 was a mecca of physics, largely because of the nearby Institute for Advanced Study, which housed luminaries like Einstein, John von Neumann and Kurt Gödel. Wheeler knew Einstein well and later sometimes held seminars with his students in Einstein's home. As was customary for many physicists of that era, Wheeler also studied with Niels Bohr at his famous institute in Copenhagen. In 1939 Bohr and Wheeler made a lasting contribution to physics- the liquid drop model of nuclear fission. According to this model, the nucleus of a heavy atom behaves like a liquid drop, balancing repulsive electrostatic forces against attractive surface tension and the strong force. Shoot an appropriately energetic neutron into an unstable uranium nucleus and it wobbles enough for the repulsive forces to become dominant, causing it to split. The liquid drop model explained the fission that had been discovered only months earlier, and its mathematics was surprisingly simple yet remarkably accurate. Bohr was one of Wheeler's most important mentors; in his autobiography Wheeler describes marathon sessions with Bohr, with the great man often insisting on walking around the department, tossing choice tidbits to Wheeler ambling at his side. Caught up in the then-heated debate about the philosophical implications of quantum theory, Wheeler argued about the nature of reality with both Einstein and Bohr.

When World War 2 began, Wheeler, like many physicists, was recruited into the Manhattan Project. Because of his wide-ranging intellect and versatility, he was made scientific consultant to Du Pont, which was building the plutonium-producing reactors at Hanford in Washington state. There Wheeler tackled and solved an unexpected and very serious problem. As the reactors were transforming uranium-238 into the precious plutonium, the process suddenly shut down; after some time it started up again, and nobody knew what was happening. Wheeler, the resident expert, worked out the strange phenomenon in an all-night session: some of the fission products being produced had a big appetite for neutrons and were "poisoning" the chain reaction. Once these products had decayed to sufficiently low levels, they stopped eating up the neutrons and the reactor started again. This was one of the most valuable pieces of information gained during plutonium production. Ironically, the omission of this information from a second edition of a government history of atomic energy released just after the war alerted the Soviets to its importance. Working on the Manhattan Project was also a poignantly personal experience for Wheeler; the bomb could not save his brother Joe, who was killed in action in Italy in 1944. Wheeler later worked with Edward Teller on the hydrogen bomb, a decision about which he had few qualms because he thought it was necessary at the time to stand up to the Soviets.

After the war Wheeler embarked on a lifelong quest in a completely different field, general relativity, and became a pioneer in it. He took up where Robert Oppenheimer had left off in 1939. Oppenheimer had made a key contribution to twentieth-century physics by first describing what we now know as black holes; strangely and somewhat characteristically, he lost all interest in the field after the war. Wheeler took it up and initiated a bona fide revolution in the application of general relativity to astrophysics. As his most enduring mark, he coined the term "black hole" in the 1960s. Wheeler became the scientific godfather of a host of physicists who went on to explore exotic phenomena- black holes, wormholes, time travel, multiple universes. His most successful student in this regard has been Kip Thorne, whose wonderful book expounds on the golden age of relativity. Another student, Hugh Everett- the tragic genius who invented the many-worlds interpretation of quantum mechanics and a generalized Lagrange multiplier method for optimization problems before descending into paranoia and depression- left behind choice fodder not just for science but for science fiction; parallel universes have been a staple of our collective imagination ever since. In retrospect, Wheeler followed his mentor and did for astrophysics what Bohr had done for quantum theory: he served as friend, philosopher and guide to a brilliant new generation of physicists.

Wheeler was also known as an outstanding teacher. His mentoring of Feynman is well known, and he devoted a great deal of time and care to teaching and writing. With his former students Kip Thorne and Charles Misner, Wheeler produced what is surely the bible of general relativity, Gravitation, a mammoth book running to more than a thousand pages whose only discouraging feature may be its length. The book has served as an advanced introduction to Einstein and beyond for generations of students. Wheeler also co-authored Spacetime Physics, an introduction to special relativity which even I timidly managed to savor a little during my college days. His autobiography, Geons, Black Holes and Quantum Foam: A Life in Physics, is worth reading for its evocation of a unique time in the last century, as well as for its fond anecdotes about great physicists.

But many people will remember Wheeler as a magician. Sitting in his office in his pinstriped suits, Wheeler let his mind roam across the universe, straddling everything from the smallest scales to the largest and exploring far-flung concepts and realms of the unknown. He grappled with the interpretation of quantum mechanics and was an early proponent of the anthropic principle. In his magnificent book Paradigms Lost, John Casti quotes Wheeler analogizing observer-created reality to the parlor game in which one person must guess an object the others have in mind by asking questions- except that in Wheeler's modified version, the object is created during the process of questioning. With his mentor Bohr's enduring principle of complementarity as a guide, Wheeler produced esoteric ideas that questioned the bedrock of reality. He was entirely at home with bizarre yet profound concepts that still tug at the heartstrings of physicist-philosophers. Only Wheeler could have introduced paradoxical yet meaningful phrases like "mass without mass". In celebration of his sixtieth birthday, physicists produced a volume dedicated to him with a title that captured the essence of his thinking: "Magic Without Magic".

John Wheeler was indeed a magician. He made great contributions to physics, served as its guide for half a century, and motivated and taught new generations to wonder at the universe's complexities as much as he did. He was the last torch-bearer of a remarkable age in which mankind's most esoteric and revolutionary investigations of the universe were transformed into forces that changed the world. He will be sorely missed.

Large-scale effects of a nuclear war between India and Pakistan

Back in the days when the Cold War was simmering, one of the more depressing activities scientists and officials engaged in was conjuring up hypothetical scenarios of nuclear war between the US and the Soviet Union and trying to gauge its effects. Such theorizing was often done behind closed doors in enclaves like the RAND Corporation. In 1960, RAND's Herman Kahn wrote an influential and morbid book called On Thermonuclear War. Kahn, a portly, brilliant, Strangelovian character, is said to have been one inspiration for the good doctor in Kubrick's brilliant movie Dr. Strangelove; in fact Kubrick supposedly read Kahn's 600-page book in detail before working on the film. (A recent biography of Kahn sheds light on this fascinating man.)

The book ignited a controversy about nuclear conflict because Kahn's thesis was that a war fought with thermonuclear weapons was winnable, thus possibly upping the ante for the nuclear powers. Kahn used many rather incomplete arguments to make the not entirely unreasonable point that while such a war would be horrific, it would not mean the end of humanity; the survivors would not necessarily envy the dead. But of course Kahn was speculating based on the best scientific data then available, along with his own idiosyncratic biases. One of the biggest effects of a nuclear explosion is to loft debris into the atmosphere, and the climate models of the 1960s were far too primitive to predict the consequences. Nuclear explosions also start wide-ranging fires and, on the rare occasions when the conditions are right, gruesome firestorms; a firestorm is the nearest thing to hell that one can imagine. Fires can account for up to 60% of the damage from a nuclear explosion. While thermal effects constitute about 35% of the total output of a typical nuclear air burst (blast effects constitute about 50%), thermal effects, unlike the others, can perpetuate themselves by starting successive fires. According to some analysts, State Department officials calculating nuclear weapons effects in the 1950s neglected the devastation due to fire, which made their results underestimates. Any realistic simulation of a nuclear explosion has to take fire into account.

The debate about the effects of a global thermonuclear war was galvanized in the 1980s when Carl Sagan and his colleagues proposed the idea of nuclear winter, in which dimming of sunlight because of the debris from nuclear explosions would lower the average temperature at the surface of the earth. Among other effects, this combined with the resulting darkness would devastate crops, thus bringing about long-term starvation and other catastrophes. Since then, scientists have been arguing about nuclear winter.

What has changed between 1980 and now is that climate models, including general circulation models, have vastly improved, and the computational power to run them has grown exponentially. Although we still cannot predict long-term climate, we now have a reasonably good handle on quantifying the various forcings and factors that affect it. Thus, for the last few years it has seemed worthwhile to predict the effect of nuclear war on our climate. Now scientists working at the University of Colorado and NOAA have come up with a rather disconcerting study in the Proceedings of the National Academy of Sciences on the effects of a regional nuclear war on global climate. The scenario is a war in which India and Pakistan each use 50 warheads of about 15 kilotons (roughly the yield of the Hiroshima bomb)- a conservative estimate. A few such studies have been published before, but this one looks at the effects on the ozone layer, the delicate veneer that protects life from UV radiation.

The researchers' main point is that a tremendous mass of soot is lofted tens of kilometers into the atmosphere by the fires following the explosions. The study seems carefully done, taking into account factors that both reinforce and oppose the effects of this soot. The number they cite is about 5 Tg (teragrams; a teragram is 10^12 grams), a huge amount. They estimate that about 20% of the soot is removed locally by rain; what happens to the remaining 4 Tg is the main subject of the investigation. According to the model, this enormous plume of soot is intensely heated by sunlight. It rises through the upper troposphere and snakes up into the lower stratosphere, where the ozone layer is situated, and there the heat it radiates disrupts the delicate balance of chemical reactions that produce and destroy ozone- reactions that have now been well studied for decades. These involve radical species of oxygen, nitrogen and halogens that sap the precious molecule away. The bottom line is that the heat from the soot greatly increases the rates of the reactions that generate these species and eat up ozone at that altitude, thus depleting the layer. The soot lingers, since removal mechanisms are slow at that height. The heat also encourages the formation of water vapour and its subsequent breakup and reaction with ozone, further contributing to the loss. The researchers also include the circulation of water vapour and other gases in the global atmosphere, and how this circulation is affected by the heating. Nitrogen oxides generated by natural and human processes have already been shown to deplete ozone, and the heated soot intensifies those processes as well.

The frightening thing about the study is the magnitude of the predicted ozone loss from these accelerated processes: about 20% globally, 40% at mid latitudes and up to 70% at high latitudes, with the losses persisting for at least five years after the war. These are horrifying numbers. The ozone layer has evolved over hundreds of millions of years, in synergy with life, to wrap the biosphere in a protective blanket and keep it safe. What would the loss of 40% of it entail? The steep decline would allow short-wavelength UV radiation, currently almost completely blocked, to penetrate the biosphere. This deadly radiation would have large-scale devastating effects, including rapid increases in cancer and perhaps irreversible changes in ecosystems, especially aquatic ones. The DNA effects documented by the researchers are appalling- increases in DNA damage of up to 213% relative to normal levels, and plant damage of up to 132%. In addition, the increased UV light would hasten the decomposition of organic material, further disrupting the natural balance of the biosphere. The effects would truly be global. Decomposition of the soot itself is thought to be negligible.

Now I am no atmospheric scientist, but even if some of these estimates turn out to be somewhat exaggerated, it still seems to me that the effects on the ozone layer could be quite serious. If I had to guess, I would say the main uncertainties lie in how much soot is produced, how much of it rises and to what altitude, and how long it stays there. What seems more certain are the effects on the well-studied radical reactions that deplete ozone. Some elementary facts reinforce this in my mind: carbon has a very high sublimation point and can be heated to high temperatures, the energy radiated by a hot body goes as the fourth power of its temperature, and from college chemistry I remember the rule of thumb that, on average, the rate of a reaction roughly doubles for every 10 °C rise in temperature. The rate increases estimated by the authors therefore seem reasonable to me, as a back-of-the-envelope check shows.
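For what it's worth, here is the back-of-the-envelope arithmetic behind those rules of thumb. The activation energy and temperatures below are illustrative assumptions, not numbers taken from the Mills et al. paper.

```python
import math

# Back-of-the-envelope check of the two rules of thumb mentioned above.
R = 8.314  # gas constant, J/(mol K)

def rate_speedup(ea_kj_per_mol, t1_kelvin, t2_kelvin):
    """Arrhenius factor by which a rate constant grows when T rises from t1 to t2."""
    ea = ea_kj_per_mol * 1000.0
    return math.exp(-ea / R * (1.0 / t2_kelvin - 1.0 / t1_kelvin))

# An activation energy of ~50 kJ/mol roughly doubles the rate per 10 K near room temperature:
print(round(rate_speedup(50, 298, 308), 2))    # ~1.9

# The same activation energy with a hypothetical 30 K warming of cold (220 K) stratospheric air:
print(round(rate_speedup(50, 220, 250), 1))    # ~27-fold

# Stefan-Boltzmann: radiated power scales as T^4, so soot heated to, say, 1000 K
# radiates vastly more than the ~220 K air around it:
print(round((1000.0 / 220.0) ** 4))            # ~430-fold
```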

What is most disconcerting about the study is that it involves a relatively "small" nuclear exchange, confined to one part of one continent, whose effects nevertheless reach the entire world. "Globalization" acquires a new and portentous meaning in this context. India and Pakistan can each field 50 weapons of 15-kiloton yield, if not now then in the near future. On top of this global-scale damage to the ozone layer, one shudders to imagine the more than 10 million deaths such a conflict would cause, along with the total devastation of public infrastructure and the food supply. Herman Kahn might have thought that nuclear war is "survivable". Well, maybe not exactly...

Reference and abstract for those who are interested:
Mills, M.J., Toon, O.B., Turco, R.P., Kinnison, D.E., Garcia, R.R. (2008). Massive global ozone loss predicted following regional nuclear conflict. Proceedings of the National Academy of Sciences, 105(14), 5307-5312. DOI: 10.1073/pnas.0710058105
"We use a chemistry-climate model and new estimates of smoke produced by fires in contemporary cities to calculate the impact on stratospheric ozone of a regional nuclear war between developing nuclear states involving 100 Hiroshima-size bombs exploded in cities in the northern subtropics. We find column ozone losses in excess of 20% globally, 25–45% at midlatitudes, and 50–70% at northern high latitudes persisting for 5 years, with substantial losses continuing for 5 additional years. Column ozone amounts remain near or <220 Dobson units at all latitudes even after three years, constituting an extratropical "ozone hole." The resulting increases in UV radiation could impact the biota significantly, including serious consequences for human health. The primary cause for the dramatic and persistent ozone depletion is heating of the stratosphere by smoke, which strongly absorbs solar radiation. The smoke-laden air rises to the upper stratosphere, where removal mechanisms are slow, so that much of the stratosphere is ultimately heated by the localized smoke injections. Higher stratospheric temperatures accelerate catalytic reaction cycles, particularly those of odd-nitrogen, which destroy ozone. In addition, the strong convection created by rising smoke plumes alters the stratospheric circulation, redistributing ozone and the sources of ozone-depleting gases, including N2O and chlorofluorocarbons. The ozone losses predicted here are significantly greater than previous "nuclear winter/UV spring" calculations, which did not adequately represent stratospheric plume rise. Our results point to previously unrecognized mechanisms for stratospheric ozone depletion.

Profile of a fiend


Plutonium: A History of the World's Most Dangerous Element- Jeremy Bernstein
Joseph Henry Press, 2007

The making of the atomic bomb was one of the biggest scientific projects in history. Some of the brightest minds in the world worked against exceedingly demanding deadlines to produce a nuclear weapon in record time. To do this, every kind of problem imaginable in physics, chemistry, metallurgy, ordnance and engineering had to be surmounted. Many of the problems had never been encountered before and challenged the ingenuity and perseverance of even the best and brightest.

To accomplish this feat, human, material and monetary resources were poured in on a scale unsurpassed until then. Entire facilities were constructed at Oak Ridge, Hanford and Los Alamos, some of them bigger than anything built before. The resources required were staggering; at one point the Manhattan Project was using 70% of the silver produced in the United States. Steel production in the entire nation had to be ramped up to fulfill the needs of the secret laboratories, and extra electricity had to be generated on a national scale to power the hungry reactors and electromagnetic separators. The factories at Oak Ridge were giant structures; one of them enclosed a whole mile under one roof. The gargantuan factories and the resulting employment swelled the population of the small town from 3,000 to about 75,000. By the end of the war, hundreds of thousands of people had been employed and an estimated 2 billion 1945 dollars spent on the biggest technical project in history; the entire country had to be mobilized for it. Within just three years, the project was consuming about as many resources as the US automobile industry- an astonishing feat. Only the United States could have done something like that at the time.

Of all the myriad and complex problems involved in the project, two stand out for their formidable difficulty. One was the separation of uranium-235 from its much more abundant cousin, uranium-238. The difference between the masses of the two isotopes is so small that at the beginning nobody believed it could be done. Indeed, the atomic bomb effort in Germany largely stalled because its leaders could not think of any way to do it in a reasonable time. An entire town had to be constructed at Oak Ridge to surmount this problem. Even today it is probably the single hardest problem for anyone wanting to construct an atomic bomb from scratch.

However, the uranium separation problem was at least anticipated from the very beginning. The second problem, by contrast, was completely unexpected. It involved a material from hell that nobody had seen before: highly unstable, difficult to work with, intensely radioactive, and whose discovery was one of the most closely kept secrets of all time. The material would play a decisive role in the project and in the nuclear arms race that followed. Today, its shadow looms large over the world. This material is plutonium.

Now, in a succinct and readable book, the well-known physicist and historian of science Jeremy Bernstein tracks the history of this diabolical fiend. Bernstein has previously written biographies of Oppenheimer and Hans Bethe and a recent book on nuclear weapons. He is an accomplished veteran physicist who has known some of the big names in twentieth-century physics, Oppenheimer and Bethe included. Bernstein is a fine writer who recounts many interesting anecdotes and bits of trivia, but he does have one annoying habit: a constant tendency to digress. He can be talking about one event and suddenly veer into a four-page life history of a person involved in it. One gets the feeling that Bernstein wants to put on record his opinion of every small and sundry event from the life of every scientist he has met or heard of. At times the connections he unravels are rather tenuous and long-winded, and readers could be forgiven for finding his digressions too numerous. At the same time, those interested in the history of physics and atomic energy will be rewarded if they persevere; most of Bernstein's forays, though exasperating, are also quite interesting. In this particular case, they weave a complex story around a singular element.

Plutonium was discovered by the chemist Glenn Seaborg and his associates at Berkeley. In a breathtakingly productive career, Seaborg would go on to discover nine more transuranic elements, advise four US presidents, win the Nobel Prize, collect enough other awards and honors to earn an entry in the Guinness Book, and have an element and an asteroid named after him while still alive. After fission was discovered, it was hypothesized that elements with atomic numbers 93 and 94 might also behave like uranium. In 1939 Seaborg was a young scientist at Berkeley when he heard about the discovery of fission, and over the next year he performed many experiments on it at Chicago and Berkeley. In 1940, another future Nobel laureate, Edwin McMillan, working with Philip Abelson, discovered a radioactive element beyond uranium; in logical sequence they named it neptunium. Abelson and McMillan's June 1940 paper on neptunium was the last paper on fission and related matters to come out of the United States; senior scientists had already realised the need for secrecy in such matters. There matters stood until the winter of 1940-41, when Seaborg, McMillan and their associates Joseph Kennedy and Arthur Wahl, using tedious and clever chemical techniques, discovered element 94. After uranium and neptunium, Seaborg decided to name the new element after Pluto- the god of fertility but also the god of the underworld.

Concomitantly with the American effort, the Germans were also trying to understand the properties of plutonium, and Bernstein devotes a chapter to their efforts and background. A resourceful German physicist named Carl Friedrich von Weizsäcker had observantly noticed the dwindling and eventual disappearance of fission papers from the United States after the McMillan and Abelson paper appeared in mid-1940, and he also realised the advantage of using plutonium in a nuclear weapon. But as the history of the German atomic project makes clear, Weizsäcker's report was not taken very seriously, and in any case the Germans were too strapped for cash and resources to seriously pursue the production of plutonium. Notice was also taken by accomplished physicists in the Soviet Union, but it was espionage that revealed to them the real potential and importance of plutonium. The fascinating story of Soviet espionage is superbly narrated in Richard Rhodes's Dark Sun: The Making of the Hydrogen Bomb.

Plutonium was soon isolated in weighable quantities by Seaborg's team and its fissile properties were investigated. Once the enormous problems of separating U-235 were appreciated, the great advantage of plutonium became obvious: being a different element, it could be separated chemically from its parent uranium, avoiding the difficulty of isotope separation altogether. It was also found to be even more prone to fission than uranium, and together with its relative ease of separation, this made it a key material for a nuclear weapon. It was realised, however, that many tons of uranium would have to be bombarded with neutrons to produce pounds of the precious element. By 1942 it was known that at least a few kilograms of either uranium-235 or plutonium would be needed for the critical mass of a bomb. To this end, enormous facilities were constructed in 1943: enrichment plants at Oak Ridge for uranium, and reactors at Hanford in Washington state for producing plutonium. The Hanford reactors would keep producing material for thousands of nuclear warheads until the late 1980s. A secret laboratory at Los Alamos, headed by Robert Oppenheimer, was established concurrently; he would bring a group of "luminaries" to the mesa high in the mountains to work on the actual design of an atomic weapon.

At Los Alamos, the initial designs for both the uranium and plutonium bombs used the "gun method", in which a plug of fissile material would be shot at great speed down a gun barrel into another piece of fissile material; when the two met, a critical mass would suddenly form and fission would result in an explosive detonation. But a fatal flaw was unexpectedly discovered in 1944. When the first samples of reactor-bred plutonium arrived at Los Alamos, it was observed that the material had a very high rate of spontaneous fission due to the copious presence of another isotope, Pu-240. (Even today, the feature that distinguishes "reactor-grade" from "weapons-grade" plutonium is the higher proportion of Pu-240 in reactor-grade material.) Because of the extra neutrons from spontaneous fission, a gun-type bomb, though it would work for U-235, would be worthless for plutonium: by the time the two pieces met, fission would already have started and the result would be a "fizzle", a suboptimal explosion. Because of this difficulty the whole lab was reorganised by Oppenheimer in August 1944 and experts were brought in to investigate new mechanisms for a plutonium bomb.

The result was one of the most ingenious concepts in the history of nuclear weapons design: implosion. The idea was to suddenly squeeze a sub-critical ball of plutonium into a highly compressed supercritical mass using high explosives, triggering fission and a massive explosion. The problem was that this microsecond-scale compression had to be almost perfectly symmetrical; otherwise the plutonium would simply squirt out along the path of least resistance, like dough squeezed between cupped palms. Circumventing this problem required some of the greatest scientific minds of the day. The Hungarian genius John von Neumann helped supply the crucial idea of using "lenses" of explosives with different detonation velocities to shape shock waves that would converge symmetrically onto a point, just as light is focused by glass lenses. The concept required a paradigm shift- nobody had used explosives as precision tools before; they were meant to blow things out, not in. Even after the idea was floated, the engineering and diagnostic obstacles were formidable. The Harvard chemist George Kistiakowsky was put in charge of a division that painstakingly developed the moulds for the lenses; machining had to be accurate to within microns, since any air bubbles, cracks or irregularities would immediately distort the symmetrical shock wave. Another challenging device was the "initiator", a tiny ball of radioactive elements at the center of the sphere that would release a burst of neutrons right after the implosion, but not a moment before. Its design was so challenging that it remains one of the few things that is still almost completely classified. One of the physicists who worked on both shock-wave hydrodynamics and initiator design was the Soviet spy Klaus Fuchs, ironically brought in as part of the British team to replace Edward Teller, whose reluctance to work on implosion and obsession with the hydrogen bomb had tested the patience of theoretical division leader Hans Bethe. The information Fuchs passed on would prove invaluable to the Russians in building their own implosion bomb.

Compounding all of these difficulties was the hideously diabolical nature of plutonium itself. Chemists and metallurgists had never before faced the challenge of working with such an unusual and dangerous material. Plutonium exists as several allotropes- different physical forms of the same element- depending on the conditions. When one looks at its allotropic behavior with a bomb in mind, it is almost as if nature had conspired to keep humans from using it. At room temperature plutonium exists in its alpha phase, which is dense but brittle and won't do at all for an implosion. The allotrope that is suitable for a bomb, the malleable delta phase, exists only at around 315 degrees centigrade and above. This is a catch-22: the useful, machinable allotrope exists only at high temperature, while the one stable at room temperature is worthless. A very clever solution was found by human ingenuity; Cyril Smith, head of the metallurgy division at Los Alamos, discovered that alloying plutonium with a small amount of gallium stabilized the valuable delta phase at room temperature. This was found only months before the first test of the bomb.

In the end, while the uranium bomb was considered reliable enough not to require testing, the implosion design was too novel to use without a test. On July 16, 1945, the sky thundered and a new force surpassing the human ability to contain it was unleashed in the cold desert sands of New Mexico at the Trinity site. The plutonium tested on that ominous dawn would be reincarnated in Fat Man, the bomb that leveled Nagasaki in less than ten seconds.

In addition to its unusual chemistry, plutonium of course has radioactive properties that make its name so dreaded among laypersons. But we have to put things in perspective. I would far rather be within a kilometer of Pu-239 than within a kilometer of anthrax or VX nerve gas. Plutonium decays by emitting alpha particles, and simple physics dictates that these particles have a very short range; you could hold a piece of Pu on a sheet of paper in the palm of your hand and live to talk about it. The real danger comes from inhaling it, which can cause severe damage to the lungs and bone and lead to cancer. Its half-life is about 24,000 years, and another law of physics dictates that half-life and radioactive intensity are inversely related (a quick calculation below makes the point). To help understand Pu-239's true nature, Bernstein narrates a fascinating study of 37 technicians and scientists at Los Alamos who got Pu-239 into their systems. This group was whimsically named the "UPPU" (U Pee Pu) group, since Pu-239 could be detected in their urine, and it was followed medically at Los Alamos for many years. The verdict is clear: none of these people suffered long-term damage from Pu-239; many lived long and healthy lives and some are still alive. As with other aspects of nuclear power, the danger from plutonium has to be carefully reasoned about and objectively assessed. Like any nuclear material, Pu needs to be handled with the utmost care, but that does not mean fears about it should outweigh the benefits of its potential for providing power. There is of course a real proliferation danger, but even there the risks are often inflated. Terrorists would have to steal a substantial amount of Pu, using special equipment, from facilities that are usually heavily guarded. Stealing Pu and using it is not as easy as robbing a bank and laundering the money.
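A quick calculation shows why the long half-life translates into a comparatively gentle level of radioactivity. The specific activity follows directly from the decay law; radium-226 is included only as a familiar yardstick, since the curie was originally defined as the activity of one gram of it.

```python
import math

# Specific activity per gram: A = lambda * N = (ln 2 / t_half) * (N_A / atomic_mass)
N_A = 6.022e23               # atoms per mole
SECONDS_PER_YEAR = 3.156e7

def specific_activity_bq_per_gram(half_life_years, atomic_mass):
    decay_constant = math.log(2) / (half_life_years * SECONDS_PER_YEAR)   # per second
    atoms_per_gram = N_A / atomic_mass
    return decay_constant * atoms_per_gram

print(f"Pu-239: {specific_activity_bq_per_gram(24_100, 239):.1e} Bq/g")   # ~2.3e9 Bq/g, about 0.06 Ci/g
print(f"Ra-226: {specific_activity_bq_per_gram(1_600, 226):.1e} Bq/g")    # ~3.7e10 Bq/g, about 1 Ci/g
```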

However, there are sites in the former Soviet Union where plutonium is not so heavily guarded, and these will have to be secured. Five kilograms of Pu-239, efficiently utilised, is enough for a weapon that would easily destroy Manhattan, and it is very difficult to keep track of such small quantities through inspection. International collaboration will be needed to track and contain every gram of plutonium at vulnerable facilities. At the same time, plutonium as a source of power may be indispensable for the future of humanity. Forged on earth by human ingenuity, Pu has outlived its initial use. Most of the warheads in the US arsenal, including thermonuclear warheads, use plutonium in the fission assembly. Several hundred tons of weapons-grade and reactor-grade plutonium have been produced, with more being made all the time, and hundreds of tons more sit in spent fuel rods immersed in huge water pools, glowing eerily with a bluish light. Plutonium production sites in the United States now face a heavy and expensive backlog of cleanups.

Plutonium seems to be a classic case of the "careful what you wish for" adage. Glenn Seaborg could not have imagined the consequences of his discovery on that hazy morning when, after an all-night session, the angry element revealed itself to a warring world, kicking and screaming from its fiery radioactive cradle. But as Richard Feynman once put it so lucidly, science is a set of keys that open the gates to heaven; the same keys open the gates to hell. Plutonium is one of the keys to heaven that we have been given. Which gate to approach is entirely our choice.

Picture says more than a thousand words

"Unlawful killing" was implicated in Princess Diana's death by a British court yesterday. Her driver Henri Paul's blood report indicated an ethanol level of 1.78 g/L which comes out to 178 mg/dL. Wikipedia says that even 100 mg/dL causes "central nervous system depression, impaired motor and sensory function, impaired cognition" and >140 mg/dL causes "decreased blood flow to the brain".

But even without knowing all this, if I had simply looked at the bloke's face before the crash, I would have been a little unwilling to have him as my driver for the night. Take a look...

[Photograph of Henri Paul]

Quote of the day...no, of the year

"Nobody believes a calculated result except the person that calculated it.
Everybody believes an experimental result except the person that measured it"--- Paul Labute

Datasets in Virtual Screening: starting off on the right foot

As Niels Bohr said, prediction is very difficult, especially about the future. In computational modeling, the real value lies in prediction. But before any methodology can predict, it must first be able to evaluate: sound evaluation of known data (retrospective testing) is the only path to accurate prediction of new data (prospective testing).

Over the last few years, several papers have compared different structure-based and ligand-based methods for virtual screening (VS), binding mode prediction and binding affinity prediction. Each of these goals, if accurately achieved, could save the industry immense amounts of time and money. Every paper concludes that some method is better than some other. For virtual screening, for example, many have concluded that ligand-based 3D methods are better than docking methods, and that 2D ligand-based methods are at least as good, if not better.

However, such studies have to be conducted very carefully to make sure that you are not biasing the experiment for or against any method, or comparing apples and oranges. In addition, you have to use the correct metrics to evaluate your results. Failure to do either of these things can lead to erroneous, inflated or artificially enhanced results, and hence to fallible prediction.

Here I will talk about two aspects of virtual screening: choosing the correct dataset, and choosing the evaluation metric. The basic problems in VS are false positives and false negatives, and one wants to minimize both. Sound statistical analysis can do wonders for generating and evaluating good virtual screening data, as documented in several recent papers, notably one by Ant Nicholls of OpenEye. If you have a VS method, there is not much point in throwing it at an arbitrary screen of 100,000 compounds; you need to choose the nature and number of actives and inactives judiciously to avoid bias. Here are a few things to remember that I gathered from the literature:

1. Standard statistical analysis tells you that the error in your results depends on the number of representatives in your sample. Thus you need an adequate number of actives and inactives in your screening dataset, and, even more important, a sensible ratio of inactives to actives. The errors inherent in various such ratios have been quantified; for example, with an inactive:active ratio of 4:1, the error incurred is 11% more than that incurred with a theoretical ratio of infinity:1, while for a ratio of 100:1 it is only 0.5% more. Clearly we should use a generous ratio of inactives to actives to reduce statistical error. You can also reduce this error by increasing the number of actives, but that is not compatible with real-life HTS, where actives are usually very few, sometimes no more than 0.1% of the screen (the run-to-run spread in the simulation after this list gives a feel for this sampling error).

2. Number is one thing; the nature of your actives and decoys is equally important. Simply overwhelming your screen with decoys won't do the trick. Consider, for example, a kinase-inhibitor virtual screen in which the decoys are things like hydrocarbons and inorganic ions. In his paper, Nicholls calls distinguishing such decoys the "dog test": even your dog should be able to tell them apart from the actives (not that I am belittling dogs here). We don't want a screen that makes it too easy for the method to pick out the actives; simply throwing a random collection of compounds at your method might make the job trivially easy and mislead you about its performance.

We also don't want a method to separate chemically similar molecules merely on the basis of some property like logP or molecular weight. Consider, for example, a method or scoring function that is sensitive to logP, and suppose it is given two hypothetical molecules with an identical core and a nitrogen in the side chain. If one side chain carries an NH2 while the other is N-butylated, there will be a substantial difference in logP between the two, and the method may fail to recognise them as "similar", especially from a 2D perspective. A challenging dataset for similarity-based methods is therefore one in which the decoys and actives are property-matched. Just such a dataset has been put together by Irwin, Huang and Shoichet: the DUD (Directory of Useful Decoys) set of property-matched compounds, which covers 40 protein targets with their corresponding actives and provides 36 property-matched decoys for every active. This dataset is much more challenging for many methods that do well on other, random datasets; for details, take a look at the original DUD paper. In general there can be different kinds of decoys- random, drug-like, drug-like and property-matched, and so on- and one needs to know how to choose the right set. With datasets like DUD, there is an attempt to provide common benchmarks for the modeling community.

3. Then there is the extremely important matter of evaluation. After running a virtual screen with a well-chosen dataset and well-chosen targets, how do you actually evaluate the results and put your method in perspective? There are several metrics, but until now the most popular has been enrichment, and that is how it has been done in several publications. The idea is simple: you want your top-ranked compounds to contain as many actives as possible. Enrichment is simply the fraction of the actives recovered within a given fraction of the screened compounds. Ideally you want the enrichment curve to shoot up at the beginning- that is, you want most (ideally all) of the actives to show up in the first 1% or so of your ranked list- and you then compare that curve with the one (essentially a straight line at the start) that an ideal result would give.
The problem with enrichment is that it is a function of both the method and the dataset, and hence of the entire experiment. For example, the ideal curve depends on the number of actives in the dataset. If you want a controlled experiment, the only differences in the results should come from your method, and enrichment introduces another variable that complicates interpretation; the short simulation after this list makes the point concrete. Other failings of enrichment are documented in the papers cited below.
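To make point 3 (and, in passing, point 1) concrete, here is a small, self-contained simulation in plain Python with NumPy. The score distributions are invented for illustration: actives and decoys are drawn from two overlapping Gaussians, so the "method" is identical in every run. Yet the enrichment factor at 1% shifts as the decoy:active ratio changes, and the spread across repeats shows the sampling error that point 1 warns about.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_screen(n_actives, n_decoys):
    """Toy screen: actives and decoys drawn from two overlapping Gaussians."""
    scores = np.concatenate([rng.normal(1.0, 1.0, n_actives),    # actives score higher on average
                             rng.normal(0.0, 1.0, n_decoys)])
    labels = np.concatenate([np.ones(n_actives), np.zeros(n_decoys)])
    return scores, labels

def enrichment_factor(scores, labels, fraction=0.01):
    """Fraction of all actives recovered in the top `fraction` of the ranked list,
    divided by the fraction expected from random ranking."""
    order = np.argsort(scores)[::-1]                  # best scores first
    n_top = max(1, int(round(fraction * len(scores))))
    hits = labels[order][:n_top].sum()
    return (hits / labels.sum()) / fraction

# Same underlying 'method', different decoy:active ratios.
for ratio in (10, 100, 1000):
    efs = [enrichment_factor(*simulate_screen(50, 50 * ratio)) for _ in range(25)]
    print(f"decoy:active = {ratio:4d}:1   EF(1%) = {np.mean(efs):4.1f} +/- {np.std(efs):3.1f}")
# The mean EF(1%) shifts as the dataset composition changes, even though the score
# distributions never do, and the +/- spread reflects the sampling error of point 1.
```

The ROC-based approach discussed next summarizes the same ranking with a number whose expected value does not depend on the ratio of actives to decoys.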

Instead, the recommended tool for evaluating such methods is the ROC curve.

Essentially, ROC curves can be used in any situation where one needs to distinguish signal from noise- and boy, is there a lot of noise around. They have an interesting history: they were developed by radar scientists during World War 2 to distinguish the signal of enemy warplanes from false hits and other artifacts. Since then they have been used in diverse fields- psychology, medicine, epidemiology, engineering quality control- anywhere one wants to pick the bad apples from the good ones. A ROC curve simply plots the true positive (TP) rate against the false positive (FP) rate. A purely random ranking gives a straight line at 45 degrees, implying that for every TP you get an FP- dismal performance. A good ROC curve bows well above that diagonal, and a very useful summary of your method's performance is the Area Under the Curve (AUC). The AUC has a neat interpretation: an AUC of 0.8 means that a randomly chosen active will be assigned a higher score than a randomly chosen inactive 8 times out of 10. There is a paper (cited below) discussing the advantages of ROC curves for VS and detailing an actual example.
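Here is a minimal sketch of the AUC calculation itself, using the same kind of toy scores as before (NumPy only; the two-Gaussian score model is an assumption made purely for illustration). It computes the AUC both as the fraction of (active, decoy) pairs in which the active outscores the decoy- the probabilistic interpretation mentioned above- and as the area under the TPR-versus-FPR curve; the two numbers agree.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scores: actives drawn from a distribution shifted one sigma above the decoys.
actives = rng.normal(1.0, 1.0, 200)
decoys = rng.normal(0.0, 1.0, 2000)

# AUC as a probability: how often does a randomly chosen active outscore a
# randomly chosen decoy? (Ties count as half; this is the Mann-Whitney statistic.)
wins = (actives[:, None] > decoys[None, :]).sum()
ties = (actives[:, None] == decoys[None, :]).sum()
auc_pairwise = (wins + 0.5 * ties) / (actives.size * decoys.size)
print(f"AUC (pairwise)  = {auc_pairwise:.2f}")        # ~0.76 for these two Gaussians

# The same number as the area under the ROC curve (TPR vs FPR, trapezoidal rule).
scores = np.concatenate([actives, decoys])
labels = np.concatenate([np.ones(actives.size), np.zeros(decoys.size)])
order = np.argsort(scores)[::-1]
tpr = np.concatenate([[0.0], np.cumsum(labels[order]) / labels.sum()])
fpr = np.concatenate([[0.0], np.cumsum(1.0 - labels[order]) / (1.0 - labels).sum()])
auc_curve = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0)
print(f"AUC (ROC curve) = {auc_curve:.2f}")
```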

One thing is striking: the papers cited here and elsewhere suggest that ROC curves may currently be the single best metric for measuring the performance of virtual screening methods. This is probably not too surprising, given how successful they have been in other fields.

Why should modeling be different? Just as in other fields, rigorous and standard statistical metrics need to be established. Only then will the comparisons between different methods and programs so common these days be valid. For this, as in other fields, experiments need to be judiciously planned (including choosing the right datasets) and their results carefully evaluated with unbiased metrics.

It is worth noting that these are mostly prescriptions for retrospective evaluation. When confronted with an unknown and novel screen, which method or combination of methods should one use? The answer to that question is still out there, and some real-life situations run contrary to the known scenarios. Consider a screening collection derived from some novel plant or marine sponge. Are its molecules going to be drug-like? Certainly not. Is it going to have the right ratio of actives to decoys? Who knows- the whole point is to find the actives. Is it going to be random? Yes, and who knows how random. In actual screening there are a lot of unknowns. But it is still very useful to know about the "known unknowns" and "unknown unknowns", and retrospective screening and careful design of experiments can help us unearth some of them. If nothing else, it reflects attention to sound scientific and statistical principles.

In later posts, we will take a closer look at statistical evaluation and at the dangers in pose prediction, including why one should always be wary of crystal structures, as well as something I found fascinating- bias in virtual screen design and evaluation that throws light on chemist psychology itself. This is a learning experience for me as much as, or more than, it is for anyone else.


References:

1. Hawkins, P.C., Warren, G.L., Skillman, A.G., Nicholls, A. (2008). How to do an evaluation: pitfalls and traps. Journal of Computer-Aided Molecular Design, 22(3-4), 179-190. DOI: 10.1007/s10822-007-9166-3

2. Triballeau, N., Acher, F., Brabet, I., Pin, J., Bertrand, H. (2005). Virtual screening workflow development guided by the "receiver operating characteristic" curve approach: application to high-throughput docking on metabotropic glutamate receptor subtype 4. Journal of Medicinal Chemistry, 48(7), 2534-2547. DOI: 10.1021/jm049092j

3. Huang, N., Shoichet, B., Irwin, J. (2006). Benchmarking sets for molecular docking. Journal of Medicinal Chemistry, 49(23), 6789-6801. DOI: 10.1021/jm0608356