Field of Science


The only two equations that you should know

“Chemistry”, declared the Nobel laureate Roger Kornberg in an interview, “is the queen of all sciences. Our best hope of applying physical principles to the world around us is at the level of chemistry. In fact if there is one subject which an educated person should know in the world it is chemistry.” Kornberg won the 2006 Nobel Prize in chemistry for his work on transcription, which involved unraveling the more than a dozen complicated proteins that copy DNA into RNA. He would know how important chemistry is in uncovering the details of a ubiquitous life process.
I must therefore inevitably take my cue from Kornberg and ask the following question: What equation would you regard as the most important one in science? For most people the answer to this question would be easy: Einstein’s famous mass-energy formula, E = mc². Some people may cite Newton’s inverse square law of gravitation. And yet it should be noted that both of these equations are virtually irrelevant for the vast majority of practicing physicists, chemists and biologists. They are familiar to the public mainly because they have been widely publicized and are associated with two very famous scientists. There is no doubt that both Einstein and Newton are supremely important for understanding the universe, but they both suffer from the limitations of reductionist science that preclude the direct application of the principles of physics to the everyday workings of life and matter.
Take Einstein’s formula for instance. About the only importance it has for most physical scientists is the fact that it is responsible for the nuclear processes that have forged the elements in stars and supernovae. Chemists deal with reactions that involve not nuclear processes but the redistribution of electrons. Except in certain special cases, Einstein therefore does not figure in chemical or biological processes. Newton’s gravitational formula is equally distant from most chemists' everyday concerns. Chemistry hinges on the attraction and repulsion of charges, processes overwhelmingly governed by the electromagnetic force. This force is stronger than the gravitational force by a factor of about 10^36, an unimaginably large number. Gravity is thus too weak for chemists and biologists to bother with in their work. The same goes for many physicists who deal with atomic and molecular interactions.
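That factor of 10^36 is easy to check with a back-of-the-envelope calculation. Here is a minimal sketch in Python comparing the electrostatic and gravitational forces between two protons; the separation distance cancels out of the ratio since both forces fall off as the inverse square of distance.

```python
# Ratio of the electrostatic to the gravitational force between two protons.
# The distance between them cancels, since both forces scale as 1/r^2.

k_e = 8.9875e9     # Coulomb constant, N m^2 / C^2
G   = 6.6743e-11   # gravitational constant, N m^2 / kg^2
q_p = 1.6022e-19   # proton charge, C
m_p = 1.6726e-27   # proton mass, kg

ratio = (k_e * q_p**2) / (G * m_p**2)
print(f"electrostatic / gravitational force: {ratio:.1e}")  # ~1.2e36
```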
Instead here are two equations which have a far greater and more direct relevance to the work done by most physical and biological scientists. The equations lie at the boundary of physics and chemistry, and both of them are derived from a science whose basic truths are so permanently carved in stone that Einstein thought they would never, ever need to be modified. The man who contributed the most to their conception, Josiah Willard Gibbs, was called "the greatest mind in American science" by Einstein. The science that Gibbs, Helmholtz, Clausius, Boltzmann and others created is thermodynamics, and the equations we are talking about involve its most basic quantities. They apply without exception to every important physical and chemical process you can think of, from the capture of solar energy by plants and solar cells to the combustion of fuel inside trucks and human bodies to the union between sperm and egg.
Two thermodynamic quantities govern molecular behavior, and indeed the behavior of all matter in the universe. One is the enthalpy, usually denoted by the symbol H, and roughly representing the quantity of energy and the strength of interactions and bonds between different atoms and molecules. The other is the entropy, usually denoted by the symbol S, and roughly representing the quality of energy and the disorder in any system. Together the enthalpy and entropy make up the free energy G, which roughly denotes the amount of useful work that can be extracted from any living or non-living system. In practical calculations, what we are concerned with are changes in these quantities rather than their absolute values, so each one of them is prefaced by the symbol ∆, indicating change. The celebrated second law of thermodynamics states that the total entropy always increases in a spontaneous process, and it is indeed one of the universal facts of life, but that is not what we are concerned with here.
Think about what happens when two molecules – of any kind – interact with each other. The interaction need not even be an actual reaction; it can simply be the binding of two molecules to one another by strong or weak forces. The process is symbolized by an equilibrium constant Ke, which is simply the ratio of the concentrations of the products of the reaction to the starting materials (reactants). The bigger the equilibrium constant, the larger the amount of the products. Ke thus tells us how much of a reaction has been completed and how much reactant has been converted to product. Our first great equation relates this equilibrium constant to the free energy of the interaction through the following formula:
∆G° = -RT ln Ke
or, in other words
Ke = e^(-∆G°/RT)
Here ln is the natural logarithm to base e, R is a fundamental constant called the gas constant, T is the ambient temperature and ∆G° is the free energy change under so-called 'standard conditions' (a detail which can be ignored by the reader for the sake of this discussion). This equation tells us two major things and one minor thing. The minor thing is that reactions can be driven in particular directions by changing the temperature, and exponentially so. But the major things are what's critical here. Firstly, the equation says that the free energy change in a spontaneous process with a favorable equilibrium constant (one greater than 1) is always going to be negative; the more negative it is the better. And that is what you find. The free energy change for many of biology's existential reactions like the coupling of biological molecules with ATP (the “energy currency” of the cell), the process of electron transfer mediated by chlorophyll and the oxidation of glucose to provide energy is indeed negative. Life has also worked out ingenious little tricks to couple reactions with positive (unfavorable) ∆G° changes to those with negative ∆G° values to give an overall favorable free energy profile.
The second feature of the equation is a testament to the wonder that is life, and it never ceases to amaze me. It attests to what scientists and philosophers have called “fine-tuning”, the fact that evolution has somehow succeeded in minimizing the error inherent in life’s processes, in carefully reining in the operations of life to within a narrow window. Look again at that expression. It says that ∆G° is related to Ke not linearly but exponentially. That is a dangerous proposition because it means that even a tiny change in ∆G° will correspond to a large change in Ke. How tiny? No bigger than about 3 kcal/mol.
A brief digression to appreciate how small this value is. Energies in chemistry are usually expressed in kilocalories per mole. A single bond between two carbon atoms is about 80 kcal/mol. The triple bond between the two nitrogen atoms in N2 is about 226 kcal/mol: this is why nitrogen can be converted to ammonia by breaking this bond only at very high temperatures and pressures and in the presence of a catalyst. A hydrogen bond - the "glue" that holds biological molecules like DNA and proteins together - is anywhere between 2 and 10 kcal/mol.
3 kcal/mol is thus a fraction of the typical energy of a bond. It takes just a little jiggling around to overcome this energy barrier. The exponential, highly sensitive dependence of Ke on ∆G° means that changing ∆G° from close to zero to just 3 kcal/mol at room temperature changes Ke by a factor of roughly 150, enough to take a reaction from a 50:50 mixture to one that is more than 99% reactants (remember that Ke is a ratio). This is a simple mathematical truth. Thus, a tiny change in ∆G° can all but completely shift a chemical reaction from favoring products to favoring reactants. Naturally this will be very bad if the goal of a reaction is to create products that are funneled into the next chemical reaction. Little changes in the free energy can therefore radically alter the flux of matter and energy in life’s workings. But this does not happen. Evolution has fine-tuned life so well that it has remained a game played within a 3 kcal/mol energy window for more than 2.5 billion years. It's so easy for this game to quickly spiral out of control, but it doesn’t. It doesn’t for the trillions of chemical transactions which trillions of cells execute every day in every single organism on this planet.
And it doesn’t happen for a reason; because cells would have a very hard time modulating their key chemical reactions if the free energies involved in those reactions had been too large. Life would quickly be put into a death trap if every time it had to react, fight, move or procreate it had to suddenly change the free energies of each of its processes by tens of kilocalories per mole. There are lots of bonds broken and formed in biochemical events, of course, and as we saw before, these bond energies can easily amount to dozens of kcal/mol. But the tendency of the reactants or products containing those bonds to accumulate is governed by these tiny changes in free energy which nudge a reaction one way or another. In one sense then, life is optimizing small differences (in the free energies of reactions) between two large numbers (bond energies). This is always a balancing act on the edge of a cliff, and life has managed to be successful at it for billions of years. It's one of the great miracles of the universe.
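Before moving on to the second equation, here is a minimal sketch in Python that makes the exponential sensitivity of Ke to ∆G° concrete; it evaluates Ke = e^(-∆G°/RT) at room temperature for a simple A ⇌ B equilibrium, with ∆G° values chosen purely for illustration.

```python
import math

R = 0.0019872  # gas constant, kcal/(mol K)
T = 298.0      # room temperature, K

def equilibrium_constant(delta_g0):
    """Ke = exp(-dG0/RT), with dG0 in kcal/mol."""
    return math.exp(-delta_g0 / (R * T))

for dg in [-3.0, -1.0, 0.0, 1.0, 3.0]:
    ke = equilibrium_constant(dg)
    pct_products = 100.0 * ke / (1.0 + ke)  # percent B at equilibrium for A <-> B
    print(f"dG0 = {dg:+.1f} kcal/mol -> Ke = {ke:9.4f} (~{pct_products:5.1f}% products)")
# A swing of only a few kcal/mol moves the equilibrium from >99% products to <1% products.
```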
The second equation is also a relationship between free energy, enthalpy and entropy. It's simpler than the first, but no less important:
∆G = ∆H - T∆S
The reason this equation is also crucial to the operation of the universe is because it depicts a fine dance between entropy and enthalpy that dictates whether physical processes will happen. Note that the entropy change is multiplied by the temperature and enters with a negative sign. So if the entropy decreases in a process, ∆S is negative and the term -T∆S becomes positive. In that case the change in enthalpy needs to be negative enough to compensate, otherwise the free energy change will not be negative and the process won't take place.
For instance, consider the schoolboy experiment of oil and water not mixing. When oil is put into water, the water molecules have to order themselves around the oil molecules, so their entropy decreases and ∆S becomes negative. The attraction between water and oil on the other hand is weak, so the change in enthalpy does not compensate for the change in entropy, and the oil does not mix. This is called the hydrophobic effect. It's a fundamental effect governing a myriad of critical phenomena: drugs interacting with signaling proteins, detergents interacting with grease, food particles attracting or repelling each other inside saucepans and human bodies. On the other hand, salt and water mix easily; in this case, while the entropy change is still unfavorable because of the ordering of water molecules around the ions, the enthalpy change is overwhelmingly favorable (negative) because the positively and negatively charged sodium and chloride ions strongly attract water.
Because temperature is part of the equation it too plays an important role. For instance, consider a chemical reaction in which the change in entropy is favorable but quite small. We can then imagine that this reaction will be driven much further toward products if T is high, since a high temperature makes the T∆S term large. This explains why the free energy of chemical reactions can be made much more favorable at high temperatures (there is a subtlety here, however: making the free energy more favorable is not the same as accelerating the reaction, it simply makes the products more stable. The difference is between thermodynamics and kinetics).
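As a small illustration of this dance, here is a sketch in Python that evaluates ∆G = ∆H - T∆S for the two cases discussed above; the ∆H and ∆S numbers are invented purely for illustration and are not experimental values.

```python
def free_energy_change(delta_h, delta_s, temperature=298.0):
    """dG = dH - T*dS, with dH in kcal/mol and dS in kcal/(mol K)."""
    return delta_h - temperature * delta_s

# Illustrative, made-up numbers for the two cases in the text:
# oil in water: weakly favorable enthalpy, strongly unfavorable entropy
# salt in water: strongly favorable enthalpy outweighs a mildly unfavorable entropy
cases = {
    "oil in water":  (-1.0, -0.010),
    "salt in water": (-8.0, -0.005),
}

for name, (dh, ds) in cases.items():
    dg = free_energy_change(dh, ds)
    verdict = "spontaneous" if dg < 0 else "not spontaneous"
    print(f"{name:14s}: dG = {dg:+.1f} kcal/mol -> {verdict}")
```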
Even the origin of life, during which the exact nature of molecular interactions was crucial in deciding which ones would survive, replicate and thrive, was critically dependent on enthalpy and entropy. When little oily molecules were squeezed together by water because of the unfavorable entropy and enthalpy described above, they assembled into tiny bags called micelles and vesicles, inside which fragile molecules like DNA and RNA could safely isolate themselves from the surrounding water. These DNA and RNA molecules could then experiment with copying themselves at leisure, not having to worry about being hydrolyzed by water. The ones with higher fitness survived, kickstarting the process which, billions of years later, finally led to this biped typing these words on his computer.
That's really all there is to life. We all thus hum along smoothly, beneficiaries of a 3 kilocalorie energy window and of the intricate dance of entropy and enthalpy, going about our lives even as we are held hostage to the quirks of thermodynamic optimization, walking along an exponential energy precipice.
And all because Ke = e^(-∆G°/RT)

Cancer and the origins of life: The Age of Metabolism

The NYT has an interesting article on the Warburg Effect and how it can be used to provide a new weapon in the treatment of cancer (the article is part of a larger series on cancer in the weekend magazine). The effect, which is named after the Nobel Prize-winning German biochemist Otto Warburg, pertains to the fact that tumors can grow by disproportionately consuming glucose from their environment. More specifically it deals with the tendency of tumor cells to rely on fermentation rather than oxygen-based respiration for their energy, which also allows them to persist when oxygen is scarce.

This is clearly a mechanism that could potentially be targeted in cancer therapy, for example by blocking glucose transporters. But more generally it speaks to the growing importance of metabolism in cancer treatment. It seems to me that since the 1970s or so, partly because of discoveries regarding oncogenes like Ras and Src and partly because of the explosive growth in sequencing and genomics, genetics has taken center stage in cancer research. This is a great thing but it's not without its pitfalls. In the race to decode the genetic basis of cancer, one gets the feeling that the study of cancer metabolism has fallen a bit by the wayside and is now being resurrected. In some sense this almost harkens back to an older period when cancer was conjectured to be caused by environmental factors affecting metabolism.

It's gratifying therefore that things like the Warburg Effect are being recognized. As the article points out, one of the simple reasons is that while many (frighteningly many in fact) genes might be mutated in cancer, a cancer cell usually has only a few ways to get energy from its surroundings: the range of targets is thus potentially narrower when it comes to energy. The recognition of this effect also speaks to the commonsense view that we should have a multipronged approach toward cancer therapy: genetics, metabolism and everything in between. Judah Folkman's idea of cutting off a cancer cell's blood supply is another approach, what we may call a 'mechanical' approach (all of cancer surgery is a mechanical approach, in fact).

I could not help but also note the interesting coincidence that this tussle between emphasizing genetics vs. metabolism has played out in another area which seems quite far removed from cancer medicine: the origin of life. For the longest time people focused on how DNA and RNA could have formed on the primordial earth. It's only in the last 20 years or so that "metabolism first" started getting emphasized too: this approach stresses the all-important role that the evolution of life's energy-generating apparatus (in the form of proton gradients and ATP) played in getting life jumpstarted. The metabolism-first viewpoint really took off with the discovery of deep sea hydrothermal vents, which can sustain primitive energy-generating biochemical cycles based on proton gradients, alkaline environments and diffusion through tiny pores in the vents. Biochemists like Nick Lane and Mike Russell have been pioneers in this area.

The renewed focus on metabolism in treating cancer as well as in exploring the most primeval characteristics of life seems to me to bring the study of life in both health and disease full circle. Just as you cannot discuss the genetics of life's origins without discussing life's source of energy, you cannot disrupt cancer's spread by disabling its genes without also disabling its source of energy. Both are important, and emphasizing one over the other seems mainly to be a function of research fads and fashions rather than objective scientific reasoning.

As an amusing aside, the father of a very close friend of mine knew Otto Warburg quite well when he worked in Vienna in the 50s. Here's what he had to say about Warburg's scrupulous lab protocols: "One story I've always remembered was that he would clean his own glassware, used in experiments. He didn't trust any low-level dishwasher or junior staff around the lab. He wanted to make sure everything was perfect. I can confirm that even a tiny 'foreign fragment' in glassware can wreck an experiment."

"Arsenic bacteria": Coffin, meet nails

For those dogged souls still following the whole debacle of arsenic-eating bacteria, it seems that Science has published what should be close to the death knell for "arsenic life". I already mentioned the report by Rosie Redfield and there's another one by Tobias Erb's group at ETH. The title of the paper is "GFAJ-1 is an arsenate-resistant, phosphate-dependent organism".

It's worth reflecting on that title again: "arsenate-resistant, phosphate-dependent". Yes, that description applies to GFAJ-1. It also applies to me, Shamu the killer whale, E. coli O157 and Francis Bacon. In fact it applies to all the normal life forms that we know. So basically the title says that GFAJ-1 is not much different in this respect from any other bacterium that you may happen to find in a thimbleful of mud scooped up from your backyard.

The paper goes on to analyze the behavior of the bacterium in the presence and absence of phosphate and arsenate. The bacterium seems to survive on tiny concentrations of phosphate, a concentration that was interestingly deemed an "impurity" in the original Wolfe-Simon studies. It does not survive on arsenate alone but starts dividing as soon as trace amounts of phosphate are added. The authors' conclusion is clear: "We conclude that cultures in the previous study might have grown on trace amounts of phosphate rather than arsenate". This is what several experts had suspected from the beginning. Their suspicion was based on life's extraordinary resilience and its ability to zealously guard and use every single atom of precious growth nutrients.

The authors also analyze the composition of the biomolecules (nucleotides, sugars etc.) in GFAJ-1 in the presence and absence of arsenate. They find only phosphate incorporated in the organism's essential machinery. While this does not necessarily argue against the use of arsenate, it demonstrates that when given a choice GFAJ-1 clearly prefers phosphate.

That observation is however not as striking as the next one, where they find some metabolites containing arsenate, specifically sugars with arsenate appended to them. The question then is, are these metabolites formed biogenically or abiotically? To distinguish between these possibilities, the authors ran mock experiments in which they treated glucose medium with arsenate. The purported metabolites showed up in the products, and their formation is also supported by simple thermodynamic arguments which favor the attachment of arsenate to sugars. Thus it seems that simple chemistry rather than complex biology is sufficient to explain the small amounts of arsenated metabolites. The scientists further performed careful experiments to rule out the existence of other arsenated biomolecules.

The sum total of these experiments says that GFAJ-1 can grow in the presence of phosphate, that it cannot grow on arsenate alone, and that it can grow in high concentrations of arsenate only when supplemented with limiting concentrations of phosphate. Taken together with the other paper by Rosie Redfield, this is as good a case against arsenic-based life as we can make right now.

The papers are good examples of the conservative yet decisive style in which scientists are accustomed to pitching their results. Unfortunately the original authors have not reacted as conservatively. If anything their responses are transparently shallow and unconvincing. When asked about the results, Felisa Wolfe-Simon said that:
"There is nothing in the data of these new papers that contradicts our published data."
That reply almost convinces me that denial is the most sincere form of self-deception.
A current collaborator of Wolfe-Simon had even more remarkable things to say:

“There are many reasons not to find things — I don’t find my keys some mornings,” he said. “That doesn’t mean they don’t exist. The absence of a finding is not definitive.”

To which I might add that there is a possibility that disgruntled unicorns with chemistry PhDs looking for jobs may well exist, since we haven't found any yet.


Update: Paul@Chembark nicely weighs in.

"Arsenic bacteria": If you hadn't nailed 'im to the perch 'e'd be pushing up the daisies

Rosie Redfield (who blogs on this network) has just published an official, careful and decisive rebuttal to the "arsenic bacteria" fiasco in collaboration with a group at Princeton. The paper, which will appear in Science, is under embargo for now, but there is a copy available at that bastion of free publication, arXiv. Readers may remember Redfield as the scientist who offered the most meticulous preliminary criticism of the original paper by Felisa Wolfe-Simon and others. Wolfe-Simon and the rest of the arsenic group refused to engage in debate with Redfield and other critics at the time, citing the "non-official" nature of the offered criticism and asking for publication in a more formal venue. Looks like they finally got their wish.

The abstract could not be clearer:

"A strain of Halomonas bacteria, GFAJ-1, has been reported to be able to use arsenate as a nutrient when phosphate is limiting, and to specifically incorporate arsenic into its DNA in place of phosphorus. However, we have found that arsenate does not contribute to growth of GFAJ-1 when phosphate is limiting and that DNA purified from cells grown with limiting phosphate and abundant arsenate does not exhibit the spontaneous hydrolysis expected of arsenate ester bonds. Furthermore, mass spectrometry showed that this DNA contains only trace amounts of free arsenate and no detectable covalently bound arsenate."

It's a fairly short paper but there are many observations in it which quite directly contradict the earlier results. The strain of bacteria that was claimed to grow only when arsenic was added to the medium was found to not grow at all. In fact it did not budge even when some phosphate was added, growing only after the addition of other nutrients. Trace element analysis using several techniques detected no arsenate in DNA monomers and polymers. This is about as definitive an argument as can be published indicating that the claims about the bacteria using arsenic instead of phosphorus in their essential biomolecules were simply incorrect. Much credit goes to Redfield, who patiently and probingly pursued the counterargument, undoubtedly at the expense of other research in her lab. In addition she did open science a great service by describing all the ongoing research on her blog. She sets a standard for how science should be done, and we should hope to see more of this in the future.

Sociologically the episode is a treasure trove of lessons on how science should not be done. It checks off some standard "don'ts" in the practice of science. Don't fall prey to wishful thinking and confirmation bias that tells you exactly what you wanted to hear for years. Don't carry out science by press conference and then refuse to engage in debate in public venues. And of course, don't fail to provide extraordinary evidence when making extraordinary claims. If the original paper had been published cautiously and without hullabaloo, it would have become part of the standard scientific tradition of argument and counterargument. As it turned out, the publicity accompanying the paper made it a prime candidate for demolition by blogs and websites. If nothing else, it provided a taste of how one needs to be extra careful in this age of instant online dissemination. There are also some "do's" that deserve to be mentioned. The researchers did reply to criticism later and make their bacterial strains available to everyone who wanted to study them in a gesture of cooperation, but their earlier behavior left a bad taste in everyone's mouth and detracted from these later acts.

When the original paper came out, many of us were left gaping with eyes wide open at visions of DNA, ATP, phosphorylated proteins and lipids swirling around in a soup of arsenic, carrying out the exact same crucial biological processes that they were carrying out before without skipping a beat. We just had a gut feeling that this couldn't be quite right, mainly because of the sheer magnitude of the biochemical gymnastics an organism would have to undergo in order to retool for this drastically different environment. Gut feelings are often wrong in science, but in this case it seems they made perfect sense.

What next? As often happens in science, I suspect that the defenders of the original paper will not outright capitulate but will fight a rearguard action until the whole episode drops off everyone's radar. But this paper clinches the case for normal biochemistry as well as anything could. Good old phosphorus is still one of life's essential elements, and arsenic is not.

Would Ron Breslow's dinosaurs be typing this post?

Much has been written about a recent perspective in JACS by Ronald Breslow on the origin of homochirality during the origin of life. There's excellent commentary on the topic from See Arr Oh and Paul@Chembark. Briefly, Breslow's paper describes some pretty interesting research from his and other groups establishing a possible mechanism for the transfer of chirality from alpha-methyl amino acids to standard amino acids, followed by the amplification of that small enantiomeric excess through a variety of plausible mechanisms involving the concentration of the dominant enantiomer.

The paper would have remained an interesting chemical curiosity about the origin of life, and could even have served to remind the public that the origin of life is chemistry's Big Question, had it not been for two lines at the end of the piece:

"An implication of this work is that elsewhere in the universe there could exist life forms based on D amino acids and L sugars...Such life forms could even be advanced versions of dinosaurs,  if mammals did not have the good fortune to have the dinosaurs wiped out by an asteroidal collision, as on Earth. We would be better off not meeting them."

What was interesting was that when I first came across the paper, I spent about two seconds on this line and moved on. The line is an amusing attempt at humor. You usually don't see humor in a technical paper, but in fact I am all for it; I think we need to spice up our otherwise dry scientific literature with the occasional joke. The content of the paper obviously had nothing to do with dinosaurs; it was about a specific technical chemical puzzle in the origins of life. And nothing would have come of it had the ACS PR office not created a sensationalized news piece wrongly centered on these two lines. Scant attention was paid to the scientific substance of the paper, and it didn't help when other popular venues like The Huffington Post also questioned Breslow about it and received the following answer:

"From there, Breslow makes the jump to advanced dinosaurs. But why might extraterrestrial life be in that form? “Because mammals survived and became us only because the dinosaurs were wiped out by an asteroid, so on a planet similar to ours without the asteroid collision it is unlikely that human types would be there, more probably advanced lizards (dinosaurs),” Dr. Breslow told The Huffington Post in an email."

This set of events led to some unfortunate consequences. For one thing, the undue emphasis on dinosaurs at the expense of homochirality was another nail in the coffin of the public communication of chemistry. Here was a chance to explain to the public why the origin of life is chemically fascinating, but instead the chemical substance got overwhelmed by the precipitate of publicity surrounding dinosaurs. If the ACS is wondering why chemistry is having such a PR problem, now would be the time to look in the mirror.

The situation was exacerbated by more serious matters. Following a tip from some commentators, Stu Cantrill of Nature Chemistry looked up two old Breslow papers on the same topic and uncovered an extreme case of self-plagiarism; most of the paper seems to have been copied verbatim from those earlier sources. Breslow should not be blamed for inserting that little joke at the end - it was the media which sensationalized it - but he cannot be excused for the gratuitous self-plagiarism.

That's about all I want to say about this unfortunate episode since others have covered it extensively, but I do want to focus on Breslow's reply to The Huffington Post. Some have chided him for it, but the statement is actually not as absurd as it sounds, since Breslow is asking a famous, age-old question in evolutionary theory: If the tape of evolution were re-run, would it again produce dinosaurs, Breslow and ACS editors? Or in other words, how predetermined is evolution, and how dependent is it on accidents? This question is a profound one, since if the answer is even a qualified "yes", it has serious implications not just for science but for theology, philosophy and the whole puzzle of human existence.

Stephen Jay Gould was a powerful advocate of contingency in evolution, and his argument is easy to appreciate. Evolution has been shaped by so many quirks of environment and the fate of individual organisms and species that it would be naive to think that chance did not play a role in it. A single piece of wood accidentally drifting away and carrying a few individuals to an isolated island can sculpt the evolution of their species. And we know for a fact that more massive events like volcanoes and earthquakes certainly did this. In fact it was geologist Charles Lyell's descriptions of such seismic events that started Darwin down the path to evolution and natural selection. Thus it seems that if one could hypothetically run "what if" scenarios, it's very unlikely that anything approximating modern humans or dinosaurs would ever arise again.

But this answer is not as obvious as it sounds. The biologist Simon Conway Morris has put forth a competing scenario in which certain universal features of evolution guarantee the presence of common adaptations during the evolution of species. This argument is based on what's called "convergent evolution", which essentially refers to the existence of common solutions to diverse evolutionary problems. A typical example would be all the kinds of mammals, fish, amphibians and reptiles whose bodies are adapted to swimming. In most of these creatures you see similarly shaped, streamlined bodies, muscles and bones which are suited for swimming. Another principle concerns homologous structures (and not convergent evolution, as a commentator reminded me) like the digits of the hand, whose basic plan seems to be conserved across species. Indeed, homologous structures provide some of the strongest pieces of evidence for common evolutionary origins. Thus Morris's argument is that even if the evolutionary tape were to run again, something similar to humans, dinosaurs, frogs and eagles (although the details would certainly differ) would be seen if the process were left to itself for a few billion years. This interpretation acquires even greater significance when applied to humans; would such an intelligent, successful and self-centered species as Homo sapiens have evolved in an alternative evolutionary universe?

There is a lot of interesting discussion to be had about this topic. It's equally fascinating when applied to chemistry and leads to similar questions. For instance, what are the chances that the foundational compounds of life - DNA, RNA, amino acids, sugars, ATP - would have formed had evolution been left to run again with different tweaks and quirks of fate? Personally I find the questions somewhat easier to answer in the case of chemistry since the formation of many of these compounds is governed by relatively simple energetic arguments. ATP's express purpose is to make otherwise unfavorable reactions possible by coupling them to the breaking of its high-energy bonds, and if not ATP, it's hard to see why some other compound performing the same function could not have evolved. A great example of an attempt to answer these questions is Frank Westheimer's classic paper "Why Nature Chose Phosphates", in which he points to the unique properties of phosphate that make it such a dominant player in life's workings, both in metabolism and heredity.

Breslow's question is therefore quite sensible and its implications are fascinating to ponder. What would 2012 have looked like had the dinosaurs not been wiped out by an asteroid? Would they still have been alive, and would humans have had the unfortunate fate of co-existing with them? Would they be as smart as humans? Naturally such scenarios would have profoundly affected the evolution and character of our civilization. Or would the dinosaurs have precluded the rise of Homo sapiens, perhaps by nipping our scarce population in the bud and making us extinct? Or would they have become extinct themselves through some other cause, perhaps extreme climate change? What indeed would planet earth have looked like had it still been ruled by dinosaurs?

Naturally we don't know the answers to these questions. But Breslow's little joke at the end, while sounding silly, inadvertently asks a very important and thought-provoking question. Too bad it was all obscured by the charges of self-plagiarism.

A new way to look for life on other planets

One of the fundamental properties of light is its polarization, which refers to the spatial orientation of the electric and magnetic fields constituting a light wave. There are many fascinating facts about polarized light which are of both academic and applied interest, but perhaps the most intriguing one is the ability of chiral or handed organic compounds to rotate the plane of polarization of polarized light. This has proven to be an invaluable tool for chemists in detecting and assigning the structures of molecules, especially biological molecules like sugars and amino acids which tend to exist in only one form (left- or right-handed) in nature.

This fact has now been put to good use by a team of Chilean, Spanish and British astronomers who, in a paper in this week's Nature, have come up with a novel way to detect biosignatures of life on other planets. They demonstrate their method by detecting the polarization signatures of water, clouds and vegetation in earthshine. Earthshine is sunlight that has been reflected by the earth onto the moon and then reflected by the moon back towards the earth. It turns out that earthshine contains polarized light whose polarization has been shaped by the earth's atmosphere and vegetation and their constituent molecules. Specific molecules polarize specific wavelengths of light, so scanning the whole range of wavelengths is essential. Crucially, the presence of vegetation is manifested in the polarization of the light by chlorophyll. Chlorophyll is special because it absorbs light up to about 700 nm. Beyond 700 nm (in the infrared region) it sharply reflects it, leading to a spike in the spectrum known as the "red edge". That is why plants glow strongly in infrared light. The red edge is a major part of the earth's reflected light as detected in outer space. It's a remarkable phenomenon which could be put to good use to detect similar life-enabling pigments on other planets.

The team used the Very Large Telescope in Chile to analyze reflected earthshine during two months, April and June. The two different times were necessary to observe two different faces which the earth presented to the moon; one face was predominantly covered by land and vegetation and the other mainly by water. The earthshine arising from the two faces would be characterized by different spectroscopic signatures, one belonging mainly to vegetation and the other to water. Across the range of wavelengths they observed peaks and discontinuities corresponding to oxygen, water vapor and chlorophyll. They then compared these observations to calculations from a model that contains as parameters varying proportions of vegetation and ocean. There is some uncertainty because of assumptions about cloud structure, but overall there is good agreement. Remarkably, the signal is sensitive to even a 10% difference in vegetation cover.
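To give a flavor of the kind of model comparison involved, here is a toy sketch in Python: it assumes the measured polarization spectrum is a simple weighted sum of a "vegetation" template and an "ocean" template and recovers the vegetation fraction by least squares. All the numbers are invented for illustration; the actual paper uses detailed radiative transfer models that include clouds and aerosols.

```python
import numpy as np

# Made-up polarization templates (fraction of light polarized) at a few wavelengths (nm).
# The vegetation template drops past ~700 nm to mimic the chlorophyll red edge.
wavelengths    = np.array([500, 600, 700, 750, 800])
veg_template   = np.array([0.10, 0.09, 0.08, 0.03, 0.02])
ocean_template = np.array([0.12, 0.11, 0.10, 0.09, 0.08])

# Pretend observation: 30% vegetation, 70% ocean
measured = 0.3 * veg_template + 0.7 * ocean_template

# Least-squares fit of the vegetation fraction f (ocean fraction = 1 - f)
A = (veg_template - ocean_template).reshape(-1, 1)
b = measured - ocean_template
f = float(np.linalg.lstsq(A, b, rcond=None)[0][0])
print(f"recovered vegetation fraction: {f:.2f}")  # ~0.30
```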

The technique is fascinating and promises to be useful in making gross detections of water, oxygen and vegetation on other earth-like planets, all of which are strong indicators of life. Yet it is clear that earthshine presents a relatively simple test case, mainly because of the proximity of the moon which is the source of the polarized light. By astronomical distances the moon is right next to the earth and there's very little in the intervening medium by way of dust, ice and other celestial bodies. The situation is going to be quite different for detecting polarized reflections from planets that are tens or hundreds of light years away. A few thoughts and questions:

1. The authors note that the lunar surface partially depolarizes the light. Wouldn't this happen much more with light coming from very far away that has hit multiple potentially depolarizing surfaces? Light could also be depolarized by dense atmospheres or by interstellar media like dust grains and ice grains. More interestingly, the polarization could also be reversed or affected by chiral compounds in outer space.


2. A related question: how intense does the light have to be when it reaches the detectors? Presumably light from worlds that are many light years away is going to strongly interact with surfaces and interstellar media and lose most of its intensity.


3. It's clear that chlorophyll is responsible for the signature of vegetation. Alien plants may not necessarily utilize chlorophyll as the light harvesting pigment, in fact they may well be equipped to use alternative wavelengths. There could also be life not dependent on sunlight. How we will be able to interpret signatures arising from other unknown pigments and constituents of life is an open question.


4. It is likely that advanced civilizations have discovered this method of detecting life. Could they be deliberately broadcasting polarized light to signal their presence? In the spirit of a past post, could they do this with specific molecules like amino acids, isotopically labeled molecules or stereoisomers? How sensitive is the polarization to molecular concentration? Any of these compounds would strongly suggest the presence of intelligent life which has developed the technology for the synthesis and purification of organic molecules.




Sterzik, M., Bagnulo, S., & Palle, E. (2012). Biosignatures as revealed by spectropolarimetry of Earthshine. Nature, 483(7387), 64-66. DOI: 10.1038/nature10778

The fine-tuning problem in protein folding: Is there a protein multiverse?

One of the deepest questions physicists have struggled with in the last half-century is the so-called "fine-tuning problem". The fine-tuning problem asks why the values of the fundamental constants (Planck's constant, the speed of light, the mass of the electron etc.) are what they are.

The reason why physicists are so worried about the values of these constants is because presumably if the values were even a little different from what they are, the universe and life as we know them would not exist. For instance, even a slight weakening of the strong nuclear force that holds nucleons together would prevent the formation of atoms and thus of all complex matter. Similarly, a slight change in the electromagnetic force would fundamentally alter the interactions between atoms crucial for the formation of chemical bonds between the molecules of life.


There thus seems to be some factor during the evolution of the universe responsible for fine-tuning the values of the constants to their present values within an incredible window of accuracy. The fine-tuning problem is a real problem, not least because some religious believers point to the unchangeable and precise values of the constants as the work of some kind of intelligent designer.


In the last few decades there have been a few attempts to resolve the fine-tuning problem. Probably the most exotic and yet in some ways the most reasonable solution has been to assume the existence of multiple parallel universes. Multiple universes (or multiverses) were first proposed by Hugh Everett, a brilliant and troubled physicist who worked on nuclear weapons targeting, as a way around the so-called "measurement problem" in quantum mechanics. The measurement problem is fundamentally embedded in the quantum description of our world. The unsettling thing (and one that troubled Einstein) about quantum mechanics is that it assigns probabilities to certain events, but provides no answer as to why only one of those events materializes when we make a measurement. Everett worked around this conundrum by assuming that all possible events actually do take place, but only one of them is part of our universe; the rest of the events also occur, but in parallel universes. Everett's interpretation, which was regarded as a fringe explanation for years (thus making it successfully into science fiction books), is now taken seriously by many physicists.


Being a problem associated with the most fundamental constants of nature, the fine-tuning problem makes its way into all "higher-level" sciences including chemistry and biology. In chemistry the fine-tuning problem takes on a fascinating form and entails asking why certain molecules have become fundamental to living systems while other more or less equivalent alternatives have been discarded during evolution. For instance, why alpha amino acids (and why not beta or gamma amino acids)? Why left-handed amino acids and right-handed sugars? Why phosphates and not sulfates or silicates? In retrospect one can think of answers to these questions based on factors like stability, versatility and ease of synthesis, but ultimately we may never know. However, the fine-tuning problem also manifests itself in one of the most fundamental processes in the workings of life: protein folding.


The protein folding problem is well-known: given an amino acid sequence, how can a protein fold into a single three-dimensional structure and reject the countless number of other possible structures it could fold into? What is even more remarkable about this problem is that several thousand of those other structures are almost equienergetic with the preferred folded structure and yet they do not form. In fact it is this energetic equivalency between several structures that plagues all modern computational protein folding algorithms; the problem is not so much to generate the one correct structure as it is to distinguish it from other structures that are very close to it in energy. The fundamental assumption in all these algorithms is that the correctly folded structure is the lowest-energy structure. But that does not mean it differs disproportionately in energy from the other solutions. Therein lies the rub.

Ever since I heard about the protein folding problem this issue has bothered me, as I am sure it has others. Consider that the free energy difference between two different protein structures may be only 5 kcal/mol or so, about the energy of a single hydrogen bond. Yet a protein, when it folds, unerringly picks only one of the two structures. How can nature manage to pick the right solution every one of the millions of times it folds proteins inside our bodies each second? To put it another way, here's the "fine-tuning problem" in protein folding: why does a protein always adopt one and only one correct structure even when many other structures, very similar in energy and presumably in function, are available to it?
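To put rough numbers on this sensitivity, here is a minimal sketch in Python that computes the Boltzmann populations of a native fold and two hypothetical "decoy" folds at body temperature; the energy gaps are invented purely for illustration.

```python
import math

R = 0.0019872  # gas constant, kcal/(mol K)
T = 310.0      # body temperature, K

# Hypothetical free energies (kcal/mol) relative to the native fold,
# made up to illustrate how Boltzmann weighting treats small energy gaps.
folds = {"native": 0.0, "decoy A": 2.0, "decoy B": 5.0}

weights = {name: math.exp(-g / (R * T)) for name, g in folds.items()}
total = sum(weights.values())
for name, w in weights.items():
    print(f"{name:8s}: population ~ {100.0 * w / total:6.3f}%")
# A gap of only 2 kcal/mol still leaves the decoy populated at a few percent,
# while a 5 kcal/mol gap pushes it below 0.1%.
```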

From a retrospective evolutionary standpoint the answer to this conundrum is perhaps not too surprising. Imagine what would happen if every time a newly synthesized copy of a given protein folded, it formed a slightly different structure. This heterogeneity and lack of quality control would play havoc with the intricate signaling networks in our body. Evolution simply cannot afford to have different three-dimensional structures for the same protein, no matter how slightly different they are. No wonder that quality control in protein folding is extreme. Of course nature does make occasional mistakes, but wrongly folded proteins are quickly degraded and destroyed.


Nonetheless, the original dilemma persists and metamorphoses into a further interesting question: isn't it possible for a protein structure that is slightly different from the one true structure to be functional? There are two possible answers here. Perhaps the alternative structure was functional during evolution at one point, but competition from the slightly better structure weeded out the former from the gene pool. If this is the case, could there be a chance that there is some unknown form of life in which this other slightly different yet perfectly reasonable structure still exists, happily doing its job with no evolutionary pressure around to discard it? The best way to answer this question is to compare proteins from different species, something that has been done extensively for years. But such a comparison usually reveals protein homology, in which the sequences themselves are slightly different and yet perform similar functions.

That's not what we are looking for. What we are looking for is "two" proteins with absolutely identical amino acid sequences which in two different creatures adopt slightly different three-dimensional structures and perform similar functions. Or they could even perform different functions, thus validating evolution as a force that puts slight differences to optimal use. Let us call these proteins with identical sequences but different functional folds "fold mutants". To my knowledge such fold mutants have not yet been found.

A second albeit more exotic solution to the fine-tuning problem appeals to a possible "protein multiverse". The argument here is that the kind of protein structures which we observe are indeed not the only feasible or functional ones. There are in fact other structures which are not only well-folded but also functional. For some reason, evolution, during its intricate dance of maintaining order, structure and function, chose to discard these structures in favor of ones that were more functionally relevant in this universe. However there is no reason why they could not have been picked in a different universe, where the laws were slightly different. There is another way to think of a protein multiverse: as a set of valleys and peaks where the valleys correspond to different folded structures. Such a metaphor has also been used by physicists to argue that our universe with its own set of fundamental constants corresponds to one local minimum in this "multiverse landscape", with other universes populating the other dips. Similarly we could imagine a protein multiverse landscape in which different protein folds occupy different valleys; we favor a particular fold only because it inhabits our own valley, but that does not stop other folds from corresponding to the others.

In a different universe, hemoglobin could have folded into a marginally different structure in which it bound not oxygen but some other small ligand like ammonia more efficiently. Such a fold mutant of hemoglobin would be useful to creatures which survive in an ammonia-rich environment (ammonia, like water, can serve as a liquid solvent, albeit at much lower temperatures). Or one could imagine a fold mutant of carbonic anhydrase, which catalyzes the conversion of carbon dioxide to bicarbonate, operating at a different pH or a different temperature. Fold mutants of known proteins could have every conceivable property different from their original "correctly" folded counterparts, including shape, size, polarizability and stability. The fold mutants could be exquisitely adapted to living conditions in their parent universe. Their special folds could be stabilized by environments differing from those found on earth in ionic strength, hydrogen bonding capability and hydrophobicity. For a given protein, this alternative fold could in fact be the lowest in energy, and its companion fold found in our universe could be slightly higher in energy.

This kind of speculation immediately suggests two explorations. One is to look for fold mutants in other parts of the universe. This search would be part of the search for extraterrestrial life that has been going on for years. But the point is that if we happen to find fold mutants of existing proteins on other planets or in other inhospitable environments, these mutants would provide powerful support for this solution to the fine-tuning problem. They would tell us that the fine-tuning problem exists only in our narrow-minded anthropocentric imagination, that there could indeed be many folds of the same protein that are robust and functional, and that we just happen to inhabit a part of the universe that stabilizes our favorite fold.


The other, more readily testable experiment asks if we can produce different functional folds from the same amino acid sequence by varying the experimental conditions. It's of course well-known to crystallographers and protein chemists that slight changes in physicochemical conditions can play havoc with the structure and function of their proteins. But most of the time these slight changes in conditions produce misfolded protein junk. Is there an example of someone slightly (or even radically) varying conditions in a test tube and producing two different folds of the same protein that are both stable and functional? If there is one I would be very eager to know about it.


On the other hand, if it turns out that it's impossible to find two different functional folds for a single protein, such an observation might well lend credence to the physicists' multiverse with differing fundamental constants. It might well be that under the present values of fundamental constants, it is impossible to stabilize a slightly different protein fold and make it functional. Perhaps only a slight albeit conceptually radical restructuring of the fundamental constants could result in a universe that is friendly to fold mutants. Such a universe would still enable the creation of complex matter through the appropriate combination of the constants, but it would indeed result in life very different from what we know.


The protein multiverse could thus help resolve the fine-tuning problem in protein folding and make biochemists and physicists part of the same multiverse fraternity. More importantly, it could once again reinforce the diversity of creation. One could have different universes with the same fundamental constants but different protein folds, or different universes with entirely different combinations of the constants themselves. Take your pick.

If uncovered, such diversity would only echo J. B. S. Haldane's remark that the universe is "not only queerer than we suppose, but queerer than we can suppose".

Gray and Labinger on chemistry's big problems

The irrepressible Harry Gray and his colleague Jay Labinger have an editorial in Science. The editorial asks some of the questions that we here and others have asked about chemistry: does chemistry have any "big questions", akin to those in physics and biology, that would make it attractive to the public? The editorial is a little short on detail, but the authors rightly propose that there are indeed a few big questions in chemistry; they are just not as easily visible and sometimes seem to be hidden behind the veil of biology and physics. As an example of a "big question", the authors cite the cracking of the photosynthesis puzzle, which involves both intimately understanding the process and duplicating it in the laboratory.


"We suggest that the noble themes in chemistry are there, but may be a little harder to see. One can look outward to the universe, or inward to the mind, and recognize the complexity and profundity of the questions to be answered. The problems that contemporary chemistry tackles are just as fundamental, but may not be as immediately obvious to the non-chemist. We could illustrate this claim in many ways, but perhaps one has received the broadest sustained attention: photosynthesis. How can light be harvested and converted to electrochemical energy that is sent off so efficiently in two directions: to both reductively generate the building blocks of life from carbon dioxide and oxidize water to oxygen? This extraordinarily complex question, to be sure, is closely linked to aspects of both physics (but cannot be completely reduced to physics) and biology; but the answer clearly lies in the realm of chemistry. And the workings of each individual component, as well as the entire integrated system that nature has constructed, pose questions that are fully as deep and inspirational as those in any other field of science. Moreover, on the practical side, the answers will be needed to devise methods for making comparably effective use of solar power, which at present appears to be the only resource of sufficient magnitude to cope with the world’s long-term energy needs."

Photosynthesis is indeed a hard, rewarding unsolved problem, but I am disappointed that the authors did not instead pick the origin of life as the one shining chemical puzzle of all time. If you really had to pick one problem that's as important as the truly big problems in cosmology or biology, you would pick the origin of life. It is of enduring value to working chemists, and it is easier to pitch to the public as a profound philosophical conundrum than photosynthesis. And the origin of life has the additional advantage that unlike photosynthesis, it probably cannot be definitively solved even in principle, adding to the mystery and the everlasting allure.

Why should chemists study the origin of life?

In the past we have alluded to the fact that the origin of life (OOL) is a quintessentially chemical problem. But from a professional standpoint, what's in it for chemists and why should they care? Some thoughts:

1. OOL is the ultimate interdisciplinary playing field: No matter what kind of chemist you are, OOL provides an opportunity for you to flex your intellectual muscles. Organic chemists can of course contribute directly to OOL research by speculating on and studying the kinds of reactions that would have been important in molecular origins. Some reactions such as the Strecker reaction (for amino acid synthesis) and the formose reaction (for carbohydrate synthesis) have already been proposed as frontrunners for the genesis of life's molecules. Both reactions have been known for well over a century, but it was only relatively recently that the concrete connection to OOL was made. What other reactions in the organic chemist's bag of tricks are applicable to OOL? The question should tickle organic chemists' brain cells like no other.

Other kinds of chemists also have a lot of potential contributions to make. The connection to biochemistry is obvious; for instance, how did the crucial watershed event of membrane formation come about and how did the earliest enzymes form? Inorganic chemists have made new inroads into OOL research, especially through pioneering research implicating metal sulfides in deep sea hydrothermal vents as precursors to organic life and inorganic surfaces (such as clays) as templates for primitive evolution and polymerization. Analytical chemists can bring their impressive phalanx of instrumentation like mass spectrometry and chromatography to bear on the problem. And theoretical and computational chemists can contribute to OOL by performing calculations on the forces operating in the processes of self-assembly that must have been key during the early moments of molecular organization. Of course, none of these areas is insular and every problem stated above demands the attention of every conceivable kind of chemist. Thus, there is a slice of pie in OOL for every chemist who dares to dream and the field guarantees an unlimited number of interdisciplinary collaborations.

2. OOL is a proving ground for basic chemical concepts: Just as organic synthesis is supposed to provide the ultimate training laboratory for fundamentals like spectroscopy, mechanism and physical organic chemistry, OOL provides an opportunity to review and probe every basic chemical concept we can imagine in every chemical field. For instance, why are the pKa values of amino acids what they are? What would happen if they were different? Or the famous question: why did nature choose phosphates? This question leads us to basic discussions of nucleophilicity, pKa, steric effects, thermodynamics, kinetics, atomic sizes and myriad other fundamental concepts. Other questions may include: Why alpha amino acids? Why ribose? Why these twenty amino acids and not others? We will never know the ultimate answers to these questions (since there was a fair element of chance involved), but simply asking them forces us to re-evaluate fundamental concepts of chemistry, an exercise that can be enormously rewarding and informative. OOL has spurred fundamental research on chirality, self-assembly (more on this in the next point) and free energy calculations, taking us from not knowing anything to fine-tuning our understanding and knowing something. As a side benefit, when fanciful-sounding announcements appear, we can count on this knowledge to provide answers and to level informed criticism.

3. OOL forces us to understand self-assembly: From a practical standpoint this may be the greatest benefit of OOL research. Self-assembly is undoubtedly the single most important process in life's beginnings, and it also turns out to be of paramount importance in understanding everything else, from how Alzheimer's disease proteins fold to how surfactants sequester dirt to how we can construct supramolecular architectures for solar energy research. The workhorse in self-assembly is our cherished friend the hydrogen bond. Understanding the hydrogen bond thus opens the door to understanding self-assembly. In the past few years we have gained extremely valuable insights into hydrogen bonding, partly obtained through OOL research. For instance, studies of hydrogen bonding in DNA base pairing have revealed the subtle interplay between thermodynamics and electrostatics that stabilizes nucleic acids. Similar effects naturally operate in protein folding. The knowledge gained from such studies can help in the design of everything from novel proteins to supramolecular arrays. The same kind of self-assembly leads to insights into OOL questions addressing fundamental issues such as the formation of the first cell. The practical applications of self-assembly and OOL are thus two ends of a cycle which feed into each other, contributing and utilizing important insights that fuel both basic and applied research. Understand self-assembly and you will not only inch closer to understanding origins but will also be able to harvest knowledge from the field toward practical ends.

4. OOL is the ultimate open-ended problem: Technically most problems in science are open-ended, but OOL is literally a problem without end. There is no conceivable way in which we will hit on the single, unique solution that jump-started life at a molecular level. We can inch tantalizingly closer to the plausible, but there is still a gigantic leap between the plausible and the certain. Should we despair? Absolutely not. If science can be defined as the "endless frontier", then OOL is the poster child for this definition. OOL will promise us an unending string of questions and plausible explanations until the end of the human species. This will bring us a proliferation of riches in basic chemical understanding. As scientists in general and chemists in particular, we should be ecstatic that OOL has given us a perpetual question machine to do research, discuss, debate and do more research. OOL like few other questions in science promises an infinitude of moments for reveling in the pleasure of finding things out.

And ultimately of course, OOL will help us take one more modest step toward answering the question which human beings have asked since antiquity: "Where do we come from?"

What more could we want?