Field of Science

Gene duplication and molecular promiscuity

Head over to the Scientific American blog for a post that discusses a recent article suggesting a link between gene duplication and the promiscuity of enzymes involved in secondary metabolism in plants. Since gene duplication frees up one copy of the gene to "experiment", that copy can potentially accumulate mutations that confer the ability to bind and process more than one substrate. We should partly thank gene duplication for giving us many of the secondary metabolites that are used as drugs (both recreational and non-recreational), flavorings and food products.

The Higgs boson and the future of science

My latest post on the Scientific American blog network ties together several threads about reductionism, emergence and the nature of scientific problems which I have explored on this blog.


Philip Anderson: Anderson first described the so-called Higgs mechanism and also fired the first modern salvo against strong reductionism (Image: Celeblist)
The discovery of the Higgs boson (or the "Higgs-like particle" if you prefer) is without a doubt one of the signal scientific achievements of our time. It illustrates what sheer thought - aided by data of course - can reveal about the workings of the universe, and it continues a trend that lists Descartes, Hume, Galileo and Newton among its illustrious forebears. From sliding objects down an incline to smashing atoms at almost the speed of light in a 27-kilometer tunnel, we have come a long way. Dissecting our origins and the universe around us scarcely gets any better than this.

Yet even as the exciting discovery was being announced, I could not help but think about what the Higgs does not do for us. It does not speed up the time needed to discover a new cancer drug. It does not help us understand consciousness. It does not tell us how life began or whether it exists elsewhere in the universe. It does not explain romantic love, how to design the best solar cell, why people have certain political preferences and how exactly to predict the effects of climate change. In fact we can safely predict that the discovery of the Higgs boson, as consciousness-elevating as it is, does not impact the daily work of 99% of all pure and applied scientists in the world.

I do not say all this to downplay the discovery of the particle, which is an unparalleled triumph of human thought, hard work and experimental ingenuity. I also do not say this to make the obvious point that a discovery in one field of science does not automatically solve problems in other fields. Rather, I say this to probe the deeper reality beyond that point, to highlight the multifaceted nature of science and the sheer diversity of problems and phenomena that it presents to us at every level of inquiry. And I say this with a suspicion that the Higgs boson may be the most fitting tribute to the limitations of what has been the most potent philosophical instrument of scientific discovery - reductionism.

In one sense the discovery of this fundamental component of matter can be seen as the culmination of reductionist thinking, accounting as it does for the very existence of mass. Reductionism is the great legacy of the twentieth century, a philosophy whose seeds were sown when Greek philosophers started mulling the nature of matter. The method is in fact quite intuitive; ever since they stepped down from the trees, human beings have tried to solve problems by breaking them down into simpler parts. In the twentieth century the fruits of reductionism have been nothing short of awe-inspiring. Reductionism is what told us that molecules are made of atoms, that the universe is expanding, that DNA is a double helix and that you can build lasers and computers. The reductionist ethic has given us quantum mechanics, relativity, quantum chemistry and molecular biology. Over the centuries it has been used by its countless practitioners as a fine scalpel which has laid bare the secrets of nature. In fact many of the questions answered using the reductionist method were construed as being amenable to this method even before their answers were provided; for instance, how do atoms combine to form molecules? What is the basic nature of the gene? What are atoms themselves made up of?

Yet as we enter the second decade of the twenty-first century, it is clear that reductionism as a principal weapon in our arsenal of discovery tools is no longer sufficient. Consider some of the most important questions facing modern science, almost all of which deal with complex, multifactorial systems. How did life on earth begin? How does biological matter give rise to consciousness? What are dark matter and dark energy? How do societies cooperate to solve their most pressing problems? What are the properties of the global climate system? It is interesting to note at least one common feature among many of these problems: they result from the buildup rather than the breakdown of their operational entities. Their signature is collective emergence, the creation of attributes which are greater than the sum of their constituent parts. Whatever consciousness is, for instance, it is definitely a result of neurons acting together in ways that are not obvious from their individual structures. Similarly, the origin of life can be traced back to molecular entities undergoing self-assembly and then replication and metabolism, a process that supersedes the chemical behavior of the isolated components. The puzzles of dark matter and dark energy likewise have as their salient feature the behavior of matter at large length and time scales. Studying cooperation in societies essentially involves studying group dynamics and evolutionary conflict. The key processes underlying all these problems seem almost intuitively to involve the opposite of reduction; they all result from the agglomeration of molecules, matter, cells, bodies and human beings across a hierarchy of unique levels. In addition, and this is key, they involve the manifestation of unique principles emerging at every level that cannot be merely reduced to those operating at the underlying level.
The traditional picture of science asserts that X can be reduced to Y. Reality is more complicated (Image: P. W. Anderson, Science, 1972)

A classic example of emergence: the exact shape of a termite mound is not reducible to the actions of individual termites (Image: Wikipedia Commons)

This kind of emergence has increasingly come to be seen as key to the continued unraveling of scientific mysteries. While emergence had been implicitly appreciated by scientists for a long time, its first modern salvo was undoubtedly fired by a 1972 paper in Science by the Nobel Prize-winning physicist Philip Anderson titled "More is Different", a title that has turned into a kind of clarion call for emergence enthusiasts. In his paper Anderson (who incidentally first came up with the so-called Higgs mechanism) argued that emergence is nothing exotic; for instance, a lump of salt has properties very different from those of its highly reactive components sodium and chlorine. A lump of gold evidences properties like color that don't exist at the level of individual atoms. Anderson also appealed to the process of broken symmetry, invoked in all kinds of fundamental events - including the existence of the Higgs boson - as being instrumental for emergence. Since then, emergent phenomena have been invoked in hundreds of diverse cases, ranging from the construction of termite mounds to the flight of birds. The development of chaos theory beginning in the 1960s further illustrated how very simple systems could give rise to very complicated and counterintuitive patterns and behavior that are not obvious from the identities of the individual components.

Many scientists and philosophers have contributed considered critiques of reductionism and an appreciation of emergence since Anderson wrote his paper. These thinkers make the point that reductionism not only fails in practice (because of the sheer complexity of the systems it purports to explain), but also fails in principle on a deeper level. In his book "The Fabric of Reality", for instance, the Oxford physicist David Deutsch makes the compelling point that reductionism can never explain purpose; to drive home this point he asks whether it can account for the existence of a particular atom of copper on the tip of the nose of a statue of Winston Churchill in London. Deutsch's answer is a clear no, since the fate of that atom was decided by contingent, emergent phenomena, including war, leadership and adulation. Nothing about the structure of copper atoms allows us to directly predict that a particular atom will someday end up on the tip of that nose. Chance plays an outsized role in such developments, and reductionism offers little help in understanding these historical accidents.
Complexity theorist Stuart Kauffman, who has written about the role of contingency as a powerful argument against strong reductionism (Image: Wikipedia Commons)

An even more forceful proponent of this contingency-based critique of reductionism is the complexity theorist Stuart Kauffman (supposedly an inspiration for the Jeff Goldblum character in "Jurassic Park"), who has laid out his thoughts in two books. Just like Anderson, Kauffman does not deny the great value of reductionism in illuminating our world, but he also points out the factors that greatly limit its application. One of his favorite examples is the role of contingency in evolution, and the object of his attention is the mammalian heart. Kauffman makes the case that no amount of reductionist analysis could tell you that the main function of the heart is to pump blood. Even in the unlikely case that you could predict the structure of hearts and the bodies that house them starting from the Higgs boson, such a deductive process could never tell you that of all the possible functions of the heart, the most important one is to pump blood. This is because the blood-pumping action of the heart is as much a result of historical contingency and the countless chance events that led to the evolution of the biosphere as it is of its bottom-up construction from atoms, molecules, cells and tissues. As another example, consider the alpha amino acids which make up all proteins on earth. These amino acids come in two mirror-image varieties, left-handed and right-handed. With very few exceptions, all the functional amino acids that we know of are left-handed, but there's no reason to think that right-handed amino acids wouldn't have served life equally well. The question then is, why left-handed amino acids? Again, reductionism is silent on this question, mainly because the original use of left-handed amino acids during the origin of life was, to the best of our knowledge, a matter of contingency. Now some form of reductionism may still explain the subsequent propagation of left-handed amino acids and their dominance in biological processes by resorting to molecular-level arguments regarding chemical bonding and energetics, but this description will still leave the question of origins unresolved. Even something as fundamental as the structure and function of DNA - whose elucidation was by all accounts a triumph of reductionism - is much better explained by principles operating at the level of chemistry, like electrostatic attraction and hydrogen bonding, than by anything at the level of particle physics.

Life as we know it is based on left-handed amino acids. But there is no reason why right-handed amino acids could not sustain life (Image: Islamickorner)
Reductionism then falls woefully short when trying to explain two things: origins and purpose. And one can see that if it has problems even when dealing with left-handed amino acids and human hearts, it would be in much more dire straits when attempting to account for, say, kin selection or geopolitical conflict. The fact is that each of these phenomena is better explained by fundamental principles operating at its own level. Chemistry has its covalent bonds and steric effects, geology has its weathering and tectonic shifts, neurology has its memory potentiation and plasticity, and sociology has its conflict theory. And as far as we can tell, these sciences will continue to progress without needing the help of Higgs bosons and neutrinos. This also suggests that the discovery of a single elegant equation linking the four fundamental forces (the purported "theory of everything"), while undoubtedly representing one of the greatest intellectual achievements of humanity, will give sociologists and economists little pause for thought, even as they continue to study the stock market and democracies using their own special toolkit of bedrock principles.

This rather gloomy view of reductionism may make it sound as if science is at a dead end, or at the very least has started collapsing under the weight of its own success. But such a view would be as misplaced as the announcements about the "end of science" which have surfaced every couple of years for the last two hundred years. Every time the end of science has been announced, science itself proved that claims of its demise were vastly exaggerated. Firstly, reductionism will always be alive and kicking, since the general approach of studying anything by breaking it down into its constituents will continue to be enormously fruitful. But more importantly, what we are seeing is not so much the end of reductionism as the beginning of a more general paradigm that combines reductionism with new ways of thinking. The limitations of reductionism should be seen as a cause not for despair but for celebration, since they mean that we are now entering new, uncharted territory. There are still an untold number of deep mysteries that science has to solve, ranging from dark energy, consciousness and the origin of life to supposedly more pedestrian concerns like superconductivity, cancer drug discovery and the behavior of glasses. Many of these questions require interdisciplinary approaches that result in the crafting of fundamental principles unique to the problem at hand. Such a meld will involve reductionism as only one component.

Now there are some who may not consider these problems "fundamental" enough, but that is because they are peering through the lens of traditional twentieth-century science. One of the sad casualties of the reductionist undertaking is a small group of people who think that cosmology and particle physics constitute the only things truly worth doing and the epitome of fundamental science; the rest is all detail that can be filled in by second-rate minds. This is in spite of the inconvenient fact that perhaps 80% of physicists are not concerned at all with such fundamental questions. But you would be deluding yourself if you think that turbulence in fluids (still unsolved) is a second-rate problem for second-rate minds, especially if you remember that Heisenberg thought that God would be able to provide an explanation for quantum mechanics but not for turbulence. The fact is that "pedestrian" concerns like superconductivity have engaged some of the best minds of the last fifty years without fully yielding to them, and at their own levels they are as hard as the discovery of the Higgs boson or the accelerating universe. Exploring these worthy conundrums is every bit as exciting, deep and satisfying as any other endeavor in science. Those who are wondering what's next should not worry; a sparkling journey lies ahead.

To guide us on this journey all we have to remember are the words of one of the twentieth century's great reductionists and one of Peter Higgs's heroes. Paul Dirac closed his famous text on quantum theory with words that will hopefully be as great a portent for the emergent twenty-first century as they were for the reductionist twentieth: "Some new principles are here needed".

References:
1. P. W. Anderson, "More is Different", Science, 1972, 177, 393
2. David Deutsch, "The Fabric of Reality", 2004
3. Stuart Kauffman, "Reinventing the Sacred", 2009; "At Home in the Universe", 1996
Other reading:
1. Terrence Deacon, "Incomplete Nature", 2011
2. John Horgan, "The End of Science", 1997
3. Robert Laughlin, "A Different Universe", 2006

Change is here

As they say, the essence of chemistry is change, so it seems only fitting that I will be moving part of my blogging to a new blog (with the same name) on the Scientific American blogging network. My first post talks about what gets me out of bed every morning (it's not breakfast). Many thanks to Bora Zivkovic for this great opportunity. At SciAm I will be joining a first-rate cast of writers who between them seem to have every field of human inquiry covered.

Am I leaving FoS then? Far from it. FoS has introduced me to some great bloggers and has also provided me with a fine platform. The plan is to save the more technical and drug discovery-related posts for this site while holding forth on other topics on SciAm. That should ensure that both sites have something different to offer. I will of course be linking to posts on the other site; you could comment on those posts either there or here.

As always, what makes a blog successful are the readers and commentators. Through Twitter, Facebook and mentions on other blogs, many of my posts have gotten more attention than they deserved and for that I am thankful; you should believe me when I say that I have learnt much more from you than you could ever learn from my posts. Hopefully you will continue to hang around both this site and the other one, and I hope to continue benefiting from the interactions.

Zn(III)? Not so fast

Chemists love rogues, oddballs which seem to defy the rules and bond, react, and exist on their own terms. The rogues are valuable because they push the boundaries, teach us about new principles of structure and reactivity and challenge cherished preconceptions. One of the most striking rogues in the history of chemistry was the compound xenon hexafluoroplatinate, which shattered the belief in the non-reactivity of noble gases. Another immensely productive rogue was the first stable carbocation, a species that was considered too unstable to isolate until George Olah surmounted the energy barrier. Some rogues are literal rogues in the sense that they need to be incarcerated in order to prevent their unruly bonds from going haywire; witness the classic taming of cyclobutadiene by Donald Cram. There's no doubt about it; rogues' galleries are shining gems in the chemical establishment.

A couple of months ago it seemed that another minor rogue had made its appearance. Students around the world know that the most stable oxidation state of zinc is +2. The rationale is simple; unlike transition metals like iron and nickel, zinc has a completely filled d subshell, with an electronic configuration of 3d10 4s2. It is therefore quite happy to lose its two outermost 4s electrons and remain stable. Thus it came as a surprise when a paper detailing theoretical calculations on zinc compounds predicted the existence of a complex with zinc in the +3 oxidation state, Zn(III). That prediction would have raised the eyebrows of a million freshmen memorizing transition metal trends for their final exams. The existence of the unusual zinc was based on quantum chemical calculations done by Puru Jena's group at VCU, and their basic rationale was that a ligand that was oxidizing and electronegative enough would essentially force the metal to give up a third electron. They seemed to find such a complex in Zn(AuF6)3. On the face of it that would make sense, but it's worth keeping in mind that nature loves filled orbitals; for instance, even a ligand as oxidizing and electronegative as CF3 does not force copper to adopt the +2 state (copper's most stable oxidation state being +1, a result of losing the lone s electron of its d10s1 configuration and keeping the filled d shell).
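For readers curious about what the nuts and bolts of such calculations look like, here is a minimal sketch of my own (not the protocol used in either paper) using the open-source PySCF package: a small DFT calculation on the simple, well-characterized molecule ZnF2, followed by a Mulliken population analysis as a crude gauge of how much electron density the zinc has surrendered to its ligands. The geometry, basis set and functional are rough choices for illustration only; the actual studies used far more sophisticated levels of theory and, of course, the much larger Zn(AuF6)3 complex.

```python
# A minimal, illustrative DFT calculation with PySCF (assumed installed via
# "pip install pyscf"). ZnF2 is used here as a simple stand-in, NOT the
# Zn(AuF6)3 complex discussed in the papers.
from pyscf import gto, dft

# Rough linear geometry for gas-phase ZnF2 (bond lengths in Angstrom)
mol = gto.M(
    atom="Zn 0 0 0; F 0 0 1.74; F 0 0 -1.74",
    basis="def2-svp",   # a modest basis set, chosen only for illustration
    charge=0,
    spin=0,             # closed-shell singlet
)

# Restricted Kohn-Sham DFT with the PBE0 hybrid functional
mf = dft.RKS(mol)
mf.xc = "pbe0"
energy = mf.kernel()
print("Total electronic energy (Hartree):", energy)

# Mulliken population analysis: a crude measure of how much electron
# density the zinc atom has donated to the fluorine ligands
mf.mulliken_pop()
```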

Now another paper suggests, with what seems like ample evidence, that the suspected Zn(III) rogue might be an upstanding chemical citizen after all. Sebastian Riedel's group in Berlin has examined the purported Zn(III) complex and found that not only does zinc adopt its good old +2 oxidation state in the compound, but that the compound would not be thermodynamically stable. They use a quantum chemical method more sophisticated than the previous one and treat relativistic effects more thoroughly. Where does relativity enter the picture, you ask. It turns out that for heavy metals, the speeds (in a crude sense) of the inner s-electrons can be high enough to warrant relativistic calculations; indeed, relativistic quantum chemistry has shed light on many commonplace and yet unusual phenomena, like the color of gold and the liquid state of mercury at room temperature. Relativity is not the exclusive domain of physicists.
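To get a feel for the numbers, a standard back-of-the-envelope estimate says that the average speed of a 1s electron in an atom of atomic number Z is roughly Z/137 of the speed of light (137 being the inverse of the fine-structure constant). The snippet below, my own illustration rather than anything from the papers, works this out for zinc, gold and mercury along with the corresponding relativistic mass increase, which is what contracts the s orbitals and ultimately colors gold yellow and keeps mercury liquid.

```python
# Back-of-the-envelope relativistic effects for heavy atoms:
# <v>/c ~ Z * alpha for a 1s electron, where alpha ~ 1/137 is the
# fine-structure constant. The relativistic mass increase gamma then
# contracts the s orbitals (radius scales roughly as 1/gamma).
import math

ALPHA = 1 / 137.036  # fine-structure constant

for name, Z in [("zinc", 30), ("gold", 79), ("mercury", 80)]:
    v_over_c = Z * ALPHA                     # fraction of the speed of light
    gamma = 1 / math.sqrt(1 - v_over_c**2)   # relativistic mass increase
    print(f"{name:8s} Z={Z:3d}  v/c ~ {v_over_c:.2f}  gamma ~ {gamma:.2f}")

# For gold this gives v/c ~ 0.58 and gamma ~ 1.22: the 1s electron is about
# 20% "heavier" than in the nonrelativistic picture, and the s orbitals
# shrink accordingly.
```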

I am not enough of an expert in quantum chemistry to judge the finer points, but it clearly seems from the latest paper that the more accurate calculations, which account for finer details of electron correlation and relativistic effects, indicate something striking; the purported complex with Zn in the +3 oxidation state would undergo an exothermic decomposition. Put simply, it would not be stable and would decompose to a compound with good old Zn(II). For me, a more troubling fact was the structure of the compound published in the original paper. The structure was supposed to be a low-energy minimum, but it has two fluorines practically colliding with each other in an unholy 197 pm squeeze, well inside the sum of their van der Waals radii (illustrated above). The present calculations find another low-energy structure, lying only 10 kJ/mol above the previous one, that is consistent with a Zn(II) state. Other calculations indicate that the fluorines in the previously proposed structure are best described as radical anions bridging two AuF5 units.

There's other theoretical evidence in there too, along with citations of experimental facts that point to the recalcitrance of Zn to adopt a +3 state. The study seems to be as solid a piece of evidence against Zn(III) as can be obtained using current levels of theory. But the real reason I want to point out this paper is because it illustrates one of the key roles of theory and computation in chemistry; as an invitation to experiment. Just because the complex is thermodynamically unstable does not mean it cannot be isolated under any conditions. One of the great lessons of chemical science in the last forty years has been the fact that given the right conditions almost anything can be synthesized, stored and isolated (Cram's cyclobutadiene again being a case in point) and that the words stable and unstable are highly relative constructions created by an impoverished chemical lexicon.

This tussle between two predictions illustrates the function of theory in chemistry as the ultimate teaser; if I were an experimentalist I would be rubbing my hands in glee, sending my armies of postdocs and graduate students into the lab to try to synthesize Zn(AuF6)3. As Excimer commented on another site, "Enough theory! Someone make the damn thing already".
Image source: ACS publications.


"Arsenic bacteria": Coffin, meet nails

For those dogged souls still following the whole debacle of arsenic-eating bacteria, it seems that Science has published what should come close to being the death knell for "arsenic life". I already mentioned the report by Rosie Redfield, and there's another one by Tobias Erb's group at ETH. The title of the paper is "GFAJ-1 is an arsenate-resistant, phosphate-dependent organism".

It's worth reflecting on that title again: "arsenate-resistant, phosphate-dependent". Yes, that description applies to GFAJ-1. It also applies to me, Shamu the killer whale, E. coli O157 and Francis Bacon. In fact it applies to all the normal life forms that we know. So basically the title says that GFAJ-1 is not much different in this respect from any other bacterium that you may happen to find in a thimbleful of mud scooped up from your backyard.

The paper goes on to analyze the behavior of the bacterium in the presence and absence of phosphate and arsenate. The bacterium seems to survive on tiny concentrations of phosphate, concentrations that were, interestingly, deemed an "impurity" in the original Wolfe-Simon studies. It also does not survive on arsenate but starts dividing as soon as trace amounts of phosphate are added. The authors' conclusion is clear: "We conclude that cultures in the previous study might have grown on trace amounts of phosphate rather than arsenate". This is what several experts had suspected since the beginning. Their suspicion was based on life's extraordinary resilience and its ability to zealously guard and use every single atom of precious growth nutrients.

The authors also analyze the composition of the biomolecules (nucleotides, sugars etc.) in GFAJ-1 in the presence and absence of arsenate. They find only phosphate incorporated in the organism's essential machinery. While this does not necessarily argue against the use of arsenate, it demonstrates that when given a choice GFAJ-1 clearly prefers phosphate.

That observation is, however, not as striking as the next one, in which they find some metabolites containing arsenate, specifically sugars with arsenate appended to them. The question then is, are these metabolites formed biogenically or abiotically? To try to distinguish between these possibilities, the authors ran mock experiments in which they treated glucose medium with arsenate. The purported metabolites showed up in the products, and their formation is also supported by simple thermodynamic arguments which favor the attachment of arsenate to sugars. Thus it seems that simple chemistry rather than complex biology is sufficient to explain the small amounts of arsenated metabolites. The authors also carried out careful control experiments to rule out the existence of other arsenated biomolecules.

The sum total of these experiments says that GFAJ-1 can grow in the presence of phosphate, that it cannot grow on arsenate alone, and that it can grow in high concentrations of arsenate only when supplemented with limiting concentrations of phosphate. Taken together with the other paper by Rosie Redfield, this is as good a case against arsenic-based life as we can make right now.

The papers are good examples of the conservative yet decisive style in which scientists are accustomed to pitching their results. Unfortunately the original authors have not reacted as conservatively. If anything their responses are transparently shallow and unconvincing. When asked about the results, Felisa Wolfe-Simon said:
"There is nothing in the data of these new papers that contradicts our published data."
That reply almost convinces me that denial is the most sincere form of self-deception.
A current collaborator of Wolfe-Simon had even more remarkable things to say:

“There are many reasons not to find things — I don’t find my keys some mornings,” he said. “That doesn’t mean they don’t exist. The absence of a finding is not definitive.”

To which I might add that disgruntled unicorns with chemistry PhDs looking for jobs may well exist, since we haven't found any yet.


Update: Paul@Chembark nicely weighs in.