Field of Science

Ode to a classic: The Nature of the Chemical Bond

No other chemistry book in the twentieth century influenced the general thinking of chemists more than Linus Pauling's "The Nature of the Chemical Bond". Yet take an opinion poll among undergraduates or graduate students and you will find that hardly anyone has read a single page of this classic. This unfortunate fact reflects a broader problem with undergraduate education: the relentless urge to teach problem solving at the expense of an appreciation of the essential philosophy of the subject. And very few chemistry books ever published communicate the deep structure of chemical thinking as well as Pauling's memorable volume.

In the latest issue of Nature, Philip Ball revisits this lost gem. The book is remarkably and fortunately still in print, but it's not really making an impact. My own introduction to "The Nature" was fortuitous. I had of course known about Pauling and his legendary status but never had the chance to actually peruse the volume. Sometime during my senior year of high school a friend showed me the book, which he had borrowed from his uncle, and I was hooked. At first I was put off by the extraordinary volume of detail, but I soon realized that the elegant explanation of this voluminous material through a few simple principles was the crowning achievement of Pauling's thinking. It was all (or at least most) of chemistry through a few good concepts of chemical bonding. Although I did not read every single thing in the book, I read most key parts multiple times and still refer to it constantly.

As I noted in an earlier post, the immense impact of the book is driven home by the fact that in the first ten years after publication, it was cited no fewer than sixteen thousand times. Most of the principles that Pauling developed, like resonance, hybridization, electronegativity and hydrogen bonding, are now such fundamental parts of chemistry that everyone takes them for granted. With Pauling, chemistry was transformed from a descriptive science into one grounded in rational notions of bond breaking and formation rooted in the laws of physics. And yet, as Ball describes in his article, the end effect was unmistakably chemical:

"The significance of The Nature of The Chemical Bond was not so much that it pioneered the quantum-mechanical view of bonding, but that it made this a chemical theory: a description that chemists could understand and use, rather than a mathematical account of wave functions. It recognized that, if a model of physical phenomena is to be useful, it needs to accommodate itself to the intuitions and heuristics that enable scientists to talk coherently about the problem. Emerging from the forefront of physics, this was nevertheless a chemists' book."

The physicist's dream is to find a handful of equations that describe the entire universe. Pauling came the closest to doing this for chemistry; no wonder that a poll by Time magazine about the greatest scientists of all time included only two twentieth-century scientists, Pauling and Einstein. His concepts underlie every branch of the science and crucially extend into interdisciplinary fields like biology; it was his insights into chemical bonding that made him one of the founding fathers of molecular biology, by way of important ideas on the structures of proteins, enzymes and antibodies. "The Nature" contains scores of examples drawn from physical, organic, inorganic and biological chemistry. The sheer sweep of Pauling's contribution to bonding is astonishing. It would not be an exaggeration to say that his book did for chemistry something like what Darwin's "The Origin" did for biology: it brought all of chemistry under a unifying rubric. And like "The Origin", "The Nature" is one of the very few founding texts of science whose language is simple enough to be understood by beginning students (of course Darwin did one better, since his book can be understood easily even by laymen).

Every student of chemistry should be exposed to this foundation of the subject, yet "The Nature" has been forgotten in colleges and graduate schools. Of course it cannot replace a modern chemistry course, of course the language is somewhat dated, of course chemistry has made exciting advances since Pauling that are not included in the book, and of course the valence bond theory described by Pauling has been superseded by molecular orbital theory in many important cases. But the book should be required reading for understanding the core philosophy of the subject and how a few simple concepts can explain the astounding variety of the material world around us. It's also a superb vehicle for demonstrating the limitations of the reductionism of physics and the empirical character of chemistry. The goal of a chemistry education is not simply to solve chemical problems but to learn to view the world through a chemical lens. And that means viewing the world through the language we inherited from Pauling.

If you want to think like a chemist, you cannot do better than "The Nature of the Chemical Bond". Buy it; it's actually not that expensive compared to most college and graduate school textbooks.

New place, new view, slow reactions and the origins of life

I have been unable to blog for the past few days because I was busy moving to Chapel Hill for a postdoc at the University of North Carolina. I am very excited about this move and my upcoming research, which is going to involve protein design and folding. Regular blogging will resume soon. Until then, happy holidays, and I will leave you with the following interesting paper published by a group from my new institution.

One of the abiding puzzles in the origin of life is to explain how life arose in the relatively small amount of time it had to evolve on the planet. From a chemical perspective, this entails explaining how especially slow chemical reactions could have proceeded fast enough to contribute to the complexity of life. In a new paper in PNAS, a group from UNC suggests part of a possible solution to the puzzle by demonstrating that slow reactions in particular are accelerated by temperature much more than fast ones. Recall from college physical chemistry that the rate of a typical reaction roughly doubles with a ten-degree rise in temperature. As the authors note, this bit of textbook wisdom is off the mark when it comes to many important reactions and needs to be amended.
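As a quick sanity check on that rule of thumb, the Arrhenius equation gives the acceleration for a ten-degree jump; the activation energy of ~55 kJ/mol below is my illustrative choice, not a number from the paper:

$$\frac{k(308\,\mathrm{K})}{k(298\,\mathrm{K})} = \exp\left[\frac{E_a}{R}\left(\frac{1}{298\,\mathrm{K}} - \frac{1}{308\,\mathrm{K}}\right)\right] \approx 2 \quad \text{for } E_a \approx 55\ \mathrm{kJ/mol}$$

Double the barrier, and the same ten degrees buys roughly a factor of four instead of two; the acceleration grows exponentially with the activation energy, which is exactly the loophole slow reactions can exploit.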

They look at certain important reactions like the hydrolysis of phosphate monoesters and find that these reactions are accelerated not two or a few fold but many million fold by a substantial rise in temperature. The increase in rate would have been especially significant under the hot, primordial conditions present on earth during its early days. Now this acceleration is free-energetic, and basically corresponds to a favorable change in either the entropy or the enthalpy of activation. The authors measure both these variables and find that the crucial change is in the enthalpy. It's interesting to note that a favorable change in the enthalpy would entail forming stronger interactions, including hydrogen bonds, between substrate and enzyme, and this is exactly the kind of process you would imagine happening during the optimization of biomolecular interactions during evolution. In fact, recent research suggests that this process of optimizing enthalpy is also mirrored in drug discovery. The authors end by explaining why a catalyst that favorably impacted enthalpy rather than entropy would have had a selective advantage in rate acceleration as the environment later cooled (and entropy became unfavorable).
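To make the point concrete, here is a minimal sketch in Python of how much more a hot early earth would have helped a slow, high-barrier reaction than a fast, low-barrier one. The activation energies are illustrative assumptions of mine, not the paper's measured values:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_factor(Ea, T1, T2):
    """Rate acceleration k(T2)/k(T1) for a barrier Ea (J/mol),
    assuming a temperature-independent pre-exponential factor."""
    return math.exp((Ea / R) * (1.0 / T1 - 1.0 / T2))

T_cold, T_hot = 298.0, 373.0  # 25 C today vs. a hot primordial ~100 C

# Illustrative barriers (assumed, not taken from the paper):
for name, Ea in [("fast, low-barrier reaction", 50e3),
                 ("slow, high-barrier reaction", 180e3)]:
    factor = arrhenius_factor(Ea, T_cold, T_hot)
    print(f"{name}: ~{factor:.0e}-fold faster at 100 C than at 25 C")
```

With these numbers the fast reaction gains a factor of about sixty while the slow one gains a factor of about two million; the qualitative point stands regardless of the exact barriers chosen.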

Amusingly, the paper has come under criticism from unexpected quarters: none other than folks from the infamous 'Discovery' Institute, which is funded and run by creationists. In the view of these esteemed 'scientists', the paper provides no evidence that the slow reactions which were accelerated were in fact ones which were important during the origin of life. The DI crowd seems to have fundamentally misjudged the nature of origins-of-life research; it's more speculative than many other fields but still remains scientific. More importantly, the criticism completely misses the fact that the general hypothesis proposed by the authors - that slow reactions across the board could have been vastly accelerated by temperature on a hot primordial planet - is independent of the exact nature of these reactions, which may or may not have contributed to life's origins. As usual, they miss the forest for the trees.


Stockbridge, R., Lewis, C., Yuan, Y., & Wolfenden, R. (2010). Impact of temperature on the time required for the establishment of primordial biochemistry, and for the evolution of enzymes. Proceedings of the National Academy of Sciences, 107(51), 22102-22105. DOI: 10.1073/pnas.1013647107

Making speculation official: More on the conservatism of leading science journals

I want to thank everyone for the interesting comments on the last post; I thought it would be best to address them in a new one since my response got too long and spawned too many thoughts for the comments section.

I want to enumerate what I think are the benefits of having a separate 'Speculations' section in journals like Science and Nature, because that point perhaps did not come across very clearly. In the context of the present controversial paper on arsenic-associated life, here's what would happen in a world which reveled in speculation. The authors submit the paper to Science. The Science reviewers and editors say that the paper is interesting but that the extraordinary claims are not supported by extraordinary evidence. Nonetheless, they would be quite happy to publish it in their brand new 'Imaginings' section as food for thought for other researchers. They ask the authors to tone down their conclusions and present the paper as a set of observations with some possible interpretations, either in a preliminary findings section or a speculation section. Now someone in the comments section suggested that the authors would have rejected Science's offer in such a case. But let's give them the benefit of the doubt. While by no means ecstatic, the authors grudgingly accept Science's offer. The paper now looks much more tentative and its conclusions are much more modest. It proudly features as one of the inaugural articles in 'Imaginings'. But here's the other good thing that happens: NASA and the authors now resist the temptation to present and sensationalize the work in a press conference before publication, because of course it's a little embarrassing to hype a paper explicitly marked as speculative. Everyone is happy: the reviewers, the authors, Science and the public. The media will of course still hype the paper, but that's pretty much a constant anyway. Generally speaking, the evil stepmother disintegrates in a blinding flash of light, the princess marries the prince in a glade surrounded by furry creatures and everybody lives happily ever after. The End, for now at least.

I agree that this is an ideal scenario. But it has a much higher probability of being played out if Science sported an explicit section on speculation. When work being presented is speculative, both the public and the reviewers are more forgiving of its incompleteness, and the authors and sponsoring agencies don't (or at least should not) feel as tempted to hype it. It appears in print exactly the way it should: as a very intriguing set of observations and experiments that deserves closer scrutiny, and nothing more. Any possible earth-shattering implications can wait.

There was a thought that journals should actually become more conservative because of the increasing instances of fraud that have been reported during the last few years. The motivation behind this kind of thinking is sound, but I don't think it would help scientific progress at all. Fraud in scientific publishing will continue at a minimum ambient level irrespective of whether journals are conservative or not; it's just human nature. The only way journals could significantly crack down on fraud is by becoming ultra-conservative. But this would be a disaster, since along with fraud it would filter out too many promising novel ideas. The occasional fraudulent paper is a burden we have to bear for publishing the boldest flights of imagination. The best thing, however, is that we don't have to worry too much about the problem at all; as I mentioned earlier, the beauty of science is that it is usually an incredibly efficient self-correcting process. Unlike the rogue agent from The Matrix, fraud does not stick around for too long to cause havoc. If anything, the universal presence of blogs and online information sharing now ensures that fraud is much more swiftly recognized and dealt with than before; many recent cases can attest to this fact. If the world earlier depended on one Neo to save itself, we now have several who are up to the task.

This brings us to another point in the comments section. Some people pointed out that the proliferation of blogs and other online avenues has now actually provided more opportunities than ever to speculate, and that we need not depend on elite journals for doing this. While this is undoubtedly true, I wish it solved the problem. I wish that speculation on blogs were as respected as speculation in Nature. But we don't live in that ideal world yet. For whatever reason, journals like Science and Nature are now worshipped even more than they were before. In my own field of organic chemistry, for instance, you would find many pathbreaking papers published in relatively low-impact journals in the sixties and seventies, but hardly any more. Sadly, the obsession with impact factors and the constant pressure of publish-or-perish have put the premier journals on a pedestal. There are unfortunately many who think that only papers in these journals are worth taking seriously. This is extremely regrettable (and is definitely a topic for a separate post), but sadly it's reality. Unless this reality changes, speculation will become respectable only if it's published by Nature and Science. As was pointed out, the Annals of Improbable Research has published ideas that first make us laugh and then make us think. If you look at some of the papers in the journal which have bagged the notorious Ig Nobel Prize, they are actually quite well supported by data and statistical analysis. Yet regrettably, we will have to wait at least a few generations before anyone takes the Annals as seriously as Cell or PNAS. One of my main points in the last post was that we need to make speculation not just easier but more respectable and official again. And for better or worse, for now it's going to become respectable and official only if the top journals give it a public platform.

Ultimately, what purpose will all this serve? Many of the benefits have been described; as a commentator succinctly mentioned in the earlier post, it would give the publication of preliminary ideas an official sounding board. The way the present system is set up (and the recent example makes this clear), scientists are simply going to be dissuaded from publishing tentative, bold observations and ideas because of the impending public backlash. But the commentator also pointed out another important dividend: the process would perhaps make the true nature of science clear to a public which is too often fed information in black-and-white sound bites.

There are rules for doing, interpreting and publishing science, just like there are rules for how to raise children. And just as the rules for raising children wonderfully break down in the face of reality, so do the rules of actual scientific research. Real science is as messy as real child rearing. It's only fair that the public knows about this process.

The beauty of it is that it all comes together in the end. The baby turns into a fine young man or woman, and science continues to flourish.

Note: The comments section makes it clear that we need to distinguish between two kinds of articles, those suitable as "Preliminary Results" and those suitable as "Speculation". The two kinds may certainly overlap; the arsenic paper would thus belong primarily in a "Preliminary Results" section, but the hypothesis about arsenated DNA backbones would put it into a "Speculations" section, and in that case the speculation would not be toned down but kept in.

Aliens, arsenic and alternative peer-review: Has science publishing become too conservative?

In 1959, physicists Philip Morrison and Giuseppe Cocconi advanced a hypothesis about how we could detect signals from extraterrestrial civilizations. The two suggested monitoring microwave signals from outer space at a frequency of 1420 MHz. This is the emission frequency of neutral hydrogen, the most abundant element in the universe and one which aliens would likely harness for communication. The paper marked the beginning of serious interest in searching for extraterrestrial life. A year later, Freeman Dyson followed up on this suggestion with an even more fanciful idea. He conjectured that a sufficiently advanced civilization might be able to actually disassemble a planet the size of Jupiter and use its parts to create a shell of material surrounding its parent star. This sphere would capture the star's energy and allow civilizations to make the most efficient use of it. The most telling signature of such an advanced habitat would be an intense infrared signal coming from the sphere. Thus Dyson recommended looking for infrared signals in addition to radio signals if we were to search for aliens. The sphere came to be known as a 'Dyson sphere' and became fodder for a generation of science fiction enthusiasts and Star Trek fans.

These two ideas, and especially the second one, sound outrageous and highly speculative to say the least. Can you guess where both were published? In the two most prestigious science journals in the world: the Morrison paper appeared in Nature in 1959, while Dyson published his report in Science the following year. I can say in a heartbeat that I don't see similar ideas being published in these journals today, and this is a situation which we all should regret.

I bring up this issue because I think it indicates the significant changes in attitude about publishing novel scientific ideas that have occurred from 1960 to the present. In 1960 even serious journals like Nature and Science were open to publishing fanciful speculation, provided it was clearly enumerated. Now the demands for publishing have become more stringent, but also more narrowly defined. While this may have led to the publishing of more ‘concrete’ science, it has also dissuaded researchers from venturing out into novel territory. Most importantly, it has led the scientific community to put an unnecessarily high premium on ideas being right rather than interesting.

Science progresses not by being right or wrong but by being interesting. Most scientific ideas in their infancy are tentative, unsubstantiated and incomplete. Yet modern scientific publishing and peer review largely discourage the presentation of these ideas by insisting on convincing evidence that they are right. In most cases this emphasis on accuracy and complete validation is necessary to save science from itself; we have seen all too many cases of pseudoscience that looked superficially plausible but turned out to be full of holes. Science usually plays it safe by insisting on unimpeachable evidence. But in my opinion this stringent self-correcting process has gone too far, and in our desire to err on the safer side we have erred on the extreme side. This is having a negative impact on what we can call creative science. The insistence on foolproof data, and the public censure that researchers would face if they don't provide it, are deterring many scientists from publishing provocative results that are still in the early stages of gestation. Demands for conservative presentation are also accompanied by conservative peer review, since reviewers fear backlash as much as authors. All this is unfortunate and is to the detriment of the very core of scientific progress, since it is only when provocative ideas are published that other researchers can validate, verify and refute them.

The furor about the recent paper on “arsenic-based” life brings these issues into sharp focus. Much of the hailstorm of criticism would have been avoided if the standards and formats of scientific publishing allowed the presentation of ideas that may not be fully substantiated but which are nonetheless interesting. By now we are all familiar with the torrent of criticism about the paper that has come from all quarters, from blog posts to opinions from well-known experts. What is clear is that the experiments done were shoddy and controls were lacking. But the criticism is detracting from the potential value of the paper. Irrespective of whether the claims of arsenic actually being incorporated in the bacterium’s replicative and metabolic machinery are true, the paper is undoubtedly interesting, if only as an example of a hitherto unknown extremophile. Yet it is in danger of being remembered only as one of the uglier episodes in the history of science publishing.

There is in fact a solution to this problem, one which I have been in favor of for a long time. What if there were a separate section specifically devoted to relatively far-fetched ideas, and this paper had been published in that section? The paper would then likely have been taken much less seriously, and its tenets would have been accepted simply as thought-provoking observations pointing to further experimentation rather than established facts. So here's my suggestion: let the top scientific journals have a separate section entitled 'Speculation' (or perhaps 'Imaginings') which allows the presentation of ideas that are fanciful and speculative. The ideas proposed could range from purely theoretical constructs to the documentation and interpretation of unusual experimental observations. The only requirements are that they should be unorthodox and interesting, backed up by more or less known scientific principles, clearly defined and enumerated, and accompanied by testable hypotheses. Let there be a second type of peer-review process for these ideas, one which is as honest as the primary process but more forgiving of the lack of foolproof evidence.

The idea about Dyson spheres would fit nicely into such a section. Another example that comes to my mind is an idea proposed by the biophysicist Luca Turin. Turin conjectured that we may smell molecules based not on their shape but on the vibrations of their bonds. The history of this idea is interesting, since others had already proposed it earlier in respectable journals. Turin actually wrote it up and sent it to Nature. Nature deliberated for an entire year and rejected the paper. In this case Nature should at least be commended for taking so long and presumably giving careful consideration to the idea, but the point is that they wouldn't have had a problem publishing it in a 'Speculation' section right away. Turin's idea was interesting, novel, highly interdisciplinary, enumerated in great detail and backed up by well-known principles of chemistry and spectroscopy. It satisfied all the criteria of a novel scientific idea that may or may not be right. Turin finally published in a journal which only specialists read, thus precluding the concept from being appreciated by an interdisciplinary cross-section of scientists. There is now at least some evidence that his ideas may be right.

Interestingly, there is at least one entire journal devoted to the publication of interesting hypotheses: the journal 'Medical Hypotheses'. Medical Hypotheses prominently lacks peer review (although it has instituted some peer review recently) and has occasionally come under fire for publishing highly questionable papers, such as those criticizing the link between HIV and AIDS. But it has also served as a playground for the interaction of many interesting ideas. The editorial board of Medical Hypotheses features highly respected scientists like the neurologist V. S. Ramachandran and the Nobel Prize-winning neuroscientist Arvid Carlsson. Ramachandran himself has reiterated the need for such a journal. Science and Nature merely have to devote a small section in each issue to the kinds of ideas that are published in Medical Hypotheses, perhaps held to a higher standard.

It’s worth reiterating Thomas Kuhn’s notion of paradigm shifts in science here. Scientific paradigms rarely change by playing it safe. Most scientific revolutions have been initiated by bold and heretical ideas from maverick individuals, whether it was Darwin’s ideas about natural selection, Einstein’s thoughts about the constancy of the speed of light, Wegener’s ideas about continental drift or Bohr’s construction of the quantum atom. Not a single one of these ideas was validated by foolproof evidence when it was proposed. Many of them sounded outright bizarre and counter-intuitive. But it was still paramount to bring these ideas to a greater audience. Only time would tell whether they were right or wrong, but they were undoubtedly supremely novel and interesting. And almost all of them were published by leading journals. It was the willingness to entertain interesting ideas that made possible the scientific revolutions of the twentieth century. It seems a strange historical anomaly that journals were much more prone to publishing speculative ideas a hundred years ago than they are today. Today we seem to worship the safety of truth at the expense of the uncertain but bold reaches of novelty.

Of course, the existence of a second tier of publication and peer review would undoubtedly have to be carefully monitored. There is, after all, a thin line between reasonable speculation and pseudoscience. The reviewers in this tier would have to pay even more careful attention than they usually do to ensure that they are not pushing baseless fantasies. But as we have seen in the cases of the vibrational theory of smell and the arsenic-loving bacteria, it's not that hard to separate legitimate science with uncertain truth value from mere storytelling.

Once the ground rules are established and the initial obstacles are overcome, the second tier of peer review would have many advantages apart from encouraging the publication of speculation. It would also make reviewers more comfortable in recommending publication; since the ideas are speculative anyway, they would not insist on complete verification and would not fear backlash if the ideas they had reviewed turned out to be wrong. Journal editors would similarly find it easier to approve publication. And the scientific community at large would perhaps not be as critical as it has been in the case of the recent paper, because it too would accept the proposed ideas not as declarations of truth but as tentative exploration. But the greatest beneficiaries of the improved system would undoubtedly be the publishing scientists. Their minds would be much freer to dream, and they would fear much less retaliation from the community for daring to do so. Most importantly, unlike the recent case, they would not be under pressure to make statements whose implications exceed what their data objectively support, and they would be happy to present their claims simply as interesting observations that point the way toward further experiments.

Science progresses by being the ultimate free market of ideas; this has made it a highly social process in which scientists build on each other's work. But for this social process to work, ideas must be liberated from their nebulous beginnings. Ideas in the scientific marketplace come in different flavors, from boring and established to interesting and maverick. The current publication and peer-review process imposes a straitjacket that ideas have to fit into in order to be 'pre-selected' for entry into this market. This keeps out some of the most interesting ideas and, more importantly, dissuades thinkers from even pursuing them in the first place. The straitjacket does serve the valuable purpose of filtering flotsam, but it is also filtering out too many other interesting things. Science is too haphazard and full of unexpected twists and turns to be entrusted to rigid rules of review and publication. We need to accept the liability of occasionally having a dubious idea published in order to keep open the possibility of giving novel beginnings a public platform; the beauty of science is that the bona fide dubious ideas automatically get weeded out through scrutiny, so we should not have to worry about too many of them going on extended rampages. But the potentially good ideas can only be fleshed out by other scientists when they are exposed to criticism, appreciation and ridicule. Even if the ideas themselves ultimately sink, they may serve as spores which lead to the germination of other ideas. And it is the germination of these other ideas that grows into trees of scientific discovery.

We are all sheltered, invigorated and inspired by the branches of these trees. Let’s give them an opportunity to grow.

An eternity of infinities: the power and beauty of mathematics

The biggest intellectual shock I ever received was in high school, when someone gifted me a copy of the physicist George Gamow’s classic book “One Two Three... Infinity”. Gamow was not only a brilliant scientist but also one of the best science popularizers of the twentieth century. In his book I encountered the deepest and most utterly fascinating pure intellectual fact I have ever known: the fact that mathematics allows us to compare ‘different infinities’. This idea will forever strike awe and wonder in me, and it is, I think, the ultimate tribute to the singularly bizarre and completely counter-intuitive worlds that science, and especially mathematics, can uncover.

Gamow starts by telling us about the Hottentot tribe in Africa, whose members cannot formally count beyond three. How then do they compare commodities such as animals whose numbers are greater than three? By employing one of the most logical and primitive methods of counting: one-to-one correspondence, or put more simply, pairing objects with each other. So if a Hottentot has ten animals and she wishes to compare these with a rival tribe's animals, she will pair off each animal with its counterpart. If animals are left over in her own collection, she wins; if they are left over in her rival's collection, she has to admit the rival tribe's superiority in livestock.

What is remarkable is that this simplest of counting methods allowed the great German mathematician Georg Cantor to discover one of the most stunning and counter-intuitive facts ever divined by pure thinking. Consider the set of natural numbers 1, 2, 3… Now consider the set of even numbers 2, 4, 6… If asked which set is greater, commonsense would quickly point to the former. After all, the set of natural numbers contains both even and odd numbers, and this would of course be greater than just the set of even numbers, wouldn't it? But if modern science and mathematics have revealed one thing about the universe, it's that the universe often makes commonsense stand on its head. And so it is the case here. Let's use the Hottentot method. Line up the natural numbers and the even numbers next to each other and pair them up.

1 2 3 4 5…
2 4 6 8 10…

So 1 pairs up with 2, 2 pairs up with 4, 3 pairs up with 6 and so on. It's now obvious that every natural number n will always pair up with an even number 2n. Thus the set of natural numbers is equal in size to the set of even numbers, a conclusion that seems to fly in the face of commonsense and shatters its visage. We can extend this conclusion even further. For instance, consider the set of squares of natural numbers, a set that would seem even 'smaller' than the set of even numbers. By a similar pairing we can show that every natural number n can be paired with its square n², again demonstrating the equality of the two sets. Now you can play around with this method and establish all kinds of equalities, for instance that of the integers (all positive and negative whole numbers) with the squares.
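In modern notation, the pairing is just the map

$$f : \mathbb{N} \to E, \qquad f(n) = 2n$$

which is a one-to-one correspondence: distinct natural numbers go to distinct even numbers, and every even number 2n is hit by exactly one n. The same reasoning works for g(n) = n² onto the squares.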

But what Cantor did with this technique was much deeper than amusing pairings. The set of natural numbers is infinite. The set of even numbers is also infinite. Yet they can be compared. Cantor showed that two infinities can actually be compared and can be shown to be equal to each other. Before Cantor, infinity was just a placeholder for 'unlimited', a vague notion that exceeded man's imagination to visualize. But Cantor showed that infinity can be precisely quantified, captured in simple notation and handled more or less like a finite number. In fact he found a precise mapping technique with which a certain kind of infinity can be defined. By Cantor's definition, any infinite set of objects which has a one-to-one mapping or correspondence with the natural numbers is called a 'countably' infinite set. The correspondence needs to be strictly one-to-one and it needs to be exhaustive; that is, for every object in the first set there must be a corresponding object in the second one. The set of natural numbers is thus a ruler with which to measure the 'size' of other infinite sets. This countable infinity was quantified by a measure called the 'cardinality' of the set. The cardinality of the set of natural numbers, and of all sets equivalent to it through one-to-one mappings, is called 'aleph-naught', denoted by the symbol ℵ₀. The sets of natural numbers, odd numbers and even numbers all constitute the 'smallest' infinity and all have cardinality ℵ₀. Sets which seemed disparately different in size could now be declared equivalent to each other and pared down to a single classification. This was a towering achievement.

The perplexities of Cantor’s infinities led the great mathematician David Hilbert to propose an amusing situation called ‘Hilbert’s Hotel’. Let’s say you are on a long journey and, weary and hungry, you come to a fine-looking hotel. The hotel looks like any other but there’s a catch: much to your delight, it contains a countably infinite number of rooms. So now when the manager at the front desk says “Sorry, but we are full”, you have a response ready for him. You simply tell him to move the first guest into the second room, the second guest into the third room and so on, with the nth guest moving into the (n+1)th room. Easy! But now what if you are accompanied by your friends? In fact, what if you are so popular that you are accompanied by a countably infinite number of friends? No problem! You simply ask the manager to move the first guest into the second room, the second guest into the fourth room, the third guest into the sixth room…and the nth guest into the 2nth room. Now all the odd-numbered rooms are empty, and since we already know that the set of odd numbers is countably infinite, these rooms will easily accommodate all your countably infinite guests, making you even more popular. Mathematics can bend the laws of the material world like nothing else.
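The room-shuffling bookkeeping is simple enough to write down as code; a minimal sketch (the function names are mine):

```python
# Hilbert's Hotel: the "rooms" are the natural numbers, and
# re-accommodation is just a relabeling function on room numbers.

def make_room_for_one(n):
    """Guest in room n moves to room n + 1, freeing room 1."""
    return n + 1

def make_room_for_countably_many(n):
    """Guest in room n moves to room 2n, freeing every odd-numbered room."""
    return 2 * n

# Spot-check the first few guests of a completely full hotel:
for n in range(1, 6):
    print(f"guest in room {n}: room {make_room_for_one(n)} (one newcomer) "
          f"or room {make_room_for_countably_many(n)} (infinitely many)")
```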

But the previous discussion leaves a nagging question. Since all the infinities we have met so far are countably infinite, is there such a thing as an 'uncountably' infinite set? In fact, what would such an infinity even look like? The ensuing discussion probably constitutes the gem in the crown of infinities, and it struck infinite wonder in my heart when I read it.

Let’s consider the set of real numbers, numbers written as decimal expansions like a.bcdefg... The real numbers consist of the rational and the irrational numbers. Is this set countably infinite? By Cantor’s definition, to demonstrate this we would have to prove that there is a one-to-one mapping between the set of real numbers and the set of natural numbers. Is this possible? Well, let’s say we have an endless list of real numbers, for instance 2.823, 7.298, 4.001 and so on. Now pair up each one of these with the natural numbers 1, 2, 3…, in this case simply by counting them. For instance:

S1 = 2.823
S2 = 7.298
S3 = 4.001
S4 = …

Have we proved that the real numbers are countably infinite? Not really. This is because I can always construct a new real number not on the list, using the following prescription: make it differ from the first real number in the first decimal place, the second real number in the second decimal place, the third real number in the third decimal place… and the nth real number in the nth decimal place. So for the example of three numbers above, the new number could be:

S0 = 3.942

(9 is different from 8 in S1, 4 is different from 9 in S2 and 2 is different from 1 in S3)

Thus, given an endless list of real numbers counted from 1, 2, 3…onwards, one can always construct a number which is not on the list since it will differ from the 1st number in the first decimal place, 2nd number in the second decimal place…and from the nth number in the nth decimal place.
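The prescription is mechanical enough to run as code. Here is a minimal sketch (for finitely many digits of finitely many numbers, of course); I use the standard trick of always writing the digit 5, or 4 when the diagonal digit is itself 5, to sidestep the 0.4999... = 0.5000... ambiguity of decimal notation:

```python
def diagonal_number(expansions):
    """Given a list of decimal expansions (digits after the decimal point),
    build a number that differs from the n-th entry in its n-th decimal place."""
    new_digits = []
    for n, digits in enumerate(expansions):
        d = digits[n]  # the n-th digit after the decimal point
        new_digits.append('5' if d != '5' else '4')
    return '0.' + ''.join(new_digits)

# The three listed numbers above, as their digits after the decimal point:
listed = ['823', '298', '001']  # from 2.823, 7.298, 4.001
print(diagonal_number(listed))  # prints 0.555, which is on nobody's list
```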

Cantor called this argument the 'diagonal argument', since it constructs a new real number from a line drawn diagonally across the decimal digits of the listed numbers. The figure on the Wikipedia page for the argument makes the picture clearer:

[Figure: Cantor's diagonal argument, from Wikipedia. The new number Eu is built from the highlighted digits on the diagonal and so differs from every listed number E1…En.]

The diagonal argument is an astonishingly simple and elegant technique that can be used to prove a deep truth.

With this comparison Cantor achieved something awe-inspiring. He showed that one infinity can be greater than another, and in fact it can be infinitely greater than another. This really drives the nail in the coffin of commonsense, since a ‘comparison of two infinities’ appears absurd to the uninformed mind. But our intuitive ideas about sets break down in the face of infinity. A similar argument can demonstrate that while the rational numbers are countably infinite, the irrational numbers are uncountably so. This leads to another shattering comparison; it tells us that the tiny line segment between 0 and 1 on the number line containing real numbers (denoted by [0, 1]) is ‘larger’ than the entire set of natural numbers. A more spectacular case of David obliterating Goliath I have never seen.

The uncountably infinite set of reals thus has a cardinality different from ℵ₀, the cardinality of countably infinite sets like the naturals. One might logically expect the cardinality of the reals to be denoted by ℵ₁. But as usual, reality thwarts logic: this cardinality is actually denoted by c, for 'continuum'. Why this is so is beyond my capability to understand, but it is fascinating. While it can be proven that 2^ℵ₀ = c, the statement that c = ℵ₁ is just a hypothesis, not a proven and obvious fact of mathematics. It is called the 'continuum hypothesis' and happens to be one of the biggest unsolved problems in pure mathematics. The problem was in fact the first of the 23 famous problems for the new century proposed by David Hilbert in 1900 at the International Congress of Mathematicians in Paris (among others on the list were the notorious Riemann hypothesis and the fond belief that the axioms of arithmetic are consistent, later demolished by Kurt Gödel). The brilliant English mathematician G. H. Hardy put proving the continuum hypothesis at the top of his list of things to do before he died (he did not succeed). An equivalent statement of the hypothesis is that there are no sets with cardinality between ℵ₀ and c. Unfortunately the continuum hypothesis may be forever beyond our reach. Gödel and the Stanford mathematician Paul Cohen damned the hypothesis by proving that, assuming the consistency of the basic foundations of set theory, it is undecidable: it can neither be proved nor disproved from the standard axioms. This is assuming that there are no contradictions in the basic foundations of set theory, something that is itself 'widely believed' but not proven. Of course all this is meat and drink for mathematicians wandering around in the most abstract reaches of thought, and it will undoubtedly keep them busy for years.
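In symbols, the proven identity and the unprovable hypothesis sit side by side:

$$2^{\aleph_0} = \mathfrak{c} \quad \text{(a theorem)}, \qquad \mathfrak{c} = \aleph_1 \quad \text{(the continuum hypothesis)}$$

where the hypothesis is equivalent to saying that no set $S$ satisfies $\aleph_0 < |S| < \mathfrak{c}$.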

But it all starts with the Hottentots, Cantor and the most primitive methods of counting and comparison. I happened to chance upon Gamow’s little gem yesterday, and all this came back to me in a rush. The comparison of infinities is simple to understand and is a fantastic device for introducing children to the wonders of mathematics. It drives home the essential weirdness of the mathematical universe and raises penetrating questions not only about the nature of this universe but about the nature of the human mind that can comprehend it. One of the biggest questions concerns the nature of reality itself. Physics has also revealed counter-intuitive truths about the universe like the curvature of space-time, the duality of waves and particles and the spooky phenomenon of entanglement, but these truths undoubtedly have a real existence as observed through exhaustive experimentation. But what do the bizarre truths revealed by mathematics actually mean? Unlike the truths of physics they can’t exactly be touched and seen. Can some of these such as the perceived differences between two kinds of infinities simply be a function of human perception, or do these truths point to an objective reality ‘out there’? If they are only a function of human perception, what is it exactly in the structure of the brain that makes such wondrous creations possible? In the twenty-first century when neuroscience promises to reveal more of the brain than was ever possible, the investigation of mathematical understanding could prove to be profoundly significant.

Blake was probably not thinking about the continuum hypothesis when he wrote the following lines:

To see a world in a grain of sand,
And a heaven in a wild flower,
Hold infinity in the palm of your hand,
And eternity in an hour.


But mathematics would have validated his thoughts. It is through mathematics that we can hold not one but an infinity of infinities in the palm of our hand, for all of eternity.

Medicine! Poison! Arsenic! Life itself!

A few months back when the Nobel Prize for chemistry was announced, a few observers lamented that unlike physics and biology, chemistry perhaps does not have any 'big' questions left to answer. So here's a question for these skeptics. What branch of science has the biggest bearing on the discovery of an organism that utilizes arsenic instead of phosphorus? If you say "biology" or "geology" you would be wrong. The essential explanation underlying today's headline about an arsenic-guzzling bacterium is at the chemical level. The real question to ask is about the key molecular mechanisms by which arsenic substitutes for phosphorus. What molecular-level events enable this novel organism to survive, metabolize and reproduce? Of course the discovery is significant for all kinds of scientists including biologists, geologists, astronomers and perhaps even philosophers, but the essential unraveling of the puzzle will undoubtedly be at the level of the molecule.

Many years back I read a classic paper by the late Harvard chemist Frank Westheimer called "Why Nature Chose Phosphates". In simple and elegant terms, Westheimer explained why arsenic cannot replace phosphorus and silicon cannot replace carbon in the basic chemistry of life. In a nutshell, phosphates have the right kind of acid-base behavior at physiological pH. The single negative charge on the phosphates in DNA hinders nucleophilic attack by water, and hence hydrolysis, without making the system so stable that it loses its dynamic nature. Arsenates, simply put, are too unstable. So are silicates.

And yet we have an arsenate-metabolizing bacterium here. Arsenic, the same stuff that was used in outrageous amounts in medieval medicines and which later became the diabolical murderer's weapon of choice, now makes a new appearance as a sustainer of life. First of all, let's be clear on what this is not. It's not an indication that "life arose twice", it does not suddenly promise penetrating insight into extraterrestrial life, it probably won't win its discoverers a Nobel Prize, and technically speaking it's not even an 'arsenic-based life form'. The bacteria were found in a highly saline and alkaline lake with a relatively high concentration of arsenic, where they were happily using conventional phosphorus-based chemistry. The fun started when they were gradually exposed to increasing concentrations of arsenic and decreasing concentrations of phosphorus. The hardy little creatures still continued to grow.

But the real surprise came when the cellular components were analyzed and found to contain a lot of arsenic and very little phosphorus - certainly too little to sustain the metabolic machinery of life. If true this is a significant discovery, although not too surprising. Chemistry deals with improbabilities, not impossibilities. Life forms utilizing arsenates had been conjectured to exist for some time, but such total substitution of arsenic for phosphorus was not anticipated.

If validated, the work raises fascinating questions, not about extraterrestrial life or even about life's origins, but more mundane and yet probing ones about the basic chemistry of life. I haven't read the original paper in detail yet, but here are a few thoughts whose confirmation would lead to new territory:

1. The best thing would be to get a crystal structure of arsenic-based DNA. That would be a slam dunk and would really catapult the discovery to the front ranks of novelty. The second-best thing would be to do experiments involving labeled phosphorus and arsenic, to find out the exact proportion of arsenic getting incorporated. Which brings us to the next point.

2. How many of the cellular components are trading phosphorus for arsenic? Life's molecules are crucially dependent on phosphate. Not just DNA but signaling processes mediated by kinases and second messengers like cyclic AMP depend on phosphate. And of course there's ATP. What is fascinating to ponder is whether all of these key molecules traded phosphorus for arsenic. Perhaps some of them, like DNA, are using arsenic while others keep on using phosphorus. Checking the amounts and concentrations left over would certainly help to decide this.

One thing that should be confirmed and re-confirmed beyond the slightest shade of doubt is that there is absolutely no phosphorus hanging around which would be sufficient to sustain basic life processes; the entire conclusion depends on this fact. Traces of phosphorus can come from virtually anywhere: from the media (no, not the journalists, although it could come from them too), from human bodies, from laboratory equipment. A rough analogy from chemistry comes to mind; we have seen in the past how 'transition-metal-free' reactions turned out to be catalyzed by traces of transition metals. If life is pushed to the brink by decreasing the phosphorus levels in its environment, the first thing we would expect it to do would not be to use arsenic but to scavenge the tiniest amounts of vital phosphorus from its environment with fanatic efficiency. It's worth noting that the phosphorus amounts being measured are in femtograms, which means that the error bars need to be zealously monitored. If it turns out that there is enough phosphorus to sustain a core cycle of essential processes while other processes are utilizing arsenic, the conclusions drawn would still be interesting but not as revolutionary as the current ones, and we probably won't be calling it an 'arsenic-based' life form then. In any case, my guess is that the utilization of arsenic was selective and not ubiquitous. Organisms rarely operate on all-or-none principles and usually do their best under the circumstances.

If arsenic is truly substituting for phosphorus in all these signaling, genetic and structural components, that would really be something, because it would create more questions. By what pathways does arsenic enter these molecules? How does it affect the kinetics of reactions involving them? And most important are questions about molecular recognition. There are hundreds of proteins that recognize phosphorylated protein residues and similar molecules. Do all these proteins recognize their arsenic-containing counterparts? If so, is this the result of mutations in most of these proteins? It seems hard to imagine that simultaneous mutations in so many biomolecules to make them recognize arsenic would result in viable living organisms. A more conservative explanation is that most of these molecules don't mutate but still recognize arsenic, albeit with different specificities and affinities that are nonetheless feasible for keeping life's engine chugging. The molecules of life are exquisitely specific but they are also flexible and amenable to changing circumstances. They have to be.

3. And finally of course, how do the protein expression systems of the bacteria cope with arsenic-based DNA? As mentioned above, arsenates are unstable. To counter this instability, does DNA synthesis simply get ramped up? How do proteins control the unpacking, packing, duplication and transcription of this unusual form of DNA? For starters, how does DNA polymerase zip together arsenated nucleotides? How does the whole thing essentially hold together?

There are of course more questions. Whatever the implications, this is an interesting discovery that will keep scientists busy for a long time. Like all truly interesting scientific discoveries, it raises more questions than it answers. But ultimately it should come as no surprise. The wonders of chemistry combined with those of Darwinian evolution have allowed life to conquer unbelievably diverse niches, from methane-riddled environments to hot springs to sub-zero temperatures. In one way this discovery would only add one more feather in the cap of a robust and abiding belief: that life is tough. It survives.

Selenium for sulfur should be next (but I wouldn't wait around for silicon...)

Update: Two first-rate rebuttals to the paper. One is an outstanding and meticulously detailed piece by University of British Columbia microbiologist Rosie Redfield. The other is a Scienceblogs post. Basically the question keeps coming back to whether there could have been enough phosphorus for survival. It's worth noting the application of Occam's Razor here. If bacteria which normally metabolize phosphorus were challenged with an arsenic-rich and phosphorus-poor environment, what would they do first? Start incorporating arsenic into their basic biochemistry, or intensely adapt their life processes so that they zealously start sequestering and utilizing the smallest traces of the vital phosphorus? Occam's Razor, and everything that we know about evolution, suggests the latter.

Wolfe-Simon, F., Blum, J., Kulp, T., Gordon, G., Hoeft, S., Pett-Ridge, J., Stolz, J., Webb, S., Weber, P., Davies, P., Anbar, A., & Oremland, R. (2010). A Bacterium That Can Grow by Using Arsenic Instead of Phosphorus. Science. DOI: 10.1126/science.1197258

Probing amyloid, one oligomer at a time

One of the more important paradigm shifts in our understanding of the Alzheimer’s disease-causing amyloid protein in the last few years has been the recognition of differences between the well-known polymeric aggregates of amyloid and their smaller, soluble oligomeric counterparts. For a long time it was believed that the fully formed aggregates of the 40-42 amino acid peptide found in autopsies were the causative agent in AD, or at least the most toxic species. This understanding has radically changed in the last few years, partly through elegant work done in identifying oligomers and partly through the unfortunate results of clinical trials targeting amyloid. The new understanding is that it’s not the fully formed aggregates but the smaller oligomers that are the real toxic species.

Identifying these different monomers, dimers, trimers and tetramers is a valuable goal. But until now their recognition has mainly depended on raising specific antibodies against them, a tedious and expensive process. Small-molecule probes that specifically identify each oligomer have been missing. In a recent JACS communication, a team from the University of Michigan uses a simple but clever technique to develop such probes and takes a promising step in this direction.

The probes are based on the idea that the best antidote to a poison is another poison. In this case the poison is the specific sequence of amino acids that makes up amyloid. In particular, a sequence of five amino acids, KLVFF, has been found to be sufficient for aggregation and toxicity. The aggregates form by the stacking of beta sheets, principally driven by hydrophobic interactions between the FF residues; each such pair thus serves as a growth site for the addition of further residues. The insight, then, is that a mimic of this sequence would act as a competitive inhibitor: it would bind to the normal sequence and inhibit further growth. In this case the strategy was to use KLVFF segments themselves, which would wrap around newly formed oligomers of different constitutions and sequester them from further self-assembly. So the team constructed two KLVFF segments joined by a linker. The linker would also pre-organize the two segments, so that they would not pay a large entropic penalty upon binding. The important question was how long the linker should be.

To decide on the length of the linker, the team made clever use of molecular dynamics simulations. Since the approximate thickness of each oligomer can be estimated, one can also estimate the linker length required to keep two KLVFF segments separated by exactly that thickness. For instance, the distances between the segments needed to wrap around the oligomers were 14-15 Å for the dimer, 19-20 Å for the trimer and 24-25 Å for the tetramer.



But the linker should also hold the segments stably at that distance. To probe this, the team used MD simulations, which revealed how much time a probe with a given linker spent with its two segments separated at each target distance, and hence which linker length was required to maintain the desired separation.
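In spirit, this analysis boils down to a windowed occupancy count over the simulated distance trajectory. A minimal sketch in Python; the Gaussian 'trajectory' is a hypothetical stand-in for real simulation output, and the function names are mine:

```python
import random

def fraction_in_window(distances, lo, hi):
    """Fraction of MD frames in which the distance between the two
    KLVFF segments falls inside the target window [lo, hi] (angstroms)."""
    return sum(lo <= d <= hi for d in distances) / len(distances)

# Hypothetical stand-in for the simulated end-to-end distance trajectory
# of one candidate linker; real values would come from the MD output.
random.seed(0)
trajectory = [random.gauss(19.5, 1.0) for _ in range(10000)]

# Target windows quoted above for each oligomer:
for oligomer, (lo, hi) in [("dimer", (14, 15)), ("trimer", (19, 20)),
                           ("tetramer", (24, 25))]:
    print(f"{oligomer}: {fraction_in_window(trajectory, lo, hi):.1%} "
          f"of frames inside {lo}-{hi} A")
```

A linker whose simulated trajectory spends the largest fraction of its time inside a given window is the natural candidate for targeting the corresponding oligomer.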

To test these results, the team generated mixtures of different kinds of KLVFF oligomers and added each probe to the solution. A streptavidin moiety was attached to every probe. Silver staining revealed that each probe specifically bound an oligomer of the type dictated by the match between the intraprobe distance and the oligomer thickness. Trimers and tetramers could be clearly identified, but there was more ambiguity in the case of dimers, presumably because of their less ordered structure.

Most interestingly, the team then added the probes to cerebrospinal fluid (CSF). Since amyloid is part of normal physiology, it is present in CSF. Gratifyingly they found that the probes could very clearly label trimers and tetramers against a background of several other proteins and intermediates in CSF. This experiment notably demonstrates that the method can selectively detect amyloid oligomers in complex mixtures.

I think that this work is valuable and paves the way toward the development of similar small-molecule based probes for identifying the key intermediates in amyloid formation. It could also be very useful in exploring amyloid formation in normal physiology and in exploring the stages of protein self-assembly in diverse amyloid-based diseases.

Reinke, A., Ung, P., Quintero, J., Carlson, H., & Gestwicki, J. (2010). Chemical Probes That Selectively Recognize the Earliest Aβ Oligomers in Complex Mixtures. Journal of the American Chemical Society. DOI: 10.1021/ja106291e

In praise of contradiction

Scientists usually don't like contradictions. A contradiction in experimental results is like a canary in a coal mine. It sets off alarm bells and compels the experimentalist to double-check his or her setup. A contradiction in theoretical results can be equally bad if not worse. It could mean you made a simple arithmetical mistake. Contradiction could force you to go back to the drawing board and start afresh. Science is not the only human activity where contradictions are feared and disparaged. A politician or businessman who contradicts himself is not considered trustworthy. A consumer product which garners contradictory reviews raises suspicions about its true value. Contradictory trends in the stock market can put investors in a real bind.

Yet contradiction and paradoxes have a hallowed place in intellectual history. First of all, contradiction is highly instructive simply because it forces us to think further and deeper. It reveals a discrepancy in our understanding of the world which needs to be resolved and encourages scientists to perform additional experiments and decisive calculations to settle the matter. It is only when scientists observe contradictory results that the real fun of discovery begins. It’s the interesting paradoxes and the divergent conclusions that often point to a tantalizing reality which is begging to be teased apart by further investigation.

Let's consider that purest realm of human thought, mathematics. In mathematics, the concept of proof by contradiction, or reductio ad absurdum, has been treasured for millennia. It has provided some of the most important and beautiful proofs in the field, like the irrationality of the square root of two. In his marvelous book "A Mathematician's Apology", the great mathematician G. H. Hardy paid the ultimate tribute to this potent weapon:
"Reductio ad absurdum, which Euclid loved so much, is one of a mathematician's finest weapons. It is a far finer gambit than any chess gambit: a chess player may offer the sacrifice of a pawn or even a piece, but a mathematician offers the game."
However, the power of contradiction goes far beyond opening a window into abstract realms of thought. Twentieth-century physics demonstrated that contradiction and paradox constitute the centerpiece of reality itself. At the turn of the century, it was a discrepancy between theory and observation in blackbody radiation that sparked one of the greatest revolutions in intellectual history in the form of the quantum theory. Paradoxes such as the twin paradox are at the heart of the theory of relativity. But it was in the hands of Niels Bohr that contradiction was transformed into a subtler and lasting facet of reality which Bohr named 'complementarity'. Complementarity entailed the presence of seemingly opposite concepts whose co-existence was nonetheless critical for an understanding of reality. It was immortalized in one of the most enduring and bizarre paradoxes of all, wave-particle duality, which taught us that contradiction is not only an important aspect of reality but an indispensable one. Photons of light and electrons behave as both waves and particles; the two qualities seem maddeningly at odds with each other, yet both are absolutely essential to grasp the essence of physical reality. Bohr codified this deep understanding of nature with a characteristically pithy statement: "The opposite of a big truth is also a big truth". Erwin Schrödinger followed up on his own disdain for complementarity by highlighting an even more bizarre quantum phenomenon, entanglement, wherein particles that are completely separated from each other remain intimately connected; in doing so he gave us the enduring image of a cat helplessly trapped in limbo between life and death.

The creative tension created by seemingly contradictory phenomena and results has been fruitful in other disciplines. Darwin was troubled by the instances of altruism he observed in the wild; these seemed to contradict the 'struggle for existence' he was describing. It took the twentieth century, with its theories of kin selection and reciprocal altruism, to fit these seemingly paradoxical observations into the framework of modern evolutionary theory. The history of organic chemistry is studded with campaigns to determine the molecular structures of complex natural products like penicillin and chlorophyll. In many of these cases, contradictory proposed structures, as for penicillin, spurred intense efforts to discover the true one. Clearly, contradiction is not only a vital feature of science but also a constant and valuable companion of the process of scientific discovery.

These glittering instances of essential contradiction in science seem perfectly at home with the wider human experience. And while contradiction in science can be disturbing yet ultimately rewarding, many religions and philosophies have long savored this feature of the world. The Chinese philosophy of Yin and Yang recognizes the role of opposing and contrary forces in sustaining human life. In India, the festival celebrating the beginning of the Hindu new year includes a ritual in which every member of the family eats a small piece of sweet jaggery (solidified sugarcane juice) wrapped in a bitter leaf of the Neem tree (which contains the insecticide azadirachtin). The sweet and the bitter exemplify the essential mixture of happy and sad moments that make up a complete life. Similar paradoxes are recognized in Western theology, for instance in the doctrines of the Trinity and the Incarnation.

The ultimate validation of contradiction, however, comes not through its role in life or in scientific truth but through its place as an inseparable part of our very psyche. We all feel disturbed by contradiction, yet how many of us can honestly claim that the beliefs we hold about every aspect of our lives are perfectly consistent with one another? You may love your son, yet his egregious behavior may sometimes (hopefully not often) lead you to wish he had never been born. We often speak of 'love-hate' relationships, which exemplify opposing feelings toward a loved one. If we minutely observed our behavior at every moment, such observation would undoubtedly reveal numerous instances of contradictory thoughts and actions. This discrepancy is not only an indelible part of our consciousness; we all realize that it actually enriches our life and makes it more complex and unpredictable. It is what makes us human.

Why would contradictory thinking be an important part of our psyche? I am no neuroscientist, but I believe our puzzlement about contradiction would be mitigated if we realized that human beings perceive reality by building models of the world. It has always been debatable whether the reality we perceive is what is truly 'out there' (and this question may never be answered); what is now certain is that neural events in our brains enable us to build sensory models of the world. Some elements of the model are fundamental and fixed while others are flexible and constantly updated; the world we perceive is what is revealed through this interactive modeling. These models are undoubtedly among the most complex ever generated, and anyone who has modeled complex phenomena will recognize how difficult it is to achieve perfect logical consistency. Model building also typically involves errors, some of which accumulate while others cancel. In addition, models can always be flawed because they don't include all the relevant elements of reality. All these limitations produce models in which a few facts appear contradictory, and forcing those facts into consistency could well create worse problems elsewhere in the model. Simply put, we compromise: we live with a model containing a few contradictions rather than risk one with many. Further research in neuroscience will undoubtedly shed light on the details of the brain's model building, but it should not surprise us if those models harbor contradictory worldviews while preserving their overall utility.

Yet there are those who would seek to condemn such contradictory thinking as an anomaly. In my opinion, one of the most prominent examples of such a viewpoint in the last few years has been the criticism of religious-minded scientists by several so-called 'New Atheists' like Richard Dawkins and Sam Harris. The New Atheists have made it their mission to banish what they see as artificial barriers created between science and religion for the sake of political correctness, practical expediency and plain fear of offending the other party. There is actually much truth to this viewpoint, but the New Atheists seem to take it beyond its strictly utilitarian value.

A case in point is Francis Collins, the current director of the NIH. Collins is famous as a first-rate scientist who is also a devout evangelical Christian. The problem with Collins is not that he is deeply religious but that he tends to blur the line between science and religion. A particularly disturbing instance is a now widely discussed set of slides from a presentation in which he tries to scientifically justify the existence and value of the Christian God. Collins's conversion, which he traces to a hike on which a beautiful frozen waterfall suggested the Trinity to him, is also strange, and at the very least displays a poor chain of causation and inadequate critical thinking.

But none of this makes Collins any less of an able administrator. He does not need to mix science with religion to justify his abilities as a science manager, and to my knowledge there is not a single instance of his religious beliefs dictating his preferences for NIH funding or policy. In practice if not in principle, Collins manages to admirably separate science from storytelling. But the New Atheists are still not satisfied. They lump Collins in with a number of prominent scientists who they think are 'schizophrenic' in conducting scientific experiments during the week and then suspending critical thinking on Sundays when they pray in church. They express incredulity that someone as intelligent as Francis Collins can so neatly compartmentalize his rational and 'irrational' brain and somehow sustain two completely opposite, contradictory modes of thought.

For a long time I actually agreed with this viewpoint. Yet as we have seen, such seemingly contradictory thinking is a mainstay of the human psyche and the human experience. There are hundreds of scientists like Collins who largely manage to separate their scientific and religious beliefs. Thinking about it a bit more, I realized that the New Atheists' insistence on banishing such mutually exclusive streams of thinking goes against a hallowed principle that they themselves have emphasized to no end: a recognition of reality as it is. If the New Atheists, and indeed all of us, hold reality to be sacrosanct, then we need to acknowledge that contradictory thinking and behavior are essential elements of that reality. As the history of science demonstrates, appreciating contradiction can even be essential to deciphering the workings of the physical world.

Now, this certainly does not mean that we should actively encourage contradiction in our thinking. We also recognize the role of tragedy in the human experience, but few of us would strive to deliberately make our lives tragic. Contradictory thinking should be recognized, highlighted and swiftly dealt with, whether in science or in life. But its value in shaping our experience should also be duly appreciated. Paradox seems to be woven into the fabric of the world, whether in the mind of Francis Collins or in the nature of the universe. We should celebrate the remarkable fact that the human mind can subsume opposing thoughts within its function and still operate within the realm of reason. Simply denying this, and proclaiming that it should not be so, would mean denying the very thing we are striving for: a deeper and more honest understanding of reality.

Will-o'-the-wisp around 5 sigma: the hunting of the Higgs

Mr. Hunter, we have rules that are not open to interpretation, personal intuition, gut feelings, hairs on the back of your neck, little devils or angels sitting on your shoulder.... - Capt. Ramsey ('Crimson Tide')

Particle physicists hunting for maddeningly elusive particles must sometimes feel like Mr. Hunter in the movie "Crimson Tide". The quarry they are chasing seems so ephemeral, making its presence known in events with such slim probability margins, a victim of nature's capricious dance of energy and matter, that intuition must sometimes seem as important as data. The hunt for such particles represents some of the most intense efforts human beings have ever made to extract reality from nature's womb.

No other particle exemplifies this most human of endeavors better than the so-called Higgs boson. The man whose name it bears is now a household name himself. Yet as the history of science often demonstrates, the real story is both more interesting and more complicated. It involves intense competition of a kind rarely seen in science, with billions of dollars and thousands of careers on the line, and stories of glory and folly befitting the great tragedies. In his book "Massive", Ian Sample does a marvelous job of bringing this history to life.

Sample excels at three things. The first is the story of the two great laboratories that have mainly been involved in the race to discover nature's building blocks: Fermilab and CERN. CERN was founded in the 1950s to give a boost to European physics after World War 2. Fermilab was lovingly built by the experimental physicist Robert Wilson, a former member of the Manhattan Project who was a first-rate amateur architect and saw accelerators as things of aesthetic beauty. Secondly, Sample does a nice job of explaining the reasons that led to the construction of these machines, the most complicated mankind has ever built. Only human beings would put billions of dollars and immense manpower on the line purely to satisfy their curiosity about nature's deepest secrets, and Sample also lays out the very human and social concerns that accompany such investigations. Lastly, Sample was lucky enough to get an extended interview with Peter Higgs, a shy man who rarely gives interviews. Higgs grew up idolizing Paul Dirac and shared Dirac's vision of a unifying beauty connecting nature's disparate facts. In 1964 he wrote the papers describing what is now called the Higgs boson. The papers were well received in the US and Higgs's name soon began to be bandied about in seminars and meetings. As described below, however, Higgs was not the only one postulating the theory.

So what exactly is the Higgs boson? A complete understanding naturally needs a background in theoretical physics, but the best analogy for the layman was given by a British scientist. Imagine a room full of young women who are happily chatting. In walks a handsome young man. As long as he is not noticed he can move freely across the room, but as soon as the young women spot him they cluster around him, impeding his movement. It is as though the young man has become heavier, acquiring mass from the "field" of women surrounding him. The Higgs, then, is the particle associated with the field that imparts their specific masses to the other fundamental particles discovered so far, including the quarks and leptons. It should be evident why it's important: the Higgs would be the crowning achievement of the Standard Model of particle physics, which encompasses all known particles and forces except gravity.

However, the history of the Higgs particle is complicated. Sample does a great job of explaining why the credit belongs to six different people who reached the same conclusion that Higgs did. It seems that Higgs was not the first to publish, but he was the first to clearly state the existence of a new particle; the most comprehensive theory of the Higgs field and particle came out later still. If Nobel Prizes are to be awarded, it's not at all clear which three people should be picked, although Higgs's name seems obvious. The sociology of the discovery is as important as the facts, and it again illustrates that science is a much more haphazard and random process than is commonly believed.

The search for the Higgs gathered tremendous momentum in the 80s and 90s. It intensified after accelerator laboratories spectacularly discovered the W and Z bosons, the particles that mediate the weak interaction and whose existence follows from the unified electroweak theory. These particles were predicted by Steven Weinberg, Abdus Salam and Sheldon Glashow in the 60s, and their prediction surely ranks as one of the greatest theoretical successes in modern physics. Once the theory predicted their masses, the particles were up for grabs; no experimentalist worth his or her salt would fail to relish nailing a concrete theoretical prediction of fundamental importance through a decisive experiment. Sample captures the pulse-quickening transatlantic races to find these particles, especially between CERN and Fermilab. The importance of these particles was so obvious that Nobel Prizes came in quick succession, both to the theorists and to the experimentalists. But the existence of the Higgs is also essential to the successful formulation of the electroweak theory, and signatures of the Higgs are thought to be produced whenever W and Z bosons are created. It again becomes obvious why finding the Higgs is so important: its existence would validate all those successes and Nobel Prizes, whereas a failure to find it would force a stunningly hard look at some of particle physics's most fundamental notions.

These days the Large Hadron Collider (LHC) is all over the news. Yet the most exciting part of Sample's book describes not the LHC but the Large Electron Positron collider (LEP) at CERN, the largest particle accelerator in the world at the time. Unlike protons, electrons and positrons are fundamental particles, and crashing them together produces 'cleaner' results. Some fascinating events surrounded the LEP. The behemoth's circumference was 27 kilometers and it crossed the Swiss-French border, so the authorities had to seek permission to build the accelerator underneath people's homes. French law, it seems, is as special as French cheese and the French language; apparently if you build a house in France you own all the ground beneath it, down to the center of the earth. Suffice it to say that some negotiation with the homeowners was necessary to secure permission for underground construction. At one point the intensity of the beams inside the mammoth machine started to wax and wane. After many days of brainstorming a scientist had a hunch; it turned out that the gravity of the moon and the sun sets up tides inside the crust of the earth. These tides put the calibration of the machine off by a millimeter, too small to be noticed by human beings but thunderingly large for electron beams. In another case, the daily departure of a train from a nearby station sent surges of electricity into the ground and affected the beams. It seems that when you are building an accelerator you have to guard against the workings of the entire solar system.

The story of particle physics is also fraught with tragedies. One of the biggest described in the book was the Superconducting Super Collider in Texas. The SSC was supposed to be America's answer to CERN and got enthusiastic backing from Reagan and Bush Sr. Unfortunately the budget spiraled out of control, infighting intensified, congressmen remained unconvinced, and the collider was cancelled after billions had been spent and thousands of scientists' careers had been uprooted by relocation. The fiasco proved that public support for even projects like the LHC is never a sure thing, and that scientists don't always excel at public relations.

Then of course there are all the doomsday scenarios that were raised about the LHC, from the formation of black holes to the world ending in myriad other ways. As Sample describes, these concerns go back to an accelerator at Brookhaven National Laboratory that would smash heavy gold ions together at furious velocities. The future Nobel laureate Frank Wilczek raised the theoretically possible yet vanishingly improbable prospect of forming 'strangelets', entities akin to the fictitious substance 'ice-nine' in Kurt Vonnegut's novel 'Cat's Cradle'. These strangelets would coalesce matter around themselves into a superstable form of dead matter that would rapidly engulf the entire planet. The concern about strangelets pales, however, beside the possibility of 'vacuum decay', in which our universe sits in a perfectly happy but metastable state, like a vase on a table. All it takes is a little nudge, in our case a massive kick from a high-energy particle collision, to knock the vase, or the universe, from its metastable perch into a stable state of minimum energy. Chillingly, not only would this state mean the end of life as we know it, it would also mean the impossibility of life ever arising. Yes, these scenarios seem straight out of the drug-induced, overactive imagination of a demented mind, but at least some of them are within the realm of theoretical possibility. And when the feared result is the destruction of the planet, the words "improbable" and "vanishingly small" will never do much to assuage the public's fears. It all indicates that physicists will always have to grapple with public relations problems vastly more complex than the LHC itself.

Finally, we get a fascinating overview of what scientists hope to see at the LHC. The problem is that the generation of particles like the Higgs is a very low-probability event, usually only a side-product of some other primary event. The situation is made more complicated by the immense difficulty of spotting such fleeting glimpses against a hideously complex background of noise generated by the creation of other particles. Scientists working on these projects have to keep their eyes and instruments peeled for the one-in-a-trillion event that may bring them glory. Whenever an event is observed, they have to calculate the probability that it arose by chance. Usually, if the event lies more than five standard deviations ('5 sigma') from the background expectation, it is considered extremely likely to be real; a chance fluctuation of that size would occur with a probability of only about one in 3.5 million. Not surprisingly, the observation and communication of these events is a tortuous business. Publicity has to be avoided before such fleeting bits of probability are confirmed, but leaks inevitably occur. And the media has seldom shown restraint in announcing potentially momentous discoveries that would bring glory, prizes and money to their originators. Scientists working today also have to contend with blogs and other instant communication conduits. As Sample narrates, in at least one case a physicist at CERN posted preliminary LHC results on the blog Cosmic Variance, and all hell broke loose. Scientists have to tread carefully in this era of instant data dissemination.
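The '5 sigma' criterion translates into a concrete number under the standard simplifying assumption of a Gaussian background. A minimal Python sketch of the conversion:

```python
from math import erfc, sqrt

def one_sided_p_value(z: float) -> float:
    """Probability that a Gaussian background fluctuation exceeds z sigma."""
    return 0.5 * erfc(z / sqrt(2.0))

for z in (3, 5):
    p = one_sided_p_value(z)
    print(f"{z} sigma: p = {p:.2e} (about 1 in {1 / p:,.0f})")
```

Running this shows why 3 sigma (roughly 1 in 740) only earns the label 'evidence' while 5 sigma (roughly 1 in 3.5 million) earns 'discovery'.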

All this makes the scientists engaged in such endeavors live on the edge; to us they appear like SETI researchers with their eyes peeled to the sky, looking out for the stray signal that would announce the presence of extraterrestrials. The mathematics of the Higgs boson is of course far sounder than that of alien contact, but the scientists looking for it are hanging on to such flimsy wisps of probability and interpretation that they surely must question their own sanity sometimes.

In the end, even physicists are all too human. As Capt. Ramsey says, the rules are not supposed to bend to little devils and angels sitting on our shoulders. And yet it seems that scientists like the Higgs hunters must sometimes be tempted to trust the hairs on the back of their necks, especially when those hairs stand up straight at the glimpse of a peak in the graph, that 5-sigma event which would change everything. Maybe, just maybe.