Field of Science

The price of global warming science is eternal vigilance

John Tierney of the NYT weighs in on the hacked emails and accurately nails it
I’ve long thought that the biggest danger in climate research is the temptation for scientists to lose their skepticism and go along with the “consensus” about global warming. That’s partly because it’s easy for everyone to get caught up in “informational cascades”, and partly because there are so many psychic and financial rewards for working on a problem that seems to be a crisis. We all like to think that our work is vitally useful in solving a major social problem — and the more major the problem seems, the more money society is liable to spend on it.

I’m not trying to suggest that climate change isn’t a real threat, or that scientists are deliberately hyping it. But when they look at evidence of the threat, they may be subject to the confirmation bias — seeing trends that accord with their preconceptions and desires. Given the huge stakes in this debate — the trillions of dollars that might be spent to reduce greenhouse emissions — it’s important to keep taking skeptical looks at the data. How open do you think climate scientists are to skeptical views, and to letting outsiders double-check their data and calculations?
We are all subject to the confirmation bias, and I can say from experience that we have to battle it in our research every single day as fallible human beings. But as Tierney says, when the stakes are so incredibly high, when governments, international budgets and debts, and the fate of billions are going to be affected by what you say, you had better fight the confirmation bias ten times as much as usual.

Listen to Capt. Ramsey, son:
"Mr. Hunter, we have rules that are not open to interpretation, personal intuition, gut feelings, hairs on the back of your neck, little devils or angels sitting on your shoulders..."

The damning global warming emails; when science becomes the casualty

By now everyone and his grandmother must have heard about the hacked emails of the University of East Anglia's prestigious Climatic Research Unit (CRU). The emails were exchanged among leading climate change scientists and seem to express doubts and uncertainty. More troublingly, they also seem to display signs of rather dishonest discourse, with scientists harboring dangerously unfavorable opinions of journal editors who appeared open to publishing papers that disagreed with their views, and asking each other to delete emails that might signal doubt.

There is at least one example of bad science revealed in the emails. It seems that one set of data from tree ring proxies did not show the expected rise in temperatures for a particular period and instead showed a decline. For just that period, a different set of data from another method, one which did show the rise, was grafted onto the tree ring record. John Tierney of the NYT has the two graphs on his blog. Does this change the general conclusion? Probably not. Is this bad science and enough to justify a flurry of indignant questions in the minds of outsiders? Certainly. Good science would have meant revealing all the pieces of data, including those which showed a decline.

Now what is remarkable (or perhaps not remarkable at all) is the vociferous political- not scientific- reaction that has erupted in blogs all over the internet. I would point readers to my fellow blogger Derek Lowe's succinct summary of the matter. While I am not as skeptical about climate change as he is, it is disconcerting to see how much political, personal and social baggage the whole issue is carrying. Whenever a scientific issue starts carrying so much non-scientific baggage, one can be assured that we are in trouble.

The comments on most blogs range across the spectrum. There are the outright deniers who claim that the emails "disprove global warming"; they don't, and I can't see how any set of personal exchanges could say anything definitive about a system as complex as the climate. Phrases like "hide the decline" (in the case of the above tree ring proxy data) and "trick" have been taken out of their technical context to indicate subversion and deception. And then there are the proponents who want to act like nothing has happened. I like George Monbiot's take on it: he says that even though the science of climate change has certainly not come crashing down, the public image of climate change has been dealt a serious blow, and that denying this would simply mean burying your head in the sand. After all, we are supposed to be the good guys, the ones who are supposed to honestly admit to our limitations and failings, and we are not doing this. What ramifications this will have for the important Copenhagen climate summit this month is uncertain.

However, the very fact that we have to worry as much about the public image of climate science as the science itself plainly speaks to the degree of politicization of the issue. I think the liability of this entire matter has basically become infinite and I think scientists working in the field are facing an unprecedented dilemma which few scientists have ever faced. Here's the problem; we are dealing with an extremely complex system and it is hardly surprising if the science of this system (which after all is only a hundred years or so old) keeps getting revised, reshuffled and reiterated even if the basics remain intact. That would be perfectly normal for a vast, multidisciplinary field like this. That is the way science works. One finds such revision and vigorous debate even in highly specific and recondite areas like the choice of atomic partial charges in the calculation of intermolecular energies. The climate is orders of magnitude more complicated. If the usual rules of scientific discourse were to be followed, making such debates and disagreements open would not be a problem.

But with an issue that is so exquisitely fraught with political and economic liabilities and where the stakes are so enormously high, I believe that the normal process of scientific debate, discourse and progress has broken down and is being bypassed. Scientists who would otherwise engage in lively debate and disagreements have become extremely loath to make their doubts public. These scientists fear that they would essentially be condemned by both sides. Right-wing extremists would seize upon any honest disclosure of debate as the kick that brings the entire edifice crumbling down. They would predictably try to discredit even reasonable conclusions drawn by climate change scientists. At the same time, left-wing extremists would essentially disown such scientists and either declare them an anomaly or, more predictably, declare them to be political and corporate shills. A scientist who honestly voices his doubts would become a man without a country.

This is of course in addition to the ample scorn that establishment upholders like climate blogger Joe Romm would heap on them. Thus, if you are a scientist working in climate change today, it would be rather difficult for you to make even the normal process of science transparent. Plus, most scientists are genuinely scared that all the momentum they have built over the years would fizzle out if their right-wing opponents pounced on their private doubts. Think about it. The Copenhagen summit is going to be held in a month. Scientists have faced enormous obstacles in convincing the public and governments about climate change. Your work has been crowned by grudging acknowledgement even from George W. Bush and by the Nobel Peace Prize for Al Gore. Would you be ready to throw away all this rightly hard-earned and hard-fought consensus for the sake of a few dissenting opinions? The simple laws of human nature dictate that you probably would not.

In my opinion, that is what seems to have happened with the scientists at the CRU. They have been so afraid not only of expressing their doubts (many of which, as noted above, would be valid given the science involved) but also of entertaining other dissenting opinions that they have unfortunately picked the option of trying to silence open debate, in a way that would be unacceptable in science in general. One can understand their motivation, but their actions still seem deplorable.

I think these emails point to a much more serious structural problem in the scientific enterprise of climate change. For good reasons and bad, whether to stand up to political hacks or ironically to defend good science, this enterprise has accumulated so much political baggage that it is now virtually impossible for it to compromise, to change, to maneuver even in the face of cogent reasons. The science of climate change has essentially bound itself into a straitjacket. My prediction is that important decisions about this science will in the future be mainly politically motivated. Public consensus not completely backed by good science will be the driving force for major decisions. The consequences of those decisions, just like the climate, are uncertain. We will have to wait and see.

But as usual, the casualty is ultimately science itself. What was good science and ineffective politics before is becoming effective politics and bad science. Whatever else happens, science never wins when it gets so overtly politicized. And hopefully about this there will be universal consensus.

More model perils; parametrize this

Now here's a very interesting review article that puts some of the pitfalls of models that I have mentioned on these pages in perspective. The article is by Jack Dunitz and his long-time colleague Angelo Gavezzotti. Dunitz is in my opinion one of the finest chemists and technical writers of the last half century and I have learnt a lot from his articles. Two that are on my "top 10" list are his article showing the entropic gain accrued by displacing water molecules in crystals and proteins (a maximum of 2 kcal/mol for strongly bound water) and his paper demonstrating that organic fluorine rarely, if ever, forms hydrogen bonds.

In any case, in this article he talks about an area in which he is the world's acknowledged expert: organic crystal structures. Understanding and predicting (the horror!) crystal structures essentially boils down to understanding the forces that make molecules stick to each other. Dunitz and Gavezzotti describe theoretical and historical attempts to model forces between molecules, and many of their statements about the inherent limitations of modeling these forces rang as loudly in my mind as the bell in Sainte-Mère-Église during the Battle of Normandy.

Dunitz has a lot to say about atom-atom potentials, which are the most popular framework for modeling inter- and intramolecular interactions. Basically such potentials assume simple functional forms that model the attractive and repulsive interactions between atoms, which are treated as rigid balls. This is also, of course, the fundamental approximation in molecular mechanics and force fields. The interactions are basically Coulombic interactions (relatively simple to model) and more complicated dispersion interactions which are essentially quantum mechanical in nature. The real and continuing challenge is to model these weak dispersive interactions.
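To make the setup concrete, here is a bare-bones sketch of what such an atom-atom potential looks like in practice. This is my own illustration rather than the specific potential Dunitz and Gavezzotti discuss: a Coulomb term plus a Lennard-Jones 12-6 form standing in for the repulsion/dispersion balance, with made-up parameters.

import numpy as np

COULOMB_K = 332.06  # kcal*angstrom/(mol*e^2): converts point-charge products to energies

def pair_energy(r, qi, qj, epsilon, sigma):
    """Energy (kcal/mol) of a single atom pair at separation r (angstroms)."""
    electrostatics = COULOMB_K * qi * qj / r
    lennard_jones = 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return electrostatics + lennard_jones

def total_energy(coords, charges, epsilon, sigma):
    """Sum over all unique pairs of 'rigid balls' in a molecule or crystal fragment."""
    energy = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            energy += pair_energy(r, charges[i], charges[j], epsilon, sigma)
    return energy

# two "atoms" 3.5 angstroms apart carrying small, invented point charges
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 3.5]])
charges = [0.2, -0.2]
print(total_energy(coords, charges, epsilon=0.15, sigma=3.4))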

But the problem is fuzzy. As Dunitz says, atom-atom potentials are popular mainly because they are simple in form and easy to calculate. However, they have scant, if any, connection to "reality". This point cannot be stressed enough. As this blog has noted several times before, we use models because they work, not because they are real. The coefficients in the functional forms of the atom-atom potentials are essentially varied so as to minimize the potential energy of the system, and there are several ways to skin this cat. For instance, atomic point charges are rather arbitrary (and definitely not "real") and can be calculated and assigned by a variety of theoretical approaches. In the end, nobody knows if the final values or even the functional forms have much to do with the real forces inside crystals. It's all a question of parameterization, and while parameterization may seem like a magic wand that can give you anything you want, that's precisely the problem with it: it may give you anything you want without reproducing the underlying reality. Overfitting is also a constant headache and, in my opinion, one of the biggest problems with any modeling, whether in chemistry, quantitative finance or atmospheric science. More on that later.
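Here is a toy demonstration of that last point, using entirely synthetic data: two different functional forms, a 12-6 Lennard-Jones and an exp-6 Buckingham, fitted to the same handful of "reference" energies near the well. Both can be parameterized to reproduce the fitted points, yet they typically disagree in the repulsive region where no data constrained them, which is exactly the sense in which parameterization can hand you whatever you ask for.

import numpy as np
from scipy.optimize import curve_fit

def lennard_jones(r, eps, sigma):
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def buckingham(r, a, b, c6):
    return a * np.exp(-b * r) - c6 / r ** 6

# synthetic "reference" energies near the minimum of the curve
r_fit = np.linspace(3.4, 5.0, 8)
e_ref = lennard_jones(r_fit, 0.2, 3.5)

p_lj, _ = curve_fit(lennard_jones, r_fit, e_ref, p0=[0.1, 3.0])
p_bk, _ = curve_fit(buckingham, r_fit, e_ref, p0=[1.0e5, 3.5, 500.0], maxfev=20000)

# both fits look fine where the data are...
print("rms error, LJ fit:        ", np.sqrt(np.mean((lennard_jones(r_fit, *p_lj) - e_ref) ** 2)))
print("rms error, Buckingham fit:", np.sqrt(np.mean((buckingham(r_fit, *p_bk) - e_ref) ** 2)))

# ...but extrapolate into the repulsive wall, outside the fitted range, and the stories diverge
r_test = 2.8
print("LJ at 2.8 A:        ", lennard_jones(r_test, *p_lj))
print("Buckingham at 2.8 A:", buckingham(r_test, *p_bk))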

An accurate treatment of intermolecular forces will have to take electron correlation into consideration. The part which is hardest to deal with is the region close to the bottom of the famous van der Waals energy curve, where there is an extremely delicate balance between repulsion and attraction. Naturally one thinks of quantum mechanics to handle such fine details. A host of sophisticated methods have been developed to calculate molecular energies and forces. But those who think QM will take them to heaven may be mistaken; it may in fact take them to hell.

Let's start with the basics. In any QM calculation one uses a certain theoretical framework and a certain basis set to represent atomic and molecular orbitals. One then adds terms to the basis set to improve accuracy. Consider Hartree-Fock theory. As Dunitz says, it is essentially useless for dealing with dispersion because it does not take electron correlation into account, no matter how large a basis set you use. More sophisticated methods have names like "Møller-Plesset perturbation theory with second order corrections" (MP2), but these may greatly overestimate the interaction energy, and more importantly the calculations become hideously computer intensive for anything more than the simplest molecules.
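As a tiny concrete illustration of that correlation problem (my own example, not one from the review), consider the helium dimer, which is held together by nothing but dispersion. Below is a sketch using PySCF, assuming the package is available; Hartree-Fock finds essentially no attraction, while MP2 recovers some of it. No counterpoise correction is applied, so the numbers should be read as qualitative only.

from pyscf import gto, scf, mp

HARTREE_TO_KCAL = 627.509

def hf_and_mp2(atom_spec):
    """Return the Hartree-Fock and MP2 total energies (in hartrees) for a geometry."""
    mol = gto.M(atom=atom_spec, basis="aug-cc-pvdz", verbose=0)
    hf = scf.RHF(mol).run()
    pt = mp.MP2(hf).run()
    return hf.e_tot, pt.e_tot

e_hf_dimer, e_mp2_dimer = hf_and_mp2("He 0 0 0; He 0 0 3.0")
e_hf_atom, e_mp2_atom = hf_and_mp2("He 0 0 0")

print("HF  interaction (kcal/mol):", (e_hf_dimer - 2 * e_hf_atom) * HARTREE_TO_KCAL)
print("MP2 interaction (kcal/mol):", (e_mp2_dimer - 2 * e_mp2_atom) * HARTREE_TO_KCAL)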

True, there are "model systems" like the benzene dimer (which has been productively beaten to death) for which extremely high levels of theory have been developed that approach experimental accuracy within a hairsbreadth. But firstly, model systems are just that, model systems; the benzene dimer is not exactly a molecular arrangement which real life chemists deal with all the time. Secondly, a practical chemist would rather have an accuracy of 1 kcal/mol for a large system than an accuracy of 0.1 kcal/mol for a simple system like the benzene dimer. Thus, while MP2 and other methods may give you unprecedented accuracy for some model systems, they are usually far too expensive to be of much use for most systems of biological interest.

DFT still seems to be one of the best techniques around to deal with intermolecular forces. But "classical" DFT suffers from a well-known inability to treat dispersion. "Parameterized DFT", in which an empirical inverse sixth power dispersion term is added to the calculated energy, can work well and promises to be a very useful addition to the theoretical chemist's arsenal. More parameterization though.
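The flavor of that correction is easy to convey in a few lines. The sketch below is a simplified stand-in, loosely in the spirit of Grimme-type dispersion-corrected DFT and with invented parameters: a damped -C6/r^6 term is simply added on top of whatever interaction energy the plain DFT calculation delivers.

import math

def damped_dispersion(r, c6, r0, s6=1.0, d=20.0):
    """Pairwise dispersion correction for a separation r (angstroms).
    The damping factor switches the correction off at short range, where the
    functional already accounts (or over-accounts) for the interaction."""
    f_damp = 1.0 / (1.0 + math.exp(-d * (r / r0 - 1.0)))
    return -s6 * f_damp * c6 / r ** 6

e_dft = -1.87                                         # hypothetical plain-DFT interaction energy (kcal/mol)
e_disp = damped_dispersion(r=3.8, c6=1500.0, r0=3.6)  # invented C6 coefficient and vdW radius sum
print("dispersion-corrected energy:", e_dft + e_disp)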

And yet, as Dunitz points out, problems remain. Even if one can accurately calculate the interaction energy of the benzene dimer, it is not really possible to know how much of it comes from dispersion and how much of it comes from higher order terms. Atom-atom potentials are happiest calculating interaction energies at large distances, where the Coulomb term is pretty much the only one which survives, but at small interatomic distances, which are the distances of most interest to the chemist and the crystallographer, a complex dance of attraction and repulsion, of monopoles, dipoles, multipoles and overlapping electron clouds, manifests itself. The devil himself would have a hard time calculating interactions in these regions.

The theoretical physicist turned Wall Street quant Emanuel Derman (author of the excellent book "My Life as a Quant: Reflections on Physics and Finance") says that one of the problems with the financial modelers on Wall Street is that they suffer from "physics envy". Just like in physics, they want to discover three laws that govern 99% of the financial world. Instead, as Derman says, they end up discovering 99 laws that seem to govern 3% of the financial world, with varying error margins. I would go a step further and say that even physics is accurate only in the limit of ideal cases, and this deviation from absolute accuracy distinctly shows in theoretical chemistry. Just consider that the Schrödinger equation can be solved exactly only for the hydrogen atom, which is only where chemistry begins. Anything more complicated than that, and even the most austere physicist cannot help but approximate, parametrize, and constantly struggle with errors and noise. As much as the theoretical physicist would like to tout the platonic purity of his theories, their practical applications would without exception involve much approximation. There is a reason why that pinnacle of twentieth century physics is called the Standard Model.

I would say that computational modelers in virtually every field from finance to climate change to biology and chemistry suffer from what Freeman Dyson has called "technical arrogance". We have made enormous progress in understanding complex systems in the last fifty years and yet when it comes to modeling the stock market, the climate or protein folding, we seem to think that we know it all. But we don't. Far from it. Until we do all we can do is parametrize, and try to avoid the fallacy of equating our models with reality.

That's right Dorothy. Everything is a model. Let's start with the benzene dimer.

Dunitz, J., & Gavezzotti, A. (2009). How molecules stick together in organic crystals: weak intermolecular interactions. Chemical Society Reviews, 38(9). DOI: 10.1039/b822963p

California axes science

From the NYT
As the University of California struggles to absorb its sharpest drop in state financing since the Great Depression, every professor, administrator and clerical worker has been put on furlough amounting to an average pay cut of 8 percent.

In chemistry laboratories that have produced Nobel Prize-winning research, wastebaskets are stuffed to the brim on the new reduced cleaning schedule. Many students are frozen out of required classes as course sections are trimmed.

And on Thursday, to top it all off, the Board of Regents voted to increase undergraduate fees — the equivalent of tuition — by 32 percent next fall, to more than $10,000. The university will cost about three times as much as it did a decade ago, and what was once an educational bargain will be one of the nation’s higher-priced public universities.
There was a time when people used to go to Berkeley for the lower tuition. Seems the last refuges of education are gradually eroding away.

What did you say the error was??

I was looking at some experimental data for drug molecules binding to a pharmaceutically relevant protein.

The numbers were reported as percentages of binding relative to a standard defined to be 100%. Here's how they looked:

97.3 ± 68.4
79.4 ± 96.1
59.5 ± 55.3
1.4 ± 2.5
Seriously, how did the reviewers allow this to go through?

A new look, with hair combed and shoes brushed

As you may have noticed, I have transitioned to a spanking new look at Field of Science (FoS), thanks to Edward's invitation and am loving it. I also join a super team of fellow bloggers whom I hope to regularly read. You won't have to update your bookmarks since you will be automatically directed here when you click on the old link.

I have to admit that after exercising my primitive blog management skills for the last five years, this feels like warp speed and spinach combined.

A hermitian operator in self-imposed exile

Perfect Rigor: A Genius and the Mathematical Breakthrough of the Century
Masha Gessen (Houghton Mifflin Harcourt, 2009)

Pure mathematicians have the reputation of being otherworldly and divorced from practical matters. Grisha or Grigory Perelman, the Russian mathematician who at the turn of this century solved one of the great unsolved problems in mathematics, the Poincaré Conjecture, is sadly or perhaps appropriately an almost perfect embodiment of this belief. For Perelman, even the rudiments of any kind of monetary, professional or material rewards resulting from his theorem were not just unnecessary but downright abhorrent. He has turned down professorships at the best universities in the world, declined the Fields Medal, and will probably not accept the 1 million dollar prize awarded by the Clay Mathematics Institute for the solution of some of the most daunting mathematical problems of all time. He has cut himself off from the world after seeing the publicity that his work received and has become a recluse, living with his mother in St. Petersburg. For Perelman, mathematics should purely and strictly be done for its own sake, and could never be tainted with any kind of worldly stigma. Perelman is truly a mathematical hermit, or what a professor of mine would, to use mathematical jargon, call a "hermitian operator".

In "Perfect Rigor", Masha Gessen tells us the story of this remarkable individual, but even more importantly tells us the story of the Russian mathematical system that produced this genius. The inside details of Russian mathematics were cut off from the world until the fall of the Soviet Union. Russian mathematics was nurtured by a small group of extraordinary mathematicians including Andrey Kolmogorov, the greatest Russian mathematician of the twentieth century. Kolmogorov and others who followed him believed in taking latent, outstanding talent in the form of young children and single-mindedly transforming them into great problem solvers and thinkers. Interestingly in the early Soviet Union under Stalin's brutal rule, mathematics flourished where other sciences languished partly because Stalin and others simply could not understand abstract mathematical concepts and thus did not think they posed any danger to communist ideology. Soviet mathematics also got a boost when its great value was recognized during the Great Patriotic War in building aircraft and later in work on the atomic bomb. Mathematicians and physicists thus became unusually valued assets to the Soviet system.

Kolmogorov and a select band of others took advantage of the state's appreciation of math and created small, elite schools for students to train them for the mathematical olympiads. Foremost among the teachers was a man named Sergei Rukshin who Gessen talks about at length. Rukshin believed in completely enveloping his students in his world. In his schools the students entered a different universe, forged by intense thought and mathematical camaraderie. They were largely shielded from outside influences and coddled. The exceptions were women and Jews. Gessen tells us about the rampant anti-Semitism in the Soviet Union which lasted until its end and prevented many bright Jewish students from showcasing their talents. Perelman was one of the very few Jews who made it, and only because he achieved a perfect score in the International Mathematical Olympiad.

Perelman's extreme qualities were partly a result of this system, which had kept him from knowing about politics and the vagaries of human existence and insulated him from a capricious world where compromise is necessary. For him, everything had to be logical and utterly honest. There was no room for things such as diplomacy, white lies, nationalism and manipulation to achieve one's personal ends. If a mathematical theorem was proven to be true, then any further acknowledgment of its existence in the form of monetary or practical benefits was almost vulgar. This was manifested in his peculiar behavior in the United States. For instance, when he visited the US in the 90s as a postdoctoral researcher he had already made a name for himself. Princeton offered the twenty nine year old an assistant professorship, a rare and privileged opportunity. However Perelman would settle for nothing less than a full professorship and was repulsed even by the request that he officially interview for the position (which would have been simply a formality) and submit his CV. Rudimentary formalities which would be normal for almost everyone were abhorrent for Perelman.

After being disillusioned with what he saw as an excessively materialistic academic food chain in the US, Perelman returned to Russia. For five years after that he virtually cut himself off from his colleagues. But it was then that he worked on the Poincaré Conjecture and created his lasting achievement. Sadly, his time spent intensely working alone in Russia seemed to have made him even more sensitive to real and perceived slights. However, he did publicly put up his proofs on the internet in 2002 and then visited the US. For a brief period he even seemed to enjoy the reception he received in the country, with mathematicians everywhere vying to secure his services for their universities. He was unusually patient in giving several talks and explaining his proof to mathematicians. Yet it was clear he was indulging in this exercise only for the sake of clarifying the mathematical concepts, and not to be socially acceptable.

However, after this brief period of normalcy, a series of events made Perelman reject the world of human beings and even that of his beloved mathematics. He was appalled by the publicity he received in newspapers like the New York Times, which could not understand his work. He found the rat race to recruit him, with universities climbing over each other and making him fantastic offers of salary and opportunity, utterly repulsive. After rejecting all these offers and even accusing some of his colleagues of being traitors who gave him undue publicity, he withdrew to Russia and definitively severed himself from the world. The final straw may have been two events: the awarding of the Fields Medal which, since his work was still being verified, could not explicitly state that he had proven the Poincaré Conjecture, and the publication of a paper by Chinese mathematicians which in hindsight clearly seems to have been written to steal the limelight and the honors from Perelman. For Perelman, all this (including the sharing of the Fields with three other mathematicians) was a grave insult and unbecoming of the pursuit of pure mathematics.

Since then Perelman has been almost completely inaccessible. He does not answer emails, letters and phone calls. In an unprecedented move, the president of the International Mathematical Union, which awards the Fields Medals, personally went to St. Petersburg to talk him out of declining the prize. Perelman was polite, but the conversation was to no avail. Neither is there any indication that he would accept the 1 million dollar Clay prize. Gessen herself could never interview him, and because of this the essence of Perelman remains vague and we don't really get to know him in the book. Since Gessen is trying to somewhat psychoanalyze her subject and depends on second-hand information to draw her own conclusions, her narrative sometimes lacks coherence and meanders off. As some other reviewers have noted, the discussion of the actual math is sparse and disappointing, but this book is not really about the math but about the man and his social milieu. The content remains intriguing and novel.

Of course, Perelman's behavior is bizarre and impenetrable only to us mere mortals. For Perelman it forms a subset of what has in his mind always been a perfectly internally consistent and logical set of postulates and conclusions. Mathematics has to be done for its own sake. Academic appointments, prizes, publicity and professional rivalries should have no place in the acknowledgement of a beautiful mathematical proof. While things like applying for interviews and negotiating job offers may seem to us to be perfectly reasonable components of the real world and may even seem to be necessary evils, for Perelman they are simply evils interfering with a system of pure thought and should be completely rejected. He is the epitome of the Platonic ideal; where pure ideas are concerned, any human association could only be a deeply unsettling imposition.

Constancy of the discodermolide hairpin motif

Our paper on the conformational analysis of discodermolide is now up on the ACS website. The following is a brief description of the work.

Discodermolide (DDM) is a well-known, highly flexible polyketide that is the most potent microtubule polymerization agent known. In this capacity it functions very similarly to taxol and the epothilones. However, the binding mode of DDM will intimately depend on its conformations in solution.

To this end we performed multiple force field conformational searches on DDM, and the first surprising thing we noticed was that all four force fields located the same global minimum for the molecule in terms of geometry. This is surprising because, given the dissimilar parameterization criteria used in different force fields, the minima obtained for flexible organic molecules are usually different for different force fields. Not only that, but all the minima closely superimposed on the x-ray structure of DDM, which we call the "hairpin" motif. This is also surprising, since the solid state structure of such a highly flexible molecule should not generally bear resemblance to a theoretically calculated global minimum.

Next, we used our NAMFIS methodology, which fits the conformations found in the searches to coupling constants and interproton distances obtained from NMR data, to determine DDM conformations in two solvents, water and DMSO. We were again surprised to see the x-ray/force field global minimum structure existing as a major component of the complex solution conformational ensemble. In many earlier studies the x-ray structure has been located as a minor component, so this too was unexpected.
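For readers curious what a NAMFIS-style fit amounts to numerically, here is a minimal sketch with invented numbers (the real analysis uses many more conformers and observables): find non-negative conformer populations, summing to one, whose weighted average of predicted NMR observables best matches the measured values.

import numpy as np
from scipy.optimize import minimize

# rows = candidate conformers from the searches, columns = observables
# (say, 3J couplings in Hz and an NOE-derived distance in angstroms); all hypothetical
predicted = np.array([[9.8, 2.4, 3.1],
                      [4.5, 7.9, 4.6],
                      [7.1, 5.0, 2.8]])
measured = np.array([8.2, 3.9, 3.2])

def misfit(w):
    """Sum of squared deviations between population-weighted predictions and experiment."""
    return np.sum((predicted.T @ w - measured) ** 2)

n_conf = predicted.shape[0]
w0 = np.full(n_conf, 1.0 / n_conf)
result = minimize(misfit, w0,
                  bounds=[(0.0, 1.0)] * n_conf,
                  constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print("fitted populations:", np.round(result.x, 2))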

Remarkably, this same structure has also been implicated as the bioactive conformation bound to tubulin by a series of elegant NMR experiments. To our knowledge, this is the first tubulin binder which has a single dominant preferred conformation in the solid state, as a theoretical global minimum in multiple force field conformational searches, in solution, and in the binding pocket of tubulin. In fact I personally don't know of any other molecule of this flexibility which exists as one dominant conformation in such extremely diverse environments; if this happened to every molecule, or even most molecules, drug discovery would suddenly become easier by an order of magnitude, since all we would have to do to predict the binding mode of a drug would be to crystallize it or to look at its theoretical energy minima. To rationalize this very pronounced conformational preference of DDM, we analyze the energetics of three distributed synthons (methyl-hydroxy-methyl triads) in the molecule using molecular mechanics and quantum chemical methods; it seems that these three synthons modulate the conformational preferences of the molecule and essentially override other interactions with solvent, adjacent crystal entities, and amino acid elements in the protein.

Finally, we supplement this conformational analysis with a set of docking experiments which lead to a binding mode that is different from the earlier one postulated by NMR (as of now there is no x-ray structure of DDM bound to tubulin). We rationalize this binding mode in the light of SAR data for the molecule and describe why we prefer it to the previous one.

In summary then, DDM emerges as a unique molecule which seems to exist in one dominant conformation in highly dissimilar environments. The study also indicates the use of reinforcing synthons as modular elements to control conformation.

Jogalekar, A., Kriel, F., Shi, Q., Cornett, B., Cicero, D., & Snyder, J. (2009). The Discodermolide Hairpin Structure Flows from Conformationally Stable Modular Motifs. Journal of Medicinal Chemistry. DOI: 10.1021/jm9015284

Warren DeLano

This is quite shocking. I just heard him speak at the eChemInfo conference two weeks back and talked to him briefly. His visualization software PyMOL was *the* standard for producing and manipulating beautiful molecular images, and almost all the images in all my papers until now were created using PyMOL.

This is shocking and saddening. He could not have been older than his late 30s. I have heard him give talks a couple of times and talked to him at another conference; he was naturally pleased to see PyMOL used all over my poster. I think everyone can vouch that he was a cool and fun person. I really wonder what's going to happen to PyMOL without him.

Here is a brief note posted by Dr. Axel Brunger, in whose lab he contributed greatly to the X-PLOR and CNS programs for crystallography and modeling.

Dear CCP4 Community:

I write today with very sad news about Dr. Warren Lyford DeLano.

I was informed by his family today that Warren suddenly passed away at home on Tuesday morning, November 3rd.

While at Yale, Warren made countless contributions to the computational tools and methods developed in my laboratory (the X-PLOR and CNS programs), including the direct rotation function, the first prediction of helical coiled coil structures, the scripting and parsing tools that made CNS a universal computational crystallography program.

He then joined Dr. Jim Wells' laboratory at UCSF and Genentech where he pursued a Ph.D. in biophysics, discovering some of the principles that govern protein-protein interactions.

Warren then made a fundamental contribution to biological sciences by creating the Open Source molecular graphics program PyMOL that is widely used throughout the world. Nearly all publications that display macromolecular structures use PyMOL.

Warren was a strong advocate of freely available software and the Open Source movement.

Warren's family is planning to announce a memorial service, but arrangements have not yet been made. I will send more information as I receive it.

Please join me in extending our condolences to Warren's family.

Sincerely yours,
Axel Brunger

Axel T. Brunger
Investigator, Howard Hughes Medical Institute
Professor of Molecular and Cellular Physiology
Stanford University

A wrong kind of religion; Freeman Dyson, Superfreakonomics, and global warming

The greatest strength of science is that it tries to avoid dogma. Theories, explanations, hypotheses, everything is tentative, true only as long as the next piece of data does not invalidate it. This is how science progresses, by constantly checking and cross checking its own assumptions. The heart of this engine of scientific progress is constant skepticism and questioning. This skepticism and questioning can often be exasperating. You can enthusiastically propound your latest brainwave only to be met with hard-nosed opposition, deflating your long harbored fervor for your pet idea. Sometimes scientists can be vicious in seminars, questioning and cross questioning you as if you were a defendant in a court.

But you learn to live with this frustration. That's because in science, skepticism always means erring on the safer side. As long as skepticism does not descend into outright irrational cynicism, it is far better to be skeptical than to uncritically buy into every new idea. This is science's own way to ensure immunity to crackpot notions that can lead it astray. One of the important lessons you learn in graduate school is to make peace with your skeptics, to take them seriously, to be respectful to them in debate. This attitude keeps the flow of ideas open, giving everyone a chance to voice their opinion.

Yet the mainstay of science is also a readiness to test audacious new concepts. Sadly, whenever a paradigm of science reaches something like universal consensus, the opposite can happen. New ideas and criticism are met with so much skepticism that it borders on hostility. Bold conjectures are shot down mercilessly, sometimes without even considering their possible merits. The universal consensus hardens a majority of scientists into a vocal and even threatening wall of obduracy against new ideas. From what I have seen in recent times, this unfortunately seems to have happened to the science of global warming.

First, a disclaimer. I have always been firmly in the "Aye" camp when it comes to global warming. There is no doubt that the climate is warming due to greenhouse gases, especially CO2, and that human activities are most probably responsible for the majority of that warming. There is also very little doubt that this rate of warming is unprecedented far back into the past. It is also true that if left unchecked, these developments will cause dangerous and unpredictable changes in the composition of our planet and its biosphere. Yet it does not stop there. Understanding and accepting the details about climate change is one thing; proposing practical solutions for mitigating it is a whole different ball game. This ball game involves more economics than science, since any such measures will have to be adopted on a very large scale that would significantly affect the livelihood of hundreds of millions. We need vigorous discussion on solutions to climate change from all quarters, and the question is far from settled.

But even from a scientific perspective, there are a lot of details about climate change that can still be open to healthy debate. Thus, one would think that any skepticism about certain details of climate change would be met with the same kind of lively, animated argument that is the mainstay of science. Sadly, that does not seem to be happening. Probably the most recent prominent example of this occurred when the New York Times magazine ran a profile of the distinguished physicist Freeman Dyson. Dyson is a personal scientific hero of mine and I have read all of his books (except his recent very technical book on quantum mechanics). Climate change is not one of Dyson's main interests and has occupied very little of his writings, although more so recently. To me Dyson appears as a mildly interested climate change buff who has some opinions on some aspects of the science. He is by no means an expert on the subject, and he never claims to be one. However he has certain ideas, ideas which may be wrong, but which he thinks make sense (in his own words, "It is better to be wrong than to be vague"). For instance he is quite skeptical about computer models of climate change, a skepticism which I share based on my own experience with the uncertainty of modeling even "simple" chemical systems. Dyson, who is also well known as a "futurist", has proposed a very interesting possible solution to climate change: the breeding of special genetically engineered plants and trees with an increased capacity for capturing carbon. I think there is no reason why this possibility could not be looked into.

Now if this were the entire story, all one would expect at most would be experts in climate change respectfully debating and refuting Dyson's ideas strictly on a factual basis. But surprisingly, that's not what you got after the Times profile. There were ad hominem attacks calling him a "crackpot", "global warming denier", "pompous twit" and "faker". Now anyone who knows the first thing about Dyson would know that the man does not have a political agenda and he has always been, if anything, utterly honest about his views. Yet his opponents spared no pains in painting him with a broad denialist brush and even discrediting his other admirable work in physics to debunk his climate change views. What disturbed me immensely was not that they were attacking his facts- that is after all how science works and is perfectly reasonable- but that they were attacking his character, his sanity and his general credibility. The respected climate blogger Joe Romm rained down on Dyson like a ton of bricks, and his criticism of Dyson was full of condescension and efforts to discredit Dyson's other achievements. My problem was not with Romm's expertise or his debunking of facts, but with his tone; note for instance how Romm calls Dyson a crackpot right in the title. One got the feeling that Romm wanted to portray Dyson as a senile old man who was off his rocker. Other bloggers too seized upon Romm-style condescension and dismissed Dyson as a crank. Since then Dyson has expressed regret over the way his views on global warming were overemphasized by the journalist who wrote the piece. But the fact is that it was this piece which made Freeman Dyson notorious as some great global warming contrarian, when the truth was much simpler. In a Charlie Rose interview, Dyson talked about how global warming occupies very little of his time, and his writings clearly demonstrate this. Yet his views on the topic were blown out of proportion. Sadly, such vociferous, almost violent reactions to even reasonable critics of climate change seem to be becoming commonplace. If this is what the science of global warming is starting to look like, the outlook for the future is not very favourable.

If Dyson has been Exhibit A in the list of examples of zealous reactions to unbiased critics of climate change, then the recent book "Superfreakonomics" by economist Steven Levitt and journalist Stephen Dubner (authors of the popular "Freakonomics") would surely be Exhibit B. There is one chapter among six in their book about global warming. And yet almost every negative review on Amazon focuses on this chapter. The authors are bombarded with accusations of misrepresentation, political agendas and outright lies. Joe Romm again penned a rather propagandistic and sensationalist-sounding critique of the authors' arguments. Others duly followed. In response the authors wrote a couple of posts on their New York Times blog to answer these critics. One of the posts was written by Nathan Myhrvold, previously Chief Technology Officer of Microsoft and now the head of a Seattle-based think tank called Intellectual Ventures. Myhrvold is one of the prominent players in the book. Just note the calm, rational response that he pens and compare it to one of Joe Romm's posts filled with condescending personal epithets. If this is really a scientific debate, then Myhrvold surely seems to be behaving like the objective scientist in this case.

So are the statements made by Levitt and Dubner as explosive as Romm and others would make us believe? I promptly bought the book and read it, and read the chapter on climate change twice to make sure. The picture that emerged in front of me was quite different from the one that I had been exposed to until then. Firstly, the authors' style is quite matter of fact and not sensationalist or contrarian sounding at all. Secondly, they never deny climate change anywhere. Thirdly, they make the very important general point that complex problems like climate change are not beyond easy, cheap solutions and that people sometimes don't readily think of these; they cite hand washing to drastically reduce infections and seat belts to reduce fatal car crashes as two simple and cheap innovations that saved countless lives. But on to Chapter 5 on warming.

Now let me say upfront that at least some of Levitt and Dubner's research is sloppy. They unnecessarily focus on the so-called "global cooling" events of the 70s, events that by no means refute global warming. They also seem to cherry-pick the words of Ken Caldeira, a leading expert on climate change. But most of their chapter is devoted to possible cheap, easy solutions to climate change. To tell this story, they focus on Nathan Myhrvold and his team at Intellectual Ventures, who have come up with two extremely innovative and interesting solutions to tackle the problem. The innovations involve the injection of sulfate aerosols into the upper atmosphere. The rationale is based on a singular event, the eruption of Mount Pinatubo in the Philippines in 1991, which sent millions of tons of sulfates and sulfur dioxide into the atmosphere and circulated them around the planet. Sulfate aerosols serve to reflect sunlight and tend to cause cooling. Remarkably, global temperatures fell by a slight amount for a few years after that. The phenomenon was carefully and exhaustively documented. It was a key contributor to the development of ideas which fall under the rubric of "geoengineering". These ideas involve artificially modulating the atmosphere to offset the warming effects of CO2. Geoengineering is controversial and hotly debated, but it is supported by several very well known scientists, and nobody has come up with a good reason why it would not work. In the light of the seriousness of global warming, it deserves to be investigated. With this in mind, Myhrvold and his team came up with a rather crazy-sounding idea: to send up a large hose connected to motors and helium balloons which would pump sulfates and sulfur dioxide into the stratosphere. Coupled with this they came up with an even crazier-sounding idea: to thwart hurricanes by erecting large, balloon-like structures on coastlines which would essentially suck the hot air out of the hurricanes. With their power source gone, the hurricanes would possibly quieten down.

Are these ideas audacious? Yes. Would they work? Maybe, and maybe not. Are they testable? Absolutely, at least on a prototypical, experimental basis. Throughout its history, science has never been fundamentally hostile to crazy ideas as long as they could be tested. Most importantly, the authors propose these ideas because their analysis indicates them to be much cheaper than long-term measures designed to reduce carbon emissions. Solutions to climate change need to be cheap as well as scientifically viable.

So let's get this straight: the authors are not denying global warming and in fact, in their own words, they are proposing a possible solution that could be cheap and relatively simple. And they are proposing this solution only to temporarily act as a gag on global warming, so that long-term measures could then be researched at relative leisure. In fact they are not even claiming that such a scheme would work, only that it deserves research attention. Exactly what part of this argument screams "global warming denial"? One would imagine that opponents of these ideas would pen objective, rational objections based on hard data and facts. And yet, with a few exceptions, almost none of the vociferous critics of Levitt and Dubner seem to have engaged in such an exercise. Most responses seem to be of the "Oh my God! Levitt and Dubner are global warming deniers!!" kind. Science simply does not progress in this manner. All we need to do here is to debate the merit of a particular set of ideas. Sure, they could turn out to be bad ideas, but we will never know until we test them. The late Nobel laureate Linus Pauling said it best: "If you want to have a good idea, first have lots of ideas, then throw the bad ones away". A problem as big as climate change especially needs ideas flying in from all quarters, some conservative, some radical. And as the authors indicate, cheap and simple ideas ought to be especially welcome. Yet the reception to Superfreakonomics to me looked like the authors were being castigated and resented for having ideas. The last thing scientific progress needs is a vocal majority that thwarts ideas from others and encourages them to shut up.

Freeman Dyson once said that global warming sometimes looks like a province of "the secular religion of environmentalism" and sadly there seems to be some truth to this statement. It is definitely the wrong kind of religion. As I mentioned before, almost any paradigm that reaches almost universal consensus runs the risk of getting forged into a religion. At such a point it is even more important to respect critics and give them a voice. Otherwise, going by the almost violent reaction against both Dyson and the authors of Superfreakonomics, I fear that global warming science will descend to the status of biological studies of race. Any research that has to do with race is so politically sensitive and fraught with liabilities and racist overtones that even reasonable scientists who feel that there is actually something beneficial to be gained from the study of race (and there certainly is; nobody would deny that certain diseases are more common to certain ethnic minorities) feel extremely afraid to speak up, let alone apply for funding.

We cannot let such a thing happen with the extremely important issue of climate change. Scientific progress itself would be in a very sad state if critics of climate change with no axe to grind are so vilified and resented that they feel inclined to shut up. Such a situation would trample the very core principles of science underfoot.

That is verboten

I have been poring over some manuscripts recently and realized that there are some words which are best avoided in any scientific paper. I hope I would not use them myself, and I find myself grimacing when someone else uses them.

Probably the most verboten word is "prove". There is no proof in science, only in mathematics. This is especially true in a science where almost everything we do consists of building a model, whether it is a protein-ligand interaction model, a stereoselective organic reaction model, or a transition state model. A model can never be "proven" to be "true". It can only be shown to correlate with experimental results. Thus anyone who says that such and such a piece of data "proves" his model should get the referees' noose right away.

So what would be a better word? "Validate"? Even that sounds too strong a word to me. So does "justify". How about "support"? Perhaps. I think about the best thing that all of us can say is that our model is consistent with the experimental data. This statement makes it clear that we aren't even proposing it as the sole model, only as a model that agrees with the data.

Even here the comparison is tricky since all pieces of data are not created equal. For instance one might have a model of a drug bound to a protein that's consistent with a lot of SAR data but somehow does not seem to agree with one key data point. The question to ask here is what the degree of disagreement is and what the quality of that data point is. If the disagreement is strong, this should be made clear in the presentation of the model. Often it is messy to tally the validity of a model with a plethora of diverse data points of differing quality. But quality of data and underreporting of errors in it is something we will leave for some other time.

For now we can try to keep the proofs out of the manuscripts.

Tautomers need some love

Now here's a paper about something that every college student knows about and yet which is not considered by people who do drug design as often as it should be: tautomers. Yvonne Martin (previously at Abbott) has a nice article about why tautomers are important in drug design and what the continuing challenges are in predicting and understanding them. This should be a good reminder for both experimentalists and theoreticians to consider tautomerism in their projects.

So why are tautomers important? For one thing, a particular tautomer of a drug molecule might be the one that binds to its protein target. More importantly, this tautomer might be the minor tautomer in solution, so knowing the major tautomer in solution may not always help determine the form bound to a protein. This bears analogy with conformational equilibria, in which the conformer binding to a protein is more often than not a minor conformer. Martin illustrates some remarkable cases in which both tautomers of a particular kinase inhibitor were observed in the same crystal structure. In many cases, quantum chemical calculations indicate a considerable energy difference between the minor protein-bound tautomer and its major counterpart. A further fundamental complication arises from the fact that solvent changes hugely impact tautomer equilibria, and enough data is not always available on tautomers in aqueous solution because of problems like solubility.
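A quick back-of-the-envelope calculation (mine, not the paper's) shows how easily a relevant tautomer can hide: Boltzmann populations for a two-tautomer equilibrium at room temperature, for a few assumed free energy gaps.

import math

RT = 0.593  # kcal/mol at 298 K

for dG in (0.5, 1.0, 2.0, 3.0):
    minor_fraction = 1.0 / (1.0 + math.exp(dG / RT))
    print(f"gap of {dG:.1f} kcal/mol -> minor tautomer at ~{100 * minor_fraction:.1f}%")

# Even a 2-3 kcal/mol gap leaves the minor form at only a few percent in solution,
# and yet that can still be the form the protein selects and binds.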

Thus, predicting tautomers is crucial if you want to deal with ligands bound to proteins. It also matters for predicting parameters like logP and blood-brain barrier penetration, which in turn depend on accurate estimates of hydrophobicity. Different tautomers have different hydrophobicities, and Martin indicates that different methods and programs can sometimes calculate different hydrophobicity values for a given tautomer, which will directly impact calculations of logP and blood-brain penetration. Tautomer state will also be crucial in computational methods like docking and QSAR.

Sadly, there is not enough experimental data on tautomer equilibria. Such data is also admittedly hard to obtain; the net pKa of a compound is the result of all tautomers contributing to its equilibrium, and the number of tautomers can sometimes be tremendous. For instance 8-oxoguanine, a well known DNA lesion caused by radiation, can exist in 100 or so ionic and neutral tautomers. Now let's say you want to dock this compound to a protein to predict a ligand orientation. Which tautomer on earth do you possibly choose?
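In practice one usually starts by enumerating the possibilities and then applying some scoring rule. Here is a hedged sketch using RDKit's tautomer enumerator, assuming a reasonably recent RDKit build; the molecule is 2-hydroxypyridine, a textbook stand-in, since the 8-oxoguanine case above would be handled the same way, only with far more forms to sift through.

from rdkit import Chem
from rdkit.Chem.MolStandardize import rdMolStandardize

mol = Chem.MolFromSmiles("Oc1ccccn1")  # 2-hydroxypyridine / 2-pyridone pair

enumerator = rdMolStandardize.TautomerEnumerator()
tautomers = list(enumerator.Enumerate(mol))

print(f"{len(tautomers)} tautomers found:")
for taut in tautomers:
    print("  ", Chem.MolToSmiles(taut))

# One common (and debatable) choice for docking is the "canonical" tautomer,
# picked here by RDKit's rule-based score rather than by any solution- or
# protein-phase energetics.
print("canonical pick:", Chem.MolToSmiles(enumerator.Canonicalize(mol)))

The point of the sketch is the caveat in the comments: enumeration is the easy part; deciding which tautomer actually matters for binding is exactly where the missing experimental data hurts.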

Clearly, calculating tautomers can be very important for drug design. As Martin mentions, more experimental as well as theoretical data on tautomers is necessary; however such research, similar to the solvation measurements discussed in a past post, usually falls under the heading of "too basic" and therefore may not be funded by the NIH. But funded or not, successful ligand design cannot proceed without consideration of tautomers. What was that thing about basic research yielding its worth many times over in applications?

Martin, Y. (2009). Let’s not forget tautomers. Journal of Computer-Aided Molecular Design. DOI: 10.1007/s10822-009-9303-2

The model zoo

So I am back from the eCheminfo meeting at Bryn Mawr College. For those so inclined (both computational chemists and experimentalists), I would strongly recommend the meeting for its small size and the consequent close interaction. The campus, with its neo-gothic architecture and verdant lawns, provides a charming environment.

Whenever I go to one of these meetings I am usually left with a slightly unsatisfied feeling at the end of many talks. Most computational models used to describe proteins and protein-ligand interactions are patchwork models based on several approximations. Often one finds several quite different methods (force fields, QSAR, quantum mechanics, docking, similarity-based searching) giving similar answers to a given problem. The choice of method is usually made on the basis of availability, computational power and past successes, rather than some sound judgement allowing one to choose that particular method over all others. And as usual it depends on what question you are trying to ask.

But in such cases I am always left with two questions. Firstly, if several methods give similar answers (and sometimes if no method gives the right answer), then which is the "correct" method? And secondly, because there is no one method that gives the right answer, one cannot escape the feeling at the end of a presentation that the results could have been obtained by chance. Sadly, it is not even always possible to actually calculate the probability that a result was obtained by chance. An example is our own work on the design of a kinase inhibitor which was recently published; docking was remarkably successful in this endeavor, and yet it's hard to pinpoint why it worked. In addition, a professor might use some complex model combining neural networks and machine learning and get results agreeing with experiment, and yet by that time the model may have become so abstract and complex that one would have trouble understanding any of its connections to reality (that is partly what happened to financial derivatives models when their creators themselves stopped understanding why they really worked, but I am digressing...)

However, I remind myself in the end about something that is always easy to forget: models are emphatically not supposed to be "correct" from the point of view of modeling "reality", no matter what kind of fond hopes their creators may have. The only way in which it is possible to gauge the "correctness" of a model is by comparing it to experiment. If several models agree with experiment, then it may be meaningless to really argue about which one is the right one. People have suggested metrics to discriminate between such similar models, for instance by employing that time-honored principle of Occam's Razor, whereby a model with fewer parameters might be preferred. Yet in practice such philosophical distinctions are hard to apply and the details can be tricky.
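One such metric, offered here purely as an illustration of the parsimony idea rather than anything discussed at the meeting, is the Akaike Information Criterion, which penalizes a model's fit by its parameter count so that two models agreeing comparably well with experiment can still be ranked.

import numpy as np

def aic(residuals, n_params):
    """AIC for a least-squares fit: n * ln(RSS / n) + 2k; lower is better."""
    residuals = np.asarray(residuals)
    n = len(residuals)
    rss = np.sum(residuals ** 2)
    return n * np.log(rss / n) + 2 * n_params

# hypothetical residuals (predicted minus measured affinities, say) for two models
simple_model = [0.4, -0.3, 0.5, -0.2, 0.3, -0.4]    # 3 adjustable parameters
complex_model = [0.3, -0.3, 0.4, -0.2, 0.3, -0.3]   # 12 adjustable parameters

print("simple model AIC: ", round(aic(simple_model, 3), 1))
print("complex model AIC:", round(aic(complex_model, 12), 1))
# The complex model fits marginally better but pays heavily for its extra
# parameters; the simpler model wins on this criterion.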

Ultimately, while models can work well on certain systems, I can never escape the nagging feeling that we are somehow "missing reality". Divorcing models from reality, irrespective of whether they are supposed to represent it or not, can have ugly consequences, and I think all these models are in danger of falling into a hole on specific problems; adding too many parameters to comply with experimental data, for instance, can easily lead to overfitting. But to be honest, what we are trying to model at this point is so complex (the forces dictating protein folding or protein-ligand interactions only get more convoluted, like Alice's rabbit hole) that this is probably the best we can do. Even ab initio quantum mechanics involves acute parameter fitting and approximations when modeling the real behavior of biochemical systems. Romantic Platonists like me will probably have to wait, perhaps forever.
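As a toy illustration of the overfitting point (it has nothing to do with any particular force field or QSAR model), here is a small sketch in which polynomials of increasing degree fit noisy data ever more closely while predicting held-out data ever more poorly:

```python
# Toy overfitting demo: adding parameters always improves the fit to the data
# in hand, but past some point the extra parameters fit the noise and the
# predictions on held-out data get worse. Purely generic; not a model of any
# chemical system.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(-1, 1, 20)
x_test  = np.linspace(-1, 1, 200)
truth   = lambda x: np.sin(2.5 * x)          # the "reality" we pretend not to know
y_train = truth(x_train) + rng.normal(scale=0.2, size=x_train.size)

for degree in (1, 3, 9, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse  = np.mean((np.polyval(coeffs, x_test) - truth(x_test)) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

Watching the training error shrink while the held-out error climbs is the whole overfitting story in four lines of output; an Occam-style preference for the model with fewer parameters is simply a way of guarding against it when no held-out data are available.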

New Book

Dennis Gray's "Wetware: A Computer in every Living Cell" discusses the forces of physics, chemistry and self-assembly that turns a cell into a computer like concatenation of protein networks that communicate, evolve and perform complex functions. The origin of life is essentially a chemistry problem and it centers on self-assembly.

At the Bryn Mawr eCheminfo Conference

From Monday through Wednesday I will be at the eCheminfo "Applications of Cheminformatics & Chemical Modelling to Drug Discovery" meeting at Bryn Mawr College, PA. The speakers and topics as seen in the schedule are interesting and varied. As usual, if anyone wants to crib about the finger food I will be around. I have heard the campus is quite scenic.

Coyne vs Dawkins

This year being the 200th anniversary of Darwin's birth, we have seen a flurry of books on evolution. Out of these, two stand out for the authority of their writers and their core focus on the actual evidence for evolution: Jerry Coyne's "Why Evolution is True" and Richard Dawkins's "The Greatest Show on Earth". I have read Coyne's book and it's definitely an excellent introduction to evolution. Yet I am about 300 pages into Dawkins and cannot help but be sucked in again by his trademark clarity and explanatory elegance. I will have detailed reviews of the two books later, but for now here are the main differences I can think of:

1. Dawkins talks about more evidence than simply that from biology. He also has evidence from history, geology and astronomy.

2. Dawkins's clarity of exposition is of course highly commendable. You will not necessarily find the literary sophistication of the late Stephen Jay Gould here, but for straight and simple clarity this is marvelous.

3. A minor but noteworthy difference is the inclusion of dozens of absorbing color plates in the Dawkins book which are missing in Coyne's.

4. Most importantly, Dawkins's examples of evolution are on the whole more fascinating and diverse than Coyne's, although Coyne's are pretty good too. Coyne, for instance, dwells more on the remarkable evolution of the whale from land-dwelling animals (with the hippo being its closest living cousin). Also, Coyne's chapters on sexual selection and speciation are among the best such discussions I have come across.

Dawkins, on the other hand, has a fascinating account of Michigan State University bacteriologist Richard Lenski's amazing experiments with E. coli, which have been running for over twenty years and have provided a remarkable window into evolution in real time like nothing else. Also marvelously engaging are his descriptions of the immensely interesting history of the domestication of the dog. Probably the most striking example of evolution in real time in his book is his clear account of University of Exeter biologist John Endler's fabulous experiments with guppies, in which the fish evolved drastically before our very eyes over relatively few generations in response to carefully regulated and modified selection pressure.

Overall then, Coyne's book does a great job of describing evolution but Dawkins does an even better job of explaining it. As usual Dawkins is also uniquely lyrical and poetic in parts with his sparkling command of the English language.

Thus I would think that Dawkins and Coyne (along with probably Carl Zimmer's "The Tangled Bank" due to be published on October 15) would provide the most comprehensive introduction to evolution you can get.

As Darwin said, "There is grandeur in this view of life". Both Coyne and Dawkins serve as ideal messengers to convey this grandeur to us and to illustrate the stunning diversity of life around us. Both are eminently readable.

The 2009 Nobel Prize in Chemistry: Ramakrishnan, Steitz and Yonath

Source: Nobelprize.org

Venki Ramakrishnan, Ada Yonath and Tom Steitz have won the 2009 Nobel Prize in chemistry for their pioneering studies of the structure of the ribosome. The prize had been predicted by many for years, and these names have appeared on my own lists for a couple of years now; in fact I remember talking with a friend about Yonath and Ramakrishnan getting it as early as 2002. Yonath becomes the first Israeli woman to win a science Nobel Prize, and Ramakrishnan becomes the first Indian-born scientist to win a chemistry prize.

The importance of the work has been obvious for many years, since the ribosome is one of the most central components of the machinery of life in all organisms. Every school student is taught about its function as the giant hub that holds together the multicomponent assembly of translation, the process in which the sequence of letters in messenger RNA is read off to produce proteins. The ribosome comes as close to being an assembly line for manufacturing proteins as anything possibly can. It is also an important target for antibiotics like tetracycline. This is undoubtedly a well-deserved accolade. The prize comes close on the heels of the 2006 prize awarded to Roger Kornberg for his studies of transcription, the process preceding translation in which DNA is copied into RNA.

The solution of the ribosome structure by x-ray crystallography is a classic example of work that has a very high chance of getting a prize because of its fundamental importance. X-ray crystallography is a field that has been honored many times; as people have mentioned before, if there's any field where you stand a good chance of winning a Nobel Prize, it's x-ray crystallography on some important protein or biomolecule. In the past, x-ray crystallography of hemoglobin, potassium ion channels, photosynthetic proteins, the "motor" that generates ATP and, most recently, the machinery of genetic transcription has been honored by the Nobel Prize. It's also the classic example of a field where the risks are as high as the rewards, since you may easily spend two decades or more working on a structure and in the end fail to solve it or, worse, be scooped.

However, when this meticulous effort pays off the fruits are sweet indeed. In this case the three researchers have been working on the project for years and their knowledge has built up not overnight but incrementally through a series of meticulous and exhaustive experiments reported in top journals like Nature and Science. It's an achievement that reflects as much stamina and the ability to overcome frustration as it does intelligence.

It's a prize that is deserved in every way.

Update: As usual the chemistry blog world seems to be divided over the prize, with many despondently wishing that a more "pure" chemistry prize had been awarded. However, this prize is undoubtedly being awarded primarily for chemistry.

Firstly, as some commentators have pointed out, crystallography, while the most important aspect of the ribosome work, was not the only one. A lot of important chemical manipulations had to be carried out in order to shed light on the ribosome's structure and function.

Secondly, as Roger Kornberg pointed out in his interview (when similar concerns were voiced), the prize is being awarded for the determination of an essentially chemical structure, in principle no different from the myriad structures of natural and unnatural compounds that have been the domain of classical organic chemistry for decades.

Thirdly, the ribosome can be thought of as an enzyme that forms peptide bonds. Solving its structure therefore entailed pinpointing the precise locations of the catalytic groups responsible for the all-important peptide bond-forming reaction. Finding the locations of these groups is no different from determining the catalytic residues of a more conventional enzyme like chymotrypsin or ornithine decarboxylase.

Thus, the prize quite squarely falls in the domain of chemistry. It's naturally chemistry as applied to a key biological problem, but I don't doubt that the years ahead will see prizes given to chemistry as applied to the construction of organic molecules (palladium catalysis) or chemistry as applied to the synthesis of energy efficient materials (perhaps solar cells).

I understand that having a chemistry prize awarded in one's own area of research is especially thrilling, but, to modify a JFK quote, first and foremost "Wir sind Chemiker". We are all chemists, irrespective of our sub-disciplines, and we should all be pleased that an application of our science has been honored, an application that only underscores the vast and remarkably diverse purview of our discipline.

Update: Kao, Boyle and Smith

Seems nobody saw this coming but the importance of optical fibers and CCDs is obvious.

It's no small irony that the CCD research was done in 1969 at Bell Labs. With this prize, Bell Labs may well count as the most productive basic industrial research organization in history, and yet today it is a mere shadow of its former self. The CCD research was done 40 years ago, and the time in which it was done seems disconnected from the present not just temporally but more fundamentally. The research lab that once housed six Nobel Prize winners on its staff can now count a total of four scientists in its basic physics division.

The 80s and indeed most of the postwar decades before then seem to be part of a different universe now. The Great American Industrial Research Laboratory seems like a relic of the past. Merck, IBM, Bell Labs...what on earth happened to all that research productivity? Are we entering a period of permanent decline?

The 2009 Nobel Prize in Physiology or Medicine

Source: Nobelprize.org

The 2009 Nobel Prize in Physiology or Medicine has been awarded to Elizabeth Blackburn (UCSF), Carol Greider (Johns Hopkins) and Jack Szostak (Harvard) for their discovery of the enzyme telomerase and its role in human health and disease.

This prize was highly predictable because the trio's discovery is of obvious and fundamental importance to an understanding of living systems. DNA replication is a very high fidelity event in which new nucleotides are added to the newly synthesized DNA strand with an error rate of only about 1 in 10^9. Highly efficient repair enzymes act on damaged or mispaired DNA strands and fix them with impressive accuracy. And yet the process has some intrinsic problems. One of the most important concerns the shortening of one of the two newly synthesized strands of the double helix during every successive round of duplication, an inherent consequence of the manner in which the two strands are synthesized.

The ends of chromosomes are capped by repetitive stretches of DNA termed telomeres, and this shortening progressively erodes them. As our cells divide, generation after generation, the chromosomal ends get shorter. Ultimately they become too short for the chromosomes to remain functional, and the cell puts into motion the machinery of apoptosis, or cell death, which eliminates cells carrying such chromosomes. The three recipients of this year's prize discovered an enzyme called telomerase that counteracts the shortening of chromosomes by adding new nucleotides to their ends. Greider was actually Blackburn's Ph.D. student at Berkeley when they did the pioneering work (not every Ph.D. student can claim that his or her thesis was recognized by a Nobel Prize). The group not only discovered the enzyme but demonstrated, through a series of comprehensive experiments, that mutant cells and mice lacking the enzyme had shortened life spans and other fatal defects, indicating its key role in preventing cell death. At the same time, they and other scientists crucially discovered that certain kinds of cancers, brain tumors for instance, have high levels of telomerase. This means that cancer cells maintain their chromosome ends more efficiently than normal cells, which accounts in part for their increased life spans and their ability to outcompete normal cells for survival. (As usual, what's beneficial for normal cells unfortunately turns out to be even more beneficial for cancer cells; the need to address similar processes in both kinds of cells is part of what makes cancer such a hard disease to treat.)

The work is thus a fine example of both pure and applied research. Most of its implications lie in an increased understanding of the fundamental biochemical machinery governing living cells. However, with the observation that cancer cells express higher levels of telomerase, the work also opens up possible therapies in which drugs would target the elevated telomerase in such cells. Conversely, boosting the level of the enzyme in normal cells could possibly contribute toward slowing down aging.

The prize has been awarded for work that was done about twenty years ago; this is quite typical of the Nobel Prize. Since then Jack Szostak has turned his focus to other exciting and unrelated research on the origins of life. In this field too he has done pioneering work, involving for instance the synthesis of membranes that could mimic the protocells formed on the early earth. Blackburn also became famous in 2004 for a different reason: she was bumped off President Bush's bioethics council for her opposition to a ban on stem cell research. Given the Bush administration's consistent manipulation and suppression of cogent scientific data, Blackburn wore her rejection as a badge of honor. Catherine Brady has recently written a fine biography of Blackburn.

Update: Blackburn, Greider, Szostak

A well-deserved and well-predicted prize for telomerase
Again, I point to Blackburn's readable biography

The evils of our time

So yesterday over lunch some colleagues and I got into a discussion about why scientific productivity in the pharmaceutical industry has been declining so perilously over the last two decades. What happened to the golden 80s, when not just the "Merck University" but other companies produced academic-style, high-quality research and published regularly in the top journals? We hit on some of the usual factors. Maybe readers can think of more.

1. Attack of the MBAs: Sure, we can all benefit from MBAs, but in the 80s places like Merck used to be led by people with excellent, sometimes exceptional, scientific backgrounds. Many were hand-picked from top academic institutions. These days we see mostly lawyers and pure MBAs occupying the top management slots. Not having a scientific background definitely makes them empathize less with the longhairs.

2. Technology for its own sake: In the 90s, many potentially important technologies like HTS and combinatorial chemistry were introduced. But people have a tendency to worship technology for its own sake, and many fell in love with these innovations to the point of wanting to use them everywhere and treating them as cures for most important problems. Every technology works best when it occupies its own place in the hierarchy of methodologies and approaches, and when a good understanding of its limitations wisely prevents its over-application. That does not seem to have happened with HTS or combi chem.

3. The passion of the structuralists: At the other end from the science-averse managers are the chemical purists, so bent on "rules" for generating leadlike and druglike molecules that they have forgotten the original purpose of a drug. The Lipinskians apply Lipinski's rules, which were meant to be guidelines anyway, to the extent that they trump everything else; Lipinski himself never intended them as absolute constraints.

What is remarkable is that we already knew that about 50% of drugs are derived from natural products, which are about as un-Lipinskian as you can imagine. In fact many drugs are so un-Lipinskian as to defy imagination. I remember the first time I saw the structure of metformin, essentially a dimethylated biguanide (a pair of fused guanidine-like fragments), and almost fell off my chair. I couldn't have imagined in my wildest dreams that this molecule could be "druglike", let alone one of the biggest-selling drugs in the world (a quick descriptor calculation, sketched at the end of this post, makes the point). I will always remember metformin as the granddaddy of rejoinders to all these rules.

The zealous application of rules makes us forget the only two essential features of any good drug: efficacy and safety, which is to say pharmacology. If a drug displays good pharmacology, its structure could resemble a piece of coal for all I care. In the end, the pharmacology and the toxicity are all that really matter.

4. It's the science, stupid: In the 80s there were four Nobel Prize winners on the technical staff of Bell Labs. Now the entire physics division of that iconic research outfit boasts a dozen or so scientists in all. What happened to Bell Labs has happened to most pharmaceutical companies. The high respect that basic science once enjoyed has been transferred to other things: quarterly profits, CEO careers and the pleasure of stockholders. What is even more lamentable is the apparent mentality that doing good science and making profits have nothing to do with each other; the great pharmaceutical companies of the 80s, like Merck, clearly proved otherwise.

Part of the drive toward short-term profits and the resulting obsession with mergers and acquisitions has clearly arisen from the so-called blockbuster model: if a candidate is not projected to make a billion dollars or more, dump it overboard. Gone are the days when a molecule was pursued as an interesting therapy that would validate some interesting science or biochemical process, irrespective of its projected market value. Again, companies in the past have proved that you can pursue therapeutic molecules for their own sake and still reap healthy profits. Profits seem to be like the electron in the famous double-slit experiment: if you don't worry about them, they will come to you, but start obsessing about them too much and you will watch them gradually fade away like that mystical interference pattern.

We ended our discussion wondering what it will take, in the end, for big pharma to start truly investing in academic-style basic science. The next public outcry, when drug-resistant strains of TB kill millions because the drugs that could have fought them were never discovered under the current business model? It could be too late by then.
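As promised above, here is a minimal sketch of the rule-of-five bookkeeping the structuralists lean on, applied to metformin; it assumes the open-source RDKit toolkit is available, and uses the standard SMILES string for metformin.

```python
# Minimal rule-of-five bookkeeping with RDKit (assumed installed).
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def rule_of_five_report(smiles: str) -> dict:
    """Return the four classic Lipinski descriptors for a SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    return {
        "MW":    Descriptors.MolWt(mol),        # rule of five: < 500
        "cLogP": Descriptors.MolLogP(mol),      # < 5
        "HBD":   Lipinski.NumHDonors(mol),      # <= 5
        "HBA":   Lipinski.NumHAcceptors(mol),   # <= 10
    }

metformin = "CN(C)C(=N)NC(=N)N"   # dimethylbiguanide
print(rule_of_five_report(metformin))

# Metformin comes out tiny and very polar: it passes the numerical filters with
# room to spare, yet nobody staring at these four numbers alone would have
# pegged it as a blockbuster. The descriptors are bookkeeping; the pharmacology
# is the drug.
```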

That time of the year

So it seems the Nobel speculations have started again. I have been doing them for some years now and this year at a meeting in Lindau in Germany I saw 23 Nobel Prize winners in chemistry up close, none of whom I predicted would win the prize (except the discoverers of GFP, but that was a softball prediction).

As I mentioned in one of my posts from Lindau, predicting the prize for chemistry has always been tricky because the discipline spans the breadth of the spectrum of science, from physics to biology. The chemistry prize always leaves a select group of people upset; the materials scientists will crib about biochemists getting it, the biochemists will crib about chemical physicists getting it. However, as I mentioned in the Lindau post about Roger Kornberg, to me this selective frustration indicates the remarkable purview of chemistry. With this in mind, here goes another short round of wild speculation. It would of course again be most interesting if someone who was not on anybody's list gets the prize; there is no better indication of the diversity of chemistry than a failure to predict the winner.

1. Structural biology: Not many seem to have mentioned this. Ada Yonath (Weizmann Institute) and Venki Ramakrishnan (MRC) should definitely get it for their resolution of the structure of the ribosome. Cracking an important biological structure has always been the single best bet for winning the Nobel (the tradeoff being that you can spend your life doing it and not succeed, or worse, get scooped), and Yonath and Ramakrishnan would deserve it as much as, say, Roderick MacKinnon (potassium channel) or Hartmut Michel (photosynthetic reaction center) did.

2. Single-molecule spectroscopy: The technique has now come of age and fascinating studies of biomolecules have been done with it. W. E. Moerner and Richard Zare (Stanford) seem to be in line for it.

3. Palladium: This is a perpetual favorite of organic chemists. Every week I get emails announcing the latest literature selections for that week's organic journal club in our department. One or two of the papers without exception feature some palladium catalyzed reaction. Palladium is to organic chemists what gold was to the Incas. Heck, Suzuki and perhaps Buchwald should get it.

4. Computational modeling of biomolecules: Very few computational chemists get Nobel Prizes, but if anyone should get it, it's Martin Karplus (Harvard). More than anyone else he pioneered the use of theoretical and computational techniques for studying biomolecules. I would also think of Norman Allinger (UGA), who pioneered force fields and molecular mechanics, but I don't think the Nobel committee considers that work fundamental enough, although it is now a cornerstone of computational modeling. Another candidate is Ken Houk (UCLA), who more than anyone else pioneered the application of computational techniques to the study of organic reactions. As my past advisor quipped when introducing him at a seminar, "If there's a bond that is broken in organic chemistry, Ken has broken it on his computers".

Other speculations include work on electron transfer in DNA, pioneered especially by Jacqueline Barton (Caltech). However, I remember more than one respectable scientist saying that this work is controversial. On a related topic, though, there is one field which has not been honored:

5. Bioinorganic chemistry: The names of Stephen Lippard (MIT) and Harry Gray (Caltech) come to mind. Lippard has cracked many important problems in metalloenzyme chemistry, Gray has done some well-established and highly significant work on electron transfer in proteins.

So those are the names. Some people are mentioning Michael Grätzel for his work on solar cells, although I personally don't think the time is ripe for recognizing solar energy. Hopefully the time will come soon. It also seems that Stuart Schreiber is no longer on many of the lists. I think he still deserves a prize for really being the pioneer in investigating the interaction of small organic and large biological molecules.

As for the Medicine Nobel, from a drug discovery point of view I really think that Akira Endo of Japan, who discovered the statins, should get it. Although the important commercial statins were discovered by major pharmaceutical companies, Endo not only painstakingly isolated and tested the first statin but was also among the first to propound the importance of inhibiting HMG-CoA reductase, the key enzyme in cholesterol biosynthesis. He seems to deserve a prize just as Alexander Fleming did, and just like penicillin, statins have literally saved millions of lives.

Another popular candidate for the medicine Nobel is Robert Langer of MIT, whose drug delivery methods have been very important in the widespread application of the controlled delivery of drugs. A third good bet for the medicine prize is Elizabeth Blackburn who did very important work in the discovery of telomeres and telomerases. Blackburn is also a warm and highly ethical woman who was bumped off Bush's bioethics committee for her opposition to the ban on stem cell research. Blackburn proudly wears this label, and you can read this and other interesting aspects of her life in her biography.

And finally of course, as for the physics prize, give it to Stephen Hawking. Just give it to him. And perhaps to Roger Penrose. Just do it!!

Update: Ernest McCulloch and James Till also seem to be strong candidates for the Medicine prize for their discovery of stem cells. They also won the Lasker Award in 2005, which has often been a stepping stone on the path to the Nobel. McCulloch seems to be 83, so now might be a good time to award him the prize.

For chemistry, Benjamin List also seems to be on many lists for his work in organocatalysis, but I personally think the field may be too young to be recognized.

Another interesting category for the physics prize seems to be quantum entanglement. Alain Aspect, who performed the crucial experimental tests of Bell's Theorem, definitely comes to mind. Bell himself would almost certainly have received the prize had he not died an untimely death from a stroke.

Previous predictions: 2008, 2007, 2006

Other blogs: The Chem Blog, In The Pipeline

First potential HIV vaccine

This just came off the press:
A new AIDS vaccine tested on more than 16,000 volunteers in Thailand has protected a significant minority against infection, the first time any vaccine against the disease has even partly succeeded in a clinical trial...Col. Jerome H. Kim, a physician who is manager of the army’s H.I.V. vaccine program, said half the 16,402 volunteers were given six doses of two vaccines in 2006 and half were given placebos. They then got regular tests for the AIDS virus for three years. Of those who got placebos, 74 became infected, while only 51 of those who got the vaccines did. Results of the trial of the vaccine, known as RV 144, were released at 2 a.m. Eastern time Thursday in Thailand by the partners that ran the trial, by far the largest of an AIDS vaccine: the United States Army, the Thai Ministry of Public Health, Dr. Fauci’s institute, and the patent-holders in the two parts of the vaccine, Sanofi-Pasteur and Global Solutions for Infectious Diseases.
However this also came off the same press:
Scientists said they were delighted but puzzled by the result. The vaccine — a combination of two genetically engineered vaccines, neither of which had worked before in humans — protected too few people to be declared an unqualified success. And the researchers do not know why it worked...The most confusing aspect of the trial, Dr. Kim said, was that everyone who did become infected developed roughly the same amount of virus in their blood whether they got the vaccine or a placebo. Normally, any vaccine that gives only partial protection — a mismatched flu shot, for example — at least lowers the viral load.
Nevertheless, after a decade of failures, at least it's a definite starting point scientifically.
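A back-of-the-envelope check of the numbers quoted above (assuming the roughly 8,200-per-arm split implied by the article, and ignoring the trial's actual pre-specified statistical analysis) gives a feel for just how modest the effect is; scipy is assumed to be available.

```python
# Back-of-the-envelope look at the RV 144 numbers quoted above.
# Arm sizes are assumed to be half of 16,402; the trial's own analysis
# was more involved than this simple 2x2 comparison.
from scipy.stats import chi2_contingency

per_arm = 16402 // 2
vaccine = (51, per_arm - 51)   # (infected, not infected)
placebo = (74, per_arm - 74)

efficacy = 1 - (vaccine[0] / per_arm) / (placebo[0] / per_arm)
chi2, p_value, _, _ = chi2_contingency([vaccine, placebo])

print(f"Crude vaccine efficacy: {efficacy:.1%}")
print(f"Chi-square p-value:     {p_value:.3f}")
```

The crude efficacy comes out around 31 percent with a p-value hovering near 0.05, which is exactly why the result reads as promising but puzzling rather than as a breakthrough.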

The emerging field of network biology

One of the things I have become interested in recently is the use of graph theory in drug discovery. I took a class in graph theory during my sophomore year, and while I have forgotten some of the important things from that class, I am brushing up on that material again from the excellent textbook that was recommended then, Alan Tucker's "Applied Combinatorics", which covers both graph theory and combinatorics.

The reason why graph theory has become exciting in drug discovery in recent times is the rise of the paradigm of 'systems biology'. When they hear this term, many purists cringe at what they see as simply a fancy name for an extension of well-known concepts. However, labeling a framework does not reduce its utility, and in this context the approach would be better named 'network biology'. What makes graph theory tantalizingly interesting is the large networks of interactions between proteins, genes, and drugs and their targets that have been unearthed in the last few years. These networks can be viewed as abstract graphs, and predictions based on the properties of those graphs could help us understand and anticipate their behavior. This kind of 'meta' thinking, which was previously not very feasible for lack of data, can unearth interesting insights that may be missed by looking at individual molecular interactions alone.

The great power and beauty of mathematics has always been to employ a few simple principles and equations to explain many diverse phenomena. Thus a graph is any collection of vertices or nodes (which can represent molecules, proteins, actors, web pages, bacteria, social agents and so on) connected by edges (which represent interactions between the vertices). In network analysis this power has manifested itself in a singularly interesting observation: many diverse networks, from protein-protein interaction networks to the internet to academic citation networks, are scale-free. Scale-free networks have a topology which, as the name indicates, is independent of scale. From a mathematical standpoint, the defining quality of a scale-free network is that its degree distribution follows a power law: the probability of any node having k connections goes as k to some power -γ, where γ is usually a number between 2 and 3.

Thus, P(k) ~ k^-γ

The scale-free property has been observed for a remarkable number of networks, from the internet to protein-protein interactions. It is counterintuitive, since one would expect the number of connections to follow a normal or Poisson-like distribution, with P(k) falling off more or less exponentially in k and highly connected nodes being vanishingly rare. The scale-free property instead leads to a valuable insight: there are far more densely connected nodes, or 'hubs', than such a distribution would predict. This can have huge implications. For instance, it could allow us to predict which hubs in the internet would be most vulnerable to attack. In the study of protein-protein interactions, it could tantalizingly allow us to predict which protein or set of proteins to hit in order to disrupt the maximum number of interactions. A recent study of the network of FDA-approved drugs and their targets suggests that this network is scale-free; this could mean that there is a privileged set of targets that is heavily connected to most drugs. Such a study could point both to targets that might be hit more safely and to new, sparsely connected targets that could be productively investigated. Any such speculation can of course only be guided by data, but it would be much harder to engage in without the big-picture view afforded by graphs and networks.
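To get a hands-on feel for what a scale-free degree distribution looks like, here is a small sketch using the NetworkX library (assumed installed); the Barabási-Albert model is a standard generator of synthetic scale-free graphs, so this is a stand-in for a real drug-target or protein-protein network, not an analysis of one.

```python
# Sketch: generate a synthetic scale-free graph (Barabasi-Albert model) and
# look at its degree distribution. A real analysis would replace the synthetic
# graph with measured drug-target or protein-protein interaction data.
import networkx as nx
import numpy as np

G = nx.barabasi_albert_graph(n=5000, m=3, seed=42)
degrees = np.array([d for _, d in G.degree()])

# The signature of P(k) ~ k^-gamma: most nodes have only a few connections,
# but a handful of very highly connected "hubs" sit far out in the tail.
print("median degree:           ", int(np.median(degrees)))
print("mean degree:             ", round(float(degrees.mean()), 1))
print("max degree (biggest hub): ", int(degrees.max()))
print("nodes with degree > 50:   ", int((degrees > 50).sum()))
```

Plotting the degree counts on log-log axes would give a roughly straight line with slope -γ, which is the quick-and-dirty test usually applied in practice, and one that, per the caveats in the next paragraph, is easy to over-interpret.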

However, the scale-free property has to be inferred very cautiously. Many networks that seem scale-free are subnetworks of a parent network that may not be scale-free, and conversely, a scale-free parent network may contain subnetworks that are not. The usual problem with such studies is the lack of data. For instance, we have still plumbed only a fraction of the total number of protein-protein interactions that may exist, and we don't know whether that vast ultimate network is scale-free or not. And of course, the data underlying such interactions comes from relatively indirect methods like yeast two-hybrid or reporter-gene assays, and its accuracy must be judiciously established.

But notwithstanding these limitations, concepts from network analysis and graph theory are beginning to have an impact in drug discovery and biology. They allow us to take a big-picture view of the vast web of protein-protein, gene-gene, drug-protein and drug-gene interactions. There are several more concepts that I am currently trying to understand. This is very much a field that is still developing, and one can hope that insights from it will substantially augment the process of drug discovery.

Some further reading:
1. A great primer on the basics of graph theory and its applications in biology. A lot of the references at the end are readable
2. Applied Combinatorics by Alan Tucker
3. Some class notes and presentations on graph theory and its application in chemistry and biology
4. A pioneering 1999 Science paper that promoted interest in scale-free networks. The authors demonstrated that several diverse networks may be scale-free.