Field of Science

An almost-Nobelist's lesson to his daughter

Last year's physics Nobel Prize was awarded to three people who discovered one of the most significant recent facts about our universe: the fact that its expansion is accelerating. It turns out that two of the three laureates got their Ph.D.s with Robert Kirshner at Harvard, who among other things has written the excellent book "The Extravagant Universe". Kirshner was involved in a big way with the supernova project that discovered the acceleration, and he trained dozens of students apart from the two prize winners. He would almost certainly have won the prize had it not been restricted to three scientists. One might have expected him to feel at least a few pangs of regret about not winning. But even if he did, his response to his daughter, which she published in this week's Science, is worth reading:

One morning this past October, I woke up to find an email from my father. Reading the subject line, I immediately burst into tears. My father, Robert Kirshner, is an astronomy professor at Harvard University. The subject of his e-mail was, “My Students won the Nobel Prize!”...I was worried because I knew my father to be incredibly competitive...But as it turns out, his response to not winning is the lesson I really value. When I spoke to him that morning, he amazed me: He was proud of the people he has worked with and taught; he was generous-spirited; he was funny; and he had perspective. What a relief! It turns out that a guy who spent his life trying to understand the immensity of the universe could put into perspective the relative importance of which particular earthling took home the ribbons and the medals and got to bow to the King of Sweden. It turns out that what was really important to him was the work itself, the wonder of this extraordinary universe, the honor and the fun of trying to figure things out, and maybe, just a little bit, the thrill of the chase. I admire all the terrific scientists who contributed to this greater understanding of the universe we live in, but in particular I admire my father, whose expansive understanding of what really matters taught me something of astronomical importance.

That's a lesson which, while contrary to the common human emotions of jealousy and vanity, seems to be alive and well in scientists like Kirshner, and it's worth keeping in mind. Science is inherently a community enterprise; even the science done by supposed loners like Newton and Einstein would not have been possible had it not built on a body of work extending back several centuries. The culmination of this body of work is what's real. The prizes are incidental.

The protein makes the poison: Dancing fruit flies and terfenadine

"Chemophobia" is the name of the exasperating phenomenon in which every material substance is branded as a "chemical" and made to look dangerous irrespective of context. Since everything in the universe is supposedly material, by definition chemophobia extends to everything. The media in particular has eagerly latched on to this idea, forgetting that almost everything (not just chemicals but life, liberty and the pursuit of happiness) is dangerous in the wrong quantities and context and harmless in the right ones. 

Matt Hartings at Sciencegeist had the excellent idea for us bloggers to do our part in dispelling chemophobia. He wants us to write about our favorite toxic chemical compounds. This will not only give us an opportunity to explore the many incarnations of toxicity but will also help inform the public about the highly context-specific safety and toxicity of chemicals.

My fellow bloggers have done a great job so far in documenting the various facts and myths about toxic molecules (you can find summaries on Matt's blog). A resounding theme in their posts is that "the dose makes the poison". It's an idea which goes back to Paracelsus in the 16th century and sounds intuitively true (consider the widespread injunction against gluttony), but which seems surprisingly resistant to universal acceptance. This dose-specific toxicity especially makes its appearance in medicine, with unfortunate reports of celebrities fatally overdosing on prescription drugs regularly appearing in the news media. Strangely, the same media which readily accepts that prescription drugs are safe as long as they are not abused in large quantities abandons its critical attitude when talking about "chemicals" in our food and clothing.

Dose-specific toxicity is indeed of paramount importance in medicine, but if you delve deeper, the common mechanism underlying the toxicity of many drugs often has less to do with the specific drugs themselves and more to do with the other major player in the interaction of drugs with the human body - proteins. Unwarranted dosages of drugs are certainly dangerous, but even in these cases the effect is often mediated by specific proteins. Thus in this post I want to take a slightly different tack and reinforce the idea that when it comes to drugs it's often wise to remember that "the protein makes the poison". The point is that toxicity is often a function of multiple entities and not just one. In fact this concept underlies most of the side effects of drugs, manifested in all those ominous-sounding warnings delivered in rapid-fire intonations in otherwise soothing drug commercials.

What do I mean by "the protein makes the poison"? Almost every drug demonstrates its effects by binding to specific proteins which may be involved in particular diseases, and the goal of pharmaceutical research is to find molecules that target and inhibit or activate these proteins. There is of course much more to a drug than just inhibition of a protein, but that's the fundamental challenge. This goal was delineated at the turn of the twentieth century in Paul Ehrlich's notion of a "magic bullet", a compound that would hit only the rogue protein and nothing else. We are still trying to implement Ehrlich's program, and in the process have discovered how hideously complicated it is.

The thing is, in spite of much progress we still understand woefully little about the human body. When we design a drug to inhibit one protein, it has to contend with the thousands of other proteins in the body which perform crucial functions. Making a drug that binds to a protein is essentially like designing a key to fit a lock. Even if you think you have a perfect key that fits only one lock, the number of locks with similar structures is so large that it's very likely for parts of the key to fit other locks. And if these other locks or proteins play fundamental roles in normal physiological processes, you may be in trouble. In fact there's a name for this group of unwanted proteins - antitargets - and there are entire books written on how to avoid them.
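The key-and-lock picture can be made slightly more concrete with a toy calculation. The sketch below uses the simple one-site binding isotherm, in which the fraction of a protein occupied by a drug at concentration C is C / (C + Kd), where Kd is the dissociation constant (lower Kd means tighter binding). All the numbers here are invented for illustration, not real drug data:

```python
# Toy illustration of off-target ("antitarget") binding.
# All numbers are hypothetical, not real pharmacology data.
# Fractional occupancy for one-site binding: occ = C / (C + Kd).

def occupancy(conc_nM, kd_nM):
    """Fraction of a protein bound by drug at concentration conc_nM."""
    return conc_nM / (conc_nM + kd_nM)

drug_conc = 50.0        # nM, a made-up circulating drug concentration
kd_target = 1.0         # nM, tight binding to the intended "lock"
kd_antitarget = 100.0   # nM, 100-fold weaker binding to a similar "lock"

print(f"target occupancy:     {occupancy(drug_conc, kd_target):.2f}")      # ~0.98
print(f"antitarget occupancy: {occupancy(drug_conc, kd_antitarget):.2f}")  # ~0.33
```

Even with a 100-fold selectivity window, a third of the antitarget is occupied at this dose, which is why an antitarget sitting in a critical physiological pathway can cause so much trouble.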

In principle you have to contend with every other protein when your goal is to target only one, but somewhat fortunately, the history of drug research has identified a handful of key proteins that are hit especially often, leading to side effects. In this post I will focus on two, and I will illustrate both through the example of the drug terfenadine (illustrated on top). Interestingly, the story of terfenadine illustrates both dose-specific and protein-specific toxicity.

Terfenadine was introduced in 1985 as an anti-allergy drug. Things seemed to be going well for its maker Hoechst Marion Roussel until 1990, when troubling reports emerged of a serious and potentially lethal side effect: a perturbation of the heart's rhythm. Such perturbations come in several types, usually lumped together as "arrhythmias". In particular, terfenadine caused two phenomena with the impressive names of QT prolongation and torsades de pointes.

The heart is a pump, but it's also a kind of electrical motor with its own electrical cycle. This cycle is governed by the influx and efflux of various ions into heart cells; most commonly sodium, potassium, calcium and chloride. The cycle shows up as peaks and troughs in electrocardiograms (ECGs). Each feature is alphabetically labeled, and the interval between the Q trough and the T peak is particularly important. It turns out that several drugs including terfenadine prolong this interval, essentially throwing the heart's rhythm out of sync. It is not hard to see that the consequences of disturbing this very fundamental rhythm of life can be catastrophic; the heart can stall, go into cardiac arrest and kill the unfortunate victim. QT prolongation can also lead to a more dangerous arrhythmia called torsades de pointes, characterized by a specific twisting shape of the ECG trace.

But what's responsible for this effect at the molecular level is a unique protein called hERG, the product of the human ether-à-go-go-related gene. The protein is an ion channel that conducts potassium ions through heart cells, so its crucial role in maintaining heart rhythm should not be surprising. Terfenadine and several other drugs (most notably some antidepressants and antipsychotics) bind to this protein with high affinity and block it. The amusing name of the protein points to an amusing origin. The protein is encoded by the human analog of a gene discovered in fruit flies by researchers at the University of Wisconsin who were studying mutations in it. They found that the mutant flies' legs started to shake when they were anesthetized, making the insects look like entomological versions of Elvis. Another scientist at the City of Hope remembered where he had seen humans doing a similar dance: at the Whisky a Go Go nightclub in West Hollywood. It was the ultimate in anthropomorphization. Here's what part of the protein looks like.

When the FDA found out about the dangerous side effects that terfenadine mediated through the hERG ion channel, it sent a letter to doctors prescribing the drug and issued a black box warning. In 1997 the FDA finally withdrew terfenadine; after all, there were several other anti-allergy medications out there and no need to market an especially dangerous one. Since then, testing potential drugs against hERG has become a mandatory part of seeking FDA approval, and there is much research dedicated to identifying the specific molecular features that make a drug a hERG blocker; one common determinant seems to be the presence of a positively charged basic nitrogen atom. There are entire lists of drugs, including marketed ones, that can cause QT problems to varying extents under different conditions.

So there it is: toxicity mediated not just by a particular "chemical" but by its interaction with a particular protein. But the story does not end there. Toxicity problems with terfenadine seemed to occur - you guessed it - only at high dosages. The dose indeed made the poison. But there was an added twist. Some patients experienced hERG blockade only when they were taking other drugs, most notably the antibiotic erythromycin. Surprisingly, this also happened when they were drinking large quantities of, of all things, grapefruit juice. Grapefruit juice has also turned out to be important in the effects of other popular drugs, like statins for heart disease.

What was going on? When terfenadine is administered, like any foreign molecule it first has to get through the gut wall and the liver to enter the bloodstream. And it's in the liver that it encounters an enzyme called cytochrome P450. This crucial protein is the great gatekeeper of the human body, denying entry to thousands of molecules it deems poisonous. It is responsible for the metabolism of about 75% of all drugs. It served a necessary function during evolution, when organisms had to keep potentially poisonous chemicals out, but it haunts drug discovery scientists in their dreams because of its ability to alter drug structures in unexpected ways. The centerpiece of P450 is an iron atom that oxidizes electron-rich bonds in molecules. Most of the time the protein induces an oxidation reaction that changes the drug into something else. As a further testament to the complexities of drug development, that "something else" can itself be toxic, beneficial or neutral. In the case of terfenadine there was a stroke of good luck: cytochrome P450 was transforming the compound into another drug called fexofenadine. Chemists will recognize the small difference in the structures - a single carboxylate group at the terminal end.

But as is often the case in the wonderful world of pharmacology, this tiny difference had momentous consequences: fexofenadine no longer bound hERG with high affinity, and so did not cause QT prolongation. What happened at high doses was that terfenadine saturated cytochrome P450, and some of it made its way into the bloodstream without being transformed into fexofenadine. Similarly, compounds in grapefruit juice preferentially bound to cytochrome P450, again allowing terfenadine to get past the enzyme. And it was this terfenadine, having escaped the clutches of cytochrome P450, that blocked hERG. One thing is clear: it is chilling to contemplate the effects terfenadine would have had, had it not been metabolized to fexofenadine in the first place.
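The saturation effect can be caricatured with a back-of-the-envelope Michaelis-Menten sketch. All the numbers below are hypothetical, chosen only to show the shape of the curve, not real terfenadine pharmacokinetics:

```python
# Toy Michaelis-Menten sketch of enzyme saturation; all numbers are
# hypothetical, not real pharmacokinetic data. The metabolic rate
# v = Vmax * C / (Km + C) levels off once the drug concentration C
# far exceeds Km, so at high doses proportionally less of the drug
# is metabolized and more intact drug escapes into the bloodstream.

def metabolic_rate(conc, vmax=10.0, km=1.0):
    """Rate at which the enzyme converts drug at concentration conc."""
    return vmax * conc / (km + conc)

for conc in [0.1, 1.0, 10.0, 100.0]:
    v = metabolic_rate(conc)
    # rate/conc is a crude stand-in for clearance efficiency
    print(f"conc={conc:6.1f}  rate={v:5.2f}  rate/conc={v / conc:6.3f}")
```

The rate-per-unit-concentration falls steeply at high doses, which is the toy version of terfenadine slipping past a saturated P450. A competitive inhibitor (like the grapefruit-juice compounds) has the same net effect: by raising the effective Km it lowers the rate at any given terfenadine concentration, again letting intact drug through.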

This fascinating (at least for me) story of terfenadine drives home several important points about toxicity. Firstly, it takes two to tango: toxicity is a function of a drug and its target, not of the drug alone. Secondly, we again had a case where "the dose made the poison". And thirdly, the reason this was true was a guardian angel, a protein that changed terfenadine into something non-toxic; a corollary is that it can take only a tiny structural change to turn a toxic compound into a non-toxic one.

Little more evidence should be needed to prove that toxicity is a many-splendored, context-specific thing.

All images are from Wikipedia

Physics's PR problem: Moving beyond string theory and multiple universes

I was reminded of this issue by a timely post by MJ at "Interfacial Digressions". As everyone knows, chemistry has a PR problem. Fear of "chemicals" runs rampant without context or qualification. In addition, unlike physics and biology, chemistry is not considered to be the science that answers profound questions about the origins of life or the future of the universe. Of course there's evidence to the contrary for each of these notions - modern life would be impossible without chemistry, and the origin of life can claim to be the ultimate grand chemical question - but it's been hard to convince the public of this. The acute PR problem for chemistry is illustrated by the fact that popular literature on chemistry does not sell half as well as that on physics; just count the number of chemistry versus physics books in your Barnes & Noble the next time you visit (if you are still obsessed with paper, that is).

But I think physics also has a PR problem, and it's of a different kind than chemistry's. This statement should elicit gasps of indignation, since the Greenes, Hawkings and Kakus seem to be doing quite well; they are household names and every one of their books instantly gathers hundreds of positive reviews on Amazon. But there's still a problem, and it's not one that is acknowledged by many of these leading popular expositors, at least partly because doing so would rob them of their next big New York Times bestseller and the accompanying profits. Look at the physics section in your B&N next time and you will understand what I am talking about.

The problem is that most of the popular physics that the public enjoys constitutes perhaps 10% of the research that physicists worldwide are engaged in. Again, count the number of physics books in your local bookstore, and you will notice that about 90% of them cover quantum mechanics, cosmology, particle physics and "theories of everything". You would be hard-pressed to find volumes on condensed matter physics, biophysics, the physics of "soft" matter like liquids, or nonlinear dynamics. And yes, these are bona fide fields of physics that have engaged physics's best minds for decades and are as exciting as any other field of science. Yet if you ask physics-friendly laymen what cutting-edge physics is about, the answers will typically span the Big Bang, the Higgs boson, black holes, dark matter, string theory and even time travel. There will be scant mention, if any, of, say, spectroscopy, optics, polymers, magnetic resonance, lasers or even superconductivity.

Whether physicists admit it or not, this is a PR problem. Laymen are being exposed to what is an undoubtedly exciting but tiny fraction of the universe of physics research. For eager readers of the popular physics literature, the most exciting advances in physics are encapsulated between the Higgs boson and the Big Bang, and that's all they think exists in heaven and earth. In my opinion this does a great disservice to the majority of physicists around the world who work on other, equally exciting topics. Just consider one major academic physics department, say Stanford's, and you get an idea of the sheer variety of projects physicists work on. Physics books may still sell, but the physics they describe is something most of the world's physicists don't do. It's hard to see how this could be called anything but a PR problem.

So who is responsible for this situation? Well, in one sense, nobody. The fact is that the public has always shown a taste for "big picture" topics like cosmology and quantum mechanics, and physicists have been indulging this taste for quite a while now. And who can blame the public for being attracted to relativity with its time paradoxes, or quantum mechanics with its cats and famous personal rivalries? Even in the 1920s, the popular physics literature sported the likes of Arthur Eddington and James Jeans, who were pitching nuclear physics and relativity to packed audiences. The mantle was passed in the postwar era to scientists like George Gamow and Isaac Asimov, who spread the gospel with gusto. And the trend continues to the present day, with even a mind-numbingly well-trodden topic like the history of quantum theory finding eager expositors like Louisa Gilder, Manjit Kumar and David Lindley. All their books are highly engaging, but they are not doing a favor to other equally interesting branches of physics.

The popular physics literature has also started turning quasi-religious, and writers like Brian Greene and Michio Kaku are unfortunately responsible for this development. Greene in particular is a remarkably charismatic and clear writer and lecturer who has achieved almost rock-star status. Sadly, his popular expositions are coming to resemble rock concerts more than serious physics lectures. Part of the problem is his almost evangelical espousal of highly speculative, experimentally unverified (and perhaps even unverifiable) but deliciously tantalizing topics like string theory and multiple universes. Greene's books seem to indicate that the more speculative the topic, the more eagerly it will be assimilated by lay audiences. This cannot but be a disturbing trend, especially for those thousands of physicists whose research may sound pedestrian but is more solidly grounded in experiment and as interesting as perpetually splitting universes. One suspects that even the famous popular physics writers of yore like George Gamow would have been hesitant to pitch highly speculative topics merely for their "Wow" factor. If the biggest selling point of a popular physics book is its dependence on experimentally unverified ideas that sound more like science fiction, popular physics is in trouble indeed.

In addition, whatever lacks the "Wow" factor seems to exhibit the "Yawn" factor. By this I am referring to books constantly repackaging old wine in new bottles. A good example is Lisa Randall's latest book. It's an extremely well-written and spirited volume, but it mostly treads the same tired ground of quantum mechanics, relativity and the Large Hadron Collider. The bottom line is that the popular physics literature seems to have reached a point of diminishing marginal returns. It's become very difficult to write anything on the subject that's not either well-trodden or highly speculative.

There is another unintentional effect of this literature which is more serious. Today's popular physics gives people the impression that the only questions worth addressing in physics are those that deal with unified theories or the birth and death of the cosmos. Everything else is either not worth doing or is at best done by second-rate minds or graduate students (take your pick). Not only does this paint a skewed picture of what's important and difficult in the field, it also inflates the importance and intellectual abilities of physicists working on fundamental problems at the expense of those working on more applied ones. This again does a great disservice to very many challenging problems in physics and the people addressing them. Building a room-temperature superconductor, understanding turbulence, designing new materials for capturing solar energy, keeping atoms stable at cold temperatures, kicking DNA around with lasers and of course, beating nuclear fusion at its own thermodynamic game are still long-unsolved problems that promise to engage the finest minds in the field. Yet the myth that the greatest problem in physics is finding the theory describing "everything" persists. This constant emphasis on "big" questions provides a biased view not just of physics but in fact of all of science, most of which involves solving interesting but modest problems. As MJ says in his post, most physicists he knows aren't really after 3 laws that describe 99% of the universe but would be content finding 99 laws that describe 3%. 

So what's the solution? As with other problems, the first step would be to acknowledge that there is indeed a problem. Sadly this would mean somewhat blunting the public's starry-eyed impression of cutting-edge physics, which the leading expositors of physics would perhaps be unwilling to do. At least some physicists might be basking in the public's mistaken grand impression that cosmology and quantum theory are all that physicists work on. If I were a soft condensed matter physicist and told someone at a cocktail party that I do physics, the images my answer would evoke would most likely include long-haired professors, black holes, bosons and fermions, supernovae, nuclear weapons and time travel. I may be excused for being hesitant to dispel this illusion and admit that I actually work on understanding the exact shape of coffee stains.

Nonetheless, this harsh assessment of reality might be necessary to cut the public's umbilical cord to the Hawkings, Greenes and Randalls. But this would have to be done by someone else and not by Brian Greene. Now let me make it clear that as speculative as I might find some of his proclamations, I don't blame Greene at all for doing what he does. You cannot fault him for not reminding the public about the wonders of graphene since that's not his business. His business is string theory, that's what he is passionate about, and nobody can doubt that he is exceedingly good at practicing this trade. Personally I have enjoyed his books, and in an age where ignorance of science seems to reach new lows, Greene's books provide at least some solace. But other physicists would have to tread into territory that he does not venture into if they want to solve physicists' PR problem. 

Gratifyingly, some physicists have already started staking their claims in this territory, although until now their efforts have sounded more like tiptoeing and less like confident leaps. James Gleick proved in the late 1980s with his "Chaos" that one can indeed grab the public's attention and introduce them to an entirely new branch of science very successfully. In recent years this tradition has been carried on with varying degrees of success by other scientists, and they provide very promising examples of how the PR problem could be addressed. Let me offer a few suggestions. Robert Laughlin has talked about emergence and condensed matter in his "A Different Universe". David Deutsch has laid out some very deep thoughts in his two books, most recently in "The Beginning of Infinity". Philip Anderson expounds on a variety of interesting topics in his recent collection of essays. And while not entirely about physics, Stuart Kauffman's books have done a great job at dismantling the strong reductionist ethic endemic in physics and suggesting new directions for inquiry. The common emphasis of these authors is on emergent, complex, adaptive systems, a paradigm of endless opportunities and questions which has been generally neglected by the popular physics literature. In addition there are excellent, courageous critiques of string theory from Peter Woit and Lee Smolin that deviate from the beaten track.

Sadly most of these books, while exceedingly interesting, are not as engagingly written as those by Greene or Randall. But the modest success they have enjoyed seems to indicate that the public does have a taste for other areas of physics as long as they are described with verve, passion and clarity. Maybe someday someone will do the same for turbulence, DNA dynamics, non-Newtonian liquids and single-molecule spectroscopy. Then physics will finally be complete, at least in a popular sense.


Striking Alzheimer's before it strikes

Those following the news on trials of drugs against Alzheimer's disease must be familiar with the depressing outlook from the front lines. There was a string of failures reported in the last few years for therapies intended to disrupt the beta amyloid protein in AD. The failures have sent researchers back to the drawing board and the beta amyloid hypothesis itself has been strongly questioned. Amyloid is almost certainly involved in some big way with the disease, but its exact role as a causative agent has been under scrutiny for a while now.

Several factors could be responsible for the failure of these trials, but one in particular was bandied about as an obvious candidate: perhaps the intervention came too late to help the patients. We now know that diseases like AD and cancer often take root early, well below the detection limits of current diagnostic techniques. Perhaps, the thinking goes, we might stand a chance of beating the disease if we intervene early enough.

This thinking is completely sound, except for the problem that there is even now no definitive test to detect AD at very early stages. Fortunately for scientists - and most certainly unfortunately for those unlucky enough to draw from this genetic lottery - there are certain populations genetically predisposed to the disease. Members of these families typically get the disease in their 40s and are completely debilitated by their 50s. The most prominent of these groups is an unfortunate family in Colombia, and the New York Times reported on them in 2010.

Now the Times reports on a drug trial designed to test the early intervention hypothesis in this clan. The drug in question is an antibody targeted against amyloid called crenezumab. The antibody was developed by Genentech, and the preventative study is being jointly funded by the company, the NIH and a private foundation. Naturally it's going to be a long-drawn-out project: at-risk individuals will be started on the treatment when they are as young as 30, and their progress will be monitored meticulously over the next several years through both diagnostic mental tests and non-invasive techniques like PET scans.

This is a very hopeful and well thought-out experiment, and just like Derek who blogged about this today, I wish both the patients and the researchers the very best. Sadly, the history of the AD trials cited above does not fill me with too much hope. The amyloid hypothesis has constantly been under attack for the last decade or so. The most significant discovery in this regard was the finding that small oligomers of the protein rather than the full misfolded form might be the real culprit; but crenezumab seems to function by attacking the full form.

More intriguing and disturbing are the potential consequences for the normal health of the patients. Amyloid is definitely part of the normal functioning of the human body, but nobody knows its exact function yet. However, it seems clear that the misfolded and the normal soluble forms of the protein are in some kind of equilibrium with each other. More interestingly, recent reports have implicated amyloid as an antibacterial agent. There are also some longstanding studies suggesting that amyloid forms free radicals, which are usually toxic but may also help kill bacteria. I myself have speculated on amyloid's possible evolutionary role as a defense mechanism.

All this makes me skeptical about disturbing the normal vs misfolded amyloid equilibrium as a long-term strategy; the process may well be a crucial one, and killing the messenger might kill the message. Everyone wants this trial to succeed, but we probably shouldn't be surprised if something disappointing shows up. One thing's for sure: this trial will generate a lot of data, and that's what science is about.

John LaMattina on the new NIH drug discovery center

There's a post by ex-Pfizer research chief John LaMattina about the new NIH drug discovery center, and predictably he does not seem too happy about it. While the initial center was supposed to be all about translational research, the latest idea is to use the NIH's resources for "repurposing", or discovering new indications for old drugs.

LaMattina echoes some of the dissatisfaction that a few of us have earlier expressed about this idea. The main point here is that the NIH should not be in the business of discovering new drugs; it should be in the business of doing the basic biological research that may enable such potential discoveries. In fact one might argue that the biggest challenge facing drug discovery today is an incomplete understanding of the complexities of the biology underlying major diseases. Just think of the conflicting data and the complications that have emerged from attacking beta amyloid in Alzheimer's disease for instance. There's hardly any doubt that better treatments can only result from a proper evaluation of the basic biology of disease. And it's also clear that this understanding is not going to come from industry. Only the NIH and academic labs can accomplish this, and spending money on therapies when it could more fruitfully be spent on such fundamental studies seems to be folly. So ironically, funding drug discovery may hinder an understanding of the very foundations that may truly enable it.

Nor is what the NIH is doing truly novel. As LaMattina points out, repurposing is an obvious route and an attractive one at that, since finding a novel indication for an old drug means the drug has already run the gauntlet of FDA approval. So we can bet that industry would have worked on repurposing if it could. Now granted, there are always going to be compounds that were dropped for financial or project-related reasons and that may be potentially valuable agents for all kinds of conditions. And we can also assume that their numbers have grown during the last few years, as projects have been axed and personnel laid off in increasing numbers. But what are the chances that hidden among those dusty vials on the shelf is the next cure for pancreatic cancer? Of course one may never find out if one does not look, but the NIH's announcements make it sound like there's pure gold among those neglected compounds, waiting to be discovered. The fact is that examples of truly repurposed drugs are quite few; as LaMattina points out, even the two repurposed drugs cited by NIH director Francis Collins are drugs for which the "other" indications were rather obvious based on their mechanism of action. Repurposing by itself is not entirely misguided, but repurposing at the cost of basic biomedical research draws resources away from more worthy endeavors.

Thus, by and large, LaMattina's arguments seem cogent. Unfortunately the indignation on the other side of the equation is not as justified as it sounds. LaMattina refers to a statement by legendary Merck ex-CEO Roy Vagelos to the effect that if there were real benefit to something the NIH wants to do, pharma would already be doing it. Sadly this is increasingly not the case. In the last few years pharma has defined "benefit" by whether something will affect the next quarter's profits. Working on Alzheimer's disease and other CNS disorders, where the rewards are long-term but undoubtedly stellar, is no longer considered a beneficial strategy. So we have a situation where industry is rightly advising the NIH to work on basic research rather than drug development while not committing itself to its part of the deal. As well-intentioned as it may be, the impact of your advice gets blunted a little if you stop looking in the mirror.

A history of metallocenes: Bringing on the hashish

Following on the heels of the comprehensive article on metal-catalyzed reactions noted by Derek, here's another one, by Helmut Werner, specifically about the history of ferrocene and other metallocenes. It's full of interesting trivia about priorities, personalities and chemical developments. The article traces the early priority disputes in the discovery of ferrocene, followed by an account of the rush to explore other metal-organic systems.

It's hard for us today to imagine the shock felt on witnessing the first sandwich compound, a complex of iron sandwiched between two cyclopentadienyl rings. Before ferrocene, the division of chemistry into inorganic (especially metallic) and organic compounds was assumed to be virtually set in stone, and this was one of those classic developments that shatter the wall between two realms. The world of transition metal-mediated chemistry that the discovery inaugurated completely transformed the academic and industrial practice of chemistry, led to several Nobel Prizes and turned out to be one of the most beneficial scientific developments of the latter half of the twentieth century.

The novelty of the new compound is best captured by what must surely be the most memorable reply ever sent by a journal editor to a submitting author, this one from Marshall Gates, the editor of JACS, to R. B. Woodward:

"We have dispatched your communication to the printer but I cannot help feeling that you have been at the hashish again. 'Remarkable' seems a pallid word with which to describe this substance"

Perhaps the most extraordinary part of the story is the candid and rather dramatic note from Woodward to the Nobel committee lamenting his exclusion from the 1973 Nobel Prize awarded to Geoffrey Wilkinson and Ernst Fischer.

"The notice in The Times of London (October 24, p. 5) of the award of this year's Nobel Prize in Chemistry leaves me no choice but to let you know, most respectfully, that you have - inadvertently, I am sure - committed a grave injustice"

Woodward went on to rather pointedly emphasize his individual contributions to the discovery, making it sound like he had done Wilkinson at least a minor favor by putting his own name last on the manuscript. 

"The problem is that there were two seminal ideas in this field-first the proposal of the unusual and hitherto unknown sandwich structure, and second, the prediction that such structures would display unusual, "aromatic" characteristics. Both of these concepts were simply, completely, and entirely mine, and mine alone. Indeed, when I, as a gesture to a friend and junior colleague interested in organo-metallic  chemistry, invited Professor Wilkinson to join me and my colleagues in the simple experiments which verified my structure proposal, his initial reaction to my views was close to derision . . . . But in the event, he had second thoughts about his initial scoffing view of my structural proposal and its consequences, and all together we published the initial seminal communication that was written by me. The decision to place my name last in the roster of authors was made, by me alone, again as a courtesy to a junior staff colleague of independent status".

Interestingly, his recollection differs almost completely from Wilkinson's, who stated in a 1975 review that he thought of the structure right away while Woodward immediately started thinking about its reactions. It's intriguing - and probably futile - to psychoanalyze the reasons for this very public expression of disappointment, especially coming from someone not exactly known for publicly airing his personal feelings (his Cope Award lecture, for instance, is the only time Woodward really provided personal biographical details). By 1973 Woodward had already won the Nobel Prize, and while he was always known to be extraordinarily ambitious, he must have known that his place in chemical history was secure; by then he had even published the landmark papers on the Woodward-Hoffmann rules. Perhaps he sincerely felt that he deserved a share of the prize; nevertheless, it's a little curious that such a towering figure in the field made it a point to convey his disappointment at not winning so publicly and strongly. Whatever the reason, Woodward's note makes it clear that scientists - famous ones and otherwise - are keen to stake their priority. They are, after all, human.

To be fair to the prize committee, the award was given for the more general field of organometallic chemistry that the discovery of ferrocene launched rather than for the structure of ferrocene itself. Even at the beginning, Wilkinson had been more interested in the new structural class of metallocenes while Woodward had been more interested in the kinds of reactions the novel compounds would undergo. After the initial finding, while Wilkinson immersed himself in investigating the interactions of other metals with similar organic systems, Woodward went back to his life's love: the chemistry of natural products. Thus it seems sensible in retrospect to have given the prize to Wilkinson and Fischer if the purpose was to honor a new field of chemistry. Woodward died in 1979, and I am not familiar with his later thoughts on the subject, if he had any. But of course his place in the annals of science had long been assured, and ferrocene has turned into little more than an interesting historical footnote on his list of superlative achievements.

Note: The quotes by Woodward come from an article by Thomas Zydowsky from the Northeastern Section of the ACS that I had noted on the mailing list in 2001. Time flies.

The devil under the hood: To look or not to look?

Modern biology and chemistry would be unthinkable without the array of instrumental techniques at their disposal; indeed, one can make a case that it was new methods (think NMR, x-ray crystallography, PCR) rather than new concepts that were really responsible for revolutions in these disciplines. The difference between a good paper and a great paper is sometimes the foolproof confirmation of a decisive concept, often made possible only by the application of a novel technique.

Yet the onslaught of these methods has brought with it the burden of responsibility. Ironically, the increasing user-friendliness of the tools has only exacerbated this burden. Today it's all too easy to press a button and communicate a result which may be utter nonsense. In a recent article in Nature titled "Research tools: Understand how it works", David Piston from Vanderbilt laments the fact that many modern instruments and techniques have turned into black boxes, used by students and researchers without an adequate understanding of how they work. While acknowledging the undoubted benefits that automation has brought to the research enterprise, Piston points out the flip side:

Unfortunately, this scenario is becoming all too common in many fields of science: researchers, particularly those in training, use commercial or even lab-built automated tools inappropriately because they have never been taught the details about how they work. Twenty years ago, a scientist wanting to computerize a procedure had to write his or her own program, which forced them to understand every detail. If using a microscope, he or she had to know how to make every adjustment. Today, however, biological science is replete with tools that allow young scientists simply to press a button, send off samples or plug in data — and have a result pop out. There are even high-throughput plate-readers that e-mail the results to the researcher.

Indeed, and as a molecular modeler I can empathize, since modeling presents a classic example of black-box versus nuts-and-bolts approaches. On one hand you have the veteran programmers who did quantum chemistry on punch cards, and on the other you have application scientists like me who are much more competent at looking at molecular structures than at code (there are also those who can do both, but they are the chosen few). There's a classic tradeoff here between time spent and benefits accrued. In the old days (which in modeling lingo go back only fifteen years or so), most researchers wrote their own programs, compiled and debugged them, and tested them rigorously on model systems. While this may seem like the ideal training environment, the fact is that in modern research environments, and especially in the pharmaceutical industry, this kind of from-scratch methodology development is often just not possible because of time constraints. If you are a modeler in a biotech or pharma company, your overlords rightly want you to apply existing software to discover new drugs, not spend most of your time writing it. In addition, many modelers (especially in this era of user-friendly software) don't have strong programming skills. So it's considered far better to write a hefty check to a company like Schrodinger or OpenEye, which have the resources to spend all their time perfecting such programs.

The flip side, however, is that most of the software coming from these companies is not going to be customized for your particular problem, and you can start counting the number of ways in which a small change between training and test sets can dramatically impact your results. The only way to truly make these programs work for you is to look under the hood, change the code at the source and reconfigure the software for your unique situation. Unfortunately this runs into the problem stated above: the lack of personnel, resources and time for doing that kind of thing.

So how do you solve this problem? There is no simple solution, but the author hints at one possible approach when he suggests providing more graduate-level opportunities to learn the foundations of the techniques. For a field like molecular modeling, there are still very few formal courses available in universities. Implementing such courses would give students a head start on the relevant background, so that they arrive in industry at least reasonably well-versed in the foundations and can subsequently spend their time applying the background instead of acquiring it.

The same principle applies to more standard techniques like NMR and x-ray diffraction. For instance, even today most courses in NMR start with a basic overview of the technique followed by dozens of problems in structure determination. This is good training for a synthetic chemist, but what would be really useful is a judiciously chosen list of case studies from the current literature illustrating the promises and pitfalls of magnetic resonance. Such case studies would show the application of NMR to messy, real-world problems rather than ideal cases, and it's only by working through them that students can get a real feel for the kinds of problems for which NMR is really the best technique.

Thus, gaining a background in the foundations of a particular technique is only one aspect of the problem, and one which in fact strikes me as less important than getting to know the strengths and limitations of the technique. To me it's not as important to formally learn quantum chemistry as it is to get a feel for the kinds of systems for which it works. In addition you want to know what the results really mean, since the numbers in the output are often either more or less informative than they look. Learning the details of perturbation theory is not as important as knowing when to apply it. If the latter is your goal, it may be far more fruitful to scour the literature and get a feel for the circumstances in which the chosen technique works than to just take formal classes. And conveying this feel for the strengths and limitations of techniques is again something we are not doing very well in graduate school, and should be.

Nanopore sequencing: The next big thing?

The biotech field is abuzz with nanopore sequencing, with lots of people starting to think it might be the Next Big Thing - not just one of those incremental developments that keep science humming along, but something more along the lines of a Kuhnian paradigm shift. The belief is that nanopore sequencing may finally lead to the domestication of biotechnology, which would open up brave new worlds in all kinds of domains, from drug development to law to health insurance. The ultimate manifestation of this technology, of which Oxford Nanopore Technologies is the leading developer, is a flash drive that plugs into your computer and can sequence your DNA for about $1,000 within a day or two. Personal genomics does not get any more personal than that.

There's an article on nanopores in this week's Science which lays out both the promise and the challenges. Nanopore sequencing seems to be one of those disarmingly simple ideas that had to be hammered into practice by sweating all the details. About half a dozen people, starting with David Deamer at UCSC, contributed to the basic concept, which sounds simple enough: take a pore with an ionic current flowing through it and thread a DNA strand through the pore. The expectation is that as each base of the DNA passes through, it will cause a small, characteristic change in the current. The great hope is that this change would be unique to each of the four bases, thus allowing you to read off the sequence.
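To make the idea concrete, here's a toy sketch of the read-out step: assign each measured current level to the base with the nearest reference level. The current values are entirely made up for illustration; real devices see several bases in the pore at once and need far more sophisticated signal processing.

```python
# Toy nanopore base-caller: each base passing through the pore is assumed
# to produce a characteristic ionic-current level, and the sequence is
# recovered by matching each measured level to the nearest reference.
# All current values (in pA) are invented for illustration.
REFERENCE_LEVELS = {"A": 52.0, "C": 48.0, "G": 44.0, "T": 40.0}

def call_bases(measured_levels):
    """Assign each measured current level to the base whose reference
    level is closest (a nearest-neighbor classifier)."""
    calls = []
    for level in measured_levels:
        base = min(REFERENCE_LEVELS, key=lambda b: abs(REFERENCE_LEVELS[b] - level))
        calls.append(base)
    return "".join(calls)

# A noisy trace for the hypothetical strand GATC: each level is slightly off.
print(call_bases([43.6, 52.3, 39.8, 47.5]))  # GATC
```

Noise that pushes a level more than halfway toward a neighboring reference level produces a miscall, which is one intuition for where the error rates discussed below come from.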

As usual, the implementation of the idea faced major hurdles. For one thing, nobody knew of a pore that would let DNA thread through it until Deamer heard of alpha-hemolysin, a protein with a magnificent structure that some bacteria use to break down red blood cells by literally drilling holes in them. Deamer realized that the perfectly positioned inner channel of hemolysin could be used to string DNA through it. After this initial breakthrough, the main problem remained reducing both the number of bases in the pore at any instant and the speed at which they pass through, enough to read them accurately. The first problem was resolved through the discovery of another protein called MspA, whose pore is narrow enough to admit just four bases at any given moment; negatively charged residues inside the channel had to be removed through mutations to prevent repulsion with the negatively charged DNA. The second problem was addressed by using a special DNA polymerase to hold the DNA on top of the pore and create a bottleneck.

The culmination of all these efforts was a publication two months back in which the researchers could distinguish and sequence DNA strands ranging from 42 to 53 bases in length. That's a good start, but of course it's a trivial number compared to most respectable genomes of interest. And this is where the skeptics start chiming in. The error rate of the sequencing is currently 4%, mostly caused by stalled DNA polymerases and bases that slip past without being read. Over the three billion base pairs of the human genome, this translates to about 120 million bases read incorrectly. That's a staggering number, especially when you consider that many genetic disorders arise from changes in a single base pair. When you are dealing with complex disorders in which small changes in sequence mean the difference between health and disease, this kind of accuracy may just not be good enough. And of course, that's assuming such minor changes in sequence are truly meaningful, but that's a different story. In addition there are probably going to be problems with the durability of the pore, not to mention the usual challenges involved in mass-producing such a technology. I am assuming the midnight oil at Oxford is being burnt as we speak.
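The arithmetic behind that staggering number, along with the standard statistical remedy of reading each position many times and taking a majority vote, can be sketched as follows. This assumes, simplistically, that errors are independent across reads; real nanopore errors can be systematic at a given position, in which case coverage helps much less.

```python
from math import comb

error_rate = 0.04            # reported per-base error rate
genome_size = 3_000_000_000  # human genome, base pairs

expected_miscalls = round(error_rate * genome_size)
print(f"{expected_miscalls:,}")  # 120,000,000

# Coverage to the rescue: read each position n times and take a majority
# vote. The consensus is wrong only if more than half the reads err.
def consensus_error(per_read_error, n_reads):
    return sum(comb(n_reads, k)
               * per_read_error**k
               * (1 - per_read_error)**(n_reads - k)
               for k in range(n_reads // 2 + 1, n_reads + 1))

# With 15 independent reads per position, the majority-vote error rate
# drops from 4% to a tiny fraction of a percent.
print(consensus_error(0.04, 15))
```

The catch, of course, is that every fold of coverage multiplies the sequencing time and cost, which is exactly the kind of tradeoff that determines whether a "$1,000 genome in a day" claim holds up.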

So nanopore sequencing may well be the next big disruptive technology. But its actual impact can only be judged after the developers iron out these key wrinkles and publish a few papers dealing with genomes of typical size. Until then, cautious optimism without strenuous limb-flailing would be the correct response.


Woodward on how to travel incognito

Searching aimlessly for material on R. B. Woodward online, I came across an amusing source: "Droll Science" by Robert Weber. In it I found the following anecdote:

At the Munich (1955) meeting of the Gesellschaft deutscher Chemiker, Woodward attracted attention as he roamed the halls carrying a big notebook in a blue silk cover on which was embroidered the structural formula of strychnine. The next day he appeared bearing a cover innocent of any embroidery. Asked a friend, "Why no structural formula?" Quipped Woodward, "Oh, I'm traveling incognito today."

The anecdote probably says more about the man than it intends to: Woodward's identity was inextricably linked to the objects of his creation, and without them, he could indeed pronounce himself incognito.

Using gas-phase conformations to predict membrane permeability of drugs

Drug discovery is a complex process in which multiple factors need to be optimized. One of the most important involves maximizing the cell permeability of compounds, since even the most potent molecule is worth nothing if it can't get into cells. A particularly intriguing challenge in this regard is to explain how nature has solved the problem by engineering natural products that violate the "Rule of 5" - typically large, greasy molecules with several hydrogen-bonding groups - which nevertheless act as drugs. Nature seems to have overcome chemists' queasiness about making such compounds, and it's worth asking what lessons we can learn from her. Among other challenges, these drugs have to get across lipid-rich cell membranes, and predicting this membrane permeability would be a very useful thing. One clever strategy that nature has adopted is to fold the molecule up through internal hydrogen bonds, thus minimizing its polar surface area and allowing it to be stable in the membrane.

Naturally it would be quite useful to be able to predict beforehand which compounds will form such hydrogen bonds, and there have been a few papers taking a stab at this goal. Here's one such paper, from a group at Pfizer, which stuck out for me because of its simplicity. In it the authors look at three apparently non-druglike compounds - cyclosporine (an immunosuppressant), atazanavir (an antiviral) and aliskiren (an antihypertensive) - and try to theoretically account for their observed permeabilities.

What they do is run conformational searches on these compounds in the gas phase (or in "vacuum", if that suits your anti-Aristotelian sentiments) and look at the low-energy conformations. In two of the three cases the low-energy conformations present lots of intramolecular hydrogen bonds, and the authors say that this is consistent with the stabilization of the drug inside the membrane, accounting for its known high permeability and oral bioavailability. They also verify these hydrogen bonds by NMR in non-polar solvents that are supposed to simulate the lipid membrane. For the third compound, aliskiren, they observe only two hydrogen-bonded groups in the gas phase, which they take to be consistent with the compound's low permeability. The paper's conclusion is that the presence of hydrogen-bonded conformations in the gas phase is a fair predictor of favorable permeability.

The study seems like an interesting starting point, but I am skeptical. For starters, I have run more than my share of gas-phase conformational searches over the years, and the one ubiquitous observation is that most polar, druglike compounds show "collapsed" conformations, with hydrogen-bonding groups tightly interlocked. The main reason is that these calculations are carried out in vacuum, where there is no intervening solvent to shield the groups from each other. Coulomb's law runs rampant in these situations, and a proliferation of collapsed, highly hydrogen-bonded conformations is a headache for any conformational search. So I am not surprised that the authors see all these hydrogen bonds. Sure, they see fewer for aliskiren, but even aliskiren shows two, so I am not sure how to interpret this conclusion.
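For what it's worth, the kind of check one runs on the output of such a search is simple enough to sketch: count donor-acceptor contacts within a distance cutoff in each conformer. This is a crude, illustrative version only - a real analysis would also check the donor-H...acceptor angle, and the coordinates below are invented:

```python
from math import dist  # Euclidean distance, Python 3.8+

# Crude intramolecular hydrogen-bond count from 3D coordinates: flag a
# donor-hydrogen / acceptor pair as bonded if they lie within a cutoff.
HBOND_CUTOFF = 2.5  # angstroms; a common loose H...acceptor criterion

def count_hbonds(donor_hydrogens, acceptors, cutoff=HBOND_CUTOFF):
    """donor_hydrogens, acceptors: lists of (x, y, z) coordinates."""
    return sum(1 for h in donor_hydrogens for a in acceptors
               if dist(h, a) <= cutoff)

# Toy conformer: two of the three donor hydrogens sit near an acceptor.
donors_h = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0), (0.0, 4.0, 0.0)]
acceptors = [(1.9, 0.0, 0.0), (0.0, 4.0, 2.1)]
print(count_hbonds(donors_h, acceptors))  # 2
```

Run over an ensemble of vacuum conformers, a counter like this will almost always report plenty of contacts, for exactly the unshielded-electrostatics reason described above - which is why the count by itself says little about permeability.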

The other important point is that what matters for membrane permeability is not just whether an internally hydrogen-bonded conformation is present in a nonpolar medium, but whether the desolvation penalty for reaching that conformation from the initial panoply of conformations in water is easily overcome. It's hard to draw a conclusion about a favorable membrane conformation of a drug without an estimate of the desolvation penalty. At the very least the authors should have carried out a conformational analysis in water to get a crude estimate. In contrast, another study, which tried to calculate the permeability of peptides, did seem to take this factor into account.
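The point can be put in simple thermodynamic terms: the population of the membrane-ready, internally hydrogen-bonded conformer depends on the free-energy cost of plucking it out of the aqueous ensemble. A toy two-state Boltzmann calculation, with invented energies, shows how quickly a modest penalty shrinks that population:

```python
from math import exp

RT = 0.593  # kcal/mol at roughly 298 K

# Two-state sketch: "folded" (membrane-ready, internally H-bonded)
# versus the water-solvated ensemble, separated by a free-energy cost
# dg_penalty (desolvation plus refolding). All energies are invented.
def membrane_ready_fraction(dg_penalty):
    """Boltzmann population of the folded conformer in a two-state model."""
    w = exp(-dg_penalty / RT)
    return w / (1.0 + w)

low_penalty = membrane_ready_fraction(0.5)   # cheap to fold up
high_penalty = membrane_ready_fraction(3.0)  # costly desolvation
print(low_penalty, high_penalty)
print(low_penalty > high_penalty)  # True
```

Even this cartoon makes the argument: a 3 kcal/mol penalty leaves only a percent or so of molecules in the membrane-ready state, so a gas-phase search that ignores the aqueous side of the cycle can easily overestimate permeability.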

Lastly, of course, the paper looks at only three cases. A much larger dataset will have to be examined to establish any correlation between hydrogen-bonded gas-phase conformations and membrane permeability. But while I am not sure the recent paper arrives at a general principle, it's clear that conformational analyses of this kind (preferably in water, chloroform and the gas phase) will help. Understanding how nature has managed to engineer permeable, bioavailable beasts like cyclosporine is an important enough goal to benefit from such approaches.