Field of Science

Phil Baran keeps the dream of classical organic chemists alive

I am very happy to note that organic chemist Phil Baran from Scripps is one of this year's recipients of the MacArthur "Genius" Award. It's rare for a chemist and especially a "pure" organic chemist to receive this recognition. 

The first reason why chemists should be happy of course is that Baran is a phenomenal chemist. Ever since he was a graduate student he has been churning out innovative molecules and methods to make them. It's probably safe to say that he is the most promising young organic chemist in the world right now.

But the more important reason why this recognition is almost heart-warming is that it reaffirms faith in the soul of "pure" organic chemistry and synthesis. Baran's style of synthesis reminds one of the golden age of the discipline in the 50s and 60s, when legendary practitioners like Woodward, Corey and Stork used to make molecules for the sake of making them, for exploring the beauty and difficulty of their architectures and for appreciating the simple tricks and reagents that could turn a complex synthesis into a simple one. Phil Baran produces the same wistful nostalgia in a young aspiring organic chemist that a Detroit car manufacturer from the 50s would produce in a young automobile engineer standing on the empty grounds of a once-thriving factory. He reminds us of the time when synthesis was king.

Throughout his career Baran has continued to achieve all the goals savored by the giants of synthesis. He has followed his mentor E J Corey in synthesizing some very complex compounds as well as in developing new methods. When I read about his work I think of the young Woodward making reserpine or of the young Corey discovering new protecting groups for alcohols. 

The last few years have seen some cynicism - much of it well-directed - about total synthesis, about the tendency to treat molecule-making as a marathon rather than a sprint. And yet young Baran has proven that there are still gems to be unearthed from the dross of hammer-and-tongs chemistry, and that there is still hope for the next generation of purely synthetic chemists who are looking for truly innovative molecules and methods.

This seems to me to be a more than adequate reason to toast Baran and his accomplishments. Congratulations Phil!


2013 Nobel Prizes

It's almost time for the 2013 Nobel Prizes, which means it's also time for playing that little game which we in the chemosphere have been playing for a while. As a prize predictor my record hasn't been execrable; in the last few years I did get most of the prizewinners for the ribosome, palladium-catalyzed chemistry, GFP and GPCRs right. Out of these the ones that really count are the ribosome and GPCRs, since I actually predicted the winners during the year in which they won.

As I mentioned in last year's list, predictions for the prize get somewhat easier every year when all you have to do is keep old predictions and strike off the recent hits. Having perpetual favorites definitely makes the job easier. What does change is the probability of each prediction based on last year's prizes. So without further ado, here is an updated list from last year with a revised set of probabilities.

Let's start by noting that since last year's prize was awarded to biochemists, it's less likely that the same field will be recognized this year. Excluding biochemistry, the fields that top my list for this year are instrumental techniques and energy. Physical chemistry has also not won since Gerhard Ertl received the prize for surface chemistry. NMR and single-molecule spectroscopy both seem to me to be fields whose time has come. As for energy, I don't see anyone in solar or wind who has made enough headway to warrant a prize. However, the inventors of lithium-ion batteries definitely seem to deserve one.

As usual the predictions are classified as "easy" or "difficult" depending on their likelihood of winning. Note also that Nobel Prizes have traditionally been handed out to specific discoveries rather than for lifetime achievements, although the latter have not been entirely missing from the list.

Single-molecule spectroscopy (Easy)
Pros: The field has obviously matured and is now a powerful tool for exploring everything from nanoparticles to DNA. It’s been touted as a candidate for years. The frontrunners seem to be W E Moerner and M Orrit, although Richard Zare has also been floated often.
Cons: The only con I can think of is that the field might yet be too new for a prize.

NMR (Difficult): It’s been a while since Kurt Wüthrich won the prize for NMR. But it’s been even longer since a prize was awarded for methodological developments in the field (Richard Ernst). I don’t know enough about the field to know who the top contenders would be, but Ad Bax and Alexander Pines seem to have really made pioneering contributions. Pines especially helped launch solid-state NMR, a field that certainly seems to deserve a Nobel at some point.

While we are on the topic of instrumental techniques, it's also worthwhile to mull over some methods that have become household words in both academia and industry. These methods may not be as earth-shattering as NMR, but these days they are certainly as commonplace. What about surface plasmon resonance, which is routinely used to measure binding of all kinds of molecules to each other? Wikipedia tells me that "The first SPR immunoassay was proposed in 1983 by Liedberg, Nylander, and Lundström, then of the Linköping Institute of Technology (Sweden)", so I don't know if these gentlemen should be up for the prize (the three form a neat, Nobel-approved trio). FRET also comes to mind. Then there's cryo-electron microscopy, which while tantalizing is almost certainly too nascent a field to be recognized.

Moving on to energy, there's one development that undoubtedly tugs at my heartstrings:

Lithium-ion batteries (Moderately easy): Used in almost every kind of consumer electronics, lithium-ion batteries are also touted as the best battery alternative to fossil fuels. A great account is provided in Seth Fletcher’s “Bottled Lightning”. From what I have read in that book and other sources, John Goodenough, Stanley Whittingham and Akira Yoshino seem to be the top candidates, although others have also made important contributions and it may be hard to divide up the credit.

And two other fields, at least one of which has been a favorite for a while:

Electron transfer in biological systems (Easy)
Pros: Another field which has matured and has been well-validated. Gray and Bard seem to be leading candidates.

Computational chemistry and biochemistry (Difficult):

Pros: Computational chemistry as a field has not been recognized since 1998, so the time seems ripe. One obvious candidate would be Martin Karplus. Another would be Norman Allinger, the pioneer of molecular mechanics.
Cons: This would definitely be a "lifetime achievement award". Karplus did perform the first MD simulation of a protein, but that by itself wouldn’t command a Nobel Prize.

The other question is regarding what field exactly the prize would honor. If it’s specifically applications to biochemistry, then Karplus alone would probably suffice. But if the prize is for computational methods and applications in general, then others would also have to be considered, most notably Allinger but perhaps also Ken Houk who has been foremost in applying such methods to organic chemistry. Another interesting candidate is David Baker whose program Rosetta has really produced some fantastic results in predicting protein structure and folding. It even spawned a cool game. But the field is probably too new for a prize and would have to be further validated; at some point I do see a prize for biomolecular simulation.

If they really do decide to give out another award for biochemistry, there are some well-recognized candidates. Many of these are also shoo-ins for the medicine prize.

Nuclear receptors (Easy): Pros: The importance of these proteins is unquestioned. I worked a little on NRs during my postdoc and remember being awed by the sheer diversity and ubiquity of these molecules in mediating key physiological functions. In addition they are already robust drug targets, with drugs like tamoxifen that hit the estrogen receptor making hundreds of millions of dollars. Most predictors seem to converge on the names of Chambon and Evans for this prediction, and NRs are definitely at the top of my list.

Chaperones (Easy): Arthur Horwich and Franz-Ulrich Hartl just won last year’s Lasker Award for their discovery of chaperones. Their names have been high on the list for some time now.
Pros: Clearly important. Chaperones are not only important for studying protein folding on a basic level but in the last few years the malfunctioning of chaperones such as heat-shock proteins has been shown to be very relevant to diseases like cancer.

Cons: Too early? Probably not.

Statins (Difficult): Akira Endo’s name does not seem to have been discussed much. Endo discovered the first statin. Although this particular compound was not a blockbuster drug, since then statins have revolutionized the treatment of heart disease.
Pros: The “importance” as described in Nobel’s will is obvious since statins have become the best-selling drugs in history. It also might be a nice statement to award the prize to the discovery of a drug for a change. Who knows, it might even boost the image of a much maligned pharmaceutical industry...
Cons: The committee is not really known for awarding actual drug discovery. Precedents like Alexander Fleming (antibiotics), James Black (beta blockers, antiulcer drugs) and Gertrude Elion (immunosuppressants, anticancer agents) exist but are few and far between. On the other hand this fact might make a prize for drug discovery overdue.

Drug delivery (Difficult): A lot of people are pointing to Robert Langer for his undoubtedly prolific and key contributions to drug delivery. The field as a whole has not been recognized yet so the time may be ripe; from my own understanding of his contributions, Langer seems to me more of an all-rounder, although it may not be too late to single out some of his earlier discoveries, such as the first demonstration of the delivery of high molecular weight polymer drugs. 

Cancer genetics (Easy): Clearly a very important and cutting-edge field. We still don’t know how much of an impact genomic approaches will ultimately have on cancer therapy since the paradigm is clearly evolving and traps abound, but any history of the field will have to include Robert Weinberg and Bert Vogelstein. Vogelstein discovered the importance of p53, the “guardian of the genome” while Weinberg discovered the first oncogenes. In addition both men have also been prominent influences on the field as a whole. Given both the pure and applied importance of their work, their discoveries should fit the Nobel committee’s preferences like a glove. As a con, the field is very vast and divvying up credit could be tricky. 

Genomics (Difficult): A lot of people say that Venter should get the prize, but it’s not clear exactly for what. Not for the human genome, which others would deserve too. If a prize was to be given out for synthetic biology, it’s almost certainly premature. Venter’s synthetic organisms from last year may rule the world, but for now we humans still prevail. On the other hand, a possible prize for genomics may rope in people like Caruthers and Hood who pioneered methods for DNA synthesis.

DNA fingerprinting (Easy): Now this seems to me to be very much a field from the “obvious” category and one that's long overdue. The impact of DNA fingerprinting and Western and Southern blots on pure and applied science - everything from discovering new drugs to hunting down serial killers (and exonerating wrongly convicted ones; for instance check out this great article by Carmen Drahl in C&EN) - is at least as big as that of the prizeworthy PCR. I think the committee would be doing itself a favor by honoring Jeffreys, Stark, Burnette and Southern. And while we are on DNA, I think it’s also worth throwing in Marvin Caruthers, whose technique for DNA synthesis really transformed the field. In fact it would be nice to award a dual kind of prize for DNA - for both synthesis and diagnosis. Cons: Picking three might be tricky.
 

Chemical genetics (Easy): Another favorite for years, with Stuart Schreiber and Peter Schultz being touted as leading candidates. Pros: The general field has had a significant impact on basic and applied science. Cons: This again would be more of a lifetime achievement award, which is rare. Plus, there are several individuals in recent years (Cravatt, Bertozzi, Shokat) who have contributed to the field. It may make some sense to award Schreiber a ‘pioneer’ award for raising ‘awareness’ but that’s sure going to make at least some people unhappy. Also, a prize for chemical biology might be yet another one whose time has just passed, just like a prize for the Pill.

Speaking of the pill, Carl Djerassi's 90th birthday was celebrated this year at the ACS National Meeting in Indianapolis. For the past thirty-odd years Djerassi has been focused not on science but on poetry and writing. Personally I think that recognizing him with the prize would still be a nice thing - after all, the social impact of the Pill easily equals that of other prizewinning discoveries like IVF - but the late receipt of the prize combined with the death of many important associates would make it all a bit strange.

What about the physics prize?

As interesting as the chemistry prize is going to be, its significance and excitement might pale to a whimpering whisper in comparison to the physics Nobel Prize, the awarding of which might create a controversy the likes of which have not been seen since Miley Cyrus took to the stage with a novel dance form.

The problem is simple. Everybody agrees that the discovery of the Higgs boson deserves a Nobel Prize. But almost every history of the Higgs credits at least six and possibly seven people with laying out the ideas that predicted the finding. Again, nobody denies that Peter Higgs deserves the prize, but after that it's anybody's guess. And the six people often cited are just the theoreticians; including the experimenters at CERN adds another well-deserved layer of complexity to giving out the prize.

If we are really looking for least-of-all-evils scenarios then perhaps they can award the prize to Higgs and collectively to the CERN team. That way fewer feathers may be ruffled (or at least all feathers would be equally ruffled) and the prize would also have been squarely divided between the lead theoretician and the experimenters. We will see. This year's Nobel Prize for Physics may put some of the great Greek dramas to shame.

Update: As David Pendlebury from Thomson Reuters correctly pointed out to me, Vogelstein did not discover p53 but was the first to point out its connection to cancer as a common denominator. Also, Elwood Jensen passed away last year.

Update: Other predictions - Pipeline, Chembark, Chemistry World.

MIT and chemistry: What gives?

A reference to pioneering physical organic chemist Jack Roberts in a C&EN article again brought the following question to my mind: Why has MIT, over several decades, managed to lose some of the best chemists in the world to other departments? This question has been nagging at me for several years and resurfaced recently when Dan Nocera moved to Harvard from MIT.

I understand that my information is largely anecdotal, but it seems to me that the school has lost more highly accomplished chemists to other departments than pretty much any other top school. 

Of course, since these chemists were attracted to MIT in the first place that says something about the caliber of the department, but why lose them then?

Here's a tentative list of MIT chemists who have been successfully lured away. What is striking about the list is that it spans at least four decades and includes some of the most distinguished scientists of those four decades. It's also interesting to note that most of these are organic/bioorganic guys, which could say something about what areas the department is focused on.

Jack Roberts (Caltech)
George Whitesides (Harvard)
Chris Walsh (Harvard)
Barry Sharpless (Scripps)
Peter Seeberger (Max Planck)
Greg Fu (Caltech)
Dan Nocera (Harvard)

Who else am I leaving out? I don't want to speculate on the reasons; a simple one could be that not every school focuses equally on all its disciplines. And nobody can deny that MIT chemistry has still been top notch over the decades; as I indicated before, the very fact that all these people launched their careers there vouches for the quality of the department. But the track record seems to indicate that MIT is much better at attracting people than retaining them. And there's got to be a reason for that.

Chemical and Engineering News celebrates 90 years: How chemistry has come a long way


Chemistry is - in the true sense - the central science, reaching inside every aspect of our lives (Image: Marquette University)
Chemical and Engineering News (C&EN) is celebrating 90 years of its existence this year, and I can only imagine how perplexed and awestruck its editors from 1923 would have been had they witnessed the state of pure and applied chemistry in 2013. I still remember devouring the articles published in the magazine during its 75th anniversary, and this anniversary also offers some tasty perspectives on a diverse smattering of topics: catalysis, structural biology and computational chemistry, to name a few.

There's an article in the magazine documenting how the single most important concept in chemistry - that of the chemical bond - has undergone a transformation: from fuzzy, to rigorously defined, to fuzzy again (although in a very different sense).

Nobel Laureate Roald Hoffmann had something characteristically insightful to say about The Bond:
"My advice is this: Push the concept to its limits. Be aware of the different experimental and theoretical measures out there. Accept that at the limits a bond will be a bond by some criteria, maybe not others. Respect chemical tradition, relax, and instead of wringing your hands about how terrible it is that this concept cannot be unambiguously defined, have fun with the fuzzy richness of the idea.”
In a bigger sense the change in chemistry during these 90 years has been no less than astounding. In 1923 the chemical industry already made up the foundations of a great deal of daily life, but there was little understanding of how to use the concepts and products of chemical science in a rational manner. Since 1923 our knowledge of both the most important aspect of pure chemistry (the chemical bond) and of applied chemistry (synthesis) has grown beyond the wildest dreams of chemistry's founders.

If we had to pinpoint two developments in chemistry during these 90 years that would truly be described as "paradigm shifts", they would be the theoretical understanding of bonding and the revolution in instrumental analysis. As I and others have argued before, chemistry unlike physics is more "Galisonian" than "Kuhnian", relying as much on new instrumental techniques as on conceptual leaps for its signal achievements.

The two most important experimental advances in chemistry - x-ray diffraction and nuclear magnetic resonance - both came from physics, but it was chemists who honed these concepts into routine laboratory tools for the structure determination of a staggeringly diverse array of substances, from table salt to the ribosome. The impact of these two developments on chemistry, biology, medicine and materials science can hardly be overestimated; they cut down the painstaking task of molecular structure determination from months to hours, they allowed us to find out the nature of novel drugs, plastics and textiles and they are now used by every graduate student every single day to probe the structure of matter and synthesize new forms of it. Other developments like infrared spectroscopy, electron diffraction, atomic force microscopy and single-molecule spectroscopy are taking chemistry in novel directions.

The most important theoretical development in chemistry also derived from physics, but its progress again demonstrates chemists' central role in acting as mediators between concept and application. It also serves to make a key point about reductionism and the drawbacks of trying to reduce chemistry to physics. The chemical bond is an abstract concept going back to "affinities" between atoms (which when illustrated were replete with hooks and eyes). But it was in 1923 that the great American chemist G. N. Lewis propounded the idea in terms of atoms sharing electrons. This was a revolutionary brainwave and illuminated the way for Linus Pauling, John Slater, Robert Mulliken, John Pople and others to use the newly developed machinery of quantum mechanics to fashion the qualitative principle into an accurate, quantitative tool which - with the development of modern computing - now allows chemists to routinely calculate and predict important properties for any number of chemical substances.

Yet the ramifications of the chemical bond tempt and beguile physicists and constantly escape from their grasp when they try to define them too accurately. The above quote by Roald Hoffmann puts the problem in perspective; quintessentially chemical ideas like aromaticity, the hydrophobic effect, steric effects and polarity "fray at the edges" (in Hoffmann's words) when you try to push them to their limits and try to define them in terms of subatomic physics. Chemistry is a great example of an emergent discipline. It is derived from physics and yet independent of it, relying on fundamental definitions at its own level when progressing.

The chemical bond and other theoretical aspects of chemistry have enabled the rise of the one activity pursued by chemists of which society is an unsurpassed beneficiary - the science, art and commerce of synthesis. Every single molecule that bathes, clothes, feeds, warms, transports and heals us has been either derived from nature using chemical techniques or has been synthetically made in a chemical laboratory. The social impact of these substances is hard to overestimate; even a sampling of a few such as the contraceptive pill, antibiotics or nylon attests to the awesome power of chemistry to completely transform our lives.

In 1923 synthesis was a haphazard process and there was virtually no understanding of how we could do it rationally. All of this changed in the 1950s and 60s when a group of pioneering scientists led by the legendary organic chemist Robert Burns Woodward revolutionized the process and honed synthesis into a precisely rational science which took advantage of the course of chemical reactions, the alignment of orbitals, the development of new chemical reagents and the three-dimensional shape of molecules. Many Nobel Prizes were handed out for these groundbreaking discoveries, but none of them quite captures the sheer impact that synthesis has had, and will continue to have, on our way of life.

As is inevitably the case for our embrace of science and technology, with progress also come problems, and chemists have had to deal with their share of issues like environmental pollution, drug side effects and the public perception of chemistry. Suffice it to say that most chemists are well aware of these and are working hard to address them. They recognize that with knowledge comes responsibility, and the responsibility they bear to mitigate the ills of the wrongful application of their science transcends their narrow professional interests and encompasses their duties as citizens.

In the new century chemistry continues to build upon its past and chemists continue to push its boundaries. Another change which the editors of C&EN would not have foreseen in 1923 is the complete integration of chemistry into other disciplines like biology, medicine and engineering and its coming into its own as the true "central science". Today chemistry deeply reaches into every single aspect of our lives. The cardinal problems facing civilization - clean and abundant food and water, healthcare, national security, overpopulation, poverty, climate change and energy - cannot be solved without a knowledge of chemistry. Simply put, a world without chemistry would be a world which we cannot imagine, and we should all welcome and integrate the growth of chemical science into our material and moral worldview.

First published on the Scientific American Blog Network.

Macrocycle drug review

Here's a comprehensive and useful review of macrocycle drugs in J Med Chem by Giordanetto and Kihlberg at AstraZeneca; well worth reading to get an idea of what's out there in the clinic and on the market. 

The authors looked at about 30 clinical macrocycle candidates and 70 marketed macrocycle drugs and analyzed their principal physicochemical properties to investigate trends and differences. Some main points emerging from the discussion:

1. Most macrocycles are in oncology or infection; however, the ones that are targeted toward other areas include a significant number of de-novo synthetic or semisynthetic molecules from structure-based drug design.
2. Among marketed macrocycles, injected drugs are mostly cyclic peptides while oral drugs are mostly macrolides (in general, injectable macrocycles seem to span a broader chemical space).
3. The structural differences between oral and injectable macrocycles can often be pretty trivial (at least on inspection; e.g. tacrolimus vs. pimecrolimus).
4. For oral macrocycles, increasing MW seems to track with increasing lipophilicity.
5. There's a set of plots which indicates the limits of chemical space within which marketed macrocycles seem to lie. "Rogue" rule-breakers like cyclosporin, which lie very far from this space, are still exceptions.
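For readers who want to poke at this kind of analysis themselves, here is a minimal sketch - not from the paper, and assuming RDKit is installed - of computing the sort of descriptors the review compares (molecular weight, cLogP, hydrogen-bond donors and acceptors, polar surface area, largest ring size) for a placeholder 16-membered macrolactone rather than any compound from the dataset.

```python
# Minimal sketch (assumes RDKit is installed): compute the kind of descriptors
# the review compares for macrocycles. The SMILES below is a plain 16-membered
# macrolactone used purely as a placeholder, not a compound from the paper.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski, rdMolDescriptors

smiles = "O=C1OCCCCCCCCCCCCCC1"   # placeholder 16-membered macrolactone
mol = Chem.MolFromSmiles(smiles)

# Largest ring size, the usual criterion for calling something a macrocycle (>= 12 atoms)
largest_ring = max(len(ring) for ring in mol.GetRingInfo().AtomRings())

print(f"MW:               {Descriptors.MolWt(mol):.1f}")
print(f"cLogP:            {Descriptors.MolLogP(mol):.2f}")
print(f"H-bond donors:    {Lipinski.NumHDonors(mol)}")
print(f"H-bond acceptors: {Lipinski.NumHAcceptors(mol)}")
print(f"TPSA:             {rdMolDescriptors.CalcTPSA(mol):.1f}")
print(f"Largest ring:     {largest_ring} atoms")
```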

The question of whether we can deliberately engineer drug molecules to act like cyclosporin on a large scale is still very much an open one. What is clear is that we are increasingly making molecules that are decidedly testing the boundaries of the rule-of-5 and pushing the envelope. The future is not guaranteed, but it's promising.

So you hate GMOs because they are untested. What about feelbetteramine from the health store?

Normal rice and golden rice fortified with beta-carotene (Image: Wikimedia Commons)
Noted pharmacologist, Forbes blogger and North Carolina Museum of Natural Sciences science communications director David Kroll has a good post in Forbes about the recent controversy regarding "Golden Rice", a strain of rice genetically engineered to produce beta-carotene, the precursor of vitamin A. This kind of rice might be invaluable in regions with endemic vitamin A deficiency (VAD), which is a big deal; as the Wikipedia article on the topic says, VAD is responsible for 1–2 million deaths, 500,000 cases of irreversible blindness and millions of cases of xerophthalmia annually. Clearly Golden Rice has the potential to do a lot of good.

Now I don't want to take either a strongly pro-GMO or anti-GMO stance here, although I definitely deplore the vandalism of Golden Rice fields described in the article that David links to. As a scientist however I am generally inclined to side with GMOs; to an organic chemist like me, modified sequences of DNA - while not without potential to cause harm - seem much more benign when ingested than decidedly nasty things like dioxins, pyrene and botulinum toxin. In addition there are specific cases where engineering crops to withstand insect pests has done enormous good; and this perspective is independent of whatever I might think of the financial or political behavior of the relevant corporations.

But the bigger problem I have is with a common thread running through almost every anti-GMO protester's vocabulary, irrespective of whatever other objections they might have against GMOs. I find myself pondering the following question which I asked on David's blog:
"I actually find the anti-GMO folks’ argument about not trusting GMOs simply because they have “not been tested enough” to be disingenuous, selective and cherry-picked at the very minimum. Let’s say that tomorrow Whole Foods introduces a new brand of spirulinadetoxwhatever health supplement containing feelbetteramine from a wholly natural plant found in the foothills of Bolivia. Do we think for a second that the anti-GMO folks won’t be lining up at their nearest Whole Foods, no matter that this novel substance is as much or even more untested than a GMO?"
It's food for thought. Most opponents of GMOs don't seem to have a problem eagerly loading up their shopping carts with all kinds of exotic stuff from the health supplement aisle in the local supermarket. How many Whole Foods (and Whole Foods is just an example here, and probably one of the more benign ones) store assistants - many of whom are far from being trained in nutrition or pharmacology - have convinced these people that feelbetteramine is right for their gout, or for their insomnia, or for the "cognitive deficit" that they feel everyday at work? What kind of evidence of long-term safety exists for feelbetteramine that allows these GMO opponents to embrace the wondrous effects of this non FDA-approved concoction with alacrity? And proponents of health supplements are often big on anecdotal evidence; why don't they, at the very least, admit anecdotal evidence about the benefits of GMOs (especially when the evidence is concrete, as in case of VAD) into their belief system?

To me there clearly seems to be a discrepancy between the reflexive rejection of untested GMOs by the anti-GMO crowd and their rapid embrace of the equally or more untested latest health supplement. All things being equal, as a scientist I at least know what the express purpose of Golden Rice is, compared to the hazy reports on salutary effects of feelbetteramine. So it seems to me that if I am really against GMOs because they are insufficiently tested, I need to mostly steer clear of the health supplement aisle. And did I mention that feelbetteramine can also set your love life on the path to glorious bliss?

This post first appeared on Scientific American Blogs.

Who's afraid of nuclear waste?: WIPPing transuranics into shape


This post first appeared on Scientific American Blogs.
Waste arriving at the WIPP from all over the country's non-commercial DoE nuclear reactor sites (Image: PBS Nova)
About 50 miles from the Texas border in southeastern New Mexico sits the town of Carlsbad, home of the renowned Carlsbad Caverns. Its lesser-known claim to fame, one that might have a disproportionately long-lasting impact on the future of energy and the human species, is as the site of the Department of Energy's Waste Isolation Pilot Plant (WIPP), the only official repository in the US currently accepting long-lived transuranic nuclear waste. Jessica Morrison from PBS has an excellent article on the workings of the WIPP and its importance for nuclear power (hat tip: Bora). With the debacle of Yucca Mountain still fresh in people's memory, the WIPP offers a welcome and unique possibility for the future:
The Waste Isolation Pilot Plant, known locally as WIPP (pronounced “whip”), opened in 1999 after decades of back and forth between state and federal regulators. Today, it holds more than 85,000 cubic meters of radioactive waste arriving from as far away as South Carolina. Currently, WIPP is only authorized to handle waste containing elements with atomic numbers higher than 92—primarily plutonium—that originated from the development and manufacture of nuclear weapons. Between 1944 and 1988, the U.S. produced about 100 metric tons of plutonium, most of which was used to develop nuclear weapons.
What I find pleasing on a deep level about the WIPP is that it relies on entirely natural mechanisms for sequestering the waste from the outside world. The basic principle is to dig a hole deep into a salt bed. Salt has the unique property of displaying "creep", the tendency to flow into and around cracks and naturally form seals - seals that can be as tight as those formed by the hardest rock under the pressures found 2,100 feet underground. When you bury waste in salt, you are basically letting geology do its job and create a seamless tomb for the waste.
WIPP’s operators stack waste containers in rooms dug into the salt formation and then let geology do the rest. Under pressure from the ground above, salt formations flow into cracks and open spaces. Over several dozen years, salt will settle around the containers, forming a rocky seal. That self-sealing ability also protects the site from cracks caused by earthquakes—any that open will quickly close. So far, the site has been successful in containing radiation from the waste.
Working in WIPP is therefore a job with a time stamp; in some sense the mine itself is urging the workers to do their job quickly and get the hell out of there, so that the earth can close around the waste and clasp it in its tight embrace.
WIPP feels like it’s in constant motion—the continuous care needed to control the salt, the movement of the electric carts within the mine’s pathways, the loading of waste first into the walls and then the room, back to front. It all serves as a reminder that the place really is moving, just at a slower, inexorable pace. WIPP depends “on salt and the behavior of salt,” Elkins says. Salt flows under pressure, and it’s under a great deal of pressure this far underground. On a geologic time scale, it presses down with surprising speed, crushing and then encapsulating whatever is placed inside.
What this also partly means is that the sooner the repository fills up with waste, the better it would be to close it and let the salt do its job. This is a good incentive for carting long-lived waste from the nation's myriad nuclear sites to WIPP as soon as possible. It's not as if there is a shortage of waste waiting to be disposed of:
While WIPP has been accepting nuclear waste from weapons programs, no central repository currently exists in the U.S. for spent nuclear fuel and related waste from commercial reactors. Until one opens, waste has been sitting in interim storage at or near each of the nation’s 65 nuclear power plants. At the end of 2011, these sites and others held more than 67,000 metric tons of spent nuclear fuel, according to a report issued by the Congressional Research Service.
Waste buried deep underground in the right kind of geological formation is extremely safe, and many people who cite the problem of nuclear waste don't realize that good technical solutions based on burying waste have been at hand for decades; the problem is mainly a political one. It's worth appreciating the basic fact that there are two kinds of waste: short-lived and intensely radioactive, and long-lived and mildly radioactive. This inverse relationship between half-life and activity is a basic law of physics and plays to our advantage. Thus, short-lived isotopes like strontium-90 and cesium-137 might be biologically dangerous, but they also reach safe levels rather quickly (half-life about 30 years for both). On the other hand, long-lived isotopes like plutonium-239 (half-life 24,000 years) are less dangerous because of their lower activity. Typically nuclear waste contains both kinds of elements, and one of the bad decisions taken by the government in this country on rather flimsy grounds was to halt reprocessing, a process that would have separated plutonium and other valuable, proliferation-prone elements from the short-lived waste and which is routinely done in Europe, Russia and Japan. Burying plutonium is thus both an unnecessary invitation to potential proliferation and a waste of valuable fuel for civilian nuclear reactors.
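To see roughly how big that half-life/activity trade-off is, here is a back-of-the-envelope sketch using only standard half-lives and atomic masses (nothing specific to the WIPP): the specific activity per gram is A = (ln 2 / t½) × (NA / M).

```python
# Back-of-the-envelope specific activities: A = (ln 2 / t_half) * (N_A / M),
# i.e. decays per second per gram. Half-lives and masses are standard values.
import math

N_A = 6.022e23                # Avogadro's number
SECONDS_PER_YEAR = 3.156e7

isotopes = {
    # name: (half-life in years, atomic mass in g/mol)
    "Sr-90":  (28.8, 90),
    "Cs-137": (30.2, 137),
    "Pu-239": (24_100, 239),
}

for name, (t_half_years, mass) in isotopes.items():
    decay_constant = math.log(2) / (t_half_years * SECONDS_PER_YEAR)  # per second
    activity_bq_per_gram = decay_constant * N_A / mass
    print(f"{name}: ~{activity_bq_per_gram:.1e} Bq per gram")

# Sr-90 and Cs-137 come out around 10^12 Bq/g while Pu-239 is around 10^9 Bq/g,
# i.e. the long-lived isotope is a few thousand times less active per gram.
```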

It's hard to think of proliferation though when the plutonium is lying 2100 ft below ground covered by salt and earth as hard as kryptonite. Even when Yucca Mountain was being discussed in the 70s and 80s, there were sound techniques for enclosing waste in borosilicate glass surrounded by layers of tamper-proof materials like copper and clay. The following illustration from a 1991 article on nuclear power by physicist Hans Bethe displays the multiple barriers separating transuranic waste from the environment:

Multilayered cylindrical design for the isolation of transuranic waste (Image: Engineering and Science, 1991)
When this kind of waste burial was being discussed, one of the most salient concerns was that of groundwater seepage, which might potentially transport the waste over great distances. But this problem is not nearly as serious as it sounds. To begin with, waste repositories are already located away from both residential areas and groundwater sources. But even if groundwater were to come in contact with the waste, it would be several hundred thousand years before it ever reached the surface. As Bethe explains in the same article:
"Groundwater doesn't flow like a river; it creeps. At a disposal site in Nevada called Yucca Mountain the Department of Energy has measured the flow of groundwater at 1 millimeter per day. And it has to  flow a distance of about 50 kilometers before it comes to the surface, because it generally flows horizontally. With this alone, it takes more than 100,000 years (italics mine) to come to the surface. In addition to that, at Yucca Mountain the waste can be placed about 400 meters below ground, and the groundwater is 600 meters below ground, so the waste won't even touch it. This might change due to  geological upheavals, but to start with it's a very good disposal site.
And even if  the groundwater is flowing 1 millimeter per day, experiments have shown that most dissolved elements take 100 times longer to  flow than groundwater; they are constantly adsorbed by the surrounding rock and then put back into solution again. And plutonium, which is the element people are so afraid of, takes 10,000 times longer again to migrate than most elements. In other words, during plutonium's half-life of 20,000 years, you are insured 100,000 times over."
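Bethe's headline number is easy to check with nothing more than the figures quoted above (1 millimeter per day over about 50 kilometers, plus the stated retardation factors); a quick sketch:

```python
# Quick check of the figures in Bethe's quote: 1 mm/day over ~50 km,
# dissolved elements ~100x slower than the water, plutonium ~10,000x slower still.
MM_PER_KM = 1.0e6
DAYS_PER_YEAR = 365.25

flow_mm_per_day = 1.0      # measured groundwater flow at Yucca Mountain (from the quote)
distance_km = 50.0         # horizontal distance to the surface (from the quote)

water_years = distance_km * MM_PER_KM / flow_mm_per_day / DAYS_PER_YEAR
print(f"Groundwater transit time:  {water_years:,.0f} years")         # ~137,000 years
print(f"Typical dissolved element: {water_years * 100:,.0f} years")   # ~14 million years
print(f"Plutonium:                 {water_years * 100 * 10_000:.1e} years")  # vastly longer than its half-life
```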
Yucca Mountain is now abandoned, but these general principles of waste storage still stand, and plutonium buried this way can still be considered confidently isolated from the environment over multiple half-lives. With short-lived elements the solution is easier. It's a pity that political inaction and public opinion have not allowed us to cart most of the existing waste to sites like the WIPP. The waste is relatively small in amount to begin with - the annual waste from the 100-odd reactors in the US would only fill a football field to a depth of one foot - and storing it around the country creates unnecessary safety issues. Dry cask storage is a good solution, but since the casks are often stored above ground on site it is far from a permanent one.

The public, government officials and experts should take to heart the lessons of the WIPP and of existing techniques for disposing of waste. As with many things nuclear, one of the major problems is that of education; many members of the public think that all nuclear waste is alike, that all of it will kill you even on slight exposure, and that there is no way at all of disposing of it. Stories like that of the WIPP should hopefully change their minds and demonstrate that the problem of nuclear waste is not a technical problem; it is one of psychology and politics.

Note: As Twitter user @AtomikRabbit pointed out to me, WIPP is only a repository for waste from non-commercial DoE nuclear reactors. It's waste from such sources that's displayed in the photo above.

Fairness or intimidation: How do you handle difficult commenters?

Last week I wrote a post on my Scientific American blog criticizing a guest post about nuclear power on Andrew Revkin's NYT blog "Dot Earth" by John Miller, a social psychologist and journalist who had very briefly served as an officer on a nuclear submarine in 1972. Miller's post criticized "Pandora's Promise", a film showcasing environmentalists supporting nuclear power which I reviewed a few months ago. Since Miller vehemently disagreed with the film and I found much in it of merit, not surprisingly I disagreed with Miller on many points and clarified my disagreement on my blog. What I found most remarkable was that several of the links that Miller provides themselves contain information qualifying or contradicting his views.

Here's how events unfolded from then onwards. Firstly, let me say that everything that I am stating here is from public sources like Twitter and Andrew Revkin's NYT blog.

It's worth noting that long before I wrote the post, there were hundreds of comments criticizing both Miller's lack of expertise and the flimsy, misleading and cherry-picked evidence that he presented in his piece. As of now there are more than 600 comments on the NYT blog, several of them critical of Miller. In response Miller replied to hundreds of these comments, and in many of them asserted his supposed expertise on the matter and denigrated that of others (always signing his comments as "Dr. John Miller"). He takes a swipe at leading climatologist James Hansen (who has recently supported nuclear power), insisting that his own experience as a nuclear submarine officer makes him more qualified than Hansen to comment on nuclear energy. Phrases like "your comments are nonsense" and "you know nothing about this topic" are commonplace. One commenter remarked that  "The level of sniping and character assassination here makes me feel the need to double check the masthead to verify that it is indeed The New York Times rather than The Huffington Post or The Drudge Report."

Nuclear expert Rod Adams (who writes the Atomic Insights blog) put Miller's qualifications in perspective:

"After reporting to his submarine, Miller again went through the qualification process and became an Engineering Officer of the Watch on a new plant. For his non watchstanding duty, however, he was assigned as the ship's Supply Officer, NOT as an engineering division officer. 

Within 9 months after his arrival, his submarine was put into drydock for a conversion to special operations. It remained there with the reactor shut down until after he had resigned and left the Navy.

When Miller left the Navy in 1972, he was extremely "light" and wet behind the ears. His nuclear knowledge has not improved in the past 41 years."

Adams of course felt it necessary to address Miller's qualifications only because Miller was so fond of reiterating them. Miller's replies to this and other comments were a mix of technical facts and personal remarks. Most of the personal remarks consisted of re-asserting that since he had served as an officer on a nuclear submarine in 1972, he knew more about nuclear power than almost every other commenter. As of this moment, this torrent of commenting shows no sign of abating.

Now let's come back to my post. After I wrote it Miller wrote an extremely long comment countering my points. The comment was filled with personal remarks and a diatribe against Sci Am's editors; Miller could not believe they had let these "falsehoods" through (at this point he was not aware of the difference between Sci Am Blogs and Sci Am magazine). Perhaps the most notable part of the comment was a demand that Sci Am "retract" the article and "issue an apology".

Upon reading this long comment filled with a mix of technical information and personal attacks, I made the editorial decision to not publish it. Why? For simple reasons. Firstly, the comment added nothing new to the discussion. But more importantly, I want to moderate the tone of the comments on my blog; this decision is mine and mine alone, and I am not obligated to publish any comment which I think will affect the tenor of the discussion. Your blog is your living room, and you decide what kind of conversations you allow in it. This decision was bolstered last year after reading a study which said that the nature of comments on blog posts affects a lot of things: the inclination of new commenters to comment (they are reluctant to enter what they perceive to be a minefield), their perception of the post itself (it is seen as polarizing and biased rather than reasoned) and their own opinions of the topic (which change from neutral to polarized simply based on the comments). Thus I stand by my decision to not publish Miller's comments. My decision was followed by several emails from Miller demanding that I publish his comments. In these emails as in the Twitter exchange there was little evidence of reconciliation or willingness to reach an agreement; almost every statement was in the form of a demand or entitlement. Maybe it's just me, but if I really wanted to have a blogger publish my comments, that would be the last kind of attitude I would pick.

When his first comment did not appear, Miller followed up by posting no less than 22 comments of similar nature, sometimes doling out technical information and often denigrating other people's knowledge. This number by itself constitutes massive spamming, irrespective of the nature of the comments. One of the cardinal rules on blogs is to not hijack the conversation by excessive commenting, and Miller violated this rule almost right away.

It was then that I logged on to Twitter and became aware of an epic Twitter war between Miller and Sci Am Blogs editor Bora Zivkovic. Miller was demanding that Bora and I publish the comment, while Bora was unfailingly reasonable, civil and clear in saying that Miller's behavior did not oblige me to publish any comment from him. In all his tweets there was little evidence of wanting to reach a reconciliation, or an admission that he might have started out on the wrong foot by writing a very long comment filled with condescending remarks and demands for retraction and apologies. Miller also did not seem to appreciate how much editorial control individual bloggers have - and should have - over their posts. Nor did he seem to understand how easy it would have been to start his own blog and respond and comment to his heart's content. In fact, protocol would have dictated that since I countered his post on my site, he offer a rebuttal on his own.

In any case, about a day after this Miller went one step further: He published the entire content of his long comment in the comments section of Andrew Revkin's blog. Of the hundreds of comments that he has written, at least ten (ranging from bold-lettered "PART ONE" to "PART TEN") are devoted to duplicating the contents of his comments on my blog. The rest of his comments consist of complaints about me, Bora and Sci Am in general. According to Miller, Bora and I are "grotesquely unfair and cowardly" for moderating his comments on my blog and our actions "tarnish Scientific American's reputation". He also urged readers to write to Sci Am's editors to complain about our behavior. I don't know about readers, but if I actually paid this kind of attention to any post criticizing my views, my detractors would probably be forgiven for calling me pathologically obsessed.

In any case, Miller's original comment has now been let through after his many complaints (along with 4 others) and after we had him significantly temper it to conform to the comment policy. His comment says nothing that he has not already said and does not provide new original criticism in my opinion, so I don't feel any need to amend my post. I am also not going to allow him to comment further on my blog; he says that he wants to respond to every other commenter on my post who is critical of his writing, to which I say, "Get your own damn blog".

I wanted to write about this incident since it is, in my opinion, a good case study in handling difficult and obsessive commenters. The case raises a number of interesting questions: Is Miller's behavior intimidating, obsessive and bullying or is this about free speech? I think it's the former. Should bloggers automatically allow rebuttals to their posts even if they think those rebuttals will significantly affect the tone of their comments section for the worse? What is the correct reaction by a blogger to a commenter who seems obsessed with commenting on their blog and who will go to great lengths to criticize the blogger and his or her sponsor on other websites? On my part I found it most interesting to be a part of the debate; as a blogger it only helps me to learn more and provides me with a background to handle similar cases in the future.

Druggability: An optimistic assessment


Derek has a good post that takes a philosophical approach toward the whole question of "druggability". The main question is: given a disease state and biochemical knowledge of all the mechanisms involved in that state, can you find a therapeutic that will abolish the disease state and restore another one which we ordinarily term "healthy"? The whole thing is worth reading.

For the moment let's make things simple and assume that the therapeutic we are looking for is a small molecule. Derek's post actually takes me back to my post about why physics cannot solve the problem of drug design. The main challenge in drug discovery is that we have to come up with solutions that extend over a whole range of emergent phenomena; an ideal drug will have to modulate every level of biological organization, from the whole body down to organ systems, cells and molecular targets. Some of our best drugs do this, but that was a happy and accidental coincidence. We still cannot deliberately design in features that will modulate a system across multiple emergent levels.

However I am hard pressed to see why this cannot be done in principle. That is because in some sense, the problem of druggability comes down to the simpler problem of "ligandability"; that is, for every given protein and every arbitrary binding site, can you find a complementary key that fits the lock? My guess is that the answer to this question is a yes since ultimately it boils down to forces and geometric complementarity. Can we design a small molecule that fills a pocket, forms hydrogen bonds and electrostatic interactions and gains binding affinity through the hydrophobic effect by displacing water molecules? I would tend to think that the answer is a yes across the board, so I don't see why the problem shouldn't be solvable for every single protein at least in principle. 

There are other challenges in drug development of course - maintaining the right blood levels, modulating half-lives and protein-drug binding kinetics, avoiding drug-drug interactions - but all of these problems have at their root the interaction between molecules, which can be modulated. Ultimately, every drug works its magic through molecular interactions, even if they are spread across multiple emergent levels and targets. The problem of druggability is in one sense the problem of ligandability stated multiple times. Solving the problem of druggability is tantamount to solving the problem of ligandability for several targets: the target of interest, anti-targets like hERG and cytochrome P450, and a judiciously picked subset of targets whose simultaneous inhibition leads to the required beneficial effect (like the targets for some of the "selectively non-selective" kinase inhibitors out on the market). Want to improve off-rates? Improve protein-ligand interactions. Want to maintain drug levels? Minimize off-target degradation by proteases or esterases. Want to avoid interactions with other drugs? Regulate or improve inhibition of cytochrome P450s or P-gp. If the problem of ligandability can be solved for multiple targets, wouldn't it be equivalent to solving the problem of druggability?
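To make the "ligandability stated multiple times" framing concrete, here is a toy sketch - purely illustrative, with made-up target names, affinities and thresholds, not any real scoring scheme - of treating druggability as the simultaneous satisfaction of ligandability constraints across a panel of targets and anti-targets.

```python
# Toy illustration (hypothetical names, numbers and thresholds) of treating
# druggability as ligandability constraints satisfied simultaneously across
# a panel: tight binding at the target, weak binding at anti-targets.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class LigandabilityConstraint:
    target: str
    acceptable: Callable[[float], bool]   # takes a pIC50-style affinity

def satisfies_panel(affinities: Dict[str, float],
                    panel: List[LigandabilityConstraint]) -> bool:
    """True only if every constraint in the panel is met (all targets measured)."""
    return all(c.acceptable(affinities[c.target]) for c in panel)

panel = [
    LigandabilityConstraint("primary_target", lambda p: p >= 8.0),  # want potency
    LigandabilityConstraint("hERG",           lambda p: p <= 5.0),  # want weak binding
    LigandabilityConstraint("CYP3A4",         lambda p: p <= 5.0),  # want weak binding
]

candidate = {"primary_target": 8.3, "hERG": 4.2, "CYP3A4": 4.8}
print(satisfies_panel(candidate, panel))   # True for this hypothetical candidate
```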

In practice of course things are very different since it may well be impossible to practically satisfy every single constraint leading to the abolition of a specific disease state. Since biological systems are emergent we could also get very unexpected feedback from this kind of perturbation that throws our predictions off. I also tend to agree with Derek that we can look through every SMILES string that we have and still not find the right molecule for addressing the multiple-ligandability problem. We can throw every single molecular library in the world at Ras and still not find anything worthwhile. But knowing what we do about the physical and chemical principles that govern biological systems, I doubt it would be so because we are dealing with the biological version of the halting problem, which Turing proved was undecidable. It could still be because we have simply not tried looking everywhere.

Gordon Conference impressions

What goes on at the Gordon Conference stays at the Gordon Conference, goes the saying. In keeping with this tradition I am not going to divulge the details of the science at my very first medicinal chemistry Gordon Conference. But that should not stop me from offering general comments and holding forth on some of the non-scientific aspects of the meeting.

The conference is always held in some scenic place; if you can convince your boss you could possibly make it to Lucerne, Switzerland or Milan, Italy; otherwise you might have to be content with traveling to one of the many small resort and college towns in New Hampshire. But have no fear; all these towns offer their own very scenic views and crystal clear weather.


The one big truth about the GRC is that it's all about interacting with people. The medicinal chemistry conference was in New London, NH, which is home to Colby-Sawyer College; this has been the venue since 1944. We were lucky to enjoy spectacular weather during the entire week. The conference is deliberately held in a small town away from the bright lights of a big city so that there are minimal distractions and participants can spend most of their time together. The interactions are amplified by having all participants stay together in one of the suites in the college dorms (with separate bedrooms and common bathroom space); take advantage of this fact and do get to know your roommates.


What really makes a Gordon Conference unique is the schedule. Mornings and evenings are filled with talks and poster sessions but afternoons are free. It feels a little strange at the beginning to attend talks from 7:30 PM to 9:30 PM followed by poster sessions until 11:30 PM but you get used to it. The poster sessions are where the most stimulating interactions usually happen, so you should definitely not miss these. The posters are also where the most interesting new information is divulged, so unlike the talks, you won't find copies of these reprinted in the folder which you get on the first day. In general you will find people sometimes saying things off the record, with the honest expectation that you won't divulge these details to the outside world.


One thing that becomes clear at a GRC is that there are people in their 60s and 70s who have been attending GRCs for decades. Naturally they are good friends and tend to hang out together. If you are a first-timer you might be slightly overwhelmed by what seem to be cliques, but one thing you will find is that even the old-timers are quite welcoming. Go ahead and introduce yourself to them and you will very likely end up having interesting and lively conversations. This is especially true during meals, where you may often end up at tables with strangers, many of whom will hopefully be friends and colleagues by the end of the meeting. The one thing you should not do at a GRC is keep to yourself, since that sort of defeats the whole purpose of the conference. Plus, how are you ever going to get to know people if you never start?


There is some kind of physical activity - hiking, kayaking, horse-back riding, cruises, soccer games - scheduled for every single afternoon. You can either join in or relax in your room, but joining in is strongly encouraged. I went on a hike and a walk and had a very productive conversation with a medicinal chemist from the UK. Even if you are more of the indoor types like me, don't miss these activities and the resulting conversations. It's your best chance to network and make connections.


Interestingly, a corollary of all this is that in one sense, the formal talks are the least interesting aspect of the meeting. Don't get me wrong; some of the talks were really great and they covered a very diverse smattering of topics, but their overall scope and content mirrored what you would find at other good meetings. Most scientists know that the most valuable conversations are the ones that occur outside formal talks, at the bar and during lunch and dinner, and the Gordon Conferences underscore this fact more than any other.


Which brings me to the food. You should avoid the Gordon Conference like the plague if you are trying to diet and lose weight. The GRC knows that scientists - who are still harboring traumatic memories of their time as graduate students - are suckers for good food. With this in mind the organizers put out a spread every day like no other I have seen at a scientific conference. There's an omelette station at breakfast and a stir-fry station at lunch. There's a different dessert for every meal and six different flavors of ice cream. Soda and chocolate milk flow like water. The poster sessions offer endless rounds of pizza and drinks. And you are surrounded by all this pretty much 24/7. You get to eat so much roast beef and lobster and mushroom ravioli that by the end of the week you are actually hungering for simple fare like oatmeal. Oh wait, they have four different kinds of that too...


The end result of this gastronomical and scientific cornucopia is a bunch of extremely well-fed and intellectually stimulated scientists. In this case, the talks themselves mirrored the astonishing diversity of medicinal chemistry (the topics are publicly listed so there's no harm in talking about them). It's interesting that even today, when you meet someone who calls themselves a "medicinal chemist", they are most likely to be a synthetic organic chemist. But I am a modeler, and yet I consider myself first and foremost a medicinal chemist. As the scope of the med chem GRC reveals, a conference on medicinal chemistry today includes all kinds of people: synthetic chemists, biochemists and molecular biologists, pharmacologists, chemical engineers, molecular modelers, physical organic chemists and even doctors. The list of topics, ranging from pain to high-throughput screening and from drug delivery to antibody-drug conjugates, makes it clear that at this point in time, "medicinal chemistry" essentially includes almost every discipline that could have an impact on drug discovery and development. Another thing that's evident from the list of speakers is the focus on biology; a lot of the talks are about biological assays and gene knockouts and target validation and synthetic biology. In keeping with scientific trends, it's clear that medicinal chemistry conferences henceforth are going to include a healthy amount of biology.


Overall the conference was very satisfying and stimulating. I think it's safe to say that in the end we all went away with a renewed appreciation of our discipline and of the good cheer and spirit that exists in our ranks in spite of today's troubled times. Most importantly, I think all of us were inspired to go back to our labs and computers and get on with the science and business of designing drugs, an endeavor that has real impact on real people's lives every single day. If you haven't been to the med chem or any other GRC I would strongly recommend it.