Field of Science

In praise of Small (and Cheap) Science

Originally posted on the Scientific American blog

The discovery of DNA structure was an outstanding example of Small Science (Image: Subversive Archeologist)
I am a big fan of Small Science. In spite of the riches unearthed by Big Science in biology and physics over the last fifty years, historically speaking much of scientific progress has come from small groups or individuals working with relatively cheap equipment and resources. Consider discoveries like the structure of DNA, the structures of proteins, nuclear fission, the cosmic microwave background radiation and the transistor: all of these were the fruits of Small Science. Even in those cases where large organizations supported these developments, the key findings themselves came from small groups left alone to pursue their own interests. The work done by these groups benefited from a maximum of flexibility and a minimum of bureaucratic interference.

However, there are some cases where Big Science is necessary for making discoveries. The LHC and the Higgs boson, the Human Genome Project and the ENCODE project are just three examples of areas where Big Science, with massive government support and billions of dollars, was necessary. These projects have uncovered invaluable insights into the workings of the universe and of living organisms. But they also run the risk of giving the impression that Big Science is here to stay and that Small Science will matter less in the future.

I happen to strongly disagree with this perspective, and it's nice to see Bruce Alberts, the editor of Science, expressing similar sentiments in an editorial this week. He points out at least three major biological challenges whose solutions will likely arise from Small Science: a continued investigation of unknown genetic and biological function, a continued investigation of protein structure, and a fuller understanding of emergent properties.

In all three of these areas, initial success is likely to be engendered by an emphasis on special cases rather than on general principles. Special cases are of more interest to small groups than to large organizations. At some point we will need general principles encompassing these special cases, but we are not there yet. For instance, at some point we will hopefully have a handle on the transitions between different layers of emergence, but the starting point for any such understanding will be the investigation of particular cases. When it comes to a deeper understanding of emergence in biology, we are in the same position that Darwin was in when he came back from his momentous voyage on the Beagle. By that time he had a comprehensive list of parts in the form of animal and plant collections and had a good idea of how they might relate to each other.

We are in a similar position regarding emergent properties. We now have a good understanding of individual levels of the biological hierarchy and some idea of how they might be connected. We know enough about atoms, molecules, genes, cells and organisms at their own levels. But like Darwin in 1836, we lack an overarching theory to put it all together. And just as Darwin spent the next twenty years examining individual cases like finch beaks, dog breeds and barnacle growth, so must we spend our time trying to figure out what lies between individual steps of the ladder. This is a task best suited to Small Science; hopefully in twenty years Big Science will be able to take over and provide us with a grand perspective, just as Darwin did in 1859.

To see why we need to support Small Science, it's worth undertaking a brief detour into the history of Big Science. The era of ‘Big Science’ in the United States began in the 1930s. Nobody exemplified this spirit more than Ernest Lawrence at the University of California, Berkeley, whose cyclotrons smashed subatomic particles together to reveal nature’s deep secrets. Lawrence was one of the first true scientist-entrepreneurs. He paid his way through college selling all kinds of things as a door-to-door salesman, and brought the same persuasive power a decade later to selling his ideas about particle accelerators to wealthy businessmen and philanthropists. Sparks flying off his big machines, his ‘boys’ frantically running around fixing miscellaneous leaks and shorts, Lawrence would proudly display his Nobel Prize-winning invention to millionaires as if it were his own child. The philanthropists’ funding paid off in at least one practical respect: it was Lawrence’s modified cyclotrons that produced the enriched uranium used in the Hiroshima bomb.

After the war, Big Science was propelled to even greater heights. With ever bigger particle accelerators needed to explore ever smaller particles at higher energies, science became an expensive proposition. The decades through the 70s were dominated by high-energy physics, which needed billion-dollar accelerators to verify its predictions. Fermilab, Brookhaven and, of course, CERN all became household names. Researchers competed for the golden apples that would sustain these behemoths. But one of the unfortunate fallouts of these developments was that good science started to be defined by the amount of money it needed. Gone were the days when a Davy or a Cavendish could make profound discoveries using tabletop apparatus. The era of molecular biology and the billion-dollar Human Genome Project further cemented everyone's faith in the fruits of expensive research.

This faith is not entirely misplaced, since there will always be endeavors that require large, multidisciplinary organizations and billions of dollars in funding. But these facts also create a bias in the minds of young scientists just entering the game. The past success of Big Science makes it appear that they must necessarily do expensive science in order to be successful. Part of this belief comes from the era of big accelerator physics and high-profile molecular biology noted above. But we don't have to look far to realize that this belief is flawed, and it has been demolished by physicists themselves: two years ago, the Nobel Prize in Physics was awarded to scientists who produced graphene by peeling off layers of it from graphite using good old Scotch tape. How many millions of dollars did it take to do that experiment?

Now one might argue (and many do) that the low-hanging scientific fruit accessible through simple experiments has largely been picked, and that picking the high-hanging fruit will necessarily be more expensive. But such a perspective is really in the eye of the beholder. As the graphene scientists proved, there are still fledgling fields like materials science where simple and ingenious experiments can lead to profound discoveries. Another fledgling field where such experiments can pay handsome dividends is neuroscience. Cheap research that provides important insights in this area is exemplified by the work of the neurologist Vilayanur Ramachandran, who has performed some of the simplest and most ingenious experiments on patients, using mirrors and other elementary equipment to unearth key insights into the functioning of the brain. Scientists like Ramachandran and Andre Geim have shown that if you find the right field, you can find the right simple experiment.

However, are university administrations going to come around to this point of view? Are they going to recruit a young researcher describing an ingenious tabletop experiment worth five thousand dollars, or are they going to go for one campaigning for a hundred thousand dollars' worth of fancy equipment? Sadly, the current answer seems to be that they would prefer the latter. Faculty appointments have turned into a kind of auction, where the professor potentially bringing in the largest grants and most expensive equipment is likely to win the bid. This has to change, not only because simple experiments and Small Science still hold the potential to provide unprecedented insights in the right fields, but also because the undue association of science with money misleads young researchers into thinking that more expensive is better. It threatens to undermine much that science has stood for since the Enlightenment. The function of academic scientists is to do high-quality research and mentor the next generation of scientist-citizens; raising money should come second. A scientist who spends most of his time securing funds ends up little different from a corporate lackey soliciting capital.

Fortunately there is hope on the horizon. First, Big Science is constrained by its very size and nature. Especially in an increasingly poor funding environment, the fortunes of Big Science will wax and wane while Small Science's will stay more or less constant. But the real revolution that will sustain Small Science is the one in open-source science, crowdsourcing and crowdfunding. Already we are seeing the value of crowdsourced projects in endeavors like InnoCentive. Crowdsourcing has been on powerful display in the success of initiatives like the game Foldit, where ordinary citizens pool their talents to solve thorny problems in protein folding and drug discovery. Each of these citizens is a unit of Small Science. What's remarkable is that the combined power of these units is equivalent to the capabilities harnessed by Big Science. With the increasing domestication of biotechnology and the plummeting cost of information retrieval and processing, ordinary citizens will find it easier than ever to collectively contribute to important scientific puzzles, provided the puzzles are pitched to them the right way. The one feature of Big Science that Small Science will borrow and raise to even greater heights is international collaboration, except that such collaboration will no longer be the exclusive province of scientific experts. In the future, anyone will be able to play with scientific tools and results.

As the twenty-first century progresses in key fields like neuroscience, cosmology and genomics, I have no doubt that Small Science, both in the form of small groups working with cheap equipment and citizen scientists pooling their talents, will continue to make great advances. Where Big Science will continue to falter and occasionally rise, Small Science will keep steadily humming along in the form of games, public challenges, free encyclopedias and open-access reports. The fruits of Small Science may occasionally be used by Big Science to uncover deep facts, but in doing this Big Science itself would have stood on Small Science's shoulders.

Are chemistry bloggers journalists? Eat the fruit, don't count the trees

There is an updated version of this post on my Scientific American blog so you may want to read that instead. Comments are of course welcome at both sites.

In his parting editorial for C&E News, Rudy Baum had the following words for science bloggers:
"Technology has profoundly changed journalism during my tenure with C&EN. Much of the change has been positive—who can imagine doing research on a topic without access to the Internet?—but the business model for journalism remains very much in a state of flux. The silly mantra, “Information wants to be free,” overlooks the fact that quality information requires effort, and effort costs money.  

Blogs are all well and good, they add richness to the exchange of information, but they are not journalism, and they never will be."

In addition, as Derek and Chembark have pointed out, the ACS's director of public affairs Glenn Ruskin said in another context that:

“We find little constructive dialogue can be had on blogs and other listservs where logic, balance, and common courtesy are not practiced and observed.”

I agree with Mr. Baum's thoughts about information not being free. However, I think information can be cheap, in fact very cheap, and it's this line of thinking that is the source of the campaign against publishers like Elsevier, who practice unfair "bundling" and sport huge profit margins. More importantly, I think there's at least some evidence to refute Mr. Baum's statement that "quality information requires effort, and effort costs money". Wikipedia is a resounding example of the fact that quality can come without money, through the efforts of millions of volunteers who contribute knowledge and information for a variety of reasons. Many articles on Wikipedia have been vetted by experts in their respective areas (including in a study by Nature) and have been found to contain high-quality information.

On the other hand, I find myself confused by Mr. Baum's thoughts on blogging. What exactly does he mean when he says "science blogs will never be journalism"? I see journalism defined mainly in three terms: news, opinion and analysis. As far as I am concerned, science blogs have contributed to each of these facets of journalism over the last decade or so. High-quality content not driven by money has been an outstanding feature of the chemical blogosphere.

Let's start with opinion. Opinion has always been a principal function of blogging; in fact that's why many of us started our blogs, to hold forth in all our self-important erudition on a variety of topics. As far as news is concerned, those of us who are reporting on the latest chemical breakthroughs, safety issues, chemical controversies and the human side of chemistry are communicating exactly the kind of news that magazines like C&E News report. I am not saying that magazines are not doing a good job of reporting the relevant news, just that bloggers can also be equal to the task.

And then there's analysis. I believe this is an area in which bloggers have been outstanding. Whether it's Derek Lowe analyzing the state of the pharmaceutical industry, Chemjobber analyzing the state of the job market, Paul analyzing the state of chemical publishing or SeeArrOh analyzing the state of chemophobia, I believe that bloggers have repeatedly adhered to the highest standards of fact-checking, careful thinking and clear exposition. Sure, we all make mistakes, but I think many of us can agree that when it comes to episodes like the sodium hydride "oxidation" or the structure of hexacyclinol, bloggers have been at the forefront of sounding the alarm and of meticulously charting the flaws, often before more "official" news sources scoop the story up. In some cases this analysis has been even more thorough and well-informed than the official sources'. Even a preliminary look at some of the major blog posts written by chemistry bloggers would convince Mr. Ruskin that "logic, balance and common courtesy" are not just alive but thriving in the chemical blogosphere.

The benefit of a magazine like C&E News is of course that all this information is in one place instead of being scattered across various sites, but this is hardly a general argument against the ability of blogs to do good science journalism. Perhaps what Mr. Baum means is that not all blogs contribute to journalism, but that's a far cry from saying that they can't and never will. Surely Mr. Baum is familiar with the high-quality service that veteran chemistry bloggers have provided over the last decade. Surely he is aware that members of his own very capable staff have often featured and linked to blog posts, both their own and others'. At the very least his opinion should have been tempered by a recognition of the good that has come out of chemistry blogging during the last few years.

I will leave you with an excellent post regarding this perceived distinction between science blogging and science journalism, written by Ed Yong, one of the most accomplished science bloggers around. I think Ed really hits the nail on the head in locating the source of the criticism of science blogs:

To an extent, I get why it’s played. I think people are rightly worried about their industry. As I said at the start: massive sinking ship. People see a profession in trouble, they want to save and protect it. They see these random interlopers trying to claim a stake and they think that it somehow devalues this noble thing that they’re trying to defend. I certainly agree that good journalism in all its forms is a necessary thing that is worth defending. But no one has ever saved something by playing with definitions. You protect journalism by trumpeting its values, criticising people who do it poorly and supporting those who do it well, regardless of the medium they happen to use. You won’t buoy up journalism through taxonomy.

Indeed, you don't buoy up journalism through conventional, narrow-minded classification. You buoy it up by recognizing high-quality content in your field, irrespective of the source. There's an old proverb which roughly says "Enjoy the fruit, don't count the trees". If the fruit is sweet and satisfying, do you really care where the trees come from and how many there are?

Ivano Bertini

Bertini with Harry Gray at Caltech (Image: NSMB)
I thought I should take note of the unfortunate fact that Ivano Bertini has passed away. I first heard of him when I came across the famous textbook on bioinorganic chemistry which he co-authored with Gray, Stiefel and Valentine. I think it's still the best introduction to the subject. After laying out the basic properties and abundances of inorganic ions in biology and the environment, it goes on to describe in careful detail the role of major metalloproteins in key biological processes. Each of these proteins is depicted as an elegant molecular machine performing metal-catalyzed room-temperature reactions with an efficiency that we chemists can only dream of accomplishing.

Bertini made very important contributions to the NMR structure elucidation of metalloproteins, solving more than 150 structures, an unprecedented record, during a distinguished career that was unfortunately cut short. Before his work, NMR of paramagnetic proteins was thought to be exceedingly difficult because of paramagnetic relaxation; this is the same reason why dissolved oxygen in an NMR sample precludes the acquisition of good NOE data, requiring the sample to be purged with nitrogen.

An obituary in Nature Chemical Biology (paywall) gives a good sense of both the man and his characteristically bold approach to science:

One could not avoid knowing Ivano: he was 'loud' in all senses. In a room full of people, his booming voice would always tell if he was around. He was also tall and large and would speak loudly to the heart of any new acquaintance, making himself unforgettable. He also had a loud love for science. In his office, he had a banner that said, “La scienza è come l'amore: non puoi non pensarci sempre” (Science is like love: you can't help thinking about it all the time)...
The story of the first solution structure of a paramagnetic protein is a typical example of Ivano's response to scientific challenges. In the early nineties, about ten years after the first protein NMR structure, it was implicit that paramagnetic relaxation prevented NMR analysis of paramagnetic proteins. 
On a midsummer Sunday at Ivano's country house, however, we were reading a recent review article by a well-known NMR spectroscopist that explicitly stated it would never be possible to solve NMR structures of paramagnetic proteins. Ivano said, “Do you believe it?” We said, “No.” He then said, “This is a project that will need the whole lab.” The next day, we were all at work; the paper was published 14 months later. With that work, a taboo had been broken, and many structures of paramagnetic proteins have been solved since then.

Metalloproteins continue to be of intense interest in chemistry, biology and medicine and we will all continue to benefit from Bertini's legacy.

On modular complexity and reverse engineering the brain

The Forbes columnist Matthew Herper has a profile of Microsoft co-founder Paul Allen, who has placed his bets on a brain institute whose goal is to map the brain...or at least the visual cortex. His institute is engaged in charting the sum total of neurons and other working parts of the visual cortex and then mapping their connections. Allen is not alone in doing this; there are projects like the Connectome at MIT that are trying to do the same thing (and the project's leader Sebastian Seung has written an excellent book about it).

Well, we have heard prognostications about reverse-engineered brains from more eccentric sources before, but fortunately Allen is not among those who believe that the singularity is around the corner. He also seems to have entrusted his vision to sane minds. His institute's chief science officer is Christof Koch, former professor at Caltech, longtime collaborator of the late Francis Crick and self-proclaimed "romantic reductionist", who started at the institute earlier this year. Just last month Koch penned a perspective in Science which points out the staggering challenge of understanding the connections between all the components of the brain; the "neural interactome", if you will. The article is worth reading if you want to get an idea of how simple numerical arguments illuminate the sheer magnitude of mapping out the neurons, cells and proteins that make up the wonder that is the human brain.

Koch starts by pointing out that calculating the interactions between all the components in the brain is not the same as computing the interactions between all atoms of an ideal gas, since the interactions are between different kinds of entities and are therefore not identical. Instead, he proposes, we have to use something called the Bell number Bn, which reminds me of the partitions that I learnt about when I was sleepwalking through set theory in college. Briefly, for n objects Bn is the number of ways they can be partitioned into groups (pairs, triples, quadruples and so on). Thus, when n = 3, Bn is 5. Not surprisingly, Bn scales faster than exponentially with n, and Koch points out that B10 is already 115,975. If we think of a typical presynaptic terminal with its 1000 proteins or so, Bn starts giving us serious heartburn. For something like the visual cortex, where n = 2 million, Bn would be inconceivable. Koch then uses a simple calculation based on Moore's Law to estimate the time needed for "sequencing" these interactions. For n = 2 million the time needed would be of the order of 10 million years. And as the graph on top demonstrates, for more than 10 components or so the amount of time spirals out of hand at warp speed.
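
Koch's numbers are easy to check for yourself. Here's a minimal Python sketch of the Bell number using the Bell-triangle recurrence; the values B3 = 5 and B10 = 115,975 quoted above drop right out. (The code is my own illustration, not anything from Koch's paper.)

```python
def bell(n):
    """Bell number B_n: the number of ways to partition a set of n objects."""
    row = [1]  # Bell triangle, seeded with B_0 = 1
    for _ in range(n):
        # Each new row starts with the last entry of the previous row
        new_row = [row[-1]]
        for entry in row:
            new_row.append(new_row[-1] + entry)
        row = new_row
    return row[0]  # the first entry of row n is B_n

print(bell(3))   # 5 ways to partition 3 components
print(bell(10))  # 115975, as Koch notes
print(bell(20))  # already 51724158235372; the growth is super-exponential
```

Even before reaching the 1000 proteins of a presynaptic terminal, the numbers become astronomical, which is the whole point of Koch's warning.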

This considers only the 2 million neurons in the visual cortex; it doesn't even consider the proteins and cells which might interact with the neurons on an individual basis. Looks like we can rapidly see the outlines of what Allen himself has called the "complexity brake". And this one seems poised to make an asteroid-sized impact on our dreams.

So are we doomed in trying to understand the brain, consciousness and the whole works? Not necessarily, argues Koch. He gives the example of electronic circuits, where individual components are grouped into separate modules. If you bunch a number of interacting entities together into a module, the complexity of the problem drops, since you now only have to calculate interactions between modules. The key question then is: is the brain modular, and how many modules does it contain? Common sense suggests it is modular, but it is far from clear how exactly we can define the modules. We would also need a sense of the minimal number of modules in order to calculate interactions between them. This work is going to take a long time (hopefully not as long as that for B2 million), and I don't think we are going to have an exhaustive list any time soon, especially since the modules will be composed of different kinds of components and not just one kind.
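
Even a crude pairwise count (ignoring Koch's full partition-based argument) shows why modularity helps so much. In this toy sketch, the 1000 proteins of a presynaptic terminal mentioned above are grouped into 10 modules; that number 10 is purely my hypothetical illustration:

```python
from math import comb

def pairwise_interactions(n):
    """Number of distinct pairs among n interacting entities: n*(n-1)/2."""
    return comb(n, 2)

components = 1000  # proteins in a typical presynaptic terminal (from the text)
modules = 10       # hypothetical grouping, for illustration only

print(pairwise_interactions(components))  # 499500 pairwise interactions
print(pairwise_interactions(modules))     # only 45 between modules
```

Of course you still have to work out what goes on inside each module, but the cross-module bookkeeping shrinks by four orders of magnitude in this toy example.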

Any attempt to define these modules is going to run into the problems of emergent complexity that I have occasionally written about. Two neurons plus one protein might behave differently from two neurons plus two proteins in unanticipated ways. Also, if we are thinking about forward and reverse neural pathways, I would hazard a guess that one neuron interacting with another in one direction may differ from the same interaction in the reverse direction. Then there's the more obvious problem of dynamics. The brain is not a static entity, and its interactions would reasonably be expected to change over time. This might interpose a formidable new barrier in brain mapping, since it may mean that whatever modules we define may not even be the same during every time slice. A fluid landscape of complex modules whose very identity changes every single moment could well be a neuroscientist's nightmare. Nevertheless, the goal of mapping modules seems far more attainable in principle than calculating every individual interaction, and that's probably the reason Koch left Caltech to join the Allen Institute in spite of the pessimistic calculation above. The value of modular approaches goes beyond neuroscience, though; similar thinking may provide insights into other areas of biology, such as the interaction of genes with proteins and of proteins with drugs. As an amusing analogy, this kind of analysis reminds me of trying to understand the interactions between different components in a stew: we have to appreciate how the salt interacts with the pepper, how the pepper interacts with the broth, and how the three of them combined interact with the chicken. Could the salt and broth be considered a single module?

If we can ever get a sense of the modular structure of the brain, we may have at least a fighting chance to map out the whole neural interactome. I am not holding my breath too hard, but my ears will be wide open.

Image source: Science magazine

First posted on the Scientific American blog "The Curious Wavefunction".

Kinetics in drug discovery: The neglected child?

A couple of articles appearing in the last few months brought my attention to a topic that medicinal chemists don't always think about and need to pay more attention to: the important role of kinetics in drug discovery, especially in its early stages.

Anyone involved in drug discovery knows the importance of the dissociation constant, which signifies the affinity of a therapeutic ligand for a protein. SAR around changing affinities (usually represented by Kd or IC50 values) drives lead design and optimization. But as some of the recent reviews note, the problem with this number is that it's a ratio of the off and on rates of binding of the ligand to the protein (Kd = koff/kon). A fast on rate and a fast off rate will give you the same number as a slow on rate and a slow off rate. But the two situations are not identical.

The most important point emphasized by these reviews is that slow off rates can sometimes lead to prolonged drug efficacy in ways that are not apparent from just the affinity. And this is quite logical if you consider that a slow off rate means that a ligand has a longer residence time in the protein's active site and is spending more time modulating its action. What this means in practice is that even compounds with relatively low affinities can have quite significant efficacies resulting from slow off rates.

So how do you modulate off rates? One good thing about off rates is that unlike on rates, they don't depend on concentration. The benefit of this is that you could have a compound which has a low concentration at the target site and is rapidly cleared away, but which nonetheless spends a lot of time in the protein and therefore provides good efficacy. The other good thing about off rates is that they are essentially dependent on the interactions between ligand and target. So you should in principle be able to improve them just by optimizing these interactions. Again, this won't result in better affinity if you are also slowing down on rates, but it might give you improved efficacy.
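
The distinction between affinity and residence time is easy to make concrete with a toy Python calculation. Both hypothetical ligands below have the same Kd = koff/kon of 10 nM, but their residence times (tau = 1/koff) differ a hundredfold; all the rate constants are invented for illustration:

```python
# Two hypothetical ligands with identical affinity but different kinetics.
ligands = {
    "A": {"k_on": 1e6, "k_off": 1e-2},  # fast on, fast off (M^-1 s^-1, s^-1)
    "B": {"k_on": 1e4, "k_off": 1e-4},  # slow on, slow off
}

for name, lig in ligands.items():
    kd = lig["k_off"] / lig["k_on"]  # dissociation constant, in M
    tau = 1.0 / lig["k_off"]         # residence time in the binding site, in s
    print(f"Ligand {name}: Kd = {kd:.0e} M, residence time = {tau:.0f} s")
# Both give Kd = 1e-08 M, but ligand B sits in the pocket 100x longer.
```

An affinity-only screen would rank these two compounds as equals, even though their pharmacology could differ substantially.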

In part these studies remind us that we need to clearly distinguish between terms like affinity, IC50, efficacy and the other vocabulary of lingua pharmaceutica. But they also ask an important question; why aren't pharmaceutical scientists paying more attention to kinetic measurements in the early stages of drug discovery? That this is indeed the case became apparent when I looked at the website BindingDB which lists key biological, thermodynamic and kinetic parameters for ligands bound to popular targets. One prominent pharmaceutical target that I looked at had 277 different ligand structures bound to it along with many cases where affinities, IC50s and even free energies had been measured. But out of those 277 I could find only 5 cases (less than 2%) where on and off rates had been recorded. Clearly this is not a focus in preclinical drug discovery.

But as the recent articles note, it should be. There are several cases of drugs - HIV protease inhibitors for instance - where differing efficacies for compounds with similar affinities essentially result from differing off rates and residence times. In fact, as illustrated by the blood-pressure-lowering drug amlodipine, off rates can mean the difference between a best-in-class drug and the second-best contender; amlodipine is a better drug than others partly because of its longer residence time in the pocket of the calcium channel protein which it inhibits.

The neglect of kinetic rate measurements reminds me of an almost equal neglect of thermodynamic measurements by isothermal titration calorimetry (ITC). As described in other articles, careful measurement of enthalpy and entropy (and not just free energy) can be very useful in early-stage drug discovery. This shouldn't be surprising at all; after all, kinetics and thermodynamics are the twin pillars of protein-ligand binding, and you neglect them at your own peril.

Biotechnology. Misunderstood.

David Kroll, veteran science blogger, educator and Director of Science Communications at the North Carolina Museum of Natural Sciences (NCMNS), is having a hard time convincing someone of the importance of biotechnology and of communicating it to the public. As a scientist working at a biotech company myself, I found the exchange of particular interest, and it also turned out to be rather disconcerting. In this case David's correspondent happens to be Ms. Laura Combs, a former state environmental agency employee writing at a blog that discusses diverse topics connected with medicine and the environment. Unfortunately that hasn't precluded her from inventing some rather strange ideas about the definition and scope of biotechnology.

Ms. Combs seems to be an engaged citizen who genuinely appreciates the work done by the NCMNS, and that makes her response even more perplexing. The incident started innocently enough with the NCMNS organizing a 'Biotechnology Day' that showcased biotechnology research for the public. I believe this is an exceedingly important endeavor that should be encouraged, especially in the face of the growing importance of biotechnology and genomics in our lives. The presentations were split evenly between people from industry, academia and the agricultural sector; again, an entirely fair split since these three sectors are where the majority of biotech research takes place. 

Unfortunately the inclusion of industry in the museum's events sparked a backlash from Ms. Combs, which resulted in a lengthy correspondence with David and others at the museum. What left me most nonplussed was Ms. Combs's definition of biotechnology as something antagonistic to the natural world and generally malevolent to humanity. Biotechnology, according to Ms. Combs, was not compatible with the "natural science" that the museum claimed to promote. Leaving aside the fact that the public dissemination of science should include all of science and not just "natural" science, this seems to be a common foundational misunderstanding on the part of biotech opponents, and it ignores a lot of things, starting with Darwin's great work "The Origin of Species".

Darwin kicked off his thoughts on natural selection in the first chapter of his book by reminding us of the artificial selection practiced on domestic animals and agricultural plants for thousands of years. Yes, all those people practicing artificial selection were doing 'biotechnology' even if they had no knowledge of genes. But the overarching point Darwin was getting at was that nature also practices biotechnology in the form of natural selection, and has in fact been doing so since the origin of life. This is probably the biggest mistake that biotech opponents make: thinking of biotech as a wholly human invention. All of natural selection that involves the selective retention and manipulation of genes and phenotypes is biotechnology. In addition, as David notes in his detailed reply, horizontal gene transfer has been one of the key driving forces of evolution. Again, biotechnology. And perhaps Ms. Combs would like to know that about 8% of our genome consists of sequences derived from retroviruses that were inserted during evolution. Thus, viruses were doing biotech with us long before we started doing biotech with them. The fact is that gene transfer and manipulation are natural processes that we have only very recently started to exploit. That also leads directly to Ms. Combs's criticism of GM foods. She is right to insist that presentations on the benefits of GM foods be balanced with their possible side-effects, but she also seems to fall prey to the more basic and flawed belief that GM foods are fundamentally new, man-made additions to the roster of biological species. They are not. Nature has been trying out GM foods for millennia.

As a biotechnology scientist myself I was particularly distressed by the response, since I happen to study a form of biotechnology that would not fit Ms. Combs's definition. My company uses the specific base pairing properties of DNA - one of its most amazing and fundamental features - to make drugs for cancer, psoriasis and other disorders. My research, which builds on an approach pioneered in an academic lab, has nothing to do with GM foods; I don't work for Big Pharma, and I am not manipulating anyone's genes. I am using entirely natural processes to help me find drugs for diseases which very palpably affect millions of people every year. In my case, nature is the entity that's allowing me to do biotechnology, and I find this fact fascinating. It is hard to see how Ms. Combs could be against the kind of research I am doing, but the major point, as David points out, is that biotech goes far beyond GM and into many areas of science like detergent manufacture, biodefense and health supplements. What I am doing is just one of its myriad manifestations.
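For readers wondering what "specific base pairing" means in practice, here is a toy Python sketch (purely illustrative, and emphatically not my company's actual method): because A always pairs with T and G with C, every DNA strand determines a unique complementary partner, and that predictability is what makes DNA such a reliable molecular building material.

```python
# Watson-Crick pairing rules: A-T and G-C.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    """Return the strand that pairs with `strand` in the double helix.

    The partner runs antiparallel, so we reverse the sequence and
    swap each base for its complement.
    """
    return "".join(PAIR[base] for base in reversed(strand.upper()))

# A hypothetical short DNA tag and its uniquely determined partner:
tag = "ATGCGT"
print(reverse_complement(tag))  # prints "ACGCAT"
```

The point of the toy example is simply that pairing is deterministic: given one strand, nature (and a chemist) knows exactly which strand will bind it, which is the property exploited across modern biotechnology.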

Unfortunately, this kind of valid debate about the definition or pros and cons of biotechnology is undermined by Ms. Combs's insinuations about the penalties paid by Big Pharma and its unethical practices. Ms. Combs erects a classic straw man by pointing out the huge fines paid by Monsanto, Pfizer, Bayer and others for false labeling, bribes to doctors and other transgressions - and nobody is defending these practices - but what on earth do these fines have to do with the scientific evidence for or against GM foods? This listing of pharmaceutical evils is completely tangential to the science of GM foods and smacks suspiciously of guilt by association. In her emphasis on equal time for critics of biotech, she also suggests the names of people who seem to be bona fide supporters of the vaccine-autism link. It's one thing to have a balanced debate, quite another to give voice to critics whose arguments are chiefly fueled by emotion and incomplete evidence rather than reason.

Finally, she is not impressed by the inclusion of academic presentations in the museum's events because she says that universities "receive significant funds from industry to support biotechnology research". That part is especially amusing since the biotechnology revolution was launched almost entirely by academic scientists like Fred Sanger, Paul Berg and Hamilton Smith as an offshoot of basic, curiosity-driven research about the natural world. And a moment's research would have convinced Ms. Combs that the scientific underpinnings of biotechnology have been almost entirely taxpayer funded...by taxpayers like herself.

I am not singling out Ms. Combs for her objections, and I do respect her general support of the museum and her regular visits to it. But she seems to have started a minor campaign on her blog to discredit the museum's attempts to help the public understand biotechnology. This is disappointing. David has responded in as much detail as possible to her emails, and anyone familiar with the tremendous and admirable work he has done for years in support of the public understanding of science will find ludicrous the allegation that he does not appreciate the merits of a balanced scientific debate.

What I would like to say to Ms. Combs is this: biotechnology has been with us since the origins of life, and recombinant DNA is only the latest incarnation of a process that started billions of years ago. To say that biotechnology is at odds with the natural world is to completely ignore the biotechnology that nature has always practiced and to proclaim that man is not a part of nature. But more importantly, whether you like it or not, biotechnology and genomics are poised to enter the public discourse in ways that we can't even imagine yet. Genomic medicine is on the threshold of impacting public health and policy in a big way, and it promises to create new drugs for major diseases and new diagnostics that will allow us to detect diseases like cancer earlier. Like other scientific developments, discoveries in the next few decades will make us confront novel social and moral issues. It's all biotechnology, knowledge that's based on the fundamental workings of the biological universe, and it will be upon us very soon. And as recent progress demonstrates, it will inevitably be developed by both academia and industry.

Would it have unintended consequences? Of course it would, like every other technology. But that is precisely the reason to publicize it as widely as possible, to make sure that the public is aware of the most cutting-edge research in the field. If you are suspicious of biotechnology, then you should be the first one to make sure that museums all around the country organize biotechnology days to discuss, debate and present. Just about the worst thing you can do with a topic you don't trust is to advocate that it not be discussed in a public forum.

2012 Nobel Prizes


Predicting the Nobel Prizes gets easier every year (I said predicting, not getting your predictions right) since there's very little you can add to the previous year's list, although there are a few changes; the Plucky Palladists can now happily be struck off the list. As before, I am dividing categories into 'easy' and 'difficult' and assigning pros and cons to every prediction. This is a revised and updated version of my list from last year. Paul has already kicked off the predictions.

The easy ones are those regarding discoveries whose importance is (now) 'obvious'; these discoveries inevitably make it to lists everywhere each year, and the palladists clearly fell into this category. The difficult predictions would either be discoveries which have been predicted by few others or ones that are 'non-obvious'. But what exactly is a discovery of 'non-obvious' importance? Well, one of the criteria in my mind for a 'non-obvious' Nobel Prize is one that is awarded to an individual for general achievements in a field rather than for specific discoveries, much like the lifetime achievement Academy Awards given out to men and women with canes. Such predictions are somewhat harder to make simply because fields are honored by prizes much less frequently than specific discoveries.

When predicting the Nobel Prize it's also prudent to be cognizant of discoveries whose recognition makes you go "Of course! That's obvious". Prizes for the charge-coupled device (CCD) (2009), the integrated circuit (2000) and in-vitro fertilization (2010) fall into this category.

Anyway, here's the N-list for chemistry:

Single-molecule spectroscopy (Easy)
Pros: The field has obviously matured and is now a powerful tool for exploring everything from nanoparticles to DNA. It’s been touted as a candidate for years. The frontrunners seem to be W E Moerner and M Orrit, although Richard Zare has also been floated often.
Cons: The only con I can think of is that the field might yet be too new for a prize.

Lithium-ion batteries (Moderately easy): Used in almost every kind of consumer electronics, lithium-ion batteries are also touted as the best battery alternative to fossil fuels. A great account is provided in Seth Fletcher’s “Bottled Lightning”. From what I have read in that book and other sources, John Goodenough, Stanley Whittingham and Akira Yoshino seem to be the top candidates, although others have also made important contributions and it may be hard to divide up the credit.


Computational chemistry and biochemistry (Difficult):

Pros: Computational chemistry as a field has not been recognized since 1998 so the time seems due. One obvious candidate would be Martin Karplus. Another would be Norman Allinger, the pioneer of molecular mechanics.

Cons: This would definitely be a lifetime achievement award. Karplus did do the first MD simulation of a protein ever but that by itself wouldn’t command a Nobel Prize. The other question is regarding what field exactly the prize would honor. If it’s specifically applications to biochemistry, then Karplus alone would probably suffice. But if the prize is for computational methods and applications in general, then others would also have to be considered, most notably Allinger but perhaps also Ken Houk who has been foremost in applying such methods to organic chemistry. Another interesting candidate is David Baker whose program Rosetta has really produced some fantastic results in predicting protein structure and folding. It even spawned a cool game. But the field is probably too new for a prize and would have to be further validated; at some point I do see a prize for biomolecular simulation.

Chemical genetics (Easy)
Another favorite for years, with Stuart Schreiber and Peter Schultz being touted as leading candidates.
Pros: The general field has had a significant impact on basic and applied science.
Cons: This again would be more of a lifetime achievement award, which is rare. Plus, there are several individuals in recent years (Cravatt, Bertozzi, Shokat) who have contributed to the field. It may make some sense to award Schreiber a 'pioneer' award for raising 'awareness', but that's sure to make at least some people unhappy. Also, a prize for chemical biology might be yet another one whose time has just passed.


Electron transfer in biological systems (Easy)
Pros: Another field which has matured and has been well-validated. Gray and Bard seem to be leading candidates.

NMR (Difficult): It’s been a while since Kurt Wuthrich won the prize for NMR. But it’s been even longer since a prize was awarded for methodological developments in the field (Richard Ernst). I don’t know enough about the field to know who the top contenders would be, but Ad Bax and Alexander Pines seem to have really made pioneering contributions. Pines especially helped launch the field of solid-state NMR which as a field certainly seems to deserve a Nobel at some point.

Among other fields, I don’t really see a prize for the long lionized birth pill and Carl Djerassi; although we might yet be surprised, the time just seems to have passed. Then there are fields which seem too immature for the prize; among these are molecular machines (Stoddart et al.) and solar cells (Gratzel). One promising candidate is Krzysztof Matyjaszewski whose work in ATRP has had a pronounced impact on the way polymers are made; this would neatly fit into the Nobel Prize’s requirement for work that is both fundamental and has “benefited humanity”.

MEDICINE/CHEMISTRY:


Nuclear receptors (Easy)
Pros: The importance of these proteins is unquestioned. I worked a little on NRs during my postdoc and remember being awed by the sheer diversity and ubiquity of these molecules in mediating key physiological functions. In addition they are already robust drug targets, with drugs like tamoxifen that hit the estrogen receptor making hundreds of millions of dollars. Most predictors seem to converge on the names of Chambon, Jensen and Evans and this prediction is definitely at the top of my list.

Chaperones (Easy)

Arthur Horwich and Franz-Ulrich Hartl just won this year’s Lasker Award for their discovery of chaperones. Their names have been high on the list for some time now.
Pros: Clearly important. Chaperones are not only important for studying protein folding at a basic level; in the last few years the malfunctioning of chaperones such as heat-shock proteins has also been shown to be highly relevant to diseases like cancer.

Cons: Too early? Probably not.

Statins (Difficult)

Akira Endo’s name does not seem to have been discussed much. Endo discovered the first statin. Although that particular compound never became a blockbuster drug, statins have since revolutionized the treatment of heart disease.
Pros: The “importance” as described in Nobel’s will is obvious since statins have become the best-selling drugs in history. It also might be a nice statement to award the prize to the discovery of a drug for a change. Who knows, it might even boost the image of a much maligned pharmaceutical industry...
Cons: The committee is not really known for awarding actual drug discovery. Precedents like Alexander Fleming (antibiotics), James Black (beta blockers, antiulcer drugs) and Gertrude Elion (immunosuppressants, anticancer agents) exist but are few and far between. On the other hand this fact might make a prize for drug discovery overdue.

Drug delivery (Difficult): A lot of people are pointing to Robert Langer for his undoubtedly prolific and key contributions to drug delivery. The field as a whole has not been recognized yet so the time may be ripe; from my own understanding of his contributions, Langer seems to me more of an all-rounder, although it may not be too late to single out some of his earlier discoveries, such as the first demonstration of the delivery of high molecular weight polymer drugs.



Cancer genetics (Easy): Clearly a very important and cutting-edge field. We still don’t know how much of an impact genomic approaches will ultimately have on cancer therapy since the paradigm is clearly evolving, but any history of the field will have to include Robert Weinberg and Bert Vogelstein. Vogelstein established the central role of p53, the “guardian of the genome”, in human cancer, while Weinberg discovered the first human oncogenes. In addition, both men have been prominent influences on the field as a whole. Given both the pure and applied importance of their work, their discoveries should fit the Nobel committee’s preferences like a glove.


Genomics (Difficult)
A lot of people say that Venter should get the prize, but it’s not clear exactly for what. Not for the human genome, which others would deserve too. If a prize was to be given out for synthetic biology, it’s almost certainly premature. Venter’s synthetic organisms from last year may rule the world, but for now we humans still prevail. On the other hand, a possible prize for genomics may rope in people like Caruthers and Hood who pioneered methods for DNA synthesis.

DNA fingerprinting (Easy):
Now this seems to me to be very much a field from the “obvious” category. The impact of DNA fingerprinting and Western and Southern blots on pure and applied science - everything from discovering new drugs to hunting down serial killers (and exonerating wrongly convicted ones; for instance check out this great article by Carmen Drahl in C&EN) - is at least as big as that of the prizeworthy PCR. I think the committee would be doing itself a favor by honoring Jeffreys, Stark, Burnette and Southern. And while we are on DNA, I think it’s also worth throwing in Marvin Caruthers, whose technique for DNA synthesis really transformed the field. In fact it would be nice to award a dual kind of prize for DNA - for both synthesis and diagnosis.

Cons: Picking three might be tricky.

Stem Cells (Easy)

This seems to be yet another favorite. McCulloch and Till are often listed. Unfortunately McCulloch died recently, so it would be a little unfair to award just Till. However such a thing is not unprecedented. For example, the psychologist Daniel Kahneman shared the 2002 Economics Nobel Prize with Vernon L. Smith. Left out was his long-time collaborator Amos Tversky, who had died in the 90s; it’s pretty much regarded as a given that Tversky would have shared the prize had he been alive.

Pros: Surely one of the most important biological discoveries of the last 50 years, promising fascinating advances in human health and disease.

Cons: Politically controversial (although we hope the committee can rise above this). Plus, a 2007 Nobel was awarded for work on embryonic stem cells using gene targeting strategies so there’s a recent precedent.

Membrane vesicle trafficking (Easy)

Pros: Clearly important. The last trafficking/transport prize was given out in 1999 (Blobel), so another one is due, and Rothman and Schekman seem to be the most likely candidates. Plus, they have already won the Lasker Award, which in the past has been a good indicator of the Nobel.

GPCR structures (Difficult)

When the latest GPCR structure (the first one of a GPCR bound to a G protein) came out I remember remarking that Kobilka, Stevens and Palczewski are probably up for a Nobel Prize sometime. In the last two years I have become convinced that they deserve it. Palczewski solved the first structure of rhodopsin and Stevens and Kobilka have been churning out structure after important structure over the last decade, including the first structure of an active receptor along with several medicinally important ones including the dopamine D3 and CXCR4 receptors. Kobilka topped it off early this year with another tour-de-force, the structure of the beta adrenergic receptor bound to its G-protein. The implications of these structures are far-reaching but the results are already being used by both pure and applied scientists to better understand GPCR function and design GPCR-targeting drugs.

Pros: GPCRs are clearly important for basic and applied science, especially drug discovery, where 30% of drugs already target these proteins.


Cons: Perhaps too early.

PHYSICS

I think it’s high time Anton Zeilinger, John Clauser and Alain Aspect got it for bringing the unbelievably weird phenomenon of quantum entanglement to the masses. Zeilinger’s book “Dance of the Photons” presents an informative and revealing account of this work.



I have also always wondered whether non-linear dynamics and chaos deserves a prize. The proliferation and importance of the field certainly seem to warrant one; the problem is that there are way too many deserving recipients (and Mandelbrot is dead). Among the pioneers, Feigenbaum, May and Yorke come to mind easily.