
How about a graphite ring for getting engaged?

Donald Sadoway is a professor of materials science and engineering at MIT. Over the last few years he has emerged as one of the most popular lecturers on campus; even Bill Gates has referred to him as a fantastic chemistry teacher. His course "Introduction to Solid-State Chemistry" is so much in demand that in 2007 it drew about 600 students and the school had to stream the lecture into another room. Fortunately, all his lectures, including the ones for 2010, are online. Sadoway is a great speaker and seems to have thought very carefully about what he wants to say in class. Definitely worth watching.

Check out his description of the structural differences between diamond and graphite and how they give rise to the radically different properties of the two substances. It's one of the clearest explanations of the difference I have heard.



At the end, Sadoway notes that since graphite, not diamond, is the most stable form of carbon at room temperature, wouldn't it make more sense to present your love interest with a graphite ring as a symbol of everlasting love? I think that makes sense, although my fiancée will probably disagree.

A limitless life



The Indian chemist Chintamani Nagesa Ramachandra Rao (known as C. N. R. Rao) is one of the foremost solid-state and materials chemists in the world. His output (more than a thousand papers and forty books) is phenomenal by most scientific standards, and he has been one of the founding fathers of the field over the last fifty years. There are very few living chemists in any field who have worked in such diverse areas. Rao's work has been recognized by several honors, including election to the Indian science academies, the Royal Society and the US National Academy of Sciences. Very few scientists have influenced Indian science in the last half century to the extent that he has. In his native city of Bangalore he is virtually worshipped by some; I have seen a traffic intersection named after him.

Rao has now written an autobiography in which he catalogs his life and times in chemistry. It's worth reading, especially if you want a glimpse of science in a developing country and the kind of effort it takes to do research in such a place.

Rao grew up in the 1940s in newly independent India, where the fledgling republic was striving to find its feet. India's first Prime Minister, Jawaharlal Nehru, was probably the most scientifically literate and ambitious of all the country's leaders and placed a premium on scientific and technological development; it was under his leadership that the Indian Institutes of Technology and many of the leading national laboratories were established. Rao did his undergraduate work at Banaras Hindu University in the holy city of Banaras, on the banks of the Ganges River. As a 19-year-old undergraduate he published his first paper, on electrical discharges, in Science. After graduation he applied to Linus Pauling for a PhD. However, Pauling was then vigorously engaged in deciphering the structure of proteins and was not involved with the kind of experimental physical chemistry that Rao was interested in. He referred Rao instead to John Livingston at Purdue University, a leader in electron diffraction.

After finishing his PhD at Purdue, Rao went to Berkeley for a postdoc, where he rubbed shoulders with the likes of Glenn Seaborg and Melvin Calvin, who had made Berkeley a Mecca for chemistry and physics. His scientific output was already outstanding (about 30 papers in leading journals) and it would have been easy for him to get a top faculty position in the US. However, Rao wanted to return to India, and he got a faculty appointment at the Indian Institute of Science (IISc). He also married Indu, who has been a great source of strength and wisdom for him ever since. Apart from a productive stint at the Indian Institute of Technology, Kanpur, Rao has spent his entire career at IISc and then at the Jawaharlal Nehru Centre for Advanced Scientific Research (JNCASR), which he founded.

The next part of the book is the most interesting. By that point (the late 1950s), chemistry had been revolutionized by two great developments. One was the invention of key instrumental techniques like NMR spectroscopy and x-ray diffraction. The other was the formulation of a theoretical framework for chemistry through quantum mechanics, pioneered by Pauling, Slater, Mulliken and others. These developments were virtually unknown in India and almost non-existent in the university curriculum. Along with a small band of other chemists, Rao was instrumental in establishing these modern chemical concepts in India. He did this firstly by being among the first to teach courses in quantum chemistry and spectroscopy, and secondly by founding a vigorous program of modern chemical research. He was certainly one of the few pioneers of modern chemistry in post-independence India; one is reminded of the American school of modern theoretical physics which Robert Oppenheimer founded at Berkeley in the 30s. Rao's perseverance in overcoming formidable obstacles like the lack of equipment and the Indian bureaucracy is noteworthy. He also made solid-state chemistry respectable at a time when work in that discipline was far from fashionable. His descriptions of the threadbare resources of Indian science and the efforts necessary to overcome them are intriguing and inspiring. It took a lot of courage, and was an enormous gamble, for Rao to establish his career in India during that time, especially when it would certainly have flourished anywhere in Europe or the US. But the gamble ultimately paid off, and it allowed Rao to make contributions with far greater social and national impact than any he could have made elsewhere.

So how does one do high-quality research in a resource- and cash-strapped developing country? Rao's approach is worth noting. He knew that the accuracy of the measurements he could make with the relatively primitive equipment in India could never compete with sophisticated measurements in Europe or the US. So instead of aiming for accuracy, Rao aimed for interesting problems. He would pick a novel problem or system where even crude measurements would reveal something new. Others might then perform more accurate measurements on the system, but his work would stand as the pioneering work in the area. This approach is worth emulating, especially by young scientists starting out in their careers: be problem-oriented rather than technique-oriented. Another key lesson from Rao's life is not to work in crowded fields; Rao would often contribute the initial important observations in a field and then move on as it was taken over by other scientists. This also keeps one from getting bored. Embodying this philosophy allowed Rao to work in a vast number of areas. He started with spectroscopic investigations of liquids, moved to inorganic materials and later worked extensively on organic materials. Among other things, he has made significant contributions to unraveling the structures and properties of transition metal oxides, ceramic superconductors and materials displaying giant magnetoresistance. All of these have special physical and chemical properties that are a direct result of their unique structures. Rao co-authored an internationally recognized book, "New Directions in Solid-State and Structural Chemistry", which encapsulates the entire field.

However, sometimes not having the right technique can prove debilitating. In the 80s, the world of science was shaken by the discovery of 'high-temperature' superconductivity in a ceramic material. Rao had in fact synthesized the exact same material, an oxide of copper, lanthanum and barium, more than fifteen years before. However, the compound becomes superconducting at about 30 kelvin and could be studied only at liquid-helium temperatures. Unfortunately Rao was unable to make measurements at this temperature because the only cryogen available in his laboratory was liquid nitrogen, which boils at 77 K. Had liquid helium been available, Rao might well have been the first person to observe superconductivity in this material. In 1987, the two IBM scientists who discovered the phenomenon were awarded the Nobel Prize.

The later parts of the book deal with Rao's experiences as a top government advisor and his relationships with several leading scientists, including Nobel laureates like Nevill Mott and Philip Anderson. He also laments the current state of science education in India, where most bright students prefer to study financially lucrative disciplines like information technology, business and medicine. The Indian middle class is still stuck in a peculiar frame of mind in which intelligence and achievement are necessarily measured by the amount of money you make. Understandably, many Indian middle-class parents who themselves grew up in relative poverty want their children to be financially successful. But as Rao says, this attitude is adversely affecting the scientific future of the country and siphoning off talent from science and technology research. For now, about the only solution to this problem is the infusion of funds into science education and research with a view to making these fields financially sustainable. Some steps in this direction have been taken with the establishment of the Indian Institutes of Science Education and Research (IISER), but much more needs to be done. Unfortunately, apart from stressing the importance of science and science education, Rao offers few thoughts on practical policies that could bring about such a change. This is probably the most disappointing part of the book, since Rao, with his enormous experience in Indian science and government, enjoys a unique vantage point and would have been the ideal guide to offer solutions and policy recommendations.

The book ends with some interesting appendices and reflections. One is a "Letter to a Young Chemist" in which Rao succinctly catalogs the excitement of solid-state and materials chemistry. Another essay, on science and spirituality, is again disappointing; while Rao clearly sees no conflict between the two, the essay is only two pages long and superficial. The last essay, "Science as a Way of Life", is a masterful exposition of the kind of attitude one needs to be a scientist and of the role of science in our society. Here Rao teaches by example. As his colleagues and friends attest, he has been completely dedicated to science throughout his life and demands the same unflinching commitment from his students and co-workers. He still spends almost every free minute in the lab and intends to follow the example of some of his scientific heroes in working until the last day of his life. While this intensity has often made him a demanding teacher and taskmaster, no one can accuse him of not walking the talk. Rao also talks about the international community of scientists and how it has helped him, and about the prejudices that still stand in the way of international cooperation, including the occasional racism he encountered at Purdue in the 50s, and how such barriers can be rapidly dissolved by the bonds of scientific kinship.

The great thing about science is that, like music and art, it is truly without boundaries and constitutes an international community. As Rao himself has demonstrated, excellence in science does not ask for one's nationality, religion, gender, sexual orientation or political views. All it asks for is an open mind, healthy skepticism, honest dedication and respect for knowledge and inquiry. As Rao's life exemplifies, cultivating these qualities can lead to a life that is extraordinarily rewarding and enriching.

Link: An extended video interview with Rao on the Vega Science Trust website, conducted by his friend, the chemist Anthony Cheetham of UCSB and Cambridge. The interview is worth watching and covers Rao's life, science, public service and home life.

Functional selectivity: Nature's Bach concerto

One of the great things about Bach's organ music is how changing a single note in a pattern can have rather dramatic effects on the sound. A similar, and potentially very important, phenomenon has recently been discovered in the area of GPCR (G protein-coupled receptor) research.

Our understanding of the basic process by which GPCRs transmit signals from the cell exterior to the interior has seen remarkable advances in the last three decades, but much remains to be deciphered. Until now, our knowledge of signaling responses hinged on the action of agonists and antagonists. Central to this knowledge was the concept of 'intrinsic efficacy'; according to this concept there was, for instance, no difference between two full agonists: both would produce the same response irrespective of the situation.

But this understanding failed to explain some observations. For instance, a full agonist could function as a partial agonist, or even as an inverse agonist, under different circumstances. Several such observations, especially in the context of GPCRs involved in neurotransmission, have forced a re-evaluation of the concept of intrinsic efficacy and led to an integrated formulation of a fascinating concept called 'functional selectivity'.

So what is functional selectivity? It is the phenomenon by which the same kind of ligand (agonist, antagonist etc.) can modulate different signaling pathways activated by a single GPCR, leading to different physiological responses. Functional selectivity thus opens up a whole new way of modifying GPCR signaling, and it constitutes a new layer of complexity and control that biological systems enforce at the molecular level to carry out complex signaling and homeostasis. Functional selectivity allows the 'tuning' of ligands on a continuum of properties, from agonism to inverse agonism, and in addition it can tightly regulate the strength of each particular property. It is what lets GPCRs function as rheostats rather than as binary switches, exercising a fine layer of biological control and discrimination.
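To make the conceptual shift concrete: in the old picture a ligand's action is captured by a single number, while functional selectivity assigns one number per pathway. Here is a minimal sketch in Python; the ligand names, pathway names and efficacy values are entirely hypothetical, chosen only to illustrate the data structure.

```python
# Classical picture: one scalar "intrinsic efficacy" per ligand,
# the same regardless of which downstream pathway you look at.
classical_efficacy = {"ligand_A": 1.0, "ligand_B": 0.4}  # full vs partial agonist

# Functionally selective picture: each ligand carries a profile,
# one efficacy per signaling pathway of the same receptor.
# (Names and numbers are hypothetical, for illustration only.)
selective_efficacy = {
    "ligand_A": {"G_protein": 1.0, "beta_arrestin": -0.3, "ERK": 0.2},
    "ligand_B": {"G_protein": 0.1, "beta_arrestin": 0.9, "ERK": 0.0},
}

# The same ligand can now act as a full agonist on one pathway (+1.0),
# an inverse agonist on another (-0.3) and nearly neutral on a third,
# which a single scalar simply cannot express.
for ligand, profile in selective_efficacy.items():
    print(ligand, profile)
```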

Functional selectivity is not just of academic interest; it can have clinical significance. Most tantalizingly, it may be one of the holy grails of pharmacology: a way to separate the beneficial and harmful effects of a drug, leading to Paul Ehrlich's 'magic bullet'. Until now, side effects have been predominantly thought to result from the lack of subtype specificity of drugs; morphine's side effects, for instance, are thought to result from its activation of the μ-opioid receptor. But functional selectivity could provide a totally new avenue for explaining, and possibly mitigating, drug side effects. Consider the dopamine receptor agonist ropinirole, used in the treatment of Parkinson's disease. Like several other D-receptor agonists, ropinirole interacts with several receptor subtypes. But unlike many of them, ropinirole does not produce the dangerous side effect known as valvulopathy, a damaging of the heart valves that leaves them stiff and inflamed. This potentially life-threatening condition seems to be facilitated by several dopamine agonists, but not by ropinirole. The cause is only now becoming clear: ropinirole is a functionally selective ligand that activates a pattern of second-messenger pathways different from those activated by other agonists, and somehow this pattern of pathways is responsible for reduced valvulopathy.

Let's go back to the organ/piano analogy to gauge the significance of such control. The sound produced by a piano depends on two variables: the exact identities of the keys pressed, and the intensity (how hard or softly you press them). The second variable can be as important as the first, since pressing a key particularly hard can drown out other notes and influence the very nature of the sound. In the analogy to functional selectivity, the keys are the different signaling pathways and the intensity of the notes is the strength of those pathways. Now, if one ligand binding to a single GPCR can activate a specific combination of these pathways, each with its own strength, think of the permutations and combinations you could get from a set of even a dozen pathways: an astonishing number (a quick count follows below). Thus, functional selectivity could be the key that unlocks the puzzle of how one ligand can set into motion such a complex set of signaling events and physiological responses. One ligand, one receptor, several pathways with differing strengths. An added variable is the concentration of certain second messengers in a particular environment or cell type, which could add even more combinations. This picture could go a long way toward explaining how we get such complex signaling in the brain from just a few ligands like dopamine, serotonin and histamine. And as described above, it also provides a fascinating direction, along with control of subtype selectivity (a much better-known and accepted cause), for developing therapies that demonstrate all the good stuff without the bad stuff.
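And the promised count, as a back-of-the-envelope calculation. The numbers are purely illustrative assumptions: a dozen pathways, each either off or activated at one of three strengths.

```python
from itertools import product

# Hypothetical numbers, chosen only to illustrate the scale:
# 12 pathways, each either off or activated at one of 3 strengths.
LEVELS = ["off", "weak", "medium", "strong"]
N_PATHWAYS = 12

# Each distinct assignment of a level to every pathway is one "chord"
# the receptor can play.
total = len(LEVELS) ** N_PATHWAYS
print(f"{total:,} distinct signaling states")  # 16,777,216

# Sanity check by explicit enumeration on a smaller example (4 pathways):
assert sum(1 for _ in product(LEVELS, repeat=4)) == len(LEVELS) ** 4
```

Nearly seventeen million distinct "chords" from one receptor and twelve pathways; that is the astonishing number the piano analogy is pointing at.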

The basic foundation of functional selectivity is just as tantalizing. Whatever the reasons for the phenomenon, its proximal cause has to involve the stabilization of different protein conformations by the same kind of ligand. Unravel these protein conformations and you would make significant inroads into unraveling functional selectivity. Come to think of it, this principle is not too different from the current model of conformational selection used to explain the action of agonists and antagonists in general, which involves the stabilization of certain conformations by specific molecules.

Nature never ceases to amaze. As we plumb its mysteries further, it reveals deeper, subtler and finer layers of control and discrimination that allow it to generate profound complexity from relatively simple events, such as the binding of a disarmingly simple molecule like adrenaline to a protein. And combined with the action of several proteins, the concerto turns into a symphony. We have been privileged to be in the audience.

Mailman, R., & Murthy, V. (2010). Ligand functional selectivity advances our understanding of drug mechanisms and drug discovery. Neuropsychopharmacology, 35(1), 345-346. DOI: 10.1038/npp.2009.117

Kelly, E., Bailey, C., & Henderson, G. (2009). Agonist-selective mechanisms of GPCR desensitization. British Journal of Pharmacology, 153(S1). DOI: 10.1038/sj.bjp.0707604

"The most ludicrous system ever devised"

That's Nobel laureate Harry Kroto on the peer-review system. Since winning his Nobel for fullerenes, Kroto has become a tireless promoter of science education and communication. This week's issue of Nature has a series of brief interviews with several Nobel laureates, one of the questions being about the peer-review system and whether it is optimal. Most laureates paraphrased Churchill's quip about democracy: a system full of flaws, yet better than the alternatives. But Kroto went one step further:
Many people consider the peer-review system broken. Do you share their view, and do you have a solution?

The peer-review system is the most ludicrous system ever devised. It is useless and does not make sense in dealing with science funding when history abounds with a plethora of examples that indicate that the most important breakthroughs are impossible to foresee.

The science budget should be split into three (not necessarily equal) parts and downloaded to departments. The local institutions, and not government departments, should disburse funding as they are close to the coalface and can decide what needs support and what is in the long-term interest of the department. There should be no research proposals on which to waste time.

One part should go to young people chosen by their universities as the researchers on which their institution's future will depend — they have done the work, why waste time doing it again when people have no time and are too far away from the coalface and in general do not have the relevant expertise?

The second part should go to a group whose most recent report was excellent. This is the racehorse solution — if a scientist has just done some great work, let her or him run again.
Although I would probably have eschewed such strong words, and I do sympathize with the other laureates' perspective, my heart is with Sir Harry. Revolutionary science has often been rejected by the peer-review system; it's worth noting that Enrico Fermi's paper on beta decay was rejected by Nature.

I have long believed in having a separate section in leading science journals devoted to "improbable" science: speculative, brain-tickling ideas flung out for contemplation by the rest of the community. The section should make it clear that such ideas have not been validated, but then that's true of any scientific idea when it is first conceived. I seriously believe that such sections would provide a lot of food for thought for researchers who are willing to go out on a limb. Maybe the published, incomplete ideas would meet readers' own ideas and be synthesized into a more coherent whole.

Now of course that does not mean that any crackpot idea deserves to be published; there certainly needs to be a minimum standard for acceptance. For this there could be a second kind of peer review, with reviewers who are more forgiving and more creative in judging the merit of a proposed concept. These reviewers would judge an idea not on the basis of its validation but on the basis of its novelty, novelty that is nonetheless grounded in sound basic principles of science (thus homeopathy would be instantly excluded). Such a two-tier system would provide an opportunity for publishing both "normal" science and potentially revolutionary science. An example that comes to mind is Luca Turin's novel idea that olfactory molecules are detected by vibration rather than shape. The idea certainly seemed grounded in basic physics and chemistry, and its publication would have pushed at least a couple of researchers to validate or disprove it. As it turned out, it was rejected by a leading journal after a long wait.

As Freeman Dyson says, the most important scientists are often "rebels" who speak out against the conventional wisdom. Their far-fetched-sounding pronouncements of today have often turned into the important discoveries of tomorrow. The least that science journals can do is give their ideas a worldwide platform.

In praise of cheap science

The era of 'big science' in the United States began in the 1930s. Nobody exemplified this spirit more than Ernest Lawrence at the University of California, Berkeley, whose cyclotrons smashed subatomic particles together to reveal nature's deepest secrets. Lawrence was one of the first true scientist-entrepreneurs. He had paid his way through college as a door-to-door salesman, and he brought the same persuasive power a decade later to selling his ideas about particle accelerators to wealthy businessmen and philanthropists. Sparks flying off his big machines, his 'boys' frantically running around to fix miscellaneous leaks and shorts, Lawrence would proudly display his Nobel Prize-winning invention to millionaires as if it were his own child. The philanthropists' funding paid off in at least one practical respect: it was Lawrence's modified cyclotrons, the calutrons, that separated the uranium-235 used in the Hiroshima bomb.

After the war, big science was propelled to even greater heights. With ever bigger particle accelerators needed to explore ever smaller particles, science became an expensive 'hobby'. The decades through the 70s were dominated by high-energy physics, which needed billion-dollar accelerators to verify its predictions. Fermilab, Brookhaven and, of course, CERN became household names, and researchers competed for the golden apples that would sustain these behemoths. But one unfortunate fallout of these developments was that good science started to be defined by the amount of money it needed. Gone were the days when a Davy or a Cavendish could make profound discoveries using tabletop apparatus. The era of molecular biology and the billion-dollar Human Genome Project further cemented this faith in the fruits of expensive research.

We are now seeing the culmination of this era of big physics and biology. In recent years, a university professor's worth has increasingly been measured by the amount of funding he or she brings in. Science, long a relentless search to uncover the mysteries of life and the universe, has been transformed into a relentless search for the perfect problem most likely to bag the biggest grant. Rather than focusing on the ideas themselves, the current system pushes researchers to prove their 'worth'. But the only true worth of a scientist is his quest and hunger for knowledge and his passion for transferring that knowledge to the next generation; all other metrics of worth are greatly exaggerated.

The accomplished chemist Allen Bard nails this problem in a C&EN editorial that castigates the current system for sacrificing the actual quality of research at the altar of the ability to bring in research funds. The editorial succinctly points out that in the race to secure these funds, scientists are often tempted to hype their research proposals, so that the end product is more smoke than fire. And of course, the biggest casualty is the education of further generations of scientists, the very people who are going to bring about the technological and scientific advances that make our world tick. The result of all this? Young people are dissuaded from going into academic science; if their worth is going to be judged mainly in dollars (and that too only after they turn 40), they might as well work for the private sector.

Now of course nobody is arguing against scientists filing patents or applying for large grants; money flowing in from these avenues can sustain further research, which today is on the whole more expensive. But as Bard's article makes clear, these activities are often becoming the primary, not the secondary, focus of universities. That goes against the spirit of research and undermines the very meaning of intellectual scholarship.

But most importantly, and Bard does not explicitly mention this, I think the current environment makes it appear to young scientists just entering the game that they must necessarily do expensive science in order to be successful. Part of this belief does come from the era of big accelerator physics and high-profile molecular biology. But the belief is flawed, and it has been demolished by physicists themselves: this year's Nobel Prize in Physics was awarded to scientists who produced graphene by peeling layers off graphite with good old Scotch tape. How many millions of dollars did it take to do that experiment?

Sure, the low-hanging scientific fruit accessible through simple experiments has largely been picked, but such a perspective is also in the eye of the beholder. As the graphene scientists proved, there are still fledgling fields like materials science where simple and ingenious experiments can lead to profound discoveries. Another field where such experiments can pay handsome dividends is neuroscience, another fledgling discipline. Cheap research that provides important insights is exemplified there by the neurologist V. S. Ramachandran, who has performed the simplest and most ingenious experiments on patients, using mirrors and other elementary equipment, to unearth key insights into the functioning of the brain. These scientists have shown that if you find the right field, you can find the right simple experiment.

Ultimately, few would doubt that cheap experiments are also more elegant; one derives much more satisfaction from simply mixing two chemicals together to generate complex self-assembled structures than from using the latest accelerator to analyze gigabytes of computer data, although the latter may also lead to exciting discoveries. The beauty of science still lies in its simplicity.

But as Bard's article suggests, are university administrations going to come around to this point of view? Are they going to recruit a young researcher proposing an ingenious tabletop experiment worth five thousand dollars, or one pitching a hundred thousand dollars' worth of fancy equipment? Sadly, the current answer seems to be that they would prefer the latter.

This has to change, not only because simple experiments still hold the potential to provide unprecedented insights in the right fields, but also because the undue association of science with money misleads young researchers into thinking that more expensive is better. It threatens to undermine everything that science has stood for since the Enlightenment. The function of academic scientists is to do high-quality research and mentor the next generation of scientist-citizens; raising money comes second. A scientist who spends most of his time securing funds is no different from a corporate lackey soliciting capital.

Science, which has nurtured and sustained our intellectual growth and contributed to our well-being for four hundred years, is like an eagle held aloft by the wind of creativity and skepticism. How can this magnificent bird soar if the wind fueling its flight and holding it high starts getting charged by the cubic centimeter?

What they found in the virtual screening jungle

If successful, virtual screening (VS) promises to become an efficient way to find new pharmaceutical hits, competitive with high-throughput screening (HTS). Briefly, VS computationally sifts through libraries of millions of compounds to find new and diverse hits, based either on similarity to a known active or on complementarity to a protein binding site. The former protocol is called ligand-based VS (LBVS) and the latter structure-based VS (SBVS). In a typical campaign, LBVS or SBVS is used to screen compounds, which are then ranked by how likely they are to be active. The top few percent of compounds are then actually tested in assays, validating the success or failure of the VS procedure (a bare-bones sketch of the ligand-based version follows below). VS has the potential to cut down on the time and expense inherent in the HTS process.
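To make the LBVS flavor concrete, here is a minimal Python sketch. Real campaigns use binary fingerprints (ECFP and the like) over millions of compounds; the feature sets and compound names below are hypothetical stand-ins, not any real screening library.

```python
# Minimal ligand-based virtual screening sketch (illustrative only).
# Molecules are represented as sets of structural "features"; real
# campaigns use binary fingerprints over millions of compounds.

def tanimoto(a: set, b: set) -> float:
    """Tanimoto similarity between two feature sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical feature sets, standing in for fingerprint bits.
query = {"aromatic_ring", "basic_amine", "hydroxyl", "ether"}
library = {
    "cmpd_001": {"aromatic_ring", "basic_amine", "hydroxyl"},
    "cmpd_002": {"carboxylic_acid", "aromatic_ring"},
    "cmpd_003": {"basic_amine", "hydroxyl", "ether", "aromatic_ring"},
}

# Rank the library by similarity to the known active...
ranked = sorted(library.items(),
                key=lambda kv: tanimoto(query, kv[1]), reverse=True)

# ...and send only the top few percent off to the actual assay.
top_fraction = 0.02
n_top = max(1, int(len(ranked) * top_fraction))
for name, feats in ranked[:n_top]:
    print(name, round(tanimoto(query, feats), 2))
```

The hope, as the review discusses, is that chemical similarity to the query translates into biological activity; everything that follows is about when that hope is and is not borne out.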

Unfortunately the success rate of VS has been relatively poor, ranging from a few tenths of a percent to no more than a few percent. If VS is to become a standard part of drug discovery, the factors that influence its failures and successes warrant thorough scrutiny. A recent review in J. Med. Chem. addresses some of these factors and raises some intriguing questions.

From a single publication with the phrase 'virtual screening' in 1997, the field has grown to about a hundred such papers every year. The authors pick about 400 successful VS results from three dominant journals in the field (J. Med. Chem., Bioorg. Med. Chem. Lett. and ChemMedChem), along with some from J. Chem. Inf. Model., and classify them into ligand-based and structure-based techniques. As mentioned before, VS comes in two flavors. LBVS starts from a known potent compound and looks for "similar" compounds (with dozens of ways of defining 'similarity'), in the hope that chemical similarity will translate into biological similarity. Structure-based techniques start with a protein structure: an x-ray crystal structure, an NMR structure or a structure built by homology modeling.

The authors look for correlations between the types of methods and various measures of success, and they make some interesting observations, a few of them rather counterintuitive. Here are a couple that are especially interesting.

While SBVS methods dominate, LBVS methods seem to find more potent hits, usually defined as those with activities below 1 μM. Why does this happen? The authors don't seem to dwell on this point, but I have some thoughts. Typically when you start a ligand-based campaign, your starting point is a bona fide, highly potent ligand; if you have a choice of ligands with a range of activities, you will naturally pick the most potent among them as your query. Now, if your method works, is it surprising that the hits you find based on this query will also be potent? You get out what you put in.

Contrast this with a structure-based approach. You usually start with a crystal structure containing a co-crystallized ligand. Co-crystallized ligands are usually, but not always, highly potent. The next step is to use a method like docking to find hits that are complementary to your protein binding site. But the binding site is conformationally pre-organized and optimized to bind its co-crystallized ligand. Thus, the ligands you screen will not be ranked highly by your docking protocol if they are ill-suited to the binding site; for instance, there could be significant induced-fit changes during their binding. Even in the absence of explicit induced fit, fine details like precise hydrogen-bonding geometries will greatly affect your score; after all, the protein binding site has hydrogen-bonding geometries tailored for optimally binding its cognate ligand, and if those geometries for your ligands are off even by a bit, the score will suffer. No wonder the hits you find span a range of activities; you are using a binding-site template that is not optimized to bind most of your ligands. The other reason that can thwart SBVS campaigns is simply that more work is needed to 'prepare' a crystal structure for docking. You have to add hydrogens to the structure, make sure all the ionization states are right and optimize the hydrogen-bonding network in the protein. If any one of these steps goes wrong, you will start with a fundamentally crappy protein structure for screening. This protocol thus usually requires expert inspection, unlike LBVS, where you just have to 'prepare' a single template ligand by making sure its ionization state and bond orders are correct. These differences mean that your starting point for SBVS is more tortuous and much more likely to be messy than it is for LBVS. Again, you get out what you put in.

The second observation the authors make is also interesting, and it bears on the protein-preparation step just mentioned. They find that VS campaigns in which putative hits are docked into homology models seem to find more potent hits than those using x-ray structures. This is surprising, since x-ray structures are supposed to be the most rock-solid starting points for docking. The authors speculate that the difference arises because building a good homology model requires a fair level of expertise; successful VS campaigns using homology models are thus likely to be carried out by experts who know what they are doing, whereas x-ray structures are more likely to be used by novices who simply run the docking with default parameters.

Thirdly, the authors note an interesting correlation between the potency and frequency of the hits found and the families of proteins targeted. GPCRs seem to be the most successfully targeted family, followed by enzymes and then kinases. To me this points to a crucial factor which the authors don't really discuss: the nature of the libraries used for screening. These libraries are usually biased by the preferences and efforts of medicinal chemists in making certain kinds of compounds. I already blogged about a paper that looked at the surprising success of VS in finding GPCR ligands and ascribed that success to 'library bias': libraries are sometimes 'enriched' in GPCR-active ligands, such as aminergic compounds. Ditto for kinases; kinase inhibitor-like molecules now abound in many libraries. This is partly due to the importance of these targets and partly due to the prevalence of synthetic reactions (like cross-coupling reactions) that make it easy for medicinal chemists to synthesize such ligands and populate libraries with them. It would have been very interesting for the authors to analyze the nature of the screened libraries; unfortunately such information is proprietary in industrial publications. But in the absence of such data, one has to assume that we are dealing with a fundamentally biased set of libraries, which would explain the selective target enrichment.

Finally, the authors find that most successful VS efforts have come from academia, while most of the potent hits have come from industry. This seems consistent with the role of the former in validating methodologies and that of the latter in discovering new drugs.

There are some caveats, as usual. Most of the studies don't include a detailed analysis of false positives and false negatives, since such analysis is time-consuming; yet it can be extremely valuable in truly validating a method. Standards for assessing the success of VS are also neither consistent nor universal, and they will have to be agreed upon for true comparisons. But overall, virtual screening seems to hold promise. At the very least there are holes and gaps to fill, and researchers are always fond of those.

Ripphausen, P., Nisius, B., Peltason, L., & Bajorath, J. (2010). Quo Vadis, Virtual Screening? A Comprehensive Survey of Prospective Applications. Journal of Medicinal Chemistry. DOI: 10.1021/jm101020z

New kid on the GPCR block

The CXCR4 GPCR structure has been solved by Raymond Stevens's group at Scripps. It joins the exclusive list of previously crystallized GPCRs: the beta-adrenergic receptors, rhodopsin and the adenosine A2A receptor.

This could be quite important for HIV drug discovery, since CXCR4 is a chemokine receptor expressed on the surface of lymphocytes that HIV uses as a co-receptor to gain entry into cells. People struggling with structure-based drug design against CXCR4 should be elated.

Welcome to the club, although many more members still have to be inducted.

What is 'elegance' in chemistry?

Physicists and mathematicians have their own notions of 'elegance', often tied to mathematical beauty. Paul Dirac was famous for insisting that equations be beautiful, and his own Dirac equation is a singular example: it can literally be written in half a line and yet completely explains the behavior of the electron, taking special relativity into account and predicting antimatter.
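For the record, here is that half a line, written in natural units (ħ = c = 1):

$$(i\gamma^\mu \partial_\mu - m)\,\psi = 0$$

One short line of symbols, and the electron's spin, its relativistic behavior and the existence of the positron all fall out of it.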

Chemistry is much more of an experimental science than physics, so does that mean notions of elegance are meaningless in chemistry? Not at all, although chemists need their own definitions.

Organic synthesis, which is as close to architecture as a science can get, has had a definition of elegance for a long time. Organic chemists will tell you that a complex synthesis is elegant when it can be accomplished in only a few steps, with maximum purity, yield and stereoselectivity, using the most benign reagents under the mildest of conditions. The Nobel laureate John Cornforth said it well: he defined the perfect synthesis as one that could be carried out by a one-armed operator pouring a mixture of chemicals down a drain and collecting the product at the other end in one hundred percent yield and stereoselectivity. Biomimetic reactions lend themselves particularly well to definitions of elegance; in such reactions you can get a truly elegant cascade of bond formation and breaking that installs several stereocenters in one or a few steps. Robert Robinson's synthesis of tropinone is a classic example. So is William Johnson's synthesis of progesterone through a stunning biomimetic cascade. I remember both of these examples leaving me awestruck as an undergraduate.

Other kinds of chemists dealing with synthesis would have similar definitions of elegance. But in the age of supramolecular chemistry, elegance is being defined in another way: through self-assembly. Traditional organic synthesis deals with the stepwise construction of complex molecules. In much of solid-state and structural chemistry, though, simple building blocks self-assemble into some of the most complex chemical architectures and nanomaterials merely upon mixing the reagents together. A great example is the beautiful structures that result from simply mixing sodium oxalate and calcium chloride under hydrothermal conditions. Solid-state and supramolecular chemists are harking back to the old days of chemistry, when you got interesting results just by combining simple chemicals in different proportions.

Whereas an organic chemist would define elegance in the context of yield, stereoselectivity and mild conditions, that definition would not be of much use to a biochemist studying enzymes, since it's child's play for virtually all enzymes to accomplish this goal every single moment of their existence. Carbonic anhydrase, nitrogenase and peroxidase are only three examples of enzymes that nonchalantly go about their business at room temperature with a devastating efficiency that would put an organic chemist to shame. For biochemists, this kind of elegance is passé. Or it's an everyday miracle, depending on how you look at it. To a biochemist, a signal transduction cascade, in which the binding of a single molecule causes a shower of precise protein-binding events involving dozens of biomolecules and usually culminates in gene expression, must seem like a true miracle. The binding of adrenaline to the beta-adrenergic receptor and the ensuing perfectly choreographed symphony of biomolecular music must surely seem elegant.

In other areas of chemistry, elegance may be trickier to define. In theoretical chemistry you can use quantum mechanics to describe a chemical system. Given a relatively simple system, it's possible to describe it extremely accurately using large basis sets and high levels of theory; you can get answers accurate to many decimal places. Indeed, in principle, quantum mechanics can describe all of chemistry.

Is it elegant? A first thought may be that it is, since you are using the most fundamental theory available in nature and achieving an unprecedented degree of accuracy. But consider that you can usually get the same result, accurate to fewer decimal places (but still quite accurate), using a judiciously parametrized force field. A force field is simply a set of terms describing bond stretching, bending, torsional angles, and van der Waals and electrostatic forces, together with a set of parameters usually drawn from experiment. Given a choice between reams of complex math and days of computer time on the one hand, and minutes of computer time and a blindingly simple molecular mechanics equation that you can write on the back of a cocktail napkin on the other (try this out on a girl or boy the next time you are at a party; they will be very impressed), which would you say is more elegant? Granted, the latter is parametrized with experimental measurements and is not as accurate as the former, but it's still good enough. More importantly, if simplicity is one of the hallmarks of elegance, then the latter approach is surely more elegant, isn't it?
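For the curious, here is that napkin equation in its generic form (the exact terms vary from force field to force field, but this is the standard shape):

$$E = \sum_{\text{bonds}} k_r (r - r_0)^2 + \sum_{\text{angles}} k_\theta (\theta - \theta_0)^2 + \sum_{\text{torsions}} \frac{V_n}{2}\left[1 + \cos(n\phi - \gamma)\right] + \sum_{i<j} \left( \frac{A_{ij}}{r_{ij}^{12}} - \frac{B_{ij}}{r_{ij}^{6}} + \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}} \right)$$

The terms map one-to-one onto the list above: bond stretching, angle bending, torsions, and, in the last sum, van der Waals plus electrostatic interactions between nonbonded atoms.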

Ultimately, what matters most is not elegance but the ability to discover new things. The one thing that sets chemistry apart is its ability to make new stuff that did not exist before. If chemists can find techniques that accomplish this goal more efficiently, they can be forgiven for not thinking too much about elegance.

After all, it was a famous physicist himself who once said, "Matters of elegance should be left to the cobbler and tailor...".

The Velvet Undergrounds of science

Over at the physics blog "Uncertain Principles", Chad Orzel has a nice meme. He talks about The Velvet Underground, a band which itself was not very popular but which influenced many other bands. Orzel then asks which scientists were the Velvet Undergrounds of their respective disciplines: individuals whose great achievements were not recognized during their lifetimes. He names Sadi Carnot.

I think there are two kinds of Velvet Undergrounds in science: those whose achievements were not recognized even by their peers until after they died, and those whose achievements were recognized by their peers during their lifetimes but never made their names publicly known. Here are a few I thought of. Do you know more?

In the first category:

Josiah Willard Gibbs for thermodynamics: He published his founding contributions in an obscure Connecticut journal.

Gregor Mendel for genetics: His contribution was famously rediscovered, independently, only after more than 30 years.

Ludwig Boltzmann (partially) for physics: His belief in the existence of atoms was ruthlessly attacked by some, including Ernst Mach.

George Price for evolutionary biology: His contributions to the theory of altruism were invaluable, yet he astonishingly died penniless and homeless on the streets of London.

Henrietta Swan Leavitt for astronomy: Her groundbreaking and backbreaking work on Cepheid variables was pivotal to Edwin Hubble's foundational research on the expanding universe.

Hugh Everett for physics: His many-worlds interpretation is now increasingly embraced as a way to get around wavefunction collapse and problems with the Copenhagen interpretation.

In the second category:

Bruno Zimm and Jack Dunitz for physical chemistry and crystallography: Zimm developed the Zimm-Bragg theory of helix-coil transitions; Dunitz inspired a generation of crystallographers (and is still alive, actually).

Norman Heatley for penicillin: He was the brilliant technician behind the commercial production of the miracle drug

Stanislaw Ulam for math: He was the dominant contributor to Monte Carlo methods.

Arnold Sommerfeld for physics: He had a tremendous educational impact on most of the leading quantum physicists of the early twentieth century

Carl Woese for microbiology: He identified a whole new tree of life, the Archaea

Robert Wilson for physics: He designed particle accelerators the way Frank Lloyd Wright designed buildings. A fine amateur architect himself, he was the driving force behind the aesthetically pleasing Fermilab

Stanley Miller: The father of modern origins-of-life research

S F Boys for chemistry: He invented the technique of using Gaussian orbitals to approximate Slater-type orbitals, a development that is at the root of all of ab initio quantum chemistry

Michael Dewar for chemistry: A brilliant man with a huge ego, he vastly influenced many branches of theoretical chemistry

Sidney Coleman for physics: He tremendously influenced a generation of theoretical physicists with his penetrating insight and criticism

Gilbert Newton Lewis for chemistry: The father of the shared-electron chemical bond

Frank Westheimer for chemistry: A founding father of bioorganic chemistry

Peter Mitchell: Mitchell won a Nobel Prize, but his extremely important chemiosmotic theory is virtually unknown to the public.

Graphene: Physics or Chemistry?

Akshat Rathi of "The Allotrope" pointed me to an Economist post on the graphene prize (by the way, MS Word still flags 'graphene' in its spell-check). The writer seems a little miffed that the graphene award, in his opinion a 'shoo-in' for the chemistry prize, was instead awarded to physicists, thus depriving chemists of their glory. I can imagine some chemists feeling similarly rebuffed, although they should now ironically anticipate a much more 'chemical' prize tomorrow.

I am finding all this extremely amusing. Until last year chemists were galled to find chemistry prizes being awarded to biologists, and this year they are unhappy because their prize has been appropriated by physicists? Note that this despondency seems rather limited to chemists; I haven't seen many physicists complain about their prize being awarded to biologists, or vice versa.

But to me this disappointment again resoundingly underscores what I have always maintained: that the unique cross-disciplinary nature of chemistry is precisely what makes it the 'central science', a field that straddles all of biology and physics. There can be no better tribute to chemistry than the fact that debates are ignited every single year about 'other' scientists treading across chemical boundaries. The din only proves that, just like sheer films of graphene, chemistry coats the surface of every other field of scientific inquiry and gives it a luminous glow.

The philosopher Karl Popper famously devised criteria for distinguishing "science" from "non-science" (and often from nonsense). I think he would have had a much harder time devising tests for separating "chemistry" from "non-chemistry". We chemists should be proud, every single one of us.

P.S. It's also worth noting one commenter's point on the post: unlike fullerene, graphene was from the beginning the domain of physicists, so a physics Nobel seems quite justified.

Graphene!

Pretty neat! But this could equally well have been a chemistry prize, which means the chemistry prize will presumably now not go to materials science.

One of this year's Nobel laureates, Andre Geim, also got the Ig Nobel Prize in 2000 for levitating frogs... from there to ultrathin, ductile graphene is by leaps and bounds quite a stretch (pun alert).

From the mouths of (test-tube) babes

Robert Edwards, the in-vitro fertilization pioneer, has won the 2010 Nobel Prize in Physiology or Medicine. It's a nice, overdue decision by the committee, another prize whose likelihood was missed by the predictors perhaps because it was so obvious. Edwards had also won the Lasker Award.