So it seems the Nobel speculations have started again. I have been making them for some years now, and this year at a meeting in Lindau in Germany I saw 23 Nobel laureates in chemistry up close, none of whom I had predicted would win the prize (except the discoverers of GFP, but that was a softball prediction).
As I mentioned in one of my posts from Lindau, predicting the prize for chemistry has always been tricky because the discipline spans the breadth of the spectrum of science, from physics to biology. The chemistry prize always leaves a select group of people upset; the materials scientists will crib about biochemists getting it, the biochemists will crib about chemical physicists getting it. However, as I mentioned in the Lindau post about Roger Kornberg, to me this selective frustration indicates the remarkable purview of chemistry. With this in mind, here goes another short round of wild speculation. It would of course again be most interesting if someone who was not on anybody's list gets the prize; there is no better indication of the diversity of chemistry than a failure to predict the winner.
1. Structural biology: Not many seem to have mentioned this. Ada Yonath (Weizmann Institute) and Venki Ramakrishnan (MRC) should definitely get it for their resolution of the structure of the ribosome. Cracking an important biological structure has always been the single best bet for winning the Nobel (the tradeoff being that you can spend your life doing it and not succeed, or worse, get scooped), and Yonath and Ramakrishnan would deserve it as much as, say, Roderick MacKinnon (potassium channel) or Hartmut Michel (photosynthetic reaction center).
2. Single-molecule spectroscopy: The technique has now come of age and fascinating studies of biomolecules have been done with it. W. E. Moerner and Richard Zare (Stanford) seem to be in line for it.
3. Palladium: This is a perpetual favorite of organic chemists. Every week I get emails announcing the latest literature selections for that week's organic journal club in our department, and without exception one or two of the papers feature some palladium-catalyzed reaction. Palladium is to organic chemists what gold was to the Incas. Heck, Suzuki and perhaps Buchwald should get it.
4. Computational modeling of biomolecules: Very few computational chemists get Nobel Prizes, but if anyone should get it, it's Martin Karplus (Harvard). More than anyone else he pioneered the use of theoretical and computational techniques for studying biomolecules. I would also think of Norman Allinger (UGA), who pioneered force fields and molecular mechanics, but I don't think the Nobel committee considers that work fundamental enough, although it is now a cornerstone of computational modeling. Another candidate is Ken Houk (UCLA), who more than anyone else pioneered the application of computational techniques to the study of organic reactions. As my past advisor, who once introduced him at a seminar, quipped, "If there's a bond that is broken in organic chemistry, Ken has broken it on his computers".
Other speculations include work on electron transfer in DNA, pioneered especially by Jacqueline Barton (Caltech). However, I remember more than one respectable scientist saying that this work is controversial. On a related note, there is one field which has not yet been honored:
5. Bioinorganic chemistry: The names of Stephen Lippard (MIT) and Harry Gray (Caltech) come to mind. Lippard has cracked many important problems in metalloenzyme chemistry, Gray has done some well-established and highly significant work on electron transfer in proteins.
So those are the names. Some people are mentioning Michael Grätzel for his work on solar cells, although I personally don't think the time is ripe for recognizing solar energy. Hopefully the time will come soon. It also seems that Stuart Schreiber is no longer on many of the lists. I think he still deserves a prize for really being the pioneer in investigating the interaction of small organic and large biological molecules.
As for the Medicine Nobel, from a drug discovery point of view I really think that Akira Endo of Japan, who discovered statins, should get it. Although the important commercial statins were discovered by major pharmaceutical companies, Endo not only painstakingly isolated and tested the first statin but was also among the first to propound the importance of inhibiting HMG-CoA reductase, the key enzyme in cholesterol biosynthesis. He seems to deserve a prize just as Alexander Fleming did, and just like penicillin, statins have saved millions of lives.
Another popular candidate for the medicine Nobel is Robert Langer of MIT, whose drug delivery methods have been central to the widespread application of controlled drug delivery. A third good bet for the medicine prize is Elizabeth Blackburn, who did very important work in the discovery of telomeres and telomerase. Blackburn is also a warm and highly ethical woman who was bumped off Bush's bioethics council for her opposition to restrictions on stem cell research; she wears the episode as a badge of honor, and you can read about it and other interesting aspects of her life in her biography.
And finally of course, as for the physics prize, give it to Stephen Hawking. Just give it to him. And perhaps to Roger Penrose. Just do it!!
Update: Ernest McCulloch and James Till also seem to be strong candidates for the Medicine prize for their discovery of stem cells. They also won the Lasker Award in 2005, which has often been a stepping stone to the Nobel. McCulloch seems to be 83, so now might be a good time to award him the prize.
For chemistry, Benjamin List also seems to be on many lists for his work in organocatalysis, but I personally think the field may be too young to be recognized.
Another interesting category for the physics prize seems to be quantum entanglement. Alain Aspect, who performed the crucial experimental validations of Bell's theorem, definitely comes to mind. Bell himself almost certainly would have received the prize had he not died an untimely death from a stroke.
Previous predictions: 2008, 2007, 2006
Other blogs: The Chem Blog, In The Pipeline
First potential HIV vaccine
This just came off the press:
A new AIDS vaccine tested on more than 16,000 volunteers in Thailand has protected a significant minority against infection, the first time any vaccine against the disease has even partly succeeded in a clinical trial...Col. Jerome H. Kim, a physician who is manager of the army's H.I.V. vaccine program, said half the 16,402 volunteers were given six doses of two vaccines in 2006 and half were given placebos. They then got regular tests for the AIDS virus for three years. Of those who got placebos, 74 became infected, while only 51 of those who got the vaccines did. Results of the trial of the vaccine, known as RV 144, were released at 2 a.m. Eastern time Thursday in Thailand by the partners that ran the trial, by far the largest of an AIDS vaccine: the United States Army, the Thai Ministry of Public Health, Dr. Fauci's institute, and the patent-holders in the two parts of the vaccine, Sanofi-Pasteur and Global Solutions for Infectious Diseases.

However, this also came off the same press:
Scientists said they were delighted but puzzled by the result. The vaccine — a combination of two genetically engineered vaccines, neither of which had worked before in humans — protected too few people to be declared an unqualified success. And the researchers do not know why it worked...The most confusing aspect of the trial, Dr. Kim said, was that everyone who did become infected developed roughly the same amount of virus in their blood whether they got the vaccine or a placebo. Normally, any vaccine that gives only partial protection — a mismatched flu shot, for example — at least lowers the viral load.

Nevertheless, after a decade of failures, at least it's a definite starting point scientifically.
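To put the quoted numbers in rough perspective, here is a minimal back-of-the-envelope sketch in Python. It assumes the two arms were split exactly evenly (about 8,201 volunteers each), which is only approximately true, so treat it as an illustration of where the "significant minority" claim comes from rather than the trial's own analysis.

```python
# Rough look at the RV 144 numbers quoted above (assumes equal arm sizes).
from scipy.stats import fisher_exact

n_per_arm = 16402 // 2          # ~8,201 volunteers per arm (assumption)
placebo_infected = 74
vaccine_infected = 51

# Crude efficacy estimate: 1 - (attack rate with vaccine / attack rate with placebo)
efficacy = 1 - (vaccine_infected / n_per_arm) / (placebo_infected / n_per_arm)
print(f"Point estimate of efficacy: {efficacy:.1%}")   # roughly 31%

# Fisher's exact test on the 2x2 table (infected vs. not infected)
table = [[vaccine_infected, n_per_arm - vaccine_infected],
         [placebo_infected, n_per_arm - placebo_infected]]
oddsratio, pvalue = fisher_exact(table)
print(f"Odds ratio {oddsratio:.2f}, p = {pvalue:.3f}")
```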
The emerging field of network biology
One of the things I have become interested in recently is the use of graph theory in drug discovery. I took a class in graph theory during my sophomore year, and while I have forgotten some of the important things from that class, I am reviewing that material from the excellent textbook that was recommended then, Alan Tucker's "Applied Combinatorics", which covers both graph theory and combinatorics.
The reason graph theory has become exciting in drug discovery in recent times is the rise of the 'systems biology' paradigm. Many purists cringe when they hear this term, seeing it as simply a fancy name for an extension of well-known concepts. However, labeling a framework does not reduce its utility, and in this context the approach is better named 'network biology'. What makes graph theory tantalizing is the large networks of interactions between proteins, genes, and drugs and their targets that have been unearthed in the last few years. These networks can be viewed as abstract graphs, and predictions based on the properties of those graphs could help us understand and predict biology. This kind of 'meta' thinking, which was previously not really possible for lack of data, can unearth interesting insights that may be missed by looking at individual molecular interactions alone.
The great power and beauty of mathematics has always been its ability to explain many diverse phenomena with a few simple principles and equations. A graph is simply a collection of vertices or nodes (which can represent molecules, proteins, actors, internet web pages, bacteria, social agents, etc.) connected by edges (which represent interactions between the vertices). In network analysis this power manifests in a singularly interesting observation: many diverse networks, from protein-protein interaction networks to the internet to academic citation networks, are scale-free. As the name indicates, scale-free networks exhibit a topology that is independent of scale. Mathematically, their defining property is that they follow a power law; that is, the probability of any node having k connections goes as k raised to some power -γ, where γ is usually a number between 2 and 3.
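To make the nodes-and-edges abstraction concrete, here is a tiny toy example in Python using networkx. The handful of drug-target pairs are well-known examples chosen purely for illustration, not data from any of the studies mentioned here.

```python
# Toy drug-target interaction network, just to illustrate the graph abstraction.
import networkx as nx

G = nx.Graph()
# Each edge connects a drug to a protein target it binds (illustrative picks)
interactions = [
    ("imatinib", "ABL1"), ("imatinib", "KIT"),
    ("aspirin", "PTGS1"), ("aspirin", "PTGS2"),
    ("gefitinib", "EGFR"),
]
G.add_edges_from(interactions)

# The degree of a node is the number of interactions it participates in
for node, degree in sorted(G.degree, key=lambda pair: -pair[1]):
    print(node, degree)
```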
Thus, P(k) ~ k^-γ
The scale-free property has been observed for a remarkable number of networks, from the internet to protein-protein interactions. It is counterintuitive, since one would expect the number of connections to follow a normal or Poisson-like distribution, with P(k) falling off more or less exponentially with k and highly connected nodes being vanishingly rare. The scale-free property, however, leads to a valuable insight: there are far more densely connected nodes, or 'hubs', than a random network would predict. This can have huge implications. For instance, it could allow us to predict which hubs in the internet would be most vulnerable to attack. In the study of protein-protein interactions, it could tantalizingly allow us to predict which protein or set of proteins to hit in order to disrupt the maximum number of interactions. A recent study of the network of FDA-approved drugs and their targets suggests that this network is scale-free; this could mean that there is a privileged set of targets which are heavily connected to most drugs. Such a study could point both to targets that could be more safely hit and to new, sparsely connected targets that could be productively investigated. Any such speculation can of course only be guided by data, but it would be much harder to engage in without the big-picture view afforded by graphs and networks.
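For readers who want to play with this, here is a minimal sketch using networkx's Barabási-Albert generator, one standard preferential-attachment model that produces an approximately power-law degree distribution (with γ close to 3 in the large-network limit). It is only an illustration; deciding whether a real biological network is genuinely scale-free requires careful fitting, for the reasons discussed next.

```python
# Generate a preferential-attachment graph and look at its heavy-tailed degree distribution.
import networkx as nx
from collections import Counter

G = nx.barabasi_albert_graph(n=10_000, m=2, seed=0)
degrees = [d for _, d in G.degree]

# Empirical P(k) for the smallest degrees
counts = Counter(degrees)
total = len(degrees)
print("  k    P(k)")
for k in sorted(counts)[:10]:
    print(f"{k:3d}  {counts[k] / total:.4f}")

# A handful of hubs carry a disproportionate share of the connections
print("Top-5 hub degrees:", sorted(degrees, reverse=True)[:5])
```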
However the scale-free property has to be very cautiously inferred. Many networks which seem scale-free are subnetworks whose parent network may not be scale-free. Conversely a parent network that is scale-free may contain a subnetwork that is not scale-free. The usual problem with such studies is the lack of data. For instance we have still plumbed only a fraction of the total number of protein-protein interactions that may exist. We don't know if this vast ultimate network is scale-free or not. And of course, the data underlying such interactions comes from relatively indirect methods like yeast two-hybrid or reporter gene assays and its accuracy must be judiciously established.
But notwithstanding these limitations, concepts from network analysis and graph theory are having an emerging impact in drug discovery and biology. They allow us to consider a big picture view of the vast network of protein-protein, gene-gene, and drug-protein and drug-gene interactions. There are several more concepts which I am trying to understand currently. This is very much a field that is still developing, and we hope that insight from it will serve to augment the process of drug discovery in a substantial way.
Some further reading:
1. A great primer on the basics of graph theory and its applications in biology. A lot of the references at the end are readable
2. Applied Combinatorics by Alan Tucker
3. Some class notes and presentations on graph theory and its application in chemistry and biology
4. A pioneering 1999 Science paper that promoted interest in scale-free networks. The authors demonstrated that several diverse networks may be scale-free.
One day left before the singularity
From a January 2009 post on another blog which I forgot to cross-post here:
As I have discussed with friends often, the reason why we start liking certain sitcoms so much is not just because they remain intrinsically funny, but because we gradually start to become friends with the characters and anticipate their actions and words. That certainly happened with Seinfeld. With me that also happened with F.R.I.E.N.D.S., at least till the sixth season. But for this to be so, the lines have to be genuinely creative and witty and most importantly, actors have to inhabit the characters almost perfectly, which seldom happens.
Now it seems to be happening again. The Big Bang Theory has enormously entertained me since it kicked off last year. While it may seem that it would appeal only to science-type nerds, it has the potential to get a much more general audience hooked. The premise is not entirely novel but is beautifully packaged. Two brilliant physicists at Caltech, Sheldon Cooper and Leonard Hofstadter, live a life of total nerdiness: speaking in nerd-speak all the time, analyzing every statement literally, playing Halo every Thursday, collecting tons of action figures, and attending role-playing medieval games. Needless to say, their social ineptness scales as the nth power of ten. Leonard (an experimentalist) is a little more normal, while Sheldon (a theoretician), the most brilliant of them all, is infinitely annoying and exasperating, being unable to understand simple linguistic devices like sarcasm and metaphor in spite of (or because of) his incandescent brilliance.
Their total lack of social tact and immersion in science is only reinforced by their two best friends, also from Caltech: Howard Wolowitz, an obsessive womanizer and total loser who in spite of his repeated failures will never stop trying to get every attractive woman into bed with him, and an Indian named Raj Koothrappali who is so scared of attractive women that he can't talk in front of them...unless he is drunk. Between Howard trying to fend off his mother, with whom he lives, and Raj trying to fend off his parents, who are hell-bent on getting him into an arranged marriage, the four friends usually hang out at Sheldon and Leonard's apartment, stay away from most human beings, have lunch every day at the university cafeteria, and in general exemplify the epitome of nerdiness.
But things change dramatically when an attractive, not-so-bright (but often having more common sense than the geniuses) blonde named Penny moves in next door to Sheldon and Leonard. How their state of equilibrium is suddenly disturbed by this violent perturbation and how the event has several manifestations of various kinds is the general subject of the episodes. Throw in ample science-speak, the typical lives of awkward geniuses, fun at physics quizzes and desperate dates gone embarrassingly bad, and you have one entertaining sitcom.
It's been a while since I was this entertained. As I noted before, the series works because of clever lines and the actors' ability to almost perfectly inhabit their characters, so that they gradually form a distinct identity in your mind that allows you to start appreciating and anticipating their tics and lines. Hopefully the series will find a large enough fan following to continue playing for a long time.
The Big Bang Theory plays at 9:30 p.m. on Mondays on CBS.
Missile shield to be scrapped
It's a great day. This piece of news makes me feel extremely gratified, as I am sure it does many others. Missile defense against ICBMs has been an eternal bug that has bitten almost every President since 1960. The Bush administration had aggressively pushed plans to implement a missile shield in Poland and the Czech Republic. There has always been evidence that the efficacy of such a shield will ultimately be severely limited by the basic laws of physics, and that an adversary can easily and cheaply overwhelm the defense with decoys and countermeasures.
I have written about these limitations and studies about them several times before (see below). The best article arguing against the European missile shield is a May 2008 article by Theodore Postol and George Lewis in the Bulletin of the Atomic Scientists (free PDF here).
And as arms expert Pavel Podvig succinctly wrote in the Bulletin of the Atomic Scientists only three days back, it's not just about the technology, but it's about a fundamentally flawed concept:
"The fundamental problem with the argument is that missile defense will never live up to its expectations. Let me say that again: Missile defense will never make a shred of difference when it comes to its primary mission--protecting a country from the threat of a nuclear missile attack. That isn't to say that advanced sensors and interceptors someday won't be able to deal with sophisticated missiles and decoys. They probably will. But again, this won't overcome the fundamental challenge of keeping a nation safe against a nuclear threat, because it would take only a small probability of success to make such a threat credible while missile defense would need to offer absolute certainty of protection to truly be effective...It's understandable that people often talk about European missile defense as one of the ways in which to deal with the missile threat posed by Iran. Or that someday missile defense could provide insurance for nuclear disarmament--this is the vision that Ronald Reagan had. When framed in this way, missile defense seems like a promising way out of difficult situations. But this promise is false. If a real confrontation ever comes about (and let's hope it never happens), we quickly would find out that missile defense offers no meaningful protection whatsoever".Now the Obama administration has decided to scrap the unworkable shield and has decided to replace it with a much more realistic defense against short-range missiles. I cannot imagine how gratified this must make the scores of scientists, engineers and policy officials who have long argued against the feasibility of the shield. It also signals a huge shift in Bush-era foreign policy. Notice how the administration has diplomatically and shrewdly avoided mentioning the basic failures of the earlier system.
Unfortunately, the sordid history of missile defense and the inherent satisfaction that seems to come from arguing for a "shield" to protect the population make me skeptical that the concept is dead forever. But for now, there is peace in our time, and this is a significant breakthrough.
Past posts on missile defense:
Made For Each Other
Missile Defense: The Eternal Bug
Holes in the Whole Enterprise
Czechs halt missile shield progress
First pentacene, now this
It just seems to get better. Last week a stellar AFM picture of pentacene, resolving the molecule's structure with unprecedented detail, made the news. This week the view gets even deeper:
The pictures, soon to be published in the journal Physical Review B, show the detailed images of a single carbon atom's electron cloud, taken by Ukrainian researchers at the Kharkov Institute for Physics and Technology in Kharkov, Ukraine.

Whenever I see something like this I always wonder how utterly exhilarated and astonished Dalton, Boltzmann, Maxwell, Heisenberg, Bohr, Einstein and others would have been if they had seen all this. I remember that Stuart Schreiber's life trajectory was set when he saw orbitals first presented as gorgeous lobes in class. The Schreibers of the twenty-first century could have much more to be excited about. Man's dominion over the understanding and manipulation of matter sometimes seems almost mythical.
This is the first time scientists have been able to see an atom's internal structure directly. Since the early 1980s, researchers have been able to map out a material's atomic structure in a mathematical sense, using imaging techniques.
Quantum mechanics states that an electron doesn't exist as a single point, but spreads around the nucleus in a cloud known as an orbital. The soft blue spheres and split clouds seen in the images show two arrangements of the electrons in their orbitals in a carbon atom. The structures verify illustrations seen in thousands of chemistry books because they match established quantum mechanical predictions.
David Goldhaber-Gordon, a physics professor at Stanford University in California, called the research remarkable.
"One of the advantages [of this technique] is that it's visceral," he said. "As humans we're used to looking at images in real space, like photographs, and we can internalize things in real space more easily and quickly, especially people who are less deep in the physics."
To create these images, the researchers used a field-emission electron microscope, or FEEM. They placed a rigid chain of carbon atoms, just tens of atoms long, in a vacuum chamber and streamed 425 volts through the sample. The atom at the tip of the chain emitted electrons onto a surrounding phosphor screen, rendering an image of the electron cloud around the nucleus.
Update: After thinking about this a little more and looking at the comment in the comment section, my exhilaration has been tempered by skepticism (for a good scientist it should ideally be the other way around...I am still learning). The orbitals look perfect, and I would be interested in knowing what actual techniques were used to process the initial raw data into this finished image. Plus, what about sp3 hybridization?
Norman Borlaug (1914-2009)
A scientist whose name has been heard by few, and yet one who gave more people life than almost any other scientist in the twentieth century.
NYT obituary
Can natural sciences be taught without recourse to evolution?
That's the question for a discussion on the American Philosophical Society museum website. I think the answer would have to be no. Now of course that does not mean it's technically impossible; after all, before Darwin the natural sciences were taught without recourse to evolution. But evolution ties together all the threads like nothing else, and to teach the natural sciences without it would be to present disparate facts without really connecting them. It would be like presenting someone with a map of a city without a single road in it.
In fact the natural sciences were largely taught to us without recourse to evolution during our high school and college days. Remember those reams of facts about the anatomy of obscure animals that we had to memorize? If it wasn't the hydra it was the mouse, and if not the mouse then the paramecium. I can never resent my biology teachers enough for not connecting all these animals and their features through the lens of evolution. What a world of difference it would have made if the beauty of the unity of life had been made evident by citing the evolutionary relationships between all these exotic creatures.
In fact "Evolution" was nothing more than a set of two clumsy textbook chapters that got many of the details wrong and left countless other facts wanting. Granted, some of the teachers at least had good intentions, but they just didn't get it. Teaching biology without constantly referring to evolution is like asking someone to learn about a world without using language. Would you teach physics without recourse to mathematics? Then you should not teach biology without recourse to evolution, at least not in the twenty first century.
Olivia Judson on "Creation"
Olivia Judson is a science writer and research associate at Imperial College London who has written excellent articles on biology and evolution for the NYT as well as the entertaining and informative book "Dr. Tatiana's Sex Advice to all Creation". She seems to like the new movie on Charles Darwin, "Creation", in which the real life couple of Jennifer Connelly and Paul Bettany star as Darwin's wife Emma and Charles. Interestingly Bettany did a fine job playing a Darwin-like naturalist and doctor in the film "Master and Commander".
Darwin's relationship with his wife was admirable and interesting because although she was always devoutly religious and he increasingly was not, their marriage was largely warm and affectionate throughout their lives. In typical scientific fashion, he had drawn up a list of pros and cons before marrying her and decided the pros outweighed the cons. Emma who had taken piano lessons from Chopin provided marital stability while Charles labored over The Origin.
In the movie, I think Connelly is too attractive to play Emma, but that's a relatively minor point. My greater concern was with the scientific accuracy of the movie and whether it might turn out to be overwrought and unduly dramatised. However, Judson's review largely allays my fears.
Unlike most biographies of Darwin, its central event is not the publication of the “Origin,” but the death of Darwin’s adored eldest daughter, Annie, at the age of 10. She died in 1851 after nine months of a mysterious illness; at the time of her death, she was not at home, but in the English spa town of Malvern, where she had been sent for treatment.

Thus the movie really seems to focus on Darwin's relationship with his family and especially with his beloved and favorite daughter who unfortunately died an untimely death as a child. According to most accounts, this was a focal point in Darwin's conversion to being a non-believer, and the movie seems to dwell on the pain and conflict that Darwin experienced during this event.
Annie’s death is also the central event of this beautifully shot film. For “Creation” is not a didactic film: its main aim is not the public understanding of Darwin’s ideas, but a portrait of a bereaved man and his family. The man just happens to be one of the most important thinkers in human history.
Which isn’t to say that Darwin’s ideas don’t feature. We see him dissecting barnacles, preparing pigeon skeletons, meeting pigeon breeders and talking to scientific colleagues. He visits the London zoo, where he plays a mouth organ to Jenny, an orangutan; at home, he takes notes on Annie as a baby (Does she laugh? Does she recognize herself in the mirror?). He teaches his children about geology and beetles, makes them laugh with tales of his adventures in South America, and shows them how to walk silently in a forest so as to sneak up on wild animals.
At the same time, we see his view of nature — a wasteful, cruel, violent place, where wasps lay their eggs in the living flesh of caterpillars, chicks fall from the nest and die of starvation, and the fox kills and eats the rabbit.
But all this is merely the backdrop to the story of a man convulsed by grief.
It's probably easy to forget that along with being one of the greatest minds in history, Darwin was also an unusually kind, modest and gentle soul and a devoted family man. It seems like this movie will do a good job of underscoring this fact as well as entertaining audiences with some of Darwin's scientific explorations.
Impressionist thoughts on Rosetta
Here is Bosco Ho, a postdoc at UCSF, comparing Rosetta to the Impressionists, my favorite cabal of artists. Along the way, praise and disappointment are also directed toward GROMOS, an earlier protein modeling program.
Now don't get me wrong; impressionism is my favorite art style, but somehow I am always going to be a little uncomfortable about a program that relies more on statistics than physics to simulate protein folding. I already have this hang-up about models in general which I have articulated before. Although modeling reality is what models are supposed to do, ultimately you can still be in for a nasty surprise if you are not paying too much attention to the actual physics and chemistry behind the molecular interactions.

THE GUTS OF ROSETTA
In the last two CASP meets, David Baker from the University of Washington, using his program Rosetta has come first by a hefty margin in the New Fold category. The success of Rosetta has electrified the protein-folding community.
Yet, there are theorists out there who feel slightly queasy when poking through the innards of Rosetta. Theorists such as Wilfred van Gunsteren write programs such as GROMOS, which have the richness of 17th century Dutch paintings. Just as Vermeer was fetishistically obsessed with painting every detail of the Dutch bourgeoisie, right down to the hem-line of the chamber-maid's dress, GROMOS is obsessed with modeling every detail of 21st century atomic physics, right down to the quadrupole expansion of the electron shells of polarizable atoms. The problem with programs like GROMOS is that they are lumbering giants, bloated programs that devour all the computing that you could ever offer, and still beg for more. Although GROMOS is used for many things, attempts to fold a protein have lurched to a stuttering halt, even after ages of computing time.
Programs like Rosetta, on the other hand, are more like Impressionist paintings, virtuoso dabs of paint that trick the eye into seeing a protein fold in no time at all. For instance, whereas GROMOS fastidiously models all 6 atoms in carbon rings attached to the protein and each atom in the ring is allowed to wobble, Rosetta models the carbon ring as one fat unmovable atom. Water molecules surrounding the protein? No problem, says Rosetta, we'll just ignore them. Rosetta also uses a clever trick by folding similar proteins from different species of animals, and then averaging all the structures to obtain a consensus structure. In reality, when proteins like hemoglobin fold inside your body, they don't get to watch how hemoglobin folds in rats or flies in order to come to a consensus.
As an aside, I have used Rosetta a little and it can be hideously user-unfriendly. Why the authors never sought to collaborate with a software company who would design a nice GUI for it is something I have never understood. Now in spite of the above rants let me not be misleading here; I think Rosetta is a fantastic program that has achieved some spectacular results reported in places like Nature and Science; perhaps its most stunning achievement was designing an enzyme from scratch that would catalyze a Kemp elimination reaction, a reaction that no other enzyme in nature is known to catalyze. It's just that I think that using it, at least for people who are not members of David Baker's group, might be like flying a highly sophisticated spaceship whose workings are somewhat mysterious. It could be a problem when those ill-understood cumulonimbus (or Romulans) start looming on the horizon.
Can you at least get the solvation energy right?
Basic physical property measurement and prediction is not supported at the granting level and is considered too far from the issues directly affecting drug development to have been pursued by industry. This has left a critical gap in the basic scientific method that drives theoretical methods forward, that is, the observation, hypothesis, and testing methodology that Bacon, al-Haytham, and others championed and that Galileo applied to great effect in the formulative years of modern science...if basic physical science is supported in this area there is great potential for improvement and eventual achievement of long-desired goals of molecular modeling in the pharmaceutical industry- Prescient Soothsayers of Solvation

Sometimes it's a wonder that computational predictions of protein and ligand activity work at all. Consider the number of factors we still don't have a good handle on: calculating protein conformational entropy is virtually beyond reach, calculating hydrogen bond strengths that depend intimately on the surrounding environment is still quite tricky, and calculating the favourable hydrophobic entropy gain from the expulsion of water molecules from the active site is still a murky area.
But there are things even simpler than these that we have not learned to calculate well. Foremost among them is a factor that influences every instance of protein-ligand binding: the interaction of both partners with bulk water. If we can't even get the aqueous solvation energy right, can we claim any progress in modeling protein-ligand interactions at all? Water has probably been the most studied solvent for decades, and dozens of water models have sprung up, none of which is significantly superior in calculating the properties of this deceptively simple liquid.
The two foremost implicit methods (as opposed to explicit-solvent methods like MD) currently used for calculating solvation energies are the Born solvation model and methods based on the Poisson-Boltzmann equation. Calculating solvation energies ultimately means getting the basic science right. With this view in mind, a group from OpenEye and AstraZeneca narrate their successes and failures in SAMPL1, a blind test of calculated solvation energies for 56 druglike organic molecules. They do a fine job of investigating individual cases and discussing the effect of two crucial variables on the solvation energies: atomic radii (which relate inversely to the solvation energy) and, even more importantly, charges. The group essentially fiddle with these two variables, modifying the charges and the atomic radii until they get the solvation energies about right. It's a classic case of both the virtues and pitfalls of parametrization, and it indicates that real parametrization should not involve blindly adding terms to get experimental agreement but should instead focus on the two or three scientifically most interesting and important variables.
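For reference, the simplest member of this family is the textbook Born expression for a single ion of charge q and cavity radius a, ΔG = -(q²N_A/8πε₀a)(1 - 1/ε_r). Here is a minimal sketch (with arbitrary illustrative radii, not the paper's values) just to show how strongly the answer depends on the chosen radius, which is precisely the knob being tuned in these studies.

```python
# Born solvation energy of a single ion in a dielectric continuum:
#   dG = -(q^2 * N_A / (8 * pi * eps0 * a)) * (1 - 1/eps_r)
import math

N_A   = 6.02214076e23        # Avogadro's number, 1/mol
e     = 1.602176634e-19      # elementary charge, C
eps0  = 8.8541878128e-12     # vacuum permittivity, F/m
eps_r = 78.4                 # relative permittivity of water
KCAL  = 4184.0               # J per kcal

def born_dG_kcal(charge_in_e, radius_in_angstrom):
    q = charge_in_e * e
    a = radius_in_angstrom * 1e-10
    dG = -(q**2 * N_A / (8 * math.pi * eps0 * a)) * (1 - 1 / eps_r)
    return dG / KCAL

# The result is exquisitely sensitive to the (adjustable) radius
for a in (1.0, 1.5, 2.0):
    print(f"radius {a:.1f} A  ->  {born_dG_kcal(+1, a):8.1f} kcal/mol")
```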
Believe it or not, there are a dozen different methods for calculating atomic charges in computational chemistry. Fixed-charge models don't capture a very important phenomenon, polarization, which can profoundly affect bond strengths and especially hydrogen bond strengths; in real life, charges on atoms don't stay constant in a changing environment. At the same time there is no one "correct" charge model, and as with models in general, what matters ultimately is a model that works. In an earlier blind test, the group had used a semiempirical quantum chemical method called AM1-BCC to calculate charges, and this gave them a mean error of about 2 kcal/mol in the solvation energy. The well-established AM1-BCC method actually produces slightly overpolarized charges, thus fortuitously and conveniently mimicking the change in charge distribution as a molecule transfers from the gas phase to the aqueous medium. In this paper the group calculate charges at the DFT level and find that this makes a significant difference for a large subset of the previous molecules.
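To see just how model-dependent "the" atomic charge is, here is a quick sketch computing Gasteiger charges with RDKit. This is only one of the dozen-odd schemes, and emphatically not the AM1-BCC or DFT-derived charges used in the paper; it is just the fastest way to get a set of partial charges to look at.

```python
# Gasteiger partial charges for acetaminophen via RDKit -- one of many charge models.
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.MolFromSmiles("CC(=O)Nc1ccc(O)cc1")   # acetaminophen
AllChem.ComputeGasteigerCharges(mol)

for atom in mol.GetAtoms():
    charge = atom.GetDoubleProp("_GasteigerCharge")
    print(f"{atom.GetSymbol():2s}  {charge:+.3f}")
```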
Another interesting phenomenon investigated in the study is the effect of conformations on the calculated solvation energy. The first thing to realize is that molecules exist as several different conformations in both the gas and aqueous phases, and the low-energy conformations of a typical organic molecule in the gas phase are very different from the aqueous ones. Conformations calculated in the gas phase are typically 'collapsed', with oppositely charged polar groups too close for comfort because there is no intervening solvent to break them up. If you want to use only one conformation per phase in a solvation energy calculation, you would use a collapsed gas-phase conformation and a relatively extended aqueous-phase conformation. Ideally, though, you should be more realistic and use multiple conformations. In the study, the effect of using multiple conformations to calculate the vacuum- and aqueous-phase partition functions and the solvation free energy was examined. Interestingly, the results obtained with multiple conformations are generally worse than those obtained with single conformations! Probably some added noise is introduced by unrealistic calculated conformations. The authors also find, not surprisingly, that using different charges for different conformations of the same molecule can make a difference, although not much. At the same time, the charges on certain atoms don't change much if those atoms are buried; a failure to realize this leads to two screaming outliers, which however only provides a good opportunity to learn what's wrong.
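In its simplest form, using multiple conformations just means Boltzmann-weighting the per-conformer estimates. Here is a toy sketch with made-up numbers; it weights by gas-phase conformer energies only, a simplification of the proper two-phase partition function treatment used in the paper.

```python
# Toy Boltzmann-weighted average of per-conformer solvation estimates (illustrative numbers).
import math

RT = 0.593  # kcal/mol at ~298 K

# (relative gas-phase energy, per-conformer solvation estimate), both in kcal/mol
conformers = [(0.0, -9.5), (0.8, -11.2), (1.5, -10.1)]

weights = [math.exp(-e_rel / RT) for e_rel, _ in conformers]
Z = sum(weights)
dG_avg = sum(w * dg for w, (_, dg) in zip(weights, conformers)) / Z
print(f"Ensemble-averaged solvation estimate: {dG_avg:.2f} kcal/mol")
```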
There are several interesting paragraphs on how the authors played with the atomic radii and the charges and how they explained, or were puzzled by, the outliers. In the end, a particular set of DFT charges along with a particular set of radii (termed ZAP10) gave the smallest error in the calculated solvation energies. Interestingly, some radii had to be kept at their default Bondi values (which are derived from crystal data) in order to work well.
What I like about this study is that it is told from the real-time viewpoint and illustrates the calculation as it actually evolved. The pitfalls and the possibilities are cogently explored. Certain functional groups and atom types seem to perform better than others. It is clear that much care is devoted to understanding the basic science.
The basic science is also going to involve the accurate experimental determination of solvation energies. Such measurements are typically considered too mundane and basic to be funded. And yet, as the authors make clear in the paragraph quoted at the beginning, it's only such measurements that are going to aid the calculation of aqueous solvation energies. And these calculations are going to be ultimately key to calculating drug-protein interactions. After all, if you cannot even get the solvation energy right...
Nicholls, A., Wlodek, S., & Grant, J. (2009). The SAMP1 Solvation Challenge: Further Lessons Regarding the Pitfalls of Parametrization The Journal of Physical Chemistry B, 113 (14), 4521-4532 DOI: 10.1021/jp806855q
Books on evolution
Continuing the theme of the 200th anniversary of Darwin's birth and the 150th anniversary of The Origin of Species, here are some great forthcoming books on the topic.
1. Carl Zimmer's "The Tangled Bank: An Introduction to Evolution" (October 15) promises to be perhaps the most visually attractive introduction to evolution, lavishly studded with color photos and schematics. Zimmer has already written the excellent "Evolution: The Triumph of an Idea", which I highly recommend.
2. Richard Dawkins's "The Greatest Show on Earth" (September 22) should divert some attention from his greater recent fame as an atheist. The whole atheism debate makes it easy to forget that Dawkins has been one of the best science writers of our time. His earlier books on evolution hardly need introduction, and there is no doubt that this one will also be filled with crystalline and poetic prose. Here's an excerpt that details the domestication of the dog from wolves and dispels some common misconceptions.
3. Among books already published, probably the single best one for understanding the core essentials of the topic is Jerry Coyne's "Why Evolution Is True"; the chapter on sexual selection is especially laudable. Also add Sean Carroll's "Remarkable Creatures: Epic Adventures in the Search for the Origin of Species" to the list.
I have some thoughts on what I saw as interesting analogies between descriptors used to gauge sexual selection and molecular descriptors for biological activity, but more on this later.