And a very brief history of the US hydrogen bomb effort
How the Soviets got the H-bomb by 1955 has always been something of a mystery. Although they had top-notch scientists like Andrei Sakharov working for them, they still arrived at almost exactly the same design as the Americans within four years. Nobody denies Sakharov's tremendous contributions to the H-bomb effort. And yet the question lingers whether espionage helped Soviet H-bomb design just as it had helped Soviet A-bomb design, most famously through Klaus Fuchs's efforts.
Now a new book due to be released in January claims that its authors have uncovered a spy who gave details of the H-bomb design to the Soviets. 'The Nuclear Express' is co-authored by Thomas Reed, a former weapons designer who worked at the Lawrence Livermore laboratory for many years. The authors would not name the spy since he is now purportedly dead. Historians who have weighed in don't find the idea entirely implausible; after all, it is hard to believe that security would have been so tight as to completely preclude espionage. In addition, even after Fuchs was apprehended, the Soviets still had a web of spies and sympathizers spread throughout the US that has not been completely unraveled even now.
It is worthwhile at this point to recapitulate some of the US H-bomb history:
1942: Edward Teller, the 'father of the hydrogen bomb', builds upon a suggestion by Enrico Fermi and proposes the first design for a thermonuclear weapon. This is during a secret conference at Berkeley organized by Robert Oppenheimer that's supposed to explore the feasibility of a fission weapon, not a fusion device. Teller's basic idea is to use the extreme temperatures arising from an atomic bomb to ignite a cylinder of deuterium or tritium at one end, with the unproven assumption that the fusion fuel will ignite and continue to burn, thus producing a tremendous explosion equivalent to millions of tons of TNT. The H-bomb distracts the participants enough for them to speculate on its workings, but atomic bomb design (necessary for initiating a fusion reaction anyway) is wisely given priority. The as yet speculative thermonuclear weapon is christened "The Super". The Manhattan Project is kicked off. Throughout the war Teller goes off on his own H-bomb trajectory, flaring tempers and depriving Los Alamos of his considerable expertise.
1946: After the war, Teller, who is still obsessed with the weapon, convenes a short, top-secret conference. Klaus Fuchs is one of the participants. The conference concludes, mostly based on Teller's optimistic assessment, that The Super is feasible. At its end, Teller submits an overly optimistic report, much to the chagrin of Robert Serber, an accomplished physicist who had been Oppenheimer's principal assistant at Los Alamos. Fuchs transmits the information from this conference to the Soviets.
August 1949: The Soviets detonate their first atomic bomb. Everyone is shocked, perhaps unnecessarily so. A high-level committee headed by Oppenheimer convenes at the end of October and debates H-bomb development. The almost unanimous opinion is that the H-bomb is not a tactical weapon of war but a weapon of genocide, and that therefore its development should not be undertaken. Priority should be given instead to the development of better tactical fission weapons.
December 1949-January 1950: Edward Teller, spurred on by the Soviet A-bomb, starts recruiting scientists to join him at Los Alamos to work on the Super. Hans Bethe initially agrees, then, after a chat with Oppenheimer and Victor Weisskopf, declines. Teller blames Bethe's change of mind on Oppenheimer. Later Bethe decides to work at Los Alamos only as a consultant, mainly because he wants to prove that The Super won't be feasible.
During this time, Stanislaw Ulam and Cornelius Everett at Los Alamos embark on a set of tedious calculations to investigate the feasibility of The Super. The result is decidedly pessimistic: The Super would need far more tritium, an extremely rare and expensive isotope, than anticipated to initiate burning. And even with added tritium, the probability of successful propagation is extremely low. Teller's dream is dead in the water. Fermi, one of Teller's role models, confirms the bad news.
January-February 1950: Even as Ulam's calculations give Teller fits, Klaus Fuchs confesses to espionage. The country gradually descends into a state of paranoia. Against the advice of many experts, President Harry Truman initiates a crash program to build the H-bomb. Incidentally, Truman's announcement comes before the public news about Fuchs. And, absurdly, it comes even as Ulam and others are demonstrating that Teller's Super would not work.
1950: Throughout 1950 the options for the Super keep looking bleaker. In June the Korean War begins, fueling the paranoia further. Teller's mood blackens. At one point, after Ulam reports his latest set of calculations, Teller is said to be "pale with fury".
December 1950-January 1951: Ulam makes a breakthrough. He realizes that separating the fission weapon from the fusion fuel, and using the extreme pressures generated by the fission weapon to compress the fusion fuel, will dramatically increase the odds of thermonuclear burning. Ulam floats the idea to Teller, who enthusiastically espouses it and, crucially, realizes that radiation from the fission "primary" would do an efficient job of compressing and igniting the fusion "secondary". The idea is so elegant that Oppenheimer calls it 'technically sweet' and now supports the program. Bethe agrees to work on the device because suddenly everyone thinks the Soviets cannot be long in discovering it themselves. Later, Teller makes significant efforts to discredit Ulam's role in the invention. But the Teller-Ulam design becomes the basis for almost every hydrogen bomb in the arsenals of the world's nuclear powers.
1951-1952: Work proceeds on a thermonuclear weapon. In November 1952 the world's first hydrogen bomb, Ivy Mike, explodes with a force equivalent to about 650 Hiroshima-type bombs. Mankind has finally invented the device that comes closest to being a weapon for the complete annihilation of nations.
So that is where matters stood in 1952. That gives the spy a window of roughly two years to transmit the information. It's instructive to note that after Fuchs was outed, Oppenheimer actually hoped that he had told the Soviets about the H-bomb design from the 1946 conference, since that design had been shown to fail and would have led the Soviets on a wild goose chase. But the fears that Bethe, Oppenheimer and others initially had, that the Soviets would not be long in discovering the Teller-Ulam mechanism, don't look unfounded to me. There were experts who thought that the idea of using compression and radiation to ignite and burn the thermonuclear fuel would occur to anyone who had thought hard and long about these matters. Niels Bohr thought that a bright high-school student could have come up with it, but that's probably going a little too far. The truth could well lie in between, with both original thought and espionage playing a role. In any case the new book promises fresh fodder for atomic aficionados, and I have pre-ordered it.
Deconstructing Little Boy and Fat Man
A high-school educated truck driver uncovers the classified details of the first atomic bombs with unlimited verve and imagination
The details and specifications of the first two atomic bombs developed by humanity- Little Boy and Fat Man- are still secret. While a lot of material about nuclear weapons has been declassified, the specs for the bombs dropped on Hiroshima and Nagasaki are still considered out of reach, probably absurdly so. Even after other countries have built countless nuclear weapons like Little Boy and Fat Man, and vastly improved ones, the original bomb design details remain under a shroud of secrecy.
Now, a truck driver with a high school diploma has uncovered them in excruciating detail. His work has been lauded by prominent historians including Richard Rhodes. His fascinating story is recounted in the December 15 New Yorker. John Coster-Mullen, with the "Coster" in his name curiously being the last name of his wife, has gone to simple but extraordinary lengths to get detailed information on the design of the first two nuclear weapons. He has succeeded to a degree that no professional scientist or historian has before, and to a degree that no national laboratory scientist will openly admit.
Coster-Mullen's story proves that to make significant headway in a problem you don't have to be a professional historian or a professor with a PhD. All you need is the patience to stick with a topic and keep on drilling deep into it. Coster-Mullen has worked with this single purpose for the last fifteen years or so, and has exploited almost every publicly available source to put together the details of Little Boy and Fat Man. These include museum exhibits around the world, scores of books written about nuclear weapons, thousands of documents declassified in the last fifty years, and testimonies and interviews with everyone from top scientists to machinists who worked on the bombs. The most important assets that Coster-Mullen brings to bear on the problem are unremitting determination and plain old common sense.
Consider the instance where he looked at an old and commonly seen photograph of two scientists carting the core explosive 'physics package' for the device exploded in the first atomic bomb test- Trinity- into a sedan. Coster-Mullen simply looked at the height of the sedan doors, figured out which model it was (a 1942 Plymouth), went to a car museum to measure the height and width, and then by simple proportionality deduced the size of the box the men were carrying (the sketch below spells out the arithmetic). In another instance, he deduced the length of a crucial plug used for Little Boy from an account of the number of turns needed to screw it in. His general approach is to patch together material from a variety of sources and then connect the dots using simple deductive logic. While there are still unresolved questions about the designs, he has put together an extraordinary amount of detail. This is classic detective work at its best. The culmination of this work is Atom Bombs, a book about the detailed designs of the first atomic weapons that Coster-Mullen is selling on Amazon for $50.
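The proportionality step at the heart of that deduction is worth spelling out. Here is a minimal sketch in Python; every number in it is a hypothetical placeholder, standing in for measurements Coster-Mullen actually made on the photograph and the museum car:

```python
# All values below are hypothetical, used only to illustrate the logic;
# a known real-world length fixes the scale of the photograph.
door_height_real = 45.0    # inches, door measured on a museum 1942 Plymouth (assumed)
door_height_photo = 30.0   # millimeters, the same door measured on the print (assumed)
box_height_photo = 16.0    # millimeters, the crate visible in the photo (assumed)

scale = door_height_real / door_height_photo   # inches of reality per mm of print
box_height_real = box_height_photo * scale
print(f"Estimated crate height: {box_height_real:.1f} inches")  # -> 24.0 inches
```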
Again, Coster-Mullen has nothing more than a high school diploma and works as a truck driver and part-time photographer. His example shows that what is needed for success is an iron will to uncover something, and knowing where to look for the data. Read the entire article- it's fascinating.
Buying Used Books on Amazon
As a book-hungry graduate student whose money is a precious commodity, it's not surprising that I am loath to walk into Borders and buy a brand new book. If the book is older I would rather haunt used book stores, comb through the hundreds of sometimes boring titles, and pick the one gem ensconced among them which others' eyes have not noticed. Needless to say, this gnaws away at another one of a graduate student's precious commodities- time.
However, another option is Amazon's used books service. I was hesitant to use this but was finally egged on to try out a few titles. My conditions were simple: spines should be intact, and there should be absolutely no underlining or highlighting inside. The rules are pretty simple too: go for sellers offering books whose condition is marked "very good" or better, who have at least a 97% rating and who have been selling at least for a year or so. Most importantly, buy ex-library books if they are available; the seller will usually indicate this explicitly. These books get you the biggest bang for your buck. They are usually wrapped in plastic with the tender loving care characteristic of many public libraries, their dust jackets are usually firm and intact, and they may have some library stamps on the first page or on the sides.
But if these simple rules are satisfied, then ex-lib books can be better than even brand new books. Consider that hardbacks usually cost no less than $18-20. So if I do end up buying new books, I buy paperbacks whenever they are available. Paperbacks cost between $10 and $15. Now consider ex-lib books, which I have gotten for about $3 on average. Even with the shipping it comes to $7-8. A well-protected hardcover ex-lib book warmly clasped with a plastic-covered dust jacket beats a brand new paperback even if the hardcover is a few years older.
So far I have had a great experience ordering these ex-lib hardcovers from Amazon. Starting about six months ago, I have already ordered about 30 of these and have been satisfied 99% of the time. There may have been one or two which looked somewhat the worse for wear, but in their defence, they were selling for 20 cents apiece. There's a limit to what you can expect.
∆G, ∆G‡ and All That: Implications for NMR
Since we were on the subject of NMR and determining conformations, I think it would be pertinent to briefly discuss one of the more slippery basic concepts that I have seen a lot of chemistry students (naturally including myself) get plagued by: the difference between thermodynamics and kinetics. The distinction between these two important ideas encompasses all of chemistry, and I often find myself wrestling with it. Simply saying that thermodynamics is "where you go" and kinetics is "how you get there" is not enough of a light to always guide students assuredly through the sometimes dark corridors of structure and conformation.
Going beyond the fact that thermodynamics is defined by the equilibrium free energy difference (∆G) between reactants and products and that kinetics relates to the activation barrier (∆G‡) for getting from one to the other, I want to particularly discuss the importance of both these concepts for determining conformation by NMR spectroscopy.
There are two reasons why determining conformations in solution can become a particularly challenging endeavor. The first reason is thermodynamics. Again consider the all-important relation ∆G = -RT ln K, which makes the equilibrium constant exquisitely sensitive to small changes in free energy (∆G). As mentioned before, an energy difference of only 1.8 kcal/mol between two conformations means that the more stable one exists to the extent of about 96% while the minor one exists to the extent of only 4%. In practice such energy differences between conformers are seen all the time. A typical scenario for a flexible molecule in solution posits a complex distribution of conformers separated from each other by tiny energy differences, ranging from say 0.5-3 kcal/mol. Again, the exponential dependence of the equilibrium constant K on ∆G means that the population of a minor conformer that is higher in energy than the most stable one by only 3 kcal/mol will be so tiny (~0.6%) as to be virtually non-existent. NMR typically cannot detect conformers present at less than 2-3% in solution (and it's too much to ask of NMR to do so), but such populations exist all the time.
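These percentages follow directly from ∆G = -RT ln K. Here is a minimal Python sanity check, assuming room temperature (T = 298 K) and energies expressed relative to the most stable conformer:

```python
import math

R = 1.987e-3   # gas constant in kcal/(mol K)
T = 298.0      # room temperature in K

def boltzmann_populations(rel_energies):
    """Populations from conformer free energies (kcal/mol) relative to the minimum."""
    weights = [math.exp(-dg / (R * T)) for dg in rel_energies]
    total = sum(weights)
    return [w / total for w in weights]

# Two conformers 1.8 kcal/mol apart: roughly a 96:4 split
print(boltzmann_populations([0.0, 1.8]))   # ~[0.95, 0.05]

# A conformer 3 kcal/mol above the major one: ~0.6%, far below NMR detection
print(boltzmann_populations([0.0, 3.0]))   # ~[0.994, 0.006]
```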
Thus, thermodynamics is often the bane of NMR; in this case the technique is plagued by its low sensitivity.
If thermodynamics is the bane, kinetics may be the nemesis. Rotational barriers between conformations (∆G‡) can be tiny compared to what thermal jostling can surmount at room temperature. For example, the classic rotational barrier for interconversion in ethane (whose origins are still debated, by the way) is only 3 kcal/mol. At room temperature any barrier much below roughly 20 kcal/mol is crossed readily, so the ethane conformations interconvert like crazy. Thus even for energy barriers of several kcal/mol, conformational interconversion is usually more than fast enough to average the conformations, and consequently all associated parameters- most importantly chemical shifts and coupling constants- in NMR. The NMR timescale is on the order of tens of milliseconds, while conformational interconversion occurs on the order of tens of microseconds or less. Now in theory one can go to lower temperatures and 'freeze out' such motions. In many such experiments, line broadening is observed on cooling, followed by separation of peaks below the coalescence temperature. But consider that even for a barrier as high as 8-10 kcal/mol, NMR usually gives distinct, separate signals for the different conformers only around -100 degrees Celsius. For barriers like those in ethane, the situation would be hopeless. As an aside, this means that sharp, well-defined resonances at room temperature do not indicate a lack of conformational interconversion; they can simply mean that interconversion is fast compared to the NMR timescale.
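To put numbers on this, one can estimate interconversion rates from the Eyring equation, k = (kBT/h)exp(-∆G‡/RT). This is only a back-of-the-envelope sketch (transmission coefficient taken as 1, and the 100 Hz peak separation below is an illustrative assumption):

```python
import math

R = 1.987e-3               # kcal/(mol K)
T = 298.0                  # K
PREFACTOR = 2.084e10 * T   # kB*T/h in s^-1 (kB/h is about 2.084e10 per kelvin)

def eyring_rate(barrier_kcal):
    """Transition-state-theory estimate of the interconversion rate."""
    return PREFACTOR * math.exp(-barrier_kcal / (R * T))

for barrier in (3.0, 10.0, 14.0):
    print(f"{barrier:5.1f} kcal/mol -> k ~ {eyring_rate(barrier):.1e} s^-1")
# 3 kcal/mol gives ~4e10 s^-1; even 14 kcal/mol still gives ~3e2 s^-1.

# For two signals separated by 100 Hz, coalescence occurs near k = pi*dv/sqrt(2):
dv = 100.0
print(f"coalescence rate: {math.pi * dv / math.sqrt(2):.0f} s^-1")  # ~222 s^-1
```

So only barriers in the mid-teens of kcal/mol and above start to give resolved conformer signals at room temperature; ethane's 3 kcal/mol barrier is hopelessly fast.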
Thus, kinetics is also often the bane of NMR; in this case the technique is plagued by its poor time resolution.
Now there may be situations in which either thermodynamics or kinetics is favourable for an NMR conformational study. But for the typical flexible organic molecule, both factors are usually pitted against the technique: rapid interconversion because of low rotational barriers, and small thermodynamic energy differences between conformers. Given this, it probably should not sound surprising to say that NMR is not that great a technique for this particular purpose. However, as every chemist knows, its advantages far outweigh its drawbacks. Conformational studies comprise but one aspect of countless NMR applications.
Nonetheless, when conformational studies are attempted, it should always be kept in mind that thermodynamics and kinetics have both conspired to make NMR an unattractive method for our purposes. Thermodynamics leads to low populations. Kinetics leads to averaging of populations. And yet the averaged information gained from NMR is invaluable and can shed light on individual solution conformations when combined with a deconvolution technique like NAMFIS or with molecular dynamics. On the other hand, fitting the averaged data to a single conformation of a flexible molecule is inherently flawed and unrealistic. No one who has tried to photograph a horse race with a slow shutter speed should believe that NMR by itself can tease apart individual conformations in solution.
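The horse-race analogy can be made concrete with a toy example. All the values below are illustrative, not from any real molecule; the point is that a 50:50 mixture of two rapidly interconverting conformers produces an averaged coupling constant that belongs to neither:

```python
# Illustrative values only: two conformers with very different vicinal couplings
J_conformers = [2.0, 11.0]   # Hz, e.g. gauche-like vs anti-like arrangements
populations = [0.5, 0.5]     # fast 50:50 interconversion on the NMR timescale

J_observed = sum(p * J for p, J in zip(populations, J_conformers))
print(J_observed)   # 6.5 Hz

# Fitting 6.5 Hz to a single structure yields a "virtual conformation" with an
# intermediate dihedral angle that the molecule never actually adopts.
```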
For determining conformations, then, NMR alone provides a wealth of data, but data locked inside a safe. Peepholes in the door may illuminate some aspects of the system. But you need a key, best obtained from other sources, that will allow you to open the door and savor the treasures unearthed by NMR in their full glory.
Does a protein-bound ligand exist in only one conformation?
I have been thinking a lot recently about studies in which people have determined the bound conformation of a ligand by transfer-NOESY experiments, essentially by transferring magnetization off another ligand to the protein and then back to the ligand of interest. With the known bound conformation of the first ligand, one can apparently locate the conformation of the second one. Many such unknown protein-bound conformations have been worked out. In my field of research, the relevant ones are agents that bind to tubulin, especially discodermolide. In this case, the conformation of discodermolide was deduced via competition transfer-NOESY experiments with epothilone. These experiments are non-trivial to carry out and, as with other biomolecular NMR studies, should be interpreted carefully. But in the end they are nifty techniques that can shed light on unknown bioactive conformations, something very valuable for drug design.
Essentially it's again a problem of fitting the bound-conformation NMR data to a single conformation. In solution we know for sure that this is a fallacious step. The (not so) obvious assumption in doing it for bound conformations is that there must be only one conformation in the active site too. But I have always wondered whether a ligand in a protein active site could also have multiple conformations. MJ's comments on a past post, and the discussion there, make me think that even in a protein active site there could be multiple conformations of a ligand, something that runs counter to conventional thinking. How diverse those conformations might be is a different question; one would probably not expect large conformational changes. But even 'small' conformational changes could be significant enough to distinguish between different conformations in the active site. It's a problem worth thinking about.
Sleep well through chemistry...forever
Molecules of Murder: Criminal Molecules and Classic Cases
By John Emsley
Royal Society of Chemistry, 2008
In this highly engaging, detailed and morbidly fascinating slim volume, chemist John Emsley narrates the stories of those who made use of science for killing their fellow beings. Emsley recounts the use of well-known chemicals as poisons in famous and some not-so-famous murder cases. He tells us ten stories in ten chapters, each devoted to a specific poison and a specific murder case in which it was used. The cases are fascinating for science buffs because of the scientific background about the poisons, and for others because of the ingenious thinking that went both into the murders and into the detective work involved in solving them.
The stories span a range of countries, periods and motives for murder. They feature famous victims such as former FSB agent Alexander Litvinenko as well as lesser-known victims whose killings were no less deadly and well-planned. Each story has comprehensive details on the personal or political background of the victims and murderers and their times, as well as detailed background on the poisons themselves, including their history, chemical and biological characteristics, use and availability, and actual administration to the victims. Along the way, Emsley uncovers a range of diabolical and murderous characters who each had their own motives, personal or political, for causing the death of one or several persons.
While the famous murders like Litvinenko's from polonium and Bulgarian dissident Georgi Markov's from ricin are told in fascinating detail, so are the murders involving relatively low-profile and yet deadly poisons like adrenaline, diamorphine and atropine. Emsley also covers murders that used the standard and deadly poisons carbon monoxide and cyanide. Many of these chemicals are relatively easily accessible, and that makes their use more difficult to control. Particularly chilling is the case of Kristen Gilbert, a nurse who used adrenaline to kill her patients, essentially by giving them fatal heart attacks. The story is made grimmer by the fact that Gilbert was a nurse who was supposed to be a giver of life, and that adrenaline, a substance produced naturally by the body, is a very clever choice of poison, since its levels rapidly fade and it is hard to detect as a foreign poison.
The first and last chapters, dealing with the Litvinenko and Markov murders from polonium and ricin, merit special attention because of their high-profile political nature and the rather exotic identity of the poisons used. Markov was murdered by a KGB-aided agent while standing on a bridge over the Thames in London. The murder weapon was most unlikely: an umbrella whose tip contained a pellet with an extremely tiny amount of ricin, which was injected into Markov's thigh by an 'accidental' jab that he hardly felt. Ricin is one of the most toxic substances known to man, and within three days Markov had died a painful and inexplicable death. The murder was well-planned and ingenious. Emsley, who was himself involved in this case as a scientific expert, gives a fascinating description of the rather simple but ingenious forensic work that went into ascertaining the amount of poison used, which made it possible to eliminate many well-known poisons.
The Litvinenko case is still fresh in everyone's mind. Litvinenko was a former agent of the FSB (the successor of the KGB) who accused prominent Russian politicians and businessmen of nefariously bringing Vladimir Putin to power. His murder also took place in London, in a cafe, with another unlikely poison: tea laced with radioactive polonium-210. The fact that he could not be saved in spite of 50 years of knowledge about radioactive substances and their effects on biological systems shows how we can still miss the 'obvious'. It took a long time before polonium-210 emerged as a suspected poison, and this is apparently the first case in which this rather well-known substance was used to assassinate a political target. The source was almost certainly a nuclear reactor or some other facility in Russia. While the attempt was successful, the choice of poison was less than perfect, since the polonium left a trail of radioactive hot spots literally leading from one location to another. While this, combined with Litvinenko's extensive testimony before his death, finally made it possible to identify a suspect, as of now the man enjoys political immunity in Russia, a fact that lends some credence to the suspicion that Putin may somehow have known about Litvinenko's murder.
Emsley narrates these and other morbid cases with details about the science, chemical history and detective work, as well as the politics and the personal and social histories of the victims and murderers, all of which should keep anyone engaged. For science fans, it is important reading about how science can be used to do harm; for others, at the very least it is a fascinating set of detective stories that should keep them glued to their chairs. The one problem I had with the book was its format: the font could have been more attractive, and the illustrations should have been interspersed within the text instead of being curiously stitched together at the end. But these are minor shortcomings of an otherwise fascinating and lucid book.
I can only end by saying that in this period of paranoia about terrorist acts, it may not be a good idea to read this book in the airport security line.
Article on NAMFIS in IIT-D magazine
A short holiday break and a rather protracted bout of the flu have kept me from blogging. So I will link to an article of mine that just got published in the magazine of the Chemical Society of the Indian Institute of Technology (IIT), Delhi. The article is written for the layman and talks about the importance of realizing that flexible molecules have multiple conformations in solution. Such conformations cannot be determined by NMR alone due to their rapid interconversion.
In the article, I describe NAMFIS (NMR Analysis of Molecular Flexibility In Solution), a joint computational-NMR approach which can derive a Boltzmann population for flexible molecules in solution. This information can be very useful for deducing, for example, the protein-bound conformation of a drug. But it can also be useful under other circumstances where determining conformation is important, such as for organic molecules assembling on a surface. Comments, criticism and questions are of course always welcome.
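In spirit, the deconvolution is a constrained least-squares fit: find non-negative conformer populations whose weighted average best reproduces the observed NOEs and coupling constants. Below is a toy sketch of that idea in Python; the matrix entries and observables are entirely made up, and the actual NAMFIS procedure differs in its details and statistics:

```python
import numpy as np
from scipy.optimize import nnls

# Each column of A holds observables *computed* for one candidate conformer
# (e.g. from a conformational search); b holds the *observed*, time-averaged
# data. All numbers here are invented purely for illustration.
A = np.array([
    [2.0, 11.0, 7.0],   # coupling J1 (Hz) predicted for conformers 1-3
    [9.5,  3.0, 6.0],   # coupling J2 (Hz)
    [2.5,  4.0, 3.2],   # an NOE-derived distance (angstroms)
])
b = np.array([6.5, 6.3, 3.3])   # hypothetical experimental averages

p, residual = nnls(A, b)   # non-negative least squares: populations >= 0
p /= p.sum()               # normalize so the populations sum to 1
print("fitted populations:", np.round(p, 2))
```

The non-negativity constraint matters: an unconstrained fit can happily assign a conformer a negative population, which is physically meaningless.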
Help!
I am looking for an indicator for ferric ions that is highly selective for ferric over ferrous. It should be water soluble and stable at room temperature and at neutral or basic pH. In addition, it should be very sensitive, detecting slight increases in ferric concentration with a bright colour change. And of course, the colour change should be easily detectable by a spectrophotometer or, preferably, by the naked eye.
Help will be much appreciated!
Please, save science from the "holists"
There are two kinds of environmentalists. One kind, a scarce breed, consists of those who are willing to do cost-benefit analyses and apply rational, fact-based scientific thinking to suggest policy. The other kind, sadly still the majority, are against nuclear power, are not averse to hijacking oil tankers to make their point, and are more concerned about the fate of koala bears than about prudent science-based solutions to climate change. These environmentalists often, but not always, include far-left anti-corporate activists. Many of them tout "holistic farming" and vague notions of glorified "scientific solutions" to the world's food, medical and environmental problems. Most jarringly, they hold many scientists in contempt and subscribe to the strange postmodernist view of science, wherein science is "just another way of looking at reality".
Vandana Shiva, who has a PhD in the philosophy of physics from the University of Western Ontario, is unfortunately one of them.
Sovietologist brings one of her crazier articles to my attention. According to Shiva, the traditional reductionist approach to science is not only incorrect but responsible for the deaths of thousands of people. Shiva thinks that reductionism has harmed science and should essentially be sidelined and abandoned in favour of a more generalized "holistic" approach. Thrown out of the window are the benefits that reductionist science has brought to billions.
According to Shiva,
"In order to prove itself superior to alternative modes of knowledge and be the only legitimate mode of knowing, reductionist science resorts to suppression and falsification of facts and thus commits violence against science itself, which ought to be a search for truth. We discuss below how fraudulent this claim to truth is."Shiva then helpfully rails against every application of science from medicine to agriculture to energy. I don't think it's even worth discussing the many glaring flaws and rampant cherry picking in her ramblings, but her opinions of medicine especially rankled me
"Simple ailments have been cured over centuries by appropriate use of concoctions made from plants and minerals found in nature. 'Scientific medicine' removes the diversity by isolating 'active' ingredients or by synthesizing chemical combinations. Such processing first involves violence against the complex balance inherent in natural resources. And then, when the chemical is introduced into the human body, it is often a violation of human physiology."
Little does Shiva realize that by obfuscating the issue and presenting medical therapy as a "violation of human physiology" she is obscuring the fact that this is precisely what any foreign substance introduced into the body does. And by the way, perhaps Shiva has forgotten the billions of lives that "active ingredients" have saved all over the world. She gives the example of the psychoactive drug reserpine, isolated from the beautiful flowering plant Rauwolfia serpentina. It was perhaps the first drug that brought dignity and relief to countless patients with psychoses. Then it began showing unacceptable side effects. But Shiva not only fails to stress the benefits, she stops here. There is no discussion of the scores of later anti-psychotic drugs that focused on reducing those side effects and improving efficacy. Even today we don't have the perfect drug for schizophrenia, but intense efforts continue in both academia and industry. For Shiva these efforts are trivial and even misguided.
Not surprisingly then, Shiva launches into a litany of the benefits of "natural" concoctions. As with other extreme propaganda, there is a shred of truth to this contention. There is no doubt that many Ayurvedic medicines bring real benefits. But there is merit in isolating the active ingredient from any such natural source and modifying it to reduce toxicity. That is how most drugs have been developed: by starting from an isolated natural molecule and then tuning its properties to reduce toxicity and improve potency. Shiva needs to educate herself a little about the process of drug discovery.
However, Shiva's real agenda, hidden all along, becomes clear at the end. If postmodernist leanings don't move you, compassionate socialist ones surely will:
"But it is highly unlikely that medical science and pharamaceutical establishments will pay heed. For the reductionist medical science cannot but manufacture reductionist products and undermine the balance inherent in natural products. The multinationals that produce synthetic drugs in pursuit of fabulous profits and ignore their toxic side effects do not care. When they are forbidden to sell some harmful drugs in the home countries, they find a lucrative market in the third world, where the élites, including the medical establishment, are usually bewitched by anything that is offered as scientific, especially if it comes wrapped in pretty pay-offs. They give a free hand to multinationals to buy medicinal plants at dirt-cheap rates and sell the processed pills in the third-world countries at exorbitant prices and at enormous cost to the health of the people. The élites cannot accept that it would be more equitable socially, cheaper economically, conductive to self-reliance politically, and more beneficial medically for the third-world countries to use the plants locally according to time-tested indigenous pharmacology."
That's right. The billions of lives saved by the "elitist" multinational drug corporations are nothing compared to the virulent and rampant capitalism they have infected third-world populations with. If it's a private corporation, then by default it must be part of a global conspiracy to oppress poor people in developing countries.
While multinational drug companies and the third-world political élites are out for profits, the third-world intellectual élites, eager to prove their scientific temper, join in a chorus to denounce indigenous therapeutics and related knowledge systems as hocus-pocus and their practice as quackery. It is through this mixture of misinformation, falsehood and bribes that a reductionist medical science has established its monopoly on medical knowledge in many societies.
As noted before, Shiva's entire essay contains too much cherry picking, too many straw-man arguments and too much misleading information to criticize in full here. And again, Shiva treads the fine line between some legitimate objections to reductionist science and a full-blown irrational attack on its methods. We don't need Shiva to tell us that reductionist methods have their limitations; consider the recent emergence of fields like systems biology, where scientists are grappling with ways to overcome them. But no serious scientific critic of reductionism will deny the immense benefits it has brought us. Almost all the fruits of scientific research that we enjoy have come from reductionist science, and that will continue to be so. Disparaging wholesale the benefits of reductionist science and deriding the huge windfall of discoveries it has bequeathed to us is a tremendous insult to the very edifice of scientific discovery. But then that's the standard agenda of the postmodernist-socialists: to contend that science is "just another way" of looking at reality and to charge scientists with having a "monopoly over the truth".
I have a simple suggestion for Shiva, which I am sure she would not be loath to accept. Next time she suffers from a deadly pathogenic infection, she should not take any antibiotic or drug manufactured by the evil companies. She should subsist on coconut water, isabgol and curd to ward off her illness. Shiva would then truly be walking the talk. Not only would she be proving her point about reductionist science doing her more harm than good, and about antibiotics being simply one way among many to "look at reality", but her admirable bed-ridden efforts would be a true slap in the face of those evil multinational drug companies.
Whose fault is it, again?
Don't let your view of the Bush administration color your picture of reality
Usually I find myself vigorously nodding my head when I read New York Times op-eds and columns. I share the Times's disdain for the Bush administration's policies and usually think they are spot on when they criticize them. But in this particular case, I think they have let their otherwise justified Bush-phobia lead to an unreasonable response.
The story is painful but straightforward. A woman was given the widely prescribed anti-nausea drug Phenergan by injection. When it did not work, the doctor opted for a riskier procedure during which his assistant accidentally punctured an artery in the woman's arm. Gangrene set in, and her entire right arm and hand tragically had to be amputated. Sneezing from a few allergies is hardly worth losing an arm.
The woman rightly sued the physician and his assistant and received a healthy out-of-court settlement. But then she also sued Wyeth, the drug's manufacturer. Why? For "failing to warn the clinicians to use the much safer "IV drip" technique, in which the drug is injected into a stream of liquid flowing from a hanging bag that already has been safely connected to a vein, making it highly unlikely that the drug will reach an artery". The trial court even awarded her a whopping $6.7 million in damages. The NYT supports the court's decision and objects to Wyeth's displeasure:
"Now Wyeth, supported by the Bush administration, has asked the Supreme Court to reverse the verdict on the grounds that Wyeth complied with federal regulatory requirements."
So let me get this straight. Wyeth is being sued because the physician did not know what was the safest and best protocol to use and because his assistant botched up the operation?
We do not buy Wyeth’s argument that it did everything it needed to, or could have done, to warn doctors about the dangers involved in the treatment Ms. Levine received. Wyeth did warn of some dangers of the drug treatment, in words approved by the F.D.A., but the state court was well within its rights to conclude that those warnings were insufficient.
In fact here's the shocker. Wyeth does have a strong warning against such an injection on its label.
"Under no circumstances should PHENERGAN Injection be given by intra-arterial injection due to the likelihood of severe arteriospasm and the possibility of resultant gangrene"What more do you want the company to do? Emphasize "under no circumstances" three times? Were they also supposed to say, "Do not inject this drug directly into the heart"? I find this case outright bizarre.
Somehow the NYT also ties this event to the Bush administration's argument that companies should be protected from lawsuits if the FDA has completely approved their drug and the way it's prescribed. If anything, shouldn't the FDA be sued for not making sure that the company had all the warnings adequately written on the label here? I share the NYT's general contempt for industry-protecting Bush policies. But in this case the policy seems to make sense to me. If the FDA is supposed to be the "decider" when it comes to approving drugs, why should companies bear the brunt of failed drugs if the FDA has already approved them?
It is sad when general opinions that are justified lead to specific views that are not.
Selective vs Multitargeted Kinase Inhibitors: Still in the Stone Age
In the conference on kinase inhibitors I attended recently, there was a panel discussion on the second morning (why do these discussions have to start at 7:30 a.m.?) about the utility of selective vs multi-target-directed inhibitors. The conventional wisdom has been that selective inhibitors- or any selective drugs for that matter- are best, since off-target effects can cause toxicity. The fight against cancer has largely been about finding selective and therefore safe drugs that hit targets only in cancer cells. It is a measure of how little we have accomplished in cancer therapy, in spite of the countless dollars spent, that we are still far from rationally designing reliable, selective and safe cancer drugs.
The discussion we had did not end in any consensus. While selective drugs may clearly be good in certain cases, there are drugs designed for selectivity that turned out, only in retrospect, to owe their action to hitting multiple targets. Gleevec, the revolutionary drug for treating chronic myeloid leukemia, is a classic example. Initially supposed to be a "magic bullet" that targeted only a mutated kinase named Bcr-Abl in cancer cells, Gleevec later turned out to also potently inhibit two other kinases, c-Kit and PDGFR. Interestingly these two kinases are valuable targets in two other cancers, gastrointestinal stromal tumors and glioblastoma of the brain.
In any case, the consensus was that we are still far away from designing drugs for a specifically chosen subset of targets. Something like staurosporine that hits almost every kinase out there is undoubtedly going to be gratuitously toxic. But inhibitors hitting a very specific subset of kinases could target a few crucial choke-points in disease pathways, thus serving as valuable drugs. We are still far from rationally designing such inhibitors; indeed, we don't even know in the first place which specific subset of kinases to hit for treating a particular disease. First comes target validation, then modulation. Most kinase inhibitors that do hit a useful subset seem to be discovered as such only in retrospect. In my own project, where we are trying to target a single kinase selectively, we are now skeptical about whether the beneficial effects we are observing are in fact due to multi-target binding.
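As an aside, even putting a number on where an inhibitor sits on the selective-to-promiscuous spectrum is non-trivial. One simple idea is a Shannon-type entropy over panel profiling data; here is a minimal sketch, where the kinase names and Kd values are entirely invented for illustration and the metric itself is just one plausible choice:

import math

# Hypothetical panel data: Kd values (nM) for one inhibitor.
# All names and numbers are invented for illustration.
kd_nM = {"ABL1": 1.0, "KIT": 5.0, "PDGFRA": 8.0, "SRC": 2000.0, "EGFR": 5000.0}

# Convert each Kd to an association constant and normalize, so that
# tighter binding carries a larger weight in the distribution.
ka = {kinase: 1.0 / kd for kinase, kd in kd_nM.items()}
total = sum(ka.values())
phi = {kinase: k / total for kinase, k in ka.items()}

# Shannon-type selectivity entropy: near zero means binding is concentrated
# on one kinase; the maximum (ln N) means staurosporine-like promiscuity.
entropy = -sum(p * math.log(p) for p in phi.values())
print(f"selectivity entropy = {entropy:.2f} (panel maximum = {math.log(len(kd_nM)):.2f})")

A truly narrow inhibitor would score near zero on such a measure, a staurosporine-like one near the panel maximum; an inhibitor aimed at a small subset of choke-point kinases would sit, by design, somewhere in between.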
The other, unrelated point we discussed was whether anybody knew of kinase inhibitors nearing the completion of clinical trials for indications other than oncology. The silence around the table spoke for itself.
The bottom line is: as far as targeting specific subsets of kinases with inhibitors, or even knowing which specific subset to target, is concerned, we are still in the Stone Age of kinase drug discovery. The drugs we have are largely still stones and tree branches. We have a long way to go before discovering tools and bronze.
In the next post, I will talk about a recent effort that overcame the hurdles of rational multi-kinase inhibitor design for two very different kinases. It points the way forward.
History
Image: New York Times
Till I was about 13 or 14 years old, my readings of American history consisted only of offerings from the history of the United States during World War 2, an old and enduring historical interest of mine. It was when I picked up Harold Evans's The American Century, a superb and magisterial illustrated history of the country during the twentieth century, that I became painfully and woefully aware of the injustice that African-Americans faced in this country for two hundred years. I was horrified to read about Jim Crow, the dog squads and water hoses on the streets of Birmingham, Alabama, and the lynchings in Mississippi. As a boy who was about the same age then, I was especially sickened and completely shaken by the relatively recent story of Emmett Till, a story that has been vividly seared into my mind ever since.
I could not believe that this was the country enshrined in the Declaration of Independence, the land which first and foremost looked at the integrity of one's character and one's abilities and not where one came from. And yet I saw hope and fundamental human decency in Martin Luther King and the Civil Rights movement. Since then, I have often wondered whether any event or moment in the United States could possibly transport me back to a pre-Harold Evans time when I had a singularly auspicious and pristine perception of this country. Such a moment can never fully come, because one cannot erase the scars inflicted on this country's character over two hundred years. But I am convinced that if there has been a moment in my life that came closest, it was last night.
At midnight, I stood on the 15th floor of the Hyatt Regency Hotel and amid the car horns constantly blaring on the street downstairs, I strained my ears to catch every word that he spoke on the TV screen. The overriding feeling among everyone around me was one of peace and relief and tears even more than elation.
He looked tired, relieved and happy but not jubilant. He knows the difficult task that lies ahead and knows that celebration right now is premature. He knows that there's much to be done and that this is just the beginning. He knows that he may not be able to bring about a sea change in the way things have been done. But he knows that he will nudge the country in the right direction by valuing and fostering rationality and honest debate. He knows he will be upfront and forthright about what he thinks and he understands the value of the journey, even if the final destination may not be known. He understands the value of incremental progress.
And he knows that his extraordinary story, culminating in last night's victory, healed at least some of the internal divisions among his own people and will go a long way in reviving his country's image in the world as the land of opportunity, diversity and respect.
He did it. Now we have to do it. Now we can get back to our lives.
Save Science on Tuesday
It was Richard Nixon who got rid of the President's Science Advisory Committee during his tenure, and it has not been resurrected since. In the 80s, Ronald Reagan embraced the idealistic vision of Star Wars, a pipe dream that did not have a valid scientific basis. In the 90s, Congress got rid of the Office of Technology Assessment, which was supposed to provide the country's political leaders with bipartisan scientific advice. On the whole, science in the last twenty-five years has been on a downhill path as far as respect for it in political circles is concerned.
Although George Bush's administration has been the single largest malefactor against science and all it stands for, and although Republicans in general have done more damage to science, all administrations since the 1970s have been lax and negligent in supporting science and its essential spirit. As I have written before, the issue goes far beyond the important one of providing funding for basic scientific research. It has to do with trusting unbiased advice that tries to give you a picture of the world as it is, and not as you would like to see it. It has to do with promoting and respecting open-mindedness and true bipartisan debate. Science has thus always stood opposite dogma, a fact that is usually hard to swallow for politicians who want to color the world with their own ideological brush. Theirs is a wholly self-defeating attitude, because disrespect for science means an abandonment of informed decision-making, and eventually a sure path to a country's spiral into regress.
Barack Obama is not good for science because he is a liberal Democrat. He is good for science because he largely stands for all that science traditionally has: open minds, patient and careful thought, forthrightness and respect in listening to dissenting opinions, a mistrust of blind reliance on authority and a willingness to listen to all sides of a debate before making an informed decision. Obama also knows he is not perfect, and so embodies another key aspect of science: the ability to understand one's deficiencies and limitations and seek the best possible advice to overcome them. There is scarce doubt that he will bring knowledgeable science advisors into the White House and that he will take seriously the advice of people with whom he may not agree. At the same time he will weigh all the options and sides and try to make as unbiased a decision as he can. In an age of climate change, evolution, food crises, energy crises, drug resistance and nuclear terrorism, science is going to become an increasingly key and vocal part of the national debate and the future of this country. Obama understands this. Maybe that's why, a few days ago, 76 Nobel Prize winners, represented by the great physicist Murray Gell-Mann, wrote an open letter to the American people and endorsed Obama as the most prudent choice for science in this country.
The American people need to reclaim their lost preeminence in science and technology and their respect for learning and rationality. They need to reaffirm their place in the world as the land where open minds meet unlimited resources and intellectual capital. The time has come when this land needs to save science from itself. With this in view, anyone who deeply cares about science, reason and objective thought should vote for Barack Obama on Tuesday.
Break
I am in Boston for a kinase inhibitors conference this week, so I may not be able to blog, except when I want to complain about some kinase inhibitor speaker. Enjoy the fall.
The Unbearable Heat Capacity of Being
There is a peculiar connection in my mind: that between thermodynamics and Beethoven's 5th symphony. I was in my final year of high school and it was a rainy and stormy night outside. I desperately had to study thermodynamics for my final exam. The only light that was on was from my table lamp, and I was listening to Beethoven's 5th symphony for the 2nd or 3rd time. Somehow, within the mystical shadows and strange shapes manifested by the light, the strains of the strings and the equations of entropy formed a hybrid meld in my mind that has never dissociated. After that night, whenever I read thermodynamics, I don't always remember Beethoven's 5th. But whenever I listen to Beethoven's 5th, I am immediately transported back to that night, into the middle of a fluid energy landscape if you will.
Since then thermodynamics has been an enduring interest of mine. Another reason why it has been an interest of mine is because I don't understand it very well. In my opinion thermodynamics is one of those difficult subjects like quantum mechanics, where a great deal of effort has to be put into understanding abstract concepts and even then concepts remain elusive. Maybe it's a feature of all those sciences that are intimately bound with the fabric of matter and life. It is relatively easy to colloquially grasp entropy as an increase in disorder- we can grasp this point every time we put ice in our drink even as we struggle to understand thermodynamic principles- but much harder to get the physical meaning of the derivative of the pressure with respect to the entropy or some similar expression. Enter the Maxwell relations.
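For the curious, the trick that makes such abstract derivatives tractable is that the thermodynamic potentials are exact differentials, so their mixed second derivatives must be equal. A standard textbook derivation, sketched here from the Gibbs energy:

dG = -S\,dT + V\,dp \;\Longrightarrow\; \left(\frac{\partial S}{\partial p}\right)_T = -\left(\frac{\partial V}{\partial T}\right)_p

The left-hand side, how entropy changes with pressure, is nearly impossible to picture; the right-hand side is just the thermal expansion of the sample, something a thermometer and a dilatometer can measure.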
Over the years I have found myself coming back to thermodynamics and repeatedly trying to understand its fine points. I have a long way to go but I am confident I am going to continue my frequently ineffectual efforts. There are some classic books which I have encountered on the way that have served as guides, sometimes strict and sometimes gentle- Enrico Fermi's "Thermodynamics" is a jewel still in print, the thermodynamics treatment in Alberty and Silbey's physical chemistry book is quite nice and Ken Dill's Molecular Driving Forces has the best treatment of statistical thermodynamics applied to chemical and biological systems that I am aware of. There's also an old book on thermodynamics which is gold- Samuel Glasstone's "Thermodynamics for Chemists".
I cannot deny the value of thermodynamics and what it has taught me. Thermodynamics has been immensely useful in understanding computational chemistry, conformational changes in biomolecules and especially protein-ligand binding. All that really matters for protein-ligand binding and the orchestration of the actions of numerous naturally occurring ligands and drugs is the free energy change ∆G. There is one overriding goal today among the groups of people who are in the business of prediction: to predict binding affinity from first principles. Free us, they say, free us from the constraints of predicting free energy.
There is an all-pervasive equation relating ∆G to the equilibrium constant of a reaction: ∆G = -RT ln K. This is perhaps the single most compelling equation in biology. Why? Because it tells you that life lives within a roughly 3 kcal/mol energy window. All the jiggling that transmits signals, folds proteins, docks molecules and makes neurons buzz mainly happens within a 3 kcal/mol world. That does not of course mean that no process can have a ∆G of more than 3 kcal/mol, but it does mean that fragile life is pretty tightly constrained and can call the shots only within a limited thermodynamic domain. The reason is that a ∆G of -3 kcal/mol means that the favourable side of a reaction exists to the extent of about 99.4% at room temperature; the exponential dependence of K on ∆G takes care of this. 3 kcal/mol is all a protein needs to toss at a ligand to decisively shift the equilibrium to the side of the bound ligand. It can of course toss more, but 3 is enough. One of the reasons why prediction of binding affinity is still so difficult is that 'small' errors of 1 kcal/mol or so translate into huge differences in equilibria. Nature, with its fondness for exponentials, has doomed life- and chemists- to operate in a straitjacket.
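A quick back-of-the-envelope sketch of that arithmetic, assuming room temperature (298 K, so RT is about 0.59 kcal/mol):

import math

R = 1.987e-3  # gas constant, kcal/(mol*K)
T = 298.0     # room temperature, K

def K_eq(dG):
    """Equilibrium constant for a standard free energy change dG (kcal/mol)."""
    return math.exp(-dG / (R * T))

# A -3 kcal/mol binding event versus a 1 kcal/mol 'error' in the prediction.
for dG in (-3.0, -2.0):
    K = K_eq(dG)
    print(f"dG = {dG:+.1f} kcal/mol: K = {K:6.1f}, favoured side = {100 * K / (1 + K):.1f}%")

The two equilibrium constants differ by a factor of about five, which is exactly why 'small' 1 kcal/mol errors are so devastating to affinity predictions.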
But this same fondness has also made it possible to modulate different reactions and binding events in living systems with exquisite precision. The 3 kcal/mol value perfectly encapsulates the workings of such critical interactions as hydrogen bonds and van der Waals forces. Expulsion of water, making and breaking of salt bridges, dispersion interactions, peptide hydrogen bond formation; everything can take place within 3 kcal/mol. At the same time the magic number of 3 also ensures that these interactions can be fleeting and rapidly annihilated, and molecular partners can dissociate whenever necessary. What reins us in also frees us to explore an ever-widening energy landscape of weak interactions that strike the precise balance. By consigning life to these narrow energetic windows, nature has also liberated it from the monstrous thermodynamic calamities that would have snuffed it out. We can be fortunate that we are not asymptotically free.
But ∆G is like statistics (or some would say like skirts); it hides much more than it reveals. Most techniques can give you ∆G, but unraveling the details of a molecular process benefits immensely from knowledge of ∆H and ∆S, the two crucial components that make up another of biology's key equations: ∆G = ∆H - T∆S. Compare ligand binding to ballroom dancing: what matters is not only how steadily you can hold on to your partner but also how flexible you concomitantly are. The correct combination of motion and attraction can provide a cascade of favourable events. Ditto for ligand binding. Techniques like calorimetry can provide these valuable details. In principle, infinitely many combinations of ∆H and ∆S can add up to a given ∆G value, which is all the more reason for finding out the exact composition behind a particular value. Two isoenergetic processes need not be either isoenthalpic or isoentropic. In a future post, I will mention a review that explores this aspect; suffice it for now to say that subtle differences in structure may give us the same ∆G but a very different decomposition into ∆H and ∆S. Generally, intermolecular forces contribute the most to ∆H, while hydrophobic effects and the freeing up of water contribute dominantly to ∆S.
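To make that degeneracy concrete, here are two invented decompositions at 298 K that both yield ∆G = -3 kcal/mol, the first enthalpy-driven and paying an entropic penalty, the second entropy-driven:

\Delta G = \Delta H - T\Delta S: \qquad (-13) - (-10) \;=\; (-1) - (+2) \;=\; -3\ \text{kcal/mol}

∆G alone cannot tell these two binding events apart; calorimetry can.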
And so life lives and breathes, supported on two stilts. These two equations, one endowing biological reactions with the correct equilibria and the other modulating biological action by injecting the precise dosage of two key quantities are like the Magi. They bring us great gifts of understanding and insight. They ask only that we give them a patient ear.
Off the list
Nobody gets a prize for predicting the Chemistry Nobel this year; it was as much of a softball prediction as you can imagine. But at least there's one less person to gossip about now, and hopefully no acrimonious debates.
2008 Medicine Nobel: Montagnier finally wins
If you knew little about the Nobel prizes, you could easily be forgiven for assuming that somebody must have already won the Nobel for discovering the AIDS virus. Many people probably do assume this. It just seems hard to believe that such an important discovery has not already been recognized by the prize.
And yet, those who know the history know about the acrimonious dispute between Frenchman Luc Montagnier and American Robert Gallo about priority. The two were involved in a protracted and cantankerous debate with both camps claiming that they were the ones who discovered HIV and demonstrated its action. When I read the history, to me it was always clear that it was Montagnier whose team not only undoubtedly first isolated the virus, but actually proved that HIV causes AIDS, an absolutely crucial step in establishing the identity of a causative agent and a diagnostic step for the disease. While Gallo also played an important role in the latter, the history also indicated to me that he had engaged in some pretty cunning and disingenuous political manipulation to claim priority for the discovery.
It didn't really seem that the prize would be awarded to both of them. It may well not have been awarded to either of them; the Nobel committee usually steers clear of controversial people and topics. But it seems to have realized that it can no longer neglect the truly important people behind such an obviously groundbreaking discovery. So Luc Montagnier, along with Françoise Barré-Sinoussi, has finally been awarded the 2008 Nobel Prize in Physiology or Medicine. Barré-Sinoussi first isolated HIV. The committee is clearly trying to avoid controversy by specifically saying that the prize is for discovering HIV. Even Gallo should not have a problem conceding that it was Montagnier and Barré-Sinoussi who first saw and isolated the virus.
The other half deservedly goes to Harald zur Hausen, discoverer of the human papillomavirus which causes cervical cancer.
I would recommend reading Virus, Montagnier's story of his life and his work.
Emory in a little, Nemeroff in big, trouble
One of the perks of becoming an academic professor is the side income you can generate by consulting for companies, especially pharmaceutical companies. While that is a healthy way of supplementing your income, and in fact provides some incentive for people to go into academia, it is imperative to disclose any conflict of interest you may have; most authors do so in journal articles, for example. This is absolutely key if you are using a company's products in your supposedly fair, unbiased and balanced academic research.
Apparently Charles Nemeroff, Chair of the Psychiatry Department at Emory University, does not think so. I was a little shocked at the news partly because these days I am reading the classic psychopharmacology textbook that he co-authored with Alan Schatzberg and finding it quite eye-opening. But nothing in the book opened my eyes wider than this piece of news from the NYT:
One of the nation’s most influential psychiatrists earned more than $2.8 million in consulting arrangements with drug makers from 2000 to 2007, failed to report at least $1.2 million of that income to his university and violated federal research rules, according to documents provided to Congressional investigators.
And why was disclosing this windfall deathly important for Dr. Nemeroff? Well, because:
The psychiatrist, Dr. Charles B. Nemeroff of Emory University, is the most prominent figure to date in a series of disclosures that is shaking the world of academic medicine and seems likely to force broad changes in the relationships between doctors and drug makers.
In one telling example, Dr. Nemeroff signed a letter dated July 15, 2004, promising Emory administrators that he would earn less than $10,000 a year from GlaxoSmithKline to comply with federal rules. But on that day, he was at the Four Seasons Resort in Jackson Hole, Wyo., earning $3,000 of what would become $170,000 in income that year from that company — 17 times the figure he had agreed on.
Dr. Nemeroff was the principal investigator for a five-year $3.9 million grant financed by the National Institute of Mental Health for which GlaxoSmithKline provided drugs.
So Nemeroff was on an NIH grant that involved using GSK drugs, and was getting paid princely sums by GSK at the same time. It's hard to find a better definition of conflict of interest. And even in an age when the sum of $700 billion is being bandied around rather casually, $2.8 million is still a lot of money.
Nor does this seem to be the first time he has blurred the line. In 2006 he seems to have stepped down from the editorship of the journal Neuropsychopharmacology after he published an article using a device whose manufacturer was paying him. As detailed above, he had also had a run-in with Emory in 2004, when he promised not to make too much money off his consulting. Dr. Nemeroff regularly gives talks in which he discusses the benefits of drugs like Paxil.
It's also interesting that some people suspect that Nemeroff may have had a hand in David Healy being denied his position at the University of Toronto. Healy has written a very interesting book called "Let them eat Prozac" which rather meticulously and candidly documents the alarming incidence of suicide attempts by patients on SSRIs. Apparently Healy faced a lot of hostility from the establishment and...surprise...from pharma when he tried to go public with these findings. It's all disturbing.
I really hope Emory takes some drastic action against what seems to be a repeated violation of some extremely important and time-honored guidelines of research. It's getting uncomfortable, and fingers are being pointed at the school for not noticing this and taking action earlier. The sooner the university acts, the better it can save face and avoid embarrassment.
But the gnawing questions remain. Since the line between a productive, honest academic-corporate relationship and an unholy nexus seems to be thin indeed, who regulates such collaborations, and how can they do it? Sadly, we know all too well who pays.
Other links: The Carlat Psychiatry Blog, University Diaries, Pharmalot
Fizz or Fizzle: The 2008 Nobels
It's that time of the year again. I have already made predictions in 2006 and 2007, and the last year hasn't exactly seen a windfall of novel discoveries that would suddenly add 10 new names to my list. So the lists largely hold. But what does happen in one year is that the Nobel Committee's moral baggage becomes indisputably heavier. When, for example, are they going to seek repentance for their misses by acknowledging:
Roger Tsien
Martin Karplus
The Palladium Gang (Heck, Sonogashira, Suzuki)
Stuart Schreiber
Ken Houk
As for Sir Fraser Stoddart, I personally think that he may get it in the future when a few more practical applications are found for his toys and methods. (On the other hand, I still claim credit for mentioning his name if he wins it.)
Like last year, fields can also get rewarded through individuals; I personally would be buoyant if my favourite fields- computational chemistry, biochemistry and organic chemistry- win. I also think that Robert Langer can get it for medicine and single molecule spectroscopy may win for either physics or chemistry. Some x-ray structure of an important protein always stands a chance. The interesting thing about the Nobels is that they often reward things that are so important and widespread that we have all taken them for granted and therefore never think of them; no blogger thought of RNAi for example.
But whoever wins, every time the Nobel committee awards the prize, they inevitably commit a grave injustice since somebody deserving is left out. But then that's the nature of man-made accolades. Fortunately most scientists don't depend on such honors and instead are rewarded by nature's sure award; the kick that one gets from scientific discovery, as this guy can describe very well.
And so it goes.
Update: Here's a dark horse prediction for me- geochemistry or climate chemistry. As far as I know, the last climate chemistry prize was won a pretty long time ago for the discovery of the effects of CFCs on the ozone layer.
Links: Other and similar predictions- The Chem Blog and the Skeptical Chymist. The Coronenes have rightly risen above the committee and awarded their own prize. Now that's the kind of assertiveness that we need.
What's an Iodide doing stabilizing a helix?
One of the most important- and least understood- effects dealing with biomolecular structure concerns the effects of salts on protein conformation. The famous Hofmeister Series for ions that either 'salt-in' or 'salt-out' proteins is well known, but the mechanism through which the ions act is controversial and probably involves not one mechanism but different ones under different circumstances.
In an interesting single-author JACS paper, Joachim Dzubiella studied the effects of different salts of sodium and potassium on the structure of alpha helices in solution. Even something as common and widely studied as the alpha helix is still an enigma. For example, the simple question "What contributes to the stability of an alpha helix?" is controversial and not fully answered yet. In this context I will refer the reader to an excellent perspective written by Robert Baldwin at Stanford that tries to answer the rather simple question: "How much energetic stability does a peptide hydrogen bond in a helix contribute?". Baldwin looks at two approaches to the problem. One is the 'hydrogen bond inventory' approach, which simply lists the bonds broken and formed on each side when an amide group desolvates and forms a peptide hydrogen bond. Based on this approach, the mean peptide h-bond energy has been estimated as 0.9 kcal/mol per h-bond. Even though this quantity is small, a 100-residue protein in which 70 residues form hydrogen bonds clearly gains a very substantial net stabilization. The second approach that Baldwin considers is the electrostatic solvation enthalpy or free-energy method, where one uses the Born equation to estimate the strength of an h-bond. Using this approach Baldwin gets a very different answer: 2.5 kcal/mol. Clearly there is still some way to go toward estimating how much peptide h-bonds contribute to stability. One important factor not considered by Baldwin is the entropy of the water. Another important factor that he does consider is the preferential desolvation for helix formation, which depends on the exact residues involved. We have ourselves encountered desolvation issues in continuing work on amyloid beta-sheets.
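For reference, the Born model invoked above estimates the electrostatic free energy of transferring a charge q, housed in a sphere of radius a, from vacuum into a continuum of relative permittivity ε_r. This is the standard textbook form rather than Baldwin's exact working:

\Delta G_{\mathrm{Born}} = -\frac{q^{2}}{8\pi\varepsilon_{0}a}\left(1 - \frac{1}{\varepsilon_{r}}\right)

Turning this into an h-bond strength means choosing partial charges and effective radii for the amide donor and carbonyl acceptor, and the answer is only as good as those choices, which is part of why the two approaches disagree.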
But back to Dzubiella's paper. Dzubiella uses MD simulations to study the dynamics of helix-salt interaction. He considers helices in which an i→(i+4) salt bridge between the side chains of a glutamate and a lysine stabilizes the conformation, and looks at which salts stabilize helices and which ones destabilize them. From these detailed simulations he gains some valuable insight into the radically different behavior of rather similar ions. For example, K+ ions are much less able to destabilize helices than Na+ ions. This is due to the preferential interaction of Na+ with the carboxylate groups involved in salt-bridge formation; because of its smaller size, Na+ is better able to interact with carboxylates than K+.
However, we have to remember that Na+ or K+ or any of the other ions have to compete with water when interacting with amino acids in the peptide. Water is in great excess, and water also efficiently interacts with carboxylates (1). The MD simulations reveal that a curious and unexpected helper comes to the aid of the Na+ ions- I- ions. Iodide, interestingly, interacts with the non-polar parts of the peptide, thus "clearing" water away and paving the way for Na+ to access the carboxylates and carbonyls. This unexpected observation again sheds light on the different properties of iodide compared to the other halides (2). Iodide is much bigger, has a diffuse charge and is therefore much more polarizable. Apparently it is so electronically watered down that even carbon thinks it is harmless and can preferentially interact with it.
This curious observation tells us that we know less about the elements than we think. From the observation of weak hydrogen bonds and halogen bonds to the unexpected non-polarity of iodide, surprises await us in the realm of biomolecular structure, and indeed in all of chemistry. It is also thanks to tools like MD that we can now gain insight into the details of such molecular interactions.
Notes:
(1) In fact water can interact so well that it might steal a few h-bonds from the peptide and destabilize the helix. That's why trifluoroethanol (TFE) or hexafluoroacetone are so good at stabilizing helices (these lead to "Teflon-coated peptides"), because the fluorine cannot steal h-bonds from the peptide backbone.
(2) For example, iodine most efficiently forms halogen bonds with oxygen, a phenomenon now well-accepted.
References:
Joachim Dzubiella (2008). Salt-Specific Stability and Denaturation of a Short Salt-Bridge-Forming α-Helix. Journal of the American Chemical Society. DOI: 10.1021/ja805562g
R. L. Baldwin (2003). In Search of the Energetic Role of Peptide Hydrogen Bonds. Journal of Biological Chemistry, 278 (20), 17581-17588. DOI: 10.1074/jbc.X200009200
Please, don't stand in the way of this man
This is as sensible and assertive a statement about evolution as we can expect from a Presidential candidate:
Do you believe that evolution by means of natural selection is a sufficient explanation for the variety and complexity of life on Earth? Should intelligent design, or some derivative thereof, be taught in science class in public schools?
Obama: I believe in evolution, and I support the strong consensus of the scientific community that evolution is scientifically validated. I do not believe it is helpful to our students to cloud discussions of science with non-scientific theories like intelligent design that are not subject to experimental scrutiny.
This is from the latest issue of Nature, whose cover story is about the candidates' views on scientific issues, views that are going to be of paramount importance to the future well-being of this country. Nature asked the candidates 18 questions about science and technology, including questions about increasing funding for basic research, speeding up the track to permanent residency for talented foreign students, and pumping more funds into biomedical innovations.
Not surprisingly, McCain's camp declined to answer with specifics, and Nature dug up relevant statements from his old speeches that mainly included boilerplate sound bites. Obama's camp on the other hand provided rather eloquent and clear answers that actually talk about facts. It's pretty amazing to hear answers that are actually filled with details about science. McCain's cast of "science" advisors looks like a Gilligan's Island outfit and includes former HP chief Carly Fiorina (who thinks Sarah Palin is quite competent to be President), James Woolsey, a former CIA director, and Meg Whitman, former CEO of eBay. This group seems as miscast for science as Sarah Palin is for being Vice President. Obama's advisors on the other hand include some real scientists, including Dan Kammen from Berkeley and Harold Varmus from Sloan Kettering Cancer Center.
Obama would speed up the residency process for foreign students and minimize barriers between private and public R&D (this is going to be very important). And Obama is as clear about nuclear energy as anything else:
What role does nuclear power have in your vision for the US energy supply, and how would you address the problem of nuclear waste?
Obama: Nuclear power represents an important part of our current energy mix. Nuclear also represents 70% of our non-carbon generated electricity. It is unlikely that we can meet our aggressive climate goals if we eliminate nuclear power as an option. However, before an expansion of nuclear power is considered, key issues must be addressed, including security of nuclear fuel and waste, waste storage and proliferation. The nuclear waste disposal efforts at Yucca Mountain [in Nevada] have been an expensive failure and should be abandoned. I will work with the industry and governors to develop a way to store nuclear waste safely while we pursue long-term solutions.
Most importantly, Obama promises to reform the political environment for scientific opinion; this would include appointing a Chief Technology Officer for the government and strengthening the President's Science Advisory Committee, a key source of scientific advice for the President that was abolished by the odious Richard Nixon:
Many scientists are bitter about what they see as years of political interference in scientific decisions at federal agencies. What would you do to help restore impartial scientific advice in government?
Obama: Scientific and technological information is of growing importance to a range of issues. I believe such information must be expert and uncoloured by ideology. I will restore the basic principle that government decisions should be based on the best-available, scientifically valid evidence and not on the ideological predispositions of agency officials or political appointees. More broadly, I am committed to creating a transparent and connected democracy, using cutting edge technologies to provide a new level of transparency, accountability and participation for America’s citizens. Policies must be determined using a process that builds on the long tradition of open debate that has characterized progress in science, including review by individuals who might bring new information or contrasting views. I have already established an impressive team of science advisers, including several Nobel laureates, who are helping me to shape a robust science agenda for my administration.
This point is the most encouraging policy vision, coming after an 8-year tradition of bullying, manipulating, cherry-picking, ignoring and roughing up science and objective facts. The cost of scientific ignorance will be paid in progress in all its forms.
Reading this is like being immersed inside a gutter for 8 years and suddenly coming up for fresh air in the bright sunlight with a gasp. We finally see a political leader who can actually think and give serious thought to all sides of a problem including dissenting ones. There's a scientist in Obama somewhere. This man deserves to lead this country. This country (at least for those who care) deserves to be led by this man.
Thinking about Alzheimer's Disease as Historians
Head slumped forward, eyes closed, she could be dozing — or knocked out by the pharmacological cocktails that dull her physical and psychic pains.
I approach, singing “Let Me Call You Sweetheart,” off key. Not a move or a flutter. Up close, I caress one freckled cheek, plant a kiss on the other. Still flutterless.
More kisses. I press my forehead to hers. “Pretty nice, huh?” Eyelids do not flicker, no soft smile, nothing.
She inhales. Her lips part. Then one word: “Beautiful.”
My skin prickles, my breath catches.
It is a clear, finely formed “beautiful,” the “t” a taut “tuh,” the first multisyllable word in months, a word that falls perfectly on the moment.
Then it is gone. The flash of synaptic lightning passes. That night, awake, I wonder, Did Pat choose "beautiful?" Or did "beautiful" choose Pat? Does she know?
This heartbreaking account by a husband of his wife's early slide into Alzheimer's Disease (AD) reminds us of how much we need to do to fight this disease. I personally think that of the myriad diseases afflicting humankind, AD is probably the cruelest of all. Pancreatic cancer might kill you in three months and cause a lot of pain, but at least you are in touch with your loved ones till the end. This is human suffering on a totally different level.
The search for the causes of Alzheimer's disease goes on, and I have recently been thinking in a wild and woolly way about it from an evolutionary standpoint. While my thoughts have not been well-formed, I want to present a cursory outline here.
The thinking was inspired by two books, one praised by many and the other discounted by many. The lauded book is Paul Ewald's "Plague Time", which puts forth the revolutionary hypothesis that the cause of most chronic diseases is ultimately pathogenic. The other book, "Survival of the Sickest" by Sharon Moalem, puts forth the potentially equally revolutionary hypothesis that most diseases arose as favourable adaptations to pathogenic onslaughts. Unfortunately the author goes off on a tangent making too many speculative and unfounded suggestions, leading some to consider his writings rather unscientific. As far as I am concerned, the one thing the book does offer is provocative questions.
On the face of it, both these hypotheses make sense. The really interesting question about any chronic disease is: why have the genes responsible for that disease endured for so many millennia if the disease kills you? Why hasn't evolution weeded out such a harmful genotype? There are two potential answers. One is that evolution simply has not had the time to do this. The other, more provocative, is that these diseases were actually beneficial adaptations against something in our history, adaptations whose benefits outweighed the obvious harm they caused. While that something probably does not prevail at present, it was significant in the past. What factor could possibly have existed that demanded such a radical adaptation to fight it?
Well, if we think about what we humans have been fighting most desperately and constantly ever since we first set foot on the planet, it has to be a foe far older and more exquisitely adapted than we ever were: bacteria. The history of disease is largely the history of a fierce competition between humans and bacteria, one that plays by the rules of natural selection and is relentless and ruthless. For most of our history we have fought all kinds of astonishingly adaptable bacteria, and there have been millions of casualties in this fight, both bacterial and human. Only recently have we eroded their malign influence with antibiotics, and only partially. Bacteria keep evolving and developing resistance (MRSA killed about 18,000 people in the US in 2005), and some think it's only a matter of time before we enter a new and terrifying age of infectious diseases.
So from an evolutionary standpoint it's not unreasonable to assume that at least a few genetic adaptations developed in us to fight bacteria, since that fight, more than anything else, has kept our immune system busy and our mortality high from the very beginning. But instead of thinking about genes, why not think about phenotypes? Hence the hypothesis that many of the age-old chronic diseases that are currently the scourge of humanity may at some point have been adaptations against bacterial infection. While the harm done by these diseases is obvious, their benefits may once have outweighed that harm.
When we think of chronic diseases a few immediately come to mind, most notably heart disease, diabetes, Alzheimer's and cancer. But one of the best cases illustrating this adaptive tendency is hemochromatosis, a disorder of excess iron absorption and storage, and it was this disease that got me thinking about AD. A fascinating evolutionary explanation has been proposed for hemochromatosis. When certain bacteria attack our system, one of the first nutrients they need for survival is iron; by locking down its iron stores the body can protect itself from them. One species that especially needs iron is Yersinia pestis, the causative agent of the plague. When Yersinia attacks the human body, macrophages rally to the body's defense to swallow it, and Yersinia exploits the iron resources inside those macrophages. If the body keeps iron away from macrophages it keeps iron away from Yersinia, but that leads to a buildup of iron elsewhere in the body: hence hemochromatosis. The evidence is supposed to come from the Black Plague, which swept Europe in the Middle Ages and killed almost half the population; the gene for hemochromatosis has a markedly higher frequency among Europeans than among other populations. Could it have been passed on because it protected the inhabitants of that continent from the plague? It's a tantalizing hypothesis, and there is some suggestive correlation. Whether or not it's true in this case, I believe the general strategy of looking for past pathogenic causes that may have triggered chronic disease symptoms as adaptations is sound, and in principle testable. Such hypotheses have been formed for other diseases and are documented in the books.
But I want to hazard such a guess for the causes of AD, thinking along the same lines as for hemochromatosis. Apart from the two books, my thinking was also inspired by recent research suggesting that amyloid peptide, a ubiquitous signature of AD, binds copper, zinc and possibly iron to generate free radicals that cause oxidative damage to neurons. Oxidative damage it may cause, but note that oxidative damage is also extremely harmful to bacteria. Could amyloid have evolved to generate free radicals that kill pathogens? It could also serve a further valuable function akin to that in hemochromatosis: keeping essential metals from bacteria by binding to them. That would be a double whammy: denying bacteria their essential nutrients and bombarding them with deadly free radicals. The damage that neurons suffer might be a small price to pay if the benefit was the death of lethal microorganisms.
To test this hypothesis, I need to know a few things:
1. Are there in fact bacteria that are extremely sensitive to copper or iron deficiency? Yersinia is certainly one, and most bacteria are, to varying extents. But since AD affects the brain, I am thinking of bacterial infections that affect the brain. How about meningitis caused by Neisseria, still one of the deadliest bacterial diseases and almost certainly a death sentence if untreated? Many other diseases also affect the brain if left untreated; the horrible dementia of late-stage syphilis comes to mind. The brain could defend against any of these deadly species by locking down its stores of metal nutrients and generating free radicals to kill them, a dual function that amyloid could serve. I cannot yet say which of these bacteria amyloid and AD might have evolved against; it could have been a single species, or a general response to many. I am still exploring this aspect of the idea.
2. More importantly, I need epidemiological information about the various epidemics that swept the world over the last thousand years or so. In the case of hemochromatosis, the causative stimulus could be pinned down to Yersinia because both the disease etiology and the pandemic are documented in detail. I cannot easily find such detailed information for meningitis, syphilis or other outbreaks.
3. In addition, while risk factors have been suggested for AD (for instance the ApoE epsilon4 allele), no specific genes have been established as causal for the common late-onset form of the disease, and there is a real problem separating correlation from causation here. Also, the important role of environmental factors such as stress and diet is becoming clear; AD is certainly not an exclusively genetic disease, and probably not even predominantly so.
4. Most importantly, I think it is nearly impossible to find instances of AD clusters in history, for a simple reason: the disease was unknown before 1906, when Alois Alzheimer first described it. Even today it is not easy to diagnose. Any case of Alzheimer's from more than a hundred years ago would have been dismissed as dementia caused by old age and senility. Thus, while the causative hypothesis is testable, its historical effects are hard to investigate.
The fact that AD is a disease of old age might lend some credence to this hypothesis. Two things happen in old age. First, the body's immune defenses start faltering, which might require the body to marshal extra help against pathogens; amyloid might provide it. Second, as age progresses, evolution cares less about the tradeoff between beneficial and harmful effects because reproductive age has already passed, so the devastating effects of AD would matter less to selection. Thus the same AD that today is thought to reduce longevity might, ironically, have increased it in an age when infection would have reduced it even further.
However, if AD is an adaptation especially for old age, that raises a crucial question: why would it exist at all? Evolution is geared toward increasing reproductive success, not longevity, and there is little use for a meticulously developed adaptation that kicks in only after reproductive age has passed. I think the answer may lie in the fact that while AD and amyloid do affect old people, they don't suddenly materialize in old age. We now know that the amyloid Aβ peptide is a natural component of our biochemistry, regularly synthesized and cleared; in AD something goes wrong and it begins to aggregate and cause harm. But if AD was truly an adaptation in the past, it should have manifested itself at a younger age, perhaps not much younger, but an age at which reproduction was still possible. Some dementia is much preferable to not being able to bear offspring at all, so AD at a reproductive age would make evolutionary sense even with its vile symptoms. If this were true, it would mean that the average age of AD onset has simply been increasing over the past thousand years, and that AD is not per se a disease of the old; it has only become one in recent times.
So after all the convoluted rambling and long-winded thought, here's the hypothesis:
Alzheimer's disease, and especially the Aβ amyloid peptide, is an evolutionary adaptation that evolved to kill pathogens by binding key metals and generating free radicals.
There are several details to unravel here. The precise relationship between metals, amyloid and oxidative damage is yet to be established, although support is emerging. Which metals really matter? What exactly do they do? The exact role amyloid plays in AD is of course under much scrutiny these days. And what, if anything, is the relationship between bacterial infection and Aβ load and function in the body?
In the end, I suggest a simple test that could validate at least part of the hypothesis: take a test tube of fresh Aβ amyloid, throw in metal ions, then throw in bacteria of the kinds thought to be responsible for major historical epidemics, and see what happens. It may not even work in vitro (and I wonder if it could be tried in vivo), but it would be worth a shot.
Now I will wait for people to shoot this idea down because we all know that science progresses through mistakes. At least I do.
Water-Inclusive Docking with Remarkable Approximations
The role of water in mediating protein-ligand interactions is now well recognized by both experimentalists and modelers, but only relatively recently have modelers actually started taking water's unique roles into account. While water's role in bridging ligand and protein atoms is obvious, a subtler but crucial role is to fill hydrophobic pockets in proteins. Such waters can be very unhappy in these pockets because of both unfavourable entropy (little movement) and unfavourable enthalpy (inability to form a full complement of four hydrogen bonds). If one can design a ligand that displaces such waters, significant gains in affinity can result. One docking approach that does take these properties of water into account is Schrödinger's Glide, with a recent paper attesting to the value of such a method for Factor Xa inhibitors.
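To put a rough number on "significant gains" (my own back-of-the-envelope arithmetic, with hypothetical free-energy values, not figures from either paper), the standard relation ΔG = -RT ln(Kd) converts even a modest free-energy gain from displacing one unhappy water into a large improvement in binding constant:

```python
# Back-of-the-envelope illustration (hypothetical numbers, not from the paper):
# fold-improvement in Kd for a given free-energy gain, via dG = -RT ln(Kd).
import math

RT = 0.593  # kcal/mol at 298 K

for dG_gain in (0.5, 1.0, 2.0):  # hypothetical gains from displacing a water
    fold = math.exp(dG_gain / RT)
    print(f"{dG_gain:.1f} kcal/mol -> ~{fold:.0f}-fold tighter binding")
# 0.5 kcal/mol -> ~2-fold, 1.0 -> ~5-fold, 2.0 -> ~29-fold
```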
Clearly, excluding water molecules during docking and virtual screening (VS) will hurt enrichment factors, namely how well actives are ranked above inactives. Now a series of experiments from Brian Shoichet's group illustrates the benefits of including waters in active sites during virtual screening. The experiments seem to work in spite of two approximations that should have posed significant problems, but surprisingly did not.
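As a concrete reminder of what the enrichment metric measures, here is a minimal sketch of an enrichment-factor calculation; the compound scores and activity labels are invented for illustration:

```python
# Minimal sketch of an enrichment factor (EF) calculation. All data invented.
# EF at x% = (actives recovered in the top x% of the ranked list)
#            / (actives expected there by random selection).

def enrichment_factor(scored, top_fraction):
    """scored: list of (docking_score, is_active); lower score = better."""
    ranked = sorted(scored, key=lambda pair: pair[0])    # best poses first
    n_top = max(1, int(len(ranked) * top_fraction))      # size of top x% slice
    total_actives = sum(is_active for _, is_active in ranked)
    top_actives = sum(is_active for _, is_active in ranked[:n_top])
    expected = total_actives * n_top / len(ranked)       # random baseline
    return top_actives / expected if expected else 0.0

# Toy screen: 3 actives among 10 compounds, two ranked near the top.
screen = [(-9.1, True), (-8.7, False), (-8.5, True), (-7.9, False),
          (-7.5, True), (-7.0, False), (-6.8, False), (-6.1, False),
          (-5.9, False), (-5.2, False)]
print(enrichment_factor(screen, 0.3))  # ~2.2, i.e. better than random (1.0)
```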
To initiate the experiments, the authors chose a set of 24 targets and their corresponding ligands from their well-known DUD set. In this VS data set, decoys are matched to actives on physical properties such as size and lipophilicity but differ in topology, which ensures that actives cannot be trivially distinguished by VS methods on the basis of such properties alone. Importantly, the complexes were chosen so that their waters are bridging waters making at least two hydrogen bonds to the protein, not waters that simply occupy hydrophobic pockets. Note that this excludes many important cases where affinity comes from displacing such pocket waters.
Now for the approximations. First, the authors treated each water molecule separately, in multiple configurations, and scored the docked ligands against each configuration together with the rest of the protein. Each water was treated as either "on" or "off", that is, retained or displaced; whether to keep a water depended on whether the score improved when a ligand displaced it. The best-scoring ligands were then ranked to build the enrichment curve. This is a significant approximation because it assumes that every water contributes to ligand binding affinity independently of the others. While that may be true in certain cases, there is no reason to assume it holds generally.
The second approximation was even more startling: all waters were regarded as energetically equivalent. From our knowledge of protein-ligand interactions we know that evaluating waters in protein active sites is such a tricky business precisely because each water has a different energetic profile; the Factor Xa study cited above takes this profile into account. Without such an analysis it is difficult to tell a medicinal chemist which part of the molecule to modify to gain the most binding affinity from water displacement.
The most important benefit of this approximate approach is that computational time grows only linearly with the number of waters instead of exponentially (n independent on/off decisions rather than 2^n configurations), a direct consequence of treating the waters separately. Calculating individual water free energies, which the authors skipped, would have added further time.
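Here is a toy sketch of why the independence assumption buys that linear scaling. The per-water energy terms below are invented; in the real study each configuration is scored with the docking program's energy function:

```python
# Toy illustration of the "on/off" water approximation. All numbers invented.
# If each water contributes independently, n waters need only n decisions;
# enumerating every on/off combination would need 2**n scoring evaluations.
from itertools import product

# Hypothetical score change (lower = better) when each water is kept "on"
# for a given docked pose; a positive delta means the water is better displaced.
water_deltas = {"W1": -0.8, "W2": +0.5, "W3": -0.3}
base_score = -9.0  # pose scored with all waters displaced

# Linear, independent treatment: one decision per water.
kept = [w for w, d in water_deltas.items() if d < 0]
score_fast = base_score + sum(water_deltas[w] for w in kept)

# Exhaustive treatment: all 2**n configurations (needed if waters couple).
configs = (tuple(w for w, on in zip(water_deltas, mask) if on)
           for mask in product((0, 1), repeat=len(water_deltas)))
score_exact = min(base_score + sum(water_deltas[w] for w in cfg)
                  for cfg in configs)

print(kept, round(score_fast, 2))  # ['W1', 'W3'] -10.1
print(score_fast == score_exact)   # True, but only because the toy is additive
```

The agreement in the last line holds only because the toy deltas are strictly additive; waters that energetically couple to each other are precisely the case the approximation ignores.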
In spite of these crucial approximations, the results show that the ability to distinguish actives from inactives improved considerably for 12 of the 24 targets. That may not sound like much, but even 50% is impressive in the face of such approximations. Examining the protein active site should help predict which cases will benefit, though the outcome will naturally also depend on the structure of the ligand.
For now this is an encouraging result, and it indicates that the approach could be implemented in routine virtual screening. There appear to be very few cases where docking accuracy decreases when waters are included, and with only modest increases in computational time, this is a quick and dirty but viable approach for virtual screening.
Reference:
Niu Huang and Brian K. Shoichet (2008). "Exploiting Ordered Waters in Molecular Docking." Journal of Medicinal Chemistry, 51(16), 4862-4865. DOI: 10.1021/jm8006239