What's an Iodide doing stabilizing a helix?
One of the most important, and least understood, factors in biomolecular structure is the effect of salts on protein conformation. The famous Hofmeister series of ions that either 'salt in' or 'salt out' proteins is well known, but the mechanism through which the ions act is controversial and probably involves not one mechanism but several, operating under different circumstances.
In an interesting single-author JACS paper, Joachim Dzubiella studied the effects of different sodium and potassium salts on the structure of alpha helices in solution. Even something as common and widely studied as the alpha helix is still an enigma. For example, the simple question "What contributes to the stability of an alpha helix?" is controversial and not fully answered yet. In this context I will refer the reader to an excellent perspective written by Robert Baldwin at Stanford that tries to answer the rather simple question: "How much energetic stability does a peptide bond in a helix contribute?" Baldwin looks at two approaches to the problem. One is the 'hydrogen bond inventory' approach, which simply lists the bonds broken and formed on each side when an amide group desolvates and forms a peptide hydrogen bond. Based on this approach, the mean peptide h-bond energy has been estimated at 0.9 kcal/mol per h-bond. Even though this quantity is small, a 100-residue protein in which 70 residues form hydrogen bonds is clearly going to gain a very substantial net stabilization. The second approach that Baldwin considers is the electrostatic solvation enthalpy (or free energy) method, where one uses the Born equation to estimate the strength of an h-bond. Using this approach Baldwin gets a very different answer: 2.5 kcal/mol. Clearly there is still some way to go toward estimating how much peptide h-bonds contribute to stability. One important factor not considered by Baldwin is the entropy of the water. Another important factor that he does consider is the preferential desolvation for helix formation, which depends on the exact residues involved. We have ourselves encountered desolvation issues in continuing work on amyloid beta-sheets.
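To put rough numbers on the contrast Baldwin draws, here is a back-of-the-envelope sketch. The per-bond energies are the two estimates quoted above; the 70-h-bond, 100-residue protein is just the illustrative case from the text, not a specific system.

```python
# Two estimates of the energetic contribution of a single peptide h-bond (kcal/mol),
# as quoted from Baldwin's perspective above.
E_INVENTORY = 0.9   # hydrogen-bond inventory approach
E_BORN = 2.5        # electrostatic solvation (Born equation) approach

N_HBONDS = 70       # illustrative 100-residue protein with 70 backbone h-bonds

print(f"Inventory estimate: {N_HBONDS * E_INVENTORY:.0f} kcal/mol net stabilization")
print(f"Born estimate:      {N_HBONDS * E_BORN:.0f} kcal/mol net stabilization")
# ~63 vs ~175 kcal/mol: roughly a threefold disagreement for the same protein,
# which is why the energetic role of peptide h-bonds remains an open question.
```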
But back to Dzubiella's paper. Dzubiella uses MD simulations to study the dynamics of helix-salt interaction. He considers helices in which an i → i+4 salt bridge between the side chains of a glutamate and a lysine stabilizes the conformation, and looks at which salts stabilize the helices and which ones destabilize them. From these detailed simulations he gains some valuable insight into the radically different behavior of rather similar ions. For example, K+ ions are much less able to destabilize helices than Na+ ions. This is because Na+ interacts preferentially with the carboxylate groups involved in salt-bridge formation; owing to its smaller size, Na+ is better able to interact with carboxylates than K+.
However, we have to remember that Na+ or K+ or any of the other ions have to compete with water when interacting with amino acids in the peptide. Water is in great excess and also interacts efficiently with carboxylates (1). The MD simulations reveal that a curious and unexpected helper comes to the aid of the Na+ ions: iodide. Interestingly, iodide interacts with the non-polar parts of the peptide, thus "clearing" water away and paving the way for Na+ to access the carboxylates and carbonyls. This unexpected observation again sheds light on the different properties of iodide compared to the rest of the halides (2). Iodide is much bigger, has a diffuse charge, and is therefore much more polarizable. Apparently it is so electronically watered down that even carbon thinks it is harmless and can preferentially interact with it.
This curious observation tells us that we know less about the elements than we think. From weak hydrogen bonds and halogen bonds to the unexpectedly non-polar behavior of iodide, surprises await us in the realm of biomolecular structure and indeed in all of chemistry. It is also thanks to tools like MD that we can now gain insight into the details of such molecular interactions.
Notes:
(1) In fact water can interact so well that it might steal a few h-bonds from the peptide and destabilize the helix. That's why trifluoroethanol (TFE) or hexafluoroacetone are so good at stabilizing helices (these lead to "Teflon-coated peptides"), because the fluorine cannot steal h-bonds from the peptide backbone.
(2) For example, iodide most efficiently forms halogen bonds with oxygen, a phenomenon that is now well accepted.
References:
Joachim Dzubiella (2008). Salt-Specific Stability and Denaturation of a Short Salt-Bridge-Forming α-Helix. Journal of the American Chemical Society. DOI: 10.1021/ja805562g
R. L. Baldwin (2003). In Search of the Energetic Role of Peptide Hydrogen Bonds. Journal of Biological Chemistry, 278 (20), 17581-17588. DOI: 10.1074/jbc.X200009200
Please, don't stand in the way of this man
Obama's answer below is as sensible and assertive a statement about evolution as we can expect from a presidential candidate.
Do you believe that evolution by means of natural selection is a sufficient explanation for the variety and complexity of life on Earth? Should intelligent design, or some derivative thereof, be taught in science class in public schools?

This question is from the latest issue of Nature, whose cover story is about the candidates' views on scientific issues, views that are going to be of paramount importance to the future well-being of this country. Nature asked the candidates 18 questions about science and technology, including questions about increasing funding for basic research, speeding up the track to permanent residency for talented foreign students, and pumping more funds into biomedical innovations.
Obama: I believe in evolution, and I support the strong consensus of the scientific community that evolution is scientifically validated. I do not believe it is helpful to our students to cloud discussions of science with non-scientific theories like intelligent design that are not subject to experimental scrutiny.
Not surprisingly, McCain's camp declined to answer with specifics, and Nature dug up relevant statements from his old speeches that mainly consisted of boilerplate sound bites. Obama's camp, on the other hand, provided rather eloquent and clear answers that actually talk about facts. It's pretty amazing to hear answers that are actually filled with details about science. McCain's cast of "science" advisors looks like a Gilligan's Island outfit and includes former HP chief Carly Fiorina (who thinks Sarah Palin is quite competent to be President), James Woolsey, a former CIA director, and Meg Whitman, former CEO of eBay. This group seems as miscast for science as Sarah Palin is miscast for the vice presidency. Obama's advisors, on the other hand, include some real scientists, among them Dan Kammen from Berkeley and Harold Varmus from the Sloan-Kettering Cancer Center.
Obama would speed up the residency process for foreign students and minimize barriers between private and public R&D (this is going to be very important). And Obama is as clear about nuclear energy as about anything else:
What role does nuclear power have in your vision for the US energy supply, and how would you address the problem of nuclear waste?

Obama: Nuclear power represents an important part of our current energy mix. Nuclear also represents 70% of our non-carbon generated electricity. It is unlikely that we can meet our aggressive climate goals if we eliminate nuclear power as an option. However, before an expansion of nuclear power is considered, key issues must be addressed, including security of nuclear fuel and waste, waste storage and proliferation. The nuclear waste disposal efforts at Yucca Mountain [in Nevada] have been an expensive failure and should be abandoned. I will work with the industry and governors to develop a way to store nuclear waste safely while we pursue long-term solutions.

Most importantly, Obama promises to reform the political environment for scientific opinion; this would include appointing a Chief Technology Officer for the government and strengthening the President's Science Advisory Committee, a key source of scientific advice for the President that was abolished by the odious Richard Nixon.
Many scientists are bitter about what they see as years of political interference in scientific decisions at federal agencies. What would you do to help restore impartial scientific advice in government?

This point is the most encouraging part of his policy vision, coming after an eight-year tradition of bullying, manipulating, cherry-picking, ignoring and roughing up science and objective facts. The price of scientific ignorance will be paid in progress of every kind.
Obama: Scientific and technological information is of growing importance to a range of issues. I believe such information must be expert and uncoloured by ideology. I will restore the basic principle that government decisions should be based on the best-available, scientifically valid evidence and not on the ideological predispositions of agency officials or political appointees. More broadly, I am committed to creating a transparent and connected democracy, using cutting edge technologies to provide a new level of transparency, accountability and participation for America's citizens. Policies must be determined using a process that builds on the long tradition of open debate that has characterized progress in science, including review by individuals who might bring new information or contrasting views. I have already established an impressive team of science advisers, including several Nobel laureates, who are helping me to shape a robust science agenda for my administration.
Reading this is like being immersed in a gutter for eight years and suddenly coming up for fresh air in the bright sunlight with a gasp. We finally see a political leader who can actually think and give serious thought to all sides of a problem, including dissenting ones. There's a scientist in Obama somewhere. This man deserves to lead this country. This country (at least for those who care) deserves to be led by this man.
Thinking about Alzheimer's Disease as Historians
Head slumped forward, eyes closed, she could be dozing — or knocked out by the pharmacological cocktails that dull her physical and psychic pains.

This heartbreaking account by a husband of his wife's early slide into Alzheimer's Disease (AD) reminds us of how much we need to do to fight this disease. I personally think that of the myriad diseases afflicting humankind, AD is probably the cruelest of all. Pancreatic cancer might kill you in three months and cause a lot of pain, but at least you are in touch with your loved ones till the end. But this is human suffering on a totally different level.
I approach, singing “Let Me Call You Sweetheart,” off key. Not a move or a flutter. Up close, I caress one freckled cheek, plant a kiss on the other. Still flutterless.
More kisses. I press my forehead to hers. “Pretty nice, huh?” Eyelids do not flicker, no soft smile, nothing.
She inhales. Her lips part. Then one word: “Beautiful.”
My skin prickles, my breath catches.
It is a clear, finely formed “beautiful,” the “t” a taut “tuh,” the first multisyllable word in months, a word that falls perfectly on the moment.
Then it is gone. The flash of synaptic lightning passes. That night, awake, I wonder, Did Pat choose “beautiful?” Or did “beautiful” choose Pat? Does she know?
The search for the causes of Alzheimer's disease goes on, and I have recently been thinking in a wild and woolly way about it from an evolutionary standpoint. While my thoughts have not been well-formed, I want to present a cursory outline here.
The thinking was inspired by two books: one that has been discounted by many, and another that has been praised by many. The lauded book is Paul Ewald's "Plague Time", which puts forth the revolutionary hypothesis that the cause of most chronic diseases is ultimately pathogenic. The other book, "Survival of the Sickest" by Sharon Moalem, puts forth the potentially equally revolutionary hypothesis that most diseases arose as favourable adaptations to pathogenic onslaughts. Unfortunately the author goes off on tangents, making too many speculative and unfounded suggestions, leading some to consider his writing rather unscientific. As far as I am concerned, the one thing the book does offer is provocative questions.
On the face of it, both these hypotheses make sense. The really interesting question about any chronic disease is: why have the genes responsible for that disease endured for so many millennia if the disease kills you? Why hasn't evolution weeded out such a harmful genotype? There are two potential answers. One is that evolution simply has not had the time to do this. The other, more provocative, is that these diseases have actually been beneficial adaptations against something in our history, adaptations so beneficial that their benefits outweighed the obvious harm they caused. While that something probably does not prevail today, it was significant in the past. What factor could possibly have existed that needed such a radical adaptation to fight it?
Well, if we think about what we humans have been fighting most desperately and constantly ever since we first set foot on the planet, it's got to be a foe that is much older than us and more exquisitely adapted than we ever were: bacteria. The history of disease is largely the history of a fierce competition between humans and bacteria. This competition plays by the rules of natural selection, and it is relentless and ruthless. For most of our history we have been fighting all kinds of astonishingly adaptable bacteria, and there have been millions of martyrs in this fight, both bacterial and human. Only recently have we somewhat eroded their malign influence with antibiotics, and even then only partially. They keep evolving and developing resistance (MRSA killed some 18,000 people in the US in 2005), and some think it's only a matter of time before we enter a new and terrifying age of infectious diseases.
So from an evolutionary standpoint, it's not unreasonable to assume that at least a few genetic adaptations would have developed in us to fight bacteria, since that fight more than anything else has been keeping our immune system busy and our mortality high since the very beginning. But instead of thinking about genes, why don't we think about phenotypes? Hence the hypothesis that many of the age-old chronic diseases that are currently the scourge of humanity may at some time have been genetic adaptations against bacterial infection. While the harm done by these diseases is obvious, maybe their benefits outweighed that harm sometime in the past.
When we think of chronic diseases, a few immediately come to mind, most notably heart disease, diabetes, Alzheimer's and cancer. But one of the best cases in point that illustrates this adaptive tendency is hemochromatosis, an excess of iron absorption and storage, and it was this disease that made me think about AD. A rather fascinating evolutionary explanation has been provided for hemochromatosis. Apparently when certain types of bacteria attack our system, one of the first nutrients they need for survival is iron. By locking down stores of iron the body can protect itself from these bacteria. It turns out that one of the species of bacteria that especially needs iron is Yersinia pestis, the causative agent of the black plague. When Yersinia attacks the human body, macrophages rally to the body's defense to swallow it, and Yersinia exploits the iron resources in macrophages. If the body keeps iron stores away from macrophages, it keeps iron away from Yersinia, which however leads to a buildup of iron in the body; hence hemochromatosis. The evidence for this hypothesis is supposed to come from the Black Plague that swept Europe in the Middle Ages and killed almost half the population. Support for the idea comes from the fact that the gene for hemochromatosis has a surprisingly high frequency among Europeans compared to others. Could it have been passed on because it protected the inhabitants of that continent from the plague epidemic? It's a tantalizing hypothesis and there is some good correlation. Whether or not it's true in this case, I believe the general approach of looking for past pathogenic causes that may have triggered chronic disease symptoms as adaptations is basically sound, and in principle testable. Such hypotheses have been formed for other diseases and are documented in the two books.
But I want to hazard such a guess for the causes of AD. I started thinking along the same lines as for hemochromatosis. Apart from the two books, my thinking was also inspired by recent research suggesting that amyloid peptide, a ubiquitous signature of AD, binds to copper, zinc and possibly iron to generate free radicals that cause oxidative damage to neurons. Oxidative damage they may cause, but we have to note that oxidative damage is also extremely harmful to bacteria. Could amyloid have evolved to generate free radicals that kill pathogens? Consider that in this case it would also serve a further valuable function akin to that in hemochromatosis: keeping essential metals from the bacteria by binding to them. This would be a double whammy: denying bacteria their essential nutrients, and bombarding them with deadly free radicals. The damage that neurons suffer would possibly be a small price to pay if the benefit was the death of lethal microorganisms.
To test this hypothesis, I need to know a few things:
1. Are there in fact bacteria which are extremely sensitive to copper or iron deficiency? Well, Yersinia is certainly one and in fact most bacteria are to varying extents. But since AD affects the brain, I am thinking about bacterial infections that affect the brain. How about meningitis caused by Neisseria, one of the deadliest bacterial diseases even now which is almost certainly a death sentence if not treated? Apart from this, many other diseases affect the brain if left untreated; the horrible dementia seen in the last stages of syphilis comes to mind. Potentially the brain would benefit against any of these deadly species by locking its stores of metal nutrients and generating free radicals to kill them, a dual function that amyloid could serve. I have not been able to say which one of these bacteria amyloid and AD might have evolved against. Maybe it could have been against a single species, maybe it could have been a general response to many. I am still exploring this aspect of the idea.
2. More importantly, I need epidemiology information about various epidemics that swept the world in the last thousand years or so. In the case of hemochromatosis, the causative genetic stimulus was pinned down to Yersinia because both the disease etiology and the pandemic are documented in detail. I cannot easily find such detailed information about meningitis or syphilis or other outbreaks.
3. In addition, while risk factors have been suggested for AD (for instance the ApoE epsilon4 gene allele), no specific genes have been suggested as causal factors for the disease. There is a clear problem with correlation and causation in this case. Also, the important role played by environmental factors such as stress and diet is becoming clear now; it's certainly not an exclusively genetic disease, and probably not even predominantly so.
4. Most importantly, I think it is impossible to find instances of AD clusters in history for a simple reason: the disease was simply unknown before 1906, when Alois Alzheimer first described it. Even today it is not easy to assess. All cases of Alzheimer's from more than a hundred years ago would have been dismissed as dementia caused by old age and senility. Thus, while the causative hypothesis is testable, the effects are hard to investigate historically.
The fact that AD is a disease of age might lend some credence to this hypothesis. Two things happen in old age. First, the body's immune defenses start faltering, and this might require the body to marshal extra help to fight pathogens. Amyloid might provide it. Second, as age progresses, evolution is less worried about the tradeoff between beneficial and harmful effects because the reproductive age has already passed, so the devastating effects of AD would be less worrisome for evolution. Thus, the same AD that today is thought to reduce longevity would ironically have increased it in an age when infection would have reduced it even further.
However, if AD is an adaptation especially for old age, then it raises a crucial question: why would it exist in the first place? Evolution is geared toward increasing reproductive success, not toward increasing longevity. There is no use as such for a meticulously developed evolutionary adaptation that kicks in after reproductive age has passed. I think the answer may lie in the fact that while AD and amyloid do affect old people, they don't suddenly materialize in old age. What we do know now is that amyloid Aβ is a natural component of our body's biochemistry and is regularly synthesized and cleared. Apparently in AD something goes wrong and it suddenly starts to aggregate and cause harm. But if AD was truly an adaptation in the past, then it should have manifested itself at a younger age, perhaps not a much younger age but one at which reproduction was still possible. Consider that some dementia is far preferable to not being able to bear offspring, and so AD at a younger, reproductive age would make evolutionary sense even with its vile symptoms. If this were true, it would mean that the average age at which AD manifests itself has simply been increasing over the past thousand years or so. It would mean that AD is not per se a disease of the old; it has just become a disease of the old in recent times.
So after all the convoluted rambling and long-winded thought, here's the hypothesis:
Alzheimer's disease, and especially Aβ amyloid, is an evolutionary adaptation that evolved to kill pathogens by binding to key metals and generating free radicals.
There are several details to unravel here. The precise relationship between metals, amyloid and oxidative damage is yet to be established, although support is emerging. Which of the metals really matter? What exactly do they do? The exact role that amyloid plays in AD is of course under much scrutiny these days. And what, if anything, is the relationship between bacterial infection and amyloid Aβ load and function in the body?
In the end, I suggest a simple test that could validate at least part of the hypothesis: take a test tube filled with fresh amyloid Aβ, throw in metal ions, and then throw in bacteria thought to be responsible for major epidemics throughout history. What do you see? It may not even work in vitro (I wonder if it could be tried in vivo), but it would be worth a shot.
Now I will wait for people to shoot this idea down because we all know that science progresses through mistakes. At least I do.
Water-Inclusive Docking with Remarkable Approximations
The role of water in mediating protein-ligand interactions is now well recognized by both experimentalists and modelers. However, it is only relatively recently that modelers have actually started taking the unique roles that water plays into account. While the role of water in bridging ligand and protein atoms is obvious, a more subtle but crucial role of water is to fill hydrophobic pockets in proteins. Such waters can be very unhappy in these pockets because of both unfavourable entropy (restricted motion) and unfavourable enthalpy (the inability to form a full complement of four hydrogen bonds). If one can design a ligand that displaces such waters, significant gains in affinity can be obtained. One docking approach that does take such properties of waters into consideration is Schrödinger's Glide, with a recent paper attesting to the importance of such a method for Factor Xa inhibitors.
Clearly the exclusion of water molecules during docking and virtual screening (VS) will hamper enrichment factors, namely how well you can rank actives above inactives. Now a series of experiments from Brian Shoichet's group illustrates the benefits of including waters in active sites when doing virtual screening. These experiments seem to work in spite of two approximations that should have posed significant problems, but surprisingly did not.
To initiate the experiments, the authors chose a set of 24 targets and their corresponding ligands from their well-known DUD ligand set. This is a VS data set in which ligands are distinguished by topology but not by physical properties such as size and lipophilicity. This feature makes sure that ligands aren't trivially distinguished by VS methods on the basis of such properties alone. Importantly, the complexes were chosen so that the waters in them are bridging waters with at least two hydrogen bonds to the protein, and not waters which simply occupy hydrophobic pockets. Note that this would exclude a lot of important cases where affinity comes from displacement of such waters.
Now for the approximations. First, the authors treated each water molecule separately, in multiple configurations, and scored the docked ligands against each such configuration as well as the rest of the protein. The waters were treated as either "on" or "off", that is, either retained or displaced; whether to keep a water depended on whether the score improved when it was displaced by a ligand. The best-scored ligands were then selected and figured high on the enrichment curve. This is a significant approximation because the assumption is that every water contributes to ligand binding affinity independently of the other waters. While this would be true in certain cases, there is no reason to assume that it generally holds.
The second approximation was even more important and startling. All the waters were regarded as energetically equivalent. From our knowledge of protein-ligand interactions, we know that the reason why evaluating waters in protein active sites is such a tricky business is precisely because each water has a different energetic profile. In fact the Factor Xa study cited above takes this profile into consideration. Without such an analysis it would be difficult to tell the medicinal chemist which part of the molecule to modify to get the best binding affinity from water displacement.
The most important benefit of this approximate approach was a linear, rather than exponential, increase in computational time, a direct consequence of treating the water configurations separately. Calculating individual water free energies would also have added to this time.
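To make the bookkeeping concrete, here is a minimal sketch of the independent on/off treatment as I read it. The scoring function is a toy additive model of my own, not the scoring actually used by Shoichet's group; the water names and per-water values are invented for illustration, and "lower score is better" is an assumed convention. The point is only the linear versus exponential cost of deciding which waters to keep.

```python
from itertools import product

# Toy model (assumption, not the paper's scoring function): each ordered water
# contributes a fixed increment to the docking score when it is kept ("on").
# Negative = keeping the water helps the score; lower total score = better pose.
water_terms = {"W1": -1.2, "W2": +0.4, "W3": -0.3}

def score(kept_waters, base=-8.0):
    """Docking score of a ligand pose with a given set of retained waters."""
    return base + sum(water_terms[w] for w in kept_waters)

def choose_waters_independently(waters):
    """Independence approximation: decide each water on its own by comparing
    the score with only that water on versus off (~2N evaluations for N waters)."""
    return [w for w in waters if score([w]) < score([])]

def choose_waters_exhaustively(waters):
    """Exact alternative: enumerate all 2^N on/off combinations (exponential cost)."""
    combos = product([False, True], repeat=len(waters))
    best = min(combos, key=lambda mask: score([w for w, on in zip(waters, mask) if on]))
    return [w for w, on in zip(waters, best) if on]

if __name__ == "__main__":
    waters = list(water_terms)
    print(choose_waters_independently(waters))  # ['W1', 'W3']
    print(choose_waters_exhaustively(waters))   # same answer here because the toy terms are additive
```

With a purely additive toy score the two procedures necessarily agree; the paper's mixed results suggest that in real active sites the independence assumption is often, but not always, good enough.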
In spite of these crucial approximations, the results indicate that the ability to distinguish actives from inactives was considerably improved for 12 out of 24 targets. This is not saying much, but even 50% sounds like a lot in the face of such approximations. Clearly an examination of the protein active site will also help to evaluate which cases will benefit, but it will also naturally depend on the structure of the ligand.
For now, this is an encouraging result and indicates that the approach could be implemented in virtual screening. There are probably very few cases where docking accuracy actually decreases when waters are included. Given the modest increase in computational time, this would be a quick and dirty but viable approach for virtual screening.
Reference:
Niu Huang, Brian K. Shoichet (2008). Exploiting Ordered Waters in Molecular Docking. Journal of Medicinal Chemistry, 51 (16), 4862-4865. DOI: 10.1021/jm8006239
Aldrichimica Acta Woodward Memoirs
I was not aware of this splendid 1977 issue of Aldrichimica Acta dedicated to RB. It has an insightful article by David Dolphin containing memories I have not heard recounted elsewhere. One of the more curious parts of the article narrates the story of a horoscope that was prepared for Woodward! Read on... probably the most important piece of advice was:
"There is not time to worry over what others think of us..."
Also, I was hoping that someone who stumbles upon this blog might have a personal photograph of Woodward to contribute to our Wikipedia article on him. I have contributed considerably to that article, and in fact was the first to expand it from a two-sentence piece to a bona fide article, but I could never locate a photo that wasn't under copyright.
"There is not time to worry over what others think of us..."
Also, I was hoping that someone who stumbles upon this blog would have a personal photograph of Woodward to contribute to our Wikipedia article on him. I have contributed considerably to this article and in fact was the first to expand it from a two sentence piece to a bonafide article, but could never locate a photo that didn't have a copyright.
Thanks for all the fish
The LHC begins crashing protons tomorrow. The following from Stephen Hawking captures it most accurately:
"It is a tribute to how far we have come in theoretical physics that it now takes enormous machines and a great deal of money to perform an experiment whose results we cannot predict"
Since one of those unpredictable results is the end of the world, we might as well depart with song and dance:
But as imbued with levity as this matter is, it reminds me of a very similar concern raised during the making of the atomic bomb: the suspicion, first voiced by Edward Teller, that the atmosphere might go up in flames. Teller brought up the topic during a secret 1942 Berkeley summer study headed by Oppenheimer. Oppenheimer was concerned enough to go to Michigan and discuss it with Arthur Compton, one of the administrative heads of the project. The two actually decided to stop work on the project if this scenario posed a non-trivial risk. But by then the resourceful Hans Bethe had worked out the energy balances of the reactions involved and concluded that there was a "vanishingly small" possibility that this might happen...
Fast forward to 16 July 1945 at the site of the world's first atomic bomb detonation. Enrico Fermi was cheerfully taking bets on whether the bomb would ignite the entire planet or just the state of New Mexico...
"It is a tribute to how far we have come in theoretical physics that it now takes enormous machines and a great deal of money to perform an experiment whose results we cannot predict"
Since one of those unpredictable results is the end of the world, we might as well depart with song and dance:
But as imbibed with levity as this matter is, it reminds me a very similar matter brought up during the making of the atomic bomb; the suspicion by Edward Teller that the atmosphere might go up in flames. Teller first brought up the topic during a secret 1942 Berkeley summer study headed by Oppenheimer. Oppenheimer was concerned enough to go to Michigan and discuss it with Arthur Compton, one of the administrative heads of the project. The two actually decided to stop work on the project if this scenario posed a non-trivial risk. But the resourceful Hans Bethe by then had worked out the energy balances of the reactions involved and concluded that there was a "vanishingly small" possibility that this might happen...
Fast forward to 16 July 1945 at the site of the world's first atomic bomb detonation. Enrico Fermi was cheerfully taking bets on whether the bomb would ignite the entire planet or just the state of New Mexico...
Gernot Frenking is not happy...not at all
Stable is simply "able" with a "st"
Wow. This is a first for me. Three of the heavyweights in theoretical and computational chemistry have published a set of prescriptions in Angewandte Chemie for theoretical chemists who claim to have discovered new, "stable" molecules. In response, Gernot Frenking, a well-known theoretical chemist himself, has published a piercing and trenchant critique; in fact the journal appears to have reproduced the text of his referee's comments as the reply. This is a lively and extremely readable debate.
In an article asking for more "realism" from theory, the three heavyweights - Roald Hoffmann, Paul von Ragué Schleyer and Henry F. Schaefer III - have basically come up with a roster of suggestions in response to what they see as rather flippant declarations by theoretical chemists that molecules are "stable". One of the annoying habits of theoreticians is that they regularly analyze molecules and proclaim them stable, and experimentalists then have to sweat it out for years trying to actually make them. Frequently such molecules are stable only under rather extreme conditions, for example in the gas phase at 4 K. To address the animosity that experimentalists feel toward such carefree theoretical predictions, the three chemists have come up with suggestions for publication.
They make some interesting points about the criteria that should be satisfied before declaring a molecule stable. In fact they think one should do away with the word "stable" altogether and replace it with "viable" and "fleeting". For "viable" molecules, for example, one has to be clear about the difference between thermodynamic and kinetic stability. Molecules described as viable by theoreticians must have half-lives of about a day, must be isolable in condensed phases at room temperature and pressure, and must not react easily with oxygen, nitrogen and ozone (?). Molecules bearing more than a single positive or negative charge must also be computed with "realistic" counterions, and molecules must even be stable under conditions of some humidity. The authors also make suggestions about reporting accuracy and precision, and about the well-known fact that theoretically reported precision cannot exceed experimentally measured precision.
If theoreticians think these suggestions are asking for too much, they have a friend in Gernot Frenking.
Frenking batters these suggestions down by basically launching two criticisms:
1. The suggestions are too obvious and well-known to be published in Angewandte Chemie
2. The suggestions are heavily biased towards experimentalists' preferences
As Frenking puts it, he expected to walk into a "gourmet restaurant", and was served a "thin soup" instead. Ouch.
I have to say that while the suggestions made by the three prominent scientists are quite sound, Frenking's points are also well taken. He lambasts the suggestion that realistic counterions should be included in the calculation of a molecule with multiple charges; there are already multiply charged molecules predicted to be stable by theory that were later isolated by experiment, and ionic molecules with charges greater than +1 or -1 are easily isolated in condensed phases. One of the central questions Frenking asks is: why does a molecule need to be so experimentally stable in order to justify publishing its theoretical existence? After all, there are many molecules present in interstellar space that cannot be isolated under average-Joe lab conditions. Under these circumstances, Frenking is of the opinion that the distinction between "viable" and "fleeting" is "eyewash" (it's the European way of putting it euphemistically).
I resoundingly agree with this contention in particular, harsh as it sounds. Why should experimentalists get an easy pass? The whole point of theory is to push the boundaries of what's experimentally possible. To suggest that one should publish a theoretical prediction only if it can easily be verified by experiment is to do a disservice to the frontiers of science. While I can understand the angst an experimentalist may feel when a theoretician declares "stable" an unusual molecule that survives only under extreme conditions, that's exactly the challenge experimentalists should rise to: devising conditions under which they can observe these short-lived molecules. If they do, they are the ones who carry the day. Since stability, as is well known, is a relative term anyway, why insist on calling something "stable" only if it satisfies the everyday lab conditions of the experimentalist? I believe it is precisely by testing the extreme frontiers of stability that chemistry progresses, and this can be done only by making things hard for experimentalists, not easy. Theoreticians pushing experimentalists and vice versa is how science itself progresses, and there is no reason for either of them to quit questioning the boundaries of the other's domain.
There are other points and criticisms worth reading, including other referee comments that endorse the article and are also quite interesting. In the end, however, I cannot answer Frenking's central question: should this article have been published in Angewandte Chemie at all? I will leave that for readers to judge.
Roald Hoffmann, Paul von Ragué Schleyer, Henry F. Schaefer III (2008). Predicting Molecules - More Realism, Please! Angewandte Chemie International Edition, 47 (38), 7164-7167. DOI: 10.1002/anie.200801206
Gernot Frenking (2008). No Important Suggestions. Angewandte Chemie International Edition, 47 (38), 7168-7169. DOI: 10.1002/anie.200802500
Vytorin: New Problems for a New Era
The NYT has a piece today describing the problems riddling Vytorin, a combination of ezetimibe and simvastatin that is taken by 3 million people around the world and makes $5 billion for its owners, Schering-Plough and Merck. The piece says that in spite of such widespread usage, there is apparently no clinical trial data demonstrating that the second piece of the cocktail, ezetimibe, is efficacious. Much more concerning is the possibility of a link between ezetimibe and cancer, although that link looks fragile at best right now.
Statins have been the wonder drugs of our time, with atorvastatin (Lipitor) being the best-selling drug in the world, and their efficacy in reducing heart attacks has been demonstrated in large-scale trials. Ezetimibe, which blocks cholesterol absorption in the intestine and reduces LDL, has much more tenuous effects. Recent large-scale studies found that while ezetimibe does reduce LDL, its effect on the variables that actually matter is far less certain; there was no evidence that it actually reduces heart attacks. Yet it continues to be prescribed by thousands of doctors around the country, with patients shelling out considerable amounts for it.
However, the article raises the much bigger issue of knowing exactly when a drug is efficacious. According to the article, the utility of a drug is usually gauged by "surrogate endpoints", that is, endpoints that indicate reductionist-type effects rather than an actual increase in life-span or quality of life. Take cancer, for example: for most cancer drugs, tumor shrinkage is the convincing endpoint, not an actual increase in life-span. Or take cholesterol medications. The causal link between lowering LDL cholesterol and reducing heart attacks is apparently well proven. Yet the body is complex enough for this causal link to be questioned, and such questions arise most often in truly large-sample studies, when the drug is already on the market. Clearly the only true indication of side effects or efficacy will come from such large-scale studies.
But there's a dilemma here as far as I can see. The reason surrogate endpoints are used seems clear to me: it's simply much easier to look for such effects and ascribe them to drug action than effects like "increase in life-span", which can be controlled by multiple factors. Say two men of the same age have been prescribed the same cancer medication, and both show shrinkage in their tumors. Apparently the medicine works. But can this be translated into an observation about the difference in their life-spans, which can be attributed to so many different factors? Especially if the general health of one of the patients is worse than the other's, the cancer medicine can basically be an adjuvant and not the primary cause of his prolonged survival. How do we know that it's the cancer medicine and not his general health that actually increased his life-span? Naturally, a reduced incidence of heart attacks is much easier to analyze than an increase in life-span. But even there, so many factors can be responsible for a heart attack that it becomes a problem to attribute specific effects, or especially the lack of them, to the medication.
The article also says that the FDA has become stringent about medications that address chronic problems like heart disease. The bar set for a true study of efficacy seems to be about 10,000 patients over four or five years. If every pharmaceutical company needs to do such a study to get approval, they might as well start digging their graves right away. Drug development is already so risky and expensive that putting a drug through 10,000 patients over five years and having the FDA almost certainly reject it after that would spell doom for all drug makers. Yet it is clearly unethical for doctors to keep prescribing medication whose efficacy has not been demonstrated.
There does not seem to be an easy way out of this problem. To me right now it seems that, with all the compromises and problems it entails, the best bet might be to set such a stringent bar for medications for which good alternatives exist (and medications for chronic heart disease seem to fall into that category now) but relax the need for such large-scale trials for unmet and critical needs for which no good drugs exist. I am pretty sure the FDA is not going to set the bar for cancer so high.
So what's the way out for companies? To me the soundest scheme for now seems to be to provide information on the label saying that the drug did not show efficacy in a fair number of patients, and to let the patient decide for himself or herself. It would be ridiculous, in my opinion, for the FDA to demand that the company withdraw the drug. Also, as a general thought, I think that both the FDA and public opinion need to get over their obsession with approving drugs only if they show efficacy in 100% or even 90% of patients. What if a drug shows efficacy in 50% of patients? The rational thing would be for doctors and companies to say this explicitly on the label and let the consumer decide. That's the kind of thing that should happen in a liberal society.
Note: Over at In the Pipeline, Derek Lowe has quite a few posts on this.
Tipping Point
The Revenge of Gaia: Earth's Climate Crisis and the Fate of Humanity
By James Lovelock
Basic Books (2007)
In this clarion call to arms, the eminent scientist James Lovelock warns us cogently and eloquently of the impending doom that we have forced upon our planet through global warming. Lovelock is well qualified to offer such gloomy predictions; it was this extremely versatile scientist who in the 1960s and 70s proposed the idea of Gaia, the notion that the earth is a self-regulating organism whose regulatory mechanisms are intimately coupled to the activities of the species in its biosphere. Through careful speculation and excellent scientific arguments about the details, he refined this notion to the point where it is now widely accepted. One species, man, has tilted the balance of these mechanisms and thrown them into disarray, and the species that will pay the biggest price for this deed is also man himself.
Lovelock's premier argument is that global warming (which he amusingly always refers to as "global heating") has already rendered our planet incapable of the self-regulation it has admirably demonstrated for millennia. The temperature rises that global warming is going to bring about are beyond those the earth can absorb homeostatically, and their catastrophic effects are likely to manifest within decades. There is a horrific precedent for believing this: similar temperature rises fifty-five million years ago led to catastrophic extinctions and sea-level rises, and the planet took some 200,000 years to recover. We are in danger of bringing about such a global catastrophe right now. The most serious manifestation of man-made global warming is positive feedback. Two examples suffice: the well-known melting of ice, which reflects less sunlight and so leads to more melting; and the heating of the upper layers of the ocean, which kills algae. These algae are crucial players in maintaining cooling, because the sulfur compounds they emit seed the clouds that reflect sunlight back into space. Lovelock documents both effects well, along with others resulting from the 'double whammy' we are serving our planet: simultaneously emitting CO2 and depriving the earth of the biomass that normally absorbs it.
While the first part of the book describes Gaia and how it has been irreversibly affected by global warming, the second part deals largely with the muddle-headed perceptions of energy, food sources and environmentalism that afflict much of the political establishment and the media, and most prominently environmentalists themselves.
There is clearly a rift among environmentalists that threatens to slow down action against climate change. One camp, unfortunately the larger and more vocal one, consists of organizations like Greenpeace, whose perception of environmentalism is wrong-headed and irrational. They tout phrases like "sustainable development" and "renewables" without really understanding their limitations. They stage emotion-laden protests and demonstrations just to prove a point. Their environmentalism deals mainly with saving cuddly creatures and colorful birds in remote parts of the world, while there are organisms much more in need of saving, including the microorganisms and algae that play crucial roles in maintaining the homeostasis of Gaia.
The second group of environmentalists is a minority, and Lovelock is one of them. They understand that global warming has already done its damage, and that our goal now should not be "sustainable development" so much as "sustainable retreat". They understand that preventing deforestation and further land use, even in developing countries, matters far more than saving a few endangered species in New Guinea. They know that the debate about saving the environment cannot be dictated by emotion. Most importantly, they understand that nuclear energy is the best short-term, and perhaps long-term, solution for our energy needs.
When it comes to the energy sources we should pursue, Lovelock's thesis is clear and rational. Renewables (solar, wind, biofuels) may someday make a dent in the energy equation, but they are not going to save us soon enough. The phrase "soon enough" is important here. Lovelock is a reasonable man and does not discard renewables entirely; the problem is finding good energy sources as fast as we can, and each of the renewables is currently fraught with inefficiency, environmental costs and a lack of plans for scale-up. Solar panels are expensive and inefficient. Wind farms consume huge tracts of land, land whose vegetation would otherwise soak up carbon dioxide, and they need backup from fossil-fuel generators when the wind is not blowing. Biofuels struggle to maintain a favorable energy balance and pose similar land-use problems. It will be at least 50 years before renewables make a significant contribution to our energy needs and their use becomes cheap and widespread, and by then it will be too late. The single most important factor here is time.
The answer is clear and rational: for the short term especially, nuclear power is the most efficient, readily available, widely implementable, environmentally friendly and safe source of power we have. The problem of waste disposal is not trivial, but it pales in comparison with the benefits, and especially with the catastrophe we will face if we do nothing.
While Lovelock hopes fusion will become important soon, fission is currently our best bet. Unlike the renewables, we already have the technology. Its efficiency is marvelous, and a good numerical argument to keep in mind is this: the CO2 the world emits in a year, if solidified, would make a mountain a mile high and about twelve miles around at its base, a behemoth. In contrast, all the nuclear fuel providing power for a year would constitute a cube sixteen meters on a side. It was Lovelock's espousal of nuclear power that marked his break from the 'green' party line, but by now nuclear is about as green an option as we can think of. To stave off fears of nuclear waste, Lovelock has even offered to bury the waste from a nuclear reactor in his backyard and use its heat for his house. Lovelock also clearly describes the paranoia the public has about nuclear power, even as it faces risks and dangers far more damaging and insidious.
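For what it's worth, the comparison survives a back-of-the-envelope check. The figures below (roughly 30 billion tonnes of CO2 emitted per year in the mid-2000s, and the density of dry ice) are my own round numbers rather than Lovelock's, so take this as a sketch:

```python
# Rough volume check for the CO2-mountain vs. nuclear-fuel-cube comparison.
# The annual emission figure and dry-ice density are my own round numbers,
# not taken from the book.
from math import pi

co2_mass_kg = 30e12              # ~30 Gt of CO2 emitted per year (mid-2000s estimate)
dry_ice_density = 1560.0         # solid CO2, kg per cubic metre
v_co2 = co2_mass_kg / dry_ice_density         # ~1.9e10 m^3 of solid CO2

mile = 1609.0                                  # metres
base_radius = 12 * mile / (2 * pi)             # cone twelve miles around at the base
v_mountain = pi * base_radius**2 * mile / 3    # cone one mile high: ~1.6e10 m^3

v_cube = 16.0 ** 3                             # sixteen-metre cube: 4,096 m^3
print(f"solid CO2: {v_co2:.1e} m^3, mountain: {v_mountain:.1e} m^3, "
      f"CO2/cube ratio: {v_co2 / v_cube:.0e}")
```

The two volumes agree to within round-off, and the solid CO2 exceeds the sixteen-metre cube by a factor of several million, which is exactly Lovelock's point.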
One very cogent point Lovelock makes concerns how religious faith has hindered our stewardship of the planet. He correctly points out that the religious texts were written at a time when man and his life were the focus. In very few places in the Bible, the Koran or even the Eastern texts is there any emphasis on the planet; none of the major world religions put nature before man. Now, however, emphasizing man is going to be meaningless unless we emphasize Gaia, because without Gaia we won't be around. We need new "religious" principles that instill the care and stewardship of the planet in children's minds, in place of the narrow self-serving interests of man, which will become irrelevant once the sea levels rise or the North Atlantic current slows down.
The same factor, time, that makes a good argument against renewables also makes the strongest argument against libertarian "solutions" to climate change. Libertarians argue that the free market will eventually find answers to the problem without government intervention. But even if such a solution might work in principle, 'eventually' is not going to be soon enough for us. We may have little more than 20 years to beat a respectable retreat. For that we need legislation against carbon emissions, against the use of oil for transportation, against unchecked land use, right now. The libertarian approach might have worked 50 years ago, when we had time; betting on renewable sources could have saved us if we had begun 200 years ago. Now, even if these solutions work, they will almost certainly come too late. As they say, "operation successful, but the patient is dead". To save the patient in time, we will inevitably have to make compromises and sacrifice at least some of our freedom to large-scale government action. We have to operate in a manner reminiscent of wartime; in times of legitimate war (and I stress the word 'legitimate'), citizens don't complain about sacrificing freedom because they know their lives depend on it. Lovelock says we now face a similar scenario.
On the downside, Lovelock makes some statements that I think should be better referenced. For example, I would not completely trust his contention that most of the cancers we are going to die from are caused by our breathing oxygen. While oxygen can certainly produce free radicals and cause damage, such a sweeping claim should be supported more firmly by evidence.
It is very difficult to find comprehensive solutions to climate change. We now seem to have done a good job of recognizing the problem in the first place, but it is too late for quick fixes that would let us wake up from this nightmare to find that everything is all right. In an age when politicians are pushing for more oil drilling, rapid action and awareness are essential. We have to beat a retreat and live to fight another day, unlike Napoleon in Russia in 1812. For that we need coherent, rational thinking and global fixes, with all the compromises they might entail. Going nuclear, and perhaps even indulging in grandiose fixes like "space reflectors", miles-wide arrays that reflect sunlight away from the earth, may be possibilities. Lovelock sounds an alarm in this book that is backed up by evidence and grim prognostication. Gaia will do whatever it takes to establish her equilibrium, an equilibrium inherent in the laws of her physics and chemistry, an equilibrium that will be established even if it means the loss of humanity. As a pithy line in an X-Files episode once put it, "You can't turn your back on nature, or nature will turn her back on you". It's simple.