From physicist Peter Woit's blog comes a link to a PDF document containing a transcript of the bizarre scribblings that John Nash used to leave on the blackboards of the math department at Princeton when he was at the peak of his illness in the 1970s. He often haunted the hallways at night, and by morning the blackboards would be covered with these strange writings.
Reading them is like reading some inexplicable combination of Hunter S. Thompson, Thomas Pynchon and Salman Rushdie. It is very tempting to read patterns into the creations of a mind as creative and brilliant as Nash's. However, some of the words ring far truer than Nash's disturbed mind intended them to. Take a look at this.
On the ethics of that "chocolate sting" study
By now many people must have heard of the so-called "chocolate sting" carried out by scientist and journalist John Bohannon. In a nutshell, Bohannon ran a fake study on a very small sample of people that purported to investigate the effects of a chocolate-laced diet on weight. The study itself was conducted properly; it was the statistics that were massaged through what's called 'p-hacking', the selective cherry-picking of results until something crosses the threshold of statistical significance.
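To make the mechanics of p-hacking concrete, here is a minimal sketch of my own (a hypothetical simulation, not Bohannon's actual analysis or data; the sample size and number of outcomes below are illustrative): run a tiny "trial" in which the diet has no real effect, measure many different outcomes, and report only the one that happens to cross p < 0.05. With few subjects and enough outcomes, something almost always "works".

```python
# Minimal illustration of p-hacking (hypothetical; not Bohannon's actual data or code).
# A tiny "trial" with no true effect: measure many outcomes, report the best p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group = 8        # very small sample per group
n_outcomes = 18        # weight, cholesterol, sleep quality, ...

p_values = []
for _ in range(n_outcomes):
    control = rng.normal(0, 1, n_per_group)    # no real difference between groups
    chocolate = rng.normal(0, 1, n_per_group)
    _, p = stats.ttest_ind(control, chocolate)
    p_values.append(p)

best = min(p_values)
print(f"Smallest p-value across {n_outcomes} outcomes: {best:.3f}")
print("At least one 'significant' result:", best < 0.05)
# With independent tests, roughly 1 - 0.95**18, or about 60%, of such null
# trials will hand you a publishable "significant" headline.
```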
Bohannon and his colleagues then published the "study". Perhaps to nobody's surprise, both journalists and popular magazines jumped on it and proclaimed a new era of chocolate-enabled weight loss. Bohannon has written an article about his sting on the website io9 which is now all over the Internet. The message of the article is that the gullibility of both journalists and the lay public is alive and well when it comes to swallowing implausible health-related results. The result is especially depressing because one would think that even uninformed people would be skeptical of the supposedly beneficial effects of chocolate on weight.
Personally, when I read about the sting and people's reactions to it I was immediately reminded of the Sokal hoax. Granted, the Sokal hoax did not involve experiments on human beings, but it too relied on deception and a sting operation to reveal the dirty truth. In that case the truth pertained to the tendency of people who called themselves "postmodernists" to dress up nonsensical notions in pseudoscientific language and try to present them as serious theories. Sokal wrote a deliberately nonsensical article and sent it to 'Social Text', the leading postmodernist journal. To his surprise it was enthusiastically published and touted as a novel insight into science and human nature. The emperor had been disrobed.
Bohannon's hoax applies to a different field but it is essentially in the same spirit. The goal of the hoax is to show that the emperor of uncritical thinking and statistical ignorance has no clothes. And the one thing the hoax managed to demonstrate is that emperors of this kind are running all over the place, and in fact each one of us has a bit of them rooted in our own thinking.
The study has drawn mixed reactions. For instance Ben Goldacre, slayer of bad science and medical studies, has praised the sting and said that it "deserves a prize", while Seth Mnookin, another slayer of bad science and campaigner against anti-vaxxers, has condemned it and called it "reprehensible". The most common reasons people condemn the study are that they see no value coming out of it and that they see it as unethical: Bohannon did not seek approval from anything like an institutional review board (IRB) or inform his subjects about the goals of the study and get their consent (although he did lay out the general outline).
I myself got into a lively debate about the study on Twitter with two longstanding members of the science blogging community - Aatish Bhatia, who blogs at Wired, and Bethany Brookshire, who blogs under the name Scicurious. Aatish and Bethany's concerns about the study were the same as those of Mnookin and others: the authors did not get their subjects' consent or approval from an IRB, and the value of the study was marginal at best. The thinking was: there have been several exposés of bad science and journalism in the past few years, so what exactly does this ethically dubious work demonstrate?
Here are my responses to these objections. First of all, I agree with Aatish and Bethany that strictly speaking the study is not ethical. But then neither are hundreds of other studies, especially in areas like psychology where detailed disclosure of the study's goals might itself change subjects' behavior and thwart those goals. Even the Sokal hoax, with its explicit deception of journal editors and thousands of readers, was not strictly ethical, and yet it is now regarded as a landmark critique of pseudoscience. In some cases, as long as no harm is being explicitly done to human subjects, it is only by withholding the details that one can do a truly blinded and clean study. Bohannon's project was as much a psychology project as a nutrition project. In fact, strictly speaking even most medical trials are unethical in the sense that they continue to run for some time after the purported drug has shown a beneficial effect relative to the controls. If complete consent to every detail were a required condition for scientific experiments involving human beings, then most research projects in psychology and medicine would be deemed 'unethical.'
My bigger point though is that a discussion of ethics cannot be divorced from consequences, and any assessment of the ethical nature of research has to be done relative to the potential benefits and costs of the research. Viewed through this lens, the Bohannon study acquires a more interesting tinge. The most important benefit of the project would be an inculcation of a sense of skepticism and caution in the laypeople, journalists and magazines which were fooled. Aatish and Bethany are skeptical that such a sense would be ingrained in the victims. At the very least I agree that one would have to do a follow-up study to find out whether people have indeed become more enlightened and cautious and whether a sizable fraction of those who were fooled are now aware of the hoax.
But personally I am more hopeful. While P. T. Barnum's observation that a sucker is born every minute does ring true, it is also true that people often remember to be circumspect only after they have been shamed or embarrassed into doing so. Sometimes it takes a jolt of reality to wake people up. Granted, the jolt may be exaggerated, simplistic or disingenuous - in that sense Bohannon's work is what I would call the Michael Moore style of research - but it can lead to results that may not be achievable by a gentler approach. It's not the best approach, should be used sparingly and can easily be seen as cynical, but I would be surprised if it does not work. I would be surprised if at least a few of the magazines which fell hook, line and sinker for Bohannon's ruse didn't think twice next time before enthusiastically publishing such results. Even if only a few of them turn more skeptical, I think the study will have had considerable value.
Another reasonable objection is that the ploy might make people too skeptical and lead to a mistrust of even legitimate science in the future. This is a valid point, but my response is to ask: would we rather err on the side of too much skepticism or too much credulity (ideally we would rather not err at all, but that's a different story)? Aatish's answer was that we should try to err on the side of minimizing harm. I agree with this, but in my view harm comes in different kinds. Aatish was presumably talking about the harm done to people's psyche that might cause them to mistrust honest science and journalism, but I believe that the harm done by ignorance of the scientific method and statistics is even greater, and that it can lead to an equal or greater erosion of trust in science and rationality. If I had to pick between the sides of overt skepticism and overt gullibility, then even with my reservations I would pick the side of overt skepticism.
Ultimately my feeling about this study is that it's the kind of bitter medicine that should be administered occasionally for the health of a rational society. Strictly speaking it's not ethical, but its ethics should be balanced against its consequences. Its liberal use would indeed lead to a jaundiced populace that trusts nothing, but using it once in a while might actually cause a statistically significant increase in that skepticism that we all sorely need. And that's a good thing.
John Nash's work makes as good a case as any for the value of curiosity-driven research
What's the mark of a true genius? A Nobel Prize based on work that someone did in their PhD thesis at age 22? The fact that their theories are used in a stunning variety of disciplines, from economics to biology to government welfare? Or that their college professor signs off on their graduate school recommendation letter with a single line - "This man is a genius"?
To me, none of these facts marks out John Forbes Nash as an authentic genius as much as the fact that the work for which he won the Nobel Prize was, intellectually speaking, not regarded as the most important piece of work he did, not even by himself. His most important work was instead a highly counterintuitive theorem in topology which he attacked with startling creativity and fearlessness, two qualities that he was becoming known for before his illness cut his career short.
In contrast, he himself called his famous equilibrium "trivial", a piece of research he did because of his interest in games and pure math. In fact he almost stopped working on game theory after moving from Princeton to MIT as an instructor, saving his time for "real" math such as number theory and topology. In showcasing this multifaceted intellect Nash joins a rarefied tradition of brilliance that features only a handful of stars, including John von Neumann, Hans Bethe, Linus Pauling and Albert Einstein. And speaking of real geniuses, even the reigning mathematical genius of the twentieth century - von Neumann - immediately sized up Nash's mathematical contribution, exclaiming "Oh that's trivial you know; it's just a fixed point theorem". Granted, it would take any other mathematician at least a few days to figure out what von Neumann could figure out in a second, but von Neumann's instant grasp of the problem demonstrates that Nash's math itself was not exactly on par with the solution of Fermat's last theorem.
For me, what stands out especially about the Nash equilibrium is that Nash worked it out mainly as an interesting problem in mathematics and not as a problem with potential applications in economics. He was first and foremost a pure mathematician, and any applications were to him of secondary importance. The relevance of equilibria in games for economics had been made clear by von Neumann and Oskar Morgenstern in their landmark work introducing game theory to the world. But they had only considered so-called "zero-sum" games, where one person's win is another's loss. Nash significantly extended their work by making it applicable to the real world, where losses and wins are usually distributed in a complex manner across multiple parties. He identified a set of conditions under which party A has nothing to gain by varying their strategy as long as party B keeps theirs fixed, and vice versa, and his real interest and genius lay in applying a purely mathematical result, the fixed point theorem, to the problem. His paper proving the existence of these equilibria was one of the shortest papers ever to win the Nobel Prize, and it was an absolutely first-rate example of using ideas from one field to fertilize discoveries in another, unrelated field.
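To make that "nothing to gain by deviating unilaterally" condition concrete, here is a small sketch of my own (an illustrative example, not Nash's actual proof, which concerns mixed strategies and rests on a fixed-point argument): a brute-force search for pure-strategy Nash equilibria in a two-player game defined by payoff matrices, using the classic Prisoner's Dilemma payoffs as the example.

```python
# Brute-force search for pure-strategy Nash equilibria in a two-player game.
# (An illustrative sketch; Nash's theorem itself is about mixed strategies and
# is proved with a fixed-point argument, not by enumeration.)
import numpy as np

# Prisoner's Dilemma payoffs: rows = player A's strategy, columns = player B's.
# Strategy 0 = cooperate, 1 = defect.
payoff_A = np.array([[-1, -3],
                     [ 0, -2]])
payoff_B = np.array([[-1,  0],
                     [-3, -2]])

def pure_nash_equilibria(A, B):
    """Return all (i, j) where neither player gains by deviating unilaterally."""
    equilibria = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            a_best = A[i, j] >= A[:, j].max()   # A cannot do better given B plays j
            b_best = B[i, j] >= B[i, :].max()   # B cannot do better given A plays i
            if a_best and b_best:
                equilibria.append((i, j))
    return equilibria

print(pure_nash_equilibria(payoff_A, payoff_B))   # [(1, 1)]: mutual defection
```

Even in this toy case the output captures the familiar, slightly depressing logic of the game: mutual defection is the only point from which neither player can improve by changing strategy alone.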
And then Nash moved on to other things, mainly regarding his contribution as something that would fetch him a quick PhD at age 22 and allow him to work on more important problems like the Riemann hypothesis. Thus, his solution of the Nash equilibrium, as relevant as it was for economics, was not seen by him as an applied problem but simply as an interesting problem in mathematics.
As economics writer John Cassidy describes it, fifty years later the Nash equilibrium was everywhere, and its reach had far exceeded Nash's wildest imagination:
"These days, political scientists, evolutionary biologists, and even government regulators are obliged to grasp best-response equilibria and other aspects of game theory. Whenever a government agency is considering a new rule—a set of capital requirements for banks, say, or an environmental regulation—one of the first questions it needs to ask is whether obeying the rules leads to a Nash equilibrium. If it doesn’t, the new policy measure is likely to prove a failure, because those affected will seek a way around it.
John Nash, in writing his seminal 1951 article, “Non-Cooperative Games,” which was published in The Annals of Mathematics, surely didn’t predict any of this. He was then a brilliant young mathematician who saw some interesting theoretical problems in a new field and solved them. But one thing led to another, and it was he, rather than von Neumann, who ended up as an intellectual celebrity, the subject of a Hollywood movie. Life, as Nash discovered in tragic fashion, often involves the unexpected. Thanks to his work, though, we know it is possible to impose at least some order on the chaos."
The message of Nash's life is that it was pure curiosity-driven work in mathematics, not economics, that led to his great contribution to economics, biology and a host of other disciplines for which it was not originally designed. This is how curiosity-driven research works; you sustain bright people and give them the freedom to pick their own problems. And then you watch the solutions to those problems sprout wondrous branches in novel and unexpected terrain. That kind of research simply does not work if you ask people to produce results for the next quarter or papers for tenure.
Nash is certainly not the first one to have benefited from this freedom, but he may very well be among the last of a dwindling few able to work unfettered if the current disdain for and lethargy toward basic science continue. The John Nashes of the next twenty years may have acute constraints imposed on their freedom, and while this would undoubtedly curb their creativity, the real losers in this zero-sum game would be all of us. You don't need mathematics to prove this fact. It's something Nash would have noted with more than a touch of irony. R.I.P.
A review of Freeman Dyson's "Dreams of Earth and Sky"
Freeman Dyson is one of the most brilliant and wide-ranging thinkers of his time, the rare example of a truly outstanding scientist who is also a truly eloquent writer. This volume gathers together book reviews that he has written for the New York Review of Books since 2004. The essays cover a range of topics as diverse as Dyson's interests and knowledge - from biotech to philosophy to theoretical physics.
Book reviews in the New York Review of Books are more than just simple descriptive reviews: they are also opportunities for authors to hold forth on their own views of the world. Thus Dyson's reviews are all accompanied by substantial personal commentary.
Every review benefits from his own vast experience with research as well as his friendships with many of science's best-known personalities, like Richard Feynman, Hans Bethe, Robert Oppenheimer and Edward Teller. Thus, for instance, Dyson adds his own touching reminiscences of Feynman and Oppenheimer in reviewing Jim Ottaviani's "Feynman" and Ray Monk's biography of Oppenheimer. And in a review of Daniel Kahneman's "Thinking, Fast and Slow", he uses his own experience as a statistician doing bomber studies during World War 2 to illustrate the frailties of human thinking and bureaucracy.
In other cases Dyson uses specific books to inform us of his own original views on topics like biotechnology or cosmology. For instance, in a review of Brian Greene's "The Elegant Universe" he speculates on whether it might be impossible to detect a graviton using existing technology. In the first review, titled "Our Biotech Future", he predicts that the domestication of biotechnology will be as significant a feature of the 21st century as computer technology was of the 20th.
Dyson also reveals himself to be no shrinking violet when it comes to controversy, although he courts it with thoughtful and unfailing courtesy. His views on global warming have become well known, and his criticism of the field is balanced and moderate. And in reviewing Margaret Wertheim's book on scientific cranks, "Physics on the Fringe", he asks us not to dismiss all such cranks, since some of them may turn out to have groundbreaking ideas.
You don't need to agree with all of Dyson's views in order to find them stimulating and thought-provoking; that, after all, is what science is supposed to be about in its best tradition (as Dyson himself has said, "I would rather be wrong than uninteresting"). For readers who have already encountered these reviews, the volume provides a ready reference of Dyson's views in one place as well as a pointer to interesting literature worth reading. For those who have never read them, it provides a window into one of the most original, literate and sensitive minds of our time.
The difference between popular chemistry and popular physics
This is from Half Price Books in Redmond, WA, which I visited over the weekend. In this world, popular physics books are popular physics books. Popular chemistry books, meanwhile, are just textbooks.
Of course, as I have noted earlier, the problem is not with Half Price Books or with any other bookstore where this will be a familiar scenario. It's really with the lack of popular chemistry literature compared to popular physics fare, much of which also happens to be repetitive and marginally different from the rest. The great challenge of chemistry is to make the essential but (often deceptively) mundane exciting and memorable.
In a nutshell, the belief is that physics and biology seem to deal with the biggest of big ideas - quantum reality, the origin of the universe, black holes, human evolution - that are largely divorced from everyday experience while chemistry deals with small ideas that are all around us. But the smallness or bigness of ideas has nothing to do with their inherent excitement; witness the glory and importance of the Krebs cycle for instance. In addition, the origin of life is chemistry's signature "big idea". Plus, a conglomeration of small ideas in chemistry - like the evolution of methods for the refinement of various metals or the revolution engineered by polymers - underlies the foundation of civilization itself.
All I can do is point to my list of top 10 favorite chemistry books. Fortunately the occasional chemical splash continues to provide rays of light.
Heisenberg and Dirac in the age of NIH funding
The men who engineered the quantum revolution had their work cut out for them. But as the brilliant Philip Anderson says in his sparkling collection of essays "More and Different", at least they did not have to deal with the exigencies of NIH/NSF funding crunches, tenure pressures, media sensationalism, instant approbation or reprobation on social media, and the dog-eat-dog culture of peer review that has come to plague the upper echelons of science. Theirs was a simpler time, and here's what would have happened to poor Werner and his fellow physicists had they tried to practice their trade today... it would be funny if it weren't so painful.
We do not need to discard old scientific role models in order to embrace new ones
Historian of science Steven Shapin has a review of Steven Gimbel's new capsule biography of Einstein in which he holds forth with some of his more general thoughts on the art of scientific biography and the treatment of famous scientific figures. Shapin mulls over the tradition of writing about scientific lives, which initially tended to treat its subjects as scientific heroes but which in recent times has sought more to illuminate their human flaws. Pointing out the triumphs as well as the follies of your subjects is of course a good way to achieve balance, but as Shapin points out, one can bend over backward in either direction while doing this.
"The “human face” genre was an understandable response to hagiography, but more recently it has lapped over into a commitment to dirt-digging. Some modern scientific biographies mean to show great scientists as not just human but all-too-human, needing to be knocked off their pedestals. Galileo—we are now told—was a self-publicizing courtier, sucking up to his Medici patrons; Robert Hooke was a miser who molested his niece; Newton was a paranoid supervisor of torture who cheated in a priority dispute;Pasteur was a careerist power broker who cut ethical corners; even gentle Darwin was channeling laissez-faire capitalist ideology and using illness as an excuse to get colleagues to face down his scientific opponents on his behalf. Weary of stories about the virtues attached to transcendent genius, biographers have brought their scientific subjects down to Earth with a thunderous thump."
This view does not quite conform to the post-modernist ideal of treating every achievement as a subjective (and pedestrian) product of its times rather than as the unique work of an individual, but it does go a bit too far in discounting the nature of the few genuinely superior intellects the human race has produced. The approach also tends to conflate people's scientific achievements with the social or personal aspects of their lives, and asks us to consider one only in the "context" of the other. For instance, according to this approach Richard Feynman (who, incidentally, would have celebrated his ninety-seventh birthday today had he been alive) is not a hero because of his occasional sexist remarks or behavior; but the truth of the matter is that Feynman's behavior does not stop many of us (who are well aware of the unseemlier aspects of his personality) from regarding him as a scientific hero.
It also seems odd to me to declare that Feynman is not a role model for science communication when millions of young people of all colors, nationalities, genders and political views have been deeply inspired by his teaching, books and science. Growing up in India, for instance, I know firsthand how much Feynman's books encouraged me and my friends to go into science; I wouldn't be surprised if his books have sold more copies in the bookshops and on the streets of India than in this country. This also demonstrates that, while it helps, one does not always need someone from their own community or gender to serve as an inspiring figure. When I was growing up it might have been nice to have a Nobel laureate from my neighborhood to serve as a role model, but Einstein, Feynman, Darwin and Bohr provided no dearth of inspiration. When I thought about the wonderful facts about the world they had unearthed, it did not matter whether they were white, black, brown or green. They were scientists first and Americans, Englishmen, Germans and Danes later.
The point is, when it comes to being a successful model for science communication, shouldn't the product speak for itself? In our quest to seek new models of science communication, why do we need to discard older ones because they somehow don't look or speak like us? Why can't we have addition instead of substitution? And yet I see a minor, well-meaning but vocal chorus of voices in this country which wants to replace, not add to, old heroes of science communication. These proponents of diversity make the case that we need people from our own communities to serve as role models, and while this is certainly true their arguments sometimes ignore two facts: firstly, they underestimate the sheer power of scientific ideas like evolution or the birth of the universe to move young minds, and secondly they confuse "role model" with "inspiring figure". Henrietta Swan Leavitt is certainly an inspiration for me, but by definition she cannot be a role model. Role models do work better when they are somewhere close to you, but inspiration can come from anywhere.
Somewhere in our quest to bring about a more equal scientific community I think we are elevating identity above ideas, notions of nationality, gender and color above the sheer factual authenticity of scientific contributions. While the former are important, they cannot rise by treading on the latter which as facts about the universe stand in their own right in isolated splendor. When I think about Feynman I don't want to think of him as a Dead White Guy first and as the originator of Feynman diagrams only second, and when I look at George Washington Carver I don't want to think of him as a Black Scientist first and as the originator of groundbreaking ideas in agriculture only second. We need to look at ideas first and foremost as ideas, and only secondarily as ideas from a person belonging to a particular community, gender or nationality. Otherwise we risk falling prey to the same divisiveness and proliferation of "isms" which we seek to abolish.
"The “human face” genre was an understandable response to hagiography, but more recently it has lapped over into a commitment to dirt-digging. Some modern scientific biographies mean to show great scientists as not just human but all-too-human, needing to be knocked off their pedestals. Galileo—we are now told—was a self-publicizing courtier, sucking up to his Medici patrons; Robert Hooke was a miser who molested his niece; Newton was a paranoid supervisor of torture who cheated in a priority dispute;Pasteur was a careerist power broker who cut ethical corners; even gentle Darwin was channeling laissez-faire capitalist ideology and using illness as an excuse to get colleagues to face down his scientific opponents on his behalf. Weary of stories about the virtues attached to transcendent genius, biographers have brought their scientific subjects down to Earth with a thunderous thump."
This view does not quite conform to the post-modernist ideal of treating every achievement as a subjective (and pedestrian) product of its times rather than as the unique work of an individual, but it does go a bit too far in discounting the nature of the few genuinely intellectually superior minds the human race has produced. The approach also tends to conflate people's scientific achievements with the social or personal aspects of their lives and somehow asks us to consider one only in the "context" of the other. For instance according to this approach Richard Feynman (who would incidentally have celebrated his ninety seventh birthday today had he been alive) is not a hero because of his occasional sexist remarks or behavior, but the truth of the matter is that Feynman's behavior does not stop many of us (who are well aware of the unseemlier aspects of his personality) from regarding him as a scientific hero.
It also seems odd to me to declare that Feynman is not a role model for science communication when millions of young people of all colors, nationalities, genders and political views have been deeply inspired by his teaching, books and science. Growing up in India for instance, I know first hand how much Feynman's books encouraged me and my friends to go into science; I wouldn't be surprised if it turns out that his books probably sold more copies in the bookshelves and on the streets of India than in this country. This fact also demonstrates that while it helps, one does not always need someone from their own community or gender to serve as an inspiring figure. When I was growing up it might have been nice to have a Nobel Laureate from my neighborhood to serve as a role model, but Einstein, Feynman, Darwin and Bohr provided no dearth of inspiration. When I thought about the wonderful facts about the world they had unearthed it did not matter whether they were white, black, brown or green. They were scientists first and Americans, Englishmen, Germans and Danes later.
The point is, when it comes to being a successful model for science communication, shouldn't the product speak for itself? In our quest to seek new models of science communication, why do we need to discard older ones because they somehow don't look or speak like us? Why can't we have addition instead of substitution? And yet I see a minor, well-meaning but vocal chorus of voices in this country which wants to replace, not add to, old heroes of science communication. These proponents of diversity make the case that we need people from our own communities to serve as role models, and while this is certainly true their arguments sometimes ignore two facts: firstly, they underestimate the sheer power of scientific ideas like evolution or the birth of the universe to move young minds, and secondly they confuse "role model" with "inspiring figure". Henrietta Swan Leavitt is certainly an inspiration for me, but by definition she cannot be a role model. Role models do work better when they are somewhere close to you, but inspiration can come from anywhere.
Somewhere in our quest to bring about a more equal scientific community I think we are elevating identity above ideas, notions of nationality, gender and color above the sheer factual authenticity of scientific contributions. While the former are important, they cannot rise by treading on the latter which as facts about the universe stand in their own right in isolated splendor. When I think about Feynman I don't want to think of him as a Dead White Guy first and as the originator of Feynman diagrams only second, and when I look at George Washington Carver I don't want to think of him as a Black Scientist first and as the originator of groundbreaking ideas in agriculture only second. We need to look at ideas first and foremost as ideas, and only secondarily as ideas from a person belonging to a particular community, gender or nationality. Otherwise we risk falling prey to the same divisiveness and proliferation of "isms" which we seek to abolish.
The same problem applies to recognizing the balance between scientific genius and social environment. As Shapin alludes to in his piece, for some reason there has been what I see as a completely unnecessary tussle between two camps in recent times: one camp wants to declare scientific achievements as the work of lone geniuses while the other camp (definitely the more vocal one in recent times) wants to try to abolish or radically downplay the idea of genius and instead ascribe scientific feats to cultural and historical factors. Thus you can have genius or you can have luck, community, education and accidents of birth, but you cannot have both. The truth of course is much more mundane and somewhere in between: Einstein or Feynman were extraordinary intellects who did things that few others could have done, but it's also equally accurate that they could not have done those things without the relevant historical and social factors being lined up in their favor. We don't have to pick between genius-enabled science and socially-enabled science. The reality is that both of them piggyback on each other. There is little doubt that flashes of insight occasionally come from geniuses, just as it's true that these flashes stand on a foundation of historical and social drivers.
Fortunately, as Shapin indicates, the art of denigration has not itself seen untrammeled success, and some successful biographies of Einstein, for instance, have satisfyingly walked the tightrope between hagiography and candid human assessment.
"Denigration is now itself showing signs of wear and tear. Much present-day scientific biography aspires to something as apparently benign as a “rounded” account of life and scientific work—neither panegyric nor exposé, neither a dauntingly technical document nor a personal life with the science omitted. Walter Isaacson’s 2007 biography is an accessible and assured account, and his goal is such a “whole life” treatment. “Knowing about the man,” Mr. Isaacson writes, “helps us to understand the wellsprings of his science, and vice versa. Character and imagination and creative genius were all related, as if part of some unified field.” What could be more sensible?"
We can thus recognize Einstein as an authentic genius while also recognizing the shoulders on which he stood. We can recognize Feynman's scientific genius and his pronounced impact on the minds of millions of science students while recognizing his flaws as a human being. Gimbel's biography is along similar lines.
Taken together these balanced works provide some hope that we can appreciate the shining products of humanity while also appreciating their human defects. Like the paradoxical but complementary waves and particles of quantum theory, the two complete each other.
"Denigration is now itself showing signs of wear and tear. Much present-day scientific biography aspires to something as apparently benign as a “rounded” account of life and scientific work—neither panegyric nor exposé, neither a dauntingly technical document nor a personal life with the science omitted. Walter Isaacson’s 2007 biography is an accessible and assured account, and his goal is such a “whole life” treatment. “Knowing about the man,” Mr. Isaacson writes, “helps us to understand the wellsprings of his science, and vice versa. Character and imagination and creative genius were all related, as if part of some unified field.” What could be more sensible?"
We can thus recognize Einstein as an authentic genius while also recognizing the shoulders on which he stood. We can recognize Feynman's scientific genius and his pronounced impact on the minds of millions of science students while recognizing his flaws as a human being. Gimbel's biography is along similar lines.
Taken together these balanced works provide some hope that we can appreciate the shining products of humanity while also appreciating their human defects. Like the paradoxical but complementary waves and particles of quantum theory, the two complete each other.
A review of David McCullough's "The Wright Brothers"
David McCullough is one of the preeminent American historians of our times, the deft biographer of John Adams and Harry Truman, and in this book he brings his wonderful historical exposition and storytelling skills to the lives of the Wright brothers. So much is known about these men that they have been turned into legends. Legends they were, but they were also human, and this is the quality that McCullough is best at showcasing in these pages. The book is a quick and fun read. If I have some minor reservations, they concern only the lack of technical detail that could have informed descriptions of some of the Wrights' experiments, and the slightly hagiographic tint that McCullough is known to bring to his subjects. Nevertheless, this is after all a popular work, and popular history seldom gets better than under McCullough's pen.
The book shines in three aspects. Firstly, McCullough, who is quite certainly one of the best storytellers among historians, does a great job of giving us the details of the Wrights' upbringing and family. He drives home the importance of the Wrights' emphasis on simplicity, intellectual hunger, plain diligence, hard work and determination. The Wright brothers' father, who was a bishop, filled the house with books and learning and never held back their intellectual curiosity. This led to an interest in tinkering in the best sense of the tradition, first with bicycles and then with airplanes. The Wrights' sister Katharine also played an integral part in their lives; they were very close to her, and McCullough's account is filled with copious examples of the affectionate, sometimes scolding, always encouraging letters that the siblings wrote to each other. The Wrights' upbringing drives home the importance of family and emotional stability.
Secondly, McCullough brings us the riveting details of their experiments with powered flight. He takes us from their selection of Kill Devil Hills in the Outer Banks of North Carolina as a flight venue through their struggles, both with the weather and with the machinery. He tells us how the brothers were inspired by Otto Lilienthal, a brilliant German glider pilot who crashed to his death, and by Octave Chanute and Samuel Langley. Chanute was a first-rate engineer who encouraged their efforts, while Langley headed aviation efforts at the Smithsonian and was a rival. The Wrights' difficult life on the sand dunes - with "demon mosquitoes", 100-degree weather and wind storms - is described vividly. First they experimented with gliders, then with motors. Their successful and historic flight on December 17, 1903 was a testament to their sheer grit, bonhomie and technical brilliance. A new age had dawned.
Lastly, McCullough does a fine job describing how the Wrights rose to world fame after their flight. The oddest part of the story concerns how they almost did not make it because institutions in their own country did not seem to care enough. They found a willing and enthusiastic customer in the French, perhaps because the French had already embraced the spirit of aviation through their pioneering efforts in ballooning (in this context, Richard Holmes's book on the topic is definitely worth a read). Wilbur traveled to France, secured funding from individuals and the government and made experimental flights that were greeted with ecstatic acclaim. It was only when his star rose in France that America took him seriously. After that it was easier for him and Orville to secure army contracts and test more advanced designs. Throughout their efforts to get funding, improve their designs and tell the world what they had done, their own determined personalities and the support of their sister and family kept them going. While Wilbur died at the age of forty-five from typhoid fever, Orville lived until after World War 2 to witness the evolution of his revolutionary invention in all its glory and horror.
McCullough's account of the Wright brothers, as warm and fast-paced as it is, was most interesting to me for the lessons it holds for the future. The brothers were world-class amateurs, not professors at Ivy League universities or researchers in giant corporations. A similar attitude was demonstrated by the amateurs who built Silicon Valley, and it remains key to American innovation. The duo's relentless emphasis on trial and error - displayed to an almost fanatical extent by their compatriot Thomas Edison - is also an immortal lesson. But perhaps what the Wright brothers' story exemplifies most is the importance of simple traits: devotion to family, hard work, intense intellectual curiosity and, most importantly, the frontier can-do attitude that has defined the American dream since its inception. It's not an easy ideal to hold on to, and as we move into the 21st century we should always remember Wilbur and Orville, who lived that ideal better than almost anyone else. David McCullough tells us how they did it.
The book shines in three aspects. Firstly McCullough who is quite certainly one of the best storytellers among all historians does a great job of giving us the details of the Wrights' upbringing and family. He drives home the importance of the Wrights' emphasis on simplicity, intellectual hunger and plain diligence, hard work and determination. The Wright brothers' father who was a Bishop filled the house with books and learning and never held back their intellectual curiosity. This led to an interest in tinkering in the best sense of the tradition, first with bicycles and then with airplanes. The Wrights' sister Katharine also played an integral part in their lives; they were very close to her and McCullough's account is filled with copious examples of the affectionate, sometimes scolding, always encouraging letters that the siblings wrote to each other. The Wrights' upbringing drives home the importance of family and emotional stability.
Secondly, McCullough brings us the riveting details of their experiments with powered flight. He takes us from their selection of Kill Devil Hills in the Outer Banks of North Carolina as a flight venue through their struggles, both with the weather and with the machinery. He tells us how the brothers were inspired by Otto Lilienthal, a brilliant German glider pilot who crashed to his death, and by Octave Chanute and Samuel Langley. Chanute was a first-rate engineer who encouraged their efforts, while Langley headed aviation efforts at the Smithsonian and was a rival. The Wrights' difficult life on the sand dunes - with "demon mosquitoes", 100-degree weather and wind storms - is described vividly. First they experimented with gliders, then with motors. Their successful and historic flight on December 17, 1903 was a testament to their sheer grit, bonhomie and technical brilliance. A new age had dawned.
Lastly, McCullough does a fine job describing how the Wrights rose to world fame after their flight. The oddest part of the story concerns how they almost did not make it, because institutions in their own country did not seem to care enough. They found a willing and enthusiastic customer in the French, perhaps because the French had already embraced the spirit of aviation through their pioneering efforts in ballooning (in this context, Richard Holmes's book on the topic is well worth a read). Wilbur traveled to France, secured funding from individuals and the government, and made experimental flights that were greeted with ecstatic acclaim. It was only when his star rose in France that America took him seriously. After that it was easier for him and Orville to secure army contracts and test more advanced designs. Throughout their efforts to get funding, improve their designs and tell the world what they had done, their own determined personalities and the support of their sister and family kept them going. While Wilbur died at the age of forty-five from typhoid fever, Orville lived until after World War 2 to witness the evolution of his revolutionary invention in all its glory and horror.
McCullough's account of the Wright brothers, as warm and fast-paced as it is, was most interesting to me for the lessons it holds for the future. The brothers were world-class amateurs, not professors at Ivy League universities or researchers in giant corporations. A similar attitude was demonstrated by the amateurs who built Silicon Valley, and it remains key to American innovation. The duo's relentless emphasis on trial and error - displayed to an almost fanatical extent by their compatriot Thomas Edison - is also an immortal lesson. But perhaps what the Wright brothers' story exemplifies most is the importance of simple traits: devotion to family, hard work, intense intellectual curiosity and, most importantly, the frontier, can-do attitude that has defined the American dream since its inception. It's not an easy ideal to hold on to, and as we move into the 21st century we should remember Wilbur and Orville, who lived that ideal better than almost anyone else. David McCullough tells us how they did it.
Is mathematics necessary for doing great science?
Derek has a post on the value of mathematics for chemists in Chemistry World. As he says in there, while formal mathematics may not be necessary at all in a chemist or biologist's day to day work, the "mental furnishings" that math provides can be quite useful for thinking through problems. The post reminded me of my own musings on an editorial by E. O. Wilson on mathematics in the sciences which I am reposting with some additions.
Writing in the Wall Street Journal, biologist E. O. Wilson asks if math is necessary for doing great science. At first glance the question seems rather pointless and the answer trivial; we can easily name dozens of Nobel Prize winners whose work was not mathematical at all. Most top chemists and biomedical researchers have little use for mathematics per se, except in terms of using statistical software or basic calculus. The history of science is filled with scientists like Darwin, Lavoisier and Linnaeus who were poor mathematicians but who revolutionized their fields.
But Wilson seems to be approaching this question from two different perspectives and by and large I agree with both of them. The first perspective is from the point of view of students and the second is from the point of view of research scientists. Wilson contends that many students who want to become scientists are put off when they are told that they need to know mathematics well to become great scientists.
"During my decades of teaching biology at Harvard, I watched sadly as bright undergraduates turned away from the possibility of a scientific career, fearing that, without strong math skills, they would fail. This mistaken assumption has deprived science of an immeasurable amount of sorely needed talent. It has created a hemorrhage of brain power we need to stanch."
I do not know if this is indeed what students feel, but at least on one level it makes sense. While it's true that chemists and biologists certainly don't need to know advanced mathematical topics like topology or algebraic geometry to do good science, these days they do need to know how to handle large amounts of data, and that's a trend that is only going to grow by leaps and bounds. Now, analyzing large amounts of data does not require advanced mathematics per se - it's more statistics than mathematics - but one can see how mathematical thinking helps one understand the kinds of tools (things like machine learning and principal component analysis) that are standard parts of modern data analysis. So while Wilson may be right that professors should not discourage students by requiring them to know mathematics, they should also stress the importance of abstract mathematical thinking that's useful in analyzing data in fields ranging from evolutionary biology to social psychology. You don't have to be a mathematician in order to think like a mathematician, and it never hurts these days for any kind of scientist to take a class in machine learning or statistics.
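As a concrete, purely illustrative sketch of the kind of tool I mean - this example is mine, not Wilson's, and it assumes the standard NumPy and scikit-learn libraries with a made-up dataset - principal component analysis boils a table of correlated measurements down to a couple of informative axes:

    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical dataset: 100 samples measured on 6 correlated variables,
    # generated from only 2 underlying factors plus a little noise
    rng = np.random.default_rng(0)
    base = rng.normal(size=(100, 2))
    data = base @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(100, 6))

    # Reduce the 6 measured variables to 2 principal components
    pca = PCA(n_components=2)
    scores = pca.fit_transform(data)
    print(pca.explained_variance_ratio_)  # most of the variance sits in two components

The point is not the code itself but the habit of mind it encodes: recognizing that six noisy measurements may really be two underlying factors in disguise.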
At the same time Wilson is quite right that true success in science mostly does not come from mathematics. In many fields math is a powerful tool, but a tool nonetheless; what matters is a physical feel for the systems to which it is applied. As Wilson puts it, "Far more important throughout the rest of science is the ability to form concepts, during which the researcher conjures images and processes by intuition". In Wilson's own field, for instance, you can use all the math you like to calculate rising and ebbing populations of prey and predator, but true insight into the system can only come from broader thinking that utilizes the principles of evolution. In fact biology can claim many scientists like John Maynard Smith, J. B. S. Haldane and W. D. Hamilton who were excellent mathematicians, but the fact remains that these men's great contributions came from their understanding of the biological systems under consideration rather than from the mathematics itself.
In my own field of chemistry, math is employed as the basis of several physics-based algorithms that are used to calculate the structure and properties of molecules. But most chemists like me can largely get away with using these algorithms as black boxes; our insights into problems come from analyzing the results of the calculations within the unique structure and philosophy of chemistry. Knowledge of mathematics may or may not help us in understanding molecular behavior, but knowledge of chemistry always helps. This also speaks to the limitations of reductionism in a field like chemistry, which I have often written about before. As Roald Hoffmann put it quite memorably, chemical concepts like aromaticity and electronegativity start "fraying at their edges" when you try to make them too precise through mathematical manipulation. It's far better to have a semi-qualitative idea that's actually explanatory than a six-decimal calculation that offers no chemical insight. And even in a bona fide mathematical field like quantum chemistry the distinction between "using" math and "knowing" it is quite clear; I don't really know the math behind many theoretical calculations on molecules, but I certainly use it on a regular basis in an implicit way.
What's interesting is that mathematics is not even a game changer in the world of physics, the one field where its application is considered essential. The physicist Eugene Wigner did write an essay titled "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", but even the greatest theoretical physicists of the twentieth century, including Einstein, Fermi, Feynman and Bohr, were known more for their physical intuition than for formidable mathematical prowess. Einstein's strength was to imagine thought experiments; Fermi's was to do rough back-of-the-envelope calculations. So while mathematics is definitely key to making advances in fields like particle physics, even in those fields what really matters is the ability to imagine physical phenomena and make sense of them. The history of physics presents very few examples - Paul Dirac's work in quantum mechanics and Hermann Weyl's work in group theory come to mind - where mathematical beauty and ability alone served to bring about important scientific progress.
This use of mathematics as little more than an elegant tool relates to Wilson's second point concerning math, this time in the context of collaboration. To me Wilson's argument confirms a quote attributed to Thomas Edison: "I can hire a mathematician but a mathematician cannot hire me." Most non-mathematicians can collaborate with a mathematician to firm up their analyses, but without a collaborator in the physical or social sciences mathematicians will have no idea what to do with their equations, no matter how rigorous or elegant they are.
The other thing to keep in mind is that an over-reliance on math can also seriously hinder progress in certain fields and even lead to great financial and personal losses. Finance is a great example; the highly sophisticated models developed by physicists on Wall Street caused more harm than good. In the words of the physicist-turned-financial modeler Emanuel Derman, the modelers suffered from "physics envy", expecting markets to be as precise as electrons and neutrinos. In one sense I see Wilson's criticism of mathematics as a criticism of the overly reductionist ethos that some scientists bring to their work. I agree with him that this ethos can often lead one to miss the forest for the trees.
The fact is that fear of math often dissuades students and professionals from embarking on research in fields where data analysis and mathematical thinking are useful. Wilson's essay should assure these scientists that they need not fear math, and do not even need to know it well, to become great scientists. All they need to do is use it when it matters. Or find someone who can. The adage about mathematics being the "handmaiden of the sciences" sounds condescending, but it's not, and it's fairly accurate.
A review of Marcia Bartusiak's "Black Hole: How an Idea Abandoned by Newtonians, Hated by Einstein, and Gambled On by Hawking Became Loved"
Black holes are unusual objects. They are now recognized as some of the most important cosmic laboratories for studying all kinds of physics, from general relativity to quantum mechanics. And yet, as science writer Marcia Bartusiak describes in this book, their road to acceptance was marked by a lack of interest from their own pioneers and by many haphazard detours.
Bartusiak traces the conception of the idea of black holes to a Cambridge don named John Michell, who asked whether an object could be so dense that even light would not escape its gravitational pull. This idea lay buried in the scientific literature until the early 20th century, when astronomers began asking questions about the constitution of stars. It was the young Indian astrophysicist Subrahmanyan Chandrasekhar who first thought seriously about gravitational collapse, on his way to graduate school in England. Bartusiak describes well Chandrasekhar's battles with the old English establishment of astronomers, and especially with the doyen of English astrophysics Arthur Eddington, in getting his ideas accepted. He was so frustrated in his endeavors that he switched to studying other topics before he finally won the Nobel Prize for his work decades later. Chandrasekhar's life holds many lessons, but one of them is that the best way to win a Nobel Prize is to live long enough.
The next actors on the cosmic stage were the volatile Fritz Zwicky and the brilliant Lev Landau and Robert Oppenheimer. Zwicky was a maverick scientist with a prickly personality who ruffled both personal and scientific feathers with his bold conjectures regarding dark matter and supernovae. Landau and Zwicky laid out the first contours of what is now called a neutron star, while Oppenheimer was really the first scientist to ask what happens when a star completely collapses to a point, what was later called a singularity. Interestingly, both Oppenheimer and Einstein - whose general theory of relativity shines in all its glory in black holes - either refused to accept their reality or showed a complete lack of interest in them in their later years. Oppenheimer's study of black holes was ironically the most important scientific contribution he ever made, even though it occupied only a fraction of his scientific interests and career. Many people think he would have won a Nobel Prize had he lived long enough to see his predictions validated by experiment. One of the reasons why both Oppenheimer and the world at large lost interest in black holes after he predicted them is revealed by the date on which his paper on the subject appeared in the journal Physical Review - September 1, 1939, the day the Nazis attacked Poland and inaugurated World War 2. Interestingly, the same issue of the journal contained a pioneering paper on the mechanism of nuclear fission by John Wheeler and Niels Bohr. As the war kicked off, fission took center stage while black holes were forgotten.
Why did Oppenheimer and Einstein turn away from black holes even after the war? Part of their recalcitrance - at least in the case of Oppenheimer - concerned their dislike of Zwicky and Wheeler, but part of it was also an aversion to anything they thought was not fundamental enough. When I met Freeman Dyson, he confirmed that he had tried several times to talk about black holes with Oppenheimer, but every time Oppenheimer changed the subject, considering black holes too applied, too dull, objects of attention best suited to second-rate minds or graduate students (take your pick). The story of black holes is a good instance of scientific revolutionaries turning conservative. Ironically, the scientific grandchildren that Oppenheimer and Einstein despised have now come back to haunt the ghosts of their grandparents, showcasing properties that promise to shed light on some of the most basic tenets of relativity, cosmology and quantum mechanics.
As Bartusiak narrates, it fell to a new breed of brilliant scientists led by John Wheeler in the US, Dennis Sciama in the UK and Yakov Zeldovich in the USSR to work out the details of black hole astrophysics. They in turn inspired a whole generation of students like Kip Thorne, Roger Penrose and Stephen Hawking who contributed to the discipline. Bartusiak's book also has a readable account of the experimental discoveries in x-ray and radio astronomy which turned black holes from speculation to reality. As the book makes clear, the importance of observational astronomy and developments in electronics in the discovery of these wondrous objects cannot be overestimated. Frequently it is only a meld of new techniques and new ideas that brings about a scientific revolution, and black holes are a good example of this necessary marriage.
The book ends with a brief description of Hawking's work on black holes that led to the proposal of so-called Hawking radiation, radiation engendered by the principles of quantum mechanics that allows particles to escape from the vicinity of a black hole's event horizon. I was disappointed that Bartusiak does not pay more attention to this exciting frontier, especially the meld of ideas from information theory and computer science with thermodynamics and quantum mechanics that has emerged in the last few years.
Overall Bartusiak's volume is a good introduction to the history and physics of black holes. My only concern is that it covers very little that has not already been documented in other books. Kip Thorne's "Black Holes and Time Warps: Einstein's Outrageous Legacy" remains the gold standard in the field and covers all these discoveries and more, far more comprehensively and engagingly, while Pedro Ferreira's "The Perfect Theory", which came out this year, treads much of the same experimental ground. This is not a bad book at all, but it came out slightly late: if you really want to read one book on black holes, I think it should be Thorne's.
Thermodynamics in drug discovery: A Faustian bargain cooked in a devil's stew?
The binding of analogs of the anticoagulant melagatran to thrombin demonstrates intricate water network and protein conformation differences masked by similar binding modes.
Driving drug design by studying the thermodynamics of ligands binding to proteins has always seemed like a good idea whose time has come. It all sounds very attractive: at its heart every molecular recognition event is driven by thermodynamic and kinetic principles, so in principle one should be able to figure out everything one wants to know about the details of such interactions by accurately calculating or measuring the relevant parameters. Surely the main hurdles are experimental? The truth, though, is that we now have good experimental techniques to investigate thermodynamics, and yet nobody seems to have figured out the best way to apply the idea prospectively to new drug discovery projects.
However, there has been an increasing awareness of the breakdown of the free energy of binding into enthalpy and entropy; part of this awareness has been driven by analyses of drugs indicating that the best drugs have their enthalpies of binding, rather than their entropies, optimized. The conventional wisdom gleaned from these and other studies is that optimizing enthalpy is tantamount to optimizing specific protein-ligand interactions, while optimizing entropy is tantamount to displacing water molecules and building in hydrophobic modifications. This seems to indicate that one must try to optimize enthalpy through specific interactions early in the drug development process, even though doing this is usually very challenging, since you can often end up simply trading one hydrogen bond for another, leading to a net zero impact on binding affinity.
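To keep the bookkeeping straight, it may help to spell out the relations being invoked here. This is a toy sketch of my own with invented numbers, not anything from the review; only the standard relations dG = dH - T*dS and dG = RT*ln(Kd) are assumed:

    import math

    R = 1.987e-3   # gas constant, kcal/(mol*K)
    T = 298.15     # temperature, K

    dH = -10.0            # hypothetical binding enthalpy, kcal/mol
    dS = -0.005           # hypothetical binding entropy, kcal/(mol*K)
    dG = dH - T * dS      # free energy of binding: dG = dH - T*dS
    Kd = math.exp(dG / (R * T))   # dissociation constant from dG = RT*ln(Kd)

    print(f"dG = {dG:.2f} kcal/mol, Kd = {Kd:.1e} M")

With these made-up numbers the ligand is enthalpy-driven and binds with roughly sub-micromolar affinity; the same dG, and hence the same affinity, could just as easily have come from a very different enthalpy/entropy split, which is exactly why the decomposition is tempting to interpret and easy to over-interpret.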
Now there's a new, very comprehensive and readable review on these matters out in J. Med. Chem. which demonstrates just what kind of a devil's stew this topic actually is, and which asks whether thermodynamics is still a 'hot tip' in drug discovery. The authors, who are from AstraZeneca in Sweden, look at a variety of topics related to protein-ligand thermodynamics: the gory details of ITC, the experimental workhorse used to determine the thermodynamic quantities; case studies showing that displacing water can sometimes help and sometimes hurt; the convoluted phenomenon of enthalpy-entropy compensation; and the whole fundamental idea that water molecules are all about entropy and interactions with the protein are all about enthalpy.
They reach the conclusion that a lot of the conventional wisdom is, if not exactly incorrect, far too simplistic and often misleading. As always, reality is more complex and subtle than three tidy numbers. They find cases where displacing water improves the free energy of binding, not through entropy as one might expect, but by strengthening existing interactions or forging new ones - that is, through enthalpy. There are also cases where displacing water makes things worse because the ligand cannot pay the desolvation penalty imposed on its polar groups. There have already been reports looking at networks of water molecules not just in the protein active site but also around the ligand, and the subtle movements of these networks only serve to complicate any kind of water-based analysis. And as the authors demonstrate, simple observation of SAR can be very misleading when used to draw conclusions about displacement of water molecules or formation of specific interactions: for instance, even ligands that share the same binding mode can showcase differing water networks and protein conformations.
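The enthalpy-entropy compensation the authors describe is easy to illustrate with another purely hypothetical sketch (invented numbers again, same standard relations as above): two analogs whose enthalpy and entropy terms differ substantially can still end up with nearly identical free energies, and therefore nearly identical affinities.

    import math

    R, T = 1.987e-3, 298.15   # gas constant (kcal/(mol*K)) and temperature (K)

    # Hypothetical analogs: (binding enthalpy dH in kcal/mol, binding entropy dS in kcal/(mol*K))
    ligands = {
        "analog A (enthalpy-driven)": (-12.0, -0.012),
        "analog B (entropy-driven)":  (-5.0,   0.012),
    }

    for name, (dH, dS) in ligands.items():
        dG = dH - T * dS              # the two terms compensate...
        Kd = math.exp(dG / (R * T))   # ...leaving the affinities nearly indistinguishable
        print(f"{name}: dH = {dH:6.1f}, -T*dS = {-T * dS:6.1f}, dG = {dG:6.1f} kcal/mol, Kd = {Kd:.1e} M")

The point of the toy example is simply that the measured free energy is the only thing that sets the affinity; the very different stories behind the two numbers are exactly what the thermodynamic data alone cannot disentangle.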
The conclusion the paper reaches is not exactly heart-warming, although it points to some future directions.
“So can ligand binding thermodynamics still be regarded as a hot tip in drug discovery? No, not in a routine setting or with enthalpy and entropy regarded as isolated endpoints. Experimentally obtained thermodynamic data and crude derived parameters thereof are simply not well suited to be used for direct red/green decision-making. It is not unlikely, that some of the underlying parameters of the measured enthalpy and entropy might correlate with other interesting and relevant compound parameters. However, no such correlation has been convincingly shown thus far…Comparison of experimental ITC data with e.g. LLE (lipophilic ligand efficiency), simple solvent calculations or more rigorous free energy perturbations can enable the identification of compounds that do not behave as expected. Identifying and scrutinizing those outliers appears currently to be the most impactful use of thermodynamic profiling. The outliers could help to identify compound series that shift their binding mode, induce different motions in the target protein or distinguish intra-molecular hydrogen bonds from those between protein and ligand.”
The main question the authors try to answer here is whether the measurement of free energy, enthalpy and entropy can prospectively help drug design, and their answer is largely negative. The fundamental reason is that all these quantities are composite effects, so they mask individual contributions from protein, ligand and water. The contribution of a particular hydrogen bond to affinity is not an experimental observable, and trying to over-interpret thermodynamic data in order to divine such contributions can easily lead you down the rabbit hole. A multiplicity of such contributions can result in the same number, so the problem is largely underdetermined. As the review indicates, all that prospective measurement of thermodynamic quantities can do is point to obvious outliers that might be causing very large protein conformational changes or leading to radically different ligand conformations - although I would think that an eagle-eyed medicinal chemist armed with some structural expertise and robust SAR data might reach the same conclusions.
Thermodynamics has always been one of the beloved children of drug discovery, one on whom the parents have pinned high hopes but who has yet to turn that potential into real achievement. As this review demonstrates, there is much complexity hidden in the heart of this child prodigy, and until that complexity is unraveled its beatific smile will remain as inscrutable as a cloudy crystal ball.
Reference:
Ligand Binding Thermodynamics in Drug Discovery: Still a Hot Tip? J. Med. Chem., Just Accepted Manuscript. DOI: 10.1021/jm501511f