Field of Science

Carl Djerassi (1923-2015): Chemist, writer, polymath, cultural icon

R B Woodward, Vladimir Prelog and Carl Djerassi on the beach at a conference in Riga, Latvia. Photo signed by Djerassi and generously gifted to the author by Prof. Jeffrey Seeman, University of Richmond.
Very few scientists of the 20th century have had as much of a scientific and cultural impact on the world as Carl Djerassi. It is a measure of how many things Djerassi excelled at that even his amazing purely scientific career seems like a distant horizon. While most scientists are quite happy to become world-renowned in their narrow subfield, Djerassi polished off multiple fields of chemistry and then reinvented himself as a notable playwright and writer. And he did all this after significantly contributing to one of the greatest social revolutions of the 20th century, if not of all time - safe, affordable and easily accessible contraception for women. Very few technical inventions in history have given women control over their lives the way the pill did. Scientists are usually not cultural icons, and the wrong people often are, but Djerassi definitely deserves to be one.

On his 90th birthday, my friend Jeff Seeman, the noted historian of chemistry, summarized a few of Djerassi's astonishing contributions and honors:


He has published more than 1,200 scientific papers, 350 in the Journal of the American Chemical Society.
He is a chemical “father of the Pill.”
He has published three autobiographies, one memoir, five novels, two nonfiction books, 11 plays, two collections of poetry, three collections of essays and short stories, and one art book.
He has made significant contributions, in cash and in kind, to charitable causes and artistic endeavors.
He has received the National Medal of Science and the National Medal of Technology.
He has been awarded the Priestley Medal, the first Wolf Prize in Chemistry, and many other awards, as well as 32 honorary doctorates.
He has been recognized by Austria with a postage stamp issued in his honor.
He is in constant demand as a lecturer around the world.
He will turn 90 on Oct. 29.
Yet, Carl Djerassi has never been fully satisfied.

I first encountered Djerassi's work when, as an undergraduate, I studied the octant rule that he, R B Woodward, Bill Moffitt and others pioneered to assign the configuration of steroids. Nobody really uses it anymore since more sophisticated methods like NMR spectroscopy have superseded it, but as I came to know more about Djerassi's contributions, it amazed me that the same man who published the octant rule also pioneered the use of mass spectrometry in natural products chemistry, unraveled the biosynthesis of several key steroids and alkaloids, and published some of the first papers on the applications of artificial intelligence in organic chemistry. His 1,200 papers span the breadth of the discipline, and among younger chemists only Clark Still comes to mind as someone with the same diversity of contributions.

Djerassi's autobiography ("Steroids Made It Possible"), which is edited by Jeff, is wonderful and a real treat. In it he talks about a variety of scientific and private topics, ranging from the letter to Eleanor Roosevelt that got him a college scholarship, to his experiments with mescaline, to his accidental grin in the photo showing Richard Nixon awarding him the National Medal of Science (he hated Nixon and in fact was on Nixon's silly "enemies" list, but Nixon said something funny right at the moment the photo was taken). The book also contains painful ruminations, such as the one about his daughter's struggle with addiction and her suicide. Djerassi was nothing if not upfront about his life, both in this memoir and in his subsequent two books, the latest of which just came out and which I haven't read yet.


His writings are also studded with sketches of great chemists like R B Woodward, Gilbert Stork, Bill Johnson and E J Corey, most of whom Djerassi counted among his close friends and colleagues. In some sense, a journey through his science is a journey through the development of postwar organic chemistry in its golden age. And speaking of R B Woodward, Djerassi managed to become the most highly cited chemist of the 60s - at a time when his friend had already staked his claim as the greatest organic chemist of the century and continued to publish seminal papers. That's no small feat.
Djerassi was of course also a noted playwright, reinventing himself during the second half of his life and crafting, for instance, the play "Oxygen" with his fellow chemist Roald Hoffmann. He was a great example of someone who bridged C P Snow's two cultures, displaying a wide storehouse of knowledge ranging from philosophy and art to literature and science. His shares in Syntex Corporation, where he did his steroid research, made him a wealthy man and allowed him to do all this; they also enabled him to retire early, amass an enviable private art collection and spend part of every year in London and other parts of Europe writing and giving talks. His fiction is well worth reading, and his characters are as interesting and honest as the science he pioneered.

Most laymen will of course always know Djerassi as one of the fathers of the contraceptive pill, although those of us who are aware of his chemical contributions appreciate that it was but one part of his prolific career. As many of us also know, Djerassi was a favorite on Nobel Prize lists for a long time, and as it did with many other scientists, the Nobel committee did itself a disservice by not awarding him one. But Djerassi's career, more than that of most others, indicates the irrelevance of prizes, as honorable as they may be. In that sense Djerassi is like Gandhi. His work was beyond prizes, and considering the social revolution that The Pill brought about, he will always stand not only as one of the scientific greats of the 20th century but as one of its most important human beings. RIP.

Precision medicine is not precision engineering

From the NYT, a plea for not getting carried away with the vision of 'moonshot medicine' as precision engineering. The op-ed, written by Michael Joyner, a doctor at the Mayo Clinic, takes issue with Obama's precision medicine initiative, which he apparently highlighted in his State of the Union address earlier this month. Obama's proposal is nothing new; it has been echoed by proponents since the beginnings of the Human Genome Project. The idea of identifying specific genetic variants in patients and then targeting them with specific drugs seems logical enough. But as the article indicates, we have clearly spoken too soon:

The basic idea behind it is that we each have genetic variants that put us at increased or decreased risk of getting various diseases, or that make us more or less responsive to specific treatments. If we can read someone’s genetic code, then we should be able to provide him or her with more effective therapeutic and preventive strategies. 
But for most common diseases, hundreds of genetic risk variants with small effects have been identified, and it is hard to develop a clear picture of who is really at risk for what. This was actually one of the major and unexpected findings of the Human Genome Project. In the 1990s and early 2000s, it was thought that a few genetic variants would be found to account for a lot of disease risk. But for widespread diseases like diabetes, heart disease and most cancers, no clear genetic story has emerged for a vast majority of cases.

Then there's the psychological aspect:

The push toward precision medicine could also lead to unintended consequences based on how humans respond to perceptions of risk. There is evidence that if people believe they are less at risk for a given disease, they feel excessively protected and their behavior gets worse, putting them at increased risk. Likewise, those who feel they are at greater risk, even if the increased risk is small, might become fatalistic, making their behavior worse as well. Then there are the worriers, who might embark on a course of excessive tests and biopsies “just in case.” In a medical system already marked by the overuse of diagnostic tests and procedures, this could lead to even more wasteful spending.

And finally, the whole idea that biology can be thought of as a linear engineering system is fraught with flaws and uncertainty (it's also not every day that you hear the word 'omertà'):

Given the general omertà about researchers’ criticizing funding initiatives, you probably won’t hear too many objections from the research community about President Obama’s plan for precision medicine. But I am deeply skeptical. Like most “moonshot” medical research initiatives, precision medicine is likely to fall short of expectations. Medical problems and their underlying biology are not linear engineering exercises, and solving them is more than a matter of vision, money and will.

This is not the first time that complex medical endeavors have been made to sound much simpler than they are. The breathless initial promise that gene sequencing would directly lead to new drugs has now been somewhat dampened, but other efforts still echo this promise, most notably the BRAIN Initiative announced with much fanfare a few years ago. The goals of the initiative are laudable and some of the technologies being envisaged are fascinating, but we are still light years away from "mapping the brain" in any incarnation whatever, let alone using that information for direct therapeutic intervention.

Unlike precision engineering, where we are dealing with moving parts whose behavior is a matter of well-understood physical principles, the emergent chaos of biological systems is a very different matter. We should certainly not stop trying to conquer this frontier, but we should also make sure everyone knows how far the horizons are. Biology is not physics.

Boundary value conditions, domain applicability and "American Sniper"

Actor Bradley Cooper in "American Sniper"
General relativity is a generalization of Newtonian mechanics that applies to massive objects and high speeds that curve spacetime. Similarly, quantum mechanics is a generalization of classical mechanics that applies to very small objects like electrons and photons. Newtonian mechanics thus has a domain of applicability within which it works perfectly well even if it fails under the larger rubric of Einsteinian general relativity. Likewise, classical mechanics has a domain of applicability within which it is golden. Both are perfectly comfortable within their own boundary value conditions - medium-sized objects and slow speeds, for instance.
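The analogy can even be made quantitative. Here is a minimal Python sketch (my own illustration, not drawn from any particular textbook) comparing Newtonian and relativistic kinetic energy, showing how the Newtonian answer is essentially exact at everyday speeds and badly wrong near the speed of light:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def newtonian_ke(m, v):
    """Kinetic energy in Newton's domain of applicability."""
    return 0.5 * m * v ** 2

def relativistic_ke(m, v):
    """Exact special-relativistic kinetic energy."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C ** 2

# How far off is Newton at various speeds? (1 kg test mass)
for frac in (0.001, 0.1, 0.5, 0.9):
    v = frac * C
    error = 1.0 - newtonian_ke(1.0, v) / relativistic_ke(1.0, v)
    print(f"v = {frac} c: Newtonian KE understates the true value by {error:.2%}")
```

At a thousandth of the speed of light the two answers agree to better than a part in a million; at 90% of the speed of light Newton understates the energy by more than two-thirds. Same equations, very different domains.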

If you think about it, morality has similar domains of applicability. I was reminded of this comparison when I watched the Clint Eastwood-directed movie "American Sniper" over the weekend. The movie is about Chris Kyle, a celebrated US Navy sniper who over his four tours of duty in Iraq racked up 160 confirmed kills. The film has garnered some controversy for its supposed glorification of war and mindless patriotism. Kyle faced some gut-wrenching decisions in his sniper career in gunning down women and children who were enemy combatants, but by most accounts the decisions were quite simple for him and his conscience seemed clear. Personally, irrespective of where my political sympathies lie, I thought the movie was very gripping and well-made and enjoyed it, and Bradley Cooper was a revelation. After it ended I witnessed a kind of sustained and eerie silence as the audience shuffled out of the theater, a silence that I haven't encountered very frequently before.

Since the audience was based in Boston, MA, it is unlikely that the silence was the result of them being enamored and stunned by Chris Kyle's sacrifice and patriotism. Instead I would like to think that the contemplative pause was provoked by a classic and highly problematic dilemma that mankind has faced since it acquired the ability to wage war: how can you vigorously oppose what you think is a highly unnecessary and immoral war and yet support the actions and decisions of individual soldiers like Chris Kyle who are putting their lives on the line with courage? It's a question Americans of every stripe face: when a soldier appears at the airport after returning from a tour, do you condemn him or her or laud their courage?

It's a very difficult question, one whose answer is certainly not black and white and one we will almost certainly never settle definitively. But a comparison with theories in science, and especially in physics, at least provides an interesting analysis in my opinion.

Here's the point: When Chris Kyle was looking down the barrel of his sniper rifle and had to pick between the life of a ten-year-old Iraqi kid carrying a grenade and the lives of five American Marines, he picked the Marines. The choice still wasn't easy, but it was the best one to make under the limited parameters and boundary value conditions that he was operating under. The parameters were related to choosing between culpable Iraqi lives and American lives. The boundary value conditions had to do with the narrowly defined terms of his mission or assignment. You could convince yourself that he had the right solution under those conditions even if you vehemently protested the war at large.

The fact of the matter is that moral decisions, as hard and endlessly unanswerable as they may be, still often operate within a limited domain of applicability. These domains of applicability have announced themselves in every war from the Mexican War to Vietnam. You can be for or against the moral decisions with some degree of confidence within those particular domains, even if you may be entirely opposed to or in favor of them when the domains expand. It's like being in favor of Newtonian mechanics when you want to send rockets to the moon while being vehemently opposed to it when you want to send rockets to a neighboring galaxy containing black holes. Or like being in favor of classical mechanics when you want to deal with large blocks of gold while denying the use of that domain of applicability when analyzing individual gold atoms and the energy levels of their electrons. You can therefore potentially consider the political leaders who orchestrated the Iraq war to be war criminals even if you agree with the difficult decisions that Kyle made in those few seconds when he was looking down that gun barrel in Fallujah.

There may be no answer to moral questions like those surrounding the Iraq war, and it is very likely that we will remain forever divided into black and white bins when it comes to answering them. But I would like to believe that by dividing the conundrums into domains of applicability, and by at least agreeing that one's answer may depend on the exact boundary value conditions one is dealing with, we can reach out into that gray area that always bedevils the gut-wrenching implications of our morality in both war and peace. Perhaps then we can meet halfway and reach some kind of consensus, even if that consensus occupies the kind of twilight netherworld that entangled electrons do.

Surprises in physics: From black bodies to the accelerating universe

Max Planck's revolutionary 1900 discovery that energy in the subatomic world exists as discrete packets marked the beginning of a century of spectacular surprises in physics
Surprises rank high on the list of things that make science a source of everlasting delight. When it comes to being surprised scientists are no different from the general public. Just like children on their birthdays being surprised by unexpected gifts, scientists revel in the surprises that nature whips up in front of them. Surprises in science point to something deeper: the mystery, excitement and things unseen and unknown that always keep the scientific enterprise interesting, amusing and profound.
Scientific surprises are not always tantamount to important findings. For instance the experimental detection of the Higgs boson was a great achievement but it was not exactly surprising since the theoretical prediction had been made much earlier. The theoretical prediction itself emerged as a logical extension of ideas which were then in the air, and no fewer than six individuals contributed to its genesis. The fact is that some discoveries in science are important, some are surprising and some are both. It is this third category that is the most memorable, and rare is the scientist who finds himself or herself the beneficiary of a discovery that is both surprising and important.
Thus it’s worth taking a look back and charting some of the most important surprises in science, those that either forced us to rethink a lot of our assumptions or, in rare cases, those that truly changed our view of the world. Here I present a list of surprises in physics drawn from roughly the last one hundred years. Physics is in equal parts a theoretical and an experimental science, so its surprises come from both arenas. Feel free to note other surprises which I might have missed, and remember that surprising is not the same as important and important is not the same as surprising (and therefore this list excludes many of the most important discoveries in physics).
1900 – Max Planck: The beginning of quantum mechanics – Max Planck’s solution to the thorny theoretical problem of blackbody radiation started what was surely the greatest revolution in physics since Isaac Newton’s time. Planck was famously trying to explain the dependence of the intensity of radiation emitted by a black body on its frequency and temperature. The conservative German physicist was essentially struggling with what we today call “curve fitting”, finding the right equation for a graph of experimental data. He realized that the only way he could do this was to imagine a formula with a new constant called h and the assumption that energy was emitted by the blackbody only in certain discrete units. Planck intended his solution as a mathematical fix and not a representation of reality, and it was only when Albert Einstein appeared on the scene that he realized the nature of the radical transformation in our view of reality that he had inaugurated. The age of the quantum had dawned.
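For reference, the two relations at the heart of Planck's fix were the quantization condition and the radiation law it yielded:

```latex
E = n h \nu \quad (n = 0, 1, 2, \ldots), \qquad
B_\nu(\nu, T) = \frac{2 h \nu^3}{c^2} \, \frac{1}{e^{h\nu / k_B T} - 1}
```

Here $h$ is Planck's constant and $k_B$ is Boltzmann's constant; the formula agrees with the classical Rayleigh-Jeans law at low frequencies while avoiding the "ultraviolet catastrophe" at high ones.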
1905 – Albert Einstein: Special relativity – In 1905 Einstein wrote five papers that changed the face of physics, but perhaps only one of these can be called truly surprising. This was his famous paper setting out the principles of the special theory of relativity. There were of course many profound surprises in the theory – including time dilation and length contraction – but the biggest fundamental surprise was probably the iron rule that required the speed of light to be constant. The breakthrough was counterintuitive, bold and clearly revolutionary.
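Those surprises flow from a single quantity, the Lorentz factor, which follows directly from the constancy of the speed of light:

```latex
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad \Delta t = \gamma \, \Delta t_0, \qquad L = \frac{L_0}{\gamma}
```

Moving clocks tick slower by a factor of $\gamma$ and moving rods contract by the same factor; at everyday speeds $\gamma$ is indistinguishable from 1, which is why none of this offends our intuition.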
1909 – Ernest Rutherford: The atomic nucleus – Until Rutherford’s time the structure of the atom was nebulous, the best guess coming from J. J. Thomson who imagined it as a “plum pudding” with negatively charged electrons uniformly embedded in a positively charged sphere. It was in Manchester that Rutherford, along with two associates and a 70 pound grant from the Royal Institution, performed his famous gold foil experiment in which he shot alpha particles at a thin gold foil. If Thomson’s uniform and homogeneous atomic model had been correct the particles should have been scattered equally in all directions. Instead a very few of them came right back at the experimenters. The novelty and surprise of the result is best captured by Rutherford himself: “It was almost as incredible as if you had fired a 15 inch shell at a piece of tissue paper and it came right back at you”. Rutherford’s discovery signaled the beginning of nuclear physics.
1911 – Heike Kamerlingh Onnes: Superconductivity – Until 1911 there were divergent views regarding the behavior of matter at very low temperature. Kamerlingh Onnes from Leiden settled the debate in 1911 by finding, to his utter surprise, that mercury at 4.2 degrees Kelvin lost all resistance to the flow of current. The discovery was so surprising and significant that Onnes received a Nobel Prize only two years later. Superconductivity continues to be a frontier of physics research, both pure and applied.
1913 – Niels Bohr: The quantum theory of the atom – Bohr’s surprise was the postulation that electrons can occupy only certain energy levels in atoms and emit photons with energies defined by Planck’s formula when they make transitions. This really marked the beginning of the weird, probabilistic world of quantum mechanics, encapsulated by the question, “How does the electron know when to make a transition”?
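For hydrogen, Bohr's two postulates can be written compactly as:

```latex
E_n = -\frac{13.6\ \text{eV}}{n^2} \quad (n = 1, 2, 3, \ldots), \qquad h\nu = E_i - E_f
```

The first gives the allowed energy levels; the second gives the frequency of the photon emitted when the electron jumps from level $i$ to level $f$, which reproduced the observed hydrogen spectral lines.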
1915 – Albert Einstein: General relativity – Einstein followed up his annus mirabilis with his crowning achievement, a theory of gravitation, in 1915. General relativity sealed the seamless meld of space and time and recast gravity not as Newton's force but as a property of spacetime itself, arising from the curvature of spacetime by the presence of mass.
1919 – Arthur Eddington: The bending of starlight – Leading an expedition to the island of Principe lying off the West coast of Africa, Eddington confirmed one of the most startling predictions of general relativity, the bending of starlight by the gravitational field of a massive object. This was not strictly a surprise since Einstein had predicted it, but in science any theory is only as good as the data that supports it. The discovery not only splashed Einstein’s face all over the world’s papers but also signified a rare bond of scientific friendship between two nations which had just ended a devastating war with each other.
1927 – Werner Heisenberg: The Uncertainty Principle – Heisenberg’s Uncertainty Principle – which has since been hijacked and affixed to remote and bizarre notions in popular culture – drew a wedge between our assumptions of determinism and the nature of reality. Along with the phenomenon of quantum entanglement it essentially captures all the weirdness of the quantum world. We have been struggling with the legacy of the true nature of quantum mechanics ever since.
1928 – Paul Dirac: The Dirac equation – The Dirac equation is one of the true glories of theoretical physics. In a single line that can be neatly stated on a cocktail napkin, it combines special relativity with quantum mechanics. But most surprisingly, it proposes a new form of matter – antimatter. Dirac found this implication of the equation so unsettling that for some time he assumed that the missing twin of the electron predicted by the theory must be a proton. But in 1932 Carl Anderson found the elusive positron in cosmic rays. Both Dirac and Anderson got well-deserved Nobel Prizes for prediction and discovery.
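That cocktail-napkin line, in modern notation and natural units, reads:

```latex
(i \gamma^\mu \partial_\mu - m)\, \psi = 0
```

where the $\gamma^\mu$ are 4x4 matrices and $\psi$ is a four-component spinor; it was the negative-energy solutions of this equation that forced antimatter on Dirac.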
1929 – Edwin Hubble: The Expanding Universe – Just as the Dirac equation is one of the true glories of theoretical physics, so is Hubble’s discovery of an expanding universe in the subsequent year one of the true glories of astronomy. Again, the possibility of an expanding universe had been conjectured by the Russian Alexander Friedmann and the Belgian priest Georges Lemaître. But this possibility was so bizarre that even Einstein could not take it seriously at first. It took Hubble and his painstaking studies of nebulae at the Mount Wilson Observatory in California to etch the truth in stone.
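Hubble's painstaking measurements boiled down to a strikingly simple linear relation, now called Hubble's law:

```latex
v = H_0 \, d
```

where $v$ is a galaxy's recession velocity, $d$ its distance, and $H_0$ the Hubble constant, today measured at roughly 70 km/s per megaparsec.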
1938 – Otto Hahn and Fritz Strassmann: Nuclear fission – Both Enrico Fermi’s group and the husband-wife duo of Irène and Frédéric Joliot-Curie narrowly missed discovering fission in the mid-thirties. Curiously, the chemist Ida Noddack had hypothesized the phenomenon in 1934 but was ignored; one wonders if everyone would have taken her more seriously had she been a male scientist, preferably German or English. Yet it was a female scientist, Lise Meitner, who interpreted Hahn and Strassmann’s novel splitting of uranium into barium and calculated the startling magnitude of the energy release, paving the way toward both the peaceful and the destructive uses of atomic energy. Fission was thought to be so improbable that great theorists like Oppenheimer and Bethe had done calculations arguing against its existence. And yet here it was, an elegant theory demolished by an ugly, consequential fact.
1939 – Robert Oppenheimer and Hartland Snyder: Black holes – In the same issue of the journal Physical Review in which Niels Bohr and John Wheeler published their famous liquid drop model of fission, Oppenheimer and his student Hartland Snyder published the first description of what we now call black holes. The paper’s conclusions were surprising and bizarre and followed on the heels of a decade’s worth of groundbreaking research on the implications of general relativity for stellar evolution by Subrahmanyan Chandrasekhar, Lev Landau, Fritz Zwicky and others. Curiously, Oppenheimer never followed up on this paper and in fact displayed a remarkable indifference to general relativity for the rest of his life.
1947 – Willis Lamb: The Lamb Shift – The Lamb Shift was one of those subtle experimental measurements that lay the foundation for a whole new field. In 1947 Lamb and Retherford measured a tiny, surprising difference in energy levels of the electron in the hydrogen atom; the Dirac equation predicted identical energies for both these levels. This tiny difference concealed a wealth of important aspects of quantum field theory, and feverish and pathbreaking theoretical work explaining the shift culminated in the creation of the strikingly accurate theory of quantum electrodynamics (QED) by Schwinger, Feynman, Dyson and Tomonaga by the end of the decade.
1956 – Yang and Lee: Parity violation – Parity is one of the most basic properties of the atomic world. In physics a parity transformation is an operation that changes the sign of the spatial coordinates of an object; for instance, a parity transformation turns a right-handed coordinate system into a left-handed one. Until 1956 it was reasonably believed that nature should not care whether we use right- or left-handed coordinate systems and that the laws of physics should be the same for both. Yet in 1956, Chen Ning Yang and Tsung-Dao Lee showed that parity was not conserved for certain particles decaying through the weak nuclear force, so that the products of their decay depended on whether their spins were aligned parallel or antiparallel to a magnetic field. This result was confirmed by Columbia’s C. S. Wu in an elegant experiment. The discovery that at a very fundamental level nature actually cares about handedness was so radical that Yang and Lee received a Nobel Prize only one year later. Wu was left out.
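Concretely, the parity operation simply flips all three spatial coordinates:

```latex
\hat{P}: (x, y, z) \mapsto (-x, -y, -z)
```

In Wu's cobalt-60 experiment, parity conservation would have demanded that beta particles be emitted equally along and against the nuclear spin direction; instead they emerged preferentially opposite to the spin, and nature's mirror symmetry was broken.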
1964 – Penzias and Wilson: The cosmic microwave background – The discovery of an expanding universe had given at least some scientists the inkling of a purported “big bang” origin for the universe. But until 1964 this was just a hypothesis, so much so that the Cambridge astrophysicist Fred Hoyle had coined the phrase “Big Bang” as a derogatory dig at the theory. 1964 changed all that. Arno Penzias and Robert Wilson’s discovery – made using a small Bell Labs antenna in Holmdel, New Jersey – of the remnant of the big bang in the form of a steady background radiation hum was one of the most significant and surprising discoveries in twentieth century science, and one which has led to at least two Nobel Prizes. With Penzias and Wilson our current view of the universe acquired a firm foundation.
1964 – John Bell: Bell’s Theorem – Einstein was never happy with quantum mechanics, all the way until his death, partly because his 1935 EPR paradox paper seemed to imply non-local, seemingly faster-than-light communication of information between particles. In spite of quantum mechanics’ spectacular predictive power, Einstein was convinced that there must be some kind of “hidden variables” that could explain the seemingly contradictory and bizarre properties of the quantum world and bring locality into the picture. In 1964 an obscure Northern Irish physicist named John Bell laid these questions to rest with a remarkable and surprising theorem that can be derived and stated using high school algebra. Because of Bell’s obscurity it seemed to come out of left field. In short, Bell’s Theorem says that no local hidden variable theory can reproduce the basic predictions of quantum mechanics. The theorem thus established quantum weirdness – and especially entanglement – as a fundamental part of the physical universe. An equally surprising development was the set of experiments by Alain Aspect, Stuart Freedman, John Clauser and others in the 70s and 80s confirming the quantum predictions.
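In its CHSH form (a later, experiment-friendly restatement of Bell's original inequality), the theorem really does need nothing beyond algebra. For detector settings $a, a'$ on one side and $b, b'$ on the other, any local hidden variable theory must obey:

```latex
S = E(a, b) - E(a, b') + E(a', b) + E(a', b'), \qquad |S| \le 2
```

whereas quantum mechanics predicts values of $|S|$ up to $2\sqrt{2} \approx 2.83$ for entangled particles - and that larger value is exactly what the experiments found.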
1973 – Gross, Politzer and Wilczek: Asymptotic freedom – Asymptotic freedom refers to the strange weakening of the force between quarks as they come closer together. This is a surprising and totally counterintuitive mechanism of particle interaction, since in the case of every other force (gravitation, electromagnetism between opposite charges and the weak force), the interactions become stronger with decreasing distance. The discovery of asymptotic freedom put the quantum theory of the strong force (quantum chromodynamics) on a sure footing.
1986 – Bednorz and Müller: High temperature superconductivity – In 1986 the world of physics and materials science was shaken by the discovery of ceramic materials like yttrium barium copper oxide which become superconducting at “high” temperatures, in some cases above the boiling point of liquid nitrogen (77 kelvin) and in others as high as 138 kelvin. This not only caused a revolution in our understanding of superconductivity (these materials were ceramics, not metals) but also led to dreams of practically harnessing the phenomenon. The dream has not quite come true yet, but ceramic superconductivity continues to be a thriving field of physics research. IBM’s Alex Müller and Georg Bednorz received a Nobel Prize the very next year, so groundbreaking and surprising was the discovery.
1998 – Perlmutter, Riess, Schmidt and others: The accelerating universe – Even after Hubble discovered the expanding universe, for the longest time scientists believed that the rate of expansion was slowing: since gravity is an attractive force, the belief seemed entirely logical. Observations of distant supernovae in the late 90s completely turned this picture on its head. It was found that the universe was not only expanding but that the expansion was speeding up. Einstein’s cosmological constant was resurrected and a new entity, dark energy, was postulated as a placeholder to account for the process. This was by any definition a surprising and counterintuitive discovery in which experiment drove our understanding of reality without any major theoretical input.
A few observations emerge from this partial list of surprising discoveries in physics. The most important is that while lone individuals made discoveries in the early period, most findings in the later period were the work of groups of researchers or at least research duos. Many times more than one team made the same discovery. The discoveries were also a mix of theoretical and experimental work; in many cases theory and experiment were equally important since it’s one thing to postulate a surprising result but quite another to actually observe it in front of your eyes. I also find it interesting that no truly surprising fundamental discovery has emerged in the twenty-first century yet, although we are just getting started.
One thing is clear from the list, that physics will continue to make surprising discoveries and that our understanding of the universe will never be complete. We can all look forward to that.

Added: Chad Orzel of "Uncertain Principles" has his own interesting picks.

Adapted from a previous post on Scientific American Blogs.

"Clueless machines being ceded authority far beyond their competence"


Edge.org, which is well-known for asking big-picture questions and having leading thinkers offer their answers, has a relevant one this year: "What do you think about machines that think?" As usual there are a lot of very interesting responses from a variety of writers, scientists, philosophers and members of the literati. But the one that really got my attention was by the philosopher and cognitive scientist Daniel Dennett.



I guess I got drawn in by Dennett’s response because of his targeted dagger-thrust against The Singularity: "It has the earmarks of an urban legend: a certain scientific plausibility ('Well, in principle I guess it's possible!') coupled with a deliciously shudder-inducing punch line ('We'd be ruled by robots!')."

He then goes on to say this:

These alarm calls distract us from a more pressing problem, an impending disaster that won't need any help from Moore's Law or further breakthroughs in theory to reach its much closer tipping point: after centuries of hard-won understanding of nature that now permits us, for the first time in history, to control many aspects of our destinies, we are on the verge of abdicating this control to artificial agents that can't think, prematurely putting civilization on auto-pilot.

The problem thus is not in trusting truly intelligent machines but in becoming increasingly dependent on unintelligent machines which we believe – or desperately want to believe – are intelligent. Desperately so because of our dependence on them; Dennett’s examples of GPS and computers for simple arithmetic calculations are good ones. The same could be said of a host of other technologies that are coming online, from Siri to airplane navigation to Watson-like “intelligences” that are likely to become a routine basis for multifactorial tasks like medical diagnosis. The problem, as Dennett points out, is that belief in such technologies packs a double whammy: on one hand we have become so dependent on them that we cannot imagine relinquishing their presence in our lives, while on the other, because of this very ubiquity, we consciously or unconsciously endow them with attributes far superior to those they actually possess.


Thus,

The real danger, then, is not machines that are more intelligent than we are usurping our role as captains of our destinies. The real danger is basically clueless machines being ceded authority far beyond their competence.

One reason I found Dennett’s words compelling was that they reminded me of the structural error in the cloud computing paper which I tweeted about and which Derek concurrently blogged about. In that case the error seems to be a simple problem of transcription, but that does not mean computational algorithms cannot themselves churn out chemically and biologically nonsensical structures.

Fortunately, unlike engineering and technology, biology and chemistry are still too complex for us to consider ceding authority to machines far beyond their competence. But since computers are inevitably making their way into the field by leaps and bounds, this is already a danger we should be well aware of. Whether you are being seduced into thinking that a protein’s motions as deduced by molecular dynamics actually correspond to real motions in the human body, or whether you think you will be able to plumb startling new correlations between molecular properties and biological effects using the latest Big Data and machine learning techniques, unimpeded belief in the illusion of machine intelligence can be only a few typed commands away. Just as with Siri and Watson, MD and machine learning can illuminate. And just as with Siri and Watson, they can even more easily mislead.

So take a look at whether that molecule contains a vinyl alcohol the next time you grab a result from the computer screen and put it into the clinic. Otherwise you might end up ceding more than just your authority to the age of the machines.

The many tragedies of Edward Teller

This is a revised version of a post written a few years ago on physicist Edward Teller's birthday.

Edward Teller was born on this day 107 years ago. Teller is best known to the general public for two things: his reputation as the “father of the hydrogen bomb” and as a key villain in the story of the downfall of Robert Oppenheimer. To me Teller will always be a prime example of the harm that brilliant men can do – either by accident or design – when they are placed in positions of power; as the famed historian Richard Rhodes said about Teller in an interview, “Teller consistently gave bad advice to every president that he worked for”. It’s a phenomenon that is a mainstay of politics but Teller’s case sadly indicates that even science can be put into the service of such misuse of power.
Ironically it is the two most publicly known facts about Teller that are also probably not entirely accurate. Later in life he often complained that the public had exaggerated his roles in both the hydrogen bomb program and in the ousting of Oppenheimer, and this contention was largely true. In truth he deserved both less credit and less blame for his two major acts. Without Teller hydrogen bombs would still have been developed and without Teller Oppenheimer would still have been removed from his role as the government’s foremost scientific advisor.
The question that continues to dog historians and scientists is simple: why did Teller behave the way he did? By any account he was a brilliant man, well attuned to the massive overkill of the nuclear weapons he was advocating and equally attuned to the damage he would cause Oppenheimer and the scientific community by testifying against the father of the atomic bomb. He was also often a warm person who clearly desired friendship with his peers, so why did he choose to alienate so many who were close to him? The answers to these questions undoubtedly lie in Teller’s background. Growing up in progressive Hungary at the turn of the century as the son of a well-to-do Jewish father, Teller was part of a constellation of Hungarian prodigies with similar cultural and family backgrounds who followed similar trajectories, emigrated to the United States and became famous scientists. Leo Szilard, Eugene Wigner and John von Neumann were all childhood friends.
Sadly, Teller became a psychological casualty of Hungary’s post-World War I communist and fascist regimes early in his childhood, when he witnessed firsthand the depredations visited upon his country first by Bela Kun and then by Miklos Horthy. The chaos and uncertainty brought about by the communists left a deep impression on the sensitive young boy and traumatized him for life. Later, as Teller moved through Germany, England and America, he saw the noose of Nazism tightening around Europe. This double blow dealt by the cruelties of communism and Nazism seems to have dictated almost every one of Teller’s major decisions for the rest of his life.
The fear of totalitarianism manifested itself early, leading Teller to be among the first to push for a US nuclear weapons program. He was Leo Szilard’s driver when Szilard went to meet Einstein in his Long Island cottage and got the famous letter to FDR signed by the great physicist. Along with Szilard and Wigner, Teller was among the first to raise the alarm about a potential German atomic project, and he lobbied vigorously for the government to take notice. By the time the war started he was a respected professor at George Washington University. Goaded by his experiences and inner conscience, Teller became one of Oppenheimer’s first recruits at Los Alamos, where he moved at the beginning of the Manhattan Project in the spring of 1943.
Oppenheimer and Teller’s meeting was like one of those fateful events in Greek tragedies which is destined to end in friction and tragedy. Perhaps the most ironic twist in this story is how similar the two men were; brilliant physicists who were both products of high culture and affluent families, interested in literature and the arts, envisioning a great role for themselves in history and sensitive to the plight of human beings around them. However their personalities clashed almost right from the beginning, although the mistrust was mostly engendered by Teller.
Not all of it was Teller’s fault however. By the time Teller met Oppenheimer the latter had established himself as the foremost American-born theoretical physicist of his age, a man who could hold sway over even Nobel Laureates with his astonishingly quick mind, dazzlingly catholic interests and knowledge, and ability to adapt to whatever role history had thrust upon him. But men like Oppenheimer are hardly simple, and Oppenheimer’s colleagues and students usually fell into two extreme camps: those who saw him as an insecure and pretentious poseur and those who idolized his intellect. Clearly Teller fell into the former group.
The friction between the two men was accentuated after Teller moved to Los Alamos when Oppenheimer made Hans Bethe the head of the project’s important theoretical division. Teller understandably chafed at the choice since unlike Bethe he had lived with the project since the beginning, but Oppenheimer’s decision was wise; he had sized up both physicists and realized that while both were undoubtedly scientifically capable, administering a division of prima donnas needed steadfast determination, levelheaded decision making and the ability to be a team player while quietly soothing egos, all of which were qualities inherent in Bethe but not in the volatile Teller.
Teller never really recovered from this slight, and from then on his relationship with both Oppenheimer and Bethe (with whom he had been best friends for years) was increasingly strained. It wouldn’t be the last time he let the personal interfere with the professional, and I think this was his first great tragedy – the inability to separate personal feelings from objective thinking. It was also during the war that the idea of using an atomic bomb to ignite a self-sustaining fusion reaction caught Teller’s imagination. Teller vindicated Oppenheimer’s decision to appoint Bethe when he refused to perform detailed calculations for the implosion weapon and insisted on working on his pet idea for the “Super”, a diversion that was undoubtedly orthogonal to the urgent task of producing an atomic bomb – especially since an atomic bomb was necessary to light up the Super in any case.
After the war ended Teller kept pushing for the hydrogen bomb. History was on his side, and the increasing encroachment of the Soviets into Eastern Europe followed by major events like the Berlin airlift and the testing of the first Soviet atomic bomb firmed up his conviction and allowed him to drum up support from scientists, politicians and the military. Sadly his initial design for the Super was fatally flawed; while an atomic bomb would in fact ignite a large mass of tritium or deuterium, energy losses would be too rapid to sustain a successful fusion reaction. Even after knowing this Teller kept pushing for the design, taking advantage of the worsening political situation and his own growing prominence in the scientific community. This was Teller’s first real dishonest act.
His second dishonest act was withholding credit from the man who actually came up with the first successful idea for a hydrogen bomb – Stanislaw Ulam. An exceptionally brilliant and versatile mathematician, Ulam first performed detailed calculations that revealed holes in Teller’s original Super design and then thought of the key process of radiation implosion that would compress a batch of thermonuclear fuel and enable its sustained fusion. Teller, who had been smoldering with rage at Ulam’s calculations until then, immediately saw the merit of the idea and significantly refined it. Since then almost every hydrogen bomb in the world’s nuclear arsenals has been constructed on the basis of the Teller-Ulam design. Yet Teller seems to have denied Ulam the credit for the idea even in his later years, something that is especially puzzling considering that he downplayed his own role in the development of hydrogen bombs in the waning years of his life. Was this simply a ploy engineered to gain sympathy and to display false modesty? We will never know.
The act for which Teller became infamous followed only a few years later, in 1954. Since the end of the war Oppenheimer had been steadfast in his opposition to the hydrogen bomb, not just on a moral basis but also on a technical basis. This did not go down well with the establishment, especially in the face of the increasingly dire-looking international situation. Oppenheimer was hardly the only one opposing the project – prominent scientists like Enrico Fermi and Isidor Rabi were even more vocal in their opposition – but Oppenheimer’s reputation, his role as the government’s foremost nuclear advisor and his often casual cruelty and impatience with lesser men made him stand out. After the Teller-Ulam design came to light Oppenheimer actually supported the project, but by that time he had already made powerful enemies, especially in the person of Lewis Strauss, a vindictive, petty and thin-skinned chairman of the Atomic Energy Commission who unfortunately had the ear of President Eisenhower.
When the government brought charges against Oppenheimer Teller was asked to testify. He could have declined and still saved his reputation but he chose not to. Curiously, the actual testimony offered by Teller is at the same time rather straightforward as well as vague enough to be interpreted damningly. It has an air of calculated ambiguity about it that makes it particularly potent. What Teller said was the following:
In a great number of cases I have seen Dr. Oppenheimer act – I understood that Dr. Oppenheimer acted – in a way which for me was exceedingly hard to understand. I thoroughly disagreed with him in numerous issues and his actions frankly appeared to me confused and complicated. To this extent I feel that I would like to see the vital interests of this country in hands which I understand better, and therefore trust more.
What is interesting about the testimony, as explained by Freeman Dyson in his autobiography, is that it’s actually quite undramatic and true. Oppenheimer had lied to army officials during the war regarding an indirect approach made to him for ferrying secrets to the Soviet Union. He had refused right away but had then concocted an unnecessary and bizarre “cock and bull story” (in his own words) to explain his actions. That story had not gotten him into trouble during the war because of his indispensable role in the project, but it certainly qualified him as “confused and complicated”. In addition, after the war Oppenheimer’s views on nuclear weapons often appeared conflicted, as did his loyalties to his former students. Oppenheimer’s opinions on the hydrogen bomb, which were quite sound, were however also interpreted as “confused and complicated” by Teller. But given where Teller was coming from, Oppenheimer’s actions were hard to understand, and therefore it followed that Teller would rather trust opinions regarding national security in someone else’s hands. Thus Teller’s testimony was actually rather unsurprising and sensible when seen in a certain context.
As it happened however, his words were seen as a great betrayal by the majority of physicists who supported Oppenheimer. The result of this perception was that Teller himself was damaged far more by his testimony than was Oppenheimer. Close friends simply stopped talking to him and one former colleague publicly refused to shake his hand, a defiant display that led Teller to retire to his room and weep. He was essentially declared a pariah by a large part of the wartime physics community. It is likely that Teller would have reconsidered testifying against Oppenheimer had he known the personal price he would have to pay. But the key point here is that Teller had again let personal feelings interfere with objective decision making; Teller’s animosity toward Oppenheimer went back years, and he knew that as long as the emperor ruled he could never take his place. This was his chance to stage a coup. As it happened his decision simply led to a great tragedy of his life, a tragedy that was particularly acute since his not testifying would have essentially made no difference in the revocation of Oppenheimer’s security clearance.
This inability to keep the personal separate from the objective exemplified Teller’s obsession with nuclear weapons for the next fifty years until his death. At one point he was alarmed enough to proclaim that he saw himself in a Soviet prison camp within five years. I will not go so far as to label Teller paranoid from a medical standpoint, but some of the symptoms certainly seem to have been there. Teller’s attachment to his hydrogen bombs became so absolute that he opposed essentially every effort to seek reconciliation and arms reductions with the Soviets. The Partial Test Ban Treaty, the NPT, the ABM Treaty, and sound scientific opposition to Reagan’s fictional “Star Wars” defense: all met with his swift disapproval, even when the science argued otherwise, as in the case of Star Wars. He also publicly debated Linus Pauling regarding the genetic effects of radiation, just as he would debate Carl Sagan twenty years later regarding nuclear winter.
Sagan has a particularly illuminating take on Teller’s relationship with nuclear weapons in his book “The Demon-Haunted World”. The book has an entire chapter on Teller in which Sagan tries to understand Teller’s love affair with bombs. Sagan’s opinion is that Teller was actually sincere in his beliefs that nuclear weapons were humanity’s savior. He actually believed that these weapons would solve all our problems in war and peace. This led to him advocating rather outlandish uses for nuclear weapons: “Do you want to find out more about moon dust? Explode a nuclear weapon on the moon and analyze the spectrum of the resulting dust. Do you want to excavate harbors or change the course of rivers? Nuclear weapons can do the job”. Teller’s proposal to excavate harbors in Alaska using bombs led to appropriate opposition from the Alaskan natives. In many of these scenarios he seemed to simply ignore the biological effects of fallout.
But as much as I appreciate Sagan’s view that Teller was sincere in his proposals, I find it hard to digest; Teller was smart enough to know the collateral damage caused by nuclear weapons, or to know how ridiculous the idea of using nuclear weapons to study moon dust sounded when there were much simpler methods to do it. My opinion is that by this time he had travelled so far along the path which he chose for himself after the war that he simply could not retrace his steps. He clung to dubious peacetime uses of nuclear weapons simply so that he could advocate their buildup in wartime. By this point the man was too deeply invested to choose another role in his life. That, I think, was another of Teller’s tragedies.
But in my view, Teller’s greatest tragedy had nothing to do with nuclear weapons. It was simply the fact that in pursuit of his obsession with bombs he wasted his great scientific gifts and failed to become a truly great physicist. Ironically he again shared this fate with his nemesis Robert Oppenheimer. Before the war both Oppenheimer and Teller had made significant contributions to science. Teller is so famous for his weapons work that it is easy to ignore his scientific research. Along with two other scientists he worked out an important equation describing the adsorption of gases on solids, the Brunauer-Emmett-Teller (BET) equation. Another very significant Teller contribution known to chemists is the Jahn-Teller effect, a distortion of geometry in certain inorganic molecular complexes that impacts key properties like color and magnetic behavior. In nuclear physics Teller again came up with several ideas, including the Gamow-Teller selection rules that describe energy transitions in nuclei. Even after the war Teller kept on thinking about science, working for instance on Thomas-Fermi theory, a precursor of techniques used to calculate important properties of molecules.
But after 1945 Teller’s scientific gifts essentially lay undisturbed, stagnating in all their creative glory. Edward Teller the theoretical physicist was slowly but surely banished to the shadows and Edward Teller the nuclear weapons expert and political advocate took his place. A similar fate befell Oppenheimer, although for many years he at least stayed in touch with the latest developments in physics. Seduced by power, both men forgot what had brought them to this juncture in history to begin with. In pursuing power they ignored their beloved science.
Ultimately one fact stands apart stark and clear in my view: Edward Teller’s obsession with nuclear weapons will likely become a historical curiosity, but the Jahn-Teller effect will persist for all eternity. This, I think, is the real tragedy.

Is this the dawn of a golden age of private science funding?

Paul Allen is just one example of billionaires who are productively
funding cutting-edge and important science (Image: Forbes)
Last year, the BICEP2 experiment dropped a bombshell in the physics world by announcing potential evidence for gravitational waves from inflation as well as support for the quantization of gravity. The news was all over the place. Unfortunately the discovery turned out to be premature to say the least, and within a few months it turned to dust. But that's not the point of this post. What was less appreciated was the fact that BICEP2 was prominently funded by the Keck Foundation and the Gordon and Betty Moore Foundation. William Keck was an oil magnate whose fortune made very significant contributions to astronomy by funding the Keck Telescopes. Gordon Moore is a computer magnate who made significant contributions to information technology and proposed Moore’s Law. Fred Kavli, who died in 2013, started the Kavli Foundation; this foundation has backed everything from Obama’s Brain Initiative to astrophysics to nanoscience professorships at research universities.
It is with this background in mind that I was very pleased to read an article by William Broad in the New York Times from last year about the private funding of research. Broad talks about how a variety of billionaire entrepreneurs ranging from the Moores (Intel) to Larry Ellison and his wife (Oracle) to Paul Allen (Microsoft) have spent hundreds of millions of dollars in the last two decades to fund a variety of scientific endeavors ranging from groundbreaking astrophysics to nanoscience. For these billionaires a few million dollars is not too much, but for a single scientific project hinging on the vicissitudes of government funding it can be a true lifeline. The article talked about how science will come to rely on such private funding in the near future in the absence of government support, and personally I think this funding is going to do a very good job in stepping in where the government has failed.
The public does not often realize that for most of its history, science was in fact privately funded. During the early scientific revolution in Europe, important research often came from what we can call self-philanthropy, exemplified by rich men like Henry Cavendish and Antoine Lavoisier who essentially did science as a hobby and made discoveries that are now part of the textbooks. Cavendish’s fortune funded the famed Cavendish Laboratory in Cambridge where Ernest Rutherford discovered the atomic nucleus and Watson and Crick discovered the structure of DNA. This trend continued for much of the nineteenth and early twentieth centuries. The current era of reliance on government grants by the NIH, the NSF and other agencies is essentially a postwar phenomenon. 
Before the war a lot of very important science as well as science education was funded by trusts set up by rich businessmen. During the 1920s, when the center of physics research was in Europe, the Rockefeller and Guggenheim foundations gave postdoctoral fellowships to brilliant young scientists like Linus Pauling, Robert Oppenheimer and Isidor Rabi to travel to Europe and study with masters like Bohr, Born and Sommerfeld. It was these fellowships that crucially allowed young American physicists to carry their knowledge of the new quantum mechanics back to America. It was partly this largesse that allowed Oppenheimer to create a school of physics that equaled the great European centers and produce a roster of scientists whose students continue to win Nobel Prizes.
Perhaps nobody exemplified the bond between philanthropy and research better than Ernest Lawrence who was as much an astute businessman as an accomplished experimental physicist. Lawrence came up with his breakthrough idea for a cyclotron in the early 30s but it was the support of rich California businessmen – several of whom he regularly took on tours of his Radiation Lab at Berkeley – that allowed him to secure support for cyclotrons of increasing size and power. It was Lawrence’s cyclotrons that allowed physicists to probe the inner structure of the nucleus, construct theories explaining this structure and produce uranium for the atomic bombs used during the war. There were other notable examples of philanthropic science funding during the 30s, with the most prominent case being the Institute for Advanced Study at Princeton which was bankrolled by the Bamberger brother-sister duo.
As the New York Times article notes, during the last three decades private funding has expanded to include cutting-edge biological and earth sciences research. The Allen Institute for Brain Science in Seattle, for example, is making a lot of headway in understanding neuronal connectivity and how it gives rise to thoughts and feelings; to advance its goals the institute managed to lure one of the most insightful neuroscientists in the world, Christof Koch, away from a tenured faculty position at Caltech. The research funded by twenty-first century billionaires ranges across the spectrum and comes from a mixture of curiosity about the world and personal interest.
The personal interest is especially reflected in funding for rare and neurodegenerative diseases; even the richest people in the world know that they are not immune to cancer and Alzheimer’s disease, so it’s in their own best interest to fund research in such areas. For instance Larry Page suffers from a vocal ailment that impairs his speech, while Sergey Brin carries a gene that predisposes him to Parkinson’s; no wonder Page is interested in a new institute for aging research. However the benefits that accrue from such research will aid everyone, not just the very rich. As another example, the Cystic Fibrosis Foundation, which was funded by well-to-do individuals whose children were stricken by the devastating disease, gave about $70 million to Vertex Pharmaceuticals. The infusion partly allowed Vertex to create Kalydeco, the first truly breakthrough drug for treating a disease where there were essentially no transformative options before. The drug is not cheap but there is no doubt that it has completely changed people’s lives.
But the billionaires are not just funding disease. They are funding almost every imaginable field, from astronomy to paleontology:
“They have mounted a private war on disease, with new protocols that break down walls between academia and industry to turn basic discoveries into effective treatments. They have rekindled traditions of scientific exploration by financing hunts for dinosaur bones and giant sea creatures. They are even beginning to challenge Washington in the costly game of big science, with innovative ships, undersea craft and giant telescopes — as well as the first private mission to deep space.”
That part about challenging government funding really puts this development in perspective. It’s hardly news that government support for basic science has steadily declined during the last decade, and a sclerotic Congress that seems perpetually unable to agree on anything means that the problem will endure for a long time. As Francis Collins notes in the article, 2014 saw an all-time low in NIH grant funding. In the face of such increasing withdrawal by the government from basic scientific research, it can only be good news that someone else is stepping up to the plate. Sometimes angels step in where fools fear to tread. And in an age when it is increasingly hard for this country to be proud of its public funding, it can at least be proud of its private funding; no other country can claim to showcase this magnitude of science philanthropy.
There has been some negative reaction to the news. The responses come mostly from those who think science is being “privatized” and that these large infusions of cash will fund only trendy research. Some negative reactions have also come from those who find it hard to keep their disapproval of what they see as certain billionaires’ insidious political machinations – those of the Koch (not Christof Koch!) brothers for instance – separate from their support of science. There is also a legitimate concern that at least some of this funding will go to diseases affecting rich, white people rather than minorities.
I have three responses to this criticism. Firstly, funding trendy research is still better than funding no research at all; in addition, many of the diseases being explored by this funding affect all of us and not just rich people. Secondly, we need to keep raw cash for political manipulation separate from raw cash for genuinely important research. Thirdly, believing that these billionaires somehow “control” the science they fund strikes me as a little paranoid. For instance, a stone’s throw from where I live sits the Broad Institute, a $700 million endeavor funded by Eli Broad. The Broad Institute is affiliated with both Harvard and MIT. During the last decade it has made important contributions to basic research including genomics and chemical biology. Its scientists have published in basic research journals and have shared their data. The place has largely functioned like an academic institution, with no billionaire around to micromanage the scientists’ everyday work. The same goes for other institutes like the Allen Institute. Unlike some critics, I don’t see the levers of these institutes being routinely pulled by their benefactors at all. The Bambergers never told Einstein what to do.
Ultimately I am both a human being and a scientist, so I don’t care as much about where science funding comes from as about whether it benefits our understanding of life and the universe and leads to advances improving the quality of life of our fellow human beings. From everything that I have read, private funding for science during the last two decades has eminently achieved both these goals. I hope it endures.

Adapted from a previous post on Scientific American Blogs.