
Paul Schleyer: Among the last of the universalists

I was saddened to hear of the passing of Paul Schleyer, not exactly a household name among non-chemists but someone who was undoubtedly one of the most prolific and towering chemists of his time. Schleyer started out in synthetic physical organic chemistry and then moved to computational and theoretical chemistry. He began his career at Princeton, moved to Erlangen in Germany and finally settled down at the University of Georgia, helping to make each one of these centers a leading hub for theoretical chemistry.

Schleyer was definitely one of the last universalists - at least in the theoretical territory of chemistry - and had a commanding grasp of almost every aspect of the field, from NMR spectroscopy (he came up with the valuable NICS, or nucleus-independent chemical shift, metric for aromaticity) and ab initio quantum chemical calculations to non-classical carbocations, lithium compounds and strained hydrocarbon synthesis (he synthesized adamantane, for instance). Most people are content to be experts in one or two fields, but it's probably not an exaggeration to say that Schleyer was close to being an expert in most of these fields. During his career he also seems to have co-authored papers and books with virtually every prominent figure in the field, from George Olah to John Pople to Jack Roberts. His publications on these and other parts of chemistry were prolific and deep, and he was also not one to shy away from spirited debate. I had the pleasure of watching him give a talk once and it was clear that he was a formidable opponent whose knowledge of chemistry would almost always surpass that of his 'adversaries'.

In fact that last aspect of his personality was on full display the first time I encountered him. This was as a graduate student, when I came across a charming and very readable account of his debate with Herbert Brown on the infamous and invaluable non-classical carbocation problem. Titled "The Nonclassical Ion Problem", this little book is structured as a series of points and counterpoints offered by Brown and Schleyer in every chapter. The constant back and forth, the dissection of the minutiae of solvents, reaction rates and spectroscopy, and the generally unyielding yet cordial personalities of the two men are fascinating to follow. It's pure, unadulterated scientific debate at its best.

Schleyer was definitely one of the last and greatest practitioners of the golden era of physical organic chemistry, a time when chemists dedicated their lives to exploring the fundamentals of chemical reactions and structure with vigor and rigor. He is one of a handful of figures on whose shoulders we all stand and will be genuinely missed.

Philosophy begins where physics ends, and physics begins where philosophy ends

Richard Feynman - philosopher (Image:WashU)
A few months ago, physicist Sean Carroll had some words of wisdom for physicists who might have less than complimentary things to say about philosophy. The most recent altercation between a physicist and philosophy came from Neil deGrasse Tyson, who casually disparaged philosophy in a Q&A session, saying that it can be a time sink and doesn’t actually provide any concrete answers to scientific questions. Now I am willing to give Tyson the benefit of the doubt since his comment was probably a throwaway remark; plus it’s always easy for scientists to take potshots at philosophers in a friendly sort of way, much like the Yale football team would take potshots at its Harvard counterpart.
But Tyson’s response was only the latest in a series of run-ins that the two disciplines have had over the past few years. For instance in 2012 philosopher David Albert castigated physicist Lawrence Krauss for purportedly claiming in his most recent book that physics had settled or at least given plausible answers to the fundamental question of existence. In reply Krauss called Albert “moronic” which didn’t help much to bridge the divide between the two fields. Stephen Hawking also had some harsh words for philosophers, saying that he thought “philosophy is dead”, and going further back, Richard Feynman was famously disdainful of philosophy which he called “dopey”.
In his post Carroll essentially deconstructs the three major criticisms of philosophy seen among physicists: there’s the argument that philosophers don’t really gather data or do experiments, there’s the argument that practicing physicists don’t really use any philosophy in their work, and there’s the refrain that philosophers concern themselves too much with unobservables. Carroll calls the first of these arguments dopey (providing a fitting rejoinder to Feynman), the second frustratingly annoying and the third deeply depressing.
I tend to agree with his take, and I have always had trouble understanding why otherwise smart physicists like Tyson or Hawking seem to neglect both the rich history of interaction between physics and philosophy as well as the fact that they are unconsciously doing philosophy even when they are doing science. For instance, what exactly was the philosophy-hating Feynman talking about when he gave the eloquent Messenger Lectures that became “The Character of Physical Law“? Feynman was talking about the virtues of science, about the methodology of science, about the imperfect march of science toward the truth; in other words he was talking about what most of us would call “the philosophy of science”. There are also more than a few examples of what could fairly be called philosophical musings even in the technical “Feynman Lectures on Physics”. Even Tyson, when he was talking about the multiverse and quantum entanglement in “Cosmos”, was talking philosophically.
I think at least part of the problem here comes from semantics. Most physicists don’t explicitly try to falsify their hypotheses or apply positive heuristics or keep looking for paradigm shifts in their daily work, but they are doing this unconsciously all the time. In many ways philosophy is simply a kind of meta, higher-level look at the way science is done. Now sometimes philosophers of science are guilty of thinking that science in fact fits the simple definitions engendered by this meta-level look, but that does not mean these frameworks are completely inapplicable to science, even if they are messier in practice than they appear on paper. It’s a bit like saying that Newton’s laws are irrelevant to entities like black holes and chaotic systems because they lose their simple formulations in these domains.
My take on philosophy and physics is very simple: Philosophy begins where physics ends, and physics begins where philosophy ends. And I believe this applies to all of science.
I think there are plenty of episodes in the history of science that support this view. When science was still in a primitive state, almost all musings about it came first from Greek philosophers and later from Asian, Arab and European thinkers who were called “natural philosophers” for a reason. Anyone who contemplated the nature of earthly forces, wondered what the stars were made up of, thought about whether living things change or are always constant or pondered if there is life after death was doing philosophy. But he or she was also squarely thinking about science since we know for a fact that science has been able to answer these philosophical questions in the ensuing five hundred years. In this case philosophy stepped in where the era’s best science ended, and then science again stepped in when it had the capacity to answer these philosophical questions.
As another example, consider the deep philosophical questions about quantum mechanics pondered by the founders of quantum mechanics, profound thinkers like Bohr, Einstein and Heisenberg. These men were brilliant scientists but they were also bona fide philosophers; Heisenberg even wrote a readable book called “Physics and Philosophy“. But the reason why they were philosophers almost by default is because they understood that quantum mechanics was forcing a rethinking about the nature of reality itself that challenged our notions not just about concrete entities like electrons and photons but also about more ethereal ones like consciousness, objectivity and perception. Bohr and Heisenberg realized that they simply could not talk about these far flung implications of physics without speaking philosophically. In fact some of the most philosophical issues that they debated, such as quantum entanglement, were later validated through hard scientific experiments; thus, if nothing else, their philosophical arguments helped keep these important issues alive. Even among the postwar breed of physicists (many of whom were of the philosophy-averse, “shut up and calculate” type) there were prominent philosophers like John Wheeler and David Bohm, and they again realized the value of philosophy not as a tool for calculation or measurement but simply as a guide to thinking about hazy issues at the frontiers of science. In some sense it’s a good sign then when you start talking philosophically about a scientific issue; it means you are really at the cutting edge.
The fact of the matter – and a paradox of sorts – is that science grows fastest at its fringes, but it’s also at the fringes that it is most uncertain and unable to reach concrete conclusions. That is where philosophy steps in. You can think of philosophy as a kind of stand-in that’s exploring the farthest reaches of scientific thinking while science is maturing and retooling itself to understand the nature of reality. Tyson, Hawking, Krauss, and in fact all of us, are philosophers in that respect, and we should all feel the wiser for it.

First published on SciAmBlogs.

Oppenheimer’s Folly: On black holes, fundamental laws and pure and applied science

Einstein and Oppenheimer: Both men in their later years dismissed black holes as anomalies, unaware that they contained some of the deepest mysteries of physics (Image: Alfred Eisenstaedt, LIFE magazine)
On September 1, 1939, the same day that Germany attacked Poland and started World War 2, a remarkable paper appeared in the pages of the journal Physical Review. In it J. Robert Oppenheimer and his student Hartland Snyder laid out the essential characteristics of what we today call the black hole. Building on work done by Subrahmanyan Chandrasekhar, Fritz Zwicky and Lev Landau, Oppenheimer and Snyder described how an infalling observer on the surface of an object whose mass exceeded a critical mass would appear to be in a state of perpetual free fall to an outsider. The paper was the culmination of two years of work and followed two other articles in the same journal.
Then Oppenheimer forgot all about it and never said anything about black holes for the rest of his life.
He had not worked on black holes before 1938, and he would not do so ever again. Ironically, it is this brief contribution to physics that is now widely considered to be Oppenheimer’s greatest, enough to have possibly warranted him a Nobel Prize had he lived long enough to see experimental evidence for black holes show up with the advent of radio astronomy.
What happened? Oppenheimer’s lack of interest wasn’t just because he became the director of the Manhattan Project a few years later and got busy with building the atomic bomb. It also wasn’t because he despised the free-thinking and eccentric Zwicky who had laid the foundations for the field through the discovery of black holes’ parents – neutron stars. It wasn’t even because he achieved celebrity status after the war, became the most powerful scientist in the country and spent an inordinate amount of time consulting in Washington until his carefully orchestrated downfall in 1954. All these factors contributed, but the real reason was something else entirely – Oppenheimer just wasn’t interested in black holes. Even after his downfall, when he had plenty of time to devote to physics, he never talked or wrote about them. The creator of black holes basically did not think they mattered.
Oppenheimer’s rejection of one of the most fascinating implications of modern physics and one of the most enigmatic objects in the universe – and one he sired – is documented well by Freeman Dyson who tried to initiate conversations about the topic with him. Every time Dyson brought it up Oppenheimer would change the subject, almost as if he had disowned his own scientific children.
The reason, as attested to by Dyson and others who knew him, was that in his last few decades Oppenheimer was stricken by a disease which I call “fundamentalitis”. Fundamentalitis is a serious condition that causes its victims to believe that the only thing worth thinking about is the deep nature of reality as manifested through the fundamental laws of physics.
As Dyson put it:
“Oppenheimer in his later years believed that the only problem worthy of the attention of a serious theoretical physicist was the discovery of the fundamental equations of physics. Einstein certainly felt the same way. To discover the right equations was all that mattered. Once you had discovered the right equations, then the study of particular solutions of the equations would be a routine exercise for second-rate physicists or graduate students.”
Thus for Oppenheimer, black holes, which were particular solutions of general relativity, were mundane; the general theory itself was the real deal. In addition they were anomalies, ugly exceptions which were best ignored rather than studied. As Dyson mentions, unfortunately Oppenheimer was not the only one affected by this condition. Einstein, who spent his last few years in a futile search for a grand unified theory, was another. Like Oppenheimer he was uninterested in black holes, but he also went a step further by not believing in quantum mechanics. Einstein’s fundamentalitis was quite pathological indeed.
History proved that both Oppenheimer and Einstein were deeply mistaken about black holes and fundamental laws. The greatest irony is not that black holes turned out to be very interesting; it is that in the last few decades the study of black holes has shed light on the very same fundamental laws that Einstein and Oppenheimer believed to be the only thing worth studying. The disowned children have come back to haunt the ghosts of their parents.
Black holes took off after the war largely due to the efforts of John Wheeler in the US and Dennis Sciama in the UK. The new science of radio astronomy showed us that, far from being anomalies, black holes litter the landscape of the cosmos, including the center of the Milky Way. A decade after Oppenheimer’s death, the Israeli theorist Jacob Bekenstein proved a very deep relationship between thermodynamics and black hole physics. Stephen Hawking and Roger Penrose found out that black holes contain singularities; far from being ugly anomalies, black holes thus demonstrated Einstein’s general theory of relativity in all its glory. They also realized that a true understanding of singularities would involve the marriage of quantum mechanics and general relativity, a paradigm that’s as fundamental as any other in physics.
In perhaps the most exciting development in the field, Leonard Susskind, Hawking and others have found intimate connections between information theory and black holes, leading to the fascinating black hole firewall paradox that forges very deep connections between thermodynamics, quantum mechanics and general relativity. Black holes are even providing insights into computer science and computational complexity. The study of black holes is today as fundamental as the study of elementary particles in the 1950s.
Einstein and Oppenheimer could scarcely have imagined that this cornucopia of discoveries would come from an entity that they despised. But their wariness toward black holes is not only an example of missed opportunities or the fact that great minds can sometimes suffer from tunnel vision. I think the biggest lesson from the story of Oppenheimer and black holes is that what is considered ‘applied’ science can actually turn out to harbor deep fundamental mysteries. Both Oppenheimer and Einstein considered the study of black holes to be too applied, an examination of anomalies and specific solutions unworthy of thinkers thinking deep thoughts about the cosmos. But the delicious irony was that black holes in fact contained some of the deepest mysteries of the cosmos, forging unexpected connections between disparate disciplines and challenging the finest minds in the field. If only Oppenheimer and Einstein had been more open-minded.
The discovery of fundamental science in what is considered applied science is not unknown in the history of physics. For instance Max Planck was studying blackbody radiation, a relatively mundane and applied topic, but it was in blackbody radiation that the seeds of quantum theory were found. Similarly it was spectroscopy, the study of light emanating from atoms, that led to the modern framework of quantum mechanics in the 1920s. Similar examples abound in the history of physics; in a more recent case, it was studies in condensed matter physics that led physicist Philip Anderson to make significant contributions to symmetry breaking and the postulation of the existence of the Higgs boson. And in what is perhaps the most extreme example of an applied scientist making fundamental contributions, it was the investigation of cannons and heat engines by French engineer Sadi Carnot that led to a foundational law of science – the second law of thermodynamics.
Today many physicists are again engaged in a search for ultimate laws, with at least some of them thinking that these ultimate laws would be found within the framework of string theory. These physicists probably regard other parts of physics, and especially the applied ones, as unworthy of their great theoretical talents. For these physicists the story of Oppenheimer and black holes should serve as a cautionary tale. Nature is too clever to be constrained into narrow bins, and sometimes it is only by poking around in the most applied parts of science that one can see the gleam of fundamental principles.
As Einstein might have said had he known better, the distinction between the pure and the applied is often only a “stubbornly persistent illusion”. It’s an illusion that we must try hard to dispel.
First published on SciAm

Molecular modeling and physics: A tale of two disciplines

For its development physics relies both on time for understanding and on multiple other disciplines that make tools like the LHC possible (Image: Universe Today)
In my professional field of molecular modeling and drug discovery I often feel like an explorer who has arrived on the shores of a new continent with a very sketchy map in his pocket. There are untold wonders to be seen on the continent and the map certainly points to a productive direction in which to proceed, but the explorer can’t really stake a claim to the bounty which he knows exists at the bottom of the cave. He knows it is there and he can even see occasional glimpses of it but he cannot hold all of it in his hand, smell it, have his patron duke lock it up in his heavily guarded coffers. That is roughly what I feel when I am trying to simulate the behavior of drug molecules and proteins.
It is not uncommon to hear experimentalists from other disciplines and even modelers themselves grumbling about the unsatisfactory state of the discipline, and with good reason. Neither are the reasons entirely new: The techniques are based on an incomplete understanding of the behavior of complex biological systems at the molecular level. The techniques are parametrized based on a limited training set and are therefore not generally applicable. The techniques do a much better job of explaining than predicting (a valid point, although it’s easy to forget that explanation is as important in science as prediction).
To most of these critiques my fellow modelers and I plead guilty; and nothing advances a field like informed criticism. But I also have a few responses to the critiques, foremost among which is one that is often under-appreciated: on the scale of scientific revolutions, computational chemistry and molecular modeling are nascent fields, only just emerging from the cocoon of understanding. Or, to be pithier, give it some more time.
This may seem like a trivial point but it's an important one and worth contemplating. Turning a scientific discipline from an unpolished, rough gem-in-the-making to the Hope Diamond takes time. To drive this point home I want to compare the state of molecular modeling – a fledgling science – with that of physics – perhaps the most mature science. Today physics has staked its claim as the most accurate and advanced science that we know. It has mapped everything from the most majestic reaches of the universe at its largest scale to the production of virtual particles inside the atom at the smallest scale. The accuracy of both calculations and experiments in physics can beggar belief; on one hand we can calculate the magnetic moment of the electron to better than ten decimal places using quantum electrodynamics (QED), and on the other hand we can measure the same parameter to the same degree of accuracy using ultrasensitive equipment.
But consider how long it took us to get there. Modern physics as a formal discipline could be assumed to have started with Isaac Newton in the mid 17th century. Newton was born in 1642. QED came of age in about 1952 or roughly 300 years later. So it took about 300 years for physics to go from the development of its basic mathematical machinery to divining the magnetic moment of the electron from first principles to a staggering level of accuracy. That’s a long time to mature.
Contrast this with computational chemistry, a discipline that spun off from the tree of quantum mechanics after World War 2. The application of the discipline to complex molecular entities like drugs and materials is even more recent, taking off in the 1980s. That's thirty years ago. Thirty years versus three hundred years, and no wonder physics is so highly developed while molecular modeling is still learning how to walk. It would be like criticizing physics in 1700 for not being able to launch a rocket to the moon. A more direct comparison for modeling is with the discipline of synthetic chemistry – a mainstay of drug discovery – which is now capable of making almost any molecule on demand. Synthetic chemistry began in about 1828, when German chemist Friedrich Wöhler first synthesized urea from simple inorganic compounds. That's a period of almost two hundred years for synthetic chemistry to mature.
But it’s not just the time required for a discipline to mature; it’s also the development of all the auxiliary sciences that play a crucial role in the evolution of a discipline that makes its culmination possible. Consider again the mature state of physics in, say, the 1950s. Before it could get to that stage, physics needed critical input from other disciplines, including engineering, electronics and chemistry. Where would physics have been without cloud chambers and Geiger counters, without cyclotrons and lasers, without high-quality ceramics and polymers? The point is that no science is an island, and the maturation of one particular field requires the maturation of a host of others. The same goes for the significant developments in mathematics – multivariate calculus, the theory of Lie groups, topology – that made progress in modern physics possible. Similarly synthetic chemistry would not have been possible had NMR spectroscopy and x-ray diffraction not provided the means to determine the structure of molecules.
Molecular modeling is similarly dependent on input from other sciences. Simulation really took off in the 80s and 90s with the rapid advances in computer software and hardware; before this period chemists and physicists had to come up with clever theoretical algorithms to calculate the properties of molecules simply because they did not have access to the proper firepower. Now consider what other disciplines modeling is dependent on – most notably chemistry. Without chemists being able to rapidly make molecules and provide both robust databases and predictive experiments, it would be impossible for modelers to validate their models. Modeling has also received a tremendous boost from the explosion of crystal structures of proteins engendered by genomics, molecular biology, synchrotron sources and computer software for data processing. The evolution of databases, data mining methods and the whole infrastructure of informatics has also really fed into the growth of modeling. One can even say without too much exaggeration that molecular modeling is ultimately a product of our ability to manipulate elemental silicon and produce it in an ultrapure form.
Thus, just like physics was dependent on mathematics, chemistry and engineering, modeling has been crucially dependent on biology, chemistry and computer science and technology. And in turn, compared to physics, these disciplines are relatively new too. Biology especially is still just taking off, and even now it cannot easily supply the kind of data which would be useful for building a robust model. Computer technology is very efficient, but still not efficient enough to really do quantum mechanical calculations on complex molecules in a high-throughput manner (I am still waiting for that quantum computer). And of course, we still don’t quite understand all the forces and factors that govern the binding of molecules to each other, and we don’t quite understand how to capture these factors in sanitized and user-friendly computer algorithms and graphical interfaces. It’s a bit like physics having to progress without having access to high-voltage sources, lasers, group theory and a proper understanding of the structure of the atomic nucleus.
Thus, thirty years is simply not enough for a field to claim a very significant degree of success. In fact, considering how new the field is and how many unknowns it is still dealing with, I would say that the field of molecular modeling is actually doing quite well. The fact that computer-aided molecular design was hyped during its inception does not make it any less useful, and it’s silly to think so. In the past twenty years we have at least had a good handle on the major challenges that we face and we have a reasonably good idea of how to proceed. In major and minor ways modeling continues to make useful contributions to the very complicated and unpredictable science and art of drug design and discovery. For a field that’s thirty years old I would say we aren’t doing so bad. And considering the history of science and technology as well as the success of human ingenuity in so many forms, I would say that the future is undoubtedly bright for molecular simulation and modeling. It’s a conviction that is as realistic as any other in science, and it’s one of the things that helps me get out of bed every morning. In science fortune always favors the patient, and modeling and simulation will be no different.

Falsification and chemistry: What’s the rub?

Roald Hoffmann has often emphasized the limitations of falsification for the everyday practice of chemistry
My last post on the role and limitations of falsification leads to a point I have made before: the fact that falsification is far less important for chemists than it is for, say, physicists or mathematicians. My take on the relative unimportance of falsification comes mainly from Roald Hoffmann, who is as much a philosopher of chemistry (and a poet) as a professional Nobel Prize-winning chemist. He has an excellent essay called “What would philosophy of science look like if chemists built it?” in his collection of essays from last year (which I reviewed for Nature Chemistry here).

Hoffmann’s basic take on chemistry and the philosophy of science goes to the heart of what distinguishes chemistry from other sciences. Chemistry as it is practiced consists of two major activities – analysis and synthesis. The analysis part wherein you break down a substance into its constituent atoms and deduce their bonding, charge and spatial disposition is akin to the reductionist ethos of physics where you make sense of matter by taking it apart. The synthesis part of chemistry is highly creative and consists of building up complex molecules from simple counterparts. It is an activity that not only makes chemistry conceptually unique among the sciences but which has also contributed to the inestimable utility of the science in creating the material world around us. It is as much an art as a science, and one which makes chemistry very close to architecture as a practical pursuit.

Karl Popper wrote a well-known book called “Conjectures and Refutations” in which, among other things, he laid out his central philosophy of falsification. A related philosophy is the hypothetico-deductive approach to the scientific method, in which one formulates hypotheses and tests them. Here is what Hoffmann says about this way of thinking about science after analyzing a particular paper on the synthesis of fullerene molecules that can encapsulate hydrogen molecules. I am slightly rephrasing his words to make them more general:
“What theories are being tested (or falsified, for that matter) in a beautiful paper on synthesis? None, really, except that such and such a molecule can be constructed. The theory building in that is about as informative as the statement that an Archie Ammons poem tests a theory that the English language can be used to construct novel and perceptive insights into the way the world and our minds interact. The power of that tiny poem, the cleverness of the molecular surgery that a synthetic chemist performs in creating a molecule, just sashay around any analytical theory-testing.”
How is this creative act of synthesizing a novel substance exactly making and testing a hypothesis or theory? Now one may argue that even a synthesis holds the feet of certain theories of bonding (molecular orbital theory for instance) to the fire. It is certainly true that there is always some implicit assumption, some background knowledge, that underlies the synthesis of any molecule; the construction of the molecule would fail in fact if electrons did not flow in such and such a manner and if bonds did not form in such and such a manner, so of course you are testing elementary assumptions and theories about chemical bonding whenever you make any molecule. But why not go further then and say that you are testing the atomic hypothesis whenever you are conducting pretty much any experiment in chemistry, physics or biology? Or if you want to reach out even further and tread into philosophy, you could even say that you are testing the basic assumption behind science that natural laws dictate the behavior of material entities.

Clearly this definition of “falsification” is so general and so all-encompassing as to greatly vitiate the utility of the concept; try asking a synthetic chemist next time if the main purpose of his synthesis is to test or falsify molecular orbital theory. Drawing on the analogy between chemistry and architecture, it would be like saying that every time an architect is designing a new shape for a building she is hypothesizing and testing the law of gravity. Well, yes, and no.

In fact this debate again very much reminds me of the fondness for reductionism that physicists often bring to a debate about “higher order” disciplines like chemistry, economics or psychology. Molecules, people and societies are made out of atoms, they will say, which means that “atoms explain people”. I think most physicists themselves will agree as to the futility of such far-out explanations. The fact is that a concept is useful only if it has a direct, non-trivial relationship to the phenomenon which it purports to explain. Theories in philosophy, just like reductionist theories in physics, are far more relevant on a certain level than on others.

Synthesis is a creative activity, and while every synthesis implicitly and trivially tries to falsify some deep-seated fundamental law, the science and art of synthesis as a whole does not explicitly and non-trivially try to falsify any particular theory. That does not mean that falsification is absent or untrue, it just means that it’s rather irrelevant.

Falsification and its discontents

Karl Popper's grounding in the age of physics colored his views regarding the way science is done. Falsification was one of the resulting casualties (Image: Wikipedia Commons)
Earlier this year the 'Big Questions' website Edge.org asked the following question: “What scientific idea is ready for retirement?” In response physicist Sean Carroll took on an idea from the philosophy of science that’s usually considered a given: falsification. I mostly agree with Carroll’s take, although others seem to be unhappier, mainly because Carroll seems to be postulating that lack of falsification should not really make a dent in ideas like the multiverse and string theory.

I think falsification is one of those ideas which is a good guideline but which cannot be taken at face value and applied with abandon to every scientific paradigm or field. It’s also a good example of how ideas from the philosophy of science may have little to do with real science. Too much of anything is bad, especially when that anything is considered to be an inviolable truth.

It’s instructive to look at falsification’s father to understand the problems with the idea. Just like his successor Thomas Kuhn, Karl Popper was steeped in physics. He grew up during the heyday of the discipline and ran circles around the Vienna Circle whose members (mostly mathematicians, physicists and philosophers) never really accepted him as part of the group. Just like Kuhn Popper was heavily influenced by the revolutionary discoveries in physics during the 1920s and 30s and this colored his philosophy of science.

Popper and Kuhn are both favorite examples of mine for illustrating how the philosophy of science has been biased toward physics and by physicists. The origin of falsification was simple: Popper realized that no amount of data can really prove a theory, but that even a single key data point can potentially disprove it. The two scientific paradigms which were reigning then – quantum mechanics and relativity – certainly conformed to his theory. Physics as practiced then was adept at making very precise, quantitative predictions about a variety of phenomena, from the electron’s charge to the precession of Mercury’s perihelion. Falsification certainly worked very well when applied to these theories. Sensibly Popper advocated it as a tool to distinguish science from non-science (and from nonsense).

But in 2014 falsification has become a much less reliable and more complicated beast. Let’s run through a list of its limitations and failures. For one thing, Popper’s idea that no amount of data can confirm a theory is a dictum that’s simply not obeyed by the majority of the world’s scientists. In practice a large amount of data does improve confidence in a theory. Scientists usually don’t need to confirm a theory one hundred percent in order to trust and use it; in most cases a theory only needs to be good enough. Thus the purported lack of confidence in a theory just because we are not one hundred percent sure of its validity is a philosophical fear, more pondered by grim professors haunting the halls of academia than by practical scientists performing experiments in the everyday world.

Nor does Popper’s exhortation that a single incisive data point slay a theory hold any water in many scientists’ minds. Whether because of pride in their creations or because of simple caution, most scientists don’t discard a theory the moment there’s an experiment which disagrees with its main conclusions. Maybe the apparatus is flawed, or maybe you have done the statistics wrong; there’s always something that can rescue a theory from death. But most frequently, it’s a simple tweaking of the theory that can save it. For instance, the highly unexpected discovery of CP violation did not require physicists to discard the theoretical framework of particle physics. They could easily save their quantum universe by introducing some further principles that accounted for the anomalous phenomenon. Science would be in trouble if scientists started abandoning theories the moment an experiment disagreed with them. Of course there are some cases where a single experiment can actually make or break a theory but fortunately for the sanity of its practitioners, there are few such cases in science.

Another reason why falsification has turned into a nebulous entity is because much of modern, cutting-edge science is based on models rather than theories. Models are both simpler and less rigorous than theories and they apply to specific, complicated situations which cannot be resolved from first principles. There may be multiple models that can account for the same piece of data. As a molecular modeler I am fully aware of how one can tweak models to fit the data. Sometimes this is justified, at other times it’s a sneaky way to avoid admitting failure. But whatever the case, the fact is that falsification of a model almost never kills it instantly since a model by its very nature is supposed to be more or less a fictional construct. Both climate models and molecular models can be manipulated to agree with the data when the data disagrees with their previous incarnation, a fact that gives many climate skeptics heartburn. The issue here is not whether such manipulation is justified, rather it’s that falsification is really a blunt tool to judge the validity of such models. As science becomes even more complex and model-driven, this failure of falsification to discriminate between competing models will become even more widespread.

The last problem with falsification is that, since it was heavily influenced by Popper’s training in physics, it simply fails to apply to many activities pursued by scientists in other fields, such as chemistry. The Nobel Prize-winning chemist Roald Hoffmann has argued in his recent book that falsification is almost irrelevant to many chemists whose main activity is to synthesize molecules. What hypothesis are you falsifying, exactly, when you are making a new drug to treat cancer or a new polymer to sense toxic environmental chemicals? Now you could get very vague and general and claim that every scientific experiment is a falsification experiment since it's implicitly based on belief in some principle of science. But as they say, a theory that explains everything explains nothing, so such a catchall definition of falsification ceases to be useful.

All this being said, there is no doubt that falsification is a generally useful guideline for doing science. Like a few other commenters I am surprised that Carroll uses his critique of falsification to justify work in areas like string theory and the multiverse, because it seems to me that those are precisely the areas where testable and falsifiable predictions are badly needed because of lack of success. Perhaps Carroll is simply saying that too much of anything including falsification is bad. With that I resoundingly agree. In fact I would go further and contend that too much of philosophy is always bad for science; as they say, the philosophy of science is too important to be left to philosophers of science.

The simple physics behind a horrible tragedy: A tape measure with the energy of a 0.45 Colt bullet

From the NYT comes this really tragic story of a man who was killed when a tape measure from a construction site fell down 50 floors and struck him on the head. My deepest condolences to his family. 

The tape measure weighed a pound so it may seem strange that it led to such an irreversible and horrible fate. Sadly the man wasn't wearing a hard hat. And physics was not on his side: as we will see below, the tape measure that struck him was tantamount to a bullet.

We can use Newton's famed three equations of motion to determine the kinetic energy of the measure as it struck the unfortunate man's head. The three equations are:

v = u + at
s = ut + at^2/2
v^2 = u^2 + 2as

Here, u is the initial velocity, v is the final velocity, a is the acceleration, s is the distance covered and t is the time.

A moment's inspection reveals that out of the three equations it's most convenient to use the third one since it does not include time, a variable that's not directly apparent in the problem. It's important to convert all units to a consistent system (MKS or CGS) to get the right answer. We use the following values:

u = 0 since the tape measure started from a stationary state.
a = the acceleration due to gravity, g = 9.8 m/s^2
s = 400 ft = 121.9 meters
m = 1 pound = 0.45 kilograms

So v^2 comes out to be 2*9.8*121.9 = 2389.24 m^2/s^2, which we will round to 2389.

Now the kinetic energy is just mv^2/2 so we multiply this number by the mass which is 0.45 kilograms and divide by 2.

2389*0.45/2 = 537.5, which we will round to 537 joules.

How does this number compare to the kinetic energy of other deadly projectiles, say bullets? From this Wikipedia article on muzzle energy comes a comparison chart. 500 joules is the KE of a bullet from a 0.45 Colt pistol. The same Colt that was called "the gun that won the American West" and which was the US military's standard issue firearm until the end of the 19th century.

So the tape measure that ended a life in Jersey City today had a kinetic energy that was more than the energy of a bullet from a 0.45 Colt. It was as if the man whose belt the tape measure fell from had shot the other guy at point blank range with a 0.45 Colt. I am assuming that even with a hard hat his chances of survival might have been close to zero. But possibly finite.

This simple calculation makes as good a case as any for guarding every piece of equipment at the top of a construction site, no matter how small or large, with your life. Doing the same math for a quarter (weighing about 6 grams) gives an energy of only about 7 joules, but bump up the weight to a third of a pound and the object acquires roughly the KE of a bullet from a 0.22LR pistol (about 160 joules). The nature of the impact would of course also depend on the material, its shape, its surface area (which is tiny for a bullet), the angle at which it strikes and other factors, but that would really be quibbling over trifles (as far as safety is concerned).
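For anyone who wants to play with the numbers, here is a minimal Python sketch of the same back-of-the-envelope estimate. It uses the same assumptions as above (a 400 ft drop, g = 9.8 m/s^2, no air resistance), and the masses are just the illustrative ones mentioned in this post.

# Kinetic energy of an object falling from rest, using v^2 = u^2 + 2*a*s
# (u = 0 and air resistance ignored, as in the estimate above)

G = 9.8          # acceleration due to gravity, m/s^2
HEIGHT = 121.9   # 400 ft expressed in meters

def impact_energy(mass_kg, height_m=HEIGHT):
    """Kinetic energy in joules at the end of a free fall from height_m."""
    v_squared = 2 * G * height_m        # v^2 = 2*g*s, since u = 0
    return 0.5 * mass_kg * v_squared    # KE = m*v^2/2

print(impact_energy(0.45))    # ~537 J: the 1-pound tape measure (a 0.45 Colt bullet is ~500 J)
print(impact_energy(0.006))   # ~7 J: a quarter
print(impact_energy(0.15))    # ~180 J: roughly a third of a pound, in 0.22LR territory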

There are umpteen things on a construction site that are hard, rigid objects weighing at least a third of a pound; large keychains, travel mugs, small tools like screwdrivers and cell phones come to mind. Newton's equations tell us why each one of these common necessities of daily life should be watched and secured as closely as possible. And please, please wear a hard hat.

Because everything changes when you are 400 ft from the ground.

P.S. I just took a look at my copy of Halliday and Resnick's classic physics textbook and realized that a much simpler way to do this would be to calculate the potential energy at the top - mgh. QED. This is what happens when you have not been doing physics formally for a while. It's still a good way to illustrate Newton's equations though.
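(Indeed, m*g*h = 0.45 * 9.8 * 121.9 ≈ 537.6 joules, the same answer as above to within rounding.)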

Biologists, chemists, math and computing

Here are some words of wisdom from C. Titus Brown, a biology professor at Michigan State University, on the critical importance of quantitative math, stats and computing skills in biology. The larger question is what it means to 'do biology' in an age when biology has become so interdisciplinary. 

Brown's post itself is based on another post by Sean Eddy at Janelia Farm on how biologists must learn scripting these days. Here's Brown:
So here's my conclusion: to be a biologist, one must be seriously trying to study biology. Period. Clearly you must know something about biology in order to be effective here, and critical thinking is presumably pretty important there; I think "basic competency in scientific practice" is probably the minimum bar, but even there you can imagine lab techs or undergraduates putting in useful work at a pretty introductory level here. I think there are many useful skills to have, but I have a hard time concluding that any of them are strictly necessary. 
The more interesting question, to my mind, is what should we be teaching undergraduates and graduate students in biology? And there I unequivocally agree with the people who prioritize some reasonable background in stats, and some reasonable background in data analysis (with R or Python - something more than Excel). What's more important than teaching any one thing in specific, though, is that the whole concept that biologists can avoid math or computing in their training (be it stats, modeling, simulation, programming, data science/data analysis, or whatever) needs to die. That is over. Dead, done, over.
And here's Eddy:
The most important thing I want you to take away from this talk tonight is that writing scripts in Perl or Python is both essential and easy, like learning to pipette. Writing a script is not software programming. To write scripts, you do not need to take courses in computer science or computer engineering. Any biologist can write a Perl script. A Perl or Python script is not much different from writing a protocol for yourself. The way you get started is that someone gives you one that works, and you learn how to tweak it to do more of what you need. After a while, you’ll find you’re writing your own scripts from scratch. If you aren’t already competent in Perl and you’re dealing with sequence data, you really need to take a couple of hours and write your first Perl script.  
The thing is, large-scale biological data are almost as complicated as the organism itself. Asking a question of a large, complex dataset is just like doing an experiment.  You can’t see the whole dataset at once; you have to ask an insightful question of it, and get back one narrow view at a time.  Asking insightful questions of data takes just as much time, and just as much intuition as asking insightful questions of the living system.  You need to think about what you’re asking, and you need to think about what you’re going to do for positive and negative controls. Your script is the experimental protocol. The amount of thought you will find yourself putting into many different experiments and controls vastly outweighs the time it will take you to learn to write Perl.
And they are both right. The transformation of biology in the last thirty years or so into a data-intensive (especially sequencing data) and information-intensive discipline means that it's no longer optional to be able to apply the tools of information technology, math and computing to biological problems. You either use the tools yourself or you collaborate with someone who knows how to use them. Collaboration has of course also become much easier in the last thirty years, but it's easiest to do it yourself if your goal is simply to write some small but useful scripts for manipulating data. R and Python/Perl are both usually necessary but also sufficient for such purposes. Both Eddy and Brown also raise important questions about including such tools in the traditional education of undergraduate and graduate students in biology, something that's not a standard practice yet.
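To make Eddy's point concrete, here is a sketch of the kind of small, protocol-like Python script he is talking about; it simply tabulates the length and GC content of every sequence in a FASTA file. It is only an illustration (the input file name is a hypothetical placeholder), but it shows how little 'software engineering' such a script actually involves.

# A protocol-like script: report the length and GC content of every sequence
# in a FASTA file. The file name at the bottom is a hypothetical placeholder.

def read_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, chunks = None, []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(chunks)
                header, chunks = line[1:], []
            elif line:
                chunks.append(line.upper())
        if header is not None:          # don't forget the last record
            yield header, "".join(chunks)

def gc_content(seq):
    """Fraction of G and C bases in a sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq) if seq else 0.0

for name, seq in read_fasta("my_sequences.fasta"):  # point this at your own data
    print(name, len(seq), round(gc_content(seq), 3))

Point it at a real file, tweak it to answer the next question, and you are doing exactly the learn-by-tweaking workflow Eddy describes.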
What I find interesting and rather ironic is that for most of its existence biology was a highly descriptive and experimental science that seemed to have nothing to do with mathematical analysis. This changed partially with the advent of genetic analysis in the early twentieth century, but quantitative thinking did not really reach the molecular basis of biology until the growth of sequencing tools.
Another irony is that chemistry is usually considered to be more data intensive and mathematical than biology, especially in the form of subfields like physical chemistry. And yet today we don't really find the average organic or inorganic chemist facing a need to learn and apply scripting or statistical analysis, although computational chemists have of course long since recognized the value of cheminformatics and data analysis. Compared to this the average biologist has become much more dependent on such tools.
However I think that this scenario in chemistry is changing. Organic synthesis may not be information intensive right now but automated organic synthesis - which I personally think is going to be revolutionary at some point in the future - will undoubtedly lead to large amounts of data which will have to be analyzed and statistically made sense of. One specific area of chemistry where I think this will almost certainly happen is in the application of 'accelerated serendipity' which involves the screening of catalysts and the correlation of molecular properties with catalytic activity. Some chemists are also looking at network theory to calculate optimal synthetic paths. At some point to use these tools effectively, the average chemist will have to borrow tools from the computational chemist, and perhaps chemistry will have become as data-intensive as biology. It's never too early to pick up that Python book or look at that R course.

Edward Teller plays Beethoven's Moonlight Sonata


Volatile neurons and detonators fire, tritium and moral ambiguities fuse, faith and plutonium implode, the hauntingly beautiful notes intertwine and signal a glorious armageddon.

The rise of the male nerd and the decline of female computer scientists

From NPR comes a very interesting graph and hypothesis. The graph shows that the percentage of women who received degrees in computer science was actually rising until the 1980s and then started dropping. 



Why would that be the case?

The highly intriguing hypothesis is that the personal computer revolution during the 1980s mainly featured very simple applications like games that were primarily aimed at men. In fact it was this revolution that gave rise to the stereotype of the geeky, tech-savvy, socially awkward male, a stereotype that is now nauseatingly on display in the nooks and crannies of Silicon Valley.

This trend would be consistent with some of the other things we know about the field: We know that the tech industry is male dominated, we know that women who try to get into computer science face all kinds of social and cultural barriers, and as demonstrated recently by the fiasco of 'Gamergate' we know how appallingly unfriendly certain extensions of computer technology can be to women. 

Interestingly the previous healthy presence of women in the field reminds me of something that I have been reading in Walter Isaacson's engaging new book "The Innovators". Isaacson documents that the percentage of women with degrees in math was actually higher in the 1930s than in the 1950s. I wonder if similar factors - perhaps the extensive application of math and physics to war-related problems, an activity that might have been viewed as predominantly "male" - were responsible for women dropping out of the mathematical professions then.

In any case, while correlation is not causation, these findings would certainly be consistent with the cultural factors that make it harder for women to enter the field. Which is something that we can always ponder and remedy.