Field of Science

In the matter of Walter Lewin, MIT goes medieval

By now most people must have heard the unpleasant news that Walter Lewin, the beloved and world-renowned physics teacher at MIT whose legendary video lectures drew comparison with the Feynman Lectures on Physics, has been barred from campus and stripped of his emeritus professor title in response to charges of sexual harassment of a student in one of his MITx courses. Unfortunately, considering the very public fame that Lewin achieved, MIT has been frustratingly silent about the details of the events, but we can only assume that the charges were quite serious and supported by strong evidence.

It's a very painful and precipitous fall from grace for someone who had achieved the kind of stardom reserved for a handful of science personalities - images of Lewin swinging from a pendulum to demonstrate that its period is independent of its mass, or risking serious injury by letting a heavy pendulum swing back to within a hairsbreadth of his face, are now part of science folklore. As someone who has savored many of his lectures as well as his book, I am not the only one to feel pained, confused and stunned.

What I find bizarre is that MIT has chosen to take down all the lectures that made him famous. I simply don't understand this, especially since they have cited reasons of "safety" in doing so - how could Lewin possibly entice or harass students simply by explaining the motion of a simple pendulum in an old, archived lecture? Apply whatever punishment you see fit to him, but why deprive millions of students around the world of the joy of physics? For an institution that pioneered online courses through its OpenCourseWare modules, it is particularly damning to engage in this action.

Sadly, the purging of Lewin's lectures from the online universe cannot help but bear comparison to the book burning and blacklisting of speech and writing that was so endemic in the Dark and Middle Ages. MIT's actions are akin to banning, say, Wagner's operas because he was anti-Semitic, or Peter Debye's equations because there are hints that he was silently permissive at the very least during the rise of the Nazis. Their actions remind me of the movie "The Ten Commandments" in which, after learning that his beloved son is actually the son of a Hebrew slave, the Pharaoh issues an edict to have Moses's name stricken from every monument and parchment in the land so that his very existence can be erased from history. It really shouldn't be so hard to separate a man's work from his actions, and I cannot understand what motive MIT could have had in taking down the lectures. I am assuming that content in the Internet age is not as lost to the chasm of ignorance as content existing in a few select printed volumes four hundred years ago, but the issue is really about the message, not the medium.

Whatever their motive, the message MIT is sending to the world is not too complimentary - we will put the objectivity of knowledge on the same pedestal as the fallibility of human beings, and if we find evidence of one we will banish the other too. It's a very troubling precedent in my opinion. I especially hope that an institution as committed to the spread of knowledge as MIT reconsiders the decision.

Update: I am glad to see that prominent MIT computer science professor Scott Aaronson has the same thoughts and in fact believes even more strongly than I that taking down the lectures is a huge mistake. I also agree with him that MIT needs to be much more forthcoming and transparent in revealing the details of the case to a perplexed public.

"I’m someone who feels that sexual harassment must never be tolerated, neither here nor anywhere else.  But I also feel that, if a public figure is going to be publicly brought down like this (yes, even by a private university), then the detailed findings of the investigation should likewise be made public, regardless of how embarrassing they are.  I know others differ, but I think the need of the world to see that justice was done overrides MIT’s internal administrative needs, and even Prof. Lewin’s privacy (the names of any victims could, of course, be kept secret). 
More importantly, I wish to register that I disagree in the strongest possible terms with MIT’s decision to remove Prof. Lewin’s lectures from OpenCourseWare—thereby forcing the tens of thousands of students around the world who were watching these legendary lectures to hunt for ripped copies on BitTorrent.  (Imagine that: physics lectures as prized contraband!)  By all means, punish Prof. Lewin as harshly as he deserves, but—as students have been pleading on Reddit, in the MIT Tech comments section, and elsewhere—don’t also punish the countless students of both sexes who continue to benefit from his work.  (For godsakes, I’d regard taking down the lectures as a tough call if Prof. Lewin had gone on a murder spree.)  Doing this sends the wrong message about MIT’s values, and is a gift to those who like to compare modern American college campuses to the Soviet Union."

The demise of SciAmBlogs

So I hear that SciAmBlogs is undergoing a radical overhaul and shedding no fewer than half of its bloggers, many of whom have been with the network since its inception. This includes many whose thought-provoking writings I respect - even though I don't always agree with them - like Janet Stemwedel and Eric Michael Johnson.

It's a shame, really, because I think the network had distinguished itself as one of the few blogging networks in the world whose bloggers had vibrant, independent voices and who were not afraid to write provocative posts. That being said, I don't have a problem seeing the logic of this move at all: after what happened during the last year, it is clear that the network wants to repair what it sees as a broken image, wants to avoid dealing with even ten clamorous voices on Twitter, wants to stay away even from interesting controversy and - the importance of this aspect of any issue can never be overestimated - wants to please the lawyers. The rigors of maintaining a hundred-and-fifty-year-old organization's image are apparently much harder than the rigors of sustaining a diverse set of opinions and the accompanying freedom of speech.

However it is equally clear that by embarking on this new identity the site has picked safe over interesting and independent, and has lost its reputation as a vibrant and diverse community of independent voices which you may not always agree with but whose views always provided food for thought. This is abundantly obvious from the new "guidelines" issued by the network - a veritable school headmaster's list of dos and don'ts combined with a palpable dash of Orwellian doublespeak - which prohibit its bloggers from hosting guest posts or writing "outside" their areas of expertise without consulting the editors. In doing this SciAmBlogs has reduced its bloggers to - in physicist Sean Carroll's words - underpaid journalists and effectively dissuaded them from exploring new horizons. A blogger who gets paid a paltry sum of money every month for writing "safe" posts that won't get even a handful of people on Twitter riled up and that are considered kosher by the editors is indeed no longer a real blogger, and I can definitely see why many of the network's previous writers quit instead of relinquishing their independence.

Fortunately for the sake of the network some excellent bloggers whose writings I really enjoy, like Jennifer Ouellette, have stayed and I wish these folks good luck, but it's also clear they know what they are in for. There are also other specialists like Darren Naish who writes superbly on paleontology and prehistory. But these people are hedgehogs - acknowledged experts in specific subject areas. A science network truly worth its name needs both hedgehogs and foxes - people who like to muck around and explore other topics rather than mainly drill deep into one. They may not offer definitive answers but you can count on them to ask thought-provoking and even provocative questions. The network has now retained a few hedgehogs but it has lost all its foxes. The loss of foxes has greatly diminished the diversity of the ecosystem. 

In a way the whole shebang was never the same after Bora Zivkovic left, but it really took recent events to bring it to this precipitous edge. In one sense this is a good decision, since the site has now decided where the lines are drawn. The sad thing is that - for all its supposed emphasis on diversity - it has done the opposite and chosen to draw the lines on the side of corporate approval instead of diverse opinion (and speaking of diversity, as this post on SciLogs noted, it's also interesting that most of the blogs which were cut - even for reasons other than low frequency of posting - were written by women). When it comes to maintaining diversity, the site has effectively gone the way of "We had to destroy the village in order to save it". I wish the remaining bloggers there good luck, but I doubt the environment will ever be the unique forum for independent voices and vigorous debate that it once was. Scientific American's Blog Network will probably survive in some form or another, but SciAmBlogs as it once stood is now over.

I leave the last word to Yana Eglit who wrote for the network for a long time and who sees in this unfortunate but predictable development a larger symptom of our collective woes, on SciAmBlogs as well as on social media in general. The whole thing is eminently worth reading so I quote here at length (the italics are mine):
"Social media has a powerful tendency towards homogenising opinions into a flavourless monolithic blob. While many who use social media are clearly and sincerely interested in promoting diversity, it is a bitter irony that the platform itself suppresses it. Dissenting opinions get transformed to strawmen and people become literal [insert favourite tyrants here]. Instead of trying to understand why someone you consider reasonable wrote something (in the case of twitter, in 140 characters!) so apparently shocking, and giving them the benefit of the doubt, you immediately jump to the conclusion that they are against whatever cause is in question. And the causes in question are usually far too complex to have a single position on. Especially one measured in not even words, but characters. 
But no benefits of the doubt are given — you stray from the path, you’re obviously up to no good. And you get axed. This breeds a form of conservatism — the group as a whole becomes too terrified to say something that will be misunderstood, and what could be a diverse discussion by multiple people of varied backgrounds becomes an echo chamber, a ‘circlejerk’ to use cruder but more to-the-point internet terminology.  
It’s somewhat ironic: in this system, you value minorities, you value women, you value the disadvantaged — but you do not value people. Individuals are worthless, to be cast aside the moment you find something disagreeable in them. People will support you until the first flaw, and then they take off to find someone not-yet-flawed instead. I admit that there’s an element in culture in my finding this strange and disagreeable — in Russian culture (as well as several other Slavic ones I’ve dealt with) we tend to see friendship and personal relationships (but especially friendships) as something rather sacred, something that should ideally transcend ideological differences, political disagreements, and especially character flaws. That can lead to issues in its own right, of course — everything comes at a trade-off, and every cultural description hides within it a massive statistical mess, but that’s something that always bothered me here, just how quickly people discard friendships they find no longer savoury. And this is especially nasty on the internet.
Mature nuanced discussions were never a blatant strong point of the internet, but here we have mature, nuanced individuals — intelligent, experienced individuals with a genuine interest to improve the world around them — having discussions on the level of teenage basement trolls. That, I think, is tragic. 
…and for what was all this? We lost a network. We lost voices who fought for us. We lost each other. We lost direction. We lost our actual main goal — to communicate the wonders of science with the world. And to some extent, I think this has damaged our rapport as bloggers with both journalist as well as scientist communities. Not to mention the curious public looking at all this in bemused confusion. We gained nothing."
I would go a step further and say that we gained something valuable, and then lost it with deliberate, purposeful and misguided conviction. It's something to mull over. However there is as always a silver lining. This story will serve as a cautionary tale for other blogging networks which wish to foster diversity. Meanwhile, both time and the Internet are limitless, so foxes like me will always have a multitude of new fields to play in.

The name's Bond - reversible covalent bond.

Fifteen years ago most people would have laughed at you if you told them kinase inhibitors would become such a big deal: the received wisdom at that point was that anything that competed with ATP for one kinase would just indiscriminately hit other kinases. While that is generally true, we have found over the intervening years that there is a wealth of detail - type II motifs, allosteric binding, relatively straightforward residue selectivity etc. - that can be tweaked to provide selectivity. In one fell swoop, Gleevec upended the conventional wisdom.

A similar kind of thinking existed for covalent drugs a few years back, and I think that field too is going down the fortuitous road that kinase inhibitors took and defying the naysayers. This year especially has seen a bonanza of activity related to the discovery and fine-tuning of covalent inhibitors. The most striking and unexpected development in the field, however, has been the realization that one can find reversible covalent inhibitors that engage a protein target long enough to provide useful efficacy: the pessimistic thinking that had prevailed until then was mainly based on the potent off-target effects potentially arising from irreversible inhibitors. That thinking was justified...just as the thinking that kinase inhibitors would be non-selective was justified in 1998.

Reversible covalent kinase inhibitors in particular have seen a lot of interest and publications this year, and it's especially gratifying for me as a computational chemist to see that the field is benefiting from both experimental and computational approaches. Jack Taunton at UCSF (who has started a company to exploit this kind of inhibitor) has been a leader, but there have been others. Just recently there was a joint publication from the Taunton and Shoichet groups which used covalent docking techniques to prospectively discover JAK3 inhibitors. Taunton has also done some interesting work with Matt Jacobson, also at UCSF, using computational methods to tune covalent warhead reactivity - work that would be especially useful in tailoring reversible inhibitors to suit the target. A group at Pfizer has done similar work. Meanwhile the discovery of irreversible inhibitors for other important targets also isn't dead: earlier this year Nathanael Gray's group at Harvard published a report on covalent irreversible inhibitors for CDK7, for instance.

It remains to be seen - it always remains to be seen - how many of these leads can survive the rigors of the clinic and how general the field becomes. But the approach in general certainly seems to be coming of age, and I will be watching its development with great interest. Quite a bit of promise there and also a real chance to defy some naysayers, which is what scientists especially relish.

Paul Schleyer: Among the last of the universalists

I was saddened to hear of the passing of Paul Schleyer, not exactly a household name among non-chemists but someone who was undoubtedly one of the most prolific and towering chemists of his time. Schleyer started out in synthetic physical organic chemistry and then moved to computational and theoretical chemistry. Starting out at Princeton, he moved to Erlangen in Germany and finally settled down at the University of Georgia, helping to make each one of these centers a leading hub for theoretical chemistry.

Schleyer was definitely one of the last universalists - at least in the theoretical territory of chemistry - and had a definitive grasp of almost every aspect of the field, from NMR spectroscopy (he came up with the valuable nucleus-independent chemical shift, or NICS, metric) and ab initio quantum chemical calculations to non-classical carbocations, lithium compounds and strained hydrocarbon synthesis (he synthesized adamantane, for instance). Most people are content to be experts in one or two fields, but it's probably not an exaggeration to say that Schleyer was close to being an expert in most of these fields. During his career he also seems to have co-authored papers and books with virtually every prominent figure in the field, from George Olah to John Pople to Jack Roberts. His publications on these and other parts of chemistry were prolific and deep, and he was also not one to shy away from spirited debate. I had the pleasure of watching him give a talk once, and it was clear that he was a formidable opponent whose knowledge of chemistry would almost always exceed that of his 'adversaries'.

In fact that last aspect of his personality was on full display the first time I encountered him. This was as a graduate student, when I came across a charming and very readable account of his exchanges with Herbert Brown over the infamous (and invaluable) non-classical carbocation debate. Titled "The Nonclassical Ion Problem", this little book is structured as a series of points and counterpoints offered by Schleyer and Brown in every chapter. The constant back and forth, the dissection of minutiae of solvents, reaction rates and spectroscopy, and the generally unyielding yet cordial personalities of the two men are fascinating to follow. It's pure, unadulterated scientific debate at its best.

Schleyer was definitely one of the last and greatest practitioners of the golden era of physical organic chemistry, a time when chemists dedicated their lives to exploring the fundamentals of chemical reactions and structure with vigor and rigor. He is one of a handful of figures on whose shoulders we all stand and will be genuinely missed.

Philosophy begins where physics ends, and physics begins where philosophy ends

Richard Feynman - philosopher (Image:WashU)
A few months ago, physicist Sean Carroll had some words of wisdom for physicists who might have less than complimentary things to say about philosophy. The most recent altercation between a physicist and philosophy came from Neil deGrasse Tyson, who casually disparaged philosophy in a Q&A session, saying that it can be a time sink and that it doesn't actually provide any concrete answers to scientific questions. Now I am willing to give Tyson the benefit of the doubt since his comment was probably a throwaway remark; plus it's always easy for scientists to take potshots at philosophers in a friendly sort of way, much like the Yale football team would take potshots at its Harvard counterpart.
But Tyson’s response was only the latest in a series of run-ins that the two disciplines have had over the past few years. For instance in 2012 philosopher David Albert castigated physicist Lawrence Krauss for purportedly claiming in his most recent book that physics had settled or at least given plausible answers to the fundamental question of existence. In reply Krauss called Albert “moronic” which didn’t help much to bridge the divide between the two fields. Stephen Hawking also had some harsh words for philosophers, saying that he thought “philosophy is dead”, and going further back, Richard Feynman was famously disdainful of philosophy which he called “dopey”.
In his post Carroll essentially deconstructs the three major criticisms of philosophy seen among physicists: there’s the argument that philosophers don’t really gather data or do experiments, there’s the argument that practicing physicists don’t really use any philosophy in their work, and there’s the refrain that philosophers concern themselves too much with unobservables. Carroll calls the first of these arguments dopey (providing a fitting rejoinder to Feynman), the second frustratingly annoying and the third deeply depressing.
I tend to agree with his take, and I have always had trouble understanding why otherwise smart physicists like Tyson or Hawking seem to neglect both the rich history of interaction between physics and philosophy and the fact that they are unconsciously doing philosophy even when they are doing science. For instance, what exactly was the philosophy-hating Feynman talking about when he gave the eloquent Messenger Lectures that became “The Character of Physical Law“? Feynman was talking about the virtues of science, about the methodology of science, about the imperfect march of science toward the truth; in other words he was talking about what most of us would call “the philosophy of science”. There are also more than a few examples of what could fairly be called philosophical musings even in the technical “Feynman Lectures on Physics”. Even Tyson, when he was talking about the multiverse and quantum entanglement in “Cosmos”, was talking philosophically.
I think at least part of the problem here comes from semantics. Most physicists don’t explicitly try to falsify their hypotheses or apply positive heuristics or keep on looking for paradigm shifts in their daily work, but they are doing this unconsciously all the time. In many ways philosophy is simply a kind of meta, higher-level look at the way science is done. Now sometimes philosophers of science are guilty of thinking that science in fact fits the simple definitions engendered by this meta-level look, but that does not mean these frameworks are completely inapplicable to science, even if they may be messier than what they appear on paper. It’s a bit like saying that Newton’s laws are irrelevant to entities like black holes and chaotic systems because they lose their simple formulations in these domains.
My take on philosophy and physics is very simple: Philosophy begins where physics ends, and physics begins where philosophy ends. And I believe this applies to all of science.
I think there are plenty of episodes in the history of science that support this view. When science was still in a primitive state, almost all musings about it came first from Greek philosophers and later from Asian, Arab and European thinkers who were called “natural philosophers” for a reason. Anyone who contemplated the nature of earthly forces, wondered what the stars were made up of, thought about whether living things change or are always constant or pondered if there is life after death was doing philosophy. But he or she was also squarely thinking about science since we know for a fact that science has been able to answer these philosophical questions in the ensuing five hundred years. In this case philosophy stepped in where the era’s best science ended, and then science again stepped in when it had the capacity to answer these philosophical questions.
As another example, consider the deep philosophical questions about quantum mechanics pondered by the founders of quantum mechanics, profound thinkers like Bohr, Einstein and Heisenberg. These men were brilliant scientists but they were also bona fide philosophers; Heisenberg even wrote a readable book called “Physics and Philosophy“. But the reason why they were philosophers almost by default is because they understood that quantum mechanics was forcing a rethinking about the nature of reality itself that challenged our notions not just about concrete entities like electrons and photons but also about more ethereal ones like consciousness, objectivity and perception. Bohr and Heisenberg realized that they simply could not talk about these far flung implications of physics without speaking philosophically. In fact some of the most philosophical issues that they debated, such as quantum entanglement, were later validated through hard scientific experiments; thus, if nothing else, their philosophical arguments helped keep these important issues alive. Even among the postwar breed of physicists (many of whom were of the philosophy-averse, “shut up and calculate” type) there were prominent philosophers like John Wheeler and David Bohm, and they again realized the value of philosophy not as a tool for calculation or measurement but simply as a guide to thinking about hazy issues at the frontiers of science. In some sense it’s a good sign then when you start talking philosophically about a scientific issue; it means you are really at the cutting edge.
The fact of the matter – and a paradox of sorts – is that science grows fastest at its fringes, but it’s also at the fringes that it is most uncertain and unable to reach concrete conclusions. That is where philosophy steps in. You can think of philosophy as a kind of stand-in that’s exploring the farthest reaches of scientific thinking while science is maturing and retooling itself to understand the nature of reality. Tyson, Hawking, Krauss, and in fact all of us, are philosophers in that respect, and we should all feel the wiser for it.

First published on SciAmBlogs.

Oppenheimer’s Folly: On black holes, fundamental laws and pure and applied science

Einstein and Oppenheimer: both men in their later years dismissed black holes as anomalies, unaware that they contained some of the deepest mysteries of physics (Image: Alfred Eisenstaedt, LIFE magazine)
On September 1, 1939, the same day that Germany attacked Poland and started World War 2, a remarkable paper appeared in the pages of the journal Physical Review. In it J. Robert Oppenheimer and his student Hartland Snyder laid out the essential characteristics of what we today call the black hole. Building on work done by Subrahmanyan Chandrasekhar, Fritz Zwicky and Lev Landau, Oppenheimer and Snyder described how an infalling observer on the surface of an object whose mass exceeded a critical mass would appear to be in a state of perpetual free fall to an outsider. The paper was the culmination of two years of work and followed two other articles in the same journal.
Then Oppenheimer forgot all about it and never said anything about black holes for the rest of his life.
He had not worked on black holes before 1938, and he would never do so again. Ironically, it is this brief contribution to physics that is now widely considered to be Oppenheimer’s greatest, possibly enough to have earned him a Nobel Prize had he lived long enough to see experimental evidence for black holes show up with the advent of radio astronomy.
What happened? Oppenheimer’s lack of interest wasn’t just because he became the director of the Manhattan Project a few years later and got busy with building the atomic bomb. It also wasn’t because he despised the free-thinking and eccentric Zwicky who had laid the foundations for the field through the discovery of black holes’ parents – neutron stars. It wasn’t even because he achieved celebrity status after the war, became the most powerful scientist in the country and spent an inordinate amount of time consulting in Washington until his carefully orchestrated downfall in 1954. All these factors contributed, but the real reason was something else entirely – Oppenheimer just wasn’t interested in black holes. Even after his downfall, when he had plenty of time to devote to physics, he never talked or wrote about them. The creator of black holes basically did not think they mattered.
Oppenheimer’s rejection of one of the most fascinating implications of modern physics and one of the most enigmatic objects in the universe – and one he sired – is documented well by Freeman Dyson who tried to initiate conversations about the topic with him. Every time Dyson brought it up Oppenheimer would change the subject, almost as if he had disowned his own scientific children.
The reason, as attested to by Dyson and others who knew him, was that in his last few decades Oppenheimer was stricken by a disease which I call “fundamentalitis”. Fundamentalitis is a serious condition that causes its victims to believe that the only thing worth thinking about is the deep nature of reality as manifested through the fundamental laws of physics.
As Dyson put it:
“Oppenheimer in his later years believed that the only problem worthy of the attention of a serious theoretical physicist was the discovery of the fundamental equations of physics. Einstein certainly felt the same way. To discover the right equations was all that mattered. Once you had discovered the right equations, then the study of particular solutions of the equations would be a routine exercise for second-rate physicists or graduate students.”
Thus for Oppenheimer, black holes, which were particular solutions of general relativity, were mundane; the general theory itself was the real deal. In addition they were anomalies, ugly exceptions which were best ignored rather than studied. As Dyson mentions, unfortunately Oppenheimer was not the only one affected by this condition. Einstein, who spent his last few years in a futile search for a grand unified theory, was another. Like Oppenheimer he was uninterested in black holes, but he also went a step further by not believing in quantum mechanics. Einstein’s fundamentalitis was quite pathological indeed.
History proved that both Oppenheimer and Einstein were deeply mistaken about black holes and fundamental laws. The greatest irony is not just that black holes turned out to be very interesting; it is that in the last few decades the study of black holes has shed light on the very same fundamental laws that Einstein and Oppenheimer believed to be the only thing worth studying. The disowned children have come back to haunt the ghosts of their parents.
The study of black holes took off after the war, largely due to the efforts of John Wheeler in the US and Dennis Sciama in the UK. The new science of radio astronomy showed us that, far from being anomalies, black holes litter the landscape of the cosmos, including the center of the Milky Way. Within a decade of Oppenheimer’s death, the Israeli theorist Jacob Bekenstein proved a very deep relationship between thermodynamics and black hole physics. Stephen Hawking and Roger Penrose found out that black holes contain singularities; far from being ugly anomalies, black holes thus demonstrated Einstein’s general theory of relativity in all its glory. They also realized that a true understanding of singularities would involve the marriage of quantum mechanics and general relativity, a paradigm that’s as fundamental as any other in physics.
In perhaps the most exciting development in the field, Leonard Susskind, Hawking and others have found intimate connections between information theory and black holes, leading to the fascinating black hole firewall paradox that forges very deep connections between thermodynamics, quantum mechanics and general relativity. Black holes are even providing insights into computer science and computational complexity. The study of black holes is today as fundamental as the study of elementary particles in the 1950s.
Einstein and Oppenheimer could scarcely have imagined that this cornucopia of discoveries would come from an entity that they despised. But their wariness toward black holes is not only an example of missed opportunities or the fact that great minds can sometimes suffer from tunnel vision. I think the biggest lesson from the story of Oppenheimer and black holes is that what is considered ‘applied’ science can actually turn out to harbor deep fundamental mysteries. Both Oppenheimer and Einstein considered the study of black holes to be too applied, an examination of anomalies and specific solutions unworthy of thinkers thinking deep thoughts about the cosmos. But the delicious irony was that black holes in fact contained some of the deepest mysteries of the cosmos, forging unexpected connections between disparate disciplines and challenging the finest minds in the field. If only Oppenheimer and Einstein had been more open-minded.
The discovery of fundamental science in what is considered applied science is not unknown in the history of physics. For instance Max Planck was studying blackbody radiation, a relatively mundane and applied topic, but it was in blackbody radiation that the seeds of quantum theory were found. Similarly it was spectroscopy or the study of light emanating from atoms that led to the modern framework of quantum mechanics in the 1920s. Scores of similar examples abound in the history of physics; in a more recent case, it was studies in condensed matter physics that led physicist Philip Anderson to make significant contributions to symmetry breaking and the postulation of the existence of the Higgs boson. And in what is perhaps the most extreme example of an applied scientist making fundamental contributions, it was the investigation of cannons and heat engines by French engineer Sadi Carnot that led to a foundational law of science – the second law of thermodynamics.
Today many physicists are again engaged in a search for ultimate laws, with at least some of them thinking that these ultimate laws would be found within the framework of string theory. These physicists probably regard other parts of physics, and especially the applied ones, as unworthy of their great theoretical talents. For these physicists the story of Oppenheimer and black holes should serve as a cautionary tale. Nature is too clever to be constrained into narrow bins, and sometimes it is only by poking around in the most applied parts of science that one can see the gleam of fundamental principles.
As Einstein might have said had he known better, the distinction between the pure and the applied is often only a “stubbornly persistent illusion”. It’s an illusion that we must try hard to dispel.
First published on SciAm

Molecular modeling and physics: A tale of two disciplines

For its development physics relies both on time for understanding and on multiple other disciplines that make tools like the LHC possible (Image: Universe Today)
In my professional field of molecular modeling and drug discovery I often feel like an explorer who has arrived on the shores of a new continent with a very sketchy map in his pocket. There are untold wonders to be seen on the continent and the map certainly points to a productive direction in which to proceed, but the explorer can’t really stake a claim to the bounty which he knows exists at the bottom of the cave. He knows it is there and he can even see occasional glimpses of it but he cannot hold all of it in his hand, smell it, have his patron duke lock it up in his heavily guarded coffers. That is roughly what I feel when I am trying to simulate the behavior of drug molecules and proteins.
It is not uncommon to hear experimentalists from other disciplines and even modelers themselves grumbling about the unsatisfactory state of the discipline, and with good reason. Neither are the reasons entirely new: The techniques are based on an incomplete understanding of the behavior of complex biological systems at the molecular level. The techniques are parametrized based on a limited training set and are therefore not generally applicable. The techniques do a much better job of explaining than predicting (a valid point, although it’s easy to forget that explanation is as important in science as prediction).
To most of these critiques my fellow modelers and I plead guilty; and nothing advances a field like informed criticism. But I also have a few responses to the critiques, foremost among which is one that is often under-appreciated: On the scale of scientific revolutions, computational chemistry and molecular modeling are nascent fields, only just emerging from the cocoon of understanding. Or, to be pithier, give it some more time.
This may seem like a trivial point but it’s an important one and worth contemplating. Turning a scientific discipline from an unpolished, rough gem-in-the-making into the Hope Diamond takes time. To drive this point home I want to compare the state of molecular modeling – a fledgling science – with physics – perhaps the most mature science. Today physics has staked its claim as the most accurate and advanced science that we know. It has mapped everything from the most majestic reaches of the universe at its largest scale to the production of virtual particles inside the atom at the smallest scale. The accuracy of both calculations and experiments in physics can beggar belief; on one hand we can calculate the magnetic moment of the electron to more than ten decimal places using quantum electrodynamics (QED), and on the other hand we can measure the same parameter to the same degree of accuracy using ultrasensitive equipment.
But consider how long it took us to get there. Modern physics as a formal discipline could be assumed to have started with Isaac Newton in the mid 17th century. Newton was born in 1642. QED came of age in about 1952 or roughly 300 years later. So it took about 300 years for physics to go from the development of its basic mathematical machinery to divining the magnetic moment of the electron from first principles to a staggering level of accuracy. That’s a long time to mature.
Contrast this with computational chemistry, a discipline that spun off from the tree of quantum mechanics after World War 2. The application of the discipline to complex molecular entities like drugs and materials is even more recent, taking off in the 1980s. That’s thirty years ago. 30 years vs 300 years, and no wonder physics is so highly developed while molecular modeling is still learning how to walk. It would be like criticizing physics in 1700 for not being able to launch a rocket to the moon. A more direct comparison of modeling is with the discipline of synthetic chemistry  – a mainstay of drug discovery – that is now capable of making almost any molecule on demand. Synthetic chemistry roughly began in about 1828 when German chemist Friedrich Wöhler first synthesized urea from simple inorganic compounds. That’s a period of almost two hundred years for synthetic chemistry to mature.
But it’s not just the time required for a discipline to mature; it’s also the development of all the auxiliary sciences that play a crucial role in the evolution of a discipline that makes its culmination possible. Consider again the mature state of physics in, say, the 1950s. Before it could get to that stage, physics needed critical input from other disciplines, including engineering, electronics and chemistry. Where would physics have been without cloud chambers and Geiger counters, without cyclotrons and lasers, without high-quality ceramics and polymers? The point is that no science is an island, and the maturation of one particular field requires the maturation of a host of others. The same goes for the significant developments in mathematics – multivariate calculus, the theory of Lie groups, topology – that made progress in modern physics possible. Similarly synthetic chemistry would not have been possible had NMR spectroscopy and x-ray diffraction not provided the means to determine the structure of molecules.
Molecular modeling is similarly constrained by input from other sciences. Simulation really took off in the 80s and 90s with the rapid advances in computer software and hardware; before this period chemists and physicists had to come up with clever theoretical algorithms to calculate the properties of molecules simply because they did not have access to the proper firepower. Now consider what other disciplines modeling is dependent on – most notably chemistry. Without chemists being able to rapidly make molecules and provide both robust databases and predictive experiments, it would be impossible for modelers to validate their models. Modeling has also received a tremendous boost from the explosion of crystal structures of proteins engendered by genomics, molecular biology, synchrotron sources and computer software for data processing. The evolution of databases, data mining methods and the whole infrastructure of informatics has also really fed into the growth of modeling. One can even say without too much exaggeration that molecular modeling is ultimately a product of our ability to manipulate elemental silicon and produce it in an ultrapure form.
Thus, just like physics was dependent on mathematics, chemistry and engineering, modeling has been crucially dependent on biology, chemistry and computer science and technology. And in turn, compared to physics, these disciplines are relatively new too. Biology especially is still just taking off, and even now it cannot easily supply the kind of data which would be useful for building a robust model. Computer technology is very efficient, but still not efficient enough to really do quantum mechanical calculations on complex molecules in a high-throughput manner (I am still waiting for that quantum computer). And of course, we still don’t quite understand all the forces and factors that govern the binding of molecules to each other, and we don’t quite understand how to capture these factors in sanitized and user-friendly computer algorithms and graphical interfaces. It’s a bit like physics having to progress without having access to high-voltage sources, lasers, group theory and a proper understanding of the structure of the atomic nucleus.
Thus, thirty years is simply not enough for a field to claim a very significant degree of success. In fact, considering how new the field is and how many unknowns it is still dealing with, I would say that the field of molecular modeling is actually doing quite well. The fact that computer-aided molecular design was hyped during its inception does not make it any less useful, and it’s silly to think so. In the past twenty years we have at least had a good handle on the major challenges that we face and we have a reasonably good idea of how to proceed. In major and minor ways modeling continues to make useful contributions to the very complicated and unpredictable science and art of drug design and discovery. For a field that’s thirty years old I would say we aren’t doing so bad. And considering the history of science and technology as well as the success of human ingenuity in so many forms, I would say that the future is undoubtedly bright for molecular simulation and modeling. It’s a conviction that is as realistic as any other in science, and it’s one of the things that helps me get out of bed every morning. In science fortune always favors the patient, and modeling and simulation will be no different.

Falsification and chemistry: What’s the rub?

Roald Hoffmann has often emphasized the limitations of falsification for the everyday practice of chemistry
My last post on the role and limitations of falsification leads to a point I have made before: the fact that falsification is far less important for chemists than it is for, say, physicists or mathematicians. My take on the relative unimportance of falsification comes mainly from Roald Hoffmann, who is as much a philosopher of chemistry (and a poet) as a professional Nobel Prize-winning chemist. He has an excellent essay called “What would philosophy of science look like if chemists built it?“ in his collection of essays from last year (which I reviewed for Nature Chemistry here).

Hoffmann’s basic take on chemistry and the philosophy of science goes to the heart of what distinguishes chemistry from other sciences. Chemistry as it is practiced consists of two major activities – analysis and synthesis. The analysis part wherein you break down a substance into its constituent atoms and deduce their bonding, charge and spatial disposition is akin to the reductionist ethos of physics where you make sense of matter by taking it apart. The synthesis part of chemistry is highly creative and consists of building up complex molecules from simple counterparts. It is an activity that not only makes chemistry conceptually unique among the sciences but which has also contributed to the inestimable utility of the science in creating the material world around us. It is as much an art as a science, and one which makes chemistry very close to architecture as a practical pursuit.

Karl Popper wrote a well-known book called “Conjectures and Refutations” in which, among other things, he laid out his central philosophy of falsification. A related philosophy is the hypothetico-deductive approach to the scientific method, in which one formulates hypotheses and tests them. Here is what Hoffmann says about this way of thinking about science after analyzing a particular paper on the synthesis of fullerene molecules that can encapsulate hydrogen molecules. I am slightly rephrasing his words to make them more general:
“What theories are being tested (or falsified, for that matter) in a beautiful paper on synthesis? None, really, except that such and such a molecule can be constructed. The theory building in that is about as informative as the statement that an Archie Ammons poem tests a theory that the English language can be used to construct novel and perceptive insights into the way the world and our minds interact. The power of that tiny poem, the cleverness of the molecular surgery that a synthetic chemist performs in creating a molecule, just sashay around any analytical theory-testing.”
How is this creative act of synthesizing a novel substance exactly making and testing a hypothesis or theory? Now one may argue that even a synthesis holds the feet of certain theories of bonding (molecular orbital theory for instance) to the fire. It is certainly true that there is always some implicit assumption, some background knowledge, that underlies the synthesis of any molecule; the construction of the molecule would fail in fact if electrons did not flow in such and such a manner and if bonds did not form in such and such a manner, so of course you are testing elementary assumptions and theories about chemical bonding whenever you make any molecule. But why not go further then and say that you are testing the atomic hypothesis whenever you are conducting pretty much any experiment in chemistry, physics or biology? Or if you want to reach out even further and tread into philosophy, you could even say that you are testing the basic assumption behind science that natural laws dictate the behavior of material entities.

Clearly this definition of “falsification” is so general and so all-encompassing as to greatly vitiate the utility of the concept; try asking a synthetic chemist next time if the main purpose of his synthesis is to test or falsify molecular orbital theory. Drawing on the analogy between chemistry and architecture, it would be like saying that every time an architect is designing a new shape for a building she is hypothesizing and testing the law of gravity. Well, yes, and no.

In fact this debate again very much reminds me of the fondness for reductionism that physicists often bring to a debate about “higher order” disciplines like chemistry, economics or psychology. Molecules, people and societies are made out of atoms, they will say, which means that “atoms explain people”. I think most physicists themselves will agree as to the futility of such far-out explanations. The fact is that a concept is useful only if it has a direct, non-trivial relationship to the phenomenon which it purports to explain. Theories in philosophy, just like reductionist theories in physics, are far more relevant on a certain level than on others.

Synthesis is a creative activity, and while every synthesis implicitly and trivially tries to falsify some deep-seated fundamental law, the science and art of synthesis as a whole does not explicitly and non-trivially try to falsify any particular theory. That does not mean that falsification is absent or untrue, it just means that it’s rather irrelevant.

Falsification and its discontents

Karl Popper's grounding in the age of physics colored his views regarding the way science is done; falsification was one of the resulting casualties (Image: Wikipedia Commons)
Earlier this year the 'Big Questions' website Edge.org asked the following question: “What scientific idea is ready for retirement?” In response to the question, physicist Sean Carroll takes on an idea from the philosophy of science that’s usually considered a given: falsification. I mostly agree with Carroll’s take, although others seem to be unhappier, mainly because Carroll seems to be postulating that lack of falsification should not really make a dent in ideas like the multiverse and string theory.

I think falsification is one of those ideas which is a good guideline but which cannot be taken at face value and applied with abandon to every scientific paradigm or field. It’s also a good example of how ideas from the philosophy of science may have little to do with real science. Too much of anything is bad, especially when that anything is considered to be an inviolable truth.

It’s instructive to look at falsification’s father to understand the problems with the idea. Just like his successor Thomas Kuhn, Karl Popper was steeped in physics. He grew up during the heyday of the discipline and orbited the Vienna Circle, whose members (mostly mathematicians, physicists and philosophers) never really accepted him as part of the group. Just like Kuhn, Popper was heavily influenced by the revolutionary discoveries in physics during the 1920s and 30s, and this colored his philosophy of science.

Popper and Kuhn are both favorite examples of mine for illustrating how the philosophy of science has been biased toward physics and by physicists. The origin of falsification was simple: Popper realized that no amount of data can really prove a theory, but that even a single key data point can potentially disprove it. The two scientific paradigms which were reigning then – quantum mechanics and relativity – certainly conformed to his theory. Physics as practiced then was adept at making very precise, quantitative predictions about a variety of phenomena, from the electron’s charge to the perihelion of Mercury. Falsification certainly worked very well when applied to these theories. Sensibly Popper advocated it as a tool to distinguish science from non-science (and from nonsense).

But in 2014 falsification has become a much less reliable and more complicated beast. Let’s run through a list of its limitations and failures. For one thing, Popper’s idea that no amount of data can confirm a theory is a dictum that’s simply not obeyed by the majority of the world’s scientists. In practice a large amount of data does improve confidence in a theory. Scientists usually don’t need to confirm a theory one hundred percent in order to trust and use it; in most cases a theory only needs to be good enough. Thus the purported lack of confidence in a theory just because we are not one hundred percent sure of its validity is a philosophical fear, more pondered by grim professors haunting the halls of academia than by practical scientists performing experiments in the everyday world.

Nor does Popper’s exhortation that a single incisive data point slay a theory hold any water in many scientists’ minds. Whether because of pride in their creations or because of simple caution, most scientists don’t discard a theory the moment there’s an experiment which disagrees with its main conclusions. Maybe the apparatus is flawed, or maybe you have done the statistics wrong; there’s always something that can rescue a theory from death. But most frequently, it’s a simple tweaking of the theory that can save it. For instance, the highly unexpected discovery of CP violation did not require physicists to discard the theoretical framework of particle physics. They could easily save their quantum universe by introducing some further principles that accounted for the anomalous phenomenon. Science would be in trouble if scientists started abandoning theories the moment an experiment disagreed with them. Of course there are some cases where a single experiment can actually make or break a theory but fortunately for the sanity of its practitioners, there are few such cases in science.

Another reason why falsification has turned into a nebulous entity is because much of modern, cutting-edge science is based on models rather than theories. Models are both simpler and less rigorous than theories and they apply to specific, complicated situations which cannot be resolved from first principles. There may be multiple models that can account for the same piece of data. As a molecular modeler I am fully aware of how one can tweak models to fit the data. Sometimes this is justified, at other times it’s a sneaky way to avoid admitting failure. But whatever the case, the fact is that falsification of a model almost never kills it instantly since a model by its very nature is supposed to be more or less a fictional construct. Both climate models and molecular models can be manipulated to agree with the data when the data disagrees with their previous incarnation, a fact that gives many climate skeptics heartburn. The issue here is not whether such manipulation is justified, rather it’s that falsification is really a blunt tool to judge the validity of such models. As science becomes even more complex and model-driven, this failure of falsification to discriminate between competing models will become even more widespread.

The last problem with falsification is that since it was heavily influenced by Popper’s training in physics it simply fails to apply to many activities pursued by scientists in other fields, such as chemistry. The Nobel Prize-winning chemist Roald Hoffmann has argued in his recent book that falsification is almost irrelevant to many chemists whose main activity is to synthesize molecules. What hypothesis are you falsifying, exactly, when you are making a new drug to treat cancer or a new polymer to sense toxic environmental chemicals? Now you could get very vague and general and claim that every scientific experiment is a falsification experiment since it’s implicitly based on belief in some principle of science. But as they say, a theory that explains everything explains nothing, so such a catchall definition of falsification ceases to be useful.

All this being said, there is no doubt that falsification is a generally useful guideline for doing science. Like a few other commenters I am surprised that Carroll uses his critique of falsification to justify work in areas like string theory and the multiverse, because it seems to me that those are precisely the areas where testable and falsifiable predictions are badly needed because of lack of success. Perhaps Carroll is simply saying that too much of anything including falsification is bad. With that I resoundingly agree. In fact I would go further and contend that too much of philosophy is always bad for science; as they say, the philosophy of science is too important to be left to philosophers of science.

The simple physics behind a horrible tragedy: A tape measure with the energy of a 0.45 Colt bullet

From the NYT comes this really tragic story of a man who was killed when a tape measure from a construction site fell down 50 floors and struck him on the head. My deepest condolences to his family. 

The tape measure weighed a pound so it may seem strange that it led to such an irreversible and horrible fate. Sadly the man wasn't wearing a hard hat. And physics was not on his side: as we will see below, the tape measure that struck him was tantamount to a bullet.

We can use Newton's famed three equations of motion to determine the kinetic energy of the measure as it struck the unfortunate man's head. The three equations are:

v = u + at
s = ut + at^2/2
v^2 = u^2 + 2as

Here, u is the initial velocity, v is the final velocity, a is the acceleration, s is the distance covered and t is the time.

A moment's inspection reveals that out of the three equations it's most convenient to use the third one since it does not include time, a variable that's not directly apparent in the problem. It's important to convert all units to a single consistent system (here MKS) to get the right answer. We use the following values:

u = 0 since the tape measure started from a stationary state.
a = the acceleration due to gravity, g = 9.8 m/s^2
s = 400 ft = 121.9 meters
m = 1 pound = 0.45 kilograms

So v^2 comes out to be 2*9.8*121.9 = 2389.24, which we will round off to 2389.

Now the kinetic energy is just mv^2/2 so we multiply this number by the mass which is 0.45 kilograms and divide by 2.

2389*0.45/2 = 537.5, which we will round off to about 537 joules.
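
For anyone who wants to reproduce the arithmetic, here is a minimal Python sketch of the same calculation. It assumes, as above, a 400 ft (121.9 m) drop, a 1 lb (0.45 kg) mass and no air resistance:

# Impact speed and kinetic energy of the falling tape measure,
# using v^2 = u^2 + 2as with u = 0 (dropped from rest, air resistance ignored)
g = 9.8     # acceleration due to gravity, m/s^2
s = 121.9   # 400 ft in meters
m = 0.45    # 1 lb in kilograms (rounded)

v_squared = 2 * g * s                 # about 2389 m^2/s^2
kinetic_energy = 0.5 * m * v_squared  # about 537-538 J

print(f"v^2 = {v_squared:.0f} m^2/s^2")
print(f"kinetic energy = {kinetic_energy:.0f} J")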

How does this number compare to the kinetic energy of other deadly projectiles, say bullets? From this Wikipedia article on muzzle energy comes a comparison chart. 500 joules is the KE of a bullet from a 0.45 Colt pistol. The same Colt that was called "the gun that won the American West" and which was the US military's standard issue firearm until the end of the 19th century.

So the tape measure that ended a life in Jersey City today had a kinetic energy greater than that of a bullet from a 0.45 Colt. It was as if the man from whose belt the tape measure fell had shot the other man at point-blank range with a 0.45 Colt. I am assuming that even with a hard hat his chances of survival would have been close to zero - but perhaps not quite zero.

This simple calculation makes as good a case as any for safeguarding every piece of equipment at the top of a construction site, no matter how small or large, as if your life depended on it. Doing the same math for a quarter (weighing about 6 grams) gives an energy of only 7 joules, but bump up the weight to a third of a pound and the object acquires the same KE as a bullet from a 0.22LR pistol (about 160 joules). The nature of the impact would of course also depend on the material, its shape, its surface area (which is tiny for a bullet), the angle at which it strikes and other factors, but that would really be quibbling over trifles (as far as safety is concerned).
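
To see how the impact energy scales with mass, here is a small extension of the same sketch; the masses below are just the illustrative figures from the paragraph above, and air resistance is again ignored:

# Impact kinetic energy for a few objects dropped from 400 ft, starting from rest
g = 9.8
s = 121.9              # 400 ft in meters
v_squared = 2 * g * s  # the same for every object - independent of mass

objects = {
    "tape measure (1 lb)": 0.45,   # masses in kilograms
    "quarter (about 6 g)": 0.006,
    "third of a pound": 0.15,
}

for name, mass in objects.items():
    print(f"{name}: {0.5 * mass * v_squared:.0f} J")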

There are umpteen things on a construction site that are hard, rigid objects weighing at least a third of a pound; large keychains, travel mugs, small tools like screwdrivers and cell phones come to mind. Newton's equations tell us why it's worth making sure that each one of these common necessities of daily life is watched and secured as closely as possible. And please, please wear a hard hat.

Because everything changes when you are 400 ft from the ground.

P.S. I just took a look at my copy of Halliday and Resnick's classic physics textbook and realized that a much simpler way to do this would be to calculate the potential energy at the top - mgh. QED. This is what happens when you have not been doing physics formally for a while. It's still a good way to illustrate Newton's equations though.
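
And a two-line check of the potential-energy shortcut mentioned in the P.S., under the same assumptions:

# Potential energy at the top equals the kinetic energy at the bottom when air resistance is ignored
m, g, h = 0.45, 9.8, 121.9
print(f"mgh = {m * g * h:.0f} J")   # about 538 J - the same answer as the longer route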