Field of Science

A Christmas message from Steve Jobs for our friends in pharma: 2015 version

I had posted this at the end of 2011, and it's both fascinating and disconcerting that Steve Jobs's lament about product designers' focus on sales instead of product design leading to the decline of specific industries rings even truer for pharma in 2015 than it did in 2011. If anything, the string of mergers and layoffs in Big Pharma during the last four years has underscored even more what happens when an industry starts to worry more about perception and short-term shareholder value than its core reason for existence. Most would agree that that's not how you innovate and that's not how you solve the really hard problems. Let's hope Steve will have a different message for us in 2019.

I am at the end of Walter Isaacson's excellent biography of Steve Jobs and it's worth a read even if you think you know a lot about the man. Love him or hate him, it's hard to deny that Jobs was one of those who disturbed our universe in the last few decades. You can accuse him of a lot of things, but not of being a lackluster innovator or product designer.

The last chapter titled "Legacy" has a distillation of Jobs's words about innovation, creativity and the key to productive, sustainable companies. In that chapter I found this:

"I have my own theory about why decline happens at companies like IBM or Microsoft. The company does a great job, innovates and becomes a monopoly or close to it in some field, and then the quality of product becomes less important. The company starts valuing the great salesmen, because they're the ones who can move the needle on revenues, not the product engineers and designers. So the salespeople end up running the company. John Akers at IBM was a smart, eloquent, fantastic salesperson but he didn't know anything about product. The same thing happened at Xerox. When the sales guys run the company, the product guys don't matter so much, and a lot of them just turn off."

Jobs could be speaking about the modern pharmaceutical industry, where the "product designers" are of course the scientists. Although many factors have been responsible for the decline of innovation in modern pharma, one variable that correlates strongly with it is the replacement of product designers at the helm by salespeople and lawyers, beginning roughly in the early 90s.

There's a profound lesson in there somewhere. Not that wishes come true, but it's Christmas, and while we don't have the freedom to innovate, hold a stable job and work on what really matters, we do have the freedom to wish. So with this generous dose of wishful thinking, I wish you all a Merry Christmas.

The AAAS's nomination of Prof. Patrick Harran does a grave disservice to their stated mission

I hear through the Twitterverse that the American Association for the Advancement of Science (AAAS) has elected Prof. Patrick Harran of UCLA as a new fellow for 2015. I have to say that this choice leaves me both befuddled and disappointed.

Most of us will remember that Prof. Harran was charged with four felony counts for the laboratory death of undergraduate Sheri Sangji in December 2008. The case dragged on for several years, and in 2014 Prof. Harran and UCLA struck a deal with prosecutors that allowed him to avoid the charges in exchange for a fine and community service. The charges were not refuted; they were negotiated away.

Now I am certainly not of the opinion that someone like Prof. Harran should not be rehabilitated into the scientific community in some way or another. Nor do I think that he should never be recognized for his ongoing research. I am also not in a position to pass legal judgement on the degree of his culpability.

But that's not the point here at all. If the award were from, say, the ACS, for purely technical achievement, I would have been less miffed. As it happens, it's a recognition from the AAAS: the American Association for the Advancement of Science.

Advancement of Science does not just mean advancement of the technical aspects of science; it means advancement of the sum total of the scientific enterprise, a key component of which is the intersection of science with public appreciation and public policy. The AAAS was set up in 1848 with the express goal of not just recognizing scientific achievement but of facilitating scientific discourse in the public sphere. Past presidents of the AAAS have included Robert Millikan and Stephen Jay Gould, both of whom put a premium on scientists actively engaging with the public.

Let's take a look at the official mission of the AAAS as noted on their own website:

  • Enhance communication among scientists, engineers, and the public;
  • Promote and defend the integrity of science and its use;
  • Strengthen support for the science and technology enterprise;
  • Provide a voice for science on societal issues;
  • Promote the responsible use of science in public policy;
  • Strengthen and diversify the science and technology workforce;
  • Foster education in science and technology for everyone;
  • Increase public engagement with science and technology; and
  • Advance international cooperation in science.
In my opinion, the election of Prof. Harran goes against at least four of these goals: enhancement of communication between scientists and the public, strengthening support for the scientific enterprise, increasing public engagement with science and, most importantly, "promoting and defending the integrity of science and its use".

It's quite clear from the AAAS's mission statement that scientific responsibility and scientific outreach are two of its major aims. In fact one can argue that the AAAS, along with the NAS (National Academy of Sciences), is one of the two policy organs in this country which represent the public face of the scientific enterprise. For more than a century now the AAAS has been an integral part of the nationwide scientific dialogue involving scientists, the government and the people. Perhaps it's fitting in this regard that the current CEO of the AAAS is former New Jersey Congressman Rush Holt, one of the few politicians in this country who is not only finely attuned to the truth-seeking nature of science and its potential corruption but who was also, at one point, a serious practicing scientist himself.

All this makes the matter even murkier and harder to understand. How is the election of someone who is still under a cloud of suspicion for not having implemented responsible safety practices in his laboratory at a major university, and who has not pled guilty to any of the charges against him, a healthy reaffirmation of the dialogue between scientists and the public? How does this election help the AAAS in its stated mission of "promoting the integrity of science and its use" when Prof. Harran's actions and the charges against him clearly called that integrity into question?

In addition, the statement from a spokesperson of the AAAS saying that they were "unaware of the charges against Harran" is simply bizarre. The Harran and Sangji stories have been all over the news for more than seven years now; how much more exposure do they need for an organization of the size and reach of the AAAS to take notice?

The whole episode is deflating and incomprehensible. Again, this is not about Prof. Harran's merits purely as a scientist; in a sense it's not about him at all. It's about what the AAAS wants to be. Does it want to be an institution that purely recognizes technical achievement, or one that promotes scientific responsibility and outreach? If - as its nearly one hundred and seventy years of history indicate - it wants to be the latter, it can surely do better than this.

Note: For a comprehensive view of the details of the case as they unfolded, see C&EN reporter Jyllian Kemsley's outstanding coverage here.

Abraham Flexner, the Institute for Advanced Study, and the usefulness of useless knowledge

The most succinct encapsulation of the value of curiosity to practical pursuits came from Michael Faraday; when asked by William Gladstone, Chancellor of the Exchequer, about the utility of electricity, Faraday is purported to have replied, "One day, sir, you may tax it". Whether apocryphal or not, the remark accurately captures the far-reaching, often universal material benefits of the most fundamental scientific investigations. Faraday's basic research on the relationship between electricity and magnetism ushered in the electrical age as much as it shed light on one of Nature's deepest secrets.

Part of Faraday's sentiment saw its flowering in the establishment of the Institute for Advanced Study (IAS) in Princeton. The IAS was set up in 1933 by Abraham Flexner, a far-thinking educator and reformer, with the explicit purpose of providing a haven for the world's purest thinkers that was free of teaching, administrative duties and the myriad interferences of the modern university. Funds came from the wealthy Bamberger family, who did the world a favor by switching their monetary support from a medical school to the institute (the fortune itself came from their Newark department store, Bamberger's, which they had sold to Macy's shortly before the 1929 crash).

Flexner's paean to unadulterated pure thought was duly enshrined in the institute's founding by an invitation to Albert Einstein to serve as its first permanent member in 1933; other intellectual giants including John von Neumann, Hermann Weyl and Kurt Gödel followed suit, finding a safe refuge from a continent which seemed to have gone half-mad. Over the next eight decades the institute produced scores of leading thinkers and writers, many of whom have inaugurated new fields of science and the humanities and been associated with prestigious prizes like the Nobel Prize, the Fields Medal and the Pulitzer Prize. Later permanent members have included diplomat George Kennan, physicist Freeman Dyson and art historian Erwin Panofsky.

Flexner's pioneering thinking found its way into a 1939 issue of Harper's Magazine in the form of an article with the memorable title "The Usefulness of Useless Knowledge". The article still provides one of the clearest and most eloquent arguments for supporting thinking without palpable ends that I have come across. Its very beginning makes a telling case for science as a candle in the dark, one that must have shone like a gem on a mountaintop in the dark year of 1939:
"Is it not a curious fact that in a world steeped in irrational hatreds which threaten civilization, men and women old and young detach themselves wholly or partly from the angry current of daily life to devote themselves to the cultivation of beauty, to the extension of knowledge, to the cure of disease, to the amelioration of suffering, just as though fanatics were not simultaneously engaged in spreading pain, ugliness and suffering?"
Flexner then goes on to give the examples of half a dozen scientists including Maxwell, Faraday, Gauss, Ehrlich and Einstein whose passionate tinkering with science and mathematics led to pioneering applications in industry, medicine and transportation. Each of these scientists was pursuing research for its own sake, free of concerns regarding future application. Paul Ehrlich's case is especially instructive. Ehrlich, the father of both modern antibiotic research and drug discovery, was asked by his supervisor, Wilhelm von Waldeyer, why he spent so much time tinkering aimlessly with bacterial broths and petri dishes; Ehrlich simply replied, "Ich probiere", which can be loosely translated as "I am just fooling around". Waldeyer wisely left him to fool around, and Ehrlich ended up suggesting the existence of protein receptors for drugs and discovering Salvarsan, the first remedy for the scourge of syphilis.

The theme repeats throughout the history of science: Alexander Fleming mulling over unexplained bacterial genocide, Claude Shannon obsessed with the mathematization of information transfer, Edward Purcell and Isidor Rabi investigating the behavior of atoms in magnetic fields. Each of these studies led to momentous practical inventions: antibiotics, information technology and MRI, respectively.

Thus it should not be hard to make a case for why untrammeled intellectual wandering should be encouraged. Pure thinking does not, of course, lead to the next iPad or brain scanner in every single instance. But as Flexner eloquently put it, even the occasional benefits far outweigh the perceived waste:
"I am not for a moment suggesting that everything that goes on in laboratories will ultimately turn to some unexpected practical use or that an ultimate practical use is its actual justification. Much more am I pleading for the abolition of the word use, and for the freeing of the human spirit. To be sure, we shall free some harmless cranks. To be sure, we shall thus waste some precious dollars. But what is infinitely more important is that we shall be striking the shackles off the human mind and setting it free for the adventures which in our own day have, on the one hand, taken Hale and Rutherford and Einstein and their peers millions upon millions of miles into the uttermost realms of space, and on the other, loosed the boundless energy imprisoned in the atom."
It is clear from Flexner's words that the sheer motive power of pure thinking by an Einstein or a Bohr makes the accompanying modest wastage of funds, or the entry of the occasional crank, a mere trifle. To Flexner's credit, however, he also dispels the myth of the Great Man of Science, noting that practical discoveries very much rest on the shoulders of aimless meandering; a fact that makes the web of both pure and applied discovery a highly interconnected and interdependent one:
"Thus it becomes obvious that one must be wary in attributing scientific discovery wholly to any one person. Almost every discovery has a long and precarious history. Someone finds a bit here, another a bit there. A third step succeeds later and thus onward till a genius pieces the bits together and makes the decisive contribution. Science, like the Mississippi, begins in a tiny rivulet in the distant forest. Gradually other streams swell its volume. And the roaring river that bursts the dikes is formed from countless sources."
A deeper question though is why this relationship between idea and use exists, why even the purest of thought often leads to the most practical of inventions. In 2012 I attended the Lindau meeting of Nobel Laureates in Germany where the uncertain and yet immensely historically successful relationship between ideas and use was amply driven home. 

Physicist David Gross put his finger on the essential reason. He pointed out that "Nature is a reluctant mistress who shares her secrets reluctantly". The case for basic research thus boils down to a practical consideration: the recognition that a stubborn sea of scientific possibilities will yield its secrets only to the one who casts her net the widest, takes the biggest risks, makes the most unlikely and indirect connections, pursues a path of discovery for the sheer pleasure of it. Even from a strictly practical viewpoint, you encourage pure research because you want to maximize the odds of a hit in the face of uncertainty about the landscape of facts. It's simply a matter of statistical optimization.

At the Lindau meeting we could hear firsthand accounts by scientists of how the uselessness of their investigations turned into useful, sometimes wholly unexpected knowledge. There was Steven Chu talking about how his work on using lasers to cool atoms is now being used by spacecraft studying global warming by tracking the motion of glaciers down to millimeter accuracy. Interestingly, Chu also dispelled the popular notion that research in the exalted corridors of Bell Labs was entirely pie in the sky; as he noted, both the transistor and information theory arose from company concerns regarding communication through noisy channels and finding urgent replacements for vacuum tubes. Pure and applied research certainly don't need to be antagonists.

Others recounted their own stories. There was Alan Heeger, who on a whim mixed a conducting polymer with fullerenes, thus anticipating ultrafast electron transfer. And Hartmut Michel, the Frankfurt chemist known for not mincing words, told the audience how archeological applications of DNA technology are transforming our knowledge of the deepest mysteries of human origins. Michel also pointed out the important fact that one third or more of Nobel Prizes have been awarded for methods development, a pattern which indicates that technical engineering for its own ends is as much a part of science as idea generation. There is great art both in the fashioning of the most abstract equations and in the machining of the simplest tools of science.

The life and times of the successful scientists on stage made the immense spinoffs and unanticipated benefits of seemingly aimless research clear. And they did not even touch on the fact, amply documented by Flexner in his essay, that these aimless investigations have opened up windows into the workings of life and the universe that would have been inconceivable even a hundred years ago. Man is something more than the fruits of his labors, and that something is well worth preserving, even at the cost of billions of dollars and countless blind alleys.

The useful pondering of useless knowledge makes no claim to infallible wisdom or a steady stream of inventions. But what it promises is something far more precious; freedom from fear and the opportunity to see the light wherever it exists. Ich probiere.

This is an updated version of a past post.

Environmentalism is not climate change; climate change is not environmentalism

Freeman Dyson has an Op-Ed in the Boston Globe about the ongoing climate change talks in Paris in which he makes a cogent point - that environmentalism does not equal climate change and focus on climate change should not distract us from other environmental problems which may be largely unrelated to global warming.
"The environmental movement is a great force for good in the world, an alliance of billions of people determined to protect birds and butterflies and preserve the natural habitats that allow endangered species to survive. The environmental movement is a cause fit to fight for. There are many human activities that threaten the ecology of the planet. The environmental movement has done a great job of educating the public and working to heal the damage we have done to nature. I am a tree-hugger, in love with frogs and forests.  
But I am horrified to see the environmental movement hijacked by a bunch of climate fanatics, who have captured the attention of the public with scare stories. As a result, the public and the politicians believe that climate change is our most important environmental problem. More urgent and more real problems, such as the over-fishing of the oceans and the destruction of wild-life habitat on land, are neglected, while the environmental activists waste their time and energy ranting about climate change. The Paris meeting is a sad story of good intentions gone awry."
I rather agree with him that for many people the term environmentalism has become largely synonymous with climate change. However, the two are not the same, as becomes clear when he mentions overfishing, a real problem largely unconnected with climate change. I have recently been reading about the state of the world's fish in Paul Greenberg's excellent book "Four Fish". Greenberg focuses on the history and future of the four fish that have largely dominated the Western world's diet: salmon, sea bass, cod and tuna.

The book talks about how most of these fish were overharvested and almost driven to extinction by humans building dams and poisoning rivers with industrial effluent. Neither of these two issues is directly connected with climate change, but how often do we see high-profile global meetings which press everyone to deal with dam-building and water pollution on a war footing, let alone meetings led by presidents and prime ministers? 

Overfishing also brings another aspect of the climate change problem into sharp perspective. The only reason certain places in the US, such as the Salmon River in New York, are now full of fish is that those fish have been carefully cultivated in captivity and then released into the wild. Without this human intervention the Salmon River would have stayed barren. Then there is the revolution in fish farming, also described in "Four Fish", which has brought expensive fish to the plates of literally millions of people who were previously deprived of it. The equivalent of fish farming in the case of climate change is geoengineering. Geoengineering carries more risks but also more potential benefits than fish farming, and it too deserves serious consideration in a meeting on climate change. As far as I know, most of these high-profile meetings focus on prevention rather than on mitigation in the form of geoengineering. As the solutions to overfishing exemplify, any serious discussion of climate change should at least involve discussions of geoengineering.

Politically, the sad history of the climate change wars seems to me to have a simple explanation. Before 2004 or so, when the effects of climate change were not as well known and anti-science Republicans dominated the government, conservative deniers largely ruled the debate and the media. After 2004 or so, in part because of better data and in part because of relentless and wide publicity by people like Al Gore, the media started paying much more attention to the issue. I do not blame the left for going a little overboard in emphasizing the case for climate change at the beginning, when it was important to counter right-wing extremism on the topic. But since then the world has been sold on the issue, and there is no longer a need to be overzealous about it. Unfortunately, segments of the left have continued the crusade which they started in good faith, and many of them have now turned into hard-liners on the issue, extending the theory beyond where the evidence might lead and denouncing almost any opponent as motivated by politically enabled bigotry. This has led to the silencing of reasonable critics along with irrational deniers (see my previous post for a discussion of the distinction between deniers and skeptics).

But whatever our feelings about the political rancor surrounding climate change, I do share Dyson's concerns that it might distract us from problems that are at least equally important. I think environmentalism has been one of the most important movements for the common good in history, too important to be pigeonholed into one category. Climate change is not environmentalism; environmentalism is not climate change.

The second aspect of the op-ed is Dyson's contention that the science of climate change is not settled. Many people have attacked him for this contention in the past and they will no doubt attack him now, but both in conversations with him and in what he has written, I have found that most of his issues with the science are very general and not extreme at all. He says we don't understand a system as complex as the climate well enough to make detailed, accurate predictions, and he also says that much of the debate has unfortunately turned so political and rancorous that it has become hard for reasonable people to disagree even on the scientific details. These statements are both quite true and should ideally be uncontroversial. In addition, I have discussed parallels between molecular modeling and climate modeling with him; in both cases we see a healthy amount of uncertainty as well as an incomplete understanding of key components of the process. Water, for instance, seems to be a common culprit: we have as fuzzy an understanding of water in cloud formation as of water around the surfaces of proteins and small organic molecules.

I think climate change is a serious problem that deserves our attention. There is little doubt that we have injected unprecedented amounts of carbon dioxide into the atmosphere since the industrial revolution, and we would be naive to think that these have not impacted the climate at all. But the devil is in the details. It seems not just bad science but bad policy to me to hold high-profile meetings on the topic every year while neglecting other equally valid topics, all the time making detailed plans for mitigation (and not active intervention) in a system which is not understood well enough right now for detailed preemptive actions which will impact the lives of billions of people, especially in the developing world. 

To me it seems reasonable to think that climate change should be part of a larger portfolio we should invest in if we want to try to protect our future. As they say, it's always best to diversify your portfolio to hedge your bets against future risks.

The late Paul Kalanithi's book "When Breath Becomes Air" is devastating, edifying, eloquent and very real

I read this book in one sitting, long after the lights should have been turned off. I felt like not doing so would have been a disservice to Paul Kalanithi. After reading the book I felt stunned and hopeful in equal parts. Stunned because of the realization that someone as prodigiously talented and eloquent as Paul Kalanithi was taken from the world at such an early age. Hopeful because even in his brief life of thirty-six years he showcased what we as human beings are capable of in our best incarnations. His family can rest assured that he will live on through his book.

When Breath Becomes Air details Dr. Kalanithi's life as a neurosurgeon and his fight against advanced lung cancer. Even in his short career he achieved noteworthy recognition as a scholar, a surgeon, a scientist and now - posthumously - as a writer. The book is a tale of tribulations and frank reflections. Ultimately there's not much triumph in it in the traditional sense of the word, but there is a dogged, quiet resilience and a frank earthiness that endures long after the last word. The tribulations run through both Dr. Kalanithi's stellar career and his refusal to give in to the illness that ultimately consumed him.

The first part of the book could almost stand separately as an outstanding account of the coming of age of a neurosurgeon and writer. Dr. Kalanithi talks about his upbringing as the child of hardworking and inspiring Indian immigrant parents and his tenacious and passionate espousal of medicine and literature. He speaks lovingly of his relationship with his remarkable wife - also a doctor - whom he met in medical school and who played an outsized role in supporting him through everything he went through. He had a stunning and multifaceted career, studying biology and literature at Stanford, then history and philosophy of medicine at Cambridge, then medicine at Yale, and finally training in neurosurgery back at Stanford.

Along the way he became not just a neurosurgeon who worked grueling hours and tried to glimpse the very soul of his discipline, but also a persuasive writer. The mark of a man of letters is evident everywhere in the book, and quotes from Eliot, Beckett, Pope and Shakespeare make frequent appearances. Accounts of how Dr. Kalanithi wrestled with walking the line between objective medicine and compassionate humanity when it came to treating his patients give us an inside view of medicine as practiced at its most intimate level. Metaphors abound and the prose often soars: when describing how important it is to develop good surgical technique, he tells us that "Technical excellence was a moral requirement"; meanwhile, the overwhelming stress of late-night shifts, hundred-hour weeks and patients with acute trauma made him occasionally feel like he was "trapped in an endless jungle summer, wet with sweat, the rain of tears of the dying pouring down". This is writing that comes not from the brain or from the heart, but from the gut. When we lost Dr. Kalanithi we lost not only a great doctor but a great writer spun from the same cloth as Oliver Sacks and Atul Gawande.

It is in the second part of the book that the devastating tide of disease and death creeps in, even as Dr. Kalanithi is suddenly transformed from a doctor into a patient (Eliot helps him find the right words here: "At my back in a cold blast I hear, The rattle of bones, and a chuckle spread from ear to ear"). It must be slightly bizarre to be on the other side of the mirror and to intimately know everything that is happening to your body, and Dr. Kalanithi is brutally frank in communicating his disbelief, his hope and his understanding of his fatal disease. It's worth noting that this candid recognition permeates the entire account; almost nothing is sanitized. Science mingles with emotion as compassionate doctors, family and a battery of medications and tests become a mainstay of life. The doctor finds out that difficult past conversations with terminal patients can't really help him when he is one of them.

The painful uncertainty which Dr. Kalanithi documents - in particular the tyranny of statistics which makes it impossible to predict how a specific individual will react to cancer therapy - must sadly be familiar to anyone who has had experience with the disease. As he says, "One has a very different relationship with statistics when one becomes one". There are heartbreaking descriptions of how at one point the cancer seemed to have almost disappeared and how, after Dr. Kalanithi had again cautiously made plans for a hopeful future with his wife, it suddenly returned with a vengeance and became resistant to all drugs. There is no bravado in the story; as he says, the tumor was what it was and you simply experienced the feelings it brought to your mind and heart.

What makes the book so valuable is this ready admission of what terminal disease feels like, an admission that is nonetheless infused with wise acceptance, hope and a tenacious desire to live, work and love normally. In spite of the diagnosis, Dr. Kalanithi tries very hard - and succeeds admirably - to live a normal life. He returns to his surgery, he spends time with his family and, most importantly, he decides to have a child with his wife. His everyday struggles are a chronicle of the struggles that we will all face in some regard, and which thousands of people face on a daily basis. His constant partner in this struggle is his exemplary wife Lucy, whose epilogue is almost as eloquent as his own writing; I really hope that she picks up the baton where he left off.

As Lucy tells us in the epilogue, this is not some simple tale of a man who somehow "beats" a disease by refusing to give up. It is partly that, but it's much more, because it's a very human tale of failure and fear, of uncertainty and despair, of cynicism and anger. And yes, it is also a tale of scientific understanding, of battling a disease even in the face of uncertainty, of poetry and philosophy, of love and family, and of bequeathing a legacy to a two-year-old daughter who will soon understand the kind of man her father was and the heritage he left behind. It's as good a testament to Dr. Kalanithi's favorite Beckett quote as anything I can think of: "I can't go on. I'll go on".

Read this book; it's devastating and heartbreaking, inspiring and edifying. Most importantly, it's real.

Science as a messy human endeavor: The origin of the Woodward-Hoffmann rules

A 1973 slide from Roald Hoffmann displaying the 'Woodward Challenge': four mysterious reactions which spurred the Woodward-Hoffmann rules.
There is a remarkable and unique article by my friend and noted historian of chemistry Jeff Seeman that has just come out in the Journal of Organic Chemistry. The paper deals with seven pivotal months in 1964 when Robert Burns Woodward and Roald Hoffmann worked out the basic structure of what we now call the Woodward-Hoffmann rules.

Organic chemists need no introduction to these seminal rules, but for non-chemists it might suffice to say that they opened the door to an entire world of key chemical reactions - both in nature and in the chemist's test tube - whose essential details had hitherto stayed mysterious. These details include the probability of such reactions occurring in the first place and the stereochemistry (geometric disposition) of their molecular constituents. The rules were probably the first significant meld between theoretical and organic chemistry - ten commandments carried down from a mountain by Woodward and Hoffmann, pointing to the discovery of the promised land. The recognition of their importance was relatively quick: in 1981 Hoffmann shared a Nobel Prize for his contributions, and Woodward would have shared it too (it would have been his second) had he not suddenly passed away in 1979.

The first paper on these rules was submitted in November, 1964 and it came out in January, 1965. Jeff's piece essentially traces the conception of the rules in the previous six months or so. The article is very valuable for the light it sheds not just on the human aspect of scientific discovery but on its meandering, haphazard nature. It is one of the best testaments to science as a process of fits and starts that I have recently seen. Even from a strictly historical perspective Jeff's article is wholly unique. He had unprecedented access to Hoffmann in the form of daylong interviews at Cornell as well as unfettered access to Hoffmann's office. He has also interviewed many other important historical figures such as Andrew Streitwieser, George Whitesides and Jack Roberts who were working in physical organic chemistry at the time: insightful and amusing quotes from all these people (such as Whitesides's reference to the demise of a computer at MIT implying that he would now have to perform calculations using an abacus or his toes) litter the account. And there are copious and fascinating images of scores of notebook pages from Hoffmann's research as well as amusing and interesting letters to editors, lists of publications, scribblings in margins and other correspondence between friends and colleagues. Anyone who knows Jeff and has worked with him will be nodding their heads when they see how thorough the job here is.

The story begins when Woodward was already the world's most acclaimed organic chemist and Hoffmann was an up-and-coming theoretical chemistry postdoc at Harvard. Then as now, Hoffmann was the quintessential fox whose interests knew no bounds and who was eager to apply theoretical knowledge to almost any problem in chemistry that suited his interests. By then he had already developed Extended Hückel Theory (EHT), a method for calculating energies and orbitals of molecules which was the poster child for a model: imprecise, inaccurate, semiquantitative and yet pitched at the right level so that it could explain a variety of facts in chemistry. Woodward had already been interested in theory for a while and had worked on some theoretical constructs like the octant rule. It was a marriage made in heaven.

The most striking thing that emerges from Jeff's exhaustive and meticulous work is how relatively laid-back Woodward and Hoffmann's research was, in a sense. Hoffmann became aware of what was called the 'Woodward challenge' early in 1964 during an important meeting; this challenge involved the then mysterious stereochemical disposition of some well-known four- and six-electron reactions, reactions whose jargon ("electrocyclization", "conrotatory") has now turned into household banter for organic chemists. The conventional story would then have had both Woodward and Hoffmann burning the midnight oil and persisting doggedly for the next few months until they cracked the puzzle like warriors on a quest. This was far from the case. Both pursued other interests, often traveled, and only occasionally touched base. Why they did this is unclear, but then it's no more unclear than why humans do anything else for that matter. Once they realized that they could crack the puzzle, however, they kicked the door open. The paper that emerged in early 1965 was so long and comprehensive that they worried about its suitability for JACS in a cover letter to the editor.

Jeff's story also touches on a tantalizing conundrum whose solution many readers would have loved to know - E. J. Corey's potential role or the lack thereof in the conception of the rules, a role Corey unambiguously claimed in his 2004 Priestley Medal address, setting off a firestorm. Unfortunately Corey declined to talk to Jeff for this article (although he does dispute the timing of Woodward and Hoffmann's first meeting). His side of the story may never be known.

There is a lot of good stuff in the 45-page article, much of which I can only mention in passing here. Many of the actual mechanistic and technical details would be of interest only to organic chemists. But the broader message should not be lost on general readers: science is a messy, almost always unheroic, haphazard process. In addition, its real story is often warped by malleable memory, shifting egos, mundane oversights and blind alleys. For a long time science was described in bestselling books and newspaper articles as a determined, heroic march to the truth. These days there is an increasing number of books aimed at uncovering science's massive storehouses of failure and ignorance. But there is a third view of science - that of a journey to the truth which is more mundane, more complex, perpetually puzzling because of its mystery and perpetually comforting because of its human nature.

In this case even Jeff's exhaustive research leaves us with kaleidoscopic questions, questions that will likely remain unanswered. These pertain to Woodward and Hoffmann's occasional indifference to what was clearly a pivotal piece of research, to Corey's claim about the reactions, to the potential cross-fertilization between whatever else Woodward and Hoffmann were doing during this time and the project in question, and to the number of insights they might have imbibed from the community at large. Jeff conjectures answers to these questions, but even his probing mind provides no comforting conclusions, probably because there are none. The quote from Roald Hoffmann with which the piece ends captures the humanity quite well.
"Life is messy. Science is not all straight logic. And all scientists are not always logical. We're just scrabblers for knowledge and understanding."
Here's to the messy scrabblers.

Einstein, Oppenheimer, relativity and black holes: The curse of fundamentalitis

J. Robert Oppenheimer and Albert Einstein at the
Institute for Advanced Study in Princeton
A hundred years ago, in November, 1915, Albert Einstein sent a paper to the Prussian Academy of Sciences which was to become one of the great scientific papers of all time. In this paper Einstein published the full treatment of his so-called field equations, which describe the curvature of spacetime by matter; the paper heralded the culmination of his general theory of relativity.

Forty years later when Einstein died, the implications of that paper had completely changed our view of the cosmos. They had explained the anomalous precession of Mercury and predicted both the bending of starlight and, most importantly, the expansion of the universe. Einstein enthusiastically accepted all these conclusions. One conclusion he did not accept, however, and in which he in fact seemed wholly uninterested, was what the equations implied for regions of the cosmos where the pull of gravity is so strong that not even light can escape - a black hole. Today we know that black holes showcase Einstein's general theory of relativity in all its incandescent glory. In addition, black holes have become profound playgrounds for some of the deepest mysteries of the universe, including quantum mechanics, information theory and quantum gravity.

And yet Einstein seemed almost pathologically uninterested in them. He had heard about them from many of his colleagues; in particular his Princeton colleague John Wheeler had taken it upon himself to fully understand these strange objects. But Einstein stayed aloof. There was another physicist of the same persuasion whose office was only one floor away from his - J. Robert Oppenheimer, the architect of the atomic bomb and the Delphic director of the Institute for Advanced Study where Einstein worked. Oppenheimer in fact had been the first to mathematically describe these black holes in a seminal paper in 1939. Unfortunately Oppenheimer's paper was published on the same day that Hitler attacked Poland. In addition its importance was eclipsed by another article in the same issue of the journal Physical Review: an article by Niels Bohr and John Wheeler describing the mechanism of nuclear fission, a topic that would soon carry urgent and ominous portents for the fate of the world.

The more general phenomena of gravitational contraction and collapse that black holes exhibit seemed doomed to obscurity; in a twist of fate, those who truly appreciated them stayed obscure, while those who were influential ignored them. Among the former were Subrahmanyan Chandrasekhar and Fritz Zwicky; among the latter were Oppenheimer, Einstein and Arthur Eddington. In the early 1930s, Chandrasekhar had discovered a limiting mass for white dwarfs beyond which a white dwarf could no longer thwart its inward gravitational pull. He was roundly scolded by Eddington, one of the leading astronomers of his time, who stubbornly refused to believe that nature would behave in such a pathological manner. Knowing Eddington's influence in the international community of astronomers, Chandrasekhar wisely abandoned his pursuit until others validated it much later.

The Swiss astronomer Fritz Zwicky was a more pugnacious character, and in the 1930s he and his Caltech colleague Walter Baade published an account of what we now call a neutron star as a plausible explanation for the tremendous energy powering the luminous explosion of a supernova. Zwicky's prickly and slightly paranoid personality distanced him from other mainstream scientists, and his neutron stars were taken seriously by only a few scientists, among them the famous Soviet physicist Lev Landau. It was building on Landau's work in 1938 and 1939 that Oppenheimer and his students published three landmark papers which pushed the envelope on neutron stars and asked what would be the logical, extreme conclusion for a star completely unable to support itself against its own gravity. In the 1939 paper in particular, Oppenheimer and his student Hartland Snyder presented several innovations, among them the difference between time as measured by an external observer outside a black hole's so-called event horizon and by a free-falling observer crossing it.

Then World War II intervened. Einstein got busy signing letters to President Franklin Roosevelt warning him of Germany's efforts to acquire nuclear weapons while Oppenheimer got busy leading the Manhattan Project. When 1945 dawned both of them had forgotten about the key theoretical insights regarding black holes which they had produced before the war. It was a trio of exceptional scientists - Dennis Sciama in the UK, John Wheeler at Princeton and Yakov Zeldovich in the USSR - who got interested in black holes after the war and pioneered research into them.

What is strangest about the history of black holes is Einstein and Oppenheimer's utter indifference to their existence. What exactly happened? Oppenheimer’s lack of interest wasn’t just because he despised the free-thinking and eccentric Zwicky who had laid the foundations for the field through the discovery of black holes' parents - neutron stars. It wasn’t even because he achieved celebrity status after the war, became the most powerful scientist in the country and spent an inordinate amount of time consulting in Washington until his carefully orchestrated downfall in 1954. All these factors contributed, but the real reason was something else entirely – Oppenheimer simply wasn’t interested in black holes. Even after his downfall, when he had plenty of time to devote to physics, he never talked or wrote about them. He spent countless hours thinking about quantum field theory and particle physics, but not a minute thinking about black holes. The creator of black holes basically did not think they mattered.

Oppenheimer’s rejection of one of the most fascinating implications of modern physics and one of the most enigmatic objects in the universe - and one he sired - is documented well by Freeman Dyson who tried to initiate conversations about the topic with him. Every time Dyson brought it up Oppenheimer would change the subject, almost as if he had disowned his own scientific children.

The reason, as attested to by Dyson and others who knew him, was that in his last few decades Oppenheimer was stricken by a disease which I call “fundamentalitis”. Fundamentalitis is a serious condition that causes its victims to believe that the only thing worth thinking about is the deep nature of reality as manifested through the fundamental laws of physics.
As Dyson put it:
“Oppenheimer in his later years believed that the only problem worthy of the attention of a serious theoretical physicist was the discovery of the fundamental equations of physics. Einstein certainly felt the same way. To discover the right equations was all that mattered. Once you had discovered the right equations, then the study of particular solutions of the equations would be a routine exercise for second-rate physicists or graduate students.”
Thus for Oppenheimer, black holes, which were particular solutions of general relativity, were mundane; the general theory itself was the real deal. In addition, they were anomalies, ugly exceptions which were best ignored rather than studied. As Dyson mentions, Oppenheimer was unfortunately not the only one affected by this condition. Einstein, who spent his last few years in a futile search for a grand unified theory, was another. Like Oppenheimer he was uninterested in black holes, but he also went a step further by not believing in quantum mechanics. Einstein's fundamentalitis was quite pathological indeed.

History proved that both Oppenheimer and Einstein were deeply mistaken about black holes and fundamental laws. The greatest irony is not that black holes turned out to be very interesting; it is that in the last few decades the study of black holes has shed light on the very same fundamental laws that Einstein and Oppenheimer believed to be the only things worth studying. The disowned children have come back to haunt the ghosts of their parents.

As mentioned earlier, black holes took off after the war largely due to the efforts of a handful of scientists in the United States, the Soviet Union and England. But it was experimental developments which truly brought their study to the forefront. The new science of radio astronomy showed us that, far from being anomalies, black holes litter the landscape of the cosmos, including the center of the Milky Way. A few years after Oppenheimer's death, the Israeli theorist Jacob Bekenstein uncovered a very deep relationship between thermodynamics and black hole physics. Stephen Hawking and Roger Penrose showed that black holes contain singularities; far from being ugly exceptions, black holes thus demonstrated Einstein's general theory of relativity in all its glory. They also realized that a true understanding of singularities would involve the marriage of quantum mechanics and general relativity, a paradigm that's as fundamental as any other in physics.

In perhaps the most exciting development in the field, Leonard Susskind, Hawking and others have found intimate connections between information theory and black holes, leading to the fascinating black hole firewall paradox that forges very deep connections between thermodynamics, quantum mechanics and general relativity. Black holes are even providing insights into computer science and computational complexity. The study of black holes is today as fundamental as the study of elementary particles was in the 1950s.

Einstein and Oppenheimer could scarcely have imagined that this cornucopia of discoveries would come from an entity that they had ignored. But their wariness toward black holes is not only an example of missed opportunities or of the fact that great minds can sometimes suffer from tunnel vision. I think the biggest lesson from the story of Oppenheimer and black holes is that what is considered 'applied' science can actually turn out to harbor deep fundamental mysteries. Both Oppenheimer and Einstein considered the study of black holes too applied - an examination of anomalies and specific solutions unworthy of thinkers thinking deep thoughts about the cosmos. But the delicious irony was that black holes in fact contained some of the deepest mysteries of the cosmos, forging unexpected connections between disparate disciplines and challenging the finest minds in the field. If only Oppenheimer and Einstein had been more open-minded.

The discovery of fundamental science in what is considered applied science is not unknown in the history of physics. Max Planck was studying blackbody radiation, a relatively mundane and applied topic, but it was in blackbody radiation that the seeds of quantum theory were found. Similarly, it was spectroscopy - the study of light emitted by atoms - that led to the modern framework of quantum mechanics in the 1920s. Similar examples abound in the history of physics; in a more recent case, it was studies in condensed matter physics that led the physicist Philip Anderson to make significant contributions to symmetry breaking and the postulation of the existence of the Higgs boson. And in what is perhaps the most extreme example of an applied scientist making fundamental contributions, it was the investigation of heat engines by the French engineer Sadi Carnot that led to a foundational law of science - the second law of thermodynamics.

Today many physicists are again engaged in a search for ultimate laws, with at least some of them thinking that these ultimate laws will be found within the framework of string theory. These physicists probably regard other parts of physics, and especially the applied ones, as unworthy of their great theoretical talents. For these physicists the story of Oppenheimer and black holes should serve as a cautionary tale. Nature is too clever to be constrained into narrow bins, and sometimes it is only by poking around in the most applied parts of science that one can see the gleam of fundamental principles.

As Einstein might have said had he known better, the distinction between the pure and the applied is often only a "stubbornly persistent illusion". It's an illusion that we must try hard to dispel.

This is a revised version of an old post which I wrote on the occasion of the one-hundredth anniversary of the publication of Einstein's field equations.

The death of new reactions in medicinal chemistry?

JFK to medicinal chemists: Get out of your comfort zone and
try out new reactions; not because it's easy, but because it's hard
Since I was discussing the "death of medicinal chemistry" the other day (what's the use of having your own blog if you cannot enjoy some dramatic license every now and then), here's a very interesting and comprehensive analysis in J. Med. Chem. which has a direct bearing on that discussion. The authors, Dean Brown and Jonas Boström of AstraZeneca, have done a study of the most common reactions used by medicinal chemists, using a representative set of papers published in the Journal of Medicinal Chemistry in the years 1984 and 2014. Their depressing conclusion is that the same set of about 20 reactions populates the toolkit of medicinal chemists in both years. In other words, if you can run those 20 chemical reactions well, then you could be as competent a medicinal chemist in the year 2015 as in 1984, at least on a synthetic level.

In fact the picture is probably more depressing than that. The main difference between the medicinal chemistry toolkit in 1984 and in 2014 is the rise of the Suzuki-Miyaura cross-coupling reaction and of amide bond formation reactions, which overwhelmingly dominate modern medicinal chemistry. The authors also look at overall reactions vs "production reactions", that is, the final steps which generate the product of interest in a drug discovery project. Again, most of the production reactions are dominated by the Suzuki reaction and the Buchwald-Hartwig reaction. Reactions like phenol alkylation, used more frequently in 2014 than in 1984, partly point to the fact that we are now more attuned to unfavorable metabolic reactions like glucuronidation, which necessitate the capping of free phenolic hydroxyl groups.

There is a lot of material to chew on in this analysis, and it deserves a close look. Not surprisingly, there is a host of important and interesting factors - reagent and raw material availability, ease of synthesis (especially outsourcing) and a better (or flawed and exaggerated) understanding of druglike character - that have dictated the relatively small differences in reaction use over the last thirty years. In addition there is also a thought-provoking analysis of differences in reactions used for making natural products vs druglike compounds. Surprisingly, the authors find that reactions like cross-coupling which heavily populate the synthesis of druglike compounds are not as frequently encountered in natural product synthesis; among the top 20 reactions used in medicinal chemistry, few are used in natural product synthesis.

There is a thicket of numbers in the paper, along with a frequency analysis of changes in reaction type and functional group type. But none of that should blind us to the central take-home message: in terms of innovation, at least as measured by the development and use of new reactions, medicinal chemistry has been rather stagnant over the last thirty years. Why would this be so? Several factors come to mind, some of which are discussed in the paper, and most of them don't speak well of the synthetic aspects of the drug discovery enterprise.
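To make the kind of tally the authors perform concrete, here is a minimal Python sketch of such a frequency analysis. The reaction lists below are entirely made up for illustration - they are not the paper's dataset - but the counting logic mirrors what a "most common reactions" comparison between two publication years looks like:

```python
from collections import Counter

# Hypothetical, illustrative reaction lists -- NOT the actual data from
# the Brown and Bostrom J. Med. Chem. analysis; each entry stands for one
# reaction step extracted from a paper in that year.
reactions_1984 = ["amide formation", "N-alkylation", "ester hydrolysis",
                  "amide formation", "reductive amination", "N-alkylation",
                  "amide formation"]
reactions_2014 = ["Suzuki coupling", "amide formation", "Suzuki coupling",
                  "Buchwald-Hartwig amination", "amide formation",
                  "Suzuki coupling", "amide formation", "Suzuki coupling"]

def top_reactions(reactions, n=3):
    """Return the n most common reactions with their fractional share."""
    counts = Counter(reactions)
    total = len(reactions)
    return [(name, count / total) for name, count in counts.most_common(n)]

print("1984:", top_reactions(reactions_1984))
print("2014:", top_reactions(reactions_2014))
```

Run over real reaction lists extracted from the 1984 and 2014 J. Med. Chem. papers, this is the sort of tally that reveals how a handful of reactions - cross-couplings and amide formations above all - dominate the modern toolkit.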

As the authors point out, cross-coupling reactions are easy to set up and run, and there is a wide variety of catalytic reagents that allows for robust reaction conditions and substrate variability. Not surprisingly, these reactions are also disproportionately easy to outsource. This means that they can produce a lot of molecules fast, but as common sense indicates and the paper confirms, more is not better. In my last post I talked about the fact that one reason wages have been stagnant in medicinal chemistry is precisely because so much of medicinal chemistry synthesis has become cheap and easy, and coupling chemistry is a good reason why this is so.

One factor that the paper does not explicitly talk about but which I think is relevant is the focus on certain target classes which has dictated the choice of specific reactions over the last two decades or so. For example, a comprehensive and thought-provoking analysis by Murcko and Walters from 2012 speculated that a big emphasis on kinase inhibitors in the last fifteen years or so has led to a proliferation of coupling reactions, since biaryls are quite common among kinase inhibitor scaffolds. The current paper validates this speculation and in fact demonstrates that para-disubstituted biphenyls are among the most common of all modern medicinal chemistry compounds.

Another damning critique that the paper points to in its discussion of the limited toolkit of medicinal chemistry reactions is our obsession with druglike character and with this rule and that metric for defining such character - a community pastime we have collectively been preoccupied with roughly since 1997, when Lipinski published his paper. The fact of the matter is that the 20 reactions which medicinal chemists hold so dear are quite amenable to producing molecules that fit their favorite definition of druglikeness: flat, relatively characterless, friendly to high-throughput synthesis, and cheap. Once you narrowly define what your target or compound space is, you also limit the number of ways to access that space.

That problem becomes clear when the authors compare their medicinal chemistry space to natural product space, both in terms of the reactions used and the final products. It's well known that natural products have more sp3 character and more chiral centers, and reactions like the Suzuki coupling are not going to make too many of those. In addition, the authors perform a computational analysis of 3D shapes on their typical medicinal chemistry dataset. This analysis can have a subjective component to it, but what's clear not just from this calculation but from previous ones is that what we call druglike molecules occupy a very different shape distribution from more complex natural products.

For instance, a paper also from AZ that just came out demonstrated that many compounds occupying "non-Lipinski" space have spherelike and disklike shapes that are not seen in more linear compounds. In that context, the para-disubstituted biphenyls which dot the landscape of modern druglike molecules are the epitome of linear compounds. As the authors show us, there is thus a direct correlation between the kinds of reactions commonly used at the bench today and the shapes and character of the compounds they produce. And all this constrained thinking is producing a very decided lack of diversity in the kinds of compounds that we are shuttling into clinical trials. The focus here may be on synthetic reactions in particular, but the problem affects all of us, and it is at least part of the answer to why medicinal chemists don't seem to see better days.

Taken together, the analyses in this review throw down the gauntlet to the modern medicinal chemist and ask a provocative question: "Why are you taking the easy way out and making compounds that are easy to make? Why aren't you trying to expand the scope of novel reactions and exploring uncharted chemical space?" To which we may also add, "Why are you letting your constrained views of druglike space and metrics dictate the kinds of reactions you use and the molecules they result in?"

As they say, however, it's always better to light a candle than to just curse the darkness (which can be quite valuable in itself). The authors showcase several new and interesting reactions - ring-closing cross-metathesis, C-H arylation, fluorination, photoredox catalysis - which can produce a wide variety of interesting and novel compounds that challenge traditional druglike space and promise to interrogate novel classes of targets. Expanding the scope of these reactions is not easy and will almost certainly result in some head-scratchers, but that may be the only way we innovate. I might also add that the advent of new technology such as DNA-encoded library technology also promises to change the fundamental character of our compounds.

This paper is clearly a challenge to medicinal chemists and in fact is pointing out an embarrassing truth for our entire community: cost, convenience, job instability, poor management and plain malaise have made us take the easy way out and keep on circling back to a limited palette of chemical reactions that ultimately impact every aspect of the drug discovery enterprise. Some of these factors are unfortunate and understandable, but others are less so, especially if they're negatively affecting our ability to hit new targets, to explore novel chemical space, and ultimately to discover new drugs for important diseases which kill people. What the paper is saying is that we can do better.

More than fifty years ago John F. Kennedy made a plea to the country to work hard on getting a man to the moon and bringing him back, "not because it's easy, but because it's hard". I would say analyses like this one ask the same of medicinal chemists and drug discovery scientists in general. It's a plea we should take to heart - perhaps the rest of JFK's exhortation will motivate us:

"We choose to go to the moon. We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win."

The death of medicinal chemistry?

E. J. Corey's rational methods of chemical synthesis
revolutionized organic chemistry, but they also may have been
responsible for setting off unintended explosions in the medicinal
chemistry job market
Chemjobber points us to a discussion hosted by Michael Gilman, CEO of Padlock Therapeutics, on Reddit in which he laments the fact that medicinal chemistry has now become so commoditized that it's going to be unlikely for wages to rise in that field. Here's what he had to say.
"I would add that, unfortunately, medicinal chemistry is increasingly regarded as a commodity in the life sciences field. And, worse, it's subject to substantial price competition from CROs in Asia. That -- and the ongoing hemorrhaging of jobs from large pharma companies -- is making jobs for bench-level chemists a bit more scarce. I worry, though, because it's the bench-level chemists who grow up and gather the experience to become effective managers of out-sourced chemistry, and I'm concerned that we may be losing that next generation of great drug discovery chemists."
I think he's absolutely right, and that's partly what has been responsible for the woes of the pharmaceutical and biotech industry over the last few years. But as I noted in the comments section of CJ's blog, the historian of science in me thinks that this is, ironically, a validation of organic synthesis as a highly developed field whose methods and ideas have now become so standardized that you need very few specialized practitioners to put them into practice.

I have written about this historical aspect of the field before. The point is that synthesis was undeveloped, virgin territory when scientists like R. B. Woodward, E. J. Corey and Carl Djerassi worked in it in the 1950s and 60s. They were spectacularly successful. For instance, when Woodward synthesized complex substances like strychnine (strychnine!) and reserpine, many chemists around the world did not believe that we could actually make molecules as complicated as these. Forget about standardization - even creative chemists found it quite hard to make molecules like nucleic acids and peptides which we take for granted now.

It was the hard work of brilliant individuals like Woodward, combined with the amazing proliferation of techniques for structure determination and purification (NMR, crystallography, HPLC etc.), that brought the vast majority of molecules under the purview of chemists who were distinctly non-Woodwardian in their abilities and creative reach. Corey especially turned the field into a more or less precisely predictive science that could succumb to rational analysis. In the 1990s and 2000s, with the advent of palladium-catalyzed coupling chemistry, more sophisticated instrumentation and combinatorial chemistry, even callow chemists could make molecules which would have taken their highly capable peers months or years to make in the 60s. As just an example, today in small biotech companies, interns can learn to make in three months the same molecules that bench chemists with PhDs are making. The bench PhDs presumably have better powers of critical thinking and planning, but the gap has still significantly narrowed. The situation may reach a fever pitch with the development of automated methods of synthesis. The bottom line is that synthesis is no longer the stumbling block for the discovery of new drugs; the stumbling block is now largely our understanding of biology and toxicity.

Because organic synthesis and much of medicinal chemistry have now become victims of their own success, tame creatures that can be harnessed into workable products even by modestly trained chemists in India or China, the scenario pointed out by Dr. Gilman now involves a few creative and talented medicinal chemists at the top directing the work of a large number of less talented chemists around the world (that's certainly the case at my workplace). From an economic standpoint it makes sense that only these few people at the top command the highest wages while those under them make a more modest living; the average wage has thus been lowered. That's great news for the average bench chemist in Bangalore but not for the ambitious medicinal chemist in Boston. And as Dr. Gilman says, considering the layoffs in pharma and biotech, it's also not great news for the field in general.

It's interesting to contemplate how this situation mirrors the one in computer science, especially concerning the development of the customized code that powers our laptops and workstations; it's precisely why companies like Microsoft and Google can outsource so much of their software development to other countries. Coding has become quite standardized, and while there will always be a small niche demand for novel code, this will be limited to a small fraction at the top who can then shower the rest of the hoi polloi with the fruits of their labors. The vast masses who do coding, meanwhile, will never make the kind of money that the skill set commanded fifteen years ago. Ditto for med chem. Whenever a discipline becomes too mature, it sadly becomes a victim of its own success. That's why it's best to enter a field when the supply is still tight and the low-hanging fruit is still ripe for the taking. In the tech sector, data science is such a field right now, but you can bet that even the hallowed position of data scientist is not going to stay golden for too long once that skill set too becomes largely automated and standardized.

What, then, will happen to the discipline of medicinal chemistry? The simple truth is that when it comes to cushy positions that pay extremely well, we'll still need medicinal chemists, but only a few. In addition, medicinal chemists will have to shift their focus from synthesis to a much more holistic approach; thus medicinal chemistry, at least as traditionally conceived with a focus on synthesis and rapid access to chemical analogs, will soon see its demise. Most medicinal chemists are still reluctant to think of themselves as anything other than synthetic chemists, but this situation will have to change. Ironically, Wikipedia seems to be ahead of the times here, since its entry on medicinal chemistry encompasses pharmacology, toxicology, structural and chemical biology and computer-aided drug design. It would be a good blueprint for the future.

In particular, medicinal chemistry in its most common practice—focusing on small organic molecules—encompasses synthetic organic chemistry and aspects of natural products and computational chemistry in close combination with chemical biology, enzymology and structural biology, together aiming at the discovery and development of new therapeutic agents. Practically speaking, it involves chemical aspects of identification, and then systematic, thorough synthetic alteration of new chemical entities to make them suitable for therapeutic use. It includes synthetic and computational aspects of the study of existing drugs and agents in development in relation to their bioactivities (biological activities and properties), i.e., understanding their structure-activity relationships (SAR). Pharmaceutical chemistry is focused on quality aspects of medicines and aims to assure fitness for purpose of medicinal products.

To escape the tyranny of the success of synthetic chemistry, the accomplished medicinal chemist of the future will thus likely be someone whose talents are not limited to synthesis but whose skill set more broadly encompasses molecular design and properties. While synthesis has become standardized, many other disciplines in drug discovery, like computer-aided drug design, pharmacology, assay development and toxicology, have not. There is still plenty of scope for original breakthroughs and standardization in these unruly areas, and there's even more scope for traditional medicinal chemists to break off chunks of those fields and weave them into the fabric of their own philosophy in novel ways, perhaps by working with these other practitioners to incorporate "higher-level" properties like metabolic stability, permeability and clearance into their own early designs. This takes me back to a post I wrote on an article by George Whitesides which argued that chemists should move "beyond the molecule" and toward uses and properties: Whitesides could have been talking about contemporary medicinal chemistry here.

The integration of downstream drug discovery disciplines into the early stages of synthesis and hit and lead discovery will itself be a novel kind of science and art whose details need to be worked out; that art by itself holds promising dividends for adventurous explorers. But the mandate for the 20th-century medicinal chemist working in the 21st still rings true. Medicinal chemists who can borrow from myriad other disciplines and use that knowledge in their synthetic schemes, thus broadening their expertise beyond the tranquil waters of pure synthesis into the roiling seas of biological complexity, will be in far more demand both professionally and financially. Following Darwin, the adage they should adopt is to be not the strongest or the quickest synthetically but the most adaptable and responsive to change.

For medicinal chemistry to thrive, its very definition will have to change.