Field of Science

On Michael Dewar, Robin Collingwood and models in chemistry

If you want to read memoirs by famous organic chemists that describe the pleasures and pains of doing real science, you can do no better than reach for historian Jeff Seeman's marvelous and unique set of edited autobiographies published in the 1990s by the ACS, titled "Profiles, Pathways and Dreams". These include volumes penned by such bigwigs as Jack Roberts, Derek Barton, Ernest Eliel, William Johnson, Carl Djerassi and Vladimir Prelog. All of them are eminently informative, inspiring and readable, and together they chart half a century of organic chemistry during its golden age.

Even among these volumes Michael Dewar's memoirs stand out. Dewar was a truly brilliant man with a vast and in-depth knowledge not just of many fields of chemistry but of other branches of human knowledge. He had an illustrious career starting out at Winchester and Oxford and ending at the University of Texas at Austin. A man who was equally at home with dense mathematical calculations and the intricacies of organic synthesis, Dewar contributed many ideas to chemistry, but two in particular demonstrate the breadth of his abilities. Firstly, when he was still doing synthesis, he was the first to propose the existence of an aromatic tropolone ring in the structure of colchicine, opening the door to the world of non-benzenoid aromatic compounds. Secondly, he was the father of semiempirical molecular orbital theory, which he described in a remarkable set of back-to-back, highly mathematical papers in JACS and then in a book. His autobiography (aptly titled "A Semiempirical Life") is the thickest of the lot and provides him with an opportunity to hold forth on a variety of topics, from vitamin C to the expansion of the universe.

Unfortunately Dewar's undoubtedly substantial scientific reputation was somewhat blemished by his pugnacious personality and stubbornness, qualities that enraged his enemies and delighted his friends (and he was nothing if not a fiercely loyal friend). Dewar was quick to engage in argument and generally would not give up until he had convinced his opponents of his views. In addition he was rather quick to claim credit for ideas ascribed to others and eager to insert himself into controversies; for instance, he claimed to be the only scientist who changed his views (wrongly, as it later turned out) in the infamous non-classical ion debate waged largely by Herbert Brown and Saul Winstein. A past advisor of mine once had lunch with Dewar, and about the only thing he remembers is Dewar railing against the Nobel committee for not awarding him a prize. All this led Jack Roberts to label Dewar the "Peck's Bad Boy of Chemistry".

But Dewar's undoubtedly valuable and prolific contributions to chemistry cannot be denied, and the book contains much of value. Here I want to focus on a section where Dewar describes his take on the philosophy of chemistry and of science in general. In this he was greatly inspired by the now largely forgotten British philosopher Robin Collingwood; in fact he says that Collingwood's works were the single most important influence on his intellectual development. The reason Collingwood's work matters for chemistry is that it has special relevance to chemical reasoning, and this separates it from the traditional philosophy of science, which was largely developed by physicists and applied to physics.

Here's Dewar on models:

A model is a simple mechanism that simulates the behavior of a more complex one. A scientific model must simulate the behavior of the universe, or some part of it, while remaining simple enough for us to understand. The test of such a model is purely operational. Does it in fact simulate the behavior of the system being modeled? If not, we have to modify it or replace it with a better one, better in the sense that it simulates the parent system more effectively. There is of course no question of a model being true or false. The same rule applies to scientific theories, which are simply definitions of scientific models. The question, "Is it true?" is meaningless in science. The correct question to ask is "Does it work?"

That is one of the most accurate encapsulations of the philosophy of chemistry that I have come across. It enunciates all the main features of good chemical models. Chemistry as a science is based much more on models than physics is. Crucially, chemical models must be simple enough to be understood by experimentalists while remaining general enough to describe a wide range of situations. This is one reason why reductionism does not work so well in chemistry on a practical basis. Chemists are also more easily reconciled to the value of models as utilitarian constructs rather than as representations of some deep and ultimate reality. Physicists worry much more about such things, and the fact that chemists don't does not make them any less scientific or any less enamored of the pursuit of truth. Chemistry, much more than physics, is about special cases rather than general phenomena. Chemists certainly care about general explanatory frameworks like chemical bonding, steric effects and electron transfer, but even there the thrust is not necessarily to uncover deep reality but to find something that works. The chemist is the perpetual tinkerer, and it is through tinkering that he or she is led to "the truth".


How it came from bit: Review of George Dyson's "Turing's Cathedral"

The physicist John Wheeler, who was famous for his neologisms, once remarked that the essence of the universe could be boiled down to the phrase "it from bit", signifying the creation of matter from information. This description encompasses the digital universe which now so completely pervades our existence. Many moments in history could lay claim to being the birthplace of this universe, but as George Dyson marvelously documents in "Turing's Cathedral", the period between 1945 and 1957 at the Institute for Advanced Study (IAS) in Princeton is as good a candidate as any.

The title is somewhat misleading. Dyson's book focuses on the pioneering development of computing during the decade after World War II and essentially centers on one man: John von Neumann, lover of fast cars, noisy parties and finely tailored suits (a sartorial taste he shared with IAS director Robert Oppenheimer). A man for whom the very act of thinking was sacrosanct, von Neumann is one of the chosen few people in history to whom the label "genius" can authentically be applied. His central talent was an unsurpassed ability to take apart a problem from any field and put it back together so that the answer was usually staring you in the face. The sheer diversity of fields to which he made important contributions beggars belief; Wikipedia lists at least twenty, ranging from the mathematical foundations of quantum mechanics to game theory to biology. As one of the IAS's first permanent members, his mind ranged across a staggeringly wide expanse of thought, from the purest of mathematics to the most applied nuclear weapons physics.

Dyson's book recounts the pathbreaking efforts of von Neumann and his team to build a novel computer at the IAS in the late 1940s. Today, when we are immersed in a sea of computer-generated information, it is easy to take the essential idea of a computer for granted. That idea was not the transistor or the integrated circuit or even the programming language, but the groundbreaking notion that you could have a machine where both data AND the instructions for manipulating that data could be stored in the same place by being encoded in a common binary language. That was von Neumann's great insight, which built upon Alan Turing's abstract idea of a universal computing machine. The resulting concept of a stored program is at the foundation of every single computer in the world. The IAS computer practically validated this concept and breathed life into our modern digital universe. By present standards its computing power was vanishingly small (as Dyson poignantly puts it, there were a total of 53 kilobytes of high-speed memory in the world in 1953), but the technological future it unleashed has been limitless. The one thing that von Neumann got wrong was that he envisioned computers getting bigger and bigger, but apart from this miscalculation he seems to have seen it all.
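
To make the stored-program idea concrete, here is a minimal sketch in Python with an invented three-instruction machine (the instruction set and memory layout are my own illustration, not any historical design): the "program" and the numbers it manipulates sit in the same flat memory, and the machine distinguishes them only by how it interprets them.

```python
# Toy illustration of the stored-program idea: instructions and data share
# one memory. The LOAD/ADD/STORE/HALT instruction set is invented for the
# example and has nothing to do with the actual IAS machine.

memory = [
    ("LOAD", 8),    # 0: load the value at address 8 into the accumulator
    ("ADD", 9),     # 1: add the value at address 9
    ("STORE", 10),  # 2: write the accumulator back to address 10
    ("HALT", None), # 3: stop
    None, None, None, None,
    2, 3,           # 8, 9: data, living in the same memory as the code
    None,           # 10: the result goes here
]

def run(memory):
    acc, pc = 0, 0                  # accumulator and program counter
    while True:
        op, addr = memory[pc]       # fetch and decode
        pc += 1
        if op == "LOAD":
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]
        elif op == "STORE":
            memory[addr] = acc      # nothing stops a program from writing
                                    # over its own instructions: code is data
        elif op == "HALT":
            return memory

print(run(memory)[10])  # -> 5
```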

Dyson's book excels mainly in three ways. Firstly, it presents a lively history of the IAS, the brilliant minds who worked there and the culture of pure thought that often looked down on von Neumann's practical computational tinkering; it describes the pure mathematicians' "undisguised horror" at a bunch of engineers getting their hands dirty with real machinery so close to their offices. Secondly, it discusses the provenance of von Neumann's ideas, which partly arose from his need to perform complex calculations of the events occurring in a thermonuclear explosion. These top-secret calculations were quietly run at night on the IAS computer and in turn were used to tweak the computer's workings; as Dyson pithily puts it, "computers built bombs, and bombs built computers". Von Neumann also significantly contributed to the ENIAC computer project at the University of Pennsylvania. Thirdly, Dyson brings us evocative profiles of a variety of colorful and brilliant characters clustered around von Neumann who contributed to the intersection of computing with a constellation of key scientific fields that are now at the cutting edge.

There was the fascinating Stan Ulam, who came up with a novel method for calculating complex processes - the Monte Carlo technique - that is used in everything from economic analysis to biology. Ulam, who was one of the inventors of thermonuclear weapons, originally used the technique to calculate the multiplication of neutrons in a hydrogen bomb. Then there was Jule Charney, who set up some of the first weather pattern calculations, early forerunners of modern climate models. Charney was trying to implement von Neumann's grand dream of controlling the weather, but neither he nor von Neumann could anticipate chaos and the fundamental sensitivity of weather to tiny fluctuations. Dyson's book also pays due homage to an under-appreciated character, Nils Barricelli, who used the IAS computer to embark on a remarkable set of early experiments that sought to duplicate evolution and artificial life. In the process Barricelli discovered fascinating properties of code, including replication and parasitism, that mirrored some of the great discoveries taking place in molecular biology at the time. As Dyson tells us, there were clear parallels between biology and computing; both depended on sequences of code, although biology thrived on error-prone duplication (leading to variation) while computing actively sought to avoid it. Working on computing and thinking about biology, von Neumann anticipated the genesis of self-reproducing machines, which have fueled the imagination of both science fiction fans and leading researchers in nanotechnology.
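
The essence of the Monte Carlo idea is easy to sketch. The toy model below is my own simplification, not Ulam's actual calculation: each neutron in a population independently produces a random number of offspring, and the average behavior is estimated simply by repeating the random experiment many times. The offspring probabilities are invented for illustration.

```python
import random

# Toy Monte Carlo model of neutron multiplication as a branching process.
# The probabilities below are made up; the point is the method itself:
# simulate many random histories and average over them.
OFFSPRING_PROBS = [(0, 0.25), (1, 0.35), (2, 0.25), (3, 0.15)]  # hypothetical

def sample_offspring():
    r, cumulative = random.random(), 0.0
    for k, p in OFFSPRING_PROBS:
        cumulative += p
        if r < cumulative:
            return k
    return OFFSPRING_PROBS[-1][0]

def one_history(start=20, generations=8):
    """Follow one random history of the neutron population."""
    n = start
    for _ in range(generations):
        n = sum(sample_offspring() for _ in range(n))
        if n == 0:
            break
    return n

def mean_final_population(trials=500):
    return sum(one_history() for _ in range(trials)) / trials

# With these made-up probabilities each neutron yields 1.3 offspring on
# average, so the population should grow roughly as 20 * 1.3**8, about 163.
print(mean_final_population())
```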

Finally, Dyson introduces us to the remarkable engineers who were at the heart of the computing projects. Foremost among them was Julian Bigelow, a versatile man who could both understand code and fix a car. Bigelow's indispensable role in building the IAS computer brings up an important point; while von Neumann may have represented the very pinnacle of abstract thought, his computer wouldn't have gotten off the ground had Bigelow and his group of bright engineers not gotten their hands dirty. Great credit also goes to the two lead engineers on the ENIAC project, J. Presper Eckert and John Mauchly, who were rather unfairly relegated to the shadows and sidetracked by history. Dyson rightly places as much emphasis on discussing the nitty-gritty of the engineering hurdles behind the IAS computer as he does on its lofty mathematical underpinnings. He makes it clear that the ascendancy of a revolutionary technology requires both novel theoretical ideas and fine craftsmanship. Unfortunately in this case, the craftsmanship was ultimately trampled by the institute's mathematicians and humanists, which only added to its reputation as a refuge for ivory tower intellectuals who considered themselves above pedestrian concerns like engineering. At the end of the computing project the institute passed a resolution which forbade any kind of experimentation from ever taking place again; perhaps in keeping with his son's future interest in the topic, Freeman Dyson (who once worked on a nuclear spaceship and genuinely appreciates engineering details) was one of the few dissenting voices. But this was not before the IAS project spawned a variety of similar machines which partly underlie today's computing technology.

All these accounts are supplemented with gripping stories about weather prediction, the US thermonuclear program, evolutionary biology, and the emigration of European intellectuals like Kurt Godel and von Neumann to the United States. The book does have its flaws, though. For one thing it focuses too heavily on von Neumann and the IAS. Thus Dyson says relatively little about Turing himself, about pioneering computing efforts at Manchester and Cambridge (the first stored-program computer in fact was the Manchester "Baby" machine) and about the equally seminal development of information theory by Claude Shannon. James Gleick's "The Information" and Andrew Hodges's "Alan Turing: The Enigma" might be useful complements to Dyson's volume. In addition, Dyson often meanders into one too many digressions that break the flow of the narrative; for instance, do we really need to know so much about Kurt Godel's difficulties in obtaining a visa?

Notwithstanding these gripes, the book is beautifully written and exhaustively researched with copious quotes from the main characters. It's certainly the most detailed account of the IAS computer project that I have seen. If you want to know about the basic underpinnings of our digital universe, this is a great place to start even with its omissions. All the implications, pitfalls and possibilities of multiple scientific revolutions can be connected in one way or another to that little machine running quietly in a basement in Princeton.

UNC physics professor held on cocaine smuggling charges

Here's a story that's strange and intriguing. Paul Frampton, a well-known theoretical physicist at UNC-Chapel Hill, has been jailed in Argentina for trying to leave the country with no less than 2 kilos of cocaine smuggled in his baggage. He faces the rather horrific prospect of 16 years in jail if convicted.

The story's strange for several reasons. Frampton seems to be a rather distinguished physicist who has published 73 papers in Physical Review and Physical Review Letters alone, the two leading journals in physics. I have never personally interacted with him but know someone who was a graduate student with him in the 80s. That person tells me that Frampton was a helpful and gracious advisor who would take care of his students. I find it very unlikely that a 68-year-old theoretical physicist at a leading university, one who has long enjoyed an excellent reputation in his field, would risk it all by brazenly trying to commit such an ill-concealed and obvious crime. Even if he had wanted to pull it off, simply trying to smuggle such a large amount of cocaine in his baggage would have been rather stupid. I don't have any evidence pointing either way, but this just seems to be one of those cases where your gut feelings (as flawed as they are) point in a preferred direction.

UNC also doesn't seem to be particularly helpful in supporting Frampton. They have suspended his pay for one semester, which I find extremely disappointing and unworthy of an institution of UNC's caliber. Yes, it's true that he will be unavailable to teach the spring semester, but it seems unfair to blame that on him when the details of the case are not clear. At the very least I would have expected them to offer him a percentage of his current salary, but ideally they should have stood behind an old and distinguished colleague. What happened to the whole "innocent until proven guilty" adage? There also seems to be some kind of rivalry between Frampton and the provost, also a physicist. Frampton claims that the provost had a role to play in his arrest and possible framing, perhaps out of sheer jealousy. Some parts of the story seem straight out of a bad Disney movie.

Hopefully the air will clear soon. Until then Frampton will have to stay in Argentina, and we can only hope that he will be exonerated of these rather bizarre charges.

Book review: The Idea Factory: Bell Labs and the Great Age of American Innovation

During its fifty-odd years of existence, Bell Labs was the most productive scientific laboratory on the planet. It won seven Nobel Prizes, contributed innumerable practical ideas underlying our modern way of life and, whether by accident or design, also managed to make some spectacular basic scientific discoveries that expanded our understanding of the universe. How did it possibly accomplish all this? In this authoritative and intensely engaging book, Jon Gertner tells us exactly how.

Gertner's book about this great American institution excels in three ways. Firstly, it describes in detail the genesis of what was then an unlikely research institution. Until then most communications-related work was considered to be squarely within the domain of engineering. Bell Labs arose from a need to improve communications technology pioneered by its parent organization AT&T. But the real stroke of genius was to realize the value that basic scientists - mainly physicists and chemists - could bring to this endeavor along with engineers. This was largely the vision of two men - Frank Jewett and Mervin Kelly. Jewett, who was the first president of Bell Labs, had the foresight to recruit promising young physicists who were proteges of his friend Robert Millikan, a Nobel Prize-winning physicist and president of Caltech. Kelly in turn was Millikan's student and was probably the most important person in the history of the laboratory. It was Kelly who hired the first brilliant breed of physicists and engineers, including William Shockley, Walter Brattain, Jim Fisk and Charles Townes, and who would set the agenda for future famous discoveries. During World War II Bell gained a reputation for taking on challenging military projects like radar; by the end of the war it had handled almost a thousand of these. The war made the benefits of supporting basic science clear. It was Kelly again who realized that the future of innovation lay in electronics. To this end he moved Bell from its initial location in New York City to an expansive wooded campus at Murray Hill in New Jersey and recruited even more brilliant physicists, chemists and engineers. This added further fuel to the fire of innovation started in the 1930s, and from then on the laboratory never looked back.

Secondly, Gertner gives a terrific account of the people who populated the buildings in Murray Hill and their discoveries, which immortalized the laboratory. Kelly instituted a policy of hiring only the best minds, and it did not matter whether these were drawn from industry, academia or government. In some cases he would go to great lengths to snare a particularly valuable scientist, offering lucrative financial incentives along with unprecedented freedom to explore ideas. This led to a string of extraordinary discoveries which Gertner describes in rich and accessible detail. One feature of the book that stands out is Gertner's effort to describe the actual science instead of skimming over it; for instance he pays due attention to the revolution in materials chemistry that was necessary for designing semiconductor devices. The sheer number of important things Bell scientists discovered or invented beggars belief; even a limited but diverse sampling includes the first transatlantic telephone cable, transistors, UNIX, C++, photovoltaic cells, error-corrected communication, charge-coupled devices and the statistical process control that now forms the basis of the six-sigma movement. The scientists were a fascinating, diverse lot, and Gertner brings a novelist's eye to describing them. There was Bill Shockley, the undoubtedly brilliant, troubled, irascible physicist whose sin of competing against his subordinates led to his alienation at the lab. Gertner provides a fast-paced account of those heady days in 1947 when John Bardeen, Walter Brattain and Shockley invented the transistor, the truly world-changing invention that is Bell Labs's greatest claim to fame. Then there was Claude Shannon, the quiet, eccentric genius who rode his unicycle around the halls and invented information theory, which essentially underlies the entire modern digital world. Described also are Arno Penzias and Robert Wilson, whose work with an antenna that was part of the first communications satellite - also built by Bell - led to momentous evidence supporting the Big Bang. The influence of the laboratory was so formative that even the people who left Bell Labs later went on to greatness; several of them, such as John Bardeen and future energy secretary Steven Chu, joined elite academic institutions and won Nobel Prizes. It's quite clear that such a cast of characters will probably never again be concentrated in one place.

But perhaps the most valuable part of the book deals not with the great scientific personalities or their discoveries but with the reasons that made Bell tick. When Kelly moved the lab to Murray Hill, he designed its physical space in ways that would have deep repercussions for productive thought and invention. Most crucially, he interspersed the basic and applied scientists without any separation. That way even the purest of mathematicians was forced to interact with and learn from the most hands-on engineer. This led to an exceptional cross-fertilization of ideas, an early precursor of what we now call multidisciplinary research. Labs and offices were divided by soundproof steel partitions that could be moved to expand and rearrange working spaces. The labs were all lined along a very long, seven-hundred-foot corridor where everybody worked with their doors open. This physical layout ensured that when a scientist or engineer walked to the cafeteria, he or she would "pick up ideas like a magnet picks up iron filings".

Other rules only fed the idea factory. For instance you were not supposed to turn away a subordinate if he came to ask you for advice. This led to the greenest of recruits learning at the feet of masters like Bardeen or Shannon. Most importantly, you were free to pursue any idea or research project that you wanted, free to ask anyone for advice, free to be led where the evidence pointed. Of course this extraordinary freedom was made possible by the immense profits generated by the monopolistic AT&T, but the heart of the matter is that Bell's founders recognized the importance of focusing on long-term goals rather than short-term profits. They did this by gathering bright minds under one roof and giving them the freedom and time to pursue their ideas. And as history makes clear, this policy led not only to fundamental discoveries but to practical inventions greatly benefiting humanity. Perhaps some of today's profitable companies like Google can lift a page from AT&T and channel more of their profits into basic, broadly defined, curiosity-driven research.

Gertner's highly readable book leaves us with a key message. As America struggles to stay competitive in science and technology, Bell Labs still provides the best example of what productive industrial research can accomplish. There are many lessons that modern organizations can learn from it. One interesting lesson, arising from the cohabitation of research and manufacturing under the same roof, is that it might not be healthy beyond a point to isolate one from the other, a caveat that bears directly on current offshoring policies. It is important to have people involved in all aspects of R&D talking to each other. But the greatest message of all from the story of this remarkable institution is simple and should not be lost in this era of short-term profits, layoffs and declining investment in fundamental research: the best way to generate ideas still is to hire the best minds, put them all in one place and give them the freedom, time and money to explore, think and innovate. You will be surprised how much long-term benefit you get from that policy. As they say, mighty oaks from little acorns grow, and it's imperative to nurture those little seeds.

If you want them to collaborate, you should let them collaborate

The past few months have seen a string of stories about major drug makers shutting down their neuroscience research, a move that seems to be the exact opposite of what they should be doing, even in times of economic distress. Many neurological disorders are the very definition of unmet needs, and one would think that pharma would pump massive, long-term resources into Alzheimer's disease research with alacrity. But such are the times we live in.

A recent commentary in Nature brought this topic to my attention. The authors are from the ETH in Zurich and, after lamenting the withdrawal of drug companies from neuroscience research, they try to propose a way forward in the form of increased industry-academia collaboration. Whether this increasingly fashionable brainwave will work is still unknown. But one paragraph in particular caught my attention:

To reinvigorate the field and avoid repeating past problems, more exchange should be fostered between basic and clinical scientists. When spinal-cord researchers began organizing retreats and workshops to bring together basic researchers and clinicians, they saw first-hand how little each side knows about how the other works. The mutual lack of knowledge was huge; each side had completely different language to describe the same scenario.


But this is exactly what the drug companies should have been doing: putting the basic and clinical scientists under one roof. It's lamentable that they need special retreats to bring these folks together. The reason this part jumped out at me is that I have just started reading Jon Gertner's great new book about Bell Labs. Many reasons contributed to the institution's phenomenal success, but one notable factor was the concentration of the purest and the most applied scientists under one roof. The firm's pioneering research director Mervin Kelly carefully planned the physical layout of the lab so that everyone, irrespective of specialty or research level, was a stone's throw from everyone else. That way even the purest mathematician was forced to interact with and learn from the most hands-on engineer. Research and manufacturing were geographically indistinguishable. There was a very long, seven-hundred-foot corridor with open offices and labs on each side. It was impossible to walk down the hall and not learn something from someone working in a very different field.

That formula still seems entirely relevant, especially when research has become highly complex and multidisciplinary, and it just seems relatively unproductive to hold special workshops and retreats so that the pure folks can talk to the applied folks. Sure, retreats and workshops can help, but as Bell demonstrated, there's nothing more productive than having the guy who wrote the book on spinal cord injury surgery just down the hall from the guy who wrote the book on dopamine antagonists. Startups and small companies can do this to some extent but they certainly don't have Big Pharma's resources.

Big Pharma of course seems to have stopped listening.

(Update: As a commentator points out, the photo is not from Bell Labs but from Allied Chemicals. I can imagine the corridor at Bell looking quite similar though).

The unstoppable Moore hits the immovable Eroom

Thanks to Derek I became familiar with an article in the recent issue of Nature Reviews Drug Discovery which addresses that existential question asked by so many plaintive members of the scientific community: why has pharmaceutical productivity declined over the last two decades, with no end to the attrition in sight?

The literature has been awash in articles discussing this topic, but this piece presents one of the most perceptive and well-thought-out analyses that I have recently read. The paper posits a law called "Eroom's Law", the opposite of Moore's Law, which charts the regress in drug approvals and novel medicines, in contrast to Moore's rosy vision of technological progress. The authors wisely avoid recommending solutions, but they do cite four principal causes for the decline. Derek is planning to write a series of undoubtedly insightful posts on these causes. But here I want to list them, especially for those who may not have access to the journal, and discuss one of them in some detail.

The first cause is named the 'Better than the Beatles' effect. The title is self-explanatory; if every new drug is required to be better than a predecessor which has achieved Beatles-like medical status, then the bar for its acceptance is going to be very high, leading to an expensive and resource-intensive discovery process. An example would be a new drug for stomach ulcers, which would have to top the outstanding success of ranitidine and omeprazole; unlikely to happen. Naturally the bar is very high for certain areas like heart disease with its statins and hypertension with its phalanx of therapies, but the downside is that this stops novel medication from ever seeing the light of day. The Better-than-the-Beatles bar is understandably lower for a disease like Alzheimer's where there are no effective existing therapies, so perhaps drug developers should focus on these areas. The same goes for orphan drugs which target rare diseases.

The second reason is the 'cautious regulator', with the title again being self-explanatory. The thalidomide disaster in the 1960s led to a body of regulatory schedules and frameworks that today impose stringent standards of efficacy and toxicity that drugs have to meet. This is not bad in itself, except that it often leads to the elimination of potential candidates (whose efficacy and toxicity could be modulated later) very early on in the process. The stupendously crushing magnitude of the regulatory schedule is illustrated by the example of a new oncology drug whose documentation, if piled in a single stack, would top the height of the Empire State Building. With this kind of regulation, scientists never tire of pointing out that many of the pathbreaking drugs approved in the 50s and 60s would never survive the FDA's gauntlet today. There's a lesson in there somewhere; it does not mean that every new compound should be directly tested on humans, but it does suggest that compounds which initially appear problematic should perhaps be allowed to compete in the race a little longer without having to pass litmus tests. It's also clear that, as with the Beatles problem, the regulatory bar is going to be lower for unmet needs and rare but important diseases. An interesting fact cited by the article is the relatively low bar for HIV drugs in the 90s, which was partly a result of intense lobbying in Washington.

The third reason cited in the article concerns the 'throw money at it' tendency. The authors don't really delve into this, partly because the problem is rather obvious; you cannot solve a complex, multifaceted puzzle like the discovery of a new drug simply by pumping in financial and human resources.

It's the fourth problem that I want to talk about. The authors call it the 'basic science-brute force' problem, and it points to a rather paradoxical conclusion: that the increasingly basic-science-driven and data-driven approaches adopted by the pharmaceutical industry over the last twenty years might have actually hampered progress.

The paradox is perhaps not as hard to understand as it looks if we realize just how complex the human body and its interactions with small molecules are. This was well-understood in the fifties and sixties and it led to the evaluation of small molecules largely through their effect on actual living systems (which these days is called phenotypic screening) instead of by validating their action at the molecular level. A promising new therapeutic would often be directly tested on a mouse; at a time when very little was known about protein structure and enzyme mechanisms, this seemed to be the reasonable thing to do. Surprisingly it was also perhaps the smart thing to do. As molecular biology, organic chemistry and crystallography provided us with new, high-throughput techniques to study the molecular mechanism of drugs, focus shifted from the somewhat risky whole-animal testing methods of the 60s to target-based approaches where you tried to decipher the interaction of drugs with their target proteins.

As the article describes, this thinking led to a kind of molecular reductionism, where optimizing the affinity of a ligand for a protein appeared to be the key to the development of a successful drug. The philosophy was only buttressed by the development of rapid molecular synthesis techniques like combinatorial chemistry. With thousands of new compounds and fantastic new ways to study their interactions at a molecular level, what could go wrong?

A lot, as it turns out. The complexity of biological systems ensures that the one target-one disease correlation fails more often than not. We now appreciate more than ever that new drugs, especially ones that target complex diseases like Alzheimer's or diabetes, might be required to interact with multiple proteins to be effective. As the article notes, the advent of rational approaches and cutting-edge basic science might have led companies to industrialize and unduly systematize the wrong part of the drug discovery process - the early one. The paradigm only gathered steam with the brute-force approaches enabled by combinatorial chemistry and the rapid screening of millions of compounds. The whole philosophy of finding the proverbial needle in the haystack ignored the possible utility of the haystack itself.

This misplaced systematization eliminated potentially promising compounds with multiple modes of action whose interactions could not be easily studied by traditional target-based methods. Not surprisingly, this led to compounds with nanomolar affinity and apparently promising properties often failing in clinical trials. Put more simply, the whole emphasis on target-based drug discovery and its attendant technologies might have resulted in lots of high-affinity, tight binding ligands, but few drugs.

Although the authors don't discuss it, we continue to hold such misplaced beliefs today when we think that genomics and all that it entails will help us rapidly discover new drugs. Constraining ourselves to accurate but narrowly defined features of biological systems deflects our attention from the less accurate but broader and more relevant ones. The lesson here is simple; we are turning into the guy who looks for his keys under the streetlight only because it's easier to see there.

The authors of the article don't suggest simple solutions because there aren't any. But there is a hint of a solution in their recommendation of a new post in pharmaceutical organizations, colorfully titled the Chief Dead Drug Officer (CDDO), whose sole job would be to document and analyze the reasons for drug failures. Refreshingly, the authors suggest that the CDDO's remuneration could come in the form of delayed gratification a few years down the line, when his analysis has been validated. The hope is that the understanding emerging from such an analysis would lead to some simple but effective guidelines. In the context of the 'basic science-brute force' problem, the guidelines may allow us to decide when to use ultra-rational target-based approaches and when to use phenotypic screening or whole-animal studies.

At least in some cases the right solution seems clear. For instance we have known for years that neurological drugs hit multiple targets in the brain. Fifty years of psychiatrists prescribing drugs for psychosis, depression and bipolar disorder have not changed the fact that even today we treat many psychiatric drugs as black boxes. With multiple subtypes of histamine, dopamine and serotonin receptors activated through all kinds of diverse, ill-understood mechanisms, it's clear that single-target approaches to CNS drug discovery are going to be futile, while multi-target approaches are simply going to be too complicated in the near future. In this situation it's clear that phenotypic screening, animal studies, and careful observations of patient populations are the way to go in prioritizing and developing new psychiatric medication.

Ultimately the article illuminates a simple fact; we just don't understand biological systems well enough to discover drugs through a few well-defined approaches. And in the face of ignorance, both rational and "irrational" approaches are going to be valuable in their own right. As usual, knowing which ones to use when is going to be the trick.

Gilbert Stork on steaks, synthesis and more

Few chemists in the twentieth century have contributed as many important ideas to the science and art of organic synthesis as Gilbert Stork. Stork has made any number of groundbreaking and elegant contributions to the discipline, from the enamine reaction to radical chemistry to pathbreaking total syntheses like his synthesis of quinine. And from his perch at Columbia University where he has been for almost fifty years, he has emerged as one of his generation's most productive trainers of leading chemists in academia and industry. Here's a nice presentation listing his achievements.

Stork is now being celebrated on the occasion of his 90th birthday by chemist and historian Jeff Seeman in Angewandte Chemie. Jeff brings us a wonderful collection of anecdotes, quotes and stories, both by Stork himself and by his friends and colleagues, who include some of the twentieth century's leading organic chemists. There are also dozens of memorable photos. Unlike some of his contemporaries, Stork is a rather unassuming man who has shunned the limelight, so it's a treat to hear these stories. There's lots of amusing stuff in there, from Stork's literally explosive relationship with cars to his being thrust into the unenviable situation of having to give a talk right after a stellar lecture by R. B. Woodward. For me two stories stood out.

First, a tale of steak disposal that momentarily triggered a panic attack and illustrated a nice lesson about kinetics (notwithstanding the fact that aqua regia contains hydrochloric, not sulfuric acid):

“There was this one really idiotic time. I remember I was really scared that I was going to blow up the entire Chemistry Department at the University of Wisconsin. I had a steak on the window ledge of my office. It was the winter, and I used the window ledge as a refrigerator. You obviously were not supposed to be cooking steaks in the lab, but I had a small lab where I was usually alone in there, and so I had a steak. But I also was not aware that biodegradable material is biodegradable, and this steak was clearly degraded on the window ledge. And the question was, what to do with it? And I decided to toss the steak in a hot acid bath which we used to clean up glassware. So, it's fuming nitric and sulfuric acid. It's really aqua regia in that bath, in that heavy lead dish, and the steak.


“And then, as I just had thrown it in there, and it fumed furiously and red fumes of who knows what, nitrous oxide of various kinds were being produced there. I became frantically concerned because fat is glycerides. So, I am hydrolyzing the fat to glycerin. You make nitroglycerine by taking glycerin and nitric acid and sulfuric acid, and obviously, I am going to produce a pile of nitroglycerine and blow up the entire building with my steak.


“Now, what is an interesting point there, why didn't it? And of course, the reason is kinetics. That is, the kinetics of oxidation of the glycerol at that temperature is much, much, much, I mean, infinitely faster than the cold temperature nitration of glycerin. And so the place was safe.”

And second, some reflections on the real value and utility of chemical synthesis:

“The toughest question to ask in synthetic organic chemistry after the work is done is: what have you learned? And you can have extraordinarily complex things. They look complex as hell. Maybe they have 80 asymmetric centers and maybe the answer is, [you've learned] nothing. I mean, you could have learned that humans are capable of enormous focused efforts and are capable of sticking with a problem which is extraordinarily complicated.


On the other hand, if somebody makes polyethylene, as somebody obviously did, then you learn a lot, even though it will not thrill most synthetic chemists because this would be comparable to building a highway for an architect. I mean, it's important, but it's fairly dull compared to [building] the Guggenheim Museum, for instance... ”


“So something could be not terribly glamorous but extremely important, or vice versa. I think that B12 was vice versa. It's enormously complicated.”

That's a really important point he makes, and one that should define the choice of a research problem especially for a young investigator. There's not much point in attempting that 80 step synthesis using "hammer-and-tong" chemistry if it's not going to teach you much; that's also one of the reasons that people like Woodward get so much credit for synthesizing something first, since they really demonstrated that such complex synthesis was possible. On the other hand, throwing in two simple chemicals and watching them form an astonishingly intricate infinite lattice can really teach you something new. So can synthesizing a boringly repetitive polymer with novel properties.


The real deal in chemistry as in any other science is understanding, and the nature of experiments that impart real understanding changes with the evolution of chemical science. Stork's message for new researchers is clear; pick a problem that may not be glamorous, but whose solution would teach you something new. Stamina is not quite as important as creativity and discovery, and although perseverance is admirable, you don't make an important contribution just by proving that you can stick with a problem for ten years. Even though it may appear that way, science is not a marathon; it's scuba diving.



Why it's hard to explain drug discovery to physicists

I minored in physics in college, and ever since then I have had a lively interest in the subject and its history. Although initially trained as an organic chemist, part of the reason I decided to study computational and theoretical chemistry was their connection to physics by way of quantum chemistry, electrostatics and statistical thermodynamics. No other science can boast the kind of fundamental insights into the most basic workings of the universe that physics provided in the twentieth century. Even today when we think of the purest, most exalted science we think of physics.

Not surprisingly, I have several physicist friends with whom I often talk shop. It's enormously interesting to hear about their work, which ranges from cosmology to solid-state physics. Yet I find that I sometimes have trouble explaining my own work to them. And this is certainly not because they lack the capacity to understand it. It's because the nature of drug discovery is sometimes rather alien to physicists, and especially to theoretical physicists. The physicists have trouble understanding drug discovery not because it's hard but because it seems too messy, unrigorous, haphazard, subject to serendipity.

But drug discovery and design is indeed all this and more, and that's precisely why it works. Success in drug discovery demands a diverse mix of skills that range from highly rigorous analysis to statistical extrapolation, gut feeling and intuition, and of course, a healthy dose of good luck. All of these are an essential part of the cocktail (to borrow a drug metaphor). No wonder that models play an integral role in the discovery of new drugs. In this sense drug discovery is very much like chemistry which Roald Hoffmann has trouble explaining to physicists for similar reasons. For a theoretical physicist, anything that cannot be accurately expressed as a differential equation subject to numerical if not analytical solution is suspect. True success in physics is exemplified by quantum electrodynamics, the most accurate theory that we know which agrees with experiment to 12 decimal places. While not as stunningly accurate as QED, most of theoretical physics in the twentieth century consisted of rigorously solving equations and getting answers that agreed with experiment to an unprecedented degree. The goal of many physicists was, and still is, to find three laws that account for at least 99% of the universe. But the situation in drug discovery is more akin to the situation in finance described by the physicist-turned-financial modeler Emanuel Derman; we drug hunters would consider ourselves lucky to find 99 laws that describe 3% of the drug discovery universe.

Physics strives to find universal laws; drug discovery thrives on exceptions. While there certainly are general principles dictating the binding of a drug to its target protein, every protein-drug system is like a human being, presenting its own quirky personality and peculiar traits that we have to deconvolute using every tool at our disposal, whether rigorous or not. In fact, as anyone in the field knows, drug discovery scientists take great satisfaction in understanding these unique details, in knowing what makes that particular molecule and that particular protein tick. Try to convince a scientist working in drug discovery that you have found an equation that would allow you to predict the potency, selectivity and side effects of a drug starting from its chemical structure, and that it would be universally applicable to any drug and any protein, and you will be met with ridicule.

Physicists also have to appreciate that in drug discovery, understanding is much more important than accuracy. There's very little point in calculating or measuring a binding affinity to four decimal places, but calculating relative trends in binding affinity can be very useful, even if there are errors in the individual numbers. Far more important than calculation, however, is explaining why: why a small change in a drug causes a large change in its activity, why one enantiomer causes side effects while another does not, why a molecule built to mimic the natural substrate of a protein failed, why adding that fluorine adversely affected solubility. "Why" in turn can lead to "what should I make next", which is really what a drug hunter wants to know. In most of these cases the number of variables is so large that calculation would be hopelessly impossible in any case, but even if it were possible, dissecting every factor quantitatively is not half as important as explanation. And here's the key point; the explanation can come from any quarter and from any method of inquiry, from calculation to intuition.
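
A trivial numerical illustration of what "relative trends" means in practice (with made-up numbers): binding free energies follow ΔG = RT ln(Kd), so a systematic error that shifts every measured or computed Kd by the same factor cancels entirely when two analogs are compared.

```python
import math

R = 1.987e-3   # gas constant in kcal/(mol K)
T = 298.0      # room temperature in K

def delta_g(kd_molar):
    """Binding free energy from a dissociation constant: dG = RT ln(Kd)."""
    return R * T * math.log(kd_molar)

# Hypothetical dissociation constants for two analogs of the same scaffold.
kd_a, kd_b = 50e-9, 500e-9   # 50 nM and 500 nM

ddg_true = delta_g(kd_b) - delta_g(kd_a)

# Suppose every Kd carries the same threefold systematic error
# (a miscalibrated assay, a biased force field, and so on).
ddg_biased = delta_g(3 * kd_b) - delta_g(3 * kd_a)

print(round(ddg_true, 2), round(ddg_biased, 2))  # both are about 1.36 kcal/mol
```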

This brings us to reductionism, which we have discussed on this blog before. Part of the reason drug discovery can be challenging for physicists is that they are steeped in a culture of reductionism. Reductionism is the great legacy of twentieth-century physics, but while it worked spectacularly well for particle physics it doesn't quite work for drug design. A physicist may see the human body, or even a protein-drug system, as a complex machine that we can completely understand once we break it down into its constituent parts. But the chemical and biological systems that drug discoverers deal with are classic examples of emergent phenomena. A network of proteins displays properties that are not obvious from the behavior of the individual proteins. An aggregate of neurons displays behavior that completely belies the apparent simplicity of neuronal structure and firing. At every level there are fundamental laws governing a particular system which have to be understood on that level's own terms. Reductionism certainly doesn't work in drug discovery in practice, since the systems are so horrendously complicated, but it may not even work in principle. Physicists need to understand that drug discovery puts reductionism in a straitjacket; it can help you a little bit at every level, but it has very little wiggle room beyond that level.

Physicists may also sometimes find themselves bewildered by the inherently multidisciplinary nature of pharmaceutical research. It is impossible to discover a new drug without the contribution of people from a variety of different fields, and no one scientist can usually claim the credit for a novel therapeutic. This concept is somewhat alien especially to theoretical physicists who are used to sitting in a room with pencil and paper and uncovering the great mysteries of the universe. To be sure, there are areas of physics like experimental particle physics which now require enormous team effort (with the LHC being the ultimate incarnation of such teamwork), but even in those cases the scientists involved have been mostly physicists.

So are physicists doomed to look at drug discoverers with a jaundiced eye? I don't think so. The nature of physics itself has changed significantly in the last thirty years or so. New fields of inquiry have presented physicists with the kind of complex systems, opaque to first-principles approaches, that chemists and biologists are familiar with. This is apparent in disciplines like biophysics, nonlinear dynamics, atmospheric physics, and the physics of large disordered systems. Many phenomena that physicists study today, from clouds to strange new materials, are complex phenomena that don't succumb to reductionist approaches. In fact, as the physicist Philip Anderson reminds us, reductionism does not even help us fully understand well-known properties like superconductivity.

The new fields demand new approaches and their complexity means that physicists have to abandon strictly first-principles approaches and indulge in the kind of modeling familiar to chemists and biologists. Even cosmology is now steeped in model-building due to the sheer complexity of the events it studies. In addition, physicists are now often required to build bridges with other disciplines. Fields like biophysics are often as interdisciplinary as anything found in drug discovery. And just like in drug discovery, physicists now have to accept the fact that a novel solution to their problem may come from a non-physicist.

All this can only be a good augury if it means that more physicists are going to join the working ranks of drug discoverers. And it will all work out splendidly as long as they are willing to occasionally hang their reductionist hats at the door, supply pragmatic solutions and not insist on getting answers right to twelve decimal places.



A new way to look for life on other planets

One of the fundamental properties of light is its polarization, which refers to the spatial orientation of the electric and magnetic fields constituting a light wave. There are many fascinating facts about polarized light which are of both academic and applied interest, but perhaps the most intriguing one is the ability of chiral, or handed, organic compounds to rotate the plane of plane-polarized light. This has proven to be an invaluable tool for chemists in detecting and assigning the structures of molecules, especially biological molecules like sugars and amino acids, which tend to exist in only one form (left- or right-handed) in nature.

This fact has now been put to good use by a team of Chilean, Spanish and British astronomers who, in a paper in this week's Nature, have come up with a novel way to detect biosignatures of life on other planets. They demonstrate their method by detecting the polarization signatures of water, clouds and vegetation in earthshine. Earthshine refers to sunlight reflected by the earth that is then reflected back towards the earth by the moon. It turns out that earthshine contains polarized light whose polarization has been shaped by the earth's atmosphere and vegetation and their constituent molecules. Specific molecules polarize specific wavelengths of light, so scanning the whole range of wavelengths is essential. Crucially, the presence of vegetation is manifested in the polarization of the light by chlorophyll. Chlorophyll is special because it absorbs light up to about 700 nm; beyond 700 nm (in the near-infrared region) it sharply reflects it, leading to a jump in the spectrum known as the "red edge". That is why plants glow strongly in infrared photographs. The red edge is a major part of the earth's reflected light as detected from outer space. It's a remarkable phenomenon which could be put to good use to detect similar life-enabling pigments on other planets.
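
As a rough illustration of what the red edge looks like in numbers (a toy spectrum of my own invention, not data from the paper): chlorophyll-bearing surfaces reflect only a few percent of visible red light but tens of percent in the near-infrared, so the ratio of reflectance just above and just below about 700 nm is a crude vegetation indicator.

```python
# Toy illustration of the chlorophyll "red edge": compare reflectance just
# below and just above ~700 nm. The reflectance values are rough, made-up
# numbers, not data from the Sterzik et al. paper.

def red_edge_index(spectrum, edge_nm=700, window_nm=40):
    """Ratio of mean reflectance just above the edge to just below it."""
    below = [r for wl, r in spectrum if edge_nm - window_nm <= wl < edge_nm]
    above = [r for wl, r in spectrum if edge_nm < wl <= edge_nm + window_nm]
    return (sum(above) / len(above)) / (sum(below) / len(below))

# (wavelength in nm, reflectance) pairs, loosely mimicking vegetation and ocean
vegetation = [(660, 0.05), (680, 0.04), (690, 0.05), (720, 0.35), (740, 0.45)]
ocean      = [(660, 0.06), (680, 0.06), (690, 0.06), (720, 0.05), (740, 0.05)]

print(red_edge_index(vegetation))  # large jump: strong red edge
print(red_edge_index(ocean))       # ratio near 1: no red edge
```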

The team used the Very Large Telescope in Chile to analyze reflected earthshine during two months, April and June. The two epochs were necessary to observe two different faces that the earth presented to the moon; one face was predominantly covered by land and vegetation and the other mainly by water. The earthshine arising from the two faces would be characterized by different spectroscopic signatures, one belonging mainly to vegetation and the other to water. At each wavelength of light they observed peaks and discontinuities corresponding to oxygen, water vapor and chlorophyll. They then compared these observations to calculations from a model that contains, as parameters, varying proportions of vegetation and ocean. There is some uncertainty because of assumptions about cloud structure, but overall there is good agreement. Remarkably, the signal is sensitive to even a 10% difference in vegetation cover.
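
One can get a feel for the kind of model comparison involved from a crude linear mixing sketch: treat the observed spectrum as a weighted sum of a "vegetation" and an "ocean" end-member spectrum and scan the vegetation fraction for the best fit. This is my own simplification for illustration, with invented numbers, not the authors' radiative-transfer model.

```python
# Crude illustration of fitting a fractional vegetation cover: treat the
# observed spectrum as a linear mix of two end-member spectra and pick the
# mixing fraction that minimizes the squared misfit. All numbers are invented.

vegetation = [0.05, 0.04, 0.05, 0.35, 0.45]        # end-member reflectances
ocean      = [0.06, 0.06, 0.06, 0.05, 0.05]
observed   = [0.057, 0.054, 0.057, 0.140, 0.170]   # a pretend observation

def misfit(fraction):
    model = [fraction * v + (1 - fraction) * o for v, o in zip(vegetation, ocean)]
    return sum((m - d) ** 2 for m, d in zip(model, observed))

best = min((f / 100 for f in range(101)), key=misfit)
print(best)  # about 0.3, i.e. roughly 30% vegetation in this made-up example
```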

The technique is fascinating and promises to be useful for making gross detections of water, oxygen and vegetation on other earth-like planets, all of which are strong indicators of life. Yet it is clear that earthshine presents a relatively simple test case, mainly because of the proximity of the moon, which is the source of the reflected polarized light. By astronomical standards the moon is right next to the earth, and there's very little in the intervening medium by way of dust, ice and other celestial bodies. The situation is going to be quite different for detecting polarized reflections from planets that are many light years away. A few thoughts and questions:

1. The authors note that the lunar surface partially depolarizes the light. Wouldn't this happen to a much greater degree with light that has traveled much farther and hit multiple potentially depolarizing surfaces? Light could also be depolarized by dense atmospheres or by interstellar media like dust and ice grains. More interestingly, the polarization could also be reversed or otherwise affected by chiral compounds in outer space.


2. A related question: how intense does the light have to be when it reaches the detectors? Presumably light from worlds many light years away is going to interact strongly with surfaces and interstellar media and lose most of its intensity.


3. It's clear that chlorophyll is responsible for the signature of vegetation. Alien plants may not necessarily utilize chlorophyll as their light-harvesting pigment; in fact, they may well be equipped to use alternative wavelengths. There could also be life not dependent on sunlight. How we would interpret signatures arising from other, unknown pigments and constituents of life is an open question.


4. It is likely that advanced civilizations have discovered this method of detecting life. Could they be deliberately broadcasting polarized light to signal their presence? In the spirit of a past post, could they do this with specific molecules like amino acids, isotopically labeled molecules or stereoisomers? How sensitive is the polarization to molecular concentration? Any of these compounds would strongly suggest the presence of intelligent life which has developed the technology for the synthesis and purification of organic molecules.




Sterzik, M., Bagnulo, S., & Palle, E. (2012). Biosignatures as revealed by spectropolarimetry of Earthshine. Nature, 483(7387), 64-66. DOI: 10.1038/nature10778