Kurt Gödel's Open World


Today marks Kurt Gödel's one hundred and eleventh birthday. Along with Aristotle, Gödel is often considered the greatest logician in history. But I believe his influence reaches much further. In an age when both science and politics seem riddled with an incessant search for "truth" - often truth that aligns with one's preconceived social or political opinions - Gödel's work is a useful antidote, a powerful warning against the illusion of certainty.

Gödel was born in 1906 in Brünn, Moravia (now Brno in the Czech Republic), at a time when the Austro-Hungarian empire was at its artistic, philosophical and scientific peak. Many of Gödel's contemporaries, including Ludwig Wittgenstein, distinguished themselves in the world of the intellect during this period. Gödel was born to middle-class parents and imbibed the intellectual milieu of the times. It was an idyllic time, spent in cafes and lecture halls learning the latest theories in physics and mathematics and pondering the art of Klimt and the psychological theories of Freud. There had not been a major European conflict for almost a hundred years.

In his late teens Gödel came to Vienna and became part of the Vienna Circle, a group of intellectuals who met weekly to discuss the foundations of philosophy and science. The guiding principle of the circle was the philosophy of logical positivism, which held that only statements about the natural world that can be verified should be accepted as true. The group was strongly influenced by both Bertrand Russell and Ludwig Wittgenstein, neither of whom was formally a member. The philosopher Karl Popper, whose thinking on falsification remains influential in science even now, hovered around the group, although his fondness for it seems to have been unreciprocated.

It was at the tender age of 25 that young Gödel published his famous incompleteness theorems, in a 1931 paper that grew out of work he began soon after his doctoral dissertation (which had proved the related completeness theorem for first-order logic; as a rule, even the most famous scientists rarely do such groundbreaking work so soon after graduate school). In a mere twenty-one pages, Gödel overturned the foundations of mathematics and created an edifice that sent out tendrils not just into mathematics but into the humanities, including psychology and philosophy.

To appreciate what Gödel did, it's useful to take a look at what leading mathematicians thought about mathematics until that time. Both Bertrand Russell and the great mathematician David Hilbert had pursued the foundations of mathematics with conviction. In a famous address given in 1900, Hilbert had laid out what he thought were the outstanding problems in mathematics. Perhaps none of these was as important as the overarching goal of proving that mathematics was both consistent and complete. Consistency means that there exists no statement in mathematics that is both true and false at the same time. Completeness means that mathematics should be capable of proving the truth or falsity (the "truth value") of every single statement that it can possibly make.

In some sense, what Hilbert was seeking was a complete "axiomatization" of mathematics. In a perfectly axiomatized mathematical system, you would start with a few statements that would be taken as true, and beginning with these statements, you would essentially have an algorithm that would allow you to derive every possible statement in the system, along with its truth value. The axiomatization of mathematics was not a new concept; it had been pioneered by Euclid in his famous geometry text, the "Elements". But Hilbert wanted to do this for all of mathematics. Bertrand Russell had similar dreams.
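To make the idea concrete, here is a toy sketch of what such an algorithm would look like. The string-rewriting system below is entirely invented for illustration (it is not anything Hilbert actually proposed): it mechanically enumerates every "statement" derivable from its axioms by its rules of inference.

```python
from collections import deque

# A made-up formal system: one axiom and two rewrite rules standing in
# for logical axioms and rules of inference.
AXIOMS = {"I"}

def rules(s):
    yield s + "U"                   # rule 1: append a U
    yield s.replace("I", "II", 1)   # rule 2: double the first I

def enumerate_theorems(limit=10):
    """Breadth-first enumeration of everything derivable from the axioms."""
    seen, queue, theorems = set(AXIOMS), deque(AXIOMS), []
    while queue and len(theorems) < limit:
        s = queue.popleft()
        theorems.append(s)          # s is a "theorem": derivable from the axioms
        for t in rules(s):
            if t not in seen and len(t) <= 8:   # cap string length to stay finite
                seen.add(t)
                queue.append(t)
    return theorems

print(enumerate_theorems())  # the first ten derivable "statements"
```

Hilbert hoped that all of mathematics could in principle work this way, with every true statement eventually appearing in such an enumeration; Gödel's theorem is precisely the proof that no such system can capture all of arithmetic.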

In one fell swoop the 25-year-old Gödel shattered this fond hope. His first incompleteness theorem, the most well known, proved that any consistent mathematical system capable of proving the basic theorems of arithmetic will always include statements whose truth value cannot be proved using the axioms of the system. You could always 'enlarge' the system and prove the truth value in the new system, but the new, enlarged system would itself contain statements which succumbed to Gödel's theorem. What Gödel thus showed is that mathematics will always contain undecidable statements. It was a remarkable result, one of the deepest in the annals of pure thought, striking at the heart of the beautiful foundation built by mathematicians from Euclid to Riemann over the previous two thousand years.

Gödel's theorems had very far-reaching implications; in mathematics, in philosophy and in human thought in general. One of those momentous implications was worked out by Alan Turing when he proved a similar theorem for computers, addressing a problem called the "halting problem". Similar to Hilbert's hope for the axiomatization of mathematics, the hope for computation was that, given an input and a computer program, you could always find out whether the program would halt. Turing proved that you could not decide this for an arbitrary program and an arbitrary input (although you can certainly do this for specific programs). In the process Turing also clarified our definitions of "computer" and "algorithm" and came up with a universal "Turing machine" which embodies a mathematical model of computation. Gödel's theorems were thus what inspired Turing's pioneering work on the foundations of computer science.
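Turing's diagonal argument can itself be sketched in a few lines of code. The sketch below is purely illustrative: `halts` is a hypothetical decider (Turing's theorem says no correct one can exist), and the construction shows that any claimed decider is defeated by a program built to do the opposite of whatever the decider predicts about it.

```python
def make_counterexample(halts):
    """Given any claimed halting decider halts(prog, arg) -> bool,
    construct the self-referential program that defeats it."""
    def trouble(prog):
        if halts(prog, prog):   # decider says prog halts on itself?
            while True:         # ...then loop forever
                pass
        return "halted"         # decider says it loops? ...then halt
    return trouble

# A (necessarily wrong) decider that claims no program ever halts:
claims_nothing_halts = lambda prog, arg: False
trouble = make_counterexample(claims_nothing_halts)

# The decider predicts trouble(trouble) never halts -- yet it halts at once.
# A decider predicting "halts" fails symmetrically, since trouble would then
# loop forever. No decider survives being run on its own diagonal.
print(trouble(trouble))  # -> "halted", refuting the decider's prediction
```

The same self-reference that powers Gödel's proof (a statement talking about its own provability) reappears here as a program interrogating its own behavior.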

Like many mathematicians who make seminal contributions in their twenties, Gödel produced nothing of comparable value later in his life. He emigrated to the US in the 1930s and settled down at the Institute for Advanced Study in Princeton. There he made a new friend - Albert Einstein. From then until Einstein's death in 1955, the sight of the two walking from their homes to the institute and back, often mumbling in German, became a town fixture. Einstein afforded the privilege of being his walking companion to virtually no one else, and seems to have considered only Gödel his intellectual equal: in fact he held Gödel in such esteem that he was known to have said in his later years that his own work did not mean much to him, and that the main reason he went to work was to have the privilege of walking home with Gödel. At least once Gödel startled his friend with a scientific insight of his own: he showed, using Einstein's own field equations of gravitation, that time travel could be possible.

Sadly, like a few other mathematical geniuses, Gödel was also riddled with mental health problems and idiosyncrasies that got worse as he grew older. He famously tried to find holes in the U.S. Constitution while taking his citizenship exam, and Einstein, who accompanied him to the exam, had to talk him out of trying to demonstrate to the judge how the U.S. could be turned into a dictatorship (nowadays some people have similar fears, but for different reasons). After Einstein died Gödel lost his one friend at the institute. Since early childhood he had been a hypochondriac - often he could be seen dressed in a warm sweater and scarf even in the balmy Princeton summer - and now his paranoia about his health grew greatly. He started suspecting that his food was poisoned, and refused to accept anything not cooked by his protective wife Adele; in 1930s Vienna she had once physically protected him from Nazis, and now she was protecting him from imagined germs. When Adele was hospitalized with an illness, Kurt stopped eating completely. All attempts to soothe his fears failed, and on January 14, 1978 he died in Princeton Hospital, weighing only 65 pounds, having essentially succumbed to starvation. Somehow this sublimely rational, austere man had fallen prey to a messy, frightful, irrational paranoia; how these two contradictory aspects of his faculties conspired to doom him is a conundrum that will remain undecidable.

He left us a powerful legacy. What Gödel's theorems demonstrated was that not only the world of fickle human beings but also the world of supposedly crystal-clear mathematics is, in a very deep sense, unknowable and inexhaustible. Along with Heisenberg's uncertainty principle, Gödel's theorems showed us that all attempts at grasping ultimate truths are bound to fail. More than almost anyone else, Gödel contributed to the fall of man from his privileged, all-knowing position.

We see his undecidability in politics and human affairs, but it is true even in the world of numbers and watertight theorems. Sadly we seem to have accepted uncertainty in mathematics while we keep on denying it in our own lives. From political demagogues to ordinary people, the world keeps getting ensnared in passionate attempts to capture and declare absolute truth. The fact that even mathematics cannot achieve this goal should give us pause. It should inculcate a sense of wonder and humility in the face of our own fallibility, and should lead us to revel in the basic undecidability of an open world, a world without end, Kurt Gödel's world. 

Science books for 14-year-olds

A few days back a relative of mine asked me for science book recommendations for a very bright 14-year-old nephew who's a voracious reader. She was looking both for books that would be easy for him to read as well as ones which might be pitched at a slightly higher level which can still give him a good sense of the wonder and challenges of science.

The easiest way to recommend such volumes was to think about books that strongly inspired me when I myself was growing up, so here's the list I copied into my email to her. I think these books make for excellent reading not just for 14-year-olds but for 40 and 80-year-olds for that matter. Feel free to add suggestions in the comments section.

1. One, Two, Three…Infinity by George Gamow: Physicist George Gamow’s delightful book talks about many fascinating facts in maths, astronomy and biology (Gamow’s comparison of “different infinities” had blown my socks off when I first read it).

2. Microbe Hunters by Paul de Kruif: This book tells the stories of the determined and brilliant doctors and scientists who discovered disease-causing bacteria and treatments for them.

3. Men of Mathematics by E. T. Bell: This classic book does for mathematicians what Paul de Kruif’s book does for doctors. Although it romanticizes and in some cases embellishes its stories, it has inspired many famous scientists who read it and later won Nobel Prizes.

4. Almost any book by Martin Gardner is great for mathematical puzzles (e.g. “Perplexing Puzzles and Tantalizing Mathematical Teasers”).

5. Raymond Smullyan’s “What is the Name of this Book? The Riddle of Dracula and other Logical Puzzles” is another absolutely rib-tickling book on puzzles and brain teasers. What is remarkable about Smullyan's volumes is that many of his apparently silly puzzles are not only quite hard, but they hint at some of the deepest mysteries of math and logic, such as Gödel's Theorems.

6. "My Family and Other Animals" by Gerald Durrell: This delightful book talks about the author’s experiences with animals of all kinds while vacationing on a small Greek island with his family.

7. I would also recommend science fiction books by H. G. Wells if he likes fiction, especially “The Time Machine” and “The War of the Worlds.”

8. "Surely You’re Joking, Mr. Feynman!" by Richard Feynman: Feynman was one of the most brilliant physicists of the 20th century, and this very funny memoir documents his adventures in science and life. Even if he doesn’t understand all the chapters it will give him an appreciation for physics and how physics can be fun.

9. "Uncle Tungsten: Memories of a Chemical Boyhood" by Oliver Sacks. Oliver Sacks was a famous neurologist but this book talks about his exciting adventures with chemistry while growing up.

10. "A Brief History of Time" by Stephen Hawking. Some of the chapters may be advanced for him right now but it will give him a flavor of the most fascinating concepts of space and time, including black holes and the Big Bang.

11. "King Solomon's Ring" by Konrad Lorenz. This utterly entrancing and hilarious account by Nobel laureate Konrad Lorenz talks about his pioneering imprinting and other experiments with fascinating animals like sticklebacks and shrews. The story of Lorenz quacking around on his knees while baby ducks follow him is now a classic in the annals of animal behavior.

Sixty-four years later: How Watson and Crick did it

"A Structure for Deoxyribose Nucleic Acid",
Nature, April 25, 1953
Today marks the sixty-fourth anniversary of the publication of the landmark paper on the structure of DNA by Watson and Crick, which appeared in the April 25, 1953 issue of the journal Nature. Even sixty-four years later the discovery is endlessly intriguing, not just because it's so important but because in 1953, both Watson and Crick were rather unlikely characters to have made it. In 2012 I wrote a post for the Nobel Week Dialogue event in Stockholm with a few thoughts on what exactly it was that allowed the duo to enshrine themselves in the history books; it was not sheer brilliance, nor exhaustive knowledge of a discipline, but an open mind and a relentless drive to put disparate pieces of the puzzle together. I am reposting that piece here.

Somehow it all boils down to 1953, the year of the double helix. And it’s still worth contemplating how it all happened.

Science is often perceived either as a series of dazzling insights or as a marathon. Much of the public recognition of science acknowledges this division; Nobel Prizes, for instance, are often awarded for a long, plodding project sustained by sheer grit (solving a protein crystal structure), for a novel idea that seems an inspired work of sheer genius (formulating the Dirac equation), or for an accumulated body of work (organic synthesis).

But in one sense, both these viewpoints of science are flawed, since both tend to obscure the often haphazard, unpredictable, chancy and very human process of research. In reality, the marathon runner, the inspired genius and every scientist in between tread a tortuous path to the eureka moment, a path marked by blind alleys, plain old luck, unexpected obstacles and, most importantly, the human obstacles of petty rivalry, jealousy, confusion and misunderstanding. A scientific story that fully captures these variables is, in my opinion, emblematic of the true nature of research and discovery. That is why the discovery of the double helix by Watson and Crick is one of my favorite stories in all of science.

The reason why that discovery is so appealing is because it really does not fit into the traditional threads of scientific progress highlighted above. During those few heady days in Cambridge in the dawn of those gloomy post-war years, Watson and Crick worked hard. But their work was very different from, say, the sustained effort akin to climbing a mountain that exemplified Max Perutz’s lifelong odyssey to solve the structure of hemoglobin. It was also different from the great flashes of intuition that characterized an Einstein or a Bohr, although intuition was applied to the problem – and discarded – liberally. Neither of the two protagonists was an expert in the one discipline that they themselves acknowledged mattered most for the discovery – chemistry. And although they had a rough idea of how to do it, neither really knew what it would take to solve the problem. They were far from being experts in the field.

And therein lies the key to their success. Because they lacked expertise and didn’t really know what would solve the problem, they tried all approaches at their disposal. Their path to DNA was haphazard, often lacking direction, always uncertain. Crick, a man who already considered himself an overgrown graduate student in his thirties, was a crystallographer. Watson, a precocious and irreverent youngster who entered the University of Chicago when he was fifteen, was in equal parts geneticist and bird-watcher. Unlike many of their colleagues, both were firmly convinced that DNA and not protein was the genetic material. But neither of them had the background for understanding the chemistry that is essential to DNA structure: the hydrogen bonding that holds the bases together, the acid-base chemistry that ionizes the phosphates and dictates their geometric arrangement, the principles of tautomerism that allow the bases to exist in one of two possible forms, one of which is crucial for holding the structure together. But they were willing students, and they groped, asked, stumbled and finally triumphantly navigated their way out of this conceptual jungle. They did learn all the chemistry that mattered, and thanks to Crick they already understood crystallography.

And most importantly, they built models. Molecular models are now a mainstay of biochemical research. Modelers like myself can manipulate seductively attractive three-dimensional pictures of proteins and small molecules on computer screens. But modeling was in its infancy in the fifties. Ironically, the tradition had been pioneered by the duo’s perceived rival, the chemist Linus Pauling. Pauling, widely considered the greatest chemist of the twentieth century, had successfully applied his model-building approach to the structure of proteins. Lying in bed with a bad cold during a visiting sojourn at Oxford University, he had folded paper and marked atoms with a pencil to conform to the geometric parameters of amino acids derived from simple crystal structures. The end product of this modeling, combined with detailed crystallographic measurements, was one of twentieth-century biochemistry’s greatest triumphs: the discovery of the alpha-helix and beta-sheet structures, foundational structural elements in virtually every protein in nature. How exactly the same model building later led Pauling to an embarrassing gaffe in his own structure of DNA, one that violated basic chemical principles, is the stuff of folklore, narrated with nonchalant satisfaction by Watson in his classic book “The Double Helix”.

Model building is more art than science. By necessity it consists of patching together imperfect data from multiple avenues and techniques using part rational thinking and part inspired guesswork, and then building a picture of reality - only a picture - that’s hopefully consistent with most of the data and not in flagrant violation of important pieces. Even today modeling is often regarded skeptically by the data-gatherers, presumably because it lacks the ring of truth that hard, numerical data has. But data by itself is never enough, especially because the methods used to acquire it are themselves incomplete and subject to error. It is precisely by combining information from various sources that one hopes to cancel these errors or render them unimportant, so that the signal from one source complements its absence in another and vice versa. The building of a satisfactory model thus often necessarily entails understanding data from multiple fields, each part of which is imperfect.

Watson and Crick realized this, but many of their contemporaries tackling the same problem did not. As Watson recounts it in a TED talk, Rosalind Franklin and Maurice Wilkins were excellent crystallographers but were hesitant to build models using imperfect data. Franklin especially came tantalizingly close to cracking DNA. On the other hand Erwin Chargaff and Jerry Donohue, both outstanding chemists, were less appreciative of crystallography and again not prone to building models. Watson and Crick were both willing to remedy their ignorance of chemistry and to bridge the river of data between the two disciplines of chemistry and crystallography. Through Donohue they learnt about the keto-enol tautomerism of the bases that gave rise to the preferred chemical form. From Chargaff came crucial information regarding the constancy of the ratios of one kind of base (purines) to another (pyrimidines); this information would be decisive in nailing down the complementary nature of the two strands of the helix. And through Rosalind Franklin they got access - in ways that even today spark controversy and resentment - to the best crystallographic data on DNA that then existed anywhere.

What was left to do was to combine these pieces from chemistry and crystallography and put together the grand puzzle. For this model building was essential; since Watson and Crick were willing to do whatever it took to solve the structure, to their list of things-to-do they added model building. Unlike Franklin and Wilkins, they had no qualms about building models even if it meant they got the answer partially right. The duo proceeded from a handful of key facts, each of which other people possessed, but none of which had been seen by the others as part of an integrated picture. Franklin especially had gleaned very important general features of the helix from her meticulous diffraction experiments and yet failed to build models, remaining skeptical about the very existence of helices until the end. It was the classic case of the blind men and the elephant.

The facts which led Watson and Crick down the road to the promised land included a scattered bundle of truths about DNA from crystallography and chemistry: the distance between two bases (3.4 Å), the distance per turn of the helix (34 Å), which in turn indicated ten bases per turn, the diameter of the helix (20 Å), Chargaff’s rules indicating equal ratios of the two kinds of bases, Alexander Todd’s work on the points of linkage between the base, sugar and phosphate, Donohue’s important advice regarding the preferred keto form of the bases and Franklin’s evidence that the strands in DNA must run in opposite directions. There was another important tool they had, thanks to Crick’s earlier mathematical work on diffraction. Helical diffraction theory told them the kind of diffraction pattern they would expect if the structure were in fact helical. This reverse process - predicting the expected diffraction parameters from a model - is today a mainstay of the iterative process of structure refinement used by x-ray crystallographers to solve structures as complex as the ribosome.
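The crystallographic numbers in that list hang together arithmetically; a quick consistency check, using only the values quoted above, shows how the ten-bases-per-turn figure falls out:

```python
# Helix parameters quoted above, in angstroms.
base_rise = 3.4        # vertical rise from one base to the next
helix_pitch = 34.0     # distance covered by one full turn of the helix
helix_diameter = 20.0  # overall width of the double helix

bases_per_turn = helix_pitch / base_rise   # 34 / 3.4 = 10 bases per turn
twist_per_base = 360.0 / bases_per_turn    # each successive base rotated 36 degrees

print(f"{bases_per_turn:.0f} bases per turn, {twist_per_base:.0f} degrees per base")
```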

Using pieces from the metal shop in Cambridge, Watson gradually accumulated parts for the components of DNA and put them together even as Crick offered helpful advice. Once the pieces were in place, the duo were in the position of an airline pilot who has every signpost, flag and light on the runway paving his way for a perfect landing. The end product was unambiguous, incisive, elegant and, most importantly, it held the key to understanding the mechanism of heredity through complementary base-pairing. Franklin and Wilkins came down from London; the model was so convincing that even Franklin graciously agreed that it had to be correct. Everyone who saw the model would undoubtedly have echoed Watson and Crick’s sentiment that “a structure this beautiful just had to exist”.

In some sense the discovery of the DNA structure was easy; as Max Perutz once said, the technical challenges that it presented were greatly mitigated because of the symmetry of the structure compared to the controlled but tortuous asymmetry inherent in proteins. Yet it was Watson and Crick and not others who made this discovery and their achievement provides insight into the elements of a unique scientific style. Intelligence they did not lack, but intelligence alone would not have helped, and in any case there was no dearth of it; Perutz, Franklin, Chargaff and Pauling were all brilliant scientists who in principle could have cracked open the secret of life which its discoverers proudly touted that day in the Eagle Pub. 

But what these people lacked, what Watson and Crick possessed in spades, was a drive to explore, interrogate, admit ignorance, search all possible sources and finally tie the threads together. This set of traits also made them outsiders in the field, non-chemists who were trying to understand a chemical puzzle; in one sense they appeared out of nowhere. But because they were outsiders they were relatively unprejudiced. Their personalities cast them as misfits and upstarts trying to disrupt the established order. Then there was the famous irreverence between them; Crick once said that politeness kills science. All these personal qualities certainly helped, but none was as important as a sprightly open-mindedness that was still tempered by unsparing rigor, the ability to ask for and use evidence from all quarters while constraining it within reasonable bounds all the time; this approach led to model building almost as a natural consequence. And the open-mindedness also masked a fearlessness that was undaunted by the imperfect nature of the data and the sometimes insurmountable challenges that seemed to loom.

So that’s how they did it: by questioning, probing, conjecturing and model building even in the presence of incomplete data, and by fearlessly using every tool and idea at their disposal. As we approach problems of increasing biological complexity in the twenty-first century, this is a lesson we should keep in mind. Sometimes when you don’t know what approach will solve a problem, you try all approaches, all the while constraining them within known scientific principles. Richard Feynman once defined scientific progress as imagination in a straitjacket, and he could have been talking about the double helix.

R. B. Woodward, general problems and the importance of timely birth


History has its own way of securing rewards for those who ride its crests. That's especially true of scientists. If we look at the greatest scientists in history, there is no doubt that being born at the right place at the right time is paramount in scientific success. One of the main reasons for this is that certain times are ripe for solving general problems, and the specific examples that are then attacked are only special cases, presumably choice fodder for 'lesser' minds.
R. B. Woodward, who would have turned 100 this week, is certainly a case in point. He showed us how to synthesize almost any complex molecule, and it is hard to see how anyone could do that again. Until Woodward did it, many believed that it might be impossible to synthesize molecules as complex as reserpine, chlorophyll, cholesterol and vitamin B12; after he did it, there was no doubt in anyone's mind. There are still undoubtedly challenges in synthetic organic chemistry, and particular examples abound, but the general problem was solved by Woodward, and there is no chance that someone can solve it again. Contrast that with a field like computational chemistry, where the general problem of efficiently computing the free energy of binding of a small molecule to a protein is far from being solved.
There is no escaping the fact that you can fail to make your contribution to a scientific paradigm simply because you were born a few years too late. A great example is the golden age of physics in the twenties, when people like Heisenberg, Dirac, Pauli, Schrödinger and others laid the foundations of quantum mechanics. With the glaring exception of Schrödinger, all of them were in their mid-twenties and in fact were born within a year or two of each other (1900-1902). Once they invented quantum theory, nobody could invent it again. Dirac in particular was not only one of the founding fathers of quantum mechanics but also the founding father of quantum electrodynamics; the stunning success of that field after World War 2 at the hands of Feynman and others built on his work.
This meant that if you were unfortunate enough to be born just a few years later, say between 1906 and 1910, you would miss your chance to contribute to these developments no matter how talented you were. Examples of people in this category include Robert Oppenheimer, Hans Bethe and Edward Teller. All of them, and especially Bethe, made seminal contributions to physics, but they missed the bus on laying the foundations. They matured at a time when the main task of physicists was to apply the principles developed by men only a few years older than they were to existing problems. Bethe and others achieved great success in this endeavor, but there was no way they could replicate the success of their predecessors. Quantum mechanics was perhaps a rare example, since the time window during which the fortuitous confluence of brilliance, data, geographic proximity and collaboration bore copious fruit was remarkably narrow, but it does underscore the general point that the period during which one can make fundamental contributions to a field might be preciously short. As a more recent example, think of particle physics. After the discovery of the Higgs boson, how easy would it be for someone just entering the field to make a fundamental discovery of comparable magnitude? What are the chances that a particle as important as the quark or the neutron would again be discovered? While there are still plenty of important discoveries to be made in physics, one could make a good case that the age of fundamental discovery at the level of the atom might be over. That is why one cannot help but feel a bit sorry for someone like Edward Witten, perhaps the greatest mathematical physicist since Paul Dirac. If Witten had been born in 1900 he might well have formulated quantum theory or discovered the uncertainty principle, but being born in 1951, he had to be content with formulating string theory instead, a field still struggling to find experimental validation.
A similar theme applies to chemistry. Woodward was head and shoulders above many of his contemporaries, but he also had the advantage of being born a few years earlier. Thus his first synthetic success came with quinine in 1944, a time when many future leaders in the field, like E. J. Corey, Carl Djerassi, Samuel Danishefsky and Gilbert Stork, were just entering high school or graduate school. This was still the case when Woodward synthesized two other landmarks, strychnine (1954) and reserpine (1956). Being a titan certainly requires being born with great intellectual gifts, but it also helps tremendously to be born at the right time. Woodward matured when the conditions in organic chemistry were right for a man of his stature to revolutionize the field. The British chemist Robert Robinson and others had just described the electronic theory of organic chemistry, which charted the movements of electrons in organic reactions; UV and infrared spectroscopy were coming into vogue; and structure determination by chemical degradation had reached its zenith. Woodward combined all these tools into a superb new methodology of his own and then applied it to scale hitherto unscaled peaks. He pioneered spectroscopy as an alternative to tedious chemical degradation for determining the structure of molecules and applied sound theoretical principles to making complex molecules. Another specific example is his development of the Woodward-Hoffmann rules, which allow chemists to predict the course of many key reactions of both pure and applied interest. In formulating these rules with Roald Hoffmann, Woodward was again in a rather unique position, able to appreciate both observations arising from his synthesis of vitamin B12 and the widespread dissemination of molecular orbital principles, which were ripe for application.
After he did all this, that was it; the others also made highly innovative contributions, but in many ways they were duplicating his success.
This discussion also has a bearing on the frequent debate about awarding Nobel Prizes to one discipline or another. The fact is that we are very unlikely to see Nobel Prizes for organic synthesis in the future, because many of the fundamental problems in the field have been solved. There has not been a general organic synthesis prize awarded since 1990 (Corey), and for good reason. Methodology has been recognized more often, but even the two most recent methodology prizes (2005 and 2011) stem from work done about twenty years earlier. There is of course a chance that some transition metal-catalyzed methodology may further the cause of efficient, high-yielding and environmentally friendly synthesis in a significant manner, but such achievements are likely to be rare.
There is a very important lesson to be learnt from all this regarding the education of students in the history of science. Students should never be discouraged from studying particular scientific fields, but graduate students at the cusp of their research careers should be given a good idea of where the greatest opportunities in science lie. There’s still nothing to stop a Phil Baran (widely considered to be the brightest synthetic organic chemist of his generation) from venturing into organic synthesis, but he should do so with full knowledge of what Woodward and Corey have done before him. One way to ensure success in science is of course to work in the “hottest” fields, and while history often has its own peculiar way of defining what these are, it’s sometimes clearer which ones have passed their prime. And it’s important to drive home this point to young researchers.

Intramolecular Hydrogen Bonds, We Hardly Knew Thee

Intramolecular hydrogen bonds (IHBs) are interesting beasts. They can be used to improve the potency of a drug by constraining it in a bioactive conformation, and they can be used to hide polarity and improve permeability, with cyclosporin being the classic example of the latter.

But the exact potency gain you get from forming an intramolecular hydrogen bond is not clear. Conformationally you could constrain a molecule through an IHB, but you would still be introducing polarity and hydrogen bond donors and acceptors, and the resulting desolvation penalties would have to be exactly compensated by the IHB to yield a net positive effect. A group from D. E. Shaw and AstraZeneca has now looked at about 1200 matched molecular pairs and their biological activities to figure out the contribution of IHBs to potency. The average gain they see? Close to zero. This means that on average, an IHB is as likely to blunt your potency as it is to improve it.
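The shape of such a matched-pair analysis is simple to sketch. The following is a minimal, self-contained illustration with made-up numbers (this is not the actual D. E. Shaw/AstraZeneca dataset, and the variable names are my own): for each IHB/non-IHB matched pair, take the potency difference in log units, then look at the mean and flag the rare large gains.

```python
# Toy matched-molecular-pair analysis: potency difference (in log
# units, e.g. delta-pIC50) between an IHB-capable analog and its
# matched partner. All numbers below are invented for illustration.
from statistics import mean

# Each pair: (pIC50 of IHB-capable analog, pIC50 of matched partner)
pairs = [
    (9.1, 6.4),  # outlier: a ~2.7 log unit gain
    (7.2, 7.1),  # essentially no change
    (6.5, 6.9),  # the IHB analog is actually worse here
    (5.8, 6.4),
    (7.4, 7.7),
    (6.0, 6.5),
    (6.6, 7.2),
    (8.0, 8.4),
]

deltas = [ihb - partner for ihb, partner in pairs]
print(f"mean gain from IHB: {mean(deltas):+.2f} log units")

# Flag the outliers showing gains of at least 2 log units.
outliers = [d for d in deltas if d >= 2.0]
print(f"large-gain outliers: {len(outliers)}")  # 1
```

The point the sketch makes is the same as the study's: a near-zero average can coexist with a handful of dramatic outliers, which is why the outliers deserve individual inspection.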

What's more interesting are the outliers. A small but distinct fraction of matched molecular pairs show a gain of at least 2 to 3 log units. Inspection of these results sheds light on why these cases really benefit from the formation of IHBs, but it's also important not to overemphasize the positive role of the IHB in every instance, especially in cases where the non-IHB member of the pair displays particularly severe repulsive interactions.

For instance, there's an N-Me vs N-H pair in which the latter can form an IHB while the former cannot. Not only can the N-Me compound not form this bond, but the methyl group probably introduces a steric clash and a very different conformational profile, perhaps leading to a conformational penalty in binding to the target. Similarly, another matched pair presents an N-H vs O difference. Here again, not only can the oxygen not form the bond, it likely strongly repels the other oxygen participating in the IHB. Thus, these outliers are outliers not because the IHB is particularly stable, but because the complementary arrangement is particularly unstable.

Nonetheless, this is a nice study to keep in mind every time you want to use an IHB as a tactic for improving potency or permeability. It may well work, but then it may well not. As in most cases in drug discovery, the decision to incorporate an IHB-forming element will be dictated by many other factors including cost, resources and synthetic accessibility. As with many other tactics in the field, when it comes to IHBs, caveat emptor.

Richard Feynman's sister Joan's advice to him: "Imagine you're a student again"

Richard and Joan at the beach
Richard Feynman might have been the most famous Feynman of the twentieth century, but his younger sister Joan - who turned 90 a few days ago - was no slouch. At a time when it was difficult for women to enter and thrive in science, she became a noted astrophysicist in her own right, investigating stellar nucleosynthesis and the aurora among other topics. By all accounts the two also enjoyed a warm relationship, with Richard encouraging Joan's scientific interests from an early age.

There was one time, however, when it was Joan who gave Richard a very valuable piece of advice for solving a thorny scientific problem, and not only did it serve him well throughout his career, but it's also one that all of us can really benefit from. In the 1950s, during the heyday of particle physics, Feynman was at a conference in Rochester where there was word of a profoundly deep potential discovery - so-called 'parity violation'. Parity violation means that the laws of physics distinguish left from right, a possibility that seemed to go against very fundamental physical principles. For instance, in chemistry there is no a priori reason why the 'handedness' of amino acids should be left rather than right, and most researchers think the only reason we have left-handed amino acids is an initial accident that then got perpetuated. And yet there were some unstable particles whose decay into simpler ones seemed to violate parity.

At the conference Feynman read a paper by Chen Ning Yang and Tsung Dao Lee, two Chinese-American physicists who had shown theoretically that parity could be violated in certain ways. At that point in time Feynman was in the middle of a kind of scientific slump. He had made his most famous, Nobel Prize-winning discovery - the reformulation of quantum electrodynamics - about ten years earlier, and was looking for fresh scientific questions to ponder. Parity violation seemed exactly like the kind of bold and potentially revolutionary problem that would benefit from an unconventional mind like his. But he felt stuck. At that point Joan came to his rescue. As he writes in his memoirs,

“During the conference I was staying with my sister in Syracuse. I brought the paper home and said to her, “I can’t understand these things that Lee and Yang are saying. It’s all so complicated.”

“No,” she said, “what you mean is not that you can’t understand it, but that you didn’t invent it. You didn’t figure it out your own way, from hearing the clue. What you should do is imagine you’re a student again, and take this paper upstairs, read every line of it, and check the equations. Then you’ll understand it very easily.”

I took her advice, and checked through the whole thing, and found it to be very obvious and simple. I had been afraid to read it, thinking it was too difficult.”

And this was sound advice indeed. Feynman thought hard about parity violation, and along with his 'frenemy' Murray Gell-Mann came up with a theory of beta decay that was one of his most significant contributions to physics (it would be the only paper the two jointly authored). He got out of his scientific slump and regained his momentum. Joan's advice - for which he deeply thanked her later - was pivotal in setting him on this path.

The advice may seem obvious, and yet it's something that we often forget once we graduate from college or graduate school and progress in our scientific careers. One of my college professors once offered another related piece of advice: "Nothing's difficult, only unfamiliar". When we are students we are used to actually studying difficult topics and walking through them line by line (perhaps because it's required for the final exam, but nonetheless!). Later somehow we seem to lose the zeal and inclination for sustained, serious study of the kind that we did as eager college students. 

What Joan was telling Richard was that it's only prolonged attacks on tough subject material that yield insights. When you didn't invent something - and that applies to most things - you have to go back to basics and try to understand it from scratch. That approach of taking everything apart and understanding it from a fresh perspective played right into the Feynman playbook; it was what had enabled him to reinvent quantum mechanics. When it came to parity violation the strategy clearly worked for him. And there is no reason why it should not work for lesser mortals. We didn't invent many things, but we can understand most things.