Field of Science

Steven Weinberg (1933-2021)

I was quite saddened to hear about the passing of Steven Weinberg, perhaps the dominant living figure from the golden age of particle physics. I was especially saddened since he seemed to be doing fine, as indicated by a lecture he gave at the Texas Science Festival this March. I think many of us thought that he was going to be around for at least a few more years. 

Weinberg was one of a select few individuals who transformed our understanding of elementary particles and cemented the creation of the Standard Model, in his case by unifying the electromagnetic and weak forces; for this he shared the 1979 Nobel Prize in physics with Abdus Salam and Sheldon Lee Glashow. His 1967 paper "A Model of Leptons", which heralded the unification, was only three pages long and remains one of the most highly cited articles in the history of physics.

But what made Weinberg special was that he was not only one of the most brilliant theoretical physicists of the 20th century but also a pedagogical master with few peers. His many technical textbooks, especially his 3-volume "Quantum Theory of Fields", have educated a generation of physicists; meanwhile, his essays in the New York Review of Books and other venues, collected and published as popular books, have educated the lay public about the mysteries of physics. But in his popular books Weinberg also revealed himself to be a real Renaissance Man, writing not just about physics but about religion, politics, philosophy, history (including the history of science), opera and literature. He was also known for his political advocacy of science. Among scientists of his generation, only Freeman Dyson had that kind of range.

There have been some great tributes to him, and I would point especially to the ones by Scott Aaronson and Robert McNees, both of whom interacted with Weinberg as colleagues. The tribute by Scott especially shows the kind of independent streak that Weinberg had, never content to go with the mainstream and always seeking orthogonal viewpoints and original thoughts. In that he very much reminded me of Dyson; the two were in fact friends and served together on the government advisory group JASON, and my conversation with Weinberg which I describe below ended with him asking me to give my regards to Freeman, whom I was meeting in a few weeks.

I had the good fortune of interacting with Steve on two occasions, both rewarding. The first time, I had the opportunity to join him on a Canadian television panel on the challenges of Big Science. You can see the discussion here:

https://www.tvo.org/video/the-challenge-of-big-science

The next time was a few years later when I contacted him about a project and asked whether he had some thoughts to share about it. Steve didn't know me personally (although he did remember the Big Science panel) and was even then very busy with writing and other projects. In addition, the project wasn't something close to his immediate interests, so I was surprised when he not only responded right away but asked me to call him at 10 AM on a Sunday and then spoke generously for more than an hour. I still have the recording.

Steve was a great physicist, a gentleman and a Renaissance Man, a true original. We are unlikely to see the likes of him for a long time. 

One of the reasons I feel wistful is because he was among the last of the creators of modern particle physics, from an enormously fruitful time in which theory went hand in hand with experiment. This is different from the last twenty years, in which fundamental physics and especially string theory have been struggling to make experimental connections. In cosmology, however, there have been very exciting developments, and Weinberg, who devoted his last few decades to the topic, was certainly very interested in these. Hopefully fundamental physics can become as involved with the productive interplay of theory and experiment as cosmology and condensed matter physics are, and hopefully we can again resurrect the golden era of science in which Steven Weinberg played such a commanding role.

Kurt Gödel's open world

Two men walking in Princeton, New Jersey on a stuffy day. One shaggy-looking with unkempt hair, avuncular, wearing a hat and suspenders, looking like an old farmer. The other an elfin man, trim, owl-like, also wearing a fedora and a slim white suit, looking like a banker. The elfin man and the shaggy man used to make their way home from work every day. Passersby and motorists would strain their heads to look. Everyone knew who the shaggy man was; almost nobody knew who his elfin companion was. And yet when asked, the shaggy man would say that his own work no longer meant much to him, and the only reason he came to work was to have the privilege of walking home with the elfin man. The shaggy man was Albert Einstein. His walking companion was Kurt Gödel.

What made Gödel, a figure unknown to the public, so revered among his colleagues? The superlatives kept coming. Einstein called him the greatest logician since Aristotle. The legendary mathematician John von Neumann, who was his colleague, argued for his extraction from fascism-riddled Europe, writing a letter to the director of his institute saying that “Gödel is absolutely irreplaceable; he is the only mathematician about whom I dare make this assertion.” And when I made a pilgrimage to Gödel’s house during a trip to his native Vienna a few years ago, the plaque in front of the house made his claim to posterity clear: “In this house lived from 1930-1937, the great mathematician and logician Kurt Gödel. Here he discovered his famous incompleteness theorem, the most significant mathematical discovery of the twentieth century.”

The author in front of the house in Vienna where Gödel was living with his mother and brother when he proved his Incompleteness Theorems

The reason Gödel drew gasps of awe from colleagues as brilliant as Einstein and von Neumann was because he revealed a seismic fissure in the foundations of that most perfect, rational and crystal-clear of all creations – mathematics. Of all the fields of human inquiry, mathematics is considered the most exact. Unlike politics or economics, or even the more quantifiable disciplines of chemistry and physics, every question in mathematics has a definite yes or no answer. The answer to a question such as whether there is an infinitude of prime numbers leaves absolutely no room for ambiguity or error – it’s a simple yes or no (yes in this case). Not surprisingly, mathematicians around the beginning of the 20th century started thinking that every mathematical question that can be posed should have a definite yes or no answer. In addition, no mathematical question should have both answers. The first requirement was called completeness, the second one was called consistency.

The overarching goal of mathematics was to prove completeness and consistency starting from a fundamental, minimal set of axioms, much like Euclid had built up the grand structure of plane geometry starting with a handful of axioms in his marvelous ‘Elements’.

Mathematicians had good reasons to be optimistic. The 19th century had perhaps been the most important for the development of the discipline, solidifying results in analysis, geometry and other key mathematical domains. The mathematical giants of that time, textbook names like Gauss, Dedekind, Cantor and Riemann, had put mathematics on a solid foundation. It was against this background that Bertrand Russell and Alfred North Whitehead wrote their magnum opus, the dense ‘Principia Mathematica’ that sought to put mathematics on a solid foundation of logic. Unnecessary axioms of mathematics would be discarded, the superstructure trimmed, and mathematics would be put on a sound basis of symbolic logic. One of the major goals of their work was to resolve any paradoxes in mathematics that lead to statements akin to the famous Liar’s Paradox (“I am lying”) – statements that are false when they are true and true when they are false. Russell and Whitehead thought that such paradoxes were merely a consequence of not clarifying the axioms and the deductions from them well enough.

David Hilbert, perhaps the leading mathematician of the early 20th century

The intellectual godfather of the mathematicians was David Hilbert, perhaps the leading mathematician of the first few decades of the twentieth century. In a famous 1900 address at the International Congress of Mathematicians in Paris, Hilbert set out 23 open problems in mathematics that he hoped would engage the brightest minds of the next few decades; it is a measure of Hilbert’s perspicacity in picking these problems that some of them are still unsolved and pursued. The second among these problems was to prove the consistency of arithmetic using the kind of axiomatic approach developed by Russell and Whitehead. Hilbert was confident that within a few decades at most, every question in mathematics would have a definite answer that could be built up from the axioms. He famously proclaimed that there would be no ‘ignorabimus’ (a statement whose truth or falsity could never be known) in mathematics. Mathematicians soon began to make themselves busy in carrying out Hilbert’s program.

When Hilbert gave his talk, Kurt Gödel was still six years away from being born. Thirty years later he would drive a wrecking ball into Hilbert’s dream, showing that even this most exact, pristine of all human intellectual endeavors contained truths that are fundamentally undecidable. And he did it in such a final manner that there could be no debate about it. That is what left brilliant men like Einstein and von Neumann with their mouths agape.

Now we have a biography of Gödel and his times by the veteran science and history writer Stephen Budiansky, the most evocative and comprehensive account of the logician written so far for a general audience. The book is really about Gödel and his times rather than his work. There have been some fine books on Gödel until now, including the detailed “Incompleteness” by Rebecca Goldstein, the impressionistic “Gödel: A Life of Logic” by John Casti and, most notably, John Dawson’s “Logical Dilemmas”, which is perhaps the most complete exploration of the man. But Budiansky’s book is the best one so far at situating Gödel in the magical time that was turn-of-the-century Austria-Hungary, a time that was tragically shattered, with a totality approaching anything in mathematics, by the onslaught of totalitarianism. Budiansky also sensitively investigates Gödel’s dark side; the same mind that could not tolerate anything that was not precisely defined fell prey to its own exacting standards and unleashed demons that would lead to a life punctuated by paranoid delusions and extreme starvation. When Gödel died in 1978, he weighed 65 pounds.

Gödel’s end was a far cry from his beginning in the glorious years of the Austro-Hungarian empire. The emperor Franz-Joseph, an arch Habsburg, prized order above everything else. The town of Brünn that Gödel was born in was one of the most industrialized towns in the empire, and his father Rudolf was a well-to-do managing director of a textile firm. But it was from his mother Marianne, to whom he was closer and who was well versed in music and the arts, that Kurt got much of his intellect; throughout his life, Marianne would remain a crucial link and lifeline through letters. A brother, Rudi, who became a doctor, completed the family. When Gödel was born, the empire was perhaps the foremost fountain of intellect in Europe and possibly the world. In art and philosophy, music and architecture, science and mathematics, Vienna and Budapest led the way. Names like Freud, Wittgenstein, Klimt and Zweig trickled out of the fin de siècle city in a steady stream. They exemplified Vienna and Eastern Europe’s cafe culture, with places like the Café Reichsrat and Café Josephinum becoming battlegrounds of fervent intellectual debate on the deepest questions of epistemology, fueled by marathon shouting matches lasting into the night, strong black coffee topped with whipped cream, and scribbles on the marble tabletops.

It was in this heady intellectual milieu that Gödel grew up. He was an outstanding student at the Realgymnasium and showed a meticulous attention to detail that was to be both his biggest strength and his ruin. He was also often the poster child for the head-in-the-clouds intellectual: throughout his life, as brilliant as his mathematical acumen was, he often remained oblivious to the state of politics around him. A fondness for what many would consider childish preoccupations, like children’s toys and kitschy household objects, would punctuate his otherwise fanatical commitment to the most abstract reaches of human thought.

Under the facade of Vienna’s intellectual beehive lay a rotten foundation of class and religious inequality, bitterly growing nationalism and, most fatally, anti-Semitism. Viennese Jews had been emancipated by Franz Joseph in 1867, and centuries of bottled-up ambition and talent in the face of persecution found a release that led to unprecedented success not just in the practical arts like medicine and law but also in the most abstract realms of mathematics and philosophy. This success bred resentment among Vienna’s growing middle class of gentiles. The collective philosophical talent of both Jews and non-Jews culminated in the creation of the famous Vienna Circle and their philosophy of logical positivism. Logical positivism demanded the rejection of anything that could not be rigorously and scientifically verified, and its philosophers sought to outdo their fellow scientists and place metaphysics on a solid scientific foundation. The philosophers Hans Hahn, who was Gödel’s PhD advisor at the University of Vienna, and Moritz Schlick were the leaders of the movement; their patron saints were the mysterious, penetrating Ludwig Wittgenstein and Bertrand Russell. Wittgenstein deigned to speak to the circle only once and remained a distant figure, while the philosopher of science Karl Popper tried to become an official member but was spurned.

Into this milieu entered Kurt Gödel, only 24 years old. He became a regular member of the Vienna Circle but spoke up rarely, preferring instead to listen and occasionally interject with a penetrating comment. But even then Gödel’s predilections ran counter to the circle’s. While the circle emphasized the existence only of propositions that could be verified by grounding in the real world, Kurt became a staunch Platonist whose belief that mathematical objects existed in a world of their own, without any human intervention, only became deeper during his life. For Gödel, numbers, sets and mathematical axioms were as real as planets, bacteria and rocks, simply waiting to be discovered and existing independent of human effort. A large part of this conviction stemmed from the sheer beauty of the mathematical structures that Gödel and his colleagues were uncovering: how could such beautiful objects exist only under the precondition of discovery by ordinary human minds?

By 1930 the Platonist Gödel was ready to drop his bombshell on the world of mathematics and logic. In September 1930, a big conference was going to be organized in Königsberg. German mathematics had been harmed because of Germany’s instigation of the Great War, and Hilbert’s decency and reputation played a big role in resurrecting it. Just before the conference Gödel met his friend Rudolf Carnap, a founding member of the Vienna Circle, in the Café Reichsrat. There, perhaps scribbling a bit on the marble table, he told Carnap that he had just shown that Hilbert and Russell’s program to prove the completeness and consistency of mathematics was fatally flawed. A few days later Gödel delivered his talk at the conference. As often happens with great scientific discoveries, few people understood the significance of what had just happened. The one exception was John von Neumann, a child prodigy and polymath who was known for jumping ten steps ahead of people’s arguments and extending them in ways that their creators could not imagine. Von Neumann buttonholed Gödel, fully understood his result, and then a week later extended it to a startling new domain, only to learn through a polite note from Gödel that Gödel had already done so himself.

So what had Gödel done? Budiansky’s treatment of Gödel’s proof is light, and I would recommend the 1950s classic “Gödel’s Proof” by Ernest Nagel and James Newman for a semi-popular treatment. Even today Gödel’s seminal paper is comprehensible in its entirety only to specialists in the field. But in a nutshell, what Gödel had found, using an ingenious bit of self-referential mapping between numbers and mathematical statements, was that any consistent mathematical system that could support the basic axioms of arithmetic as described in Russell and Whitehead’s work would always contain statements that were unprovable. This ingenious scheme included a way of encoding mathematical statements as numbers, allowing numbers to “talk about themselves”. What was worse and even more fascinating was that the axiomatic system of arithmetic would contain statements that were true, but whose truth could not be proven using the axioms of the system – Gödel thus showed that there would always be a statement G in this system which would, like the old Liar’s Paradox, say in effect, “G is unprovable”. If G is true, then it is unprovable by definition; but if G were false, it would be provable, contradicting itself. Thus the system would always contain ‘truths’ that are undecidable within the framework of the system. And lest one thought that you could then just expand the system and prove those truths within the new system, Gödel infuriatingly showed that the new system would contain its own unprovable truths, and so on ad infinitum. This is called the First Incompleteness Theorem.

An example of Gödel’s ingenious technique to transform mathematical symbols – and therefore statements – into numbers. (Source: Math stack exchange)
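Gödel’s arithmetization can itself be sketched in a few lines of code. The following Python toy is purely illustrative – the symbol codes and function names are my inventions, not Gödel’s original assignment – but it shows the mechanism: a formula, viewed as a string of symbols, is encoded as a product of successive primes raised to the symbols’ codes, and unique prime factorization guarantees the encoding can be reversed, so statements about formulas become statements about numbers.

```python
# Toy Gödel numbering. Each symbol of a tiny formal alphabet gets a
# numeric code (an illustrative choice, not Gödel's own), and a formula
# is encoded as 2^c1 * 3^c2 * 5^c3 * ..., one prime per symbol position.
# Unique prime factorization makes the mapping invertible.

SYMBOL_CODES = {'0': 1, 's': 2, '=': 3, '+': 4, '(': 5, ')': 6}
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]  # enough for 10-symbol formulas

def godel_number(formula: str) -> int:
    """Encode a formula (a string over the toy alphabet) as one integer."""
    n = 1
    for p, sym in zip(PRIMES, formula):
        n *= p ** SYMBOL_CODES[sym]
    return n

def decode(n: int) -> str:
    """Recover the formula from its Gödel number by factoring out primes."""
    code_to_sym = {c: s for s, c in SYMBOL_CODES.items()}
    symbols = []
    for p in PRIMES:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if e == 0:
            break  # no more symbol positions
        symbols.append(code_to_sym[e])
    return ''.join(symbols)

print(godel_number("0=0"))            # 2^1 * 3^3 * 5^1 = 270
print(decode(godel_number("s0=s0")))  # round-trips back to "s0=s0"
```

Because formulas – and, with more bookkeeping, entire proofs – become ordinary integers, arithmetic can state facts about its own formulas, which is exactly the self-reference Gödel exploited.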

The Second Incompleteness Theorem showed that such a system cannot prove its own consistency; in effect, any formal system rich enough to encompass arithmetic can prove its own consistency only if it is in fact inconsistent. This was an even more damning conclusion. Far from getting rid of the paradoxes that Russell and Whitehead believed would vanish if only one clarified the axioms and the deductions from them well enough, Gödel showed that such paradoxes are as foundational a feature of mathematical systems as anything else. As far as Hilbert’s program was concerned, Gödel had uncovered a rotten foundation underlying mathematics that doomed it forever.

Ironically, just a day after Gödel’s talk, Hilbert gave a speech reinforcing his belief that there would be no ‘ignorabimus’ in mathematics and ending with a famous refrain: “Wir müssen wissen – wir werden wissen.” (“We must know – we will know.”). As sometimes happens when a great mind declares a truth in a scientific discipline with such finality, reactions ranged from disbelief and denial to acceptance. Hilbert himself recognized the significance of Gödel’s results but held out hope that they wouldn’t be as far-reaching as they were thought to be. Von Neumann, on the other hand, is on record saying that after he heard of the incompleteness theorems, he decided to abandon his own productive work in set theory and the foundations of logic and move on to other topics. Gödel’s work had a seismic impact on that of many other thinkers. His proof that a system made up of purely mechanical, axiomatic procedures would contain undecidable propositions inspired Alan Turing’s own answer in the negative to the question of whether a mechanical computer could decide the truth value of an arbitrary proposition in a finite number of steps. Most notably, Gödel’s ingenious scheme of having numbers represent both themselves and instructions specifying operations on themselves is, without his ever knowing it, a basis of digital computing.

Thus by the time he was 24 years old, Gödel had established himself as a logician of the first rank and immortalized his name in history. In the next few years his friends and colleagues spread his gospel around the world, most notably in the United States. The noted mathematician Karl Menger was a close friend who spent a semester in Iowa, sending Gödel periodic letters describing life in America (“Americans as a rule do not go for walks, they think that dashing around in their cars on Sundays is sufficient recreation.”). Not only did Menger give talks about Gödel’s results in the United States, but he performed a crucial service. Helped by largesse from the brother-sister pair of Louis and Caroline Bamberger, who had sold their clothing business to Macy’s, the educator Abraham Flexner had established an institute in Princeton dedicated to pure thought, with no administrative and teaching duties. To populate this heavenly think tank Flexner had bagged the biggest fish of them all – Albert Einstein. Along with Oswald Veblen, von Neumann and a few others, Einstein became one of the first faculty members at what came to be called the “Institute for Advanced Study”, although given the exorbitant money that Flexner dangled in front of his faculty in addition to the unique work environment, it quickly came to be christened the “Institute for Advanced Salaries”. Menger recommended that Flexner hire Gödel on a temporary basis. Gödel would visit a few more times before permanently relocating in 1940.

Kurt and Adele at their wedding (Source: Institute for Advanced Study)

By this time, both the personal currents of life and the larger currents of history would steer Gödel’s destiny. In 1927 he had met Adele Porkert, an older woman who lived across the street; they married in 1938. Adele had worked as a nightclub dancer and as a masseuse, qualifications which neither Gödel’s colleagues nor his family considered worthy of his stature. But Adele was to be a true mother to Kurt until his death. Her role became clear when Gödel started suffering from a kind of psychotic paranoia that would mark him as indelibly as his genius. Starting in the early 1930s, he spent time in sanatoriums, convinced that an apparently weak heart from a bout of rheumatic fever which afflicted him as a child would kill him. More ominously, he started suspecting the sanatorium staff of conspiring to poison him or inject him with lethal substances. He drastically lost weight, and Adele had to feed him food that she had prepared herself to convince him to eat it. In retrospect it is clear that the ultra-logical Gödel also suffered from what we now call obsessive compulsive disorder. He obsessed over his bodily functions, interpreting ordinary sensations as signs of trouble – his letters to his mother from America are generously interspersed with accounts of the health of his bowels. Unsurprisingly, this obsession led to a detailed keeping of diaries recording his thoughts and real and perceived symptoms, along with miscellaneous hospital, travel and grocery receipts. It is to Budiansky’s credit that he has combed through these sources to reveal to us the vivid portrait of a methodical, detail-oriented stickler whose very commitment to logic and details would prove to be his undoing.

Political events were also clearly not evolving favorably by the time Gödel first made his way to America. Austrian anti-Semitism already had a long history, and German-speaking Austrians were fanatically enthusiastic about embracing their former compatriot and army corporal Adolf Hitler. Hitler triumphantly marched into Austria to ecstatic, waving crowds in March 1938 during the Anschluss. But even while Gödel had been proving his famous theorems, the writing had been on the wall. The University of Vienna had been a venue for anti-Semitic demonstrations for a long time, and the Vienna Circle, with its Jewish members and commitment to abstract thought and “Jewish science” like relativity, was a brightly painted target. In 1936, Johann Nelböck, a mentally troubled former student of Moritz Schlick, shot and killed Schlick on the steps of the university, seething under the delusion that Schlick was having an affair with a female student he was obsessed with. Supported by the Nazis and seen as a martyr to the cause of eradicating the foreign element from the body of the Teutonic intellect, Nelböck was sentenced to ten years in prison, only to be promptly released by right-wing authorities in 1938 after the Anschluss. After Schlick’s murder the Vienna Circle effectively dissolved, and with it a glorious intellectual age whose quick demise remains a reminder of how quickly totalitarianism can destroy what takes decades or even centuries to build. After Jewish professors were dismissed throughout Germany and Austria, Hilbert was asked by the new Nazi minister of education what mathematics was like at the University of Göttingen, where he taught. “There is no mathematics anymore at Göttingen”, Hilbert retorted.

Gödel, as involved as he was with the search for mathematical truth, was not finely attuned to the political developments in his country. Two days before Hitler’s takeover of Austria, Menger received a letter from his friend about mundane matters of conferences and mathematics which, as he put it, “may well represent a record for unconcern on the threshold of world-shaking events.” But even Gödel could not ignore what was happening to his colleagues at the university, and after some unpleasant episodes, including one in which he was bullied on the streets by Nazi thugs and Adele fended off their taunts with her umbrella, the couple decided to emigrate to America for good. Bureaucratic snafus regarding Gödel’s visa and his new status as a German citizen led to intervention from the director of the Institute for Advanced Study at von Neumann’s goading: that is when von Neumann wrote the remarkable letter urging him to do everything he could to enable Gödel’s emigration, saying that Gödel was absolutely irreplaceable. Because German passengers crossing the Atlantic faced the dual hazards of Nazi U-boats and potential arrest as enemy aliens by British authorities, Kurt and Adele took the long, scenic route, going through Eastern Europe to Moscow and then taking the Trans-Siberian railroad to Vladivostok, before finally boarding a steamer for San Francisco. Gödel would never leave the Eastern Seaboard of the United States again during his lifetime.

Kurt and Adele arrived in Princeton, a place puckishly described by recent resident Albert Einstein as “a quaint, ceremonial village, full of demigods on stilts”. Gödel had never known Einstein before coming to America, and yet it was Einstein who, along with the Austrian economist Oskar Morgenstern, provided him with the kind of friendship he had so missed after leaving his Viennese colleagues. Einstein and Gödel made for an unlikely pair: the former gregarious, generous, earthy and shabbily dressed, always eyeing the world through a sense of humor; the latter often withdrawn, hyper-logical, critical and unable to lighten up. And yet these exterior differences hid a deep and genuine friendship that went beyond their common background in German culture. Their families often visited each other, and Adele once knitted a woolen vest for Einstein. Einstein communed with few others at the institute, animatedly conversing in German with Gödel during their walks home together. It was Einstein who accompanied Gödel and Morgenstern to Gödel’s citizenship ceremony. At the ceremony the famously pedantic and meticulous Gödel, who had studied exhaustively for the citizenship test above and beyond the standard requirements, told the judge that he had found a flaw in the Constitution that would allow the United States to turn into a dictatorship. Einstein and Morgenstern hastily cut him off before he could say anything further, and the ceremony progressed smoothly.

But the real reason Einstein so admired Gödel was likely because he shared Gödel’s unshakeable belief in the purity of the mathematical constructs governing the universe. Einstein who was not formally religious nevertheless always harbored a deep belief that the laws of physics exist independently of human beings’ abilities to identify and tamper with them – that was one reason he was so uneasy with the then standard interpretation of quantum mechanics which seemed to say that there was no reality independent of observers. Gödel outdid him and went one step further, believing that even numbers and mathematical theorems exist independently of the human mind. It was this almost spiritual and religious belief in the objective nature of mathematical reality that perhaps formed the most intimate bond between the era’s greatest theoretical physicist and its greatest logician. It also helped that Gödel got interested in Einstein’s general theory of relativity, once playing with the equations and startling Einstein by concluding that the theory allowed for the existence of closed timelike curves – in other words, a universe without past and future, without time. For Gödel’s Platonic mind, this kind of result based purely on mathematics and without any physical basis was exactly the kind of absolute mathematical truth he believed in.

Oskar Morgenstern’s friendship with Gödel ran even deeper, in part because it lasted longer: Morgenstern outlived Einstein by more than two decades and survived almost until Gödel’s own death. Morgenstern, who combined worldly wisdom with brilliance in economics, had made a name for himself by writing “Theory of Games and Economic Behavior” with von Neumann, which established the field of game theory. He worried about Gödel’s work, about Gödel’s health and about Gödel’s marriage. One of the main sources on Gödel’s life is Morgenstern’s copious, often heartbreaking notes on Gödel’s worries and mental deterioration in his last years. He saw that Adele, while devoted to Kurt, was not a good fit in snobbish Princeton. A young Freeman Dyson vividly described an uncomfortable scene at a party where a very drunk Adele grabbed him and forced him to dance for twenty minutes while Kurt miserably stood by; Dyson could only imagine the horror of their lives. But Adele stayed utterly loyal to Kurt, feeding him, indulging his paranoid health fears and generally taking good care of him.

After coming to the institute Gödel contributed one significant piece of work that added to the already hallowed place in mathematical history he enjoyed. In his famous 1900 address, the problem Hilbert had put at the top of his list was the so-called Continuum Hypothesis. The hypothesis deals with one of the most startling and deepest aspects of mathematics – a comparison of different kinds of infinity. That there are different kinds of infinity at all was discovered by Georg Cantor and came as a bombshell. Cantor showed that the “first” kind of infinity, called countable infinity, is represented by the set of natural numbers. But there is another, much larger kind of infinity, an uncountable infinity, represented by the real numbers. It may seem absurd to say that one infinity is larger or smaller than another, but using ingenious arguments Cantor showed that the real numbers cannot be put in one-to-one correspondence with the natural numbers and form a strictly bigger set. The Continuum Hypothesis asks whether there is a third kind of infinity between that of the natural numbers and the real numbers.
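Cantor’s diagonal argument can itself be sketched in code. In the Python toy below – the names and the choice of binary sequences are my own illustrative stand-ins – an infinite binary sequence (a proxy for a real number between 0 and 1) is modeled as a function from index to digit. Given any claimed enumeration of such sequences, flipping the n-th digit of the n-th sequence produces a sequence that differs from every entry in the list, which is the heart of Cantor’s proof that the reals are uncountable.

```python
# Cantor's diagonal argument on infinite binary sequences, each modeled
# as a function from index -> digit. 'enumeration' maps k to the k-th
# sequence of the claimed complete list.

def diagonal(enumeration):
    """Return a sequence guaranteed to be missing from the enumeration:
    it differs from the k-th sequence at position k, for every k."""
    return lambda n: 1 - enumeration(n)(n)

# A sample countable family: the k-th sequence has a single 1 at position k.
def family(k):
    return lambda n: 1 if n == k else 0

d = diagonal(family)
# d disagrees with family(k) at position k for every k we check,
# so d cannot equal any sequence in the family.
print(all(d(k) != family(k)(k) for k in range(1000)))  # True
```

No finite check can of course verify an infinite disagreement; the point is that the construction works against *any* enumeration handed to it, which is why no list of binary sequences – and hence no list of real numbers – can ever be complete.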

The problem is still unsolved in the sense Hilbert intended, but Gödel made a significant dent by showing that the negation of the hypothesis cannot be proved from standard set theory. This is not the same as showing that the hypothesis is true, but it counted as one strike in its favor. A bigger advance came in 1963, when the mathematician Paul Cohen showed that the hypothesis is independent of standard set theory; that is, either the hypothesis or its negation can be added to standard set theory without destroying its consistency. For all of Gödel’s scathing remarks and frequent silence about other mathematicians’ work, he was profusely generous toward Cohen when Cohen sent him his proof of the independence of the Continuum Hypothesis, a problem that Gödel himself had tried and failed to solve for more than twenty years.

Mathematician John von Neumann was one of Gödel’s biggest supporters (Source: Totally History)

Gödel’s peculiar obsessions and pedantry made him a difficult colleague, and his promotion to full professor was held up until 1953 because the faculty feared he would be challenging to deal with in the obligatory administrative matters that full professors had to busy themselves with. Once again von Neumann came to his friend’s rescue, asking, “If Gödel cannot call himself Professor, how can the rest of us?” But even after Gödel was promoted his insecurities did not leave him, and he kept feeling a mixture of self-pity and suspicion that the institute was conspiring to demote or fire him. He could nonetheless be a very loyal friend and colleague, testifying against Oppenheimer’s removal as director, for instance, after Oppenheimer’s enemy Lewis Strauss tried to oust him following the infamous security hearing. Especially in his later years, young mathematicians like Martin Davis and Hao Wang observed a Gödel who was friendly, curious and funny.

After Einstein’s death in 1955 and von Neumann’s excruciatingly painful death in 1957, Gödel came to rely increasingly on Adele and Morgenstern. (As a measure of how startlingly original his mind remained: in March 1956, as von Neumann was dying, Gödel sent him a letter that is thought to contain the first statement of a famous problem in computer science, the P versus NP problem.) His exalted mind often delighted in the simplest of objects, including trinkets and cheap children’s toys bought from convenience stores. The fears his colleagues had harbored about his obsessiveness played out, if anything, in the opposite direction: he would meticulously labor over member applications, exhaustively analyzing them and offering suggestions on points others had missed.

But the spark of genius that had set the mathematical world on fire seemed to have gone missing. In his last few years, Gödel became obsessed with the belief that there was a conspiracy not just against him but against a hero of his, the 17th-century mathematician and polymath Gottfried Wilhelm Leibniz. He became convinced that there was a plot to keep Leibniz’s work hidden from the world. Beginning in the 1970s, he began to see a psychiatrist whose detailed notes Budiansky opens the book with: “Believes he has been declared incompetent and that one day they will realize he is free and take him away…fear of destitution, loss of position at institute because he hasn’t done anything for past year…brought out delusional ideas, including that brother is the evil person behind plot to destroy him…believes he wants to take his wife, house and position at the institute.” Clearly, having his mother and brother Rudi visit him in America, while welcomed initially, had in his mind also turned into a plot to take over his world. It didn’t help that by this time Gödel’s work had been popularized enough that he received the Einstein Prize from Einstein himself, the National Medal of Science from President Gerald Ford and that crowning sign of fame – letters from all over the world from fans and crackpots.

There was little that anyone could do to help. In 1977 Morgenstern received a diagnosis of terminal cancer and became paralyzed. His tragic last notes and letters indicate the struggle he faced as Gödel came to rely on him ever more, phoning him two or three times every day to communicate his latest worries, even as Morgenstern confronted his own mortality. The last straw came when Adele fell sick and had to spend several months in a hospital. After Morgenstern, she had been Gödel’s last link to the sane world, and in spite of neighbors and colleagues trying to help out, he stopped eating, convinced that he was being poisoned through his food and, unlike in Vienna in the 1930s, without Adele around to feed him with tender, loving care. By the time Adele came back the end was already near, and Gödel entered the hospital for the last time. The cause of death was recorded as “malnutrition”, although most people believed that slow suicide was the more likely explanation.

How do we deal with the legacy of someone like Gödel? Philosophically, Gödel’s theorems had such a shattering impact on our thinking because, along with two other groundbreaking ideas of 20th-century science – Heisenberg’s Uncertainty Principle and quantum indeterminacy – they revealed that human beings’ ability to divine knowledge of the universe has fundamental limitations. But while Heisenberg and the quantum pioneers found limits to understanding rooted in the physical world, Gödel found these limits even in the rarefied world of pure ideas. Nonetheless, mathematics continued to thrive within the boundaries of his theorems, gathering Fields Medals and revolutionizing new fields like algebraic topology and category theory. The deeper significance of Gödel’s work, therefore, as he explained in a lecture, is that it is hard to avoid a connection between his theorems and a Platonic world of numbers and ideas existing independent of our efforts. For if human beings are fundamentally incapable of finding out all the results of axiomatic systems, there will always be some results outside the grasp of even our most exalted intellects. In our limitations lies mathematics’ freedom.

But that also says something about human minds and points to a debate still raging – whether the mind itself is some kind of Turing machine. The implication of Gödel’s proof is that if the mind is indeed a machine, it will be subject to the incompleteness theorems and there will always be truths beyond its grasp. If, on the other hand, the mind is not a machine, it is freed from being described through purely mechanistic means. Both choices point to a human mind, and a world it inhabits, that are “decidedly opposed to materialistic philosophy”. Beyond this possible truth is another one that is purely psychological. We can either feel morose in the face of the fundamental limits to knowledge that Gödel revealed, or we can, as the historian George Dyson put it, “celebrate his proof that even the most rigid numerical bureaucracy contains the tools by which higher truth will always be able to effect an escape.” Gödel offers us an invitation to an open world, a world without end.

But what about the paradoxes of the man himself, someone devoted to the highest reaches of rational thought in the most logical of all fields of inquiry, yet one who held an almost mystical belief in the spiritual certainty of mathematics and often gave in to the worst impulses of irrationality? I think a clue comes from Gödel’s obsession with Leibniz in his last few years. Leibniz was convinced that this is the best of all possible worlds, because that is the only world a just God could have created. Like many of his fellow philosophers and mathematicians, Leibniz was religious and saw no contradiction between science and faith, between teasing out the truths of the world rationally and believing in a hereafter. A few years before his mother Marianne’s death in 1961, Kurt wrote to her of his belief that a God probably exists: “For what kind of sense would there be in bringing forth a creature (man), who has such a broad range of possibilities of his own development and of relationships, and then not allow him to achieve 1/1000 of it?” Like his fellow philosopher Leibniz, Kurt Gödel could perfectly reconcile the rational and the transcendental. In doing so, he proved himself much more at home in the 18th century than the 20th. Perhaps that vision of a reconciliation between rational thought and seemingly irrational human frailty and belief will be, even more than his seminal mathematical discoveries, his enduring legacy.

Jim Simons: "We never override the computer"

Billionaire mathematician Jim Simons has been called the most successful investor of all time. His Renaissance Technologies hedge fund has returned an average of about 40% annually (after fees) over the last 20 years. The firm uses proprietary mathematical models and algorithms to exploit statistical asymmetries and fluctuations in stock prices and profit from the price differentials.

Simons had made groundbreaking contributions to geometry and topology before founding Renaissance, and his background enabled him to recruit top mathematicians, computer scientists, physicists and statisticians to the company. In fact, the company actively avoids recruiting anyone with a financial or Wall Street background.

I've been enjoying the recent biography of Simons, "The Man Who Solved the Market", by Gregory Zuckerman. But there's an interesting video of a Simons talk at San Francisco State University from 2014 in which he says something very intriguing about the models that Renaissance builds:

"The only rule is that we never override the computer. No one ever comes in any day and says the computer wants to do this and that’s crazy and we shouldn’t do it. You don’t do it because you can’t simulate that, you can’t study the past and wonder whether the boss was gonna come in and change his mind about something. So you just stick with it, and it’s worked."

It struck me that this is how molecular modeling should be done as well. As I mentioned in a previous post, a major problem with modeling is that it's mostly applied in a slapdash manner to drug discovery problems, with heavy human intervention - often for the right reasons, because the algorithms don't work great - obscuring the true successes and failures of the models. But as Simons's quote indicates, the only way to truly improve the models is to take their results at face value, without any human intervention, and test them. At the very minimum, "simulating" historical human intervention is going to be pretty hard. So the only way we'll know what works and what doesn't is if we trust the models and let them rip. As I pointed out, though, in most organizations experimenters are simply not incentivized, nor are there enough resources, to carry out this comprehensive testing.

Jim Simons and Renaissance can do this because 1. they have the wisdom to realize that it's the only way to get the models to work, and 2. they have pockets deep enough that even model failures can be tolerated. Most drug discovery organizations, especially smaller ones, presumably can't do 2. But they could still do it in a limited sense in a handful of projects. What's really necessary, though, is 1., and my concern is that we'll be waiting for that even when we have the resources for 2.

Review: James Hornfischer's "The Fleet at Flood Tide"



A superb book on the last year of the war in the Pacific Theater, full of incredible details about underappreciated leaders like Raymond Spruance (commander of the Fifth Fleet, the navy's primary strike force against Japan), Holland Smith (head of amphibious operations), Draper Kauffman (creator of the Underwater Demolition Teams that later became the SEALs) and Paul Tibbets (pilot of the Enola Gay), along with other remarkable men and women, both American and Japanese. Among these were Guy Gabaldon, a Mexican-American marine who coolly talked 800 Japanese soldiers into surrendering, and Shizuko Miura, an 18-year-old nurse, wise beyond her years, who held out on the island of Saipan.

The book is primarily about the invasion of Saipan, Tinian and Guam, the three islands constituting the Marianas that were considered crucial for staging B-29 air attacks against the Japanese mainland. The brilliant island-hopping strategy that King, Nimitz and Halsey orchestrated saw its culmination in the invasion of the Marianas, followed by the infamous battles of Iwo Jima and Okinawa. Hornfischer drives home the sheer difficulties of logistics and air support involved in carrying out strikes against tiny specks of land separated by thousands of miles, in the face of an implacable foe who had the home advantage. It was from Tinian that the Enola Gay and Bockscar, the planes that dropped the atomic bombs on Hiroshima and Nagasaki, took off. The last part of the book deals with the aftermath of the bombing and the occupation of Japan.

There are two central themes pervading the book. One is the key role the navy and its aviators played in securing the islands. The other is the sheer fanaticism and tenacity demonstrated by the Japanese, which convinced the Allies how expensive an invasion of Japan would be. The most horrifying description concerns the mass suicides on Saipan in which - chilled into desperate fear by Japanese propaganda warning civilians of the untold horrors that the Americans would inflict on them - thousands of mothers and fathers killed their children and jumped off the cliffs. The few Japanese who were actually captured by American soldiers were astonished by the humane treatment they received. The propaganda was so extreme that the Japanese people as a whole were getting ready to commit national suicide in the service of national salvation when the mainland was invaded.

This played a critical role in the decision to use the bombs, and Hornfischer is unapologetic about the decision. His main argument - with which I largely agree - is that the fanaticism displayed by the Japanese, along with the paralysis of their leadership in sending out any clear signal that it would accept the terms of unconditional surrender, made it impossible for the Allies to assume that Japan was anywhere close to surrendering. Later historians have stressed that the Japanese would have surrendered if they had been allowed to keep their Emperor, a mortal descended from a god. But they never made this intention clear, and even when they did, it came with a list of other unacceptable conditions, like retaining the authority to try their own war criminals. The unavoidable fact is that by the summer of 1945, the sheer barbarity and fanaticism of the Pacific War had made ending it a matter of desperate urgency. The only two other options apart from using the bombs would have been a prolonged starvation of the Japanese people by the navy, or an invasion that could easily have caused a quarter of a million casualties.

Note that the question of whether the Allies thought that using the bombs was necessary is separate from whether the bombs *actually* caused Japan to surrender. The historical scholarship on that second question mainly concludes that it was the entry of the Soviet Union, through the invasion of Manchuria from the north, that finally pushed the Japanese leadership to surrender; curiously, Hornfischer practically ignores a detailed discussion of this argument, which shows his bias a bit. But even if it is true, it overlooks the fact that in the end it was Emperor Hirohito who really broke the deadlock; even *after* Nagasaki was bombed, the war leadership was still divided.

The book concludes by describing the American occupation of Japan which was unprecedented in its decency and progressivism; it was perhaps MacArthur's finest hour. The war against Japan and its subsequent occupation stand as a fine example of both the atrocities that human beings inflict on each other and the redemption that can salvage them.

Malcolm Gladwell's "The Bomber Mafia" - Weak Tea

Just finished reading Malcolm Gladwell's new book, "The Bomber Mafia", and am sorely disappointed. It's as if Gladwell has expanded a short blog post into a 150-page book that's big on storytelling but essentially a lightweight when it comes to content and conclusions.

The basic thesis of the book can be summed up in a few short sentences: During WW2, there was a group of air force officers led by Haywood Hansell, called the Bomber Mafia, who thought they could bomb cities "more morally" through daytime precision bombing; they were pinning their hopes on a revolutionary invention, the Norden bombsight, that supposedly allowed bombardiers to pinpoint targets from 15,000 feet. In reality, the philosophy failed miserably: the bombsight was less than perfect under real conditions, even with the bombsight the bombs were not that precise, and most crucially, over Japan the hitherto undiscovered jet stream, which buffeted airplanes with 120-knot winds, made it basically impossible for the B-29s to stabilize and successfully bomb their targets.

Enter Curtis LeMay, the ruthless air force general who took the B-29s down to 5,000 feet to avoid the jet stream, ripped most of the guns out and, instead of precision bombs, used napalm-filled incendiary bombs at night to burn down large built-up civilian areas of Japanese cities with horrific casualties, the most famous incident of course being the March 1945 firebombing of Tokyo that killed over 100,000 people.

Gladwell tells these stories, and others like the invention of napalm, well. But the core message of the book - that the switch from Hansell's precision bombing, which failed, to LeMay's area bombing, which presumably worked, was the linchpin air strategy of the war - is a highly incomplete and gross oversimplification. The fact of the matter is that strategic bombing did very little damage to morale and production until very late in the war. And while strategic bombing in Japan was more successful, the bombing in Europe did not work until the bombers were accompanied by long-range P-51 Mustang fighters, and even then its impact on shortening the war was dubious. Even in Japan, strategic bombing could have been hobbled had the Japanese had better fighter defenses the way the Germans did. The Germans used a novel gun arrangement called "Schräge Musik" that allowed their fighters to fire upward at British Lancaster bombers from below - had the Japanese used such tactics, they would likely have been devastating to LeMay's strategy. Even from a civilian standpoint, the strategic bombing of Dresden and Hamburg did little to curb either morale or production. But in talking only about Tokyo and not Dresden or Hamburg, only about Japan and not Europe, Gladwell leaves the impression that strategic bombing was pretty much foolproof and always worked. These omissions are especially puzzling since he does discuss the ineffectiveness of the bombing of civilians in London during the Blitz.

There are very few references in this short book - Gladwell seems content to quote two historians, Tami Biddle and Stephen McFarland, for most of the discussion. These are fine historians, but the superficial treatment is especially jarring because strategic bombing has been written about extensively over the last several decades by historians like Richard Overy and Paul Kennedy. The Nobel Prize-winning physicist Patrick Blackett wrote about the mistaken assumptions behind strategic bombing as far back as the 1950s. I would also recommend physicist Freeman Dyson's essays on the part he himself played in strategic bombing during the war, which really drive home how boneheaded the method was. But Gladwell quotes none of these sources, instead focusing on Haywood Hansell and Curtis LeMay as if they and their thoughts on the matter were the only things that counted.

Perhaps worst of all, the complex moral consequences of LeMay's argument and of strategic bombing in general are almost completely sidelined, except for a short postscript in which Gladwell discusses how precision bombing has gotten so much better (though in that case the moral consequences have also gotten more complex, precisely because it has become easier). Strategic bombing was wasteful and morally unforgivable because it cost both pilots' and civilians' lives. LeMay generally receives a very favorable treatment and there are copious quotes from him, but interestingly, the one quote that is missing is the one that might have shed a different perspective - his remark after the war that he would have been hanged as a war criminal had the Allies lost.

I really wish this book were better, given Gladwell's fine storytelling skills, which can draw the reader in. As it stands it's slim pickings - a couple of anecdotes and stories compressed into a grand philosophical message in 150 pages that leaves the reader completely unsatisfied. If you are really interested in the topic of bombing during WW2, look to other sources.

The human problems with molecular modeling

Molecular modeling and computational chemistry are the neglected stepchildren of pharmaceutical and biotech research. In almost every company, whether large or small, these disciplines are considered "support" disciplines, peripheral to the main line of research and never at the core. At the core instead are synthetic chemistry, biology and pharmacology, with ancillary fields like formulations and process chemistry becoming increasingly important as the path to a drug progresses.

In this post I will explore two contentions:

1. Unless its technical and human problems are addressed, molecular modeling and simulation will remain peripheral instead of core fields in drug discovery.

2. The overriding problem with molecular modeling is the lack of a good fit between tools and problems. If this problem is addressed, molecular modeling stands a real chance of moving from the periphery to, if not the very core, at least close to the core of drug discovery.

There are two kinds of challenges with molecular modeling that practitioners have known about for a long time - technical and human. The technical problems are well known; although great progress has been made, we still can't model the details of biochemical systems very accurately, and key aspects of these systems like protein motion and water molecules - and, in the case of approaches like machine learning, the lack of adequate benchmarks and datasets - continue to thwart the field.

However, in this piece I will focus on the human problems and explore potential ways of mitigating them. My main contention is that the reason modeling often works so poorly in a pharmaceutical setting is because the incentives of modelers and other scientists are fundamentally misaligned. 

In a nutshell, a modeler has two primary objectives - to predict active, druglike molecules and to validate the models they are using. But the second is actually a prerequisite for the first - without proper validation, a modeler cannot know whether the problem space they are applying their models to is a valid application of their techniques. For proper validation, two things are necessary:

1. The synthetic chemist actually makes the molecules the modeler suggests.

2. The synthetic chemist does not make molecules the modeler has not suggested.

In reality, the synthetic chemist who takes up the modeler's suggestions has little to no interest in model validation. As anyone who has done modeling knows, when a modeler suggests ten compounds to a synthetic chemist, the chemist will typically pick only 2 to 5 out of those 10. In addition, the chemist might make 5 other compounds the modeler never recommended. The modeler also typically has no authority to order compounds themselves.

The end result of this patchwork implementation of the modeler's predictions is that they never know whether their model really worked. Negative data is an especially acute problem, since synthetic chemists are almost never going to make molecules that the modeler thinks will be inactive. You are therefore left with a scenario in which neither the synthetic chemist nor the modeler knows or is satisfied with the utility of the models. No wonder the modeler is relegated to the back of the room during project discussions.
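To make this concrete, here is a minimal sketch (in Python, with all compound names and outcomes hypothetical) of why a chemist-filtered subset cannot validate a model: the apparent accuracy is computed only over the molecules that were actually made, while the negative predictions go entirely unchecked.

```python
# Hypothetical illustration: a modeler predicts activity for 10 compounds,
# but the chemist synthesizes only a self-selected subset.

predictions = {               # modeler's calls: True = predicted active
    "C1": True,  "C2": True,  "C3": True,  "C4": True,  "C5": True,
    "C6": False, "C7": False, "C8": False, "C9": False, "C10": False,
}

# The chemist makes only 3 of the 5 predicted actives and none of the
# predicted inactives -- negative predictions are never tested at all.
synthesized = {"C1": True, "C2": False, "C4": True}   # measured activity

tested = {c: predictions[c] for c in synthesized}
correct = sum(predictions[c] == synthesized[c] for c in synthesized)

print(f"Predictions tested: {len(tested)} of {len(predictions)}")
print(f"Apparent accuracy: {correct}/{len(tested)}")

# Without the untested negatives, we cannot distinguish a model that
# genuinely enriches actives from one that simply calls everything active.
untested_negatives = [c for c, p in predictions.items()
                      if not p and c not in synthesized]
print("Untested negative predictions:", untested_negatives)
```

The numbers here are invented, but the structural point stands: with 7 of 10 predictions untested and every negative call unverified, no honest statement about the model's accuracy is possible.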

There is another fundamental problem the modeler faces, one which is actually more broadly applicable to drug discovery scientists. In one sense, not just modeling but all of drug discovery - including devices, assays, reagents and models - can be considered a glorious application of tools. Tools only work if they are suited to the problem. If a practitioner thinks a tool will be unsuited, they need to be able to say so and decline to use it. Unfortunately, incentive structures in organizations are rarely set up for employees to say "no"; hearing it is often regarded as an admission of defeat or an unwillingness to help out. This is a big mistake. Modelers in particular should be rewarded when they decline to use modeling and can give good reasons for doing so. As it stands, because they are expected to be "useful", most modelers end up indiscriminately using their tools on problems, no matter what the quality of the data or the probability of success. This means that quite often they are simply using the wrong tool for the wrong problem. Add to this the aforementioned unwillingness of synthetic chemists to validate the models, and it's little surprise that modeling so often fails to have an impact and is relegated to the periphery.

How does one address this issue? In my opinion, it can be mitigated to a significant extent if modelers know something about the system they are modeling and the synthesis that will yield the molecules they are predicting. If a modeler can give sound reasons based on assays and synthesis - perhaps the protein construct they are using for docking is different from the one in the assay, perhaps the benchmarks are inadequate, or perhaps the compounds they are suggesting won't be amenable to easy synthesis because of a weird ring system - other scientists are more likely both to take their suggestions seriously and to respect their unwillingness to use modeling for a particular problem. The overriding philosophy a modeler follows should be captured not in the question "What's the best modeling tool for this problem?" but in "Is modeling the right tool for this problem?". So the first thing a modeler should know is whether modeling would even work; but even if it won't, they will go a long way toward gaining the respect of their organization if they can say at least a few intelligent things about alternative experimental approaches or the experimental data. There is no excuse for a computational chemist not to be a chemist in the first place.

More significantly, my opinion is that this mismatch will not be addressed until modelers themselves are in the driver's seat, until they can ensure that their predictions are tested in their entirety. Unfortunately there's little control modelers have over testing their models; much of it simply depends on how much the synthetic chemists trust the modelers, a relationship driven as much by personality and experience as modeling success. Even today, modelers can't usually simply order their compounds for synthesis from internal or external teams.

Fortunately, there are two very significant recent developments that promise modelers an unprecedented degree of control and validation. One is the availability of cheap CROs like WuXi and Enamine which can make many of the compounds predicted by modeling. These CROs have driven costs down so significantly that, importantly, even negative predictions can now be tested. In general, the big advantage of external CROs relative to internal chemists is that you can dictate what the CROs should and shouldn't make - they won't make compounds you don't recommend and they will make every compound you do; the whims of personal relationships don't make a difference in a fee-for-service structure.

More tantalizingly, there have been a few success stories of fully computationally driven pipelines, most notably Nimbus and Morphic Therapeutic and, more recently, Silicon Therapeutics. When I say "fully computationally driven" I don't mean that synthetic chemists have no input - the inaccuracy of computational techniques precludes fully automated molecule selection from a model - what I mean is that every compound is a modeled compound. In these organizations the relationship between modeling and the other disciplines is reversed: computation is front and center - at the core - and it's synthetic chemistry and biology, in the form of CROs, that are at the periphery. Such organizations can ensure that every single prediction made by modelers is made and tested, or conversely, that no molecule is made and tested without going through the computational pipeline. At the very least, you can then keep a detailed bookkeeping record of how designed molecules perform and thereby validate the models; at best, as some of these organizations have shown, you can discover viable druglike leads and development candidates.

Computational chemistry and modeling have come a long way, but they have a long way to go in terms of both technical and organizational challenges. Even if the technical challenges are solved, the human challenges are significant and will hobble the influence computation has on drug discovery. Unless incentive structures are aligned, the fields will continue to have poor impact and remain at the periphery. The only way forward is for computation to be in the driver's seat and for computational chemists to be as informed as possible. Fortunately, with the commodification of synthesis and the increased funding and interest in computationally driven drug pipelines, it seems we may at last have a chance to find out how well these techniques work.

Image source

Book review: Charles Seife’s “Hawking Hawking”

I still remember the first time I encountered “A Brief History of Time”. I must have been in high school. I marveled at the elfin-looking bespectacled man on the cover who looked like an alien. And were the contents the very definition of exotic or what. I understood very little of what was written about black holes, the Big Bang and quantum theory, but the book definitely got me hooked on both cosmology and Stephen Hawking, and cemented the image of the scientist in my mind as some kind of otherworldly alien superintelligence.

Now, having just finished Charles Seife’s unique, must-read contribution to the Hawking literature, “Hawking Hawking”, I realize that that was in fact the intended effect. Seife’s book does a first-rate job of stripping the myth-making, hype and self-promotion from the celebrity and revealing the man inside in all his triumph and folly. The achievement is all the more remarkable since Seife did not have access to Hawking’s personal papers and family members, resources which the foundation set up after his death guards carefully in order to preserve the image.

The book recounts several episodes of Hawking being very human: opposing scientists who did not agree with his ideas and trying to hobble their professional advancement, playing favorites and denying credit to others, neglecting and mocking his wife and her work in the humanities, and, especially in his last years, making pronouncements about topics far beyond his expertise which the media and the public held up as sacrosanct - an image that he not only did little to dispel but often encouraged. Of course, all scientists can occasionally be cruel, vain, jealous and egotistical, but these qualities of Hawking were hidden behind a blitz of media publicity.

And yet the book is not a takedown in any way. It acknowledges Hawking’s brilliant and important contributions to science, especially his key discovery of Hawking radiation, which married general relativity and quantum theory in a tour de force of calculation. Seife sensitively describes how much Hawking struggled because of his terrible disease, and how ambivalent he was about the media and public highlighting his disability. Much of the public never understood how hard even doing calculations was for him, even aided by his powerful memory and remarkable imagination. It’s not surprising that a lot of his best work was done with collaborators, brilliant scientists in their own right whose names the public never remembered.

Ultimately, although Hawking himself contributed to a good deal of the self-promotion and myth-making, he seems to have been much more in touch with his inner human being than he let on. In distinguishing what was real from what was hype, Seife gives Hawking his rightful place in science - not as another Newton or Einstein, but as Stephen Hawking.

Hawking Hawking: The Selling of a Scientific Celebrity https://www.amazon.com/dp/1541618378/ref=cm_sw_r_cp_api_glt_i_8H7P48KA10T7XX43N1VN

Chandra and Johnny come close to discovering black holes

This is from Jagdish Mehra and Helmut Rechenberg's monumental "The Historical Development of Quantum Mechanics, Vol. 6, Part 2". With Chandrasekhar's facility with astrophysics and von Neumann's with mathematics, there is little doubt in my mind that they would have succeeded.


As it happened, it was Oppenheimer and his student Hartland Snyder who wrote the decisive paper describing black holes in 1939. 


The timing was bad, though; on the same day that the paper came out in the Physical Review, Germany invaded Poland and started World War II. Far more consequential at the time was another paper published on the same day in the same issue - Niels Bohr and John Wheeler's liquid-drop model of nuclear fission.

"Hawking Hawking" and Michio Kaku

Two items of amusement and interest. One is a new biography of Hawking by Charles Seife, coming out tomorrow, that attempts to close the gap between Hawking’s actual scientific accomplishments and his celebrity status. Here's a good review by top science writer and online friend Philip Ball:


Seife's Hawking is a human being, given to petty priority disputes and one-upmanship and often pontificating with platitudes on fields beyond his expertise. I used to have similar thoughts about Hawking myself but considered his pronouncements largely harmless fun. My copy of Seife's book arrives tomorrow and I am looking forward to his views, especially his take on how much it was the media rather than Hawking himself that fueled the exaggerations and the celebrity status.

The second item is an interview with Michio Kaku which seems to have ruffled a lot of feathers in the physics and science writing communities. 


The critics complain that he distorts the facts and says highly misleading things - for instance, that string theory leads directly to the Standard Model. The complaints are legitimate, but my take on Kaku is different. I don’t think of him as a science writer but as a futurist, fantasist and storyteller. I think of him rather like E. T. Bell, whose “Men of Mathematics”, while highly romanticized and inaccurate in its details, nevertheless got future scientists like Freeman Dyson and John Nash interested in math as kids. I doubt whether either Kaku himself or his readers take the details in his books very seriously.

I think we should always distinguish between writers who write about the facts and writers who tell stories. While you should be as rigorous as possible when writing about facts, you are allowed considerable leeway and speculation when telling stories. Without this leeway there wouldn't be any science writers, and certainly no science fiction writers. A personal memory: my father was a big fan of Alvin Toffler's "Future Shock" and other futuristic musings. But he never took Toffler seriously as a writer on technology; rather, he thought of him as an "ideas man" whose ideas were raw material for more serious consideration. If Kaku's writings get a few kids excited about science and technology the way "Star Trek" did, his purpose will have been served.

Six lessons from the biotech startup world

Having worked for a few biotech startups over the years, while I am not exactly a grizzled warrior, I have been around the block a bit and have drawn some conclusions about what seems to work and what doesn't in the world of small biopharma. I don't have any grand lessons about financial strategy, funding or IPOs, nor any special insights - just some simple observations about science and people based on a limited slice of the universe. My suspicion is that much of what I am saying will be familiar.

1. It's about the problem, not about the technology

Many startups are founded with a particular therapeutic area in mind, perhaps a particular kind of cancer or metabolic disease to address. But some are also founded on the basis of an exciting new platform or technology. This is completely legitimate as long as there is also a concomitant list of problems that the platform can address. If there isn't, then you are in the proverbial hammer-looking-for-a-nail territory, tool-oriented rather than problem-oriented. The best startups I have seen do what it takes to address a problem, sometimes even pivoting away from their original toolkit. The not-so-great ones fall in love with the platform and technology so much that they keep generating results from it in a frenzy that may or may not be applicable to a real problem. No matter how amazing your platform may be, it's key to find the right problem space as soon as you can. Not surprisingly, this is especially an issue in Silicon Valley, where breathless new technology is often the founding platform for companies. I am as optimistic and excited about new technology as anyone else, but with new technological vision must come rigorous scrutiny that allows constant validation of the path you are on and course-correction if that path looks crooked.

A corollary of this obsession with tools comes from my own field of molecular modeling and structure-based drug design. I have said before that the most important reason computational chemistry stays at the periphery rather than the core of drug discovery is that it's not matched to the right problems. And while technical challenges still play a big role in the failure of the field - the complexity of biology usually far overshadows the utility of the tools - the real problem in my view is cultural. In a nutshell, modelers are not paid for saying "no". A modeler constantly has to justify his or her utility by applying the latest and greatest tools to every kind of problem. It doesn't matter if the protein structure is poorly resolved; it doesn't matter if the SAR is sparse; it doesn't matter if you have one static structure for a dynamic protein with many partners - the constant clink of your hammer in that corner office must be heard if your salary is to be justified. It's even more impressive, and correspondingly more futile, if you are using The Cloud or a whole bank of GPUs for your calculations (there are certainly cases where sheer computing power makes a difference, but they are rare). There are no incentives for you to say, "You know what, computational tools are really not the best approach to this problem given the paucity and quality of the data." (As Werner Heisenberg once said, an expert is someone who knows what doesn't work.)

But it goes both ways. Just as management needs not merely to allow but to reward this kind of judicious selection and rejection of tools, it really helps if modelers know something about assays, synthesis and pharmacology so that they can offer an alternative to modeling; otherwise you are just cursing the darkness instead of lighting a candle. They don't need to be experts, but having enough knowledge to make general suggestions helps. In my view, having a modeler say, "You know what, I don't think current computational tools are the best way to find inhibitors for this protein, but have you tried biophysical assay X?" can be music to the ears.

2. Assays are everything

In all the startups I have worked at, no scientist has been more important to success in the early stages of a drug discovery project than the assay expert. Having a well-designed assay that mirrors the behavior of a protein under realistic conditions is worth a thousand computer models or hundreds of hours spent around the whiteboard. Good assays can both test and validate the target. Conversely, a badly designed assay, one that does not recapitulate the real state of the protein, can not only doom the project but also lead you down a rabbit hole of false positives. No matter what therapeutic area or target you are dealing with, there are going to be few more important early hires than people who know the assays. And assays are all about the details - things like salt and protein concentration, length of construct, mutations - details known only to someone who has learnt them the hard way. The devil is always in the details, but he really hides in the assays.

3. Outsourcing works great, except when it doesn't

Most biotechs now outsource key parts of their processes, like compound synthesis, HTS and biophysical assays, to CROs. And this works fine in many cases, except when that devil in the details rears his head. The problem with many CROs is that while they may do a good job of executing the task, they then throw the results over the wall. The details are lost, and sometimes you don't even know you are going down a rabbit hole when that happens. I remember one example where the contamination of a chip in an SPR binding assay threw off our results for a long time, and it took a lot of forensic work and back-and-forth to figure this out. Timelines were set back substantially and confusion reigned. CROs need to be as collaborative and closely involved as internal scientists, and when this doesn't happen you can spend more time fixing that relationship than actually solving your problem - needless to say, the best CROs are very good at this kind of collaborative work. And it's important not just to have collaborative CROs but to have access to as many details as possible in case a problem arises, which it inevitably does.

4. Automation works great, except when it doesn't

The same problems that riddle CRO collaborations riddle automation. These days some form of automation is fairly common for tools like HTS, what with banks of liquid-handling robots hopping rapidly and merrily over hundreds of wells in plates. And it again works great for pre-programmed protocols. But simple problems of contamination, efficiency and breakdowns, like spills and robotic arms getting stuck, can afflict these systems, especially in the more cutting-edge areas like synthesis - one thing you constantly discover is that the main problem with automation is not the software but the hardware. I have found that the same caveat applies to automation that Hans Moravec applied to AI: the hard things are easy and the easy things are hard. Getting that multipipetting robot to transfer nanoliters around blazingly fast is beyond the ability of human beings, but the same robot can't look at a powder and tell whether it's fluffy or crystalline. Theranos is a good example of the catastrophe that can result when the world of well-defined hard robotic grippers and vials meets the messy world of squishy cells, fluffy chemicals and fluids like blood (for one thing, stuff behaves very differently at small scale). You know your automation has a problem when you are spending more time babysitting it than it would take to do things manually. It's great to use automation to free up your time, but you need to make sure that it's actually doing so and generating accurate results without needing constant babysitting.

5. The best managers delegate

Now a human lesson. I have had the extraordinary good fortune of working for some truly outstanding scientists and human beings, some of whom have become good friends. And I have found that the primary function of a good manager is not to extract work from their reports but to help them grow. The best encapsulation of sound managerial thinking is Steve Jobs's famous line: "It doesn't make sense to hire smart people and tell them what to do; we hire smart people so they can tell us what to do." The best managers I have worked with delegate important responsibilities to you, trust that you can get the job done, and then check in occasionally on how things are going, leaving the details and execution to you. Not only does this provide a great learning experience but, more importantly, it makes you feel empowered. If your manager communicates how important the task entrusted to you is for the entire company and how much they trust you to do it well, the sense of empowerment is enormous and you will usually do the job well (and if you don't, it's a good sign for both you and your manager that things are not going well and a conversation is to be had).

Bad managers are of course well known - they micromanage, constantly tell you what to do and are often not on top of things. And while this is an uncomfortable truth to hear, often the best scientists are also the poorest managers (there are exceptions, of course - Roy Vagelos, who led Merck during its glory days, excelled at both). One of the best scientists I have ever encountered wisely and deliberately stayed away from the senior managerial positions that repeatedly came his way. There are few managers worse than distracted scientists.

6. Expect trouble and enjoy the journey

I will leave the most obvious observation for last. Biology and drug discovery are devilishly complicated, hard and messy. After a hundred years of examining life at the molecular level, we still haven't figured it out. Almost every strategy you adopt, every inspired idea you have, every new million-dollar tranche of funding you sink into your organization, will fail. No model will be accurate enough to capture the real-life workings of a drug in a cell or a gene that's part of a network of genes, and you will have to approximate, simplify, build model systems and hope for the best. And on the human side, you will have disagreements and friction that should always be handled with considerateness and respect. Be forgiving of both the science and the people, since both are hard. In that sense, getting to the right answer in biotechnology is like building the "more perfect union" that the Constitution's preamble talks about. It's a goal that always seems to be one step beyond where you are, but that's precisely why you should enjoy the journey, because you will find that the gems you uncover along the way make the whole effort worth it.