Field of Science

From Valley Forge to the Lab: Parallels between Washington's Maneuvers and Drug Development


George Washington’s leadership during the American Revolution was marked by strategic foresight, perseverance, and adaptability, qualities just as essential for overcoming the challenges of drug discovery. He was known more for brilliant tactical retreats than outright wins, and he understood that the War of Independence, like the long march toward a marketed drug, was a marathon rather than a sprint. It is an interesting thought exercise to draw parallels between Washington’s patience, resourcefulness and gift for collaboration and the demands of drug development, which similarly calls for long-term vision, resilience in the face of failure and innovative thinking. Below, we’ll explore five lessons from Washington’s military campaigns and apply them to the high-stakes world of drug discovery, with concrete examples.

1. Strategic Patience and Long-term Vision: Washington's Fabian Strategy vs. HIV Drug Development

Washington’s military success was often defined by his Fabian strategy—a war of attrition and avoidance of large-scale confrontations that allowed the Continental Army to conserve strength and gradually wear down the British forces. One of the clearest examples came after the disastrous New York campaign of 1776: rather than risk his army in further pitched battles, Washington withdrew across New Jersey and focused on smaller, targeted strikes. The bruising retreat gave him and his troops a chance to regroup and set the stage for their famous crossing of the Delaware. Washington’s patience ultimately paid off with victories at Trenton and Princeton, where the tide of the war began to shift.

In drug discovery, strategic patience is essential, as breakthroughs rarely come overnight. A drug candidate famously has less than a 1% chance of ever reaching patients, a journey that consumes billions of dollars and more than a decade. Consider the development of antiretroviral drugs for HIV/AIDS. When HIV was first identified in the early 1980s, treatments were non-existent. Early drug trials failed to yield effective results, and scientists faced constant setbacks. However, researchers took a long-term approach, gradually improving upon early, promising but toxic drugs like AZT through years of clinical trials and development. In the mid-1990s, the introduction of highly active antiretroviral therapy (HAART) was a turning point, transforming HIV from a fatal disease into a manageable condition. This mirrors Washington’s long-term vision—victory wasn’t immediate, but patience and persistence eventually led to success. HIV drug development also mirrors Washington’s ability to learn from his mistakes and build on his successes. After the Battle of Long Island, Washington realized the importance of intelligence, logistics and preparedness. Similarly, researchers attacking HIV recognized the value of building on their knowledge of viral mutations to design combination therapies that hit different stages of the virus’s lifecycle - the equivalent of an attack on multiple fronts.

2. Managing Limited Resources: Valley Forge vs. Orphan Drug Development

One of the most famous episodes of Washington’s leadership came during the harsh winter at Valley Forge in 1777-1778, when the Continental Army suffered severe shortages of food, clothing, supplies and manpower. Washington maintained morale and stretched scarce resources to cover everything from building shelter to clothing his men, while relying on officers like Baron von Steuben to train the troops. The army that emerged from this trial by fire (or cold) was stronger, more disciplined and more determined.

In drug discovery, managing limited resources is often critical, especially for diseases that don’t attract significant funding or attention. The development of orphan drugs—those that treat rare diseases—often faces the same kind of scarcity. An example is the development of Spinraza (nusinersen), a drug for spinal muscular atrophy (SMA), a rare genetic disease. Biotech firm Ionis Pharmaceuticals faced significant financial challenges, as developing a treatment for a rare disease didn’t seem commercially viable. However, through careful allocation of resources, partnerships with larger pharmaceutical companies (like Biogen), and perseverance, Spinraza became the first FDA-approved treatment for SMA in 2016. Like Washington at Valley Forge, the success of this drug was the result of making the most out of limited resources while keeping the long-term goal in sight.

3. The Importance of Collaboration and Alliances: French Alliance vs. COVID-19 Vaccines

The United States would likely have lost the Revolutionary War without critical assistance from France. In the pivotal Saratoga campaign, for instance, up to 90% of arms and gunpowder carried by American soldiers came from France. French support, particularly in the form of naval power and troops, played a decisive role in the eventual victory at Yorktown in 1781.

A modern parallel in drug discovery is the unprecedented collaboration seen in the development of COVID-19 vaccines. The global pandemic spurred collaborations between pharmaceutical companies, academic institutions, and governments on a scale never seen before. Pfizer and BioNTech’s partnership is a prime example—BioNTech provided the mRNA technology, while Pfizer’s resources and expertise enabled rapid scaling and distribution. This alliance was crucial in delivering one of the world’s first effective vaccines in less than a year. Just as Washington couldn’t win the war alone, drug discovery often depends on strategic alliances to achieve breakthroughs. Often these strategic alliances are between smaller companies that invent a new drug or technology and larger companies that scale and develop it. Sometimes they are between unlikely bedfellows, such as government and the private sector; for instance, HIV drug success was founded upon productive collaborations between government, private companies and activists.

4. Adaptability and Learning from Failure: Battle of Monmouth vs. Alzheimer’s Drug Failures

Washington’s ability to adapt to failure is best exemplified by the Battle of Monmouth in 1778. After the disastrous retreat ordered by General Charles Lee, Washington rallied the troops and turned the battle into a stalemate, avoiding what could have been a significant defeat. British General Henry Clinton, emboldened at first by Lee’s retreat, declined to attack once he found Washington in a formidable defensive position. This ability to adapt under pressure and recover from failure was a hallmark of Washington’s leadership.

In drug discovery, a parallel can be drawn with the development of gefitinib (Iressa), a targeted cancer therapy for non-small cell lung cancer (NSCLC). Initially approved in 2003, gefitinib showed promise in early clinical trials for a subset of lung cancer patients. However, in 2005, the FDA limited its use after post-approval studies failed to show a significant survival benefit in the broader patient population. This was a major setback for AstraZeneca, the drug’s developer.

Despite this failure, researchers did not abandon gefitinib entirely. Reanalysis of the patient response revealed that the drug was highly effective in patients with specific mutations in the EGFR gene—a critical insight that had not been fully understood during the initial trials. By identifying the right patient population, gefitinib regained relevance as a precision medicine for a subgroup of NSCLC patients. In 2015, gefitinib was reapproved in the U.S. for use in patients with EGFR-mutated lung cancer, a prime example of how learning from early failures can lead to a more targeted, effective approach in drug development.

Like Washington at Monmouth, the researchers behind gefitinib adapted to early setbacks by recalibrating their approach, identifying a more precise target, and ultimately turning a potential failure into a success.

5. Endurance and Moral Leadership: Valley Forge vs. Cystic Fibrosis Drug Development

Washington’s moral leadership during Valley Forge was crucial in keeping the Continental Army together during one of its darkest periods. His decision to stay with his men, sharing their hardships, and his constant encouragement gave them the endurance needed to persevere. By the time they emerged from Valley Forge, the army was stronger and better trained. Another example of Washington’s leadership was his assumption of a frontline position at Assunpink Creek, near Trenton, in the days before the Battle of Princeton, when his army was in disarray. Again and again, Washington inspired his troops with his quiet inner strength, battlefield bravery, unwavering focus on the end goal and a style of command in which every man’s opinion counted.

In the drug discovery world, long-term battles against chronic and deadly diseases often require similar moral leadership. One such example is the decades-long effort to develop treatments for cystic fibrosis (CF), a genetic disease that severely affects the lungs. Inheriting the program through its acquisition of Aurora Biosciences and shepherding it through years of doubt and uncertainty, Vertex Pharmaceuticals led the charge, working through failed attempts and incremental progress. Like Washington with his war council, the company carefully weighed the opinions of both enthusiasts and critics. Their efforts finally culminated in Kalydeco and, later, Trikafta, drugs that significantly improve lung function and quality of life for CF patients. Vertex’s researchers and executives remained committed to finding a solution even when success seemed elusive and the drug’s mechanism (a potentiator rather than a more traditional inhibitor) appeared unconventional, and their persistence mirrors Washington’s leadership at Valley Forge. They persevered against severe odds, driven by a belief in the importance of their mission.

Final thoughts

The cause of complex drug development, just like the cause of the American Revolution, is fraught with great cost and uncertainty. The odds of success are slim, the obstacles formidable, the naysayers many. Washington’s quiet, dogged endurance, leadership from the front, patience and unwillingness to despair for long sustained the American cause long after purely rational analysis would have concluded that it was lost. Similarly, drug discovery depends on the right combination of leadership, resources and just plain good luck. But there is little doubt that dogged perseverance, adaptability in the face of new data, smart resource management and the spirit of collaboration can produce a step change in the odds of success, whether in a war against a recalcitrant enemy or a war against a recalcitrant disease. As Washington pithily put it, “Perseverance and spirit have done wonders in all ages.”

Timeless Figures, #2: Werner Heisenberg

Werner Heisenberg was a good man who deluded himself into thinking he was working for a good cause. That cause was not the Nazis themselves but the preservation of German science through, and beyond, the Nazi era. Heisenberg’s life shows us what happens when a brilliant man with convictions lacks the moral clarity to make tough choices.

Heisenberg’s father was a professor of Greek philology at the University of Munich; academic achievement and intellectual interests ran in the family. Steeped in philosophy and initially drawn toward mathematics, Heisenberg had his decision to study physics sealed by an encounter with Arnold Sommerfeld, arguably the most successful physics professor of the 20th century. In the turbulent years after World War 1 and Germany’s short-lived postwar experiments in government, the young Heisenberg joined a youth movement whose members discussed science and philosophy, took long walks in the mountains, sang patriotic songs and debated how to rebuild Germany. It seems likely that this experience at an impressionable age shaped his later decision to stay in Germany.


In 1922, after listening to a series of lectures by Niels Bohr in Göttingen, Heisenberg came under Bohr’s towering influence. Bohr became almost a father figure to Heisenberg. A gentle man who regarded spirited debate as the highest means of reaching the truth, Bohr sharpened Heisenberg’s drive to understand atomic physics and once almost brought him to tears with his “terrifying relentlessness” in seeking the nature of reality. Before Heisenberg went to Bohr’s institute in Copenhagen, though, he went to study with Max Born in Göttingen. Born, who was to have as much influence as Sommerfeld in training some of the leading theoretical physicists of the time - among them Pauli, Dirac and Oppenheimer - was a physicist’s physicist, steeped in mathematical sophistication and an all-encompassing knowledge of the subject.


Born and Heisenberg decided to tackle the mess of spectral lines - light emitted from atoms - that had helped launch the quantum revolution but had since turned into a collection of headscratchers. Bohr and Sommerfeld’s ‘old’ quantum theory explained the lines from simple atoms like hydrogen, but heavier elements proved recalcitrant. A more sophisticated, comprehensive theory of atomic structure was the challenge Heisenberg took up with Born. The story of how, in the process, he effectively invented modern quantum mechanics became part of the lore of physics. After suffering a severe attack of hay fever in the summer of 1925, Heisenberg decided to spend some time recovering on the small island of Helgoland in the North Sea. The natural beauty of the island liberated his thoughts, and in the small hours of the morning one day he suddenly saw that he could work out the rules of atomic structure by focusing purely on the numbers related to the spectral lines instead of trying to make sense of unobservable features like atomic trajectories or electron orbits. His epiphany illustrates the novelty and thrill of scientific discovery: he reported, “I had the feeling that, through the surface of atomic phenomena, I was looking at a strangely beautiful interior, and felt almost giddy at the thought that I now had to probe this wealth of mathematical structures nature had so generously spread out before me.” Heisenberg had taken one of the most significant steps toward understanding nature since Einstein.


Back in Göttingen, Born and his young collaborator Pascual Jordan realized that the numerical relationships Heisenberg had found could be cast in the form of matrices; for all his brilliance, Heisenberg was untutored in the 19th-century mathematics that Born commanded. Through their joint work the three physicists laid out the first underpinnings of what became quantum mechanics. Called “matrix mechanics”, the scheme proved too cumbersome for most actual physical problems, and it was soon supplanted by wave mechanics - built on the Frenchman Louis de Broglie’s matter waves and formulated by the Austrian Erwin Schrödinger - whose Schrödinger equation became the standard tool for calculating quantum phenomena. But Heisenberg had set the stage.


The work for which Heisenberg is best known - the rare scientific principle that enters the vocabulary of laymen and achieves the ultimate distinction of being indiscriminately used out of context - came about in 1927 when he was working in Copenhagen. After battling with Heisenberg over fundamental philosophical differences in interpreting quantum behavior, Bohr went off on a skiing trip. Walking at night in the park near Bohr’s institute, Heisenberg realized that there was a fundamental indeterminacy in our knowledge of the atomic world. Thus was born the uncertainty principle, which says that certain pairs of attributes of subatomic entities, such as position and momentum, cannot both be known with arbitrary precision at the same time.
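For readers who like to see the statement in symbols, the position-momentum form of the principle is usually written as the simple inequality below; this is the standard modern formulation rather than Heisenberg’s original 1927 notation.

```latex
% Position-momentum uncertainty relation (standard modern form)
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
```

Here Δx and Δp are the statistical spreads in a particle’s position and momentum and ħ is the reduced Planck constant; squeezing one spread necessarily inflates the other.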


The uncertainty principle, along with Gödel’s incompleteness theorems and the finite speed of light, puts absolute constraints on our knowledge of reality. It is, in some sense, another nail in the coffin of human hubris and our quest for total knowledge of the world. And yet, as the physicist Hans Bethe told a lay audience half a century later, even within these fundamental constraints the principle lets us calculate all kinds of properties of quantum systems to an unprecedented degree of accuracy. The uncertainty principle, Bethe said, eliciting laughter, should really be called the “certainty principle”. Heisenberg received the Nobel Prize in 1932 “for the creation of quantum mechanics”. His place in the history of science was more than assured.


By the 1930s, after quantum mechanics had been completed as a correct description of nature by Heisenberg, Dirac, Pauli, Schrödinger and others, Heisenberg held a professorship at Leipzig, even as storm clouds gathered over Europe following Hitler’s appointment as chancellor in 1933. He trained many outstanding future physicists, including Edward Teller, Rudolf Peierls and Felix Bloch, and included his students in his social life, playing Beethoven and Schubert for them; music was always a palliative against the uncertain ways of the world. His friendship with Wolfgang Pauli remained productive, and together they made contributions to nuclear structure, providing important insights into nuclear forces and the interaction of light and matter (an early forerunner of what came to be known as quantum electrodynamics). Despite the dark times enveloping the continent, those seemed to be halcyon days: the accounts of the passionate discussions with Bohr, Pauli and others through the 1920s and 30s that Heisenberg recounts in two of his books (“Physics and Philosophy” and “Physics and Beyond”) are the ultimate testament to the life of the mind. He married a perceptive woman, Elisabeth, who would later write a penetrating memoir of her life with Werner.


Many of Heisenberg’s students and colleagues were hit hard by the laws banning Jews from teaching and other government positions, triggering an exodus whose most famous member was Einstein. In light of the attacks on ‘Jewish physics’ by anti-Semitic scientists like Philipp Lenard, Heisenberg faced a dilemma. On the one hand, he knew that the attacks on relativity and quantum mechanics were nonsensical; on the other, he felt a deep attachment to a Germany that, in his mind, went beyond the Nazis. Unlike others, Heisenberg did not go out of his way to protect his Jewish colleagues, although he kept the teaching of relativity alive in the face of constant attacks. In 1938, Heisenberg came under suspicion from Reinhard Heydrich, the sinister head of the SS security apparatus; he had been labeled a ‘White Jew’ in the SS press while being considered as a successor to Arnold Sommerfeld in Munich. Heinrich Himmler, whose mother knew Heisenberg’s, interceded on Heisenberg’s behalf, and after some uncomfortable interrogations by the SS, Heisenberg was exonerated and warned to keep his personal opinions and professional views separate.


In the summer of 1939 Heisenberg visited the United States, lecturing at Columbia, Michigan and other universities. Several of them offered him positions, and émigré colleagues like Samuel Goudsmit pleaded with him to emigrate, but Heisenberg, who professed his love for Germany, still did not seem to understand that his beloved country had become indistinguishable from the Nazi regime. He sailed for Germany only a few weeks before Hitler attacked Poland and World War 2 began.


By that time, the German scientists Otto Hahn and Fritz Strassmann had discovered nuclear fission. The American bomb effort that grew into the Manhattan Project began in response to a letter written to President Roosevelt by Leo Szilard and Einstein; one of their overriding reasons for asking the president to act was precisely the fear that Germany’s premier physicist, Werner Heisenberg, might be working on an atomic bomb. Heisenberg was indeed recruited into a group of scientists that also included his friend Carl Friedrich von Weizsäcker, with a mandate to explore the practical uses of atomic energy. Their main interest was in what they called a “uranium machine” - a nuclear reactor. During their work the German physicists made some critical missteps, including rejecting graphite as a moderator; this led them to rely on heavy water, a scarce material produced in a mountain factory in Norway that was later sabotaged by British and Norwegian commandos. They also put off Hitler’s armaments minister, Albert Speer, by asking for too little money and projecting long odds for the project’s success. Still, two competing teams, led by Kurt Diebner and by Heisenberg, did achieve neutron multiplication.


During the war, Heisenberg made a visit to his old mentor Niels Bohr that remains controversial to this day; it was memorialized in Michael Frayn’s play “Copenhagen.” Perhaps wishing to reconnect with Bohr, Heisenberg visited Copenhagen in September 1941. During a dinner at Bohr’s house, the two went out for a walk, and Heisenberg tried to feel Bohr out on the Allied effort to build an atomic bomb (Bohr knew nothing of it then, and only learned of the Manhattan Project after his escape to the Allies in 1943). Bohr was alarmed, since he interpreted Heisenberg’s questions to mean that the Germans were working on a bomb, and terminated the conversation right away. Each man later gave his own justification. Heisenberg claimed that he wanted to find out whether the effort to make a bomb, depending as it did on the enormous hurdles involved in separating the fissionable isotope of uranium from the non-fissionable one, was underway at all, and that if Bohr had indicated that the Allies were finding it too hard, he would have told his government the same thing. That seems self-serving and disingenuous, to say the least.


The fact remains that Heisenberg did not know how to make a bomb. He was aware that it would take a critical mass of uranium, and even that there would be a new element - plutonium - that would be far easier to separate chemically from uranium than uranium-235 was to separate from uranium-238. But all evidence indicates that he had failed to accurately calculate the critical mass - a few dozen pounds - that would have made the entire effort practical. Later he self-servingly claimed that his heart was just not in it, and even that he deliberately misled Hitler and Albert Speer into thinking the effort was not worth it. There is no evidence for the latter and scant evidence for the former. The truth may always remain a mystery. In the event, as the characters in Frayn’s play observe, the ironic truth is that Heisenberg never killed a single soul with an atomic bomb - the fear of which was the whole impetus for the Manhattan Project - while the Allies killed over a hundred and fifty thousand people with one.


As the war came to an end, Heisenberg set out on his bicycle, scrounging what he could to protect his family and trying to reach the American and British lines, risking his life as a potential deserter. He was finally captured by the Allies, along with other physicists like Otto Hahn, and housed in a palatial British estate called Farm Hall. Unknown to the German physicists, the entire house had been bugged and their conversations taped - Heisenberg naively told the others that the British were too gentlemanly to use the techniques of the Gestapo. When news of Hiroshima reached the scientists, there was disbelief and astonishment, although it seems to have stemmed more from incredulity that the Allies had managed to separate uranium-235 and produce plutonium-239 on an industrial scale than from any difficulty with calculating the critical mass.


After the war, Heisenberg embarked on his mission to rebuild German physics; that part of his original dream, at least, was left to him. He wrote insightful books on nuclear physics, philosophy and the halcyon days of the past, recounting his critical role in the creation of one of the most important intellectual frameworks the human mind has ever conceived. But he never again made a scientific contribution comparable to his earlier ones; his former friend Wolfgang Pauli and Richard Feynman, among others, scoffed at his efforts to formulate a comprehensive, unified theory of physics. And while his self-serving explanations of why he worked for the Nazis were met with scorn and disapproval, in that more gentlemanly era he continued to be received cordially, and sometimes defended, by old colleagues and students like Isidor Rabi and Edward Teller.


Werner Heisenberg died in 1976, aged 74. His creation of quantum mechanics and the uncertainty principle make him one of the most important theoretical physicists in history. But it is his work for Nazi Germany that makes him most relevant for the current era, because that work raised questions that are more pressing than ever. The most important of them is whether you can separate service to your country from service to its political regime. Heisenberg thought you could. Many of his colleagues insisted that you could not; once the government of a country becomes evil enough, working for your country and working for that government cannot be divorced. In the end, the answer may remain as uncertain as his famed principle.

Timeless Figures, #1: Albert Einstein

So much has been written about Albert Einstein over the decades that it is sometimes easy to take him for granted and forget what made him special. Most of us know the earth-shattering impact his special and general theories of relativity had on physics, but it is easier to forget the singular man whose character traits undoubtedly made these soaring works of the human intellect possible.

Einstein’s parents, Hermann and Pauline, were intelligent and diligent. Hermann had an affinity for mathematics but had to go into business to make ends meet; Pauline had an affinity for music and German literature. Their son inherited both talents, although later in life – perhaps repulsed by the Nazis’ obsession with heredity – he attributed his success simply to heightened curiosity rather than inheritance. But given the respectable but by no means outstanding intellectual attainments of his parents, it is hard to deny that Einstein was the product of a very lucky genetic lottery.


In later popular accounts, Einstein was typically portrayed as a lazy student lost in his own world, with a lackluster performance in school. But like other Einstein myths, this one is false; he consistently received the highest grades, especially in mathematics and the sciences. His bent for science was clear at an early age and was illustrated especially by two episodes whose vivid impressions he could recall even decades later. One was the gift of a compass when he was five years old and sick: Albert was enthralled by the fact that the needle always pointed north, and it alerted him to “something deeply hidden” in the laws of Nature. The second was when his uncle Jakob introduced him to algebra: “We go hunting for an animal whose name we don’t know, so we call it x. When we bag our game, we pounce on it and give it its right name.”


It was early on that Einstein demonstrated his two most important traits, more important even than the glittering intellect that had been bestowed upon him. One was an open disdain for conformity and authoritarianism, whether it was renouncing his German citizenship at the startling age of sixteen to avoid military conscription, impertinently questioning his professors or marrying his girlfriend Mileva Marić against his parents’ wishes. Later these same traits would lead him to discover a revolutionary theory of physics (assuming that the speed of light was constant took an enormous amount of courage), thumb his nose at German militarism and antisemitism, leave the country of his birth for good and carry on political activism in his adopted country. An accompanying trait was fearlessness: fearlessness at being mocked for his scientific and political beliefs. Both traits were enveloped in a self-effacing humor that let him see the absurd in life and the world. Undoubtedly these traits, and especially the humor, kept him sane at the brink of scientific discovery and in a world gone half-mad.


The story of Einstein’s indifferent university career is well known. He lived a Bohemian existence and preferred to hang out with his friends in coffee shops, discussing philosophy and science and playing music on his violin. His first attempt to get into the famed ETH in Zurich failed because he did not do well enough in the general part of the entrance examination, and he spent a year at a Swiss secondary school before being admitted. Once there, he met the twenty-year-old Mileva – the only woman in his class – and was instantly smitten. His letters to her are full of passionate pronouncements, dirty limericks and poetry.


Graduating from the ETH, Einstein had trouble finding a teaching or research position. There is a letter from around this time from an anguished Hermann to the distinguished physical chemist Wilhelm Ostwald, later a Nobel laureate, asking the good professor to give his son a job as an assistant. There is no reply from Ostwald on record. It was thanks to the father of his friend Marcel Grossmann that Einstein found a job as a patent clerk, “third class”, at the Swiss patent office in Bern. Grossmann was later to play a major role in Einstein’s mathematical enlightenment.


Einstein’s time at the patent office from 1902 to 1909 and his ‘annus mirabilis’ of 1905, in which he produced five revolutionary papers that forever changed our understanding of physics, is well documented; these included the paper in which he introduced his famous equation relating mass to energy, E = mc². In all of his science, Einstein’s two most important qualities were summed up by his biographer Abraham Pais: an appreciation for invariants (quantities that are independent of the frame of reference) and an appreciation for statistical fluctuations. The former would enable him to formulate relativity, the latter to explain phenomena like Brownian motion and Bose-Einstein condensation.


What is perhaps less appreciated is the contribution of his humdrum daily job to the theory of relativity. Relativity sprang not from abstract manipulation of algebraic symbols but from imaginative thought experiments concerning everyday objects - clocks, rulers, trains, elevators. It was his time at the patent office that immersed Einstein in the details of mechanical implements. His daily job involved sharpening the often fuzzy, vague, partially thought-out ideas of inventors to make them legally defensible and workable in practice. He was quite good at this analysis and received praise from his supervisor as one of the most competent young men in the office. It is impossible to overestimate the impact of this immersion in the details of technical contrivances on Einstein’s future work on the frontiers of physics. Crucially, the job at the patent office left him free to focus on his physics and family in the evenings.


Einstein’s family life was not happy, to say the least, and he was not by any means a model family man. Mileva, who had sacrificed a promising career of her own, took care of the house and children and acted as an important sounding board for Einstein’s initial ideas, so much so that controversy later arose over how much she might have contributed to them (there is no evidence that the key ideas came from anyone but Einstein). Einstein repaid her by omitting her name from the acknowledgments of his relativity paper, mentioning only his friend Michele Besso, another sounding board. The marriage was strained and often acrimonious, and Einstein wrecked it by beginning an affair with his cousin Elsa in 1912; he would later have several others. When Mileva learned of his adultery, she moved to Zurich, taking their sons Hans Albert and Eduard. In 1914 Einstein had presented her with a notoriously harsh list of conditions for remaining married to him; by 1919 the marriage was over, and as part of the divorce settlement he promised Mileva the money from the Nobel Prize he was confident he would one day win. He did win it two years later, amusingly not for relativity, which even then was too abstract for the prize committee, but for his explanation of the photoelectric effect, which grounded the nature of light in particles called photons.


After the annus mirabilis in which he formulated the special theory of relativity, Einstein spent ten hard years arriving at the general theory. Both ideas were revolutionary, but Paul Dirac later remarked that while other scientists like Poincaré and Lorentz might eventually have stumbled upon the first, it might have taken forever for anyone to discover the second; its tenets were that original and novel. Einstein’s formulation of general relativity replaced gravity as a Newtonian force with gravity as the curvature of spacetime itself. He arrived at this startling, unexpected conclusion the same way he had arrived at special relativity - through thought experiments. With special relativity, it was asking how the world would look if he rode on a beam of light, a question he had first posed to himself at sixteen. With general relativity, it was realizing that a man in free fall would not feel his own weight - he called this the happiest thought of his life.


Unlike special relativity, which could be explained with high school algebra - the physics was what was novel - general relativity needed mathematics that Einstein had never encountered. This is where his old ETH friend Marcel Grossmann was crucial. After Einstein explained the requirements of general relativity, most notably general covariance, which would make the laws of physics look the same in all reference frames, Grossmann told him that two branches of 19th-century mathematics would help him accomplish this. One was Riemannian geometry, developed by the German mathematical genius Bernhard Riemann, which extends geometry from flat planes to curved spaces. The other was the calculus of tensors, generalized extensions of vectors.
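The destination of that mathematical apprenticeship can be stated compactly. In modern tensor notation (a standard textbook form, not the notation Einstein and Grossmann themselves used), the field equations of general relativity read:

```latex
% Einstein field equations (1915), in modern notation:
% the curvature of spacetime (left side) is determined by matter and energy (right side)
R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} = \frac{8\pi G}{c^{4}} \, T_{\mu\nu}
```

Here the Ricci tensor R_{μν} and the scalar R encode spacetime curvature, g_{μν} is the metric, and T_{μν} describes matter and energy; the tensor form is precisely what guarantees the general covariance Einstein was after.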


That Einstein needed Grossmann’s help is a testament to his greatness as a physicist rather than a mathematician. It also explains why there was no scientist quite like him in the 20th century: while physicists like Paul Dirac, Wolfgang Pauli and Werner Heisenberg were more mathematically adept, Einstein’s feel for the physical picture and the thought experiment was unsurpassed. Among other physicists, probably only Richard Feynman and Enrico Fermi came close to this facility for visualizing the physical picture. In his later years this facility left Einstein, and his failures can be traced in part to an over-reliance on the kind of abstract mathematics he had wisely avoided when young.


In 1915, with war engulfing the continent, Einstein put the finishing touches on general relativity as a professor in Berlin; when his equations explained the longstanding puzzle of the anomalous precession of Mercury’s orbit, the shock of their truth made him feel as if something had snapped inside him. Einstein was deeply dismayed by Germany’s bombastic militarism and march toward war. His pacifism writ large, he refused to sign the manifesto supporting the war endorsed by ninety-three German scientific and artistic luminaries, including Nobel laureates like Max Planck, Paul Ehrlich and Emil Fischer. Because of the war, experimental confirmation of general relativity had to wait until 1919, when an expedition to Africa led by the British astronomer Arthur Eddington confirmed a key prediction of the theory, observable only during a total solar eclipse - the bending of starlight by the sun.


The confirmation catapulted Einstein to the status of the world’s most famous scientist. Crowds thronged to hear him speak, and Eddington’s validation of his theory was also seen as the rejoining of nations that had been sundered by a horrific war. On lecture tours of Asia and America, Einstein was welcomed as a celebrity; he met Charlie Chaplin and Upton Sinclair, and parents pushed their way through crowds to have their children meet him. But at home, where the “stab in the back” legend attributing Germany’s defeat to communists and Jews was already being swallowed by many - including a young corporal named Adolf Hitler who had been temporarily blinded by poison gas - Einstein found an increasingly hostile reception. The Nobel laureates Johannes Stark and Philipp Lenard had started agitating against him, and the sullen mood of a Germany smarting under the harsh terms of the Treaty of Versailles made it easy for the population to look for scapegoats. Einstein’s friend Walther Rathenau, whose organization of raw materials had made it possible for Germany to fight on as long as it did, was assassinated in 1922 by ultranationalists. Because of his internationalism and pacifism during the war, Einstein was a marked man and had good cause to fear for his life.


Physics temporarily quelled the conflict. The 1920s offered a fascinating contrast between soaring inflation and crippling economic deprivation on the one hand and unprecedented developments in physics on the other. The creation of quantum mechanics, beginning with Niels Bohr’s formulation of the structure of the atom in 1913 and continuing with the work of Max Born, Werner Heisenberg, Paul Dirac and others, provided new fodder for Einstein. The same Einstein who had been a revolutionary in relativity became a conservative about quantum mechanics, although his position was later oversimplified. He never rejected the success of quantum mechanics - through his explanation of the photoelectric effect he was, after all, one of its originators - but because of the intrinsic uncertainty and probabilistic interpretations it introduced, he never believed it was a deep, final explanation of the world’s workings. His skepticism did not stop him from making further major contributions: he helped the Indian physicist Satyendra Nath Bose develop a novel form of quantum statistics in 1924, and his earlier work on stimulated emission laid the foundations of what later became the laser.


But his philosophical problems with quantum mechanics continued for the rest of his life. They also led to a deep friendship with Niels Bohr. When Bohr had formulated his theory of atomic structure, Einstein had called it the “highest form of musicality in the sphere of thought”. Bohr was as deep a thinker about physics as Einstein; the two became intimate friends as well as spirited adversaries, a relationship which held fast until the end of their lives. Each time Einstein came up with a purported violation of a fundamental quantum principle like Heisenberg’s uncertainty principle - most famously at the 1927 Solvay Conference - Bohr would reply with a rejoinder that sometimes, embarrassingly for Einstein, relied on Einstein’s own theories of relativity. Bohr’s essay “Discussions with Einstein on Epistemological Problems in Atomic Physics” remains the most complete account of their disagreements.


In the 1930s, storm clouds gathered over Europe again as the Nazi party won increasingly large shares of the vote in the Reichstag elections. In January 1933, through perfectly legal means abetted by a foolish and deluded Hindenburg and his associates, Adolf Hitler became chancellor of Germany. A month later Einstein, who had endured increasing attacks and personal antisemitism since the 1920s and who was visiting the United States, announced that he would not return to Germany. That March he renounced his German citizenship for the second time; he would never again set foot in the country of his birth and his greatest accomplishments. By then, knowing which way the wind was blowing, Einstein had already discussed positions at Oxford, Caltech and the newly conceived Institute for Advanced Study, whose founding director, Abraham Flexner, was an ardent believer in what he called the “usefulness of useless knowledge.” With no teaching or administrative duties, Einstein accepted the IAS offer, becoming the baggy-pants-wearing, shaggy-haired, affable sage of the small, provincial town of Princeton, NJ, for the next two decades.


Einstein may have been a genius, but he was certainly not immune to mistakes. Two stand out, not so much because they were failures as because they illuminate his mode of thinking. In 1917, Einstein applied his general theory of relativity to the entire universe, essentially founding modern cosmology. To his dismay, he found that his model universe would not stay put: the equations demanded that it either expand or contract. To force it to remain static, he introduced a “cosmological constant”, an extra term that counterbalanced gravity. But in 1922 the Russian physicist Alexander Friedmann showed that Einstein’s equations admit perfectly good non-static, expanding solutions. Einstein reportedly called the cosmological constant his “biggest blunder”, and by the 1930s, thanks to the pioneering observations of the American astronomer Edwin Hubble, he had accepted the notion of an expanding universe. In the 1990s, a positive value of the cosmological constant acquired new meaning when independent teams found that the expansion of the universe is accelerating.
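To see where the fix entered, note that the cosmological constant appears as a single extra term in the field equations quoted earlier (again in modern, not Einstein’s original, notation):

```latex
% Field equations with Einstein's cosmological constant term added
R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} + \Lambda \, g_{\mu\nu} = \frac{8\pi G}{c^{4}} \, T_{\mu\nu}
```

A positive Λ acts as a repulsion that can balance gravitational attraction in a static universe; today the very same term is the simplest mathematical description of the dark energy driving the accelerating expansion.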


Einstein’s second mistake is more interesting: he never accepted the existence of black holes and even wrote a paper arguing against them. Freeman Dyson’s explanation was that by the late 1930s, when Robert Oppenheimer and his students postulated black holes, Einstein had already become the mathematical Platonist of his later years; black holes, with their singularities, were simply too ugly for him. His abhorrence of black holes is a good example of how an excessive emphasis on preconceived beauty can blind even great minds to the logical consequences of their own theories.


Einstein’s time in Princeton was far from the most productive of his life. He was a celebrity whose advice was sought by dignitaries and crackpots alike. He met FDR and formed a strong relationship with his Jewish Secretary of the Treasury, Henry Morgenthau. He spoke out regularly against the Nazi regime even as the Nazis ransacked his house and burnt his books. But he was no longer at the frontier of physics, which had shifted toward nuclear physics. With the discovery of the neutron in 1932, physicists had a new tool with which to probe the interior of the atom, and scientists in Italy, Germany, Great Britain and elsewhere began investigating the effects of neutrons on different nuclei. At the end of 1938, the German scientists Otto Hahn and Fritz Strassmann discovered nuclear fission, and physicists across Europe and America quickly realized the possibility of an atomic bomb. Foremost among them was the Hungarian-born physicist Leo Szilard, who had conceived of a nuclear chain reaction while waiting at a traffic light in London in 1933. Szilard and Einstein went back a long way, to their days in Berlin, when they had filed a joint patent for an intrinsically safe refrigerator. Szilard grasped the urgency of the United States building a bomb before Germany did, and sought out Einstein as the only scientist with enough stature to convey the message to President Roosevelt. The famous Einstein-Szilard letter did convince FDR to start a nascent atomic bomb program, which kicked into high gear and became the Manhattan Project after Pearl Harbor. But ironically, Einstein, because of his German and pacifist background, was never granted a security clearance or invited to join the project.


Later Einstein rued the violent uses to which his science had been put, quipping that he should have become a watchmaker or a plumber; in an obituary, Oppenheimer puckishly suggested that Einstein had no idea how demanding an American plumber’s job was. His disdain for nuclear weapons led Einstein to become a powerful voice for peace and sanity in a world growing increasingly paranoid in the Cold War. He addressed radio audiences, supported civil rights and left-leaning dissidents - among them the physicist David Bohm, caught up in the postwar red scare - and agitated against McCarthy’s thuggery. When Oppenheimer, who as director of the Institute for Advanced Study was technically Einstein’s boss, lost his security clearance in a witch hunt, Einstein advised him to fling the clearance back at an ungrateful government. Most consequentially, Einstein, who had embraced the cause of Zionism for decades, supported the creation of a home for the Jewish people in Palestine. But he would almost certainly have been horrified by some of Israel’s right-wing nationalism today; as his later letters indicate, he always wanted Palestine to be equally free for Jews and Arabs, with open entry for all.


Einstein’s scientific and political rebellion won him few friends, although as the world’s most famous scientist he continued to be idolized. In physics, he let the nuclear and particle physics revolution sweep past him and kept expressing his skepticism of quantum mechanics. The young revolutionary had become an old conservative, leading Oppenheimer to trenchantly remark that he was a “lighthouse, not a beacon.” With his trademark self-effacing humor, Einstein was well aware that he was being treated more like a sacred relic than a practicing scientist; in 1942 he described himself as “a lonely old man who is displayed now and then as a curiosity because he doesn’t wear socks.” Lonely after Elsa’s death in 1936, he kept scribbling equations in quest of a grand unified theory of gravity and electromagnetism, not realizing that any such theory would have to reckon with the strong and weak nuclear forces that were only then being revealed.


On April 17, 1955, Einstein suffered internal bleeding from a ruptured abdominal aortic aneurysm. Surgery might have prolonged his life for a short while, but he refused, saying, “I want to go when I want. It is tasteless to prolong life artificially. I have done my share; it is time to go. I will do it elegantly.” He died in Princeton Hospital the next day.


Einstein’s life illustrates many lessons, but none more than the importance of curiosity, fearlessness and being true to oneself. While the world changed momentously during his lifetime, Einstein did not change in his essentials. His love of science, music and friendship, his commitment to pacifism and the international brotherhood of men and women, and his almost religious (though entirely secular) feeling for the beauty and unity of nature’s laws stayed with him all his life. We are unlikely to see another like him for a long time, but he leaves us with lessons worth emulating for a lifetime.

Areopagitica and the problem of regulating AI

How do we regulate a revolutionary new technology with great potential for harm and good? A 380-year-old polemic provides guidance.

In 1644, John Milton published “Areopagitica”, a speech addressed to the English Parliament arguing for the unlicensed printing of books and against the order requiring them to be approved by censors before publication. Though never delivered aloud, Milton’s polemic became one of the most brilliant defenses of free expression ever written.

Milton rightly recognized the great potential books had and the dangers of smothering that potential before they were published. He did not mince words:

“For books are not absolutely dead things, but …do preserve as in a vial the purest efficacy and extraction of that living intellect that bred them. I know they are as lively, and as vigorously productive, as those fabulous Dragon’s teeth; and being sown up and down, may chance to spring up armed men….Yet on the other hand unless wariness be used, as good almost kill a Man as kill a good Book; who kills a Man kills a reasonable creature, God’s Image; but he who destroys a good Book, kills reason itself, kills the Image of God, as it were in the eye. Many a man lives a burden to the Earth; but a good Book is the precious life-blood of a master-spirit, embalmed and treasured up on purpose to a life beyond life.”

Apart from stifling free expression, the fundamental problem with regulation, as Milton presciently recognized, is that the good effects of any technology cannot be cleanly separated from the bad; every technology is what we now call dual-use. Reaching all the way back to Genesis and original sin, Milton wrote:

“Good and evil we know in the field of this world grow up together almost inseparably; and the knowledge of good is so involved and interwoven with the knowledge of evil, and in so many cunning resemblances hardly to be discerned, that those confused seeds which were imposed upon Psyche as an incessant labour to cull out, and sort asunder, were not more intermixed. It was from out the rind of one apple tasted, that the knowledge of good and evil, as two twins cleaving together, leaped forth into the world.”

In important ways, “Areopagitica” is a blueprint for controlling potentially destructive modern technologies. Freeman Dyson applied the argument to propose commonsense legislation in the field of recombinant DNA technology. And today, I think, the argument applies cogently to AI.

AI is such a new technology that its benefits and harms are largely unknown and hard to distinguish from each other. In some cases we can at least name them: image recognition, for instance, powers useful applications ranging from weather assessment to cancer cell analysis, but it can be and is also used for surveillance. Even then, the good cannot be cleanly separated from the bad. More importantly, as image recognition itself demonstrates, it is impossible to know exactly what AI will be used for until there is an opportunity to see some of its real-world applications. Restricting AI before those applications are known will almost certainly stamp out the good ones along with the bad.

It is in the context of Areopagitica and the inherent difficulty of regulating a technology before its potential is known that I find myself concerned about some of the new regulation being proposed for AI, especially California’s SB-1047, which has already passed the state Senate and made its way to the Assembly, with a decision expected at the end of this month.

The bill proposes some commonsense measures, such as more transparent cost accounting and documentation. But it also imposes what seem like arbitrary restrictions on AI models. For instance, it would require regulation and paperwork for models that cost $100 million or more per training run. While this threshold exempts companies training cheaper models, the problem in fact runs the other way: nothing stops cheaper models from being used for nefarious purposes.

Let’s take a concrete example. In the field of chemical synthesis, AI models are increasingly used for what is called retrosynthesis: virtually breaking down a complex molecule into its constituent building blocks and raw materials (as a crude analogy, breaking sodium chloride down into sodium and chlorine would be a retrosynthesis). One can use retrosynthesis algorithms to find the cheapest or most environmentally friendly route to a target molecule like a drug, a pesticide or an energy material. Run in reverse, the same algorithms do forward planning, predicting from a set of building blocks what the resulting molecule would look like. But nothing stops them from doing the same analysis on a nerve gas, a paralytic or an explosive; it’s the same science and the same code. Importantly, much of this analysis now runs as software on an ordinary laptop, with models trained on datasets of a few million data points: small potatoes in the world of AI. Almost none of these models cost anywhere close to $100 million, which puts them in the hands of small businesses, graduate students and - if and when they choose to use them - malicious state and non-state actors.
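To make the “small potatoes” point concrete, here is a minimal sketch of retrosynthesis-style fragmentation using the open-source RDKit library. It is a deliberately simplified, rule-based stand-in for the learned retrosynthesis models described above, and the aspirin SMILES string is just an illustrative input; the whole thing runs in seconds on an ordinary laptop.

```python
# A toy illustration of retrosynthetic-style fragmentation using RDKit's
# rule-based BRICS decomposition. This is a simplified stand-in for the
# machine-learned retrosynthesis models discussed in the text, meant only
# to show how little compute such an analysis needs.
from rdkit import Chem
from rdkit.Chem import BRICS

# Aspirin, written as a SMILES string (an illustrative target molecule)
target = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")

# Break the molecule into plausible building blocks using BRICS rules
fragments = sorted(BRICS.BRICSDecompose(target))

print("Suggested building blocks:")
for smiles in fragments:
    print(" ", smiles)
```

The point is not that this snippet is dangerous - it isn’t - but that the same inexpensive class of tools, scaled up with modest training data, serves benign and malicious planning alike.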

Thus, restricting AI regulation to expensive models might exempt smaller actors, but it is precisely that exemption that would let those smaller actors put the technology to bad ends. Critics are also right that the law would effectively price out the good small actors, who cannot afford the legal paperwork that bigger corporations can. The arbitrary $100 million cap therefore does not address the root of the problem. The same issue applies to another restriction, one that also appears in European AI regulation: capping a model’s total training compute at 10^26 floating-point operations (FLOPs). Sticking with the example of AI retrosynthesis models, such models can be trained and run with far less computing power and would still produce useful results.
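To get a feel for how far below such thresholds these models sit, here is a back-of-the-envelope sketch using the common “6 × parameters × tokens” approximation for training compute; the example model sizes are illustrative assumptions, not figures from any bill.

```python
# Back-of-the-envelope training-compute estimates using the common
# approximation: total FLOPs ~ 6 * (number of parameters) * (training tokens).
# The example model sizes below are illustrative assumptions.

THRESHOLD_FLOPS = 1e26  # the compute threshold discussed in the text

def training_flops(params: float, tokens: float) -> float:
    """Rough estimate of total training compute in FLOPs."""
    return 6.0 * params * tokens

examples = {
    "small chemistry model (100M params, ~1B tokens of reaction data)": (1e8, 1e9),
    "mid-sized open language model (7B params, 2T tokens)": (7e9, 2e12),
    "frontier-scale model (1T params, 15T tokens)": (1e12, 1.5e13),
}

for name, (params, tokens) in examples.items():
    flops = training_flops(params, tokens)
    verdict = "above" if flops >= THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({verdict} the 1e26 threshold)")
```

On this rough accounting, the retrosynthesis-style models discussed above sit many orders of magnitude below the threshold; that is precisely the worry, since capable but cheap models escape scrutiny entirely.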

What then is the correct way to regulate AI? Quite apart from the details, one thing is clear: we should be able to experiment, run laboratory-scale models and at least try to probe the boundaries of the risks before we decide to stifle this or that model or rein in computing power. Once again, Milton anticipated such sentiments. As a 17th-century intellectual it would have been a stretch for him to call for the completely free dissemination of knowledge; he was well aware of the blood that had been shed in Europe’s religious conflicts during his lifetime. Instead, he proposed that there could be checks and restrictions on books, but only after they had been published:

“If then the Order shall not be vain and frustrate, behold a new labour, Lords and Commons, ye must repeal and proscribe all scandalous and unlicensed books already printed and divulged; after ye have drawn them up into a list, that all may know which are condemned, and which not.”

Thus, Milton argued that books should not be stifled at the moment of their creation; if the censors saw a need, they could be restrained at the point of use. The creation-versus-use distinction is a sensible one for thinking about AI regulation as well. But even that distinction doesn’t fully resolve the issue, since the uses of AI are myriad, most of them beneficial, and almost all intrinsically dual-use. Regulating the uses of AI would therefore still mean interfering in many aspects of its development and deployment. And what about the legal and commercial paperwork, the extensive regulatory framework and the army of bureaucrats that would be needed to enforce such legislation? The problem with legislation is that it easily oversteps its boundaries, slides down slippery slopes and gradually elbows its way into matters for which it was never intended. Milton shrewdly recognized this overreach when he asked what else besides printing might be up for regulation:

“If we think to regulate printing, thereby to rectify manners, we must regulate all recreations and pastimes, all that is delightful to man. No music must be heard, no song be set or sung, but what is grave and Doric. There must be licensing dancers, that no gesture, motion, or deportment be taught our youth but what by their allowance shall be thought honest; for such Plato was provided of; it will ask more than the work of twenty licensers to examine all the lutes, the violins, and the guitars in every house; they must not be suffered to prattle as they do, but must be licensed what they may say. And who shall silence all the airs and madrigals that whisper softness in chambers? The windows also, and the balconies must be thought on; there are shrewd books, with dangerous frontispieces, set to sale; who shall prohibit them, shall twenty licensers?”

This passage shows that not only was John Milton a great writer and polemicist, but he also had a fine sense of humor. Areopagitica shows us that if we are to confront the problem of AI legislation, we must do it not just with good sense but with a recognition of the absurdities which too much regulation may bring.

The proponents of AI regulation who fear the many problems the technology might create are well-meaning, but they adhere unduly to the Precautionary Principle, which says that it is sensible to restrict something when its risks are not yet known. I would like to suggest that we replace the Precautionary Principle with what I call the Adventure Principle: we should embrace risks rather than run from them, because of the benefits that exploration brings. Without the Adventure Principle, Columbus, Cook, Heyerdahl and Armstrong would never have set out into the great unknown, and Edison, Jobs, Gates and Musk would never have embarked on big technological projects. Just as with AI, these explorers and builders faced significant risks of death and destruction, but they understood that with immense risks come immense benefits, and by the rational light of science and calculation they judged that the risks could be managed. They were right.

Ultimately there is no foolproof “pre-release” legislation or restriction that would stop the bad uses of models while still enabling the good ones. Milton’s Areopagitica does not tell us what the right AI legislation would look like, although it hints at regulating use rather than creation. But it makes a resounding case for recognizing the problems that such legislation may create. Regulating AI before we have a chance to see what it can do would be like imprisoning a child before he grows into a young man. Perhaps a better approach is the one Faraday adopted when Gladstone purportedly asked him what the use of electricity was: “Someday you may tax it”, was Faraday’s response.

Some say that the potential risks from AI are too great to allow such a liberal approach. But the potential risks from almost any groundbreaking technology of the last few centuries – printing, electricity, fossil fuels, automobiles, nuclear energy, gene editing – are no different. Premature regulation would prevent us from unleashing AI’s potential to confront our most pressing challenges. And if humanity one day finds itself grasping at last-ditch efforts to stave off extinction from known problems, the recognition of the irony of having smothered AI out of fear of unknown problems will come too late to save us.