Field of Science

Mathematics, And The Excellence Of The Life It Brings

Shing-Tung Yau and Eugenio Calabi
Mathematics and music have a pristine, otherworldly beauty that is very unlike that found in other human endeavors. Both of them seem to exhibit an internal structure, a unique concatenation of qualities that lives in a world of its own, independent of their creators. But mathematics might be so completely unique in this regard that its practitioners have seriously questioned whether mathematical facts, axioms and theorems simply exist on their own, waiting to be discovered rather than invented. Arthur Rubinstein and Andre Previn's performance of Chopin's second piano concerto sends unadulterated jolts of pleasure through my mind every time I listen to it, but I don't for a moment doubt that those notes would not exist were it not for the existence of Chopin, Rubinstein and Previn. I am not sure I could say the same about Euler's beautiful identity connecting three of the most fundamental constants in math and nature – e, pi and i. That succinct arrangement of symbols seems to simply be, waiting for Euler to chance upon it, the way a constellation of stars has waited for billions of years for an astronomer to find it.
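Written out, the identity is about as compact as a mathematical statement can be:

$$ e^{i\pi} + 1 = 0 $$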
The beauty of music and mathematics is that anyone can catch a glimpse of this timelessness of ideas, and even someone untrained in these fields can appreciate the basics. The most shattering intellectual moment of my life was when, in my last year of high school, I read in George Gamow's "One, Two, Three, Infinity" about the fact that different infinities can actually be compared. Until then infinity had been a single, indivisible concept to me, like the color red. The question of whether one infinity could be "larger" than another sounded as preposterous to me as whether one kind of red was better than another. But here was the story of an entire superstructure of infinities which could be compared, studied and taken apart, and whose very existence raised one of the most famous, and still unsolved, problems in math – the Continuum Hypothesis. The day I read about this fact in Gamow's book, something changed in my mind; I got the feeling that some small combination of neuronal gears had permanently shifted, altering forever a part of my perspective on the world.
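The comparison Gamow describes can be compressed into one line of modern notation (the standard statement, not Gamow's wording):

$$ |\mathbb{N}| = \aleph_0 \;<\; 2^{\aleph_0} = |\mathbb{R}| $$

Cantor's diagonal argument shows that no list of real numbers can ever be complete, so the infinity of the reals is strictly larger than the infinity of the counting numbers; the Continuum Hypothesis asks whether any infinity sits strictly between the two.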
Anyone who has seriously studied mathematics for any extended period of time also knows the complete immersion that can come with this study. In my second year of college I saw a copy of George F. Simmons's book "Introduction to Topology and Modern Analysis" at the house of a mathematically gifted friend and asked to borrow it out of sheer curiosity. Until then mathematics had mainly been a matter of utilitarian value to me, and most of my formal studies had been grounded in the kind of practical, calculus-based math that is required for solving problems in chemistry and physics. But Gamow's exposition of countable and uncountable infinities had whetted my mind for more abstract stuff. The greatest strength of Simmons's book is that it is entirely self-contained, starting with the bare basics of set theory and building up gradually. It's also marvelously succinct, almost austere in the brevity of its proofs.
The book swept me off my feet, and the first time I started on it I worked through the theorems and problems right through the night; I can still see myself sitting at the table, the halo of a glaringly bright table lamp enclosing me in this special world of mathematical ideas, my grandmother sleeping outside this world in the small room that the two of us shared. The next night was not much different. After that I was seized by an intense desire to understand the fundamentals of topology – compactness, connectedness, metric and topological spaces, the Heine-Borel theorem, the whole works. Topology seemed to me like a cathedral – in fact the very word "spaces", as in "vector spaces" or "topological spaces", conjured up (and still does) an intricate, self-reinforcing cathedral of axioms, corollaries, lemmas and theorems resting on certain rules, each elegantly supporting the rest, gradually built – or perhaps discovered – through the ages by its great practitioners, practitioners like Cantor, Riemann, Hilbert and Banach. It appeared like a great machine with perfectly enmeshed gears flawlessly fitting into each other and enabling great feats of mechanical efficiency and beauty. I was fortunate to find an enthusiastic professor who trained students for the mathematical olympiad, and he started spending several hours with me every week explaining proofs and helping me get over roadblocks. This was followed by many evenings of study and discussion, partly with a like-minded friend who had been inspired to get his own copy of Simmons's book. I kept up the routine for several months and got as far as the Stone-Weierstrass theorem before other engagements intruded on my time – I wasn't majoring in mathematics after all. But the intellectual experience had been memorable, unforgettable.
If even a lowly non-mathematician like myself could be so taken by the intricacies of higher mathematics, I can only dimly imagine the reveries experienced by some of math's greatest practitioners, one of whom is Shing-Tung Yau. Yau is a professor at Harvard and one of the world's greatest mathematicians. His specialty is geometry and topology. Yau's claim to fame is in bridging geometry and topology with differential equations, essentially founding the discipline of geometric analysis, although perhaps his greatest legacy will be forming novel, startling connections between physics and mathematics and opening up a dialogue that has had a long and often contentious history. For these efforts he won the Fields Medal in 1982, becoming the first mathematician of Chinese descent to do so.
The connection between algebra and geometry is an ancient one. In creating analytical or Cartesian geometry for instance, Rene Descartes had found a way to represent the elements of Euclidean geometry, entities like points and lines, as algebraic coordinates. This was a revolutionary discovery, allowing basic geometric entities like circles and ellipses to be described by algebraic equations. Analytical geometry lies at the very foundation of mathematics, enabling many other fields like multivariate calculus and linear algebra. The culmination of analytical geometry was in the field of differential geometry which uses techniques from algebra and calculus to describe geometric objects, especially curved ones.
The difference between geometry and topology is essentially that the former is about local properties while the latter is about global ones, about the big picture. Thus, in the analogy often given to illustrate what topology is about, while a coffee cup and a donut are different geometric objects, they are identical topological objects, because one can be converted into the other simply by stretching, expanding and contracting, without having to tear or cut any part. Perhaps the most interesting of Yau's contributions to differential geometry and topology is something called a Calabi-Yau manifold. Loosely speaking, a manifold is a topological space that is "locally Euclidean" – it looks essentially flat in a small enough neighborhood of every point. A good analogy is with ancient views of the Earth as flat contrasted with modern views of a round Earth; the discrepancy arises from the fact that even a round Earth looks locally flat. Manifolds are not just interesting mathematically but are of great importance in physics, and especially in general relativity. For instance, Einstein used the theory of Riemannian manifolds to deal with the curvature of spacetime. Calabi-Yau manifolds are special manifolds that gained importance when they were found to represent "hidden" dimensions in string theory. But this is merely one of Yau's many seminal contributions to math over a long and fascinating life as described in his memoir, "The Shape of a Life".
That life started in China in 1949, the year of the Communist revolution. Yau was one of nine children. His parents were both very intelligent and highly committed to the education of their children. His father in particular was a scholarly role model for Yau: a professor who taught many disciplines, including languages and history, who was well versed in poetry and philosophy and always had a ready store of Taoist sayings and Confucian parables for his children. Shing-Tung's parents lost most of their property during the revolution and, like many others, migrated to Hong Kong in search of a better life. That better life was still very hard. Yau's parents moved several times, and most of the houses he lived in were either overcrowded or in the wilderness, without electricity or running water, and sometimes infested by snakes and other animals. School was several miles away and had to be reached through a combination of walking and public transportation. And yet it seems to have been a generally happy childhood, sustained by stories and by playmates in the form of several brothers and sisters, of whom Yau was especially close to one older sister. The poverty and hardscrabble life also engendered a tremendous capacity for persistence and hard work in Yau. This capacity was particularly sharpened after Yau's father tragically passed away from cancer when Yau was fourteen. Yau was devastated by his remarkable father's passing, and he resolved to apply the lessons this role model had imparted as diligently as possible. His mother was a tremendous influence as well; she worked odd jobs to support her large family. Later she moved to the United States with her son and had the pleasure of watching him become successful beyond her dreams.
While not particularly prodigious in mathematics in his early years, Yau started shining in high school. By a quirk of fate – he did less than ideally in a national examination after spending too much time with a street gang – Yau gained admission to a school named Pui Ching that, remarkably enough, went on to produce a future Nobel laureate, three future U.S. National Medal of Science winners and eight future members of the U.S. National Academy of Sciences. This is an astonishing record for a fairly provincial school in Hong Kong, similar to the records of future eminent scientists produced by the Bronx High School of Science in New York City. One factor that played into the school's success, as well as that of the Chinese University of Hong Kong which Yau attended for college, was the presence of visiting American professors or native-born professors who had studied at American universities. One such professor, Stephen Salaff, recommended Yau for graduate school at the University of California, Berkeley, and Yau's career was launched. A decisive factor in Yau's admission was a strong recommendation by S. S. Chern, Berkeley's eminent geometer and perhaps the most eminent mathematician of Chinese descent in the world at the time. Chern's relationship with Yau looms large in the book, perhaps too large; throughout his life Chern was both father figure and mentor to Yau as well as nemesis and adversary. Here Yau also met his wife Yu-Yun, an accomplished physicist; curiously enough, in deference to traditional Chinese culture, although he first saw her during his first week in the library, he waited several years, until someone made a formal introduction, before asking her out. The two also lived apart for several years while Yau, uncommonly for a mathematician, bounced between universities like Stanford, the Institute for Advanced Study in Princeton and UCSD before finally settling down at Harvard. Their two sons are successful in their own right, one being a biochemist and the other a doctor.
After graduating, Yau made a variety of significant contributions to differential geometry and geometric analysis. These included proving the Calabi conjecture, which entails proving the existence of Riemannian metrics with certain properties on complex manifolds. This was a years-long struggle emblematic of great mathematical achievements, and like many great mathematical achievements it involved some blind detours, including Yau's mistaken early results that seemed to indicate counterexamples to the conjecture. A particularly key contribution by Yau of great relevance to physics was a purely mathematical proof of the so-called positive mass conjecture. This conjecture, taken at face value as obvious by physicists for a long time, says that the total mass of an isolated physical system, arising from both matter and gravitation, is positive. This includes our universe. To prove it, Yau and his collaborator Richard Schoen constructed an ingenious argument: they first proved that if the average curvature of the spacetime corresponding to such a system is positive, then the mass is also positive. They then constructed a spacetime with positive curvature that had the same mass as our universe. Put together, the two results, which showcased a classic argument by analogy, showed that the mass of our universe must also be positive.
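Stated in its modern form (a standard formulation of the theorem, not a quotation from the book or from Yau's papers): for an isolated gravitating system – technically, asymptotically flat initial data whose local energy density is nonnegative – the total mass measured at infinity satisfies

$$ m_{\mathrm{ADM}} \;\ge\; 0, $$

with equality only for flat, empty Minkowski spacetime.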
Yau and Schoen's results inaugurated a new era of interaction between physics and mathematics. That relationship, although long and profound, had often been fraught; David Hilbert, when asked whether relations between physicists and mathematicians were bad, famously quipped that they could not be, since for that the two groups would actually have to talk to each other. Most of the breakthroughs in twentieth-century physics were made using what we might call nineteenth-century mathematics – calculus, differential equations and matrix theory. Yau and others' work showed that there were still novel approaches from pure math, based on topology and geometry, that could contribute to advances in physics. Roger Penrose, who was trained in the classical tradition of British mathematics, had imbibed these fields, and he was able to use insights from them to make groundbreaking contributions to general relativity.
This line of discovery especially took off when physicists working on string theory in the 1980s discovered that the hidden dimensions postulated by the theory could essentially be modeled as Calabi-Yau manifolds. This was one of those happy circumstances in which a purely mathematical discovery, made out of intellectual curiosity, turned out to have deep ramifications for physics. There are also examples where string theory has spurred developments in pure mathematics. There was again precedent for such unexpected relationships – for instance, the theory of Lie groups turned out to have completely unexpected connections with particle physics – but Yau and others' work showed the great value of pure, curiosity-driven research in mathematics that could spark a robust back and forth with physics. One aspect of string theory that is missing from Yau's account is the increasing criticism of the field as being unmoored from experiment or even from experimental prediction. But notwithstanding this valid criticism, it is clear that string theory provides a great example of how, just as mathematics has traditionally contributed to physics, discoveries in physics can play back into pure mathematics.
Along with straddling the worlds of math and physics, Yau has also straddled two other worlds – those of China and the United States. Although he grew up in Hong Kong, his parents' strong Chinese roots gave him a deep connection to his ancestral homeland. He visited China several times a year, handpicked Chinese students to study in the US and collaborated extensively with Chinese researchers; in fact almost two-thirds of his students and collaborators are Chinese, and in what was a harbinger of current times, the CIA asked him about his students multiple times before realizing that their work was too obscure and pure to impact national security – one of the unexpected ancillary perks of working in pure mathematics. He also criticizes the Chinese system as being too enamored of prizes, money and fame rather than the pure intellectual satisfaction that comes from pursuing science for its own sake. After he won the Fields Medal, Yau's Chinese sojourns became high profile and drew him into many controversies involving funding, favoritism and committee work, several of them involving his former mentor Chern. At least a dozen personal controversies dot the narrative in the book, and while they make for fascinating reading – they demonstrate that even the most abstract mathematics is not free from the very human qualities of personal jealousy, feuds, nepotism and claims of credit – methinks that Yau sometimes doth protest too much, especially since we only hear his side of the story and seldom the other.
Perhaps the most significant controversy came about when Sylvia Nasar, author of "A Beautiful Mind", claimed in a widely read New Yorker article that Yau had tried, through his students, to steal credit from the famously reclusive Russian mathematician Grigori Perelman for his stunning, completely unexpected proof of the century-old Poincaré Conjecture. Perelman had not worked out all the details of his proof, and he had built on very important earlier work by the American mathematician Richard Hamilton. Yau recognized that Perelman's results would have been impossible without Hamilton's work, and went out of his way to praise Hamilton. He also recruited two Chinese mathematicians to work out a mammoth, 300-page exposition of the proof that filled in some gaps. There is no doubt that the proof was Perelman's, but Yau's extensive maneuverings made it look as if he were undermining Perelman's efforts. Because of Perelman's self-imposed isolation from the community, it is easy to think that Yau deserves the criticism, but he makes his side of the story clear, and one gets the feeling that Nasar exaggerated the feud. In spite of all these controversies, Yau has sustained warm friendships with many leading mathematicians.
Shing-Tung Yau’s life has been wholly dedicated to mathematics and its advancement. He sees mathematics much like Newton saw all of natural science:
“After much tumult in my early years, I was able to find my way to the field of mathematics, which still has the power to sweep me off my feet like a surging river. I’ve had the opportunity to travel upon this river – at times even clearing an obstruction or two from a small tributary so that water can flow to new places that have never been accessed before. I plan to continue my explorations a bit more and then, perhaps, do some observing – or cheerleading – from the riverbanks, a few steps removed.”
A little boy on the shore, playing with shiny pebbles, while the great ocean of truth lies undiscovered before him, ready to be explored.
First posted on 3 Quarks Daily.

The three horsemen of the machine learning apocalypse

My colleague Patrick Riley from Google has a good piece in Nature in which he describes three very common errors in applying machine learning to real world problems. The errors are general enough to apply to all uses of machine learning irrespective of field, so they certainly apply to a lot of machine learning work that has been going on in drug discovery and chemistry.

The first kind of error is an incomplete split between training and test sets. People who do ML in drug discovery have encountered this problem often; the test set can be very similar to the training set, or - as Patrick mentions here - the training and test sets aren't really picked at random. There should be a clear separation between the two sets, and the impressive algorithms are the ones which extrapolate non-trivially from the former to the latter. Only careful examination of the training and test sets can ensure that the differences are real.
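As a concrete illustration, here is a minimal sketch – with made-up descriptors and series labels, not data from any real project – of the difference between a naive random split and a group-aware split in which whole chemical series stay on one side of the boundary:

```python
# Minimal sketch: random split vs. group-aware split.
# The "descriptors", "activities" and "series" labels below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split, GroupShuffleSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))          # placeholder molecular descriptors
y = rng.normal(size=1000)                # placeholder activities
series = rng.integers(0, 50, size=1000)  # chemical series / scaffold ID for each compound

# Naive random split: members of the same series can land on both sides,
# which inflates apparent test performance.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Group-aware split: every series ends up entirely in train or entirely in test,
# forcing the model to extrapolate to unseen chemistry.
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(gss.split(X, y, groups=series))
X_tr2, X_te2, y_tr2, y_te2 = X[train_idx], X[test_idx], y[train_idx], y[test_idx]

# No series appears in both sets.
assert set(series[train_idx]).isdisjoint(series[test_idx])
```

The group-aware split is what forces the model to extrapolate to genuinely unseen chemistry rather than to near-duplicates of its training compounds.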

Another more serious problem with training data is of course the many human biases that have been exposed over the last few years, biases arising in fields ranging from hiring to facial recognition. The problem is that it's almost impossible to find training data that doesn't have some sort of human bias (in that context, general image data usually works pretty well because of the sheer number of random images human beings capture), and it's very likely that this hidden bias is what your model will then capture. Even chemistry is not immune from such biases; for instance, if your training data contains compounds synthesized using metal-catalyzed coupling reactions and is therefore enriched in biaryls, you will be training an algorithm that is excellent at identifying biaryls, drug scaffolds that are known to have issues with stability and clearance in the body.

The second problem is that of hidden variables, and this is especially the case with unsupervised learning, where you let your algorithm loose on a bunch of data and expect it to learn relevant features. The problem is that there is a very large number of features in the data that your algorithm could potentially learn and find correlations with, and a good number of these might be noise or random features that give you a good correlation while being physically irrelevant. A couple of years ago there was a good example of this: an algorithm meant to classify tumors learned nothing about the tumors per se but instead learned the features of rulers; it turns out that oncologists often keep rulers next to malignant tumors to measure their dimensions, and these were visible in the pictures.

Closer to the world of chemistry, there was a critique last year of an algorithm that was supposed to pick an optimal combination of reaction conditions for the Buchwald-Hartwig coupling reaction. This is a rather direct application of machine learning in chemistry, and one of the most promising ones in my view, partly because reaction optimization is still very much a trial-and-error art, and it is far more deterministic than, say, finding a new drug target based on sparse genomic correlations. After the paper was published, a critique pointed out that you could get essentially the same results if you randomized the labels or fit the model on noise. That doesn't mean the original model was wrong; it means that it wasn't unique and wasn't likely causative. Asking what exactly your model is fitting to is always a good idea.
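That check is easy to run on almost any model. Below is a minimal sketch of the label-randomization ("y-scrambling") idea on synthetic data; the array names and numbers are placeholders, not the actual reaction dataset:

```python
# Minimal sketch of y-randomization: compare cross-validated performance on real
# labels against performance on deliberately shuffled (noise) labels.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))                         # placeholder reaction descriptors
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=300)    # signal plus noise

model = RandomForestRegressor(n_estimators=200, random_state=0)
real_score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()

y_shuffled = rng.permutation(y)                        # destroys any real structure-label relationship
null_score = cross_val_score(model, X, y_shuffled, cv=5, scoring="r2").mean()

print(f"R2 on real labels:     {real_score:.2f}")
print(f"R2 on shuffled labels: {null_score:.2f}")
# If the two numbers are comparable, the model is probably fitting noise or a
# hidden variable rather than the chemistry you care about.
```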

As Patrick's article points out, there are other examples, like an algorithm latching on to edge effects of plates in a biological assay or in image analysis in phenotypic screening – two other applications very relevant to drug discovery. The remedy here again is to run many different models while asking many different questions, a process that needs patience and foresight. Another strategy which I increasingly like is to not do purely unsupervised learning but instead do constrained learning, with the constraints coming from the laws of science; a sketch of the idea follows.
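One minimal way to realize that idea is to add a penalty to the loss whenever the model's predictions violate something we know must hold physically. The sketch below uses a deliberately simple, made-up constraint – predictions that must stay non-negative, as a concentration or a rate constant would – purely to illustrate the mechanism:

```python
# Minimal sketch of constrained learning: data loss plus a penalty for violating
# a known physical constraint (here, a made-up non-negativity requirement).
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(200, 3))
y = X @ np.array([1.0, 0.5, 2.0]) + rng.normal(scale=0.05, size=200)

w = np.zeros(3)      # model weights
lam = 10.0           # weight of the physics penalty
lr = 0.1             # learning rate

for _ in range(2000):
    pred = X @ w
    grad_data = 2 * X.T @ (pred - y) / len(y)      # ordinary least-squares gradient
    violation = np.minimum(pred, 0.0)              # negative predictions violate the constraint
    grad_phys = 2 * X.T @ violation / len(y)       # gradient of the penalty term
    w -= lr * (grad_data + lam * grad_phys)

print("learned weights:", np.round(w, 2))
```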

The last problem is a bit more subtle and involves using the wrong objective or "loss" function. A lot of this boils down to asking the right question. Patrick cites the example of using ML to diagnose diabetic retinopathy from images of the back of the eye. It turns out that when the question asked was framed around diagnosing a single disease rather than around whether the patient needed to see a doctor, the models were thrown into disarray.
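A toy version of that distinction – entirely synthetic "images" and a made-up referral threshold, not the actual clinical criteria – shows how the label you choose to optimize is itself a modeling decision:

```python
# Minimal sketch: training against a disease grade vs. training directly against
# the question that matters ("should this patient see a doctor?").
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 10))                       # placeholder image features
grade = np.clip((X[:, 0] + rng.normal(size=2000) + 2).astype(int), 0, 4)  # severity grade 0-4

X_tr, X_te, g_tr, g_te = train_test_split(X, grade, test_size=0.3, random_state=0)

# Objective 1: predict the exact grade (multi-class).
grade_model = LogisticRegression(max_iter=1000).fit(X_tr, g_tr)

# Objective 2: predict the clinically relevant binary label directly
# (grade >= 2 means "refer" in this made-up example).
refer_tr, refer_te = (g_tr >= 2).astype(int), (g_te >= 2).astype(int)
refer_model = LogisticRegression(max_iter=1000).fit(X_tr, refer_tr)

# Evaluate both on the referral question.
acc_via_grades = accuracy_score(refer_te, (grade_model.predict(X_te) >= 2).astype(int))
acc_direct = accuracy_score(refer_te, refer_model.predict(X_te))
print(f"referral accuracy via grade model:  {acc_via_grades:.2f}")
print(f"referral accuracy trained directly: {acc_direct:.2f}")
```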

So what's the solution? What it always has been. As the article says,


"First, machine-learning experts need to hold themselves and their colleagues to higher standards. When a new piece of lab equipment arrives, we expect our lab mates to understand its functioning, how to calibrate it, how to detect errors and to know the limits of its capabilities. So, too, with machine learning. There is no magic involved, and the tools must be understood by those using them."
Would you expect the developer of a new NMR technique or a new quantum chemistry calculation algorithm to not know what lies under the hood? Would you expect these developers to not run tests using many different parameters and under different controls and conditions? For that matter, would you expect a soldier to go into battle without understanding the traps the enemy has laid? Then why expect developers of machine learning to operate otherwise?
Some of it is indeed education, but much of it involves the same standards and values that have been part of the scientific and engineering disciplines since antiquity. Unfortunately, too often machine learning, especially because of its black-box nature, is regarded as magic. But there is no magic (Arthur Clarke quotes notwithstanding). It's all careful, meticulous investigation; it's about going into the field knowing that there almost certainly will be a few mines scattered here and there. Be careful if you don't want your model to get blown up.

Infinite horizons; or why I am optimistic about the future

The Doomsday Scenario, also known as the Copernican Principle, refers to a framework for thinking about the death of humanity. One can read all about it in a recent book by science writer William Poundstone. The principle was popularized mainly by the philosopher John Leslie and the physicist J. Richard Gott in the 1990s; since then variants of it have been cropping up with increasing frequency, a frequency which seems to be roughly proportional to how much people worry about the world and its future.
The Copernican Principle simply states that the probability that we exist at a special, unique time in history is small, because we are nothing special; we are therefore most likely living somewhere in the unremarkable middle of humanity's total lifespan rather than at its very beginning or end. Using Bayesian statistics and the known growth of population, Gott and others then calculated bounds on humanity's future existence. Referring to the lower end of their estimates, their conclusion is that there is a 95% chance that humanity will go extinct within 9,120 years.
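The arithmetic behind Gott's version is a one-line estimate (the standard "delta t" argument, not necessarily the exact calculation Poundstone reports): if the moment at which we happen to be observing humanity is uniformly random within its total lifespan, then with 95% confidence we are in neither the first nor the last 2.5% of that span, which translates into

$$ \frac{t_{\text{past}}}{39} \;<\; t_{\text{future}} \;<\; 39\, t_{\text{past}} $$

where $t_{\text{past}}$ is how long humanity has already existed and $t_{\text{future}}$ is how long it has left.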
The Doomsday Argument has sparked a lively debate on the fate of humanity and on different mechanisms by which the end will finally come. As far as I can tell, the argument is little more than inspired numerology and has little to do with any rigorous mathematics. But the psychological aspects of the argument are far more interesting than the mathematical ones; the arguments are interesting because they tell us that many people are thinking about the end of mankind, and that they are doing this because they are fundamentally pessimistic. This should be clear by how many people are now talking about how some combination of nuclear war, climate change and AI will doom us in the near future. I reject such grim prognostications because they are mostly compelled by psychological impressions rather than by any semblance of certainty.
A major reason why there is so much pessimism these days is what the great historian Barbara Tuchman once called "Tuchman's Law": the impression that an event leaves in the minds of observers is proportional to its coverage in the newspapers. Tuchman said this in 1979, and it has become a truism today because of the Internet. The media is much more interested in reporting bad things that happened than good things that did not happen, so it's easy to think that the world is getting worse every day. The explosion of social media and the multiplication of news sources have amplified this sensationalism and selection bias by gargantuan proportions. As Tuchman noted, even if you relentlessly read about a troubling phenomenon like child kidnapping or mass shootings, it is exceedingly rare that you will come home on any given day having actually faced such calamities.
In this trivial sense I agree with Bill Gates, Hans Rosling, Steven Pinker and others who have written books describing how, by almost every important parameter – for instance child mortality, women's and minority rights, health status, poverty, political awareness, environmental improvement – the world of today is not just vastly better than that of yesterday but has been on a steep and steady curve of improvement since medieval times. One simply needs to pick up any well-regarded book on medieval history (Tuchman's marvelous book "A Distant Mirror", describing the calamitous 14th century, will do the job) to realize that present human populations almost seem to live on a different planet as far as quality of life is concerned. This does not refute the often uneven distribution of progress, nor does it tell us that every improvement we have seen is guaranteed to last, nor does it say we should rest on our laurels, but it does give us more than enough rational cause for optimism.
Sometimes the difference between optimism and pessimism is simply related to looking at the same data point in two different ways. For instance, take as a reference date the year that the US Supreme Court legalized same-sex marriage – 2015. Now go back a hundred years, to 1915. Even in the United States the world of individual rights was stunningly different from now. Women could not vote, immigration from non-European countries was strongly discouraged and restricted, racism against non-white people (and even some white people such as Catholics) was part of the fabric of American society, black people were actively getting lynched in the south and their civil rights were almost non-existent, abortion was illegal, gay people would not dream of coming out of the closet and anti-Semitism was not only rampant but institutionalized in places like Ivy League universities.
It is downright incredible that, only a hundred years later, every single one of these barriers had fallen. Not one or two or three, but every single one. I cannot see how this extraordinary reversal of discrimination and inequality cannot lead to soaring optimism about the future. Now, two people might look at this fact in two different ways. One might say, “It took 228 years since the writing of the US Constitution for these developments to transpire”, while another person might say, “It took only a hundred years from 1915 for these developments to transpire”. Which perspective do you choose since both are equally valid? I choose the latter, not only because it points to optimism for the future but to informed optimism. There has been a tremendous raising of moral consciousness about equal treatment of all kinds of groups in the last one hundred years, and if anything, the strong, unstoppable waves of progressivism on the Internet promise that this moral elevation will continue unabated. There are effectively zero chances that women or minorities will lose the vote for instance. The price of liberty is eternal vigilance, not eternal pessimism.
What about those four horsemen of the apocalypse, now compressed into the three horsemen comprising nuclear war, AI and climate change, that seem to loom large when it comes to a dim view of the future of humanity? I believe that as real as some of the fears from climate change, nuclear war and AI are, they are exaggerated and not likely to impact us the way we think.
First, climate change. There are many deleterious impacts of human beings on the environment, of which global warming is an important one and likely the most complicated to predict in its details. It is harder to predict phenomena like the absorption of carbon dioxide by the biosphere and the melting of glaciers based on computer models than it is to understand and act on phenomena like ocean acidification, deforestation, air pollution and strip mining. Sadly, discussions of these topics are often lost in the political din surrounding global warming. There is also insufficient enthusiasm for solutions such as nuclear energy and solar power that can make a real impact on energy usage and fossil fuel emissions. On the bright side, support for fighting climate change and environmental degradation is more vociferous than ever, and social media thankfully has played an important role in generating it. This support is similar to the support that early 20th century environmentalists lent to preventing creatures like the American buffalo and whales from going extinct. There are good reasons to think that whatever the real or perceived effects of climate change, it will not cease to be a publicly important issue in the future. But my optimism regarding climate change does not just come from the level of public engagement I see but from the ability of humans to cope; I am not saying that climate change will pose no problem, but that one way or another humans will find solutions to contain or even eliminate those problems. Humans survived the last ice age at dangerously low levels of population and technological capability compared to today, so there is little reason to think that we won’t be able to cope. Some people worry whether it is worth bequeathing the uncertain world of tomorrow to our children and grandchildren. My belief is that, considering the travails that humanity successfully faced in the last thousand years or so, our children and grandchildren will be more than competent to handle whatever problem they are handed by their predecessors and the planet.
Second, nuclear war. The world's nuclear arsenals have posed a clear and present danger for years. However, deterrence – as fragile and fraught with near misses as it is – has ensured that no nuclear weapon has been exploded in anger for almost 75 years. This is an almost miraculous track record. Moreover, while the acquisition of dirty bombs or nuclear material by non-state actors is a real concern, the global nuclear stockpile has been generally quite secure, and there are enough concerned experts who continue to monitor this situation. Since the end of the Cold War, both the United States and Russia have significantly reduced their stockpiles, although both countries should go to still lower numbers. The detonation of even a low-yield nuclear weapon in a major city would be a great tragedy, but it would not have the same effects as the global thermonuclear war whose threat the world labored under for more than fifty years. In 1960, Herman Kahn wrote "On Thermonuclear War", a controversial book that argued that even a major thermonuclear war would not mean the end of humanity as most people feared. Part of Kahn's analysis included calculations on the number of deaths, and part included historical evidence of human renewal and hope after major wars. While the book was morbid in many details, it did make the point that humanity is far more resilient than we think. Fortunately the scenarios that Kahn described never came to pass, and the risk of them happening even on a small scale is now far lower than it ever was.
Finally, AI seems to be perhaps the prime reason for the extinction of humanity that many world and business leaders and laymen fear. Early fears centered on the kind of killer robots that dotted the landscape of science fiction movies, but recent concerns have centered on machines gradually developing intelligence and humans gradually ceding authority to them. But most AI doomsday scenarios are speculative at best and contain a core of deep uncertainty. For instance, a famous argument made by Nick Bostrom described a scenario called the AI paperclip maximizer. The idea is that humanity creates an AI whose purpose is to create paperclips. The AI will gradually single-mindedly start making paperclips out of everything, consuming all natural resources and rendering the human race extinct. This kind of doomsday scenario has some important assumptions built into it, among which is the assumption that such an AI can actually exist and wouldn’t have a failsafe built into it. But the bigger question is regarding the AI’s intelligence: any kind of truly intelligent AI won’t spend its entire time making paperclips, while any kind of insufficiently intelligent AI will be easily controlled by human beings or at least live with them in some kind of harmony. I worry much less about a paperclip AI than I do about humans gradually ceding thinking to fleeting sources of entertainment like social media.
But the real problem with any kind of doomsday scenario involving AGI (artificial general intelligence) is that it simply underestimates what it would take for a machine to acquire true human-like cognitive capabilities. One of the best guides to thinking about exactly what it would take for AGI to somehow take over the world is the technologist Kevin Kelly. He gives three principal reasons for the unlikelihood of this happening: first, that intelligence lies along many axes, and even very intelligent human beings are usually intelligent along only a few; second, that intelligence is not gained through thinking alone but through experimentation, and that experimentation slows down any impact that a super-intelligence might have; and third, that any kind of AGI scenario assumes that the relationship between humans and their creations would be intrinsically hostile and fixed. Almost all such assumptions about AGI are subject to doubt, and at least a few of the conditions that seem necessary for AGI to truly dominate humanity appear to be both rate-limiting and unlikely.
Ultimately, most doomsday scenarios are based on predicting the future, and prediction, as Niels Bohr famously said, is very difficult, especially concerning the future. The most important prediction about the future of humanity will probably be the one that we are not capable of making. But in the absence of accurate prediction about the future, we have the past. And while the past is never a certain guide to the future, the human past in particular shows a young species that is almost infinitely capable of adaptation, empathy, creativity and optimism. I see no reason to believe this will not continue to be the case.
First published on 3 Quarks Daily.

Book review: The British Are Coming: The War for America, Lexington to Princeton, 1775-1777

The British Are Coming: The War for America, Lexington to Princeton, 1775-1777 by Rick Atkinson

When the British army of regulars captured American troops during the Battle of New York, they contemptuously noted how surprised they were to see so many ordinary people among them – tanners, brewers, farmers, metal workers, carpenters and the like. That observation in one sense summed up the difference between the British and American causes: a ragtag group of ordinary citizens with little battle experience pitted against a professional, experienced and disciplined army belonging to a nation that then possessed the biggest empire since the Roman Empire. The latter were fighting for imperial power, the former for an experiment in individual rights and freedom. The former improbably won.

Rick Atkinson shows us how in this densely packed, rousing military history of the first two years of the Revolutionary War. The Americans kept foiling the British through a combination of brilliant tactical retreats, dogged determination, improvisation and faith in providence. His is primarily a military history, covering events from the opening salvos at Lexington and Concord to the engagements at Princeton and Trenton and Washington's legendary crossing of the ice-choked Delaware. However, there is enough observational detail on the social and political aspects of the conflict, and on the sometimes larger-than-life personalities involved, to make it a broader history. The account can be supplemented with political histories such as those by Gordon Wood, Bernard Bailyn and Joseph Ellis to provide a fuller view of the politics and the personalities.

Atkinson's greatest strength is his ability to bring an incredible wealth of detail to the narrative and pepper it with primary quotes not just from generals and soldiers but from ordinary men and women. His other big strength is logistical information. No detail seems to escape his eye: the number and tonnage of food and clothing provisions and shipping, sundry details of weapons, ships, beasts of burden and ammunition, the kinds of diseases riddling the camps and the medieval medicine used to treat them (some of it positively so – "oil of whelps" was a grotesque substance concocted from white wine, earthworms and the flesh of dogs boiled alive), the ditties and plays performed by the soldiers ("Clinton, Burgoyne, Howe, Bow, wow, wow"), the constantly changing weather, the political machinations in Whitehall and the Continental Congress…the list goes on and on. Sometimes the overwhelming detail can be distracting – do we really need to know the exact number of blankets and the weight of salt pork supplied on the eve of a particular battle? – but overall the dense statistics and detail have the effect of immersing the reader in the narrative.

The major battles – Lexington and Bunker Hill, Long Island and Manhattan, Quebec and Ticonderoga, Charleston and Norfolk, Princeton and Trenton – are dissected in fine detail, with rousing descriptions of men and materiel, the thrust and parry at the front, and the desperation, disappointments, retreats and triumphs that often marked the field of battle. The writing can occasionally be almost hallucinatory: "Revere swung into the saddle and took off at a canter across Charlestown Neck, hooves striking sparks, rider and steed merged into a single elegant creature, bound for glory". The accounts of the almost unbelievably desperate and excruciating winter fighting and retreat in Canada are probably the highlights of the military narrative. Lesser-known conflicts in Virginia and South Carolina, in which the British were squarely routed, also get ample space. Particularly interesting is the improbable and self-serving slave uprising drummed up by Lord Dunmore, Virginia's governor, and the far-reaching fears it inspired in the southern colonies. Epic quotes that have become part of American history are seen in a more circumspect light; for instance, it's not clear who said "Don't fire until you see the whites of their eyes" at Bunker Hill, and instead of the famous "The British are coming" cry attributed to Paul Revere, it's more likely that he said "The regulars are coming." Also, the British army might have been experienced, but it too was constantly hampered by shortages of food and materiel, and these shortages were a major factor in many of its decisions, including the retreat from Boston. Britannia might have ruled the waves, but she wasn't always properly nourished.

The one lesson that is constantly driven home is how events that seem providential and epic now were riddled with uncertainty, improvisation and desperation when they happened; in that sense hindsight is always convenient. Atkinson makes us aware of the sheer misery of the conditions the soldiers and generals lived in: the threadbare clothing which provided scant protection against the cold, the horrific smallpox, dysentery and other diseases which swept entire companies off the face of the planet without warning, and the problems constantly posed to American patriots by loyalists and deserters. There were many opportunities for men to turn on one another, and yet we also see both friends and enemies being surprisingly humane toward each other. In many ways, it is Atkinson's ability to provide insights across a wide cross-section of society, to make the reader feel the pain and uncertainty faced by ordinary men and women, that contributes to the uniqueness of his writing.

Atkinson paints a sympathetic and sometimes heroic portrait of both British politicians and military leaders, but he also makes it clear how clueless, bumbling and misguided they were when it came to understanding the fundamental DNA of the colonies, their frontier spirit, their Enlightenment thinking and their very different perception of their relationship with Britain. An excellent complement to Atkinson's book for understanding the British political miscalculations leading up to the war is Nick Bunker's "An Empire on the Edge". While his book is not primarily a study of personality, Atkinson's portraits of American commanders George Washington, Benedict Arnold, Henry Knox, Charles Lee and Israel Putnam and British commanders William and Richard Howe, Henry Clinton, Guy Carleton and others are crisp and vivid. Many of these commanders led their men and accomplished remarkable feats through cold and disease, in the wilderness and on the high seas; others, like the American John Sullivan in Canada and the Briton Henry Clinton at Charleston, could be remarkably naive and clueless in judging enemy strength and resolve. Atkinson also dispels some common beliefs; for instance, while the rank and file were indeed generally inexperienced, there were plenty of senior officers, including Washington, who had gained solid fighting experience in the French and Indian War two decades earlier. As a general, Washington's genius was to know when to retreat, to make the enemy fight a battle of attrition, to inspire and scold when necessary, and somehow to keep this ragtag group of fighting men and their logistical support together, emerging as a great leader in the process. He was also adept at carefully maneuvering the levers of Congress and at driving home the great need for ammunition, weapons and ordinary provisions through a mixture of cajoling and appeals to men's better angels.

For anyone wanting a detailed and definitive military history of the Revolutionary War, Atkinson's book is highly recommended. It gives an excellent account of the military details of the "glorious cause" and paints a convincing picture of the sheer improbability and capriciousness of its success.


Life And Death In New Jersey

On a whim I decided to visit the gently sloping hill where the universe announced itself in 1964, not with a bang but with ambient, annoying noise. It’s the static you saw when you turned on your TV, or at least used to back when analog TVs were a thing. But today there was no noise except for the occasional chirping of birds, the lone car driving off in the distance and a gentle breeze flowing through the trees. A recent trace of rain had brought verdant green colors to the grass. A white-tailed deer darted into the undergrowth in the distance.
The town of Holmdel, New Jersey is about thirty miles east of Princeton. In 1964, the venerable Bell Telephone Laboratories had an installation there, on top of this gently sloping rise called Crawford Hill. It was a horn antenna, about as big as a small house, designed to bounce signals off a communications satellite called Echo that the lab had worked on a few years earlier. Tending to the care and feeding of this piece of electronics and machinery were Arno Penzias – a working-class refugee from Nazism who had grown up in the Garment District of New York – and Robert Wilson; one was a big-picture thinker who enjoyed grand puzzles, the other an electronics whiz who could get into the weeds of circuits, mirrors and cables. The duo had been hired to work on ultra-sensitive microwave receivers for radio astronomy.
In a now famous comedy of errors, instead of simply contributing to incremental advances in radio astronomy, Penzias and Wilson ended up observing ripples from the universe’s birth – the cosmic microwave background radiation – by accident. It was a comedy of errors because others had either theorized that such a signal would exist without having the experimental know-how or, like Penzias and Wilson, were unknowingly building equipment to detect it without knowing the theoretical background. Penzias and Wilson puzzled over the ambient noise they were observing in the antenna that seemed to come from all directions, and it was only after clearing away every possible earthly source of noise including pigeon droppings, and after a conversation with a fellow Bell Labs scientist who in turn had had a chance conversation with a Princeton theoretical physicist named Robert Dicke, that Penzias and Wilson realized that they might have hit on something bigger. Dicke himself had already theorized the existence of such whispers from the past and had started building his own antenna with his student Jim Peebles; after Penzias and Wilson contacted him, he realized he and Peebles had been scooped by a few weeks or months. In 1978 Penzias and Wilson won the Nobel Prize; Dicke was among a string of theorists and experimentalists who got left out. As it turned out, Penzias and Wilson’s Nobel Prize marked the high point of what was one of the greatest, quintessentially American research institutions in history.
I drove up Crawford Hill with a cousin on a bright May Sunday, half-expecting a chain-link fence to block us. But the path was wide open and there wasn't a soul in sight. As we approached the antenna we saw dilapidated shacks and sheds with equipment strewn around. A tractor hung there with its axle visible and rusting. The pigeon droppings were back. The antenna is not completely forgotten, because the National Park Service has placed a plaque there designating it a National Historic Landmark, but there's nothing else; no account of the discovery itself except a recognition that it happened. At the foot of the antenna is more equipment – cables, tanks of liquid nitrogen – its function and fate uncertain. A few dozen yards from the horn antenna is another Bell Labs installation, this one looking like something straight out of Greek or Roman ruins, a crumbling monument to lost glory. Rusty gas tanks and scaffolding, more cables and wooden structures in various degrees of decay and neglect surround the engineering artifact.
As you walk away you can’t help but feel a profound sense of loss and sadness. Echoes of a distant past impinge on your heavy heart, much like the radiation that Penzias and Wilson discovered here that will continue to quietly fill the ever-expanding void long after we have all disintegrated into our atomic essence. With everything going on, this distant memory from the era of American innovation seems like a timekeeping ghost that will continue to haunt the future. Bell Labs was the most productive research laboratory in the world for almost five decades. A “Member of Technical Staff” title there was probably the most prestigious professional job title anywhere. As Jon Gertner so ably describes in his biography of the laboratory, “The Idea Factory”, not only did the lab invent revolutionary commercial products like the transistor and satellite communications that completely transformed our way of life, but it also produced a dozen Nobel Laureates like Penzias and Wilson who completely transformed our view of the cosmos. As if to drive home the stunning fall of this giant of American science and technology, the sign in front of the modest, gray building bids you farewell – “Nokia Bell Labs”. Fifty years from now, would we see that beautiful little hill as the hill on which American innovation chose to die?
Drive west about thirty miles and you see another kind of death. It's the death of two friends who are buried only a few feet from each other. There are hundreds of beautiful gravestones in Princeton Cemetery, and I realized that unless I asked someone, I would end up wandering around for hours looking for what I wanted. The groundskeeper drove me around in his little cart – "This is where the scientists are all buried", he said. Is there a plot expressly reserved for the scientists, I asked. No, he said, but sometimes they like to be near each other.
The sun was still shining bright on a beautiful day, and I could take my time. Among the several similar-looking gravestones was the one I was looking for. "John von Neumann, 1903-1957". Right below is the name of Margaret von Neumann, 1881-1956. The dates are instructive. John von Neumann – mathematician, child prodigy who knew calculus and six languages by the time he was ten, computer scientist, economist, physicist, polymath, widely deemed the fastest and most wide-ranging mind of the 20th century. His mother Margaret – married to Johnny's father Max, a rich banker in glittering, turn-of-the-century Budapest. Both refugees from fascism. When Margaret died in 1956 Johnny was heartbroken. His mother had doted on him. This first-generation immigrant who was a patriot, who had created game theory, modern computing and the mathematical underpinnings of quantum theory, who had presidents and generals and senators eagerly seeking his every word; this titan of modern science was just Jancsi to her. When Jancsi heard of his mother's death, it compounded his own tragedy, for he was then less than a year away from dying of the cancer that would kill him at age fifty-three, while he was still at the height of his powers. Six years later his wife Klara would walk into the Pacific Ocean, bedecked in fine jewelry. Now I stood in front of his grave, the fastest thinker of his time having consigned his body and soul to the limitlessly slow processes of disorder and geological time.
Just a few feet away from von Neumann's resting place lies an owlish, elfin man who arrived in the United States in the spring of 1940 after taking a long route through Siberia and the Pacific to avoid the dangers of crossing a U-boat-riddled Atlantic. "Kurt F." had finally deemed the situation in Europe too dangerous to continue living in Vienna, that now crumbling cradle of mathematical, philosophical and artistic thought. His friend Johnny, who had come to the country seven years before, had written several letters petitioning his employer, the Institute for Advanced Study in Princeton, to help Kurt Gödel obtain a visa and flee the Nazi menace. The institute had become a haven for von Neumann, Einstein and others persecuted in Europe, providing them with the land of liberty that had beckoned the Pilgrims of Massachusetts three hundred years earlier. In his letters Johnny said that Gödel was the most accomplished logician of the century and that he would be a wholly unique addition to the institute's faculty. Later, when Gödel's eccentricities – throughout his life he was plagued by deep insecurities and paranoia – and an insufficient appreciation of his work led to delays in his promotion, von Neumann asked, "How can any of us call ourselves 'Professor' if Gödel cannot?". A year before von Neumann died, Gödel wrote him a letter in which, after expressing shock about his cancer and hope that he would be cured, he conjectured what is considered the first description of the famous P=NP problem in computer science, a reference all the more remarkable given that Gödel had never expressed any serious interest in Johnny's pioneering computing work.
More than ten years before, Gödel had made a mathematical announcement which was every bit as important as Penzias and Wilson's announcement of the universe's birth. While the Big Bang theory told us with near certainty how the universe was born, Gödel's announcement told us about the fundamental uncertainty of knowledge itself. His famed incompleteness theorems drove a nail into the coffin of the grand project of axiomatizing all of mathematics, showing that any consistent mathematical system rich enough to contain ordinary arithmetic has a kernel of incompleteness at its core: it contains statements that can neither be proved nor disproved within the system, statements whose truth value the system itself can never settle. What was even more damning was a parallel finding: among these are statements which are true but which cannot be proved to be so within the same mathematical system, and no such system can even prove its own consistency. As with many seminal scientific advances, Gödel's announcement at the 1930 Königsberg conference caused hardly any ripples. But there was one person in the audience who understood the profound implications of his work for the fundamental uncertainty of knowledge – John von Neumann. After the talk von Neumann spoke to Gödel, and within weeks his lightning-fast mind had extended Gödel's initial idea to what became the Second Incompleteness Theorem, a conclusion which young Kurt had already derived himself.
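In modern notation (a standard textbook formulation, not Gödel's original wording), the two theorems say that for any consistent, effectively axiomatized theory $T$ containing elementary arithmetic:

$$ \text{there is a sentence } G_T \text{ with } T \nvdash G_T \ \text{ and } \ T \nvdash \neg G_T \qquad \text{(first theorem)} $$

$$ T \nvdash \mathrm{Con}(T) \qquad \text{(second theorem)} $$

where $\mathrm{Con}(T)$ is the arithmetical sentence asserting that $T$ is consistent.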
Since then the two had become friends, and von Neumann was instrumental in getting the institute to hire Gödel. However, it wasn't he who was Gödel's best friend. That honor belonged to a fading icon who was considered too behind the times by mainstream physicists because of his unhappiness with the meaning of quantum theory. Einstein was more of an institution than an active physicist in the 40s and 50s – the sharp-tongued Robert Oppenheimer, the institute's director, called him "a landmark, not a beacon" – but Princetonians still saw him walking to and from the institute in his baggy trousers and hat. They also noticed his daily walking companion, an owlish man who seemed to dress in heavy woolen coats even in the balmiest of summers. In his later years, Einstein said that his own work didn't mean much to him, and that he came to work mainly for the privilege of walking home with Kurt Gödel.
Gödel’s gravestone is a little more ornate than von Neumann’s; perhaps his family wanted it that way or perhaps it spoke to his whimsical love of ordinary, earthy things like children’s fairy tales. It lists the name of his beloved wife Adele, a nightclub dancer who was deemed too ordinary and unsophisticated for Kurt by his family. But Adele nurtured Kurt through his many imagined and real illnesses and once defended him with an umbrella from Nazi hecklers. In Princeton Adele became his caretaker, guiding him through a deeply insecure, literal view of the world which gradually turned into paranoia that there were dark forces at work threatening to poison him. Soon he would only eat food that his dutiful wife had prepared for him. After Adele herself had to spend an extended spell in the hospital because of an illness, Kurt stopped eating altogether. In 1978 he entered Princeton Hospital, weighing not more than eighty pounds, and died essentially of starvation and self-neglect. For the man who had discovered the most rational uncertainty at the heart of the most rational field of human inquiry, his own end was tragically irrational.
Johnny’s end was even more heartbreaking. For a man whose only purpose in life seemed to be to think, the discovery that he had cancer carried an unbearable implication: one day his mind would simply cease to think. This he could not fathom. Johnny had been instrumental in the United States’ supremacy in both atomic weapons and ballistic missile technology, and because of his importance to national security he was given a special hospital suite at Walter Reed Hospital in Washington D.C., where a coterie of air force officers was posted round the clock to tend to his every need; part of the reason for the armed guard was to ensure he would not give out secrets in his sleep, even as the cancer spread inexorably to his brain. He had recently been appointed to the prestigious Atomic Energy Commission and had received the Medal of Freedom from President Eisenhower, but the hand of death tugged at him with relentless certainty. The commission’s chairman, Lewis Strauss, remembered an unforgettable scene in the hospital – this first-generation immigrant surrounded by the secretaries of the army, navy and air force and the joint chiefs of staff, hanging on to his every word before it disappeared into history’s scorecard.
The end when it came was cruel. To reassure himself that his mind was still working, von Neumann would have his daughter Marina and his friends Edward Teller and Stanislaw Ulam quiz him with simple arithmetic questions, such as the sum of four and seven. They would come out of his suite shaken and heartbroken. Just as it had overtaken his friend Kurt, irrationality crept into Johnny’s ultra-rational mind: he came to believe that religion might save him, asked for a Catholic priest and carried out learned discourses with him in Latin and Greek, the kind with which he had awed his father’s friends as a child prodigy in Budapest. When his brother read to him from Goethe’s Faust, Johnny would recite the next few sentences from his photographic memory before his brother could reach them. John von Neumann died in February 1957; on his hospital bed lay a set of notes comparing the brain with the computer and proposing new directions for neuroscience and computing. At his burial in Princeton Cemetery were both Robert Oppenheimer and Lewis Strauss, sworn enemies; somehow Johnny always managed to be friends with people who were each other’s enemies.
But none of that mattered in Princeton Cemetery. As I stood there, I could not help but notice something striking – that Gödel’s and von Neumann’s graves were essentially indistinguishable from the hundreds of others around them; two of the most important minds in scientific history lying in the midst of other merely very good ones. Men and institutions have an expiry date, just like civilizations. It’s the one certainty that even Gödel cannot overturn. Ultimately the universe exerts a great leveling effect and we are all the same, beginning and ending in the same way. But our ideas are what make the difference. Gödel discovered a paradox at the heart of seemingly certain mathematical knowledge: he found that permanence is transient. And yet the lives of Gödel, von Neumann and Bell Labs – vanishingly brief compared to the intervals between stars – showed us the opposite: that transience can lead to permanence through ideas. Ultimately we may begin and end in the same way, but whether it’s Gödel or von Neumann or a little antenna on top of a hill, it’s our middles that distinguish us. And over those middles we seem to be able to exercise an inordinate degree of control.
First published on 3 Quarks Daily

Book review: The Ideological Origins of the American Revolution by Bernard Bailyn

While a slightly academic and challenging read, this book (first published in 1967 and reprinted twice since) is a seminal contribution to revolutionary and pre-revolutionary history and a must-read, not just for understanding the American Revolution but also some of the most important issues we grapple with now. The book is based entirely on pamphlets - essentially the Twitter of their times, but far more intelligent - that were written by people across all social strata in response to events in the 1750s and 1760s. These pamphlets were remarkably flexible, ran anywhere from ten to seventy pages, and contained a wide variety of writing, from scurrilous, sarcastic, bawdy polemics ("wretched harpies" was a favorite derogatory term - I have long thought of making a list of choice insults of those times) to calls for populist revolution to reasoned, highly erudite writings. More than any other written form of the era, they contain a microcosm of the basic thinking that led to the revolution.

Perhaps more than any other book I have come across, Bailyn's book helped me understand how far back the roots of the revolution went, how entrenched in English political philosophy and especially libertarian philosophy they were, how simplistic and incomplete the textbook version of “no taxation without representation” is, and how many of the central issues of both 1776 and 2019 are rooted in the core of Americans’ view of their own identity and geography going back all the way to the settlers.

A few key takeaways:

1. It's easy to underestimate the outsized impact that geography had on the colonists' thinking. Decentralized control was almost de rigueur in the vast wildernesses bordering Virginia or Massachusetts, so the idea of central control - both by Parliament in the 1760s *and* by a federal government in 1787 - was deeply unpalatable to many people. The abhorrence of virtual representation - the British claim that the colonists were represented in Parliament even though they elected no members to it - was only a logical consequence. You gain a much better appreciation of Americans' fondness for states' rights and their fears of federal power once you understand this background.

This decentralized thinking also led quite naturally to freedom of religion - Bailyn cites the prominent struggles of Baptists in western Massachusetts against taxation by the Congregationalists as an example - and, more haltingly and less successfully, to calls to abolish slavery, which, although they did not make their way into the Constitution, did lead individual states to abolish the institution and to stop the slave trade.

2. Almost the entire debate about independence was about where the seat of sovereignty lay. For the English it lay in Parliament, but the colonists argued that while Parliament did have some central rights (there were some strenuous attempts to distinguish between "external" taxation that Parliament could impose and "internal" taxation that was the people's right - an argument that was rapidly dropped), the people had "natural" rights that were outside all authority, including Parliament's.

The colonists were inspired in this thinking by Enlightenment philosophers like Locke and Hume, and this foundation is well known, but Bailyn makes a convincing case that they were inspired even more by the early 18th-century English libertarians John Trenchard and Thomas Gordon and their predecessors, who in writings like the famous "Cato's Letters" argued against standing armies, the lack of due process, and absolute and arbitrary power. Some of these arguments went back to Charles I and the English revolution of the 1640s, so many of the leaders of the American Revolution had assimilated them well before 1776; Pennsylvania and New York even had written documents outlining some of the key provisions of the Bill of Rights as early as 1677. By the time the Stamp and Townshend Acts were imposed in the 1760s, taxation (which was a relatively minor grievance anyway) was merely the straw that broke the camel's back.

The biggest strength of the book is that it beautifully illustrates how thinking about decentralized control, natural rights and English libertarian philosophy formed a common thread tying together so many disparate themes - independence, taxation and representation, abolitionism, religious freedom, geographic expansion, and finally, the great debate about the Constitution. The volume reveals the core set of philosophies on which the country was founded better than any other I have read. A groundbreaking contribution.