
Infinite horizons; or why I am optimistic about the future

The Doomsday Argument, which rests on what is known as the Copernican Principle, is a framework for thinking about the death of humanity. One can read all about it in a recent book by the science writer William Poundstone. The argument was popularized mainly by the philosopher John Leslie and the physicist J. Richard Gott in the 1990s; since then variants of it have been cropping up with increasing frequency, a frequency which seems to be roughly proportional to how much people worry about the world and its future.
The Copernican Principle simply states that the probability of us existing at a unique time in history is small because we are nothing special. We are therefore most likely living somewhere in the unremarkable middle of humanity’s total lifespan rather than near its beginning or end. Using Bayesian statistics and the known growth of population, Gott and others then calculated bounds on humanity’s future existence, concluding, for instance, that there is a 95% chance that humanity will go extinct within 9,120 years.
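To see the flavor of the arithmetic, here is a minimal sketch in Python of the simplest "delta t" version of Gott's calculation; the population-weighted Bayesian version referred to above gives different numbers, such as the 9,120-year figure, and the 200,000-year estimate for humanity's past duration is an assumption chosen purely for illustration.

    # Gott's "delta t" Copernican argument in toy form: if we are observing
    # humanity at a random moment of its total lifespan, then with 95%
    # confidence its future duration lies between 1/39 and 39 times its
    # past duration. The 200,000-year figure for our past is an assumption.

    past_years = 200_000  # assumed age of Homo sapiens so far

    lower = past_years / 39   # lower end of the 95% confidence interval
    upper = past_years * 39   # upper end of the 95% confidence interval

    print(f"With 95% confidence, humanity lasts between "
          f"{lower:,.0f} and {upper:,.0f} more years.")
    # prints roughly 5,100 to 7,800,000 more years under these assumptions

The striking sensitivity of the answer to the assumed past duration is part of why the argument reads more like numerology than physics.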
The Doomsday Argument has sparked a lively debate on the fate of humanity and on the different mechanisms by which the end will finally come. As far as I can tell, the argument is little more than inspired numerology and has little to do with rigorous mathematics. But the psychological aspects of the argument are far more interesting than the mathematical ones: they tell us that many people are thinking about the end of mankind, and that they are doing so because they are fundamentally pessimistic. This should be clear from how many people are now talking about how some combination of nuclear war, climate change and AI will doom us in the near future. I reject such grim prognostications because they are mostly driven by psychological impressions rather than by any semblance of certainty.
A major reason why there is so much pessimism these days is what the great historian Barbara Tuchman once called ‘Tuchman’s Law’: the impression that an event leaves in the minds of observers is proportional to its coverage in the newspapers. Tuchman said this in 1979, and it has become a truism today because of the Internet. The media is much more interested in reporting bad things that happened than good things that did not happen, so it’s easy to think that the world is getting worse every day. The explosion of social media and multiple news sources has amplified this sensationalism and selection bias to gargantuan proportions. As Tuchman said, even if you are relentlessly reading about a troubling phenomenon like child kidnapping or mass shootings, it is exceedingly rare that you will come home on any given day having faced such calamities.
In this trivial sense I agree with Bill Gates, Hans Rosling, Steven Pinker and others who have written books describing how, by almost every important parameter – for instance child mortality, women’s and minority rights, health status, poverty, political awareness, environmental improvement – the world of today is not just vastly better than that of yesterday but has been on a steep and steady curve of improvement since medieval times. One simply needs to pick up any well-regarded book on medieval history (Tuchman’s marvelous book “A Distant Mirror”, describing the calamitous 14th century, will do the job) to realize how present human populations almost seem to live on a different planet as far as quality of life is concerned. This does not refute the often uneven distribution of progress, nor does it tell us that every improvement we have seen is guaranteed to last, nor does it say we should rest on our laurels, but it does give us more than enough rational cause for optimism.
Sometimes the difference between optimism and pessimism is simply related to looking at the same data point in two different ways. For instance, take as a reference date the year that the US Supreme Court legalized same-sex marriage – 2015. Now go back a hundred years, to 1915. Even in the United States the world of individual rights was stunningly different from now. Women could not vote, immigration from non-European countries was strongly discouraged and restricted, racism against non-white people (and even some white people such as Catholics) was part of the fabric of American society, black people were actively getting lynched in the south and their civil rights were almost non-existent, abortion was illegal, gay people would not dream of coming out of the closet and anti-Semitism was not only rampant but institutionalized in places like Ivy League universities.
It is downright incredible that, only a hundred years later, every single one of these barriers had fallen. Not one or two or three, but every single one. I cannot see how this extraordinary reversal of discrimination and inequality could fail to inspire soaring optimism about the future. Now, two people might look at this fact in two different ways. One might say, “It took 228 years since the writing of the US Constitution for these developments to transpire”, while another person might say, “It took only a hundred years from 1915 for these developments to transpire”. Which perspective do you choose, since both are equally valid? I choose the latter, not only because it points to optimism about the future but because it points to informed optimism. There has been a tremendous raising of moral consciousness about the equal treatment of all kinds of groups in the last hundred years, and if anything, the strong, unstoppable waves of progressivism on the Internet promise that this moral elevation will continue unabated. There are effectively zero chances that women or minorities will lose the vote, for instance. The price of liberty is eternal vigilance, not eternal pessimism.
What about those four horsemen of the apocalypse, now compressed into the three horsemen comprising nuclear war, AI and climate change, that seem to loom large when it comes to a dim view of the future of humanity? I believe that as real as some of the fears from climate change, nuclear war and AI are, they are exaggerated and not likely to impact us the way we think.
First, climate change. There are many deleterious impacts of human beings on the environment, of which global warming is an important one and likely the most complicated to predict in its details. It is harder to predict phenomena like the absorption of carbon dioxide by the biosphere and the melting of glaciers based on computer models than it is to understand and act on phenomena like ocean acidification, deforestation, air pollution and strip mining. Sadly, discussions of these topics are often lost in the political din surrounding global warming. There is also insufficient enthusiasm for solutions such as nuclear energy and solar power that can make a real impact on energy usage and fossil fuel emissions. On the bright side, support for fighting climate change and environmental degradation is more vociferous than ever, and social media thankfully has played an important role in generating it. This support is similar to the support that early 20th century environmentalists lent to preventing creatures like the American buffalo and whales from going extinct. There are good reasons to think that whatever the real or perceived effects of climate change, it will not cease to be a publicly important issue in the future. But my optimism regarding climate change does not just come from the level of public engagement I see but from the ability of humans to cope; I am not saying that climate change will pose no problem, but that one way or another humans will find solutions to contain or even eliminate those problems. Humans survived the last ice age at dangerously low levels of population and technological capability compared to today, so there is little reason to think that we won’t be able to cope. Some people worry whether it is worth bequeathing the uncertain world of tomorrow to our children and grandchildren. My belief is that, considering the travails that humanity successfully faced in the last thousand years or so, our children and grandchildren will be more than competent to handle whatever problem they are handed by their predecessors and the planet.
Second, nuclear war. The world’s nuclear arsenals have posed a clear and present danger for years. However, deterrence – as fragile and fraught with near misses as it is – has ensured that no nuclear weapon has been exploded in anger for almost 75 years. This is an almost miraculous track record. Moreover, while the acquisition of dirty bombs or nuclear material by non-state actors is a real concern, the global nuclear stockpile has been generally quite secure, and there are enough concerned experts who continue to monitor this situation. Since the end of the Cold War, both the United States and Russia have significantly reduced their stockpiles, although both countries should go to still lower numbers. The detonation of even a low-yield nuclear weapon in a major city would be a great tragedy, but it would not have the same effects as the global thermonuclear war whose threat the world labored under for more than fifty years. In 1960, Herman Kahn wrote “On Thermonuclear War”, a controversial book that argued that even a major thermonuclear war would not mean the end of humanity as most people feared. Part of Kahn’s analysis included calculations of the number of deaths, and part included historical evidence of human renewal and hope after major wars. While the book was morbid in many details, it did make the point that humanity is far more resilient than we think. Fortunately the scenarios that Kahn described never came to pass, and the risk of them happening even on a small scale is now far lower than it ever was.
Finally, AI seems to be the agent of human extinction that many world leaders, business leaders and laymen fear most. Early fears centered on the kind of killer robots that dotted the landscape of science fiction movies, but recent concerns have centered on machines gradually developing intelligence and humans gradually ceding authority to them. Most AI doomsday scenarios, however, are speculative at best and contain a core of deep uncertainty. For instance, a famous argument made by Nick Bostrom described a scenario called the AI paperclip maximizer. The idea is that humanity creates an AI whose purpose is to make paperclips. The AI gradually and single-mindedly starts making paperclips out of everything, consuming all natural resources and rendering the human race extinct. This kind of doomsday scenario has some important assumptions built into it, among them the assumption that such an AI can actually exist and would not have a failsafe built into it. But the bigger question concerns the AI’s intelligence: any truly intelligent AI won’t spend its entire time making paperclips, while any insufficiently intelligent AI will be easily controlled by human beings or at least live with them in some kind of harmony. I worry much less about a paperclip AI than I do about humans gradually ceding thinking to fleeting sources of entertainment like social media.
But the real problem with any kind of doomsday scenario involving AGI (artificial general intelligence) is that it simply underestimates what it would take for a machine to acquire true human-like cognitive capabilities. One of the best guides to thinking about exactly what it would take for AGI to somehow take over the world is the technologist Kevin Kelly. He gives three principal reasons why this is unlikely to happen: first, that intelligence lies along many axes, and even very intelligent human beings are usually intelligent along only a few; second, that intelligence is not gained through thinking alone but through experimentation, and experimentation slows down any impact that a super-intelligence might have; and third, that any kind of AGI scenario assumes that the relationship between humans and their creations would be intrinsically hostile and fixed. Almost all such assumptions about AGI are subject to doubt, and at least a few of the conditions that seem to be necessary for AGI to truly dominate humanity seem to be both rate-limiting and unlikely.
Ultimately, most doomsday scenarios are based on predicting the future, and prediction, as Niels Bohr famously said, is very difficult, especially concerning the future. The most important prediction about the future of humanity will probably be the one that we are not capable of making. But in the absence of accurate prediction about the future, we have the past. And while the past is never a certain guide to the future, the human past in particular shows a young species that is almost infinitely capable of adaptation, empathy, creativity and optimism. I see no reason to believe this will not continue to be the case.
First published on 3 Quarks Daily.

Book review: The British Are Coming: The War for America, Lexington to Princeton, 1775-1777

The British Are Coming: The War for America, Lexington to Princeton, 1775-1777 by Rick Atkinson

When the British army of regulars captured American troops during the Battle of New York, they contemptuously noted how surprised they were to see so many ordinary people among them – tanners, brewers, farmers, metal workers, carpenters and the like. That observation in one sense summed up the difference between the British and American causes: a ragtag group of ordinary citizens with little battle experience pitted against a professional, experienced and disciplined army belonging to a nation that then possessed the biggest empire since the Roman Empire. The latter were fighting for imperial power, the former to conduct an experiment in individual rights and freedom. The former improbably won.

Rick Atkinson shows us how in this densely-packed, rousing military history of the first two years of the Revolutionary War. The Americans kept on foiling the British through a combination of brilliant tactical retreats, dogged determination, improvisation and faith in providence. His is primarily a military history that covers the opening salvo in Lexington and Concord to the engagements in Princeton and Trenton and Washington's legendary crossing of the frozen Delaware. However, there is enough observational detail on the social and political aspects of the conflict and the sometimes larger than life personalities involved to make it a broader history. The account could be supplemented with other political histories such as ones by Gordon Wood, Bernard Bailyn and Joseph Ellis to provide a fuller view of the politics and the personalities.

Atkinson’s greatest strength is to bring an incredible wealth of detail to the narrative and pepper it with primary quotes from not just generals and soldiers but from ordinary men and women. His other big strength is logistical information. No detail seems to escape his eye; the number and tonnage of food and clothing provisions and shipping, sundry details of types of weapons, ships, beasts of burden and ammunition, the kinds of diseases riddling the camps and the medieval medicine used to treat them (some of them positively so - "oil of whelps" was a grotesque substance concocted from white wine, earthworms and the flesh of dogs boiled alive), ditties and plays that were being performed by the soldiers ("Clinton, Burgoyne, Howe, Bow, wow, wow"), the constantly-changing weather, the political machinations in Whitehall and the Continental Congress…the list goes on and on. Sometimes the overwhelming detail can be distracting – for instance do we need to know the exact number of blankets and weight of salt pork supplied during the eve of a particular battle? – but overall the dense statistics and detail have the effect of immersing the reader in the narrative.

The major battles – Lexington and Bunker Hill, Long Island and Manhattan, Quebec and Ticonderoga, Charleston and Norfolk, Princeton and Trenton – are dissected with fine detail and rousing descriptions of men, material, the thrust and parry at the front and the desperation, disappointments, retreats and triumphs that often marked the field of battle. The writing can occasionally be almost hallucinatory: "Revere swung into the saddle and took off at a canter across Charlestown Neck, hooves striking sparks, rider and steed merged into a single elegant creature, bound for glory". The accounts of the almost unbelievably desperate and excruciating winter fighting and retreat in Canada are probably the highlights of the military narratives. Lesser-known conflicts in Virginia and South Carolina in which the British were squarely routed also get ample space. Particularly interesting is the improbable and self-serving slave uprising drummed up by Lord Dunmore, Virginia's governor, and the far-reaching fears that it inspired in the Southern Colonies. Epic quotes that have become part of American history are seen in a more circumspect light; for instance, it’s not clear who said “Don’t fire until you see the whites of their eyes” during Bunker Hill, and instead of the famous “The British are coming” cry that is attributed to Paul Revere, it’s more likely that he said “The regulars are coming.” Also, the British army might have been experienced, but they too were constantly hampered by shortages of food and material, and this shortage was a major factor in many of their decisions, including the retreat from Boston. Britannia might have ruled the waves, but she wasn’t always properly nourished.

The one lesson that is constantly driven home is how events that seem providential and epic now were so uncertain and riddled with improvisation and desperation when they happened; in that sense hindsight is always convenient. Atkinson makes us aware of the sheer miserable conditions the soldiers and generals lived in: the threadbare clothing which provided scant protection against the cold, the horrific smallpox, dysentery and other diseases which swept entire battle companies off the face of the planet without warning, and the problems constantly posed by loyalists and deserters to American patriots. There were many opportunities for men to turn on one another, and yet we also see both friends and enemies being surprisingly humane toward each other. In many ways, it is Atkinson’s ability to provide insights across a wide cross-section of society, to make the reader feel the pain and uncertainty faced by ordinary men and women, that contributes to the uniqueness of his writing.

Atkinson paints a sympathetic and sometimes heroic portrait of both British politicians and military leaders, but he also makes it clear how clueless, bumbling and misguided they were when it came to understanding the fundamental DNA of the colonies, their frontier spirit, their Enlightenment thinking and their very different perception of their relationship with Britain. An excellent complement to Atkinson’s book for understanding the British political miscalculations leading up to the war is Nick Bunker’s “An Empire on the Edge”. While primarily not a study of personality, Atkinson’s portraits of the American commanders George Washington, Benedict Arnold, Henry Knox, Charles Lee and Israel Putnam and the British commanders William and Richard Howe, Henry Clinton, Guy Carleton and others are crisp and vivid. Many of these commanders led their men and accomplished remarkable feats through cold and disease, in the wilderness and on the high seas; others, like the American John Sullivan in Canada and the Briton Henry Clinton in Charleston, could be remarkably naive and clueless in judging enemy strength and resolve. Atkinson also dispels some common beliefs; for instance, while the rank and file were indeed generally inexperienced, there were plenty of more senior officers, including Washington, who had gained good fighting experience in the French and Indian War a decade earlier. As a general, Washington’s genius was to know when to retreat, to make the enemy fight a battle of attrition, to inspire and scold when necessary, and somehow to keep this ragtag group of fighting men and their logistical support together, emerging as a great leader in the process. He was also adept at carefully maneuvering the levers of Congress and at driving home the great need for ammunition, weapons and ordinary provisions through a mixture of cajoling and appeals to men’s better angels.

For anyone wanting a detailed and definitive military history of the Revolutionary War, Atkinson’s book is highly recommended. It gives an excellent account of the military details of the “glorious cause” and it paints a convincing account of the sheer improbability and capriciousness of its success.


Life And Death In New Jersey

On a whim I decided to visit the gently sloping hill where the universe announced itself in 1964, not with a bang but with ambient, annoying noise. It’s the static you see when you turn on your TV, or at least used to see back when analog TVs were a thing. But today there was no noise except for the occasional chirping of birds, the lone car driving off in the distance and a gentle breeze flowing through the trees. A recent trace of rain had brought verdant green colors to the grass. A white-tailed deer darted into the undergrowth in the distance.
The town of Holmdel, New Jersey is about thirty miles east of Princeton. In 1964, the venerable Bell Telephone Laboratories had an installation there, on top of this gently sloping hill called Crawford Hill. It was a horn antenna, about as big as a small house, designed to bounce signals off a communications satellite called Echo which the lab had built a few years earlier. Tending to the care and feeding of this piece of electronics and machinery were Arno Penzias – a working-class refugee from Nazism who had grown up in the Garment District of New York – and Robert Wilson; one was a big-picture thinker who enjoyed grand puzzles and the other an electronics whiz who could get into the weeds of circuits, mirrors and cables. The duo had been hired to work on ultra-sensitive microwave receivers for radio astronomy.
In a now famous comedy of errors, instead of simply contributing to incremental advances in radio astronomy, Penzias and Wilson ended up observing ripples from the universe’s birth – the cosmic microwave background radiation – by accident. It was a comedy of errors because others had either theorized that such a signal would exist without having the experimental know-how or, like Penzias and Wilson, were unknowingly building equipment to detect it without knowing the theoretical background. Penzias and Wilson puzzled over the ambient noise they were observing in the antenna that seemed to come from all directions, and it was only after clearing away every possible earthly source of noise including pigeon droppings, and after a conversation with a fellow Bell Labs scientist who in turn had had a chance conversation with a Princeton theoretical physicist named Robert Dicke, that Penzias and Wilson realized that they might have hit on something bigger. Dicke himself had already theorized the existence of such whispers from the past and had started building his own antenna with his student Jim Peebles; after Penzias and Wilson contacted him, he realized he and Peebles had been scooped by a few weeks or months. In 1978 Penzias and Wilson won the Nobel Prize; Dicke was among a string of theorists and experimentalists who got left out. As it turned out, Penzias and Wilson’s Nobel Prize marked the high point of what was one of the greatest, quintessentially American research institutions in history.
I drove up Crawford Hill with a cousin on a bright May Sunday, half-expecting a chain-link fence to block us. But the path was wide open and there wasn’t a soul in sight. As we approached the antenna we saw dilapidated shacks and sheds with equipment strewn around. A tractor hung there with its axle visible and rusting. The pigeon droppings were back. The antenna is not completely forgotten, because the National Park Service has a plaque there designating it as a National Historic Landmark, but there’s nothing else; no account of the discovery itself except a recognition that it happened. At the foot of the antenna is more equipment – cables, tanks of liquid nitrogen – with their function and fate uncertain. A few dozen yards from the horn antenna is another Bell Labs installation, this one looking like something straight out of Greek or Roman ruins, a crumbling monument to lost glory. Rusty gas tanks and scaffolding, more cables and wooden structures in various degrees of decay and neglect surround the engineering artifact.
As you walk away you can’t help but feel a profound sense of loss and sadness. Echoes of a distant past impinge on your heavy heart, much like the radiation that Penzias and Wilson discovered here that will continue to quietly fill the ever-expanding void long after we have all disintegrated into our atomic essence. With everything going on, this distant memory from the era of American innovation seems like a timekeeping ghost that will continue to haunt the future. Bell Labs was the most productive research laboratory in the world for almost five decades. A “Member of Technical Staff” title there was probably the most prestigious professional job title anywhere. As Jon Gertner so ably describes in his biography of the laboratory, “The Idea Factory”, not only did the lab invent revolutionary commercial products like the transistor and satellite communications that completely transformed our way of life, but it also produced a dozen Nobel Laureates like Penzias and Wilson who completely transformed our view of the cosmos. As if to drive home the stunning fall of this giant of American science and technology, the sign in front of the modest, gray building bids you farewell – “Nokia Bell Labs”. Fifty years from now, would we see that beautiful little hill as the hill on which American innovation chose to die?
Drive west about thirty miles and you see another kind of death. It’s the death of two friends who are buried only a few feet from each other. There are hundreds of beautiful gravestones in Princeton Cemetery, and I realized that unless I asked someone, I would end up wandering around for hours looking for what I wanted. The groundskeeper drove me around in his little cart – “This is where the scientists are all buried”, he said. Is there a plot expressly reserved for the scientists, I asked. No, he said, but sometimes they like to be near each other.
The sun was still shining bright on a beautiful day, and I could take my time. Among the several similar-looking gravestones was the one I was looking for. “John von Neumann, 1903-1957”. Right below is the name of Margaret von Neumann, 1881-1956. The dates are instructive. John von Neumann – mathematician, child prodigy who knew calculus and six languages by the time he was ten, computer scientist, economist, physicist, polymath, widely deemed to be the fastest and most wide-ranging mind of the 20th century. His mother Margaret – married to Johnny’s father Max, a rich banker in glittering, turn-of-the-century Budapest. Both refugees from fascism. When Margaret died in 1956 Johnny was heartbroken. His mother had doted on him. This first-generation immigrant who was a patriot, who had created game theory, modern computing and the mathematical underpinnings of quantum theory, who had presidents and generals and senators eagerly seeking his every word; this titan of modern science was just Jancsi to her. When Jancsi heard of his mother’s death, it compounded his own tragedy, for he was then less than a year away from succumbing to the cancer that would kill him at age fifty-three, while he was still at the height of his powers. Six years later his wife Klara would walk into the Pacific Ocean, bedecked in fine jewelry. Now I stood in front of his grave, the fastest thinker of his time having consigned his body and soul to the limitlessly slow processes of disorder and geological time.
Just a few feet away from von Neumann’s resting place lies an owlish, elfin man who arrived in the United States in the spring of 1940 after taking a long route through Siberia and the Pacific to avoid the difficulties of crossing a U-boat-riddled Atlantic. “Kurt F.” had finally deemed the situation in Europe too dangerous to continue living in Vienna, that now crumbling cradle of mathematical, philosophical and artistic thought. His friend Johnny, who had come to the country seven years before, had written several letters petitioning his employer, the Institute for Advanced Study in Princeton, to help Kurt Gödel obtain a visa and flee from the Nazi menace. The institute had become a haven for von Neumann, Einstein and others persecuted in Europe, providing them with the land of liberty that had beckoned the Pilgrims of Massachusetts three hundred years earlier. In his letters Johnny said that Gödel was the most accomplished logician of the century and that he would be a wholly unique addition to the institute faculty. Later, when Gödel’s eccentricities – throughout his life he was plagued by deep insecurities and paranoia – and an insufficient appreciation of his work led to delays in his promotion, von Neumann asked, “How can any of us call ourselves ‘Professor’ if Gödel cannot?”. A year before von Neumann died, Gödel wrote him a letter in which, after expressing shock about his cancer and hope that he would be cured, he conjectured what is considered the first description of the famous P versus NP problem in computer science, a reference all the more remarkable given that Gödel had never expressed any serious interest in Johnny’s pioneering computing work.
More than ten years before, Gödel had made a mathematical announcement which was every bit as important as Penzias and Wilson’s announcement of the universe’s birth. While the Big Bang theory told us with near certainty how the universe was born, Gödel’s announcement told us about the fundamental uncertainty of knowledge itself. His famed incompleteness theorems drove a nail into the coffin of the grand project of axiomatizing all of mathematics, showing that any consistent mathematical system rich enough to contain ordinary arithmetic harbors a kernel of incompleteness at its core. In other words, every such system contains statements that can neither be proved nor disproved within it – statements whose truth value the system itself can never determine. What was even more damning was a parallel finding: there would also be statements which were true but which could not be proved to be so within the same mathematical system. As with many seminal scientific advances, Gödel’s announcement at a 1930 Königsberg conference caused hardly any ripples. But there was one person in the audience who understood the profound implications of his work for the fundamental uncertainty of knowledge – John von Neumann. After the talk von Neumann spoke to Gödel, and in a few days his lightning-fast mind had expanded Gödel’s initial idea into what was called the Second Incompleteness Theorem, a conclusion which young Kurt had already derived.
Since then the two had become friends, and von Neumann was instrumental in getting the institute to hire Gödel. However, it wasn’t he who was Gödel’s best friend. That honor belonged to a fading icon who was considered too behind the times by mainstream physicists because of his unhappiness with the meaning of quantum theory. Einstein was more of an institution than an active physicist in the 40s and 50s – the sharp-tongued Robert Oppenheimer, who was the institute’s director, called him “a landmark, not a beacon” – but Princetonians still saw him walking to and from the institute in his baggy trousers and hat. They also noticed his daily walking companion, an owlish man who seemed to dress in heavy woolen coats even in the balmiest of summers. In his later years, Einstein said that his own work didn’t mean much to him, and that he came to work mainly for the privilege of walking home with Kurt Gödel.
Gödel’s gravestone is a little more ornate than von Neumann’s; perhaps his family wanted it that way or perhaps it spoke to his whimsical love of ordinary, earthy things like children’s fairy tales. It lists the name of his beloved wife Adele, a nightclub dancer who was deemed too ordinary and unsophisticated for Kurt by his family. But Adele nurtured Kurt through his many imagined and real illnesses and once defended him with an umbrella from Nazi hecklers. In Princeton Adele became his caretaker, guiding him through a deeply insecure, literal view of the world which gradually turned into paranoia that there were dark forces at work threatening to poison him. Soon he would only eat food that his dutiful wife had prepared for him. After Adele herself had to spend an extended spell in the hospital because of an illness, Kurt stopped eating altogether. In 1978 he entered Princeton Hospital, weighing not more than eighty pounds, and died essentially of starvation and self-neglect. For the man who had discovered the most rational uncertainty at the heart of the most rational field of human inquiry, his own end was tragically irrational.
Johnny’s end was even more heartbreaking. A man whose only purpose in life seemed to be to think, when he found out he had cancer, he realized that one day his mind would simply cease to think. This he simply could not fathom. Johnny had been instrumental in the United States’ supremacy in both atomic weapons and ballistic missile technology, and because of his importance to national security he was given a special hospital suite at Walter Reed Hospital near Washington D.C., and a coterie of air force officers was posted round the clock, tending to his every need; part of the reason for the armed guard was to ensure he would not give out secrets in his sleep, even as the cancer had relentlessly spread to his brain. He had been recently appointed to the prestigious Atomic Energy Commission and had received the Medal of Freedom from President Eisenhower, but the hand of death tugged at him with relentless certainty. Another high-ranking atomic energy commissioner named Lewis Strauss remembered an unforgettable scene in the hospital – this first-generation immigrant surrounded by the secretaries of the army, navy and air force and the joint chiefs of staff, hanging on to his every word before it disappeared into history’s scorecard.
The end when it came was cruel. To feel reassured that his mind was still working, von Neumann would ask his daughter Marina and his friends Edward Teller and Stanislaw Ulam to pose simple arithmetic questions to him, such as the sum of four and seven. They would come out of his suite shaken and heartbroken. Just like his friend Kurt, Johnny saw his ultra-rational mind succumb to irrationality: believing he might be saved by religion, he asked a Catholic priest to convert him and carried out learned discourses with the priest in Latin and Greek, the kind of discourses with which he had awed his father’s friends as a child prodigy in Budapest. When his brother read to him from Goethe’s Faust, his photographic memory would supply the next few sentences before his brother could reach them. John von Neumann died in February 1957; on his hospital bed lay a set of notes comparing the brain with the computer and proposing new directions for neuroscience and computing. At his burial in Princeton Cemetery were both Robert Oppenheimer and Lewis Strauss, sworn enemies of each other; somehow Johnny always managed to be friends with people who were each other’s enemies.
But none of that mattered in Princeton Cemetery. As I stood there, I could not help but notice something striking – that Gödel’s and von Neumann’s graves were basically indistinguishable from the hundreds around them; two of the most important minds in scientific history lying in the middle of other merely very good ones. Men and institutions have an expiry date, just like civilizations. It’s the one certainty that even Gödel cannot overturn. Ultimately the universe exerts a great leveling effect and we are all the same, beginning and ending in the same way. But our ideas are what make the difference. Gödel discovered a paradox at the heart of seemingly certain mathematical knowledge: he found that permanence is transient. And yet his, von Neumann’s and Bell Labs’ lives, vanishingly brief compared to the intervals between stars, showed us the opposite: that transience can lead to permanence through ideas. Ultimately we may begin and end in the same way, but whether it’s Gödel or von Neumann or a little antenna on top of a hill, it’s our middles that distinguish us. And over those middles we seem to be able to exercise an inordinate degree of control.
First published on 3 Quarks Daily

Book review: The Ideological Origins of the American Revolution by Bernard Bailyn

The Ideological Origins of the American Revolution by Bernard Bailyn

While a slightly academic and challenging read, this book (first published in 1967 and then reprinted twice) is a seminal contribution to revolutionary and pre-revolutionary history and a must-read, not just for understanding the American Revolution but also some of the most important issues we grapple with now. The book is entirely based on pamphlets - essentially the Twitter of their times, but far more intelligent - that were written by people across all social strata in response to events in the 1750s and 1760s. These pamphlets were remarkably flexible, spanned anywhere between ten and seventy pages, and contained a wide variety of writing, from scurrilous, sarcastic, bawdy polemics ("wretched harpies" was a favorite derogatory term - I have long since thought of making a list of choice insults of those times) to calls for populist revolution to reasoned, highly erudite writings. More than any other written form of the era, they contain a microcosm of the basic thinking that led to the revolution.

Perhaps more than any other book I have come across, Bailyn's book helped me understand how far back the roots of the revolution went, how entrenched in English political philosophy and especially libertarian philosophy they were, how simplistic and incomplete the textbook version of “no taxation without representation” is, and how many of the central issues of both 1776 and 2019 are rooted in the core of Americans’ view of their own identity and geography going back all the way to the settlers.

A few key takeaways:

1. It's easy to underestimate the outsized impact that geography had on the colonists' thinking. Decentralized control was almost de rigueur in the vast wildernesses bordering Virginia or Massachusetts, so the idea of central control - both by Parliament in the 1760s *and* by a federal government in 1787 - was deeply unpalatable to many people. The abhorrence toward virtual representation in Parliament was only a logical consequence. You gain a much better understanding of Americans' fondness for states' rights and their fears of federal power by understanding this background.

This decentralized thinking also led quite naturally to freedom of religion - Bailyn cites the prominent struggles of Baptists in western Massachusetts against taxation by the Congregationalists as an example - and, more haltingly and less successfully, to calls to abolish slavery, which, although they did not make their way into the Constitution, did lead individual states to abolish the institution and to stop the slave trade.

2. Almost the entire debate about independence was about where the seat of sovereignty lay. For the English it lay in Parliament, but the colonists argued that while Parliament did have some central rights (there were some strenuous attempts to distinguish between "external" taxation that Parliament could impose and "internal" taxation that was the people's right - this argument was rapidly dropped), the people had "natural" rights that were outside all authority including parliament's.

The colonists were inspired in this thinking by Enlightenment philosophers like Locke and Hume, and this foundation is well known, but Bailyn makes a convincing case that they were inspired even more by the early 18th century English libertarians John Trenchard and Thomas Gordon and their predecessors, who in writings like the famous "Cato's Letters" argued against standing armies, lack of due process and absolute and arbitrary power. Some of these arguments went back to Charles I and the English revolution of the 1640s, so many of the leaders of the revolution had assimilated them well before 1776; Pennsylvania and New York even had written documents outlining some of the key provisions of the Bill of Rights as early as 1677. By the time the Stamp and Townshend Acts were imposed in the 1760s, taxation (which was a relatively minor grievance anyway) was only the final straw on the camel's back.

The biggest strength of the book is that it beautifully illustrates how thinking about decentralized control, natural rights and English libertarian philosophy was a common thread tying together so many disparate themes - independence, taxation and representation, abolitionism, religious freedom, geographic expansion, and finally, the great debate about the Constitution. The volume really reveals the core set of philosophies on which the country is founded better than any other that I have read. A groundbreaking contribution.

Book review: The Island at the Center of the World: The Epic Story of Dutch Manhattan and the Forgotten Colony That Shaped America

The Island at the Center of the World: The Epic Story of Dutch Manhattan and the Forgotten Colony That Shaped America by Russell Shorto

About a decade ago when I was living in New Jersey, I used to drive every weekend from New Jersey to Massachusetts to see my then-girlfriend. While driving back I used to take a road called the Saw Mill Parkway, near a town called Yonkers, on the way to crossing the Tappan Zee bridge. Both reference points seemed completely nondescript to me then. What I did not know until now was that both Yonkers and the Saw Mill Parkway are the only tributes in this country to a remarkable man and a lost time which, if it had endured, could have significantly influenced the history of this country.

The remarkable man was Adriaen van der Donck, a Dutchman who brought the liberal outlook of 17th century Amsterdam to the Dutch colony of New Netherlands, with its capital New Amsterdam. He was known as a 'Jonkheer' or 'young lord', and on his estate up the Hudson river he built a saw mill. Hence Yonkers and the Saw Mill Parkway.

Today about the only two things that most people know about the place are that it was bought from the Indians in 1626 for the seemingly laughably small sum of 60 guilders, or 24 dollars, and that New Amsterdam became New York when the English took it over in 1664. During that period it became the most progressive European colony in America, reflecting the liberal, multicultural, intellectual and progressive spirit of the Netherlands, but its history is basically taught today as an English history. Russell Shorto's engaging book charts the history of this remarkable and forgotten colony, from Henry Hudson's exploration of the river in 1609 (about the time that Jamestown was founded) to its takeover by the English.

Much of the book centers on two larger-than-life characters, Adriaen van der Donck and Peter Stuyvesant, who were sort of opposites; the former is virtually forgotten while the latter lives on in the form of names like Stuyvesant High School. Van der Donck was educated at Leiden, which then had a university rivaling Oxford in its embrace of natural philosophy and logic, and the Netherlands was already serving as a refuge for religious dissenters like the Pilgrims and René Descartes. Inspired by his law studies at Leiden, Van der Donck had a scientist's eye for observation and objectivity. He made friends with the Indians, lived with them, studied the plants, animals, mountains and rivers in the vast landscape of what is now the Albany region and wrote a book describing the land that became a bestseller. He brought principles of representative government and religious freedom to New Amsterdam.

Stuyvesant, who had fought the Spanish in South America and lost a leg to a cannonball, belonged to the conservative old guard and believed in exercising the will of the company of which he was director - the Dutch West India Company, which was then scouting around the world looking for natural and human resources. Although Stuyvesant and van der Donck had been on good terms at first, Stuyvesant's heavy-handed management of the colony led van der Donck and a select few settlers to write protests to The Hague laying out some remarkably forward-looking principles of secular governance for the colony.

And yet somehow, between Stuyvesant's authoritarian but dogged direction and van der Donck's progressive views, New Amsterdam for a few decades became a model for the secular civilization that later defined New York City as a melting pot: a unique place with a ragtag band of seamen, traders, brewers, prostitutes, soldiers, farmers, freethinkers, frontiersmen and people from all countries and professions, one which encouraged multiculturalism and religious freedom in significant contrast to the monocultural, religiously rigid Puritan colonies of New England to the north. In fact it served as a refuge for persecuted Englishmen and women from New England, who settled mainly in Long Island. It also largely regarded the neighboring Indian tribes as equals and traded beaver furs and wampum with them, and unlike the English at Jamestown rarely engaged in murderous conflict with the natives. Under van der Donck's leadership, the Netherlands was going to institute a bona fide progressive government in the place - the Treaty of Westphalia in 1648 had inspired a general sentiment of tolerance and peace - but sadly the beginning of the Anglo-Dutch wars led the country to again cede authority to the West India Company, and Stuyvesant again had the upper hand. Nonetheless the colony still flourished because of its decentralized nature.

The book describes how the colony bequeathed many Americanisms, among them "boss" (from "baas"), "coleslaw" (from "koolsla") and "cookies" (from "koekje"). A lot of upstate New York, New Jersey, Philadelphia (the name Schuylkill is Dutch) and especially New York City has a deeply embedded Dutch heritage, and there's even a non-trivial amount of Swedish and Finnish heritage that came from a Swedish colony that endured for about twenty years in what's now Delaware before it capitulated to the Dutch. The end of the colony came when the English, first under Cromwell and then under Charles II, realized the lucrative advantage that the location would provide in exploring the interior up the Hudson river, along with a strategic waypoint for the then-exploding slave trade. Van der Donck sadly died in an Indian raid, while Stuyvesant lived out his life in his colony and was buried there in 1672. The golden age of Dutch civilization and free thought was on the decline, and England - and the English version of history - took over.

There is no doubt that New Amsterdam was a model of religious and cultural tolerance that needs to be remembered, largely because there was no system of top-down governance there for a long time, but perhaps Shorto overstates the influence it had on future developments in the United States, including the Constitution; nobody knows how exactly history would have turned out had there been two dominant colonies - the English to the North and the Dutch to the South - but it would have been very interesting indeed. Ironically, in his zeal to demonstrate how forgotten secular New Amsterdam was, Shorto fails, even when mentioning New England in some detail, to mention even once the equally remarkable secular experiment to the North which I wrote about earlier - Roger Williams and his founding of Rhode Island. Seems like someone's always forgotten.

Book review: An Empire on the Edge: How Britain Came to Fight America

An Empire on the Edge: How Britain Came to Fight America by Nick Bunker

A highly illuminating, novel account of the lead-up to the Boston Tea Party and the American Revolution from the British side, covering the critical years 1772 to 1775. British politics and economics played a seminal role in inciting the conflict (who knew that smuggling, which drastically undercut the prices of tea and other commodities, played such a huge role in the revolution?). As an Englishman, Bunker offers a very thoughtful and sympathetic but still fair and balanced portrait of British political leaders and convincingly demonstrates how they were complex characters with far from outright villainous intent.

In contrast to the simplistic textbook version of events, Bunker explains in painstaking detail (sometimes too much detail) how the British simply misunderstood the colonists rather than actively oppressed them, seeing them as sources of revenue and little more and largely ignoring them until 1773. Occupied with extending their empire in other parts of the globe and chronically afraid of the French, they thought of the American colonies as a reliable outpost which they would never have to worry about, even as they were losing their grip both on the vast, overstretched geography and on the hearts and minds of the colonists. They also failed to grasp that what were for them simple matters of (modest) taxation were for the colonists fundamental matters of religion, individual rights and free trade; unlike other British colonies like India, the North Americans already had a lot of freedom and individual rights, which made England's petulant, scolding attitude highly inflammatory and the decision to separate from the mother country consequently easier.

Even after the Tea Party there was a huge gulf of misunderstanding between King George III, prime minister Lord North, the colonial secretary Lord Dartmouth and the colonists that led to missed opportunities. Support for taking hard measures against the colonies was also far from monolithic, with many famous members of parliament like Edmund Burke delivering fiery speeches against North, and the quality of British democracy was commendable, with even riotous events like the Boston pamphlets and the Tea Party needing proper legislative procedure, witness accounts and documentary evidence to prosecute (the book makes it clear that even by the 1770s the King could make almost no law without parliament's consent). News (including scurrilous "fake news") also took at least a month to travel from one shore to another during those days, compounding the misunderstanding.

The British also made a critical mistake by getting obsessed with Boston and Massachusetts which, although symbolically important, were politically as well as economically much less important than the Southern colonies and the Hudson Valley. After the tea was dumped, North basically thought he could quell the rebellion through a targeted local war in Massachusetts or Rhode Island (the most radical colony); little did he know that these colonies shared deep resentments with the others. In a curious sense the British understanding of the colonists was as impoverished as the later American understanding of the Vietnamese. Just as an American general could never see through the eyes of Ho Chi Minh or a Vietnamese peasant, North and his ministers were simply too different from an Ethan Allen or a Thomas Young.

Essentially this was a story of many missed opportunities. If it had played its cards right, England could well have turned North America into a country like Australia or Canada, essentially a self-governing nation with intimate ties to the Commonwealth, or even defused the budding revolution in 1772 by extending an olive branch of the right kind, perhaps dividing North from South, before it was too late. As it happened, the two countries indeed turned out to be two nations separated by a common language.

Computer simulations and the Universe

There is a sense in certain quarters that both experimental and theoretical fundamental physics are at an impasse. Other branches of physics like condensed matter physics and fluid dynamics are thriving, but because the fundamental constitution of matter, the origins of the universe and the unification of quantum mechanics with general relativity have long been held to be foundational matters in physics, this lack of progress rightly bothers its practitioners.
Each of these two aspects of physics faces its own problems. Experimental physics is in trouble because it now relies on energies that cannot be reached even by the biggest particle accelerators around, and building new accelerators will require billions of dollars at a minimum. Even before, it was difficult to get this kind of money; in the 1990s the Superconducting Super Collider, an accelerator which would have cost several billion dollars and reached energies greater than those reached by the Large Hadron Collider, was shelved because of a lack of consensus among physicists, political foot-dragging and budget concerns. The next particle accelerator, which is projected to cost $10 billion, is seen as a bad investment by some, especially since previous expensive experiments in physics have confirmed prior theoretical predictions rather than discovered new phenomena or particles.
Fundamental theoretical physics is in trouble because it has become unfalsifiable, divorced from experiment and entangled in mathematical complexities. String theory, which was thought to be the most promising approach to unifying quantum mechanics and general relativity, has come under particular scrutiny, and its lack of falsifiable predictive power has become so visible that some philosophers have suggested that traditional criteria for a theory’s success, like falsification, should no longer be applied to string theory. Not surprisingly, many scientists as well as philosophers have frowned on this proposed novel, postmodern model of scientific validation.
Quite aside from specific examples in theory and experiment, perhaps the most serious roadblock that fundamental physics faces is that it might have reached the end of “Why”. That is to say, the causal framework for explaining phenomena that has been a mainstay of physics since its very beginnings might have ominously hit a wall. For instance, the Large Hadron Collider found the Higgs boson, but this particle had already been predicted almost fifty years before. Similarly, the gravitational waves detected by LIGO were a logical prediction of Einstein’s theory of general relativity proposed almost a hundred years before. Both these experiments were technical tours de force, but they did not make startling, unexpected new discoveries. Other “big physics” experiments before the LHC had validated the predictions of the Standard Model, which is our best theoretical framework for the fundamental constituents of matter.
The problem is that the basic fundamental constants in the Standard Model like the masses of elementary particles and their numbers are ad hoc quantities. Nobody knows why they have the values they do. This dilemma has led some physicists to propose the idea that while our universe happens to be the one in which the fundamental constants have certain specific values, there might be other universes in which they have different values. This need for explanation of the values of the fundamental constants is part of the reason why theories of the multiverse are popular. Even if true, this scenario does not bode well for the state of physics. In his collection of essays “The Accidental Universe”, physicist and writer Alan Lightman says:
Dramatic developments in cosmological findings and thought have led some of the world’s premier physicists to propose that our universe is only one of an enormous number of universes, with wildly varying properties, and that some of the most basic features of our particular universe are mere accidents – random throws of the cosmic dice. In which case, there is no hope of ever explaining these features in terms of fundamental causes and principles.
Lightman also quotes the reigning doyen of theoretical physicists, Steven Weinberg, who recognizes this watershed in the history of his discipline:
We now find ourselves at a historic fork in the road we travel to understand the laws of nature. If the multiverse idea is correct, the style of fundamental physics will be radically changed.
Although Weinberg does not say this, what’s depressing about the multiverse is that its existence might always remain postulated and never proven, since there is no easy way to test it experimentally. This is a particularly bad scenario, because the only thing a scientist hates even more than an unpleasant answer to a question is no answer at all.
Do the roadblocks that experimental and theoretical physics have hit, combined with the lack of explanation for the fundamental constants, mean that fundamental physics is stuck forever? Perhaps not. Here one should recall the remark attributed to Einstein that “our problems cannot be solved with the same thinking that created them”. Physicists may have to think in wholly different ways, to change the fundamental style that Weinberg refers to, in order to overcome the impasse.
Fortunately there is a third tool, in addition to theory and experiment, that has been embraced by biologists and chemists but has not been used as prominently in fundamental physics, and that could help physicists do new kinds of experiments. That tool is computation. Computation is usually regarded as separate from experiment, but computational experiments can be performed much the same way that laboratory experiments are, as long as the parameters and models underlying the computation are well defined and valid. In the last few decades, computation has become as legitimate a tool in science as theory and experiment.
Interestingly, this problem of trying to explain fundamental phenomena without being able to appeal to deeper explanations is familiar to biologists: it is the old problem of contingency and chance in evolution. Just as physicists want to explain why the proton has a certain mass, biologists want to explain why marsupials have pouches that carry their young or why Blue Morpho butterflies are a beautiful blue. While proximate explanations for such phenomena are available, the ultimate explanations hinge on chance. Biological evolution could have followed an infinite number of pathways, and the ones it did follow simply arose from natural selection acting on random mutations. Similarly, one can postulate that while the fundamental constants could have had different values, the ones they have in our universe came about simply through random perturbations, each of which rendered a different universe. Physics turns into biology.
Is there a way to test this kind of thinking in the absence of concrete experiments? One way would be to think of different universes as different local minima in a multidimensional landscape. This picture will be familiar to biochemists, who are used to thinking of the different folded structures of a protein as lying in different local energy minima; a few years back the biophysicist Collin Stultz in fact offered this comparison as a helpful way to think about the multiverse. Computational biophysicists probe the protein landscape by running simulations in which an unfolded protein explores these local minima until it finds the global minimum corresponding to its true folded state. In the last few years, thanks to growing computing power, thousands of proteins have been simulated this way.
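To make the landscape metaphor concrete, here is a minimal sketch, in Python, of the kind of search such simulations perform: a random walker explores a rugged, entirely invented one-dimensional “energy landscape” and, by accepting uphill moves less and less often as a fictitious temperature drops (the familiar simulated annealing trick), eventually settles into the global minimum. The landscape, the cooling schedule and the step size are toy assumptions, not any real folding potential.

```python
import math
import random

def rugged_landscape(x):
    """Toy 'energy landscape' with many local minima and a global minimum near x = 2."""
    return 0.5 * (x - 2.0) ** 2 + math.sin(5.0 * x)

def anneal(steps=20000, x0=8.0, t_start=2.0, t_end=0.01):
    """Simulated annealing: wander the landscape, accepting uphill moves less often as T drops."""
    x, best = x0, x0
    for i in range(steps):
        t = t_start * (t_end / t_start) ** (i / steps)   # geometric cooling schedule
        candidate = x + random.gauss(0.0, 0.3)           # random local move
        delta = rugged_landscape(candidate) - rugged_landscape(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate                                # accept downhill, sometimes uphill
        if rugged_landscape(x) < rugged_landscape(best):
            best = x                                     # remember the deepest point seen
    return best

if __name__ == "__main__":
    minimum = anneal()
    print(f"Approximate global minimum near x = {minimum:.3f}")
```

In real folding simulations the coordinates are thousands of atomic positions rather than a single number, but the logic of hopping between local minima in search of the global one is the same.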
Similarly, I would postulate that computational physicists could run simulations of universes with different values of the fundamental constants and evaluate which ones resemble our own. Because the values of the fundamental constants dictate chemistry and biology, one could well imagine completely fantastic physics, chemistry and biology arising in universes with different values of Planck’s constant or the fine-structure constant. A 0.001% difference in some values might lead to a lifeless, silent universe, one containing only black holes or spectacularly exploding supernovae, or one that bounced between infinitesimal and infinite length scales in a split second. Gentler variations of the constants could result in a universe with silicon-based life, or one with liquid ammonia rather than water as life’s essential solvent, or one with a few million Earth-like planets in every galaxy. With a slight tweaking of the cosmic calculator, one could even have universes in which Blue Morpho butterflies are the dominant intelligent species or in which humans can photosynthesize.
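At its crudest, such a “multiverse sweep” might look like the toy sketch below: perturb a couple of measured constants, hand them to a stand-in classifier, and tally which universes come out looking anything like ours. Only the reference values of the constants are real; the classifier, its thresholds and the ranges of the perturbations are invented placeholders for what would in reality be a full physical simulation.

```python
import random

# Real measured values (CODATA), used only as reference points for the toy sweep.
ALPHA = 7.2973525693e-3      # fine-structure constant
G     = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2

def toy_universe(alpha, g):
    """Crude, invented stand-in for a full simulation: classify a universe
    by how far its constants stray from our own. Thresholds are made up."""
    alpha_shift = abs(alpha - ALPHA) / ALPHA
    g_shift = abs(g - G) / G
    if alpha_shift > 0.04:
        return "no stable chemistry ever gets started"
    if g_shift > 0.10:
        return "stars burn out too fast or never ignite"
    return "broadly resembles our universe"

def sweep(n=10):
    """Perturb the constants by a few percent and report what each universe looks like."""
    for _ in range(n):
        alpha = ALPHA * random.uniform(0.9, 1.1)
        g = G * random.uniform(0.8, 1.2)
        print(f"alpha={alpha:.3e}, G={g:.3e} -> {toy_universe(alpha, g)}")

if __name__ == "__main__":
    sweep()
```

A genuine version would replace the one-line classifier with the nucleosynthesis, star formation and chemistry that actually follow from the perturbed constants.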
All these alternative universes could be simulated and explored by computational physicists without the need to conduct billion-dollar experiments or haggle with politicians for funding. I believe that both the technology and the knowledge base required to simulate entire universes on a computer could be well within our means in the next fifty years, and certainly within the next hundred. In some sense the technology is already within reach: we can already run climate and protein-structure simulations on mere desktop computers, so simulating whole universes should be possible on supercomputers or distributed cloud-computing systems. Crowdsourcing of the kind done for the search for extraterrestrial intelligence or for protein folding would be readily feasible. Another alternative would be to compute with DNA or quantum computers; because of DNA’s enormous storage and combinatorial capacity, DNA computation could multiply the available computational resources manyfold. One can also imagine harnessing natural phenomena like electrical discharges in interstellar space or in the clouds of Venus or Jupiter to perform large-scale computation; an intelligence based on communication through electrical discharges was, in fact, the basis of Fred Hoyle’s science fiction novel “The Black Cloud”.
On the theoretical side, the trick is to know enough about the fundamental phenomena to abstract away the details, so that the simulation can be run at the right emergent level. For instance, physicists can already simulate the behavior of entire galaxies and supernovae without worrying about every single subatomic particle in the system, and biologists can simulate the large-scale behavior of ecosystems without worrying about every single organism in them. Physicists are in fact long familiar with this approach from statistical mechanics, where quantities like temperature and pressure are computed for a system without simulating every individual atom or molecule in it. And the fundamental constants have been measured to enough decimal places to be used confidently in such simulations.
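The statistical-mechanics point can be made concrete with a short sketch: instead of tracking every one of the roughly 10^23 molecules in a flask of gas, one samples a modest number of molecular velocities from the Maxwell-Boltzmann distribution and recovers the temperature from the average kinetic energy via equipartition. The choice of gas (nitrogen at 300 K) and the sample size are arbitrary illustrations.

```python
import random

K_B = 1.380649e-23          # Boltzmann constant, J/K
M_N2 = 4.65e-26             # mass of a nitrogen molecule, kg

def sample_velocities(temperature, n=100_000):
    """Draw velocity components from the Maxwell-Boltzmann distribution:
    each Cartesian component is Gaussian with variance k_B * T / m."""
    sigma = (K_B * temperature / M_N2) ** 0.5
    return [(random.gauss(0, sigma), random.gauss(0, sigma), random.gauss(0, sigma))
            for _ in range(n)]

def temperature_from_sample(velocities):
    """Recover T from the average kinetic energy via equipartition:
    <(1/2) m v^2> = (3/2) k_B T."""
    mean_ke = sum(0.5 * M_N2 * (vx*vx + vy*vy + vz*vz)
                  for vx, vy, vz in velocities) / len(velocities)
    return 2.0 * mean_ke / (3.0 * K_B)

if __name__ == "__main__":
    v = sample_velocities(300.0)   # a statistical stand-in for ~10^23 molecules
    print(f"Recovered temperature: {temperature_from_sample(v):.1f} K")
```

A hundred thousand sampled molecules stand in perfectly well for a mole of them, which is exactly the kind of abstraction a universe-scale simulation would rely on.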
In our hypothetical simulated universe, all the simulator would have to do is input slightly different values of the fundamental constants and then hard-code a few fundamental emergent laws, such as evolution by natural selection and the laws of chemical bonding. In fact, a particularly entertaining exercise would be to run the simulation and see whether these laws emerge by themselves. The whole enterprise would largely be a matter of adjusting initial values, setting the boundary conditions and then sitting back and watching the ensuing fireworks; it would simply be an extension of what scientists already do with computers, albeit on a much larger scale. Once validated, the simulations could be turned into user-friendly tools or toys for children, who could simulate their own universes and hold contests to see whose creates the most interesting physics, chemistry and biology. Adults as well as children could thus participate in extending the boundaries of our knowledge of fundamental physics.
Large-scale simulation of multiple universes could help break the impasse that both experiment and theory in fundamental physics are facing. Computation cannot replace experiment when the underlying parameters and assumptions are not well validated, but there is no reason why they cannot become so as our knowledge from small-scale experiments grows. In fields like theoretical chemistry, weather prediction and drug development, computational predictions are already becoming as important as experimental tests. At the very least, results from such computational studies would constrain the number of potential experimental tests and give us more confidence when asking governments to allocate billions of dollars for the next generation of particle accelerators and gravitational wave detectors.
I believe that the ability to simulate entire universes is imminent, will be part of the future of physics and will undoubtedly lead to many exciting results. But the most exciting ones will be those that even our best science fiction writers cannot imagine. That is something we can truly look forward to.

First published on 3 Quarks Daily.