
On free speech, crossing the Rubicon and the need to unite

I woke up to some welcome news today, news that after an extended period of disappointment and disillusionment has left me feeling better than I have in a long time. Harper’s Magazine published an open letter signed by an eclectic blend of writers, political scientists, journalists and thinkers across the political spectrum, many of whom have been pillars of the liberal intellectual community for decades. In the letter, Noam Chomsky, Margaret Atwood, Salman Rushdie, Steven Pinker, Nicholas Christakis, Fareed Zakaria, Arlie Russell Hochschild and many others deplore the state into which liberal discourse has descended over the past several years.

"The free exchange of information and ideas, the lifeblood of a liberal society, is daily becoming more constricted. While we have come to expect this on the radical right, censoriousness is also spreading more widely in our culture: an intolerance of opposing views, a vogue for public shaming and ostracism, and the tendency to dissolve complex policy issues in a blinding moral certainty." 

The entire letter is worth reading and takes aim at several acts by self-described liberals and Democrats over the years that have been attacks on the very values of free expression and debate they have professed to champion for decades. It takes to task institutions that are dealing out disproportionate punishments for minor infractions, if one can even call them that. It makes the seemingly obvious case that writers can only thrive when they are allowed to experiment and say controversial things – a whole string of historical writers ranging from Virginia Woolf and D. H. Lawrence to Nabokov and Franzen attest to this fact. Rushdie himself, of course, infamously had to go into hiding for several years after the fatwa. The writers of the letter cite dozens of cases of controversial speakers being disinvited from college campuses, professors being censured for citing “controversial” books like Greek classics, editorials being withdrawn from leading newspapers because of internal rebellion, and people’s livelihoods and reputations being threatened simply for tweeting about or referring to something that their detractors disliked. In most cases a small group of outraged people, usually on Twitter, was responsible for these actions.

Most of this of course has been going on for years, even as those of us who believed in free speech without retaliation and diversity of viewpoints have watched with increasing dismay from the sidelines. Some of us have even been targets in the past, although we have not had to face the kind of retribution that other people did. And yet, compared to what has been happening this year, the last few years have seemed tame. I have to say that as much as my disillusionment has grown steadily over time, this year truly seems like the watershed, one that should squarely force us to take a stand.

Let’s first understand that America in 2020 has made everyone’s job difficult: the country is now being led by a racist, ignorant child-president with dictatorial aspirations who calls the press the enemy of the people and whose minions take every chance they can to silence or threaten anyone who disagrees with them, who actively spread misinformation and lies, whose understanding of science and reason is non-existent, and who have been collectively responsible not just for the dismantling of critical public institutions like the EPA and the Justice Department but for orchestrating, through inaction, one of the deadliest public health crises in the history of the country, one that has killed hundreds of thousands. One would think that all of us who are opposed to this administration and its abandonment of the fundamental values on which this country was founded would be utterly horrified and unified at this time.

Sadly, the opposite has happened, and it’s why the Harper’s letter seems like a bright pinprick in a dark landscape to me. For an increasing portion of the self-professed liberal establishment, the answer to Trump has been to go crazy in the other direction. Until this year I generally rejected the slippery slope argument; I assumed that even those with whom I strongly disagreed would stop at a reasonable juncture on the way down. Sadly, I no longer think that way. Three examples among many will suffice, and I think all three are emblematic of larger trends:

First: After the horrific murder of George Floyd, while we were standing in solidarity with the black community and condemning the use of excessive force by police departments, peaceful protests across the country turned into violent demonstrations accompanied by looting. Most of the protestors were peaceful, so I thought that my fellow liberals would cleanly draw a line and denounce the looters while supporting the protests. But this seldom happened; both on my private social media accounts as well as publicly, people started excusing the looting as a justified act of desperation. Worse still, they started to recruit cherry-picked historical examples of civil rights leaders to make their case, including this speech by MLK Jr. in which he seems to justify violence as a desperate act before making it very clear that it is not the right way of going about things. But even if you hadn’t heard the entire speech, to hold up someone who is literally the biggest symbol of non-violent protest in modern times, along with Mahatma Gandhi, as a spokesperson for violent protests is bizarre to say the least.

The ahistorical anomalies continued. One of my favorites was a tweet by Charles Blow of the New York Times who justified the looting by comparing it with the Boston Tea Party. I find it hard to believe that Blow doesn’t know what happened after they threw the tea into the water – they not only stripped naked and castigated a fellow Son of Liberty after they found out that he had secretly pocketed some of the tea, but they came back later and replaced the lock of the ship they had broken. Unlike the looters, the Boston patriots had a profound respect for private property. In fact, it was precisely British insults to private property, by way of quartering soldiers in private residences, that served as a spark for the revolution. In addition, as Richard Rothstein painstakingly documents in his superb book "The Color of Law", laws were explicitly enacted for decades precisely to deny African-Americans and other minorities access to private housing, so it's ironic to see mobs destroying private property in their own communities and crippling the livelihoods of people - many of whom are poor immigrants with small businesses - who had nothing to do with the cause of the protests.

But all these distinctions were lost, especially at the New York Times, which tied itself into a real knot by publishing an op-ed by Senator Tom Cotton. In the last few years Cotton has emerged as one of the most racist and xenophobic of all Trump supporters, and I detest him. Cotton wrote a biased and flawed op-ed that called for the army to step in to pacify cities where looting was taking place. Given his past this was a convenient position for him, and I completely disagreed with it; I did think there needed to be some way for law and order to be imposed, but the last thing we need on top of an already militarized police force is the actual military. Nevertheless, it turned out that a fair percentage of the country agreed with him, including a fair share of Democrats, and Cotton is a sitting United States senator after all, so as an elected public official his views needed to be known, not because they were right but because they were relevant. I suddenly felt newfound respect for the New York Times for airing dissenting views that would allow their readers to get out of their echo chambers and take a stroll in a foreign country, but it didn't last long. As we now know, there was a virtual coup inside the paper and the op-ed editor resigned. As Andrew Sullivan said in a must-read piece, it is deeply troubling when an ideological faction – any ideological faction – can hold a news source hostage and force it to publish only certain viewpoints conducive to its own thinking.

A similar reaction against what were entirely reasonable responses to the looting spilled over into other institutions and individuals’ lives. In perhaps the most bizarre example, David Shor, an analyst at a New York City firm - and whose Twitter profile literally includes the phrase “I try to elect Democrats” - was fired for tweeting a study by a black professor at Princeton showing that non-violent protests are more effective than violent ones. Just chew on that a bit: an individual was fired merely for tweeting, and not just tweeting anything but tweeting something that MLK Jr. would have heartily approved of. When people actually face retribution for pointing out that non-violence works better than violence, you do feel like you are in a mirror universe.

Second: The statue controversy. The killing of Floyd set off a wave of protests that extended to many other areas, some fueled by feuds that had been brewing for years; in this particular case, for more than a hundred. I am all for the removal of Confederate statues; there is nothing redeeming in them, especially since many of them were put up by white supremacists decades after the war ended. While the bigger issue of acknowledging memory and history is complicated, the latest ray of light for me came from Eliot Cohen, a dean at Johns Hopkins who cut through the convoluted thicket to come up with a rule that is, in my opinion, as clear as anything for weighing historical figures in the scales of justice. Cohen asked those demanding that the statues be taken down to ask whether the thing they were criticizing a person for was the most important thing he or she was known for. This question immediately creates a seismic divide between Confederates and Founding Fathers. If the Civil War had not happened, Robert E. Lee would have been a better than average soldier who fought with distinction during the Mexican-American War. If Thomas Jefferson had never owned and abused slaves and never had illegitimate children with Sally Hemings, he would still have been the father of religious freedom, the Louisiana Purchase, the University of Virginia, scientific inquiry and the Declaration of Independence – a document that, even if it was not applied universally, had such abstract power that it kept on being cited by figures as diverse as Abraham Lincoln and Ho Chi Minh, not to mention Frederick Douglass and MLK Jr. Take away Jefferson's slavery and hypocrisy, and he would still have done these great things. Washington is even more unimpeachable, since he led the country to freedom during the war and, unlike Jefferson, freed his slaves. The fact that these were flawed men who still did great things is hardly a novel revelation.

Sadly, you know that your side is losing the war of ideas when it starts handing propaganda victories to the side you despise on a platter. Three years ago, in the context of a Lee statue that was going to be taken down after that terrible anti-Semitic Charlottesville rally by white supremacists, Trump made a loathsome remark about there being “fine people” on both sides and also asked a journalist: if it was Lee today, would it be Jefferson or Washington next? I of course dismissed Trump’s remark as racist and ignorant; he would not be able to recite the Declaration of Independence if it came wafting down at him in a MAGA hat. But now I am horrified that liberals are providing him with ample ammunition by validating his words. A protest in San Francisco toppled a statue of Ulysses S. Grant – literally the man who defeated the Confederacy and destroyed the first KKK – and defaced a statue of Cervantes, a man who as far as we know did not write “Don Quixote” while relaxing from a day’s fighting for the Confederacy or abusing slaves. University of Wisconsin students recently asked for a statue of Lincoln to be removed because he had once said some uncomplimentary words about black people. And, since it was just a matter of time, the paper of record just published an op-ed calling for the Jefferson Memorial in Washington to be taken down. Three years ago, if you had asked me whether my fellow liberals would go from Robert E. Lee to Jefferson and Washington and Grant so quickly, I would have expressed deep skepticism. But here we are, and based on recent events it does not seem paranoid at all to ask whether, if Washington statues are next, streets and schools named after Washington will also be added to the list. How about statues of Plato and Aristotle, who considered slavery a natural state of man? And don’t even get me started on Gandhi, who said some very unflattering words about Africans. The coefficient of friction on the slippery slope is rapidly going to zero.

Third: a call to bar Steven Pinker from the Linguistic Society of America’s list of distinguished fellows and media experts, the latest item in the parade signifying a spiraling descent into intolerance. This call would be laughable if it weren’t emblematic of a deeper trend. My fellow liberal Scott Aaronson has already laid out the absurdity of the effort in his post, not least because Pinker has championed liberalism, evidence-based inquiry and rational thought throughout his long career. The depressing thing is that the tactics are not new: guilt by association, cherry-picking, an inability to judge someone by the totality of their behavior or contributions, no perception of gray in an argument, and so on. The writers don’t like the fact that Pinker tweeted a study showing that police encounters with black people aren’t particularly violent (but that there are more encounters to begin with, so the probability of one turning violent is higher), tweeted that a horrific fatal attack by a disgruntled man at UCSB on women did not imply higher rates of violence against women in general, and said in his widely praised book “The Better Angels of our Nature” that a seemingly mild-mannered man in New York City shockingly turned out to be violent. Pinker has never denied the suffering of individuals but has simply pointed out that that suffering should not blind us to progress at large. As hard as it might be to believe, liberals are punishing someone for saying that the world has at large become a better place because we have embraced liberal values. Again, this feels like we have stepped into a surreal mirror universe.

As biologist Jerry Coyne has explained on his blog, none of these accusations holds water and the protestors are on exceedingly thin ice. What is noteworthy is the by now all-too-common tactic of accusation by selective misrepresentation: the detailed (and disastrously incompetent) combing through of every tweet and every “like” from Pinker that could be held up as evidence of his awfulness as a human being and his affront to the orthodoxy. If this does not seem like a job for an incompetent and yet obsessive Orwellian bureaucrat or a member of the NKVD during Stalin’s show trials, I don’t know what does (as Robert Conquest described in his famous account of Stalin’s purges, going through someone’s entire life history with a fine-toothed comb and holding up even the slightest criticism of the dear leader or disagreement with party orthodoxy was almost de rigueur for the Soviets and the Stasi). Perhaps completely unsurprisingly, the doyen of American linguistics, Noam Chomsky, refused to sign the anti-Pinker letter and instead signed the Harper’s one; Chomsky has consistently been an exemplary supporter of free speech and has famously pointed out that if you support only free speech that you like, you are no different from Goebbels, who was also a big fan of speech he liked. But Pinker’s example again goes to show that the slippery slope argument is no longer a fiction or a strawman. If we went from Milo Yiannopoulos to Steven Pinker in three years, it does not feel paranoid to think that we could get to a very troubling place in three more.

The whole development is of course very sad, certainly a tragedy but rapidly approaching a farce. Liberals and Democrats were supposed to be the party of free speech, intelligent dialogue, tolerance and viewpoint diversity. The Republican Party, meanwhile, is not a political party anymore but a “radical insurgency,” as Chomsky puts it. It is a blot not just on true conservatism but on common sense and decency. The reason I feel particularly concerned this year is that I have always felt that, with Republicans having descended into autocracy and madness, liberal Democrats are the one and only thing standing between democracy and totalitarianism in this country. I have been disillusioned with their abandonment of unions and their disparaging of “middle America” for a long time, but I still thought they upheld traditional, age-old liberal values. With Republicans not even making a pretense of doing this, one would think the Democrats have a golden opportunity to pick up the baton. But instead you have a party that has embraced diversity provided it’s of the kind they like, that allows for no nuance or sliding scale of disagreement, that accuses people of being some kind of “ist” with the spirit of the Inquisition, and that refuses to see individuals as individuals rather than as members of their favorite or despised groups. If the Democrats give up on us, what other influential group can save the country?

Quite apart from how this behavior abandons the values that have made this country a great one, it is a disastrous political strategy. Currently, the number one goal of any American citizen with any amount of decency and intelligence should be to hand Donald Trump and his unscientific, racist, ignorant administration the greatest defeat in American electoral history. Almost nothing else is as important this year. There are sadly still people who are on the fence – people who cannot let go of the Republican Party for one reason or another – but especially in the last few months one hopes that enough of them have become disillusioned with the Trump administration’s utter incompetence, casual cruelty and dog whistle signaling to consider voting for the other guy. The Democrats should be welcoming these people into their ranks with open arms. So would it be harder or easier for fence-sitters to consider voting Democrat when they see self-proclaimed Democrats toppling random statues, unleashing Twitter mobs on people they disagree with, trying to destroy their careers, and basically trying to disparage or eliminate from the sphere of discourse anyone who thinks even slightly differently?

I came to this country as an immigrant, and while several reasons brought me here, just like they do all immigrants, science and technology and freedom of speech were the top two things that I loved and continue to love about the United States. When I was growing up in India, my father, who was an economics professor at a well-known college, used to tell me how he taught econometrics classes during the Indian Emergency of the 1970s, when civil liberties were suspended by the Prime Minister, Indira Gandhi. He told me how he would occasionally see a government agent standing at the back of his classes, taking notes, making sure he was not saying something subversive. It would be amusing to think that the partial differential equations used in econometrics were regarded as subversive (and that the agents understood them), but it was nonetheless a sobering experience. It would have been far worse had my father lived in Cambodia during the same time. While it’s to India’s democratic credit that it escaped from that hole, even today much of the freedom of speech enshrined in India’s Constitution exists only on paper. As several recent incidents have shown, you can get in trouble if you criticize the government, and in fact you can get in trouble even with your fellow citizens, who may rat you out and file lawsuits against you. Even in Britain you have libel laws, and of course free speech is non-existent in countries like Saudi Arabia. In my experience, Americans who haven’t lived abroad often don’t appreciate how special their country is when it comes to free speech. Sadly, as the current situation shows, we shouldn’t take it for granted.

When I complain about problems with free speech in this country, fellow liberals tell me – as if I have never heard of the US Constitution - that the First Amendment only means that the government cannot arrest you if you say something incendiary. But this point misses the mark, since people can stifle each other’s ideas as thoroughly as the government can. And while informal censure has been around since we were hunter-gatherers, when it gets out of hand as it seems to these days, one can see a pall of conformity and a lack of diversity descending over the country. This also puts in a dim light the objection that there cannot be speech without consequences – as David Shor’s example shows, if the consequences include getting fired or booted out of professional organizations for almost anything you say, they are almost as draconian as government oppression and should be unacceptable. As he did with many things, John Stuart Mill said it best in “On Liberty”:

“Protection, therefore, against the tyranny of the magistrate is not enough: there needs protection also against the tyranny of the prevailing opinion and feeling; against the tendency of society to impose, by other means than civil penalties, its own ideas and practices as rules of conduct on those who dissent from them; to fetter the development, and, if possible, prevent the formation, of any individuality not in harmony with its ways, and compel all characters to fashion themselves upon the model of its own.”

It’s also worth remembering that there is much less distinction between “the people” and “the government” than we think, since today’s illiberal anti-free-speech activists are tomorrow’s politicians, community leaders, writers and corporate leaders. And we would be laboring under a truly great illusion if we think that these supposedly well-intentioned activists cannot become repressive; everyone can become repressive if given access to power. The ultimate question is not whether we want a government which does not tread on our freedom - we settled that question in 1787 - it’s about what kind of country we want to live in: one in which ideas, even unpleasant ones, are confronted with other ideas in a sphere of spirited public debate, or one in which everyone boringly thinks the same thing, there is no opportunity for dissent, nuanced thinking is thrown out of the window and anybody who challenges the orthodoxy is eliminated from public discourse one way or another? The latter are definitely not the values that made this country the envy of the world, nor the ones its founding ideals envisaged.

So what should those of us who squarely believe in free speech, viewpoint diversity, dialogue and good faith debate do? This year it has become clear that we should take a stand, and as Scott indicates, if supposedly traditional, plain vanilla liberal values like speech without harsh retaliation - values which go back to the founding of the country and beyond - are suddenly “radical” values that are increasingly the province of a narrow minority, so be it: we should not only embrace these radical values with alacrity but be unhesitant and full-throated in their defense. The signers of the Harper’s Magazine letter have set an excellent precedent, and they are saying something very simple – if you want to call yourself a liberal, act liberal.

Von Neumann In 1955 And 2020: Musings Of A Cheerful Pessimist On Technological Survival

Johnny von Neumann enjoying some of the lighter aspects of technology. The cap lights up when its wearer blows into the tube.

“All experience shows that even smaller technological changes than those now in the cards profoundly transform political and social relationships. Experience also shows that these transformations are not a priori predictable and that most contemporary “first guesses” concerning them are wrong.” – John von Neumann
Is the coronavirus crisis political or technological? Most present analysis would seem to say that the pandemic was a result of gross political incompetence, lack of preparedness and impulsive responses by world leaders and governments. But this view is narrow because it privileges the proximate cause over the ultimate one. The true, deep cause underlying the pandemic is technological. The coronavirus crisis arose from a hyperconnected world in which global communication and the transport of physical goods and people across international borders became much faster than human reaction times. For all our skill in creating these technologies, we did not equip ourselves to manage the network effects and sudden failures they create in social, economic and political systems. An even older technology, the transfer of genetic information between disparate species, was what enabled the whole crisis in the first place.
This privileging of political forces over technological ones is typical of the mistakes that we often make in seeking the root cause of problems. Political causes, greatly amplified by the twenty-four-hour news cycle and social media, may loom large in the short term, but there is little doubt that the slow but sure grind of technological change, penetrating deeper and deeper into social and individual choices, will be responsible for most of the important transformations we face during our lifetimes and beyond. On scales of a hundred to five hundred years, it is science and technology rather than any political or social event that cause the biggest changes in the fortunes of nations and individuals: as Richard Feynman once put it, a hundred years from now the American Civil War will pale into provincial insignificance compared to that other development from the 1860s – the crafting of the basic equations of electromagnetism by James Clerk Maxwell. The former led to a new social contract for the United States; the latter underpins all of modern civilization – including politics, war and peace.
The question, therefore, is not whether we can survive this or that political party or president. The question is, can we survive technology? In 1955, John von Neumann wrote a very thought-provoking article titled “Can We Survive Technology?” in Fortune magazine that put this question in the context of the technology of the times. The essay was influenced by historical context – a great, terrible world war had ended just ten years earlier – and by von Neumann’s own background and interests. But the essay also presents original and very general observations that are most interesting to analyze in the context of our present times. By then Johnny, as friends and even casual acquaintances called him, was already regarded as the fastest and most wide-ranging thinker alive and had already carved his name in history as a mathematician, polymath, physicist and military advisor of the very highest rank. Sadly, he was only two years away from the cancer that would kill him at the young age of 53. He was also blessed – or cursed – with a remarkably prescient but still cheerful and ironic pessimism that enabled him to boldly look ahead into future world events; already in the 1930s, he had predicted the major determinants of a potential world war and its winners and losers. Along with his seminal contributions to game theory, pure and applied mathematics, nuclear weapons design and quantum mechanics, his work on computing and automata had already placed him in the front ranks of soothsayers. And like all good soothsayers, he was sometimes wrong.

Copy of the June 1955 issue of Fortune magazine (from the author’s library)

Perhaps it’s pertinent to quote a paragraph from the last part of Johnny’s article because it lays bare the central thesis of his philosophy in stark terms.
“All experience shows that even smaller technological changes than those now in the cards profoundly transform political and social relationships. Experience also shows that these transformations are not a priori predictable and that most contemporary “first guesses” concerning them are wrong. For all these reasons, one should take neither present difficulties nor presently proposed reforms too seriously.”
Von Neumann starts by pointing to what he saw as the major challenge to the growing technological revolution of the past half century, a technological revolution that saw the rise of radio, television, aviation, submarines, antibiotics, radar and nuclear weapons among other things. He had already seen what military technology could do to millions of people, incinerating them in a heartbeat and reducing their cities and fields to rubble, so one needs to understand his musings in this context.
“In the first half of this century the accelerating industrial revolution encountered an absolute limitation—not on technological progress as such but on an essential safety factor. This safety factor, which had permitted the industrial revolution to roll on from the mid-eighteenth to the early twentieth century, was essentially a matter of geographical and political Lebensraum: an ever broader geographical scope for technological activities, combined with an ever-broader political integration of the world. Within this expanding framework it was possible to accommodate the major tensions created by technological progress. Now this safety mechanism is being sharply inhibited; literally and figuratively, we are running out of room. At long last, we begin to feel the effects of the finite, actual size of the earth in a critical way.”
Let’s contrast this scenario with the last fifty years, which were also a period of extraordinary technological development, mainly in communications technologies and in the nature of work and knowledge engendered by the Internet. As Johnny noted, just as in 1955 we are again “running out of room” and feeling the effects of the “finite, actual size of the earth in a critical way”, albeit in novel incarnations of our own. The Internet has suddenly brought people together and made the sphere of interaction crowded. We were naïve in thinking that this intimacy would engender understanding and empathy; as we realized quite quickly, it tore us apart instead by cloistering us into echo chambers that we hermetically sealed from others through social disapproval and technological means. But as Johnny rightly notes, this crisis is scarcely a result of the specific technology involved; rather, “it is inherent in technology’s relation to geography on the one hand and to political organization on the other.”

Von Neumann and Oppenheimer in front of the Institute for Advanced Study computer in Princeton

The three major technological topics of Johnny’s essay were computing, energy production and the weather. That last topic might seem like an odd addition, but it was foremost on Johnny’s mind as a major application of computing. Climate was of special interest to him because it was characteristic of complex systems governed by non-linear differential equations and multifactorial events that are very hard for human beings to handle with pencil and paper. Scientists during World War II had also become finely attuned to the need for understanding the weather; this need had become apparent during major events like the invasion of Normandy, where the lives of hundreds of thousands of soldiers and civilians depended on day-to-day weather forecasts. It was precisely for understanding complex systems like the weather that Johnny and his associates had made such major contributions to building some of the first general-purpose computers employing the stored-program concept, first at the University of Pennsylvania and then at the Institute for Advanced Study in Princeton.
Johnny had a major interest in predicting the weather and then controlling it. He was also one of the first scientists to see that increased production of carbon dioxide would have major effects on the climate. He was well aware of the nature of feedback systems and analyzed, among other things, the impact of solar radiation and ice changes on the earth’s surface. He understood that both these factors are subject to delicate balances, and that human production of carbon dioxide might well upset or override those balances. But Johnny’s main interest was not simply in predicting the weather but in controlling it. In his essay he talks about cloud seeding and rain making and about modulating the reflectivity of ice to increase or decrease temperatures. He clearly understood the monumental impact, exceeding the effects of even nuclear war, that weather prediction and control might have on human civilization:
“There is no need to detail what such things would mean to agriculture or, indeed, to all phases of human, animal, and plant ecology. What power over our environment, over all nature, is implied! Such actions would be more directly and truly worldwide than recent or, presumably, future wars, or than the economy at any time. Extensive human intervention would deeply affect the atmosphere’s general circulation, which depends on the earth’s rotation and intensive solar heating of the tropics. Measures in the arctic may control the weather in temperate regions, or measures in one temperate region critically affect another, one quarter around the globe. All this will merge each nation’s affairs with those of every other, more thoroughly than the threat of a nuclear or any other war may already have done.”
Of all the topics that Johnny discusses, this is the only one which at first sight does not seem to have come to pass in terms of major developments. The reasons are twofold. Firstly, Johnny did not know about chaos in dynamical systems, which makes the accurate long-range prediction of weather and climate very difficult. Of course, you don’t always need to understand a system well in order to manipulate it by trial and error. This is where the second reason, involving political and social will, comes into play. Johnny’s prediction that carbon dioxide would have a major impact on the climate has been well validated, although the precise effects remain murky. World opinion in general has shied away from climate control experiments, but given the potentially catastrophic effects that CO2 might have on the food supply, immigration, tree cover and biodiversity in general, it is likely that the governments of the world will be pressed into action by their citizens to at least try to mitigate the impact of climate change using technology. Although this vision of weather and climate control now seems quaint and outdated, my feeling is that Johnny's analysis was actually so far ahead of its time that we will see it discussed, debated and put into action, perhaps even during my lifetime. In saying this I remember President Kennedy’s words: “Our problems are man-made; therefore, they can be solved by man.”
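The chaos point deserves a brief illustration, since it is the crux of why the dream of prediction hit a wall. Here is a minimal sketch in Python – my own toy example, not anything from von Neumann's essay – using the logistic map, one of the simplest chaotic systems. Two trajectories that begin one part in a billion apart become completely uncorrelated within a few dozen steps, which is exactly the fate of a weather forecast seeded with imperfect measurements.

def logistic_trajectory(x0, r=4.0, steps=60):
    # Iterate the logistic map x -> r*x*(1-x), a textbook chaotic system.
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)         # reference trajectory
b = logistic_trajectory(0.2 + 1e-9)  # perturbed by one part in a billion

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: difference = {abs(a[n] - b[n]):.9f}")
# The separation grows from a billionth to order one: any measurement
# error, however small, eventually swamps a deterministic forecast.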
Like many scientists of his time, Johnny was optimistic about nuclear power, seeing limitless possibilities for it, perhaps even making it “too cheap to meter”. His prediction seems to have failed along with similar predictions by others, but the failure has less to do with the intrinsic nature of nuclear power and more to do with the social and political structures that hampered its development by imposing onerous regulatory burdens on nuclear plant construction, spreading unrealistic fears about radiation and not allowing entrepreneurs to experiment with reactor designs through trial and error, the way they did with biotechnology and computing. Just as with weather prediction, I believe that Johnny’s vision for the future of nuclear power will become reality once world governments and their citizenry realize that nuclear power provides one of the best ways to escape the dual trap of low-energy alternative fuels and high-energy but politically and environmentally destructive fossil fuels. Already we are seeing a resurgence of new-generation nuclear reactors.
One of the fears that Johnny had about nuclear weapons was that our reaction times would be inadequate in the face of even minor developments in the field. He says,
“Today there is every reason to fear that even minor inventions and feints in the field of nuclear weapons can be decisive in less time than would be required to devise specific countermeasures. Soon existing nations will be as unstable in war as a nation the size of Manhattan Island would have been in a contest fought with the weapons of 1900.”
I already mentioned at the beginning how the rapid advances in communications and transport systems left us woefully unprepared for the coronavirus. But there is another very important sphere of human activity, perhaps unanticipated by Johnny, that has also left us impoverished in terms of countermeasures against even minor “improvements”. This is the sphere of electronic commerce and financial trading, where differences of nanoseconds in the transmission of price signals can make or break the fortunes of companies. More importantly, they can make or break the fortunes of millions of ancillary economic units and individuals who are tied to these institutions through a complex web of models and dependencies whose fault lines we barely understand – a gulf of ignorance with direct causal connections to the global financial crisis of 2008. Sadly, there is no evidence that we understand these dependencies any better now or are better prepared to employ countermeasures against sundry developments in the layering and modeling of financial instruments impacting millions.
Cybersecurity is another field where even minor improvements in being able to control, even momentarily, the complex computer network of an enemy country can have network effects that surpass the initial perturbation and lead to large-scale population impact. Ironically, the very dependence of developed countries on state-of-the-art computer networks which govern the daily lives of their citizens has made them vulnerable to attacks; the capacity of these techno-bureaucratic systems to efficiently and globally ward off foreign and domestic attacks has not kept pace with their creation. Presumably, defense and high-value corporate systems in countries like the United States are resilient enough not to be crippled by such attacks, but as the 2016 election showed, there is little confidence that this is actually the case. Moreover, these systems need to be not just resilient but antifragile, so that they can counteract the vastly amplified effects of small initial jolts with maximum efficiency. As critical medical, transport and financial infrastructure increasingly ties its fate to such technology, the ability to respond with countermeasures in equal or less time than the threat becomes key.
Automation is another field in which Johnny made major contributions through computing. While working on the atomic bomb at Los Alamos, he had observed human “computers” performing repetitive calculations related to the complex hydrodynamics, radiation flow and materials behavior in a nuclear weapon as it blew apart in a millionth of a second. It was apparent to him that not only would computers revolutionize this process of repetitive calculation, but that they would have to employ stored programs if they were not to be crippled in these calculations by the bottleneck of being reconfigured for every task.
“Thanks to simplified forms of automatic or semi-automatic control, the efficiency of some important branches of industry has increased considerably during recent decades. It is therefore to be expected that the considerably elaborated newer forms, now becoming increasingly available, will effect much more along these lines. Fundamentally, improvements in control are really improvements in communicating information within an organization or mechanism. The sum total of improvements in this field is explosive.”
The explosive nature of the improvements in automation again comes from great gains in economies of scale combined with the non-linear effects of chunking together automated protocols, which lead to a critical mass that suddenly frees large parts of engineering and commercial processes from human intervention. Strangely, Johnny did not foresee the seismic effects automation would have in displacing human labor and causing significant political shifts both within and across nations. In looking for insights into this problem, perhaps we should look to a book written by Johnny’s friend and contemporary, the MIT mathematician Norbert Wiener. In 1950 Wiener had written a book titled “The Human Use of Human Beings” in which he extolled automation but warned against machines breaking free from the dictates of their human masters and controlling us instead.

“Progress imposes not only new possibilities for the future but new restrictions.” – Norbert Wiener

Wiener’s prediction has already come true, though probably not in the way he meant or foresaw. Self-replicating pieces of code now travel through cyberspace looking for patterns in human behavior, which they reinforce by modifying and spreading themselves through the cyber-human interface. There is no better example of this influence than the ubiquity of social media and the virtual addiction that most of us display toward these platforms. In this particular case, the self-replicating pieces of code first observe and then hijack the stimulus-response networks in our brains by looking for dopamine rush-inducing reactions and then mutating and fine-tuning themselves to maximize such reactions (the colloquial phrase “maximizing clicks”, while pithy, does not begin to capture such multilayered phenomena).
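To make “maximizing clicks” slightly more concrete: the core of any engagement engine is a feedback loop that steers toward whatever content has historically earned the strongest response. Here is a deliberately oversimplified sketch in Python – my own illustration, with made-up content categories and click rates, not any platform's actual algorithm – written as an epsilon-greedy bandit.

import random

# Hypothetical content categories and their (unknown to the algorithm)
# probabilities of triggering a click -- stand-ins for whatever provokes
# a dopamine response in a given user.
CLICK_PROB = {"cute_animals": 0.10, "news": 0.15, "outrage_bait": 0.40}

estimates = {k: 0.0 for k in CLICK_PROB}  # running click-rate estimates
counts = {k: 0 for k in CLICK_PROB}
EPSILON = 0.1                             # small chance of exploring

for step in range(10_000):
    if random.random() < EPSILON:         # explore: try something random
        choice = random.choice(list(CLICK_PROB))
    else:                                 # exploit: show the current "winner"
        choice = max(estimates, key=estimates.get)
    clicked = random.random() < CLICK_PROB[choice]
    counts[choice] += 1
    # Incrementally update the estimated click rate for this category.
    estimates[choice] += (clicked - estimates[choice]) / counts[choice]

print(counts)  # the feed converges to serving mostly "outrage_bait"

Nothing in this loop knows or cares what the content is; it simply amplifies whatever the reward signal favors, which is precisely the intrinsic ambivalence of technological power that Johnny describes below.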
How do we ward off such behavior-hijacking technology, and more generally technology with destructive effects? Here Johnny is pessimistic, for several reasons. The primary reason is that, as history shows, separating “good” from “bad” technology is a fool’s errand at best. Johnny gives the example of classified military technology, which is often impossible to separate from open civilian technology because of its dual-use nature. “Technology – like science – is neutral all through, providing only means of control applicable to any purpose, indifferent to all…A separation into useful and harmful subjects in any technological sphere would probably diffuse into nothing in a decade.” Any number of examples, ranging from chemistry developed for both fertilizer and explosives to atomic fission developed for both weapons and reactors, underscore the unvarnished truth of this statement.
Technology, and more fundamentally science, are indeed indifferent, mainly because, in Robert Oppenheimer’s words, “The deep things in science are not discovered because they are useful; they are discovered because it was possible to discover them.” Once prehistoric man found a flint rock, striking it to create fire and using it to smash open the skull of a competitor were both inevitable actions, completely inseparable from each other. It was only our unnatural state of civilization, developed during an eye blink of time as far as geological and biological evolution are concerned, that taught man to try to use the rock for the former purpose instead of the latter. These teachings came from social and political structures that men and women built to ensure harmony; there was exactly zero information in the basic technology of the rock itself that would have allowed us to make the distinction. As Johnny notes, a strict separation could come only from obliterating the technology in the first place, providing a neat example of having to kill something in order to save it.
However, the bigger and deeper problem that Johnny identified is that technology has an inexorable, Faustian attraction that creates an unholy meld between its utility and volatility. This is because:
“Whatever one feels inclined to do, one decisive trait must be considered: the very techniques that create the dangers and the instabilities are in themselves useful, or closely related to the useful. In fact, the more useful they could be, the more unstabilizing their effects can also be. It is not a particular perverse destructiveness of one particular invention that creates danger. Technological power, technological efficiency as such, is an ambivalent achievement. Its danger is intrinsic… The crisis will not be resolved by inhibiting this or that apparently particularly obnoxious form of technology”
“The more useful they could be, the more unstabilizing their effects can also be.” This statement perfectly captures the Gordian knot that technologies like social media have bound us with today. Their usefulness is intrinsically linked to the instability they cause, whether that instability involves an addictive hollowing out of our personal time or the political echo chambers and biases that evolve with these platforms. As such, technology is indeed ambivalent, and perhaps the people who will thrive best in an exceedingly technological world are the ones who can comfortably ride the wave of this ambivalence while at least marginally pushing it in a productive direction. Nor can people harbor the seemingly fond hope that, even from a strictly political and social viewpoint, demonstrating that a technology such as a social media platform is toxic and divisive would lead to its decline. When even war, which clearly demonstrated the ability of technology to obliterate millions, could do little to stem further technological development in weaponry, it is scarcely possible to believe that the peacetime problems created by Facebook or Twitter would do anything to starve what fundamentally makes them tick. And yet, just as happened with weaponry, there might be a path forward in which we make these destructive technologies more humane and more conditional, with a curious mix of centralized and citizen-enabled control that curbs their worst excesses.
Quite apart from the emotional and technical aspects, separating the useful effects of technology from the destructive ones and trying to isolate one from the other might also be a moral mistake. This becomes apparent when one realizes that almost all technology, with its roots in science, comes from the basic human urge to seek, discover, build, find and share; the word technology itself comes from the Greek ‘techne’, meaning the skill or craft by which something is gained, and ‘logos’, meaning the words through which such knowledge is expressed. Opposing this urge would be opposing a very basic human faculty.
“I believe, most importantly, prohibition of technology (invention and development, which are hardly separable from underlying scientific inquiry), is contrary to the whole ethos of the industrial age. It is irreconcilable with a major mode of intellectuality as our age understands it. It is hard to imagine such a restraint successfully imposed in our civilization.”
What safeguards remain then against the rapid progression and unpredictable nature of technologies described above? As mundane as it sounds, course-correction through small, incremental, opportunistic steps might be the only productive path. Just like the infinitesimal steps of thermodynamic work in an idealized Carnot engine, one hopes that small course-corrective steps will allow us to gradually turn the system back to an equilibrium state. As Johnny put it, “Under present conditions, it is unreasonable to expect a novel cure-all.” 

The cotton gin

I think back again to Johnny’s central thesis stated at the beginning of this essay – “All experience shows that even smaller technological changes than those now in the cards profoundly transform political and social relationships” – and I think of Eli Whitney’s cotton gin. By the end of the 18th century it was thought by many that slavery was a dying institution; the efficiency of slaves picking cotton was so low that one could scarcely imagine slavery serving as the foundation of the American economy. Whitney’s cotton gin, invented in 1794, changed all that: while previously it took a single slave about ten hours to separate and clean a single pound of cotton, two or three slaves using the machine could turn out fifty pounds of cleaned cotton in a day – roughly a twentyfold gain in output per person. Whitney’s invention was classic dual use: it led to transformative gains in the production of a staple crop, but it was other human beings, not the machine, who decided that these gains would be built on the backs of enslaved human beings often treated worse than animals. The cotton gin consigned America to becoming an economic powerhouse and a fair share of America’s population to not even being treated as citizens. Clearly the reaction time built into the social institutions of the time could not keep pace with the creation of a seemingly mundane brush-like contraption that separated cotton fibers from their seeds.
What can we do in the face of such inevitable, unpredictable technological progression that catches us off guard? If the answer were really simple, we would have discovered it long ago, given the metronomic regularity with which new technologies are invented. But Johnny’s musings end with hope, hope provided by the same history that tells us that stopping technology is tantamount to trying to stop the air from flowing.
“Can we produce the required adjustments with the necessary speed? The most hopeful answer is that the human species has been subjected to similar tests before and seems to have a congenital ability to come through, after varying amounts of trouble. To ask in advance for a complete recipe would be unreasonable. We can specify only the human qualities required: patience, flexibility, intelligence.”
From limiting the spread of nuclear weapons to reducing human discrimination and trafficking to curbing the worst of greenhouse gas emissions and deforestation, while technology has shown nothing but a ceaseless march into the future, shared morality has been a powerful if sporadic force for resurrecting the better angels of our nature against our worst instincts. The social institutions supporting slavery did not reform until a cruel and widespread war forced their hand. But I wonder about counterfactual history. I wonder whether, as gains in agricultural production kept on increasing, first with other mechanical harvesters and beasts of burden and then finally with powerful electric implements, the reliance on humans as a source of forced labor would have been weakened and finally done away with by the moral zeitgeist. The great irony would have been that the injustice one machine (the cotton gin) created might have met its end at the hands of another (the cotton mill created by mass electrification). This counterfactual imagining of history would nonetheless be consistent with the relentless progress of technology that has indeed made life easier and brought dignity to billions whose existence was previously mired in poverty and bondage. Sometimes the existence of something is more important than all the reasons you can think of for justifying its existence. We can continue to hope that the human race will continue to progress as it has before: with patience, flexibility, intelligence.

Book Review: "American Rebels", by Nina Sankovitch

I greatly enjoyed this excellent book on the intertwined lives and fortunes of three families from the little town of Braintree - the Adamses, the Hancocks and the Quincys. Nina Sankovitch has woven a wonderful tale of how these three families and the famous names they gave rise to were both spectators and participants in some of the most famous events in American history. The account is often quite engaging and it kept me glued to the pages. I was aware of the general facts and characters, of course, but the book accomplished the valuable goal of introducing me to Josiah Quincy in particular, a name I had only heard of but did not know much about.
Sankovitch's account begins in the 1740s, when she shows us how John Adams, John Hancock and Josiah Quincy grew up together in Braintree, along with other kids like Abigail Quincy. She leads us through well-known events ranging from about 1765 to 1775 - the years of turmoil, during which all these men and women found the role that history had created for them - with flair, and also sheds light on underappreciated events like the dysentery and smallpox epidemics that swept through Boston. The book portrays Braintree as a small town and quintessential example of American egalitarianism, one where everyone was equal - from the distinguished Quincys and wealthy Hancocks to the Adamses, who came from yeoman farming stock. Today Braintree is simply the south end of the "T" for most Bostonians.
All the boys and girls played together in the same town square and attended the same church where their fathers were ministers. Abigail Adams came from the prominent Quincy family. Everyone had been drilled right from childhood in the values of both self-reliance and community service. The colony had already been enlightened about the evils of slavery, and unlike their Southern brethren many colonists did not own slaves. After John Hancock's father died, his wealthy uncle Thomas and aunt Lydia took him under their wing and spirited him away to Boston. There on Beacon Hill, in a wealthy mansion, Hancock grew up and took charge of the family's prosperous trading business. He soon became perhaps the most prominent citizen of Boston, certainly the wealthiest but also the most charitable. All spit and polish, he would throw dinner parties, give to the poor and somehow still avoid "entangling alliances" with the British, especially the much-hated Governor Thomas Hutchinson.
The real star of the story, however, is Josiah Quincy. A brilliant student at Harvard who raided the library while the others were drinking and playing cards (he knew almost all of Shakespeare by heart), he became a prominent lawyer who started publishing letters promoting the liberty and property rights of the colonists in the "Boston Gazette" and the "Massachusetts Spy". His brilliance, eloquence and dedication to the cause of liberty and property rights all exceeded those of his compatriots, the two Johns. John Adams really became prominent only after his defense of the British soldiers accused of orchestrating the Boston Massacre of 1770; before that the limelight seemed to belong to Hancock, Quincy and John's cousin Sam Adams, who headed the incendiary Sons of Liberty, the group responsible for the Boston Tea Party. Racked with consumption almost all his life, Josiah could be laid low for days and nights, and it is remarkable that he undertook the work that he did with such enthusiasm and industry. His friend Dr. Joseph Warren regularly visited him and nursed him back to health every time - Warren himself later died a martyr's death at Bunker Hill. Josiah had a fraught relationship with his brother Samuel Quincy, who was appointed solicitor general by Hutchinson; even as the other children with whom he grew up were turning into patriots, Samuel remained a loyalist. Later he fled to England, leaving a young wife and three children behind, never to return. In some sense his story is a tragic one because he was never completely won over to the Loyalist cause, but at the very least he should be faulted for not realizing which way the winds were blowing, and especially for abandoning his family.
Josiah took it upon himself to spread the cause of Boston and rally the other colonies. In 1773 he traveled by himself to the South to wake up the Southern colonies and press home the oppression then being visited on Boston by the British through the Tea Act and then the blockade of the port of Boston. His brother Ned had died during a sea voyage and Josiah feared the same fate, but it did not come to pass. In 1774 he undertook an even more serious mission, traveling to England to try to quell misunderstandings between the parent and the child, trying to convince the prime minister, Lord North, and other high officials that Boston wanted to live in peace with England in spite of its rebellious spirit. But back at home, his incendiary pamphlets and letters indicated that he was completely won over to the cause of rebellion, if not independence. When he found out that the king and Parliament had decided to tighten the screws even more on the colony (the machinations and misunderstandings in England are brilliantly described in Nick Bunker's "An Empire on the Edge"), he decided to go back home in the spring of 1775 to alert his countrymen. Sadly, he fell prey to consumption on the voyage back. Sankovitch's account has convinced me that if Josiah had lived and been in good health, he would likely have surpassed both John Adams and John Hancock in his success, perhaps rising to the stature of Jefferson and certainly occupying high office. Sadly this was not to be. His wife Abigail bore him two children, but the girl died as a baby. The son, Josiah Quincy III, later became a prominent political leader and a mayor of Boston.
John Hancock, meanwhile, was performing a delicate balancing act. As perhaps the wealthiest and most prominent citizen of Boston, he had to associate with the governor and royal officials and was given a commission as a colonel. But he still had to stand firm on the principles that his friends were fighting for. Admirably enough, both he and John Adams turned down many very tempting offers from the crown to occupy high office. When the colony's leaders signed a non-importation agreement to punish British trade, Hancock, who had made his fortune trading with Britain, joined in. It was Hancock and the firebrand Sam Adams who later became the most prominent targets of the crown, with Hancock commanding the Massachusetts militia and the minutemen who were soon to become famous. By 1775, when the first shots had been fired at Lexington and Concord, there was a price on both Hancock's and Sam Adams's heads and they had to abandon Boston.
The last part of the book deals with the momentous summer of 1776 when the Declaration of Independence was signed. Abigail Adams had stood guard over the house in Braintree to protect it and her four children from both marauding British soldiers and the horrors of the plague, even as John was away for months during the First and Second Continental Congresses in Philadelphia, overseeing logistics and communicating with George Washington, who had immediately made his way to Cambridge as the new commander of the Continental Army. Sankovitch tells us how Abigail made a remarkable and brave effort to convince John to include the cause of women, poor people and black people in the new code of laws ("Remember the ladies", she wrote); when John flippantly dismissed her admonitions as female ignorance, she wouldn't back down. Later, of course, Abigail became known as "Mrs. President" because of her strong and intelligent opinions as President Adams's wife.
Sadly, as is well known (and superbly documented by Danielle Allen in her book "Our Declaration"), a paragraph condemning slavery and King George's slave trade had been included by Jefferson himself in the original draft of the Declaration but had to be taken out to secure the Southern states' assent. Both John Hancock and John Adams, along with their wives, were utterly opposed to the institution, and it was Josiah Quincy who had first called it a "peculiar curse" (forerunner of the more famous phrase "a peculiar institution"). John Hancock's beloved aunt freed all the family's slaves in her will. The summer of 1776 presented a signal opportunity to right the wrongs in both the country's past and its future, but it would not come to pass, and the peculiar institution would only be eradicated in a horrifying and destructive war nearly a hundred years later, even as its informal effects persisted for another hundred. But they tried, these residents of small Braintree where all were as equal as was possible during those times, and where ministers and residents alike preached the message that you cannot succeed in your own estimation, or in God's, if you do not succeed in the estimation of your community.

Book review: Quantum mechanics and quantum mechanics. David Kaiser's "Quantum Legacies: Dispatches from an Uncertain World"

David Kaiser is a remarkable man. He has two PhDs from Harvard, one in physics and one in the history of science, and is a professor in the Science, Technology and Society department at MIT. He has written excellent books on the history of particle physics and the quirky personalities inhabiting this world. On top of it all, he is a genuinely nice guy - he once wrote me a long email out of the blue, complimenting me on a review of his book "How the Hippies Saved Physics". And while his primary focus is the history and philosophy of physics, Kaiser still seems to find time to do research in quantum entanglement.

What makes Kaiser unique is the attention he gives to what we can call the sociological aspects of physics: things like the physics job market, portrayals of physicists in the humanities, parallel threads of science and history, and perhaps most uniquely, the publications of physics - both the bread-and-butter textbooks that students use and the popular physics books written for laymen. It's this careful analysis of the sociology of physics that makes "Quantum Legacies" a delightful read, treading as it does on some of the field's under-explored territory. There are chapters on quantum indeterminacy and entanglement and the lives of Schrödinger, Einstein and Dirac, a nice chapter on computing and von Neumann's computer, and interesting essays on the Higgs boson, the Large Hadron Collider and the tragic Superconducting Super Collider, which was shelved in 1993. All these are worth reading. But the real gem in the book, as far as I am concerned, is a collection of three chapters on physics publishing; this is the kind of material you won't find in other books on the history and philosophy of physics.

The first chapter is about a book that fascinated me to no end while I was growing up - Fritjof Capra's "The Tao of Physics", which explored parallels between quantum physics and Eastern mysticism. This book, along with the downright dubious "aliens-visited-earth" literature by the Swiss writer Erich von Däniken, dotted my bedroom for a while, until I grew up and discovered in particular that von Däniken was peddling nonsense. But Capra isn't that easy to dismiss, especially since, as Kaiser tells us, his book hit the market at a perfect time in 1975, when physicists had become disillusioned by the Vietnam War, the public had become disillusioned by physicists, and both groups had become smitten with the countercultural movement, Woodstock and Ravi Shankar. There could be no better time for a book exploring the ins and outs of both the bizarre world of quantum mechanics and the mystical world of Buddhism and the "Dance of Shiva" to become popular. Kaiser describes how Capra's book set the tone for many similar ones, and while most of the parallels described in it are fanciful, it did get the public interested in both quantum physics and Eastern philosophy - no small feat. Capra's own personal story - in which he comes to the United States from Vienna, has a hard time making ends meet, goes back, and then decides to write first a textbook and then a more unusual popular book based on his experiences in California and advice from the famed physicist Victor Weisskopf - is also quite interesting.

The second interesting chapter is about a textbook, albeit a highly idiosyncratic one, that is a household name to students of general relativity - "Gravitation", a 1,200-page doorstop of a tome by Charles Misner, Kip Thorne and John Wheeler, all legendary physicists. "MTW", as the textbook became known, was a kind of landmark event in physics publishing. It was the first major book to introduce advanced undergraduate and graduate students to fascinating concepts like time dilation, spacetime curvature and black holes. The joke about its size was that the book was not only *about* gravity but also *generated* gravity. But everything about the book was highly unconventional and quirky, including the typeface, the non-linear narrative and, most importantly, serious and advanced mathematical calculations interspersed with boxes containing cartoons, physicist biographies and outrageous speculations about wormholes and time travel. Most people didn't know what to make of it, and perhaps the best review came from the Indian-American astrophysicist Subrahmanyan Chandrasekhar, who said, "The book espouses almost a missionary zeal in preaching its message. I (probably for historical reasons) am allergic to missionaries." Nonetheless, "MTW" occupies pride of place in the history of physics textbooks, and a comparable one on sagging student shelves, where it's probably more often seen than read.

The last chapter, and perhaps the one I found most interesting, is about the content of the traditional quantum mechanics textbook - really a history of the quantum mechanics textbook in general. The first quantum mechanics textbooks in the United States came out in the 1940s and 50s. Many of them came out of the first modern school of theoretical physics in the country, founded by J. Robert Oppenheimer at the University of California, Berkeley. Two of Oppenheimer's students, David Bohm and Leonard Schiff, set the opposing tones for two different kinds of textbooks (I remember working through a bit of Schiff's book as an undergraduate). After the war Schiff taught at Stanford, Bohm at Princeton.

Bohm was old school and believed in teaching quantum mechanics as a subject fraught with fascinating paradoxes and philosophical speculations. His approach was very close in spirit to the raging debates of the original scientist-philosophers who had founded the revolutionary paradigm - Niels Bohr, Albert Einstein, Erwin Schrödinger and Werner Heisenberg in particular. Bohm of course had a very eventful life in which he was accused of being a Communist and hounded out of the country, after which he eventually settled in England and became known for carrying out and publishing a set of philosophical dialogues with the Indian philosopher Jiddu Krishnamurti. His textbook is still in print and worth reading, although notably the Schrödinger equation is not even introduced until several chapters into the volume.

Schiff's book was different: a practical textbook that taught students how to solve problems, mirroring a philosophy called "shut up and calculate" that was then taking root in American physics education. The Schrödinger equation was introduced on page 6. What Kaiser fascinatingly demonstrates, often through analysis of the original lecture notes from Bohm's and Schiff's classes, is that this attitude reflected both a mushrooming of physics students and a higher demand for physicists engendered by the Cold War and the military-industrial complex. Not surprisingly, when you had to turn out large numbers of competent physicists with jobs waiting for them in the nation's laboratories and universities, you had little time or patience to teach them the philosophical intricacies of the field. Shut up, calculate, and get out there and beat the Soviets became the mantra of the American physics establishment.

Fascinatingly, Kaiser finds that the philosophical and the practical trends in physics textbook publishing wax and wane with the times. When the job market was good and enrollment was high, the practical school prevailed and textbooks reflected its preferences; when the pickings were slim, the job market tight and enrollment dropping, philosophical questions started making a comeback on tests and in textbooks. Especially after 1970, when the job market tanked, the Vietnam War disillusioned many aspiring physicists and the countercultural movement took off, philosophical speculation took off as well, finding popular expression in books like Fritjof Capra's "The Tao of Physics". Perhaps the ultimate rejection of philosophy among physicists came during the second job slump in the early 90s, when many physicists left the world of particles and fields for the world of parties and heels on Wall Street.

Physics publishing, the physics job market, the lives of physicists and physics theories have a strange and unpredictable entanglement of their own, one which even Einstein and Bohr might not have anticipated. Kaiser's book explores these connections well and brings a unique perspective to some of the most interesting aspects of a science that has governed men's lives, their education and their wallets.