Field of Science

A Foray into Jewish History and Judaism

I have always found the similarities between Hinduism and Judaism (and between Hindu Brahmins in particular and Jews) very striking. In order of increasing importance:

1. Both religions are very old, extending back unbroken between 2500 and 3000 years with equally old holy books and rituals.

2. Both religions place a premium on rituals and laws like dietary restrictions etc.

3. Hindus and Jews have both endured for a very long time in spite of repeated persecution, exile and oppression, although this is far more true for Jews than Hindus. Of course, the ancestors of Brahmins bear the burden of caste while Jews have no such thing, but both Hindus and Jews have been persecuted for centuries by Muslims and Christians. At the same time, people of both faiths have also lived in harmony and productive intercourse with these other faiths for almost as long.

4. Both religions place a premium on the acquisition and dissemination of knowledge and learning. Even today, higher education is a priority in Jewish and Hindu families. As a corollary, both religions also place a premium on fierce and incessant argumentation and are often made fun of for this reason.

5. Both religions are unusually pluralistic, secular and open to a variety of interpretations and lifestyles without losing the core of their faith. You can be a secular Jew or an observant one, a zealous supporter or harsh critic of Israel, a Jew who eats pork and still calls himself a Jew. You can even be a Godless Jewish atheist (as Freud called himself). Most importantly, as is prevalent especially in the United States, you can be a “cultural Jew” who enjoys the customs not because of deep faith but because it fosters a sense of community and tradition. Similarly, you can be a highly observant Hindu, a flaming Hindu nationalist, an atheist Hindu who was raised in the tradition but who is now a “cultural Hindu” (like myself), a Hindu who commits all kinds of blasphemies like eating steak and a Hindu who believes that Hinduism can encompass all other faiths and beliefs.

I think that it’s this highly pluralistic and flexible system of belief and tradition that has made both Judaism and Hinduism what Nassim Taleb calls “anti-fragile”: not just resilient but actively energized in the face of bad events. Not surprisingly, Judaism has always been a minor but constant interest of mine, and there is no single group of people I admire more. Previously the interest manifested itself in my study of Jewish scientists like Einstein, Bethe, von Neumann, Chargaff and Ulam, many of whom fled persecution and founded great schools of science and learning. Although I am familiar with the general history, I am planning to do a deeper dive into Jewish history this year. Here is a list of books that I have either read (*), am reading ($) or am planning to read (+). I would be interested in recommendations.

1. Paul Johnson’s “The History of the Jews”. (*)

2. Simon Schama’s “The Story of the Jews”. (*)

3. Jane Gerber’s “The Jews of Spain”. ($)

4. Nathan Katz’s “The Jews of India”. (*)

5. Amos Elon’s “The Pity of It All: A Portrait of the German-Jewish experience, 1743-1933”. ($)

6. Norman Lebrecht’s “Genius and Anxiety: How Jews Changed the World: 1847-1947”. ($)

7. Erwin Chargaff’s “Heraclitean Fire”. (*)

8. Stanislaw Ulam’s “Adventures of a Mathematician”. (*)

9. Stefan Zweig’s “The World of Yesterday”. (*)

10. Primo Levi’s “Survival in Auschwitz” and “The Periodic Table”. (*)

11. Robert Wistrich’s “A Lethal Obsession: Anti-Semitism from Antiquity to the Global Jihad”. (*)

12. Jonathan Kaufman’s “The Last Kings of Shanghai”. (This seems quite wild) (+)

13. Istvan Hargittai’s “The Martians of Science”. (*)

14. Bari Weiss’s “How to Fight Anti-Semitism”. (+)

15. Ari Shavit’s “My Promised Land”. (+)

16. Norman Cohn’s “Warrant for Genocide: The Myth of the Jewish World Conspiracy and the Protocols of the Elders of Zion” (*)

17. Irving Howe’s “World of Our Fathers: The Journey of the East European Jews to America and the Life They Found and Made“ (+)

18. Edward Kritzler’s “Jewish Pirates of the Caribbean” (another book that sounds wild) (+)

19. Alfred Kolatch’s “The Jewish Book of Why” (+)

20. Simon Sebag-Montefiore’s “Jerusalem” ($)

Life. Distributed.

One of my favorite science fiction novels is “The Black Cloud” by Fred Hoyle. It describes an alien intelligence in the form of a cloud that approaches the earth and settles by the sun. Because of its proximity to the sun the cloud causes havoc with the climate and thwarts the attempts of scientists to both study it and attack it. Gradually the scientists come to realize that the cloud is an intelligence unlike any they have encountered. They are finally successful in communicating with the cloud and realize that its intelligence is conveyed by electrical impulses moving inside it. The cloud and the humans finally part on peaceful terms.

There are two particularly interesting aspects of the cloud that warrant further attention. One is that it’s surprised to find intelligence on a solid planet; it is used to intelligence being gaseous. The second is that it’s surprised to find intelligence concentrated in individual human minds; it is used to intelligence constantly moving around. These aspects of the story are interesting because they show that Hoyle was ahead of his time, already thinking about forms of intelligence and life that we have barely scratched the surface of.

Our intelligence is locked up in a three-pound mass of wet solid matter. And it’s a result of the development of the central nervous system. The central nervous system was one of the great innovations in the history of life. It allowed organisms to concentrate their energy and information-processing power in a single mass that sent out tentacles communicating with the rest of the body. The tentacles are important, but the preponderance of the brain’s capability resides in the brain itself, in a single organ that cannot be detached or disassembled and moved around. From dolphins to tigers and from bonobos to humans, we find the same basic plan existing for good reasons. The centralized nervous system is an example of what’s called convergent evolution, which refers to the ability of evolution to independently find the same solutions to complex problems. Especially in Homo sapiens, the central nervous system and the consequent development of the neocortex are seen as the crowning glory of human evolution.

And yet it’s the solutions that escaped the general plan that are in a sense the most interesting. Throughout the animal and plant kingdoms we find examples not of central but of distributed intelligence, like Hoyle’s cloud. Octopuses are particularly fascinating examples. They can smell and touch and understand not just through their conspicuous brains but through their tentacles; they are even thought to “see” color through these appendages. But to find the ultimate examples of distributed intelligence, it might be prudent to look not at Earth’s most conspicuous and popular examples of life but at its most obscure: fungi. Communicating the wonders of distributed intelligence through the story of fungi is what Merlin Sheldrake accomplishes in his book, “Entangled Life”.

Fungi have always been our silent partners, partners that are much more like us than we can imagine. Like bacteria they are involved in an immense number of activities that both aid and harm human beings, but most interestingly, fungi unlike bacteria are eukaryotes and are therefore, counterintuitively, evolutionarily closer to us than to their superficially similar bacterial counterparts. And they get as close to us as we can imagine. Penicillin is famously produced by a fungus, while antifungal drugs like fluconazole are used to treat fungal infections. Such infections can be deadly; Aspergillus forms clumps in the lungs that can rapidly kill patients by spreading through the bloodstream. Fungi of course charm purveyors of gastronomic delights everywhere in the world as mushrooms, and they also charm purveyors of olfactory delights as truffles; a small lump can easily sell for five thousand dollars. Last but not least, fungi have taken millions of humans into other worlds and artistic explosions of colors and sight by inducing hallucinations.

With this diverse list of vivid qualities, it may seem odd that perhaps the most interesting quality of fungi lies not in what we can see but in what we can’t. Mushrooms may grace dinner plates in restaurants and homes around the world, but they are merely the fruiting bodies of fungi. Fungi may be visible as clear vials of life-saving drugs in hospitals. But as Sheldrake describes in loving detail, the most important parts of fungi are hidden below the ground. These are the vast networks of the fungal mycelium – sheer, gossamer, thread-like structures snaking their way through forests and hills, sometimes spreading over hundreds of square miles, occasionally as old as the Neolithic revolution, all out of sight of most human beings and visible only to the one entity with which they have forged an unbreakable, intimate alliance – trees. Dig a little deep into a tree root and put it under a microscope and you will find wisps of what seem like even smaller roots, except that these wisps penetrate into the tree’s roots. They are fungal mycelium. They are everywhere; around roots, under them, over them and inside them. At first glance the ability of fungal networks to penetrate tree roots might evoke pain and images of an unholy, literal physical union of two species. It is certainly a physical union, but it may be one of the holiest meetings of species in biology. In fact it might well be impossible to find a tree whose roots have no interaction with fungal mycelium. The vast network of fibers the mycelium forms is called a mycorrhizal network.

The mycorrhizal networks that wind their way in and out of tree roots are likely as old as trees themselves. The alliance almost certainly exists because of a simple matter of biochemistry. When plants first colonized land they possessed the miraculous ability of photosynthesis that completely changed the history of life on this planet. But unlike carbon, which they can literally manufacture out of sunlight and thin air, they still have to find the other essential nutrients of life: metals like magnesium and life-giving elements like phosphorus and nitrogen. Because of an intrinsic lack of mobility, plants and trees had to find someone who could bring them these essential elements. The answer was fungi. Fungal networks stretching across miles ensured that nutrients could be shuttled back and forth between trees. In return the fungi could consume the precious carbon that the tree sank into its body – as much as twenty tons during a large tree’s lifetime. It was the classic example of symbiosis, a term coined by the German botanist Albert Frank, who also coined the term mycorrhiza.

However, the discovery that fungal networks could supply trees with essential nutrients in a symbiotic exchange was only the beginning of the surprises they held. Sheldrake talks in particular about the work of the mycologists Lynne Boddy and Suzanne Simard, who have found qualities in the mycorrhizal networks of trees that can only be described as deliberate intelligence. Here are a few examples: fungi seem to “buy low, sell high”, providing trees with important elements when the trees have fallen on hard times and liberally borrowing from them when they are doing well. Mycorrhizal networks also show electrical activity and can discharge a small burst of electrochemical potential when prodded. They can entrap nematodes in a kind of death grip and extract their nutrients; they can do the same with ants. Perhaps most fascinatingly, fungal mycelia display “intelligence at a distance”; one part of a huge fungal network seems to know what the other is doing. The most striking experiment demonstrating this shows oyster mushroom mycelium growing on a piece of wood and spreading in all directions. When another piece of wood is kept at a distance, within a few days the fungal fibers spread and latch on to that piece. This is perhaps unsurprising. What is surprising is that once the fungus discovers this new food source, it almost instantly pares down growth in all other parts of its network and concentrates it in the direction of the new piece of wood. Even more interestingly, scientists have found that the growing tips of fungal hyphae can act not only as sensors but as primitive Boolean logic gates, opening and closing to allow only certain branches of the network to communicate with each other. There are even attempts to use fungi as primitive computers.
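The growth-reallocation behavior in the oyster mushroom experiment described above can be caricatured in a few lines of code. This is purely my own toy sketch, not a model from Sheldrake's book: the mycelium explores in several directions with equal effort, and once one direction finds food, effort is concentrated there and pared down everywhere else. All numbers are illustrative.

```python
# Hypothetical sketch: growth effort of a mycelium across four directions,
# reallocated when one direction discovers a new food source.

def reallocate(allocation, found_direction, concentrate=0.8):
    """Shift most growth effort toward the direction where food was found."""
    n_others = len(allocation) - 1
    new_alloc = {}
    for direction in allocation:
        if direction == found_direction:
            new_alloc[direction] = concentrate
        else:
            # Remaining effort is spread thinly over the other directions.
            new_alloc[direction] = (1.0 - concentrate) / n_others
    return new_alloc

# Exploration phase: equal effort in all directions.
allocation = {"north": 0.25, "south": 0.25, "east": 0.25, "west": 0.25}

# A new block of wood is discovered to the east; growth concentrates there.
allocation = reallocate(allocation, "east")
print(allocation)
```

The point of the sketch is only that the reallocation is global: a local discovery at one hyphal tip changes the growth budget of the entire network, which is what makes the real experiment so striking.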

This intelligent long-distance relay gets mirrored in the behavior of the trees that the fungi form a mind meld with. One of the characters in Richard Powers’s marvelous novel “The Overstory” discovers how trees whisper hidden signals to each other, not just through fungal networks but through ordinary chemical communication. The character Patricia Westerford finds out that when insects attack one tree, it can send out a chemical alarm that alerts trees located even dozens of meters away of its plight, causing them to kick their own repellent chemical production into high gear. Meeting the usual fate of scientists with novel ideas, Westerford and her ideas are first ignored, then mocked and ostracized, and ultimately grudgingly accepted. But the discovery of trees and their fungal networks communicating with each other through the agency of both chemicals and other organisms like insects is now accepted enough to be part of both serious scientific journals and prizewinning novels.

Fungi can also show intelligent behavior by manipulating our minds, and this is where things get speculative. Psilocybin has been used by shamans for thousands of years; LSD, derived from the ergot fungus, much more recently by hippies and Silicon Valley tech entrepreneurs. When you are familiar with both chemistry and biology it’s natural to ask what might be the evolutionary utility of chemical compounds that bring about changes in perception so profound and seemingly liberating as to lead someone like Aldous Huxley to make sure that he was on a psychedelic high during the moment of his death. One interesting clue arises from the discovery of these compounds in the chemical defense responses of certain fungi. Clearly the microorganisms that are engaged in a war with fungi – and these often include other fungi – lack a central nervous system and have no concept of a hallucination. But if these compounds are found as part of the wreckage of fungal wars, maybe defense was their original purpose, and the fact that they happen to take humans on a trip is only incidental.

That is the boring and likely explanation. The interesting and unlikely explanation that Sheldrake alludes to is to consider a human, in the medley of definitions that humans have lent themselves to, as a vehicle for a fungus to propagate itself. In this Selfish Fungus theory, magic mushrooms and ergot have been able to hijack our minds so that more of us will use them, cultivate and tend them and love them, ensuring their propagation. Even though their effects might be incidental, they can help us in unexpected ways. If acid and psilocybin trips can spark even the occasional discovery of a new mathematical object or a new artistic style, the purposes of both the fungi and the humans are served. I have another interesting theory of psychedelic mushroom-human co-evolution in mind, one that refers to Julian Jaynes’s idea of the bicameral mind. According to Jaynes, humans may have lacked consciousness until as recently as 3000 years ago because their mind was divided into two parts, one of which “spoke” and the other “listened”. What we call Gods speaking to humans was a result of the speaking side holding forth. Is it possible that at some point in time, humans got hold of psychedelic fungi which hijacked a more primitive version of the speaking mind, allowing it to turn into a full-blown voice inside the other mind’s head, so to speak? Jaynes’s theory has been called “either complete rubbish or a work of consummate genius, nothing in between” by Richard Dawkins, and psychedelics might offer one way to probe whether there is something to it.

It is all too easy to anthropomorphize trees and especially fungi, which only indicates how interestingly they behave. One can say that “trees give and trees receive”, “trees feel” and even “trees know”, but at a biological level is this behavior little more than a series of Darwinian business transactions, purely driven by natural selection and survival? Maybe, but ultimately what matters is not what we call the behavior but the connections it implies. And there is no doubt that fungi, trees, octopuses and a few other assorted creatures display a unique type of intelligence that humans have merely glimpsed. Distributed intelligence clearly has a few benefits over a central, localized one. Unlike humans, who are unlikely to live when their heads are cut off, planarian flatworms can regrow their heads when they get detached, so there’s certainly a survival advantage conferred by not having your intelligence organ be one and done. This principle has been exploited by the one form of distributed intelligence that is an extension of human beings and that has taken over the planet – the Internet. Among the many ideas regarded as origins of the Internet, one was conceived by the defense department, which wanted to build a communications network that would be resilient in the face of nuclear attack. Having a distributed network with no single central node was the key. Servers in companies like Google and Facebook are also set up in such a way that a would-be hacker or terrorist would have to take out many of them, not just a few, in order to measurably impair the fidelity of the network.
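The resilience argument above can be made concrete with a toy graph experiment. The sketch below (my own illustration; the topologies are made up) compares a centralized star network, where everything routes through one hub, with a small mesh: removing the hub shatters the star, while the mesh survives the loss of any single node.

```python
# Toy comparison of centralized vs distributed network resilience,
# using a plain breadth-first search over a hand-built graph.
from collections import deque

def connected_after_removal(edges, nodes, removed):
    """Check whether the surviving nodes still form one connected component."""
    alive = nodes - {removed}
    graph = {n: set() for n in alive}
    for a, b in edges:
        if a in alive and b in alive:
            graph[a].add(b)
            graph[b].add(a)
    start = next(iter(alive))
    seen, queue = {start}, deque([start])
    while queue:
        for nbr in graph[queue.popleft()]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen == alive

nodes = {0, 1, 2, 3, 4}
star = [(0, 1), (0, 2), (0, 3), (0, 4)]          # node 0 is the hub
mesh = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),  # a ring with a chord
        (1, 3)]

print(connected_after_removal(star, nodes, removed=0))  # False: the hub is gone
print(connected_after_removal(mesh, nodes, removed=0))  # True: traffic reroutes
```

This is the same design principle attributed above to the early defense network: no node is special, so no single failure is fatal.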

I also want to posit the possibility that distributed systems might be more analog than central ones and might therefore confer unique advantages. Think of a distributed network of water pipes, arteries, traffic lanes or tree roots and fungal networks, and one has in mind the image of a network that can almost instantaneously transmit changes in parameters like pressure, temperature and density from one part of the network to another. These are all good examples of analog computation, although in the case of arteries, the analog process is built on a substrate of digital neuronal firing. The human body is clearly a system where a combination of analog and digital works well, but looking at distributed intelligence one gets a sense that we could optimize our intelligence significantly using more analog computing.

There is no reason why intelligence may not be predominantly analog and distributed so that it becomes resilient, sensitive and creative like mycorrhizal networks, able to guard itself against existential threats, respond to new food and resource locations and construct new structures with new form and function. One way to make human intelligence more analog and distributed would be to enable human-to-human connections through high-fidelity electronics that allow a direct flow of information to and from human brains. But a more practical solution might be to enable downloading brain contents, including memory, into computers and then allowing these computers to communicate with each other. I do not know if this advance will take place during my lifetime, but it could certainly bring us closer to being a truly distributed intelligence that, just like mycorrhizal networks, is infinitely responsive, creative, resilient and empathetic. And then perhaps we will know exactly what it feels like to be a tree.

Brains, Computation And Thermodynamics: A View From The Future?

Rolf Landauer
Progress in science often happens when two or more fields productively meet. Astrophysics got a huge boost when the tools of radio and radar met the age-old science of astronomy. From this fruitful marriage came discoveries like the relic radiation from the big bang. Another example was the union of biology with chemistry and quantum mechanics that gave rise to molecular biology. There is little doubt that some of the most important discoveries in science in the future will similarly arise from the accidental fusion of multiple disciplines.
One such fusion sits on the horizon, largely underappreciated and unseen by the public. It is the fusion between physics, computer science and biology. More specifically, this fusion will likely see its greatest manifestation in the interplay between information theory, thermodynamics and neuroscience. My prediction is that this fusion will be every bit as important as any potential fusion of general relativity with quantum theory, and at least as important as the development of molecular biology in the mid 20th century. I also believe that this development will likely happen during my own lifetime.
The roots of this predicted marriage go back to 1867. In that year the great Scottish physicist James Clerk Maxwell proposed a thought experiment that was later called ‘Maxwell’s Demon’. Maxwell’s Demon was purportedly a way to defy the second law of thermodynamics that had been proposed a few years earlier. The second law of thermodynamics is one of the fundamental laws governing everything in the universe, from the birth of stars to the birth of babies. It basically states that left to itself, an isolated system will tend to go from a state of order to one of disorder. A good example is how perfume from an open bottle wafts throughout a room with time. This order and disorder is quantified by a quantity called entropy.
In technical terms, the order and disorder refer to the number of states a system can exist in; order means fewer states and disorder means more. The second law states that isolated systems will always go from fewer states and lower entropy (order) to more states and higher entropy (disorder). Ludwig Boltzmann quantified this relationship with a simple equation carved on his tombstone in Vienna: S = k ln W, where k is a constant called the Boltzmann constant, ln is the natural logarithm (to the base e) and W is the number of states.
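Boltzmann's tombstone formula is short enough to evaluate directly. The toy numbers of microstates below are made up for illustration; the only real constant is k. Notice that doubling the number of accessible states W raises the entropy by exactly k ln 2, the same quantity that reappears later in this story as the cost of erasing one bit.

```python
# Evaluating S = k ln W for a toy system, and the effect of doubling W.
import math

k_B = 1.380649e-23  # Boltzmann constant in J/K (exact in the 2019 SI)

def boltzmann_entropy(W):
    """Entropy S = k ln W for a system with W equally likely microstates."""
    return k_B * math.log(W)

S1 = boltzmann_entropy(10**6)      # a system with a million microstates
S2 = boltzmann_entropy(2 * 10**6)  # the same system with twice as many

print(S2 - S1)  # k * ln 2, roughly 9.57e-24 J/K, regardless of the starting W
```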
Maxwell’s Demon was a mischievous creature which sat on top of a box with a partition in the middle. The box contains molecules of a gas ricocheting in every direction. Maxwell himself had found that these molecules’ velocities follow a particular distribution of fast and slow. The demon observes these velocities, and whenever there is a molecule moving faster than usual in the right side of the box, he opens the partition and lets it into the left side, quickly closing the partition. Similarly he lets slower-moving molecules through from left to right. After some time, all the slow molecules will be in the right side and the fast ones will be in the left. Now, velocity is related to temperature, so this means that one side of the box has heated up and the other has cooled down. To put it another way, the box went from a state of random disorder to order. According to the second law this means that the entropy of the system decreased, which is impossible.
Maxwell’s demon seemingly contravenes the second law of thermodynamics (University of Pittsburgh)
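The demon's sorting trick is easy to mimic in code. The sketch below is a deliberately crude toy, not a real Maxwell-Boltzmann simulation: molecular speeds are drawn from an arbitrary Gaussian, and the demon simply routes fast molecules left and slow ones right. The point is only that sorting alone produces a hot side and a cold side from a mixed gas.

```python
# A toy Maxwell's demon: sort molecules by speed into two halves of a box.
import random

random.seed(0)
speeds = [random.gauss(500, 150) for _ in range(10_000)]  # m/s, toy values
threshold = 500  # the demon's cutoff between "fast" and "slow"

left = [v for v in speeds if v > threshold]    # demon admits fast molecules
right = [v for v in speeds if v <= threshold]  # slow molecules go right

mean = lambda xs: sum(xs) / len(xs)
# Temperature tracks molecular speed, so the left half is now hotter:
print(mean(left) > mean(right))  # True: order (a temperature gradient) from disorder
```

What the code cannot show, of course, is Szilard's resolution: the demon's own measurement and decision-making generate at least as much entropy as the sorting removes.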
For decades scientists tried to get around Maxwell’s Demon’s paradox, but it was in 1922 that the Hungarian physicist Leo Szilard made a dent in it, when he was a graduate student hobnobbing with Einstein, Planck and other physicists in Berlin. Szilard realized an obvious truth that many others seemed to have missed. The work and decision-making that the demon does to determine the velocities of the molecules itself generates entropy. If one takes this work into account, it turns out that the total entropy of the system has indeed increased. The second law is safe. Szilard later went on to a distinguished career as a nuclear physicist, patenting a refrigerator with Einstein and becoming the first person to think of a chain reaction.
Perhaps unknowingly, however, Szilard had also discovered a connection – a fusion of two fields – that was going to revolutionize both science and technology. When the demon does work to determine the velocities of molecules, the entropy that he creates comes not just from the raising and lowering of the partition but from his thinking processes, and these processes involve information processing. Szilard had discovered a crucial and tantalizing link between entropy and information. Two decades later, mathematician Claude Shannon was working at Bell Labs, trying to improve the communication of signals through wires. This was unsurprisingly an important problem for a telephone and communications company. The problem was that when engineers were trying to send a message over a wire, it would lose its quality because of many factors including noise. One of Shannon’s jobs was to figure out how to make this transmission more efficient.
Shannon found that there is a quantity that relates to the information transmitted over the wire. In crude terms, this quantity measured uncertainty: the rarer and more surprising a message, the more information it carried and the higher this quantity was, while the higher the probability of transmitting accurate information over a channel, the lower this quantity was. When Shannon showed his result to the famous mathematician John von Neumann, von Neumann, with his well-known lightning-fast ability to connect disparate ideas, immediately saw what it was: “You should call your function ‘entropy’”, he said, “firstly because that is what it looks like in thermodynamics, and secondly because nobody really knows what entropy is, so in a debate you will always have the upper hand.” Thus was born the connection between information and entropy. A further fortuitous connection emerged – between information, entropy and error or uncertainty. The greater the uncertainty in transmitting a message, the greater the entropy, so entropy also provided a way to quantify error. Shannon’s 1948 paper, “A Mathematical Theory of Communication”, was a seminal publication and has been called the Magna Carta of the information age.
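Shannon's quantity has a standard closed form, H = -Σ pᵢ log₂ pᵢ, measured in bits, and a few evaluations make the behavior described above concrete: a certain message carries no information, a fair coin carries exactly one bit per flip, and a predictable, biased source carries less.

```python
# Shannon entropy of a discrete probability distribution, in bits.
import math

def shannon_entropy(probs):
    """H = -sum(p * log2(p)); terms with p = 0 contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([1.0]))       # 0.0  -- a certain outcome, zero uncertainty
print(shannon_entropy([0.5, 0.5]))  # 1.0  -- a fair coin, one bit per flip
print(shannon_entropy([0.9, 0.1]))  # ~0.469 -- a biased source, less uncertainty
```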
Even before Shannon, another pioneer had published a paper that laid the foundations of the theory of computing. In 1936 Alan Turing published “On Computable Numbers, with an Application to the Entscheidungsproblem”. This paper introduced the concept of Turing machines which also process information. But neither Turing nor von Neumann really made the connection between computation, entropy and information explicit. Making it explicit would take another few decades. But during those decades, another fascinating connection between thermodynamics and information would be discovered.
Stephen Hawking’s tombstone at Westminster Abbey (Cambridge News)
That connection came from Stephen Hawking getting annoyed. Hawking was one of the pioneers of black hole physics, and along with Roger Penrose he had discovered that at the center of every black hole is a singularity that warps spacetime infinitely. The boundary of the black hole is its event horizon, and from within that boundary not even light can escape. But black holes posed some fundamental problems for thermodynamics. Every object contains entropy, so when an object disappears into a black hole, where does its entropy go? If the entropy of the black hole does not increase then the second law of thermodynamics would be violated. Hawking had proven that the area of a black hole’s event horizon never decreases, but he had pushed the thermodynamic question under the rug. In 1972 at a physics summer school, Hawking met a graduate student from Princeton named Jacob Bekenstein who proposed that the increasing area of the black hole was basically a proxy for its increasing entropy. This annoyed Hawking and he did not believe it, because increased entropy is related to heat (heat is the highest-entropy form of energy) and black holes, being black, could not radiate heat. With two colleagues Hawking set out to prove Bekenstein wrong. In the process, he not only proved him right but also made what is considered his greatest breakthrough: he gave black holes a temperature. Hawking found that black holes do emit thermal radiation. This radiation can be explained when you take quantum mechanics into account. The Hawking-Bekenstein discovery was a spectacular example of another fusion: between information, thermodynamics, quantum mechanics and general relativity. Hawking deemed it so important that he wanted the result on his tombstone in Westminster Abbey, and so it has been.
This short digression was to show that more links between information, thermodynamics and other disciplines were being forged in the 1960s and 70s. But nobody saw the connections between computation and thermodynamics until Rolf Landauer and Charles Bennett came along. Bennett and Landauer were both working at IBM. Landauer was an émigré who fled Nazi Germany before working for the US Navy as an electrician’s mate, getting his PhD at Harvard and joining IBM. IBM was then a pioneer of computing; among other things they had built computers for the Manhattan Project. In 1961, Landauer published a paper titled “Irreversibility and Heat Generation in the Computing Process” that has become a classic of science. In it, Landauer established that the basic irreversible act of computation – erasing a bit, say resetting a 1 to a 0 – generates a bare minimum amount of entropy. He quantified this amount with another simple equation: S = k ln 2, with k again being the Boltzmann constant and ln the natural logarithm. This has become known as the Landauer bound; it is the absolute minimum amount of entropy that has to be expended in a single irreversible bit operation. Landauer died in 1999, and as far as I know the equation is not carved on his tombstone.
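It is worth putting a concrete number on the Landauer bound. Entropy S = k ln 2 per erased bit corresponds to a minimum dissipated energy of E = k T ln 2 at temperature T, which at room temperature works out to a few zeptojoules; real chips dissipate many orders of magnitude more per bit operation.

```python
# The Landauer limit in joules: minimum energy dissipated to erase one bit
# at temperature T, E = k * T * ln(2).
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

landauer_energy = k_B * T * math.log(2)
print(landauer_energy)  # ~2.87e-21 joules per erased bit
```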
The Landauer bound applies to all kinds of computation in principle and biological processes are also a form of information processing and computation, so it’s tantalizing to ask whether Landauer’s calculation applies to them. Enter Charles Bennett. Bennett is one of the most famous scientists whose name you may not have heard of. He is not only one of the fathers of quantum computing and quantum cryptography but he is also one of the two fathers of the marriage of thermodynamics with computation, Landauer being the other. Working with Landauer in the 1970s and 80s, Bennett applied thermodynamics to both Turing machines and biology. By good fortune he had gotten his PhD in physical chemistry studying the motion of molecules, so his background primed him to apply ideas from computation to biology.
Charles Bennett from IBM has revolutionized our understanding of the thermodynamics of computation (AMSS)
To simplify matters, Bennett considered what he called a Brownian Turing machine. Brownian motion is the random motion of atoms and molecules. A Brownian Turing machine can write and erase characters on a tape using energy extracted from a random environment, which makes it reversible. A reversible process might seem strange, but in fact it’s found in biology all the time. Enzyme-catalyzed reactions proceed through the reversible motion of chemicals – at equilibrium there is an equal probability that an enzymatic reaction will go forward or backward. What makes these processes irreversible is the addition of starting materials or the removal of chemical products. Even in computation, only a process which erases bits is truly irreversible, because you lose information. Bennett envisaged a biological process like protein translation as a Brownian Turing machine which adds or subtracts a molecule like an amino acid, and he calculated the energy and entropy expenditures involved in running this machine. Visualizing translation as a Turing machine made it easier to do a head-to-head comparison between biological processes and bit operations. Bennett found that if the process is reversible the Landauer bound does not hold and there is no minimum entropy required. Real life of course is irreversible, so how do real-life processes compare to the Landauer bound?
In 2017, a group of researchers published a fascinating paper in the Philosophical Transactions of the Royal Society in which they explicitly calculated the thermodynamic efficiency of biological processes. Remarkably, they found that the efficiency of protein translation is several orders of magnitude better than that of the best supercomputers, in some cases by as much as a millionfold. More remarkably, they found that this efficiency is only about one order of magnitude worse than the theoretical minimum set by the Landauer bound. In other words, evolution has done one hell of a job in optimizing the thermodynamic efficiency of biological processes.
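For intuition, the bound itself is easy to compute. The sketch below is a back-of-the-envelope estimate of my own; the ATP figures are rough textbook values, not numbers from the paper, but they show why translation can land within an order of magnitude or two of the Landauer limit:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # roughly body temperature, K

# Landauer bound: minimum energy to erase one bit at temperature T
landauer_J = k_B * T * math.log(2)   # ~3e-21 J

# Very rough cost of adding one amino acid during translation, assuming
# ~4 ATP/GTP equivalents per peptide bond and ~5e-20 J of free energy
# per hydrolysis under cellular conditions (approximate textbook values).
atp_J = 5e-20
translation_J = 4 * atp_J

print(f"Landauer bound: {landauer_J:.1e} J per bit")
print(f"Cost per amino acid: {translation_J:.1e} J")
print(f"Ratio: ~{translation_J / landauer_J:.0f}x")
```

Under these assumptions, adding one amino acid costs on the order of tens of times the Landauer minimum, i.e. within one to two orders of magnitude of the theoretical floor.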
But not all biological processes. Circling back to the thinking processes of Maxwell’s little demon, how does this efficiency compare to the efficiency of the human brain? Surprisingly, it turns out that neural processes like the firing of synapses are estimated to be much worse than protein translation and more comparable to the efficiency of supercomputers. At first glance, the human brain thus appears to be worse than other biological processes. However, this seemingly low computational efficiency of the brain must be compared to its complex structure and function. The brain weighs only about a fiftieth of the weight of an average human but it uses up 20% of the body’s energy. It might seem that we are simply not getting the biggest bang for our buck, with an energy-hungry brain providing low computational efficiency. What would explain this inefficiency and this paradox?
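The paradox can be put in rough numbers. Assuming a resting metabolic rate of about 100 watts (my assumption, not a figure from the text) and a crude order-of-magnitude guess at the rate of synaptic events, the energy per neural “operation” sits far above the Landauer floor:

```python
import math

k_B = 1.380649e-23               # Boltzmann constant, J/K
T = 310.0                        # body temperature, K
landauer_J = k_B * T * math.log(2)   # ~3e-21 J per bit erased

# The text's figures: the brain is ~1/50 of body mass but uses ~20%
# of the body's energy. With an assumed ~100 W resting metabolism:
brain_power_W = 0.20 * 100.0     # ~20 W

# Assume ~1e14 synapses active at ~1 Hz on average -- a crude
# order-of-magnitude guess, not a measurement.
events_per_second = 1e14
energy_per_event_J = brain_power_W / events_per_second

print(f"Energy per synaptic event: {energy_per_event_J:.0e} J")
print(f"Orders of magnitude above Landauer: "
      f"{math.log10(energy_per_event_J / landauer_J):.0f}")
```

Even with generous assumptions, each synaptic event lands many orders of magnitude above the thermodynamic minimum, which is the puzzle the next paragraph tries to explain.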
My guess is that the brain has been designed to be inefficient through a combination of evolutionary accident and design, and that efficiency is the wrong metric for gauging the performance of the brain. Efficiency is the wrong metric because thinking of the brain in purely digital terms is the wrong model. The brain arose through a series of modular inventions responding to new environments created by both biology and culture. We now know that thriving in these environments needed a combination of analog and digital functions; for instance, the nerve impulses controlling blood pressure are digital while the actual change in pressure is continuous and analog. It is likely that digital neuronal firing is built on an analog substrate of wet matter, and that higher-order analog functions could be emergent forms of digital neuronal firing. As early as the 1950s, von Neumann conjectured that we would need to model the brain as both analog and digital in order to understand it. Around the time that Bennett was working out the thermodynamics of computation, two mathematicians named Marian Pour-El and Ian Richards proved a very interesting theorem which showed that in certain cases there are numbers that are not computable with digital computers but are computable with analog processes; analog computers are thus more powerful in such cases.
If our brains are a combination of digital and analog, it’s very likely that they are this way so that they can span a much bigger range of computation. But this bigger range would come at the cost of inefficiency in the analog computation process. The small price of lower computational efficiency as measured by the Landauer bound would be outweighed by the much greater evolutionary benefits of performing complex calculations that allow us to farm, build cities, tell stranger from kin and develop technology. Essentially, the brain’s distance from the Landauer bound could be evidence for its analog nature. There is another interesting fact about analog computation, which is its greater error rate; digital computers took off precisely because they had low error rates. How does the brain function so well in spite of this relatively high error rate? Is the brain consolidating this error when we dream? And could we reduce this error rate by improving the brain’s efficiency? Would that make our brains better or worse at grasping the world?
From the origins of thermodynamics and Maxwell’s Demon to the fusion of thermodynamics with information processing, black holes, computation and biology, we have come a long way. The fusion of thermodynamics and computation with neuroscience seems to be just beginning, so for a young person starting out in the field the possibilities are exciting and limitless. General questions abound: How does the efficiency of the brain relate to its computational abilities? What might be the evolutionary origins of such abilities? What analogies between the processing of information in our memories and that in computers might we discover through this analysis? And finally, just as Shannon did for information, Hawking and Bekenstein did for black holes and Landauer and Bennett did for computation and biology, can we find a simple equation describing how the entropy of thought processes relates to simple neural parameters connected to memory, thinking, empathy and emotion? I do not know the answers to these questions, but I am hoping someone who is reading this will, and at the very least they will then be able to immortalize themselves by putting another simple formula describing the secrets of the universe on their tombstone.
Further reading:
  1. Charles Bennett – The Thermodynamics of Computation
  2. Seth Lloyd – Ultimate Physical Limits to Computation
  3. Freeman Dyson – Are brains analog or digital?
  4. George Dyson – Analogia: The Emergence of Technology Beyond Programmable Control (August 2020)
  5. Richard Feynman – The Feynman Lectures on Computation (Chapter 5)
  6. John von Neumann – The General and Logical Theory of Automata
First published on 3 Quarks Daily

    On free speech, crossing the Rubicon and the need to unite

    I woke up to some welcome news today, news that after an extended period of disappointment and disillusionment has left me feeling better than I have in a long time. Harper’s Magazine published an open letter signed by an eclectic blend of writers, political scientists, journalists and thinkers across the political spectrum, many of whom have been pillars of the liberal intellectual community for decades. In the letter, Noam Chomsky, Margaret Atwood, Salman Rushdie, Steven Pinker, Nicholas Christakis, Fareed Zakaria, Arlie Russell Hochschild and many others deplore the state into which liberal discourse has descended in recent years.

    "The free exchange of information and ideas, the lifeblood of a liberal society, is daily becoming more constricted. While we have come to expect this on the radical right, censoriousness is also spreading more widely in our culture: an intolerance of opposing views, a vogue for public shaming and ostracism, and the tendency to dissolve complex policy issues in a blinding moral certainty." 

    The entire letter is worth reading and takes aim at several acts by self-described liberals and Democrats over the years that have been attacks on the very values of free expression and debate they have professed to stand for. It takes to task institutions which are dealing out disproportionate punishments for minor infractions, if one can even call them that. It makes the seemingly obvious case that writers can only thrive when they are allowed to experiment and say controversial things – a whole string of writers ranging from Virginia Woolf and D. H. Lawrence to Nabokov and Franzen attests to this fact. Rushdie himself of course infamously had to go into hiding for several years after the fatwa. The writers of the letter cite dozens of cases of controversial speakers being disinvited from college campuses, professors being censured for citing “controversial” books like Greek classics, editorials being withdrawn from leading newspapers because of internal rebellion, and people’s livelihoods and reputations being threatened for simply tweeting about or referring to something that their detractors disliked. In most cases a small group of outraged people, usually on Twitter, was responsible for these actions.

    Most of this of course has been going on for years, even as those of us who believed in free speech without retaliation and diversity of viewpoints have watched with increasing dismay from the sidelines. Some of us have even been targets in the past, although we have not had to face the kind of retribution that other people did. And yet, compared to what has been happening this year, the last few years have seemed tame. I have to say that as much as my disillusionment has grown steadily over time, this year truly seems like the watershed, one that should squarely force us to take a stand.

    Let’s first understand that America in 2020 has made everyone’s job difficult: the country is now being led by a racist, ignorant child-president with dictatorial aspirations who calls the press the enemy of the people and whose minions take every chance they can to try to silence or threaten anyone who disagrees with them, who actively spread misinformation and lies, whose understanding of science and reason is non-existent, and who have been collectively responsible not just for the dismantling of critical public institutions like the EPA and the justice department but for orchestrating, through inaction, one of the deadliest public health crises in the history of the country that has killed hundreds of thousands. One would think that all of us who are opposed to this administration and their abandonment of the fundamental values on which this country has been founded would be utterly horrified and unified at this time.

    Sadly, the opposite has happened, and it’s why the Harper’s letter seems like a bright pinprick in a dark landscape to me. For an increasing portion of the self-professed liberal establishment, the answer to Trump has been to go crazy in the other direction. Until this year, I generally rejected the slippery slope argument – the worry that even those with whom I strongly disagreed would keep sliding down the slope – because I assumed they would stop at some reasonable juncture. Sadly, I no longer think that way. Three examples among many will suffice, and I think all three are emblematic of larger trends:

    First: After the horrific murder of George Floyd, while we were standing in solidarity with the black community and condemning the use of excessive force by police departments, peaceful protests across the country turned into violent demonstrations accompanied by looting. Most of the protestors were peaceful, so I thought that my fellow liberals would cleanly draw a line and denounce the looters while supporting the protests. But this seldom happened; both on my private social media accounts as well as publicly, people started excusing the looting as a justified act of desperation. Worse still, they started to recruit cherry-picked historical examples of civil rights leaders to make their case, including a speech by MLK Jr. in which he seems to justify violence as a desperate act before making it very clear that it is not the right way of going about things. But even if you hadn’t heard the entire speech, to hold up someone who is literally the biggest symbol of non-violent protest in modern times, along with Mahatma Gandhi, as a spokesperson for violent protests is bizarre to say the least.

    The ahistorical anomalies continued. One of my favorites was a tweet by Charles Blow of the New York Times, who justified the looting by comparing it with the Boston Tea Party. I find it hard to believe that Blow doesn’t know what happened after the patriots threw the tea into the water – not only did they strip and castigate a fellow Son of Liberty after they found out that he had secretly pocketed some tea, but they came back later and replaced the lock of the ship they had broken. Unlike the looters, the Boston patriots had a profound respect for private property. In fact, it was precisely British insults to private property, by way of quartering soldiers in private residences, that served as a spark for the revolution. In addition, as Richard Rothstein painstakingly documents in his superb book "The Color of Law", laws were explicitly enacted to deny African-Americans and other minorities access to private housing for decades, so it's ironic to see mobs destroying private property in their own communities and crippling the livelihoods of folks - many of whom are poor immigrants with small businesses - who had nothing to do with the cause of the protests.

    But all these distinctions were lost, especially at the New York Times, which tied itself into a real knot by publishing an op-ed by Senator Tom Cotton. In the last few years Cotton has emerged as one of the most racist and xenophobic of all Trump supporters, and I detest him. Cotton wrote a biased and flawed op-ed that called for the army to step in to pacify cities where looting was taking place. Knowing his past this was a convenient position for him and I completely disagreed with it; I did think there needed to be some way for law and order to be imposed, but the last thing we need on top of an already militarized police force is the actual military. Nevertheless, it turned out that a fair percentage of the country agreed with him, including a fair share of Democrats, and Cotton is a sitting United States senator after all, so as an elected public official his views needed to be known – not because they were right but because they were relevant. I suddenly felt newfound respect for the New York Times for airing dissenting views that would allow their readers to get out of their echo chambers and take a stroll in a foreign country, but it didn't last long. As we now know, there was a virtual coup inside the paper and the op-ed editor resigned. As Andrew Sullivan said in a must-read piece, it is deeply troubling when an ideological faction – any ideological faction – can hold a news source hostage and force it to publish only certain viewpoints conducive to its own thinking.

    A similar reaction against what were entirely reasonable responses to the looting spilled over into other institutions and individuals’ lives. In perhaps the most bizarre example, David Shor who is an analyst at a New York City firm - and whose Twitter profile literally includes the phrase “I try to elect Democrats” - was fired for tweeting a study by a black professor at Princeton that said that non-violent protests are more effective than violent ones. Just chew on that a bit: an individual was fired merely for tweeting, and not just tweeting anything but tweeting something that MLK Jr. would have heartily approved of. When people actually face retribution for pointing out that non-violence works better than violence, you do feel like you are in a mirror universe.

    Second: The statue controversy. The killing of Floyd set off a wave of protests that extended to many other areas, some fueled by feuds brewing for years – in this particular case, for more than a hundred years. I am all for the removal of Confederate statues; there is nothing redeeming in them, especially since many of them were put up by white supremacists decades after the war ended. While the bigger issue of acknowledging memory and history is complicated, the latest ray of light for me came from Eliot Cohen, a dean at Johns Hopkins, who cut through the convoluted thicket with a simple rule – as clear a rule as any, in my opinion – for weighing historical figures in the scales of justice. Cohen asked those demanding that statues be taken down to ask whether the thing they were criticizing a person for was the most important thing he or she was known for. This question immediately creates a seismic divide between Confederates and Founding Fathers. If the Civil War had not happened, Robert E. Lee would have been a better than average soldier who fought with distinction during the Mexican-American War. If Thomas Jefferson had never owned and abused slaves and fathered illegitimate children with Sally Hemings, he would still have been the father of religious freedom, the Louisiana Purchase, the University of Virginia, scientific inquiry and the Declaration of Independence – a document that, even if it was not applied universally, had such abstract power that it kept being cited by figures as diverse as Abraham Lincoln and Ho Chi Minh, not to mention Frederick Douglass and MLK Jr. Take away the slavery and the hypocrisy, and Jefferson would still have done these great things. Washington is even more unimpeachable, since he led the country to freedom during the war and, unlike Jefferson, freed his slaves. The fact that these were flawed men who still did great things is hardly a novel revelation.

    Sadly, you know that your side is losing the war of ideas when it starts handing propaganda victories to the side you despise on a platter. Three years ago, in the context of a Lee statue that was going to be taken down after that terrible anti-Semitic Charlottesville rally by white supremacists, Trump made a loathsome remark about there being “fine people” on both sides and also asked a journalist whether, if it was Lee today, it would be Jefferson or Washington next. I of course dismissed Trump’s remark as racist and ignorant; he would not be able to recite the Declaration of Independence if it came wafting down at him in a MAGA hat. But now I am horrified that liberals are providing him with ample ammunition by validating his words. A protest in San Francisco toppled a statue of Ulysses S. Grant – literally the man who defeated the Confederacy and destroyed the first KKK – and defaced a statue of Cervantes, a man who as far as we know did not write “Don Quixote” while relaxing from a day’s fighting for the Confederacy or abusing slaves. University of Wisconsin students recently asked for a statue of Lincoln to be removed because he had once said some uncomplimentary words about black people. And, since it was just a matter of time, the paper of record just published an op-ed calling for the Jefferson Memorial in Washington to be taken down. Three years ago, if you had asked me whether my fellow liberals would go from Robert E. Lee to Jefferson and Washington and Grant so quickly, I would have expressed deep skepticism. But here we are, and based on recent events it does not seem paranoid at all to ask: if Washington statues are next, will streets and schools named after Washington follow? How about statues of Plato and Aristotle, who regarded slavery as a natural state of man? And don’t even get me started on Gandhi, who said some very unflattering words about Africans.
    The coefficient of friction on the slippery slope is rapidly going to zero.

    Third: The latest item in the parade signifying a spiraling descent into intolerance – a call to bar Steven Pinker from the Linguistic Society of America’s list of distinguished fellows and media experts. This call would be laughable if it weren’t emblematic of a deeper trend. My fellow liberal Scott Aaronson has already laid out the absurdity of the effort in his post, not least because Pinker has championed liberalism, evidence-based inquiry and rational thought throughout his long career. The depressing thing is that the tactics are not new: guilt by association, cherry-picking, an inability to judge someone by the totality of their behavior or contributions, no perception of gray in an argument, and so on. The writers don’t like the fact that Pinker tweeted a study showing that individual police encounters with black people aren’t especially likely to turn violent (but that there are more encounters to begin with, so more of them end violently), tweeted that a horrific fatal attack by a disgruntled man at UCSB on women did not imply higher rates of violence against women in general, and wrote in his widely-praised book “The Better Angels of our Nature” that a seemingly mild-mannered man in New York City shockingly turned out to be violent. Pinker has never denied the suffering of individuals but has simply pointed out that that suffering should not blind us to progress at large. As hard as it might be to believe, liberals are punishing someone who says that the world has at large become a better place because we have embraced liberal values. Again, this feels like we have stepped into a surreal mirror universe.

    As biologist Jerry Coyne has explained on his blog, none of these accusations hold water and the protestors are on exceedingly thin ice. What is noteworthy is the by now all-too-common tactic of accusation by selective misrepresentation – the detailed (and disastrously incompetent) combing through of every tweet and every “like” from Pinker for evidence of his awfulness as a human being and his affront to the orthodoxy. If this does not seem like a job for an incompetent and yet obsessive Orwellian bureaucrat or a member of the NKVD during Stalin’s show trials, I don’t know what does (as Robert Conquest described in his famous account of Stalin’s purges, going through someone’s entire life history with a fine-toothed comb and holding up even the slightest criticism of the dear leader or disagreement with party orthodoxy was almost de rigueur for the Soviets and the Stasi). Perhaps completely unsurprisingly, the doyen of American linguistics, Noam Chomsky, refused to sign the LSA letter and instead signed the Harper’s one; Chomsky has consistently been an exemplary supporter of free speech and has famously pointed out that if you support only free speech that you like, you are no different from Goebbels, who was also a big fan of speech he liked. But Pinker’s example again goes to show that the slippery slope argument is no longer a fiction or a strawman. If we went from Milo Yiannopoulos to Steven Pinker in three years, it simply does not feel paranoid to think that we could get to a very troubling place in three more.

    The whole development of course is very sad, certainly a tragedy but rapidly approaching a farce. Liberals and Democrats were supposed to be the party of free speech, intelligent dialogue, tolerance and viewpoint diversity. The Republican Party, meanwhile, is not a political party anymore but a “radical insurgency”, as Chomsky puts it. It is a blot not just on true conservatism but on common sense and decency. The reason I feel particularly concerned this year is that I have always felt that, with Republicans having descended into autocracy and madness, liberal Democrats are the one and only thing standing between democracy and totalitarianism in this country. I have been disillusioned with their abandonment of unions and their disparaging of “middle America” for a long time, but I still thought they upheld traditional, age-old liberal values. With Republicans not even making a pretense of doing so, one would think the Democrats have a golden opportunity to pick up the baton. But instead you have a party that has embraced diversity provided it’s of the kind they like, that allows for no nuance or sliding scale of disagreement, that accuses people of being some kind of “ist” with the spirit of the Inquisition, and that refuses to see individuals as individuals rather than as members of their favorite or despised groups. If Democrats give up on us, what other influential group can save the country?

    Quite apart from how this behavior abandons the values that have made this country a great one, it is a disastrous political strategy. Currently, the number one goal of any American citizen with any amount of decency and intelligence should be to hand Donald Trump and his unscientific, racist, ignorant administration the greatest defeat in American electoral history. Almost nothing else is as important this year. There are sadly still people who are on the fence – people who cannot let go of the Republican Party for one reason or another – but especially in the last few months one hopes that enough of them have become disillusioned with the Trump administration’s utter incompetence, casual cruelty and dog whistle signaling to consider voting for the other guy. The Democrats should be welcoming these people into their ranks with open arms. Would it be harder or easier for fence-sitters to vote Democrat when they see self-proclaimed Democrats toppling random statues, unleashing Twitter mobs on people they disagree with, trying to destroy their careers, and in general trying to disparage or eliminate from the sphere of discourse anyone who may think slightly differently?

    I came to this country as an immigrant, and while several reasons brought me here, just as they do all immigrants, science and technology and freedom of speech were the two things that I loved and continue to love most about the United States. When I was growing up in India, my father, who was an economics professor at a well-known college, used to tell me how he taught econometrics classes during the Indian Emergency of the 1970s, when the Constitution was suspended by the Prime Minister, Indira Gandhi. He told me how he used to occasionally see a government agent standing at the back of his classes, taking notes, making sure he was not saying something subversive. It would be amusing to imagine that the partial differential equations used in econometrics were regarded as subversive (and that the agents understood them), but it was nonetheless a sobering experience. It would have been far worse had my father lived in Cambodia during the same time. While it’s to India’s democratic credit that it escaped from that hole, even today much of freedom of speech in India, while enshrined in the Constitution, exists only on paper. As several recent incidents have shown, you can get in trouble if you criticize the government, and in fact you can get in trouble even with your fellow citizens, who may rat you out and file lawsuits against you. Even in Britain you have libel laws, and of course free speech is non-existent in countries like Saudi Arabia. In my experience, Americans who haven’t lived abroad often don’t appreciate how special their country is when it comes to free speech. Sadly, as the current situation shows, we shouldn’t take it for granted.

    When I complain about problems with free speech in this country, fellow liberals tell me – as if I have never heard of the US Constitution – that the First Amendment only means that the government cannot arrest you for saying something incendiary. But this point misses the mark, since people can stifle each other’s ideas as thoroughly as the government can, and while informal censure has been around since we were hunter-gatherers, when it gets out of hand, as it seems to these days, one can see a pall of conformity and a lack of diversity descending over the country. This also puts in a dim light the retort that there can be no speech without consequences – as David Shor’s example shows, if those “consequences” include getting fired or booted out of professional organizations for almost anything you say, they are nearly as draconian as government oppression and should be unacceptable. As he did with many things, John Stuart Mill said it best in his “On Liberty”:

    “Protection, therefore, against the tyranny of the magistrate is not enough: there needs protection also against the tyranny of the prevailing opinion and feeling; against the tendency of society to impose, by other means than civil penalties, its own ideas and practices as rules of conduct on those who dissent from them; to fetter the development, and, if possible, prevent the formation, of any individuality not in harmony with its ways, and compel all characters to fashion themselves upon the model of its own.”

    It’s also worth remembering that there is much less distinction between “the people” and “the government” than we think, since today’s illiberal anti-free-speech activists are tomorrow’s politicians, community leaders, writers and corporate leaders. And we would be laboring under a truly great illusion if we thought that these supposedly well-intentioned activists cannot become repressive; anyone can become repressive if given access to power. The ultimate question is not whether we want a government which does not tread on our freedom – we settled that question in 1787 – it’s about what kind of country we want to live in: one in which ideas, even unpleasant ones, are confronted with other ideas in a sphere of spirited public debate, or one in which everyone boringly thinks the same thing, there is no opportunity for dissent, nuanced thinking is thrown out of the window and anybody who challenges the orthodoxy is eliminated from public discourse one way or another? The latter are definitely not the values that made this country the envy of the world, nor the country its founding ideals envisaged.

    So what should those of us who squarely believe in free speech, viewpoint diversity, dialogue and good faith debate do? This year it has become clear that we should take a stand, and as Scott indicates, if supposedly traditional, plain vanilla liberal values like speech without harsh retaliation – values which go back to the founding of the country and beyond – are suddenly “radical” values that are increasingly the province of a narrow minority, so be it: we should not only embrace these radical values with alacrity but be unhesitant and full-throated in their defense. The signers of the Harper’s letter have set an excellent precedent, and they are saying something very simple – if you want to call yourself a liberal, act liberal.

    Von Neumann In 1955 And 2020: Musings Of A Cheerful Pessimist On Technological Survival

    Johnny von Neumann enjoying some of the lighter aspects of technology. The cap lights up when its wearer blows into the tube.

    “All experience shows that even smaller technological changes than those now in the cards profoundly transform political and social relationships. Experience also shows that these transformations are not a priori predictable and that most contemporary “first guesses” concerning them are wrong.” – John von Neumann
    Is the coronavirus crisis political or technological? All present analysis would seem to say that this pandemic was a result of gross political incompetence, lack of preparedness and impulsive responses by world leaders and governments. But this view would be narrow because it would privilege the proximate cause over the ultimate one. The true, deep cause underlying the pandemic is technological. The coronavirus crisis arose from a hyperconnected world that made human reaction times much slower than global communication and the transport of physical goods and people across international borders. For all our skill in creating these technologies, we did not equip ourselves to manage the network effects and sudden failures they created in social, economic and political systems. An even older technology, the transfer of genetic information between disparate species, was what enabled the whole crisis in the first place.
    This privileging of political forces over technological ones is typical of the mistakes that we often make in seeking the root cause of problems. Political causes, greatly amplified by the twenty-four-hour news cycle and social media, may loom large and even be important in the short term, but there is little doubt that the slow but sure grind of technological change, penetrating deeper and deeper into social and individual choices, will be responsible for most of the important transformations we face during our lifetimes and beyond. On scales of a hundred to five hundred years, it is science and technology rather than any political or social event that cause the biggest changes in the fortunes of nations and individuals: as Richard Feynman once put it, a hundred years from now the American Civil War will pale into provincial insignificance compared to that other development from the 1860s – the crafting of the basic equations of electromagnetism by James Clerk Maxwell. The former led to a new social contract for the United States; the latter underpins all of modern civilization – including politics, war and peace.
    The question, therefore, is not whether we can survive this or that political party or president. The question is, can we survive technology? In 1955, John von Neumann wrote a thought-provoking article titled “Can We Survive Technology?” in Fortune magazine that put this question in the context of the technology of the times. The essay was influenced by historical context – a great, terrible world war had ended just ten years earlier – and by von Neumann’s own background and interests. But it also presents original and very general observations that are most interesting to analyze in the context of our present times. By then Johnny, as friends and even casual acquaintances called him, was already regarded as the fastest and most wide-ranging thinker alive and had carved his name in history as a mathematician, polymath, physicist and military advisor of the very highest rank. Sadly, he was only two years away from the cancer that would kill him at the young age of 53. He was also blessed – or cursed – with a remarkably prescient yet cheerful and ironic pessimism that enabled him to boldly look ahead into future world events; already in the 1930s he had predicted the major determinants of a potential world war and its winners and losers. Along with his seminal contributions to game theory, pure and applied mathematics, nuclear weapons design and quantum mechanics, his work on computing and automata had placed him in the front ranks of soothsayers. And like all good soothsayers, he was sometimes wrong.

    Copy of the June, 1955 issue of Fortune magazine (from the author’s library)

    Perhaps it’s pertinent to quote a paragraph from the last part of Johnny’s article because it lays bare the central thesis of his philosophy in stark terms.
    “All experience shows that even smaller technological changes than those now in the cards profoundly transform political and social relationships. Experience also shows that these transformations are not a priori predictable and that most contemporary “first guesses” concerning them are wrong. For all these reasons, one should take neither present difficulties nor presently proposed reforms too seriously.”
    Von Neumann starts by pointing to what he saw as the major challenge posed by the technological revolution of the previous half century, a revolution that saw the rise of radio, television, aviation, submarines, antibiotics, radar and nuclear weapons among other things. He had already seen what military technology could do to millions of people, incinerating them in a heartbeat and reducing their cities and fields to rubble, so his musings need to be understood in this context.
    “In the first half of this century the accelerating industrial revolution encountered an absolute limitation—not on technological progress as such but on an essential safety factor. This safety factor, which had permitted the industrial revolution to roll on from the mid-eighteenth to the early twentieth century, was essentially a matter of geographical and political Lebensraum: an ever broader geographical scope for technological activities, combined with an ever-broader political integration of the world. Within this expanding framework it was possible to accommodate the major tensions created by technological progress. Now this safety mechanism is being sharply inhibited; literally and figuratively, we are running out of room. At long last, we begin to feel the effects of the finite, actual size of the earth in a critical way.”
    Let’s contrast this scenario with the last fifty years, which were also a period of extraordinary technological development, mainly in communications technologies and in the nature of work and knowledge engendered by the Internet. Just as in 1955, we are “running out of room” and feeling the effects of the “finite, actual size of the earth in a critical way”, albeit through our own novel incarnations. The Internet has suddenly brought people together and made the sphere of interaction crowded. We were naïve in thinking that this intimacy would engender understanding and empathy; as we realized quite quickly, it tore us apart instead by cloistering us into echo chambers that we hermetically sealed from others through social disapproval and technological means. But as Johnny rightly notes, this crisis is scarcely a result of the specific technology involved; rather, “it is inherent in technology’s relation to geography on the one hand and to political organization on the other.”

    Von Neumann and Oppenheimer in front of the Institute for Advanced Study computer in Princeton

    The three major technological concerns of Johnny’s time were computing, energy production and weather control. That last topic might seem like an odd addition, but it was foremost on Johnny’s mind as a major application of computing. Climate was of special interest to him because it is characteristic of complex systems: governed by non-linear differential equations and multifactorial interactions that are very hard for human beings to solve with pencil and paper. Scientists during World War 2 had also become finely attuned to the need for understanding the weather; this need had become apparent during major operations like the invasion of Normandy, where the lives of hundreds of thousands of soldiers and civilians depended on day-to-day weather forecasts. It was precisely for understanding complex systems like the weather that Johnny and his associates made such major contributions to building some of the first general-purpose computers employing the stored-program concept, first at the University of Pennsylvania and then at the Institute for Advanced Study in Princeton.
    Johnny had a major interest in predicting the weather and then controlling it. He was also one of the first scientists to see that increased production of carbon dioxide would have major effects on the climate. He was well aware of the nature of feedback systems and analyzed, among other things, the impact of solar radiation and ice changes on the earth’s surface. He understood that both these factors are subject to delicate balances, and that human production of carbon dioxide might well upset or override those balances. But Johnny’s main interest was not simply in predicting the weather but in controlling it. In his essay he talks about cloud seeding and rain making, and about modulating the reflectivity of ice to raise or lower temperatures. He clearly understood the monumental impact, exceeding the effects of even nuclear war, that weather prediction and control might have on human civilization:
    “There is no need to detail what such things would mean to agriculture or, indeed, to all phases of human, animal, and plant ecology. What power over our environment, over all nature, is implied! Such actions would be more directly and truly worldwide than recent or, presumably, future wars, or than the economy at any time. Extensive human intervention would deeply affect the atmosphere’s general circulation, which depends on the earth’s rotation and intensive solar heating of the tropics. Measures in the arctic may control the weather in temperate regions, or measures in one temperate region critically affect another, one quarter around the globe. All this will merge each nation’s affairs with those of every other, more thoroughly than the threat of a nuclear or any other war may already have done.”
    Of all the topics that Johnny discusses, this is the only one which at first sight does not seem to have come to pass in terms of major developments. The reasons are twofold. First, Johnny did not know about chaos in dynamical systems, which makes the accurate long-term prediction of climate very difficult. Of course, you don’t always need to understand a system well in order to manipulate it by trial and error. This is where the second reason, involving political and social will, comes into play. Johnny’s prediction that carbon dioxide would have a major impact on the climate has been well validated, although the precise effects remain murky. World opinion in general has shied away from climate-control experiments, but given the potentially catastrophic effects that CO2 might have on the food supply, immigration, tree cover and biodiversity in general, it is likely that the governments of the world will be pressed into action by their citizens to at least try to mitigate the impact of climate change using technology. Although this prediction of Johnny’s now seems quaint and outdated, my feeling is that his analysis was actually so far ahead of its time that we will soon see it discussed, debated and put into action, perhaps even during my lifetime. In saying this I remember President Kennedy’s words: “Our problems are man-made; therefore, they can be solved by man.”
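    The chaos that von Neumann could not have known about is easy to demonstrate with a minimal sketch (my illustration, not from his essay): the logistic map, a textbook chaotic system. Two trajectories that start almost identically diverge exponentially, which is why long-range forecasts of chaotic systems like the weather degrade no matter how accurate the model or the instruments:

```python
def logistic_step(x, r=4.0):
    """One iteration of the logistic map x -> r*x*(1-x); r=4 is fully chaotic."""
    return r * x * (1.0 - x)

def steps_to_decorrelate(x0, eps, tol=0.5, max_steps=1000):
    """Iterations until two trajectories started eps apart differ by tol."""
    x, y = x0, x0 + eps
    for step in range(1, max_steps + 1):
        x, y = logistic_step(x), logistic_step(y)
        if abs(x - y) > tol:
            return step
    return max_steps

# Shrinking the initial measurement error by a factor of 10^5 buys only a
# handful of extra steps of predictability (the error roughly doubles each
# step, and log2(10^5) is only about 17).
print(steps_to_decorrelate(0.2, 1e-5))
print(steps_to_decorrelate(0.2, 1e-10))
```

The lesson, due to Lorenz in the 1960s, is that better data postpones the forecasting horizon only logarithmically; it never abolishes it.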
    Like many scientists of his time, Johnny was optimistic about nuclear power, seeing limitless possibilities for it, perhaps even making it “too cheap to meter”. That prediction seems to have failed, along with similar predictions by others, but the failure has less to do with the intrinsic nature of nuclear power and more with the social and political structures that hampered its development: imposing onerous regulatory burdens on plant construction, spreading unrealistic fears about radiation and not allowing entrepreneurs to experiment with reactor designs through trial and error the way they did in biotechnology and computing. Just as with weather prediction, I believe that Johnny’s vision for the future of nuclear power will become reality once world governments and their citizens realize that nuclear power provides one of the best escapes from the dual trap of low-energy alternative fuels and high-energy but politically and environmentally destructive fossil fuels. Already we are seeing a resurgence of new-generation nuclear reactors.
    One of Johnny’s fears about nuclear weapons was that our reaction times would be too slow to counter even minor developments in the field. He says,
    “Today there is every reason to fear that even minor inventions and feints in the field of nuclear weapons can be decisive in less time than would be required to devise specific countermeasures. Soon existing nations will be as unstable in war as a nation the size of Manhattan Island would have been in a contest fought with the weapons of 1900.”
    I already mentioned at the beginning how rapid advances in communications and transport systems left us woefully unprepared for the coronavirus. But there is another very important sphere of human activity, perhaps unanticipated by Johnny, that has also left us impoverished in terms of countermeasures against even minor “improvements”. This is the field of electronic commerce and financial trading, where differences of nanoseconds in the transmission of price signals can make or break the fortunes of companies. More importantly, they can make or break the fortunes of millions of ancillary economic units and individuals tied to these institutions through a complex web of models and dependencies whose fault lines we barely understand – a gulf of ignorance with direct causal connections to the global financial crisis of 2008. Sadly, there is no evidence that we understand these dependencies any better now, or are better prepared to employ countermeasures against sundry developments in the layering and modeling of financial instruments that impact millions.
    Cybersecurity is another field where even minor improvements in the ability to control, even momentarily, the complex computer networks of an enemy country can have network effects that surpass the initial perturbation and impact large populations. Ironically, the very dependence of developed countries on the state-of-the-art computer networks that govern the daily lives of their citizens has made them vulnerable to attacks; the capacity of these techno-bureaucratic systems to efficiently and globally ward off foreign and domestic attacks has not kept pace with their creation. Presumably, defense and high-value corporate systems in countries like the United States are resilient enough not to be crippled by such attacks, but as the 2016 election showed, there is little confidence that this is actually the case. Moreover, these systems need to be not just resilient but antifragile, able to counteract the vastly amplified effects of small initial jolts with maximum efficiency. As critical medical, transport and financial infrastructure increasingly ties its fate to such technology, the ability to respond with countermeasures in equal or less time than the threat becomes key.
    Automation is another field to which Johnny made major contributions through computing. While working on the atomic bomb at Los Alamos, he had observed human “computers” performing repetitive calculations related to the complex hydrodynamics, radiation flow and materials behavior in a nuclear weapon as it blew apart in a millionth of a second. It was apparent to him not only that electronic computers would revolutionize this process of repetitive calculation, but that they would have to employ stored programs if they were not to be crippled by the bottleneck of being reconfigured for every task.
    “Thanks to simplified forms of automatic or semi-automatic control, the efficiency of some important branches of industry has increased considerably during recent decades. It is therefore to be expected that the considerably elaborated newer forms, now becoming increasingly available, will effect much more along these lines. Fundamentally, improvements in control are really improvements in communicating information within an organization or mechanism. The sum total of improvements in this field is explosive.”
    The explosive nature of the improvements in automation again comes from great gains in economies of scale combined with the non-linear effects of chaining together automated protocols, which reach a critical mass by suddenly freeing large parts of engineering and commercial processes from human intervention. Strangely, Johnny did not foresee the seismic effects automation would have in displacing human labor and causing significant political shifts both within and across nations. For insight into this problem, perhaps we should look to a book written by Johnny’s friend and contemporary, the MIT mathematician Norbert Wiener. In 1950 Wiener had written “The Human Use of Human Beings”, in which he extolled automation but warned against machines breaking free from the dictates of their human masters and controlling us instead.

    “Progress imposes not only new possibilities for the future but new restrictions.” – Norbert Wiener

    Wiener’s prediction has already come true, though likely not in the way he meant or foresaw. Self-replicating pieces of code now travel through cyberspace looking for patterns in human behavior, which they reinforce by modifying and spreading themselves through the cyber-human interface. There is no better example of this influence than the ubiquity of social media and the virtual addiction most of us display toward these platforms. Here, the self-replicating pieces of code first observe and then hijack the stimulus-response networks in our brains, looking for dopamine-inducing reactions and then mutating and fine-tuning themselves to maximize those reactions (the colloquial phrase “maximizing clicks”, while pithy, does not begin to capture such multilayered phenomena).
    How do we ward off such behavior-hijacking technology, and more generally technology with destructive effects? Here Johnny is pessimistic, for several reasons. The primary reason is that, as history shows, separating “good” from “bad” technology is a fool’s errand at best. Johnny gives the example of classified military technology, which is often impossible to separate from open civilian technology because of its dual-use nature. “Technology – like science – is neutral all through, providing only means of control applicable to any purpose, indifferent to all…A separation into useful and harmful subjects in any technological sphere would probably diffuse into nothing in a decade.” Any number of examples, from chemistry developed for both fertilizer and explosives to atomic fission developed for both weapons and reactors, underscores the truth of this statement.
    Technology, and more fundamentally science, are indeed indifferent, mainly because, in Robert Oppenheimer’s words, “The deep things in science are not discovered because they are useful; they are discovered because it was possible to discover them.” Once prehistoric man found a flint rock, striking it against another to make fire and using it to smash open the skull of a competitor were both inevitable actions, completely inseparable from each other. It was only our unnatural state of civilization, developed during an eye blink of time as far as geological and biological evolution are concerned, that taught man to use the rock for the former purpose instead of the latter. These teachings came from the social and political structures that men and women built to ensure harmony; there was exactly zero information in the basic technology of the rock itself that would have allowed us to make the distinction. As Johnny notes, strictly enforcing that distinction could only come from obliterating the technology in the first place – a neat example of having to kill something in order to save it.
    However, the bigger and deeper problem that Johnny identified is that technology has an inexorable, Faustian attraction that creates an unholy meld between its utility and volatility. This is because:
    “Whatever one feels inclined to do, one decisive trait must be considered: the very techniques that create the dangers and the instabilities are in themselves useful, or closely related to the useful. In fact, the more useful they could be, the more unstabilizing their effects can also be. It is not a particular perverse destructiveness of one particular invention that creates danger. Technological power, technological efficiency as such, is an ambivalent achievement. Its danger is intrinsic… The crisis will not be resolved by inhibiting this or that apparently particularly obnoxious form of technology”
    “The more useful they could be, the more unstabilizing their effects can also be.” This statement perfectly captures the Gordian knot in which technologies like social media have bound us today. Their usefulness is intrinsically linked to the instability they cause, whether that instability involves an addictive hollowing out of our personal time or the political echo chambers and biases that evolve with these platforms. As such, technology is indeed ambivalent, and perhaps the people who will thrive best in an exceedingly technological world are those who can comfortably ride the wave of this ambivalence while at least marginally pushing it in a productive direction. Nor can people harbor the fond hope that, even from a strictly political and social viewpoint, demonstrating that a technology such as a social media platform is toxic and divisive would lead to its decline. When even a war that clearly demonstrated the ability of technology to obliterate millions could do little to stem further development in weaponry, it is scarcely possible to believe that the peacetime problems created by Facebook or Twitter will do anything to stave off what fundamentally makes them tick. And yet, as happened with weaponry, there might be a path forward in which we make these destructive technologies more humane and more conditional, with a curious mix of centralized and citizen-enabled control that curbs their worst excesses.
    Quite apart from its emotional and technical aspects, separating the useful effects of technology from the destructive ones and trying to isolate one from the other might also be a moral mistake. This becomes apparent when one realizes that almost all technology, with its roots in science, comes from the basic human urge to seek, discover, build, find and share; the word technology itself comes from the Greek ‘techne’, meaning the skill or manner in which something is gained, and ‘logos’, meaning the words through which such knowledge is expressed. Opposing this urge would be opposing a very basic human faculty.
    “I believe, most importantly, prohibition of technology (invention and development, which are hardly separable from underlying scientific inquiry), is contrary to the whole ethos of the industrial age. It is irreconcilable with a major mode of intellectuality as our age understands it. It is hard to imagine such a restraint successfully imposed in our civilization.”
    What safeguards remain, then, against the rapid progression and unpredictable nature of the technologies described above? As mundane as it sounds, course correction through small, incremental, opportunistic steps might be the only productive path. Just like the infinitesimal steps of thermodynamic work in an idealized Carnot engine, one hopes that small course-corrective steps will allow us to gradually turn the system back toward equilibrium. As Johnny put it, “Under present conditions, it is unreasonable to expect a novel cure-all.”

    The cotton gin

    I think back again to Johnny’s central thesis stated at the beginning of this essay – “All experience shows that even smaller technological changes than those now in the cards profoundly transform political and social relationships” – and I think of Eli Whitney’s cotton gin. By the end of the 18th century many believed that slavery was a dying institution; the efficiency of slaves picking cotton was so low that one could scarcely imagine slavery serving as the foundation of the American economy. Whitney’s cotton gin, invented in 1794, changed all that: whereas previously it took a single slave about ten hours to separate and clean a pound of cotton, two or three slaves using the machine could turn out fifty pounds of cleaned cotton in a day. Whitney’s invention was classic dual use: it led to transformative gains in the production of a staple crop, but it was other human beings, not the machine, who decided that these gains would be built on the backs of enslaved human beings often treated worse than animals. The cotton gin set America on the path to becoming an economic powerhouse while consigning a fair share of its population to not being treated even as citizens. Clearly, the reaction time built into the social institutions of the day could not keep pace with the creation of a seemingly mundane brush-like component that separated cotton fibers from their seeds.
    What can we do in the face of such inevitable, unpredictable technological progression that catches us off guard? If the answer were really simple, we would have discovered it with the metronomic regularity of newly invented technology. But Johnny’s musings end with hope – hope provided by the same history that tells us that stopping technology is tantamount to trying to stop the air from flowing.
    “Can we produce the required adjustments with the necessary speed? The most hopeful answer is that the human species has been subjected to similar tests before and seems to have a congenital ability to come through, after varying amounts of trouble. To ask in advance for a complete recipe would be unreasonable. We can specify only the human qualities required: patience, flexibility, intelligence.”
    From limiting the spread of nuclear weapons to reducing human discrimination and trafficking to curbing the worst of greenhouse gas emissions and deforestation, while technology has shown nothing but a ceaseless march into the future, shared morality has been a powerful if sporadic force for resurrecting the better angels of our nature against our worst instincts. The social institutions supporting slavery did not reform until a cruel and widespread war forced their hand. But I wonder about counterfactual history. I wonder whether, as gains in agricultural production kept increasing – first with other mechanical harvesters and beasts of burden, and finally with powerful electric implements – the reliance on humans as a source of forced labor would have been weakened and finally done away with by the moral zeitgeist. The great irony would have been that the injustice one machine (the cotton gin) created might have met its end at the hands of another (the cotton mill created by mass electrification). This counterfactual imagining of history would nonetheless be consistent with the relentless progress of technology, which has indeed made life easier and brought dignity to billions whose existence was previously mired in poverty and bondage. Sometimes the existence of something is more important than all the reasons you can think of for justifying its existence. We can continue to hope that the human race will continue to progress as it has before: with patience, flexibility, intelligence.