On the absurdity of classified information

The problem with keeping information secret is not that secrecy is never warranted; it's that it is always easier to classify something as secret than not, and this leads not just to the withholding of potentially important information from the public but also to all kinds of absurd contradictions. Imagine a bored low-level bureaucrat whose job is to stamp national security documents as secret. Rather than take the trouble to go through every line of a document and decide whether to redact it, it is always going to be easier for him to mark the whole thing as secret and stash it away in the archives.

One of the absurdities of secrecy was illustrated by Freeman Dyson a few years ago in a review of the book "David and Goliath" in the New York Review of Books. Dyson discussed a report called Project Oregon Trail, written by academic historians during the Vietnam War, which analyzed the results of 'asymmetric wars' in which a larger adversary fights a smaller one. The report focused especially on the kinds of conflicts in which a large colonial or imperial power is fighting a small country. It concluded that when the imperial power spends most of its resources on military means, it usually loses. When it spends most of its resources on civilian measures intended to win the hearts and minds of people, it usually wins. The Vietnam War, which was then not going well, was clearly going down the first road; the disastrous Iraq War went down the same road forty years later, as did any number of attempts by large powers to defeat small insurgent groups using purely military means. The powers in charge did not want to hear the inconvenient truth that they were adopting the wrong strategy.

There was another report at the same time, published by the RAND Corporation, which analyzed potential conflicts between the two countries using game theory. This was a valuable study which was legitimately classified as secret. But to avoid embarrassment and prevent the results of Project Oregon Trail from being released to the public, the government bound the two reports together and stamped both of them as secret. The valuable and purely historical Project Oregon Trail is still secret fifty years later. Dyson appealed to the government to release the historical study. I submitted a FOIA request for it more than two years ago, and I am still waiting.

Even legitimate secrecy can sometimes make it embarrassingly easy for someone with enough time and effort to figure out exactly what you are hiding. The nuclear weapons expert Carey Sublette put together a detailed archive of supposedly classified data on nuclear weapons by comparing copies of the same document released by different sources, each of which had exposed or redacted sections according to its own whims; what one source blacked out, another disclosed. By collating a given document from multiple sources, Sublette was essentially able to reconstruct entire classified papers.
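The principle behind such reconstruction is simple enough to sketch in a few lines of code. The snippet below is a minimal, purely illustrative toy: the "documents", the redaction markers and the recovered sentence are all invented, and real declassified releases are scanned pages rather than neat word lists, but it shows why independently redacted copies leak more together than any one of them does alone.

# A minimal sketch of the "merge differently redacted copies" idea. The copies,
# redaction markers and recovered text below are entirely hypothetical.

REDACTED = None  # stands in for a blacked-out word

def merge_copies(copies):
    """Combine several independently redacted copies of the same text.
    Each copy is a list of words, with REDACTED wherever that release
    blacked out a word. Wherever at least one copy left a word visible,
    the merged version recovers it."""
    merged = []
    for words in zip(*copies):
        visible = next((w for w in words if w is not REDACTED), None)
        merged.append(visible if visible is not None else "[STILL REDACTED]")
    return " ".join(merged)

# Two hypothetical releases of the same sentence, censored differently:
copy_a = ["The", REDACTED, REDACTED, "uses", "a", "boosted", REDACTED, "core"]
copy_b = [REDACTED, "test", "device", REDACTED, REDACTED, REDACTED, "plutonium", REDACTED]

print(merge_copies([copy_a, copy_b]))
# -> The test device uses a boosted plutonium core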

Sometimes it is the omission of information that can alert an adversary to valuable secret information. At the end of the Manhattan Project, the Smyth Report was published to give a general description of the atomic bomb to the lay public. The first edition of this report contained an important fact about the poisoning of nuclear reactors by fission byproducts; it was a problem that had frustrated scientists and taken them quite a while to surmount. When General Leslie Groves, the head of the project, saw this included in the Smyth Report, he ordered it taken out of the next edition. When the Soviets read the second edition and compared it to the first one, they knew exactly what was so valuable and secret.

In the London Review of Books, the physicist and writer Jeremy Bernstein, who worked closely with Dyson, Robert Oppenheimer, Hans Bethe and others, gives his own take on the absurdity of secrecy. In 1958 he was invited by Dyson and others to work on Project Orion, a spaceship powered by nuclear bomb explosions. This was an audacious scheme which several brilliant scientists and engineers seriously worked on for a few years. One of the key calculations in exploring this idea involved the opacities of elements; the opacity of an element is a measure of its capacity to absorb electromagnetic radiation. As Bernstein puts it:

"A couple of physicists at Los Alamos had had the idea that small nuclear devices could be used as propellants for space travel. They would be dropped from the bottom of the ship sequentially and the detritus from the explosions would act as an efficient propellant. Dyson had gone to La Jolla to work on the design of the ship. There was to be a large flat metallic plate, known as the ‘pusher’, attached to the body of the ship by springs that would absorb the shocks of the explosions.
The question was whether the pusher would ‘ablate’ away under the influence of the successive bombs. This depended in large measure on the opacity – the capacity to absorb radiation – of the materials involved. They were to be relatively light elements. At the time I didn’t appreciate the importance of this limitation. Any information, theoretical or experimental, on the opacity of any element heavier than lutetium was and is classified. This includes gold, platinum and lead. To compute an opacity is a problem in quantum mechanics and atomic physics. Anyone can try to do it, but if you do it for lead, the result is classified. Is anything more absurd? Dyson had an idea for a ‘super Orion’ with a pusher made of uranium, which might have been used as fuel for, say, a return trip to Mars. But the opacity of uranium was classified, though it might be an interesting project for a graduate student."
Fifty years later, I did a Google search and am still not sure whether there is a reliable public source listing the opacities of heavy elements. This is in spite of the fact that many graduate students could now accurately calculate or measure these numbers using a variety of techniques.
Finally, no system of government or private enterprise is foolproof, and secrecy is always thwarted when there are spies or leakers. Sometimes these people work against the common good (as in the case of the Soviet spy Klaus Fuchs), sometimes they work in its favor (as in the case of the American leaker Edward Snowden). Bernstein describes how Fuchs ferried atomic secrets out of Los Alamos:
"In autumn 1945, Enrico Fermi gave a lecture at Los Alamos on Edward Teller’s hydrogen bomb, the Classical Super. Fermi concluded that he did not see how it could be made to work. The audience was all Q-cleared and the lecture was classified. One of the people in the audience was Klaus Fuchs, who turned the lecture over to the Russians. I am not sure they learned anything they did not already know. But the lecture remained classified by the US government even after the Russians had put it on the web. You can download it at your leisure."
I have not looked for this lecture online, but we can be sure of two things: first, that it exists in the public domain, and second, that the government will never acknowledge its existence. The ultimate absurdity of secrecy arises when the government insists that something is still secret even when it is out there for all the public to see. The distinction between classified and unclassified itself becomes classified.

Why technology won't save biology

Carl Woese, who saw an integrated view for 21st-century biology: from molecules to communities
Starting today, I will be writing a monthly column for the outstanding website 3 Quarks Daily. My first post deals with the limitations of the technological zeitgeist in understanding biology.

There seems to be no end to biology's explosive progress. Genomes can now be read, edited and rewritten with unprecedented scope, individual neurons can now be studied in both space and time, the dynamics of the spread of viruses and ecological populations can be studied using mathematical models, and vaccines for deadly diseases like HIV and Ebola seem to hold more promise than ever. They say that the twentieth century belonged to physics and the twenty first belongs to biology, and everything we see in biology seems to confirm this idea.

There have been at least six revolutions in biology during the last five hundred years or so that brought us to this stage. The first was the classification of organisms using binomial nomenclature by Linnaeus. The second was the invention of the microscope by Hooke, Leeuwenhoek and others. The third was the discovery of the composition of cells, in health and disease, by Schwann and Schleiden, a direct beneficiary of the use of the microscope. The fourth was the formulation of evolution by natural selection by Darwin. The fifth was the discovery of the laws of heredity by Mendel. And the sixth was the discovery of the structure of DNA by Watson, Crick and others. A seventh, ongoing revolution could be said to be the mapping of genomes and its implications for disease and ecology. Two other minor revolutions should be added to this list: one was the weaving of statistics into modern genetics, and the second was the development of new imaging techniques like MRI and CT scans.

These revolutions in biology resulted from a combination of new ideas and new tools. This is consistent with the general two-pronged picture of scientific revolutions that has emerged through the ages: one consisting in equal parts of revolutions of ideas and revolutions of technology. The first kind was popularized by Thomas Kuhn in his book "The Structure of Scientific Revolutions". The second was popularized by Peter Galison and Freeman Dyson; Galison in his book "Image and Logic", and Dyson in his "The Sun, the Genome and the Internet". Generally speaking, many people are aware of Kuhn but few are aware of Galison or Dyson. That is because ideas are often considered loftier than tools; the scientist who gazes at the sky and divines formulas for the universe through armchair calculations is considered more brilliant than the one who gets down on her hands and knees and makes new discoveries by gazing into the innards of machines.

However, this privileging of theory over experiment paints a false picture of scientific progress. Machines and tools are not just important for verifying theories; they are more often used to discover new things that theory then has to catch up with and explain. In physics, the telescope and the particle accelerator have been responsible for some of the greatest revolutions in our understanding of nature; they haven't just verified existing theories but uncovered the fundamental composition of matter and spacetime. In chemistry, the techniques of x-ray diffraction and nuclear magnetic resonance have not just opened new windows into the structure of molecules, but have led to a novel understanding of molecular behavior in environments as diverse as intricate biological systems and distant planets and galaxies. There is little doubt that new experimental techniques have been as responsible for scientific revolutions as new ideas, if not more so.

As one example of the primacy of tool-driven revolutions, four of the milestones in biology noted above can be considered to have come from the development or application of new tools. The microscope itself was a purely technological invention. The elucidation of the structures of cells, bacteria and viruses was made possible by the invention of new forms of microscopy – the electron microscope in particular – as well as new dyes which allowed scientists to distinguish cellular components from each other. The structure of DNA came about because of x-ray diffraction and chemical analysis. The minor revolution of imaging was made possible by concomitant revolutions in electronics and computing. And finally, the revolution in genomics was engendered by chemical and physical methods for rapidly sequencing genomes as well as by powerful computers which can analyze the resulting data. The application of all this technology has produced a windfall of data which hides gems of understanding. The new science of systems biology promises to tie all this data together and lead to an improved understanding of biological systems.

And this is where the problem begins. In one way biology has become a victim of its own success. Today we can sequence genomes much faster than we can understand them. We can measure electrochemical signals from neurons much more efficiently than we can understand their function. We can model the spread of viruses and ecological populations much more rapidly than we can understand their origins or interactions. Moore's Law may apply to computer chips and sequencing speeds, but it does not apply to human comprehension. In the words of the geneticist Sydney Brenner, biology in the heyday of the 50s used to be "low input, low throughput, high output"; these days it's "low input, high throughput, no output". What Brenner is saying is that compared to the speed with which we can now gather and process biological data, the theoretical framework which goes into understanding that data, as well as the understanding which comes out the other end, is severely impoverished. What is more serious is a misguided belief that data equals understanding. The philosopher of technology Evgeny Morozov calls this belief "technological solutionism": the urge to use a certain technology to address a problem simply because you can.

Consider a field like cancer where gene sequencing has come to play a dominant role. The idea is to compare the genome sequences of cancer cells and normal cells, and thereby understand which genes are malfunctioning in cancer cells. The problem is that if you sequence a typical cell from, say, a lung cancer patient, you will find literally hundreds of genes which are mutated. It is difficult to distinguish the mutant genes which are truly important from those which just come along for the ride; the latter are a necessary part of the messy, shotgun process of cancer cell evolution. It is even more difficult to know which genes to target if we want to keep the cancer from growing. To do this it is important to have a better theory for understanding exactly which genes would be mutated in a cancer cell and why, and what function they serve. While we have made strides in developing such theories, our understanding of the basic causal framework of cancer is far behind our capacity to rapidly sequence cancer genomes. And yet millions of dollars are spent in sequencing cancer genomes, with the expectation that someday the data alone will lead to a quantum leap in understanding. You look for the keys not where they are but where you can easily see them, under the bright light.
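A toy calculation makes the driver-versus-passenger problem concrete. In the sketch below the gene names are real but every mutation call is invented; it simply tallies how often each gene is mutated across a handful of hypothetical tumors, the crudest possible "driver" heuristic, and shows why such counts cannot by themselves separate the genes that matter from those that come along for the ride.

from collections import Counter

# Hypothetical somatic mutation calls (tumor versus normal) for three invented
# lung-tumor samples. The gene names are real; the calls are made up.
patients = {
    "patient_1": {"TP53", "KRAS", "TTN", "MUC16"},
    "patient_2": {"TP53", "EGFR", "TTN", "ZNF717"},
    "patient_3": {"TP53", "KRAS", "MUC16", "CSMD1"},
}

# Count how often each gene is mutated across patients.
recurrence = Counter(gene for mutations in patients.values() for gene in mutations)

for gene, count in recurrence.most_common():
    print(f"{gene}: mutated in {count} of {len(patients)} tumors")

# TP53 and KRAS recur, but so do TTN and MUC16, enormous genes that recur simply
# because they present a bigger target for random mutations. The counts alone
# cannot say which genes drive the cancer; that requires a causal theory of what
# each gene does, which is exactly what lags behind the sequencing.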

A recent paper by the neuroscientist John Krakauer makes the same point about neuroscience. If biology is the science of the twenty-first century, neuroscience is probably the cherry on that cake. No other field promises to deliver fundamental insights not just into major mental health disorders but into the very essence of what it means to be human. To understand the brain better, scientists and the government launched the BRAIN Initiative a few years ago. The goal of this initiative can be stated very simply: it is to map every single neuron in the brain in space and time and to understand the connections between them. The belief is that understanding the behavior of neurons will lead to an understanding of human behavior. At the heart of the initiative are new methods of interrogating neuronal function, ranging from very precise electrical recording using advanced sensor techniques to studying an inventory of the proteins and genes activated in neurons using modern recombinant DNA technology. These methods will undoubtedly discover new aspects of the brain that were previously hidden. Some of them will lead to groundbreaking understanding. But we do not know whether they will allow us to understand human behavior. As one example, Krakauer's paper talks about mirror neurons, a specific class of neurons that caused a great stir a few years ago. As their name indicates, mirror neurons fire both when an animal performs an action and when it observes another performing the same action. These neurons have thus been proclaimed to be the basis of diverse human emotions, including empathy; understanding them is considered to be integral to understanding social behavior; delicate imaging studies can track their activation and deactivation. But as Krakauer notes, many experiments on mirror neurons have been done on monkeys, and in those cases little attention, if any, is paid to the actual behavior of the monkey when the mirror neurons fire. Thus, we seem to know what is going on, but only at the level of the neurons themselves. We do not know what is actually going on in the mind of the monkey, in terms of its behavior, when the neurons are activated.

To understand why these limitations of technology can hamper our understanding of complex biological systems, we must turn to one of the great philosophical foundation stones of science: the paradigm of reductionism. Reductionism was the great legacy of twentieth-century science, and a culmination of everything that came before. It means the breaking up of complex systems into their simpler parts; the idea is that understanding the simpler parts will enable us to understand the whole system. There is little doubt that reductionism has led to spectacular successes in all of science. The entire edifice of twentieth-century physics – exemplified by relativity and quantum mechanics – rose from the reductionist program of understanding matter and spacetime through their most basic components: particles and fields. Molecular biology was similarly created when biological matter started to be unraveled at the molecular level. Most of these advances became possible because powerful new technology like particle accelerators and spectrometers allowed us to break apart and study matter and living organisms at their most fundamental level.

But as science overturned one obstacle after another in its confident reductionist march, it became clear that all was not well with this approach. One of the first salvos in what came to be called the "reductionism wars" came from the physicist Philip Anderson, who in 1972 wrote an article titled 'More is Different'. Anderson did not deny the great value of reductionism in science, but he pointed out that complex systems are not always the sum of their constituent parts. More is not just quantitatively different but qualitatively so. Even simple examples illustrate this phenomenon: atoms of gold are not yellow, but gold bars are; individual molecules of water don't flow, but put enough of them together and you get a river which has features that are not directly derived from the molecules themselves. And consciousness may be the ultimate challenge to reductionism; there is absolutely nothing in a collection of carbon, hydrogen, oxygen and nitrogen atoms in a human brain that tells us that if you put enough of them together in a very specific configuration, you will get a human being who could be writing this essay. Rivers, gold bars, human brains; all these systems are examples of emergent phenomena, in which knowledge of the individual components is necessary but not sufficient for understanding the behavior of the entire system. Such systems demand top-down as opposed to purely bottom-up understanding.

Why does emergence exist? We don't know the answer to that question, but at least part of it is related to historical contingency. The complexity theorist Stuart Kauffman gives a very good example of this contingency. Consider, says Kauffman, the structure and function of the human heart. Imagine that you had a super-intelligent demon, a "superfreak" who could specify every single particle in the heart and thereby try to derive the function of the heart from string theory. Imagine that, starting from the birth of the universe, this omniscient superfreak could specify every single configuration of atoms in every pocket of spacetime that could lead to the evolution of galaxies, supernovae, planets, and life. He would still fail to predict that the most important function of the human heart is to pump blood. That is because the heart has several functions (making beating noises, for instance), but the function of the heart about which we care the most is a result of historical accident, a series of unpredictable circumstances on which natural selection acted before elevating the pumping of blood to the quintessential property of the heart. Some of the pressures of this natural selection came from below, but others came from above; for instance, the function of the heart was sculpted not just by the molecules which make up heart muscle but by the physiological and ecological environments in which ancient heart precursors found themselves. The superfreak may even be able to predict the pumping of blood as one of the many properties of the heart, but he will still not be able to determine the unique role of the heart in the context of the grand tapestry of life on earth. The emergence of the human heart from the primordial soup of Darwin's imagination cannot be understood by understanding the quarks and cells from which the heart is composed. And the emergence of consciousness from the brain cannot be understood merely by understanding the functions of single neurons.

Emergence is what thwarts the understanding of biological systems through technology, because most technology used in the biological sciences is geared toward the reductionist paradigm. Technology has largely turned biology into an engineering discipline, and engineering tells us how to build something using its constituent parts, but it doesn't always tell us why that thing exists and what relationship it has to the wider world. The microscope observes cells, x-ray diffraction observes single DNA molecules, sequencing observes single nucleotides, and advanced imaging observes single neurons. As valuable as these techniques are, they will not help us understand the top-down pressures on biological systems that lead to changes in their fundamental structures.

The failure of reductionist technology to understand emergent biology is why technology will not save the biological sciences. I have a modest prescription to escape from this trap: create technology that studies biology at multiple levels, and tie this technology together with concepts that describe biology at multiple levels. For instance, when it comes to neuroscience, it would be fruitful to combine electrical recording of single neurons (low level) with lower-resolution techniques for studying clusters of neurons and modules of the brain (intermediate level) and with experiments directly probing the behavior of animal and human brains (higher level). The good news is that many of these techniques exist; the bad news is that many of them exist in isolation, and the researchers who use them don't build bridges between the various levels. The same bridge-building goes for concepts. For instance, at the highest level organisms are governed by the laws of thermodynamics, more specifically non-equilibrium thermodynamics (of which life is an example), but you will not usually see scientists studying collections of neurons taking into consideration the principles of statistical thermodynamics or entropy. To achieve this meld of concepts scattered across different levels of biological understanding, there will also need to be much closer multidisciplinary interaction; physicists studying thermodynamics will need to collaborate closely with geneticists studying the translation of proteins in neurons. These scientists will in turn need to work together with psychologists observing human behavior or ethologists observing animal behavior; both fields have a very long history which can inform researchers from other disciplines. Finally, all biologists need to better appreciate the role of contingency in the structure and function of their model systems. By looking at simple organisms, they need to ask how contingency can inform their understanding of more complicated creatures.

For a wholesome biology to prosper we need both technological and human interactions. But what we need most is an integrated view of biological organisms that moves away from a strict focus on looking at these organisms as collections of particles, fields and molecules. The practitioners of this integrated biology can take a page out of the writings of the late Carl Woese. Woese was a great biologist who discovered an entirely new domain of life (the Archaea), and one of the few scientists able to take an integrated view of biology, from the molecular to the species level. He pioneered new techniques for comparing genomes across species at the molecular level, but he also had a broader and more eloquent view of life at the species level, one which he set down in a 2004 essay titled "A New Biology for a New Century", an essay that expanded biology beyond its mechanistic description:

"If they are not machines, then what are organisms? A metaphor far more to my liking is this. Imagine a child playing in a woodland stream, poking a stick into an eddy in the flowing current, thereby disrupting it. But the eddy quickly reforms. The child disperses it again. Again it reforms, and the fascinating game goes on. There you have it! Organisms are resilient patterns in a turbulent flow—patterns in an energy flow. A simple flow metaphor, of course, fails to capture much of what the organism is. None of our representations of organism capture it in its entirety. But the flow metaphor does begin to show us the organism's (and biology's) essence. And it is becoming increasingly clear that to understand living systems in any deep sense, we must come to see them not materialistically, as machines, but as (stable) complex, dynamic organization."

As the quote above observes, none of our experiments or theories captures the organism at all levels, and it is only through collaboration that we can build understanding across strata. To do so we must use technology, but use it not as master but as indispensable handmaiden. We are all resilient organisms in a turbulent energy flow. We live and die in this flow of complex, dynamic organization, and we can only understand ourselves when we understand the flow.

On Albert Einstein's birthday: How Eddington and Einstein set an example for the international fellowship of science

The world woke up on the morning of November 7, 1919, to an amazing piece of news. A few months before, in May, the English astronomer Arthur Eddington had led an expedition to the island of Principe off the west coast of Africa to try to observe one of the strangest phenomena ever predicted in the history of science: the bending of starlight by the gravitational field of the sun. The phenomenon could only be observed during a total solar eclipse, when the sun's glare is blocked enough for the passage of starlight past it to be tracked.

Eddington’s analysis proved that the light was bent by an amount that was twice that predicted by Isaac Newton, often considered the greatest scientist who ever lived. The man who trumped the great Sir Isaac’s prediction had until then been a relatively unknown forty-year-old physicist working in Berlin. His name was Albert Einstein, born on this day in 1879.
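The factor of two is easy to check on the back of an envelope. General relativity predicts a deflection of 4GM/(c²R) for a ray of starlight grazing the sun, while the Newtonian corpuscular calculation gives exactly half of that; the short sketch below, using standard values for the sun's mass and radius, reproduces the roughly 1.75 versus 0.87 arcseconds that Eddington's photographic plates had to discriminate between.

# Back-of-the-envelope check of the deflection of starlight grazing the sun.
# Standard physical constants, rounded to four significant figures.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # mass of the sun, kg
R_sun = 6.957e8      # radius of the sun, m
c = 2.998e8          # speed of light, m/s
RAD_TO_ARCSEC = 206265

einstein = 4 * G * M_sun / (c**2 * R_sun) * RAD_TO_ARCSEC  # general relativity
newton = einstein / 2                                      # Newtonian corpuscular value

print(f"General relativity: {einstein:.2f} arcseconds")   # about 1.75
print(f"Newtonian value:    {newton:.2f} arcseconds")      # about 0.87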

The observation of starlight bending was the first confirmed prediction of Einstein's then otherworldly-seeming general theory of relativity: “One of the greatest achievements in the history of human thought. It is not the discovery of an outlying island but of a whole new continent of scientific ideas”, declared the English physicist J. J. Thomson, discoverer of the electron. It was only the first among a stellar set of experimental observations that validated some mind-bending phenomena and ideas: spacetime curvature, black holes, gravitational waves, an expanding universe. It catapulted Einstein to world celebrity, and made him a household name and a part of the history books.

More importantly for the sake of world peace, however, it underscored one of the finest moments in the history of science diplomacy. The world had seen its first brutal world war end just a year before, when the guns had finally fallen silent in November 1918. The carnage had been unparalleled: more than 38 million casualties, with 17 million deaths. And in the light of this death and destruction, there was one country on which the world’s anger was focused: Germany. It seemed that Germany had started the war and prolonged it, and that it was fanatical German militarism that had sown the seeds of discontent.

In the middle of all that resentment, then, it could not have gone unnoticed that an Englishman had confirmed a seminal prediction of a German's reworking of our understanding of the cosmos. In fact it was an exceptional gesture that would go down in history as a measure of the international brotherhood of science. Eddington, a pacifist Quaker who loathed war, joined heart and mind with Einstein, another pacifist who loathed war. At a time when the two countries had just come out of a horrific conflict with each other – a conflict streaked with memories of poison gas, trench warfare, dysentery and hand-to-hand combat – here was singular proof that there could be friendship between them again, that it was possible to forgive and move ahead together. The expedition at Principe confirmed an essential fact about science: that it can go beyond petty and deadly territorial disputes, that its bonds go deeper than those of rank or politics, and that, in the hands of men and women with conscience, it can rise above the fray and be the one shining candle in the dark. Einstein never forgot Eddington’s contribution to relativity and the hand of fellowship that bridged two nations who not long before had been sworn enemies. At the end of November, he finally gave in to the constant pleas from journalists and the public to hold forth on his theory and wrote a piece on relativity for the London Times. But he began the piece not with a scientific exposition but with a human one:
“I gladly accede to the request of your colleague to write something for The Times [London] on relativity. After the lamentable breakdown of the old active intercourse between men of learning, I welcome this opportunity of expressing my feelings of joy and gratitude toward the astronomers and physicists of England. It is thoroughly in keeping with the great and proud traditions of scientific work in your country that eminent scientists should have spent much time and trouble, and your scientific institutions have spared no expense, to test the implications of a theory which was perfected and published during the war in the land of your enemies. Even though the investigation of the influence of the gravitational field of the sun on light rays is a purely objective matter, I cannot forbear to express my personal thanks to my English colleagues for their work; for without it I could hardly have lived to see the most important implication of my theory tested.” 
Einstein’s message here was clear. No matter what the political environment, science should trudge on, and political differences should not be a reason to squelch scientific collaboration. In fact such collaboration may be the only bond joining two countries together when all others have failed.

Eddington and Einstein’s plea for scientific fellowship has again become relevant, as irrational nationalism starts to rear its ugly head in the United States, in Europe and beyond. Demagogues with no understanding of science are trying to stem the flow of scientific talent from other countries, and no nation will be hurt more by this backward policy than the United States. More than any other single country, the United States has been the beneficiary of groundbreaking work by émigré scientists; in fact, the U.S. rose to scientific prominence when Jewish scientists fleeing fascism in Europe migrated to its shores. It achieved preeminence in fields as diverse as astronomy, biomedical research and social science as these European immigrants and their students, along with others who came to the country from Asia, Africa, Australia and other continents during the last few decades, contributed massively to new scientific discoveries and inventions and won a string of Nobel Prizes.

If the United States starts to appear an unwelcoming destination for the world’s scientists, doctors and engineers, this would be not only a scientific disaster but a human one. Immigrant scientists are often fleeing persecution, broken economies and shoddy education systems, and therefore usually work extra hard to ensure that their efforts bring success to their adopted countries. They may not all be Christians, but as is clear from the educational attainment and income levels of so many of these immigrant groups, they often live and breathe the Protestant work ethic of hard work, honesty and perseverance. As exemplified by a letter written by the German émigré Hans Bethe to his teacher Arnold Sommerfeld, many of them come to love their country and demonstrate deep loyalty toward it. Einstein himself was of course one of the most important examples of the immigrant experience in this country; when he moved to Princeton in 1933, the center of world physics moved with him. Alienating these people would not just be antithetical to the universal fellowship of science; it would decidedly not be in the United States’ best interests. It is these immigrant scientists and engineers who have started companies worth billions, discovered new drugs, materials and species, and contributed to America's tremendous supremacy in the information age. Supporting these immigrants is in fact putting ‘America First’.

One can have a perfectly reasonable discussion about limits to immigration without keeping talent away from these shores or alienating potential immigrants who want to succeed in this country through hard work and family values. The current political environment has erased these important distinctions, if not explicitly then at least in spirit, and they are distinctions that we all need to point out clearly. The path ahead will not be easy, but as long as we support and understand each other, as long as we organize panel discussions and conferences and exchange programs which make people appreciate the international nature of science, we will move ahead together. All we need to do is have the Eddingtons and Einsteins among us keep finding each other.

Yuval Noah Harari's "Homo Deus": Sweeping, clever and provocative, but speculative and incomplete

Yuval Noah Harari's "Homo Deus" continues the tradition introduced in his previous book "Sapiens": clever, clear and humorous writing, intelligent analogies and a remarkable sweep through human history, culture, intellect and technology. In general it is as readable as "Sapiens" but suffers from a few limitations.

On the positive side, Mr. Harari brings the same colorful and thought-provoking writing and broad grasp of humanity, both ancient and contemporary, to the table. He starts by exploring the three main causes of human misery through the ages - disease, starvation and war - and talks extensively about how technological development, liberal political and cultural institutions and economic freedom have led to very significant declines in each of these maladies. Continuing his theme from "Sapiens", a major part of the discussion is devoted to shared zeitgeists like religion and other forms of belief that, notwithstanding some of their pernicious effects, can unify a remarkably large number of people across the world in striving together for humanity's betterment. This set of unifying beliefs is not just religious or supernatural; even a completely secular concept like human rights refers to ideas which are nowhere to be found except in the human imagination. It is partly this zeitgeist of shared beliefs that jump-started the cognitive revolution that made Homo sapiens so unique. It has created or enriched an almost infinite variety of human institutions and ideas, from money and mating to democracy and disco music. As in "Sapiens", Mr. Harari enlivens his discussion with popular analogies from current culture, ranging from McDonald's and modern marriage to American politics and psychotherapy. Mr. Harari's basic take is that science and technology, combined with a shared sense of morality and our belief-generating cognitive system, have created a solid liberal framework around the world that puts individual rights front and center. There are undoubtedly communities that don't respect individual rights as much as others, but these are usually seen as challenging the centuries-long march toward liberal individualism rather than upholding the global trend.

The discussion above covers about two thirds of the book. About half of this material is recycled from "Sapiens" with a few fresh perspectives and analogies. The most important general message that Mr. Harari delivers, especially in the last third of the book, is that this long and inevitable-sounding imperative of liberal freedom is now ironically threatened by the very forces that enabled it, most notably the forces of technology and globalization. Foremost among these are artificial intelligence (AI) and machine learning. These significant new developments are gradually making human beings cede their authority to machines, in ways small and big, explicitly and quietly. From dating to medical diagnosis, from the care of the elderly to household work, entire industries now stand both to benefit from and to be complemented or even superseded by the march of the machines. Mr. Harari speculates about a bold vision in which most manual labor has been taken over by machines and true human input is confined to a very small number of people, many of whom, because of their creativity and the demand for their skills, will likely be in the top financial echelons of society. How will the rich and the poor live in these societies? We have already seen how the technological decimation of parts of the working class was a major theme in the 2016 election in the United States and the vote for Brexit in the United Kingdom. It was also a factor that was woefully ignored in the public discussion leading up to these events, probably because it is much easier to provoke human beings against other human beings than against cold, impersonal machines. And yet it is the cold, impersonal machines which will increasingly interfere with human lives. How will social harmony be preserved in the face of such interference? If people whose jobs are now being done by machines get bored, what new forms of entertainment and work will we have to invent to keep them occupied? Man after all is a thinking creature, and extended boredom can cause all sorts of psychological and social problems. If the division of labor between machines and men becomes extreme, will society fragment into H. G. Wells's vision of two species, one of which literally feeds on the other even as it sustains it?

These are all tantalizing as well as concerning questions, but while Mr. Harari does hold forth on them with some intensity and imagination, this part of the book is where his limitations become clear. Since the argument about ceding human authority to machines is a central one, the omission unfortunately appears to me to be a serious one. The problem is that Mr. Harari is an anthropologist and social scientist, not an engineer, computer scientist or biologist, and many of the questions of AI are firmly grounded in engineering and software algorithms. There are mountains of literature written about machine learning and AI, and especially about their technical strengths and limitations, but Mr. Harari makes few efforts to follow them or to explicate their central arguments. Unfortunately there is a lot of hype these days about AI, and Mr. Harari dwells on some of the fanciful hype without grounding us in reality. In short, his take on AI is slim on details, and he makes sweeping and often one-sided arguments while largely steering clear of the raw facts. The same goes for his treatment of biology. He mentions gene editing several times, and there is no doubt that this technology is going to make significant inroads into our lives, but what is missing is a realistic discussion of what biotechnology can or cannot do, and what aspects of the field are likely to be impacted by gene editing. Similarly, it is one thing to mention, in an offhand manner, brain-machine interfaces that would allow our brains to access supercomputer-like speeds; it's another to actually discuss to what extent this would be feasible and what the best science of our day has to say about it.

In the field of AI, what is particularly missing is a discussion of neural networks and deep learning, two of the main tools used in AI research. Also missing is a plurality of AI scenarios in which machines either complement humans, subjugate them, or are largely tamed by them. When it comes to AI and the future, while general trends are going to be important, much of the devil will be in the details - details which decide how the actual applications of AI will be sliced and diced. This is an arena in which even Mr. Harari's capacious intellect falls short. The ensuing discussion thus seems tantalizing but does not give us a clear idea of the actual potential of machine technology to impact human culture and civilization. For more about these aspects, I would recommend books like Nick Bostrom's "Superintelligence", Pedro Domingos's "The Master Algorithm" and John Markoff's "Machines of Loving Grace". All these books delve into the actual details that sum up the promise and fear of artificial intelligence.

Notwithstanding these limitations, the book is certainly readable, especially if you haven't read "Sapiens" before. Mr. Harari's writing is often crisp, his wordplay is deftly clever, and the reach of his mind and imagination immerses us in a grand landscape of ideas and history. At the very least he gives us a very good idea of how far we as human beings have come and how far we still have to go. As a prognosticator Mr. Harari's crystal ball remains murky, but as a surveyor of past human accomplishments his robust and unique abilities are impressive and worth admiring.