Book Review: Jacob Bronowski's "The Origins of Knowledge and Imagination"

The late Jacob Bronowski was one of a handful of people in the 20th century who were true Renaissance men, with a grasp of intellectual endeavors ranging from the science of Newton to the art of Blake. But more importantly, Bronowski was also a great humanist who understood well the dangers of dogma and the importance of ethics in securing the contract between science and society. His TV series and book, “The Ascent of Man”, are an eloquent testament and an essential resource for anyone who wants to understand the history of science and its relationship to politics and society. His plea to all of us – delivered on a rainy, gloomy day at Auschwitz – to regard no knowledge as final and everything as open to doubt is one of the great statements on science and ethics of our times. Bronowski had an unusual command of the English language; perhaps because English was not his first language, his style acquired a simplicity and a direct, hard-hitting eloquence that escapes many native English speakers. In this sense, I find Bronowski to be the Joseph Conrad of his discipline.

In this book Bronowski takes on a different topic – an inquiry into the meaning of knowledge and our means of acquiring it. The book is based on the Silliman Lectures at Yale and is more academic than “The Ascent of Man”, but it is no less wide-ranging. Bronowski tells us how all objective knowledge is essentially derived rather than directly given, a point illustrated by a description of how the eye and brain conspire to build a fine-grained picture of coarse-grained reality. He drives home the all-important lesson that every experiment we perform on nature is only made possible by making a "cut" that isolates the system under investigation from the rest of the universe. Thus, whenever we do science we are forced to sacrifice the connectivity of the universe, along with whatever knowledge lies on the other side of that cut. This fact by itself shows us that scientific knowledge will at best be an approximation.

If Bronowski were alive today, I feel certain that this discussion would turn to models, and especially computer models. All knowledge is essentially a model, and understanding the strengths and limitations of models helps us understand the limitations of objective knowledge. Knowledge also has little value if it cannot be understood and communicated through language, and a good part of the book is devoted to the differences between human and animal language that make human language special. One difference that I hadn't quite appreciated is that humans can break sentences down into words that can be rearranged, while animals essentially speak in whole sentences, and even then these sentences communicate instruction rather than information.

The most important part of the book, in my opinion, is the second half. Here Bronowski solidifies the theme of understanding the world through limited models and really drives home the open and uncertain nature of all knowledge. There are a few essential springboards here: Bertrand Russell's difficulties with paradoxes in mathematics, Alan Turing's negative solution to the halting problem (the question of whether there is an algorithm that can tell us for certain whether an arbitrary program on a Turing machine will halt), and finally the twenty-four-year-old Kurt Gödel's stunning incompleteness theorem. Bronowski ties together the themes explored in his lectures by making the ability of linguistic and mathematical systems to engage in self-reference the centerpiece of his arguments. Constructing self-referential statements was one of the powerful tools Gödel used in his seminal work.
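
For readers who want to see the self-referential trick laid bare, here is a minimal sketch in Python of the diagonal argument behind Turing's result. The `halts` oracle here is hypothetical (not anything Bronowski or Turing wrote); the whole point is that no correct, fully general version of it can exist.

```python
def halts(program, argument):
    """Hypothetical oracle: returns True if program(argument) eventually
    halts, False if it runs forever. Turing showed that no such general
    procedure can exist."""
    raise NotImplementedError

def paradox(program):
    # The crucial act of self-reference: the program is fed its own code.
    if halts(program, program):
        while True:   # if the oracle predicts halting, loop forever
            pass
    else:
        return        # if the oracle predicts looping, halt immediately

# Asking halts(paradox, paradox) yields a contradiction either way:
# whatever the oracle answers, paradox does the opposite. Hence no
# universal halting decider exists.
```

The same pattern of a system talking about itself is what Gödel exploited, in arithmetic rather than code, to prove his incompleteness theorem.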

In Bronowski's opinion, we discovered the limits of our knowledge when our attempts to turn mathematics and logic into closed systems ran into fundamental difficulties: every such system generates paradoxes through self-reference. Self-reference dooms attempts by human beings to gain complete knowledge of any system. It ensures that our investigations will always remain open, an openness we must recognize as an indelible feature, not a bug. Our political life too is predicated on openness, and failing to recognize the open nature of systems of government can lead to real, as opposed to merely intellectual, pain and grief. The history of the twentieth century underscores this point adequately.

There is an anecdote at the end of the book which I thought illustrates Bronowski's appeal to openness quite well, although perhaps not in the way he intended. It's also particularly relevant to our present trying times. Bronowski tells a story about emigrating to the United States from England during a politically fraught time in the 1950s, when everyone who dissented from orthodoxy was suspected of being a subversive or a traitor. When he arrived at the port of New York City, an Irish-American policeman insisted on examining the books he was carrying. Bronowski had written a well-regarded book on Blake. The policeman took the book, flipped a few pages and asked, “You write this, bud?”. “Yes”. He said, “Psshh, this ain't never going to be no bestseller!”. And here's Bronowski's take on the policeman, with which he ends the book: “So long as there are Irish policemen who are more addicted to literary criticism than to legalisms, the intellect will not perish”. Unfortunately, as has become apparent in today's political environment, the kind of dissent and rugged individualism that has been the hallmark of the American experiment could be said to have swung too far. Perhaps the Irish policeman was exercising his right to individual dissent, but perhaps it also meant that he was simply too ignorant to understand the book, irrespective of its commercial status. Perhaps, ironically enough, the Irish policeman might have benefited from a dose of the spirit of open inquiry that Bronowski extols so well in these lectures.

Is Big Data shackling mankind's sense of creative wonder?

This is my latest monthly column for the site 3 Quarks Daily. 

Primitive science began when mankind looked upward at the sky and downward at the earth and asked why. Modern science began when Galileo and Kepler and Newton answered these questions using the language of mathematics and started codifying them into general scientific laws. Since then scientific discovery has been constantly driven by curiosity, and many of the most important answers have come from questions of the kind asked by a child: Why is the sky blue? Why is grass green? Why do monkeys look similar to us? How does a hummingbird flap its wings? With the powerful tool of curiosity came the even more powerful fulcrum of creativity on which all of science hinged. Einstein's imagining himself riding on a light beam was a thoroughly creative act; so were Ada Lovelace's vision of a calculating machine as something that could do more than mere calculation, James Watson and Francis Crick's DNA model-building exercise, and Enrico Fermi's sudden decision to put a block of paraffin wax in the path of neutrons.

What is common to all these flights of fancy is that they were spontaneous, often spur-of-the-moment, informed at best by meager data and mostly by intuition. If Einstein, Lovelace and Fermi had paused to reconsider their thoughts because of the absence of hard evidence or statistical data, they might at the very least have been discouraged from exploring these creative ideas further. And yet that is exactly what I think the future Einsteins and Lovelaces of our day are in danger of doing. They are in danger of doing this because they increasingly live in a world where statistics and data-driven decisions are becoming the beginning and end of everything, where young minds are constantly cautioned not to speculate before they have enough data.

We live in an age where Big Data, More Data and Still More Data seem to be all-consuming, looming over decisions both big and mundane, from driving to ordering pet food to getting a mammogram. We are told that we should not make any decision without substantiating it through statistics and large-scale data analysis. Now, I will be the first to advocate making decisions based on data and statistics, especially in an era where sloppy thinking and speculation based on incomplete or non-existent data seem to have become the very air that the media and large segments of the population breathe. Statistical thinking in particular has proven both paramount and sorely lacking in decision-making, and books like Daniel Kahneman's “Thinking, Fast and Slow” and Nate Silver's “The Signal and the Noise” have stressed how humans are intrinsically bad at probabilistic and statistical thinking and how this handicap leads them to consistently make wrong decisions. It seems that a restructuring of our collective thinking process that is grounded in data would be a good thing for everyone.

But there are inherent problems with implementing this principle, quite apart from the severe limitations that an excess of data-based thinking imposes on creative speculation. Firstly, except in rare cases, we simply don't have all the data necessary for making a good decision. Data itself is not insight; it is simply raw material for insight. This problem is built into the scientific process itself: in the words of the scientist and humanist Jacob Bronowski, in every scientific investigation we decide where to make a "cut" in nature, a cut that isolates the system of interest from the rest of the universe. Even late into the process, we can never truly know whether the part of the universe we have left out is relevant. Our knowledge of what we have left out is thus not just a "known unknown" but often an "unknown unknown". Secondly, and equally importantly, the quality of the data often takes a back seat to its quantity; too many companies and research organizations seem to think that more data is always good, even when more data can mean more bad data. Thirdly, even with a vast amount of data, human beings are incapable of digesting the surfeit and making sure that their decisions account for all of it. And fourthly, and most importantly, making decisions based on data is often a self-fulfilling prophecy: the hypotheses we form and the conclusions we reach are inherently constrained by the data. We become obsessed with the data we have, develop tunnel vision, and ignore the importance of the data we don't have. This means that all our results are only going to be as good as the existing data.

Consider a seminal basic scientific discovery like the detection of the Higgs boson, almost fifty years after the prediction was made. There is little doubt that this was a supreme achievement, a technical tour de force that came about only because of the collective intelligence and collaboration of hundreds of scientists, engineers, technicians, bureaucrats and governments. The finding was of course a textbook example of how everyday science works: a theory makes a prediction and a well-designed experiment confirms or refutes the prediction. But how much more novelty would the LHC have found had its parameters been significantly tweaked, had the imaginations of the collider and its operators been set loose? Maybe it would not have found the Higgs then, but it might have discovered something wholly different and unexpected. There would certainly have been more noise, but there would also have been more signal that could have led to discoveries nobody predicted and that might have charted new vistas in physics. One of the major complaints about modern fundamental physics, especially in areas like string theory, is that it is experiment-poor and theory-rich. But experiments can only find something new when they don't stay too close to the theoretical framework. You cannot always let prevailing theory dictate what experiments should do.

The success of the LHC in finding the Higgs and nothing but the Higgs points to the self-fulfilling prophecy of data that I mentioned: the experiment was set up to find or disprove the Higgs, and the data could only ever contain the existence or absence of the Higgs. True creative science comes from generating hypotheses beyond the domain of the initial hypotheses and the resulting data. These hypotheses have to be confined within the boundaries of the known laws of nature, but there still has to be enough wiggle room to at least push against these boundaries, if not break free of them. My contention is that we are gradually becoming so enamored of data that it is clipping our wings and tying them down, not allowing us to roam free in the air and explore daring new intellectual landscapes. It's very much a case of the drunk under the lamppost, looking for his keys there because that's where the light is.

A related problem with the religion of “dataism” is the tendency to dismiss anything that constitutes anecdotal evidence, even if it can lead to creative exploration. “Yes, but that's an n of 1” is a refrain you must have heard from many a data-entranced statistics geek. It's important not to regard anecdotal evidence as sacrosanct, but it's equally wrong, in my opinion, to simply dismiss it and move on. Isaac Asimov reminded us that great discoveries in science are made when an odd observation or fact makes someone go, “Hmm, that's interesting”. But if instead the reaction is going to be “Interesting, but that's just an n of 1, so I am going to move on”, you are potentially giving up on hidden gems of discovery.

With anecdotal data also comes storytelling, which has always been an integral part not just of science but of the human experience. Both arouse our sense of wonder and curiosity; we are left fascinated and free to imagine and explore precisely because of the paucity of data and the lone voice from the deep. Few scientists and thinkers drove home the importance of taking anecdotal storytelling seriously as well as the late Oliver Sacks did. Every one of Sacks's books is populated with fascinating stories of individual men and women with neurological deficits or abilities that shed valuable light on the workings of the brain. If Sacks had dismissed these anecdotes as insufficiently data-rich, he would have missed the essence of important neurological disorders. Sacks also extolled the value of looking at historical data, another source of wisdom that is easily dismissed by hard scientists who regard all historical data as suspect because it lacks large-scale statistical validation. Sacks regarded historical reports as especially neglected and refreshingly valuable sources of novel insights; in his early days, his insistence that his hospital's weekly journal club discuss the papers of their nineteenth-century forebears was met largely with indifference. But this exploration off the beaten track paid dividends. For instance, he once realized that he had rediscovered a key hallucinatory aspect of severe migraines when he came across a paper on similar self-reported symptoms by the English astronomer John Herschel, written more than a hundred years earlier. A data scientist would surely dismiss Herschel's report as nothing more than a fluke.

The dismissal of historical data is especially visible in our modern system of medicine, which ignores many medical reports of the kind that people like Sacks found valuable. It does an even better job of ignoring the vast amount of information contained in the medical repositories of ancient systems of medicine, such as the Chinese and Indian pharmacopeias. Admittedly there are a lot of inconsistencies in these reports, so they cannot all be taken literally, but ignoring them wholesale is not fruitful either. Like all uncertain but potentially useful data, they need to be dug up, investigated and validated so that we can keep the gold and throw out the dross. The great potential value of ancient systems of medicine became apparent two years ago, when the Nobel Prize in medicine was awarded to the Chinese medicinal chemist Tu Youyou for her lifesaving discovery of the antimalarial drug artemisinin. Tu was inspired to make the discovery when she found a process for low-temperature chemical extraction of the drug in a 1600-year-old Chinese text titled “Emergency Prescriptions Kept Up One's Sleeve”. This obscure, low-visibility data point would certainly have been dismissed by statistics-enamored medicinal chemists in the West, even if they had known where to find it. Part of recognizing the importance of Eastern systems of medicine consists in recognizing their very different philosophy; while Western medicine seeks to attack the disease and is highly reductionist, Eastern medicine takes a much more holistic approach that seeks to modify the physiology of the individual. This kind of philosophy is harder to study in the traditional double-blind, placebo-controlled clinical trial that has been the mainstay of successful Western medicine, but the difficulty of implementing a particular scientific paradigm should not be an argument against its serious study or adoption. As Sacks's and Tu's examples demonstrate, gems of discovery still lie hidden in anecdotal and historical reports, especially in medicine, where even today we understand so little about entities like the human brain.

Whether it's the LHC or medical research, the practice of gathering data and relying only on that data is making us stay close to the ground when we could be soaring high in the air without these constraints. Data is critical for substantiating a scientific idea, but I would argue that it actually makes it harder to explore wild, creative scientific ideas in the first place, ideas that often come from anecdotal evidence, storytelling and speculation. A bigger place for data leaves ever smaller room for authentic and spontaneous creativity. Sadly, today's publishing culture also leaves little room for pure speculation-driven hypothesizing. As just one example of how much things have changed, in 1960 the physicist Freeman Dyson wrote a paper in Science speculating on possible ways to detect alien civilizations based on their capture of heat energy from their parent star. Dyson's paper contained enough calculations to make it at least a mildly serious piece of work, but I feel confident that in 2017 it would probably be rejected by major journals like Science and Nature, which have lost their taste for interesting speculation and have become obsessed with data-driven research.

Speculation and curiosity have been mainstays of human thinking since our origins. When our ancestors sat around fires and told stories of gods, demons and spirit animals to their grandchildren, it made the wide-eyed children wonder and want to know more about these mysterious entities that their elders were describing. This feeling of wonder led the children to ask questions. Many of these questions led down blind alleys, but the ones that survived later scrutiny launched important ideas. Today we would dismiss these undisciplined mental meanderings as superstition, but there is little doubt that they involve the same kind of basic curiosity that drives a scientist. There is perhaps no better example of a civilization that went down this path than ancient Greece. Greece was a civilization full of animated spirits and gods that controlled men's destinies and the forces of nature. The Greeks certainly found memorable ways to enshrine these beliefs in their plays and literature, but the same cauldron that imagined Zeus and Athena also created Aristotle and Plato. Aristotle and Plato's universe was a universe of causes and humors, of earth and water, of abstract geometrical entities divorced from real-world substantiation. Both men speculated with fierce abandon. And yet both made seminal contributions to Western science and philosophy even as their ideas were accepted, circulated, refined and refuted over the next two thousand years. Now imagine if Aristotle and Plato had refused to speculate on causes and human anatomy and physiology because they had insufficient data, if they had turned away from imagining because the evidence wasn't there.

We need to remember that much of science arose as poetic speculation about the cosmos. Data kills the poetic urge in science, an urge that the humanities have long recognized and that science has always had in plenty. Richard Feynman once wrote,

“Poets say that science takes away the beauty of the stars and turns them into mere globs of gas atoms. But nothing is ‘mere’. I too can see the stars on a desert night, but do I see less or more? The vastness of the heavens stretches my imagination; stuck on this carousel my little eye can catch one-million-year-old light…What men are poets who can speak of Jupiter as if he were a man, but if he is an immense spinning sphere of methane and ammonia must be silent?”

Feynman was speaking to the sense of wonder that science should evoke in all of us. Carl Sagan realized this too when he said that not only is science compatible with spirituality, but it’s a profound source of spirituality. To realize that the world is a multilayered, many-splendored thing, to realize that everything around us is connected through particles and forces, to realize that every time we take a breath or fly on a plane we are being held alive and aloft by the wonderful and weird principles of mechanics and electromagnetism and atomic physics, and to realize that these phenomena are actually real as opposed to the fictional revelations of religion, should be as much a spiritual experience as anything else in one’s life. In this sense, knowing about quantum mechanics or molecular biology is no different from listening to the Goldberg Variations or gazing up at the Sistine Chapel. But this spiritual experience can come only when we let our imaginations run free, constraining them in the straitjacket of skepticism only after they have furiously streaked across the sky of wonder. The first woman, when she asked what the stars were made of, did not ask for a p value.

Staying after the party is over: Nobel Prizes and second acts

Hans Bethe kept on making important contributions to physics for more than thirty years after winning the Nobel Prize.

Since the frenzy of Nobel season is over, it's worth dwelling a bit on a topic that's not much discussed: scientists who keep doing good work even after winning the Nobel Prize. It's easy to rest on your laurels once you win the prize. Add to this the exponentially higher number of speaking engagements, magazine articles and interviews in which you are supposed to hold forth on the state of the world in all your oracular erudition, and most scientists can be forgiven for simply not having the time to do sustained, major pieces of research after their prizewinning streak. This makes the few examples of post-Nobel scientific dedication all the more noteworthy. I decided to look at these second acts in the context of the physics Nobel Prizes, starting from 1901, and found interesting examples and trends.

Let's start with the two physicists who are considered the most important of the twentieth century in terms of their scientific accomplishments and philosophical influence - Albert Einstein and Niels Bohr. Einstein got his Nobel Prize in 1921, after he had already done work for which he would go down in history; this included the five groundbreaking papers published in the "annus mirabilis" of 1905 and his work on the foundations of the laser (his collaboration with Satyendranath Bose on Bose-Einstein statistics came a few years after the prize). After 1921 Einstein did not accomplish anything of similar stature - in fact one can argue that he did not accomplish anything of enduring importance to physics after the 1920s - but he did become famous for one controversy, his battle with Niels Bohr about the interpretation of quantum theory that started at the Solvay conference in 1927 and continued until the end of his life. This led to the historic paper on the EPR paradox in 1935 that set the stage for all further discussions of the weird phenomenon known as quantum entanglement. In this argument, as in most arguments about quantum theory, Einstein was mistaken, but his constant poking of the oracles of quantum mechanics led to spirited efforts to rebut his arguments. In general this was good for the field and culminated in John Bell's famous inequality and the experiments of Alain Aspect and others that confirmed quantum entanglement and proved Einstein's faith in "hidden variables" misguided.

Bohr himself was on the cusp of greatness when he received his prize in 1922. He was already famous for his atomic model of 1913, but he was not yet known as the great teacher of physics - perhaps the greatest of the century - who would guide not just the philosophical development of quantum theory but the careers of some of the century's foremost theoretical physicists, including Heisenberg, Gamow, Pauli and Wheeler. Apart from the rejoinders to Einstein's objections to quantum mechanics that Bohr published in the 1930s, he contributed one other idea of overwhelming importance, both for physics and for world affairs. In 1939, while tramping across the snow from Princeton University to the Institute for Advanced Study, Bohr realized that it was uranium-235 that was responsible for nuclear fission. This paved the path toward the separation of U-235 from its heavier brother U-238 and led directly to the atomic bomb. Along the same lines, Bohr collaborated with his young protege John Wheeler to formulate the so-called liquid drop model of fission, which likened the nucleus to a drop of water; shoot an appropriately energetic neutron into this assembly and it wobbles and finally breaks apart. Otto Hahn, the chief discoverer of nuclear fission, later won the Nobel Prize, and it seems to me that along with Fritz Strassmann, Lise Meitner and Otto Frisch, Bohr also deserved a share of this award.

Since we are talking about Nobel Prizes, what better second act than one that results in another Nobel Prize? As everyone knows, this singular achievement belongs to John Bardeen, who remains the only person to win two physics Nobels, one for the invention of the transistor and the other for the theory of superconductivity. And like his counterpart in chemistry, Fred Sanger, who also won two prizes in the same discipline, Bardeen may have been the most unassuming physicist of the twentieth century. Along similar lines, Marie Curie won a second prize, in chemistry, after her pathbreaking work on radioactivity with Pierre Curie.

Let's consider other noteworthy second acts. When Hans Bethe won the prize for his explanation of the fusion reactions that fuel the sun, the Nobel committee told him that they had had trouble deciding which one of his accomplishments they should reward. Perhaps no other physicist of the twentieth century contributed to physics so persistently over such a long time. The sheer magnitude of Bethe's body of work is staggering, and he kept working productively well into his nineties. After making several important contributions to nuclear, quantum and solid-state physics in the 1930s and serving as the head of the theoretical division at Los Alamos during the war, Bethe opened the door to the crowning jewel of quantum electrodynamics by making the first decisive calculation of the so-called Lamb shift, which was challenging the minds of the best physicists. This work culminated in the Nobel Prize awarded to Feynman, Schwinger and Tomonaga in 1965. Later, at an age when most physicists are just lucky to be alive, Bethe provided an important solution to the solar neutrino puzzle, in which neutrinos change from one type to another as they travel to the earth from the sun. There's no doubt that Bethe was a supreme example of a second act. Richard Feynman also continued to do serious work in physics; among other contributions, he came up with a novel theory of superfluidity and a model of partons.

Another outstanding example is Enrico Fermi, perhaps the most versatile physicist of the twentieth century, equally accomplished in both theory and experiment. After winning a prize in 1938 for his research on neutron-induced reactions, Fermi was the key force behind the construction of the world's first nuclear reactor. That the same man who designed the first nuclear reactor also formulated Fermi-Dirac statistics and the theory of beta decay is a fact that continues to astonish me. The sheer number of concepts, laws and theories (not to mention schools, buildings and labs) named after him is a testament to his mind. And he achieved all this before his life was cut short at the young age of 53.

Speaking of diversity in physics research, no discussion of second acts can ignore Philip Anderson. Anderson spent most of his career at Bell Labs before moving to Princeton, making seminal contributions to condensed matter physics. The extent of Anderson's influence on physics becomes clear when we realize that most people today talk about the ideas for which he did not win the Nobel Prize. These include one of the first descriptions of the Higgs mechanism (Anderson was regarded by some as a possible contender for a Higgs Nobel) and his firing of the first salvo in the "reductionism wars"; this came in the form of a 1972 Science article called "More is Different", which has since become a classic critique of reductionism. Now in his nineties, Anderson continues to write papers and has written a book that nicely showcases his wide-ranging interests and his incisive, acerbic and humorous style.

There are other interesting candidates who show up on the list. Luis Alvarez was an outstanding experimental physicist who made important contributions to particle and nuclear physics. But after his Nobel Prize in 1968 he reinvented himself and contributed to a very different set of fields: planetary science and evolutionary biology. In 1980, along with his son Walter, Alvarez wrote a seminal paper proposing a giant asteroid impact as the cause of the extinction of the dinosaurs. This discovery about the "K-Pg boundary" changed our understanding of the earth's history and is also one of the finest examples of a father-son collaboration.

There are a few more scientists to consider, including Murray Gell-Mann, Steven Weinberg, Werner Heisenberg, Charles Townes and Patrick Blackett, who all continued to make important contributions. It's worth noting that this list focuses on specific achievements after winning the prize; a "lifetime achievement" list would include many more scientists, such as Lev Landau (who among other deep contributions co-authored a definitive textbook on physics), Subrahmanyan Chandrasekhar and Max Born.

It's also important to recognize non-research activities that are still science-related; too often we ignore these other important activities and focus only on technical research. A list of such achievements would include teaching (Feynman, Fermi, Bohr, Born), writing (P. M. S. Blackett, Feynman, Percy Bridgman, Steven Weinberg), science and government policy (Bethe, Arthur Compton, Robert Millikan, Isidor Rabi) and administration (Lawrence Bragg, J. J. Thomson, Pierre-Gilles de Gennes, Carlo Rubbia). Bona fide research is not the only thing at which great scientists excel.