
Should a scientist have "faith"?

Scientists like to think that they are objective and unbiased, driven by hard facts and evidence-based inquiry. They are proud of saying that they go only where the evidence leads them. So it might come as a surprise to realize that not only are scientists as biased as non-scientists, but that they are often driven as much by belief as non-scientists are. In fact they are driven by more than belief: they are driven by faith. Science. Belief. Faith. Seeing these words in the same sentence might make most scientists bristle and want to throw something at the wall, or at the writer of this piece. Surely you aren’t painting us with the same brush as those who profess religious faith, they might say?

But there’s a method to the madness here. First consider what faith is typically defined as – it is belief in the absence of evidence. Now consider what science is in its purest form. It is a leap into the unknown, an extrapolation of what is into what can be. Breakthroughs in science by definition happen “on the edge” of the known. Now what sits on this edge? Not the kind of hard evidence that is so incontrovertible as to dispel any and all questions. On the edge of the known, the data is always wanting, the evidence always lacking, even if not absent. On the edge of the known you have wisps of signal in a sea of noise, tantalizing hints of what may be, with never enough statistical significance to nail down a theory or idea. At the very least, the transition from “no evidence” to “evidence” lies on a continuum. In the absence of good evidence, what does a scientist do? He or she believes. He or she has faith that things will work out. Some call it a sixth sense. Some call it intuition. But “faith” fits the bill equally.

If this reliance on faith seems like heresy, perhaps it’s reassuring to know that such heresies were committed by many of the greatest scientists of all time. All major discoveries, when they are made, at first rely on small pieces of data that are loosely held. A good example comes from the development of theories of atomic structure.

When Johannes Balmer came up with his formula explaining the spectral lines of hydrogen, he based his equation on only four lines that had been measured with accuracy by Anders Ångström. He then took a leap of faith and came up with a simple numerical formula that predicted many other lines emanating from the hydrogen atom, not just those four. But the greatest leap of faith based on Balmer’s formula was taken by Niels Bohr. In fact Bohr himself did not hesitate to call it exactly that – a leap of faith. In his case, the leap involved assuming that electrons in atoms occupy only certain discrete energy states, and that the transitions between these states somehow involved Planck’s constant in an important way. When Bohr could reproduce Balmer’s formula based on this great insight, he knew he was on the right track, and physics would never be the same. One leap of faith built on another.
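
To get a sense of what those leaps rested on: in modern notation, Balmer’s formula for the visible hydrogen lines and the Bohr energy levels that reproduce it read

$$\frac{1}{\lambda} = R_H\left(\frac{1}{2^2} - \frac{1}{n^2}\right),\quad n = 3, 4, 5, \ldots \qquad\text{and}\qquad E_n = -\frac{R_H h c}{n^2},$$

and a jump between Bohr’s levels emits a photon of energy $h\nu = E_{n_i} - E_{n_f}$, which for $n_f = 2$ gives back exactly the wavelengths Balmer had guessed from four measured lines.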

To a 21st-century scientist, Bohr’s and Balmer’s thinking, as well as that of many other major scientists well through the 20th century, shows a manifestly odd feature in addition to leaps of faith – an absence of what we now call statistical significance or validation. As noted above, Balmer used only four data points to come up with his formula, and Bohr not too many more. Yet both were spectacularly right. Isn’t it odd, from the standpoint of an age that holds statistical validation sacrosanct, to have these great scientists make their leaps of faith based on paltry evidence, “small data” if you will? But that in fact is the whole point about scientific belief: it originates precisely when there isn’t copious evidence to nail down the fact, when you are still on shaky ground and working at the fringe. This belief also strikingly echoes a famous quote by Bohr’s mentor Rutherford – “If your experiment needs statistics, you ought to have done a better experiment.” Resounding words from the greatest experimental physicist of the 20th century, whose own experiments were so carefully chosen that he could deduce from them extraordinary truths about the structure of matter based on a few good data points.

The transition between belief and fact in science itself lies on a continuum. There are very few cases where a scientist goes overnight from a state of “belief” to one of “knowledge”. In reality, as evidence builds up, the scientist becomes more and more confident until there are no longer good grounds for believing otherwise. In many cases the scientist may not even be alive to see his or her theory confirmed in all its glory: even the Newtonian model of the solar system took until the middle of the 19th century to be fully validated, more than a hundred years after Newton’s death.

A good example of this gradual transition of a scientific theory from belief to confident espousal is provided by the way Charles Darwin’s theory of evolution by natural selection, well, evolved. It’s worth remembering that Darwin took more than twenty years to build up his theory after coming home from his voyage on the HMS Beagle in 1836. At first he had only hints of an idea based on extensive yet uncatalogued and disconnected observations of flora and fauna from around the world. Some of the evidence he had documented – the names of the Galapagos finches, for instance – was wrong and had to be corrected by his friends and associates. It was only by arduous experimentation and cataloging that Darwin – a famously cautious man – was able to reach the kind of certainty that prompted him to finally publish his magnum opus, On the Origin of Species, in 1859, and even then only after he was in danger of being scooped by Alfred Russel Wallace. There was no single eureka moment when Darwin could say that he had transitioned from “believing” in evolution by natural selection to “knowing” that evolution by natural selection was true. And yet, by 1859, this most meticulous scientist was clearly confident enough in his theory that he no longer simply believed in it. But it certainly started out that way. The same uncertain transition between belief and knowledge applies to other discoveries. Einstein often talked about his faith in his general theory of relativity before observations of the solar eclipse of 1919 confirmed its major prediction, the bending of starlight by gravity, remarking that if he were wrong it would mean that the good lord had led him down the wrong garden path. When did Watson and Crick go from believing that DNA is a double helix to knowing that it is? When did Alfred Wegener go from believing in continental drift to knowing that it was real? In some sense the question is pointless. Scientific knowledge, both individually and collectively, gets cemented with greater confidence over time, until the objections simply cannot stand up to the weight of the accumulated evidence.

Faith, at least in one important sense, is thus an important part of the mindset of a scientist. So why should scientists not nod in assent if someone then tells them that there is no difference, at least in principle, between their faith and religious faith? For two important reasons. First, the “belief” that a scientist has is still based on physical and not supernatural evidence, even if all the evidence may not yet be there. What scientists call faith is still based on data and experiments, not mystic visions and pronouncements from a holy book. Second, and more importantly, unlike religious belief, scientific belief can wax and wane with the evidence; it is tentative and always subject to change. Any good scientist who believes X will be ready to let go of their belief in X if strong evidence to the contrary presents itself. That is in fact the main difference between scientists on one hand and clergymen and politicians on the other; as Carl Sagan once asked, when was the last time you heard either of the latter say, “You know, that’s a really good counterargument. Maybe what I am saying is not true after all.”

Faith may also, interestingly, underlie one of the classic features of great science – serendipity. Contrary to what we often believe, serendipity does not always refer to pure unplanned accident but to deliberately planned accident; as Louis Pasteur memorably put it, chance favors the “prepared mind”. A remarkable example of deliberate serendipity comes from an anecdote about the discovery of slow neutrons that Enrico Fermi narrated to Subrahmanyan Chandrasekhar. Slow neutrons unlocked the door to nuclear power and the atomic age. Fermi told Chandrasekhar how he came to make this discovery, which he personally considered – among a dozen seminal ones – to be his most important (from Mehra and Rechenberg, “The Historical Development of Quantum Theory, Vol. 6”):


Chandrasekhar’s invocation of Hadamard’s thesis of unconscious discovery might provide a rational underpinning for what we are calling faith. In this case, Fermi’s great intuitive jump, his seemingly irrational faith that paraffin might slow down neutrons, might have been grounded in the extensive body of knowledge about physics that was housed in his brain, forming connections that he wasn’t even aware of. Not every leap of faith can be explained this way, but some can. In this sense a scientist’s faith, unlike religious faith, is very much rational and based on known facts.

Ultimately there’s a supremely important guiding role that faith plays in science. Scientists ignore belief at their own peril. This is because they constantly have to walk the tightrope between skepticism and wonder. Shut off your belief valve completely and you will never believe anything until there is five-sigma statistical significance for it. You will dismiss promising avenues of inquiry that are, for the moment, on shaky ground. You may never be the first explorer into rich new scientific territory. But open the belief valve completely and you will have the opposite problem. You may believe anything based on the flimsiest of evidence, opening the door to crackpots and charlatans of all kinds. So where do you draw the line?

In my mind there are a few logical rules of thumb that might help a scientist mark out territories of non-belief from ones where leaps of faith might be warranted. Plausibility based on the known laws of science should play a big role. For instance, belief in homeopathy would be mistaken based on the most elementary principles of physics and chemistry, including the laws of mass action and dose response. But what about belief in extraterrestrial intelligence? There the situation is different. Based on our understanding of the laws of quantum theory, stellar evolution and biological evolution, there is no reason to believe that life could not have arisen on another planet somewhere in the universe. In this sense, belief in extraterrestrial intelligence is justified belief, even if we don’t have a single example of life existing anywhere else. We should keep looking. Faith in science is also more justified when there is a scientific crisis. In a crisis you are on desperate ground anyway, so postulating ideas that aren’t entirely based on good evidence isn’t going to make matters worse and is more likely to lead into novel territory. Planck’s desperate assumption that energy comes only in discrete packets was partly an act of faith that resolved a crisis in classical physics.

Ultimately, though, drawing a firm line is always hard, especially for topics on the fuzzy boundary. Extrasensory perception, the deep hot biosphere and a viral cause for mad cow disease are three theories that are implausible although not impossible in principle; there is little in them that flies in the face of the basic laws of science. The scientists who believe in these theories are sticking their necks out and taking a stand. They are heretics who run the risk of being called fools; since most bold new ideas in science are usually wrong, they often will be. But they are setting an august precedent.

If science is defined as the quest into the unknown, a foray into the fundamentally new and untested, then it is more important than ever, especially in this age of conformity, for belief to play a more central role in the practice of science. The greatest scientists in history have always been the ones who took leaps of faith, whether it was Bohr with his quantum atom, Einstein with his thought experiments or Noether with her deep feeling for the relationship between symmetry and conservation laws, a feeling felt but not seen. To create minds like these, we need to nurture an environment that not just allows but actively encourages scientists, especially young ones, to tread the boundary between evidence and speculation with aplomb, to exercise their rational faith with abandon. Marie Curie once said, “Now is the time to understand more, so that we may fear less.” To which I might add, “Now is the time to believe more, so that we may understand even more.”

First published on 3 Quarks Daily

Man as a "machine-tickling aphid"


On the playground in the park today, my daughter and I played with some carpenter ants and the aphids they were farming. The phenomenon never ceases to fascinate me - the aphids being sheltered from natural predators under leaves and sap-rich areas of trees by the ants; the ants milking the aphids for their tasty, sugary honeydew in turn by gently stroking them.
It's doubly fascinating because, as recounted in George Dyson's "Darwin Among the Machines", the Victorian writer and polymath Samuel Butler wondered in his groundbreaking 1872 book "Erewhon" whether human relationships with machines would one day become very similar to those between ants and aphids, with humans essentially becoming dependent on machines to provide them with constant, nurturing stimulation and feeding: "May not man himself become a sort of parasite upon the machines? An affectionate machine-tickling aphid?", wrote Butler.
In this scenario, there's no need to imagine a Terminator-style takeover of human society by computers; instead, humans will willingly give themselves over to the illusions of tender, loving care provided by machines, becoming permanently dependent and parasitic on them and becoming, in effect, code's way of replicating itself. Butler's vision was incredibly prescient, and resoundingly true as indicated by the medium in which I am typing these words.

Philip Morrison on challenges with AI

Philip Morrison, a top-notch physicist and polymath with an incredible knowledge of things far beyond his immediate field, was also a speed reader who reviewed hundreds of books on a stunning range of topics. In one of his essay collections he held forth on what he thought were the significant challenges facing machine intelligence. It strikes me that many of these are still valid (italics mine).

"First, a machine simulating the human mind can have no simple optimization game it wants to play, no single function to maximize in its decision making, because one urge to optimize counts for little until it is surrounded by many conditions. A whole set of vectors must be optimized at once. And under some circumstances, they will conflict, and the machine that simulates life will have the whole problem of the conflicting motive, which we know well in ourselves and in all our literature.


Second, probably less essential, the machine will likely require a multisensory kind of input and output in dealing with the world. It is not utterly essential, because we know a few heroic people, say, Helen Keller – who managed with a very modest cross-sensory connection nevertheless to depict the world in some fashion. It was very difficult, for it is the cross-linking of different senses which counts. Even in astronomy, if something is "seen" by radio and by optics, one begins to know what it is. If you do not "see" it in more than one way, you are not very clear what it in fact is.


Third, people have to be active. I do not think a merely passive machine, which simply reads the program it is given, or hears the input, or receives a memory file, can possibly be enough to simulate the human mind. It must try experiments like those we constantly try in childhood unthinkingly, but instructed by built-in mechanisms. It must try to arrange the world in different fashions.


Fourth, I do not think it can be individual. It must be social in nature. It must accumulate the work – the languages, if you will – of other machines with wide experience. While human beings might be regarded collectively as general-purpose devices, individually they do not impress me much that way at all. Every day I meet people who know things I could not possibly know and can do things I could not possibly do, not because we are from differing species, not because we have different machine natures, but because we have been programmed differently by a variety of experiences as well as by individual genetic legacies. I strongly suspect that this phenomenon will reappear in machines that specialize, and then share experiences with one another. A mathematical theorem of Turing tells us that there is an equivalence in that one machine's talents can be transformed mathematically to another's. This gives us a kind of guarantee of unity in the world, but there is a wide difference between that unity, and a choice among possible domains of activity. I suspect that machines will have that choice, too. The absence of a general-purpose mind in humans reflects the importance of history and of development. Machines, if they are to simulate this behavior – or as I prefer to say, share it – must grow inwardly diversified, and outwardly sociable.


Fifth, it must have a history as a species, an evolution. It cannot be born like Athena, from the head full-blown. It will have an archaeological and probably a sequential development from its ancestors. This appears possible. Here is one of computer science's slogans, influenced by the early rise of molecular microbiology: A tape, a machine whose instructions are encoded on the tape, and a copying machine. The three describe together a self-reproducing structure. This is a liberating slogan; it was meant to solve a problem in logic, and I think it did, for all but the professional logicians. The problem is one of the infinite regress which looms when a machine becomes competent enough to reproduce itself. Must it then be more complicated than itself? Nonsense soon follows. A very long instruction tape and a complex but finite machine that works on those instructions is the solution to the logical problem."


Consciousness and the Physical World, edited by V. S. Ramachandran and Brian Josephson

Consciousness and the Physical World: Proceedings of the Conference on Consciousness Held at the University of Cambridge, 9th–10th January, 1978

This is an utterly fascinating book, one that often got me so excited that I could hardly sleep or walk without having loud, vocal arguments with myself. It takes a novel view of consciousness that places minds (and not just brains) at the center of evolution and the universe. It is based on a symposium on consciousness held at Cambridge University in 1978 and is edited by Brian Josephson and V. S. Ramachandran, both incredibly creative scientists. Most essays in the volume are immensely thought-provoking, but I will highlight a few here.


The preface by Freeman Dyson states that "this book stands in opposition to the scientific orthodoxy of our day." Why? Because it postulates that minds and consciousness have as important a role to play in the evolution of the universe as matter, energy and inanimate forces. As Dyson says, most natural scientists frown upon any inclusion of the mind as an equal player in the arena of biology; for them this amounts to a taboo against the mixing of values and facts. And yet even Francis Crick, as hard a scientist as any other, once called the emergence of culture and the mind from the brain the "astonishing hypothesis." This book defies conventional wisdom and mixes values and facts with aplomb. It should be required reading for any scientist who dares to dream and wants to boldly think outside the box.

Much of the book is in some sense an extension - albeit a novel one - of ideas laid out in an equally fascinating book by Karl Popper and John Eccles titled "The Self and Its Brain: An Argument for Interactionism". Popper and Eccles propose that consciousness arises when brains interact with each other. Without interaction brains stay brains. When brains interact they create both mind and culture.

Popper and Eccles say that there are three "worlds" encompassing the human experience:

World 1 consists of brains, matter and the material universe.
World 2 consists of individual human minds.
World 3 consists of the elements of culture, including language, social culture and science.

Popper's novel hypothesis is that while World 3 clearly derives from World 2, at some point it took on a life of its own as an emergent entity, independent of individual minds and brains. In a trivial sense we know this is true, since culture and ideas propagate long after their originators are dead. What is more interesting is the hypothesis that World 2 and World 3 somehow feed on each other, so that minds, fueled by cultural determinants and novelty, also start acquiring lives of their own, lives that are no longer dependent on the substrate of World 1 brains. In some sense this is the classic definition of emergent complexity, a phrase that was not quite in vogue in 1978. Not just that, but Eccles proposes that minds can in turn act on brains just as culture can act on minds. This is of course an astounding hypothesis, since it suggests that minds are separate from brains and that they can influence culture in a self-reinforcing loop that is derived from the brain and yet independent of it.

The rest of the chapters go on to suggest similarly incredible and fascinating ideas. Perhaps the most interesting are chapters 4 and 5 by Nicholas Humphrey (a grandnephew of John Maynard Keynes) and Horace Barlow, both well-known neuroscientists. Humphrey and Barlow's central thesis is that consciousness arose as an evolutionary novelty in animals for promoting interactions – cooperation, competition, gregariousness and other forms of social communication. In this view, consciousness was an accidental byproduct of primitive neural processes that was then favored by natural selection because of its key role in facilitating interactions. This raises further interesting questions: Would non-social animals then lack consciousness? The other big question in my mind was how we can even define "non-social" animals: after all, even bacteria, not to mention creatures that are more advanced yet still primitive by human standards, like slime molds and ants, display sophisticated modes of social communication. In what sense would these creatures be conscious, then? Because the volume was written in 1978, it does not discuss Giulio Tononi's "integrated information theory" or Christof Koch's ideas about consciousness existing on a continuum, but the above-mentioned ideas certainly contain trappings of these concepts.

Finally, there is an utterly fascinating discussion of an evolutionary approach to free will. It states in a nutshell that free will is a biologically useful delusion. This is not the same as saying that free will is an *illusion*. In this definition, free will arose as a kind of evolutionary trick to ensure survival. Without free will, humans would have no sense of controlling their own fates and environments, and this feeling of lack of control would not only detrimentally impact their day-to-day existence and basic subsistence but also impact all the long-term planning, qualities and values that are the hallmark of Homo sapiens. A great analogy the volume provides is with the basic instinct of hunger. In an environment where food was infinitely abundant, a creature would be free from the burden of choice. So why was hunger "invented"? In Ramachandran's view, hunger was invented to make us explore the environment around us; similarly, the sensation of free will was "invented" to allow us to plan for the future, make smart choices and even pursue terribly important and useful but abstract ideas like "freedom" and "truth". It allows us to make what Jacob Bronowski called "unbounded plans". In an evolutionary framework, "those who believed in their ability to will survived and those who did not died out."

Is there any support for this hypothesis? As Ramachandran points out, there is at least one simple but very striking natural experiment that lends credence to the view of free will as an evolutionarily useful biological delusion. People who are depressed are well known to lack a feeling of control over their environment. In extreme cases this feeling can lead to significantly increased mortality, including death from suicide. Clearly there is at least one group of people in which the lack of a freedom to will can have disastrous consequences if not corrected.

I could go on about the other fascinating arguments and essays in these proceedings. But even reading the amazing introduction by Ramachandran and a few of the essays should give the reader a taste of the sheer chutzpah and creativity demonstrated by these scientific heretics in going beyond the boundary of the known. May this tribe of scientific heretics thrive and grow.

Rutherford on tools and theories (and machine learning)

Ernest Rutherford was the consummate master of experiment, disdaining theoreticians for playing around with their symbols while he and his fellow experimentalists discovered the secrets of the universe. He was said to have used theory and mathematics only twice - once when he discovered the law of radioactive decay and again when he used the theory of scattering to interpret his seminal discovery of the atomic nucleus. But that's where his tinkering with formulae stopped.

Time and time again Rutherford used relatively simple equipment and tools to pull off seemingly miraculous feats. He had already won the Nobel Prize for chemistry by the time he discovered the nucleus - a rare and curious case of a scientist making their most important discovery after winning a Nobel Prize. The nucleus clearly deserved another Nobel, but so did his fulfillment of the dreams of the alchemists when he transmuted nitrogen into oxygen by artificial disintegration of the nitrogen atom in 1919. These achievements fully justified Rutherford's stature as perhaps one of the two greatest experimental physicists in modern history, the other being Michael Faraday. But they also justified the primacy of tools in engineering scientific revolutions.

However, Rutherford was shrewd and wise enough to recognize the importance of theory - he famously mentored Niels Bohr, presumably because "Bohr was different; he was a football player." And he was on good terms with both Einstein and Eddington, the doyens of relativity theory in Europe. So it's perhaps not surprising that he made a quite interesting observation about the discovery of radioactivity that attests to the importance of theoretical ideas.

As everyone knows, radioactivity in uranium was discovered by Henri Becquerel in 1896, then taken to great heights by the Curies. But as Rutherford points out in a revealing paragraph (Brown, Pais and Pippard, "Twentieth Century Physics", Vol. 1; 1995), it could potentially have been discovered a hundred years earlier. More accurately, it could have been experimentally discovered a hundred years earlier.


Rutherford's basic point is that unless there's an existing theoretical framework for interpreting an experiment - providing the connective tissue, in some sense - the experiment remains merely an observation. Depending only on experiments to automatically uncover correlations and new facts about the world is therefore tantamount to hanging on to a tenuous, risky and uncertain thread that might lead you in the right direction only occasionally, by pure chance. In some ways Rutherford here is echoing Karl Popper's refrain that all observations, however unbiased they seem, are "theory-laden"; in the absence of the right theory, there's nothing to ground them.

It strikes me that Rutherford's caveat applies well to machine learning. One goal of machine learning - at least as believed by its most enthusiastic proponents - is to find patterns in the data, whether the data is dips and rises in the stock market or signals from biochemical networks, by blindly letting the algorithms discover correlations. But simply letting the algorithm loose on data would be like letting gold leaf electroscopes and other experimental apparatus loose on uranium. Even if they find some correlations, these won't mean much in the absence of a good intellectual framework connecting them to basic facts. You could find a correlation between two biological responses, for instance, but in the absence of a holistic understanding of how the components responsible for these responses fit within the larger framework of the cell and the organism, the correlations would stay just that - correlations without a deeper understanding.

What's needed to get to that understanding is machine learning plus theory, whether it's a theory of the mind for neuroscience or a theory of physics for modeling the physical world. That's why efforts to supplement machine learning by embedding knowledge of the laws of physics or biology in the algorithms are likely to work. Efforts that blindly use machine learning to try to discover truths about natural and artificial systems from correlations alone, by contrast, would be like Rutherford's fictitious uranium salts of 1806 giving off mysterious radiation that is detected but not interpreted - a question waiting for an explanation.
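
To make the contrast concrete, here is a toy sketch of what "machine learning plus theory" can look like in practice. The setup, the variable names and the numbers are all my own illustrative choices, not anything from Rutherford or from any particular machine learning framework: a free-form model is fit to noisy radioactive-decay counts, with an optional penalty that nudges the fit toward obeying the known decay law dN/dt = -kN. With the penalty switched off the model is correlations alone; with it switched on, theory constrains what the data can be made to say.

```python
# Toy sketch: "machine learning plus theory" as a physics-informed penalty.
# Everything here (names, numbers, the polynomial model) is illustrative only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic "experimental" data: exponential decay plus measurement noise
k_true, n0 = 0.7, 100.0
t_data = np.linspace(0.0, 5.0, 40)
counts = n0 * np.exp(-k_true * t_data) + rng.normal(0.0, 3.0, t_data.size)

# Collocation points where the physics penalty is evaluated; they extend
# beyond the data into the region where we would like to extrapolate.
t_phys = np.linspace(0.0, 8.0, 80)

def model(coeffs, t):
    """Free-form polynomial 'learner': knows nothing about physics."""
    return np.polyval(coeffs, t)

def loss(coeffs, weight):
    """Data misfit plus a physics penalty enforcing dN/dt ≈ -k*N."""
    data_term = np.mean((model(coeffs, t_data) - counts) ** 2)
    deriv = np.polyval(np.polyder(coeffs), t_phys)
    physics_term = np.mean((deriv + k_true * model(coeffs, t_phys)) ** 2)
    return data_term + weight * physics_term

degree = 4
for weight in (0.0, 10.0):   # 0: correlations alone; 10: theory added
    result = minimize(loss, x0=np.zeros(degree + 1), args=(weight,))
    prediction = model(result.x, 8.0)   # extrapolate outside the data
    print(f"physics weight {weight:4.1f}: predicted count at t=8 is "
          f"{prediction:8.1f} (true value {n0 * np.exp(-k_true * 8.0):.1f})")
```

The specific numbers don't matter; the point is that the same fit, once given a theoretical scaffold, is forced to say something consistent with the framework that gives its correlations meaning.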

Has Carl Sagan's "Contact" aged well?

I have watched "Contact" several times and was watching it again the other day. Carl Sagan got a lot of things right in it, including the truth that even scientists have "faith" in matters disconnected from science. But one of the key parts of the film hasn't aged well for me.

For those who haven't seen the film or read the book, Ellie Arroway, a brilliant astronomer played by Jodie Foster, is on a shortlist of people selected to be passengers on an interstellar machine constructed according to blueprints received by radio transmission from the star Vega. As earth's first ambassador to space, she is interviewed by a panel on her views on different topics. What would be the most important question she would ask the alien civilization?
An old flame who is on the panel - and who has a personal vested interest in not having her go, since he still has romantic feelings for her - asks her squarely if she believes in God. The other members of the panel think it would be unwise to pick as earth's first interstellar ambassador someone who does not believe what 95% of the world's population believes. They think that one of the foremost questions Ellie should ask the aliens, should she meet them, is, "What God do you worship?". Ellie, being a scientist, naturally says that she can't believe anything without demonstrated evidence. Candidate rejected.
It seems to me that Sagan really had an opportunity here, if not in the film then in the book, to showcase the theological and intellectual debates and problems concerning religion. The first question Ellie should have asked the panelists is: "When you ask whether I believe in God, I would ask you, *What* God? Those 95% of people you are referring to worship a zillion different Gods, from Jesus to Brahma. And there are even more, now-extinct Gods that their ancestors believed in, including Odin and Huitzilopochtli. Which God am I supposed to believe in? And do we think the aliens wouldn't ask me which one of these many Gods I believe in? What if I say the wrong name?". That would have driven home the central dilemma with believing in God right there.
But there might have been another, much more important question regarding religion that Arroway could have asked, one that is independent of specific Gods. Religion clearly serves an important biological and evolutionary purpose, one explicated by numerous scientists. Instead of asking what God the aliens worship, the scientifically relevant question would be, "What are your deepest beliefs and how do you satisfy them?". This would have been a question rooted in science, and yet one that would have provided an important answer about religion as, in Daniel Dennett's words, a "natural phenomenon".
As it turns out, the answer Arroway gives is one I would have given myself: she says she would have asked the aliens how they did it; how they avoided blowing themselves up while developing such advanced technology. Especially in our present circumstances, asking a technologically advanced civilization that seems to have lasted much longer than us how it prevented self-extinction would be perhaps the most important question we could ask.
But I can understand why Sagan had his character ask that question: it sets her up for the climax. After being transported to another world, Arroway sees and has a conversation with her loving father, one who had done everything he could to develop her interest and skills in science before tragically dying of a heart attack when Ellie was ten. When Ellie comes back after having that heartrending conversation, she comes to know that from the point of view of people here on earth, she was gone for only a short time, and her audiovisual equipment recorded nothing but noise. She is kept holding on to her vision of what is effectively an out-of-body experience and conversation with her father by the same slender thread which she had rejected before - faith. Sagan's point is that even scientists can have powerful experiences which they have to take on faith because there's no other way to explain them.
But upon watching that part again I still wasn't convinced of what Sagan was trying to say. If he was trying to propose reconciliation between science and religion, he was picking the wrong argument based on faith here. A scientist's "faith" that the sun will rise tomorrow is very different from faith that Jesus was born of a virgin. The former is predicated on well-understood laws of science that result in a probabilistic model which we can believe with high confidence; if the sun indeed failed to rise tomorrow, not just common sense but much of our understanding of physics, astronomy and planetary science would suddenly be called into question. That means that other phenomena that depend on this understanding would also be called into question. A scientist may take some things on "faith", but this is not really faith so much as it is informed judgement based on confidence limits and well-constructed models of reality.
Ultimately though, as much as I think Sagan could have done a much better job with these matters, I think the most important point he makes is still valid: that point simply is that, as monumentally useful and important as science is, holding on to it is very hard and needs a lot of rock-solid conviction. That's a message we can all be on board with.

When it comes to science, the practical is the moral and the moral the practical

Ignaz Semmelweis
We seem to live in a time when skepticism of science and its experts runs deep and where political mandarins of all persuasions are all too eager to make science out to be a villain. It is at times like this that we must remind ourselves that science has not just been the greatest force for practical good that we have discovered but the most moral one as well.

It's easy to make the mistake of thinking of this statement as controversial, especially in a time when science is knocked for its perceived evils. But think about it in simple terms, and in fact in terms of a sphere where the practical and moral improvements are not just obvious but coincident. This sphere is the conquest of disease. I have been reading a fantastic book recently - Frank von Hippel's "The Chemical Age". The title betrays the content. The book is actually an amazing journey through various diseases that literally ended civilizations and destroyed the lives of millions, the lifesaving drugs and other public health measures science devised to end them and the heroic efforts of dogged individuals ranging from Louis Pasteur to Ronald Ross who defeated these implacable foes through blood, sweat and tears, sometimes quite literally so; Ross, during his efforts to prove that the malarial parasite was spread through the mosquito, worked so hard day and night at his microscope that the hinges rusted because of his sweat, the eyepiece cracked and he almost lost his eyesight. Another brave and almost otherworldly soul, a University of Pennsylvania doctoral student named Stubbins Ffirth, injected blood, urine and saliva from yellow fever patients into his body to rule out direct patient to patient transmission. These were the heroic deeds of heroic men.

But look at what they accomplished. Diseases like yellow fever, malaria and typhus - killers whose death toll easily exceeds the lives taken by all the wars of the world combined, which were endemic and a fact of life in every city and village, which were literal destroyers of armies and even civilizations, and scourges of families whose children they took away - were tamed, drastically reduced in intensity and fatal reach, and finally contained. They haven't disappeared from our planet, but we, at least those of us who live in most developing and developed countries, hardly even think of any of these maladies any more, let alone know someone who has died of them.

This is not just a practical triumph of science but a profoundly moral one. Think of all the men, women and children numbering in the millions whose lives were saved, extended and enriched because of the innovations of chemistry and medicine, who could love and help and be there for each other and enjoy the blessings of precious life which in earlier ages was cruelly snatched away from them on a regular basis. In all these cases the "practical" and "moral" impact of science is indistinguishable.

This same overlap between the practical and the moral exists in other spheres. The discovery of cheap distillation methods for hydrocarbons not only enabled electricity and transportation but kept people warm in cold climates and cool in hot ones. Better chemical treatment of textiles led to similarly insulating materials that protected the vulnerable and the young. And of course, far and away, the methods of artificial selection and genetic engineering have literally led to the feeding and saving of millions in parts of the world like India and China. If this existential improvement to humanity's basic predicament by science isn't moral, I don't know what is.

The same book, von Hippel's, raises a counterargument when it talks about chemical weapons which disfigured and maimed millions. And yet the numbers don't compare. As hideous as thalidomide, sarin, phosgene and DDT are, the lives they claimed pale in significance and number compared to the lives saved by antibiotics, pesticides, disinfectants and the Haber-Bosch process; antibiotics, for instance, helped bring the battlefield death toll from infection down from around 200% of combat deaths in the Civil War to less than 10% in World War 2. Simpler measures like hand-washing and better sanitation were also the fruits of scientific discovery, and heretics like Ignaz Semmelweis who contributed to these measures were often hounded and ostracized; Semmelweis met a terribly tragic end when he died from beatings and possibly a self-inflicted wound in a mental asylum.

For me the conclusion is obvious. Science can indeed be used for good and evil, but the good outweighs the evil by an infinite amount. This is a timely reminder that the greatest force for practical improvement discovered by humanity is also the most moral one.

Complementarity And The World: Niels Bohr’s Message In A Bottle

Werner Heisenberg was on a boat with Niels Bohr and a few friends, shortly after he discovered his famous uncertainty principle in 1927. A bedrock of quantum theory, the principle states that one cannot determine both the position and the momentum of particles like electrons with arbitrary accuracy. Heisenberg’s discovery revealed an intrinsic opposition between these quantities; better knowledge of one necessarily meant worse knowledge of the other. Talk turned to physics, and after Bohr had described Heisenberg’s seminal insight, one of his friends quipped, “But Niels, this is not really new, you said exactly the same thing ten years ago.”
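
In symbols, the principle sets a hard lower bound on the product of the two uncertainties:

$$\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},$$

where $\Delta x$ and $\Delta p$ are the uncertainties in position and momentum and $\hbar$ is the reduced Planck constant; squeeze one factor and the other must grow.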

In fact, Bohr had already convinced Heisenberg that his uncertainty principle was a special case of a more general idea that Bohr had been expounding for some time – a thread of Ariadne that would guide travelers lost in the quantum world; a principle of great and general import named the principle of complementarity.

Complementarity arose naturally for Bohr after the strange discoveries of subatomic particles revealed a world that was fundamentally probabilistic. The positions of subatomic particles could not be assigned with definite certainty but only with statistical odds. This was a complete break with Newtonian classical physics, where particles had a definite trajectory, a place in the world order that could be predicted with complete certainty if one had the right measurements and mathematics at hand. In 1925, working at Bohr’s theoretical physics institute in Copenhagen, Heisenberg – Bohr’s most important protégé – had invented quantum theory when he was only twenty-four. Two years later came uncertainty; Heisenberg grasped that foundational truth about the physical world when Bohr was away on a skiing trip in Norway and Heisenberg was taking a walk at night in the park behind the institute.

When Bohr came back he was unhappy with the paper Heisenberg had written, partly because he thought the younger man seemed to echo his own ideas, but more understandably because Bohr – a man who was exasperatingly famous for going through a dozen drafts of a scientific paper and several drafts of even private letters – thought Heisenberg had not expressed himself clearly enough. The 42-year-old kept working on the 26-year-old until the latter admitted that “the uncertainty relations were just a special case of the more general complementarity principle.”

So what was this complementarity principle? Simply put, it was the observation that there are many truths about the world and many ways of seeing it. These truths might appear divergent or contradictory, but they are all equally essential in representing the true nature of reality; they are complementary. As Bohr famously put it, “The opposite of a big truth is also a big truth”. Complementarity provided a way to reconcile the paradoxes that seemed to bedevil quantum theory’s interpretation of reality.

The central scientific paradox was what is called wave-particle duality. In 1803, the British polymath Thomas Young had proposed that light, contrary to Isaac Newton’s view of it, consists of waves; phenomena like diffraction and interference make this wave nature clear. A hundred years later, in 1905, Einstein proposed that light in fact consists of particles, an idea he invoked in order to explain the photoelectric effect and which won him a Nobel Prize; these particles were later called photons. Soon it was found through other experiments that all subatomic particles, and not just photons, could display both wave and particle behavior. In 1924, the French physicist and aristocrat Louis de Broglie saw a way through the impasse when he came up with a simple equation that related the momentum of a particle – a particle property – inversely to its wavelength – a wave property.
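
That equation is disarmingly short:

$$\lambda = \frac{h}{p},$$

where $\lambda$ is the wavelength, $p$ the momentum and $h$ Planck’s constant: the larger the momentum, the shorter the associated wavelength.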

In spite of de Broglie’s insight, particles clearly don’t look like waves and waves don’t look like particles in real life. In fact the very names seem to put them at odds with one another. It was Bohr who saw both the problem and the solution. Particles and waves both exist and are equally valid and essential ways of interpreting the quantum world. Depending on what experiment you do you might see one or the other and never both, but they are not contradictory, they are complementary. Most crucially, you simply cannot make sense of reality without having both in hand. It was a powerful insight that cut through the complexities of intuition and language; it was not too different in principle from other counterintuitive truths that science has uncovered, for instance the truth that both lighter and heavier bodies fall at the same rate. Complementarity rationalized opposing tendencies of the physical world and indicated that they were one. It was what had made Bohr subsume the opposing quantities in Heisenberg’s uncertainty principle under the same rubric.

Complementarity was also pregnant with far more general interpretation. The most effective application of it to human affairs in Bohr’s hands was the problem posed by nuclear weapons. Even before the bomb had been used on Hiroshima, Bohr saw deeper and further than anyone else that the very fact that nuclear weapons are so enormously destructive might make them the most potent force for peace that the world has ever seen, simply because statesmen will realize that nobody can truly “win” a nuclear war if everyone has them. “We are in a completely new situation that cannot be resolved by war”, Bohr said. The complementarity of the bomb continues to keep the peace through deterrence.

Another noteworthy example was a speech delivered by Bohr in 1938 to the International Congress of Anthropological and Ethnological Sciences at Kronborg Castle in Denmark. Apologizing at the outset for presuming to speak about a topic on which he was not an expert, Bohr proceeded to provide a succinct summary of complementarity in the context of atomic physics. Turning to biology, he then made the perspicacious observation – still the subject of considerable debate – that reason and instinct, which might appear to be opposed to each other, are complementary, together providing a complete picture of a sentient being. Bohr then came to the crux of the matter, pointing out that complementarity is of the essence when people are judging other cultures which might seem divergent from their own but which turn out to be different and equally productive ways of looking at the world. As Bohr eloquently put it: “Each culture represents a harmonious balance of traditional conventions by means of which latent potentialities of human life can unfold themselves in a way which reveals to us new aspects of its unlimited richness and variety.” Bohr believed that contact between cultures can go a long way in not just dispelling biases but in mutually enriching both parties: “A more or less intimate contact between different human societies can lead to a gradual fusion of traditions, giving birth to a quite new culture”. This is as clear an appeal for internationalism and mutual understanding as one can think of; if everyone had understood complementarity, maybe we would have had less fascism, imperialism and genocide. The final goal of complementary views of societies, as Bohr pointed out powerfully in the same lecture, isn’t different from the goal of science as a whole – it is “the gradual removal of prejudices”.

As we approach what seem to be novel problems in the 21st century, Bohr’s complementarity is a message in a bottle from one fraught world to another, telling us that seeing these new problems through the lens of an old principle can be most rewarding. We seem to live in a time when many see social and political problems through a binary, black-or-white, zero-sum lens. Either my viewpoint is right or yours, but not both. Complementarity bridges that division. For instance, consider the problem of individualism vs. communalism, a divide that also hints at the cultural divide Bohr spoke about, in this case largely an Eastern vs. Western divide. Western society is fiercely individualistic; self-interest guides people’s lives and most people don’t want others to tell them that they should live their lives for others. Meanwhile, Eastern and some European societies are much more communal; community interests often override self-interest and individuals are told that their self-development should take a backseat to the development of their community and society. Bohr’s complementarity tells us that this either-or framing need not exist. Communal and individualistic views are both essential for looking at the world and building a more productive society; in fact one can gain self-knowledge and wisdom by working for a community, and likewise a community can be improved when people engage in individualistic self-improvement that helps everyone.

There are other problems for which complementarity provides a potential solution. I will speak mostly for the United States since that’s where I live, but these problems are in fact global. Consider the problem of immigration, one to which Bohr’s 1938 address is directly applicable. People criticized as “globalists” think that unfettered immigration is a net good. The opposing camp thinks that preserving a nation’s culture is important, and too much or too rapid immigration will weaken this culture. But complementarity tells us that nationalism is in fact strengthened when immigrants work together for the common good of the country. At the same time, immigrants should put their country first and prioritize work that will strengthen their nation’s economy, military and social institutions. We are global citizens, but we are also shaped by evolution and culture to take care of our immediate own. The opposite of a big truth is also a big truth.

Even scientific debates like the nature (genes) vs. nurture (environment) conundrum can benefit from complementary views. People criticized as biological “essentialists” believe that genes dictate a lot of an individual’s physical and psychological makeup, while the opposing “nurture” camp believes that much of the effect of genes can be changed by the environment. But complementarity says that, just like the joint wave-particle view of reality, individuals are whole and complete, and this wholeness arises from a combination of genes and environment. In that sense, how much of a person’s mental and physical constitution we can control by manipulating either their genes or their environment is almost irrelevant. What’s relevant is the basic understanding in the first place that both matter; if both camps agreed even on this baseline, they would already be talking a lot more with each other.

A third application of complementarity to international affairs, one which stems directly from Bohr’s view of the complementarity of the bomb, is to the relationship between the United States and China, a relationship which will likely be the single most important geopolitical determinant of the 21st century. China clearly has an autocratic regime that is not likely to yield to demands for more democratic behavior, both internally and externally, anytime soon. This has led many in the United States to regard China as an implacable foe, almost a second Soviet Union. An important consequence of this view has been to see almost every technological development in the two countries, from gene editing to artificial intelligence to new weapons, as a contest.

But irrespective of the moral wisdom of engaging in this contest, complementarity tells us that such contests are likely to lead to the mutual ruin of both China and the United States, and by extension the rest of the world; for the same reason that any arms race would hollow out countries’ coffers and ramp up the specter of mutual annihilation. The reason is simple: both computer code and the physics of nuclear weapons are products of the fundamental laws of science and technology discovered or invented by human minds. Both can be divined and implemented by any country with smart scientists and engineers, which basically means any developing or developed country. An arms race in AI between China and the United States, for instance, would be as futile and dangerous as was the nuclear arms race between the United States and the former Soviet Union. Both countries would be fooling themselves if they think they can write better computer code and keep it secret for a long time. In that sense the fact that computer code, just like nuclear fission, is essentially a discovery of the human mind poses inconvenient truths for both countries. Whether we like it or not, we need to realize the complementarity of artificial intelligence akin to how we grudgingly realized the complementarity of the bomb: we need to realize that AI is powerful, that it is dangerous, that its secrets cannot stay secret for very long, that there are no real defenses against it, that the very dangers of AI cry for peaceful solutions to AI, and therefore that mutual cooperation between China and the United States under an umbrella of an international organization like the United Nations would be the only solution to avoid mutual cyber-destruction. China might be autocratic, but it has self-interests and wouldn’t want to see its own ruin. In the end, harmony between the United States and China might not be forced by bridging the moral divide between the two countries’ social and political systems: it would be forced by the very laws of science and technology.

Without oversimplifying the issue, it’s clear to me that Bohr’s complementarity provides a mediating middle ground for almost any other social or political issue I can think of; it’s not so much that it offers a solution but that it will compel each side to see the importance of the other side’s argument in providing a complete view of reality that cannot be provided by either viewpoint by itself. Pro-life or pro-choice? One can respect both the life of an unborn child and the life of the mother; the two are complementary. Socialism or capitalism? One can certainly have a mixed market economy – of the kind found in Niels Bohr’s home country for instance – that would give us the benefits of both. Climate legislation or rapid economic growth? One can create jobs related to new climate technologies that will result in economic growth. Science or religion? They address complementary aspects of the world, Stephen Jay Gould’s “non-overlapping magisteria”.

If we accept the idea of complementarity, we are in essence accepting the validity of all ways of looking at the world, and not just one. This does not mean that all ways are equally right – we can’t accept the germ theory of disease and the “theory” of diseases as a punishment from God on equal terms – but it is precisely through placing them on a level playing field and letting them play out their logical flow that we can even know how much of which view is right. In addition, Bohr realized that the world is indeed gray, that even flawed visions may contain snatches of truth that should be acknowledged as potential building blocks in our view of reality. But ultimately, Bohr’s plea for complementarity was a plea for what he called an “open world”, an ideal that for him was the highest that the peoples of the world could aspire to, an ideal that arose naturally from the democratic republic of science. If we accept complementarity, we automatically become open to examining every single approach to a problem, every way of parsing reality. Most importantly, we become open to true, unfettered communication with our fellow human beings, a tentative but lasting step toward Bohr’s – and science’s – “gradual removal of prejudices”. That seems like an important message for today.

First published on 3 Quarks Daily.

Steven Weinberg (1933-2021)

I was quite saddened to hear about the passing of Steven Weinberg, perhaps the dominant living figure from the golden age of particle physics. I was especially saddened since he seemed to be doing fine, as indicated by a lecture he gave at the Texas Science Festival this March. I think many of us thought that he was going to be around for at least a few more years. 

Weinberg was one of a select few individuals who transformed our understanding of elementary particles and cemented the creation of the Standard Model, in his case by unifying the electromagnetic and weak forces; for this he shared the 1979 Nobel Prize in physics with Abdus Salam and Sheldon Lee Glashow. His 1967 paper heralding the unification, "A Model of Leptons", was only three pages long and remains one of the most highly cited articles in physics history.

But what made Weinberg special was that he was not only one of the most brilliant theoretical physicists of the 20th century but also a pedagogical master with few peers. His many technical textbooks, especially his three-volume "The Quantum Theory of Fields", have educated a generation of physicists; meanwhile, his essays in the New York Review of Books and other venues, along with his collections of articles published as popular books, have educated the lay public about the mysteries of physics. But in his popular books Weinberg also revealed himself to be a real Renaissance Man, writing not just about physics but about religion, politics, philosophy, history (including the history of science), opera and literature. He was also known for his political advocacy of science. Among scientists of his generation, only Freeman Dyson had that kind of range.

There have been some great tributes to him, and I would point especially to the ones by Scott Aaronson and Robert McNees, both of whom interacted with Weinberg as colleagues. The tribute by Scott especially shows the kind of independent streak that Weinberg had, never content to go with the mainstream and always seeking orthogonal viewpoints and original thoughts. In that he very much reminded me of Dyson; the two were in fact friends and served together on the government advisory group JASON, and my conversation with Weinberg, which I describe below, ended with him asking me to give my regards to Freeman, whom I was meeting a few weeks later.

I had the good fortune of interacting with Steve on two occasions, both rewarding. The first time was when I had the opportunity to join him on a Canadian television panel on the challenges of Big Science. You can see the discussion here:

https://www.tvo.org/video/the-challenge-of-big-science

The next time was a few years later when I contacted him about a project and asked whether he had some thoughts to share about it. Steve didn't know me personally (although he did remember the Big Science panel) and was even then very busy with writing and other projects. In addition, the project wasn't something close to his immediate interests, so I was surprised when not only did he respond right away but asked me to call him at 10 AM on a Sunday and spoke generously for more than an hour. I still have the recording.

Steve was a great physicist, a gentleman and a Renaissance Man, a true original. We are unlikely to see the likes of him for a long time. 

One of the reasons I feel wistful is that he was among the last of the creators of modern particle physics, from an enormously fruitful time in which theory went hand in hand with experiment. This is different from the last twenty years, in which fundamental physics and especially string theory have been struggling to make experimental connections. In cosmology, however, there have been very exciting developments, and Weinberg, who devoted much of his last few decades to the topic, was certainly very interested in these. Hopefully fundamental physics can become as engaged with the productive interplay of theory and experiment as cosmology and condensed matter physics are, and hopefully we can again resurrect the golden era of science in which Steven Weinberg played such a commanding role.