Philip Morrison on challenges with AI

Philip Morrison, a top-notch physicist and polymath with an incredible knowledge of things beyond his immediate field, was also a speed reader who reviewed hundreds of books on a stunning range of topics. In an essay from one of his collections, he held forth on what he thought were the significant challenges with machine intelligence. It strikes me that many of these are still valid (italics mine).

"First, a machine simulating the human mind can have no simple optimization game it wants to play, no single function to maximize in its decision making, because one urge to optimize counts for little until it is surrounded by many conditions. A whole set of vectors must be optimized at once. And under some circumstances, they will conflict, and the machine that simulates life will have the whole problem of the conflicting motive, which we know well in ourselves and in all our literature.


Second, probably less essential, the machine will likely require a multisensory kind of input and output in dealing with the world. It is not utterly essential, because we know a few heroic people, say, Helen Keller, who managed with a very modest cross-sensory connection nevertheless to depict the world in some fashion. It was very difficult, for it is the cross-linking of different senses which counts. Even in astronomy, if something is "seen" by radio and by optics, one begins to know what it is. If you do not "see" it in more than one way, you are not very clear what it in fact is.


Third, people have to be active. I do not think a merely passive machine, which simply reads the program it is given, or hears the input, or receives a memory file, can possibly be enough to simulate the human mind. It must try experiments like those we constantly try in childhood unthinkingly, but instructed by built-in mechanisms. It must try to arrange the world in different fashions.


Fourth, I do not think it can be individual. It must be social in nature. It must accumulate the work--the languages, if you will--of other machines with wide experience. While human beings might be regarded collectively as general-purpose devices, individually they do not impress me much that way at all. Every day I meet people who know things I could not possibly know and can do things I could not possibly do, not because we are from differing species, not because we have different machine natures, but because we have been programmed differently by a variety of experiences as well as by individual genetic legacies. I strongly suspect that this phenomenon will reappear in machines that specialize, and then share experiences with one another. A mathematical theorem of Turing tells us that there is an equivalence in that one machine's talents can be transformed mathematically to another's. This gives us a kind of guarantee of unity in the world, but there is a wide difference between that unity, and a choice among possible domains of activity. I suspect that machines will have that choice, too. The absence of a general-purpose mind in humans reflects the importance of history and of development. Machines, if they are to simulate this behavior--or as I prefer to say, share it--must grow inwardly diversified, and outwardly sociable.


Fifth, it must have a history as a species, an evolution. It cannot be born like Athena, from the head full-blown. It will have an archaeological and probably a sequential development from its ancestors. This appears possible. Here is one of computer science's slogans, influenced by the early rise of molecular microbiology: A tape, a machine whose instructions are encoded on the tape, and a copying machine. The three describe together a self-reproducing structure. This is a liberating slogan; it was meant to solve a problem in logic, and I think it did, for all but the professional logicians. The problem is one of the infinite regress which looms when a machine becomes competent enough to reproduce itself. Must it then be more complicated than itself? Nonsense soon follows. A very long instruction tape and a complex but finite machine that works on those instructions is the solution to the logical problem."


Consciousness and the Physical World: Proceedings of the Conference on Consciousness Held at the University of Cambridge, 9th-10th January 1978, edited by V. S. Ramachandran and Brian Josephson

This is an utterly fascinating book, one that often got me so excited that I could hardly sleep or walk without having loud, vocal arguments with myself. It takes a novel view of consciousness that places minds (and not just brains) at the center of evolution and the universe. It is based on a symposium on consciousness held at Cambridge University in 1978 and is edited by Brian Josephson and V. S. Ramachandran, both incredibly creative scientists. Most essays in the volume are immensely thought-provoking, but I will highlight a few here.


The preface by Freeman Dyson states that "this book stands in opposition to the scientific orthodoxy of our day." Why? Because it postulates that minds and consciousness have as important a role to play in the evolution of the universe as matter, energy and inanimate forces. As Dyson says, most natural scientists frown upon any inclusion of the mind as an equal player in the arena of biology; for them it violates the taboo against mixing values and facts. And yet even Francis Crick, as hard a scientist as any other, once called the emergence of culture and the mind from the brain the "astonishing hypothesis." This book defies conventional wisdom and mixes values and facts with aplomb. It should be required reading for any scientist who dares to dream and wants to boldly think outside the box.

Much of the book is in some sense an extension - albeit a novel one - of ideas laid out in an equally fascinating book by Karl Popper and John Eccles titled "The Self and Its Brain: An Argument for Interactionism". Popper and Eccles propose that consciousness arises when brains interact with each other. Without interaction, brains stay brains. When brains interact, they create both mind and culture.

Popper and Eccles say that there are three "worlds" encompassing the human experience:

World 1 consists of brains, matter and the material universe.
World 2 consists of individual human minds.
World 3 consists of the elements of culture, including language, social culture and science.

Popper's novel hypothesis is that while World 3 clearly derives from World 2, at some point it took on a life of its own as an emergent entity that was independent of individual minds and brains. In a trivial sense we know this is true, since culture and ideas propagate long after their originators are dead. What is more interesting is the hypothesis that World 2 and World 3 somehow feed on each other, so that minds, fueled by cultural determinants and novelty, also start acquiring lives of their own, lives that are no longer dependent on the substrate of World 1 brains. In some sense this is the classic definition of emergent complexity, a phrase that was not quite in vogue in 1978. Not just that, but Eccles proposes that minds can in turn act on brains just as culture can act on minds. This is of course an astounding hypothesis, since it suggests that minds are separate from brains and that they can influence culture in a self-reinforcing loop that is derived from the brain and yet independent of it.

The rest of the chapters go on to suggest similarly incredible and fascinating ideas. Perhaps the most interesting are chapters 4 and 5 by Nicholas Humphrey (a grandnephew of John Maynard Keynes) and Horace Barlow, both well-known neuroscientists. Barlow and Humphrey's central thesis is that consciousness arose as an evolutionary novelty in animals for promoting interactions - cooperation, competition, gregariousness and other forms of social communication. In this view, consciousness was an accidental byproduct of primitive neural processes that was then selected by natural selection to thrive because of its key role in facilitating interactions. This raises more interesting questions: Would non-social animals then lack consciousness? The other big question in my mind was how we can even define "non-social" animals: after all, even bacteria, not to mention creatures like slime molds and ants that are primitive by human standards, display sophisticated modes of social communication. In what sense would these creatures be conscious, then? Because the volume was written in 1978, it does not discuss Giulio Tononi's "integrated information theory" or Christof Koch's ideas about consciousness existing on a continuum, but the above-mentioned ideas certainly contain trappings of these concepts.

There is finally an utterly fascinating discussion of an evolutionary approach to free will. It states in a nutshell that free will is a biologically useful delusion. This is not the same as saying that free will is an *illusion*. In this definition, free will arose as a kind of evolutionary trick to ensure survival. Without free will, humans would have no sense of controlling their own fates and environments, and this feeling of lack of control would not only detrimentally impact their day-to-day existence and basic subsistence but impact all the long-term planning, qualities and values that are the hallmark of Homo sapiens. A great analogy that the volume provides is with the basic instinct of hunger. In an environment where food was infinitely abundant, a creature would be free from the burden of choice. So why was hunger "invented"? In Ramachandran's view, hunger was invented to drive us to explore the environment around us; similarly, the sensation of free will was "invented" to allow us to plan for the future, make smart choices and even pursue terribly important and useful but abstract ideas like "freedom" and "truth". It allows us to make what Jacob Bronowski called "unbounded plans". In an evolutionary framework, "those who believed in their ability to will survived and those who did not died out."

Is there any support for this hypothesis? As Ramachandran points out, there is at least one simple but very striking natural experiment that lends credence to the view of free will being an evolutionarily useful biological delusion. People who are depressed are well known to lack a feeling of control over their environment. In extreme cases this feeling can lead to significantly increased mortality, including death from suicide. Clearly there is at least one group of people in which the lack of a freedom to will can have disastrous consequences if not corrected.

I could go on about the other fascinating arguments and essays in these proceedings. But even reading the amazing introduction by Ramachandran and a few of the essays should give the reader a taste of the sheer chutzpah and creativity demonstrated by these scientific heretics in going beyond the boundaries of the known. May this tribe of scientific heretics thrive and grow.