If you want to improve AI, let it evolve toward emergence

One of my favorite quotes about artificial intelligence is often attributed to pioneering computer scientists Hans Moravec and Marvin Minsky. To paraphrase: “The most important thing we have learned from three decades of AI research is that the hard things are easy and the easy things are hard”. In other words, we have been hoodwinked for a long time. We thought that vision and locomotion and housework would be easy and language recognition and chess and driving would be hard. And yet it has turned out that we have made significant strides in tackling the latter while hardly making a dent in the former.
Why is this? Clearly one trivial reason is that we failed to define “easy” and “hard” properly, so in one sense it’s a question of semantics. But the question still persists: what makes the easy problems hard? We got fooled by the easy problems because we took them for granted. Things like facial recognition and locomotion come so easily to human beings, even human beings who are only a few months old, that we thought they would be easy for computers too. But the biggest obstacle for an AI today is not the chess-playing ability of a Garry Kasparov but the simple image recognition abilities of an average one-year-old.
What we forgot was that these things seem easy only because they are the sleek final façade of a four-billion-year process that progressed through countless fits and starts, wrong alleys, dead ends and random experimentation. We see the bare shining mountaintop but we don’t see the tortuous road leading to it. If you looked under the hood, both spatial and temporal, of a seemingly simple act like bipedal navigation over a slightly rocky surface, you would find a veritable mess of failed and successful experiments in the history of life. If the brain were an electrical box that presented an exterior of wondrous simplicity and efficiency, inside the box would be fused wires, wires leading nowhere, wires with the middles cut off, wires sprouting other wires, stillborn wires; a mélange of wired chaos with a thread of accident and opportunity poking through it. We see only that ascendant thread, not the field littered with dead cousins and ancestors in which it resides.
Over the decades, much of AI has tried to grasp the essence of this evolutionary circus by reproducing the essential structure of the human brain. The culmination of these efforts is the neural network, a layered abstract model of virtual neurons that tries to capture different aspects of reality with adjustable weights on every layer and a feedback loop that minimizes the difference between the model and reality. So far so good, but neural networks model only the end product, not the process. For the longest time they were not allowed to deliberately make mistakes and mirror the contingent, error-ridden processes of evolution that are grounded in mutation and genetic recombination. They made the evolution of thinking seem far more deterministic than it was, and if there’s anything we know about evolution by now, it’s that one cannot understand or reproduce it unless one understands the clumsy, aimless progress intrinsic to its workings.
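To make that layered picture concrete, here is a deliberately toy sketch in Python; all sizes, data and numbers are invented for illustration, and no particular real system is implied. A couple of layers of adjustable weights turn an input into a prediction, and a loss function measures the gap between the model and “reality”.
```python
import numpy as np

# Toy sketch of the layered picture described above: two layers of adjustable
# weights map an input to a prediction, and a loss measures the gap between
# the model and "reality" (the target). All sizes and numbers are invented.
rng = np.random.default_rng(0)

x = rng.normal(size=4)           # an input: some "aspect of reality"
y_true = np.array([1.0])         # what reality actually says

W1 = rng.normal(size=(8, 4))     # adjustable weights, layer 1
W2 = rng.normal(size=(1, 8))     # adjustable weights, layer 2

def forward(x):
    h = np.tanh(W1 @ x)          # hidden layer: weighted sum plus nonlinearity
    return W2 @ h                # output layer: the model's guess

y_pred = forward(x)
loss = np.mean((y_pred - y_true) ** 2)   # the model-versus-reality gap
print(f"prediction {y_pred}, squared-error gap {loss:.3f}")
```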
But apart from the propensity of evolution to make mistakes, there is another, much broader aspect of evolution that I believe neural nets or other models of AI must capture in order to be useful or credible or both. That aspect is emergence, a feature of the human brain that is directly the product of its messy evolution. Not only could emergence help AI better approach the actual process of thinking and realize its scientific and financial potential, but it could also help reconcile two fields that are often and unnecessarily at war with each other – science and philosophy.
The basic idea of emergence has been recognized for a long time, first by philosophers and then by scientists. Whether it’s a block of gold having color properties that cannot be ascribed to individual gold atoms, individual termites building a giant mound or thousands of starlings forming stunning, sweeping, transient geometric patterns that carpet the sky for miles, we have known that the whole is often very different from both the individual parts and the sum of the parts. Or as one of the intellectual fathers of emergence, the physicist Philip Anderson, put it in a now-famous article, “More is different”. Anderson noted that the properties of a physical system cannot be directly derived from its individual constituents, and that more components are not just quantitatively but qualitatively different from fewer ones. Part of the reason is that both physics and biology are, in the words of Anderson’s fellow physicist Murray Gell-Mann, the result of “a small number of laws and a great number of accidents”. In the case of biological evolution the laws are the principles of natural selection and neutral drift; in the case of physical evolution they are the principles of general relativity, quantum mechanics and thermodynamics.
Emergence is partly a function of the great number of accidents that this small number of laws has been subjected to. In the case of biology the accidents come from random mutations leading to variation and selection; in the case of physics they come from forces and fields causing matter to stick together in certain ways and not others to form stars, galaxies and planets. Evolution occurred, critically, while immersed in this sea of stochastic emergence, and that led to complex feedback loops between fundamental and emergent laws. The human brain in particular is the end product of the basic laws of chemistry and physics being subjected to a variety of other emergent laws imposed by things like group and sexual selection, tribalism, altruism, predation avoidance and prey seeking. Agriculture, cities, animal domestication, gossip, religion, empires, democracy, despotism: all of humanity’s special creations are emergent phenomena. Mind is the ultimate emergent product of the stochastic evolution of the brain. So is consciousness. It is because of this universal feature of accidental emergence that even a supercomputer (or an omniscient God, if you will) that had all the laws of physics built into it, and that could map every one of the countless trajectories that life would take into the future, would be unable to predict the shape and function of the human brain in the year 2018.
The mind, which is itself an emergent product of brain evolution, is very good at modeling emergence. As just one example, our minds are quite competent at understanding both individual needs and societal ones. We are good at comprehending the behavior of matter on both the microscopic scale – although it took some very determined and brilliant efforts to achieve this feat – and the macroscopic scale. In fact, we have so completely assimilated the laws of emergent physics in our brains that implementing them – throwing a javelin, say, or anticipating the speed of a charging elephant – is instinctive, a matter of practice rather than active calculation. Our minds, which build constantly updated models of the world, can take emergent behavior into account and apply the right level of emergent detail to the right problem. Evolution has had a leisurely four billion years to experiment with its creations while buffeted by the winds of stochastic emergence, so it is perhaps not surprising that it has endowed one of its most successful species with the ability to intuitively grasp emergent reality.
And yet we largely fail to take this emergent reality into account when imagining and building new AIs. Even now, most of our efforts at AI are highly reductionist. We are good at writing algorithms to model individual neurons as well as individual layers of them, but we ignore the higher-level emergent behavior that would result from a real neural network in a real human brain. Through a process called backpropagation, neural networks have become good at closing the gap between reality and the models they build, feeding the error back through the network and adjusting the weights of individual neurons; but whether their models are capturing the right level of emergent detail is a question they don’t address. If your model is capturing the wrong emergent details, then you are optimizing the wrong model.
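Again as a toy sketch rather than a description of any real system – the data, sizes and learning rate below are all invented – this is roughly what that backpropagation feedback loop looks like: the error is pushed back through the layers and every weight is nudged to shrink the gap.
```python
import numpy as np

# Toy sketch of the backpropagation feedback loop described above: compute the
# gap, push its gradient back through the layers, and nudge every weight to
# shrink the gap. All data, sizes and the learning rate are invented.
rng = np.random.default_rng(1)
x, y_true = rng.normal(size=4), np.array([1.0])
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(1, 8))
lr = 0.05                                       # learning rate

for step in range(200):
    h = np.tanh(W1 @ x)                         # forward pass through the layers
    y_pred = W2 @ h
    gap = y_pred - y_true                       # model-versus-reality error

    grad_y = 2 * gap                            # d(loss)/d(output)
    grad_W2 = np.outer(grad_y, h)               # gradient for layer-2 weights
    grad_h = W2.T @ grad_y                      # error sent back to the hidden layer
    grad_W1 = np.outer(grad_h * (1 - h**2), x)  # gradient for layer-1 weights

    W2 -= lr * grad_W2                          # the feedback: adjust the weights
    W1 -= lr * grad_W1

print(f"squared gap after training: {float(np.mean(gap**2)):.6f}")
```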
Even if your model does solve the right problem, it will be such a specialized solution that it won’t transfer to other, related problems, which means you will be unable to build an artificial general intelligence (AGI). Consider image recognition, a problem that neural nets and their machine learning algorithms are supposed to especially excel at. It is often observed that if you introduce a bit of noise into an image, or make it slightly different from a similar image the network has already seen, the neural net starts making mistakes. And yet children recognize “different but similar” images effortlessly, all the time. Shown an elephant, a child will be able to identify elephants in a variety of contexts: a real elephant, a stuffed elephant toy, a silhouette of an elephant, or a rock formation that traces out the outline of an elephant. Each of these entities is radically different in its details, but they all say “elephant” to the mind of the child – and not to the neural network.
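The noise observation is easy to sketch. The “model” below is just a random linear classifier standing in for whatever trained network one might want to probe, and the “image” is random noise too; only the shape of the experiment matters, not the particular numbers.
```python
import numpy as np

# Sketch of the "a little noise breaks it" observation. The "model" is just a
# random linear classifier standing in for whatever trained network you might
# probe, and the "image" is random noise; only the experiment's shape matters.
rng = np.random.default_rng(0)
num_pixels, num_classes = 32 * 32, 10
W = rng.normal(size=(num_classes, num_pixels))   # stand-in "trained" weights

def predict(image):
    return int(np.argmax(W @ image.ravel()))     # class with the highest score

image = rng.normal(size=(32, 32))                # stand-in "elephant" picture
clean_label = predict(image)

for noise_level in [0.01, 0.05, 0.1, 0.5]:
    noisy = image + noise_level * rng.normal(size=image.shape)
    print(f"noise {noise_level:>4}: prediction changed = {predict(noisy) != clean_label}")
```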
Why is this? I believe that emergence is a key part of the secret sauce accounting for the difference. The child recognizes both a real elephant and a rock formation as an elephant because her brain, instead of relying on low-level “elephant features” like the detailed texture of the skin and the black or gray colors, relies on high-level “emergent elephant features” like the general shape and more abstract topological qualities. The right level of emergent abstraction lets the child succeed where the computer fails. And yet the child can – with some practice – also switch between different levels of emergence and realize, for instance, that the rock formation is not going to charge her. Through practice and exploration, the child perfects this application of emergent recognition. Perhaps that is why it is important to heed Alan Turing’s prescription for building intelligent machines, in which he suggested endowing a machine with the curiosity of a child and letting intelligence evolve.
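Here is one way to make that shape-versus-texture point concrete, again with entirely invented data: two fake images share the same elephant-shaped outline but have completely different fine textures, so comparing raw pixels misses the resemblance while comparing coarse silhouettes – a crude stand-in for the “emergent” shape – catches it.
```python
import numpy as np

# Two invented images share the same coarse, elephant-shaped outline but have
# completely different fine textures ("skin" vs. "rock"). Comparing raw pixel
# values misses the resemblance; comparing coarse silhouettes catches it.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64]

# A crude made-up "elephant" mask: an ellipse for the body plus a trunk-ish blob.
mask = ((xx - 36) / 22) ** 2 + ((yy - 36) / 16) ** 2 < 1
mask |= (xx > 10) & (xx < 20) & (yy > 30) & (yy < 60)

elephant = np.where(mask, 0.5 + 0.05 * rng.normal(size=mask.shape), 0.0)  # smooth gray skin
rock     = np.where(mask, 0.7 + 0.30 * rng.normal(size=mask.shape), 0.0)  # rough rocky texture

def silhouette(img, factor=8):
    # Coarse "emergent" shape: threshold, then average over factor x factor blocks.
    binary = (img > 0.1).astype(float)
    h, w = binary.shape
    return binary.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def similarity(a, b):
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

print("fine-detail similarity (pixels inside the outline):", round(similarity(elephant[mask], rock[mask]), 3))
print("coarse-shape similarity (downsampled silhouettes): ", round(similarity(silhouette(elephant), silhouette(rock)), 3))
```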
Another emergent feature of living organisms is what we call “emotion” or “instinct”. For the longest time we believed that human beings make purely rational decisions when evaluating their complex social and physical environments. But pioneering work by psychologists and neuroscientists, from Daniel Kahneman to Antonio Damasio, has shown that emotion and logical thinking both play a role in deciding how to react to an environmental stimulus. Take again the example of the child recognizing an elephant; one reason she is so good at recognizing elephant-like features is that those features trigger a certain kind of emotional reaction in her. Not only are the logical, feature-selecting parts of her brain activated, but so are her hormonal systems, perhaps imperceptibly; not only does she start thinking, but even before that, her palms may turn sweaty and her heartbeat may quicken. Research has consistently shown that our instinctive systems make decisions before our logical systems even kick in. This behavior was honed by millions of years of living and passing on genes in the African savannah, where split-second decisions had to be made to ensure you weren’t weeded out of the gene pool. This kind of emotional reaction is thus also a kind of emergent behavior: it comes about through the interaction of lower-level entities (DNA sequences and hormone receptors) with environmental and cultural cues and learned experience. If an AI does not take emotional responses into account, it will likely never be able to recognize the kinds of abstract features that scream out “elephant” in a child’s mind.
As the biologist Theodosius Dobzhansky famously put it, “Nothing in biology makes sense except in the light of evolution”, and I would extend that principle to the construction of artificial intelligences. Human intelligence is indeed the result of a few universal laws combined with an enormous number of accidents. Those accidents have led evolution to select for brains that can take stochastic, emergent reality into account and build generalized models that can switch between different levels of emergent abstraction. It seems to me that mimicking this central feature of evolution would not just lead to better AIs but would be an essential ingredient of any truly general AI. Perhaps then the easy problems would truly become easy to solve.

This is my latest column for 3 Quarks Daily.
