Field of Science

Technological convergence in drug discovery and other endeavors




You would think that the Wright brothers’ historic flight from Kitty Hawk on December 17, 1903 had little to do with chemistry. And yet it did. The engine they used was built around a crankcase cast from aluminum; since then aluminum has been a crucial ingredient in lightweight flying machines. That aluminum casting would not have been possible had industrial chemists like Charles Hall and Paul Héroult not developed the Hall-Héroult process for refining the metal from its ore, bauxite. More elementally, the gasoline fueling the flight was the result of a refining process invented more than fifty years earlier by a Yale chemist named Benjamin Silliman. There was a fairly straight line from the Hall-Héroult and Silliman processes to Kitty Hawk.

The story of the Wright brothers’ powered flight illustrates the critical phenomenon of technological convergence that underlies all major technological developments in world history. Simply put, technological convergence refers to the fact that several enabling technologies have to come together in order for a specific overarching technology to work. And yet what’s often seen is only the technology that benefits, not the technology that enables.

We see technological convergence everywhere. Just to take a few of the most important innovations of the last two hundred years or so: The computer would not have been possible without the twin inventions of the transistor and silicon purification. MRI would not have been possible without the development of sophisticated software to deconvolute magnetic resonance signals and powerful magnets to observe those signals in the first place. There are other important global inventions that we take for granted - factory farming, made-to-order houses, fiber optics, even new tools like machine learning - none of which would have materialized had it not been for ancillary technologies that first had to mature.

Recognizing technological convergence is important, both because it helps us appreciate how much has to happen before a particular technology can embed itself in people’s everyday lives, and because it can help us recognize multiple threads of innovation that could potentially converge in the future - a risky but important vision that can help innovators and businessmen stay ahead of the curve. One important point to note: by no means does technological convergence by itself help innovations rise to the top – political and social factors can be as crucial or more so – but this convergence is often necessary even if not sufficient.

It’s interesting to think of technological convergence in my own field of drug development. Let’s look at a few innovations, both more recent as well as older, that illustrate the phenomenon. Take a well-established technology like high-throughput screening (HTS). HTS came on the scene about thirty years ago, and since then has contributed significantly to the discovery of new medicines. What made the efficient screening of tens of thousands of compounds possible? Several convergent developments: recombinant DNA technology for obtaining reasonable quantities of pure proteins for screening, robotic techniques and automation for testing these compounds quickly at well-defined concentrations in multiple wells or plates, spectroscopic techniques like FRET for reading out the results of the assays, and graphing and visualization software for mapping the results and quickly judging if they made sense. These are just a few developments: in addition, there are techniques within these techniques that were also critical. For instance, recombinant DNA depended on methods for viral transfection, for splicing and ligation and for sequencing, and robotic automation depended on microelectronic control systems and materials for smooth manipulation of robotic moving parts. Thus, not only is technology convergent but it also piggybacks, with one piece of technology building on another to produce a whole that is more than the sum of its parts, aiding in the success of a technology it wasn’t primarily designed for.

Below are just a few other primary drug discovery technologies, each paired with the ancillary convergent technologies without which it would not have been possible.

Combinatorial chemistry: LCMS for purification; organic synthesis methodology; hardware (solid-phase beads, plastic, tubes, glassware) for separation and bookkeeping.

Molecular modeling: computing power (CPUs, GPUs); visualization software; crystal structures and databases (PDB, CSD etc.).

Directed evolution/phage display: recombinant DNA technology; hardware (solid-phase supports); buffer chemistry for elution.

DNA-encoded libraries: PCR; DNA sequencing technology (Illumina etc.); hardware (solid-phase beads, micropipettes etc.); informatics software for deconvolution of results.

NMR: cryogenics; magnet production; software.

I have deliberately included NMR spectroscopy as the last entry. A modern-day organic chemist’s work would be unthinkable without this technique. It of course depends crucially on the availability of high-field magnets and the cryogenics techniques that keep the magnet cold by immersion in liquid helium, but it also depends fundamentally on the physics of nuclear magnets worked out by Isidor Rabi, Edward Purcell, Richard Ernst and others. Since this post is about technology I won’t say anything further about science, but it should be obvious that every major technology rests on a foundation of pure science which has to be developed for decades before it can be applied, often with no clear goal in mind. Sometimes the application can be very quick, however. For instance, it’s not an accident that solid-phase supports appear in three of the five innovations listed above. Bruce Merrifield won the 1984 Nobel Prize in Chemistry for his development of solid-phase peptide synthesis, and a little more than thirty years later, that development has impacted many enabling drug development techniques.

There are two interesting conclusions that emerge from considering technological convergence. The first is the depressing conclusion that if ancillary technologies haven’t kept pace, then even the most brilliant innovative idea will get nowhere. Even the most perspicacious inventor won’t be able to make a dent in the technology universe, simply because the rest of technology hasn’t kept up with him. A good example is the spate of mobile phones that appeared in the early 90s and went nowhere. Not only were they too expensive, but they simply weren’t ready for prime time because broadband internet, touchscreens and advanced battery technology were not yet widely available. Similarly, the iPhone and iPod took off not just because of Steve Jobs’ sales skills and their sleek GUIs, but because broadband internet, mp3s (both legal and pirated) and advanced lithium-ion batteries were now available for mass production. In fact, the iPod and the iPhone showcase convergent technologies in another interesting way: their sales skyrocketed because of the iTunes Music Store and the iPhone App Store. As the story goes, Jobs was not sold on the app store idea for a long time because he characteristically wanted to keep iPhone apps exclusive. It was only flagging initial sales combined with insistent prodding from the iPhone team that changed his mind. In this case, therefore, the true convergent technology was not really battery chemistry or the accelerometer in the phone but a simple software innovation and a website.

The more positive conclusion to be drawn from the story of convergent technology is to keep track of ancillary enabling technologies if you want to stay ahead of the curve. In the case of the iPod, Jobs seems to have had the patience to wait until USB, battery and internet technologies became mature enough for Apple to release the device; in spite of being the third or fourth mp3 player on the market, the iPod virtually took over in a few years. What this means for innovators and technologists is that they should keep an eye on the ‘fringe’, on seemingly minor details of their idea that might have a crucial impact on its development or lack thereof. If you try to launch an innovative product before the ancillary technologies have caught up, you won’t achieve convergence and the product might well be doomed.

Of course, groundbreaking ancillary technologies are often obvious only in retrospect and are unexpected when they appear – Xerox PARC’s mouse and GUI come to mind – but that does not mean they are invisible. One reason John D. Rockefeller became so spectacularly successful and wealthy is that he looked around the corner and saw not one but three key technologies: oil drilling, oil transportation and oil refining. Similarly, Edison’s success owed in part to the fact that he was an all-rounder, developing everything from electrical circuits to the right materials for bulb filaments; chemistry, electricity, mechanical engineering – all found a home in Edison’s lab. Thus, while it’s not guaranteed, one formula for noting the presence or absence of technological convergence is to cast a wide net, to work the field as well as its corners, to spend serious time exploring even the small parts that are expected to contribute to the whole. Recognizing technological convergence requires a can-do attitude and the enthusiasm to look everywhere for every possible lead.

At the very least, being cognizant of convergent technologies can prevent us from wasting time and effort; for instance, combinatorial chemistry went nowhere at the beginning because HTS was not yet developed. Molecular modeling went nowhere because sampling and scoring methods weren’t well developed. Genome sequencing by itself went nowhere because simply having a list of genes rang hollow until the technologies for interrogating their protein products and functions became equally efficient. Developing your technology in a silo, no matter how promising it looks by itself, can be a failed effort if it isn’t fortified with the other developing technologies you should be on the lookout for.

Technology, like life on earth, is part of an ecosystem. Even breakthrough technology does not develop in a vacuum. Without convergence between different innovations, every piece of technology would be stillborn. Without the aluminum, without the refined petroleum, the Wright Flyer would have lain still in the sands of the Outer Banks.

If you want to improve AI, let it evolve toward emergence

One of my favorite quotes about artificial intelligence is often attributed to pioneering computer scientists Hans Moravec and Marvin Minsky. To paraphrase: “The most important thing we have learned from three decades of AI research is that the hard things are easy and the easy things are hard”. In other words, we have been hoodwinked for a long time. We thought that vision and locomotion and housework would be easy and language recognition and chess and driving would be hard. And yet it has turned out that we have made significant strides in tackling the latter while hardly making a dent in the former.
Why is this? Clearly one trivial reason is that we failed to define “easy” and “hard” properly, so in one sense it’s a question of semantics. But the question still persists: what makes the easy problems hard? We got fooled by the easy problems because we took them for granted. Things like facial recognition and locomotion come so easily to human beings, even human beings only a few months old, that we thought they would be easy for computers too. But the biggest obstacle for an AI today is not the chess-playing ability of a Garry Kasparov but the simple image recognition abilities of an average one-year-old.
What we forgot was that these things seem easy only because they are the sleek final façade of a four billion year process that progressed with countless fits and starts, blind alleys and dead ends and random experimentation. We see the bare shining mountaintop but we don’t see the tortuous road leading to it. If you looked under the hood – both spatially and temporally – of a seemingly simple act like bipedal navigation over a slightly rocky surface, you would find a veritable mess of failed and successful experiments in the history of life. If the brain were an electrical box which presented an exterior of wondrous simplicity and efficiency, inside the box would be fused wires, wires leading nowhere, wires with the middles cut off, wires sprouting other wires, stillbirthed wires; a mélange of wired chaos with a thread of accident and opportunity poking through it. We see only that ascendant thread but not the field littered with dead cousins and ancestors it resides in.
Over the decades, much of AI has tried to grasp the essence of this evolutionary circus by trying to reproduce the essential structure of the human brain. The culmination of these efforts was the neural network, a layered abstract model of virtual electronic neurons trying to capture different aspects of reality, with adjustable weights on every layer and a feedback loop that minimized the difference between the model and reality. So far so good, but neural networks model only the end product and not the process. For the longest time they were not allowed to deliberately make mistakes and mirror the contingent, error-ridden processes of evolution that are grounded in mutation and genetic recombination. They made the evolution of thinking seem far more deterministic than it was, and if there’s anything we know about evolution by now, it’s that one cannot understand or reproduce it unless one understands the general process of clumsy, aimless progress intrinsic to its workings.
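To make the "adjustable weights plus a feedback loop" idea concrete, here is a minimal toy sketch. Every detail in it – the XOR task, the network size, the learning rate, all the names – is my own illustrative choice, not anything from the literature discussed here; it is only meant to show a layered model iteratively shrinking the gap between its output and "reality":

```python
import math
import random

random.seed(42)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Reality" for this toy: XOR, a task a single neuron cannot model
# but a small layered network can.
data = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
        ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

H = 4  # hidden units (an arbitrary choice for the sketch)
w_h = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(H)]
b_h = [0.0] * H
w_o = [random.uniform(-1, 1) for _ in range(H)]
b_o = 0.0
lr = 0.5  # learning rate

def forward(x1, x2):
    # One hidden layer of virtual neurons, then a single output neuron.
    h = [sigmoid(w[0] * x1 + w[1] * x2 + b) for w, b in zip(w_h, b_h)]
    out = sigmoid(sum(wo * hj for wo, hj in zip(w_o, h)) + b_o)
    return h, out

def total_error():
    # The model-vs-reality gap the feedback loop tries to shrink.
    return sum((forward(x1, x2)[1] - t) ** 2 for (x1, x2), t in data)

initial_error = total_error()

for _ in range(5000):
    for (x1, x2), target in data:
        h, out = forward(x1, x2)
        # Feedback loop: push the error backwards, nudge every weight.
        d_out = (out - target) * out * (1 - out)
        for j in range(H):
            d_h = d_out * w_o[j] * h[j] * (1 - h[j])
            w_o[j] -= lr * d_out * h[j]
            w_h[j][0] -= lr * d_h * x1
            w_h[j][1] -= lr * d_h * x2
            b_h[j] -= lr * d_h
        b_o -= lr * d_out

final_error = total_error()
print(initial_error, final_error)  # the gap shrinks as training proceeds
```

The point of the sketch is the essay's point: everything here is an optimization of a fixed, hand-designed structure; nothing in the loop mutates, recombines or wanders down blind alleys the way evolution does.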
But apart from the propensity of evolution to make mistakes, there is another, much broader aspect of evolution that I believe neural nets or other models of AI must capture in order to be useful or credible or both. That aspect is emergence, a feature of the human brain that is directly the product of its messy evolution. Not only could emergence help AI approach the actual process of thinking better and realize its scientific and financial potential, but it could also lead to reconciliation between two fields that are often and unnecessarily at war with each other – science and philosophy.
The basic idea of emergence has been recognized for a long time, first by philosophers and then by scientists. Whether it’s a block of gold having color properties that cannot be ascribed to individual gold atoms, individual termites building a giant mound or thousands of starlings forming stunning, sweeping, transient geometric patterns that carpet the sky for miles, we have known that the whole is often very different from both the individual parts and the sum of the parts. Or as one of the philosophical fathers of emergence, the physicist Philip Anderson, wrote in a now-famous article, “More is different”. Anderson noted that the properties of a physical system cannot be directly derived from its individual constituents, and more components are not just quantitatively but qualitatively different from fewer ones. Part of the reason for this is that both physics and biology are, in the words of Anderson’s fellow physicist Murray Gell-Mann, the result of “a small number of laws and a great number of accidents”. In the case of biological evolution the laws are the principles of natural selection and neutral drift; in the case of physical evolution the laws are the principles of general relativity, quantum mechanics and thermodynamics.
Emergence is partly a function of the great number of accidents that these small numbers of laws have been subjected to. In the case of biology the accidents come from random mutations leading to variation and selection; in the case of physics they come from forces and fields causing matter to stick together in certain ways and not others to form stars, galaxies and planets. Evolution critically occurred while immersed in this sea of stochastic emergence, and that led to complex feedback loops between fundamental and emergent laws. The human brain in particular is the end product of the basic laws of chemistry and physics being subjected to a variety of other emergent laws imposed by things like group and sexual selection, tribalism, altruism, predation avoidance and prey seeking. Agriculture, cities, animal domestication, gossip, religion, empires, democracy, despotism; all of humanity’s special creations are emergent phenomena. Mind is the ultimate emergent product of the stochastic evolution of the brain. So is consciousness. It’s because of the universal feature of accidental emergence that even a supercomputer (or an omniscient God, if you will) that had all the laws of physics built into it and that could map every one of the countless trajectories that life would take into the future would be unable to predict the shape and function of the human brain in the year 2018.
The mind, which is itself an emergent product of brain evolution, is very good at modeling emergence. As just one example, our minds are quite competent at understanding both individual needs as well as societal ones. We are good at comprehending the behavior of matter on both the microscopic scale – although it did take some very determined and brilliant efforts to achieve this feat – and the macroscopic scale. In fact, we have so completely assimilated the laws of emergent physics in our brains that implementing them – throwing a javelin or anticipating the speed of a charging elephant for instance – is instinctive and a matter of practice rather than active calculation. Our minds, which build constantly updated models of the world, can now take emergent behavior into account and can apply the right level of emergent detail in these models to address the right problem. Evolution has had a leisurely four billion years to experiment with its creations while buffeted by the winds of stochastic emergence, so it’s perhaps not surprising that it has now endowed one of its most successful species with the ability to intuitively grasp emergent reality.
And yet we are largely failing to take this emergent reality into account when imagining and building new AIs. Even now, most of our efforts at AI are highly reductionist. We are good at writing algorithms to model individual neurons as well as individual layers of them, but we ignore the higher-level emergent behavior that is expected to result from a real neural network in a real human brain. Through a process called backpropagation, neural networks have gotten better at closing the gap between reality and the models they build, setting up feedback loops and optimizing the weights of individual connections, but whether their models are trying to capture the right level of emergent detail is a question they don’t address. If your model is capturing the wrong emergent details, then you are optimizing the wrong model.
Even if your model does solve the right problem, it will be such a specialized solution that it won’t apply to other related problems, which means you will be unable to build an artificial general intelligence (AGI). Consider the example of image recognition, a problem that neural nets and their machine learning algorithms are supposed to especially excel at. It’s often observed that if you introduce a bit of noise into an image or make it slightly different from an existing similar image, the neural net starts making mistakes. And yet children do this kind of recognition of “different but similar” images effortlessly and all the time. When shown an elephant for instance, a child will be able to identify elephants in a variety of contexts; whether it’s a real elephant, a stuffed elephant toy, a silhouette of an elephant or a rock formation that traces out the outline of an elephant. Each one of these entities is radically different in its details, but they all say “elephant” to the mind of the child but not to the neural network.
Why is this? I believe that emergence is one of the key secret sauces accounting for the difference. The child recognizes both a real elephant and a rock formation as an elephant because her brain, instead of relying on low-level “elephant features” like the detailed texture of the skin and the black or gray colors, is instead relying on high-level “emergent elephant features” like the general shape and more abstract topological qualities. The right level of emergent abstraction makes the child succeed where the computer fails. And yet the child can – with some practice – also switch between different levels of emergence and realize for instance that the rock formation is not going to charge her. Through practice and exploration, the child perfects this application of emergent recognition. Perhaps that’s why it’s important to heed Alan Turing’s prescription for building intelligent machines, in which he told us to endow a machine with the curiosity of a child and let intelligence evolve.
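The contrast between low-level and high-level features can be made concrete with a toy sketch. Every detail here – the 8x8 binary "images", the blob, the block-occupancy descriptor – is an invented illustration, not a description of how any real vision system works. Two images share a silhouette but differ completely in fine-grained texture: pixel-by-pixel comparison calls them different, while a crude, coarser "shape" descriptor calls them the same.

```python
SIZE = 8  # toy 8x8 binary "images"

def make_image(textured):
    """A blob occupying rows/cols 2..5: solid, or checkerboard-'textured'."""
    img = [[0] * SIZE for _ in range(SIZE)]
    for r in range(2, 6):
        for c in range(2, 6):
            img[r][c] = 1 if not textured or (r + c) % 2 == 0 else 0
    return img

silhouette = make_image(textured=False)  # say, an elephant silhouette
wrinkled = make_image(textured=True)     # same outline, "wrinkled" texture

def pixel_distance(a, b):
    """Low-level comparison: count of pixels that disagree."""
    return sum(a[r][c] != b[r][c] for r in range(SIZE) for c in range(SIZE))

def coarse_shape(img, block=2):
    """Higher-level descriptor: per-block 'occupied or not', texture erased."""
    n = SIZE // block
    return [[int(sum(img[r * block + i][c * block + j]
                     for i in range(block) for j in range(block)) > 0)
             for c in range(n)] for r in range(n)]

print(pixel_distance(silhouette, wrinkled))               # → 8 pixels differ
print(coarse_shape(silhouette) == coarse_shape(wrinkled))  # → True: same shape
```

A classifier glued to the pixel level is thrown off by the texture change; one reading the coarser descriptor is not. Choosing which level to read – and switching between levels when needed – is exactly what the child does effortlessly.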
Another emergent feature of living organisms is what we call “emotion” or “instinct”. For the longest time we used to believe that human beings make rational decisions when evaluating their complex social and physical environments. But pioneering work by psychologists and neuroscientists ranging from Daniel Kahneman to Antonio Damasio has now shown that emotion and logical thinking both play a role when deciding how to react to an environmental stimulus. Take again the example of the child recognizing an elephant; one reason why she is so good at recognizing elephant-like features is because the features trigger a certain kind of emotional reaction in her. Not only are the logical feature-selecting parts of her brain activated, but so are her hormonal systems, perhaps imperceptibly; not only does she start thinking, but even before this, her palms may turn sweaty and her heartbeat may increase. Research has now consistently shown that our instinctive systems make decisions before our logical systems even kick in. This behavior was honed in humans by millions of years of living and passing on their genes in the African savannah, where split-second decisions had to be made to ensure that you weren’t weeded out of the gene pool. This kind of emotional reaction is thus also a kind of emergent behavior. It comes about because of the interaction of lower-level entities (DNA sequences and hormone receptors) with environmental and cultural cues and learning. If an AI does not take emotional responses into account, it will likely never be able to recognize the kinds of abstract features that scream out “elephant” in a child’s mind.
As the biologist Theodosius Dobzhansky famously put it, “Nothing in biology makes sense except in the light of evolution”, and I would extend that principle to the construction of artificial intelligences. Human intelligence is indeed the result of a few universal laws combined with an enormous number of accidents. These accidents have led evolution to select for those brains which can take stochastic emergent reality into account and build generalized models that can switch between different levels of emergent abstraction. It seems to me that mimicking this central feature of evolution would not just lead to better AIs but would be an essential feature of any truly general AI. Perhaps then the easy problems would truly become easy to solve.

This is my latest column for 3 Quarks Daily.

On their birthday: The wisdom of John Wheeler and Oliver Sacks.

A rare and happy coincidence today: The birthdays of both John Archibald Wheeler and Oliver Sacks. Wheeler was one of the most prominent physicists of the twentieth century. Sacks was one of the most prominent medical writers of his time. Both of them were great explorers, the first of the universe beyond and the second of the universe within.
What made both men special, however, was that they transcended mere accomplishment in the traditional genres they worked in, and in the process stand as role models for an age that seems so fractured. Wheeler the physicist was also Wheeler the poet and Wheeler the philosopher. Throughout his life he transmitted startling new ideas through eloquent prose that was too radical for academic journals. Most of his important writings made their way to us through talks and books. Sacks the neurologist was far more than a neurologist, and Sacks the writer was much more than a writer. Both Wheeler and Sacks had a transcendent view of humanity and the universe, a view that is well worth taking to heart in our own self-centered times.
Their backgrounds shaped their views and their destiny. John Wheeler grew up in an age when physics was transforming our view of the universe. While he was too young to participate in the genesis of the twin revolutions of relativity and quantum mechanics, he came on stage at the right time to fully implement the revolution in the burgeoning fields of particle and nuclear physics.
After acquiring his PhD, Wheeler went on a fellowship to what was undoubtedly the mecca of physical thought – Niels Bohr’s Institute of Theoretical Physics in Copenhagen. By then Bohr had already become the grand old man of physics. While Einstein was retreating from the forefront of quantum mechanics, not believing that God would play such an inscrutable game of dice, Bohr and his pioneering disciples – Werner Heisenberg, Wolfgang Pauli and Paul Dirac, in particular – were taking the strange results of quantum mechanics at face value and interpreting them for the next generation. Particles that were waves, that superposed with themselves and that could be described only probabilistically, all found a place in Bohr’s agenda.
Bohr was famous for trying to describe physical reality as accurately as possible. This led to his maddening, Delphic utterances where he would go back and forth with a colleague to rework the fine points of his thinking, relentlessly questioning everyone’s reasoning including his own. But the process also illuminated both his passion to understand the world as well as his absolute insistence on precision and honesty. His talks and writings are often covered in a fine mist of interpretive haze, but once you ponder them enough they are wholly illuminating and novel. Bohr’s disciples did their best to spread his Copenhagen gospel throughout the world, and for the most part they succeeded spectacularly. When Wheeler joined Bohr in the mid-1930s, the grand old philosopher of physics was in the middle of his famous arguments with Einstein concerning the nature of reality. The so-called Einstein-Podolsky-Rosen paradox, laid out by Einstein, Podolsky and Rosen in 1935 and answered by Bohr in a paper published the same year, was to set the stage for all quantum mechanical quarrels related to meaning and reality for the next half century.
For Wheeler, doing physics with Bohr was like playing ping-pong with an opponent possessing infinite patience. Back and forth the two went; arguing, refining, correcting, Wheeler doing most of the calculating and Bohr doing most of the talking. Wheeler came from the pragmatic American tradition of physics, later called the “shut up and calculate” tradition. While not particularly attuned back then to philosophical disputes, Wheeler rapidly absorbed Bohr’s avuncular, Socratic method of argument and teaching, later using it to create probably the finest school of theoretical physics in the United States during the postwar years. His opinion of Bohr stayed superlative till the end: “You can talk about people like Buddha, Jesus, Moses, Confucius, but the thing that convinced me that such people existed were the conversations with Bohr.”
In 1939, with Bohr as a sure guide, Wheeler made what was practically speaking probably the most important contribution of his career – an explanation of the mechanism of nuclear fission. The paper is a masterful application of both classical and quantum physics, treating the nucleus as an entity poised on the cusp between the quantum and the classical worlds. In the same issue of the Physical Review that published the Wheeler-Bohr paper, another paper appeared, a paper by Robert Oppenheimer and his student Hartland Snyder. In their paper, Oppenheimer and Snyder laid out the details of what we now call black holes. The seminal Oppenheimer-Snyder paper went practically unnoticed; the seminal Wheeler-Bohr paper spread like wildfire. The reason was simple. Both papers were published on the day Germany attacked Poland and started the Second World War. Just eight months before, German scientists had discovered a potentially awesome and explosive source of energy in the nuclear fission of uranium. The discovery and the Wheeler-Bohr paper made it clear to interested observers that weapons of immensely destructive power could now be made. The race was on. As a professor at Princeton University, Wheeler was in the thick of things.
He became an important part of the Manhattan Project, contributing crucial ideas especially to the design of the nuclear reactors that produced plutonium. He had a vested interest in seeing the bomb come to fruition as soon as possible: his brother, Joe, was fighting on the front in Europe. Joe did not know the details of the secret work John was doing, but the two words in his letter to John made his general understanding of Wheeler’s work clear – “Hurry up”, the letter said. Sadly, Joe was killed in Italy before the bomb could be fully developed. The thought that he might have saved his brother’s life had the bomb been ready sooner profoundly shaped Wheeler’s political views. From then on, while he did not quite embrace weapons of mass destruction with the same enthusiasm as his friend Edward Teller, his opinion was clear: if there was a bigger weapon, the United States should have it first. One of the hallmarks of Wheeler’s life and career was that in spite of his political leanings – his conservatism was in marked contrast to most of his colleagues’ liberal politics – he seems to have remained friends with everyone. Wheeler’s life is a good illustration, especially in these fraught times, of how someone can keep their politics from interfering with their fundamental decency and appreciation of decency in others.
His scientific gifts and political views led Wheeler to work on the hydrogen bomb amidst an environment of Communist hysteria, witch hunts and stripped security clearances. But after he had done his job perfecting thermonuclear weapons, Wheeler returned to his first love – pure physics. During the war, he had teamed up with an immensely promising young man with fire in his mind and a young wife dying in a hospital in Albuquerque. Richard Feynman and John Wheeler couldn’t have been more different from each other; one the fast-talking, irreverent kid from New York City, the other a courtly, conservative, Southern-looking gentleman who wore pinstriped suits. And yet their essential honesty and drive to understand physics from the bottom up made them kindred souls. Feynman got his PhD under Wheeler and for the rest of his life loved and admired his mentor; his work with Wheeler also inspired Feynman’s own Nobel Prize winning work in quantum electrodynamics – the strange theory of the interaction between light and matter. Wheeler’s love for teaching and the art of argument he acquired from Bohr crystallized in his interactions with Feynman. It set the stage for the latter half of his life.
Wheeler is one of the very few scientists in history who did breakthrough work in two completely different branches of science. Before the war he had been an explorer of the infinitesimal, but now he made himself an intrepid Marco Polo of the infinitely large. In the 1950s Wheeler plunged headlong into the physics of gravitational collapse, starting out from where Oppenheimer and others had left off. Memorably, he became the man who christened Oppenheimer’s startling brainchildren: at a conference in New York, Wheeler called objects whose gravitational fields were so strong that they could not let even light escape ‘black holes’. Black holes and curved spacetime became the foci of Wheeler’s career. While pursuing this interest he contributed something even more significant: he essentially created the most important school of relativistic investigations in the United States. And combining this new love with his old love, he also created entire subfields of physics that are today engaging the best minds of our time – quantum gravity, quantum information and quantum computing, quantum entanglement and the philosophy of quantum theory.
As a teacher, Wheeler could give Niels Bohr a run for his money in mentorship. Not content with just supervising the usual flock of graduate students and postdocs, Wheeler took it upon himself to train promising undergraduates in the art of thinking about the physical world. Story after story circulates of some young mind venturing with trepidation into Wheeler’s office for the first time, only to emerge dazed two or three hours later, staggering under the weight of papers and books and bursting with research ideas. In fact Wheeler supervised more senior research theses at Princeton than any other professor in the department’s history, and for the longest time he taught the freshman physics class: what better way to ignite a passion for physics than by taking a class as a freshman from one of the century’s most brilliant scientific minds? To top it all, he used to sometimes take his students to see a neighborhood resident at the famous address 112 Mercer Street – Albert Einstein. Sitting in Einstein’s room in a circle, the awestruck young minds would watch Wheeler trying to gently convince a perpetually resistant Einstein of the correctness of quantum ideas.
Out of Wheeler’s fertile school emerged some of the most interesting minds of postwar physics research: a very short list includes Jacob Bekenstein, who forged startling links between black hole thermodynamics and relativity; Hugh Everett, who came up with the many-worlds interpretation of quantum mechanics, an interpretation which flew in the face of Niels Bohr’s Copenhagen Interpretation; Bryce DeWitt, with whom Wheeler made the first inroads into the deep realm of quantum gravity; and Kip Thorne, the gravitational wave pioneer whose dogged efforts finally won him the Nobel Prize last year. With some of these students Wheeler also wrote pioneering textbooks, including a doorstop of a book that has been gracing the shelves of students and professors of relativity like a patron saint since its publication. Very few teachers of theoretical physics equaled Wheeler in influence and mentorship; certainly in the twentieth century, only Bohr, Arnold Sommerfeld and Max Born come to mind, and among American physicists, only Robert Oppenheimer and his school at Berkeley.
With his students Wheeler worked on some of the most preposterous extensions of nature’s theories that we can imagine: wormholes, quantum gravity, time travel, measurement in quantum theory. He constantly asked his pupils to think of crazy ideas, to extend our most hallowed theories to their breaking point, to think of the whole universe as a child’s playground. His colleagues often thought he was going crazy, but Feynman once corrected them: “Wheeler’s always been crazy”, he reminded everyone. Like his mentor Bohr, Wheeler became a master of the Delphic utterance, the deep philosophical speculation that could result in leaps and bounds in humanity’s understanding of the universe. Here’s one of those utterances: “Individual events. Events beyond law. Events so numerous and so uncoordinated that, flaunting their freedom from formula, they yet fabricate firm form”. The statement is vintage Wheeler; disarming in its ambiguity, deep in its implications, in equal parts physics, philosophy and poetry.
Many of Wheeler’s ideas were collected together in an essay collection titled “At Home in the Universe”, which I strongly recommend. These essays showcase his wide-ranging interests and his gift for philosophy and uncommon prose, and are full of paradoxes and puzzles. They also illustrate his warm friendship with many of the most famous names in physics, including Bohr, Einstein, Fermi and Feynman. Along with “black hole”, he coined many other memorable phrases and statements: “It from Bit”, “Geometrodynamics”, “Mass without Mass”, and “Time is what prevents everything from happening at once”. He always believed that the universe is ultimately simple rather than strange, convinced that what is today’s strangeness and paradox will be tomorrow’s simple accepted wisdom.
John Wheeler died at the ripe old age of ninety-six, a legend among scientists. In affectionate tribute to his own way with words, a sixtieth birthday commemoration for him had called his work “Magic without Magic”, and that’s as good a way as any to remember this giant of science. A fitting epitaph? There are many to choose from, but his sentiment about it being impossible to understand science without understanding people stands as a testament to his scientific brilliance and fundamental humanity: “No theory of physics that deals only with physics will ever explain physics. I believe that as we go on trying to understand the universe, we are at the same time trying to understand man.”
We come now to Oliver Sacks. Strangely enough, it took me some time to warm up to Sacks’s writing. I read about the man who mistook his wife for a hat, of course, and the patients with anosmia and colorblindness and the famous patients of ‘Awakenings’ who had been trapped in their bodies and then miraculously – albeit temporarily – resurrected. But I always found Sacks’s descriptions a bit too detached and clinical. It was when I read the charming “Uncle Tungsten” that I came to appreciate the man’s wide-ranging interests. But it was his autobiography “On the Move” that really drove home the unquenchable curiosity, intense desire for connecting to life and human beings and sheer love for living in all its guises that permeated Sacks’s being. I was so moved and satiated by the book that I read it again right after reading the last page, and read it a third time a few days later. After this I went back to almost the entirety of Sacks’s oeuvre and enjoyed it. So mea culpa, Dr. Sacks, and thanks for the reeducation.
Like Wheeler, Sacks was born to educated parents in London, both of whom were doctors. He clearly acquired his interest in patients, both as medical curiosities and as human beings, from his parents. A voracious reader, he had many interests while growing up – Darwin and chemistry were two which he retained throughout his life – and like other Renaissance men found it hard to settle on one. But family background and natural inclination led him to study medicine at Oxford and, finding England too provincial, he shipped off to the New World, first to San Francisco and then to New York.
Throughout his life, Sacks’s most distinguishing quality was the sheer passion with which he clung to various activities. These ranged from the admirable to the foolhardy. For instance, Sacks didn’t just “do bodybuilding”, he became obsessed with it to the point of winning a California state championship and risking permanent damage to his muscles. He didn’t just “ride motorcycles”, he would take his charger on eight-hundred-mile rides to Utah and Arizona over a single weekend. He didn’t just “do drugs”, he flooded his body with amphetamines to the point of almost killing himself. And he didn’t just “practice medicine” or write, he turned both into an observational art form without precedent. It is this intense desire for a remarkable diversity of people and things that defined Oliver Sacks’s life. And yet Sacks was lonely; as a gay man who repressed his sexuality after a devastating reception from his mother and a series of failed encounters during his bodybuilding days, he refrained from romantic relationships for four decades before finally finding love in his seventies. It was perhaps his own struggle with his identity, combined with recurring maladies like depression and migraines, that made Sacks sympathize so deeply with his patients.
Two things made Sacks wholly unique as a neurological explorer. The writer Andrew Solomon once frankly remarked in a review of one of Sacks’s books that as purely a writer or purely a neurologist, while Sacks was very good, he probably wasn’t in the first rank. But nobody else could straddle the two realms with as much ease, warmth and informed narrative as he could. It was the intersection that made him one of a kind. That and his absolutely transparent, artless style, amply demonstrated in “On the Move”. He was always the first one to admit to follies, mistakes and missed opportunities.
For Sacks, his patients were patients second and human beings first. He was one of the first believers in what is today called “neurodiversity”, long before the idea became fashionable. Neurodiversity means the realization that even people with rather extreme neurological conditions show manifestations of characteristics that are present in “normal” human beings. Even when Sacks told us about the most bizarre kinds of patients, he saw them as lying on a continuum of human abilities and powers. He saw the basic humanity in patients frozen in space and time when the rest of the world simply saw them as “cases”. And he displayed all this warmth and understanding toward his patients without ever wallowing in the kind of sweet sentimentality that can mark so much medical writing trying to be literature.
Sacks persisted in exploring an astonishing landscape of aspects of the human mind until his last days. Whether it was music or art, mathematics or natural history, he always had something interesting to say. The one exception – and this was certainly a refreshing part of his writing – was politics; as far as I can tell, Sacks was almost wholly apolitical, preferring to focus on the constants of nature and the human mind rather than the ephemeral foibles of mankind. His columns in the New York Times were always a pleasure, and in his last few – written after he had announced his impending death in a moving piece – he explored topics dear to his heart: Darwin, the periodic table, his intense love of music, his satisfying and strange connection to Judaism as an atheist, and his gratitude for science, friends and the opportunity to be born, thrive and learn in a world full of chaos. In the column announcing the inevitable end he said, “I cannot pretend I am without fear. But my predominant feeling is one of gratitude. I have loved and been loved”.
Why remember John Wheeler and Oliver Sacks today? Because one taught us to look at the universe beyond ourselves, and the other taught us to look within ourselves. Both appealed to the better angels of our nature and to what we have in common rather than what separates us, asking us to constantly stay curious. These lessons seem to be quite relevant to our day and age. Wheeler told us that the laws of physics and the deep mysteries of the universe, even if they may not care about our fragile, bickering world of politics and social strife, beckon each one of us to explore their limits and essence irrespective of our race, nationality or gender. Sacks appealed to our common humanity and told us that deep within the human brain runs a thread that connects all of us on a continuum, independent again of our race, gender, nationality and political preferences. Two messages should stay with us:
Sacks: “Above all, I have been a sentient being, a thinking animal, on this beautiful planet, and that in itself has been an enormous privilege and adventure.”
Wheeler: “Behind it all is surely an idea so simple, so beautiful, that when we grasp it – in a decade, a century, or a millennium – we will all say to each other, how could it have been otherwise? How could we have been so stupid?”
This is my latest column for 3 Quarks Daily. Image credits: Wheeler, Sacks