Technological convergence in drug discovery and other endeavors

You would think that the Wright brothers’ historic flight from Kitty Hawk on December 17, 1903, had little to do with chemistry. And yet it did. The engine they used had a crankcase cast from aluminum; ever since, aluminum has been a crucial ingredient in lightweight flying machines. That aluminum would not have been available had industrial chemists like Charles Hall and Paul Héroult not developed processes like the Hall-Héroult process for refining the metal from its ore, bauxite. More elementally, the gasoline fueling the flight was the result of a refining process invented about half a century earlier by a Yale chemist named Benjamin Silliman. There was a fairly straight line from the Hall-Héroult and Silliman processes to Kitty Hawk.

The story of the Wright brothers’ powered flight illustrates the critical phenomenon of technological convergence that underlies all major technological developments in world history. Simply put, technological convergence refers to the fact that several enabling technologies have to come together in order for a specific overarching technology to work. And yet what’s often seen is only the technology that benefits, not the technology that enables.

We see technological convergence everywhere. Just to take a few of the most important innovations of the last two hundred years or so: The computer would not have been possible without the twin inventions of the transistor and silicon purification. MRI would not have been possible without the development of sophisticated software to deconvolute magnetic resonance signals and powerful magnets to observe those signals in the first place. There are other important global inventions that we take for granted - factory farming, made-to-order houses, fiber optics, even new tools like machine learning - none of which would have materialized had it not been for ancillary technologies that first had to mature.

Recognizing technological convergence is important, both because it helps us appreciate how much has to happen before a particular technology can embed itself in people’s everyday lives, and because it can help us recognize multiple threads of innovation that could potentially converge in the future - a risky but important vision that can help innovators and businessmen stay ahead of the curve. One important point to note: by no means does technological convergence by itself guarantee that an innovation rises to the top – political and social factors can be as or more crucial – but this convergence is often necessary even if not sufficient.

It’s interesting to think of technological convergence in my own field of drug development. Let’s look at a few innovations, both recent and older, that illustrate the phenomenon. Take a well-established technology like high-throughput screening (HTS). HTS came on the scene about thirty years ago, and since then has contributed significantly to the discovery of new medicines. What made the efficient screening of tens of thousands of compounds possible? Several convergent developments: recombinant DNA technology for obtaining reasonable quantities of pure proteins for screening, robotic techniques and automation for testing these compounds quickly at well-defined concentrations in multiple wells or plates, spectroscopic readouts like FRET for detecting whether compounds actually modulate their targets, and graphing and visualization software for mapping the results and quickly judging whether they made sense. These are just a few developments; in addition, there are techniques within these techniques that were also critical. For instance, recombinant DNA depended on methods for viral transfection, for splicing and ligation and for sequencing, and robotic automation depended on microelectronic control systems and materials for the smooth manipulation of robotic moving parts. Thus, not only is technology convergent but it also piggybacks, with one piece of technology building on another to produce a whole that is more than the sum of its parts, aiding in the success of a technology it wasn’t primarily designed for.
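
To make the data-handling end of this concrete, here is a minimal sketch in Python of the kind of bookkeeping an HTS informatics layer performs: normalizing raw well signals against controls, estimating assay quality with a Z'-factor and flagging hits. All of the readings, well names and thresholds below are invented purely for illustration; real screening software is far more elaborate.

```python
import statistics

# Hypothetical raw readings (e.g., a FRET signal) for a handful of wells on one plate.
# In a real screen these would come straight from the plate reader's output file.
positive_controls = [210.0, 205.5, 198.7, 202.3]      # fully inhibited control wells
negative_controls = [1050.2, 1023.8, 998.4, 1011.6]   # uninhibited (DMSO-only) control wells
compound_wells = {"A01": 990.3, "A02": 310.5, "A03": 1005.7, "A04": 150.2}

pos_mean = statistics.mean(positive_controls)
neg_mean = statistics.mean(negative_controls)

def percent_inhibition(signal):
    """Normalize a raw well signal to 0-100% inhibition using the plate controls."""
    return 100.0 * (neg_mean - signal) / (neg_mean - pos_mean)

# Z'-factor: a common measure of assay robustness (values above ~0.5 are usually considered good).
z_prime = 1 - 3 * (statistics.stdev(positive_controls) + statistics.stdev(negative_controls)) \
              / abs(neg_mean - pos_mean)

HIT_THRESHOLD = 50.0   # arbitrary cutoff for this illustration
hits = {well: percent_inhibition(signal) for well, signal in compound_wells.items()
        if percent_inhibition(signal) >= HIT_THRESHOLD}

print(f"Z' = {z_prime:.2f}")
print("Hits:", {well: f"{value:.1f}%" for well, value in hits.items()})
```

The point is not the particular arithmetic but the convergence behind it: none of these numbers exist unless robots, readouts and recombinant proteins have already delivered the plate of data.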

Below is a table of just a few other primary drug discovery technologies that would not have been possible without ancillary convergent technologies.

Primary technology | Convergent enabling technologies
Combinatorial chemistry | LCMS for purification, organic synthesis methodology, hardware (solid phase beads, plastic, tubes, glassware) for separation and bookkeeping.
Molecular modeling | Computing power (CPUs, GPUs), visualization software, crystal structures and databases (PDB, CSD etc.)
Directed evolution/phage display | Recombinant DNA technology, hardware (solid phase supports), buffer chemistry for elution.
DNA-encoded libraries | PCR, DNA sequencing technology (Illumina etc.), hardware (solid phase beads, micropipettes etc.), informatics software for deconvolution of results (see the sketch after this table).
NMR | Cryogenics, magnet production, software.
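
As a toy illustration of the informatics entry for DNA-encoded libraries above, here is a short Python sketch of what “deconvolution of results” amounts to at its simplest: counting how often each DNA barcode appears in the post-selection sequencing reads and translating the enriched barcodes back into chemical building blocks. The barcodes, read counts and tag-to-building-block mapping are all invented for illustration.

```python
from collections import Counter

# Invented sequencing reads from a hypothetical selection experiment; each read is a
# pair of DNA tags identifying the two building blocks of one library member.
sequencing_reads = [
    "ACGT-TTAG", "ACGT-TTAG", "GGCA-TTAG", "ACGT-CCGA",
    "ACGT-TTAG", "GGCA-CCGA", "ACGT-TTAG", "GGCA-TTAG",
]

# Invented lookup table from DNA tag to chemical building block.
building_blocks = {
    "ACGT": "amine-12", "GGCA": "amine-7",
    "TTAG": "acid-3",   "CCGA": "acid-9",
}

# Count reads per barcode pair; heavily enriched pairs point to putative binders.
counts = Counter(sequencing_reads)
for barcode, n in counts.most_common():
    tag1, tag2 = barcode.split("-")
    print(f"{building_blocks[tag1]} + {building_blocks[tag2]}: {n} reads")
```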

I have deliberately included NMR spectroscopy in the last row. A modern-day organic chemist’s work would be unthinkable without this technique. It of course depends crucially on the availability of high-field magnets and the cryogenic techniques that keep the magnet cold by immersion in liquid helium, but it also depends fundamentally on the physics of nuclear spins worked out by Isidor Rabi, Edward Purcell, Richard Ernst and others. Since this post is about technology I won’t say anything further about science, but it should be obvious that every major technology rests on a foundation of pure science which has to be developed for decades before it can be applied, often with no clear goal in mind. Sometimes the application can be very quick, however. For instance, it’s not an accident that solid phase supports appear in three of the five innovations listed above. Bruce Merrifield won the 1984 Nobel Prize in chemistry for his development of solid-phase peptide synthesis, and a little more than thirty years later, that development has impacted many enabling drug development techniques.

There are two interesting conclusions that emerge from considering technological convergence. The first is the depressing conclusion that if ancillary technologies haven’t kept pace, then even the most brilliant innovative idea will get nowhere. Even the most perspicacious inventor won’t be able to make a dent in the technology universe, simply because the rest of technology hasn’t kept up with him. A good example is the early spate of mobile phones that appeared in the early 90s and didn’t go anywhere. Not only were they too expensive, but they simply weren’t ready for prime time, because broadband internet, touchscreens and advanced battery technology were not yet widely available. Similarly, the iPhone and iPod took off not just because of Steve Jobs’ sales skills and their sleek GUI, but because broadband internet, mp3s (both legal and pirated) and advanced lithium ion batteries were now available for mass production. In fact, the iPod and the iPhone showcase convergent technologies in another interesting way; their sales skyrocketed because of the iTunes Music Store and the iPhone App Store. As the story goes, Jobs was not sold on the app store idea for a long time because he characteristically wanted to keep iPhone apps exclusive. It was only flagging initial sales combined with insistent prodding from the iPhone team that changed his mind. In this case, therefore, the true convergent technology was not really battery chemistry or the accelerometer in the phone but a simple software innovation and a website.

The more positive conclusion to be drawn from the story of convergent technology is to keep track of ancillary enabling technologies if you want to stay ahead of the curve. In the case of the iPod, Jobs seems to have had the patience to wait until USB, battery and internet technologies became mature enough for Apple to release the device; in spite of being the third or fourth mp3 player on the market, the iPod virtually took over in a few years. What this means for innovators and technologists is that they should keep an eye on the ‘fringe’, on seemingly minor details of their idea that might have a crucial impact on its development or lack thereof. If you try to launch an innovative product before the ancillary technologies have caught up, you won’t achieve convergence and the product might well be doomed.

Of course, groundbreaking ancillary technologies are often obvious only in retrospect and are unexpected when they appear – the mouse and GUI developed at Xerox PARC come to mind – but that does not mean they are invisible. One reason John D. Rockefeller became so spectacularly successful and wealthy is that he looked around the corner and saw not one but three key technologies: oil drilling, oil transportation and oil refining. Similarly, Edison owed part of his success to the fact that he was an all-rounder, developing everything from electrical circuits to the right materials for bulb filaments; chemistry, electricity, mechanical engineering – all found a home in Edison’s lab. Thus, while it’s not guaranteed to work, one formula for noting the presence or absence of technological convergence is to cast a wide net, to work the field as well as its corners, to spend serious time exploring even the small parts that are expected to contribute to the whole. Recognizing technological convergence requires a can-do attitude and the enthusiasm to look everywhere for every possible lead.

At the very least, being cognizant of convergent technologies can prevent us from wasting time and effort. Combinatorial chemistry, for instance, went nowhere at the beginning because HTS was not yet developed. Molecular modeling went nowhere because sampling and scoring methods weren’t well developed. Genome sequencing by itself went nowhere because simply having a list of genes rang hollow until the technologies for interrogating their protein products and functions became equally efficient. Developing your technology in a silo, no matter how promising it looks by itself, can be a losing effort if it is not fortified with the other developing technologies you should be on the lookout for.

Technology, like life on earth, is part of an ecosystem. Even breakthrough technology does not develop in a vacuum. Without convergence between different innovations, every piece of technology would be stillborn. Without the aluminum, without the refined petroleum, the Wright Flyer would have lain still in the sands of the Outer Banks.

If you want to improve AI, let it evolve toward emergence

One of my favorite quotes about artificial intelligence is often attributed to pioneering computer scientists Hans Moravec and Marvin Minsky. To paraphrase: “The most important thing we have learned from three decades of AI research is that the hard things are easy and the easy things are hard”. In other words, we have been hoodwinked for a long time. We thought that vision and locomotion and housework would be easy and language recognition and chess and driving would be hard. And yet it has turned out that we have made significant strides in tackling the latter while hardly making a dent in the former.
Why is this? Clearly one trivial reason is that we failed to define “easy” and “hard” properly, so in one sense it’s a question of semantics. But the question still persists: what makes the easy problems hard? We got fooled by the easy problems because we took them for granted. Things like facial recognition and locomotion come so easily to human beings, even human beings who are a few months old, that we thought they would be easy for computers too. But the biggest obstacle for an AI today is not the chess-playing ability of a Garry Kasparov but the simple image recognition abilities of an average one-year-old.
What we forgot was that these things seem easy only because they are the sleek final façade of a four billion year process that progressed with countless fits and starts, wrong alleys and dead ends and random experimentation. We see the bare shining mountaintop but we don’t see the tortuous road leading to it. If you looked under the hood, in both space and time, of a seemingly simple act like bipedal navigation over a slightly rocky surface, you would find a veritable mess of failed and successful experiments in the history of life. If the brain were an electrical box that presented an exterior of wondrous simplicity and efficiency, inside the box would be fused wires, wires leading nowhere, wires with the middles cut off, wires sprouting other wires, stillborn wires; a mélange of wired chaos with a thread of accident and opportunity poking through it. We see only that ascendant thread, not the field it resides in, littered with dead cousins and ancestors.
Over the decades, much of AI tried to grasp the essence of this evolutionary circus by trying to reproduce the essential structure of the human brain. The culmination of these efforts was the neural network, a layered abstract model of virtual electronic neurons trying to capture different aspects of reality, with adjustable weights on every layer and a feedback loop that minimized the difference between the model and reality. So far so good, but neural networks only model the end product, not the process. For the longest time they were not allowed to deliberately make mistakes and mirror the contingent, error-ridden processes of evolution that are grounded in mutation and genetic recombination. They made the evolution of thinking seem far more deterministic than it was, and if there’s anything we know about evolution by now, it’s that one cannot understand or reproduce it unless one understands the general process of clumsy, aimless progress intrinsic to its workings.
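To make that contrast concrete, here is a deliberately crude Python sketch of what an evolution-flavored alternative looks like: a tiny network whose weights are improved not by calculated corrections but by random mutation followed by selection of the fittest variant. The task (XOR), the network size and the mutation settings are arbitrary choices for illustration, not a claim about how brains or production AI systems actually work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR with a tiny network, using mutation and selection instead of
# gradient-based corrections. Everything below is an illustrative caricature.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def loss(params):
    """Mean squared gap between the network's output and the target."""
    W1, b1, W2, b2 = params
    hidden = np.tanh(X @ W1 + b1)
    output = hidden @ W2 + b2
    return ((output - y) ** 2).mean()

def mutate(params, sigma=0.2):
    """Random, undirected 'mutations' applied to every weight."""
    return [p + rng.normal(0, sigma, p.shape) for p in params]

# Start from a random ancestor; each generation, 20 mutated offspring compete with
# the parent and only the fittest genome survives.
parent = [rng.normal(0, 1, (2, 4)), np.zeros(4), rng.normal(0, 1, 4), np.zeros(1)]
parent_loss = loss(parent)
for generation in range(500):
    for _ in range(20):
        child = mutate(parent)
        child_loss = loss(child)
        if child_loss < parent_loss:
            parent, parent_loss = child, child_loss

print("final loss:", round(parent_loss, 4))   # typically ends up near zero on this toy task
```

Gradient-based training would solve this toy problem far faster; the point is only that blind variation filtered by selection is itself a workable way to shape a network, which is closer in spirit to how brains actually came about.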
But apart from the propensity of evolution to make mistakes, there is another, much broader aspect of evolution that I believe neural nets or other models of AI must capture in order to be useful or credible or both. That aspect is emergence, a feature of the human brain that is directly the product of its messy evolution. Not only could emergence help AI better approach the actual process of thinking and realize its scientific and financial potential, but it could also lead to reconciliation between two fields that are often and unnecessarily at war with each other – science and philosophy.
The basic idea of emergence has been recognized for a long time, first by philosophers and then by scientists. Whether it’s a block of gold having color properties that cannot be ascribed to individual gold atoms, individual termites building a giant mound or thousands of starlings forming stunning, sweeping, transient geometric patterns that carpet the sky for miles, we have known that the whole is often very different from both the individual parts and the sum of the parts. Or as one of the philosophical fathers of emergence, the physicist Philip Anderson, wrote in a now-famous article, “More is different”. Anderson noted that the properties of a physical system cannot be directly derived from its individual constituents, and that more components are not just quantitatively but qualitatively different from fewer ones. Part of the reason for this is that both physics and biology are, in the words of Anderson’s fellow physicist Murray Gell-Mann, the result of “a small number of laws and a great number of accidents”. In the case of biological evolution the laws are the principles of natural selection and neutral drift; in the case of physical evolution the laws are the principles of general relativity, quantum mechanics and thermodynamics.
Emergence is partly a function of the great number of accidents that these small numbers of laws have been subjected to. In the case of biology the accidents come from random mutations leading to variation and selection; in the case of physics they come from forces and fields causing matter to stick together in certain ways and not others to form stars, galaxies and planets. Crucially, evolution occurred while immersed in this sea of stochastic emergence, and that led to complex feedback loops between fundamental and emergent laws. The human brain in particular is the end product of the basic laws of chemistry and physics being subjected to a variety of other emergent laws imposed by things like group and sexual selection, tribalism, altruism, predation avoidance and prey seeking. Agriculture, cities, animal domestication, gossip, religion, empires, democracy, despotism; all of humanity’s special creations are emergent phenomena. Mind is the ultimate emergent product of the stochastic evolution of the brain. So is consciousness. It’s because of this universal feature of accidental emergence that even a supercomputer (or an omniscient God, if you will) that had all the laws of physics built into it, and that could map every one of the countless trajectories that life would take into the future, would be unable to predict the shape and function of the human brain in the year 2018.
The mind, which is itself an emergent product of brain evolution, is very good at modeling emergence. As just one example, our minds are quite competent at understanding both individual needs and societal ones. We are good at comprehending the behavior of matter on both the microscopic scale – although it did take some very determined and brilliant efforts to achieve this feat – and the macroscopic scale. In fact, we have so completely assimilated the laws of emergent physics in our brains that implementing them – throwing a javelin or anticipating the speed of a charging elephant, for instance – is instinctive and a matter of practice rather than active calculation. Our minds, which build constantly updated models of the world, can take emergent behavior into account and can apply the right level of emergent detail to the right problem. Evolution has had a leisurely four billion years to experiment with its creations while buffeted by the winds of stochastic emergence, so it’s perhaps not surprising that it has endowed one of its most successful species with the ability to intuitively grasp emergent reality.
And yet we are largely failing to take this emergent reality into account when imagining and building new AIs. Even now, most of our efforts at AI are highly reductionist. We are good at writing algorithms to model individual neurons as well as individual layers of them, but we ignore the higher-level emergent behavior that is expected to result from a real neural network in a real human brain. Through a process called backpropagation, neural networks are getting better at narrowing the gap between reality and the models they represent, by setting up feedback loops and adjusting the weights of individual neurons, but whether their models are trying to capture the right level of emergent detail is a question they don’t address. If your model is capturing the wrong emergent details, then you are optimizing the wrong model.
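For contrast with the evolutionary sketch above, here is the same toy problem trained the standard way, by backpropagation: a feedback loop that measures the gap between the model’s output and reality and nudges every weight to shrink it. Again, the architecture, learning rate and iteration count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# The same XOR toy task, this time learned by backpropagation instead of mutation and selection.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

learning_rate = 0.5
for step in range(5000):
    # Forward pass: the model's current guess about reality.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: propagate the error back through the layers and adjust every
    # weight downhill along the gradient of the (cross-entropy) loss.
    d_output = output - y
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ d_output
    b2 -= learning_rate * d_output.sum(axis=0)
    W1 -= learning_rate * X.T @ d_hidden
    b1 -= learning_rate * d_hidden.sum(axis=0)

print(np.round(output, 2))   # drifts toward [0, 1, 1, 0] as training proceeds
```

The two sketches solve the same problem with very different philosophies: backpropagation computes exactly which way to nudge each weight, while the evolutionary version stumbles forward on accidents that happen to work.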
Even if your model does solve the right problem, it will be such a specialized solution that it won’t apply to other related problems, which means you will be unable to build an artificial general intelligence (AGI). Consider the example of image recognition, a problem that neural nets and their machine learning algorithms are supposed to especially excel at. It’s often observed that if you introduce a bit of noise into an image, or make it slightly different from an existing similar image, the neural net starts making mistakes. And yet children do this kind of recognition of “different but similar” images effortlessly and all the time. When shown an elephant, for instance, a child will be able to identify elephants in a variety of contexts, whether it’s a real elephant, a stuffed elephant toy, a silhouette of an elephant or a rock formation that traces out the outline of an elephant. Each one of these entities is radically different in its details, yet they all say “elephant” to the mind of the child, though not to the neural network.
Why is this? I believe that emergence is one of the key secret sauces accounting for the difference. The child recognizes both a real elephant and a rock formation as an elephant because her brain, instead of relying on low-level “elephant features” like the detailed texture of the skin and the black or gray colors, is relying on high-level “emergent elephant features” like the general shape and more abstract topological qualities. The right level of emergent abstraction makes the child succeed where the computer fails. And yet the child can – with some practice – also switch between different levels of emergence and realize, for instance, that the rock formation is not going to charge her. Through practice and exploration, the child perfects this application of emergent recognition. Perhaps that’s why it’s important to heed Alan Turing’s prescription for building intelligent machines, in which he told us to endow a machine with the curiosity of a child and let intelligence evolve.
Another emergent feature of living organisms is what we call “emotion” or “instinct”. For the longest time we believed that human beings make rational decisions when evaluating their complex social and physical environments. But pioneering work by psychologists and neuroscientists ranging from Daniel Kahneman to Antonio Damasio has now shown that emotion and logical thinking both play a role in deciding how to react to an environmental stimulus. Take again the example of the child recognizing an elephant; one reason she is so good at recognizing elephant-like features is that those features trigger a certain kind of emotional reaction in her. Not only are the logical feature-selecting parts of her brain activated, but so are her hormonal systems, perhaps imperceptibly; not only does she start thinking, but even before this, her palms may turn sweaty and her heartbeat may increase. Research has now consistently shown that our instinctive systems make decisions before our logical systems even kick in. This behavior was honed in humans by millions of years of living and passing on their genes in the African savannah, where split-second decisions had to be made to ensure that you weren’t weeded out of the gene pool. This kind of emotional reaction is thus also a kind of emergent behavior. It comes about because of the interaction of lower-level entities (DNA sequences and hormone receptors) with environmental and cultural cues and learning. If an AI does not take emotional responses into account, it will likely never be able to recognize the kinds of abstract features that scream out “elephant” in a child’s mind.
As the biologist Theodosius Dobzhansky famously quipped, “Nothing in biology makes sense except in the light of evolution”, and I would extend that principle to the construction of artificial intelligences. Human intelligence is indeed the result of a few universal laws combined with an enormous number of accidents. These accidents have driven evolution to select for brains that can take stochastic emergent reality into account and build generalized models that can switch between different levels of emergent abstraction. It seems to me that mimicking this central feature of evolution would not just lead to better AIs but would be an essential feature of any truly general AI. Perhaps then the easy problems would truly become easy to solve.

This is my latest column for 3 Quarks Daily.