
Why a "superhuman AI" won't destroy humanity (and solve drug development)

A significant part of the confusion about AI these days arises from the term "AI" being used with rampant abandon and hype to describe everything from self-driving cars to the chip inside your phone to elementary machine learning applications that are glorified linear or multiple regression models. It's driving me nuts. The media is of course the biggest culprit in this regard, and it really needs to come up with some "rules" for writing about the topic. Once you start distinguishing between real, potentially groundbreaking advances, which are few and far between, and incremental, interesting advances, which constitute the vast majority of "AI" applications, it becomes much easier to put the topic in perspective.
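To make that point concrete, here is a minimal sketch of the kind of product that often gets marketed as "AI" but is, under the hood, ordinary linear regression. The data and the "engagement metric" features are entirely invented for illustration:

```python
# A hypothetical "predictive AI engine" that is really just ordinary least squares.
# The dataset below is made up purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                  # three invented "engagement metrics"
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)                           # this is the entire "AI"
print(model.coef_, model.intercept_)                           # a handful of fitted coefficients
print(model.predict(X[:5]))                                    # "AI-powered predictions"
```

Nothing wrong with linear regression, of course; the problem is calling it by the same name we use for systems that are supposed to take over the world.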

That has not stopped people like Elon Musk from projecting their doom-and-gloom apocalyptic fears onto the AI landscape. Musk is undoubtedly a very intelligent man, but he is not an expert on AI, so his words need to be taken with a grain of salt. I would be far more interested in hearing from Kevin Kelly, a superb thinker and writer on technology who has been writing about AI and related topics for decades. Kelly, a former editor of Wired magazine, launched the latest salvo in the AI wars a few weeks ago when he wrote a very insightful piece in Wired about four reasons why he believes fears of an AI that will "take over humanity" are overblown. He casts these reasons as misconceptions about AI, which he then proceeds to question and dismantle. The whole thing is eminently worth reading.

The first and second misconceptions: Intelligence is a single dimension and is "general purpose".

This is a central point that often gets completely lost when people talk about AI. Most applications of machine intelligence that we have so far are very specific, but when people like Musk talk about AI they are talking about some kind of overarching, single intelligence that's good at everything. The media almost always mixes up multiple applications of AI in the same sentence, as in "AI did X, so imagine what it would be like when it could do Y"; lost is the realization that X and Y could refer to very different dimensions of intelligence, or at least significantly different ones. As Kelly succinctly puts it, "Intelligence is a combinatorial continuum. Multiple nodes, each node a continuum, create complexes of high diversity in high dimensions." Even humans are not good at optimizing along every single one of these dimensions, so it's unrealistic to imagine that AI will be. In other words, intelligence is horizontal, not vertical. The more realistic vision of AI is thus what it already has been: a form of augmented, not artificial, intelligence that helps humans with specific tasks, not some kind of general, omniscient, God-like entity that's good at everything. Some tasks that humans do will indeed be taken over by machines, but in the general scheme of things humans and machines will have to work together to solve the tough problems. Which brings us to Kelly's third misconception.
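One way to picture Kelly's "combinatorial continuum" is to treat each agent's abilities as a vector and ask whether any agent is better on every dimension at once (Pareto dominance). The sketch below is purely illustrative; the agents, dimensions and scores are all made up:

```python
# Illustrative only: abilities as vectors across several dimensions of "intelligence".
# The scores are invented; the point is that no agent dominates on every axis.
abilities = {
    "human expert": {"chess": 0.4, "wet-lab biology": 0.9, "office politics": 0.8},
    "chess engine": {"chess": 1.0, "wet-lab biology": 0.0, "office politics": 0.0},
    "ml model":     {"chess": 0.6, "wet-lab biology": 0.2, "office politics": 0.1},
}

def dominates(a, b):
    """True if agent a is at least as good as b on every dimension and strictly better on one."""
    dims = abilities[a].keys()
    return (all(abilities[a][d] >= abilities[b][d] for d in dims)
            and any(abilities[a][d] > abilities[b][d] for d in dims))

pairs = [(a, b) for a in abilities for b in abilities if a != b and dominates(a, b)]
print(pairs or "No agent dominates another on every listed dimension")
```

Being "smarter" along one axis says very little about the others, which is exactly why a single vertical scale of intelligence is misleading.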

The third misconception: A super intelligence can solve our major problems.

As a scientist working in drug development, I find this fallacy my favorite. Just the other day I was discussing with a colleague how the same kind of raw intelligence that produces youthful prodigies in physics and math fails to do so in highly applied fields like drug discovery: when was the last time you heard of a 25-year-old inventing a new drug mainly by thinking about it? That's why institutional knowledge and experience count in drug discovery, and that's why laying off old-timers is an especially bad idea in the drug development field.

In the case of drug discovery the reason is clear: it's pretty much impossible to figure out what a drug does to a complex, emergent biological system through pure thought. You have to do the hard experimental work, you have to find the right assays and animal models, you have to know what the right phenotype is, you have to do target validation using multiple techniques, and even after all this, when you put your drug into human beings you go to your favorite church and pray to your favorite God. None of this can be solved by just thinking about it, no matter what your IQ.

Kelly calls this belief that AI can solve major problems just by thinking about it "thinkism": "the fallacy that future levels of progress are only hindered by a lack of thinking power, or intelligence." However, as he continues:

"Thinking (intelligence) is only part of science; maybe even a small part. As one example, we don’t have enough proper data to come close to solving the death problem. In the case of working with living organisms, most of these experiments take calendar time. The slow metabolism of a cell cannot be sped up. They take years, or months, or at least days, to get results. If we want to know what happens to subatomic particles, we can’t just think about them. We have to build very large, very complex, very tricky physical structures to find out. Even if the smartest physicists were 1,000 times smarter than they are now, without a Collider, they will know nothing new."

To which I would also add that no amount of Big Data will automatically translate into the right data.

Kelly also has some useful words to keep in mind when it comes to computer simulations, and this is another caveat for drug discovery scientists:

"There is no doubt that a super AI can accelerate the process of science. We can make computer simulations of atoms or cells and we can keep speeding them up by many factors, but two issues limit the usefulness of simulations in obtaining instant progress. First, simulations and models can only be faster than their subjects because they leave something out. That is the nature of a model or simulation. Also worth noting: The testing, vetting and proving of those models also has to take place in calendar time to match the rate of their subjects. The testing of ground truth can’t be sped up."

These are all very relevant points. Most molecular simulations, for instance, are fast not just because of better computing power but because they intrinsically leave out parts of reality - and sometimes significant parts (that's the very definition of a 'model', in fact). And it's absolutely true that even sound models have to be tested through often tedious experiments. MD simulations are a good example. You can run an MD simulation for a very long time and hope to see all kinds of interesting fluctuations emerging on your computer screen, but the only way to know whether these fluctuations (a loop moving here, a pocket transiently opening up there) correspond to something real is by doing experiments - mutagenesis, NMR, gene editing and so on - which are expensive and time-consuming. Many of those fluctuations from the simulation may be irrelevant and may lead you down rabbit holes. There's no getting around this bottleneck in the near future, even if MD simulations were sped up another thousandfold; the toy calculation below illustrates why. The problem is not one of speed, it's one of ignorance and complex reality.
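A back-of-the-envelope estimate, with entirely made-up numbers, makes the point: if the wet-lab validation stays fixed, even a thousandfold speedup of the simulation barely changes the end-to-end timeline (an Amdahl's-law-style argument).

```python
# Toy estimate with invented numbers: speeding up only the simulation
# leaves the end-to-end time dominated by experimental validation.
simulation_days = 30        # hypothetical compute time for an MD campaign
validation_days = 365       # hypothetical wet-lab work (mutagenesis, NMR, ...)

for speedup in (1, 10, 1000):
    total = simulation_days / speedup + validation_days
    print(f"{speedup:>5}x faster simulation -> {total:.1f} days end to end")
```

The numbers are fictitious, but the shape of the result is not: the part you cannot accelerate sets the floor.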

The fourth misconception: Intelligence can be infinite.

Firstly, what does "infinite" intelligence even mean? Infinite computing power? The capacity to crunch an infinite amount of data? Growing infinitely along an infinite number of dimensions? Being able to solve every single one of our problems ranging from nuclear war to marriage compatibility? None of these tasks seems even remotely within reach in the near or far future. There is little doubt that AI will keep on crunching more data and keep on benefiting from more computing power, but its ultimate power will be circumscribed by the laws of emergent physical and biological systems that are constrained by the hard work of experiment and various levels of understanding.

AI will continue to make significant advances. It will continue to "take over" specific sectors of industry and human effort, mostly with little fanfare. The mass of workers it will continue to quietly displace will pose important social and political problems. But from a general standpoint, AI is unlikely to take over humanity, let alone destroy it. Instead it will do what pretty much every technological innovation in history has done: keep on making solid, incremental advances that will both improve our lives and create new problems for us to solve.

3 comments:

  1. While I have made versions of most of the points here myself, I think there are some confusions/omissions in the claims you repeat here.

    First, the point that intelligence (ok, ability at problem solving/tasks) may not be 1-dimensional is well taken. However, one should refrain from drawing too many consequences from this. For instance, while the ability that lets one (without knowledge of the algorithm) solve a Rubik's Cube may be very different from mathematical ability, that doesn't mean that sufficiently quickly applied mathematical considerations don't allow one to outperform intuitive Rubik's Cube solving (derive and prove an algorithm, then implement it). This is particularly relevant when considering AIs, who always have the option of writing a new piece of software to solve a new problem. In short, just because there are different individual abilities doesn't mean that it's not possible to produce systems that can outperform us (or the last generation) across the board. Whether or not there are theoretical limitations to this kind of ability improvement is an interesting question.

    --
    While the idea that one need merely analyze tons of data to solve all our problems is clearly false, don't forget that AI provides the other missing element in the equation as well: labor/effort. While obviously nothing gets done instantly, the potential for huge productivity gains in labor - especially once the whole supply chain is AI robots all the way down - is massive, and that means cheaply building the labs and colliders and even running the experiments. Also, often the limiting factor in experimental work is the availability of sufficiently expert researchers to oversee the experiments. I see no reason why a few decades into the AI revolution we couldn't have many, many square miles covered in robot-constructed, compact, robot-operated labs running massive numbers of biology or other experiments at once.

    --
    However, even merely making use of the existing research data, I think even relatively simple AIs will be able to extract important information/hypotheses that are difficult for humans to reach. Rather than just theorize about the science itself, one can theorize about the papers as well to get a sense of their reliability. Of course people do this, but the limitations on our memory size and reading speed limit the impact. In short, we may be surprised just how much information really is present in the existing scientific data set when each and every word is read and all interconnections considered (either by how little or how much it reveals).

    ---

    The issue about simulations just seems totally irrelevant. AI would (ultimately) do science in much the same way people do: by considering various simple hypotheses in an attempt to improve our accuracy in predicting things about the world, including identifying the features one can and can't leave out of models. I don't see where the assumption that they will stupidly run overly complex models comes from.

  2. Natural stupidity will be the only threat to humanity until we stop falling for the halo effect of billionaires and trusting them to shape our thinking.
    Having said that, I think there is a pretty reasonable risk coming from AI, although certainly not in a literal sense (aka 'kill all humans'). Some unpredictable secondary consequences are all but impossible to avoid, and trying to predict them may take, well, artificial intelligence.
    There was a nice overview of a relevant topic at SSC: http://slatestarcodex.com/2017/06/08/ssc-journal-club-ai-timelines/

  3. Your own sentence says it all: "AI is unlikely to take over humanity, let alone destroy it".

    The very small risk with huge consequences is precisely what those 'intelligent' people you were dismissing are worried about. Unless you are prepared to say it is IMPOSSIBLE, not 'unlikely', you have a pretty weak argument for downplaying this potential threat to the survival of the human race.

