On AlphaGo, chemical synthesis and the rise of the intuition machines

There is a very interesting article by the quantum computing and collaborative science pioneer Michael Nielsen in Quanta Magazine on the recent victory of Google DeepMind’s program AlphaGo over Lee Sedol, one of the world’s strongest Go players. In the article Nielsen tries to explain what makes this victory so special. Some people seem to think that Go is just a more complex version of chess, with more branching possibilities and solutions, and that since Deep Blue beat Kasparov in 1997 and we have acquired far more computing power since then, this victory is simply one more summit on the same journey.

In the article Nielsen explains why this belief is incorrect. Go is very different from chess for many reasons. Not only is the number of possible moves and branch points exponentially greater, but the winning features of Go are more nebulous and far more a matter of intuition. The difference between a win for black and a win for white in chess is clear: it’s when the opposing king is checkmated. But the difference between a win for black and a win for white in Go can be very subtle; it’s a question of whose stones do a better job of surrounding, and the definition of “surrounding” can be marginal. Unlike chess, where all your pieces are on display, in Go many of your stones are held in reserve, so your opponent has to consider those stones too when he or she makes a move. And unlike chess, where even an amateur can recognize a winning board, a winning board in Go may be only slightly different from a losing board. In his book “On China”, Henry Kissinger says that China’s strategy is like Go while the West’s has been like chess. That’s something to think about.

However, the important thing here is that it’s precisely these complex features that make Go a game of subtle intuition: for a human Go player, recognizing a winning board is as much a matter of intuition and feeling as of mechanistic, rational analysis. These same features make it far harder for a machine to defeat a Go champion than a chess champion. And yet two weeks ago a machine did exactly that.

What made this victory possible? According to Nielsen, it was the fact that AlphaGo’s algorithm was effectively trained to capture human intuition. Its creators did this by training the program’s neural nets on a vast number of board positions from past games between strong human players. These positions were human products; they were products of the intuition employed by those players. The program did not need to understand that intuition; it simply needed to learn it by looking at thousands of cases where the intuition had worked. AlphaGo’s creators then had the neural net play against itself, tweaking its parameters until it achieved successful results. Ultimately, when the program was deployed, not only could it calculate a mind-boggling number of candidate boards, but it could also use elements of human intuition to judge which ones looked good.
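
To make that two-stage recipe concrete, here is a deliberately toy sketch in Python: a linear “board evaluator” fitted first to invented “human” win/loss labels and then to labels it generates for itself. The features, dimensions and learning rate are all made up, and nothing here resembles AlphaGo’s actual deep networks or Monte Carlo tree search; it only illustrates how winning positions can supervise a parametric model that is then refined by something like self-play.

```python
# Toy sketch of "learn from human games, then refine by self-play".
# NOT AlphaGo's architecture; just a logistic-regression board evaluator
# with invented features, to show how winning positions supervise a model.
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 64        # pretend each board is summarized by 64 numbers
LEARNING_RATE = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_on_positions(weights, boards, outcomes, epochs=100):
    """Supervised phase: nudge weights so boards that led to wins (label 1)
    score higher than boards that led to losses (label 0)."""
    for _ in range(epochs):
        preds = sigmoid(boards @ weights)
        grad = boards.T @ (preds - outcomes) / len(outcomes)
        weights = weights - LEARNING_RATE * grad
    return weights

# Fake "expert" data: feature vectors for past boards plus win/loss labels.
human_boards = rng.normal(size=(1000, N_FEATURES))
hidden_pattern = rng.normal(size=N_FEATURES)      # hidden structure of "good" boards
human_outcomes = (human_boards @ hidden_pattern > 0).astype(float)

weights = train_on_positions(np.zeros(N_FEATURES), human_boards, human_outcomes)

# Crude stand-in for self-play: generate fresh positions, label them with the
# evaluator's own (noisy) judgment, and keep training on those labels.
for _ in range(10):
    new_boards = rng.normal(size=(200, N_FEATURES))
    noisy_scores = sigmoid(new_boards @ weights) + rng.normal(scale=0.1, size=200)
    weights = train_on_positions(weights, new_boards,
                                 (noisy_scores > 0.5).astype(float), epochs=20)

accuracy = ((sigmoid(human_boards @ weights) > 0.5) == human_outcomes).mean()
print(f"agreement with the 'expert' labels after training: {accuracy:.2f}")
```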

Nielsen thinks that it’s this ability to capture important elements of human intuition that marks AlphaGo’s victory as the start of a new era in artificial intelligence. Human beings tend to think that intuition is perhaps the most important thing distinguishing them from machines. And yet if we think of intuition as pattern recognition and extrapolation acquired by evaluating thousands of past examples, there is no reason why a machine cannot learn to intuit its way through a tough problem. In our case those countless examples have been ingrained by millions of years of neural and biological evolution; in the case of machines, we will provide the examples ourselves.

I thought of AlphaGo’s victory as I read Derek’s post on automating synthesis by having a smart algorithm work through all the intermediates, starting materials and potential branch points of a complex molecule’s synthesis. Derek thinks that the time when such automated synthesis starts contributing significantly to the production of complex molecules like drugs and other materials is not too far off, and I tend to agree with him. People have been working on applying AI to synthesis since E J Corey’s LHASA program and Carl Djerassi’s work on AI in chemistry in the 1960s and 70s, and it does seem that we are getting close to at least a moderately interesting tipping point in the field.
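
As a rough illustration of what “working through the intermediates, starting materials and branch points” means computationally, here is a toy retrosynthetic search in Python. The “molecules” are plain strings and the disconnection table is invented; real programs, from LHASA onwards, operate on molecular graphs using reaction templates or learned models, not lookup tables like this.

```python
# Toy retrosynthetic tree search: enumerate routes from a target back to
# purchasable starting materials. Molecules are strings and the disconnection
# rules are invented; this only shows the shape of the search problem.
RETRO_RULES = {
    # target: list of alternative precursor sets (the branch points)
    "target_drug": [["amide", "aryl_halide"], ["ester", "amine"]],
    "amide":       [["acid", "amine"]],
    "ester":       [["acid", "alcohol"]],
}

COMMERCIAL = {"acid", "amine", "alcohol", "aryl_halide"}  # "purchasable" materials

def plan(target, depth=0, max_depth=5):
    """Return a list of routes; each route is a list of (product, precursors) steps."""
    if target in COMMERCIAL:
        return [[]]                       # nothing left to make
    if depth >= max_depth or target not in RETRO_RULES:
        return []                         # dead end
    routes = []
    for precursors in RETRO_RULES[target]:
        sub_plans = [plan(p, depth + 1, max_depth) for p in precursors]
        if all(sub_plans):                # every precursor must itself be makeable
            # keep just the first sub-route per precursor, to keep the sketch short
            combined = [step for subs in sub_plans for step in subs[0]]
            routes.append([(target, precursors)] + combined)
    return routes

for i, route in enumerate(plan("target_drug"), 1):
    steps = " ; ".join(f"{prod} <= {' + '.join(pre)}" for prod, pre in route)
    print(f"route {i}: {steps}")
```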

One of the commenters on Derek’s blog brought up AlphaGo’s recent win and, when I pointed out Nielsen’s article, had the following to say:
"These are my thoughts exactly. When we don’t understand chemical intuition people tend to think “Well, if we can’t understand it ourselves, how can we code it into a computer?” But what if the question is turned back onto you, “If YOU don’t understand chemical intuition, how did YOU learn it?” You learned it by looking at lots of reactions and drawing imperceptible parallels between disparate data points. This is neural net computing and, as you say, this is what allows AlphaGo to have “intuition” without that feature being encoded into its logic. The way these computers are now learning is no different than how humans learn, I think we just need to provide them with informational infrastructure that allows them to efficiently and precisely navigate through the data we’ve generated so far. With Go that’s simple, since the moves are easily translated to 1’s and 0’s. With chemistry it’s tougher, but certainly nowhere near impossible."
“Tougher, but not impossible” is exactly what I think about applying AI to automated chemical synthesis and planning. The fact is that we have accumulated enough synthetic data and wisdom over fifty years of brilliant synthetic feats to serve as a very comprehensive database for any algorithm wanting to use the kind of strategy AlphaGo did. What I would suggest in addition is that the developers of these algorithms train their programs’ neural nets on past successful syntheses by chemists renowned not just for their knowledge but for the aesthetic sense they brought to their work. Foremost among these, of course, was the ‘Pope’, R B Woodward, who on winning the Nobel Prize was described by the Nobel committee as a “good second” to Nature when it came to implementing notions of art, beauty and elegance in organic synthesis. Woodward’s syntheses were widely considered beautiful and spare, and his command of stereochemistry in particular was unprecedented.

Fortunately, we also have a good guidebook to the use of elegance and aesthetics in organic synthesis: K C Nicolaou’s “Classics in Total Synthesis” series. My proposal would be for developers to train their algorithms on such classics. At every branch point in a synthesis campaign there are several possible directions, sometimes thousands. Clearly, people like Woodward picked certain directions over others, sometimes using perfectly rational principles and at other times using their sense of aesthetics. Together, these rational approaches and this aesthetic sense make up what we can call intuition in synthesis. It would not be that hard to train a synthesis AI’s neural nets to capture this intuition by repeatedly asking the program to learn which option, among the many possible, was actually chosen in a good synthesis. That in turn would allow us to tweak the weights of the ‘neurons’ in the program’s neural nets, just as the creators of AlphaGo did. Repeated often enough, this process would get us to a stage where the program’s decision to follow one route over another is dictated not just by brute-force computation of the number of steps, the availability of reagents, stereochemical complexity and so on, but also by what expert human beings did in the past.
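
A minimal sketch of this “learn which branch the expert chose” idea follows, again in Python. Candidate next steps at a branch point are scored by a shared linear model, trained with a softmax loss so that the step actually taken in a published synthesis gets the highest score. The three descriptors, and the hidden “aesthetic” preference used to fake the training data, are invented placeholders rather than output of any real chemistry software.

```python
# Learn expert branch choices: at each branch point we have several candidate
# next steps, only one of which the chemist actually took. Train a linear
# scorer so the chosen candidate gets the highest softmax probability.
import numpy as np

rng = np.random.default_rng(1)
N_DESCRIPTORS = 3     # e.g. step economy, stereocentres set, reagent availability (invented)
LEARNING_RATE = 0.05

def softmax(scores):
    z = np.exp(scores - scores.max())
    return z / z.sum()

def make_branch_point(n_candidates=5):
    """Fake data: candidate descriptor vectors plus the index the 'expert' chose,
    where the expert follows a hidden preference we hope to recover."""
    candidates = rng.normal(size=(n_candidates, N_DESCRIPTORS))
    hidden_preference = np.array([1.0, 2.0, 0.5])      # the invented "intuition"
    return candidates, int(np.argmax(candidates @ hidden_preference))

branch_points = [make_branch_point() for _ in range(2000)]
weights = np.zeros(N_DESCRIPTORS)

for epoch in range(30):
    for candidates, chosen in branch_points:
        probs = softmax(candidates @ weights)
        # Cross-entropy gradient: push the chosen candidate up, the rest down.
        grad = candidates.T @ probs - candidates[chosen]
        weights -= LEARNING_RATE * grad

correct = sum(int(np.argmax(candidates @ weights) == chosen)
              for candidates, chosen in branch_points)
print(f"model picks the expert's branch on {correct / len(branch_points):.0%} of examples")
```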

At this stage the algorithm would start capturing what we call the intuitive elements in the synthesis of complex molecules. Any such program that cuts synthetic planning and execution time by even 20% would have a very distinct advantage over the (non-existent?) competition. There is little doubt that not only would it be quickly adopted by industry and academia, but that its key functions would also be rapidly outsourced, just as software, car manufacturing and chemical synthesis are today. This in turn would have a huge impact on jobs, STEM careers and the economy. The political fallout could be transformational.

All this could potentially come about by the application of training algorithms similar to AlphaGo’s to the construction of synthesis AI software. It would be wise for chemists to anticipate these developments instead of denying their inevitable onward march. We are still a long way from when a computer comes up with a synthetic route rivaling those of a Bob Woodward, an E J Corey or a Phil Baran. But as far as actual impact is concerned, the computer does not need to win that contest; it simply needs to be good enough. And that future seems closer than we think.
  

3 comments:

  1. Humbled to have my comment featured here :) Good post, interesting thoughts. I am excited (and, as a synthetic chemist, rather fearful) to see this field develop. I think it would be truly awe-inspiring to witness the development of a superintelligent synthetic planning program. Perhaps we would learn entirely new ways to construct molecules just as AlphaGo demonstrated some wholly new moves in Go. Could be fascinating.

    1. I have always taken a cue from Peter Thiel, who has said that it's wrong to think that machines will somehow "replace" us. He thinks - and I agree - that the best results come when humans and machines work together. In fact my own field of drug design, where computers have made a bigger dent than they have in synthesis, is a good example. There the computers help us narrow down options and show us plausible paths, but we still have to use expert knowledge, sometimes based on very human constraints, to pick the final solution. I don't know exactly how the interplay between synthetic chemists and machines will work out in fifty years, but it could be along similar lines.

  2. Since AlphaGo's success, it appears to me that the word "intuition" has replaced "intelligent" as THE buzz word. I've not seen it defined in any logical/reductive fashion; "it's what the human mind does" is hardly adequate, imho. Seems to me obvious that "intuition" could be defined in terms of "articulation" (or similar narrative/language-focused expressions). A thought we aren't able to articulate is an "intuition". Avoiding the abyss of "conscious vs un-/sub-conscious" and its relationship to AI and consciousness seems unlikely. Given adequate hardware, which we probably don't quite yet have on our desktops, it seems obvious to me that any goal reached by any finite series of steps will be "soluble" by machine and (eventually) be more trustworthy than what any human mind can produce. It's simply a question of a pseudo-randomly designed (evolved) (biological) computer vs a purpose-designed 'silicon' one. It would make no sense for the biological machine to outperform the 'in silico' purpose-built machine, once we know enough to design it. Once we are able to design it, it seems likely to me that we will be able to design a better designer.

