Field of Science

"Clueless machines being ceded authority far beyond their competence"


Edge.org, which is well known for posing big-picture questions and inviting leading thinkers to answer them, has a relevant one this year: "What do you think about machines that think?" As usual there are a lot of very interesting responses from a variety of writers, scientists, philosophers and members of the literati. But one that really got my attention was by the philosopher and cognitive scientist Daniel Dennett.



I guess I got drawn in by Dennett’s response because of his targeted dagger-thrust against The Singularity: "It has the earmarks of an urban legend: a certain scientific plausibility ('Well, in principle I guess it's possible!') coupled with a deliciously shudder-inducing punch line ('We'd be ruled by robots!')."

He then goes on to say this:

These alarm calls distract us from a more pressing problem, an impending disaster that won't need any help from Moore's Law or further breakthroughs in theory to reach its much closer tipping point: after centuries of hard-won understanding of nature that now permits us, for the first time in history, to control many aspects of our destinies, we are on the verge of abdicating this control to artificial agents that can't think, prematurely putting civilization on auto-pilot.

The problem, then, is not in trusting truly intelligent machines but in becoming increasingly dependent on unintelligent machines which we believe – or desperately want to believe – are intelligent. Desperately so because of our dependence on them; Dennett’s examples of GPS and of computers for simple arithmetic are good ones. The same could be said of a host of other technologies coming online, from Siri to airplane navigation to Watson-like “intelligences” that are likely to become a routine basis for multifactorial tasks like medical diagnosis. The problem, as Dennett points out, is that belief in such technologies packs a double whammy: on one hand we have become so dependent on them that we cannot imagine relinquishing their presence in our lives, while on the other, precisely because of this ubiquitous presence, we consciously or unconsciously endow them with attributes far superior to those they actually possess.


Thus,

The real danger, then, is not machines that are more intelligent than we are usurping our role as captains of our destinies. The real danger is basically clueless machines being ceded authority far beyond their competence.

One reason I found Dennett’s words compelling was that they reminded me of the structural error in the cloud computing paper which I tweeted about and which Derek concurrently blogged about. In that case the error seems to be a simple problem of transcription, but the episode is still a reminder that computational algorithms can pass along chemically and biologically nonsensical structures without flagging them.

Fortunately, unlike engineering and technology, biology and chemistry are still too complex for us to consider ceding authority to machines far beyond their competence. But since computers are inevitably making their way into these fields by leaps and bounds, this is a danger we should already be well aware of. Whether you are being seduced into thinking that a protein’s motions as deduced by molecular dynamics correspond to its real motions in the human body, or whether you think you can plumb startling new correlations between molecular properties and biological effects using the latest Big Data and machine learning techniques, unimpeded belief in the illusion of machine intelligence can be only a few typed commands away. Just as with Siri and Watson, MD and machine learning can illuminate. And just as with Siri and Watson, they can even more easily mislead.

So the next time you grab a result off the computer screen and send it toward the clinic, take a look at whether that molecule contains a vinyl alcohol. Otherwise you might end up ceding more than just your authority to the age of the machines.
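That kind of sanity check on machine output can be made concrete with a toy filter. The sketch below is purely illustrative (the function name and fragment list are my own inventions): it flags SMILES strings containing an obvious enol (vinyl alcohol) pattern by naive substring matching. A real pipeline would use proper SMARTS substructure search in a cheminformatics toolkit such as RDKit, since substring tests miss most valid encodings of the same structure.

```python
# Toy sanity check: flag SMILES strings that contain an obvious enol
# (vinyl alcohol, C=C-OH) fragment, a motif that in a drug-like structure
# is often a sign of a transcription error rather than a real molecule.
# Naive substring matching, for illustration only.

SUSPECT_FRAGMENTS = ["C=CO", "OC=C", "C(O)=C", "C=C(O)"]

def looks_like_enol(smiles: str) -> bool:
    """Return True if the SMILES string contains an obvious enol fragment."""
    return any(frag in smiles for frag in SUSPECT_FRAGMENTS)

print(looks_like_enol("C=CO"))     # vinyl alcohol itself -> True
print(looks_like_enol("CC(=O)C"))  # acetone, the stable keto form -> False
```

The point is not the code but the habit: a few lines of human-written skepticism between the computed structure and the decision it feeds.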

1 comment:

  1. My favorite piece is by George Church.

    http://edge.org/response-detail/26027

