I know that conferences like NeurIPS (formerly called NIPS) have asked for ethics and "broader impact" statements to accompany papers submitted to them. In principle I am all for this, since it is always good for scientists to think about the social implications of their work. I have also read the details of the requirements, and they aren't draconian, especially for highly theoretical papers whose broader impact is far from clear.
But from a fundamental philosophical viewpoint I still don't think this is useful. My problem is not with the morality of predicting impact but with the lack of utility. The history of science and technology shows that it is impossible to predict broader impact. When Maxwell published his electromagnetic equations, he could scarcely have imagined the social and political repercussions electrical power generation would have. When Einstein published his general theory of relativity, he could scarcely have imagined its broader impact on space exploration and GPS, in both war and peace. Perhaps most notably, nobody could have predicted the broader impacts of the discovery of the neutron or the discovery of DNA as the genetic material. I do not see how James Chadwick or Oswald Avery could have submitted broader impact statements with their papers; anything interesting they might have had to say would probably have been untrue a few years later, and anything they would have omitted would probably have turned out to be important.
My biggest problem is not that broader impact statements will put more of a burden on already-overworked researchers, or that they might inflame all kinds of radical social agendas, or that they might bias conferences against very good technical papers that struggle to demonstrate broad impact, or that they might stifle lines of research considered dangerous or biased. All these problems are real and should be acknowledged. But the real problem is simply that whatever points these statements make would almost certainly turn out to be wrong, because of the fundamental unpredictability and rapid progress of technology. They would then only cause confusion by sending people down a rabbit hole, one in which the rabbit not only does not exist but is likely to be a whole other creature. And this will be the case with every new technology, AI and CRISPR included.
The other problem with broader impact statements is what to do with them even when they are accurate, because accurate and actionable are two different things. Facial recognition software is an obvious example. It can be used to identify terrorists and criminals, but it can also be used to identify dissidents and put them in jail. So if I submit a broader impact statement with my facial recognition paper and point out these facts, now what? Would this kind of research be banned? That would be throwing the baby out with the bathwater. The fact is that science and technology are always dual use, and it is impossible to separate their good uses from their bad ones except as a matter of social choice after the fact. I am not saying that pointing out this dual use is a bad thing, but I am concerned that doing so might stifle good research for fear that it may be put to bad ends.
So what is the remedy? Except in obvious cases, I would say that science and technology should be allowed to play out the way they have played out since the time of Francis Bacon and the Royal Society: as open areas of inquiry, with no moral judgements made beforehand. In that sense science has always placed a severe burden on society and asked for a tough bargain in return. It says, "If you want me to be useful, don't put me in a straitjacket and try to predict how I will work out. Instead give me the unfettered freedom of discovery, and then accept both the benefits and the risks that come with this freedom." This is the way science has always worked. Personally I believe that we have done an excellent job maximizing its benefits and minimizing its risks, and I do not see why it would be different with any new technology, including machine learning. Let machine learning run unfettered; while we should be mindful of its broader impact, predicting it will be as futile as damming the ocean.
I think Richard Feynman does a great job addressing this very issue in the essays collected in "The Meaning of It All: Thoughts of a Citizen-Scientist".