Neuroscientist and AI researcher Gary Marcus has an op-ed in the NYT in which he bemoans the lack of international collaboration in AI, a limitation that he thinks is significantly hampering progress in the field. He says that AI researchers should consider a global effort akin to CERN: a massively funded, wide-ranging project to solve specific problems in AI that would benefit from the expertise of hundreds of independent researchers. This hivemind effort could potentially clear the AI pipeline of several clogs that have held back progress.
On the face of it, this is not a bad idea. Marcus's opinion is that private and public research each have significant limitations which a meld of the two could potentially overcome.
"Academic labs are too small. Take the development of automated machine reading, which is a key to building any truly intelligent system. Too many separate components are needed for any one lab to tackle the problem. A full solution will incorporate advances in natural language processing (e.g., parsing sentences into words and phrases), knowledge representation (e.g., integrating the content of sentences with other sources of knowledge) and inference (reconstructing what is implied but not written). Each of those problems represents a lifetime of work for any single university lab.
"Corporate labs like those of Google and Facebook have the resources to tackle big questions, but in a world of quarterly reports and bottom lines, they tend to concentrate on narrow problems like optimizing advertisement placement or automatically screening videos for offensive content. There is nothing wrong with such research, but it is unlikely to lead to major breakthroughs. Even Google Translate, which pulls off the neat trick of approximating translations by statistically associating sentences across languages, doesn’t understand a word of what it is translating.
"I look with envy at my peers in high-energy physics, and in particular at CERN, the European Organization for Nuclear Research, a huge, international collaboration, with thousands of scientists and billions of dollars of funding. They pursue ambitious, tightly defined projects (like using the Large Hadron Collider to discover the Higgs boson) and share their results with the world, rather than restricting them to a single country or corporation. Even the largest “open” efforts at A.I., like OpenAI, which has about 50 staff members and is sponsored in part by Elon Musk, is tiny by comparison.
An international A.I. mission focused on teaching machines to read could genuinely change the world for the better — the more so if it made A.I. a public good, rather than the property of a privileged few."
This is a good point. For all its commitment to blue-sky research, Google is not exactly the Bell Labs of 2017, and except for highly targeted research like that done at Verily and Calico, it's still committed to work that has more or less immediate applications to its flagship products. And as Marcus says, academic labs suffer from limits to capacity that keep them from working on the big picture.
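To make the quoted breakdown concrete, here is a minimal, purely illustrative sketch of a machine-reading pipeline, with the three components Marcus names reduced to toy functions. Everything in it, the "X is a Y" sentences, the triple representation, and the one-step transitivity rule, is invented for this example; each stage stands in for what the op-ed rightly calls a lifetime of work.

```python
# Toy machine-reading pipeline: parsing, knowledge representation, inference.
# All names and formats here are made up for illustration.

def parse(sentence):
    """Parsing: turn a trivially structured "X is a Y" sentence
    into a (subject, relation, object) triple."""
    subject, _, _, obj = sentence.rstrip(".").split()
    return (subject, "is_a", obj)

def integrate(triples, knowledge_base):
    """Knowledge representation: merge newly parsed facts into an
    existing store of facts (here, just a set of triples)."""
    knowledge_base.update(triples)
    return knowledge_base

def infer(knowledge_base):
    """Inference: derive what is implied but never written, here via
    a single round of transitivity over the "is_a" relation."""
    derived = set()
    for (a, _, b) in knowledge_base:
        for (c, _, d) in knowledge_base:
            if b == c:
                derived.add((a, "is_a", d))
    return knowledge_base | derived

kb = integrate({parse(s) for s in ["Fido is a dog.", "dog is a mammal."]}, set())
print(infer(kb))  # includes the unstated fact ('Fido', 'is_a', 'mammal')
```

Real parsing, representation and inference are each unsolved at scale, which is exactly why no single lab can own the whole pipeline.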
A CERN for AI wouldn't be a bad idea, but it would differ from the real CERN in some key respects. Most notably, unlike discovering the Higgs boson, AI has immense potential social, economic and political ramifications. Thus, keeping the research at a CERN-like facility open and free for all would be a steep challenge, with governments and individuals constantly vying for a piece of the pie. In addition, there would be important IP issues if corporations were funding the endeavor. And even CERN had to contend with paranoid fears of mini black holes, so one can only imagine how much the more realistic (albeit more modest) fears of AI would be blown out of proportion.
As interesting as a CERN-like AI facility is, I think another metaphor for a global AI project would be the Manhattan Project. Now, let me be the first to say that I consider most comparisons of Big Science projects to the Manhattan Project glib and ill-considered; comparing almost any peacetime project, with its necessarily limited resources, to a wartime project that benefited from a virtually unlimited supply of resources brought to bear with great urgency is a fraught exercise. And yet I think the Manhattan Project supplies at least one ingredient for successful AI research that Marcus does not really talk about: the essentially interdisciplinary nature of tackling big problems like nuclear weapons or artificial intelligence.
What seems to be missing from a lot of the AI research taking place today is scientists from all disciplines working closely together in an open, free-for-all environment. That is not to say that individual scientists have not collaborated in the field, nor that fields like neuroscience and biology have not given computer scientists a lot to think about. But a practical arrangement in which generally smart people from a variety of fields work intensely on a few well-defined AI problems still seems to be missing.
The main reason why this kind of interdisciplinary work may be key to cracking AI is very simple: in a very general sense, there are no experts in the field. It's too new for anyone to really claim expertise. The situation was very similar in the Manhattan Project. While physicists are most associated with the atomic bomb, without specialists in chemistry, metallurgy, ordnance, engineering and electronics the bomb would have been impossible to create. More importantly, none of these people were experts in nuclear weapons, and they had to make key innovations on the fly. Take the idea of implosion, perhaps the most important and most novel scientific contribution to emerge from the project: Seth Neddermeyer, who had worked on cosmic rays before the war, came up with the initial idea of implosion that made the Nagasaki bomb possible. But Neddermeyer's idea would not have taken practical shape had it not been for the under-appreciated British physicist James Tuck, who came up with the ingenious design of arranging explosives with different detonation speeds around the plutonium core so that they would focus the shock wave inward toward the core, much as a lens focuses light. And Tuck's design would not have seen the light of day had the project not brought in an expert in the chemistry of explosives, George Kistiakowsky.
These people were experts in their own well-defined fields of science, but none of them were experts in nuclear weapons design; they were making it up as they went along. But they were generally smart and capable people, able to think widely outside their immediate spheres of expertise and to produce at least parts of ideas, which they could then hand over, in a sort of relay, to others holding different parts.
Similarly, nobody in the field of AI is an expert, and just like nuclear weapons, the field is still new enough and wide enough for all kinds of generally smart people to contribute to it. So along with a global effort, we should perhaps have a kind of Manhattan Project of AI that brings together computer scientists, neuroscientists, physicists, chemists, mathematicians and biologists, at a minimum, to dwell on the field's outstanding problems. These people don't need to be experts in AI or even know much about it, and they don't need to know how to implement every idea they have. But they do need to be idea generators, able to bounce ideas off one another, pursue odd leads and loose ends, and try to see the big picture. The Manhattan Project worked not because of experts pursuing deep ideas but because of a tight deadline and a concentrated effort by smart scientists who were encouraged to think outside the box as much as possible. Except for the constraints of wartime urgency, it should not be hard to replicate that effort, at least in its essentials.
AIs will not be identical to humans, since they don't occupy the same niche. Today AIs are better than humans at some tasks and in some ways. They have, for example, far more short-term memory than humans do (the famous 7 ± 2). www.robert-w-jones.com
I don't know, I feel like the goal of such a project would be ill-defined. What does 'create AI' mean, what are the milestones, and what is the ultimate goal of AI itself?
Another parallel between the Manhattan Project and AI is that both aim at producing technologies with the potential to wipe out all of humanity. A truly superhuman AI would certainly be able to create any type of weapon of mass destruction, even things that we can't conceive of now. Therefore, security concerns would be extremely important for such a project.
Am I the only one who is _really_ bored with the AI hype? Another AI moonshot, perhaps? And isn't there a big difference in the target, in that one project was aiming to build a bomb (a concrete, known deliverable), while the other aims to use a technology for, I don't know, something great? If there is a concrete, precise deliverable, then I'm sorry, I've missed it.
I'm also tired of the constant "me too" aspect of technology these days, and of the fact that it concentrates on delivering things that humans can already broadly do well (speech, image recognition, driving...). My car enables me to drive four people with luggage through rain and sleet; by contrast, what the phone has given me is still immensely insignificant.
I think we really need to re-instill some "need-based" aspect into the technology hype, and hold technology companies to at least the same standards that pharma companies are held to.