I have written
about the ‘Big Brain Project’ a few times before, including a
post for the Canadian TV channel TVO last year. The project
basically seeks to make sense of that magnificent 3-pound bag of jelly inside
our skull at multiple levels, from molecules to neurons to interactions at the
whole brain level. The aims of the project are typical of ‘moon shot’
endeavors: ambitious, multidisciplinary, multi-institutional and, of course, expensive. Yet right after the project was announced in both the US (partly by President Obama) and in Europe, there were whispers of criticism that turned first into a trickle and then into a cascade. The criticism came at multiple levels: administrative, financial and scientific. But even setting aside the administrative and financial problems, many scientists saw issues with the project at the basic scientific level. The gist of those issues can be boiled down to one phrase: “biting off more than we can chew”. Basically we are trying to
engineer a complex, emergent system whose workings we still don’t understand,
even at basic levels of organization. Our data is impoverished and our
approaches are too reductionist. One major part of the project especially
suffers from this drawback: in silico
simulation of the brain at multiple levels, from neurons to entire mouse and
human brains. Now here’s a report from a committee which has examined the pros
and cons of the project and reached the conclusion that much of the criticism
was indeed valid, and that we are trying to achieve something for which we still
don’t have the tools. The report is
here. The conclusion of the committee is simple: first work on the tools; then
incorporate the findings from those tools into a bigger picture. The report
makes this clear in a paragraph that also highlights the public’s skewed perception of the project.
“The goal of
reconstructing the mouse and human brain in
silico and the associated comprehensive bottom-up approach is viewed by
one part of the scientific community as being impossible in principle or at
least infeasible within the next ten years, while another part sees value not
only in making such simulation tools available but also in their development,
in organizing data, tools and experts (see, e.g.,
http://www.bbc.com/future/story/20130207-will-we-ever-simulate-the-brain). A
similar level of disagreement exists with respect to the assertion that
simulating the brain will allow new cures to be found for brain diseases with
much less effort than in experimental investigations alone.
The public relations
and communication strategy of the HBP and the continuing and intense public
debate also led to the misperception by many neuroscientists that the HBP aims
to cover the field of neuroscience comprehensively and that it constitutes the
major neuroscience research effort in the European Research Area (ERA).”
This whole discussion reminds me of the idea of tool-driven
scientific revolutions publicized by Peter
Galison, Freeman
Dyson and others, of which chemistry
is an exemplary instance. The Galisonian picture of scientific revolutions
does not discount the role of ideas in causing seismic shifts in science, but
it places tools on an equal footing. Discussions of grand ideas and goals (like
simulating a brain) often give short shrift to the mundane but critical
everyday tools that need to be developed in order to enable those ideas in the
first place. Such ideas make for great sound bites for the public but are brittle in their foundations. Although the public often considers scientific ideas the progenitors of much everyday scientific activity, in reality the progression can equally often be the opposite: first come the tools, then the ideas.
Sometimes tools can follow ideas, as was the case with a lot of predictions of
the general theory of relativity. At other times ideas follow the tools and the
experiments, as was the case with the Lamb shift and quantum
electrodynamics.
Generally speaking, it’s more common for ideas to follow tools when a field is theory-poor, as quantum field theory was in the 1930s, while it’s more common for tools to follow ideas when a field is theory-rich. From
this viewpoint neuroscience is currently theory-poor, so it seems much more
likely to me that ideas will follow the tools in the field. To be sure, the importance of tools has long been recognized in neuroscience: where would we be without MRI and patch-clamp techniques, for instance? And yet these tools have only started
to scratch the surface of what we are trying to understand. We need much better
tools before we get our hands on a theory of the brain, let alone one of the
mind.
I believe the same progression also applies, in some sense, to my own field of molecular modeling. Part of the problem with modeling proteins
and molecules is that we still don’t have a good idea of the myriad factors
that drive molecular recognition. We have, of course, had an inkling of these factors (such as water and protein dynamics) for a while now, but we haven’t really had a good theoretical framework to understand the interactions. We can wave this objection away by saying that sure, we have a theoretical framework,
that of quantum mechanics and statistical mechanics, but that would be little
more than a homage to strong reductionism.
The problem is that we still don’t have a handle on the quantitative contributions of various factors to protein–small molecule binding. Until we have this conceptual understanding, the simulation of such interactions is bound to suffer. And most importantly, until we have such understanding, what we really need is not simulation but improved instrumental and analytical techniques that enable us to measure even simple things like molecular concentrations and the kinetics of binding. Once we get an idea of these parameters using good tools, we can start incorporating them into modeling frameworks.
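To make the sequence concrete, here is a minimal sketch, in Python and with made-up numbers, of the kind of conversion such measurements enable: association and dissociation rate constants (of the sort one might get from a kinetics experiment such as surface plasmon resonance) yield a dissociation constant, which in turn yields a standard binding free energy that a modeling framework could be calibrated against. The rate constants, temperature and 1 M standard state below are illustrative assumptions, not values from any particular experiment.

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol·K)

def binding_parameters(k_on, k_off, temperature=298.15):
    """Convert measured rate constants into equilibrium quantities.

    k_on:  association rate constant, in 1/(M·s)
    k_off: dissociation rate constant, in 1/s
    Returns (Kd in M, standard binding free energy in kcal/mol,
    relative to a 1 M standard state).
    """
    k_d = k_off / k_on                         # Kd = koff / kon
    delta_g = R * temperature * math.log(k_d)  # ΔG° = RT ln(Kd / 1 M)
    return k_d, delta_g

# Illustrative numbers for a respectable small-molecule binder
kd, dg = binding_parameters(k_on=1e6, k_off=1e-2)
print(f"Kd = {kd:.1e} M, ΔG° = {dg:.1f} kcal/mol")
# -> Kd = 1.0e-08 M, ΔG° = -10.9 kcal/mol
```

The point is not the arithmetic, which is elementary, but the direction of flow: measured parameters first, modeling frameworks second.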
Now the brain project is indeed working on tools too, but reports like the current one ask whether we should focus predominantly on those tools, and perhaps divert some of the money and attention from the simulation aspects of the project to the tool-driven ones. The message from the current status report is ultimately simple: we first need to stand before we can run.