Modular complexity and the problem of reverse engineering the brain

Bell's number counts the ways the components of a system can be grouped together and grows faster than exponentially with the number of components (Image: Science Magazine).
I have been reading an excellent collection of essays on the brain titled "The Future of the Brain", which contains ruminations on current and future brain research from leading neuroscientists and other researchers like Gary Marcus, George Church and the Moser husband-and-wife pair who won last year's Nobel prize. Quite a few of the authors are from the Allen Institute for Brain Science in Seattle. In starting this institute, Microsoft co-founder Paul Allen has placed his bets on mapping the brain…or at least the mouse visual cortex for starters. His institute is engaged in charting the sum total of neurons and other working parts of the visual cortex and then mapping their connections. Allen is not alone in doing this; there are projects like the Connectome at MIT which are trying to do the same thing (and the project's leader Sebastian Seung has written a readable book about it).

Now, we have heard prognostications about mapping and reverse engineering brains from more eccentric sources before, but fortunately Allen is one of those who does not believe that the singularity is around the corner. He also seems to have entrusted his vision to sane minds. His institute's chief science officer is Christof Koch, former professor at Caltech, longtime collaborator of the late Francis Crick and self-proclaimed "romantic reductionist", who started at the institute earlier this year. Koch has written one of the articles in the essay collection. His article, and the book in general, reminded me of a very interesting perspective that he penned in Science last year pointing out the staggering challenge of understanding the connections between all the components of the brain: the "neural interactome", if you will. The article is worth reading if you want an idea of how even simple numerical arguments illuminate the sheer magnitude of mapping out the neurons, cells, proteins and connections that make up the wonder that is the human brain.

Koch starts by pointing out that calculating the interactions between all the components in the brain is not the same as computing the interactions between, say, all the atoms of an ideal gas, since unlike in a gas the interactions are between different kinds of entities and are therefore not identical. Instead, he proposes, we have to use something called Bell's number Bn, which reminds me of the partitions I learnt about while sleepwalking through set theory in college. Briefly, for n objects, Bn counts the number of ways those objects can be divided into groups (singles, pairs, triples and so on). Thus, when n = 3, Bn is 5. Not surprisingly, Bn scales faster than exponentially with n, and Koch points out that B10 is already 115,975. If we think of a typical presynaptic terminal with its 1,000 proteins or so, Bn starts giving us serious heartburn. For something like the visual cortex, where n = 2 million, Bn would be inconceivably large, and it's futile to even start thinking about what the number would be for the entire brain. Koch then uses a simple calculation based on Moore's Law to estimate the time needed for "sequencing" these interactions: for n = 2 million it would be of the order of 10 million years. And as the graph on top demonstrates, for more than a hundred components or so the amount of time spirals out of hand at warp speed.
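To make these numbers concrete, here is a minimal Python sketch (my own illustration, not code from Koch's paper) that builds the Bell triangle and reproduces the values quoted above:

    def bell_numbers(n_max):
        """Return [B_0, ..., B_n_max] via the Bell triangle.

        Each row starts with the last entry of the previous row; every
        other entry is the sum of its left neighbor and the entry above
        that neighbor. B_n is the first entry of row n.
        """
        bells = [1]  # B_0 = 1: one way to partition the empty set
        row = [1]
        for _ in range(n_max):
            new_row = [row[-1]]
            for entry in row:
                new_row.append(new_row[-1] + entry)
            bells.append(new_row[0])
            row = new_row
        return bells

    B = bell_numbers(10)
    print(B[3])   # 5       -- the five ways to group three objects
    print(B[10])  # 115975  -- already unwieldy at ten components

Note that even this exact computation is quadratic in n, so merely evaluating Bn for n = 2 million, never mind enumerating the groupings themselves, is hopeless.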

This considers only the 2 million neurons in the visual cortex; it doesn't even count the proteins and cells which might interact with those neurons on an individual basis. In addition, at this point we are not even really aware of how many neuronal types there are in the brain: neurons are not all identical like indistinguishable electrons. What makes the picture even more complicated is that these types may be malleable, so that sometimes a single neuron can be of one type while at other times it can team up with other neurons to form a unit that is of a different type. This multilayered, fluid hierarchy rapidly reveals the outlines of what Paul Allen has called the "complexity brake"; he described this in the same article that was cogently critical of Ray Kurzweil's singularity. And the neural complexity brake that Koch is talking about seems poised to make an asteroid-sized impact on our dreams.

So are we doomed in trying to understand the brain, consciousness and the whole works? Not necessarily, argues Koch. He gives the example of electronic circuits, where individual components are grouped into separate modules. If you bunch a number of interacting entities together and form a separate module, the complexity of the problem drops, since you now only have to calculate interactions between modules. The key question then is: is the brain modular, and how many modules does it contain? Common sense would have us think it is modular, but it is far from clear how exactly we can define the modules. We would also need a sense of the minimal number of modules in order to calculate the interactions between them. This work is going to take a long time (hopefully not as long as that for n = 2 million), and I don't think we are going to have an exhaustive list any time soon, especially since the modules are going to be composed of different kinds of components and not just one kind. But it's quite clear that whatever the nature of these modules, delineating their particulars would go a long way in making the problem more manageable.
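To see how much grouping buys you, here is a toy comparison (my own numbers, purely hypothetical, just to illustrate the principle): twenty raw components versus the same components bundled into four modules, using a compact version of the Bell-triangle routine from above:

    def bell(n):
        # Bell triangle: B_n is the first entry of the n-th row.
        row = [1]
        for _ in range(n):
            new_row = [row[-1]]
            for entry in row:
                new_row.append(new_row[-1] + entry)
            row = new_row
        return row[0]

    print(bell(20))  # 51724158235372 -- groupings of 20 raw components
    print(bell(4))   # 15             -- groupings of 4 modules

Collapsing twenty components into four modules shrinks the space of possible groupings by about twelve orders of magnitude; that, in miniature, is the complexity brake being released.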

Any attempt to define these modules is going to run into the problems of emergent complexity that I have occasionally written about. Two neurons plus one protein might behave differently from two neurons plus two proteins in unanticipated ways. Also, if we are thinking about forward and reverse neural pathways, I would hazard a guess that one neuron plus one neuron in one direction may differ even from the same interaction in the reverse direction. Then there's the more obvious problem of dynamics. The brain is not a static entity, and its interactions would reasonably be expected to change over time. This might interpose a formidable new barrier to brain mapping, since it may mean that whatever modules we define are not even the same during every time slice. A fluid landscape of complex modules whose very identity changes every single moment could well be a neuroscientist's nightmare. In addition, the amount of data needed to capture such neural dynamics would be staggering, since even a millimeter-sized volume of rat visual tissue requires a few terabytes of data to store all its intricacies. However, the data storage problem pales in comparison to the data interpretation problem.

Nevertheless, this goal of mapping modules seems far more attainable in principle than calculating every individual interaction, and that's probably why Koch left Caltech to join the Allen Institute in spite of the pessimistic calculation above. The value of modular approaches goes beyond neuroscience, though; similar thinking may provide insights into other areas of biology, such as the interactions of genes with proteins and of proteins with drugs. As an amusing analogy, this kind of analysis reminds me of trying to understand the interactions between the different components of a stew: we have to appreciate how the salt interacts with the pepper, how the pepper interacts with the broth, and how the three of them combined interact with the chicken. Could the salt and broth be considered a single module?

If we can ever get a sense of the modular structure of the brain, we may have at least a fighting chance to map out the whole neural interactome. I am not holding my breath too hard, but my ears will be wide open since this is definitely going to be one of the most exciting areas of science around.

Adapted from a previous post on Scientific American Blogs.
