Field of Science

Physicists in biology, inverse problems and other quirks of the genomic age

Nobel Laureate Sydney Brenner has criticized systems biology as a grandiose attempt to solve inverse problems in biology.

Leo Szilard – brilliant, peripatetic Hungarian physicist, habitué of hotel lobbies, soothsayer without peer – first grasped the implications of a nuclear chain reaction in 1933 while stepping off the curb at a traffic light in London. Szilard has many distinctions to his name; not only did he file a patent for the first nuclear reactor with Enrico Fermi, but he was the one who urged his old friend Albert Einstein to write a famous letter to Franklin Roosevelt, and also the one who tried to get another kind of letter signed as the war was ending in 1945: a letter urging the United States to demonstrate a nuclear weapon in front of the Japanese before irrevocably stepping across the line. Szilard was successful in getting the first letter signed but failed in his second goal.
After the war ended, partly disgusted by the cruel use to which his beloved physics had been put, Szilard left professional physics to explore new pastures – in his case, biology. But apart from the moral abhorrence which led him to switch fields, there was a more pragmatic reason. As Szilard put it, this was an age when you took a year to discover something new in physics but only took a day to discover something new in biology.
This sentiment drove many physicists into biology, and the exodus benefited biological science spectacularly. Compared to physics, whose basic theoretical foundations had matured by the end of the war, biology was uncharted territory. The situation resembled the heyday of physics right after the invention of quantum theory when, as Paul Dirac quipped, “even second-rate physicists could make first-rate discoveries”. And physicists took full advantage of it. Since Szilard, biology in general and molecular biology in particular have been greatly enriched by the presence of physicists. Today, any physics student mulling a move into biology stands on the shoulders of illustrious forebears including Szilard, Erwin Schrodinger, Francis Crick, Walter Gilbert and, most recently, Venki Ramakrishnan.
What is it that draws physicists to biology and why have they been unusually successful in making contributions to it? The allure of understanding life which attracts other kinds of scientists is certainly one motivating factor. Erwin Schrodinger, whose little book “What is Life?” propelled many, including Jim Watson and Francis Crick, into genetics, is one example. Then there is the opportunity to simplify an enormously complex system into its constituent parts, an art which physicists have excelled at since the time of the Greeks. Biology, and especially the brain, is the ultimate complex system, and physicists are tempted to apply their reductionist approaches to deconvolute this complexity. Thirdly, there is the practical advantage that physicists have: a capacity to apply experimental tools like x-ray diffraction and quantitative reasoning, including mathematical and statistical tools, to make sense of biological data.
The rise of the data scientists
It is this third reason that has led to a significant influx of not just physicists but other quantitative scientists, including statisticians and computer scientists, into biology. The rapid development of the fields of bioinformatics and computational biology has led to a great demand for scientists with the quantitative skills to analyze large amounts of data. A mathematical background brings valuable skills to this endeavor, and quantitative, data-driven scientists thrive in genomics. Eric Lander, for instance, got his PhD in mathematics at Oxford before – driven by the tantalizing goal of understanding the brain – he switched to biology. Cancer geneticist Bert Vogelstein also has a background in mathematics. All of us are familiar with names like Craig Venter, Francis Collins and James Watson when it comes to appreciating the cracking of the human genome, but we need to pay equal attention to the computer scientists without whom crunching and combining the immense amounts of data arising from sequencing would have been impossible. There is no doubt that, after the essentially chemically driven revolution in genetics of the 70s, the second revolution in the field has been engineered by data crunching.
So what does the future hold? The rise of the “data scientists” has led to the burgeoning field of systems biology, a buzzword whose use seems to proliferate faster than actual understanding of it. Systems biology seeks to integrate different kinds of biological data into a broad picture using tools like graph theory and network analysis. It promises to potentially provide us with a big-picture view of biology like no other. Perhaps, physicists think, we will have a theoretical framework for biology that does what quantum theory did for, say, chemistry.
Emergence and systems biology: A delicate pairing
And yet even as we savor the fruits of these higher-level approaches to biology, we must be keenly aware of their pitfalls. One of the fundamental truths about the physicists’ view of biology is that it is steeped in reductionism. Reductionism is the great legacy of modern science which saw its culmination in the two twentieth-century scientific revolutions of quantum mechanics and molecular biology. It is hard to overstate the practical ramifications of reductionism. And yet as we tackle the salient problems in twenty-first century biology, we are becoming aware of the limits of reductionism. The great antidote to reductionism is emergence, a property that renders complex systems irreducible to the sum of their parts. In 1972 the Nobel Prize winning physicist Philip Anderson penned a remarkably far-reaching article titled “More is Different” which explored the inability of “lower-level” phenomena to predict their “higher-level” manifestations.
The brain is an outstanding example of emergent phenomena. Many scientists think that neuroscience is going to be to the twenty-first century what molecular biology was to the twentieth. For the first time in history, partly through recombinant DNA technology and partly due to state-of-the-art imaging techniques like functional MRI, we are poised on the brink of making major discoveries about the brain; no wonder that Francis Crick moved into neuroscience during his later years. But the brain presents a very different kind of challenge than that posed by, say, a superconductor or a crystal of DNA. The brain is a highly hierarchical and modular structure, with multiple dependent and yet distinct layers of organization. From the basic level of the neuron we move on to collections of neurons and glial cells which behave very differently, onward to specialized centers for speech, memory and other tasks, and finally to the whole brain. As we move up this ladder of complexity, emergent features arise at every level whose behavior cannot be gleaned merely from the behavior of individual neurons.

The tyranny of inverse problems
This problem thwarts systems biology in general. In recent years, some of the most insightful criticism of systems biology has come from Sydney Brenner, a founding father of molecular biology whose 2010 piece in Philosophical Transactions of the Royal Society titled “Sequences and Consequences” should be required reading for those who think that systems biology’s triumph is just around the corner. In his essay, Brenner strikes at what he sees as the heart of the goal of systems biology. After reminding us that the systems approach seeks to generate viable models of living systems, Brenner goes on to say that:
“Even though the proponents seem to be unconscious of it, the claim of systems biology is that it can solve the inverse problem of physiology by deriving models of how systems work from observations of their behavior. It is known that inverse problems can only be solved under very specific conditions. A good example of an inverse problem is the derivation of the structure of a molecule from the X-ray diffraction pattern of a crystal…The universe of potential models for any complex system like the function of a cell has very large dimensions and, in the absence of any theory of the system, there is no guide to constrain the choice of model.”
What Brenner is saying is that every systems biology project essentially results in a model, a model that tries to solve the problem of divining reality from experimental data. However, a model is not reality; it is an imperfect picture of reality constructed from bits and pieces of data. It is therefore – and this has to be emphasized – only one representation of reality. Other models might satisfy the same experimental constraints, and for systems with thousands of moving parts like cells and brains, the number of such models is astronomically large. In addition, data from biological measurements is often noisy, with large error bars, further complicating its use. This puts systems biology into the classic conundrum of the inverse problem that Brenner points out, and as with other inverse problems, the solution you find is likely to be one among an expanding universe of solutions, many of which might be better than the one you have. This means that while models derived from systems biology might be useful – and often this is a sufficient requirement for using them – they might well leave out some important feature of the system.
There has been some very interesting recent work in addressing such conundrums. One of the major challenges in the inverse problem universe is to find a minimal set of parameters that can describe a system. Ideally the parameters should be sensitive to variation so that one constrains the parameter space describing the given system and avoids the "anything goes" trap. A particularly promising example is the use of 'sloppy models' developed by Cornell physicist James Sethna and others in which parameter combinations rather than individual parameters are varied and those combinations which are most tightly constrained are then picked as the 'right' ones.
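To make the idea concrete, here is a minimal sketch in Python of the kind of analysis behind sloppy models, using a toy two-exponential model standing in for a real biological system; all parameter values and noise levels are invented purely for illustration, and this is not the actual machinery used by Sethna and colleagues. The idea is to build the approximate Fisher information matrix from the Jacobian of the model residuals and look at its eigenvalues: large eigenvalues mark "stiff" parameter combinations that the data pin down tightly, while tiny eigenvalues mark "sloppy" combinations that can wander over orders of magnitude without spoiling the fit.

```python
import numpy as np

# A toy stand-in for a biological model: the sum of two exponential decays,
# observed only through their combined output. All numbers here are invented.
def model(params, t):
    a1, k1, a2, k2 = params
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 40)
true_params = np.array([1.0, 1.2, 0.5, 0.3])
data = model(true_params, t) + 0.02 * rng.standard_normal(t.size)

def residuals(params):
    return model(params, t) - data

# Finite-difference Jacobian of the residuals with respect to the parameters
def jacobian(params, eps=1e-6):
    J = np.zeros((t.size, params.size))
    for i in range(params.size):
        dp = np.zeros_like(params)
        dp[i] = eps
        J[:, i] = (residuals(params + dp) - residuals(params - dp)) / (2 * eps)
    return J

# Approximate Fisher information matrix J^T J near the best-fit parameters.
# Its eigenvalues measure how tightly each *combination* of parameters is
# constrained by the data: large eigenvalue = stiff, tiny eigenvalue = sloppy.
J = jacobian(true_params)
fim = J.T @ J
eigvals, eigvecs = np.linalg.eigh(fim)

print("Eigenvalues of J^T J (large = stiff combination, small = sloppy):")
for lam, vec in zip(eigvals[::-1], eigvecs.T[::-1]):
    print(f"  {lam:12.4e}   parameter combination: {np.round(vec, 2)}")
```

Run on a toy model like this one, the eigenvalues typically span several orders of magnitude, which is exactly the "sloppy" signature: the data constrain a few combinations of parameters very tightly while leaving most directions in parameter space almost free.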

But quite apart from these theoretical fixes, Brenner’s remedy for avoiding the fallout from imperfect systems modeling is to simply use the techniques garnered from classical biochemistry and genetics over the last century or so. In one sense systems biology is nothing new; as Brenner tartly puts it, “there is a watered-down version of systems biology which does nothing more than give a new name to physiology, the study of function and the practice of which, in a modern experimental form, has been going on at least since the beginning of the Royal Society in the seventeenth century”. Careful examination of mutant strains of organisms, measurement of the interactions of proteins with small molecules like hormones, neurotransmitters and drugs, and observation of phenotypic changes caused by known genotypic perturbations remain tried-and-tested ways of drawing conclusions about the behavior of living systems on a molecular scale.
Genomics and drug discovery: Tread softly
This viewpoint is also echoed by those who take a critical view of what they say is an overly genomics-based approach to the treatment of diseases. A particularly clear-headed view comes from Gerry Higgs, who in 2004 presciently wrote a piece titled “Molecular Genetics: The Emperor’s Clothes of Drug Discovery”. Higgs criticizes the whole gamut of genomic tools used to discover new therapies, from the “high-volume, low-quality sequence data” to the genetically engineered cell lines which can give a misleading impression of molecular interactions under normal physiological conditions. Higgs points to many successful drugs of the last fifty years that were discovered using the tools of classical pharmacology and biochemistry; these include the best-selling drugs developed by Nobel laureates Gertrude Elion and James Black on the basis of simple physiological assays. Higgs’s point is that the genomics approach to drugs runs the risk of becoming too reductionist and narrow-minded, often relying on isolated systems and artificial constructs that are uncoupled from whole systems. His prescription is not to discard these tools, which can undoubtedly provide important insights, but to supplement them with older and proven physiological experiments.
Does all this mean that systems biology and genomics would be useless in leading us to new drugs? Not at all. There is no doubt that genomic approaches can be remarkably useful in enabling controlled experiments. The systems biologist Leroy Hood, for instance, has pointed out how selective gene silencing can allow us to tease apart the side effects of drugs from the beneficial ones. But what Higgs, Brenner and others are impressing upon us is that we shouldn’t allow genomics to become the be-all and end-all of drug discovery. Genomics should only be employed as part of a judiciously chosen cocktail of techniques, including classical ones, for interrogating the function of living systems. And this applies more generally to physics-based and systems biology approaches.

Perhaps the real problem from which we need to wean ourselves is “physics envy”; as the physicist-turned-financial modeler Emanuel Derman reminds us, “Just like physicists, we would like to discover three laws that govern ninety-nine percent of our system’s intricacies. But we are more likely to discover ninety-nine laws that explain three percent of our system”. And that’s as good a starting point as any.

Adapted from a previous post on Scientific American Blogs.

Tom Morton-Smith's "Oppenheimer": Slight, trite and unoriginal

J. Robert Oppenheimer was a brilliant, enigmatic and complex man. Any treatment of his life, whether biographical or fictional, must bear the substantial weight of these qualities and capture the triumph and tragedy of his immensely consequential life. Unfortunately Tom Morton-Smith's "Oppenheimer" is little more than a short play version of a short Oppenheimer biography. All it really does is recreate in simple detail conversations and events from the physicist's life and career. But it never manages to really tell us in poignant terms what made 'Oppie' tick, what made him so conflicted and so beloved, what gave rise to all his contradictions and complexities. I haven't seen the theatrical version of the play, but I think it would be a miracle if it could redeem the written version.

Many of Oppie's colleagues, students and family members are here: Edward Teller, Hans Bethe, his students Robert Serber and Rossi Lomanitz, General Leslie Groves, his wife Kitty, brother Frank and old girlfriend Jean Tatlock. The conversations take place at Los Alamos, at the Oppenheimer home, at his office in Berkeley and at Spanish Civil War Relief Fund parties. They touch on many topics including socialist politics, bomb theory and fission, and army compartmentalization. But much of the dialogue between the characters is trite and uneventful. At least some of it is based on dialogue from BBC's "Oppenheimer" TV series - a far better dramatized treatment of Oppenheimer's life and times which is worth watching. The whole point of fiction is to communicate what non-fiction cannot, and the play largely fails to do this. Once or twice the dialogue seems to soar into poetic metaphors, but then falls to the ground with a sigh, as if the complexity of Oppenheimer's life was too much to bear. In addition the work is too short to give itself enough runway to even attempt such flights of imagination in the first place.

There are also the historical errors which riddle the conversational settings. Since this is a work of fiction, these inaccuracies might have worked had they actually said something consequential and novel and been there for a reason. But since the fiction only seems to be a retelling of the non-fiction, the inaccuracies glaringly stood out in my mind. One example is the scene when Oppenheimer asks his student Robert Serber's wife Charlotte if she would like to adopt his daughter - he did no such thing and actually blurted out the strange request to his wife Kitty's friend, Pat Sherr. Another scene has Oppenheimer revealing the name of his communist friend Haakon Chevalier to General Groves and army security under threat from Groves - this is a disservice to actual events in which Oppenheimer named his brother Frank to Groves in private. Finally, why the bizarre theme of having Oppenheimer and his colleagues in military uniforms throughout the war? In reality this possibility was considered by Oppenheimer at the very beginning, then quickly rejected when he realized that most of his colleagues would rebel and refuse to join the project if they were subjected to such formal military discipline. It's worth reiterating that these errors of history might have been effective devices had the play offered something substantial and revealing through them, but they serve no such purpose. Instead they only serve to complicate a rather drab, selective and simple narration of a few key scenes from Oppenheimer's life.

It never gives me any pleasure to write a negative review of a book or a play, especially since I know from personal experience how much effort it takes to put pen to paper and produce a body of work, no matter how slight. But as much as I appreciate Mr. Morton-Smith's efforts, I feel compelled to point out the play's flaws, especially because its subject is one which is close to my heart. If you want a searching work of fiction that reveals the agonies and the brilliance in Robert Oppenheimer's soul, this is certainly not it. In fact that work has yet to be written. In the meantime you would probably be better off reading Richard Rhodes's seminal book on the bomb or Ray Monk's magisterial biography of Oppenheimer.

How to recognize (and talk to) a chemophobe

Over the last few years there has been a lot of discussion of chemophobia in the popular press and on blogs. But it seems to me that there have been few summaries of the general features of chemophobia and how to exorcise them. So I thought I would put together a short list, largely personal, of the “elements of chemophobia” and possible measures to address them. Most of what I say would be all too familiar to chemists, but I hope some of it might be of use to intelligent laymen for identifying, understanding and dispelling chemophobia. To make the discussion a little more interesting, I have divided each point into “symptom” and “remedy”. I end with a few thoughts on how we can bridge the gap between fear and love of “chemicals”.
1. Symptom – Chemophobes fear “chemicals”: This goes without saying. Chemophobes fear a technically nebulous entity called “chemicals” that’s all too real to them. The problem is that in the jargon of chemistry, “chemicals” essentially means everything in the material world, from fuels and plastics to human bodies and baby oil. Over the years chemophobes have expertly molded the word “chemical” into what’s called a “trigger word”, a stimulus that triggers an emotional rather than a rational response. Psychologists have studied such trigger words closely and have recognized how they can often let emotional instincts overwhelm rational thinking. At best the term “chemicals” is so broad as to be useless, and it also does a disservice to the entire material world. To be fair though, I think most chemophobes when they say “chemicals” are referring to what they are thinking of as “bad chemicals”. One would think that they have a much more benign attitude toward “good chemicals”. But even this poses a problem, as the following point makes clear.
An underlying reason for the fear of chemicals is that chemophobes either have a very poor understanding of chemistry or don’t bother to acquaint themselves with the most basic relevant facts. A common misunderstanding is to confuse ingredients used in the manufacture of certain chemical products with the products themselves. A rather egregious recent example came from the rampantly chemophobic Food Babe blog, which talked about TBHQ, an additive found in certain food products. The blog claimed that “TBHQ is made from butane (a very toxic gas)”. Anyone who understands basic chemistry would recognize how woefully wrong this statement is; in fact it may even merit Wolfgang Pauli’s famous putdown of being “not even wrong”. It would be as wrong as saying that water should be avoided “because it is made from hydrogen (a very flammable gas)”. All it means is that the chemical formula of TBHQ subsumes the chemical formula of butane (four carbon atoms) within itself. One of the cardinal rules of chemistry is that when atoms combine and form bonds with each other they lose their individual properties. This principle should be on your mind the next time you read an article that tries to blame ingredients used in a product’s manufacture for the product’s properties.
Remedy – Understand that the whole material world is made up of chemicals; it’s what our bodies and minds are made up of. Reacting negatively to the word “chemical” is reacting to something that’s vague and undefined. Try to resist the urge to react emotionally rather than rationally when you hear the word; the only way to reduce the impact of trigger words is to counter them with rational thoughts. Most importantly, ask yourself what precise chemical an article is talking about. Where does it come from? How much of it is in the product? What studies have been done on it? How reliable are their conclusions? Try to find out more about it before you reach a judgment. Never accept any article either for or against a particular compound at face value.
2. Symptom – Chemophobes almost never talk about context: Most chemophobes would probably agree that taking political statements out of context can be grossly misleading. Yet they don’t apply the same principles when talking about chemicals. The problem even with denouncing “bad chemicals” is that chemicals can completely change their properties and utility depending on context.
When it comes to chemistry, the biggest aspect of context is the dose; the golden principle of toxicology is the 16th century natural philosopher Paracelsus’s dictum that “the dose makes the poison”. Botulinum toxin, or Botox, is the ultimate example: a chemical compound that can undoubtedly be fatal under the right circumstances, it has become a staple of aging celebrities wanting to preserve their stunning good looks. On the other hand water, which is generally considered to be a good chemical, can be toxic in large enough quantities. What surprises me is that a lot of chemophobes are perfectly aware that widely used drugs like acetaminophen can be dangerous in excessive doses but are still beneficial in small doses, yet they somehow don’t apply the same thinking to other chemical compounds which they consider toxic, from flame retardants in couches to food additives.
The Botox example also underscores a ubiquitous and deeply flawed belief that “natural” chemicals are somehow better than “artificial” or synthetic ones. Our standard of living has improved incalculably because of synthetic chemicals like drugs, plastics and fertilizers. Botox, which is very natural, can be quite bad, and a lifesaving cancer drug, which is very artificial, can be quite good. Roughly half of all drugs on the market are synthetic and the other half are derived from natural sources. Both categories have beneficial effects when used as directed and ill effects when abused. Thus it’s almost impossible to objectively categorize an individual molecule as “good” or “bad”.
Chemophobes’ cheerful dismissal of context is symptomatic of a bigger problem which is all too common – a lack of appreciation for details, which are really at the heart of science. Although many principles of science can be simply explained, the truth is that the meat of science is all about details and subtleties, and it’s very easy to completely misunderstand a scientific study when you shy away from the details. A good example is a recent post on so-called “toxic couches” which are purportedly laced with harmful chemicals. As I made clear in my analysis of the post, it seemed that the author – who has an M.D. degree – had not actually read the primary literature on the compounds which she claimed were poisoning her couch. There was no examination of the actual evidence supposedly implicating the couch chemicals as toxic agents. If the author had done this she would have easily understood that the evidence for her assertions was flimsy at best. This is a common thread underlying almost all articles reflecting chemophobia. There are no references to the underlying literature and no context. One is made to believe that the mere presence of the chemical under consideration makes it a health hazard.
Remedy – Remember that chemicals are no different from most of the technologies that surround us, technologies that can be used for good or evil depending on the context. We cannot divorce their properties from the context and circumstances under which they are being discussed.
When it comes to context, not all of us are inclined or qualified to read the primary literature and analyze detailed chemical or medical studies. So I have found it useful to keep three key context-specific aspects of chemical compounds and their effects in mind, and to marshal these aspects into a test that any chemical should pass before it merits the word “dangerous”. These three aspects are dose, sample size and test animal. Even if you don’t know what these are for a particular study, it handsomely pays to at least withhold judgment by asking about them. I will talk about sample size in the next point, but let’s focus on the other two for now.
The importance of dose has already been mentioned. Perhaps the most relevant measure of dose in the context of chemophobia is a well-known number called the “LD50”, which is the amount of a chemical compound that causes death in 50% of the test animals. A lot of the time, when dangerous chemicals are mentioned by chemophobes, their LD50 is actually quite high – it would take a large amount of the chemical to cause real harm. The nature of the test animal (which is already part of the definition of LD50) is also paramount. Many toxicology and chemical studies are done in mice or rats – for good reason – and if there’s one thing you should remember it’s that (with few exceptions) mice and rats are not humans. Although there is some overlap, it’s safe to assume that the effects of a drug or potentially toxic chemical can differ significantly between mice and humans. Another parameter is the mode of ingestion; the material in my t-shirt would probably cause some harm if I ingested it in large quantities, but it’s perfectly safe to wear. The toxicity of a lot of compounds depends on the way they are formulated (solids, liquids, amorphous powders etc.) and therefore the way they are absorbed and excreted by the body. The toxicity also depends on the time they spend inside the body.
Again, the point of this listing is not to enable casual readers to read the primary literature on LD50, animal studies or formulation, but simply to raise questions related to these aspects when they read any article purporting to report the presence of dangerous chemicals in our environment. Vigilance and critical questioning are very effective first barriers to uncritical acceptance.
3. Symptom – For chemophobes everything is “linked to cancer”: Read almost any article about “dangerous chemicals” and you will find them somehow associated with cancer more than with almost any other disease. But the phrase “linked to” is so broad as to be almost useless. “Linked to” can mean anything from “having a tenuous and unproven connection” to “having a direct correlation” to “considered as a causal factor”. When chemophobes tell us that something is “linked to cancer”, they would have us believe that it “causes” cancer. It is impossible to evaluate such a claim unless you read the primary literature, and in most cases you will find that the truth is quite complex at best.
Again, the devil is in the details, in this case in the devilish complexities of the statistics-based science called epidemiology. As legions of studies have made clear, it is very, very difficult to find a correlation – let alone causation – between any single chemical compound and cancer. Science writer George Johnson wrote an excellent article describing the highly ambiguous correlation between chemicals and cancer in famous cases like the Erin Brockovich story. Part of the reason is that the “natural” cancer background is already very high and we are often challenged by the difficulty of detecting a small excess of cancers in this high background; in fact this high natural background is probably the reason why many chemicals are inevitably associated with cancer in the first place. There have been undoubtedly some cases where this connection has been found, for instance between scrotal cancer and soot or between cigarette smoke and lung cancer. But firstly, these cases are in the minority and secondly, these connections were firmed up only after decades of very exhaustive studies utilizing very large population samples. The same caveat applies to connections between chemicals and almost every other malady. It’s really not possible to draw conclusions unless you scrutinize the statistics.
Remedy – The problem again is that most of us are not inclined or qualified to analyze complex statistical analyses. But statistics is one of those fields where mere awareness of the basics earns you brownie points. There are a few simple measures. Going back to the previous point, the single most important question to ask concerns sample size. A century of statistics has made it clear that small sample sizes introduce large errors. The “toxic couch” post for instance based its conclusions on a study with a very small sample size. Statisticians have devised ways to deal with small samples, but as a first approximation you should be suspicious of any study that deals with small sample sizes and does not provide error estimates. Other factors to consider are sample homogeneity, bias and measures of statistical significance. Again, not all of us can become experts in statistics, but the very process of asking these questions will make us rightly skeptical of articles evidencing chemophobia, as the short sketch below illustrates.
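To see just how much sample size matters, here is a minimal sketch in Python, with entirely invented numbers, showing how the uncertainty of a simple estimate shrinks roughly as one over the square root of the sample size. With a handful of measurements the confidence interval is so wide that almost any conclusion can be "supported"; with hundreds of measurements it becomes much harder to fool yourself.

```python
import numpy as np

# How the uncertainty of a simple estimate shrinks with sample size.
# Imagine measuring the average blood level of some compound in a population
# whose true mean is 10.0 and standard deviation 4.0 (all numbers invented).
rng = np.random.default_rng(42)
true_mean, true_sd = 10.0, 4.0

for n in (5, 20, 100, 1000):
    sample = rng.normal(true_mean, true_sd, size=n)
    mean = sample.mean()
    sem = sample.std(ddof=1) / np.sqrt(n)   # standard error shrinks like 1/sqrt(n)
    # A rough 95% confidence interval: mean +/- 2 standard errors
    print(f"n = {n:4d}: estimate = {mean:5.2f}, "
          f"95% CI roughly ({mean - 2*sem:5.2f}, {mean + 2*sem:5.2f})")
```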
This is not an “Us vs Them” argument
I want to end with a plea to build bridges. More than anything else it’s important to empathize with people who fear chemicals. It’s important to understand that these people come in a variety of shades. Many of them simply try to apply the Precautionary Principle and err on the safer side. Many subscribe to the common “Not in My Backyard” sentiment which even chemists would agree with; even if I fully realize the lack of correlation between, say, aniline exposure and cancer, that does not mean I will be ok with tons of aniline being dumped into the soil surrounding my house. Some chemophobes do indeed do it for the publicity even when they know better (anti-industry sentiments almost always sell well on the Internet), and there are also some who have probably made up their minds and are impervious to reason. Chemophobes thus mirror the same kind of diversity of opinion that you find among climate change skeptics and religious believers, and it’s key not to paint all of them with a broad brush. The extremists probably would not be swayed by any kind of argument, but I would like to believe that the majority of chemophobes are not in this category and are open to rational argument.
The important thing to realize is that many of these people have at least partially good reasons to express deep skepticism about synthetic chemicals. Even though we as chemists would like to dismiss their fears as irrational, we need to appreciate that Love Canal, Bhopal and Woburn don’t exactly make it easy for us to make our case. In many of these cases the effects of individual chemicals on people’s health were very hard to tease apart. What was uncontested though was the unethical behavior of chemical companies which polluted rivers, soils and groundwater with chemical waste. I just finished Dan Fagin’s superbly researched and written book “Toms River”, which documented the unethical and illegal practices of Ciba-Geigy in polluting the environment around Toms River, NJ over a period of thirty years. While this was really the fault of the company and not the chemicals themselves, one must sympathize with the people who waged an expensive campaign against opponents with deep pockets and waited for decades to find an explanation for the heartbreaking early deaths of their sons and daughters. Even if the victims of alleged chemical harms may be looking in the wrong places for an answer, their experiences are very real, and we don’t have to agree with them in order to empathize with them. We need to do all we can to separate the companies from their products, but we have to appreciate that it’s much harder for a mother who has just lost her 6-year old son to leukemia to do this.
Finally we need to recognize the common bonds that hold all of us together. Scrutinize writers who display chemophobia and we find that many of them share the same goals that we do: to keep our children and our environment safe. Some of them are proponents of healthy eating, others want to hold companies with unethical practices accountable. Look beneath the surface and we find that although our paths may be different, our destination is the same. In JFK’s immortal words, “Our most basic common link is that we all inhabit this planet. We all breathe the same air. We all cherish our children’s future. And we are all mortal.” This parting message should bring chemophobes and chemophiles together; if nothing else, we are all woven from the same chemical tapestry.

Adapted from a previous post on Scientific American Blogs.

Derek Lowe to world: "Beware of von Neumann's elephants"

"With four parameters you can fit an elephant to a curve,
with five you can make him wiggle his trunk" - John von Neumann
That was one of many cogent messages delivered by In the Pipeline's Derek Lowe at a meeting organized today by the Boston Area Group for Informatics and Modeling (BAGIM). The session was well attended - by my count there were sixty to seventy people. Most were probably modelers, with a healthy number of medicinal chemists and a few biologists thrown in. Below I offer a brief report on his talk, along with some personal commentary. Memory is a malleable and selective thing, so if someone who attended the talk wants to add their own observations in the comments section they are quite welcome.

The nice thing about Derek's talk was that it was really delivered from the other side of the fence, that of an accomplished and practicing medicinal chemist. Thus he wisely did not dwell too much on all the details that can go wrong in modeling: since the audience mainly consisted of modelers presumably they knew these already (No, we still can't model water well. Stop annoying us!). Instead he offered a more impressionistic and general perspective informed by experience.

Why von Neumann's elephants? Derek was referring to a great piece by Freeman Dyson (with whom I have had the privilege of having lunch a few times) published in Nature a few years back, in which Dyson reminisced about a meeting with Enrico Fermi in Chicago. Dyson had taken the Greyhound bus from Cornell to tell Fermi about his latest results concerning meson-proton scattering. Fermi took one look at Dyson's graph and basically demolished the thinking that had permeated Dyson and his students' research for several years. The problem, as Fermi pointed out, was that Dyson's premise was based on fitting a curve to the data using four adjustable parameters. But, quipped Fermi, you can always get a better fit to the data simply by adding more parameters. Fermi quoted his friend, the great mathematician John von Neumann - "With four parameters you can fit an elephant to a curve; with five you can make him wiggle his trunk". The conversation lasted about ten minutes. Dyson took the next bus back to Ithaca.

And making elephants dance is indeed what modeling in drug discovery runs the risk of doing, especially when you keep on adding parameters to improve the fit (Ah, the pleasures of the Internet - turns out you can literally fit an elephant to a curve). This applies to all models, whether they deal with docking, molecular dynamics or cheminformatics. This problem of overfitting is well-recognized, but researchers don't always run the right tests to get rid of it. As Derek pointed out however, the problem is certainly not unique to drug discovery. He started out by describing the presence of "rare" events in finance related to currency fluctuations - these rare events happen often enough and their magnitude is devastating enough to cause major damage. Yet the models never captured them, and this failure was responsible at least in part for the financial collapse of 2008 (this is well-documented in Nassim Taleb's "The Black Swan").
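Here is a minimal sketch in Python, with entirely invented "assay" data, of what that elephant looks like in miniature: as the number of fit parameters grows, the error on the data you trained on keeps shrinking, while the error on held-out data typically gets worse. The point is not the specific numbers but the divergence between fitting and predicting.

```python
import numpy as np

# Von Neumann's elephant in miniature: more parameters always fit the data
# you already have better, but past a point they predict new data worse.
# The "assay" data below is entirely invented for illustration.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 15)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(x.size)

# Hold out every third point to test predictive power
test = np.arange(x.size) % 3 == 0
x_train, y_train = x[~test], y[~test]
x_test, y_test = x[test], y[test]

for n_params in (2, 4, 6, 10):
    # Fit a polynomial with n_params coefficients to the training points
    coeffs = np.polyfit(x_train, y_train, deg=n_params - 1)
    rmse_train = np.sqrt(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    rmse_test = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    print(f"{n_params:2d} parameters: error on fitted data {rmse_train:.3f}, "
          f"error on held-out data {rmse_test:.3f}")
```

The remedy, of course, is the one every good modeler already knows: always judge a model on data it has not seen.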

Here are the charges brought against modelers by medicinal chemists: They cannot always predict but often can only retrodict, they cannot account for 'Black Swans' like dissimilar binding modes resulting from small changes in structure, they often equate computing power with accuracy or confidence in their models, and they promise too much and underdeliver. These charges are true to varying degrees under different circumstances, but it's also true that most modelers worth their salt are quite aware of the caveats. As Derek pointed out with some examples, however, modeling has over-promised for a long time: a particularly grating example was this 1986 paper from Angewandte Chemie in which predicting the interaction between a protein and a small molecule is regarded as a "spatial problem" (energy be damned). Then there's of course the (in)famous "Designing Drugs Without Chemicals" piece from Fortune magazine which I had highlighted on Twitter a while ago. These are all good examples of how modeling has promised riches and delivered rags.

However I don't think modeling is really any different from the dreams spun from incomplete successes by structure-based design, high-throughput screening, combinatorial chemistry or efficiency metrics. Most of these ideas consist of undue excitement engendered by a few good successes and inflated dreams of what's sometimes called "physics envy" - the idea that your discipline holds the potential to become as accurate as atomic physics if only everyone adopted your viewpoint. My feeling is that because chemistry unlike physics is primarily an experimental discipline, chemists are generally more inherently skeptical of theory, even when it's doing no worse than experiment. In some sense the charge against modeling is a bit unfair since it's also worth remembering that unlike synthetic organic chemistry or even biochemistry which have had a hundred and fifty years to mature and hone themselves into (moderately) predictive sciences, the kind of modeling that we are talking about is only about three decades old. Scientific progress does not happen overnight.

The discussion of hype and failure brings us to another key part of Derek's talk, that of recognizing false patterns in the data and getting carried away by their apparent elegance, sophisticated computing power or just plain random occasional success. This part is really about human psychology rather than science so it's worth noting the names of psychologists like Daniel Kahneman and Michael Shermer who have explored human fallibility in this regard. Derek gave the example of the "Magic Tortilla" which was a half-baked tortilla in which a lady from New Mexico saw Jesus's image - the tortilla is now in a shrine in New Mexico. In this context I would strongly recommend Michael Shermer's book "The Believing Brain" in which he points out human beings' tendency to see false signals in a sea of noise - generally speaking we are much more prone to seeing false positives rather than false negatives, and this kind of Type I error makes sense when we think about how false positives would have saved us from menacing beasts on the prairie while false negatives would simply have weeded us out of the gene pool. As Derek quipped, the tendency to see signal in the data has basically been responsible for much scientific progress, but it can also contribute mightily to what Irving Langmuir called "pathological science". Modeling is no different - it's very easy to extrapolate from a limited number of results and assume that your success applies to data that is far more extensive and dissimilar to what your methods have been applied to.

There are also some important questions here about what exactly it is that computational chemists should suggest to medicinal chemists, and much of that discussion arose in the Q&A session. One of the points that was raised was that modelers should stick their necks out and make non-intuitive suggestions - for instance, simply asking a medicinal chemist to put a fluorine or methyl group on an aromatic ring is not as useful because the medicinal chemist might have thought of that as a random perturbation anyway. However this is not as easy as it sounds, since the largest changes are also often the most uncertain, so at the very least it's the modeler's responsibility to communicate this level of risk to his or her colleagues. As an aside, this reminds me of Karl Popper, since one of the criteria Popper used for distinguishing a truly falsifiable and robust theory from the rest was its ability to stick its neck out and make bold predictions. As a modeler I also think it's really important to realize when to be coarse-grained and when to be fine-grained. Making very precise and limited suggestions when you don't have enough data and accuracy to be fine-grained is asking for trouble, so that's a decision you should be constantly making throughout the flow of a project.

One questioner also asked if as a medicinal chemist Derek would make dead compounds to test a model - this is a question that is very important in my opinion since sometimes negative suggestions can provide the best interrogation of a model. I think the answer to this question is best given on a case-by-case basis: if the synthetic reaction is easy and can rapidly generate analogs then the medicinal chemist should be open to the suggestion. If the synthesis is really hard then there should be a damn good reason for doing it, often a reason that would lead to a crucial decision in the project. The same goes for positive suggestions too: a positive suggestion that may potentially lead to significant improvement should be seriously considered by the medicinal chemist, even if it may be more uncertain than one which is rock-solid but only leads to a marginal improvement.

Derek ended with some sensible exhortations for computational chemists: Don't create von Neumann's elephants by overfitting the data, don't talk about what hardware you used (effectively giving the impression that just because you used the latest GPU-based exacycle your results must be right: and really, the medicinal chemist doesn't care), don't see patterns in the data (magic tortillas) based on limited and spotty results.

Coupled with these caveats were more constructive suggestions: always communicate the domain of applicability/usability of your technique and the uncertainty inherent in the protocol, realize that most medicinal chemists are clueless about the gory details of your algorithm and therefore will take what you say at face value (and this also goes to show how it's your responsibility as modelers to properly educate medicinal chemists), admit failures when they exist and try to have the failures as visible as the successes (harder when management demands six sigma-sculpted results but still doable). And yes, it never, ever hurts for modelers to know something about organic chemistry, synthetic routes and the stability of molecules. I would also add the great importance of using statistics to analyze and judge your data.

All very good suggestions indeed. I end with only one: realize that you are all in this together and that you are all - whether modelers, medicinal chemists or biologists - imperfect minds trying to understand highly complex biological systems. In that understanding will be found the seeds of (occasional) success.

Added: It looks like there's going to be a YouTube video of the event up soon. Will link to it once I hear more.

Why the world needs more Leo Szilards

This is a repost of an old post in honor of Leo Szilard, who would have celebrated his 117th birthday today.

The body of men and women who built the atomic bomb was vast, diverse and talented. Every conceivable kind of professional – from theoretical physicist to plumber – worked on the Manhattan Project for three years, an enterprise that spread across the country and equaled the US automobile industry in its marshaling of resources like metals and electricity.
The project may have been the product of this sprawling hive mind, but one man saw both the essence and the implications of the bomb, in both science and politics, long before anyone else. Stepping off the curb at a traffic light across from the British Museum in London in 1933, Leo Szilard saw the true nature and the consequences of the chain reaction six years before reality breathed heft and energy into its abstract soul. In one sense though, this remarkable propensity for seeing into the future was business as usual for the Hungarian scientist. Born into a Europe that was rapidly crumbling in the face of onslaughts of fascism even as it was being elevated by revolutionary discoveries in science, Szilard grasped early in his youth both a world split apart by totalitarian regimes and the necessity of international cooperation engendered by the rapidly developing abilities of humankind to destroy itself with science. During his later years Szilard once told an audience, “Physics and politics were my two great interests”. Throughout his life he would try to forge the essential partnership between the two which he thought was necessary to save the human species from annihilation.
Last year William Lanouette brought out a new, revised edition of his authoritative, sensitive and sparkling biography of Szilard. It is essential reading for those who want to understand the nature of science, both as an abstract flight into the deep secrets of nature and as a practical tool that can be wielded for humanity’s salvation and destruction. As I read the book and pondered Szilard’s life I realized that the twentieth century Hungarian would have been right at home in the twenty-first. More than anything else, what makes Szilard remarkable is how prophetically his visions have played out since his death in 1964, all the way to the year 2014. But Szilard was also the quintessential example of a multifaceted individual. If you look at the essential events of the man’s life you can see several Szilards, each of whom holds great relevance for the modern world.
There’s of course Leo Szilard the brilliant physicist. Where he came from precocious ability was commonplace. Szilard belonged to the crop of men known as the “Martians” – scientists whose intellectual powers were off scale – who played key roles in European and American science during the mid-twentieth century. On a strict scientific basis Szilard was perhaps not as accomplished as his fellow Martians John von Neumann and Eugene Wigner but that is probably because he found a higher calling in his life. However he certainly did not lack originality. As a graduate student in Berlin – where he hobnobbed with the likes of Einstein and von Laue – Szilard came up with a novel way to consolidate the two microscopic and macroscopic aspects of the science of heat, now called statistical mechanics and thermodynamics. He also wrote a paper connecting entropy and energy to information, predating Claude Shannon’s seminal creation of information theory by three decades. In another prescient paper he set forth the principle of the cyclotron, a device which was to secure a Nobel Prize for its recognized inventor – physicist Ernest Lawrence – more than a decade later.
Later during the 1930s, after he was done campaigning on behalf of expelled Jewish scientists and saw visions of neutrons branching out and releasing prodigious amounts of energy, Szilard performed some of the earliest experiments in the United States demonstrating fission. And while he famously disdained getting his hands dirty, he played a key role in helping Enrico Fermi set up the world’s first nuclear reactor.
Szilard as scientist also drives home the importance of interdisciplinary research, a fact which hardly deserves explication in today’s scientific world where researchers from one discipline routinely team up with those from others and cross interdisciplinary boundaries with impunity. After the war Szilard became truly interdisciplinary when he left physics for biology and inspired some of the earliest founders of molecular biology, including Jacques Monod, James Watson and Max Delbruck. His reason for leaving physics for biology should be taken to heart by young researchers – he said that while physics was a relatively mature science, biology was a young science where even low hanging fruits were ripe for the picking.
Szilard was not only a notable theoretical scientist but he also had another strong streak, one which has helped so many scientists put their supposedly rarefied knowledge to practical use – that of scientific entrepreneur. His early training had been in chemical engineering, and during his days in Berlin he famously patented an electromagnetic refrigerator with his friend and colleague Albert Einstein; by alerting Einstein to the tragic accidents caused by leakage in mechanical refrigerators, he helped the former technically savvy patent clerk put his knowledge of engineering to good use (as another indication of how underappreciated Szilard remains, the Wikipedia entry on the device is called the “Einstein refrigerator”). Szilard was also finely attuned to the patent system, filing a patent for the nuclear chain reaction with the British Admiralty in 1934 before anyone had an inkling what element would make it work, as well as a later patent for a nuclear reactor with Fermi.
He also excelled at what we today call networking; his networking skills were on full display, for instance, when he secured rare, impurity-free graphite from a commercial supplier as a moderator for Fermi’s nuclear reactor; in fact the failure of German scientists to secure such pure graphite, and the subsequent inability of the contaminated graphite to sustain fission, damaged their belief in the viability of a chain reaction and held them back. Szilard’s networking abilities were also evident in his connections with prominent financiers and bankers whom he constantly tried to conscript in supporting his scientific and political adventures; in attaining his goals he would not hesitate to write any letter, ring any doorbell, ask for any amount of money, travel to any land and generally try to use all means at his disposal to secure support from the right authorities. In his case the “right authorities” ranged, at various times in his life, from top scientists to bankers to a Secretary of State (James Byrnes), a President of the United States (FDR) and a Premier of the Soviet Union (Nikita Khrushchev).
I am convinced that had Szilard been alive today, his abilities to jump across disciplinary boundaries, his taste for exploiting the practical benefits of his knowledge and his savvy public relations skills would have made him feel as much at home in the world of Boston or San Francisco venture capitalism as in the ivory tower.
If Szilard had accomplished his scientific milestones and nothing more he would already have been a notable name in twentieth century science. But more than almost any other scientist of his time Szilard was also imbued with an intense desire to engage himself politically – “save the world” as he put it – from an early age. Among other scientists of his time, only Niels Bohr probably came closest to exhibiting the same kind of genuine and passionate concern for the social consequences of science that Szilard did. This was Leo Szilard the political activist. Even in his teens, when the Great War had not even broken out, he could see how the geopolitical landscape of Europe would change, how Russia would “lose” even if it won the war. When Hitler came to power in 1933 and others were not yet taking him seriously Szilard was one of the few scientists who foresaw the horrific legacy that this madman would bequeath Europe. This realization was what prompted him to help Jewish scientists find jobs in the UK, at about the same time that he also had his prophetic vision at the traffic light.
It was during the war that Szilard’s striking role as conscientious political advocate became clear. He famously alerted Einstein to the implications of fission – at this point in time (July 1939) Szilard and his fellow Hungarian expatriates were probably the only scientists who clearly saw the danger – and helped Einstein draft the now iconic letter to President Roosevelt. Einstein’s name remains attached to the letter while Szilard’s is often sidelined; a recent article about the letter from the Institute for Advanced Study on my Facebook feed mentioned the former but not the latter. Without Szilard the bomb would certainly have been built, but the letter may never have been written and the beginnings of fission research in the US may have been delayed. When he was invited to join the Manhattan Project Szilard snubbed the invitation, declaring that anyone who went to Los Alamos would go crazy. He did remain connected to the project through the Met Lab in Chicago, however. In the process he drove Manhattan Project security up the wall through his rejection of compartmentalization; throughout his life Szilard had been – in the words of the biologist Jacques Monod – “as generous with his ideas as a Maori chief with his wives” and he favored open and honest scientific inquiry. At one point General Groves, who was the head of the project, even wrote a letter to Secretary of War Henry Stimson asking the secretary to consider incarcerating Szilard; Stimson, who was a wise and humane man – he later took the ancient and sacred city of Kyoto off Groves’s atomic bomb target list – refused.
Szilard’s day in the sun came when he circulated a petition directed toward the president and signed by 70 scientists advocating a demonstration of the bomb to the Japanese and an attempt at cooperation in the field of atomic energy with the Soviets. This was activist Leo Szilard at his best. Groves was livid, Oppenheimer – who by now had tasted power and was an establishment man – was deeply hesitant, and the petition was stashed away in a safe until after the war. Szilard’s disappointment that his advice was not heeded turned to even bigger concern after the war when he witnessed the arms race between the two superpowers. In 1949 he wrote a remarkable fictitious story titled ‘My Trial As A War Criminal’ in which he imagined what would have happened had the United States lost the war to the Soviets; Szilard’s point was that in participating in the creation of nuclear weapons, American scientists were no less or more complicit than their Russian counterparts. Szilard’s take on the matter raised valuable questions about the moral responsibility of scientists, an issue that we are grappling with even today. The story played a small part in inspiring the Soviet physicist Andrei Sakharov in his campaign for nuclear disarmament. Szilard also helped organize the Pugwash Conferences for disarmament, gave talks around the world on nuclear weapons, and met with Nikita Khrushchev in Manhattan in 1960; the results of this amiable meeting were the gift of a Schick razor to Khrushchev and, more importantly, Khrushchev’s agreement with Szilard’s suggestion that a telephone hot-line be installed between Moscow and Washington for emergencies. The significance of this hot-line was acutely highlighted by the 1962 Cuban missile crisis. Sadly, Szilard’s two later attempts at meeting with Khrushchev failed.
After playing a key role in the founding of the Salk Institute in California, Szilard died peacefully in his sleep in 1964, hoping that the genie whose face he had seen at the traffic light in 1933 would treat human beings with kindness.
Since Szilard’s time, the common and deep roots that underlie the tree of science and politics have become far clearer. Today we need scientists like Szilard to stand up for science every time a scientific issue such as climate change or evolution collides with politics. When Szilard pushed scientists to get involved in politics it may have looked like an anomaly, but today we are struggling with very similar issues. As in many of his other actions, Szilard’s motto for the interaction of science with politics was one of accommodation. He was always an ardent believer in the common goals that human beings seek, irrespective of the divergent beliefs that they may hold. He was also an exemplar of combining thought with action, projecting an ideal meld of the idealist and the realist. Whether he was balancing thermodynamic thoughts with refrigeration concerns or following up political idealism with letters to prominent politicians, he taught us all how to both think and do. As interdisciplinary scientist, as astute technological inventor, as conscientious political activist, as a troublemaker of the best kind, Leo Szilard leaves us with an outstanding role model and an enduring legacy. It is up to us to fill his shoes.

A canard about Robert Oppenheimer in Louisa Gilder's "The Age of Entanglement"

It's always disappointing when an otherwise commendable effort at writing perpetuates hearsay about an important character in the story, especially when that hearsay is casually tossed out and left open-ended.

Louisa Gilder's book "The Age of Entanglement" is an engrossing and rather original book which tells the story of quantum mechanics, and especially of the bizarre quantum phenomenon called entanglement, through an unusual device – recreations of conversations between famous physicists. Although Gilder takes considerable liberty in fictionalizing the conversations, they are based on real events. While some of the exchanges sound contrived, for the most part the device works and I certainly give the author points for effort – in fact I wish more popular books were penned in this format rather than simply pitched as straight explanatory volumes. Gilder is especially skilled at describing the fascinating experiments by more recent physicists that validated entanglement, a part of the story usually not found in other treatments of the history of physics. Having said that, the book is more a work of popular history than popular science, and I thought that Gilder should have taken more pains to clearly describe the science behind the spooky phenomena.

Gilder's research seems quite exhaustive and well-referenced, which is why the following observation jumped off the page and bothered me all the more.

On pg. 189, Gilder describes a paragraph from a very controversial and largely discredited book by Jerrold and Leona Schecter. The book, which created a furor, extensively quotes a Soviet KGB agent named Pavel Sudoplatov who claimed that, among others, Niels Bohr, Enrico Fermi and Robert Oppenheimer were working for the Soviet Union and that Oppenheimer knew that Klaus Fuchs was a Soviet spy (who knew!). No evidence for these fantastic allegations has ever turned up. In spite of this, Gilder refers to the book and essentially quotes a Soviet handler named Merkulov who says that a KGB agent in California named Grigory Kheifets thought that Oppenheimer was willing to transmit secret information to the Soviets. Gilder says nothing more after this and moves on to a different topic.

Now take a look at the footnotes on pg. 190-191 of Kai Bird and Martin Sherwin's authoritative biography of Oppenheimer ("American Prometheus"). Bird and Sherwin quote exactly the same paragraph, but then emphatically add that there is not a shred of evidence to support what was said and that the whole thing was probably fabricated by Merkulov to save Kheifets's life (since Kheifets had otherwise turned up empty-handed on potential recruits).

If you want even more authoritative information on this topic, I would recommend the recent book "Spies: The Rise and Fall of the KGB in America" by Haynes, Klehr and Vassiliev. The book has a detailed chapter discussing the Merkulov-Kheifets letter procured by the Schecters and cited by Gilder. The chapter clearly says that absolutely no corroboration of the contents of this letter has been found, either in Kheifets's own testimony after he returned to the Soviet Union or in the Venona transcripts. You would think that material of such importance would at the very least be corroborated by Kheifets himself, and a source as valuable as Oppenheimer would almost certainly be mentioned in other communications. But no such evidence exists. The authors also point out multiple other glaring inconsistencies and fabrications in the documents cited in the Schecter volume. The book states quite clearly that, as of 2008, there is no ambiguity about the matter and not the slightest hint that Oppenheimer was willing to transmit secrets to the Soviets; the authors emphatically end the chapter by saying that the case is closed.

What is troubling is that Gilder quotes the paragraph and simply ends it there, leaving the question of Oppenheimer's loyalty dangling and tantalizingly open-ended. She does not quote the clear conclusion drawn by Bird and Sherwin, Haynes, Klehr, Vassiliev and others that there is no evidence to support this insinuation. She also must surely be aware of several other general works on Oppenheimer and the Manhattan Project, none of which give any credence to such allegations.

You would expect more from an otherwise meticulous author like Gilder, and I have no idea why she entertains the canard about Oppenheimer. But in an interview with her that I saw, she offers a possible explanation: she says that she was first fascinated by Oppenheimer (as most people were and still are) but was then repulsed by his treatment of his student David Bohm, who dominates the second half of her book. Bohm was a great physicist and philosopher (his still-in-print textbook on quantum theory is unmatched for its logical and clear exposition) and a dedicated left-wing thinker who was Oppenheimer's student at Berkeley in the 1930s. After the war he was suspected of being a communist and stripped of his faculty position at Princeton, which was then very much an establishment institution. After this unfortunate incident, Bohm lived a peripatetic life in Brazil and Israel before settling down at Birkbeck College in England. Oppenheimer essentially distanced himself from Bohm after the war, had no trouble detailing Bohm's left-wing associations to security agents and generally did not try to save Bohm from McCarthy's onslaught.

This unseemly aspect of Robert Oppenheimer's personality was well known; he was a complex and flawed character. But did Gilder's personal view of Oppenheimer in the context of Bohm taint her attitude toward him and cause her to casually toss out a tantalizing allegation which she must have known was not substantiated? I sure hope not. I think it would be great if Gilder amended this material in a forthcoming edition of her otherwise interesting book.

Modular complexity and the problem of reverse engineering the brain

Bell's number counts the possible groupings among the various components of a system and scales explosively with the number of those components (Image: Science Magazine).
I have been reading an excellent collection of essays on the brain titled "The Future of the Brain", which contains ruminations on current and future brain research from leading neuroscientists and other researchers like Gary Marcus, George Church and the Moser husband-and-wife pair who won last year's Nobel prize. Quite a few of the authors are from the Allen Institute for Brain Science in Seattle. In starting this institute, Microsoft co-founder Paul Allen has placed his bets on mapping the brain…or at least the mouse visual cortex for starters. His institute is engaged in charting the sum total of neurons and other working parts of the visual cortex and then mapping their connections. Allen is not alone in doing this; there are projects like the Connectome at MIT which are trying to do the same thing (and the project's leader Sebastian Seung has written a readable book about it).

Now we have heard prognostications about mapping and reverse engineering brains from more eccentric sources before, but fortunately Allen is one of those who does not believe that the singularity is around the corner. He also seems to have entrusted his vision to sane minds. His institute’s chief science officer is Christof Koch, former professor at Caltech, longtime collaborator of the late Francis Crick and self-proclaimed “romantic reductionist” who started at the institute earlier this year. Koch has written one of the articles in the essay collection. His article and the book in general reminded me of a very interesting perspective that he penned in Science last year which points out the staggering challenge of understanding the connections between all the components of the brain; the “neural interactome” if you will. The article is worth reading if you want to get an idea of how even simple numerical arguments illuminate the sheer magnitude of mapping out the neurons, cells, proteins and connections that make up the wonder that’s the human brain.

Koch starts by pointing out that calculating the interactions between all the components in the brain is not the same as computing the interactions between, say, all the atoms of an ideal gas, since unlike a gas the interactions are between different kinds of entities and are therefore not identical. Instead, he proposes, we have to use something called Bell's number Bn, which reminds me of the partitions that I learnt about when I was sleepwalking through set theory in college. Briefly, for n objects Bn is the number of ways of grouping them into non-overlapping sets (singletons, pairs, triples and so on). Thus, when n = 3, Bn is 5. Not surprisingly, Bn scales exponentially with n, and Koch points out that B10 is already 115,975. If we think of a typical presynaptic terminal with its 1000 proteins or so, Bn starts giving us serious heartburn. For something like the visual cortex where n = 2 million, Bn would be inconceivable, and it's futile to even start thinking about what the number would be for the entire brain. Koch then uses a simple calculation based on Moore's Law to estimate the time needed for "sequencing" these interactions. For n = 2 million the time needed would be of the order of 10 million years. And as the graph on top demonstrates, for more than 10 components or so the amount of time spirals out of hand at warp speed.
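If you want to check these numbers yourself, here is a minimal Python sketch (my own illustration, not anything from Koch's paper) that computes Bell numbers from the standard recurrence and reproduces the values quoted above:

```python
from math import comb

def bell(n):
    # Bell number B(n): the number of ways to partition a set of n objects,
    # computed via the recurrence B(m+1) = sum over k of C(m, k) * B(k).
    B = [1]  # B(0) = 1
    for m in range(n):
        B.append(sum(comb(m, k) * B[k] for k in range(m + 1)))
    return B[n]

print(bell(3))    # 5          -- the n = 3 case mentioned above
print(bell(10))   # 115975     -- Koch's B10
print(bell(15))   # 1382958545 -- already past a billion by n = 15
```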

This considers only the 2 million neurons in the visual cortex; it doesn't even consider the proteins and cells which might interact with the neurons on an individual basis. In addition, at this point we are not even really aware of how many neuronal types there are in the brain: neurons are not all identical like indistinguishable electrons. What makes the picture even more complicated is that these types may be malleable, so that sometimes a single neuron can be of one type while at other times it can team up with other neurons to form a unit that is of a different type. This multilayered, fluid hierarchy rapidly reveals the outlines of what Paul Allen has called the "complexity brake": he described this in the same article that was cogently critical of Ray Kurzweil's singularity. And the neural complexity brake that Koch is talking about seems poised to make an asteroid-sized impact on our dreams.

So are we doomed in trying to understand the brain, consciousness and the whole works? Not necessarily, argues Koch. He gives the example of electronic circuits, in which individual components are grouped separately into modules. If you bunch a number of interacting entities together into a separate module, the complexity of the problem drops since you now only have to calculate interactions between modules. The key question then is: is the brain modular, and how many modules does it contain? Common sense would have us think it is modular, but it is far from clear how exactly we can define the modules. We would also need a sense of the minimal number of modules in order to calculate the interactions between them. This work is going to take a long time (hopefully not as long as that for B2 million) and I don't think we are going to have an exhaustive list any time soon, especially since the modules are going to be composed of different kinds of components and not just one kind. But it's quite clear that whatever the nature of these modules, delineating their particulars would go a long way in making the problem more manageable, as the toy comparison below suggests.
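To make the payoff of modularity concrete, here is a back-of-the-envelope sketch in the same vein (my own toy numbers, not Koch's): if 15 interacting components could be cleanly grouped into 5 modules, the number of possible groupings one has to worry about drops from B15 (about 1.4 billion) to B5 (just 52), at the price of ignoring whatever happens inside each module.

```python
from math import comb

def bell(n):
    # Same Bell-number routine as in the earlier sketch.
    B = [1]
    for m in range(n):
        B.append(sum(comb(m, k) * B[k] for k in range(m + 1)))
    return B[n]

n_components = 15   # hypothetical number of interacting components
n_modules = 5       # hypothetical grouping into modules of 3 components each

print(f"Ungrouped: B({n_components}) = {bell(n_components):,}")  # 1,382,958,545
print(f"Modular:   B({n_modules}) = {bell(n_modules):,}")        # 52
```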

Any attempt to define these modules is going to run into problems of emergent complexity that I have occasionally written about. Two neurons plus one protein might behave differently from two neurons plus two proteins in unanticipated ways. Also, if we are thinking about forward and reverse neural pathways, I would hazard a guess that one neuron plus one neuron in one direction may even be different from the same interaction in the reverse direction. Then there's the more obvious problem of dynamics. The brain is not a static entity and its interactions would reasonably be expected to change over time. This might interpose a formidable new barrier in brain mapping, since it may mean that whatever modules we define may not even be the same during every time slice. A fluid landscape of complex modules whose very identity changes every single moment could well be a neuroscientist's nightmare. In addition, the amount of data that captures such neural dynamics would be staggering, since even a millimeter-sized volume of rat visual tissue requires a few terabytes of data to store all its intricacies. However, the data storage problem pales in comparison to the data interpretation problem.

Nevertheless this goal of mapping modules seems far more attainable in principle than calculating every individual interaction, and that’s probably the reason Koch left Caltech to join the Allen Institute in spite of the pessimistic calculation above. The value of modular approaches goes beyond neuroscience though; similar thinking may provide insights into other areas of biology, such as the interaction of genes with proteins and of proteins with drugs. As an amusing analogy, this kind of analysis reminds me of trying to understand the interactions between different components in a stew; we have to appreciate how the salt interacts with the pepper and how the pepper interacts with the broth and how the three of them combined interact with the chicken. Could the salt and broth be considered a single module?

If we can ever get a sense of the modular structure of the brain, we may have at least a fighting chance to map out the whole neural interactome. I am not holding my breath too hard, but my ears will be wide open since this is definitely going to be one of the most exciting areas of science around.

Adapted from a previous post on Scientific American Blogs.