Field of Science

Why technology won't save biology

Carl Woese, who envisioned an integrated view of 21st century biology, from molecules to communities
Starting today, I will be writing a monthly column for the outstanding website 3 Quarks Daily. My first post deals with the limitations of the technological zeitgeist in understanding biology.

There seems to be no end to biology's explosive progress. Genomes can now be read, edited and rewritten with unprecedented scope; individual neurons can be studied in both space and time; the dynamics of the spread of viruses and ecological populations can be captured in mathematical models; and vaccines for deadly diseases like HIV and Ebola seem to hold more promise than ever. They say that the twentieth century belonged to physics and the twenty-first belongs to biology, and everything we see in biology seems to confirm this idea.

There have been roughly six revolutions in biology during the last five hundred years or so that brought us to this stage. The first was the classification of organisms using binomial nomenclature by Linnaeus. The second was the invention of the microscope by Hooke, Leeuwenhoek and others. The third was the discovery of the composition of cells, in health and disease, by Schwann and Schleiden, a direct beneficiary of the use of the microscope. The fourth was the formulation of evolution by natural selection by Darwin. The fifth was the discovery of the laws of heredity by Mendel. And the sixth was the discovery of the structure of DNA by Watson, Crick and others. A seventh, ongoing revolution could be said to be the mapping of genomes and its implications for disease and ecology. Two other minor revolutions should be added to this list: one was the weaving of statistics into modern genetics, and the other was the development of new imaging techniques like MRI and CT scans.

These six revolutions in biology resulted from a combination of new ideas and new tools. This is consistent with the general two-pronged picture of scientific revolutions that has emerged through the ages: one consisting in equal parts of revolutions of ideas and revolutions of technology. The first kind was popularized by Thomas Kuhn in his book "The Structure of Scientific Revolutions". The second was popularized by Peter Galison and Freeman Dyson; Galison in his book "Image and Logic", and Dyson in his "The Sun, the Genome and the Internet". Generally speaking, many people are aware of Kuhn but few are aware of Galison or Dyson. That is because ideas are often considered loftier than tools; the scientist who gazes at the sky and divines formulas for the universe through armchair calculations is considered more brilliant than the one who gets down on her hands and knees and makes new discoveries by gazing into the innards of machines.

However, this preference for theory over experiment paints a false picture of scientific progress. Machines and tools are not just important for verifying theories; they are more often used to discover new things that theory then has to catch up with and explain. In physics, the telescope and the particle accelerator have been responsible for some of the greatest revolutions in our understanding of nature; they haven't just verified existing theories but uncovered the fundamental composition of matter and spacetime. In chemistry, the techniques of x-ray diffraction and nuclear magnetic resonance have not just opened new windows into the structure of molecules; they have led to novel understanding of molecular behavior in environments as diverse as intricate biological systems and the surfaces of distant planets. There is little doubt that new experimental techniques have been as responsible for scientific revolutions as new ideas, if not more so.

As one example of the primacy of tool-driven revolutions, four of the milestones in biology noted above can be considered to have come from the development or application of new tools. The microscope itself was a purely technological invention. The elucidation of the structures of cells, bacteria and viruses was made possible by the invention of new forms of microscopy – the electron microscope in particular – as well as new dyes which allowed scientists to distinguish cellular components from each other. The structure of DNA came about because of x-ray diffraction and chemical analysis. The minor revolution of imaging was made possible by concomitant revolutions in electronics and computing. And finally, the revolution in genomics was engendered by chemical and physical methods for rapidly sequencing genomes as well as powerful computers which can analyze this data. The application of all this technology has produced a windfall of data which hides gems of understanding. The new science of systems biology promises to tie all this data together and lead to an improved understanding of biological systems.

And this is where the problem begins. In one way biology has become a victim of its own success. Today we can sequence genomes much faster than we can understand them. We can measure electrochemical signals from neurons much more efficiently than we can understand their function. We can model the spread of viral and ecological populations much more rapidly than we can understand their origins or interactions. Moore's Law may apply to computer chips and sequencing speeds, but it does not apply to human comprehension. In the words of the geneticist Sydney Brenner, biology in the heyday of the 50s used to be "low input, low throughput, high output"; these days it's "low input, high throughput, no output". What Brenner is saying is that compared to the speed with which we can now gather and process biological data, the theoretical framework which goes into understanding the data, as well as the understanding which comes out the other end, is severely impoverished. What is more serious is a misguided belief that data equals understanding. The philosopher of technology Evgeny Morozov calls this belief "technological solutionism": the urge to use a certain technology to address a problem simply because you can.

Consider a field like cancer, where gene sequencing has come to play a dominant role. The idea is to compare the genome sequences of cancer cells and normal cells, and thereby understand which genes are malfunctioning in cancer cells. The problem is that if you sequence a typical cell from, say, a lung cancer patient, you will find literally hundreds of genes which are mutated. It is difficult to distinguish the mutant genes which are truly important from those which just come along for the ride; the latter are a necessary part of the messy, shotgun process of cancer cell evolution. It is even more difficult to know which genes to target if we want to keep the cancer from growing. To do this it is important to have a better theory for understanding exactly what genes would be mutated in a cancer cell and why, and what function they serve. While we have made strides in developing such theories, our understanding of the basic causal framework of cancer is far behind our capacity to rapidly sequence cancer genomes. And yet millions of dollars are spent on sequencing cancer genomes, with the expectation that someday the data alone will lead to a quantum leap in understanding. You look for the keys not where they are but where you can easily see them, under the bright light.
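
To make the driver-versus-passenger problem concrete, here is a minimal, purely illustrative sketch in Python. The gene names, variants and recurrence frequencies are all invented, and the recurrence cutoff is a crude stand-in for the kind of causal theory argued for above; nothing here is meant to reflect how real cancer genomics pipelines work.

```python
# Toy illustration only: hypothetical variant calls from a tumor sample and a
# matched normal sample. Gene names, variants and frequencies are invented.

tumor_variants = {
    ("TP53", "c.524G>A"), ("KRAS", "c.35G>T"), ("GENE_A", "c.100C>T"),
    ("GENE_B", "c.7del"), ("GENE_C", "c.88A>G"), ("BRCA2", "c.5946del"),
}
normal_variants = {
    ("BRCA2", "c.5946del"),  # germline variant, present in both samples
}

# Step 1 is easy: somatic mutations are those present in the tumor but absent
# from the matched normal tissue.
somatic = tumor_variants - normal_variants
print(f"{len(somatic)} somatic mutations found")

# Step 2 is the hard part. A crude recurrence cutoff (how often a gene is
# mutated across hypothetical patients) stands in for the causal theory we
# lack; real driver/passenger classification needs far more than a threshold.
recurrence = {"TP53": 0.45, "KRAS": 0.30, "GENE_A": 0.01, "GENE_B": 0.02, "GENE_C": 0.005}
candidate_drivers = {gene for gene, _ in somatic if recurrence.get(gene, 0.0) > 0.10}
passengers = {gene for gene, _ in somatic} - candidate_drivers

print("candidate drivers:", sorted(candidate_drivers))
print("likely passengers (or simply not understood):", sorted(passengers))
```

Even this toy example shows the asymmetry: subtracting the normal sample's variants is the trivial part, while deciding which of the remaining mutations actually drive the cancer requires a theory that the sequencing data alone does not supply.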

A recent paper from the neuroscientist John Krakauer said the same thing about neuroscience. If biology is the science of the twenty-first century, neuroscience is probably the cherry on that cake. No other field promises to deliver fundamental insights not just into major mental health disorders but into the very essence of what it means to be human. To understand the brain better, scientists and the government launched the BRAIN Initiative a few years ago. The goal of this initiative can be stated very simply: it is to map every single neuron in the brain in space and time and to understand the connections between them. The belief is that understanding the behavior of neurons will lead to an understanding of human behavior. At the heart of the initiative are new methods of interrogating neuronal function, ranging from very precise electrical recording using advanced sensor techniques to studying an inventory of the proteins and genes activated in neurons using modern recombinant DNA technology. These methods will undoubtedly discover new aspects of the brain that were previously hidden. Some of them will lead to groundbreaking understanding. But we do not know whether they will allow us to understand human behavior. As one example, the paper by Krakauer discusses mirror neurons, a specific class of neurons that caused a great stir a few years ago. As their name indicates, mirror neurons fire both when an animal performs an action and when it observes another individual performing the same action. These neurons have thus been proclaimed to be the basis of diverse human emotions, including empathy; understanding them is considered integral to understanding social behavior; delicate imaging studies can track their activation and deactivation. But as Krakauer notes, many experiments on mirror neurons have been done on monkeys, and in those cases little attention, if any, is paid to the actual behavior of the monkey when the mirror neurons fire. Thus we seem to know what is going on, but only at the level of the neurons themselves. We do not know what is actually going on in the mind of the monkey, in terms of its behavior, when the neurons are activated.

To understand why these limitations of technology can hamper our understanding of complex biological systems, we must turn to one of the great philosophical foundation stones of science: the paradigm of reductionism. Reductionism was the great legacy of twentieth century science, and a culmination of everything that came before. It means the breaking up of complex systems into their simpler parts; the idea is that understanding the simpler parts will enable us to understand the whole system. There is little doubt that reductionism has led to spectacular successes in all of science. The entire edifice of twentieth century physics – exemplified by relativity and quantum mechanics – rose from the reductionist program of understanding matter and spacetime using their most basic components: particles and fields. Molecular biology similarly was created when biological matter started to be unraveled at the molecular level. Most of these advances became possible because powerful new technology like particle accelerators and spectrometers allowed us to break apart and study matter and living organisms at their most fundamental level.

But as science overturned one obstacle after another in its confident reductionist march, it became clear that all was not well with this approach. One of the first salvos in what came to be called the "reductionism wars" came from the physicist Philip Anderson, who in 1972 wrote an article titled "More Is Different". Anderson did not deny the great value of reductionism in science, but he pointed out that complex systems are not always the sum of their constituent parts. More is not just quantitatively different but qualitatively so. Even simple examples illustrate this phenomenon: atoms of gold are not yellow, but gold bars are; individual molecules of water don't flow, but put enough of them together and you get a river with features that are not directly derived from the molecules themselves. And consciousness may be the ultimate challenge to reductionism; there is absolutely nothing in a collection of carbon, hydrogen, oxygen and nitrogen atoms in a human brain that tells us that if you put enough of them together in a very specific configuration, you will get a human being who would be writing this essay. Rivers, gold bars, human brains: all these systems are examples of emergent phenomena, in which the specification of the individual components is necessary but not sufficient to understand the specification of the entire system. This is top-down as opposed to bottom-up understanding.

Why does emergence exist? We don't know the answer to that question, but at least part of it is related to historical contingency. The complexity theorist Stuart Kauffman gives a very good example of this contingency. Consider, says Kauffman, the structure and function of the human heart. Imagine that you had a super-intelligent demon, a "superfreak" who could specify every single particle in the heart and therefore try to derive the function of the heart from string theory. Imagine that, starting from the birth of the universe, this omniscient superfreak could specify every single configuration of atoms in every pocket of spacetime that could lead to the evolution of galaxies, supernovae, planets, and life. He would still fail to predict that the most important function of the human heart is to pump blood. That is because the heart has several functions (making beating noises, for instance), but the function of the heart about which we care the most is a result of historical accident, a series of unpredictable circumstances on which natural selection acted before elevating the pumping of blood to the quintessential property of the heart. Some of the pressures of this natural selection came from below, but others came from above; for instance, the function of the heart was sculpted not just by the molecules which make up heart muscle but by the physiological and ecological environments in which ancient heart precursors found themselves. The superfreak may even be able to predict the pumping of blood as one of the many properties of the heart, but he will still not be able to determine the unique role of the heart in the context of the grand tapestry of life on earth. The emergence of the human heart from the primordial soup of Darwin's imagination cannot be understood by understanding the quarks and cells from which the heart is composed. And the emergence of consciousness from the brain cannot be understood merely by understanding the functions of single neurons.

Emergence is what thwarts the understanding of biological systems through technology, because most technology used in the biological sciences is geared toward the reductionist paradigm. Technology has largely turned biology into an engineering discipline, and engineering tells us how to build something using its constituent parts, but it doesn't always tell us why that thing exists and what relationship it has to the wider world. The microscope observes cells, x-ray diffraction observes single DNA molecules, sequencing observes single nucleotides, and advanced imaging and recording techniques observe single neurons. As valuable as these techniques are, they will not help us understand the top-down pressures on biological systems that lead to changes in their fundamental structures.

The failure of reductionist technology to understand emergent biology is why technology will not save the biological sciences. I have a modest prescription to escape from this trap: create technology that studies biology at multiple levels, and tie this technology together with concepts that describe biology at multiple levels. For instance, when it comes to neuroscience, it would be fruitful to combine single-neuron recording (low level) with lower-resolution techniques for studying clusters of neurons and modules of the brain (intermediate level) and with experiments directly probing the behavior of animal and human brains (higher level). The good news is that many of these techniques exist; the bad news is that many of them exist in isolation, and the researchers who use them don't build bridges between the various levels. The same bridge-building goes for concepts. For instance, at the highest level organisms are governed by the laws of thermodynamics, more specifically non-equilibrium thermodynamics (of which life is an example), but you will not usually see scientists studying collections of neurons take into consideration the principles of statistical thermodynamics or entropy. To achieve this meld of concepts scattered across different levels of biological understanding, there will also need to be much closer multidisciplinary interaction; physicists studying thermodynamics will need to collaborate closely with geneticists studying protein synthesis in neurons. These scientists will in turn need to work together with psychologists observing human behavior or ethologists observing animal behavior; both these fields have a very long history which can inform researchers from other disciplines. Finally, all biologists need to better appreciate the role of contingency in the structure and function of their model systems. By looking at simple organisms, they need to ask how contingency can inform their understanding of more complicated creatures.

For a holistic biology to prosper we need both technological and human interactions. But what we need most is an integrated view of biological organisms that moves away from a strict focus on looking at these organisms as collections of particles, fields and molecules. The practitioners of this integrated biology can take a page out of the writings of the late Carl Woese. Woese was a great biologist who discovered an entirely new domain of life (the Archaea), and one of the few scientists able to take an integrated view of biology, from the molecular to the species level. He pioneered new techniques for comparing genomes across species at the molecular level, but he also had a broader and more eloquent view of life at the species level, one which he set down in a 2004 essay titled "A New Biology for a New Century", an essay that expanded biology beyond its mechanistic description:

"If they are not machines, then what are organisms? A metaphor far more to my liking is this. Imagine a child playing in a woodland stream, poking a stick into an eddy in the flowing current, thereby disrupting it. But the eddy quickly reforms. The child disperses it again. Again it reforms, and the fascinating game goes on. There you have it! Organisms are resilient patterns in a turbulent flow—patterns in an energy flow. A simple flow metaphor, of course, fails to capture much of what the organism is. None of our representations of organism capture it in its entirety. But the flow metaphor does begin to show us the organism's (and biology's) essence. And it is becoming increasingly clear that to understand living systems in any deep sense, we must come to see them not materialistically, as machines, but as (stable) complex, dynamic organization."

As the quote above observes, none of our experiments or theories captures the science at all levels, and it's only by collaboration that we can enable understanding across strata. To enable it we must use technology, but use it not as master but as indispensable handmaiden. We are all resilient organisms in a turbulent energy flow. We live and die in this flow of complex, dynamic organization, and we can only understand ourselves when we understand the flow.

Comments:

  1. In the early days of the introduction of 'systems biology', the old guard would say that what you call for ("create technology that studies biology at multiple levels, and tie this technology together with concepts that describe biology at multiple levels") already exists and is called physiology.
    There is some truth to this reflex rejection of the new by the old. Most institutions and funding agencies give at least lip service to agreement with your goals for multidisciplinary research, but the devil has been in the details of actually supporting it.
    Thank you for a thought-provoking essay.

    1. Yes, I believe Sydney Brenner said something similar in an essay: that systems biology is just a new name for physiology, which took a more holistic approach.

  2. Thank you for this thoughtful post, and for mentioning our work (I'm one of Krakauer's co-authors). Here are a few things to consider: First, I and other scientists interested in bottom-up and top-down approaches would dispute your claim that technology is always reductionist. I and many other scientists are using robots and large-scale simulations of animals explicitly because they enable us to synthesize rather than analyze (in the traditional sense of breaking down). Second, the superfreak example (a thought experiment usually called Laplace's demon) leaves me puzzled. I don't see why such a cognizer would have difficulty in computing how the heart's role played a part in a bigger ecological system. All causal trails up and down would be instantly known. But who cares? Understanding is a human affair, not a task for either a superfreak or a Laplacean demon. As such, perhaps we need to divide systems up into these hierarchical levels of organization (see our discussion of mechanism in the paper) because we simply don't have the cognitive or cultural capacity to do what we call understanding otherwise.

    1. Thanks for your comment. It is indeed promising to have you and your colleagues look at approaches across multiple levels of organization, but firstly, I don't know how many scientists are actually doing this and secondly, I think this is still a new development in the history of applying technology to scientific problems, most of which I consider to be reductionist rather than synthetic. The point about the heart is really a general point about the failure of reductionism to account for contingency, especially evolutionary contingency. You can read a much more detailed description of this point in Stuart Kauffman's book "Reinventing the Sacred" (chapter 2 or 3, I believe). I agree that ultimately we should focus on human rather than supernatural understanding!

