
On the primacy of doubt in an age of illusory certainty

This is my second monthly column for the website 3 Quarks Daily.

We live in a fractured age, one in which many seem convinced that their beliefs are right and that they can never agree with the other side on anything to any degree. Science has always been the best antidote to this bias, because while political truths are highly subjective and subject to the whims of the majority, most scientific truths are starkly objective. You may try to pass a law by majority vote in Congress saying that two and two equals five, or that DNA is not a double helix, but such falsehoods will not stand for long because the bare facts say otherwise. You may keep on denying global warming, but that will not make the warming stop. What makes science different is that its facts are true irrespective of whether you believe they are true.

But alongside this undeniable nature of scientific facts exists a way of doing things that seems almost paradoxical next to proclamations about hard scientific truth. That is the essential, never-ending role of doubt, skepticism and uncertainty in the practice of science. Yes, DNA is a double helix, and yes, it seems almost impossible that this fact will someday be overturned, but even then we should not hold it sacrosanct. "Truth" in science, no matter how convincing, is always regarded as provisional and subject to change. Some scientific facts are now so well documented that they approach the status of "truth", and yet treating them as literal, final truth would mean abandoning the scientific method. Seen this way, truth in science can be considered an asymptotic limit, one which we can always get closer to but can never definitively reach.

It's this seemingly paradoxical and yet crucial yin-and-yang aspect of science that I believe is still quite hard for non-scientists to grasp. Niels Bohr would have appreciated the tension. Bohr bequeathed to the world the concept of complementarity: the existence of seemingly opposite ideas that are nonetheless both required to explain the world. In the physical world, complementarity was first glimpsed in the behavior of subatomic particles, which can behave sometimes as waves and sometimes as particles, depending on the experiment. Waves and particles may seem to be contradictory concepts, and yet, as the pioneers of quantum mechanics showed us, you cannot explain reality without assuming that electrons or photons are both. In his later life, Bohr extended the idea of complementarity to many aspects of the human world: good and evil, freedom and restraint, war and peace. He realized that all of these seemingly paradoxical aspects of the human and physical world have to co-exist, not just if we want to understand reality as it is but if we want to develop tolerance for opposing ideas. Bohr played a very significant role in making us comfortable with paradox and uncertainty.

In some sense the twentieth century was the age when science once and for all destroyed Platonic notions of certainty. The one idea from that time that really struck at the heart of certainty was Kurt Gödel's incompleteness theorem. Gödel showed us that we can never know everything even in the supposedly pristine and completely precise world of pure mathematics; even there we can keep finding true statements that are unprovable, and so mathematics will always be inexhaustible. At about the same time, Max Born and Werner Heisenberg demonstrated the fundamental probabilistic nature of the subatomic world. In one way, neither of these ideas destroyed certainty as much as redefined it, but they did tell us that certainty is a very slippery concept, one that requires us at the very least to recalibrate our expectations of both the physical and the abstract universe. At the same time, neither development portended the end of science in any shape or form; mathematicians went on proving important theorems in spite of Gödel, and physicists went on discovering important facts about the material world in spite of Heisenberg.

Some people are confused about how certainty and uncertainty can co-exist in science, while too many opportunists (creationists and climate change deniers, for instance) are ready to pounce on shreds of uncertain knowledge at the periphery to declare the entire edifice uncertain and hollow. As science communicators, I think we still fail to convey this co-existence of hard facts and room for doubt to non-specialists, and this failure is a significant reason for so many of our troubles in establishing a dialogue about science with the public.

One of the few who did a remarkably accessible job communicating the role of uncertainty in science was Richard Feynman. In "The Pleasure of Finding Things Out", he made the point that many people look for certainty as an emotional aid; hence the rise of systems like religion. But as Feynman says, the only way we can make scientific progress is if we remain skeptical, if we expose even the best-known facts of science to the glare of doubt. To do this requires fearlessness, the courage to abandon even your most cherished beliefs in the face of evidence. But the scientists of yore understood well this need for courage. There is a reason why the Royal Society, when it was founded in 1660, chose as its motto the Latin phrase "Nullius in verba": nobody's word is final. At the time, this way of looking at the world, of not regarding any authority including the King's as the final word, was a novel and revolutionary way of doing things. It marked a great transition from the age of kings to the age of reason.

And it was a message that Feynman emphasized throughout his career:

“I can live with doubt and uncertainty and not knowing. I think it’s much more interesting to live not knowing than to have answers which might be wrong. I have approximate answers and possible beliefs and different degrees of certainty about some things, but I am not absolutely sure about many things, and there are some things which I don’t know anything about…such as whether it means anything to ask why we are here, and what the question might mean.

But I don’t have to know an answer. I don’t feel frightened by not knowing. By feeling lost in this mysterious universe without having any purpose, which is the way it really is, as far as I can tell. It doesn’t frighten me.”

Skepticism engendered by fearlessness is an integral aspect of science; not only as some formal, abstract protocol, but as a necessary tool without which progress would be impossible. I would say the same attitude applies to the conduct of our social and political affairs, although as current events demonstrate, it's been exceedingly difficult to profess doubt and uncertainty in the political arena.

Today I find myself troubled by what I see as the misplaced conviction and lack of skepticism on both the left and the right in this country and in other lands. The real problem of course is not lack of skepticism in the beliefs of others but skepticism regarding one's own beliefs: as Feynman again memorably put it, "The first principle is that you must not fool yourself, and you are the easiest person to fool."

A bedrock of presumed certainty in what one sees as hard, incontrovertible facts not only roots one's beliefs in certain sacred values, but often simply precludes one from grasping subtleties and responding rationally to arguments from the other side. The concept of sacred values was popularized by the psychologist Jonathan Haidt in trying to explain why otherwise intelligent people sometimes become irrationally wedded to their viewpoints. The central reason is that humans tend to infuse scientific facts and findings with their own preconceived moral meanings. Once that happens it becomes quite difficult to accept or agree with facts that seem to collide with those values.

This knee-jerk disapproval manifests itself on both sides of the political aisle. The right is too skeptical even of the basic facts of global warming, while the left holds the science of global warming to be perfect and sacrosanct and believes that it is largely settled. The right believes that genetics and race play a disproportionate role in mental qualities like IQ and creativity, while the left believes that environment can almost completely compensate for any such differences. In fact the left fears the right's viewpoint so much that it gropes for support for the equality of all human beings in science, even when science and human value systems should really be kept separate from each other. The right looks to science to support the contention, held by some of its members, that one gender may be inferior to the other. The left is so fearful of this contention that it is quick to interpret any studies demonstrating even subtle or uncontroversial gender differences as evidence of discrimination against one gender or another.

Generally speaking, the right is quick to point out differences rather than similarities, while the left is quick to assume that pointing out differences is tantamount to pointing out inferiority or superiority. The left's sacred values are purportedly equality and justice; the right's are purportedly freedom and God. Each side is frightened to give even a little ground for fear that the other side may declare victory. This fear is causing both sides to cloister themselves in bubbles and echo chambers, rejecting - as is being done on college campuses, for example - even the possibility of exposing themselves to the other side's viewpoints and engaging with them. And the proliferation of social media, with its ability to quickly surround you with like-minded people, creates an enduring illusion that your convictions are absolutely right.

Part of the solution to reconciling our sacred beliefs with each other is to go back to that notion of complementarity extolled by Niels Bohr and to realize that many of our beliefs simply may not be completely compatible with each other, but that they still have to co-exist in order for us to form a complete picture of the world and to live in harmony. This point was made very clearly by the philosopher Isaiah Berlin in an address delivered in 1994 at the University of Toronto, appropriately titled "A Message to the 21st Century". Throughout his life Berlin emphasized the plurality of human thoughts and values, and it was this theme that he expounded upon:

"The central values by which most men have lived, in a great many lands at a great many times—these values, almost if not entirely universal, are not always harmonious with each other. Some are, some are not. Men have always craved for liberty, security, equality, happiness, justice, knowledge, and so on. But complete liberty is not compatible with complete equality—if men were wholly free, the wolves would be free to eat the sheep. Perfect equality means that human liberties must be restrained so that the ablest and the most gifted are not permitted to advance beyond those who would inevitably lose if there were competition. Security, and indeed freedoms, cannot be preserved if freedom to subvert them is permitted. Indeed, not everyone seeks security or peace, otherwise some would not have sought glory in battle or in dangerous sports.

Justice has always been a human ideal, but it is not fully compatible with mercy. Creative imagination and spontaneity, splendid in themselves, cannot be fully reconciled with the need for planning, organization, careful and responsible calculation. Knowledge, the pursuit of truth—the noblest of aims—cannot be fully reconciled with the happiness or the freedom that men desire, for even if I know that I have some incurable disease this will not make me happier or freer. I must always choose: between peace and excitement, or knowledge and blissful ignorance. And so on."

It is the inability to grasp this fundamental tussle between different values that in part leads people to believe in holy truths. Human beings are uncomfortable with multiple answers, especially if they seem contradictory; but multiple, contradictory, complementary answers comprise the very essence of the world. Berlin however is honest and reflective enough to admit that he sees no straightforward solution to the dilemma. This is perhaps because the only solution is to admit the dilemma and live with it, to respect a plurality of opinion and abide by the views of multitudes, to recognize that the contradictions that man presents are the contradictions that man contains. In fact this reconciliation with differing views lies at the heart of both liberal democracy and scientific exploration.

Decades before Feynman, another famous scientist eloquently advocated for openness in both science and politics:

"There must be no barriers to freedom of inquiry … There is no place for dogma in science. The scientist is free, and must be free to ask any question, to doubt any assertion, to seek for any evidence, to correct any errors. Our political life is also predicated on openness. We know that the only way to avoid error is to detect it and that the only way to detect it is to be free to inquire. And we know that as long as men are free to ask what they must, free to say what they think, free to think what they will, freedom can never be lost, and science can never regress."

More than almost anyone at the time, Robert Oppenheimer knew how crucial open inquiry was, again not as a formal, pedagogic aspect of science but as an error-correcting tool. Error correction is an important part of both pure and applied science. It can allow you to discover fundamental particles and mathematical theorems, but it can equally allow you to build a better integrated circuit and invent new drugs for cancer. As Oppenheimer realized, the atomic bomb had made this need for correcting error literally a matter of life and death. Even the politicians - deeply susceptible as they are to the effects of integrated circuits, cancer and atomic bombs - should be able to get on board with that urgency.

But perhaps we should leave the last word on the many virtues of doubt, uncertainty, self-skepticism and openness to a man who told us a cautionary tale about the horrific ends that result from the acts of men and women who throw all these values out of the window, convinced as they are of the indelible truth of their own beliefs. As he bent down to pick up mud from a pond at Auschwitz, Jacob Bronowski issued a plea to question our beliefs that is as ominous and heartfelt as it is relevant to our modern times.

“It's said that science will dehumanize people and turn them into numbers. That's false, tragically false. Look for yourself. This is the concentration camp and crematorium at Auschwitz. This is where people were turned into numbers. Into this pond were flushed the ashes of some four million people. And that was not done by gas. It was done by arrogance, it was done by dogma, it was done by ignorance. When people believe that they have absolute knowledge, with no test in reality, this is how they behave. This is what men do when they aspire to the knowledge of gods.

Science is a very human form of knowledge. We are always at the brink of the known; we always feel forward for what is to be hoped. Every judgment in science stands on the edge of error and is personal. Science is a tribute to what we can know although we are fallible. In the end, the words were said by Oliver Cromwell: 'I beseech you, in the bowels of Christ, think it possible you may be mistaken.' We have to cure ourselves of the itch for absolute knowledge and power.”

You may be mistaken. Think it possible. 

Brave New World: A review of Jennifer Doudna and Samuel Sternberg's "A Crack in Creation: Gene Editing and the Unthinkable Power to Control Evolution"

For nearly four billion years, evolution on our planet progressed at its own pace, through blind fortune and culling, creating dinosaurs and diatoms, butterflies and blue whales. The organisms that this evolution forged were at the mercy of random mutations, natural selection and the ceaseless slash-and-burn of the unsparing environment. They had no say in their making, no purposeful modification of their form and function. Then, about fifty thousand years ago, a creature appeared who, for the first time, had the wherewithal to mold himself in his own image and defy the laws of biology. This was Homo sapiens.

For most of the next fifty thousand years, men and women largely changed their environment to provide comfort, security and prosperity for themselves. They made fire, built cities and cleared out large areas of virgin forest and grassland for agriculture. They domesticated other animals for their meat, milk and other products. They warded off diseases and predators and learnt how to travel long distances. In doing this they were fighting against the blind forces of nature that had shaped them, bending these forces to their will, delaying the natural course of death and exposure to the elements that these forces had visited upon them since the beginning.

This spectacular success was the result of two unique features that evolution itself had fashioned for man by building him a big brain: intelligence and language. Together these features caused a paradigm shift that completely transformed the nature of life on this planet. That shift led to wheels and cast iron, to automobiles and the computer, to architecture and antibiotics, to mass extinctions and climate change. And perhaps the ultimate impact of these two unique inventions of evolution was the discovery of genes. With this discovery man had finally learnt not just to fight, but potentially to completely hack, the very process that made him.

It is only in the last fifty years that we have become realistically able to wield this awesome force. And it's only in the last ten years that this force has yielded what is probably its most promising, far-reaching and consequential technological application, one that will truly lead to changes in our very beings, changes whose effects are frankly impossible to predict. This technology is germline gene editing, made possible by a biological tool called CRISPR. CRISPR holds promise for curing some of the most debilitating diseases afflicting humanity, for revolutionizing agriculture and environmental sustainability, and for putting the power of genetics in the hands of parents who might want to have healthy, disease-free babies. And there are few better people to explain CRISPR than Jennifer Doudna, one of its principal inventors, and Samuel Sternberg, a biochemist who worked alongside her on the key experiments. Their book on the technology is a very good read, laying out both the technical details and the social implications with honesty, clear explanations and sensitivity.

So what is CRISPR? In one sentence, it's cheap precision gene editing. The words "cheap" and "precision" are both key to its looming dominance over our lives. Gene editing itself is not a new concept and has been around for at least thirty years, ever since the genetic code was cracked and methods were invented to cut, copy and paste genes into genomes. There was enormous enthusiasm about the promise of these techniques to cure genetic diseases in human beings, especially ones caused by one or two malfunctioning genes that in principle could be removed or replaced by healthy versions. But as Doudna and Sternberg explain, most of these methods are haphazard and unpredictable, and they are also very expensive. They can paste genes into the wrong part of the genome, and they do this with very low efficiency. CRISPR circumvents both of these problems.

To understand why CRISPR is a real breakthrough, it’s useful to think of drugs. Almost every drug faces two principal challenges; efficacy and side effects. Many drugs fail simply because they are not very effective, and those that do succeed inevitably have side effects, sometimes serious ones. Imagine, then, a drug that is one hundred percent effective in curing a disease and has zero side effects, a so-called “magic bullet” if you will. The older technologies were like drugs with low efficacy and multiple unpredictable side effects. CRISPR is a magic bullet. More accurately, CRISPR holds the promise of being a magic bullet.

From a technical standpoint, think of a genetic defect as something that can be excised by a pair of scissors. A faulty gene essentially consists of a spelling mistake in a short stretch of DNA, the material that codes for all of life. The main machinery of CRISPR consists of two parts: a short guide sequence with the right spelling, which can latch on to the misspelled stretch, and a protein called Cas9 that this sequence steers to the right location in the genome like a benign guided missile. Cas9 can be very specific in being guided to that particular spelling; a site that differs by even one or two letters may keep it from reaching its target. In addition, CRISPR can in principle edit any of the 25,000 or so genes in the human genome. This high specificity and universality is what makes it so exciting.
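To make the idea of sequence-guided specificity concrete, here is a minimal Python sketch of my own (the genome, the guide, and the mismatch threshold are all invented for illustration; real guide design involves RNA chemistry, PAM sites and much else). It scans a genome string for sites that match a 20-letter guide and shows how tolerating even a couple of mismatched letters changes what gets targeted.

```python
# Toy illustration of guide-sequence specificity; not real CRISPR guide design.

def mismatches(a: str, b: str) -> int:
    """Count positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def find_target_sites(genome: str, guide: str, max_mismatches: int = 0):
    """Return start positions where the guide matches the genome
    with at most max_mismatches differing letters."""
    k = len(guide)
    return [i for i in range(len(genome) - k + 1)
            if mismatches(genome[i:i + k], guide) <= max_mismatches]

guide = "TACGTTAGCCGATCGATTAC"   # hypothetical 20-letter target spelling
near  = "TACGTTAGCCGATCGATGGC"   # the same site with two letters changed
genome = "ATGCG" + guide + "GGATC" + near + "AA"

print(find_target_sites(genome, guide))                    # [5]: exact site only
print(find_target_sites(genome, guide, max_mismatches=2))  # [5, 30]: near-miss too
```

A stringent matcher (zero mismatches) finds only the intended site; loosen the tolerance and off-target look-alikes start to qualify, which mirrors the specificity trade-off the authors describe.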

One of the most important lessons from the history of CRISPR is that it was the product not of applied research but of undirected, curiosity-driven laboratory research. Doudna and Sternberg strongly emphasize this point, and it's especially critical in an era in which funding for the basic sciences and institutions like the National Institutes of Health is increasingly gutted. The original system was discovered in bacteria, which use it to excise and destroy the genomes of invading viruses. The initial papers on it appeared in the 80s as a pure academic curiosity, but Doudna became aware of it only about ten years ago. Another important lesson from the CRISPR tale is that it is necessarily the work of many people. Doudna certainly is one of the key researchers, but there are many others who discovered critical aspects of the system; two of these are unfortunately engaged in a patent battle with Doudna, signifying the high stakes involved. One very important component was discovered by researchers working for the yogurt company Danisco, who wanted to make yogurt cultures resistant to viral infection. Thus the take-home message from the CRISPR story is that it was an international one, a product of pure science and of multiple academic and industrial labs.

The first part of the book lays out this story well. Doudna is also adept at describing the everyday personal excitement of research, the accidental discoveries, the triumphs and frustrations. She stumbled upon CRISPR entirely by accident, when a fellow scientist, a geologist working on the interactions between microorganisms in rocky environments, contacted her. Her major paper on the topic came about through a chance meeting in Puerto Rico with Emmanuelle Charpentier, a scientist then working in Sweden who was curious to know what the Cas9 protein did. As her story illustrates, it's often chance encounters between scientists that lead to the most exciting discoveries. That is why it is important to maintain the flow of scientific collaboration between countries, especially in this era of global political upheaval and nativism.

Once the book lays out the basic technology of CRISPR and its history, the second part discusses the societal implications, many of which could be very consequential indeed. Apart from its accuracy, what distinguishes CRISPR from older technologies is its ease of use; even high school students can be taught to edit genes with it, at least in simple organisms like yeast and fruit flies. It is also inexpensive, so cost is not a huge hurdle for amateurs. Naturally both these features lend themselves to beneficial as well as nefarious use, the latter made possible, say, by terrorists who might want to edit the genome of a bacterium to make it resistant to all antibiotics. The CIA and the DOD have started including CRISPR in their threat assessment reports. It's hard to think of another technology except nuclear weapons which might require such vigilance; the scary thing about CRISPR is that it's far more accessible to the layperson than a nuclear weapon.

Using CRISPR, it has become possible for the first time to edit, replace and substitute genes with unprecedented accuracy. Until now it could take months or even years to substitute multiple genes in a cell; now it can be done in weeks. The authors give many examples of CRISPR editing defective genes in inherited diseases like sickle cell anemia. They also describe its use in tackling more complex diseases like cancer and AIDS. Cancer is essentially a genetic disease caused by mutations in various genes, so at least in principle CRISPR can correct these mutations by replacing those genes. In the case of AIDS, there is a specific gene whose mutations confer resistance to the virus in a few lucky individuals; again in principle, CRISPR can mutate the normal version of this gene found in the rest of us. Much of this has already been accomplished in test tubes, in cells and in animals like mice and monkeys. Human clinical trials for these and other diseases are underway.

Agriculture too could be revolutionized using CRISPR. Using the technology you could breed healthier or stronger animals for meat with minimal environmental impact, you could breed crops resistant to all kinds of infections, and you could even breed crops that make beneficial growth factors or drugs for human beings. This part of the book is particularly interesting because the authors come down very clearly on the side of these GMOs. Doudna finds most of the rhetoric against GMOs unscientific. Her main argument, with which I agree, is that we have been changing the genes of crops and livestock for decades now without any ill effects. In spite of what proponents of non-GMO food would like to tell us, it's impossible to escape having GMO foods in our diet, and these foods, being cheaper and more abundant, have kept whole populations from starvation. But here's the rub: these GMO foods and animals have actually been produced using technologies that are far more haphazard and unpredictable than CRISPR. They have introduced all kinds of unwanted changes in their hosts (their safety record is thus even more impressive in light of this non-specificity). If anything, CRISPR with its unprecedented accuracy is going to make GMOs even safer and more effective than before. There is thus currently no good scientific argument against using CRISPR, at least for the kinds of changes in crops and livestock that we have been making for years.

But the most consequential application of CRISPR, and the one that could truly have a huge impact on the life of the average man and woman, is germline editing: editing the genes of an embryo, or even of the sperm and egg that make up the embryo. As the authors explain, it is only germline editing that can truly eliminate genetic diseases at their inception. Not only that, but using CRISPR to select viable, disease-free embryos before implantation won't be much different in kind from the embryonic screening already done to pick viable embryos in traditional IVF.

What would prevent parents from asking doctors and scientists to use CRISPR to edit out potentially defective genes from a future child? Much more troublingly, what would prevent parents from using the technology for genetic enhancement, for making babies that are potentially taller, calmer, stronger and more intelligent? It's a scenario that has been explored in science fiction films like "Gattaca", but CRISPR has brought the possibility to our doorstep. The social implications are troubling and generate distant echoes of the horrific history of eugenics in the early twentieth century. In such a scenario, what would prevent society from morphing into one where only the rich have this ability? And what would keep us from considering genetically unenhanced individuals as inferior, or, chillingly, undesired? And yet the simple desire of parents to get rid of genes that are known to lead to horrible diseases is a positive form of eugenics, and it's hard to argue against it. The problem with CRISPR, as with all technologies, is that the ethical dimensions of its use lie on a continuum.

Another very far-reaching application of the method is in creating what is called a 'gene drive'. A gene drive is a diabolical system in which the genes for making the CRISPR machinery are themselves inserted, using CRISPR, into a genome. This self-referential magic ensures that when the genome replicates, it also replicates the CRISPR machinery. The implication of this technically ingenious manipulation is that when you incorporate it into the germline of a single organism and allow that organism to reproduce, the change will be effected in almost every one of its children, and grandchildren, and great-grandchildren and so on. You would have essentially circumvented the natural laws of Mendelian inheritance.

Using gene drives, you could change the genetic makeup of an entire species in a few months or years. One example cited in the book is an experiment that changes fruit fly color from red to yellow: the researchers calculated that if one of these CRISPRed fruit flies escaped from the lab, within a few years one in every four fruit flies around the world would be yellow. It is a sobering scenario, and while gene drives have been proposed for some very beneficial applications like malaria eradication, their more unseemly uses are indeed very concerning: using gene drives terrorists could introduce all kinds of mutations into our food, our crops, and our very bodies by manipulating the all-important microbiome that resides within us.
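To see quantitatively why a drive spreads so much faster than an ordinary gene, here is a deliberately simplified Python sketch of my own (the 100% conversion rate, random mating, and absence of any fitness cost are idealizing assumptions, not figures from the book). An ordinary allele at 1% frequency just sits there under Mendelian inheritance, while a perfect drive, by copying itself onto every carrier's second chromosome, takes over in about ten generations.

```python
# Toy model: allele frequency per generation, Mendelian vs. perfect gene drive.
# Assumes random mating, no fitness cost, and 100% heterozygote conversion.

def mendelian(q: float) -> float:
    """Hardy-Weinberg: with no selection, an allele's frequency stays put."""
    return q

def gene_drive(q: float) -> float:
    """A perfect drive converts heterozygotes (frequency 2q(1-q)) into
    homozygotes, so the new allele frequency is q^2 + 2q(1-q)."""
    return q * q + 2 * q * (1 - q)

q_normal = q_drive = 0.01   # edited allele starts in 1% of the gene pool
print("gen   normal   drive")
for gen in range(11):
    print(f"{gen:3d}   {q_normal:.4f}  {q_drive:.4f}")
    q_normal, q_drive = mendelian(q_normal), gene_drive(q_drive)
```

Run it and the drive column climbs from 0.01 to above 0.99 within about ten generations while the normal column never moves; for a fast-breeding insect, ten generations is a matter of months, which is why the escaped-fly scenario is taken so seriously.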

It is when we start contemplating gene drives that the quip from the character in Jurassic Park suddenly starts sounding very urgent: scientists busy doing things because they could seldom stop to consider whether they should. The last part of the book has a cogent discussion of these societal implications of gene editing and of potential safeguards. In 2015 Doudna and her colleagues published a joint paper that categorically argued against germline editing before we fully understand its consequences, though not before Chinese scientists had already tested the method in a few (non-viable) human embryos. She also fully recognizes that lawmakers, politicians, ethicists and the general public will have to be as much a part of the conversation as scientists. At the same time, she again comes down on the side of proceeding with CRISPR applications when it comes to safety concerns. The opponents ask how we can proceed with the technology when its full effects are not understood. In Doudna's mind, we have always used new technologies without fully understanding their pros and cons. We use vaccines and drugs not because we know how to get rid of their side effects, but in full knowledge of those limitations. We fly in airplanes and drive on roads knowing all the time that the technology that enables us to do this is not one hundred percent safe. In each of these cases, we use the technology because we as a society have collectively decided that the benefits outweigh the risks. There is no reason per se why it should be different with CRISPR, even with its far more consequential impact on humanity.


So should we all start rejoicing at the positive implications of CRISPR, or start cowering in fear of its misuse in evil hands? Not so fast. It's going to be a while before you can buy a "Grow Your Own Organic Meat CRISPR Kit" at the local Whole Foods, let alone a "Grow a Bodybuilder Baby" kit. One aspect of gene editing that I wish the authors had discussed much more is the fact that no technological breakthrough can be used efficiently if we simply don't know where to apply it. For instance, CRISPR can repair certain genes involved in diseases, but it can do so only if we know in the first place that those genes are involved. As the authors allude to, many important diseases like cancer, diabetes and especially psychiatric disorders have causes embedded in the subtle action of dozens, maybe even hundreds, of genes. It is impossible to use CRISPR to treat or cure these diseases when we don't know where to apply it in the first place. The problem is not one of technology; it's one of fundamental knowledge. The good thing is that CRISPR itself can be used fruitfully to interrogate the genetic causes of these diseases, but it's going to take a long time before it can be used to treat them. This limitation is even truer of germline editing. Parents might very well want to edit embryonic genes to make their kids smarter or less anxious, but even now we are in the dark ages when it comes to understanding the genetic basis of intelligence or anxiety. It might well be possible to tweak simple traits like height and eye color, but we are a long way from a race of superhumans. In addition, as the Chinese study noted above demonstrated, CRISPR still has significant issues with side effects and efficacy, similar to those of drugs. The future sure seems like it's here; just not yet.

Ultimately the social and scientific problems with CRISPR are no different in principle from those raised by any number of new technologies that humanity has unleashed on itself and the planet, from fossil fuels and antibiotics to nuclear power and plastics. This well-written book brings us up to date on one technology that does hold a lot of potential, as long as we understand its true capabilities, don't fall for breathless hype and have a collective conversation about its pros and cons. It underscores the spirit of pure scientific inquiry and collaboration, and emphasizes the inevitable meld between science and society that we all must grapple with. And ultimately it leaves the ethical problems open. As the physicist Richard Feynman once said, quoting a Buddhist proverb: to every man is given the key to the gates of heaven; the same key opens the gates of hell. It is up to us which door to approach.

Jesuits, science, and a pope with a chemistry background

In 1915, an exceptionally bright Italian youngster walked the two miles from his home to the Campo dei Fiori in Rome to hunt for science books in the weekly market fair. His step was determined and his face was grim. His countenance hid the fact that he was trying to recover from a great tragedy, the sudden death of the brother who had been his closest companion. Science would provide respite from his grief.

The Campo dei Fiori was the same place where the 16th-century friar Giordano Bruno had been burnt for his heretical beliefs regarding multiple worlds and Copernican astronomy. The boy mostly found books on theology and other topics which did not interest him, but tucked away in the heap was a two-volume compendium on physics by a Jesuit priest named Andrea Caraffa. Written in 1840, it expounded on all the classical physics known until then. It was better than nothing, and the boy bought it with the meager allowance he had saved. Taking it home he devoured it, not even remarking on the fact that it was written in Latin.

Thus was launched Enrico Fermi's momentous career in physics. There is something exceedingly poignant about the fact that Italy's most famous scientific son found his life's bearings in a book written by a member of the Catholic Church, the same institution which three hundred years earlier had sent a scientific heretic to his death in exactly the same location.

Pope Francis might particularly appreciate this story. He has held favorable views on evolution and the Big Bang and seems to be an ardent environmentalist. He listens to scientists and is open to their views, even when they might not strictly conform to his religious ones. The fact that he has a technical degree in chemistry and is a Jesuit might have something to do with his take on science.

Wikipedia has a list of Jesuit scientists going back to the 17th century. The Jesuits were probably the first Catholic order that really emphasized institutions of learning and study. Jesuit scientists delved into topics across the spectrum of science, although astronomers seem especially prominent among them. There's Giovanni Zupi, who discovered the orbital phases of Mercury; Giovanni Saccheri, who wrote on non-Euclidean geometry; Benito Viñes, who was known as 'Father Hurricane'; and Pierre Teilhard de Chardin, who was involved in the discovery of Peking Man.

But perhaps the most prominent Jesuit scientist is the Belgian priest Georges Lemaitre, who in 1927 first came up with the idea behind the Big Bang theory, later crystallized in his vision of a "cosmic egg". In doing this Lemaitre went against both Einstein and the Pope: the Pope because Lemaitre insisted that his theory said nothing about a divine "first cause", and Einstein because for once his intuition failed him. When Einstein first saw Lemaitre's theory he is said to have remarked, "Your calculations are correct, but your physics is abominable". In this case, quite simply, Lemaitre was right and Einstein was wrong, no small feat for an obscure "amateur" from a country removed from the mainstream of early-twentieth-century physics.

Among the most prominent recent Jesuit scientists is the Vatican astronomer Guy Consolmagno. Many of these Jesuits studied at prominent universities and later occupied faculty positions themselves. Their contributions to and study of science are consistent with the Jesuit emphasis on scholarship. In their missionary work Jesuits often took the message of science to people on other continents: it was a Jesuit, for instance, who helped found the Indian Association for the Cultivation of Science. Jesuits also introduced Western astronomy to China during their travels there and in turn brought original Chinese research back to the West. Most prominently, Jesuits have founded many influential schools and colleges - including Georgetown University and Boston College - which emphasize teaching and research in science. Compared to other members of the Church, the Jesuits' record on science is not bad at all.

The long Jesuit association with science demonstrates that it is very much possible for science and religion to co-exist in harmony and for one to inspire the other. Consolmagno sees both science and religion as instruments allowing us to explore the universe and our role in it. Both spark debate and dialogue, and both shed light on human nature and thought. The website of the Jesuit society of the United States says:

"From the early days of the founding of the Society of Jesus, the Jesuits have been engaged in various intellectual enterprises. These have included teaching, research, and writing. The Jesuit thrust to "find God in all things" has had the result that these efforts were not solely confined to the more "ecclesiastical" disciplines (like philosophy and theology), but were extended to the more "mundane" or "secular" disciplines. In the areas of science and technology many Jesuits have made, and continue to make, contributions. These contributions range from astronomy and algebra to natural history and geography."

In their quest to "find God in all things" the Jesuits voice an opinion similar to the one Newton voiced when he said that for him, God was in the essential nature of the universe. For Newton, God was the name of the entity that sowed the deep mysteries of the cosmos for us to reap. You don't have to believe in any kind of supernatural God to appreciate how such a view might not just be consistent with scientific inquiry but might greatly encourage it, obsessively so in Newton's case. Einstein too used God as a metaphor for the mysteries of the universe that could be uncovered through playful inquiry. Both shared the Jesuits' habit of seeking their chosen quarry through scientific investigation, and both saw scientific inquiry as a great game. It's a view that Consolmagno clearly relishes:

"Doing science is like playing a game with God, playing a puzzle with God. God sets the puzzles and after I can solve one, I can hear him cheering, "Great, that was wonderful, now here's the next one." It's the way I can interact with the Creator. 

Consolmagno seems to have perfectly reconciled his scientific and religious views.

The new Pope is a Jesuit with an appreciation of science, but he is also a human being who has to conform to the opinions of more than a billion followers around the world, so it's unlikely that he will turn into an outright atheist or agnostic any time soon. We will have to wait to hear his opinions on the various scientific topics with which the Church has wrestled, as well as the new ones that confront us today: what does he think, for instance, about CRISPR and germline gene editing? And what are his views on fusing humans with machines to give rise to an unprecedented form of artificial intelligence?

But whatever the new Pope has to say, I find satisfaction in the fact that a Jesuit with a science background - an intellectual descendant of Andrea Caraffa and Pierre Teilhard de Chardin - is far from the worst that the Church can do when it comes to science.

Kurt Gödel's Open World


Today marks Kurt Gödel's one hundred and eleventh birthday. Along with Aristotle, Gödel is often considered the greatest logician in history. But I believe his influence goes much farther. In an age when both science and politics seem riddled with an incessant search for "truth" - often truth that aligns with one's preconceived social or political opinions - Gödel's work is a useful antidote and a powerful warning against the illusion of certainty.

Gödel was born in 1906 in Brünn (now Brno in the Czech Republic), at a time when the Austro-Hungarian Empire was at its artistic, philosophical and scientific peak. Many of Gödel's contemporaries, including Ludwig Wittgenstein, distinguished themselves in the world of the intellect during this period. Gödel was born to middle-class parents and imbibed the intellectual milieu of the times. It was an idyllic time, spent in cafes and lecture halls learning the latest theories in physics and mathematics and pondering the art of Klimt and the psychological theories of Freud. There had not been a major European conflict for almost a hundred years.

In his late teens Gödel came to Vienna and became part of the Vienna Circle, a group of intellectuals who met weekly to discuss the foundations of philosophy and science. The guiding principle of the circle was the philosophy of logical positivism, which held that only statements about the natural world that can be verified should be accepted as true. The group was strongly influenced by both Bertrand Russell and Ludwig Wittgenstein, neither of whom was formally a member. The philosopher Karl Popper, whose thinking on falsification is an influential part of science even now, hovered at the periphery of the group, although his affection for it seems to have been unreciprocated.

It was at the tender age of 25 that Gödel published his famous incompleteness theorems. They appeared in a 1931 paper, written shortly after his doctoral dissertation, that became one of the most famous in the history of mathematics (as a rule, even famous scientists rarely do their most groundbreaking work so early in their careers). In a mere twenty-odd pages, Gödel overturned the foundations of mathematics and created an edifice whose tendrils reached not just into mathematics but into the humanities, including psychology and philosophy.

To appreciate what Gödel did, it's useful to take a look at what leading mathematicians thought about mathematics until that time. Both Bertrand Russell and the great mathematician David Hilbert had pursued the foundations of mathematics with conviction. In a famous address given in 1900, Hilbert had laid out what he thought were the outstanding problems in mathematics. Perhaps none of these was as important as the overarching goal of proving that mathematics was both consistent and complete. Consistency means that there exists no statement in mathematics that is both true and false at the same time. Completeness means that mathematics should be capable of proving the truth or falsity (the "truth value") of every single statement that it can possibly make.

In some sense, what Hilbert was seeking was a complete "axiomatization" of mathematics. In a perfectly axiomatized mathematical system, you would start with a few statements taken as true, and beginning with these statements, you would essentially have an algorithm that would allow you to derive every possible statement in the system, along with its truth value. The axiomatization of mathematics was not a new concept; it had been pioneered by Euclid in his famous geometry text, "The Elements". But Hilbert wanted to do this for all of mathematics. Bertrand Russell had similar dreams.

In one fell swoop the 25-year-old Gödel shattered this fond hope. His first incompleteness theorem, the most well-known, proved that any consistent mathematical system capable of proving the basic theorems of arithmetic will always contain statements whose truth value cannot be proved using the axioms of the system. You could always 'enlarge' the system and prove the truth value in the new system, but then the new, enlarged system would itself contain statements that succumbed to Gödel's theorem. What Gödel thus showed is that mathematics will always be undecidable in this sense. It was a remarkable result, one of the deepest in the annals of pure thought, striking at the heart of the beautiful foundation built by mathematicians from Euclid to Riemann over the previous two thousand years.

Gödel's theorems had very far-reaching implications; in mathematics, in philosophy and in human thought in general. One of those momentous implications was worked out by Alan Turing when he proved a similar theorem for computers, addressing a problem called the "halting problem". Similar to Hilbert's hope for the axiomatization of mathematics, the hope for computation was that, given an input and a computer program, you could always find out whether the program would halt. Turing proved that you could not decide this for an arbitrary program and an arbitrary input (although you can certainly do this for specific programs). In the process Turing also clarified our definitions of "computer" and "algorithm" and came up with a universal "Turing machine" which embodies a mathematical model of computation. Gödel's theorems were thus what inspired Turing's pioneering work on the foundations of computer science.
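Turing's argument can in fact be sketched in a few lines of code. The Python fragment below is my own rendering of the standard diagonalization proof, not anything specific to Turing's paper; the function halts is a hypothetical oracle that, by assumption, always answers correctly.

```python
# Sketch of the standard halting-problem diagonalization (illustrative only).

def halts(program, data) -> bool:
    """Hypothetical oracle: True iff program(data) eventually halts.
    Turing showed that no always-correct implementation can exist."""
    raise NotImplementedError

def contrary(program):
    """Do the opposite of whatever the oracle predicts for a program fed itself."""
    if halts(program, program):
        while True:   # oracle says "halts", so loop forever
            pass
    return            # oracle says "loops forever", so halt at once

# Now consider contrary(contrary):
#  - If halts(contrary, contrary) is True, contrary(contrary) loops forever.
#  - If it is False, contrary(contrary) halts immediately.
# Either answer makes the oracle wrong about this one case, so no general
# halting oracle can exist.
```

The trick of feeding a program to itself is a direct descendant of the self-referential sentence at the heart of Gödel's proof, which is one way of seeing why the two results are so closely related.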

Like many mathematicians who make seminal contributions in their twenties, Gödel produced nothing of comparable value later in his life. He migrated to the US in the 1930s and settled down at the Institute for Advanced Study in Princeton. There he made a new friend: Albert Einstein. From then until Einstein's death in 1955, the sight of the two walking between their homes and the institute, often mumbling in German, became a town fixture. Einstein afforded the privilege of being his walking companion to no one else, and seems to have considered only Gödel his intellectual equal: in fact he held Gödel in such esteem that he was known to have said in his later years that his own work did not mean much to him anymore, and that the main reason he went to work was to have the privilege of walking home with Gödel. At least once Gödel startled his friend with a scientific insight of his own: he showed, using Einstein's own field equations of gravitation, that time travel could in principle be possible.

Sadly, like a few other mathematical geniuses, Gödel was also riddled with mental health problems and idiosyncrasies that got worse as he grew older. He famously tried to find holes in the U.S. Constitution while taking his citizenship exam, and Einstein, who accompanied him to the exam, had to talk him out of trying to demonstrate to the judge how the U.S. could be turned into a dictatorship (nowadays some people have similar fears, though for different reasons). After Einstein died, Gödel lost his one friend at the institute. Since early childhood he had been a hypochondriac - often he could be seen dressed in a warm sweater and scarf even in the balmy Princeton summer - and now his paranoia about his health grew greatly. He started suspecting that his food was poisoned, and refused to accept anything not cooked by his protective wife Adele; in 1930s Vienna she had once physically protected him from Nazi thugs, and now she was protecting him from imagined germs. When Adele was hospitalized with an illness, Kurt stopped eating completely. All attempts to soothe his fears failed, and on January 14, 1978 he died in Princeton Hospital, weighing only 65 pounds, having essentially succumbed to starvation. Somehow this sublimely rational, austere man had fallen prey to a messy, frightful, irrational paranoia; how these two contradictory aspects of his faculties conspired to doom him is a conundrum that will remain undecidable.

He left us a powerful legacy. What Gödel's theorems demonstrated was that not only the world of fickle human beings but also the world of supposedly crystal-clear mathematics is, in a very deep sense, unknowable and inexhaustible. Along with Heisenberg's uncertainty principle, Gödel's theorems showed us that all attempts at grasping ultimate truths are bound to fail. More than almost anyone else, Gödel contributed to the fall of man from his privileged, all-knowing position.

We see his undecidability in politics and human affairs, but it is true even in the world of numbers and watertight theorems. Sadly we seem to have accepted uncertainty in mathematics while we keep on denying it in our own lives. From political demagogues to ordinary people, the world keeps getting ensnared in passionate attempts to capture and declare absolute truth. The fact that even mathematics cannot achieve this goal should give us pause. It should inculcate a sense of wonder and humility in the face of our own fallibility, and should lead us to revel in the basic undecidability of an open world, a world without end, Kurt Gödel's world. 

Science books for 14-year-olds

A few days back a relative of mine asked me for science book recommendations for her very bright 14-year-old nephew, a voracious reader. She was looking both for books that would be easy for him to read and for ones pitched at a slightly higher level that could still give him a good sense of the wonder and challenges of science.

The easiest way to recommend these volumes was to think about books that strongly inspired me when I was growing up, so here's the list I copied into my email to her. I think these books make for excellent reading not just for 14-year-olds but for 40- and 80-year-olds for that matter. Feel free to add suggestions in the comments section.

1. One, Two, Three…Infinity by George Gamow: Physicist George Gamow's delightful book talks about many fascinating facts in maths, astronomy and biology (Gamow's comparison of "different infinities" blew my socks off when I first read it).

2. Microbe Hunters by Paul de Kruif: This book tells the stories of the determined and brilliant doctors and scientists who discovered disease-causing microbes and treatments for them.

3. Men of Mathematics by E. T. Bell: This classic does for mathematicians what Paul de Kruif's book does for doctors. Although it romanticizes and in some cases embellishes its stories, it has inspired many famous scientists who read it and later won Nobel Prizes.

4. Almost any book by Martin Gardner is great for mathematical puzzles (e.g. "Perplexing Puzzles and Tantalizing Teasers").

5. Raymond Smullyan’s “What is the Name of this Book? The Riddle of Dracula and other Logical Puzzles” is another absolutely rib-tickling book on puzzles and brain teasers. What is remarkable about Smullyan's volumes is that many of his apparently silly puzzles are not only quite hard, but they hint at some of the deepest mysteries of math and logic, such as Gödel's Theorems.

6. "My Family and Other Animals" by Gerald Durrell: This delightful book talks about the author’s experiences with animals of all kinds while vacationing on a small Greek island with his family.

7. I would also recommend science fiction by H. G. Wells if he likes fiction, especially "The Time Machine" and "The War of the Worlds".

8. "Surely you’re joking Mr. Feynman" by Richard Feynman: Feynman was one of the most brilliant physicists of the 20th century, and this very funny autobiography documents his adventures in science and life. Even if he doesn’t understand all the chapters it will give him an appreciation for physics and how physics can be fun.

9. "Uncle Tungsten: Memories of a Chemical Boyhood" by Oliver Sacks. Oliver Sacks was a famous neurologist but this book talks about his exciting adventures with chemistry while growing up.

10. "A Brief History of Time" by Stephen Hawking. Some of the chapters may be advanced for him right now but it will give him a flavor of the most fascinating concepts of space and time, including black holes and the Big Bang.

11. "King Solomon's Ring" by Konrad Lorenz. This utterly entrancing and hilarious account by Nobel laureate Konrad Lorenz talks about his pioneering imprinting and other experiments with fascinating animals like sticklebacks and shrews. The story of Lorenz quacking around on his knees while baby ducks follow him is now a classic in the annals of animal behavior.

Sixty-four years later: How Watson and Crick did it

"A Structure for Deoxyribose Nucleic Acid",
Nature, April 25, 1953 (Image: Oregon State University)
Today marks the sixty-fourth anniversary of the publication of the landmark paper on the structure of DNA by Watson and Crick, which appeared in the April 25, 1953 issue of the journal Nature. Even sixty-four years later the discovery is endlessly intriguing, not just because it's so important but because in 1953, both Watson and Crick were rather unlikely characters to have made it. In 2012 I wrote a post for the Nobel Week Dialogue event in Stockholm with a few thoughts on what exactly it was that allowed the duo to enshrine themselves in the history books; it was not sheer brilliance, it was not exhaustive knowledge of a discipline, but an open mind and a relentless drive to put disparate pieces of the puzzle together. I am reposting that piece here.

Somehow it all boils down to 1953, the year of the double helix. And it’s still worth contemplating how it all happened.

Science is often perceived either as a series of dazzling insights or as a marathon. Much of the public recognition of science acknowledges this division; Nobel Prizes, for instance, are often awarded for a long, plodding project sustained by sheer grit (solving a protein crystal structure), for a novel idea that seems an inspired work of sheer genius (formulating the Dirac equation), or for an accumulated body of work (organic synthesis).

But in one sense, both these viewpoints of science are flawed, since both tend to obscure the often haphazard, unpredictable, chancy and very human process of research. In reality, the marathon runner, the inspired genius and every scientist in between treads a tortuous path to the eureka moment, a path marked by blind alleys, plain old luck, unexpected obstacles and, most importantly, the human obstacles of petty rivalry, jealousy, confusion and misunderstanding. A scientific story that fully captures these variables is, in my opinion, emblematic of the true nature of research and discovery. That is why the discovery of the double helix by Watson and Crick is one of my favorite stories in all of science.

The reason that discovery is so appealing is that it really does not fit into the traditional threads of scientific progress highlighted above. During those few heady days in Cambridge at the dawn of those gloomy post-war years, Watson and Crick worked hard. But their work was very different from, say, the sustained mountain-climbing effort that exemplified Max Perutz's lifelong odyssey to solve the structure of hemoglobin. It was also different from the great flashes of intuition that characterized an Einstein or a Bohr, although intuition was applied to the problem - and discarded - liberally. Neither of the two protagonists was an expert in the one discipline that they themselves acknowledged mattered most for the discovery: chemistry. And although they had a rough idea of how to go about it, neither really knew what it would take to solve the problem. They were far from being experts in the field.

And therein lies the key to their success. Because they lacked expertise and didn't really know what would solve the problem, they tried every approach at their disposal. Their path to DNA was haphazard, often lacking direction, always uncertain. Crick, a man who in his thirties still considered himself an overgrown graduate student, was a crystallographer. Watson, a precocious and irreverent youngster who had entered the University of Chicago at fifteen, was in equal parts geneticist and bird-watcher. Unlike many of their colleagues, both were firmly convinced that DNA and not protein was the genetic material. But neither had the background for understanding the chemistry essential to DNA structure: the hydrogen bonding that holds the bases together, the acid-base chemistry that ionizes the phosphates and dictates their geometric arrangement, the principles of tautomerism that allow each base to exist in one of two possible forms – only one of which permits the hydrogen bonding that holds the structure together. But they were willing students, and they groped, asked, stumbled and finally, triumphantly, navigated their way out of this conceptual jungle. They did learn all the chemistry that mattered, and thanks to Crick they already understood crystallography.

And most importantly, they built models. Molecular models are now a mainstay of biochemical research. Modelers like myself can manipulate seductively attractive three-dimensional pictures of proteins and small molecules on computer screens. But modeling was in its infancy in the fifties. Ironically, the tradition had been pioneered by the duo's perceived rival, the chemist Linus Pauling. Pauling, widely considered the greatest chemist of the twentieth century, had successfully applied his model-building approach to the structure of proteins. Lying in bed with a bad cold during a visiting sojourn at Oxford University, he had folded paper and marked atoms with a pencil to conform to the geometric parameters of amino acids derived from simple crystal structures. The end product of this modeling, combined with detailed crystallographic measurements, was one of twentieth-century biochemistry's greatest triumphs: the discovery of the alpha-helix and the beta-sheet, foundational structural elements in virtually every protein in nature. How exactly the same model building later led Pauling to an embarrassing gaffe in his own structure of DNA – one that violated basic chemical principles – is the stuff of folklore, narrated with nonchalant satisfaction by Watson in his classic book "The Double Helix".

Model building is more art than science. By necessity it consists of patching together imperfect data from multiple avenues and techniques using part rational thinking and part inspired guesswork, and then building a picture of reality – only a picture – that's hopefully consistent with most of the data and not in flagrant violation of important pieces. Even today modeling is often regarded skeptically by the data-gatherers, presumably because it lacks the ring of truth that hard, numerical data has. But data by itself is never enough, especially because the methods used to acquire it are themselves incomplete and subject to error. It is precisely by combining information from various sources that one hopes to cancel these errors or render them unimportant, so that the signal from one source complements its absence in another and vice versa. Building a satisfactory model thus often entails understanding data from multiple fields, each part of which is imperfect.
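
To make the error-cancellation idea concrete, here is a minimal, purely illustrative sketch – the numbers are invented, and this is obviously nothing Watson and Crick ever did on a computer – of how two imperfect measurements of the same quantity can be pooled, by inverse-variance weighting, so that the combined estimate is more precise than either one alone:

```python
# Purely illustrative: pooling imperfect measurements so their errors
# partially cancel, using inverse-variance weighting. The numbers are
# invented for the example and have nothing to do with the DNA work.

def combine(estimates):
    """estimates: list of (value, variance) pairs from different methods."""
    weights = [1.0 / var for _, var in estimates]
    pooled = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
    pooled_var = 1.0 / sum(weights)  # smaller than any individual variance
    return pooled, pooled_var

# Two hypothetical measurements of the same quantity, one noisier than the other:
print(combine([(33.0, 4.0), (35.0, 1.0)]))  # -> (34.6, 0.8)
```

The pooled value leans toward the more precise measurement, and its variance is smaller than either input's – the statistical kernel of why a model built on several imperfect sources can outperform any single source.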

Watson and Crick realized this, but many of their contemporaries tackling the same problem did not. As Watson recounts it in a TED talk, Rosalind Franklin and Maurice Wilkins were excellent crystallographers but were hesitant to build models using imperfect data. Franklin especially came tantalizingly close to cracking DNA. On the other hand, Erwin Chargaff and Jerry Donohue, both outstanding chemists, were less appreciative of crystallography and again not prone to building models. Watson and Crick were willing both to remedy their ignorance of chemistry and to bridge the river of data between the two disciplines of chemistry and crystallography. Through Donohue they learnt about the keto-enol tautomerism of the bases that gave rise to the preferred chemical form. From Chargaff came the crucial observation that the amount of adenine equals that of thymine and the amount of guanine equals that of cytosine – a one-to-one ratio of purines to pyrimidines; this information would be decisive in nailing down the complementary nature of the two strands of the helix. And through Rosalind Franklin they got access – in ways that even today spark controversy and resentment – to the best crystallographic data on DNA that then existed anywhere.
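
As a purely illustrative aside (my own sketch, not anything from 1953): Chargaff's observation and the complementarity it implies are easy to demonstrate. Take any strand, generate its complementary partner, and the base counts of the resulting duplex obey A = T and G = C exactly:

```python
# Illustrative sketch of Chargaff's rules and strand complementarity.
# In double-stranded DNA, A pairs with T and G pairs with C, so across
# the duplex the amounts of A and T (and of G and C) must be equal.
from collections import Counter

def complement(strand: str) -> str:
    """Return the complementary strand, read in the opposite direction."""
    return strand.translate(str.maketrans("ATGC", "TACG"))[::-1]

strand = "ATGCGTTAGC"                 # an arbitrary made-up sequence
duplex = strand + complement(strand)  # both strands of the double helix
counts = Counter(duplex)
print(counts["A"] == counts["T"], counts["G"] == counts["C"])  # True True
```

One strand fully determines the other – which is exactly why the structure immediately suggested a mechanism for copying genetic information.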

What remained was to combine these pieces from chemistry and crystallography and put together the grand puzzle. For this, model building was essential; since Watson and Crick were willing to do whatever it took to solve the structure, to their list of things-to-do they added model building. Unlike Franklin and Wilkins, they had no qualms about building models even at the risk of getting the answer only partially right. The duo proceeded from a handful of key facts, each of which other people possessed, but none of which had been seen by the others as part of an integrated picture. Franklin especially had gleaned very important general features of the helix from her meticulous diffraction experiments and yet failed to build models, remaining skeptical about the very existence of helices until the end. It was the classic case of the blind men and the elephant.

The facts which led Watson and Crick down the road to the promised land included a scattered bundle of truths about DNA from crystallography and chemistry: the distance between two stacked bases (3.4 Å); the distance per turn of the helix (34 Å), which in turn indicated ten bases per turn; the diameter of the helix (20 Å); Chargaff's rules indicating equal ratios of the two kinds of bases; Alexander Todd's work on the points of linkage between the base, sugar and phosphate; Donohue's important advice regarding the preferred keto form of the bases; and Franklin's evidence that the strands in DNA must run in opposite directions. There was another important tool they had, thanks to Crick's earlier mathematical work on diffraction. Helical-diffraction theory told them the kind of diffraction pattern they would expect if the structure were in fact helical. This reverse process – predicting the expected diffraction parameters from a model – is today a mainstay of the iterative process of structure refinement used by x-ray crystallographers to solve structures as complex as the ribosome.
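
The arithmetic connecting the first two numbers is worth spelling out (my own illustration, not part of the original paper): dividing the helical repeat by the rise per base gives the number of bases per turn,

$$
\text{bases per turn} \;=\; \frac{\text{helix pitch}}{\text{rise per base}} \;=\; \frac{34\ \text{Å}}{3.4\ \text{Å}} \;=\; 10.
$$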

Using pieces machined in the Cambridge metal shop, Watson gradually accumulated the parts for the components of DNA and put them together even as Crick offered helpful advice. Once the pieces were in place, the duo were in the position of an airline pilot who has every signpost, flag and light on the runway paving the way for a perfect landing. The end product was unambiguous, incisive, elegant and, most importantly, it held the key to understanding the mechanism of heredity through complementary base pairing. Franklin and Wilkins came down from London; the model was so convincing that even Franklin graciously agreed that it had to be correct. Everyone who saw the model would undoubtedly have echoed Watson and Crick's sentiment that "a structure this beautiful just had to exist".

In some sense the discovery of the DNA structure was easy; as Max Perutz once said, the technical challenges it presented were greatly mitigated by the symmetry of the structure, compared to the controlled but tortuous asymmetry inherent in proteins. Yet it was Watson and Crick, and not others, who made the discovery, and their achievement provides insight into the elements of a unique scientific style. Intelligence they did not lack, but intelligence alone would not have helped, and in any case there was no dearth of it; Perutz, Franklin, Chargaff and Pauling were all brilliant scientists who in principle could have cracked open the secret of life that its discoverers proudly proclaimed that day in the Eagle pub.

But what these people lacked, and what Watson and Crick possessed in spades, was a drive to explore, interrogate, admit ignorance, search all possible sources and finally tie the threads together. This set of traits also made them outsiders in the field – non-chemists trying to understand a chemical puzzle; in one sense they appeared out of nowhere. But because they were outsiders they were relatively unprejudiced. Their personalities cast them as misfits and upstarts trying to disrupt the established order. Then there was the famous irreverence between them; Crick once said that politeness kills science. All these personal qualities certainly helped, but none was as important as a sprightly open-mindedness tempered by unsparing rigor – the ability to ask for and use evidence from all quarters while constraining it at all times within reasonable bounds. Model building followed from this approach almost as a natural consequence. And the open-mindedness was paired with a fearlessness undaunted by the imperfect nature of the data and the sometimes insurmountable challenges that seemed to loom.

So that's how they did it: by questioning, probing, conjecturing and model building even in the presence of incomplete data, and by fearlessly using every tool and idea at their disposal. As we confront problems of increasing biological complexity in the twenty-first century, this is a lesson worth keeping in mind. Sometimes, when you don't know what approach will solve a problem, you try all approaches, all the while constraining them within known scientific principles. Richard Feynman once described scientific progress as imagination in a straitjacket, and he could have been talking about the double helix.