
Book review: "Unraveling the Double Helix: The Lost Heroes of DNA", by Gareth Williams.

Newton rightly observed that science progresses by standing on the shoulders of giants. But his often-quoted statement applies even more broadly than he thought. A case in point: when it comes to the discovery of DNA, how many have heard of Friedrich Miescher, Fred Griffith or Lionel Alloway? Miescher was the first person to isolate DNA, from the pus-soaked bandages of patients. Fred Griffith performed the crucial experiment showing that a ‘transforming principle’ was somehow passing from a virulent dead bacterium to a non-virulent live one, magically rendering the non-virulent strain virulent. Lionel Alloway came up with the first expedient method to isolate DNA, by adding alcohol to a concentrated solution.

In this thoroughly engaging book, Gareth Williams brings these and other lost heroes of DNA to life. The book spans the first 85 years of DNA research and ends with Watson and Crick's discovery of the structure. There are figures both well-known and obscure here. Along with those mentioned above, there are excellent capsule histories of Gregor Mendel, Thomas Hunt Morgan, Oswald Avery, Rosalind Franklin, Maurice Wilkins and, of course, James Watson and Francis Crick. The book traces a journey through a variety of disciplines, most notably biochemistry and genetics, that were key to deciphering the structure of DNA and its role in transmitting hereditary characteristics.
Williams’s account begins with Miescher’s isolation of DNA from pus bandages in 1869. At that point in time, proteins were well-recognized, and all proteins contained a handful of elements like carbon, nitrogen, oxygen and sulfur. The one element they did not contain was phosphorus. It was Miescher’s discovery of phosphorus in his extracts that led him and others to propose the existence of a substance they called ‘nuclein’ that seemed ubiquitous in living organisms. The two other towering figures in the biochemical history of DNA are the German chemist Albrecht Kossel and the Russian-born American chemist Phoebus Levene. They figured out the exact composition of DNA and identified its three key components: the sugar, the phosphate and most importantly, the four bases (adenine, cytosine, thymine and guanine). Kossel was such a revered figure that his students led a torchlight procession through the streets from the train station to his lab when he came back to Heidelberg with the Nobel Prize.
Levene’s case is especially interesting, since his interpretation of the four bases set DNA research back by years, perhaps decades. Because there were only four bases, he became convinced that DNA was too simple ever to be the hereditary material. His ‘tetranucleotide hypothesis’, which held that DNA could only be a monotonous repeat of the four bases, doomed its candidacy as a viable genetic material for a long time. Most scientists went on believing that only proteins could be complex enough to be the stuff of heredity.
Meanwhile, as the biochemists were unraveling the nature of DNA in their own way, the geneticists were laying their own groundwork. Williams has a brisk but vivid description of the lone monk Gregor Mendel toiling away at thousands of meticulous experiments on pea plants in his monastery in the Moravian town of Brünn. As we now know, Mendel was fortunate in picking the pea plant, since it breeds true. His faith in his own work was shaken toward the end of his life when he tried to duplicate his experiments with the hawkweed plant, whose genetics are more complex. Tragically, Mendel’s notebooks and letters were burnt after his death, and his work was forgotten for thirty years before it was resurrected independently by three scientists, all of whom tried to claim credit for the discovery. The other major figure in genetics during the first half of the 20th century was Thomas Hunt Morgan, whose famous ‘fly room’ at Columbia University carried out experiments showing the presence of hundreds of genes at precise locations on chromosomes. In his lab there was a large pillar on which Morgan and his students drew the locations of new genes.
From the work of Mendel, Morgan, Levene and Kossel we move on to New York City, where Oswald Avery, Colin MacLeod and Maclyn McCarty at the Rockefeller Institute and the sharp-tongued, erudite Erwin Chargaff at Columbia made two seminal discoveries about DNA. Avery and his colleagues showed that DNA is in fact the ‘transforming principle’ that Fred Griffith had identified. Chargaff showed that the proportions of A and T, and of G and C, in DNA were essentially equal. Williams says in the epilogue that of all the people who were potentially robbed of Nobel Prizes for DNA, the two most consequential were Avery and Griffith.
By this time, along with biochemistry and genetics, x-ray crystallography had started to become very prominent in the study of molecules: by shining x-rays on a crystal and interpreting the resulting diffraction pattern, scientists could potentially figure out the structure of a molecule at the atomic level. Williams provides an excellent history of this development, starting with the Nobel Prize-winning father-son duo of William Henry and William Lawrence Bragg (who remains the youngest science Nobel laureate, at age 25) and continuing with other pioneering figures like J. D. Bernal, William Astbury, Dorothy Hodgkin and Linus Pauling.
Science is done by scientists, but it's made possible by science administrators. Two major characters star in the DNA drama as science administrators par excellence. Both had their flaws, but without the institutions they set up to fund and encourage biological work, it is doubtful whether the men and women who discovered DNA and its structure would have made their discoveries when and where they did. William Lawrence Bragg repurposed the famed Cavendish Laboratory at Cambridge University – where Ernest Rutherford had reigned supreme – for crystallographic work on biological molecules. A parallel effort was started at King's College in London by John Randall, a physicist who had played a critical role in Britain's efforts to develop radar during World War 2. While Bragg recruited Max Perutz, Francis Crick and James Watson for his group, Randall recruited Maurice Wilkins, Ray Gosling and Rosalind Franklin.
One of the strengths of Williams's book is that it resurrects the role of Maurice Wilkins, who is often regarded as the least important of the Nobel Prize-winning trio of Watson, Crick and Wilkins. In fact, it was Wilkins and Gosling who took the first x-ray photographs of DNA that seemed to indicate a helical structure. Wilkins was also convinced that DNA and not protein was the genetic material when that view was still unfashionable; he passed on his infectious enthusiasm to Crick and Watson. But even before his work, the Norwegian crystallographer Sven Furberg had been the first to propose a helix – although a single one – as the structure of DNA, based on its density and other important features. A key feature of Furberg's model was that the sugar and the base were perpendicular to each other, which is in fact the case with DNA.
The last third of the book deals with the race to discover the precise structure of DNA. This story has been told many times, but Williams tells it exceptionally well and especially drives home how Watson and Crick were able to stand on the shoulders of many others. Rosalind Franklin comes across as a fascinating, complex, brilliant and flawed character. There was no doubt that she was an exceptional scientist who was struggling to make herself heard in a male-dominated establishment, but it’s also true that her prickly and defensive personality made her hard to work with. Unlike Watson, she was especially reluctant to build models, perhaps because she had identified a fatal flaw in one of the pair’s earlier models. It’s not clear how close Franklin came to identifying DNA as a helix; experimentally she came close, but psychologically she seemed reluctant and bounced back and forth between helical and non-helical structures.
So what did Watson and Crick have that the others did not? As I described in a post written a few years ago on the 70th anniversary of the DNA structure, many others were in possession of key parts of the evidence, but only Watson and Crick put it all together and compulsively built models. In this sense it was very much like the blind men and the elephant; only Watson and Crick walked around the entire animal and saw how it was put together. Watson's key achievement was recognizing the precise base pairing: adenine with thymine and guanine with cytosine. Even here he was helped by the chemist Jerry Donohue, who corrected a key assumption about the chemical form of the bases (organic chemists will recognize it as keto-enol tautomerism). Also instrumental were Alec Stokes and John Griffith. Stokes was a first-rate mathematician who, using the theory of Bessel functions, figured out the diffraction pattern that a helix would produce; Crick, a physicist well versed in the mathematics of diffraction, instantly understood Stokes's work. Griffith was a first-rate quantum chemist who figured out, independently of Donohue, that A would pair with T and G with C. Accomplished before the advent of computers and what are called ab initio quantum chemical techniques, this was a remarkable achievement.
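For the mathematically curious, the result Stokes worked out is the one now known from the Cochran-Crick-Vand treatment of helix diffraction (stated here from standard crystallography accounts, not from Williams's book): for a continuous helix of radius r and pitch P, the diffracted amplitude is confined to layer lines at heights Z = n/P, and on the n-th layer line it is proportional to a Bessel function,

\[
F_n(R) \propto J_n(2\pi R r),
\]

whose first maximum moves outward as n increases - which is what traces out the X-shaped pattern so striking in Franklin and Gosling's famous Photograph 51.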
With Chargaff’s knowledge of the constancy of base ratios, Donohue’s precise base structures, Franklin and Gosling’s x-ray measurements and Stokes’s mathematics of helix diffraction patterns, Watson and Crick had all the information they needed to try out different models and cross the finish line. No one else had this entire map of information at their disposal. The rest, as they say, is history.
I greatly enjoyed reading Williams's book. It is, perhaps, the best book on the DNA story that I have read since Horace Freeland Judson's "The Eighth Day of Creation". Even characters I was familiar with come newly to life as flawed, brilliant human beings with colorful lives. The account shows that many major and minor figures made important discoveries about DNA. Some came close to figuring out the structure but never made the leap, either because they lacked data or because of personal prejudices. Taken as a whole, the book showcases well the intrinsically human story and the group effort, playing out over 85 years, at the heart of one of the greatest discoveries humanity has made. I highly recommend it.

Brian Greene and John Preskill on Steven Weinberg


There's a very nice tribute to Steven Weinberg by Brian Greene and John Preskill that I came across recently and that is worth watching. Weinberg was of course one of the greatest theoretical physicists of the latter half of the 20th century, winning the Nobel Prize for one of the great unifications of modern physics, that of the electromagnetic and weak forces. He was also a prolific author of rigorous, magisterial textbooks on quantum field theory, gravitation and other aspects of modern physics. And on top of it all, he was a true scholar and a gifted communicator of complex ideas to the general public through popular books and essays; not just ideas in physics but ones in pretty much any field that caught his fancy. I had the great pleasure and good fortune to interact with him twice.

The conversation between Greene and Preskill is illuminating because it sheds light on many underappreciated qualities of Weinberg that enabled him to become a great physicist and writer, qualities that are worth emulating. Greene starts out by recalling his first interaction with Weinberg, when as a graduate student he gave a talk at the physics department of the University of Texas at Austin, where Weinberg taught. He recalls how he packed the talk with equations and formal derivations, only to have Weinberg explain the same concepts more clearly later. As physicists appreciate, while mathematics remains the key that unlocks the secrets of the universe, being able to grasp the physical picture matters just as much. Weinberg was a master at doing both.

Preskill was a graduate student of Weinberg's at Harvard, and he shares many memories of him. One of the more endearing and instructive ones is from when he introduced Weinberg to his parents at his house. They were making ice cream for dinner, and Weinberg wondered aloud why we add salt while making ice cream. By that time Weinberg had already won the Nobel Prize, so Preskill's father wondered if he genuinely didn't understand that you add the salt to the surrounding ice to lower its melting point, so that the mixture gets cold enough to freeze the cream and stay cold longer. When Preskill's father explained this, Weinberg went, "Of course, that makes sense!". Now both Preskill and Greene think that Weinberg might have been playing it up a bit to impress Preskill's family, but I wouldn't be surprised if he genuinely did not know; top-tier scientists who work in the most rarefied heights of their fields are sometimes not as connected to basic facts as graduate students might be.

More importantly, to my mind the anecdote illustrates an important quality that Weinberg had and that any true scientist should have: never hesitating to ask even simple questions. If, as a Nobel Prize-winning scientist, you think you are beyond asking simple questions, especially when you don't know the answers, you aren't being a very good scientist. The anecdote points to a bigger quality of Weinberg's which Preskill and Greene discuss, which was his lifelong curiosity about things he didn't know. He never hesitated to pump people for information about aspects of physics he wasn't familiar with, not to mention other disciplines. Freeman Dyson, whom I knew well, had the same quality: both Weinberg and Dyson were excellent listeners. In fact, asking the right question, whether about salt and ice cream or about electroweak unification, seems to have been a signature Weinberg quality that students should take to heart.

Weinberg became famous for a seminal 1967 paper that unified the electromagnetic and weak forces (and used ideas developed by Peter Higgs to postulate what we now call the Higgs boson). The title of the paper was "A Model of Leptons", but interestingly, Weinberg wasn't much of a model builder. As Preskill says, he was much more interested in developing general, overarching theories than in building models, partly because models have limited applicability to a specific domain while theories are much more general. This is a good point, but of course, in fields like my own field of computational chemistry, the problem isn't that there are no general theoretical frameworks – there are, most notably quantum mechanics and statistical mechanics – but that applying them to practical problems is too complicated unless we build specific models. Nevertheless, Weinberg's attitude of shunning specific models for generality is emblematic of the greatest scientists, including Newton, Pauling, Darwin and Einstein.

Weinberg was also a rather solitary researcher; as Preskill points out, of his 50 most highly cited papers, 42 were written alone. He admitted in a talk that he wasn't the best collaborator. This did not make him the best graduate advisor either: while he was supportive, his main contribution was more along the lines of inspiration than day-to-day guidance and conversation. He would often point students to papers and ask them to study them on their own, which works fine if you are Brian Greene or John Preskill but perhaps not so much if you are someone else. In this sense Weinberg seems to have been a bit like Richard Feynman, who was a great physicist but also wasn't the best graduate advisor.

Finally, both Preskill and Greene touch upon Weinberg's gifts as a science writer and communicator. More than many other scientists, he never talked down to his readers, because he understood that many of them were as smart as he was even if they weren't physicists. Read any one of his books and you see him explaining even simple ideas, but never in a way that assumes his audience are dunces. This is a lesson that every scientist and science writer should take to heart.

Greene knew Weinberg especially well because he often invited him to the World Science Festival, which he and his wife have organized in New York over the years. The tribute includes snippets of Weinberg talking about the current and future state of particle physics. In the last part, an interviewer asks him about what is arguably the most famous sentence from his popular writings. Toward the end of his first book, "The First Three Minutes", he says, "The more the universe seems comprehensible, the more it also seems pointless." Weinberg's eloquent response when asked what this means sums up his life's philosophy and tells us why he was so unique, as a scientist and as a human being:

"Oh, I think everything's pointless, in the sense that there's no point out there to be discovered by the methods of science. That's not to say that we don't create points for our lives. For many people it's their loved ones; living a life of helping people you love, that's all the point that's needed for many people. That's probably the main point for me. And for some of us there's a point in scientific discovery. But these points are all invented by humans and there's nothing out there that supports them. And it's better that we not look for it. In a way, we are freer, in a way it's more noble and admirable to give points to our lives ourselves rather than to accept them from some external force."

A long time ago, in a galaxy far, far away





For a brief period earlier this week, social media and the world at large seemed to stop squabbling about politics and culture and united in a moment of wonder as the James Webb Space Telescope (JWST) released its first stunning images of the cosmos. These "extreme deep field" images represent the farthest and oldest reaches of the universe we have ever been able to see, surpassing even the amazing images captured by the Hubble Space Telescope that we have become so familiar with. We will soon see these photographs decorating the walls of classrooms and hospitals everywhere.

The scale of the JWST images is breathtaking. Each dot represents a galaxy or nebula from far, far away. Each galaxy or nebula is home to billions of stars in various stages of life and death. The curved light in the image comes from a classic prediction of Einstein's general theory of relativity called gravitational lensing: the bending of light by gravity, which makes spacetime curvature act like a lens.
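For readers who want the number behind the spectacle, the deflection general relativity predicts for a light ray passing a mass M at impact parameter b is the standard textbook result (my addition, not something from the JWST release):

\[
\alpha = \frac{4 G M}{c^{2} b},
\]

about 1.75 arcseconds for light grazing the Sun; a massive foreground galaxy cluster bends the light of galaxies far behind it by enough to stretch them into the arcs visible in the deep field image.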

Some of the stars in these distant galaxies and nebulae are being nurtured in stellar nurseries; others are in their end stages and might be turning into neutron stars, supernovae or black holes. And because light takes time to reach us and galaxies have been moving away from us with the expansion of the universe, the farther out we see, the further back in time we are looking. This makes the image a gigantic hodgepodge of older and newer photographs, ranging from objects dating to within a few hundred million years of the Big Bang to comparatively nearby (on a cosmological timescale) objects like Stephan's Quintet and the Carina Nebula.

It is a significant and poignant fact that we are seeing these objects not as they are but as they were. The Carina Nebula is 8,500 light years away, so we are seeing it as it looked 8,500 years ago, during the Neolithic Age, when humanity had just taken to farming and agriculture. On the oldest timescale, objects that are billions of light years away look the way they did during the universe's childhood. The fact that we are seeing old photographs of stars, galaxies and nebulae gives the image a poignant quality. For a younger audience that has grown up with Facebook, imagine being presented with a hodgepodge of photos of people from Facebook over the last fifteen years: some of those people are still alive and some no longer are, and some look very different from when their photo was last taken. It would be a poignant feeling. But the JWST image also fills me with joy. Looking at the vast expanse, the universe feels not like a cold, inhospitable place but like a living thing pulsating with old and young blood. We are a privileged part of this universe.

There's little doubt that one of the biggest questions stimulated by these images is whether we can detect any signatures of life on one of the many planets orbiting some of the stars in those galaxies. By now we have discovered thousands of extrasolar planets, so there's no doubt that there will be many more in the regions the JWST is capturing. Analysis of the telescope's data already indicates a steamy atmosphere containing water on a planet about 1,150 light years away. Detecting elements like nitrogen, carbon, sulfur and phosphorus is a good start for hypothesizing about the presence of life, but much more would be needed to clarify whether these elements arise from an inanimate process or a living one. It may seem impossible that a landscape as gargantuan as this one is completely barren of life, but given the improbability of life, especially intelligent life, arising through a series of accidents, we may have to search very wide and long.

I was gratified to see my Twitter timeline - otherwise mostly a cesspool of arguments and ad hominem attacks punctuated by all-too-rare tweets of insight - completely flooded with the first images taken by the JWST. The images proved that humanity is still capable of coming together and focusing on a singular achievement of science and technology, however briefly. Most of all, they prove both that science is indeed bigger than all of us and that we can comprehend it if we put our minds and hands together. It's up to us to decide whether we distract ourselves and blow ourselves up with our petty disputes or explore the universe revealed by JWST and other feats of human ingenuity in all its glory.

Image credits: NASA, ESA, CSA and STScI

Book Review: "The Rise and Reign of the Mammals: A New History, From the Shadows of the Dinosaurs to US", by Steve Brusatte

A terrific book by Edinburgh paleontologist Steve Brusatte on the rise of the mammals: engaging, personal and packed with simple explanations and analogies. Brusatte tracks the evolution of mammals from about 325 million years ago, when our reptile-like ancestors split into two groups - the synapsids and the diapsids. The diapsids gave rise to reptiles like crocodiles and snakes, while the synapsids eventually gave rise to us. The synapsids evolved a hole behind their eye socket; it’s now covered with a set of muscles which you can feel if you touch your cheek while chewing.

Much of the book focuses on how mammals evolved different anatomical and physiological features against the backdrop of both catastrophic and gradual climate change, including the shifting of the continents and major extinctions driven by volcanic eruptions, meteors (as in the K-T extinction event that killed the dinosaurs), sea level rises and ice ages. That mammals survived these upheavals is partly a result of chance and partly a result of some remarkable adaptations, which the author spends considerable time describing. These adaptations include milk production, temperature regulation, hair, bigger brains and stable locomotion, among others.
Some of these changes were simple but significant. For instance, a principle known as Carrier's constraint limits lung capacity in slithering reptiles because each lung is alternately compressed during sidewinding motion. When mammalian ancestors lifted their bodies off the ground and evolved a set of bones that stabilized the rib cage, their lungs could draw in and expel air during movement and while the animal was eating. Needless to say, the ability to breathe and move while eating was momentous for survival in an environment in which predators abounded.
Another adaptation was the development of the specialized set of teeth that marks all mammals, including humans - the incisors, canines, premolars and molars. Because these teeth form a specialized, complex apparatus, they emerge only twice in mammals - once during infancy and once more later in life. But our chewing apparatus gave rise to another remarkable adaptation: in an evolutionary migration spread out over millions of years, bones of the jaw became the bones of the ear. The ear bones are a set of finely orchestrated and sensitive sound detectors that gave mammals an acute sense of hearing and enabled them to seek out mates and avoid predators.
Quite naturally, the book spends a good amount of time describing the mystery of why mammals survived the great meteor extinction of dinosaurs and much of other life on the planet. Except that it’s no mystery. Dinosaurs were bulky and specialized cold-blooded eaters which were exposed. Mammals were furry, rodent-like warm-blooded omnivores which could hide out underground and eke out an existence on charred vegetation and dead flesh in the post-apocalyptic environment. After the K-T event, there was no turning back for mammals.
The rest of the book discusses particular features of mammalian evolution, like flight in bats and the odd egg-laying monotremes such as the duck-billed platypus. A particularly memorable discussion concerns the whales, the biggest mammals that have ever lived, which evolved from land mammals that would occasionally take to water to escape predators and seek out new food. With their exceptionally big brains and bat-like echolocation, whales remain a wonder of nature.
Brusatte also spices up his account with adventurous stories of intrepid paleontologists and archeologists who have dug up pioneering fossils in extreme environments ranging from the blistering tropical forests of Africa to the Gobi desert of Mongolia. Paleontology comes across as a truly international endeavor, with Chinese paleontologists especially making significant contributions; they were among the first, for instance, to discover a feathered dinosaur, attesting to the reptile-to-bird evolutionary transition. Unlike in the old days, when Victorian men did most of the digging, women now make up a healthy percentage of the field.
Human evolution occupies only a few chapters of Brusatte’s book, and for good reason. While humans occupy a unique niche because of their intelligence, evolutionarily they are no more special or fascinating than whales, bats, platypuses, elephants or indeed the earliest synapsids. What we can take heart from is the fact that we are part of an unbroken thread of evolution running through all these creatures. Mammals have survived catastrophic extinctions and climate change events. Humans are now responsible for one. Whether they engineer their own extinction or show the kind of adaptability their ancestors showed is a future only they can determine.

Book Review: "Don't Tell Me I Can't: An Ambitious Homeschooler's Journey", by Cole Summers (Kevin Cooper)

I finished this book with a profound sense of loss combined with an inspired feeling of admiration for what young people can do. Cole Summers grew up in the Great Basin Desert region of Nevada and Utah with a father who had tragically become confined to a wheelchair after an accident in military training. His parents were poor but they wanted Cole to become an independent thinker and doer. Right from when he was a kid, they never said "No" to him and let him try out everything that he wanted to. When four-year-old Cole wanted to plant and grow a garden, they let him, undeterred by the minor cuts and injuries on the way.


Partly for financial reasons and partly because there were no good schools available in their part of town, Cole's parents decided to homeschool him. But homeschooling for Cole happened on his terms. When they saw him watching Warren Buffett, Charlie Munger and Bill Gates videos on investing and business, they told him it was OK to learn practical skills by watching YouTube videos instead of reading schoolbooks. Cole talks about many lessons he learnt from Munger and Buffett about patience and common mistakes in investing. When other kids were reciting the names of planets, Cole was reading company balance sheets and learning how to write off payroll expenses as tax deductions through clever investing.

This amazing kid had, by the age of fourteen, started two businesses - one raising rabbits and one farming. He parlayed his income into buying a beat-up house and a sophisticated John Deere tractor. He fixed up the house from scratch, learning everything about roofing, flooring, cabinet installation and other important aspects of construction from YouTube videos and from some local experts. He learnt, sometimes through hard experience, how to operate a tractor and farm his own land. He made a deep study of the Great Basin desert water table, which is dropping a few feet every year, and came up with a novel and detailed proposal to keep water levels from declining by planting low-water plants. He also proposed solutions to the supply chain problems with timber and farm equipment.

A week or two ago, Cole and his brother were kayaking and horsing around in a local reservoir when Cole drowned. He leaves behind a profound sense of loss at an incredible life snuffed out too young, and some deep wisdom that most of us who have lived our entire lives still don't appreciate.

The main lesson Cole wants to leave us with is to let kids do what they want, not tell them they can't do things, and give them the freedom to explore and spend leisurely time learning things in an unconventional manner. He rightly says that we have structured parenting in such a way that every minute of a kid's day is oversubscribed. He is also right that many modern parents err on the side of caution.

That was certainly not the way my parents managed my time when I was growing up: I was free to explore the local hills looking for insects, haunt libraries reading books and do dangerous experiments in my home lab from an early age. There is little doubt that this relaxed style of parenting on my parents' part significantly contributed to who I am.

I strongly believe that if you let kids do what they want (within some limits, of course), not only will they turn out ok but they will do something special. Cole Summers seems to me to be the epitome of this ideal. May we all, parents and kids, learn from his extraordinary example and memory.

Should a scientist have "faith"?

Scientists like to think that they are objective and unbiased, driven by hard facts and evidence-based inquiry. They are proud of saying that they go only wherever the evidence leads them. So it might come as a surprise to realize that not only are scientists as biased as non-scientists, but that they are often driven as much by belief as non-scientists are. In fact they are driven by more than belief: they are driven by faith. Science. Belief. Faith. Seeing these words in the same sentence might make most scientists bristle and want to throw something at the wall or at the writer of this piece. Surely you aren't painting us with the same brush as those who profess religious faith, they might say?

But there’s a method to the madness here. First consider what faith is typically defined as – it is belief in the absence of evidence. Now consider what science is in its purest form. It is a leap into the unknown, an extrapolation of what is into what can be. Breakthroughs in science by definition happen “on the edge” of the known. Now what sits on this edge? Not the kind of hard evidence that is so incontrovertible as to dispel any and all questions. On the edge of the known, the data is always wanting, the evidence always lacking, even if not absent. On the edge of the known you have wisps of signal in a sea of noise, tantalizing hints of what may be, with never enough statistical significance to nail down a theory or idea. At the very least, the transition from “no evidence” to “evidence” lies on a continuum. In the absence of good evidence, what does a scientist do? He or she believes. He or she has faith that things will work out. Some call it a sixth sense. Some call it intuition. But “faith” fits the bill equally.

If this reliance on faith seems like heresy, perhaps it’s reassuring to know that such heresies were committed by many of the greatest scientists of all time. All major discoveries, when they are made, at first rely on small pieces of data that are loosely held. A good example comes from the development of theories of atomic structure.

When Johannes Balmer came up with his formula for explaining the spectral lines of hydrogen, he based his equation on only four lines that had been measured with accuracy by Anders Ångström. He then took a leap of faith and came up with a simple numerical formula that predicted many other lines emanating from the hydrogen atom, not just four. But the greatest leap of faith based on Balmer's formula was taken by Niels Bohr, who did not hesitate to call it exactly that. In his case, the leap of faith involved assuming that electrons in atoms occupy only certain discrete energy states, and that the transitions between these states somehow involved Planck's constant in an important way. When Bohr could reproduce Balmer's formula based on this great insight, he knew he was on the right track, and physics would never be the same. One leap of faith built on another.
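For concreteness (the formulas are standard textbook ones, not quotations from Balmer's or Bohr's papers): Balmer's relation for the visible hydrogen lines can be written as

\[
\frac{1}{\lambda} = R_{\mathrm{H}} \left( \frac{1}{2^{2}} - \frac{1}{n^{2}} \right), \qquad n = 3, 4, 5, \ldots
\]

and Bohr reproduced it by positing discrete energy levels

\[
E_n = -\frac{R_{\mathrm{H}} h c}{n^{2}},
\]

so that a jump from level n to level 2 releases a photon of energy h\nu = E_n - E_2, which corresponds exactly to a Balmer wavelength, with Planck's constant tying the energy difference to the frequency of the emitted light.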

To a 21st-century scientist, Bohr's and Balmer's thinking, like that of many other major scientists well through the 20th century, shows a manifestly odd feature in addition to leaps of faith – an absence of what we call statistical significance or validation. As noted above, Balmer used only four data points to come up with his formula, and Bohr not too many more. Yet both were spectacularly right. Isn't it odd, from the standpoint of an age that holds statistical validation sacrosanct, to have these great scientists make their leaps of faith based on paltry evidence, "small data" if you will? But that in fact is the whole point about scientific belief: it originates precisely when there isn't copious evidence to nail down the fact, when you are still on shaky ground and working at the fringe. This belief also strikingly echoes a famous quote by Bohr's mentor Rutherford – "If your experiment needs statistics, you ought to have done a better experiment." Resounding words from the greatest experimental physicist of the 20th century, whose own experiments were so carefully chosen that he could deduce from them extraordinary truths about the structure of matter based on a few good data points.

The transition between belief and fact in science thus lies on a continuum. There are very few cases where a scientist goes overnight from a state of "belief" to one of "knowledge". In reality, as evidence builds up, the scientist becomes more and more confident until there are no longer good grounds for believing otherwise. In many cases the scientist may not even be alive to see his or her theory confirmed in all its glory: even the Newtonian model of the solar system took until the middle of the 19th century to be fully validated, more than a hundred years after Newton's death.

A good example of this gradual transition of a scientific theory from belief to confident espousal is provided by the way Charles Darwin's theory of evolution by natural selection, well, evolved. It's worth remembering that Darwin took more than twenty years to build up his theory after coming home from his voyage on the HMS Beagle in 1836. At first he had only hints of an idea based on extensive yet uncatalogued and disconnected observations of flora and fauna from around the world. Some of the evidence he had documented – the identification of his Galapagos finches, for instance – was wrong and had to be corrected by his friends and associates. It was only by arduous experimentation and cataloging that Darwin – a famously cautious man – was able to reach the kind of certainty that prompted him to finally publish his magnum opus, Origin of Species, in 1859, and even then only after he was in danger of being scooped by Alfred Russel Wallace. There was no single eureka moment at which Darwin could say he had transitioned from "believing" in evolution by natural selection to "knowing" that it was true. And yet, by 1859, this most meticulous of scientists was clearly confident enough in his theory that he no longer simply believed in it. But it certainly started out that way. The same uncertain transition between belief and knowledge applies to other discoveries. Einstein often talked about his faith in his general theory of relativity before observations of the solar eclipse of 1919 confirmed its major prediction, the bending of starlight by gravity, remarking that if he was wrong it would mean that the good lord had led him down the wrong garden path. When did Watson and Crick go from believing that DNA is a double helix to knowing that it is? When did Alfred Wegener go from believing in continental drift to knowing that it was real? In some sense the question is pointless. Scientific knowledge, both individually and collectively, gets cemented with greater confidence over time, until the objections simply cannot stand up to the weight of the accumulated evidence.

Faith, at least in one important sense, is thus an important part of the mindset of a scientist. So why should scientists not nod in assent if someone then tells them that there is no difference, at least in principle, between their faith and religious faith? For two important reasons. First, the "belief" that a scientist has is still based on physical and not supernatural evidence, even if all the evidence may not yet be there. What scientists call faith is still based on data and experiments, not mystic visions and pronouncements from a holy book. More importantly, unlike religious belief, scientific belief can wax and wane with the evidence; it is, importantly, tentative and always subject to change. Any good scientist who believes X will be ready to let go of their belief in X if strong evidence to the contrary presents itself. That is in fact the main difference between scientists on the one hand and clergymen and politicians on the other; as Carl Sagan once asked, when was the last time you heard either of the latter say, "You know, that's a really good counterargument. Maybe what I am saying is not true after all."

Faith may also, interestingly, underlie one of the classic features of great science – serendipity. Unlike what we often believe, serendipity does not always refer to pure unplanned accident but to deliberately planned accident; as Louis Pasteur memorably put it, chance favors the "prepared mind". A remarkable example of deliberate serendipity comes from an anecdote that Enrico Fermi narrated to Subrahmanyan Chandrasekhar about his discovery of slow neutrons, which unlocked the door to nuclear power and the atomic age. Fermi told Chandrasekhar how he came to make this discovery, which he personally considered – among a dozen seminal ones – to be his most important (from Mehra and Rechenberg, "The Historical Development of Quantum Theory, Vol. 6"):


Chandrasekhar’s invocation of Hadamard’s thesis of unconscious discovery might provide a rational underpinning for what we are calling faith. In this case, Fermi’s great intuitive jump, his seemingly irrational faith that paraffin might slow down neutrons, might have been grounded in the extensive body of knowledge about physics that was housed in his brain, forming connections that he wasn’t even aware of. Not every leap of faith can be explained this way, but some can. In this sense a scientist’s faith, unlike religious faith, is very much rational and based on known facts.

Ultimately there’s a supremely important guiding role that faith plays in science. Scientists ignore believing at their own peril. This is because they have to constantly tread the tightrope of skepticism and wonder. Shut off your belief valve completely and you will never believe anything until there is five-sigma statistical significance for it. Promising avenues of inquiry that are nonetheless on shaky grounds for the moment will be dismissed by you. You may never be the first explorer into rich new scientific territory. But open the belief valve completely and you will have the opposite problem. You may believe anything based on the flimsiest of evidence, opening the door to crackpots and charlatans of all kinds. So where do you draw the line?

In my mind there are a few logical rules of thumb that might help a scientist mark out territories of non-belief from ones where leaps of faith might be warranted. Plausibility based on the known laws of science should play a big role. For instance, belief in homeopathy would be mistaken based on the most elementary principles of physics and chemistry, including the laws of mass action and dose response. But what about belief in extraterrestrial intelligence? There the situation is different. Based on our understanding of the laws of quantum theory, stellar evolution and biological evolution, there is no reason to believe that life could not have arisen on another planet somewhere in the universe. In this sense, belief in extraterrestrial intelligence is justified belief, even if we don't have a single example of life existing anywhere else. We should keep on looking. Faith in science is also more justified when there is a scientific crisis. In a crisis you are on desperate ground anyway, so postulating ideas that aren't entirely based on good evidence isn't going to make matters worse and is more likely to lead into novel territory. Planck's desperate assumption that energy only comes in discrete packets was partly an act of faith that resolved a crisis in classical physics.

Ultimately, though, drawing a firm line is always hard, especially for topics on the fuzzy boundary. Extra-sensory perception, the deep hot biosphere and a viral cause for mad cow disease are three theories which are implausible although not impossible in principle; there is little in them that flies in the face of the basic laws of science. The scientists who believe in these theories are sticking their necks out and taking a stand. They are heretics who are taking the risk of being called fools; since most bold new ideas in science are usually wrong, they often will be. But they are setting an august precedent.

If science is defined as the quest into the unknown, a foray into the fundamentally new and untested, then it is more important than ever, especially in this age of conformity, for belief to play a more central role in the practice of science. The biggest scientists in history have always been ones who took leaps of faith, whether it was Bohr with his quantum atom, Einstein with his thought experiments or Noether with her deep feeling for the relationship between symmetry and conservation laws, a feeling felt but not seen. For creating minds like these, we need to nurture an environment that not just allows but actively encourages scientists, especially young ones, to tread the boundary between evidence and speculation with aplomb, to exercise their rational faith with abandon. Marie Curie once said, "Now is the time to fear less, so that we may understand more." To which I may add, "Now is the time to believe more, so that we may understand even more."

First published on 3 Quarks Daily

Man as a "machine-tickling aphid"


On the playground in the park today, my daughter and I played with some carpenter ants and the aphids they were farming. The phenomenon never ceases to fascinate me - the ants sheltering the aphids from natural predators under leaves and in sap-rich areas of trees, and in turn milking the aphids for their tasty, sugary honeydew by gently stroking them.
It's doubly fascinating because, as recounted in George Dyson's "Darwin Among the Machines", the Victorian writer and polymath Samuel Butler wondered in his groundbreaking 1872 book "Erewhon" whether human relationships with machines would one day become very similar to those between ants and aphids, with humans essentially becoming dependent on machines to provide them with constant, nurturing stimulation and feeding: "May not man himself become a sort of parasite upon the machines? An affectionate machine-tickling aphid?", wrote Butler.
In this scenario, there's no need to imagine a Terminator-style takeover of human society by computers; instead, humans will willingly give themselves over to the illusions of tender, loving care provided by machines, becoming permanently dependent and parasitic on them and becoming, in effect, code's way to replicate itself. Clearly, Butler's vision was incredibly prescient and ahead of its time, and resoundingly true as indicated by the medium in which I am typing these words.

Philip Morrison on challenges with AI

Philip Morrison, a top-notch physicist and polymath with an incredible knowledge of things far beyond his immediate field, was also a speed reader who reviewed hundreds of books on a stunning range of topics. In one of his essay collections he held forth on what he thought were the significant challenges with machine intelligence. It strikes me that many of these are still valid (italics mine).

"First, a machine simulating the human mind can have no simple optimization game it wants to play, no single function to maximize in its decision making, because one urge to optimize counts for little until it is surrounded by many conditions. A whole set of vectors must be optimized at once. And under some circumstances, they will conflict, and the machine that simulates life will have the whole problem of the conflicting motive, which we know well in ourselves and in all our literature.


Second, probably less essential, the machine will likely require a multisensory kind of input and output in dealing with the world. It is not utterly essential, because we know a few heroic people, say, Helen Keller, who managed with a very modest cross-sensory connection nevertheless to depict the world in some fashion. It was very difficult, for it is the cross-linking of different senses which counts. Even in astronomy, if something is "seen" by radio and by optics, one begins to know what it is. If you do not "see" it in more than one way, you are not very clear what it in fact is.


Third, people have to be active. I do not think a merely passive machine, which simply reads the program it is given, or hears the input, or receives a memory file, can possibly be enough to simulate the human mind. It must try experiments like those we constantly try in childhood unthinkingly, but instructed by built-in mechanisms. It must try to arrange the world in different fashions.


Fourth, I do not think it can be individual. It must be social in nature. It must accumulate the work, the languages if you will, of other machines with wide experience. While human beings might be regarded collectively as general-purpose devices, individually they do not impress me much that way at all. Every day I meet people who know things I could not possibly know and can do things I could not possibly do, not because we are from differing species, not because we have different machine natures, but because we have been programmed differently by a variety of experiences as well as by individual genetic legacies. I strongly suspect that this phenomenon will reappear in machines that specialize, and then share experiences with one another. A mathematical theorem of Turing tells us that there is an equivalence in that one machine's talents can be transformed mathematically to another's. This gives us a kind of guarantee of unity in the world, but there is a wide difference between that unity, and a choice among possible domains of activity. I suspect that machines will have that choice, too. The absence of a general-purpose mind in humans reflects the importance of history and of development. Machines, if they are to simulate this behavior, or as I prefer to say, share it, must grow inwardly diversified, and outwardly sociable.


Fifth, it must have a history as a species, an evolution. It cannot be born like Athena, from the head full-blown. It will have an archaeological and probably a sequential development from its ancestors. This appears possible. Here is one of computer science's slogans, influenced by the early rise of molecular microbiology: A tape, a machine whose instructions are encoded on the tape, and a copying machine. The three describe together a self-reproducing structure. This is a liberating slogan; it was meant to solve a problem in logic, and I think it did, for all but the professional logicians. The problem is one of the infinite regress which looms when a machine becomes competent enough to reproduce itself. Must it then be more complicated than itself? Nonsense soon follows. A very long instruction tape and a complex but finite machine that works on those instructions is the solution to the logical problem."


Consciousness and the Physical World, edited by V. S. Ramachandran and Brian Josephson

Consciousness and the Physical World: Proceedings of the Conference on Consciousness Held at the University of Cambridge, 9th-10th January, 1978

This is an utterly fascinating book, one that often got me so excited that I could hardly sleep or walk without having loud, vocal arguments with myself. It takes a novel view of consciousness that places minds (and not just brains) at the center of evolution and the universe. It is based on a symposium on consciousness held at Cambridge University in 1978 and is edited by Brian Josephson and V. S. Ramachandran, both incredibly creative scientists. Most essays in the volume are immensely thought-provoking, but I will highlight a few here.


The preface by Freeman Dyson states that "this book stands in opposition to the scientific orthodoxy of our day." Why? Because it postulates that minds and consciousness have as important a role to play in the evolution of the universe as matter, energy and inanimate forces. As Dyson says, most natural scientists frown upon any inclusion of the mind as an equal player in the arena of biology; for them it violates a taboo against the mixing of values and facts. And yet even Francis Crick, as hard a scientist as any other, once called the emergence of culture and the mind from the brain the "astonishing hypothesis." This book defies conventional wisdom and mixes values and facts with aplomb. It should be required reading for any scientist who dares to dream and wants to boldly think outside the box.

Much of the book is in some sense an extension - albeit a novel one - of ideas laid out in an equally fascinating book by Karl Popper and John Eccles titled "The Self and Its Brain: An Argument for Interactionism". Popper and Eccles propose that consciousness arises when brains interact with each other. Without interaction brains stay brains. When brains interact they create both mind and culture.

Popper and Eccles say that there are three "worlds" encompassing the human experience:

World 1 consists of brains, matter and the material universe.
World 2 consists of individual human minds.
World 3 consists of the elements of culture, including language, social culture and science.

Popper's novel hypothesis is that while World 3 clearly derives from World 2, at some point it took on a life of its own as an emergent entity independent of individual minds and brains. In a trivial sense we know this is true, since culture and ideas propagate long after their originators are dead. What is more interesting is the hypothesis that World 2 and World 3 somehow feed on each other, so that minds, fueled by cultural determinants and novelty, also start acquiring lives of their own, lives that are no longer dependent on the substrate of World 1 brains. In some sense this is the classic definition of emergent complexity, a phrase that was not quite in vogue in 1978. Not only that, but Eccles proposes that minds can in turn act on brains just as culture can act on minds. This is of course an astounding hypothesis, since it suggests that minds are separate from brains and that they can influence culture in a self-reinforcing loop that is derived from the brain and yet independent of it.

The rest of the chapters go on to suggest similarly incredible and fascinating ideas. Perhaps the most interesting are chapters 4 and 5 by Nicholas Humphrey (a grand-nephew of John Maynard Keynes) and Horace Barlow, both well-known neuroscientists. Barlow and Humphrey's central thesis is that consciousness arose as an evolutionary novelty in animals for promoting interactions - cooperation, competition, gregariousness and other forms of social communication. In this view, consciousness was an accidental byproduct of primitive neural processes that was then favored by natural selection because of its key role in facilitating interactions. This raises more interesting questions: Would non-social animals then lack consciousness? The other big question in my mind was how we can even define "non-social" animals: after all, even bacteria, not to mention more advanced yet primitive creatures (by human standards) like slime molds and ants, display sophisticated modes of social communication. In what sense would these creatures be conscious, then? Because the volume was written in 1978, it does not discuss Giulio Tononi's "integrated information theory" or Christof Koch's ideas about consciousness existing on a continuum, but the ideas mentioned above certainly contain trappings of these concepts.

There is, finally, an utterly fascinating discussion of an evolutionary approach to free will. It states in a nutshell that free will is a biologically useful delusion. This is not the same as saying that free will is an *illusion*. In this definition, free will arose as a kind of evolutionary trick to ensure survival. Without free will, humans would have no sense of controlling their own fates and environments, and this feeling of lack of control would not only detrimentally impact their day-to-day existence and basic subsistence but also all the long-term planning, qualities and values that are the hallmark of Homo sapiens. A great analogy the volume provides is with the basic instinct of hunger. In an environment where food was infinitely abundant, a creature would be free from the burden of choice. So why was hunger "invented"? In Ramachandran's view, hunger was invented to drive us to explore the environment around us; similarly, the sensation of free will was "invented" to allow us to plan for the future, make smart choices and even pursue terribly important and useful but abstract ideas like "freedom" and "truth". It allows us to make what Jacob Bronowski called "unbounded plans". In an evolutionary framework, "those who believed in their ability to will survived and those who did not died out."

Is there any support for this hypothesis? As Ramachandran points out, there is at least one simple but very striking natural experiment that lends credence to the view of free will as an evolutionarily useful biological delusion. People who are depressed are well known to lack a feeling of control over their environment. In extreme cases this feeling can lead to significantly increased mortality, including death from suicide. Clearly there is at least one group of people in which the lack of a felt freedom to will can have disastrous consequences if not corrected.

I could go on about the other fascinating arguments and essays in these proceedings. But even reading the amazing introduction by Ramachandran and a few of the essays should give the reader a taste of the sheer chutzpah and creativity demonstrated by these scientific heretics in going beyond the boundary of the known. May this tribe thrive and grow.

Rutherford on tools and theories (and machine learning)

Ernest Rutherford was the consummate master of experiment, disdaining theoreticians for playing around with their symbols while he and his fellow experimentalists discovered the secrets of the universe. He was said to have used theory and mathematics only twice - once when he discovered the law of radioactive decay and again when he used the theory of scattering to interpret his seminal discovery of the atomic nucleus. But that's where his tinkering with formulae stopped.

Time and time again Rutherford used relatively simple equipment and tools to pull off seemingly miraculous feats. He had already won the Nobel Prize in chemistry by the time he discovered the nucleus - a rare and curious case of a scientist making their most important discovery after winning a Nobel Prize. The nucleus clearly deserved another Nobel, but so did his fulfillment of the dreams of the alchemists when he transmuted nitrogen into oxygen by artificial disintegration of the nitrogen atom in 1919. These achievements fully justified Rutherford's stature as perhaps one of the two greatest experimental physicists in modern history, the other being Michael Faraday. But they also justified the primacy of tools in engineering scientific revolutions.

However, Rutherford was shrewd and wise enough to recognize the importance of theory - he famously mentored Niels Bohr, presumably because "Bohr was different; he was a football player." And he was on good terms with both Einstein and Eddington, the doyens of relativity theory in Europe. So it's perhaps not surprising that he made a quite interesting observation about the discovery of radioactivity that attests to the importance of theoretical ideas.

As everyone knows, radioactivity in uranium was discovered by Henri Becquerel in 1896, then taken to great heights by the Curies. But as Rutherford points out in a revealing paragraph (Brown, Pais and Pippard, "Twentieth Century Physics", Vol. 1; 1995), it could potentially have been discovered a hundred years earlier. More accurately, it could have been experimentally discovered a hundred years earlier.


Rutherford's basic point is that unless there's an existing theoretical framework for interpreting an experiment - providing the connective tissue, in some sense - the experiment remains merely an observation. Depending only on experiments to automatically uncover correlations and new facts about the world is therefore tantamount to hanging on to a tenuous, risky and uncertain thread that might lead you in the right direction only occasionally, by pure chance. In some ways Rutherford here is echoing Karl Popper's refrain when Popper said that even unbiased observations are "theory laden"; in the absence of the right theory, there's nothing to ground them.

It strikes me that Rutherford's caveat applies well to machine learning. One goal of machine learning - at least as believed by its most enthusiastic proponents - is to find patterns in the data, whether the data is dips and rises in the stock market or signals from biochemical networks, by blindly letting the algorithms discover correlations. But simply letting the algorithm loose on data would be like letting gold leaf electroscopes and other experimental apparatus loose on uranium. Even if they find some correlations, these won't mean much in the absence of a good intellectual framework connecting them to basic facts. You could find a correlation between two biological responses, for instance, but in the absence of a holistic understanding of how the components responsible for these responses fit within the larger framework of the cell and the organism, the correlations would stay just that - correlations without a deeper understanding.

What's needed to get to that understanding is machine learning plus theory, whether it's a theory of the mind for neuroscience or a theory of physics for modeling the physical world. That is why efforts to supplement machine learning by embedding knowledge of the laws of physics or biology in the algorithms are likely to work. Efforts that blindly use machine learning to discover truths about natural and artificial systems from correlations alone, on the other hand, would be like Rutherford's fictitious uranium salts from 1806 giving off mysterious radiation that is detected but never interpreted - a question waiting for an explanation.
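To make the contrast concrete, here is a minimal numpy-only sketch of my own (an illustration of the general point, not anything from Rutherford or from any particular machine learning system): the same simulated radioactive decay counts are fit once with a blind, generic polynomial and once with the exponential decay law that Rutherford himself derived. Both curves "fit" the measured points, but only the theory-grounded fit recovers a physically meaningful constant and extrapolates sensibly.

import numpy as np

# Simulated counts from a source with a half-life of 5.0 (arbitrary time units)
rng = np.random.default_rng(0)
half_life = 5.0
lam_true = np.log(2) / half_life
t = np.linspace(0, 20, 40)
counts = 1000.0 * np.exp(-lam_true * t) * rng.normal(1.0, 0.05, t.size)

# "Blind" pattern-finding: a cubic polynomial tracks the data points...
poly = np.polyfit(t, counts, deg=3)
blind_prediction = np.polyval(poly, 40.0)   # ...but extrapolates badly outside the data

# Theory-informed fit: the decay law N(t) = N0 * exp(-lam * t) is a straight line
# in log space, so one linear fit recovers the physical decay constant.
slope, intercept = np.polyfit(t, np.log(counts), deg=1)
lam_fit = -slope
theory_prediction = np.exp(intercept) * np.exp(-lam_fit * 40.0)

print(f"true decay constant: {lam_true:.3f}, fitted from data: {lam_fit:.3f}")
print(f"polynomial extrapolation to t=40: {blind_prediction:.1f} counts")
print(f"decay-law extrapolation to t=40:  {theory_prediction:.1f} counts")

Run it and the polynomial, which describes the measured points perfectly well, typically wanders far from the truth at t = 40, while the decay-law fit lands near the correct answer of roughly 4 counts. The data are identical in both cases; the difference is the theoretical framework brought to them.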