Brenner, von Neumann and Schrödinger

Erwin Schrödinger's book "What is Life?" inspired many scientists, like Crick, Watson and Perutz, to go into molecular biology. While many of the details in the book were wrong, its central message, that the time was ripe for a concerted attack on the structure of the gene based on physical principles, strongly resonated.

However, influence and importance are two different things, and unfortunately the two aren't always correlated. As Sydney Brenner recounts in detail here, the founding script for molecular biology should really have been John von Neumann's 1948 talk at Caltech as part of the Hixon Symposium, titled "The General and Logical Theory of Automata". In retrospect this talk was seminal and far-reaching. Brenner is one of the very few scientists who seem to have appreciated that von Neumann's influence on biology was greater than Schrödinger's, and that von Neumann was right where Schrödinger was wrong. Part of the reason is that while many biologists like Crick and Watson had read Schrödinger's "What is Life?", almost nobody had read von Neumann's "The General and Logical Theory of Automata".

As Brenner puts it, Schrödinger assumed that the machinery for replication (the chromosomes) also contained the means of carrying out that replication. Von Neumann realized that the machinery did not contain the means themselves but only the *instructions* for those means. That's a big difference; the instructions are genes, the means are proteins. In fact, as Freeman Dyson says in his "Origins of Life", von Neumann was the first to clearly realize the distinction between software (genes) and hardware (proteins). Why? Because as a mathematician and a generalist (and a pioneer of computer science), he had a vantage point that was unavailable to specialist biologists and chemists in the field.

Unfortunately abstract generalists are often not recognized as the true originators of an idea. It's worth noting that in his lecture, von Neumann laid out an entire general program for what we now call translation, five years before Watson, Crick, Franklin and others even solved the structure of DNA. The wages of the theoretician are sparse, especially those of the one who, as mathematician John Casti put it, solves "only" the general case.

On change

Two weeks ago, outside a coffee shop near Los Angeles, I discovered a beautiful creature, a moth. It was lying still on the pavement and I was afraid someone might trample on it, so I gently picked it up and carried it to a clump of garden plants on the side. Before that I showed it to my 2-year-old daughter, who let it walk slowly over her arm. The moth was brown and huge, almost the size of my hand. It had the feathery antennae typical of a moth and two black eyespots near the tips of its wings. It moved slowly and gradually disappeared into the protective shadow of the plants when I put it down.

Later I looked up the species on the Internet and found that it was a male Ceanothus silk moth, common in the Western United States. I found out that the reason it’s not seen very often is that the males live only for about a week or two after they take flight. During that time they don’t eat; their only purpose is to mate and die. When I read about it I realized that I had held in my hand a thing of indescribable beauty, indescribable precisely because of the brevity of its life. Then I realized that our lives are perhaps not all that long compared to the Ceanothus moth’s. Assuming that an average human lives for about 80 years, our lifespan is about 2,000 times longer than the moth’s. But our lifespans are much shorter than those of redwood trees. Might we not appear to redwood trees the way Ceanothus moths or ants appear to us, brief specks of life fluttering for an instant and then disappearing? The difference, as far as we know, is that unlike redwood trees we can consciously understand this impermanence. Our lives are no less beautiful because, on a relative scale of events, they are no less brief. They are brief instants between the lives of redwood trees, just like redwood trees’ lives are brief instants in the intervals between the lives of stars.

I have been thinking about change recently, perhaps because it’s the standard thing to do for someone in their forties. But as a chemist I have thought about change a great deal in my career. The gist of a chemist’s work is the structure of molecules and their transformations into one another. The molecules can be natural or synthetic. They can be as varied as DNA, nylon, chlorophyll, rocket fuel, cement and aspirin. But what connects all of them is change. At some point in time they did not exist and came about through the union of atoms of carbon, oxygen, hydrogen, phosphorus and other elements. At some point they will cease to be and those atoms will become part of some other molecule or some other life form.

Sometimes popular culture can capture the essence of science and philosophy well. In this case, chemistry as change was captured eloquently by the character of Walter White in the TV show “Breaking Bad”. In his first lecture as a high school chemistry teacher White says,

“Chemistry is the study of matter. But I prefer to think of it as the study of change. Now, just think about this. Electrons change their energy levels. Elements, they change and combine into compounds. Well, that’s…that’s all of life, right? It’s the constant, it’s the cycle, it’s solution, dissolution, just over and over and over. It’s growth, then decay, then transformation. It is fascinating, really.”

Changes in the structure of atoms and molecules are ultimately dictated by the laws of atomic physics and the laws of thermodynamics. The second law of thermodynamics, which loosely states that disorder is more likely than order, guarantees that change will occur. At its root the second law is an argument from probability: there are simply many more ways for a system to be disordered than to be ordered. The miracle of life and the universe at large is that complex systems like biological systems can locally and briefly defy the second law, assembling order from disorder and letting it persist for a few short decades, during which that order can do astonishing things like make music and art and solve mathematical equations, enabling it to understand where it came from. The biologist Carl Woese once gave an enduringly beautiful metaphor for life, comparing it to a child playing in a stream.

“If not machines, what are organisms? A metaphor far more to my liking is this. Imagine a child playing in a woodland stream, poking a stick into an eddy in the flowing current, thereby disrupting it. But the eddy quickly reforms. The child disperses it again. Again it reforms, and the fascinating game goes on. There you have it! Organisms are resilient patterns in a turbulent flow—patterns in an energy flow.”

Woese’s metaphor perfectly captures both the permanence and impermanence of life. The structure is interrupted, but over time its essence persists. It changes and yet stays the same.

Although thermodynamics and Darwin’s theory of evolution help us understand how ordered structures can perform these complex actions, ultimately we don’t really understand it at the deepest level. The best illustration of our ignorance is the most complex structure in the universe – the human brain. The brain is composed of exactly the same elements as my table, my cup of coffee and the fern growing outside my window. Yet the same elements that assemble to create a fern, when assembled in another, very specific way, somehow create a 3-pound, jellylike structure that can seemingly perform miracles like writing ‘Hamlet’, finding the equations of spacetime curvature and composing the Choral Symphony. We have loose terminology like ‘emergence’ to describe the unique property of consciousness that arises when human brains are assembled from inanimate elements, but if we are to be honest as scientists, we must admit that we don’t understand exactly how that happens. The ultimate example of change, the one that makes the essence of us as humans possible, is still an enduring mystery. Will we ever solve that mystery? Even some of the smartest scientists on the planet, like the theoretical physicist Edward Witten, think we may not. As Witten puts it,

“I think consciousness will remain a mystery. Yes, that’s what I tend to believe. I tend to think that the workings of the conscious brain will be elucidated to a large extent. Biologists and perhaps physicists will understand much better how the brain works. But why something that we call consciousness goes with those workings, I think that will remain mysterious. I have a much easier time imagining how we understand the Big Bang than I have imagining how we can understand consciousness…”

In other words, what Witten is saying is that even if someday we may understand the how and the what of consciousness, we may never understand the why. One of the biggest examples of change in the history of the universe may well remain hidden behind a veil.

I think about change a lot not just because I am a chemist but because I am a parent. Sometimes it feels like our daughter, who is now two and a half years old, has changed more in that short time than a caterpillar changing into a butterfly. Her language, reasoning, social and motor skills have undergone an amazing change since she was born. And this is, of course, a change observed by every parent: children change an incredible amount during their first few years. Some of that change can be guided by parents, but other change is genetic as well as idiosyncratic and unpredictable. Just like you can coax simple arrangements of atoms into certain compounds but not others, as a parent you have to make peace with the fact that you will be able to mold your child’s temperament, personality and trajectory in life to a certain extent but not beyond that. As the old alchemists figured out, you cannot change mercury into gold or gold into mercury no matter how hard you try. And that’s ultimately for the better because, just like the diversity of elements, we then get a diversity of novel and surprising life trajectories for our children.

Children undergo change but they are also often the best instruments for causing it. Recently I finished reading Octavia Butler’s remarkable “Parable of the Sower”, which is set in a 2024 California racked by violence and arson by desperate, homeless people who break into gated communities and burn, murder and rape. The protagonist of the story is a clear-eyed, determined 18-year-old named Lauren Olamina who, after her family is murdered, sets out by herself with the goal of founding a new religion called Earthseed amidst the madness surrounding her. Earthseed sees God as a changeable being and embraces change as the essence of living. Lauren thinks that in a world where people have to deal with unpredictable, seismic, sometimes violent change, a religion that makes the very nature of change a blueprint for God’s work can not just survive but thrive. For an atheist like myself, Earthseed seems as good a religion as any for us to believe in if we want to thrive in an uncertain world. Butler’s story tells us that, just like they always have, our children exist to fix the problems our generation has created.

Change permeates the largest scales of the universe as much as it does ourselves, our children and our bodies and brains. One of the most philosophically shattering experiences in the development of science was the realization by Galileo, Brahe, Newton and others that the perfect, crystalline, quiet universe of Aristotle and other ancients was in fact a dynamic, violent one. In the mid-20th century, astrophysicists worked out that stars go through a life sequence much like we do. When they are born they furiously fuse hydrogen into helium and, as they evolve, forge heavier elements. As they age they can go in one of several directions. Stars the size of the sun will first swell into red giants and then quietly settle into the life of a white dwarf. But stars much more massive than the sun can turn into supernovae and black holes, ending their lives in a cosmic show of spectacular explosion or fiery gravitational collapse.

When our sun turns into a red giant, about 6 billion years from now, its outer envelope will expand and embrace the orbits of Mercury, Venus and Earth. There is no reason to believe that those planets will survive the encounter. By that time the human race will either be extinct or will have migrated to other star systems; the worst thing it could do would be to stay put. Even after that we will not escape change. The science of eschatology, the study of the ultimate fate of the universe, has mapped out many changes that will be unstoppable in the far future. At some point the Andromeda galaxy will collide with our Milky Way. Eventually the stars in the universe will run out of fuel and cease to shine; the universe will become a quieter and darker place. Later still it will contain little but black holes, and at a still further point even black holes will evaporate through the process of Hawking radiation. And far beyond that, if many theories of particle physics are right, even the proton, usually considered a stable particle, will decay; matter as we know it will dissolve into nothingness. The accelerated expansion of our universe will ensure that most of these processes inevitably take place. The exact fate of the universe is too uncertain to predict beyond these unimaginable gulfs of time, but there is little doubt that it will be profoundly different from what it is now and what it has been before.

The elements from which my body and brain are composed will one day be given back to the universe (I like to think that they will perhaps become part of a redwood tree). That fact does not fill me with dread or sadness but instead fills me with peace, joy and gratitude. The ultimate death of the universe described above evokes similar feelings. Sometimes I like to sit back, close my eyes and imagine a peaceful, lifeless universe, the galaxies receding past the cosmic horizons, the occasional supernova going off. The carbon, oxygen, nitrogen and other heavier elements in my body came from such supernova explosions a long time ago; the hydrogen came from the Big Bang. Those are astounding facts that science has discovered in the last few decades. Of all the things that could have happened to those elements forged in the furnace of a far-off supernova, what were the chances that they would assemble into the exact specific arrangements that would be me? While we now understand how that happens, it could well have gone countless other ways. I feel privileged to exist as part of that brief interval between supernova explosions, to be able to understand, in my own modest way, the workings of our universe. To be a tiny part of the change that makes the universe what it is.

Book Review: Chip War: The Fight for the World's Most Critical Technology

In the 19th century it was coal and steel; in the 20th century it was oil and gas. What will it be in the 21st century? The answer, according to Chris Miller in this lively and sweeping book, is semiconductor chips.

There is little doubt that chips are ubiquitous, not just in our computers and cell phones but in our washers and dryers, our dishwashers and ovens, our cars and sprinklers, in hospital monitors and security systems, in rockets and military drones. Modern life as we know it would be unimaginable without these marvels of silicon and germanium. And as Miller describes, we have a problem, because much of the technology to make these critical devices is the province of a handful of companies and countries that are caught in geopolitical conflict.
Miller starts by tracing the arc of the semiconductor industry and its growth in the United States, driven by pioneers like William Shockley, Andy Grove and Gordon Moore and fueled by demands from the defense establishment during the Cold War. Moore's Law has guaranteed that the demand and supply for chips have exploded in the last few decades; pronouncements of its demise have often been premature. Miller also talks about little-known but critically important people like Weldon Ward, who designed chips that made precision missiles and weapons possible; Bill Perry, later secretary of defense, who pressed the Pentagon to fund and develop precision weapons; and Lynn Conway, a transgender scientist who helped lay the foundations of modern chip design.
Weldon Ward's early design for a precision-guided missile in Vietnam was particularly neat: a small window in the tip of the warhead focused laser light reflected from the target onto a chip that was divided into four quadrants. If one quadrant started getting more light than the others, you knew the missile was off course and could adjust it. Before he designed the missile, Ward was shown photos of a bridge in Vietnam surrounded by craters showing where earlier bombs had landed and missed. After he designed his missile, there were no more craters, only a destroyed bridge.
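The guidance scheme Miller describes is essentially a feedback loop over four photodetector signals. Here is a minimal sketch of that idea in Python; the function name, quadrant layout and proportional "gain" are my own illustrative assumptions, not details from the book or from Ward's actual design.

```python
def quadrant_correction(q_up, q_down, q_left, q_right, gain=1.0):
    """Toy quadrant-seeker logic: compare the reflected laser light falling on
    the four quadrants of the detector and return steering corrections that
    re-center the spot (all quadrants receiving equal light)."""
    total = q_up + q_down + q_left + q_right
    if total == 0:
        # No reflected laser light detected: make no correction.
        return 0.0, 0.0
    # Normalized imbalances: both are zero when the spot is centered on the chip.
    pitch = gain * (q_up - q_down) / total    # steer up/down
    yaw = gain * (q_right - q_left) / total   # steer left/right
    return pitch, yaw

# Example: the spot has drifted toward the upper-right quadrants,
# so the logic commands a small corrective deflection in that direction.
print(quadrant_correction(q_up=0.4, q_down=0.2, q_left=0.1, q_right=0.3))
```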
There are three kinds of chips: memory chips, which store data (like the RAM in your computer); logic chips, which process it (like the CPU); and analog chips, which handle real-world signals like temperature and pressure in appliances. While much of the pioneering work in designing transistors and chips was spearheaded by American scientists at companies like Intel and Texas Instruments, the landscape soon shifted. First the Japanese, led by Sony's Akio Morita, captured the market for memory (DRAM) chips in the 80s, before Andy Grove powerfully brought the advantage back to the US by foreseeing the personal computer era and retooling Intel to make microprocessors. The landscape also shifted because the U.S. found cheap labor in Asia and outsourced much of the manufacturing of chips.
But the real major player in this shift was Morris Chang. Chang was one of the early employees at Texas Instruments, and his specialty was optimizing the chemical and industrial processes for getting high yields of working chips. He rose through the ranks and advised the defense department. But, in one of those momentous quirks of history that seem trivial at the time, he was passed over for the CEO position. Fortunately he found a receptive audience in the Taiwanese government, which gave him a no-strings-attached opportunity to set up a chip manufacturing operation in Taiwan.
The resulting company, TSMC, has been both the boon and the bane of the electronics age. If you use a device with a chip in it, the chip has most probably been made by TSMC. Apple, Amazon, Tesla and Intel all design their own chips but have many of them made by TSMC. However, it does not help that TSMC is located in a country that both sits on top of a major earthquake fault and is the target for invasion or takeover by a gigantic world power. The question of whether our modern, chip-dependent technology can thrive is closely tied to whether China is going to invade Taiwan.
The rest of the supply chain for making chips is equally far-flung. But although it sounds globalized, it's not. For instance, the stunningly sophisticated process of extreme ultraviolet (EUV) lithography that etches designs onto chips is essentially monopolized by one company, ASML in the Netherlands. The machines that do this cost more than $100 million each and are built from roughly half a million parts. If something were to happen to ASML, the world's chip supply would come to a grinding halt.
The same goes for the companies that make the software for designing chips. Three companies in particular - Cadence, Synopsys and Mentor - make about 90% of chip design software. There are a handful of other companies making specialized software and hardware, but they too are concentrated in a few places.
Miller argues that the future of chips, and therefore of modern technology at large, is going to depend on geopolitics, especially the relationship between China and the United States. The good news is that China currently lags significantly behind the U.S. in almost all aspects of chip design and manufacturing; the major centers for these processes are either in the U.S. or in countries that are allies of the U.S. In addition, replicating machinery of the kind ASML uses for etching is hideously complicated. The bad news is that China has a lot of smart scientists and engineers and uses theft and deception to gain access to chip design and manufacturing technology. Using front companies and legitimate buyouts, it has already tried to gain such access. While it will still take years for China to catch up, it is more a question of when than if.
If we are to continue our modern way of life that depends on this critical technology, it will have to be secured on multiple fronts, some of which are already being set in motion. Intel is now setting up its own foundry business and trying to master some of the technology that ASML and TSMC have pioneered. China will have to be brought to the bargaining table, and every attempt will have to be made to ensure that it plays fair.
But much of the progress also depends on funding basic science. It's worth remembering that much of the early pioneering work in semiconductors was done by physicists and chemists at places like Bell Labs and Intel, a lot of it by immigrants like Andy Grove and Morris Chang. Basic research at national labs like Los Alamos and Sandia laid the foundations for ASML's etching technology. Attempts to work around the limits of Moore's Law will also have to continue: as transistors shrink to single-digit-nanometer sizes, quantum effects make their functioning more uncertain, though there are plans to sidestep these issues through strategies like stacking transistors in three dimensions. All these strategies depend on training the next generation of scientists and engineers, because progress in technology ultimately depends on education.

A Science Thanksgiving

It’s Thanksgiving weekend here in the U.S., and there’s an informal tradition on Thanksgiving to give thanks for all kinds of things in our lives. Certainly there’s plenty to be thankful for this year, especially for those of us whose lives and livelihoods haven’t been personally devastated by the coronavirus pandemic. But I thought I would do something different this year. Instead of being thankful for life’s usual blessings, how about being thankful for some specific facts of nature and the universe that are responsible for our very existence and make it wondrous? Being employed and healthy and surrounded by family and friends is excellent, but none of that would be possible without the amazing unity and diversity of life and the universe. So without further ado and in no particular order, I present an entirely personal selection of ten favorites for which I am eternally thankful.

I am thankful for the value of the resonance energy of the excited state of carbon-12: carbon-12, which is the basis of all organic life on earth, is formed in stars through the reaction of beryllium-8 with helium-4. The energy mismatch between the starting materials (beryllium plus helium) and this excited state of carbon (the so-called Hoyle state) is only about 4%. If this mismatch had been even slightly larger, the highly unstable beryllium-8 would have disappeared long before it could be transmuted into carbon-12, making life impossible.
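One way to read that roughly 4% figure, using the standard textbook values (the Hoyle state at about 7.65 MeV above the carbon-12 ground state, and the beryllium-8 plus helium-4 threshold at about 7.37 MeV above it):

$$\frac{E_{\text{Hoyle}} - E_{\text{Be+He}}}{E_{\text{Be+He}}} \approx \frac{7.65\ \text{MeV} - 7.37\ \text{MeV}}{7.37\ \text{MeV}} \approx 0.04$$

This is only a back-of-the-envelope reading of the margin; the detailed stellar nucleosynthesis calculation is far more involved.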

I am thankful for the phenomenon of horizontal gene transfer (HGT): it allowed bacteria during early evolution to jump over evolutionary barriers by sharing genetic material between themselves instead of just with their progeny. The importance of HGT for evolution may be immense since regular HGT early on might have led to the universality of the genetic code. HGT mixed and matched genetic material in the cauldron of life, eventually leading to the evolution of multicellular organisms including human beings.

I am thankful for the pistol shrimp: an amazing creature that can “clap” its pincers and send out a high-pressure bubble with lightning speed to kill its prey. This cavitation bubble produces a flash of light when it collapses (a phenomenon known as sonoluminescence), and the collapse is so violent that the temperature inside the bubble can briefly approach the surface temperature of the sun. The pistol shrimp shows us that nature hides phenomena that are not dreamt of in our philosophy, leading to an inexhaustible list of natural wonders for us to explore.

I am thankful to the electron: an entire universe within a point particle that performs the subtlest and most profound magic, making possible the chemistry of life; giving rise to the electromagnetic force that holds ordinary matter together; ultimately creating minds that can win prizes for studying electrons.

I am thankful to the cockroach: may humanity have the resilience to survive the long nights of our making the way you have.

I am thankful to the redwoods: majestic observers and guardians of nature who were here before us, who through their long, slow, considered lives have watched us live out our frantic, anxious lives the way we watch ants live out theirs, and whose survival is now consequentially entwined with our own.

I am thankful to the acetyl group, a simple arrangement of two carbon atoms and one oxygen atom (plus three hydrogens) whose diverse, myriad forms fueling life and alleviating pain – acetylcholine, acetyl-coenzyme A, acetaminophen – are a tribute to the ingenuity of both human minds and nature.

I am thankful to i, the square root of minus one: who knew that this diabolical creature, initially alien even to the abstract perceptions of mathematicians, would turn out to be as “real” as the real numbers and, more importantly, underlie our most hallowed descriptions of nature, such as quantum theory.

I am thankful to the black hole: an endless laboratory of the most bizarre and fantastic wonders; trapping light but letting information escape; providing the ultimate playground for spacetime curvature; working relentlessly over billions of years as a clearinghouse and organizing principle for the universe’s wayward children; proving that the freaks of the cosmos are in fact the soul food of its very existence.

I am thankful for time: that elusive entity which, in the physicist John Wheeler’s words, “keeps everything from happening all at once”; which waits for no one and grinds kings and paupers into the same ethereal dust; whose passage magically changes children every day before our very eyes; whose very fleeting nature makes life precious and gives us the most to be thankful for.

Book review: A Divine Language: Learning Algebra, Geometry, and Calculus at the Edge of Old Age, by Alec Wilkinson

A beautifully written account of mathematics lost and found. The author became "estranged" from mathematics in school and now, at the age of 65 and after a distinguished writing career, has taken it upon himself to learn the fundamentals of algebra, geometry and calculus. The book is by turns funny and sad as Wilkinson recounts his struggles to master material that would be child's play for many bright teenagers. He is helped in his efforts by his niece Amie Wilkinson, an accomplished mathematician at the University of Chicago. I could empathize with the author, since I too had an estrangement of sorts from the subject in high school because of a cruel, vindictive teacher, and it took me until college when, thanks to brilliant and empathetic teachers, I clawed my way back to appreciating it.

But while he may struggle even with high school mathematics (and he and I share a particular loathing for word problems), Wilkinson brings to the topic a poetic, philosophical sensibility, acquired over a long career, that no 15-year-old math whiz could commit to paper. He ruminates on the platonic beauty of math and wonders whether and how some people's minds might be wired differently for it. He does not always understand how his brilliant niece Amie always "gets it", and she in turn doesn't always understand why her uncle has trouble with ideas that are second nature to her.

Often quoting from eloquent mathematicians and physicists like Bertrand Russell, G. H. Hardy and Roger Penrose, Wilkinson brings a fresh, beautiful perspective to the utility and beauty of mathematics; to the struggle inherent in mastering it and the rewards that await those who persevere. I would highly recommend the book to those who may have lost faith in mathematics in high school and want to pick up some of the concepts later, or even to young students of math who may be wizards at solving equations but who might want to acquire a broader, more philosophical perspective on this purest of human endeavors.

Temple Grandin vs algebra

There's a rather strange article by Temple Grandin in the Atlantic, parts of which had me vigorously nodding my head and parts of which had my eyebrows crawling straight up. It's a critique of how our school system's one-size-fits-all approach does a lot of students a disservice, but it more specifically takes aim at algebra.

First, let me say how much I admire Temple Grandin. A remarkable woman who was diagnosed with severe autism as a child (there's a very good profile of her in Oliver Sacks's "An Anthropologist on Mars"), she rose above her circumstances and channeled her unusual abilities into empathy for animals, becoming one of the world's leading experts in the design of humane housing and conditions for livestock. She has without a doubt demonstrated the value of what we can call 'non-standard' modes of thinking, teaching and learning that utilize visual and tactile ability. So she starts off strong enough here:

As a professor of animal science, I have ample opportunity to observe how young people emerge from our education system into further study and the work world. As a visual thinker who has autism, I often think about how education fails to meet the needs of our very diverse minds. We are shunting students into a one-size-fits-all curriculum instead of nurturing the budding builders, engineers, and inventors that our country needs.

So far so good. In fact, let me digress a bit here. When I was in high school I was very good at geometry but terrible at algebra; I still remember one midterm where I got an A in geometry, in fact the highest points-based grade in the class, but almost flunked algebra. It took me a long time to claw back to a position where algebra made sense to me. This appreciation of visual explanations was part of what drew me to chemistry, so I understand perfectly what Grandin is saying about being sympathetic to students who might have more of a visual bent.

But further down the page she takes a detour into the evils of algebra that doesn't make sense to me. Again, some of what she says is spot on; for instance, the fact that algebra (and math in general) can be taught much better if you relate it to the real world. Too often it's presented simply as abstraction and symbol manipulation. But then there's this:

Cognitive skills may simply not be developed enough to handle abstract reasoning before late adolescence, which suggests that, at the very least, we’re teaching algebra too early and too fast. But abstract reasoning is also developed through experience, which is a good argument for keeping all those extracurriculars.

This passage makes more of a case for tying algebra to specific real-world applications than for doing away with the abstractions per se. The fact of the matter is that math is abstract; in fact it's precisely this abstraction that makes it a powerful general tool. There are good and bad ways of teaching that abstraction, but the solution isn't to get rid of it or delay it. That kind of thinking feeds into the popular belief in some quarters these days that algebra and calculus should both be optional classes.

It's when she gets to the end of the piece, however, that Grandin completely loses me:

"No two people have the same intelligence, not even identical twins. And yet we persist in testing—and teaching—people in the same way. We don’t need Americans to be better at algebra, per se. We need future generations that can build and repair infrastructure, overhaul energy and agriculture, develop robotics and AI. We need kids who grow up with the imagination to invent the solutions to pandemics and climate change. When school fails them, it fails all of us."

Say what? Building and repairing infrastructure, overhauling energy and agriculture and - especially - developing robotics and AI do not need algebra? Most of these professions in fact require a very solid grounding in algebra and calculus, abstractions included. I think Grandin slides rather easily from saying that algebra should be taught better to saying that we should get rid of it or make it optional. Those are two very different things.

My concern, based on this article and others I have been reading lately, is that in our drive to reform the system we are tempted to declare algebra itself unnecessary. That is a grave mistake. Algebra and calculus, and for that matter music and art, are things that, even beyond the practical utility of the first two, help us better appreciate our place in society and the cosmos and in general teach us to be more human. Teach them better we certainly should, but let's not burn the building down in our zeal.

David McCullough (1933-2022)

I have been wanting to write about David McCullough, who passed away recently and whose writings I always enjoyed. McCullough was arguably one of the finest popular historians of his generation. His biographical portraits and writings were wide-ranging, covering a variety of eras: from "1776" and "John Adams" about the revolutionary period, through "The Great Bridge" about the building of the Brooklyn Bridge in the 1870s and '80s, to "Truman" about Harry Truman's life and presidency. "Truman" is in fact the best presidential biography I have read. In spite of its size it never bogs down, and it paints a fair and balanced portrait of the farmer from Missouri who became an unlikely and successful president.

McCullough's writing style and approach to history warrant some discussion. He was what you would call a gentleman writer: amiable, avuncular, genteel, not one to kick up dust or to engage in hard-hitting journalism; the opposite of Howard Zinn. Although his writing was balanced and he stayed away from hagiography, it was also clear that he was fond of his subjects, and that fondness might sometimes have kept him from a completely objective, critical approach.

That style opened him up to criticism. For instance, "The Pioneers", which described the opening up of the Ohio country under the Northwest Ordinance of 1787 engineered by Manasseh Cutler, came under scrutiny for its omission of the brutal and unfair treatment of Native Americans in the new territory. The ordinance was actually quite revolutionary for its time, since it outlawed slavery in the territory and effectively laid the fuse for the divisions between slave and free states in the 1850s and the ensuing Civil War. McCullough did emphasize this positive aspect of the ordinance, but not its negative repercussions for Native Americans.

That streak is emblematic of his other writings. He never shied away from the evils of slavery, treatment of Native Americans or oppression of women, but his gaze was always upward, toward the better angels of our nature. Most characteristic of this style is his "The American Spirit", written at a fraught time in this country's history. As I mentioned in my review of the book, McCullough's emphasis is on the positive aspects of this country's founding and the founders' emphasis on individual rights and education, even if some of them personally fell short of observing those rights for others.

While I understand that McCullough might have had a bias toward the better parts of this country's history, I think that's the right approach, especially today. That is because I think a lot of Americans on both sides have acquired a strangely and fundamentally pessimistic view of both our past and our future. They seem to think that the country was born and steeped in sin that cannot be expiated. This is a very flawed perspective in my opinion. Perhaps as an immigrant I am more mindful of the freedoms and gifts that this country has bestowed on me, freedoms that are still unique compared to many other countries, but I share McCullough's view that whatever the substantial sins this country was born in and perpetuated, its moral arc, as Martin Luther King would say, has always bent upward and toward justice. In many ways the United States through its constitution laid the foundations for democracy and freedom that have been emulated, in big and small ways, by most of the world's successful democracies. The leaders and activists of this country were themselves mindful that their country was not conforming to the perfect union described in its founding documents.

Progress has not been linear, certainly, but it has been steady. I think it's appropriate to complain that some aspects of progress should have taken much less time than they did - unlike many other countries, the United States still has not had a female president, for instance - but that's different from saying that progress was made only by certain groups of people or that it wasn't made at all. As just one example, while African-Americans took the lead in the civil rights movement, there was no dearth of white Americans - religious activists like Benjamin Lay, firebrand speakers like William Lloyd Garrison and women suffragists - who also wanted to end slavery. In addition, as David Hackett Fischer details in his monumental new study of black Africans' contributions to the country's early years, black and white people often worked hand in hand to achieve big and small gains for slaves and freedmen alike. Recognizing this unity in diversity - E pluribus unum - is central to recognizing the essence of America.

The United States has been a melting pot of different kinds and dimensions since before its founding, and all elements of this melting pot helped shape progressive views in this country. To privilege only certain elements does a disservice to the diversity that this country has exemplified. David McCullough knew this. He distinguished himself by telling us in his many writings how a constant stream of progressive forces emerged from all quarters of society, including all races and economic classes, to help this country implement its founding ideals of liberty and equality. Even when the sky appeared darkest, as happened often in our history, these forces provided the proverbial silver lining for all of us to aspire to. We need more of that sentiment today. McCullough will be missed, but his writings should provide a sure guide.

Book review: "The Apocalypse Factory: Plutonium and the Making of the Atomic Age", by Steve Olson

In the history of the Manhattan Project, Los Alamos has always been the star, while Hanford and Oak Ridge, where plutonium and uranium respectively were produced, have been supporting actors. Steve Olson's goal is to make the case that Hanford was, in retrospect, the most important site of the three. Its product, plutonium, is now the element of choice in the vast majority of the world's nuclear arsenals. And its production has left behind an environmental catastrophe beyond reckoning.

Olson has written a lively and thought-provoking book about the "devil's element" and the global catastrophe and promise it has bred. Olson's account especially shines in the first half as he describes Glenn Seaborg, Joseph Kennedy and Arthur Wahl discovering plutonium at Berkeley in early 1941. Very quickly plutonium's promise became clear: unlike uranium, whose rare fissionable isotope (uranium-235) took herculean efforts to separate from its far more abundant cousin (uranium-238), plutonium, being a different element altogether, could be separated from its parent uranium by relatively simple chemical means. It was also clear that plutonium could be fissioned more efficiently than uranium, so less of it was needed to build a bomb; if this elementary fact of nature had not been true, enough plutonium would never have been produced in time for the bomb that destroyed Nagasaki, and the world's nuclear arsenals might have looked very different. As it turned out, while the Hiroshima bomb needed about 140 pounds (63 kilograms) of uranium, the Nagasaki bomb needed only about 13 pounds (6 kilograms) of plutonium. It is still stupendous and terrifying to think that an amount of plutonium that could be carried as a cube about 3 inches on a side can destroy an entire city.
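As a rough check on that cube (my own back-of-the-envelope numbers, assuming about 6 kilograms of plutonium and a density of roughly 19.8 g/cm³ for the pure metal; the actual bomb core was an alloy, so this is only approximate):

$$V \approx \frac{6000\ \text{g}}{19.8\ \text{g/cm}^3} \approx 300\ \text{cm}^3, \qquad \ell \approx V^{1/3} \approx 6.7\ \text{cm} \approx 2.6\ \text{in}$$

which is consistent with the "about 3 inches on a side" figure.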
The first hulking reactor at Hanford (the B Reactor) went up soon afterward under the watchful eyes of Enrico Fermi, Eugene Wigner and the DuPont company; the first batch of plutonium from Hanford was produced at the beginning of 1945. Olson's book has amusing accounts of the differences in philosophy between the DuPont engineers and the physicists; the engineers thought the physicists considered everything too simple, the physicists thought the engineers made everything too complex. Of special note was Crawford Greenewalt, a bright young engineer who had married into the DuPont family and who orchestrated DuPont's building of the reactor. Somehow peace was brokered and the warring factions worked well together during the rest of the war. The plutonium in the Nagasaki bomb came from Hanford, its high spontaneous fission rate necessitating a revolutionary new design - implosion - used in that bomb and pretty much all its successors.
Olson's account of the Nagasaki mission is gripping. The poor city was only the third choice after Hiroshima; Kokura, the second choice and the primary target that day, turned out to have significant cloud cover. So did Nagasaki, but at that point the crew of 'Bockscar', the B-29 bomber delivering the bomb, made a last-minute decision to bomb in spite of lacking the visual sighting of the target that had been mandated. After the war, even Manhattan Project chief General Leslie Groves, who never publicly regretted the bombings, said privately that he did not think Nagasaki was necessary.
As the Cold War heated up, the Hanford site became the principal source of plutonium for the tens of thousands of nuclear weapons that were to fill the missiles, bombers and submarines of the United States, a number many times what was necessary to bring about the destruction of the entire planet in a nuclear exchange between the two superpowers. The reactors were powered down in the 60s and early 70s, only to be powered up again during the hawkish administration of Ronald Reagan. There was another kind of destruction wrought during their operation. In the haste to make plutonium, billions of gallons and pounds of toxic radioactive and chemical sludge and waste were stored in makeshift steel tanks underground, and some of this effluent was released into the mighty Columbia River. The scientists, engineers and politicians who made Hanford did not quite understand the profoundly difficult long-term problem that these long-lived radioactive materials would pose for humanity. Even today the Hanford site is often referred to as one of the most contaminated sites in the world, and it is estimated that it could cost up to $640 billion to clean it up.
With plutonium also came jobs and families and hospitals and schools. Olson, who grew up in the area, talks about the complicated relationship that people whose fathers and grandfathers and grandmothers worked on the reactors have with the site. On one hand, they are proud that their work contributed to the end of World War 2 and preserved America's edge and possibly survival during the Cold War; on the other hand, they worry about the bad reputation the site has gotten as the principal protagonist in creating weapons of mass destruction. Most of all, they worry about the potential cancers that they think the contaminated site might have caused. As Olson documents, studies have found tenuous links at best between the radiation at the site and cancer rates, but it's hard to convince people who believe that any amount of radiation must be bad.
Today the Hanford site is part of the Manhattan Project National Historical Park, which also encompasses Oak Ridge and Los Alamos (I have been wanting to go on a tour for a long time). The B Reactor no longer produces the devil's element. Instead it is a mute testament to humankind's discovery of the means of its own destruction. That nuclear weapons have never been used in anger since August 1945 might elevate it in the future to an importance that we cannot yet gauge.

The root of diverse evil

It wasn’t very long ago that I was rather enamored of the New Atheist movement, whose most prominent proponent was Richard Dawkins. I remember having marathon debates with a religious roommate of mine in graduate school about religion as the “root of all evil”, as the producers of a documentary featuring Dawkins called it. Dawkins and his colleagues made the point that no belief system in human history is as all-pervasive in its ability to cause harm as religion.

My attitude toward religion started changing when I realized that what the New Atheists were criticizing wasn’t religion but a caricature of religion that was all about faith. Calling religion the “root of all evil” was also a bad public relations strategy since it opened up the New Atheists to obvious criticism – surely not all evil in history has been caused by religion? But the real criticism of the movement goes deeper. Just like the word ‘God’, the word ‘religion’ is a very broad term, and people who subscribe to various religions do so with different degrees of belief and fervor. For most moderately religious people, faith is a small part of their belonging to a religion; rather, it’s about community and friendship and music and literature and what we can broadly call culture. Many American Jews and American Hindus for instance call themselves cultural Jews or cultural Hindus.

My friend Freeman Dyson made this point especially well, and he strongly disagreed with Dawkins. One of Freeman’s arguments, with which I still agree, was that people like Dawkins set up an antagonistic relationship between science and religion that makes it seem like the two are completely incompatible. Now, irrespective of whether the two are intellectually compatible or not, it’s simply a fact that they aren’t incompatible in practice, as evidenced by scores of scientists throughout history, like Newton, Kepler and Faraday, who were both undoubtedly great scientists and devoutly religious. These scientists satisfied one of the popular definitions of intelligence – the ability to simultaneously hold two opposing thoughts in one’s mind.

Dyson thought that Dawkins would make it hard for a young religious person to consider a career in science, which would be a loss to the field. My feelings about religion as an atheist are still largely the same: most religion is harmless if it’s practiced privately and moderately, most religious people aren’t out to convert or coerce others, and most of the time science and religion can be kept apart, except when they tread into each other’s territory (in that case, as with young earth creationism, scientists should fight back as vociferously as they can).

But recently my feelings toward religion have soured again. A reference point for this change is a particularly memorable quote by Steven Weinberg, who said, “Without religion good people will do good things and bad people will do bad things. But for good people to do bad things, that takes religion.” Weinberg got a lot of flak for this quote, and I think it’s because of a single word in it that causes confusion. That word is “good”. If we replace that word with “normal” or “regular”, his quote makes a lot of sense: “For normal people to do evil or harm, that takes religion.” What Weinberg is saying is that people who are otherwise reasonable and uncontroversial and boring in their lives will do something exceptionally bad because of religion. This discrepancy is not limited to religious ideology – the Nazis at Auschwitz were also otherwise “normal” people who had families and pets and hobbies – but religious ideology, because of its unreason and reliance on blind faith, seems to pose a particularly all-pervading example. Religion may not be the root of all evil, but it may well be the root of the most diverse evil.

I was reminded of Weinberg’s quote when I read about the shocking attack on Salman Rushdie a few weeks ago. Rushdie famously had to go into hiding for a long time and abandon any pretense of a normal life because of an unconscionable death sentence, or fatwa, issued against him by Ayatollah Khomeini of Iran. Rushdie’s attacker is a 24-year-old man named Hadi Matar who was born in the United States but was radicalized after a trip to Lebanon to see his father. By many accounts, Matar was a loner but otherwise a normal person. The single enabling philosophy that motivated him to attack and almost kill Rushdie was religious. As Weinberg would say, without religion he would have just been another disgruntled guy, but it was religion that gave him a hook to hang his toxic hat on. Even now Matar says he is “surprised” that Rushdie survived. He also says that he hasn’t even read the controversial ‘Satanic Verses’ which led to the edict, which just goes to show what intellectually vacuous, mindless sheep the religiously motivated can be.

I had the same feelings, even more strongly felt, when I looked up the stories of the Boston Marathon bombers, the brothers Dzhokhar and Tamerlan Tsarnaev. By any account theirs should have been the quintessential American success story: both were brought to this country young, as part of an ethnic Chechen family displaced by war, placed in one of the most enlightened and progressive cities in the United States (Cambridge, MA) and given access to great educational resources. What, if not religious ideology, would lead them to commit such mindless, horrific acts against innocent people? Both Matar and the marathon bombers are perfect examples of Weinberg’s adage – it was religion that led them down a dark path and made the crucial difference.

The other recent development that has made me feel depressed about the prospects for peace between religion and secularism is the overturning of Roe v. Wade by the United States Supreme Court. In doing so, the Supreme Court overturned a precedent with which a significant majority (often cited as at least 60%) of Americans agree. Whatever the legal merits of the court’s decision, there is little doubt that the buildup to this deeply regressive decision was driven primarily by a religious belief that considers life to begin at conception. It’s a belief without any basis in science; in fact, as Carl Sagan and Ann Druyan wrote many years ago, if you factor in the science, then Roe v. Wade would seem to have drawn the line at roughly the right point, when the fetus develops a nervous system and really distinguishes itself as human. In fact one of the tragedies of overturning Roe v. Wade is that the verdict struck a good balance between respecting the wishes of religious moderates and taking rational science into account.

But Evangelical Christians in the United States, of whom there has been a dwindling and therefore proportionately bitter and vociferous number in recent years, don’t care about such lowly details as nervous systems (although they do seem to care about heartbeats, which ironically aren’t unique to humans). For them, all there is to know about when life begins has been written in a medieval book. Lest there be any doubt that this consequential decision by the court was religiously motivated, it’s worth reading a recent, detailed analysis by Laurence Tribe, a leading constitutional scholar. He convincingly argues that the Catholic justices’ arguments were in fact rooted in the view that life begins at conception, a view on which the constitution is silent but religion has plenty to say.

The grim fact that we who care about things like due process and equality are dealing with here is that a minority of religious extremists continues to foist extremely regressive views on the majority of us who reject those views to different degrees. For a while it seemed that religiosity was declining in the United States. But now it appears that those of us who found this trend reassuring were too smug; it’s not the numbers of the religious that have mattered but the strength of their convictions, crucially applied over time like water dripping on a stone to wear the system down. And that’s exactly what they have wanted.

The third reason why I am feeling rather bitter about religion is a recent personal experience. I was invited to a religious event at an extremely devout friend’s place. I will not note the friend’s religion or denomination, to keep the story general and to avoid bias; similar stories could be told about any religion. My friend is a kind and intelligent man, and while I usually avoid religious events, I made an exception this time because I like him and also because I wanted to observe the event, much as an anthropologist would observe the customs of another tribe. What struck me from the beginning was the lack of inclusivity in the event. We were not supposed to go into certain rooms, touch certain objects or food, take photos of them or even point at them. We were supposed to speak in hushed tones. Most tellingly, we weren’t supposed to shake hands with my friend or touch him in any way because he was conducting the event in a kind of priestly capacity. What social and historical contexts in more than one society this behavior evokes, I do not need to spell out.

Now, my friend is well-meaning and was otherwise very friendly and generous, but all these actions struck me as emblematic of the worst features of religion, features meant to draw boundaries and divide the world into “us” and “them”. And the experience was again emblematic of Weinberg’s quote – an otherwise intelligent, kind and honest person was practicing strange, exclusionary customs because his holy book told him to do so, customs that otherwise would have been regarded as odd and even offensive. For normal people to do strange things, that takes religion.

Fortunately, these depressing thoughts about religion have, as their counterpart, hopeful thoughts about science. Everything about science makes it a different system. Nobody will issue a fatwa in science because a scientist says something that others disagree with or even find offensive, because if the scientist is wrong, the facts will decide the matter one way or another. Nobody will carry out a decades-long vendetta to overturn a rule or decision that the majority believes is supported by the data. And certainly nobody will try to exclude anyone from doing a scientific experiment or proposing a theory just because they don’t belong to a particular tribe. All this is true even though science has its own priesthoods and has historically practiced forms of exclusion at one time or another. Scientists have their biases as much as anyone else – witness the right’s denial of climate science and the left’s opposition to parts of genetics research – but the great thing about science is that slowly but surely, it’s the facts about the world that decide truths, not authority or majority or minority opinion. Science is the greatest self-correcting system discovered by human beings, while religion allows errors to propagate for generations and centuries by invoking authority and faith.

Sadly, these recent developments have shown us that the destructive passions unleashed by religious faith continue to proliferate. Again and again, when those of us who value rationality and science think we have reached some kind of understanding with the religious, or think that the most corrosive effects of religion are waning, along comes a Hadi Matar to try to end the life of a Salman Rushdie, and along comes a cohort of religious extremists to overturn the will of the majority. Religion may not be the root of all evil, but it is the root of a lot of evil, and undoubtedly of the most diverse evil. That’s reason enough to oppose it with all our hearts and minds. It’s time to loudly sound the trumpets of rationalism and the scientific worldview again.

First published on 3 Quarks Daily.

Book review: "Unraveling the Double Helix: The Lost Heroes of DNA", by Gareth Williams.

Newton famously said that science progresses by standing on the shoulders of giants. But his often-quoted statement applies even more broadly than he thought. A case in point: when it comes to the discovery of DNA, how many have heard of Friedrich Miescher, Fred Griffith or Lionel Alloway? Miescher was the first person to isolate DNA, from the pus-soaked bandages of patients. Fred Griffith performed the crucial experiment proving that a ‘transforming principle’ was somehow passing from a virulent dead bacterium to a non-virulent live one, magically rendering the non-virulent strain virulent. Lionel Alloway came up with the first expedient method for isolating DNA, by adding alcohol to a concentrated solution.

In this thoroughly engaging book, Gareth Williams brings these and other lost heroes of DNA to life. The book spans the first 85 years of DNA research and ends with Watson and Crick’s discovery of the structure. There are figures both well-known and obscure here. Along with those mentioned above, there are excellent capsule histories of Gregor Mendel, Thomas Hunt Morgan, Oswald Avery, Rosalind Franklin, Maurice Wilkins and, of course, James Watson and Francis Crick. The book traces a journey through a variety of disciplines, most notably biochemistry and genetics, that were key to deciphering the structure of DNA and its role in transmitting hereditary characteristics.
Williams’s account begins with Miescher’s isolation of DNA from pus bandages in 1869. At that point in time, proteins were well-recognized, and all proteins contained a handful of elements like carbon, nitrogen, oxygen and sulfur. The one element they did not contain was phosphorus. It was Miescher’s discovery of phosphorus in his extracts that led him and others to propose the existence of a substance they called ‘nuclein’ that seemed ubiquitous in living organisms. The two other towering figures in the biochemical history of DNA are the German chemist Albrecht Kossel and the Russian-born American chemist Phoebus Levene. They figured out the exact composition of DNA and identified its three key components: the sugar, the phosphate and most importantly, the four bases (adenine, cytosine, thymine and guanine). Kossel was such a revered figure that his students led a torchlight procession through the streets from the train station to his lab when he came back to Heidelberg with the Nobel Prize.
Levene’s case is especially interesting since his identification of the four bases set DNA research back by years, perhaps decades. Because there were only four bases, he became convinced that DNA was too simple to be the hereditary material. His ‘tetranucleotide hypothesis’, which held that DNA could only be a monotonous repeat of the four bases, doomed its candidacy as a viable genetic material for a long time. Most scientists continued to believe that only proteins could be complex enough to be the stuff of heredity.
Meanwhile, as the biochemists were unraveling the chemical nature of DNA, the geneticists were making advances of their own. Williams has a brisk but vivid description of the lone monk Gregor Mendel toiling away at thousands of meticulous experiments on pea plants in his monastery in the Moravian town of Brünn. As we now know, Mendel was fortunate in picking the pea plant, since it breeds true. His faith in his own work was shaken toward the end of his life when he tried to replicate his experiments with the hawkweed plant, whose genetics are more complex. Tragically, Mendel’s notebooks and letters were burnt after his death, and his work was forgotten for thirty years before being resurrected independently by three scientists, all of whom tried to claim credit for the discovery. The other major figure in genetics during the first half of the 20th century was Thomas Hunt Morgan, whose famous ‘fly room’ at Columbia University carried out experiments showing that hundreds of genes occupy precise locations on chromosomes. In his lab there was a large pillar on which Morgan and his students drew the locations of new genes.
From the work of Mendel, Morgan, Levene and Kossel we move on to New York City, where Oswald Avery, Colin MacLeod and Maclyn McCarty at the Rockefeller Institute and the sharp-tongued, erudite Erwin Chargaff at Columbia made two seminal discoveries about DNA. Avery and his colleagues showed that DNA is in fact the ‘transforming principle’ that Fred Griffith had identified. Chargaff showed that in DNA the amounts of adenine and thymine, and of guanine and cytosine, are essentially equal. Williams says in the epilogue that of all the people who were potentially robbed of Nobel Prizes for DNA, the two most consequential were Avery and Griffith.
By this time, along with biochemistry and genetics, x-ray crystallography had started to become very prominent in the study of molecules: by shining x-rays on a crystal and interpreting the resulting diffraction pattern, scientists could potentially figure out the structure of the molecule at an atomic level. Williams provides an excellent history of this development, starting with the Nobel Prize-winning father-son duo of William Henry and William Lawrence Bragg (who, at 25, remains the youngest science Nobel laureate) and continuing with other pioneering figures like J. D. Bernal, William Astbury, Dorothy Hodgkin and Linus Pauling.
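The principle behind the technique can be compressed into Bragg’s law – standard textbook physics, summarized here only in outline:

$$ n\lambda = 2d\sin\theta $$

X-rays of wavelength λ reflecting off parallel planes of atoms spaced a distance d apart interfere constructively only at glancing angles θ that satisfy this relation, with n an integer. Measure the angles at which bright spots appear and you can work backwards to the spacings, and ultimately the arrangement, of the atoms in the crystal.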
Science is done by scientists, but it’s made possible by science administrators. Two major characters star in the DNA drama as science administrators par excellence. Both had their flaws, but without the institutions they set up to fund and encourage biological work, it is doubtful whether the men and women who discovered DNA and its structure would have made their discoveries when and where they did. William Lawrence Bragg repurposed the famed Cavendish Laboratory at Cambridge University – where Ernest Rutherford had reigned supreme – for crystallographic work on biological molecules. A parallel effort was started at King’s College in London by John Randall, a physicist who had played a critical role in Britain’s effort to develop radar during World War 2. While Bragg recruited Max Perutz, Francis Crick and James Watson for his group, Randall recruited Maurice Wilkins, Ray Gosling and Rosalind Franklin.
One of the strengths of Williams’s book is that it resurrects the role of Maurice Wilkins, who is often regarded as the least important of the Nobel Prize-winning trio of Watson, Crick and Wilkins. In fact, it was Wilkins and Gosling who took the first x-ray photographs of DNA that seemed to indicate a helical structure. Wilkins was also convinced that DNA, and not protein, was the genetic material when that view was still unfashionable; he passed on his infectious enthusiasm to Crick and Watson. But even before his work, the Norwegian crystallographer Sven Furberg had been the first to propose a helix – although a single one – as the structure of DNA, based on its density and other properties. A key feature of Furberg’s model was that the sugar and the base were perpendicular to each other, which is in fact the case in DNA.
The last third of the book deals with the race to discover the precise structure of DNA. This story has been told many times, but Williams tells it exceptionally well and especially drives home how Watson and Crick were able to stand on the shoulders of many others. Rosalind Franklin comes across as a fascinating, complex, brilliant and flawed character. There is no doubt that she was an exceptional scientist struggling to make herself heard in a male-dominated establishment, but it is also true that her prickly and defensive personality made her hard to work with. Unlike Watson, she was reluctant to build models, perhaps because she had identified a fatal flaw in one of the pair’s earlier models. It’s not clear how close Franklin came to identifying DNA as a helix; experimentally she came close, but psychologically she seemed hesitant, bouncing back and forth between helical and non-helical structures.
So what did Watson and Crick have that the others did not? As I described in a post written a few years ago on the 70th anniversary of the DNA structure, many others were in possession of key parts of the evidence, but only Watson and Crick put it all together and compulsively built models. In this sense it was very much like the parable of the blind men and the elephant; only Watson and Crick felt their way around the entire animal and saw how it was put together. Watson’s key achievement was recognizing the precise base pairing: adenine with thymine and guanine with cytosine. Even here he was helped by the chemist Jerry Donohue, who corrected a key chemical feature of the bases (organic chemists will recognize it as keto-enol tautomerism). Also instrumental were Alec Stokes and John Griffith. Stokes was a first-rate mathematician who, using the theory of Bessel functions, worked out the diffraction pattern that would correspond to a helix; Crick, a physicist well-versed in the mathematics of diffraction, instantly understood Stokes’s work. Griffith was a first-rate quantum chemist who figured out, independently of Donohue, that A would pair with T and G with C. Before the advent of computers and of what are called ab initio quantum chemical techniques, this was a remarkable achievement.
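The result Stokes derived – published independently by Cochran, Crick and Vand in 1952 – can be sketched in a line, with the caveat that this is the standard form of the theory rather than anything quoted from Williams. The Fourier transform of a continuous helix of radius r and pitch P is confined to ‘layer lines’ at heights n/P in reciprocal space, and on the n-th layer line its amplitude varies as a Bessel function of order n:

$$ F_n(R) \propto J_n(2\pi R r) $$

where R is the distance from the meridian of the diffraction pattern. Since J_n reaches its first peak farther from the meridian as n grows, the layer lines light up along two diagonals, producing the X-shaped cross that is the telltale signature of a helix in an x-ray photograph.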
With Chargaff’s knowledge of the constancy of base ratios, Donohue’s precise base structures, Franklin and Gosling’s x-ray measurements and Stokes’s mathematics of helix diffraction patterns, Watson and Crick had all the information they needed to try out different models and cross the finish line. No one else had this entire map of information at their disposal. The rest, as they say, is history.
I greatly enjoyed reading Williams’s book. It is perhaps the best book on the DNA story that I have read since Horace Freeland Judson’s “The Eighth Day of Creation”. Even characters I was familiar with come newly to life as flawed, brilliant human beings with colorful lives. The account shows that many major and minor figures made important discoveries about DNA. Some came close to figuring out the structure but never made the leap, either because they lacked data or because of personal prejudices. Taken as a whole, the book showcases the intrinsically human story and the group effort, playing out over 85 years, at the heart of one of the greatest discoveries humanity has made. I highly recommend it.