Field of Science

Areopagitica and the problem of regulating AI

How do we regulate a revolutionary new technology with great potential for harm and good? A 380-year-old polemic provides guidance.

In 1644, John Milton wrote a speech addressed to the English Parliament, arguing in favor of the unlicensed printing of books and against a proposed bill to restrict their contents. Published as “Areopagitica”, Milton’s speech became one of the most brilliant defenses of free expression.

Milton rightly recognized the great potential of books and the danger of smothering that potential by restricting them before they were published. He did not mince words:

“For books are not absolutely dead things, but …do preserve as in a vial the purest efficacy and extraction of that living intellect that bred them. I know they are as lively, and as vigorously productive, as those fabulous Dragon’s teeth; and being sown up and down, may chance to spring up armed men….Yet on the other hand unless wariness be used, as good almost kill a Man as kill a good Book; who kills a Man kills a reasonable creature, God’s Image; but he who destroys a good Book, kills reason itself, kills the Image of God, as it were in the eye. Many a man lives a burden to the Earth; but a good Book is the precious life-blood of a master-spirit, embalmed and treasured up on purpose to a life beyond life.”

Apart from stifling free expression, the fundamental problem of regulation, as Milton presciently recognized, is that the good effects of any technology cannot be cleanly separated from the bad effects; every technology is what we would now call dual-use. Referring all the way back to Genesis and original sin, Milton said:

“Good and evil we know in the field of this world grow up together almost inseparably; and the knowledge of good is so involved and interwoven with the knowledge of evil, and in so many cunning resemblances hardly to be discerned, that those confused seeds which were imposed upon Psyche as an incessant labour to cull out, and sort asunder, were not more intermixed. It was from out the rind of one apple tasted, that the knowledge of good and evil, as two twins cleaving together, leaped forth into the world.”

In important ways, “Areopagitica” is a blueprint for controlling potentially destructive modern technologies. Freeman Dyson applied the argument to propose commonsense legislation in the field of recombinant DNA technology. And today, I think, the argument applies cogently to AI.

AI is such a new technology that its benefits and harms are largely unknown and hard to distinguish from each other. Sometimes the distinction itself is clear. For instance, image recognition can be used for all kinds of useful applications ranging from weather assessment to cancer cell analysis, but it can be and is used for surveillance. Even in that case, where we know which uses are good and which are bad, it is not possible to separate one from the other. But more importantly, as image recognition demonstrates, it is impossible to know what exactly AI will be used for until there is an opportunity to see some of its real-world applications. Restricting AI before these applications are known will almost certainly ensure that the good applications are stamped out along with the bad.

It is in the context of Areopagitica and the inherent difficulty of regulating a technology before its potential is known that I find myself concerned about some of the new government regulation being proposed for AI, especially California Bill SB-1047, which has already passed the state Senate and made its way to the Assembly, with a proposed decision date at the end of this month.

The bill proposes commonsense measures for AI, such as more transparent cost accounting and documentation. But it also imposes what seem like arbitrary restrictions on AI models. For instance, it would require regulation and paperwork for models which cost $100 million or more per training run. While this would exempt companies which train cheaper models, the problem in fact runs the other way: nothing stops cheaper models from being used for nefarious purposes.

Let’s take a concrete example: in the field of chemical synthesis, AI models are increasingly used to do what is called retrosynthesis, which is to virtually break down a complex molecule into its constituent building blocks and raw materials (as a simple example, a breakdown of sodium chloride into sodium and chlorine would be retrosynthesis). One can use retrosynthesis algorithms to find the cheapest or the most environmentally friendly route to a target molecule like a drug, a pesticide or an energy material. And run in reverse, the algorithm does forward planning, predicting from a set of building blocks what the resulting target molecule would look like. But nothing stops the algorithm from doing the same analysis on a nerve gas or a paralytic or an explosive; it’s the same science and the same code. Importantly, much of this capability is now available as software that runs on a laptop, with models trained on datasets of millions of data points: small potatoes in the world of AI. Almost none of these models cost anywhere close to $100 million, which puts their use in the hands of small businesses, graduate students and – if and when they choose to use them – malicious state and non-state actors.
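To make the dual-use point concrete, here is a deliberately minimal sketch of the recursive idea behind retrosynthesis planning. Everything in it – the template table, the molecule names – is hypothetical and invented for illustration; real tools search over reaction rules learned from those millions of data points.

```python
# A toy sketch of retrosynthetic analysis: recursively disconnect a target
# into precursors until only purchasable raw materials remain.
# The template table and all names are hypothetical, for illustration only.

TEMPLATES = {
    "target_molecule": ["intermediate_A", "intermediate_B"],
    "intermediate_A": ["building_block_1", "building_block_2"],
    "intermediate_B": ["building_block_3"],
}

def retrosynthesize(molecule):
    """Return the raw materials that a molecule breaks down into."""
    precursors = TEMPLATES.get(molecule)
    if precursors is None:          # no known disconnection: a raw material
        return [molecule]
    blocks = []
    for precursor in precursors:
        blocks.extend(retrosynthesize(precursor))
    return blocks

print(retrosynthesize("target_molecule"))
# ['building_block_1', 'building_block_2', 'building_block_3']
```

Swap the target for a nerve agent and nothing in the code changes; that indifference is exactly what makes pre-release restrictions so hard to aim.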

Thus, restricting AI regulation to expensive models might exempt smaller actors, but it’s precisely that exemption that would enable bad actors among them to use the technology to ill ends. On the other hand, critics are also right that the bill would effectively price out the good small actors, since they would not be able to afford the legal paperwork that the bigger corporations can. The arbitrary cap of $100 million therefore does not seem to address the root of the problem. The same issue applies to another restriction, one that is also part of the European AI regulation: capping training compute, in this case at 10^26 floating-point operations. Using the same example of the AI retrosynthesis models, it is easy to argue that such models can be run with far less computing power and would still produce useful results.

What then is the correct way to regulate AI technology? Quite apart from the details, one thing that is clear is that we should be able to experiment a bit, run laboratory-scale models and at least try to probe the boundaries of potential risks before we decide to stifle this or that model or rein in computing power. Once again Milton anticipated such sentiments. As a 17th-century intellectual it would have been a long shot for him to call for the completely free dissemination of knowledge; he would have been well aware of the blood that had been shed in religious conflicts in the Europe of his time. Instead, he proposed that there could be some checks and restrictions on books, but only after they had been published:

“If then the Order shall not be vain and frustrate, behold a new labour, Lords and Commons, ye must repeal and proscribe all scandalous and unlicensed books already printed and divulged; after ye have drawn them up into a list, that all may know which are condemned, and which not.”

Thus, Milton was arguing that books should not be stifled at the time of their creation; instead, they could be stifled at the time of their use if the censors saw a need. The creation-versus-use distinction is a sensible one when thinking about regulating AI as well. But even that distinction doesn’t completely address the issue, since the uses of AI are myriad, and most of them are going to be beneficial and intrinsically dual-use. Regulating even the uses of AI would thus entail interfering in many aspects of AI development and deployment. And what about the legal and commercial paperwork, the extensive regulatory framework and the army of bureaucrats that would be needed to enforce this legislation? The problem with legislation is that it easily oversteps its boundaries, slides down slippery slopes and gradually elbows its way into all kinds of things for which it was never intended, exceeding its original mandate. Milton shrewdly recognized this overreach when he asked what else besides printing might be up for regulation:

“If we think to regulate printing, thereby to rectify manners, we must regulate all recreations and pastimes, all that is delightful to man. No music must be heard, no song be set or sung, but what is grave and Doric. There must be licensing dancers, that no gesture, motion, or deportment be taught our youth but what by their allowance shall be thought honest; for such Plato was provided of; it will ask more than the work of twenty licensers to examine all the lutes, the violins, and the guitars in every house; they must not be suffered to prattle as they do, but must be licensed what they may say. And who shall silence all the airs and madrigals that whisper softness in chambers? The windows also, and the balconies must be thought on; there are shrewd books, with dangerous frontispieces, set to sale; who shall prohibit them, shall twenty licensers?”

This passage shows that not only was John Milton a great writer and polemicist, but he also had a fine sense of humor. Areopagitica shows us that if we are to confront the problem of AI legislation, we must do it not just with good sense but with a recognition of the absurdities which too much regulation may bring.

The proponents of AI regulation who fear the many problems the technology might create are well-meaning, but they are unduly adhering to the Precautionary Principle. The Precautionary Principle says that it’s sensible to regulate something when its risks are not known. I would like to suggest that we replace the Precautionary Principle with a principle I call the Adventure Principle. The Adventure Principle says that we should embrace risks rather than running away from them, because of the benefits which exploration brings. Without the Adventure Principle, Columbus, Cook, Heyerdahl and Armstrong would never have set sail into the great unknown, and Edison, Jobs, Gates and Musk would never have embarked on big technological projects. Just as with AI, these explorers faced a significant risk of death and destruction, but they understood that with immense risks come immense benefits, and by the rational light of science and calculation, they thought there was a good chance that the risks could be managed. They were right.

Ultimately there is no foolproof “pre-release” legislation or restriction that would cleanly stop the bad uses of models while still enabling the good ones. Milton’s Areopagitica does not tell us what the right legislation for regulating AI would look like, although it provides hints based on regulating use rather than creation. But it makes a resounding case about the problems that such legislation may create. Regulating AI before we have a chance to see what it can do would be like imprisoning a child before he grows up into a young man. Perhaps a better approach would be the one Faraday adopted when Gladstone purportedly asked him what the use of electricity was: “Someday you may tax it”, was Faraday’s response.

Some say that the potential risks from AI are too great to allow such a liberal approach. But the potential risks from almost any groundbreaking technology developed in the last few centuries – printing, electricity, fossil fuels, automobiles, nuclear energy, gene editing – are no different. The premature regulation of AI would prevent us from unleashing its potential to confront our most pressing challenges. If humanity one day finds itself grasping at last-ditch efforts to prevent its own extinction because of known problems, the irony of having smothered AI out of fear of unknown problems will be recognized too late to save us.

Daniel Dennett (1942-2024)


For a long time there's been a kind of Cold War with a slow-moving front between philosophers and scientists, especially physicists. The scientists accuse the philosophers of being as useful to the theory and practice of science as "ornithologists are to birds", as a popular saying goes. The philosophers in turn emphasize to the scientists that their disciplines, especially in the 20th and 21st centuries, have become so complex and abstract that they cannot be understood without the input of philosophy.

It is in the light of this debate especially that the death of Daniel Dennett hit so hard. Unlike most philosophers, Dennett was someone who tried to seriously grapple with the actual facts of science - in his case, evolutionary biology and neuroscience - as opposed to indulging in the fevered armchair speculation philosophy is often accused of. That engagement was on full display in the many phenomenal books he wrote, of which my favorites are "Darwin's Dangerous Idea", "Breaking the Spell" and "From Bacteria to Bach and Back".

Dennett's writing was wonderful and brilliant - extremely witty, confident, bold, even stridently so. He was one of only a handful of writers who regularly elicited moments of "Aha!" in my mind. More than almost anyone else of his generation he was unafraid of taking on bold ideas, particularly ones which would make readers uncomfortable. Whether he was arguing that consciousness is a kind of useful delusion in "Consciousness Explained" or exhorting readers to take the scientific study of religion seriously, as in "Breaking the Spell", Dennett was always provocative. I do not remember a single time when I came away from a piece of Dennett's writing without ideas and questions swirling around in my head.

This was true irrespective of whether I agreed with him or not, and there was certainly enough in his work for spirited disagreement. But this is something that needs to be pointed out especially today, when so many of us are being asked, explicitly or implicitly, to pick sides, to eschew shades of gray, to personify the "with us or against us" ethos. Dennett took his opponents' arguments seriously before politely demolishing them. Even when he mocked shoddy thinking - and there is no dearth of that kind of incisive analysis in his writings - he did so after careful consideration of his targets' positions. That quality is on full display in "Breaking the Spell", in which he takes on religious proponents with zeal and certainty, but also with careful analysis.

It was Dennett's critical take on religion that led him to be pegged as one of the four "horsemen" of the New Atheism movement, along with Richard Dawkins, Christopher Hitchens and Sam Harris. Part of what made him a member of that group was his sheer delight at the wonders of natural (as opposed to supernatural) evolution by natural selection. In fact, one of the most delightful and brilliant things he wrote showcasing the centrality of a mindless but highly creative process giving the illusion of intelligence was the following from "From Bacteria to Bach and Back":


I find that last sentence to be cleverness exemplified. But given his vast oeuvre of writings, I never thought membership in the brotherhood of the horsemen to be a particularly significant part of Dennett's intellectual identity, and from what I hear, neither did he. Instead it was just one among many facets of a life devoted to reason, understanding and debate. His books were packed with so many things apart from atheism that it would be a disservice to primarily identify him with that movement.

When I heard about Dennett's death I was about to spend some quality reading time in a coffee shop. I picked up "Breaking the Spell" and spent the next two hours engaging with that classic Dennettian blend of provocativeness, wit and wisdom. At the end, just as when I had read his works before, I felt invigorated, as if I had just had a first-class workout in a mental gym. And as before I felt like a slight shift had taken place in my consciousness, in my understanding of the world and myself. At his core, Dan Dennett was devoted to teaching us to question our deepest, most cherished beliefs and to encouraging critical thinking, no matter where it led us. In the process he made us think and feel provoked, delighted and yes, uncomfortable. Because through discomfort, whether physical or mental, comes enlightenment.

Simple, atypical but neat estimation of energy released in fission

Here is a simple, atypical but neat calculation of the energy released in fission, from Glasstone and Sesonske’s “Nuclear Reactor Engineering”. It’s a nice illustration of guesstimating based on empirical data.

"The amount of energy released when a nucleus undergoes fission can be calculated by determining the net decrease in mass, from the known isotopic masses, and utilizing the Einstein mass-energy relationship. A simple, but instructive although less accurate, alternative procedure is the following. Disregarding the neutrons involved, since they have a negligible effect on the present calculation, the fission reaction may be represented (approximately) by

Uranium-235 -> Fission product A + Fission product B + Energy.

In uranium-235, the mean binding energy per nucleon is about 7.6 Mev, so that it is possible to write
92 p + 143 n -> Uranium-235 + (235 X 7.6) Mev

where p and n represent protons and neutrons, respectively.

The mass numbers of the two fission product nuclei are mostly in the range of roughly 95 to 140, where the binding energy per nucleon is, as in tin-120, for example, about 8.5 Mev; hence,
92 p + 143 n -> Fission products A and B + (235 X 8.5) Mev

Upon subtracting the two binding energy expressions, the result is

Uranium-235 -> Fission products + 210 Mev."
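The arithmetic is a one-liner; here is a direct transcription of the passage above, nothing more:

```python
# Energy released = difference in total binding energy between the
# fission products and the original uranium-235 nucleus.
A = 235                # number of nucleons in uranium-235
be_u235 = 7.6          # mean binding energy per nucleon in U-235 (MeV)
be_products = 8.5      # mean binding energy per nucleon for A ~ 95-140 (MeV)

energy = A * (be_products - be_u235)
print(f"Energy released per fission ~ {energy:.1f} MeV")   # 211.5, i.e. ~210 MeV
```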

Book Review: "Into Siberia: George Kennan's Epic Journey Through the Brutal, Frozen Heart of Russia", by Gregory Wallance

It may seem hard to believe now, but by the time the Civil War ended in 1865, Russia was America's best friend in Europe. The two countries enjoyed a healthy diplomatic relationship, buoyed by trade and a mutual distrust of Great Britain; Russia was the only European nation to support the Union during the war. America sent formal condolences when Tsar Alexander II was assassinated; Russia did the same when Lincoln was shot.

By 1891 it was all over. American mistrust of Russia was so pronounced that all diplomatic relations had cooled. It has never been the same since. What changed? Many factors played a role, but a significant one was the publication in 1891 of a now-forgotten book by the journalist, writer and explorer George Kennan. Titled "Siberia and the Exile System", it documented in vivid detail the brutal, cruel, unsparing system of Siberian exile, inflicted by Tsarist Russia on its people for the most trivial misdemeanors.

"Into Siberia" is the vivid account by Gregory Wallance of the Ohio-born and raised George Kennan's two visits to Russia, first in the 1860s as an employee of Western Union with the mammoth goal of laying a trans-Siberian telegraph line that would connect Europe to America, and then again as a journalist formally authorized by the Tsarist regime to document the exile system in Siberia. Ironically, the Russian monarchy and government thought that Kennan's coverage of the system would invoke sympathy in the rest of the world for its need; little did they know that they were letting a fox in the henhouse.

Wallance excels at two things in particular: first, at describing the almost unbelievably stark and brutal Russian landscape, populated by neck-deep snow, fatal temperatures well below -40 degrees and fierce indigenous tribes who had had hardly any contact with their more modern countrymen; and second, at describing Kennan's epic journey into this wasteland. He is also exceedingly good at charting the stunningly inhumane treatment of prisoners and their families at the hands of the Tsar and his officials; the book opens with an unforgettable description of a pillar at the border of Siberia at which men and women cried uncontrollably, because the journey past this pillar was almost certainly one from which they would not return.

It's hard not to be thoroughly inspired by Kennan, a sickly young man who, determined to prove that he was strong of body and character, undertook the almost impossibly dangerous and exotic journey to Siberia in 1865. His letters home remind one of other brave explorers staying cheerful in the face of danger or death - Shackleton, Cherry-Garrard, Lewis and Clark. He seems like the epitome of "what does not kill you makes you stronger", deliberately laughing in the face of the most infernal of natural and human elements, braving bears, deadly storms, an endless land without direction, fierce tribes and meagre to nonexistent supplies of essential food and clothing. He had not just genuine curiosity but genuine empathy for the savage-looking tribes he met, learning their ways and their dialects and working together with them to survive, learn and rescue trapped companions. The first book he wrote after coming back, "Tent Life in Siberia", was an unprecedented account written by a sharp-eyed journalist with a gift for evocative prose, and it taught Americans about Russia.

"Siberia and the Exile System" was equally vivid. From the pillar at the Siberian border to the innermost reaches of the labor camps, Kennan was given free access by the Tsar and his regime to the prisoners and their families. What Kennan saw horrified him: men with barely anything on their backs marched for hundreds of miles - Bataan death march style - in the most inclement weather, until many of them died on the way; their wives facing an impossible choice of remaining behind and starving to death or accompanying their husbands into conditions so stark that they would starve anyway or would be raped or have to sell themselves into prostitution. The bodies of children in frozen embraces with their parents were not an uncommon sight. Perhaps worst of all were the reasons why these prisoners were condemned to hell in the first place. Most prisoners were condemned to Siberia on trumped up charges based on the flimsiest criticism of the Tsarist regime. Freedom of speech, Kennan saw, was a complete joke in Russia (sounds familiar?).

Everything that we read later about the gulag system had its origins in those horrific exile camps set up by a cruel, indifferent, repressive Russian regime. When Kennan wrote his book, Americans and Russians alike were appalled, albeit for different reasons. For the first time, Americans had their eyes opened to the reality of a country which they had considered their friend. For Russians the book was shocking for the level of detail and the convincing arguments with which Kennan exposed the crudities of their so-called civilization. Reading Kennan's account fifty years later was the best education that his more famous namesake - the American diplomat George F. Kennan of containment fame - could get. In his memoirs and writings, the younger Kennan often credits his lesser-known relative for grounding him in the realities of the Soviet Union.

After Kennan published "Siberia and the Exile System", Russian-American relations permanently deteriorated. After the murder of Tsar Nicholas II, Lenin effectively set up the state as an outlaw state, defined in opposition to the capitalist countries. It is of course impossible to escape a feeling of deja vu reading Kennan's account. There seems to be an almost unbroken thread, from Alexander through Nicholas, Lenin, Stalin and all the way to Putin, in the repression exerted by Russian strongmen and their henchmen on their own people. Reading this story of a 139-year-old tragedy, one can be forgiven for feeling pessimistic about the future of Russian democracy and human rights. While the internet and new modes of communication have alerted the rest of the world to Russian leaders' excesses, it is time for another hardy soul with George Kennan's gifts, resilience and unbounded concern for human welfare to again lay bare the soul of this vast, inscrutable land.

Jack Dunitz (1923-2021): Chemist And Writer Extraordinaire

Every once in a while there is a person of consummate achievement in a field, a person who, while widely known to workers in that field, is virtually unknown outside it and whose achievements deserve to be known much better. One such person in the field of chemistry was Jack Dunitz. Over his long life of 98 years Dunitz inspired chemists across varied branches of chemistry. Many of his papers inspired me when I was in college and graduate school, and if the mark of a good scientific paper is that you find yourself regularly quoting it without even realizing it, then Dunitz’s papers have few rivals.

Two rare qualities in particular made Dunitz stand out: simple thinking that extended across chemistry, and clarity of prose. He was the master of the semi-quantitative argument. Most scientists, especially in this day and age, are specialists who rarely venture outside their narrow areas of expertise. And it is even rarer to find scientists – in any field – who write with the clarity that Dunitz did. When asked in an interview what led to his fondness for exceptionally clear prose, his answer was simple: “I was always interested in literature, and therefore in clear expression.” Which is as good a case for coupling scientific with literary training as I can think of.

Dunitz, who was born in Glasgow and got his PhD there in 1947, had both the talent and the good fortune to have been trained by three of the best chemists and crystallographers of the 20th century: Linus Pauling, Dorothy Hodgkin and Leopold Ruzicka, all Nobel Laureates. In my personal opinion Dunitz could easily have qualified for a kind of lifetime-achievement Nobel himself. Generalist though he was, his speciality was the science and art of x-ray crystallography, and few could match his acumen in the application of this tool to structural chemistry.

X-ray crystallography was developed by physicists in the first half of the 20th century to peer inside molecules, the way x-rays and MRI peer inside the human body. Just as those two techniques tell us the locations and structures of various organs in our body, x-ray crystallography tells us where the atoms in a molecule are exactly located, what the lengths of the various bonds are and what the stoichiometry – the exact composition of a complex substance – is. If you had to point to one technique that has truly revolutionized chemistry, laying bare the entire chemical universe ranging from rocks and minerals to proteins and nucleic acids, it is x-ray crystallography. Dozens of Nobel Prizes have been awarded through the decades for figuring out the structures of increasingly complex molecules, starting with table salt and progressing through DNA, hemoglobin and the entire ribosome – the multi-component assembly that synthesizes proteins in living organisms.

One such Nobel Prize was given to James Watson and Francis Crick for figuring out the structure of DNA, a feat made possible by the world-class x-ray crystallography done on DNA by Rosalind Franklin and Raymond Gosling. Dunitz, who was working in Oxford in 1953, saw history in the making as he and a colleague drove up to Cambridge to see the ball-and-stick model of DNA, built from metal plates and tubes, that Watson and Crick had constructed. In fact, after making a suggestion to Pauling, who had figured out the fundamental structure of proteins at Caltech, Dunitz may have contributed an immortal word to the language of life:

“While my own work at Caltech had nothing to do with protein structure, Pauling used to talk to me occasionally about his models and what one could learn from them. In his lecture, he had talked about spirals. In conversation a few days later, I told him that for me the word “spiral” referred to a curve in a plane. As his polypeptide coils were three-dimensional figures, I suggested they were better described as “helices.” Pauling’s erudition did not stop at the natural sciences. He answered, quite correctly, that the words “spiral” and “helix” are practically synonymous and can be used almost interchangeably, but he thanked me for my suggestion because he preferred “helix” and declared that he would always use it henceforth. Perhaps he felt that by calling his structure a helix there would be less risk of confusion with the various other models that had been proposed earlier. In their 1950 short preliminary communication, Pauling and Corey wrote exclusively about spirals, but in the series of papers published the following year the spiral had already given way to the helix. There was no going back. A few years later we had the DNA double helix, not the DNA double spiral.”

After seeing the power of crystallography to crack open the very structure of life, Dunitz spent the rest of his career in that field at the famed ETH in Zurich, capping an incredible 64-year-long career with his death in 2021; his last paper, written when he was 96, was appropriately a critique of certain chemical terminology and titled “Bad Language“.

Dunitz was truly unusual in ranging across the broad spectrum of chemical disciplines. Organic, inorganic and biological chemistry all came within his purview, aided by the powerful interdisciplinary generality of x-ray crystallography, a tool he wielded with aplomb. Over his long career he published more than 350 scientific papers and penned several foundational books. It would be impossible to survey his entire corpus, so I will focus on three of his papers which made a deep impression on me, which I have cited and read many times over the years, and which I think showcase his striking originality in marshaling simple models and arguments across a variety of fields.

Hydrogen Bonding
Hydrogen bonds in water molecules: the hydrogens of one molecule form fleeting interactions with the oxygens of the other (Image credit: Bioninja)

Perhaps my favorite paper of Dunitz’s is a 1997 paper titled “Organic Fluorine Hardly Ever Accepts Hydrogen Bonds”. Some explication is needed here. Hydrogen bonds are fleeting bonds between hydrogen and other atoms which, while weak, are absolutely critical in keeping all kinds of molecules, including proteins and nucleic acids, together. In fact, water would not be a liquid without hydrogen bonds, and life as we know it would not exist without them. It is their very transient nature that makes hydrogen bonds “on-demand” bonds; they can be formed when needed and rapidly dissolved when no longer needed. Linus Pauling, often considered the most important chemist of the 20th century, had underscored the importance of hydrogen bonds in the 1930s in his seminal book, “The Nature of the Chemical Bond”. Typically hydrogen bonds are formed between hydrogen and what are called ‘electronegative’ atoms, ones like oxygen and nitrogen. Electronegative atoms have a particular affinity for electrons, attracting the electron clouds of atoms like hydrogen; the most common hydrogen bonds are therefore ones involving oxygen and nitrogen.

There is another element on the periodic table, a most unusual one, which should be even more powerful at forming hydrogen bonds, except that it isn’t. That element is fluorine. Fluorine is in fact the most electronegative element on the periodic table, which is why we would expect it to form hydrogen bonds with furious abandon. But while inorganic fluorine found in compounds like hydrofluoric acid – a diabolically corrosive and dangerous substance – does form these hydrogen bonds, organic fluorine (fluorine bonded to carbon, that is) found in compounds like polytetrafluoroethylene – PTFE or Teflon – does not. In fact it is precisely fluorine’s reluctance to form hydrogen bonds with water in Teflon that makes it such an effective coating for non-stick cookware.

This behavior of fluorine is what the facts indicate, but the facts in this case don’t line up well with chemical theory, which expects hydrogen-bonding tendencies to increase with electronegativity. Fortunately there is a big database of “solved” crystal structures of organic molecules that includes molecules containing fluorine; it was only waiting for the right person to come along and interpret it. Dunitz’s paper was perhaps the first to exhaustively analyze this database and then come up with a convincing chemical explanation for the counterintuitive observation that fluorine hardly ever forms hydrogen bonds. He looked at almost 6000 structures containing fluorine and determined that barely a dozen formed hydrogen bonds between fluorine and hydrogen atoms. The details of why fluorine is reluctant to form hydrogen bonds are beyond the scope of this post (and are explained in a further paper by Dunitz), but the qualitative explanation is simple: imagine that an electronegative element like oxygen has “hands” that pull others toward it. The problem with fluorine is that it is so electronegative that it simply keeps its hands to itself.
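A cartoon version of such a survey is easy to write down. The contact distances below are made up for illustration; the van der Waals radii are Bondi’s standard values, and a common simplified criterion counts a contact as a potential hydrogen bond if it is shorter than the sum of those radii:

```python
# Toy version of a crystallographic contact survey: count H...F contacts
# short enough to suggest a hydrogen bond. Distances are hypothetical.
H_VDW, F_VDW = 1.20, 1.47          # Bondi van der Waals radii (angstroms)
CUTOFF = H_VDW + F_VDW             # ~2.67 A

contacts = [2.45, 2.71, 2.89, 2.60, 3.05, 2.95]   # invented H...F distances (A)
h_bonds = [d for d in contacts if d < CUTOFF]
print(f"{len(h_bonds)} of {len(contacts)} contacts qualify as hydrogen bonds")
# 2 of 6
```

Dunitz’s actual survey applied geometric criteria to thousands of real structures, but the spirit, a simple distance test run exhaustively over existing data, is the same.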

Even today I keep meeting chemists who, based on what seems like entirely sound chemical logic, expect fluorine to form hydrogen bonds. They recommend that one make drug molecules with fluorine that would enable them to stick better to and form hydrogen bonds with proteins that they want to block, proteins that have gone haywire in cancer, for instance. It is then that I find myself waving Dunitz’s paper – sometimes literally since I still “believe” in paper copies – with the fervent enthusiasm of a preacher.

The second paper from Dunitz that I often highlight shows his masterful application of simple, semi-quantitative arguments to an important question. One of the most important things that scientists want to know when thinking about biological molecules like proteins is how they interact with water. All biological molecules are swimming in a vast sea of water; in fact water not only surrounds these molecules ubiquitously but is also an intimate participant in their behavior. Knowing the thermodynamics of this system – in particular the strength of binding between water and proteins or other molecules – is critical for engineering better drugs and proteins. Two factors are key in quantifying this binding: enthalpy and entropy. Roughly speaking, enthalpy concerns the strength of the interactions between two molecules, and entropy concerns how loosely or tightly they bind – whether they stay in place or jiggle around. While enthalpy is often easy to estimate, entropy is not.


In 1994, Dunitz wrote a one-page paper in the journal ‘Science’ titled “The Entropic Cost of Bound Water Molecules in Crystals and Biomolecules” in which, using the simplest of data and arguments, he came up with a reliable number quantifying the entropy of a single water molecule binding to a biological molecule. One of his strengths, also showcased in the fluorine paper, was his ability to look at old data and come up with new explanations. He starts by looking at data on hydrates, simple salts like zinc sulfate which are surrounded by water molecules. He also looks at old data on the thermodynamics of the melting and freezing of ice, which also gives estimates of the entropy of water molecules; and he points out something telling which is now a far more serious problem in our specialized world, namely that “this information has been available for a long time, but science has become so specialized that its practitioners in one branch are all too often unaware of what is common knowledge in another.”

How is thermodynamic information on ice, liquid water and hydrate salts relevant to what goes on with proteins? Because, as Dunitz astutely observes, this thermodynamics sets an upper limit on the entropy question for water around proteins: salts bind water molecules most tightly, so surely proteins must bind them more weakly. Using these arguments, Dunitz arrives at a value for the entropy of a bound water molecule which is now commonly used in calculations. The paper demonstrates characteristic Dunitzian strengths which should be widely emulated: scrupulous attention to existing data, including data going back decades, simple back-of-the-envelope calculations, and proof by analogy.
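The back-of-the-envelope number itself fits in a few lines. The sketch below uses the upper bound of roughly 7 cal/(mol K) commonly quoted from the paper; treat the figures as indicative rather than definitive:

```python
# Upper bound on the entropic cost of immobilizing one water molecule,
# bracketed by data on ice and salt hydrates (after Dunitz's 1994 estimate).
delta_S = 7.0                 # cal/(mol K), approximate upper bound
T = 300.0                     # kelvin, roughly ambient temperature

cost = T * delta_S / 1000.0   # T*dS, converted from cal to kcal
print(f"Free-energy cost <= ~{cost:.1f} kcal/mol per bound water")   # ~2.1
```

A ceiling of about 2 kcal/mol is the kind of number a modeler can plug straight into a binding calculation, which is exactly why the paper is still so widely used.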

The last paper among Dunitz’s great corpus of works that I will discuss exemplifies a particularly fine kind of speculative as well as interdisciplinary thinking. It questioned a fact which everyone knows but no one really thinks about: why is body temperature, for animals like humans which can regulate it, about 36 degrees Celsius, and why is it maintained across such a huge range of organisms? As we know, unless they are sick, homeothermic animals like ourselves are very efficient at regulating body heat. An explanation provided by some previous scientists pointed to the specific heat of water. Specific heat is the amount of heat required to change the temperature of a substance by one degree. Water has a very large specific heat compared to many other substances, just one of its many remarkably unusual properties. But this specific heat happens to reach its lowest value at about 36 degrees Celsius, just the body temperature mentioned above. The previous explanation held that water at this temperature was least resistant to changes in its temperature and quickly dissipated whatever heat was added to or subtracted from it.

Dunitz and his co-author, Steven Benner, found this argument “appealing, but not correct” in their response, published in the journal Nature in 1986. First, they identify what seems an obvious but overlooked problem: the smaller the specific heat, the easier it is to cause fluctuations in temperature, making it harder for an organism to survive, not easier. They also point out that the previous argument applies only to pure water; the water in living organisms is a complex aqueous mixture of water, biomolecules like proteins, and salts. So what could be responsible for the precise temperature regulation? Dunitz and Benner don’t pretend to know the answer, but they focus on two of water’s unique properties in particular: its hydrophobicity (its tendency to repel greasy, oil-like substances) and its viscosity. As temperature rises, water becomes less viscous and therefore facilitates the chemical reactions taking place in it. However, hydrophobicity also lessens with temperature, which could lead to unwanted mingling between water and greasy substances. Dunitz and Benner speculate that a temperature of 36 degrees is a Goldilocks zone, one where the viscosity is low enough for chemical reactions to occur speedily but the hydrophobicity is high enough to prevent greasy substances from dissolving too easily.
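Their first objection is just the heat equation read in the right direction: for a fixed heat input Q, the temperature change is dT = Q/(m*c), so a smaller specific heat c means larger swings, not smaller ones. A short illustration with made-up numbers:

```python
# Smaller specific heat -> larger temperature fluctuation for the same heat.
Q = 1000.0    # joules of heat absorbed
m = 100.0     # grams of liquid

for c in (4.18, 2.0):        # J/(g K): water vs a hypothetical lower-c liquid
    print(f"c = {c:.2f} J/(g K) -> dT = {Q / (m * c):.2f} K")
# c = 4.18 -> dT = 2.39 K; c = 2.00 -> dT = 5.00 K
```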

To me this paper is a superb example of informed speculation: not pretending to solve a problem, but offering a tantalizing potential solution while gently but firmly demolishing an existing explanation. It is widely believed that life anywhere in the universe would have to be based on water. Dunitz and Benner’s analysis of the temperature dependence of water’s unique viscosity and hydrophobicity provides another window on why this substance is so uniquely suited to supporting life.

These three papers may serve to exemplify the range of Dunitz’s contributions, and they are but a slice of his vast corpus. In another analysis, he used a purely mathematical argument about the geometry of a pentagon to predict the experimentally verified geometry of cyclopentane, a molecule with five carbon atoms arranged in a ring. His is a textbook name in many ways, none more so than in the eponymous “Bürgi-Dunitz angle” (about 107 degrees), which describes the angle of attack of a reacting molecule and the precise geometric configuration of the reactants in an important class of organic reactions, one which has yielded great dividends of both academic and industrial interest.

Apart from scientific papers spanning a remarkable variety of topics, Dunitz also wrote books that are considered foundational in the field. Perhaps my favorite book of his is written for laymen. “Reflections on Symmetry: In Chemistry…and Elsewhere“, written with his co-author Edgar Heilbronner, is a marvelous look at symmetry, perhaps the deepest quality of nature. Symmetry is absolutely fundamental not just for chemistry and biology but in the deepest reaches of physics, including quantum mechanics and particle physics. Dunitz and Heilbronner’s book is a romp through aspects of symmetry in fields as disparate as medieval mathematics, Islamic and modern art and of course, chemistry. It is a beautiful book, filled with illustrations and elegant arguments.

Jack Dunitz was one of those scientists who enrich everything they touch, across a wide range of domains, with insight, revelation and beauty. The simplicity and importance of his arguments, his humility as a man and his fearlessness in tackling disparate problems will be a candle that keeps lighting the minds of aspiring chemists and other scientists for eons to come.





How Niels Bohr predicted Rydberg atoms

 


In Niels Bohr's original 1913 formulation of the quantum atom, the radius r of the electron's orbit was proportional to n^2, n being the principal quantum number. Highly excited states would correspond to very large values of n, and Bohr predicted that such "giant" atoms should exist. Since the volume scales as r^3, or n^6, for n = 33 you should see a "hydrogenic" atom a billion times larger by volume than a ground-state hydrogen atom. However, no spectral lines corresponding to such atoms were observed in the laboratory. So was Bohr's theory wrong?

No! Bohr pointed out that unlike physicists, *astronomers* had observed faint spectral lines in the spectra of stars and nebulae, consistent with his theory. Because of the enormous quantities of gas at very low density in such objects, he argued that these highly excited states could survive there.

Because of the extremely low densities, these excited states could live for as long as 1 second - a lifetime for an atom. In 1957, astronomers looking for electron-proton recombination in the interstellar medium serendipitously observed spectra from hydrogen atoms for n=110! In the 1970s, after Bohr's death, the advent of tunable dye lasers finally made it possible to observe these excited states in the lab. Because of their long lifetimes and huge electric dipole moments, these atoms have potential applications in quantum computing.
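The scaling behind these numbers is simple enough to check directly; these are just the standard Bohr-model relations, nothing more:

```python
# Bohr-model scaling for excited hydrogen: orbit radius ~ n^2, volume ~ n^6
# (both in units of the n = 1 ground state).
for n in (33, 110):
    print(f"n = {n}: radius x {n**2:,}, volume x {n**6:.2e}")
# n = 33:  radius x 1,089   volume x 1.29e+09  (the "billion times larger")
# n = 110: radius x 12,100  volume x 1.77e+12
```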


These "atoms" are called Rydberg atoms because Johannes Rydberg had hypothesized about these large-quantum-number states in the 19th century. But Bohr provided a physical basis and an explanation, so they should really be called Rydberg-Bohr atoms at the least. Today, Rydberg atoms have diverse applications ranging from lasers to quantum computing to plasma physics to radio receivers for military applications. But it all goes back - almost as an afterthought - to Bohr's original pioneering 1913 paper and should be recognized as such.