The fundamental philosophical dilemma of chemistry

The classic potential energy curve of chemistry hides a fundamental truth: bonds mean short distances, but short distances don't mean bonds.
Every field has its set of great philosophical dilemmas. For physics it may be the origin of the fundamental constants of nature; for biology it might be the generation of complexity by random processes. Like physics and biology, chemistry operates on both grand and local scales, but the scope of its fundamental philosophical dilemmas sometimes manifests itself in the simplest of observations.

For me the greatest philosophical dilemma in chemistry is the near impossibility of doing controlled experiments at the molecular level. Other fields also suffer from this problem, but I am constantly struck by how directly one encounters it in chemistry.

Let me provide some background here. Much of chemistry is about understanding the fundamental forces that operate within and between molecules. These forces come in different flavors: strong covalent bonds (dictated by the sharing of electrons), hydrogen bonds (dictated by weak electrostatic interactions), strong charge-charge interactions (dictated by the attraction between unlike charges), hydrophobic effects (dictated by the interaction between 'water-loving' and 'water-hating' parts of molecules) and so on. The net attraction or repulsion between two molecules results from the sum total of these forces, some of which may be attractive and others repulsive. Harness these forces and you can control the structure, function and properties of molecules ranging from those used for solar energy capture to those used as breakthrough anticancer drugs.
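As a toy illustration of this bookkeeping (a conceptual sketch only, with invented numbers rather than output from any real force field), one can think of the net interaction energy as a simple sum of component terms:

```python
# Toy decomposition of a net intermolecular interaction energy (kcal/mol).
# The component values are invented purely for illustration; the essay goes on
# to argue that this very decomposition is, at bottom, artificial.
components = {
    "hydrogen_bond": -1.5,   # attractive
    "charge_charge": -2.0,   # attractive
    "hydrophobic":   -0.7,   # attractive
    "steric_strain": +0.9,   # repulsive
}

net_energy = sum(components.values())
print(f"Net interaction energy: {net_energy:+.1f} kcal/mol")  # -3.3 kcal/mol
```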

Here’s how the fundamental dilemma manifests itself in the control of all these interactions: it is next to impossible to perform controlled experiments that would allow one to methodically vary one of the interactions and observe its effect on the overall behavior of the molecule. In a nutshell, the interactions are all correlated, sometimes intimately so, and it can be impossible to change one without changing the others.

The fundamental dilemma is evident in many simple applications of chemistry. For instance, my day job involves looking at the crystal structures of proteins involved in disease and then designing small organic molecules that bind to and block such proteins. To bind to their target protein, these small molecules exploit many different interactions, including hydrogen bonds, charge-charge interactions and hydrophobic effects, to bring about a net lowering of their interaction energy with the protein. The lower this interaction energy, or "free energy" of binding, the tighter the binding. Unfortunately, while one can visualize the geometry of the various interactions by simply looking at the crystal structure, it is very difficult to say anything about their energies, for to do so would entail varying each interaction individually and looking at its effect on the net energy. Crystal structures can thus be very misleading when it comes to making a statement about how tightly a small molecule binds to a protein.
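For reference, the standard textbook relation connecting this binding free energy to the experimentally measured dissociation constant Kd (a general thermodynamic statement, not something specific to any particular protein or ligand) is:

```latex
\Delta G^{\circ}_{\mathrm{bind}} \;=\; RT \ln\!\frac{K_d}{c^{\circ}}, \qquad c^{\circ} = 1\ \mathrm{M}
```

The more negative the binding free energy, the smaller the Kd and the tighter the binding; the trouble, as described below, is that this single number cannot be cleanly dissected into contributions from individual interactions.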

Let’s say I am interested in knowing how important a particular hydrogen bond made by the small molecule is. What I could do is replace the atoms comprising the hydrogen bond with non-hydrogen-bonding atoms and then look at the change in the affinity of the resulting molecule for the protein, either computationally or experimentally. Unfortunately this change also impacts other properties of the molecule: its molecular weight, its hydrophobicity, its steric or spatial interactions with other molecules. Changing a hydrogen-bonding interaction thus also changes other interactions, so how can we be sure that any change in the binding affinity came only from the loss of the hydrogen bond? The matter gets worse when we realize that we can’t even do this experimentally; in my colleague Peter Kenny’s words, an individual interaction between molecules such as a hydrogen bond is not really an experimental observable. What you see in an experiment is only the sum total, not the dissection into individual parts.
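To make the point concrete, here is a small sketch (assuming the open-source RDKit toolkit is installed; phenol versus toluene is merely an illustrative stand-in for 'deleting' a hydrogen-bonding hydroxyl group) showing how several properties shift at once, not just the hydrogen-bonding capacity:

```python
# Sketch: deleting a hydrogen-bonding group changes several molecular properties at once.
# Requires RDKit; phenol vs. toluene is only an illustrative pair, not a real design example.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

pairs = {"phenol (H-bond donor)": "c1ccccc1O",
         "toluene (no donor)":    "c1ccccc1C"}

for name, smiles in pairs.items():
    mol = Chem.MolFromSmiles(smiles)
    print(f"{name:24s}"
          f" MW={Descriptors.MolWt(mol):6.1f}"
          f" cLogP={Descriptors.MolLogP(mol):5.2f}"
          f" TPSA={Descriptors.TPSA(mol):5.1f}"
          f" HBD={Lipinski.NumHDonors(mol)}")
```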

There have of course been studies on ‘model systems’ in which the number of working parts is far smaller than in protein-bound small molecules, and from these model systems we have gotten a good sense of the energies of typical hydrogen bonds. But how reliably can we extend the results of these systems to the particular complex system we are studying? Some of that extrapolation has to be a matter of faith. Also, model systems usually provide a range of energies rather than a single value, and we know that even a tiny change in the energy of binding can correspond to a substantial loss of effective blocking of a protein, so the margin of error entrusted to us is slim indeed.
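A quick back-of-the-envelope conversion shows just how slim that margin is (ordinary room-temperature thermodynamics, nothing specific to any one system): a change of only about 1.4 kcal/mol in binding free energy already corresponds to a tenfold change in the dissociation constant.

```python
# How a small change in binding free energy translates into a fold change in Kd.
import math

R = 0.001987   # gas constant in kcal/(mol*K)
T = 298.0      # roughly room temperature, in K

def fold_change_in_kd(ddg_kcal_per_mol):
    """Fold change in Kd produced by a change ddG in the binding free energy."""
    return math.exp(ddg_kcal_per_mol / (R * T))

for ddg in (0.5, 1.0, 1.4, 2.0, 3.0):
    print(f"ddG = {ddg:.1f} kcal/mol  ->  ~{fold_change_in_kd(ddg):.0f}-fold change in Kd")
# 1.4 kcal/mol comes out to roughly a 10-fold change in affinity.
```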

It is therefore very hard, if not impossible, to pin down a change in binding affinity resulting from a single kind of interaction with any certainty, because changing a single interaction potentially changes all interactions; it is impossible to perform the truly controlled experiment, a concept that has been at the heart of the scientific method. Sometimes these changes in other interactions can be tiny and we may get lucky, but the tragedy is that we can’t even calculate, with the kind of accuracy we would like, what these tiny increments or reductions might be. The total perturbation of a molecule’s various interactions remains a known unknown.

The roots of the problem run even deeper. At the most elemental level, all interactions between molecules are simply a function of one of the four fundamental forces known in nature: the electromagnetic force. Of the four basic forces, gravity is too weak to play a role, while the strong and weak nuclear forces don't apply to molecular interactions, since such interactions involve only the sharing and redistribution of electrons. It is thus the electromagnetic force that mediates every single molecular interaction in the universe. When we divide this force up into hydrogen bonds, electrostatic interactions, hydrophobic interactions and so on, we are imposing an artificial division on an indivisible fundamental force, purely for our convenience. It's a bit like the parable of the blind men and the elephant: there is only one electromagnetic force, just as there is only one elephant, but each of us, in describing that force, divides it up into multiple flavors. No wonder then that we are led astray when we think we are doing a controlled experiment, since whenever we think we are varying one flavor or another we are actually varying the same basic parameter and not its independent components. That is because there are no independent components in the true sense of the term.

This inability to perform the truly controlled experiment is thus what I call the great philosophical dilemma of chemistry. The dilemma not only makes the practical estimation of individual interactions very hard, it leads to something even more damning: it calls into question our ability to even call an interaction an 'interaction' or a 'bond' in the first place. This point was recently driven home to me through an essay penned by one of the grand old men of chemistry and crystallography, Jack Dunitz. Dunitz’s point in the essay is that we are often misled by ‘short’ distances between atoms observed in crystal structures. We ascribe these distances to ‘attractive interactions’ and even ‘bonds’ when there is little evidence that the underlying interactions are actually attractive.

Let’s backtrack a bit to fundamentals. The idea of ascribing a short distance to an attractive interaction comes from the classic van der Waals potential energy curve (figure above) that is familiar to anyone who has taken a college chemistry class. The minimum of this curve corresponds both to the most favorable distance between two molecules (called the van der Waals distance) and to the lowest energy, typically taken to signify a bond. However, this leads to a false equivalence that seems to flow both ways: van der Waals distances correspond to bonds and bonds correspond to van der Waals distances.
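For concreteness, the curve in question is usually drawn from a simple pair potential; one common textbook form, used here purely as a representative example, is the Lennard-Jones 12-6 potential:

```latex
V(r) \;=\; 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]
```

Here epsilon is the depth of the energy well and the minimum falls at r = 2^(1/6) sigma, the van der Waals contact distance; the argument that follows is about what that minimum does, and does not, tell us.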

In reality the connection flows only one way. Bonds do correspond to short distances, but short distances do not necessarily correspond to bonds. So why do we observe short distances in molecules in the first place? Again, Dunitz said it very succinctly in a previous review: simply because ‘Atoms have to go somewhere’. The fact is that a crystal structure is the net result of a complex symphony of attractive and repulsive interactions, a game of energetic musical chairs if you will. At the end, when the dust has settled, everyone has to find a chair, even if it means that two people end up uncomfortably seated on the same one. Thus, when you see a short distance between two atoms in a crystal, it does not at all mean that the interaction between them is attractive. It could simply mean that other interactions between other atoms are attractive, and that those two atoms have settled wherever they could find a place, even if the interaction between them is repulsive.

The message here is clear: it is folly to describe an interaction as ‘attractive’ simply because the distance is short. This applies especially to weaker interactions like those between aromatic (benzene) rings. I am always wary when I see a benzene ring from a small molecule nicely stacked against a benzene ring in a protein and hear the short distance between the two described as a ‘stacking interaction’. Does that mean there is actually an attractive stacking interaction between the two? Perhaps, but maybe it simply means that there was no other place for the benzene ring to be. How could I test my hypothesis? Well, varying the substituents, the groups of atoms attached to a benzene ring, is known to vary its energy of interaction with other benzene rings. So I ask the chemist to make some substituted versions of that benzene ring. But hold on! Based on the previous discussion, I just remembered that varying the substituents is not going to change just the stacking energy; it’s also going to change other qualities of the ring that mess up the other interactions in the system. It’s that problem with performing controlled experiments all over again; welcome to the fundamental dilemma of chemistry.

The fundamental dilemma is why it is so hard to understand individual interactions in chemical systems, let alone exploit them for scientific or commercial gain. We see it in a myriad of chemical experiments, from investigating the effects of structural changes on the rates of simple chemical reactions to investigating their effects on the metabolism of a drug. We can’t change one component without changing every other. There may be cases where these other changes are minuscule, but the belief that they are minuscule in a particular case will always remain more a matter of faith than of fact.

The fundamental dilemma, then, is why drug design, materials design and every other kind of molecular design in chemistry is so tricky. It is why so much of complicated chemistry is still trial and error, why observations on one system cannot be easily extrapolated to another, and why even supercomputers are not yet able to nail down the precise balance of forces that dictates the structure and function of specific molecules. In a nutshell, the fundamental dilemma is why chemists are always ignorant and why chemistry will therefore always be endlessly fascinating.

Want to bind small molecules? Get a backbone

Here’s a paper from the Shoichet lab at UCSF that illustrates one of the major problems drug designers encounter: predicting conformational changes (“entropy” to a physicist). The study plugs a series of eight very simple congeneric ligands – benzene, then methyl-, ethyl- and propylbenzene, all the way up to hexylbenzene – into a model protein cavity, in this case a lysozyme mutant, and observes the corresponding changes in protein conformation by solving the crystal structures. And the results aren’t exactly heartwarming for early-stage drug discovery scientists.

Synthesizing congeneric series of ligands is a standard process in lead optimization, and the elephant in the room that is often banished from sight by drug designers is the possibility of large conformational changes in the protein caused by small changes in ligand structure (the other assumption is constancy of ligand binding orientation, and even that doesn’t always hold). The assumption is that any minor change in ligand structure will be accommodated by correspondingly small amino acid side chain movements in the protein.

This study shows that, at best, that assumption is an article of faith which should always be considered provisional. What the authors observe is that instead of a smooth series of amino acid side chain movements, you see discrete and far more significant protein backbone movements, resulting in a subtle population of different states that bind the ligands. The difference in binding energy going from benzene to hexylbenzene is not too large – about 1.5 kcal/mol – but you are already seeing backbone movements. What is perhaps a bit more reassuring is that some of the discrete states are mirrored in lysozyme structures found in the PDB - but the authors looked at 121 structures to substantiate the result. That is not the kind of number you would expect to find in the PDB for your typical novel drug discovery target.
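For calibration (the same room-temperature thermodynamics sketched in the earlier essay, assuming T of about 298 K), that 1.5 kcal/mol spread corresponds to only about a thirteen-fold spread in dissociation constant across the whole series:

```latex
\exp\!\left(\frac{\Delta\Delta G}{RT}\right) \;=\; \exp\!\left(\frac{1.5\ \mathrm{kcal/mol}}{0.593\ \mathrm{kcal/mol}}\right) \;\approx\; 13
```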

The conclusions of the paper are a bit discomforting for at least two reasons. Firstly, as mentioned above, drug designers often assume constancy, or at most smooth and minor side chain changes, in protein conformation when testing congeneric ligands in lead optimization. It’s quite clear that this is always a bit of a gamble: if something as simple as a change in molecular weight can lead to such divergent changes, what would small but important changes or reversals in polarity do? And then one also starts wondering how much weird or divergent SAR could potentially be explained by such unexpected backbone conformational changes.

Secondly, these kinds of changes pose a real problem for molecular modelers. As the paper says, you would need to go to pretty long MD (molecular dynamics) simulations or more radical protein modeling to look at backbone changes; even today, modeling backbone changes by either physics-based methods (like MD) or knowledge-based techniques (like Rosetta) is both less validated and more computationally expensive.

Lastly, this study is another example of why drug discovery is hard even at a basic scientific level. Countless factors thwart the best intentions of drug designers at every stage, and uncertainty in predicting protein backbone conformational changes must rank pretty high on that list.

We are all Hamiltonians. We are all Newtonians.

Michael Lind is an economic historian who has recently written an excellent history of the United States. Lind sees American history as flowing in two parallel but competing directions. One thread belongs to the Jeffersonians, who oppose central authority and value small government and individual economic initiative. The other belongs to the Hamiltonians, who favor a strong central government with broad governing powers.

Who wins this battle? Lind's answer is clear - the Hamiltonians. After a careful study of the institutions and policies that have emerged in the United States since its founding, Lind concludes that one can see the sure hand of the Hamiltonians everywhere. He certainly does not discount the role of the Jeffersonians in creating private capital, entrepreneurship and innovation, but even these achievements could not have been carried out without an enabling framework set up by the Hamiltonians.

Lind's conclusions remind us that although Jeffersonians like Steve Jobs, Bill Gates, Thomas Edison and John D. Rockefeller have made this country the technological powerhouse that it is, we live and breathe the air of Hamiltonians like Franklin Roosevelt, Vannevar Bush, Louis Agassiz and Alfred Newton Richards. At the end of the day we are all Hamiltonians. Even when the Jeffersonians play their great game of competition and innovation, it is on a Hamiltonian stage, with its well-defined boundaries, that they must perform.

A similar thought went through my mind as I was reading Caltech physicist Leonard Mlodinow's first-rate romp through the major ideas and intellectual events of human civilization, "The Upright Thinkers". Mlodinow takes us on a sweeping trek through the last ten thousand years of science and technology, covering such major milestones as agriculture, mathematics, Greek, Arabian and Renaissance science, Galileo and the birth of modern science, Newton, the evolution of chemistry from alchemy into a bona fide science, all the way to Maxwell, Einstein and the quantum pioneers.

One of the paragraphs that really stood out for me, however, was one in which Mlodinow communicates the sheer ubiquity of Newton's way of thinking. There is a reason why, even with people like Maxwell, Darwin and Einstein following him, Newton can still legitimately lay claim to being the greatest scientist in history. That's because, as Mlodinow says, Newtonian thought has not just pervaded all of science and life but has become a metaphor for almost everything we do and feel.

"Today we all reason like Newtonians. We speak of the force of a person's character and the acceleration of the spread of a disease. We talk of physical and even mental inertia, and the momentum of a sports team. To think in such terms would have been unheard of before Newton; not to think in such terms is unheard of today. Even those who know nothing of Newton's laws have had their psyches steeped in his ideas. And so to study the work of Newton is to study our own roots."

This is a point that should not be lost on us. Even though quantum events underlie the tiny details of our lives, from cooking to metabolism to reproduction, we are truly Newtonian creatures. Quantum mechanics underlies everything we do, but the stage of events, metaphors and emotions on which we live is Newtonian. These days nobody writes books on Newton's laws and everyone writes books on quantum mechanics and relativity, but the fact is that the weird reality described by these theories matters very little for the mundane things in our lives. One of them applies to the very fast and the other to the very small, and most of our lives deal with objects in between.

Even today, after Einstein and Heisenberg and Bohr, 99% of the things that matter to us unfold on a Newtonian stage. Every time we get out of bed, brush our teeth, drive down the highway, type on our keyboards or fly in an airplane, we are obeying the dictates of Newton. Even if our atoms obey the laws of quantum mechanics and dictate the biochemistry of our actions and thoughts, the emergent laws that arise from them, and that bear a far more direct connection to our actions, are all Newtonian. The fact that we are Newton's children is not just a tribute to the unbelievable sweep of his theories but also a resounding tribute to the limitations of strict reductionism.

Just as even an arch-Republican like Nixon had to admit that "we're all Keynesians", even people who (improbably) might reject Newton in 2015 cannot escape the reality that they're all Newtonians. The ubiquity of his thought holds us in its sweeping embrace as firmly as gravity holds us to the earth.

Dynamic monopolies and the laws of thermodynamics: An interesting correspondence

In his new book on startups, Peter Thiel makes a provocative argument about the necessity of monopolies. He points out that in an environment of perfect competition, everyone is so busy competing just to stay even that they end up with neither the time nor the resources to innovate. Perfect competition is a dead end as far as truly novel ideas go.

There’s plenty of evidence from history to back up Thiel’s argument, and in no case is the monopoly-innovation relationship as evident as in that of Bell Labs. Everyone talks about Bell Labs as the greatest cradle of industrial innovation in history, but few talk about what birthed and sustained the cradle: the enormous monopoly and profits that AT&T enjoyed for fifty years. AT&T enjoyed this monopoly not just because they made superior products but also because they cut deals with the government that allowed them to always stay a step ahead of their competitors. It was this virtually unchallenged monopoly that allowed the telephone company to funnel its profits into the hallowed, Nobel Prize-producing milieu of Bell Labs. A similar example exists today with Google, which has a monopoly on search. This monopoly brings them enough revenue that they can entertain ten-year research visions and highly risky but potentially lucrative projects like self-driving cars and research into aging.

What the libertarian Thiel conveniently fails to mention, however, is that monopolies can use their surplus profits and resources for both good and evil. AT&T might have largely used their profits for fundamental research, but Rockefeller and Vanderbilt used theirs to lobby the government and exploit workers. Similarly, oil companies today spend an enormous amount of time and money erecting massive lobbies in Washington. Of course, Rockefeller and Vanderbilt also used their enormous wealth for philanthropic purposes, but the fact that they also sparked movements against low wages and worker exploitation, and that Teddy Roosevelt was eventually forced to take on the monopolies, speaks to how far out of hand the situation had gotten by then.

Thiel does not mention all this; nevertheless, he understands that the value of a monopoly is often measured by how long it lasts and how far its power reaches. That is why, in his ideal world, monopolies would be what he calls “creative monopolies” and what I will call “dynamic monopolies”. Dynamic monopolies are monopolies that are constantly challenged, not by perfect but by imperfect yet still serious competition. They thus have to keep innovating, rather than exploiting their power for political purposes, in order to stay on top. Monopolies in general can be harmful, but dynamic monopolies can often be a very good thing.

The value of dynamic monopolies, however, raises a more general question that is both far more provocative and more disturbing. It is almost a foregone conclusion that, in the long view of human history, many technological advances have been built on the backs and dead bodies of human beings. In his landmark new book “Empire of Cotton”, for instance, the historian Sven Beckert describes the enormous evil of slave labor that allowed capitalism to become a bedrock of Western and eventually all of civilization. The Romans developed superior engineering and the Egyptians built the pyramids on the backs of thousands of slaves and peasants. Look at every major technological advance in history and you would be hard pressed to find one that did not involve the exploitation of at least some class or group of people. That is why we rightly have government laws that protect the exploited and the disadvantaged. That is why we rightly have human rights organizations and the Red Cross.

Nevertheless, the principle that innovation and novelty can arise only from some kind of asymmetry of wealth, distribution or work seems to be a mainstay of history. I realized, however, that there is a remarkable correspondence between this principle and a most fundamental aspect of science and nature – the laws of thermodynamics. One of the essential truths about life on earth (and presumably everywhere) is that it exists in a condition far from equilibrium. In technical terms, the free energy change of living systems is not zero; a free energy change of zero holds only for systems at equilibrium. Existing at equilibrium is a very bad thing for a living creature. In fact, as the old saying goes, the only living system at equilibrium is a dead system. Death is the condition of true equilibrium; it is only by driving a system away from equilibrium that living beings can be born, move around, think, prove theorems and write sonnets. All the richness of life comes about because life tries to stay as far from equilibrium as possible.
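In textbook terms (a standard statement for processes at constant temperature and pressure, not anything specific to biology or economics):

```latex
\Delta G \;=\; \Delta H - T\,\Delta S,
\qquad
\begin{cases}
\Delta G < 0 & \text{the process can still proceed and do useful work,}\\
\Delta G = 0 & \text{equilibrium: no driving force is left.}
\end{cases}
```

Living systems stay in the nonzero free energy regime only by continuously taking in and dissipating energy; once the driving force vanishes, so does the organism.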

Just as perfect equilibrium can lead to death, so it seems that perfect competition can lead to stagnation of innovation. Just as we need asymmetry in the thermodynamics of life in order to thrive and even exist in the first place, so it seems that we need asymmetry in the evolution of markets in order to sustain technological progress. That fact leads to an argument in favor of monopolies. 

But what’s even more essential to keep in mind is the ever-changing nature of life. Life can maintain its essential asymmetry only if old life makes way for new. If old life refuses to die, it will lead to another condition of equilibrium, one that is static instead of dynamic. Similarly, technological innovation can maintain its edge only if old monopolies turn over on a reasonably rapid basis - perhaps over periods of ten to twenty years - and make way for new ones.

Hence the need for dynamic monopolies. And hence the need for government laws that enable such entities. Even the fundamental laws of thermodynamics make a case for them.

How Linus Pauling almost gave Matt Meselson tellurium breath

Linus Pauling was the greatest chemist of the twentieth century. Matt Meselson devised the ingenious Meselson-Stahl experiment and almost single-handedly convinced Nixon and Kissinger to get rid of chemical and biological weapons (a feat for which he more than almost anyone else deserves a long overdue Nobel Prize).

But Meselson almost did not get around to doing these things, partly because Linus Pauling, when Meselson was his graduate student, once came within a step of dooming him to a joyless existence of social expulsion. Here's the story, as recounted fondly by Meselson at a Pauling anniversary celebration.

"That fall, in 1953, it came time for me to have a research problem, so I went to see Linus in his office in the Crellin Lab, and he took a rock down off of a shelf near his desk and announced that this was a tellurium mineral -- he had worked on tellurium minerals, years earlier -- and that this would have an interesting crystal structure. The discussion went something like this:
LP: Well, Matt, you know about tellurium, the group VI element below selenium in the periodic chart of the elements?
Me: Uh, yes. Sulfur, selenium, tellurium ...
LP: I know that you know how bad hydrogen sulfide smells. Have you ever smelled hydrogen selenide?
Me: No, I never have.
LP: Well, it smells much worse than hydrogen sulfide.
Me: I see.
LP: Now, Matt, Hydrogen telluride smells as much worse than hydrogen selenide as hydrogen selenide does compared to hydrogen sulfide.
Me: Ahh ...
LP: In fact, Matt, some chemists were not careful when working with tellurium compounds, and they acquired a condition known as "tellurium breath." As a result, they have become isolated from society. Some have even committed suicide.
Me: Oh.
LP: But Matt, I'm sure that you would be careful. Why don't you think it over and let me know if you would like to work on the structure of some tellurium compounds?"

Feynman to Wolfram: "You have to extract yourself from the organization in order to run it"

Imagine you are an ambitious scientist or entrepreneur wanting to start and run a scientific organization along the lines of your own notions of creativity and rigor. What would be the best way to do this? In a letter written in response to Wolfram Research founder Stephen Wolfram, Richard Feynman (who was on Wolfram's PhD committee at Caltech - when Wolfram was 19...) had this to say:


Feynman's point in #2 is worth noting. It explains why it's not easy for someone with a scientific or technological vision to become the head of their own company, and why many scientists shun administration, since it would mean a retreat from their favorite research; they would like to obey Feynman's admonition to have "as little technical contact with non-technical people as possible". Wolfram himself, however, is a happy counterexample, since he was able both to conceive and to run Wolfram Research and turn it into a profitable venture. The reason he could do so was not only because he understood the science well but also because he was an astute entrepreneur who understood the pulse of the market.

On a bigger note, however, the point tells us why the familiar disdain that many scientists have for administrative work is in some sense misplaced; even Feynman cannot escape the fact that someone has to administer. It is not possible to run an industrial organization, or even a purely scientific one, on technical vision alone. You must have an administrative plan that allows for few distractions and gives people the freedom they need to actually implement your vision. You can either implement this plan yourself (and risk falling into Feynman's black hole of misplaced action) or hire someone who can handle the business end without sacrificing your scientific vision.

Most importantly though, you should hire people with a wide variety of backgrounds who still understand what you are trying to do and bring their eclectic training and enthusiasm to bear on the problem, and then you should give them complete freedom (within certain constraints and goals of course). In some sense you will be creating clones of yourself by giving them this freedom, at least as far as the scientific part of making an organization work is concerned.

Some of the best scientific and technological organizations in history were based on this model. Two that immediately come to mind are Bell Labs, which famously gathered a collection of Nobel Prizes while still creating breakthrough technological innovations, and the MRC Laboratory of Molecular Biology, which gathered even more Nobel Prizes for its researchers.

The most famous director of the MRC lab was Max Perutz, himself a Nobel Laureate whose own work in solving the structure of hemoglobin was the culmination of fifteen years of dogged effort. Perutz brought a light but sure touch to the lab while he was director. Afternoon tea was almost de rigueur, and Nobel Laureate and postdoc chatted idly at the lunch table on a daily basis. A similar model was used by Robert Oppenheimer at the Institute for Advanced Study in Princeton, where "tea was where we explained to each other what we did not know."

Perutz and Oppenheimer managed to implement the ideal scientific organization by minimizing that implementation. Sometimes you have to extract yourself from the organization to run it well. And if you do that you might even be able to fall madly in love with someone or something.