I was quite saddened to hear about the passing of Sam Schweber, one of the foremost and most scholarly historians of physics of the last half century. Schweber occupied a unique place in the annals of twentieth-century physics. He was one of a select group of people - Abraham Pais, Jeremy Bernstein and Jagdish Mehra were others - who knew many of the pioneers of quantum mechanics and theoretical physics personally and who excelled as both scientists and historians. His work was outstanding as both historiography and history, and he wrote at least half a dozen authoritative books.
Schweber got his PhD with Arthur Wightman at Princeton University, but his real break came when he went to Cornell for a postdoc with the great Hans Bethe. Schweber became a close friend of Bethe's and eventually his official biographer; it was a friendship that lasted until Bethe's death in 2005. During this time Schweber authored a well-received textbook on quantum theory, but he was just getting started on what was to become his life's work.
Schweber became known for two towering achievements. Probably the most important one was his book "QED and the Men Who Made It", which stands as the definitive work on the history of quantum electrodynamics. The book focused on both the personal backgrounds and the technical contributions of the four main contributors to the early history of QED: Richard Feynman, Julian Schwinger, Sin-Itiro Tomonaga and Freeman Dyson. It's one of those rare books that can be read with profit by both technically competent physicists and laymen, since the parts concerning the informal history of physics and the personal backgrounds of the main participants are as fascinating as the technical material. Other prime participants like Hans Bethe, Robert Oppenheimer and Paul Dirac also make major appearances. If Schweber had written just that book and nothing else, he would still have been remembered as a major historian. The book benefited immensely from Schweber's unique combination of talents: a solid understanding of the technical material, a sound immersion in the history of physics, a personal friendship with all the participants and a scholarly style that gently guided the reader along.
But Schweber did not stop there. His other major work was "In the Shadow of the Bomb", a superb contrasting study of Hans Bethe and Robert Oppenheimer: their backgrounds, their personalities, their contributions to physics and nuclear weapons, and their similarities and differences. It's a sensitive and nuanced portrait and again stands as the definitive work on the subject.
Two other contributions secured Schweber's place as a major historian. One was another contrasting study, this time comparing Oppenheimer and Einstein. And finally, Schweber put the finishing touches on his study of Bethe's life by writing "Nuclear Forces: The Making of the Physicist Hans Bethe". This book again stands as the definitive, scholarly study of Bethe's early life up to World War II. It's a pity Schweber could not finish his study of the second half of Bethe's remarkably long and productive life.
Another valuable contribution that Schweber made was to record a series of in-depth interviews with both Freeman Dyson and Hans Bethe for the Web of Stories site. These interviews span several hours and are the most detailed interviews with both physicists that I have come across: they will always be a cherished treasure.
Schweber's style was scholarly and therefore his books were not as well known to the layman as they should be. But he did not weigh down his writing with unnecessary baggage or overly academic-sounding phrases. His books generally strike a good balance between academic and popular writing. They are always characterized by meticulous thoroughness, a personal familiarity with the topic and an intimate knowledge of the history and philosophy of science.
By all accounts Schweber was also a real mensch and a loyal friend. When Oppenheimer's student David Bohm became the target of a witch hunt during the hysterical McCarthy years, Schweber and Bohm were both at Princeton. Bohm was dismissed by the university, which worried far more about its wealthy donors and its reputation than about doing the right thing. Schweber went to the office of Princeton's president and pleaded with him to reinstate Bohm. The president essentially threw Schweber out of his office.
Schweber spent most of his career at Brandeis University near Boston. I was actually planning to see him sometime this year and was in the process of arranging a letter of introduction. While I am saddened that I will now miss meeting him, I will continue to enjoy and be informed by the outstanding books he penned and his unique contributions to the history of science.
Cognitive biases in drug discovery: Part 1
The scientific way of thinking might seem natural to us in the twenty-first century, but it’s actually very new and startlingly unintuitive. For most of our existence, we blindly groped rather than reasoned our way to the truth. This was because evolution did not fashion our brains for the processes of hypothesis generation and testing that are now intrinsic to science; what it did fashion them for was gut reactions, quick thinking and emotional reflexes. When you were a hunter-gatherer on the savannah and the most important problem you faced was not how to invest in your retirement savings but to determine whether a shadow behind the bushes was a tree or a lion, you didn’t quite have time for hypothesis testing. If you tried, it would likely be the last hypothesis you ever tested.
It is thus no wonder that modern science as defined by the systematic application of the scientific method emerged only in the last few hundred years. But even since then, it’s been hard to override a few million years of evolution and unfailingly use its tools every time. At every single moment the primitive, emotional, frisky part of our brain is urging us to jump to conclusions based on inadequate data and emotional biases, and so it’s hardly surprising that we often make the wrong decisions, even when the path to the right ones is clear (in retrospect). It’s only in the last few decades though that scientists have started to truly apply the scientific method to understand why we so often fail to apply the scientific method. These studies have led to the critical discovery of cognitive biases.
There are many important psychologists, neuroscientists and economists who have contributed to the field of cognitive biases, but it seems only fair to single out two: Amos Tversky and Daniel Kahneman. Over several decades, Tversky and Kahneman performed ingenious studies - often through surveys asking people to solve simple problems - that laid the foundation for understanding human cognitive biases. Kahneman received the Nobel Prize for this work; Tversky undoubtedly would have shared it had he not tragically died young from cancer. The popular culmination of the duo’s work was Kahneman’s book “Thinking, Fast and Slow”, in which he showed how cognitive biases are built into the human mind. These biases manifest themselves in the distinction between two systems in the brain: System 1 and System 2.
System 2 is responsible for most of our slow, rational thinking. System 1 is responsible for most of our cognitive biases. A cognitive bias is basically any thinking shortcut that allows us to bypass slow, rational judgement and quickly reach a conclusion based on instinct and emotion. Whenever we are faced with a decision, especially in the face of inadequate time or data, System 1 kicks in and presents us with a conclusion before System 2 has had time to evaluate the situation more carefully, using all available data. System 1 heavily uses the emotional part of the brain, including the amygdala which is responsible among other things for our fight-flight-freeze response. System 1 may seem like a bad thing for evolution to have engineered in our brains, but it’s what allows us to “think on our feet”, face threats or chaos and react quickly within the constraints of time and competing resources. Contrary to what we think, cognitive biases aren’t always bad; in the primitive past they often saved lives, and even in our modern times they allow us to occasionally make smart decisions and are generally indispensable. But they start posing real issues when we have to make important decisions.
We suffer from cognitive biases all the time - there is no getting away from a system hardwired into the “reptilian” part of our brain through millions of years of evolution - but these biases become a real liability when we are faced with huge amounts of uncertain data, tight schedules, competing narratives, the quest for glory, and inter- and intragroup rivalry. All these factors are writ large in the multifaceted world of drug discovery and development.
First of all, there’s the data problem. Especially in the last two decades or so, because of advances in genomics, instrumentation and collaboration and the declining cost of technology, there has been an explosion of all kinds of data in drug discovery: chemical, biological, computational and clinical. In addition, much of this data is not well integrated into unified systems and can be unstructured, incomplete and just plain erroneous. This complexity of data sourcing, heterogeneity and management means that every single person working in drug development always has to make decisions based on a barrage of data that still presents only a partial picture of reality. Multiparameter optimization has to be carried out when many of the parameters are unknown or uncertain. Secondly, there’s the time factor. Drug development is a very fast-paced field, with tight timelines driven by the urgency of getting new treatments to patients, the lure of large profits, and high burn and attrition rates. Most scientists or managers in drug discovery cannot afford to spend enough time getting all the data, and are almost always forced to make major decisions based on what they have rather than what they wish they had.
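To make the decision problem concrete, here is a minimal sketch of the kind of multiparameter score a project team might compute when some measurements simply don't exist yet. Every property name, cutoff and weighting below is a hypothetical assumption for illustration, not any organization's published scheme.

```python
# A toy multiparameter optimization (MPO) score: each property is mapped to a 0-1
# desirability and the desirabilities are averaged. Missing values get a neutral 0.5,
# which is exactly where bias can creep in: the score looks decisive even when key data
# are absent. All names and cutoffs are illustrative assumptions, not a published scheme.

def desirability(value, low, high):
    """Map a property to a 0-1 desirability; lower is better. None means missing data."""
    if value is None:
        return 0.5                      # neutral placeholder for missing data
    if value <= low:
        return 1.0
    if value >= high:
        return 0.0
    return (high - value) / (high - low)

def mpo_score(compound):
    """Average the desirabilities of a few (hypothetical) properties."""
    return sum([
        desirability(compound.get("logP"), 1.0, 5.0),
        desirability(compound.get("clearance"), 5.0, 50.0),         # mL/min/kg, assumed band
        desirability(compound.get("predicted_dose"), 10.0, 500.0),  # mg, assumed band
    ]) / 3.0

# Compound B is missing two of the three measurements, yet its score looks just as
# "solid" as compound A's unless the gaps are flagged explicitly.
compound_a = {"logP": 2.5, "clearance": 12.0, "predicted_dose": 100.0}
compound_b = {"logP": 3.8}
print(f"A: {mpo_score(compound_a):.2f}   B: {mpo_score(compound_b):.2f}")
```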
Thirdly, there’s the interpersonal rivalry and the quest for glory. The impact of this sociological factor on cognitive biases cannot be overstated. While the collaborative nature of drug discovery makes the field productive, it also puts pressure on scientists to be the first to declare success, or the first to set trends. On a basic scientific level, for instance, trendsetting can take the form of proclaiming “rules” or “metrics” for "druglike" features, in the hope that fame and fortune will then attach to the institution or individuals who come up with these rules. But this relentless pressure to be first can foster biases of all kinds, including cognitive biases.
Drug discovery would thus seem to be a prime example of a complex, multifaceted field riddled with cognitive biases. But to my knowledge, there has been no systematic discussion of such biases in the literature. This is partly because many people might shrug off obvious biases like confirmation bias without really taking a hard look at what they entail, and partly because nobody really pays scientists in drug discovery organizations to explore their own biases. Yet it seems to me that trying to watch out for these biases in everyday organizational behavior would go at least some way toward mitigating them. And it’s hard to refute the argument that mitigating them would make scientists more likely to make smart decisions and contribute to the bottom line, in terms of both more efficient drug discovery and the ensuing profits. Surely pharmaceutical organizations would find that endpoint desirable.
A comprehensive investigation into cognitive biases in drug discovery would probably be a large-scale undertaking requiring ample time and resources; most of it would consist of identifying and recording such biases through detailed surveys. The good news, though, is that because cognitive biases are an inescapable feature of the human mind, the fact that they haven’t been recorded in systematic detail is no argument against their existence. It therefore makes sense to discuss how they might show up in the everyday decision-making process of drug discovery.
We will start by talking about some of the most obvious biases, and discuss others in future posts. Let’s start with one that we are all familiar with: confirmation bias. Confirmation bias is the tendency to highlight and record information that reinforces our prior beliefs and to discard information that contradicts them. The prior beliefs may have been cemented for good reason, but that does not mean they will apply in every single case. Put simply, confirmation bias makes us ignore the misses and count only the hits.
We see confirmation bias in drug discovery all the time. For instance, if molecular dynamics or fragment-based drug discovery or machine learning or some other technique - call it Method X - is your favorite technique for discovering drugs, then you will keep on tracking successful applications of this technique without keeping track of the failures. Why would you do this? For several reasons, some technological and some sociological. You may have been trained in Method X since graduate school; Method X is thus what you know and do best, and you don’t want to waste time learning Method Y. Method X might legitimately have had one big success, and you might therefore believe in it - even with an n of 1. Method X might just be easy to use; in that case you are transformed into the man who looks for his keys under the streetlight, not because that's where they are but because that's where it's easiest to look. Method X could be a favorite of certain people you admire, while others you like less might disparage it; in that case you will likely keep believing in it even if the critics actually have the better data. Purported successes of Method X in the literature, in patents and by word of mouth will further reinforce it in your mind.
The same logic applies to the proliferation of metrics and “rules” for druglike compounds. Let me first say that I have used these metrics myself and they are often successful in a limited context in a specific project, but confirmation bias may lead me to only keep track of their successes and try to apply them in every drug discovery project. In general, confirmation bias can lead us to believe in the utility of certain ideas or techniques far beyond their sphere of applicability. The situation is made worse by the fact that the scientific literature itself suffers from a fundamental confirmation bias, publishing only successes. The great unwashed mass of Method X failures is thus lost to posterity.
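A small simulation makes the arithmetic of this selective memory vivid. The numbers below are purely hypothetical, but they show how a method with a modest true success rate can look like a winner once failures go unrecorded.

```python
# Selective recording of "Method X" outcomes: successes get published and remembered,
# failures mostly vanish. All rates below are hypothetical, chosen only for illustration.
import random

random.seed(1)
TRUE_SUCCESS_RATE = 0.20    # assumed real hit rate of Method X
RECALL_IF_SUCCESS = 0.90    # chance a success is published / talked about / remembered
RECALL_IF_FAILURE = 0.15    # chance a failure leaves any trace at all

recorded = []
for _ in range(10_000):
    success = random.random() < TRUE_SUCCESS_RATE
    recall = RECALL_IF_SUCCESS if success else RECALL_IF_FAILURE
    if random.random() < recall:
        recorded.append(success)

apparent = sum(recorded) / len(recorded)
print(f"true success rate:     {TRUE_SUCCESS_RATE:.0%}")
print(f"apparent success rate: {apparent:.0%}")   # roughly 60% with these numbers
```

With these made-up numbers, a method that works one time in five looks like it works three times in five, simply because the failures were never written down.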
Confirmation bias also subsumes some other biases. For instance, the backfire effect leads people to paradoxically reinforce their beliefs when they are presented with contradictory evidence; it’s a well-documented phenomenon in political and religious belief systems, but science is not immune from its influence either. When you are already discounting evidence that contradicts your belief, you can just as readily discount evidence that strengthens the opposite belief. Another pernicious and common subset of confirmation bias is the bandwagon effect, which is often a purely social phenomenon. In drug discovery it has manifested itself through scores of scientists jumping onto a particular bandwagon: computational drug design, combinatorial chemistry, organocatalysis, high-throughput screening, virtual screening...the list goes on. When enough people are on a bandwagon, it becomes hard not to be a part of it; staying off, one fears, could lead to both missed opportunities and censure from the community. And yet it’s clear that the number of people on a bandwagon has little to do with the fundamental integrity of the bandwagon; in fact the two might even be inversely correlated.
Confirmation bias is probably the most general bias in drug discovery, if only because it’s the most common bias in science and in life in general. In the next few posts we will take a look at some other specific biases, all of which lend themselves to potential use and misuse in the field. For now, an exhortation for the twenty-first century: "Know thyself. But know thy cognitive biases even better."
Bottom-up and top-down in drug discovery
There are two approaches to discovering new drugs. In one approach drugs fall in your lap from the sky. In the other you scoop them up from the ocean. Let’s call the first the top-down approach and the second the bottom-up approach.
The bottom-up approach assumes that you can discover drugs by thinking hard about them, by understanding what makes them tick at the molecular level, by deconstructing the dance of atoms orchestrating their interactions with the human body. The top-down approach assumes that you can discover drugs by looking at their effects on biological systems, by gathering enough data about them without understanding their inner lives, by generating numbers through trial and error, by listening to what those numbers are whispering in your ear.
To a large extent, the bottom-up approach assumes knowledge while the top-down approach assumes ignorance. Since human beings have been ignorant for most of their history, for most of the recorded history of drug discovery they have pursued the top-down approach. When you don't know what works, you try things out randomly. The natives of South America found out by accident that chewing the bark of the cinchona tree relieved them of the afflictions of malaria. Through the Middle Ages and beyond, people who called themselves physicians prescribed a witches' brew of substances ranging from sulfur to mercury to arsenic to try to cure a corresponding witches' brew of maladies, from consumption to the common cold. More often than not these substances killed patients as readily as the diseases themselves.
The top-down approach may seem crude and primitive, and it was primitive, but it worked surprisingly well. For the longest time it was exemplified by the ancient medical systems of China and India - one of these systems delivered an antimalarial medicine that helped its discoverer bag a Nobel Prize for Medicine. Through fits and starts, scores of failures and a few solid successes, the ancients discovered many treatments that were often lost to the dust of ages. But the philosophy endured. It endured right up to the early 20th century, when the German physician Paul Ehrlich tested more than six hundred chemical compounds - products of the burgeoning dye industry pioneered by the Germans - and found that compound number 606 worked against syphilis. Syphilis had so bedeviled people since medieval times that it was often a default diagnosis of death, and cures were desperately needed. Ehrlich's 606 was arsenic-based, unstable and had severe side effects, but the state of medicine back then was such that it was regarded as a significant improvement over the previous mercury-based compounds.
It was with Ehrlich's discovery that drug discovery started to transition to a more bottom-up discipline, systematically making and testing chemical compounds and trying to understand how they worked at the molecular level. But it still took decades before the approach bore fruit. For that we had to await a nexus of great and concomitant advances in theoretical and synthetic organic chemistry, spectroscopy, and cell and molecular biology. These advances helped us figure out the structures of druglike organic molecules, they revealed the momentous fact that drugs work by binding to specific target proteins, and they allowed us to produce these proteins in useful quantities and uncover their structures. Finally, at the beginning of the 1980s, we thought we understood enough chemistry to design drugs by bottom-up approaches, "rationally", as if everything that had gone before was simply the product of random flashes of unstructured thought. The advent of personal computers (Apple and Microsoft had launched in the mid-1970s) and their immense potential left people convinced that it was only a matter of time before drugs were "designed with computers". What the revolution found inconvenient to discuss was that the top-down analysis which preceded it had produced some very good medicines, from penicillin to Thorazine.
Thus began the era of structure-based drug design, which tries to design drugs atom by atom from scratch by knowing the protein glove in which these delicate molecular fingers fit. The big assumption is that the hand that fits the glove can deliver the knockout punch to a disease largely on its own. An explosion of scientific knowledge, startups, venture capital funding and interest from Wall Street fueled those heady times, with the upbeat expectation that once we understood the physics of drug binding well and had access to more computing power, we would be on our way to designing drugs far more efficiently. Barry Werth's book "The Billion-Dollar Molecule" captured this zeitgeist well; the book is actually quite valuable since it's a rare as-it-happens study rather than the more typical retrospective one, and it therefore displays the same breathless, naive enthusiasm as its subjects.
And yet, thirty years after the prophecy was enunciated in great detail and to great fanfare, where are we? First, the good news. The bottom-up approach did yield great dividends - most notably the HIV protease inhibitor drugs against AIDS. I actually believe that this contribution from the pharmaceutical industry is one of the greatest public services that capitalism has performed for humanity. Important drugs for lowering blood pressure and controlling heartburn were also beneficiaries of bottom-up thinking.
The bad news is that the paradigm fell short of the wild expectations we had of it. Significantly short, in fact. And the reason is what it always has been in the annals of human technological failure: ignorance. Human beings simply don't know enough about perturbing a biological system with a small organic molecule. Biological systems are emergent and non-linear, and we simply don't understand how simple inputs produce complex outputs. Ignorance was compounded by hubris in this case. We thought that once we understood how a molecule binds to a particular protein and optimized this binding, we had a drug. But what we had was simply a molecule that bound better to that protein; we were still working on the assumption that that protein was somehow critical for the disease. Moreover, a molecule that binds well to a protein has to overcome enormous further hurdles of oral bioavailability and safety before it can be called a drug. So even if - and that's a big if - we understood the physics of drug-protein binding well, we still wouldn't be much closer to a drug, because designing a drug involves understanding its interactions with an entire biological system and not just with one or two proteins.
In reality, diseases like cancer manifest themselves through subtle effects on a host of physiological systems involving dozens if not hundreds of proteins. Cancer especially is a wily disease because it activates cells for uncontrolled growth through multiple pathways. Even if one or two proteins were the primary drivers of this process, simply designing a molecule to block their actions would be too simplistic and reductionist. Ideally we would need to block a targeted subset of proteins to produce the optimum effect. In reality, either our molecule fails to bind even one favored protein sufficiently and lacks efficacy, or it binds the wrong proteins and shows toxicity. In fact, the reason no drug can escape at least a few side effects is precisely that it binds to many proteins other than the one we intended it to.
Faced with this wall of biological complexity, what do we do? Ironically, what we had done for hundreds of years, only this time armed with far more data and smarter data analysis tools. Simply put, you don't worry about understanding exactly how your molecule interacts with a particular protein; you worry only about its visible effects - about how much it changes blood pressure or glucose levels, or how much it increases urine output or metabolic activity. These endpoints are agnostic about a drug's detailed mechanism of action. You can also compare these results across a panel of drugs to try to decipher similarities and differences.
This is top-down drug design and discovery, writ large in the era of Big Data and techniques from computer science like machine learning and deep learning. The field is fundamentally steeped in data analysis and takes advantage of new technology that can measure umpteen effects of drugs on biological systems, greatly improved computing power and hardware to analyze these effects, and refined statistical techniques that can separate signal from noise and find trends.
The top-down approach is today characterized mainly by phenotypic screening and machine learning. Phenotypic screening involves simply throwing a drug at a cell, organ or animal and observing its effects. In its primitive form it was used to discover many of today's important drugs; in the field of anxiety medicine for instance, new drugs were discovered by giving them to mice and simply observing how much fear the mice exhibited toward cats. Today's phenotypic screening can be more fine-grained, looking at drug effects on cell size, shape and elasticity. One study I saw looked at potential drugs for wound healing; the most important tool in that study was a high-resolution camera, and the top-down approach manifested itself through image analysis techniques that quantified subtle changes in wound shape, depth and appearance. In all these cases, the exact protein target the drug might be interacting with was a distant horizon and an unknown. The large scale, often visible, effects were what mattered. And finding patterns and subtle differences in these effects - in images, in gene expression data, in patient responses - is what the universal tool of machine learning is supposed to do best. No wonder that every company and lab from Boston to Berkeley is trying feverishly to recruit data and machine learning scientists and build burgeoning data science divisions. These companies have staked their fortunes on a future that is largely imaginary for now.
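As a hedged sketch of the kind of target-agnostic, image-based readout described above: the "wound" below is just a synthetic ellipse standing in for a real segmented photograph, and the shape descriptors are generic ones from scikit-image, not those of any particular study.

```python
# Quantifying a wound-like region from a segmented mask: area, perimeter and eccentricity
# are measured without any knowledge of which protein the drug binds. The mask here is a
# synthetic ellipse; in a real study it would come from a segmented high-resolution image.
import numpy as np
from skimage import measure

yy, xx = np.mgrid[0:200, 0:200]
mask = ((xx - 100) / 60.0) ** 2 + ((yy - 100) / 35.0) ** 2 < 1.0   # synthetic "wound"

labels = measure.label(mask)
wound = max(measure.regionprops(labels), key=lambda r: r.area)     # largest connected region

print("area (pixels):", wound.area)
print("perimeter (pixels):", round(wound.perimeter, 1))
print("eccentricity:", round(wound.eccentricity, 2))
```

Tracking how such descriptors shift after treatment is a purely phenotypic readout: the numbers change whether or not anyone knows the mechanism.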
Currently there seems to be, if not a war, then at least a simmering and uneasy peace between top-down and bottom-up approaches in drug discovery. And yet this fight is mainly one in which opponents set up false dichotomies and straw men rather than find complementary strengths and limitations. First and foremost, the proof of the pudding is in the eating, and machine learning's impact on the number of approved new drugs has yet to be demonstrated; the field is simply too new. The constellation of techniques has also proven much better at solving certain problems (mainly image recognition and natural language processing) than others. A lot of early-stage medicinal chemistry data consists of messy assay results and unexpected structure-activity relationships (SAR) containing "activity cliffs", in which a small change in structure leads to a large change in activity. Machine learning struggles with these discontinuous stimulus-response landscapes. Secondly, there are still technical issues in machine learning, such as working with sparse data and noise, that have to be resolved. Thirdly, while the result of a top-down approach may be a simple image or a change in cell type, the web of potential factors that can lead to that result can be hideously tangled and multifaceted. Finally, there is the perpetual problem of garbage-in, garbage-out (GIGO). Your machine learning algorithm is only as good as the data you feed it, and chemical and biological data are notoriously messy and ill-curated; chemical structures might be incorrect, assay conditions might differ in space and time, patient reporting and compliance might be sporadic and erroneous, human error riddles data collection, and there might be very little data to begin with. The machine learning mill can only turn data grist into gold if what it's fed is grist in the first place.
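As a toy illustration of the activity-cliff problem: synthetic data, a single made-up descriptor and a generic off-the-shelf regressor rather than any production model.

```python
# A synthetic activity cliff: compounds with a descriptor value below 5.0 are weak binders
# (pIC50 ~ 6), those above are strong binders (pIC50 ~ 9). A neighbor-averaging regressor
# trained on sparse data smears out the jump and mispredicts compounds near the cliff.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

descriptor = np.array([0.5, 1.5, 2.5, 3.5, 4.0, 4.5, 6.0, 7.0, 8.0, 9.0]).reshape(-1, 1)
pic50 = np.where(descriptor.ravel() < 5.0, 6.0, 9.0)   # a 3-log-unit cliff at 5.0

model = KNeighborsRegressor(n_neighbors=3).fit(descriptor, pic50)

# A new compound just past the cliff: its true activity is ~9, but two of its three
# nearest training neighbors sit on the weak side, so the prediction lands near 7.
print(model.predict(np.array([[5.2]])))   # ~[7.0]
```

Any model that averages or interpolates smoothly will make the same kind of error wherever the training data are sparse around a discontinuity, which is exactly the regime activity cliffs live in.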
In contrast to some of these problems with the top-down paradigm, bottom-up drug design has some distinct advantages. First of all, it has worked, and nothing speaks like success. Operationally, too, since you are usually looking at the interactions between a single molecule and a protein, the system is much simpler and cleaner, and the techniques used to study it are less prone to ambiguous interpretation. Unlike machine learning, which can be a black box, here you can understand exactly what's going on. The amount of data might be smaller, but it is also more targeted, manageable and reproducible. You don't usually have to deal with the intricacies of data fitting and noise reduction or the curation of data from multiple sources. Ultimately, if, like HIV protease, your target does turn out to be the Achilles heel of a deadly disease like AIDS, your atom-by-atom design can be as powerful as Thor's hammer. There is little doubt that bottom-up approaches have worked in selected cases where the relevance of the target has been validated, and there is little doubt that this will continue to be the case.
Now it's also true that, just like the top-downers, the bottom-uppers have had their share of computational problems and failures, and both paradigms have been subjected to their fair share of hype. Ever since that "designing drugs using computers" headline in 1981, people have come to understand that there are fundamental problems in modeling intermolecular interactions: some of these problems are computational and can in principle be overcome with better hardware and software, but others, like our poor understanding of water molecules and electrostatic interactions, are fundamentally scientific in nature. Downplaying these issues while emphasizing occasional anecdotal successes has led to massive hype in computer-aided drug design. But in the case of machine learning it's even worse in some ways, since hype from the field's applications in other human endeavors is spilling over into drug discovery too; it seems hard for some to avoid claiming that their favorite machine learning system will soon cure cancer if it's making inroads in trendy applications like self-driving cars and facial recognition. Unlike machine learning, though, the bottom-up school has at least twenty years of successes and failures to draw on, so there is a lid of sorts on the hype, held in place by skeptics.
Ultimately, the biggest advantage of machine learning is that it allows us to bypass a detailed understanding of complex molecular interactions and biological feedback and work from the data alone. It's like a school of psychology that studies human behavior purely through the stimuli and responses of human subjects, without understanding how the brain works at the neuronal level. The disadvantage is that the approach can remain a black box; it can lead to occasional predictive success, but at the expense of understanding. A good open question is how long we can keep predicting without understanding. Knowing how many unexpected events or "Black Swans" exist in drug discovery, how long can top-down approaches keep performing well?
The fact of the matter is that both top-down and bottom-up approaches to drug discovery have strengths and limitations and should therefore be part of an integrated strategy. In fact they can work well together, like members of a relay team. I have heard of at least one successful major project at a leading drug firm in which top-down phenotypic screening yielded a valuable hit which then, midstream, was handed over to a bottom-up team of medicinal chemists, crystallographers and computational chemists who deconvoluted the target and optimized the hit all the way to an NDA (New Drug Application). At the same time, it was clear that the latter would not have been possible without the former. In my view, the old guard of the bottom-up school has been reluctant and cynical about admitting the young Turks of the top-down school into the guild, while the Turks have been similarly guilty of dismissing their predecessors as antiquated and irrelevant. This is a dangerous game of all-or-none in the very complex and challenging landscape of drug discovery and development, where only multiple and diverse approaches will allow us to find the proverbial needle in the haystack. Only together will the two schools thrive, and there are promising signs that they might in fact be stronger together. But we'll never know until we try.
(Image: BenevolentAI)
Unifiers and diversifiers in physics, chemistry and biology
On my computer screen right now are two molecules. They are both large rings with about thirty atoms each, a motley mix of carbons, hydrogens, oxygens and nitrogens. In addition they have appendages of three or four atoms dangling off their periphery. There is only one, seemingly minor difference: the appendage in one of the rings has two more carbon atoms than that in the other. If you looked at the two molecules in flat 2D - the representation most familiar to practicing chemists - you would sense little difference between them.
Yet when I look at the two molecules in 3D - at their spatial representations, or conformations - the differences between them are revealed in their full glory. The presence of two extra carbons in one of the compounds causes it to scrunch up, to fold slightly upon itself the way a driver edges close to the steering wheel. This slight difference brings many atoms that are otherwise far apart close together, letting them form hydrogen bonds, weak interactions that are nonetheless essential in holding biological molecules like DNA and proteins together. These hydrogen bonds can in turn modulate the shape of the molecule and allow it to get past cell membranes better than its sibling. A difference of only two carbons - negligible on paper - can thus have profound consequences for the three-dimensional life of these molecules. And this difference in 3D can in turn translate to significant differences in their functions, whether those functions involve capturing solar energy or killing cancer cells.
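For readers who want to poke at this kind of conformational question themselves, here is a rough sketch using the open-source RDKit toolkit. The molecule is a small flexible stand-in chain, not the actual macrocycles on my screen, and the hydrogen-bond criterion is a crude distance cutoff rather than anything rigorous.

```python
# Generate conformers for a flexible molecule and count short donor-H...acceptor contacts
# in each one. The SMILES below is a simple hydroxy-amide stand-in, NOT the macrocycles
# discussed above, and the 2.5 angstrom cutoff is a deliberately crude H-bond criterion.
from rdkit import Chem
from rdkit.Chem import AllChem

def intramolecular_hbonds(mol, conf_id, cutoff=2.5):
    conf = mol.GetConformer(conf_id)
    topo = Chem.GetDistanceMatrix(mol)   # bond-count distances, to skip trivial contacts
    donors = [a.GetIdx() for a in mol.GetAtoms()
              if a.GetAtomicNum() == 1 and a.GetNeighbors()[0].GetAtomicNum() in (7, 8)]
    acceptors = [a.GetIdx() for a in mol.GetAtoms() if a.GetAtomicNum() in (7, 8)]
    count = 0
    for h in donors:
        for acc in acceptors:
            if topo[h][acc] < 4:         # ignore atoms only a few bonds away
                continue
            p, q = conf.GetAtomPosition(h), conf.GetAtomPosition(acc)
            dist = ((p.x - q.x) ** 2 + (p.y - q.y) ** 2 + (p.z - q.z) ** 2) ** 0.5
            if dist < cutoff:
                count += 1
    return count

mol = Chem.AddHs(Chem.MolFromSmiles("OCCCCC(=O)NCCO"))   # hypothetical flexible chain
conf_ids = AllChem.EmbedMultipleConfs(mol, numConfs=50, randomSeed=42)
AllChem.MMFFOptimizeMoleculeConfs(mol)
print(max(intramolecular_hbonds(mol, cid) for cid in conf_ids))
```

Running the same analysis on two closely related structures is one quick, if approximate, way to see whether a couple of extra carbons let a molecule curl up and satisfy its own donors and acceptors.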
Chemistry is full of hidden differences and similarities like these. Molecules exist on many different levels, and on each level they manifest unique properties. In one sense they are like human beings. On the surface they may appear similar, but probe deeper and each one is unique. And probing even deeper may then again reveal similarities. They are thus both similar and different all at once. But just like human beings molecules are shy; they won't open up unless you are patient and curious, they may literally fall apart if you are too harsh with them, and they may even turn the other cheek and allow you to study them better if you are gentle and beguiling enough. It is often only through detailed analysis that you can grasp their many-splendored qualities. It is this ever-changing landscape of multifaceted molecular personalities, slowly but surely rewarding the inquisitive and dogged mind, that makes chemistry so thrilling and open-ended. It is why I get a kick out of even mundane research.
When I study the hidden life of molecules I see diversity. And when I see diversity I am reminded of how important it is in all of science. Sadly, the history of science in the twentieth century has led both scientists and the general public to value unity over diversity. The main culprit in this regard has been physics, whose quest for unity has become a victim of its own success. Beginning with the unification of mechanics with heat and of electricity with magnetism in the nineteenth century, physics achieved a series of spectacular feats as it combined space with time, special relativity with quantum mechanics and the weak force with electromagnetism. One of the greatest unsolved problems in physics today is the combination of quantum mechanics with general relativity. These unification feats are both great intellectual achievements and noteworthy goals, but they have led many to believe that unification is the only thing that really matters in physics, and perhaps in all of science. They have also led to the belief that fundamental physics is all that is worth studying. The hype generated by the media around fields like cosmology and string theory, and the spate of popular books written by scientist-celebrities in these fields, have only made matters worse. All this in spite of the fact that most of the world's physicists don't study fundamental physics in their daily work.
The obsession with unification has led to an ignorance of the diversity of discoveries in physics. In parallel with the age of the unifiers has existed the universe of the diversifiers. While the unifiers have been busy proclaiming discoveries from the rooftops, the diversifiers have been quietly building new instruments and cataloging the reach of physics in less fundamental but equally fascinating fields like solid-state physics and biophysics. They have also gathered the important data that allowed the unifiers to ply their trade. Generally speaking, unifiers tend to be part of idea-driven revolutions while diversifiers tend to be part of tool-driven revolutions. The unifiers would never have seen their ideas validated if the diversifiers had not built tools like telescopes, charge-coupled devices and superconducting materials to test the great theories of physics. And yet, just as unification is idolized at the expense of diversification, ideas in physics have been lionized at the expense of practical tools. We need to praise the tools of physics as much as the diversifiers who build them.
As a chemist I find it easier to appreciate diversity. Examples of molecules like the ones I cited above abound in chemistry. In addition, chemistry is too complex to be reduced to a simple set of unifying principles, and most chemical discoveries are still made by scientists looking at special cases rather than by those searching for general laws. Chemistry is also a great example of a tool-driven revolution, with new instrumental technologies like X-ray diffraction and nuclear magnetic resonance (NMR) completely revolutionizing the science during the twentieth century. There were of course unifiers in chemistry too - the chemists who discovered the general laws of chemical bonding are the most prominent example - but these unifiers have never been elevated to the status seen among physicists. Diversifiers who play in the mud of chemical phenomena and find chemical gems are still more important than those who might proclaim general theories. There will always be an unusual protein structure, a fleeting molecule whose existence defies our theories, or a new polymer with amazing ductility to keep chemists occupied. And this will likely remain the case for the foreseeable future.
Biology too has seen its share of unifiers and diversifiers. For most of its history biology was the ultimate diversifiers' delight, with intrepid explorers, taxonomists and microbiologists cataloging the wonderful diversity of life around us. When Charles Darwin appeared on the scene he unified this diversity in one fell swoop through his theory of evolution by natural selection. The twentieth-century modern synthesis of biology that married statistics, genetics and evolutionary biology was also a great feat of unification. And yet biology continues to be a haven for diversifiers. There is always the odd protein, the odd gene sequence or the odd insect with a particularly startling method of reproduction that catches the eye of biologists. These examples of unusual natural phenomena do not defy the unifying principles, but they do illustrate the sheer diversity in which the unifying principles can manifest themselves, especially on multiple emergent levels. They assure us that no matter how much we may unify biology, there will always be a place for diversifiers in it.
At the dawn of the twenty-first century there is again a need for diversifiers, especially in new fields like neuroscience and paleontology. We need to cast off the spell of fundamental physics and realize that diversifiers play on the same field as unifiers. Unifiers may come up with important ideas, but diversifiers are the ones who test them and who open up new corners of the universe for unifiers to ponder. Whether in chemistry or physics, evolutionary biology or psychology, we should continue to appreciate unity in diversity and diversity in unity. Together the two will advance science into new realms.
If you believe Western Civilization is oppressive, you will ensure it is oppressive
Philosopher John Locke's defense of the natural rights of man should apply to all people, not just to one's favorite factions
This is my third monthly column for the website 3 Quarks Daily. In it I lament what I see as an attack on Enlightenment values and Western Civilization from both the right and the left. I am especially concerned by the prevalent narrative on the left that considers Western Civilization as fundamentally oppressive, especially since the left could be the only thing standing between civilization and chaos at the moment. Both right and left thus need to reach back into their roots as stewards of Enlightenment values.
When the British left India in 1947, they left a complicated legacy behind. On one hand, Indians had suffered tremendously under oppressive British rule for more than 250 years. On the other hand, India was fortunate to have been ruled by the British rather than the Germans, Spanish or Japanese. The British, with all their flaws, did not resort to putting large numbers of people in concentration camps or regularly subjecting them to the Inquisition. Their behavior in India had scant similarities with the behavior of the Germans in Namibia or the Japanese in Manchuria.
More importantly, while they were crisscrossing the world with their imperial ambitions, the British were also steeping the world in their long traditions: the English language, science and the Industrial Revolution, and parliamentary democracy. When they left India, they left this legacy behind. The wise leaders of India who led the Indian freedom struggle - men like Jawaharlal Nehru, Mahatma Gandhi and B. R. Ambedkar - understood well the important role that all things British had played in the world, even as they agitated and went to jail to free themselves of British rule. Many of them were educated at Western universities like London, Cambridge and Columbia. They hated British colonialism, but they did not hate the British; once the former rulers left they preserved many aspects of their legacy, including the civil service, the great network of railways spread across the subcontinent and the English language. They incorporated British thought and values in their constitution, in their educational institutions, in their research laboratories and in their government services. Imagine what India would have been like today had Nehru and Ambedkar dismantled the civil service, banned the English language, gone back to using bullock carts and refused to adopt a system of participatory democracy, simply because all these things were British in origin.
The leaders of newly independent India thus had the immense good sense to separate the oppressor and his instruments of oppression from his enlightened side, to not throw out the baby with the bathwater. Nor was an appreciation of Western values limited to India by any means. In the early days, when the United States had not yet embarked on its foolish, paranoid misadventures in Southeast Asia, Ho Chi Minh looked toward the American Declaration of Independence as a blueprint for a free Vietnam. At the end of World War 1 he held the United States in great regard and tried to get an audience with Woodrow Wilson at the Versailles Conference. It was only when he realized that the Americans would join forces with the occupying French in keeping Vietnam a colonial nation that Ho Chi Minh's views about the U.S. rightly soured. In other places in Southeast Asia and Africa too, the formerly oppressed preserved many remnants of the oppressor's culture.
Yet today I see many people, ironically in the West, failing to grasp the wisdom which these leaders in the East understood very well. The values bequeathed by Britain which India upheld were part of the values which the Enlightenment bequeathed to the world. These values in turn went back to key elements of Western Civilization: Greek, Roman, Byzantine, French, German and Dutch. And simply put, Enlightenment values and Western Civilization are today under attack, in many ways from those who claim to stand by them. Both left and right are trampling on them in ways that are misleading and dangerous. They threaten to undermine centuries' worth of progress.
The essential character of Enlightenment values should be common knowledge, and yet the fact that they seem worth reiterating is a sign of our times.
To wit, consider the following almost axiomatic statements:
Freedom of speech, religion and the press is all-important and absolute.
The individual and his property have certain natural and inalienable rights.
Truth, whatever it is, is not to be found in religious texts.
Kings and religious rulers cannot rule by fiat and are constrained by the wishes of the governed.
The world can be deciphered by rationally thinking about it.
All individuals deserve fair trials by jury and are not to be subjected to cruel punishment.
The importance of these ideas cannot be overestimated. When they were first introduced they were novel and revolutionary; we now take them for granted, perhaps too much for granted. They are in large part what allow us to distinguish ourselves as human beings, as members of the unique creative species called Homo sapiens.
The Enlightenment reached its zenith in mid-eighteenth century France, Holland and England, but its roots go back deep into the history of Western Civilization. As far back as ancient Babylon, the code of Hammurabi laid out principles of justice describing proportionate retaliation for crimes. The peak of enlightened thought before the French Enlightenment came in Periclean Athens: with Socrates, Plato and Aristotle, Athens led the way in philosophy and science, in history and drama; in some sense, almost every contemporary political and social problem and its resolution goes back to the Greeks. Even when others superseded Greek and Roman civilization, traces of enlightened thought kept on appearing throughout Europe, even in its dark ages. For instance, the Code of the Emperor Justinian laid out many key judicial principles that we take for granted, including the right to a fair trial, the right against self-incrimination and a proscription against trying someone twice for the same crime.
In 1215, the Magna Carta became the first modern document to codify the arguments against the divine authority of kings. Even as wars and revolutions engulfed Europe during the next five hundred years, principles like government through the consent of the governed, trial by jury and the prohibition of cruel and unusual punishment got solidified through trial and error, through resistance and triumph. They saw their culmination in the English and American wars of independence and the constitutions of these countries in the seventeenth and eighteenth centuries. By the mid-eighteenth century, philosophers like John Locke had explicitly articulated the natural rights of men, and Charles-Louis Montesquieu was explicitly describing the tripartite separation of powers in government. These principles are today the bedrock of most democratic republics around the world, Western and Eastern. At the same time, let us acknowledge that Eastern ideas and thinkers – Buddha and Confucius in particular – have also contributed immensely to humanity's progress and will continue to do so. In fact, personally I believe that the concepts of self-control, detachment and moderation that the East has given us will, in the final analysis, supersede everything else. However, most of these ideas are personal and inward looking. They are also very hard to live up to for most mortals, and for one reason or another have not integrated themselves thoroughly yet into our modern ways of life. Thus, there is little doubt that modern liberal democracies as they stand today, both in the West and the East, are mostly products of Western Civilizational notions.
In many ways, the study of Western Civilization is therefore either a study of Enlightenment values or of forces – mainly religious ones – aligned against them. It shows a steady march of the humanist zeitgeist through dark periods which challenged the supremacy of these values, and of bright ones which reaffirmed them. One would think that a celebration of this progress would be beyond dispute. And yet what we see today is an attack on the essential triumphs of Western Civilization from both left and right.
Each side brings its own brand of hostility and hypocrisy to bear on the issue. As the left rightly keeps pointing out, the right often seems to forget about the great mass of humanity that was not only cast onto the sidelines but actively oppressed and enslaved, even as freedom and individual rights seemed to be taking root elsewhere for a select few. In the seventeenth and eighteenth centuries, as England, America and France were freeing themselves from monarchy and the divine rights of kings, they were actively plunging millions of men and women in Africa, India and other parts of the world into bondage and slavery and pillaging their nations. The plight of slaves being transported to the English colonies under inhuman conditions was appalling, and so was the hypocrisy of thinkers like Thomas Jefferson and George Washington, who wrote about how all men are born equal while simultaneously keeping them unequal. Anyone who denies the essential hypocrisy of such liberal leaders in selectively promulgating their values would be intentionally misleading themselves and others.
Even later, as individual rights became more and more codified into constitutions and official documents, they remained confined to a minority, largely excluding people of color, indigenous people, women and poor white men from their purview. This hadn't been too different even in the crucible of democracy, Periclean Athens, where voting and democratic membership were restricted to landowning men. It was only in the late twentieth century - more than two hundred years after the Enlightenment - that these rights were fully extended to all. That's an awfully long time for what we consider basic freedoms to seep into every stratum of society. But we aren't there yet. Even today, the right often denies the systemic oppression of people of color and likes to pretend that all is well when it comes to equality of the law; in reality, when it comes to debilitating life events like police stops and searches, prison incarceration and health emergencies, minorities, women and the poor can be disproportionately affected. The right will seldom agree with these facts, but mention crime or dependence on welfare and the right is more than happy to generalize their accusations to all minorities or illegal immigrants.
The election of Donald Trump has given voice to ugly elements of racism and xenophobia in the U.S., and there is little doubt that these elements are mostly concentrated on the right. Even if many right-wingers are decent people who don't subscribe to these views, they also don't seem to be doing much to actively oppose them. Nor are they actively opposing the right's many direct assaults on the environment and natural resources, assaults that may constitute the one political action whose crippling effects are irreversible. Meanwhile, the faux patriotism on the far right that worships men like Jefferson and Woodrow Wilson while ignoring their flaws and regurgitates catchy slogans drafted by Benjamin Franklin and others during the American Revolution conveniently pushes the oppressive and hypocritical behavior of these men under the rug. Add to this a perverse miscasting of individual and states' rights, and you end up with people celebrating the Confederate Flag and Jefferson Davis.
If criticizing this hypocrisy and rejection of the great inequities in this country's past and present were all that the left was doing, then it would be well and good. Unfortunately the left has itself started behaving in ways that aren't just equally bad but possibly worse in light of the essential function that it needs to serve in a liberal society. Let's first remember that the left is the political faction that claims to uphold individual rights and freedom of speech. But especially in the United States during the last few years, the left has instead become obsessed with playing identity politics, and both individual rights and free speech have become badly mangled victims of this obsession. For the left, individual rights and freedom of speech are important as long as they apply to their favorite political groups, most notably minorities and women. For the extreme left in particular, there is no merit to individual opinion anymore unless it is seen through the lens of the group that the individual belongs to. Nobody denies that membership in your group shapes your individual views, but the left believes that individual views basically have no independent existence; this is an active rejection of John Locke's primacy of the individual as the most important unit of society. The left has also decided that some opinions – even if they may be stating facts or provoking interesting discussion – are so offensive that they must be censored, if not by official government fiat, then by mass protest and outrage that verges on bullying. Needless to say, social media with its echo chambers and false sense of reassurance engendered by surrounding yourself with people who think just like you has greatly amplified this regressive behavior.
As is painfully familiar by now, this authoritarian behavior is playing out especially on college campuses, with a new case of "liberal" students bullying or banning conservative speakers on campus emerging almost every week. Universities are supposed to be the one place in the world where speech of all kinds is not just explicitly allowed but encouraged, but you would not see this critical function fulfilled on many college campuses today. Add to this the Orwellian construct of "microaggressions" that essentially lets anyone decide whether an action, piece of speech or event is an affront to their favorite oppressed political group, and you have a case of full-blown unofficial censorship purely based on personal whims that basically stifles any kind of disagreement. It is censorship which squarely attacks freedom of speech as espoused by Voltaire, Locke, Adams and others. As Voltaire's biographer Evelyn Hall – a woman living in Victorian times – famously said, "I disapprove of what you say, but I will defend to the death your right to say it." It seems that a woman in Victorian times - a society that was decidedly oppressive to women - had more wisdom in defending freedom of speech than a young American liberal in the twenty-first century.
This behavior threatens to undermine and tear apart the very progressive forces which the left claims to believe in. Notably, their so-called embrace of individual rights and diversity often seems to exclude white people, and white men in particular. The same people who claim to be champions of individual rights claim that all white men are "privileged", have too many rights, are trampling on others' rights and do not deserve more. The writer Edward Luce, who has just written a book warning about the decline of progressive values in America, talks about how, at the Democratic National Convention leading up to the 2016 U.S. election, he saw pretty much every "diversity" box checked except that belonging to white working class people; it was almost as if the Democrats wanted to intentionally exclude this group. For many on the left, diversity equates only to ethnic and gender diversity; any other kind of diversity, especially intellectual or viewpoint diversity, is to be either ignored or actively condemned. This attitude is entirely contrary to the free exchange of ideas and respect for diverse opinions that was the hallmark of Enlightenment thinking.
The claim that white men have enough rights and are being oppressive is factually contradicted by the plight of millions of poor whites who are having as miserable a time as any oppressed minority. They have lost their jobs and have lost their health insurance, they have been sold a pipe dream full of empty promises by all political parties, and in addition they find themselves mired in racist and ignorant stereotypes. The privilege that the left keeps drumming on about is real, but it is also context-dependent; it can rise and ebb with time and circumstance. To illustrate with just one example, a black man in San Francisco will enjoy certain financial and social privileges that a white man in rural Ohio quite certainly won't: how can one generalize notions of privilege to all white men then, and especially those who have been dealt a bad hand? The white working class has thus found itself with almost no friend; rich white people have both Democrats and Republicans, rich and poor minorities largely have Democrats, but poor whites have no one and are being constantly demonized. No wonder they voted for Donald Trump out of desperation; he at least pretended to be their friend, while the others did not even put on a pretense. The animosity among white working class people is thus understandable and documented in many enlightening books, especially Arlie Hochschild's "Strangers in their Own Land". Even Noam Chomsky, who cannot remotely be accused of being a conservative, has sympathized with their situation and justifiable resentment. And as Chomsky says, the problem is compounded by the fact that not everyone on the left actually cares about poor minorities, since the Democratic party which they support has largely turned into a party of moneyed neoliberal white elites in the last two decades.
This singling out of favorite political groups at the expense of other oppressed ones is identity politics at its most pernicious, and it's not just hypocritical but destructive; the counter-response to selective oppression cannot also be selective oppression. As Gandhi said, an eye for an eye makes the whole world go blind. And this kind of favoritism steeped in identity politics is again at odds with John Locke's idea of putting the individual front and center. Locke was a creature of his times, so just like Jefferson he did not actively espouse individual freedom for indigenous people, but his idealization of the individual as the bearer of natural rights was clear and critical. For hundreds of years that individual was mostly white, but the response to that asymmetry cannot simply be to pick an individual of another skin color.
The general response on the left against the sins of Western Civilization and white men has been to consider the whole edifice of Western Civilization as fundamentally oppressive. In some sense this is not surprising, since for many years the history of Western Civilization was written by the victors - by white men. A strong counter-narrative emerged with books like Howard Zinn's "A People's History of the United States"; since then many others have followed suit and they have contributed very valuable, essential perspectives from the other side. Important contributions to civilizational ideas from the East have also received their due. But the solution is not to swing to the other extreme and dismiss everything that white men in the West did or focus only on their sins, especially as viewed through the lens of our own times. That would be a classic case of throwing out the baby with the bathwater, and exactly the kind of regressive thinking that the leaders of India avoided when they overthrew the British.
Yes, there are many elements of Western Civilization that were undoubtedly oppressive, but the paradox of it was that Western Civilization and white men also simultaneously crafted many ideas and values that were gloriously progressive; ideas that could serve to guide humanity toward a better future and are applicable to all people in all times. And these ideas came from the same white men who also brought us colonialism, oppression of women and slavery. If that seems self-contradictory or inconvenient, it only confirms Walt Whitman's strident admission: "Do I contradict myself? Very well, then I contradict myself. I am large, I contain multitudes." We can celebrate Winston Churchill's wartime leadership and oratory while condemning his horrific orchestration of one of India's worst famines. We can celebrate Jefferson's plea for separation of church and state and his embrace of science while condemning his treatment of slaves; but if you want to remove statues of him or James Madison from public venues, then you are effectively denouncing both the slave-owning practices and the Enlightenment values of these founding fathers.
Consider one of the best-known Enlightenment passages, the beginnings of the Declaration of Independence as enshrined in Jefferson's soaring words: "We hold these truths to be self-evident; that all men are created equal; that they are endowed by their Creator with certain inalienable rights; that among these are life, liberty and the pursuit of happiness." It is easy to dismiss the slave-owning Jefferson as a hypocrite when he wrote these words, but their immortal essence was captured well by Abraham Lincoln when he realized the young Virginian's genius in crafting them:
"All honor to Jefferson--to the man who, in the concrete pressure of a struggle for national independence by a single people, had the coolness, forecast, and capacity to introduce into a merely revolutionary document, an abstract truth, applicable to all men and all times, and so to embalm it there, that to-day, and in all coming days, it shall be a rebuke and a stumbling-block to the very harbingers of re-appearing tyranny and oppression."
Thus, Lincoln clearly recognized that whatever his flaws, Jefferson intended his words to apply not just to white people or black people or women or men, but to everyone besieged by oppression or tyranny in all times. Like a potent mathematical theorem, the abstract, universal applicability of Jefferson's words made them immortal. In light of this great contribution, Jefferson's hypocrisy in owning slaves, while unfortunate and deserving condemnation, cannot be held up as a mirror against his entire character and legacy.
In its blanket condemnation of dead white men like Jefferson, the left also fails to appreciate what is perhaps one of the most marvelous paradoxes of history. It was precisely words like these, written and codified by Jefferson, Madison and others in the American Constitution, that gradually allowed slaves, women and minorities to become full, voting citizens of the American Republic. Yes, the road was long and bloody, and yes, we aren't even there yet, but as Martin Luther King memorably put it, the arc of the moral universe definitely bent toward justice in the long term. The left ironically forgets that the same people whom it rails against also created the instruments of democracy and freedom that put the levers of power into the hands of Americans of all colors and genders. There is no doubt that this triumph was made possible by the ceaseless struggles of traditionally oppressed groups, but it was also made possible by a constitution written exclusively by white men who oppressed others: Whitman's multitudinous contradictions in play again.
Along with individual rights, a major triumph of Western Civilization and the Enlightenment has been to place science, reason, facts and observations front and center. In fact in one sense, the entire history of Western Civilization can be seen as a struggle between reason and faith. This belief in science as a beacon of progress was enshrined in the Royal Society's motto extolling skepticism: "Nullius in verba", or "Nobody's word is final". Being skeptical about kings' divine rights or about truth as revealed in religious texts was a profound, revolutionary and counterintuitive idea at the time. Enlightenment values ask us to bring only the most ruthless skepticism to bear on truth-seeking, and to let the facts lead us where they do. Science is the best tool for ridding us of our prejudices, but it never promises us that its truths would be psychologically comforting or conform to our preconceived social and political beliefs. In fact, if science does not periodically make us uncomfortable about our beliefs and our place in the universe, we are almost certainly doing it wrong.
Sadly, the left and right have both played fast and loose with this critical Enlightenment value. Each side looks to science and cherry-picks facts that confirm its social and political beliefs; each side then surrounds itself with people who believe what they do, and denounces the other side as immoral or moronic. For instance, the right rejects factual data on climate change because it's contrary to their political beliefs, while the left rejects data on gender or racial differences because it's contrary to theirs. The religious right rejects evidence, while the religious left rejects vaccination. Meanwhile, each side embraces the data that the other has rejected with missionary zeal because it supports their social agenda. Data on other social or religious issues is similarly met with rebuke and rejection. The right does not want to have a reasonable discussion on socialism, while the left does not want to have a reasonable discussion on immigration or Islam. The right often fails to see the immense contribution of immigration to this country's place in the world, while the left often regards any discussion even touching on reasonable limits to immigration as xenophobia and nativism.
The greatest tragedy of this willful blindness is that where angels fear to tread, fools and demagogues willingly step in. For instance, the left's constant refusal to engage in an honest and reasonable critique of Islam, and its branding of those who wish to do this as Islamophobes, discourages level-headed people from entering that arena, thus paving the way for bona fide Islamophobes and white supremacists. Meanwhile, the right's refusal to accept even reasonable evidence for climate change opens the field to those who think of global warming as a secular religion with special punishments for heretics. Both sides lose, but what really loses here is the cause of truth. Since truth has already become a casualty in this era of fake news and exaggerated polemics on social media, this refusal on both sides to accept facts that are incompatible with their psychological biases will surely sound the death knell for science and rationality. Then, as Carl Sagan memorably put it, unable to distinguish between what is true and what feels good, clutching our crystals, we will gradually slide, without even knowing it, into darkness and ignorance.
We need to resurrect the cause of Enlightenment values and Western Civilization, the values espoused by Jefferson, Locke and Hume, by Philadelphia, London and Athens. The fact that flawed white men largely created them should have nothing to do with their enthusiastic acceptance and propagation, since their essential, abstract, timeless qualities have nothing to do with the color of the skin of those who thought of them; rejecting them because of the biases of their creators would be, at the very least, replacing one set of biases with another.
One way of appreciating these values is to actually resurrect them with all their glories and faults in college courses, because college is where the mind truly forms. In the last 40 years or so, the number of colleges that include Western Civilization as a required course in their curriculum has declined significantly. Emphasis is put instead on world history. It is highly rewarding to expose students to world history, but surely there is space to include a capsule history of the fundamental principles of Western Civilization as a required component of these curricula. Another strategy to leverage these ideals is to use the power of social media in a constructive manner, to use the great reaches of the Internet to bring together people who are passionate about them and who care about their preservation and transmission.
This piece may seem like it dwells more on the failures of the left than the right. For me the reason is simple: Donald Trump's election in the United States, along with the rise of similar authoritarian right-wing leaders in other countries, convinces me that at least for the foreseeable future, we won't be able to depend on the right to safeguard these values. Over the last few decades, conservative parties around the world and especially the Republican party in the United States have made their intention to retreat from the shores of science, reason and moderation clear. That does not mean that nobody on the right cares about these ideals, but it does mean that for now, the left will largely have to fill the void. In fact, by stepping up the left will in one sense simply be fulfilling the ideals enshrined by many of its heroes, including Franklin Roosevelt, Rosa Parks, Susan B. Anthony and John F. Kennedy. Conservatives in turn will have to again be the party of Abraham Lincoln and Dwight Eisenhower if they want to sustain democratic ideals, but they seem light years from being this way right now. If both sides fail to do this then libertarians will have to step in, but unfortunately libertarians comprise a minority of politically effective citizens. At one point in time, libertarians and liberals were united in sharing the values of individual rights, free speech, rational enlightenment and a fearless search for the truth, but the left seems to have sadly ceded that ground in the last few years. The left's view of Western Civilization has become not only one-sided but also fundamentally pessimistic and dangerous.
Here are the fatal implications of that view: If you think Western Civilization is essentially oppressive, then you will always see it as oppressive. You will always see only the wretchedness in it. You will end up focusing only on its evils and not its great triumphs. You will constantly see darkness where you should see light. And once you relinquish stewardship of Western Civilization, there may be nobody left to stand up for liberal democracy, for science and reason, for all that is good and great that we take for granted.
You will then not just see darkness but ensure it. Surely none of us want that.