The scientific way of thinking might seem natural to us in the twenty-first century, but it’s actually very new and startlingly unintuitive. For most of our existence, we blindly groped rather than reasoned our way to the truth. This was because evolution did not fashion our brains for the processes of hypothesis generation and testing that are now intrinsic to science; what it did fashion them for was gut reactions, quick thinking and emotional reflexes. When you were a hunter-gatherer on the savannah and the most important problem you faced was not how to invest your retirement savings but whether a shadow behind the bushes was a tree or a lion, you didn’t quite have time for hypothesis testing. If you had tried, it could well have been the last hypothesis you ever tested.
It is thus no wonder that modern science as defined by the systematic application of the scientific method emerged only in the last few hundred years. But even since then, it’s been hard to override a few million years of evolution and unfailingly use its tools every time. At every single moment the primitive, emotional, frisky part of our brain is urging us to jump to conclusions based on inadequate data and emotional biases, and so it’s hardly surprising that we often make the wrong decisions, even when the path to the right ones is clear (in retrospect). It’s only in the last few decades though that scientists have started to truly apply the scientific method to understand why we so often fail to apply the scientific method. These studies have led to the critical discovery of cognitive biases.
There are many important psychologists, neuroscientists and economists who have contributed to the field of cognitive biases, but it seems only fair to single out two: Amos Tversky and Daniel Kahneman. Over several decades, Tversky and Kahneman performed ingenious studies - often through surveys asking people to solve simple problems - that laid the foundation for understanding human cognitive biases. Kahneman received the Nobel Prize for this work; Tversky undoubtedly would have shared it had he not tragically died young from cancer. The popular culmination of the duo’s work was Kahneman’s book “Thinking, Fast and Slow”, in which he showed how cognitive biases are built into the human mind. These biases manifest themselves in the distinction between two modes of thinking: System 1 and System 2.
System 2 is responsible for most of our slow, rational thinking. System 1 is responsible for most of our cognitive biases. A cognitive bias is basically any thinking shortcut that allows us to bypass slow, rational judgement and quickly reach a conclusion based on instinct and emotion. Whenever we are faced with a decision, especially in the face of inadequate time or data, System 1 kicks in and presents us with a conclusion before System 2 has had time to evaluate the situation more carefully using all available data. System 1 relies heavily on the emotional parts of the brain, including the amygdala, which is responsible, among other things, for our fight-flight-freeze response. System 1 may seem like a bad thing for evolution to have engineered in our brains, but it’s what allows us to “think on our feet”, face threats or chaos, and react quickly within the constraints of time and competing resources. Contrary to what we might think, cognitive biases aren’t always bad; in the primitive past they often saved lives, and even in our modern times they occasionally allow us to make smart decisions quickly and are generally indispensable. But they start posing real problems when we have to make important decisions.
We suffer from cognitive biases all the time - there is no getting away from a system hardwired into the “reptilian” part of our brain through millions of years of evolution - but these biases become a particular liability when we are faced with huge amounts of uncertain data, tight schedules, competing narratives, the quest for glory, and inter- and intragroup rivalry. All these factors are writ large in the multifaceted world of drug discovery and development.
First of all, there’s the data problem. Especially in the last two decades or so, because of advances in genomics, instrumentation and collaboration and the declining cost of technology, there has been an explosion of all kinds of data in drug discovery: chemical, biological, computational and clinical. In addition, much of this data is not integrated well into unified systems and can be unstructured, incomplete and just plain erroneous. This complexity of data sourcing, heterogeneity and management means that every single person working in drug development always has to make decisions based on a barrage of data that still presents only a partial picture of reality. Multiparameter optimization has to be carried out when some of the parameters are almost always unknown; the sketch after this paragraph gives a toy illustration of the predicament. Secondly, there’s the time factor. Drug development is a very fast-paced field, with tight timelines driven by the urgency of getting new treatments to patients, the lure of large profits, and high burn and attrition rates. Most scientists or managers in drug discovery cannot afford to spend enough time getting all the data, and are almost always forced to make major decisions based on what they have rather than what they wish they had.
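To make the multiparameter point concrete, here is a minimal, made-up sketch; the property names, thresholds and weights are invented for illustration and are not anyone’s actual scoring scheme. The point is simply that a compound has to be scored on several criteria even when some of the measurements don’t exist yet:

```python
# Toy multiparameter scoring under missing data. All property names, thresholds
# and weights are invented; "higher is better" is assumed for every property here.

compound = {
    "potency_pIC50": 7.2,
    "solubility_uM": None,          # not yet measured
    "hERG_IC50_uM": None,           # not yet measured
    "microsomal_t_half_min": 42.0,
}

# (desired minimum, weight) for each property - made-up numbers
targets = {
    "potency_pIC50": (7.0, 1.0),
    "solubility_uM": (50.0, 1.0),
    "hERG_IC50_uM": (10.0, 1.0),
    "microsomal_t_half_min": (30.0, 0.5),
}

score, weight_used, total_weight = 0.0, 0.0, 0.0
for prop, (threshold, weight) in targets.items():
    total_weight += weight
    value = compound[prop]
    if value is None:
        # The uncomfortable part: a missing value is either skipped
        # (shrinking the evidence base, as done here) or replaced by a guess.
        continue
    score += weight * (1.0 if value >= threshold else 0.0)
    weight_used += weight

print(f"Score {score / weight_used:.2f}, based on only "
      f"{weight_used:.1f} of {total_weight:.1f} units of evidence")
```

Whether the missing values are skipped, as above, or filled in with guesses, the final number looks just as crisp as one computed from complete data - which is exactly the opening that System 1 is happy to exploit.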
Thirdly, there’s interpersonal rivalry and the quest for glory. The impact of this sociological factor on cognitive biases cannot be overstated. While the collaborative nature of drug discovery makes the field productive, it also puts pressure on scientists to be the first to declare success, or the first to set trends. On a basic scientific level, for instance, trendsetting can take the form of proclaiming “rules” or “metrics” for "druglike" features, in the hope that fame and fortune will then be associated with the institution or individuals who came up with them. But this relentless pressure to be first can foster biases of all kinds, including cognitive biases.
It would thus seem that drug discovery is a prime example of a complex, multifaceted field that would be riddled with cognitive biases. But to my knowledge, there has been no systematic discussion of such biases in the literature. This is partly because many people might shrug off obvious biases like confirmation bias without really taking a hard look at what they entail, and partly because nobody really pays scientists in drug discovery organizations to explore their own biases. Yet it seems to me that watching out for these biases in everyday organizational behavior would go at least some way toward mitigating them. And it’s hard to refute the argument that mitigating these biases would make scientists more likely to reach smarter decisions and contribute to the bottom line, in terms of both more efficient drug discovery and the ensuing profits. Surely pharmaceutical organizations would find that endpoint desirable.
A comprehensive investigation into cognitive biases in drug discovery would probably be a large-scale undertaking requiring ample time and resources; most of it would consist of identifying and recording such biases through detailed surveys. The good news, though, is that because cognitive biases are an inescapable feature of the human mind, the fact that they haven’t been recorded in systematic detail is no argument against their existence. It therefore makes sense to discuss how they might show up in the everyday decision-making process in drug discovery.
We will start by talking about some of the most obvious biases, and discuss others in future posts. Let’s begin with one that we are all familiar with: confirmation bias. Confirmation bias is the tendency to highlight and record information that reinforces our prior beliefs and to discard information that contradicts them. Those prior beliefs may have been cemented for good reasons, but that does not mean they will apply in every single case. Put simply, confirmation bias makes us ignore the misses and consider only the hits.
We see confirmation bias in drug discovery all the time. For instance, if molecular dynamics or fragment-based drug discovery or machine learning or some other technique - say, Method X - is your favorite technique for discovering drugs, then you will keep tracking successful applications of that technique without keeping track of the failures. Why would you do this? Several reasons, some technological and some sociological. You may have been trained in Method X since graduate school; Method X is thus what you know and do best, and you don’t want to waste time learning Method Y. Method X might legitimately have had one big success, and you might therefore believe in it - even with an n of 1. Method X might just be easy to use; in that case you are transformed into the man who looks for his keys under the streetlight, not because that's where they are but because that's where it's easiest to look. Method X could be a favorite of certain people whom you admire, while certain other people whom you like less hate it; in that case you will likely believe in it even if the haters actually have better data against it. Purported successes of Method X in the literature, in patents and by word of mouth will further reinforce it in your mind.
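As a toy illustration of how this selective bookkeeping plays out, here is a small simulation; the success rate, number of projects and recall fraction are all invented for the example. One observer records every outcome of Method X, while another remembers every hit but only a fraction of the misses, and the two end up with very different estimates of how well the method works:

```python
import random

random.seed(0)

TRUE_SUCCESS_RATE = 0.2   # invented: Method X actually succeeds in 1 of 5 projects
N_PROJECTS = 200          # invented number of projects that used Method X
MISS_RECALL = 0.25        # invented: only a quarter of the failures are remembered

outcomes = [random.random() < TRUE_SUCCESS_RATE for _ in range(N_PROJECTS)]

# Unbiased observer: records every project, hit or miss.
unbiased_estimate = sum(outcomes) / len(outcomes)

# Confirmation-biased observer: keeps every hit, forgets most of the misses.
remembered = [o for o in outcomes if o or random.random() < MISS_RECALL]
biased_estimate = sum(remembered) / len(remembered)

print(f"True success rate of Method X:  {TRUE_SUCCESS_RATE:.2f}")
print(f"Unbiased observer's estimate:   {unbiased_estimate:.2f}")
print(f"Biased observer's estimate:     {biased_estimate:.2f}")
```

With these made-up numbers the biased observer inflates Method X’s apparent hit rate severalfold without ever fabricating a result; simply forgetting most of the misses is enough.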
The same logic applies to the proliferation of metrics and “rules” for druglike compounds. Let me first say that I have used these metrics myself and they are often successful in a limited context in a specific project, but confirmation bias may lead me to only keep track of their successes and try to apply them in every drug discovery project. In general, confirmation bias can lead us to believe in the utility of certain ideas or techniques far beyond their sphere of applicability. The situation is made worse by the fact that the scientific literature itself suffers from a fundamental confirmation bias, publishing only successes. The great unwashed mass of Method X failures is thus lost to posterity.
There are some other biases that confirmation bias subsumes. For instance, the backfire effect leads people to paradoxically strengthen their beliefs when they are presented with contradicting evidence; it’s a well-documented phenomenon in political and religious belief systems, but science is not immune to its influence either. When you are already discounting evidence that contradicts your belief, you can just as readily discount evidence that strengthens the opposite belief. Another pernicious and common subset of confirmation bias is the bandwagon effect, which is often a purely social phenomenon. In drug discovery it has manifested itself through scores of scientists jumping onto a particular bandwagon: computational drug design, combinatorial chemistry, organocatalysis, HTS, VS...the list goes on. When enough people are on a bandwagon, it becomes hard to resist joining them; one fears that staying off could lead to both missed opportunities and censure from the community. And yet it’s clear that the number of people on a bandwagon has little to do with the fundamental integrity of the bandwagon; in fact the two might even be inversely correlated.
Confirmation bias is probably the most general bias in drug discovery, probably because it’s the most common bias in science and life in general. In the next few posts we will take a look at some other specific biases, all of which lend themselves to potential use and misuse in the field. For now, an exhortation for the twenty-first century: "Know thyself. But know thy cognitive biases even better."
Hi Ash,
As noted on twitter, the unquestioning acceptance of PAINS filters can be interpreted as confirmation bias (the compounds look ugly so nobody bothers to check the data analysis). Something that you may wish to address in subsequent posts is how cognitive biases are exploited by those lobbying to have their opinions more widely accepted. One can speculate as to how much lobbying was done in order to get the J Med Chem editors to prescribe special treatment in the author guidelines for compounds matching PAINS filters.
I’m not sure where it fits into the cognitive bias framework but you may wish to consider whether or not ‘converting’ an IC50 to a free energy change makes it seem more ‘physical’ (and, by implication, meaningful).
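For readers who haven’t seen the conversion Peter is referring to, here is a minimal sketch of the arithmetic, with arbitrary example IC50s: the IC50 is simply plugged into the standard relation for a dissociation constant, ΔG ≈ RT·ln(IC50), even though an IC50 is assay-dependent and is not a true Kd - which is arguably why the resulting kcal/mol figure only seems more ‘physical’.

```python
import math

R = 1.987e-3   # gas constant in kcal/(mol*K)
T = 298.0      # room temperature in K

def naive_dg_from_ic50(ic50_molar):
    """Treat an IC50 as if it were a dissociation constant and return an
    apparent binding free energy in kcal/mol. This ignores the fact that
    an IC50 depends on assay format, substrate concentration and so on."""
    return R * T * math.log(ic50_molar)

# Arbitrary example potencies
for ic50 in (1e-6, 100e-9, 10e-9):
    print(f"IC50 = {ic50 * 1e9:>6.0f} nM  ->  apparent dG = "
          f"{naive_dg_from_ic50(ic50):.1f} kcal/mol")
```

A tenfold change in IC50 maps to about 1.4 kcal/mol at room temperature regardless of what the IC50 actually measures; the conversion adds units but no new information.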
I would regard the bandwagon effect as more herding instinct than cognitive bias. It is also worth noting that, in some organizations, not leaping on the bandwagon can get you denounced as ‘negative’. Bandwagons in drug discovery may be partly due to panacea-centric thinking by management. When an organization has spent a lot of money on acquiring a capability, it is in the interests of both vendor and customer that the purchase is seen in the most positive light.
Good points Peter; I will include some of these in future posts. PAINS does increasingly seem to me to be at least in part a cognitive bias.
I suspect there are incentives not to look for these kinds of biases at public companies. If you are an executive at a pharma R&D company you have a number of reasons to want to paint future drug candidates as promising (high stock value, justifying yourself to the board, attracting the best talent, being a desirable partner for other companies in the field, etc.), all of which play out over a relatively short time horizon.
If you went and did an in-depth analysis of biases in drug development it would either be worthless or, in the main, throw cold water on your current crop of candidates. After all, those are the candidates you picked with the aid of these biases. Legal obligations would force you to disclose such information to stockholders, with the resulting harm to the executives. In the long run you could choose better targets for future research, but it's likely that the shorter time-horizon effects will matter more to executives.
Moreover, while it is relatively easy to show that biases exist even under considered analysis - by studying responses to questions with known answers - I'm not aware of any good results on how to avoid such bias in decision-making by large organizations.
Yes, the short-term vs. long-term thinking problem is the bane of organizational productivity. I don't know how anybody could think that long-term productivity could go anywhere but up if we scrutinize these kinds of biases.