Karl Popper's grounding in the age of physics colored his views regarding the way science is done. Falsification was one of the resulting casualties. (Image: Wikipedia Commons)
Earlier this year the 'Big Questions' website Edge.org asked the following question: “What scientific idea is ready for retirement?” Among the respondents was physicist Sean Carroll, who takes on an idea from the philosophy of science that’s usually considered a given: falsification. I mostly agree with Carroll’s take, although others seem to be unhappier, mainly because Carroll seems to be suggesting that lack of falsification should not really make a dent in ideas like the multiverse and string theory.
I think falsification is one of those ideas which is a good guideline but which cannot be taken at face value and applied with abandon to every scientific paradigm or field. It’s also a good example of how ideas from the philosophy of science may have little to do with real science. Too much of anything is bad, especially when that anything is considered to be an inviolable truth.
It’s instructive to look at falsification’s father to understand the problems with the idea. Just like his successor Thomas Kuhn, Karl Popper was steeped in physics. He grew up during the heyday of the discipline and ran circles around the Vienna Circle, whose members (mostly mathematicians, physicists and philosophers) never really accepted him as part of the group. Just like Kuhn, Popper was heavily influenced by the revolutionary discoveries in physics during the 1920s and 30s, and this colored his philosophy of science.
Popper and Kuhn are both favorite examples of mine for illustrating how the philosophy of science has been biased toward physics and by physicists. The origin of falsification was simple: Popper realized that no amount of data can really prove a theory, but that even a single key data point can potentially disprove it. The two scientific paradigms reigning then – quantum mechanics and relativity – certainly conformed to his criterion. Physics as practiced then was adept at making very precise, quantitative predictions about a variety of phenomena, from the electron’s charge to the precession of Mercury’s perihelion. Falsification certainly worked very well when applied to these theories. Sensibly, Popper advocated it as a tool to distinguish science from non-science (and from nonsense).
But in 2014 falsification has become a much less reliable and more complicated beast. Let’s run through a list of its limitations and failures. For one thing, Popper’s idea that no amount of data can confirm a theory is a dictum that’s simply not obeyed by the majority of the world’s scientists. In practice a large amount of data does improve confidence in a theory. Scientists usually don’t need to confirm a theory one hundred percent in order to trust and use it; in most cases a theory only needs to be good enough. Thus the purported lack of confidence in a theory just because we are not one hundred percent sure of its validity is a philosophical fear, more pondered by grim professors haunting the halls of academia than by practical scientists performing experiments in the everyday world.
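As a toy illustration (not anything Popper or Carroll invokes, and with made-up likelihood numbers), here is a simple Bayesian-style update showing how confidence in a hypothesis climbs with confirming data without ever reaching one hundred percent:

```python
# Toy calculation with invented likelihoods: each confirming observation
# raises confidence in a hypothesis H, but the posterior never reaches 1.

def update(prior, p_data_given_h=0.9, p_data_given_not_h=0.5):
    """Posterior P(H) after one observation that favors H."""
    numerator = p_data_given_h * prior
    evidence = numerator + p_data_given_not_h * (1.0 - prior)
    return numerator / evidence

confidence = 0.5  # start agnostic about H
for n in range(1, 21):
    confidence = update(confidence)
    if n in (1, 5, 10, 20):
        print(f"after {n:2d} confirming observations: P(H) = {confidence:.4f}")
# Confidence climbs toward 1 but never hits it -- "good enough" for practice.
```

The probability gets arbitrarily close to certainty without ever arriving, which is exactly the "good enough" confidence that working scientists settle for.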
Nor does Popper’s exhortation that a single incisive data point slay a theory hold any water in many scientists’ minds. Whether because of pride in their creations or because of simple caution, most scientists don’t discard a theory the moment there’s an experiment which disagrees with its main conclusions. Maybe the apparatus is flawed, or maybe you have done the statistics wrong; there’s always something that can rescue a theory from death. But most frequently, it’s a simple tweaking of the theory that can save it. For instance, the highly unexpected discovery of CP violation did not require physicists to discard the theoretical framework of particle physics. They could easily save their quantum universe by introducing some further principles that accounted for the anomalous phenomenon. Science would be in trouble if scientists started abandoning theories the moment an experiment disagreed with them. Of course there are some cases where a single experiment can actually make or break a theory but fortunately for the sanity of its practitioners, there are few such cases in science.
Another reason falsification has turned into a nebulous entity is that much of modern, cutting-edge science is based on models rather than theories. Models are both simpler and less rigorous than theories, and they apply to specific, complicated situations which cannot be resolved from first principles. There may be multiple models that can account for the same piece of data. As a molecular modeler I am fully aware of how one can tweak models to fit the data. Sometimes this is justified; at other times it’s a sneaky way to avoid admitting failure. But whatever the case, the fact is that falsification of a model almost never kills it instantly, since a model by its very nature is supposed to be a more or less fictional construct. Both climate models and molecular models can be manipulated to agree with the data when the data disagrees with their previous incarnation, a fact that gives many climate skeptics heartburn. The issue here is not whether such manipulation is justified; rather, it's that falsification is really a blunt tool for judging the validity of such models. As science becomes even more complex and model-driven, this failure of falsification to discriminate between competing models will become even more widespread.
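To make that concrete, here is a deliberately artificial sketch (invented data and invented models, not a real molecular or climate calculation) of how a model that misses the observations can be rescued by bolting on an adjustable correction term and refitting:

```python
# Illustrative sketch with made-up data: a simple model that misses the
# observations can be "rescued" by adding an adjustable correction term
# and refitting, instead of being discarded as falsified.
import numpy as np
from scipy.optimize import curve_fit

x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
y = np.array([1.0, 1.3, 1.9, 3.1, 5.2, 8.4])   # invented "observations"

def simple_model(x, a, b):       # original model: straight line
    return a + b * x

def tweaked_model(x, a, b, c):   # same model plus a correction term
    return a + b * x + c * x**2

for name, model in [("simple", simple_model), ("tweaked", tweaked_model)]:
    params, _ = curve_fit(model, x, y)
    rss = float(np.sum((y - model(x, *params))**2))  # residual sum of squares
    print(f"{name:8s} model: params = {np.round(params, 2)}, RSS = {rss:.3f}")
# The extra parameter soaks up most of the disagreement, so the "falsified"
# model survives in tweaked form -- falsification is a blunt tool here.
```

The residuals alone can't tell you whether the correction term reflects something real or is just a patch; that judgment call, not falsification, is where the real argument lies.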
The last problem with falsification is that, since it was heavily influenced by Popper’s training in physics, it simply fails to apply to many activities pursued by scientists in other fields, such as chemistry. The Nobel Prize-winning chemist Roald Hoffmann has argued in his recent book that falsification is almost irrelevant to many chemists whose main activity is to synthesize molecules. What hypothesis are you falsifying, exactly, when you are making a new drug to treat cancer or a new polymer to sense toxic environmental chemicals? Now you could get very vague and general and claim that every scientific experiment is a falsification experiment since it’s implicitly based on belief in some principle of science. But as they say, a theory that explains everything explains nothing, so such a catchall definition of falsification ceases to be useful.
All this being said, there is no doubt that falsification is a generally useful guideline for doing science. Like a few other commenters, I am surprised that Carroll uses his critique of falsification to justify work in areas like string theory and the multiverse, since it seems to me that those are precisely the areas where testable, falsifiable predictions are most badly needed, given their lack of empirical success so far. Perhaps Carroll is simply saying that too much of anything, including falsification, is bad. With that I resoundingly agree. In fact I would go further and contend that too much philosophy is always bad for science; as they say, the philosophy of science is too important to be left to philosophers of science.