This year's Nobel Prize in Physics was awarded to Saul Perlmutter, Brian Schmidt and Adam Riess for their discovery of the accelerating universe, a finding that leads to the startling postulate that roughly 75% of our universe consists of a hitherto unknown entity called dark energy. All three had been considered favorites for a long time, so the award is not surprising. The prize also underscores the continuing importance of cosmology; it was awarded in 2006 to George Smoot and John Mather, again for observations confirming the Big Bang and the expansion of the universe.
This is an important discovery that stands on the shoulders of august minds and an exciting history. It continues a grand narrative that begins with Henrietta Swan Leavitt (who established a standard reference for calculating astronomical distances), runs through Albert Einstein (whose despised cosmological constant was resurrected by these findings) and Edwin Hubble, continues with Georges Lemaître and George Gamow (with their ideas about the Big Bang), and culminates in our current sophisticated understanding of the expanding universe. Anyone who wants to know more about the personalities and developments leading up to this year's prize should read Richard Panek's excellent book "The 4 Percent Universe".
But what is equally interesting is the ignorance that the prizewinning discovery reveals. The prize was really awarded for the observation of an accelerating universe, not for its explanation. Nobody really knows why the universe is accelerating. The current explanation for the acceleration consists of a set of different models, none of which has been definitively shown to fit the facts well enough. And this makes me wonder whether such a proliferation of models without accompanying concrete theories is going to characterize science in the future.
The twentieth century saw theoretical advances in physics that agreed with experiment to an astonishing degree of accuracy. The culmination of achievement in modern physics was surely quantum electrodynamics (QED), widely regarded as the most accurate physical theory we have. Since then we have had some successes in quantitatively matching theory to experiment, most notably in the work validating the Big Bang and in the development of the standard model of particle physics. But for dark energy there is no theory that remotely approaches the rigor of QED when it comes to comparison with experiment.
Of course it's unfair to criticize the study of dark energy, since we are just getting started on tackling its mysteries. Maybe someday a comprehensive theory will be found, but given the complexity of what we are trying to achieve (essentially explaining the nature of all the matter and energy in the universe), it seems likely that we may always be stuck with models, not actual theories. And this may be the case not just in cosmology but in other sciences. The fact is that the kinds of phenomena science has been dealing with recently are multifactorial, complex and emergent. The mechanical, reductionist approaches that worked so well for atomic physics and molecular biology may turn out to be too impoverished for taking apart these phenomena. Take biology, for instance. Do you think we could have a complete "theory" of the human brain that quantitatively calculates all the brain states leading to consciousness and our reactions to the external world? How about building a "theory" of signal transduction that would allow us not just to predict but to truly understand (in a holistic way) all the interactions with drugs and biomolecules that living organisms undergo? And then there are other complex phenomena like the economy, the weather and social networks. It seems wise to say that we don't anticipate real overarching theories for these phenomena anytime soon.
On the other hand, I think it's a sign of things to come that most of these fields are rife with explanatory models of varying accuracy and validity. Most importantly, modeling and simulation are starting to be considered a respectable "third leg" of science, alongside theory and experiment. One simple reason for this is the recognition that many of science's greatest current challenges may not be amenable to quantitative theorizing, and we may have to treat models of phenomena as independent, authoritative explanatory entities in their own right. We are already seeing this happen in chemistry, biology, climate science and social science, and I have been told that even cosmologists now rely extensively on computational models of the universe. Admittedly these models still lag far behind theory and experiment, which have had head starts of about a thousand years. But there can be little doubt that such models will only become more accurate with increasing computational firepower. How accurate remains to be seen, but it's worth noting that there are already books making the case for an independent, study-worthy philosophy of modeling and simulation. These books exhort philosophers of science to treat models not just as convenient applications and representations of theories (which would then be the only fundamental things worth studying) but as independent explanatory devices in themselves that deserve separate philosophical consideration.
Could this then be at least part of the future of science? A future in which robust experimental observations are encompassed not by beautifully rigorous and complete theories like general relativity or QED but only by different models, patched together through a combination of rigor, empirical data, fudge factors and plain old intuition? This would be a new kind of science, as useful in its applications as its old counterpart but rooted in models rather than in complete theories. Given the history of theoretical science, such a future may seem dark and depressing. That is because, as the statistician George Box famously quipped, all models are wrong, but some are useful. What Box meant was that models often make unrealistic assumptions about all kinds of details and yet allow us to reproduce the essential features of reality. Thus they can never provide the sure connection to "reality" that theories seem to. This is especially a problem when disparate models give the same answer to a question. In the absence of discriminating ideas, which model is then the "correct" one? The usual answer is "none of them", since they all do an equally good job of explaining the facts. But this view of science, in which models that can be judged only on the basis of their utility are the ultimate arbiters of reality and in which there is thus no sense of a unified theoretical framework, feels deeply unsettling. In this universe the "real" theory will always remain hidden behind a facade of models, much as reality is always hidden behind the event horizon of a black hole. Such a universe can hardly warm the cockles of the hearts of those who are used to crafting grand narratives for life and the universe. But it may be the price we pay for more comprehensive understanding. In the future, Nobel Prizes may frequently be awarded for important observations for which there are no real theories, only models. The discovery of dark matter and dark energy, and our current attempts to understand the brain and signal transduction, could well be the harbingers of this new kind of science.
Should we worry about such a world, rife with models and devoid of theories? Not necessarily. If there's one thing we know about science, it's that it evolves. Grand explanatory theories have traditionally been supposed to be a key part (probably the key part) of the scientific enterprise. But this is mostly because of historical precedent, as well as a psychological urge to seek elegance and unification. That belief has been resoundingly validated in the past, but its utility may well have plateaued. I am not advocating some "end of science" scenario here, far from it, but as the recent history of string theory and theoretical physics in general demonstrates, even the most mathematically elegant and psychologically pleasing theories may have scant connection to reality. Because of the sheer scale and complexity of what we are currently trying to explain, we may have hit a roadblock in the application of the largely reductionist traditional scientific thinking that has served us so well for half a millennium.
Ultimately, though, what matters is whether our constructs (theories, models, rules of thumb or heuristic pattern recognition) are up to the task of constructing consistent explanations of complex phenomena. The business of science is explanation; whether it comes through unified narratives or piecemeal models is secondary. Although the former sounds more psychologically satisfying, science does not really care about stoking our egos. What is out there exists, and we do whatever is necessary and sufficient to unravel it.
2 comments:
Excellent post!
I've found that arguments along the lines of "reductionist thinking, as we know it, must stop [here]" often succumb to a special fallacy: overestimating confidence in exactly how much more complex some system is. The crux of such an argument, looking forward, often depends strictly on how well we can imagine knowing what we currently do not know. [Danger, danger... tautology alert]
In addition, it presumes that models themselves are composed of things that are not in some way based on existing theories (or smaller models).
In the sense of "computationally tractable AND sufficiently well understood to construct a model", it's clear that our ability to estimate the difficulty of creating a theory/model depends on our subjective appraisal of what we don't know yet... not to mention the shifting baselines of both knowledge and tractability.
The way you summed up the post was fitting, and (to me at least) brought to mind the first part of the Feynman series.
In the history of physics, there have been periods when the field was rife with models. Before the triumph of the Schrödinger equation, there were many different models of what an atom actually was. One famous example was Thomson's plum pudding model of the atom.
Another period was particle physics before the adoption of Gell-Mann's standard model (which I think is rather ugly), combined with 't Hooft and Veltman's renormalization scheme. Indeed, this period was so full of models that such models gained a technical name: phenomenological theories, or... models. Even Feynman got into the game, with his "parton" model of sub-atomic particle collisions.