The cost of gene sequencing has dropped faster than even Moore's Law would predict.
I have always thought that Moore's Law is one of the most misunderstood and exaggerated articles of faith of our times. It should really be called Moore's Observation, since it was no more than that when Gordon Moore made it in 1965, at a time when the semiconductor industry it described was barely getting off the ground. Since then the observation has been applied to everything from genetics to neuroscience. But it has always suffered from hype, most notably from people like Ray Kurzweil who see the law as some kind of holy harbinger of the hallowed singularity.
There is little doubt that the law has held up quite well for electronics, and as the Nature article makes clear, its endurance has been the result of a number of fundamental and unexpected advances in technology.
The 1990s called for further innovation. Until then, as transistors became smaller, their speed and energy efficiency increased. But when the components reached around 100 nanometres across, miniaturization began to have the opposite effect, worsening performance. Chip-makers such as Intel, which Moore co-founded, and IBM again looked to basic science to improve the performance of transistor materials. Major help came from condensed-matter physicists. They had known for decades that the ability of silicon to conduct electricity improves substantially when its crystal lattice is stretched — for instance, by layering it on another crystal in which the atoms have a different spacing. Engineers introduced strained silicon into chips in the 2000s, and Moore’s law stayed true for several more years.
And yet, as the piece notes, the observation is now approaching limits set by fundamental physics, and only new physics could possibly extend it. Given all the limitations it catalogs, the article in fact ends on a rather unwarrantedly upbeat note.
Outside its traditional domain of electronics, Moore's observation has clearly had an uneven record. The cost and speed of gene sequencing, for instance, have spectacularly outpaced the law, and new approaches in the field might cut the cost and time down even further. There is no doubt that the upheaval in gene sequencing is truly remarkable and should warm the cockles of the hearts of Moorians everywhere.
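To make the comparison concrete, here is a minimal sketch of the arithmetic, using the widely cited approximate NHGRI figures of roughly $100 million per genome in 2001 and roughly $1,000 per genome by 2015 (the exact numbers, the function name, and the two-year doubling period are illustrative assumptions, not data from the article):

```python
# Sketch: what a Moore's-Law decline (cost halving every two years) would
# have predicted for sequencing, versus the roughly observed drop.
# Figures are approximate, widely cited NHGRI-style estimates:
# ~$100 million per genome in 2001, ~$1,000 per genome by 2015.

def moores_law_cost(start_cost, years, doubling_period=2.0):
    """Project a cost that halves every `doubling_period` years."""
    return start_cost / 2 ** (years / doubling_period)

start_cost = 100e6   # ~cost per genome in 2001 (approximate)
years = 14           # 2001 -> 2015

projected = moores_law_cost(start_cost, years)
actual = 1_000       # ~cost per genome in 2015 (approximate)

print(f"Moore's-Law projection for 2015: ${projected:,.0f}")
print(f"Approximate actual 2015 cost:    ${actual:,}")
print(f"Sequencing beat the projection by roughly {projected / actual:,.0f}x")
```

Even granting Moore's Law its full exponential pace, sequencing would still have cost several hundred thousand dollars per genome by 2015; the actual figure was about three orders of magnitude lower.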
And yet in other fields the trend hasn't really held up. Drug discovery, for instance, has become even harder than it was in the 1990s, an observation encapsulated by its own depressing moniker - Eroom's Law ("Moore" spelled backwards). Similarly, when it comes to battery technology, the limitations of the law seem clear from fundamental notions of chemical reactivity and kinetics; this means that a Tesla Model S costing ten thousand dollars might remain little more than a cherished vision for a long time. And let's not even get started on neuroscience where, even if the mapping of neuronal space might follow a Moorish pattern, the interpretation and understanding of this space seem poised to proceed at a rate that would put even the glacially slow Eroom to shame. The truth of Moore's observation thus seems to follow the mundane law that there are in fact no laws.
There are several problems with generalizing Moore's Law, but to me the most serious is that the law assumes exponential, or at least constant and linear, growth in the knowledge on which new technologies can piggyback. Scientific and technological revolutions seem to follow a Galisonian-Kuhnian alternation in which new ideas give birth to new tools, which in turn unearth still newer ideas. This works fine as long as every new tool uncovers a fundamentally new layer of knowledge. However, this optimistic view is largely a child of the age of reductionism, in which reductionist tools (like the microscope and the telescope) gave rise to reductionist ideas (like the genetic basis of heredity and the nuclear atom) and vice versa. We are now past that age and are running into the walls that reductionism has erected for us in the form of complexity, non-linearity and emergence. Unlike the reductionist tools of the past, we simply don't have a good idea right now of the kinds of emergent tools that will break these barriers and uncover more emergent knowledge. To me it's these limitations of reductionism in the face of emergent complexity that constitute the biggest argument against the continuing application of Moore's Law to everything.
It is interesting to contemplate the reach of Moore's Law over the next fifty years and to wonder what it will look like in 2065 (who knows, I may even be around then to cross-check my words...). I think the general features of the law are going to be exactly what they have been since 1965: wildly successful in certain areas and disappointingly bland in others. That exact mix will determine the impact of the law on the essential features of civilization. Gene sequencing will likely exceed the law for a long time, at least a few decades. Electronics will likely taper off at some point, although quantum computing might provide a slight boost. Neuroscience will likely start obeying the trend, at least in certain narrowly defined sub-applications like individual neuron mapping. But fields like drug discovery and battery technology are likely to run into a wall, one erected both by the fundamental laws of physics and chemistry and by our ignorance of the sheer complexity of biological systems.
The ultimate challenge though is understanding. No matter how many genomes we rapidly sequence, transistors we densely pack or neurons we interrogate, the knowledge emerging from all that data is not automatically going to follow Moore's Law. Technology can deliver us information at an exponentially accelerating pace, but the trees of knowledge growing from that information will still be a function of the foibles of the very mind that makes it possible to plumb the depths of the river of data predicted by Gordon Moore in 1965.