Carl Sagan's 1995 prediction of our technocratic dystopia

In 1995, just a year before his death, Carl Sagan published a bestselling book called “The Demon-Haunted World” which lamented what Sagan saw as the increasing encroachment of pseudoscience on people’s minds. It was an eloquent and wide-ranging volume. Sagan was mostly talking about obvious pseudoscientific claptrap such as alien abductions, psychokinesis and astrology. But he was also an astute observer of human nature who was well-educated in the humanities. His broad understanding of human beings led him to write the following paragraph which was innocuously buried in the middle of the second chapter.

“I have a foreboding of an America in my children's or grandchildren's time -- when the United States is a service and information economy; when nearly all the manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what's true, we slide, almost without noticing, back into superstition and darkness.”

As if these words were not ominous enough, Sagan follows up just a page later with another paragraph which is presumably designed to reduce us to a frightened, whimpering mass.

“I worry that, especially as the Millennium edges nearer, pseudoscience and superstition will seem year by year more tempting, the siren song of unreason more sonorous and attractive. Where have we heard it before? Whenever our ethnic or national prejudices are aroused, in times of scarcity, during challenges to national self-esteem or nerve, when we agonize about our diminished cosmic place and purpose, or when fanaticism is bubbling up around us - then, habits of thought familiar from ages past reach for the controls.

The candle flame gutters. Its little pool of light trembles. Darkness gathers. The demons begin to stir.”

What’s striking about this writing is its almost clairvoyant prescience. The phrases “fake news” and “post-factual world” were not used in Sagan’s time, but he is clearly describing them when he talks about people being “unable to distinguish between what feels good and what’s true”. And the rise of nationalist prejudice seems to have occurred almost exactly as he described.

It’s also interesting how Sagan’s prediction of the outsourcing of manufacturing mirrors the concerns of so many people who voted for Trump. The difference is that Sagan was not taking aim at immigrants, partisan politics, China or similar factors; he was simply seeing the disappearance of manufacturing as an essential consequence of its tradeoff with the rise of the information economy. We are now acutely living that tradeoff and it has cost us mightily.

One thing that’s difficult to say is whether Sagan was also anticipating the impact of technology on the displacement of jobs. Automation was already around in the 90s and the computer was becoming a force to reckon with, but speech and image recognition, and the subsequent impact of machine learning on these tasks, were in their fledgling days. Sagan didn’t know about these fields; nonetheless, the march of technology also feeds into his concern about people gradually descending into ignorance because they cannot understand the world around them, even as technological comprehension stays in the hands of a privileged few.

In terms of people “losing the ability to set their own agendas or question those in power”, consider how few of us, let alone those in power, can grasp the science and technology behind deep learning, climate change, genome editing or even our iPhones. And yet these tools are subtly inserting themselves into pretty much all aspects of life, and there will soon be a time when no part of our daily existence is untouched by them. It will also be a time when we use these technologies without understanding them, essentially entrusting them with our lives, liberties and pursuit of happiness. Then, if something goes wrong, as it inevitably does with any complex system, we will be in deep trouble because of our lack of comprehension. Not only will there be chaos everywhere, but because we mindlessly used technology as a black box, we won’t have the first clue about how to fix it.

Equally problematic is the paradox that as technology becomes more user-friendly, it becomes ever easier to apply with abandon, without understanding its strengths and limitations. My own field of computer-aided drug design (CADD) is a good example. Twenty years ago, software tools in my field were the realm of experts. But graphical user interfaces, slick marketing and cheap computing power have now put them in the hands of non-experts. While this has led to a useful democratization of these tools, it has also led to their abuse and overapplication. For instance, most of these techniques have been used without a proper understanding of statistics, leading not only to incorrect results being published but also to a waste of resources and time in the always time-strapped pharmaceutical and biotech industries.

This same paradox is now going to underlie deep learning and AI, which are far more hyped and consequential than computer-aided drug design. Yesterday I read an interview with the Stanford computer scientist Andrew Ng, who enthusiastically advocated that millions of people be taught AI techniques. Ng and others are well-meaning, but what’s not discussed is the potential catastrophe that could arise from putting imperfect tools in the hands of millions of people who don’t understand how they work and who suddenly start applying them to important aspects of our lives. To illustrate the utility of large-scale education in deep learning, Ng gives the example of how the emergence of commercial electric installations suddenly created a demand for large numbers of electrical engineers. The difference is that electricity was far more deterministic and well understood than AI is. If it went wrong we largely knew how to fix it, because we knew enough about the behavior of electrons, wiring and circuitry.

The problem with many AI algorithms like neural nets is that not only are they black boxes, but their exact utility is still a big unknown. In fact, AI is such a fledgling field that even the experts don’t really understand its domains of applicability, so it’s too much to expect that people who acquire AI diplomas in a semester or two will do any better. I would rather have a small number of experts develop and use imperfect technology than have millions adopt technologies which are untested, especially when they are being used not just in our daily lives but in critical services like healthcare, transportation and banking.

As far as “those in power” are concerned, Sagan hints that they may no longer be politicians but technocrats. Both government and Silicon Valley technocrats have already taken over many aspects of our lives, and their hold seems only to tighten. One little-appreciated take on the recent Google memo fiasco came from journalist Elaine Ou, who focused on a very different aspect of the incident: the way it points toward the technological elite carefully controlling what we read, digest and debate based on their own social and political preferences. As Ou says,

“Suppressing intellectual debate on college campuses is bad enough. Doing the same in Silicon Valley, which has essentially become a finishing school for elite universities, compounds the problem. Its engineers build products that potentially shape our digital lives. At Google, they oversee a search algorithm that seeks to surface “authoritative” results and demote low-quality content. This algorithm is tuned by an internal team of evaluators. If the company silences dissent within its own ranks, why should we trust it to manage our access to information?”

I personally find this idea that technological access can be controlled by the political or moral preferences of a self-appointed minority deeply disturbing. Far from all information being freely available at our fingertips, such control will instead ensure that we increasingly read the biased, carefully shaped perspective of this minority. The recent event at Google, for example, has revealed the social opinions of several of its most senior personnel, as well as of those engineers who more directly control the flow of the vast amounts of information permeating our lives every day. The question is not whether you agree or disagree with their views; it’s that there’s a good chance these opinions will increasingly and subtly – sometimes without their proponents even knowing it – embed themselves into the pieces of code that influence what we see and hear pretty much every minute of our hyperconnected world. And this is not about simply switching the channel. When politics is embedded in technology itself, you cannot really switch the channel until you switch the entire technological foundation, something that’s almost impossible to accomplish in an age of oligopolies. This is an outcome that should worry even the most enthusiastic proponent of information technology, and it certainly should worry every civil libertarian. Even Carl Sagan was probably not thinking about this when he talked about “awesome technological powers being in the hands of a very few”.

The real fear is that ignorance born of technological control will be so subtle, gradual and all-pervasive that it will make us slide back, “almost without noticing”, not into superstition and darkness but into a false sense of security, self-importance and connectivity. In that sense it would very much resemble the situation in “The Matrix”. Giving people the illusion of freedom works better than any actual effort at curbing freedom; perfect control works when those who are controlled keep believing the opposite. Politicians have used this strategy for ages, but ceding it to all-powerful machines enveloping us in their byte-lined embrace will be the ultimate capitulation. We can be ruled by demons when they come disguised as gods.

5 comments:

  1. This is a very good blog post. Thought-provoking!

  2. Good one, Ash! The first two paragraphs from the great Carl Sagan have to be visualized against the backdrop of the psychobabble we are all exposed to!

    1. Thanks. Sagan's words are definitely immortal, because they seem to speak to an indelible feature of human nature.

  3. This is an excellent discussion, thanks Ash. In addition to the control of data and search, keep in mind that there is also newspaper ownership (e.g. Jeff Bezos owns the Washington Post), and the immense political influence that the technology companies have, which ensures that the necessary antitrust legislation against companies such as Google and Facebook does not proceed, and that the media coverage of them is immensely soft. Why?

    I find that the hype of new technological progress does not match the scale of progress we have actually seen in our lives over the past 40 years or so, which has been quite small. As Peter Thiel aptly said, we wanted flying cars, and what we got was 140 characters. And this continues, with the constant peddling of AI, virtual reality and so on. I don't find AI amazing. I find modifying yeast pathways to yield farnesene amazing, along with other such concrete but unheralded advances.

    Pseudoscience and superstition have always been around and always will be. I don't think darkness gathers; enthusiasm for making and creating is large at all levels, witness the "maker revolution" in areas such as electronics. What we have is an ensconcing of oligopoly at all levels (government, media, education, technology), which we cannot seem to shake.
    Dana

