Field of Science

Carl Sagan's 1995 prediction of our technocratic dystopia

In 1995, just a year before his death, Carl Sagan published a bestselling book called “The Demon-Haunted World”, which lamented what Sagan saw as the increasing encroachment of pseudoscience on people’s minds. It was an eloquent and wide-ranging volume. Sagan was mostly talking about obvious pseudoscientific claptrap such as alien abductions, psychokinesis and astrology. But he was also an astute observer of human nature who was well-educated in the humanities. His broad understanding of human beings led him to write the following paragraph, innocuously buried in the middle of the second chapter.

“I have a foreboding of an America in my children's or grandchildren's time -- when the United States is a service and information economy; when nearly all the manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what's true, we slide, almost without noticing, back into superstition and darkness.”
As if these words were not ominous enough, Sagan follows up just a page later with another paragraph which is presumably designed to reduce us to a frightened, whimpering mass.

“I worry that, especially as the Millennium edges nearer, pseudoscience and superstition will seem year by year more tempting, the siren song of unreason more sonorous and attractive. Where have we heard it before? Whenever our ethnic or national prejudices are aroused, in times of scarcity, during challenges to national self-esteem or nerve, when we agonize about our diminished cosmic place and purpose, or when fanaticism is bubbling up around us - then, habits of thought familiar from ages past reach for the controls.

The candle flame gutters. Its little pool of light trembles. Darkness gathers. The demons begin to stir.”

What’s striking about this writing is its almost clairvoyant prescience. The phrases “fake news” and “post-factual world” were not used in Sagan’s time, but he is clearly describing them when he talks about people being “unable to distinguish between what feels good and what’s true”. And the rise of nationalist prejudice seems to have occurred almost exactly as he described.

It’s also interesting how Sagan’s prediction of the outsourcing of manufacturing mirrors the concerns of so many people who voted for Trump. The difference is that Sagan was not taking aim at immigrants, partisan politics, China or similar factors; he simply saw the disappearance of manufacturing as an essential consequence of its tradeoff with the rise of the information economy. We are now living that tradeoff acutely, and it has cost us mightily.

One thing that’s difficult to say is whether Sagan was also anticipating the impact of technology on the displacement of jobs. Automation was already widespread in the 90s and the computer was becoming a force to reckon with, but speech and image recognition, and the subsequent impact of machine learning on these tasks, were in their fledgling days. Sagan didn’t know about these fields; nonetheless, the march of technology also feeds into his concern about people gradually descending into ignorance because they cannot understand the world around them, even as technological comprehension stays in the hands of a privileged few.

In terms of people “losing the ability to set their own agendas or question those in power”, consider how many of us, let alone those in power, can grasp the science and technology behind deep learning, climate change, genome editing or even our iPhones. And yet these tools are subtly inserting themselves into pretty much all aspects of life, and there will soon be a time when no part of our daily existence is untouched by them. Yet it will also be a time when we use these technologies without understanding them, essentially entrusting our lives, liberties and pursuit of happiness to them. Then, if something goes wrong, as it inevitably does with any complex system, we will be in deep trouble because of our lack of comprehension. Not only will there be chaos everywhere, but because we mindlessly used technology as a black box, we won’t have the first clue about how to fix it.

Equally problematic is the paradox in which, as technology becomes more user-friendly, it becomes easier and easier to apply it with abandon without understanding its strengths and limitations. My own field of computer-aided drug design (CADD) is a good example. Twenty years ago, software tools in my field were the realm of experts. But graphical user interfaces, slick marketing and cheap computing power have now put them in the hands of non-experts. While this has led to a useful democratization of these tools, it has also led to their abuse and overapplication. For instance, many of these techniques have been used without a proper understanding of statistics, leading not only to incorrect results being published but also to a waste of resources and time in the perennially time-strapped pharmaceutical and biotech industries.

This same paradox is now going to underlie deep learning and AI, which are far more hyped and consequential than computer-aided drug design. Yesterday I read an interview with computer scientist Andrew Ng from Stanford who enthusiastically advocated that millions of people be taught AI techniques. Ng and others are well-meaning, but what’s not discussed is the potential catastrophe that could arise from putting imperfect tools in the hands of millions of people who don’t understand how they work and who suddenly start applying them to important aspects of our lives. To illustrate the utility of large-scale education in deep learning, Ng gives the example of how the emergence of commercial electric installations suddenly created a demand for large numbers of electrical engineers. The difference is that electricity was far better understood and more deterministic than AI is. If it went wrong we largely knew how to fix it, because we knew enough about the behavior of electrons, wiring and circuitry.

The problem with many AI algorithms like neural nets is that not only are they black boxes, but their exact utility is still a big unknown. In fact, AI is such a fledgling field that even the experts don’t really understand its domains of applicability, so it’s too much to expect that people who acquire AI diplomas in a semester or two will do any better. I would rather have a small number of experts develop and use imperfect technology than have millions adopt untested technologies, especially when they are being applied not just in our daily lives but in critical services like healthcare, transportation and banking.

As far as “those in power” are concerned, Sagan hints that they may no longer be politicians but technocrats. Both government and Silicon Valley technocrats have already taken over many aspects of our lives, and their hold only seems to tighten. One little-appreciated piece on the recent Google memo fiasco came from journalist Elaine Ou, who focused on a very different aspect of the incident: the way it points toward the technological elite carefully controlling what we read, digest and debate based on their own social and political preferences. As Ou says,

“Suppressing intellectual debate on college campuses is bad enough. Doing the same in Silicon Valley, which has essentially become a finishing school for elite universities, compounds the problem. Its engineers build products that potentially shape our digital lives. At Google, they oversee a search algorithm that seeks to surface “authoritative” results and demote low-quality content. This algorithm is tuned by an internal team of evaluators. If the company silences dissent within its own ranks, why should we trust it to manage our access to information?”

I personally find this idea that technological access can be controlled by the political or moral preferences of a self-appointed minority deeply disturbing. Far from all information being freely available at our fingertips, such control will instead ensure that we increasingly read the biased, carefully shaped perspective of this minority. The recent event at Google, for example, has revealed the social opinions of several of its most senior personnel, as well as of those engineers who more directly control the flow of vast amounts of information permeating our lives every day. The question is not whether you agree or disagree with their views; it’s that there’s a good chance these opinions will increasingly and subtly – sometimes without their proponents even knowing it – embed themselves into the pieces of code that influence what we see and hear pretty much every minute of our hyperconnected world. And this is not about simply switching the channel. When politics is embedded in technology itself, you cannot really switch the channel until you switch the entire technological foundation, something that’s almost impossible to accomplish in an age of oligopolies. This is an outcome that should worry even the most enthusiastic proponent of information technology, and it certainly should worry every civil libertarian. Even Carl Sagan was probably not thinking about this when he talked about “awesome technological powers being in the hands of a very few”.

The real fear is that ignorance born of technological control will be so subtle, gradual and all-pervasive that it will make us slide back, “almost without noticing”, not into superstition and darkness but into a false sense of security, self-importance and connectivity. In that sense it would very much resemble the situation in “The Matrix”. Giving people the illusion of freedom works better than any actual effort at curbing freedom; politicians have used that strategy for ages, but ceding it to all-powerful machines enveloping us in their byte-lined embrace will be the ultimate capitulation. Perfect control works when those who are controlled keep on believing the opposite. We can be ruled by demons when they come disguised as gods.

How to thrive as a fox in a world full of hedgehogs

This is my fourth monthly column for 3 Quarks Daily.

The Nobel Prize-winning animal behaviorist Konrad Lorenz once said about philosophers and scientists, “Philosophers are people who know less and less about more and more until they know nothing about everything. Scientists are people who know more and more about less and less until they know everything about nothing.” Lorenz had good reason to say this, since he worked in both science and philosophy. Along with his two co-recipients, he remains among the only zoologists to win the Nobel Prize for Physiology or Medicine. His major work was in investigating aggression in animals, work that was found to be strikingly applicable to human behavior. But Lorenz’s quote can also be said to be an indictment of both philosophy and science. Philosophers are the ultimate generalists; scientists are the ultimate specialists.

Specialization in science has been a logical outgrowth of its great progress over the last five centuries. At the beginning, most people who called themselves natural philosophers – the word “scientist” was only coined in the 19th century – were generalists and amateurs. The Royal Society, established in 1660, was a bastion of generalist amateurs. It gathered together a motley crew of brilliant tinkerers like Robert Boyle, Christopher Wren, Robert Hooke and Isaac Newton. These men would not recognize the hyperspecialized scientists of today; between them they were lawyers, architects, writers and philosophers. Today we would call them polymaths.

These polymaths helped lay the foundations of modern science. Their discoveries in mathematics, physics, chemistry, botany and physiology were unmatched. They cracked open the structure of cells, figured out the constitution of air and discovered the universal laws governing motion. Many of them were supported by substantial hereditary wealth, and most of them did all this on the side, while they were still working their day jobs and spending time with their families. The reasons these gentlemen (sadly, there were no ladies then) of the Royal Society could achieve significant scientific feats were manifold. Firstly, the fundamental laws of science still lay undiscovered, so the so-called “low-hanging fruit” of science was ripe and plentiful. Secondly, doing science was cheap then; all Newton needed to figure out the composition of light was a prism.

But thirdly and most importantly, these men saw science as a seamless whole. They did not distinguish much between physics, chemistry and biology, and even when they did, they did so for the sake of convenience. In fact their generalist view of the world was so all-encompassing that they didn’t even have a problem reconciling science and religion. For Newton, the universe was a great puzzle built by God, to be deciphered by the hand of man, and the rest of them held similar views.

Fast forward to the twentieth century, and scientific specialization was rife. You could not imagine Werner Heisenberg discovering genetic transmission in fruit flies, or Thomas Hunt Morgan discovering the uncertainty principle. Today science has become even more closeted into its own little boxes. There are particle astrophysicists and neutrino particle astrophysicists, cancer cell biologists, organometallic chemists and geomicrobiologists. The good gentlemen of the Royal Society would have been both fascinated and flummoxed by this hyperspecialization.

There is a reason why specialization became the order of the day from the seventeenth century onwards. Science simply became too vast, its tendrils reaching deep into specific topics and sub-topics. You simply could not flit from topic to topic if you were to understand something truly well and make important discoveries in the field. If you were a protein crystallographer, for instance, you simply had to spend all your time learning about instrumentation, protein production and software. If you were a string theorist, you simply had to learn pretty much all of modern physics and a good deal of modern mathematics. Studying any topic in such detail takes time and effort and leaves no time to investigate other fields. The rewards from such single-minded pursuit are usually substantial; satisfaction from the deep immersion that comes from expertise, the enthusiastic adulation of your peers, and potential honors like the Nobel Prize. There is little doubt that specialization has provided great dividends for its practitioners, both personal and scientific.

And yet there were always holdouts, men and women who carried on the tradition of their illustrious predecessors and left the door ajar to being generalists. Enrico Fermi and Hans Bethe were true generalists in physics, and Fermi went a step further by becoming the only scientist of the century who truly excelled in both theory and experiment; he would have made his fellow countryman Galileo proud. Then there was Linus Pauling who mastered and made seminal contributions to quantum chemistry, organic chemistry, biochemistry and medicine. John von Neumann was probably the ultimate polymath in the tradition of old natural philosophers, contributing massively to every field from pure mathematics and economics to computing and biology.

These polymaths not only kept the flame of the generalist alive, but they also anticipated science ironically coming full circle. The march of science from the seventeenth to the twentieth century might have been one toward increasing specialization, but in the last few years we have seen generalist science again blossoming. Why is this? Simply because the most important and fascinating scientific questions we face today require the meld of ideas from different fields. For instance: What is consciousness? What is life? How do you combat climate change? What is dark energy? These questions don’t just benefit from an interdisciplinary approach but they require it. Now, the way modern science approaches these questions is to bring together experts from various fields rather than relying on a single person who is an expert in all the fields. The Internet and global communication have made this kind of intellectual cross-pollination easier. 

And yet I would contend that there is a loss of insight when people keep excelling in their chosen fields and simply funnel the output of their efforts to other scientists without really understanding in what way it’s used. In my own field of drug discovery for instance, I have found that people who at least have a conceptual understanding of other areas are far more likely to contribute useful insights compared to those who simply do their job well and shove the product on to the next step of the pipeline.

I thus believe there is again a need for the kind of generalist who dotted the landscape of scientific research two hundred years ago. Fortunately, both the poet Archilochus and the philosopher Isaiah Berlin have given us the right vocabulary to describe generalists and specialists. The fox, wrote Archilochus, knows many things, while the hedgehog knows one big thing. Generalists are foxes; specialists are hedgehogs.

The history of science demonstrates that both foxes and hedgehogs are necessary for its progress. But history also shows that the two can alternate in importance. In addition, there are fields like chemistry which have always benefited more from foxes than hedgehogs. Generally speaking, foxes are more important when science is theory-rich and data-poor, while hedgehogs are more important when science is theory-poor and data-rich. The twentieth century was largely the century of hedgehogs, while the twenty-first is likely to be the century of foxes.

Being a fox is not very easy, though. Both personal and institutional forces in science have been built to support hedgehogs. You can mainly blame human resources personnel for contriving to make the playing field more suitable for these creatures. Consider the job descriptions in organizations. We want an “In vivo pharmacologist” or a “Soft condensed matter physicist”, the job listing will say; attached will be a very precise list of requirements – tiny boxes within the big box. This makes it easier for human resources to check all the boxes and accept or reject candidates efficiently. But it makes it much harder for foxes, who may not fit precise labels yet may have valuable insights to contribute, to make it past that rigid screening. Organizations thus end up losing fine, practical minds who pay the price for their eclectic tastes. Academic training is also geared toward producing hedgehogs rather than foxes, and funding pressures on professors to do very specific kinds of research do not make the matter any easier. In general, these institutions create an environment in which being a fox is actively discouraged and in which hedgehogs and their intellectual children and grandchildren flourish.

As noted above, however, this is a real problem at a time when many of the most important problems in science are essentially interdisciplinary and would greatly benefit from the presence of foxes. But since institutional strictures don’t encourage foxes to ply their trade, they also by definition do not teach the skills necessary to be a fox. Thus the cycle perpetuates; institutions discourage foxlike behavior so much that the hedgehogs don’t even know how to be productive foxes even if they want to, and they in turn further perpetuate hedgehogian principles.

Fortunately, foxes past and present have provided us with a blueprint of their behavior. The essence of foxes is generalist behavior, and there are some commonsense steps one can take to inculcate its habits. Based on both historical facts about generalists as well as, well, general principles, one can come up with a kind of checklist for being a productive fox in an urban forest full of hedgehogs. This checklist draws on the habits of successful foxes as well as recent findings from both the sciences and the humanities, findings that allow for flexible and universal thinking which can be applied not just in different fields but especially across their boundaries. Here are a few lessons that I have learnt or read about over the years. Because the lessons are general, they are not confined to scientific fields.

1. Acknowledge psychological biases.

One of the most striking findings over the last three decades or so, exemplified by the work of Amos Tversky, Daniel Kahneman, Paul Slovic and others, is the tendency of human beings to make the same kinds of mistakes when thinking about the world. Through their pioneering research, psychologists have found a whole list of biases like confirmation bias, anchoring effects and representativeness that dog our thinking. Recognizing these biases doesn’t just help connect ideas across various disciplines but also helps us step back and look at the big picture. And looking at the big picture is what foxes need to do all the time.

2. Learn about statistics.

A related field of inquiry is statistical thinking. In fact, many of the cognitive biases I just mentioned arise from the fundamental inability of human beings to think statistically. Basic statistical fallacies include extrapolating from small sample sizes, underestimating or ignoring error bars, putting undue emphasis on rare but dramatic events (think terrorist attacks), failing to think across long time periods, and ignoring baselines. In an age when the news cycle has shrunk from 24 hours to barely 24 seconds of our attention span, it’s very easy to extrapolate from random, momentary exposure to all kinds of facts, especially when the media’s very existence seems to depend on dramatizing or exaggerating them. In such cases, stepping back and asking oneself some basic statistical questions about every new fact can be extremely helpful. You don’t have to actually be able to calculate p values and confidence intervals, but you should know what they are.
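The small-sample fallacy in particular is easy to demonstrate for yourself. The following Python sketch (the 50% "true rate", the sample sizes and the number of simulated surveys are all arbitrary illustrative assumptions, not data from any study) runs many simulated surveys of a trait and shows how wildly the small ones can swing:

```python
import random

def estimate_spread(sample_size, true_rate=0.5, trials=1000, seed=42):
    """Simulate many surveys of a trait with a known 50% true rate and
    return the lowest and highest rate reported by any single survey."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        # One "survey": count how many of sample_size people show the trait
        hits = sum(rng.random() < true_rate for _ in range(sample_size))
        estimates.append(hits / sample_size)
    return min(estimates), max(estimates)

# Ten-person surveys can report almost anything about a 50% phenomenon...
print(estimate_spread(10))
# ...while thousand-person surveys cluster tightly around the truth.
print(estimate_spread(1000))
```

The spread of the ten-person estimates spans most of the range from 0 to 1, while the thousand-person estimates stay within a few percentage points of the true value, which is exactly why a single dramatic anecdote should carry so little weight.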

3. Make back-of-the-envelope calculations.

When the first atomic bomb went off in New Mexico in July 1945, Enrico Fermi famously threw a few pieces of paper into the air and, based on where the shockwave scattered them, came up with an accurate estimate of the bomb’s yield. Fermi was a master of the approximate calculation, the rough, order-of-magnitude estimate that would give the right ballpark answer. It’s illuminating how that kind of estimation can focus our thinking, no matter what field we may be dealing with. Whenever we encounter a fact that would benefit from estimating a number, it’s worth applying Fermi’s method to find a rough answer. In most cases it’s good enough.
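To make the method concrete, here is the classic "piano tuners in a big city" estimate written out in Python. Every input below is a deliberately round, made-up assumption; the whole point of a Fermi estimate is that even crude inputs land you within an order of magnitude of the truth:

```python
# Fermi estimate: how many piano tuners might work in a city of ~3 million?
# Every number below is an assumed round figure, not measured data.
population = 3_000_000             # people in the city
people_per_household = 2           # rough average household size
piano_ownership = 1 / 20           # assume 1 in 20 households owns a piano
tunings_per_piano_per_year = 1     # each piano tuned roughly once a year
tunings_per_tuner_per_year = 1000  # ~4 tunings a day, ~250 working days

pianos = (population / people_per_household) * piano_ownership
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(round(tuners))  # prints 75
```

The answer, 75, tells you the city supports tens of tuners rather than thousands, and that order-of-magnitude conclusion survives even if individual assumptions are off by a factor of two or three.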

4. Know your strengths and weaknesses.

As the great physicist Hans Bethe once sagely advised, “Always work on problems for which you possess an undue advantage.” We are always told that we should work on our weaknesses, and this is true to some extent. But it’s far more important to match the problems we work on with our particular strengths, whether those lie in calculation, interdisciplinary thinking or management. Leveraging your strengths is the best way to avoid getting bogged down in one place and to nimbly jump across several problems like a fox. Hedgehogs often spend their time not just honing their strengths but working on their weaknesses; this is an admirable trait, but it’s not always optimal for working across disciplinary boundaries.

5. Learn to think at the emergent level that’s most useful for every field.

If you have worked in various disciplines long enough, you start realizing that every discipline has its own zeitgeist, its own way of doing things. It’s not just about learning the technical tools and the facts, it’s about knowing how to pitch your knowledge at a level that’s unique and optimal for that field. For instance, a chemist thinks in terms of molecules, a physicist thinks in terms of atoms and equations, an economist thinks in terms of rational individuals and a biologist thinks in terms of genes or cells. That does not mean a chemist cannot think in terms of equations or atoms, but that is not the most useful level of thinking to apply to chemistry. This matching of a particular brand of thinking to a particular field is an example of emergent thinking. The opposite of emergent thinking is reductionist thinking which breaks down everything into its constituent parts. One of the discoveries of science in the last century is the breakdown of strict reductionism, and if one wants to be a productive fox, he or she needs to learn the right level of emergent thinking that applies to a field.

6. Read widely outside your field, but read just enough.

If you want to become a generalist fox, this is an obvious suggestion, but because it’s obvious it needs to be reiterated. Gaining knowledge of multiple fields entails knowing something about those fields, which entails reading about them. But it’s easy to get bogged down in detail and to try to become an expert in every field. This goal is neither practical nor the correct one. The goal instead is to gain enough knowledge to be useful, to be able to distill general principles, to connect ideas from your field to others. Better still, talk to people. Ask experts what they think are the most important facts and ideas, keeping in mind that experts have their own biases and can reach different conclusions.

A great example of someone who learnt enough about a complementary field to not just be useful but very good at his job was Robert Oppenheimer. Oppenheimer was a dyed-in-the-wool theorist, and at first had little knowledge of experiment. But as one of his colleagues said,

“He began to observe, not manipulate. He learned to see the apparatus and to get a feeling of its experimental limitations. He grasped the underlying physics and had the best memory I know of. He could always see how far any particular experiment would go. When you couldn’t carry it any further, you could count on him to understand and to be thinking about the next thing you might want to try.”

Oppenheimer thus clearly learnt enough about experimental physics to know the strengths and limitations of the field, imparting another valuable piece of advice: know the strengths and limitations of every field at the very least, so you know whether the connections you are forming are within its purview. In other words, know the domain of applicability of every field so that you can form reasonable connections.

7. Learn from your mistakes, and from others.

If you are a fox trying to jump across various disciplinary boundaries, it goes without saying that you might occasionally stumble. Because you lack expertise in many fields you are likely to make mistakes. This is entirely understandable, but what’s most important is to acknowledge those mistakes and learn from them. In fact, making mistakes is often the best shortcut to quick learning (“Fail fast”, as they say in the tech industry). Learning from our mistakes is of course important for all of us, but especially so for foxes who are often intrinsically dealing with incomplete information. Make mistakes, revise your worldview, make new mistakes. Rinse and repeat. That should be your philosophy.

Parallel to learning from your mistakes is learning from others. During its journey a fox will meet many interesting people from different fields who know different facts and possess different mental models of the world. Foxlike behavior often entails flexibly using these different mental models to deal with problems in different fields, so it’s key to remain a lifelong learner of these patterns of thought. Fortunately the Internet has opened up a vast new opportunity for networking, but we don’t always take advantage of this opportunity in serious, meaningful ways. Everyone will benefit from such deliberate, meaningful connections, but foxes in particular will reap rewards.

8. “The opposite of a big truth is also a big truth” – Niels Bohr

The world is almost always gray. Foxes must imbibe this fact as deeply as Niels Bohr imbibed quantum mechanics. Especially when you are encountering and trying to integrate disparate ideas from different fields, it’s very likely that some of them will seem contradictory. But often the contradiction is in our minds, and there’s actually a way to reconcile those ideas (as a general rule, only in the Platonic world of mathematics can contradictory ideas not be tolerated at all). The fact is that most ideas from the real world are fuzzy and ill-defined, so it’s no surprise that they will occasionally run into each other. Not just ideas but patterns of thinking may seem contradictory; for example, what a biologist sees as the most important feature of a particular system may not be the most important feature for a physicist (emergence again). In most cases the truth lies somewhere in between, but in others it may lie wholly on one side. As they say, being able to hold opposite ideas in your mind at the same time is a mark of intelligence. If you are a fox, prove it.

These are but a few of the potential avenues that you can explore for being a generalist fox. But the most important principle that foxes can benefit from is, as the name indicates, general. When confronted by an idea, a system or a problem, learn to ask the most general questions about it, questions that flow across disciplines. A few of these questions in science are: What’s the throughput? How robust is the system? What are the assumptions behind it? What is the problem that we are trying to solve? What are its strengths and limitations? What kinds of biases are baked into the system and our thinking about it?

Keep on asking these questions, make a note of the answers and you will realize that they can be applied across domains. At the same time, remember that as a fox you will always work in tandem with specialized hedgehogs. Foxes will be needed to explore the uncharted territory of new areas of science and technology, hedgehogs will be needed to probe its corners and reveal hidden jewels. The jewels will further reflect light that will illuminate additional playgrounds for the foxes to frolic in. Together the two creatures will make a difference.

The bomb ended World War 2: And other myths about nuclear weapons

Sixty-seven years ago on this day, a bomb fueled by the nuclear fission of uranium leveled Hiroshima in about six seconds. Since then, two foundational beliefs have colored our views of nuclear weapons: one, that they were essential, or at least very significant, for ending the war; and two, that they have been and will continue to be linchpins of deterrence. These beliefs have, in one way or another, guided all our thinking about these mythic creations. Historian Ward Wilson of the Monterey Institute of International Studies wants to demolish these and other myths in a new book titled "Five Myths About Nuclear Weapons", and I have seen few volumes which deliver their message so effectively in so few words. Below are Wilson's thoughts about the two dominant nuclear myths, interspersed with a few of my own.
"Nuclear weapons were paramount in ending World War 2".
This is where it all begins. And the post facto narrative certainly seems to bolster the belief: brilliant scientists worked on a fearsome weapon in a race against the Nazis, and when the Nazis were defeated, handed it over to world leaders who used it to bring a swift end to a most horrible conflict. Psychologically it fits into a satisfying and noble narrative. Hiroshima and Nagasaki have become so completely ingrained in our minds as symbols of the power of the bomb that we scarcely ask whether they really played the roles ascribed to them over the last half century. In one sense the atomic bombings of Japan have dictated all our subsequent beliefs about weapons of mass destruction. But troubling evidence has mounted over the last half century, and it is now consequential enough to deal a major blow to this thinking. Contrary to popular belief, this is not "revisionist" history; by now the files in American, Soviet, Japanese and British archives have been declassified to an extent that allows us to piece together the cold facts and reveal exactly what impact the atomic bombings had on the Japanese decision to end the war. They tell a story very different from the standard narrative.
Wilson draws on detailed minutes from the meetings of the Japanese Imperial Staff to make two things clear: first, that the bomb did not have a disproportionate influence on Japanese leaders' deliberations and psyche, and second, that what did have a very significant impact on Japanese policy was the invasion of Manchuria and Sakhalin by the Soviet Union. Wilson reproduces the reactions of key Japanese leaders after the bombing of Hiroshima on August 6. You would expect them to register shock and awe, but we see little of this. No major meeting was summoned after the event, and most leaders displayed mild consternation but little of the terror or extreme emotion that you might expect from such a world-shattering event. What does emerge from the record is that the same men were extremely rattled after the Soviets declared war on August 8.
The reason was that before Hiroshima the Japanese were contemplating two strategies for surrender, one political and the other military. The military strategy involved throwing the kitchen sink against the Americans when they invaded the southern part of the Japanese homeland in the coming months and causing them so many losses that their victory would be a Pyrrhic one at best; the Japanese could then seek a surrender on their own terms. The political strategy involved negotiating with the Allies through Moscow. Even after Hiroshima, both these options remained open, since the Japanese army and Soviet relations were still intact. But with the Soviet invasion in the north, concentrating troops against the Allied invasion in the south and seeking favorable surrender terms through the Soviets suddenly became impossibilities. This double blow convinced the Japanese that they must now confront unconditional surrender. When the Emperor finally implored his people to surrender and cited a "new and most cruel bomb" as the reason, it was likely a way to save face, so that the Japanese could blame the bomb rather than their own instigation of the war at Pearl Harbor.
Why were the Japanese not affected by the bombing of Hiroshima? Because on the ground the bombing looked no different from the relentless pounding that dozens of major Japanese cities had received at the hands of Curtis LeMay's B-29s during the past six months. The infamous firebombing of Tokyo in March 1945 had killed even more civilians than the atomic bomb. As Wilson details, no fewer than 68 cities had been subjected to intense attack, and aerial photos of these cities are almost indistinguishable from those of Hiroshima. Thus for the Japanese, Hiroshima was one more casualty in a long list. It did little either to shock them or to weaken their resolve for continuing the war, especially since the bombing was too recent for people to truly take stock of what had happened.
Unfortunately the perception of the bombing of Hiroshima also fed into the general perception regarding strategic bombing, itself a myth largely perpetuated by the air forces of the U.S. and European nations which wanted to convince leaders that they could win wars through air attack alone. The conventional wisdom since before World War 2 was that strategic bombing could deal a deadly blow to the enemy's morale and strategic resources. This wisdom was perpetuated in the face of much evidence to the contrary. The bombings of London, Hamburg, Dresden and Tokyo had little effect on morale; in fact, postwar analysis indicated that if anything, they made the survivors more determined and resilient, and had only a minor impact on war production capability. The later follies of Vietnam, Cambodia and Laos also proved the futility of strategic bombing in ending wars. And the same was true of Hiroshima. The main point, as Wilson makes clear, is that you cannot win a war by destroying cities, because ultimately it's the enemy's armies and military resources that fight a war. Destroying cities helps, especially when the means of war production are grounded in civilian activity, but it is almost never decisive.
One instructive example which Wilson provides is the burning of Atlanta and then Richmond during the American Civil War, which did little to crush the South's fighting ability or spirit. Another example is Napoleon's march into Russia; even after the burning of Moscow and the destruction of scores of Russian cities, Napoleon was defeated because ultimately his army was defeated. These facts were conveniently ignored in the face of beliefs about bombing whose culmination seemed to be the destruction of Hiroshima. These beliefs were largely responsible for the arms race and the development of strategic hydrogen bombs which were again expressly designed to bring about the annihilation of cities. But all this development did was raise the risk of accidental devastation. If we realize that the atomic bombing of Hiroshima and the general destruction of cities played little role in ending World War 2, almost everything that we think we know about the power of nuclear weapons is called into question.
"Nuclear weapons are essential for deterrence".
Conventional thinking continues to hold that the Cold War stayed cold because of nuclear weapons. This is true to some extent. But what it fails to register is how many times the war threatened to turn hot. Declassified documents now provide ample evidence of near-misses that could easily have led to nuclear war. The Cuban Missile Crisis is only the most well-known example of how destabilizing nuclear weapons can make the status quo.
The missile crisis is in fact a fine example of the gaps in conventional thinking about deterrence. Kennedy's decision to blockade Cuba is often touted as an example of mild escalation, and the resolution of the crisis itself is often held up as a shining example of how tough diplomacy can forestall war. But Wilson takes the opposite tack; he notes that the Soviets had made it clear that any action against Cuba would provoke war. Given the nature of the conflict, almost everybody understood that war in this case could mean nuclear war. Yet Kennedy chose to blockade Cuba, so deterrence does not seem to have worked on him. The consequent chain of events brought the world closer to nuclear devastation than we think. As we now know, there were more than 150 nuclear weapons in Cuba which could have carpeted most of the eastern and midwestern United States and led to the deaths of tens of millions of Americans. A subsequent second strike would have caused even more devastation in the Soviet Union, not to mention in neighboring countries. In addition there were several relatively minor events which were close calls. These included the dropping of depth charges by the US Navy on a submerged Soviet submarine, which almost caused the submarine's commander to launch a nuclear torpedo; it was an unsung hero of the crisis named Vasili Arkhipov who prevented the launch. Other examples cited by Wilson include the straying of an American reconnaissance flight into Soviet airspace and the consequent scrambling of American and Soviet fighter aircraft.
One could add several other examples to the list of close calls; a later one would be the Able Archer exercise of 1983 that caused the Soviets deep anxiety and borderline paranoia. In addition, as documented in Eric Schlosser's book Command and Control, there were dozens of close calls - swiftly classified, of course - in the form of nuclear accidents which could have led to catastrophic loss of life. The fact is that deterrence is always touted as the ultimate counter-argument to the risks of nuclear warfare, but there are scores of examples where political leaders decided to escalate and provoke the other side in spite of deterrence. From the other side of the fence it looks like deterrence ultimately worked, but often by a very slim margin. Add to this the fact that the vast networks of nuclear command and control centers and protocols developed by nuclear nations are manned by fallible human beings; they are examples of complex systems subject to so-called "normal accidents". There is also no dearth of examples during the Cold War where lowly technicians and army officers could have launched World War 3 because of miscalculation, misunderstanding or paranoia. The fact is that these weapons of mass destruction have a life of their own; they are beyond the ability of human beings to completely harness, because human weaknesses and flaws also have lives of their own.
The future
Nuclear weapons are often compared to a white elephant. A better comparison might be to a giant T. rex; one could possibly imagine a use for such a creature in extreme situations, but by and large it only serves as an unduly sensitive and enormously destructive beast whose powers are waiting to be unleashed onto the world. Having the beast around is just not worth its supposed benefits anymore, especially when most of these benefits are only perceived and have been extrapolated from a sample size of one.
Yet we continue to nurture this creature. Much progress has been made in reducing the nuclear arsenals of the two Cold War superpowers, but others have picked up the slack and continued to pursue the image and status - and not actual fighting capability - that they think nuclear weapons confer on them. The US currently has about 5000 weapons including 1700 strategic ones, many of which are still on hair-trigger alert. This is still overkill by a huge margin. A hundred or so, especially on submarines, would be more than sufficient for deterrence. More importantly, the real elephant in the room is the spending on maintaining and upgrading the US nuclear arsenal; several estimates have put this spending at about $50 billion. In fact the US is now spending more on nukes than it did during the Cold War. In a period when the economy is still not exactly booming and basic services like education and healthcare are underfunded, this kind of spending on what is essentially a relic of the Cold War should be unacceptable. In addition, during the Bush administration, renewed proposals for "precision" munitions like the so-called Robust Nuclear Earth Penetrator (RNEP) threatened to lower the bar for the introduction of tactical nuclear weapons; detailed analysis showed that the fallout and other risks from such weapons far outweigh their modest usefulness. The current administration has also shown a dangerously indifferent, if not downright irresponsible, attitude toward nuclear weapons.
More importantly, experts have pointed out since the 1980s that technology and computational capabilities have improved to an extent that allows conventional precision weapons to do almost all the jobs that were once imagined for nuclear weapons; the US especially now has enough conventional firepower to protect itself and to overpower almost any nuclear-armed state with massive retaliation. It's worth noting the oft-quoted fact in this regard that the United States spends more on conventional weapons every year than the next several countries combined. The fact is that nuclear weapons as an instrument of military policy are now almost completely outdated, even from a technical standpoint. But until zealous and paranoid politicians in Congress who are still living in the Cold War era are reined in, a significant reduction in spending on the nuclear arsenal doesn't seem to be on the horizon.
Fortunately there are renewed calls for the elimination of these outdated weapons. The risk of possible use of nuclear weapons by terrorists calls for completely new strategies and does nothing to justify the growth and preservation of existing strategic arsenals by new and aspiring nuclear states. The most high-profile recent development has been the introduction of a bipartisan proposal by veteran policymakers and nuclear weapons experts Henry Kissinger, William Perry, Sam Nunn, George Shultz and Sidney Drell, who have called for an abolition of these weapons of war. Some would consider this plan a pipe dream, but nothing will be accomplished if we don't fundamentally alter our thinking about nuclear war. There are many practical proposals that would thwart the spread of both weapons and material, including careful accounting of reactor fuel by international alliances, securing of all uranium and plutonium stocks, and the blending down of weapons-grade uranium into reactor-grade material, a visionary policy started in the 90s through the Megatons to Megawatts program. For me, one of the most poignant and fascinating facts about nuclear history is that material from Soviet warheads once aimed at American cities now supplies about half of all American nuclear electricity.
Ultimately, as Wilson and others have pointed out, nuclear weapons will not go away unless we declare them to be pariahs. No number of technical remedies will cause nations to abandon them until we make these destructive instruments fundamentally unappealing: seeing them at the very least as outdated dinosaurs whose technological usefulness is completely gone, and ultimately as immoral and politically useless tools whose possession taints their owner and results in international censure and disapproval. This is another myth that Wilson talks about, the myth that nuclear weapons are here to stay because they "cannot be uninvented". But as Wilson cogently argues, technologies don't go away because they are uninvented; they go away simply because they stop being useful. An analogy would be with cigarettes, at one time seen as status symbols and social lubricants whose risks have now turned them into nuisances at best. This strategy has worked in the past and it should work in the future. We can only make progress when a technology becomes unattractive, both from a purely technical as well as a moral and political standpoint. But key to this is a realistic appraisal of the roles that the technology played during its conception. In the case of nuclear weapons that mythic appraisal was created by Hiroshima. And it's time we destroyed that myth.

A Manhattan Project for AI?

Neuroscientist and AI researcher Gary Marcus has an op-ed in the NYT in which he bemoans the lack of international collaboration in AI, a limitation that Marcus thinks is significantly hampering progress in the field. He says that AI researchers should consider a global effort akin to CERN: a massively funded, wide-ranging project to solve specific problems in AI that would benefit from the expertise of hundreds of independent researchers. This hivemind effort could potentially clear the AI pipeline of several clogs which have held back progress.

On the face of it this is not a bad idea. Marcus's opinion is that both private and public research has some significant limitations which a meld of the two could potentially overcome.

"Academic labs are too small. Take the development of automated machine reading, which is a key to building any truly intelligent system. Too many separate components are needed for any one lab to tackle the problem. A full solution will incorporate advances in natural language processing (e.g., parsing sentences into words and phrases), knowledge representation (e.g., integrating the content of sentences with other sources of knowledge) and inference (reconstructing what is implied but not written). Each of those problems represents a lifetime of work for any single university lab.

Corporate labs like those of Google and Facebook have the resources to tackle big questions, but in a world of quarterly reports and bottom lines, they tend to concentrate on narrow problems like optimizing advertisement placement or automatically screening videos for offensive content. There is nothing wrong with such research, but it is unlikely to lead to major breakthroughs. Even Google Translate, which pulls off the neat trick of approximating translations by statistically associating sentences across languages, doesn’t understand a word of what it is translating.

I look with envy at my peers in high-energy physics, and in particular at CERN, the European Organization for Nuclear Research, a huge, international collaboration, with thousands of scientists and billions of dollars of funding. They pursue ambitious, tightly defined projects (like using the Large Hadron Collider to discover the Higgs boson) and share their results with the world, rather than restricting them to a single country or corporation. Even the largest “open” efforts at A.I., like OpenAI, which has about 50 staff members and is sponsored in part by Elon Musk, is tiny by comparison.

An international A.I. mission focused on teaching machines to read could genuinely change the world for the better — the more so if it made A.I. a public good, rather than the property of a privileged few."

This is a good point. For all its commitment to blue sky research, Google is not exactly the Bell Labs of 2017, and except for highly targeted research like that done at Verily and Calico, it's still committed to work that has more or less immediate applications to its flagship products. And as Marcus says, academic labs suffer from limits to capacity that keep them from working on the big picture.

A CERN for AI wouldn't be a bad idea, but it would differ from the real CERN in some key respects. Most notably, unlike discovering the Higgs boson, AI has immense potential social, economic and political ramifications. Thus, keeping the research at a CERN-like facility open and free for all would be a steep challenge, with governments and individuals constantly vying for a piece of the pie. In addition, there would be important IP issues if corporations were funding this endeavor. And even CERN had to contend with paranoid fears of mini black holes, so one can only imagine how much the more realistic (albeit more modest) fears of AI would be blown out of proportion.

As interesting as a CERN-like AI facility is, I think another metaphor for a global AI project would be the Manhattan Project. Now let me be the first to say that I consider most comparisons of Big Science projects to the Manhattan Project to be glib and ill-considered; comparing almost any peacetime project with necessarily limited resources to a wartime project that benefited from a virtually unlimited supply of resources brought to bear on it with great urgency will be a fraught exercise. And yet I think the Manhattan Project supplies at least one particular ingredient for successful AI research that Marcus does not really talk about. It's the essential interdisciplinary nature of tackling big problems like nuclear weapons or artificial intelligence.

A lot of the AI research taking place today does not involve scientists from all disciplines working closely together in an open, free-for-all environment. That is not to say that individual scientists have not collaborated in the field, or that fields like neuroscience and biology have not given computer scientists a lot to think about. But a practical arrangement in which generally smart people from a variety of fields work intensely on a few well-defined AI problems still seems to be missing.

The main reason why this kind of interdisciplinary work may be key to cracking AI is very simple: in a very general sense, there are no experts in the field. It's too new for anyone to really claim expertise. The situation was very similar in the Manhattan Project. While physicists are most associated with the atomic bomb, without specialists in chemistry, metallurgy, ordnance, engineering and electronics the bomb would have been impossible to create. More importantly, none of these people were experts in bomb-making, and they had to make key innovations on the fly. Take implosion, perhaps the most important and most novel scientific contribution to emerge from the project: Seth Neddermeyer, who had worked on cosmic rays before the war, came up with the initial idea of implosion that made the Nagasaki bomb possible. But Neddermeyer's idea would not have taken practical shape had it not been for the under-appreciated British physicist James Tuck, who came up with the ingenious design of arranging explosives with different detonation velocities around the plutonium core so that they would focus the shockwave inward toward the core, similar to how a lens focuses light. And Tuck's design would not have seen the light of day had the project not brought in an expert in the chemistry of explosives - George Kistiakowsky.

These people were experts in their well-defined fields of science, but none of them were experts in nuclear weapons design, and they were making it up as they went along. But they were generally smart people, capable of thinking widely outside their immediate spheres of expertise and of producing at least parts of ideas which they could then hand over, in a sort of relay, to others holding different parts.

Similarly, nobody in the field of AI is an expert, and just like nuclear weapons the field is still new enough and wide enough for all kinds of generally smart people to make contributions to it. So along with a global effort, we should perhaps have a kind of Manhattan Project of AI that brings together computer scientists, neuroscientists, physicists, chemists, mathematicians and biologists, at a minimum, to dwell on the field's outstanding problems. These people don't need to be experts or know much about AI at all, and they don't even need to know how to implement every idea they have, but they do need to be idea generators; they need to be able to bounce ideas off of each other, pursue odd leads and loose ends, and try to see the big picture. The Manhattan Project worked not because of experts pursuing deep ideas but because of a tight deadline and a concentrated effort by smart scientists who were encouraged to think outside the box as much as possible. Except for the constraints of wartime urgency, it should not be hard to replicate that effort, at least in its essentials.

Why a "superhuman AI" won't destroy humanity (and solve drug development)

A significant part of the confusion about AI these days arises from the term "AI" being used with rampant abandon and hype to describe everything from self-driving cars to the chip inside your phone to elementary machine learning applications that are glorified linear or multiple regression models. It's driving me nuts. The media of course are the biggest culprits in this regard, and they really need to come up with some "rules" for writing about the topic. Once you start distinguishing between real, potentially groundbreaking advances, which are few and far between, and incremental, interesting advances, which constitute the vast majority of "AI" applications, you can put the topic in perspective.

That has not stopped people like Elon Musk from projecting their doom-and-gloom apocalyptic fears onto the AI landscape. Musk is undoubtedly a very intelligent man, but he's not an expert on AI, so his words need to be taken with a grain of salt. I would be far more interested in hearing from Kevin Kelly, a superb thinker and writer on technology who has been writing about AI and related topics for decades. Kelly, a former editor of Wired magazine, launched the latest salvo in the AI wars a few weeks ago with a very insightful piece in Wired on four reasons why he believes fears of an AI that will "take over humanity" are overblown. He casts these reasons in the form of misconceptions about AI which he then proceeds to question and dismantle. The whole thing is eminently worth reading.

The first and second misconceptions: Intelligence is a single dimension and is "general purpose".

This is a central point that often gets completely lost when people talk about AI. Most applications of machine intelligence that we have so far are very specific, but when people like Musk talk about AI they are talking about some kind of overarching single intelligence that's good at everything. The media almost always mix up multiple applications of AI in the same sentence, as in "AI did X, so imagine what it would be like when it could do Y"; lost is the realization that X and Y could refer to very different dimensions of intelligence, or at least significantly different ones. As Kelly succinctly puts it, "Intelligence is a combinatorial continuum. Multiple nodes, each node a continuum, create complexes of high diversity in high dimensions." Even humans are not good at optimizing along every single one of these dimensions, so it's unrealistic to imagine that AI will be. In other words, intelligence is horizontal, not vertical. The more realistic vision of AI is thus what it already has been: a form of augmented, not artificial, intelligence that helps humans with specific tasks, not some kind of omniscient God-like entity that's good at everything. Some tasks that humans do will indeed be taken over by machines, but in the general scheme of things humans and machines will have to work together to solve the tough problems. Which brings us to Kelly's third misconception.

The third misconception: A super intelligence can solve our major problems.

As a scientist working in drug development, I find this fallacy my favorite. Just the other day I was discussing with a colleague how the same kind of raw intelligence that produces youthful prodigies in physics and math fails to do so in highly applied fields like drug discovery: when was the last time you heard of a 25-year-old inventing a new drug mainly by thinking about it? That's why institutional knowledge and experience count in drug discovery, and that's why laying off old-timers is an especially bad idea in the drug development field.

In the case of drug discovery the reason is clear: it's pretty much impossible to figure out what a drug does to a complex, emergent biological system through pure thought. You have to do the hard experimental work, you have to find the right assays and animal models, you have to know what the right phenotype is, you have to do target validation using multiple techniques, and even after all this, when you put your drug into human beings you go to your favorite church and pray to your favorite God. None of this can be solved by just thinking about it, no matter what your IQ.

Kelly calls this belief that AI can solve major problems just by thinking about it "thinkism": "the fallacy that future levels of progress are only hindered by a lack of thinking power, or intelligence." However, 

"Thinking (intelligence) is only part of science; maybe even a small part. As one example, we don’t have enough proper data to come close to solving the death problem. In the case of working with living organisms, most of these experiments take calendar time. The slow metabolism of a cell cannot be sped up. They take years, or months, or at least days, to get results. If we want to know what happens to subatomic particles, we can’t just think about them. We have to build very large, very complex, very tricky physical structures to find out. Even if the smartest physicists were 1,000 times smarter than they are now, without a Collider, they will know nothing new."

To which I may also add that no amount of Big Data will translate to the correct data.

Kelly also has some useful words to keep in mind when it comes to computer simulations, and this is another caveat for drug discovery scientists:

"There is no doubt that a super AI can accelerate the process of science. We can make computer simulations of atoms or cells and we can keep speeding them up by many factors, but two issues limit the usefulness of simulations in obtaining instant progress. First, simulations and models can only be faster than their subjects because they leave something out. That is the nature of a model or simulation. Also worth noting: The testing, vetting and proving of those models also has to take place in calendar time to match the rate of their subjects. The testing of ground truth can’t be sped up."

These are all very relevant points. Most molecular simulations for instance are fast not just because of better computing power but because they are intrinsically leaving out parts of reality - and sometimes significant parts (that's the very definition of a 'model' in fact). And it's absolutely true that even sound models have to be tested through often tedious experiments. MD simulations are good examples. You can run an MD simulation very long and hope to see all kinds of interesting fluctuations emerging on your computer screen, but the only way to know whether these fluctuations (a loop moving here, a pocket transiently opening up there) correspond to something real is by doing experiments - mutagenesis, NMR, gene editing etc. - which are expensive and time-consuming. Many of those fluctuations from the simulation may be irrelevant and may lead you down rabbit holes. There's no getting around this bottleneck in the near future even if MD simulations were to be sped up another thousand fold. The problem is not one of speed, it's one of ignorance and complex reality.

The fourth misconception: Intelligence can be infinite.

Firstly, what does "infinite" intelligence even mean? Infinite computing power? The capacity to crunch an infinite amount of data? Growing infinitely along an infinite number of dimensions? Being able to solve every single one of our problems ranging from nuclear war to marriage compatibility? None of these tasks seems even remotely within reach in the near or far future. There is little doubt that AI will keep on crunching more data and keep on benefiting from more computing power, but its ultimate power will be circumscribed by the laws of emergent physical and biological systems that are constrained by the hard work of experiment and various levels of understanding.

AI will continue to make significant advances. It will continue to "take over" specific sectors of industry and human effort, mostly with little fanfare. The mass of workers it will continue to quietly displace will pose important social and political problems. But from a general standpoint, AI is unlikely to take over humanity, let alone destroy it. Instead it will do what pretty much every technological innovation in history has done: keep on making solid, incremental advances that will both improve our lives and create new problems for us to solve.

Want to know if you are depressed? Don't ask Siri just yet.

"Tell me more about your baseline calibration, Siri"
There's no dearth of articles claiming that the "wearables revolution" is around the corner and that we aren't far from the day when every aspect of our health is recorded every second, analyzed and sent to the doctor for rapid diagnosis and treatment. That's why it was especially interesting for me to read this new analysis from computer scientists at Berkeley and Penn that should temper the soaring enthusiasm that pervades pretty much all things "AI" these days.

The authors are asking a very simple question in the context of machine learning (ML) algorithms that claim to predict your mood - and by proxy mental health issues like depression - based on GPS and other data. What's this simple question? It's one about baselines. When any computer algorithm makes a prediction, one of the key questions is how much better this prediction is compared to some baseline. Another name for baselines is "null models". Yet another is "controls", although controls themselves can be artificially inflated. 

In this case the baseline can be of two kinds: personal baselines (self-reported individual moods) or population baselines (the mood of a population). What the study finds is not pretty. The authors analyze a variety of literature on mood-reporting ML algorithms and find that in about 77% of cases the studies use meaningless baselines that overestimate the performance of the ML models with respect to predicting mood swings. The reason is that the baselines used in most studies are population baselines rather than the more relevant personal baselines. The population baseline assumes a constant average state for all individuals, while the individual baseline assumes an average state for every individual but different states between individuals.

Clearly, doing better than the population baseline is not very useful for tracking individual mood changes, especially since the authors find greater errors for population baselines than for personal ones; these larger errors can simply obscure model performance. The paper also considers two datasets and tries to figure out how to improve model performance on them using a metric the authors call "user lift", which measures how much better the model is than the baseline.
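To make the idea concrete, here is one plausible formalization of "user lift" — a hypothetical sketch, not the authors' exact definition: the average, taken across users, of how much the model reduces error relative to each user's personal baseline.

```python
import numpy as np

def user_lift(y_true, y_pred):
    """Average per-user improvement in mean absolute error over that
    user's personal (mean-mood) baseline. Positive values mean the model
    beats the personal baseline; zero or negative means it adds nothing
    beyond knowing each user's average mood."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    personal = y_true.mean(axis=1, keepdims=True)      # per-user baseline
    base_err = np.abs(y_true - personal).mean(axis=1)  # baseline error, per user
    model_err = np.abs(y_true - y_pred).mean(axis=1)   # model error, per user
    return float((base_err - model_err).mean())

# Demo on toy data: users with different typical moods.
rng = np.random.default_rng(1)
user_means = rng.uniform(2, 8, size=(5, 1))
moods = user_means + rng.normal(0, 1, size=(5, 30))

pop_guess = np.full_like(moods, moods.mean())  # population-baseline "model"
print(f"lift of population guess: {user_lift(moods, pop_guess):+.2f}")
print(f"lift of a perfect model:  {user_lift(moods, moods):+.2f}")
```

A perfect predictor always has positive lift, while the population guess typically scores at or below zero when users differ from one another — exactly the failure mode the paper highlights.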

I will let the abstract speak for itself:

"A new trend in medicine is the use of algorithms to analyze big datasets, e.g. using everything your phone measures about you for diagnostics or monitoring. However, these algorithms are commonly compared against weak baselines, which may contribute to excessive optimism. To assess how well an algorithm works, scientists typically ask how well its output correlates with medically assigned scores. Here we perform a meta-analysis to quantify how the literature evaluates their algorithms for monitoring mental wellbeing. We find that the bulk of the literature (∼77%) uses meaningless comparisons that ignore patient baseline state. For example, having an algorithm that uses phone data to diagnose mood disorders would be useful. However, it is possible to over 80% of the variance of some mood measures in the population by simply guessing that each patient has their own average mood - the patient-specific baseline. Thus, an algorithm that just predicts that our mood is like it usually is can explain the majority of variance, but is, obviously, entirely useless. Comparing to the wrong (population) baseline has a massive effect on the perceived quality of algorithms and produces baseless optimism in the field. To solve this problem we propose “user lift” that reduces these systematic errors in the evaluation of personalized medical monitoring."

That statement about being able to explain over 80% of the variance simply by guessing an average mood for every individual should stand out. It means that simple informed guesswork based on an average "feeling" is as good as the model yet eminently useless, since it predicts no variability and is therefore of little practical utility.
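You can reproduce the flavor of that 80% figure with simulated data (the numbers below are invented for illustration): when people differ a lot from one another but each person's mood is fairly stable, the "just guess everyone's own average" predictor explains most of the pooled variance while never anticipating a single mood change.

```python
import numpy as np

rng = np.random.default_rng(2)

# 100 simulated users whose typical moods differ a lot (uniform 1-9)
# but whose day-to-day fluctuations are modest (sd 0.7).
user_means = rng.uniform(1, 9, size=(100, 1))
moods = user_means + rng.normal(0, 0.7, size=(100, 60))

# "Model" that just predicts each user's own average mood, every day.
pred = np.broadcast_to(moods.mean(axis=1, keepdims=True), moods.shape)

# Variance explained (R^2) over the pooled population.
ss_res = ((moods - pred) ** 2).sum()
ss_tot = ((moods - moods.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
print(f"pooled R^2 of the always-guess-the-average model: {r2:.2f}")
```

With this much between-user spread the pooled R² comfortably exceeds 0.8, even though the predictor, by construction, says nothing about when anyone's mood actually changes.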

I find this paper important because it should put a dent in what is often inflated enthusiasm about wearables these days. It also illustrates the dangers of what is called "technological solutionism": simply because you can strap a watch or other device onto your body to measure various parameters, and simply because you have enough computing power to analyze the resulting stream of data, does not mean the results will be significant. You record because you can, you analyze because you can, you conclude because you can. What the authors find about tracking moods can apply to tracking other important variables like blood pressure and sleep duration. Every time, the questions must be: am I using the right baseline for comparison? And am I doing better than that baseline? Hopefully the authors can apply their analysis to larger and more diverse datasets and find out whether similar facts hold for other such metrics.

I also found this study interesting because it reminds me of a whole lot of valid criticism in the field of molecular modeling that we have seen over the last few years. One of the most important questions there is about null models. Whenever your latest and greatest FEP/MD/informatics/docking study is claimed to have done exceptionally well on a dataset, the first question should be: is it better than the null model? And have you defined the null model correctly to begin with? Is your model doing better than a simpler method? And if it's not, why use it, and why assign a causal connection between your technique and the relevant result?

In science there are seldom absolutes. Studies like this show us that every new method needs to be compared with what came before it. When older methods have already paved the way, new ones are compelled to do better; otherwise they merely create the illusion of doing well.