
Domains of Applicability (DOA) in top-down and bottom-up drug discovery

You don’t use a hammer to paint an impressionist landscape. And although you technically could, you won’t use a spoon to drink beer. The domains of applicability of these tools are simply different, in both kind and degree.

The idea of a domain of applicability (DOA) is one that is somehow both blatantly simple and easily forgotten. As the examples above indicate, the definition is apparent: every tool, every idea, every protocol has a certain reach. There are certain kinds of data for which it works well and certain others for which it fails miserably. Then there are the most interesting cases: pieces of data on the boundary between applicable and non-applicable. These often serve as real testing grounds for your tool or idea.

Often the DOA of a tool becomes clear only after it’s been used for a long time on a large enough number of test cases. Sometimes the DOA reveals itself accidentally, when you try to use the tool on data for which it’s not really designed. Down that road can lie much heartbreak. It’s better instead to be constantly aware of the DOA of your techniques and to deliberately stress-test their range. The DOA can also tell you something about the sensitivity of your model; for instance, a small change from a methyl to a hydroxyl might fall within one model’s DOA while exceeding another’s.

The development and use of molecular docking, an important part of bottom-up drug discovery, makes the idea of DOA clear. By now there’s an extensive body of knowledge about docking, developed over at least twenty years, which makes it clear when docking works well and when you can trust it less. For example, docking works quite well in reproducing known crystal poses and generating new poses when the protein is well resolved and relatively rigid; when there are no large-scale conformational changes; when there are no unusual interactions in the binding site; and when water molecules aren’t playing any weird or special role in the binding. On the other hand, if you are docking against a homology model built on sparse sequence homology that features a highly flexible loop and several bridging water molecules as key binding elements, all bets are off. You have probably stepped way outside the DOA of docking. Then there are the intermediate and in many ways the most interesting cases: somewhat rigid proteins, just one or two water molecules, and a good knowledge base around that protein that tells you what works. In these cases, one can be cautiously optimistic and make some testable hypotheses.

Fortunately there are ways to pressure-test the DOA of a favorite technique. If you suspect that the system under consideration does not fall within the DOA, there are simple tests you can run and questions you can ask. The first set of questions concerns the quality and quantity of the available data. This data falls into two categories: the data that was used for training the method and the data that you actually have in your test case. If the test data closely matches the training data, then there’s a fair chance that your DOA is covered; a minimal version of this kind of coverage check is sketched below. If not, you ask the second important question: what’s the quickest way I can actually test the DOA? Usually the quickest way to test any hypothesis in early-stage drug discovery is to propose a set of molecules that your model suggests as top candidates. As always, the easier these are to make, the faster you can test them and the easier it is to convince chemists to make them in the first place. It might also be a good idea to sneak in a molecule that your model says has no chance in hell of working. If neither of these predictions comes true within a reasonable margin, you clearly have a problem, either with the data itself or with your DOA.
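To make that first question concrete, here is a minimal sketch of the kind of coverage check one might run, assuming a Python environment with RDKit installed; the training SMILES, the candidate molecules and the 0.4 similarity cutoff are all hypothetical placeholders, not recommendations.

```python
# Rough applicability-domain check: flag candidates whose nearest training-set
# neighbor is too dissimilar. RDKit is assumed to be installed; all SMILES and
# the cutoff below are illustrative placeholders.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def morgan_fp(smiles, radius=2, n_bits=2048):
    """Return a Morgan (ECFP-like) bit fingerprint for a SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    return AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)

# Hypothetical training set and proposed candidates.
training_smiles = ["CCOc1ccccc1", "CC(=O)Nc1ccc(O)cc1", "c1ccc2[nH]ccc2c1"]
candidate_smiles = ["CCOc1ccccc1C", "C1CCCCCCCCCCC1"]

training_fps = [morgan_fp(s) for s in training_smiles]
SIMILARITY_CUTOFF = 0.4  # illustrative threshold, not a universal rule

for smi in candidate_smiles:
    # Highest Tanimoto similarity to any molecule the model was trained on.
    nearest = max(DataStructs.BulkTanimotoSimilarity(morgan_fp(smi), training_fps))
    verdict = "likely inside" if nearest >= SIMILARITY_CUTOFF else "likely outside"
    print(f"{smi}: nearest-neighbor similarity {nearest:.2f} -> {verdict} the DOA")
```

Nothing about such a crude filter guarantees good predictions inside the cutoff, of course; it merely flags candidates that look nothing like anything the model has seen.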

There are also ways to fix the DOA of a technique, but because that task involves generating more training data and tweaking the code accordingly, it’s not something that most end users can do. In the case of docking, for instance, a DOA failure might result from inadequate sampling or inadequate scoring. Both of these issues can be fixed in principle through better data and better force fields, but that’s really something only a methods developer can do.

When a technique is new it always struggles to establish its DOA. Unfortunately, both technical users and management often fail to understand this and can immediately start proclaiming the method a cure for all your problems; they think that just because it has worked well on certain cases it will do so on most others. The lure of publicity, funding and career advancement can further encourage this behavior. That certainly happened with docking and other bottom-up drug design tools in the Wild West of the late 80s and early 90s. I believe that something similar is happening with machine learning and deep learning now.

For instance, it’s well known that machine learning can do extremely well on problems like image recognition and natural language processing (NLP). In those cases one is clearly operating well within the DOA. But what about modeling traffic patterns or brain activity or social networks, or SAR data for that matter? What is the DOA of machine learning in these areas? The honest answer is that we don’t know. Some users and developers of machine learning acknowledge this and are actually trying to circumscribe the right DOA by pressure-testing the algorithms. Others unfortunately simply take it for granted that more data must translate to better accuracy; in other words, they assume that the DOA is dictated purely by data quantity. This is true only in a narrow sense. Yes, too little data can certainly hamper your efforts, but more data is neither always necessary nor sufficient. You can have as much data as you like and your technique can still be operating in the wrong DOA. For example, the discontinuous landscape of molecular activity places limitations on using machine learning in medicinal chemistry, as the sketch below illustrates. Would more data ameliorate this problem? We don’t know yet, but this kind of thinking is certainly consistent with the new religion of “dataism” which says that data is everything.
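To illustrate the discontinuity point, here is a small hedged sketch, again assuming RDKit; the molecule pair and the pIC50 values are invented purely for the example. It shows how two compounds differing by a single small substituent swap can register as close neighbors to a fingerprint-based model while sitting on opposite sides of an activity cliff, something that extra data of the same kind would not smooth over.

```python
# Illustration of an activity cliff: structurally close compounds, very
# different activities. RDKit is assumed; the compound pair and pIC50 values
# below are hypothetical placeholders, not real measurements.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fp(smiles):
    """Morgan fingerprint (radius 2) for a SMILES string."""
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

# A methyl-to-hydroxyl swap on the same scaffold (hypothetical pair).
mol_a, pic50_a = "Cc1ccc2[nH]ccc2c1", 8.2   # pretend this one is potent
mol_b, pic50_b = "Oc1ccc2[nH]ccc2c1", 4.1   # pretend this one is weak

similarity = DataStructs.TanimotoSimilarity(fp(mol_a), fp(mol_b))
print(f"Tanimoto similarity: {similarity:.2f}")
print(f"Activity gap: {abs(pic50_a - pic50_b):.1f} pIC50 units")
# A nearest-neighbor or similarity-weighted model would tend to predict these
# two as roughly equipotent; on a cliff like this it is operating outside its
# DOA, however large its training set.
```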

There are many opportunities to test the DOA of top-down approaches like deep learning in drug discovery and beyond. But to do this, both scientists and management must have realistic expectations about the efficacy of the techniques and, more importantly, must honestly acknowledge that they don’t yet know whether a given technique will work for their specific problem. Unfortunately these kinds of decisions and proclamations are severely subject to hype and the enticement of dollars and drama. Machine learning is seen as a technique with such an outsize potential impact on diverse areas of our lives that many err on the side of wishful thinking. Companies have sunk billions of dollars into the technology; how many of them would be willing to admit that the investment was really based on hope rather than reality?

In this context, machine learning can draw some useful lessons from the cautionary tale of drug design in the 80s, when companies were throwing money at molecular modeling from all directions. Did that money result in important lessons learnt and egos burnt? Indeed it did, but one might argue that computational chemists are still suffering from the negative effects of that hype, both in accurately using their techniques and in communicating the true value of those techniques to what seem like perpetually skeptical Nervous Nellies and Debbie Downers. Machine learning could go down the same route, and that would be a real tragedy, not only because the technique is promising but because it could potentially impact many other aspects of science, technology, engineering and business beyond pharmaceutical development. And it might all happen because we were unable or unwilling to acknowledge the DOA of our methods.

Whether it’s top-down or bottom-up approaches, we can all ultimately benefit from Feynman’s words: “For a successful technology, reality has to take precedence over public relations, for Nature cannot be fooled.” For starters, let’s try not to fool each other.

2017 Nobel Prize picks

The nice thing about Nobel Prizes is that it gets easier to predict them every year, simply because most of the people you nominate don't win and automatically become candidates for the next year (note however that I said "easier to predict", not "easier to correctly predict"). That's why every year you can carry over much of the same list of likely candidates as before.

Having said that, there is a Bayesian quality to the predictions, since the previous year’s prize does compel you to tweak your priors, even if ever so slightly. Recent developments and a better understanding of scientific history might also make you add to or subtract from your choices. For instance, last year the chemistry prize was awarded for molecular machines and nanotechnology. This was widely considered a “pure chemistry” prize, so this year’s prize is unlikely to be in the same area. Knowing the recent history of chemistry prizes, my bets are on biological chemistry or inorganic chemistry as leading contenders this year.


As in previous years, I have decided to separate the prizes into lifetime achievement awards and specific discoveries. There have been fewer of the former in Nobel history and I have only two in mind myself, although the ones that do stand out are no lightweights - for instance R B Woodward, E J Corey, Linus Pauling and Martin Karplus were all lifetime achievement awardees. If you had to place a bet though, then statistically speaking you would bet on specific discoveries since there have been many more of these. So here goes:

Lifetime achievement awards

Inorganic chemistry: Harry Gray and Steve Lippard: For their pioneering and foundational work in the field of bioinorganic chemistry; work which has illuminated the workings of an untold number of enzymatic and biological processes, including electron transfer.

Biological chemistry: Stuart Schreiber and Peter Schultz: For founding the field of modern chemical genetics and for the impact of its various ramifications in chemistry, biology and medicine. Schreiber received the Wolf Prize last year, which improves his chances for the Nobel. The only glitch with this kind of recognition is that a lot of people contributed to the founding of chemical biology in the 1980s and 90s, so it might be a bit controversial to single out Schreiber and Schultz. The Thomson Reuters website has a Schreiber prediction, but for rapamycin and mTOR; in my opinion that contribution, while noteworthy, would be too narrow and probably not sufficient for a prize.

Specific awards

John Goodenough and Stanley Whittingham for lithium-ion batteries: This has been on my list for a very long time and it will remain so. Very few science-based innovations have revolutionized our basic standard of living the way lithium-ion batteries have, and I cannot think of anyone else who deserves a prize for this as much as Goodenough. Just a few months ago there was a book about the making of the iPhone which featured him and his outsize impact on enabling our modern electronics age. As this recent New York Times profile noted, Goodenough is 94 and is still going strong, but that’s no reason to delay a recognition for him.


Generally speaking, recognition for the invention of specific devices has been rather rare, with the charge-coupled device (CCD) and the integrated circuit being exceptions. More importantly, a device prize was given out just three years ago in physics (for blue light-emitting diodes), so based on the Bayesian argument stated above, another device-based invention might be a bit unlikely to win this year. Nonetheless, a prize for lithium-ion batteries would, more than most other inventions, conform to the line in Alfred Nobel's will about the discovery that has "conferred the greatest benefits on mankind."

Franz-Ulrich Hartl and Arthur Horwich for their discovery of chaperones: This is clearly a discovery which has had a huge impact on our understanding of both basic biological processes as well as their therapeutic relevance.


Barry Sharpless for click chemistry, Marvin Caruthers for DNA synthesis:
I am grouping these two together under the heading of organic synthesis. Sharpless’s click chemistry has seen widespread use since it was developed. However, it’s worth contrasting it with two other kinds of novel reactions which were awarded the prize – olefin metathesis and palladium-catalyzed couplings. One of the reasons those two were recognized was that they had a huge impact on the industrial synthesis of drugs, polymers, agricultural chemicals and so on. I don’t know whether click chemistry has had a comparable practical impact – it may not have, since it’s still rather new – but I assume that this practical aspect would certainly play a role in the decision.

Of the two, I think Caruthers deserves it even more; the technology he invented in the 1980s has been chugging along for thirty years now, quietly fueling the biotechnology revolution. While perhaps not as monumental as sequencing, it’s certainly a close second, and unlike click chemistry its practical applications are uncontested. If Sanger could get a prize for figuring out the basic chemistry of DNA sequencing, then I don’t see why Caruthers shouldn’t get one for figuring out the basic chemistry of DNA synthesis. Caruthers could also nicely split the prize with Leroy Hood (below), who really pioneered both commercial DNA sequencers as well as synthesizers.

The medicine prize

As is traditionally the case, several of the above discoveries and inventions can be contenders for the medicine prize. However, we have until now left out what is potentially the biggest contender of all.

Jennifer Doudna, Emmanuelle Charpentier and Feng Zhang for CRISPR-Cas9: I don't think there is a reasonable soul who thinks CRISPR-Cas9 does not deserve a Nobel Prize at some point. In terms of revolutionary impact and ubiquitous use it almost certainly belongs on the same shelf that houses PCR and Sanger sequencing.

There are two sets of questions I have about it, though. Firstly, whether an award for it would still be rather premature. While there is no doubt as to the broad applicability of CRISPR, it also seems to me that it's rather hard right now to apply it with complete confidence to a wide variety of systems. I haven't seen numbers describing the percentage of times that CRISPR works reliably, and one would think such statistics would be important for anyone wanting to reach an informed decision on the matter (I would be happy to have someone point me to such numbers). While that infamous Chinese embryo study that made the headlines last year was quite flawed, it also exposed the problems with efficacy and specificity that still bedevil CRISPR (problems analogous to the efficacy and selectivity hurdles that drugs face). My personal take is that we might have to wait just a few more years before the technique becomes robust and reliable enough to thoroughly enter the realm of reality from one of possibility.

The second question I have is about the whole patent controversy, which if anything seems to have become even more acrimonious since last year, reaching worthy-of-optioning-movie-rights levels of acrimony in fact. Doudna also wrote a book on CRISPR this year which I reviewed here; while it’s generally fair and certainly well written, it does downplay Church and Zhang’s roles (and wisely omits any discussion of the patent controversy). Generally speaking Nobel Prizes try to steer clear of controversy, and one would think that the Nobel committee would be especially averse to sullying its hands with a commercial one. The lack of a clear assignment of priority now being played out in the courts not only tarnishes the intellectual purity of the discovery, but on a more practical level it also makes the decision to award the prize to all three major contenders (Doudna, Charpentier and Zhang) difficult.

Hopefully, as would be fitting for a good novel, the allure of a Nobel Prize would make the three protagonists reach an agreement to settle their differences over a few beers. But that could still take some time. A different way to look at the whole issue however is to say that the Nobel committee could actually heal the divisions by awarding the prize to the trio. Either way, a recognition of CRISPR is likely going to be one of the most publicly debated prizes of recent times.

The bottom line in my mind: CRISPR definitely deserves a prize, and its past results and tremendous future potential may very well tip the balance this year, but it could also happen that the lack of robust, public vindication of the method and the patent controversy could make the recognition seem premature and delay the actual award.

Mary-Claire King: For the discovery of the BRCA1 breast cancer gene. Not only did this discovery revolutionize the treatment and detection of breast cancer, but it really helped to solidify the connection between genetics and cancer.

Craig Venter, Francis Collins, Eric Lander, Leroy Hood and others for genomics and sequencing: The split here may be pretty hard and they might have to rope in a few consortia, but as incomplete and even misleading as the sequencing of the human genome might have been, there is little doubt that it was a signal scientific achievement deserving of a Nobel Prize.

Alec Jeffreys for DNA fingerprinting and assorted applications: Alec Jeffreys is another perpetual favorite on the list, and one whose invention has had a huge societal impact. I have never really understood why he has not been awarded the prize; the societal impact of DNA fingerprinting is almost as great as that of the contraceptive pill (for which Carl Djerassi was unfortunately never recognized).

Ronald Evans and Pierre Chambon for nuclear receptors: After GPCRs, nuclear receptors are the biggest targets for drugs, and GPCRs were already recognized a few years ago. The third discoverer of nuclear receptors, Elwood Jensen, sadly passed away in 2012.

Bert Vogelstein, Robert Weinberg and others for cancer genes: This again seems like a no-brainer to me. Several medicine prizes have been awarded for cancer genetics, so this certainly wouldn't be a novel idea, and it's also clear that Vogelstein and Weinberg have done more than almost anyone else in identifying rogue cancer genes and their key roles in health and disease.

The physics prize: There should be zero doubt in anyone’s mind that this year's Nobel Prize in physics will be awarded to Kip Thorne and Rainer Weiss for their decades-long, dogged leadership and work that culminated in last year's breakthrough discovery of gravitational waves by the LIGO observatory. I would both love and hate to be in their position right now. It's a shoo-in, and the only reason they missed it last year was that the discovery came after the nomination deadline. Sadly, Ron Drever died of dementia this year. For those wanting to know more about the kind of dedication and personality clashes these three men brought to the project, Janna Levin's book which came out earlier this year is a great source.


There is another recognition that I have always thought is due: a recognition of the ATLAS and CMS collaborations at the LHC, which discovered the Higgs boson. A prize for them would accomplish two things: it would put experiment at the center of this important scientific discovery (there would have been no 2013 Nobel Prize without the LHC), and it would herald a new and necessary tradition of awarding the prize to teams rather than individuals, reflecting the reality of contemporary science. The Nobel committee could also recognize the international, collaborative nature of science by awarding the prize to the entire LIGO team and not just to Thorne and Weiss, but that’s unlikely to happen.

It also seems to me that a Nobel Prize for chaos theory and the study of dynamical systems - a field that surprisingly has not been recognized yet - should include any number of pioneers featured for instance in James Gleick's amazing book "Chaos", most notably Mitchell Feigenbaum.

Literature

My interest in fiction has picked up again over the last few years so I am going to venture a few guesses here. Unlike the science awards, Nobel Prizes for literature are usually more of lifetime achievement awards rather than awards for specific books; in fact nobody has won the prize for writing just one book, no matter how transformational it might have seemed.

More accurately, the literature prize is usually given to writers who have consistently explored specific themes in their work. For instance, Naipaul and Coetzee were recognized for vividly exploring issues of post-colonial identity, Toni Morrison was recognized for exploring issues of black identity, and Bertrand Russell was recognized for extolling the virtues of individual freedom and rationality. It thus makes sense to think in terms of themes when considering potential literature laureates.

My personal favorite pick for the prize is Haruki Murakami. Interestingly, my introduction to him came not through his novels but through his amazing book on running which was a great driving force for my own running efforts. But whether it’s in that book or in his novels, Murakami is quite stunning at exploring existential angst, isolation and anxiety in a world where technology is supposed to function as a palliative that connects humans together. His prose is characteristically Japanese; spare, stark and straight to the point. More than most writers I know, I think Murakami deserves to be recognized for his substantial body of work with unifying themes.

Among other people who have traditionally been on nomination lists are Salman Rushdie, Milan Kundera, Philip Roth, Cormac McCarthy and Joan Didion. I think all of these are great writers, but Cormac McCarthy would top the list for me, again because he has produced a consistent body of work that investigates raw themes of Americana in devastatingly brief and searing prose. Rushdie’s “Midnight’s Children” is brilliant, but I honestly don’t think his other work is up to Nobel caliber. Roth is also an eminent contender in my opinion, and I would be happy to see him receive the award. Joyce Carol Oates is another favorite, but frankly I haven’t read enough of her work to form an informed judgment. In any case, it’s now been twenty-four years since an American won the prize, and there are certainly a few worthy contenders by this point.

Far and away, I personally think the most creative writer in English alive today is Richard Powers; he's one of those few novelists to whom I would apply the "genius" label. His sentence constructions and metaphors defy belief, and for sheer imaginative prose I cannot think of his equal. Unfortunately, while he has a cult following, his novels are too challenging to be widely read, and generally speaking the Nobel committee does simple rather than complex.

So that's it from my side. Let the bloodbath games commence!

Heisenberg on Helgoland

The sun was setting on a cloudless sky, the gulls screeching in the distance. The air was bracing and clear. Land rose from the blue ocean, a vague apparition on the horizon.

He breathed the elixir of pure evening air in and heaved a sigh of relief. This would help the godforsaken hay fever which had plagued him like a demon for the last four days. It had necessitated a trip away from the mainland to this tiny outcrop of flaming red rock out in the North Sea. Here he could be free not just of the hay fever but of his mentor, Niels Bohr.

For the last several months, Bohr had followed him like a shadow, an affliction that seemed almost as bad as the hay fever. It had all started about a year earlier, but really, it started when he was a child. His father, an erudite scholar but unsparing disciplinarian, made his brother and him compete mercilessly with each other. Even now he was not on the best terms with his brother, but the cutthroat competition produced at least one happy outcome: a passion for mathematics and physics that continued to provide him with intense pleasure.

He remembered those war-torn years when Germany seemed to be on the brink of collapse, when one revolution after another threatened to tear apart the fabric of society. Physics was the one refuge. It sustained him then, and it promised to sustain him now.

If only he could understand what Bohr wanted. Bohr was not his first mentor. That place of pride belonged to Arnold Sommerfeld in Munich. Sommerfeld, the man with the impeccably waxed mustache whom his friend Pauli called a Hussar officer. Sommerfeld, who would immerse his students not only in the latest physics but in his own home, where discussions went on late into the night. Discussions in which physics, politics and philosophy co-existed. His own father was often distant; Sommerfeld was the father figure in his life. It was also in Sommerfeld’s classes that he met his first real friend – Wolfgang Pauli. Pauli was still having trouble attending classes in the morning when there were all those clubs and parties to frequent at night. He always enjoyed long discussions with Pauli, the ones during which his friend often complimented him by telling him he was not completely stupid. It was Pauli who had steered him away from relativity and toward the most exciting new field in physics – quantum theory.

Quantum theory was the brainchild of several people, but Bohr was its godfather, the man who everyone looked up to. It was Bohr who had first applied the notion of discontinuity to the interior of the atom. It was Bohr who had explained the behavior of the simplest of atoms, hydrogen. But much more than that, it was Bohr who had an almost demonic obsession both with the truths of quantum theory and the dissemination of its central tenets to young physicists like him.

Darkness was approaching as he descended the rock and started walking back to his inn. He smiled as he remembered his first meeting with Bohr. After the war, Germany was the world’s most hated nation. Nobody wanted to deal with her. The Versailles treaty had imposed draconian measures on her already devastated economy. How could they do this? Bohr was one of those very few who had extended a statesmanlike hand toward his country. War is war, Bohr had said, but science is science. Its purity cannot be violated by the failings of humanity. The University of Göttingen had invited Bohr to inaugurate a new scientific relationship between Germany and the rest of the world. The day was as clear in his memory as the air around him. The smell of roses wafting through the windows, the audience standing or sitting on the windowsills, the medieval churches chiming in the distance.

Bohr was explaining one of the finer points of spectroscopy that his quantum theory accounted for. But there was clearly a mathematical error. Had anyone else seen it? The error was an elementary one, and it did not seem worthy of Bohr. As he later found out, Bohr was a competent but not particularly noteworthy mathematician. Physical and philosophical intuition was his forte. The mathematics he left to lesser souls, to young men whom he called scientific assistants. At Göttingen he pointed out the mistake from the back and offered some other comments. He was all of twenty. Bohr graciously admitted the mistake. After the talk, when he was leaving, Bohr caught up with him. Walk with me, said Bohr. Walk, and talk. It was what Bohr did best.

They climbed up the hill near the university, then discussed the problems of atomic physics in a nearby cafe. He felt he could pledge his soul to Bohr. After Munich he had been tempted to go to Copenhagen right away, but Sommerfeld had cautioned him otherwise. Bohr was an excellent physicist, Sommerfeld had said, but at this stage in his career he would be much better served by a more rigorous and mathematical immersion in atomic physics. The best man to mentor him in this regard was Max Born in Göttingen. Born was hesitant, sometimes too sensitive to perceived slights, often in undue awe of his own students, but there was no one else who combined physical insight with mathematical rigor the way he did. Born could acquaint him much better with the formal techniques; he could always spend time with the philosophical Niels later. His friend Pauli had already served as Born’s assistant and had vouched for Born’s first-rate mentorship. Pauli had, however, cautioned him about Born’s insistence on early morning meetings, an expectation that had been so hard for Pauli to meet that Born had had to send a maid to wake him up.

The moonlight illuminated the path in front of him, but there were few other lights on the tiny island. This was what he liked best about it though. Very few people, very few lights, almost nobody to talk with, but plenty of opportunities for walking and swimming in the cool water. And the air, the air. Crystal clear and seemingly designed for clearing both his nasal passages and the cobwebs in his mind. His hay fever seemed almost gone already. He could read Goethe and think about physics as much as he wanted. When he arrived at the inn he greeted the innkeeper, who, when he had arrived four days ago, had seemed horrified at his swollen face. She had asked him if he had been in a brawl. Sadly, political brawls and beatings were not uncommon in Germany. After a light meal of sausages and potato dumplings, he retired to his room.

In Munich, for his doctoral dissertation, he had chosen an uncontroversial topic in fluid dynamics. The final exam had been a fiasco though, and he wrinkled his brow as he thought about it. One of the examiners, Wilhelm Wien, had asked him a question from elementary physics about the resolving power of a microscope. He had forgotten the formula and had gotten hopelessly entangled in trying to work it out. He was trying to solve problems at the forefront of quantum theory; why was he being asked to answer questions that were better suited to a second-rate undergraduate? Wien would not let up, however, and Sommerfeld finally had to step in, assuring the examiners that his student was certainly promising enough to be awarded his doctorate. He had still barely escaped with a passing grade. It still rankled.

He had packed his bags and gone straight to Göttingen from Munich. It was partly to start off on quantum theory right away, but also to escape the depressing pessimism that gripped German society. The past year had seen unprecedented inflation cripple his beloved country. At its height an American dollar had been worth a trillion marks. People were carrying entire carts full of money to trade for a loaf of bread or for some potatoes. They were using it as insulating wallpaper in their homes. Is this what his country really deserved? As he pondered the situation he felt a spring of resentment welling up inside him. If nothing else, he would show them that Germany was still not lacking in scientific talent.

After spending some time with Born and becoming familiar with the fundamental mathematical tools of atomic physics, he had finally made it to Copenhagen. The past few months there had been among the happiest of his life. Bohr had created an atmosphere whose spirit of camaraderie exceeded even that of Sommerfeld’s seminars. The days would be filled with deep scientific and philosophical discussions, long walks in the Faelledparken behind the institute and games of ping-pong. Evenings were spent entertaining Bohr and his kind wife Margrethe with Beethoven and Schubert on the piano, which, after physics, was his main passion. Even more than Sommerfeld, Bohr had become a father figure to him. His avuncular nature, his obsession with quantum theory and his physical agility; all of these were impressive. He would take the stairs two at a time, and it seemed nobody could beat him at ping-pong.

But he had also encountered aspects of Bohr’s personality that had not been apparent before. Bohr was very gentle in personal relations, but when it came to divining scientific truth he could be ferocious, unremittingly persistent, a fanatic without scruples. He had been arguing for the validity of some rather well-known facts of atomic physics, but Bohr’s relentless questioning of even the basic existence of the properties of electrons and photons - questioning that continued well into the night even after he had expressed his fatigue - had almost reduced him to tears. As if Bohr’s inquisition-style interrogation had not been enough, another hitherto unobserved particle had entered Bohr’s orbit since he had last met him. His name was Hendrik Kramers. Kramers was Dutch, voluble, mathematically sophisticated, could speak four languages and could play both the piano and the cello. He himself had been struggling with Danish and English for some time, and it was difficult not to be jealous of Kramers. A kind of sibling rivalry had developed between them, both vying for the attention of the father figure.

While he had been putting the finishing touches on his mundane dissertation on fluid dynamics, Bohr, Kramers, and a young American postdoctoral fellow named John Slater had created a compelling picture of electrons in the atom as a set of pendulum-like objects; the technical term for these was harmonic oscillators. The oscillators would vibrate with certain frequencies that corresponded to transitions of electrons between different states in the atom. Bohr and Kramers were using these oscillators as convenient representations to picture what goes on inside an atom, but they were still concerned with the well-known basic properties of electrons like their positions and velocities. He had been asked to see what he could do with Bohr and Kramers’s model.

This was where the problems had started. He liked the idea of using oscillators to represent electrons. The oscillators expressed themselves in the form of a well-known mathematical device called a Fourier series. His time with Born had made him quite familiar with Fourier series. But when he had inserted formulas for the series into the basic equations of motion, single numbers had grotesquely multiplied into entire lists of numbers. Every time he got rid of certain numbers, others would mushroom, like the heads of a Hydra. He had played algebraic games, had filled tables upon tables with numerical legerdemain, and had gotten not an inch closer to expressing any physical quantity. And then, suddenly, like a gale from the North Sea, he had been swept off his feet by the worst bout of hay fever he could remember. It kept him awake at night. It made him feel groggy during the day. It made the morass of numbers appear even bigger than it was.

He had finally had enough. Time to reset the mental gears, he had told himself. The little rocky outcrop with its very low pollen count had been a favored destination for hay fever sufferers. That’s where he would go, away from the stifling hay fever and the intellectual hothouse, to the ocean, mountains and clear air which he loved best. He had known this part of the country from expeditions with his youthful Pfadfinder classmates. There they had sung songs about the fatherland and had had fervent patriotic discussions about the spiritual and political revival of Germany. He felt at home there.

The light on the ceiling was flickering as he started thinking about oscillators, about frequencies, about electrons. How does one ever know what goes on inside an atom? And that’s when it struck him. It seemed like a bolt out of the blue then, but later on he realized that it was part of a continuum of mental states, a flash of insight that only seemed discontinuous, like the transitions of electrons. Once again, how does one ever know what goes on inside an atom? Nobody has seen an atom or an electron; they are unobservable. And yet we know they are real because we observe their tangible effects. Unobservable entities have been part of science for a very long time. Nobody knew what went on inside the sun. But scientists – German scientists among them – had figured it out based on the frequencies of spectral lines that indicated the presence of certain elements. Spectroscopy had also been paramount in the development of atomic theory. Bohr himself had demonstrated the success of the theory by using it to explain the spectral lines of hydrogen.

He took a step back, looked at the whole picture from a fresh viewpoint, saw the forest for the trees. What we see are spectral lines and nothing but spectral lines. We do not see the electron’s position; we do not see its momentum. Position and momentum may have been the primary variables in classical physics, but that was because we could measure them. In the case of atoms and electrons, all we see are the frequencies of the spectral lines. What we do not see we do not know. Then why pretend to use it? Why pretend to calculate it? The frequencies are the observables. Why not use them as the primary variables, with the positions and momenta as secondary quantities? He had always been a first-rate mathematician, but now he thought about the physics. It was a fundamental shift of frame of reference, of the kind so memorably introduced by Einstein before. The problem was that when positions and momenta were represented as Fourier series of transition frequencies, multiplying two such quantities gave a whole array of numbers rather than a single number. But here is where his physical intuition proved pivotal. One could know which numbers from the array to keep and which ones to discard based on whether they represented transitions between real energy states in atoms. That information was available and implicit in the frequencies of the spectral lines. Nature could steady that tentative march of numbers.

It was finally time to use his strange calculus to calculate the energy of a real physical system. As his excitement mounted he kept on making mistakes and correcting them, but finally he had it. When he looked at it he was struck with joy and astonishment. Out of the dance of calculations emerged an answer for the energy of the system, but crucially, this energy could only exist in a restricted set of values. In one fell swoop he had rediscovered Max Planck’s original formulation of quantum theory without explicitly using Planck’s energy formula. An answer this correct must be true. An answer this elegant must be true.

It was almost three o’clock in the morning. The night outside seemed to deepen into a chasm. He had hardly talked to anyone during his four days on the island, and now it seemed that all that silence was culminating in a full-throated expression of revolutionary insight. The hand of nature and his own dexterous mind had cracked the puzzle in front of him, just as invisible writing is suddenly revealed by the application of the right chemical solution. But the sheer multiplicity of applications that he now foresaw was startling. At first he was deeply alarmed. He had the feeling that, through the surface of atomic phenomena, he was looking at a strangely beautiful interior, and now had to probe this wealth of mathematical structures that nature had so generously spread out before him.

But that could wait. He now knew that he had a general scheme of quantum theory that could be used to solve any number of old and new problems. Bohr would be pleased, although he would still insist on several modifications to his formulation when it was time to publish. And of course he would show it to his friend Pauli who would provide the most stringent test of the correctness of his theory.


His hay fever seemed to have disappeared. He felt strong again. There did not seem much point in trying to fall asleep at this very late hour. He put on his boots and set out. There was a distant rocky outcrop, the northernmost tip of the island that he had not explored yet. He walked in the predawn light. Not a gull cried around him, not a leaf seemed to tremble. An hour later he was at the base of the rock and scaled it without much effort. There he sat for a long time until he saw the first rays of the sun penetrate the darkness. Photons of light falling on his eyes, stimulating electron transitions in atoms of carbon, nitrogen and oxygen. And at that moment he was the sole human being on earth who knew how this was happening.

Note: This is my latest column for 3 Quarks Daily. It's a piece of historical fiction in which I imagine 24-year-old Werner Heisenberg inventing quantum mechanics on the small island of Helgoland in the North Sea. Heisenberg's formulation was not the easiest to use and was supplanted by Schrödinger's more familiar wave mechanics, but it inaugurated modern quantum theory and was by any reckoning one of the most important discoveries in the history of physics.

Carl Sagan's 1995 prediction of our technocratic dystopia

In 1995, just a year before his death, Carl Sagan published a bestselling book called “The Demon-Haunted World” which lamented what Sagan saw as the increasing encroachment of pseudoscience on people’s minds. It was an eloquent and wide-ranging volume. Sagan was mostly talking about obvious pseudoscientific claptrap such as alien abductions, psychokinesis and astrology. But he was also an astute observer of human nature who was well educated in the humanities. His broad understanding of human beings led him to write the following paragraph, buried inconspicuously in the middle of the second chapter.

“I have a foreboding of an America in my children's or grandchildren's time -- when the United States is a service and information economy; when nearly all the manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what's true, we slide, almost without noticing, back into superstition and darkness.”
As if these words were not ominous enough, Sagan follows up just a page later with another paragraph which is presumably designed to reduce us to a frightened, whimpering mass.

“I worry that, especially as the Millennium edges nearer, pseudoscience and superstition will seem year by year more tempting, the siren song of unreason more sonorous and attractive. Where have we heard it before? Whenever our ethnic or national prejudices are aroused, in times of scarcity, during challenges to national self-esteem or nerve, when we agonize about our diminished cosmic place and purpose, or when fanaticism is bubbling up around us - then, habits of thought familiar from ages past reach for the controls.

The candle flame gutters. Its little pool of light trembles. Darkness gathers. The demons begin to stir.”

What’s striking about this writing is its almost clairvoyant prescience. The phrases “fake news” and “post-factual world” were not used in Sagan’s time, but he is clearly describing them when he talks about people being “unable to distinguish between what feels good and what’s true”. And the rise of nationalist prejudice seems to have occurred almost exactly as he described.

It’s also interesting how Sagan’s prediction of the outsourcing of manufacturing mirrors the concerns of so many people who voted for Trump. The difference is that Sagan was not taking aim at immigrants, partisan politics, China or similar factors; he was simply seeing the disappearance of manufacturing as an essential consequence of its tradeoff with the rise of the information economy. We are now acutely living that tradeoff and it has cost us mightily.

One thing that’s difficult to say is whether Sagan was also anticipating the impact of technology on the displacement of jobs. Automation was already around in the 90s and the computer was becoming a force to reckon with, but speech and image recognition, and the impact of machine learning on these tasks, were in their fledgling days. Sagan didn’t know about these fields; nonetheless, the march of technology also feeds into his concern about people gradually descending into ignorance because they cannot understand the world around them, even as technological comprehension stays in the hands of a privileged few.

In terms of people “losing the ability to set their own agendas or question those in power”, consider how many of us, let alone those in power, can grasp the science and technology behind deep learning, climate change, genome editing or even our iPhones. And yet these tools are subtly inserting themselves into pretty much all aspects of life, and there will soon be a time when no part of our daily existence is untouched by them. Yet it will also be a time when we use these technologies without understanding them, essentially entrusting them with our lives, liberties and pursuit of happiness. Then, if something goes wrong, as it inevitably does with any complex system, we will be in deep trouble because of our lack of comprehension. Not only will there be chaos everywhere, but because we mindlessly used the technology as a black box, we won’t have the first clue about how to fix it.

Equally problematic is the paradox that as technology becomes more user-friendly, it becomes easier and easier to apply with abandon, without understanding its strengths and limitations. My own field of computer-aided drug design (CADD) is a good example. Twenty years ago, software tools in my field were the realm of experts. But graphical user interfaces, slick marketing and cheap computing power have now put them in the hands of non-experts. While this has led to a useful democratization of these tools, it has also led to their abuse and overapplication. For instance, most of these techniques have been used without a proper understanding of statistics, not only leading to incorrect results being published but also to a waste of resources and time in the always time-strapped pharmaceutical and biotech industries.

This same paradox is now going to underlie deep learning and AI, which are far more hyped and consequential than computer-aided drug design. Yesterday I read an interview with computer scientist Andrew Ng from Stanford, who enthusiastically advocated that millions of people be taught AI techniques. Ng and others are well-meaning, but what’s not discussed is the potential catastrophe that could arise from putting imperfect tools in the hands of millions of people who don’t understand how they work and who suddenly start applying them to important aspects of our lives. To illustrate the utility of large-scale education in deep learning, Ng gives the example of how the emergence of commercial electric installations suddenly led to a demand for large numbers of electrical engineers. The difference is that electricity was far more deterministic and better understood than AI is. If it went wrong we largely knew how to fix it, because we knew enough about the behavior of electrons, wiring and circuitry.

The problem with many AI algorithms like neural nets is that not only are they black boxes, but their exact utility is still a big unknown. In fact, AI is such a fledgling field that even the experts don’t really understand its domains of applicability, so it’s too much to believe that people who acquire AI diplomas in a semester or two will do any better. I would rather have a small number of experts develop and use imperfect technology than have millions adopt technologies which are untested, especially when they are being used not just in our daily lives but in critical services like healthcare, transportation and banking.

As far as “those in power” are concerned, Sagan hints at the fact that they may no longer be politicians but technocrats. Both government and Silicon Valley technocrats have already taken over many aspects of our lives, and their hold seems only to tighten. One little-appreciated take on that recent Google memo fiasco came from journalist Elaine Ou, who focused on a very different aspect of the incident: the way it points toward the technological elite carefully controlling what we read, digest and debate based on their own social and political preferences. As Ou says,

“Suppressing intellectual debate on college campuses is bad enough. Doing the same in Silicon Valley, which has essentially become a finishing school for elite universities, compounds the problem. Its engineers build products that potentially shape our digital lives. At Google, they oversee a search algorithm that seeks to surface “authoritative” results and demote low-quality content. This algorithm is tuned by an internal team of evaluators. If the company silences dissent within its own ranks, why should we trust it to manage our access to information?”

I personally find this idea that technological access can be controlled by the political or moral preferences of a self-appointed minority deeply disturbing. Far from all information being freely available at our fingertips, it means that we will increasingly read the biased, carefully shaped perspective of this minority. The recent events at Google, for example, have revealed the social opinions of several of its most senior personnel as well as of the engineers who more directly control the flow of the vast amounts of information permeating our lives every day. The question is not whether you agree or disagree with their views; it’s that there’s a good chance these opinions will increasingly and subtly – sometimes without their proponents even knowing it – embed themselves into the pieces of code that influence what we see and hear pretty much every minute of our hyperconnected world. And this is not about simply switching the channel. When politics is embedded in technology itself, you cannot really switch the channel until you switch the entire technological foundation, something that’s almost impossible to accomplish in an age of oligopolies. This is an outcome that should worry even the most enthusiastic proponent of information technology, and it certainly should worry every civil libertarian. Even Carl Sagan was probably not thinking about this when he talked about “awesome technological powers being in the hands of a very few”.

The real fear is that ignorance born of technological control will be so subtle, gradual and all-pervasive that it will make us slide back, “almost without noticing”, not into superstition and darkness but into a false sense of security, self-importance and connectivity. In that sense it would very much resemble the situation in “The Matrix”. Politicians have used this strategy for ages, but ceding it to all-powerful machines enveloping us in their byte-lined embrace will be the ultimate capitulation. Giving people the illusion of freedom works better than any actual effort at curbing freedom. Perfect control works when those who are controlled keep on believing the opposite. We can be ruled by demons when they come disguised as Gods.

How to thrive as a fox in a world full of hedgehogs

This is my fourth monthly column for 3 Quarks Daily.

The Nobel Prize-winning animal behaviorist Konrad Lorenz once said about philosophers and scientists, “Philosophers are people who know less and less about more and more until they know nothing about everything. Scientists are people who know more and more about less and less until they know everything about nothing.” Lorenz had good reason to say this, since he worked in both science and philosophy. Along with his two co-recipients, he remains one of the only zoologists to have won the Nobel Prize for Physiology or Medicine. His major work was in investigating aggression in animals, work that was found to be strikingly applicable to human behavior. But Lorenz’s quote can also be said to be an indictment of both philosophy and science. Philosophers are the ultimate generalists; scientists are the ultimate specialists.

Specialization in science has been a logical outgrowth of its great progress over the last five centuries. At the beginning, most people who called themselves natural philosophers – the word “scientist” was only coined in the 19th century – were generalists and amateurs. The Royal Society, established in 1660, was a bastion of generalist amateurs. It gathered together a motley crew of brilliant tinkerers like Robert Boyle, Christopher Wren, Henry Cavendish and Isaac Newton. These men would not recognize the hyperspecialized scientists of today; between them they were lawyers, architects, writers and philosophers. Today we would call them polymaths.

These polymaths helped lay the foundations of modern science. Their discoveries in mathematics, physics, chemistry, botany and physiology were unmatched. They cracked open the structure of cells, figured out the constitution of air and discovered the universal laws governing motion. Many of them were supported by substantial hereditary wealth, and most of them did all this on the side, while working their day jobs and spending time with their families. The reasons these gentlemen (sadly, there were no ladies then) of the Royal Society could achieve significant scientific feats were manifold. Firstly, the fundamental laws of science still lay undiscovered, so the so-called “low-hanging fruit” of science was ripe and plentiful. Secondly, doing science was cheap then; all Newton needed to figure out the composition of light was a prism.

But thirdly and most importantly, these men saw science as a seamless whole. They did not distinguish much between physics, chemistry and biology, and even when they did they did so for the sake of convenience. In fact their generalist view of the world was so widespread that they didn’t even have a problem reconciling science and religion. For Newton, the universe was a great puzzle built by God, to be deciphered by the hand of man, and the rest of them held similar views.

Fast forward to the twentieth century, and scientific specialization was rife. You could not imagine Werner Heisenberg discovering genetic transmission in fruit flies, or Thomas Hunt Morgan discovering the uncertainty principle. Today science has become even more closeted into its own little boxes. There are particle astrophysicists and neutrino particle astrophysicists, cancer cell biologists, organometallic chemists and geomicrobiologists. The good gentlemen of the Royal Society would have been both fascinated and flummoxed by this hyperspecialization.

There is a reason why specialization became the order of the day from the seventeenth century onwards. Science simply became too vast, its tendrils reaching deep into specific topics and sub-topics. You simply could not flit from topic to topic if you were to understand something truly well and make important discoveries in the field. If you were a protein crystallographer, for instance, you simply had to spend all your time learning about instrumentation, protein production and software. If you were a string theorist, you simply had to learn pretty much all of modern physics and a good deal of modern mathematics. Studying any topic in such detail takes time and effort and leaves no time to investigate other fields. The rewards from such single-minded pursuit are usually substantial; satisfaction from the deep immersion that comes from expertise, the enthusiastic adulation of your peers, and potential honors like the Nobel Prize. There is little doubt that specialization has provided great dividends for its practitioners, both personal and scientific.

And yet there were always holdouts, men and women who carried on the tradition of their illustrious predecessors and left the door ajar to being generalists. Enrico Fermi and Hans Bethe were true generalists in physics, and Fermi went a step further by becoming the only scientist of the century who truly excelled in both theory and experiment; he would have made his fellow countryman Galileo proud. Then there was Linus Pauling who mastered and made seminal contributions to quantum chemistry, organic chemistry, biochemistry and medicine. John von Neumann was probably the ultimate polymath in the tradition of old natural philosophers, contributing massively to every field from pure mathematics and economics to computing and biology.

These polymaths not only kept the flame of the generalist alive; they also anticipated the irony of science coming full circle. The march of science from the seventeenth to the twentieth century may have been one of increasing specialization, but in the last few years we have seen generalist science blossom again. Why is this? Simply because the most important and fascinating scientific questions we face today require the melding of ideas from different fields. For instance: What is consciousness? What is life? How do you combat climate change? What is dark energy? These questions don’t just benefit from an interdisciplinary approach; they require it. Now, the way modern science approaches these questions is to bring together experts from various fields rather than relying on a single person who is an expert in all of them. The Internet and global communication have made this kind of intellectual cross-pollination easier.

And yet I would contend that there is a loss of insight when people keep excelling in their chosen fields and simply funnel the output of their efforts to other scientists without really understanding how it is used. In my own field of drug discovery, for instance, I have found that people who have at least a conceptual understanding of other areas are far more likely to contribute useful insights than those who simply do their job well and shove the product on to the next step of the pipeline.

I thus believe there is again a need for the kind of generalist who dotted the landscape of scientific research two hundred years ago. Fortunately, the poet Archilochus and the philosopher Isaiah Berlin have given us the right vocabulary to describe generalists and specialists. The fox, wrote Archilochus, knows many things while the hedgehog knows one big thing. Generalists are foxes; specialists are hedgehogs.

The history of science demonstrates that both foxes and hedgehogs are necessary for its progress. But history also shows that the two can alternate in importance. In addition, there are fields like chemistry which have always benefited more from foxes than from hedgehogs. Generally speaking, foxes are more important when science is theory-rich and data-poor, while hedgehogs are more important when science is theory-poor and data-rich. The twentieth century was largely the century of hedgehogs, while the twenty-first is likely to be the century of foxes.

Being a fox is not very easy though. Both personal and institutional forces in science have been built to support hedgehogs. You can mainly blame human resources personnel for contriving to make the playing field more suitable for those creatures. Consider the job descriptions in organizations. We want an “In vivo pharmacologist” or a “Soft condensed matter physicist”, the job listing will say; attached will be a very precise list of requirements – tiny boxes within the big box. This makes it easier for human resources to check all the boxes and reject or accept candidates efficiently. But it makes it much harder for foxes, who may not fit precise labels yet may have valuable insights to contribute, to make it past those rigid filters. Organizations thus end up losing fine, practical minds who pay the price for their eclectic tastes. Academic training is also geared toward producing hedgehogs rather than foxes, and funding pressures on professors to do very specific kinds of research do not make the matter any easier. In general, these institutions create an environment in which being a fox is actively discouraged and in which hedgehogs and their intellectual children and grandchildren flourish.

As noted above, however, this is a real problem at a time when many of the most important problems in science are essentially interdisciplinary and would greatly benefit from the presence of foxes. And since institutional strictures don’t encourage foxes to ply their trade, they also fail to teach the skills necessary to be a fox. Thus the cycle perpetuates itself: institutions discourage foxlike behavior so thoroughly that hedgehogs don’t even know how to be productive foxes when they want to, and they in turn further perpetuate hedgehogian principles.

Fortunately, foxes past and present have provided us with a blueprint of their behavior. The essence of being a fox is generalist behavior, and there are some commonsense steps one can take to inculcate its habits. Based on both historical facts about generalists as well as, well, general principles, one can come up with a kind of checklist for being a productive fox in an urban forest full of hedgehogs. This checklist draws on the habits of successful foxes as well as recent findings from the sciences and the humanities that encourage flexible, universal thinking applicable not just in different fields but especially across their boundaries. Here are a few lessons that I have learnt or read about over the years. Because the lessons are general, they are not confined to scientific fields.

1. Acknowledge psychological biases.

One of the most striking findings over the last three decades or so, exemplified by the work of Amos Tversky, Daniel Kahneman, Paul Slovic and others, is the tendency of human beings to make the same kinds of mistakes when thinking about the world. Through their pioneering research, psychologists have found a whole list of biases like confirmation bias, anchoring effects and representativeness that dog our thinking. Recognizing these biases doesn’t just help connect ideas across various disciplines but also helps us step back and look at the big picture. And looking at the big picture is what foxes need to do all the time.

2. Learn about statistics.

A related field of inquiry is statistical thinking. In fact, many of the cognitive biases which I just mentioned arise from the fundamental inability of human beings to think statistically. Basic statistical fallacies include: extrapolating from small sample sizes, underestimating or ignoring error bars, putting undue emphasis on rare but dramatic effects (think terrorist attacks), inability to think across long time periods and ignoring baselines. In an age when the news cycle has shrunk from 24 hours to barely 24 seconds of our attention span, it’s very easy to extrapolate from random, momentary exposure to all kinds of facts, especially when the media’s very existence seems to depend on dramatizing or exaggerating them. In such cases, stepping back and asking oneself some basic statistical questions about every new fact can be extremely helpful. You don't have to actually be able to calculate p values and confidence intervals, but you should know what these are.
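
To make the small-sample fallacy concrete, here is a minimal sketch in Python; the “true” effect size, the noise level and the sample sizes are invented purely for illustration, and only the qualitative behavior matters.

# Toy illustration of the small-sample fallacy. The "true" mean and noise
# below are made up; the point is how much small-sample averages scatter.
import random

TRUE_MEAN, NOISE = 1.0, 3.0   # hypothetical effect size and noise level

def sample_mean(n):
    """Average of n noisy measurements drawn around TRUE_MEAN."""
    return sum(random.gauss(TRUE_MEAN, NOISE) for _ in range(n)) / n

random.seed(0)
for n in (5, 50, 500):
    estimates = [sample_mean(n) for _ in range(10)]
    spread = max(estimates) - min(estimates)
    print(f"n = {n:3d}: ten repeated estimates span {spread:.2f} units around {TRUE_MEAN}")

Running it shows the averages from tiny samples swinging over a wide range while the large-sample averages barely move, which is exactly the intuition behind error bars and sample size.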

3. Make back-of-the-envelope calculations.

When the first atomic bomb went off in New Mexico in July 1945, Enrico Fermi famously threw a few pieces of paper into the air and, based on where the shockwave scattered them, came up with a remarkably good estimate of the bomb’s yield. Fermi was a master of the approximate calculation, the rough, order-of-magnitude estimate that gives the right ballpark answer. It’s illuminating how that kind of estimation can focus our thinking, no matter what field we are dealing with. Whenever we encounter a fact that would benefit from estimating a number, it’s worth applying Fermi’s method to find a rough answer. In most cases it’s good enough.
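
As a small, hedged illustration of the method (this is the classic piano-tuner estimate, and every number below is an admitted order-of-magnitude guess rather than real data), a Fermi calculation can be organized in a few lines of Python: multiply a handful of rough factors and trust only the order of magnitude of the answer.

# A toy Fermi estimate: roughly how many piano tuners work in a large city?
# Every input is a deliberate order-of-magnitude guess.
import math

population        = 3e6      # guessed city population
people_per_house  = 3        # rough household size
piano_fraction    = 1 / 20   # guessed fraction of households with a piano
tunings_per_year  = 1        # a piano is tuned about once a year
tunings_per_tuner = 2 * 250  # ~2 tunings a day, ~250 working days a year

pianos = population / people_per_house * piano_fraction
tuners = pianos * tunings_per_year / tunings_per_tuner
print(f"~{tuners:.0f} tuners, i.e. on the order of 10^{round(math.log10(tuners))}")

The point is not the particular number but the discipline of writing the guesses down explicitly, so each one can be challenged and refined.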

4. Know your strengths and weaknesses.

As the great physicist Hans Bethe once sagely advised, “Always work on problems for which you possess an undue advantage.” We are always told that we should work on our weaknesses, and this is true to some extent. But it’s far more important to match the problems we work on with our particular strengths, whether they lie in calculation, interdisciplinary thinking or management. Leveraging your strengths is the best way to avoid getting bogged down in one place and to nimbly jump across several problems like a fox. Hedgehogs often spend their time not just honing their strengths but shoring up their weaknesses; this is an admirable trait, but it’s not always optimal for working across disciplinary boundaries.

5. Learn to think at the emergent level that’s most useful for every field.

If you have worked in various disciplines long enough, you start realizing that every discipline has its own zeitgeist, its own way of doing things. It’s not just about learning the technical tools and the facts, it’s about knowing how to pitch your knowledge at a level that’s unique and optimal for that field. For instance, a chemist thinks in terms of molecules, a physicist thinks in terms of atoms and equations, an economist thinks in terms of rational individuals and a biologist thinks in terms of genes or cells. That does not mean a chemist cannot think in terms of equations or atoms, but that is not the most useful level of thinking to apply to chemistry. This matching of a particular brand of thinking to a particular field is an example of emergent thinking. The opposite of emergent thinking is reductionist thinking which breaks down everything into its constituent parts. One of the discoveries of science in the last century is the breakdown of strict reductionism, and if one wants to be a productive fox, he or she needs to learn the right level of emergent thinking that applies to a field.

6. Read widely outside your field, but read just enough.

If you want to become a generalist fox, this is an obvious suggestion, but because it’s obvious it needs to be reiterated. Gaining knowledge of multiple fields entails knowing something about those fields, which entails reading about them. But it’s easy to get bogged down in detail and to try to become an expert in every field. This goal is neither practical nor the correct one. The goal instead is to gain enough knowledge to be useful, to be able to distill general principles, to connect ideas from your field to others. Better still, talk to people. Ask experts what they think are the most important facts and ideas, keeping in mind that experts have their own biases and can reach different conclusions.

A great example of someone who learnt enough about a complementary field to be not just useful but very good at his job was Robert Oppenheimer. Oppenheimer was a dyed-in-the-wool theorist who at first had little knowledge of experiment. But as one of his colleagues said,

“He began to observe, not manipulate. He learned to see the apparatus and to get a feeling of its experimental limitations. He grasped the underlying physics and had the best memory I know of. He could always see how far any particular experiment would go. When you couldn’t carry it any further, you could count on him to understand and to be thinking about the next thing you might want to try.”

Oppenheimer thus clearly learnt enough about experimental physics to know the strengths and limitations of the field. His example imparts another valuable piece of advice: at the very least, know the strengths and limitations of every field, so you know whether the connections you are forming fall within its purview. In other words, know the domain of applicability of every field so that you can form reasonable connections.

7. Learn from your mistakes, and from others.

If you are a fox trying to jump across various disciplinary boundaries, it goes without saying that you might occasionally stumble. Because you lack expertise in many fields you are likely to make mistakes. This is entirely understandable, but what’s most important is to acknowledge those mistakes and learn from them. In fact, making mistakes is often the best shortcut to quick learning (“Fail fast”, as they say in the tech industry). Learning from our mistakes is of course important for all of us, but especially so for foxes who are often intrinsically dealing with incomplete information. Make mistakes, revise your worldview, make new mistakes. Rinse and repeat. That should be your philosophy.

Parallel to learning from your mistakes is learning from others. During her journey a fox will meet many interesting people from different fields who know different facts and possess different mental models of the world. Foxlike behavior often entails being able to flexibly use these mental models to deal with problems in different fields, so it’s key to remain a lifelong learner of these patterns of thought. Fortunately the Internet has opened up a vast new opportunity for networking, but we don’t always take advantage of it in serious, meaningful ways. Everyone will benefit from such deliberate, meaningful connections, but foxes in particular will reap rewards.

8. “The opposite of a big truth is also a big truth” – Niels Bohr

The world is almost always gray. Foxes must imbibe this fact as deeply as Niels Bohr imbibed quantum mechanics. Especially when you are encountering and trying to integrate disparate ideas from different fields, it’s very likely that some of them will seem contradictory. But often the contradiction is in our minds, and there’s actually a way to reconcile the ideas (as a general rule, it is only in the Platonic world of mathematics that contradictions cannot be tolerated at all). The fact is that most ideas from the real world are fuzzy and ill defined, so it’s no surprise that they will occasionally run into each other. Not just ideas but patterns of thinking may seem contradictory; for example, what a biologist sees as the most important feature of a particular system may not be the most important feature for a physicist (emergence again). In most cases the truth lies somewhere in between, but in others it may lie wholly on one side. As they say, being able to hold opposing ideas in your mind at the same time is a mark of intelligence. If you are a fox, prove it.

These are but a few of the potential avenues that you can explore for being a generalist fox. But the most important principle that foxes can benefit from is, as the name indicates, general. When confronted by an idea, a system or a problem, learn to ask the most general questions about it, questions that flow across disciplines. A few of these questions in science are: What’s the throughput? How robust is the system? What are the assumptions behind it? What is the problem that we are trying to solve? What are its strengths and limitations? What kinds of biases are baked into the system and our thinking about it?


Keep on asking these questions, make a note of the answers, and you will realize that they can be applied across domains. At the same time, remember that as a fox you will always work in tandem with specialized hedgehogs. Foxes will be needed to explore the uncharted territory of new areas of science and technology, while hedgehogs will be needed to probe its corners and reveal hidden jewels. The jewels will in turn reflect light that illuminates additional playgrounds for the foxes to frolic in. Together the two creatures will make a difference.

The bomb ended World War 2: And other myths about nuclear weapons

Sixty-seven years ago on this day, a bomb fueled by the nuclear fission of uranium leveled Hiroshima in about six seconds. Since then, two foundational beliefs have colored our views of nuclear weapons; one, that they were essential or at least very significant for ending the war, and two, that they have been and will continue to be linchpins of deterrence. These beliefs have, in one way or another, guided all our thinking about these mythic creations. Historian Ward Wilson of the Monterey Institute of International Studies wants to demolish these and other myths about nukes in a new book titled "5 Myths about Nuclear Weapons", and I have seen few volumes that deliver their message so effectively in so few words. Below are Wilson's thoughts about the two dominant nuclear myths, interspersed with a few of my own.
"Nuclear weapons were paramount in ending World War 2".
This is where it all begins. And the post facto rationalization certainly seems compelling: brilliant scientists worked on a fearsome weapon in a race against the Nazis and, when the Nazis were defeated, handed it over to world leaders who used it to bring a swift end to a most horrible conflict. Psychologically it fits into a satisfying and noble narrative. Hiroshima and Nagasaki have become so completely ingrained in our minds as symbols of the power of the bomb that we scarcely think about whether they really served the roles ascribed to them over the last half century. In one sense the atomic bombings of Japan have dictated all our subsequent beliefs about weapons of mass destruction. But troubling evidence has mounted over the last half century, and it is now substantial enough to deal a major blow to this thinking. Contrary to popular belief, this is not "revisionist" history; by now the files in American, Soviet, Japanese and British archives have been declassified to an extent that allows us to piece together the cold facts and reveal exactly what impact the atomic bombings had on the Japanese decision to end the war. They tell a story very different from the standard one.
Wilson draws on detailed minutes from the meetings of the Japanese Imperial Staff to make two things clear; first, that the bomb did not have a disproportionate influence on the Japanese leaders' deliberations and psyche, and second, that what did have a very significant impact on Japanese policy was the Soviet invasion of Manchuria and Sakhalin. Wilson reproduces the reactions of key Japanese leaders after the bombing of Hiroshima on August 6. You would expect them to register shock and awe, but we see little of this. No major meeting was summoned after the event, and most leaders seemed to display mild consternation but little of the terror or extreme emotion that you might expect from such a world-shattering event. What does emerge from the record is that the same men were extremely rattled after the Soviets declared war on August 8.
The reason was that before Hiroshima the Japanese were contemplating two strategies for surrender, one military and the other political. The military strategy involved throwing the kitchen sink against the Americans when they invaded the southern part of the Japanese homeland in the coming months and causing them so many losses that their victory would be a pyrrhic one at best; the Japanese could then seek a surrender on their own terms. The political strategy involved negotiating with the Allies through Moscow. Even after Hiroshima, both these options remained open, since the Japanese army was still intact and relations with the Soviets had not been broken. But with the Soviet invasion in the north, concentrating troops against the Allied invasion in the south and seeking favorable surrender terms through the Soviets both became impossible. This double blow convinced the Japanese that they must now confront unconditional surrender. When the Emperor finally implored his people to surrender and cited a "new and most cruel bomb" as the reason, it was likely a way to save face, so that the Japanese could blame the bomb rather than their own instigation of the war at Pearl Harbor.
Why were the Japanese not affected by the bombing of Hiroshima? Because on the ground the bombing looked no different from the relentless pounding that dozens of major Japanese cities had received at the hands of Curtis LeMay's B-29s during the previous six months. The infamous firebombing of Tokyo in March 1945 had killed even more civilians than the atomic bomb. As Wilson details, no fewer than 68 cities had been subjected to intense attack, and aerial photos of these cities are almost indistinguishable from those of Hiroshima. Thus for the Japanese, Hiroshima was one more casualty in a long list. It did little either to shock them or to weaken their resolve to continue the war, especially since the surrender deliberations came too soon after the event for its true scale to sink in.
Unfortunately the perception of the bombing of Hiroshima also fed into the general perception of strategic bombing, itself a myth largely perpetuated by the air forces of the U.S. and the European powers, which wanted to convince leaders that they could win wars through air attack alone. The conventional wisdom since before World War 2 was that strategic bombing could deal a deadly blow to the enemy's morale and strategic resources. This wisdom was perpetuated in the face of much evidence to the contrary. The bombings of London, Hamburg, Dresden and Tokyo had little effect on morale; in fact, postwar analysis indicated that if anything they made the survivors more determined and resilient, and they had only a minor impact on war production capability. The later follies of Vietnam, Cambodia and Laos also proved the futility of strategic bombing in ending wars. And the same was true of Hiroshima. The main point, as Wilson makes clear, is that you cannot win a war by destroying cities, because ultimately it's the enemy's armies and military resources that fight the war. Destroying cities helps, especially when the means of war production are grounded in civilian activity, but it is almost never decisive.
One instructive example which Wilson provides is the burning of Atlanta and then Richmond during the American Civil War, which did little to crush the South's fighting ability or spirit. Another example is Napoleon's march into Russia; even after Moscow burned and scores of Russian cities were destroyed, Napoleon still lost because ultimately his army was defeated. These facts were conveniently ignored in the face of beliefs about bombing whose culmination seemed to be the destruction of Hiroshima. These beliefs were largely responsible for the arms race and the development of strategic hydrogen bombs, which were again expressly designed to bring about the annihilation of cities. But all this development did was raise the risk of accidental devastation. If we realize that the atomic bombing of Hiroshima and the general destruction of cities played little role in ending World War 2, almost everything we think we know about the power of nuclear weapons is called into question.
"Nuclear weapons are essential for deterrence".
Conventional thinking continues to hold that the Cold War stayed cold because of nuclear weapons. This is true to some extent. But it fails to acknowledge how many times the war threatened to turn hot. Declassified documents now provide ample evidence of near misses that could easily have led to nuclear war. The Cuban Missile Crisis is only the most well-known example of how destabilizing an effect nuclear weapons can have on the status quo.
The missile crisis is in fact a fine example of the gaps in conventional thinking about deterrence. Kennedy's decision to blockade Cuba is often touted as an example of mild escalation, and the resolution of the crisis itself is often held up as a shining example of how tough diplomacy can forestall war. But Wilson takes the opposite tack; he points out that the Soviets had made it clear that any action against Cuba would provoke war. Given the nature of the conflict, almost everybody understood that war in this case could mean nuclear war. Yet Kennedy chose to blockade Cuba, so deterrence does not seem to have worked on him. The consequent chain of events brought the world closer to nuclear devastation than we think. As we now know, there were more than 150 nuclear weapons in Cuba, which would have carpeted most of the eastern and midwestern United States and led to the deaths of tens of millions of Americans. A second strike would have caused even more devastation in the Soviet Union, not to mention in neighboring countries. In addition there were several relatively minor events which were close calls. These included the dropping of depth charges by the US Navy on submerged Soviet submarines, which almost caused one submarine commander to launch a nuclear torpedo; it was an unsung hero of the crisis named Vasili Arkhipov who prevented the launch. Other examples cited by Wilson include the straying of an American reconnaissance flight into Soviet airspace and the consequent scrambling of American and Soviet fighter aircraft.
One could add several other examples to the list of close calls; a later one would be the Able Archer exercise of 1983, which caused the Soviets deep anxiety and borderline paranoia. In addition, as documented in Eric Schlosser's book Command and Control, there were dozens of close calls - swiftly classified, of course - in the form of nuclear accidents which could have led to catastrophic loss of life. Deterrence is always touted as the ultimate counter-argument to the risks of nuclear warfare, but there are scores of examples where political leaders decided to escalate and provoke the other side in spite of it. In hindsight it looks like deterrence ultimately worked, but often only by a very slim margin. Add to this the fact that the vast network of nuclear command and control centers and protocols developed by nuclear nations is manned by fallible human beings; it is an example of a complex system subject to so-called "normal accidents". There is also no dearth of examples from the Cold War in which lowly technicians and army officers could have launched World War 3 because of miscalculation, misunderstanding or paranoia. The fact is that these weapons of mass destruction have a life of their own; they are beyond the ability of human beings to completely harness, because human weaknesses and flaws also have lives of their own.
The future
Nuclear weapons are often compared to a white elephant. A better comparison might be to a giant T. rex; one could possibly imagine a use for such a creature in extreme situations, but by and large it only serves as an unduly sensitive and enormously destructive creature whose powers are waiting to be unleashed onto the world. Having the beast around is just not worth its supposed benefits anymore, especially when most of these benefits are only perceived and have been extrapolated from a sample size of one.
Yet we continue to nurture this creature. Much progress has been made in reducing the nuclear arsenals of the two Cold War superpowers, but others have picked up the slack and continue to pursue the image and status - and not actual fighting capability - that they think nuclear weapons confer on them. The US currently has about 5000 weapons, including 1700 strategic ones, many of which are still on hair-trigger alert. This is still overkill by a huge margin. A hundred or so, especially on submarines, would be more than sufficient for deterrence. More importantly, the real elephant in the room is the spending on maintaining and upgrading the US nuclear arsenal; several estimates put this spending at around $50 billion. In fact the US is now spending more on nukes than it did during the Cold War. In a period when the economy is still not exactly booming and basic services like education and healthcare are underfunded, this kind of spending on what is essentially a relic of the Cold War should be unacceptable. In addition, during the Bush administration renewed proposals for "precision" munitions like the so-called Robust Nuclear Earth Penetrator (RNEP) threatened to lower the bar for the introduction of tactical nuclear weapons; detailed analysis showed that the fallout and other risks from such weapons far outweigh their modest usefulness. The current administration has also shown a dangerously indifferent, if not downright irresponsible, attitude toward nuclear weapons.
More importantly, experts have pointed out since the 1980s that technology and computational capabilities have improved to the point where conventional precision weapons can do almost all the jobs that were once imagined for nuclear weapons; the US in particular now has enough conventional firepower to protect itself and to overpower almost any nuclear-armed state with massive retaliation. It's worth noting the oft-quoted fact that the United States spends more on conventional weapons every year than the next several countries combined. Nuclear weapons as an instrument of military policy are now almost completely outdated even from a technical standpoint. But until zealous and paranoid politicians in Congress who are still living in the Cold War era are reined in, a significant reduction in spending on the nuclear arsenal doesn't seem to be on the horizon.
Fortunately there are renewed calls for the elimination of these outdated weapons. The risk of possible use of nuclear weapons by terrorists calls for completely new strategies, and does nothing to justify either the preservation of existing strategic arsenals or their pursuit by new and aspiring nuclear states. The most high-profile recent development has been the introduction of a bipartisan proposal by veteran policymakers and nuclear weapons experts Henry Kissinger, William Perry, Sam Nunn, George Shultz and Sidney Drell, who have called for the abolition of these weapons. Some would consider this plan a pipe dream, but nothing will be accomplished if we don't fundamentally alter our thinking about nuclear war. There are many practical proposals that would thwart the spread of both weapons and material, including careful accounting of reactor fuel by international alliances, securing all uranium and plutonium stocks, and blending down weapons-grade uranium into reactor-grade material, a visionary policy started in the 90s through the Megatons to Megawatts program. For me, one of the most poignant and fascinating facts about nuclear history is that material from Soviet ICBMs aimed at American cities now supplies about half of all American nuclear electricity.
Ultimately, as Wilson and others have pointed out, nuclear weapons will not go away unless we declare them to be pariahs. No number of technical remedies will cause nations to abandon them until we make these destructive instruments fundamentally unappealing and start seeing them at the very least as outdated dinosaurs whose technological usefulness is gone, and better still as immoral and politically useless tools whose possession taints their owner and invites international censure and disapproval. This is another myth that Wilson tackles, the myth that nuclear weapons are here to stay because they "cannot be uninvented". But as Wilson cogently argues, technologies don't go away because they are uninvented; they go away simply because they stop being useful. An analogy would be cigarettes, at one time seen as status symbols and social lubricants, whose risks have now turned them into nuisances at best. This strategy has worked in the past and it should work in the future. We can only make progress when the technology becomes unattractive, from a purely technical as well as a moral and political standpoint. But key to this is a realistic appraisal of the roles the technology played at its conception. In the case of nuclear weapons that mythic appraisal was created by Hiroshima. And it's time we destroyed that myth.