Malcolm Gladwell's "The Bomber Mafia" - Weak Tea

Just finished reading Malcolm Gladwell's new book, "The Bomber Mafia", and am sorely disappointed. It's as if Gladwell expanded a short blog post into a 150-page book that's big on storytelling but a complete lightweight when it comes to content and conclusions.

The basic thesis of the book can be summed up in a few short sentences: During WW2, there was a group of air force officers led by Haywood Hansell, called the Bomber Mafia, who thought that they could bomb cities "more morally" through daytime precision bombing; they were pinning their hopes on a revolutionary invention, the Norden bombsight, which supposedly allowed bombardiers to pinpoint targets from 15,000 feet. In reality, the philosophy failed miserably because the bombsight was less than perfect under real conditions, because even with the bombsight the bombs were not that precise, and most crucially because in Japan the hitherto undiscovered jet stream, which buffeted airplanes with 120-knot winds, made it essentially impossible for the B-29s to stabilize and successfully bomb their targets.

Enter Curtis LeMay, the ruthless air force general who took the B-29 down to 5,000 feet to avoid the jet stream, ripped most of the guns out and, instead of precision bombs, used incendiary bombs filled with napalm at night to burn down large built-up civilian areas of Japanese cities with horrific casualties - the most famous incident of course being the March 1945 firebombing of Tokyo that killed over 100,000 people.

Gladwell tells these stories and others, like the invention of napalm, well, but the core message of the book is that the switch from precision bombing under Hansell, which failed, to strategic bombing under LeMay, which presumably worked, was the linchpin air strategy of the war. This message is highly incomplete and a gross oversimplification. The fact of the matter is that strategic bombing did very little damage to morale and production until very late in the war. And while strategic bombing in Japan was more successful, the bombing in Europe did not work until the bombers were accompanied by long-range P-51 Mustang fighters, and even then its impact on shortening the war was dubious. Even in Japan, strategic bombing could have been hobbled had the Japanese had better fighter defenses the way the Germans did. The Germans used a novel method of firing called "Schräge Musik" that allowed their fighters to shoot at the British Lancaster bombers vertically from below - if the Japanese had mounted similar efforts, they would likely have been devastating to LeMay's strategy. Even from a civilian standpoint, the strategic bombing of Dresden and Hamburg did little to curb either morale or production. But in talking only about Tokyo and not Dresden or Hamburg, only about Japan and not Europe, Gladwell leaves the impression that strategic bombing was pretty much foolproof and always worked. These omissions are especially puzzling since he does discuss the lack of effectiveness of the bombing of civilians in London during The Blitz.

There are very few references in this short book - Gladwell leans almost entirely on two historians, Tami Biddle and Stephen McFarland, for most of the discussion. These are fine historians, but the superficial treatment is especially jarring because strategic bombing has been written about extensively over the last several decades by historians like Richard Overy and Paul Kennedy. The Nobel Prize-winning physicist Patrick Blackett wrote about the mistaken assumptions behind strategic bombing as far back as the 1950s. I would also recommend physicist Freeman Dyson's essays on the part he himself played in strategic bombing during the war, which really drive home how boneheaded the method was. But Gladwell quotes none of these sources, instead just focusing on Haywood Hansell and Curtis LeMay as if they and their thoughts on the matter were the only things that counted.

Perhaps worst of all, the complex moral consequences of LeMay's argument and of strategic bombing in general are completely sidelined, except for a short postscript in which Gladwell discusses how much better precision bombing has gotten (though in that case the moral consequences have also grown more complex, precisely because bombing has become easier). Strategic bombing was wasteful and morally unforgivable because it cost both pilot and civilian lives. LeMay generally receives very favorable treatment and there are copious quotes from him, but interestingly the one quote that is missing is one that might have shed a different perspective - his remark after the war that he would have been hanged as a war criminal had the Allies lost.

I really wish this book were better, given Gladwell's fine storytelling skills which can draw the reader in. As it stands it's slim pickings: a couple of anecdotes and stories dressed up as a grand philosophical message in 150 pages that leaves the reader completely unsatisfied. If you are really interested in the topic of bombing during WW2, look at other sources.

The human problems with molecular modeling

Molecular modeling and computational chemistry are the neglected stepchildren of pharmaceutical and biotech research. In almost every company, whether large or small, these disciplines are considered "support" disciplines, peripheral to the main line of research and never at the core. At the core instead are synthetic chemistry, biology and pharmacology, with ancillary fields like formulations and process chemistry becoming increasingly important as the path to a drug progresses.

In this post I will explore two contentions:

1. Unless its technical and human problems are addressed, molecular modeling and simulation will remain peripheral rather than core fields in drug discovery.

2. The overriding problem with molecular modeling is the lack of a good fit between tools and problems. If this problem is addressed, molecular modeling stands a real chance of moving from the periphery to, if not the very core, at least close to the core of drug discovery.

There are two kinds of challenges with molecular modeling that practitioners have known about for a long time - technical and human. The technical problems are well known: although great progress has been made, we still can't model the details of biochemical systems very accurately, and key aspects of these systems like protein motion and water molecules - and, in the case of approaches like machine learning, the lack of adequate benchmarks and datasets - continue to thwart the field.

However, in this piece I will focus on the human problems and explore potential ways of mitigating them. My main contention is that the reason modeling often works so poorly in a pharmaceutical setting is that the incentives of modelers and other scientists are fundamentally misaligned.

In a nutshell, a modeler has two primary objectives: to make predictions about active, druglike molecules, and to validate the models they are using. The second part is actually a prerequisite for the first - without proper validation, a modeler cannot know whether the problem space they are applying their models to is a valid application of their techniques. For proper validation, two things are necessary:

1. That the synthetic chemist actually makes the molecules the modeler suggests.

2. That the synthetic chemist does not make molecules the modeler hasn't suggested.

In reality, the synthetic chemist who takes up the modeler's suggestions has little to no interest in model validation. As anyone who has done modeling knows, when a modeler suggests ten compounds to a synthetic chemist, the chemist will typically pick 2 to 5 out of those 10. In addition, the chemist might make 5 other compounds which the modeler never recommended. The modeler typically also has no control or authority over ordering compounds themselves.

The end result of this patchwork implementation of the modeler's predictions is that they never know whether their model really worked. Negative data are a particular problem, since synthetic chemists are almost never going to make molecules that the modeler thinks will be inactive. You are therefore left with a scenario in which neither the synthetic chemist nor the modeler knows or is satisfied with the utility of the models. No wonder the modeler is relegated to the back of the room during project discussions.
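
To make this concrete, here is a minimal sketch in Python - with entirely made-up compound names and assay outcomes - of what cherry-picked synthesis does to the numbers a modeler would use to validate a model:

```python
# Hypothetical example: a modeler calls 5 of 10 suggested compounds active.
predictions = {          # True = predicted active
    "cmpd-01": True, "cmpd-02": True, "cmpd-03": True, "cmpd-04": True,
    "cmpd-05": True, "cmpd-06": False, "cmpd-07": False, "cmpd-08": False,
    "cmpd-09": False, "cmpd-10": False,
}

assay_results = {        # what the chemist actually made and tested:
    "cmpd-01": True,     # 3 of the 5 predicted actives...
    "cmpd-03": False,
    "cmpd-04": True,
}                        # ...and none of the predicted inactives

tested = [(predictions[c], assay_results[c]) for c in assay_results]
tp = sum(pred and active for pred, active in tested)      # true positives
fp = sum(pred and not active for pred, active in tested)  # false positives
fn = sum(not pred and active for pred, active in tested)  # false negatives

print(f"precision on tested set: {tp / (tp + fp):.2f}")  # 0.67
# Recall comes out as a perfect 1.00 - but only because predicted inactives
# are never synthesized, so a false negative can never be observed. The
# number reflects the chemist's choices, not the model's quality.
print(f"recall on tested set: {tp / (tp + fn):.2f}")      # 1.00
```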

There is another fundamental problem the modeler faces, one which is actually more broadly applicable to drug discovery scientists. In one sense, not just modeling but all of drug discovery - including devices, assays, reagents and models - can be considered a glorious application of tools. Tools only work if they are suited to the problem. If a practitioner thinks the tool is unsuited, they need to be able to say so and decline to use it. Unfortunately, incentive structures in organizations are rarely set up for employees to say "no"; hearing it is often regarded as an admission of defeat or an unwillingness to help out. This is a big mistake. Modelers in particular should always be rewarded if they decline to use modeling and can give good reasons for doing so. As it stands, because they are expected to be "useful", most modelers end up indiscriminately applying their tools to problems, no matter what the quality of the data or the probability of success. This means that quite often they are simply using the wrong tool for the wrong problem. Add to this the aforementioned unwillingness of synthetic chemists to validate the models, and it's little surprise that modeling so often fails to have an impact and is relegated to the periphery.

How does one address this issue? In my opinion, it can be mitigated to a significant extent if modelers know something about the system they are modeling and the synthesis that will yield the molecules they are predicting. If a modeler can give sound reasons based on assays and synthesis - perhaps the protein construct they are using for docking is different from the one in the assay, perhaps the benchmarks are inadequate, or perhaps the compounds they are suggesting won't be amenable to easy synthesis because of a weird ring system - other scientists are more likely both to take their suggestions seriously and to respect their unwillingness to use modeling for a particular problem. The overriding philosophy a modeler follows should be captured not in the question "What's the best modeling tool for this problem?" but in "Is modeling the right tool for this problem?". So the first thing a modeler should know is whether modeling would even work; if it won't, he or she will go a long way toward gaining the respect of their organization if they can say at least a few intelligent things about alternative experimental approaches or the experimental data. There is no excuse for a computational chemist not to be a chemist in the first place.

More significantly, in my opinion this mismatch will not be addressed until modelers themselves are in the driver's seat - until they can ensure that their predictions are tested in their entirety. Unfortunately, modelers have little control over the testing of their models; much of it simply depends on how much the synthetic chemists trust the modelers, a relationship driven as much by personality and experience as by modeling success. Even today, modelers usually can't simply order their compounds for synthesis from internal or external teams.

Fortunately, there are two very significant recent developments that promise modelers an unprecedented degree of control and validation. One is the availability of cheap CROs like WuXi and Enamine which can make many of the compounds that modeling predicts. These CROs have driven the cost down so significantly that, importantly, even negative predictions can now be tested. In general, the big advantage of external CROs over internal chemists is that you can dictate exactly what the CROs should and shouldn't make - they won't make compounds you don't recommend and they will make every compound you do; the whims of personal relationships make no difference in a fee-for-service structure.

More tantalizingly, there have been a few success stories now of fully computationally driven pipelines, most notably Nimbus and Morphic Therapeutic and, more recently, Silicon Therapeutics. When I say "fully computationally driven" I don't mean that synthetic chemists have no input - the inaccuracy of computational techniques precludes fully automated molecule selection from a model - what I mean is that every compound is a modeled compound. In these organizations the relationship between modeling and other disciplines is reversed: computation is front and center - at the core - and it's synthetic chemistry and biology, in the form of CROs, that are at the periphery. These organizations can ensure that every single prediction made by modelers is made and tested, or conversely, that no molecule that is made and tested fails to go through the computational pipeline. At the very least, you can then keep a detailed bookkeeping record of how designed molecules perform and therefore validate the models; at best, as some of these organizations have shown, you can discover viable druglike leads and development candidates.
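
As a minimal sketch of what such bookkeeping might look like - the class and method names here are hypothetical, not any particular company's system - every compound must be registered with a model prediction before it can be ordered, so the designed and tested sets coincide by construction:

```python
from dataclasses import dataclass, field

@dataclass
class DesignLedger:
    """Hypothetical ledger: every compound passes through the model first."""
    records: dict = field(default_factory=dict)

    def register(self, compound_id: str, predicted_active: bool) -> None:
        # Design-time entry: no compound is ordered without a prediction.
        self.records[compound_id] = {"pred": predicted_active, "result": None}

    def report(self, compound_id: str, measured_active: bool) -> None:
        if compound_id not in self.records:
            raise ValueError(f"{compound_id} bypassed the computational pipeline")
        self.records[compound_id]["result"] = measured_active

    def accuracy(self):
        # Fraction of tested compounds whose prediction matched the assay -
        # counting negative predictions too, the data that internal medicinal
        # chemistry rarely generates.
        done = [r for r in self.records.values() if r["result"] is not None]
        return sum(r["pred"] == r["result"] for r in done) / len(done) if done else None

ledger = DesignLedger()
ledger.register("cmpd-11", predicted_active=True)
ledger.register("cmpd-12", predicted_active=False)  # negative prediction, still ordered
ledger.report("cmpd-11", measured_active=True)
ledger.report("cmpd-12", measured_active=False)
print(ledger.accuracy())  # 1.0 on this toy example
```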

Computational chemistry and modeling have come a long way, but they have a long way to go in terms of both technical and organizational challenges. Even if the technical challenges are solved, the human challenges are significant and will hobble the influence computation has on drug discovery. Unless incentive structures are aligned, these fields will continue to have poor impact and sit at the periphery. The only way for them to progress is for computation to be in the driver's seat and for computational chemists to be as informed as possible. Fortunately, with the commodification of synthesis and the increased funding and interest in computationally driven drug pipelines, it seems there may be a chance for us to find out how well these techniques work after all.

Book review: Charles Seife’s “Hawking Hawking”

I still remember the first time I encountered “A Brief History of Time”. I must have been in high school. I marveled at the elfin-looking bespectacled man on the cover who looked like an alien. And were the contents the very definition of exotic or what. Clearly I understood very little of what was written about black holes, the Big Bang and quantum theory, but the book definitely got me hooked on both cosmology and Stephen Hawking, and cemented the image of the scientist in my mind as some kind of otherworldly alien superintelligence.

Now I have just finished Charles Seife’s unique, must-read contribution to Hawking biography, “Hawking Hawking”, and realize that that image was in fact the intended effect. Seife’s book does a first-rate job of stripping away the myth-making, hype and self-promotion from the celebrity and revealing the man inside in all his triumph and folly. The achievement is all the more remarkable since Seife did not have access to Hawking’s personal papers and family members, resources which the foundation set up after his death guards carefully in order to preserve his image.

The book recounts several episodes of Hawking being very human: opposing scientists who disagreed with his ideas and trying to hobble their professional advancement, playing favorites and denying credit to others, neglecting and mocking his wife and her work in the humanities, and making pronouncements, especially in his last years, about topics far beyond his expertise - pronouncements which the media and the public held up as sacrosanct, an image that he not only did little to dispel but often encouraged. Of course, all scientists can occasionally be cruel, vain, jealous and egotistical, but these qualities of Hawking were hidden behind a blitz of media publicity.

And yet the book is not a takedown in any way. It acknowledges Hawking’s brilliant and important contributions to science, especially his key discovery of Hawking radiation, which married general relativity and quantum theory in a tour de force of calculation. Seife sensitively describes how much Hawking struggled because of his terrible disease, and how ambivalent he was about the media and public highlighting his disability. Much of the public never understood how hard even doing calculations was for him, even aided by his powerful memory and remarkable imagination. It’s not surprising that a lot of his best work was done with collaborators, brilliant scientists in their own right whose names the public never remembered.

Ultimately, although Hawking contributed a good deal to the self-promotion and myth-making himself, he seems to have been much more in touch with his inner human being than he let on. In distinguishing what was real from what was hype, Seife gives Hawking his rightful place in science, not as another Newton or Einstein but as Stephen Hawking.

Hawking Hawking: The Selling of a Scientific Celebrity https://www.amazon.com/dp/1541618378/ref=cm_sw_r_cp_api_glt_i_8H7P48KA10T7XX43N1VN

Chandra and Johnny come close to discovering black holes

This is from Jagdish Mehra and Helmut Rechenberg's monumental "The Historical Development of Quantum Mechanics, Vol. 6, Part 2". With Chandrasekhar's facility with astrophysics and von Neumann's with mathematics, there is little doubt in my mind that they would have succeeded.

As it happened, it was Oppenheimer and his student Hartland Snyder who wrote the decisive paper describing black holes in 1939.

The timing was bad, though; on the same day that the paper came out in the Physical Review, Germany attacked Poland and started World War 2. Far more consequential was another paper published on the same day in the same issue - John Wheeler and Niels Bohr's liquid drop model of nuclear fission.

"Hawking Hawking" and Michio Kaku

Two items of amusement and interest. One is a new biography of Hawking by Charles Seife, coming out tomorrow, that attempts to close the gap between Hawking’s actual scientific accomplishments and his celebrity status. Here's a good review by top science writer and online friend Philip Ball:

Seife's Hawking is a human being, given to petty disputes over priority and one-upmanship, and often pontificating in platitudes on fields beyond his expertise. I used to have similar thoughts about Hawking myself but thought that his pronouncements were largely harmless fun. My copy of Seife's book arrives tomorrow and I am looking forward to his views, especially his take on how much it was the media rather than Hawking himself that fueled the exaggerations and the celebrity status.

The second item is an interview with Michio Kaku which seems to have ruffled a lot of feathers in the physics and science writing communities.

The critics complain that he distorts the facts and says highly misleading things - for instance, that string theory led directly to the standard model. The complaints are legitimate, but my take on Kaku is different. I don’t think of him as a science writer but as a futurist, fantasist and storyteller. I think of him rather like E. T. Bell, whose “Men of Mathematics”, while highly romanticized and inaccurate in its details, nevertheless got future scientists Freeman Dyson and John Nash interested in math as kids. I doubt whether either Kaku himself or his readers take the details in his books very seriously.

I think we should always distinguish between writers who write about facts and writers who tell stories. While you should be as rigorous as possible when writing about facts, you are allowed considerable leeway and speculation when telling stories. If not for this leeway, there wouldn't be any science writers, and certainly no science fiction writers. A personal memory: my father was a big fan of Alvin Toffler's "Future Shock" and other futuristic musings. But he never took Toffler seriously as a writer on technology; rather, he thought of him as an "ideas man" whose ideas were raw material for more serious considerations. If Kaku's writings get a few kids excited about science and technology the way "Star Trek" did, his purpose will have been served.

Six lessons from the biotech startup world

Having worked for a few biotech startups over the years, while I am not exactly a grizzled warrior, I have been around the block a bit and have drawn some conclusions about what seems to work and not work in the world of small biopharma. I don't have any grand lessons about financial strategy, funding or IPOs, nor any special insights - just some simple observations about science and people based on a limited slice of the universe. My suspicion is that much of what I am saying will be familiar.

1. It's about the problem, not about the technology

Many startups are founded with a particular therapeutic area in mind, perhaps a particular kind of cancer or metabolic disease to address. But some are also founded on the basis of an exciting new platform or technology. This is completely legitimate as long as there is also a concomitant list of problems that can be addressed by that platform. If there isn't, then you are in the proverbial hammer-trying-to-find-a-nail territory, being tool-oriented rather than problem-oriented. The best startups I have seen do what it takes to address a problem, sometimes even pivoting from their original toolkit. The not-so-great ones fall in love with the platform and technology so much that they keep generating results from it in a frenzy that may or may not be applicable to a real problem. No matter how amazing your platform may be, it's key to find the right problem space as soon as you can. Not surprisingly, this is especially an issue in Silicon Valley, where breathless new technology is often the founding platform for companies. Now I am as optimistic and excited about new technology as anyone else, but with new technological vision must come rigorous scrutiny that allows constant validation of the path you are on and course-correction if that path looks crooked.

A corollary of this obsession with tools comes from my own field of molecular modeling and structure-based drug design. I have said before that the most important reason computational chemistry stays at the periphery rather than the core of drug discovery is that it's not matched to the right problems. And while technical challenges still play a big role in the failure of the field - the complexity of biology usually far overshadows the utility of the tools - the real problem in my view is cultural. In a nutshell, modelers are not paid for saying "no". A modeler constantly has to justify his or her utility by applying the latest and greatest tools to every kind of problem. It doesn't matter if the protein structure is poorly resolved; it doesn't matter if the SAR is sparse; it doesn't matter if you have one static structure for a dynamic protein with many partners - the constant clink of your hammer in that corner office must be heard if your salary is to be justified. It's even more impressive, and correspondingly more futile, if you are using The Cloud or a whole bank of GPUs for your calculations (there are certainly some cases where sheer computing power can make a difference, but these are rare). There are no incentives for you to say, "You know what, computational tools are really not the best approach to this problem given the paucity and quality of the data." (As Werner Heisenberg once said, an expert is someone who knows what doesn't work.)

But it goes both ways. Just as management needs not just to allow but to reward this kind of judicious selection and rejection of tools, it really helps if modelers know something about assays, synthesis and pharmacology so that they can offer an alternative to using modeling; otherwise you are just cursing the darkness instead of lighting a candle. They don't need to be experts, but having enough knowledge to make general suggestions helps. In my view, having a modeler say, "You know what, I don't think current computational tools are the best way to find inhibitors for this protein, but have you tried biophysical assay X?" can be music to the ears.

2. Assays are everything

In all the startups I have worked at, no scientist has been more important to success in the early stages of a drug discovery project than the assay expert. Having a well-designed assay that mirrors the behavior of a protein under realistic conditions is worth a thousand computer models or hundreds of hours spent around the whiteboard. Good assays can both test and validate the target. Conversely, a badly designed assay, one that does not recapitulate the real state of the protein, can not only doom the project but also lead you down a rabbit hole of false positives. No matter what therapeutic area or target you are dealing with, there will be few more important early hires than people who know the assays. And assays are all about the details - things like salt and protein concentration, length of construct, mutations - things known only by someone who has learnt them the hard way. The devil is always in the details, but he really hides in the assays.

3. Outsourcing works great, except when it doesn't

Most biotechs now outsource key aspects of their processes, like compound synthesis, HTS and biophysical assays, to CROs. And this works fine in many cases, except when that devil in the details rears his head. The problem with many CROs is that while they may do a good job of executing the task, they then throw the results over the wall. The details are lost, and sometimes you don't even know you are going down a rabbit hole when that happens. I remember one example where the contamination of a chip in an SPR binding assay threw off our results for a long time, and it took a lot of forensic work and back and forth to figure this out. Timelines were set back substantially and confusion reigned. CROs need to be as collaborative and closely involved as internal scientists, and when this doesn't happen you can spend more time fixing that relationship than actually solving your problem - needless to say, the best CROs are very good at this kind of collaborative work. And it's important not just to have collaborative CROs but to have access to as many details as possible in case a problem arises, which it inevitably does.

4. Automation works great, except when it doesn't

The same problems that riddle CRO collaborations riddle automation. These days some form of automation is fairly common for tools like HTS, what with banks of liquid-handling robots hopping rapidly and merrily over hundreds of wells in plates. And it again works great for pre-programmed protocols. But simple problems of contamination, efficiency and breakdowns - spills, robotic arms getting stuck - can afflict these systems, especially in more cutting-edge areas like synthesis; one thing you constantly discover is that the main problem with automation is not the software but the hardware. I have found that the same caveats apply to automation that Hans Moravec applied to AI - the hard things are easy and the simple things are hard. Getting that multipipetting robot to transfer nanoliters around blazingly fast is beyond the ability of human beings, but that robot won't be able to look at a powder and determine if it's fluffy or crystalline. Theranos is a good example of the catastrophe that can result when the world of well-defined hard robotic grippers and vials meets the messy world of squishy cells, fluffy chemicals and fluids like blood (for one thing, stuff behaves very differently at small scale). You know your automation has a problem when you are spending more time babysitting it than doing things manually. It's great to be able to use automation to free up your time, but you need to make sure that it's actually doing so and generating accurate results.

5. The best managers delegate

Now a human lesson. I have had the extraordinary good fortune of working for some truly outstanding scientists and human beings, some of whom have become good friends. And I have found that the primary function of a good manager is not to get things done through their reports but to help them grow. The best way to encapsulate sound managerial thinking is Steve Jobs's famous quote: "It doesn't make sense to hire good people and tell them what they should do. We hire good people so that they can tell us what to do." The best managers I have worked with delegate important responsibilities to you, trust that you can get the job done, and then check in occasionally on how things are going, leaving the details and execution to you. Not only does this provide a great learning experience but, more importantly, it helps you feel empowered. If your manager communicates how important the task entrusted to you is for the entire company and how they trust you to do it well, the sense of empowerment this brings is enormous and you will usually do the job well (if you don't, it's a good sign for both you and your manager that things are not going well and a conversation needs to be had).

Bad managers are of course well known - they micromanage, constantly tell you what you should do and are often not on top of things. And while this is an uncomfortable truth to hear, often the best scientists are also the poorest managers (there are exceptions of course - Roy Vagelos, who led Merck during its glory days, excelled at both). One of the best scientists I have ever encountered wisely and deliberately stayed away from the senior managerial positions that repeatedly came his way. There are few managers worse than distracted scientists.

6. Expect trouble and enjoy the journey

I will leave the most obvious observation for last. Biology and drug discovery are devilishly complicated, hard and messy. After a hundred years of examining life at the molecular level, we still haven't figured it out. Almost every strategy you adopt, every inspired idea you have, every new million-dollar tranche of funding you sink into your organization, will fail. No model will be accurate enough to capture the real-life workings of a drug in a cell or a gene that's part of a network of genes, and you will have to approximate, simplify, build model systems and hope for the best. And on the human side, you will have disagreements and friction that should always be handled with considerateness and respect. Be forgiving of both the science and the people, since both are hard. But in that sense, getting to the right answer in biotechnology is like building that "more perfect union" Lincoln talked about. It's a goal that always seems to be one step beyond where you are, but that's precisely why you should enjoy the journey, because you will find that the gems you uncover on the way make the whole effort worth it.