Solomon Snyder on academic publishing: ask for adequate, not exhaustive, documentation

Image: Corpus Callosum

Renowned neuropharmacologist Solomon Snyder has a thought-provoking take on what seems to be one of the two evils that have plagued modern academia: publication (the other is the job market). I have previously blogged about the increasing conservatism of academic publishing myself, and in this case “conservatism” also translates to “excessive rigor”.

Snyder starts by lamenting the startling fact that the average time it takes a modern American biomedical scientist to launch an academic career is about the same as that for a neurosurgeon or cardiovascular surgeon, people whose specialties are usually considered to be in the top tier of their profession; the difference, of course, is that a cardiovascular surgeon starts out making $500K right off the bat, while a new assistant professor starts at $80K and almost never goes beyond $200K or so. The long trudge begins with graduate education, whose average duration has stretched out over the last three decades (these days, a five-year Ph.D. is considered relatively quick). Every part of the academic process, from getting a postdoctoral position to landing your first job to winning your first grant, has turned into a war of attrition. The “winners” who emerge at the end of it are often demoralized academics in their early 40s whose best years may be behind them. And the situation seems only to be getting worse.
But the article is really about publishing papers. Snyder hits the nail on the head when he says that academic publishing has become so rigorous in demanding exhaustive experimentation and documentation that it dissuades many authors from publishing their best ideas – ideas which are interesting and valid but which may not have been completely fleshed out. He points to reviewers’ insistence that authors perform a comprehensive set of experiments – often stretching over several months – before their manuscript qualifies for publication. Anyone who has tried to publish biomedical papers is well aware of how tedious and demoralizing the experience can be. This long-drawn-out process significantly slows the progress of science:
“Why does it take so much longer to move from test tube to the printed page? One element is a journal review process that is substantially lengthier, especially in terms of experiments required to address the concerns of referees. To anticipate such referee responses, scientists preemptively carry forward experimentation more exhaustively than is necessary to document their assertions. Yet, we can clone genes in a couple of days. Shouldn’t we be able to complete experiments to satisfy reviewers in a few weeks rather than the 7–12 months typically consumed in revision, not to mention the many years devoted to developing the original manuscript? If one spends 5 years accumulating the data for a manuscript and another year revising it to satisfy referees, benefits to the public are delayed for years.”
In contrast, Snyder points to his postdoctoral advisor, the legendary Julius Axelrod at the NIH, who churned out discovery after discovery in short order and won a Nobel Prize (the Axelrod dynasty is nicely charted in Robert Kanigel’s book “Apprentice to Genius”). Snyder’s point is that in those days the reviewing process was much quicker, and yet the quality of science does not seem to have suffered for the speedier turnaround. What has gone wrong since then?
Snyder partially places the blame at the feet of Cell founder Benjamin Lewin, who wanted Cell to showcase papers that were essentially complete stories, from hypothesis to final product. But Lewin also made the process highly streamlined. Reviewers were warned to stay away from insults, stick to succinct criticism, and suggest adequate but not unrealistic experiments and further studies. The objective was to get the best science out in a form that was interesting enough to spark further inquiry but was not necessarily the last word.
Lewin understood the piecemeal nature of science, in which researchers build on each other’s discoveries. This understanding of the scientific process has since been subverted by academic reviewers, partly to cull a flood of proposals and ideas and partly to satisfy their own whims. Sometimes old boys’ networks can conspire to put sound science in a straitjacket. Expecting every research project to tell a complete, final story not only imposes unrealistic and demotivating standards on scientists but also ignores the always incomplete and provisional nature of science. Snyder asks that expectations for accepting papers be changed and points to recent developments like the journal eLife, which incorporates some of his thinking. Blogger SciCurious suggests her own system of peer review in which a paper is simultaneously sent to a group of journals with different standards; after hearing back from reviewers, the authors can decide whether to push ahead with further experiments to satisfy the top-tier journals or to publish the paper in a lower-tier journal right away. But Snyder’s perspective suggests that all journals – whether top tier or otherwise – should have a reviewing system that allows for rapid dissemination of results.
Reviewers and authors need to seriously contemplate Snyder’s recommendations. Academic research has already turned into a long slog, with its uncertain job market and draconian grant approval process, and does not need to face additional difficulties in the form of glacial and unrealistic reviewing standards. Let’s remember that the purpose of science is to generate ideas, not products. And it shouldn’t take very long for ideas to see the light of day.
First published on Scientific American Blogs.


  1. Why bother going back to the previous system, which was obviously changed because it had flaws? I can see a future with a radically different, open-access world of publication. In this world, scientists constantly update their work online in real time, allowing others to follow their work as they do it, rather than waiting until a paper is published a few years down the road.

  2. Yes, open access would be the best system. Snyder is recommending an upgrade to the current journal-based system, which I don't see disappearing completely anytime soon.

  3. I agree with much of what Snyder says, but we need to be clear that not every additional experiment suggested by reviewers is needless – as is often suggested; see for example:

    The distinction between a wholly necessary experiment, an experiment that will bolster the conclusions, and an experiment that ought to form an interesting part of a future study is one that often needs to be made by a good editor - the kind of editor I aspire to be.

    Good editors can be academically based, doing the job on a voluntary basis, or professional (I'm the latter), and while that makes me biased, I don't think there is an obvious division of academic editors good/professional editors bad (or vice versa), although this is a commonly stated position.

    From what I've seen the biggest problems arise when the relationship between authors and reviewers is treated like a student-teacher relationship. It's called peer review for a reason - there is no good reason to always believe the reviewer is on the right side of the argument. It's about weighing the arguments and making a decision.

    1. Thanks Stephen, you are quite right that a discerning editor can make all the difference. Reviewers do indeed often act as if they have been tasked with instructing the authors in a particular discipline.

