Field of Science

The anatomy of peer review: Why airing dirty laundry in public is important

Since everyone is talking about Ron Breslow, I thought I might bring readers' attention to a truly fascinating (from the point of view of the sociology of science) article published in Nature in 1992 regarding two of Breslow's papers. The article was written by Prof. Fred Menger (Emory) and Prof. Albert Haim (Stony Brook) and details what happened when the two tried to publish a rebuttal addressing errors in two Breslow articles from the Journal of the American Chemical Society (JACS).

First I want to emphasize why I am writing about this episode. I write about it not because I want to add to Breslow's troubles. I don't want it to sound like I am kicking someone when they are down, nor do I think that I will ever have the capacity to do that to someone like Breslow. As I have reiterated elsewhere, neither this account nor the recent controversy should blind us to an unambiguous fact: Breslow's contributions to research, service and education have been outstanding by any standard. Most scientists would be lucky to accomplish half of what he has done during an unusually long and productive career that has still not slowed down. We can all hope that people will continue to remember him for his superlative accomplishments.

So I am not writing this post because I want to add further criticism of Breslow. It's because I have always thought that the Nature article is a unique document in the history and sociology of chemistry, a great illustration of the pitfalls and promises of peer review that deserves a wider audience. I think laymen (who don't have journal access) will find its contents very interesting, and there are lessons in there for fellow scientists too. I do want to add a disclaimer: Prof. Menger was on my Ph.D. committee and I have enormous respect for his research, but as someone interested in the way science functions, I would have found this paper equally fascinating even if I had absolutely no connection to him. There were several other papers related to this incident, but the one in Nature stands out as a unique example of scrutiny.

The article is one of the most remarkable publications I have ever read, for two reasons. Firstly, because it presents a rare glimpse into what we might call the "anatomy" of peer review, and it does this in excruciating detail, warts and all. Secondly, because it makes you wonder whether a journal like Nature would ever again have the inclination to publish something like this.

The whole episode germinated when Breslow and a pair of graduate students (Eric Anslyn and Deeng-Lih Huang) published two papers in JACS dealing with the kinetics of an imidazole-catalyzed nucleotide cleavage reaction. This is standard stuff in physical organic and bioorganic chemistry and is part of a large body of work going back 50 years. What was interesting was that the authors seemed to derive negative rate constants for some of the reactions. Now, even college students will recognize this as odd: a rate constant is supposed to indicate the speed of a reaction. If it's positive, the reaction proceeds. If it's zero, the reaction halts. But what is a negative rate constant supposed to mean? Of course the original authors had their own interpretation of their numbers. Independently, Menger and Haim investigated this question and found several rather significant problems with the two papers. The technical details can only be appreciated by a physical organic chemist, but the main problem seemed to lie in the treatment of the background reaction with water in the absence of imidazole. It seemed that at least some of the "negative" rate constants were artifacts of the data fitting.
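
To make that last point concrete, here is a toy illustration (my own sketch, with invented numbers rather than anything from the JACS papers or the rebuttals) of how an apparent negative rate constant can drop out of a straight-line fit when the runs containing imidazole happen to be slower than the water-only background:

    import numpy as np

    # Hypothetical, invented data: observed pseudo-first-order rate constants that
    # decrease as imidazole is added, i.e. the water-only background is the fastest run.
    imidazole_conc = np.array([0.0, 0.1, 0.2, 0.4, 0.8])    # [imidazole] in M
    k_obs = np.array([5.0, 4.6, 4.3, 3.7, 2.5]) * 1e-4      # observed k in 1/s, all positive

    # Fit k_obs = k_water + k_im * [imidazole]; the slope plays the role of the
    # "catalytic" rate constant extracted from the data.
    k_im, k_water = np.polyfit(imidazole_conc, k_obs, 1)

    print(f"fitted background k_water = {k_water:.2e} 1/s")
    print(f"fitted 'catalytic' k_im   = {k_im:.2e} 1/(M*s)")
    # k_im comes out negative even though every measured rate constant is positive:
    # the negative number is an artifact of the fit, not a physically meaningful rate constant.

Whether something of this general sort is what happened in the original analysis is exactly the question Menger and Haim took up; the sketch is only meant to show how a "negative rate constant" can emerge from perfectly positive data once a faster background reaction is fitted or subtracted away.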

The publication saga started when Menger and Haim independently submitted papers to JACS criticizing the original papers and offering corrections. The editor of JACS at the time was Alan Bard, an internationally renowned chemist who served a distinguished stint as the journal's editor for several years. Menger's paper was rejected by the reviewers with some odd and rather self-contradictory commentary. On the one hand, the reviewers acknowledged the errors in the Breslow papers, but on the other they inexplicably chose to reject the manuscript, "hoping" that Breslow and Huang would publish a more detailed explanation and correction. It wasn't clear why they would not let Menger himself publish a correction in the journal.

Haim's paper was also rejected by the journal with similar comments. At this point both Menger and Haim wrote to Bard and the associate editor in charge of the manuscripts. They appealed to JACS's editorial policy, which encouraged the submission of manuscripts detailing major errors in published material. This appeal did not have much effect. Something strange also transpired at this juncture: as Menger details, Breslow sent a rather interesting note to Haim, saying: "I don't know what you are so excited about. Are you being led astray by a notoriously unstable individual?".

Following their unsuccessful attempt at publication in JACS, both Menger and Haim did what scientists always do: try to publish elsewhere. Haim first sent his manuscript to the Journal of Physical Chemistry (JPC). JPC's response was about as strange as JACS's; while acknowledging the problems with the original papers, they too chose not to publish Haim's manuscript. Menger in turn sent his paper to the Journal of Organic Chemistry (JOC). He seems to have found a much more sympathetic audience in the journal's chief and associate editors, Clayton Heathcock and Andrew Streitwieser, both leading researchers. The paper was accepted without much ado by Streitwieser, who wondered how the original paper had made it past the JACS referees.

The story does not end there. Menger's JOC paper stimulated a number of largely favorable responses. An especially noteworthy response was from a Nobel Laureate, who sent a note to Bard saying, 

"It seems to me there is a long-established scientific etiquette which says that papers pointing out errors should be published in the same journal in which the original article appeared". 

But these encouraging exchanges were followed by a few calls for retraction of the JOC paper, one of them from Breslow. At least some of these raised legitimate scientific questions. This prompted Heathcock to ask for clarification, which Menger duly provided. Streitwieser accepted Menger's responses and the paper was published. To the journal's credit, it even permitted a footnote in which Menger said that his attempts to publish his paper in JACS had been rejected.

Meanwhile, Haim was still trying to publish his correction in JACS. He sent a revised, more thorough manuscript to Bard, asking that the original associate editor who had rejected the manuscript be replaced by a fresh pair of eyes. Bard did not agree to this request, but asked Haim to recommend ten referees for the paper. The article was sent to four of these reviewers. Two of the reviewers again agreed that the original science contained errors but again asked that Breslow and his collaborators be given a chance to publish corrections. One reviewer's response was especially interesting; he or she thought that the matter should simply be dropped, ostensibly to protect the reputation of the author:
"It could lead to a reputation, rightly or wrongly, of the author being a nitpicker and Breslow would certainly fight back loudly. Who needs such things"

I find the part about being a nitpicker especially interesting. Firstly, the criticism was not just about a few minor details; it was about rather fundamental analyses and conclusions in the paper. But more importantly, a lot of science is in fact nitpicking, because it's through nitpicking that one often uncovers the really important things. Science especially should provide a welcome refuge for nitpickers.

In any case, after yet another rejection, Haim submitted another revised manuscript. It's worth noting that most reviewers' comments during the 11 months that Haim had tried to get his manuscript published had been favorable, and nobody had ever called Haim's basic analysis into serious question. Yet the paper kept being rejected for various reasons. Finally, Haim appealed to Bard and the paper was suddenly and inexplicably published.

Menger ends his account of the long saga with the following words:

"As the dust settles, it is comforting to reflect that the system ultimately worked. After all, both of us succeeded in getting our papers published. Yet this was accomplished only at the cost of considerable anguish to us. Few people, we presume, would be willing to go through this experience...Two problems are involved here. First is the mishandling of the original publications, which many people have come to regard as substandard. The second is the position taken by the associate editor after flaws were pointed out. That position can only be described as evasive and defensive. Without attributing motivation for his actions, we simply state that we believe them to have been inimical to the best interests of science"

He ends by hoping that the fear of open criticism will encourage scientists to police themselves better (at this point science bloggers should let out a collective hurrah, for reasons that will become apparent below). There is a postscript: Breslow replied to this lengthy report shortly afterwards (and gratifyingly, his response was published in the same journal), describing experiments that would clarify his earlier work, but he did not address the many questions about peer review raised in Menger's communication.

This fascinating account raises many important issues. For one thing, it was quite clear that the original papers had problems; even the reviewers consistently agreed with this part of the story. Thus the science seems pretty clear, and the ambiguity in the situation came from the human element. We will never know what went on behind the scenes when the manuscripts were rejected even after the reviewers agreed with the rebuttal. Unfortunately motivations are hard to unravel, but one cannot help suspecting that prestige and influence were at work here, thwarting efficient and open scientific correction. The fact that powerful people from the chemistry community (especially Breslow and Bard) were involved cannot be an inconsequential factor.

Secondly, it does seem important to me (although this is a relatively minor issue in my mind) for journals to publish corrections to papers in their own pages; at the very least, this underscores a culture of responsibility on the part of the journal and sends out a positive message. However, this practice raises some interesting operational questions. Should the journal first allow the original authors to publish a correction? If so, how long should it wait before doing this? It seems clear to me that legitimate corrections should be published immediately, irrespective of the source.

The most remarkable fact about this account is that Nature published it, and in writing it Menger performed a unique and valuable public service. Personally I have never seen such a detailed dissection of peer review described in a major journal. Some people would deplore this public airing of dirty laundry. They would say that none of this can undo what happened, and that the only effect of such articles is bad blood and destroyed reputations. I happen to disagree. I think journals should occasionally publish such analyses, because they alert us to the very human aspect of science. They demonstrate to the public what science is truly like, how scientists can make mistakes, and how they can react when they are corrected or challenged. The account sheds important light on the limitations of the peer review process, but also reaffirms faith in its ultimately self-correcting nature. Some people might think that this is a great example of how peer review should not be, but I would like to think that this is in fact exactly how the process works in the vast majority of cases: imperfect, ambiguous, influenced by human factors like reputations, biases and beliefs. If we want to understand science, we need to acknowledge its true workings instead of trying to fit it into our idealized worldview of perfect peer review.

In this day and age, blogs are performing the same function that Nature did in 1992, and this is clearly apparent from the latest Breslow brouhaha. Menger and Haim in 2012 would not have to test their patience by trying to publish in JACS for 11 months; instead they could upload their correction to a website and let the wonder of instant online dissemination work its magic. Blogs may not yet be as respectable as JACS, but the recent incident shows that they can be perfectly legitimate outlets for criticism as long as the criticism is fair and rigorous. The growing ascendancy of blogs and their capacity to inflict instant harm on sloppy or unscrupulous science should hopefully result in much better self-policing, leading authors to be more careful about what they publish in "more respectable" venues. Thus, quite paradoxically, blogs could lead to the publication of better science in the very official sources which have largely neglected them until now. This would be a delightful irony.

Perhaps the greatest message that the public can take home from such incidents is that even great scientists can make mistakes and remain great scientists, and that science continues to progress in one way or another. No matter how bad this kind of stuff sounds, it's actually business as usual for the scientific process, and there's nothing wrong with it.

27 comments:

  1. Investigations into self-plagiarism in the recent JACS article by Breslow uncovered two earlier review articles with significant similarities (Tetrahedron Lett. and Israel J. Chem., refs. 30 and 31). However, it seems to have escaped notice that ref. 14 (Orig. Life Evol. Biosph.) is a third review article that contains many of the same passages.
    The original research carried out by Breslow's group in the area of homochirality cited in the recent JACS Perspective consists of one Bioorg. Med. Chem. Lett. paper (ref. 16), one Org. Lett. paper (ref. 15), and three PNAS papers. While refs. 15 and 16 appear to have passed the standard peer review process (judging by dates of submission and publication), it may be noticed that the three PNAS papers were treated differently. The 2006 paper was contributed by Breslow without being reviewed, as is the prerogative of NAS members. The 2009 and 2010 papers were contributed by Breslow and sent for review – and published within one week of being sent for review.

    1. Anon: Thanks for those references. The situation is getting curiouser and curiouser. PNAS papers seem to carry their share of problems, as was pointed out by Derek and some other bloggers a while back.

  2. Very interesting situation. A coauthor and I tried submitting a paper to a major paleontology journal outlining basic errors in several major dinosaur phylogenetic analyses that make their results next to useless, but the peer reviewers didn't care, with one saying that because not ALL analyses are flawed in this way, it's not important to correct some of them. Sigh.

  3. Absolutely fantastic post, Ash. Profs should be printing it out to use in senior seminars on non-experiment-related aspects of a career in chemistry.

    It is incredibly frustrating to see--laid so bare--"truth" have such a hard time rising to the surface. This is the "old boys' club" at work, with members of the elite/in-crowd looking out for each other instead of looking out for what is right.

    And full props to Haim and Menger for sticking to their guns, then using the rotten experience to draw attention to problems with the system. Perhaps if they were our age, they'd have started a blog.

    1. Thanks Paul, I do think it's a very interesting story in the sociology of peer review, and one that would be very useful for students and green researchers. And I indeed keep thinking of how blogs have potentially completely transformed this process.

      As an aside, I would love to congratulate the Nature editor who saw it fit to publish Menger's report.

    2. We assume Menger and Haim were right, but do we know for sure?
      Menger was able to publish in top journals (JACS, ACR) criticizing Jencks, Houk, and Breslow. Just because Menger had the last say, does it make him right? Was Menger's critique of Houk in his paper titled 'fudge it' fair? And was his ACR article criticizing Jencks's entropy argument correct?
      Nitpicking may be important in science, but is it always? And lastly... of course Breslow and others knew there is no such thing as a negative rate constant. What he meant was that you can have apparent observed rate constants that are negative. Perhaps this was misleading or bad style to some. But it was quite clear to others.

  4. Check out the acknowledgements in this Breslow PNAS paper from 1992 (http://www.pnas.org/content/90/4/1208.full.pdf) (a paper that deals with further research on the topic of the sloppy JACS papers discussed here). Breslow thanks Richard Schowen for comments. This individual was the JACS associate editor who handled the critiques of the sloppy papers, and appeared to stonewall the critical manuscripts.

    1. Thanks, that's pretty interesting. Perhaps that sheds some light on the rather strange, self-contradictory comments from the JACS editors.

    2. I once criticized a big shot and Richard Schowen accepted the paper. And believe me, I was (and still am) a nobody. So it's not always that simple. We all like to think there are good guys and there are bad guys. It's not as simple as that, unfortunately. There are twists and turns and double flips and triple loops.

  5. First of all: a lovely and insightful post! These things are at the heart of human nature, scientists or not... When there are people involved, there are reputations, relationships, expectations, etc... When there are people involved, it's hard to stick solely to the facts...

    Also, the Nature Commentary starts on page 666, a mere coincidence? ;-)

  6. It's the submission date of the 2006 PNAS paper and the last line that are really interesting.

    1. 2006 PNAS submission:
      Contributed by Ronald Breslow, July 13, 2006

      last line:
      (Klussman et al. have reported some relevant studies after our work was completed; ref. 6).

      Ref 6:
      6. Klussman et al (2006) Nature 441, 621–623. publication date June 1, 2006; submission date Nov 21, 2005.

  7. It seems as though the wrong lesson is being drawn here. As you quote Menger: "...the system ultimately worked." I think that we ought to be focusing on the word "ultimately". Although I am a terrific fan of peer review, approximately as it is now practiced, and although I appear to be in the minority here, I doubt that anyone in any part of this spectrum would disagree that it should be made more efficient, specifically, faster. It's not clear how that could happen, as peer review systems have long been operated by email and other digital means, so it's not that "the magical web" is going to improve that too much. Unfortunately, the supply of "expert reviewer time" is limited by the number of experts, and their time. (Here "expert" means anyone who could have a sufficient understanding of the topic at hand to say something useful about it). Perhaps we're going about peer review the wrong way. It's too hard to read and write a review of a whole paper in detail -- therein lies the time problem. Perhaps review should be operated as a sort of Bayesian adaptive trial (as in biomedicine), where you send parts of the paper to everyone who has ever published in the given journal, and they are each only asked a short question, which must be answered in 24 hours: does this paragraph make sense, or not (and you could see more if you need to in order to make your analysis, but you're not at all asked to review the whole paper), and so on. If you break the paper up into enough overlapping chunks, and fan it out to enough experts who have to make simple choices, perhaps you would speed up the review process...or something like that. I'm just thinking aloud here, but the key feature, it seems to me, is speed, not quality, of the system.

  8. How peer review works (and by extension how science works), where it fails and how it can be improved is a legitimate field of scientific inquiry – no need to be defensive. But in the end this is only a case study, and we would need more systematic research into peer review.

  9. Jeff Shrager: That's a very interesting suggestion and I may have more to say about this sometime. Breaking up the paper into short bits might be interesting and lead to less bias, although I wonder how you will be able to avoid people considering facts out of context.

  10. "What was interesting was that the authors seemed to derive negative rate constants for some of the reactions. Now, even college students will recognize this as odd; a rate constant is supposed to indicate the speed of a reaction. If it's positive, the reaction proceeds. If it's zero, the reaction halts. But what is a negative rate constant supposed to mean?"

    A negative rate constant means that your concentration is falling:
    y' = ky solves to y = c*e^(kt); if k is negative then y decreases exponentially.

    What am I missing?

    1. Hells: While that explanation indeed seems to be the obvious one, this is what Menger had to say about it:

      "Since reactions with imidazole were often found to be slower than those in water without imidazole, negative rate constants ensued. Note that the negative rate constants were not simply presented as a euphemism for an ordinary rate inhibition where one positive rate diminishes to a smaller positive rate...in actual fact, the experimental rate constants, in and of themselves, were found to be subzero and extolled as such"

      There's more in the JOC paper, including an explanation of how a negative rate constant could arise as an artifact of taking the difference between the rate of the imidazole-catalyzed reaction and that of the background reaction.

  11. If memory serves, the footnote that Menger published in the JOC article complaining of rejection by JACS was followed up by a short JOC editorial in a subsequent issue stating that Menger slipped the footnote into the final galley proofs, after review, and that the editors would never have knowingly let that footnote get published.

    1. Found it.
      http://pubs.acs.org/doi/pdf/10.1021/jo00024a053

      good memory man.
      -InfMP

  12. Haim's quotation from a referee's report is unethical. Referees write reports in confidence to the editors of a journal, and those reports are not public documents. How can the accuracy of the citation be checked, and will referees do their job if their reports are made public? In the Haim case, there is a clear misuse, as the sentences are quoted without any context. It is made to look like the referee has an agenda that serves Haim's purpose. Would a referee really do that? I'll bet not. I am glad that this abuse has not continued, but dealings with referees are not for publication.

    1. Why do you think that referees' reports should be confidential? Should not a referee stand behind what he/she has written? Sometimes referees say truly ridiculous things, and if a paper is rejected you don't even really have the chance to discuss their criticism with them. It once happened to me that a referee did not even accept textbook knowledge - and based on his review, the paper was rejected (it was a high-standing ACS journal).

      I like the philosophy of journals like PLoS ONE: "Although reviewers may remain anonymous during the review process, we strongly urge them to relinquish this anonymity to promote open and transparent decision-making."

  13. Anon 7:17: And that's an important point. In today's era of open science, some are suggesting that referees' comments (even if not their identity) should be made public so that the reviewing process becomes as transparent as possible. Perhaps if their comments are publicly available, reviewers will make a more honest attempt to reject manuscripts based on substance rather than bias or simple disagreements.

    1. OK, you have been faster than me (Anon 6:54)... ;-)

  14. Although Menger likes to police others, when his own errors were pointed out by Christl, he refused to retract his erroneous paper. I acknowledge his great service to science, and I am a big fan of his science, and of the opening lines of his papers specifically.

    http://onlinelibrary.wiley.com/doi/10.1002/anie.200704704/abstract


    The last paragraph of the comment at the link above reads:
    "
    Chemistry as a science does not
    suffer damage by errors, since if they
    concern an important field of research
    these are recognized as such sooner or
    later, and if they occur in research
    niches they are without significance.
    For the reader of scientific work, how-
    ever, it will become increasingly more
    difficult to separate the wheat from the
    chaff, if authors and referees do not do
    the preliminary sorting adequately.
    "

  15. EMBO Journal now publishes the editorial communications that precede many of its publications.

  16. Very interesting discussion! You might find this one inspiring as well: http://www.davidrasnick.com/Home_files/2012,%20Steinhauser%20et%20multi,%20Anti-MeHy%20censorship%20-%20Elsevier.pdf

