Field of Science

Chemistry: The difference between important and useful

Over on the Sceptical Chymist blog there's another discussion about using highly cited chemists to gauge the importance of chemical sub-fields. In a previous post I suspected that lists of highly cited chemists from the 50s through the 90s would reflect the now seemingly diminished importance of organic synthesis. Partly goaded by this, Michelle Francl of Culture of Chemistry drew up a list of chemist citations from the 80s and 90s. Interestingly, there are no bona fide synthetic organic chemists on it. However, as a commenter on the Chymist's blog noted, a better way of judging trends might be to count the number of highly cited papers in every sub-field in every decade rather than just looking at highly cited chemists. In my opinion, counting papers would do a much better job of indicating the supremacy of organic chemistry from the 50s through the 90s.

Or would it? To get a better idea of the whole issue I did something which I thought was obvious, and was surprised by the results.
The exercise really got me thinking about the very nature of judging achievement and importance in chemistry.

I simply logged on to the ACS website and looked at the list of
highest-cited JACS articles of all time. Although extrapolating chemical significance from this exercise is as fraught with limitations as extrapolating from ISI/Thomson Reuters lists, most of us would agree that JACS has mirrored important chemical developments over the last fifty years. So is the list, unsurprisingly, dominated by heavy hitters like Woodward, Smalley, Corey, Djerassi, Sharpless or Grubbs? Surely the top spot would be taken by the greatest chemist of all time?

Hardly. Of the top ten articles, four, including the top two, belong to computational chemistry, a field that has often been regarded as relatively unfashionable compared to organic synthesis, chemical biology, materials science and polymer science. Ask scientists to name the most important chemists of the last fifty years and very few will name John Pople or Michael Dewar, let alone Peter Kollman, Clark Still or Warren Hehre. Yet computational chemistry dominates the list of the top 20 highest-cited papers in JACS. Where is the chemical God Woodward on the list? Or his successor Corey? In fact, no one who looks at the JACS list would even suspect that organic chemistry ever dominated the chemical landscape.

Does this mean that organic synthesis was hyped for fifty years and we were convinced
of the towering implications of the field by a conspiracy of chemical raconteurs led by R. B. Woodward? Certainly not. To me the list only signifies the signature character of chemistry: on a practical basis, 'importance' in chemistry is judged by utility rather than by any other single metric. If you look at the computational chemistry papers in the JACS list, you will realize that each one of those papers contributed techniques which became universally adopted by all kinds of chemists doing calculations on all kinds of molecules. Woodward's papers on synthesis may seem like great works of art compared to these pragmatic prescriptions, but chemists going about their daily business may have scant use for them. Considering this emphasis on utility, I was actually surprised not to see some of Corey's papers, such as the one describing the oxidation of secondary alcohols, on the list. I was also surprised not to see the work done by the palladium crowd, not to mention the Sharpless epoxidation and dihydroxylation accounts.

But the examples which are included make the context clear. Consider the solvation models developed by Clark Still, which are a mainstay of molecular simulation. Consider the force fields developed by Peter Kollman and Bill Jorgensen, again incorporated in leading computer programs and used by thousands around the world. And of course, the pioneering Nobel Prize-winning quantum chemical programs developed by John Pople brought high-flying theory to the bench. In fact, only a very few of the people cited by ISI/Thomson Reuters feature in the list. Whitesides does, but again for his very practical and important work on generating monolayer films on surfaces. Ralph Pearson is similarly cited for his very helpful development of the hard/soft acid-base concept. Robert Grubbs actually makes it, but again for his decidedly practical innovation of olefin metathesis.

Interestingly, when the general history of chemistry is written, the pioneering articles which make that list will almost certainly not be these highly cited ones. They will instead be the ones describing the synthesis of chlorophyll or of the fullerenes, or those detailing the reactions of CFCs with the ozone layer. The cracking of hydrocarbons will probably be mentioned. And of course there will be all those papers on the nature of the chemical bond, featuring Linus Pauling and others. In the long-term, what would stand out from the chemical canon would be the papers which laid the foundation of the field, not the ones which allowed chemists to calculate a dipole moment with better accuracy. And yet it's the latter and not the former which make the JACS list.

These observations based on a most limited data set should not be taken too seriously. But I think that they drive home an important point. In chemistry, what's regarded as important by history and what's regarded as important by chemists going about their daily work might be very different. We may wax eloquent about how Pauling's paper on hybridization lit up the great darkness, or how Woodward's synthesis of Vitamin B12 reminds one of Chartres Cathedral, but at the end of the day, all a chemist wants is to grow some thiol monolayers on gold and calculate their interactions from first principles.

7 comments:

  1. I'm not sure the prevalence of computational papers in the top 10 has anything to do with either their importance or usefulness (which isn't to imply that they're not important or useful). Rather, I think it has more to do with the typical citation patterns used in various sub-fields.

    If you do a calculation with a particular computational program, there's usually an "official" paper listed in the manual which you are supposed to cite. On the other hand, people doing FT-IR, NMR or other established techniques rarely cite the "original" methodology paper; Varian doesn't include an "if you use this instrument, please cite ..." statement in the manual.

    To some extent it has to do with how modifications to a technique are presented. If you do a modified hydrogenation, say, you'll cite the paper that presented the modification, but probably not the paper the modification was based on, let alone the original work which introduced catalytic hydrogenation. On the other hand, with computational work you tend to cite the original software introduction paper in addition to the paper which introduced the modifications you actually used.

  2. You are probably right about some of the programs, but at least for some computational software (e.g. GAUSSIAN) you don't actually cite the paper but use the 'official' program citation, which includes the copyright. It would be interesting to see for how many programs the high citation count of the paper simply reflects the mandatory reference.

  3. Think of that flash chromatography paper by Clark Still; if it were cited every time a column was run ...

    Also, it seems acceptable not to cite this paper because the technique is so commonplace.

  4. True. Actually the Still paper does turn out to be the second-highest-cited paper of all time in JOC.

  5. Well, this blog has done a wonderful job in capturing the only good/useful thing in this whole discussion.

    It is understandable that ISI/Thomson Reuters publishes these kinds of rankings in order to promote its products, which are perverting the way people do science (chasing publications, citations and all that rubbish we know about). What I cannot understand is why Nature Chemistry (or the Sceptical Chymist blog, whatever) gives publicity to these things. Can't they see that they are contributing to the propagation of the awful view that the importance and quality of science is measured by number of papers, citations, impact factors, h-index, etc? I know that in the end Nature, Science, ACS Pubs, etc, have only profit as their objective (and not disseminating good scientific practice), and that because of this the end of the "impact factor/h-index era" would be terrible for their business. But it is unjustified that they give people, especially young people, reasons to believe that the number of publications, citations, one's h-index, etc are more important than really understanding our universe in different and useful ways, having new and different ideas, and the other things that seem to be completely forgotten by most of the scientists who sustain the bad system currently governing the academic world. In plain English, it looks like disrespect.

    I do understand the value of discussions about chemistry heroes based, for example (and of course not limited to), on how a limited group of scientists changed the way people do or view chemistry. However, extrapolating from this and using citations, the h-index, etc as the measure is, as said before, just a terrible mistake, simply because the correlation is too weak, as noted in this blog entry. There is no mystery in why these ISI/Thomson Reuters rankings are never right about Nobel Prize winners.

    I cannot help but envy the physics community. The arXiv is a great example, since it has completely changed how physicists publish papers and how they evaluate the quality of research, among other things. Of course they have their own problems, but it is just frustrating that the dominance of the ACS and its strict publication rules keeps such a possibility far from the near future in the case of chemistry. Nature Physics is not a big deal among physicists. This actually makes me think that maybe Nature Chemistry is just afraid that what happened to the physics community will soon happen to chemistry as well. Then their journal will no longer be "the dream" of those researchers who believe that h-indexes or impact factors will tell them whether they are good scientists or not.

    Anyway, your conclusion could not be better: "In the long-term, what would stand out from the chemical canon would be the papers which laid the foundation of the field, not the ones which allowed chemists to calculate a dipole moment with better accuracy." It is just very sad that so few people notice this.

  6. Thank you for laying bare the faulty assumption that the influence of a given scientist is directly proportional to the number of times their papers have been cited. Citations are easy to measure; influence, not so much.
    It reminds me of the old Sufi tale about the man who lost his keys in the house but is searching on the porch "because there is more light here".

  7. Dear Anonymous

    We posted about the Times Higher Education story on the Sceptical Chymist blog because a lot of chemists find these things interesting (whatever they judge the value of metrics to be); the follow-up post was then in response to some comments made by Wavefunction and another reader regarding organic chemistry. The fact that our original post has generated some debate testifies to the value of our blog posts. We are not saying that the best way to evaluate the significance or importance of science is through the citation of scholarly articles. Far from it: citation measures are an incredibly blunt tool. I would direct you to the editorial in the July 2009 issue of Nature Chemistry that addresses this in a broad sense, but your mind already seems to be made up about our journal.

    Speaking as the editor of Nature Chemistry, my goal is not profit (sure, I work for a commercial company, but show me any mainstream chemistry publisher who does not produce journals for profit, even if the organisations that produce them are registered charities, such as the RSC). My aim is to produce a journal that people in the chemistry community enjoy reading; the editorial aspects of the journal are decoupled from sales, and the editorial team is not driven by economic factors (or, indeed, impact factors).

    Prior to what you call the 'h-index/impact factor' era, there was always a supposed hierarchy of journals; some were considered to be better than others, and these views have shifted over time... and will continue to do so. The fact is that the value placed on metrics is driven by the academic community itself, not by journals. Sure, journals will market themselves based on impact factor and number of citations once these have been calculated and released by ISI, but the metrics thrive because academics like to quote them and compare them to see how they stack up against their peers. I don't know who you are, 'Anonymous', but if you are a member of the chemistry community, maybe start by talking to your colleagues rather than blaming journals.

    And if impact factors and the h-index vanished tomorrow, I don't see that Nature journals would suffer. We have more going for us than a number released by ISI that measures our perceived 'impact'. To respond to your assertion, I can tell you that we are not afraid. Which is perhaps more than can be said for you: what is it that you are afraid of that you can't make your points using your real name?

    Stuart (Chief Editor, Nature Chemistry)

