Field of Science


On the impact of social media and Twitter on scientific peer review

I am very pleased to note that my article on the impact of social media, especially blogs and Twitter, on peer review in chemistry in particular and science in general has just come out in a special issue of the journal 'Accountability in Research'. This project has been in the works for almost a year and I have spent quite a bit of time on it. The whole issue is open access and it was made possible by the dedicated and generous efforts of my colleague and friend, the eminent historian of chemistry Jeff Seeman. I am privileged to have my article appear along with those by Roald Hoffmann, William Schulz, Jeffrey Kovac and Sandra Titus. All their papers are highly readable.

Here in a nutshell is what I say. I have had a very dim view of Twitter recently as a vehicle for cogent science communication and rational debate, but in this article I find myself full of praise for the medium. This sentiment has been inspired by the use of Twitter in recent times for demolishing careless science and questioning shoddy or controversial papers in the scientific literature. In my opinion the most spectacular use of Twitter to this effect was Nature Chemistry editor Stuart Cantrill's stark highlighting of 'self-plagiarism' in a review article published by Ronald Breslow in JACS in 2012 (I hold forth on the concept of self-plagiarism itself in the article). As I say in my piece, this is the only instance I know of in which Twitter - and Twitter alone - was used to point out errors in a paper published in a major journal. If Cantrill's analysis was not a resounding example of peer review in the age of social media, I don't know what is.

I have had a much more consistently positive view of blogs as tools for instant and comprehensive peer review, and thanks to the vibrant chemistry blogosphere that I have been lucky to be a part of for almost eleven years, I have witnessed the true coming of age of this medium. There is no doubt that peer review on blogs is here to stay, and in my article I address the pitfalls and promises inherent in this development. One of the most important concerns that a naive observer would have regarding the use of blogs or Twitter for peer review is the potential for public shaming and ad hominem attacks - and such an observer would find plenty of recent evidence in the general Twittersphere to support their suspicions. Yet I argue that, at least as far as the limited milieu of chemistry blogs is concerned, the signal to noise ratio has been very high and the debate remarkably forward-thinking and positive; in fact I think that, by and large, chemistry blogs could serve as models of civil and productive debate for blogs on more socially or politically contentious topics like evolution and climate change. I am proud to be part of this (largely) civil community.

What I aim to do in this piece is to view the positive role of Twitter and blogs in effecting rapid and comprehensive peer review through the lens of three major case studies that will be familiar to informed observers: the debacle of 'arsenic life', the fiasco of hexacyclinol and the curious case of self-plagiarism in the Breslow 'space dinosaurs' review. In each case I point out how blogs and Twitter flagged mistakes and issues with the relevant material far faster than official review ever could, and how they circumvented problems with traditional peer review, some obvious and some more structural. The latter part of the article raises questions about the problems and possibilities inherent in the effective use of these tools, and I muse a bit about how the process could be made fairer and simpler.

Given the sheer speed with which blogs and social media can turn our collective microscopes on the scientific literature, and the sheer diversity of views that can be instantly brought to bear on a contentious topic, there is no doubt in my mind that this new tier of scientific appraisal is here to stay. In my opinion the future of completely open peer review is bright and beckoning. How it can complement existing modalities of 'official' peer review is an open question. While I raise this question and offer some thoughts of my own, I don't claim to provide definitive answers. Those answers can only be provided by our community.

Which brings me to the crux of the article: although my name is printed on the first page of the piece, it really is of, by and for the community. I hope there will be something of interest in it for everyone. I welcome your comments.

The GPCR Network: A model for open scientific collaboration

This post was first published on the Scientific American Blog Network


The complexity of GPCRs is illustrated by this mechanical view of their workings (Image: Scripps Research Institute)
G Protein-Coupled Receptors (GPCRs) are the messengers of the human body, key proteins whose ubiquitous importance was validated by the 2012 Nobel Prize in chemistry. As I mentioned in a post written after the announcement of the prize, GPCRs are involved in virtually every physiological process you can think of, from sensing colors, flavors and smells to the action of neurotransmitters and hormones. In addition they are of enormous commercial importance, with something like 30% of marketed drugs binding to these proteins and regulating their function. These drugs include everything from antidepressants to blood-pressure lowering medications.

But GPCRs are also notoriously hard to study. They are hard to isolate from their protective lipid cell membrane, hard to crystallize and hard to coax into giving up their molecular secrets. One reason the Nobel Prize was awarded was because the two researchers – Robert Lefkowitz and Brian Kobilka – perfected techniques to isolate, stabilize, crystallize and study these complex proteins. But there’s still a long way to go. There are almost 800 GPCRs, out of which ‘only’ 16 have been crystallized during the past decade or so. In addition, all the GPCRs studied so far are from the so-called Class A family. There are still five classes left to decipher, and these contain many important receptors, including the ones involved in smell. Clearly it’s going to be a long time before we can get a handle on the majority of these important proteins.

Fortunately, GPCR researchers have realized something important: many of these GPCRs have similar amino acid sequences. If you know what experimental conditions work for one protein, perhaps you can use the same conditions for another, similar GPCR. Even for dissimilar proteins one can bootstrap from existing knowledge. Based on the similarity you could also build computer models for related proteins. Finally, you can use a small organic molecule like a drug essentially as a clamp that helps stabilize and crystallize the GPCR.
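To make the similarity idea concrete, here is a minimal, hypothetical Python sketch of the kind of reasoning involved: compute the percent identity between two pre-aligned receptor fragments and apply a crude cutoff to decide whether conditions that worked for one receptor are worth trying on the other. The sequences and the 35% cutoff are placeholders of my own invention, not the GPCR Network's actual criteria.

# Toy illustration only: percent identity between two pre-aligned
# protein sequences ('-' marks a gap), and a crude transfer heuristic.
# The 35% cutoff is an arbitrary placeholder, not a published criterion.

def percent_identity(aligned_a: str, aligned_b: str) -> float:
    """Percent identity over aligned, equal-length sequences."""
    if len(aligned_a) != len(aligned_b):
        raise ValueError("sequences must be pre-aligned to equal length")
    pairs = list(zip(aligned_a, aligned_b))
    matches = sum(a == b and a != "-" for a, b in pairs)
    aligned_positions = sum(a != "-" and b != "-" for a, b in pairs)
    return 100.0 * matches / aligned_positions if aligned_positions else 0.0

def worth_transferring(aligned_a: str, aligned_b: str, cutoff: float = 35.0) -> bool:
    """Reuse experimental conditions if identity exceeds the cutoff."""
    return percent_identity(aligned_a, aligned_b) >= cutoff

# Hypothetical aligned fragments of two related receptors
receptor_a = "MGQPGN-GSAFLLAPNRSH"
receptor_b = "MGQPGNRGSAFLLAPNGSH"

print(f"identity: {percent_identity(receptor_a, receptor_b):.1f}%")
print("try the same conditions?", worth_transferring(receptor_a, receptor_b))

The criteria structural biologists actually use are of course far more involved than a single percent-identity number, but the spirit is the same: quantify how related two receptors are, and let that guide which hard-won conditions, reagents and models get shared.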

But all this knowledge represents a distributed body of work, spread over the labs of researchers worldwide and often sequestered by them for their own benefit. These individual researchers, working in isolation, would not only face an uphill battle in figuring out the right conditions for studying their proteins but would also run the risk of reinventing the wheel and duplicating conditions from other laboratories. The central question all of these researchers are asking is: how does the binding of a small molecule like a drug on the outside of a GPCR lead to the transmission of a signal to the inside?

Enter the GPCR Network, a model of collaborative science that promises to serve as a fine blueprint for other similar efforts. The network was created through a funding opportunity from the National Institute of General Medical Sciences in 2010 and has set itself the goal of structurally characterizing 15-25 GPCRs in the next five years. The effort is based at the Scripps Research Institute in La Jolla and involves at least a dozen academic and industrial labs.

So how does this network work? The idea came from the recognition that there are hundreds of GPCR researchers spread across the world. Each is an expert on a particular GPCR, but each has largely worked separately. What the network does is leverage the expertise from one researcher’s lab and apply it to a similar GPCR in another lab (there are technical criteria for defining ‘similarity’ in this case). There are a variety of very useful protocols, ideas and pieces of equipment that can be shared between labs. This sharing cuts down on redundant protocols, saves money and resolves new GPCR puzzles much faster than any single lab could on its own.

For instance, a favorite strategy for stabilizing a GPCR involves tagging it with an antibody that essentially holds the protein together. An antibody that worked for one GPCR can be lent to a researcher who is investigating another GPCR with a similar amino acid sequence. Or perhaps there is a chemist who has discovered a new molecule that binds very tightly to a particular receptor. The network would put him in touch with a crystallographer who could use that molecule to fish out that GPCR from a soup of other proteins and crystallize it. Once the crystallographer solves the structure of the protein using this molecule, he or she could then send the structure to a computer modeler who can use it to build a structure for another particularly stubborn GPCR which could not be crystallized. The computer model might explain some unexpected observations from a fellow network researcher who was using a novel instrumental technique. This novel technique would then be shared with everyone else for further studies.

Thus, what has happened here is that the individual pockets of knowledge from a biochemist, an organic chemist, a crystallographer and a computer modeler – none of whom would have proceeded very far by themselves – are merged to provide an integrated picture of a few important GPCRs. The entire pipeline of protocols, including protein isolation, purification, structure determination and modeling, also serves as a feedback loop, with insights from one step constantly informing and enriching the others. This is a fine example of how collaborative and open science can accelerate important research and save time and money. It's to the credit of these scientists that they haven't held their valuable reagents and techniques close to their chests but are sharing them for everyone's benefit.

In the three years since it has been up and running, the GPCR Network has leveraged the expertise of many labs to generate insights into the structure and function of important receptors. Its collaborative efforts have resulted in eight protein structures in just two years. These include the adenosine receptor, which mediates the effects of caffeine, the opioid receptor, which is the target of morphine, and the dopamine receptor, which binds dopamine. Each of these collaborations involved a dozen or so researchers across at least three or four labs, with each lab contributing its particular area of expertise. Gratifyingly, there are also a few industrial labs involved in the effort, and we can hope that this number will increase as the pharmaceutical industry becomes more collaborative.

It’s also worth noting that the network was funded by the NIGMS, an institution that has been subject to the whims of budget and personnel cuts. This institution is now responsible for an effort that’s not only accelerating research in a fundamental biological area but is also contributing to a better understanding of existing and future drugs. Scientists, politicians and members of the public seeking a validation of basic, curiosity-driven scientific research, and reasons to fund it, shouldn’t have to look far.