Field of Science


The GPCR Network: A model for open scientific collaboration

This post was first published on the Scientific American Blog Network


The complexity of GPCRs is illustrated by this mechanical view of their workings (Image: Scripps Research Institute)
G Protein-Coupled Receptors (GPCRs) are the messengers of the human body, key proteins whose ubiquitous importance was validated by the 2012 Nobel Prize in chemistry. As I mentioned in a post written after the announcement of the prize, GPCRs are involved in virtually every physiological process you can think of, from sensing colors, flavors and smells to the action of neurotransmitters and hormones. In addition they are of enormous commercial importance, with something like 30% of marketed drugs binding to these proteins and regulating their function. These drugs include everything from antidepressants to blood-pressure lowering medications.

But GPCRs are also notoriously hard to study. They are hard to isolate from their protective lipid cell membrane, hard to crystallize and hard to coax into giving up their molecular secrets. One reason the Nobel Prize was awarded was that the two researchers, Robert Lefkowitz and Brian Kobilka, perfected techniques to isolate, stabilize, crystallize and study these complex proteins. But there's still a long way to go. There are almost 800 GPCRs, of which 'only' 16 have been crystallized during the past decade or so. In addition, all the GPCRs studied so far are from the so-called Class A family. There are still five classes left to decipher, and these contain many important receptors, including the ones involved in smell. Clearly it's going to be a long time before we can get a handle on the majority of these important proteins.

Fortunately, GPCR researchers have realized something important: many of these GPCRs have similar amino acid sequences. If you know what experimental conditions work for one protein, perhaps you can use the same conditions for a similar GPCR. Even for dissimilar proteins, one can bootstrap from existing knowledge. Based on sequence similarity, you can also build computer models of related proteins. Finally, you can use a small organic molecule like a drug as a clamp that helps stabilize and crystallize the GPCR.
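To make the notion of 'similar sequences' concrete, here is a minimal sketch, in plain Python, of the simplest measure involved: percent identity between two aligned amino acid sequences. The sequence fragments are toy stand-ins invented for illustration; a real comparison would use full GPCR sequences and a dedicated alignment program.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent of aligned positions with identical residues ('-' marks a gap)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    # Consider only positions where neither sequence has a gap
    aligned = [(a, b) for a, b in zip(seq_a, seq_b) if a != '-' and b != '-']
    matches = sum(a == b for a, b in aligned)
    return 100.0 * matches / len(aligned)

# Toy aligned fragments, not real GPCR data
print(percent_identity("MNGTEGPNFYVP-FSNKTGVV",
                       "MNGTEGLNFYVPFFSNKTGVV"))  # prints 95.0
```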

But all this knowledge represents a distributed body of work, spread across the labs of researchers worldwide and often sequestered by each lab for its own benefit. Individual researchers working in isolation would not only face an uphill battle in figuring out the right conditions for studying their proteins but would also risk reinventing the wheel by duplicating conditions already worked out in other laboratories. The central question asked by all these researchers is: how does the binding of a small molecule like a drug on the outside of a GPCR lead to the transmission of a signal to the inside?

Enter the GPCR Network, a model of collaborative science that promises to serve as a fine blueprint for other similar efforts. The network was created through a funding opportunity from the National Institute of General Medical Sciences in 2010 and has set itself the goal of structurally characterizing 15-25 GPCRs in the next five years. The effort is based at the Scripps Research Institute in La Jolla and involves at least a dozen academic and industrial labs.

So how does this network work? The idea came from the recognition that there are hundreds of GPCR researchers spread across the world. Each is an expert on a particular GPCR, but each has largely worked separately. What the network does is leverage the expertise from one researcher's lab and apply it to a similar GPCR in another lab (there are technical criteria for defining 'similarity' in this case). There are a variety of very useful protocols, ideas and pieces of equipment that can be shared between labs. This sharing cuts down on redundant work, saves money and solves new GPCR puzzles much faster than any lab could on its own.

For instance, a favorite strategy for stabilizing a GPCR involves tagging it with an antibody that essentially holds the protein together. An antibody that worked for one GPCR can be lent to a researcher investigating another GPCR with a similar amino acid sequence. Or perhaps a chemist has discovered a new molecule that binds very tightly to a particular receptor. The network would put him or her in touch with a crystallographer who could use that molecule to fish the GPCR out of a soup of other proteins and crystallize it. Once the crystallographer solves the structure of the protein using this molecule, he or she could send the structure to a computer modeler, who can use it to build a structure for another particularly stubborn GPCR that could not be crystallized. The computer model might explain some unexpected observations from a fellow network researcher who was using a novel instrumental technique. That technique would then be shared with everyone else for further studies.
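The modeling step at the end of that relay is typically homology modeling: using the solved structure as a template for the similar, uncrystallized receptor. Here is a minimal sketch using MODELLER's standard automodel interface; the file name and sequence codes ('target_template.ali', 'solved_gpcr', 'stubborn_gpcr') are hypothetical placeholders, and a real run requires a carefully curated target-template alignment.

```python
# Requires MODELLER (https://salilab.org/modeller/) and a license key
from modeller import environ
from modeller.automodel import automodel

env = environ()
model = automodel(env,
                  alnfile='target_template.ali',  # target-template alignment (hypothetical file)
                  knowns='solved_gpcr',           # code of the crystallized template
                  sequence='stubborn_gpcr')       # code of the uncrystallized target
model.starting_model = 1
model.ending_model = 5                            # build five candidate models
model.make()
```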

Thus, what has happened here is that the individual pockets of knowledge from a biochemist, organic chemist, crystallographer and computer modeler, none of whom would have gotten very far alone, are merged to provide an integrated picture of a few important GPCRs. The entire pipeline of protocols, including protein isolation, purification, structure determination and modeling, also serves as a feedback loop, with insights from one step constantly informing and enriching the others. This is a fine example of how collaborative, open research can accelerate important science and save time and money. It's to the credit of these scientists that they haven't held their valuable reagents and techniques close to their chests but are sharing them for everyone's benefit.

In the three years since it has been up and running, the GPCR Network has leveraged the expertise of many researchers in generating insights into the structure and function of important receptors. Its collaborative efforts have resulted in eight protein structures in just two years. These include the adenosine receptor, which mediates the effects of caffeine; the opioid receptor, the target of morphine; and the dopamine receptor. Every one of these collaborations involved a dozen or so researchers across at least three or four labs, with each lab contributing its particular area of expertise. Gratifyingly, there are also a few industrial labs involved in the effort, and we can hope that this number will increase as the pharmaceutical industry becomes more collaborative.

It's also worth noting that the network was funded by the NIGMS, an institution that has been subject to the whims of budget and personnel cuts. This institution is now responsible for an effort that is not only accelerating research in a fundamental biological area but is also contributing to a better understanding of existing and future drugs. Scientists, politicians and members of the public seeking a validation of basic, curiosity-driven scientific research, and reasons to fund it, shouldn't have to look any further.

Charity begins in the university

I mentioned in the last post how the transition time from academic science ----> industrial technology needs to be shortened. It struck me that so many of the things pharma scientists were saying at the conference originally came from academia, and I cannot help but think of technologies that people in pharma currently rave about, all of which were developed in academic laboratories.

Consider the recent use of NMR spectroscopy to study the interaction of drugs with proteins, a development that has really taken place in the last five to ten years. NMR is essentially an academic field that has been around for almost fifty years now, originally developed by physicists who worked on radar and the bomb and then bequeathed to chemists. It is the humdrum tool that every chemist uses to determine the structure of molecules, and in the last twenty years it has also been expanded into a powerful tool for studying biomolecules. What if pharma had actually gone to the doorstep of the NMR pioneers twenty years back and asked them to develop NMR specifically as a tool for drug discovery? What if pharma had funded a few students to focus on such an endeavor and promised general funding for the lab? What if Kurt Wüthrich had been offered such a prospect in the early 90s? I don't think he would have been too averse to the idea. There could then have been substantial funding focused on the application of NMR to drug-protein binding, and who knows, maybe we would have had NMR as a practical tool for drug discovery ten years earlier, if not as sophisticated as it is now.
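To give a flavor of what 'NMR for drug-protein binding' means in practice: one standard readout is the chemical shift perturbation (CSP) of backbone amide peaks between the free protein and the protein-drug complex, where residues with large CSPs flag the likely binding site. The sketch below uses one common 1H/15N weighting convention (a factor of 0.14 for nitrogen); the shift values and residue names are invented for illustration.

```python
import math

def csp(dH_free, dN_free, dH_bound, dN_bound, alpha=0.14):
    """Combined 1H/15N chemical shift perturbation in ppm."""
    dH = dH_bound - dH_free
    dN = dN_bound - dN_free
    return math.sqrt(dH ** 2 + (alpha * dN) ** 2)

# Hypothetical shifts (1H ppm, 15N ppm), free then bound, for two residues
residues = {
    "Gly45":  (8.21, 110.5, 8.24, 110.9),   # small perturbation: far from the site
    "Trp102": (9.80, 125.1, 9.35, 127.8),   # large perturbation: likely in the site
}
for name, shifts in residues.items():
    print(f"{name}: CSP = {csp(*shifts):.3f} ppm")
```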

Or think of the recent computational advances used to study protein-ligand interactions. One of the most important advances in this area has been docking, in which one calculates the interactions that a potential drug has with a target in the body and then thinks of ways to improve those interactions based on the structure of the drug bound to the protein. These docking programs are not perfect, but they are getting better every day, and they are now at a stage where they are realistically useful for many problems. Docking protocols are based on force fields, and the paradigm in which force fields are developed, molecular mechanics, was created by Norman Allinger at UGA and then improved by many other academic scientists. One of the few very effective force fields to come out of industry, the MMFF family, was developed by Thomas Halgren at Merck. During the 80s and 90s, force fields were regularly used to calculate the energies of simple organic molecules. One can argue that at that point they simply lacked the sophistication to tackle problems in drug discovery. But what if pharmaceutical companies had then channeled millions of dollars into these academic laboratories specifically to adapt these force fields to drug-like molecules and biomolecules? It is very likely that academic scientists would have been more than eager to use that funding and dedicate some of their time to exploring this particular aspect of force fields. The knowledge from this specific application could have fed back, in a mutually beneficial cycle, into improving the basic characteristics of the force fields. And perhaps we could have had good force-field-based docking programs in the late 90s. Pharma could also have funded computer scientists in academia to develop parallel processing platforms specifically for these applications, since much of the progress in the last ten years has been possible because of the exponential rise in hardware and software capability.
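For readers unfamiliar with what a force field actually computes, here is a minimal sketch of the nonbonded arithmetic at the heart of such calculations: a Lennard-Jones 12-6 term plus a Coulomb term for a single atom pair. The parameters and coordinates are invented for illustration; real force fields like MM3 or MMFF add bonded terms, careful parameterization and solvation corrections.

```python
import math

COULOMB_CONST = 332.06  # kcal*angstrom/(mol*e^2), a common molecular-mechanics convention

def pair_energy(r, epsilon, sigma, q1, q2):
    """Lennard-Jones 12-6 plus Coulomb energy for one atom pair, in kcal/mol.

    r       : interatomic distance (angstroms)
    epsilon : LJ well depth (kcal/mol); sigma : LJ diameter (angstroms)
    q1, q2  : partial charges (electron units)
    """
    lj = 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
    coulomb = COULOMB_CONST * q1 * q2 / r  # in vacuum, no dielectric screening
    return lj + coulomb

# Toy example: a ligand oxygen 3.0 angstroms from a protein nitrogen.
# Summing such terms over all ligand-protein atom pairs gives the kind of
# interaction energy a docking score starts from.
print(pair_energy(r=3.0, epsilon=0.15, sigma=3.2, q1=-0.5, q2=0.4))
```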

There are many other such technologies (fabrication, microfluidics, single-molecule spectroscopy) that could potentially revolutionize drug discovery. All of these are being pursued in universities at a basic level. As far as I know, pharma is not providing significant funding to universities specifically to adapt these technologies to its benefit. There are of course a few very distinguished academic scientists focused on shortening the science ---> technology timeframe; George Whitesides at Harvard and Robert Langer at MIT immediately come to mind. But not everybody is a Whitesides or a Langer, both of whom have massive funding from every imaginable source. There are lesser-known scientists in lesser-known universities who may also be doing research that could be revolutionary for pharma. Whitesides recently agreed to license his lab's technologies to the company Nano-Terra; Nano-Terra would get the marketing rights, and Harvard would get the royalties. There are certainly a few such examples. But I don't know of many where pharma is pouring money into academic laboratories to accelerate the transformation of science into enabling technology.

In retrospect, it's actually not surprising that future technologies are being developed in universities. In fact it was almost always the case. Even now-ubiquitous industrial research tools like x-ray crystallography, sequencing and nuclear technology were originally products of academic research; their great utility quickly catapulted them into industrial environs. But we are in a new age now, one in which we suddenly have the tools and the intellect to solve many complex problems. More than at any other time, we need to shorten the transition time between science and technology. To do this, industry needs to draw up a list of academic scientists and labs doing promising research and try to strike deals with them, channeling their research acumen into tweaking their pet projects to deliver tangible, practical results. There would of course be new problems to solve. But such an approach would in general be immensely and mutually satisfying, with pharma possibly getting products on the table in five years instead of ten, and academia getting funded for making it happen. It would keep pharma, professors and their students reasonably happy. The transition may not always be sped up dramatically. But in drug discovery, even saving five years can mean saving millions of lives. And that's always a good cause, isn't it?