
All active kinase states are similar but...

CDK2 bound to inhibitor staurosporine (PDB code: 1aq1)
…inactive kinase states are different in their own ways. This admittedly awkward rephrasing of Tolstoy's quote came to my mind as I read this new report on differences between active and inactive states of kinases as revealed by differential interactions with inhibitors. 

The paper is a good indication of why kinases remain such enduring objects of interest for drug designers. Ever since imatinib (Gleevec) opened the floodgates to kinase drugs and dispelled the widespread belief that it might be impossible to hit particular kinases selectively, researchers have realized that kinase inhibitors may work by targeting either the active or the inactive states of their target proteins. One intriguing observation to emerge over the last few years is that inhibitors targeting inactive, monomeric states of kinases seem to provide better selectivity than those targeting phosphorylated, active states. In this study the authors (from Oxford, Padua and Newcastle) interrogate this phenomenon specifically for cyclin-dependent kinases (CDKs).

CDKs are prime regulators of the cell cycle, and their discovery led to a Nobel Prize a few years ago. Over the last few years there have been several attempts to target these key proteins, especially (and not surprisingly) in oncology; I worked on a rather successful kinase inhibitor project myself at one point. CDKs are rather typical kinases, having low activity when not bound to a cyclin but becoming active and ready to phosphorylate their substrates when cyclin-bound. What the authors did was to study the binding of a diverse set of inhibitors to a set of CDKs ranging from CDK2 to CDK9 by differential scanning fluorimetry (DSF). DSF reports on binding affinity by way of the melting temperature Tm: a bound ligand stabilizes the protein against thermal unfolding, and the tighter the binding, the larger the upward shift in Tm. Measuring Tm shifts for the same inhibitor against different kinases can thus give you an idea of its selectivity.
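For the uninitiated, here's a minimal sketch of what that Tm extraction looks like in practice - simulated melt curves and a simple two-state Boltzmann fit, not the authors' actual pipeline:

```python
# Minimal sketch: extracting Tm from a DSF melt curve by fitting a
# two-state Boltzmann sigmoid. Data here are simulated; real curves come
# off the instrument as fluorescence vs. temperature.
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(T, F_min, F_max, Tm, slope):
    """Two-state unfolding sigmoid; Tm is the midpoint of the transition."""
    return F_min + (F_max - F_min) / (1.0 + np.exp((Tm - T) / slope))

T = np.linspace(25, 95, 141)  # temperature ramp in deg C

# Simulated apo and inhibitor-bound curves (bound protein melts ~6 C higher)
rng = np.random.default_rng(0)
apo   = boltzmann(T, 0.1, 1.0, 55.0, 1.5) + rng.normal(0, 0.01, T.size)
bound = boltzmann(T, 0.1, 1.0, 61.0, 1.5) + rng.normal(0, 0.01, T.size)

def fit_tm(signal):
    popt, _ = curve_fit(boltzmann, T, signal,
                        p0=[signal.min(), signal.max(), 60.0, 2.0])
    return popt[2]  # fitted Tm

tm_apo, tm_bound = fit_tm(apo), fit_tm(bound)
print(f"Tm(apo) = {tm_apo:.1f} C, Tm(bound) = {tm_bound:.1f} C, "
      f"dTm = {tm_bound - tm_apo:+.1f} C")  # larger dTm ~ tighter binding
```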

The main observation from the study is that there is much more discrimination in the Tm values, and therefore in the binding affinities of the inhibitors, when they are bound to the monomeric state than when they are bound to the active, cyclin-bound states. Interestingly, the study also finds that the same CDK in different states provides more discrimination in inhibitor binding than different CDKs in the same state. There is thus plenty of scope for targeting the same CDK based on its different states. In addition, binding to the monomeric state of a specific CDK will be a better bet for engineering selectivity than binding to the active, cyclin-bound state.
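To make "discrimination" concrete, here's a toy calculation with invented numbers: the quantity that matters is the spread of the Tm shifts across an inhibitor panel, which is wider for the monomeric state:

```python
# Toy illustration of "discrimination": the spread of dTm values across an
# inhibitor panel. A wider spread means the kinase state distinguishes more
# sharply between inhibitors. All numbers below are invented.
import numpy as np

dtm = {
    # hypothetical dTm shifts (deg C) for the same eight inhibitors
    "CDK2 monomer":  [1.2, 8.5, 0.3, 6.1, 2.9, 9.4, 0.8, 4.7],
    "CDK2-cyclin A": [4.1, 5.6, 3.8, 5.0, 4.4, 5.9, 3.9, 4.8],
}

for state, shifts in dtm.items():
    shifts = np.array(shifts)
    print(f"{state:15s} mean dTm = {shifts.mean():4.1f} C, "
          f"spread (std) = {shifts.std(ddof=1):4.1f} C")
# The monomeric state shows a much larger spread: the same panel of
# compounds is far better resolved, which is what you want for selectivity.
```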

There are clearly significant structural differences between the inactive and active states of CDKs, mainly related to the movement of the so-called alphaC-helix. This study suggests that structure-based drug designers can target such structural features of the binding sites in the inactive states and design more selective drugs.

High-throughput method for detecting intramolecular hydrogen bonds (IHBs)

We have talked here a couple of times about the importance of intramolecular hydrogen bonds (IHBs) as motifs that hide polar surface area and improve the permeability of molecules. The problem is that there's no high-throughput method for detecting these in, say, a library of a thousand molecules. The best method I know is temperature- and solvent-dependent NMR spectroscopy, but that's pretty time-consuming. Computational methods can certainly be useful, but they can sometimes lead to false positives by overemphasizing hydrogen bonding between groups that merely happen to be in proximity.
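To see where those false positives come from, consider the standard geometric test applied to a single 3D conformer. The sketch below (hypothetical coordinates, generic distance and angle cutoffs) will call an IHB for any conformer that happens to fold, whether or not that conformer is actually populated in solution:

```python
# Sketch of the usual geometric criterion for calling an intramolecular
# H-bond from one 3D conformer: donor-acceptor distance plus
# donor-H...acceptor angle. The catch: any conformer generator that folds
# the molecule will satisfy this test even if that conformer is rarely
# populated in solution - hence the false positives.
import numpy as np

def is_hbond(donor, hydrogen, acceptor,
             max_da_dist=3.5, min_dha_angle=120.0):
    """Geometric H-bond test on coordinates in Angstroms."""
    da = np.linalg.norm(acceptor - donor)
    v1 = donor - hydrogen
    v2 = acceptor - hydrogen
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return da <= max_da_dist and angle >= min_dha_angle

# Hypothetical coordinates: an amide N-H pointing at a carbonyl oxygen
donor    = np.array([0.00, 0.00, 0.00])   # N
hydrogen = np.array([0.95, 0.20, 0.00])   # H
acceptor = np.array([2.80, 0.60, 0.10])   # O
print(is_hbond(donor, hydrogen, acceptor))  # True for this folded geometry
```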

Now here's a promising new method from Pfizer which could lend itself to high-throughput screening for IHBs. The authors use supercritical fluid chromatography (SFC) to study the retention times of molecules with and without IHBs; the mobile phase consists of supercritical CO2 spiked with some methanol. They pick a judicious set of matched molecular pairs, each of which contains a hydrogen-bonded and a non-hydrogen-bonded version of a molecule. They then look at retention times and find - not surprisingly - that the hydrogen-bonded versions, which can hide their polar surface area, have lower retention times on the column.
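Schematically, the readout is a simple paired comparison; the retention times below are invented, but the sign of the difference is the whole point:

```python
# Sketch of the matched-molecular-pair readout: for each pair, compare the
# SFC retention time of the IHB-capable analog with its non-IHB partner.
# Retention times below are invented; the real ones come off the column.
pairs = [
    # (pair id, t_R with IHB (min), t_R without IHB (min)) - hypothetical
    ("pair-1", 2.1, 3.8),
    ("pair-2", 1.7, 3.2),
    ("pair-3", 2.6, 4.5),
]

for name, t_ihb, t_open in pairs:
    print(f"{name}: dt_R = {t_ihb - t_open:+.1f} min")
# Consistently negative differences: the hydrogen-bonded analogs hide polar
# surface area, interact less with the polar stationary phase, and elute
# earlier.
```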

They corroborate these findings with detailed NMR studies of solvent- and temperature-dependent chemical shifts. When they plot retention times against the topological polar surface area (TPSA) they get a nice correlation for non-IHB compounds. For compounds with IHBs, however, TPSA is an imperfect predictor; in that case a new parameter called EPSA, derived from the SFC retention time, turns out to be a better indicator of IHBs than TPSA.
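Here's a toy version of that analysis with invented numbers. The EPSA-like quantity below is just a retention time mapped back through the non-IHB calibration line; it's a loose stand-in for the paper's actual calibrated parameter, not their procedure:

```python
# Toy version of the TPSA-vs-retention analysis: non-IHB compounds fall on
# a line (retention tracks exposed polarity), while IHB compounds elute
# earlier than their TPSA predicts. All numbers are invented.
import numpy as np

tpsa_no_ihb = np.array([40, 60, 80, 100, 120])
t_r_no_ihb  = np.array([1.5, 2.3, 3.2, 4.0, 4.9])  # nice linear trend

tpsa_ihb = np.array([80, 100, 120])
t_r_ihb  = np.array([2.0, 2.6, 3.1])               # systematically early

# Calibration line from the non-IHB set
slope, intercept = np.polyfit(tpsa_no_ihb, t_r_no_ihb, 1)
r = np.corrcoef(tpsa_no_ihb, t_r_no_ihb)[0, 1]
print(f"non-IHB correlation: r = {r:.3f}")

# An EPSA-like quantity: map each retention time back through the
# calibration line to an "effective" polar surface area.
epsa_ihb = (t_r_ihb - intercept) / slope
for tpsa, epsa in zip(tpsa_ihb, epsa_ihb):
    print(f"TPSA = {tpsa:5.1f}  ->  effective PSA ~ {epsa:5.1f}")
# The effective values come out well below the nominal TPSA - the signature
# of polar surface area hidden by an intramolecular hydrogen bond.
```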

This seems to me a potentially valuable method for quickly assessing IHBs in relatively large numbers of compounds, especially now that we have started worrying about how to get "large" molecules like macrocycles and peptides across membranes. And since I assume every medium or large company has access to SFC, the technology itself should not pose a significant barrier. One more handy tool for looking at "beyond rule of five" compounds.

Reference:

"A High Throughput Method for the Indirect Detection of Intramolecular Hydrogen Bonding." J. Med. Chem., Just Accepted Manuscript. Publication date (web): March 18, 2014. DOI: 10.1021/jm401859b

Why the new NIH guidelines for psychiatric drug testing worry me

Chlorpromazine
Psychiatric drugs have always been a black box. The complexity of the brain has meant that most successful drugs for treating disorders like depression, psychosis and bipolar disorder were discovered by accident and by trial and error rather than by rational design. There have also been few truly novel classes of these drugs discovered since the 70s (and surely nothing like chlorpromazine, which caused a bona fide revolution in the treatment of brain disorders). Psychiatric drugs also challenge the classic paradigm linking a drug to a single defective protein whose activity it blocks or enhances. When the molecular mechanisms of many psychiatric medicines were studied, it was found that they worked by binding to multiple receptors for neurotransmitters like serotonin and dopamine; in other words, psychiatric drugs are "dirty".

There is a running debate over whether a drug needs to be clean or dirty in order to be effective. The debate has been brought into sharp focus by three decades of targeted drug discovery in which selective, (relatively) clean drugs hitting single proteins have led to billion-dollar markets and relief for millions of patients. For instance, consider captopril, which blocks the action of angiotensin-converting enzyme (ACE); for a long time it was the world's best-selling blood pressure-lowering drug. Similar single drug-single protein strategies have been effective for other major diseases like AIDS (HIV protease inhibitors) and heart disease (HMG-CoA reductase inhibitors like Lipitor).

However, recent thinking has veered in the direction of drugs that are "selectively non-selective". The logic is actually rather simple. Most biological disorders are modulated by networks of proteins spanning several physiological systems. While some of these are more important than others as drug targets, there are sound reasons to think that targeting a judiciously picked set of proteins, rather than just a single one, would be more effective in treating a disease. The challenge has been to purposely engineer this hand-picked promiscuity into drugs; mostly it is found accidentally and in retrospect, as in the case of the anticancer drug Gleevec. Since we couldn't do this rationally (it's hard enough to target even a single protein rationally), the approach was simply to test different drugs without worrying about their mechanism and let biology decide which ones work best. In fact, in the absence of detailed knowledge about the molecular targets of drugs this became a common approach in many disorders, and even today the FDA does not necessarily require proof of mechanism of action for a drug as long as it's shown to be effective and safe. In psychiatry this has been the status quo for a long time.

But now it looks like this approach has run into a wall. Lack of knowledge of the mode of action of psychiatric drugs may have led to accidental discoveries, but the NIMH thinks that it has also stalled the discovery of original drugs for decades. The agency has taken note of this and, as a recent editorial in Nature indicates, is now going to require proof of mechanism of action for new psychiatric medicines. The new rules came from an appreciation of ignorance:
"Part of the problem is that, for many people, the available therapies simply do not work, and that situation is unlikely to improve any time soon. By the early 1990s, the pharmaceutical industry had discovered — mostly through luck — a handful of drug classes that today account for most mental-health prescriptions. Then the pipeline ran dry. On close inspection, it was far from clear how the available drugs worked. Our understanding of mental-health disorders, the firms realized, is insufficient to inform drug development."
For several years, the NIMH has been trying to forge a different approach, and late last month institute director Thomas Insel announced that the agency will no longer fund clinical trials that do not attempt to determine a drug or psychotherapy’s mechanism of action. Without understanding how the brain works, he has long maintained, we cannot hope to know how a therapy works."
This is a pretty significant move on the part of the NIMH since, as the article notes, it could mean a funding cut for about half of the clinical trials that the agency is currently supporting. The new rules would require researchers to have much better hypotheses about the targets or pathways in the brain that they think their therapies are hitting, whether those therapies are aimed at depression or ADD. So basically you can no longer just take a small molecule that seems to make mice happier and pitch it in clinical trials for depression.

Personally I have mixed feelings about this development. It would indeed be quite enlightening to know the mechanisms of action of neurological drugs, and I will be the first to applaud if we could neatly direct therapies at specific patient populations based, for instance, on known differences in signaling pathways. But the fact remains that our knowledge of the brain is still too primitive for us to easily formulate target-based hypotheses for new psychiatric drugs. For complex, multifactorial diseases like schizophrenia there are still dozens of hypotheses about mechanisms of action. In addition there is a reasonable argument that it's precisely this obsession with targets and mechanisms of action that has slowed down pharmaceutical development; the thinking is that hitting well-defined targets has been too reductionist, and it often fails because it disregards the complexities of biology. If you really wanted to discover a new antidepressant, it might be better to look at what drug makes mice happier than to try to design drugs to hit specific protein targets that may or may not be involved in depression.

So yes, I am skeptical about the NIMH's new proposal, not because an understanding of mechanism of action is futile - quite the opposite, in fact - but because our knowledge of drug discovery and design is still not advanced enough for us to formulate and successfully test target-based hypotheses for complex psychiatric disorders. The NIMH thinks that its approach is necessary because we haven't found new psychiatric drugs for a while, but in the face of biological ignorance it may well make the situation worse. I worry that this kind of requirement would simply slow down new psychiatric drug discovery for want of knowledge. Perhaps there is a middle ground in which a few trials are required to demonstrate mechanism of action while the majority proceed on their own merry way, simply banking on the messy world of biology to give them the answer. Biology is too complex to be held hostage to rational thinking alone.

Cosmological inflation, water in proteins and JFK: The enigma of ignorance

I was immersed in the American Chemical Society's national meeting in Dallas this week, which meant that I could catch no more than wisps of Monday's thrilling announcement from cosmology that could potentially confirm the prediction of inflation. If this turns out to be right it would indeed be a landmark discovery. My fellow Scientific American blogger John Horgan - who performs the valuable function of being the wet blanket of the network - prudently cautions us to wait for confirmation from the Planck satellite and from other groups before we definitively proclaim a new era in our understanding of the universe. As of now this does look like the real deal, though, and physicists must be feeling on top of the world. Especially Andrei Linde, whose endearing reaction to a surprise announcement at his front door by a fellow physicist has been captured on video.

But as social media and the airwaves were abuzz with news of this potentially historic discovery, I was sitting in a session devoted to the behavior of water in biological systems, especially in and around proteins. Even now we have little understanding of the ghostly networks of water molecules surrounding biological molecules that allow them to interact with each other. We have some understanding of the thermodynamic variables that influence these interactions, but we still have to dissect those parameters individually on a case-by-case basis; there is still no general algorithm. Our efforts are hampered both by the lack of an overarching theoretical framework and by computational obstacles. The water session was part of a larger one on drug design and discovery. The role of water in influencing the binding of drugs to proteins is only one of the unknowns that we struggle with; there are dozens of other factors - both known unknowns and unknown unknowns - which contribute to the behavior of drugs at the molecular level. We have made some promising advances, but there is clearly a long way to go.

Sitting in these talks, surrounded by physicists and chemists who were struggling to apply their primitive computational tools to drug design, my thoughts about water were briefly juxtaposed with the experimental observation of cosmological inflation. And I could not help but think about the still-gaping chasms that exist in our understanding of so many different fields.

Let's put this in perspective: we have now obtained what is likely the first strong experimental evidence for cosmological inflation. This is a spectacular achievement of both experiment and theory. If it holds up there will no doubt be much well-deserved celebration, not to mention at least one well-deserved Nobel Prize.

But meanwhile, we still cannot design a simple small organic molecule that will bind to a protein involved in a disease, be stable inside the body, show minimal side effects and cure or mitigate the effects of that disease. We are about as far away from this goal as physics was from discovering the Big Bang two hundred years ago, perhaps more. Our cancer drugs are still dirty and most of them cause terrible side effects; we just don't have a good enough scientific understanding of drug behavior and the human body to minimize those effects. Our knowledge of neurological disorders like Alzheimer's disease is even more backward; there we don't even know the exact causes, let alone how to mitigate them. We still waste billions of dollars designing and testing new drugs in a staggering process of attrition that we would be ashamed of if we knew any better. And as I mentioned in my series of posts on challenges in drug discovery, even something as simple as getting drugs past cell membranes is still an unsolved problem at a general level. So is the general problem of figuring out the energy of binding between two arbitrary molecules. The process of designing medicines, both on a theoretical and an experimental level, is still a process of fits and starts, of Hail Mary passes and failed predictions, of groping and simply lucking out rather than proceeding toward a solution along anything resembling a smooth trajectory. We are swimming in a vast sea of ignorance, floundering because we often simply don't have enough information.

The fact remains that we may have now mapped the universe from its birth to the present, but there are clearly areas of science where our knowledge is primitive, where we constantly fight against sheer ignorance, where we are no more than children playing with wooden toys. In part this is simply a matter of what we call domains of expertise. There are parts of nature which bend to our equations after intense effort, and there are other parts where those equations become almost pointless because we cannot solve them without significant approximations. The main culprit for this failure is the limitation of reductionism, which we have discussed many times on this blog. Physics can solve the puzzle of inflation but not the conundrum of side effects because the latter is the product of a complicated emergent system, every level of which demands an understanding of fundamental rules operating at that level. Physics is as powerless in designing drugs today - or in understanding the brain, for that matter - as it is successful in calculating the magnetic moment of the electron to ten decimal places. Such is the paradox of science: the same tools that allow us to understand the beginnings of the cosmos fail when applied to the well-being of one of its tiniest, most insignificant specks of matter.

Scientists around the world are calling the latest discovery "humbling". But to me the finding is far more humbling because it illuminates the gap between what we know and how much more we still have to find out. This may well be a historic day for physics and astronomy, but there are other fields like chemistry, neuroscience and medicine where we are struggling even with seemingly elementary problems. Science as a whole continues to march into the great unknown, and there remains an infinite regress of work to do. That's what makes it our impossibly difficult companion, one whose company we will keep for eternity. While we have reached new peaks in some scientific endeavors, we have barely started clearing the underbrush and squinting into the dark forest in others. It is this ignorance that keeps me from feeling too self-congratulatory as a member of the human species whenever a major discovery like this is announced. And it is this ignorance that makes our world an open world, a world without end.

A comparison, however, provided a silver lining to this feeling of lack of control. Catching a break in the day's events I strolled down Houston Street after lunch. A little over fifty years ago a car drove down this street and then slowed for the sharp left turn onto Elm Street. At the intersection stood the Texas School Book Depository. Three shots rang out, a young President's life was snuffed out and the river of American history changed course forever. All because of the rash actions of a confused and deranged 24-year-old former Marine. Looking out of the sixth-floor window I could see how a good marksman could easily take the shot. What really strikes you, however, is the perfect ordinariness of the location, a location made extraordinary in space and time by a freak accident of history. It compels us to confront our utter helplessness in the face of history's random acts. Oswald got lucky and left us floundering in the maelstrom of misfortune.
 
But a cosmic perspective may help assuage our incomprehension and provide salve for our wounds. Carl Sagan once said that if you want to make an apple pie from scratch, you must first invent the universe. That fateful bullet on November 22, 1963 was the result of an infinitude of events whose potential existence was energized by the same inflation that we are now exploring through ground- and space-based telescopes and the ingenuity of our scribblings. There is something reassuring in the fact that while we still do not understand the enigma of human thought and feeling that dictated the trajectory of that bullet, we can now at least understand where it all started. That has to count for something.

Enthusiasm and promise at the ACS Dallas National Meeting

I am writing from the Dallas airport, from where I am heading back home after a fantastic national meeting of the American Chemical Society (ACS). This was definitely the best meeting of its kind I have attended so far, mainly because I had been generously invited to give a talk on the impact of information technology on chemistry and related disciplines. The topic is of course quite vast and general, so I spent all my free time this month preparing for it (which explains the hiatus here).

The day-long session and subsequent panel discussion, along with an excellent lunch and dinner, were part of the ACS Graduate Student Symposium and were organized by graduate students from the University of Texas at Austin. These young men and women did a phenomenal job, both in picking a remarkably diverse set of topics for discussion and in the warm reception that we got at the session. Although I have not been out of graduate school that long myself, I was really impressed by their enthusiasm and organizational skills. The entire program was put together with care and good sense. I think my own talk went well, but what I really enjoyed were the talks by the other speakers and panelists. A big thanks once again to the organizers for giving me this memorable opportunity; I hope they got as much out of my contribution as I got from the day's events.

Gautam Bhattacharya from Clemson University gave a talk about the ethical and professional development of graduate students and professors; he made me realize how much more sensitive professors need to be about the personal and professional development of their students, and how much students themselves need to ask the right questions about their progress. One of the most important messages he drove home was that "learning by doing" is really a misleading proposition; what really enables you to learn is reflection, and this is something that's not stressed enough in college and grad school. Gautam's work matters greatly for the professional and psychological development of graduate students, and it deserves to be widely appreciated.

Paula Stephan followed up with a fantastic talk on trends in science hiring, the changing nature of science careers and the state of the job market. Paula's tool is hard data, culled from 40 years of surveys by the NSF and other agencies; something about her presentation reminded me of the old saying about the slaying of a beautiful hypothesis by an ugly fact. There was enough in her talk to warrant a separate post, but the gist was that postdocs and grad students need much more exposure to alternative careers and that we really need to focus on quality instead of quantity when training our students. A lot of postdocs seem to appreciate the state of the job market but often think of themselves as good enough to surmount its difficulties. Postdocs thus need a much better idea of how long they want to continue their gigs and what they should expect in the near future. Stephan had actually led a panel that made these recommendations; not surprisingly, almost all of them were ignored. I completely agree with her points, but I find it hard to shake off my depressing cynicism regarding the ossified structure of the government-academic funding system and the fundamental psychological changes we would need to bring about in people with entrenched opinions. But let's carry on.

Sonja Krane, a managing editor at the Journal of the American Chemical Society, gave a nice talk about preparing and submitting articles. As much as I respect formal publications like JACS, the successful rise of blogging as a valid second tier of peer review over the last decade or so makes me increasingly think of the formal publication system in rather quaint terms. Bruce Gibb from Tulane University was the last speaker. He gave a funny, entertaining and inspiring talk about the value of basic R&D and the futility of "picking winners" when funding science, especially since winners often appear out of the blue and from unexpected quarters and therefore cannot be predicted. Bruce also offered a pointed analysis of the increasing encroachment of administrative personnel in universities, who sap resources away from research and students.

In my own talk I gave a bird's-eye view of how computers have affected chemistry, focusing on four main areas: data, simulation and analysis, what I call "sociology" (namely the culture of chemistry research communication), and future challenges. I started out by pointing to the exponential growth in chemical data and its rapid (although not necessarily cheap) accessibility. This data has completely changed the way both theoretical and experimental chemists visualize chemical structures, analyze trends in chemical information and relate structure to properties using the tools of cheminformatics. Along with data, simulation now has a seat at the table, propped up by phenomenal advances in hardware and software over the last two decades. Advances like molecular dynamics, harnessing the wisdom of crowds, and using knowledge from chemical and biological databases to predict the properties and uses of new materials, drugs and food products are definitely shaping the landscape of chemistry. As far as I am concerned the future shines with promise and knowledge.

The final part of my talk addressed the role of blogs and social media in serving as a valid source of peer review and in performing a variety of non-research functions (analyzing academic culture, the job market, etc.) that are often not part of formal trade journals. I have been lucky to witness the rise of chemistry blogging (and science blogging in general, for that matter) from a limited milieu inhabited by a few enthusiasts to a serious, mainstream source of enlightenment and criticism. I focused on two case studies - the hexacyclinol controversy and the Breslow "spacedinos" brouhaha - as examples of how blogging and social media can serve not only as legitimate means of research analysis, but sometimes as the only ones.

The acknowledgment of blogs as an increasingly valuable mode of communicating research and improving communication skills was affirmed in the stimulating panel discussion after the talks. Most of the audience seemed to concur that it might be a good idea for graduate students and professors to start writing about their research. This is especially valuable for graduate students, since most of them are going to end up in non-academic jobs where softer skills like science communication might be even more important. I was very gratified to see enthusiasm for blogging among both young and old members of the audience. The younger graduate students and undergraduates had some anxiety about their careers, but that anxiety was more than offset by the clear enthusiasm for science in all its forms that rippled across the room. Leaving the symposium I could not help but feel that, in spite of the problems and challenges, all is well with the future of science, at least as reflected through the eyes of its young practitioners.

Pity the postdoc

PNAS has an interesting interview with the well-known biochemist Greg Petsko about the plight of the postdoc. Postdocs are the main drivers of published academic research, so it was a surprise to Petsko - and a sad surprise to me - to learn how woefully uninformed many US academic institutions are about the numbers and kinds of postdocs they have. They are almost equally uninformed about where postdocs go after they do their time.

The problem, as Petsko describes it, is that the many inequities in the system - a shrunken job market, limited funding, the propensity of PIs to squeeze as much as possible from postdocs - have consigned postdocs to a generally uncertain and grief-filled existence, with stints of five to eight years now depressingly common. Petsko also talks about how many graduate students do postdocs simply because it seems like the next thing to do and because it seems mandatory for an academic career; what they don't know is that academic careers are very much the exception these days. As he says, it would really help to expose both graduate students and postdocs to alternative careers.


PNAS: Part of the plight of US postdoctoral fellows, Petsko says, can be attributed to unrealistic expectations and perverse incentives in the scientific job market.

Petsko: When I look around at my own university and the universities I visit, I see lots of postdocs; I see older postdocs than I used to see when I was younger. I see people doing postdocs for what seem to me to be considerably longer periods than they used to when I was younger, and I see them in many cases doing multiple postdocs, what I would call serial postdocs, if you will. And, I asked myself what drives these trends, and I think they’re driven by a number of things. One is that the bar has been raised, maybe unrealistically, for people to get from a postdoctoral position to an academic position in terms of the amount of work they are expected to accomplish, the number of papers that people seem to expect them to have published, and the degree of training they seem to have to have. I think the bar has been further raised for young principal investigators, young faculty in universities in terms of the amount of work they have to do to get a grant, to get a grant renewed, to publish papers in leading journals, and so forth. The net result of those perverse incentives is that people stay in postdocs longer because it takes more time to try to climb over this very high bar, and they tend to have multiple postdocs because they think they need lots of time and lots of experience to accumulate a vast CV before going out and applying for the limited number of jobs that are out there.

PNAS: With the changing economic climate that has increasingly affected scientific institutions, the notion of an “alternative career” in science might itself need revisiting, says Petsko.

Petsko: They go into a postdoctoral position almost by default because they think it’s what you are supposed to do, and in many cases they’re unaware that fewer than a third of them will ever do academic science. That, in fact, people like me are now the alternative career, and that not being an academic is by far the majority outcome for postdocs. And if they knew that, they might make different decisions about whether to do a postdoc, or what kind to do, or how long to do it for, and if they understood also what their realistic career options are, they might also choose to try to acquire more information about some of those options, which in many cases we don’t provide for them. If I think about what would benefit my own postdocs, boy, I think it would be great if they had some internships that they could try out some of these careers, if we could provide those for them. Certainly exposure to people with different careers, bringing them into a university, have them sit down and talk to postdocs about what it’s like to be a patent lawyer, a science writer, a policy wonk in Washington, all kinds of things like that.