
Tautomers need some love

Now here's a paper about something that every college student knows about and yet which drug designers don't consider as often as they should: tautomers. Yvonne Martin (previously at Abbott) has a nice article about why tautomers are important in drug design and what the continuing challenges are in predicting and understanding them. It should serve as a good reminder for both experimentalists and theoreticians to consider tautomerism in their projects.

So why are tautomers important? For one thing, a particular tautomer of a drug molecule might be the one that binds to its protein target. More importantly, this tautomer might be the minor tautomer in solution, so knowing the major solution tautomer may not always tell you which form is bound to the protein. This is analogous to conformational equilibria, in which the conformer that binds to a protein is more often than not a minor conformer. Martin illustrates some remarkable cases in which both tautomers of a particular kinase inhibitor were observed in the same crystal structure. In many cases, quantum chemical calculations indicate a considerable energy difference between the minor, protein-bound tautomer and its major counterpart. A further fundamental complication arises from the fact that a change of solvent hugely impacts tautomer equilibria, and data on tautomers in aqueous solution is not always available because of problems like poor solubility.
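To put "considerable energy difference" in perspective, here is a minimal back-of-the-envelope sketch (my own illustration, not something from Martin's paper) of how quickly a tautomer's solution population drops off with its free-energy penalty relative to the major form, using a simple two-state Boltzmann relation.

```python
# Rough, illustrative calculation: population of a minor tautomer that lies
# dG kcal/mol above the major tautomer in a two-state equilibrium,
# via K = [minor]/[major] = exp(-dG / RT).
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15     # room temperature, K

def minor_fraction(dG_kcal):
    """Fraction of the minor tautomer at equilibrium."""
    K = math.exp(-dG_kcal / (R * T))
    return K / (1.0 + K)

for dG in (1.0, 2.0, 3.0, 5.0):
    print(f"dG = {dG:.1f} kcal/mol -> minor tautomer ~{100 * minor_fraction(dG):.2f}% in solution")
```

Even a penalty of 2-3 kcal/mol pushes a tautomer down to a few percent or less in solution, and yet that can still be the form the protein picks out.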

Thus, predicting tautomers is crucial if you want to deal with ligands bound to proteins. It is also important for predicting parameters like logP and blood-brain barrier penetration, which in turn depend on accurate estimates of hydrophobicity. Different tautomers have different hydrophobicities, and Martin notes that different methods and programs can calculate different hydrophobicity values for the same tautomer, which directly affects calculations of logP and blood-brain barrier penetration. Tautomeric state is also crucial in calculations like docking and QSAR.

Sadly, there is not enough experimental data on tautomer equilibria. Such data is admittedly hard to obtain; the measured pKa of a compound is the net result of all tautomers contributing to the equilibrium, and the number of tautomers can sometimes be tremendous. For instance, 8-oxoguanine, a well-known DNA lesion caused by radiation, can exist in a hundred or so ionic and neutral tautomers. Now let's say you want to dock this compound to a protein to predict a ligand orientation. Which tautomer on earth do you choose?
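If you do want to confront that combinatorial problem head-on, cheminformatics toolkits can at least enumerate candidate tautomers for you. Below is a minimal sketch using RDKit's tautomer enumerator (my choice of tool for illustration, not one discussed in the paper) on a structure intended to be 8-oxoguanine; it also prints a Crippen logP for each tautomer to show how hydrophobicity estimates shift with tautomeric state. Note that this enumerator handles neutral tautomers only, so the ionic forms that swell the count toward a hundred are not included.

```python
# Illustrative sketch (assumed tooling: RDKit): enumerate neutral tautomers
# of a molecule and compare their calculated logP values.
from rdkit import Chem
from rdkit.Chem import Descriptors
from rdkit.Chem.MolStandardize import rdMolStandardize

# SMILES intended as 8-oxoguanine (7,8-dihydro-8-oxoguanine); swap in any ligand.
mol = Chem.MolFromSmiles("NC1=NC2=C(C(=O)N1)NC(=O)N2")

enumerator = rdMolStandardize.TautomerEnumerator()

for taut in enumerator.Enumerate(mol):          # neutral tautomers only
    smi = Chem.MolToSmiles(taut)
    logp = Descriptors.MolLogP(taut)            # Crippen logP; other programs will disagree
    print(f"{smi:<40s}  logP = {logp:+.2f}")

# RDKit's "canonical" tautomer is a bookkeeping choice, not necessarily
# the tautomer a protein actually binds.
print("Canonical tautomer:", Chem.MolToSmiles(enumerator.Canonicalize(mol)))
```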

Clearly, calculating tautomers can be very important for drug design. As Martin mentions, more experimental as well as theoretical data on tautomers is necessary; however, such research, like the solvation measurements discussed in a past post, usually falls under the heading of "too basic" and therefore may not be funded by the NIH. But funded or not, ligand design cannot succeed without consideration of tautomers. What was that thing about basic research repaying its worth many times over in applications?

Martin, Y. (2009). Let's not forget tautomers. Journal of Computer-Aided Molecular Design. DOI: 10.1007/s10822-009-9303-2

The model zoo

So I am back from the eCheminfo meeting at Bryn Mawr College. For those so inclined (both computational chemists and experimentalists), I would strongly recommend the meeting for its small size and the consequent close interaction. The campus, with its neo-Gothic architecture and verdant lawns, provides a charming environment.

At most of these meetings I am left with a slightly unsatisfied feeling at the end of many talks. Most computational models used to describe proteins and protein-ligand interactions are patchwork models based on several approximations. Often one finds several quite different methods (force fields, QSAR, quantum mechanics, docking, similarity-based searching) giving similar answers to a given problem. The choice of method is usually made on the basis of availability, computational power and past successes rather than some sound judgment that would allow one to choose that particular method over all others. And as usual, it depends on what question you are trying to ask.

In such cases I am always left with two questions. First, if several methods give similar answers (and sometimes if no method gives the right answer), which is the "correct" method? Second, because no one method reliably gives the right answer, one cannot escape the feeling at the end of a presentation that the results could have been obtained by chance. Sadly, it is not even always possible to calculate the probability that a result was obtained by chance. An example is our own recently published work on the design of a kinase inhibitor; docking was remarkably successful in that endeavor, and yet it is hard to pinpoint why it worked. A professor might also use some complex model combining neural networks and machine learning and get results agreeing with experiment, yet by that point the model may have become so abstract and complex that one would have trouble understanding any of its connections to reality (that is partly what happened with financial derivatives models, when their creators themselves stopped understanding why they really worked, but I am digressing...).

However, I remind myself in the end of something that is always easy to forget: models are emphatically not supposed to be "correct" in the sense of reproducing "reality", no matter what fond hopes their creators may have. The only way to gauge the "correctness" of a model is to compare it to experiment. If several models agree with experiment, it may be meaningless to argue about which one is the right one. Metrics have been suggested to discriminate between such similar models, for instance by invoking that time-honored principle of Occam's Razor, under which a model with fewer parameters might be preferred. Yet in practice such philosophical distinctions are hard to apply and the details can be tricky.
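One concrete, if crude, way of acting on that Occam's Razor intuition (my own illustration, not something raised at the meeting) is an information criterion such as the AIC, which penalizes a model's fit by the number of parameters it uses. The toy comparison below uses entirely made-up numbers for two hypothetical QSAR-style fits to the same data.

```python
# Minimal sketch of the Akaike Information Criterion (AIC) for least-squares
# fits: AIC = n * ln(RSS / n) + 2k (up to an additive constant). The numbers
# below are entirely hypothetical.
import math

def aic(n_points, rss, n_params):
    return n_points * math.log(rss / n_points) + 2 * n_params

n = 50  # hypothetical number of measured activities

aic_simple  = aic(n, rss=4.1, n_params=3)    # lean model, 3 descriptors
aic_complex = aic(n, rss=3.8, n_params=15)   # elaborate model, marginally better fit

print(f"AIC, 3-parameter model : {aic_simple:.1f}")
print(f"AIC, 15-parameter model: {aic_complex:.1f}")
# Lower AIC is preferred: the extra parameters have to buy a real drop in error.
```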

Ultimately, while models can work well on certain systems, I can never escape the nagging feeling that we are somehow "missing reality". Divorcing models from reality, whether or not they are supposed to represent it, can have ugly consequences, and I think all of these models are in danger of falling into a hole on specific problems; adding too many parameters to comply with experimental data can easily lead to overfitting, for instance. But to be honest, what we are trying to model at this point is so complex (the forces dictating protein folding or protein-ligand interactions only get more convoluted, like Alice's rabbit hole) that this is probably the best we can do. Even nominally ab initio quantum mechanics leans on approximations and parameter fitting when applied to the real behavior of biochemical systems. The romantic Platonists like me will probably have to wait, perhaps forever.

New Book

Dennis Bray's "Wetware: A Computer in Every Living Cell" discusses the forces of physics, chemistry and self-assembly that turn a cell into a computer-like concatenation of protein networks that communicate, evolve and perform complex functions. The origin of life is essentially a chemistry problem, and it centers on self-assembly.

At the Bryn Mawr eCheminfo Conference

From Monday through Wednesday I will be at the eCheminfo "Applications of Cheminformatics & Chemical Modelling to Drug Discovery" meeting at Bryn Mawr College, PA. The speakers and topics as seen in the schedule are interesting and varied. As usual, if anyone wants to crib about the finger food I will be around. I have heard the campus is quite scenic.

Coyne vs Dawkins

With this year being Darwin's 200th birth anniversary, we have seen a flurry of books on evolution. Out of these, two stand out for the authority of their writers and their core focus on the actual evidence for evolution: Jerry Coyne's "Why Evolution is True" and Richard Dawkins's "The Greatest Show on Earth". I have read Coyne's book and it is definitely an excellent introduction to evolution. Yet I am about 300 pages into Dawkins and cannot help being sucked in again by his trademark clarity and explanatory elegance. I will have detailed reviews of the two books later, but for now here are the main differences I can think of:

1. Dawkins draws on evidence from more than just biology; he also brings in evidence from history, geology and astronomy.

2. Dawkins's clarity of exposition is, of course, highly commendable. You will not necessarily find the literary sophistication of the late Stephen Jay Gould here, but for straight and simple clarity this is marvelous.

3. A minor but noteworthy difference is the inclusion of dozens of absorbing color plates in the Dawkins book which are missing in Coyne's.

4. Most importantly, Dawkins's examples of evolution are on the whole more fascinating and diverse than Coyne's, although Coyne's are pretty good too. For instance, Coyne dwells more on the remarkable evolution of the whale from land-dwelling animals (with the hippo being its closest living cousin). Also, Coyne's chapters on sexual selection and speciation are among the best such discussions I have come across.

Dawkins, on the other hand, has a fascinating account of Michigan State University bacteriologist Richard Lenski's amazing experiments with E. coli, which have been running for over twenty years and have provided a remarkable window into evolution in real time like nothing else. Also marvelously engaging are his descriptions of the immensely interesting history of the domestication of the dog. Probably the most striking example of evolution in real time in his book is his clear account of University of Exeter biologist John Endler's fabulous experiments with guppies, in which the fish evolved drastically before our very eyes within relatively few generations under carefully regulated and modified selection pressure.

Overall then, Coyne's book does a great job of describing evolution but Dawkins does an even better job of explaining it. As usual Dawkins is also uniquely lyrical and poetic in parts with his sparkling command of the English language.

Thus I would think that Dawkins and Coyne (along with probably Carl Zimmer's "The Tangled Bank" due to be published on October 15) would provide the most comprehensive introduction to evolution you can get.

As Darwin said, "There is grandeur in this view of life". Both Coyne and Dawkins serve as ideal messengers to convey this grandeur to us and to illustrate the stunning diversity of life around us. Both are eminently readable.

The 2009 Nobel Prize in Chemistry: Ramakrishnan, Steitz and Yonath


Source: Nobelprize.org

Venki Ramakrishnan, Ada Yonath and Tom Steitz have won the 2009 Nobel Prize in Chemistry for their pioneering studies of the structure of the ribosome. The prize had been predicted by many for years, and I myself have included these names in my predictions for a couple of years now; in fact I remember talking with a friend about Yonath and Ramakrishnan getting it as early as 2002. Yonath becomes the first Israeli woman to win a science Nobel Prize, and Ramakrishnan the first Indian-born scientist to win the chemistry prize.

The importance of the work has been obvious for many years, since the ribosome is one of the most central components of the machinery of life in all organisms. Every school student is taught about its function as the giant player that holds together the multicomponent assembly of translation, the process in which the sequence of letters in RNA is read off to produce proteins. The ribosome comes as close to being an assembly line for manufacturing proteins as anything possibly can. It is also an important target for antibiotics like tetracycline. It is undoubtedly a richly deserved accolade. The prize comes close on the heels of the 2006 prize awarded to Roger Kornberg for his studies of transcription, the process preceding translation in which DNA is copied into RNA.

The solution of the ribosome structure by x-ray crystallography is a classic example of work that has a very high chance of getting a prize because of its fundamental importance. X-ray crystallography is a field that has been honored many times, and as people have noted before, if there is any field where you stand a good chance of winning a Nobel Prize, it is x-ray crystallography applied to some important protein or biomolecule. In the past, the crystal structures of hemoglobin, potassium ion channels, photosynthetic proteins, the "motor" that generates ATP and, most recently, the machinery of genetic transcription have all been honored with Nobel Prizes. It is also the classic example of a field where the risks are as high as the rewards, since you may easily spend two decades or more working on a structure and in the end fail to solve it or, worse, be scooped.

However, when this meticulous effort pays off the fruits are sweet indeed. In this case the three researchers have been working on the project for years and their knowledge has built up not overnight but incrementally through a series of meticulous and exhaustive experiments reported in top journals like Nature and Science. It's an achievement that reflects as much stamina and the ability to overcome frustration as it does intelligence.

It's a prize that is deserved in every way.

Update: As usual, the chemistry blog world seems to be divided over the prize, with many despondently wishing that a more "pure" chemistry prize had been awarded. However, this prize is undoubtedly being awarded primarily for chemistry.

Firstly, as some commentators have pointed out, crystallography was only one aspect of the ribosome work, albeit the most important one. A great deal of chemical manipulation had to be carried out in order to shed light on the ribosome's structure and function.

Secondly, as Roger Kornberg pointed out in his interview (when similar concerns were voiced), the prize is being awarded for the determination of an essentially chemical structure, in principle no different from the myriad structures of natural and unnatural compounds that have been the domain of classical organic chemistry for decades.

Thirdly, the ribosome can be thought of as an enzyme that forms peptide bonds. To this end, solving the structure meant pinpointing the precise locations of the catalytic groups responsible for the all-important peptide bond-forming reaction. Finding the locations of these groups is no different from determining the catalytic residues of a more conventional enzyme like chymotrypsin or ornithine decarboxylase.

Thus, the prize falls quite squarely in the domain of chemistry. It is naturally chemistry as applied to a key biological problem, but I don't doubt that the years ahead will see prizes given to chemistry as applied to the construction of organic molecules (palladium catalysis) or to the synthesis of energy-efficient materials (perhaps solar cells).

I understand that having a chemistry prize awarded in one's own area of research is especially thrilling, but, as a modified JFK quote would have it, first and foremost "Wir sind Chemiker". We are all chemists, irrespective of our sub-disciplines, and we should all be pleased that an application of our science has been rewarded, an application that only underscores the vast and remarkably diverse purview of our discipline.

Update: Kao, Boyle and Smith

Seems nobody saw this coming but the importance of optical fibers and CCDs is obvious.

It's no small irony that the CCD research was done in 1969 at Bell Labs. With this, Bell Labs may well be the most productive basic industrial research organization in history, and yet today it is a mere shadow of its former self. The CCD research was done 40 years ago, and the time in which it was done seems disconnected from the present not just temporally but more fundamentally. The research lab that once housed six Nobel Prize winners on its staff can now count a total of four scientists in its basic physics division.

The 80s and indeed most of the postwar decades before then seem to be part of a different universe now. The Great American Industrial Research Laboratory seems like a relic of the past. Merck, IBM, Bell Labs...what on earth happened to all that research productivity? Are we entering a period of permanent decline?

The 2009 Nobel Prize in Physiology or Medicine


Source: Nobelprize.org

The 2009 Nobel Prize in Physiology or Medicine has been awarded to Elizabeth Blackburn (UCSF), Carol Greider (Johns Hopkins) and Jack Szostak (Harvard) for their discovery of the enzyme telomerase and its role in human health and disease.

This prize was highly predictable because the trio's discovery is of obvious and fundamental importance to an understanding of living systems. DNA replication is a very high-fidelity process in which new nucleotides are added to the newly synthesized DNA strand with an error rate of only about 1 in 10^9. Highly efficient repair enzymes act on damaged or incorrectly paired DNA and fix it with impressive accuracy. And yet the process has some intrinsic problems. One of the most important concerns the shortening of one of the two newly synthesized strands of the double helix with every successive round of duplication, an inherent result of the manner in which the two strands are synthesized.

This shortening affects the ends of chromosomes, which are termed telomeres. As our cells divide, generation after generation, the chromosomal ends get progressively shorter. Ultimately they become too short for the chromosomes to remain functional, and the cell puts into motion the machinery of apoptosis, or cell death, which eliminates cells with such chromosomes. The three recipients of this year's prize discovered an enzyme called telomerase that prevents the shortening of chromosomes by adding new nucleotides to their ends. Greider was actually Blackburn's PhD student at Berkeley when they did the pioneering work (not every PhD student can claim that his or her thesis was recognized by a Nobel Prize). The group not only discovered the enzyme but demonstrated through a series of comprehensive experiments that mutant cells and mice lacking it had shortened life spans and other fatal defects, indicating the enzyme's key role in preventing cell death. At the same time, they and other scientists crucially discovered that certain kinds of cancers, brain tumors for instance, have high levels of telomerase. This means that cancer cells maintain their chromosome ends more efficiently than normal cells, accounting in part for their increased activity and life span and their ability to outcompete normal cells for survival. (As usual, what is beneficial for normal cells unfortunately turns out to be even more beneficial for cancer cells; this need to address similar processes in both kinds of cells is part of what makes cancer such a hard disease to treat.)

The work is thus a fine example of both pure and applied research. Most of its implications lie in an increased understanding of the fundamental biochemical machinery governing living cells. However, with the observation that cancer cells express higher levels of telomerase, the work also opens up the possibility of chemotherapy with drugs that target the increased telomerase levels in such cells. Conversely, boosting the level of the enzyme in normal cells could possibly help slow down aging.

The prize has been awarded for work that was done about twenty years ago, which is quite typical of the Nobel Prize. Since then Jack Szostak has turned his focus to other exciting and unrelated research on the origins of life. In this field too he has done pioneering work, involving for instance the synthesis of membranes that could mimic the protocells formed on the early earth. Blackburn also became famous in 2004 for a different reason: she was bumped off President Bush's bioethics council for her opposition to a ban on stem cell research. Given the Bush administration's consistent manipulation and suppression of cogent scientific data, Blackburn wore her dismissal as a proud label. Catherine Brady has recently written a fine biography of Blackburn.

Update: Blackburn, Greider, Szostak

A well-deserved and well-predicted prize for telomerase
Again, I point to Blackburn's readable biography

The evils of our time

So yesterday over lunch, some colleagues and I got into a discussion about why scientific productivity in the pharmaceutical industry has been declining so perilously over the last two decades. What happened to the golden 80s, when not just the "Merck University" but other companies too produced academic-style, high-quality research and published regularly in the top journals? We hit on some of the usual factors. Maybe readers can think of more.

1. Attack of the MBAs: Sure, we can all benefit from MBAs, but in the 80s places like Merck used to be led by people with excellent scientific backgrounds, sometimes exceptional ones. Many were hand-picked from top academic institutions. These days we see mostly lawyers and pure MBAs occupying the top management slots. Not having a scientific background definitely makes them empathize less with the longhairs.

2. Technology for its own sake: In the 90s, many potentially important technologies like high-throughput screening (HTS) and combinatorial chemistry were introduced. However, people have a tendency to worship technology for its own sake, and many fell in love with these innovations to the extent of wanting to use them everywhere and thinking of them as cures for most important problems. Every technology works best when it occupies its own place in the hierarchy of methodologies and approaches, and when a good understanding of its limitations wisely prevents its over-application. This does not seem to have really happened with HTS or combinatorial chemistry.

3. The passion of the structuralists: At the other end of the spectrum from the science-averse managers are the chemical purists who are so bent on "rules" for generating leadlike and druglike molecules that they have forgotten the original purpose of a drug. The Lipinskians apply Lipinski's rules (which were meant to be guidelines anyway) to the point where they trump everything else. Lipinski himself never meant these rules to be absolute constraints.

What is remarkable is that we have known all along that about 50% of drugs are derived from natural products, which are about as un-Lipinskian as you can imagine. In fact many drugs are so un-Lipinskian as to defy imagination. I remember the first time I saw the structure of metformin, essentially a pair of linked, methylated guanidine units, and almost fell off my chair. I couldn't have imagined in my wildest dreams that this molecule could be "druglike", let alone one of the biggest-selling drugs in the world; a quick property check of the kind sketched at the end of this list shows how little such counts say about a structure like this. I will always remember metformin as the granddaddy of rejoinders to all these rules.

The zealous application of rules means that we forget the only two essential features of any good drug: efficacy and safety; in other words, pharmacology. If a drug displays good pharmacology, its structure could resemble a piece of coal for all I care. In the end, pharmacology and toxicity are all that really matter.

4. It's the science, stupid: In the 80s there were four Nobel Prize winners on the technical staff of Bell Labs. Now the entire physics division of that iconic research outfit counts a dozen or so scientists in all. What happened to Bell Labs has happened to most pharmaceutical companies. The high respect that basic science once enjoyed is now accorded to other things, like quarterly profits, CEO careers and the appeasement of stockholders. What is even more lamentable is the apparent mentality that doing good science and making profits are somehow at odds with each other; the great pharmaceutical companies of the 80s, like Merck, clearly proved otherwise.

Part of the drive toward short-term profits alone, and the resulting obsession with mergers and acquisitions, has clearly arisen from the so-called blockbuster model. If a candidate is not foreseen to make a billion dollars or more, dump it overboard. Gone are the days when a molecule was pursued as an interesting therapy that would validate some interesting science or biochemical process, irrespective of its projected market value. Again, companies in the past have proved that you can pursue therapeutic molecules for their own sake and still reap healthy profits. Profits seem to be like the electron in the famous double-slit experiment: if you don't obsess over them, they will come to you; but start watching them too closely and the mystical interference pattern gradually fades away.
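As a footnote to point 3, here is a minimal sketch (my own, with RDKit as an assumed toolkit) of the kind of rule-of-five property count that often gets applied as a hard filter. Metformin sails through with zero violations even though it looks nothing like a textbook drug, a reminder that such counts are at best crude guidelines.

```python
# A minimal rule-of-five style property count with RDKit (assumed toolkit,
# my own sketch). Criteria used here: MW <= 500, calculated logP <= 5,
# H-bond donors <= 5, H-bond acceptors <= 10 -- guidelines, not laws.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def rule_of_five_report(smiles, name):
    mol = Chem.MolFromSmiles(smiles)
    mw   = Descriptors.MolWt(mol)
    logp = Descriptors.MolLogP(mol)
    hbd  = Lipinski.NumHDonors(mol)
    hba  = Lipinski.NumHAcceptors(mol)
    violations = sum([mw > 500, logp > 5, hbd > 5, hba > 10])
    print(f"{name}: MW={mw:.0f}, cLogP={logp:.1f}, HBD={hbd}, HBA={hba} "
          f"-> {violations} violation(s)")

# Metformin: tiny, intensely polar, "un-druglike" to the eye, yet a blockbuster,
# and the rule-of-five count has nothing to say against it.
rule_of_five_report("CN(C)C(=N)NC(=N)N", "metformin")
```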

We ended our discussion wondering what it is going to take, in the end, for big pharma to start truly investing in academic-style basic science. The next public outcry, when drug-resistant strains of TB kill millions because the drugs that could have fought them were never discovered under the current business model? It could be too late by then.