
Why the free market is like quantum mechanics

If we were all omniscient and had infinitely fast and perfect computers, maybe we could use quantum mechanics to explain chemistry and biology. In reality, no amount of quantum mechanics can completely explain the chemistry that goes into treating a disease with a drug, baking a cake or making a baby.

Now imagine someone who has started out with the honest and admirable goal of trying to apply quantum mechanics to understand the behavior of a biological system like a protein. He knows for a fact that quantum mechanics can "account for" (a far better phrase than "explain") all of chemistry- the great physicist Paul Dirac himself said as much. He has complete confidence that quantum mechanics is really the best way to get the most accurate estimates of free energies, dipole moments, molecular charges and a variety of other chemical properties for his system.

But as our brave protagonist actually starts working out the equations, he starts struggling. After all, the Schrödinger equation can be solved exactly only for the hydrogen atom, and we are dealing with a system of solute and solvent that is vastly more complex. The complexity forces our embattled savant to make cruel approximations at every stage. At some point, not only is he forced to commit the blasphemy of using classical mechanics to simulate the dynamics of the protein, but he has to stoop to using empirical data to parameterize many of his models. At one point he finds himself fighting against the Uncertainty Principle itself!

In the end our hero is chagrined. He started out with the lofty dream of using quantum mechanics to capture the essence of his beloved protein. He ended up instead with a set of approximations, experimentally derived parameters and classical mechanics-derived quantities, just to explain his system. Prediction was not even an option at this point.

But his colleagues were delighted. This patchwork model actually gave fairly useful answers. Like most models in chemistry, it had some explanatory and predictive value. Even though the model was imperfect and they did not completely understand why it worked, it worked well enough for practical purposes. But this modest degree of success held no appeal for our bright young scientist. He stubbornly insisted that if, just if, we had a perfectly accurate, infinitely fast computer and an infinite amount of time, quantum mechanics would indeed have been spectacularly successful at predicting every property of this system with one hundred percent accuracy. Maybe next time he should just wait until he gets a perfectly accurate computer and has an infinite amount of time.

Now I usually don’t hold forth on economics on this blog, but I narrated the above parable to describe what I think is a rather unwarranted line of argument coming from libertarians about the financial crisis of the last few years. The reasons for the crisis are many, probably more complex than quantum mechanics, and society will surely keep debating them for years. But one of the most common claims made by libertarians (usually in the form of a complaint) is that we should not blame the free market for what happened, because we never got a chance to actually have a free market. If only we got a chance to have a perfect free market, things would be lovely.

Notwithstanding the fact that this argument inches uncomfortably close to arguments made by the most vocal proponents of socialism in the twentieth century (“There was nothing wrong with the system per se, only with its implementation”), I think it’s a little nutty. Maybe a perfect free market wouldn’t have led to the crisis, but that’s like our young chemist saying that infinitely accurate computers and quantum mechanics would not have led to the kind of imperfect models that we got. The problem is really that there are so many practical obstacles in applying quantum mechanics to a real-life chemical system that we are simply forced to abandon the dream of using it to explain such systems. Unless we come up with a practical prescription for how quantum mechanics is going to address these real-life obstacles without making approximations, it seems futile to argue that it can really take us to heaven.

To me it seems that libertarians are ignoring similar obstacles in the way of implementing a perfect free market. What are these obstacles? Most of them are well known. There’s imperfect competition arising from inherent inequalities, which leads to monopolies. There’s all that special-interest lobbying, encouraged by politicians, which discourages true competition and gives monopolies a head start. And there’s information asymmetry, which simply keeps people from knowing all the facts.

But all these problems are really part of a greater problem- human nature itself. All the obstacles described above are basically consequences of ingrained, rather unseemly human qualities- greed, the lust for power, the temptation to deceive, and a relentless focus on short-term goals at long-term expense. I don’t see these qualities disappearing from our noble race anytime soon.

Now sure, we can completely agree that the free market was invented to curb some of the worst manifestations of these qualities, and it has worked remarkably well in this regard. Remarkably well, but not perfectly. Maybe libertarians need to understand that the last vestiges of the dark side of humanity cannot be done away with, since they are an indelible part of what makes us human. So unless they come up with practical ways to surmount these obstacles- to solve the problem of human nature itself, a difficult goal to put it mildly- it’s rather futile to keep chanting that all our problems would be solved if only we could somehow make these inherently human qualities disappear.

The final argument that libertarians usually make is this: just because there are obstacles in the way of a goal (the perfect free market) that may seem insurmountable, that does not mean we should not keep striving toward it. I think that’s perfectly laudable. But the problem is, unless you come up with a practical solution for all the problems you face along the way, your goal is just going to remain an abstract and unworkable ideal- not exactly the kind of solution that's desirable in the practical fields of politics and economics. More importantly, all this striving toward the goal may create problems of its own (the science analogy would be unimaginably expensive calculations, scientists laid off for lack of results, overheated computers catching fire, and so on). We have all seen these problems. There’s the well-known problem of externalities, there’s the problem of unregulated firms getting ‘too big to fail’, and there’s the problem of growing income inequality. Surely we have to admit that these are real problems too.

So what should libertarians do? Well, didn’t our intrepid quantum mechanic grudgingly accept the intervention of approximations and parameterizations? These seemed ugly, but he had no option but to use them, since quantum mechanics simply could not overcome all the obstacles in his way. Similarly, perhaps free marketers could realize that at least in some cases, government intervention, no matter how ugly it may seem, may be the only way to reach a workable goal. Sure, it may not be the best of all goods, but it could be the least of all evils. What would have happened if our bright young scientist had kept insisting that he wouldn’t budge an inch if forced to use anything other than quantum mechanics? He would have ended up with nothing.

And in economics even more than in chemistry, a model that partly works is better than a model that does not exist. “Sometimes it’s not enough to do our best; we need to do what’s necessary” (attributed to W. Churchill)...

But Bohr never said anything about Nobel Prizes

Niels Bohr famously said that “prediction is very difficult...especially about the future”. But that’s assuming that the prediction is truly novel. That’s not really the case with the Nobel Prizes. One can keep trotting out the same names for twenty years, and if one or more of them stick in a particular year, that’s not exactly a novel prediction. Nor is it one when you just keep reiterating names others have proposed. Predicting a Nobel Prize either of these ways is like “predicting” that some winter in the next fifty years will be particularly cold or that some hurricane season will be particularly violent.

What would be a really commendable and difficult prediction? Well, predicting the winner of the 2006 prize would have been a home run; nobody I knew suggested Roger Kornberg’s name. So there is no use patting myself on the back for predicting four correct names for last year’s prizes- Ada Yonath, Venki Ramakrishnan, Jack Szostak and Elizabeth Blackburn. In the first case, I had been predicting a prize for the ribosome since at least 2002; the second prediction was one made by dozens of others, again for several years.

In light of the above rather obvious observations, there’s no glory in trotting out Nobel predictions if there’s no novelty in them; others have done it better. So instead of reiterating the twenty names that I have thrown out every year, it would be much more worthwhile to stick with only four or five and take my chances.

So here goes. I am dividing the categories into ‘easy’ and ‘difficult’ predictions. The easy ones are those made by dozens of others for years, regarding discoveries whose importance is ‘obvious’. The difficult predictions are ones which few others seem to make, or which are ‘non-obvious’. But what exactly is a discovery of ‘non-obvious’ importance? Well, one of my criteria for a ‘non-obvious’ Nobel Prize is one awarded to an individual for general achievements in a field rather than for a specific discovery, much like the lifetime-achievement Academy Awards given out to men and women with canes. Such predictions are somewhat harder to make, simply because fields are honored by prizes much less frequently than specific discoveries. But interestingly, in retrospect the field-based awards seem more than obviously warranted; one of the best examples is Woodward, who was clearly honored for his overall accomplishments in organic synthesis. Pauling’s Nobel would be another example.

Anyway, here’s the line-up, which includes prizes for all three science disciplines plus a random peace prize. To make it more interesting I am also listing what I feel are the pros and cons for each particular field or discovery. Note also that there can always be an overlap between the chemistry and medicine prizes, as has often been the case in the past. It's just that, considering that last year's chemistry prize went to biochemists, another biology-flavored chemistry prize this year seems unlikely.

CHEMISTRY


1. Palladium-catalyzed reactions (Bleedingly Easy): A perpetual favorite of organikers, this bleedingly easy pick has nevertheless eluded the prize for years.
Pros: The applications in organic synthesis are tremendous and all-pervading, so much so that newer generations may forget that someone had to actually discover this stuff.
Cons: The last Nobel for organic methodology was awarded in 2005, pretty recently. Plus, whom to award? Heck, Suzuki and Sonogashira are the most obvious candidates, but Hartwig and Buchwald are also major players. Also, when a discovery has gone unrecognized this long, it may simply be too late.

2. Computational chemistry and biochemistry (Difficult):
Pros: Computational chemistry as a field has not been recognized since 1998, so the time seems due. One obvious candidate would be Martin Karplus.
Cons: This would definitely be a lifetime achievement award. Karplus did perform the first MD simulation of a protein ever, but that by itself wouldn’t command a Nobel Prize. The other question is what field exactly the prize would honor. If it’s specifically applications to biochemistry, then Karplus alone would probably suffice. But if the prize is for computational methods and applications in general, then others would also have to be considered, most notably Ken Houk, who has been foremost in applying such methods to organic chemistry. Another interesting candidate is David Baker, whose program Rosetta has produced some fantastic results in predicting protein structure and folding. But that field is probably too new for a prize.

3. Chemical biology and chemical genetics (Easy)
Another favorite for years, with Schreiber and Schultz being touted as leading candidates.
Pros: The general field has had a significant impact on basic and applied science.
Cons: This again would be more of a lifetime achievement award, which is rare. Plus, several others in recent years (Cravatt, Bertozzi) have contributed to the field. It may make some sense to award Schreiber a ‘pioneer’ award for raising ‘awareness’, but that’s sure to make a lot of people unhappy. Also, a prize for chemical biology might be yet another one whose time has just passed.

4. Single-molecule spectroscopy (Easy)
Pros: The field has obviously matured and is now a powerful tool for exploring everything from nanoparticles to DNA. It’s been touted as a candidate for years. The frontrunners seem to be W. E. Moerner and M. Orrit, although R. Zare’s name has also been floated often.
Cons: The only con I can think of is that the field might still be too young for a prize.

5. Electron transfer in biological systems (Easy)
Pros: Another field which has matured and has been well-validated. Gray and Bard seem to be leading candidates.

Many other discoveries have been listed, most notably by Paul at ChemBark. I don’t really see a prize for the long-lionized birth control pill and Carl Djerassi; although we might yet be surprised, the time just seems to have passed. Then there are fields which seem too immature for the prize; among these are molecular machines (Stoddart et al.) and solar cells (Grätzel).

MEDICINE

1. Nuclear receptors (Easy)
Pros: The importance of these proteins is unquestioned. Most predictors seem to converge on the names of Chambon/Jensen/Evans
Cons: A prize for biology was given out last year to chemists and biologists

2. Statins (Difficult)
Akira Endo’s name does not seem to have been discussed much. Endo discovered the first statin. Although that particular compound was not a blockbuster drug, statins have since revolutionized the treatment of heart disease.
Pros: The “importance” as described in Nobel’s will is obvious. It also might be a nice statement to award the prize to the discovery of a drug for a change. Who knows, it might even boost the image of a much maligned pharmaceutical industry.
Cons: The committee is not really known for awarding actual drug discovery. Sure, there are precedents like Fleming, Black and Elion, but these are few and far between. On the other hand, this fact might make a prize for drug discovery overdue.

3. Genomics (Difficult)
A lot of people say that Venter should get the prize, but it’s not clear exactly for what- not for the human genome, which others would deserve as much. If a prize were to be given out for synthetic biology, it’s almost certainly premature; Venter’s synthetic organisms from last year may rule the world someday, but for now we humans still prevail. On the other hand, a possible prize for genomics may rope in people like Caruthers and Hood, who pioneered methods for DNA synthesis.

4. DNA diagnostics (Difficult)
Now this seems to me to be a field whose time is very much due. The impact of DNA fingerprinting and of Western and Southern blots on pure and applied science- everything from discovering new drugs to hunting down serial killers- is at least as big as that of the prizeworthy PCR. I think the committee would be doing itself a favor by honoring Jeffreys, Stark, Burnette and Southern.

5. Stem Cells (Easy)
This seems to be yet another favorite. McCulloch and Till are often listed.
Pros: Surely one of the most important biological discoveries of the last 50 years, promising fascinating advances in human health and disease.
Cons: Politically controversial (although we hope the committee can rise above this). Plus, a 2007 Nobel was awarded for gene targeting using embryonic stem cells, so there’s a recent precedent.

6. Membrane vesicle trafficking (Easy)
Rothman and Schekman
Pros: Clearly important. The last trafficking/transport prize was given out in 1999 (Blobel), so another one is due, and Rothman and Schekman seem to be the most likely candidates. Plus, they have already won the Lasker Award, which in the past has been a good indicator of the Nobel.

PHYSICS

I am not a physicist
But if I were
I would dare
To shout from my lair
“Give Hawking and Penrose the Prize!”
For being rock stars of humungous size

Also, Zeilinger, Clauser and Aspect probably deserve it for bringing the unbelievably weird phenomenon of quantum entanglement to the masses.

PEACE

Two names consistently come to my mind- Sam Nunn and Richard Lugar. The world owes more than it can imagine to these two gentlemen for securing loose nuclear material and weapons in the former Soviet states of Kazakhstan, Belarus and Ukraine after the collapse of the Soviet Union. Given the threat of nuclear terrorism, their efforts will go very far indeed. Plus, in these troubled political times, they showcase a rare example of bipartisan cooperation.

The two Bills- Clinton and Gates- will probably also get the prize, but maybe not this year.

In any case,

So much in human affairs
Is open to fate and chance
Thanks, but I will stick to science
And watch the molecules dance...


Previous predictions: 2009, 2008, 2007, 2006

Does bond cleavage require or release energy?

Back from a severe flu spell with a question worth pondering. A commenter on Derek's blog posed the following question, which I think could be a nice trick question for undergrads (or grads for that matter).

Question: "Looks like a bunch of chemists are pooling here. Let me ask an undergrad question as to why bond cleavage in chemistry requires energy whereas bond cleavage in biology releases energy, i.e. from ATP to ADP."

Here's the answer which I could think of on the spur of the moment:

Answer: "Bond cleavage always requires energy, even in biology. The question is whether the net reaction is energetically favorable, which it is in the case of ATP. That gives the illusion of energy 'release' by bond cleavage. There are such cases in organic chemistry too; for instance, the cleavage of a bond that expels nitrogen as a leaving group is usually very favorable (nitrogen is sometimes called 'the world's best leaving group'). That does not mean that you don't need energy to cleave the bond; it's just that the unusual stability of nitrogen makes the net reaction favorable."

Fair enough?
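
For concreteness, here's the same bookkeeping in equation form- a schematic, using rounded textbook bond dissociation enthalpies (BDEs), not numbers from the original comment thread:

$$\Delta H_{\text{rxn}} \approx \sum_{\text{bonds broken}} \text{BDE} \;-\; \sum_{\text{bonds formed}} \text{BDE}$$

For the textbook example $\mathrm{H_2 + F_2 \rightarrow 2\,HF}$:

$$\Delta H \approx (436 + 157) - (2 \times 565) \approx -537~\text{kJ/mol}$$

Breaking the H-H and F-F bonds costs about 593 kJ/mol, yet the overall reaction is strongly exothermic because forming the two H-F bonds releases more. The same ledger applies to ATP hydrolysis: cleaving the P-O bond itself costs energy, but the products (helped by resonance stabilization, better solvation and relief of charge repulsion) sit lower overall, so the net free energy change is negative.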

'SAR by C13 NMR'

The biggest utility of NMR spectroscopy in drug discovery is in assessing three things: whether a particular ligand binds to a protein, where on the protein it binds, and what parts of the ligand interact with the protein. Over the last several years a powerful technique named ‘SAR by NMR’ has emerged which is now widely used in ligand screening. In this technique, changes in the resonances of ligand and protein protons are observed to pinpoint the ligand binding site and the corresponding residues. Generally, when a ligand binds to a protein its rotational correlation time increases, since it now tumbles as part of a much larger complex; the result is a broadening of signals in the spectrum which can be used to detect ligand binding. One of the most effective methods in this general area is Saturation Transfer Difference (STD) spectroscopy. As the name indicates, it hinges on the transfer of magnetization between protein and ligand; the resulting decrease in the intensity of ligand signals can provide valuable information about the proximity of ligand protons to specific protein residues.

But these kinds of techniques suffer from some drawbacks. One straightforward drawback is that signals from protein and ligand may simply overlap. Second, the broadening may be so severe as to make the signals virtually disappear. Third, from a practical perspective, it is hard to get sufficient amounts of N15-labeled protein (usually obtained by growing bacteria on an N15-rich source and then purifying the proteins of interest).

To circumvent some of these problems, a team at Abbott Laboratories has come up with a neat and relatively simple method which they call ‘labeled ligand displacement’. The method involves synthesizing a protein-binding probe that has been selectively labeled with C13. Protein binding broadens and diminishes the signals of this probe. However, when a high-affinity ligand is then added, it displaces the probe and we get recovery of the C13 signals. The authors illustrate this paradigm with several proteins of pharmaceutical interest, including heat-shock protein and carbonic anhydrase.

The method has several merits. For one thing, using a commercially available C13-labeled building block to synthesize a ligand is easier than obtaining an N15-labeled protein. The biggest merit, though, is that the method hinges on C13 signals specific to the probe, so there is no complicating overlap of signals. And finally, the approach seems general enough to be applied to almost any protein. Only time will tell how much it is utilized, but for now it looks like a neat addition to the arsenal of NMR methods for studying protein-ligand interactions.
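
As a toy illustration of why the probe's C13 signal comes back, here is a minimal sketch (my own, not the authors' code; all concentrations and affinities are made-up but plausible numbers). It treats the probe as a simple 1:1 binder whose apparent Kd is weakened by a competing ligand, and takes the observable sharp signal as roughly proportional to the free probe fraction:

```python
import math

def bound_fraction(p_tot, l_tot, kd):
    """Fraction of ligand bound for P + L <=> PL (exact 1:1 quadratic solution).
    p_tot, l_tot and kd must be in the same concentration units (e.g. uM)."""
    b = p_tot + l_tot + kd
    pl = (b - math.sqrt(b * b - 4.0 * p_tot * l_tot)) / 2.0
    return pl / l_tot

# Hypothetical assay: 10 uM protein, 20 uM C13-labeled probe with Kd = 5 uM,
# challenged with a high-affinity competitor (Ki = 0.1 uM).
p_tot, probe_tot, kd_probe, ki = 10.0, 20.0, 5.0, 0.1

for competitor in (0.0, 10.0, 100.0):  # uM of displacing ligand
    # Competitive binding weakens the probe's apparent Kd (Cheng-Prusoff-style);
    # the approximation is cleanest when the competitor is in excess over protein.
    kd_app = kd_probe * (1.0 + competitor / ki)
    f_bound = bound_fraction(p_tot, probe_tot, kd_app)
    # Bound probe is broadened into the baseline, so the sharp C13 signal we
    # observe scales roughly with the free fraction.
    print(f"[competitor] = {competitor:6.1f} uM -> free-probe signal ~ {1.0 - f_bound:.2f}")
```

At zero competitor a sizable chunk of the probe is bound and its signal is suppressed; as the competitor floods in, the probe is displaced and the signal recovers- which is exactly the readout the Abbott team exploits.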

Swann, S., Song, D., Sun, C., Hajduk, P., & Petros, A. (2010). Labeled Ligand Displacement: Extending NMR-Based Screening of Protein Targets. ACS Medicinal Chemistry Letters, 1 (6), 295-299. DOI: 10.1021/ml1000849

Drugging the cell's best friend

The tumor suppressor p53 is one of the cell’s very best friends. Just how good a friend it is becomes apparent when, as in other relationships, this particular relationship turns sour. p53 is the “master guardian angel” of the genome, and its gene is the most frequently altered gene in cancer: more than 50% of human tumors contain a mutation in p53. With this kind of track record, p53 would seem to be a prime target for drugs.

It turns out that discovering drugs targeting p53 is trickier than you might think. The protein displays complex structural biology, and the mechanism of inhibitor action is not clear. But p53 malfunction is also characterized by one of the most distinctive physical mechanisms ever to emerge in an oncoprotein: about 30% of mutations in p53 simply lower the melting temperature (Tm), so that the protein becomes unstable and disordered. Thus, potential drugs against mutated p53 have often been termed ‘rescuers’, since ideally they would ‘rescue’ the protein from its unstable state.

In a recent paper, a team led by Alan Fersht of Cambridge- one of the world’s foremost protein chemists and p53 experts- explores one frequent mutation in p53 and how its consequences could be suitably exploited for rational drug discovery. The study is a nice example of the value of interdisciplinary research in tackling a complex problem. The rogue mutation is quite simple; it turns a tyrosine on the protein surface into a cysteine. The change to a smaller amino acid opens up a cavity on the protein surface. Is this cavity ‘druggable’- that is, can a small molecule be found that selectively and potently binds to it? That is what the researchers seek to find out in this study. (The presence of the cysteine makes me wonder if someone has tried a covalent tethering strategy for targeting this site.)

The targeted site is an interesting one. It’s not exactly an allosteric site: it is far away from the functional site but does not seem to affect it. But Fersht and his colleagues have previously found a small molecule that binds to this site and raises the melting temperature. In this report, the authors extend inhibitor discovery for the Y220C mutant further by using a combination of experimental and computational techniques.

They start by screening a fragment library; fragment-based drug discovery is now a staple of rational drug design. To minimize false positives and negatives, they use two complementary techniques: a 1D NMR method and a technique called thermal denaturation scanning fluorimetry, which detects the effect of a ligand using an exogenous dye. Interestingly, the two methods gave quite different hits, indicating the wisdom of combining them. To further confirm the binding of the fragments to the protein, they then use N15/H1 HSQC, an NMR method that detects changes in the proton and N15 chemical shifts when a ligand binds to the protein. By comparing the shifts of the unbound protein with those of the bound one, one can pinpoint the amino acids that interact with the ligand. This is a really nice method, since one can make out the relevant amino acids just by inspection; in the figure below, the multi-colored areas indicate significantly perturbed amino acids, which makes it easy to locate the exact binding site, since amino acids outside the site are largely unaffected. In this particular case the key residues turned out to be a valine, an aspartate and a few others.

[Figure omitted: HSQC overlay highlighting the residues significantly perturbed upon fragment binding]
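
For readers who haven't run across this kind of analysis: the per-residue perturbation is usually collapsed into a single weighted number, and residues above a cutoff are flagged. Here is a minimal sketch of that bookkeeping (my own illustration with invented shift values, not the paper's protocol; the 0.14 nitrogen weighting is one common convention):

```python
import numpy as np

def combined_csp(d_h, d_n, alpha=0.14):
    """Combined 1H/15N chemical shift perturbation (ppm).
    d_h, d_n: arrays of (bound - free) shift changes; alpha down-weights
    15N, whose shifts span a several-fold larger ppm range than 1H."""
    return np.sqrt(d_h**2 + (alpha * d_n)**2)

# Invented shift changes (ppm) for five residues
d_h = np.array([0.01, 0.12, 0.02, 0.08, 0.01])
d_n = np.array([0.05, 0.90, 0.10, 0.60, 0.08])
csp = combined_csp(d_h, d_n)

# One common significance cutoff: mean plus one standard deviation
cutoff = csp.mean() + csp.std()
print("significantly perturbed residues:", np.where(csp > cutoff)[0])
```

Residues that clear the cutoff are the ones that would be highlighted in a figure like the one above, and their clustering in space is what points to the binding site.
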
The study proceeded by employing the one method that can confirm ligand binding better than any other: x-ray crystallography. Crystal structures revealed binding modes for a few of the hits; one molecule turned out to bind in the cavity in two copies.

What exactly are the hits doing to the protein? To investigate this, the authors turned to molecular dynamics (MD) simulations with isopropanol molecules added as probes. In this case isopropanol is not a solvent mimic but a drug mimic: it has a polar part and a non-polar part, and it approximates a typical protein-binding molecule well. Binding cavities can be detected by looking at the density of isopropanol in the pockets. In this particular example the highest density is actually found at other sites, but those sites are not relevant for this study.

The most intriguing observation from the MD simulations was that the size of the cavity fluctuates wildly when the ligand is not present. This kind of dynamic flexibility can be characteristic of an unstable region (it was also observed in some computational enzyme designs that I described earlier). To test this, the simulations were also run with the ligand present; not surprisingly, the size fluctuations of the cavity were damped. Intriguingly, the ligand seems to play the role of the original tyrosine in packing against the other residues and keeping the site stable.
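
In code, that comparison boils down to something as simple as contrasting the spread of cavity volumes in the two runs. A schematic (synthetic numbers standing in for per-frame volumes that would actually be measured from the trajectories; not the authors' analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for per-frame cavity volumes (cubic angstroms) extracted from
# apo and ligand-bound MD trajectories; synthetic, for illustration only.
apo_vol  = rng.normal(loc=150.0, scale=40.0, size=1000)  # wide fluctuations
holo_vol = rng.normal(loc=170.0, scale=10.0, size=1000)  # ligand damps them

for name, v in (("apo", apo_vol), ("holo", holo_vol)):
    print(f"{name:5s} mean = {v.mean():6.1f} A^3, std = {v.std():5.1f} A^3")

# A much smaller standard deviation in the ligand-bound run is the signature
# of the ligand stabilizing the cavity, much as the native tyrosine once did.
```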

There is some way to go before we have a bona fide drug that ‘rescues’ p53 from its ignominious fate. But this study is a nice illustration of how only interdisciplinary computational and experimental work can really help us unravel the mysteries of this ubiquitous and enigmatic Jekyll and Hyde.

Basse, N., Kaar, J., Settanni, G., Joerger, A., Rutherford, T., & Fersht, A. (2010). Toward the Rational Design of p53-Stabilizing Drugs: Probing the Surface of the Oncogenic Y220C Mutant. Chemistry & Biology, 17 (1), 46-56. DOI: 10.1016/j.chembiol.2009.12.011

Fishin' in the membrane

Since we were talking about GPCRs the other day, here's a nice overview of some of the experimental challenges associated with membrane proteins and how researchers are trying to overcome them. The challenges pervade not just crystallization but the whole shebang. Although many clever tricks have emerged, we have a long way to go, and at least a few of the tricks sound like brute-force trial and error.

To begin with, it's not easy to get your expression system to produce ample amounts of protein. As the article indicates, you often need liters of cell culture to get a few milligrams of protein. The workhorse for production is still good old E. coli; it does not always fold membrane proteins well, but it still beats other expression systems on cost and efficiency. Researchers have discovered several tricks for coaxing E. coli into making better protein. For instance, it turns out that cold, nutrient-poor conditions and slower-growing bacteria produce better-folded and more functional protein (the exact reasons are probably not known, but I suspect it has to do with thermodynamics and the binding of chaperones). Adding lipids from higher organisms to the medium also sometimes seems to help.

What’s more interesting are efforts to do away with cellular production altogether and just add reagents to cell lysates to jiggle the protein-production machinery. For some reason, wheat-germ lysates seem to work particularly well. There are companies willing to use these lysates to produce hundreds of milligrams of protein. One of the advantages of such cell-free systems is that you can add solubilizing agents and detergents to stabilize the proteins. A striking fact emerging from the article is how many private companies are engaged in developing such technology for membrane proteins; the end "credits" list at least a dozen corporate entities. The list should be encouraging to visionaries who see more fruitful academic-industrial collaborations in the future.

Then of course there's the all-important problem of crystallization. Of the 50,000 or so structures in the PDB, only a tiny fraction- a few hundred- are of membrane proteins. Membrane proteins present the classic paradox: keep them stable in the membrane and methods like crystallography and NMR cannot study them, but take them out of the membrane and, divorced from the protective effects of the lipid bilayer, they fall apart. Scientists have worked for years and come up with dozens of tricks to circumvent this catch-22. Adding the right kind of detergent can help. In the landmark structure of the beta-2 adrenergic receptor solved in 2007, the researchers used two tricks: attaching a stabilizing antibody to essentially clamp two transmembrane helices together and, in a companion structure, replacing a disordered section of the protein with T4 lysozyme- both strategies geared toward stabilizing the protein.

In the end though, there is really no general strategy and that’s still the cardinal bottleneck; as the article's title says, a "trillion tiny tweaks" are necessary to make your system work. What works for one specific membrane protein fails for another. As one of the pioneers in the field, Raymond Stevens from Scripps says, “People are always asking what the one strategy that worked is. The answer is there wasn’t one strategy, there were about fifteen”.

This is why chemistry (or economics) is not like physics. Although there are general rules, every specific case still invokes its own principles. In fields like membrane protein chemistry, it is unlikely that a single holy-grail strategy will be discovered that works for all of them. The medley of techniques applied to membrane proteins sometimes makes the science seem like black magic and trial and error. All this makes chemistry hard, but also very interesting; if only a tiny fraction of membrane proteins have had their structures solved, think of how many more are waiting in the shadows, awaiting the fruits of our sweat and toil.

Baker, M. (2010). Making membrane proteins for structures: a trillion tiny tweaks. Nature Methods, 7 (6), 429-434. DOI: 10.1038/nmeth0610-429

Why modeling GPCRs is (still) hard

Well, it's hard for several reasons which I have discussed in previous posts, but here's one reason demonstrated by a recent paper, in which researchers crystallized the β2 adrenergic receptor with an antagonist. Previously, in the landmark publication of the β2 structure in 2007, the protein had been crystallized with an inverse agonist. Recall that an inverse agonist inhibits the basal activity of the GPCR, whereas an antagonist binds both active and inactive states and does not affect the basal activity.

In this case they crystallized the β2 with an antagonist and compared the resulting structure to that of the inverse agonist-GPCR complex. And they saw...nothing in particular. The protein backbone and side-chain positions are very similar for the antagonist (compound 3) and the inverse agonist (compound 2) shown in the figure below.

[Figure omitted: compounds 2 and 3 and a comparison of their binding sites]

As we can see in the figure, about the only side chain that shows any movement is the tyrosine on the left (Y316). No wonder cross-docking the two ligands (that is, docking each ligand into the other ligand's protein conformation) gave very accurate ligand orientations; this was essentially a softball problem for a docking program, since the antagonist was being docked into a protein conformation virtually identical to its own.
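
"Very accurate" here is usually quantified as the heavy-atom RMSD between the docked pose and the crystallographic one, with anything under about 2 Å conventionally counted as a success. A minimal version of that bookkeeping (illustrative only, with toy coordinates; real comparisons first superpose the protein frames and handle symmetry-equivalent atoms):

```python
import numpy as np

def pose_rmsd(docked, crystal):
    """Heavy-atom RMSD (angstroms) between two N x 3 coordinate arrays with
    matched atom ordering, assuming the protein frames are already aligned."""
    return np.sqrt(np.mean(np.sum((docked - crystal) ** 2, axis=1)))

# Toy coordinates: a docked pose displaced slightly from the crystal pose
crystal = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [2.2, 1.2, 0.3]])
docked = crystal + 0.3  # shift every atom by 0.3 A along each axis

print(f"RMSD = {pose_rmsd(docked, crystal):.2f} A")  # ~0.52 A: a 'softball' case
```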

But of course, we know that antagonists and inverse agonists affect GPCR function quite differently. As this study shows, the action is clearly not taking place in the ligand-binding pocket, where things aren't really moving. So where is the real action? Naturally, on the intracellular side, where the GPCR interacts with a medley of other proteins. And as the paper accurately notes, the difference between antagonist and inverse agonist binding is probably also reflected in the protein dynamics corresponding to the two ligands.

Good luck modeling that. That's the whole deal with modeling GPCRs: simply modeling the ligand-binding pocket is not going to help us understand the differences between the binding of various ligands; one has to model multiprotein interactions and subtle effects on dynamics that are relayed through the helices. Desmond, which I described in an earlier post, is a powerful MD program, but MD will really turn heads only when it can take multiprotein interactions into account, and such interactions happen on time scales much longer than even Desmond can access. We have a long way to go before we can do all this. But please, don't stop trying.

Wacker, D., Fenalti, G., Brown, M., Katritch, V., Abagyan, R., Cherezov, V., & Stevens, R. (2010). Conserved Binding Mode of Human β2 Adrenergic Receptor Inverse Agonists and Antagonist Revealed by X-ray Crystallography. Journal of the American Chemical Society, 132 (33), 11443-11445. DOI: 10.1021/ja105108q

The price of teaching

Why spreadsheets and university research don't gel

I have always been wary of evaluating faculty members based on the amount of money they bring in. One unfortunate development in American academic science in the latter half of the twentieth century was the commodification of research; money became a much bigger part of the equation, and research groups started to bear a striking resemblance to corporate outfits. Undoubtedly there were benefits to this practice, since it brought in valuable funding, but it also tended to put a price on the generation of knowledge, which seems inherently wrong.

Now it seems that Texas A&M is thinking of turning this kind of valuation into official policy. As C&EN reports, TAMU is planning to rate its faculty based on their "net worth", calculated from each faculty member's salary, the funding he or she generates, and teaching (how on earth are they going to put a dollar figure on that?).

Sorry, but I think this is hogwash, and others seem to agree with me. The "worth" of faculty members goes way beyond the funding they can procure. There may be professors who bring in modest amounts of money but who inspire generations of students through their teaching, who significantly contribute to the public perception of science through science communication, and who generally contribute to the academic environment in a department simply through their passion and strong advocacy of science. Even from the point of view of research, there are faculty members who publish relatively less, do research on the cheap, and yet steer their respective fields in new directions simply by generating interesting ideas. Very few of these qualities lend themselves to spreadsheet analysis.

In fact, I will go a step further. If a faculty member does little more than inspire generations of students to pursue careers in science research, education and policy, there is no metric that can financially measure the worth of such contributions. Simply put, such contributions may well be priceless. That should easily satisfy Texas A & M's criteria for high-value "assets".

Louisa Gilder and Robert Oppenheimer

Louisa Gilder's book "The Age of Entanglement" is a unique and thoroughly engrossing account of quantum mechanics, and especially of the bizarre quantum phenomenon called entanglement, told through an unusual device: recreated conversations between famous physicists. Although Gilder takes considerable liberty in fictionalizing the conversations, they are based on real events, and for the most part the device works.

Gilder's research seems quite exhaustive and well-referenced, which is why the following observation jumped out of the pages and bothered me all the more.

On pg. 189, Gilder describes a paragraph from a very controversial and largely discredited book by Jerrold and Leona Schecter. The book, which created a furor, extensively quotes a Soviet KGB agent named Pavel Sudoplatov, who claimed that, among others, Niels Bohr and Robert Oppenheimer were working for the Soviet Union and that Oppenheimer knew that Klaus Fuchs was a Soviet spy (who knew!). No evidence for these fantastic allegations has ever turned up. In spite of this, Gilder refers to the book and essentially quotes a Soviet handler named Merkulov, who says that a KGB agent in California named Grigory Kheifets thought Oppenheimer was willing to transmit secret information to the Soviets. Gilder says nothing more after this and moves on to a different topic.

Now take a look at the footnotes on pg. 190-191 of Kai Bird and Martin Sherwin's authoritative biography of Oppenheimer. Bird and Sherwin quote exactly the same paragraph, but then emphatically add that there is not a shred of evidence to support what was said, and that the whole thing was probably fabricated by Merkulov to save Kheifets's life (since Kheifets had otherwise turned up empty-handed on potential recruits).

What is troubling is that Gilder quotes the paragraph and simply ends it there, leaving the question of Oppenheimer's loyalty dangling and tantalizingly open-ended. She does not quote the clear conclusion drawn by Bird and Sherwin that there is no evidence to support the insinuation. And she must surely be aware of several other works on Oppenheimer and the Manhattan Project, none of which give the slightest credence to such allegations.

You would expect more from an otherwise meticulous author like Gilder, and I have no idea why she gives credence to this canard about Oppenheimer. But in an interview I saw, she said that she was first fascinated by Oppenheimer (as most people were and still are) but was then repulsed by his treatment of his student David Bohm, who dominates the second half of her book. Bohm was a great physicist and philosopher (his still-in-print textbook on quantum theory is unmatched for its logical and clear exposition) and a dedicated left-wing thinker who was Oppenheimer's student at Berkeley in the 1930s. After the war he was suspected of being a communist and stripped of his faculty position at Princeton, which was then very much an establishment institution. After this unfortunate incident, Bohm lived a peripatetic life in Brazil and Israel before settling down at Birkbeck College in England. Oppenheimer essentially distanced himself from Bohm after the war, had no trouble detailing Bohm's left-wing associations to security agents, and generally did not try to save Bohm from McCarthy's onslaught.

This is well known; Robert Oppenheimer was a complex and flawed character. But did Gilder's personal dislike of Oppenheimer in the context of Bohm color her attitude toward him and cause her to casually toss out a tantalizing allegation which she must have known was not substantiated? I sure hope not.

The difference between chemistry and physics

I came across the following quote by the Nobel Prize winning chemist William Lipscomb (think diborane, carbonic anhydrase) with which I generally concur:

‎"Chemistry is not 'physics with less rigor'. In chemistry there are discoverable guiding principles for systems which are too complex for a "first principles" approach. The nature of chemistry is very difficult to explain to most physicists, in my experience!"

In chemistry there are emergent phenomena that cannot simply be reduced to physics. One has to think at the level of molecules and not just atoms, especially for understanding chemical reactions- and this is especially true for biochemical reactions. Knowing about quarks won't directly help you understand the structure of DNA, but knowing about hydrogen bonds definitely will. Of course, the same caveats apply to thinking of biology as 'applied chemistry'. The fact is that every science comes with its own set of fundamental laws. These laws may be strictly reducible to 'lower-level' laws in a philosophical sense, but the lower-level laws don't directly lead to the higher-level fundamental ones. Thus, an understanding of the lower-level laws, no matter how thorough, does not automatically imply an understanding of the higher-level ones.

My E. coli brother's keeper

Would an anti-indole work?

Antibiotic resistance is one of the best examples of evolution in real time, and it's also one of the most serious medical problems of our era. Emerging resistance in bacteria like MRSA threatens to bring on a wave of epidemics reminiscent of past, more unseemly times.

Given the threat that antibiotic resistance poses, it is paramount to understand the mechanisms behind the process. While considerable progress has been made in understanding the genetic basis of the mutations that confer antibiotic resistance, much less is known about the population dynamics of bacteria evolving such resistance. Now, in the cover story of the latest issue of Nature, researchers from Boston University report a novel and remarkable mechanism by which bacteria acquire resistance- effectively a form of bacterial altruism.

The researchers start by challenging successive generations of E. coli in a bioreactor with increasing concentrations of the antibiotic norfloxacin, which inhibits DNA synthesis by binding to DNA gyrase. Around the tenth generation or so, they notice something interesting: not all the bacteria have evolved resistance to the antibiotic- only a very small population shows high resistance. Yet within the next few generations, the other bacteria also seem to acquire this resistance. What's going on?

It turns out that the small population of highly resistant bacteria is actually 'teaching' its fellow bacteria to become resistant, and doing so by a remarkably simple mechanism: secreting the molecule indole into the environment. The indole acts as a signal that is mopped up by the other bacteria, activating a variety of resistance mechanisms, including increased production of the drug transporter proteins well known to confer resistance by pumping drug molecules out of the cell.

Now, indole is a well-known component of signaling molecules; for instance, indole-3-acetic acid (IAA) plays many important signaling roles in plants and encourages cell growth and division. The detection of indole by itself was thus not surprising, since all the bacteria secreted indole as part of their regular metabolism at the start. What was surprising was the mechanism: as the antibiotic stressed the bacteria, most of them essentially weakened and stopped secreting indole- with the exception of a small cadre of selfless individuals who kept generating the molecular signal. Since producing indole in times of stress clearly requires an investment of energy, this was a bona fide case of bacterial altruism: sacrificing one's own fitness to increase that of the group.

Ultimately, though, we don't want to just understand such novel mechanisms of antibiotic resistance- we want to thwart them. Based on this mechanism I had an idea. If indole is so important for bacteria to acquire resistance, then one logical way to counter resistance would be to introduce an 'anti-indole' into their environment and mix it up with the natural molecule to cause confusion. This anti-indole would be a molecule resembling indole- an indole mimic and antagonist- that would effectively compete with indole for uptake without triggering any of the downstream effects.

Most likely this molecule would be a very close analog of indole- perhaps indole with a hydroxyl or fluoro group on it. Any small modification would do, as long as it's enough to confuse the bacteria. Of course we would also need to worry about bioavailability and toxicity, but I don't see why the basic strategy would be unfeasible, or why a proof-of-principle experiment could not be done in a petri dish.

Lee, H. H., Molla, M. N., Cantor, C. R., & Collins, J. J. (2010). Bacterial charity work leads to population-wide resistance. Nature, 467 (7311), 82-85. PMID: 20811456

Stimulating quasi-erotic excitement through organic structure determination

Thanks to the graces of the intertubes, I came across this rare and fascinating video of R. B. Woodward, put up by some kind soul a couple of months ago. The novelty of the quintessential Bostonian accent, the cigarette and glass of scotch adorning the lectern and the man in blue are eclipsed only by his achievements and what he has to say. He saves the coup de grâce for the end.

Woodward sheds light on the remarkable developments in organic chemistry up to that point by providing contrasting examples from his own research, emphasizing how times had changed between his early work and the state of the art in 1979. One can make similar comparisons right now. Woodward attributes the astonishing progress in organic chemistry over those forty years to two factors: an intense infusion of theoretical concepts in their most general form (MO theory, quantum chemistry and so on), and path-breaking developments in physical methods, including IR, UV and NMR spectroscopy and x-ray crystallography. He then gives famous examples from his own work to starkly illustrate the contrast.

The first example is from his synthesis of quinine, in which one of the steps involved the elimination of a quaternary ammonium ion to form a double bond. The question was whether the double bond formed was a vinyl or an ethylidene double bond; it was the vinyl that was desired.

[Figure omitted: the vinyl and ethylidene elimination products]

Nowadays, and even in 1979, a graduate student could settle this question in literally a matter of minutes, but at that point (circa 1945) Harvard did not even have the experimental facilities needed to investigate the question. Woodward had to send the sample to the famous chemist Max Tishler at Merck. Tishler got back saying it was the ethylidene. This threw the chemists into a state of despondency for a few days, until Tishler called back to inform them that Merck had made a mistake and it was in fact the vinyl double bond. The tense drama of the situation seems almost comical in light of modern structure determination methods.

The second example concerned Woodward's astonishing decade-long synthesis of vitamin B12. He expressed wonder at how an NMR spectrometer had been able to obtain the natural-abundance C13 spectrum of 1 mg of the synthetic finished product using 995,000 transient scans. That wonder would sound almost quaint today: capillary NMR and 1 GHz machines have pushed the science and art of structure determination to new limits, and running a million scans on 1 mg of material is almost old hat.

The third example was a nice little anecdote. Woodward had a wager with Linus Pauling in the 1950s over whether he could determine the structure of the antibiotic terramycin chemically faster than Pauling could with x-ray crystallography. Woodward won the wager, but admitted that he would probably lose it today because x-ray crystallography has become so powerful. Today x-ray crystallography is at the top of its game, and who knows what breakthroughs in structure determination will be possible with AFM and STM.

The last example cracked everyone up. Woodward talked about the structure determination of cantharidin, the active principle of the Spanish fly. Chemists had isolated up to 500 grams of cantharidin to find out its structure. “Just think of it, 500 grams of cantharidin”, says Woodward. “There are many people who would think it’s an absolute tragedy. Realize that that would be enough to keep the entire population of Spain in a state of quasi-erotic excitement for a period of a full year!”

What would be Woodward's reaction if he were to suddenly materialize today in a poof of chemical pixie dust and survey the synthesis landscape? My humble guess is that he would not be too impressed. He would undoubtedly be excited by the development of the Sharpless and Grubbs chemistries and the great success of palladium-catalyzed reactions (not to mention the general development of organometallic chemistry, in whose founding he himself played a role). But beyond that, I doubt he would notice any fundamental change in the science of organic synthesis compared to what he witnessed and orchestrated during his own lifetime. Sure, things have become more efficient, streamlined and automated, but those advances, impressive as they are, are really operational details.

My personal guess is that Woodward would be much more impressed by the applications of organic synthesis to biology and materials science. But the science itself probably still stands very close to where Woodward left it thirty years ago, and the whiz kid from Quincy would have little trouble bringing himself up to speed in no time at all.