Field of Science

Book review: The Ideological Origins of the American Revolution by Bernard Bailyn

The Ideological Origins of the American Revolution by Bernard Bailyn

While a slightly academic and challenging read, this book (first published in 1967 and then reprinted twice) is a seminal contribution to revolutionary and pre-revolutionary history and a must-read, not just for understanding the American Revolution but also some of the most important issues we grapple with now. The book is entirely based on pamphlets - essentially the Twitter of their times, but far more intelligent - that were written by people across all social strata in response to events in the 1750s and 1760s. These pamphlets were remarkably flexible, spanned anywhere between ten and seventy pages, and contained a wide variety of writing, from scurrilous, sarcastic, bawdy polemics ("wretched harpies" was a favorite derogatory term - I have long thought of compiling a list of choice insults from those times) to calls for populist revolution to reasoned, highly erudite writings. More than any other written form of the era, they contain a microcosm of the basic thinking that led to the revolution.

Perhaps more than any other book I have come across, Bailyn's book helped me understand how far back the roots of the revolution went, how entrenched in English political philosophy and especially libertarian philosophy they were, how simplistic and incomplete the textbook version of “no taxation without representation” is, and how many of the central issues of both 1776 and 2019 are rooted in the core of Americans’ view of their own identity and geography going back all the way to the settlers.

A few key takeaways:

1. It's easy to underestimate the outsized impact that geography had on the colonists' thinking. Decentralized control was almost de rigueur in the vast wildernesses bordering Virginia or Massachusetts, so the idea of central control - both by Parliament in the 1760s *and* by a federal government in 1787 - was deeply unpalatable to many people. The abhorrence toward virtual representation in Parliament was only a logical consequence. You gain a much better understanding of Americans' fondness for states' rights and their fears of federal power by understanding this background.

This decentralized thinking also led quite naturally to freedom of religion - Bailyn cites the prominent struggles of Baptists in western Massachusetts against taxation by the Congregationalists as an example - and, more haltingly and less successfully, to calls to abolish slavery, which, although they did not make their way into the Constitution, did lead individual states to abolish the institution and to stop the slave trade.

2. Almost the entire debate about independence was about where the seat of sovereignty lay. For the English it lay in Parliament, but the colonists argued that while Parliament did have some central rights (there were some strenuous attempts to distinguish between "external" taxation that Parliament could impose and "internal" taxation that was the people's right - this argument was rapidly dropped), the people had "natural" rights that lay outside all authority, including Parliament's.

The colonists were inspired in this thinking by Enlightenment philosophers like Locke and Hume, and this foundation is well known, but Bailyn makes a convincing case that they were inspired even more by the early 18th-century English libertarians John Trenchard and Thomas Gordon and their predecessors, who in writings like the famous "Cato's Letters" argued against standing armies, lack of due process, and absolute and arbitrary power. Some of these arguments went back to Charles I and the English revolution of the 1640s, so many of the leaders of the revolution had assimilated them well before 1776; Pennsylvania and New York even had written documents outlining some of the key provisions of the Bill of Rights as early as 1677. By the time the Stamp and Townshend Acts were imposed in the 1760s, taxation (a relatively minor grievance in any case) was merely the final straw.

The biggest strength of the book is that it beautifully illustrates how thinking about decentralized control, natural rights and English libertarian philosophy was a common thread tying together so many disparate themes - independence, taxation and representation, abolitionism, religious freedom, geographic expansion, and finally, the great debate about the Constitution. The volume really reveals the core set of philosophies on which the country is founded better than any other that I have read. A groundbreaking contribution.

Book review: The Island at the Center of the World: The Epic Story of Dutch Manhattan and the Forgotten Colony That Shaped America

The Island at the Center of the World: The Epic Story of Dutch Manhattan and the Forgotten Colony That Shaped America by Russell Shorto

About a decade ago when I was living in New Jersey, I used to drive every weekend from New Jersey to Massachusetts to see my then-girlfriend. While driving back I used to take a road called the Saw Mill Parkway, near a town called Yonkers, on the way to crossing the Tappan Zee Bridge. Both reference points seemed completely nondescript to me then. What I did not know until now was that Yonkers and the Saw Mill Parkway are the only tributes in this country to a remarkable man and a lost time which, had it endured, could have significantly influenced the history of this country.

The remarkable man was Adriaen van der Donck, a Dutchman who brought the liberal outlook of 17th century Amsterdam to the Dutch colony of New Netherlands, with its capital New Amsterdam. He was known as a 'Jonkheer' or 'young lord', and on his estate up the Hudson river he built a saw mill. Hence Yonkers and the Saw Mill Parkway.

Today about the only two things that most people know about the place are that it was bought from the Indians in 1626 for the seemingly laughably small sum of 60 guilders or 24 dollars, and that New Amsterdam became New York when the English took it over in 1664. During that period it became the most progressive European colony in America, reflecting the liberal, multicultural, intellectual and progressive spirit of the Netherlands, but its history is basically taught today as an English history. Russell Shorto's engaging book charts the history of this remarkable and forgotten colony, from Henry Hudson's discovery of the location in 1609 (around the time that Jamestown was founded) to its takeover by the English.

Much of the book centers on two larger-than-life characters, Adriaen van der Donck and Peter Stuyvesant, who were opposites of sorts: the former is virtually forgotten while the latter lives on in names like Stuyvesant High School. Van der Donck was educated at Leiden, which then had a university rivaling Oxford in its embrace of natural philosophy and logic, and the Netherlands was already serving as a refuge for religious dissenters like the Pilgrims and René Descartes. Inspired by his law studies at Leiden, van der Donck had a scientist's eye for observation and objectivity. He made friends with the Indians, lived with them, studied the plants, animals, mountains and rivers of the vast landscape of what is now the Albany region, and wrote a book describing the land that became a bestseller. He brought principles of representative government and religious freedom to New Amsterdam.

Stuyvesant, who had fought the Spanish in South America and lost a leg to a cannonball, belonged to the conservative old guard and believed in executing the will of the company of which he was director: the Dutch West India Company, which was then scouting the world for natural and human resources. Although Stuyvesant and van der Donck had been on good terms earlier, Stuyvesant's heavy-handed management of the colony led van der Donck and a select few settlers to write protests to The Hague laying out some remarkably forward-looking principles of secular governance for the colony.

And yet somehow, between Stuyvesant's authoritarian but dogged direction and van der Donck's progressive views, New Amsterdam for a few decades became a model for secular civilization that later defined New York City as a melting pot: a unique place with a ragtag band of seamen, traders, brewers, prostitutes, soldiers, farmers, freethinkers and frontiersmen from all countries and professions, one which encouraged multiculturalism and religious freedom in significant contrast to the monocultural, religiously rigid Puritan colonies of New England to the north. In fact it served as a refuge for persecuted English men and women from New England, who settled mainly on Long Island. It also largely regarded the neighboring Indian tribes as equals, trading beaver furs and wampum with them and, unlike the English at Jamestown, rarely engaging in murderous conflict with the natives. Under van der Donck's leadership the Netherlands was poised to institute a bona fide progressive government in the colony - the Treaty of Westphalia in 1648 had inspired a general sentiment of tolerance and peace - but sadly the outbreak of the Anglo-Dutch wars led the country to cede authority back to the West India Company, and Stuyvesant again had the upper hand. Nonetheless the colony still flourished because of its decentralized nature.

The book describes how the colony bequeathed many Americanisms, among them "boss" (from "baas"), "coleslaw" (from "koolsla") and "cookies" (from "koekje"). A lot of upstate New York, New Jersey and Philadelphia (the name Schuylkill is Dutch), and especially New York City, have a deeply embedded Dutch heritage, and there's even a non-trivial amount of Swedish and Finnish heritage that came from a Swedish colony that endured for about twenty years in what's now Delaware before it capitulated to the Dutch. The end of the colony came when the English, first under Cromwell and then under Charles II, realized the lucrative advantage the location would provide in exploring the interior up the Hudson river, along with a strategic waypoint for the then exploding slave trade. Van der Donck sadly died in an Indian raid, while Stuyvesant lived out his life in his colony and was buried there in 1672. The golden age of Dutch civilization and free thought was on the decline, and England - and the English version of history - took over.

There is no doubt that New Amsterdam was a model of religious and cultural tolerance that needs to be remembered, largely because there was no system of top-down governance there for a long time, but perhaps Shorto overstates the influence it had on future developments in the United States including the Constitution; nobody knows how exactly history would have turned out had there been two dominant colonies - the English to the north and the Dutch to the south - but it would have been very interesting indeed. Ironically, in his zeal to demonstrate how forgotten secular New Amsterdam was, Shorto fails, even when mentioning New England in some detail, to mention even once the equally remarkable secular experiment to the north which I wrote about earlier - Roger Williams and his founding of Rhode Island. Seems like someone's always forgotten.

Book review: An Empire on the Edge: How Britain Came to Fight America

An Empire on the Edge: How Britain Came to Fight America by Nick Bunker

A highly illuminating, novel account of the lead-up to the Boston Tea Party and the American Revolution from the British side, covering the critical years 1772 to 1775. British politics and economics played a seminal role in inciting the conflict (who knew that smuggling, which drastically undercut the prices of tea and other commodities, played such a huge role in the revolution?). As an Englishman, Bunker offers a thoughtful, sympathetic but still fair and balanced portrait of British political leaders and convincingly demonstrates that they were complex characters, far from outright villains.

Unlike the simplistic textbook version of events, Bunker explains in painstaking detail (sometimes too much detail) how the British simply misunderstood the colonists rather than actively oppressed them, seeing them as sources of revenue and little more, largely ignoring them until 1773. Occupied with extending their empire in other parts of the globe and chronically afraid of the French, they thought of the American colonies as a reliable outpost which they would never have to worry about, even as they were losing their grip both on the vast, overstretched geography as well as the hearts and minds of the colonists. They also badly misunderstood that what were for them simple matters of (modest) taxation were for the colonists fundamental matters of religion, individual rights and free trade; unlike other British colonies like India, the North Americans already had a lot of freedom and individual rights, which made England's petulant, scolding attitude highly inflammatory and the decision to separate from the mother country consequently easier.

Even after the Tea Party there was a huge gulf of misunderstanding between King George III, prime minister Lord North, the colonial secretary Lord Dartmouth and the colonists, one that led to missed opportunities. Support for taking hard measures against the colonies was also far from monolithic, with famous members of Parliament like Edmund Burke delivering fiery speeches against North, and the quality of British democracy was commendable: even riotous events like the Boston pamphlets and the Tea Party required proper legislative procedure, witness accounts and documentary evidence to prosecute (the book makes it clear that even by the 1770s the King could make almost no law without Parliament's consent). News (including scurrilous "fake news") also took at least a month to travel from one shore to the other in those days, compounding the misunderstanding.

The British also made a critical mistake by becoming obsessed with Boston and Massachusetts which, although symbolically important, were politically as well as economically much less important than the Southern colonies and the Hudson Valley. After the tea was dumped, North basically thought he could quell the rebellion through a targeted local war in Massachusetts or Rhode Island (the most radical colony); little did he know how deeply Massachusetts's resentments were shared by the other colonies. In a curious sense the British understanding of the colonists was as impoverished as the later American understanding of the Vietnamese. Just as an American general could never see through the eyes of Ho Chi Minh or a Vietnamese peasant, North and his ministers were simply too different from an Ethan Allen or a Thomas Young.

Essentially this was a story of many missed opportunities. If they had played their cards right, the English could well have turned North America into a country like Australia or Canada, essentially a self-governing dominion with intimate ties to the Commonwealth, or even defused the budding revolution by extending an olive branch of the right kind in 1772, perhaps dividing North from South, before it was too late. As it turned out, the two countries indeed became two nations separated by a common language.

Computer simulations and the Universe

There is a sense in certain quarters that both experimental and theoretical fundamental physics are at an impasse. Other branches of physics like condensed matter physics and fluid dynamics are thriving, but since the fundamental composition of matter, the origins of the universe and the unification of quantum mechanics with general relativity have long been held to be foundational matters in physics, this lack of progress rightly bothers its practitioners.
Each of these two aspects of physics faces its own problems. Experimental physics is in trouble because it now relies on energies that cannot be reached even by the biggest particle accelerators around, and building new accelerators will require billions of dollars at a minimum. Even before this it was difficult to get that kind of money; in the 1990s the Superconducting Super Collider, an accelerator that would have reached energies greater than those of the Large Hadron Collider, was shelved - after some $2 billion had already been spent - because of ballooning cost projections, a lack of consensus among physicists and political foot-dragging. The next particle accelerator, projected to cost $10 billion, is seen as a bad investment by some, especially since previous expensive experiments in physics have confirmed prior theoretical foundations rather than discovered new phenomena or particles.
Fundamental theoretical physics is in trouble because it has become unfalsifiable, divorced from experiment and entangled in mathematical complexities. String theory, which was thought to be the most promising approach to unifying quantum mechanics and general relativity, has come under particular scrutiny, and its lack of falsifiable predictive power has become so visible that some philosophers have suggested that traditional criteria for a theory's success, like falsification, should no longer be applied to it. Not surprisingly, many scientists as well as philosophers have frowned on this proposed novel, postmodern model of scientific validation.
Quite aside from specific examples in theory and experiment, perhaps the most serious roadblock that fundamental physics seems to be facing is that it might have reached the end of “Why”. That is to say, the causal framework for explaining phenomena that has been a mainstay of physics since its very beginnings might have ominously hit a wall. For instance, the Large Hadron Collider found the Higgs boson, but this particle had already been predicted almost fifty years before. Similarly, the gravitational waves detected by LIGO were a logical prediction of Einstein’s general theory of relativity, proposed almost a hundred years before. Both these experiments were technical tours de force, but they did not make startling, unexpected new discoveries. Other “big physics” experiments before the LHC had likewise validated the predictions of the Standard Model, our best theoretical framework for the fundamental constituents of matter.
The problem is that the basic fundamental constants in the Standard Model like the masses of elementary particles and their numbers are ad hoc quantities. Nobody knows why they have the values they do. This dilemma has led some physicists to propose the idea that while our universe happens to be the one in which the fundamental constants have certain specific values, there might be other universes in which they have different values. This need for explanation of the values of the fundamental constants is part of the reason why theories of the multiverse are popular. Even if true, this scenario does not bode well for the state of physics. In his collection of essays “The Accidental Universe”, physicist and writer Alan Lightman says:
Dramatic developments in cosmological findings and thought have led some of the world’s premier physicists to propose that our universe is only one of an enormous number of universes, with wildly varying properties, and that some of the most basic features of our particular universe are mere accidents – random throws of the cosmic dice. In which case, there is no hope of ever explaining these features in terms of fundamental causes and principles.
Lightman also quotes the reigning doyen of theoretical physicists, Steven Weinberg, who recognizes this watershed in the history of his discipline:
We now find ourselves at a historic fork in the road we travel to understand the laws of nature. If the multiverse idea is correct, the style of fundamental physics will be radically changed.
Although Weinberg does not say this, what’s depressing about the multiverse is that its existence might always remain postulated and never proven since there is no easy way to experimentally test it. This is a particularly bad scenario because the only thing that a scientist hates even more than an unpleasant answer to a question is no answer at all.
Do the roadblocks that experimental and theoretical physics have hit combined with the lack of explanation of fundamental constants mean that fundamental physics is stuck forever? Perhaps not. Here one must remember Einstein when he said that “Our problems cannot be solved with the same thinking that created them”. Physicists may have to think in wholly different ways, to change the fundamental style that Weinberg refers to, in order to overcome the impasse.
Fortunately there is one tool in addition to theory and experiment which has not been used prominently by physicists but which has long been used by biologists and chemists, and which could help physicists do new experiments. That tool is computation. Computation is usually regarded as separate from experiment, but computational experiments can be performed the same way lab experiments are, as long as the parameters and models underlying the computation are well defined and valid. In the last few decades, computation has become as legitimate a tool in science as theory and experiment.
Interestingly, this problem of trying to explain fundamental phenomena without being able to resort to deeper explanations is familiar to biologists: it is the old problem of contingency and chance in evolution. Just like physicists want to explain why the proton has a certain mass, biologists want to explain why marsupials have pouches that carry their young or why Blue Morpho butterflies are a beautiful blue. While proximal explanations for such phenomena are available, the ultimate explanations hinge on chance. Biological evolution could have followed an infinite number of pathways, and the ones that it did simply arose from natural selection acting on random mutations. Similarly one can postulate that while the fundamental constants could have had different values, the ones that they do have in our universe came about simply because of random perturbations, each one of which rendered a different universe. Physics turns into biology.
Is there a way to test this kind of thinking in the absence of concrete experiments? One way would be to think of different universes as different local minima in a multidimensional landscape. This scenario would be familiar to biochemists, who are used to thinking of the different folded structures of a protein as lying in different local energy minima. A few years ago the biophysicist Collin Stultz in fact made this comparison as a helpful way to think about the multiverse. Computational biophysicists test this protein landscape by running computer simulations in which they allow an unfolded protein to explore all these different local minima until it finds a global minimum corresponding to its true folded state. In the last few years, thanks to growing computing power, thousands of such proteins have been simulated.
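The landscape picture can be made concrete with a toy sketch. The code below uses an invented one-dimensional "potential" (not a real protein or cosmological model) and combines random restarts with crude gradient descent - the same basic strategy folding simulations use to hop between basins - to enumerate local minima and pick out the global one.

```python
import random

# A toy one-dimensional "landscape" with several local minima, standing in
# for the multidimensional energy surface of a folding protein (or, by
# analogy, a "landscape" of possible universes). The potential function
# and its minima are invented purely for illustration.
def energy(x):
    return (x**2 - 1)**2 * (x - 0.5)**2 + 0.1 * x

def descend(x, step=2e-3, iters=5000):
    """Crude gradient descent using a finite-difference gradient."""
    for _ in range(iters):
        grad = (energy(x + 1e-6) - energy(x - 1e-6)) / 2e-6
        x -= step * grad
    return x

random.seed(0)
# Random restarts mimic an unfolded chain sampling many different basins.
minima = sorted({round(descend(random.uniform(-2, 2)), 2) for _ in range(30)})
best = min(minima, key=energy)
print("local minima found near x =", minima)
print("global minimum near x =", best)
```

Each restart settles into whichever basin it started in; only by comparing many basins does the global minimum, the analogue of the folded state, emerge.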
Similarly, I postulate that computational physicists could perform simulations in which they simulate universes with different values for the fundamental constants and evaluate which ones resemble our real universe. Because the values of the fundamental constants dictate chemistry and biology, one could well imagine completely fantastic physics, biology and chemistry arising in universes with different values for Planck’s constant or for the fine structure constant. A 0.001% difference in some values might lead to a lifeless universe with total silence, one with only black holes or spectacularly exploding supernovae, or one which bounced back between infinitesimal and infinite length scales in a split second. Smaller variations on the constants could result in a universe with silicon-based life, or one with liquid ammonia rather than water as life’s essential solvent, or one with a few million earth-like planets in every galaxy. With a slight tweaking of the cosmic calculator, one could even have universes where Blue Morpho butterflies are the dominant intelligent species or where humans have the capacity to photosynthesize.
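As a cartoon of what such a survey might look like, the sketch below sweeps a grid of made-up "fundamental constants" and tallies the kinds of universes produced. The two knobs and all the classification thresholds are pure inventions for illustration; real anthropic calculations would be vastly more involved.

```python
import itertools

# A deliberately cartoonish "universe survey": each universe is defined by
# two dimensionless knobs loosely inspired by the fine-structure constant
# (alpha) and an electric-to-gravitational force ratio (beta). The
# classification thresholds below are invented, not real physics.
def classify(alpha, beta):
    if alpha <= 0 or beta <= 0:
        return "unphysical"
    if alpha > 0.1:                 # toy "chemistry" bound too strongly
        return "no chemistry"
    if beta < 1e30:                 # toy "stars" collapse too readily
        return "black holes dominate"
    return "long-lived stars and chemistry"

# Sweep a grid of constants and tally the kinds of universes produced,
# the way a computational physicist might sweep simulation parameters.
alphas = [0.001, 0.007297, 0.05, 0.2]    # 0.007297 is roughly our alpha
betas = [1e28, 1e34, 1e36, 1e40]
census = {}
for a, b in itertools.product(alphas, betas):
    kind = classify(a, b)
    census[kind] = census.get(kind, 0) + 1
print(census)
```

The point is not the invented rules but the workflow: vary the constants, run each universe forward, and ask which settings produce anything resembling our own.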
All these alternative universes could be simulated and explored by computational physicists without the need to conduct billion dollar experiments and deal with politicians for funding. I believe that both the technology and the knowledge base required to simulate entire universes on a computer could be well within our means in the next fifty years, and certainly within the next hundred years. In some sense the technology is already within reach; already we can perform climate and protein structure simulations on mere desktop computers, so simulating whole universes should be possible on supercomputers or distributed cloud computing systems. Crowdsourcing of the kind done for the search for extraterrestrial intelligence or for protein folding would be readily feasible. Another alternative would be to do the computation using DNA or quantum computers: because of DNA's high storage density and combinatorial capacity, computation with DNA could multiply the available computational resources manyfold. One can also imagine taking advantage of natural phenomena like electrical discharges in interstellar space or in the clouds of Venus or Jupiter to perform large-scale computation; in fact an intelligence based on communication using electrical discharges was the basis of Fred Hoyle's science fiction story "The Black Cloud".
On the theoretical side, the trick is to have enough knowledge about fundamental phenomena and to be able to abstract away the details so that the simulation can be run at the right emergent level. For instance, physicists can already simulate the behavior of entire galaxies and supernovae without worrying about the behavior of every single subatomic particle in the system. Similarly, biologists can simulate the large-scale behavior of ecosystems without worrying about the behavior of every single organism in them. In fact physicists are already quite familiar with such an approach in the field of statistical mechanics where they can simulate quantities like temperature and pressure in a system without simulating every individual atom or molecule in it. And they have measured the values of the fundamental constants to many decimal places to use them confidently in the simulations.
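The Ising model is the textbook statistical-mechanics example of this kind of emergence: from nothing but local spin-flip rules and a temperature, an ordered macroscopic magnetization appears below a critical temperature, with no need to solve for every microscopic trajectory analytically. A minimal Metropolis Monte Carlo sketch (lattice size, sweep count and temperatures chosen only so it runs quickly) looks like this:

```python
import math
import random

# Metropolis Monte Carlo for the 2-D Ising model on a small periodic
# lattice. Magnetization, a macroscopic emergent quantity, is computed
# from simple local rules rather than from an analytic solution.
def magnetization(T, n=10, sweeps=400, seed=1):
    rng = random.Random(seed)
    spins = [[1] * n for _ in range(n)]          # cold start: all spins up
    for _ in range(sweeps * n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        # Sum of the four neighbors, with periodic boundary conditions
        nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j] +
              spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
        dE = 2 * spins[i][j] * nb                # energy cost of flipping
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i][j] *= -1                    # accept the flip
    return abs(sum(map(sum, spins))) / (n * n)

cold, hot = magnetization(T=1.0), magnetization(T=5.0)
print(f"|m| at T=1.0: {cold:.2f}  |m| at T=5.0: {hot:.2f}")
```

Below the critical temperature (about 2.27 in these units) the lattice stays magnetized, while well above it thermal noise destroys the order, so the low-temperature magnetization comes out near 1 and the high-temperature one near 0.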
In our hypothetical simulated universe, all the simulator would have to do would be to input slightly different values of the fundamental constants and then hard-code some fundamental emergent laws like evolution by natural selection and the laws of chemical bonding. In fact, a particularly entertaining enterprise would be to run the simulation and see if these laws emerge by themselves. The whole simulation would in one sense largely be a matter of adjusting initial values, setting the boundary value conditions and then sitting back and watching the ensuing fireworks. It would simply be an extension of what scientists already do using computers albeit on a much larger scale. Once the simulations are validated, they could be turned into user-friendly tools or toys that can be used by children. The children could try to simulate their own universes and can have contests to see which one creates the most interesting physics, chemistry and biology. Adults as well as children could thus participate in extending the boundaries of our knowledge of fundamental physics.
Large-scale simulation of multiple universes can help break the impasse that both experimentation and theory in fundamental physics are facing. Computation cannot completely replace experiment if the underlying parameters and assumptions are not well-validated, but there is no reason why this cannot happen as our knowledge of the world based on small-scale experiments grows. In fields like theoretical chemistry, weather prediction and drug development, computational predictions are becoming as important as experimental tests. At the very least, the results from these computational studies will constrain the number of potential experimental tests and provide more confidence in asking governments to allocate billions of dollars for the next generation of particle accelerators and gravitational wave detectors.
I believe that the ability to simulate entire universes is imminent, will be part of the future of physics and will undoubtedly lead to many exciting results. But the most exciting ones will be those that even our best science fiction writers cannot imagine. That is something we can truly look forward to.

First published on 3 Quarks Daily.

Book review: Roger Williams and the Creation of the American Soul

Roger Williams and the Creation of the American Soul: Church, State, and the Birth of Liberty by John M. Barry

If anyone wants to know what makes the United States unique, part of the answer can be found in this book. At its center is a wholly remarkable, awe-inspiring individual who was light years ahead of his time. Roger Williams founded Rhode Island (then Providence Plantation) in 1636, and it became the world's first model of both full religious tolerance and individual rights, establishing government by the consent of the governed as a foundational principle. At that time nothing like it existed anywhere, certainly not in Europe, where Catholics and Protestants were killing each other over absurdly trivial matters like the age for baptism and Calvinist predestination. Williams's teachings and writings set the stage for fundamental debates about the role of religion and the state in individuals' lives, debates with which we are still grappling.

Williams had fled from England when Charles I intensified his father James I's campaign to persecute Protestant Puritans who wanted a purer, more rigorous form of worship. Growing up in London, Williams had been enormously influenced by Edward Coke and Francis Bacon, two men who ironically were sworn enemies. Coke was the most eminent jurist in English history and had challenged the divine right of kings and emphasized rights to property and due process. Bacon was one of the fathers of the scientific method and put a premium on evidence and observation. From both these men Williams imbibed a deep set of ethics about free, secular thinking.

He arrived in Massachusetts a decade after the Mayflower docked at Plymouth and a few years after John Winthrop established the Massachusetts Bay Colony, the "city on a hill". A talented lawyer, minister and linguist steeped in Bacon's scientific method, he became friends with the Indians, learned their customs and became fluent in their language. He was received with great respect and offered the post of minister in newly established Boston's first church. His moment in history came when he broke with two basic tenets of the Puritans, and of Christians in general, by arguing that the state had no authority to enforce the first four commandments, those dealing with God, and that the Indians had property rights too: the Puritans could not simply seize their lands but had to buy them. This went not just against the fundamentalist religious beliefs of the colony; it was something wholly new that directly contradicted the meld of church and state, and indeed all the political and religious philosophy that existed at the time.

For his novel views Williams was duly banished from Massachusetts under threat of execution, but he kept privately preaching his creed in the more tolerant Salem. When Massachusetts sent out a squad of soldiers to haul him onto a ship bound for England for imprisonment, they found that Williams, tipped off by Winthrop himself, had already escaped into the bitter, snowy winter wilderness. The only reason he remained alive was that he found refuge and friendship among the Narragansett and other Indians who lived in the area; the fact that his friends and colleagues had denounced him while strangers had saved his life fundamentally changed his views of race, of religion, of Native Americans, of freedom and individual rights, and of how much control men should have over other men. His colony became a refuge for the rejected, the denounced and the banished of Massachusetts, Plymouth and Connecticut, the three major colonies of the time.

He decided to codify his beliefs in a formal document. Massachusetts, feeling threatened by its freethinking neighbor to the south, kept trying to usurp its territories, so Williams went back to the same England from which he had fled about fifteen years earlier. By this point England itself was embroiled in the turmoil that would lead to the English Civil War and the execution of Charles I. Williams befriended Oliver Cromwell and managed to get a charter for Rhode Island written, and later endorsed by Charles II (who seems to have forgiven his friendship with Cromwell). In an age when almost every piece of paper, including the founding charter of Puritan Massachusetts, was infused throughout with the names of God and Christ, the Rhode Island charter is an extraordinary document, not mentioning God even once. It established almost complete freedom of religion and made it clear that no one should be persecuted simply for their beliefs - a groundbreaking assertion at a time when even minor differences in religious belief between Catholics and Protestants, let alone between Protestants and Jews or Quakers, were enough to ignite religious wars that killed thousands. Finally, with his charter safely establishing the legality of Rhode Island, Williams returned to his colony and lived to be an old man, still preaching the gospel of tolerance.

Williams's writings serve as the foundation for the novelty of the American experiment. He was a devout Christian who conceived a separation of church from state, of private from public activity. He might have been the first bona fide libertarian. There is a straight line from his teachings to John Locke, the Declaration of Independence and all the worldwide events that the American Revolution inspired. No wonder that when the tide of history met the shores of fate, Rhode Island not only protested unlawful behavior by the English even before the Boston Tea Party, but became the first colony to declare independence from Great Britain in 1776.

Book review: Fur, Fortune and Empire

Fur, Fortune, and Empire: The Epic History of the Fur Trade in America by Eric Jay Dolin

A marvelous and highly revealing history of the fur trade in America, right from the first permanent European settlements in the 17th century to the end of the 19th century. A story of inspiring doggedness against an incredibly unforgiving environment and of the tragic clash of civilizations.

Dolin's basic thesis is that fur was to the 17th through 19th centuries what oil was to the 20th: it was the possibility of buying beaver furs in unprecedented quantities from the Indians for fashion-hungry Europe that largely drew first the Dutch and French and later the English to North America, so the settling and expansion of North America, especially to the West, tracks very closely with the fur trade. With access to the Hudson and Mississippi rivers, the Dutch and French were much better placed to buy fur in exchange for European goods - at first trinkets like utensils and clothing, but later deadlier commodities like guns and alcohol. The Dutch started trading for beaver pelts in their New Amsterdam colony, while the French swept in from Canada and controlled the Mississippi. This led to an inevitable clash between the British and the French for control of the Great Lakes region. After the French and Indian War, clashes arose between the British and the colonies over jurisdiction of the newly opened, vast Ohio territory and its lucrative fur possibilities, and this was at least one of the factors leading to the American Revolution. Americans continued to duke it out with the British even as both expanded into the Northwest, this time killing sea otters in unprecedented numbers, with brutal techniques and gleeful avarice, for trade with China. The Lewis and Clark expedition was at least in part a quest to map lucrative locations for the fur trade.

One of the highlights of the book is the light it sheds on early European-Indian relations, which were much more benign than in later years. In almost every case the Indians welcomed the Europeans at first contact and were in awe of their guns and other modern technology. Partly out of necessity - the Europeans were at first completely dependent on the natives for fetching furs from the deep interior - and partly out of genuine respect and curiosity, Europeans established trading relationships with the Indians through trading posts, and the Indians were often canny enough to play competing French and British trappers and companies against each other to get the best price. The relationship started changing when the Europeans became more land-hungry and started taking advantage of the Indians by plying them with alcohol; the independent forays of European trappers also reduced their dependence on native fur acquisition. But there were violent clashes on both sides, sometimes instigated by Indians but more often provoked by European greed.

The book has memorable portraits of key fur trappers, sailors and soldiers who braved unbelievable rigors of starvation, predation and hostile engagements with Indians to get the furs, living for months in inhospitable, sub-zero temperatures in the Midwest and the Great Plains. One of these "mountain men" was Hugh Glass, who was mauled by a grizzly bear and left for dead before he endured an astonishing foot journey back to civilization; Glass was the inspiration for the movie "The Revenant". The mountain men are fascinating; mostly originating from Kentucky, Tennessee and other border states, they were the most independent of the free-lancing trappers, traveling with aplomb whenever and wherever they wanted, yet 80% of them were married and a third took Indian wives. What is truly interesting is that these uneducated, hardy men were often as well read as an East Coast businessman and practiced a kind of equality among themselves and their wives, often living in communal camps, that might have been unique on the continent for the times. Other memorable characters include John Jacob Astor, one of America's first millionaires, who thrived on and greatly expanded the fur trade; Captain James Cook, who charted the Northwest coast before he was killed in Hawaii; and frontiersmen like Kit Carson, Daniel Boone and Manuel Lisa.

The last part of the book deals with the tragic effects the fur trade had on America's fauna as well as on the Indians. By the 1850s or so, Europeans and Indians had both hunted the beaver nearly to extinction, before a new source of fur was discovered: the American buffalo, or bison. With that discovery began probably the greatest episode of manmade carnage in history. At the beginning of the century tens of millions of buffalo roamed the Great Plains and the Southwest; by 1890 there were a few hundred left. The building of the transcontinental railroad sealed the fate of both the buffalo and the Indians, in whose lives the buffalo was so intimately integrated that they would use and consume every single part of it, including the scrotum and the tail, the heart and the blood. Meanwhile, Europeans started killing the animal for sport, sometimes lazily shooting it from train compartments and leaving the carcasses rotting. The long-range rifle made it possible for a single hunter to kill dozens in a day and waste most of their meat. Soon the plains were literally dotted with rotting carcasses and skulls for as far as the eye could see. The westward expansion also split the Indian population into small groups which were at the mercy of settlers and the U.S. Army, leading to their complete subjugation. This was truly a sad chapter in the history of the United States, and one that frankly brought tears to my eyes.

Not just the buffalo but also the beaver and the sea otter were killed in the tens of millions and hunted to near extinction, so it's perhaps a miracle that they are still around. While the history of the fur trade tells a story of expansion, greed, killing and conquest along with one of resilience, doggedness and adventure, its aftermath tells a story of hope: Teddy Roosevelt, John Muir, Thoreau and others reminded Americans of humanity's deep connection to nature, made a strong push for conservation and set aside large areas of the country where bison, otters and other animals decimated during the fur trade started thriving again. A few years ago a beaver was spotted on the Bronx River in New York for the first time in two hundred years. Perhaps there is a kernel of compassion and hope in the gnarly undergrowth of man's cruelty after all.

Book review: Miracle at Philadelphia

Miracle at Philadelphia: The Story of the Constitutional Convention, May to September 1787 by Catherine Drinker Bowen

A superb, must-read day-by-day account of the Constitutional Convention, which took place in Philadelphia between May and September 1787. The writing and description not just of the deliberations and the personalities but of the stuffy, hot Philadelphia weather, the shops, the clothes and the impressions of European visitors of a society that thumbs its nose at class are so vivid that you feel you are there. I have read a few other accounts of this all-important episode, but none so revealing of the spirit of the times.

Present here are the great men of American history in all their glory and flaws: Washington, Hamilton, Madison, Franklin, Gouverneur Morris (from whose pen came “We the people” in the preamble to the Constitution), and even a lobbyist for land companies, Manasseh Cutler, who helped draft the Northwest Ordinance that created the vast Northwest Territory and sealed the fate of millions of Indians. Exerting their influence subtly from Europe were Jefferson and Adams. There were fiery speakers both for and against a central government - George Mason and Edmund Randolph from Virginia, Luther Martin from Maryland, Hamilton from New York, Elbridge Gerry from Massachusetts (from whom comes one of my favorite quotes: “The evils we have stem from the excess of democracy. The people do not want virtue, but are the dupes of pretended patriots”) - who made no secret of their feelings. They formed the Federalists and Antifederalists who were to have such bitter debates later.

Discussed were issues both trivial and momentous: the exact terms for Senators and Congressmen, whether the President should be appointed for life, the regulation of trade with other countries, the requirements for voting and citizenship, the provision for a national army. But the three most important issues were taxation, representation in both houses, and Western expansion. In many ways these encapsulated the central conflict: states' rights vs. a strong national government. The small states were afraid that proportional representation would diminish their influence to nothing; the large ones were afraid that incomplete representation would harm their economies, their manufacturing and their landed gentry; sparsely populated ones worried that it would harm Westward expansion and slavery. Many delegates spoke openly against slavery, but it was out of concern for the Southern states' objections that the Constitution adopted the infamous three-fifths clause relating to "other persons" (there was consolation in the fact that the convention at least set an 1808 date for ending the slave trade). To soothe concerns on both sides, Roger Sherman of Connecticut offered the Sherman Compromise, which proposed that the House would have proportional representation while the Senate's composition would be fixed at two members from each state.

Women, white men without property, Africans and Indians famously got fleeced. As Jill Lepore wrote in her history "These Truths", while Africans were degraded as slaves and counted as three-fifths of persons, women fared almost as badly and were completely left out of the Constitution. In 1776, Abigail Adams had memorably written to her husband, "Do not put such unlimited power into the hands of the husbands. Remember, all men would be tyrants if they could. If particular care and attention is not paid to the ladies, we are determined to foment a rebellion, and will not hold ourselves bound by any laws in which we have no voice or representation" - but her words were far from anyone's mind in 1787. Women's rights as we know them were non-existent then. But the Constitution was at least a triumph of religious freedom when, in the face of objections by some prominent Americans, it did away with any religious test for becoming a citizen and for holding office. This was a revolutionary move for the times.

Bowen’s book also does a fantastic job of letting us see the world through the eyes of these men and women. It’s very difficult for us in the age of the Internet to realize how slow communication was in those times and how disconnected people felt from each other in the unimaginably vast expanse of the country and the frontier to the West. The states were so loosely bound to each other by the previous Articles of Confederation, and had such disparate geographies and cultures, that in some cases they were threatening to fracture (for instance, Maine wanted to separate from Massachusetts, and Virginia was planning to form a navy to defend herself against other states). So many of the concerns arose from legitimate worries that a Senator or President in a distant capital would never understand the concerns of a farmer from South Carolina, or that a farmer from South Carolina would never understand the concerns of a New England artisan. The fear that a central government would run roughshod over individual states was a very real one, although seventy years later it manifested itself in an ugly incarnation. There was also deep skepticism about “the people” (as Madison would later famously put it, “If men were angels, no government would be necessary”), and many vociferously asked that the preamble should say “We the states”.

Another revealing aspect of the book is how many measures were either defeated when first proposed or passed by a slim majority; sometimes the delegates even changed their votes. This was democracy in action: giving everyone a chance to voice their concerns while still obeying the wishes of the majority. Fun fact, especially in light of the present times: the presidential veto was struck down ten-to-one when first proposed. And, in what today seems like the most incomprehensible move, a Bill of Rights was also struck down ten-to-one when first proposed. The main argument was: if Americans are already free, why do they need a separate Bill of Rights? And if you are already laying down rules for what the government can do, why is it necessary to explicitly state what it cannot do? It was only after the Constitution was sent to the states for ratification that Massachusetts proposed adding a bill of rights; in fact some of the amendments in the Bill of Rights mirror Massachusetts’ own proposals for a state bill of rights. Once the powerful states like Massachusetts, Virginia and Pennsylvania ratified, the other states quickly fell in line.

It is wonderful to see the Antifederalists who had opposed the Constitution immediately concede to the wishes of the people, often in generous terms, as it was ratified by the individual states. In fact that is perhaps the single most important fact that comes across in Bowen’s account: that men with widely differing views reached a compromise and forged a document which, although it contained important flaws, became a trailblazing, unique, enduring piece of work calling for a “more perfect Union”, one that led to a clarion call for individual rights and liberty not just in the United States but throughout the world.

View all my reviews

Chemistry is not harder than other sciences...just different.

A well-known physicist turned venture capitalist asked on Twitter the other day why people seem to have a harder time understanding chemistry than physics or biology. Chemistry is by no means harder to understand than physics or biology, but it occupies a tricky middle ground between rigor and intuition, between deduction and creation, between creativity and understanding. Understanding it can bring great dividends: Robert Oppenheimer once said that “If you want to get someone interested in science, teach them a course on elementary chemistry…unlike physics, it gets very quickly to the heart of things.”
Chemistry’s path was partly driven by an impulse to understand the physical world, much like the path of physics and astronomy, but somewhat differently from physics and astronomy, to consciously improve the material conditions of life. What passed for medicine, art, architecture, agriculture and commerce in the ancient world was suffused with chemistry. Whether it was indigo dye for royal textiles, mercury or arsenic for medicine, lime for protecting crops or plaster for holding together stones of medieval stone buildings, the world looked to chemistry, whether consciously or not, to feed, transport, clothe and sustain itself. But this foundational practical role that chemistry played also obscured its philosophy.
The philosophy of chemistry developed in the 18th and 19th centuries through the work of Dalton, Lavoisier, Liebig, Kekule, Mendeleev and other thinkers. Much as biologists had spent their time collecting specimens and systematizing their science before someone like Darwin could make a great theoretical leap, chemists had systematized the vast body of observations that natural philosophers had documented and assimilated over the years. But key questions remained: Why did water freeze at 0 degrees Celsius and expand as it cooled? Why were gallium and mercury liquids? Why was lithium relatively stable while its cousin sodium was a fiery, unstable beast? Even Mendeleev’s famed periodic table, after answering the how and what, did not answer the why.
It was only with the advent of atomic physics and quantum theory in the 20th century that these questions started to be answered. Niels Bohr’s atomic model led to the idea of the atom as an entity with a central dense nucleus surrounded by fuzzy probabilistic shells of electrons. Concomitant developments by 19th century chemists that had led to the precise measurements of atomic weights and the elucidation of rules that predicted how elements combine with each other intersected with the basic Bohr atom and the science of spectroscopy to illuminate how different elements were built up with different numbers of electrons and protons (the neutron whose discovery explained isotopes came only in 1932).
It was only after Walter Heitler, Fritz London, Gilbert Newton Lewis and especially Linus Pauling explained how the chemical bond is formed that chemistry truly exploded as a self-contained discipline. By showing how different atoms share electrons in different ways so that they are held together by a variety of forces – weak dispersion forces and strong electrostatic forces, for instance – modern chemistry finally started answering those questions about freezing water and liquid mercury that had been asked for centuries.
But how was the philosophy of chemistry faring compared to the broader philosophy of science during this period? Not very well. Philosophers were naturally drawn first to physics and then to biology as deductive disciplines for laying out their conception of how science is done. Quantum mechanics especially, with its paradoxes and mysteries, became fertile ground for philosophers to erect their edifice. Biology, with evolution and heredity, seemed to go to the heart of human existence and also attracted philosophical theorizing. Somehow chemistry slipped through the fingers of the prominent philosophers, partly because it seemed too practical, like engineering (although engineering has its own philosophy), and partly because they simply didn’t get it.
Why? Because chemistry largely defies the traditional philosophy of science as laid down not only in physics and biology but in science in general in the centuries since the competing visions of Baconian and Cartesian science molded the way both scientists and philosophers view the natural world. Francis Bacon said, “All depends on keeping the eye fixed on the facts of nature.” Descartes said, “I think, therefore I am.” Science developed along both these lines and it led to the familiar set of ideas about hypothesis testing, observation, experiment and theorizing, and later in the 20th century, to conjectures and refutations, falsification and paradigm shifts. Most people were comfortable dealing with sciences that seem to at least broadly fit these notions from the philosophy of science.
Chemistry does not always fit neatly into these categories because it’s more akin to the creative arts of architecture and painting. The Nobel Prize-winning chemist, writer and poet Roald Hoffmann asks what hypothesis exactly we are generating or falsifying when we synthesize a molecule like quinine or indigo – or, for that matter, when we compose a poem like “J. Alfred Prufrock”. The synthesis of novel substances is really at the heart of chemistry, and it has had an incalculable impact on our way of life. There is great science as well as great art in synthesizing a complex molecule through the precise, creative assembly of simple atomic components; there is great beauty as well, of the kind found in constructing the finest cathedrals.

There is really nothing that a chemist is trying to falsify when she makes a new compound, except perhaps the proposition that it cannot be made. In addition, chemistry, much more than physics, is a tool-driven science, and instrumental revolutions like x-ray crystallography and NMR spectroscopy run counter to the framework of idea-driven revolutions, popularized by Thomas Kuhn, that philosophers of science favor. Chemistry is thus a slippery eel, easily escaping the grasp of the flowing waters of philosophy. It’s this inability of the traditional boxes of philosophy to hold chemistry that often makes it hard for people to appreciate it.
A second aspect of chemistry makes it easier for biologists than for physicists to appreciate. Hoffmann provocatively hits on this when he says, “When I talk about chemistry I have three audiences in mind: fellow academics in the humanities and arts, the man on the street and physicists. Among these three I find it hardest to explain chemistry to physicists, because they think they understand, but they don’t.” The problem is that chemistry did depend on physics, especially atomic physics and quantum mechanics, for some of its key foundations. There is little doubt that explaining the Bohr atom allowed theoretical chemists to then explain the chemical bond. But this success also lulled physicists – and, I would say, a good number of laymen – into an illusory sense of total explanatory power.
This illusion was reflected in the words of Paul Dirac, as great a theoretical physicist as one can find, when, after setting into place the full laws of quantum mechanics in the late 1920s, he said that “The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble. It therefore becomes desirable that approximate practical methods of applying quantum mechanics should be developed, which can lead to an explanation of the main features of complex atomic systems without too much computation.”
Dirac was both profoundly right and profoundly wrong in saying this. Profoundly right because it is indeed true that many simplifying approximations and massive computations have to be brought to bear when quantum mechanics is applied to real chemical systems. Profoundly wrong because while this is true in principle, it’s almost irrelevant in practice for real chemical systems. Even if you could hypothetically solve the Schrödinger equation for every single molecule of DNA in the body, that solution would still not tell you why DNA is a double helix, why it replicates semi-conservatively, why it mutates, how these mutations are passed down from parents to children or how the information it encodes flows from DNA to RNA to protein.
All these are examples of emergent phenomena unique to chemistry, phenomena that cannot be completely reduced to physics. One can write down the Schrödinger equation for DNA, but the exact functions of DNA are the consequences of its unique structure combined with evolutionary contingency, which selected the replication and transmission of hereditary characteristics as one among many functions of DNA. Contingency and emergence confer a special status on DNA the chemical as opposed to DNA the collection of atoms described by quantum theory. The same theme permeates other parts of chemistry. A good example is the hydrogen bond, a bonding interaction that’s strong enough to hold the molecules of life together but weak enough to allow them to shape-shift between structures performing a variety of functions essential to life. The hydrogen bond is a minimalist feature of chemical and biological systems, composed of just three atoms, with a hydrogen exchanged like a tennis ball between two oxygens or nitrogens. One can write a Schrödinger equation for a hydrogen bond, and it’s useful for deriving fairly accurate energies, but the solution by itself doesn’t tell us how useful hydrogen bonds are, how they differ on different length and time scales, or how their distribution of energies leads us to a more refined understanding of biological systems.
There are concepts in chemistry like hydrogen bonding, electronegativity, aromaticity and polarizability that get “frayed at their edges”, in Hoffmann’s words, when one tries to scrutinize them too finely using the scalpel of physics; in that sense they are like the mythical electron that physicists talk about, best-behaved when not observed and left alone. It’s not that physics is useless for understanding these ideas, it’s that they are best understood at the level of chemistry itself as semi-qualitative concepts.
It’s this emergent nature of chemical concepts which still keep one foot rooted in physics, this imprecise and yet immensely useful blend of rigor and qualitative understanding, this inability of traditional philosophy of science to keep chemistry encased within its boxes, that makes chemistry a unique science. It’s not hard to understand. It’s just complicated.
First published on 3 Quarks Daily.

Open Borders

The traveler comes to a divide. In front of him lies a forest. Behind him lies a deep ravine. He is sure about what he has seen but he isn’t sure what lies ahead. The mostly barren shreds of expectations or the glorious trappings of lands unknown, both are up for grabs in the great casino of life.
First came the numbers, then the symbols encoding the numbers, then symbols encoding the symbols. A festive smattering of metamaniacal creations from the thicket of conjectures populating the hive mind of creative consciousness. Even Kurt Gödel could not grasp the final import of the generations of ideas his self-consuming monster creation would spawn in the future. It would plough a deep, indestructible furrow through biology and computation. Before and after that it would lay men’s ambitions of conquering knowledge to final rest, like a giant thorn that splits open dreams along their wide central artery.
Code. Growing mountains of self-replicating code. Scattered like gems in the weird and wonderful passage of spacetime, stupefying itself with its endless bifurcations. Engrossed in their celebratory outbursts of draconian superiority, humans hardly noticed it. Bits and bytes wending and winding their way through increasingly Byzantine corridors of power, promise and pleasure. Riding on the backs of great expectations, bellowing their heart out without pondering the implications. What do they expect when they are confronted, finally, with the picture-perfect contours of their creations, when the stagehands have finally taken care of the props and the game is finally on? Shantih, shantih, shantih, I say.
Once the convoluted waves of inflated rational expectations subside, the reality kicks in in ways that only celluloid delivered in the past. Machines learning, loving, loving the learning that other machines love to do was only a great charade. The answer arrives in a hurry, whispered and then proudly proclaimed by the stewards of possibility. We can never succeed because we don’t know what success means. How doth the crocodile eat the tasty bits if he can never know where red flesh begins and the sweet lilies end? Who will tell the bards what to sing if the songs of Eden are indistinguishable from the last gasps of death? We must brook no certainty here, for the fruit of the tree can sow the seeds of murderous doubt.
Every so often, although not as often as our eager minds would like, science uncovers connections between seemingly unrelated phenomena that point to wholly new ways forward. Last week, a group of mathematicians and computer scientists uncovered a startling connection between logic, set theory and machine learning. Logic and set theory are the purest of mathematics. Machine learning is the most applied of mathematics and statistics. The scientists found a connection between two very different entities in these very different fields – the continuum hypothesis in set theory and the theory of learnability in machine learning.
The continuum hypothesis is related to two different kinds of infinities found in mathematics. When I first heard that infinities can actually be compared, it was as if someone had cracked my mind open by planting a firecracker inside it. There is the first kind of infinity, the “countable infinity”, defined as an infinite set that maps one-to-one onto the set of natural numbers. Then there’s the second kind, the “uncountable infinity”, a gnarled forest of limitless complexity, defined as an infinity that cannot be so mapped. The real numbers are an example of an uncountable infinity. One of the staggering results of mathematics is that the infinite set of real numbers is somehow “larger” than the infinite set of natural numbers. The German mathematician Georg Cantor supplied the proof of the uncountable nature of the real numbers, sometimes called the “diagonal proof”. It is like a beautiful gem that has suddenly fallen from the sky into our lap; reading it gives one intense pleasure.
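For readers who want to see the diagonal trick in action, here is a small Python sketch of my own (a finite toy; Cantor's actual argument acts on infinite sequences): given any claimed enumeration of binary sequences, build a new sequence that differs from the k-th entry in its k-th digit, so it cannot appear anywhere in the list.

```python
# Finite illustration of Cantor's diagonal argument: construct a binary
# string that differs from the k-th listed string at position k, so it
# cannot equal any string in the list.

def diagonal_escape(enumeration):
    """Return a binary string differing from enumeration[k] at position k."""
    return "".join("1" if seq[k] == "0" else "0"
                   for k, seq in enumerate(enumeration))

# A purported "complete" list of 4-digit binary sequences (toy example).
claimed_complete_list = [
    "0000",
    "0101",
    "1100",
    "1111",
]

escapee = diagonal_escape(claimed_complete_list)
print(escapee)  # prints 1010, which differs from every listed sequence
```

Cantor's real argument applies this construction to any purported enumeration of all infinite binary sequences (equivalently, the reals written in binary), showing that no such enumeration can ever be complete.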
The continuum hypothesis asks whether there is an infinity whose size is between the countable infinity of the natural numbers and the uncountable infinity of the real numbers. The mathematicians Kurt Gödel and – more notably – Paul Cohen were unable to prove whether the hypothesis is correct or not, but they proved something equally or even more interesting: that the continuum hypothesis cannot be decided one way or the other within the standard axioms of set theory. Thus, there is a world of mathematics in which the hypothesis is true, and there is one in which it is false. And our current understanding of mathematics is consistent with both these worlds.
Fifty years later, mathematicians have found a startling and unexpected connection between the truth or falsity of the continuum hypothesis and the idea of learnability in machine learning. Machine learning seeks to learn the details of a small set of data and make correlative predictions for larger datasets based on those details. Learnability means that an algorithm can learn parameters from a small subset of data and accurately extrapolate to the larger dataset based on those parameters. The recent study found that whether learnability is possible for arbitrary, general datasets depends on whether the continuum hypothesis is true. If it is true, then one will always find a subset of data that is representative of the larger, true dataset. If the hypothesis is false, then one will never be able to pick such a subset. In fact in that case, only the true dataset represents the true dataset, much as only an accused man can best represent himself.
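To pin down the everyday sense of "learn on a small sample, extrapolate to the full dataset", here is a deliberately trivial Python sketch of my own - a toy stand-in, not the paper's actual "estimating the maximum" (EMX) setting, and all names in it are invented for illustration: fit a one-parameter threshold classifier on a small sample and check how well it extrapolates to the whole dataset.

```python
# Toy illustration of learnability: learn a single threshold from a small
# sample and check that it extrapolates to the full "population" of data.
# (My own toy example, not the EMX setting of the actual result.)

# Hypothetical population: 1000 points in [0, 1), labeled 1 iff x > 0.6.
population = [(i / 1000, int(i / 1000 > 0.6)) for i in range(1000)]

# Take a small, deterministic sample (every 37th point).
sample = population[::37]

# "Learn": place the threshold midway between the largest 0-labeled and
# the smallest 1-labeled point seen in the sample.
lo = max(x for x, y in sample if y == 0)
hi = min(x for x, y in sample if y == 1)
threshold = (lo + hi) / 2

# "Extrapolate": count misclassified points over the full population.
errors = sum(int(x > threshold) != y for x, y in population)
print(f"learned threshold ~ {threshold:.4f}, errors: {errors} of {len(population)}")
```

Here a sample of a few dozen points pins the decision boundary down to within about one percent of the data - the benign, learnable case. The surprise of the new result is that for sufficiently general notions of learning, whether such a representative sample always exists turns out to be equivalent to the continuum hypothesis, and hence undecidable.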
This new result extends both set theory and machine learning into urgent and tantalizing territory. If the continuum hypothesis is false, it means that we will never be able to guarantee being able to train our models on small data and extrapolate to large data. Specific models will still be able to be built, but the general problem will remain unsolvable. This result could have significant implications for the field of artificial intelligence. We are entering an age where it’s possible to seriously contemplate machines controlling other machines, with human oversight impossible not just in practice but also in principle. As code flows through the superhighway of other code and groups and regroups to control other pieces of code, machine learning algorithms will be in charge of building models based on existing data as well as generating new data for new models. Results like this one might make it impossible for such self-propagating intelligent algorithms to guarantee that they can solve all our problems, or solve their own problems in order to imprison us. The robot apocalypse might be harder than we think.
As Jacob Bronowski memorably put it in “The Ascent of Man”, one of the major goals of science in the 20th century was to establish the certainty of scientific knowledge. One of the major achievements of science in the 20th century was to prove that this goal is unattainable. In physics, Heisenberg’s uncertainty principle put a fundamental limit on measurement in the world of elementary particles. Einstein’s theory of relativity made the speed of light a fundamental limit on how fast matter and information can travel. But most significantly, it was Gödel’s famous incompleteness theorem that put a fundamental limit on what we can prove and know even in the seemingly impregnable world of pure, logical mathematics. Even in logic, that bastion of pure thought, where conjectures and refutations don’t depend on any quantity in the real world, we found that there are certain statements whose truth might forever remain undecidable.
Now Gödel’s legacy has thrown another wrench in the works, asking us whether we can indeed hold infinity and eternity in the palm of our hands. As long as the continuum hypothesis remains undecidable, so will the ability of machine learning to transform our world and seize power from human beings. And if we cannot accomplish that feat of extending our knowledge into the infinite unknown, instead of despair we should be filled with the ecstatic joy of living in an open world, a world where all the answers can never be known, a world forever open to exploration and adventure by our children and grandchildren. The traveler comes to a divide, and in front of him lies an unyielding horizon.

Modular complexity, and reverse engineering the brain

The Forbes columnist Matthew Herper has a profile of Microsoft co-founder Paul Allen, who has placed his bets on a brain institute whose goal is to map the brain...or at least the visual cortex. His institute is engaged in charting the sum total of neurons and other working parts of the visual cortex and then mapping their connections. Allen is not alone in doing this; there are projects like the Connectome project at MIT that are trying to do the same thing (and the project's leader Sebastian Seung has written an excellent book about it).

Well, we have heard echoes of reverse-engineered brains from more eccentric sources before, but fortunately Allen is not one of those who believe that the singularity is near. He also seems to have entrusted his vision to sane minds. Christof Koch, a former Caltech professor and longtime collaborator of the late Francis Crick, started at the institute this year as its chief science officer. Just last month Koch penned a perspective in Science that points out the staggering challenge of understanding the connections between all the components of the brain; the "neural interactome", if you will. The article is worth reading if you want to get an idea of how simple numerical arguments illuminate the sheer magnitude of mapping out the neurons, cells and proteins that make up the wonder that is the human brain.

Koch starts by pointing out that calculating the interactions between all the components in the brain is not the same as computing the interactions between all the atoms of an ideal gas, since the brain's interactions are between different kinds of entities and are therefore not identical. Instead, he proposes, we have to use something called the Bell number B(n), which reminds me of the partitions that I learnt when I was sleepwalking through set theory in college. Briefly, for n objects, B(n) is the number of ways those objects can be grouped into non-empty subsets (pairs, triples, quadruples and so on). Thus, when n = 3, B(n) is 5. Not surprisingly, B(n) grows faster than exponentially with n, and Koch points out that B(10) is already 115,975. If we think of a typical presynaptic terminal with its 1000 proteins or so, B(n) already starts giving us heartburn. For something like the visual cortex, where n = 2 million, B(n) would be prohibitive. And as the graph demonstrates, for more than 10^5 components or so the amount of time spirals out of hand at warp speed. Koch then uses a simple calculation based on Moore's law to estimate the time needed for "sequencing" these interactions. For n = 2 million, the time would be of the order of 10 million years.
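If you want to see these numbers for yourself, here is a minimal sketch (my own illustration, not from Koch's article) that computes Bell numbers with the classic Bell-triangle recurrence: each row starts with the last entry of the previous row, and each further entry is the sum of its left neighbor and the entry above that neighbor.

```python
# Bell numbers via the Bell triangle: row n starts with the last
# entry of row n-1, and each further entry is the sum of its left
# neighbor and the entry above that neighbor. B(n) is the first
# entry of row n.

def bell(n):
    """Number of ways to partition n labeled objects into non-empty groups."""
    row = [1]  # row 0
    for _ in range(n):
        nxt = [row[-1]]
        for entry in row:
            nxt.append(nxt[-1] + entry)
        row = nxt
    return row[0]

print(bell(3))             # 5, as in the text
print(bell(10))            # 115975, already unwieldy
print(len(str(bell(50))))  # digit count of B(50) hints at the explosion
```

Python's arbitrary-precision integers keep the arithmetic exact, but no amount of precision helps with n = 2 million; the count itself is the problem.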

And this considers only the 2 million neurons in the visual cortex; it doesn't even count the proteins and cells that might interact with the neurons on an individual basis. We can already see the outlines of what Allen himself has called the "complexity brake". And this one seems poised to make an asteroid-sized impact.

So are we doomed in trying to understand the brain, consciousness and the whole works? Not necessarily, argues Koch. He gives the example of electronic circuits, where individual components are grouped into separate modules. If you bunch a number of interacting entities together and form a separate module, the complexity of the problem drops, since you now only have to calculate interactions between modules. The key question then is: is the brain modular? Common sense would have us think it is, but it is far from clear how exactly we can define the modules. We would also need a sense of the minimal number of modules in order to calculate the interactions between them. This work is going to take a long time (hopefully not as long as that for B(2 million)), and I don't think we are going to have an exhaustive list of the minimal number of modules in the brain any time soon, especially since these are going to be composed of different kinds of components and not just one kind.
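To see how dramatically modularity tames the combinatorics, here is a toy comparison (the numbers and the grouping scheme are my own illustration, not Koch's): partitioning 12 components directly, versus first bundling them into 4 modules of 3 and only counting groupings between modules plus groupings inside each module.

```python
# Toy illustration of the modularity argument (numbers are mine, not
# Koch's): grouping components into modules replaces one enormous
# Bell number with a handful of small ones.

def bell(n):
    """Number of ways to partition n labeled objects into non-empty groups."""
    row = [1]  # Bell triangle, row 0
    for _ in range(n):
        nxt = [row[-1]]
        for entry in row:
            nxt.append(nxt[-1] + entry)
        row = nxt
    return row[0]

flat = bell(12)                  # treat all 12 components individually
modular = bell(4) + 4 * bell(3)  # 4 modules, plus groupings inside each
print(flat, modular)             # 4213597 vs 35
```

Even at this toy scale the saving is five orders of magnitude, and it compounds as n grows; that, presumably, is why the modularity question matters so much.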

Any attempt to define these modules is going to run into the problems of emergent complexity that I have occasionally written about. Two neurons plus one protein might behave differently from two neurons plus two proteins in unanticipated ways. Nevertheless, this goal seems far more attainable in principle than calculating every individual interaction, and that's probably the reason Koch left Caltech to join the Allen Institute in spite of the pessimistic calculation above. If we can ever get a sense of the modular structure of the brain, we may have at least a fighting chance of mapping out the whole neural interactome. I am not holding my breath too hard, but my ears will be wide open.

Image source: Science magazine