The Social Singularity is (Somewhat) Near
An Interview With Singularity U and an excerpt from The Social Singularity
Embedded below, you’ll find a podcast interview with Steven Parton of Singularity U.
1,030,000,000. That’s the number of results the search term “artificial intelligence” returns as of this writing. From chess-playing supercomputers to robot overlords, the idea of thinking machines has become something of a worldwide obsession. I’m aware that this paragraph will quickly date itself. How long before we simply commune with AI in our heads as if consulting some benevolent oracle?
Among the 1,030,000,000 results, we’ll surely find reference to John von Neumann, the polymath inventor. In the 1950s, von Neumann observed (as his colleague Stanisław Ulam later recalled): “The accelerating progress of technology and changes in the mode of human life give the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” Maybe some sentient AI in the future will honor von Neumann as a kind of ur-creator, along with Babbage, Turing, and Wiener, if such honors can be coded.
But what is a singularity? Or, more specifically, what is the technological singularity? In the 1960s, I.J. Good offered a clue:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. [Emphasis Good’s.]
Science fiction author Vernor Vinge runs with Good’s great idea. And in a now-famous 1993 essay, Vinge describes a not-so-distant future in which machine superintelligence has brought the human age to an end. “Large computer networks (and their associated users) may ‘wake up’ as a superhumanly intelligent entity,” Vinge writes.
What does it mean for machines to wake up, exactly?
Enter Ray Kurzweil. In The Age of Spiritual Machines, Kurzweil applies the logic of Moore’s Law, the exponential improvement in processing power, to parallel advances in artificial intelligence. He invites us to imagine an era in which AI could replace our evolved wetware for nearly any task, and in which advances in related fields would yield sentient beings. These won’t just be advanced computing devices. They’ll be animated and aware. Kurzweil thus popularized the idea of the (technological) singularity and gave his predictions evidentiary support.
Max Borders’ interview with Singularity U
Using the rationale of exponential innovation, Kurzweil forecast the point at which AI superintelligence would emerge, almost down to the year. He envisioned a time in which these “spiritual machines” would solve a host of human problems, enable us to live longer, and interface with us, perhaps within some networked Allmind. But not everyone was as optimistic as Kurzweil.
In 2000, having reviewed a draft of The Age of Spiritual Machines, computer scientist Bill Joy poured cold water on the enthusiasm, though not before conceding the basic premise.
“Each of these technologies also offers untold promise,” Joy admitted in the pages of Wired. “The vision of near immortality that Kurzweil sees in his robot dreams drives us forward; genetic engineering may soon provide treatments, if not outright cures, for most diseases; and nanotechnology and nanomedicine can address yet more ills. Together they could significantly extend our average life span and improve the quality of our lives.”
Cures. Longer life spans. Quality of life. Millions of readers had said, “Sign me up.” But Joy was fearful.
“Yet, with each of these technologies, a sequence of small, individually sensible advances leads to an accumulation of great power and, concomitantly, great danger.”
It’s the danger of unintended consequences. In complex systems, these can be compounded, replicated, and tough to reverse. Whether in CRISPR, genetic algorithms, or artificial intelligence, Joy was warning us about the unpredictability of nature, of which AI is but an extension. Thus, around 2000, the yin and yang of technological progress started to reflect upon itself. The Joy-Kurzweil debate created something like a permanent controversy.
Currently, titans like Elon Musk and Jeff Bezos trade barbs on social media. Musk has more or less taken up Joy’s view. Bezos follows Kurzweil in his optimism. And as the titans argue, pundits join the fray. Opining on the prospects and perils of advanced AI has become a cottage industry. 1,030,000,000 and counting.
Yet through it all, few deny that the singularity is coming. No one wants to be neutral in one of history’s most important debates, and we’d be wise to consider both perspectives. Picking sides isn’t our goal here, though. We want instead to set the stage. As everyone scans the horizon for advancing robot armies, unemployed laborers, or both, a quieter revolution is unfolding around us.
Coevolution: Tools, Rules, and Culture
“We become what we behold,” philosopher and social commentator Marshall McLuhan said. “We shape our tools and then our tools shape us.” You might say that if John von Neumann is the father of the singularity, McLuhan is the father of the social singularity. It is, after all, on this single insight that our thesis turns.
Before we get into it, let’s add a variation to McLuhan’s maxim:
We become what we follow. We shape our rules and then our rules shape us.
Think about it: a decade ago, it would have been virtually unheard of to get into a car with a stranger. It would also have been odd to stay in a stranger’s bedroom. Today the hitchhiker taboo has been more or less neutralized. The technology behind the sharing economy has normalized these behaviors, because the tech injects security and reputation into what were once potentially dangerous activities.
Technology and culture thus coevolve. This coevolution proceeds in waves, as technology changes culture, which changes technology, which changes culture, and so on through time. The advent of paper money gave rise to an economic revolution of global trade. Global trade spurred the development of double-entry accounting. And modern accounting methods gave rise to the wealthy merchant classes of Italy, who sparked the Renaissance.
“He who wants to know how to keep a ledger and its journal in due order must pay strict attention to what I say,” Luca Pacioli wrote of the Venetian system in 1494. Here, a system of profit and loss was made explicit and easily quantifiable. One might argue that this innovation created a robust market culture: if you could measure the activities that created customer value, you could do more of them. That’s just good business.
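The core of the Venetian system is easy to see in miniature. Here is a minimal sketch, not Pacioli’s actual method, with invented account names and figures: every transaction is posted twice, as a debit to one account and a credit to another, so the books always balance and profit falls out of the ledger itself.

```python
# Minimal sketch of double-entry bookkeeping (illustrative accounts
# and amounts, not drawn from Pacioli's text): each transaction is
# recorded as a matched debit and credit, so the ledger always sums
# to zero and profit becomes an explicit, quantifiable figure.
from collections import defaultdict

ledger = defaultdict(float)

def post(debit_account, credit_account, amount):
    """Record one transaction as a matched debit and credit."""
    ledger[debit_account] += amount
    ledger[credit_account] -= amount

# A hypothetical merchant buys wool for 100 ducats and sells it for 160.
post("inventory", "cash", 100)           # purchase: goods in, cash out
post("cash", "sales_revenue", 160)       # sale: cash in, revenue earned
post("cost_of_goods", "inventory", 100)  # expense the goods sold

profit = -ledger["sales_revenue"] - ledger["cost_of_goods"]
assert abs(sum(ledger.values())) < 1e-9  # debits always equal credits
print(f"profit: {profit} ducats")        # 60.0 ducats
```

The balancing invariant is the point: any entry made only once breaks the zero-sum check, which is what made the method so effective at exposing errors and measuring value-creating activity.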
During the Renaissance the optical lens came onto the scene, too, which challenged the orthodoxies of world religions (think Galileo) and galvanized the sciences. Science, the enterprise of observation, needed more tools as it gobbled up mindshare. Eventually the scientific worldview challenged religion as the dominant currency of belief. And as those tools proliferated, cultures began to change. Great swaths of humanity adopted new values. A commercial-scientific value system came to predominate in Europe and America. It spread like wildfire, giving rise to new innovations and cultural forms.
Fast forward to 1960.
The FDA approved G.D. Searle & Company’s Enovid, the first oral contraceptive. We know it as “the pill.” Arguably, this event sparked the sexual revolution—especially for women—which by 1968 was in full flower. Sexual norms changed and women had far more control over both their sex lives and reproductive choices. Then, of course, with sexual independence came more career options and a wave of feminism.
Today, technology has broadened and deepened the sexual revolution—for better or worse. Apps like Tinder reduce the transaction costs of dating and mating. Japanese youth move online and away from each other, as hentai (explicit cartoons) hastens their demographic downward spiral. And young people everywhere are now freezing their sperm or eggs to delay the ancient rite of parenthood—often to have more adventures before settling down. Tools. Culture. Tools.
Back and forth it goes.
But what about the rules? These invisible incentive systems, which Nobel laureate Douglass North called “institutions,” coevolve with culture, too. When considering two vastly different sets of rules, the twentieth century offers us a couple of enormous A/B tests.
After WWII, the major powers carved up Germany. East Germany became communist and West Germany became capitalist. These two vastly different ways of organizing society, through two starkly different sets of institutions, offered us a natural experiment. By 1989 it was obvious that West Germany was far more prosperous and less authoritarian, and one could see the contrast simply by looking: bright commercial thoroughfares on one side of the Berlin Wall, dreary brutalism on the other. But what had emerged in the hearts and minds of the two peoples over those years?
A group of behavioral economists wanted to study the difference between cultural values after years in different institutional settings. Specifically, the team of Lars Hornuf of the University of Munich and Dan Ariely, Ximena García-Rada, and Heather Mann of Duke University ran a test to determine Germans’ willingness to lie for personal gain. Some 250 Berliners (the citizens, not the doughnuts) were randomly selected to take part in a game in which they could win up to $8.00, a game that offered opportunities to gain by lying and cheating.
According to The Economist, after wrapping up the game, the players had to fill out a form that “asked their age and the part of Germany where they had lived in different decades.” The researchers concluded that, on average, those participants with East German roots cheated twice as much as those who had grown up in West Germany. The team also looked at how much time the participants had spent in either place prior to the fall of the Berlin Wall. “The longer the participants had been exposed to socialism, the greater the likelihood that they would claim improbable numbers of high rolls.”
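The inferential trick behind the study can be sketched in a small simulation (the number of rolls, group sizes, and cheating rate below are invented for illustration, not taken from the study): no single report proves dishonesty, but a group that reports improbably many high rolls is, on average, lying.

```python
# Sketch of the die-rolling paradigm: each player rolls a fair die
# many times and self-reports the results, earning more for higher
# rolls. We compare honest reporters to a group that sometimes
# reports a low roll as high. All parameters are illustrative.
import random

random.seed(0)
ROLLS = 40  # rolls per player (invented for illustration)

def reported_high_rolls(cheat_prob):
    """One player's reported count of high rolls (4, 5, or 6).

    With probability cheat_prob, a low roll is falsely reported as high.
    """
    high = 0
    for _ in range(ROLLS):
        roll = random.randint(1, 6)
        if roll >= 4 or random.random() < cheat_prob:
            high += 1
    return high

honest = [reported_high_rolls(0.0) for _ in range(1000)]
cheaters = [reported_high_rolls(0.3) for _ in range(1000)]

print(sum(honest) / len(honest))      # close to 20, what a fair die yields
print(sum(cheaters) / len(cheaters))  # noticeably above 20: the tell
```

A fair die lands high half the time, so honest players should average about 20 high rolls in 40; the cheating group’s average drifts well above that, which is exactly the kind of group-level deviation the researchers could attribute to lying.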
Why did these two groups perform so differently?
The study did not prove the causal origins of the different behaviors. But we can speculate. First, we can safely rule out the hypothesis that East Germans are born with a predilection to cheat and lie: both groups come from more or less the same genetic stock. It’s also doubtful that the differences in moral outlook came from differences in, say, diet. The likeliest explanation is that the two vastly different sets of rules eventually shaped the values of the two peoples.
We become what we follow. We shape our rules and then our rules shape us.
Think about what you would have to do to get by in these two different societies. The late urbanist Jane Jacobs called these “systems of survival.” The two basic strategies are taking and trading: we can take what we need from others, or we can produce and exchange for what we need. Each strategy has a corresponding set of values, or “syndromes,” which can be considered virtuous or vicious depending on one’s perspective. According to Jacobs, takers tend to embrace the “guardian syndrome,” whose prime values include loyalty, obedience, and knowing one’s place. Traders, on the other hand, tend to follow the “commerce syndrome,” whose values include industriousness, efficiency, and honesty.
Depending on circumstances, society can require both syndromes. But the values of guardian syndrome are the values that go along with shoring up hierarchies. The values of commerce syndrome are the values that coincide with creating networks of trade and collaboration. Crudely put, these are the values of the public and private sectors, respectively. And in the case of East and West Germany, we have two real-world analogs. Indeed, Jacobs lists one of the foremost values of guardian syndrome: “Deceive for the sake of the task.”
Economic history supports the idea that institutions can shape cultural values, too. Here is Douglass North, from his Nobel Prize lecture:
The organizations that come into existence will reflect the opportunities provided by the institutional matrix. That is, if the institutional framework rewards piracy then piratical organizations will come into existence; and if the institutional framework rewards productive activities then organizations—firms—will come into existence to engage in productive activities.
Institutional economics, long ignored by the wizards of macroeconomics, supplies a quiet corrective to a discipline that has drifted into the obscurity of mathematical models. North’s insight, that the rules provide a latticework of incentives powerful enough to shape our values, operates in the background here, too.
Although thinkers like economic historian Deirdre McCloskey would caution us not to reduce everything to crude institutional behaviorism, we must respect just how powerful the rules of the game are in shaping society.
Still, if we’re right about technology’s power to transform culture and institutions, catalyzing social change becomes more than just a possibility.
Excerpted from The Social Singularity.
Somehow or other, I had either never heard of, or have long since forgotten the “willingness to lie” study. It sounds very interesting, and I’m going to read more about it. Thanks for bringing it to my attention.
I’m going to download a couple of podcasts from Singularity University to listen to the next time I make a long drive. Are there any that you’d particularly recommend?
I waver between optimism and pessimism about the future of humanity.
There are always reasons to be optimistic. It’s impossible to predict some positive event like the dissolution of the Soviet Union, the dismantling of the Berlin Wall, or some new invention that will benefit humanity. Almost all inventions and technology can be used either to benefit or to harm humanity. The only exception I can think of is certain kinds of military weaponry, like nuclear bombs, which, at least on this planet, don’t have any beneficial uses.
Decentralization is an important tool, and technology can either assist or hinder decentralization.
OTOH, I tend to be pessimistic, mainly because of state funding for science, which crowds out private funding. The number of Robert Stadlers in the world increases every year.
Another reason to be pessimistic is the way that rules shape our culture. Of course, it’s possible that the rules will change to favor a different outcome, but the way the rules are being shaped in modern society, they increasingly favor the guardian syndrome over the commerce syndrome.