What is it Like to Be AI?
Reflections on reflections on artificial intelligence and the problem of consciousness.
The following is an excerpt from my 2018 book, The Social Singularity. As I was working with AI today, it occurred to me to check in on my past self—that is, on what I thought and wrote then. Is the following obsolete? How well does it hold up?
One of the great things about being human is that we feel. You might not think emotion is that important. If you’re a Vulcan, feelings might seem like curious artifacts of a primitive past. But emotion is about more than crushes and hot tempers. It’s a critical aspect of the multidimensional experience system we call consciousness. And the miracle of consciousness is not merely that we have evolved certain pain and pleasure reactions to our environments, reactions we can experience.
It’s that we are conscious at all.
Consciousness makes up our lives from the moment we awaken to the sensation of floating in theta waves just before sleep. It includes the redness of Washington apples, the experience of bacon’s scent at breakfast, and the giddy waves of your first crush. It also includes gut feelings about a place or a person, like that guy at work who just can’t be trusted. Conscious properties are the “what it’s like” properties of experience.
But what is consciousness? And why does it matter so much?
For many, the following might seem like a theoretical diversion. But consciousness—what it is like to be a human—is critical to our success as a species and our medium-term success relative to AI. So what I am about to write may annoy the most cocksure AI prognosticator:
AI programmers don’t yet have a clue about what it would mean to create a conscious machine.
That’s because nobody really understands the nature of consciousness. Despite the fact that most philosophers have been relegated to teaching undergraduates about Plato’s shadows on cave walls, philosophers of mind deserve a lot of credit for pointing out just how far we have to go before building sentient AI.
Take a moment to look at the margin of this page (or this screen, if you are reading on a laptop or phone). What you are experiencing is part of your subjectivity, that is, your consciousness. Whiteness is a familiar concept. But the whiteness you are currently experiencing—call it Whiteness X—is a unique property of your consciousness. That is, it’s a property of your experience, not of the page. Nor is it a neural firing, although Whiteness X depends upon your neurons firing.
Let’s briefly forget about Whiteness X and attend to the sounds around you. Can you hear a fan? A refrigerator’s hum? A child playing in the distance? When you do that, it can be difficult to keep reading. You have shifted your attention from one aspect of experience to another: from Whiteness X to Noise Y, and back to these words. These sensations are real. And indeed they are real to you, not to me. The nature of these properties is deeply mysterious, but understanding that nature is essential to creating machines that think and feel.
We know there is a strong connection between conscious properties and the physical universe. In fact, most philosophers of mind, brain scientists, and AI developers agree that consciousness is part of the fabric of the universe. Some think it is related to entropy. It’s not a mystical essence or a ghostly élan that animates the physical brain while somehow remaining separate from it. Consciousness is a feature of reality. But there is a massive explanatory gap between the properties of consciousness, such as Whiteness X and Noise Y, on the one hand, and the properties of the brain and the environment—for example, lightwaves, soundwaves, axons, dendrites, neurotransmitters—on the other. To repeat: no one has a clue how to bridge this gap. No matter how sophisticated the “thinking” parts of AI get in the next few years, there is a long way to go before anyone builds a conscious machine, a machine for which there is something it is like not just to think, but to intuit, to experience, and to feel.
By the way, none of this is to argue that we can never in principle create conscious AI. Many philosophers and neuroscientists think that because consciousness is a feature of the causal-physical world (that is, reality), it is possible for conscious AI to be designed, unconscious AI to create conscious AI, or consciousness to emerge within complex AI systems. I realize the latter thought might be chilling to some.
With a sufficiently advanced neuroscience, we (or they) might discover just how consciousness gets instantiated, and then use technology to instantiate it. To create conscious AI, it seems reasonable to think we will have to faithfully replicate the causal-physical processes the human brain carries out, with all its interconnected modules and subsystems. These systems work as a harmonious whole, giving rise to our waking lives and experiences. If an AI doesn’t have a waking life and experiences, it will always be a bloodless intelligence.
Think of it as a fancy algorithm.
Thus, we’re not as worried about robots as some people are, at least not yet. There are simply way too many things that humans will be able to do better and more authentically than AI because, thanks to evolution, we are complex, holistic beings.
Consider an analogy: for decades now, computers have assisted us in creating special effects. CGI technology has gotten really good. But no matter how good it gets, we must suspend disbelief when watching, because we sense that it’s not quite real-looking. Indeed, the better CGI gets, the more attuned we become to why it doesn’t quite look real. Animators call this the uncanny valley. For the foreseeable future, there will be uncanny valleys in all manner of human endeavors, such as literature, art, and sex bots. That’s not to say AI won’t assist us, but AI with creativity, cultural context, and human sensibility is a ways off.
Again, there’s no reason AIs won’t someday be complex, holistic beings. But currently, AI does only narrow sorts of thinking, though it does them extremely well. Put another way: AI is similar to a small slice of the recently evolved brain region called the neocortex. But there is so much more to the brain than this slice, and therefore so much more to the mind.
Social psychologist Jonathan Haidt is now famous for a metaphor that gets to the heart of the matter. He writes:
Intuition is the best word to describe the dozens or hundreds of rapid, effortless moral judgments and decisions that we all make every day. Only a few of these intuitions come to us embedded in full-blown emotions. (Emphasis Haidt’s.)
We attend to and focus on only the non-automatic processes, so it is easy to overlook the cognitive power of emotion. But it’s there, Haidt claims.
In The Happiness Hypothesis, I called these two kinds of cognition the rider (controlled processes, including “reasoning-why”) and the elephant (automatic processes, including emotion, intuition, and all forms of “seeing-that”). I chose the elephant rather than a horse because elephants are so much bigger—and smarter—than horses. Automatic processes run the human mind, just as they have been running animal minds for 500 million years, so they’re very good at what they do, like software that has been improved through thousands of product cycles.
Haidt’s metaphor is more than literary window dressing.
Screenwriter Lisa Cron reminds us that people and characters need emotions and intuitions because these are at least half of what motivates us. Cron offers the story of “Elliot,” a patient of a doctor named Antonio Damasio. Elliot had lost a small section of his prefrontal cortex during surgery to remove a benign brain tumor.
“Before his illness,” writes Cron, “Elliot held a high-level corporate job and had a happy, thriving family.” She continues:
By the time he saw Damasio, Elliot was in the process of losing everything. He still tested in the 95th percentile of IQ, had high-functioning memory, and had no trouble enumerating every possible solution to a problem. Trouble was, he couldn’t make a decision—from what color pen to use to whether it was more important to do the work his boss expected or spend the day alphabetizing all the folders in his office.
How had Elliot become so lost?
Damasio figured out Elliot was no longer capable of experiencing emotion. Elliot was so detached, in fact, “he approached everything as if it were neutral.” Try to imagine not feeling anything when you hear your favorite music. You might be aware of the notes, which register as a hollow stimulus. You might even recall that the music once moved you. But now you regard it and the rest of the world as a dispassionate observer who cannot care.
Due to his injury, Elliot couldn’t establish any sort of value hierarchy. He didn’t know what mattered to him and what didn’t, which meant he was devoid of motivation. Human beings, in the normal case, are beings to whom things matter. Or as psychologist Daniel Gilbert puts it, “feelings don’t just matter—they are what mattering means.”
Feelings make us care. Our inner lives are thus unique phenomena that, in a sense, define our humanness in contrast to AI. This suggests that, given material abundance, we will be drawn to those aspects of life that require both an elephant and a rider. So we’re not only poised to rediscover our humanity, we’re poised to create new industries around that very humanity. Evolution has provided us not just with values, consciousness, emotion, and intuition, but also philosophical, literary, and aesthetic sensibilities. Being empathic creatures, we have evolved the ability to imagine what it’s like to be in another’s skin and feel with them.
In this fact alone, new industries are waiting to be born.
The totality of these human properties makes for a truly well-rounded being. Far from being stripped of opportunities, we will create new ones, letting AI be great at Go. In a condition of radical abundance, we will see the re-emergence of cottage industries. We will see the emergence of new industries. And we will see sectors develop, moving from undifferentiated offerings indifferent to consumers’ needs toward far more differentiated and customized experiences.
This development proceeds by building on levels of the adjacent possible, as people go from seeking stuff, all the way to seeking guided transformations. In all of this, we will discover new experiences and rediscover our humanity in what Joe Pine and Jim Gilmore call “the Experience Economy.”