Foreword
I strenuously object to the utopianism about the Singularity and the transhumanist project of immortality Ray Kurzweil expresses in The Singularity Is Near, but I respect Kurzweil’s expertise on technology and his willingness to forecast its future. If the fruits of the Industrial Revolution tell us anything, it’s that we should think more deliberately than ever about the future. So of course we need futurists, people willing to go out on an intellectual limb, to speculate, to predict, to tell us what to strive for and what to beware of. I would have nothing to criticize in Kurzweil if he spent as much time and thought on warning us about the dangers of the Singularity as he does on proselytizing for it.
A bit of balance and rational moderation, that’s all I’d like to see from the man — because, in truth, I agree with him that a technological singularity may be imminent. In fact, I often wonder whether it may soon become unavoidable. I’m not a scientist or a tech expert, so I can’t predict when humanity might cross this or that threshold pertaining to the prospect of a singularity, such as the development of truly sentient AI. But I don’t believe “when?” is the crucial question. Whether we’re talking about just one threshold or the whole shebang, the crucial questions are:
What do human beings stand to gain? What do we stand to lose? And how?
What do various other species stand to gain or lose? And how?
What does the entirety of Earth’s biosphere stand to gain or lose? And how?
Cybernetic Totalism
Given my objections to Kurzweil, naturally I’m a fan of Jaron Lanier. I think of these two as the Odd Couple of Silicon Valley.
Kurzweil is trim, fastidious in his manners, health-obsessed. He dresses in suits for his public appearances. He looks like someone with all his marbles until the moment he opens his mouth.
Lanier, in dreadlocks and T-shirts, looks like he just came back from a lost weekend (or maybe a year) in Jamaica. Sometimes he talks like it, too, but he isn’t stoned. His thoughts just tend to race ahead of what he’s presently saying. And what he’s saying, nine-tenths of the time, is well-considered, probing, and insightful.
Lanier’s futurism is philosophical. Whereas Kurzweil asks “how?” too much and “why or why not?” too little, Lanier asks both in roughly equal measure, but he stresses the value of the latter.
Generally, Lanier believes technologists think rarely and myopically about the why-or-why-not of their work. Though he’s a computer scientist and a virtual reality pioneer, at heart Lanier is a humanist. He champions human subjectivity and decries its digital erosion. His critique of what he calls “cybernetic totalism” begins with his 2000 essay “One Half a Manifesto” in the online magazine Edge. He elaborates on the critique in the book You Are Not a Gadget: A Manifesto (presumably the whole thing), published by Knopf in 2010. For now, I’ll focus on the ideas in “One Half a Manifesto.”
I’m not always keen on Lanier’s turns of phrase. For instance, in Gadget, he uses the term “digital Maoism” to describe how the Internet’s collectivization of knowledge tends to dumb it down, shallow it out, and strip it of contexts, say, to create easily digestible memes. I think he makes an excellent point, but calling it “digital Maoism” on the one hand leaves too much to the reader’s imagination and on the other hand invites irrelevant politics into the picture. Lanier is criticizing a form of collectivism, but it has nothing to do with communism. Connecting it to Chairman Mao and China’s disastrous Great Leap Forward, however pithy the connection, only shades an erudite observation with Red Scare fear-mongering.
A similar problem hovers around the coinage “cybernetic totalism.” It smacks of the academy. Seeing the term, Lanier’s readers might suppose he shares with postmodern theorists a fondness for jargon which pretends to carry more meaning than it actually does, so they may miss how solid his idea really is. Also, “totalism” lives next door to “totalitarianism” — again, the shrill specter of fear-mongering. It’s possible the trope Lanier is critiquing could finally lead to totalitarianism — like, perhaps, the theocracy of control in Quibble — but it’s not a given. In short, despite its pithiness, I wish the term didn’t sound like Jordan Peterson’s paranoid brainchild.
“Enough with this quibbling!” you say. “What is cybernetic totalism?”
Let’s begin by reckoning with its antithesis. Kurzweil thinks skeptics’ worries about “a loss of some vital aspect of our humanity” arise from a misunderstanding of the Singularity, i.e. that “technology will match and then vastly exceed the refinement and suppleness” of biological features, including mind and subjectivity. By contrast, Lanier disbelieves machines will ever match us. He rejects the notion that technology, however sophisticated its simulacra of experience become, can ever truly replicate our subjectivity’s ineffable qualities. His view of what’s “unequivocally human” is far more expansive and rich than Kurzweil’s, which was expressed in a single sentence. For Lanier, what makes us alive — the interplay of mind, subjectivity, and experience — is in a sense already transcendent. Attempting further transcendence by mucking about with a technological singularity only risks throwing away what we already have.
In a nutshell, then, cybernetic totalism is a belief system which reduces human life — and human beings — to what’s intelligible and expressible in a computer or some other theorized cybernetic system, such as sentient AI.
You heard it from Kurzweil: “Information defines your personality, your memories, your skills. And it really is information.” Saying people are amalgams of personality, memories, and skills isn’t particularly objectionable, but the problem is that Kurzweil bypasses how much freight each of those words carries. He reduces them all to the category of “information.” That’s a stupefying oversimplification.
The freight words carry is central to Lanier’s critique of cybernetic totalism, a belief system that arises from the metaphors with which technologists think. In “One Half a Manifesto,” he suggests this belief system emerges from an evolutionary metaphor which casts biological organisms as vessels of genetic information.1
It’s not that organisms can’t be seen this way. The problem, Lanier argues, is that cybernetic totalists forget it’s only a metaphor, not a comprehensive explanation. They “totalize” mind, subjectivity, and experience as merely the emergent properties of nature’s information system. As such, anything transcending information must be non-essential, irrelevant, worthless.
The concept of the soul is then no longer viable.
Here, at mention of the soul, we must ask, “Whose perspective is the most religious?”
I’ll tackle that question in part 2, “Spiritual Suicide.”
One is welcome to comment.
1. Religious apologists, in particular creationists, commonly abuse the analogy between DNA and computer code to attack mutation, which they misinterpret as “an error in the code.”
I tend to agree with your sentiments here. The prospect of the singularity scares me, though I’m less convinced that we’re approaching it than I was a year ago when I was certain it could happen at literally any moment.
I’m more inclined to believe now that all of the fear mongering around AI in the past 12 months or so has been a hype game played in order to drum up funding. I don’t think anybody — even the experts — understands AI well enough to predict the singularity. They might guess and be right, but I might guess tomorrow’s weather too.
I will say that contemplating the potential nearness of AGI has been enlightening. I thought I was in favor of it until I saw it lurking around the corner. “Anything to get us out of this situation we’re in,” I thought, gesturing vaguely. “Anything but this.”
But really what I want is a revolution in favor of humanity. In favor of real human relationships, of time spent out in the world and not stuck behind a desk or glued to a screen. Of art and discourse and creativity — all of the things AI has proven best at robbing us of so far.
Kurzweil seems so obsessed with meeting his dead father again that he’s forgotten that there’s a whole world of people living finite lives that he’s missing out on.