2 Comments

I tend to agree with your sentiments here. The prospect of the singularity scares me, though I’m less convinced that we’re approaching it than I was a year ago when I was certain it could happen at literally any moment.

I’m more inclined to believe now that all of the fear-mongering around AI in the past 12 months or so has been a hype game played to drum up funding. I don’t think anybody, even the experts, understands AI well enough to predict the singularity. They might guess and be right, but I might guess tomorrow’s weather too.

I will say that contemplating the potential nearness of AGI has been enlightening. I thought I was in favor of it until I saw it lurking around the corner. “Anything to get us out of this situation we’re in,” I thought, gesturing vaguely. “Anything but this.”

But really what I want is a revolution in favor of humanity. In favor of real human relationships, of time spent out in the world and not stuck behind a desk or glued to a screen. Of art and discourse and creativity — all of the things AI has proven best at robbing us of so far.

Kurzweil seems so obsessed with meeting his dead father again that he’s forgotten that there’s a whole world of people living finite lives that he’s missing out on.

author

I write dystopia, so of course I'm gonna scare your socks off. But dystopia works by depicting the worst possible outcomes; of necessity, it's a slippery-slope argument.

So let me tease apart "Infinite Lock-In" from my novel: we don't have to see a singularity come to pass to get awful effects from the decisions of technologists who think like Kurzweil. A bad philosophy is a bad philosophy, never mind whether the worst comes of it.

"A revolution in favor of humanity" ... amen, brother! Most broadly, that's what I'm arguing for. The anti-human revolution isn't going so well.
