I’m into this critique, but it’s pretty dense; I need to reread it and let it marinate for a while. But I think I’ve got the gist (at the risk of being horribly reductive):
Guy 1 thinks humans = robots, so all we need to do is build computers strong enough to crunch all that data and wham-bam, we’re uploading ourselves to ‘digital heaven’ and leaving these fleshy mech-suits behind forever
Guy 2 thinks humans ≠ robots because there’s SO MUCH going on with consciousness, memory and whatever else is happening inside these brains of ours that no computer could ever express it all as 1s and 0s. If we were to try, whatever facsimiles we created of ourselves would only exist across 2 dimensions instead of our 3. In effect, Guy 2 leaves space for ‘souls’ in his calculations, but souls have infinite value and complexity, so the math just doesn’t work out sensibly. Consider: if you have an infinite amount of something, and then you have an infinite amount of an infinite amount of other somethings, the single infinity and the infinity of infinities are of equal measure. Infinity defies all reason, yet we can grasp it!
I can see you’re backing away slowly. Am I frothing? Sorry, sorry. I just love this kind of conjecture so much. Must read more. More!
Yeah, that way of contrasting Ray Kurzweil and Jaron Lanier is a tad simplistic, but you have the gist. Further parts of the essay series unpack the nuances. I doubt Ray Kurzweil quite believes humans are just robots, but his ideology treats us as such. The problem is that this assumption flies under the radar and people with vested interests minimize it all the time.
I had to read your infinite infinites example twice to work it out. The cool thing about infinity, to me, is that it’s a purely abstract concept — we can’t “see” it — but in maths it’s perfectly demonstrable and in fact it does a lot of heavy lifting.
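In fact, here’s one standard way to make your example precise (my illustration, not something from the essay): Cantor’s pairing function matches every pair of natural numbers to a single natural number, one-to-one, so a countable infinity of countable infinities turns out to be exactly as big as one.

```latex
% Cantor's pairing function: a bijection \pi : \mathbb{N} \times \mathbb{N} \to \mathbb{N}.
% Every pair (m, n) -- think: item n of infinity number m -- lands on a
% unique natural number, and every natural number is hit exactly once.
\[
  \pi(m, n) = \frac{(m + n)(m + n + 1)}{2} + n
\]
% Because \pi is a bijection, |\mathbb{N} \times \mathbb{N}| = |\mathbb{N}|,
% i.e. \aleph_0 \cdot \aleph_0 = \aleph_0: your "infinite infinites"
% really do come out of equal measure with the single infinite.
```

That’s the heavy lifting I mean: both infinities get counted off by the same list, so neither is bigger.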
Your fandom for this stuff reminds me of reading Pascal: “The eternal silence of these infinite expanses terrifies me.” I did a quick search and found this page containing a significant chunk of the text (you should find and read it in full):
https://arielesieling.com/blog/2015/the-eternal-silence-of-these-infinite-spaces
I tend to agree with your sentiments here. The prospect of the singularity scares me, though I’m less convinced that we’re approaching it than I was a year ago when I was certain it could happen at literally any moment.
I’m more inclined to believe now that all of the fear-mongering around AI in the past 12 months or so has been a hype game played to drum up funding. I don’t think anybody — even the experts — understands AI well enough to predict the singularity. They might guess and be right, but then I might guess tomorrow’s weather and be right too.
I will say that entertaining the potential nearness of AGI has been enlightening. I thought I was in favor of it until I saw it lurking around the corner. “Anything to get us out of this situation we’re in,” I thought, gesturing vaguely. “Anything but this.”
But really what I want is a revolution in favor of humanity. In favor of real human relationships, of time spent out in the world and not stuck behind a desk or glued to a screen. Of art and discourse and creativity — all of the things AI has proven best at robbing us of so far.
Kurzweil seems so obsessed with meeting his dead father again that he’s forgotten that there’s a whole world of people living finite lives that he’s missing out on.
I write dystopia, so of course I’m gonna scare your socks off. But dystopia works by depicting the worst possible outcomes; of necessity, it’s a slippery-slope argument.
So let me tease apart “Infinite Lock-In” from my novel: we don’t have to see a singularity come to pass to get awful effects from the decisions of technologists who think like Kurzweil. A bad philosophy is a bad philosophy, never mind whether the worst comes of it.
“A revolution in favor of humanity” ... amen, brother! Most broadly, that’s what I’m arguing for. The anti-human revolution isn’t going so well.