Rachel Neumeier

Fantasy and Young Adult Fantasy Author


Personally, I suspect true AI is impossible

Which may just go to show that I also find the notion of true AI kinda disturbing.

Nevertheless, here is a post by Jim Henley at Unqualified Offerings: Fermi Conundrum Redux: The Singularity as Great Big Zero?

So if Strong AI civilizations exist, they should be here too. And they’re not. Which has to make one suspect they don’t exist and can’t. One possible answer to Fermi’s Conundrum has always been that organic intelligence has evolved, but that interstellar travel just isn’t practical for biological life forms. But if biological life forms can develop Strong AI, then that screen shouldn’t hold. So if there are other organic intelligences in the universe – we of course can’t know – then Strong AI may not be possible at all.

Which, as I say, is basically fine with me.

But the bit of the post that actually caught my eye was here (bold mine):

[S]o maybe no one will demand answers to the question of On what basis can you say Strong AI is impossible since human intelligence is just computation huh huh huh? But if they did, my answer would be that I have no idea but so what? Before one can say why no humans are ever born who float free of the Earth like talking balloons, we first notice that, in fact, no humans are ever born this way. In the case of a Fermi-type conundrum, we first notice the lack of things we should see if they exist, and only afterward turn to questions of why they might not exist. But first we notice the lack of things.

Because I was all: Wait, is anybody out there actually maintaining as a serious thing that human intelligence is just computation?

Wow. Okay, buddy, keep working on that purely computational artificial intelligence project of yours. Let me know how that works out for you.


2 Comments

  1. Craig

    Henley has a point, and it’s a pretty general one. It applies to (most types of) human-scale AIs as well as the more disturbing post-Singularity superintelligences. Furthermore, it even applies to AIs built on unknown principles, as long as they’re principles discoverable by beings on our level, so the computation thing isn’t particularly relevant.

    (And yes, there’s no shortage of people who believe that human intelligence is just algorithmic computation.)

  2. Rachel

I can see that Henley’s argument is persuasive.

    I was mostly startled by the computational thing. I had no idea. Though I guess it’s not entirely surprising. Psychologists and behaviorists are sometimes drawn to reductionist theories of cognition too, and if they can’t see how plainly wrong that is, why should AI enthusiasts be any different?
