
Learning is pain: AI is bunk

CONTENT WARNING (LANGUAGE, TORTURE)
888 words by attila written on 2010-05-10, last edit: 2023-01-25, tags: ai, grift, humanity


(Originally published on an old abandoned blog when I still believed in WordPress. Copied here 2023-01-20)

Pain is an integral and oft-ignored component of human learning and interaction. When we fuck up, it hurts. When it hurts, we learn. What we learn is not merely the bit of information that was supposedly on the table. In fact, we may fail to learn anything along those lines and still be learning something else, like: how unpleasant it feels to screw up badly in front of a room of people, or: the damp nervousness of waiting for your turn, going over your routine, sketch, talk or piece in your head, visualizing not fucking up.

Human learning, language and thought are inextricably bound up with the various modes in which we experience pain: guilt, regret, embarrassment, shame, anger, frustration, abandonment, betrayal, insecurity. That's what makes the world go 'round. Facts and information are just grist for the mill.

This does not make us more effective than machines at learning. It does not make us smarter. It does not make us more efficient. It only makes us human. I would not argue that being human is the end-all, be-all of sentience, but it happens to be the little corner of it that I know best, and the cool, rational ideal of artificial "consciousness" that somehow springs into being without any capacity for pain is both laughable and terrifying from where I'm sitting. It might be able to balance the books or play the piano, but it is incapable of understanding the consequences when it fails to do either as we had hoped.

In fact, that is precisely the rub: there are no consequences for a computer when it fucks up. None. How could there be? There's no "who" there, nothing to blame. Now, the people who wrote the software will pay out the ass when their precious creation fires an air-to-ground missile at the wrong target.

This precious creation will not be "intelligent". You would not want to convince it to go to dinner with you or engage it in an argument about politics or religion or any of the other subjects that people like to talk about because they are vast, structured repositories of pain and aggression. Artificial smarts are absolutely nothing like that, and people who try to pretend otherwise must've been raised by wire mothers or something. So smart, yet so completely clueless.

Here's what you'll get: a machine or collection of machines that imitates "smarts" in the aggregate to a high degree, so long as it is confined to a specific topic area, and that isn't capable of discovering anything really new or of feeling regret when it fails to do what it should...

... which is pretty much what we've already had for the last three decades that the AI people have been beating their sorry little drum. "Real" AI is "only" 20 years away because of Moore's Law, or some other impressive-sounding processing-power times storage-density kind of argument.

Yet "real" AI has been only 20 years away for at least 30 years now, and hasn't gotten a tiny bit closer. Even approximating true human performance means you need a machine that you can blame, or else it is all smoke and mirrors. Ignoring all the other issues there is the little matter of trust: we trust people who do difficult and dangerous jobs because of the implied responsibility they take on as a result, and the idea that people who do such things are comfortable taking on that responsibility because they know what it means. It means that when they crash the bus they get sued or, worse, when they crash the plane they die. Nobody is going to want to get on a plane piloted by something that cannot be held accountable for crashing it, and that won't die with them when it does (and it will eventually).

Alright, so let's just suspend disbelief and suppose you had a machine you could blame, a mechanical creation truly capable of feeling pain and taking responsibility, legal or otherwise. How would you use it? More importantly, how would you teach it? Well, naturally, the way you teach all such things: with a stick and a carrot. Good computer gets the carrot. Bad computer gets the stick.

Of course, the same stick you could use to teach it could just as easily be used to torture it instead. There are plenty of sick fucks out there with deep pockets who would pay a lot of money for a computer that could truly feel pain, just to keep it in their "play" rooms for a little moral exercise every now and then... you know, when there isn't a fleshly pain receptacle handy.

Naturally, if you screwed it up too badly you'd have to reset it and start over; sentient beings who have been subjected to too much pain, too arbitrarily, too frequently, tend to go nuts and lose their usefulness (unless that's what you wanted them for in the first place).

Wow, a machine that can learn, feel pain, be tortured and finally reset to a ground state to start over. THAT sounds like a winner.

