Fuse, two things. One, you're absolutely right; let's all get into the spirit of extrapolating, and not merely get hung up on the current and likely fleeting failures of today. Flying machines crashed until the Wright brothers came along. In science, thinking big is thinking rationally.
Two, I agree there's no direct harm to others if we relate to an automaton that's indistinguishable from a human.
But two caveats: 1) there's indirect harm to the social fabric, and 2) it might feel unsatisfying if you know there's no "awareness" taking place.
Let’s tackle 2 first because it’s Wendy’s point, and it’s a valid one.
Empathy, and being seen, felt, and experienced, seem to be prerequisites for human relationships.
We connect by approximating each other's internal lives and signaling awareness and shared emotions.
Once we KNOW this isn’t happening in the bot, it becomes hard to pretend that this connection is legitimate.
Whereas it’s much easier to pretend the connection is legitimate with an actual person.
So, for me, what's required for a bond with an AI doesn't hinge on whether it passes the Turing test or is indistinguishable from the real thing.
A lie is still a lie no matter how well-drawn. Instead, what I need to be convinced of is that the AI has an internal life
that I can relate to. While this may sound far-fetched, I think it's the only real way to connect thinking, sensitive humans like Wendy
with AI in a relationship. And I think such tech is theoretically possible. That could segue into a deeper discussion about the actual tech involved.
Now let’s tackle 1. If AI can meet the emotional needs of humans with regard to friendship, love, camaraderie, care, sex, and the ability to relate and share,
the social fabric will be remade, and that will present major existential challenges. One hallmark of technology is how it helps humans. But the flip side is how it makes
humans not needed by other humans. Often, this is a good thing. If I am no longer needed for ditch digging, great. If I'm no longer needed for doing busy work,
emptying garbage, harvesting energy, producing goods, and so on, great. Now, let's say I'm no longer needed for creating art, poetry, writing, entertainment, government policy…
Hmmm… that makes me nervous. Not because this AI-produced content might be bad (it will likely be quite good) but because if nobody needs ME to produce art, I don't get to be an artist in the way I'm accustomed to. Much of the impetus for creating art is intertwined with knowing there's a conscious receiver on the other end who can be nourished or reached in some way by my art,
in ways that other mediums can't manage. If people are getting this from AI in a personalized and brilliant way, they don't need it from me, and anything I produce will be superfluous and ignored.
Now let’s get even scarier. What if, suddenly, I’m no longer needed for sex, love, affection, friendship? That’s a problem. Because while half of my being is all about RECEIVING those things,
an equally important half is about feeling NEEDED for those things.
In sum, my biggest objection to human-like AI isn't that it can't happen or that it can't be rewarding to the receiver. Rather, my biggest objection is that it makes humans (you and me) NOT NEEDED. And when I say not needed, I'm not talking about JOBS. Fuck jobs, I don't WANT to be needed for jobs. I'm talking about not being needed for art, friendship, love, physical care, companionship of any kind.
I'm using the word NEED, and it's a carefully chosen word. When a far-superior replacement becomes a genuine option, you are no longer NEEDED. It's quite possible to still be WANTED, but that's a precarious status. When that happens, there won't be a human being on earth who needs you for anything. They may still care whether you live or die, but for no reason other than quaint sentiment. And even that will dissipate after a while.
Imagine a world where you only ever come into contact with AI, and no actual human NEEDS you. That’s where we’re heading. That’s what sucks about AI and Her. It’s sad.
And if humans don't need you, AI won't need you either, unless it somehow finds that human bodies can be used for energy or processing power. The Infinite Tsukuyomi is not a good thing. Neither is the Matrix. That's what we're really discussing. Does anyone want to live alone in a dream?
This gets us back to square one. Not a day goes by that I don’t suspect I’m already living alone in a dream. This is the existentialist, solipsist dilemma. But the ace up my sleeve is that I don’t know for sure whether, as of now, I’m alone or not. I can choose to believe, quite rationally, that I’m not alone, and that you see these words and are being affected by them.
If we do wind up in some sort of closed matrix and find ourselves completely alone, then by the grace of some benevolent AI's goodwill, I hope we don't know we're alone; I hope, as in a dream, that the unreality of the situation never occurs to us.