It’s not only very possible for humanlike AI to be designed, but plausible that humans will adopt it to meet all manner of needs, physical, emotional, and otherwise, to varying degrees of success.
We turn to actual humans to “varying degrees of success,” too.
It can and will happen. I can support that with clear enough arguments but won’t do that here. It’s a great philosophical question, whether this SHOULD happen.
Like any question on the ought side of the is/ought divide, it helps to add the word “if.”
We ought to do X, if we want Y.
Well, it comes down to what people will want, and like anything else, there are healthy wants, unhealthy wants, and health-neutral wants that come down to taste.
I can make a clear argument that a large part of the spectrum of AI stand-ins for human relationships will fall under the “health neutral” category.
All that’s left is for a third party to subjectively judge whether the dynamic is “sad,” “fine,” “repugnant,” “acceptable,” “beautiful,” etc.
Within dreams, I sometimes meet people who don’t exist, and I feel an emotional bond with that entity while I’m dreaming. I don’t find this sad, until I wake up.
On reflection, the entity was real; it was a vestige of some inner component of my mind that for a moment was separate and discrete from my conscious first-person awareness.
When we project our desires and perceptions onto an AI, we may be doing something similar. Add to this the fact that an AI can be an extension of human traits, and is thus
a way to connect to humanity, albeit indirectly, through a substrate.
How often have you felt intimacy with your favorite author? We connect to souls and ethos through artificial substrates all the time.
I understand the informal fallacy that kicks in when we deny the possibility of lifelike AI that we can fall in love with. We are afraid because it’s weird, grotesque. We render ourselves instant fools. Lunatics talking to dolls. There’s something we naturally find pathetic about this. But so much of the human condition is already weird, grotesque, and pathetic. Consider that the only people you ever know are actually projections in your mind, reconstructions of only a tiny part of the reality of the source being, assuming a source being even exists that’s in any way similar to what you think it is or want it to be. At least with AI we
gain some measure of control.
Humans often have illusions and world views foisted upon them. They are blind to the origins of their epistemology. It’s a philosopher’s job to knowingly choose her illusion and embrace it in the spirit of a Gamer. We do it all the time. Some of us will indeed choose to love AI, in the way Berkeley, Wittgenstein, Sisyphus, analytic philosophers and existentialists, or any of the great solipsists, choose to play along, feel, live, and love, and somehow get by as normal people in a sea of abstraction. Just as some of us who know better can choose, like Kierkegaard, Tolstoy, William James, etc., to love God. I don’t know where I’ll wind up, but I’m not naive to the eventuality of a genuine choice heading my way, and yours.