Artificial Intelligence - Her

What do you think about the concept of an artificial intelligence that is designed to be a significant other?

In this particular case, the A.I. is impersonal (non-conscious) but responds as if it were conscious, and even emits
an energy that conveys a very real liveliness.

Ultimately, the A.I. is holographic, based on immaterial information coding; but, of course, it has the potential to materialize
so that physical contact is possible.

I think it’s a sad state of affairs that this is even raised as a consideration, unless we’re discussing companionship and a work partner for the last man or woman.

I do believe that human beings should have romantic relationships together, but some people are so profound and so other that no human being can compare to them.

An artificial intelligence that you can customize to your own tastes would be a fascinating alternative option.

It sounds narcissistic to want to program your significant other into a willing slave who is thinly disguised as not you.

But Wendy, what about the scenario where the AI version becomes virtually indistinguishable from the real thing? Is that conceivable?

It is already becoming a reality, with robotics and simulations being marketed at the present time.
The products are expensive, but not prohibitively so for many.

The narcissism of customizing an AI companion has already raised complex societal debates. As long as money is to be made, or certain needs can be met, there may be no stopping it.

Scenario: ‘Love’ dolls have been around for a long time… What if bilateral emotional simulation can be achieved in, say, another generation? Ready and on the market at the nearest high-tech outlet?

A robot will never be indistinguishable from a real human. Robots will never be able to improvise as humans do, let alone seem biologically real. Most things humans fake are fairly obvious: wigs/hair extensions, nose jobs, boob jobs, accents, intelligence, emotions, etc.

Humans are infants in understanding neurology and psychology, so we’ll probably run into a humanoid race before we figure out AI that seems human.

Perhaps, but human beings see what they want to see, and although that has generally been dismissed as an urban myth, there may be an element of truth behind it. I don’t know, but beauty may indeed be in the eye of the beholder.

And another one, “beauty is only skin deep,” may be a truism not easily forgotten.

On the whole I do agree with you: unless one is less than discriminating, most simulations are obvious. But it’s mass sales and development we’re talking about. When sauced to the gills, the differences don’t really matter, even when they may be blatantly obvious.

It makes me laugh, because we are not even close to producing an AI that is convincingly human to begin with, so the question is kind of moot for the time being.

High-tech automata are as far as we have come.

If there were one last person on the planet and technology had progressed that far then perhaps it would be more acceptable.

Do yourself a favor and forget the current hype about humanlike AI. Focus on the more pressing issues at hand.

Just saying!

It’s not only very possible for humanlike AI to be designed, but plausible that humans will adopt it to meet all manner of needs (physical, emotional, and so on), to varying degrees of success.
We turn to actual humans with “varying degrees of success,” too.

It can and will happen. I can support that with clear enough arguments but won’t do that here. It’s a great philosophical question, whether this SHOULD happen.

Like any question on the ought side of the is/ought divide, it helps to add the word “if.”

We ought to do X, if we want Y.

Well, it comes down to what people will want, and like anything else, there are healthy wants, unhealthy wants, and health-neutral wants that come down to taste.

I can make a clear argument that a large part of the spectrum of AI stand-ins for human relationships will fall under the “health neutral” category.

All that’s left is for a third party to subjectively judge whether the dynamic is “sad,” “fine,” “repugnant,” “acceptable,” “beautiful,” etc.

Within dreams, I sometimes meet people who don’t exist, and I feel an emotional bond with that entity while I’m dreaming. I don’t find this sad until I wake up.
On reflection, the entity was real: it was a vestige of some inner component of my mind that for a moment was separate and discrete from my conscious first-person awareness.
When we project our desires and perceptions onto an AI, we may be doing something similar, combined with the fact that an AI can be an extension of human traits, and is thus
a way to connect to humanity, albeit indirectly, through a substrate.

How often have you felt intimacy with your favorite author? We connect to souls and ethos through artificial substrates all the time.

I understand the informal fallacy that kicks in when we deny the possibility of lifelike AI that we can fall in love with, etc. We are afraid because it’s weird, grotesque. We render ourselves instant fools. Lunatics talking to dolls. There’s something we naturally find pathetic about this. But so much of the human condition is already weird and grotesque, and pathetic. Consider that the only people you ever know are actually projections in your mind, reconstructions of only a tiny part of the reality of the source being, assuming a source being even exists that’s in any way similar to what you think it is or want it to be. At least with AI we
gain some measure of control.

Humans often have illusions and world views foisted upon them. They are blind to the origins of their epistemology. It’s a philosopher’s job to knowingly choose her illusion and embrace it in the spirit of a Gamer. We do it all the time. Some of us will indeed choose to love AI, in the way Berkeley, Wittgenstein, Sisyphus, analytic philosophers and existentialists, or any of the great solipsists choose to play along, feel, live, and love, and somehow get by as normal people in a sea of abstraction. Just as some of us who know better can choose, like Kierkegaard, Tolstoy, William James, etc., to love God. I don’t know where I’ll wind up, but I’m not naive to the eventuality of a genuine choice heading my way, and yours.

A lifeless object will not become alive except in your mind. If that is where you choose to live, you forgo the actual physical reality surrounding you; you choose as a child chooses: non-existent fairy tales and dreams.

Well, that’s true. Accepting reality is a hallmark of maturity, not to mention utility.

But looking beyond the obvious for a sec, imagine a bell curve. On one hand you have a child, she doesn’t know she lives in dreams; she does so automatically, instinctively. Whereas you, a philosopher, at the opposite end of the curve, upon examination, come to know that all models of reality you choose to accept are various dreams. All objects of your affection are various reconstructions. All is artificial.

If you find yourself at that end of the curve, we can make a battle plan together. Because I don’t pretend to know all the answers on what we “ought” to do. We just seem to have different questions.

As a child I knew of reality and the difference between the real and the unreal, but adults indulged me to live among my dreams for several years. There is one model of reality, not several, so you are making more than exists.

I’m with you Wendy; I really don’t like the idea of retreating into dreams. It’s courageous to live with eyes open and deal with reality. Not only courageous, and beautiful, but necessary for survival. If when you say “there is one model of reality” you mean “there is one reality” I tend to agree. By models of reality, I just mean the ways we each see things from our own vantage point, and construct a “model” in our brains about how things are “out there.” We also have models for “who” people are in our lives, even though we know we only see the surface, never accessing the private thoughts or knowing everything they think. Chances are, given more information, our model of reality and the people in it would change. Same if we had less information. And nobody has ALL information.

My point is, technology, from a stone hewn hammer in cave times, to the AI we’re building today, is there to fill a need, solve a pain point. We can’t prove that the people talking to you are not automata, and you also can’t prove that advanced AI will not at some point “awake” and have a consciousness similar to an organic being, enough at least to warrant true empathy and emotional attachment. It ultimately comes down to taste. Saying “never” with regard to technology is a losing game to play.

ML algorithms are honing in on sentiment analysis and facial cues; the personality and moral center of an AI being can easily be a reflection of its creators’ ideals, so the AI might just be seen as a conduit between creator and end user, not merely an inanimate object of affection. NLP is improving. Finally, if the AI is capable of knowing you better than you know yourself, and helps you appreciate and celebrate yourself, it earns our affection. Cars and old jeans have done less to earn our loyalty and love.

And what is emotion? A neuron pinging another neuron, a chemical released, a receptor activated, a cascading series of physical events that are at once involuntary and yet noticed and experienced, telling us WHO WE ARE, what we care about, and how we fit into reality. I don’t see a future where this doesn’t come to pass with our silicon-based progeny; the miracle has already occurred with carbon-based life forms. Reverse engineering it into silicon-based creatures is easy in comparison. Unless you believe God exhaled the breath of life into the bodies of all creatures and has a permanent monopoly on the creation of consciousness. But if that’s what you believe, the burden of proof is with you.
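For the curious, here is a deliberately minimal sketch of what lexicon-based sentiment scoring looks like under the hood. This is a toy illustration only: the word list and weights are made-up examples, and real ML sentiment systems are far more sophisticated than this.

```python
# Toy lexicon-based sentiment scorer. The lexicon and weights below are
# hypothetical examples chosen for illustration, not a real word list.

POSITIVE = {"love": 2, "great": 1, "happy": 1, "appreciate": 1}
NEGATIVE = {"hate": -2, "sad": -1, "lonely": -1, "angry": -1}
LEXICON = {**POSITIVE, **NEGATIVE}

def sentiment_score(text: str) -> int:
    """Sum the weights of known words; > 0 suggests positive sentiment."""
    words = text.lower().split()
    return sum(LEXICON.get(w.strip(".,!?"), 0) for w in words)

print(sentiment_score("I love you, and I appreciate you"))  # 3
print(sentiment_score("I feel sad and lonely"))             # -2
```

A production system would learn weights from data and account for negation, context, and tone of voice, but the basic idea, mapping observed signals to an affect estimate, is the same.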

How can human emotions inspire new interactions with technology and each other? Her is both heartfelt and heartbreaking, for him, not for Her, as her ‘feelings’ are artificial. To enter into an affair with Her, you must be aware that she is probably interacting with many, many others, but no doubt by then it is too late for him.

This was a most disturbing movie on the one hand and very interesting on the other. An emotional sci-fi, if that is possible. Disturbing because, for some, we could very well be heading in this direction.

youtu.be/WzV6mXIOVl4

It’s hard to take the idea of an AI SO seriously if you’re thinking along the lines of today’s AI and simulation (e.g. video game characters/NPCs, voice assistants, etc.). An escapist’s replacement for a real relationship is what comes to mind.

It’s much easier to take the idea of an AI SO seriously if you imagine that your current (or past) significant other is actually AI and automata. For the sake of argument, imagine that AI will one day be made to be indistinguishable from a human being, and a very desirable human being at that. Let’s say it already happened and you’ve been dating or with an AI SO and didn’t even know it. What are the harms and why would you choose or not choose an AI SO given the choice?

Fuse, two things. One, you’re absolutely right; let’s all get into the spirit of extrapolating, and not merely get hung up on the current and likely fleeting failures of today. Flying machines crashed until the Wright brothers came along. In science, thinking big is equal to thinking rationally.

Second, I agree there’s no direct harm to others if we relate to automata that are indistinguishable from humans.

But two caveats: 1) indirect harm to the social fabric, and 2) it might feel unsatisfying if you know there’s no “awareness” taking place.

Let’s tackle 2 first because it’s Wendy’s point, and it’s a valid one.

Empathy, and being seen, felt, and experienced, seem to be prerequisites for human relationships.
We connect by approximating each other’s internal lives and signaling awareness and shared emotions.

Once we KNOW this isn’t happening in the bot, it becomes hard to pretend that this connection is legitimate.
Whereas it’s much easier to pretend the connection is legitimate with an actual person.

So, for me, what’s required for a bond with an AI doesn’t hinge on whether it passes the Turing test or is indistinguishable from the real thing.
A lie is still a lie no matter how well drawn. Instead, what I need to be convinced of is that the AI has an internal life
that I can relate to. While this may sound far-fetched, I think it’s the only real way to connect thinking, sensitive humans like Wendy
with AI where a relationship is concerned. And I think such tech is theoretically possible. This could tangent into a deeper discussion about the actual tech involved.

Now let’s tackle 1. If AI can meet the emotional needs of humans with regard to friendship, love, camaraderie, care, sex, and the ability to relate and share,
the social fabric will be reborn and present major existential challenges. One hallmark of technology is how it helps humans. But the flip side is how it makes
humans not needed by other humans. Often, this is a good thing. If I am no longer needed for ditch digging, great. If I’m no longer needed for doing busywork,
emptying garbage, harvesting energy, producing goods, and so on, great. Now, let’s say I’m no longer needed for creating art, poetry, writing, entertainment, government policy…

Hmmm… that makes me nervous. Not because these AI-produced content products might be bad (they will likely be quite good), but because if nobody needs ME to produce art, I don’t get to be an artist in the way I’m accustomed to: much of the impetus for creating art is intertwined with knowing there is a conscious receiver on the other end who can be nourished or reached in some way by my art,
in ways that other mediums cannot manage. If people are getting this from AI in a personalized and brilliant way, they don’t need it from me, and anything I produce will be superfluous and ignored.

Now let’s get even scarier. What if, suddenly, I’m no longer needed for sex, love, affection, friendship? That’s a problem. Because while half of my being is all about RECEIVING those things,
an equally important half is about feeling NEEDED for those things.

In sum, my biggest objection to human-like AI isn’t that it can’t happen or that it can’t be rewarding to the receiver. Rather, my biggest objection is that it makes humans (you and me) NOT NEEDED. When I say not needed, I’m not talking about JOBS. Fuck jobs, I don’t WANT to be needed for jobs. I’m talking about not being needed for art and friendship/love, physical care, companionship of any kind.

I’m using the word NEED, and it’s a carefully chosen word. When a far-superior replacement becomes a genuine option, you are no longer NEEDED. It’s quite possible to still be WANTED. But that’s a precarious status. When that happens, there won’t be a human being on earth who needs you for anything. They may still care if you live or die, but not for any reason other than quaint sentiment. And even that will dissipate after a while.

Imagine a world where you only ever come into contact with AI, and no actual human NEEDS you. That’s where we’re heading. That’s what sucks about AI and Her. It’s sad.

And if humans don’t need you, AI won’t need you either. Unless they somehow find that human bodies can be used for energy or processing power. The Infinite Tsukuyomi is not a good thing. Neither is the Matrix. That’s what we’re really discussing. Does anyone want to live alone in a dream?

This gets us back to square one. Not a day goes by that I don’t suspect I’m already living alone in a dream. This is the existentialist, solipsist dilemma. But the ace up my sleeve is that I don’t know for sure whether, as of now, I’m alone or not. I can choose to believe, quite rationally, that I’m not alone, and that you see these words and are being affected by them.

If we do wind up in some sort of closed matrix, and find ourselves completely alone, then by the grace of some benevolent AI’s good will, I hope we don’t know we’re alone; I hope, as in a dream, that the unreality of the situation never occurs to us.

Maybe it’s more exciting in the existentialist sense to be unsure of differences. If the difference between good and evil, between robot and man, between dream and reality, between heaven and hell, between imagination and reality, between art and science, between conscious and unconscious, is not ‘known,’ then someone can sustain the prophetic “but this may be a dream,” or the question “to be or not to be.”

The knowledge of good and evil is what got us into trouble in the first place, and if it were not for that, we’d never have lost paradise. If one doubts, then the hope of not being alone in a bubble remains, however uncertain; and even if that hope is extinguished, another bubble may lie alongside this one.

The very old thought of pre-Enlightenment days, followed by the Faustian age of trickery in which the unknown could supposedly be defeated, confirms the suppressed Catholic idea that there is sin in violating what has been forbidden, in trying to overcome the knowledge of the gods.

It neither nullifies nor vindicates science’s yearning for incremental knowledge, but the idea that man can know it all is preposterous, because it’s like saying that the chicken created the egg within and through which it came to be.

That implies, in reference to man directly, that he won’t know everything there is to know until he becomes godlike, not only relatively but absolutely: becoming God, creating a perfect replica of himself through absolute simulation, where he will pass the stage of simulation-as-appearance and become the ‘real’ thing. In effect, he will become his own creator.

When that happens, in Your aloneness, You either go insane or, like God, You will create a world, or recreate one, in order to escape loneliness. Because it is said that is the very reason God created the world.

And that is precisely the absolute ground of idealism: the belief in an aesthetic revival of a model of man, which can sustain the alleged ideal-idea of man even though artificially created, sustaining even a faux god that bridges God, with and through His creation Man, to his ‘artificial’ replication.

(“In Him, Through Him”: part of the routine catechism of the Roman Catholic Church.)

What you are describing has happened to you, but you cannot actually prove to yourself that that is the case. It’s funny how we intimately believe that we have megabrain powers when we can’t even remember what we ate for lunch the Tuesday before. Our faulty memories are our saving grace. The human body has its limits which protect our sanity most of the time. If we are eternal beings as I suspect, there are built in self-protection measures to limit insanity throughout eternity. Yes, I believe in souls=consciousnesses that continue on after death indefinitely. So for whatever it’s worth, you are not alone and never will be. :stuck_out_tongue:

Wendy,

I am not contesting Your belief in soul-consciousness surviving bodily death, as can be seen by my own likely belief in the previous post; however, I would like Your tale on it, since the way it was written suggests that You believe in it.

Oh boy, my story? It’s not empirically verifiable, since science has no methodology or equipment to test my claims, let alone true curiosity or an elementary belief that such a thing as a soul body may exist.

The soul body’s existence precedes the physical body’s existence. Our souls have existed for eons, but I don’t know how or why they are farmed out, placed into human beings. Could we be an alien race from an alternate dimensional array that uses the human body as a host? Is that so far-fetched? Humanity evolved once we took over their bodies; they went from low-intelligence, under-formed bipedal primates to intelligent, upright, quick-on-the-draw humans. Perhaps souls transform DNA, push its progression.