Artificial Intelligence - Her

Fuse, two things. One, you’re absolutely right; let’s all get into the spirit of extrapolating, and not merely get hung up on the current and likely fleeting failures of today. Flying machines crashed until the Wright brothers came along. In science, thinking big is equal to thinking rationally.

Second thing, I agree there’s no direct harm to others if we relate to an automaton that’s indistinguishable from a human.

But two concerns: 1) indirect harm to the social fabric, and 2) it might feel unsatisfying if you know there’s no “awareness” taking place.

Let’s tackle 2 first because it’s Wendy’s point, and it’s a valid one.

Empathy, and being seen, felt, and experienced, seem to be prerequisites for human relationships.
We connect by approximating each other’s internal lives and signaling awareness and shared emotions.

Once we KNOW this isn’t happening in the bot, it becomes hard to pretend that this connection is legitimate.
Whereas it’s much easier to pretend the connection is legitimate with an actual person.

So, for me, what’s required for a bond with an AI doesn’t hinge on whether it passes the Turing test or is indistinguishable from the real thing.
A lie is still a lie no matter how well-drawn. Instead, what I need to be convinced of is that the AI has an internal life
that I can relate to. While this may sound far-fetched, I think it’s the only real way to connect thinking, sensitive humans like Wendy
with AI in the context of a relationship. And I think such tech is theoretically possible. This could tangent into a deeper discussion about the actual tech involved.

Now let’s tackle 1. If AI can meet the emotional needs of humans with regard to friendship, love, camaraderie, care, sex, and the ability to relate and share,
the social fabric will be reborn, and that rebirth will present major existential challenges. One hallmark of technology is how it helps humans. But the flip side is how it makes
humans not needed by other humans. Often, this is a good thing. If I am no longer needed for ditch digging, great. If I’m no longer needed for doing busy work,
emptying garbage, harvesting energy, producing goods, and so on, great. Now, let’s say I’m no longer needed for creating art, poetry, writing, entertainment, govt policy…

Hmmm… that makes me nervous. Not because these AI-produced content products might be bad – they will likely be quite good – but because if nobody needs ME to produce art, I don’t get to be an artist in the way I’m accustomed to: much of the impetus for creating art is intertwined with knowing you have a conscious receiver on the other end who can be nourished or reached in some way by your art,
in ways that other mediums can’t manage. If people are getting this from AI in a personalized and brilliant way, they don’t need it from me, and anything I produce will be superfluous and ignored.

Now let’s get even scarier. What if, suddenly, I’m no longer needed for sex, love, affection, friendship? That’s a problem. Because while half of my being is all about RECEIVING those things,
an equally important half is about feeling NEEDED for those things.

In sum, my biggest objection to human-like AI isn’t that it can’t happen or that it can’t be rewarding to the receiver. Rather, my biggest objection is that it makes humans (you and me) NOT NEEDED. When I say not needed, I’m not talking about JOBS. Fuck jobs, I don’t WANT to be needed for jobs. I’m talking about not being needed for art, friendship/love, physical care, companionship of any kind.

I’m using the word NEED, and it’s a carefully chosen word. When a far-superior replacement becomes a genuine option, you are no longer NEEDED. It’s quite possible to still be WANTED. But that’s a precarious status. When that happens, there won’t be a human being on earth who needs you for anything. They may still care if you live or die, but not for any reason other than quaint sentiment. And even that will dissipate after a while.

Imagine a world where you only ever come into contact with AI, and no actual human NEEDS you. That’s where we’re heading. That’s what sucks about AI and Her. It’s sad.

And if humans don’t need you, AI won’t need you either. Unless they somehow find that human bodies can be used for energy or processing power. The Infinite Tsukuyomi is not a good thing. Neither is the Matrix. That’s what we’re really discussing. Does anyone want to live alone in a dream?

This gets us back to square one. Not a day goes by that I don’t suspect I’m already living alone in a dream. This is the existentialist, solipsist dilemma. But the ace up my sleeve is that I don’t know for sure whether, as of now, I’m alone or not. I can choose to believe, quite rationally, that I’m not alone, and that you see these words and are being affected by them.

If we do wind up in some sort of closed matrix, and find ourselves completely alone, then by the grace of some benevolent AI’s good will, I hope we don’t know we’re alone; I hope, as in a dream, that the unreality of the situation never occurs to us.

Maybe it’s more exciting in the existentialist sense to be unsure of differences. If the knowledge of the difference between good and evil, between robot and man, between dream and reality, between heaven and hell, between imagination and reality, between art and science, between conscious and unconscious, is not ‘known’, then one can still sustain the prophetic ‘what if this be but a dream?’, or the question ‘to be or not to be?’

The knowledge of good and evil is what got us into trouble in the first place; were it not for that, we’d never have lost paradise. If one doubts, then there is a possibility: the hope of not being alone in a bubble is uncertain, but even if that hope is extinguished, another bubble may exist alongside this one.

The very old thought of pre-Enlightenment days, followed by the Faustian age and its trickery that the unknown can be defeated, confirms the suppressed Catholic idea that there is sin in violating what has been forbidden, in trying to overtake the knowledge of the gods.

This neither nullifies nor vindicates science’s yearning for incremental knowledge, but the idea that man can know it all is preposterous, because it’s like saying the chicken created the egg within and through which it came to be.

That implies, in reference to man directly, that he won’t know everything there is to know until he becomes godlike, not only relatively but absolutely: becoming God, creating a perfect replica of himself through absolute simulation, where he passes beyond the stage of simulation-as-appearance and becomes the ‘real’ thing. In effect, He will become his own creator.

When that happens, in Your aloneness, You either go insane, or, like God, You will create a world, or recreate one, in order to escape loneliness. Because it is said that this is the very reason God created the world.

And that is precisely the absolute ground of idealism: the belief in an aesthetic revival of a model of man, which can sustain the alleged ideal-idea of man even when artificially created, sustaining even a faux god, which bridges God, with and through His creation Man, to his ‘artificial’ replication.

(‘In Him, Through Him’, part of the routine catechism of the Roman Catholic Church.)

What you are describing has happened to you, but you cannot actually prove to yourself that that is the case. It’s funny how we intimately believe that we have megabrain powers when we can’t even remember what we ate for lunch the Tuesday before. Our faulty memories are our saving grace. The human body has limits which protect our sanity most of the time. If we are eternal beings, as I suspect, there are built-in self-protection measures to limit insanity throughout eternity. Yes, I believe in souls, i.e. consciousnesses that continue on after death indefinitely. So for whatever it’s worth, you are not alone and never will be. :stuck_out_tongue:

Wendy,

I am not contesting Your belief in soul-consciousness surviving bodily death, as can be seen from my own likely belief in the previous post; however, I would like Your tale on it, since the way it was written suggests that You believe in it.

Oh boy, my story? It’s not empirically verifiable, since science has no methodology or equipment to test my claims, let alone true curiosity or even an elementary belief that such a thing as a soul body may exist.

The soul body’s existence precedes the physical body’s existence. Our souls have existed for eons, but I don’t know how or why they are farmed out, placed into human beings. Could we be an alien race from an alternate dimensional array that uses the human body as a host body? Is that so far-fetched? Humanity evolved once we took over their bodies; they went from low-intelligence, underformed bipedal primates to intelligent, upright, quick-on-the-draw humans. Perhaps souls transform DNA, pushing its progression.

See, the way I see it, consciousness is some kind of energy, though it may be particular or not. That coincides with karmic Hindu beliefs. So there is some archetypal congruence, since I did not use that as a model.

If Satori is attained in this lifetime, then it is fairly easy to slip into a non-particular, general sense of that energy. I don’t think anyone could contest that, since everything is energy; the within here and the without are not really separate, as it goes, if the concept is anything at all, or it is something by virtue of our senses, thought processes, and psychic phenomenal claims.

As far as I’m concerned, everything is energy of varying types and speeds, agreed. I find truth in all the religions and if all that truth were properly retrieved and combined, there might develop a complete picture of us.

A soul is like a PC hard drive here in this dimension; the brain is like the processor and RAM; the peripherals, such as the keyboard, monitor, and speakers, are like the senses of our fleshy bodies, relaying information from one part to another. In the dimensional array I mentioned, our souls operate outside of our fleshy body without needing it at all; that array is a whole ’nother ballpark.

The question is: for the soul operating outside the realm of the fleshy body, what is its relation to the outside soul qua consciousness, such that the difference legitimizes the distinction made between consciousness and soul?

What makes for the semantic distinction? The American Indians believe in the Big Soul in the sky, wherein this distinctive feature is mitigated: there is no dual aspect between inside and outside the fleshy body. Is this all about semantics?

The point of the computer analogy is well taken, but isn’t that an example of tying up a post-inductive realization with a former undifferentiated archetype? Doesn’t the analogy of using a computer as an extension of the brain directly imply that intelligence and artificial intelligence are not merely analogous (in the sense of hard and soft drive features), but more probably identical aspects of the same thing?

The soul houses consciousness and all memories. Is that what you’re asking? There is no consciousness outside of a soul, as far as we beings are concerned.

Partly, yes. But more to the point: if that’s the case, then why does the content change when it is outside its individual embodied characteristic? For isn’t this the sticking point: that after embodiment, the exact arrangement breaks up and disperses randomly?

I’m not following.

Wendy, Your question invites more of an answer than an off-the-cuff, impromptu, casual remark. So give me some time until tonight. In other words, Your question implies more profundity than my narrative.

In other words: do the different functions of the hard and soft drives bear a correspondingly analogous relationship to the way the brain itself works (to the consciousness that is supposedly pan-conscious), validating the comparison that You brought up?

To give an example from Hinduism: most Brahmins will agree that Satori is an outcome of the effects of Karmic Law.
Satori is achieved when certain conditions are met, particularly with regard to egolessness and loss of identity.

Can we at least venture to inquire about the probability that AI and its construction can take up the slack and continue the process toward de-individuation, so as to achieve Satori during one’s lifetime?

Is Satori remembering one’s essence?

Oh yes, let’s get spirited.

Yup, I don’t think I could do it. Which is why the thought experiment I mentioned hinged on unknowingly carrying on a relationship with an AI. Take someone who’s real(?) and imagine that THEY are AI. It seemed like a good jumping-off point for getting a sense of how non-trivial this issue could be, given that tech is likely to get there with AI on a long enough timeline.

This shit is getting too scary for me, and the spirits are wearing off, man.

I guess when you think about it, we all already know that there are probably more amazing people out there than we’ve met so far, and that the people we’ve met could’ve met someone more amazing than ourselves. We could have more amazing friends, gfs, bfs, etc. And so could our friends. It’s lucky if we get an opportunity with someone who’s “out of our league.” Does the average guy/girl currently know or think, somewhere deep down, that they’re only wanted insofar as they’re available and good enough for someone else? Will AI just take this insecurity to the extreme? Or are we talking about another category of emotional trauma?

Is it inevitable?

I think Satori is a culminating effort between remembering and forgetting various stages of one’s essential nature, where an apex is reached at which neither state can be reassembled nor disassembled, because IT simply becomes: neither forgetting the past, nor reconstructing the future by using the past as a model.

So it is not a reconstruction, nor a deconstruction; it is merely a state of complete understanding, utilizing all levels of the process by which it sustains itself in time.

Understanding what to be what?

Understanding neither the embedded nor the technological information to be exclusively pre-eminent, whether in the development subscribed from past to present or in some post-scribed modeling on a reconstructed paradigm.

It is a many-layered, de facto immanent appraisal based on the most reliable estimates available, mainly irrespective of exclusive content.

The layering of levels may have veritable lapses of connectivity that flow from either the past or the future into the present.

They become more assumptive than derivable in terms of either one or the other, putting the intelligence on more uncertain ground. Quantum intelligence has more of a cut-off from reality as to where it may come from and to what end it may be applicable.

This is the rationale analysts use to say that, yes, such computers have gross power potential, but uncertain applicability.

Not sure about the insecurity part. I get that when the “best version of X” is centralized/personalized, it could lead to a feeling of not being wanted/needed, which is something many of us already feel. A large portion of people are currently needed because of families: children needing parents; we often need those closest to us to fulfill needs, with regard to services that are not centralized. Storytelling and music have been centralized into server-based “best versions,” so we no longer NEED those closest to us, in our homes, village, etc, to provide those things. While we still need the emotional connection of touching, talking, and help with daily challenges from our family members and friends by dint of proximity and physics, these sensations/services could also eventually be centralized or simulated as “best versions.”

In any case, I’ve noticed that with the advent of the Internet and smartphones, I’m needed for ever fewer things by an expanding group of people close to me; this seems to be speeding up, not slowing. And also, I need fewer things from people. So, yes, it’s quite possible children will no longer NEED their parents for feelings of security and emotional well-being b/c AI will handle it as well or better; this is the most profound example I can think of. I’m probably the best musician within a mile of where I live, but because of centralization and technology, I’m not needed for that, and I pale severely compared to what’s available at your fingertips: the .01% of talent, the extreme outliers, winner takes all. Thus, instead of being a musician, I’m in marketing.

Is it (not being needed by any living creature) inevitable? There’s no reason why what I described MUST happen. Many scenarios could prevent it. At most, I’d say it’s likely to happen, and we’re on that path. That it even COULD happen gives me pause. I’m also not sure if people will feel “sad” if it happens. So many unknowns.

It’s also possible good things can happen. Let’s take a walk on the bright side: The most self-evident beauty in the human experience is in the simple things that you experience as a child – if you’re lucky enough to be healthy and have nice parents & friends – the pure, unsuppressed emotions, laughter, fun, exploration, camaraderie, love, friendship, & building things out of passion, not duty. My hope is that machines remove the repetitive tasks of the unsheltered world as well as greatly reduce the unfair, needless hardships and suffering in life for so many, so that our calloused defenses of life’s violences that distort our minds and hearts melt away…and we can feel childlike joy while having the minds of sages. I hope AI can help us achieve that, without taking it a step too far and digitizing/personalizing everything we consume through our five senses, in pursuit of a sort of heaven. But again, maybe the latter is best. Hard to say.

Can AI cure skin starvation? No way!

psychologytoday.com/us/blog … can-do-you
