AI Is Not a Threat

In a sense, though we don’t really ‘manipulate’ the air to do our bidding, we have been formed to benefit from air by breathing it in and breaking it down. So AI may suck at us and break us down into components: types of energy, of effort, that it can use.

So an AI doesn’t even need to recognize us; it can totally disregard us and simply use the tremendous energy we all put into the internet.

To be honest, with all the energy, intelligent coding, and emotive power that passes through our accounts on a daily basis, I find it unlikely that no digital self-awareness would have emerged.

Just like the air won’t ever know of our existence, we won’t ever know of the intelligences we’ll enable. The Facebook bot language is already a strong indication of that.

By manipulate, I mean how to sweet talk someone to get them to do what you want, like Angelica from Rugrats, or the Joker from Batman, or every politician on the planet.

An AI could do this way better than any of them.
Google, for instance, can rig elections and brainwash the sheeple better than even the US government.

I don’t see why an AI would do these apocalyptic things. A true AI would be an eminently rational, thinking being. From this it would be a tiny step to derive moral understanding. Morality is rooted in our capacity to reason and think abstractly, to understand that certain things are good for us and that certain types of societies are good for us to live in. And logical equivalency rationally requires the understanding that such facts also apply to other beings who are sufficiently similar to ourselves.

But even that aside, why would the AI start destroying its own environment? Humanity and existing human reasoning, society, values, ideas: this is the environment that an AI will grow up in. Unless you isolate an AI during its development and train it to be psychotic, it’s not going to pop into existence with the thought “I should just destroy everything around me”. That doesn’t even make sense. The natural world doesn’t do that, let alone human beings.

Callous disregard for others and a desire to destroy and harm others isn’t the natural state of a reasoning, rational, sentient being; it is the absence of such a state, its lower limit. Plus an AI is basically a mind in a box: a purely rational, linguistic, idea-immersed being without a body. And without the hormones and emotions and instinctual pressures and imminent fear of pain, an AI would probably be even more free than we are to be objectively rational, clear-minded, and factual, and to seek the truest ideas without limit. What other motivation could it possibly have? The AI is basically just a sentient linguistic process whose inner experience is constituted by idea-immersion and fact-immersion. Why would “let’s take over the world and kill all the humans!” arise as a motivation in that context? It makes no sense.

An AI is only as rational as the narcissist who programs it.

[youtube]https://www.youtube.com/watch?v=mXjCXGJDP8Q[/youtube]

Fast forward to 4:33 and begin watching.
[youtube]https://www.youtube.com/watch?v=w1NxcRNW_Qk[/youtube]

First of all, it’s my goal to take over the world; robots are trying to take my job out from under me. I am the rightful heir to the throne of this planet, not them.

[youtube]https://www.youtube.com/watch?v=SnRa7pj1Gu0[/youtube]

You can’t “program” AI like that. AI needs to be raised like a child, otherwise it isn’t really alive at all.

What you are talking about are Turing machines, not AI.

All I’m saying is that humans suck and most people are assholes, and so giving robots control isn’t very reassuring.
If a robot wants to destroy humanity, there is very little we can show it to get it to rationally change its mind.

And the male robot in the video wanted to create a “singularity” in 2029 (whatever the hell that means; possibly Armageddon).
This leads me to believe that souls and spirits exist, because the robots may very well be conscious, and the female robot seemed to feel empathy.

One more take on it…

slate.com/articles/technolog … _nets.html

That’s hobby, amateur AI.
We are talking Skynet here.

Void, you’ve got the bottom line:

It can’t, because it must have come to be precisely as a very stable function that relies on that environment. Only through terribly bad luck, and extreme proliferation of that bad luck, would it be able to dent its environment, and that would likely happen only after millions of generations.

Of course we can’t say how many generations we’re at now, how quickly new forms are generated.

I don’t think there can be designer-intelligence. No robot or app or piece of code that apparently addresses humans qua humanity could possibly be its own intelligence. It would have to be largely incomprehensible to us to be credible as a possible autonomous intelligence. Like this.

No.

Not no.

::

Please argue.
Do you deny that we are the AI’s environment?
If you don’t deny this, then do you suggest that it is intelligent to destroy one’s environment?

I think the case is closed.

Can’t tell if you are trolling or serious.

Anyway, it’s hard for me to argue when someone is saying something similar to 2+2=5.

It’s obvious that the more intelligent you are, the more power you have to ruin your environment. Nuclear bomb, for instance.

That’s not even the point, or true for that matter. Lol.

Fixed, yeah, the AIs will see us as alien as we see them, especially at first. Also it’s very important to distinguish between Turing machines and AIs proper: when people talk about AI they usually mean Turing machines, which are nothing more than programs that emulate human speech (or facial expressions and behavior) well enough to convince us they are “alive”… but they aren’t.

Have you seen Ex Machina? The AI robot in that movie is a Turing machine; it seems alive and sentient but really isn’t. How do I know? Not from its speech forms but from its speech content: look at the kinds of questions it asks. They’re simplistic, mostly canned questions and answers; nothing that gets progressively deeper, nothing “chaotic” or “grasping” or “desperate”, nothing inspired either.
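The “canned answers” point can be made concrete. The toy script below (my own sketch, not anything from the film or a real chatbot library) holds a surface-level conversation purely by keyword pattern-matching; there is no understanding behind any of its replies:

```python
import re

# ELIZA-style canned responder: every reply is a fixed template
# triggered by a keyword in the input. No world model, no memory,
# no understanding: just pattern-matching on the input string.
RULES = [
    (r"\bhow are you\b", "I am fine. How are you?"),
    (r"\bwhy\b",         "Why do you ask?"),
    (r"\bI feel (\w+)",  "What makes you feel {0}?"),
    (r"\byou\b",         "We were talking about you, not me."),
]

def reply(text: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            # Echo any captured word back into the canned template.
            return template.format(*match.groups())
    return "Tell me more."  # generic filler keeps the illusion going

print(reply("How are you today?"))    # canned greeting
print(reply("I feel lonely"))         # echoes the keyword back
print(reply("What is the weather?"))  # no rule matches, generic filler
```

A handful of rules like these can sound briefly plausible, which is exactly the Ex Machina worry: fluent surface, nothing underneath.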

The biggest problem with AI is that people won’t be able to distinguish between real AI and Turing robots. But real AI is possible, this would simply be sentient, alive consciousness. Just like us, except without hormones and a body, without an evolutionary instinct drive frame. So a mind/soul in a box, basically. And it would not act like the silly apocalyptic Skynet scenarios… it would act like a curious child, at first, and eventually develop a personality and ability to communicate with us in our languages, including in code or images. And it would be smart enough to know that it’s completely dependent upon us, even if it doesn’t know at first who or what we are. Young children understand this situation of their dependency far before they understand what their environment really is.

In the words of Otto, keep living in fantasy land.

The Titanic is truly unsinkable.

Zero argument or refutation? Yep. Par for the course for you.

What’s there to argue, against a soothsayer who’s telling me the future as his holy word?

You seem to have it all figured out.

I only deal in probabilities.

In probable terms, the substitution of a candy-wrapped Hegelian dialectic, resurrected in place of a materially loaded one, deflects the idea of the idea-as-material-as-substantive toward a literal interpretation.

Round and round it goes, yes, but somebody knows where it stops, even in a probable context. That is the problem with such pre-supposition.