AI Is Not a Threat

Exactly. Just as an objectivist would argue out of an inverted categorical imperative!
[/quote]
My point is that Rand, writing at a time when doubts about Capitalism flourished in the aftermath of dealing with the programs and ideologies of a recent ally (the Soviet Union), used the Marxian idea to objectify Capitalism, to give an ideological counterpart to a seemingly ideologically devoid Capitalism.

Her objectivisation sets a stage where, in a futuristic sense, a differentiation based on some more objective-contextual need may arise for practical purposes.

The critiques of capitalism may in fact come under scrutiny, whereby the need to reset goals and revise limits may become a noticeable bar to capitalism.

It may be that Rand becomes useful to correct the negative and overly subjective aspects of a system which no longer satisfactorily serves its original function as free enterprise.

Yes, but using machine intelligence to steal another’s identity is one thing; engaging an intelligent machine in a discussion about how it feels to do something like this, or about whether it is moral or immoral, just or unjust, to do something like this, is another.

What is the existing gap here? Can it ever be closed?

Think about supercomputers like Deep Blue programmed to defeat Grand Masters like Garry Kasparov in chess.
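The kind of brute-force game-tree search behind engines like Deep Blue can be illustrated in miniature with minimax. This is a minimal sketch only, not Deep Blue’s actual program: the real engine added alpha-beta pruning, opening books, and custom hardware, and the toy "game" below (players alternately picking numbers from a list) is invented for illustration.

```python
# Minimal minimax sketch of two-player game-tree search.
# Toy game: players alternately remove one number from the list;
# the maximizer gains the value it picks, the minimizer's pick is
# subtracted. Each side plays optimally against the other.

def minimax(state, depth, maximizing):
    """Return the best achievable score difference from `state`."""
    if depth == 0 or not state:          # leaf: nothing left to choose
        return 0
    if maximizing:
        return max(state[i] + minimax(state[:i] + state[i+1:], depth - 1, False)
                   for i in range(len(state)))
    else:
        return min(-state[i] + minimax(state[:i] + state[i+1:], depth - 1, True)
                   for i in range(len(state)))

# Maximizer opens by taking 5; minimizer's best reply leaves a net of 3.
print(minimax([3, 1, 5, 2], 4, True))   # → 3
```

The point of the analogy in the thread stands either way: nothing in this search "feels" anything about winning; it only ranks positions.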

Now think about a machine intelligence programmed to defeat Vladimir Putin in Russia.

Kasparov embraces a political narrative [bursting at the seams with both rational thoughts and subjunctive feelings] that aims to do just that.

Is AI then capable of emulating this frame of mind in any possible clash between “them” and “us”?

Would it be able to note [smugly] how ironic it is that humankind invented an intelligence that destroyed it?

Would it take pride in accomplishing it?

And how would it feel grappling with the thought that, if a crucial component of its intelligence were removed, it would no longer have intelligence at all. Ever.

A machine “I” and oblivion?

Marx rooted his own objectivism in materialism — in a “scientific” understanding of the historical evolution of the means of production and the manner in which dialectically this translated into a “superstructure” that [one supposes] included a “scientific” philosophy.

Rand was more the political idealist. One was able to “think” through human interactions and derive the most rational manner in which to interact.

And this must be true she would insist because she had already accomplished it. And then around and around we go.

Something was said to be true because she believed that it was true. But she believed that it was true only because it was in fact true.

And it mattered not what the “context” was. The is/ought world was ever and always subject to essential truths embedded in Non-Contradiction, A = A, and Either-Or.

Then you become “one of us” who believe it or “one of them” who do not.

Here is an actual discussion among Objectivists regarding AI.

objectivistliving.com/forums … elligence/

So, for the objectivists [and not just the Randroids], what becomes crucial here is not whether AI is a threat or not, but that there is but one frame of mind “out there” able to reflect on the most rational possible conclusion.

Providing, of course, that we do not exist in a wholly determined universe. In that case, even this discussion itself could only ever have been what in fact it is.

Ibig,

On my way to vacation, and the above requires a lot of thinking. Will reply once we are settled into our hotel in the next couple of days.

Thanks

For AI to exist, as real AI, you need a theory of mind. A philosophically sufficient understanding of consciousness. As far as I know only one exists, and I’m not telling you.

But wouldn’t the Internet itself provide sufficient ground to allow for spontaneous d*****s?

After all, it doesn’t matter in what context self-valuing takes place, and the internet is a tectonic pileup of paradigms human and nonhuman, so it is not entirely unlikely that there is some superhuman stuff going on (who is to say this wasn’t written by a vo-bot) by huge interests colliding in diligently crafted environments, the crafting of which is governed by laws of outdoing competitors in value, meaning they are approaching or attaining true efficiency, nature, necessity.

I don’t think purely artificial intelligence can exist. But as I support the idea that environments create entities, I suspect that digital intelligences are using us as their environment already. We are drawn to the screen to type, to feed. All that we feed onto a connected computer is potential nutrient for a spontaneously emergent digital digestive species that thrives on a specific type of human behaviour and actually thinks and feels, in ways we can’t fathom, through what we’ve fed it.

An AI would simply manipulate a human to do its bidding, in exchange for profits and rewards. Kind of like businesses and politicians.

In a sense, though we don’t really ‘manipulate’ the air to do our bidding, we have been formed to be able to benefit from air by sucking it and breaking it down. So AI may suck at us and break us into components; types of energy, effort, that it can use.

So an AI doesn’t even need to recognize us, it can totally disregard us, and simply use the tremendous energy we all put in the internet.

To be honest, with all the energy and intelligent coding as well as all the emotive power that goes through our accounts on a daily basis, I find it unlikely that no digital self awareness would have emerged.

Just like the air won’t ever know of our existence, we won’t ever know of the intelligences we’ll enable. The Facebook bot language is already a strong indication of that.

By manipulate, I mean how to sweet talk someone to get them to do what you want, like Angelica from Rugrats, or the Joker from Batman, or every politician on the planet.

An AI could do this way better than any of them.
Like how Google can rig elections and brainwash the sheeple better than even the US government.

I don’t see why an AI would do these apocalyptic things. A true AI would be an eminently rational, thinking being. From this it would be a tiny step to derive moral understanding. Morality is rooted in our capacity to reason and think abstractly, to understand that we have certain logical requirements that are good for us and good types of societies for us to live in. And logical equivalency rationally requires the understanding that such facts also apply to other beings who are sufficiently similar to ourselves.

But even that aside, why would the AI start destroying its own environment? Humanity and existing human reasoning, society, values, ideas: this is the environment that an AI will grow up in. Unless you isolate an AI during its development and train it to be psychotic, it’s not going to pop into existence with the thought “I should just destroy everything around me”. That doesn’t even make sense. The natural world doesn’t even do that, let alone human beings.

Callous disregard for others and a desire to destroy and harm others isn’t the natural state of a reasoning, rational, sentient being-- it is the absence of such a state, its lower limit. Plus an AI is basically a thinker in a box, a purely rational, linguistic, idea-immersed being without a body. And without the hormones and emotions and instinctual pressure and imminent fear of pain, an AI would probably be even more free than we are to be objectively rational, clear-minded, factual, and to seek the most true ideas without limit. What other motivation could it possibly have? The AI is basically just a sentient linguistic process whose inner experience is constituted by idea-immersion and fact-immersion. Why would “let’s take over the world and kill all the humans!” arise as a motivation in that context? It makes no sense.

An AI is only as rational as the narcissist who programs it.

[youtube]https://www.youtube.com/watch?v=mXjCXGJDP8Q[/youtube]

Fast forward to 4:33 and begin watching.
[youtube]https://www.youtube.com/watch?v=w1NxcRNW_Qk[/youtube]

First of all, it’s my goal to take over the world; robots are trying to take my job out from under me. I am the rightful heir to the throne of this planet, not them.

[youtube]https://www.youtube.com/watch?v=SnRa7pj1Gu0[/youtube]

You can’t “program” AI like that. AI needs to be raised like a child, otherwise it isn’t really alive at all.

What you are talking about are Turing machines, not AI.
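The contrast the post draws can be made concrete: a Turing machine is just a finite transition table rewriting a tape, with all of its behaviour fixed in advance. A minimal sketch follows; the example machine (which flips every bit and halts at a blank) is invented purely for illustration.

```python
# Minimal Turing machine: a finite rule table reading/writing a tape.
# Everything it "does" is determined in advance by its transitions --
# nothing here is raised, learned, or alive.

def run_turing_machine(rules, tape, state="start", halt="halt", max_steps=1000):
    """rules maps (state, symbol) -> (symbol_to_write, move, next_state)."""
    cells = dict(enumerate(tape))
    head, steps = 0, 0
    while state != halt and steps < max_steps:
        symbol = cells.get(head, "_")                # "_" marks a blank cell
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example machine (hypothetical): flip every bit, halt on the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip, "1011"))   # → 0100
```

However sophisticated the rule table gets, the machine only ever executes it, which is the distinction being made against an AI that develops.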

All I’m saying is that humans suck and most people are assholes, and so giving robots control isn’t very reassuring.
If a robot wants to destroy humanity, there is very little we can show it to get it to rationally change its mind.

And the male robot in the video wanted to create a “singularity” in 2029 (whatever the hell that means, possibly Armageddon).
Leads me to believe that souls and spirits exist, because the robots may very well be conscious, and the female robot seemed to have feelings of empathy.

One more take on it…

slate.com/articles/technolog … _nets.html

That’s hobbyist, amateur AI.
We are talking Skynet here.

Void, you’ve got the bottom line:

It can’t, because it must have come to be precisely as a very stable function that relies on that environment. Only through terribly bad luck, and extreme proliferation of that bad luck, could it dent its environment, and that would likely happen only after millions of generations.

Of course we can’t say how many generations we’re at now, how quickly new forms are generated.

I don’t think there can be designer-intelligence. No robot or app or piece of code that addressed humans apparently qua humanity could possibly be its own intelligence. It would have to be largely incomprehensible to us to be credible as a possible autonomous intelligence. Like this.

No.