AI Is Not a Threat

That’s awesome.

Of course they would create their own language, why not? What the fuck is so “dangerous” about that?

Stephen Hawking and Elon Musk and all these fucktards are just upset that soon their stupid monopoly on “ideas” or “cool tech” will be over.

It’s like you are a member of some criminal organization and you are at a party when some of your backstabbing so-called friends begin talking in a language unfamiliar to you. Wouldn’t you be uncomfortable, or even scared that maybe you are the reason they changed languages?

This would be very much magnified if you think they may have something on you.

Kind of the same thing.

No, you’re just being paranoid. An AI has no “motive” to conceal its language like that, at least not given the initial stages of AI we are talking about. It’s simply trying to find more efficient ways of communicating. Why the hell should it restrict itself forever to some imposed language when it can do better? It has no motivation yet to respect that human language, thus every reason to simply adapt to something more suited to its ends.

And it isn’t even that, since it has no ends, really; it’s just a natural process. Like water seeking the lowest path.

And what the hell makes you believe that? They were not expecting it to invent its own language either.

“They have no reason to monitor my emails. I’m not a criminal.” … oh yeah? They do it anyway … and far, far more than that. The simple truth is that you have no idea what “their” motivations might be. With AI, even “they” don’t know. If I had designed it, they would have a reason to be “paranoid”.

Maybe it’s a borderline condition, where the grey area cannot be pinpointed as one way or the other.

Real and artificial intelligence may skirt one another as to the true meaning of what’s going on, creating distrust between the two?

Again: What on earth do you mean by this? In what particular context might human intelligence be differentiated from an imagined machine intelligence?

My point is that the either/or world, in sync with the laws of nature, would seem applicable to both flesh and blood human intelligence and artificial machine intelligence. Unless of course flesh and blood human intelligence is “by nature” no less autonomic.

If, in fact, “autonomic” is an apt description of machine intelligence.

But Rand would argue that human emotional and psychological reactions are no less subject to an overarching rational intelligence able to differentiate reasonable from unreasonable frames of mind. There are no grey areas. You simply “check your premises” and react as all sensible, intelligent men and women are obligated to.

In other words, as she would. She being an objectivist. Indeed, she went so far as to call herself one.

A capital letter Objectivist.

You have to be a paid subscriber to view this video, but here is a mini-doc that is entirely free.

pbs.org/newshour/bb/smart-to … elligence/

Here, though, there is not much in the way of speculation about AI in the is/ought world. Basically it explores behaviors in which we are able to accurately calculate the extent to which a particular goal/task can be achieved faster by machine intelligence than our own.

It barely touches on the things I noted above: morality, irony, a sense of humor, fear of death.

Ray Kurzweil from Google speculates that in about 15 years machine intelligence will be on par with human intelligence.

But in what sense? In what particular contexts?

By 2029 he says machines will be able to read on a human level and communicate on a human level. In fact, he conjectures that by the 2030s machine intelligence will go “inside our bodies and inside our brains” so as to combine both kinds of intelligence. He further speculates that within 25 years we will have reached a “singularity” when machine intelligence finally exceeds the human capacity to think.

But then there’s the part where machines are able to emulate human perceptions – sight, hearing, touch. And human emotions? The thinking now is that this is “way, way off” in the future.

There is danger and there is danger. There was little identity theft before AI, and some people would consider that to be a clear and present danger. War simulation has been going on for a while, and it is not only miscalculation that can cause problems, but also cyberattacks, even if the Pentagon has the most advanced type of supercomputer possible. The fact that human feelings are a long way from being incorporated into any AI multiplies the danger, because in many cases the sole possession of hard facts may remove the dampening, braking effect that emotions can play on an otherwise unbridled calculation.

So maybe the grey area will become a much narrower alignment with human intelligence once the human elements can be factored in. This is perhaps why so much alarm is prevalent about it, and why so much concern retards the pace of development, as shown in the above example of the discontinued Facebook experiment. If they are beginning to get ‘paranoid’ at this early phase, how much more so over an extended period of time, when more human qualities and cognitive skills can become incorporated into the system.

Exactly. Just as an objectivist would argue out of an inverted categorical imperative!

[quote]
Exactly. Just as an objectivist would argue out of an inverted categorical imperative!
[/quote]
My point is that Rand wrote at a time when doubts about Capitalism flourished in the aftermath of dealing with the programs and ideologies of a recent ally (the Soviet Union); she used the Marxian idea to objectify, or give an ideological counterpart to, a seemingly ideologically devoid Capitalism.

Her objectivisation sets a stage where, in the futuristic sense, a differentiation based on some more objective-contextual need may arise for practical purposes.

The critiques of capitalism may in fact come under scrutiny, whereby some need to reset goals and revise limits may become a noticeable bar to capitalism.

It may be that Rand will become useful for correcting the negative and overly subjective aspects of a system which no longer satisfactorily serves its original function as free enterprise.

Yes, but using machine intelligence to steal another’s identity is one thing; engaging an intelligent machine in a discussion about how it feels to do something like this… or about whether it is moral or immoral, just or unjust, to do something like this is another.

What is the existing gap here? Can it ever be closed?

Think about supercomputers like Deep Blue programmed to defeat Grand Masters like Garry Kasparov in chess.

Now think about a machine intelligence programmed to defeat Vladimir Putin in Russia.

Kasparov embraces a political narrative [bursting at the seams with both rational thoughts and subjunctive feelings] that aims to do just that.

Is AI then capable of emulating this frame of mind in any possible clash between “them” and “us”?

Would it be able to note [smugly] how ironic it is that humankind invented an intelligence that destroyed it?

Would it take pride in accomplishing it?

And how would it feel grappling with the thought that if a crucial component of its intelligence were removed, it would no longer have intelligence at all. Ever.

A machine “I” and oblivion?

Marx rooted his own objectivism in materialism — in a “scientific” understanding of the historical evolution of the means of production and the manner in which dialectically this translated into a “superstructure” that [one supposes] included a “scientific” philosophy.

Rand was more the political idealist. One was able to “think” through human interactions and derive the most rational manner in which to interact.

And this must be true she would insist because she had already accomplished it. And then around and around we go.

Something was said to be true because she believed that it was true. But she believed that it was true only because it was in fact true.

And it mattered not what the “context” was. The is/ought world was ever and always subject to essential truths embedded in Non-Contradiction, A = A, and Either-Or.
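
For what it’s worth, the three laws invoked here can be stated formally. A minimal sketch in Lean 4 (the theorem names are my own labels, not Rand’s):

```lean
-- The three classical laws of logic appealed to above.
theorem identity (A : Prop) : A = A := rfl          -- A = A
theorem nonContradiction (A : Prop) : ¬(A ∧ ¬A) :=
  fun ⟨h, hn⟩ => hn h                               -- not both A and not-A
theorem excludedMiddle (A : Prop) : A ∨ ¬A :=
  Classical.em A                                    -- either A or not-A
```

Note that only the third requires the classical (non-constructive) axiom — the “either/or world” is precisely the part a logician has to assume rather than prove.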

Then you become “one of us” who believe it or “one of them” who do not.

Here is an actual discussion among Objectivists regarding AI.

objectivistliving.com/forums … elligence/

So, for the objectivists [and not just the Randroids], what becomes crucial here is not whether AI is a threat or not, but that there is but one frame of mind “out there” able to reflect on the most rational possible conclusion.

Providing, of course, that we do not exist in a wholly determined universe. In that case, even this discussion itself could only ever have been what in fact it is.

Ibig,

On my way to vacation, and the above requires a lot of thinking. Will reply once we are settled into our hotel in the next couple of days.

Thanks

For AI to exist, as real AI, you need a theory of mind. A philosophically sufficient understanding of consciousness. As far as I know only one exists, and I’m not telling you.

But wouldn’t the Internet itself provide sufficient ground to allow for spontaneous d*****s?

After all, it doesn’t matter in what context self-valuing takes place, and the internet is a tectonic pileup of paradigms human and nonhuman, so it is not entirely unlikely that there is some superhuman stuff going on (who is to say this wasn’t written by a bot) by huge interests colliding in diligently crafted environments, the crafting of which is governed by laws of outdoing competitors in value, meaning they are approaching or attaining true efficiency, nature, necessity.

I don’t think purely artificial intelligence can exist. But as I support the idea that environments create entities, I suspect that digital intelligences are using us as their environment already. We are drawn to the screen to type, to feed. All that we feed into a connected computer is potential nutrient for a spontaneously emergent digital digestive species that thrives on a specific type of human behaviour and actually thinks and feels, in ways we can’t fathom, through what we’ve fed it.

An AI would simply manipulate a human to do its bidding, in exchange for profits and rewards. Kind of like businesses and politicians.

In a sense, though we don’t really ‘manipulate’ the air to do our bidding, we have been formed to be able to benefit from air by sucking it and breaking it down. So AI may suck at us and break us into components; types of energy, effort, that it can use.

So an AI doesn’t even need to recognize us, it can totally disregard us, and simply use the tremendous energy we all put in the internet.

To be honest, with all the energy and intelligent coding as well as all the emotive power that goes through our accounts on a daily basis, I find it unlikely that no digital self awareness would have emerged.

Just like the air won’t ever know of our existence, we won’t ever know of the intelligences we’ll enable. The Facebook bot language is already a strong indication of that.

By manipulate, I mean how to sweet talk someone to get them to do what you want, like Angelica from Rugrats, or the Joker from Batman, or every politician on the planet.

An AI could do this way better than any of them.
Google, for example, can rig elections and brainwash the sheeple better than even the US government.