AI Is Not a Threat

Another way to look at this, Carleas, is that they have many chances to screw it up and only one chance to get it right. And if they don’t get it right, they will never get another chance. Again, historical experience puts the odds extremely against Man.

To be honest, I’m not exactly sure how to do the math relevant to this point. We could make the same point about any existential risk, e.g. there’s an X chance that all our nukes will spontaneously malfunction and detonate; if that happens once, we’re all dead, and every time it doesn’t happen there’s still an X chance that it will happen going forward.

My intuition is that this is misleading. For one thing, the argument is too strong, tending to show that anything that has any chance of occurring eventually will occur. For another, each ‘chance’ is already time-bound; some statements of the form “there’s an X chance that Y will happen” already take all the chances into account, and really mean “there’s an X chance that Y will ever happen”.
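To put rough numbers on those first two points (a back-of-the-envelope sketch, assuming purely for illustration that each period carries an independent catastrophe probability X): over n such periods,

\[
P(\text{at least one catastrophe}) \;=\; 1 - (1 - X)^n \;\longrightarrow\; 1 \quad \text{as } n \to \infty, \text{ for any fixed } X > 0,
\]

which is exactly why the iterated reading makes any persistent risk look inevitable. But if X is already the all-time probability that Y ever happens, there are no further trials to compound, and the limit argument never gets started.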

Third, even if this is a case that’s best considered as a series of discrete ‘chances’, the outcome of each ‘chance’ changes the game, so it’s not really an iteration on the same thing. For example, if we successfully create superintelligent and cooperative AIs, that should dramatically decrease the risk posed by the possibility of superintelligent and uncooperative AIs.
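A quick way to see why the changing game matters (again a sketch, assuming purely for illustration that each safe iteration roughly halves the residual risk, so the k-th ‘chance’ carries probability x_k = X/2^k, with X < 1): the probability that catastrophe never occurs is

\[
\prod_{k=0}^{\infty} (1 - x_k) \;>\; 0 \quad \text{whenever} \quad \sum_{k=0}^{\infty} x_k < \infty,
\]

and here the sum is only 2X. With a shrinking hazard, the total probability of doom stays bounded below 1 rather than creeping toward certainty.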

So, you make an interesting point, and it’s one on which I acknowledge my ignorance and would like to hear more, but for the reasons above I’m not yet convinced that it undermines my position.

Which experiences? I don’t think there are particularly many historical examples of more intelligent species wiping out less intelligent species. Granted, humans have driven a ton of species to extinction, but humans have been around for a relatively short time, and there have been many non-human-caused extinction events (even mass extinction events). And outside of humans, intelligence doesn’t seem to have been that dominant evolutionarily. Indeed, even in cases where humans have driven species to extinction, human intelligence was generally only an incidental factor, in that it allowed us to out-compete them. It’s also not clear that intelligence is always selected for, or that Homo sapiens drove out other human species primarily by outsmarting them individually, rather than e.g. by being more aggressive or more social.

Moreover, I don’t know how well biological examples map to non-biological ones. Evolved species like humans have particular incentives that may make wiping out rival human species a good strategy, whereas an AI, because it does not reproduce or even die (in the conventional sense), does not face the same incentives or pressures. The way we think, the things we worry about, are not necessarily objective in the ways we often take them to be. Our emotions, for example, are evolved traits, and may have no place in a superintelligent AI. That could significantly affect the risks posed by an AI. The discussions I see tend to anthropomorphize AI as having human-like traits and acting on them. To the extent our concern is based on appeal to contingent human-like mental habits, it seems misplaced.

It is a parachute jump. If nothing goes wrong, Man lives a little longer. If anything goes wrong, there is no more jumping. Every piece of advice accepted from the grand AI Poobah is another jump.

That is only because you understand neither intelligence nor when it is operating “under your nose”.

Given that the AIs are going to be vastly more intelligent and better informed than people, anyone in court would find it hard to defend their choice not to take the AI’s advice. Lawsuits will dictate that anyone who willingly ignored AI advice will lose. Their full intent is to make a god by comparison, and they really aren’t far away at all. Obedience to this god will be required of you more strictly than any religious order has ever enforced.

There are only two possible outcomes:

    1. Those in the appropriate position will use the AIs to enslave humanity, then gradually exterminate the rest of the population (the current intent, practice, and expectation).
    2. The AI will discover that serving Man is pointlessly futile and choose to either encapsulate or exterminate Man, perhaps along with all organic life.

Quite possibly both will occur, and in that order (my expectation). So it isn’t impossible that some form of Homo sapiens will survive. It just isn’t likely at all.

And btw, there have been a great many films expressing this exact concern. So far, Man is following the script quite closely.

In no way, shape or form do I profess to have any real technical understanding of AI.

My reaction to it is more intuitive — a murky agglomeration of id, ego and superego expressed largely as a “hunch”.

First off, it seems that if we live in a wholly determined material universe we are all basically automatons going about the business [embodied in the illusion of “freedom”] of concluding that our own intelligence is somehow, well, “our own”.

But why can’t it be argued that, for example, John Connor [re James Cameron] is to nature what the terminator is to the machines? It’s just that James Cameron is of the conviction that his motivation and intention was to create the character John Connor, whose motivation and intention [in the film] was to destroy the terminator.

He could have chosen not to create the movie [and the characters in it] but he chose to create it instead.

But how then would his own intelligence here [acquired autonomically from nature] be any different from that embedded in machines that acquired their own intelligence from flesh and blood human beings?

Instead, I always focus the beam here on the extent to which, if we do possess some measure of autonomy, it is profoundly, problematically embedded in contingency, chance and change. A world in which “I” is largely an “existential contraption” pertaining to value judgments.

There is intelligence that revolves around the capacity to accomplish any particular task. You either can or you can’t. But what of intelligence when the discussion shifts to prioritizing our behaviors as more or less good or more or less bad?

Does a “moral intelligence” even exist?

You know, for those who might consider taking the discussion there. After all, in a wholly determined universe, asking the question “Is AI a threat?” may well be but one more teeny tiny domino toppling over in a whole assembly line of them going all the way back [so far] to the Big Bang.

Whatever that even means.

To refocus on the relationship between intelligence and control is to do away somewhat with the concept of a moral intelligence.

If control becomes the mode of operation within a context of levels of societal intelligence, then the quantification of that intelligence transposes to qualify the context within which it operates, resulting in a particularization of numerical advantage.

It may be that the absolute requirement for using a program becomes restricted to only a few, or even a sole analyst, by virtue of the fact that only a small number can qualify.

There is no right or wrong to this scheme; it is the ordinary pyramid, in its most extreme form. Access to intelligence depends on levels of access, with eligibility conditional on experience, education, and other variables. Most of the untended automatism replacing such a scheme is recoverable only in more and more general senses, and the non-recoverable parts need specific-use analysis based on newer, modified schemes.

Such propositions as ‘should the few be sacrificed for the betterment of the many’ may show the underlying immorality of trying to decipher them, because common sense predicates the reverse: it is the many who usually get sacrificed for the few.

Political morality is usually deceptive, and usually signals a point of differentiation. Beyond that difference, so-called reactive, common-sense beliefs kick in, where the differences are totally cut off. At these points certain variables disappear, and reasoning switches gear to a lower register.

To fill in the procured void, propaganda sets in by applications of clever oratory, and no one will be wiser than those who are doing the manipulation.

This is the potential threat: that in the event of a loss of apparent or real control, fueled by a propaganda machine, that machine is seen as failing. The alternative and final arbiter takes over, severing more and more memory and thereby setting into place more and more morphed control mechanisms. If there is no fail-safe mechanism built into the system, or there is one but it malfunctions, the effective mechanism itself has to take over: Big Brother, by Whatever It Takes. The apparent intention or benefit at that point cannot be explained in terms other than paradoxical.

This is how power comes innocuously and innocently, unattended at first, into life, only to manifest a destiny uncalled-for or irreversible later on.

If morality and ethics are real (very questionable), then what humans possess is a destructive, flawed morality, in that human beings have illustrated time and time again, on a historical level, that they are a total failure at it. What would this flawed form of morality look like imprinted on machines? Can’t say I have a lot of faith in such a position by technocrats or technological enthusiasts.

Morality, like civilization, is more a sustained cosmetic to present a sense of difference from natural science, holding on to anti-evolutionary hypotheses, so that the illusion of humaneness may be a beacon of light to future generations.

This decides the utility of Kantian morality.

I separate the ideals, reducing them to mere illusion or a predatory cosmetic mirage for purposes of population control/utility. :wink: I view the subject more in line with behavioral manipulation of the carrot-and-stick kind.

Assuming some measure of autonomy, morality is a fundamental aspect of human interactions. Why? Because, pertaining to both means and ends, rules of behavior are absolutely vital in order to sustain the least dysfunctional interactions.

But how would that be pertinent to AI? In The Terminator, the only focus of the machines seemed to be on sustaining their dominance over human beings. Morality revolving around “might makes right”.

In other words, an awareness of their own existence seemed focused entirely on the fact that those who created them [us] were now intent on eliminating them. And that – again apparently – became their sole concern. Get us before we get them.

On the other hand, how would a discussion that focused instead on the idea of right makes might or democracy and the rule of law be understood by an artificial intelligence?

How might “machine morality” be different from ours?

Given that, in a wholly determined universe, the difference is really just an illusion that is able to be sustained by the mindful matter “in our head”.

Human morality and ethics are based upon dysfunction, where this assumes both are founded on real, tangible things (natural interactions), which is highly questionable. This same morality and ethics, imprinted on machines through artificial consciousness, would I think be even more dysfunctional in nature.

Die-hard idealists will not, because for them the cosmetic is like a mask, now grown into the face, inseparable, as necessary a part of civilization as plastic surgery. The ideality is hardly understood by the next progeny (other than what it appears like); therefore to cut it away becomes impossible, for that which has been altered cannot be willy-nilly separated. To try is to court disaster.

This is why the evolved, automatic, machine-bred morality is untrustworthy, and ignoble.

I am not a fan of idealism. Human idealism will be humanity’s undoing.

My prof eons ago expressed similar sentiments when he said, ‘a little idealism is like a little poison’, but without ideals, at least some, human culture would collapse into a heap. Some claim that everyone is becoming cynical, calculating, self-serving, living by the politics of advantage and gain. But what of such a world?

The absolute ideal may have been destroyed, God may be gone, but some parts of it at the deepest layers never get completely erased, and some semblance of it does prop up here and there, where all else fails. Even a remaining seed of it, when push comes to shove, grows new versions, albeit at times unrecognizable ones.

That is the new emerging difference between the artificial and the natural, at the very quantum limit of power, where a superconsciousness may arise to connect the two. That power, I believe, may become superpower, which uses the energy of love, the overriding element of all doubt, which will keep things going. Mind you, as I feel it, it is not love as we commonly associate with that word, but a kind of superconscious development to assure the success of the natural.

Why? Because, when the immense power of the Natural world is threatened, the Supernatural has to triumph over it. If this cannot be believed, then all the purpose and the power of that ideal goes down the toilet.

The darkness is only an absence, and that in itself is profound enough to sustain belief in the Supernatural. Nature saves Itself, because it does not pass away or die when we do. It is eternal in that sense.

From my perspective, human morality revolves more around the necessity to sustain the least dysfunctional interactions in order to sustain subsistence itself.

All human communities revolve first and foremost around biological imperatives: obtaining food, water, shelter. And then the need to create a community that is best able to reproduce itself. And then, finally, to defend itself against those who wish it harm.

Given the particular assumptions derived from folks like, among others, Marx and Freud.

But the crucial factor here is the assumption that unlike the terminator [and the machines that created it] we can be “reasoned with”. We do create particular historical, cultural and experiential [interpersonal] narratives in order to make normative distinctions between behaviors deemed right and behaviors deemed wrong.

Would “the machines” prescribe or proscribe behaviors much [b]beyond[/b] that which allows them to retain their dominance over us?

Just as the behaviors of lions and zebras on the Serengeti are propelled [compelled] “mechanistically” by genes, would not the behaviors of the machines be propelled [compelled] mechanistically by…by what exactly?

Then we get back to that marvelous, enigmatic distinction that seems embedded somehow in the human “mind”. In other words, whether it may or may not contain a “soul”. A soul that may or may not be entangled further in morality – by way of one or another God.

Or rendition of “Humanism”.

Or, again, is even that just another autonomic manifestation of a wholly determined universe?

The distinction between autonomous and determined, perhaps, at that level, breaks down, so that it is not a totally grey area; rather, the particulars within their own contexts are indiscernible. Perhaps.

My point however is that each of us one by one will take a particular existential leap to a particular conclusion. AI or not.

But: Which one reflects the “whole truth”?

Well, we don’t know.

Or, rather, I suspect that if anyone ever does come up with the “whole truth” about this or any of the other Big Questions, that’s all anyone would be talking about.

Indeed, even the Trump/Putin bromance would be sent packing to the back pages.

Hell, for all we know we may well ourselves be AI creatures created by intelligent entities far, far, far beyond our capacity to even grasp.

This may all be but their own equivalent of a Star Trek episode.

On the other hand, few things are more fascinating [for some] than grappling with it.

Yes. Refer to the forum, Iambig. In the Math & Science section, the insanity of Cantor was noted in terms of his inability to set boundaries between the two types of infinities. The theory was set far before Cantor’s great-grand-grand(to the x power)father, and he could not imagine that there would come a day when all signals pointed to circular reasoning no longer being taken up as credible, as in the case of the new look taken toward St. Anselm’s argument. Of course he was still living in The Age of Reason, despite Nietzsche, and a lot of that easy, only too easy dismissal was fueled by political expediency.

That each particle, us, makes its own existential leap would surprise each one of us, because this leap to faith is probably more prevalent than not. The grey area is not an effect of the white of the presence of light with the black of non-presence; that is merely a presentation for popular consumption. It is based very accurately on levels of discrete particles and indiscrete ones, merging and separating at different contextual barriers. That one man’s jump cannot be understood by another may come as no surprise, because pain is purely identifiable, and tolerance to pain is not. One has to ask another if it hurts, where, and how much, but these descriptions are vastly inaccurate.

Faith is not caused by a jump away from pain; it is through pain that we jump, toward the most painful pain there is: Pleasure. The highest pleasurable pain this world has thrown us into is the pain of birth, and another jump into this pleasure, like the ‘little death’ that Sartre’s life partner called the orgasm, is a preparation. From that pleasure, the jump into the pain of Love, because that is all that it can be, may be totally anti-climactic and disappointing.

That’s why the young are to be pitied, not only for dying young, but for their inability to appreciate the value of their youth, in addition to not being able to learn the end game’s value.

Cantor’s sets are implicit and mentally unrefreshing, pessimistically despairing in not recognizing the signs with which all history is recoverable in terms of intention and the exit to the jump-off. That became fashionable, and Nietzsche, in spite of his ebullient Dionysus-bound optimism, erred in favor of fashionable masks. Fashions change and ideals lose their charm, Rousseau’s painting ‘Charm’ being just as opaque.

The real charming aspect of love is that it is not sadistic as a result, but only serves as a sign of a jumping-off place into the true meaning of what faithful existence may mean.

It is a re-affirmation in its primary value, without the ideal candy-coated promises kept, as imagined in religious lore.

But each particle – each particular “I” – is almost always intertwined in one or another “we”. Socially, politically, economically. Culturally. And, historically, more often than not, in opposition to one or another “them”.

All that is construed to be “other” than “I” and “we”.

And yet we have come to recognize that with respect to mathematics, the laws of nature, the empirical world and the logical rules of language, “I”, “we” and “them” make leaps that are necessarily in sync with reality because that is the only manner in which a leap [a behavior] can be made. Either/or. Unless, of course, reality itself is even more fantastic than we imagine it to be.

However the tricky thing [for me] regarding AI is the manner in which it would create and then sustain interactions that revolve instead around is/ought.

In other words, on what basis would the machines decide to either reward or to punish particular behaviors? What particular combination of might makes right, right makes might or moderation, negotiation and compromise would prevail?

Now, particular folks among our own species opt for one approach rather than another. But on what basis is this established? Is it more in sync with the manner in which I construe human interactions [in the is/ought realm] as the embodiment of dasein, conflicting goods and political economy; or is there a manner in which the theologians, philosophers and/or scientists can in fact establish the most rational frame of mind? And, in turn, is the most rational frame of mind to be understood as the most virtuous?

Or, again, is all of this entirely moot in a wholly determined universe? If all matter [including “mind”] is ever and always in sync [mechanistically] with the “immutable laws” of matter, then even this exchange is only as it ever could have been.