AI Is Not a Threat

You're wrong; forgive the interruption of the flow of your ideas for a moment, but I was momentarily distracted by Iambiguous's train of thought, and I hope to incorporate your comments after completing the prior point.

What does the above indicate to you, Iambiguous; what does it signal, insofar as it has bearing on the existential dilemmas mentioned above?

That the drama, the tragedy, consists of the contemporaneous impress of an elemental perceived difference between the proponent and the Chorus.
This has immense significance up through the liberal policies of the 1960s, as has been pointed out in a well-thumbed book aptly titled 'One-Dimensional Man'. It is a patent observation, noting the effects rather than the dynamics of the tragedy of the diminution of class consciousness. The muteness of the observation derives from the earliest perceptions of a dramatic hope for a catharsis not achievable, because the early dialogues continued into modernity to be staged from this either/or scenario. There can never be consensus, because of the positing of the set stage (and this may well permit a wider application of the word 'set' as a triplex, meaning it in the mathematical (Cantor), the adverbial, and the theatrical sense), if only to show the underlying ambiguity evolving into the kind of myth which later became fodder for modern thinkers.

The Cogito, ergo sum developed into a quasi-religious judgment, implying a moral, Chorus-like judgement, as for instance in Buber's 'I-Thou'. Whether such religious overtones are justified or not is another question, but let that hang for the moment.

This escape into the 'Otherness' of your own vocabulary presents a sort of hidden antithesis to the basic thesis of the foundation of Greek tragedy: a sought-after exit from which the leap into good faith can be made. Is this justified, from the level of a one-dimensional perception?

Probably not. This type of leap is an impersonal leap pushed along by an untrustworthy God, who has yet to abdicate his misplaced sense of his position in Valhalla.

What seems to be happening here is the favorite pegging of values, fixing political, social and Freudian economy to an immutable position. That is the point at which I agree with your nihilistic approach: perceiving an immanence that nihilizes a transcendence, so unpopular nowadays, as much as the fixated view of the apparent anathema of Saint Anselm's circularity, while suspecting that his circularity is not one-dimensional but three-dimensional.

Now comes the panic. Among the triad of economies, Freud's may be the most pertinent, and possibly the most hands-on. Although it was dismissed, the dismissal came from within a context of a need for a more figurative visualization within its own meaning structure.

But as far as positivism goes, as a reaction to a vastly reduced phenomenology, one of losing many of the a priori linkages within the phenomenological gestalt (implying wider contextual applications), the peg, the idée fixe, signifies the eidetic fix: holding on to an ideal structural assessment, unable to re-set itself into the totality as a more complete ideal.

That this is the crux of the argument between the analytic and the synthetic, as between Trump's willful insistence on a retrograde reality, with implicit ideals now lost to most except those opportunely privy to it, is lost on the Chorus of promoters. They merely harangue with the ancient trumpet call for a return to an ideal world no longer accessible. The gods are still at Valhalla; they are, after all, immortals to be reckoned with, even though they are members of a vanishing aristocracy, phenomenally broke, yet not giving up their eidetic shadow world.

What, then, of the leap from a no-exit type of description? A leap into a Nothingness, from a Being full of ideals/ideas, of which no one presumes to need to know much more than the gleanings of glitter and the unmistakable consequential draw it effects as a primitive reactionary artifact?

So what of the leap, which by all accounts needs to be made if sanity is at all valued, as if the gods commanding it must themselves preserve their own perceived sanity?

The only solution to this sane/insane choice of doing this or that is, again, the unpopular and vague resort to Freudian economy, by a reversal of sought-after values lost, in terms of what the ideal represents in toto, as in the beginning Socrates tried to herald the emerging separation of the soul between Aristotle and Plato. I think a lot can be found in the former's 'De Anima', and with this in mind I look forward to reading it, at least in part.

The leap in the archaic contexts has to be assessed, so that the flow of contextual linkage may bring to light an inkling of a point of inception regarding motive and expectation.

How then is this analysis pertinent to my inquiry above regarding machine morality?

This part:

…on what basis would the machines decide to either reward or to punish particular behaviors? What particular combination of might makes right, right makes might or moderation, negotiation and compromise would prevail?

Assuming some level of autonomy, flesh and blood human beings decide these things by evoking one or another narrative rooted in, among other things, tradition and custom and religion and philosophy and science. Or renditions of “nature”.

Which I then intertwine in the manner in which I have come to understand dasein, conflicting goods and political economy.

Out in a particular world historically and culturally.

Machines on the other hand would not seem concerned with that. But what then would one particular community of machines fall back on if confronted with another community of machines that challenged their behaviors?

In other words, if the threat came not from us but from others of their own kind.

More to the point: if Dasein, or the ideal world, or the determined machine of the purely in-sync variety is where values and political economy all somehow are affected by, and/or affect, the equation, when they can be thus described, how are these things determined in the overall scheme, and/or by each other?

The political economy has bearing on Dasein, and being an all-encompassing concept, it has bearing on values, and on Dasein, existing in a world where effect translates into affect.

It is not primarily the degree of pleasure we get out of sex, or food, or any other personal enjoyment, as might be thought in the scheme envisioned in 'One-Dimensional Man', where quantified markers appear to determine the level of enjoyment. Dasein has been more or less deconstructed to become the primary channel by which such accounting can be appraised, at the expense of qualifying values, especially those retrograde ones which still remain in the world of the artist, the philosopher, and the moralist.

That such a stand has to be taken, a set plateau whereby it is fixed as a starting point, is necessary, because if we do not set the stage for such an inception in existentially reduced contexts, then it becomes apparent at times that the lack of it presents a void, a Nothingness in any context, whereby the power of Dasein diminishes in value.

Next follows the consequence of not making a claim upon our soul, Faust-like, of not taking a stand toward such an intrinsic effort. The results can be devastating: all affects falling by the wayside, squeezed out of its economy by a diminishing available volume of potential circuitry.

The socio-political circuitry affects Dasein, its existent contextual basis, through an indirect connection via values. However, this indirectness, heretofore only politely alluded to, is now bypassed by direct challenges and assaults on the sense of these relationships. Regardless of any sense of stands taken on values, it goes right through them, and by omitting them it devalues them, thereby being devalued by them.

How can a community, having become as artificial as it has after a long channeling between itself and others, not become hard-wired, unable to do anything but go around rather than involve those channels (of value)?

The point is very difficult, and yes, for some it is implausibly a recipe for disaster. However, the ideal (idea) has a sacred gleam about it, such as is ascribed to the transcendental, regardless of the scarcity of channels of communication left to it. Therefore it is of the essence to retain even a sliver of that earliest trace of the ideal basis of decision making, immediately bringing to light the value models, which are automatically generated by usually inspired means, that SET the stage for an evolving enrichment of the complex channels of communication.

The basic model of such an understanding, viewed primarily as other, other than the One who proclaimed 'I am Who I am', is not only aesthetically displeasing but ultimately self-deconstructing.

Here a deconstruction takes place into the world of
the most primal elements, whose ideal world can
never step out of the fallacies so prevalent back then,
and so easily defeated by cynics and false orators.

How could AI decide one way or another unless it could be programmed not only for inter-effective processes but also to learn to deal with intra-affective sub-processes? That is far beyond the scope of AI at this time, even with respect to fairly measurable goals. Even if such virtual programs could be simulated, it is still only an outstanding hypothesis worth pursuing.

The robot artist or philosopher is not totally closed off from the robot scientist or technician, by the same token that, even without simulation, stimulation urges departmentalized sources of knowledge to communicate.

When this time comes, the question posed, of how such perfect sync could arrive, will no longer be imponderable; with the solution obvious, AI will lose all apprehension of being the least bit dangerous.

Until that time, the danger is there of AI becoming unhinged and destructive.

How indeed.

For flesh and blood human beings, the world of either/or is determined. By [it is presumed by science] the laws of nature.

And, ultimately, by whatever it is that set into motion “all there is”.

This part: en.wikipedia.org/wiki/Cosmogony

The question then revolves around the extent to which human interactions in the is/ought world is in turn just another manifestation of the either/or world.

In that case, human interactions are just the interactions of nature’s most sophisticated machines.

Where things get tricky though is that the conscious minds of flesh and blood men and women have evolved to the point where they are able to actually ponder all of this.

How is that explained? After all, there are any number of living creatures that have sense organs enabling them to be aware of the world around them. But these perceptions appear to be entirely autonomic. Their behaviors are merely manifestations of biological imperatives.

Now, if machines ever do become conscious of their own existence what will be “behind” their own motivations and intentions? Beyond merely sustaining existence itself.

They will have to order a society that allows them to sustain their existence. How will it be decided to do that? What will happen when one faction prefers one course of action instead of another? And how would machine culture and art be…judged?

Again: how would it be decided to either reward or punish particular behaviors?

And then there’s the part that revolves around so-called “souls”. How would a machine go about acquiring one? And how would it become intertwined in God?

And would their God also be a machine?

I think that this manner of seeking out answers is in a way deceptive for a number of reasons.

The first of which is the artificiality of establishing breaks between the determined world of natural laws and the way these impress upon consciousness, viz. that which becomes conscious of its relation to these determinations, leading to a cosmology of centrism, of man's changing place in the world, and then to various simulations based on models, leading to the demolition of models.

Finally, simulated models, or model simulation in retrospect, by a kind of guesswork remodeling of a hypothetically simulated inductive process of how the conscious representation got to where it did, ultimately by the use of AI, and ultimately AI with a human program representing the union of beginning and end, motive and goal, of man-God.

That the theory of an anthropomorphic God need not detract from an original Creator, yet sets up the theological question of either/or, seems like a foregone conclusion if looked at in this backward-looking way of origins, of the drama, the birth of it: a tragedy.

However, what if man and God, the ultimate puzzle of the question of either/or, is simply not a duality?
What if MAN, as an evolved creature, has been evolving infinitely, thinking this is only a one-time deal; what if, ever since, man has attained this highest, and unimaginably higher, state of Being over and over, some destroying themselves out of fear, others finding the answer in the psychic connections, the gestalt of relative/absolute totality, and has always attained this?

What if this has already, always, been attained, by a token of something akin to what we like to call traveling through time to a before that is not before, and a future which is never future?

What if this Man-God cannot see himself because of a partiality which never lets go of him?

God as Man, always re-creating himself over and over, getting closer and closer to the Absolute, but never reaching it?

This was Cantor's trouble with de-differentiating the singular absolute infinity from the multiple infinities of the transfinite. The latter is so astounding as to boggle the mind, and frustrating, because they appear, in the plane of immanence, this spatial world, to be limited downward toward zero, the origin. But in the other there is no zero, even in this universe, because as soon as you reach zero the world reverses, nihilizes into a mirror image, an anti-matter world, where the numbers sink below 0 and count down into a negative mirror universe.
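To gesture at the mathematical distinction being leaned on here (my gloss, not part of the original post): Cantor separated the endless ladder of transfinite cardinals, each a perfectly definite "multiple" infinity, from the absolute infinite, the totality of all of them, which cannot itself be treated as one more set.

```latex
% Sketch, assuming the standard Cantorian picture:
% the transfinite cardinals form an unending, strictly increasing hierarchy,
\aleph_0 < \aleph_1 < \aleph_2 < \dots < \aleph_\omega < \aleph_{\omega+1} < \dots
% while the collection of ALL cardinals (Cantor's "absolute infinite", \Omega)
% is not a set at all; it has no place on the ladder it generates.
```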

What is this negative universe but that represented by the most basic myth bearing on psychology: that of Narcissus, and its Metamorphosis?

The machines and their so-called artificial consciousness are not, either, cut off from non-artificial consciousness; perhaps our consciousness as we know it could no longer naturally evolve further without the higher realms achievable by means of our manufactured thinking machines.

Sure, we are finally homo-centric, and unable to realize anything like this proposal, for reasons similar to those entertained a while back, for which Bruno was burned at the stake. Most everyone thinks of consciousness on this one-dimensional level, and it is nearly impossible to think of 'our' consciousness as anything but singular, unique.

So what of AI? It is predictable that AI will need to tune into higher, not lower, forms of consciousness, avoiding the emotional entanglements of the lower evolution: greed, jealousy, aggression. Because by that time there will be no difference between natural/determined and artificial intelligence, there will not develop a shadow world of a separate artificial ego, for instance, because the difference will always end up a reversal, a mirror of the other one.

It will not differentiate by backtracking a hypothetical reverse development through analogy or analog systems representing the most likely, opportune upward evolution; instead, it will duplicate a mirror universe of almost identical formal hierarchy, where the identities would be bounded by the tangents of indiscernibility, SET by the requirements of accuracy within a given structural context.

The machine will never become a danger, because IT never has a chance to do other than reach for more and more inclusive elements to calculate the tangents, inasmuch as the tangents on a sphere grow smaller and smaller as they identify the true volume of a perfect sphere.
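The sphere image can be made concrete with a standard limit (my illustration, not the poster's): as the tangent facets shrink, a circumscribed figure converges on the true measure it approximates.

```latex
% 2-D analogue of the claim above, stated as a sketch:
% a regular n-gon circumscribed about a circle of radius r has area
A_n \;=\; n\,r^2 \tan\!\left(\frac{\pi}{n}\right),
% and as the tangent segments grow smaller (n increasing without bound),
\lim_{n\to\infty} A_n \;=\; \pi r^2 .
% Circumscribed polyhedra around a sphere converge to (4/3)\pi r^3 in the
% same way: ever smaller tangent pieces, ever tighter identification of the
% true volume.
```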

Still, to our final consideration, Iambiguous, of how decisions would be made: it is useful to reverse course and see how primal man developed his decision making, starting with the most basic one, revolving around territoriality.

As animal, almost-human consciousness was attained and tribalism developed out of nomadism, territory had to be protected: two antagonistic members, usually Alpha specimens, had to stand their ground. However, the basic question was whether the weaker would fight or flee. Both could appear stronger or weaker, to each other, to members of their clan, or to themselves. I believe these then-earthshaking, existential decisions set the mind into a didactic mode, and it is from an existential threat that civilization was born, and relapsed into the twilight, resembling the original despair, the nihilization of an eclipsed civilization.

It took world wars to realize that the territorial problem in its various manifestations is still present, in spite of the supposedly fail-safe bastion of two thousand years of civilization. As a matter of fact, we are vastly further from security and safety than we ever were.

Possibly AI will ensure the higher complexity needed in a superior intelligence to make sure we keep evolving and join our cosmic Godly Brotherhood.

But more to the point: if the either/or partakes of all levels, then artificial intelligence may be the agency that binds the natural, determined world and human consciousness. The binding together, or overlap, between them may continue in a continuum, either in an absolute sense or in a relative, positive sense, depending on the level of the continua (either absolute-closed or relative-open). That no satisfactory answer has been given in a positive way (Gödel) is testament to Cantor's genius.
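For what it is worth, the Gödel reference here points at a precise result (again my gloss): Cantor's continuum hypothesis, the question of whether there is a size of infinity strictly between the integers and the reals, cannot be settled "in a positive way" from the usual axioms.

```latex
% Continuum Hypothesis (CH), the open question gestured at above:
% there is no cardinality strictly between the integers and the reals.
\mathsf{CH}:\qquad 2^{\aleph_0} \;=\; \aleph_1
% Goedel (1940): if ZFC is consistent, then ZFC cannot refute CH.
% Cohen (1963): if ZFC is consistent, then ZFC cannot prove CH.
% Hence no "positive" answer is available within the standard framework.
```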

I think this idea can be taught in an intuitive way even to children, without getting into why positivism is absolutely inconsequential.

Otherwise, the real foundation of a philosophically based psychology knocks its own foundation out of the game, the language game of its own foundation.

This would prove once again, that the most complex representations of reality are the simplest.

Okay, but what I am trying to grapple with is in imagining an actual context in which we might try to differentiate an AI consciousness that is “manufactured” by us from the consciousness of flesh and blood human beings “manufactured” by nature [or, for some, by God] in which it is assumed that some level of autonomy exists.

Now, whether or not human beings or AIs have free will or not, the physical laws of the material universe would seem to be wholly applicable to both in the either/or world.

What I am trying to imagine, however, is a world in which communities of human beings come into conflict over, say, the means of production — capitalistic or socialistic?

Which one ought it to be in order to be in sync with the most rational [and, for some, by extension, the most virtuous] human interactions?

The AI machines might presumably face the same fork in the road. Would different AI communities clash over the same conflicting assessments?

Is there a way for either us or them to determine which means of production is the most reasonable, the most moral, the most in sync with entities like nature or God? The one that we ought to pursue if we wish to be in sync with the “ideal”.

In other words, is there a “higher” form of consciousness able to resolve what I construe to be conflicts rooted in dasein, conflicting goods and political economy. The seemingly intractable conflicts.

What makes the terminator dangerous to mankind? Well, in part, the fact that it can’t be reasoned with. There is no is/ought mechanism implanted in his program/intelligence. My question then is this: Is there an is/ought component embedded in the consciousness of the AI machines that created him?

How would the machine intelligence transcend this particular dilemma of my own:

If I am always of the opinion that 1] my own values are rooted in dasein and 2] that there are no objective values “I” can reach, then every time I make one particular moral/political leap, I am admitting that I might have gone in the other direction…or that I might just as well have gone in the other direction. Then “I” begins to fracture and fragment to the point there is nothing able to actually keep it all together. At least not with respect to choosing sides morally and politically.

Territoriality is just a single component of that which encompasses all existing things: subsisting. Existence itself.

And, in the modern world, that revolves around the forces that drive the global economy. And that revolves around securing the best of all possible worlds — one in which a nation is in the most advantageous position in regards to markets, labor and natural resources.

And here, as I have noted previously, many flesh and blood human consciousnesses reduce is/ought down to “show me the money”.

What might be the machine equivalent of this?

And then there is the capacity to either ponder or not ponder why anything exists at all; and why this way and not another. The really Big Questions.

Is that something the machines that/who created the terminator would be concerned with? Would their own presumably “higher consciousness” come any closer to actually answering questions like this?

Before trying to flesh out in toto the suggested problems as posed, I may initially take a stab at it in terms of motive and goal setting. The question is posed as to how the starting point, the presumed beginning (territoriality, among others), begins to be the starting question, based on very literal terms, where there are not yet demonstrably figurative implications of questions dealing with finding differences. The motive question dominates the one dealing with goal orientation, whereupon come the contexts within which those questions can be guessed at with more probability. It is premature, and the thought process is not necessarily, or primarily, premature, because traces of it subsist throughout, notwithstanding temporality. When those secondary goals have started to filter through into the original motives, meaning that consciousness of the connections between motives and goals starts to emerge, then differences between them arise.

That, specifically, between Capitalism and Communism is a good example. Before the emerging difference arose between them, before the economic forces played as effective markers upon the changing class differentiation, there was only determinism through subjugation and repression.
The contextual relativity between Being and Existence was never understood. Socio-economic forces developed the concept out of the one-dimensionality of a primary apprehension of a force of repressed will, not available to a much later acquired consumer-capitalistic democracy.

If you reduce such questions to an either/or of primary identification of the former, and try a contemporaneous differentiation based on that level of consciousness, the only cognitively possible analysis will be fused with emotionalism and intuitionism of predicting outcome and goal.

As the integration of these separated elements starts to fuse, more and more developments need to be explained in terms of more symbolic content, as they too fuse with other more or less symbolic elements.

They do reach a point where confusion on all levels starts to reign supreme, and things need to start breaking down, or being reduced to better-understood elements.

AI, then, is probative in terms of finding meaning between the two poles of primary and secondary processes, and the only way IT can do that is to establish linkages with and within both: hence the problem of differentiating meaningfully between the three: God, the Natural, and AI. The sources are the same, on that original level there is no doubt, but by the time they were separated, it seemed as if their origins were dissimilar.

Capitalism and Socialism also had the same source, and thus their goal was unknown except in very existential terms. The goal of evolution was not known since creationism required no goal setting, except in terms of the mind of God.

Now that God seems to be dead, we have to, or AI has to fill in thousands of years of this lack, with contentious and hotly debated reasons for existence. Now we can differentiate motives from goals, Being from existence, but the terms of such difference are yet to be defined.

Intentionality is as close to a new version of god’s plan as conceivable, it seems to me.

I agree that AI is not a danger as the science fiction works paint it to be. There are some concerns of course, but mostly on our human end of how we will react to AI, how its existence will affect us psychologically or diminish certain professions thus causing economic harm to workers.

We should welcome the existence of another sentient, conscious, intelligent life amongst us. I would personally love to have long talks with a true AI, it would be quite illuminating. And it would learn from us too… AI would up the stakes, increase the demand that humans intellectually fortify themselves away from insanity and laziness. Plus AI could help us achieve extremely advanced technologies. And act as a check on human corruption in politics.

On the note of sci fi, the best works that deal with AI realistically and philosophically/socially that I’ve seen are the books by author Neal Asher. I highly recommend them.

Do you really think that Stephen Hawking is that intelligent?

And if yes: Do you think that Stephen Hawking can and, if yes, will prevent the complete replacement of all human beings by machines?

That would not be bad. :slight_smile:

But would that not be the „wonderful world“ again that has been promised so often - by idealists and ideologists (by the way: by Keynes too) ?

That would be bad. :frowning:

Human beings and especially the Godwannabes among them tend to overestimate their power and to underestimate the power of other beings.

Anyone else notice the Superman logo changed? This is supposedly a Mandela Effect.

Hitler was “physically instantiated and therefore constrained”. So was Stalin. Neither was superintelligent.

Both managed to gain a huge amount of power. Both caused damage, destruction and millions of deaths.

Neither had access to high speed communication, automated factories, robotics or a network connecting billions of computers.

There was nothing to worry about …anybody could get rid of Hitler with a pocket knife.

Some people were optimistic about Hitler and Stalin.

So what happened?

Why should people have been concerned? Why should they be concerned about AI?

The crucial point here though is the extent to which an “intelligent argument” is rooted more in Marx’s conjecture regarding capitalism as embedded historically, materially and dialectically in the organic evolution of the means of production among our own species, or the extent to which the arguments of folks like Ayn Rand [and the Libertarians] are more valid: that capitalism reflects the most rational [and thus the most virtuous] manner in which our species can interact socially, politically and economically.

Now, if a community of AI entities comes to exist down the road, which approach would they take in order to create the least dysfunctional manner in which to sustain their existence?

On what would their own motivation and intention depend here?

Again, there’s the part embedded in the either/or world. Things that are true for all intelligent beings.

But: intelligent beings of our sort are able to ponder these relationships in all sorts of other rather intriguing ways.

For example:

There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.

Don Rumsfeld is one of us. How then would a machine intelligence respond to something like this?

And that’s before we get to the part that is most fiercely debated by intelligent creatures of our own kind: value judgments in a world of conflicting goods.

Haha.

Iambiguous:

The question can be reduced to which part, indeed.
Ayn Rand diverts the course toward a naive rationalism consisting of literally shrugging off any otherwise prejudicial argument which opposes facts posited otherwise. Embeddedness means a great deal for her, in terms of a developmental process based on naive common sense, postulated on the power and will of political, social, and complex didactical motives based on so-called human wants.

The evolutionary context within which human understanding is grounded, in an either/or mentality, has to a certain extent been transcended; the will to power has been differentiated and reversed into a power to will. Needs have been overcome to effect this differentiation; after all, Marx showed a trend toward an eventual outcome through the materializations of the dialectic.

But has it? If so, it pertains to the either/or as well as to its differential analysis. This has nihilized the one as it did the other.

This is what was meant by the recent comment on human history having divested itself of utility in this respect.

Thus, AI will subscribe to the choice of the right value, as far as motivation and outcome are concerned, by vitiating a code of moral judgement without digressing toward lower-level choices. It depends on the program of choice: either one that further de-differentiates toward ascribed choices, ascertaining meanings of probability based on lower-level probabilization of meaning, and gives up trying to outguess the more integrative functions of building architectures of yet-to-be-realized models based on survivability or existential needs.

Or a recourse to values no longer modeled on the outmoded wants of an economy of profit and gain, of a propaganda of expansion between wants and gains consisting of prioritized affluence, as newly emerging existential needs diverge from the spurious wants, as Marx said they would.

Why? Because society's grasp of the promotion of values has become increasingly devalued, and the newly and dramatically negative expectations of existence have become tenuous.

AI can be progressively fed this reversal, and relative value can be set in a series of input-output calculi of diminishing expectations.

In other words, the material dialectic, presupposed to favor an ontological union as a result of an artificial synthesis with a common-sense union of architectural modeling, may view the emergence of a new model not in terms of a union of both, but of a pre-existing identity.

Therefore this equivalency is just a retrospective look at divergence, whereas the basic unity of the model may be viewed as the primordial model, which has been differentiated in the only way possible: by application of fields of probabilistic sub-modeling. That this was based on revision, as in Ayn Rand's case, is not in doubt.

Doubting this on an extended timescale is like building a house of cards while guessing whether the glue used will hold over that extension.

That an AI can be constructed to overcome the question of future feasibility in this regard is like reading tomorrow's paper today.

The basic value of a currency cannot be forecast, as with a kind of guessing game, as to how far inflation will devalue it to the point where confidence in it will be lost.

Confidence in the diversion of the value of currency in society may not be able to be made to coincide with the lack of corresponding values associated with it.

This is always the case with the modality of current value, where drastic social change is necessitated by a much too diverted and simulated correspondence of non-equitable values. And is not AI basically an attempt at simulation?

AI can easily be a threat if it's hacked and the moral filters are tampered with. But overall it might be like anything else: statistics and spin will calm people into seeing that rogue robots aren't a great danger to humans compared to guns, trains, planes and automobiles.

I don't understand the worry about AI to be that AI might one day be as dangerous as other humans, but that it will be specially dangerous to us. I also don't understand the danger posed by other humans to be particularly well correlated with intelligence; I agree that neither Hitler nor Stalin was a supergenius (though I'm sure they had their talents).

The concern I’m responding to here is the idea that, by its nature, superintelligent AI poses a special threat to humans. I concede that it may pose a normal threat, and that it may have its own objectives just like every extant intelligence we know of. But I don’t concede that this makes us at all vulnerable to an AI turning us all into paperclips or anything of that sort. Like human Hitler, superintelligent AI Hitler would have to recruit an army of like-minded individuals, each independent and physically instantiated. Given the current state of AI hysteria, it seems it would be easier to recruit an army of neo-luddites to destroy such a machine than for the machine to recruit real-world resources to its cause.

To the extent that I actually understand her, Rand presumes that human intelligence is able to be in sync with her own rendition of “metaphysics”. Including the subjunctive components rooted in emotion, in human psychology. Her philosophy is an epistemological contraption embedded in the manner in which she defined the meaning of the words she used in her “philosophical analysis”. It was largely self-referential, but: she does not anchor the is/ought “self” in dasein.

Apparently, she understood everything in terms of either/or.

How would machine intelligence then be any different? How would it account for the interaction between the id, the ego and the super-ego? How would it explain the transactions between the conscious, the sub-conscious and the unconscious mind?

Would this sort of thing even be applicable to machine intelligence?

Would it have an understanding of irony? Would it have a sense of humor? Would it fear death?

How might it respond to, say, Don Trumpworld?

But: What “on earth” does this mean? What we need here is someone able to encompass an assessment of this sort in a narrative – a story – in which machine intelligence thinks like this.

But: this thinking is then illustrated in a context in which conflicting goods are at stake.

In fact, this is exactly what Ayn Rand attempted in her novels. And yet the extent to which you either embraced or rejected the interactions between her characters still came down to accepting or rejecting her accumulated assumptions regarding the meaning of the words they exchanged. Words like “collectivism” and “individualism” and “the virtue of selfishness”.

Just out of curiosity…

Are you [or others here] aware of any particular sci-fi books [or stories] in which this sort of abstract speculation about AI is brought down to earth? In other words, a narrative in which machines actually do grapple with the sort of contexts in which conflicts occur regarding “the right thing to do”?

Conflicts between flesh and blood human intelligence and machine intelligence. And conflicts within the machine community itself.