Okay, but what I am trying to grapple with is imagining an actual context in which we might try to differentiate an AI consciousness “manufactured” by us from the consciousness of flesh-and-blood human beings “manufactured” by nature [or, for some, by God], a context in which it is assumed that some level of autonomy exists.
Now, whether or not human beings or AIs have free will, the physical laws of the material universe would seem to be wholly applicable to both in the either/or world.
What I am trying to imagine, however, is a world in which communities of human beings come into conflict over, say, the means of production — capitalistic or socialistic?
Which one ought it to be in order to be in sync with the most rational [and, for some, by extension, the most virtuous] human interactions?
The AI machines might presumably face the same fork in the road. Would different AI communities clash over the same conflicting assessments?
Is there a way for either us or them to determine which means of production is the most reasonable, the most moral, the most in sync with entities like nature or God? The one that we ought to pursue if we wish to be in sync with the “ideal”.
In other words, is there a “higher” form of consciousness able to resolve what I construe to be conflicts rooted in dasein, conflicting goods, and political economy? The seemingly intractable conflicts.
What makes the Terminator dangerous to mankind? Well, in part, the fact that it can’t be reasoned with. There is no is/ought mechanism implanted in its program/intelligence. My question then is this: Is there an is/ought component embedded in the consciousness of the AI machines that created it?
How would the machine intelligence transcend this particular dilemma of my own:
If I am always of the opinion that 1] my own values are rooted in dasein and 2] there are no objective values “I” can reach, then every time I make one particular moral/political leap, I am admitting that I might have gone in the other direction…or that I might just as well have gone in the other direction. Then “I” begins to fracture and fragment to the point that there is nothing able to actually keep it all together. At least not with respect to choosing sides morally and politically.
Territoriality is just a single component of that which encompasses all existing things: subsisting. Existence itself.
And, in the modern world, that revolves around the forces that drive the global economy. And that revolves around securing the best of all possible worlds — one in which a nation is in the most advantageous position with regard to markets, labor and natural resources.
And here, as I have noted previously, many flesh and blood human consciousnesses reduce is/ought down to “show me the money”.
What might be the machine equivalent of this?
And then there is the capacity to either ponder or not ponder why anything exists at all; and why this way and not another. The really Big Questions.
Is that something the machines that/who created the Terminator would be concerned with? Would their own presumably “higher consciousness” come any closer to actually answering questions like this?