back to the beginning: morality

This is the main board for discussing philosophy - formal, informal and in between.

Moderator: Only_Humean


Re: back to the beginning: morality

Postby iambiguous » Thu Jul 04, 2019 7:25 pm

"Artificial Consciousness: Our Greatest Ethical Challenge"
Paul Conrad Samuelsson in Philosophy Now magazine

Assuming, then, that we can come to create consciousness digitally, it ought to be obvious that the suffering of AI is potentially indefinitely more horrendous than even the worst imaginable human suffering. We stand in a position to develop the means for creating amounts of pain which vastly outweigh any previously seen in the history of human or animal suffering. The obstacles to creating biological suffering are demanding – the number of possible biological beings is relatively low, their upkeep is high, and they are prone to becoming desensitized to painful stimuli. In the digital world, when simulated consciousnesses can be programmed in computers to be subject to whatever laws we wish, these limitations disappear.


Pain and suffering.

How does one wrap their head around mindless matter evolving over billions of years into self-conscious mind-matter able to experience pain and suffering?

We can imagine how excruciating our own pain and suffering would be in any number of contexts that revolve around, say, "natural disasters". But the mindless matter embedded in the tumultuous fury of the tornado or volcanic eruption or a raging fire itself presumably feels nothing at all.

And yet pain and suffering are often a critical factor when we approach that distinction between ethical and unethical behavior. The more of it we inflict on another, the more likely it is to be seen as unethical.

But can we then create an artificial consciousness able to both feel pain and suffering and to approach the behaviors of other AI beings as either moral or immoral?

Again, assuming that in creating an AI "I", we have some measure of autonomy and are in turn able to "manufacture" autonomy in this artificial intelligence "I".

Really, how close are we to dealing with these things in reality? A reality such as this for example:

The consequences are not fully comprehendible, but let me sketch an image of what could be possible. Someone could, for example, digitally recreate a concentration camp, fill it with sentient, suffering AI, and let it run on a loop forever. It may even be possible to turn up the speed of the suffering, so that a thousand reiterations of the camp are completed every second. From the perspective of the AI, this will feel no different from what was felt by those who suffered through the real thing. Then the programmers use the copy-and-paste function on their computer, and double it all again… So the reason that pain-disposed AI is the greatest ethical challenge of our time is that it could so easily be caused to suffer. Picture a bored teenager finding bootlegged AI software online and using it to double the amount of pain ever suffered in the history of the world – all in one afternoon, and from the comfort of a couch.


How, in an autonomous universe, would the question of morality be the same or different for biological individuals and those individuals created by biological individuals to feel pain and suffering and to react to it as either justified or not justified?
He was like a man who wanted to change all; and could not; so burned with his impotence; and had only me, an infinitely small microcosm to convert or detest. John Fowles

Start here: viewtopic.php?f=1&t=176529
Then here: viewtopic.php?f=15&t=185296
And here: viewtopic.php?f=1&t=194382
User avatar
iambiguous
ILP Legend
 
Posts: 31020
Joined: Tue Nov 16, 2010 8:03 pm
Location: baltimore maryland

Re: back to the beginning: morality

Postby iambiguous » Sun Jul 07, 2019 7:05 pm

"Artificial Consciousness: Our Greatest Ethical Challenge"
Paul Conrad Samuelsson in Philosophy Now magazine

If there are such things as cultural and moral progress, they pale in comparison to the technological explosion that humanity has experienced in the last ten thousand years, faster still in the last century. The advancement of invention is palpable, high-speed and tremendously useful to everyone – few people feel they need further motivation to embrace ever newer and more audacious gadgets, software, and weapons. Yet, as the story progresses, our inventions become more powerful and thereby riskier. So far, the potential mishaps have been manageable. Our historical nuclear disasters have been survivable because of their relative small scale. Artificial intelligence is an invention which promises to be far more destructive if misused. We have the existential risks to humanity which have already been raised by the authors mentioned above. Now we have also seen that there are consequences even more problematic than nuclear holocaust, as weird as that may seem.


Morality exploding in ever more mindboggling directions as new technologies beget brand new contexts in which to argue distinctions between right and wrong behaviors. Only now a flesh and blood "I" has to contend with an AI "I" that may or may not be in sync with any particular moral narrative and political agenda.

And then the equivalent of me down the road arguing the extent to which dasein, conflicting goods and political economy are in turn applicable to this AI "I".

That ever expanding gap between the extraordinary acceleration of things that we know are true objectively for all of us in the either/or world, and the fact that, going all the way back to the pre-Socratics, the is/ought world is still bursting at the seams with subjective renditions of conflicting goods. Only now the technological boom has ushered in any number of brave new worlds to contend with.

Artificial intelligence has for decades been the greatest hope for transcendence and fulfilment in the secularised West. Chasing the unyielding dream of perfecting the world, convinced that we are entitled to anything for which we strive, as so often before, we put ourselves beyond morality. But now we’re claiming our reward at potential costs so terrifyingly great for others that they resemble Dante’s Inferno or Memling’s Final Judgement, perhaps as just the first monument of the forthcoming Homo deus.


In other words, one thing never changes: political economy. You can bet that those who own and operate the global economy [be they flesh and blood or homo deus] will make certain that whatever is deemed "theoretically" to be right and wrong in places like this, always comes down to the behaviors that they are empowered to enforce in order to sustain their own perceived best interests.

Re: back to the beginning: morality

Postby iambiguous » Mon Jul 15, 2019 7:19 pm

Question of the Month
"Is Morality Objective?"
From Philosophy Now magazine

Ronald W. Pies

I should like to reformulate the question as follows: Can we demonstrate that any moral claim is objectively true? My reply is ‘Yes and No’.

It seems clear that to answer this rephrased question, we must have a notional idea of what the term ‘objective’ means. Not surprisingly, its meaning is highly contested.


The part that I keep coming around to. You can believe that morality is objective. You can claim to know that it's objective. But how do you actually go about demonstrating that it in fact is, given human interactions that come into conflict over behaviors said to be either right or wrong?

The "notional idea" of objective morality is one thing, but it's not the thing that most interest me.

The economist and philosopher Amartya Sen has described two central features of objectivity: observation dependence and impersonality. In effect, Sen meant here that objectivity requires both careful observation and inter-observer corroboration. Thus, on Sen’s view, if I say, “I truly and deeply believe that your house is on fire” without having observed your house, I am making a subjective claim. In contrast, if two people simultaneously witness smoke coming from your house and say, “We believe your house is on fire,” Sen would argue that they are making a type of objective statement.

But Sen’s use of ‘objective’ doesn’t seem to work well for moral claims.


Exactly.

Instead, let's focus in on human interactions in which any number of conflicting moral and political assessments crop up.

How about the role of government in our lives? Some value a considerably larger role than others, depending on the issue.

Now, in regard to the "two central features of objectivity" above, everyone will agree that objectively the government does in fact exist.

But, using these two features, how is it determined objectively what the role of government ought to be in regard to, say, the legalization of marijuana use?

Smith and Jones might agree that someone just stole a loaf of bread from the grocer, but disagree as to the ‘wrongness’ of the act. For example, suppose the thief was penniless, starving and had no other recourse. It appears there is no objective means of adjudicating the matter.


In a word: context. Construed subjectively from a point of view rooted in dasein.

However, philosopher Alasdair MacIntyre’s ‘virtue ethics’ suggests that a degree of moral objectivity is possible – within the confines of certain communities and their shared values. For MacIntyre, there are objective standards of virtue found within a tradition, such as the ethical traditions of ancient Athens. For MacIntyre, in a given society, the moral code is based on what is agreed to be the shared end of the society and the best way to achieve it, which also gives each member their proper role in the society and their own proper tasks. Thus, in a society one of whose shared aims is the protection of private property, it would be objectively wrong to steal a loaf of bread, all other things being equal. So, morality itself may not be objective, but for people who share a worldview expressed by the community, morality has context and a shared meaning.


Sure, if you insist that a consensus reached in any particular aggregation of human beings subsisting in any particular historical, cultural and community context need be as far as one goes in order to claim that morality is objective, then, for you, that makes it so.

You merely assert it to be true.

But then there's the part that revolves around who gets to decide what this consensus shall be. The role that economic wealth and political power play.

And then the part where communities come into contact with other communities and the consensuses themselves come into conflict.

Thus the "notional idea" of objective morality falls apart at the seams when "for all practical purposes" there is no philosophical or scientific method for pinning them all down once and for all.

Re: back to the beginning: morality

Postby iambiguous » Sat Jul 20, 2019 7:52 pm

Question of the Month
"Is Morality Objective?"
From Philosophy Now magazine
Kristine Kerr

You are ugly and grossly overweight. Consider how you feel after reading that. Keep that feeling to hand for the moment. That sentence is an insult, and I shouldn’t have written it, due to the feeling it has most certainly caused in you, and would cause in me had such an insult been aimed at me, regardless of its truth or falsity. A wrong has been committed, a moral law has been broken. It’s not a law contained in a spelt-out legal system; but it doesn’t have to be spelt-out to be real. Instead, the hurt feelings in the insulted person make the offence fairly objective. By ‘objective’ here, I mean existing universally, or virtually universally: anyone and everyone would feel insulted, assuming they understood the words. By those words I have created something that’s out there: it’s objective. You can’t see it, but you feel its sting. It registers.


This in my view is clearly a frame of mind that only has to be believed to make it true. Someone thinks that it's true. In part because, subjectively, they feel that it's true as well. But is the fact that you believe and feel something to be true all the proof we need to establish that it is in fact true objectively?

Should we then make it illegal to call anyone ugly and grossly overweight? Should we enact a punishment for doing so?

And it clearly intertwines human emotions into the mix. If someone hurts your feelings, that in turn ought to become an important factor in establishing the objectivity of behaviors deemed to be either right or wrong?

These hurt feelings -- a genetic component of human biology -- are said to be universal, or virtually universal?

But then:

You will of course have realised that I didn’t mean what I wrote; but for that initial moment the feeling was real. It is in those kinds of moments where morality is shown to be objective, where everyone ‘sees’ the offence: when the ghost in the machine (if I may borrow that phrase) becomes solid.


Okay, let's choose a hundred people at random, put them in a room and ask them to pin down that which ought to be construed as universally offensive. What hurts their feelings? And if, say, the liberals and the conservatives compile a very different list? Or, instead, is the whole point here that, in being able to note that feelings can be hurt in all of us, this becomes the basis in and of itself for claiming objective morality?

On the other hand...

This kind of ‘real’ is clearly not the same real as, say, the keyboard with which I wrote the sentence, but there are many types of real: real love, real bananas, real quantum particles. While the feeling isn’t empirical evidence as are results taken with a ruler or beeps on a Geiger Counter, it is real evidence of a different kind. I can’t proclaim an area safe from radiation with a ruler, it’s the wrong detector. I need the correct tool, a Geiger Counter, to do that. We, human beings, are the morality detectors. We all feel the sting when something wrong has been created, say an insult has been slung – and therein lies the objectivity.


This in my opinion is the classic example of the "general description" argument. You make a blanket statement about human interactions: We all feel offended by certain behaviors.

So, let's use this as the basis for establishing that morality is objective. That we cannot intertwine our general rule into a frame of mind that actually differentiates right from wrong behavior when conflicting goods are confronted in any particular context shouldn't stop us from pointing out that it's still there to suggest that a resolution must be within our reach, because objective morality has been proven to exist.

And I'm not arguing that this is an irrational point of view. I'm only looking for someone who embraces it to bring their assumptions down to earth and explore them...existentially.
