The Moral Automaton


The Moral Automaton

Postby Carleas » Wed Jan 22, 2020 9:13 pm

[What follows is abstract, perhaps to the point of uselessness. It is what Venkatesh Rao calls 'refactoring', something of an exercise in unpacking moral questions by translating them into a very different kind of question. If that kind of thing upsets you, you may want to stop here.]

At its simplest, a cellular automaton is a grid of cells (think pixels), each in one of two states (on or off), that changes over time following simple rules applied to every cell, e.g. if at time t1 a cell is on and x of its neighbors are on, turn the cell off at t2. This simple setup can produce surprising and complex results, and as a result it has attracted a lot of interest from people studying complexity and emergence.
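
To make that concrete, here is a minimal sketch in Python of one such automaton, using Conway's Game of Life as the rule set (the best-known instance of this kind of grid; the wrap-around edges are my simplification):

[code]
# Minimal two-state cellular automaton: Conway's Game of Life rules.
# Every cell is 0 (off) or 1 (on); the same rule applies everywhere.

def step(grid):
    """Advance the grid one time step; edges wrap around (torus)."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the 8 neighbors, wrapping at the edges.
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            # Life's rule: an on cell stays on with 2 or 3 on
            # neighbors; an off cell turns on with exactly 3.
            new[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return new

# A 'glider': five on cells that crawl across the grid indefinitely.
grid = [[0] * 10 for _ in range(10)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1
for _ in range(4):
    grid = step(grid)
[/code]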

We can also make the cellular automaton more complicated: adding more states, abstracting from a regular grid to a highly connected network, and using more elaborate rules. The idea is the same: cells change over time following rules, and complex behavior emerges.

Now consider society as such a system, with each person as a node in a highly connected network, each with many, many possible states. Further, take morality to be the set of rules that govern each node. Different nodes have different rules, but that's OK, so long as nodes' state transitions are governed by their rules. This is the moral automaton.
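
A hedged sketch of that generalization (all the names here are mine, there's no standard library for this): each node carries its own state and its own rule, and updates by looking only at its neighbors' states.

[code]
# Sketch of the 'moral automaton': a network where each node has its
# own state and its own transition rule. All names are illustrative.

class Node:
    def __init__(self, state, rule):
        self.state = state      # one of many possible states
        self.rule = rule        # rule(state, neighbor_states) -> state
        self.neighbors = []

def step(nodes):
    """Synchronous update: every node applies its own rule at once."""
    new_states = [n.rule(n.state, [m.state for m in n.neighbors])
                  for n in nodes]
    for n, s in zip(nodes, new_states):
        n.state = s

# Three nodes, two different 'moralities': conformists copy the
# majority of their neighbors, a contrarian always changes state.
conform = lambda s, ns: max(set(ns), key=ns.count) if ns else s
contrary = lambda s, ns: (s + 1) % 5
a, b, c = Node(0, conform), Node(1, conform), Node(2, contrary)
a.neighbors, b.neighbors, c.neighbors = [b, c], [a, c], [a, b]
for _ in range(3):
    step([a, b, c])
[/code]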

A few questions come to mind:
- Do we care about the moral automaton, as distinct from the nodes (i.e. is the automaton itself a moral patient)?
- What is the best set of rules, if we assume everyone will use the same set? (I leave 'best' undefined here, so import whatever that is from your rule set... er, moral system)
- What is the best set of rules for a node, given uncertainty about what rules other nodes will follow?

Thinking of morality and social interaction in this way seems to pose a problem for consequentialists of any type. The simple cellular automata we started with are known to behave unpredictably; see, for example, the behavior of elementary cellular automata. The patterns that emerge are hard to predict from the rules, and in some cases they are impossible to predict even in principle, since some rules (Rule 110 is the proven case) are capable of universal computation, which makes their long-run behavior undecidable. If moral rules act this way in society, the result for the moral automaton is likely impossible to predict, and depending on the rules the outcome may be unpredictable in principle. What, then, can the consequentialist say in defense of any rule set? This is not the only way to raise this objection, but this framing gives us mathematical certainty that consequences cannot be predicted even in principle.
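
For anyone who wants to watch this happen, here is a minimal sketch of an elementary cellular automaton. Rule 110 is the rule Matthew Cook proved capable of universal computation, which is what makes its long-run behavior formally unpredictable:

[code]
# Elementary cellular automaton: one row of on/off cells, each cell
# updated from itself and its two neighbors (8 cases -> 8 rule bits).

def eca_step(row, rule=110):
    """One step; the rule number's binary digits give the output
    for each of the 8 possible three-cell neighborhoods."""
    n = len(row)
    return [(rule >> (4 * row[i - 1] + 2 * row[i] + row[(i + 1) % n])) & 1
            for i in range(n)]

# Rule 110 is capable of universal computation, so predicting its
# long-run behavior is undecidable in general.
row = [0] * 31
row[15] = 1
for _ in range(16):
    print(''.join('#' if x else '.' for x in row))
    row = eca_step(row)
[/code]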

But perhaps the most interesting question to me is how the node relates to the moral automaton. Should rules be chosen by appeal to the global behavior of the automaton? Is morality about the well-functioning of the automaton (which is roughly my claim here, if put into different terms)? We can't predict outcomes, but we can observe the global behavior of the automaton and see if it's producing the right kind of chaotic output: processing information, generating novelty, etc. Since boring outcomes like "all cells off" are easy to predict, is chaotic output at the global level the best we can aim for?
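
As one hedged way to operationalize "the right kind of output" (this is a proxy I'm making up here, not a settled measure): score each global state with the Shannon entropy of its on/off mix. Frozen states like "all cells off" score zero; mixed, changing states score higher.

[code]
import math

# A rough proxy for 'interesting' global behavior: the binary entropy
# of the fraction of cells that are on. All-off or all-on scores 0;
# a mixed, information-bearing state scores up to 1 bit per cell.

def on_fraction_entropy(cells):
    """Shannon entropy (bits) of the on/off mix across all cells."""
    p = sum(cells) / len(cells)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# e.g. flatten each time step of the grid automaton above and watch
# whether the score collapses to 0 (dead) or stays in between:
# scores = [on_fraction_entropy(sum(g, [])) for g in history]
[/code]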

Morality is ultimately grounded in intuition, so it might not be possible to think about it using an unintuitive construct like the moral automaton. But it's also hard to say that the moral automaton doesn't accurately capture an aspect of morality that is underexplored by the major schools.
User Control Panel > Board preference > Edit display options > Display signatures: No.
Carleas
Magister Ludi
 
Posts: 6025
Joined: Wed Feb 02, 2005 8:10 pm
Location: Washington DC, USA

Re: The Moral Automaton

Postby WendyDarling » Thu Jan 23, 2020 12:31 am

Carleas wrote
complex behavior emerges.

From the "on" state? Or from the "off" state? or both? I'm getting confused especially when building or developing character in individuals is no longer taught at home or in the educational system, character which I consider the basis of moral on's and off's.
I AM OFFICIALLY IN HELL!

I live my philosophy, it's personal to me and people who engage where I live establish an unspoken dynamic, a relationship of sorts, with me and my philosophy.

Cutting folks for sport is a reality for the poor in spirit. I myself only cut the poor in spirit on Tues., Thurs., and every other Sat.
WendyDarling
Heroine
 
Posts: 7492
Joined: Sat Sep 11, 2010 8:52 am
Location: Hades

Re: The Moral Automaton

Postby Carleas » Thu Jan 23, 2020 4:17 pm

The complex 'behavior' is of the automaton, whose global state changes unpredictably from one time step to the next. In a cellular automaton, the cells are simple, having a small number of states and changing according to simple rules. The automaton as a whole has many more states, and changes between those states in ways that are difficult to describe with global-level rules (at least for some rule sets; others are very simple).

The term 'behavior' gets messy as applied to society, where it's appropriate to talk about the behavior of nodes (i.e. individuals) as also complex: they have a much greater state space and are thus similarly difficult to predict. But there is still an automaton-level description, and a coherent idea of behavior at that level.

[EDIT: a word]
User Control Panel > Board preference > Edit display options > Display signatures: No.
Carleas
Magister Ludi
 
Posts: 6025
Joined: Wed Feb 02, 2005 8:10 pm
Location: Washington DC, USA

Re: The Moral Automaton

Postby Del Ivers » Thu Jan 23, 2020 4:40 pm

Carleas

That the clock tells me the hour is important to the degree that I have ascribed value to the clock. But if I have to deal with what others have ascribed to the clock then I/we have a situation. We may agree that "4pm" is a viable, existential indicator, but that's about it. Personally, we could each have a very different take on "4pm". And even at that, if we did agree "relatively" on a shared meaning/morality it would only be between you and me.

"Do we care about the moral automaton, as distinct from the nodes (i.e. is the automaton itself a moral patient)?"

Separating the content from the container? Yes, that's already done - by death, a final distinction. All 'caring' is a counter-argument to existential cessation, e.g., as a parent you can care for your child more than anything else, even your own life, but your caring will not avert the inevitable demise of the child in the due course of time.

"What is the best set of rules, if we assume everyone will use the same set?"

'If we assume', the viable existential indicator.

"What is the best set of rules for a node, given uncertainty about what rules other nodes will follow?"

Since uncertainty at its apex is of a cosmic level, the rules are going to be of a character appropriate to the prevailing human context. With that in mind, the human being has to compartmentalize their caring into an appropriate and, most importantly, manageable existential premise. Thus, the rules are not fixed; at best there are 'working' rules.

Religious manipulators/manipulation understood this. They claim(ed) knowledge of the Cause, of the 'primo motore' of existence. They refactored the existential much like a clock refactors time.

Granted, perhaps an abstract answer to some. But you did begin thusly.
Del Ivers
 
Posts: 141
Joined: Thu Mar 14, 2019 10:09 pm
Location: Nevada

Re: The Moral Automaton

Postby Meno_ » Thu Jan 23, 2020 5:22 pm

Carleas,
Basically, automation inferentially erases its own legitimacy. The question of the on-off qualification being tested should be of primary consideration: whether a moral test, as opposed to an ethical one, can be unilaterally posed.

It is significant, since such a distinction can only be hypothetically tested as an automaton. A simulation cannot test its own reality, just as conscience cannot include its primary self-conscience, and as a result the simulation and its content cannot diverge.

In the beginning, what was is what was; similarly, today it is said that what is is what it is. Meaning has returned, in the original sense of the word, to within a changing structural development.

The only difference between automatic and sense-derived distinctions lies in the increasing sequential cross-reference between them. A lot of autonomous correspondence develops in systems not considered artificially derived, so instead of a continuous evolution of senses of derivative difference, a continuous progressive integration takes place. The result is progressively filled in by automatic reactions, with automatic systems taking up the slack, as an effect of naturally derived learning, which becomes more remote as the basis of higher-level learning in instinctual behavior.
Meno_
ILP Legend
 
Posts: 5955
Joined: Tue Dec 08, 2015 2:39 am
Location: Mysterium Tremendum

Re: The Moral Automaton

Postby Tab » Thu Jan 23, 2020 7:19 pm

Carleas wrote:
"Now consider society as such a system, with each person as a node in a highly connected network, each with many, many possible states. Further, take morality to be the set of rules that govern each node. Different nodes have different rules, but that's OK, so long as nodes' state transitions are governed by their rules. This is the moral automaton."


"Do we care about the moral automaton, as distinct from the nodes (i.e. is the automaton itself a moral patient)?"

I take this to mean applying a measure of 'moral<->amoral' nomenclature to the collective as an object in and of itself, rather than to the individual. So I don't think we can, because there is no choice, intent, or reflection involved at that level; the collective is wholly an emergent property of the summed actions of the individuals.

"What is the best set of rules, if we assume everyone will use the same set?"

If everyone uses the same set then a broad "always be nice" would suffice. We could dress it up but that's what it would basically amount to, anything else would be bargaining.

"What is the best set of rules for a node, given uncertainty about what rules other nodes will follow?"

Eh, game theory time I guess. 'Tit for two tats' was the best strategy last time I looked, which boils down in real life to adhering to the 'always be nice' rule and forgiving everyone who treats you badly at least once, in the hope it was just a misunderstanding. This only holds for dealing with people you are sure to have to deal with again at some point. If it is a certain (and unobserved) one-time interaction, always screw them over.
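
For concreteness, a sketch of 'tit for two tats' in the iterated prisoner's dilemma (the payoff values are the standard Axelrod tournament ones; the match-up below is just an illustration):

[code]
# 'Tit for two tats': cooperate unless the opponent defected on BOTH
# of the last two rounds. Payoffs are the usual Axelrod values.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_two_tats(opponent_history):
    if opponent_history[-2:] == ['D', 'D']:
        return 'D'
    return 'C'

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Iterate the game; each strategy sees the other's past moves."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# One-on-one, forgiveness is exploitable: this prints (8, 18). The
# strategy's strength shows up in round-robin populations instead.
print(play(tit_for_two_tats, always_defect))
[/code]
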
Last edited by Tab on Sat Jan 25, 2020 4:00 pm, edited 1 time in total.
Tab
Deeply Shallow
 
Posts: 8446
Joined: Thu Feb 03, 2005 2:49 pm

Re: The Moral Automaton

Postby inzydeout » Sat Jan 25, 2020 2:17 pm

Sounds like something from Skyrim.
Inzydeout



"Send my credentials to the house of detention."
"For those who need direction, allow my words of advice be enough guidance in this senseless world."
"Beware the fury of a patient man."

Pneumatic-Coma :mrgreen:
inzydeout
 
Posts: 243
Joined: Fri Apr 16, 2010 9:15 pm
Location: https://www.facebook.com/

Re: The Moral Automaton

Postby Ecmandu » Sat Jan 25, 2020 6:26 pm

Reality as we understand it is a virtual machine.

I’ve explained this before.

Have you ever seen the optical illusions that present motion when it’s just a “2D” image? It’s amazing.

Have you ever seen optical illusions that present depth from a “2D” image? It’s amazing.

“1D” images become “2D” images become “3D” images...

At each level of “abstraction” we are creating virtual awareness from unary (base 1).

The universe is fundamentally unary. We create virtual machines (like running windows on a Mac) with different dimensions other than unary.

We are fundamentally unary beings.
Ecmandu
ILP Legend
 
Posts: 9423
Joined: Thu Dec 11, 2014 1:22 am

