Morality and Networks

(For the purpose of this thread, I am taking as a given that dualism is false, that there is nothing incorporeal attached to people to make consciousness possible. If you reject that premise, please make that argument elsewhere.)

Moral agency is attributed to all normally functioning human adults. Such moral agents can understand right and wrong, and make decisions for which they can be held morally culpable. Unlike a rock falling off a cliff and killing someone, a person who throws a rock off a cliff and kills someone can rightly be blamed.

But the moral agency of that person is just a result of the network of cells in her nervous system: a special set of connections between special kinds of cells produces a machine that can receive information, process it, and act on it. Stimuli excite receptors, which propagate signals across the network. Each new signal strengthens or weakens connections, refining the network to react differently to subsequent stimuli. These refinements over time become concepts, words, ideas, and understanding, and from that understanding we impute moral agency and moral responsibility.
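To make the "strengthens or weakens connections" picture concrete, here is a minimal, purely illustrative sketch of that kind of process. The particular update rule (a Hebbian-style strengthening with decay), the network size, and the learning rate are my own assumptions for the example, not anything the post claims about real neurons.

```python
# Illustrative sketch only: connections between co-active units are strengthened,
# and all connections slowly weaken otherwise. Parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_units = 5
weights = rng.normal(scale=0.1, size=(n_units, n_units))  # connection strengths

def update(weights, stimulus, learning_rate=0.05, decay=0.01):
    """Propagate a stimulus, strengthen co-active connections, weaken the rest."""
    activity = np.tanh(weights @ stimulus)                     # signal spreads across the network
    weights += learning_rate * np.outer(activity, stimulus)    # Hebbian-style strengthening
    weights -= decay * weights                                  # gradual weakening
    return weights, activity

for _ in range(100):                                            # repeated stimuli refine the network
    stimulus = rng.random(n_units)
    weights, activity = update(weights, stimulus)

print(weights.round(2))
```

Over many stimuli the weights drift toward a structure shaped by the history of inputs, which is the (very loose) sense in which such a network "learns" to react differently to subsequent stimuli.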

But such networks are not limited to the brains of moral agents. Inter-brain connections, be they biological, economic, social, or technological, similarly create networks that receive stimuli, and that learn and respond in ways that approximate understanding.

Why, then, limit moral agency, and moral worth, to only those brain-like networks that actually happen to be brains? It seems clear that a brain recreated in silicon would have all the moral agency of the original carbon-based brain. Shouldn’t a non-brain network that nonetheless possesses many of the properties of a brain, that seems to ‘make choices’ in the same way that a brain does, also be granted such agency? Shouldn’t all such networks be granted moral agency? Isn’t it some other property, some property of network topology, that makes brains worthy of moral agency?

Without personal consequences, there is no understanding. Can other networks understand without personal consequences being a direct result of their choices to act? I say they can’t. What they can do is mimic a consequence or follow a program towards a consequence, but they can’t internalize the impression a personal consequence leaves and so avoid similar situations in the future. What would be a personal consequence for a machine or non-organic entity, a non-sentient object? There never would be a natural one, only an assisted or programmed one, so no moral agency for those networks, but yes to moral agency for their network of human developers, with the threat of execution being their personal consequence, i.e. your network kills people, we kill you.

I have a gut feeling

I fear taking this to one end: assuming agency of any object, or system of objects. How does one define the boundaries of said agent (instance of agency)? Does one brain constitute agency? (I would like to say so.) Does a smaller network of neurons constitute agency? I’d like to keep the (or a) definition within the confines of the human body.

I also fear taking this to another end: not assuming agency of an object (perhaps a spectrum question pertaining to parenting). How would we say a child has “agency”?

Are all of the participants of this thread comprising the same (instance of) agency?

I guess my question is: What is an agent?

I am trying to pay attention to my language here - I believe that God plays a role in drawing the lines when it comes to agency.

I think we cling to what is primal and instinctive.

Hopefully this does not call the stated premise into question too much.


Now, addressing the ability to answer a moral question: what do humans do? I think humans tend to answer this question by avoiding specific negative outcomes first and foremost… avoiding hunger, pain, and negative emotions if possible. Though I wish to sincerely portray the same sentiment with a positive spin: to enjoy eating, positive touch, and positive emotion.

I’ll finish this post with what I feel: I have an urge to steer this thread towards a few different ends:

  1. Practicality of Schadenfreude
  2. Applying the golden rule to a dog

Agreed … networks … all networks … collectively propel the human family in a specific direction.

As I just mentioned in another OP …


Something both Wendy and Demoralized touched on that should be further explored is what actions and consequences, and therefore moral obligations, look like as applied to more nebulous agencies like societies and markets. That’s difficult to answer, but I think I can make a start on it.

Reasoning by analogy from more familiar moral questions, we can look at when human moral questions arise. We see that generally it’s when one human’s actions affect another human. I think it’s reasonable to assume that the moral implications of the actions of an agency which is composed of humans would similarly apply when that agency interacted with other similarly composed agencies. This seems to match my intuitions: it seems uncontroversial to say that if the United States were to invade Mexico, and to subject its economy and society to complete US control for US benefit, that would be morally wrong. I would say that it is morally wrong at the level of the society, and not just morally wrong of the individuals who compose society. That would be an action taken by one collective agent against another, with moral consequences and subject to moral criticism.

I think this gets at something important about how networks affect moral consideration. Demoralized raised good questions about whether a subset of neurons in a brain is a moral agent. There is no inconsistency in treating both that subset of the brain and the brain as a whole as moral agents. Indeed, we have to do that, since we have very good evidence that humans can function almost normally even after losing substantial portions of their brains, and we don’t want to say that a person who has a brain tumor removed is no longer a full moral agent (though we may if their agency is compromised in the process).

In the same way, both the society and its members may have moral obligations, and a society composed of scoundrels can do good, while a society of saints can still sin without any individual doing anything morally wrong. These conclusions seem counter-intuitive, but they match our intuition that a person doing wrong tells us nothing about the moral rectitude of her neurons.

What about Internet societies? Are they exempt from the questions that you pose?

I imagined a cause-and-effect relationship over the Internet, like a game of Chinese whispers, whereby information leads to misinformation down the line, which leads to conspiracy theories, the misleading of people, and actions taken based on misinformation, with negative consequences and such. Should morality be called into question there? There is much information propagated over the Internet.

Is my small post even remotely related? I am not much of an expert here. Please guide me . . .

The resolution to this, and to all morality concerns, rests in the “purpose of judgement”.

Next time my computer freezes up on me, I’m sending it to prison.

This is basically what I’m talking about when I harp on the intersubjectivity of the self.

It’s easy to fall into jargon – I’m certainly guilty of this. So it’s all “no self” and “the self is an illusion”, which are all good points if you accept all the premises, but without them it can seem deeply counter-intuitive.

Cogito ergo sum has every bit as much baggage. We’re just more used to it.

Practically, if a robot brain malfunctions even slightly we can switch it off, tinker with it, reprogram and repair it. We can’t reprogram people with any degree of precision or efficiency, and killing them for not indicating at a junction or forgetting to pay a bill would be seen as somewhat extreme… While in some senses we talk of computer faults as being responsible for errors, we also talk of metal fatigue or storm conditions being responsible for deaths. The computer itself is not at fault. Perhaps if each computer were unique, with unique software running on it, we’d have enough identity to hold computers “personally” responsible?

I think empathy has much more to do with practical moral agency than most rational analytical types would like to think. We hold people morally responsible because we can imagine what it is like to be them, what it would be like to face such choices. We can’t do that with the insane, or with animals, so like the computer we’re reduced to treating them as Others to be cured or contained/liquidated when they become a danger to us.

In fact, people do assign moral agency to nonhumans. This is how Evil was born. Grendel is evil. Solar eclipses are evil, or the result of Evil. Dogs can be evil. Evil is seen as a continuum, with beings placed along it, by everybody except most famous philosophers.

I don’t subscribe to the concept that Evil is an entity, but most people, at least most people I know, do.

Beyond Good and Evil, by Nietzsche. Check it out.

Grendel is not evil, just a being sick and tired of dealing with the constant bullshit of the human race.
en.wikipedia.org/wiki/Grendel_(novel