Moral Beliefs as Prices

Perhaps some folks believe that a life, any life, is priceless and not interchangeable with another life. Is this a thought experiment in how to be evil and justify it?

As Karpel Tunnel said, the psychopaths would get rich; their workloads would be tremendous, especially if they were willing to off people for $1.99 without any donations to charity. Why bother saving any lives through a charity if lives aren’t priceless? Letting people perish due to their poor luck and lot in life would be extremely cost-efficient. In World A, the mindset that everyone is possibly expendable for as little as a 1-cent payment makes a life-saving charity absurd. That type of mentality was not advertised in World B. World B would be better for everyone, since everyone would have greater odds of surviving without the rampant kill-for-a-buck mentality.

I’m sympathetic to the ‘hidden consequences’ argument, but that argument does not make the question meaningless or unanswerable or absurd. Hidden consequences distinguish the hypothetical world in which anything is possible from the real world. So what you’re saying in appealing to it seems to be, yes, in the hypothetical, we should kill the person if offered a trillion dollars, but in the real world we shouldn’t because XYZ.

I feel like you’re resisting that pretty strongly, but rejecting the hypo as absurd misses the point. Look at Hilary Putnam’s Twin Earth thought experiment: it’s as absurd as can be, and it doesn’t matter, because it helps to isolate certain concepts.

First, the original problem does suggest that killing the one person is good, at least to a consequentialist who values human life: It is a moral good to cause the death of one person who would not die but for your intervention in order to save five people who will die but for your intervention.

Second, the problem I’m proposing doesn’t suggest anything about the future. Let’s concede, if you require it, that this will be just the worst if it happens all the time, and just mentally insert into the hypo whatever additional props you need to limit it to a one-time offer to you and only you.

Doesn’t depend on the future?

All cost-benefit analysis would be null and void if no future existed for anyone after the event.

In World B, Joe Random had some kind of “right” to exist and to be free of harm. He doesn’t have that in World A.

I see that as very important - more important than the math.

It’s not stated but it’s there.

If you lose it once, then it’s very hard or perhaps impossible to get it back.

To clarify, I’m just not trying to extend the analysis to rearranging society so that what we’re talking about happens all the time. I see the questions of “Should you do X in this one-off situation” and “Should we as a society make doing X a regular part of our everyday lives” as separate questions that it is consistent to answer differently.

Sure, but the same is true if you pull the switch in the vanilla problem, right?

No

Unfairness exists, but you don’t create the unfairness.

Injustice happens, but you don’t make it happen.

You don’t choose a world where rights are destroyed.


I don’t see how you aren’t doing that when you intentionally hit someone with a train, but you are when you intentionally shoot someone with a gun.

If you can enjoy your wealth after having obtained it by killing a random person, your life must have been supremely shitty beforehand.
The fact that there are indeed a lot of such humans is the reason I rank all other mammals above humans (in general) qua degree of sentience.

Spending wealth badly or selfishly is a separate moral question. If rather than “enjoy[ing] your wealth”, you use that wealth to do more good than you have done wrong, you can leave the world better off for having done that wrong, and a consequentialist should conclude that the transaction was a good thing, i.e. if World A is better than World B, a consequentialist should be OK with someone taking actions that lead to World A instead of World B. If feeling bad about it weighs against World A, increase X to compensate.
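
To put the bookkeeping in rough symbols (a sketch only, with H, P and G(X) as labels introduced here rather than taken from anything above): let H be the moral harm of the killing, P the price put on the bad feeling, and G(X) the good that can be done with a payment of X. On a consequentialist tally, World A beats World B whenever

G(X) > H + P,

so the X in question is just the smallest payment whose charitable yield covers both the harm done and the bad feeling of having done it.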

Has anyone, particularly you, Carleas, seen the movie The Box?

https://www.youtube.com/watch?v=nSOjMkoBYYA

Haha, looks like this discussion has been done! I haven’t seen it, but I’ll add it to my list in case it ever comes on my streaming platforms.

I found the short story it’s based on, Button, Button.

Can you explain this? It is a little ambiguous to me. Are you saying that it is we ourselves who do not do the above or cause the above to happen, or are you just being ironic?

Are you telling us not to be a part of all of that?

In the trolley problem, a person is thrown into a situation which he/she did not create. A choice is made between two undesirable options based on personal ethical standards.

In Carleas’ two-worlds problem, a person is asked to create World A by abandoning his/her personal ethical standards.

The actions expected of us seem to be quite different to me in the two problems. If you’re a consequentialist to start, then it may seem that you are not being asked to change anything about your ethical standards. So the problems may appear to be more or less the same to you.

It still amazes me he said this:

  1. yes, as you point out, the lack of empathy that is presumed
  2. that he doubts my self-evaluation
  3. that he thinks the behavior of some people means that everyone has a price
  4. that it seems clear he, himself, would kill a random person for a certain sum and would not feel the aftereffects of empathy
  5. the way it is assumed that money can always function like a force. IOW, perhaps I am content with whatever income or savings I have. No, he assumes that one can be enticed to do anything with enough money, even if one already has enough money for a decent life. Now, I happen not to be in a perfectly safe economic position, so my decision is not based on that. Sure, more money would give me more security, apart from pure bonuses. But his assumption that everyone will do whatever for money seems to assume that those who could look at certain cruel acts in economic terms would necessarily kill regardless of their financial situation.

There is something both confused and, I think, even pathological at root here. A fundamental ignorance of humans - however correct he may be about some - coupled with something disconnected personally in himself. He, clearly, would kill random people for some sum of money. And this is a lawyer who, I’ve gotten the impression, is doing alright financially. What does he already do, or would he do, for money that does not bother him? What else does he not understand about people at such fundamental levels?

It’s not about personal wealth. You would be using the money to “do good” so presumably you would think that you did the right thing. The bad feeling of killing someone would be more than compensated by the good feeling of helping orphans.

Oh KT, I love a good psychoanalysis as much as the next internet stranger, but I’m afraid that like many beliefs formed on insufficient evidence, this one misses the mark.

My point here is not so much about what people will do (as you point out, many people don’t like thinking icky things, so many will just refuse to engage with the thought and stick with simpler rules to avoid facing (and resolving) the cognitive dissonance). Rather, I’m making a case about what people should do: you should be able to express your moral beliefs in prices, because your moral beliefs are just another way in which we value things, and prices mediate value. We’ve all gotten stuck on the particular moral belief that it’s wrong to kill, and ignored my suggestion that we start with lying or some other less expensive moral belief.

And I do doubt your self-evaluation, as should you. People are pretty bad at self-evaluation. We can’t assume perfect self-knowledge of how we would act in a cockamamie hypothetical world that is so far from everyday experience that others in this thread have refused to acknowledge it as presenting an actual answerable question.

Still waiting on you to deliver on “do[ing] it as a thought experiment.”

The bad feeling can still be felt, and though I wouldn’t put much objective moral weight on it, it is compatible with the argument I’m making here to feel that pain, treat it as another moral cost, and still do enough good elsewhere to compensate for it. The bad feeling can be priced, and that price can be added into the equation to find X.

That’s an odd idea. If someone refuses to live with that “bad feeling” then the price of doing something which produces the bad feeling is effectively infinite for that person.

Surely, your thesis only works if there is an X which makes the bad feelings go away.

The alternative being that the person does the deed, feels bad and then kills himself in order to end that feeling. That raises the question of how reasonable the exchanges are, if people can’t live with themselves afterwards.

I don’t think so. The X only has to be large enough to make the bad feeling worthwhile. In my experience, and I assume this is a human universal, we sometimes face choices where either path is painful, and we can make a choice that we know to be the right choice, but still feel the pain of having made it. Something like initiating a painful breakup: remembering the person’s face as you are breaking their heart may always cause you pain, but you can still know that it was the right thing to do.

Though maybe this is what you mean by the pain “go[ing] away”. I think of it as the pain staying but being held at bay by the knowledge that the pain is somehow right, but it is reasonable to describe this as a lessening of the pain.