Moral Beliefs as Prices

If you enjoy killing, does the buyer need to pay less to keep things moral?

Psychopaths will be well off, in any case, even if they have to tally up a few more murders.

I suppose the price would also have to be higher, like if they wanted you to kill your kid or your mother and you also, coincidentally, felt affection for these family members.

But there is always a price that Carleas, and I guess most people, would go for.

And again, while standing, feeling a little guilty, at mom’s cemetery plot, you can comfort yourself with the fact that you got a bigger sum of money, which you can give to Doctors Without Borders to help even more children get harelips repaired, or even to fund life-saving operations. Even, with the extra sum, spend a little on yourself, a vacation, perhaps. I mean, one death in the family, 6 kids saved, and the family-murder bonus could go to a week in Barcelona. Still a net gain for others.

People killing loved ones or random people will have no negative side effects on how we bond and function as societies.

And then there’s the bonuses for raping members of your own family.

There’s always a price that convinces. If you think you wouldn’t rape your own daughter, you just don’t realize how tempting a billion dollars is. Your self-assessment must be wrong.

Is this your response to the vanilla trolley problem as well?

Yes, by random I mean there is an equal probability of it being any person. It’s the same ‘random person’ that stands on one side in the trolley problem, and five of whom stand on the other side. The people being saved by the charity are also random people.

And you can make the decision without knowing who they are specifically, because expected value is well defined, and because whatever the expected value of 1 person, the expected value of 5 people is 5 x [the expected value of 1].
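(A minimal formal sketch of that linearity claim, with $v$ as a placeholder for the finite expected value of one random life; the notation is mine, not the thread’s:)

\[
E[\text{5 random lives}] \;=\; \sum_{i=1}^{5} E[\text{life}_i] \;=\; 5v \;>\; v \;=\; E[\text{1 random life}]
\]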

We can plug in specific people to change the question, but that’s just a different question.

I think the original idea was not a random person, but rather an anonymous stranger, a ‘known-unknown’ person. I don’t think this changes the math, but feel free to substitute if it keeps the question on track.

KT, I’m not proposing a policy, I’m proposing a thought experiment: put dollar values on your moral beliefs. You promised you’d try.

Carleas, you assume that the non-zero in randomness must be another person … what if the infinitely valuable person was you or me?

It’s defined as random, after all. Why should an infinitely valuable person give power to those who, again, seek to kill it? See, in your thought experiment, everyone, including us, is wearing the mask of randomness. So as a cost-benefit analysis, it makes no sense for anyone to give all their sustenance to another random person.

The vanilla trolley problem does not appear to have hidden consequences in the options. At least, I don’t see them.

It doesn’t suggest that killing the people on the tracks is good. It doesn’t suggest that random people ought to be killed in the future.


Perhaps some folks believe that a life, any life, is priceless and not interchangeable with another life. Is this a thought experiment in how to be evil and justify it?

As Karpel Tunnel said, the psychopaths would get rich; their workloads would be tremendous, especially if they were willing to off people for $1.99 without any donations to charity. Why bother saving any lives through a charity if lives aren’t priceless? Letting people perish due to their poor luck and lot in life would be extremely cost-efficient. In World A, the mindset that possibly everyone is expendable for possibly a 1-cent payment does make a life-saving charity absurd. That type of mentality was not advertised in World B. World B would be better for everyone, since everyone would have greater odds of surviving without the rampant kill-for-a-buck mentality.

I’m sympathetic to the ‘hidden consequences’ argument, but that argument does not make the question meaningless, unanswerable, or absurd. Hidden consequences distinguish the hypothetical world, in which anything is possible, from the real world. So what you’re saying in appealing to it seems to be: yes, in the hypothetical, we should kill the person if offered a trillion dollars, but in the real world we shouldn’t, because XYZ.

I feel like you’re resisting that pretty strongly, but rejecting an absurd hypo misses the point. Look at Hilary Putnam’s Twin Earth thought experiment: it’s as absurd as can be, and it doesn’t matter, because it helps to isolate certain concepts.

First, the original problem does suggest that killing the one person is good, at least to a consequentialist who values human life: It is a moral good to cause the death of one person who would not die but for your intervention in order to save five people who will die but for your intervention.

Second, the problem I’m proposing doesn’t suggest anything about the future. Let’s concede, if you require it, that this will be just the worst if it happens all the time, and just mentally insert into the hypo whatever additional props you need to limit it to a one-time offer to you and only you.

Doesn’t depend on the future?

All cost-benefit analysis would be null and void if no future existed for anyone after the event.

In World B, Joe Random had some kind of “right” to exist and to be free of harm. He doesn’t have that in World A.

I see that as very important - more important than the math.

It’s not stated but it’s there.

If you lose it once, then it’s very hard or perhaps impossible to get it back.

To clarify, I’m just not trying to extend the analysis to rearranging society so that what we’re talking about happens all the time. I see the questions of “Should you do X in this one-off situation” and “Should we as a society make doing X a regular part of our everyday lives” as separate questions that it is consistent to answer differently.

Sure, but the same is true if you pull the switch in the vanilla problem, right?

No

Unfairness exists, but you don’t create the unfairness.

Injustice happens, but you don’t make it happen.

You don’t choose a world where rights are destroyed.


I don’t see how you aren’t doing that when you intentionally hit someone with a train, but you are when you intentionally shoot someone with a gun.

If you can enjoy your wealth after having obtained it by killing a random person, your life must have been supremely shitty beforehand.
That there are indeed a lot of such humans is the reason I rank all other mammals above humans (in general) qua degree of sentience.

Spending wealth badly or selfishly is a separate moral question. If rather than “enjoy[ing] your wealth”, you use that wealth to do more good than you have done wrong, you can leave the world better off for having done that wrong, and a consequentialist should conclude that the transaction was a good thing, i.e. if World A is better than World B, a consequentialist should be OK with someone taking actions that lead to World A instead of World B. If feeling bad about it weighs against World A, increase X to compensate.
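(A rough sketch of that compensation move, with placeholder symbols that are mine rather than Carleas’s: let $h$ be the harm of the killing, $g$ the disvalue of feeling bad about it, and $u(X)$ the good done with the payment $X$. Then)

\[
W(A) - W(B) \;=\; u(X) - h - g,
\]

(so as long as $h + g$ is finite, some $X$ with $u(X) > h + g$ makes World A come out ahead.)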

Has anyone, particularly you Carleas, seen the movie, The Box?

https://www.youtube.com/watch?v=nSOjMkoBYYA

Haha, looks like this discussion has been done! I haven’t seen it, but I’ll add it to my list in case it ever comes on my streaming platforms.

I found the short story it’s based on, Button, Button.

Can you explain this? It is a little ambiguous to me. Are you saying that it is we ourselves who do not do the above, or cause the above to happen, or are you just being ironic?

Are you telling us not to be a part of all of that?

In the trolley problem, a person is thrown into a situation which he/she did not create. A choice is made between two undesirable options based on personal ethical standards.

In Carleas’ two worlds problem, a person is asked to create a World A by abandoning his/her personal ethical standards.

The actions expected of us seem to be quite different to me in the two problems. If you’re a consequentialist to start, then it may seem that you are not being asked to change anything about your ethical standards. So the problems may appear to be more or less the same to you.

It still amazes me that he said this:

  1. yes, as you point out, the lack of empathy that is presumed
  2. that he doubts my self-evaluation
  3. that he thinks the behavior of some people means that everyone has a price
  4. that it seems clear he, himself, would kill a random person for a certain sum and would not feel the aftereffects of empathy
  5. the way it is assumed that money can always function like a force. IOW, perhaps I am content with whatever income or savings I have. No, he assumes that one can be enticed to do anything with enough money, even if one already has enough money for a decent life. Now, I happen not to be in a perfectly safe economic position, so my decision is not based on that. Sure, more money would give me more security, apart from pure bonuses. But his assumption that everyone will do whatever for money seems to assume that those who could look at certain cruel acts in economic terms would necessarily kill regardless of their financial situation.

There is something both confused and, I think, even pathological at root here. A fundamental ignorance of humans - however correct he may be about some - coupled with something disconnected personally in himself. He, clearly, would kill random people for some sum of money. And this is a lawyer who, I’ve gotten the impression, is doing alright financially. What does he already do, or would he do, for money that doesn’t bother him? What else does he not understand about people at such fundamental levels?