Moral Beliefs as Prices

How much money would you need to be paid to kill a random person? How much money would you need to be paid to rob a bank? And depending on your answer to the trolley problem, how much would you need to be paid to go the other way?

Moral positions are generally not thought of as subject to pricing. Giving up one’s moral positions in exchange for money is seen as a form of corruption and even as a lack of morals. But any pragmatic, consequentialist morality must take such a payment into account, and acknowledge that it can do enough good (however defined) to outweigh the harm (however defined). There are at least two ways to do this: 1) as above, we can examine how much the moral belief is ‘worth’, by asking how much it would cost to ignore it in a specific case; and 2) we can ask how much someone violating a moral rule would need to pay to compensate society for the harm they cause.

The questions are uncomfortable, but there isn’t anything irrational in using a universal medium of exchange to trade moral value for other forms of value. And doing so enables us to compare moral beliefs. I propose an analogy: pricing moral beliefs is to the trolley problem as currency-mediated transactions are to barter. Trolley problems place two competing outcomes directly against each other, in much the way that barter places two goods against each other. Introducing pricing in either case allows all goods, or all values, to be compared simultaneously.

So, what are your moral beliefs worth?

EDIT: spelling.

No amount of money would get me to kill a random person. In fact, I think the discussion of payment would make me even more reluctant. A terrible job and chronic pain coupled with badly timed access to a weapon, well, that’s another story.

And I can’t see how one could figure out punishment sums of money, except to the extent that the immoral act had a financial impact. And even if we worked with a percentage of income, or somehow tried to weigh the actual cost to the immoral person, I can’t see how it would work out.

Social punishments are not perfect either, but they ‘work’ for me.

And then there are the people who are willing to die rather than “kill a random person”.

Which seems to indicate that money is not a universal medium of exchange when it comes to morality.

It may be true that you value not killing a random person more than literally all the liquid value that humanity can produce, but I doubt it. In any case, we know from observation that plenty of people do in fact kill random people for substantially less than everything.

If instead it is suggested, as Phyllo does, that one might prefer to die than to kill an innocent, it only entails that that person values their own continued existence less than they value the life of the other (or rather, the moral belief that they shouldn’t kill).

Let’s put it differently: Take Singer’s parable of the drowning child. How much do you sacrifice to save the drowning child? Or put it this way: how much do you currently contribute to charitable causes that demonstrably save the lives of innocents for around a thousand dollars? If the amount you currently donate to those causes is zero, you probably aren’t as committed to not killing people as you claim. Moreover, if you currently choose to buy yourself food instead of contributing every cent you earn to saving those lives until you die from hunger and exhaustion, you probably aren’t so strongly committed to dying rather than killing.

It may be that you just aren’t a consequentialist, and you base your morality on the opinion of some all-loving god who couldn’t care less about what happens to anyone around you so long as you don’t participate directly in the causal chain that leads to their death. That morality is nonsense for a number of reasons, but that is a different discussion and my argument here doesn’t address it.

On the other hand, it may just be that the idea of accepting money in exchange for committing moral wrongs is seen as taboo. That’s understandable, because signaling a commitment to moral positions that is strong enough to overcome any personal gain is important for social cohesion and alliance building.

Here is one test: if you think that it’s right to pull the lever in the trolley problem, to save five lives by killing one, then why can’t you accept $10,000 to kill a random person and then donate that money to a charitable initiative that reliably saves a life for each $1,000 it receives? You would on net save 9 lives, 5 lives better than in the trolley problem. What gives? We’re just replacing the trolley switch with a check for killing the one person, followed by an alms collection to save the ten.
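
To make the arithmetic explicit (using only the figures already in the hypothetical):

$$\frac{\$10{,}000}{\$1{,}000\ \text{per life}} - 1 = 10 - 1 = 9\ \text{net lives saved}, \qquad \text{versus}\quad 5 - 1 = 4\ \text{net in the trolley case}.$$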

Here’s another way to approach the problem, which is probably where we should have started: I think lying is always wrong, but surely you would tell a white lie for $1 million, right? Think of all the orphans you could save! Murder is an extreme case, and when we start there it’s easy to take our gut rejection as an indication that there’s nothing to this price-of-morality argument. But start with tiny moral wrongs, and (I hope) it’s clear that we would take money for small moral wrongs. If nothing else, we can differentiate moral wrongs for which it’s not taboo to discuss accepting money to violate, and ones for which it is.

:astonished: Wow.

Rereading my previous post, I’d like to clarify that most uses of the word “you” in that post were intended in the generic form. Replacing them with “one” would have been more accurate/diplomatic, but at the cost of readability/zest.

I apologize if it comes off as a personal attack; that is not my intent.

In the OP, you were pricing the dropping of a moral belief. Here, instead, you are putting a price on some “random person” and the person who refuses to take money for killing him. IOW, now there are three prices proposed.

And honestly, is a person who is willing to die rather than kill someone, really thinking in these terms - looking at the value of the other person and his own value - pricing these things out?

I don’t think so.

The price of a human life is around $30 per day.
People can say all they want.
However, it’s about money. Life is right now about money.

You can put a price on all kinds of things, Phyllo. Price just mediates value. Anything you can value you can price.

Suppose you are faced with two possible worlds:
World A: a random person is killed and an orphanage gets an X dollar donation;
World B: neither of those things happens.

Is there some X where you choose world A? Shouldn’t there be?
Do you say don’t pull the switch in the trolley problem?

My position is that X exists. It’s less than a trillion. And furthermore, that X is the price at which you should prefer a world where you are paid to kill a random person. Whatever value X is, you can just take X as payment and give it to the orphanage, and create what, by hypothesis, is the better world. When we choose between a better and a worse world, it is moral to choose the better world.

And we might not know precisely what X is. We can narrow in on it. It’s less than a trillion. It’s greater than $1. Greater than $2. $50? $1000? Less than $999 billion? But we don’t need to know what X is, or think about what X is, or base our actions on what X is. None of that changes whether or not X exists.
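
Stated a bit more formally (the U below is just shorthand for whatever consequentialist measure of a ‘better world’ you prefer, not a particular formula I’m committing to):

$$\exists\, X \in (\$1,\ \$1\ \text{trillion}) \;:\; U(\text{World A with donation } X) > U(\text{World B})$$

The narrowing above is just tightening the bounds on that X; the existence claim doesn’t depend on ever pinning down its exact value.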

I agree, and I think it’s a bad thing as practiced. But I don’t think that it has to be a bad thing.

Sometimes killing is a form of love.
A wolf loves her pups so much, and her self so much,
that she eats other living things.

People who love their life don’t want to sacrifice it for others.

Self hate is part of some moral theories.
We live to serve others, for example.
Sacrifice of self for a stranger.
That’s heroism.

Some people will price anything but others do not think in that way. It’s not a universal attitude.

You have brought up the trolley problem a few times now.

The “fat man” version of the trolley problem shows the general aversion to these ideas.

en.wikipedia.org/wiki/Trolley_p … he_fat_man

The difference between World A and World B is not just one dead person plus well-off orphans versus one live person plus suffering orphans. World A contains an individual who has sold his beliefs and he has blood on his own hands. He has accepted that a price can be put on anyone/anything and there are other people out there prepared to kill him and those he cares about for a price. You can call World A the “better world” if you like.

Willingness to do something is not the same as ability to do something. In theory, there exists a break-even price for anything one values. It may fail in practice for any number of reasons (e.g. transaction costs are too high; the good is not excludable; that price is more than the total value produced by all humans ever; people find this kind of thinking icky; etc.).

But is that general aversion consistent? We know that people’s beliefs are often inconsistent depending on how a choice is framed, so it is not a given that the general aversion will hold up as rational upon examination. Particularly where the general aversion is among non-philosophers in a non-reflective mode, it isn’t clear that we should put much weight on the moral consensus.

And one may simply reject consequentialist morality, or prefer not to swallow its hard pills. If the outcome isn’t the basis of whether or not an action is moral, then whether or not World A is better than World B is irrelevant to the question of which action is required or permissible. So, maybe put this a different way: if we take as a given that the correct morality is consequentialist, and that that entails that we should push the fat man, then do you agree that X must exist?

One response to this line is to point out that, whatever other consequences you want to load into World A, there should still be an X that outweighs those consequences. Count up all the children in the world who will die of malnutrition in the next ten years, figure out how much they need to not die of malnutrition, plus the cost of distributing that much to each one. Is the life of every child who would die of malnutrition in the next ten years really not worth another source of existential dread and a little blood on your cuffs?

But I prefer another approach: to quote my torts professor, “don’t buck the hypo” (not sure if that’s original to her). You can make the math not work by adding additional terms, but those terms aren’t part of the hypothetical being considered. If what you’re saying is, “yes, in the case you presented, it’s morally permissible to kill for money, but that case could never happen in the real world for [reasons]”, then fine, say that. But if you don’t agree that the hypo as presented justifies killing for money, then let’s keep discussing the hypo as presented before we embellish it.

I think the fat man problem suffers from a similar difficulty: people implicitly read reality into a fanciful intuition pump. It’s actually difficult to conceive of a situation where you know 100% that pushing a fat person in front of a trolley will stop the trolley and save lives, and so even when people’s conscious minds acknowledge that that’s a given here, their moral intuitions can’t be readily separated from the real world in which they were honed, a world in which the fat man problem is outlandish because there’s no one fat enough to stop a speeding trolley. (Maybe a way to test the general aversion under this hypothesis: do a survey where you ask half the people the traditional fat man problem, and ask the other half a variation in which the fat man is sitting above the switch and pushing him off will kill him and hit the switch. If I’m right, there should be less aversion to the latter.)

Why on earth would you doubt it? People have refused to kill people trying to kill them. IOW despite losing everything possible. How much money would it take for you to rape a child, Carleas? I mean, you could use that money to help other children.

Sure. People will kill over a couple of bucks or who drank the last beer. Does this mean the price of a beer is the value of killing someone? Your proposal rests on some kind of at least vague consensus, not only on money as the measure, but what the measures tend to be.

Well, if we are accepting it, as you do here, we are accepting that money is not the correct measure.

So for you, not saving someone is the same as killing someone.

Here’s a thought experiment. You need a babysitter or a coworker. You can have a person who does not donate to charities or you can have someone who does donate to charities but who will kill a random person for 1000 dollars. Which category of person would you consider hiring?

Me, there is not a chance in hell I would hire a hit man.

Wow, look at the assumptions here

  1. there are no consequentialist arguments against your position
  2. all deontologists are theists

Ah, there you go, one possible consequentialist argument. AND NOTE: IT IS VERY, VERY HARD TO TRACK THE CONSEQUENCES of such things. I say this because most consequentialists tend to treat only those effects that can be tracked as the full set of effects, and/or show a kind of hubris about their ability to track consequences.

I’ve made it so far without having to make such decisions. And what are the consequences of having these kinds of scenarios BEING A REGULAR PART OF HUMAN INTERACTIONS? Oh, we don’t have to think of that. Indirect effects, those that deal with how we think and the way it affects how we view others, oh those are hard to track, so we don’t have to consider them.

The idea that decisions and effects can be narrowed down this way, especially when we are talking about a new system for evaluating behavior, is pathological and confused.

And again what are the side effects of making this kind of thinking the main guideline in a society? How does that monetary evaluation, when taught to children, when it becomes the common way of evaluating actions in adult society…how does that affect how we view and then treat each other? Ah, that’s hard to figure out, we don’t have to think of that.

Morality is like a chess puzzle. Causes and effects can be easily broken down and tracked.

I tell white lies for free.

So, Carleas, in the time you wrote these posts, you could have worked for enough money to help a starving child somewhere. It’s nice you never tell white lies

but you just contributed to the starvation of an African child.

Seriously, there is something extremely unpleasant here. Not because of what such thinking is and does.

I mean that from the bottom of both my consequentialist and deontological hearts.

Probably. It would be disingenuous to claim that small lies and big ones are separable in the long run, but they may be optically motivated in the short term. Myopia is a common short-term affliction, and J. Stalin’s observation adds depth perception to it, when he declared that it is very difficult to murder someone close to you, but entirely easy when the victims are in the millions.

More moderately, these optical illusions are hard to notice, except by the cliche that small arrears lead to big crimes.

The value of these beliefs is based partly on the factual worth of human beings, as familiarity with the victim(s) becomes a factor for the one doing the evaluation, as a basis for murder.

During the mass killings of the Holocaust, some friends and relations were given preferential treatment.

You appear to think in terms of numbers, so this all seems reasonable to you.

Not everyone thinks in terms of numbers.

So you are saying that “ordinary people” really don’t understand it … only philosophers are able to understand and reason it out correctly.

It seems reasonable that someone is repelled by having to physically kill the fat man. That being above and beyond the mathematics of the situation.
It also seems reasonable that someone believes that mathematics do not enter into life and death decisions.

You mean if I accept your beliefs that these things can be reduced to numbers and simple math operations of addition and subtraction, then would I agree that X must exist?
The answer is embedded in that particular formulation of the question.

But other consequences are hidden in the original presentation of the options. World A is a place where people will be routinely killed for the benefit of others. That will be the norm and it will be called good. And that’s not all, because theft and sales of humans can clearly be justified on the same basis as the killings - for a net benefit. Anything is acceptable as long as you demonstrate the net benefit.

The hypothetical has been stripped of most of the consequences. It’s a sanitized world.

Karpel Tunnel, is your position on the original trolley problem (not the fat man problem) the same as your position on what I’m saying here? If not, how do you distinguish them? Your arguments seem equally applicable (“what are the consequences of having these kinds of scenarios BEING A REGULAR PART OF HUMAN INTERACTIONS”, such that everyone around you would be pulling the switch to kill you to save five other people all the time.)

I’m not suggesting any change in social order – I posted this in the Philosophy forum because it’s not a policy proposal. This is a question of morality no more horrible in the asking (and taking no more time/orphan lives in the discussing) than the trolley problem or its more visceral variations.

Meno_, I think your point about the Stalin quote is apt: human cognition is not consistent; we think differently about questions depending on how they are posed, including, as Stalin notes, when they deal with concrete vs. abstract concepts. Our cognition evolved to have a pretty good intuitive grasp of what a single person is, but not at all of what a million people are. We literally engage different brain structures to reason about those two things.

But we can reflect on those differences and see if they’re consistent. If the value we place on the life of one person is greater than the value we place on the lives of a million people, we know that something is going wrong and we need to examine the intuitions to find out which is correct. If you think that pressing a switch so a train hits one person is OK, but that pushing that person so they hit the switch isn’t, we can tell there’s something more to the story.

Granted, but that doesn’t bear on how well numbers describe the world. You might be an artist of housebuilding, and eyeball every length with perfect precision, and never once resort to a tape measure or calculator. But someone else can still measure every piece you cut, and can say with great confidence what the lengths are and what you would measure the lengths to be if you were to measure them.

Again, this isn’t about how people do think, it’s about how they can think.

I’m saying surveys aren’t a great way to get at a consistent moral framework. People who haven’t analyzed their moral intuitions are not likely to notice if they are inconsistent.

I would also say it seems understandable that people are repelled by the fat man hypo, but not reasonable. See my comments to Meno_ above; people’s intuitions are derived from cognitive mechanisms that evolved to solve very different problems from the trolley problem and the fat man variation. People don’t tend to think about it, or to be bothered by the possibility that it’s inconsistent upon analysis. That doesn’t mean that it is not inconsistent upon analysis.

Not if they’ve already happily answered the original trolley problem in favor of pulling the switch, in which case it’s special pleading to complain about using mathematics in hypothetical life and death decisions only when we get to a life and death decision that feels icky.

These statements conflict, and I agree with the latter. This is a hypothetical limited to its terms; the hidden consequences are removed by hypothesis; we’re talking about an artificially pure scenario that gets at only a specific question of morality. The only difference between World A and World B, by hypothesis, is that in World A someone is dead and an orphanage has X dollars.

And if your only problem with the hypo is that a more realistic situation would have a whole lot else going on, then it seems like you agree: it is morally permissible to choose World A and to act to bring it about.

as I am approaching this question from the side of
Kantian thought, see the latest in my “new theory of space, time
thread”…

My approach over there has been via Kant and his three questions…

  1. what can I know?
  2. what ought I do?
  3. what may I hope?

and my current statements are about the second question…
what ought I do? Carleas is offering us one possibility
in this question of “what ought I do”?

can we consider morality/ethics in terms of monetary prices?

that is certainly one way to approach this problem…

what standards should I use to engage with or act with…
can I build a morality/ethics system via understanding it
by monetary prices? How much does being ethical actually cost us?
thinking about it in terms of money does raise the question in a new
way of thinking about it……….

what ought I do? ought I be ethical and what exactly does that mean?
and how do we judge that?

by our actions, we make judgements all the time… we send money
to the red cross… that is a judgement… and an ethical decision…

upon what criteria should we make such a judgement or ethical decision?
should we use money or the bible or our own judgement?

the question that Carleas really raises is this: upon what criteria should
we make judgements or set ethical standards? should we use authority
or money or some other standard to make ethical decisions…

if it is the “fat man problem” then we are using Bentham’s theory of utilitarianism…
the acts we make must be for the greater good… we decide upon the number
of people who benefit and if the greater number benefit, that should be
our actions… so under the “fat man” problem, we decide based on the
greater number who benefit… so clearly we toss the fat man into the
path to save a greater number… one dies so that four may live…

but the problem becomes this, we can use several different, alternative
and equally convincing theories upon which to decide this problem…

for example, if the “fat man” were an Einstein… would we toss
Einstein into the path to save 4 rather ordinary people?

who becomes more important, the number of people or
the “relative” value of each person? and once again, we run
into the problem of how we judge or create
criteria to decide which one is the answer?

each path is blocked by another consideration of equal value……

Kropotkin

Again, some people can’t think it.

But you brought up the trolley problem in the first place.

They have valid reasons. They see the two problems as fundamentally different in an important sense. You deny the existence of that difference.

I was thinking of those who refuse to participate in pulling the switch or pushing the fat man.

And I was also thinking of those who decide on another basis … for example, Einstein is on one track and a bunch of skinheads on the other. I understand that you would say that’s putting a dollar value on those people’s lives. But how is the morality “supposed” to work here? … Each life has equal value, so the greater number saved is the moral decision? Or some people are worth more than others? How do you know their value based on looking at them at a distance on the tracks? That decision would be based solely on visible physical characteristics.

Person A: What is the value of that person in US dollars?
Person B: People can’t be valued in terms of dollars.
Person A: That’s not answering the question.
Person B: Obviously.

Right. The merit/value of the people involved is only superficially known by the person at the switch or beside the fat man. How can he assign value to them?
Therefore, the trolley problem defaults to “all lives have equal value”.

There is another version where the fat man is a villain who has placed the people on the tracks and made the trolley go out of control. What’s the decision in that case?