Moral Beliefs as Prices

Spending wealth badly or selfishly is a separate moral question. If rather than “enjoy[ing] your wealth”, you use that wealth to do more good than you have done wrong, you can leave the world better off for having done that wrong, and a consequentialist should conclude that the transaction was a good thing, i.e. if World A is better than World B, a consequentialist should be OK with someone taking actions that lead to World A instead of World B. If feeling bad about it weighs against World A, increase X to compensate.

Has anyone, particularly you, Carleas, seen the movie The Box?

[youtube]https://www.youtube.com/watch?v=nSOjMkoBYYA[/youtube]

Haha, looks like this discussion has been done! I haven’t seen it, but I’ll add it to my list in case it ever comes on my streaming platforms.

I found the short story it’s based on, Button, Button.

Can you explain this? It is a little ambiguous to me. Are you saying that it is we ourselves who do not do the above or cause the above to happen, or are you just being ironic?

Are you telling us not to be a part of all of that?

In the trolley problem, a person is thrown into a situation which he/she did not create. A choice is made between two undesirable options based on personal ethical standards.

In Carleas’ two worlds problem, a person is asked to create a World A by abandoning his/her personal ethical standards.

The actions expected of us seem to be quite different to me in the two problems. If you’re a consequentialist to start, then it may seem that you are not being asked to change anything about your ethical standards. So the problems may appear to be more or less the same to you.

It still amazes me that he said this:

  1. yes, as you point out, the lack of empathy that is presumed
  2. that he doubts my self-evaluation
  3. that he thinks the behavior of some people means that everyone has a price
  4. that it seems clear he, himself, would kill a random person for a certain sum and would not feel the aftereffects of empathy
  5. the way it is assumed that money can always function like a force. IOW, perhaps I am content with whatever income or savings I have. No, he assumes that one can be enticed to do anything with enough money, even if one already has enough money for a decent life. Now, I happen not to be in a perfectly safe economic position, so my decision is not based on that. Sure, more money would give me more security, apart from pure bonuses. But his assumption that everyone will do anything for money seems to assume that those who could look at certain cruel acts in economic terms would necessarily kill regardless of their financial situation.

There is something both confused and, I think, even pathological at root here. A fundamental ignorance of humans - however correct he may be about some - coupled with something disconnected personally in himself. He, clearly, would kill random people for some sum of money. And this is a lawyer who, I’ve gotten the impression, is doing all right financially. What does he already do, or would he do, for money that does not bother him? What else does he not understand about people at such fundamental levels?

It’s not about personal wealth. You would be using the money to “do good” so presumably you would think that you did the right thing. The bad feeling of killing someone would be more than compensated by the good feeling of helping orphans.

Oh KT, I love a good psychoanalysis as much as the next internet stranger, but I’m afraid that like many beliefs formed on insufficient evidence, this one misses the mark.

My point here is not so much about what people will do (as you point out, many people don’t like thinking icky things, so many will just refuse to engage with the thought and stick with simpler rules to avoid facing (and resolving) the cognitive dissonance). Rather, I’m making a case about what people should do: you should be able to express your moral beliefs in prices, because your moral beliefs are just another way in which we value things, and prices mediate value. We’ve all gotten stuck on the particular moral belief that it’s wrong to kill, and ignored my suggestion that we start with lying or some other less expensive moral belief.

And I do doubt your self-evaluation, as should you. People are pretty bad at self-evaluation. We can’t assume perfect self-knowledge of how we would act in a cockamamie hypothetical world that is so far from everyday experience that others in this thread have refused to acknowledge it as presenting an actual answerable question.

Still waiting on you to deliver on “do[ing] it as a thought experiment.”

The bad feeling can still be felt, and though I wouldn’t put much objective moral weight on it, it is compatible with the argument I’m making here to feel that pain, treat it as another moral cost, and still do enough good elsewhere to compensate for it. The bad feeling can be priced, and that price can be added into the equation to find X.
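The pricing move described here can be put as simple arithmetic. A minimal sketch, with made-up figures; the variable names and numbers below are illustrative assumptions, not anything proposed in the thread:

```python
# Hypothetical pricing of a moral cost. Every figure here is an
# invented illustration of the argument, not a real valuation.
harm_cost = 1_000_000   # priced moral cost of the wrong itself
guilt_cost = 250_000    # priced cost of the lingering bad feeling
good_per_dollar = 0.5   # priced moral good produced per dollar spent

# X is the smallest payment whose good done outweighs both costs:
#   good_per_dollar * X >= harm_cost + guilt_cost
x = (harm_cost + guilt_cost) / good_per_dollar
print(x)  # 2500000.0
```

On this picture, feeling worse about the act doesn’t break the argument; it just raises the guilt cost, and with it the X needed to compensate.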

That’s an odd idea. If someone refuses to live with that “bad feeling” then the price of doing something which produces the bad feeling is effectively infinite for that person.

Surely, your thesis only works if there is an X which makes the bad feelings go away.

The alternative being that the person does the deed, feels bad and then kills himself in order to end that feeling. That raises the question of how reasonable the exchanges are, if people can’t live with themselves afterwards.

I don’t think so. The X only has to be large enough to make the bad feeling worthwhile. In my experience, and I assume this is a human universal, we sometimes face choices where either path is painful, and we can make a choice that we know to be the right choice, but still feel the pain of having made it. Something like initiating a painful breakup: remembering the person’s face as you are breaking their heart may always cause you pain, but you can still know that it was the right thing to do.

Though maybe this is what you mean by the pain “go[ing] away”. I think of it as the pain staying but being held at bay by the knowledge that the pain is somehow right, but it is reasonable to describe this as a lessening of the pain.

Carleas, there is no X to get me to kill a random person. There is no way to alleviate my conscience from murdering an unsuspecting, unwilling, for all intents and purposes…innocent person. Even if the random person was suicidal or homicidal, inviting me to alleviate their suffering, I myself would suffer from that destructive deed until my last breath. No amount of good deeds recovers a heinous bad of which murder is the most heinous. Please re-evaluate your principles as if you or your child were that random person, are you or your kid replaceable for X amount?

How bad is your suffering? Would you suffer like that if it would save every baby that would have died from malaria? The unpleasantness of your moral suffering would prevent you from doing good?

Please re-evaluate your principles as if you or your child were one of those saved babies.

Just a small observation. Paid is the correct spelling.

Carleas wrote:

…and who is going to ‘morally’ choose which one dies. In this case there is no “morally”, but if you reward a person $10,000 to kill I would say this is ‘amoral’ whatever way you look at it.

youtu.be/jsWPEhSt9OA

The reality of this discussion.

However, in the real world, for 10,000, the question really becomes a non-issue if it is rephrased:

 Could you find anyone in this world for whom taking a life would not constitute a moral issue? To which the answer would certainly be a resounding yes. As a matter of fact, a guesstimate could be made fairly easily: if we consider the population of the world to be, say, 3 billion, and take 1/100th of a percent of that figure as those who would do the job, that leaves 300,000 people with no moral consideration who would do the job.

Practically, there would be a direct correlation between the belief of criminally induced mortality and its price.

Even if there was only one person without moral doubts as to doing the job, even then the price for it would validate its worth.
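The guesstimate above is straightforward integer arithmetic; the population figure and the one-in-ten-thousand fraction are the post’s own assumptions, not real statistics:

```python
# Rough check of the guesstimate: 1/100th of a percent of 3 billion.
population = 3_000_000_000  # the post's assumed world population
# 1/100th of a percent is one person in 10,000, so use exact
# integer division rather than a float fraction.
willing = population // 10_000
print(willing)  # 300000
```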

Another way to come at a problem with this is to point out that all consequentialists are also deontologists. This becomes clear when polling for prices. Different cultures and individuals will determine the prices differently. And at base there is no way to objectively determine the value of a life, since much of that value is determined subjectively. Some people even argue that large cullings of the population are good in terms of the long-term survival of the human race. Others may think that some human lives have less value than others, including vastly less. If you believe in reincarnation, this affects the evaluation. If you are a materialist who believes that in any case all the matter in the body is replaced every [specific amount of time], the deaths can be evaluated monetarily in different terms from someone, materialist or not, who thinks the whole possible lifetime must be considered.

Then there will be different evaluations of the value of the death in terms of its effects on others. And some, here, might even dismiss this value, since deaths and separations are inevitable, so it’s good even for kids to be bluntly made aware of mortality: Spartan-type pragmatists who see toughening up as a road to arete, for example. Or there is the corporate way of evaluating the value of a life, as when car manufacturers choose the level of safety by estimating the number of likely deaths and then lawsuits; when this money is more than what they save by not adding more safety, they add a little more. Here is a circular, emperor-has-no-clothes evaluation system. And of course each life will have different values in any single evaluation system. The asshole with two months to live, dying of cancer, vs. the 25-year-old married surgeon with kids: the latter, at least in some systems, will have more value, as they do in terms of lawsuit judgments for wrongful deaths when fines get tallied.
But then it is only in terms of compensating families, it is not meant to assess the full value of a life, just the monetary aspects to others.

Anti-natalists, even if they are anti-violence, might think that no real value was lost.

Consequentialists often think they are not deontologists, but they are always also deontologists. Once you open the door for deontology, then it is open for considering the very idea of being paid to kill being immoral.

One of the reasons deontology is present is that it is often impossible to rationally work out all consequences, and a consequentialist could even view deontological heuristics as something humans naturally select, over long periods of time, to prevent effects that are not easy to predict. Some will be poor heuristics, but then those groups will, over long periods, thrive less well. Take intuition out of individual humans and they cannot function well with others. See Damasio.

I think that the more the monetary value of a life is considered somehow equivalent to one’s morals, the more damaging to society it is. Of course that’s hard to track.

You didn’t answer my question: are you and your kid replaceable for X amount? If yes is your answer, then what is that X amount?

My suffering would be an ongoing, paramount consequence. I’ve only ever hit a few animals while driving a vehicle: a cat, a bird, and a raccoon. I still mourn for them, and those were accidental deaths, bad-timing deaths of wild animals who are not part of the human race. I even feel something for the insects on my windshield, although I loathe most insects. I would never escape the haunting guilt of murdering a human. No price nor better deed could erase that guilt. But then again, I suffer from guilt for other people’s wrongdoings as well, which I don’t understand. Perhaps in another life I was sheer evil, and the weight of my current guilty conscience keeps that evil at bay. :confusion-shrug:

I believe that every life is precious and impossible to assign a limited value to; why do you doubt that?

Just for the record, in terms of values placed on human life in insurance dealings, being dead is worth far less than remaining alive. Insurance companies would prefer to settle a claim on a corpse rather than pay for ongoing medical bills for some great but not deadly physical harm. You cost them more if you remain damaged but alive.

How can a baby be saved if they grow up to be a murderer? Malaria is a natural disease, is murder natural? So I should be grateful if my baby is temporarily saved from malaria but not upset if they happen to be randomly murdered unnaturally in the future since 10,000 other babies were saved?

Are 10,000 life-saving compliments equal to one murder?

I agree with Phyllo that in the trolley problem you had no choice to avoid death, death was inevitable, but not initiated by yourself as murder. I would not choose myself in that scenario and the train would strike who it would strike of its own accord for which I would not feel responsible since I didn’t build the train nor set it in motion down any track. It would be horror in the end no matter what since one or more lives would be lost.

Blasted irregular verbs! Thanks for the catch, I’ll correct it.

Again, I’d ask if this is true of the vanilla trolley problem, and if not, what is the distinction?

There are two separate questions here:

  1. Are there certain goods that can’t be weighed against each other? So, we might say that the vanilla trolley problem can’t be answered, because lives are special and can’t be compared, and (x) lives are always worth the same as (y) lives, no matter what values we put in (x) and (y).
  2. Is there a special moral calculus for mediating such comparisons through money, i.e. if it’s OK to cause (a) harm for (b) benefit, is it OK to be paid to do (a) harm if the money is used to buy (b) benefit? To me, this does not seem like a different case: if I can barter (a) for (b), I can sell (a) to buy (b).

KT, You make an interesting point here, although I think you and I may use the term “price” differently. I don’t see any contradiction in different people having different prices for things, and I don’t think “(x) is the price of (a)” is a normative claim, but an empirical one that is dependent on who the buyer and seller are.

A Ferrari might sell for $200k on the open market, but to me it’s worth the resale value because I have no use for a supercar; if I can’t resell it or scrap it for parts, it has negative value: I would pay to avoid owning a Ferrari that can only be used as a car. But that’s just saying that, knowing what money is and what it can buy, knowing what a Ferrari is and how it would function in my life, I would prefer a world where I have less money and no Ferrari to a world where I have more money and a Ferrari shaped ball-and-chain.

But that claim isn’t deontological, because it’s not about what people should value. My particular life circumstances make a Ferrari very low value to me (I live in a crowded city, parking is expensive and crime is high, etc.).

But I do think this hypothetical betrays an instinctual deontology in most consequentialists. If you accept that consequences make the morals, and you agree that World A is better than World B, then the question is easy. But comparing money and morals feels like crossing some line that we have a duty not to cross. And in rejecting that, I am swallowing what I recognize to be a bitter pill of consequentialism.

I answered your question obliquely, by pointing out that putting “you and your kid” on either side of the equation changes the question. Killing a random person is different from killing your kid; saving a random person is different from saving your kid. I don’t deny that I value my kid differently. But putting your kid on either side in the vanilla trolley problem changes that question too.

But as I said to Phyllo above, it’s not about erasing the guilt but of making the guilt worthwhile. Ask the same question with other forms of suffering: if you were going to have excruciating pain for the rest of your life, and in exchange saved the lives of a million orphans, is it worth it? To me, it seems selfish (if understandable) to let many others suffer or die to avoid suffering myself.

Unless you want to argue that moral suffering (guilt) is somehow different from other forms of suffering (e.g. pain), that avoiding moral suffering is not just a selfish desire but an altruistic good. I don’t buy it, but I’m interested to see the case.

See my response to Ecmandu above. Really valuing life infinitely creates weird consequences that aren’t followed in real life. It might be that you believe that and just aren’t living up to your beliefs, but I think it’s more likely that you’re failing to distinguish between ‘really really large value’ and ‘infinite value’.

So too here: you can either commit one murder, or allow millions to die of malaria. That’s the hypo. Death is inevitable.

But it seems like your answer is that you just can’t compare between two lives. That introduces a ton of practical problems, and just seems like ostrich-ing in a way that, if applied consistently, leaves the world worse off in a ton of real world scenarios (triage? self-defense? child-birth complications? flight 93?).

Carleas, my issue, as this thread has continued, is that it is logically impossible to place value on any life unless it is infinite… i.e. everyone obliviates, no harm no good. As a proposition for the value of life, it must in some way continue eternally.

I also find it interesting that you are trying to frame this as a one-off while querying multiple people about how they’d respond, while suggesting in your distinction that it only applies to one person one time and not to everyone.

I believe you have used the phrase value to society. That must entail deontological ideas.

You cannot evaluate the consequences without apriori deontological values. You have no criteria to work with. This health care proposal leads to X. X is bad because Y. Ask enough questions and you get down to deontology. Or it is not morals. It is simply an analysis between results and there is no way to argue that results set X is better than Y. It could do that in terms of say, goal A. But then we must argue why goal A is better, and again we turn to deontological criteria and rules.

The money issue does not create this problem for consequentialism. It is not a special case. It is inherent in consquentialism.

Even if, say, you do not want to argue that killing a neighbor’s baby is apriori immoral, and when brought to that point while being questioned you say instead: well, here’s what happens when you allow people to just up and kill other people’s babies, you will then be describing, for example, things like loss of social cohesion, chaos, feuds, and rampant revenge killings, and either justify these as negative in terms of new sets of consequences, keeping your Socratic interlocutor working as long as you can until you finally admit some deontological root, or accidentally slip and simply present, for example, societal chaos as an obvious bad.

Otherwise consequentialism is not part of ethics or morality and you are talking about something like science.

It is working out the probable effects of a phenomenon. Like meteorological science, say.

To call it consequentialism heavily implies that it is not mere prediction and statistics of effects, but includes value judgments, and at root these will have deontological evaluations like: unnecessary suffering should be avoided, life is precious or has value, if people can get along this is better than if they do not. Not simply because of what these LEAD TO but because of what they are.

A consequentialist - a misleading term if considered distinct from deontologists, I would say - tends to have more flexibility than the classic Abrahamist deontologist. Well, I might do that if it led to good effects or fewer bad ones. Fewer things prohibited apriori.

But actually the consequentialist is really just another deontologist, just one with more abstract deontological values, which he or she uses to evaluate consequences. But those base values will not come into question. They will be seen as givens. Most open deontologists will still evaluate consequences. And if they are not traditional religious deontologists, they may even do this as much as those who identify as consequentialists. But even Abrahamists will look at effects and argue in meetings about why policy Y leads to bad stuff. Given that the latter group tends to have more specific deontologies, they may also argue from the root more often, but this is not inherent in deontology, since, well, consequentialists are all, I would argue, deontologists.

If they are not, then they are not actually consequentialists in the philosophical sense. They are predictors, with no way to justify why anyone, including themselves, should choose one set of outcomes over another.

Unless they want to argue from preference. And that then puts them in another category and another branch of philosophy and they should probably call themselves something else.

Murder is not. That is the difference.