Moral Beliefs as Prices

Oh KT, I love a good psychoanalysis as much as the next internet stranger, but I’m afraid that like many beliefs formed on insufficient evidence, this one misses the mark.

My point here is not so much about what people will do (as you point out, many people don’t like thinking icky things, so many will simply refuse to engage with the thought and stick with simpler rules rather than face, and resolve, the cognitive dissonance). Rather, I’m making a case about what people should do: you should be able to express your moral beliefs in prices, because your moral beliefs are just another way in which we value things, and prices mediate value. We’ve all gotten stuck on the particular moral belief that it’s wrong to kill, and ignored my suggestion that we start with lying or some other less expensive moral belief.

And I do doubt your self-evaluation, as should you. People are pretty bad at self-evaluation. We can’t assume perfect self-knowledge of how we would act in a cockamamie hypothetical world that is so far from everyday experience that others in this thread have refused to acknowledge it as presenting an actual answerable question.

Still waiting on you to deliver on “do[ing] it as a thought experiment.”

The bad feeling can still be felt, and though I wouldn’t put much objective moral weight on it, it is compatible with the argument I’m making here to feel that pain, treat it as another moral cost, and still do enough good elsewhere to compensate for it. The bad feeling can be priced, and that price can be added into the equation to find X.

That’s an odd idea. If someone refuses to live with that “bad feeling” then the price of doing something which produces the bad feeling is effectively infinite for that person.

Surely, your thesis only works if there is an X which makes the bad feelings go away.

The alternative being that the person does the deed, feels bad and then kills himself in order to end that feeling. That raises the question of how reasonable the exchanges are, if people can’t live with themselves afterwards.

I don’t think so. The X only has to be large enough to make the bad feeling worthwhile. In my experience, and I assume this is a human universal, we sometimes face choices where either path is painful, and we can make a choice that we know to be the right choice, but still feel the pain of having made it. Something like initiating a painful breakup: remembering the person’s face as you are breaking their heart may always cause you pain, but you can still know that it was the right thing to do.

Though maybe this is what you mean by the pain “go[ing] away”. I think of it as the pain staying but being held at bay by the knowledge that the pain is somehow right, but it is reasonable to describe this as a lessening of the pain.

Carleas, there is no X to get me to kill a random person. There is no way to alleviate my conscience from murdering an unsuspecting, unwilling, for all intents and purposes…innocent person. Even if the random person was suicidal or homicidal, inviting me to alleviate their suffering, I myself would suffer from that destructive deed until my last breath. No amount of good deeds recovers a heinous bad of which murder is the most heinous. Please re-evaluate your principles as if you or your child were that random person, are you or your kid replaceable for X amount?

How bad is your suffering? Would you suffer like that if it would save every baby that would have died from malaria? The unpleasantness of your moral suffering would prevent you from doing good?

Please re-evaluate your principles as if you or your child were one of those saved babies.

Just a small observation. Paid is the correct spelling.

Carleas wrote:

…and who is going to ‘morally’ choose which one dies. In this case there is no “morally”, but if you reward a person $10,000 to kill I would say this is ‘amoral’ whatever way you look at it.

youtu.be/jsWPEhSt9OA

The reality of this discussion.

However, in the real world, for $10,000 the question really becomes a non-issue if it were rephrased:

 Could you find anyone in this world for whom taking a life would not constitute a moral issue? To which the answer would certainly be a resounding yes. As a matter of fact, a guesstimate could be made fairly easily: if the population of the world is, say, 3 billion, and we take 1/100th of a percent of that figure as those who would do the job, that leaves 300,000 people with no moral consideration who would do the job.
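For what it’s worth, the guesstimate above checks out arithmetically. The sketch below just redoes the multiplication; the 3 billion population figure and the one-hundredth-of-a-percent rate are the poster’s own assumptions, not real demographic data:

```python
# Sanity check of the guesstimate above.
# Both inputs are assumptions from the post, not real demographic data.
population = 3_000_000_000   # assumed world population
rate = 1 / 100 / 100         # one hundredth of one percent = 0.0001

willing = round(population * rate)
print(willing)  # 300000
```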

Practically, there would be a direct correlation between belief in criminally induced mortality and its price.

Even if there were only one person without moral doubts about doing the job, even then the price for it would validate its worth.

Another way to come at a problem with this is to point out that all consequentialists are also deontologists. This becomes clear when polling for prices. Different cultures and individuals will determine the prices differently, and at base there is no way to objectively determine the value of a life, since much of that value is determined subjectively.

Some people even argue that large cullings of the population are good in terms of the long-term survival of the human race. Others may think that some human lives have less value than others, even vastly less. If you believe in reincarnation, that affects the evaluation. A materialist who believes that in any case all the matter in the body is replaced every [specific amount of time] can evaluate deaths monetarily in different terms from someone, materialist or not, who thinks the whole possible lifetime must be considered.

Then there will be different evaluations of the value of the death in terms of effects on others. Some here might even dismiss this value, since deaths and separations are inevitable, so it’s good even for kids to be bluntly made aware of mortality; Spartan-style pragmatists who see toughening up as a road to arete, for example. Or consider the corporate way of evaluating the value of a life: car manufacturers choose the level of safety by weighing the number of likely deaths and resulting lawsuits against what they save by not adding more safety, and when the lawsuits cost more, they add a little more safety. A circular, emperor-has-no-clothes evaluation system.

And of course each life will have different values in any single evaluation system. The asshole with two months to live, dying of cancer, versus the 25-year-old married surgeon with kids: the latter, at least in some systems, will have more value, as they do when fines get tallied in lawsuit judgments for wrongful deaths. But then that is only in terms of compensating families; it is not meant to assess the full value of a life, just the monetary aspects to others.

Anti-natalists, even if they are anti-violence, might think that no real value was lost.

Consequentialists often think they are not deontologists, but they are always also deontologists. Once you open the door for deontology, then it is open for considering the very idea of being paid to kill being immoral.

One of the reasons deontology is present is that it is often impossible to rationally work out all consequences; a consequentialist could even view this as humans, over long periods of time, naturally selecting deontological heuristics that prevent effects which are not easy to predict. Some will be poor heuristics, but those groups will, over long periods, thrive less well. Take intuition out of individual humans and they cannot function well with others. See Damasio.

I think that the more the monetary value of a life is considered somehow equivalent to one’s morals, the more damaging to society it is. Of course that’s hard to track.

You didn’t answer my question, are you and your kid replaceable for X amount? Is yes your answer, then what is that X amount?

My suffering would be an ongoing, paramount consequence. I’ve only ever hit a few animals while driving a vehicle: a cat, a bird, and a raccoon. I still mourn for them, and those were accidental deaths, bad-timing deaths of wild animals who are not part of the human race. I even feel something for the insects on my windshield, although I loathe most insects. I would never escape the haunting guilt of murdering a human. No price nor better deed could erase that guilt. But then again, I suffer from guilt for other people’s wrongdoings as well, which I don’t understand. Perhaps in another life I was sheer evil, and the weight of my current guilty conscience keeps that evil at bay. :confusion-shrug:

I believe that every life is precious and impossible to assign a limited value to; why do you doubt that?

Just for the record, in terms of values placed on human life in insurance dealings, being dead is worth far less than remaining alive. Insurance companies would prefer to settle a claim on a corpse rather than pay for ongoing medical bills for some great but not deadly physical harm. You cost them more if you remain damaged but alive.

How can a baby be saved if they grow up to be a murderer? Malaria is a natural disease, is murder natural? So I should be grateful if my baby is temporarily saved from malaria but not upset if they happen to be randomly murdered unnaturally in the future since 10,000 other babies were saved?

Are 10,000 life-saving compliments equal to one murder?

I agree with Phyllo that in the trolley problem you had no choice to avoid death, death was inevitable, but not initiated by yourself as murder. I would not choose myself in that scenario and the train would strike who it would strike of its own accord for which I would not feel responsible since I didn’t build the train nor set it in motion down any track. It would be horror in the end no matter what since one or more lives would be lost.

Blasted irregular verbs! Thanks for the catch, I’ll correct it.

Again, I’d ask if this is true of the vanilla trolley problem, and if not, what is the distinction?

There are two separate questions here:

  1. Are there certain goods that can’t be weighed against each other? So, we might say that the vanilla trolley problem can’t be answered, because lives are special and can’t be compared, and (x) lives are always worth the same as (y) lives, no matter what values we put in (x) and (y).
  2. Is there a special moral calculus for mediating such comparisons through money, i.e. if it’s OK to cause (a) harm for (b) benefit, is it OK to be paid to do (a) harm if the money is used to buy (b) benefit? To me, this does not seem like a different case: if I can barter (a) for (b), I can sell (a) to buy (b).

KT, You make an interesting point here, although I think you and I may use the term “price” differently. I don’t see any contradiction in different people having different prices for things, and I don’t think “(x) is the price of (a)” is a normative claim, but an empirical one that is dependent on who the buyer and seller are.

A Ferrari might sell for $200k on the open market, but to me it’s worth the resale value because I have no use for a supercar; if I can’t resell it or scrap it for parts, it has negative value: I would pay to avoid owning a Ferrari that can only be used as a car. But that’s just saying that, knowing what money is and what it can buy, knowing what a Ferrari is and how it would function in my life, I would prefer a world where I have less money and no Ferrari to a world where I have more money and a Ferrari shaped ball-and-chain.

But that claim isn’t deontological, because it’s not about what people should value. My particular life circumstances make a Ferrari very low value to me (I live in a crowded city, parking is expensive and crime is high, etc.).

But I do think this hypothetical betrays an instinctual deontology in most consequentialists. If you accept that consequences make the morals, and you agree that world A is better than world B, then the question is easy. But comparing money and morals feels like crossing some line that we have a duty not to cross. And in rejecting that, I am swallowing what I recognize to be a bitter pill of consequentialism.

I answered your question obliquely, by pointing out that putting “you and your kid” on either side of the equation changes the question. Killing a random person is different from killing your kid; saving a random person is different from saving your kid. I don’t deny that I value my kid differently. But putting your kid on either side in the vanilla trolley problem changes that question too.

But as I said to Phyllo above, it’s not about erasing the guilt but of making the guilt worthwhile. Ask the same question with other forms of suffering: if you were going to have excruciating pain for the rest of your life, and in exchange saved the lives of a million orphans, is it worth it? To me, it seems selfish (if understandable) to let many others suffer or die to avoid suffering myself.

Unless you want to argue that moral suffering (guilt) is somehow different from other forms of suffering (e.g. pain), that avoiding moral suffering is not just a selfish desire but an altruistic good. I don’t buy it, but I’m interested to see the case.

See my response to Ecmandu above. Really valuing life infinitely creates weird consequences that aren’t followed in real life. It might be that you believe that and just aren’t living up to your beliefs, but I think it’s more likely that you’re failing to distinguish between ‘really really large value’ and ‘infinite value’.

So too here: you can either commit one murder, or allow millions to die of malaria. That’s the hypo. Death is inevitable.

But it seems like your answer is that you just can’t compare between two lives. That introduces a ton of practical problems, and just seems like ostrich-ing in a way that, if applied consistently, leaves the world worse off in a ton of real world scenarios (triage? self-defense? child-birth complications? flight 93?).

Carleas, my issue, as this thread has continued, is that it is logically impossible to place value on any life unless it is infinite… i.e. everyone obliviates, no harm no good. As a proposition for the value of life, it must in some way continue eternally.

I also find it interesting that you are trying to frame this as a one-off while querying multiple people about how they’d respond, while suggesting in your distinction that it only applies to one person, one time, and not everyone.

I believe you have used the phrase value to society. That must entail deontological ideas.

You cannot evaluate the consequences without a priori deontological values; you have no criteria to work with. This health care proposal leads to X. X is bad because Y. Ask enough questions and you get down to deontology, or else it is not morals at all: it is simply an analysis of results, with no way to argue that result set X is better than result set Y. It could do that in terms of, say, goal A, but then we must argue why goal A is better, and again we turn to deontological criteria and rules.

The money issue does not create this problem for consequentialism. It is not a special case. It is inherent in consequentialism.

Even if, say, you do not want to argue that killing a neighbor’s baby is a priori immoral, and when pressed in questioning you say instead, “well, here’s what happens when you allow people to just up and kill other people’s babies,” you will then be describing things like loss of social cohesion, chaos, feuds, and rampant revenge killings. You will either justify these as negative in terms of new sets of consequences, keeping your Socratic interlocutor working as long as you can until you finally admit some deontological root, or accidentally slip and simply present, say, societal chaos as an obvious bad.

Otherwise consequentialism is not part of ethics or morality and you are talking about something like science.

It is working out the probable effects of a phenomenon, like meteorological science, say.

To call it consequentialism heavily implies that it is not mere statistical prediction of effects, but includes value judgments, and at root these will rest on deontological evaluations like: unnecessary suffering should be avoided, life is precious or has value, people getting along is better than people not getting along. Not simply because of what these LEAD TO but because of what they are.

A consequentialist (a misleading term if considered distinct from deontologist, I would say) tends to have more flexibility than the classic Abrahamist deontologist: well, I might do that if it led to good effects or fewer bad ones. Fewer things are prohibited a priori.

But actually the consequentialist is really just another deontologist, just one with more abstract deontological values, which he or she uses to evaluate consequences. But those base values will not come into question; they will be seen as givens. Most open deontologists will still evaluate consequences, and if they are not traditional religious deontologists, they may even do this as much as those who identify as consequentialists. Even Abrahamists will look at effects and argue in meetings about why policy Y leads to bad things. Given that this latter group tends to have more specific deontologies, they may also argue from the root more often, but this is not inherent in deontology, since, as I would argue, consequentialists are all deontologists.

If they are not, then they are not actually consequentialists in the philosophical sense. They are predictors, with no way to justify why anyone, including themselves, should choose one set of outcomes over another.

Unless they want to argue from preference. And that then puts them in another category and another branch of philosophy and they should probably call themselves something else.

Murder is not. That is the difference.

What is guilt but the feeling that you have done something wrong?

Therefore, it’s contrary to the idea that you did something that was worth it. If you felt that it was worth it, then you would not feel guilty.

Guilt indicates that you think that it was not worth it.

Your kid is someone’s “random person”.

If you think that it’s okay to kill a random kid, then you are agreeing that it’s okay for someone to kill your kid.

You’re trying to avoid that implication by calling this a one-off event.

I’m still awaiting the X on his kid’s life, since there is an X on someone else’s kid’s life. He can pick his price for surrendering his child’s life for the greater good, which he suggests would be well worth it. What’s good for the goose is good for the gander and all that.

When these ethical systems are implemented, some authority sets the standard price of “a kid”. Then society can use that to evaluate whether some action was moral or immoral. If you are not in sync with that price, then you are being immoral.

Well put.

I would add also that we have not mentioned the ramifications or essence of doing the bidding of evil. The one who paid. We did the bidding of evil and this evil benefitted in some way. I don’t know what that evil is, what organization or individual. But someone/thing made what it felt was a good purchase.

There is some value there and it is also going to be very hard to track.

One is also, since secrets get out if this even is a secret, sending ripples through one’s family and peers, that you, this decent person they know, was willing to do this. What effects does that have?

So how much of your income do you donate to charities? And how much could you? Could you get a roommate, cut the rent in half, and send that out? Are you down to the bare minimum of things and expenses yet?

We don’t need the ornate example of being paid to kill.

This seems like a semantic point. Consequentialism, as I am using it, is the class of moral systems that judge the morality of an action by its outcome (or intended outcome). Deontology, as I am using it, is the class of moral systems that judge the morality of an action based on rules. There are systems that qualify as both (e.g. rule utilitarianism). You can cast “bring about the best outcome” as a rule and say that all consequentialism is deontology, but it doesn’t do a lot for clarity.

Perhaps you bring this up to say that consequentialism is at its base just as arbitrary as any more concrete deontological system, because you have to find some ultimate value on which to judge consequences, and that’s no better or worse than a value like “don’t kill”. One way to reject that claim is to argue that a consequentialist system is logically or empirically necessary, that it is a branch of math or science. I don’t think this makes it any less a real flavor of moral philosophy, as you claim – it’s ultimately trying to answer moral questions – and I don’t see what labeling it ‘science’ does to its force.

This is just question begging. Murder is just one form of death. For consequentialism, the only difference is in including guilt etc. in the tally of outcomes.

I don’t think that’s true. As I argued above, you can feel guilty about things you know to be right. You can also feel guilty about things you haven’t done: Wendy notes that she has experienced this, and the phenomenon of survivor’s guilt is well attested.

Guilt is an emotion, it isn’t rational and it isn’t drawing from some mystical layer of reality that tells you about right and wrong. You can be mistakenly guilty, and can simultaneously feel guilty and know that you did the right thing.

These are non-sequiturs. Yes, some specific people are worth significantly more to me than “a random person”, just as some specific stocks are priced significantly higher than the expected value of a random stock. These claims aren’t in tension.

And the line of argument doesn’t weigh in either direction: if we put a specific, high cost thing on either side of the scale, it changes the outcome in that direction. That isn’t at all surprising.

It’s just a non-sequitur.

Please don’t. If the thought experiment had a different set of givens, we would get a different outcome, granted. That tells us nothing about the concept the thought experiment is trying to isolate.

I agree. As I said in my second post in this thread, trying (unsuccessfully) to move this discussion away from the “ornate example”: