Moral Beliefs as Prices

Blasted irregular verbs! Thanks for the catch, I’ll correct it.

Again, I’d ask if this is true of the vanilla trolley problem, and if not, what is the distinction?

There are two separate questions here:

  1. Are there certain goods that can’t be weighed against each other? So, we might say that the vanilla trolley problem can’t be answered, because lives are special and can’t be compared, and (x) lives are always worth the same as (y) lives, no matter what values we put in (x) and (y).
  2. Is there a special moral calculus for mediating such comparisons through money, i.e. if it’s OK to cause (a) harm for (b) benefit, is it OK to be paid to do (a) harm if the money is used to buy (b) benefit? To me, this does not seem like a different case: if I can barter (a) for (b), I can sell (a) to buy (b).

KT, You make an interesting point here, although I think you and I may use the term “price” differently. I don’t see any contradiction in different people having different prices for things, and I don’t think “(x) is the price of (a)” is a normative claim, but an empirical one that is dependent on who the buyer and seller are.

A Ferrari might sell for $200k on the open market, but to me it’s worth the resale value because I have no use for a supercar; if I can’t resell it or scrap it for parts, it has negative value: I would pay to avoid owning a Ferrari that can only be used as a car. But that’s just saying that, knowing what money is and what it can buy, knowing what a Ferrari is and how it would function in my life, I would prefer a world where I have less money and no Ferrari to a world where I have more money and a Ferrari-shaped ball-and-chain.

But that claim isn’t deontological, because it’s not about what people should value. My particular life circumstances make a Ferrari very low value to me (I live in a crowded city, parking is expensive and crime is high, etc.).

But I do think this hypothetical betrays an instinctual deontology in most consequentialists. If you accept that consequences make the morals, and you agree that World A is better than World B, then the question is easy. But comparing money and morals feels like crossing some line that we have a duty not to cross. And in rejecting that, I am swallowing what I recognize to be a bitter pill of consequentialism.

I answered your question obliquely, by pointing out that putting “you and your kid” on either side of the equation changes the question. Killing a random person is different from killing your kid; saving a random person is different from saving your kid. I don’t deny that I value my kid differently. But putting your kid on either side in the vanilla trolley problem changes that question too.

But as I said to Phyllo above, it’s not about erasing the guilt but about making the guilt worthwhile. Ask the same question with other forms of suffering: if you were going to have excruciating pain for the rest of your life, and in exchange saved the lives of a million orphans, is it worth it? To me, it seems selfish (if understandable) to let many others suffer or die to avoid suffering myself.

Unless you want to argue that moral suffering (guilt) is somehow different from other forms of suffering (e.g. pain), that avoiding moral suffering is not just a selfish desire but an altruistic good. I don’t buy it, but I’m interested to see the case.

See my response to Ecmandu above. Really valuing life infinitely creates weird consequences that aren’t followed in real life. It might be that you believe that and just aren’t living up to your beliefs, but I think it’s more likely that you’re failing to distinguish between ‘really really large value’ and ‘infinite value’.

So too here: you can either commit one murder, or allow millions to die of malaria. That’s the hypo. Death is inevitable.

But it seems like your answer is that you just can’t compare between two lives. That introduces a ton of practical problems, and just seems like ostrich-ing in a way that, if applied consistently, leaves the world worse off in a ton of real world scenarios (triage? self-defense? childbirth complications? Flight 93?).

Carleas, my issue, as this thread has continued, is that it is logically impossible to place value on any life unless it is infinite… i.e., everyone obliviates, no harm, no good. As a proposition for the value of life, it must in some way continue eternally.

I also find it interesting that you are trying to frame this as a one-off while querying multiple people about how they’d respond, while suggesting in your distinction that it only applies to one person one time and not everyone.

I believe you have used the phrase “value to society”. That must entail deontological ideas.

You cannot evaluate the consequences without a priori deontological values. You have no criteria to work with. This health care proposal leads to X. X is bad because Y. Ask enough questions and you get down to deontology. Or it is not morals. It is simply an analysis of results, and there is no way to argue that result set X is better than result set Y. It could do that in terms of, say, goal A. But then we must argue why goal A is better, and again we turn to deontological criteria and rules.

The money issue does not create this problem for consequentialism. It is not a special case. It is inherent in consequentialism.

Even if, say, you do not want to argue that killing a neighbor’s baby is a priori immoral, and when pressed on that point you say instead, “Well, here’s what happens when you allow people to just up and kill other people’s babies,” you will then be describing things like loss of social cohesion, chaos, feuds, and rampant revenge killings. You will either justify these as negative in terms of new sets of consequences, keeping your Socratic interlocutor working as long as you can until you finally admit some deontological root, or you will accidentally slip and simply present, for example, societal chaos as an obvious bad.

Otherwise consequentialism is not part of ethics or morality and you are talking about something like science.

It is working out the probable effects of a phenomenon. Like meteorological science, say.

To call it consequentialism heavily implies that it is not mere statistical prediction of effects, but includes value judgments, and at root these will have deontological evaluations like: unnecessary suffering should be avoided, life is precious or has value, if people can get along this is better than if they do not. Not simply because of what these LEAD TO but because of what they are.

A consequentialist - a misleading term if considered distinct from deontologists, I would say - tends to have more flexibility than the classic Abrahamist deontologist. Well, I might do that if it led to good effects or less bad ones. Fewer things prohibited a priori.

But actually the consequentialist is really just another deontologist, just one with more abstract deontological values, which he or she uses to evaluate consequences. But those base values will not come into question. They will be seen as givens. Most open deontologists will still evaluate consequences. And if they are not traditional religious deontologists, they may even do this as much as those who identify as consequentialists. But even Abrahamists will look at effects and argue in meetings about why policy Y leads to bad stuff. Given that this latter group tends to have more specific deontologies, they may also argue from the root more often, but this is not inherent in deontology, since, well, consequentialists are all, I would argue, deontologists.

If they are not, then they are not actually consequentialists in the philosophical sense. They are predictors, with no way to justify why anyone, including themselves, should choose one set of outcomes over another.

Unless they want to argue from preference. And that then puts them in another category and another branch of philosophy and they should probably call themselves something else.

Murder is not. That is the difference.

What is guilt but the feeling that you have done something wrong?

Therefore, it’s contrary to the idea that you did something that was worth it. If you felt that it was worth it, then you would not feel guilty.

Guilt indicates that you think that it was not worth it.

Your kid is someone’s “random person”.

If you think that it’s okay to kill a random kid, then you are agreeing that it’s okay for someone to kill your kid.

You’re trying to avoid that implication by calling this a one-off event.

I’m still awaiting the X on his kid’s life, since there is an X on someone else’s kid’s life. He can pick his price for surrendering his child’s life for the greater good, which he suggests would be well worth it. What’s good for the goose is good for the gander and all that.

When these ethical systems are implemented, some authority sets the standard price of “a kid”. Then society can use that to evaluate whether some action was moral or immoral. If you are not in sync with that price, then you are being immoral.

Well put.

I would add also that we have not mentioned the ramifications or essence of doing the bidding of evil - the one who paid. We did the bidding of evil and this evil benefitted in some way. I don’t know what that evil is, what organization or individual. But someone/thing made what it felt was a good purchase.

There is some value there and it is also going to be very hard to track.

One is also, since secrets get out if this even is a secret, sending ripples through one’s family and peers: that you, this decent person they know, were willing to do this. What effects does that have?

So how much of your income do you donate to charities? And how much could you? Could you get a roommate and cut the rent in half, send that out? Are you down at the bare minimum of things and expenses, yet?

We don’t need the ornate example of being paid to kill.

This seems like a semantic point. Consequentialism, as I am using it, is the class of moral systems that judge the morality of an action on the outcome (or intended outcome). Deontology, as I am using it, is the class of moral systems that judge the morality of an action based on rules. There are systems that qualify as both (e.g. rule utilitarianism). You can cast “bring about the best outcome” as a rule and say that all consequentialism is deontology, but it doesn’t do a lot for clarity.

Perhaps you bring this up to say that consequentialism is at its base just as arbitrary as any more concrete deontological system, because you have to find some ultimate value on which to judge consequences, and that’s no better or worse than a value like “don’t kill”. One way to reject that claim is to argue that a consequentialist system is logically or empirically necessary, that it is a branch of math or science. I don’t think this makes it any less a real flavor of moral philosophy, as you claim – it’s ultimately trying to answer moral questions – and I don’t see what labeling it ‘science’ does to its force.

This is just question begging. Murder is just one form of death. For consequentialism, the only difference is in including guilt etc. in the tally of outcomes.

I don’t think that’s true. As I argued above, you can feel guilty about things you know to be right. You can also feel guilty about things you haven’t done: Wendy notes that she has experienced this, and the phenomenon of survivor’s guilt is well attested.

Guilt is an emotion, it isn’t rational and it isn’t drawing from some mystical layer of reality that tells you about right and wrong. You can be mistakenly guilty, and can simultaneously feel guilty and know that you did the right thing.

These are non-sequiturs. Yes, some specific people are worth significantly more to me than “a random person”, just as some specific stocks are priced significantly higher than the expected value of a random stock. These claims aren’t in tension.

And the line of argument doesn’t weigh in either direction: if we put a specific, high cost thing on either side of the scale, it changes the outcome in that direction. That isn’t at all surprising.

It’s just a non-sequitur.

Please don’t. If the thought experiment had a different set of givens, we would get a different outcome, granted. That tells us nothing about the concept the thought experiment is trying to isolate.

I agree. As I said in my second post in this thread, trying (unsuccessfully) to move this discussion away from the “ornate example”:

You omitted the second and third parts of my post:

If you are prepared to have your kid killed in that consequentialist society, then just say so and we won’t need to raise the point again.

Humans feel emotions. It’s part of our biology. If you want to pretend that’s not a “layer of reality” and that humans ought to only reason and not to feel, then you’re talking about some fantasy world.

Feeling bad about what you did to someone means, at the very least, that you are ambivalent about what you did. Saying that guilt, since it is an emotion, isn’t rational, is confused.

There are no rational morals. There are just moral values built up based on what we care about and what we dislike. Without emotions there are no morals.

There are just practical judgments.

Do this and it leads to this. And no way to determine either what you want or what we think is good.

Unless you believe in God, which clearly you do not, you can have codes of behavior, like the rules of hockey, without emotions, but no morals.

And this is not a shot at atheists, since every one I’ve ever met is informed by their emotions in forming, justifying and understanding their morals.

Without emotions one does not function rationally in society. See Damasio the neuroscientist. Sure, doing math, you don’t need emotions to be rational, but with other humans and society you will fail to make good decisions and even to manage yourself rationally without constant input from the limbic system. Damasio goes into what happens when people with damage no longer have the limbic system tied into the loop.

So to say that guilt and emotions have no deep mystical…etc., is a deep confusion.

Without emotions, there is no such thing as morals. There is just behavior and tactics.

And sure, one can feel guilt for things that are not wrong. But when you start killing people for money, you are giving your own mirror neurons and your own limbic system and your own yearning for closeness and good treatment the finger. You pretty much stopped being a social mammal. Now psychotics can be like this. And they lead limited lives.

Alright then, Carleas, you are still surprisingly holding your ground. So let’s raise the stakes…

If you knew for a fact that torturing one random person forever would make everyone else happy forever, but that if you didn’t torture that random person, everyone besides them would be tortured forever…

Do you see how absurd this looks?

For one, it’s absurd. It’s a false dichotomy.

People have pointed out repeatedly that if it’s an unknown stranger, that person could be way more valuable than you… who takes a trillion dollars and gives it to charity (a perpetual system) - this random person may solve all poverty issues for all beings in existence, which a trillion dollars can’t do.

Ecmandu, however, Jesus is such a figure: he took the totality of humanity on his shoulders forever, given that the Passion was eternal, never ending. The contradictory spiritual payoff was in line with the level of contradiction still sustained by the sign of the cross.

What consequentialist society?

Let me again compare “random person” to “random stock”. A “random stock” has a calculable expected value. If we average all the prices of all the stocks, we get the expected value of a random stock. It’s going to be a lot less than the price of many individual stocks, even though in theory if we picked one stock at random it could be the most expensive stock possible. The expected value combines the values of all outcomes with the likelihood of those outcomes. The expected value of a roll of the dice at the craps table is negative. The expected value of a fair coin flip is zero. The expected value of the random person functions just the same way.

I hope that this comparison makes clear that “BUT SOMEONE WITH VERY HIGH VALUE IS ONE OF THE POSSIBLE OUTCOMES!!!” is not a rebuttal. So long as every person in the set from which we’re picking has a finite value, the expected value of a random person remains finite even if very valuable people are included.
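To make the arithmetic concrete, here is a minimal sketch in Python; the values and probabilities are made-up illustrations, not figures from this thread. It shows that the expected value of a uniform random pick from any finite set of finite values is just the (probability-weighted) average, and it stays finite even when the set contains a hugely valuable outlier.

```python
# Minimal sketch of expected value; all numbers below are illustrative assumptions.

def expected_value(outcomes):
    """Sum of (probability * value) over all possible outcomes."""
    return sum(p * v for p, v in outcomes)

# A fair coin flip that pays +1 for heads and -1 for tails has expected value 0.
coin = [(0.5, 1.0), (0.5, -1.0)]
print(expected_value(coin))  # 0.0

# Picking one "random person" uniformly from a population where everyone has a
# finite value: even with one enormously valuable outlier, the expected value
# is just the plain average, which is still finite.
values = [1.0] * 999 + [1_000_000.0]            # one very valuable outlier
population = [(1 / len(values), v) for v in values]
print(expected_value(population))               # ~1001.0, finite
```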

I said that guilt doesn’t “come from” a “mystical layer of reality”. Phyllo seemed to be claiming that we could draw a syllogism like, “I feel bad, therefore we know what I did was wrong.” That syllogism doesn’t work, because guilt can be mistaken. Like hot sauce feels like burning and mint feels like cold, dumping a girlfriend can feel like being cruel even when we know it’s the most compassionate thing to do.

You may feel guilty, but it’s just another negative value that can be priced. It does not tell us very much about morality or reality.

“I priced it, therefore we know what I did was right”.

“I reasoned it out, therefore we know what I did was right”.

Your pricing can be mistaken and your reasoning can be mistaken.

Guess that pricing and reasoning can’t be used. Oh well, back to the drawing board.

Carleas, as I stated earlier (and can you refute this?):

If every being is finite, there is no value to them, oblivion forever - no wrong, no right - certainly not your formula.

Value only truly comes into the picture of beings that are immortal or eternal in some way.

The moment you say that all lives have finite value, you refute your argument.

The moment you say they have infinite value, your argument becomes much more complex than you present.

Sorry, I took us off topic, I think your point is good, and I think mine didn’t deserve the response.

Go back to:

I make two pretty bad points here that don’t advance the discussion.

Forgive me for leading us astray; on reflection, I don’t think I’m clear on what role guilt is playing in the argument. If one chooses World A, one may feel guilty for the rest of one’s life. I think there are a few questions here, but I’m not sure any change the outcome.
Case G: You do feel guilty your whole life
Case N: You don’t feel guilty your whole life

Case G can be the case if
(1) Guilt is compatible with having done the right thing, and you have done the right thing; OR
(2) Guilt is incompatible with having done the right thing, and you have done the wrong thing.

Case N can be the case if
(1) Guilt is compatible with having done the right thing, and you have done the right thing.
(2) Guilt is incompatible with having done the right thing, and you have done the right thing.

This is how I understand your argument:
(a) we know that we would feel guilty killing a stranger (We are in case G);
(b) we know that guilt is incompatible with having done the right thing (so if we are in case G, we are wrong);
therefore
(c) we know that killing a stranger is wrong.

Is that what you’re saying?
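For what it’s worth, here is a minimal sketch formalizing that reconstruction; the propositional encoding (g for “you feel guilt”, r for “what you did was right”) is an illustrative assumption, not anything stated in the thread. A brute-force truth-table check shows the argument form is valid, so the whole disagreement comes down to whether premise (b) is true.

```python
# Minimal sketch: truth-table check of the reconstructed argument.
# Encoding is an illustrative assumption: g = "you feel guilt", r = "you did the right thing".
from itertools import product

def valid(premises, conclusion):
    """Valid iff no assignment makes every premise true and the conclusion false."""
    for g, r in product([True, False], repeat=2):
        if all(p(g, r) for p in premises) and not conclusion(g, r):
            return False
    return True

premises = [
    lambda g, r: g,               # (a) we would feel guilty (we are in Case G)
    lambda g, r: not (g and r),   # (b) guilt is incompatible with having done the right thing
]
conclusion = lambda g, r: not r   # (c) what we did was wrong

print(valid(premises, conclusion))  # True: the form is valid; the dispute is over premise (b)
```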

I’m open to seeing some math here. In math as I know it, assigning infinities to constants has the weird outcomes I describe (e.g., you sending me all your money to help that guy with the cough). There may be math that says otherwise, but I predict that when you actually do that math out, you’ll find that it works about the same way that the usual math does without the infinities.
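As one illustration of those weird outcomes, here is a minimal sketch using Python’s float('inf'); the scenario numbers are invented for illustration only. Once a value is treated as infinite, no finite cost or benefit changes the total, and comparisons between two bundles that each contain an infinite value collapse.

```python
# Minimal sketch of what infinite values do to the arithmetic; numbers are illustrative assumptions.
INF = float("inf")

# If one life is worth infinitely much, any bundle containing it has infinite value,
# no matter what finite costs are subtracted or benefits added.
life = INF
print(life - 1_000_000_000_000)      # inf: no finite cost ever changes the total

# And two bundles that each contain an infinite value can no longer be ranked,
# even when one "obviously" seems better.
save_one = INF
save_one_million = 1_000_000 * INF   # still just inf
print(save_one_million > save_one)   # False: the comparison collapses
```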