Moral Beliefs as Prices

I believe you have used the phrase “value to society”. That must entail deontological ideas.

You cannot evaluate the consequences without a priori deontological values; otherwise you have no criteria to work with. This health care proposal leads to X. X is bad because Y. Ask enough questions and you get down to deontology. Or it is not morals - it is simply an analysis of results, and there is no way to argue that result set X is better than result set Y. It could do that in terms of, say, goal A. But then we must argue why goal A is better, and again we turn to deontological criteria and rules.

The money issue does not create this problem for consequentialism. It is not a special case. It is inherent in consequentialism.

Even if, say, you do not want to argue that killing a neighbor’s baby is a priori immoral, and when brought to that point under questioning you say instead, “Well, here’s what happens when you allow people to just up and kill other people’s babies,” you will then be describing things like loss of social cohesion, chaos, feuds, rampant revenge killings. You will either justify these as negative in terms of new sets of consequences, keeping your Socratic interlocutor working as long as you can until you finally admit some deontological root, or you will accidentally slip and simply present, for example, societal chaos as an obvious bad.

Otherwise consequentialism is not part of ethics or morality and you are talking about something like science.

It is working out the probable effects of a phenomenon. Like meteorological science, say.

To call it consequentialism heavily implies that it is not mere statistical prediction of effects, but includes value judgments, and at root these will be deontological evaluations like: unnecessary suffering should be avoided; life is precious or has value; if people can get along, this is better than if they do not. Not simply because of what these LEAD TO but because of what they are.

A consequentialist - a misleading term if considered distinct from deontologist, I would say - tends to have more flexibility than the classic Abrahamist deontologist: well, I might do that if it led to good effects or to less bad ones. Fewer things are prohibited a priori.

But actually the consequentialist is really just another deontologist - one with more abstract deontological values, which he or she uses to evaluate consequences. But those base values will not come into question; they will be seen as givens. Most open deontologists will still evaluate consequences. And if they are not traditional religious deontologists, they may even do this as much as those who identify as consequentialists. But even Abrahamists will look at effects and argue in meetings about why policy Y leads to bad stuff. Given that this latter group tends to have more specific deontologies, they may also argue from the root more often, but this is not inherent in deontology, since, well, consequentialists are all, I would argue, deontologists.

If they are not, then they are not actually consequentialists in the philosophical sense. They are predictors, with no way to justify why anyone, including themselves, should choose one set of outcomes over another.

Unless they want to argue from preference. And that then puts them in another category and another branch of philosophy and they should probably call themselves something else.

Murder is not. That is the difference.

What is guilt but the feeling that you have done something wrong?

Therefore, it’s contrary to the idea that you did something that was worth it. If you felt that it was worth it, then you would not feel guilty.

Guilt indicates that you think that it was not worth it.

Your kid is someone’s “random person”.

If you think that it’s okay to kill a random kid, then you are agreeing that it’s okay for someone to kill your kid.

You’re trying to avoid that implication by calling this a one-off event.

I’m still awaiting the X on his kid’s life, since there is an X on someone else’s kid’s life. He can pick his price for surrendering his child’s life for the greater good, which he suggests would be well worth it. What’s good for the goose is good for the gander and all that.

When these ethical systems are implemented, some authority sets the standard price of “a kid”. Then society can use that to evaluate whether some action was moral or immoral. If you are not in sync with that price, then you are being immoral.

Well put.

I would add also that we have not mentioned the ramifications or essence of doing the bidding of evil - the one who paid. We did the bidding of evil, and this evil benefited in some way. I don’t know what that evil is, what organization or individual. But someone or something made what it felt was a good purchase.

There is some value there and it is also going to be very hard to track.

One is also - since secrets get out, if this is even a secret - sending ripples through one’s family and peers: you, this decent person they know, were willing to do this. What effects does that have?

So how much of your income do you donate to charities? And how much could you? Could you get a roommate, cut the rent in half, and send the savings out? Are you down at the bare minimum of things and expenses yet?

We don’t need the ornate example of being paid to kill.

This seems like a semantic point. Consequentialism, as I am using it, is the class of moral systems that judge the morality of an action on the outcome (or intended outcome). Deontology, as I am using it, is the class of moral systems that judge the morality of an action based on rules. There are systems that qualify as both (e.g. rule utilitarianism). You can cast “bring about the best outcome” as a rule and say that all consequentialism is deontology, but it doesn’t do a lot for clarity.

Perhaps you bring this up to say that consequentialism is at its base just as arbitrary as any more concrete deontological system, because you have to find some ultimate value on which to judge consequences, and that’s no better or worse than a value like “don’t kill”. One way to reject that claim is to argue that a consequentialist system is logically or empirically necessary, that it is a branch of math or science. I don’t think this makes it any less a real flavor of moral philosophy, as you claim – it’s ultimately trying to answer moral questions – and I don’t see what labeling it ‘science’ does to its force.

This is just question begging. Murder is just one form of death. For consequentialism, the only difference is in including guilt etc. in the tally of outcomes.

I don’t think that’s true. As I argued above, you can feel guilty about things you know to be right. You can also feel guilty about things you haven’t done: Wendy notes that she has experienced this, and the phenomenon of survivor’s guilt is well attested.

Guilt is an emotion, it isn’t rational and it isn’t drawing from some mystical layer of reality that tells you about right and wrong. You can be mistakenly guilty, and can simultaneously feel guilty and know that you did the right thing.

These are non-sequiturs. Yes, some specific people are worth significantly more to me than “a random person”, just as some specific stocks are priced significantly higher than the expected value of a random stock. These claims aren’t in tension.

And the line of argument doesn’t weigh in either direction: if we put a specific, high cost thing on either side of the scale, it changes the outcome in that direction. That isn’t at all surprising.

It’s just a non-sequitur.

Please don’t. If the thought experiment had a different set of givens, we would get a different outcome, granted. That tells us nothing about the concept the thought experiment is trying to isolate.

I agree. As I said in my second post in this thread, trying (unsuccessfully) to move this discussion away from the “ornate example”:

You omitted the second and third parts of my post:

If you are prepared to have your kid killed in that consequentialist society, then just say so and we won’t need to raise the point again.

Humans feel emotions. It’s part of our biology. If you want to pretend that’s not a “layer of reality” and that humans ought to only reason and not to feel, then you’re talking about some fantasy world.

Feeling bad about what you did to someone means, at the very least, that you are ambivalent about what you did. Saying that guilt, since it is an emotion, isn’t rational, is confused.

There are no rational morals. There are just moral values built up based on what we care about and what we dislike. Without emotions there are no morals.

There are just practical judgments.

Do this and it leads to this. And no way to determine either what you want or what we should think is good.

Unless you believe in God, which clearly you do not, you can have codes of behavior, like the rules of hockey, without emotions, but no morals.

And this is not a shot at atheists, since every one I’ve ever met is informed by their emotions in forming, justifying and understanding their morals.

Without emotions one does not function rationally in society. See Damasio the neuroscientist. Sure, doing math, you don’t need emotions to be rational, but with other humans and society you will fail to make good decisions, and even to manage yourself rationally, without constant input from the limbic system. Damasio goes into what happens when people with damage no longer have the limbic system tied into the loop.

So to say that guilt and emotions have no deep mystical…etc., is a deep confusion.

Without emotions, there is no such thing as morals. There is just behavior and tactics.

And sure, one can feel guilt for things that are not wrong. But when you start killing people for money, you are giving your own mirror neurons, your own limbic system, and your own yearning for closeness and good treatment the finger. You have pretty much stopped being a social mammal. Now, psychotics can be like this. And they lead limited lives.

Alright then, carleas, you are still surprisingly holding your ground. So let’s raise the stakes…

If you knew for a fact that torturing one random person forever would make everyone else happy forever, but that if you didn’t torture that random person forever, everyone besides them would be tortured forever…

Do you see how absurd this looks?

For one, it’s absurd. It’s a false dichotomy.

People have pointed out repeatedly that if it’s an unknown stranger, that person could be way more valuable than you, the one who takes a trillion dollars and gives it to charity (a perpetual system) - this random person may solve all poverty issues for all beings in existence, which a trillion dollars can’t do.

Ecmandu, however, Jesus is such a figure: he took the totality of humanity on his shoulders forever, given that the Passion was eternal, never ending. The contradictory spiritual payoff was in line with the level of contradiction still sustained, by the sign of the cross.

What consequentialist society?

Let me again compare “random person” to “random stock”. A “random stock” has a calculable expected value. If we average all the prices of all the stocks, we get the expected value of a random stock. It’s going to be a lot less than a lot of stocks, even though in theory if we picked one stock at random it could be the most expensive stock possible. The expected value combines the values of all outcomes with the likelihood of those outcomes. The expected value of a roll of the dice at the craps table is negative. The expected value of a fair coin flip is zero. The expected value of the random person functions just the same way.

I hope that this comparison makes clear that “BUT SOMEONE WITH VERY HIGH VALUE IS ONE OF THE POSSIBLE OUTCOMES!!!” is not a rebuttal. So long as every person in the set from which we’re picking has a finite value, the expected value of a random person remains finite even if very valuable people are included.
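The arithmetic behind that claim can be sketched in a few lines. The numbers below are entirely hypothetical, chosen only to illustrate the structure: suppose almost everyone is “priced” at some baseline, and one person in a million is worth a billion times that.

```python
# Expected value = sum over possible outcomes of (value * probability).
# All numbers here are made up purely for illustration.
common_value = 1.0            # baseline "price" of almost everyone
rare_value = 1_000_000_000.0  # a very valuable person IS a possible outcome
p_rare = 1 / 1_000_000        # ...but an unlikely one

expected = common_value * (1 - p_rare) + rare_value * p_rare
# The outlier raises the expectation, but does not make it infinite.
```

Even with the billion-unit outlier in the set, the expectation works out to roughly 1001 - much larger than the baseline, but still finite, which is all the argument needs.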

I said that guilt doesn’t “come from” a “mystical layer of reality”. Phyllo seemed to be claiming that we could draw a syllogism like, “I feel bad, therefore we know what I did was wrong.” That syllogism doesn’t work, because guilt can be mistaken. Like hot sauce feels like burning and mint feels like cold, dumping a girlfriend can feel like being cruel even when we know it’s the most compassionate thing to do.

You may feel guilty, but it’s just another negative value that can be priced. It does not tell us very much about morality or reality.

“I priced it, therefore we know what I did was right”.

“I reasoned it out, therefore we know what I did was right”.

Your pricing can be mistaken and your reasoning can be mistaken.

Guess that pricing and reasoning can’t be used. Oh well, back to the drawing board.

Carleas, as I stated earlier (and can you refute this?):

If every being is finite, there is no value to them, oblivion forever - no wrong, no right - certainly not your formula.

Value only truly comes into the picture of beings that are immortal or eternal in some way.

The moment you say that all lives have finite value, you refute your argument.

The moment you say they have infinite value, your argument becomes much more complex than you present.

Sorry, I took us off topic, I think your point is good, and I think mine didn’t deserve the response.

Go back to:

I make two pretty bad points here that don’t advance the discussion.

Forgive me for leading us astray; on reflection I don’t think I’m clear on what role guilt is playing in the argument. If one chooses World A, one may feel guilty for the rest of one’s life. I think there are a few questions here, but I’m not sure any change the outcome.
Case G: You do feel guilty your whole life
Case N: You don’t feel guilty your whole life

Case G can be the case if
(1) Guilt is compatible with having done the right thing, and you have done the right thing; OR
(2) Guilt is incompatible with having done the right thing, and you have done the wrong thing.

Case N can be the case if
(1) Guilt is compatible with having done the right thing, and you have done the right thing.
(2) Guilt is incompatible with having done the right thing, and you have done the right thing.

This is how I understand your argument:
(a) we know that we would feel guilty killing a stranger (We are in case G);
(b) we know that guilt is incompatible with having done the right thing (so if we are in case G, we are wrong);
therefore
(c) we know that killing a stranger is wrong.

Is that what you’re saying?

I’m open to seeing some math here. In math as I know it, assigning infinities to constants has the weird outcomes I describe (e.g., you sending me all your money to help that guy with the cough). There may be math that says otherwise, but I predict that when you actually do that math out, you’ll find that it works about the same way that the usual math does without the infinities.
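For what it’s worth, ordinary floating-point arithmetic bears out the worry about assigning infinities: once any outcome is valued at infinity, every nonzero probability of it swamps the whole expectation. A minimal sketch, with hypothetical numbers:

```python
import math

p = 1e-12                   # vanishingly small probability
finite_ev = 1e9 * p         # a huge-but-finite value contributes a tiny amount
infinite_ev = math.inf * p  # an infinite value contributes infinity regardless

# Any expectation that includes an infinite-valued outcome is itself
# infinite, no matter how improbable that outcome is.
```

So under the usual math, an “infinitely valuable” life doesn’t make the calculation more nuanced; it makes every trade involving it dominate everything else, which is the sort of weird outcome described above.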

Carleas, I wasn’t even trying to “whoosh” you, but apparently, I did.

Think about this:

At some point in time, eventually, everyone obliviates. Zero. Nothing. EVERYONE!

Now here’s the deal. If we know for a fact that everyone obliviates at some point, morality is meaningless. Torture someone for a trillion years, and then they obliviate! Horrid, right? No - after they die, it never existed.

So then you come along with some silly one-off purchase for what? A trillion dollars? That’s laughable in infinity. Especially if we all obliviate as infinity ticks.

So, my point was, unless everyone has consciousness forever, your argument contradicts itself (ultimate oblivion for all beings = no good or bad).

So then we have to argue as if beings are immortal, to even humor your line of thought …

The compelling argument from this standpoint is that everyone picks an unknown stranger to torture forever to make everyone else be in heavenly bliss.

To this, I would say that you don’t understand existence well. That you are incompetent to even make your argument. To suggest that torturing someone forever so everyone else can totally and absolutely enjoy life forever is like talking about pigs flying (hypothetically) so that your inane posture can work out mathematically.

I disagree with this. Morality exists within people, it’s not independent, but it is nonetheless a fact about the world at a specific point in time, in the same way that the meaning of the words I’m using is a fact of the world at this particular moment. After the heat death of the universe, no one will understand what I’ve written today, but these words still have meaning today, and it is true today that, at t_2 = [some time after the heat death of the universe], the statement “At t_1 = June 18, 2018, the words Carleas wrote were meaningful” will be true.

So too will we be able to construct statements that will be true at t_2 of the type that, at t_1, certain actions were wrong or immoral. The point being, those statements remain true after their subjects ‘obliviate’.

And I think that fits with what I’m claiming here: it will also be true at t_2 that, at t_1, some stock was priced at some amount, some good was available at a specific location for a specific price, and, if I’m right, that some specific individual valued some specific moral belief at some specific amount of money.