This seems like a semantic point. Consequentialism, as I am using it, is the class of moral systems that judge the morality of an action based on its outcome (or intended outcome). Deontology, as I am using it, is the class of moral systems that judge the morality of an action based on rules. There are systems that qualify as both (e.g. rule utilitarianism). You can cast “bring about the best outcome” as a rule and say that all consequentialism is deontology, but that doesn’t do much for clarity.
Perhaps you bring this up to say that consequentialism is at its base just as arbitrary as any more concrete deontological system, because you have to find some ultimate value by which to judge consequences, and that’s no better or worse than a value like “don’t kill”. One way to reject that claim is to argue that a consequentialist system is logically or empirically necessary, that it is a branch of math or science. I don’t think this makes it any less a real flavor of moral philosophy, as you claim – it’s ultimately trying to answer moral questions – and I don’t see what labeling it ‘science’ does to its force.
This is just question-begging. Murder is just one form of death. For consequentialism, the only difference is that guilt, grief, and the like get included in the tally of outcomes.
I don’t think that’s true. As I argued above, you can feel guilty about things you know to be right. You can also feel guilty about things you haven’t done: Wendy notes that she has experienced this, and the phenomenon of survivor’s guilt is well attested.
Guilt is an emotion, it isn’t rational and it isn’t drawing from some mystical layer of reality that tells you about right and wrong. You can be mistakenly guilty, and can simultaneously feel guilty and know that you did the right thing.
These are non sequiturs. Yes, some specific people are worth significantly more to me than “a random person”, just as some specific stocks are priced significantly higher than the expected value of a random stock. These claims aren’t in tension.
And the line of argument doesn’t weigh in either direction: if we put a specific, high-cost thing on either side of the scale, it tips the outcome in that direction. That isn’t at all surprising.
It’s just a non sequitur.
Please don’t. If the thought experiment had a different set of givens, we would get a different outcome, granted. But that tells us nothing about the concept the thought experiment is trying to isolate.
I agree. As I said in my second post in this thread, trying (unsuccessfully) to move this discussion away from the “ornate example”: