Edit: since you think we should base morals on ‘survival’, it would be good to define what would count as survival. Warning: I plan to draw odd conclusions from that definition.
Which morality is objective? We have evolved a set of moralities, and some of these moralities hold that we should not survive – anti-natalism – while some consider us parasites to such a degree that we should be eliminated to protect other life. The transhumanists have moralities, or perhaps aesthetics, that want us to choose the way homo sapiens will no longer exist – they are the most likely of the three to win the natural selection battle with other moralities. I can’t see how one can know the objective good, nor can I see that teleological arguments based on evolution lead to any conclusion about what is good with a capital G. We can come up with tactics that might be good for the spreading of our genes, though that does not sound like morals to me. Evolution led to a capacity. That capacity – the portions of our nervous systems, say, that among other things came up with morals – may or may not be adaptive in the long term. And we cannot assign it a purpose. Once this capacity is present, it is clear that it will be applied to all sorts of purposes.
I am not sure if your ‘why’ is teleological here, but this is a bird’s-eye view. Or a view from nowhere. In situ we have a way of creating meaning for ourselves, and that meaning is emotionally evaluated and generated, not bound by any ‘purpose’ in evolution. If there were a purpose in evolution, and it wanted control, it made a mistake when it came up with our capacities and tendencies, since we evaluate and generate morals based on rationality AND emotions. If I am supposed to respect evolution’s goals, it seems to me I must respect the processes and skills it gave me to do things and evaluate things. IOW I have been made such that I mix emotions and rationality, both when I function like a consequentialist and when I function like a deontologist. I find emotions deeply involved in both processes, and I note this in everyone I meet also. We are this way. I don’t see why I should abstract out and respect, in SHOULD terms, evolution’s intent for morals, but ignore evolution’s result in making me/us the way I am/we are.
Who says? How do you know that is good? What if we achieve interstellar travel and kill off lovely, smarter, less nasty species – perhaps all of them? Where can I stand to view the objective good even of our own species? All you are talking about is heuristics for survival. That’s not morality. I feel a bit like when I see physicalists talking about being spiritual. They may have wonderful philosophies of life, be great people, generate sweet and caring ethical codes, etc., but they are not spiritual. That word literally entails other stuff. So morality, too, entails more than heuristics. It includes a partly emotional/desire-based choosing of what goals we want good heuristics for, and often of how we feel about the heuristics themselves. Otherwise it would not be so common to challenge the idea that the ends justify the means. What you describe is certainly not objective morals. It is tactics toward what you consider the one goal, a goal we don’t even know is objectively a good one, though it might be good for us. I consider it an extremely limited goal for us, just one part of what morality covers. But even if it were our only goal, we cannot know if it is a moral one. I mean, who are we to judge the goodness of the human race? Or better put, who are we to think we can judge it objectively and without emotion?
Again teleological. But further, we evolved as creatures that evaluate morals emotionally. If we are going to use a teleological argument, then perhaps we should leave that alone, rather than deciding that we can and should do it only rationally – which I don’t think is possible, in any case. Further, it seems completely irrational to decide how humans relate to each other without making emotional and desire-based evaluations central. I mean, we have to live with all the consequences of those morals, and the emotional consequences will be real and central. For some reason emotions are often treated as quasi-real, and this is often based on their fallibility. First, reason is also fallible; but further, emotions are real. Now, I know that you would not assert that emotions are not real. But note how they somehow end up being moved off the table when they are central to pretty much all the consequences of morals – and also to the process of choosing and evaluating, etc.
I don’t think there is more than a handful of people who think morality is JUST about calculating the survival of the species. So you must then explain how evolution led to us having a belief/method/approach that runs counter to what you are saying. If evolution can give us a ‘should’, that would entail it giving us a ‘should’ about methodology also.
IOW your argument seems to be that since evolution shaped morals, and evolution is all about surviving, morality is about surviving, period. But evolution led to us, and other moral-creating species, making morals about much more. Perhaps you need to note what evolution has selected for: in this case, moral-making animals whose limbic systems are involved in morals at all levels.
Personally, I don’t really care what evolution wants or intends, but I can see what was selected for in our case.
If it somehow turned out that rationality indicated I should kill off my wife after she births our second child – that the best AIs, analyzing all the complex chains of effects, see husbands/fathers doing this as the best heuristic for human survival – OR EVEN if God told me to do it… no way. I won’t. I fail God’s test of Abraham, though I have often wondered if in fact he failed it.
This was an extreme example – though one that fits nicely with our other discussion – but there are all sorts of other moral-like guidelines I would follow regardless of what the best minds said was our best strategy for survival. And if you think that is a problem, blame evolution. Evolution made my moral-making – or, in my case, preference-making – process such that there are things I will not do (even for money, or for what the supposedly detached people with views from nowhere say is moral). And there are things I will do that may go against their supposed best heuristics.
See above about emotions always being in the mix of creating, applying, modifying, justifying… etc. That is the tactic we evolved.
They are all emotionally invested in finding the correct outcome. And they likely are all interested in their guess of direction – perhaps at this stage a vague one – being the right guess. And the one who solves it will have been extremely emotionally involved in finding the answer. IOW I wasn’t arguing that Carleas shouldn’t participate, but rather trying to highlight – to corner you – that you are likely driven by emotions, even in this telling us we should prioritize survival because that is what evolution gave us morality for. This likely seems like a view from nowhere, but it absolutely cannot be once it is couched as a should. Further, the result of the Riemann hypothesis is not like the result of a morality argument, or of a decision about how we should, for example, relate to each other. The latter has to do with what we like, love, hate, desire, are repulsed by, and those emotional reactions will guide our personal and collective decisions about what is moral. In fact they must. If they are not involved, we may all end up working in some dystopian panopticon-tyranny that seems efficient and, at least in the short term, seems to completely guarantee survival, but which we hate every waking minute of living in. For example. I think there are other problems that will arise. Some can be based on emotions now having an unremovable part in what we will even want to survive in, thus making emotions a selection factor, like it or not. Others need not even be bound to your fundamental should – that we must base our morals on what we think evolution intended morals to be for.
Often the people who come up with morals in what they think is a view from nowhere – objective, disinterested – end up making horrible decisions. I would not use that as an argument against using rationality, and I don’t think it works as an argument against including emotions. As far as I can see, the people who judge emotions and present themselves as avoiding their influence are less aware of how their emotions are influencing their choices than the people who do not present their ideas this way. But further, those groups that make decisions are applying morals decided in part on emotional grounds. And they likely have strong feelings about those morals. Courts often use juries, lawyers use emotional arguments, etc. Yes, emotions can lead to wrong decisions. But they are central to morals – to determining what morals are and how they affect us. Anyone trying to eliminate emotions from the process of deciding morals will be incredibly lucky if they come up with a moral system that does not feel unnecessarily bad in a wide variety of ways. And if they have the single goal of species survival, this could lead to solutions like…
It is moral to kill off 70% of the population tomorrow and have an elite take over complete control of all genetic combination – read: via sex, GM work, etc. And so on.
Any country that decided to come up with morals without including emotions in the process is one I would avoid, because essentially such a country, with that one goal, has no interest in the bearers of the genes except to the extent that they bear them. Science fiction has many such dystopian ‘logical’ solutions.
How could it be immoral, in your system of belief, since we clearly evolved with this mixed approach to choosing and creating? It is part of our evolved criteria in all such decision-making.
This makes you a kind of Platonist, or an adherent of some other metaphysics that puts these things outside us. But here’s the thing: you are deciding NOT to work with morals the way we obviously have evolved to work with morals, and clearly, to me, emotions are deeply involved in all moral evaluation – they become clearly visible whenever there are disagreements about morals, which are regular and ongoing. I don’t really care what evolution may have intended my emotions, morality and rationality to be for – and perhaps I am more adaptive precisely because I take the freedom given to me by evolution and don’t let its intent rule me. But for the sake of argument, let’s say I should go with evolution’s intentions: shouldn’t I then go with the full set of ways one evaluates and chooses, which would include both emotions and rationality, which are intermingled and interdependent in any case? And yes, morality does not exist in a world without emotions, and it never has. Animals had behavior before emotions, perhaps, but not morality.
Further, ‘rationality’ is of a different category than the rest of that list. I would need to know where you see rationality existing without us. And whose rationality? Rationality is a human process – also exhibited in more limited, though often highly effective, forms in animals; we could call it an animal process. For it to function well, in terms of human interactions, the limbic system must be undamaged, and emotions are a part of that process. But even without that proviso, I do not find rationality anywhere outside us to be possible in the physicalist paradigm, unless we are talking about aliens or perhaps, one day, AIs.
And note:
I still think this is up in the air.
I think one of the reasons we have intermeshed emotional and rational decision-making is that the higher forms of rationality get weaker when there are too many variables and potential chains of causes – AND rationality tends to have a hubris that it can track all this. For some things, emotional reactions have a better chance, though of course this is fallible. But then both are fallible, and both are intermeshed. And just as some are better at rationality, some are better at intuition than others. I would find it odd if how we live were determined without emotions and desires as central to the process of determining it.
I’ve mentioned Damasio, here’s a kind of summary. Obviously better to read his books or articles…
huffingtonpost.com/fred-kof … ccounter=1
People can’t even make choices without emotions. But here, with morality, we are talking about making choices about things we react to with strong emotions – choices whose effects touch us emotionally, affecting our desires and goals on emotional levels.
I see no reason to consider my DNA more important than my life – how it is lived, what it feels like, what my loved ones experience, the state of what I value – nature, etc.