Functional Morality

I am a little late on this response, but I tried to be thorough by way of apology. There were many good responses, and clearly some weak points in my argument I needed to address. To avoid responding to individual sentences, I rolled them into some overarching categories. Please correct any mistakes or misreadings, and let me know if I failed to adequately address any criticisms.

================================================================================

1) What is “morality”?

Peter Kropotkin points out that I have not defined “morality”, and goes on to note that without a definition, statements like “we observe morality in both young children and non-human primates” are unclear. Peter is correct that I did not provide a definition, but I disagree that that is a significant problem. In some sense, I am arguing for what should be the definition of morality, i.e. how that term should be understood and used. To the extent that’s so, any definition I provide would be effectively tautologous with the argument that I’m making.

But I’m also appealing to a colloquial, small-m ‘morality’ when I say that we observe morality in children and non-humans. In both those groups, we observe strong, seemingly principled reactions that adhere to innate concepts of fairness, and often those reactions are contrary to immediate self-interest. So, for example, capuchin monkeys trained to complete a task for a given reward will react violently if they observe another monkey get a more valuable reward for the same task. They will go as far as to reject a reward that they had previously been satisfied to receive, as if in protest at the unfair treatment. That reaction is a rudimentary morality, as I mean it. Children, too, will react angrily to being rewarded differently for the same task, and from a very young age have a concept of fair distribution of rewards.

In these situations, we see that there is clear global instrumental value in the reactions, since they are intended to punish unfairness and communicate that the recipient will not stand for unfair treatment. In a non-lab setting, this reaction will encourage fairness in repeated encounters. But the reaction is also clearly of a piece with more sophisticated moral reasoning, as when a person reacts to such unfair treatment on someone else’s behalf. It takes little more than this seemingly inbuilt reaction and the ability to model other minds to generate such vicarious indignation. We then tend to label these vicariously felt slights as moral sentiment, and further refinements are just further abstraction on the same idea. Kant’s categorical imperative is nothing more than a generalization of them.
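To make the repeated-encounters point concrete, here is a minimal toy model (my own sketch; the split size and the 40% fairness threshold are arbitrary assumptions, not anything observed in the experiments): a proposer repeatedly splits ten units with a responder, and settles on whatever offer pays the proposer best.

```python
# Toy model of repeated proposer/responder encounters. All numbers are
# illustrative assumptions, not claims from the capuchin or child studies.

OFFERS = [1, 2, 3, 4, 5]  # units (out of 10) offered to the responder

def indignant(offer):
    """Rejects 'unfair' offers, even at a cost to itself."""
    return offer >= 4

def acquiescent(offer):
    """Accepts any positive offer."""
    return offer >= 1

def best_offer(responder):
    # The proposer keeps (10 - offer) if the responder accepts,
    # and gets nothing if the responder rejects.
    payoff = {o: (10 - o) if responder(o) else 0 for o in OFFERS}
    return max(payoff, key=payoff.get)

print("best offer against an indignant responder:  ", best_offer(indignant))    # -> 4
print("best offer against an acquiescent responder:", best_offer(acquiescent))  # -> 1
```

Against the indignant responder, the proposer’s best strategy shifts to a near-even split: the costly protest buys fairer treatment in future rounds.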

As Wendy points out, morality in this sense is “society’s cohesive glue”: it’s a set of generalized standards of treatment, and one about which third parties will get indignant on someone else’s behalf. It creates a social glue by creating a set of presumptions about acceptable conduct. And I mean “morality” to point to that glue. Morality as I use it is an observable part of human affairs, a collection of behaviors common to normal-functioning humans (a deficit of which we describe as one of several mental illnesses). And because of its roots in innate tendencies visible in unschooled humans and our close animal relatives, I argue that the observable behaviors of morality are a result of cognitive habits selected for in our evolutionary history, i.e. that they exist because they are functional, so there is no higher authority to appeal to in moral matters than function.

But I should clarify that, even with my functional framing, not all moral rules are as hard-wired as unfairness. For example, it’s perfectly consistent with this understanding of morality that there are some moral rules that are necessary (I think this is what Meno_ means when he says “intrinsic”) and some that are contingent (what Meno_ describes as a “given…set of moral rules”). Necessary rules will be those that follow from the base facts of biological existence; contingent rules will be those that create social efficiency but are just one of many ways to create such efficiency (perhaps this is what Wendy meant by morality being functional in a multitude of ways). This distinction is neither sharp nor certain, but it is meaningful when considered in degrees: the moral maxim that one should follow traffic laws is more necessary than the moral maxim that one should drive on the right, even though it may be possible to efficiently structure society without traffic laws.

Urwrong suggests a basis of morality in “death and its inevitability”, but I don’t see that in practice in the real world. Even the examples he gives (giving your life for a higher good, or for your child) are clearly functional, whether by supporting self-sacrifice for collective benefit, or simply by ensuring the direct survival of your genes as carried by your offspring.

It may be true that the adherents of some things we call morality describe their actions in terms of other values, such as “god’s will” or “karma”, but the existence of a mythology and alternative narrative does not detract from the fact that, if those moral systems have persisted over time, it is because they kept the groups that supported them cohesive and self-perpetuating. (I will say more below about the potential distinction between accurate descriptions of the world that involve selection, and the behavioral effects of descriptions of the world on which selection acts.)

It isn’t impossible to have a non-functional moral system, but if it is non-functional, it is not likely to survive. Some early Christian sects held a moral prohibition against reproduction, and that moral sentiment died out because it was selected against: people who believed it did not reproduce, and a major method of moral transmission (likely the primary method) was unavailable. The existence of such beliefs, and their description as a form of morality, does not mean that morality is not as I describe it.
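A minimal simulation of that selection effect, assuming purely vertical (parent-to-child) transmission of belief. The setup is my own illustration and the numbers are made up:

```python
import random

# Belief "A" prohibits reproduction; belief "B" permits it. Children inherit
# the parent's belief; horizontal (convert-your-neighbor) transmission is
# deliberately ignored to isolate the vertical channel.

def next_generation(pop):
    children = []
    for belief in pop:
        if belief == "B":
            children += ["B"] * random.choice([1, 2, 3])  # replacement-plus
        # "A" believers leave no offspring, so their main channel of
        # transmission is closed.
    return children

random.seed(1)
population = ["A"] * 500 + ["B"] * 500
for gen in range(1, 4):
    population = next_generation(population)
    print(f"gen {gen}: A={population.count('A')}, B={population.count('B')}")
# "A" vanishes after one generation; however sincerely held, an
# anti-reproduction morality is selected against.
```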

================================================================================
2) In what sense is it “functional”?

Several people challenged the claim that morality evolved. Attano asks how we could know (“Fossils bear no trace of the morality of a specimen”), and Prismatic notes the memetic evolution of morality on sub-genetic-evolutionary timescales.

I have described the biological roots of morality as “cognitive habits”. I describe them this way because it doesn’t seem that most particular moral propositions are coded in our genes, but instead that we have a few simple innate predispositions plus a more general machinery that internalizes observed moral particulars. A Greek raised among the Callatiae would certainly find it right and proper to eat his dead father’s body, and a Callatian raised among the Greeks would find the practice repulsive. The general moral cognitive habits that are selected for in genetic evolution are the foundation of the moral particulars we see in practice, especially the tendency to align one’s behavior with others as a means of coordinating society and enabling cooperation. Those cognitive habits are functional insofar as they enable more cohesive groups to out-compete less cohesive groups.

Attano is correct that we can’t see this directly in the fossil record. But we can still infer its origins in genetic evolution by looking at non-human animals and young children. There, we see both the tendency to imitate the herd and the foundations of specific moral precepts. Explaining this through “History” (which I understand to be something like memetic evolution) doesn’t work, because non-human animals aren’t plugged into the same cultural networks, and very young children haven’t absorbed the culture yet (and I believe the moral-like actions of young children are similar across cultures, though I am less confident on that point). Evolved cognitive habits also best explain why we see moral systems in all human groups: though they differ between groups, they are present everywhere and there is broad agreement within a group.

On top of those cognitive habits is another form of evolution, what I would call memetic (as opposed to genetic) evolution. Our wetware is evolved to harmonize groups, but the resulting harmonies will vary from group to group due to differences of circumstance and happenstance. That explains the “progress” in morality that Prismatic notes: memetic evolution can take place much more rapidly, since its components are reproduced and mutated much more quickly than are genes.
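The speed difference is easy to illustrate with a toy hill-climbing sketch (my own, with made-up parameters): the same mutate-and-select loop, run at two different clock rates.

```python
import random

# Same mutate-and-select loop, two clock rates. All parameters are invented.
def adapt(value, target, updates, step=0.05):
    for _ in range(updates):
        candidate = value + random.uniform(-step, step)    # mutation
        if abs(candidate - target) < abs(value - target):  # selection
            value = candidate
    return value

random.seed(0)
target = 1.0  # the environment has shifted; 0.0 was the old optimum
genes = adapt(0.0, target, updates=25)    # ~25 generations of genetic change
memes = adapt(0.0, target, updates=2500)  # ideas are copied and varied far more often
print(f"same elapsed time: genes={genes:.2f}, memes={memes:.2f}")
# Memes effectively reach the new optimum; genes have barely begun to move.
```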

Now, we might call “progress” the process of coming up with moral codes that allow us to form yet larger and more efficient groupings. Or it might be the process of removing the moral noise that is built into local moralities by happenstance (e.g. rules surrounding specific types of livestock), boiling down to more universal moral beliefs like “don’t murder”. Progress in a system of functional morality would consist in sets of moral particulars that make the group function better.

Serendipper seems to suggest on this point that population growth may be bad (or perhaps just non-functional) if not coupled with an “opposing force selecting for any particular gene mutation”. But population growth is the result of functional morality; bearing offspring who bear more offspring is what it means to have genes selected for. This may be clearer if we compare competing ant hills, and ask what it would mean if one ant hill began to increase in population significantly over the competing hills. More population means that the hill is already relatively successful, because population expansion requires resources, and also that it’s likely to be more successful, because more ants working on behalf of the collective means the collective is likely to be stronger. So too with humans: we can read success from population growth, and we would expect population growth to create success (up to a point; the dynamics change when there are no competing groups).
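Here is a toy sketch of the ant-hill comparison (my own illustration; the contested-foraging rule and all constants are assumptions, not claims from the thread): a small initial population edge compounds, because a larger hill both gathers more and contests resources more effectively.

```python
# Two competing hills converting gathered food into new workers. The share
# rule (proportional to population squared, a Lanchester-style advantage)
# is an illustrative modeling assumption.

def year(a, b, efficiency=0.1, total_food=1000):
    share_a = a**2 / (a**2 + b**2)  # larger hills win contested patches disproportionately
    a += efficiency * total_food * share_a
    b += efficiency * total_food * (1 - share_a)
    return a, b

a, b = 110.0, 100.0  # hill A starts with a 10% edge
for y in range(1, 21):
    a, b = year(a, b)
    if y % 5 == 0:
        print(f"year {y}: A={a:.0f}, B={b:.0f}, ratio={a / b:.2f}")
# The early edge compounds: population both signals past success and feeds
# future success.
```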

A growing population may, and probably does, require morals to change, but we should expect that: as context changes, including changes in the nature of “the group”, different behaviors will be functional. But that our old morals will be a victim of their success does not mean that they were unsuccessful: a growing population and growing cooperation between group members means that the old rules were functional in their context.

================================================================================
3) How does this apply?

A few people asked about the applicability of this way of framing morality. That line of argument usually isn’t so much an objection as an invitation to keep developing the theory, which I am glad to do.

Jakob suggests that morality may need to be naive, in the sense that the inborn sense of morality as an ideal is important to its functioning. That may be the case. But it is also true that in order to dodge a speeding car, we need to forget about special relativity, even though the most accurate description of the car’s motion that we can produce requires special relativity. So too might we recognize and describe morality as a system of cognitive habits that support group cohesion, and yet, in deciding how we live, appeal to more manageable utilitarian or deontological axioms. This goes to Urwrong’s point above about descriptions of morality in terms of death rather than life: different descriptions may more effectively achieve the ends of morality, but they do not change the nature of morality as an evolved system that helps perpetuate human groups.

This is related to a question from iambiguous: how do we actually put the idea into practice? I don’t think that’s easy, but I also don’t think it’s necessary. I am not here offering an applied morality of daily life, but a moral theory to which such an applied morality should appeal. There are potential subordinate disagreements about e.g. whether brutal honesty or white lies are more effective in creating group cohesion and cooperation; what I am proposing here is the system to which the parties to such disagreements should appeal to make their case.

Serendipper asks how we could determine evolutionary success, and I think the answer is straightforward in retrospect (though not trivial), and more difficult in prospect. In retrospect, we can just ask what survived and why. Sometimes we know that groups fell apart for arbitrary reasons, and other times we can readily identify problems within the groups themselves. We can point to moral prohibitions that harmed groups and were abandoned, e.g. sex and usury prohibitions. We can compare across surviving systems and see what they have in common, e.g. respect for laws and public institutions.

In prospect, we can make similar arguments, drawing from the history of moral evolution to make predictions about what will work going forward. Like any theory about what will happen on a large scale in the future, there’s substantial uncertainty, but that doesn’t mean we know nothing. We can more readily identify certain options that are very unlikely to be the best way forward.

But again, this uncertainty isn’t fatal to the proposition that morality is functional – indeed, it’s expected. Much as we don’t know for sure which evolved genetic traits will survive, or whether r- vs. K-strategies are more reliable in a given context, we also do not know what moral approach will guarantee group prosperity. But these observations do not undermine the theory of evolution, and they do not undermine the theory of functional morality.

I was referred here in the context of my saying that without emotions there are no morals. I see nothing here to argue against that. If you have strategies that unemotionally lead to the propagation of your genes, and no emotions are present, you have tactics and strategies. Machines could be programmed to this - something like those robot cagematches, though fully programmed ones. That isn’t morals. Morals are inextricably tied to someone’s feelings and values - IOW subjective preferences, even if it is a posited God’s - and notice how these gods get pissed off if you break the rules.

And guilt would fit into the discussion you have of emotions above: natural selection slowly weeding out the kinds of guilt that are adaptively poor.

Once something is only tactics and strategy, you have no way to decide between this set of tactics - which leads to the destruction of life on earth - and that set of tactics which does not

UNLESS

emotional/desire based evaluations

are made.

If you have none, you are no longer an animal.

If you have none, you cannot decide, though you could flip a coin.

And one interesting thing about evolution is that it has led, and not just in the case of humans, to species having the ability to not necessarily put their own genes ahead of others’. This may benefit the species - it is part of what makes us so versatile, or our versatility makes us like this.

Yes, apparently unemotional viruses may be even more effective than us - in the long or even short run - but they are not moral creatures. I think it would be a category error to call them that.

The argument that morality doesn’t depend on emotions is that morality was a product of evolution, and was selected for independently of any emotional valence. The origin of morality as something that supports group selection does not depend on emotion; emotion is neither necessary nor sufficient for morality to be selected for.

That’s not to say that morality can’t interact with emotion; it may be that morality subjectively experienced as an emotion is an effective way to encourage beneficial ingroup cooperation. Or it may be that tuning into the emotions of others gives us inputs into our moral machinery that help produce such beneficial ingroup cooperation.

But like all evolved traits, the fact that they produced outcomes that were selected for in the past does not guarantee that they will produce outcomes that will be selected for in the present. We evolved to go nuts for sugar, because in the environment in which we evolved sugar was scarce and we should eat all we could. In our current world, sugar is abundant and too much enthusiasm for sugar is selected against. Many common fears are unjustified; we’re too risk averse; we overreact to cold. We have a lot of subjective experiences that were handed down through evolution that are actively counterproductive in the modern world. Our subjective preferences can be mistaken in that sense.

So too can connections between emotion and morality be seen to be spurious, once we accept why morality exists at all. Whatever weight we give to emotion, we can and do discount completely when it leads to the wrong outcome. We feel guilty when we dump someone, and it’s not that we shouldn’t feel that way; it’s that that emotion has no bearing on the rightness or wrongness of the action. We feel it because we evolved in small bands without the elaborate puritan mating regime of modern society, where hurting someone and tearing social bonds to the extent we do when we dump someone now was disruptive to the group and bad for us and our tribe. So we feel guilty, we have the moral intuition that we’ve done wrong, and that moral intuition is mistaken.

The fact that we can look at a situation and use non-emotional factors to identify emotions that are just incorrect and that point to the wrong moral conclusions entails that moral conclusions can’t actually be based on the emotions. They’re based on something else, something independent of the emotions.

We literally have a neural network that was trained on certain inputs to achieve a certain goal, and now we’re feeding it different inputs and just declaring wherever it points to be the goal. That’s a nonsensical approach. The goal is the same goal: survival.
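To illustrate the retrained-network point (my own toy sketch, with invented names and numbers): fit a simple one-parameter rule in the environment it was “trained” for, then feed it inputs from a shifted environment and watch its output stop tracking the goal.

```python
import random

# A one-parameter "appetite" rule, fit where sugar was scarce, then fed
# inputs from a world of abundance. Everything here is illustrative.

random.seed(0)

# Ancestral environment: sugar intake tracks survival-relevant benefit.
xs = [random.uniform(0, 1) for _ in range(100)]
data = [(x, x + random.gauss(0, 0.05)) for x in xs]  # benefit ~ intake, plus noise

# Least-squares fit of appetite = w * sugar on the ancestral data.
w = sum(x * y for x, y in data) / sum(x * x for x, _ in data)

def appetite(sugar):
    return w * sugar  # the trained pointer: "more is better"

def modern_benefit(sugar):
    return sugar if sugar < 1 else 2 - sugar  # abundance: benefit peaks, then falls

for sugar in [0.5, 1.5, 3.0]:
    print(f"sugar={sugar}: appetite {appetite(sugar):+.2f}, "
          f"actual benefit {modern_benefit(sugar):+.2f}")
# The rule still points where its training pointed; declaring its output to
# be the goal mistakes the pointer for the thing it was trained to track.
```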

  1. You did not really interact with the ideas I presented.
  2. You are claiming to know what is good and what is bad, IOW to have access to objective morality, to some degree or other.
  2a. You need to demonstrate this.

My point is not that I have access to objective morality, but that all moralities are founded by us humans on emotions. Why must this be the case? Because we have no other way to determine what we think is good. Note the difference between us. You are claiming to know the good, the objective good. I am focused on the process that must take place to decide whether a morality is good. If one is a consequentialist, which you are, then the only way for you to determine what you consider good is via emotions. Social mammal emotions.

We cannot even say that the survival of any human is objectively good. How would we know this? But we use values based on social mammal biases to decide, well, survival of humans is good. Perhaps the consequentialist thinks that reducing unnecessary pain is good. This is based on empathy and one’s own personal revulsion at pain, projected onto others.

You can have goals and then best inferred heuristics to reach that goal.

But morals are not simply goals.

The whole opening of your post does not address what is happening. It claims that a non-emotional natural selection led to emotions. Whoopie. Irrelevant.

Emotions led to morals. They are a necessary part of the process through which we evaluate the good. You may decide that I, or a younger you, reached a poor conclusion based in part on emotions when it came to morals. But again, you MUST use emotional social mammal values to determine this.

At some point you have what, I can only assume, you think is merely a rational, logical decision. An emotionless evaluation. Whatever that is, I will bet, not coincidentally values your own life as good, though perhaps one that could be outweighed by other goods. That life is good. That not causing unnecessary harm is good.

All based on your desires to be alive and hopefully empathy at least as a factor in relation to others.

If you take away the emotions, you are then claiming that what in fact are really just tactics are morals. Tactics to achieve certain outcomes that you are utterly indifferent emotionally about. Tactics to reach a goal you are utterly emotionally indifferent about.

If you are indifferent emotionally about those goals, why enter the discussion at all?

Why not let people who emotionally prefer certain outcomes, ways of relating, decide?

It doesn’t matter to you.

There is something messed up in here, like some deep confused category error, and I wish I could really explain this well.

I can only right now take a stab at it with a reductio…

Unless the process you have for deciding on morality is somehow not itself based on functions coming out of evolution, whatever your emotionless process is, it is also not necessary.

Morality emerged out of emotional beings, beings who evaluated the good and bad using emotions. It is not a coincidence that chimp and wolf moralities correlate incredibly well with emotional likes and dislikes. Animals with no limbic systems are never referred to as having emotions. We may talk about power dynamics in animals without limbic systems, but I don’t hear anyone talking about reptile morals. Just reptile behavior. But with apes and canines, ideas like fairness apply.

Anyone who told me they had arrived at an objective morality without emotions I would distrust in the extreme. Because they are claiming emotional indifference, that their conclusions have not used emotions in their evaluation. Which means their morality is not based on empathy, even for themselves. And since they are claiming to be indifferent, they are presenting themselves as a disinterested party. Which I find suspicious in the extreme.

I am claiming that there is an objective good, that morality is objective. Do you disagree with that?

We know where emotions and morals come from, we know why they evolved, so we can determine what they should say without bootstrapping from them.

My point in this thread is that starting with morals as an empirical phenomenon observable in humans and certain other social animals, we can examine what morals are, why they exist. And any claimed moral commandment that undermines the empirically observed reason for the existence of morals must be mistaken, morals must continue to be what they evolved to be.

And what they evolved to be had nothing to do with emotion (except insofar as emotions also evolved to do the same thing).

Or rather, morals are a tactic that evolved because they kept people who used them alive and helped them reproduce.

This is a strange line of argument.

  1. Should only people who are emotionally invested in outcomes discuss anything? Like, only mathematicians who are emotionally invested in a specific outcome to the Riemann Hypothesis should spend any time trying to figure it out?
  2. Often people who are emotionally invested in an outcome are the worst people to solve it. That’s why we have courts and arbitrators and mediators and trilateral talks. Neutral third parties are often better at resolving disputes.
  3. I’m not saying emotion does nothing or doesn’t matter, I’m saying emotion isn’t the basis, isn’t a component, of morality. As I say above, emotions will often align with morality, and ‘naive’ morality will often align with survival, because they all evolved to the same ends. But where they differ, it is survival that wins. And I, weak as I am, feel and follow emotions, but I often do so knowing that it is immoral.

This poses the interesting question of how to distinguish rationality from morality, i.e. rationality also evolved, so why doesn’t survival trump rationality? I would look to what rationality and morality each purport to do. Rationality is an attempt to describe something that exists independently of humans. It is a way of describing the world. Morality, by contrast, is something we created (or that was created as a part of us).

I think you would have to agree with this distinction: if morality is based on our emotions, then it doesn’t exist in a world without our emotions. Rationality, logic, math, those things exist without us.

Which morality is objective? We have evolved a set of moralities, and some of these moralities hold that we should not survive – anti-natalism – some consider us parasites to such a degree that we should be eliminated to protect other life. The transhumanists have moralities, or perhaps aesthetics, that want us to choose the way homo sapiens will no longer exist – they are the most likely of the three to win the natural selection battle with other moralities. I can’t see how one can know the objective good, nor can I see that teleological arguments based on evolution lead to any conclusion about what is good with a capital G. We can come up with tactics that might be good for the spreading of our genes, though that does not sound like morals to me. Evolution led to a capacity. That capacity - the portions of our nervous systems, say, that amongst other things came up with morals - may or may not be adaptive in the long term. And we cannot assign it a purpose. Once this capacity is present, it is clear that it will be applied to all sorts of purposes.

I am not sure if your ‘why’ is teleological here, but this is a bird’s eye view. Or view from nowhere. In situ we have a way of creating meaning for ourselves, and that meaning is emotionally evaluated and generated, and not bound by any ‘purpose’ in evolution. If there were a purpose in evolution, and it wanted control, it made a mistake when it came up with our capacities and tendencies, since we evaluate and generate morals based on rationality AND emotions. If I am supposed to respect evolution’s goals, it seems to me I must respect the processes and skills it gave me to do things and evaluate things. IOW I have been made such that I mix emotions and rationality, both when I function like a consequentialist and when like a deontologist. I find emotions deeply involved in both processes, and I note this in everyone I meet also. We are this way. I don’t see why I should just abstract out and respect, in SHOULD terms, evolution’s intent for morals, but ignore evolution’s result in making me/us the way I am/we are.

Who says? How do you know that is good? What if we achieve interstellar travel and kill off lovely, smarter, less nasty species, perhaps all of them? Where can I stand to view the objective good even of our species? All you are talking about is heuristics for survival. That’s not morality. I feel a bit like when I see physicalists talking about being spiritual. They may have wonderful philosophies of life, be great people, generate sweet and caring ethical codes, etc., but they are not spiritual. That word literally entails other stuff. So does morality entail more than heuristics. It includes an in-part emotional/desire-based choosing of what goals we want good heuristics for, and often, how we feel about the heuristics. Or it would not be so common to challenge the idea that the ends justify the means. What you describe is certainly not objective morals. It is tactics towards what you consider the one goal, a goal we don’t even know is objectively a good one, though it might be good for us. I consider it an extremely limited goal for us, just one part of what morality covers. But even if it were the only goal of ours, we cannot know if it is a moral one. I mean, who are we to judge the goodness of the human race? Or, better put, who are we to think we can judge it objectively and without emotion?

Again teleological. But further we evolved as creatures that evaluate morals emotionally. If we are going to use a teleological argument then perhaps we should leave that alone, rather than deciding that we can and should just do it only rationally - which I don’t think is possible, in any case. Further it seems completely irrational to generate the way humans relate to each other without making emotional and desire-based evaluations central. I mean, we have to live with all the consequences of those morals, and emotional consequences will be real and central. For some reason emotions are often considered quasi-real. And this is often based on their fallibility. First, reason is also fallible, but further, emotions are real. Now I know that you would not assert that emotions are not real. But note how they end up somehow being moved off the table when they are central to pretty much all the consequences of morals. And then also in the process of choosing and evaluation, etc.
I don’t think there are more than a handful of people who think morality is JUST about calculating the survival of the species. So you must then explain how evolution led to us having a belief/method/approach that runs counter to what you are saying. If evolution can give us should, this would entail that it would give us should around methodology also.
IOW your argument seems to be that since evolution shaped morals, and evolution is all about surviving, then morality is about surviving, period. But evolution led to us, and other moral-creating species, making morals about much more. Perhaps you need to note what evolution has selected for: and in this case it is moral-making animals whose limbic systems are involved in morals at all levels.
Personally, I don’t really care what evolution wants or intends, but I can see what was selected for in our case.
If it somehow turned out that rationality indicated I should kill off my wife after she births our second child - that the best AIs, analyzing all the complex chains of effects, see this as the best heuristic for human survival, that husbands/fathers do this - OR EVEN if God told me to do it…no way. I won’t. I fail God’s test of Abraham, though I have often wondered if in fact he failed it.
This was an extreme example - though one that fits nicely with our other discussion - but there are all sorts of other moral-like guidelines I would follow regardless of what the best minds said was our best strategy for survival. And if you think that is a problem, blame evolution. Evolution made my moral-making (or, in my case, preference-making) process such that there are things I will not do (even for money, or for what the supposedly detached people with views from nowhere say is moral). And there are things I will do that may go against their supposed best heuristics.

See above about emotions always being in the mix of creating, applying, modifying, justifying…etc. That is the tactic we evolved.

They all are emotionally invested in finding the correct outcome. And they likely all are interested in their, perhaps at this stage vague, guess of direction being the right guess. And the one who solves it will have been extremely emotionally involved in finding the answer. IOW I wasn’t arguing that Carleas shouldn’t participate, but rather trying to highlight – corner you – that you are likely driven by emotions, even in this telling us we should prioritize survival because that is what evolution gave us morality for. This likely seems like a view from nowhere, but absolutely cannot be once it is couched as a should. Further, the results of the Riemann Hypothesis are not like the results of a morality argument, or a decision about how we should, for example, relate to each other. The latter has to do with what we like, love, hate, desire, are repulsed by, and those emotional reactions will guide our personal and collective decisions about what is moral. In fact they must. If they are not involved, we may all end up working in some dystopian panopticon-tyranny that seems efficient, and at least in the short term seems to completely guarantee survival, but which we hate every waking minute living in. For example. I think there are other problems that will arise. Some can be based on the emotions now having an unremovable part in what we will even want to survive in, thus making emotions a selection factor, like it or not. Others need not even be bound to your fundamental should - that we must base our morals on what we think evolution intended morals to be for.

Often the people who come up with morals in what they think is a view from nowhere, or objective, or disinterested, end up making horrible decisions. I would not use that as an argument against using rationality. I don’t think it works as an argument against including emotions. And as far as I can see, the people who judge emotions and present themselves as avoiding their influence are less aware of how their emotions are influencing their choices than the people who do not present their ideas this way. But further, those groups that make decisions are applying morals decided in part on emotional grounds. And they likely have strong feelings about those morals. Courts often use juries, lawyers use emotional arguments, etc. Yes, emotions can lead to wrong decisions. But they are central to morals, determining what morals are and how they affect us. Anyone trying to eliminate emotions from the process of deciding morals will be incredibly lucky if they come up with a moral system that does not feel unnecessarily bad in a wide variety of ways. And if they have the single goal of species survival, this could lead to solutions like…

It is moral to kill off 70% of the population tomorrow, have an elite take over complete control of all genetic combination – read: via sex, GM work, etc. And so on.
Any country that decided to come up with morals without including emotions in the process is one I would avoid, because essentially such a country with that one goal has no interest in the bearers of the genes, except to the extent they bear them. Science fiction has many such dystopian ‘logical’ solutions.

How could it be immoral, in your system of belief, since we clearly evolved with this mixed approach to choosing and creating? It is part of our evolved criteria in all such decision-making.

This makes you a kind of Platonist, or some other form of metaphysician that has these things outside us. But here’s the thing: you are deciding NOT to work with morals the way we obviously have evolved to work with morals, and clearly, to me, emotions are involved deeply in all moral evaluation; they become clearly visible when there are disagreements about morals, which are regular and ongoing. I don’t really care what evolution may have intended my emotions, morality and rationality to be for and have as goals - and perhaps I am more adaptive precisely because I take the freedom given to me by evolution and don’t care to let its intent rule me. But for the sake of argument, let’s say I should go with evolution’s intentions: shouldn’t I then go with the full set of ways one evaluates and chooses morals, which would include both emotions and rationality, which are intermingled and interdependent in any case? And yes, morality does not exist in a world without emotions, and it never has. Animals had behavior before emotions, perhaps, but not morality.

Further ‘rationality’ is of a different category than the rest of that list. I would need to know where you see rationality existing without us. And whose rationality? Rationality is a human process - also exhibited in more limited - though often highly effective - forms in animals. We can call it an animal process. For it to function well, in terms of human interactions, the limbic system must be undamaged and emotions are a part of that process. But even without that proviso, I do not find rationality anywhere outside us as possible in the physicalist paradigm, unless we are talking about aliens or perhaps one day AIs.

And note:

I still think this is in the air.
I think one of the reasons we have intermeshed emotional and rational decision-making is because the higher forms of rationality get weaker when there are too many variables and potential chains of causes. AND rationality tends to have a hubris that it can track all this. For some things emotional reactions have a better chance, though of course this is fallible. But then both are fallible and both are intermeshed. And just as some are better at rationality, some are better at intuition than others are. I would find it odd if how we lived was determined without emotions and desires as central to the process of determining it.

I’ve mentioned Damasio, here’s a kind of summary. Obviously better to read his books or articles…
huffingtonpost.com/fred-kof … ccounter=1

People can’t even make choices without emotions. But then here, with morality, we are talking about making choices about things we react to with strong emotions and have effects that affect us emotionally, that affect our desires and goals - on emotional levels.

I see no reason to consider my DNA more important than my life - how it is lived, what it feels like, what my loved ones experience, the state of what I value - nature, etc.

Edit: since you think we should base morals on ‘survival’, it would be good to define what would count as survival. Warning: I plan to find odd conclusions based on the definition.

Perhaps a shorter objection is better:

  1. How do we know that morality is not a spandrel?
  2. Even if it is not, how do we have an obligation to the intent of evolution; in what sense are we beholden to function? Function, evolution, natural selection are not moral agents. What is it that puts us in some contractual commitment to following their intentions? If the argument is not that we are beholden, but rather that X is what morality is for so we should use it as X - a more determinist connection - then we don’t have to worry about adhering to the function, since whatever we do is a product of evolutionarily-created function. Once I am supposed to follow evolution, use my adaptations, well, how can I fail? And if I fail as an individual, I am still testing for my species, and if my approach was poor it will be weeded out. No harm, no foul.

Thanks for your patience and your excellent replies; they have helped me to develop my thinking on this topic and I appreciate the critique.

I think there are a number of levels on which we can define it, which I’ll discuss in a minute, and there’s room to debate the appropriate locus of survival as it relates to morality. But I think that debate is separate from whether morality does relate to survival. Morality exists because of its effect on past generations; it seems clear that there is no morality independent of humans, no moral field that we’re sensing, but rather a moral intuition (i.e. innate brain configurations) that influences our behaviors in ways that supported our ancestors in producing us.

But, as promised, some thoughts on ‘survival’:
First, individual gene-line survival means an organism not dying until it produces offspring who are likely to not-die until they produce offspring.
At a group or society level, survival means the group continues to exist. It’s a little vaguer here because the ‘group’ is somewhat amorphous, and there aren’t discrete generations for reproduction, but a constant production and death of constituent members.
Defining the survival of any thing inherits the problems in defining that thing, i.e. the “can’t step in the same river twice” problems. Moreover, where morality functions on the substrate-independent level of our existence (thoughts), it isn’t clear whether the survival it requires is the survival of the substrate or the survival of the programs that run on it. Would morality support the transhumanist idea that we should abandon our bodies and upload our consciousness to silicon? Even if we take functional morality as true, I don’t know that that question is settled.
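As an aside, the gene-line definition above is recursive (a line survives if it produces offspring whose lines survive), and there is a standard way to make that precise: model the lineage as a branching process. The sketch below is my own gloss, not anything from the thread, and the offspring distributions are invented.

```python
# Galton-Watson sketch: p[k] is the probability an individual leaves exactly
# k offspring. The extinction probability q of a gene-line is the smallest
# fixed point of q = sum(p[k] * q**k), found here by iterating from 0.

def extinction_probability(p, iterations=500):
    q = 0.0
    for _ in range(iterations):
        q = sum(pk * q**k for k, pk in enumerate(p))
    return q

subcritical = [0.50, 0.25, 0.25]    # mean offspring 0.75: extinction certain
supercritical = [0.25, 0.25, 0.50]  # mean offspring 1.25: survival possible

for name, p in [("mean 0.75", subcritical), ("mean 1.25", supercritical)]:
    q = extinction_probability(p)
    print(f"{name}: extinction {q:.3f}, long-run survival {1 - q:.3f}")
# -> mean 0.75: extinction 1.000; mean 1.25: extinction 0.500, survival 0.500
```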

I do think that morality must operate on the meta-organism, rather than the organism, i.e. society rather than the individual. Morality, as a functional trait, works between individuals, so oughts can only be coherent in relation to, and in support of, the tribe or collective. And I have a sketch of an idea that that entails that we should prefer the pattern over the substrate, since the beast that is society exists continuously while its substrate is born and dies in an endless churn.

But that is a weak and fuzzy position, and in any case beyond the scope here.

Sure, but some morality is just wrong. Anti-natalism specifically is pretty clearly wrong, but that statement rests on the functional morality I’m advancing here.

If what you’re asking for is which morality is the functional morality, I actually think that too is beyond the scope of this discussion. “There is an objective morality that we can discover” is a different claim from “X is the objective morality”. I’m making the former claim here, and arguing that we should use the criteria of functionality to evaluate claims about the latter, but I am not making a specific claim about the latter.

I don’t disagree with this idea or those in the surrounding paragraph, but let me make an analogy.

Once, on a hot summer night, I awoke with intense nausea. I lay in bed feeling wretched for a minute, staring at the ceiling, and the nausea passed. I closed my eyes to sleep again and soon felt intense nausea again. I opened my eyes, and shortly the nausea passed again. I did this a few more times as my rational faculties slowly kicked in, and then noticed that my bed was vibrating slightly. A fan that I’d placed at the foot of the bed was touching the bed frame, creating a barely perceptible vibration. I put it together that the nausea was in fact motion sickness. I moved the fan, the bed stopped shaking, and I slept the rest of the night without incident.

The point here is that motion sickness is an evolved response to certain feelings of motion. In particular, our brains are concerned that certain unnatural sensations of motion are actually the result of eating something toxic. The nausea is a response that, if taken to its logical end, will cause us to purge what we’ve eaten, in the hopes that any toxins will be purged with it. In the evolutionary context, that’s a useful response. But we did not evolve in the presence of beds and fans, and so the way we’ve evolved misleads us into thinking we’re ill when in fact we’re perfectly fine.

A similar thing can happen with morality, and understanding morality as a product of evolution, as a mental trait that evolved in a specific context and suited to that context, and not necessarily to this context, may let us “move the fan” of morality, i.e. shed moral claims that are clearly at odds with what morality was meant to do. Given a few thousand years and a few hundred generations of life in this context, we should expect evolution to get us there on its own, but we don’t have the luxury of that.

So, yes, we are this way, there is some information in our emotions and moral intuitions and we should pay attention to them, just as we should take nausea seriously. But we can examine them in other ways at the same time. We can appreciate the ways in which evolution’s result is inadequate to its purpose, and rely on the other results of evolution (rationality and the view from nowhere) to exert a countervailing drive.

You yourself make a few similar points further down, and I basically agree with them: our moral intuitions and emotions are not for nothing, they can be better than our reason for making decisions in certain cases, and we should treat them as real and expected and important in our decision making. But we should also treat them as subject to rational refutation. And when reason and emotion conflict in making statements of fact about the world, reason should prevail (though perhaps you don’t agree with that).

Yes, I think that’s right. But so too are cardiac surgeons deciding not to work with hearts the way we evolved to work with hearts. The project of moral philosophy, as I understand it, must involve some very unusual treatment of moral intuitions, ones that are obscene to our evolved first impression in the way that delivering a baby by C-section is obscene to someone who only understands it as stabbing a pregnant woman in the belly.

And as I said above in reply to Jakob, there’s no contradiction in the most true description of a phenomenon being nigh useless in our everyday lives. In the game of go, there is a saying, “If you want to go left, go right”, meaning that going directly for the play we want is not the best way of achieving the play we want. But that is not to say that moving left is wrong, just that moving right is the best way to achieve moving left. So too, being a naive consequentialist may be the best way to achieve the functional ends I advocate here. Still, though, I would argue that the functional ends are the ends, and if it could be shown that a different naive system better achieved them, that would be damning of naive consequentialism.

There may be an argument that functional morality is actively counterproductive to its own stated ends. I don’t know what to make of self-defeating truths, but I don’t think functional morality is one. I see no tension between understanding and discussing functional morality and still practicing more common moral systems as rules of thumb on a day-to-day basis.

I don’t think this problem is unique to a rationally-grounded moral system. Emotions too can be a basis for hubris; emotion-based religions are some of the most pompous and unjustifiably self-assured systems of belief that we’ve ever seen. We should not be overconfident.

But reason’s advantage is that it scales: we can use reason to analyse other modes of thought, and even reason itself. Through it, we can identify situations where relying on intuition is better than relying on deliberate reflection. We can’t do the reverse emotionally: we can rationally examine emotion, but while we can feel things about reason, feeling doesn’t get us very far.

How do we know any evolved trait isn’t a spandrel? We can look at whether morality influences reproductive success, whether it imposes costs that would require a benefit to offset, whether it’s been selected against in isolated populations, etc. I think all these things suggest that it isn’t a spandrel, that it’s been selected for as part of an evolved reproductive strategy:

  • Amoral people tend to suffer socially. Psychopaths can and do succeed, but they depend on the moral behavior of others, and they are also employing a high-risk, high-reward strategy (many psychopaths are killed or imprisoned, but many others are managers or politicians).
  • Morality entails evolutionary costs, e.g. forgoing actions with clear immediate reproductive benefits like theft of resources, murder of rivals, or rape of fertile women. That suggests that it has attendant benefits, and that forgoing these provides a reproductive benefit in the long term, e.g. reciprocal giving and social support, not being murdered, and better mating opportunities long term.
  • To my knowledge, morality exists in all human populations, including isolated populations. The isolation may not have been sufficiently long to permit evolutionary divergence, but given the presence of psychopaths it seems that the genes for amorality were there to be selected for and haven’t come to dominate any society.

Consider the example of motion sickness, or of sugar, or of any other evolved predisposition that we can rationally understand to be actively counter to the reasons for which it evolved. We have intuitions that motion not dependent on our moving our limbs means we’ve been poisoned and need to purge, and that sugar and fat are good and we should eat as much of them as possible. But we know that these intuitions are false, that our evolved tendencies are misleading us, and they are misleading us because of the context in which we evolved, in which such motion did mean poison and sugar was a precious resource.

So too did morality evolve in that context; ought-ness is derived from our evolutionary past, and we can look at it in that light. Without reference to its evolved purpose, it has no meaning. If we take the position that the evolved meaning of morality is not relevant, it seems the only alternative is moral nihilism.


This is one of the areas I was probing around, because I think it may be very hard for many adherents of functional morality to stay consistent. Perhaps not you. If survival means genetically related progeny having progeny that are genetically related - IOW sustaining genetically related individuals through time - then transhumanism should be considered bad or evil. Take the case of strong transhumanism, where better substrates for consciousness and existence are created and homo sapiens, as a genetic organism (and physically in general, outside the nucleus of cells also), is no longer present. We will have replaced ourselves with something else, at least in terms of genetic material.

But even setting aside the transhumanism issue: if survival is the guide to morality, the measure of it, it seems to me we can have all sorts of odd scenarios. We freeze our DNA and send it out into the universe with instructions for use, plus an AI to help us seed the first good planet… Once we’ve founded another civilization somewhere, or the AI gets us going on, say, ten worlds, it seems like we would then be free to do what we want. As long as survival is happening elsewhere, I have no need for morals here. We have ensured continuation; now we can do what we want. Or we could set up a world where the AI combines DNA to make 1000 humans. Their genitals, after puberty, are harvested for DNA, and they are all put down. The AI waits a thousand years and repeats: mixes new DNA, new batch of humans, new cull, repeat. This prevents mass self-destruction events, and the large gaps between generations 1) slow down changes, so the DNA really stays close to earlier generations longer, and 2) create longer survival. IOW there may well be an incredibly efficient way of making our DNA survive - and occasionally create humans - for vast eons, which at the same time entails an existence that is repulsive to most people.

Survival, and not much else.

I didn’t say enough. Antinatalism is one of the moralities that evolution has given rise to. Right now it is a minority position. Perhaps it will become the majority or power morality. Then this is what evolution has led to. It might lead to our extinction, but evolution led to it. If I, coming from what is now the minority position, push for my morality, which includes life, before the anti-natalists sterilize all of us, I must wonder, as the anti-natalists take over, whether I am on the wrong side - whether evolution has led to antinatalist morality and the anti-natalists win. Whatever happens would be functional; it might just not be what we want functional to be. IOW it was functional that dinosaurs became extinct. Evolution and natural selection select for whatever fits - fits, that is, with whatever else exists: other species, the weather, etc. I don’t really see why I should do anything other than prioritize what I want, and let natural selection see to the outcomes, just like every other individual in other species. Because once I follow my interests and desires, including mammalian empathy, I am living out what I have been selected to be like. Whatever this leads to is functional, though it may not include my kind.

This might seem obvious: if it is survival of our or ‘our’ genes and these shaping new generations of ‘us’ or us, then some of transhumanism is wrong and I should oppose it, since it will replace our genes and us.

On the other hand, if I am a functionalist, a natural-selection supporter, then if transhumanism wins, that’s fine. I do not need to think in terms of the best morality or heuristics. We will do what we do and it will be part of natural selection - I mean, unless I have an emotional attachment to humans… :smiley:

IOW there is some weird mix here of selfishness - I should support functionalism as far as it furthers my species (though not me in particular) - and of following the intended function of morality… however, natural selection is not itself a respecter of species.

I cannot in any way avoid fitting in with evolution as a whole, so why should I focus on one selfish part, where I identify with future generations of my DNA? It seems to me that must have an emotional component. But if we strip away the emotional AND suggest one should take a functionalist point of view, well, there are no worries.

Natural selection will continue whatever I do.

Let’s take this last bit first. 1) I think it is complicated. First, immediately, I want to stress that there is always the option of delaying judgment or agnosticism. Reason is not infallible - and is often guided by emotions and assumptions we are aware of, and then also often by emotions and assumptions we are not aware of. So when in a real contradiction between emotions and reason, we might, especially if we do not seem to immediately lose anything, a) delay choice or b) make a choice but keep an agnosticism about whether it was the right one. 2) It depends for me on what reason, whose reason, and for that matter whose emotions/intuition. 3) A problem with the choice is that emotions and reason are mixed. It is muddy in there. Reason depends on emotions, especially when we are talking about how humans should interact - IOW what seems reasonable will include emotional reactions to consequences, prioritizing inside reasoning itself, and the ability to evaluate one’s reasoning (such as: have I looked at the evidence long enough?), which is evaluated with emotional qualia (see Damasio). And of course emotions are often affected strongly by memes, by what is presented as reasonable, by assumptions in society and culture, etc. When someone claims to be on the pure-reason side of an argument, I immediately get wary. I just don’t meet any people without motives, emotions, biases and so on. If we are trying to determine the height of a tree, OK, I may dismiss emotion-based objections after the rational team has used three different measuring devices and come to the same measurement, despite it seeming off to the emotional team. But when dealing with how we should treat each other…

In a sense what I am saying is that reason is often used as a positive term. IOW it represents logical work with rationally chosen facts, gathered in certain positive types of ways, etc. But actually reasoning is a cognitive style. A neutral one. It can be a mess, it can be well done. It may rest on false assumptions that will take decades to recognize but are obviously false to others. It is just a way to reach a conclusion. Some do it very well. Some do not.

The reasoned idea within science was that animals did not have emotions, motivations, desires, etc. They were considered mechanical, with another significant group of scientists thinking that any such claims were anthropomorphizing, unprovable, and confused in form, though these mainstream scientists were sometimes technically agnostic. That was the mainstream position until the 70s, and it was dangerous for a biologist to go against it in any official way: articles, public statements, etc. People holding the opposite opinion were considered irrational, projecting, anthropomorphizing, and following their emotions.

Now of course this example is just an example. It does not prove that reason and emotion/intuition are equally good at getting to the truth or that reason is worse.

I bring it up because, basically, what appears to be reason need not be good. It is just descriptive, without valence. Certain parts of the mind are more in charge and they have their toolbox. Maybe it is good use of the tools, maybe not. It is an attempt by the mind to reach conclusions in a fastidious manner, based often primarily on word-based arguments. This isn’t always the best way to figure something out. And underneath the reasoning, the emotional world in the mind is seething.

OK, let’s look at the motion sickness. I’ll keep this one short. It’s a good example on your part, and I do not think I can or would want to fully counter it. But let me partially counter it. In the case of morals, we are talking about what it is like to live, given who we are. If we are going to say certain behaviors are not good, then one such behavior might be putting a fan up against someone’s bed. Now this will come off as silly, but my point is: despite the fact that the person who gets nauseous because of this is actually having an inappropriate reaction - because fans and beds can give one an experience that resembles when one needed to throw up - it still makes the guy in the bed have a bad time, even if he ‘shouldn’t’.

So here we are, after this long evolutionary process, reacting emotionally to a lot of stuff. Right, wrong, confused, misplaced emotions… quite possibly. Emotions that perhaps worked to protect us but now react to harmless things. But we have those emotions. We react these ways.

If we do not consider the emotional reactions to the moral act and to the consequences of any moral rule, then we are ignoring a large part of what is real. IOW if we just focus on the survival of our genes creating more gene bearers, we are removing a large part of the real from our calculations:

  1. this may have serious consequences regarding our survival
  2. but regardless I think it is wrongheaded even if it did not
  3. I question our ability to know when it is simply some vestige of a no longer relevant reaction, or a deeper insight. I see reason as often being hubristic when it comes to these evaluations.

And to be very clear, I am not arguing that we should do away with rationality. I am pro-combination. So when I point out the problems with rationality, I am not saying emotions have no problems, and we should switch to just that.

The cardiac surgeon, in all likelihood, is working on someone who smoked or overate and did not move around very much. And if they did, then the cardiac surgeon is adding a way of working on top of what evolution set us out to do. But even more importantly, if we are to take from evolution what morality’s function is, why would we then ignore what evolution has given us? It is that juncture I am focused on. I don’t have problems with technology per se. IOW my argument is not ‘hey, that’s not natural’ - with all the problems inherent in that - but rather the following:

I note that you think our morality should be based on its function in evolution. Evolution is given a kind of authority. Then when it comes to how our evolved emotions deal with morals, we must modify that. If we are appealing to authority in evolution, why stop at deciding it is about survival?

They may be pompous and unjustifiably self-assured systems of belief, but the jury is still out on whether they 1) added to both survival AND better lives or 2) are still better than secular humanism, say. Testing such things is not easy.

Certainly you are correct that emotions can be problematic. But I am not arguing that there should be no rationality - and even in religions and folk morality, in fact in any moral system I have seen, there is a mixture of reasoning and emotion, consequentialism and deontology. I am arguing for the mix, and that the mix is generally unavoidable in any case. I think the cane toad type of hubris in rational solutions often comes about because we think complicated situations can be tracked using frontal lobe skills alone, and that evolution made a boo boo when giving us tendencies to use both emotion and reason. We cut out the emotion. I also think there are skills in emotion/intuition - or better put, some people are more skilled than others, just as in reasoning.

I disagree. I make very rapid decisions all the time about whether to go with intuition or to pause and analyze and reflect. Actually, I think nearly the opposite of what you said: we cannot make such decisions without intuition. Certainly reasoning can come in also. But reasoning can never itself decide when it should be set in motion, when it has done enough work, when it is satisfied it listened to the right experts, when it is satisfied with its use of semantics in its internal arguments.
Rationality AS LIVED, as opposed to on paper, is filled with micro-intuitions; it is generally initiated by a macro-intuition, and one knows when to stop with yet another intuition. And there are qualia at all stages.
When we imagine reasoning we often imagine it as if it is happening on paper and the words have simple relations to things.

But actually it is not happening on paper, even when written and read, but in minds, and in the process there is an ongoing set of intuitions.

But, again, importantly, I am not for throwing out reason. I just think we should not throw out emotions/intuition AND further, I don’t think we can anyway.

Actually, I think if we go into the phenomenology of checking out an argument, we will find that intuition rings a bell, and then we zoom in to find out why. Especially in tricky arguments.

And to jump: I imagine a kind of traditional male-female argument. The man cleverly explains why whatever he did was actually OK, given her behavior, given some factor in his life, given some complicated reasoning that seems right to all parties. And the woman is sitting there thinking ‘BS’.

I see that the way we are raised tends to work against integrating the various approaches. Or to put this another way, we tend to officially prioritize intuition or rationality, emotions or calm, word-based attempts at logical argument. Underneath, I think, each approach is using facets of the other, but because of how we are trained, we feel we need to identify with one. Also, because all sides tend to present themselves as rational, we tend to want to hide the emotions underneath our positions and the intuitions in our metaphysics, say.

I think this leads to adults who are damaged, and this is only increasing as pharma and psychiatry pathologize more and more of the ways we limbically react, and in modernity in general. So we think we have to choose one approach, when in fact homo sapiens have them tied together, so we might as well practice that: get to know our reactions and couple the two approaches.

That has been selected for. Maybe it won’t last. Maybe we will be replaced by AIs that have no emotion. I can’t see where a not radically fragmented homo sapien would not find that horrifying. Problem is, humans are radically fragmented. God, I hope I got that triple negative right.

I’d need to see the science. I am not even sure this is the case. If you are chaotically amoral, well, that leads to a lot of bad reactions, unless you are some kind of autocrat - in your home, in your company, in your country. If you are the boss, you can probably get away with a lot, and in fact those guys often have a lot of kids, all over the place. Hence they are evolutionarily effective. But more pragmatic amoral people - I see no reason for them not to thrive. Maybe, just maybe, less in modern society, and maybe less in tribal society, but I think they have many benefits in between, even the chaotic ones. In fact a lot of the names in history books probably had amoral tendencies… and quite a few kids.

I do wonder how they do on creating babies however.

If it is in all populations, it might be neutral or only slightly negative functionally - a byproduct of some other cognitive capacities that helped us. Again, testing this hypothesis is hard.

I think we have problems on the diet end of your justification, not because of faulty desires, but rather because of cultural problems. I think sugar is a drug and we use it to self-medicate. A psychotropic drug. You know that old thing about rats triggering cocaine being made available, or stimulating the pleasure center of the brain? The idea that if we could, we would just destroy ourselves? Well, they redid that experiment but gave the rats complicated, interesting environments, and very few got addicted. And I can imagine that even the nice complicated homes they gave the rats probably had fewer of the smells that rats’ bodies expect, and lacked the nuance there is in what was once the original environment of rats. I think, just as in the cardiac surgeon example, we are using culture to fight a nature that is having a problem because of culture.

In the ‘are there objective morals?’ sense, I am certainly a moral nihilist. On the other hand, this does not mean we need to stop determining what we want. We can ignore whatever we think evolution intended and decide what we want. No easy task, of course, given our wants, natural and culturally created - often created by those with great power, in their own interests. But given that as social mammals we have self-interest but also empathy and tend to collaborate, there is room for desire to create what may not be morals but heuristics. Desire (and emotions and intuition) in collaboration with reason.

That’s where we are now, with what we are and have. Ironically, ignoring whatever evolution intended or selected for might in fact be the best strategy for survival, though I am not saying it is, nor do I know a way to test that. However, I think there are reasons to think it might be a better route.

Focusing only on survival, and the survival of genes, deprioritizes a lot of the things that make us like life. I think we will soon, if not already, have solutions to the survival of genes that do not need us to enjoy life at all. Forget the panopticon Amazon workplace; I mean complete dystopias that, on the other hand, keep those genes chugging along. Of course, we might opt out if that is where logic leads us, not feeling at home in the efficient world we created, with one prime criterion for the goal and with reason, trying to be devoid of emotion and intuition, as the means of working toward this final solution.

I see a few central disagreements/differing definitions running through your replies, so I’m going to take your points a bit out of order. I apologize for the wall of text, I hope it serves to bring the separate threads of our conversation back toward the main point.

First, returning to the meaning of “functional” and the role it plays in my argument. You say that “it was functional that dinosaurs became extinct”, and I think this suggests that we are using the term “functional” very differently. If you mean that it was functional for humans that dinosaurs became extinct, I agree. But it certainly wasn’t functional for dinosaurs, since most dinosaurs’ genetic and social lines ended. We could compare the former claim to a claim that it would be functional for modern humans to make mosquitoes go extinct, and thus morally proper. That claim doesn’t seem far-fetched to me.

But I feel like that line is a bit of a non-sequitur, since while the extinction of a species may be functional, it is only distantly moral. The way I’m using functional here is just this: morality exists because individuals with “moral machinery” survived and those without it perished. The evidence for this is the universal existence of morality in human groups, and the near universal existence of morality in individuals. It is functional for those who have it in the sense that the survival of whatever produces it (be it genes or memes or something else) was a result of how morality shaped behavior.

The role this plays in my argument about morality is that we can’t abstract morality out of its evolutionary context. We can look at morality as a phenomenon in the world, observe it empirically, ask what it does and why it exists, and in so doing discover what it means to say that one ought or ought not do X. The only objective reference of those terms is that evolutionary origin, the role morality played and why it exists. The only recourse for morality is thus to its function: it exists because it improved the odds of survival of the individuals and groups who shared the trait. (I grant your point with respect to other reasons why people act, but I would contend that those aren’t morality. Moreover, I think ‘moral nihilism plus heuristics for subjectively meaningful life’ is compatible with my argument here; moral nihilism is a viable out. One can take the position that “morality is just something that helped apes survive, therefore there’s not really any objective truth of morality”. That out seems less appealing to me than the conclusion it’s trying to avoid, but I’m not sure my arguments here address that choice. At one point you seem to essentially ask why you should do what you are morally obligated to do, and for that I make no attempt at an answer.)

I think this largely addresses your points about antinatalism and alternative forms of survival. My argument here is, again, not for or against any specific moral system, but rather for a meta moral system that says that moral systems should be evaluated on their likelihood of leading to survival, because that’s the only meaningful way to evaluate whether something is moral.

But there is a predictive element to this position: we’re talking about a prospective best-guess about the effect a system will have on survival. It is absolutely true that, if antinatalism ends up leading to the long-term survival of humanity, we should score it as functional. You are right to point out that, ultimately, the truest test of functionality is what actually ends up succeeding, but that doesn’t seem to favor any particular hypothesis about which moralities are actually functional in prospect. In the same way, I could say, “I think that by reading these monkey bones, I can predict the stock market, and you have to admit that the truest test of what method of market prediction works best is what method actually does predict the stock market; if my monkey bones method actually predicts the stock market, then you would have to admit that it was the best method.” And so I would, but that doesn’t say anything about whether the monkey bones method actually does work, and we still have every reason to think that it is a bad way to predict the stock market.

Similarly, in prospect, we have every reason to think that antinatalism is not a good moral system under the metric of survival. We can come up with any number of scenarios whereby a moral philosophy that literally requires the slow extinction of the human race actually ends up preserving the human race better than other moral systems (e.g. maybe if everyone stops having children, they spend more time extending lifespans, copying consenting adults at the atomic level, and conquering the stars etc. etc.). But that seems unlikely, given what we know about the present state of human longevity and atomic-level copying. Still, someone can coherently say that we should be antinatalists because that’s the moral system that will best achieve functional aims; they may be making a mistake of fact, making a bad prediction about what will work, but that’s an empirical question about the future, and the truest test is indeed the arrival of the future.

I do think you raise an important distinction that I need to make: we observe morality, and I argue we should conclude certain things from it; similarly we observe antinatalism, why shouldn’t we conclude similar things from it? If antinatalism can be like eating too much sugar, why can’t the same be said of morality itself? To this I point to my comments on whether morality is a spandrel. Antinatalism doesn’t seem to pass the way morality does: it’s strongly negatively associated with reproduction, it’s certainly costly (thus the negative selection), but it tends to die out as quickly as it arises: it’s been proposed many times in many places and has been rejected (likely because everyone who practiced it died without raising any children to believe it).

I don’t think there’s any tension in saying that certain traits that exist in an evolved organism are contingent and haven’t been selected for, and I think you accept that based on your question about spandrels. The existence of particular moral beliefs doesn’t suggest that those beliefs have been selected for; the near-universality of some kind of moral belief in all humans does suggest that the underlying machinery has been selected for, i.e. has conveyed some survival benefit on the people whose genes express that machinery.

I think we can say similar things about your proposed reductios (transhumanism and the AI breeding a new batch of humans for one generation every x-thousand years). It may be that those methods produce survival better, and that could be shown by someone trying those systems and actually surviving. But regular reproduction and genetic evolution have proved a pretty effective means of survival, so it’s reasonable to think that they will more effectively continue our survival than exotic systems like the AI harvesting, breeding, and euthanizing generations of humans. Moreover, if what we want to see survive is society, then a bunch of DNA held in stasis doesn’t well achieve that goal (this goes to what particular form of survival is best, which I don’t think is answered by functional morality, nor does it need to be answered for the purposes of making a case for functional morality).

The ‘seeds to the stars’ reductio raises the open question of at what point we can rest in our pursuit of moral action. In most moral systems, it’s a good thing to save someone’s life, but once someone has saved someone’s life, they aren’t absolved of moral responsibilities. Even after saving a million lives, we can continue to do good. As a matter of subjective experience, we may decide we’ve done enough and no longer care, but it would seem a strange moral system in which the morality of an act actually changes based on the past moral acts of the actor (I can’t think of any that do this expressly, at least if an act is taken ceteris paribus).

But I take your more general point here: functional morality probably commits us to accepting some odd scenarios. I’m OK with that. Odd scenarios are philosophy’s bread and butter. Earlier I alluded to not being able to step in the same river twice, a claim that sounds odd upon first encounter but is normal and mundane in philosophy. And I would expect the truth to be somewhat unintuitive, given the same limits on intuition that I’ve been relying on in this thread: we have the brains and intuitions of savanna apes, and our intuitions are ill-suited to space travel.

I don’t mean to be too dismissive of oddness as a weakness, I do think intuition is often useful as an indicator of subtle logical mistakes. But I also think our oddness tolerance should be properly calibrated: even given that we’re committed to the positions you propose, the scenarios themselves are so odd that any moral conclusions about them will feel odd. If functional morality gets odd at the margins, so does every moral system I’ve ever seen. We have poor moral intuitions about AI, because we have never actually encountered one. In every-day situations, functional morality will work out to support many naive moral intuitions, and will approximate many sophisticated consequentialist and deontological systems. Are there any everyday situations where functional morality gets it wildly wrong?

To your points re: reason vs. emotion, I admit I’m losing the thread of how this impacts our disagreement. For one thing, I think we basically agree on the role of both emotion and reason, i.e. that they are both useful and valuable, and both can be flawed. But more importantly, I don’t think conceding that emotion sometimes provides important insights and we should be wary of too easily dismissing emotional/intuitive reactions as merely vestigial – I don’t think conceding that undermines my point that our moral intuitions can be rationally compared against what we know about how moral intuition arose in humans and what purpose it served. The way we know if a moral intuition is ‘right’ or ‘wrong’ is whether it fulfills its role in tending to increase the odds of survival or not. There is an objective reality, and our reason and emotion are both useful in helping us discover it, but they should arrive at the same answers because they are both discovering the same reality.

(I think I’ve addressed all your main lines of argument, but if I missed any please let me know, particularly if my omission seems calculated to avoid a particularly devastating point.)

Carleas, if we ever cease to exist, we can’t exist.

Morality is not about survival in some form, it’s about the quality of it.

I would say that my argument here is exactly counter to this. A being that suffers through life and reproduces will pass on its pattern, a being that enjoys life and fails to reproduce will not. We are the descendants of beings who reproduced, regardless of any subjective pleasure or pain they felt in getting there.

Of course, pleasure and pain are tuned to the same end, so the subjective experience of a life that leads to reproduction is likely to be positive. Safety, nutrients, and reproduction all feel good because beings that pursue those experiences are more likely to survive and reproduce.

I have to admit I am too lazy to go back and understand the points you are responding to. I will just respond below to points I have opinions about now, reactions that may even contradict things I’ve said before.

I am not sure if we had abstracted it before we had evolutionary theory, but we certainly had morality outside that context, and even do now. IOW morality often goes against - at least, so it might seem - my own benefits in relation to natural selection as an individual, and at the species level it is not based on this consideration, at least consciously. Let’s for the sake of argument accept that morality was selected for. OK. And in what form? Well, it hasn’t, generally, been in the form ‘whatever leads to survival is the Good’. What got selected for was a species that framed moral issues in other ways. So if we want to respect natural selection, we would continue with that unless we have evidence that it is not working.
IOW the trait that got selected for was not Morality = survival.
What got selected for was trying to be Good, often in social ways, fitting ideals which we did not directly think of in terms of survival. Now, underneath, this may have been doing just that, but precisely for that reason we have no need now to consciously think about survival - perhaps having that as the heuristic is less effective, for example.

Perhaps I’ll reword in response to this: consider the possibility that having moralities that go beyond survival - that do not focus (just) on survival, or even mainly on survival - is vastly more effective. That we have other ideals leads to more cohesion or whatever, as one possible side effect.

I do feel there is a conscious/unconscious, intuition-vs-logic split in here, or between us. Not that I can nicely sum this up in words.

Let’s say that romance is really just pheromones and dopamine-driven altered states. Let’s say that this is actually the best description. It still might radically damage humans to think that way.

I don’t want to assume that my opposition is solely a noble-lie argument either. That excess by which our morality goes beyond what has to do with survival - I grant that meaning in and of itself. I am not beholden to evolution. And that is what evolution has led to in any case.

IOW I am not sure why I have an obligation to go against my nature and view morality, or preferred social relations, as things to be evaluated only in terms of survival. My reasons for resisting this are personal, but I could say that I have been selected not to be like that, so would I not be betraying selection to start viewing things in the way you suggest?

It’s a bit like how feelings might guide a golf-swing adjustment, even with vague, fluffy terms as heuristics, rather than some set of formulas based on calculus and some of Newton’s laws. You may be trying to get us to use the wrong part of our brains to do something.

I think we need a definition of survival. Is it the continuation of homo sapiens genes? Anything beyond that?

Antinatalism combined with cloning, and whatever the rich decide are the best techs to keep their lives long, would certainly seem to have a good chance. I mentioned earlier some dystopian scenarios that might very well have great survival outlooks. I think it would be odd not to immediately come in with quality-of-life, fairness, justice type moral objections, even though the truth is the survival of homo sapiens might be best served by some horror show.

If it turns out that, by the AI’s assessments, the best chance for survival of homo sapiens is to eliminate 99% of the population and do some cryogenic alternating with short periods of waking for procreation, while the AI takes care of security and safety, must we just knuckle under and choose that?

And I don’t think that is a loopy suggestion. I actually think that some rather dystopic solution would be most likely to extend the survival of homo sapiens genes and lives.

Ah, now I see you respond to this…

Our modes of choosing partners and being social have changed a lot over time. I see no reason to assume that further changes will not take place. We can control much more. Food production has gone from hunter-gatherer to ancient agriculture to modern agriculture to GM agriculture with crops that cannot breed. Why assume that the ‘best’ method for human production will not radically shift? And it’s not like they are not working toward that out there.

Here you mention seeing society survive. There would be a society; it would just be different. But further, why should evolution care about the specifics of human interaction, if the point is homo sapiens survival? It seems to me you are smuggling in other values than survival in that word ‘society’.

I would think it will. I would guess that it is already in place, in many ways, in the business world, and that Amazon could use functional morality to justify its panopticon, radically efficiency-focused, horrific workplaces - that words like dignity, sense of self, and fairness no longer have any priority. Now, a sophisticated functional morality, one that looks way into the future, might find that such business practices somehow reduce survivability… but…

  1. maybe it is better to start from other criteria - even if they all somehow boil down to survivability, which I doubt
  2. I suspect that some other nightmares will be just peachy under functional morality, and in any case we will have no tools to fight against them. We would then have to demonstrate not that many of the things we value are damaged, but rather that this process damages survivability, perhaps decades or hundreds of years in the future.

If we limit morality to survivability, I suspect that we will limit our ability to protect our experiences against those with power.