Functional Morality

Morality has resulted from an evolutionary process that selects for survival. Since we observe morality in both young children and non-human primates, we can infer that the cognitive habits that we call morality are evolved: they were selected for because they tended to increase the likelihood that those organisms that exhibited them would pass down the genes that produced them. Morality, in other words, is functional, and this meta-ethical basis should be the foundation for any particular moral system.

This raises problems for many popular moral systems, but most acutely for utilitarian ethics, since it is ostensibly grounded in the same secular liberal worldview that recognizes the mind’s material identity and evolutionary origins. Because if morality is a product of evolution, if its purpose all along has been to do whatever keeps the genes propagating, then moral intuitions about the value of humans, or conscious beings, or subjective experience, are at best accidentally correct: they are right if and only if they produce moral prescriptions that tend to favor propagation.

It bears mentioning that subjective experience, too, is a product of evolution: individuals feel happy and sad because those feelings tended to help their ancestors survive and reproduce. So we should actually expect subjective experience to be a somewhat reliable proxy for gene replication. Moreover, since humans' greatest evolutionary asset has been their cooperation, we should also expect the intuition that others' subjective experience matters to be, at bottom, selfish: we have dedicated brain structures for modeling the subjective states of others (more specifically, our ingroup others), and we reproduce their subjective experience automatically as we observe them; their pain feels to us like pain, their pleasure feels to us like pleasure.

But note that these are proxies. We can identify many situations where they mis-assign value, both in our own subjective experience and in how we value the subjective experience of others. We can be tricked into valuing the subjective experiences of robots, and into devaluing the experiences of friends, by subtle or overt manipulations of other evolved cognitive habits: cute robots who mimic babies get incorrectly included; unfamiliar potential allies get incorrectly excluded.

One way of interpreting moral debates is as competing assertions about which system most faithfully produces evolutionary success, and as an evolutionary process in itself, in that the ideas themselves replicate and are selected for. But I would argue that we can actually draw separate normative conclusions from this observation. To note the evolutionary origins of morality is to short-circuit the is-ought fallacy, because it describes what 'ought' is: where moral ideas come from and why they persist. It therefore permits us to reject normative claims that are inconsistent with descriptive claims about what morality is. Functional oughts are 'is' claims.

Interesting proposition. However, whether morality is functional has only a contingent relationship to the argument. In other words, a counter-argument can be made as well: that morality has nothing to do with its utility but is instead based on intrinsic, given properties.

In the case of the child this is evident, for he will most likely be given a set of moral rules to live by. If he questions these along the way and changes them to suit himself, then it can be said that he uses moral acts to his advantage. Such individual re-formation can be said to conform to the large-scale changes that occur as a result of intergenerational learning, but one would be hard pressed to point to it as directly related to a functional evolutionary process.

Examples abound:

Homosexuality does nothing toward securing the genetic security and advance of progeny; on the contrary, current views favor the idea that matters of population control, irrespective of greater or lesser genetic endowment, are more of a factor in placing markers on the moral spectrum.

It appears that neither a functional approach nor an existentially preloaded variance on traditional morality is the fundamental basis; rather, a quasi-meta-psychological adaptive conformity to a more ideal basis may be the key.

In the case of homosexuality, again, the motive is not concern with overpopulation, which is the causa causans; contrarily, the justification for it becomes primary.

A deeper structural analysis may reveal the 'ideal' sequence of motivations, rather than the idea of superseding favorable genotypes, as the more emphatic evolutionary factor.

K: I think this piece has a basic and fundamental problem… you haven't defined morality…
To say we have observed morality in children and non-human primates still leaves us the question of what exactly we saw, and whether we have put our own interpretation on what we saw, instead of understanding the "morality" on its own terms…
Children must be taught everything, including morality… so if you see a child acting "morally", what does that exactly mean? I suspect that we see an action and we, as adults, defined the action; for the child, there was no underlying act of morality… it was simply an action… without our notion of morality, which we then gave to the actions of the child or non-human primates…

Kropotkin

Minor observation:

A: “Morality, in other words, is functional”

does not necessitate
B: “and this meta-ethical basis should be the foundation for any particular moral system.”

For example it may well be that for morality to be functional, it needs to be experienced as an ideal.
That morality, when it is experienced as something less than ideal, ceases to have its effect.

In this way we can comprehend the function of the ideal, which, too, must have evolved as an advantage in terms of survival.

Putting it sharply: to approach morality as something less than divine may be to eliminate its function.

[tab]Even sharper: it may be why Protestant Europe, after its god was pronounced dead, embraced Islam. Europeans may actually harbour an instinctive respect for the Muslims' idealistic approach to morality.[/tab]

Proportionally, this brings it down to the level of functional categories: should the ideal be a derivative or not, from a utilitarian perspective? It has merit but does not overcome the dual, religious aspect of good and evil.

The premise is flawed. Many systems of morality are not centered around survival and life, but around death and its inevitability. Morality includes people who know they will die, or who actively put their lives on the line, for 'higher' purposes. Thus morality is not necessarily about survival (function). It's about when the price of your life is worth paying for a higher cause, or when another's life (your children's, for example) is worth more than yours.

None of what you say about "functional morality" applies to life-and-death matters.

I like this topic but I'm not understanding the differentiations, since they are all interconnected in my mind as one system, not an either/or or an is/ought. Isn't all morality functionally based on the individual and his/her relationship with society at large? Isn't morality society's cohesive glue? Morality has to be functional for society to be functional, and it's not functional in one way, but in a multitude.

Just out of curiosity [for those in the know] does this not seem to reflect many of the points that Satyr raises over at KT? For him morality is ever and always in sync with nature. Natural behaviors can be understood if you are able to grasp the role that evolution plays in the reproduction of all living things.

Perhaps a special dispensation might be granted here. Allow him to come on board and participate on this thread.

My own interest of course is the extent to which these "general descriptions" might be integrated into actual conflicted human behaviors. Where do nature/genes stop and nurture/memes begin when the discussion becomes embedded in the moral/political conflagrations that we are all likely to be familiar with?

I largely share the background, but I don’t think you succeed in supporting your view.

It is OK to suppose that morality responds to the environment, but calling it the result of an evolutionary process is problematic. How could we ever observe this? Fossils bear no trace of the morality of a specimen. At the same time it begs for the assumption of a theoretical framework in which, according to the complexity of an organism and its living conditions, we can infer the morality this organism would develop. Somehow this has to be dared, yet there is an inner dynamic in groups, which I would call History, that appears to be at least as determining as physiology and environment. (Of course it's possible to posit that History, too, is linked to evolution, but that goes way beyond simple morality).
I am inclined to accept that morality assists 'life', but 'life' is not necessarily self-preservation or the propagation of one's own genes. So your "select for survival" becomes problematic too. Oversimplifying, we might see morality (as long as we don't assess it at face value) as a checklist for making an individual subservient to a group. In that respect it may be functional in sustaining the life of the many, but quite often by requiring of individuals the opposite of their survival and genes propagation. So morality and consciousness are complementary in a way, but also conflicting - and you implicitly point to that. A non-moral attitude may well respond to a drive for survival, which could also be an outcome of evolution (probably a more genuine one).

That said, Utilitarianism has a problem, I agree with that.

I agree with the above re the intrinsic moral drive within human[s].

However, the ongoing progress of morality within humanity is not based directly on biological evolutionary adaptation and natural selection.
The ongoing 'evolution' of the inherent moral drive is based on a meme basis [ideological] that in turn [as driven by the inherent moral drive] programs the collective brain of humanity.

Note that 200 years ago no one would have forecast the possibility of the legal banning of 'Chattel Slavery' in all nations in the world. Whilst this pertains only to laws [not practice], it is a definite 'moral' achievement and progress for humanity. Such an evolution is not by natural selection re normal evolution.

What is critical is how we can abstract a sound Framework and System of Morality [with groundings and principles] from the reality of what is within the ongoing progress of morality. To expedite that progress in quantum jumps we need a sound Framework.

I have been posting views relating to a sound Framework and System of Morality to expedite progress in morality in various posts.

Unless you qualify survival and genes propagation as ‘moral’, it does not seem to me that you and OP are saying the same thing. (Of course, it is ultimately up to Carleas to judge on the matter).
If I understood correctly, he maintains that ‘moral habits’ were ‘selected’ throughout evolution because they assist survival and genes propagation, not really because of an intrinsic moral drive in men.

The ‘findings’ reported in the article (which is an interview, not a scientific paper) do not seem to me conducive to what Mr. Bloom claims.
They can be easily interpreted in the way I guess Carleas favours, not as evidence of a genuine moral instinct, but as ‘moral feelings’, ‘proxies’, that serve the real instinct of survival.

I did state that I agree to a degree and disagree on the following:

“Survival and genes propagation” is not morality per se; rather, they are grounds for morality and ethics.

As I had mentioned, one needs the following to understand how it works:

I have posted on the above elsewhere - won’t go into details here.

That is only an article re a book he wrote. The research proper is in the background and published elsewhere. I can't find his scientific paper offhand, but he would not be awarded $1 million if there were no scientific paper.

Btw, there is other research done re babies and inherent morality.

Sorry if I misunderstood, but honestly I still don’t understand.
I confess I have a problem at the literal comprehension level. You wrote “I agree with the above re the intrinsic moral drive within human[s].” What does “re” mean? I thought it was a typo, but I see that the same “re” returns elsewhere.
Regardless, after the same sentence “I agree with the above re the intrinsic moral drive within human[s]”, it seems that you posit a ‘moral drive’ in men.
Now, I take you agree with Carleas by saying:

My understanding is that Carleas maintains that if these instincts are a ‘ground’ for morality, they are so in some deceitful way, meaning that what is deemed a moral habit is, in fact, a device serving “survival and genes propagation”. This amounts to a denial of a moral drive in men. And if you agree with that, where would ‘your’ moral drive be?
Then you add

and

So, if I may consider that instincts to "survival and genes propagation" are key to adaptation and natural selection, I have to understand that these non-moral grounds of morality are not what 'directly' propels this 'ongoing progress of morality'. So their role remains mysterious. Maybe I should have read your other posts on the "sound Framework and System of Morality" to understand this better. Nevertheless I get that this framework would be obtained through abstraction, hence I conjecture (maybe superficially) that these non-direct grounds would no longer play a part in it.

That's OK, I don't mean that Professor Bloom's research is unreliable. I have no reason to think that and, actually, what he says about children's reactions makes sense to me; I can easily back it from my own experience. Yet I still don't think that those findings hint at an innate morality in a sense that would confute the OP. (Incidentally, the Jakob Foundation is about «the future of young people so that they become socially responsible and productive members of society». I don't mean to disrespect that, but scientifically their grant does not prove much in my view).

Awesome epiphany!

Yes exactly and it’s why morality is only relevant within that assumed context.

Is there such a thing as objective experience? Can an object look at itself?

Gene replication? You mean population growth with no opposing force selecting for any particular gene mutation? Hmm… what happens when life gets too easy? What genes are chosen in that environment?

I think it was farming, and one doesn't need cooperation for that; just luck in having good soil and animals that could be domesticated. That's the biggest difference between the Native Americans and the Europeans.

Are wolves more successful than tigers? Wolves cooperate, but have to share in a hierarchy where some wolves may be excluded. Tigers manage alone and don’t need to share.

We do tend to personify, but I don’t think we need cooperation for that. The rustling of the grass should be interpreted as a tiger whether it really is or not.

How could anyone determine what “evolutionary success” means or how to get there? Evolution presupposes no presumptions or else it’s not evolution. As soon as someone has a plan in mind, it would cease to be evolution since there is no obstacle to overcome, but conditions upon which to undercome and devolve.

The functional morality is what ought to be done if one desires the continuation of what’s been happening.

“Regarding” is my guess.

Morality is in sync with society and whether one regards society to be natural or artificial is subjective along with the interpretation about which is best.

You can argue his side and we won't have to deal with the foul odor.

Are you asking whether nature or nurture more prominently affects moral/political leanings? Well, since morality is a function of society, then it would seem that nurture would instill properties consistent with societal influences.

Everything you do is a product of your structure.
When you act, you reinforce your structure.
Your will is a product of your structure.
Beyond the bias of the living, all is neutral.
The bias is a product of your structure.
All values are a bias.
Morality is how to best act in accord with your bias.

Survival is neutral.
Evolution neutral.
Happiness neutral.
Change neutral.
Progress neutral.

Only the biased care one way or the other.
What is your bias and why are you biased?
Ought you rely on the systems that produced your bias to dictate how you respond to your structure?
Follow their lead? Set their results as your goal? Mimic the blind?

I am a little late on this response, but I tried to be thorough by way of apology. There were many good responses, and clearly some weak points in my argument I needed to address. To avoid responding to individual sentences, I rolled them into some overarching categories. Please correct any mistakes or misreadings, and let me know if I failed to adequately address any criticisms.

================================================================================

1) What is “morality”?

Peter Kropotkin points out that I have not defined “morality”, and goes on to note that without a definition, statements like “we observe morality in both young children and non-human primates” are unclear. Peter is correct that I did not provide a definition, but I disagree that that is a significant problem. In some sense, I am arguing for what should be the definition of morality, i.e. how that term should be understood and used. To the extent that’s so, any definition I provide would be effectively tautologous with the argument that I’m making.

But I’m also appealing to a colloquial, small-m ‘morality’ when I say that we observe morality in children and non-humans. In both those groups, we observe strong, seemingly principled reactions in adherence to innate concepts of fairness, and often those reactions are contrary to immediate self-interest. So, for example, capuchin monkeys trained to complete a task for a given reward will react violently if they observe another monkey get a more valuable reward for the same task. They will go as far as to reject a reward that they had previously been satisfied to receive, as if in protest at the unfair treatment. That reaction is a rudimentary morality, as I mean it. Children, too, will react angrily to being rewarded differently for the same task, and from a very young age have a concept of fair distribution of rewards.

In these situations, we see that there is clear global instrumental value in the reactions, since they are intended to punish unfairness and communicate that the recipient will not stand for unfair treatment. In a non-lab setting, this reaction will encourage fairness in repeated encounters. But the reaction is also clearly of a piece with more sophisticated moral reasoning, as when a person reacts to such unfair treatment on someone else's behalf. It takes little more than this seemingly inbuilt reaction and the ability to model other minds to generate such vicarious indignation. We then tend to label these vicariously felt slights as moral sentiment, and further refinements are just further abstraction on the same idea. Kant's categorical imperative is nothing more than a generalization of them.

As Wendy points out, morality in this sense is "society's cohesive glue": it's a set of generalized standards of treatment, and one about which third parties will get indignant on someone else's behalf. It creates a social glue by creating a set of presumptions about acceptable conduct. And I mean "morality" to point to that glue. Morality as I use it is an observable part of human affairs, a collection of behaviors common to normal-functioning humans (a deficit of which we describe as one of several mental illnesses). And because of its roots in innate tendencies visible in unschooled humans and our close animal relatives, I argue that the observable behaviors of morality are a result of cognitive habits selected for in our evolutionary history, i.e. that they exist because they are functional, so there is no higher authority to appeal to in moral matters than function.

But I should clarify that, even with my functional framing, not all moral rules are as hard-wired as unfairness. For example, it’s perfectly consistent with this understanding of morality that there are some moral rules that are necessary (I think this is what Meno_ means when he says “intrinsic”) and some that are contingent (what Meno_ describes as a “given…set of moral rules”). Necessary rules will be those that follow from the base facts of biological existence; contingent rules will be those that create social efficiency but are just one of many ways to create such efficiency (perhaps this is what Wendy meant by morality being functional in a multitude of ways). This distinction is neither sharp nor certain, but it is meaningful when considered in degrees: the moral maxim that one should follow traffic laws is more necessary than the moral maxim that one should drive on the right, even though it may be possible to efficiently structure society without traffic laws.

Urwrong suggests a basis of morality in "death and its inevitability", but I don't see that in practice in the real world. Even the examples he gives (giving your life for a higher good, or for your child) are clearly functional, whether by supporting self-sacrifice for collective benefit, or simply by ensuring the direct survival of your genes as carried by your offspring.

It may be true that the adherents of some things we call morality describe their actions in terms of other values, such as "god's will" or "karma", but the existence of a mythology and alternative narrative does not detract from the fact that, if those moral systems have persisted over time, it is because they kept the groups that supported them cohesive and self-perpetuating. (I will say more below about the potential distinction between accurate descriptions of the world that involve selection, and the behavioral effects of descriptions of the world on which selection acts).

It isn't impossible to have a non-functional moral system, but if it is non-functional, it is not likely to survive. Early Christianity had a moral prohibition against reproduction, and that moral sentiment died out because it was selected against: people who believed it did not reproduce, and a major method of moral transmission (likely the primary method) was unavailable. The existence of such beliefs, and their description as a form of morality, does not mean that morality is not as I describe it.

================================================================================
2) In what sense is it “functional”?

Several people challenged the claim that morality evolved. Attano asks how we could know ("Fossils bear no trace of the morality of a specimen"), and Prismatic notes the memetic evolution of morality on sub-genetic-evolutionary timescales.

I have described the biological roots of morality as "cognitive habits". I describe them this way because it doesn't seem that most particular moral propositions are coded in our genes, but rather that we have a few simple innate predispositions plus a more general machinery that internalizes observed moral particulars. A Greek raised among the Callatiae would certainly find it right and proper to eat his dead father's body, and a Callatian raised among the Greeks would find the practice repulsive. The general moral cognitive habits that are selected for in genetic evolution are the foundation of the moral particulars we see in practice, especially the tendency to align one's behavior with others' as a means of coordinating society and enabling cooperation. Those cognitive habits are functional insofar as they enable more cohesive groups to out-compete less cohesive groups.

Attano is correct that we can't see this directly in the fossil record. But we can still infer its origins in genetic evolution by looking at non-human animals and young children. There, we see both the tendency to imitate the herd and the foundations of specific moral precepts. Explaining this through "History" (which I understand to be something like memetic evolution) doesn't work, because non-human animals aren't plugged into the same cultural networks, and very young children haven't absorbed the culture yet (and I believe the moral-like actions of young children are similar across cultures, though I am less confident on that point). Evolved cognitive habits also best explain why we see moral systems in all human groups. Though they differ between groups, they are present everywhere and there is broad agreement within a group.

On top of those cognitive habits is another form of evolution, what I would call memetic (as opposed to genetic) evolution. Our wetware is evolved to harmonize groups, but the resulting harmonies will vary from group to group due to differences of circumstance and happenstance. That explains the “progress” in morality that Prismatic notes: memetic evolution can take place much more rapidly, since its components are reproduced and mutated much more quickly than are genes.

Now, we might call “progress” the process of coming up with moral codes that allow us to form yet larger and more efficient groupings. Or it might be the process of removing the moral noise that is built into local moralities by happenstance (e.g. rules surrounding specific types of livestock), boiling down to more universal moral beliefs like “don’t murder”. Progress in a system of functional morality would be if the sets of moral particulars made the group function better.

Serendipper seems to suggest on this point that population growth may be bad (or perhaps just non-functional) if not coupled with an “opposing force selecting for any particular gene mutation”. But population growth is the result of functional morality; bearing offspring who bear more offspring is what it means to have genes selected for. This may be clearer if we compare competing ant hills, and ask what it would mean if one ant hill began to increase in population significantly over the competing hills. More population means that the hill is already relatively successful, because population expansion requires resources, and also that it’s likely to be more successful, because more ants working on behalf of the collective means the collective is likely to be stronger. So too with humans: we can read success from population growth, and we would expect population growth to create success (up to a point, the dynamics change when there are no competing groups).

A growing population may, and probably does, require morals to change, but we should expect that: as context changes, including changes in the nature of "the group", different behaviors will be functional. But that our old morals will be a victim of their own success does not mean that they were not successful: a growing population and growing cooperation between group members mean that the old rules were functional in their context.

================================================================================
3) How does this apply?

A few people asked about the applicability of this way of framing morality. That line of argument usually isn’t so much an objection as an invitation to keep developing the theory, which I am glad to do.

Jakob suggests maybe morality needs to be naive, in the sense that the inborn sense of morality as an ideal is important to its functioning. That may be the case. But it is also true that in order to dodge a speeding car, we need to forget about special relativity, even though the most accurate description of the car’s motion that we can produce requires us to use special relativity. So too might we recognize and describe morality as a system of cognitive habits that support group cohesion, and yet in deciding how we live appeal to more manageable utilitarian or deontological axioms. This goes to Urwrong’s point above about descriptions of morality in terms of death rather than life: different descriptions may more effectively achieve the ends of morality, but they do not change the nature of morality as an evolved system that helps perpetuate human groups.

This is related to a question from iambiguous, of how we actually put the idea into practice. I don't think that's easy, but I also don't think it's necessary. I am not here offering an applied morality of daily life, but a moral theory to which such an applied morality should appeal. There are potential subordinate disagreements about e.g. whether brutal honesty or white lies are more effective in creating group cohesion and cooperation; what I am proposing here is the system to which the parties to such disagreements should appeal to make their case.

Serendipper asks how we could determine evolutionary success, and I think the answer is easy in retrospect (though not trivial), and more difficult in prospect. In retrospect, we can just ask what survived and why. Sometimes we know that groups fell apart for arbitrary reasons, and other times we can readily identify problems within the groups themselves. We can point to moral prohibitions that harmed groups and were abandoned, e.g. sex and usury prohibitions. We can compare across surviving systems and see what they have in common, e.g. respect for laws and public institutions.

In prospect, we can make similar arguments, drawing from the history of moral evolution to make predictions about what will work going forward. Like any theory about what will happen on a large scale in the future, there's substantial uncertainty, but that doesn't mean we know nothing. We can more readily identify certain options that are very unlikely to be the best way forward.

But again, this uncertainty isn't fatal to the proposition that morality is functional – indeed, it's expected. Much as we don't know for sure which evolved genetic traits will survive, or whether K- or r-strategies are more reliable in a given context, we also do not know what moral approach will guarantee group prosperity. But these observations do not undermine the theory of evolution, and they do not undermine the theory of functional morality.

I was referred here in the context of my saying that without emotions there are no morals. I see nothing here to argue against that. If you have strategies that unemotionally lead to the propagation of your genes, and no emotions are present, you have tactics and strategies. Machines could be programmed to do this - something like those robot cagematches, though fully programmed ones. That isn't morals. Morals are inextricably tied to someone's feelings and values - iow subjective preferences, even if it is a posited God's - and notice how these gods get pissed off if you break the rules.

And guilt would fit into the discussion you have of emotions above. Natural selection slowly working on which kinds of guilt are adaptively poor.

Once something is tactics and strategy only, you have no way to decide between this set of tactics - one that leads to the destruction of life on earth - and that set of tactics that does not

UNLESS

emotional/desire based evaluations

are made.

If you have none, you are no longer an animal.

If you have none, you cannot decide, though you could flip a coin.

And one interesting thing about evolution is that it has led, and not just in the case of humans, to species having the ability to not necessarily put their own genes ahead of others'. This may benefit the species - it is part of what makes us so versatile, or our versatility makes us like this.

Yes, apparently unemotional viruses may be even more effective than us - in the long or even short run - but they are not moral creatures. I think it would be a category error to call them that.

The argument that morality doesn’t depend on emotions is that morality was a product of evolution, and was selected for independently of any emotional valence. The origin of morality as something that supports group selection does not depend on emotion; emotion is neither necessary nor sufficient for morality to be selected for.

That’s not to say that morality can’t interact with emotion; it may be that morality subjectively experienced as an emotion is an effective way to encourage beneficial ingroup cooperation. Or it may be that tuning into the emotions of others gives us inputs into our moral machinery that help produce such beneficial ingroup cooperation.

But like all evolved traits, the fact that they produced outcomes that were selected for in the past does not guarantee that they will produce outcomes that will be selected for in the present. We evolved to go nuts for sugar, because in the environment in which we evolved sugar was scarce and it paid to eat all we could. In our current world, sugar is abundant and too much enthusiasm for sugar is selected against. Many common fears are unjustified, we're too risk averse, we overreact to cold. We have a lot of subjective experiences that were handed down through evolution that are actively counterproductive in the modern world. Our subjective preferences can be mistaken in that sense.

So too can connections between emotion and morality be seen to be spurious, once we accept why morality exists at all. Whatever weight we give to emotion, we can and do discount it completely when it leads to the wrong outcome. We feel guilty when we dump someone, and it's not that we shouldn't feel that way, it's that that emotion has no bearing on the rightness or wrongness of the action. We feel it because we evolved in small bands without the elaborate puritan mating regime of modern society, and in the evolutionary context, hurting someone and tearing social bonds to the extent we do when we dump someone now was disruptive to the group and bad for us and our tribe. So we feel guilty, we have the moral intuition that we've done wrong, and that moral intuition is mistaken.

The fact that we can look at a situation and use non-emotional factors to identify emotions that are just incorrect and that point to the wrong moral conclusions entails that moral conclusions can’t actually be based on the emotions. They’re based on something else, something independent of the emotions.

We literally have a neural network that was trained on certain inputs to achieve a certain goal, and now we’re feeding it different inputs and just declaring wherever it points to be the goal. That’s a nonsensical approach. The goal is the same goal: survival.