I am a little late on this response, but I tried to be thorough by way of apology. There were many good responses, and clearly some weak points in my argument I needed to address. To avoid responding to individual sentences, I rolled them into some overarching categories. Please correct any mistakes or misreadings, and let me know if I failed to adequately address any criticisms.
================================================================================
1) What is "morality"?
Peter Kropotkin points out that I have not defined "morality", and goes on to note that without a definition, statements like "we observe morality in both young children and non-human primates" are unclear. Peter is correct that I did not provide a definition, but I disagree that that is a significant problem. In some sense, I am arguing for what should be the definition of morality, i.e. how that term should be understood and used. To the extent that's so, any definition I provide would be effectively tautologous with the argument that I'm making.
But I'm also appealing to a colloquial, small-m "morality" when I say that we observe morality in children and non-humans. In both those groups, we observe strong, seemingly principled reactions in adherence to innate concepts of fairness, and often those reactions are contrary to immediate self-interest. So, for example, capuchin monkeys trained to complete a task for a given reward will react violently if they observe another monkey get a more valuable reward for the same task. They will go as far as to reject a reward that they had previously been satisfied to receive, as if in protest at the unfair treatment. That reaction is a rudimentary morality, as I mean it. Children, too, will react angrily to being rewarded differently for the same task, and from a very young age have a concept of fair distribution of rewards.
In these situations, we see that there is clear global instrumental value in the reactions, since they are intended to punish unfairness and communicate that the recipient will not stand for unfair treatment. In a non-lab setting, this reaction will encourage fairness in repeated encounters. But the reaction is also clearly of a piece with more sophisticated moral reasoning, as when a person reacts to such unfair treatment on someone else's behalf. It takes little more than this seemingly inbuilt reaction and the ability to model other minds to generate such vicarious indignation. We then tend to label these vicariously felt slights as moral sentiment, and further refinements are just further abstraction on the same idea. Kant's categorical imperative is nothing more than a generalization of them.
As Wendy points out, morality in this sense is "society's cohesive glue": it's a set of generalized standards of treatment, and one about which third parties will get indignant on someone else's behalf. It creates a social glue by creating a set of presumptions about acceptable conduct. And I mean "morality" to point to that glue. Morality as I use it is an observable part of human affairs, a collection of behaviors common to normal-functioning humans (a deficit of which we describe as one of several mental illnesses). And because of its roots in innate tendencies visible in unschooled humans and our close animal relatives, I argue that the observable behaviors of morality are a result of cognitive habits selected for in our evolutionary history, i.e. that they exist because they are functional, so there is no higher authority to appeal to in moral matters than function.
But I should clarify that, even with my functional framing, not all moral rules are as hard-wired as unfairness. For example, it's perfectly consistent with this understanding of morality that there are some moral rules that are necessary (I think this is what Meno_ means when he says "intrinsic") and some that are contingent (what Meno_ describes as a "given…set of moral rules"). Necessary rules will be those that follow from the base facts of biological existence; contingent rules will be those that create social efficiency but are just one of many ways to create such efficiency (perhaps this is what Wendy meant by morality being functional in a multitude of ways). This distinction is neither sharp nor certain, but it is meaningful when considered in degrees: the moral maxim that one should follow traffic laws is more necessary than the moral maxim that one should drive on the right, even though it may be possible to efficiently structure society without traffic laws.
Urwrong suggests a basis of morality in "death and its inevitability", but I don't see that in practice in the real world. Even the examples he gives (giving your life for a higher good, or for your child) are clearly functional, whether by supporting self-sacrifice for collective benefit, or simply by ensuring the direct survival of your genes as carried by your offspring.
It may be true that the adherents of some things we call morality describe their actions in terms of other values, such as "god's will" or "karma", but the existence of a mythology and alternative narrative does not detract from the fact that, if those moral systems have persisted over time, it is because they kept the groups that supported them cohesive and self-perpetuating. (I will say more below about the potential distinction between accurate descriptions of the world that involve selection, and the behavioral effects of descriptions of the world on which selection acts.)
It isn't impossible to have a non-functional moral system, but if it is non-functional, it is not likely to survive. Early Christianity had a moral prohibition against reproduction, and that moral sentiment died out because it was selected against: people who believed it did not reproduce, and a major method of moral transmission (likely the primary method) was unavailable. The existence of such beliefs, and their description as a form of morality, does not mean that morality is not as I describe it.
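The selection dynamic here can be made concrete with a toy simulation. This is a sketch only: the two beliefs, the offspring counts, and the restriction to vertical (parent-to-child) transmission are assumptions for illustration, not claims about actual early Christian demography.

```python
import random

random.seed(0)

# Toy model: two heritable moral beliefs. Belief "A" permits reproduction
# (two offspring per carrier); belief "B" prohibits it (zero offspring).
# Transmission is vertical only: children inherit their parent's belief,
# standing in for the "primary method of moral transmission" above.
population = ["A"] * 500 + ["B"] * 500

for generation in range(5):
    next_gen = []
    for belief in population:
        offspring = 2 if belief == "A" else 0
        next_gen.extend([belief] * offspring)
    # Cap total population so the simulation stays bounded.
    population = random.sample(next_gen, min(len(next_gen), 1000))

print(population.count("B"))  # → 0: the non-reproducing belief is gone
```

Of course, if the model allowed horizontal transmission (conversion of adults), belief "B" could persist despite suppressing reproduction; the point in the text is precisely that the primary channel was unavailable.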
================================================================================
2) In what sense is it "functional"?
Several people challenged the claim that morality evolved. Attano asked how we could know ("Fossils bear no trace of the morality of a specimen"), and Prismatic noted the memetic evolution of morality on sub-genetic-evolutionary timescales.
I have described the biological roots of morality as "cognitive habits". I describe them this way because it doesn't seem that most particular moral propositions are coded in our genes, but instead that we have a few simple innate predispositions plus a more general machinery that internalizes observed moral particulars. A Greek raised among the Callatiae would certainly find it right and proper to eat his dead father's body, and a Callatian raised among the Greeks would find the practice repulsive. The general moral cognitive habits that are selected for in genetic evolution are the foundation of the moral particulars we see in practice, especially the tendency to align one's behavior with others as a means of coordinating society and enabling cooperation. Those cognitive habits are functional insofar as they enable more cohesive groups to out-compete less cohesive groups.
Attano is correct that we can't see this directly in the fossil record. But we can still infer its origins in genetic evolution by looking at non-human animals and young children. There, we see both the tendency to imitate the herd and the foundations of specific moral precepts. Explaining this through "History" (which I understand to be something like memetic evolution) doesn't work, because non-human animals aren't plugged into the same cultural networks, and very young children haven't absorbed the culture yet (and I believe the moral-like actions of young children are similar across cultures, though I am less confident on that point). Evolved cognitive habits also best explain why we see moral systems in all human groups. Though they differ between groups, they are present everywhere, and there is broad agreement within a group.
On top of those cognitive habits is another form of evolution, what I would call memetic (as opposed to genetic) evolution. Our wetware is evolved to harmonize groups, but the resulting harmonies will vary from group to group due to differences of circumstance and happenstance. That explains the "progress" in morality that Prismatic notes: memetic evolution can take place much more rapidly, since its components are reproduced and mutated much more quickly than are genes.
Now, we might call "progress" the process of coming up with moral codes that allow us to form yet larger and more efficient groupings. Or it might be the process of removing the moral noise built into local moralities by happenstance (e.g. rules surrounding specific types of livestock), boiling down to more universal moral beliefs like "don't murder". Progress in a system of functional morality would consist in sets of moral particulars that make the group function better.
Serendipper seems to suggest on this point that population growth may be bad (or perhaps just non-functional) if not coupled with an "opposing force selecting for any particular gene mutation". But population growth is the result of functional morality; bearing offspring who bear more offspring is what it means to have genes selected for. This may be clearer if we compare competing ant hills, and ask what it would mean if one ant hill began to increase in population significantly over the competing hills. More population means that the hill is already relatively successful, because population expansion requires resources, and also that it's likely to be more successful, because more ants working on behalf of the collective means the collective is likely to be stronger. So too with humans: we can read success from population growth, and we would expect population growth to create success (up to a point; the dynamics change when there are no competing groups).
A growing population may, and probably does, require morals to change, but we should expect that: as context changes, including changes in the nature of "the group", different behaviors will be functional. But that our old morals will be a victim of their own success does not mean that they were not successful: a growing population and growing cooperation between group members means that the old rules were functional in their context.
================================================================================
3) How does this apply?
A few people asked about the applicability of this way of framing morality. That line of argument usually isn't so much an objection as an invitation to keep developing the theory, which I am glad to do.
Jakob suggests that morality may need to be naive, in the sense that the inborn sense of morality as an ideal is important to its functioning. That may be the case. But it is also true that in order to dodge a speeding car, we need to forget about special relativity, even though the most accurate description of the car's motion that we can produce requires special relativity. So too might we recognize and describe morality as a system of cognitive habits that support group cohesion, and yet in deciding how we live appeal to more manageable utilitarian or deontological axioms. This goes to Urwrong's point above about descriptions of morality in terms of death rather than life: different descriptions may more effectively achieve the ends of morality, but they do not change the nature of morality as an evolved system that helps perpetuate human groups.
This is related to a question from iambiguous, of how we actually put the idea into practice. I don't think that's easy, but I also don't think it's necessary. I am not here offering an applied morality of daily life, but a moral theory to which such an applied morality should appeal. There are potential subordinate disagreements about e.g. whether brutal honesty or white lies are more effective in creating group cohesion and cooperation; what I am proposing here is the system to which the parties to such disagreements should appeal to make their case.
Serendipper asks how we could determine evolutionary success, and I think the answer is relatively easy in retrospect (though not trivial), and more difficult in prospect. In retrospect, we can simply ask what survived and why. Sometimes we know that groups fell apart for arbitrary reasons, and other times we can readily identify problems within the groups themselves. We can point to moral prohibitions that harmed groups and were abandoned, e.g. prohibitions on sex and usury. We can compare across surviving systems and see what they have in common, e.g. respect for laws and public institutions.
In prospect, we can make similar arguments, drawing from the history of moral evolution to make predictions about what will work going forward. Like any theory about what will happen on a large scale in the future, there's substantial uncertainty, but that doesn't mean we know nothing. We can more readily identify certain options that are very unlikely to be the best way forward.
But again, this uncertainty isn't fatal to the proposition that morality is functional; indeed, it's expected. Much as we don't know for sure which evolved genetic traits will survive, or whether K- or r-strategies are more reliable in a given context, we also do not know which moral approach will guarantee group prosperity. But these observations do not undermine the theory of evolution, and they do not undermine the theory of functional morality.