Functional Morality

Perhaps a shorter objection is better:

  1. how do we know that morality is not a spandrel?
  2. even if it is not, how do we have an obligation to the intent of evolution; in what sense are we beholden to function? Function, evolution, natural selection are not moral agents. What is it that puts us in some contractual commitment to following their intentions? If the argument is not that we are beholden, but rather that X is what morality is for, so we should use it as X, a more determinist connection, then we don’t have to worry about adhering to the function, since whatever we do is a product of evolutionarily-created function. Once I am supposed to follow evolution and use my adaptations, well, how can I fail? And if I fail as an individual, I am still testing for my species, and if my approach was poor it will be weeded out. No harm, no foul.

Thanks for your patience and your excellent replies; they have helped me to develop my thinking on this topic, and I appreciate the critique.

I think there’s a number of levels on which we can define it, which I’ll discuss in a minute, and there’s room to debate the appropriate locus of survival as it relates to morality. But I think that debate is separate from whether morality does relate to survival. Morality exists because of its effect on past generations; it seems clear that there is no morality independent of humans, no moral field that we’re sensing but rather a moral intuition (i.e. innate brain configurations) that influences our behaviors in ways that supported our ancestors in producing us.

But, as promised, some thoughts on ‘survival’:
First, individual gene-line survival means an organism not dying until it produces offspring who are likely to not-die until they produce offspring.
At a group or society level, survival means the group continues to exist. It’s a little vaguer here because the ‘group’ is somewhat amorphous, and there aren’t discrete generations for reproduction, but a constant production and death of constituent members.
Defining the survival of any thing inherits the problems in defining that thing, i.e. the “can’t step in the same river twice” problems. Moreover, where morality functions on the substrate-independent level of our existence (thoughts), it isn’t clear whether the survival it requires is the survival of the substrate or the survival of the programs that run on it. Would morality support the transhumanist idea that we should abandon our bodies and upload our consciousness to silicon? Even if we take functional morality as true, I don’t know that that question is settled.

I do think that morality must operate on the meta-organism, rather than the organism, i.e. society rather than the individual. Morality, as a functional trait, works between individuals, so oughts can only be coherent in relation to and support of the tribe or collective. And I have a sketch of an idea that that entails that we should prefer the pattern over the substrate, since the beast that is society exists continuously as its substrate is born and dies in an endless churn.

But that is a weak and fuzzy position, and in any case beyond the scope here.

Sure, but some morality is just wrong. Anti-natalism specifically is pretty clearly wrong, but that statement rests on the functional morality I’m advancing here.

If what you’re asking for is which morality is the functional morality, I actually think that too is beyond the scope of this discussion. “There is an objective morality that we can discover” is a different claim from “X is the objective morality”. I’m making the former claim here, and arguing that we should use the criteria of functionality to evaluate claims about the latter, but I am not making a specific claim about the latter.

I don’t disagree with this idea or those in the surrounding paragraph, but let me make an analogy.

Once, on a hot summer night, I awoke with intense nausea. I lay in bed feeling wretched for a minute staring at the ceiling, and the nausea passed. I closed my eyes to sleep again and soon again felt intense nausea. I opened my eyes, and shortly the nausea passed again. I did this a few more times as my rational faculties slowly kicked in, and then noticed that my bed was vibrating slightly. A fan that I’d placed at the foot of the bed was touching the bed frame, creating a barely perceptible vibration. I put it together that the nausea was in fact motion sickness. I moved the fan, the bed stopped shaking, and I slept the rest of the night without incident.

The point here is that motion sickness is an evolved response to certain feelings of motion. In particular, our brains are concerned that certain unnatural sensations of motion are actually the result of eating something toxic. The nausea is a response that, if taken to its logical end, will cause us to purge what we’ve eaten, in the hopes that any toxins will be purged with it. In the evolutionary context, that’s a useful response. But we did not evolve in the presence of beds and fans, and so the way we’ve evolved misleads us into thinking we’re ill when in fact we’re perfectly fine.

A similar thing can happen with morality, and understanding morality as a product of evolution, as a mental trait that evolved in a specific context and suited to that context, and not necessarily to this context, may let us “move the fan” of morality, i.e. shed moral claims that are clearly at odds with what morality was meant to do. Given a few thousand years and a few hundred generations of life in this context, we should expect evolution to get us there on its own, but we don’t have the luxury of that.

So, yes, we are this way, there is some information in our emotions and moral intuitions and we should pay attention to them, just as we should take nausea seriously. But we can examine them in other ways at the same time. We can appreciate the ways in which evolution’s result is inadequate to its purpose, and rely on the other results of evolution (rationality and the view from nowhere) to exert a countervailing drive.

You yourself make a few similar points further down, and I basically agree with them: our moral intuitions and emotions are not for nothing, they can be better than our reason for making decisions in certain cases, and we should treat them as real and expected and important in our decision making. But we should also treat them as subject to rational refutation. And when reason and emotion conflict in making statements of fact about the world, reason should prevail (though perhaps you don’t agree with that).

Yes, I think that’s right. But so too are cardiac surgeons deciding not to work with hearts the way we evolved to work with hearts. The project of moral philosophy, as I understand it, must involve some very unusual treatment of moral intuitions, ones that are obscene to our evolved first impression in the way that delivering a baby by C-section is obscene to someone who only understands it as stabbing a pregnant woman in the belly.

And as I said above in reply to Jakob, there’s no contradiction in the most true description of a phenomenon being nigh useless in our everyday lives. In the game of go, there is a saying, “If you want to go left, go right”, meaning that going directly for the play we want is not the best way of achieving the play we want. But that is not to say that moving left is wrong, just that moving right is the best way to achieve moving left. So too, being a naive consequentialist may be the best way to achieve the functional ends I advocate here. Still, though, I would argue that the functional ends are the ends, and if it could be shown that a different naive system better achieved them, it would be damning of naive consequentialism.

There may be an argument that functional morality is actively counterproductive to its own stated ends. I don’t know what to make of self-defeating truths, but I don’t think functional morality is one. I see no tension between understanding and discussing functional morality and still practicing more common moral systems as rules of thumb on a day-to-day basis.

I don’t think this problem is unique to a rationally-grounded moral system. Emotions too can be a basis for hubris; emotion-based religions are some of the most pompous and unjustifiably self-assured systems of belief that we’ve ever seen. We should not be overconfident.

But reason’s advantage is that it scales: we can use reason to analyse other modes of thought, and even reason itself. Through reason, we can identify situations where relying on intuition is better than relying on deliberate reflection. We can’t do that emotionally. We can rationally examine emotion, but while we can feel things about reason, we can’t get very far with it.

How do we know any evolved trait isn’t a spandrel? We can look at whether morality influences reproductive success, whether it imposes costs that would require a benefit to offset, whether it’s been selected against in isolated populations, etc. I think all these things suggest that it isn’t a spandrel, that it’s been selected for as part of an evolved reproductive strategy:

  • Amoral people tend to suffer socially. Psychopaths can and do succeed, but they depend on the moral behavior of others, and they are also employing a high risk, high reward strategy (many psychopaths are killed or imprisoned, but many others are managers or politicians).
  • Morality entails evolutionary costs, e.g. forgoing actions with clear immediate reproductive benefits like theft of resources, murder of rivals, or rape of fertile women. That suggests that it has attendant benefits, that forgoing these provides a reproductive benefit in the long term, e.g. reciprocal giving and social support, not being murdered, and better mating opportunities.
  • To my knowledge, morality exists in all human populations, including isolated populations. The isolation may not have been sufficiently long to permit evolutionary divergence, but given the presence of psychopaths it seems that the genes for amorality were there to be selected for and haven’t come to dominate any society.

Consider the example of motion sickness, or of sugar, or of any other evolved predisposition that we can rationally understand to be actively counter to the reasons for which it evolved. We have intuitions that motion not caused by moving our own limbs means we’ve been poisoned and need to purge, and that sugar and fat are good and we should eat as much of them as possible. But we know that these are false, that our evolved tendencies are misleading us, and they are misleading us because of the context in which we evolved, in which such motion did mean poison and sugar was a precious resource.

So too did morality evolve in that context, ought-ness is derived from our evolutionary past, and we can look at it in that light. Without reference to its evolved purpose, it has no meaning. If we take the position that the evolved meaning of morality is not relevant, it seems the only alternative is moral nihilism.


This is one of the areas I was probing around, because I think it may be very hard for many adherents of functional morality to stay consistent. Perhaps not you. If survival means genetically related progeny having genetically related progeny - IOW sustaining genetically related individuals through time - then transhumanism should be considered bad or evil, at least in the case of strong transhumanism, where better substrates for consciousness and existence are created and homo sapiens, as a genetic organism (and physically in general, outside the nucleus of cells also), are no longer present. We will have replaced ourselves with something else. At least in terms of genetic material.

But even setting aside the transhumanism issue: if survival is the guide to morality, the measure of it, it seems to me we can have all sorts of odd scenarios. We freeze our DNA and send it out into the universe with instructions for use, plus an AI to help us seed the first good planet… Once we’ve founded another civilization somewhere, or the AI gets us going on, say, ten worlds, it seems like we would then be free to do what we want. As long as survival is happening elsewhere, I have no need for morals. We have ensured continuation; now, here, we can do what we want. Or we could set up a world where the AI combines DNA to make 1000 humans. Their genitals, after puberty, are harvested for DNA, and they are all put down. The AI waits a thousand years and repeats: mixes new DNA, new batch of humans, new cull, repeat. This prevents mass self-destruction events, and the large gaps between generations 1) slow down changes, so the DNA really stays close to earlier generations longer, and 2) create longer survival. IOW there may well be an incredibly efficient way of making our DNA survive - and occasionally create humans - for vast eons, which at the same time entails an existence that is repulsive to most people.

Survival, and not much else.

I didn’t say enough. Antinatalism is one of the moralities that evolution has given rise to. Right now it is a minority position. Perhaps it will become the majority or power morality. Then this is what evolution has led to. It might lead to our extinction, but evolution led to it. If I, coming from a now-minority position - before the anti-natalists sterilize all of us - push for my morality, which includes life, I must wonder, as the anti-natalists take over, whether I am on the wrong side - whether evolution has led to antinatalist morality and the anti-natalists win. Whatever happens would be functional; it might just not be what we want functional to be. IOW it was functional that dinosaurs became extinct. Evolution and natural selection select for whatever fits, and what fits is relative to whatever else exists: other species, the weather, etc. I don’t really see where I should do anything other than prioritize what I want, and let natural selection see to the outcomes. Just like every other individual in other species. Because once I follow my interests and desires, including mammalian empathy, I am living out what I have been selected to be like. Whatever this leads to is functional, though it may not include my kind.

This might seem obvious: if it is survival of our or ‘our’ genes, and these shaping new generations of ‘us’ or us, then some of transhumanism is wrong and I should oppose it, since it will replace our genes and us.

On the other hand if I am a functionalist, natural selection supporter, then if transhumanism wins, then that’s fine. I do not need to think in terms of the best morality or heuristics. We will do what we do and it will be part of natural selection - I mean, unless I have an emotional attachment to humans… :smiley:

IOW there is some weird mix of selfishness - I should support functionalism as far as it furthers my species (though not me in particular) - and of following the intended function of morality… however, natural selection is not itself a respecter of species.

I cannot in any way avoid fitting in with evolution as a whole, so why should I focus in on one selfish part, where I identify with future generations of my DNA? It seems to me that must have an emotional component. But if we strip away the emotional AND suggest one should take a functionalist point of view, well, there are no worries.

Natural selection will continue whatever I do.

Let’s take this last bit first. I think it is complicated. First, immediately, I want to stress that there is always the option of delaying judgment, or agnosticism. Reason is not infallible - and is often guided by emotions and assumptions we are aware of, and often also by emotions and assumptions we are not aware of. So when in a real contradiction between emotions and reason, we might, especially if we do not seem to immediately lose anything, 1) delay choice, or 2) make a choice but keep an agnosticism about whether it was the right one. 3) It depends for me on what reason, whose reason, and for that matter whose emotions/intuition. 4) A problem with the choice is that emotions and reason are mixed. It is muddy in there. Reason depends on emotions, especially when we are talking about how humans should interact - IOW what seems reasonable will include emotional reactions to consequences, prioritizing inside reasoning itself, and the ability to evaluate one’s reasoning (such as: have I looked at the evidence long enough? - which is evaluated with emotional qualia; see Damasio). And of course emotions are often affected strongly by memes, by what is presented as reasonable, by assumptions in society and culture, etc. When someone claims to be on the pure-reason side of an argument, I immediately get wary. I just don’t meet any people without motives, emotions, biases and so on. If we are trying to determine the height of a tree, OK, I may dismiss emotion-based objections after the rational team has used three different measuring devices and come to the same measurement, despite it seeming off to the emotional team. But when dealing with how we should treat each other…

In a sense what I am saying is that reason is often used as a positive term. IOW it represents logical work with rationally chosen facts, gathered in X positive types of ways… etc. But actually reasoning is a cognitive style. A neutral one. It can be a mess, it can be well done. It may have false assumptions that will take decades to recognize but are obviously false to others. It is just a way to reach a conclusion. Some do it very well. Some do not.

The reasoned idea within science was that animals did not have emotions, motivations, desires, etc. They were considered mechanical, with another significant group of scientists thinking that any such claims were anthropomorphizing, unprovable, and confused in form, though these mainstream scientists were sometimes technically agnostic. That was the mainstream position until the 70s, and it was dangerous for a biologist to go against that position in any official way: articles, public statements, etc. People holding the opposite opinion were considered irrational, projecting, anthropomorphizing, and following their emotions.

Now of course this example is just an example. It does not prove that reason and emotion/intuition are equally good at getting to the truth or that reason is worse.

I bring it up because, basically, what appears to be reason need not be good. It is just descriptive, without valence. Certain parts of the mind are more in charge, and they have their tool box. Maybe it is good use of tools, maybe not. It is an attempt by the mind to reach conclusions in a fastidious manner, based often primarily on word-based arguments. This isn’t always the best way to figure something out. And underneath the reasoning, the emotional world in the mind is seething.

OK, let’s look at the motion sickness. I’ll keep this one short. It’s a good example on your part and I do not think I can or would want to fully counter it. But let me partially counter it. In the case of morals, we are talking about what it is like to live, given who we are. If we are going to say certain behaviors are not good, then one such behavior might be putting a fan up against someone’s bed. Now this will come off as silly, but my point is that despite the fact that the person who gets nauseous because of this is actually having an inappropriate reaction - because fans and beds can give one an experience that resembles when one needed to throw up - it still makes the guy in the bed have a bad time, even if he ‘shouldn’t’.

So here we are, after this long evolutionary process, reacting emotionally to a lot of stuff. Right, wrong, confused, misplaced emotions… quite possibly. Emotions that perhaps worked to protect us but now react to harmless things. But we have those emotions. We react these ways.

If we do not consider the emotional reactions to the moral act and to the consequences of any moral rule, then we are ignoring a large part of what is real. IOW if we just focus on the survival of our genes creating more gene bearers we are removing a large part of the real from our calculations.

  1. this may have serious consequences regarding our survival
  2. but regardless I think it is wrongheaded even if it did not
  3. I question our ability to know when it is simply some vestige of a no-longer-relevant reaction, or a deeper insight. I see reason as often being hubristic when it comes to these evaluations.

And to be very clear, I am not arguing that we should do away with rationality. I am pro-combination. So when I point out the problems with rationality, I am not saying emotions have no problems, and we should switch to just that.

The cardiac surgeon, in all likelihood, is working on someone who smoked or overate and did not move around very much. And if they did, then the cardiac surgeon is adding a way of working on top of what evolution set us out to do. But even more importantly: if we are to take from evolution what morality’s function is, why would we then ignore what evolution has given us? So it is that juncture I am focused on. I don’t have problems with technology per se. IOW my argument is not “hey, that’s not natural” - with all the problems inherent in that - but rather the following:

I note that you think our morality should be based on its function in evolution. Evolution is given a kind of authority. Then when it comes to how our evolved emotions deal with morals, we must modify that. If we are appealing to authority in evolution, why stop at deciding it is about survival?

They may be pompous and unjustifiably self-assured systems of belief, but the jury is still out on whether they 1) added to both survival AND better lives or 2) whether they still are better than secular humanism, say. Testing such things is not easy.

Certainly you are correct that emotions can be problematic. But I am not arguing that there should be no rationality - and even in religions and folk morality, in fact in any moral system I have seen, there is a mixture of reasoning and emotion, consequentialism and deontology. I am arguing for the mix, and that the mix is generally unavoidable in any case. I think the cane-toad type of hubris in rational solutions often comes about because we think complicated situations can be tracked using frontal lobe skills alone, and that evolution made a boo-boo when giving us tendencies to use both emotion and reason. We cut out the emotion. I also think there are skills in emotion/intuition - or better put, some people are more skilled than others, just as in reasoning.

I disagree. I make very rapid decisions all the time about whether to go with intuition or to pause and analyze and reflect. Actually, I think nearly the opposite of what you said: we cannot make such decisions without intuition. Certainly reasoning can come in also. But reasoning can never itself decide when it should be set in motion, when it has done enough work, when it is satisfied it listened to the right experts, when it is satisfied with its use of semantics in its internal arguments.
Rationality AS LIVED, as opposed to on paper, is filled with micro-intuitions; it is generally initiated by a macro-intuition, and one knows when to stop via yet another intuition. And there are qualia at all stages.
When we imagine reasoning we often imagine it as if it is happening on paper and the words have simple relations to things.

But actually it is not happening on paper, even when written and read, but in minds, and in the process there is an ongoing set of intuitions.

But, again, importantly, I am not for throwing out reason. I just think we should not throw out emotions/intuition AND further, I don’t think we can anyway.

Actually, I think if we go into the phenomenology of checking out an argument, we will find that intuition rings a bell, and then we zoom in to find out why. Especially in tricky arguments.

And to jump: I imagine a kind of traditional male-female argument. The man cleverly explains why whatever he did was actually OK, given her behavior, given some factor in his life, given some complicated reasoning that seems right to all parties. And the woman is sitting there thinking ‘BS’.

I see the way we are raised as tending to work against integrating the various approaches. Or to put this another way: we tend to officially prioritize intuition or rationality, emotions or calm, word-based attempts at logical arguments. Underneath, I think, each approach is using facets of the other, but because of how we are trained, we feel we need to identify with one. We also tend to want to hide the emotions underneath our positions and the intuitions in our metaphysics, because actually all sides tend to present themselves as rational.

I think this leads to adults who are damaged, and this is only increasing as pharma and psychiatry pathologize more and more of the ways we limbically react, and in modernity in general. So we think we have to choose one approach, when in fact homo sapiens have them tied together, so we might as well practice that, get to know our reactions, and couple the two approaches.

That has been selected for. Maybe it won’t last. Maybe we will be replaced by AIs that have no emotion. I can’t see where a not radically fragmented homo sapien would not find that horrifying. Problem is, humans are radically fragmented. God, I hope I got that triple negative right.

I’d need to see the science. I am not even sure this is the case. If you are chaotically amoral, well, that leads to a lot of bad reactions - unless you are some kind of autocrat. So in your home, in your company, in your country, if you are the boss, you can probably get away with a lot, and in fact those guys often have a lot of kids, all over the place. Hence they are evolutionarily effective. But more pragmatic amoral people - I see no reason for them not to thrive. Maybe, just maybe, less in modern society, and maybe less in tribal society. I think they have many benefits in between, even the chaotic ones. In fact a lot of the names in history books probably had amoral tendencies… and quite a few kids.

I do wonder how they do on creating babies however.

If it is in all populations it might be neutral or only slightly negative functionally. A byproduct of some other cognitive capacities that helped us. Again, testing this hypothesis is hard.

I think we have problems on the diet end of your justification, not because of faulty desires, but rather because of cultural problems. I think sugar is a drug and we use it to self-medicate. A psychotropic drug. You know that old thing about rats triggering the release of cocaine, or stimulating the pleasure center of the brain? The idea that if we could, we would just destroy ourselves? Well, they redid that experiment but gave the rats complicated, interesting environments, and very few got addicted. And I can imagine that even the nice complicated homes they gave the rats probably had less of the smells that rats’ bodies expect, and lacked the nuance there is in what was once the original environment of rats. I think, just as in the cardiac surgeon example, we are using culture to fight nature that is having a problem because of culture.

In the ‘are there objective morals?’ sense, I am certainly a moral nihilist. On the other hand, this does not mean we need to stop determining what we want. We can ignore whatever we think evolution intended and decide what we want. No easy task, of course, given our wants, natural and culturally created - often created by those with great power, in their interests. But given that as social mammals we have self-interest but also empathy and tend to collaborate, there is room for desire to create what may not be morals but heuristics. Desire (and emotions and intuition) in collaboration with reason.

That’s where we are now, with what we are and have. Ironically ignoring whatever evolution intended or selected for might in fact be the best strategy for survival, though I am not saying it is, nor do I know a way to test that. However, I think there are reasons to think it might be a better route.

Focusing only on survival, and the survival of genes, deprioritizes a lot of the things that make us like life. I think we will soon, if not already, have solutions for the survival of genes that do not need us to enjoy life at all. Forget the panopticon Amazon workplace; I mean complete dystopias that, on the other hand, keep those genes chugging along. Of course, we might opt out if that is where logic leads us, not feeling at home in the efficient world we created, with one prime criterion for the goal and a reason trying to be devoid of emotion and intuition as the means of working towards this final solution.

I see a few central disagreements/differing definitions running through your replies, so I’m going to take your points a bit out of order. I apologize for the wall of text, I hope it serves to bring the separate threads of our conversation back toward the main point.

First, returning to the meaning of “functional” and the role it plays in my argument. You say that “it was functional that dinosaurs became extinct”, and I think this suggests that we are using the term “functional” very differently. If you mean that it was functional for humans that dinosaurs became extinct, I agree. But it certainly wasn’t functional for dinosaurs, since most dinosaurs’ genetic and social lines ended. We could compare the former claim to a claim that it would be functional for modern humans to make mosquitoes go extinct, and thus morally proper. That claim doesn’t seem far-fetched to me.

But I feel like that line is a bit of a non-sequitur, since while the extinction of a species may be functional, it is only distantly moral. The way I’m using functional here is just this: morality exists because individuals with “moral machinery” survived and those without it perished. The evidence for this is the universal existence of morality in human groups, and the near universal existence of morality in individuals. It is functional for those who have it in the sense that the survival of whatever produces it (be it genes or memes or something else) was a result of how morality shaped behavior.

The role this plays in my argument about morality is that we can’t abstract morality out of its evolutionary context. We can look at morality as a phenomenon in the world, observe it empirically, ask what it does and why it exists, and in so doing discover what it means to say that one ought or ought not do X. The only objective reference of those terms is that evolutionary origin, the role morality played and why it exists. The only recourse for morality is thus to its function: it exists because it improved the odds of survival of the individuals and groups who shared the trait. (I grant your point with respect to other reasons why people act, but I would contend that those aren’t morality. Moreover, I think ‘moral nihilism plus heuristics for a subjectively meaningful life’ is compatible with my argument here; moral nihilism is a viable out. One can take the position that “morality is just something that helped apes survive, therefore there’s not really any objective truth of morality”. That out seems less appealing to me than the conclusion it’s trying to avoid, but I’m not sure my arguments here address that choice. At one point you seem to essentially ask why you should do what you are morally obligated to do, and for that I make no attempt at an answer.)

I think this largely addresses your points about antinatalism and alternative forms of survival. My argument here is, again, not for or against any specific moral system, but rather for a meta moral system that says that moral systems should be evaluated on their likelihood of leading to survival, because that’s the only meaningful way to evaluate whether something is moral.

But there is a predictive element to this position: we’re talking about a prospective best-guess about the effect a system will have on survival. It is absolutely true that, if antinatalism ends up leading to the long term survival of humanity, we should score it as functional. You are right to point out that, ultimately, the truest test of functionality is what actually ends up succeeding, but that doesn’t seem to favor any particular hypothesis about which moralities are actually functional in prospect. In the same way, I could say “I think that by reading these monkey bones, I can predict the stock market, and you have to admit that the truest test of what method of market prediction works the best is what method actually does predict the stock market; if my monkey bones method actually predicts the stock market, then you would have to admit that it was the best method.” And so I would, but that doesn’t say anything about whether the monkey bones method actually does work, and we still have every reason to think that it is a bad way to predict the stock market.

Similarly, in prospect, we have every reason to think that antinatalism is not a good moral system under the metric of survival. We can come up with any number of scenarios whereby a moral philosophy that literally requires the slow extinction of the human race actually ends up preserving the human race better than other moral systems (e.g. maybe if everyone stops having children, they spend more time extending lifespans, copying consenting adults at the atomic level, and conquering the stars etc. etc.). But that seems unlikely, given what we know about the present state of human longevity and atomic-level copying. Still, someone can coherently say that we should be antinatalists because that’s the moral system that will best achieve functional aims; they may be making a mistake of fact, making a bad prediction about what will work, but that’s an empirical question about the future, and the truest test is indeed the arrival of the future.

I do think you raise an important distinction that I need to make: we observe morality, and I argue we should conclude certain things from it; similarly we observe antinatalism, so why shouldn’t we conclude similar things from it? If antinatalism can be like eating too much sugar, why can’t the same be said of morality itself? To this I point to my comments on whether morality is a spandrel. Antinatalism doesn’t seem to pass these tests the way morality does: it’s strongly negatively associated with reproduction, it’s certainly costly (thus the negative selection), and it tends to die out as quickly as it arises: it’s been proposed many times in many places and has been rejected (likely because everyone who practiced it died without raising any children to believe it).
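
To illustrate the selection dynamic (not as evidence, just as arithmetic): here is a toy simulation of a belief that suppresses reproduction and is passed mainly from parent to child. Every number in it is an invented assumption for illustration only.

```python
# Toy model of a vertically transmitted, reproduction-suppressing belief.
# All parameters are invented assumptions, not empirical estimates.

def simulate(generations=15, pop=10_000, an_share=0.10,
             base_fertility=2.1, an_fertility=0.2, conversion=0.01):
    """Track the population share of 'antinatalists' when the belief
    passes mostly parent-to-child, with a little horizontal conversion."""
    an = pop * an_share
    rest = pop - an
    for gen in range(1, generations + 1):
        an = an * an_fertility / 2        # offspring per couple
        rest = rest * base_fertility / 2
        converts = rest * conversion      # small horizontal spread each generation
        an, rest = an + converts, rest - converts
        print(f"gen {gen:2d}: antinatalist share = {an / (an + rest):.2%}")

simulate()
```

Under these made-up numbers the share collapses from 10% to around 1% within a few generations, a floor sustained only by conversion - the “dies out as quickly as it arises” pattern. None of this settles the empirical question; it only shows that the claimed mechanism is coherent.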

I don’t think there’s any tension in saying that certain traits that exist in an evolved organism are contingent and haven’t been selected for, and I think you accept that based on your question about spandrels. The existence of particular moral beliefs doesn’t suggest that those beliefs have been selected for; the near-universality of some kind of moral belief in all humans does suggest that the underlying machinery has been selected for, i.e. has conveyed some survival benefit on the people whose genes express that machinery.

I think we can say similar things about your proposed reductios (transhumanism and AI breeding a new batch of humans for one generation every x-thousand years). It may be that those methods produce survival better, and that could be shown by someone trying those systems and actually surviving. But regular reproduction and genetic evolution have proved a pretty effective means of survival, so it’s reasonable to think that they will more effectively continue our survival than exotic systems like the AI harvesting, breeding, and euthanizing generations of humans. Moreover, if what we want to see survive is society, then a bunch of DNA held in stasis doesn’t well achieve that goal (this goes to what particular form of survival is best, which I don’t think is answered by functional morality, nor does it need to be answered for the purposes of making a case for functional morality).

The ‘seeds to the stars’ reductio raises the open question of at what point we can rest in our pursuit of moral action. In most moral systems, it’s a good thing to save someone’s life, but once someone has saved someone’s life, they aren’t absolved of moral responsibilities. Even after saving a million lives, we can continue to do good. As a matter of subjective experience, we may decide we’ve done enough and no longer care, but it would seem a strange moral system in which the morality of an act actually changes based on the past moral acts of the actor (I can’t think of any that do this expressly, at least if an act is taken ceteris paribus).

But I take your more general point here: functional morality probably commits us to accepting some odd scenarios. I’m OK with that. Odd scenarios are philosophy’s bread and butter. Earlier I alluded to not being able to step in the same river twice, a claim that sounds odd upon first encounter but is normal and mundane in philosophy. And I would expect the truth to be somewhat unintuitive, given the same limits on intuition that I’ve been relying on in this thread: we have the brains and intuitions of savanna apes, and our intuitions are ill-suited to space travel.

I don’t mean to be too dismissive of oddness as a weakness; I do think intuition is often useful as an indicator of subtle logical mistakes. But I also think our oddness tolerance should be properly calibrated: even given that we’re committed to the positions you propose, the scenarios themselves are so odd that any moral conclusions about them will feel odd. If functional morality gets odd at the margins, so does every moral system I’ve ever seen. We have poor moral intuitions about AI because we have never actually encountered one. In everyday situations, functional morality will work out to support many naive moral intuitions, and will approximate many sophisticated consequentialist and deontological systems. Are there any everyday situations where functional morality gets it wildly wrong?

To your points re: reason vs. emotion, I admit I’m losing the thread of how this impacts our disagreement. For one thing, I think we basically agree on the role of both emotion and reason, i.e. that they are both useful and valuable, and both can be flawed. But more importantly, I don’t think conceding that emotion sometimes provides important insights and we should be wary of too easily dismissing emotional/intuitive reactions as merely vestigial – I don’t think conceding that undermines my point that our moral intuitions can be rationally compared against what we know about how moral intuition arose in humans and what purpose it served. The way we know if a moral intuition is ‘right’ or ‘wrong’ is whether it fulfills its role in tending to increase the odds of survival or not. There is an objective reality, and our reason and emotion are both useful in helping us discover it, but they should arrive at the same answers because they are both discovering the same reality.

(I think I’ve addressed all your main lines of argument, but if I missed any please let me know, particularly if my omission seems calculated to avoid a particularly devastating point.)

Carleas, if we ever cease to exist, we can’t exist.

Morality is not about survival in some form, it’s about the quality of it.

I would say that my argument here is exactly counter to this. A being that suffers through life and reproduces will pass on its pattern, a being that enjoys life and fails to reproduce will not. We are the descendants of beings who reproduced, regardless of any subjective pleasure or pain they felt in getting there.

Of course, pleasure and pain are tuned to the same end, so the subjective experience of a life that leads to reproduction is likely to be positive. Safety, nutrients, and reproduction all feel good because beings that pursue those experiences are more likely to survive and reproduce.

I have to admit I am too lazy to go back and understand the points you are responding to. I will just respond below to points I have opinions about now - reactions that may even contradict things I’ve said before.

I am not sure we hadn’t abstracted it before we had evolutionary theory; we certainly had morality outside of that context, and even do now. IOW morality often goes against, at least so it might seem, my own benefits in relation to natural selection as an individual, and at the species level it is not based on this consideration, at least consciously. Let’s for the sake of argument accept that morality was selected for. OK. And in what form? Well, it hasn’t, generally, been in the form “whatever leads to survival is the Good”. What got selected for was a species that framed moral issues in other ways. So if we want to respect natural selection, we would continue with that unless we have evidence that this is not working.
IOW the trait that got selected for was not Morality = survival.
What got selected for was trying to be Good, often in social ways, fitting ideals which we did not directly think of in terms of survival. Now, underneath, this may have been doing just that. But precisely for that reason we have no need to now consciously think about survival - perhaps having that as the heuristic is less effective, for example.

Perhaps I’ll reword in response to this: consider the possibility that having moralities that go beyond, that do not focus just on survival or even mainly on survival, is vastly more effective. That we have other ideals leads to more cohesion or whatever, as one possible side effect.

I do feel there is a conscious/unconscious, intuition-vs-logic split in here, or between us. Not that I can nicely sum this up in words.

Let’s say that romance is really just pheromones and dopamine-driven altered states. Let’s say that this is actually the best description. It still might radically damage humans to think that way.

I don’t want to assume that my opposition is solely a noble-lie argument either. That excess by which our morality goes beyond survival - I grant that meaning in and of itself. I am not beholden to evolution. And that is what evolution has led to in any case.

IOW I am not sure why I have an obligation to go against my nature and view morality or preferred social relations as to be evaluated only in terms of survival. My reasons for resisting this are personal, but I could say that I have been selected to not be like that, so would I not be betraying selection to start viewing things in the way you suggest?

It’s a bit like how feelings might guide a golf-swing adjustment, even with vague, fluffy terms as heuristics, rather than some set of formulas based on calculus and some of Newton’s laws. You may be trying to get us to use the wrong part of our brains to do something.

I think we need a definition of survival. Is it the continuation of homo sapien genes? Anything beyond that?

Antinatalism combined with cloning and whatever the rich decide are the best techs to keep their lives long would certainly seem to have a good chance. I mentioned earlier some dystopian scenarios that might very well have great survival outlooks. I think it would be odd to not immediately come in with quality of life, fairness, justice type moral objections, even though the truth is the survival of homo sapiens might best served by some horror show.

If it turns out that, by the AIs’ assessment, the best chance for the survival of homo sapiens is to eliminate 99% of the population and do some cryogenic alternating with short periods of waking for procreation, while the AIs take care of security and safety, must we just knuckle under and choose that?

And I don’t think that is a loopy suggestion. I actually think that some rather dystopic solution would be most likely to extend the survival of homo sapien genes and lives.

Ah, now I see you respond to this…

Our modes of choosing partners and being social have changed a lot over time. I see no reason to assume that further changes will not take place. We can control much more. Food production went from hunter-gatherer to ancient agriculture to modern agriculture to GM agriculture with crops that cannot breed. Why assume that the ‘best’ method for human production will not radically shift? And it’s not like they are not working toward that out there.

Here you mention wanting to see society survive. There would be a society; it would just be different. But further, why should evolution care about the specifics of human interaction, if the point is homo sapien survival? It seems to me you are smuggling in values other than survival in that word ‘society’.

I would think it will. I would guess that it is already in place, in many ways, in the business world, and that Amazon could use functional morality to justify its panopticon, radically efficiency-focused, horrific workplaces - that words like dignity, sense of self, and fairness no longer have any priority. Now, a sophisticated functional morality, one that looks way into the future, might find that such business practices somehow reduce survivability… but…

  1. maybe it is better to start from other criteria - even if they all somehow boil down to survivability, which I doubt
  2. I suspect that some other nightmares will be just peachy under functional morality and in any case we will have no tools to fight against them. We would then have to demonstrate not that many of these things we value are damaged but rather that this process damages survivability, perhaps decades or hundreds of years in the future.

If we limit morality to survivability, I suspect that we will limit our ability to protect our experiences against those with power.

So Karp, you see morality as a form of leverage (human) nature has on its most powerful parts?
Not sure if I summarize you right. I like the idea I come away with in any case.

Also I agree/think that evolution can not be explained by using the term evolution.

Hi Jacob!

I think you got it partly right. The other ‘part’, if one can call it that, relates will and space-time to the equation - particularly the spatial component, for the sake of this argument.

For Ambig’s benefit, this linear reduction may be exemplified by the following illustration.

The teeter-totter places near-absolute weight on one end, with its distance to the fulcrum approaching 0.
That’s balanced against a weight approaching zero mass, but its distance from the fulcrum approaches absolute infinite extension.

Which will have the most power? This is difficult hypothetically, but I would side with maximum extension, hence space/time.
I would like to call this the David & Goliath presumption.

Now the calculation cannot be done at present, but perhaps, with time approaching infinity, some computer may come up with more than a hypothetical.
So power as defined has not yet come to an estimable definition.
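
For what it’s worth, the teeter-totter picture can be written down using the standard law of the lever; this formalization and its symbols are my gloss, not the poster’s:

```latex
% Law of the lever: turning power (torque) = weight times distance from the fulcrum
\tau_1 = m_1\, g\, d_1 \quad (m_1 \to \infty,\ d_1 \to 0)
\qquad \text{vs.} \qquad
\tau_2 = m_2\, g\, d_2 \quad (m_2 \to 0,\ d_2 \to \infty)
```

Both sides are indeterminate forms of type $\infty \cdot 0$, so which side “has the most power” depends entirely on how fast each factor grows or shrinks (if $m_1 = 1/d_1^2$ then $\tau_1 \to \infty$, but if $m_1 = 1/d_1$ then $\tau_1 \to g$), which is consistent with the admission above that the calculation cannot be done as stated.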

I’m completely with the opening post.

I remember Nietzsche commenting on how primitive moral systems developed according to what survivors were already doing - or more specifically on what they were doing that their rivals were not doing. It’s very revealing that the derivation of the word “moral” comes from “customs” - i.e. what people are accustomed to doing already.

I find it interesting that all behaviour that we see as negative, in both others and ourselves, is actually just an inevitable by-product of behavioural tendencies that happen to get selected for. For example: female insecurity is the same instinct that compels them to try to look physically attractive, and male “banter” is the fishing for insecurity in other males so we know who to exclude and who to rely on in team situations, to best dominate when threat arises. They might not be seen as morally admirable tendencies, but not surprisingly they’re the ones that last, because they’re (whether intentionally or not) optimal.

Morals are just unintentional (mostly) and inevitable game theory.
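
To make the game-theory gloss concrete, here is a minimal sketch in the Axelrod spirit (my illustration, not anything from the thread): an iterated prisoner’s dilemma where a reciprocating strategy - tit for tat, standing in for a moral custom like reciprocity - plays against unconditional defection. Payoffs are the standard textbook values.

```python
# Minimal iterated prisoner's dilemma: reciprocity vs. always-defect.
# Payoffs are the conventional textbook values (T=5, R=3, P=1, S=0).

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    """Return total payoffs for two strategies over repeated play."""
    hist_a, hist_b = [], []   # each strategy is shown the *other's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(tit_for_tat, always_defect))    # (99, 104)
print(play(always_defect, always_defect))  # (100, 100)
```

Reciprocators earn 300 against each other while defectors earn only 100 against each other, so in a population of repeated encounters the “moral” custom outscores defection without anyone intending it - which is roughly the claim above, in miniature.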

If not based on intention, then it is structural, but not necessarily based on game theory - maybe only contingently?

Totally understandable, given that our conversation has become spaced out. I don’t think it harms the discussion, and your response was still good and well appreciated.

I’d like to start with something you say halfway through, because it’s a nice analogy and touches on many of your other points:

This is a good analogy because it distinguishes our positions well. My attempt here is to provide “the best description” of morality. You say that you are “not sure why [you] have an obligation to go against [your] nature and view morality or preferred social relations as to be evaluated only in terms of survival”, and my response is that that is just what it means to have a moral obligation. Insofar as “morality” means anything, insofar as it means anything that one “ought” to do something, it means that doing that thing will advance survival, for oneself or one’s tribe.

And I agree with your observation that “[w]hat got selected for was a species that framed moral issues in other ways”. So too was flavor selected for rather than nutrients, and instinctive fear rather than insect biology, and pleasure and pain rather than reproduction and anatomy. And just as we have used the study of nutrition to recognize that some things that taste good are nonetheless harmful, and that some insects that scare us are nonetheless harmless, and that some things that feel good are bad and others that hurt are good, so too can we decide to overcome our moral intuitions in favor of an explicit morality that, while devoid of romance, is empirically rigorous.

I’ve been reluctant to narrowly define survival for two reasons:

  1. I don’t think it matters. If there’s a moral instinct, it comes from where all of our innate traits come from: a heritable pattern of thought and behavior that led our ancestors to survive. Regardless of how much of that is genetic, how much is culture, how much it operates on the individual and how much on the group, regardless of the many particulars of what such survival may entail, inherited traits can only be inherited where they lead to there being an heir to inherit them.

  2. I am unsure of where morality functions, i.e. what thing’s survival it’s influencing. On the one hand, certain parts of the inheritance must be genetic, but I am unsure how much. I am unsure, for example, whether a group of people left to their own devices would benefit from the inherited mental machinery that, when it develops within a culture, leads to a positive survival impact. If the group itself is part of the context for which the moral machinery of the brain evolved, then it’s not just the genes that produce that machinery that matter, the group itself also matters. I tend to think that’s the case (thus my concern that the “society” continue, and not just genetic humans), but I’m uncertain about it. That uncertainty leads me to want to leave this as an open question. Does this undermine point #1?

First, I’ll note that this is a bit question begging. A solution is dystopic in part for violating some moral principle, so to some extent this smuggles in intuitive morality as a given.

Second, as I said above, I think intuitive morality will fail us more and more frequently as time goes on. To use a near-term example that you bring up: in the past, we just didn’t know what genetic pairings would produce good or bad outcomes, so we left it to chance and instinct. But chance and instinct frequently misled us, and we ended up with immense suffering over the course of history as a result. Pre-modern societies just killed children who didn’t develop right, and many women died in childbirth as the result of genetic abnormalities in their developing babies. So if we suggest that greater deliberation or intervention in genetic pairings going forward is somehow immoral, we need to weigh that against the immense suffering that still happens as a result of leaving things to chance.

I’m not arguing in favor of such intervention, rather I mean to say that merely knowing, merely developing the ability to predict genetic outcomes in advance requires us to make a moral decision that we never had to make before. It may be creepy to centrally control or regulate genetic pairing, but if we know that (a + b) will create resource hungry and burdensome locus of suffering, and (a + c) will create a brilliant and productive self-actualized person who will spread happiness wherever she goes, there is at least as strong an argument for the creepiness of not intervening. (Note that I don’t use “creepy” in the pejorative sense here, I intend it as shorthand for the intuitive moral reaction and, subjectively, I think it captures what intuitive moral rejection feels like).

So, I reiterate the point I made above: our intuitions are bad at the future, because they are the intuitions of savanna apes, and not of globe-spanning manipulators of genetic inheritance. We will need more than intuition to make sense of these questions.

My response is as you would expect: I think those things aren’t particularly functional, since a large underclass of people without “dignity, sense of self, fairness”, etc. leads to things like the current collapse of global institutions (and, relevant to my discussion of the meaning of ‘survival’ above, institutions are beneficial to group survival). I think that’s always likely to be the case. Moreover, using fully functional humans, whose brains are some of the most powerful computers available, to do busywork is a waste of resources. I expect a society optimized to plug in all of humanity will be both functional and generally pleasant for its inhabitants.

But functional morality is ultimately a meta-ethical system, it admits of a lot of debate about what specific moral positions are permitted or best achieve its goals. I think most nightmare scenarios are likely to fail to optimize functionality, or for all moral systems to struggle with them equally (see the discussion of the consequences of genetic intervention above).

After posting what’s below and then mulling, I think I can boil down my two objections and be clearer than my earlier groping.

  1. If we tell chess learners and even top players that every move should be evaluated only in terms of not getting checkmated, they will likely do less well than players who have a wider range of heuristics and guidelines (see the sketch after this list). Yes, someday the top quantum computer may be able to crunch so many lines of moves that it can work with a single heuristic, but I will bet that even now the top computers have more heuristics. Humans/society are more complicated than chess. I think reducing how we evaluate morals to survival will reduce our survivability; the trial and error of evolution has led to us using more guidelines, and unless there is tremendous evidence otherwise, I do not think that having ‘how does it affect survivability?’ as the only criterion is likely to be better. But further…
  2. I see no reason to go against my gut reactions to what I would call a dystopian society just because it will lead to greater survival. IOW if I think we, in general, will dislike our lives, even more than we do now, being assured that it leads to greater survival is not enough. It is not sufficient for me. Given that I think it is rather easy to come up with dystopias where we would all want to die, but our genes would be reproduced and our deaths, at least prior to DNA harvesting or procreation, would be prevented by the government, drugs, and AIs working together, I actually consider the idea dangerous. Just as I would consider it dangerous if parents removed all the potential risks to their child’s survival, if that were their sole criterion for good parenting. I do realize that a single child and the society of humans are only loosely analogous, but I think it is a useful analogy.
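
As referenced in point 1, here is a minimal sketch of the chess analogy. The position features and weights are invented for illustration and are nothing like a real engine; the point is only that a single-criterion evaluator cannot distinguish positions that a multi-heuristic evaluator separates easily.

```python
# Illustrative contrast: one-criterion vs. multi-heuristic position evaluation.
# 'position' and its feature values are hypothetical stand-ins, not a real engine.

def eval_single(position):
    """Only criterion: am I checkmated? Blind to everything else."""
    return float('-inf') if position['checkmated'] else 0.0

def eval_multi(position):
    """Weighted sum of several heuristics (weights are made up)."""
    if position['checkmated']:
        return float('-inf')
    return (1.0 * position['material']       # piece-count balance
            + 0.3 * position['mobility']     # number of legal moves
            + 0.2 * position['king_safety']  # pawn shield, nearby attackers
            + 0.1 * position['center'])      # control of central squares

# Two positions identical on the single criterion (neither is mated),
# but very different on the wider heuristics:
quiet_equal = dict(checkmated=False, material=0, mobility=0, king_safety=0, center=0)
winning     = dict(checkmated=False, material=5, mobility=10, king_safety=2, center=3)

print(eval_single(quiet_equal), eval_single(winning))  # 0.0 0.0  (cannot tell apart)
print(eval_multi(quiet_equal), eval_multi(winning))    # 0.0 8.7
```

The survival-only metric plays the role of eval_single here: it scores wildly different futures identically so long as neither ends in extinction, which is the objection in point 1.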

In fact there is an odd counterpart to your claim that morality just is an ‘obligation to go against nature’, but with survival as the only criterion. I would never guarantee our survival, knowing I was casting the deciding vote, if I knew no one would want to live in the future being created. That it would be meaningless suffering, where we are treated as meat and conveyors of DNA, and no other criteria were given to the AIs set to take over, and those AIs were known to think 1) other criteria weaken survivability and 2) their plans were ones that would lead to hell on earth. As you argued that morality is just that which guides us to overcome our instincts for the good of society, I could argue that morality is precisely that which, as a society, makes us decide not to do X, even though this might be viewed as good by selfish genes.

And frankly, I think this is not just negative thinking. I think the best guarantee that Homo sapiens DNA and its hosts keep appearing would likely be very negative to live under. Reducing all risks, for individuals and the species as a whole, could be handled most securely by narrowing our options down to the barest minimum of forms of life.

That’s not quite getting at two points, I think.

  1. Evolution led me to develop a morality in a certain manner. It selected for this. It has also selected for the way I think about it and couch it in language. You are suggesting that we now develop it in a different manner and think about it, in words, in another manner. I have both gut and rational negative reactions to the way you want to couch it. Your argument is based on the idea that evolution has selected morality in a certain manner and that we should now consciously do this in that manner. But evolution has not selected for that doing, at least not yet. There are other ways to modify our morality that do not rely on what I consider an extremely restricted heuristic - that which increases survival is good. I am not arguing that my way of viewing morality is wrong, or that the truth, if different, could be harmful; rather, I am suggesting that what has been selected for offers a wide range of heuristics - not just a focus on survival - and I see no reason to pare things down.
  2. You say above that a moral obligation is to go against one’s nature - where it is problematic, I presume - and evaluate only in terms of survival. But our heuristics take in more factors, and my tribe wants that. Though they do not necessarily agree on priorities or even on the factors themselves, none of them - not a single one I have ever encountered - wants us to evaluate something only in terms of survival. That’s really quite mammalian of us, and certainly primate. And as humans, the apex primates, I think we would want to be very careful about streamlining the complex heuristics that millions of years of trial and error have developed - even if, just as with our eating, we may be led astray by things that worked on the veldt.

We have other ways of dealing with these problems than, for example, reducing fears - though that is often the current approach to what are seen as irrational reactions: the entire pharma/psychiatric approach to not feeling so good.

I am wary of reductionist approaches because it seems to me we go through a ‘hey, I can’t see the importance of this, let’s throw it out’ phase, and so into the garbage go wetlands and tonsils, almost as a rule. I see cane toads on the horizon.

Morality, or even just our preferences/desires, including those informed by empathy, is very complicated. Quality of life, fairness, and all sorts of other criteria are used to evaluate what is good to do.

I would think that paring this down to a single criterion would require a large amount of evidence, and not just deduction, before it was ever put widely in place. And I am not sure how to test it.

I think we are also evolving further away from evaluating things just in terms of survival. Shouldn’t we honor that trend? Compared to other animals we have very complex heuristics. That seems to have given us an advantage, or at least to correlate with one. It may also correlate with dangers, I will concede. Reptiles behave much more along the lines of a single heuristic. Given what we are like, I don’t think that is a good idea for us.

Nice point. To put my objection better: I see no objections, in terms of the survivability criterion, to scenarios that I think pretty much everyone - including you, I’d guess - would be horrified by. I have given some examples. You may have argued against them. But I think they are very hard to criticize with the one remaining moral criterion: does it secure our survival well? If we can come up with horrifying dystopias - horrifying to us - that nevertheless, at least on paper, seem to meet the single criterion, I think that speaks against having that single criterion.

To argue that not wanting to be treated in some of the ways I presented is just like our irrational attraction to sweet things is to take us out of the equation. Perhaps some of my wants are problematic, but if you are taking pretty much all of my wants off the table, then you are saying that my experience does not matter - only my successfully conveying my DNA forward.

I think there is a category confusion here, but I will have to mull it over. I am not sure the morality/technology analogy works. If we have more information, that informs our choices. I am not saying we simply follow impulses; we often have complicated impulses pulling us in a few directions. Generally, in our more complex moralities, we don’t just follow impulses - we look at consequences. The difference with your methodology is that you have one criterion. That could be handled impulsively as well.

Yes, and we can make those moral decisions based on that information coupled with a variety of moral priorities, or we can use that information only in relation to the single one you present. What if the AI decides that certain birth defects are beneficial because they lead to a population that is easier to control? Humans who cannot walk cannot lead a rebellion against the survival society’s rigid controls. That humans still have irrational desires for a good quality of life is part and parcel of their DNA, but if they are born without feet, they are less mobile, easier to track, easier to protect, and less likely to successfully overturn a society that they irrationally judge as wanting because of Stone Age desires.

But then the AI might find that increasing depression leads to greater stability and better control. I would throw in heuristics that include potential for happiness, freedom of movement, room to create, freedom to associate, and a wide range of others. But these might very likely seem to add nothing to survivability as long as top-down control is very effective. They might even be viewed as negatives. A Matrix-like scenario where we are not used as an energy source but placed in virtual realities and vatted seems like a very stable and controllable solution that the AIs might rank highest. I suppose it might even be pleasant, but I don’t want it. And the AIs might find no reason, given the single criterion, to make it pleasant.

In your scenario above - and I know this was just a gestural shorthand - nurture is not on the table, just genetics. I think that omission is coincidental but telling. Survivability scenarios likely need not care much about nurture: they need the bodies to last, but do not necessarily need the minds to thrive and develop at all.

I have to ask if it would be me surviving, or a shadow of me. Induced comas would be one extreme. Is that us surviving?

Why on earth should I equate my DNA with me? Or our DNA with us?

In keeping with my word:

This is good stuff:

Here you’re making someone feel at home, respected, of substance, significant, valued, and appreciated, which is conducive to speech maximization because it’s indicative of someone who’s willing to play fairly and concede good points when they are made. I’ve read thousands of YouTube, Twitter, and forum comments, and I never see that behavior in the wild - only when I email big companies with large customer service departments whose staff have been trained in how to interact with people.

Even when you disagree, you do so tactfully, delicately, and respectfully, which elicits reciprocation in kind.

We need more of this.

I wish I could do it, but it’s a struggle for me since I’m not a people person and could likely be on the autism spectrum, which is likely true of many folks who frequent topics such as philosophy and science. Tact is, unfortunately, a bit of a foreign language to me :blush:

KT, you also make some noteworthy comments:

Here you’re displaying consideration and working to be accommodating.

Wary but not condemning.

Concessions, where applicable, are good.

Congratulations given for good work.

Coming across as genuine.

I don’t mean to put you guys under a microscope, but I just wanted to encourage more of this type of behavior. Functional morality?

Some of Carleas’ opinions could possibly drive me crazy, but he is an exploratory thinker and poster. He is actually interested in critiques, concedes points, and responds to at least many of the points I make, such that any frustration I feel is usually about how hard it is to make certain points clearly, and also about the problems that arise where values and priorities differ. But the conversation itself, his side of it, is how I wish most philosophical discussions were carried out. Nice that you noticed.