Functional Morality

If it is not based on intention, then it is structural, but not necessarily based on game theory; perhaps only contingently so?

Totally understandable, given that our conversation has become spaced out. I don’t think it harms the discussion, and your response was still good and well appreciated.

I’d like to start with something you say halfway through, because it’s a nice analogy and touches on many of your other points:

This is a good analogy because it distinguishes our positions well. My attempt here is to provide “the best description” of morality. You say that you are “not sure why [you] have an obligation to go against [your] nature and view morality or preferred social relations as to be evaluated only in terms of survival”, and my response is that that is just what it means to have a moral obligation. Insofar as “morality” means anything, insofar as it means anything to say that one “ought” to do something, it means that doing that thing will advance survival, for oneself or one's tribe.

And I agree with your observation that “[w]hat got selected for was a species that framed moral issues in other ways”. So too was flavor selected for rather than nutrients, and instinctive fear rather than insect biology, and pleasure and pain rather than reproduction and anatomy. And just as we have used the study of nutrition to recognize that some things that taste good are nonetheless harmful, and that some insects that scare us are nonetheless harmless, and that some things that feel good are bad and others that hurt are good, so too can we decide to overcome our moral intuitions in favor of an explicit morality that, while devoid of romance, is empirically rigorous.

I’ve been reluctant to narrowly define survival for two reasons:

  1. I don’t think it matters. If there’s a moral instinct, it comes from where all of our innate traits come from: a heritable pattern of thought and behavior that led our ancestors to survive. Regardless of how much of that is genetic, how much is culture, how much it operates on the individual and how much on the group, regardless of the many particulars of what such survival may entail, inherited traits can only be inherited where they lead to there being an heir to inherit them.

  2. I am unsure of where morality functions, i.e. what thing’s survival it’s influencing. On the one hand, certain parts of the inheritance must be genetic, but I am unsure how much. I am unsure, for example, whether a group of people left to their own devices would benefit from the inherited mental machinery that, when it develops within a culture, leads to a positive survival impact. If the group itself is part of the context for which the moral machinery of the brain evolved, then it’s not just the genes that produce that machinery that matter, the group itself also matters. I tend to think that’s the case (thus my concern that the “society” continue, and not just genetic humans), but I’m uncertain about it. That uncertainty leads me to want to leave this as an open question. Does this undermine point #1?

First, I’ll note that this is a bit question-begging. A solution is dystopic in part for violating some moral principle, so to some extent this smuggles in intuitive morality as a given.

Second, as I said above, I think intuitive morality will fail us more and more frequently as time goes on. To use a near-term example that you bring up: in the past, we just didn’t know which genetic pairings would produce good or bad outcomes, so we left it to chance and instinct. But chance and instinct frequently misled us, and we ended up with immense suffering over the course of history as a result. Pre-modern societies just killed children who didn’t develop right, and many women died in childbirth as the result of genetic abnormalities in their developing babies. So if we suggest that greater deliberation or intervention in genetic pairings going forward is somehow immoral, we need to weigh that against the immense suffering that still happens as a result of leaving things to chance.

I’m not arguing in favor of such intervention; rather, I mean to say that merely knowing, merely developing the ability to predict genetic outcomes in advance, requires us to make a moral decision that we never had to make before. It may be creepy to centrally control or regulate genetic pairing, but if we know that (a + b) will create a resource-hungry and burdensome locus of suffering, and (a + c) will create a brilliant and productive self-actualized person who will spread happiness wherever she goes, there is at least as strong an argument for the creepiness of not intervening. (Note that I don’t use “creepy” in the pejorative sense here; I intend it as shorthand for the intuitive moral reaction, and, subjectively, I think it captures what intuitive moral rejection feels like.)

So, I reiterate the point I made above: our intuitions are bad at the future, because they are the intuitions of savanna apes, and not of globe-spanning manipulators of genetic inheritance. We will need more than intuition to make sense of these questions.

My response is as you would expect: I think those things aren’t particularly functional, since a large underclass of people without “dignity, sense of self, fairness”, etc. leads to things like the current collapse of global institutions (and, relevant to my discussion of the meaning of ‘survival’ above, institutions are beneficial to group survival). I think that’s always likely to be the case. Moreover, using fully functional humans, whose brains are some of the most powerful computers available, to do busywork is a waste of resources. I expect a society optimized to plug in all of humanity will be both functional and generally pleasant for its inhabitants.

But functional morality is ultimately a meta-ethical system; it admits of a lot of debate about which specific moral positions are permitted or best achieve its goals. I think most nightmare scenarios are likely either to fail to optimize functionality, or to be scenarios that all moral systems struggle with equally (see the discussion of the consequences of genetic intervention above).

After posting what’s below and then mulling it over, I think I can boil my two objections down and be clearer than my earlier groping.

  1. If we tell chess learners, and even top players, that every move should be evaluated only in terms of not getting checkmated, they will likely do less well than players who have a wider range of heuristics and guidelines. Yes, someday the top quantum computer may be able to crunch so many lines of moves that it can work with a single heuristic, but I will bet that even now the top computers have more heuristics. Humans and society are more complicated than chess. I think reducing how we evaluate morals to survival will reduce our survivability, and that the trial and error of evolution has led to us using more guidelines, and unless there is tremendous evidence otherwise, I do not think that having ‘how does it affect survivability?’ as the only criterion is likely to be better. But further…
  2. I see no reason to go against my gut reactions to what I would call a dystopian society just because it will lead to greater survival. In other words, if I think we, in general, will dislike our lives even more than we do now, being assured that it leads to greater survival is not enough. It is not sufficient for me. Given that I think it is rather easy to come up with dystopias where we would all want to die, but our genes would be reproduced and our deaths, at least prior to DNA harvesting or procreation, could be prevented by the government, drugs, and AIs working together, I actually consider the idea dangerous. Just as I would consider it dangerous if parents removed all the potential risks to their child’s survival, if that were their sole criterion for good parenting. I do realize that a single child and the society of humans are only analogous situations, but I think it is a useful analogy.

In fact, there is an odd counterpart to your claim that the obligation to go against one’s nature is what morality is, but with survival as the only criterion. I would never guarantee our survival, knowing I was casting the deciding vote, if I knew no one would want to live in the future being created; if that future would be meaningless suffering, where we are treated as meat and conveyors of DNA; if no other criteria were given to the AIs set to take over; and if those AIs were known to think (1) that other criteria weaken survivability and (2) that their plans were ones that would lead to hell on earth. Just as you argued that morality is precisely that which guides us to overcome our instincts for the good of society, I could argue that morality is precisely that which, as a society, makes us decide not to do X, even though X might be viewed as good by selfish genes.

And frankly, I think this is not just negative thinking. I think whatever best guarantees that Homo sapiens DNA and its hosts keep appearing would likely be a very negative world to live in. Reducing all risks for individuals and the species as a whole could be handled with the greatest security by narrowing our options down to the barest minimum of the forms of life.

That’s not quite getting at two points, I think.

  1. Evolution led me to develop a morality in a certain manner. It selected for this. It has also selected for the way I think about it and couch it in language. You are suggesting that we now develop it in a different manner and think about it, in words, in another manner. I have both gut and rational negative reactions to the way you want to couch it. Your argument is based on the idea that evolution has selected morality in a certain manner and that we should now consciously do it in that manner. But evolution has not selected for that doing, at least not yet. There are other ways to modify our morality that do not rely on what I consider an extremely restricted heuristic: that which increases survival is good. I am not arguing that my way of viewing morality is wrong and that the truth, if different, could be harmful; rather, I am suggesting that what has been selected for offers a wide range of heuristics, not just a focus on survival, and I see no reason to pare things down.

  2. You say above that a moral obligation is to go against one’s nature (where it is problematic, I presume) and evaluate only in terms of survival. But our heuristics take in more factors, and my tribe wants that. Though they do not necessarily agree on priorities or even factors, none of them, not a single one I have ever encountered, wants us to evaluate something only in terms of survival. That’s really quite mammalian of us, and certainly primate of us. And as humans, apex primates, I think we would want to be very careful about streamlining the complex heuristics that at least millions of years of trial and error have developed, even if, just like with our eating, we may be led astray by things that worked on the veldt.

We have other ways of dealing with these problems than, for example, reducing our fears, though this is often the current approach to what are seen as irrational reactions; that is the entire pharma/psychiatric approach to not feeling so good.

I am wary of reductionist approaches because it seems to me we go through a “hey, I can’t see the importance of this, let’s throw it out” phase, and so into the garbage go wetlands and tonsils, almost as a rule. I see cane toads on the horizon.

Morality, or even just our preferences/desires including those informed by empathy, is very complicated. Quality of life, fairness and all sorts of other criteria are used to evaluate what is good to do.

I would think that paring this down to a single criterion would require a large amount of evidence, and not just deduction, before it was ever widely put in place. And I am not sure how to test it.

I think we are also evolving further away from evaluating things just in terms of survival. Shouldn’t we honor that trend? Compared to other animals we have very complex heuristics. That seems to have given us an advantage, or at least to correlate with one. It may also correlate with dangers, I will concede. Reptiles behave much more along the lines of a single heuristic. Given what we are like, I don’t think this is a good idea for us.

Nice point. To better put my objection: I see no objections, in terms of the survivability criterion, to scenarios that I think pretty much everyone would be horrified by, including, I would guess, you. I have given some examples. You may have argued against them. But I think they are very hard to criticise with the one remaining moral criterion: does it secure our survival well? If we can come up with horrifying dystopias, horrifying to us, that nevertheless, at least on paper, seem to meet the single criterion, I think that speaks against having that single criterion.

To argue that not wanting to be treated in some of the ways I presented we would be treated is just like our irrational attraction to sweet things is to take us out of the equation. Perhaps some of my wants are problematic, but if you are taking pretty much all of my wants off the table, then you are saying that my experience does not matter, only my successfully conveying my DNA forward.

I think there is a category confusion here, but I will have to mull it over. I am not sure the morality/technology analogy works. If we have more information, that informs our choices. I am not saying we simply follow impulses. And we often have complicated impulses pulling us in a few directions. Generally, in our more complex moralities, we don’t just follow impulses; we look at consequences. The difference with your methodology is that you have one criterion. That could be handled impulsively also.

Yes, and we can make those moral decisions based on that information coupled with a variety of moral priorities. Or we can take that information and use it in relation to the single criterion you present. What if the AI decides that certain birth defects are beneficial because they lead to a population that is easier to control? Humans who cannot walk cannot lead a rebellion against the survival society’s rigid controls. That humans still have irrational desires for a good quality of life is part and parcel of their DNA, but if they are born without feet, they are less mobile, easier to track, easier to protect, and less likely to successfully overturn a society that they irrationally judge as wanting because of Stone Age desires.

But then the AI might find that increasing depression leads to greater stability and better control. I would throw in heuristics that include potential for happiness, freedom of movement, room to create, freedom to associate, and a wide range of others. But these might very likely seem not to add a bit to survivability as long as top-down control is very effective. They might even be viewed as a negative. A Matrix-like scenario where we are not used as an energy source, but placed in virtual realities and vatted, seems like a very stable and controllable solution that might be viewed as best by the AIs. I suppose it might even be pleasant, but I don’t want it. And the AIs might find no reason, given the single criterion, to make it pleasant.

In your scenario above - and I know this was just a gestural shorthand - nurture is not on the table, just genetics. I think that this is coincidentally telling. Survivability scenarios likely need not care much about nurture. They need the bodies to last, but do not necessarily need the minds to thrive and develop at all.

I have to ask if it would be me surviving, or a shadow of me. Induced comas would be one extreme. Is that us surviving?

Why on earth should I make my DNA me? Or our DNA us?

In keeping my word:

This is good stuff:

Here you’re making someone feel at home, respected, of substance, significant, valued, and appreciated, which is conducive to speech maximization because it’s indicative of someone who’s willing to play fairly and concede good points when made. I’ve read thousands of YouTube, Twitter, and forum comments, and I never see that behavior in the wild, only when I email big companies with large customer service departments whose staff have been trained in how to interact with people.

Even when you disagree, you do so tactfully, delicately, and respectfully, which elicits reciprocation in kind.

We need more of this.

I wish I could do it, but it’s a struggle for me since I’m not a people person and could likely be on the autism spectrum, which is likely true of many folks who frequent topics such as philosophy and science. Tact is, unfortunately, a bit of a foreign language to me :blush:

KT, you also make some noteworthy comments:

Here you’re displaying consideration and working to be accommodating.

Wary but not condemning.

Concessions, where applicable, are good.

Congratulations given for good work.

Coming across as genuine.

I don’t mean to put you guys under a microscope, but I just wanted to encourage more of this type of behavior. Functional morality?

Some of Carleas’ opinions could possibly drive me crazy, but he is an exploratory thinker and poster. He is actually interested in critiques, concedes points, and responds to at least many of the points I make, such that any frustration I feel is usually about how hard it is to make certain points in a clear way, and also about the problems where values and priorities differ. But the conversation itself, his side of it, is how I wish most philosophical discussions were carried out. Nice that you noticed.

We spend so much time focusing on bad behavior that I thought it might be productive to balance it by pointing out the good as well.