Functional Morality

I did state that I agree to a degree and disagree on this:

“survival and genes propagation” is not morality per se; rather, it is the ground for morality and ethics.

As I mentioned, one needs the following to understand how it works:

I have posted on the above elsewhere - won’t go into details here.

That is only an article re a book he wrote. The research proper is in the background and published elsewhere. I can’t find his scientific paper offhand, but he would not be awarded $1 million if there were no scientific paper.

Btw, there is other research done re babies and inherent morality.

Sorry if I misunderstood, but honestly I still don’t understand.
I confess I have a problem at the literal comprehension level. You wrote “I agree with the above re the intrinsic moral drive within human[s].” What does “re” mean? I thought it was a typo, but I see that the same “re” returns elsewhere.
Regardless, after the same sentence “I agree with the above re the intrinsic moral drive within human[s]”, it seems that you posit a ‘moral drive’ in men.
Now, I take it you agree with Carleas when you say:

My understanding is that Carleas maintains that if these instincts are a ‘ground’ for morality, they are so in some deceitful way, meaning that what is deemed a moral habit is, in fact, a device serving “survival and genes propagation”. This amounts to a denial of a moral drive in men. And if you agree with that, where would ‘your’ moral drive be?
Then you add

and

So, if I may take it that instincts to “survival and genes propagation” are key to adaptation and natural selection, I have to understand that these non-moral grounds of morality are not what ‘directly’ propels this ‘ongoing progress of morality’. So, their role remains mysterious. Maybe I should have read your other posts on the “sound Framework and System of Morality” to understand this better. Nevertheless, I gather that this framework would be obtained through abstraction, hence I conjecture (maybe superficially) that these non-direct grounds would no longer play a part in it.

That’s OK, I don’t mean that Professor Bloom’s research is unreliable. I have no reason to think that and, actually, what he says about children’s reactions makes sense to me; I can easily back it from my own experience. Yet, I still don’t think that those findings hint at an innate morality in a sense that would confute the OP. (Incidentally, the Jacobs Foundation is about «the future of young people so that they become socially responsible and productive members of society». I don’t mean to disrespect that, but scientifically their grant does not prove much in my view.)

Awesome epiphany!

Yes exactly and it’s why morality is only relevant within that assumed context.

Is there such a thing as objective experience? Can an object look at itself?

Gene replication? You mean population growth with no opposing force selecting for any particular gene mutation? Hmm… what happens when life gets too easy? What genes are chosen in that environment?

I think it was farming and one doesn’t need cooperation for that; just luck in having good soil and animals to be domesticated. That’s the biggest difference between the Native Americans and the Europeans.

Are wolves more successful than tigers? Wolves cooperate, but have to share in a hierarchy where some wolves may be excluded. Tigers manage alone and don’t need to share.

We do tend to personify, but I don’t think we need cooperation for that. The rustling of the grass should be interpreted as a tiger whether it really is or not.

How could anyone determine what “evolutionary success” means or how to get there? Evolution presupposes no plan, or else it’s not evolution. As soon as someone has a plan in mind, it would cease to be evolution, since there is no longer an obstacle to overcome, but conditions upon which to undercome and devolve.

The functional morality is what ought to be done if one desires the continuation of what’s been happening.

“Regarding” is my guess.

Morality is in sync with society and whether one regards society to be natural or artificial is subjective along with the interpretation about which is best.

You can argue his side and we won’t have to deal with the foul odor.

Are you asking whether nature or nurture more prominently affects moral/political leanings? Well, since morality is a function of society, then it would seem that nurture would instill properties consistent with societal influences.

Everything you do is a product of your structure.
When you act, you reinforce your structure.
Your will is a product of your structure.
Beyond the bias of the living, all is neutral.
The bias is a product of your structure.
All values are a bias.
Morality is how to best act in accord with your bias.

Survival is neutral.
Evolution neutral.
Happiness neutral.
Change neutral.
Progress neutral.

Only the biased care one way or the other.
What is your bias and why are you biased?
Ought you rely on the systems that produced your bias to dictate how you respond to your structure?
Follow their lead? Set their results as your goal? Mimic the blind?

I am a little late on this response, but I tried to be thorough by way of apology. There were many good responses, and clearly some weak points in my argument I needed to address. To avoid responding to individual sentences, I rolled them into some overarching categories. Please correct any mistakes or misreadings, and let me know if I failed to adequately address any criticisms.

================================================================================

1) What is “morality”?

Peter Kropotkin points out that I have not defined “morality”, and goes on to note that without a definition, statements like “we observe morality in both young children and non-human primates” are unclear. Peter is correct that I did not provide a definition, but I disagree that that is a significant problem. In some sense, I am arguing for what should be the definition of morality, i.e. how that term should be understood and used. To the extent that’s so, any definition I provide would be effectively tautologous with the argument that I’m making.

But I’m also appealing to a colloquial, small-m ‘morality’ when I say that we observe morality in children and non-humans. In both those groups, we observe strong, seemingly principled reactions in adherence to innate concepts of fairness, and often those reactions are contrary to immediate self-interest. So, for example, capuchin monkeys trained to complete a task for a given reward will react violently if they observe another monkey get a more valuable reward for the same task. They will go as far as to reject a reward that they had previously been satisfied to receive, as if in protest at the unfair treatment. That reaction is a rudimentary morality, as I mean it. Children, too, will react angrily to being rewarded differently for the same task, and from a very young age have a concept of fair distribution of rewards.

In these situations, we see that there is clear global instrumental value in the reactions, since they are intended to punish unfairness and communicate that the recipient will not stand for unfair treatment. In a non-lab setting, this reaction will encourage fairness in repeated encounters. But the reaction is also clearly of a piece with more sophisticated moral reasoning, as when a person reacts to such unfair treatment on someone else’s behalf. It takes little more than this seemingly inbuilt reaction and the ability to model other minds to generate such vicarious indignation. We then tend to label these vicariously felt slights as moral sentiment, and further refinements are just further abstraction on the same idea. Kant’s categorical imperative is nothing more than a generalization upon them.

As Wendy points out, morality in this sense is “society’s cohesive glue”: it’s a set of generalized standards of treatment, and one about which third parties will get indignant on someone else’s behalf. It creates a social glue by creating a set of presumptions about acceptable conduct. And I mean “morality” to point to that glue. Morality as I use it is an observable part of human affairs, a collection of behaviors common to normal-functioning humans (a deficit of which we describe as one of several mental illnesses). And because of its roots in innate tendencies visible in unschooled humans and our close animal relatives, I argue that the observable behaviors of morality are the result of cognitive habits selected for in our evolutionary history, i.e. that they exist because they are functional, so there is no higher authority to appeal to in moral matters than function.

But I should clarify that, even with my functional framing, not all moral rules are as hard-wired as unfairness. For example, it’s perfectly consistent with this understanding of morality that there are some moral rules that are necessary (I think this is what Meno_ means when he says “intrinsic”) and some that are contingent (what Meno_ describes as a “given…set of moral rules”). Necessary rules will be those that follow from the base facts of biological existence; contingent rules will be those that create social efficiency but are just one of many ways to create such efficiency (perhaps this is what Wendy meant by morality being functional in a multitude of ways). This distinction is neither sharp nor certain, but it is meaningful when considered in degrees: the moral maxim that one should follow traffic laws is more necessary than the moral maxim that one should drive on the right, even though it may be possible to efficiently structure society without traffic laws.

Urwrong suggests a basis of morality in “death and its inevitability”, but I don’t see that in practice in the real world. Even the examples he gives (giving your life for a higher good, or for your child) are clearly functional, whether by supporting self-sacrifice for collective benefit, or simply by ensuring the direct survival of your genes as carried by your offspring.

It may be true that the adherents of some things we call morality describe their actions in terms of other values, such as “god’s will” or “karma”, but the existence of a mythology and alternative narrative does not detract from the fact that, if those moral systems have persisted over time, it is because they kept the groups that supported them cohesive and self-perpetuating. (I will say more below about the potential distinction between accurate descriptions of the world that involve selection, and the behavioral effects of descriptions of the world on which selection acts.)

It isn’t impossible to have a non-functional moral system, but if it is non-functional, it is not likely to survive. Early Christianity had a moral prohibition against reproduction, and that moral sentiment died out because it was selected against: people who believed it did not reproduce, and a major method of moral transmission (likely the primary method) was unavailable. The existence of such beliefs, and their description as a form of morality, does not mean that morality is not as I describe it.

================================================================================
2) In what sense is it “functional”?

Several people challenged the claim that morality evolved. Attano asks how we could know (“Fossils bear no trace of the morality of a specimen”), and Prismatic notes the memetic evolution of morality on sub-genetic-evolutionary timescales.

I have described the biological roots of morality as “cognitive habits”. I describe them this way because it doesn’t seem that most particular moral propositions are coded in our genes, but instead that we have a few simple innate predispositions plus a more general machinery that internalizes observed moral particulars. A Greek raised among the Callatiae would certainly find it right and proper to eat his dead father’s body, and a Callatian raised among the Greeks would find the practice repulsive. The general moral cognitive habits that are selected for in genetic evolution are the foundation of the moral particulars we see in practice, especially the tendency to align one’s behavior with others as a means of coordinating society and enabling cooperation. Those cognitive habits are functional insofar as they enable more cohesive groups to out-compete less cohesive groups.

Attano is correct that we can’t see this directly in the fossil record. But we can still infer its origins in genetic evolution by looking at non-human animals and young children. There, we see both the tendency to imitate the herd and the foundations of specific moral precepts. Explaining this through “History” (which I understand to be something like memetic evolution) doesn’t work, because non-human animals aren’t plugged into the same cultural networks, and very young children haven’t absorbed the culture yet (and I believe the moral-like actions of young children are similar across cultures, though I am less confident on that point). Evolved cognitive habits also best explain why we see moral systems in all human groups. Though they differ between groups, they are present everywhere and there is broad agreement within a group.

On top of those cognitive habits is another form of evolution, what I would call memetic (as opposed to genetic) evolution. Our wetware is evolved to harmonize groups, but the resulting harmonies will vary from group to group due to differences of circumstance and happenstance. That explains the “progress” in morality that Prismatic notes: memetic evolution can take place much more rapidly, since its components are reproduced and mutated much more quickly than are genes.

Now, we might call “progress” the process of coming up with moral codes that allow us to form yet larger and more efficient groupings. Or it might be the process of removing the moral noise that is built into local moralities by happenstance (e.g. rules surrounding specific types of livestock), boiling down to more universal moral beliefs like “don’t murder”. Progress in a system of functional morality would consist in sets of moral particulars that make the group function better.

Serendipper seems to suggest on this point that population growth may be bad (or perhaps just non-functional) if not coupled with an “opposing force selecting for any particular gene mutation”. But population growth is the result of functional morality; bearing offspring who bear more offspring is what it means to have genes selected for. This may be clearer if we compare competing ant hills, and ask what it would mean if one ant hill began to increase in population significantly over the competing hills. More population means that the hill is already relatively successful, because population expansion requires resources, and also that it’s likely to be more successful, because more ants working on behalf of the collective means the collective is likely to be stronger. So too with humans: we can read success from population growth, and we would expect population growth to create success (up to a point, the dynamics change when there are no competing groups).
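To make the ant-hill comparison concrete, here is a toy simulation (a sketch of my own, with invented numbers; the super-linear exponent is an assumption standing in for the synergy of more cooperating workers, not a real model of ants). Two hills split a fixed resource pool in proportion to their cooperative clout, and births track resources captured; the initially larger hill compounds its lead:

    # Toy model: two hills compete for a fixed pool of resources.
    # alpha > 1 is an assumed stand-in for cooperative synergy:
    # a hill's clout grows super-linearly with its population.
    def step(p1, p2, pool=1000.0, alpha=1.2, births_per_resource=0.02):
        w1, w2 = p1 ** alpha, p2 ** alpha      # cooperative clout
        r1 = pool * w1 / (w1 + w2)             # hill 1's share of the pool
        r2 = pool * w2 / (w1 + w2)             # hill 2's share of the pool
        # births are proportional to resources captured
        return p1 + births_per_resource * r1, p2 + births_per_resource * r2

    p1, p2 = 120.0, 100.0                      # hill 1 starts 20% larger
    for generation in range(50):
        p1, p2 = step(p1, p2)
    print(round(p1), round(p2))                # hill 1's lead has widened

Population growth here both signals past success (resources already captured) and produces future success (more workers, more clout), which is all the argument needs.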

A growing population may, and probably does, require morals to change, but we should expect that: as context changes, including changes in the nature of “the group”, different behaviors will be functional. But that our old morals will be a victim of their own success does not mean that they were unsuccessful: a growing population and growing cooperation between group members mean that the old rules were functional in their context.

================================================================================
3) How does this apply?

A few people asked about the applicability of this way of framing morality. That line of argument usually isn’t so much an objection as an invitation to keep developing the theory, which I am glad to do.

Jakob suggests maybe morality needs to be naive, in the sense that the inborn sense of morality as an ideal is important to its functioning. That may be the case. But it is also true that in order to dodge a speeding car, we need to forget about special relativity, even though the most accurate description of the car’s motion that we can produce requires us to use special relativity. So too might we recognize and describe morality as a system of cognitive habits that support group cohesion, and yet in deciding how we live appeal to more manageable utilitarian or deontological axioms. This goes to Urwrong’s point above about descriptions of morality in terms of death rather than life: different descriptions may more effectively achieve the ends of morality, but they do not change the nature of morality as an evolved system that helps perpetuate human groups.

This is related to a question from iambiguous, of how we actually put the idea into practice. I don’t think that’s easy, but I also don’t think it’s necessary. I am not here offering an applied morality of daily life, but a moral theory to which such an applied morality should appeal. There are potential subordinate disagreements about e.g. whether brutal honesty or white lies are more effective in creating group cohesion and cooperation; what I am proposing here is the system to which the parties to such disagreements should appeal to make their case.

Serendipper asks how we could determine evolutionary success, and I think the answer is easy in retrospect (though not trivial), and more difficult in prospect. In retrospect, we can just ask what survived and why. Sometimes we know that groups fell apart for arbitrary reasons, and other times we can readily identify problems within the groups themselves. We can point to moral prohibitions that harmed groups and were abandoned, e.g. sex and usury prohibitions. We can compare across surviving systems and see what they have in common, e.g. respect for laws and public institutions.

In prospect, we can make similar arguments, drawing from the history of moral evolution to make predictions about what will work going forward. Like any theory about what will happen on a large scale in the future, there’s substantial uncertainty, but that doesn’t mean we know nothing. We can more readily identify certain options that are very unlikely to be the best way forward.

But again, this uncertainty isn’t fatal to the proposition that morality is functional – indeed, it’s expected. Much as we don’t know for sure which evolved genetic traits will survive, or whether K- or r-strategies are more reliable in a given context, we also do not know what moral approach will guarantee group prosperity. But these observations do not undermine the theory of evolution, and they do not undermine the theory of functional morality.

I was referred here in the context of my saying that without emotions there are no morals. I see nothing here to argue against that. If you have strategies that unemotionally lead to the propagation of your genes, and no emotions are present, you have tactics and strategies. Machines could be programmed to do this - something like those robot cagematches, though fully programmed ones. That isn’t morals. Morals are inextricably tied to someone’s feelings and values - iow subjective preferences, even if it is a posited God’s - and notice how these gods get pissed off if you break the rules.

And guilt would fit into the discussion you have of emotions above. Natural selection slowly working on which kinds of guilt are adaptively poor.

Once something is only tactic and strategy, you have no way to decide between this set of tactics - which leads to the destruction of life on earth - and that set of tactics which does not

UNLESS

emotional/desire based evaluations

are made.

If you have none, you are no longer an animal.

If you have none, you cannot decide, though you could flip a coin.

And one interesting thing about evolution is that it has led, and not just in the case of humans, to species having the ability to not necessarily put their own genes ahead of others’. This may benefit the species - it is part of what makes us so versatile, or our versatility makes us like this.

Yes, apparently unemotional viruses may be even more effective than us - in the long or even short run - but they are not moral creatures. I think it would be a category error to call them that.

The argument that morality doesn’t depend on emotions is that morality was a product of evolution, and was selected for independently of any emotional valence. The origin of morality as something that supports group selection does not depend on emotion; emotion is neither necessary nor sufficient for morality to be selected for.

That’s not to say that morality can’t interact with emotion; it may be that morality subjectively experienced as an emotion is an effective way to encourage beneficial ingroup cooperation. Or it may be that tuning into the emotions of others gives us inputs into our moral machinery that help produce such beneficial ingroup cooperation.

But like all evolved traits, the fact that they produced outcomes that were selected for in the past does not guarantee that they will produce outcomes that will be selected for in the present. We evolved to go nuts for sugar, because in the environment in which we evolved sugar was scarce and it paid to eat all we could. In our current world, sugar is abundant and too much enthusiasm for sugar is selected against. Many common fears are unjustified, we’re too risk averse, we overreact to cold. We have a lot of subjective experiences that were handed down through evolution that are actively counterproductive in the modern world. Our subjective preferences can be mistaken in that sense.

So too can connections between emotion and morality be seen to be spurious, once we accept why morality exists at all. Whatever weight we give to emotion, we can and do discount it completely when it leads to the wrong outcome. We feel guilty when we dump someone, and it’s not that we shouldn’t feel that way; it’s that that emotion has no bearing on the rightness or wrongness of the action. We feel it because we evolved in small bands without the elaborate puritan mating regime of modern society, and in the evolutionary context, hurting someone and tearing social bonds to the extent we do when we dump someone now was disruptive to the group and bad for us and our tribe. So we feel guilty, we have the moral intuition that we’ve done wrong, and that moral intuition is mistaken.

The fact that we can look at a situation and use non-emotional factors to identify emotions that are just incorrect and that point to the wrong moral conclusions entails that moral conclusions can’t actually be based on the emotions. They’re based on something else, something independent of the emotions.

We literally have a neural network that was trained on certain inputs to achieve a certain goal, and now we’re feeding it different inputs and just declaring wherever it points to be the goal. That’s a nonsensical approach. The goal is the same goal: survival.
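To make the metaphor concrete, here is a toy sketch (my own illustration; the “policy”, thresholds, and numbers are all invented, not anyone’s actual model). A fixed, “evolved” rule - eat whenever sweetness is detected - is scored against the actual goal, net effect on fitness, first in the scarce environment it was tuned for and then in an abundant one. The rule’s output (craving) is a proxy for the goal, and reading the proxy as the goal fails once the inputs change:

    import random

    def evolved_policy(sweetness):
        # fixed "weights", tuned by the ancestral environment:
        # crave nearly any detectable sugar
        return sweetness > 0.2

    def fitness_effect(ate, calories_needed):
        # eating pays off when calories are scarce, harms when in surplus
        return (1 if calories_needed else -1) if ate else 0

    def average_fitness(scarcity, trials=10_000):
        total = 0
        for _ in range(trials):
            sweetness = random.random()
            ate = evolved_policy(sweetness)
            total += fitness_effect(ate, random.random() < scarcity)
        return total / trials

    print("ancestral, calories usually needed:", average_fitness(0.9))  # positive
    print("modern, calories rarely needed:", average_fitness(0.1))      # negative

The craving fires identically in both environments; what changed is the input distribution, so following the proxy no longer serves the goal it was selected to serve.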

  1) you did not really interact with the ideas I presented. 2) You are claiming to know what is good and what is bad, iow to have access to objective morality, to some degree or other. 2a) You need to demonstrate this.

My point is not that I have access to objective morality, but that all moralities are founded by us humans on emotions. Why must this be the case? Because otherwise we have no other way to determine what we think is good. Note the difference between us. You are claiming to know the good, the objective good. I am focused on the process that must take place to decide whether a morality is good. If one is a consequentialist, which you are, then the only way for you to determine what you consider good is via emotions. Social mammal emotions.

We cannot even say that the survival of any human is objectively good. How would we know this? But we use values based on social mammal biases to decide that, well, survival of humans is good. Perhaps the consequentialist thinks that reducing unnecessary pain is good. This is based on empathy and one’s own personal revulsion at pain, projected onto others.

You can have goals and then best inferred heuristics to reach that goal.

But morals are not simply goals.

The whole opening of your post does not address what is happening. It claims that a non-emotional natural selection led to emotions. Whoopie. Irrelevant.

Emotions led to morals. It is a necessary part of the process through which we evaluate the good. You may decide that I or a younger you reached a poor conclusion based in part on emotions when it came to morals. But again, you MUST use emotional social mammal values to determine this.

At some point you have what, I can only assume, you think is merely a rational, logical decision. An emotionless evaluation. Whatever that is, I will bet, not coincidentally values your own life as good, though perhaps one that could be outweighed by other goods. That life is good. That not causing unnecessary harm is good.

All based on your desires to be alive and hopefully empathy at least as a factor in relation to others.

If you take away the emotions, you are then claiming that what in fact are really just tactics are morals. Tactics to achieve certain outcomes that you are utterly indifferent emotionally about. Tactics to reach a goal you are utterly emotionally indifferent about.

If you are indifferent emotionally about those goals, why enter the discussion at all?

Why not let people who emotionally prefer certain outcomes, ways of relating, decide?

It doesn’t matter to you.

There is something messed up in here, like some deep confused category error, and I wish I could really explain this well.

I can only right now take a stab at it with a reductio…

Unless whatever process you have for deciding on morality is not itself based on functions coming out of evolution, whatever your emotionless process is, is also not necessary.

Morality emerged out of emotional beings, beings who evaluated the good and bad using emotions. It is not a coincidence that chimp and wolf moralities correlate incredibly well with emotional likes and dislikes. Animals with no limbic systems are never referred to as having emotions. We may talk about power dynamics in animals without limbic systems, but I don’t hear anyone talking about reptile morals. Just reptile behavior. But with apes and canines, using ideas like fairness works.

Anyone who told me they had arrived at an objective morality without emotions, I would distrust in the extreme. Because they are claiming emotional indifference - that their conclusions have not used emotions in their evaluation. Which means their morality is not based on empathy, even for themselves. And since they are claiming to be indifferent, they are presenting themselves as a disinterested party. Which I find suspicious in the extreme.

I am claiming that there is an objective good, that morality is objective. Do you disagree with that?

We know where emotions and morals come from, we know why they evolved, so we can determine what they should say without bootstrapping from them.

My point in this thread is that starting with morals as an empirical phenomenon observable in humans and certain other social animals, we can examine what morals are, why they exist. And any claimed moral commandment that undermines the empirically observed reason for the existence of morals must be mistaken, morals must continue to be what they evolved to be.

And what they evolved to be had nothing to do with emotion (except insofar as emotions also evolved to do the same thing).

Or rather, morals are a tactic that evolved because they kept people who used them alive and helped them reproduce.

This is a strange line of argument.

  1. Should only people that are emotionally invested in outcomes discuss anything? Like, only mathematicians that are emotionally invested in a specific outcome to the Riemann Hypothesis should spend any time trying to figure it out?
  2. Often people who are emotionally invested in an outcome are the worst people to solve it. That’s why we have courts and arbitrators and mediators and trilateral talks. Neutral third parties are often better at resolving disputes.
  3. I’m not saying emotion does nothing or doesn’t matter, I’m saying emotion isn’t the basis, isn’t a component, of morality. As I say above, emotions will often align with morality, and ‘naive’ morality will often align with survival, because they all evolved to the same ends. But where they differ, it is survival that wins. And I, weak as I am, feel and follow emotions, but I often do so knowing that it is immoral.

This poses the interesting question of how to distinguish rationality from morality, i.e. rationality also evolved, so why doesn’t survival trump rationality? I would look to what rationality and morality each purport to do. Rationality is an attempt to describe something that exists independently of humans. It is a way of describing the world. Morality, by contrast, is something we created (or that was created as a part of us).

I think you would have to agree with this distinction: if morality is based on our emotions, then it doesn’t exist in a world without our emotions. Rationality, logic, math, those things exist without us.

Which morality is objective? We have evolved a set of moralities, and some of these moralities hold that we should not survive – anti-natalism - some consider us parasites to such a degree that we should be eliminated to protect other life. The transhumanists have moralities, or perhaps aesthetics, that want us to choose the way homo sapiens will no longer exist – they are the most likely of the three to win the natural selection battle with other moralities. I can’t see how one can know the objective good, nor can I see that teleological arguments based on evolution lead to any conclusion about what is good with a capital G. We can come up with tactics that might be good for the spreading of our genes, though that does not sound like morals to me. Evolution led to a capacity. That capacity - the portions of our nervous systems, say, that amongst other things came up with morals - may or may not be adaptive in the long term. And we cannot assign it a purpose. Once this capacity is present, it is clear that it will be applied to all sorts of purposes.

I am not sure if your ‘why’ is teleological here, but this is a bird’s eye view. Or view from nowhere. In situ we have a way of creating meaning for ourselves, and that meaning is emotionally evaluated and generated, and not bound by any ‘purpose’ in evolution. If there were a purpose in evolution, and it wanted control, it made a mistake when it came up with our capacities and tendencies, since we evaluate and generate morals based on rationality AND emotions. If I am supposed to respect evolution’s goals, it seems to me I must respect the processes and skills it gave me to do things and evaluate things. IOW I have been made such that I mix emotions and rationality, both when I function like a consequentialist and when like a deontologist. I find emotions deeply involved in both processes, and I note this in everyone I meet also. We are this way. I don’t see why I should just abstract out and respect in SHOULD terms evolution’s intent for morals, but ignore evolution’s result in making me/us the way I am/we are.

Who says? How do you know that is good? What if we achieve interstellar travel and kill off lovely, smarter, less nasty species, perhaps all of them? Where can I even stand to view the objective good of our species? All you are talking about is heuristics for survival. That’s not morality. I feel a bit like when I see physicalists talking about being spiritual. They may have wonderful philosophies of life, be great people, generate sweet and caring ethical codes, etc., but they are not spiritual. That word literally entails other stuff. So does morality entail more than heuristics. It includes an in part emotional/desire-based choosing of what goals we want good heuristics for, and often, how we feel about the heuristics. Or it would not be so common to challenge the idea that the ends justify the means. What you describe is certainly not objective morals. It is tactics towards what you consider the one goal, a goal we don’t even know is objectively a good one, though it might be good for us. I consider it an extremely limited goal for us, just one part of what morality covers. But even if it were the only goal of ours, we cannot know if it is a moral one. I mean, who are we to judge the goodness of the human race? Or, better put, who are we to think we can judge it objectively and without emotion?

Again teleological. But further we evolved as creatures that evaluate morals emotionally. If we are going to use a teleological argument then perhaps we should leave that alone, rather than deciding that we can and should just do it only rationally - which I don’t think is possible, in any case. Further it seems completely irrational to generate the way humans relate to each other without making emotional and desire-based evaluations central. I mean, we have to live with all the consequences of those morals, and emotional consequences will be real and central. For some reason emotions are often considered quasi-real. And this is often based on their fallibility. First, reason is also fallible, but further, emotions are real. Now I know that you would not assert that emotions are not real. But note how they end up somehow being moved off the table when they are central to pretty much all the consequences of morals. And then also in the process of choosing and evaluation, etc.
I don’t think there are more than a handful of people who think morality is JUST about calculating the survival of the species. So, you must then explain how evolution led to us having a belief/method/approach that runs counter to what you are saying. If evolution can give us should, this would entail that it would give us should around methodology also.
IOW your argument seems to be that since evolution shaped morals and evolution is all about surviving, then morality is about surviving, period. But evolution led to us, and other moral-creating species, making morals about much more. Perhaps you need to note what evolution has selected for: and in this case it is moral-making animals with limbic systems involved in morals at all levels.
Personally, I don’t really care what evolution wants or intends, but I can see what was selected for in our case.
If it somehow turned out that rationality indicated I should kill off my wife after she births our second child - that the best AIs, analyzing all the complex chains of effects, see this as the best heuristic for human survival, that husbands/fathers do this - OR EVEN if God told me to do it…no way. I won’t. I fail God’s test of Abraham, though I have often wondered if in fact he failed it.
This was an extreme example - though one that fits nicely with our other discussion - but there are all sorts of other moral-like guidelines I would follow regardless of what the best minds said was our best strategy for survival. And if you think that is a problem, blame evolution. Evolution made my moral-making (or in my case preference-making) process such that there are things I will not do (even for money, or for what the supposedly detached people with views from nowhere say is moral). And there are things I will do that may go against their supposed best heuristics.

See above about emotions always being in the mix of creating, applying, modifying, justifying…etc. That is the tactic we evolved.

They all are emotionally invested in finding the correct outcome. And they likely all are interested in their, perhaps at this stage vague, guess of direction being the right guess. And the one who solves it will have been extremely emotionally involved in finding the answer. IOW I wasn’t arguing that Carleas shouldn’t participate, but rather trying to highlight – corner you – that you are likely driven by emotions, even in this telling us we should prioritize survival because that is what evolution gave us morality for. This likely seems like a view from nowhere, but absolutely cannot be once it is couched as a should. Further, the results of the Riemann hypothesis are not like the results of a morality argument, or a decision about how we should, for example, relate to each other. The latter has to do with what we like, love, hate, desire, are repulsed by, and those emotional reactions will guide our personal and collective decisions about what is moral. In fact they must. If they are not involved, we may all end up working in some dystopian panopticon-tyranny that seems efficient, and at least in the short term seems to completely guarantee survival, but which we hate every waking minute living in. For example. I think there are other problems that will arise. Some can be based on emotions now having an unremovable part in what we will even want to survive in, and thus making emotions a selection factor, like it or not. Others need not even be bound to your fundamental should - that we must base our morals on what we think evolution intended morals to be for.

Often the people who come up with morals in what they think is a view from nowhere, or objective, or disinterested, end up making horrible decisions. I would not use that as an argument against using rationality. I don’t think it works as an argument against including emotions. And as far as I can see, the people who judge emotions and present themselves as avoiding their influence are less aware of how their emotions are influencing their choices than the people who do not present their ideas this way. But further, those groups that make decisions are applying morals decided in part on emotional grounds. And they likely have strong feelings about those morals. Courts often use juries, lawyers use emotional arguments, etc. Yes, emotions can lead to wrong decisions. But they are central to morals, determining what morals are and how they affect us. Anyone trying to eliminate emotions from the process of deciding morals will be incredibly lucky if they come up with a moral system that does not feel unnecessarily bad in a wide variety of ways. And if they have the single goal of species survival, this could lead to solutions like…

It is moral to kill off 70% of the population tomorrow and have an elite take over complete control of all genetic combination – read: via sex, GM work, etc. And so on.
Any country that decided to come up with morals without including emotions in the process is one I would avoid, because essentially such a country with that one goal has no interest in the bearers of the genes, except to the extent they bear them. Science fiction has many such dystopian ‘logical’ solutions.

How could it be immoral, in your system of belief, since we clearly evolved with this mixed approach to choosing and creating? It is part of our evolved criteria in all such decision-making.

This makes you a kind of Platonist, or some other form of metaphysician that has these things outside us. But here’s the thing: you are deciding NOT to work with morals the way we obviously have evolved to work with morals, and clearly, to me, emotions are involved deeply in all moral evaluation, and they become clearly visible when there are disagreements about morals, which are regular and ongoing. I don’t really care what evolution may have intended my emotions, morality and rationality to be for and have as goals - and perhaps I am more adaptive precisely because I take the freedom given to me by evolution and don’t care to let its intent rule me. But for the sake of argument, let’s say I should go with evolution’s intentions: shouldn’t I then go with the full set of ways one evaluates and chooses, which would include both emotions and rationality, which are intermingled and interdependent in any case? And yes, morality does not exist in a world without emotions, and it never has. Animals had behavior before emotions, perhaps, but not morality.

Further ‘rationality’ is of a different category than the rest of that list. I would need to know where you see rationality existing without us. And whose rationality? Rationality is a human process - also exhibited in more limited - though often highly effective - forms in animals. We can call it an animal process. For it to function well, in terms of human interactions, the limbic system must be undamaged and emotions are a part of that process. But even without that proviso, I do not find rationality anywhere outside us as possible in the physicalist paradigm, unless we are talking about aliens or perhaps one day AIs.

And note:

I still think this is in the air.
I think one of the reasons we have intermeshed emotional and rational decision-making is that the higher forms of rationality get weaker when there are too many variables and potential chains of causes. AND rationality tends to have a hubris that it can track all this. For some things, emotional reactions have a better chance, though of course this is fallible. But then both are fallible and both are intermeshed. And just as some are better at rationality, some are better at intuition than others are. I would find it odd if what determined how we lived were determined without emotions and desires as central to the process of determining it.

I’ve mentioned Damasio, here’s a kind of summary. Obviously better to read his books or articles…
huffingtonpost.com/fred-kof … ccounter=1

People can’t even make choices without emotions. But then here, with morality, we are talking about making choices about things we react to with strong emotions and have effects that affect us emotionally, that affect our desires and goals - on emotional levels.

I see no reason to consider my DNA more important than my life - how it is lived, what it feels like, what my loved ones experience, the state of what I value - nature, etc.

Edit: since you think we should base morals on ‘survival’, it would be good to define what would count as survival. Warning: I plan to find odd conclusions based on the definition.


Perhaps a shorter objection is better:

  1. how do we know that morality is not a spandrel?
  2. even if it is not, how do we have an obligation to the intent of evolution; in what sense are we beholden to function? Function, evolution, natural selection are not moral agents. What is it that puts us in some contractual commitment to following their intentions? If the argument is not that we are beholden, but rather that X is what morality is for, so we should use it as X - a more determinist connection - then we don’t have to worry about adhering to the function, since whatever we do is a product of evolutionarily-created function. Once I am supposed to follow evolution, use my adaptations, well, how can I fail? And if I fail as an individual, I am still testing for my species, and if my approach was poor it will be weeded out. No harm, no foul.

Thanks for your patience and your excellent replies; they have helped me to develop my thinking on this topic, and I appreciate the critique.

I think there are a number of levels on which we can define it, which I’ll discuss in a minute, and there’s room to debate the appropriate locus of survival as it relates to morality. But I think that debate is separate from whether morality relates to survival at all. Morality exists because of its effect on past generations; it seems clear that there is no morality independent of humans, no moral field that we’re sensing, but rather a moral intuition (i.e. innate brain configurations) that influences our behaviors in ways that supported our ancestors in producing us.

But, as promised, some thoughts on ‘survival’:
First, individual gene-line survival means an organism not dying until it produces offspring who are likely to not-die until they produce offspring.
At a group or society level, survival means the group continues to exist. It’s a little vaguer here because the ‘group’ is somewhat amorphous, and there aren’t discrete generations for reproduction, but a constant production and death of constituent members.
Defining the survival of any thing inherits the problems in defining that thing, i.e. the “can’t step in the same river twice” problems. Moreover, where morality functions on the substrate-independent level of our existence (thoughts), it isn’t clear whether the survival it requires is the survival of the substrate or the survival of the programs that run on it. Would morality support the transhumanist idea that we should abandon our bodies and upload our consciousness to silicon? Even if we take functional morality as true, I don’t know that that question is settled.

I do think that morality must operate on the meta-organism rather than the organism, i.e. society rather than the individual. Morality, as a functional trait, works between individuals, so oughts can only be coherent in relation to, and in support of, the tribe or collective. And I have a sketch of an idea that this entails that we should prefer the pattern over the substrate, since the beast that is society exists continuously while its substrate is born and dies in an endless churn.

But that is a weak and fuzzy position, and in any case beyond the scope here.

Sure, but some morality is just wrong. Anti-natalism specifically is pretty clearly wrong, but that statement rests on the functional morality I’m advancing here.

If what you’re asking for is which morality is the functional morality, I actually think that too is beyond the scope of this discussion. “There is an objective morality that we can discover” is a different claim from “X is the objective morality”. I’m making the former claim here, and arguing that we should use the criteria of functionality to evaluate claims about the latter, but I am not making a specific claim about the latter.

I don’t disagree with this idea or those in the surrounding paragraph, but let me make an analogy.

Once, on a hot summer night, I awoke with intense nausea. I lay in bed feeling wretched for a minute, staring at the ceiling, and the nausea passed. I closed my eyes to sleep again and soon felt intense nausea again. I opened my eyes, and shortly the nausea passed again. I did this a few more times as my rational faculties slowly kicked in, and then noticed that my bed was vibrating slightly. A fan that I’d placed at the foot of the bed was touching the bed frame, creating a barely perceptible vibration. I put it together that the nausea was in fact motion sickness. I moved the fan, the bed stopped shaking, and I slept the rest of the night without incident.

The point here is that motion sickness is an evolved response to certain feelings of motion. In particular, our brains are concerned that certain unnatural sensations of motion are actually the result of eating something toxic. The nausea is a response that, if taken to its logical end, will cause us to purge what we’ve eaten, in the hopes that any toxins will be purged with it. In the evolutionary context, that’s a useful response. But we did not evolve in the presence of beds and fans, and so the way we’ve evolved misleads us into thinking we’re ill when in fact we’re perfectly fine.

A similar thing can happen with morality, and understanding morality as a product of evolution, as a mental trait that evolved in a specific context and suited to that context, and not necessarily to this context, may let us “move the fan” of morality, i.e. shed moral claims that are clearly at odds with what morality was meant to do. Given a few thousand years and a few hundred generations of life in this context, we should expect evolution to get us there on its own, but we don’t have the luxury of that.

So, yes, we are this way, there is some information in our emotions and moral intuitions and we should pay attention to them, just as we should take nausea seriously. But we can examine them in other ways at the same time. We can appreciate the ways in which evolution’s result is inadequate to its purpose, and rely on the other results of evolution (rationality and the view from nowhere) to exert a countervailing drive.

You yourself make a few similar points further down, and I basically agree with them: our moral intuitions and emotions are not for nothing, they can be better than our reason for making decisions in certain cases, and we should treat them as real and expected and important in our decision making. But we should also treat them as subject to rational refutation. And when reason and emotion conflict in making statements of fact about the world, reason should prevail (though perhaps you don’t agree with that).

Yes, I think that’s right. But so too are cardiac surgeons deciding not to work with hearts the way we evolved to work with hearts. The project of moral philosophy, as I understand it, must involve some very unusual treatment of moral intuitions, ones that are obscene to our evolved first impression in the way that delivering a baby by C-section is obscene to someone who only understands it as stabbing a pregnant woman in the belly.

And as I said above in reply to Jakob, there’s no contradiction in the most true description of a phenomenon being nigh useless in our everyday lives. In the game of go, there is a saying, “If you want to go left, go right”, meaning that going directly for the play we want is not the best way of achieving the play we want. But that is not to say that moving left is wrong, just that moving right is the best way to achieve moving left. So too, being a naive consequentialist may be the best way to achieve the functional ends I advocate here. Still, though, I would argue that the functional ends are the ends, and if it could be shown that a different naive system better achieved them, it would be damning of naive consequentialism.

There may be an argument that functional morality is actively counterproductive to its own stated ends. I don’t know what to make of self-defeating truths, but I don’t think functional morality is one. I see no tension between understanding and discussing functional morality and still practicing more common moral systems as rules of thumb on a day-to-day basis.

I don’t think this problem is unique to a rationally-grounded moral system. Emotions too can be a basis for hubris; emotion-based religions are some of the most pompous and unjustifiably self-assured systems of belief that we’ve ever seen. We should not be overconfident.

But reason’s advantage is that it scales: we can use reason to analyse other modes of thought, and even reason itself. Through reason, we can identify situations where relying on intuition is better than relying on deliberate reflection. We can’t do that emotionally. We can rationally examine emotion, but while we can feel things about reason, we can’t get very far with it.

How do we know any evolved trait isn’t a spandrel? We can look at whether morality influences reproductive success, whether it imposes costs that would require a benefit to offset, whether it’s been selected against in isolated populations, etc. I think all these things suggest that it isn’t a spandrel, that it’s been selected for as part of an evolved reproductive strategy:

  • Amoral people tend to suffer socially. Psychopaths can and do succeed, but they depend on the moral behavior of others, and they are also employing a high-risk, high-reward strategy (many psychopaths are killed or imprisoned, but many others are managers or politicians).
  • Morality entails evolutionary costs, e.g. forgoing actions with clear immediate reproductive benefits like theft of resources, murder of rivals, or rape of fertile women. That suggests it has attendant benefits - that forgoing these provides a reproductive benefit in the long term, e.g. reciprocal giving and social support, not being murdered, and better mating opportunities.
  • To my knowledge, morality exists in all human populations, including isolated ones. The isolation may not have been sufficiently long to permit evolutionary divergence, but given the presence of psychopaths, it seems the genes for amorality were there to be selected for and haven’t come to dominate any society.

Consider the example of motion sickness, or of sugar, or of any other evolved predisposition that we can rationally understand to be actively counter to the reasons for which it evolved. We have intuitions that motion not caused by our moving our limbs means we’ve been poisoned and need to purge, and that sugar and fat are good and we should eat as much of them as possible. But we know that these are false, that our evolved tendencies are misleading us, and they are misleading us because of the context in which we evolved - in which such motion did mean poison, and sugar was a precious resource.

So too did morality evolve in that context; ought-ness is derived from our evolutionary past, and we can look at it in that light. Without reference to its evolved purpose, it has no meaning. If we take the position that the evolved meaning of morality is not relevant, it seems the only alternative is moral nihilism.

EDIT, 7/14: words, formatting. Deletions indicated by strike-through, additions underlined…

This is one of the areas I was probing around, because I think it may be very hard for many adherents of functional morality to stay consistent. Perhaps not you. If survival is connected to genetically related progeny having progeny that are genetically related - iow, sustaining genetically related individuals through time - then transhumanism should be considered bad or evil, at least in the case of strong transhumanism, where better substrates for consciousness and existence are created and homo sapiens, as a genetic organism (and physically in general, outside the nucleus of cells also), is no longer present. We will have replaced ourselves with something else, at least in terms of genetic material.

But even setting aside the transhumanism issue: if survival is the guide to morality, the measure of it, it seems to me we can have all sorts of odd scenarios. We freeze our DNA and send it out into the universe with instructions for use, plus an AI to help us seed the first good planet. When we find another civilization somewhere, or the AI gets us going on, say, ten worlds, it seems like we would then be free to do what we want. As long as survival is happening elsewhere, I have no need for morals. We have ensured continuation; now we can do what we want here. Or we could set up a world where the AI combines DNA to make 1000 humans. Their genitals, after puberty, are harvested for DNA, and they are all put down. The AI waits a thousand years and repeats: mixes new DNA, new batch of humans, new cull, repeat. This prevents mass self-destruction events, and the large gaps between generations 1) slow down changes, so the DNA really stays close to earlier generations longer, and 2) create longer survival. IOW there may well be an incredibly efficient way of making our DNA survive - and occasionally create humans - for vast eons, which at the same time entails an existence that is repulsive to most people.

Survival, and not much else.

I didn’t say enough. Anti-natalism is one of the moralities that evolution has given rise to. Right now it is a minority position. Perhaps it will become the majority or power morality. Then this is what evolution has led to. It might lead to our extinction, but evolution led to it. If I, coming from what is for now a minority position, push for my morality, which includes life, before the anti-natalists sterilize all of us, I must wonder, as the anti-natalists take over, whether I am on the wrong side - whether evolution has led to anti-natalist morality and the anti-natalists win. Whatever happens would be functional; it might just not be what we want functional to be. IOW it was functional that dinosaurs became extinct. Evolution and natural selection select for whatever fits - fits, that is, with whatever else exists: other species, the weather, etc. I don’t really see where I should do anything other than prioritize what I want and let natural selection see to the outcomes, just like every other individual in other species. Because once I follow my interests and desires, including mammalian empathy, I am living out what I have been selected to be like. Whatever this leads to is functional, though it may not include my kind.

This might seem obvious: if it is survival of our or ‘our’ genes, and these shaping new generations of ‘us’ or us, then some of transhumanism is wrong and I should oppose it, since it will replace our genes and us.

On the other hand, if I am a functionalist, a natural-selection supporter, then if transhumanism wins, that’s fine. I do not need to think in terms of the best morality or heuristics. We will do what we do and it will be part of natural selection - I mean, unless I have an emotional attachment to humans… :smiley:

IOW there is some weird mix of selfishness - I should support functionalism as far as it furthers my species (though not me in particular) - and following the intended function of morality… however, natural selection is not itself a respecter of species.

I cannot in any way avoid fitting in with evolution as a whole, so why should I focus on one selfish part, where I identify with future generations of my DNA? It seems to me that must have an emotional component. But if we strip away the emotional AND suggest one should take a functionalist point of view, well, there are no worries.

Natural selection will continue whatever I do.

Let’s take this last bit first. I think it is complicated:

  1. Immediately, I want to stress that there is always the option of delaying judgment, or agnosticism. Reason is not infallible - it is often guided by emotions and assumptions we are aware of, and often also by emotions and assumptions we are not aware of. So when in a real contradiction between emotions and reason, we might, especially if we do not seem to immediately lose anything, a) delay the choice, or b) make a choice but keep an agnosticism about whether it was the right one.
  2. It depends, for me, on what reason, whose reason, and for that matter whose emotions/intuition.
  3. A problem with the choice is that emotions and reason are mixed. It is muddy in there. Reason depends on emotions, especially when we are talking about how humans should interact - iow, what seems reasonable will include emotional reactions to consequences, prioritizing inside reasoning itself, and the ability to evaluate one’s reasoning (such as: have I looked at the evidence long enough? - which is evaluated with emotional qualia; see Damasio). And of course emotions are often affected strongly by memes, by what is presented as reasonable, by assumptions in society and culture, etc.

When someone claims to be on the pure-reason side of an argument, I immediately get wary. I just don’t meet any people without motives, emotions, biases and so on. If we are trying to determine the height of a tree, OK, I may dismiss emotion-based objections after the rational team has used three different measuring devices and come to the same measurement, despite it seeming off to the emotional team. But when dealing with how we should treat each other…

In a sense, what I am saying is that reason is often used as a positive term - IOW it represents logical work with rationally chosen facts, gathered in X positive types of ways… etc. But actually reasoning is a cognitive style. A neutral one. It can be a mess; it can be well done. It may have false assumptions that will take decades to recognize but are obviously false to others. It is just a way to reach a conclusion. Some do it very well. Some do not.

The reasoned idea within science was that animals did not have emotions, motivations, desires, etc. They were considered mechanical, with another significant group of scientists thinking that any such claims were anthropomorphizing, unprovable, and confused in form, though these mainstream scientists were sometimes technically agnostic. That was the mainstream position until the 70s, and it was dangerous for a biologist to go against it in any official way: articles, public statements, etc. People holding the opposite opinion were considered to be irrational, projecting, anthropomorphizing, and following their emotions.

Now of course this example is just an example. It does not prove that reason and emotion/intuition are equally good at getting to the truth or that reason is worse.

I bring it up because, basically, what appears to be reason need not be good. It is just descriptive, without valence. Certain parts of the mind are more in charge, and they have their toolbox. Maybe it is good use of the tools, maybe not. It is an attempt by the mind to reach conclusions in a fastidious manner, based often primarily on word-based arguments. This isn’t always the best way to figure something out. And underneath the reasoning, the emotional world in the mind is seething.

OK, let’s look at the motion sickness. I’ll keep this one short. It’s a good example on your part, and I do not think I can or would want to fully counter it. But let me partially counter it. In the case of morals, we are talking about what it is like to live, given who we are. If we are going to say certain behaviors are not good, then one such behavior might be putting a fan up against someone’s bed. Now this will come off as silly, but my point is that even though the person who gets nauseous because of this is having an inappropriate reaction - fans and beds can give one an experience that resembles a situation where one needed to throw up - it still makes the guy in the bed have a bad time, even if he ‘shouldn’t’.

So here we are, after this long evolutionary process, reacting emotionally to a lot of stuff. Right, wrong, confused, misplaced emotions… quite possibly. Emotions that perhaps worked to protect us but now react to harmless things. But we have those emotions. We react these ways.

If we do not consider the emotional reactions to the moral act and to the consequences of any moral rule, then we are ignoring a large part of what is real. IOW, if we just focus on the survival of our genes creating more gene bearers, we are removing a large part of the real from our calculations.

  1. This may have serious consequences regarding our survival.
  2. But regardless, I think it is wrongheaded even if it did not.
  3. I question our ability to know when something is simply a vestige of a no-longer-relevant reaction, and when it is a deeper insight. I see reason as often being hubristic when it comes to these evaluations.

And to be very clear, I am not arguing that we should do away with rationality. I am pro-combination. So when I point out the problems with rationality, I am not saying that emotions have no problems or that we should switch to just them.

The cardiac surgeon, in all likelihood, is working on someone who smoked or overate and did not move around very much. And if they did, then the cardiac surgeon is adding a way of working on top of what evolution set us out to do. But even more importantly, if we are to take from evolution what morality’s function is, why would we then ignore what evolution has given us? So it is that juncture I am focused on. I don’t have problems with technology per se. IOW my argument is not “hey, that’s not natural” - with all the problems inherent in that - but rather the following:

I note that you think our morality should be based on its function in evolution. Evolution is given a kind of authority. Then, when it comes to how our evolved emotions deal with morals, we must modify that. If we are appealing to evolution’s authority, why stop at deciding it is about survival?