Functional Morality

I am claiming that there is an objective good, that morality is objective. Do you disagree with that?

We know where emotions and morals come from and why they evolved, so we can determine what they should say without bootstrapping from them.

My point in this thread is that, starting with morals as an empirical phenomenon observable in humans and certain other social animals, we can examine what morals are and why they exist. And any claimed moral commandment that undermines the empirically observed reason for the existence of morals must be mistaken: morals must continue to be what they evolved to be.

And what they evolved to be had nothing to do with emotion (except insofar as emotions also evolved to do the same thing).

Or rather, morals are a tactic that evolved because they kept people who used them alive and helped them reproduce.

This is a strange line of argument.

  1. Should only people who are emotionally invested in outcomes discuss anything? Like, should only mathematicians who are emotionally invested in a specific outcome of the Riemann Hypothesis spend any time trying to figure it out?
  2. Often the people who are emotionally invested in an outcome are the worst people to decide it. That’s why we have courts and arbitrators and mediators and trilateral talks. Neutral third parties are often better at resolving disputes.
  3. I’m not saying emotion does nothing or doesn’t matter, I’m saying emotion isn’t the basis, isn’t a component, of morality. As I say above, emotions will often align with morality, and ‘naive’ morality will often align with survival, because they all evolved to the same ends. But where they differ, it is survival that wins. And I, weak as I am, feel and follow emotions, but I often do so knowing that it is immoral.

This poses the interesting question of how to distinguish rationality from morality, i.e. rationality also evolved, so why doesn’t survival trump rationality? I would look to what rationality and morality each purport to do. Rationality is an attempt to describe something that exists independently of humans. It is a way of describing the world. Morality, by contrast, is something we created (or that was created as a part of us).

I think you would have to agree with this distinction: if morality is based on our emotions, then it doesn’t exist in a world without our emotions. Rationality, logic, math, those things exist without us.

Which morality is objective? We have evolved a set of moralities, and some of these moralities hold that we should not survive – anti-natalism – while some consider us parasites to such a degree that we should be eliminated to protect other life. The transhumanists have moralities, or perhaps aesthetics, that would have us choose the way homo sapiens ceases to exist – they are the most likely of the three to win the natural-selection battle with other moralities. I can’t see how one can know the objective good, nor can I see that teleological arguments based on evolution lead to any conclusion about what is good with a capital G. We can come up with tactics that might be good for the spreading of our genes, though that does not sound like morals to me. Evolution led to a capacity. That capacity - the portions of our nervous systems, say, that amongst other things came up with morals - may or may not be adaptive in the long term. And we cannot assign it a purpose. Once this capacity is present, it is clear that it will be applied to all sorts of purposes.

I am not sure if your ‘why’ is teleological here, but this is a bird’s-eye view. Or view from nowhere. In situ we have a way of creating meaning for ourselves, and that meaning is emotionally evaluated and generated, and not bound by any ‘purpose’ in evolution. If there were a purpose in evolution, and it wanted control, it made a mistake when it came up with our capacities and tendencies, since we evaluate and generate morals based on rationality AND emotions. If I am supposed to respect evolution’s goals, it seems to me I must respect the processes and skills it gave me to do things and evaluate things. IOW I have been made such that I mix emotions and rationality, both when I function like a consequentialist and when like a deontologist. I find emotions deeply involved in both processes, and I note this in everyone I meet also. We are this way. I don’t see why I should just abstract out and respect in SHOULD terms evolution’s intent for morals, but ignore evolution’s result in making me/us the way I am/we are.

Who says? How do you know that is good? What if we achieve interstellar travel and kill off lovely, smarter, less nasty species, perhaps all of them? Where can I even stand to view the objective good of our species? All you are talking about is heuristics for survival. That’s not morality. I feel a bit like when I see physicalists talking about being spiritual. They may have wonderful philosophies of life, be great people, generate sweet and caring ethical codes, etc., but they are not spiritual. That word literally entails other stuff. So does morality entail more than heuristics. It includes a partly emotional, desire-based choosing of which goals we want good heuristics for, and often also how we feel about the heuristics themselves. Or it would not be so common to challenge the idea that the ends justify the means. What you describe is certainly not objective morals. It is tactics towards what you consider the one goal, a goal we don’t even know is objectively a good one, though it might be good for us. I consider it an extremely limited goal for us, just one part of what morality covers. But even if it were the only goal of ours, we cannot know if it is a moral one. I mean, who are we to judge the goodness of the human race? Or better put, who are we to think we can judge it objectively and without emotion?

Again, teleological. But further, we evolved as creatures that evaluate morals emotionally. If we are going to use a teleological argument, then perhaps we should leave that alone, rather than deciding that we can and should do it only rationally - which I don’t think is possible, in any case. Further, it seems completely irrational to generate the way humans relate to each other without making emotional and desire-based evaluations central. I mean, we have to live with all the consequences of those morals, and emotional consequences will be real and central. For some reason emotions are often considered quasi-real. And this is often based on their fallibility. First, reason is also fallible; but further, emotions are real. Now I know that you would not assert that emotions are not real. But note how they end up somehow being moved off the table when they are central to pretty much all the consequences of morals. And then also in the process of choosing and evaluating, etc.
I don’t think there are more than a handful of people who think morality is JUST about calculating the survival of the species. So you must then explain how evolution led to us having a belief/method/approach that runs counter to what you are saying. If evolution can give us a should, this would entail that it gives us a should around methodology also.
IOW your argument seems to be that since evolution shaped morals and evolution is all about surviving, then morality is about surviving, period. But evolution led to us, and other moral-creating species, making morals about much more. Perhaps you need to note what evolution has selected for: and in this case it is moral-making animals whose limbic systems are involved in morals at all levels.
Personally, I don’t really care what evolution wants or intends, but I can see what was selected for in our case.
If it somehow turned out that rationality indicated I should kill off my wife after she births our second child - that the best AIs, analyzing all the complex chains of effects, see it as the best heuristic for human survival that husbands/fathers do this - OR EVEN if God told me to do it… no way. I won’t. I fail God’s test of Abraham, though I have often wondered if in fact he failed it.
This was an extreme example - though one that fits nicely with our other discussion - but there are all sorts of other moral-like guidelines I would follow regardless of what the best minds said was our best strategy for survival. And if you think that is a problem, blame evolution. Evolution made my moral-making, or in my case preference-making, process such that there are things I will not do (even for money, or for what the supposedly detached people with views from nowhere say is moral). And there are things I will do that may go against their supposed best heuristics.

See above about emotions always being in the mix of creating, applying, modifying, justifying… etc. That is the tactic we evolved.

They are all emotionally invested in finding the correct outcome. And they likely all are interested in their - perhaps at this stage vague - guess of direction being the right guess. And the one who solves it will have been extremely emotionally involved in finding the answer. IOW I wasn’t arguing that Carleas shouldn’t participate, but rather trying to highlight - to corner you - that you are likely driven by emotions, even in this telling us we should prioritize survival because that is what evolution gave us morality for. This likely seems like a view from nowhere, but it absolutely cannot be once it is couched as a should. Further, the result of the Riemann hypothesis is not like the result of a morality argument, or of a decision about how we should, for example, relate to each other. The latter has to do with what we like, love, hate, desire, are repulsed by, and those emotional reactions will guide our personal and collective decisions about what is moral. In fact they must. If they are not involved we may all end up working in some dystopian panopticon-tyranny that seems efficient and, at least in the short term, seems to completely guarantee survival, but which we hate every waking minute living in. For example. I think there are other problems that will arise. Some can be based on emotions now having an unremovable part in what we will even want to survive in, and thus making emotions a selection factor, like it or not. Others need not even be bound to your fundamental should - that we must base our morals on what we think evolution intended morals to be for.

Often the people who come up with morals in what they think is a view from nowhere, or objective, or disinterested, end up making horrible decisions. I would not use that as an argument against using rationality. I don’t think it works as an argument against including emotions. And as far as I can see, the people who judge emotions and present themselves as avoiding their influence are less aware of how their emotions are influencing their choices than the people who do not present their ideas this way. But further, those groups that make decisions are applying morals decided in part on emotional grounds. And they likely have strong feelings about those morals. Courts often use juries, lawyers use emotional arguments, etc. Yes, emotions can lead to wrong decisions. But they are central to morals, to determining what morals are and how they affect us. Anyone trying to eliminate emotions from the process of deciding morals will be incredibly lucky if they come up with a moral system that does not feel unnecessarily bad in a wide variety of ways. And if they have the single goal of species survival, this could lead to solutions like…

It is moral to kill off 70% of the population tomorrow and have an elite take over complete control of all genetic combination – read: via sex, GM work, etc. And so on.
Any country that decided to come up with morals without including emotions in the process is one I would avoid, because essentially such a country, with that one goal, has no interest in the bearers of the genes except to the extent that they bear them. Science fiction has many such dystopian ‘logical’ solutions.

How could it be immoral, in your system of belief, since we clearly evolved with this mixed approach to choosing and creating? It is part of our evolved criteria in all such decision-making.

This makes you a kind of Platonist, or an adherent of some other form of metaphysics that has these things existing outside us. But here’s the thing: you are deciding NOT to work with morals the way we obviously evolved to work with morals, and clearly, to me, emotions are involved deeply in all moral evaluation - they become clearly visible when there are disagreements about morals, which are regular and ongoing. I don’t really care what evolution may have intended my emotions, morality and rationality to be for and have as goals - and perhaps I am more adaptive precisely because I take the freedom given to me by evolution and don’t let its intent rule me. But for the sake of argument, let’s say I should go with evolution’s intentions: shouldn’t I then go with the full set of ways one evaluates and chooses, which would include both emotions and rationality, which are intermingled and interdependent in any case? And yes, morality does not exist in a world without emotions, and it never has. Animals had behavior before emotions, perhaps, but not morality.

Further, ‘rationality’ is of a different category than the rest of that list. I would need to know where you see rationality existing without us. And whose rationality? Rationality is a human process - also exhibited in more limited, though often highly effective, forms in animals. We can call it an animal process. For it to function well, in terms of human interactions, the limbic system must be undamaged, and emotions are a part of that process. But even without that proviso, within the physicalist paradigm I do not find rationality possible anywhere outside us, unless we are talking about aliens or perhaps one day AIs.

And note:

I still think this is in the air.
I think one of the reasons we have intermeshed emotional and rational decision-making is that the higher forms of rationality get weaker when there are too many variables and potential chains of causes. AND rationality tends to have a hubris that it can track all this. For some things, emotional reactions have a better chance, though of course this is fallible. But then both are fallible, and both are intermeshed. And just as some are better at rationality, some are better at intuition than others. I would find it odd if what determined how we lived was determined without emotions and desires as central to the process of determining it.

I’ve mentioned Damasio, here’s a kind of summary. Obviously better to read his books or articles…
huffingtonpost.com/fred-kof … ccounter=1

People can’t even make choices without emotions. But then here, with morality, we are talking about making choices about things we react to with strong emotions, choices whose effects affect us emotionally, that affect our desires and goals on emotional levels.

I see no reason to consider my DNA more important than my life - how it is lived, what it feels like, what my loved ones experience, the state of what I value - nature, etc.

Edit: since you think we should base morals on ‘survival’, it would be good to define what would count as survival. Warning: I plan to find odd conclusions based on the definition.


Perhaps a shorter objection is better:

  1. How do we know that morality is not a spandrel?
  2. Even if it is not, how do we have an obligation to the intent of evolution; in what sense are we beholden to function? Function, evolution, natural selection are not moral agents. What is it that puts us in some contractual commitment to following their intentions? If the argument is not that we are beholden, but rather that X is what morality is for, so we should use it as X - a more determinist connection - then we don’t have to worry about adhering to the function, since whatever we do is a product of evolutionarily-created function. Once I am supposed to follow evolution, use my adaptations, well, how can I fail? And if I fail as an individual, I am still testing for my species, and if my approach was poor it will be weeded out. No harm, no foul.

Thanks for your patience and your excellent replies; they have helped me to develop my thinking on this topic, and I appreciate the critique.

I think there are a number of levels on which we can define it, which I’ll discuss in a minute, and there’s room to debate the appropriate locus of survival as it relates to morality. But I think that debate is separate from whether morality does relate to survival. Morality exists because of its effect on past generations; it seems clear that there is no morality independent of humans, no moral field that we’re sensing, but rather a moral intuition (i.e. innate brain configurations) that influences our behaviors in ways that supported our ancestors in producing us.

But, as promised, some thoughts on ‘survival’:
First, individual gene-line survival means an organism not dying until it produces offspring who are likely to not-die until they produce offspring (a toy sketch of this recursive definition follows below).
At a group or society level, survival means the group continues to exist. It’s a little vaguer here because the ‘group’ is somewhat amorphous, and there aren’t discrete generations for reproduction, but a constant production and death of constituent members.
Defining the survival of any thing inherits the problems of defining that thing, i.e. the “can’t step in the same river twice” problems. Moreover, where morality functions on the substrate-independent level of our existence (thoughts), it isn’t clear whether the survival it requires is the survival of the substrate or the survival of the programs that run on it. Would morality support the transhumanist idea that we should abandon our bodies and upload our consciousness to silicon? Even if we take functional morality as true, I don’t know that that question is settled.
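To make that recursive definition of gene-line survival concrete, here is a minimal toy sketch; the reproduction probability, brood size, and generation horizon are illustrative assumptions I am making up for the example, not claims from this discussion:

```python
import random

def line_survives(p_reproduce: float, horizon: int, brood: int = 2) -> bool:
    """A gene line 'survives' to the horizon if the organism lives to
    reproduce and at least one offspring's line survives the rest."""
    if horizon == 0:
        return True   # the line has persisted through every generation checked
    if random.random() > p_reproduce:
        return False  # died (or failed to reproduce) before producing offspring
    # the line survives if any child's line survives the remaining horizon
    return any(line_survives(p_reproduce, horizon - 1) for _ in range(brood))

# Crude Monte Carlo estimate of 20-generation line survival.
trials = 10_000
rate = sum(line_survives(0.6, 20) for _ in range(trials)) / trials
print(f"estimated line-survival probability: {rate:.3f}")
```

The point of the sketch is just that “survival” here is not a property of one organism but of a whole recursive chain of reproduction, which is why the definitional questions above matter.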

I do think that morality must operate on the meta-organism, rather than the organism, i.e. society rather than the individual. Morality, as a functional trait, works between individuals, so oughts can only be coherent in relation to and support of the tribe or collective. And I have a sketch of an idea that that entails that we should prefer the pattern over the substrate, since the beast that is society exists continuously as its substrate is born and dies in an endless churn.

But that is a weak and fuzzy position, and in any case beyond the scope here.

Sure, but some morality is just wrong. Anti-natalism specifically is pretty clearly wrong, but that statement rests on the functional morality I’m advancing here.

If what you’re asking for is which morality is the functional morality, I actually think that too is beyond the scope of this discussion. “There is an objective morality that we can discover” is a different claim from “X is the objective morality”. I’m making the former claim here, and arguing that we should use the criteria of functionality to evaluate claims about the latter, but I am not making a specific claim about the latter.

I don’t disagree with this idea or those in the surrounding paragraph, but let me make an analogy.

Once, on a hot summer night, I awoke with intense nausea. I lay in bed feeling wretched for a minute, staring at the ceiling, and the nausea passed. I closed my eyes to sleep again and soon again felt intense nausea. I opened my eyes, and shortly the nausea passed again. I did this a few more times as my rational faculties slowly kicked in, and then noticed that my bed was vibrating slightly. A fan that I’d placed at the foot of the bed was touching the bed frame, creating a barely perceptible vibration. I put it together that the nausea was in fact motion sickness. I moved the fan, the bed stopped shaking, and I slept the rest of the night without incident.

The point here is that motion sickness is an evolved response to certain feelings of motion. In particular, our brains are concerned that certain unnatural sensations of motion are actually the result of eating something toxic. The nausea is a response that, if taken to its logical end, will cause us to purge what we’ve eaten, in the hopes that any toxins will be purged with it. In the evolutionary context, that’s a useful response. But we did not evolve in the presence of beds and fans, and so the way we’ve evolved misleads us into thinking we’re ill when in fact we’re perfectly fine.

A similar thing can happen with morality, and understanding morality as a product of evolution, as a mental trait that evolved in a specific context and suited to that context, and not necessarily to this context, may let us “move the fan” of morality, i.e. shed moral claims that are clearly at odds with what morality was meant to do. Given a few thousand years and a few hundred generations of life in this context, we should expect evolution to get us there on its own, but we don’t have the luxury of that.

So, yes, we are this way, there is some information in our emotions and moral intuitions and we should pay attention to them, just as we should take nausea seriously. But we can examine them in other ways at the same time. We can appreciate the ways in which evolution’s result is inadequate to its purpose, and rely on the other results of evolution (rationality and the view from nowhere) to exert a countervailing drive.

You yourself make a few similar points further down, and I basically agree with them: our moral intuitions and emotions are not for nothing, they can be better than our reason for making decisions in certain cases, and we should treat them as real and expected and important in our decision making. But we should also treat them as subject to rational refutation. And when reason and emotion conflict in making statements of fact about the world, reason should prevail (though perhaps you don’t agree with that).

Yes, I think that’s right. But so too are cardiac surgeons deciding not to work with hearts the way we evolved to work with hearts. The project of moral philosophy, as I understand it, must involve some very unusual treatment of moral intuitions, ones that are obscene to our evolved first impression in the way that delivering a baby by C-section is obscene to someone who only understands it as stabbing a pregnant woman in the belly.

And as I said above in reply to Jakob, there’s no contradiction in the most true description of a phenomenon being nigh useless in our everyday lives. In the game of go, there is a saying, “If you want to go left, go right”, meaning that going directly for the play we want is not the best way of achieving the play we want. But that is not to say that moving left is wrong, just that moving right is the best way to achieve moving left. So too, being a naive consequentialist may be the best way to achieve the functional ends I advocate here. Still, though, I would argue that the functional ends are the ends, and if it could be shown that a different naive system better achieved them, it would be damning of naive consequentialism.

There may be an argument that functional morality is actively counterproductive to its own stated ends. I don’t know what to make of self-defeating truths, but I don’t think functional morality is one. I see no tension between understanding and discussing functional morality and still practicing more common moral systems as rules of thumb on a day-to-day basis.

I don’t think this problem is unique to a rationally-grounded moral system. Emotions too can be a basis for hubris; emotion-based religions are some of the most pompous and unjustifiably self-assured systems of belief that we’ve ever seen. We should not be overconfident.

But reason’s advantage is that it scales: we can use reason to analyse other modes of thought, and even reason itself. Through it, we can identify situations where relying on intuition is better than relying on deliberate reflection. We can’t do that emotionally. We can rationally examine emotion, but while we can feel things about reason, we can’t get very far with it.

How do we know any evolved trait isn’t a spandrel? We can look at whether morality influences reproductive success, whether it imposes costs that would require a benefit to offset, whether it’s been selected against in isolated populations, etc. I think all these things suggest that it isn’t a spandrel, that it’s been selected for as part of an evolved reproductive strategy (a toy simulation of this selection logic follows the list below):

  • Amoral people tend to suffer socially. Psychopaths can and do succeed, but they depend on the moral behavior of others, and they are also employing a high risk, high reward strategy (many psychopaths are killed or imprisoned, but many others are managers or politicians).
  • Morality entails evolutionary costs, e.g. forgoing actions with clear immediate reproductive benefits like theft of resources, murder of rivals, or rape of fertile women. That suggests that it has attendant benefits, and that forgoing these provides a reproductive benefit in the long term, e.g. reciprocal giving and social support, not being murdered, and better mating opportunities long term.
  • To my knowledge, morality exists in all human populations, including isolated populations. The isolation may not have been sufficiently long to permit evolutionary divergence, but given the presence of psychopaths it seems that the genes for amorality were there to be selected for and haven’t come to dominate any society.
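As a loose illustration of that selection story, here is a toy frequency-dependent model; the payoff numbers, punishment risk, and population size are all illustrative assumptions of mine, not empirical claims:

```python
import random

def next_generation(pop, benefit=3.0, cost=1.0, punish_risk=0.2):
    """One round of selection: offspring sampled in proportion to fitness."""
    moral_share = sum(pop) / len(pop)
    fitness = []
    for agent in pop:
        if agent:  # moral (1): pays the cooperation cost, gains reciprocity
            fitness.append(1.0 - cost + benefit * moral_share)
        else:      # amoral (0): skips the cost, but risks punishment/exclusion
            fitness.append(0.2 if random.random() < punish_risk else 1.0)
    return random.choices(pop, weights=fitness, k=len(pop))

pop = [int(random.random() < 0.5) for _ in range(200)]  # start roughly 50/50
for _ in range(50):
    pop = next_generation(pop)
print("moral share after 50 generations:", sum(pop) / len(pop))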

Consider the example of motion sickness, or of sugar, or of any other evolved predisposition that we can rationally understand to be actively counter to the reasons for which it evolved. We have intuitions that motion not dependent on our moving our limbs means we’ve been poisoned and need to purge, and that sugar and fat are good and we should eat as much of them as possible. But we know that these are false, that our evolved tendencies are misleading us, and they are misleading us because of the context in which we evolved, in which such motion did mean poison and sugar was a precious resource.

So too did morality evolve in that context; ought-ness is derived from our evolutionary past, and we can look at it in that light. Without reference to its evolved purpose, it has no meaning. If we take the position that the evolved meaning of morality is not relevant, it seems the only alternative is moral nihilism.


This is one of the areas I was probing around, because I think it may be very hard for many adherents of functional morality to stay consistent. Perhaps not you. If survival is connected to genetically related progeny having progeny that are genetically related - IOW sustaining genetically related individuals through time - then transhumanism should be considered bad or evil, if we take the case of strong transhumanism where better substrates for consciousness and existence are created and homo sapiens, as a genetic organism (and physically in general, outside the nucleus of cells also), is no longer present. We will have replaced ourselves with something else. At least in terms of genetic material.

But even setting aside the transhumanism issue: if survival is the guide to morality, the measure of it, it seems to me we can have all sorts of odd scenarios. We freeze our DNA and send it out into the universe with instructions for use, plus an AI to help us seed the first good planet… When we find another civilization somewhere, or the AI gets us going on, say, ten worlds, it seems like we would then be free to do what we want. Like, as long as survival is happening elsewhere, I have no need for morals. We have ensured continuation; now we can do what we want here. Or we could set up a world where the AI combines DNA to make 1000 humans. Their genitals, after puberty, are harvested for DNA, and they are all put down. The AI waits a thousand years and repeats: mixes new DNA, new batch of humans, new cull, repeat. This prevents mass self-destruction events, and the large gaps between generations 1) slow down changes, so the DNA really stays close to earlier generations longer, and 2) create longer survival. IOW there may well be an incredibly efficient way of making our DNA survive - and occasionally create humans - for vast eons, which at the same time entails an existence that is repulsive to most people.

Survival, and not much else.

I didn’t say enough. Anti-natalism is one of the moralities that evolution has given rise to. Right now it is a minority position. Perhaps it will become the majority or power morality. Then this is what evolution has led to. It might lead to our extinction, but evolution led to it. If I, coming from a now more minority position, push for my morality, which includes life, I must wonder, as the anti-natalists take over - before they sterilize all of us - whether I am on the wrong side; whether evolution has led to anti-natalist morality and the anti-natalists win. Whatever happens would be functional; it might just not be what we want functional to be. IOW it was functional that dinosaurs became extinct. Evolution and natural selection select for whatever fits, and what fits depends on whatever else exists - other species, the weather, etc. I don’t really see why I should do anything other than prioritize what I want, and let natural selection see to the outcomes. Just like every other individual in other species. Because once I follow my interests and desires, including mammalian empathy, I am living out what I have been selected to be like. Whatever this leads to is functional, though it may not include my kind.

This might seem obvious: if it is survival of our or ‘our’ genes, and these shaping new generations of ‘us’ or us, then some of transhumanism is wrong and I should oppose it, since it will replace our genes and us.

On the other hand, if I am a functionalist, a natural-selection supporter, then if transhumanism wins, that’s fine. I do not need to think in terms of the best morality or heuristics. We will do what we do and it will be part of natural selection - I mean, unless I have an emotional attachment to humans… :smiley:

IOW there is some weird mix of selfishness - I should support functionalism as far as it furthers my species (though not me in particular) - and of following the intended function of morality… however, natural selection is not itself a respecter of species.

I cannot in any way avoid fitting in with evolution as a whole, so why should I focus in on one selfish part, where I identify with future generations of my DNA? It seems to me that must have an emotional component. But if we strip away the emotional AND suggest one should take a functionalist point of view, well, there are no worries.

Natural selection will continue whatever I do.

Let’s take this last bit first. 1) I think it is complicated. First, immediately, I want to stress that there is always the option of delaying judgment or agnosticism. Reason is not infallible - and is often guided by emotions and assumptions we are aware of, and then also often by emotions and assumptions we are not aware of. So when in a real contradiction between emotions and reason, we might, especially if we do not seem to immediately lose anything, a) delay choice or b) make a choice but keep an agnosticism about whether it was the right one. 2) It depends for me on what reason, whose reason, and for that matter whose emotions/intuition. 3) A problem with the choice is that emotions and reason are mixed. It is muddy in there. Reason depends on emotions, especially when we are talking about how humans should interact - IOW what seems reasonable will include emotional reactions to consequences, prioritizing inside reasoning itself, and the ability to evaluate one’s reasoning (such as: have I looked at the evidence long enough?), which is evaluated with emotional qualia (see Damasio); and of course emotions are often affected strongly by memes, what is presented as reasonable, assumptions in society and culture, etc. When someone claims to be on the pure-reason side of an argument, I immediately get wary. I just don’t meet any people without motives, emotions, biases and so on. If we are trying to determine the height of a tree, OK, I may dismiss emotion-based objections after the rational team has used three different measuring devices and come to the same measurement, despite it seeming off to the emotional team. But when dealing with how we should treat each other…

In a sense, what I am saying is that reason is often used as a positive term. IOW it represents logical work with rationally chosen facts, gathered in X positive types of ways… etc. But actually reasoning is a cognitive style. A neutral one. It can be a mess, it can be well done. It may have false assumptions that will take decades to recognize but are obviously false to others. It is just a way to reach a conclusion. Some do it very well. Some do not.

The reasoned idea within science was that animals did not have emotions, motivations, desires, etc. They were considered mechanical, with another significant group of scientists thinking that any such claims were anthropomorphizing, unprovable, and confused in form, though these mainstream scientists were sometimes technically agnostic. That was the mainstream position until the 70s, and it was dangerous for a biologist to go against that position in any official way: articles, public statements, etc. People holding the opposite opinion were considered irrational, projecting, anthropomorphizing and following their emotions.

Now of course this example is just an example. It does not prove that reason and emotion/intuition are equally good at getting to the truth or that reason is worse.

I bring it up because, basically, what appears to be reason need not be good. It is just descriptive, without valence. Certain parts of the mind are more in charge and they have their toolbox. Maybe it is good use of the tools, maybe not. An attempt by the mind to reach conclusions in a fastidious manner, based often primarily on word-based arguments. This isn’t always the best way to figure something out. And underneath the reasoning, the emotional world in the mind is seething.

OK, let’s look at the motion sickness. I’ll keep this one short. It’s a good example on your part, and I do not think I can or would want to fully counter it. But let me partially counter it. In the case of morals, we are talking about what it is like to live, given who we are. If we are going to say certain behaviors are not good, then one such behavior might be putting a fan up against someone’s bed. Now this will come off as silly, but my point is that despite the fact that the person who gets nauseous because of this is actually having an inappropriate reaction - because fans and beds can give one an experience that resembles when one needed to throw up - it still makes the guy in the bed have a bad time, even if he ‘shouldn’t’.

So here we are, after this long evolutionary process, reacting emotionally to a lot of stuff. Right, wrong, confused, misplaced emotions… quite possibly. Emotions that perhaps worked to protect us but now react to harmless things. But we have those emotions. We react these ways.

If we do not consider the emotional reactions to the moral act and to the consequences of any moral rule, then we are ignoring a large part of what is real. IOW if we just focus on the survival of our genes creating more gene bearers, we are removing a large part of the real from our calculations.

  1. This may have serious consequences regarding our survival.
  2. But regardless, I think it is wrongheaded even if it did not.
  3. I question our ability to know when it is simply some vestige of a no longer relevant reaction, or a deeper insight. I see reason as often being hubristic when it comes to these evaluations.

And to be very clear, I am not arguing that we should do away with rationality. I am pro-combination. So when I point out the problems with rationality, I am not saying emotions have no problems, and we should switch to just that.

The cardiac surgeon, in all likelihood, is working on someone who smoked or overate and did not move around very much. And if they did, then the cardiac surgeon is adding a way of working on top of what evolution set us out to do. But even more importantly, if we are to take from evolution what morality’s function is, why would we then ignore what evolution has given us? So it is that juncture I am focused on. I don’t have problems with technology per se. IOW my argument is not ‘hey, that’s not natural’ - with all the problems inherent in that - but rather the following:

I note that you think our morality should be based on its function in evolution. Evolution is given a kind of authority. Then when it comes to how our evolved emotions deal with morals, we must modify that. If we are appealing to authority in evolution, why stop at deciding it is about survival?

They may be pompous and unjustifiably self-assured systems of belief, but the jury is still out on whether they 1) added to both survival AND better lives, and 2) are still better than, say, secular humanism. Testing such things is not easy.

Certainly you are correct that emotions can be problematic. But I am not arguing that there should be no rationality - and even in religions and folk morality, in fact in any moral system I have seen, there is a mixture of reasoning and emotion, consequentialism and deontology. I am arguing for the mix, and that the mix is generally unavoidable in any case. I think the cane-toad type of hubris in rational solutions often comes about because we think complicated situations can be tracked using frontal-lobe skills alone, and that evolution made a boo-boo when giving us tendencies to use both emotion and reason. So WE cut out the emotion. I also think there are skills in emotion/intuition - or better put, some people are more skilled than others, just as in reasoning.

I disagree. I make very rapid decisions all the time about whether to go with intuition or to pause and analyze and reflect. Actually, I think nearly the opposite of what you said. We cannot make such decisions without intuition. Certainly reasoning can come in also. But reasoning can never itself decide when it should be set in motion, when it has done enough work, when it is satisfied it listened to the right experts, when it is satisfied with its use of semantics in its internal arguments.
Rationality AS LIVED, as opposed to on paper, is filled with micro-intuitions, is generally initiated by a macro-intuition, and one knows when to stop via yet another intuition. And there are qualia at all stages.
When we imagine reasoning we often imagine it as if it is happening on paper and the words have simple relations to things.

But actually it is not happening on paper, even when written and read, but in minds, and in the process there is an ongoing set of intuitions.

But, again, importantly, I am not for throwing out reason. I just think we should not throw out emotions/intuition AND further, I don’t think we can anyway.

Actually, I think if we go into the phenomenology of checking out an argument, we will find that intuition rings a bell, and then we zoom in to find out why. Especially in tricky arguments.

And to jump: I imagine a kind of traditional male female argument. The man cleverly explains why whatever he did was actually ok, given her behavior, given some factor in his life, given some complicated reasoning that seems right to all parties. And the woman is sitting there thinking ‘BS’.

I see that the way we are raised tends to work against integrating the various approaches. Or to put this another way, we tend to officially prioritize intuition or rationality, emotions or calm, word-based attempts at logical arguments. Underneath, I think, each approach is using facets of the other, but because of how we are trained, we feel we need to identify with one. Also we tend to want to hide the emotions underneath our positions and the intuitions in our metaphysics, say, because actually all sides tend to present themselves as rational.

I think this leads to adults who are damaged, and this is only increasing as pharma and psychiatry pathologize more and more of the ways we limbically react, and in modernity in general. So we think we have to choose one approach, when in fact homo sapiens have them tied together, so we might as well practice that: get to know our reactions and couple the two approaches.

That has been selected for. Maybe it won’t last. Maybe we will be replaced by AIs that have no emotion. I can’t see how a not radically fragmented homo sapiens would not find that horrifying. Problem is, humans are radically fragmented. God, I hope I got that triple negative right.

I’d need to see the science. I am not even sure this is the case. If you are chaotically amoral, well, that leads to a lot of bad reactions, unless you are some kind of autocrat - in your home, in your company, in your country; if you are the boss, you can probably get away with a lot, and in fact those guys often have a lot of kids, all over the place. Hence they are evolutionarily effective. But more pragmatic amoral people - I see no reason for them not to thrive. Maybe, just maybe, less in modern society, and maybe less in tribal society. I think they have many benefits in between, even the chaotic ones. In fact, a lot of the names in history books probably had amoral tendencies… and quite a few kids.

I do wonder how they do on creating babies however.

If it is in all populations, it might be neutral or only slightly negative functionally. A byproduct of some other cognitive capacities that helped us. Again, testing this hypothesis is hard.

I think we have problems on the diet end of your justification not because of faulty desires, but rather because of cultural problems. I think sugar is a drug and we use it to self-medicate. A psychotropic drug. You know that old experiment with rats triggering cocaine being made available, or stimulating the pleasure center of the brain? The idea that if we could, we would just destroy ourselves? Well, they redid that experiment but gave the rats complicated, interesting environments, and very few got addicted. And I can imagine that even the nice complicated homes they gave the rats probably had less of the smells that rats’ bodies expect, and lacked the nuance there is in what was once the original environment of rats. I think, just as in the cardiac surgeon example, we are using culture to fight a nature that is having a problem because of culture.

In the ‘are there objective morals?’ sense, I am certainly a moral nihilist. On the other hand, this does not mean we need to stop determining what we want. We can ignore whatever we think evolution intended and decide what we want. No easy task, of course, given our wants, natural and culturally created, often by those with great power, in their interests. But given that as social mammals we have self-interest but also empathy and tend to collaborate, there is room for desire to create what may not be morals but heuristics. Desire (and emotions and intuition) in collaboration with reason.

That’s where we are now, with what we are and have. Ironically ignoring whatever evolution intended or selected for might in fact be the best strategy for survival, though I am not saying it is, nor do I know a way to test that. However, I think there are reasons to think it might be a better route.

Focusing only on survival, and survival of genes, deprioritizes a lot of the things that make us like life. I think we will soon, if not already, have solutions for the survival of genes that do not need us to enjoy life at all. Forget the panopticon Amazon workplace; I mean complete dystopias that, on the other hand, keep those genes chugging along. Of course, we might opt out if that is where logic leads us, not feeling at home in the efficient world we created, with one prime criterion for the goal and reason trying to be devoid of emotion and intuition as the means of working towards this final solution.

I see a few central disagreements/differing definitions running through your replies, so I’m going to take your points a bit out of order. I apologize for the wall of text, I hope it serves to bring the separate threads of our conversation back toward the main point.

First, returning to the meaning of “functional” and the role it plays in my argument. You say that “it was functional that dinosaurs became extinct”, and I think this suggests that we are using the term “functional” very differently. If you mean that it was functional for humans that dinosaurs became extinct, I agree. But it certainly wasn’t functional for dinosaurs, since most dinosaurs’ genetic and social lines ended. The former claim we could compare to a claim that it would be functional for modern humans to make mosquitoes go extinct, and thus morally proper. That claim doesn’t seem far-fetched to me.

But I feel like that line is a bit of a non-sequitur, since while the extinction of a species may be functional, it is only distantly moral. The way I’m using functional here is just this: morality exists because individuals with “moral machinery” survived and those without it perished. The evidence for this is the universal existence of morality in human groups, and the near universal existence of morality in individuals. It is functional for those who have it in the sense that the survival of whatever produces it (be it genes or memes or something else) was a result of how morality shaped behavior.
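(As a hedged aside, here is a throwaway toy model of that claim in Python, with entirely made-up numbers: a heritable trait that confers even a small survival edge tends toward fixation in a population, which is all the word “functional” is doing in my argument.)

```python
import random

# Toy model of "functional" selection: agents carrying a heritable
# "moral machinery" trait are slightly more likely to survive and
# leave an heir. The population size, 5% edge, and 300 generations
# are all illustrative assumptions, not measured quantities.

def next_generation(pop, advantage=0.05):
    # Survival-weighted reproduction: trait carriers get a small edge.
    weights = [1 + advantage if has_trait else 1.0 for has_trait in pop]
    return random.choices(pop, weights=weights, k=len(pop))

pop = [True] * 100 + [False] * 900   # trait starts rare, at 10%
for _ in range(300):
    pop = next_generation(pop)

print(f"trait frequency after 300 generations: {sum(pop) / len(pop):.2f}")
```

Run it a few times: the trait almost always goes to (or near) fixation, without any agent ever “aiming at” survival.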

The role this plays in my argument about morality is that we can’t abstract morality out of its evolutionary context. We can look at morality as a phenomenon in the world, observe it empirically, ask what it does and why it exists, and in so doing discover what it means to say that one ought or ought not do X. The only objective reference of those terms is that evolutionary origin, the role morality played and why it exists. The only recourse for morality is thus to its function: it exists because it improved the odds of survival of the individuals and groups who shared the trait. (I grant your point with respect to other reasons why people act, but I would contend that those aren’t morality. Moreover, I think ‘moral nihilism plus heuristics for a subjectively meaningful life’ is compatible with my argument here; moral nihilism is a viable out. One can take the position that “morality is just something that helped apes survive, therefore there’s not really any objective truth of morality”. That out seems less appealing to me than the conclusion it’s trying to avoid, but I’m not sure my arguments here address that choice. At one point you seem to essentially ask why you should do what you are morally obligated to do, and for that I make no attempt at an answer.)

I think this largely addresses your points about antinatalism and alternative forms of survival. My argument here is, again, not for or against any specific moral system, but rather for a meta moral system that says that moral systems should be evaluated on their likelihood of leading to survival, because that’s the only meaningful way to evaluate whether something is moral.

But there is a predictive element to this position: we’re talking about a prospective best-guess about the effect a system will have on survival. It is absolutely true that, if antinatalism ends up leading to the long term survival of humanity, we should score it as functional. You are right to point out that, ultimately, the truest test of functionality is what actually ends up succeeding, but that doesn’t seem to favor any particular hypothesis about which moralities are actually functional in prospect. In the same way, I could say “I think that by reading these monkey bones, I can predict the stock market, and you have to admit that the truest test of what method of market prediction works the best is what method actually does predict the stock market; if my monkey bones method actually predicts the stock market, then you would have to admit that it was the best method.” And so you would, but that doesn’t say anything about whether the monkey bones method actually does work, and we still have every reason to think that it is a bad way to predict the stock market.

Similarly, in prospect, we have every reason to think that antinatalism is not a good moral system under the metric of survival. We can come up with any number of scenarios whereby a moral philosophy that literally requires the slow extinction of the human race actually ends up preserving the human race better than other moral systems (e.g. maybe if everyone stops having children, they spend more time extending lifespans, copying consenting adults at the atomic level, and conquering the stars etc. etc.). But that seems unlikely, given what we know about the present state of human longevity and atomic-level copying. Still, someone can coherently say that we should be antinatalists because that’s the moral system that will best achieve functional aims; they may be making a mistake of fact, making a bad prediction about what will work, but that’s an empirical question about the future, and the truest test is indeed the arrival of the future.

I do think you raise an important distinction that I need to make: we observe morality, and I argue we should conclude certain things from it; similarly, we observe antinatalism, so why shouldn’t we conclude similar things from it? If antinatalism can be like eating too much sugar, why can’t the same be said of morality itself? To this I point to my comments on whether morality is a spandrel. Antinatalism doesn’t seem to pass muster the way morality does: it’s strongly negatively associated with reproduction, it’s certainly costly (thus the negative selection), and it tends to die out as quickly as it arises: it’s been proposed many times in many places and has been rejected (likely because everyone who practiced it died without raising any children to believe it).

I don’t think there’s any tension in saying that certain traits that exist in an evolved organism are contingent and haven’t been selected for, and I think you accept that based on your question about spandrels. The existence of particular moral beliefs doesn’t suggest that those beliefs have been selected for; the near-universality of some kind of moral belief in all humans does suggest that the underlying machinery has been selected for, i.e. has conveyed some survival benefit on the people whose genes express that machinery.

I think we can say similar things about your proposed reductios (transhumanism, and AI breeding a new batch of humans for one generation every x-thousand years). It may be that those methods serve survival better, and that could be shown by someone trying those systems and actually surviving. But regular reproduction and genetic evolution have proved a pretty effective means of survival, so it’s reasonable to think that they will more effectively continue our survival than exotic systems like the AI harvesting, breeding, and euthanizing generations of humans. Moreover, if what we want to see survive is society, then a bunch of DNA held in stasis doesn’t well achieve that goal (this goes to what particular form of survival is best, which I don’t think is answered by functional morality, nor does it need to be answered for the purposes of making a case for functional morality).

The ‘seeds to the stars’ reductio raises the open question of at what point we can rest in our pursuit of moral action. In most moral systems, it’s a good thing to save someone’s life, but once someone has saved someone’s life, they aren’t absolved of moral responsibilities. Even after saving a million lives, we can continue to do good. As a matter of subjective experience, we may decide we’ve done enough and no longer care, but it would seem a strange moral system in which the morality of an act actually changes based on the past moral acts of the actor (I can’t think of any that do this expressly, at least if an act is taken ceteris paribus).

But I take your more general point here: functional morality probably commits us to accepting some odd scenarios. I’m OK with that. Odd scenarios are philosophy’s bread and butter. Earlier I alluded to not being able to step in the same river twice, a claim that sounds odd upon first encounter but is normal and mundane in philosophy. And I would expect the truth to be somewhat unintuitive, given the same limits on intuition that I’ve been relying on in this thread: we have the brains and intuitions of savanna apes, and our intuitions are ill-suited to space travel.

I don’t mean to be too dismissive of oddness as a weakness, I do think intuition is often useful as an indicator of subtle logical mistakes. But I also think our oddness tolerance should be properly calibrated: even given that we’re committed to the positions you propose, the scenarios themselves are so odd that any moral conclusions about them will feel odd. If functional morality gets odd at the margins, so does every moral system I’ve ever seen. We have poor moral intuitions about AI, because we have never actually encountered one. In every-day situations, functional morality will work out to support many naive moral intuitions, and will approximate many sophisticated consequentialist and deontological systems. Are there any everyday situations where functional morality gets it wildly wrong?

To your points re: reason vs. emotion, I admit I’m losing the thread of how this impacts our disagreement. For one thing, I think we basically agree on the role of both emotion and reason, i.e. that they are both useful and valuable, and both can be flawed. But more importantly, I don’t think that conceding that emotion sometimes provides important insights, and that we should be wary of too easily dismissing emotional/intuitive reactions as merely vestigial, undermines my point that our moral intuitions can be rationally compared against what we know about how moral intuition arose in humans and what purpose it served. The way we know if a moral intuition is ‘right’ or ‘wrong’ is whether or not it fulfills its role in tending to increase the odds of survival. There is an objective reality, and our reason and emotion are both useful in helping us discover it, but they should arrive at the same answers because they are both discovering the same reality.

(I think I’ve addressed all your main lines of argument, but if I missed any please let me know, particularly if my omission seems calculated to avoid a particularly devastating point.)

Carleas, if we ever cease to exist, we can’t exist.

Morality is not about survival in some form, it’s about the quality of it.

I would say that my argument here is exactly counter to this. A being that suffers through life and reproduces will pass on its pattern, a being that enjoys life and fails to reproduce will not. We are the descendants of beings who reproduced, regardless of any subjective pleasure or pain they felt in getting there.

Of course, pleasure and pain are tuned to the same end, so the subjective experience of a life that leads to reproduction is likely to be positive. Safety, nutrients, and reproduction all feel good because beings that pursue those experiences are more likely to survive and reproduce.

I have to admit I am too lazy to go back and understand the points you are responding to. I will just respond below to points I have opinions about now, reactions that may even contradict things I’ve said before.

I am not sure if we had abstracted it before we had evolutionary theory, but we certainly had morality outside of that context, and even do now. IOW, morality often goes against, or at least might seem to go against, my own benefits in relation to natural selection as an individual, and at the species level it is not based on this consideration, at least consciously. Let’s for the sake of argument accept that morality was selected for. OK. And in what form? Well, it hasn’t, generally, been in the form ‘Whatever leads to survival is the Good’. What got selected for was a species that framed moral issues in other ways. So if we want to respect natural selection, we would continue with that unless we have evidence that this is not working.
IOW the trait that got selected for was not Morality = survival.
What got selected for was trying to be Good, often in social ways that fit ideals which we did not directly think of in terms of survival. Now, underneath, this may have been doing just that, but precisely for that reason we have no need to now consciously think about survival - perhaps having this as the heuristic is less effective, for example.

Perhaps I’ll reword in response to this: consider the possibility that having moralities that go beyond, that do not focus (just) on survival or even mainly on survival, is vastly more effective. That we have other ideals leads to more cohesion or whatever, as one possible side effect.

I do feel there is a conscious/unconscious, intuition vs. logic split in here, or between us. Not that I can nicely sum this up in words.

Let’s say that romance is really just pheromones and dopamine-driven altered states. Let’s say that it is actually the best description. It still might radically damage humans to think that way.

I don’t want to assume that my opposition is solely a noble lie argument either. That excess by which our morality goes beyond what has to do with survival, I grant that meaning in and of itself. I am not beholden to evolution. And that is what evolution has led to in any case.

IOW I am not sure why I have an obligation to go against my nature and view morality or preferred social relations as to be evaluated only in terms of survival. My reasons for resisting this are personal, but I could say that I have been selected to not be like that, so would I not be betraying selection to start viewing things in the way you suggest?

It’s a bit like how feelings might guide a golf swing adjustment, even with vague, fluffy terms as heuristics, rather than some set of formulas based on calculus and some of Newton’s laws. You may be trying to get us to use the wrong part of our brains to do something.

I think we need a definition of survival. Is it the continuation of homo sapiens genes? Anything beyond that?

Antinatalism combined with cloning and whatever the rich decide are the best techs to keep their lives long would certainly seem to have a good chance. I mentioned earlier some dystopian scenarios that might very well have great survival outlooks. I think it would be odd not to immediately come in with quality of life, fairness, justice type moral objections, even though the truth is the survival of homo sapiens might be best served by some horror show.

If it turns out that by AI’s assessments the best chance for the survival of homo sapiens is to eliminate 99% of the population and do some cryogenic alternating with short periods of waking for procreation, while the AIs take care of security and safety, must we just knuckle under and choose that?

And I don’t think that is a loopy suggestion. I actually think that some rather dystopic solution would be most likely to extend the survival of homo sapiens genes and lives.

Ah, now I see you respond to this…

Our modes of choosing partners and being social have changed a lot over time. I see no reason to assume that further changes will not take place. We can control much more. Food production went from hunter-gatherer to ancient agriculture to modern agriculture to GM agriculture with crops that cannot breed. Why assume that the ‘best’ method for human production will not radically shift? And it’s not like they are not working toward that out there.

Here you mention seeing society survive. There would be a society, it would just be different; but further, why should evolution care about the specifics of human interaction, if the point is homo sapiens survival? It seems to me you are smuggling in values other than survival in that word ‘society’.

I would think it will. I would guess that it is already in place, in many ways, in the business world, and that Amazon could use functional morality to justify its panopticon, radically efficiency-focused, horrific workplaces. That words like dignity, sense of self, fairness no longer have any priority. Now, a sophisticated functional morality, one that looks way into the future, might find that such business practices somehow reduce survivability…but…

  1. maybe it is better to start from other criteria - even if they all somehow boil down to survivability, which I doubt
  2. I suspect that some other nightmares will be just peachy under functional morality, and in any case we will have no tools to fight against them. We would then have to demonstrate not that many of the things we value are damaged, but rather that this process damages survivability, perhaps decades or hundreds of years in the future.

If we limit morality to survivability, I suspect that we will limit our ability to protect our experiences against those with power.

So Karp, you see morality as a form of leverage (human) nature has on its most powerful parts?
Not sure if I summarize you right. I like the idea I come away with in any case.

Also I agree/think that evolution cannot be explained by using the term evolution.

Hi Jacob!

I think you got it partly right. The other ‘part’, if one can call it that, relates will and space-time to the equation, particularly the spatial component, for the sake of this argument.

For Ambig’s benefit, this linear reduction may be exemplified by the following illustration.

The teeter-totter places a near-absolute weight on one end, with its distance to the fulcrum approaching 0.
That’s balanced against a weight approaching zero mass, but its distance from the fulcrum approaches absolute infinite extension.

Which will have the most power? This is difficult hypothetically, but I would side with maximum extension, hence space/time.
I would like to call this the David & Goliath presumption.

Now the calculation cannot be done now, but perhaps with time approaching infinity, some computer may come up with more than a hypothetical.
So power as defined has not yet come to an estimable definition.
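(To make the balance concrete: a minimal torque sketch, assuming an ideal rigid lever; the limit framing is my own gloss on the post above. Balance requires equal moments on each side, $m_1 d_1 = m_2 d_2$, the common factor $g$ cancelling. With the heavy side taking $m_1 \to \infty$, $d_1 \to 0$, and the light side taking $m_2 \to 0$, $d_2 \to \infty$, each side’s moment is an indeterminate product $\infty \cdot 0$: which end “wins” depends entirely on how fast each limit is approached. For instance, $m_1 = 1/\epsilon$ with $d_1 = \epsilon^2$ gives a vanishing moment as $\epsilon \to 0$, while $m_2 = \epsilon$ with $d_2 = 1/\epsilon^2$ gives a diverging one, which is consistent with the point that the calculation cannot yet be settled.)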

I’m completely with the opening post.

I remember Nietzsche commenting on how primitive moral systems developed according to what survivors were already doing - or more specifically on what they were doing that their rivals were not doing. It’s very revealing that the derivation of the word “moral” comes from “customs” - i.e. what people are accustomed to doing already.

I find it interesting that all the behaviour that we see as negative in both others and ourselves is actually just an inevitable by-product of behavioural tendencies that happen to get selected for. For example: female insecurity is the same instinct that compels them to try to look physically attractive, and male “banter” is the fishing for insecurity in other males so we know who to exclude and who to rely on in team situations, to best dominate when threat arises. They might not be seen as morally admirable tendencies, but not surprisingly they’re the ones that last, because they’re (whether intentionally or not) optimal.

Morals are just unintentional (mostly) and inevitable game theory.
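(To make the game-theory gloss concrete: a minimal iterated prisoner’s dilemma sketch in Python, using the standard textbook payoffs; the strategies and numbers are stock illustrations, not anything claimed in this thread.)

```python
# Iterated prisoner's dilemma: a toy model of "morals as game theory".
# Standard payoffs: mutual cooperation -> 3 each, mutual defection -> 1 each,
# lone defector -> 5, lone cooperator -> 0. (Textbook values, assumed here.)

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(opponent_history):
    return "C"

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b = [], []          # moves made by A and by B so far
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_b)     # each side sees the other's past moves
        move_b = strat_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = {"always_cooperate": always_cooperate,
              "always_defect": always_defect,
              "tit_for_tat": tit_for_tat}

totals = {name: 0 for name in strategies}
for name_a, a in strategies.items():
    for name_b, b in strategies.items():
        if name_a <= name_b:         # each pairing once, self-play included
            sa, sb = play(a, b)
            totals[name_a] += sa
            totals[name_b] += sb

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, total)
```

With self-play included (a stand-in for meeting one’s own kind in a population), the reciprocating strategy comes out on top, the usual Axelrod-style result: conditional cooperation wins without anyone intending it.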

If not based on intention, then it is structural, but not necessarily based on game theory; maybe only contingently?

Totally understandable, given that our conversation has become spaced out. I don’t think it harms the discussion, and your response was still good and well appreciated.

I’d like to start with something you say halfway through, because it’s a nice analogy and touches on many of your other points:

This is a good analogy because it distinguishes our positions well. My attempt here is to provide “the best description” of morality. You say that you are “not sure why [you] have an obligation to go against [your] nature and view morality or preferred social relations as to be evaluated only in terms of survival”, and my response is that that is just what it means to have a moral obligation. Insofar as “morality” means anything, insofar as it means anything to say that one “ought” to do something, it means that doing that thing will advance survival, for oneself or one’s tribe.

And I agree with your observation that “[w]hat got selected for was a species that framed moral issues in other ways”. So too was flavor selected for rather than nutrients, and instinctive fear rather than insect biology, and pleasure and pain rather than reproduction and anatomy. And just as we have used the study of nutrition to recognize that some things that taste good are nonetheless harmful, and that some insects that scare us are nonetheless harmless, and that some things that feel good are bad and others that hurt are good, so too can we decide to overcome our moral intuitions in favor of an explicit morality that, while devoid of romance, is empirically rigorous.

I’ve been reluctant to narrowly define survival for two reasons:

  1. I don’t think it matters. If there’s a moral instinct, it comes from where all of our innate traits come from: a heritable pattern of thought and behavior that led our ancestors to survive. Regardless of how much of that is genetic, how much is culture, how much it operates on the individual and how much on the group, regardless of the many particulars of what such survival may entail, inherited traits can only be inherited where they lead to there being an heir to inherit them.

  2. I am unsure of where morality functions, i.e. what thing’s survival it’s influencing. On the one hand, certain parts of the inheritance must be genetic, but I am unsure how much. I am unsure, for example, whether a group of people left to their own devices would benefit from the inherited mental machinery that, when it develops within a culture, leads to a positive survival impact. If the group itself is part of the context for which the moral machinery of the brain evolved, then it’s not just the genes that produce that machinery that matter, the group itself also matters. I tend to think that’s the case (thus my concern that the “society” continue, and not just genetic humans), but I’m uncertain about it. That uncertainty leads me to want to leave this as an open question. Does this undermine point #1?

First, I’ll note that this is a bit question begging. A solution is dystopic in part for violating some moral principle, so to some extent this smuggles in intuitive morality as a given.

Second, as I said above, I think intuitive morality will fail us more and more frequently as time goes on. To use a near-term example that you bring up: in the past, we just didn’t know what genetic pairings would produce good or bad outcomes, so we left it to chance and instinct. But chance and instinct frequently misled us, and we ended up with immense suffering over the course of history as a result. Pre-modern societies just killed children who didn’t develop right, and many women died in childbirth as the result of genetic abnormalities in their developing babies. So if we suggest that greater deliberation or intervention in genetic pairings going forward is somehow immoral, we need to weigh that against the immense suffering that still happens as a result of leaving things to chance.

I’m not arguing in favor of such intervention, rather I mean to say that merely knowing, merely developing the ability to predict genetic outcomes in advance, requires us to make a moral decision that we never had to make before. It may be creepy to centrally control or regulate genetic pairing, but if we know that (a + b) will create a resource-hungry and burdensome locus of suffering, and (a + c) will create a brilliant and productive self-actualized person who will spread happiness wherever she goes, there is at least as strong an argument for the creepiness of not intervening. (Note that I don’t use “creepy” in the pejorative sense here; I intend it as shorthand for the intuitive moral reaction and, subjectively, I think it captures what intuitive moral rejection feels like.)

So, I reiterate the point I made above: our intuitions are bad at the future, because they are the intuitions of savanna apes, and not of globe-spanning manipulators of genetic inheritance. We will need more than intuition to make sense of these questions.

My response is as you would expect: I think those things aren’t particularly functional, since a large underclass of people without “dignity, sense of self, fairness”, etc. leads to things like the current collapse of global institutions (and, relevant to my discussion of the meaning of ‘survival’ above, institutions are beneficial to group survival). I think that’s always likely to be the case. Moreover, using fully functional humans, whose brains are some of the most powerful computers available, to do busywork is a waste of resources. I expect a society optimized to plug in all of humanity will be both functional and generally pleasant for its inhabitants.

But functional morality is ultimately a meta-ethical system, it admits of a lot of debate about what specific moral positions are permitted or best achieve its goals. I think most nightmare scenarios are likely to fail to optimize functionality, or for all moral systems to struggle with them equally (see the discussion of the consequences of genetic intervention above).