Functional Morality

This is the main board for discussing philosophy - formal, informal and in between.

Moderator: Only_Humean


Postby Carleas » Sun Apr 08, 2018 5:22 pm

Morality has resulted from an evolutionary process that selects for survival. Since we observe morality in both young children and non-human primates, we can infer that the cognitive habits that we call morality are evolved: they were selected for because they tended to increase the likelihood that those organisms that exhibited them would pass down the genes that produced them. Morality, in other words, is functional, and this meta-ethical basis should be the foundation for any particular moral system.

This raises problems for many popular moral systems, but most acutely for utilitarian ethics, since it is ostensibly grounded in the same secular liberal worldview that recognizes the mind's material identity and evolutionary origins. Because if morality is a product of evolution, if its purpose all along has been to do whatever keeps the genes propagating, then moral intuitions about the value of humans, or conscious beings, or subjective experience, are at best accidentally correct: they are right if and only if they produce moral prescriptions that tend to favor propagation.

It bears mentioning that subjective experience, too, is the product of evolution, and individuals feel happy and sad because those feelings tended to help their ancestors to survive and reproduce. So we should actually expect subjective experience to be a somewhat reliable proxy for gene-replication. Moreover, since humans' greatest evolutionary asset has been their cooperation, we should also expect valuing the intuition that others' subjective experience matters to be selfish: we have dedicated brain structures for modeling the subjective states of others (more specifically, our ingroup others), and we reproduce their subjective experience automatically as we observe them; their pain feels to us like pain, their pleasure feels to us like pleasure.

But note that these are proxies. We can identify many situations where they mis-assign value, both in our own subjective experience and in how we value the subjective experience of others. We can be tricked into valuing the subjective experiences of robots, and into devaluing the experiences of friends, by subtle or overt manipulations of other evolved cognitive habits: cute robots who mimic babies get incorrectly included; unfamiliar potential allies get incorrectly excluded.

One way of interpreting moral debates is as competing assertions about what system most faithfully produces evolutionary success, and as an evolutionary process itself in that the ideas themselves replicate and are selected for. But I would argue that we can actually draw separate normative conclusions from this observation. To note the evolutionary origins of morality is to short circuit the is-ought fallacy, because it describes what 'ought' is, where moral ideas come from and why they persist. It therefore permits us to reject normative claims that are inconsistent with descriptive claims about what morality is. Functional oughts are is claims.
Carleas
Magister Ludi
 
Posts: 5659
Joined: Wed Feb 02, 2005 8:10 pm
Location: Washington DC, USA

Re: Functional Morality

Postby Meno_ » Sun Apr 08, 2018 5:45 pm

Interesting proposition. However, whether morality is functional has only a contingent relationship to the argument. In other words, a counter-argument can be made as well: that morality has nothing to do with its utility, but is based on intrinsic, given properties.

In the case of the child this is evident, for most likely he will be given a set of moral rules to live by. If he questions these along the way and changes them to suit himself, then it can be said that he utilizes moral acts to his advantage. Interpreting evolutionary traits within the scope of such individual re-formation can be said to conform to the large-scale changes that occur as a result of intergenerational learning, but one would be hard pressed to point to this as directly related to a functional evolutionary process.

Examples abound:

Homosexuality does nothing toward securing the genetic security and advancement of progeny; on the contrary, current views favor the idea that matters of population control, irrespective of greater or lesser genetic endowment, are more of a factor in placing markers on the moral spectrum.

It appears that neither a functional approach nor an existentially preloaded one, based on variance from traditional morality, is the fundamental basis; rather, a quasi-meta-psychological adaptive confirmation to a more ideal basis may be the key.

In the case of homosexuality, again, the motive is not concern with overpopulation, which is the causa causans; contrarily, the justification for it becomes primary.

Deeper structural analysis may reveal the 'ideal' sequence of motivations, rather than the idea of superseding favorable genotypes, as the more emphatic evolutionary factor.
Meno_
Philosopher
 
Posts: 3487
Joined: Tue Dec 08, 2015 2:39 am
Location: Mysterium Tremendum

Re: Functional Morality

Postby Peter Kropotkin » Sun Apr 08, 2018 6:20 pm

Carleas wrote:Morality has resulted from an evolutionary process that selects for survival. Since we observe morality in both young children and non-human primates, we can infer that the cognitive habits that we call morality are evolved: they were selected for because they tended to increase the likelihood that those organisms that exhibited them would pass down the genes that produced them. Morality, in other words, is functional, and this meta-ethical basis should be the foundation for any particular moral system.

This raises problems for many popular moral systems, but most acutely for utilitarian ethics, since it is ostensibly grounded in the same secular liberal worldview that recognizes the mind's material identity and evolutionary origins. Because if morality is a product of evolution, if its purpose all along has been to do whatever keeps the genes propagating, then moral intuitions about the value of humans, or conscious beings, or subjective experience, are at best accidentally correct: they are right if and only if they produce moral prescriptions that tend to favor propagation.

It bears mentioning that subjective experience, too, is the product of evolution, and individuals feel happy and sad because those feelings tended to help their ancestors to survive and reproduce. So we should actually expect subjective experience to be a somewhat reliable proxy for gene-replication. Moreover, since humans' greatest evolutionary asset has been their cooperation, we should also expect valuing the intuition that others' subjective experience matters to be selfish: we have dedicated brain structures for modeling the subjective states of others (more specifically, our ingroup others), and we reproduce their subjective experience automatically as we observe them; their pain feels to us like pain, their pleasure feels to us like pleasure.

But note that these are proxies. We can identify many situations where they mis-assign value, both in our own subjective experience and in how we value the subjective experience of others. We can be tricked into valuing the subjective experiences of robots, and into devaluing the experiences of friends, by subtle or overt manipulations of other evolved cognitive habits: cute robots who mimic babies get incorrectly included; unfamiliar potential allies get incorrectly excluded.

One way of interpreting moral debates is as competing assertions about what system most faithfully produces evolutionary success, and as an evolutionary process itself in that the ideas themselves replicate and are selected for. But I would argue that we can actually draw separate normative conclusions from this observation. To note the evolutionary origins of morality is to short circuit the is-ought fallacy, because it describes what 'ought' is, where moral ideas come from and why they persist. It therefore permits us to reject normative claims that are inconsistent with descriptive claims about what morality is. Functional oughts are is claims.



K: I think this piece has a basic and fundamental problem... you haven't defined morality.
To say we have observed morality in children and non-human primates still leaves us the
question of what exactly we saw, and whether we have put our own interpretation on what
we saw, instead of understanding the "morality" on its own terms...
Children must be taught everything, including morality... so if you see a child
acting "morally," what exactly does that mean? I suspect that we see an action
and we, as adults, define the action; for the child there was no underlying
act of morality... it was simply an action, without our notion of morality,
which we then attributed to the actions of the child or non-human primates...

Kropotkin
"Those who sacrifice liberty for security
wind up with neither."
"Ben Franklin"
Peter Kropotkin
ILP Legend
 
Posts: 6754
Joined: Thu Apr 07, 2005 1:47 am
Location: blue state

Re: Functional Morality

Postby Jakob » Sun Apr 08, 2018 6:57 pm

Minor observation:

A: "Morality, in other words, is functional"

does not necessitate
B: "and this meta-ethical basis should be the foundation for any particular moral system."


For example it may well be that for morality to be functional, it needs to be experienced as an ideal.
That morality, when it is experienced as something less than ideal, ceases to have its effect.

In this way we can comprehend the function of the ideal, which, too, must have evolved as an advantage in terms of survival.

Putting it sharply: to approach morality as something less than divine may be to eliminate its function.

Even sharper: it may be why Protestant Europe, after its god was pronounced dead, embraced Islam. Europeans may actually harbour an instinctive respect for the Muslims' idealistic approach to morality.
For behold, all acts of love and pleasure are my rituals
Jakob
ILP Legend
 
Posts: 5917
Joined: Sun Sep 03, 2006 9:23 pm
Location: look at my suit

Re: Functional Morality

Postby Meno_ » Sun Apr 08, 2018 7:06 pm

Jakob wrote:Minor observation:

A: "Morality, in other words, is functional"

does not necessitate
B: "and this meta-ethical basis should be the foundation for any particular moral system."


For example it may well be that for morality to be functional, it needs to be experienced as an ideal.
That morality, when it is experienced as something less than ideal, ceases to have its effect.

In this way we can comprehend the function of the ideal, which, too, must have evolved as an advantage in terms of survival.

Putting it sharply: to approach morality as something less than divine may be to eliminate its function.

Even sharper: it may be why Protestant Europe, after its god was pronounced dead, embraced Islam. Europeans may actually harbour an instinctive respect for the Muslims' idealistic approach to morality.


Proportionally, this brings it down to the level of functional categories: should the ideal be a derivative or not, from a utilitarian perspective? It has merit but does not overcome the dual, religious aspect of good and evil.

Re: Functional Morality

Postby Urwrongx1000 » Sun Apr 08, 2018 7:45 pm

The premise is flawed. Many systems of morality are centered not around survival and life, but around death and its inevitability. Morality includes people who know they will die, or who actively put their lives on the line, for 'higher' purposes. Thus morality is not necessarily about survival (function). It's about when the price of your life is worth paying for a higher cause, or when another's life (your children's, for example) is worth more than yours.

None of what you call "functional morality" applies to life-and-death matters.
Urwrongx1000
Philosopher
 
Posts: 1126
Joined: Mon Jun 19, 2017 5:10 pm

Re: Functional Morality

Postby WendyDarling » Sun Apr 08, 2018 8:23 pm

I like this topic, but I'm not understanding the differentiations, since they are all interconnected in my mind as one system, not an either/or or an is/ought. Isn't all morality functionally based on the individual and his/her relationship with society at large? Isn't morality society's cohesive glue? Morality has to be functional for society to be functional, and it's functional not in one way but in a multitude of ways.
I AM OFFICIALLY IN HELL!

I live my philosophy, it's personal to me and people who engage where I live establish an unspoken dynamic, a relationship of sorts, with me and my philosophy.

Cutting folks for sport is a reality for the poor in spirit. I myself only cut the poor in spirit on Tues., Thurs., and every other Sat.
WendyDarling
Heroine
 
Posts: 7094
Joined: Sat Sep 11, 2010 8:52 am
Location: Hades

Re: Functional Morality

Postby iambiguous » Sun Apr 08, 2018 8:38 pm

Carleas wrote:Morality has resulted from an evolutionary process that selects for survival. Since we observe morality in both young children and non-human primates, we can infer that the cognitive habits that we call morality are evolved: they were selected for because they tended to increase the likelihood that those organisms that exhibited them would pass down the genes that produced them. Morality, in other words, is functional, and this meta-ethical basis should be the foundation for any particular moral system.

This raises problems for many popular moral systems, but most acutely for utilitarian ethics, since it is ostensibly grounded in the same secular liberal worldview that recognizes the mind's material identity and evolutionary origins. Because if morality is a product of evolution, if its purpose all along has been to do whatever keeps the genes propagating, then moral intuitions about the value of humans, or conscious beings, or subjective experience, are at best accidentally correct: they are right if and only if they produce moral prescriptions that tend to favor propagation.

It bears mentioning that subjective experience, too, is the product of evolution, and individuals feel happy and sad because those feelings tended to help their ancestors to survive and reproduce. So we should actually expect subjective experience to be a somewhat reliable proxy for gene-replication. Moreover, since humans' greatest evolutionary asset has been their cooperation, we should also expect valuing the intuition that others' subjective experience matters to be selfish: we have dedicated brain structures for modeling the subjective states of others (more specifically, our ingroup others), and we reproduce their subjective experience automatically as we observe them; their pain feels to us like pain, their pleasure feels to us like pleasure.

But note that these are proxies. We can identify many situations where they mis-assign value, both in our own subjective experience and in how we value the subjective experience of others. We can be tricked into valuing the subjective experiences of robots, and into devaluing the experiences of friends, by subtle or overt manipulations of other evolved cognitive habits: cute robots who mimic babies get incorrectly included; unfamiliar potential allies get incorrectly excluded.

One way of interpreting moral debates is as competing assertions about what system most faithfully produces evolutionary success, and as an evolutionary process itself in that the ideas themselves replicate and are selected for. But I would argue that we can actually draw separate normative conclusions from this observation. To note the evolutionary origins of morality is to short circuit the is-ought fallacy, because it describes what 'ought' is, where moral ideas come from and why they persist. It therefore permits us to reject normative claims that are inconsistent with descriptive claims about what morality is. Functional oughts are is claims.


Just out of curiosity [for those in the know] does this not seem to reflect many of the points that Satyr raises over at KT? For him morality is ever and always in sync with nature. Natural behaviors can be understood if you are able to grasp the role that evolution plays in the reproduction of all living things.

Perhaps a special dispensation might be granted here. Allow him to come on board and participate on this thread.

My own interest, of course, is the extent to which these "general descriptions" might be integrated into actual conflicted human behaviors. Where does nature/genes stop and nurture/memes begin when the discussion becomes embedded in the moral/political conflagrations that we are all likely to be familiar with?
He was like a man who wanted to change all; and could not; so burned with his impotence; and had only me, an infinitely small microcosm to convert or detest. John Fowles

Start here: viewtopic.php?f=1&t=176529
Then here: viewtopic.php?f=15&t=185296
iambiguous
ILP Legend
 
Posts: 26512
Joined: Tue Nov 16, 2010 8:03 pm
Location: baltimore maryland

Re: Functional Morality

Postby attano » Tue Apr 10, 2018 10:13 pm

I largely share the background, but I don’t think you succeed in supporting your view.

Carleas wrote:Morality has resulted from an evolutionary process that selects for survival. Since we observe morality in both young children and non-human primates, we can infer that the cognitive habits that we call morality are evolved: they were selected for because they tended to increase the likelihood that those organisms that exhibited them would pass down the genes that produced them. Morality, in other words, is functional, and this meta-ethical basis should be the foundation for any particular moral system.

It is OK to suppose that morality responds to the environment, but calling it the result of an evolutionary process is problematic. How could we ever observe this? Fossils bear no trace of the morality of a specimen. At the same time, it begs the assumption of a theoretical framework in which, from the complexity of an organism and its living conditions, we can infer the morality this organism would develop. Somehow this has to be dared; yet there is an inner dynamic in groups, which I would call History, that appears to be at least as determining as physiology and environment. (Of course it's possible to posit that History, too, is linked to evolution, but that goes way beyond simple morality.)
I am inclined to accept that morality assists 'life', but 'life' is not necessarily self-preservation or the propagation of one's own genes. So your "select for survival" becomes problematic too. Oversimplifying, we might see morality (as long as we don't assess it at face value) as a checklist for making an individual subservient to a group. In that respect it may be functional in sustaining the life of the many, but quite often by requiring of individuals the opposite of their survival and genes propagation. So morality and consciousness are complementary in a way, but also conflicting, and you implicitly point to that. A non-moral attitude may well respond to a drive for survival, which could itself be an outcome of evolution (probably a more genuine one).

That said, utilitarianism has a problem; I agree with that.
«Va', va', povero untorello. Non sarai tu quello che spianti Milano.»
attano
 
Posts: 163
Joined: Tue Jun 28, 2011 7:38 pm
Location: Europe

Re: Functional Morality

Postby Prismatic567 » Thu Apr 12, 2018 2:50 am

Carleas wrote:Morality has resulted from an evolutionary process that selects for survival. Since we observe morality in both young children and non-human primates, we can infer that the cognitive habits that we call morality are evolved: they were selected for because they tended to increase the likelihood that those organisms that exhibited them would pass down the genes that produced them. Morality, in other words, is functional, and this meta-ethical basis should be the foundation for any particular moral system.
I agree with the above re the intrinsic moral drive within human[s].

https://www.scientificamerican.com/article/the-moral-life-of-babies/
Morality is not just something that people learn, argues Yale psychologist Paul Bloom: It is something we are all born with. At birth, babies are endowed with compassion, with empathy, with the beginnings of a sense of fairness.


However, the ongoing progress of morality within humanity is not based directly on biological evolutionary adaptation and natural selection.
The ongoing 'evolution' of the inherent moral drive is based on a meme [ideological] basis that in turn [as driven by the inherent moral drive] programs the collective brain of humanity.

Note that 200 years ago no one would have forecast the possibility of the legal banning of 'Chattel Slavery' in all nations in the World. Whilst this pertains only to laws [not practice], it is a definite 'moral' achievement and progress for humanity. Such an evolution is not by natural selection re normal biological evolution.

What is critical is how we can abstract a sound Framework and System of Morality [with groundings and principles] from the reality of what is within the ongoing progress of morality. To expedite the progress in quantum jumps we need a sound Framework.

I have been posting views relating to a sound Framework and System of Morality to expedite progress in morality in various posts.
I am a progressive human being, a World Citizen, NOT-a-theist and not religious.
Prismatic567
Philosopher
 
Posts: 1907
Joined: Sun Nov 02, 2014 4:35 am

Re: Functional Morality

Postby attano » Thu Apr 12, 2018 8:29 pm

Prismatic567 wrote:
Carleas wrote:Morality has resulted from an evolutionary process that selects for survival. Since we observe morality in both young children and non-human primates, we can infer that the cognitive habits that we call morality are evolved: they were selected for because they tended to increase the likelihood that those organisms that exhibited them would pass down the genes that produced them. Morality, in other words, is functional, and this meta-ethical basis should be the foundation for any particular moral system.
I agree with the above re the intrinsic moral drive within human[s].

Unless you qualify survival and genes propagation as ‘moral’, it does not seem to me that you and the OP are saying the same thing. (Of course, it is ultimately up to Carleas to judge on the matter.)
If I understood correctly, he maintains that ‘moral habits’ were ‘selected’ throughout evolution because they assist survival and genes propagation, not really because of an intrinsic moral drive in men.

The ‘findings’ reported in the article (which is an interview, not a scientific paper) do not seem to me to support what Mr. Bloom claims.
They can easily be interpreted in the way I guess Carleas favours: not as evidence of a genuine moral instinct, but as ‘moral feelings’, ‘proxies’, that serve the real instinct of survival.

Re: Functional Morality

Postby Prismatic567 » Fri Apr 13, 2018 4:05 am

attano wrote:
Prismatic567 wrote:
Carleas wrote:Morality has resulted from an evolutionary process that selects for survival. Since we observe morality in both young children and non-human primates, we can infer that the cognitive habits that we call morality are evolved: they were selected for because they tended to increase the likelihood that those organisms that exhibited them would pass down the genes that produced them. Morality, in other words, is functional, and this meta-ethical basis should be the foundation for any particular moral system.
I agree with the above re the intrinsic moral drive within human[s].

Unless you qualify survival and genes propagation as ‘moral’, it does not seem to me that you and the OP are saying the same thing. (Of course, it is ultimately up to Carleas to judge on the matter.)
If I understood correctly, he maintains that ‘moral habits’ were ‘selected’ throughout evolution because they assist survival and genes propagation, not really because of an intrinsic moral drive in men.
I did state that I agree to a degree and disagree on the following:

However, the ongoing progress of morality within humanity is not based directly on biological evolutionary adaptation and natural selection.


"Survival and genes propagation" is not morality per se; rather, they are grounds for morality and ethics.

As I had mentioned, one needs the following to understand how it works:

What is critical is how we can abstract a sound Framework and System of Morality [with groundings and principles] from the reality of what is within the ongoing progress of morality. To expedite the progress in quantum jumps we need a sound Framework.


I have posted on the above elsewhere - won't go into details here.


The ‘findings’ reported in the article (which is an interview, not a scientific paper) do not seem to me to support what Mr. Bloom claims.
They can easily be interpreted in the way I guess Carleas favours: not as evidence of a genuine moral instinct, but as ‘moral feelings’, ‘proxies’, that serve the real instinct of survival.
That is only an article re a book he wrote. The research proper is in the background and published elsewhere. I can't find his scientific paper offhand, but he would not have been awarded $1 million if there were no scientific paper.

Yale psychology professor Paul Bloom was awarded around $1 million by the Jacobs Foundation on Oct. 2 in recognition of his research on babies’ abilities to make moral judgments.
https://yaledailynews.com/blog/2017/10/ ... -research/


Btw, there is other research re babies and inherent morality.

Re: Functional Morality

Postby attano » Sat Apr 14, 2018 1:44 am

Sorry if I misunderstood, but honestly I still don’t understand.
I confess I have a problem at the literal comprehension level. You wrote “I agree with the above re the intrinsic moral drive within human[s].” What does “re” mean? I thought it was a typo, but I see that the same “re” recurs elsewhere.
Regardless, after the sentence “I agree with the above re the intrinsic moral drive within human[s]”, it seems that you posit a ‘moral drive’ in men.
Now, I take it that you agree with Carleas by saying:
Prismatic567 wrote:"survival and genes propagation" is not morality per se
My understanding is that Carleas maintains that if these instincts are a ‘ground’ for morality, they are so in some deceitful way: what is deemed a moral habit is, in fact, a device serving "survival and genes propagation". This amounts to a denial of a moral drive in men. And if you agree with that, where would 'your' moral drive be?
Then you add
Prismatic567 wrote:rather they are grounds for morality
and
Prismatic567 wrote:However, the ongoing progress of morality within humanity is not based directly on biological evolutionary adaptation and natural selection.

So, if I may take it that instincts for "survival and genes propagation" are key to adaptation and natural selection, I have to understand that these non-moral grounds of morality are not what ‘directly’ propels this ‘ongoing progress of morality’. So their role remains mysterious. Maybe I should have read your other posts on the “sound Framework and System of Morality” to understand this better. Nevertheless, I gather that this framework would be obtained through abstraction, hence I conjecture (maybe superficially) that these non-direct grounds would no longer play a part in it.

Prismatic567 wrote:
attano wrote:The ‘findings’ reported in the article (which is an interview, not a scientific paper) do not seem to me to support what Mr. Bloom claims.
They can easily be interpreted in the way I guess Carleas favours: not as evidence of a genuine moral instinct, but as ‘moral feelings’, ‘proxies’, that serve the real instinct of survival.

That is only an article re a book he wrote. The researching proper is in the background and published elsewhere. I can't find his scientific paper off hand, but he would not be rewarded $1 million if there was no scientific paper.

That’s OK, I don’t mean that Professor Bloom’s research is unreliable. I have no reason to think that and, actually, what he says about children’s reactions makes sense to me; I can easily back it from my own experience. Yet I still don't think that those findings hint at an innate morality in a sense that would confute the OP. (Incidentally, the Jacobs Foundation is about «the future of young people so that they become socially responsible and productive members of society». I don’t mean to disrespect that, but scientifically their grant does not prove much in my view.)

Re: Functional Morality

Postby Serendipper » Sat Apr 14, 2018 4:28 am

Carleas wrote:Morality has resulted from an evolutionary process that selects for survival. Since we observe morality in both young children and non-human primates, we can infer that the cognitive habits that we call morality are evolved: they were selected for because they tended to increase the likelihood that those organisms that exhibited them would pass down the genes that produced them. Morality, in other words, is functional, and this meta-ethical basis should be the foundation for any particular moral system.

Awesome epiphany!

This raises problems for many popular moral systems, but most acutely for utilitarian ethics, since it is ostensibly grounded in the same secular liberal worldview that recognizes the mind's material identity and evolutionary origins. Because if morality is a product of evolution, if its purpose all along has been to do whatever keeps the genes propagating, then moral intuitions about the value of humans, or conscious beings, or subjective experience, are at best accidentally correct: they are right if and only if they produce moral prescriptions that tend to favor propagation.

Yes, exactly, and it's why morality is only relevant within that assumed context.

It bears mentioning that subjective experience, too, is the product of evolution,

Is there such a thing as objective experience? Can an object look at itself?

and individuals feel happy and sad because those feelings tended to help their ancestors to survive and reproduce. So we should actually expect subjective experience to be a somewhat reliable proxy for gene-replication.

Gene replication? You mean population growth with no opposing force selecting for any particular gene mutation? Hmm... what happens when life gets too easy? What genes are chosen in that environment?

Moreover, since humans' greatest evolutionary asset has been their cooperation,

I think it was farming, and one doesn't need cooperation for that, just luck in having good soil and animals that can be domesticated. That's the biggest difference between the Native Americans and the Europeans.

Are wolves more successful than tigers? Wolves cooperate, but have to share in a hierarchy where some wolves may be excluded. Tigers manage alone and don't need to share.

we should also expect valuing the intuition that others' subjective experience matters to be selfish: we have dedicated brain structures for modeling the subjective states of others (more specifically, our ingroup others), and we reproduce their subjective experience automatically as we observe them; their pain feels to us like pain, their pleasure feels to us like pleasure.

But note that these are proxies. We can identify many situations where they mis-assign value, both in our own subjective experience and in how we value the subjective experience of others. We can be tricked into valuing the subjective experiences of robots, and into devaluing the experiences of friends, by subtle or overt manipulations of other evolved cognitive habits: cute robots that mimic babies get incorrectly included; unfamiliar potential allies get incorrectly excluded.

We do tend to personify, but I don't think we need cooperation for that. The rustling of the grass should be interpreted as a tiger whether it really is or not.

One way of interpreting moral debates is as competing assertions about what system most faithfully produces evolutionary success,

How could anyone determine what "evolutionary success" means or how to get there? Evolution presupposes no presumptions or else it's not evolution. As soon as someone has a plan in mind, it would cease to be evolution since there is no obstacle to overcome, but conditions upon which to undercome and devolve.

and as an evolutionary process itself in that the ideas themselves replicate and are selected for. But I would argue that we can actually draw separate normative conclusions from this observation. To note the evolutionary origins of morality is to short circuit the is-ought fallacy, because it describes what 'ought' is, where moral ideas come from and why they persist. It therefore permits us to reject normative claims that are inconsistent with descriptive claims about what morality is. Functional oughts are is claims.

The functional morality is what ought to be done if one desires the continuation of what's been happening.
Serendipper
Philosopher
 
Posts: 1094
Joined: Sun Aug 13, 2017 7:30 pm

Re: Functional Morality

Postby Serendipper » Sat Apr 14, 2018 4:31 am

attano wrote:What does “re” mean?

"Regarding" is my guess.
Serendipper
Philosopher
 
Posts: 1094
Joined: Sun Aug 13, 2017 7:30 pm

Re: Functional Morality

Postby Serendipper » Sat Apr 14, 2018 4:41 am

iambiguous wrote:Just out of curiosity [for those in the know] does this not seem to reflect many of the points that Satyr raises over at KT? For him morality is ever and always in sync with nature. Natural behaviors can be understood if you are able to grasp the role that evolution plays in the reproduction of all living things.

Morality is in sync with society and whether one regards society to be natural or artificial is subjective along with the interpretation about which is best.

Perhaps a special dispensation might be granted here. Allow him to come on board and participate on this thread.

You can argue his side and we won't have to deal with the foul odor.

My own interest of course is the extent to which these "general descriptions" might be integrated into actual conflicted human behaviors. Where does nature/genes stop and nurture/memes begin when the discussion becomes embedded in moral/political conflagrations that we are all likely to be familiar with?

Are you asking whether nature or nurture more prominently affects moral/political leanings? Well, since morality is a function of society, then it would seem that nurture would instill properties consistent with societal influences.
Serendipper
Philosopher
 
Posts: 1094
Joined: Sun Aug 13, 2017 7:30 pm

Re: Functional Morality

Postby Ben JS » Mon Apr 16, 2018 12:59 pm

Everything you do is a product of your structure.
When you act, you reinforce your structure.
Your will is a product of your structure.
Beyond the bias of the living, all is neutral.
The bias is a product of your structure.
All values are a bias.
Morality is how to best act in accord with your bias.

Survival is neutral.
Evolution neutral.
Happiness neutral.
Change neutral.
Progress neutral.

Only the biased care one way or the other.
What is your bias and why are you biased?
Ought you rely on the systems that produced your bias to dictate how you respond to your structure?
Follow their lead? Set their results as your goal? Mimic the blind?
Formerly known as: Joe Schmoe

ben wrote:I think it is eloquently fitting that my farewell thread should be so graciously hijacked by such blatant penis waving. It condenses my entire ILP experience into one very manageable metaphor.
User avatar
Ben JS
Human Being
 
Posts: 2064
Joined: Thu Apr 19, 2012 9:12 am
Location: Australia

Re: Functional Morality

Postby Carleas » Fri Jun 08, 2018 7:26 pm

I am a little late on this response, but I tried to be thorough by way of apology. There were many good responses, and clearly some weak points in my argument I needed to address. To avoid responding to individual sentences, I rolled them into some overarching categories. Please correct any mistakes or misreadings, and let me know if I failed to adequately address any criticisms.


================================================================================
1) What is "morality"?

Peter Kropotkin points out that I have not defined "morality", and goes on to note that without a definition, statements like "we observe morality in both young children and non-human primates" are unclear. Peter is correct that I did not provide a definition, but I disagree that that is a significant problem. In some sense, I am arguing for what should be the definition of morality, i.e. how that term should be understood and used. To the extent that's so, any definition I provide would be effectively tautologous with the argument that I'm making.

But I'm also appealing to a colloquial, small-m 'morality' when I say that we observe morality in children and non-humans. In both those groups, we observe strong, seemingly principled reactions in adherence to innate concepts of fairness, and often those reactions are contrary to immediate self-interest. So, for example, capuchin monkeys trained to complete a task for a given reward will react violently if they observe another monkey get a more valuable reward for the same task. They will go as far as to reject a reward that they had previously been satisfied to receive, as if in protest at the unfair treatment. That reaction is a rudimentary morality, as I mean it. Children, too, will react angrily to being rewarded differently for the same task, and from a very young age have a concept of fair distribution of rewards.

In these situations, we see that there is clear global instrumental value in the reactions, since they are intended to punish unfairness and communicate that the recipient will not stand for unfair treatment. In a non-lab setting, this reaction will encourage fairness in repeated encounters. But the reaction is also clearly of a piece with more sophisticated moral reasoning, as when a person reacts to such unfair treatment on someone else's behalf. It takes little more than this seemingly inbuilt reaction and the ability to model other minds to generate such vicarious indignation. We then tend to label these vicariously felt slights as moral sentiment, and further refinements are just further abstraction on the same idea. Kant's categorical imperative is nothing more than generalizing upon them.

As Wendy points out, morality in this sense is "society's cohesive glue": it is a set of generalized standards of treatment, and one about which third parties will get indignant on someone else's behalf. It creates a social glue by creating a set of presumptions about acceptable conduct. And I mean "morality" to point to that glue. Morality as I use it is an observable part of human affairs, a collection of behaviors common to normal-functioning humans (a deficit of which we describe as one of several mental illnesses). And because of its roots in innate tendencies visible in unschooled humans and our close animal relatives, I argue that the observable behaviors of morality are a result of cognitive habits selected for in our evolutionary history, i.e. that they exist because they are functional, so there is no higher authority to appeal to in moral matters than function.

But I should clarify that, even with my functional framing, not all moral rules are as hard-wired as unfairness. For example, it's perfectly consistent with this understanding of morality that there are some moral rules that are necessary (I think this is what Meno_ means when he says "intrinsic") and some that are contingent (what Meno_ describes as a "given...set of moral rules"). Necessary rules will be those that follow from the base facts of biological existence; contingent rules will be those that create social efficiency but are just one of many ways to create such efficiency (perhaps this is what Wendy meant by morality being functional in a multitude of ways). This distinction is neither sharp nor certain, but it is meaningful when considered in degrees: the moral maxim that one should follow traffic laws is more necessary than the moral maxim that one should drive on the right, even though it may be possible to efficiently structure society without traffic laws.

Urwrong suggests a basis of morality in "death and its inevitability", but I don't see that in practice in the real world. Even the examples he gives (giving your life for a higher good, or for your child) are clearly functional, whether by supporting self-sacrifice for collective benefit, or simply by ensuring the direct survival of your genes as carried by your offspring.

It may be true that the adherents of some things we call morality describe their actions in terms of other values, such as "god's will" or "karma", but the existence of a mythology and alternative narrative does not detract from the fact that, if those moral systems have persisted over time, it is because they kept the groups that supported them cohesive and self-perpetuating. (I will say more below about the potential distinction between accurate descriptions of the world that involve selection, and the behavioral effects of descriptions of the world on which selection acts.)

It isn't impossible to have a non-functional moral system, but if it is non-functional, it is not likely to survive. Early Christianity had a moral prohibition against reproduction, and that moral sentiment died out because it was selected against: people who believed it did not reproduce, and a major method of moral transmission (likely the primary method) was unavailable. The existence of such beliefs, and their description as a form of morality, does not mean that morality is not as I describe it.

================================================================================
2) In what sense is it "functional"?

Several people challenged the claim that morality evolved. Attano asked how we could know ("Fossils bear no trace of the morality of a specimen"), and Prismatic notes the memetic evolution of morality on sub-genetic-evolutionary timescales.

I have described the biological roots of morality as "cognitive habits". I describe them this way because it doesn't seem that most particular moral propositions are coded in our genes, but instead that we have a few simple innate predispositions plus a more general machinery that internalizes observed moral particulars. A Greek raised among the Callatiae would certainly find it right and proper to eat his dead father's body, and a Callatian raised among the Greeks would find the practice repulsive. The general moral cognitive habits that are selected for in genetic evolution are the foundation of the moral particulars we see in practice, especially the tendency to align one's behavior with others as a means of coordinating society and enabling cooperation. Those cognitive habits are functional insofar as they enable more cohesive groups to out-compete less cohesive groups.

Attano is correct that we can't see this directly in the fossil record. But we can still infer its origins in genetic evolution by looking at non-human animals and young children. There, we see both the tendency to imitate the herd and the foundations of specific moral precepts. Explaining this through "History" (which I understand to be something like memetic evolution) doesn't work, because non-human animals aren't plugged into the same cultural networks, and very young children haven't absorbed the culture yet (and I believe the moral-like actions of young children are similar across cultures, though I am less confident on that point). Evolved cognitive habits also best explain why we see moral systems in all human groups. Though they differ between groups, they are present everywhere and there is broad agreement within a group.

On top of those cognitive habits is another form of evolution, what I would call memetic (as opposed to genetic) evolution. Our wetware is evolved to harmonize groups, but the resulting harmonies will vary from group to group due to differences of circumstance and happenstance. That explains the "progress" in morality that Prismatic notes: memetic evolution can take place much more rapidly, since its components are reproduced and mutated much more quickly than are genes.

Now, we might call "progress" the process of coming up with moral codes that allow us to form yet larger and more efficient groupings. Or it might be the process of removing the moral noise that is built into local moralities by happenstance (e.g. rules surrounding specific types of livestock), boiling down to more universal moral beliefs like "don't murder". Progress in a system of functional morality would be if the sets of moral particulars made the group function better.

Serendipper seems to suggest on this point that population growth may be bad (or perhaps just non-functional) if not coupled with an "opposing force selecting for any particular gene mutation". But population growth is the result of functional morality; bearing offspring who bear more offspring is what it means to have genes selected for. This may be clearer if we compare competing ant hills, and ask what it would mean if one ant hill began to increase in population significantly over the competing hills. More population means that the hill is already relatively successful, because population expansion requires resources, and also that it's likely to be more successful, because more ants working on behalf of the collective means the collective is likely to be stronger. So too with humans: we can read success from population growth, and we would expect population growth to create success (up to a point, the dynamics change when there are no competing groups).

A growing population may, and probably does, require morals to change, but we should expect that: as context changes, including changes in the nature of "the group", different behaviors will be functional. But that our old morals will be a victim of their success does not mean that they were unsuccessful: a growing population and growing cooperation between group members means that the old rules were functional in their context.


================================================================================
3) How does this apply?

A few people asked about the applicability of this way of framing morality. That line of argument usually isn't so much an objection as an invitation to keep developing the theory, which I am glad to do.

Jakob suggests maybe morality needs to be naive, in the sense that the inborn sense of morality as an ideal is important to its functioning. That may be the case. But it is also true that in order to dodge a speeding car, we need to forget about special relativity, even though the most accurate description of the car's motion that we can produce requires us to use special relativity. So too might we recognize and describe morality as a system of cognitive habits that support group cohesion, and yet in deciding how we live appeal to more manageable utilitarian or deontological axioms. This goes to Urwrong's point above about descriptions of morality in terms of death rather than life: different descriptions may more effectively achieve the ends of morality, but they do not change the nature of morality as an evolved system that helps perpetuate human groups.

This is related to a question from iambiguous, of how we actually put the idea into practice. I don't think that's easy, but I also don't think it's necessary. I am not here offering an applied morality of daily life, but a moral theory to which such an applied morality should appeal. There are potential subordinate disagreements about e.g. whether brutal honesty or white lies are more effective in creating group cohesion and cooperation; what I am proposing here is the system to which the parties to such disagreement should appeal to make their case.

Serendipper asks how we could determine evolutionary success, and I think the answer is easy in retrospect (though not trivial), and more difficult in prospect. In retrospect, we can just ask what survived and why. Sometimes we know that groups fell apart for arbitrary reasons, and other times we can readily identify problems within the groups themselves. We can point to moral prohibitions that harmed groups and were abandoned, e.g. sex and usury prohibitions. We can compare across surviving systems and see what they have in common, e.g. respect for laws and public institutions.

In prospect, we can make similar arguments, drawing from the history of moral evolution to make predictions about what will work going forward. Like any theory about what will happen on a large scale in the future, there's substantial uncertainty, but that doesn't mean we know nothing. We can more readily identify certain options that are very unlikely to be the best way forward.

But again, this uncertainty isn't fatal to the proposition that morality is functional -- indeed, it's expected. Much as we don't know for sure which evolved genetic traits will survive, or whether K- or r-strategies are more reliable in a given context, we also do not know what moral approach will guarantee group prosperity. But these observations do not undermine the theory of evolution, and they do not undermine the theory of functional morality.
User Control Panel > Board preference > Edit display options > Display signatures: No.
Carleas
Magister Ludi
 
Posts: 5659
Joined: Wed Feb 02, 2005 8:10 pm
Location: Washington DC, USA

Re: Functional Morality

Postby Karpel Tunnel » Thu Jun 28, 2018 11:18 am

Carleas wrote:Morality has resulted from an evolutionary process that selects for survival. Since we observe morality in both young children and non-human primates, we can infer that the cognitive habits that we call morality are evolved: they were selected for because they tended to increase the likelihood that those organisms that exhibited them would pass down the genes that produced them. Morality, in other words, is functional, and this meta-ethical basis should be the foundation for any particular moral system.

This raises problems for many popular moral systems, but most acutely for utilitarian ethics, since it is ostensibly grounded in the same secular liberal worldview that recognizes the mind's material identity and evolutionary origins. Because if morality is a product of evolution, if its purpose all along has been to do whatever keeps the genes propagating, then moral intuitions about the value of humans, or conscious beings, or subjective experience, are at best accidentally correct: they are right if and only if they produce moral prescriptions that tend to favor propagation.

It bears mentioning that subjective experience, too, is the product of evolution, and individuals feel happy and sad because those feelings tended to help their ancestors to survive and reproduce. So we should actually expect subjective experience to be a somewhat reliable proxy for gene-replication. Moreover, since humans' greatest evolutionary asset has been their cooperation, we should also expect valuing the intuition that others' subjective experience matters to be selfish: we have dedicated brain structures for modeling the subjective states of others (more specifically, our ingroup others), and we reproduce their subjective experience automatically as we observe them; their pain feels to us like pain, their pleasure feels to us like pleasure.

But note that these are proxies. We can identify many situations where they mis-assign value, both in our own subjective experience and in how we value the subjective experience of others. We can be tricked into valuing the subjective experiences of robots, and into devaluing the experiences of friends, by subtle or overt manipulations of other evolved cognitive habits: cute robots that mimic babies get incorrectly included; unfamiliar potential allies get incorrectly excluded.

One way of interpreting moral debates is as competing assertions about what system most faithfully produces evolutionary success, and as an evolutionary process itself in that the ideas themselves replicate and are selected for. But I would argue that we can actually draw separate normative conclusions from this observation. To note the evolutionary origins of morality is to short circuit the is-ought fallacy, because it describes what 'ought' is, where moral ideas come from and why they persist. It therefore permits us to reject normative claims that are inconsistent with descriptive claims about what morality is. Functional oughts are is claims.
I was referred here in the context of my saying that without emotions there are no morals. I see nothing here to argue against that. If you have strategies that unemotionally lead to the propagation of your genes, and no emotions are present, you have tactics and strategies. Machines could be programmed to do this - something like those robot cagematches, though fully programmed ones. That isn't morals. Morals are inextricably tied to someone's feelings and values - iow subjective preferences, even if it is a posited God's - and notice how these gods get pissed off if you break the rules.

And guilt would fit into the discussion you have of emotions above: natural selection slowly working on which kinds of guilt are adaptively poor.

Once something is only tactic and strategy, you have no way to decide between this set of tactics - which leads to the destruction of life on earth - and that set of tactics which does not

UNLESS

emotional/desire based evaluations

are made.

If you have none, you are no longer an animal.

If you have none, you cannot decide, though you could flip a coin.

And one interesting thing about evolution is that it has led, and not just in the case of humans, to species having the ability to not necessarily put their own genes ahead of others. This may benefit the species - it is part of what makes us so versatile, or our versatility makes us like this.

Yes, apparently unemotional viruses may be even more effective than us - in the long or even short run - but they are not moral creatures. I think it would be a category error to call them that.
Karpel Tunnel
Thinker
 
Posts: 852
Joined: Wed Jan 10, 2018 12:26 pm

Re: Functional Morality

Postby Carleas » Thu Jun 28, 2018 4:41 pm

The argument that morality doesn't depend on emotions is that morality was a product of evolution, and was selected for independently of any emotional valence. The origin of morality as something that supports group selection does not depend on emotion; emotion is neither necessary nor sufficient for morality to be selected for.

That's not to say that morality can't interact with emotion; it may be that morality subjectively experienced as an emotion is an effective way to encourage beneficial ingroup cooperation. Or it may be that tuning into the emotions of others gives us inputs into our moral machinery that help produce such beneficial ingroup cooperation.

But like all evolved traits, the fact that they produced outcomes that were selected for in the past does not guarantee that they will produce outcomes that will be selected for in the present. We evolved to go nuts for sugar, because in the environment in which we evolved sugar was scarce and it paid to eat all we could. In our current world, sugar is abundant and too much enthusiasm for sugar is selected against. Many common fears are unjustified, we're too risk averse, we overreact to cold. We have a lot of subjective experiences that were handed down through evolution that are actively counterproductive in the modern world. Our subjective preferences can be mistaken in that sense.

So too can connections between emotion and morality be seen to be spurious, once we accept why morality exists at all. Whatever weight we give to emotion we can and do discount completely when it leads to the wrong outcome. We feel guilty when we dump someone, and it's not that we shouldn't feel that way; it's that that emotion has no bearing on the rightness or wrongness of the action. We feel it because we evolved in small bands without the elaborate puritan mating regime of modern society; in that evolutionary context, hurting someone and tearing social bonds to the extent we do when we dump someone now was disruptive to the group and bad for us and our tribe. So we feel guilty, we have the moral intuition that we've done wrong, and that moral intuition is mistaken.

The fact that we can look at a situation and use non-emotional factors to identify emotions that are just incorrect and that point to the wrong moral conclusions entails that moral conclusions can't actually be based on the emotions. They're based on something else, something independent of the emotions.

We literally have a neural network that was trained on certain inputs to achieve a certain goal, and now we're feeding it different inputs and just declaring wherever it points to be the goal. That's a nonsensical approach. The goal is the same goal: survival.
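The analogy above can be made concrete with a toy sketch (my illustration, not Carleas's; all names and numbers here are hypothetical): a crude rule "trained" in one environment keeps firing when the input distribution changes, and reading its new outputs as the goal confuses the trained proxy with the target it was trained toward.

```python
# Illustrative sketch only: a rule "trained" in one environment,
# then fed inputs from a different one.

def train_threshold(observed_sweetness):
    """Learn a crude rule: eat anything at least as sweet as the average food seen."""
    return sum(observed_sweetness) / len(observed_sweetness)

# Ancestral environment: sugar is scarce, so sweetness is a decent proxy
# for "rare, calorie-dense, worth eating".
ancestral_foods = [1, 2, 1, 3, 2]
threshold = train_threshold(ancestral_foods)

def should_eat(sweetness, threshold):
    # The evolved proxy: "sweet enough" once stood in for "good for survival".
    return sweetness >= threshold

# Modern environment: sugar is abundant, and the same proxy now says
# "eat" for everything, even though that no longer serves the original goal.
modern_foods = [9, 8, 10, 9]
print([should_eat(s, threshold) for s in modern_foods])  # [True, True, True, True]
```

The proxy's outputs in the new environment are not the goal; the goal remains whatever the rule was originally selected to achieve.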
User Control Panel > Board preference > Edit display options > Display signatures: No.
Carleas
Magister Ludi
 
Posts: 5659
Joined: Wed Feb 02, 2005 8:10 pm
Location: Washington DC, USA

Re: Functional Morality

Postby Karpel Tunnel » Thu Jun 28, 2018 10:07 pm

Carleas wrote:The fact that we can look at a situation and use non-emotional factors to identify emotions that are just incorrect and that point to the wrong moral conclusions entails that moral conclusions can't actually be based on the emotions. They're based on something else, something independent of the emotions.

We literally have a neural network that was trained on certain inputs to achieve a certain goal, and now we're feeding it different inputs and just declaring wherever it points to be the goal. That's a nonsensical approach. The goal is the same goal: survival.
1) you did not really interact with the ideas I presented. 2) you are claiming to know what is good and what is bad, iow to have access to objective morality, to some degree or other. 2a) you need to demonstrate this.

My point is not that I have access to objective morality, but that all moralities are founded by us humans on emotions. Why must this be the case? Because otherwise we have no other way to determine what we think is good. Note the difference between us. You are claiming to know the good, the objective good. I am focused on the process that must take place to decide whether a morality is good. If one is a consequentialist, which you are, then the only way for you to determine what you consider good is via emotions. Social mammal emotions.

We cannot even say that the survival of any human is objectively good. How would we know this? But we use values based on social mammal biases to decide, well, survival of humans is good. Perhaps the consequentialist thinks that reducing unnecessary pain is good. This is based on empathy and one's own revulsion at pain, projected onto others.

You can have goals and then best inferred heuristics to reach that goal.

But morals are not simply goals.

The whole opening of your post does not address what is happening. It claims that a non-emotional natural selection led to emotions. whoopie. Irrelevant.

Emotions led to morals. It is a necessary part of the process through which we evaluate the good. You may decide that I or a younger you reached a poor conclusion based in part on emotions when it came to morals. But again, you MUST use emotional social mammal values to determine this.

At some point you have what, I can only assume, you think is merely a rational, logical decision. An emotionless evaluation. Whatever that is, I will bet, not coincidentally values your own life as good, though perhaps one that could be outweighed by other goods. That life is good. That not causing unnecessary harm is good.

All based on your desires to be alive and hopefully empathy at least as a factor in relation to others.

If you take away the emotions, you are then claiming that what in fact are really just tactics are morals. Tactics to achieve certain outcomes that you are utterly indifferent emotionally about. Tactics to reach a goal you are utterly emotionally indifferent about.

If you are indifferent emotionally about those goals, why enter the discussion at all?

Why not let people who emotionally prefer certain outcomes, ways of relating, decide?

It doesn't matter to you.
Karpel Tunnel
Thinker
 
Posts: 852
Joined: Wed Jan 10, 2018 12:26 pm

Re: Functional Morality

Postby Karpel Tunnel » Thu Jun 28, 2018 10:23 pm

Carleas wrote:The argument that morality doesn't depend on emotions is that morality was a product of evolution, and was selected for independently of any emotional valence. The origin of morality as something that supports group selection does not depend on emotion; emotion is neither necessary nor sufficient for morality to be selected for.
There is something messed up in here, like some deep confused category error, and I wish I could really explain this well.

I can only right now take a stab at it with a reductio...

The argument that rationality doesn't depend on thoughts is that rationality was a product of evolution, and was selected for independently of any thought valence. The origin of rationality as something that supports group selection does not depend on thoughts; thoughts are neither necessary nor sufficient for rationality to be selected for.


Unless whatever process you have for deciding on morality is not itself based on functions coming out of evolution, whatever your emotionless process is, is also not necessary.

Morality emerged out of emotional beings, beings who evaluated the good and bad using emotions. It is not a coincidence that chimp and wolf moralities correlate incredibly well with emotional likes and dislikes. Animals with no limbic systems are never referred to as having emotions. We may talk about power dynamics in animals without limbic systems, but I don't hear anyone talking about reptile morals. Just reptile behavior. But with apes and canines, ideas like fairness apply.

Anyone who told me they had arrived at an objective morality without emotions, I would distrust in the extreme. Because they are claiming emotional indifference, that their conclusions have not used emotions in their evaluation. Which means their morality is not based on empathy, even for themselves. And since they are claiming to be indifferent, they are presenting themselves as a disinterested party. Which I find suspicious in the extreme.
Karpel Tunnel
Thinker
 
Posts: 852
Joined: Wed Jan 10, 2018 12:26 pm

Re: Functional Morality

Postby Carleas » Fri Jun 29, 2018 6:05 pm

Karpel Tunnel wrote:Note the difference between us. You are claiming to know the good, the objective good. I am focused on the process that must take place to decide whether a morality is good.

I am claiming that there is an objective good, that morality is objective. Do you disagree with that?

Karpel Tunnel wrote:But again, you MUST use emotional social mammal values to determine this.

We know where emotions and morals come from, we know why they evolved, so we can determine what they should say without bootstrapping from them.

My point in this thread is that starting with morals as an empirical phenomenon observable in humans and certain other social animals, we can examine what morals are and why they exist. And any claimed moral commandment that undermines the empirically observed reason for the existence of morals must be mistaken; morals must continue to be what they evolved to be.

And what they evolved to be had nothing to do with emotion (except insofar as emotions also evolved to do the same thing).

Karpel Tunnel wrote:If you take away the emotions, you are then claiming that what in fact are really just tactics are morals.

Or rather, morals are a tactic that evolved because they kept people who used them alive and helped them reproduce.

Karpel Tunnel wrote:If you are indifferent emotionally about those goals, why enter the discussion at all?

Why not let people who emotionally prefer certain outcomes, ways of relating, decide?

It doesn't matter to you.

This is a strange line of argument.
1) Should only people that are emotionally invested in outcomes discuss anything? Like, only mathematicians that are emotionally invested in a specific outcome to the Riemann Hypothesis should spend any time trying to figure it out?
2) Often people who are emotionally invested in an outcome are the worst people to solve it. That's why we have courts and arbitrators and mediators and trilateral talks. Neutral third parties are often better at resolving disputes.
3) I'm not saying emotion does nothing or doesn't matter, I'm saying emotion isn't the basis, isn't a component, of morality. As I say above, emotions will often align with morality, and 'naive' morality will often align with survival, because they all evolved to the same ends. But where they differ, it is survival that wins. And I, weak as I am, feel and follow emotions, but I often do so knowing that it is immoral.

Karpel Tunnel wrote:I can only right now take a stab at it with a reductio...

This poses the interesting question of how to distinguish rationality from morality, i.e. rationality also evolved, so why doesn't survival trump rationality? I would look to what rationality and morality each purport to do. Rationality is an attempt to describe something that exists independently of humans. It is a way of describing the world. Morality, by contrast, is something we created (or that was created as a part of us).

I think you would have to agree with this distinction: if morality is based on our emotions, then it doesn't exist in a world without our emotions. Rationality, logic, math, those things exist without us.
User Control Panel > Board preference > Edit display options > Display signatures: No.
Carleas
Magister Ludi
 
Posts: 5659
Joined: Wed Feb 02, 2005 8:10 pm
Location: Washington DC, USA

Re: Functional Morality

Postby Karpel Tunnel » Mon Jul 02, 2018 7:02 am

Carleas wrote:I am claiming that there is an objective good, that morality is objective. Do you disagree with that?
Which morality is objective? We have evolved a set of moralities, and some of these moralities hold that we should not survive – anti-natalism – while some consider us parasites to such a degree that we should be eliminated to protect other life. The transhumanists have moralities, or perhaps aesthetics, that want us to choose the way Homo sapiens will no longer exist – they are the most likely of the three to win the natural selection battle with other moralities. I can't see how one can know the objective good, nor can I see that teleological arguments based on evolution lead to any conclusion about what is good with a capital G. We can come up with tactics that might be good for the spreading of our genes, though that does not sound like morals to me. Evolution led to a capacity. That capacity - the portions of our nervous systems, say, that amongst other things came up with morals - may or may not be adaptive in the long term. And we cannot assign it a purpose. Once this capacity is present, it is clear that it will be applied to all sorts of purposes.

Karpel Tunnel wrote:But again, you MUST use emotional social mammal values to determine this.

Carleas wrote:We know where emotions and morals come from, we know why they evolved, so we can determine what they should say without bootstrapping from them.
I am not sure if your ‘why’ is teleological here, but this is a bird's eye view, or view from nowhere. In situ we have a way of creating meaning for ourselves, and that meaning is emotionally evaluated and generated, not bound by any 'purpose' in evolution. If there were a purpose in evolution, and it wanted control, it made a mistake when it came up with our capacities and tendencies, since we evaluate and generate morals based on rationality AND emotions. If I am supposed to respect evolution's goals, it seems to me I must respect the processes and skills it gave me to do things and evaluate things. IOW I have been made such that I mix emotions and rationality, both when I function like a consequentialist and when like a deontologist. I find emotions deeply involved in both processes, and I note this in everyone I meet also. We are this way. I don't see why I should abstract out and respect in SHOULD terms evolution's intent for morals, but ignore evolution's result in making me/us the way I am/we are.

Carleas wrote:My point in this thread is that starting with morals as an empirical phenomenon observable in humans and certain other social animals, we can examine what morals are and why they exist. And any claimed moral commandment that undermines the empirically observed reason for the existence of morals must be mistaken; morals must continue to be what they evolved to be.
Who says? How do you know that is good? What if we achieve interstellar travel and kill off lovely, smarter, less nasty species, perhaps all of them? Where can I even stand to view the objective good of our species? All you are talking about is heuristics for survival. That's not morality. I feel a bit like when I see physicalists talking about being spiritual. They may have wonderful philosophies of life, be great people, generate sweet and caring ethical codes, etc., but they are not spiritual. That word literally entails other stuff. So does morality entail more than heuristics. It includes an in-part emotional/desire-based choosing of what goals we want good heuristics for, and often, how we feel about the heuristics. Or it would not be so common to question whether the ends justify the means. What you describe is certainly not objective morals. It is tactics towards what you consider the one goal, a goal we don't even know is objectively a good one, though it might be good for us. I consider it an extremely limited goal for us, just one part of what morality covers. But even if it were the only goal of ours, we cannot know if it is a moral one. I mean, who are we to judge the goodness of the human race? Or better put, who are we to think we can judge it objectively and without emotion?
Carleas wrote:And what they evolved to be had nothing to do with emotion (except insofar as emotions also evolved to do the same thing).
Again, teleological. But further, we evolved as creatures that evaluate morals emotionally. If we are going to use a teleological argument, then perhaps we should leave that alone, rather than deciding that we can and should do it only rationally - which I don't think is possible, in any case. Further, it seems completely irrational to decide how humans relate to each other without making emotional and desire-based evaluations central. I mean, we have to live with all the consequences of those morals, and emotional consequences will be real and central. For some reason emotions are often considered quasi-real, and this is often based on their fallibility. First, reason is also fallible; but further, emotions are real. Now, I know that you would not assert that emotions are not real. But note how they end up somehow being moved off the table when they are central to pretty much all the consequences of morals - and also to the process of choosing and evaluating, etc.
I don't think there are more than a handful of people who think morality is JUST about calculating the survival of the species. So you must then explain how evolution led to us having a belief/method/approach that runs counter to what you are saying. If evolution can give us a should, this would entail that it would give us a should around methodology also.
IOW your argument seems to be that since evolution shaped morals and evolution is all about surviving, morality is about surviving, period. But evolution led to us, and other moral-creating species, making morals about much more. Perhaps you need to note what evolution has actually selected for: in this case, moral-making animals whose limbic systems are involved in morals at all levels.
Personally, I don't really care what evolution wants or intends, but I can see what was selected for in our case.
If it somehow turned out that rationality indicated I should kill off my wife after she births our second child - that the best AIs, analyzing all the complex chains of effects, see it as the best heuristic for human survival that husbands/fathers do this - OR EVEN if God told me to do it...no way. I won't. I fail God's test of Abraham, though I have often wondered if in fact he failed it.
This was an extreme example - though one that fits nicely with our other discussion - but there are all sorts of other moral-like guidelines I would follow regardless of what the best minds said was our best strategy for survival. And if you think that is a problem, blame evolution. Evolution made my moral-making, or in my case preference-making, process such that there are things I will not do (even for money, or for what the supposedly detached people with views from nowhere say is moral). And there are things I will do that may go against their supposed best heuristics.
Karpel Tunnel wrote:If you take away the emotions, you are then claiming that what in fact are really just tactics are morals.

Carleas wrote:Or rather, morals are a tactic that evolved because they kept people who used them alive and helped them reproduce.
See above about emotions always being in the mix of creating, applying, modifying, justifying, etc. That is the tactic we evolved.

Karpel Tunnel wrote:If you are indifferent emotionally about those goals, why enter the discussion at all?

Why not let people who emotionally prefer certain outcomes, ways of relating, decide?

It doesn't matter to you.


Carleas wrote:This is a strange line of argument.
1) Should only people that are emotionally invested in outcomes discuss anything? Like, only mathematicians that are emotionally invested in a specific outcome to the Riemann Hypothesis should spend any time trying to figure it out?
They are all emotionally invested in finding the correct outcome. And they likely are all interested in their perhaps-at-this-stage-vague guess of direction being the right guess. And the one who solves it will have been extremely emotionally involved in finding the answer. IOW I wasn't arguing that Carleas shouldn't participate, but rather trying to highlight - to corner you - that you are likely driven by emotions, even in this telling us we should prioritize survival because that is what evolution gave us morality for. This may seem like a view from nowhere, but it absolutely cannot be once it is couched as a should. Further, the results of the Riemann Hypothesis are not like the results of a morality argument or a decision about how we should, for example, relate to each other. The latter has to do with what we like, love, hate, desire, are repulsed by, and those emotional reactions will guide our personal and collective decisions about what is moral. In fact they must. If they are not involved, we may all end up working in some dystopian panopticon-tyranny that seems efficient, and at least in the short term seems to completely guarantee survival, but which we hate every waking minute of living in. For example. I think there are other problems that will arise. Some are based on emotions now having an unremovable part in what we will even want to survive in, thus making emotions a selection factor, like it or not. Others need not even be bound to your fundamental should - that we must base our morals on what we think evolution intended morals to be for.

Carleas wrote:2) Often people who are emotionally invested in an outcome are the worst people to solve it. That's why we have courts and arbitrators and mediators and trilateral talks. Neutral third parties are often better at resolving disputes.
Often the people who come up with morals in what they think is a view from nowhere, or objective, or disinterested, end up making horrible decisions. I would not use that as an argument against using rationality, and I don't think it works as an argument against including emotions. As far as I can see, the people who judge emotions and present themselves as avoiding their influence are less aware of how their emotions are influencing their choices than the people who do not present their ideas this way. But further, those groups that make decisions are applying morals decided in part on emotional grounds. And they likely have strong feelings about those morals. Courts often use juries, lawyers use emotional arguments, etc. Yes, emotions can lead to wrong decisions. But they are central to morals, to determining what morals are and how they affect us. Anyone trying to eliminate emotions from the process of deciding morals will be incredibly lucky if they come up with a moral system that does not feel unnecessarily bad in a wide variety of ways. And if they have the single goal of species survival, this could lead to solutions like…

It is moral to kill off 70% of the population tomorrow and have an elite take over complete control of all genetic combination – read: via sex, GM work, etc. And so on.
Any country that decided to come up with morals without including emotions in the process is one I would avoid, because essentially such a country, with that one goal, has no interest in the bearers of the genes except to the extent they bear them. Science fiction has many such dystopian ‘logical’ solutions.
Carleas wrote:3) I'm not saying emotion does nothing or doesn't matter, I'm saying emotion isn't the basis, isn't a component, of morality. As I say above, emotions will often align with morality, and 'naive' morality will often align with survival, because they all evolved to the same ends. But where they differ, it is survival that wins. And I, weak as I am, feel and follow emotions, but I often do so knowing that it is immoral.
How could it be immoral, in your system of belief, since we clearly evolved with this mixed approach to choosing and creating? It is part of our evolved criteria in all such decision-making.
Karpel Tunnel wrote:I can only right now take a stab at it with a reductio...

Carleas wrote:This poses the interesting question of how to distinguish rationality from morality, i.e. rationality also evolved, so why doesn't survival trump rationality? I would look to what rationality and morality each purport to do. Rationality is an attempt to describe something that exists independently of humans. It is a way of describing the world. Morality, by contrast, is something we created (or that was created as a part of us).
I think you would have to agree with this distinction: if morality is based on our emotions, then it doesn't exist in a world without our emotions. Rationality, logic, math, those things exist without us.
This makes you a kind of Platonist, or an adherent of some other form of metaphysics that has these things existing outside us. But here's the thing: you are deciding NOT to work with morals the way we obviously have evolved to work with morals, and clearly to me emotions are involved deeply in all moral evaluation - they become clearly visible when there are disagreements about morals, which are regular and ongoing. I don't really care what evolution may have intended my emotions, morality and rationality to be for and have as goals - and perhaps I am more adaptive precisely because I take the freedom given to me by evolution and don't care to let its intent rule me. But for the sake of argument, let's say I should go with evolution's intentions: shouldn't I then go with the full set of ways one evaluates and chooses, which would include both emotions and rationality, which are intermingled and interdependent in any case? And yes, morality does not exist in a world without emotions, and it never has. Animals had behavior before emotions, perhaps, but not morality.

Further, 'rationality' is of a different category than the rest of that list. I would need to know where you see rationality existing without us. And whose rationality? Rationality is a human process - also exhibited in more limited, though often highly effective, forms in animals. We can call it an animal process. For it to function well, in terms of human interactions, the limbic system must be undamaged, and emotions are a part of that process. But even without that proviso, I do not find rationality anywhere outside us to be possible in the physicalist paradigm, unless we are talking about aliens or perhaps one day AIs.

And note:
Karpel Tunnel wrote:
I tend to think the way we evolved is more adaptive than the suppressed limbic system version of humanity you are advocating for.
Carleas:
I have not and am not advocating any such thing.
I know. And I certainly know it is not explicit. It was an intuitive reaction.
I still think this is in the air.
I think one of the reasons we have intermeshed emotional and rational decision-making is that the higher forms of rationality get weaker when there are too many variables and potential chains of causes. AND rationality tends to have a hubris that it can track all this. For some things emotional reactions have a better chance, though of course they are fallible. But then both are fallible and both are intermeshed. And just as some are better at rationality, some are better at intuition than others are. I would find it odd if how we lived were determined without emotions and desires as central to the process of determining it.

I've mentioned Damasio, here's a kind of summary. Obviously better to read his books or articles....
https://www.huffingtonpost.com/fred-kof ... ccounter=1

People can't even make choices without emotions. But here, with morality, we are talking about making choices about things we react to with strong emotions, choices whose effects affect us emotionally, that affect our desires and goals on emotional levels.

I see no reason to consider my DNA more important than my life - how it is lived, what it feels like, what my loved ones experience, the state of what I value - nature, etc.
Karpel Tunnel
Thinker
 
Posts: 852
Joined: Wed Jan 10, 2018 12:26 pm

Re: Functional Morality

Postby Karpel Tunnel » Sat Jul 07, 2018 12:15 pm

Edit: since you think we should base morals on 'survival', it would be good to define what would count as survival. Warning: I plan to find odd conclusions based on the definition.

Carleas wrote:I am claiming that there is an objective good, that morality is objective. Do you disagree with that?
Which morality is objective? We have evolved a set of moralities and some of these moralities support we not survive – anti-natalism - some consider us parasites to such a degree that we should be eliminated to protect other life. The transhumanists have moralities or perhaps aesthetics that want us to choose the way homo sapiens will no longer exist – they are the most likely of the three to win the natural selection battle with other moralities. I can't see how one can know the objective good, nor can I see that teleological arguments based on evolution lead to any conclusion about what is good with a capital G. We can come up with tactics that might be good for the spreading of our genes, though that does not sound like morals to me. Evolution led to a capacity. That capacity - the portions of our nervous systems, say, that amongst other things, came up with morals - may or may not be adaptive in the long term. And we cannot assign it a purpose. Once this capacity is present it is clear that it will be applied to all sorts of purposes.

Karpel Tunnel wrote:But again, you MUST use emotional social mammal values to determine this.

We know where emotions and morals come from, we know why they evolved, so we can determine what they should say without bootstrapping from them.
I am not sure if your ‘why’ is teleological here, but this is a bird's eye view. Or view from nowhere. In situ we have a way of creating meaning for ourselves and that meaning is emotionally evaluated and generated, and not bound by any 'purpose' in evolution. If there were a purpose in evolution, and it wanted control, it made a mistake when it came up with our capacities and tendencies since we evaluate and generate morals based on rationality AND emotions. If I am supposed to respect evolutions goals, it seems to me I must respect the processes and skills it gave me to do things and evaluate things. IOW I have been made such that I mix emotions and rationality, both when I function like a consequentialist and when like a deontologist. I find emotions deeply involved in both processes and I note this in everyone I meet also. We are this way. I don't see why I should just abstract out and respect in SHOULD terms evolution’s intent for morals, but ignore evolution's result in making me/us the way I am/we are.

My point in this thread is that starting with morals as an empirical phenomenon observable in humans and certain other social animals, we can examine what morals are, why they exist. And any claimed moral commandment that undermines the empirically observed reason for the existence of morals must be mistaken, morals must continue to be what they evolved to be.
Who says? How do you know that is good? What if we achieve interstellar travel and kill off lovely smarter, less nasty species, perhaps all of them? Where can I stand to view the objective good of our species even? All you are talking about is heuristics for survival. That's not morality. I feel a bit like when I see physicalists talking about being spiritual. They may have wonderful philosophies of life, be great people, generate sweet and caring ethical codes, etc., but they are not spiritual. That word literally entails other stuff. So does morality entail more than heuristics. It includes an in part emotional/desire-based choosing of what goals we want good heuristics for, and often, including how we feel about the heuristics. Or it would not be so common to challenge the idea that the means might not justify the ends. What you describe is certainly not objective morals. It is tactics towards what you consider the one goal, a goal we don't even know is objectively a good one, though it might be good for us. I considered it an extremely limited goal for us, just one part of what morality covers. But even if it was the only goal of ours, we cannot know if it is a moral one. I mean, who are we to judge the goodness of the human race. Or better put, who are we to think we can judge it objectively and without emotion?
And what they evolved to be had nothing to do with emotion (except insofar as emotions also evolved to do the same thing).
Again teleological. But further we evolved as creatures that evaluate morals emotionally. If we are going to use a teleological argument then perhaps we should leave that alone, rather than deciding that we can and should just do it only rationally - which I don't think is possible, in any case. Further it seems completely irrational to generate the way humans relate to each other without making emotional and desire-based evaluations central. I mean, we have to live with all the consequences of those morals, and emotional consequences will be real and central. For some reason emotions are often considered quasi-real. And this is often based on their fallibility. First, reason is also fallible, but further, emotions are real. Now I know that you would not assert that emotions are not real. But note how they end up somehow being moved off the table when they are central to pretty much all the consequences of morals. And then also in the process of choosing and evaluation, etc.
I don't think there is more than a handful of people who think morality is JUST about calculating the survival of the species. So, you must then explain how evolution led to us having a belief/method/approach that runs counter to what you are saying. If evolution can give us should, this would entail that it would give us should around methodology also.
IOW your argument seems to be that since evolution shaped morals and evolution is all about surviving, then morality is about surviving period. But evolution led to us, and other moral creating species, making morals about much more. Perhaps you need to note what evolution has selected for: and in this case it is moral making animals with limbic systems involved morals at all levels.
Personally, I don't really care what evolution wants or intends, but I can see what was selected for in our case.
If it somehow turned out that rationality indicated I should kill off my wife after she births our second child - that the best AI's analyzing all the complex chains of effects, sees this as the best heuristic for human survival that husbands/fathers do this OR EVEN if God told me to do it...no way. I won't. I fail God's test of Abraham, though I have often wondered if in fact he failed it.
This was an extreme example - though one that fits nicely with our other discussion - but there are all sorts of other moral-like guidelines I would follow regardless of what the best minds said was our best strategy for survival. And if you think that is a problem, blame evolution. Evolution made my moral making or in my case preference-making process to be such that there are things I will not do (even for money or what the supposedly detached people with views from nowhere say is moral). And there are things I will do that may go against their supposed best heuristics.
Karpel Tunnel wrote:If you take away the emotions, you are then claiming that what in fact are really just tactics are morals.

Or rather, morals are a tactic that evolved because they kept people who used them alive and helped them reproduce.
see above about emotions always being in the mix of creating, applying, modifying, justifying...etc. That is the tactic we evolved.

Karpel Tunnel wrote:If you are indifferent emotionally about those goals, why enter the discussion at all`?

Why not let people who emotionally prefer certain outcomes, ways of relating, decide?

It doesn't matter to you.


This is a strange line of argument.
1) Should only people that are emotionally invested in outcomes discuss anything? Like, only mathematicians that are emotionally invested in a specific outcome to the Riemann Hypothesis should spend any time trying to figure it out?
They all are emotionally invested in finding the correct outcome. And they likely all are interested in their, perhaps at this stage vague guess of direction, being the right guess. And the one who solves it will have been extremely emotionally involved in finding the answer. IOW I wasn't arguing that Carleas shouldn't participate, but rather trying to highlight – corner you - that you are likely driven by emotions, even in this telling us we should prioritize survival because that is what evolution gave us morality for. This likely seems like a view from nowhere, but absolutely cannot be once it is couched as a should. Further the results of the Riemann hypothesis is not like the results of a morality argument or decision about how we should, for example, relate to each other. The latter has to do with what we like, love, hate, desire, are repulsed by and those emotional reactions will guide our personal and collective decisions about what is moral. In fact they must. If they are not involved we may all end up working in some dystopian, panopticon-tyranny that seems efficient and at least in the short term seems to completely guarantee survival, but which we hate every waking minute living in. For example. I think there are other problems that will arise. Some can be based on the emotions now having an unremovable part in what we will even want to survive in and thus making emotions a selection factor, like it or not. Others need not even be bound to your fundamental should - we must base our morals on what we think evolution intended morals to be for.

2) Often people who are emotionally invested in an outcome are the worst people to solve it. That's why we have courts and arbitrators and mediators and trilateral talks. Neutral third parties are often better at resolving disputes.
Often the people who come up with morals in what they think is View from nowhere or objective or disinterested end up making horrible decisions. I would not use that as an argument against using rationality. I don't think it works as an argument against including emotions. And as far as I can see the people who judge emotions and present themselves as avoiding their influence, they are less aware of how their emotions are influencing their choices than the people who do not present their ideas this way. But further those groups that make decisions are applying morals decided in part on emotional grounds. And they likely have strong feelings about those morals. Courts often use juries, lawyers use emotional arguments, etc. Yes, emotions can lead to wrong decisions. But they are central to morals, determining what morals are and how they affect us. Anyone trying to eliminate emotions from the process of deciding morals, will be incredibly lucky if they come up with a moral system that does not feel unnecessarily bad in a wide variety of ways. And if they have the single goal of species survival, this could lead to solutions like…

It is moral to kill off 70% of the population tomorrow, have an elite take over complete control of all genetic combination – read: via sex, gm work, etc. And so on.
Any country that decided to come up with morals without including emotions in the process is one I would avoid, because essentially such a country with that one goal has no interest in the bearers of the genes, except to the except they bear them. Science fiction has many such dystopian ‘logical’ solutions.
3) I'm not saying emotion does nothing or doesn't matter, I'm saying emotion isn't the basis, isn't a component, of morality. As I say above, emotions will often align with morality, and 'naive' morality will often align with survival, because they all evolved to the same ends. But where they differ, it is survival that wins. And I, weak as I am, feel and follow emotions, but I often do so knowing that it is immoral.
How could it be immoral, in your system of belief, since we clearly evolved with this mixed approach to choosing, creating. It is part of our evolved criteria in all such decision-making.
Karpel Tunnel wrote:I can only right now take a stab at it with a reductio...

This poses the interesting question of how to distinguish rationality from morality, i.e. rationality also evolved, so why doesn't survival trump rationality? I would look to what rationality and morality each purport to do. Rationality is an attempt to describe something that exists independently of humans. It is a way of describing the world. Morality, by contrast, is something we created (or that was created as a part of us).
I think you would have to agree with this distinction: if morality is based on our emotions, then it doesn't exist in a world without our emotions. Rationality, logic, math, those things exist without us.
This makes you a kind of Platonist or some other form of metaphysics that has these things outside us. But here's the thing, you are deciding to NOT work with morals the way we obviously have evolved to work with morals and clearly to me emotions are involved deeply in all moral evaluation and they become clearly visible when there are disagreements about morals, which are regular and ongoing. I don't really care what evolution may have intended my emotions, morality and rationality to be for and have as goals - and perhaps I am more adapative precisely because I take the freedom given to me by evolution and don't care to let it's intent rule me. But for the sake of argument, let's say I should go with evolution's intentions: shouldn't I then go with the full set of ways one evaluates and chooses emotions, which would include both emotions and rationality whichare intermingled and interdependent in any case? And yes, morality does not exist in a world without emotions and it never has. Animals had behavior before emotions, perhaps, but not morality.

Further, 'rationality' is of a different category than the rest of that list. I would need to know where you see rationality existing without us. And whose rationality? Rationality is a human process - also exhibited, in more limited though often highly effective forms, in animals. We can call it an animal process. For it to function well in terms of human interactions, the limbic system must be undamaged, and emotions are part of that process. But even without that proviso, I do not see where rationality could exist outside us within the physicalist paradigm, unless we are talking about aliens or, perhaps one day, AIs.

And note:
Karpel Tunnel wrote:
I tend to think the way we evolved is more adaptive than the suppressed limbic system version of humanity you are advocating for.
Carleas:
I have not and am not advocating any such thing.
I know. And I certainly know it is not explicit. It was an intuitive reaction.
I still think this is in the air.
I think one of the reasons we have intermeshed emotional and rational decision-making is that the higher forms of rationality get weaker when there are too many variables and potential causal chains. And rationality tends to have a hubris that it can track all of this. For some things, emotional reactions have a better chance, though of course they are fallible. But then both are fallible, and both are intermeshed. And just as some people are better at rationality, some are better at intuition than others. I would find it odd if how we lived were determined without emotions and desires being central to the process of determining it.

I've mentioned Damasio; here's a kind of summary. It is obviously better to read his books or articles:
https://www.huffingtonpost.com/fred-kof ... ccounter=1

People can't even make choices without emotions. And here, with morality, we are talking about making choices about things we react to with strong emotions - choices whose effects touch us emotionally, that affect our desires and goals on emotional levels.

I see no reason to consider my DNA more important than my life - how it is lived, what it feels like, what my loved ones experience, the state of what I value, nature, etc.[/quote]
Karpel Tunnel
Thinker
 
Posts: 852
Joined: Wed Jan 10, 2018 12:26 pm
