Karpel Tunnel wrote:[S]ince you think we should base morals on 'survival', it would be good to define what would count as survival.
Karpel Tunnel wrote:some of these moralities support we not survive – anti-natalism
Karpel Tunnel wrote:We are this way. I don't see why I should just abstract out and respect in SHOULD terms evolution’s intent for morals, but ignore evolution's result in making me/us the way I am/we are.
Karpel Tunnel wrote:[Y]ou are deciding to NOT work with morals the way we obviously have evolved to work with morals [...]
Karpel Tunnel wrote:[R]ationality tend to have a hubris that it can track all th[ese] [many variables and potential chains of causes].
Karpel Tunnel wrote:[H]ow do we know that morality is not a spandrel?
Karpel Tunnel wrote:even if it is not, how do we have an obligation to the intent of evolution, in what sense are we beholden to function? Function, evolution, natural selection are not moral agents. What is it that puts us in some contractual commitment to following their intentions? If the argument is not that we are beholden but rather that X is what morality is for, so we should use it as X, a more determinist connection, then we don't have to worry about adhering to the function, since whatever we do is a product of evolutionarily-created function. Once I am supposed to follow evolution, use my adaptations, well, how can I fail? And if I fail as an individual, I am still testing for my species and if my approach was poor it will be weeded out. No harm, no foul.
This is one of the areas I was probing around because I think it may be very hard for many adherents of functional morality to stay consistent. Perhaps not you. If survival is connected to genetically related progeny having progeny that are genetically related - iow sustaining genetically related individuals through time - then transhumanism should be considered bad or evil, if we take the case of strong transhumanism where better substrates for consciousness and existence are created and homo sapiens, as a genetic organism (and physically in general, outside the nucleus of cells also), are no longer present. We will have replaced ourselves with something else. At least in terms of genetic material.

Carleas wrote:But, as promised, some thoughts on 'survival':
First, individual gene-line survival means an organism not dying until it produces offspring who are likely to not-die until they produce offspring.
At a group or society level, survival means the group continues to exist. It's a little vaguer here because the 'group' is somewhat amorphous, and there aren't discrete generations for reproduction, but a constant production and death of constituent members.
Defining the survival of any thing inherits the problems in defining that thing, i.e. the "can't step in the same river twice" problems. Moreover, where morality functions on the substrate-independent level of our existence (thoughts), it isn't clear whether the survival it requires is the survival of the substrate or the survival of the programs that run on it. Would morality support the transhumanist idea that we should abandon our bodies and upload our consciousness to silicon? Even if we take functional morality as true, I don't know that that question is settled.
I didn't say enough. Antinatalism is one of the moralities that evolution has given rise to. Right now it is a minority position. Perhaps it will become the majority or power morality. Then this is what evolution has led to. It might lead to our extinction, but evolution led to it. If I, coming from a now more minority position - before the anti-natalists sterilize all of us - push for my morality, which includes life, I must wonder, as the anti-natalists take over, if I am on the wrong side - if evolution has led to antinatalist morality and the anti-natalists win. Whatever happens would be functional, it might just not be what we want functional to be. IOW it was functional that dinosaurs became extinct. Evolution and natural selection are selecting for whatever fits - whatever fits, that is, with whatever else exists: other species, the weather, etc. I don't really see where I should do anything other than prioritize what I want, and let natural selection see to the outcomes. Just like every other individual in other species. Because once I follow my interests and desires, including mammalian empathy, I am living out what I have been selected to be like. Whatever this leads to is functional, though it may not include my kind.

Carleas wrote:Karpel Tunnel wrote:some of these moralities support we not survive – anti-natalism
Sure, but some morality is just wrong. Anti-natalism specifically is pretty clearly wrong, but that statement rests on the functional morality I'm advancing here.
If what you're asking for is which morality is the functional morality, I actually think that too is beyond the scope of this discussion. "There is an objective morality that we can discover" is a different claim from "X is the objective morality". I'm making the former claim here, and arguing that we should use the criteria of functionality to evaluate claims about the latter, but I am not making a specific claim about the latter.
Let's take this last bit first. 1) I think it is complicated. First, immediately, I want to stress that there is always the option of delaying judgment or agnosticism. Reason is not infallible - and is, often, guided by emotions and assumptions we are aware of and then also often by emotions and assumptions we are not aware of. So when in a real contradiction between emotions and reason, we might, especially if we do not seem to immediately lose anything, a) delay choice or b) make a choice but keep an agnosticism about whether it was the right one. 2) It depends for me on what reason, whose reason, and for that matter whose emotions/intuition. 3) A problem with the choice is that emotions and reason are mixed. It is muddy in there. Reason depends on emotions, especially when we are talking about how humans should interact - iow what seems reasonable will include emotional reactions to consequences, prioritizing inside reasoning itself, and the ability to evaluate one's reasoning (such as: have I looked at the evidence long enough? - which is evaluated with emotional qualia; see Damasio). And of course emotions are often affected strongly by memes, what is presented as reasonable, assumptions in society and culture, etc. When someone claims to be on the pure reason side of an argument, I immediately get wary. I just don't meet any people without motives, emotions, biases and so on. If we are trying to determine the height of a tree, OK, I may dismiss emotion-based objections after the rational team used three different measuring devices and came to the same measurement, despite it seeming off to the emotional team. But when dealing with how we should treat each other.....

Carleas wrote:Karpel Tunnel wrote:We are this way. I don't see why I should just abstract out and respect in SHOULD terms evolution’s intent for morals, but ignore evolution's result in making me/us the way I am/we are.
I don't disagree with this idea or those in the surrounding paragraph, but let me make an analogy.
Once, on a hot summer night, I awoke with intense nausea. I lay in bed feeling wretched for a minute staring at the ceiling, and the nausea passed. I closed my eyes to sleep again and soon again felt intense nausea. I opened my eyes, and shortly the nausea passed again. I did this a few more times as my rational faculties slowly kicked in, and then noticed that my bed was vibrating slightly. A fan that I'd placed at the foot of the bed was touching the bed frame, and creating a barely perceptible vibration. I put it together that the nausea was in fact motion sickness. I moved the fan, the bed stopped shaking, and I slept the rest of the night without incident.
The point here is that motion sickness is an evolved response to certain feelings of motion. In particular, our brains are concerned that certain unnatural sensations of motion are actually the result of eating something toxic. The nausea is a response that, if taken to its logical end, will cause us to purge what we've eaten, in the hopes that any toxins will be purged with it. In the evolutionary context, that's a useful response. But we did not evolve in the presence of beds and fans, and so the way we've evolved misleads us into thinking we're ill when in fact we're perfectly fine.
A similar thing can happen with morality, and understanding morality as a product of evolution, as a mental trait that evolved in a specific context and suited to that context, and not necessarily to this context, may let us "move the fan" of morality, i.e. shed moral claims that are clearly at odds with what morality was meant to do. Given a few thousand years and a few hundred generations of life in this context, we should expect evolution to get us there on its own, but we don't have the luxury of that.
So, yes, we are this way, there is some information in our emotions and moral intuitions and we should pay attention to them, just as we should take nausea seriously. But we can examine them in other ways at the same time. We can appreciate the ways in which evolution's result is inadequate to its purpose, and rely on the other results of evolution (rationality and the view from nowhere) to exert a countervailing drive.
You yourself make a few similar points further down, and I basically agree with them: our moral intuitions and emotions are not for nothing, they can be better than our reason for making decisions in certain cases, and we should treat them as real and expected and important in our decision making. But we should also treat them as subject to rational refutation. And when reason and emotion conflict in making statements of fact about the world, reason should prevail (though perhaps you don't agree with that).
The cardiac surgeon, in all likelihood, is working on someone who smoked or overate and did not move around very much. And if they did, then the cardiac surgeon is adding a way of working on top of what evolution set us out to do. But even more importantly, if we are to take from evolution what morality's function is, why would we then ignore what evolution has given us? So it is that juncture I am focused on. I don't have problems with technology per se. IOW my argument is not, hey that's not natural - with all the problems inherent in that - but rather...

Carleas wrote:Karpel Tunnel wrote:[Y]ou are deciding to NOT work with morals the way we obviously have evolved to work with morals [...]
Yes, I think that's right. But so too are cardiac surgeons deciding not to work with hearts the way we evolved to work with hearts. The project of moral philosophy, as I understand it, must involve some very unusual treatment of moral intuitions, ones that are obscene to our evolved first impression in the way that delivering a baby by C-section is obscene to someone who only understands it as stabbing a pregnant woman in the belly.
They may be pompous and unjustifiably self-assured systems of belief, but the jury is still out on whether they 1) added to both survival AND better lives or 2) are still better than secular humanism, say. Testing such things is not easy.

Carleas wrote:Karpel Tunnel wrote:[R]ationality tend to have a hubris that it can track all th[ese] [many variables and potential chains of causes].
I don't think this problem is unique to a rationally-grounded moral system. Emotions too can be a basis for hubris; emotion-based religions are some of the most pompous and unjustifiably self-assured systems of belief that we've ever seen. We should not be overconfident.
I disagree. I make very rapid decisions all the time whether to go with intuition or to pause and analyze and reflect. Actually, I think nearly the opposite of what you said. We cannot make such decisions without intuition. Certainly reasoning can come in also. But reasoning can never itself decide when it should be set in motion, when it has done enough work, when it is satisfied it listened to the right experts, when it is satisfied with its use of semantics in its internal arguments.

But reason's advantage is that it scales: we can use reason to analyse other modes of thought, and even reason itself. Through it, we can identify situations where relying on intuition is better than relying on deliberate reflection. We can't do that emotionally.
Actually, I think if we go into the phenomenology of checking out an argument, we will find that intuition rings a bell, and then we zoom in to find out why. Especially in tricky arguments.

We can rationally examine emotion, and while we can feel things about reason, we can't get very far with it.
I'd need to see the science. I am not even sure this is the case. If you are chaotically amoral, well, that leads to a lot of bad reactions, unless you are some kind of autocrat - so in your home, in your company, in your country, if you are the boss, you can probably get away with a lot, and in fact those guys often have a lot of kids, all over the place. Hence they are evolutionarily effective. But more pragmatic amoral people, I see no reason for them not to thrive. Maybe, just maybe, less in modern society, and maybe less in tribal society. I think they have many benefits in between, even the chaotic ones. In fact a lot of the names in history books probably had amoral tendencies...and quite a few kids.

Carleas wrote:Karpel Tunnel wrote:[H]ow do we know that morality is not a spandrel?
How do we know any evolved trait isn't a spandrel? We can look at whether morality influences reproductive success, whether it imposes costs that would require a benefit to offset, whether it's been selected against in isolated populations, etc. I think all these things suggest that it isn't a spandrel, that it's been selected for as part of an evolved reproductive strategy:
- Amoral people tend to suffer socially.
I do wonder how they do on creating babies however.

Psychopaths can and do succeed, but they depend on the moral behavior of others, and they are also employing a high risk, high reward strategy (many psychopaths are killed or imprisoned, but many others are managers or politicians).
If it is in all populations it might be neutral or only slightly negative functionally. A byproduct of some other cognitive capacities that helped us. Again, testing this hypothesis is hard.

- Morality entails evolutionary costs, e.g. forgoing actions with clear immediate reproductive benefits like theft of resources, murder of rivals, or rape of fertile women. That suggests that it has attendant benefits, and that forgoing these provides a reproductive benefit in the long term, e.g. reciprocal giving and social support, not being murdered, and better mating opportunities long term.
- To my knowledge, morality exists in all human populations, including isolated populations. The isolation may not have been sufficiently long to permit evolutionary divergence, but given the presence of psychopaths it seems that the genes for amorality were there to be selected for and haven't come to dominate any society.
I think we have problems on the diet end of your justification, not because of faulty desires, but rather due to cultural problems. I think sugar is a drug and we use it to self-medicate. Psychotropic drug. You know that old thing about rats triggering cocaine being made available or stimulating the pleasure center of the brain? The idea that if we could we would just destroy ourselves? Well, they redid that experiment but gave the rats complicated, interesting environments and very few got addicted. And I can imagine that even the nice complicated homes they gave the rats probably had less of the smells that rats' bodies expect and lacked the nuance there is in what was once the original environment of rats. I think, just as in the cardiac surgeon example, we are using culture to fight nature that is having a problem because of culture.

Consider the example of motion sickness, or of sugar, or of any other evolved predispositions that we can rationally understand to be actively counter to the reasons for which they evolved. We have intuitions that motion not dependent on our moving our limbs means we've been poisoned and need to purge, and that sugar and fat are good and we should eat them all as much as possible. But we know that these are false, that our evolved tendencies are misleading us, and they are misleading us because of the context in which we evolved, in which such motion did mean poison, and sugar was a precious resource.
In the 'are there objective morals?' sense, I am certainly a moral nihilist. On the other hand this does not mean we need to stop determining what we want. We can ignore whatever we think evolution intended and decide what we want. No easy task, of course, given our wants, natural and culturally created, often by those with great power in their interests. But given that as social mammals we have self-interest but also empathy and tend to collaborate, there is room for desire to create what may not be morals but heuristics. Desire (and emotions and intuition) in collaboration with reason.

So too did morality evolve in that context, ought-ness is derived from our evolutionary past, and we can look at it in that light. Without reference to its evolved purpose, it has no meaning. If we take the position that the evolved meaning of morality is not relevant, it seems the only alternative is moral nihilism.
Ecmandu wrote:Morality is not about survival in some form, it's about the quality of it.
I am not sure if we had abstracted it before we had evolutionary theory, but we certainly had morality out of that context and even so now. IOW often morality goes against, at least so it might seem, my own benefits in relation to natural selection as an individual, and at the species level is not based on this consideration, at least consciously. Let's for the sake of argument accept that morality was selected for. OK. And in what form? Well, it hasn't, generally, been in the form: whatever leads to survival is the Good. What got selected for was a species that framed moral issues in other ways. So if we want to respect natural selection, we would continue with that unless we have evidence that this is not working.

Carleas wrote:The role this plays in my argument about morality is that we can't abstract morality out of its evolutionary context.
Perhaps I'll reword in response to this: consider the possibility that having moralities that go beyond, do not focus (just) on survival or even mainly on survival is vastly more effective. That we have other ideals leads to more cohesion or whatever, as one possible side effect.

We can look at morality as a phenomenon in the world, observe it empirically, ask what it does and why it exists, and in so doing discover what it means to say that one ought or ought not do X. The only objective reference of those terms is that evolutionary origin, the role morality played and why it exists. The only recourse for morality is thus to its function: it exists because it improved the odds of survival of the individuals and groups who shared the trait.
I think we need a definition of survival. Is it the continuation of homo sapien genes? Anything beyond that?

But there is a predictive element to this position: we're talking about a prospective best-guess about the effect a system will have on survival.
Antinatalism combined with cloning and whatever the rich decide are the best techs to keep their lives long would certainly seem to have a good chance. I mentioned earlier some dystopian scenarios that might very well have great survival outlooks. I think it would be odd to not immediately come in with quality of life, fairness, justice type moral objections, even though the truth is the survival of homo sapiens might be best served by some horror show.

Similarly, in prospect, we have every reason to think that antinatalism is not a good moral system under the metric of survival.
Our modes of choosing partners and being social have changed a lot over time. I see no reason to assume that further changes will not take place. We can control much more. Food production from hunter-gatherer to ancient agriculture to modern agriculture to GM agriculture with crops that cannot breed. Why assume that the 'best' method for human production will not radically shift? And it's not like they are not working toward that out there.

I think we can say similar things about your proposed reductios (transhumanism and AI breeding a new batch of humans for one generation every x-thousand years). It may be that those methods produce survival better, and that could be shown by someone trying those systems and actually surviving. But regular reproduction and genetic evolution have proved a pretty effective means of survival; it's reasonable to think that they will more effectively continue our survival than exotic systems like the AI harvesting, breeding, and euthanizing generations of humans.
Here you mention seeing society survive. There would be a society, it would just be different; but further, why should evolution care about the specifics of human interaction, if the point is homo sapien survival? It seems to me you are smuggling in other values than survival in that word 'society'.

Moreover, if what we want to see survive is society, then a bunch of DNA held in stasis doesn't well achieve that goal (this goes to what particular form of survival is best, which I don't think is answered by functional morality, nor does it need to be answered for the purposes of making a case for functional morality).
I would think it will. I would guess that it is already in place, in many ways, in the business world, and that Amazon could use functional morality to justify its panopticon, radically efficiency-focused, horrific workplaces. That words like dignity, sense of self, fairness no longer have any priority. Now a sophisticated functional morality, one that looks way into the future, might find that such business practices somehow reduce survivability....but...

I don't mean to be too dismissive of oddness as a weakness, I do think intuition is often useful as an indicator of subtle logical mistakes. But I also think our oddness tolerance should be properly calibrated: even given that we're committed to the positions you propose, the scenarios themselves are so odd that any moral conclusions about them will feel odd. If functional morality gets odd at the margins, so does every moral system I've ever seen. We have poor moral intuitions about AI, because we have never actually encountered one. In every-day situations, functional morality will work out to support many naive moral intuitions, and will approximate many sophisticated consequentialist and deontological systems. Are there any everyday situations where functional morality gets it wildly wrong?
Jakob wrote:So Karp, you see morality as a form of leverage (human) nature has on its most powerful parts?
Not sure if I summarize you right. I like the idea I come away with in any case.
Also I agree/think that evolution can not be explained by using the term evolution.
Karpel Tunnel wrote:I have to admit I am too lazy to go back and understand the points you are responding to. I will just respond below to points I have opinions about now, reactions that may even contradict things I've said before.
Karpel Tunnel wrote:Let's say that romance is really just pheromones and dopamine driven altered states. Let's say that it is actually the best description. It still might radically damage humans to think that way.
Karpel Tunnel wrote:I think we need a definition of survival. Is it the continuation of homo sapien genes? Anything beyond that?
...
Here you mention seeing society survive. There would be a society, it would just be different; but further, why should evolution care about the specifics of human interaction, if the point is homo sapien survival? It seems to me you are smuggling in other values than survival in that word 'society'.
Karpel Tunnel wrote:I actually think that some rather dystopic solution would be most likely to extend the survival of homo sapien genes and lives.
Carleas wrote:Are there any everyday situations where functional morality gets it wildly wrong?
Karpel Tunnel wrote:I would think it will. I would guess that it is already in place, in many ways, in the business world, and that Amazon could use functional morality to justify its panopticon, radically efficiency-focused, horrific workplaces. That words like dignity, sense of self, fairness no longer have any priority. Now a sophisticated functional morality, one that looks way into the future, might find that such business practices somehow reduce survivability....but...
1) maybe it is better to start from other criteria - even if they all boil down somehow to survivability, which I doubt
2) I suspect that some other nightmares will be just peachy under functional morality and in any case we will have no tools to fight against them. We would then have to demonstrate not that many of these things we value are damaged but rather that this process damages survivability, perhaps decades or hundreds of years in the future.
If we limit morality to survivability, I suspect that we will limit our ability to protect our experiences against those with power.
Karpel Tunnel wrote:Let's say that romance is really just pheromones and dopamine driven altered states. Let's say that it is actually the best description. It still might radically damage humans to think that way.
That's not quite getting two points, I think. 1) Evolution led me to developing a morality in a certain manner. It selected for this. It has also selected for the way I think about it and couch it in language. You are suggesting that we now develop it in a different manner and think, in words, about it in another manner. I have both gut and rational negative reactions to the way you want to couch it. Your argument is based on the idea that evolution has selected morality in a certain manner and we should consciously do this in this manner. But evolution has not selected for that doing, at least, not yet. There are other ways to modify our morality that do not rely on what I consider an extremely restricted heuristic - that which increases survival is good. I am not arguing that my way of viewing morality is wrong and that the truth, if different, could be harmful, but rather suggesting that what has been selected for offers a wide range of heuristics - for example not just focusing on survival - and I see no reason to pare things down. 2) You say above that a moral obligation is to go against one's nature - where it is problematic, I presume - and evaluate only in terms of survival. But our heuristics take in more factors and my tribe wants that, though they do not necessarily agree on priorities or even factors; none of them, not a single one, I have ever encountered, wants us to evaluate something only in terms of survival. That's really quite mammalian of us, and certainly primate of us. And as humans, as apex primates, I think we would want to be very careful about streamlining the complex heuristics that at least millions of years of trial and error have developed. Even if, just like our eating, we may be led astray by things that worked on the veldt.

This is a good analogy because it distinguishes our positions well. My attempt here is to provide "the best description" of morality.
You say that you are "not sure why [you] have an obligation to go against [your] nature and view morality or preferred social relations as to be evaluated only in terms of survival", and my response is that that is just what it means to have a moral obligation. Insofar as "morality" means anything, insofar as it means anything that one "ought" to do something, it means that doing that thing will advance survival, for oneself or one's tribe.
We have other ways of dealing with these problems than, for example, reducing fears, though this is often the current approach to what are seen as irrational reactions - that is, the entire pharma/psychiatric approach to not feeling so good.

And I agree with your observation that "[w]hat got selected for was a species that framed moral issues in other ways". So too was flavor selected for rather than nutrients, and instinctive fear rather than insect biology, and pleasure and pain rather than reproduction and anatomy. And just as we have used the study of nutrition to recognize that some things that taste good are nonetheless harmful, and that some insects that scare us are nonetheless harmless, and that some things that feel good are bad and others that hurt are good, so too can we decide to overcome our moral intuitions in favor of an explicit morality that, while devoid of romance, is empirically rigorous.
Karpel Tunnel wrote:I actually think that some rather dystopic solution would be most likely to extend the survival of homo sapien genes and lives.
Nice point. To better put my objection: I see no objections, in terms of the survivability criterion, to scenarios that I think pretty much everyone would be horrified by - guessing, also, including you. I have given some examples. You may have argued against them. But I think they are very hard to criticise with the one remaining moral criterion: does it secure our survival well? If we can come up with horrifying dystopias - horrifying to us - that nevertheless, at least on paper, seem to meet the single criterion, I think that speaks against having that single criterion.

First, I'll note that this is a bit question begging. A solution is dystopic in part for violating some moral principle, so to some extent this smuggles in intuitive morality as a given.
I think there is a category confusion here, but I will have to mull. I am not sure the morality/technology analogy works. If we have more information, that informs our choices. I am not saying we simply follow impulses. And we often have complicated impulses pulling us in a few ways. Generally in our more complex moralities, we don't just follow impulses. We look at consequences. The difference with your methodology is you have one criterion. That could be handled impulsively also.

Second, as I said above, I think intuitive morality will fail us more and more frequently as time goes on. To use a near-term example that you bring up: in the past, we just didn't know what genetic pairings would produce good or bad outcomes, so we left it to chance and instinct. But chance and instinct frequently misled us, and we ended up with immense suffering over the course of history as a result. Pre-modern societies just killed children who didn't develop right, and many women died in childbirth as the result of genetic abnormalities in their developing babies. So if we suggest that greater deliberation or intervention in genetic pairings going forward is somehow immoral, we need to weigh that against the immense suffering that still happens as a result.
Yes, and we can make those moral decisions based on that information coupled with a variety of moral priorities. Or we can take that information and use it in relation to the single one you present. What if the AI decides that certain birth defects are beneficial because they lead to a population that is easier to control? Humans who cannot walk cannot lead a rebellion against the survival society's rigid controls. That humans still have irrational desires for good quality of life is part and parcel of their DNA, but if they are born without feet, they are less mobile, easier to track, easier to protect, and less likely to successfully overturn a society that they irrationally judge as wanting because of stone age desires.

I'm not arguing in favor of such intervention; rather I mean to say that merely knowing, merely developing the ability to predict genetic outcomes in advance, requires us to make a moral decision that we never had to make before.
But then the AI might find that increasing depression leads to greater stability and better control. I would throw in heuristics that include potential for happiness, freedom of movement, room to create, freedom to associate and a wide range of others. But these might very likely seem not to add a bit to survivability as long as top-down control is very effective. They might even be viewed as a negative. A Matrix-like scenario where we are not used as an energy source, but placed in virtual realities and vatted, seems like a very stable and controllable solution that might be viewed as top by AIs. I suppose it might even be pleasant, but I don't want it. And the AIs might find no reason, given the single criterion, to have it be pleasant.

It may be creepy to centrally control or regulate genetic pairing, but if we know that \(a + b\) will create a resource-hungry and burdensome locus of suffering, and \(a + c\) will create a brilliant and productive self-actualized person who will spread happiness wherever she goes, there is at least as strong an argument for the creepiness of not intervening. (Note that I don't use "creepy" in the pejorative sense here, I intend it as shorthand for the intuitive moral reaction and, subjectively, I think it captures what intuitive moral rejection feels like.)
Serendipper wrote:Carleas wrote:Serendipper wrote:You're good with people, so teach people to be as you are.
I appreciate you saying so, but I think this thread shows otherwise!
Well, you concede points and have a sense of fairness and equity that's unique among forum-people. You don't see it about yourself? Surely I'm not the only one to notice. Lao Tzu said "To lead people, walk behind them." and you either do it innately or have learned somewhere along the way. I'll point it out the next time I see you do it.
Carleas wrote:Totally understandable
your response was still good and well appreciated.
This is a good analogy because it distinguishes our positions well.
And I agree with your observation
Carleas wrote:I've been reluctant to narrowly define survival for two reasons:
First, I'll note that this is a bit question begging.
Karpel Tunnel wrote:After posting what's below and then mulling I think I can boil down my two objections and be more clear than my groping.
Karpel Tunnel wrote:I am wary of reductionist approaches
Karpel Tunnel wrote:It may also correlate with dangers also, I will concede.
Karpel Tunnel wrote:Nice point. Better put, I see no objections
Karpel Tunnel wrote:I think there is a category confusion here, but I will have to mull.
Some of Carleas' opinions could possibly drive me crazy, but he is an exploratory thinker and poster. He is actually interested in critiques, concedes points, responds to at least many of the points I make, such that any frustration I feel is usually about how hard it is to make certain points in a clear way, and then also the problems where values and priorities are different. But the conversation itself, his side of it, is how I wish most philosophical discussions were carried out. Nice that you noticed.

Serendipper wrote:I don't mean to put you guys under a microscope, but I just wanted to encourage more of this type of behavior. Functional morality?
Karpel Tunnel wrote:Nice that you noticed.