There Is No Hard Problem

The origins of the imperative, "know thyself", are lost in the sands of time, but the age-old examination of human consciousness continues here.

Re: There Is No Hard Problem

Postby Karpel Tunnel » Fri Feb 08, 2019 1:35 pm

Carleas wrote:
Karpel Tunnel wrote:It seems to me any conclusions drawn about how function relates to interiority are speculative in the extreme.

Part of my point is that every conscious thing has access to a system from both a functional and experiential perspective, i.e. each conscious being can see both itself as a functioning system and itself as an experiencer. No speculation is required to tie together being hit in the head and losing consciousness, we can connect them through direct observation and induction.

But I think we can go further than that. The concept of consciousness comes after the concept of other minds, i.e. we accept that other humans are conscious before we develop a concept of consciousness, so other minds are baked into that notion by default. We can try to back out some concept that's independent of other minds, but I think that's much harder than it seems. The words I'm using, the divisions in my ontology, so much of how I see the world is totally dependent on the other minds that taught me how to see it, and so much of that learning depended on seeing them as other minds.

I don't know how valuable this part of the discussion is, but maybe it helps to get us on the same page about what we're even talking about. I think we have to take as a given that you and I are both conscious beings, and that gives us a great deal to work with in terms of identifying and understanding consciousness and its relationship to the physical systems to which it is attached.
I do work from the assumption that we are both conscious beings.

I'm not sure the extent to which any specific robotic vacuum can be said to be conscious (as predicted, I question how much they act like they are conscious), but in general I grant that this may be true. See below re reefs.
My hoover avoids objects, remembers where it started, tries different approaches to objects when blocked, etc. Presumably it is not directly perceiving the external world any more than we are. But good. You are already pretty fringe in your beliefs (as am I). IOW I am generally happy when someone will accept the fringy aspects of their own beliefs. I tend to use counterexamples to either tease this out or to criticize their arguments. I am happy if it works to tease out a less than mainstream belief, in fact happier than if they concede that their position had a problem.

Karpel Tunnel wrote:It seems like you have rejected [philosophical zombies], but not on grounds of coherence, but via definition.

That seems like a distinction without a difference: there's no coherent concept of 'consciousness' that is compatible with the existence of philosophical zombies, in the same sense that the concepts of 'square' and 'circle' entail that there cannot be a square circle.
Right, there is no consciousness in philosophical zombies. Just as most scientists would not grant even a limited consciousness to my hoover, the zombies are simply vastly more nuanced and complex entities - they behave in more complicated ways.
EDIT: to clarify, I should relate this to my point above, that the concept of consciousness has other minds baked in. So we define consciousness in part by the observed behavior of other minds, and a philosophical zombie is the conceptually incoherent proposal that another mind is not actually another mind.
Right, but this seems tautological to me. You are defending your position using your position. We have good grounds for believing that other human minds are like ours because they are made of the same stuff and do similar things.

In this context you are arguing that anything that behaves like we do, that behaves as if it has our cognitive functions, should be treated as conscious. But that is very speculative. Imagine we build AIs with silicon at their core, and it is carbon which has some weird emergent property, in complex entities, that yields consciousness as a byproduct. An experiencer. We make functionally intelligent AI robots, but really, no one's home.

It is an assumption that complexity causes consciousness. It is an assumption that only if you have certain kinds of behavior that entail consciousness, then you have consciousness (and I mean that it is an assumption both that such behavior is necessary AND an assumption that it is sufficient). But we don't know what causes consciousness. Yes, we can hit people with rocks and they don't remember stuff, but you can be conscious and not form memories; there are even dental drugs that allow for this. And we don't know what is necessary. Perhaps plants are conscious, perhaps everything is, but without all those functions we have. Perhaps only certain materials or configurations lead to consciousness, while all sorts of things can be made that are 'like us' in terms of behavior. We don't, it seems to me, just get to hop forward as if we know what consciousness is and depends on.

We can certainly form a consensus that, in case AIs are conscious, once they achieve a certain level of function we will treat them under empathetic guidelines: an application of the precautionary principle. But it would be speculative.
And then it would be odd if we decide that plants are conscious, since they exhibit functions of memory, nervous-system-like reactions, communication, problem solving and more; much of it, though not all of it, behavior at a slower pace than in animals.
So we might have plants that are relatively simple compared to the threshold at which we grant AIs consciousness. So we would have to shift that threshold downwards.
And then, what if 'limited consciousness' is actually only limited function? What if a plant or even the hoover actually has as much consciousness as us, is just as aware, has experiences just as intense, but has limited functions?
Karpel Tunnel
Posts: 3436
Joined: Wed Jan 10, 2018 12:26 pm

Re: There Is No Hard Problem

Postby Prismatic567 » Sat Feb 09, 2019 6:36 am

Carleas wrote:That sounds an awful lot like Lamarckian evolution, i.e. organism sees some future threat, bears children in the direction of the solution to that threat.

That isn't how evolution works. Think of it more like a filter: at every generation, some organisms pass the filter and some don't. The genetic lineage of those that don't pass ends, the genetic lineage of those that pass continues. Those organisms with traits that make them more fit for the current context tend to pass the filter more frequently; those with fewer such traits, or with maladaptive traits, tend to be filtered out.

Potential threats aren't a part of the filter. Predictive modeling may be sexually selective (maybe talking up future planetary calamities is a good pickup line), but that isn't a given, and in any case what's being selected for isn't predictive modeling per se, it's game that sometimes happens to come in the form of predictive modeling.

The space missions are most likely peacock feathers and spandrels. Predictive modeling makes people fit by e.g. helping them hunt or cooperate, so displays of intelligence became a kind of mating dance (i.e., it's like peacock feathers). Intelligence ended up being useful for going to space, but since only 536 humans have ever gone to space, the ability to go to space can't itself have been selected for (i.e., it's a spandrel).
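Carleas's filter picture can be sketched as a toy simulation (my own illustration, not anything from the thread; the single trait value, population size, and copying-noise level are all arbitrary assumptions):

```python
import random

random.seed(42)

# Each organism is reduced to one trait value in [0, 1]; higher means
# better suited to the current context (a toy stand-in for real traits).
population = [random.random() for _ in range(1000)]

def one_generation(pop):
    # The "filter": each organism passes with probability equal to its
    # trait value. Lineages that fail the filter simply end.
    survivors = [t for t in pop if random.random() < t]
    # Survivors reproduce back up to the original population size, with
    # small copying noise, clipped to stay within [0, 1].
    return [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.02)))
            for _ in range(len(pop))]

for _ in range(20):
    population = one_generation(population)

# The mean trait value rises generation after generation, even though no
# organism foresees anything: the filter alone does the work.
print(round(sum(population) / len(population), 2))
```

Note that nothing in the loop looks ahead at future threats, which is the point of the filter analogy: differential survival in the current context is sufficient to shift the population.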
I did not argue that evolution in terms of bearing children [natural selection] is directly linked to the progress of consciousness.

Rather, the progress of human consciousness is based on the existing state of the average human consciousness of all human beings, i.e. there is no killing off of the 'lesser' and promotion of the 'stronger'.

Note there is now a tremendous increase in the average literacy rate worldwide.

There is a trend of exponential exposure of people, even from primitive tribes, to the internet, satellite TV, smartphones and other advanced technologies, which in a way facilitates an increase in their level of consciousness, e.g. leading to greater awareness of global warming and other potential threats.

Brazilian rainforest: Remote tribes given smartphones to prevent 'being massacred by ranchers' ... rs-1514733 ... _phone.jpg

The jungle village hooked on their phones
Tech has arrived in this indigenous village in the remote Amazon jungle. Many young people now spend their time engrossed in their phones and social media pages. ... eir-phones

The ability to go into space is one of the foundations for meeting potential threats from outer space that could exterminate the human species, e.g. a rogue asteroid heading Earth's way.
Note I linked this earlier;

Nasa probe to smash into asteroid and knock it out of orbit in first ever planetary defence system test ... 59751.html

There is a wide range of threats against the human species, and the increasing level of consciousness and cognition of the average human is to ensure that the human species has the potential to deal with these threats, with no certainty but at least the possibility of success.

Space exploration, education, smartphones and the internet are only a few examples; there are tons of other factors contributing to raising the consciousness and cognition of the average human being at present.

My point is thus;
The increasing average level of human consciousness is critical to, and related to, the greater awareness of pre-existing threats [previously not known] to the human species.
In addition, there are also new potential threats arising out of the increasing human population and the rise of human consciousness itself.
I am a progressive human being, a World Citizen, NOT-a-theist and not religious.
Posts: 2854
Joined: Sun Nov 02, 2014 4:35 am

