I do work from the assumption that we are both conscious beings.

Carleas wrote:
Karpel Tunnel wrote: It seems to me any conclusions drawn about how function relates to interiority are speculative in the extreme.
Part of my point is that every conscious thing has access to a system from both a functional and experiential perspective, i.e. each conscious being can see both itself as a functioning system and itself as an experiencer. No speculation is required to tie together being hit in the head and losing consciousness; we can connect them through direct observation and induction.
But I think we can go further than that. The concept of consciousness comes after the concept of other minds, i.e. we accept that other humans are conscious before we develop a concept of consciousness, so other minds are baked into that notion by default. We can try to back out some concept that's independent of other minds, but I think that's much harder than it seems. The words I'm using, the divisions in my ontology, so much of how I see the world is totally dependent on the other minds that taught me how to see it, and so much of that learning depended on seeing them as other minds.
I don't know how valuable this part of the discussion is, but maybe it helps to get us on the same page about what we're even talking about. I think we have to take as a given that you and I are both conscious beings, and that gives us a great deal to work with in terms of identifying and understanding consciousness and its relationship to the physical systems to which it is attached.
My hoover avoids objects, remembers where it started, tries different approaches to objects when blocked, etc. Presumably it is not directly perceiving the external world any more than we are. But good. You are already pretty fringe in your beliefs (as am I). IOW I am generally happy when someone will accept the fringy aspects of their own beliefs. I tend to use counterexamples to either tease this out or to criticize their arguments. I am happy if it works to tease out a less than mainstream belief, in fact happier than if they concede that their position had a problem.

Carleas wrote: I'm not sure the extent to which any specific robotic vacuum can be said to be conscious (as predicted, I question how much they act like they are conscious), but in general I grant that this may be true. See below re reefs.
Karpel Tunnel wrote: It seems like you have rejected [philosophical zombies], but not on grounds of coherence, but via definition.
Right, there is no consciousness in philosophical zombies. Just as most scientists would not grant even a limited consciousness to my hoover, the zombies are simply vastly more nuanced and complex entities - they behave in more complicated ways.

Carleas wrote: That seems like a distinction without a difference: there's no coherent concept of 'consciousness' that is compatible with the existence of philosophical zombies, in the same sense that the concepts of 'square' and 'circle' entail that there cannot be a square circle.
Right, but this seems tautological to me. You are defending your position using your position. We have good grounds for believing that other human minds are like us because they are made up of the same stuff and do similar things.

Carleas wrote: EDIT: to clarify, I should relate this to my point above, that the concept of consciousness has other minds baked in. So we define consciousness in part by the observed behavior of other minds, and a philosophical zombie is the conceptually incoherent proposal that another mind is not actually another mind.
In this context you are arguing that anything that behaves like we do - that behaves as if it has our cognitive functions - should be treated as conscious. But that is very speculative. Imagine we build AIs with silicon as their core material, and it turns out to be carbon that has some weird emergent property in complex entities, a property that yields consciousness - an experiencer - as a byproduct. We make functionally intelligent AI robots, but really, no one's home.
It is an assumption that complexity causes consciousness. It is an assumption that certain kinds of behavior entail consciousness (and I mean that it is an assumption both that such behavior is necessary AND that it is sufficient). But we don't know what causes consciousness - yes, we can hit people with rocks and they don't remember stuff, but you can be conscious and not form memories; there are even dental drugs that allow for this. And we don't know what is necessary. Perhaps plants are conscious, perhaps everything is, even without all those functions we have. Perhaps only certain materials or configurations lead to consciousness, while all sorts of things can be made that are 'like us' in terms of behavior. We don't, it seems to me, just get to hop forward as if we know what consciousness is and what it depends on.
We can certainly form a consensus that, in case AIs are conscious, we will treat them with empathetic guidelines once they achieve a certain level of function - an application of the precautionary principle. But it would be speculative.
And then it would be odd if we decided that plants are conscious - since they exhibit memory functions, nervous-system-like reactions, communication, problem solving and more, much of it, though not all of it, happening at a slower pace than in animals.
So we might have plants that are relatively simple compared to the threshold at which we grant AIs consciousness. Then we would have to shift that threshold downwards.
And then, what if limited consciousness is actually only limited function - what if a plant or even the hoover has as much consciousness as us, is just as aware, has experiences just as intense, but simply has limited functions?