Karpel Tunnel wrote:
But, in fact, in a physicalist universe everything is internal and external. There is just matter. There is no reason yet demonstrated why causation inside an arbitrarily distinguished portion of all these causes and effects (the inside of the brain) should involve observation, and not all the others outside of bodies, or in the interactions between the retina and photons, or between photons outside bodies.
Silhouette wrote: This is why I still tend towards the notion that mind precedes body (matter and the physical).
To me the portions in blue are all fine, but I see no justification in the portions marked in red above for a creeping in of assumed consciousness. More complex 'phototropism' happens in the brain, but you haven't justified why 'experiencing' should arise. Computer programs can recognize patterns. Machines can do this. Basically any physiological homeostasis, including the sunflower's, is recognizing patterns. Are there gradations of consciousness? How do we know where consciousness arises in complexity? How do we know consciousness is in any way tied to complexity or self-relation? We lack a test for consciousness. We only have tests for behavior or, more neutrally, reactions. How do we know which reactions, including the stone's, have some facet of 'experiencing' in them or not?

Carleas wrote: Consciousness enters the picture each time some part of the network is causally influenced by a different part of the network, such that the former part is trained to recognize patterns within the latter part. When this occurs, the former part is "observing" the latter part, in the same sense that the occipital lobe is "observing" patterns in the retinal photoreceptors. It's pattern matching, in the same way that AlphaGo pattern-matches on the arrangement of playing pieces on a Go board.
Consciousness is the mental experience of observing mental experience, which is what we would expect a system that is wired to pattern-match to patterns in its own pattern-matching to report. At lower levels, the network pattern-matches on photoreceptor cells firing. At higher levels, other parts of the network pattern-match to collections of neurons firing in the photoreceptor-pattern-matching area. This layering continues, with collections of cells reacting to collections of cells reacting to collections of cells, etc. This self-observation within the network is isomorphic to the self-observation of conscious experience.
And again, this is all distinct from the rock because the causal chain isn't merely energy from light diffusing through this causal cascade, but the light starting a causal cascade that uses separate energy, and indeed depends on excluding diffusive energy (most often by residing inside a solid sphere of bone).
From this rough sketch, we need only abstract up to emotional or intellectual reactions, where the layers of self-referential causation permit highly abstracted patterns to be recognized, e.g. (in a super gross simplification) "ideas" made of "words" made of "sounds" made of "eardrum vibrations".
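To make the layered pattern-matching described above concrete, here is a minimal Python sketch. Everything in it is invented for illustration: the layer names and sizes are arbitrary, and fixed random weights stand in for trained ones. It shows only the wiring Carleas describes, a "monitor" whose input is the rest of the network's own activity, not any actual model of the brain.

Code: Select all
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """A toy 'pattern-matcher': a fixed random linear map plus a nonlinearity."""
    w = rng.normal(size=(n_out, n_in))
    return lambda x: np.tanh(w @ x)

retina      = layer(16, 8)         # matches patterns in raw "photoreceptor" input
occipital   = layer(8, 4)          # matches patterns in the retina layer's activity
abstraction = layer(4, 2)          # matches patterns in the occipital layer's activity
monitor     = layer(8 + 4 + 2, 3)  # matches patterns in all of the above

def forward(photons):
    a1 = retina(photons)
    a2 = occipital(a1)
    a3 = abstraction(a2)
    # The self-observing step: the monitor's input is the network's own
    # activity, i.e. the network pattern-matching on its own pattern-matching.
    return monitor(np.concatenate([a1, a2, a3]))

print(forward(rng.normal(size=16)))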
Carleas wrote:
Silhouette wrote: This is why I still tend towards the notion that mind precedes body (matter and the physical).
This does not seem to fit with the observable ways in which purely 'body' causes can affect mind. For example, brain damage changes not only the intensity of mind, but the contents and fine-grained functioning. That makes sense if mind is just the working of the brain, but not if mind precedes the brain.
But humans aren't very good at reading ahead, certainly not compared to computers; that's not how humans play. Rather, humans look for patterns; they abstract based on experience. And that's what AlphaZero does.
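As a gloss on the patterns-versus-reading-ahead contrast, here is a toy Python sketch. It is purely illustrative: the game, the stub evaluator, and all names are mine, and the real AlphaZero combines its learned evaluator with a guided tree search rather than a single greedy choice.

Code: Select all
def brute_force(state, depth, moves, apply_move, score):
    """Read ahead exhaustively, roughly how classic engines choose."""
    if depth == 0:
        return score(state)
    return max(brute_force(apply_move(state, m), depth - 1, moves, apply_move, score)
               for m in moves(state))

def pattern_based(state, moves, apply_move, learned_value):
    """Pick the move whose result 'looks best' to a learned evaluator:
    no deep reading, just pattern recognition over positions."""
    return max(moves(state), key=lambda m: learned_value(apply_move(state, m)))

# Toy demo: states are integers, a move adds 1 or 2, and the goal is 10.
moves = lambda s: [1, 2]
apply_move = lambda s, m: s + m
score = lambda s: -abs(10 - s)  # closer to 10 is better
learned_value = score           # stand-in for a trained evaluation network

print(brute_force(0, 3, moves, apply_move, score))         # value of the best 3-move line
print(pattern_based(0, moves, apply_move, learned_value))  # the greedy pattern pick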
I sort of understand. Perhaps it would be good to ask you: how is what you are arguing unique to mind-body unity arguments, if it is? I feel like I am missing something, but perhaps I interpreted the title as indicating that you'd found a new angle, even one simply new to the arguments you've read.

Carleas wrote: Ah, I think I see the gap in my argument that you're pointing out.
My aim here is to tie the outside description of the brain (neurons, photoreceptors, networks) to the inside description of consciousness. So that first section you highlight in red is a description of what the experience of consciousness is, rather than something that follows from my argument. My intent there is to frame consciousness in a way that makes the mapping to brain function plausible. "The experience of experiencing" (a trimmed and, as I mean it, equivalent version of the first red section) seems both a reasonable description of consciousness, and a reasonable description of a network that is trained on itself.
Or one could, from a physicalist point of view, consider this definition excessive. There is nothing receiving information. Rather, the brain is a very complicated, effective kind of pachinko machine, and when causes hit this incredibly complicated pachinko machine, the machine reacts in specific, determined ways. There is no information, just causes and effects. It looks like information is being received because evolution has led to a complicated object that responds in certain ways. But that's an after-the-fact interpretation. (This is not my position, but I think it is entailed by physicalism, which your posts seem to fit within.)

Carleas wrote: When we look at the brain, we see a network configured to receive information about the external world and identify patterns in that information, and also to receive information about its own operations as it does so.
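A toy rendering of the pachinko view above may help; it is my illustration, not anything from the thread. The point is that a deterministic cause-to-effect structure can look, from outside, like something "receiving information" while being nothing but a fixed mapping.

Code: Select all
# The machine is nothing but a fixed cause -> effect structure.
REACTIONS = {
    "photon":   "pupil contracts",
    "pressure": "limb withdraws",
    "sound":    "head turns",
}

def machine(cause: str) -> str:
    # Deterministic: same cause in, same effect out. Nothing here
    # "receives information"; the mapping just is the machine's structure.
    return REACTIONS.get(cause, "no reaction")

print(machine("photon"))  # pupil contracts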
I think they will soon have things that act like our brains. Whether they will be experiencing is another story. And for all we know they already are. I think we should be very careful about conflating behavior, even the internal types focused on here, and consciousness. We have no idea what leads to experiencing. And we have a bias, at least in non-pagan, non-indigenous cultures, toward assuming it is the exception.

Carleas wrote: If I'm right, this should be the field that results in an artificial general intelligence for which the consensus is that it's conscious. And such an advance should not be too far off.
The hard problem of consciousness is the problem of explaining how and why sentient organisms have qualia or phenomenal experiences—how and why it is that some internal states are felt states, such as heat or pain, rather than unfelt states, as in a thermostat or a toaster.
Meno_ wrote: There is one test that I coincidentally read about, quite recent, which consists of the following.
Points of light are impinged upon the eye at various intervals, using a multi-colored scheme consisting of red and blue. The duration of the test may be factored in as of primary relevance, but that has not been verified at this end.
The crux of the finding is that it takes repetition of the light exposures exactly halfway through before a color change is reported by the test subject.
The light change, I believe, results in a shift to green.
Does this not point to a quantifiable relevance, qualifying a tool with which to measure internal and external effects of variable visual input, relating inner and outer sources of experience?
If so, can this be a model of measurement in a more general study?
Karpel Tunnel wrote:
Meno_ wrote: There is one test that I coincidentally read about, quite recent... If so, can this be a model of measurement in a more general study?
I am not sure I understand the test. It seems to me that they can measure reactions. They see a reaction. Well, even a stone will react to light. What we cannot test is whether someone experienced something. And then this test seems to be for beings with retinas and we've pretty much already decided that creatures with retinas are conscious.
Ierrellus wrote: Around the turn of the last century Colin McGinn suggested that consciousness is too complex to be explained by a mind. What have we learned since then that would make such an explanation possible?
I don't get it. How does the test demonstrate the lack of consciousness, as opposed to the lack of an ability to report what one has experienced? IOW, how would it demonstrate that an animal, plant, or rock is not conscious, rather than simply that they do not report on their experience?

Meno_ wrote:
In this case the reaction was reported by the test subject, whereas in the case of the stone the reaction is only observed by the test giver; that is the difference.
The test subject reported perceived changes he experienced, connecting the test with both qualitative and quantifiable factors.
I think that does meet the criteria for a test relevant to the problem.
I think consciousness is actually rather simple. But it is complicated to explain how it arises, especially in a physicalist paradigm.

Ierrellus wrote: Around the turn of the last century Colin McGinn suggested that consciousness is too complex to be explained by a mind. What have we learned since then that would make such an explanation possible?
Mr Reasonable wrote: Reducing mind to body is as easy as employing the old type/token distinction.
Silhouette wrote: [G]iven that the brain is a product of the mind, it's still the mind being damaged causing a damaged mind.
iambiguous wrote: [W]hat machine intelligence is any closer to "thinking ahead" regarding whether the goal can be construed as encompassing good or encompassing evil?
Karpel Tunnel wrote: how is what you are arguing unique to mind-body unity arguments?
Karpel Tunnel wrote: I assume you meant two different things by experience and experiencing in that sentence.
Karpel Tunnel wrote: I now see you are trying to make a model that is plausible, which is different from making an argument that X is the case.
Karpel Tunnel wrote: There is nothing receiving information. Rather, the brain is a very complicated, effective kind of pachinko machine, and when causes hit this incredibly complicated pachinko machine, the machine reacts in specific, determined ways. There is no information, just causes and effects. It looks like information is being received because evolution has led to a complicated object that responds in certain ways. But that's an after-the-fact interpretation.
Karpel Tunnel wrote: I think they will soon have things that act like our brains. Whether they will be experiencing is another story. And for all we know they already are. I think we should be very careful about conflating behavior, even the internal types focused on here, and consciousness. We have no idea what leads to experiencing. And we have a bias, at least in non-pagan, non-indigenous cultures, toward assuming it is the exception.
...
And, as in most formulations of the hard problem, it is assumed here that we know what is not conscious, despite the complete lack of a scientific test for consciousness. All we have are tests for behavior/reaction.
Karpel Tunnel wrote: I don't think you are solving the hard problem; you are just presenting a practical model of intelligence similar to ours and suggesting that this will lead to effective AIs.
iambiguous wrote: But the hardest part about grappling with the "Hard Problem of Consciousness" is still going to revolve around narrowing the gap between what any particular one of us thinks we know about it here and now and all that can be known about it in order to settle the question once and for all.
Well, let me know when someone is perfectly informed; otherwise I will not conflate plausibility with truth, or even with 'the only model we can't falsify at this point'.

Carleas wrote:
Karpel Tunnel wrote: I now see you are trying to make a model that is plausible, which is different from making an argument that X is the case.
I disagree. If we were perfectly informed, only the truth would be plausible.
Carleas wrote:
iambiguous wrote: [W]hat machine intelligence is any closer to "thinking ahead" regarding whether the goal can be construed as encompassing good or encompassing evil?
As I mentioned in my previous reply to Karpel Tunnel, I think this is a difference of degree and not of kind. Good and evil are abstractions of abstractions of abstractions... I am indeed taking the "human brain/mind/consciousness [to be] in itself just nature's most sophisticated machine".
iambiguous wrote: But the hardest part about grappling with the "Hard Problem of Consciousness" is still going to revolve around narrowing the gap between what any particular one of us thinks we know about it here and now and all that can be known about it in order to settle the question once and for all.
Carleas wrote: This seems like defining the solution to the Hard Problem in such a way that it becomes vulnerable to Gödel's Incompleteness Theorem. I.e., as a mind, it is impossible for us to fully contain a comparable mind within ourselves, so "all that can be known about a mind" can never be known by a single mind. If the Hard Problem is just Incompleteness (and that's not a totally unreasonable proposal), then we should call it the Provably Impossible Problem.
No, because that's just function. We would know it could function, in general, like us. Does Deep Blue have some limited experiencing? I would guess the consensus is no, and further that we cannot know. Just because we make something that can function like us in many facets of our intelligence does not mean it is experiencing. It might be. It might not be.

Carleas wrote: Having "a practical model of intelligence similar to ours" must solve the hard problem at the limit where the intelligence is so similar to ours as to be identical, right?
Yes, physicalists who consider all contact mediated and interpreted should have that concern. And some do.

Carleas wrote: If we're not ready to say that, we need to establish why we're willing to accept all these similarly intelligent humans as conscious without better evidence.
Carleas wrote:
Silhouette wrote: [G]iven that the brain is a product of the mind, it's still the mind being damaged causing a damaged mind.
Is this just solipsism?
One problem I have with this line of reasoning is that it erases what seems like a meaningful distinction that (assuming multiple minds exist) can be communicated from one mind to another, suggesting that it isn't totally illusory: we talk about the distinction between mind and not-mind, and that difference is understandable and seems useful. At best, aren't we just pushing back the question? Let's say we accept that everything is mind. We still have the mind-things of the sub-type mind (i.e. ideas, feelings, sensations, emotions), and the mind-things of the sub-type not-mind (brains, rocks, hammers, whatever), and we still want to explain the mind-mind things in terms of the mind-not-mind things. And my argument still works for that.
How much of this is a linguistic problem? I grant that the only things we have access to are mind things, e.g. we perceive the world as sense impressions of the world. But are you saying that there is no world behind those impressions? There's a difference between saying that my car's backup sensor only delivers 1s and 0s and saying that there's no garage wall behind me. I'd argue that the most coherent description of the world is one that isn't dependent on mind, even though, being minds, our experience of the world is always going to be made of mind-stuff.
I guess I do think utility is meaningful. I say "this is mind and this isn't", and we can take that statement and test the things against it, so that e.g. the things that are mind only exist to the mind experiencing them and the things that aren't exist to everyone. The fact that we can draw useful inferences from that distinction suggests the distinction is real.
Carleas wrote: I don't think I do, but I am open to arguments otherwise. I mean 'experience' in this context in a non-mental sense, e.g. "during an earthquake, tall buildings experience significant physical stresses." There's absolutely a danger of equivocating, i.e. assuming that rocks and humans both 'experience' things, and concluding that that experience is the same thing. That isn't my argument, but I do mean to point to light striking a rock and light striking a retina as the same phenomenon, which only differs in the respective reactions to that phenomenon. Call phenomena like being hit by a photon 'experiencing a photon'. Similarly, we can aggregate these small experiences, and say that the sunflower experiences the warmth of the sun. In the same way, then, neurons experience signals from other neurons. Whole brain regions experience the activity of other regions. The brain as a whole experiences its own operations. The parts of AlphaGo that plan for future moves experience the part of AlphaGo that evaluates positions.
If I'm right, the internal experiencing and the external experiencing are in fact the same thing, and qualia etc. are the inside view on the brain experiencing itself experiencing itself ... experiencing a photon of a certain wavelength. Qualia are not the incidence of light on your retina, but the incidence of the whole brain processing that event on itself.
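Carleas's aggregation of 'experiencing' at every scale can be sketched as a single relation applied recursively; the following Python toy is mine (all names invented) and shows only the shape of the claim, i.e. that a part's reaction can itself be the event another part reacts to.

Code: Select all
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str
    observers: list = field(default_factory=list)  # parts that react to this part

    def react(self, event: str):
        print(f"{self.name} experiences {event}")
        # Every observer in turn "experiences" this part's reaction:
        for obs in self.observers:
            obs.react(f"[{self.name} reacting to {event}]")

retina = Part("the retina")
lobe   = Part("the occipital lobe")
brain  = Part("the brain as a whole")
retina.observers.append(lobe)
lobe.observers.append(brain)

retina.react("a photon")  # the brain experiencing itself experiencing a photon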