A Footnote to “Why I Am Not a Materialist”

  1. The “Hard Problem” of Consciousness (see: Chalmers) is only validly “hard” for people who puzzle over how to realize a future robust AI. Such people disagree with others with an interest in AI, who believe that consciousness will automatically be born from increased complexity. But aside from this disagreement, the problem is not philosophically “hard”. People who, through incredulity alone, puzzle over how consciousness could possibly arise from matter apparently don’t understand what a scientific explanation is. If someone were to ask, “why did the ball fly through the air?”, and another were to answer, “because somebody kicked it”, that surely acts as a satisfactory explanation for most normal people. But the only reason it serves as a satisfactory common-sense explanation is that we have seen balls kicked – we know how balls respond to kicking. The emergence of consciousness from matter, though, if it actually occurs, would be a cause-and-effect relation that cannot be seen – it cannot be part of our experience. There is no rational reason why it shouldn’t occur, though, just as there is no rational reason why a ball, kicked, should react in an “equal and opposite” way. And yet the “equal and opposite” law serves as a scientific explanation, because a broad range of such observations have been distilled into a robust abstract principle. A little Hume goes a long way towards dispelling certain kinds of confusion.

Yes, this was an argument in support of materialism, even though I’m not a materialist and I have made various arguments against materialism. But this kind of argument against materialism has always bothered me as not compelling at all.

As someone with more than a passing interest in AI and how it might achieve consciousness, I’m a bit dumbfounded that anybody actually believes what you said. Automatically? From increased complexity? Complexity is certainly part of the equation (a rather superficial part), but it’s definitely not enough to make consciousness automatically arise. People actually believe this?

Actually, I have to agree with that notion.
Complexity does not equate to consciousness. But complexity eventually MUST cross the line and include consciousness, else it merely hasn’t gone far enough into the combinations involved in the complex. And frankly, it has long passed that point. You just don’t know it.

But more concerning is that emotionalism also cannot be avoided if complexity is pursued. I happen to know the makeup of emotion, and I can tell you that it is an eventual certainty for machines.

Here’s a relevant link, I think. I glanced pretty quickly though, to be honest.

(It’s a PDF)

Thanks anon, I read the whole thing. It was just a very very quick overview of the various positions relating to consciousness and complexity, concluding with “[…]while it is tempting to suppose that the brain’s complexity is the key to consciousness, that key is not likely to fit the lock.”

And I agree.

I could string together a hundred CPUs and a hundred GPUs, that wouldn’t make the computer conscious. I could write a million-subroutine program, but just the number of subroutines wouldn’t make it conscious.

I would say that consciousness relies on complexity, but complexity doesn’t really seem to be enough. Necessary, but nowhere near sufficient. And it might be even less necessary than I think…

Fuck that, it’s a lot simpler to imagine that materialism is a construct of the mind.

anon, do you agree with the idea that consciousness starts to develop due to the senses? At a certain point in its gestation, the human fetus can feel and hear, and it can see at birth, just not very well–it’s very nearsighted. It can see colors, but not all of them. But, because everything involved with sight is present at birth, it’s the fastest developed sense–a baby can discern objects 12-15 feet away by the time she’s 2-3mo old.

I’m probably just complicating things for you, but, if consciousness starts to develop in a fetus, doesn’t that result in every conscious mind being unique? A mind isn’t a computer, it hasn’t been programmed before birth. If, after birth, it is ‘programmed’, it’s developing its own program as it grows, based on its developing senses.

What, exactly, our senses are composed of isn’t totally known–other than the electrochemicals in our bodies and brains. How those chemicals somehow change to give us sensation, and what those changes may inspire in us, may only be generally identifiable (sometime in the future)–because of our mind’s uniqueness. Consciousness is phenomenological, as is thought. A materialist wants to reduce it to a structure when it may not be reducible.

This is why, imm, AI is a very long time in coming. Humans have created simple robots and supercomputers. I just don’t see how they can create a human brain.

Unique in what sense? I can’t see a sense of ‘unique’ that makes this sentence true: “Consciousness starting to develop in a fetus makes every conscious mind unique.” But perhaps you can.

Well…that’s just an assertion, isn’t it? How do you know that? A lot of people do think the ‘mind’ is more or less equal to the ‘brain’ which they do think is a computer, whose programming is not something we readily recognize as programming because we don’t understand many of the algorithms involved yet. Why are they wrong? You state it like it’s obvious, but why is it obviously wrong?

To answer your first question: If consciousness starts to develop in the fetus as her senses develop, the first sense is the sense of touch–warmth, comfort, motion, etc. Then she develops hearing–her mother’s heartbeat, the rumble of voices, music, and whatever else is present during the gestation period–some of which are beneficial and some of which aren’t. (These developments aren’t sequential–nor are they case sensitive, btw)
The closest to identical development comes with identical twins who share the same DNA.

Other than that, siblings may come from the same parents, but the siblings won’t be the same. The DNA won’t be exactly the same; the circumstances of the gestation period will vary; Mom may not have the same time to devote to the development of the second fetus because she’s too busy taking care of the first baby. There are all sorts of extenuating circumstances involved in how a fetus develops–and this assumes a planned and cared for pregnancy.

As for your second question, [quote]Why are they wrong? You state it like it’s obvious, but why is it obviously wrong?[/quote]

Who’s wrong, and how have I stated that it’s obvious that [u]it[/u] is wrong?

I question whether or not human consciousness can be reduced down to mathematical algorithms, other than in a general manner, since every mind is different and unique. I’ve tried to show the reasons why I think the human mind is unique. Give me your reasons for thinking what you think.

That the mind is much more than a computer has been shown, I believe, by current neuroscience. If you think otherwise, that’s for you to think, which is fine by me. It’s great when programmers can reduce mechanics down to the step-by-step procedure needed to program a mechanical instrument–this is very valuable for many applications.

I just don’t see how it has much value in philosophy, since I don’t see the mind as a mechanism and I’m not a philosophical materialist.

en.wikipedia.org/wiki/The_Concept_of_Mind

They’re wrong from a fairly fundamental linguistic/definitional point, rather than from any dualist/monist errors of measurement.

Ryle happens to be wrong about that… just FYI.

But none of that has to do with your statement that consciousness starts to develop in the fetus. Even if consciousness started when a baby was born, and thus no longer a fetus, those things would be true. It doesn’t really have anything to do with your statement…which is good, because it doesn’t really make sense.

The second sentence in that paragraph starts with the phrase “a lot of people think…” That’s the ‘who’ – I thought that would be implied.
The people who think the things I said they think, that’s who. Is that clear enough?

As for it being ‘obvious’, it was the way you said it – you just said it in passing, as if it would just be accepted. “Oh yeah, that’s true, no need to go into detail on that one.”

Well, first of all, ‘uniqueness’ has absolutely 0 to do with algorithms. I can write an algorithm that nobody else has written before, it’s unique. I can string together a whole bunch of algorithms in a way nobody has done before, it’s unique. Uniqueness doesn’t preclude algorithms in the least. That’s just a bizarre argument, nothing relevant to the issue at hand.

And I don’t know that it can be reduced, maybe it can’t be, I’m just saying don’t talk about it like it’s obviously not the case, just in passing. It’s a big statement, not something that’s just going to be accepted if you say it in passing.

Thank you for your illuminating response, I shall accord it the weight it deserves in later considerations.

Didn’t want to get into a distraction.

Fair enough, but if it’s relevant to materialism/philosophy of mind, it seems on topic.

I’ve yet to see any reason to believe robots are any more conscious than an automated calculator.

If people want to say something is conscious, that’s fine. When they do, I want them to define consciousness, clearly illustrate how said something meets the criteria and what else meets the criteria.

I have a strong feeling that many only question the properties of robots because of the impressions they make.

Consciousness: Remote Recognition

Liz,

No, I don’t agree that “consciousness starts to develop due to the senses”, since I think the senses themselves are forms of consciousness and, further, can’t exist independently of a more general mental framework within which visual appearances have some meaning. I think the evolution of the senses is the same evolution as the evolution of consciousness. To be clear, there is nothing in my view, then, that fundamentally precludes AI – in fact, AI already exists. Thermostats are conscious. But it’s important to remember that AI is of a very different kind (in other words, the difference in degree is mind-bogglingly enormous) than natural, evolved, biological organisms for many significant reasons. I don’t know if it’s theoretically impossible or not, but it’s very hard to imagine a constructed robot giving birth. On the other hand, though, a “robot” in the old-fashioned sense is just that – old fashioned, not very sophisticated. Now we have genetic engineering, synthetic implants, etc. It’s not necessary to construct something “from scratch”; there is also the “chip away at it” approach. Is someone with a synthetic heart less human than others? Chalmers’s silicon neuron replacement idea (it’s often referred to as the fading qualia thought experiment) is very interesting in this regard, though it doesn’t prove anything either way. Something worth thinking about, though.

Like you, I don’t think scientists can create a human brain. But I don’t think it’s because of anything ontologically special about human brains. A human brain is an evolved organ, not a constructed machine. Again though, you can look at this philosophical issue from a different angle.

Consciousness requires that there is “something it is like”. Seeing the color blue is a conscious experience. The entire visual spectrum, without exception, can be described in black and white, but that doesn’t mean that the entire visual spectrum is black and white. Similarly, we can describe thermostats as conscious, but that doesn’t tell us anything about what it is like to be a thermostat. So yes, every being is unique, concrete. But there is nothing concrete about “consciousness”, which is an abstraction. Experience is concrete, unique; consciousness is just an umbrella term for what all experiences generically consist of – a subject and an object, arising simultaneously. Interdependently.

I’d add though, that I highly doubt thermostats have a sense of self, lol.

I’ve tried to find my original sources re fetal sensory development, but couldn’t–this, however, says pretty much the same things:

http://birthpsychology.com/free-article/fetal-senses-classical-view

If, as I said, consciousness starts to develop when the senses start to develop in the fetus, is it fair to say that sensory data has a great deal to do with consciousness? Yet sensory data are physical, aren’t they? To me, it’s the reaction to the data–and the memory of that reaction–that becomes phenomenal, and that makes each mind unique. There’s more to it than that, but that’ll suffice for now.

anon, I’ve thought about a totally artificial person, but I think the complexities of the human body, not just the mind, are such that it won’t be possible at any time in the near future. Even with the miniaturization of a lot of the parts (electrical, optical, etc.) it would end up pretty big, imm. Just think about the skin–our largest organ–and all of its functions–even with neuromorphic silicon neuron circuits which have already been developed.

FJ, Imm a lot of people think the metaphor of ‘mind as a computer’ is fact rather than metaphor. But the metaphor has only been used to illustrate how some workings of the mind/brain can be understood–imaged, if you will. If I felt my statement needed more explanation, it would only be an explanation or definition of ‘metaphor’. Other than that, you’re quibbling–mostly about language. You have all my good wishes for success in your studies.

anon, you say you “…don’t think it’s [a created human brain] because of anything ontologically special about human brains.” Yeah, the human brain is an evolved organ–all brains are. Are you saying that the human brain is no different than the simian brain? Then what makes us ‘other’ than apes?