Even computers, which are manmade and thus predictable in all relevant aspects, can have a sense of good and bad.
They can also possess what we may understand to be free-will in the empirical, which is to say down-to-earth, sense of the word: free-will not as an ability to go back in time and change your decisions, but as an ability to make your own choices without undesirable influences.
They can also choose between several alternatives, and not only that, they can also learn from the consequences of their choices.
Computers have no consciousness (a first-person experience of reality), only awareness (a map of reality that is not accompanied by any kind of first-person experience of it).
They also cannot go back in time and change their decisions.
Despite all of this, they can have a sense of good/bad, choose between several options on their own and learn from the consequences of their choices.
I don’t think it’s difficult to imagine a computer program that has all of these properties.
What is necessary for such a program is:
- that it has awareness of the external world (e.g. of outputs such as the computer monitor)
- that it has some idea of what it wants to do, otherwise known as a goal, which is just a description of the state of the external world it wants to bring about
- that it can affect the external world in some way, say by sending “signals” to it, which in programming terms would be commands issued in order to change the state of the outputs
- that it has a memory of every combination of posited goal, signal sent in order to achieve that goal, and subsequent output state
The computer program would work in the following manner:
1. posit some goal
2. go through the memory in order to assign to every possible signal a probability of success relative to the posited goal
3. choose the signal with the highest probability of success
4. if the highest probability is shared by several signals, randomly pick one of them
5. send the chosen signal
6. declare “good” if the output equals the goal, otherwise declare “bad”
7. store the tuple (goal posited, signal sent, output state) in the memory
8. return to step 1
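The whole procedure can be sketched as a small runnable toy. I assume a hypothetical one-cell external world where the signal sets the output directly, and estimate each signal’s probability of success from remembered outcomes; all names are illustrative choices of mine, not taken from any library:

```python
import random

SIGNALS = [0, 1, 2]   # commands the program can send to the output
STATES = [0, 1, 2]    # possible states of the external world (the output)

memory = []           # tuples of (goal posited, signal sent, output state)

def world(signal):
    """A toy external world: the output simply takes the value of the signal.
    (A noisy world would make the probability estimates below less trivial.)"""
    return signal

def success_rate(goal, signal):
    """Probability of success estimated from memory; 0.5 when never tried."""
    outcomes = [o for g, s, o in memory if g == goal and s == signal]
    if not outcomes:
        return 0.5
    return sum(1 for o in outcomes if o == goal) / len(outcomes)

for _ in range(1000):
    goal = random.choice(STATES)                          # step 1: posit some goal
    rates = {s: success_rate(goal, s) for s in SIGNALS}   # step 2: rank every signal
    best = max(rates.values())                            # step 3: highest probability
    signal = random.choice(                               # step 4: break ties randomly
        [s for s in SIGNALS if rates[s] == best])
    output = world(signal)                                # step 5: send the chosen signal
    verdict = "good" if output == goal else "bad"         # step 6: sense of good/bad
    memory.append((goal, signal, output))                 # step 7: learn; then repeat
```

After a few encounters with each goal, the greedy rule settles on the signal that reliably brings that goal about; the “good”/“bad” declaration in step 6 is precisely what makes the learning in step 7 useful.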
This is incredibly simple stuff. It’s strange when people fail to understand it.
Let’s put forth some definitions then.
CHOOSING
the act of using some kind of logic to rank a finite set of options in an effort to determine the highest-ranked option so that we can act upon it
FREE-WILL
the ability to choose on our own, using whatever logic we want, instead of choosing in a way that is not our own
GOOD/BAD
words describing the quality of correspondence between what was expected and what was realized
LEARNING
memorizing every expected-realized pair in order to be better informed when making decisions in the future
The above computer program clearly performs the acts of choosing and learning. It also has a sense of good/bad (learning cannot function without one anyway). And finally, it has free-will, if only because that free-will cannot be violated. The program can be destroyed and in this way stopped from exercising its free-will. It can also be reprogrammed, but that would merely change what it wants to do – it would not go against its free-will. What the program cannot be made to do is something it does not want to do. It simply lacks the ability that would expose it to the risk of acting against its will.
That would be all.
So again, I have to ask, what is Biggy’s problem?