Yeah honestly I don't get what he is really contributing (and I'm sort of an AI skeptic). In 2000 in undergrad, I recall checking out some of his books from the library because people said he was important, and I learned about the "Chinese Room" argument [1] in class.
How is it even an argument? It doesn't illuminate anything, and it's not even clever. It seems like the most facile wrong-headed stab at refutation, by begging the question. As far as I can tell, the argument is, "well you can make this room that manipulates symbols like a computer, and of course it's not conscious, so a computer can't be either"? There are so many problems with this argument I don't even know where to begin.
The fact that he appears to think that changing a "computer" to a "room" has persuasive power just makes it all the more antiquated. As if people can't understand the idea that computers "just" manipulate symbols? Changing it to a "room" adds nothing.
I was never satisfied with the Chinese Room thought experiment. Let's momentarily replace the thing in the Chinese room with a human, to parse Searle's notion of "understanding". Searle would argue that a human trained to emit meaningful Chinese characters would still lack understanding. But I think this is backwards and speaks to your identification of Searle begging the question: the only way a human could emit meaningful Chinese responses would be if they had an understanding of Chinese. Consequently, if a machine is outputting meaningful Chinese, it too must already understand Chinese, and any argument otherwise is a kind of pro-biology bigotry with shaky underlying logic at best.
This then devolves into semantics. Can a person locked in a room really come to "understand" Chinese culture, for example, if only non-experiential learning were used as data inputs? I think we have to say the answer is yes. I am a chemist. I have never seen an atomic orbital with my bare eyes, yet I can design chemical reactions that work using my understanding of chemistry. Because I have not experienced an atomic orbital, does that mean I do not understand? Even when I set up my first reaction, I did not have any experience, and knew what I was doing only through what could be described as sophisticated analogy. I would say my understanding was low, but it was certainly non-zero. Where does one draw the line?
I have always felt that the human in the room would start to recognize patterns and develop an "understanding". Their "understanding" may have no basis in reality but I don't see that it is any less valid to them.
If Searle is right then we should be able to perform an MRI on a blind person while they are talking to someone and spot the point where their brain switches into "symbol manipulation mode" when the conversation subject becomes something visual.
The guy in the room can memorize all of the rules and the ledgers and give you the same responses the room did, and if you asked him in his native language if he knew Chinese, he'd honestly tell you no.
He could have an entire conversation with you in Chinese, and only know that what you said merits the response he gave. He doesn't know if he's telling you directions to the bathroom, or how to perform brain surgery.
What about Latin? I learned Latin in a somewhat sterile environment, that in many ways is akin to symbol manipulation. I certainly never conversed with any native Latin speakers. Do I not understand Latin? Why or why not?
It's just obfuscation. In reality the 'room' would have to be the size of a planet and if one person was manipulating the symbols it might take the whole thing the life span of the universe to think 'Hello World'. But by phrasing it as a 'room' with one person in it he makes it look inadequate to the task, and therefore the task impossible.
Neither the size of the room nor the speed of the computation is important to Searle's argument. You could replace the person in the room with the population of India (except for those who understand Chinese), and pretend to the Chinese speaker that the communication is by surface mail. Or use a bank of supercomputers if Indians aren't fast enough.
Fair enough. In which case Searle's argument is that even fantastically complex, sophisticated information processing systems with billions of moving parts and vast information storage and retrieval resources, operating over long periods of time, cannot be intelligent. If that's what his position boils down to, what does casting it as a single person in a room add to the argument? As Kurzweil asked, how is that different from neurons manipulating neurotransmitter chemistry? Searle doesn't seem to have an answer to that.
No, his position, as I understand it, is that it cannot be conscious. It certainly can be intelligent.
Searle does try to explain why there's a difference. Although the person in the Chinese Room might be conscious of other things, he has no consciousness of understanding the Chinese text he's manipulating, and will readily verify this, and nothing else in the room is conscious. Chinese speakers are conscious of understanding Chinese.
I thought the idea was that the only part of the room actually doing things (the person) doesn't understand Chinese?
I mean, agree with it or not, but I think that's a bit stronger than just making it seem intuitively worse because it's a room instead of "a computer"?
I think the important part isn't the swap of "room" for "computer", but instead the swap of "person" for "cpu"?
Yeah, but the system would be the person + the lookup tables, not just the person. The problem is, we don't tend to say "does a room with a person in it and several books have this knowledge?" Relying on a system that doesn't tend to get grouped together (there's no term for the system human + book inside room), and having only one animate object (so that people think of the animate object as the system instead of the animate and inanimate objects), as well as asking the question only about the animate part of the system, all seem to suggest that the purpose of the thought experiment is to mislead people.
A better example would be saying something like - does this company have the knowledge to make a particular product? We can say that no individual member of the company does, but the company as a whole does.
Which, well there's a whole series of responses back and forth, with different ideas about what is or is not a good response.
One idea describes a machine where each state of the program is pre-computed, and the machine steps through those states one by one. There is also a switch. If the switch is flipped on, then whenever the next pre-computed state is wrong (i.e. it is not what the program would actually produce from the current state), the machine computes the correct next state instead; if the switch is off, it just follows the pre-computed states. If all the pre-computed states are correct, the same thing happens whether the switch is on or off, and the machine never interacts with the switch at all. And if all the pre-computed states are nonsense but the switch is on, the machine still runs the program correctly, despite the pre-computed states being nonsense.
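To make that setup concrete, here's a rough Python sketch of the machine as I read the description; the names (transition for the "real" program, precomputed for the tape of pre-computed states, switch_on for the switch) are mine, not anyone's canonical formulation:

    def run(transition, precomputed, switch_on):
        # transition: the "real" program, a function from one state to the next
        # precomputed: the tape of pre-computed states; precomputed[0] is the start state
        # switch_on: if True, a wrong pre-computed next state is replaced by the
        #            correctly computed one; if False, the tape is followed blindly
        state = precomputed[0]
        history = [state]
        for proposed in precomputed[1:]:
            correct = transition(state)
            if switch_on and proposed != correct:
                state = correct   # error-correcting path: actually run the program
            else:
                state = proposed  # replay path: just read the next state off the tape
            history.append(state)
        return history

    # With a correct tape, the switch makes no observable difference:
    #   run(lambda s: s + 1, [0, 1, 2, 3], switch_on=True)   -> [0, 1, 2, 3]
    #   run(lambda s: s + 1, [0, 1, 2, 3], switch_on=False)  -> [0, 1, 2, 3]
    # With a nonsense tape, only the switched-on machine computes correctly:
    #   run(lambda s: s + 1, [0, 7, 7, 7], switch_on=True)   -> [0, 1, 2, 3]
    #   run(lambda s: s + 1, [0, 7, 7, 7], switch_on=False)  -> [0, 7, 7, 7]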
So, suppose that if the pre-computed states are all wrong, and the switch is on, that that counts as conscious.
Then, if the pre-computed states are all correct, and the switch is on, would that still be conscious?
What if almost all the pre-computed states were wrong, but a few were right?
It doesn't seem like there is an obvious cutoff point between "all the pre-computed steps are wrong" and "all the pre-computed steps are right" where what counts as conscious would switch.
So then, one might conclude that the one where all the pre-computed steps are right, and the switch is on, is just as conscious as the one which has the switch on but all the pre-computed states are wrong.
But then what of the one where all the pre-computed states are right, and the switch is off?
The switch does not interact with the rest of the stuff unless a pre-computed next step would be wrong, so how could it be that when the switch is on, the one with all the pre-computations is conscious, but when it is off, it isn't?
But the one with all the pre-computations correct, and the switch off, is not particularly different from just reading the list of states in a book.
If one grants consciousness to that, why not grant it to e.g. fictional characters that "communicate with" the reader?
One might come up with some sort of thing like, it depends on how it interacts with the world, and it doesn't make sense for it to have pre-computed steps if it is interacting with the world in new ways, that might be a way out. Or one could argue that it really does matter if the switch is flipped one way or the other, and when you flip it back and forth it switches between being actually conscious and being, basically, a p-zombie. And speaking of which you could say, "well what if the same thing is done with brain states being pre-computed?", etc. etc.
I think the Chinese Room problem, while not conclusive, is a useful introduction to these issues?
I don't think it actually brings up any relevant issues. For instance, you mention a p-zombie, but that's another one with glaringly obvious problems that are immediately evident. Do bacteria have consciousness? Or did consciousness arise later, with the first conscious creature surrounded by a community of p-zombies, including their parents, siblings, partners, etc.? Both possibilities seem pretty detached from reality.
Pre-computation is another one that seems to obfuscate the actual issue. No, I don't think anyone would think a computer simply reciting a pre-computed conversation had conscious thought going into it; but the same is true for a human being reciting a conversation they memorized (which wouldn't be that different from reading the conversation in a book). But that's a bit of a strawman, because no one is arguing that lookup table-type programs are conscious (you don't see anyone arguing that Siri is conscious). And the lookup table/pre-computations for even a simple conversation would be impossibly large (run some numbers; it's most likely larger than the number of atoms in the universe for even tiny conversations).
So I don't see these arguments as bringing up anything useful. They seem more like colorful attempts to purposefully confuse the issue.
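For what it's worth, a back-of-envelope version of "run some numbers", sketched in Python; the figures (3,000 common characters, 20-character turns, ~10^80 atoms in the observable universe) are rough assumptions of mine, just to show the order of magnitude:

    # Rough sketch of how big a conversation lookup table would have to be.
    chars = 3_000          # assumed count of common Chinese characters
    turn_len = 20          # assumed characters per conversational turn
    atoms = 10 ** 80       # rough count of atoms in the observable universe

    one_turn = chars ** turn_len        # distinct possible single turns (~10^69)
    two_turn_history = one_turn ** 2    # table keyed on just two prior turns (~10^139)

    print(f"possible single turns:     ~10^{len(str(one_turn)) - 1}")
    print(f"two-turn histories to key: ~10^{len(str(two_turn_history)) - 1}")
    print("table bigger than the universe:", two_turn_history > atoms)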
The person in the room performing the lookup is a red herring and can be replaced by a suitable algorithm, e.g. a convnet, which could learn the lookup task.
Consciousness resides in the minds that created the lookup tables: they were constructed by conscious beings to map queries to meaningful responses.
The lookup tables are the very sophisticated part of Searle's Chinese Room.
The emergent semantic vector algebra recently discovered in human languages by Mikolov's word2vec [Mikolov et al] demonstrates that some of the computational power of language is inherent in the language itself (but only meaningfully interpretable by a conscious listener).
Meaning requires consciousness, but language is unexpectedly sophisticated and contains a semantic vector space which can answer simple queries ("What is the capital of...") and analogise ("King is to what as Man is to Woman") algebraically. [Mikolov et al]
This inherent semantic vector space is discoverable by context-encoding large corpora.
Language is a very sophisticated and ancient tool that allows some reasoning to be performed by simple algebraic transformations inherent to the language.
-
[Mikolov et al]: Mikolov, Chen, Corrado & Dean, "Efficient Estimation of Word Representations in Vector Space", http://arxiv.org/abs/1301.3781
Yeah, but it just makes me more confused? How does that say anything about a computer then? There's no human being inside a computer who fails to understand something.
The idea is, roughly, if a person in the place of the cpu does not understand chinese, then the cpu doesn't understand chinese.
And because the cpu is the part that does stuff, like the person, then if the system w/ the person doesn't understand chinese, then the computer w/ the cpu doesn't understand chinese.
Because there's nothing to make the cpu more understand-y, only things to make it less understand-y, and otherwise the systems are the same.
The Chinese room argument is actually needlessly convoluted. Just imagine a piece of paper on which three words are printed: "I AM SAD". Now is there anyone who believes that this piece of paper is actually feeling sad just because "it says so"? Of course not. Now, suppose we replace this piece of paper with a small tablet computer that changes its displayed "mood" over time according to some algorithm. Now in my opinion it is rather hard to imagine that all of a sudden consciousness will "arise" in the machine like some ethereal ghost and the tablet will actually start experiencing the displayed emotion. Because it's basically still the same piece of paper.
The Chinese room argument is actually needlessly convoluted. Just imagine a piece of paper on which I draw a face that looks sad. Now is there anyone who believes that this piece of paper is actually feeling sad just because it looks sad? Of course not. Now, suppose we replace this piece of paper with an organic machine made of cells, blood and neurons which changes its displayed "mood" over time according to some algorithm. Now in my opinion it is rather hard to imagine that all of a sudden consciousness will "arise" in the machine like some ethereal ghost and the organic machine will actually start experiencing the displayed emotion. Because it's basically still the same piece of paper.
The Chinese Room argument has always seemed to me to be a powerful illustration of the problem that "consciousness" is so poorly defined as to not be subject to meaningful discussion, dressed up as a meaningful argument against AI consciousness.
It's always distressed me that some people take it seriously as an argument against AI consciousness; it does a better job of illustrating the incoherence of the fuzzy concept of consciousness on which the argument is based.
As a believer in Weak AI, the Chinese Room argument really gave me more understanding of my position. His argument is based on the concept that interpretation of symbols is not the same as the understanding that we do. As an example, say that a person learns 1 + 1 = 2. Because that person understands the concept, he can then apply it to other situations and figure out that 1 + 2 = 3. The Chinese room, by contrast, is just interpreting symbols: when the computer is asked "what is 1 + 1?" it can answer "2" via a lookup table, but the person inside the room has gained no understanding of the actual question, so he can't then use that knowledge in different circumstances and know, without looking it up, that 1 + 2 = 3.
The Chinese Room argument is that because computers can't "learn", everything has to be taught to them directly, whereas humans are able to take knowledge given and apply it to other situations. While some computers can "learn" enough rules to follow patterns, the argument is that computers can't "jump the track" and that humans can.
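A toy Python contrast of the two modes the comment above describes; both the table and the little parser are mine, just to make the distinction concrete:

    # Lookup-table "answering": only questions already in the table get answers.
    table = {"what is 1 + 1?": "2"}

    def room_answer(question):
        return table.get(question, "???")   # no entry, no answer

    # "Understanding"-style answering: parse the question and apply the concept,
    # so unseen sums like "what is 1 + 2?" are handled too.
    def understanding_answer(question):
        q = question.lower().replace("what is ", "").replace("?", "")
        a, b = q.split(" + ")
        return str(int(a) + int(b))

    print(room_answer("what is 1 + 1?"))           # -> "2"
    print(room_answer("what is 1 + 2?"))           # -> "???"
    print(understanding_answer("what is 1 + 2?"))  # -> "3"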
Yeah, I see that, but the problem is that we don't know how humans are conscious, i.e. where meaning arises. If you believe that brains are just atoms, then meaning arises from "dumb physics" somewhere.
Another way to think of it is: a fetus, a sperm, or an ovum is not conscious. Some might argue that a newborn isn't really conscious. Somewhere along the line it becomes conscious. How does that happen? Where is the line? We have no idea.
You can't assert that meaning can't arise from "dumb symbol manipulation" without understanding how meaning arises in the former case. We simply don't know enough to make any sort of judgement. The Chinese room argument is trying to make something out of nothing. We don't know.
I've always thought that the Chinese Room proved just the opposite of what Searle thinks it does.
I think of it this way:
I have two rooms: one has a person who doesn't speak Chinese in it, but they have reference books that allow them to translate incoming papers into Chinese perfectly.
The second room just has someone who speaks Chinese, and can translate anything coming in perfectly.
Searle says that AIs are like the person in room one: they don't know Chinese.
I would argue that is the wrong way to look at things. A better comparison is that an AI is like the system of room 1, which does know Chinese, and from observation is indistinguishable from the system of room 2. What's going on inside (a human with Chinese reference books vs a human who knows Chinese) doesn't matter, it's just internal processing.
If it walks like a duck and quacks like a duck, then it's a duck.
If a machine claims to be conscious, and I can't tell it apart from another conscious being, who am I to say it isn't conscious?
[1] http://plato.stanford.edu/entries/chinese-room/