I used to be a big fan of John Searle's "Chinese room argument" against "strong A.I.". I still agree with Searle in rejecting strong A.I., but I now have my doubts about the effectiveness of the Chinese room argument. In this post, I explain the problem that I see with the argument.
"Strong A.I." is a term that Searle coined. It refers to a claim made by certain philosophers and scientists who investigate artificial intelligence (A.I.). Roughly speaking, strong A.I. is the theory that a computer with the right programming is a genuine mind simply by virtue of its programming. More precisely, strong A.I. says that a system (e.g. a computer) has mental state M as long as it follows a set of rules that cause it to behave as if it has M.
The Chinese room argument is one of Searle's most famous ideas, and literature on it abounds. I won't bother to hunt down the specific references to it in Searle's writings. Basically, the argument is as follows. Imagine that Searle is sitting inside a room dubbed the Chinese room. Outside the room, a Chinese speaker writes Chinese questions onto slips of paper and passes the slips into the Chinese room through a slot. Searle writes replies to the questions and passes them out through the slot. Searle doesn't know Chinese and can't read the questions that he receives. However, he has a rulebook next to him that tells him what to write in response to each combination of Chinese characters. Thus, to the Chinese speaker who receives Searle's replies, it looks as if Searle knows Chinese. In other words, Searle is following rules that cause him to behave as if he knows Chinese. Yet he does not in fact know Chinese. Therefore, Searle concludes, strong A.I. is false.
The most common objection to the Chinese room argument is the "system" reply. The system reply rightly notes that the Chinese room argument is misleading in its portrayal of strong A.I. On a charitable interpretation, strong A.I. would not entail that Searle himself knows Chinese while sitting inside the Chinese room. It is not Searle himself but, rather, the whole room that behaves as if it knows Chinese. (Searle himself is constantly looking at his rulebook to find out what to write, and that is not the way someone who knows Chinese would behave.) Thus, the system reply argues, it is the room as a whole (the combination of Searle, his rulebook, his writing instruments, and the walls of the room) that knows Chinese, and that remains true even if Searle himself doesn't know Chinese.
Searle has a response to the system reply. Imagine, he says, that he simply memorizes all the rules in the rulebook. If he does so, then he can write responses to Chinese questions without being part of a larger system, and yet he still doesn't know Chinese. Thus, Searle concludes, a human being can follow rules that cause him to behave as if he knows Chinese, and yet not know Chinese; ergo, strong A.I. is false.
And it is here, I think, that Searle goes wrong.
Those who support the Turing test may believe that the ability to answer Chinese questions is sufficient for knowing Chinese. But there's no reason why strong A.I. proponents in general must accept that belief. After all, normal Chinese speakers do a lot more with their knowledge of Chinese than answer a bunch of questions. They can, for example, ask for a glass of water when they are thirsty. For Searle to really behave as if he knew Chinese, he would need to know how to write "Please give me a glass of water" in Chinese, and he would have to know to write this if and only if he wanted a glass of water. It's not clear to me how this knowledge would differ from knowledge of the meaning of "Please give me a glass of water" in Chinese. In short, if Searle memorized a set of rules that really allowed him to act as if he knew Chinese, then it's not clear to me how Searle could fail to know Chinese.
Of course, Searle could construct a more plausible scenario in which an entity behaves as if it knows Chinese without knowing Chinese. Imagine an electronic robot that is programmed to behave as if it knows Chinese, that is also programmed to behave as if it is thirsty, and that is programmed to say "Please give me a glass of water" in Chinese if and only if its thirst-behavior has kicked in. Surely, Searle might say, this robot does not know Chinese, even though it follows a set of rules (i.e. its programming) that cause it to behave like a Chinese speaker. I would be inclined to agree with Searle about this. Yet, in saying this, Searle would simply be begging the question against strong A.I., which is precisely the claim that such a robot would know Chinese. More importantly, by changing his hypothetical rule-follower from a human being to a robot, Searle would effectively be abandoning the Chinese room argument in any recognizable form.
Thus, I think the Chinese room argument fails. It simply does not demonstrate that strong A.I. is false.
Slightly amended on November 21, 2011.
(I think) you're arguing that if we consider a wider range of behaviour than just having a conversation, then the difference between acting *as if* you understand Chinese and *actually* understanding Chinese goes away.
But for Searle, it doesn't make any difference whether the system is answering questions, walking around, getting water or anything else: the key issue is still the subjective experience of the person in the room. And the man's subjective experience would still just be that of someone consciously following rules, without any sense of understanding.
I don't agree with Searle's argument, but he's not begging the question against Strong AI. He's not just saying 'surely the system wouldn't have understanding', he's arguing that it doesn't because: a) the man doesn't; and b) there's nothing else around that could supply this missing understanding (just bits of paper).
Thanks for the comment, Mercher.
You're right that subjective experience is the key point for Searle. Searle claims that a computer wouldn't have subjective experiences simply by virtue of following the right program. I happen to agree with this claim. But I don't think that Searle offers a good argument in support of it.
It's important to remember that Searle's argument involves a live human being, not a computer. Searle says that a human being could (a) follow rules that make him act exactly as if he knows a language and (b) nonetheless not have the subjective experience of understanding the language. But I think Searle's wrong about that: suppose I knew a set of rules that allowed me to say "Give me water" if and only if I wanted to be given water; I don't see how I could have that knowledge without it constituting an actual understanding of the sentence "Give me water". (Maybe my imagination is just too limited, of course.)
If I understand you, you're saying that the Chinese Room scenario as Searle describes it just couldn't exist because the man would inevitably have the understanding that he's not supposed to have.
I can sort of see your point when it comes to behaviour that implies embodiment, like the "give me water" example. But I don't see that there's any such problem with the original conversation-holding, non-embodied, symbol-manipulating behaviour. There really are - in the real world, not just in thought experiments - cases of people following rules without understanding. A beginner learning to do something by consciously following rules of thumb, say.
So is it enough for Searle's argument if we consider only this less problematic, non-embodied sort of behaviour? I think it is, at least as an argument against the sort of "disembodied" AI that looks like Watson, a Turing test candidate program etc.
And also, I don't think you *are* right even about the embodied sort of behaviour. Searle's "Robot reply" (which you refer to in your post) works fine here: we can just suppose that the input to the Chinese Room includes sensory information and that the output somehow controls behaviour. We could even allow the Room to maintain an internal state if we need to. You argued that this would amount to "abandoning the argument in any recognizable form", but I disagree: it's a clear implication of the functionalist viewpoint which Searle's arguing against that there is no essential difference between dealing with one sort of input/output and another. It's all just information.