Too Busy For Words - the PaulWay Blog

Tue 4th Aug, 2009

Understanding the Chinese Room

The Chinese Room argument against strong AI has always bothered me. It's taken me a while to realise what I dislike about the argument and to put it into words, though. For those of you who haven't read up on this, it's worth perusing the article above and others elsewhere to familiarise yourself with it, as there's a great deal of subtlety in Searle's position.

Firstly, Searle stipulates that the computer program, as given, comfortably passes the Turing Test, so by that standard we know it's at least an artificial intelligence. Then he posits that he can perform the same program himself by following the same instructions (thus still passing the Turing Test), even though he "doesn't understand a word of Chinese". Then he proposes that he can memorise that set of instructions and pass the Turing Test in Chinese entirely in his head, still without understanding Chinese. If he can do all that while not understanding Chinese, the argument goes, then the machine passing the Turing Test doesn't "understand" Chinese either.

So. Firstly, let's skip over the obvious problem: that a human trying to perform the computer program will do it millions of times slower. Speed matters to the Turing Test, because we're judging the computer on its ability to interact with us in real time - overly fast or slow responses can be used to identify the computer. A human who has learnt all the instructions by rote and follows them as a computer would still, I'd argue, be identifiably slow. We're assuming here that the person doesn't understand Chinese, so they have to follow the instructions rather than respond for themselves.

And let's skip over the big problem of what you can talk about in a Turing Test. Any system that can pass it has to be able to carry on a dialogue with quite a bit of stored state, and has to be able to answer fairly esoteric questions about a history and a current state that a human has and a computer doesn't (e.g. what did you last eat, what sex are you, and so on). I'm skipping that question because it's an even call whether this is in or out for current Turing Test practice: an AI programmed with an invented personality might pass this in ways a pure 'artificial intelligence' would not. It's a problem for the Chinese Room, though, because that too has to hold detailed state in memory and have a life outside the questioning, and the example Searle gives is of a person simply answering questions, not carrying on some external 'life'. ("Can I tell you a secret later?" is the kind of thing a human will remember and follow up on later, but the Chinese Room says nothing about it.)

It's easy to criticise the Chinese Room at this point as being fairly stupid. You're not talking to the person inside the room; you're talking to the person the room simulates. And the person executing all those instructions, even if they're written in a high-level language, would have to be superhumanly ... something in order to merely execute those instructions rather than try to understand them. It's like expecting a person to take the numbers from one to a million in random order and sort them via bubble sort in their head, whilst forbidding them from just saying "one, two, three..." because they can see what the sequence is going to be.
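To make the analogy concrete, here's roughly what that mechanical task looks like as code - a standard bubble sort over a shuffled list (my own illustrative sketch, and on a much smaller list than a million items):

    import random

    def bubble_sort(items):
        # Repeatedly sweep the list, swapping adjacent out-of-order pairs.
        # This is the purely mechanical procedure the person in the analogy
        # executes step by step, with no view of the overall pattern.
        n = len(items)
        for sweep in range(n - 1):
            swapped = False
            for i in range(n - 1 - sweep):
                if items[i] > items[i + 1]:
                    items[i], items[i + 1] = items[i + 1], items[i]
                    swapped = True
            if not swapped:  # no swaps on a full sweep: already sorted
                break
        return items

    numbers = list(range(1, 1001))  # a small stand-in for "one to a million"
    random.shuffle(numbers)
    assert bubble_sort(numbers) == list(range(1, 1001))

Every comparison and swap here is performed without reference to the obvious overall pattern - which is exactly what Searle asks of his person in the room.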

To me the first flaw in Searle's argument is the idea that his person in the room could somehow execute all those instructions without ever trying to understand what they mean. If nothing else, learning Chinese would make the person's job considerably easier - she could skip the whole process of decoding meaning and go straight to the 'interact with the meaning' rules. Any attempt by Searle to step in and say that, no, you're not allowed to do that undermines his own premise: if he makes her too simple to even understand a language, then how does she read the books; if he makes her incapable of learning, then how did she learn to perform this process in the first place? So Searle's judgement that the AI doesn't really "understand", because the person in the room doesn't "understand", rests on the sophistry that you could have such a person in the first place.

But, more than this, the fundamental problem I have is that any process of taking statements or questions in a language and giving responses in the same (or any other) language is bound to deal with the actual meaning and intelligence in the original question or statement. It's fairly counterintuitive to imagine an AI capable of interacting meaningfully in Chinese without understanding what makes a noun and a verb, the rules of tense and plurality, or the rules of grammar, structure and formality. If Searle would have us assume that we've somehow managed to create an AI that can pass the Turing Test without the programmers building these understandings of the actual meaning behind the symbols into the program, then I think he's constructed a somewhat artificial (if you'll forgive the pun) situation.

To try and put this in context, imagine the instructions for the person in the room have been written in English (rather than in Python, for example). The obvious way to write this Chinese Room program, then, is with big Chinese-English and English-Chinese dictionaries and a book of rules by which the person pretends there's another person (the AI) answering the questions based on the English meaning of the words. I argue that any attempt to obfuscate the process and remove the use of the dictionaries is not only basically impossible but would stop the Chinese Room passing the Turing Test. It's impossible to remove the dictionaries because you need some kind of mapping between each Chinese symbol and the English word the instructions deal with, if for no other reason than that Chinese has plenty of homographs - symbols which have two different meanings depending on context or inflection - and you need a dictionary to distinguish between them. No matter how you try to disguise a verb as something else, you need to put it in context so that the person can answer questions about it, which is precisely to make it meaningful.
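As a toy sketch of that "obvious" implementation (entirely my own invention - the dictionary entries and the single rule are placeholders, not anything from Searle):

    # The Chinese-to-English mapping is where the meaning lives.
    chinese_to_english = {
        "你": "you",
        "好": "good/well",        # context decides which sense applies
        "吗": "[question marker]",
        "我": "I",
        "很": "very",
    }

    english_to_chinese = {
        "i am well": "我很好",
        "hello": "你好",
    }

    def room_respond(chinese_input):
        # Step 1: the dictionaries turn symbols into English meanings.
        meanings = [chinese_to_english.get(ch, "[unknown]") for ch in chinese_input]
        # Step 2: the "rule book" operates on those meanings, not on the
        # raw symbols: a well-being question gets an "I am well" answer.
        if "[question marker]" in meanings and "good/well" in meanings:
            return english_to_chinese["i am well"]
        return english_to_chinese["hello"]

    print(room_respond("你好吗"))   # -> 我很好

The telling thing about even this trivial sketch is that the rule in step 2 is written against the English meanings; strip the dictionaries out and there's nothing left for the rules to operate on.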

So once you have a person capable of learning a language, in a room where symbols are given meaning in that language, you have a person that understands (at some level) the meaning of the symbols, and therefore understands Chinese.

Even if you introduce the Python at this point, you've only added an extra level of indirection. A person reading a piece of Python code will eventually learn what the variables mean, no matter how obscurely the code is written - if we're already positing a person capable of literally executing an entire program, they're already better than the best maintenance programmer. If you take away this ability to understand what the variables mean, then you also (in my view) take away the person's ability to interpret the program in the first place.
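For instance (an obfuscated snippet of my own invention, nothing to do with Searle's rule books):

    def f(a):
        # The names say nothing, but anyone tracing this by hand will
        # quickly work out what x and y "mean".
        x, y = 0, 1
        r = []
        while x < a:
            r.append(x)
            x, y = y, x + y
        return r

    print(f(100))   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]

After a few iterations the executor can hardly avoid noticing that x and y are successive Fibonacci numbers; the meaning leaks out of the behaviour even when the names hide it.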

Searle's argument, therefore, rests on two fallacies. Firstly, that it's possible to have a human successfully execute a computer program without ever coming to understand the process. Secondly, that the program will never, at any point, deal with the meaning of the Chinese in a way that a person could make sense of. So on both counts Searle's "Chinese Room" is no argument against a machine intelligence "understanding" in the same way we understand things.

What really irritates me about Searle's argument here - and it doesn't change anything in my disproof above - is that it's such an arrogant position. "Only a real *human* mind can understand Chinese, because all those computer thingies are really just playing around with symbols! I'm so clever that I can never possibly learn Chinese - oh, wait, what was that?" He's already talking about an entity that can pass the Turing Test - and the first thing I'd argue about that test is that people look for understanding in their interlocutors - and then he says the "understanding" isn't there because it's an implementation detail? Give me a break!

And then it all comes down to what "understand" means, and any time you get into semiotics it means that you've already lost.



All posts licensed under the CC-BY-NC license. Author Paul Wayper.

