Scott Robinson

Back into the Chinese Room



Just when I thought I was out, they pull me back in!


I’ve spent decades now immersed in the Chinese Room. I’ve argued it endlessly with others – sometimes with great pleasure, sometimes with even greater annoyance. It is, for me, the thought experiment to end all thought experiments, with ramifications that strike at the heart of human progress.


It’s been debated endlessly by some of the greatest minds alive today, from its creator John Searle to Douglas Hofstadter to Daniel Dennett. It’s been debated by grad students, scientists, computer geeks, AI-infatuated nerdboys. It’s been referenced thousands of times in books, magazines, even journal articles.


I’ve argued it in essays, social media threads, and even an entire book. I’ve extended my argument as recently as last year, in an essay called “Chinese in a Chinese Town”.


A brief recap


Searle’s thought experiment can be expressed like this. Suppose you find yourself in a closed room that has two slots in the wall at either end. Within the room, there is a stack of manuals filled with Chinese symbols. Into one slot will come slips of paper covered with Chinese symbols, and your job is to look the symbols up in the manuals, which give you corresponding symbols with which you are to write responses, also on slips of paper. Once you’ve done so, you slide those slips out the second slot.


To the outside observer, who is putting questions written in Chinese on the slips coming in and getting answers to those questions (determined by the manuals) coming out, it appears that the room itself understands Chinese.


The thing is, you yourself don’t understand a word of Chinese. You’re in there just following orders, and there is no actual thinking going on at all.
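
For readers who think in code, the room’s mechanics reduce to something like the following sketch. It is mine, not Searle’s, and purely illustrative: the questions, answers, and respond() helper are hypothetical stand-ins for the manuals, compressed into a lookup table.

```python
# A minimal, hypothetical sketch of the room's mechanics: the manuals become
# a lookup table, and the occupant's job becomes a blind string match.
MANUALS = {
    "你叫什么名字？": "我没有名字。",      # "What is your name?" -> "I have no name."
    "今天天气怎么样？": "今天天气很好。",  # "How is the weather today?" -> "It is fine today."
}

def respond(slip: str) -> str:
    """Follow the manuals: match the incoming symbols, copy out the listed reply."""
    return MANUALS.get(slip, "对不起，我不明白。")  # "Sorry, I don't understand."

print(respond("今天天气怎么样？"))  # the room "answers," and no one inside understands a word
```

The outside observer sees fluent Chinese coming out; the code path never involves meaning at all.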


In 1980, Searle used this thought experiment to demonstrate that the computers of the day, which executed stored instruction sets in the form of programs to process data entered into them, were not “thinking,” no matter how sophisticated their output.


The computer science community (and fanboy legions everywhere) howled in outrage over this – the idea that a computer could not become conscious! – and even today, you can’t hear yourself think for the roar of their bowels. Objections and vigorous refutations have been flowing for decades now, and show no signs of abating.


Chief among these is the Systems Reply, which says that even though the person in the room doesn’t understand Chinese, the room itself – the system – does. While any one element might not satisfy the requirements of “understanding,” all of them working together do.


Bad arguments


It would take many pages, at a minimum, to list every challenge to the Chinese Room that I’ve addressed over the years and state each refutation, so I’ll confine myself to the following generalization and defense of Searle: most of the rebuttals are essentially excuses for the rebutter to expound on their own definition of “understanding,” and to constrain Searle’s argument in some way so as to leave the door open for machine intelligence, for which the rebutter eagerly waits.


Often, this means limiting the definition to “translation,” where the rebuttal’s reasoning might squeak through. Almost as often, it means insisting that Searle’s experiment, as stated, does not go so far as to make claims about barriers to artificial consciousness.


Searle cleared this up long ago, and others came alongside him. He was very explicitly saying that “understanding Chinese” means “being a sentient processor of language”:


“...Searle directs the argument against machine intentionality rather than machine consciousness, arguing that it is ‘understanding’ that the Chinese room lacks,” philosopher David Chalmers pointed out. “All the same, it is fairly clear that consciousness is at the root of the matter. What the core of the argument establishes directly, if it succeeds, is that the Chinese room system lacks conscious states, such as the conscious experience of understanding Chinese. On Searle’s view, intentionality requires consciousness, so this is enough to see that the room lacks intentionality also.”


Intentionality now makes its way into the mix, and we can add it to our list of components we must consider when contemplating the possibility of conscious AI. Pierre Jacob defines it as “the power of minds to be about, to represent, or to stand for, things, properties and states of affairs”. Searle defines understanding in the original Chinese Room argument as mental states with intentionality, and those are artifacts of consciousness.


Searle himself paraphrases Jacob as follows: “...that property of many mental states and events by which they are directed at or about or of objects and states of the world.”


In other words, he’s not saying “understanding Chinese” means “able to translate Chinese” – hell, Siri and Alexa can do that! – he’s saying “understanding Chinese” means “having intentional mental states that can be expressed in Chinese, and being able to respond to the mental states of others expressed in Chinese.”


And almost all of them find the source of their outrage in what they perceive to be a biological bias on Searle’s part, which is what they’re actually harping about: they claim that he is saying that consciousness is inherently organic in nature, that no machine, computer or otherwise, can ever achieve it.


Nope, Searle is in no way proposing an organic limitation on consciousness (though the original thought experiment didn’t express any position on such a limitation one way or another; that all came up later).


“Another misunderstanding of the Chinese Room Argument is to suppose that I am arguing that as a matter of logic, as an a priori necessity, only brains can have consciousness and intentionality,” he has written. “But I make no such claim. The point is that we know in fact that brains do it causally. And from this it follows as a logical consequence that any other system that does it causally, i.e., that produces consciousness and intentionality, must have causal powers to do it at least equal to those of human and animal brains.”


A sufficiently complex causal system, organic or artificial, should be able to achieve consciousness. That’s something Searle freely concedes. That just isn’t a digital computer of the instruction-set-executing variety, circa 1980.


An actual neural network (not a simple simulation) should be able to do it, for instance. And that’s where I’ve landed.


Why is he telling us this?


I’m telling you this because I’ve come across yet another weak attack on the Chinese Room, and it has irritated me into extending my own argument further.


I came across this argument in a book called Crisis of Control by Peter Scott. It’s generally a good and useful book, evaluating the best- and worst-case scenarios for an AI-saturated human future, wherein he dismisses the Chinese Room argument as follows:


“The problem with this argument is that it is circular, and a very small circle at that. It defines the action of the computer to be a process that is not thinking and then swoons with delight when a short while later it concludes it is in fact not thinking. It ignores the possibility that our own consciousness is also built on that same process of following rule after rule, writ large. In other words, consciousness could be an emergent property of large-scale algorithmic processing; a Society of Mind. Not seeing that is a consequence of reducing the vast complexity of human brain processing to the level of instructions read from books.”


He goes on to repeat the tired accusation that Searle is bio-centric, which I’ve already refuted.


My dismissal of Scott’s circularity complaint is simple: all Searle’s argument does is state a premise and then defend it. Every claim in every peer-reviewed paper ever published does the same thing. Searle doesn’t rely on the premise to self-justify his conclusion; he painstakingly builds his case. Scott’s dismissal is so thin, it’s hard to believe he even bothered to read the paper.


But I digress. I’m here to extend my argument in the Chinese Room’s defense, and here I go.


If we’re going to address the question of machine consciousness meaningfully – which the original thought experiment sought to do, and which (we trust) its detractors wish to do, at least in principle – we must clear away the accumulated misunderstandings of the language used. We can probably ding Searle for a lack of specificity here, but it was 1980, the argument was brand new, and who could have imagined there would be so many ways to gum up the works with alternate readings of individual words?


Let’s look at what’s going on – and not going on – in the Chinese Room. The occupant is following rules laid out in books to map incoming strings of Chinese symbols to other symbols. Input, output. The strings of symbols coming in represent questions, and the strings of symbols going out represent answers. The answers are accurate enough to convince the person outside the room submitting the questions that there is a mind within the room considering each question and formulating each answer in real time.


That’s what “understanding Chinese” means.


Now, consider that to achieve output like this, those manuals (which most who argue for or against Searle tend to skip over entirely) must contain more than just symbol-to-symbol mapping; they would need to contain the real-world knowledge necessary to answer the questions – knowledge that is accumulated through conscious experience in the world. In order to understand English well enough to answer questions put to me in that language, I must have both a reservoir of knowledge and a mapping of that knowledge to my ability to express myself in English, so that I can create those answers spontaneously. I do not “look up” the answers in manuals in my head, nor do I consult any manuals in my head to encode my response. Both are modulations of the neural networks in my brain, accumulated through experience in the world.


Put more simply, to “understand Chinese” is to repeat the conscious experience of a Chinese speaker speaking Chinese.


Can only an organic brain do that? It is absolutely the case that an artificial neural network in a computer (or, better, an android) can achieve the necessary modulations to capture experiences in the world and learn to express them in Chinese. And Searle himself never said otherwise.


But old-school, rule-by-rule computer programs? No.
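
To make the contrast concrete, here is a deliberately toy sketch of my own – not Searle’s, and nothing remotely like real language. It assumes the relevant difference is where the input-to-output mapping comes from: the rulebook’s mapping is written down in advance, while the little model’s weights are shaped by repeated exposure to examples. The task, data, and training settings are all hypothetical.

```python
# A toy contrast (hypothetical task and data): a fixed rulebook versus weights
# modulated by repeated exposure to examples. The point is only where the
# input->output mapping comes from, not the difficulty of the task.
import numpy as np

# The "room": the mapping is written down in advance.
RULEBOOK = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

# The "network": a single logistic unit whose weights start at nothing
# and are adjusted, exposure by exposure, toward the same behavior.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])
w, b = np.zeros(2), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                  # repeated exposure to the examples
    p = sigmoid(X @ w + b)              # the unit's current responses
    grad = p - y                        # cross-entropy gradient
    w -= 0.5 * X.T @ grad / len(y)      # the weights change, not a manual
    b -= 0.5 * grad.mean()

print(RULEBOOK[(1, 1)])                 # looked up in advance: 1
print(np.round(sigmoid(X @ w + b), 2))  # learned from exposure: close to [0, 0, 0, 1]
```

Neither toy understands anything, of course; the sketch only illustrates the mechanical difference the argument turns on – rules consulted versus connections modulated.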


There’s much, much more to argue. But if we’ve moved the argument forward even a little bit toward a better picture of what’s involved in the understanding of language based on conscious experience, moving toward our holy grail – meaning – then I’ve spent the past hour well.
