Last active February 11, 2024 15:09
  • Save yoavg/001cca8ab6de3f20650192da17117292 to your computer and use it in GitHub Desktop.
On Searle's "Chinese Room" argument

When I first heard of Searle's "Chinese Room" argument, some twenty-plus years ago, I had roughly the following dialogue:

"Imagine there is a room with instructions, and someone slips a note written in Chinese into this room. You don't know Chinese, but you follow the instructions in the room, and based on the instructions you produce a different note in Chinese and send it back out, and whoever sent you the original note thinks your note is a perfect response."

Oh, so the person outside doesn't know Chinese either?

"No no, they do know Chinese; you produced a perfect answer"

Was that a generic answer, like "wow that's interesting"?

"No, it was a real response to the slip"

And am I expected to do this for every slip of paper written in Chinese?

"Yes"

And respond differently to each one?

"Yes"

(note: this is not a correct answer; you need to respond differently to most of them, but some do require the same answer. Anyhow, this bothered me already then, but I didn't follow up on it.)

This sounds impossible to me.

"But it's a thought experiment"

Ok. So I am supposed to be able, following the instructions, to produce responses to every possible message in Chinese?

"Yes"

This is impossible. Even if I fully understood Chinese, there would be some messages I couldn't respond to properly.

"But it's a thought experiment, it doesn't have to be realistic"

Ok, I guess. Do go on.

"Ok, so now you get another slip into the room, and you respond to this as well, and continue the conversation"

Conversation? You mean the messages in the different slips are related to each other?

"Yes"

This doesn't make sense. If the instructions are pre-written, they cannot take the previous messages and responses into account; they can only respond to one message at a time. I don't see how you can carry a conversation that way.

"That's beside the point of the thought experiment"

Is it? Well, ok, I guess, let's see where this goes.

"Ok, so, you are in the room and there are instructions, and you follow them to operate on Chinese text and produce Chinese text"

Wait, and what if I get a malformed Chinese text? How do I respond then? Do the instructions cover this as well?

"Umm... I guess, I didn't really think of this point. Anyhow, now the question is, do you, who operate the instructions, understand Chinese?"


"Well? do you?"

No.

"But you respond perfectly to Chinese texts"

But I am in a magical room with magical instructions that instruct me. Where are you going with this?

"That's the point of the story. If a human were only following instructions to manipulate text, then they don't understand the text, even though they are human"


"Isn't this fascinating?"

Not really.

"It means that for an AI system, if it wants to really understand, it also needs to do more than symbol manipulation"


"Otherwise it is just weak AI, not strong AI"


"Don't you find this interesting?"

No, not really.

"You can respond perfectly without understanding, isn't it fascinating?"

But it's not me responding; it's the instructions.

"Well yes, but you follow the instructions"

If you have the magic room and can perfectly manipulate the symbols, why do I care whether the symbol manipulator understands or not?

"Because otherwise it's just weak AI"


Of course, it might have been different, and this version might be painted with some of my current perspectives. The phrasings are certainly not the same. But I do remember that the core was the same: the premise of the story fascinated me in its impossibility, especially if you stop to consider the corner cases and the implications of such an instruction room. Then the switch to the "intended point" of weak-vs-strong AI was very, very meh. I just didn't care about this topic then, and I still don't now. So what stuck in my mind as "the Chinese room" is the fascinating-yet-impossible premise, not the weak/strong-AI thing.

And so today it is kind of amazing to see how far you can push this premise. We now have not-so-bad versions of this crazy magical instruction room! Of course, it is not perfect, and it cannot answer some queries, and it goofs up a lot, and so on. But it still manages to implement a far better and stronger demo of this magical Chinese room than I ever thought possible. And based only on statistics and examples. This is just crazy to me. (Will we ever be able to build the full Chinese room? I don't think so; it seems too magical to be possible. If we do, will it be weak or strong AI? Who the fuck cares what you call it. Will the room be able to feel pain? I don't care.)
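(A small aside on the conversation objection from the dialogue: a pre-written rule book actually can carry a conversation, as long as each slip passed into the room contains the whole transcript so far, which is roughly how today's statistics-based rooms handle it. A toy sketch, with trivially made-up rules standing in for the magical instructions:)

```python
# Toy illustration: a stateless rule-follower can still "carry a conversation"
# if every slip handed into the room contains the full transcript so far.
# The rules here are deliberately silly stand-ins for the magical instructions.

def room(transcript):
    """Stateless: the same transcript always yields the same reply."""
    last = transcript[-1]
    count = len(transcript)
    # The "instructions" may consult earlier slips, because they are on the slip too.
    if count == 1:
        return f"Hello! You said: {last}"
    return f"Reply #{count}: you said {last!r}, after {count - 1} earlier slips"

# The conversation state lives outside the room, on the slips themselves.
history = []
for message in ["ni hao", "how are you?"]:
    history.append(message)
    reply = room(history)   # the room keeps no state between calls
    history.append(reply)
```

The room itself remembers nothing between slips; all the "memory" is on the paper being passed back and forth.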
