Chinese Room Argument 2/16

Chinese room argument:

  • thought experiment by philosopher John Searle, who also introduced the distinction between strong and weak AI; Searle believes current systems are only weak AI

  • goes directly against the physical symbol system hypothesis (the claim that a system with algorithms/rules for manipulating symbolic representations has everything needed for intelligence)

    • PSSH is only concerned with syntax, the question of what the symbols mean is irrelevant

  • summary of his point - syntax does not suffice for semantics

    • the system will never grasp the symbols’ meaning - it has no access to the semantics

syntax/semantics distinction:

  • in philosophy of mind (the present context) the distinction is analogous to the one in linguistics, but not identical

    • syntax is about the symbols in a system, and the rules for manipulating them

    • semantics is about real world meaning of the symbols

Chinese room thought experiment:

  • a person who speaks no Chinese is seated in a room

  • pieces of paper are passed into the room through one window - these have questions written in Chinese on them

  • other pieces of paper can be passed out through a second window - these have answers written in Chinese

  • the person inside the room has an instruction manual in English

  • it instructs the person which pieces of paper to pass out, depending on the symbols on the pieces of paper that come in

  • the manual is written in such a way that the room outputs appropriate answers in Chinese to the Chinese questions

the person applies rules to symbols based only on the symbols’ form - the person operates purely on the basis of syntax - the room with the person inside acts as a physical symbol system (PSS)
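The purely syntactic operation described above can be sketched as a lookup table: the rules match incoming symbol strings by their form alone and never consult any meaning. This is an illustrative sketch, not Searle's own formulation; the Chinese strings and the rule set are made-up placeholders.

```python
# Minimal sketch of the Chinese room as a purely syntactic rule book.
# The rules map input symbol strings to output symbol strings by form
# alone; no step consults what the symbols mean. (Strings are made-up
# placeholders for the patterns in the manual.)
RULE_BOOK = {
    "你好吗": "我很好",       # matched purely as shapes, not as a greeting
    "你是谁": "我是一个房间",  # likewise: form in, form out
}

def room(question: str) -> str:
    """Pass out the paper the manual prescribes for this symbol pattern."""
    return RULE_BOOK.get(question, "不懂")  # fixed output for unknown shapes

print(room("你好吗"))  # the room answers appropriately without understanding
```

From the outside the room behaves like a competent Chinese speaker, yet nothing in the code has access to semantics - which is exactly the situation of the person inside the room.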

the point: this merely simulates understanding; the person doesn’t actually understand the meaning of any of the symbols

counter-arguments (many people find them convincing, many do not):

  1. the systems reply:

    1. whether the person in the room understands Chinese is the wrong question - the claim is that the system as a whole understands, not that something inside the person does

    2. the system is the room with everything in it - person, rule book, windows, etc

    3. Searle’s response: even if the person memorized the rule book (internalizing the whole system), they would still not understand Chinese

  2. the robot reply:

    1. the reason the symbols have no meaning to the system is that they have not acquired meaning through interaction with the world

    2. if we imagine the system as part of a robot with sensors and limbs to interact with the world, it would be able to attach meaning to the symbols

  3. the brain simulator reply:

    1. what if the room had not just two windows and a rule book, but a complex set of levers and connected pipes that corresponded to the neurons in the brain of a Chinese speaker?

    2. the rule book would specify not which pieces of paper to pass out, but which levers to pull; pulling them would set in motion an interaction in the complex set of pipes that exactly emulated the sequence of neural responses in the Chinese speaker’s brain

    3. Searle’s response: this only simulates the formal/syntactic interaction - the person pulling the levers doesn’t understand Chinese, and neither do the pipes
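The brain-simulator variant can likewise be sketched in code: the "pipes and levers" emulate neurons as weighted sums and thresholds. This is an illustrative toy (the weights and wiring are made up, not a model of any real brain); its point is that even neuron-level emulation is still purely formal symbol manipulation, which is the force of Searle's response.

```python
# Toy sketch of the brain-simulator reply: each "pipe" is a simulated
# neuron computing a weighted sum and firing past a threshold. Weights
# and wiring are made up; the point is that every step is arithmetic on
# symbols' forms - no step consults what the stimulus means.
def neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of inputs exceeds the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) > threshold else 0

def tiny_brain(stimulus):
    # two "hidden" neurons feeding one "output" neuron
    h1 = neuron(stimulus, [0.5, -0.2], 0.1)
    h2 = neuron(stimulus, [0.3, 0.8], 0.4)
    return neuron([h1, h2], [1.0, 1.0], 0.5)

print(tiny_brain([1, 0]))  # a firing pattern, produced by pure syntax
```

Whether the rules route paper or route water through pipes, the person operating the levers manipulates forms according to rules, exactly as in the original room.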

point of agreement between Searle and Newell+Simon:

  • both are talking about the internal workings of a cognitive system

  • the question they are debating: if all the system does internally is manipulate symbols according to algorithmic rules, what kinds of cognitive properties can it have (understanding, intelligence, etc.)?

  • others suggest that the important criterion for intelligence does not lie inside the system; intelligence can also be viewed in terms of the system’s outward behavior

Turing test: an outward test of intelligence:

  • computer (A) and human (B) interact with a human evaluator (C) using natural language

  • evaluator cannot see A and B; has to guess which is human based on blind interaction

  • if the human evaluator C can’t reliably tell which of the two is human, the computer passes the Turing test, which Turing considers a sign that the computer is intelligent

  • the Turing test is only about the system’s outward behavior - whether the system is a PSS or something else is irrelevant; more generally, what’s going on inside is irrelevant

  • a CAPTCHA is an everyday Turing test

Searle’s argument hinges on understanding (the issue is semantics: no grasp of meaning) - a PSS merely simulates thought and is not truly intelligent

the debate between Searle and N&S starts with a question about the internal workings of a cognitive system, not its outward behavior: if all the system does internally is manipulate symbols according to algorithmic rules, what kinds of cognitive properties can it have?

N+S say it can be intelligent; Searle says it cannot