Mind and Machine - Flashcards (GOFAI & Classical Cognitive Science)

1 Background

  • AI as a field aims to make machines perform tasks that would require intelligence if done by humans; Marvin Minsky’s maxim frames AI as the project of creating machine intelligence.

  • Historical imagination: AI has roots in myths and fiction (e.g., Homer’s Iliad features Hephaestus’s unmanned tripods and android attendants with minds).

  • The central questions guiding the study: Could a machine think? Are we such thinking machines?

  • Early modern roots set the stage for AI debates: the move from myth to philosophy about machine-like cognition.

1.1 Early modern roots

  • 1.1.1 Descartes: the ghost in the machine

    • Descartes is famous for mind–body dualism: mental (immaterial) vs physical (material) substances; the mind is non-spatial and immaterial; the body operates mechanically.

    • He used the hydraulics-and-windows image (moving statues) as a metaphor for how stimulus–response could be mechanized, potentially blurring the line between animate and inanimate.

    • Quote-style summary from Descartes: animals act naturally and mechanically like a clock; this mechanical view might explain the actions of bodies and mere animals, but not human thought.

    • In Discourse on the Method, Descartes argues that language use and meaningful response require an immaterial mind; machines could utter words in response to input, but not produce meaningfully new arrangements of words.

    • Implications for AI: language use is a visible difference between humans and animals and becomes a dividing line for genuine cognition; later AI treats language as a central test of intelligence (e.g., the Turing Test).

  • Descartes’s view motivates later “symbol-processing” conceptions of mind, while acknowledging a gap between symbol manipulation and genuine thought.

  • The broader issue: the possibility of AI raises questions about meaning and how linguistic symbols get their meaning.

  • 1.1.2 Hobbes: mechanical cognition and the roots of computation

    • Leviathan (Hobbes, 1651) connects reason with reckoning; processing mental content is like marking and signifying thoughts via agreed-upon names. This leads to a computational metaphor: mental processes as symbol manipulation.

    • Hobbes’s account raises the problem of grounding meaning: if symbols derive meaning only from prior thoughts, then there’s circularity in grounding semantics.

    • The question of original meaning remains: how do thoughts acquire their meaning independent of other thoughts? Haugeland later labels this as the problem of original meaning.

  • 1.1.3 Pascal and Leibniz: mathematical machines

    • Pascal designed the Pascaline (1642) to perform arithmetic; the device could do work that would normally require mental effort, reducing memory load and errors caused by memory lapses. This fuels Pascal’s distinction between the esprit géométrique (geometric/mathematical mind) and the esprit de finesse (intuitive mind).

    • Leibniz’s Stepped Reckoner (1672–1694) used the Leibniz wheel (stepped drum) to perform all four arithmetic operations; he envisioned a universal characteristic—a universal language or calculus ratiocinator—that would express and evaluate all mathematical and scientific concepts mechanistically.

    • Leibniz framed the universal language and a calculator-based reasoning engine as a way to reduce all reasoning to calculation; this foreshadows later ideas about machine-based reasoning and the dream of a combinatorial calculus that can derive truths from signs.

    • Wiener’s Cybernetics (1948/1961) cites Leibniz’s calculus ratiocinator as germinal for the idea that machines can formalize and mechanize reasoning.

1.2 Turing and the ‘Dartmouth Conference’

  • 1950 turning point: Alan Turing’s “Computing Machinery and Intelligence” begins with the question “Can machines think?” and also addresses whether humans are thinking machines.

  • Turing defines machines that could carry out any operation a human computer can do; he reframes AI around programmable digital computers.

  • He helps seed computation theory: a computer can simulate any other computer given a description of that computer; this leads to the notion of a universal machine and formalizing computation.

  • Dartmouth Summer Research Project on Artificial Intelligence (1956): organized by John McCarthy; included many early AI figures (Minsky, Newell, Simon, McCulloch, Shannon, etc.); the project aimed to show that every aspect of learning or intelligence could in principle be described precisely so that a machine could simulate it.

  • The Dartmouth conference is often seen as the birth of AI as a discipline; it gave AI its name and helped cluster the field around a common goal, despite its initial modest outcomes.

  • The framing of AI as a field blends theoretical and practical aims: simulation, modeling, and eventual mechanization of cognitive tasks.

1.3 Varieties of AI

  • Building on Searle (1980) and Flanagan (1991), four understandings of AI:

    • 4D tasks (dangerous, dirty, dull, or difficult): machines excel at tasks humans would find intelligent but these do not necessarily reveal anything about mind or cognition.

    • Weak AI: AI as theoretical psychology; using computer programs to derive predictions, test theories, and force explicit formulations; advantages include explicitness, testability, and predictive power (e.g., Frame Problem in AI mirroring belief revision issues in psychology).

    • Strong AI: the claim that a properly programmed computer genuinely understands and has cognitive states; the computer isn’t just a tool but a mind in the sense that it has genuine mental states.

    • Suprapsychological AI: cognition-as-it-could-be; cognition implemented in different ways, potentially non-biological, or enhanced via augmentation; this links to debates about human–machine hybridity and future cognitive extension (Chapter 7; Singularity).

  • Important distinctions:

    • Weak AI is a method or research program (simulation / modeling) that may be empirically productive but is not about building a mind.

    • Strong AI is a categorical claim about actual cognition in machines.

    • Suprapsychological AI extends the discussion beyond the human-like mind to more radical possibilities.

  • Suprapsychological AI connects with broader debates about Artificial Life (Langton) and cognition-as-it-could-be, including human-machine hybrids.

1.4 Is AI an empirical or a priori enterprise?

  • The debate centers on whether AI is an empirical science (like biology) or an a priori enterprise (like mathematics).

  • Newell and Simon view AI as empirical in some respects: computers and programs are experiments; observing the machine yields answers about cognitive processes.

  • Kukla argues AI is a priori: a computer program is knowable a priori; running it is akin to a deductive proof; the relation between a program and its behavior is logical, not empirical.

  • Königsberg seven bridges: two ways to resolve the problem—empirically via experience (townspeople) or a priori via Euler’s graph-theoretic reasoning; this illustrates the asymmetry in how we classify empirical vs a priori problems.

  • The example of a BASIC program, printHelloWorld: its behavior can be determined a priori in principle (though in practice it is easier just to run it); this shows the tension between armchair reasoning and empirical testing.
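
    A Python analogue of the printHelloWorld example (a hypothetical stand-in, not the original BASIC listing) makes the point concrete: the program’s output can be deduced from its text alone, before it is ever run.

    ```python
    # Illustrative analogue of the printHelloWorld example (an assumed
    # reconstruction): the program's behavior is fixed by its logical
    # structure, so it can be worked out a priori, without running it.

    def greet(n: int) -> str:
        """Return 'Hello World' repeated n times, separated by spaces."""
        return " ".join(["Hello World"] * n)

    # Armchair reasoning: the list holds n copies of the string, joined by
    # single spaces -- so the output follows deductively from the code.
    print(greet(3))  # → "Hello World Hello World Hello World"
    ```

    That the output is knowable from the armchair, yet in practice most easily obtained by execution, is exactly the asymmetry the Königsberg example above illustrates.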

  • Conclusion: Kukla’s view captures the in-principle a priori status of AI, while Newell and Simon rightly emphasize the empirical processes involved in program design and testing; the proper stance depends on whether we focus on program–behavior relations or on how programs are constructed.

  • GPS and the role of human problem-solving:

    • GPS used a human protocol (think-aloud) to guide a computer in solving problems; this is an empirical procedure that studies human cognition to build the program.

    • The section concludes that a balanced view is warranted: AI is empirical when we study how programs are built to mimic human reasoning; AI is a priori when we analyze the relation between a program’s behavior and its formal structure.

1.5 AI and the mind–body problem

  • The mind–body problem asks what kind of thing a mind is; AI’s success would bear on this issue.

  • If weak AI succeeds and strong AI holds, this might challenge Cartesian dualism but cannot conclusively refute the possibility of non-physical substances (a Cartesian soul) being sufficient for mentality.

  • Chalmers (2010) suggests non-physical processes could be emulated or artificially created; thus even strong AI might still leave open philosophical possibilities about non-physical minds.

  • If strong AI succeeds in producing genuine cognitive function and phenomenal consciousness in machines, it would bolster physicalism: mentality can be wholly physical, and cognition can be realized in non-biological substrates.

  • Multiple realisability: a given mental state can be realized by different physical substrates (e.g., brains or computers); supports functionalist views where implementation details are secondary to functional organization.

  • The discussion sets the stage for later chapters on functionalism and cognitive extension.

2 Classical Cognitive Science and “Good Old Fashioned AI” (GOFAI)

  • The classical view holds that cognition is rule-governed symbol manipulation, bridging psychology, logic, linguistics, and philosophy.

  • GOFAI emerges from three roots and emphasizes a symbolic, representational approach to cognition.

  • The three roots (in order of specificity):

    • 2.1.1 Logic: cognition as logical reasoning; from Boole to logicism; formalizing reasoning as a calculus; formal syntax–semantics separation; the idea that “syntax” (forms) can be treated independently of “semantics” (meanings). The Laws of Thought (Boole) frame logic as the foundational technology for representing and manipulating mental content.

    • 2.1.2 Linguistics: Generative Grammar (Chomsky) and the idea that language reveals deep structural rules; generation of sentences via rules; the analogy that language rules are like command lines in a computer program; mentalese (Fodor): the proposed innate language of thought that underlies human cognition; child language learning as hypothesis testing with innate structure. The key takeaway: cognition is language-like in its symbol manipulation and rule-governed structure; generation of linguistic outputs corresponds to underlying cognitive states.

    • 2.1.3 Functionalism and the Representational Theory of Mind (RTM): mental states are defined by causal roles (inputs, outputs, and other mental states) rather than by their physical substrate; multiple realizability allows different physical systems to realize the same mental state; internal states serve representational roles (e.g., beliefs and desires as propositional attitudes). The Coke machine thought experiment illustrates how internal states functionally realize cognitive states through symbolic representations; leads to RTM’s promise of a computational, symbolic account of mind while preserving physicalism.
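
    The Coke-machine thought experiment can be sketched in code. This is my own minimal encoding of the standard example (a coke costs 10 cents; the machine accepts nickels and dimes), with each state defined purely by its causal role: what it outputs and which state it moves to, given an input.

    ```python
    # A minimal sketch of the Coke-machine thought experiment (an assumed
    # encoding of the standard example, not taken from the source). States
    # are individuated functionally: by input-output-transition profile.

    TRANSITIONS = {
        # (state, input) -> (next_state, outputs)
        ("State-1", "nickel"): ("State-2", []),                 # no credit yet
        ("State-1", "dime"):   ("State-1", ["coke"]),
        ("State-2", "nickel"): ("State-1", ["coke"]),           # 5c of credit
        ("State-2", "dime"):   ("State-1", ["coke", "nickel"]), # coke + change
    }

    def run(coins):
        """Feed a sequence of coins to the machine; return all it emits."""
        state, emitted = "State-1", []
        for coin in coins:
            state, outputs = TRANSITIONS[(state, coin)]
            emitted.extend(outputs)
        return emitted

    print(run(["nickel", "dime"]))  # → ['coke', 'nickel']
    ```

    Nothing in the table mentions what the states are made of; any substrate implementing these causal roles realizes the same machine, which is the multiple-realizability point.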

  • 2.2 Algorithms, Turing Machines, and Turing’s thesis

    • Four guiding questions:
      1) What is an algorithm? 2) What kind of device can behave algorithmically? 3) How much can such devices accomplish? 4) Why think of the mind as such a device?

    • Intuition and pedagogy: algorithms are like recipes, but with added definiteness and certainty; they are finite, definite, and effective procedures for transforming input into output. A useful informal criterion is that the steps are moronic: they require no insight or creativity; together with finiteness, definiteness, and effectiveness, this guarantees a solution.
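
    The three criteria can be seen in a classic case (my own illustrative example, not one from the source): Euclid’s algorithm for the greatest common divisor. Each step is “moronic” in the sense above.

    ```python
    # Euclid's algorithm as an illustration of the three criteria
    # (an assumed example): finiteness, definiteness, effectiveness.

    def gcd(a: int, b: int) -> int:
        """Greatest common divisor of two non-negative integers."""
        while b != 0:          # Finiteness: b strictly decreases, so we halt.
            a, b = b, a % b    # Definiteness: each step is fully specified.
        return a               # Effectiveness: the answer is guaranteed.

    print(gcd(48, 36))  # → 12
    ```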

    • De Morgan’s laws as a teaching example of mechanical procedures and formalization: negation-pushing rules that illustrate formal reasoning in a mechanical fashion.
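
    The mechanical character of negation-pushing can be made explicit in code. The encoding below is my own (expressions as nested tuples); it applies De Morgan’s laws and double-negation elimination as blind rewrite rules, with no “understanding” required.

    ```python
    # Negation-pushing via De Morgan's laws as a purely mechanical procedure
    # (an illustrative encoding; representation is my own). Expressions are
    # nested tuples: ("not", e), ("and", e1, e2), ("or", e1, e2), or a variable.

    def push_neg(expr):
        """Drive negations inward until they sit only on variables."""
        if isinstance(expr, str):
            return expr
        op = expr[0]
        if op == "not":
            inner = expr[1]
            if isinstance(inner, str):
                return ("not", inner)
            if inner[0] == "not":                 # double negation
                return push_neg(inner[1])
            if inner[0] == "and":                 # ¬(p ∧ q) ≡ ¬p ∨ ¬q
                return ("or", push_neg(("not", inner[1])),
                              push_neg(("not", inner[2])))
            if inner[0] == "or":                  # ¬(p ∨ q) ≡ ¬p ∧ ¬q
                return ("and", push_neg(("not", inner[1])),
                               push_neg(("not", inner[2])))
        return (op, push_neg(expr[1]), push_neg(expr[2]))

    print(push_neg(("not", ("and", "p", ("not", "q")))))
    # → ('or', ('not', 'p'), 'q')
    ```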

    • Turing machines (theoretical device): two components – a read/write head and an unbounded tape; states and transition rules; a machine-table defines behavior; the machine can perform counting, addition, string checks, and more with appropriate tables.
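
    A Turing machine of this kind is easy to simulate. The sketch below is a minimal simulator matching the description above (the machine table is my own: it appends a 1 to a unary numeral, i.e. adds one); the unbounded tape is approximated with a dictionary.

    ```python
    # A minimal Turing-machine simulator: a read/write head, a tape
    # (unbounded in principle; here a sparse dict), and a machine table
    # mapping (state, symbol) -> (write, move, next_state). The sample
    # ADD_ONE table (an assumed example) adds one to a unary numeral.

    def run_tm(table, tape_str, start, halt):
        tape = dict(enumerate(tape_str))   # blanks read as "_"
        head, state = 0, start
        while state != halt:
            symbol = tape.get(head, "_")
            write, move, state = table[(state, symbol)]
            tape[head] = write
            head += 1 if move == "R" else -1
        return "".join(tape[i] for i in sorted(tape)).strip("_")

    ADD_ONE = {
        ("scan", "1"): ("1", "R", "scan"),   # move right over the 1s
        ("scan", "_"): ("1", "R", "done"),   # write a 1 on the first blank
    }

    print(run_tm(ADD_ONE, "111", "scan", "done"))  # → "1111"
    ```

    The machine table, not the hardware, determines the behavior; swapping in a different table yields a different machine, which is the idea the universality result below generalizes.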

    • Key points about Turing machines:

    • Universality: there exists a universal Turing machine U that can simulate any machine M given M’s description on its tape; this is the basis for programmable computers.

    • Turing’s thesis: anything that is algorithmically calculable can be carried out by some Turing machine; anything a Turing machine can do can be captured by a description on a tape and a universal machine.

    • Implications: if cognitive processes are algorithmically computable, a large enough computer could simulate the mind; conversely, if computation captures cognition, then AI is possible in principle.

    • This leads to the Turing Argument (universally quantified modus ponens):

    1. If some process is algorithmically calculable, then it could be carried out by a Turing Machine.
    2. Cognitive processes are algorithmically calculable.
    3. Therefore, cognitive processes could be carried out by a Turing Machine.

    • Cautions: real-world cognition is time-bounded and resource-constrained; the pure TM model abstracts away timing and reliability concerns, which dynamical approaches later emphasize.

    • Gödel’s incompleteness theorems and related debates (Penrose) raise questions about whether cognition could be fully captured by formal systems; this motivates arguments against, or qualifiers to, GOFAI.

  • 2.3 GOFAI’s success stories

    • 2.3.1 Logic Theory Machine (LTM): early AI program that proved theorems from Principia Mathematica by applying inference rules; demonstrated that computers could perform tasks previously thought to require human intellect.

    • The LTM found proofs (sometimes shorter ones) for a number of theorems; Russell was reportedly delighted; the editors of the Journal of Symbolic Logic declined to publish the first machine-authored proof.

    • Limitations: combinatorial explosion in proof search; real progress required heuristics; the emphasis shifts from brute-force proving to heuristic-guided search.

    • Key takeaway: symbolic reasoning can be mechanized with explicit rules, and heuristics (rules of thumb) are essential for tractable problem solving; the form of reasoning (not just the content) matters.
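
    The core mechanism can be illustrated with a toy forward-chaining prover in the spirit of the LTM discussion (my own miniature, not Newell and Simon’s program): apply modus ponens exhaustively to a set of facts and conditionals until nothing new follows.

    ```python
    # A toy rule-based prover (an assumed miniature, not the actual LTM):
    # repeatedly apply modus ponens until the derived set is closed.

    def forward_chain(facts, rules):
        """facts: set of atoms; rules: list of (premise, conclusion) pairs."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in rules:
                if premise in derived and conclusion not in derived:
                    derived.add(conclusion)   # modus ponens: p, p→q ⊢ q
                    changed = True
        return derived

    facts = {"p"}
    rules = [("p", "q"), ("q", "r"), ("s", "t")]
    print(sorted(forward_chain(facts, rules)))  # → ['p', 'q', 'r']
    ```

    Even this toy version hints at the combinatorial problem: with many rules, blind closure computation explodes, which is why heuristic-guided search proved essential.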

    • 2.3.2 Chess: Deep Blue and the limits of AI as a proxy for human cognition

    • Chess was a natural test bed for GOFAI: planning, strategy, and computation. Early predictions suggested that exhaustive search could, in principle, solve chess, but the enormity of the search space makes brute force impractical (even millions of position evaluations per second barely dent it).

    • Deep Blue (IBM): a hardware-accelerated chess computer; used a mixture of brute-force search, large opening databases, and a sophisticated evaluation function developed with grandmasters; could evaluate up to ~200 million positions per second and store ~4000 opening variations plus 700k grandmaster games.

    • Kasparov vs. Deep Blue matches (1996 Kasparov won 4–2; 1997 Deep Blue won 3.5–2.5): the machine’s strengths emphasize speed and memory rather than human-like understanding; IBM explicitly emphasizes that Deep Blue is not AI in the strong sense; its intelligence is a byproduct of brute-force computation and well-tuned heuristics.

    • Implications: chess demonstrates the power of computation and heuristics; it reveals that some cognitive tasks can be dominated by non-human cognitive processes; it also highlights the distinction between human-centric cognition and machine computation.
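
    The search-plus-evaluation recipe behind chess programs can be sketched generically (this is a textbook minimax over an abstract game tree, not Deep Blue’s actual architecture): leaves carry evaluation-function scores, and internal nodes alternate between the maximizing and minimizing player.

    ```python
    # Generic minimax (an illustrative sketch, not Deep Blue's code):
    # leaves are evaluation-function scores; levels alternate players.

    def minimax(node, maximizing=True):
        """node: either a numeric leaf score or a list of child nodes."""
        if isinstance(node, (int, float)):     # leaf: evaluation value
            return node
        scores = [minimax(child, not maximizing) for child in node]
        return max(scores) if maximizing else min(scores)

    # A depth-2 tree: the opponent minimizes at the second level.
    tree = [[3, 12], [2, 4], [14, 1]]
    print(minimax(tree))  # → 3
    ```

    Speed (positions per second) and a well-tuned evaluation function do all the work here; nothing resembling human understanding of chess appears anywhere in the procedure.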

    • 2.3.3 ELIZA: language as a test case for GOFAI

    • ELIZA (Weizenbaum, 1966) mimicked Rogerian psychotherapist dialogue through simple pattern-matching and reflection techniques; it produced plausible-sounding conversation without genuine understanding.
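
    The technique is strikingly simple, which is the point. Below is a minimal sketch of ELIZA-style pattern matching and pronoun reflection (my own toy version of the general technique, not Weizenbaum’s script).

    ```python
    # A toy ELIZA-style responder (an assumed miniature, not the original):
    # match a template, reflect pronouns, and echo the fragment back.
    import re

    REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}

    def reflect(fragment):
        """Swap first-person words for second-person ones."""
        return " ".join(REFLECT.get(w, w) for w in fragment.lower().split())

    def respond(sentence):
        m = re.match(r"i need (.*)", sentence, re.IGNORECASE)
        if m:
            return f"Why do you need {reflect(m.group(1))}?"
        return "Please tell me more."         # generic Rogerian fallback

    print(respond("I need my notes"))  # → "Why do you need your notes?"
    ```

    No representation of meaning is involved anywhere; the conversational plausibility comes entirely from surface transformations.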

    • Reactions and ethics: ELIZA exposed people’s tendency to anthropomorphize machines; Weizenbaum warned against overestimating machine intelligence and questioned the ethics of using such programs for therapy; some suggested therapeutic usefulness, while others argued that it misleads patients and professionals about the nature of mind.

    • The broader lesson: linguistic capabilities in GOFAI can produce convincing interactions without genuine understanding; language that appears intelligent can be a parodic or superficial mimicry rather than true cognition.

  • 2.4 GOFAI and the mind–body problem

    • GOFAI’s functionalism aligns with a mechanistic, symbol-manipulation view of cognition, compatible with physicalism and multiple realizability; cognition can be realized in different substrates as long as the functional organization is preserved.

    • Two senses of functionalism (as discussed by Block):

    • Metaphysical functionalism: mental states are defined by their causal roles (stimuli, outputs, and other mental states).

    • Computation–representation functionalism: mental processes are computations over representations; RTM emphasizes that mental states are represented and manipulated within a language of thought; thus cognition can be mechanistic and embodied in a computer program.

    • The slogan “It ain’t the meat, it’s the motion” captures the idea that the functional organization matters more than the material substrate; GOFAI’s approach rests on this idea and supports the view that cognition can be simulated in silicon if the right functional architecture is present.

    • Ongoing tension: some theorists (dynamical systems theorists) emphasize the role of temporal dynamics and continuous time in cognition, challenging a purely syntactic/symbolic view; GOFAI and dynamical approaches may be compatible in some formulations but differ in emphasis on timing and neural embodiment.


Notes on key formulas and ideas (LaTeX formatting)

  • Modus ponens (logical rule):
    $p \rightarrow q$ and $p$ together imply $q$.

  • De Morgan’s laws (illustrating formal reasoning):

    $\neg \forall x \, P(x) \equiv \exists x \, \neg P(x)$,

    $\neg \exists x \, P(x) \equiv \forall x \, \neg P(x)$.

  • Turing machine universality (informal):
    There exists a universal TM $U$ such that for any machine $M$, $U(\mathrm{SD}(M))$ computes the same sequence as $M$.

  • Turing’s thesis (informal):
    For every algorithmically computable function $f$, there exists a TM $T$ computing $f$; conversely, any TM computes only a computable function.

  • Gödel-style note (the Penrose–Lucas challenge): self-reference and non-computability concerns about human cognition; discussed as potential challenges to pure GOFAI.

  • Representational Theory of Mind (RTM) example: beliefs and desires as propositional attitudes; internal states (e.g., State-1, State-2 in the coke machine) that stand in for real-world states and enable reasoning and planning.

Key takeaways for exam preparation

  • AI’s historical arc moves from mythic automated agents to Descartes’ dualism to Pascal/Leibniz’s calculation engines and Turing’s formalization of computation; these milestones establish the symbolic, rule-based view of cognition that dominates GOFAI.

  • Distinctions among AI varieties (weak vs strong vs suprapsychological) are essential for understanding both methodological approaches and philosophical commitments.

  • The mind–body relationship matters: AI’s success has implications for physicalism and multiple realizability; the debate continues with functionalism and dynamical theories.

  • GOFAI’s core roots in logic, linguistics, and functionalism provide a framework for understanding symbolic AI; its success stories (LTM, Deep Blue, ELIZA) illustrate both capabilities and limitations of symbol-based cognition.

  • Critical questions to anticipate: Can symbol manipulation capture genuine understanding? Can AI reproduce timing and temporal dynamics of cognition? Is cognition restricted to human-like symbol manipulation, or can it be realized in non-biological substrates with different architectures?