Mind Design III: Intentionality and Understanding

11 True Believers: The Intentional Strategy and Why It Works

  • Central question: Can machines have genuine thoughts about things, i.e., intentional content, or do they merely act as if they do? The chapter surveys how humans represent the world and introduces the debate about intentionality in AI.

  • Brentano’s legacy: Intentionality is the mark of the mental; mental states are about things (their aboutness).

  • Derivative vs original intentionality:

    • Derivative intentionality: signs, words, and graphs have meaning only relative to an interpreter (e.g., road signs, books); there is no intrinsic meaning in the sign itself.

    • Original intentionality: the world contains systems that represent things in a stronger sense, without needing an interpreter.

  • How AI should be judged: the essays explore whether artificial systems can possess original intentionality, and what constraints such a claim would face.

  • Field-wide vantage: the question is linked to deeper debates in philosophy of mind (Dennett, Searle, Boden, Egan, etc.) about whether intentionality can be grounded in causal structure, brain properties, or social practices.

11.1 The intentional strategy and how it works

  • The intentional strategy (or the intentional stance): predict another system’s behavior by treating it as a rational agent with beliefs and desires.

  • Stances (levels of abstraction) to predict behavior:

    • Physical stance: predict from the object’s physical constitution and the laws of physics (the Laplacean-omniscience example: complete microphysical knowledge would in principle permit prediction of any system).

    • Design stance: predict by knowing the object’s design and intended function (e.g., an alarm clock will ring when set); for a pump or other complex machine, one falls back to more detailed physical description when conditions demand (e.g., malfunction).

    • Intentional stance: treat the object as a rational agent; assign beliefs and desires, and predict action via rational planning.

  • How to populate beliefs and desires (rough rules):

    • Beliefs: attribute the truths relevant to the system’s interests and desires that its experience has exposed it to. True beliefs are the default; false beliefs are exceptions requiring special explanation (misperception, memory failure, deception, distorted cultural transmission).

    • Desires: attribute basic motivations (survival, absence of pain, food, comfort, reproduction, entertainment) as starting points; derive other desires from those (e.g., “X is good for Y”).

    • Language complicates attribution: stated propositions let agents acquire highly specific, explicit desires (and beliefs) that would otherwise be hard to attribute.

  • Rationality: start from a model of ideal rationality; revise downward when appropriate; acknowledge that real agents are not perfectly rational (knotty cases ignored here).

  • Empirical reach of the intentional stance: it works not only for humans but for many animals and artifacts (e.g., chess computers, thermostats, even plants in some analogies); a toy sketch of the thermostat case follows.
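
  • To make the stance concrete, here is a minimal illustrative sketch in Python (all names hypothetical, not from the text) of intentional-stance prediction applied to the thermostat case: attribute a belief and a desire, then predict what a rational agent would do. The same prediction could be derived from the physical or design stance, but the intentional description is far more compact and carries over unchanged to physically different thermostats.

    # Hypothetical sketch: predicting a thermostat from the intentional
    # stance. The attributed "belief" and "desire" are just labeled data;
    # nothing here models real thermostat hardware.
    def intentional_stance_predict(beliefs, desires):
        """Predict the act a rational agent would choose: whatever would,
        given its beliefs, bring about what it desires."""
        if beliefs["room_temp"] < desires["target_temp"]:
            return "turn heating on"    # it "wants" the room warmer
        if beliefs["room_temp"] > desires["target_temp"]:
            return "turn heating off"   # it "believes" the room is too warm
        return "do nothing"

    beliefs = {"room_temp": 17.0}    # "it believes the room is at 17 C"
    desires = {"target_temp": 20.0}  # "it wants the room at 20 C"
    print(intentional_stance_predict(beliefs, desires))  # -> turn heating on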

  • Predictive success and limits: the stance reduces the space of possible actions to a handful of high-probability moves in complex scenarios (e.g., chess, stock trading, political speech acts).

  • Nozick’s Martians thought experiment: superior intelligences able to predict our every move through physics alone would still be missing something; only by also adopting the intentional stance would they detect the real pattern in human behavior that is describable in terms of beliefs and desires.

  • Objective vs interpretation-relative aspects: whether to adopt the intentional stance is up to the observer, but how well it works is an objective matter; no single optimal interpretation yields perfect predictions for all cases.

  • Central claim: true believers just are intentional systems, i.e., systems whose behavior is reliably and voluminously predictable via the intentional stance; on this view, being such a system is all there is to having beliefs and desires.

  • Limits and edge cases:

    • Some artifacts (e.g., a lectern) can trivially be “predicted” from the intentional stance, but that does not make them genuine believers; when the stance yields no predictions beyond what simpler stances already provide, it loses its force for that artifact.

    • The approach does not imply an anything-goes, observer-relative relativism; objective patterns constrain which systems can successfully be treated as believers, so observers will largely converge on who counts.

  • Practical upshot: the intentional stance is a powerful, often indispensable tool for predicting behavior; it is not mere shorthand for a biological or physical description, but an explanatory framework with pragmatic value in its own right.

11.2 True believers as intentional systems

  • The central tension: What counts as an intentional system? Is it a spectrum where even a lectern could be seen as an intentional system under the stance, or must there be robust predictive power and meaningful beliefs/desires?

  • The ladder of systems: from thermostats to clams to humans; the key distinction is not merely that prediction is possible (a trivially satisfiable criterion) but that a rich, robust belief/desire pattern systematically drives the system’s behavior.

  • The role of predictive success: the intentional stance is an extraordinarily powerful predictive tool for humans and many other organisms; occasional predictive failures do not undermine its overall efficacy.

  • Objections addressed:

    • Systems reply: even if the person inside the room does not understand, perhaps the whole system (person plus rule book and room) does; internalizing all the elements is supposed to make the whole system understand. The counter is that predictive power remains the measure, and mere internalization does not by itself guarantee genuine understanding.

    • Robot reply: adding perception and motor capacity to a symbol-manipulating system does not automatically instantiate genuine intentionality or understanding; the key issue remains whether the system’s internal processes instantiate intentional states in a way that is functionally similar to human intentional states.

    • Brain-simulator reply: simulating the brain’s neuron firings could, in principle, produce genuine understanding if the simulation captures the causally relevant structure; the counter is that simulating causal structure without semantic content falls short.

    • Combination reply: even if a robot with a brain-shaped computer shows behavior indistinguishable from humans, mere surface-level similarity does not guarantee genuine intentionality; the system’s internal states must instantiate intentional attitudes.

  • The undeniability of intentional patterns: intelligent beings display robust patterns that are describable in terms of beliefs/desires; Martians observing humans would still detect intentionality as patterns in rational behavior, even if they could predict through physics alone.

  • The boundary problem: distinguishing true believers from mere belief-like behavior is a practical and philosophical challenge; but there is a contextual, objective pattern to the success of the intentional strategy that supports its use in predictions and generalizations.

  • The broader philosophical implication: a genuine believer is defined by being a system for which beliefs/desires can be predicted using the intentional stance; this implies a robust, non-degenerate semantic relation to the world, not just a superficial input-output mimicry.

11.3 Why does the intentional strategy work?

  • Two kinds of answers to why it works:

    • Easy answer (design/evolution): evolution has tuned humans to be rational (to believe what they ought to believe and want what they ought to want). This is true but uninformative about the internal machinery.

    • Hard answer (machinery): we don’t yet fully understand the internal hardware; the best current theories point toward a computational or language-of-thought-like mechanism, but this is still under debate.

  • Alternative explanations of “why”: psychological theories such as Skinnerian behaviorism explain beliefs/desires as shorthand for histories of reinforcement; modern views often seek a closer mapping between beliefs/desires and functional internal states.

  • The language-of-thought hypothesis: internal states mirror the structure of beliefs/desires, with a roughly isomorphic functional organization to their linguistic expressions; this is a major program in cognitive science, though not universally accepted.

  • Combinatorial explosion and the evolution of cognition: as systems grow in complexity, naive design strategies fail due to combinatorial explosion; brains appear to have solved this problem with a highly scalable, generative structure of representation (often argued to be language-like in its generativity); a toy illustration follows.
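
  • A toy Python sketch (hypothetical, not from the text) of the generativity claim: with two agents, two attitudes, and two primitive predicates, a single compositional rule already yields representations whose number grows exponentially with nesting depth, which is the kind of scalability a merely list-like scheme lacks.

    # Hypothetical sketch: a tiny "language of thought" grammar. A handful
    # of primitives plus one embedding rule yields exponentially many
    # distinct belief-like representations.
    from itertools import product

    agents     = ["Alice", "Bob"]
    attitudes  = ["believes", "desires"]
    predicates = ["it-rains", "the-door-is-open"]

    def sentences(depth):
        """All representations up to the given nesting depth."""
        if depth == 0:
            return list(predicates)
        inner = sentences(depth - 1)
        return [f"{a} {v} that ({s})"
                for a, v, s in product(agents, attitudes, inner)]

    for d in range(3):
        print(d, len(sentences(d)))  # 2, 8, 32, ...; e.g. "Alice believes
                                     # that (Bob desires that (it-rains))"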

  • Toward a unified theory of representation: the naïve slogan that “the mind is just a brain executing a program” faces the objection, pressed hardest by Searle in the next chapter, that the physical substrate matters deeply and that a purely formal program cannot by itself suffice for intentionality.

  • The debate about whether there is a single, inevitable “language of thought”: many researchers think there is; others caution that the best explanation is likely a family of representational schemes, perhaps nested (connectionist and symbolic components).

  • Final point: while the intentional strategy is remarkably successful, there remain crucial gaps between formal computational descriptions and the full, content-rich semantics of human thought; this is the central philosophical tension in the mind/brain/program triad.

12 Minds, Brains, and Programs

  • Searle begins by distinguishing several positions in AI and cognitive science, above all the distinction between weak AI and strong AI.

  • Weak AI: the computer is a powerful tool for formulating and testing hypotheses about the mind; it helps simulate hypotheses and provide formal descriptions, but does not itself possess genuine minds.

  • Strong AI: the properly programmed computer literally has a mind and mental states; these programs themselves are explanations of cognition, not merely tools to investigate it.

  • Focus on Schank and Abelson’s script-based story-understanding programs as a case study: the program answers questions about stories by applying stored representations of stereotypical situations (scripts); proponents claim the system understands the story and that the program explains human understanding. Searle argues for neither claim.

  • The Chinese room thought experiment (the basic setup): a person in a room follows a rule-book to manipulate Chinese symbols; from the outside, the outputs are indistinguishable from a native Chinese speaker, but the person inside does not understand Chinese.
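
  • A deliberately crude Python sketch (the symbol strings are placeholders, not from Searle’s text) of what the rule book amounts to: a purely formal mapping from input shapes to output shapes, with no step that consults meaning. This is the intuition the replies below try to dislodge.

    # Hypothetical sketch of the Chinese Room rule book: pure shape-matching.
    # The "SQUIGGLE"/"SQUOGGLE" strings stand in for Chinese characters.
    RULE_BOOK = {
        "SQUIGGLE SQUOGGLE": "SQUOGGLE SQUIGGLE",
        "SQUIGGLE SQUIGGLE": "SQUOGGLE SQUOGGLE",
    }

    def room(input_symbols):
        """Match the shape of the input, emit the prescribed output;
        no step depends on what any symbol means."""
        return RULE_BOOK.get(input_symbols, "SQUOGGLE")  # default shape

    # Outside observers may find the answers competent; inside, only
    # uninterpreted symbol manipulation has occurred.
    print(room("SQUIGGLE SQUOGGLE"))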

  • Core claims examined:

    • (a) That the machine literally understands the story: Searle denies this; at best the machine appears to understand.

    • (b) That the program explains human understanding: Searle denies this too, because the program’s operations do not generate genuine intentional states.

  • The four main replies to the Chinese Room:

    • The Systems Reply (Berkeley): the whole system (room + rules) understands, even if the individual inside does not.

    • The Robot Reply (Yale): if the computer is embedded in a robot with perception and action, it could understand Chinese because it interacts causally with the world.

    • The Brain-Simulator Reply (Berkeley/MIT): if a computer simulates the brain’s neuron firings and internal processes, it would understand; the brain’s causal powers are what matter.

    • The Combination Reply (Berkeley/Stanford): if you combine the robot with a brain-like computer inside a robot cavity and the behavior is indistinguishable from human behavior, we would attribute intentionality to the system.

  • Searle’s responses to these replies center on two points:

    • The semantics of the system are not contained in mere formal symbol manipulation; adding perception and motor components does not automatically generate genuine understanding.

    • The brain’s causal powers matter; a computer cannot by itself instantiate the causal powers needed for understanding; minds are biological phenomena.

  • The Other-Minds Reply: the objection that we only know others have minds by their behavior; if computers can pass the same behavioral tests, should we attribute mentality to them? The response clarifies that the issue is not about behavior alone but about the presence of genuine intentional states.

  • The Many-Mansions Reply: perhaps AI will eventually be extended to devices, of whatever kind, that do have the requisite causal powers; Searle replies that this trivializes strong AI, whose defining thesis is that formal symbol processing by itself suffices for mentality.

  • The final tally: Searle argues that strong AI—computers with minds—fails because formal programs do not by themselves instantiate intentional states; the mind is a biological phenomenon and depends on brain biochemistry; strong AI cannot deliver genuine intentionality just by running the right programs.

13 Escaping from the Chinese Room

  • Margaret Boden’s rebuttal to Searle: computational theories can be meaningful and explanatory about semantic content without accepting Searle’s insistence that mind is an essentially biological phenomenon.

  • Boden challenges two main claims from Searle:

    • That computational theories are essentially formal and cannot explain semantic content.

    • That brains have causal powers that computers lack, making mind a biological phenomenon beyond computational replication.

  • Boden’s arguments in brief:

    • The claim that “meaning cannot be explained computationally” rests on a mistaken view of computation as merely syntactic; many computational theories give content and semantics genuine explanatory roles without claiming that computation exhausts content.

    • The brain is not the only conceivable substrate for mind; the conceptual questions concern how semantic content is grounded in representations, causes, and causal relations, and computation can contribute to these explanations without being the whole story.

    • The “robot reply” and other attempts to close the gap between syntax and semantics show that even with perception, action, and embodied interaction, mere symbol manipulation remains insufficient unless there is a robust grounding of content.

  • Boden emphasizes that computational theories can be compatible with semantic content and explanatory power; the key is to understand how computational processes connect to real-world content through intentional interpretation and grounding in environment.

  • The broader implication: the mind’s content and intentionality can, in Boden’s view, be approached with computational tools rather than being ruled out as mere syntactic manipulation; strong AI is not automatically refuted by the Chinese Room thought experiment.

14 Computation and Content

  • Frances Egan surveys the landscape of computational theories of mind, especially the debates around whether computational theories imply intentionality and the role of content in computational explanations.

  • Core topics:

    • Computationalism vs. formality: computational theories describe cognitive processes as information processing and symbol manipulation, typically interpreted in light of a formality condition that restricts how representations influence computation.

    • The formality condition: computational processes have access only to the formal (nonsemantic) properties of the representations they manipulate; this raises the question of how semantic content can play a causal role in cognition if the computation itself is purely formal.

    • The formality problem and the role of content: content does not have to be constitutive of computation; instead, content serves an explanatory function by connecting abstract computational descriptions to the environment and to real cognitive capacities.

  • Marr’s theory of vision as a case study:

    • Marr’s hierarchy distinguishes three levels of description: the computational level (what function is computed), the algorithmic level (how it is computed), and the implementation level (how it is physically realized).

    • The top level provides a mathematical description of the function; the algorithmic and implementation levels are not necessarily intentional (semantics can be extrinsic to the formal computation).

    • The top level is abstract and environment-insensitive; the environment-specific interpretation happens through the intentional reading that connects the mechanism to environmental content.
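
  • A toy Python illustration (using list summation, emphatically not Marr’s actual vision algorithms) of how the three levels come apart: the computational level fixes what function is computed, distinct algorithms realize the same function, and physical implementation is a further question the code leaves open.

    # Computational level: WHAT is computed -- f(xs) = the sum of xs,
    # a purely mathematical specification.

    # Algorithmic level: HOW -- two different algorithms compute the
    # very same function.
    def sum_left_to_right(xs):
        total = 0
        for x in xs:              # sequential accumulation
            total += x
        return total

    def sum_pairwise(xs):
        if len(xs) <= 1:          # tree-shaped, divide-and-conquer summation
            return xs[0] if xs else 0
        mid = len(xs) // 2
        return sum_pairwise(xs[:mid]) + sum_pairwise(xs[mid:])

    # Implementation level: whether either algorithm runs on silicon or
    # neurons is a further question not settled here.
    assert sum_left_to_right([1, 2, 3, 4]) == sum_pairwise([1, 2, 3, 4])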

  • The Explanatory Role of Content:

    • Content ascriptions connect the formal computation to the environment, explaining how the computation supports cognitive capacities in real-world contexts.

    • An intentional interpretation is an expository bridge, not an intrinsic part of the computational mechanism.

    • Broad vs narrow content: wide, environment-dependent content (broad content) often plays the explanatory role in perception and cognition; narrow content (intrinsic to the organism) is a debated notion in the literature.

  • The nature of computational explanation:

    • Computational explanations model cognitive capacities by placing them under a general, environment-independent mathematical function and then explaining how environment-specific content grounds cognition.

    • The same computational mechanism can be embedded in different environments with different intentional interpretations; this supports the idea that content is not essential to the computational description but is essential for explaining cognition in real-world terms.

  • Content ascription and interpretation functions:

    • Interpretations map internal states to content; multiple interpretations can be plausible, but only those that do explanatory work are viable.

    • A unique interpretation is not required; the existence of unintended alternative interpretations does not undermine the explanatory role of the intended one.
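
  • A minimal Python sketch (hypothetical gate and mappings, not from Egan’s text) of the point about interpretation functions: one and the same formal mechanism admits two consistent content ascriptions, and nothing in the mechanism itself picks between them; explanatory usefulness does.

    # One formal mechanism, two consistent interpretation functions.
    GATE = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}  # raw bit behavior

    interp_A = {0: "False", 1: "True"}   # read the device as logical AND
    interp_B = {0: "True", 1: "False"}   # inverted reading: now it is OR

    def interpret(mapping, a, b):
        """Describe one transition of the gate under a given interpretation."""
        out = GATE[(a, b)]
        return f"{mapping[a]}, {mapping[b]} -> {mapping[out]}"

    # The very same transitions compute AND under interp_A and OR under
    # interp_B (by De Morgan: not(a and b) == (not a) or (not b)).
    print(interpret(interp_A, 1, 1))  # True, True -> True     (AND)
    print(interpret(interp_B, 1, 1))  # False, False -> False  (OR)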

  • Broad contents in cognitive psychology:

    • Marr’s approach allows broad, environment-sensitive content that connects representations to distal properties (e.g., depth in the scene) rather than merely proximal ones (e.g., features of the retinal image).

  • The relationship between computational theory and naturalistic psychology:

    • A computational theory need not be non-naturalistic; it can align with naturalistic explanations by grounding content in environment through an intentional interpretation.

    • The content-explanatory role serves as a bridge between abstract computation and the kinds of cognitive tasks (e.g., depth perception) that require referential content.

  • Scope and limits:

    • Connectionist models present similar challenges for content, but Egan argues that the explanatory role of content can extend to connectionist architectures as well as classical, rule-governed systems.

    • The account focuses on the role of content in modular, informationally encapsulated cognitive processes; it remains an open question how well the approach generalizes to propositional attitudes (beliefs, desires) outside modular domains.

  • Final takeaway: a robust, explanatory role for content in computational theories can be maintained without requiring computationalism to make content constitutive of computation; content helps connect the formal description to real-world cognition, and Marr’s framework provides a concrete model for how this bridging can occur.

NOTE: Throughout these notes, I have organized the material as follows: the Mind Design III volume discusses the nature of intentionality and understanding, the role of the intentional stance in predicting behavior, the limits of strong AI (whether symbol manipulation suffices for understanding), and the treatment of content in computational theories of mind. The debates include Dennett’s intentional strategy, Searle’s Chinese Room, Boden’s escape from the room, and Egan’s analysis of content and computation in Marr-like cognitive architectures. The notes emphasize major concepts (intentionality, intentional stance, derivative vs original intentionality, the brain as substrate, substrate-independence arguments, and the role of content in computation) as well as key examples (Twin Earth, Nozick’s Martians, the robot reply, the brain-simulation reply, etc.), and aim to provide a comprehensive, ready-to-study synthesis of the material.