Challenge 1

Dualism The view that the mind and body are separate substances: the body obeys physical laws and reflexes, while the mind enables free will and conscious thought.
Dualism – Time period Late 1500s–1600s
Dualism – Notable Figure René Descartes
Dualism – Example Reflexive pinprick vs. intentional act of writing a poem.
Dualism – Related Concepts Contrasts with Empiricism; reframed by Cognitivism and Predictive Mind perspectives.

Empiricism The idea that the mind starts as a blank slate (tabula rasa) and that knowledge is built from experience and association.
Empiricism – Time period Late 1600s–1700s
Empiricism – Notable Figure John Locke
Empiricism – Related Concepts Precursor to Behaviorism (learning through conditioning); challenged by Cognitive Science and Chomsky on innate structure.

Behaviorism Emphasizes observable behavior and conditioning, rejecting introspection.
Behaviorism – Time period Early 1900s
Behaviorism – Notable Figures Pavlov (classical conditioning), Skinner (operant conditioning), Watson (conditioning of emotions).
Behaviorism – Example The “Little Albert” experiment showed conditioned fear responses.
Behaviorism – Related Concepts Preceded the Cognitive Revolution and Constructivism.

Kurt Lewin’s Field Theory Behavior is a function of the person and environment (B = f(P, E)); behavior arises from dynamic forces within a person’s “life space.”
Kurt Lewin’s Field Theory – Time period 1940s
Kurt Lewin’s Field Theory – Notable Figure Kurt Lewin
Kurt Lewin’s Field Theory – Related Concepts Foundation for Attribution Theory and Representations (R).

Cognitivism Reintroduced mental structure (schemas, attributions, holistic impressions) as essential to understanding behavior.
Cognitivism – Time period 1930s–1960s
Cognitivism – Notable Figures Bartlett (schemas), Asch (Gestalt impression formation), Heider (attributions).
Cognitivism – Related Concepts Leads into Cognitive Psychology, Attribution Theory, and Mentalizing.

Cognitive Psychology Defines psychology as the study of how input is transformed, stored, and used, even in absence of stimuli.
Cognitive Psychology – Time period 1960s
Cognitive Psychology – Notable Figure Ulric Neisser
Cognitive Psychology – Key Idea Cognition as information processing.
Cognitive Psychology – Related Concepts Bridges to Cognitive Science and Marr’s Levels.

Cognitive Science An interdisciplinary field treating the mind as an information-processing system, integrating psychology, AI, linguistics, and philosophy.
Cognitive Science – Time period 1960s–1970s
Cognitive Science – Notable Figures Turing, Simon, Chomsky
Cognitive Science – Related Concepts Precursor to Computational Cognitive Science and Intentional Stance.

Constructivism Perception and memory are active constructions from prior knowledge and input.
Constructivism – Time period 1960s–1970s
Constructivism – Notable Figure Jerome Bruner
Constructivism – Example “The dress” illusion illustrates perceptual priors.
Constructivism – Related Concepts Supports Predictive Mind; informs Mentalizing.

Computational Cognitive Science Explains cognition in terms of representations and algorithms; introduced Marr’s Levels of Analysis.
Computational Cognitive Science – Time period 1970s–1980s
Computational Cognitive Science – Notable Figures David Marr, Allen Newell, Herbert Simon
Computational Cognitive Science – Related Concepts Foundation for Predictive Models and Generative Models.

Predictive Mind The brain generates predictions about the world (including other minds) and updates via prediction error.
Predictive Mind – Time period 1990s–present
Predictive Mind – Notable Figures Andy Clark, Jakob Hohwy
Predictive Mind – Related Concepts Unifies Mentalizing, Generative Models, and Cognitive Neuroscience.

Dualism (Descartes) Mind vs body distinction; body = reflexive & lawful; mind = unobservable, willful, uniquely human. Sets up later debates about what counts as ‘mind’.
Empiricism (Locke) Mind as tabula rasa; associations learned from experience → early algorithmic flavor (associative “laws”).
Behaviorism (Pavlov, Watson, Skinner) Focus on observable behavior and conditioning; mind as black box; legacy in advertising and “psychological engineering.”
Cognitive Revolution Shift from behaviorism’s what to cognitive science’s how (representations, algorithms); how we infer minds.
Predictivism Representations are predictive in nature; they help us figure out what’s going to happen next.

A Walk Through History Dualism → Empiricism → Behaviorism → Cognitivism → Cognitive Science → Constructivism → Computational Cognitive Science → Predictive Mind.
Dualism (expanded) Mind/soul distinct from body; body obeys natural laws and supports reflexes; complex conscious behavior and free will treated as uniquely human and unobservable; sets up split between visible behavior vs invisible mind.
Empiricism (expanded) Blank slate (tabula rasa); knowledge via sensory experience; laws of association (time, space, similarity) form layered associations—historical seeds of representational/learning views and early AI parallels.
Behaviorism (expanded) Pavlov’s classical conditioning; Skinner’s operant conditioning; Watson’s emotional conditioning (e.g., Little Albert). Mind treated as a black box. Advertising as “psychological engineering.” Important but incomplete model of mind.
Cognitivism / Cognitive Psychology We don’t just respond to stimuli; we hold schemas, attributions, and impressions. The mind processes information using representations; constructivist view: we construct reality through representations.
Kurt Lewin’s Field Theory Behavior (B) = f(Person, Environment); behavior results from interaction between individual and context; foundation for attribution theory.
Computational Cognitive Science → Predictive Mind Cognition = computation on representations; representing/computing helps predict the future. Predictive mind compares top-down predictions with sensory input; learning shapes representations that support planning, imagination, and simulation of other minds.
Marr’s Three Levels 1. Computational (what & why) → problem being solved. 2. Algorithmic (how) → representations and procedures. 3. Implementational (hardware) → neural machinery. Explains how symbolic and biological levels coexist.
Foundations of Mind Perception The field studies how we infer invisible mental states from visible cues—perception and behavior → inference of beliefs, desires, goals. Sets up the study of mentalizing and animacy.

Bounded Rationality (Simon) Humans aim for “good enough” decisions under constraints (satisficing); we are cognitive misers.
Physical Symbol System Hypothesis (Newell & Simon) Symbolic information processing can yield intelligence; bridges symbolic AI and cognitive models.
Predictive Mind (overview) Brains generate predictions and compare them to inputs; representations support action, imagination, memory, and reading other minds.

Heider & Simmel (1944) Animated shapes trigger spontaneous mentalistic narratives—proof that people automatically attribute minds beyond raw motion.
Intentional Stance Strategy of predicting behavior by assuming beliefs, desires, and rationality; adopted automatically and refined by evidence.
Dimensions of Mind Perception (Gray et al., 2007) Agency = capacity to act/plan; Experience = capacity to feel. Different beings (robots, babies, patients) are mapped differently along these dimensions.
Predictive Mind (social cognition) Representations predict transitions among feelings and actions—basis for understanding others’ emotions and intentions.

Animacy Internal “life detector” schema answering “Is it alive? Can it act? Can it feel?” It’s invisible and inferred from visible motion cues.
Perception–Attribution Link We connect what we see to why it’s happening (“is there a mind behind this?”), tying perception directly to attribution.
Core Cues for Animacy 1. Biological Motion – movement implying self-propulsion. 2. Rationality/Efficiency – goal-directed pathing. 3. Contingency – interactive, responsive motion between objects.
Cross-Modal Animacy (Kiki/Bouba) Sounds, shapes, and movements share expressive structure (e.g., sharp sounds → spiky forms; soft sounds → round forms).
Bo Sievers et al. Study Participants created melodies/animations for target emotions; fMRI showed shared cross-modal structure for perceiving “aliveness.”
Third Visual Pathway Hypothesis Brain pathway (pSTS-centered) specialized for social perception and detecting animacy beyond “what” and “where” visual streams.
Animacy to Theory of Mind Once an entity is perceived as animate, we begin mentalizing—attributing beliefs and goals to predict behavior.

Key Ideas

  • Three evidence streams for animacy:
    (a) Biological motion, (b) Rationality/Efficiency (goal‑directed, shortest‑path, purposive movement), (c) Contingency/Interaction (interdependence among objects; e.g., chasing).

  • Cross‑modal mappings: Kiki–Bouba (shape–sound mapping) shows consistent cross‑modal correspondences; modern versions extend to emotion & motion in music and animation; the brain extracts a common dynamic structure.

  • A third perceptual pathway for social perception: Beyond classic what/where, a system specialized for “Is it animate?” and social cues (pSTS/STS‑adjacent systems; conceptually framed in class).

  • When animacy goes wrong: Uncanny Valley = negative affect to near‑human but imperfect agents; framed as life‑detector conflict.

  • When mentalizing goes wrong: Dehumanization = denying mental states to agents who deserve them. Two forms covered: Animalistic (denial of uniquely human traits) and Mechanistic (denial of human nature).

  • Theory‑of‑Mind (ToM) – Premack & Woodruff (1978): To have a ToM is to impute mental states to self and others; it’s a theory because states are unobservable and the system is used to predict behavior.

  • Sets up ToM strategies: Implicit Theories, Causal Inference, Simulation (Lectures 5–6).

Lecture Notes

  • Recap: Mentalizing & Animacy (tying the schema to perception)

    • Re‑emphasis: everything below the line (slides) is perceptual evidence; everything above is invisible; animacy is the internal, computed signal we use to bridge the two.

    • Three evidence classes (biological motion, rational/efficient goal‑directedness, contingency) are not mutually exclusive and often co‑occur, but can be isolated experimentally (e.g., self‑propulsion without contact, chasing displays).

  • When animacy goes “wrong”: The Uncanny Valley (amplified by motion)

    • As a stimulus becomes almost human (face/body/voice), small mismatches in the social signal (shape, timing, contingency) evoke a negative affective dip; motion exacerbates the effect. The class connected this to current interactions with chatbots: something feels missing.

  • When mentalizing goes “wrong”: Dehumanization (Haslam)

    • Denying deserved mental states along two denial axes:

      • Animalistic dehumanization: stripping what separates humans from other animals (typically denials of rationality, self‑control, refinement, etc.).

      • Mechanistic dehumanization: stripping experience/agency, treating others as objects/machines.

    • Mapped back to earlier Agency/Experience dimensions; discussion emphasized that while mentalizing is automatic, stories can be updated (we can intervene on our attributions).

  • Transition: From “Does it have a mind?” to “What state is that mind in?” → Theory of Mind (ToM)

    • Definition used in class (Premack & Woodruff framing): capacity to represent another agent’s beliefs, desires, intentions to explain and predict behavior; i.e., how we generate explanations for the mental states we attribute.

    • Non‑human primates & infants (preview via video): gaze‑based false‑belief logic (observer looks where an agent wrongly believes an object to be). Apes (and later preverbal infants) show looking patterns consistent with attributing false beliefs, hinting ToM is not human‑exclusive and pre‑verbal reasoning is measurable with VOE (Violation‑of‑Expectation) methods.

  • Three broad ToM “algorithms” (introduced)

    • Implicit Theories (fast rules/schemas we carry).

    • Causal Inference (reason backward from observation to hidden causes).

    • Simulation (use one’s own mind/body to model another).

Highlights

  • Attribution toolkit: Heider’s person–situation split; Correspondent Inference (choice/uniqueness) and Kelley’s Covariation (CCD triad) operationalize causal inference about people.

  • Implicit theories: fast, learned rules for social prediction (slides 39–45); maps to the Predictive Mind framing.

  • Dehumanization family: Animalistic (deny UH) vs Mechanistic (deny HN) routes help explain uncanny responses and moral exclusion patterns. (Connect to Week 02 readings.)

  • Mechanistic link to FAE: Spontaneous mentalizing (DMN) biases dispositional explanations—students should connect this to classic FAE.


Fri 10‑10: ToM II – Causal Inference

Big Questions. What algorithms/strategies do we actually use to infer minds? When do our fast rules mislead us? How is social knowledge organized for prediction? What developmental capacities scaffold ToM?

Key Ideas

  • Strategy I: Implicit Theories (fast rules & schemas)

    • Heider (Naive Psychology): People act like scientists, generating explanations that distinguish reasons (internal) from causes (situations).

    • Jones & Davis (Correspondent Inference): We attribute intentionality when behavior appears freely chosen and goal‑directed.

    • Kelley (Covariation Model): We weigh Consistency (across contexts), Distinctiveness (across targets), and Consensus (across observers) to infer causes (e.g., why did Eshin laugh at a Kevin Hart show?).

    • Fundamental Attribution Error (FAE): Tendency to overestimate personal/dispositional causes and underestimate situational ones when explaining others’ behavior. (Covered with class scenarios & brain‑imaging tie‑in.)

    • Predictive Implicit Knowledge (transition structure): we’re good at predicting transitions between mental states and actions; social knowledge is organized for prediction. (Tamir et al., 2021; Tamir & Thornton, 2023, presented in lecture.)

    • Mental‑state space (3 axes): Valence, Social Impact, Rationality; Action space (ACT‑FAST) (e.g., Abstractness, Creation, Tradition, Food, Animacy, Spiritualism). People predict “what comes next” by moving through these learned geometries.

  • Strategy II: Causal Inference (developmental/evolutionary building blocks)

    • Core systems → concepts: Before language, infants show intuitive physics and object knowledge (e.g., object permanence).

    • Violation‑of‑Expectation (VOE) paradigm: If infants/agents look longer at impossible events, it reveals expectations (proto‑beliefs).

    • Pretense & Decoupling (Leslie, 1987): Ability to represent the world differently from what is perceived—foundational for perspective‑taking and later false‑belief understanding.

  • Bridges to Lecture 6 (Simulation).

Lecture Notes

  • Implicit Theories of Mind (historical thread and the fast‑and‑frugal strategy)

    • Heider’s “Naive Psychology”: people act like scientists separating reasons (internal motives) from causes (situations).

    • Jones & Davis (Correspondent Inference): we infer intentions especially when behavior seems freely chosen (could they have done otherwise?). Free action cues → stronger mentalizing about what they were trying to achieve.

    • Kelley (Covariation Model):

      • We implicitly track Consistency (across contexts), Distinctiveness (across targets), Consensus (across actors).

      • Example used repeatedly: “Why did Eshin laugh at the Kevin Hart show?”

        • If he laughs in general (consistency), only at Kevin Hart (distinctiveness), and others laugh too (consensus), we infer he likes Kevin Hart.
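The covariation logic above can be written out as a toy rule table. This is only an illustrative sketch under simplifying assumptions (each cue reduced to a high/low boolean; the labels and the `kelley_attribution` helper are not from the slides):

```python
def kelley_attribution(consistency: bool, distinctiveness: bool, consensus: bool) -> str:
    """Toy sketch of Kelley's covariation heuristic (booleans = high vs. low)."""
    if not consistency:
        # Behavior varies across contexts -> attribute to transient circumstances.
        return "circumstance"
    if distinctiveness and consensus:
        # He laughs only at Kevin Hart, and everyone else laughs too -> it's the show.
        return "stimulus"
    if not distinctiveness and not consensus:
        # He laughs at everything while others don't -> it's the person.
        return "person"
    # Mixed patterns are ambiguous; more evidence is needed.
    return "ambiguous"

# Eshin at the Kevin Hart show: high consistency, distinctiveness, and consensus.
print(kelley_attribution(True, True, True))  # -> "stimulus"
```

The point of the sketch is the triad's joint pattern: no single cue decides the attribution; only the combination does.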

    • Fundamental Attribution Error (FAE) (defined & reframed):

      • The tendency to overestimate personal causes and underestimate situational ones when explaining others’ behavior.

      • Reframing from lecture: not merely an “error” — could be a by‑product of our spontaneous tendency to see minds. Neuroimaging shows stronger engagement of the same mentalizing‑related regions in participants who favored dispositional explanations for ambiguous vignettes (the same stories you answered in class). We can still update with more evidence.

  • Our social knowledge is organized for prediction (bridging to modern view)

    • Daily life has temporal structure (e.g., American Time Use–style transitions: sleep → work → run…). We carry intuitions about what comes next, both for actions and feelings.

    • Key proposal from recent work (presented in slides):

      • Mental‑state space (3 axes): Valence (±), Social Impact (interpersonal force), Rationality (uniquely human/reflective).

      • Action space (ACT‑FAST taxonomy; 6 axes): Abstractness, Creation, Tradition, Food, Animacy, Spiritualism.

      • Minds as sequences: Brains can represent a person as the sequence of mental states we predict they’ll experience next; this is useful precisely because it helps us anticipate behavior.
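The "minds as sequences" idea can be sketched as movement through a learned transition structure. A minimal toy version, with a made-up three-state space and invented probabilities (the actual models use the multi-axis mental-state and ACT-FAST spaces above):

```python
# Toy transition model: P(next state | current state). All numbers are
# illustrative assumptions, not data from the lecture.
TRANSITIONS = {
    "calm":    {"calm": 0.6, "excited": 0.3, "tired": 0.1},
    "excited": {"calm": 0.2, "excited": 0.3, "tired": 0.5},
    "tired":   {"calm": 0.3, "excited": 0.1, "tired": 0.6},
}

def predict_next(current: str) -> str:
    """Most likely next mental state: argmax over the current row's probabilities."""
    row = TRANSITIONS[current]
    return max(row, key=row.get)

print(predict_next("excited"))  # -> "tired"
```

Representing a person as such a transition structure is useful precisely because it lets us anticipate what they will feel or do next.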

  • (Transition) Core Systems → Causal Inference

    • Lecture emphasizes that much pre‑linguistic understanding comes from core knowledge — notably intuitive physics. Violation‑of‑Expectation (VOE) paradigms show infants stare longer at impossible events (e.g., occlusion violations), evidencing expectations (proto‑beliefs) about the world.

    • Causal inference (formalized next): the general ability to reason backwards from observations to causes (“Why am I seeing this?” “What did I expect to see?”).

  • In‑class game (announced at end of 10‑10 slides)

    • “2/3 of the average” guessing contest (rules posted): submit one number; aim for 2/3 of the class average; no talking/calculators; candy prize next time. (Results revealed in 10‑13.)

Highlights

  • VOE paradigm: looking‑time surprises imply expectations/beliefs; foundation for preverbal ToM claims (slides 46–47).

  • Pretense/Decoupling (Leslie, 1987): representing “as if” enables considering beliefs that diverge from reality → stepping stone to false belief (slides ~52–56).

  • Causal inference → beliefs: “Why am I seeing this?”; reasoning back to unobservables underwrites belief attribution (slide 57 and context).

  • Recursive reasoning: capacity limited to ≈4–5 nested levels in practice (slide “Recursive Reasoning”).

  • Concept geometry for prediction: ACT‑FAST taxonomy (action space) + 3D Mind Model (e.g., Rationality, Social Impact, Valence)—used to predict next states/actions (slides 34–39).

Key Ideas

  • Causal Inference → Recursive Reasoning

  • Recursive/iterated reasoning: We think, they think, we think…

    • Most humans can’t go deeper than ~4–5 levels.

    • The “2/3 of the average” (beauty‑contest) game illustrated k‑level distributions

    • Nash equilibrium (0) is rarely chosen

    • We’re imperfect information processors; we have bounded rationality

  • Strategy III: Simulation: using your own thoughts/feelings/actions to infer others’ internal states (“How would I feel? What would I do?”); two types

    1. Mirroring (Embodied Simulation): Automatic vicarious motor & affective activations when observing others’ actions/states. Useful for online, cue‑rich contexts.

    2. Self‑Projection (Internal Simulation): Autobiographical/episodic construction of imagined perspectives; we can mentally travel to past/future/counterfactual situations and step into someone’s shoes. Linked to default network functions (memory, imagination, navigation).

  • Simulation supports mutual adaptation

    • Synchrony (coupling in time & state; e.g., tapping, mimicry)

    • Anticipatory coordination (coupling across distinct times; e.g., dancing).

    • Complementarity (different internal states; e.g., conversation, teamwork).

    • We simulate to predict and adapt.

Lecture Notes

  • Recap (Causal inference & development)

    • Intuitive physics and VOE reviewed: occlusion/cart experiments (Baillargeon et al.) show infants as young as ~3.5 months expect object permanence and look longer at impossible events; belief/expectation signals without language.

    • Causal inference facilitates decoupling: understanding cause→effect supports pretense (Leslie, 1987) — the capacity to represent the world differently from what it is (“pretending”) → cognitive decoupling from immediate perception. This, in turn, scaffolds representing others’ beliefs (e.g., false‑belief).

    • Formal slide definition: Causal inference = reasoning backwards from observations to hidden causes to predict future behavior; provides the basis for higher‑order ToM.

  • Recursive reasoning (orders of intentionality) — class contest results

    • Results from the 2/3 of the average game:

      • Class average = 46.4; two‑thirds of that (rounded) = 31; four winners named on slide. Distribution mapped onto intentionality levels:

        • k=0 (first‑order beliefs) ~25–30%

        • k=1 (second‑order) ~9%

        • k=2 (third‑order) ~7%

        • k=3 (fourth‑order) ~3%

      • Take‑home: humans show bounded rationality and finite recursion (practically ≤ 4–5 levels), consistent with Cognitive Hierarchy models (Nagel; Camerer et al.).

  • ToM Strategy III: Simulation (two mechanisms emphasized)

    • Mirroring / Embodied simulation

      • Observing actions/emotions automatically activates motor and affective representations (partial resonance) that guide fast inferences about another’s state. (Waytz & Mitchell framing on slides.)

    • Projection / Internal simulation

      • We use autobiographical memory and imagination to construct others’ perspectives—mentally travel to future/past, to counterfactuals, or into someone else’s shoes. The same neural system that supports remembering and navigation supports simulating other minds (Buckner & Carroll figure on slides).

  • Mutual adaptation via simulation → social connection

    • Synchrony (time & internal‑state coupling; e.g., tapping, mimicry).

    • Anticipatory coordination (coupling across distinct times; e.g., dancing).

    • Complementarity (coupling across distinct internal states; e.g., conversation, teamwork).

    • Slogan from slides: Simulating to predict. Predicting to adapt. Adapting to connect.

  • Grand recap via Marr’s levels (applied to the whole first unit)

    • Computational (what/why): Detect other minds to know how to act in a social world.

    • Algorithmic (how/what’s represented): Look for animacy; attribute mental states using implicit knowledge, causal reasoning, and simulation; minds as action/mental‑state transitions (predictive organization).

    • Implementational (constraints): Cross‑modal animacy cues; common system for remembering/imagining/navigating/simulating; bounded rationality (finite recursion).

Highlights

  • Two mechanisms: Mirroring/embodied simulation (fast resonance) vs Projection/self‑projection (offline, imagination‑based) (slides ~22–27).

  • Social coupling: Synchrony, Anticipatory coordination, Complementarity as consequences of prediction‑through‑simulation (slides 28–29).

  • Strategic ToM: Beauty‑Contest and Cognitive Hierarchy introduce finite recursion and population heterogeneity in k‑levels (class exercise + Week 03 readings).

  • Grand synthesis (Marr‑framed): Computational—detect & predict minds; Algorithmic—animacy, attribution, simulation; Implementational—cross‑modal social vision & DMN; bounded depth of recursion (≤~4–5).