Cognition: Brains, Computers, and Feedback Metaphors

Administrative Announcements and Classroom Schedule

  • Assessment Task: A brief report based on a recent workshop is available on Canvas.
    - Students must complete the readings provided on the platform.
    - The report should communicate a perspective on robots based on the reading and the previous hour's experience, ideally incorporating external research.

  • Class Representative Selection:
    - Interested students are invited to the front of the class on Thursday morning to introduce themselves and explain their motivation.
    - An election will follow if more than one person runs for the position.

The Nature of the Mind and Historical Metaphors

  • Prevailing Metaphor: The current scientific and public consensus often views the brain as a computer or an information processor. Phrases like "my wires got crossed" reflect this machine-like conceptualization.

  • Biological Substrate: While similar to organs like the liver in its biological makeup, the brain possesses unique properties related to cognition.

  • The Internet Metaphor: A more recent comparison views the brain as a network of internal computers communicating with each other, where cognition emerges from their collective interaction.

  • Guitar Feedback Metaphor: An alternative conceptualization suggesting that cognition is a process of self-exciting loops and environmental coupling, rather than just internal processing.

  • John Searle’s Critique (1980s): Searle observed that humans tend to model the brain after the latest technology because they do not fully understand it. Historical examples include:
    - Ancient Greeks: Attributed brain function to a catapult, "launching" ideas.
    - Leibniz: Compared the brain to a mill.
    - Freud: Compared it to hydraulic and electromagnetic systems.
    - Telegraph Systems: Used as a model during that technological era.
    - Telephone Switchboard: A common childhood metaphor for the brain in the mid-20th century.
    - Digital Computer: The current predominant metaphor, often called "Computationalist Cognitivism."

Frameworks for Understanding Cognition

  • Dualism: The belief that cognition arises from the interaction of two distinct substances: the physical (material) world and the mental (mind/soul) world.

  • Behaviorism: The focus on understanding cognition solely through observable behavior and interaction with the environment. It rejects introspection (examining one’s own thinking processes) because subjective accounts are often misleading or differ from actual internal experiences.
    - Key concepts include associative learning and Pavlovian conditioning.

  • Cybernetics: Developed in the 1940s alongside computationalism, focusing on feedback systems and how agents adapt to their environment to meet goals.

  • Cognitivism: The theory that cognition involves the manipulation of internal mental representations and processes.

  • Computationalism: An implementation of cognitivism where the brain is treated as an information processor.

  • Connectionism: A focus on neural network-like structures.

  • Embodied Cognition: The view that cognition fundamentally involves the body and its interaction with the world.

  • Enaction: Explains how systems develop their own needs and goals and act to satisfy them.

The Challenge of Internal Representation

  • Symbolic Manipulation: Classic AI, like a chess program, doesn't use the physical board; it creates an internal model of the pieces and uses algorithms to perform a systematic search of possible actions to determine the best choice.
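
The "internal model plus systematic search" idea can be sketched with a minimax search over a toy game tree. The tree and its leaf scores below are invented for illustration; a real chess program searches a board model in the same spirit, just vastly deeper and with pruning.

```python
# Hypothetical sketch of symbolic search: the program never touches a
# physical board, only an internal model of states and possible actions.

def minimax(state, maximizing, children, value):
    """Search every line of play in the internal model."""
    moves = children(state)
    if not moves:                       # leaf: evaluate the position
        return value(state)
    scores = (minimax(m, not maximizing, children, value) for m in moves)
    return max(scores) if maximizing else min(scores)

# Toy internal model: states are labels, leaves carry evaluation scores.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf_scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

best = minimax("root", True,
               children=lambda s: tree.get(s, []),
               value=lambda s: leaf_scores[s])
```

The search assumes the opponent also plays optimally within the model, which is exactly the sense in which the "intelligence" lives in the internal representation rather than in the world.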

  • SLAM (Simultaneous Localization and Mapping): Used in robots like the Roomba. The robot uses sensors (e.g., lasers and triangulation) to map its environment and localize itself within that map simultaneously.
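
The coupling at the heart of SLAM can be hinted at with a drastically simplified one-dimensional sketch: the robot keeps an estimate of its own position (localization) and of a wall's position (the map), and refines the map from range readings as it drives. The numbers, the perfect odometry, and the 0.5 fusion gain are all invented for illustration; real SLAM fuses noisy estimates of both quantities at once.

```python
# 1-D toy: refine a map (wall position) from range readings while tracking
# the robot's own pose. Not a real SLAM algorithm, just the basic coupling.

pose = 0.0                    # estimated robot position along a corridor
wall = 10.0                   # initial map: guessed wall position
true_pose, true_wall = 0.0, 12.0

for _ in range(5):
    true_pose += 1.0          # robot actually drives forward one unit
    pose += 1.0               # odometry update of the pose estimate
    reading = true_wall - true_pose        # range sensor to the wall
    measured_wall = pose + reading         # where the sensor says the wall is
    wall += 0.5 * (measured_wall - wall)   # fuse measurement into the map
```

After a few steps the map estimate converges toward the true wall position, because every improvement in the map also makes future localization against it more accurate.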

  • SHRDLU (1968): Created by Terry Winograd, this was a symbolic AI program operating in a "micro-world" of cubes, pyramids, and boxes.
    - It used natural language processing to follow instructions like "pick up a big red block."
    - It could handle complex logic, such as "find a block taller than the one you are holding."
    - It demonstrated that impressive results were possible as early as the late 1960s without neural networks.
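
A hypothetical micro-world in the spirit of SHRDLU makes the point concrete: once objects are plain symbolic records, a command like "find a block taller than the one you are holding" reduces to a query over those records. The object names and heights below are invented.

```python
# Toy micro-world: every object is a record of pre-chosen properties.

blocks = [
    {"name": "red-block", "height": 2},
    {"name": "green-block", "height": 5},
    {"name": "blue-pyramid", "height": 3},
]
holding = {"name": "small-cube", "height": 2}

# "Find a block taller than the one you are holding."
taller = [b["name"] for b in blocks if b["height"] > holding["height"]]
```

Note that the query only works because "height" was pre-specified as a property worth representing, which is precisely where the representation problem below begins.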

  • The Representation Problem: Engineers must decide which properties are important to represent.
    - Essential properties might include size, shape, and stackability.
    - Non-essential properties (the "open-ended list") might include smell, taste, date of origin, electrical conductivity, or the distance of the object to Saturn.
    - Micro-worlds ultimately failed because they could not generalize; an engineer cannot pre-specify all possible real-world interactions.

  • Primitive Technology Example: A YouTube creator demonstrates how humans exploit the open-ended nature of the environment to build complex tools (kilns, refined iron, atlatls) from raw materials. This highlights the infinite variety of human-environment interaction compared to the closed constraints of symbolic AI.

Models of Cognitive Operation

  • The Linear Model (The Sandwich Model): A decoupled process consisting of three distinct phases:
    1. Sense: Gather raw data.
    2. Model/Plan: Update an internal simulation and calculate a solution. This is where the "intelligence" is perceived to reside (the meat of the sandwich).
    3. Act: Execute the motor commands.
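
The decoupled pipeline can be sketched in a few lines. The sensor values and the planning rule below are invented for illustration; the point is only that each phase completes before the next begins, with all perceived intelligence in the middle stage.

```python
# Sketch of the "sandwich" model: sense -> model/plan -> act, in sequence.

def sense():
    """Phase 1: gather raw data."""
    return {"light_left": 0.2, "light_right": 0.8}

def model_and_plan(percept):
    """Phase 2: update an internal model and compute a plan (the 'meat')."""
    brighter = max(percept, key=percept.get)
    return "turn_right" if brighter == "light_right" else "turn_left"

def act(command):
    """Phase 3: execute the motor command."""
    return f"motor: {command}"

output = act(model_and_plan(sense()))
```

Contrast this with the feedback-loop view below, where no stage ever "finishes": sensing and acting run continuously and shape each other.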

  • The Sensory-Motor Feedback Loop: An alternative view where intelligence emerges from the ongoing interaction between the agent and the environment.
    - Braitenberg Vehicles: Robots with simple wire connections between sensors and motors.
    - For example, if a sensor is wired so that more light makes a wheel turn faster, the robot will orient toward a light source with no need for an internal map or "brain."
    - The intelligence is in the feedback loop (brain + body + world interaction).
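
A minimal Braitenberg-style simulation shows the loop at work: each wheel's speed is just a sensor reading, with no map or planner anywhere. The light position, sensor offsets, and gains below are invented for illustration.

```python
import math

# Braitenberg-style vehicle: crossed sensor-motor wiring steers the
# robot toward a light source purely through the feedback loop.

light = (0.0, 0.0)
x, y, heading = 5.0, -2.0, math.pi / 2   # start to the right, facing "up"

def intensity(px, py):
    """Light intensity falling off with squared distance to the source."""
    d2 = (px - light[0]) ** 2 + (py - light[1]) ** 2
    return 1.0 / (1.0 + d2)

for _ in range(400):
    # Two sensors mounted slightly to the left and right of the heading.
    sl = intensity(x + math.cos(heading + 0.5), y + math.sin(heading + 0.5))
    sr = intensity(x + math.cos(heading - 0.5), y + math.sin(heading - 0.5))
    # Crossed wiring: left sensor drives the right wheel and vice versa,
    # so the brighter side pulls the vehicle toward the light.
    left_wheel, right_wheel = sr, sl
    heading += 0.5 * (right_wheel - left_wheel)   # differential steering
    speed = 0.05 * (left_wheel + right_wheel)
    x += speed * math.cos(heading)
    y += speed * math.sin(heading)
```

Nothing in the code represents "the light" or "where I am"; the orienting behavior exists only in the running interaction between sensors, wheels, and world.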

Situatedness, Embodiment, and Dynamics

  • Situatedness: Rodney Brooks (1990s) argued that "the world is its own best model." Instead of building complex internal maps, agents should use raw sensory data directly.
    - Example: A simple light-seeking robot can navigate around a wall to find a light source because the environment (light leaking around the edge) provides more information than a simplified model might include.

  • Embodiment: The body is not just a vessel but a critical component of cognition.
    - Historically, researchers focused on disembodied AI (ChatGPT, AlphaGo, Watson) because robotics is physically difficult.
    - Passive Dynamics: A robot with no motors or brain (just hinges and specific joint lengths) can walk down a hill using only gravity and mechanical oscillation.
    - Arms are not just aesthetic; they act as counterweights that facilitate the dynamics of walking.

  • Temporal Dynamics: Time is often ignored in computer science, but the duration and timing of actions are essential for solving problems like bipedal walking.

Questions & Discussion

  • Subjective Experience and Programming: A student suggested that neurodivergent perspectives, such as autism or ADHD, might bias a person toward "intellectualizing" or representational models of cognition, while also noting that they rely on a physical/spatial map of the code in their head when debugging systems.

  • Aphantasia (Matthew's Experience): The lecturer shared that he cannot visualize objects (e.g., an apple) when he closes his eyes. He suggested this lack of internal visual representation might be why he is more drawn to non-representational, embodied theories of cognition.

  • Intersubjectivity: Discussion on how history-dependent experiences and individual nervous systems create a diverse range of ways to perceive and act in the world.