What cognitive effect does the random-walk semantic network view predict in semantic fluency?
It predicts that retrieval order reflects proximity in the semantic network: words produced close together in time are likely to be close in the network, with transition probabilities governed by connection strengths.
What cognitive effect does the optimal foraging view predict in semantic fluency?
It predicts clustered output: people will name several semantically similar items in succession, then switch abruptly to a new cluster when the current cluster yields fewer accessible items.
What is the random walk algorithm in the semantic network account of memory search?
It models retrieval as a random walk through a network of nodes representing concepts, where the search moves probabilistically along edges from one related concept to another.
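A minimal sketch of the kind of weighted random walk this account describes, assuming a toy semantic network; the nodes, connection strengths, and step count below are invented for illustration and are not taken from any published model.

```python
import random

# Toy weighted semantic network (made-up nodes and connection strengths).
network = {
    "dog":    {"cat": 0.6, "wolf": 0.3, "bone": 0.1},
    "cat":    {"dog": 0.5, "mouse": 0.4, "lion": 0.1},
    "wolf":   {"dog": 0.7, "lion": 0.3},
    "mouse":  {"cat": 0.8, "cheese": 0.2},
    "lion":   {"cat": 0.5, "wolf": 0.5},
    "bone":   {"dog": 1.0},
    "cheese": {"mouse": 1.0},
}

def random_walk_fluency(start, n_steps, seed=0):
    """Walk the network, emitting each node the first time it is visited."""
    rng = random.Random(seed)
    current, produced, seen = start, [], set()
    for _ in range(n_steps):
        if current not in seen:          # report only novel items, as in fluency
            produced.append(current)
            seen.add(current)
        nodes, weights = zip(*network[current].items())
        current = rng.choices(nodes, weights=weights, k=1)[0]  # step proportional to strength
    return produced

print(random_walk_fluency("dog", 30))
```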
What is the optimal foraging account of semantic memory search?
It proposes that people search semantic memory like animals foraging in patches: they produce several related items from one semantic cluster, then switch to a new cluster when returns diminish, analogous to the marginal value theorem.
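A sketch of the marginal-value-theorem switch rule this account assumes: stay in the current semantic cluster while its instantaneous yield exceeds the long-run average yield, and switch once it drops below. The timing numbers are invented.

```python
def should_switch(items_recovered, time_in_cluster, total_items, total_time):
    """Return True when the current cluster's retrieval rate falls below the global average."""
    current_rate = items_recovered / time_in_cluster   # items per second in this cluster
    average_rate = total_items / total_time            # items per second over the whole task
    return current_rate < average_rate

# Example: 1 item in the last 4 s of this cluster vs. 12 items in 30 s overall.
print(should_switch(items_recovered=1, time_in_cluster=4.0,
                    total_items=12, total_time=30.0))   # True -> switch clusters
```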
What is a semantic fluency task?
It is a task where participants are asked to list as many items as possible from a semantic category (e.g., animals) within a time limit, revealing how they search semantic memory.
According to Otto et al. (2013), how does a demanding dual task affect reinforcement learning in humans?
Adding a demanding dual task reduces model-based control (which needs more cognitive resources) and shifts behavior more toward model-free, habitual responding.
What was the key takeaway from Daw et al. (2011) about human reinforcement learning?
Humans show evidence of using a mixture of model-free and model-based reinforcement learning, combining habitual and goal-directed control.
How does knowledge updating differ between model-free and model-based reinforcement learning?
Model-free updating adjusts cached action values directly from reward experience, often leading to habitual repetition, whereas model-based updating revises the internal model of transitions and rewards, enabling strategic re-planning.
Why is model-based reinforcement learning compared to navigating a cognitive map?
Because the agent mentally explores possible paths through a state space, similar to an organism using a cognitive map to plan routes to a goal.
What characterizes the algorithm of model-based reinforcement learning?
It uses the internal model to simulate future state sequences and outcomes, enabling complex forward planning and flexible decision-making when circumstances change.
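A minimal sketch of that forward simulation, assuming a tiny invented MDP: the agent uses its transition and reward model to look a few steps ahead and pick the action with the best simulated outcome.

```python
# P[state][action] = list of (next_state, probability); R[state][action] = expected reward.
# The three-state MDP and its numbers are invented for illustration.
P = {
    "s0": {"left": [("s1", 1.0)], "right": [("s2", 1.0)]},
    "s1": {"left": [("s1", 1.0)], "right": [("s1", 1.0)]},
    "s2": {"left": [("s2", 1.0)], "right": [("s2", 1.0)]},
}
R = {
    "s0": {"left": 0.0, "right": 0.0},
    "s1": {"left": 1.0, "right": 1.0},   # s1 is the rewarding state
    "s2": {"left": 0.0, "right": 0.0},
}

def q_value(state, action, depth=3, gamma=0.9):
    """Model-based action value: immediate reward plus simulated future value."""
    return R[state][action] + gamma * sum(
        p * plan_value(s_next, depth - 1, gamma) for s_next, p in P[state][action])

def plan_value(state, depth, gamma=0.9):
    """Depth-limited lookahead: best achievable value from this state, per the model."""
    if depth == 0:
        return 0.0
    return max(q_value(state, a, depth, gamma) for a in P[state])

best = max(P["s0"], key=lambda a: q_value("s0", a, depth=3))
print(best)   # 'left' -> the model predicts it leads toward the rewarding state s1
```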
In model-based reinforcement learning, what additional representations does the agent have?
The agent represents both action–state transition probabilities and action–reward probabilities, effectively having an internal model of how actions change states and yield rewards.
Why is model-free reinforcement learning likened to behaviorist stimulus–response learning?
Because it directly links actions to rewards based on past experience, without modeling underlying state transitions, similar to behaviorism's focus on reinforced stimulus–response associations.
What characterizes the algorithm of model-free reinforcement learning?
It relies on a simple memory cache updating action values based on past reward prediction errors; it does not perform forward planning and is relatively inflexible.
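A minimal sketch of that cached-value update: a reward prediction error nudges a stored action value, with no transition model and no forward planning. The learning rate and values are illustrative.

```python
Q = {("s0", "left"): 0.0, ("s0", "right"): 0.0}   # cached action values
alpha = 0.1                                        # learning rate (invented)

def model_free_update(state, action, reward):
    """Nudge the cached value toward the observed reward; nothing else is learned."""
    prediction_error = reward - Q[(state, action)]      # reward prediction error
    Q[(state, action)] += alpha * prediction_error      # cache update only
    return prediction_error

# A few rewarded 'left' choices make 'left' habitually preferred in s0.
for _ in range(5):
    model_free_update("s0", "left", reward=1.0)
print(Q)
```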
In model-free reinforcement learning, what is the core representational limitation?
Model-free learning stores only action–reward values (e.g., how good actions have been historically) without an explicit model of how actions change states in the environment.
What are action–reward probabilities in reinforcement learning?
They specify the likelihood and magnitude of rewards that result when an agent takes a particular action in a particular state.
What are action–state transition probabilities in reinforcement learning?
They are the probabilities that a given action taken in a given state will lead to a particular next state.
In reinforcement learning, what are states, actions, and rewards?
States are situations or configurations an agent can be in, actions are choices the agent can make in each state, and rewards are outcomes (positive or negative) that follow actions and guide learning.
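One way to make these ingredients concrete is to write them out as data structures: states, actions, action–state transition probabilities, and expected action rewards. The environment and all numbers below are invented.

```python
import random

states  = ["hungry", "full"]
actions = ["eat", "wait"]

# transition[(state, action)] -> {next_state: probability}
transition = {
    ("hungry", "eat"):  {"full": 0.9, "hungry": 0.1},
    ("hungry", "wait"): {"hungry": 1.0},
    ("full", "eat"):    {"full": 1.0},
    ("full", "wait"):   {"hungry": 0.3, "full": 0.7},
}

# reward[(state, action)] -> expected reward for taking that action in that state
reward = {
    ("hungry", "eat"): 1.0, ("hungry", "wait"): -0.5,
    ("full", "eat"): -0.2,  ("full", "wait"): 0.0,
}

def step(state, action, rng=random.Random(1)):
    """Sample the next state from the transition probabilities and return the reward."""
    options = transition[(state, action)]
    next_state = rng.choices(list(options), weights=list(options.values()), k=1)[0]
    return next_state, reward[(state, action)]

print(step("hungry", "eat"))   # e.g. ('full', 1.0)
```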
Why did Wong et al. (2023) include a control task with silhouettes?
The silhouette control ensured that performance differences were not driven simply by low-level visual differences in the images, but by how participants represented underlying objects.
How does performance in the draping versus object change conditions reveal what people are representing?
If people track the underlying object, they will be more sensitive to object changes than to mere draping changes, indicating their representations prioritize objects over superficial properties.
In Wong et al. (2023), what is the difference between a draping change and an object change?
A draping change alters the appearance of the covering material (e.g., cloth) while the underlying object stays the same; an object change alters the underlying object itself while draping may remain similar.
What is the computational problem of object perception in the object representation case study?
It is how to track and represent enduring objects over time and transformations, distinguishing them from superficial or temporary properties like draping or coverings.
What evidence suggested that people spontaneously use polar rather than Cartesian coordinates?
Error correlations were lower for polar coordinates (θ, d) than for Cartesian (x, y), consistent with polar coordinates being a more efficient and likely actual internal format.
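A sketch of that error-correlation logic on simulated data (not Yousif & Keil's data or analysis code): generate noisy placements of one target with noise that is independent in the polar code, then compare error correlations in the two coordinate systems. The polar correlation should be near zero while the Cartesian correlation is larger in magnitude.

```python
import math, random

rng = random.Random(0)
true_x, true_y = 3.0, 4.0
true_d, true_theta = math.hypot(true_x, true_y), math.atan2(true_y, true_x)

placements = []
for _ in range(500):
    d = true_d + rng.gauss(0, 0.2)            # independent distance noise (invented SDs)
    theta = true_theta + rng.gauss(0, 0.15)   # independent angle noise
    placements.append((d * math.cos(theta), d * math.sin(theta), d, theta))

def pearson(a, b):
    """Plain Pearson correlation of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

ex = [x - true_x for x, _, _, _ in placements]
ey = [y - true_y for _, y, _, _ in placements]
ed = [d - true_d for _, _, d, _ in placements]
et = [t - true_theta for _, _, _, t in placements]

print("Cartesian |r(x, y)|:", round(abs(pearson(ex, ey)), 2))   # relatively high
print("Polar |r(theta, d)|:", round(abs(pearson(et, ed)), 2))   # near zero
```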
Why do highly correlated errors between x and y (or θ and d) matter for identifying the format of position representations?
If errors in two coordinates are highly correlated, it suggests the coding scheme is redundant and thus less efficient; lower correlation supports that those coordinates are the ones the system naturally uses.
What is the efficiency assumption used by Yousif & Keil (2021)?
An efficient coding system should avoid redundancy: its coordinate dimensions should not contain duplicate information, and errors in estimating each coordinate should not be highly correlated.
What is the representational question in the Yousif & Keil (2021) position representation case study?
It asks whether people internally represent positions in 2-D space using Cartesian coordinates (x, y) or polar coordinates (distance d, angle θ).
What did Firestone & Scholl (2014) find about human shape representation?
They found that when people were asked to tap points on shapes, their responses aligned with the shapes' skeletal structures, even with disruptions like notches, suggesting humans are sensitive to shape skeletons.
What is a "shape skeleton" in the context of AI and perception?
A shape skeleton is an internal curve or network capturing the medial structure of a shape, which remains relatively stable across transformations and can serve as a basis for recognizing shapes.
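A minimal sketch of computing such a skeleton as a medial axis, assuming scikit-image is available; the binary shape (a rectangle with a notch) is invented. The point is only that the skeleton summarizes the shape's internal structure and stays fairly stable under small boundary disruptions like notches.

```python
import numpy as np
from skimage.morphology import medial_axis   # assumes scikit-image is installed

# Build a simple binary shape: a filled rectangle with a small notch in one edge.
shape = np.zeros((60, 100), dtype=bool)
shape[10:50, 10:90] = True
shape[10:20, 45:55] = False                  # the notch

skeleton = medial_axis(shape)                # medial-axis "shape skeleton"
print("shape pixels:", int(shape.sum()), "skeleton pixels:", int(skeleton.sum()))
```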
What is the computational problem of shape perception described in the notes?
The problem is how to perceive that an object has the same shape across different transformations (e.g., rotations, occlusions, distortions) despite changes in the retinal image.
In representation theory, what are "content" and "format" of representations?
Content is what information a representation carries (what it is about), and format is how that information is structurally organized or encoded (e.g., coordinates, skeletal structures).
In the lexicalization case study, what trade-off do languages manage?
Languages manage a trade-off between simplicity (fewer, more general categories) and informativeness (more detailed, specific distinctions) to support efficient communication.
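A toy sketch of that trade-off, scoring candidate naming systems by a weighted sum of complexity (number of distinct words) and expected ambiguity; the referents, candidate systems, and weights are invented and are much simpler than real lexicalization models.

```python
referents = ["mother", "father", "aunt", "uncle"]

systems = {
    "one_word":  {r: "relative" for r in referents},            # simplest, least informative
    "by_gender": {"mother": "woman", "aunt": "woman",
                  "father": "man",   "uncle": "man"},
    "exact":     {r: r for r in referents},                     # most informative, most complex
}

def score(system, complexity_weight=0.5):
    """Lower is better: weighted sum of vocabulary size and average ambiguity."""
    n_words = len(set(system.values()))
    ambiguity = sum(list(system.values()).count(system[r]) for r in referents) / len(referents)
    return complexity_weight * n_words + (1 - complexity_weight) * ambiguity

for name, system in systems.items():
    print(name, round(score(system), 2))   # the intermediate system wins under this weighting
```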
What is the computational problem of lexicalization and categorization in language?
It is how to design systems of words and categories (e.g., kinship terms) that balance simplicity of the system with informativeness for communication, supporting effective information transfer.
How is the history of information use in the environment related to memory retrieval, according to the notes?
Patterns of memory retrieval mirror patterns of information use in the world, often following a log–log linear relationship: more frequently and recently used information is more likely to be retrieved.
What are two determinants of a memory's need probability in the notes?
Need probability depends on how relevant the memory is to the current situation and how recently or frequently it has been used in the past.
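A sketch of estimating a memory's need odds from its usage history, in the spirit of the rational analysis behind these notes: each past use contributes an amount that decays as a power of its age, so the relationship is linear on log–log axes. The decay exponent and usage lags are invented.

```python
import math

def need_odds(lags_in_hours, decay=0.5):
    """Sum of power-law-decaying contributions from each past use (more recent/frequent -> higher)."""
    return sum(lag ** (-decay) for lag in lags_in_hours)

recent_and_frequent = [1, 3, 8, 20]     # used often, most recently 1 hour ago
old_and_rare        = [200]             # used once, long ago

print(round(need_odds(recent_and_frequent), 3))   # higher -> retrieve first
print(round(need_odds(old_and_rare), 3))

# For a single use, log(need_odds) / log(lag) equals -decay: a straight line in log-log space.
print(round(math.log(need_odds([10])) / math.log(10), 3))   # -0.5
```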
In the memory retrieval case study, what is the main computational problem?
The problem is how to prioritize which memories to retrieve, given that retrieval has costs and many memories may be potentially relevant, so the system should retrieve those most likely to be needed.
What is Marr's implementation level?
The implementation level specifies the physical hardware or biological substrate that realizes the representations and algorithms, for example, neural circuits in the brain.
What is Marr's algorithm and representation level?
It is the level at which we specify how the problem is solved: what representations are used and what algorithms or strategies transform those representations to achieve the computational goal.
What is Marr's computational level of analysis?
The computational level specifies what problem the system is solving and what function is being computed, i.e., what the goal of the computation is and why it is appropriate.
What is externalism about perceptual or cognitive states?
Externalism is the view that some cognitive or perceptual state kinds depend partly on relations to the external physical environment, so symbols and formal rules alone are not enough to fully characterize them.
What is the difference between a representing system and a represented system?
A representing system is the system that carries or encodes information (e.g., a graph, a memory trace), while the represented system is the thing in the world that the representation is about (e.g., a child's actual height).
How do symbols in a Turing machine differ from mental representations, according to the notes?
Symbols in a Turing machine have meaning only by external interpretation; by themselves they need not refer to anything in the world, whereas mental representations are taken to have content that is about things beyond the symbol system.
What does the Chinese room thought experiment aim to challenge?
It challenges the claim that implementing the right computational program (mere symbol manipulation) is sufficient for genuine understanding or consciousness.
What is the basic setup of Searleâs Chinese room argument?
It imagines a person in a room following syntactic rules to manipulate Chinese symbols to produce appropriate outputs, despite not understanding Chinese, suggesting that symbol manipulation alone may not constitute understanding.
Name one problem with using the Turing test as a decisive criterion for intelligence.
Among other problems: outcomes depend on subjective human judgments, success can hinge on superficial language tricks, and equating being good at language with being good at thought may be mistaken.
What is the core idea of the Turing test as an objective test for machine intelligence?
A machine is deemed intelligent if, in a text-based conversation, it can consistently lead a human judge to be unable to reliably distinguish it from a human interlocutor.
According to the Heuristic Search Hypothesis, how does an intelligent system solve problems?
It represents problems as symbol structures and searches the space of these structures heuristically, demonstrating intelligence by generating relevant, efficient solutions rather than merely many solutions.
What does the Physical Symbol System Hypothesis (PSSH) claim?
The Physical Symbol System Hypothesis claims that a physical symbol system has the necessary and sufficient means for general intelligent action; that is, such a system can, in principle, exhibit human-like intelligence.
What is the equivalence argument for the Church–Turing thesis?
The equivalence argument notes that diverse formal systems of computation (e.g., Turing machines, lambda calculus, modern programming languages) can compute exactly the same class of functions, supporting the idea that this class captures all effective computation.
State the Church–Turing thesis.
The Church–Turing thesis states that any function that can be effectively computed by any mechanical procedure can be computed by a Turing machine; in other words, anything computable is Turing-computable.
What is a Turing machine in the context of computation?
A Turing machine is a formal, idealized model of computation that manipulates symbols on an infinite tape according to a set of rules; it serves as one canonical mathematical definition of what it is to compute.
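A minimal sketch of a Turing machine simulator: a tape, a head, a state, and a transition table mapping (state, symbol) to (written symbol, head move, next state). The example machine, which appends a '1' to a block of 1s (a unary incrementer), is invented for illustration.

```python
def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    """Run a transition table on an initial tape string and return the final tape."""
    cells = dict(enumerate(tape))        # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Unary increment: scan right over 1s, write a 1 on the first blank cell, then halt.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_turing_machine("111", rules))   # '1111'
```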
According to the notes, what is the overall goal of mental computation?
The overall goal is for the brain to construct and update an internal model that is the best guess about the most likely state of the environment, guiding perception and action.
What is the computational problem of color perception mentioned in the notes?
The problem is that the same pattern of incoming light can arise from different combinations of surface color and lighting conditions, yet the visual system must infer the true color of surfaces despite this ambiguity.
In computationalism, what role do symbols and formal rules play?
Symbols stand for concepts or states, and formal rules specify how symbols can be transformed, allowing systematic, rule-governed information processing.
What is computationalism about the mind?
Computationalism is the view that cognitive processes are forms of computation, in which the mind manipulates symbols according to formal rules, much like a computer.
What is the suggested relationship between the brain and the mind in functionalist cognitive science?
The brain is seen as the physical system that implements the functional organization that constitutes the mind; if the brain implements the right functions, then it is sufficient for having a mind.
How does functionalism differ from defining something by its physical constitution?
Functionalism defines something by what it does (its role or function), whereas a physical-constitution view defines it by what material it is made of (its physical substrate).
What is functional individuation of a cognitive component?
Functional individuation means identifying a component by the role it plays in a system, such that different physical realizations (e.g., different ways of encoding the number 9) count as the same component if they perform the same function.
How does a law-based explanation differ from a functional analysis in cognitive science?
Law-based explanation cites universal laws that must always hold and are used for prediction, whereas functional analysis explains a capacity by decomposing it into functional components and their roles, focusing more on explanation than pure prediction.
Define functionalism in the philosophy of mind.
Functionalism is the view that mental states are defined by their functional roles (what they do in the cognitive system) rather than by what they are made of physically.
What is the distinction between cognitive capacities and cognitive effects?
Cognitive capacities are the mind's reliable abilities to transform one mental state into another, whereas cognitive effects are the observable ways those capacities manifest in behavior.
According to Chomsky's critique, what problem does language pose for behaviorism?
People can produce and understand sentences they have never heard before, so simple stimulus–response accounts based on past reinforcement cannot fully explain language use, suggesting a more complex internal system.
How did Tolman's cognitive map results challenge strict behaviorism?
They showed rats could learn the layout of a maze without reinforcement, implying internal representations (cognitive maps) rather than just stimulus–response links driven by rewards and punishments.
What did Tolman's 1948 cognitive map experiment with rats show about learning?
Tolman's experiment showed that rats formed a cognitive map of a maze: even rats that had not been rewarded during initial exploration could quickly find food once it was introduced, suggesting learning without reinforcement and challenging strict behaviorism.
What is a Skinner box and what was it used to study?
A Skinner box is an experimental apparatus in which an animal (such as a rat) learns to perform actions (like pressing a lever) in response to stimuli (like a light) to receive rewards (like food pellets), used to study stimulus–response learning and reinforcement.
How do positive and negative reinforcement relate to behavior shaping in behaviorism?
Positive and negative reinforcement are used to increase the likelihood of a desired response to a stimulus: positive reinforcement adds a rewarding consequence, while negative reinforcement removes an unpleasant one.
In behaviorism, what is meant by a stimulus–response relationship?
A stimulus–response relationship means that when a subject encounters a specific stimulus, it tends to produce a particular response, typically in a reliable way unless some external factor intervenes.
Why did behaviorists move away from introspection as a method?
Introspection was considered unreliable because participants' reports of their own mental states were subjective and difficult to verify, so behaviorists focused on observable behavior instead.
What is the core claim of behaviorism about explaining behavior?
Behaviorism claims that behavior can be explained as stimulus → response, and that prediction and control of observable behavior can be achieved by modifying stimuli in observable environments, without reference to internal mental states.