Cog Sci Midterm

67 Terms

1

What cognitive effect does the random-walk semantic network view predict in semantic fluency?

It predicts that retrieval order reflects proximity in the semantic network: words produced close together in time are likely to be close in the network, with transition probabilities governed by connection strengths.

2

What cognitive effect does the optimal foraging view predict in semantic fluency?

It predicts clustered output: people will name several semantically similar items in succession, then switch abruptly to a new cluster when the current cluster yields fewer accessible items.

3

What is the random walk algorithm in the semantic network account of memory search?

It models retrieval as a random walk through a network of nodes representing concepts, where the search moves probabilistically along edges from one related concept to another.
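
A minimal Python sketch of the random-walk idea over a toy semantic network; the nodes and connection strengths below are invented for illustration, not taken from the course materials.

```python
import random

# Toy semantic network: each node maps to neighbors with connection strengths.
network = {
    "dog":   {"cat": 0.6, "wolf": 0.3, "bone": 0.1},
    "cat":   {"dog": 0.5, "mouse": 0.4, "lion": 0.1},
    "wolf":  {"dog": 0.7, "lion": 0.3},
    "mouse": {"cat": 0.8, "rat": 0.2},
    "lion":  {"cat": 0.5, "wolf": 0.5},
    "rat":   {"mouse": 1.0},
    "bone":  {"dog": 1.0},
}

def random_walk_fluency(start, n_steps=20):
    """Walk the network, reporting each node the first time it is visited.

    Transition probabilities are proportional to connection strengths, so items
    reported close together in time tend to be close together in the network."""
    current, produced, seen = start, [start], {start}
    for _ in range(n_steps):
        neighbors = list(network[current])
        weights = [network[current][n] for n in neighbors]
        current = random.choices(neighbors, weights=weights, k=1)[0]
        if current not in seen:
            seen.add(current)
            produced.append(current)
    return produced

print(random_walk_fluency("dog"))
```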

4

What is the optimal foraging account of semantic memory search?

It proposes that people search semantic memory like animals foraging in patches: they produce several related items from one semantic cluster, then switch to a new cluster when returns diminish, analogous to the marginal value theorem.
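
A toy sketch of the marginal-value-theorem switching rule: stay in the current cluster while items arrive faster than the long-run average rate, and switch when the within-cluster rate drops below it. Cluster contents, retrieval intervals, and the switch cost are all invented.

```python
clusters = {
    "pets":  ["dog", "cat", "hamster", "goldfish", "parrot", "rabbit"],
    "farm":  ["cow", "pig", "horse", "sheep", "goat", "chicken"],
    "ocean": ["whale", "dolphin", "shark", "octopus", "seal"],
}

def forage(clusters, depletion=1.5, base_interval=1.0, switch_cost=3.0):
    """Produce items cluster by cluster, leaving a cluster when the marginal
    retrieval rate falls below the long-run average rate (MVT-style rule)."""
    produced, total_time, total_items = [], 0.0, 0
    for name, items in clusters.items():
        total_time += switch_cost                            # time spent finding a new cluster
        for within, item in enumerate(items):
            interval = base_interval * depletion ** within   # retrieval slows as the cluster depletes
            long_run_rate = total_items / total_time
            if 1.0 / interval < long_run_rate:               # marginal rate below average: switch
                break
            total_time += interval
            total_items += 1
            produced.append((name, item, round(total_time, 2)))
    return produced

for cluster, item, t in forage(clusters):
    print(f"{t:6.2f}  {cluster:<6}{item}")
```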

5

What is a semantic fluency task?

It is a task where participants are asked to list as many items as possible from a semantic category (e.g., animals) within a time limit, revealing how they search semantic memory.

6

According to Otto et al. (2013), how does a demanding dual task affect reinforcement learning in humans?

Adding a demanding dual task reduces model-based control (which needs more cognitive resources) and shifts behavior more toward model-free, habitual responding.

7

What was the key takeaway from Daw et al. (2011) about human reinforcement learning?

Humans show evidence of using a mixture of model-free and model-based reinforcement learning, combining habitual and goal-directed control.

8

How does knowledge updating differ between model-free and model-based reinforcement learning?

Model-free updating adjusts cached action values directly from reward experience, often leading to habitual repetition, whereas model-based updating revises the internal model of transitions and rewards, enabling strategic re-planning.

9

Why is model-based reinforcement learning compared to navigating a cognitive map?

Because the agent mentally explores possible paths through a state space, similar to an organism using a cognitive map to plan routes to a goal.

10

What characterizes the algorithm of model-based reinforcement learning?

It uses the internal model to simulate future state sequences and outcomes, enabling complex forward planning and flexible decision-making when circumstances change.
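
A minimal sketch of model-based evaluation on a two-step structure loosely inspired by the Daw et al. task; the transition and reward numbers are invented. The point is that choice values come from simulating the model, so changing the model immediately changes the plan.

```python
# The agent's model: first-stage actions lead probabilistically to second-stage
# states, and each second-stage state has an expected reward.
transition = {                       # P(next_state | action) at the first stage
    "left":  {"pink": 0.7, "blue": 0.3},
    "right": {"pink": 0.3, "blue": 0.7},
}
reward = {"pink": 0.2, "blue": 0.8}  # expected reward at each second-stage state

def model_based_value(action):
    """Expected value of a first-stage action, computed by mentally simulating
    where it is likely to lead and how good those states are."""
    return sum(p * reward[state] for state, p in transition[action].items())

values = {a: model_based_value(a) for a in transition}
print(values, "plan:", max(values, key=values.get))      # 'right' is worth more here

# Flexibility: if the agent learns that rewards have changed, re-planning with
# the updated model changes the choice without new trial-and-error experience.
reward["pink"] = 0.9
values = {a: model_based_value(a) for a in transition}
print(values, "new plan:", max(values, key=values.get))  # now 'left'
```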

11

In model-based reinforcement learning, what additional representations does the agent have?

The agent represents both action–state transition probabilities and action–reward probabilities, effectively having an internal model of how actions change states and yield rewards.

12

Why is model-free reinforcement learning likened to behaviorist stimulus–response learning?

Because it directly links actions to rewards based on past experience, without modeling underlying state transitions, similar to behaviorism’s focus on reinforced stimulus–response associations.

13

What characterizes the algorithm of model-free reinforcement learning?

It relies on a simple memory cache updating action values based on past reward prediction errors; it does not perform forward planning and is relatively inflexible.
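
A minimal sketch of model-free learning with a reward prediction error update (a simple Q-learning-style rule); the payoff probabilities, learning rate, and exploration rate are invented.

```python
import random

reward_prob = {"left": 0.2, "right": 0.8}   # true payoffs, unknown to the agent
Q = {"left": 0.0, "right": 0.0}             # cached action values (the "memory cache")
alpha = 0.1                                 # learning rate

def choose(epsilon=0.1):
    """Epsilon-greedy choice over the cached values; no lookahead of any kind."""
    if random.random() < epsilon:
        return random.choice(list(Q))
    return max(Q, key=Q.get)

for trial in range(500):
    action = choose()
    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    prediction_error = reward - Q[action]    # reward prediction error
    Q[action] += alpha * prediction_error    # update the cache; no model, no planning

print(Q)  # Q['right'] ends up near 0.8; Q['left'] stays noisier since it is rarely chosen
```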

14

In model-free reinforcement learning, what is the core representational limitation?

Model-free learning stores only action–reward values (e.g., how good actions have been historically) without an explicit model of how actions change states in the environment.

15

What are action–reward probabilities in reinforcement learning?

They specify the likelihood and magnitude of rewards that result when an agent takes a particular action in a particular state.

16

What are action–state transition probabilities in reinforcement learning?

They are the probabilities that a given action taken in a given state will lead to a particular next state.

17

In reinforcement learning, what are states, actions, and rewards?

States are situations or configurations an agent can be in, actions are choices the agent can make in each state, and rewards are outcomes (positive or negative) that follow actions and guide learning.
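
One way to write these ingredients down as plain data, with a made-up two-state example; a model-based agent carries both tables below, while a model-free agent caches only reward-derived action values.

```python
# A made-up two-state, two-action example written as plain data.
states = ["hungry", "full"]
actions = ["eat", "wait"]

# Action–state transition probabilities: P(next_state | state, action)
transition = {
    ("hungry", "eat"):  {"full": 0.9, "hungry": 0.1},
    ("hungry", "wait"): {"hungry": 1.0},
    ("full", "eat"):    {"full": 1.0},
    ("full", "wait"):   {"hungry": 0.4, "full": 0.6},
}

# Action–reward values: expected reward for taking an action in a state
reward = {
    ("hungry", "eat"):  1.0,
    ("hungry", "wait"): -0.5,
    ("full", "eat"):   -0.2,
    ("full", "wait"):   0.0,
}

for (s, a), dist in transition.items():
    assert abs(sum(dist.values()) - 1.0) < 1e-9   # each row is a probability distribution

# A model-based agent consults both tables when planning; a model-free agent
# stores only cached values per (state, action) and never consults `transition`.
```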

18

Why did Wong et al. (2023) include a control task with silhouettes?

The silhouette control ensured that performance differences were not driven simply by low-level visual differences in the images, but by how participants represented underlying objects.

19

How does performance in the draping versus object change conditions reveal what people are representing?

If people track the underlying object, they will be more sensitive to object changes than to mere draping changes, indicating their representations prioritize objects over superficial properties.

20

In Wong et al. (2023), what is the difference between a draping change and an object change?

A draping change alters the appearance of the covering material (e.g., cloth) while the underlying object stays the same; an object change alters the underlying object itself while draping may remain similar.

21

What is the computational problem of object perception in the object representation case study?

It is how to track and represent enduring objects over time and transformations, distinguishing them from superficial or temporary properties like draping or coverings.

22

What evidence suggested that people spontaneously use polar rather than Cartesian coordinates?

Error correlations were lower for polar coordinates (θ, d) than for Cartesian coordinates (x, y), consistent with polar coordinates being the more efficient format and therefore the likely internal format.
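
A toy simulation of the error-correlation logic (not Yousif & Keil's actual analysis or data): if noise is independent in the system's native format, errors measured in that format are roughly uncorrelated, while the same errors re-expressed in the other format become correlated. All parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
d = rng.uniform(2.0, 10.0, n)                 # true distances
theta = rng.uniform(0.0, np.pi / 2, n)        # true angles (first quadrant)

d_hat = d + rng.normal(0.0, 0.8, n)           # independent noise on distance
theta_hat = theta + rng.normal(0.0, 0.05, n)  # independent noise on angle

x, y = d * np.cos(theta), d * np.sin(theta)
x_hat, y_hat = d_hat * np.cos(theta_hat), d_hat * np.sin(theta_hat)

def err_corr(a_hat, a, b_hat, b):
    return float(np.corrcoef(a_hat - a, b_hat - b)[0, 1])

print("polar error correlation (d, theta):", round(err_corr(d_hat, d, theta_hat, theta), 3))
print("cartesian error correlation (x, y):", round(err_corr(x_hat, x, y_hat, y), 3))
# The polar correlation is ~0 by construction; the Cartesian one is clearly nonzero here.
```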

23

Why do highly correlated errors between x and y (or θ and d) matter for identifying the format of position representations?

If errors in two coordinates are highly correlated, the coding scheme is redundant and thus less efficient; lower error correlation suggests that those coordinates are the ones the system naturally uses.

24

What is the efficiency assumption used by Yousif & Keil (2021)?

An efficient coding system should avoid redundancy: its coordinate dimensions should not contain duplicate information, and errors in estimating each coordinate should not be highly correlated.

25

What is the representational question in the Yousif & Keil (2021) position representation case study?

It asks whether people internally represent positions in 2-D space using Cartesian coordinates (x, y) or polar coordinates (distance d, angle θ).

26

What did Firestone & Scholl (2014) find about human shape representation?

They found that when people were asked to tap points on shapes, their responses aligned with the shapes’ skeletal structures, even with disruptions like notches, suggesting humans are sensitive to shape skeletons.

27

What is a “shape skeleton” in the context of AI and perception?

A shape skeleton is an internal curve or network capturing the medial structure of a shape, which remains relatively stable across transformations and can serve as a basis for recognizing shapes.
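
A minimal sketch of extracting a medial-axis-style skeleton from a binary shape, assuming scikit-image is installed; this is a generic morphological skeleton, not the specific skeleton model from the readings.

```python
import numpy as np
from skimage.morphology import skeletonize   # assumes scikit-image is installed

# Build a simple binary cross shape and extract a morphological skeleton,
# the thin medial curve that stays relatively stable as the outline changes.
shape = np.zeros((20, 40), dtype=bool)
shape[8:13, 3:37] = True    # horizontal bar
shape[3:17, 17:23] = True   # vertical bar crossing it

skeleton = skeletonize(shape)

# Crude text rendering: '#' marks the shape, '*' marks skeleton pixels.
for shape_row, skel_row in zip(shape, skeleton):
    print("".join("*" if s else ("#" if p else " ") for p, s in zip(shape_row, skel_row)))
```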

28

What is the computational problem of shape perception described in the notes?

The problem is how to perceive that an object has the same shape across different transformations (e.g., rotations, occlusions, distortions) despite changes in the retinal image.

29

In representation theory, what are "content" and "format" of representations?

Content is what information a representation carries (what it is about), and format is how that information is structurally organized or encoded (e.g., coordinates, skeletal structures).

30

In the lexicalization case study, what trade-off do languages manage?

Languages manage a trade-off between simplicity (fewer, more general categories) and informativeness (more detailed, specific distinctions) to support efficient communication.
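
A toy way to score that trade-off, with invented referents and naming systems; term count stands in for simplicity and expected surprisal for communicative cost, which is one common operationalization rather than the exact measure from the notes.

```python
import math

referents = ["mother", "father", "aunt", "uncle", "sister", "brother"]

# Three invented naming systems for the same referents, from fine to coarse.
systems = {
    "fine":   {r: r for r in referents},                     # a distinct word for each referent
    "medium": {"mother": "parent", "father": "parent",
               "aunt": "auncle", "uncle": "auncle",
               "sister": "sibling", "brother": "sibling"},
    "coarse": {r: "relative" for r in referents},            # one word for everyone
}

def complexity(system):
    return len(set(system.values()))       # simpler systems use fewer distinct terms

def communicative_cost(system):
    # Expected surprisal of the intended referent given the word, assuming a
    # uniform need distribution and a literal listener.
    cost = 0.0
    for r in referents:
        same_word = [x for x in referents if system[x] == system[r]]
        cost += (1 / len(referents)) * math.log2(len(same_word))
    return cost

for name, system in systems.items():
    print(f"{name:7} terms={complexity(system)}  cost={communicative_cost(system):.2f} bits")
# 'fine' is informative but complex, 'coarse' is simple but costly;
# attested lexicons tend to sit near the frontier between the two pressures.
```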

31

What is the computational problem of lexicalization and categorization in language?

It is how to design systems of words and categories (e.g., kinship terms) that balance simplicity of the system with informativeness for communication, supporting effective information transfer.

32

How is the history of information use in the environment related to memory retrieval, according to the notes?

Patterns of memory retrieval mirror patterns of information use in the world, often following a log–log linear relationship: more frequently and recently used information is more likely to be retrieved.
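
A toy version of the frequency/recency idea: the odds of needing an item rise with past frequency of use and fall with time since last use as power functions, i.e., straight lines in log–log coordinates. Items, counts, and exponents are invented.

```python
memories = [
    # (label, times used in the past, days since last use)
    ("own phone number",       500,   1),
    ("coworker's birthday",      5,  30),
    ("old locker combination",  50, 900),
]

def need_odds(frequency, recency_days, a=1.0, b=1.5):
    # odds proportional to frequency**a / recency**b, so log(odds) is linear in
    # log(frequency) and log(recency): the log–log pattern described above.
    return frequency ** a / recency_days ** b

for label, freq, rec in sorted(memories, key=lambda m: -need_odds(m[1], m[2])):
    print(f"{need_odds(freq, rec):10.4f}  {label}")
```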

33

What are two determinants of a memory’s need probability in the notes?

Need probability depends on how relevant the memory is to the current situation and how recently or frequently it has been used in the past.

34

In the memory retrieval case study, what is the main computational problem?

The problem is how to prioritize which memories to retrieve, given that retrieval has costs and many memories may be potentially relevant, so the system should retrieve those most likely to be needed.

35

What is Marr’s implementation level?

The implementation level specifies the physical hardware or biological substrate that realizes the representations and algorithms—for example, neural circuits in the brain.

36

What is Marr’s algorithm and representation level?

It is the level at which we specify how the problem is solved: what representations are used and what algorithms or strategies transform those representations to achieve the computational goal.

37

What is Marr’s computational level of analysis?

The computational level specifies what problem the system is solving and what function is being computed—i.e., what the goal of the computation is and why it is appropriate.

38

What is externalism about perceptual or cognitive states?

Externalism is the view that some cognitive or perceptual state kinds depend partly on relations to the external physical environment, so symbols and formal rules alone are not enough to fully characterize them.

39

What is the difference between a representing system and a represented system?

A representing system is the system that carries or encodes information (e.g., a graph, a memory trace), while the represented system is the thing in the world that the representation is about (e.g., a child’s actual height).

40

How do symbols in a Turing machine differ from mental representations, according to the notes?

Symbols in a Turing machine have meaning only by external interpretation; by themselves they need not refer to anything in the world, whereas mental representations are taken to have content that is about things beyond the symbol system.

41

What does the Chinese room thought experiment aim to challenge?

It challenges the claim that implementing the right computational program (mere symbol manipulation) is sufficient for genuine understanding or consciousness.

42

What is the basic setup of Searle’s Chinese room argument?

It imagines a person in a room following syntactic rules to manipulate Chinese symbols to produce appropriate outputs, despite not understanding Chinese, suggesting that symbol manipulation alone may not constitute understanding.

43

Name one problem with using the Turing test as a decisive criterion for intelligence.

Possible problems include: outcomes depend on subjective human judgments, success can hinge on superficial language tricks, and equating being good at language with being good at thought may be mistaken.

44

What is the core idea of the Turing test as an objective test for machine intelligence?

A machine is deemed intelligent if, in a text-based conversation, a human judge cannot reliably distinguish it from a human interlocutor.

45

According to the Heuristic Search Hypothesis, how does an intelligent system solve problems?

It represents problems as symbol structures and uses heuristic search through this space, demonstrating intelligence by generating relevant, efficient solutions rather than just many solutions.

46

What does the Physical Symbol System Hypothesis (PSSH) claim?

The Physical Symbol System Hypothesis claims that a physical symbol system has the necessary and sufficient means for general intelligent action—that is, such a system can, in principle, exhibit human-like intelligence.

47

What is the equivalence argument for the Church–Turing thesis?

The equivalence argument notes that diverse formal systems of computation (e.g., Turing machines, lambda calculus, modern programming languages) can compute exactly the same class of functions, supporting the idea that this class captures all effective computation.

48

State the Church–Turing thesis.

The Church–Turing thesis states that any function that can be effectively computed by any mechanical procedure can be computed by a Turing machine; in other words, anything computable is Turing-computable.

49

What is a Turing machine in the context of computation?

A Turing machine is a formal, idealized model of computation that manipulates symbols on an infinite tape according to a set of rules; it serves as one canonical mathematical definition of what it is to compute.
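
A minimal Turing machine simulator as a sketch (not tied to any particular formalization in the notes); the example program inverts a binary string and halts at the first blank.

```python
def run_turing_machine(program, tape, state="start", blank="_", max_steps=1000):
    """Run a one-tape machine: look up (state, symbol), write, move, change state."""
    cells = dict(enumerate(tape))            # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, blank) for i in span).strip(blank)

# Example program: invert a binary string (flip 0 and 1), halting at the first blank.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(invert, "101100"))  # prints 010011
```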

50

According to the notes, what is the overall goal of mental computation?

The overall goal is for the brain to construct and update an internal model that is the best guess about the most likely state of the environment, guiding perception and action.

51

What is the computational problem of color perception mentioned in the notes?

The problem is that the same pattern of incoming light (luminance) can arise from different combinations of object and lighting conditions, yet the visual system must infer the true color of surfaces despite this ambiguity.
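
A toy Bayesian sketch of that ambiguity: measured luminance is roughly reflectance times illumination, so many pairs explain the same measurement and a prior over lighting breaks the tie. All grids and priors below are invented.

```python
import itertools
import math

reflectances = [0.2, 0.4, 0.6, 0.8]                         # candidate surface lightnesses
illuminations = [0.25, 0.5, 1.0, 2.0]                       # candidate lighting levels
prior_illum = {0.25: 0.1, 0.5: 0.3, 1.0: 0.5, 2.0: 0.1}     # e.g., ordinary daylight is common
prior_refl = 0.25                                           # flat prior over reflectances

measured_luminance = 0.4                                    # luminance ~ reflectance * illumination

def likelihood(lum, refl, illum, noise=0.05):
    # Gaussian-shaped likelihood around the physical prediction refl * illum.
    return math.exp(-((lum - refl * illum) ** 2) / (2 * noise ** 2))

posterior = {
    (refl, illum): likelihood(measured_luminance, refl, illum) * prior_illum[illum] * prior_refl
    for refl, illum in itertools.product(reflectances, illuminations)
}
total = sum(posterior.values())
best = max(posterior, key=posterior.get)

print("best (reflectance, illumination):", best)            # the most probable explanation
print("posterior probability:", round(posterior[best] / total, 3))
```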

52

In computationalism, what role do symbols and formal rules play?

Symbols stand for concepts or states, and formal rules specify how symbols can be transformed, allowing systematic, rule-governed information processing.

53

What is computationalism about the mind?

Computationalism is the view that cognitive processes are forms of computation, in which the mind manipulates symbols according to formal rules, much like a computer.

54

What is the suggested relationship between the brain and the mind in functionalist cognitive science?

The brain is seen as the physical system that implements the functional organization that constitutes the mind; implementing the right functional organization is sufficient for having a mind.

55

How does functionalism differ from defining something by its physical constitution?

Functionalism defines something by what it does (its role or function), whereas a physical-constitution view defines it by what material it is made of (its physical substrate).

56

What is functional individuation of a cognitive component?

Functional individuation means identifying a component by the role it plays in a system, such that different physical realizations (e.g., different ways of encoding the number 9) count as the same component if they perform the same function.

57

How does a law-based explanation differ from a functional analysis in cognitive science?

Law-based explanation cites universal laws that must always hold and are used for prediction, whereas functional analysis explains a capacity by decomposing it into functional components and their roles, focusing more on explanation than pure prediction.

58

Define functionalism in the philosophy of mind.

Functionalism is the view that mental states are defined by their functional roles—what they do in the cognitive system—rather than by what they are made of physically.

59

What is the distinction between cognitive capacities and cognitive effects?

Cognitive capacities are the mind’s reliable abilities to transform one mental state into another, whereas cognitive effects are the observable ways those capacities manifest in behavior.

60

According to Chomsky’s critique, what problem does language pose for behaviorism?

People can produce and understand sentences they have never heard before, so simple stimulus–response accounts based on past reinforcement cannot fully explain language use, suggesting a more complex internal system.

61

How did Tolman’s cognitive map results challenge strict behaviorism?

They showed rats could learn the layout of a maze without reinforcement, implying internal representations (cognitive maps) rather than just stimulus–response links driven by rewards and punishments.

62

What did Tolman’s 1948 cognitive map experiment with rats show about learning?

Tolman’s experiment showed that rats formed a cognitive map of a maze: even rats that had not been rewarded during initial exploration could quickly find food once it was introduced, suggesting learning without reinforcement and challenging strict behaviorism.

63

What is a Skinner box and what was it used to study?

A Skinner box is an experimental apparatus in which an animal (such as a rat) learns to perform actions (like pressing a lever) in response to stimuli (like a light) to receive rewards (like food pellets), used to study stimulus–response learning and reinforcement.

64

How do positive and negative reinforcement relate to behavior shaping in behaviorism?

Positive and negative reinforcement are used to increase the likelihood of a desired response to a stimulus: positive reinforcement adds a rewarding consequence, while negative reinforcement removes an unpleasant one.

65

In behaviorism, what is meant by a stimulus–response relationship?

A stimulus–response relationship means that when a subject encounters a specific stimulus, it reliably produces a particular response unless some external factor intervenes.

66

Why did behaviorists move away from introspection as a method?

Introspection was considered unreliable because participants’ reports of their own mental states were subjective and difficult to verify, so behaviorists focused on observable behavior instead.

67

What is the core claim of behaviorism about explaining behavior?

Behaviorism claims that behavior can be explained as stimulus → response, and that prediction and control of observable behavior can be achieved by modifying stimuli in observable environments, without reference to internal mental states.
