Learning
Definition (from slide):
A relatively permanent change in behavior that occurs as a result of experience with events
Important exclusions (from slide):
❌ Not temporary changes (e.g., fatigue)
❌ Not changes due to maturation
Why this matters:
This definition separates learning from other causes of behavior change.
Maturation
Definition (implied by slide):
Behavior change that occurs due to the passage of time and physical growth, not experience
Why it’s important here:
The slide explicitly contrasts maturation with learning to clarify what does NOT count as learning.
Learning Curve
Definition (from slide):
A graph showing the acquisition of a behavior change over many learning trials
Key properties (from slide):
Monotonic → learning moves in one direction (improves)
Negatively accelerated → learning slows down over time
Why it matters:
Learning is fastest early on, then gains become smaller — a core empirical pattern in learning research.
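Numerical sketch (mine, not from the slide): a monotonic, negatively accelerated curve is often modeled as an exponential approach to an asymptote; the exponential form, the asymptote of 1.0, and the rate of 0.5 are all illustrative assumptions.

```python
import math

def performance(trial, asymptote=1.0, rate=0.5):
    """Illustrative negatively accelerated learning curve:
    performance rises toward an asymptote, with shrinking gains."""
    return asymptote * (1 - math.exp(-rate * trial))

perf = [performance(n) for n in range(6)]
gains = [b - a for a, b in zip(perf, perf[1:])]

# Monotonic: performance never decreases across trials
assert all(b >= a for a, b in zip(perf, perf[1:]))
# Negatively accelerated: each trial's gain is smaller than the last
assert all(g2 < g1 for g1, g2 in zip(gains, gains[1:]))
```

The two assertions capture exactly the two slide properties: direction never reverses (monotonic), and the improvement per trial shrinks (negative acceleration).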
MCQ Memory Anchor
Learning = experience → lasting behavior change
Maturation = growth, not learning
Learning curve = learning over trials, slows with time

Memory
Definition (from slide):
The retention or retrieval of information over time
Key emphasis (from slide):
__ focuses on how behavior change or information is retained, not how it is acquired.
Engram (Memory Trace)
Definition (from slide):
The record of information stored in memory, also called the memory trace
Why this matters:
It reflects the assumption that memory has a physical or representational basis in the organism.
Context the slide gives (important distinction):
Learning → acquisition of behavior change
Memory → retention and retrieval of that change
This distinction explains why learning and memory are studied as related but separate processes.
MCQ Memory Anchor
Memory = retention or retrieval
Engram = stored record of information
Learning ≠ memory (acquisition vs retention)
Comparative Psychology
Definition (from slide):
The scientific study of differences between species
Explanation (from slide context):
__ __ compares human and nonhuman animal behavior, but researchers must be cautious about how much mental complexity they assume in animals.
Morgan’s Canon
Person (from slide):
C. Lloyd Morgan (1903)
Definition (from slide):
Animal behavior should not be interpreted using higher psychological processes if it can be explained using simpler processes
Explanation:
Morgan’s Canon argues for parsimonious explanations, meaning scientists should choose the simplest explanation that fully accounts for the behavior, rather than assuming complex mental abilities.
Example (aligned with slide intent):
If an animal solves a task through trial-and-error learning, we should not assume reasoning or insight unless simpler explanations fail.
MCQ Memory Anchor
__ __ = compare species
Morgan’s Canon = use the simplest (parsimonious) explanation
Philosophical Origins
Philosophical Origins refers to the early philosophical questions and ideas that shaped how psychologists think about knowledge, learning, and the mind. These ideas set the foundation for scientific theories of learning and memory.
Epistemology
Definition (from slide):
The philosophical study of the nature of knowledge — asking how we come to have knowledge
Explanation (connected to the title):
Epistemology is included here because learning and memory research depends on understanding where knowledge comes from and how it is formed.
MCQ Memory Anchor
Philosophical origins = ideas about knowledge before psychology
Epistemology = how knowledge is possible
Psychological Origins
Refers to early schools of psychology that shaped how researchers began to study the mind and behavior before modern learning theories.
Person: Wilhelm Wundt
Definition: Focus on the structure of the conscious mind, studied through introspection (a method in which one looks carefully inward, reporting on inner sensations and experiences)
In other words, the study of the structure of the mind → the sensations, images, and feelings
Wilhelm Wundt (first psych lab, practiced introspection)
Edward Titchener → studied under Wundt, then worked at Cornell; attempted to study the structure of the conscious mind (dubbed this approach structuralism)
William James
Definition:
An approach that focused on what mental processes do and how they help individuals adapt to their environment. William James → proposed the approach of __ (emphasis on the functions of consciousness); his approach was influenced by Darwin
E.g., asked questions like, “how does the mind adapt to new circumstances?”
Example:
Studying memory in terms of how it helps us learn and survive, not just what it is made of.
Why it matters:
Cognitive psychology shares this goal of understanding how thinking helps us function.
MCQ Memory Anchor
___ (Wundt) = structure of conscious experience
___ (James) = purpose of consciousness
Structuralism
Person:
Wilhelm Wundt
Definition:
The view that the proper topic for psychology is conscious processes and immediate experience, studied mainly through introspection.
Explanation:
___ aimed to break conscious experience into basic elements, making it one of the first attempts to scientifically study the mind.
Functionalism
Person:
William James
Definition:
An approach emphasizing the functions of consciousness and how mental processes help individuals adapt to their environment.
Explanation:
__ focused on what the mind does, which influenced later theories of learning and behavior.
Scientific Origins:
This slide explains how evolutionary theory shaped psychology by viewing learning and behavior as adaptations that help organisms survive and reproduce.
Evolution
Definition (from slide):
Characteristics of species change over time, so descendants may differ from earlier members.
Explanation:
Evolution provides the biological foundation for why learning exists — it helps organisms adapt to their environment.
Natural Selection
Person:
Charles Darwin
Definition (from slide):
Individuals vary in traits
Some traits increase survival and reproduction
These traits are passed to offspring
Helpful traits become more common over generations
Explanation:
Learning is adaptive because it increases an organism’s chances of survival.
Environment of Evolutionary Adaptiveness (EEA)
Definition (from slide):
The environment that existed when a trait was evolving.
Explanation:
Some learning mechanisms evolved to solve problems in ancestral environments, even if those problems are different today.
Historical Contrast on the Slide: Lamarck vs. Darwin
Jean-Baptiste Lamarck (false start):
Proposed that acquired characteristics (traits gained during life) could be inherited.
Why he’s mentioned:
The slide includes Lamarck to show an early but incorrect idea about evolution that was later replaced.
Darwin (accepted view):
Traits are inherited only if they are genetically passed on, through natural selection.
MCQ Memory Anchor
Lamarck = acquired traits inherited (wrong)
Darwin = natural selection (correct)
Evolution → learning as adaptation
An Example of Evolution-Based Psychological Theory
This slide gives a concrete example of how evolutionary theory is used to explain learning and behavior, instead of just describing evolution in general.
Buss’ Sexual Strategies Theory
Person: David Buss
What Buss thought (main claim):
He argued that human mating behavior can be explained using evolutionary principles (like other species) — men and women evolved different mating strategies because they faced different reproductive pressures.
Definition:
Human mating behavior evolved via natural selection, leading men and women to pursue short-term and long-term mating strategies.
Core idea (why sexes differ):
Differences come from different biological investments in reproduction.
Key logic (helpful examples):
Men: lower biological investment → more emphasis on short-term mating + fertility cues
Women: higher biological investment → more emphasis on resources/commitment and selectivity; short-term mating can sometimes provide resources or genetic benefits
Humans are unusual (from slides):
Single-offspring births
Paternal investment
Female menopause
Hidden ovulation
Sex outside fertile periods
MCQ memory anchor:
Buss = evolutionary explanation of sex differences in mating strategies
Buss wants to explain human mating behavior in the same terms as behavior in other species, taking into account the environment of evolutionary adaptiveness
Human mating behavior is unusual in some respects:
Single offspring birth (typically shared with some primates)
Paternal investment in humans (greater in humans)
Female menopause (useful for mothers to be around, e.g., grandma)
Hidden ovulation in females
Females → sexually receptive even when not ovulating (in humans; in some other species the female may kill the male outside fertile periods)
Sex is private (humans)
What it involves:
Both sexes pursue short-term + long-term mating strategies
Men devote a greater part of their effort to short-term mating
Successful short-term mating (men) → many partners, no parental involvement, only with potentially fertile women
Successful long-term mating (men) ⇒ probable fertility, certainty of paternity (more mate guarding)
Females, by contrast:
Successful short-term mating → immediate resources (help staying alive)
Assessing prospects (likelihood) for long-term mating
Cheating: a better genetic contribution than the potential long-term mate offers (driven by behavior, not conscious choice)
Long-term mating:
Find mates who are able and willing to provide resources (and who care personally; intelligence suggests they will be able to care for children)
Behaviorism
This slide introduces behaviorism, an approach that explains learning by focusing on observable behavior rather than internal mental processes.
Person(s) (from slide):
John B. Watson (1913)
B. F. Skinner (radical behaviorism)
Definition (from slide):
An approach that views psychology as the study of observable events, defining learning in terms of stimulus–response (S–R) relationships
Key features emphasized on the slide:
Antimentalistic → rejects mentalistic explanations
Empiricist & associationist → behavior shaped by experience
Reductionist → explains behavior using basic components
Skinner’s contribution (from slide):
Radical behaviorism attempted to remove all mentalistic concepts from psychology.
Why this slide matters:
Behaviorism strongly influenced early learning research and later motivated the development of cognitive approaches that reintroduced mental processes.
MCQ Memory Anchor
Behaviorism = observable behavior only
Watson → S–R psychology
Skinner → radical behaviorism, no mental terms
Cognitive Approaches to Learning
This slide introduces an approach that explains learning by focusing on mental processes that guide behavior, not just observable actions.
Definition (from slide):
An approach to learning that uses measures of behavior to develop and test theories about mental processes.
What this approach assumes (from slide):
The mind processes information by encoding, transforming, storing, and retrieving it
The mind is often compared to a computer
Learning involves forming an internal (mental) representation that guides behavior
The organism is an active processor of information, not a passive responder
Why this matters (as intended by the slide):
The cognitive approach contrasts with behaviorism by arguing that mental representations must be studied to fully understand learning.
MCQ Memory Anchor
Cognitive approach = behavior used to infer mental processes
Key idea = internal representations guide behavior
Repetition Priming
Definition (PROF’s exact slide words):
“Processing of a stimulus is affected by a previous presentation of it.”
Examples from the slide:
Can often be shown in preference (liking) judgments:
Maslow (1937): Preference for familiar tasks, familiar lab, and familiar pictures.
Not just humans: Rats show neophobia (fear of new things) for foods, places, and people.
Other methods of __ __:
Priming in perceptual identification: Easier to identify rapidly-shown words if they had been shown earlier.
Priming in word completion: Easier to complete word fragments (-V---V-, ---F-M-) if words had been seen earlier (e.g., EVASIVE, PERFUME).
Explanation (mine):
__ __ occurs when a previous exposure to a stimulus affects how you process it later. Essentially, if you see something multiple times, you’re faster and more accurate at recognizing or completing it. For example, seeing the word "EVASIVE" earlier makes you more likely to complete a word fragment like "-V---V-" with that word.
The Maslow (1937) example also shows that familiarity with a task or environment (like being in the same lab or seeing the same picture) leads to more positive reactions, because the brain processes familiar stimuli more easily.
Habituation
Definition (PROF’s exact slide words):
“Decrease in response to a stimulus after repeated presentations”
Bullets from the slide
Response declines with repetition
Not due to sensory adaptation or fatigue
Considered a simple form of learning
Examples from the slide
Looking response in infants decreases after seeing the same stimulus repeatedly
Coolidge effect: When a male animal’s sexual response decreases after repeated exposure to the same female (habituation). However, when a new female is introduced, the male’s sexual response suddenly increases again.
Fear responses: Animals show decreased fear responses after repeated exposure to the same stimuli.
Siphon withdrawal in Aplysia: If Aplysia receives identical touches six times within a few minutes, there is almost no withdrawal response
Neural explanation: There is a change in the strength of the connection between the sensory neuron and the motor neuron involved in the response
Explanation (mine)
Across infants, animals, and even sea slugs, repeated exposure weakens the response because the nervous system learns the stimulus is not important. This learning can be seen all the way down at the level of synaptic connections between neurons.
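Toy model (mine, not from the slide): the synaptic account can be sketched as a connection strength that weakens with each identical touch; the multiplicative decrement of 0.5 is an illustrative assumption, not a measured value.

```python
def withdrawal(touches, strength=1.0, decrement=0.5):
    """Illustrative synaptic account of habituation: each identical touch
    weakens the sensory-to-motor connection, shrinking the withdrawal."""
    for _ in range(touches):
        strength *= decrement
    return strength

responses = [withdrawal(t) for t in range(7)]

# Response declines with each repeated touch
assert all(b < a for a, b in zip(responses, responses[1:]))
# After six touches, almost no withdrawal remains (as on the slide)
assert responses[6] < 0.05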
Dishabituation
Definition (PROF’s exact slide words):
“Recovery of a response after habituation when a new stimulus is presented”
Example from the slide (fully explained)
Siphon withdrawal in Aplysia
Aplysia is a sea slug with a simple nervous system.
It has a body part called a siphon. When touched, it reflexively withdraws the siphon for protection.
If the siphon is touched repeatedly (six times within a few minutes), the withdrawal response becomes very small. This is habituation.
After this habituation, if you shine a bright light on the animal and then touch the siphon again, the siphon withdraws strongly.
Explanation (mine)
The strong withdrawal after the bright light shows the animal was not tired or fatigued. It had learned to ignore the repeated touch. The new stimulus (light) resets the nervous system, and the response returns.
Discrimination
Definition (PROF’s exact slide words):
“The ability to tell the difference between two stimuli”
Example from the slide (now fully explained)
Bornstein et al. (1976): Use habituation and discrimination to show that 4-month-old infants can distinguish colors like adults.
Habituate the baby to a greenish blue (480 nm): Fixation times drop with repeated presentations.
After a break, present one of three slides: More habituation to blue (450 nm) than bluish green (510 nm).
Habituate to bluish green (510 nm): Fixation times drop with repeated presentations.
After a break, present one of three slides: More habituation to green (540 nm) than greenish blue (480 nm).
Explanation (mine)
The experiment shows infants’ ability to discriminate between colors using habituation (getting used to a stimulus) and discrimination (noticing differences between stimuli). After the infants are repeatedly exposed to a color, their response decreases. But when a different color is introduced, their attention increases, proving they can tell the difference.

Aplysia
Definition (PROF’s exact slide words):
“__ californica is a giant marine snail with a very simple nervous system with few neurons. The neurons are large and therefore easy to study.”
Example from the slide
The behavior studied was siphon withdrawal: Aplysia uses a siphon to take in sea water, and in response to danger, it can withdraw the siphon back into its body.
Siphon withdrawal shows habituation: If Aplysia receives identical touches six times within a few minutes, there is almost no withdrawal response.
Neurally, this happens because there is a change in the strength of the connection between the sensory neuron and motor neuron involved in the response.
Siphon withdrawal shows dishabituation: After habituation has occurred, if a bright light is shined, the animal will show a big siphon withdrawal again after a touch.
Explanation (mine)
Aplysia is used for studying simple forms of learning because it has large neurons, making it easy to observe neural changes. The siphon withdrawal reflex is a protective behavior. When Aplysia is repeatedly touched, it stops withdrawing its siphon (habituation). When a new stimulus (like the bright light) is added, the withdrawal response comes back (dishabituation), showing that the organism’s nervous system is learning to ignore some stimuli while responding to others.
Sensitization
Definition (PROF’s exact slide words):
“Magnification of a response as a result of repetition”
Examples from the slide
Siphon withdrawal in Aplysia: After a strong stimulus (such as a bright light), Aplysia shows an increased response to a mild touch, demonstrating __.
Emotional responses: Exposure to a painful stimulus (e.g., a bee sting) can make you more __ to minor stimuli afterward.
Explanation (mine)
___ occurs when an intense stimulus increases the magnitude of the response to subsequent, less intense stimuli. It’s the opposite of habituation, where repeated exposure decreases a response. In Aplysia, a bright light makes the animal more sensitive to a touch afterward, showing this change in neural sensitivity.
Thompson et al. (1973): Dual-Process Theory of Habituation and Sensitization
Definition (PROF’s exact slide words):
“Response to stimulus depends on two different sets of neurons”
Examples from the slide:
Type H neurons: These neurons are most directly involved in the reflex arc and tend to habituate to repeated stimulation.
Type S neurons: These are more central and reflect the general state of arousal in the organism; they enhance responsiveness (leading to sensitization).
Explanation (mine):
Thompson’s theory explains that habituation and sensitization are controlled by two separate systems of neurons:
Type H neurons are responsible for habituation. These neurons “get used to” the stimulus and start to respond less over time.
Type S neurons are responsible for sensitization. These neurons become more active when an organism is in a heightened state of arousal, making the response stronger.
The important takeaway: habituation and sensitization aren’t opposites. They are controlled by different neural systems that can operate simultaneously to shape behavior. For example, an animal might habituate to a repetitive noise (Type H neurons) but become more sensitive to a loud sound if it’s associated with danger (Type S neurons).
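The takeaway above can be sketched numerically (mine, not from the slide): the observed response is the sum of a Type H pathway that habituates with repetition and a Type S pathway driven by arousal; all function forms and numbers are illustrative assumptions.

```python
def habituation(n, r0=1.0, decay=0.7):
    """Type H pathway: reflex response shrinks with each repetition (illustrative)."""
    return r0 * (decay ** n)

def sensitization(arousal, gain=0.5):
    """Type S pathway: general arousal boosts responsiveness (illustrative)."""
    return gain * arousal

def response(n, arousal):
    """Dual-process: observed response = habituated reflex + arousal boost."""
    return habituation(n) + sensitization(arousal)

# Repeated mild stimulus, low arousal: response declines (habituation dominates)
quiet = [response(n, arousal=0.0) for n in range(5)]
assert all(b < a for a, b in zip(quiet, quiet[1:]))

# Same repetitions under high arousal: every response is larger (sensitization)
aroused = [response(n, arousal=1.0) for n in range(5)]
assert all(a2 > q for a2, q in zip(aroused, quiet))
```

Note how the two processes operate simultaneously rather than as opposites: habituation still occurs under high arousal, but the Type S contribution raises the whole curve.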
Habituation and Sensitization
Habituation: A decrease in response to a stimulus after repeated exposure.
Sensitization: An increase in response to a stimulus following repeated exposure to that stimulus, particularly if the stimulus is intense or noxious.
Examples from the Slide:
Habituation
Example:
Siphon withdrawal in Aplysia: If Aplysia is repeatedly touched on the siphon (a sensory organ), the response of withdrawing the siphon decreases with repeated exposure.
Infant looking times: Babies may initially react to a new toy, but over time, they lose interest if the toy is presented repeatedly.
Sensitization
Example:
Loud noise and startle response: After hearing a loud sound (e.g., a firecracker), you may become more sensitive to other sounds, jumping at every small noise.
Shock response: After experiencing an intense shock, you may show heightened reactions to smaller, less intense stimuli.
Explanation (mine):
Habituation occurs when we stop responding to a stimulus after we’ve been exposed to it repeatedly. For instance, if you live near a busy street, at first, the sound of traffic might disturb you. But over time, you become less sensitive to the noise because you have learned to ignore it.
Sensitization, on the other hand, increases your sensitivity to a stimulus after experiencing it several times, especially if it’s strong or unpleasant. For example, if you’re shocked by a loud noise, you might jump at every small sound afterward.
Acquired Motivation
Definition (PROF’s exact slide words):
“Our emotional reactions to stimuli may change as a result of experience.”
Example from the slide
Motivation is not purely innate but may reflect learning.
Many emotional responses are biphasic (two different stages).
Explanation (mine)
Emotional reactions change over time with experience. For example:
Unpleasant stimuli may become more pleasant (like how fear can become excitement).
Pleasing stimuli may become unpleasant after too much exposure (like eating too much of your favorite food).
Opponent-process Theory of Acquired Motivation
Definition (PROF’s exact slide wording):
The __ __ theory suggests that emotional responses (both positive and negative) trigger a counteracting response after they occur. Over time, the initial emotional response (A) habituates, while the counteracting response (B) sensitizes, leading to stronger and more lasting emotional reactions.
Key Examples from the Slide:
Drug Tolerance and Addiction
Example: As a person becomes tolerant to a drug (like heroin), the initial euphoria (A) diminishes, but the negative withdrawal symptoms (B) become more pronounced. Over time, the person requires more of the drug to reach the same pleasurable effects, and the negative withdrawal feelings are amplified.
Skydiving
Example: Initially, a person might feel intense fear (A) during their first skydiving experience, followed by exhilaration (B). However, after multiple jumps, the fear response diminishes, and the excitement grows stronger, as their emotional system adapts.
Romantic Passion
Example: In romantic relationships, intense passion (A) may be followed by feelings of longing or loss (B). Over time, these feelings of longing may increase, even though the initial intensity of the romantic attraction weakens.
Shock Example (from the slide)
Example: If you are shocked (intense emotional experience A), your heart rate may increase drastically. However, after repeated shocks, the initial heart rate increase (A) will become less pronounced, while the drop in heart rate (B) after the shock becomes more pronounced and long-lasting. This shows that intense emotional reactions (A) tend to habituate while the counteracting responses (B) sensitize over time.
Explanation (mine):
The ___ ___ theory explains that emotions don’t happen in isolation — they trigger opposite responses (counterreaction B). The initial emotion (reaction A) weakens over time due to habituation, but the opposite emotional response (B) grows stronger through sensitization.
For example:
In drug addiction, you stop feeling the initial pleasure (A), but the withdrawal symptoms (B) become stronger and more intense.
In skydiving, your fear response (A) lessens, but the excitement (B) increases as you get more used to the activity.
Significance:
___ ___ theory explains how addiction and emotional adaptation work. It’s not just about the initial emotional experience; it’s about how the emotional system balances itself by adjusting over time.
This theory is useful for understanding tolerance in addiction and emotional changes in activities like skydiving or romantic passion.
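A toy simulation (mine, not from the slides) of the A/B dynamics: A habituates (shrinks with exposures) while B sensitizes (grows, up to a ceiling), so the net felt emotion flips sign over repeated exposures, as in the drug tolerance example; all numbers are illustrative assumptions.

```python
def a_process(n, a0=10.0, habituation=0.8):
    """Primary emotional reaction A: habituates (weakens) with repetition."""
    return a0 * (habituation ** n)

def b_process(n, b0=2.0, sensitization=1.3, cap=12.0):
    """Opponent reaction B: sensitizes (grows) with repetition, up to a cap."""
    return min(b0 * (sensitization ** n), cap)

def net_emotion(n):
    """Felt emotion = A minus the counteracting B process."""
    return a_process(n) - b_process(n)

# Early exposures: A dominates (net positive, e.g., initial drug euphoria)
assert net_emotion(0) > 0
# After many exposures: B dominates (net negative, e.g., withdrawal)
assert net_emotion(8) < 0
```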
Stimulus Learning
Definition (PROF’s exact slide words):
“Learning about the relationships (associations) between stimuli.”
Examples from the Slide:
Recall / Recognition
Example: Free recall, cued recall, and recognition tasks measure how well we can retrieve information from memory.
Repetition Priming
Example: You identify a word faster if you’ve seen it before, such as identifying “cat” faster if it was shown to you earlier.
Habituation / Sensitization
Example:
Habituation: Over time, you stop reacting to a familiar stimulus (e.g., you stop noticing a clock ticking).
Sensitization: You become more sensitive to a stimulus after a strong or noxious experience (e.g., after an initial shock, you become more sensitive to other noises).
Opponent-process Theory of Acquired Motivation
Example: If you experience a positive emotion (e.g., excitement), you might eventually feel the opposite (e.g., boredom) after prolonged exposure.
Perceptual Learning
Example: After seeing a face repeatedly, you recognize and distinguish facial features (e.g., recognizing someone in a crowd).
Explanation (mine):
__ ___ occurs when organisms learn through exposure to stimuli and form associations between them. There are various ways of stimulus learning:
Habituation (ignoring repetitive, non-important stimuli)
Sensitization (becoming more sensitive to important stimuli)
Perceptual Learning (learning to recognize and distinguish specific stimuli like faces).
Repetition Priming shows us that previous exposure makes it easier to identify stimuli later.
Hedonic Treadmill
Definition (Prof’s exact slide wording):
Emotion systems adapt to life circumstances, returning us to our emotional set point.
Life experiences have only a temporary effect on our level of happiness.
Wealth and physical attractiveness have only a very weak relationship with happiness.
Lottery winners
Widows/widowers
Crippling accidents
Explanation (mine)
__ __ is a concept from Diener et al. (2006) that suggests that our happiness levels tend to return to a baseline after experiencing significant life events, whether positive or negative. This model proposes that even major events, like winning the lottery or suffering from a traumatic incident (e.g., the death of a loved one or a serious injury), have only a temporary effect on our happiness. Over time, we tend to adapt to these experiences, returning to a natural emotional set point.
Examples:
Lottery winners often report an increase in happiness initially, but after a period, their happiness tends to return to baseline levels.
Widows/widowers, while they experience a period of grieving and lower happiness, eventually return to their baseline emotional state.
Crippling accidents may lead to a period of distress, but people typically adapt, and their happiness levels stabilize over time.
This suggests that material gains or losses do not contribute to long-term happiness—internal factors (e.g., emotional set point, personality) and how we adapt to life events play a larger role in our happiness over time.
Perceptual Learning
Definition (from the slide):
“Once we have learned how to perceive or identify a stimulus, it is easier to learn other things about it.”
Key Examples from the Slide:
Gibson & Gibson (1955)
Example: Participants were shown “scribble” cards (coiled drawings differing in tightness and left-right orientation). After repeated exposure to one "standard" scribble, participants were asked to determine if other scribbles matched it. With repeated trials, they became more accurate at distinguishing between scribbles.
Explanation: This shows that repeated exposure to similar stimuli helps improve the ability to distinguish subtle differences.
Gibson & Walk (1956)
Example: Rats raised in an enriched environment (exposed to shapes like circle and triangle) were faster to discriminate between the two shapes compared to rats in a control environment.
Explanation: This shows how repeated exposure to stimuli helps improve discrimination ability over time.
Additional Concepts and Mechanisms (added for depth):
Differentiation
Definition: Learning to make finer distinctions between stimuli that appear similar at first glance.
Example: In the Gibson & Gibson (1955) experiment, participants became more sensitive to subtle differences between scribbles as they practiced sorting them.
Explanation: Differentiation refers to the ability to notice minor differences between similar stimuli, which is key for recognizing and distinguishing things in our environment (e.g., identifying different kinds of animals or objects).
Stimulus Storage
Definition: Storing specific features of stimuli to enable faster recognition when new, similar stimuli are encountered.
Example: A person who has frequently encountered a particular type of flower will quickly identify a similar flower in a new environment because they’ve stored key features.
Explanation: When we are repeatedly exposed to stimuli, our brain stores important characteristics of that stimulus to help us recognize similar stimuli more quickly in the future.
Attention Weighting
Definition: Learning to focus on important features of stimuli while ignoring irrelevant ones.
Example: In the case of a penny, people learn to focus on the shape and color rather than irrelevant features like the scratches on the surface.
Explanation: Attention weighting allows us to prioritize which features of a stimulus are most relevant, which is why we don’t notice all aspects of an object (e.g., when looking at a face, we focus on the eyes and mouth, not the shape of the ears).
Unitization
Definition: Learning to combine stimuli into larger, cohesive units to process them more easily and holistically.
Example: The Word-superiority effect shows that it’s easier to recognize a whole word like “CAT” than to recognize the individual letters “C”, “A”, and “T” on their own.
Explanation: Unitization refers to the ability to process stimuli as a whole, rather than as separate parts. When we recognize something as a unit (like a word), our brain processes it more efficiently and accurately.
Memory Hook:
__ __ = Improved ability to perceive or identify stimuli with experience.
Significance:
These concepts show that __ ___ isn’t just about learning to recognize stimuli; it also involves fine-tuning how we process them, from differentiating between similar objects to focusing on relevant features.
As we become more experienced with a type of stimulus, we refine our ability to process and identify similar stimuli, making it easier to recognize and interact with the world around us.
Perceptual Learning in Holistic Face Recognition
Definition (from the slide):
Perceptual learning in face recognition refers to the process by which faces are recognized more easily when viewed as holistic units (whole faces), rather than as parts.
Key Examples from the Slide:
Composites Effect
Example (from slide):
Participants are shown half of a famous face. When this half is paired with half of another famous face, it becomes much more difficult to recognize. This happens because we treat faces holistically (as a whole), not just as individual parts.
However, it becomes easier to recognize the faces when the two halves are misaligned.
Explanation: This demonstrates that face recognition works better when we process faces as integrated wholes, not by individual features. The composite effect shows that holistic processing is important for recognizing faces. Misaligning the halves reduces the holistic effect, making it easier to recognize the faces.
Whole Advantage
Example (from slide):
Participants study a full face. When asked to recognize it later, they can distinguish it from another face that differs only in the nose. However, they are far worse at recognizing the nose when it is shown alone, in isolation.
Explanation: This shows that we process whole faces better than individual features. The Whole Advantage implies that face recognition is more accurate when the full face is presented rather than a single feature, such as the nose.
Inversion Effect
Example (from slide):
When faces are presented upside down, it disrupts face recognition, especially sensitivity to spatial relations between facial features (e.g., the distance between eyes, nose, and mouth).
Explanation: The Inversion Effect demonstrates that face recognition is much more difficult when faces are inverted. We process faces better when they are upright, and inversion disrupts our ability to process them holistically and accurately. This effect underscores how specialized face recognition mechanisms are in the brain, optimized for upright faces.
Explanation (mine):
In this slide, the key focus is on holistic face processing, which means we tend to recognize faces as a whole rather than focusing on individual features. The Composites Effect and Whole Advantage illustrate this by showing how much harder it is to recognize faces when they are split into parts or isolated features (like the nose). The Inversion Effect further emphasizes that our face recognition abilities are orientation-dependent, and we struggle more with faces presented upside down.
Significance:
The Composites Effect and Whole Advantage show that holistic processing is a fundamental aspect of face recognition.
The Inversion Effect highlights the specialization of the brain for processing faces in their natural, upright position.
This research has important implications for understanding how the brain processes faces differently from other objects.
🔥 The difference in one sentence
Composite Effect: The whole interferes with recognizing a part
Whole Advantage: The whole helps you recognize a part
Both prove: faces are processed holistically, not by features.
But:
Composite = whole is a problem
Whole advantage = whole is a benefit
Wagner (1976) Cognitive Theory of Habituation
This theory emphasizes the distinction between short-term memory and long-term memory.
Habituation could reflect either short-term memory or long-term memory.
Responses are reduced if stimulus is recognized.
From the slide
Short-term habituation
Stimulus is likely to be recognized if it is still in short-term memory.
Massed repetitions increase probability that one or more occurrences are in STM.
Long-term habituation
Stimulus may be recognized if it can be retrieved from long-term memory.
Spaced repetitions increase probability that occurrences can be retrieved.
Expectancies / Missing stimulus effect
Our memories allow us to build expectancies.
We may show greater orienting response if expected stimulus fails to occur.
Diagram from the slide (what it means)
Stimulus Input → Stimulus Analyzer → Comparator → Response (ORIENT or NOT)
The Comparator checks the current stimulus against what is stored in:
STM (recent exposures)
LTM (earlier exposures)
If the stimulus is recognized → reduced orienting (habituation)
If the stimulus is not recognized or expected but missing → strong orienting response
Explanation (mine)
Wagner is saying habituation is not just “help I’m bored of this stimulus.”
It is a memory process.
You stop responding because your brain says:
“I’ve seen this before.”
If you saw it very recently → STM → massed trials cause habituation
If you saw it a while ago but remember it → LTM → spaced trials cause habituation
If you expect it and it doesn’t happen → you react more, not less (missing stimulus effect)
This is why:
Massed trials = short-term habituation
Spaced trials = long-term habituation
Reminder: this slide is the reason your professor cares about spacing vs massing and memory in habituation.
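The comparator logic above can be sketched as a tiny decision function. This is a minimal sketch only; the function and variable names (orienting_response, stm, ltm) are illustrative, not from the slide.

```python
# Sketch of Wagner's (1976) comparator logic for habituation.
# Names and data structures here are illustrative assumptions.

def orienting_response(stimulus, stm, ltm, expected=None):
    """Return True if the organism orients to the stimulus.

    stm: stimuli still active in short-term memory (recent/massed exposures)
    ltm: stimuli retrievable from long-term memory (earlier/spaced exposures)
    expected: a stimulus the organism currently expects, or None
    """
    # Missing-stimulus effect: an expected stimulus that fails to occur
    # produces a strong orienting response.
    if stimulus is None:
        return expected is not None
    # Recognized in STM or LTM -> habituated -> reduced orienting.
    if stimulus in stm or stimulus in ltm:
        return False
    # Novel, unrecognized stimulus -> orient.
    return True

# Massed repetitions keep the tone in STM, so orienting is suppressed:
print(orienting_response("tone", stm={"tone"}, ltm=set()))   # False (habituated)
# A novel stimulus still triggers orienting:
print(orienting_response("light", stm={"tone"}, ltm=set()))  # True
# Expected tone omitted -> orienting (missing-stimulus effect):
print(orienting_response(None, stm={"tone"}, ltm=set(), expected="tone"))  # True
```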

Contingency Learning
Exact idea from the slide
Acquisition of knowledge about correlations between stimuli
Learning that one stimulus predicts another stimulus
even when that relationship is irrelevant to the task.
Example from the slide — Lin & MacLeod (2018)
Subjects complete 8 blocks, 48 trials each.
On every trial they see a word:
MONTH, UNDER, PLATE, CLOCK
printed in red, yellow, or green.
Their only task is to press a key for the color.
They are not told to pay attention to the word.
Critical manipulation
One word appears equally often in all three colors → baseline
Other words appear:
83.33% of the time in one color → high contingency
8.33% in the other colors → low contingency
Example pattern the brain starts picking up:
MONTH → usually red
UNDER → usually green
Results from the slide
Compared to baseline:
High-contingency trials → faster responses
Low-contingency trials → slower responses
This happens:
Even though the word is irrelevant
In the first block (very fast learning)
What this actually shows (the part that was missing)
Your brain learns:
“This word predicts this color”
without trying, without awareness, and without needing it for the task.
So when MONTH appears, your brain is already biased toward red before you even look at the color.
That prediction makes you faster when the prediction is correct and slower when it is violated.
Why this is ___ ___
Because you learned the correlation:
Word ↔ Color
That is learning about the statistical relationship between stimuli.
That is __ ___.
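The 83.33% / 8.33% split in the design above falls out of simple counting: with 48 trials per block and 4 words, each word appears 12 times per block. The per-block trial allocation (10 frequent-color trials, 1 in each other color) is an assumption inferred from the slide's percentages.

```python
# Sketch of the contingency manipulation described for Lin & MacLeod (2018).
# Assumption: each word gets 12 trials per block, split 10/1/1 across colors.

from fractions import Fraction

trials_per_word_per_block = 48 // 4              # 4 words, 48 trials per block
high = Fraction(10, trials_per_word_per_block)   # frequent color
low = Fraction(1, trials_per_word_per_block)     # each infrequent color

print(float(high))  # 0.8333... -> the 83.33% on the slide
print(float(low))   # 0.0833... -> the 8.33% on the slide
```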
Classical Conditioning
Origin of the idea — Ivan Pavlov
Ivan Pavlov was not a psychologist. He was a physiologist studying digestion in dogs.
While studying salivation, he noticed something strange. The dogs began salivating before food was placed in their mouths. They salivated when they heard the footsteps of the lab assistant, the sound of the door, or any cue that reliably occurred before food.
Pavlov realized the dogs were not simply reacting to food. They were learning that one stimulus predicts another stimulus.
This discovery is called Classical Conditioning.
Definition
Classical conditioning is learning about the relationship between stimuli.
A neutral stimulus becomes meaningful because it predicts something important.
Example from Pavlov:
Food naturally causes salivation.
If a tone is repeatedly presented before food, the tone alone will later cause salivation.
The dog learned: the tone predicts food.
Pavlov’s Basic Components (from slides)
Unconditioned stimulus — food
Unconditioned response — salivation to food
Conditioned stimulus — tone
Conditioned response — salivation to tone
The conditioned response is not necessarily identical to the unconditioned response
Extinction — elimination of a conditioned response when the conditioned stimulus is presented alone
Spontaneous recovery — extinction fades away with time
Other Common Procedures (from slides)
These are different laboratory methods used to demonstrate the same learning process.
Eyeblink conditioning
An airpuff to the eye naturally causes a blink. If a tone is repeatedly presented before the airpuff, the tone alone will later cause blinking. The organism learned the tone predicts the airpuff.
Fear conditioning (Conditioned Emotional Response)
A shock naturally causes freezing. If a tone is repeatedly presented before the shock, the tone alone will later cause freezing. The organism learned the tone predicts danger.
Skin Conductance Response (Galvanic Skin Response)
Skin conductance measures sweating and arousal. Humans show increased skin conductance to a stimulus that has been paired with shock. This is used to study the role of awareness in conditioning. The key point from the slide is that awareness is sufficient but not necessary for conditioning.
What your professor corrects on later slides
For many years, classical conditioning was seen as primitive: if conditioned stimulus–unconditioned stimulus pairing is repeated enough, the response to the unconditioned stimulus transfers to the conditioned stimulus.
The slides say this is wrong.
Many repetitions are not needed (for example, conditioned taste aversion)
Repetition alone is not sufficient; the value of information in the conditioned stimulus is critical
The conditioned response is not identical to the unconditioned response
Classical conditioning is a sophisticated cognitive process that allows organisms to predict the occurrence of important events.
Evidence: Devaluation Experiments (from slides)
A dog learns that a tone predicts food and salivates to the tone.
Then the food is devalued by making the dog sick after eating it.
When the tone is played again, the dog no longer salivates.
If conditioning were just reflex transfer, the dog would still salivate. Instead, the dog learned that the tone predicts food, and when food loses value, the response disappears.
Rescorla (1967) — Proper Control Condition
The conditioned stimulus works because it provides information about the unconditioned stimulus.
The informativeness of the conditioned stimulus is defined as the difference between:
the probability of the unconditioned stimulus when the conditioned stimulus is present
the probability of the unconditioned stimulus when the conditioned stimulus is absent
The ideal control condition is the Truly Random Control Procedure, where these probabilities are equal. In this case, no learning occurs because the conditioned stimulus provides no information.
Explanation (mine)
Classical conditioning is not about repetition. It is about how well one stimulus allows the organism to predict another. The organism is learning predictions, not reflexes.
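Rescorla's informativeness measure is just a difference of two conditional probabilities. The probability values below are made up for illustration.

```python
# Informativeness of a CS (Rescorla, 1967):
# P(US | CS present) minus P(US | CS absent).

def informativeness(p_us_given_cs, p_us_given_no_cs):
    return p_us_given_cs - p_us_given_no_cs

# CS is informative -> learning occurs:
print(round(informativeness(0.8, 0.1), 2))  # 0.7
# Truly Random Control: probabilities equal -> no information, no learning:
print(round(informativeness(0.5, 0.5), 2))  # 0.0
```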
Unconditioned Stimulus (US)
Definition: e.g., food
Example from the slide
Food naturally produces salivation.
Explanation (mine)
This is a stimulus that automatically causes a response.
No learning is required.
Food → salivation happens naturally.
Unconditioned Response (UR)
Definition: e.g., salivation to food
Example from the slide
Salivating when food is placed in the mouth.
Explanation (mine)
This is the natural response to the unconditioned stimulus.
It happens before any conditioning.
Conditioned Stimulus (CS)
Definition: e.g., tone
Example from the slide
Originally, the tone does not cause salivation.
After being paired with food repeatedly, the tone alone produces salivation.
Explanation (mine)
The __ __ is a stimulus that starts out neutral.
It only becomes important because the animal learns:
“This stimulus predicts the US.”
The tone does not naturally cause salivation.
It gains meaning only through learning.
That’s why it’s called conditioned.
Conditioned Response (CR)
Definition: e.g., salivation to tone; not necessarily identical to the unconditioned response
Example from the slide
Dog salivates to the tone.
Explanation (mine)
This is the learned response to the conditioned stimulus
Important: It is not just a copy of the UR — it is a predictive response.
Extinction
Definition
Elimination of a conditioned response as a result of presentation of the conditioned stimulus alone
Example from the slide
Tone is presented repeatedly without food → salivation disappears.
Explanation (mine)
The animal learns:
“The CS no longer predicts the US.”
Spontaneous Recovery
Definition:
Extinction fades away as a result of the passage of time
Example from the slide
After extinction, if time passes and the tone is played again, salivation briefly returns.
Explanation (mine)
Extinction does not erase learning.
The association is still there and can come back after time.
Conditioned Inhibition
Definition:
A conditioned stimulus can become associated with the absence of the unconditioned stimulus, rather than its occurrence.
Most classical conditioning teaches an organism that a stimulus means something is about to happen. ___ ___ teaches the organism that a stimulus means something is not about to happen.
Summation Test (Pavlov)
Phase 1
A dog is first trained with two separate pairings:
A bell is followed by food. The dog salivates.
A light is followed by food. The dog salivates.
At this point, the dog has learned that both the bell and the light predict food.
Phase 2
Now the experimenter changes the situation:
The bell is presented together with a tone.
No food is given.
This happens many times.
The dog begins to learn that when the tone is present, food will not occur, even though the bell normally predicts food.
Test
The experimenter now presents the light together with the tone.
Normally, the light causes salivation.
But if salivation is greatly reduced when the tone is present, this shows that the tone has become a conditioned inhibitor. The dog has learned that the tone signals the absence of food.
Retardation of Acquisition Test
Phase 1
The dog is trained:
A bell is followed by food.
The dog salivates to the bell.
Phase 2
Now the bell and a tone are presented together and no food follows.
The dog learns that the tone means food will not happen.
Test
The experimenter now tries to condition the tone with food by pairing the tone with food.
If the dog is very slow to learn that the tone now predicts food, this shows that the tone had already been strongly learned as a signal for the absence of food. This slow learning is called retardation of acquisition.
Explanation (mine)
These experiments prove that the organism is not only learning that a stimulus predicts an event. The organism can also learn that a stimulus predicts that an event will not occur. This is a different kind of learning and shows that animals track both presence and absence of important events.
Basic Phenomena of Classical Conditioning
Pre-exposure of conditioned stimulus
Latent inhibition of conditioned stimulus–unconditioned stimulus association
Acquisition (learning) curve
Monotonic negatively accelerated curve
Effects of conditioned stimulus–unconditioned stimulus interval
Two ways of varying the interval:
Delay Conditioning: Conditioned stimulus starts before unconditioned stimulus and stays on until unconditioned stimulus is done
Trace Conditioning: Conditioned stimulus starts before unconditioned stimulus and turns off before unconditioned stimulus starts
Intermediate interval is best; depends on response
Little evidence for simultaneous or backward conditioning
Intensity of unconditioned stimulus
Typically enhances learning
Intensity of conditioned stimulus
Effect found in within-subject designs
Explanation (mine)
These are the regular patterns researchers always observe when doing classical conditioning experiments.
Latent inhibition: If the organism hears the tone many times before it is ever paired with food, it becomes harder for the tone to form an association with food later. The organism learned the tone was meaningless.
Acquisition curve: Learning happens fast at first and then slows down as it approaches a maximum level of responding.
Delay vs Trace: Learning works best when the conditioned stimulus predicts the unconditioned stimulus. If they happen at the same time or the unconditioned stimulus happens before the conditioned stimulus, learning is very weak.
Intensity of unconditioned stimulus: Stronger shocks, stronger airpuffs, or more food lead to faster learning.
Intensity of conditioned stimulus: Stronger tones or lights are easier to learn from when compared within the same subject.
This slide tells you the rules of how classical conditioning works in real experiments.
Conditioned Taste Aversion
Definition:
Avoidance of food as response to illness
Relative preparedness defined by the number of learning experiences that must occur before behavior change is reliable
Unusual Aspects (from slide)
Long delay
In normal classical conditioning, the conditioned stimulus must occur shortly before the unconditioned stimulus. In taste aversion, the illness can occur hours after the food was eaten and learning still happens.
One-trial learning
Most conditioning requires many pairings. Here, a single experience is enough for the organism to permanently avoid that food.
Only certain aspects are learned (taste in rats)
The rat associates the illness specifically with the taste, not with other things that were present like sounds, sights, or the environment.
What this looks like in an experiment (explained)
A rat drinks a new flavored water for the first time.
Several hours later, the rat becomes sick.
The next time the rat is offered that flavor, it refuses to drink it — even though:
The sickness happened much later
It only happened once
Other stimuli were present at the time but were ignored
The rat learned that the taste caused the illness.
Preparedness (what this explains)
Preparedness means some associations are biologically easier to learn than others.
Organisms are “prepared” to learn that taste predicts illness because this helps them avoid poisonous food. The number of experiences required before behavior reliably changes is called relative preparedness.
Taste and illness require very few experiences.
Other associations (like sound and illness) require many and often do not occur at all.
Explanation (mine)
This slide is showing that classical conditioning does not always follow the usual rules. Some types of learning happen extremely easily because the brain is built to make those connections for survival.
Preparedness
Definition
Relative __ defined by the number of learning experiences that must occur before behavior change is reliable
What this means (from slides across examples)
___ explains why some associations are learned extremely easily while others are very difficult to learn.
Organisms are biologically predisposed to form certain associations because those associations were important for survival in evolutionary history.
Examples from the slides
Conditioned taste aversion
A rat drinks a new flavored water and becomes sick hours later. After one experience, the rat avoids that taste forever.
The rat associates the illness specifically with the taste, not with sights or sounds that were present.
Phobias and fear conditioning (Ohman & Mineka, 2001)
Humans easily develop fear of snakes, spiders, heights, and angry faces. These are stimuli that were dangerous in our evolutionary past.
The fear response:
Happens automatically and involuntarily
Is difficult to override with logic or reasoning
Involves specialized neural circuits such as the amygdala
Little Albert (Watson & Rayner, 1920)
A child learned to fear a white rat when it was paired with a loud noise. He then generalized this fear to other white furry objects.
Explanation (mine)
__ shows that classical conditioning is not equally easy for all stimuli. The brain is wired to quickly learn associations that helped our ancestors survive, such as taste and illness or visual cues and danger. The fewer experiences required to learn an association, the more “prepared” the organism is to learn it.
Rescorla–Wagner Theory of Classical Conditioning
Definition:
Classical conditioning has evolved to allow animals to predict events.
Classical conditioning is primarily stimulus–stimulus learning.
Core Ideas (from slide)
Conditioning takes place to the extent that the unconditioned stimulus cannot successfully be predicted. In other words, the unconditioned stimulus must be surprising for learning to occur.
Every unconditioned stimulus is limited in the amount of associative strength it can support, and possible conditioned stimuli compete with each other for that associative strength.
The amount of learning that occurs on any trial depends on the difference between how much learning is possible for that unconditioned stimulus and how much conditioning has already occurred to the conditioned stimuli present.
ΔV = f(λ − V)
λ is the amount of conditioning the unconditioned stimulus can support.
V is the total associative strength of all conditioned stimuli present on that trial.
What this looks like in an experiment (explained)
At the beginning of learning, a tone is followed by food. The food is surprising because nothing predicts it yet. A large amount of learning occurs.
As trials continue, the tone becomes a good predictor of food. The food is no longer surprising. Learning slows down.
If a second stimulus, such as a light, is added along with the tone, very little learning occurs to the light because the food is already predicted by the tone. The tone has already taken most of the associative strength the food can support. This explains blocking.
Explanation (mine)
This theory explains that conditioning is driven by prediction error. Learning happens when the outcome is unexpected. Once the organism can fully predict the unconditioned stimulus, no more learning occurs. Conditioned stimuli compete with each other because the unconditioned stimulus has a limited amount of learning it can support.
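A minimal simulation makes the acquisition curve concrete. The slide gives only the general form ΔV = f(λ − V); the common linear version ΔV = α(λ − V) is used here, and the learning rate and trial count are illustrative assumptions.

```python
# Minimal Rescorla-Wagner acquisition sketch: dV = alpha * (lam - V).
# alpha = 0.3 and 10 trials are assumptions for illustration.

def acquisition(trials=10, alpha=0.3, lam=1.0):
    V, history = 0.0, []
    for _ in range(trials):
        V += alpha * (lam - V)   # big error early -> big update; error shrinks
        history.append(round(V, 3))
    return history

print(acquisition())
# Gains shrink trial by trial (e.g. 0.3, 0.51, 0.657, ...), approaching
# lam = 1.0: the monotonic, negatively accelerated learning curve.
```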
Overshadowing
Definition:
A stronger or more salient conditioned stimulus may be learned at the expense of a weaker conditioned stimulus
What actually happens in the experiment
An animal is trained with two stimuli at the same time:
A loud tone and a dim light are presented together → food.
This pairing happens many times.
Later, the experimenter tests each stimulus alone.
The animal salivates strongly to the tone, but barely responds to the light.
Why this happens (explained)
Both stimuli were paired with food the same number of times.
But the tone was more noticeable.
Because the tone stood out more, the animal mostly learned that the tone predicts food and paid little attention to the light.
The stronger stimulus “overshadowed” the weaker one during learning.
Blocking (Kamin, 1969)
Definition:
Phase 1: Conditioned stimulus 1 – unconditioned stimulus
Phase 2: Conditioned stimulus 1 and conditioned stimulus 2 – unconditioned stimulus
Result: No conditioned response to conditioned stimulus 2
What actually happens in the experiment
First, an animal learns:
A tone is followed by food → salivation.
Now the animal fully expects food when it hears the tone.
Then the experimenter changes the setup:
Tone and light are presented together → food.
After many trials, the experimenter tests the light alone.
The animal does not salivate to the light.
Even though the light was paired with food many times, no learning happened to it.
Why this happens (explained)
Because the tone already perfectly predicted the food.
The food was not surprising anymore.
Since the light added no new information, the animal did not learn it.
This proves conditioning is about prediction, not just pairing.
__ : Pre-existing learning blocks the new stimulus (CS2) from being conditioned, because CS1 already predicts the US well, and no new information is provided by CS2.
Unblocking: Increased intensity of the US allows the new stimulus to be conditioned, despite prior learning.
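Blocking falls directly out of the Rescorla–Wagner rule once compound stimuli share a single prediction error. A sketch, with the learning rate and trial counts as illustrative assumptions:

```python
# Kamin blocking with the Rescorla-Wagner rule. On compound trials each
# stimulus updates from the SAME error: lam - (V_tone + V_light).
# alpha and trial counts are assumptions for illustration.

alpha, lam = 0.3, 1.0
V_tone, V_light = 0.0, 0.0

# Phase 1: tone alone -> food
for _ in range(30):
    V_tone += alpha * (lam - V_tone)

# Phase 2: tone + light -> food (error based on the summed prediction)
for _ in range(30):
    error = lam - (V_tone + V_light)
    V_tone += alpha * error
    V_light += alpha * error

print(round(V_tone, 3), round(V_light, 3))
# The tone absorbed nearly all associative strength in Phase 1, so the
# Phase 2 error is ~0 and the light gains almost nothing: blocking.
```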
Unblocking
Definition (from the slide):
Occurs when a new stimulus (Stimulus 2) can become associated with the unconditioned stimulus (US) after the intensity of the US is increased, allowing it to become a more predictive signal for the response.
Slide Example (from the slide):
Phase 1: A light (Stimulus 1) is paired with a weak shock (Unconditioned Stimulus), causing a mild response.
Phase 2: A bell (Stimulus 2) is paired with the light (Stimulus 1) and a stronger shock.
Test: The bell (Stimulus 2) now causes a stronger response than before, because the stronger shock allowed the bell to be conditioned alongside the light.
Explanation (mine):
In __, you first have a conditioned response where Stimulus 1 (like the light) predicts something (like the shock).
When the shock’s intensity is increased, the brain recognizes this new “stronger” signal, and it allows Stimulus 2 (like the bell) to become associated with the US (shock) as well.
Without this increase in US intensity, Stimulus 2 would not have been able to make the same prediction about the shock.
Memory Hook: "__" learning by increasing the intensity of the unconditioned stimulus (US).
Imagine a weak shock (Stimulus 1) blocks learning for a new stimulus (Stimulus 2). When the shock's intensity is increased, the block is removed, and the new stimulus can now trigger a response.
Blocking: Pre-existing learning blocks the new stimulus (CS2) from being conditioned, because CS1 already predicts the US well, and no new information is provided by CS2.
___: Increased intensity of the US allows the new stimulus to be conditioned, despite prior learning.
Safety Learning
Definition: An example of inhibitory classical conditioning in which a stimulus becomes a signal that something painful will not occur.
What the slide describes
Rescorla–Wagner theory emphasizes prediction error. Learning happens when what occurs does not match what is expected.
In this situation, Stimulus A becomes associated with something painful, such as a shock.
However, when Stimulus X is present along with Stimulus A, no shock occurs.
Over time, Stimulus X becomes a safety signal.
What this looks like in an experiment
An animal is trained so that a tone is followed by a shock. The animal shows fear when it hears the tone.
Now the experimenter presents the tone together with a light, and no shock occurs.
After repeated trials, the animal learns that when the light is present, the shock will not happen.
The light becomes a signal for safety, even though the tone normally predicts danger.
Explanation (mine)
This is inhibitory learning because the organism is not learning that the stimulus predicts something. It is learning that the stimulus predicts the absence of something painful. This explains how organisms can feel safe in situations that would normally cause fear when a safety cue is present.
Superconditioning
Definition (from the slide):
An extremely strong association between an excitatory conditioned stimulus and an unconditioned stimulus when that conditioned stimulus overrules an inhibitory conditioned stimulus.
Procedure from the slide:
Phase 1: Inhibitory conditioning of Stimulus 2 to the unconditioned stimulus. Stimulus 2 becomes a signal that the unconditioned stimulus will not occur.
Phase 2: Stimulus 1 and Stimulus 2 are presented together, followed by the unconditioned stimulus.
Phase 3: Test Stimulus 1 alone. Very strong response occurs.
What this looks like in an experiment (Example):
Phase 1: First, an animal learns that a light means no shock will happen. The light becomes a safety signal.
Phase 2: Now, the experimenter presents a tone together with the light, and a shock occurs.
Phase 3: Later, when the tone is presented alone, the fear response is much stronger than normal.
Explanation (mine):
Phase 1 involves the learning of a safety signal: the animal learns that the light predicts the absence of shock, so it doesn’t react when the light is present.
In Phase 2, the tone is presented alongside the safety signal (the light), and a shock occurs. The shock is unexpected because the animal has been conditioned to expect no shock when the light is present.
Phase 3 tests the tone alone, and the animal now shows a stronger response than usual, because the shock was unexpected in the previous phase, creating a large prediction error. This strong learning is transferred to the tone.
Memory hook:
__ = A huge surprise after the safety signal is learned, causing an unexpectedly strong reaction to the associated stimulus.
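The same update rule reproduces superconditioning if the compound partner has negative (inhibitory) associative strength. A sketch: the inhibitor's strength is held fixed at −0.5 for simplicity, and alpha and trial counts are illustrative assumptions.

```python
# Superconditioning with the Rescorla-Wagner rule. Assume Phase 1 inhibitory
# training left the safety signal at V_safety = -0.5 (an illustrative value,
# held fixed here for simplicity).

alpha, lam = 0.3, 1.0
V_tone_control, V_tone_super = 0.0, 0.0
V_safety = -0.5   # inhibitor: predicts the ABSENCE of the US

for _ in range(20):
    # Control: tone trained alone.
    V_tone_control += alpha * (lam - V_tone_control)
    # Superconditioning: tone trained in compound with the inhibitor.
    # The summed prediction is lower, so the error (surprise) is larger.
    V_tone_super += alpha * (lam - (V_tone_super + V_safety))

print(round(V_tone_control, 3), round(V_tone_super, 3))
# V_tone_super ends up ABOVE V_tone_control: the unexpected US after a
# safety signal creates a larger prediction error, hence stronger learning.
```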
Overexpectation Effect
Definition:
Phase 1: Conditioned stimulus 1–unconditioned stimulus trials intermixed with conditioned stimulus 2–unconditioned stimulus trials
Phase 2: Conditioned stimulus 1 and conditioned stimulus 2 together–unconditioned stimulus trials
Result: Reduced responding to conditioned stimulus 1 and conditioned stimulus 2
What actually happens in the experiment
First, an animal learns:
A tone is followed by food → salivation
A light is followed by food → salivation
Each stimulus alone strongly predicts food.
Now the experimenter presents:
Tone and light together → food
Later, the experimenter tests the tone alone and the light alone.
The animal salivates less to each than before.
Why this happens (explained)
When the tone and light are presented together, the animal expects a lot of food because both previously predicted food.
But only the same amount of food is delivered.
The outcome is less than expected, creating a negative prediction error.
Because of this, associative strength for both stimuli decreases — even though food was still presented.
Explanation (mine)
This is called extinction without extinction trials because responding decreases even though the unconditioned stimulus was never removed. The decrease happens because the outcome did not match the organism’s high expectation.
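The negative prediction error in Phase 2 can be shown with the same update rule. A sketch, with alpha and trial counts as illustrative assumptions:

```python
# Overexpectation effect with the Rescorla-Wagner rule.
# alpha and trial counts are assumptions for illustration.

alpha, lam = 0.3, 1.0
V_tone, V_light = 0.0, 0.0

# Phase 1: tone -> food and light -> food, trained separately.
for _ in range(30):
    V_tone += alpha * (lam - V_tone)
    V_light += alpha * (lam - V_light)

before = round(V_tone, 3)

# Phase 2: tone + light together -> the SAME amount of food.
# The summed prediction (~2.0) exceeds lam (1.0): negative prediction error.
for _ in range(10):
    error = lam - (V_tone + V_light)
    V_tone += alpha * error
    V_light += alpha * error

print(before, round(V_tone, 3))
# Responding to each stimulus DROPS even though food was never omitted.
```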
Hall-Pearce Negative Transfer
Definition:
Evidence that subjects may reduce attention to a conditioned stimulus when its association with an unconditioned stimulus seems to be well-understood.
It is called __ __ because learning of one association in Phase 1 impairs learning of a different association in Phase 2.
The experiment (Hall & Pearce, 1979 — from slide)
Phase 1
A tone is followed by a weak shock for 66 trials.
The animal learns that the tone predicts a weak shock.
Phase 2
The same tone is now followed by a strong shock.
The animal learns this new association very slowly. It learns much more slowly than a control group that never heard the tone in Phase 1.
What this shows
Because the animal already “thought it knew” what the tone predicted, it paid less attention to the tone in Phase 2.
This reduced attention makes new learning about the tone harder.
Explanation (mine)
This experiment shows something the Rescorla–Wagner theory cannot explain. The theory focuses only on prediction error, but this experiment shows that attention to the conditioned stimulus changes over time. When an organism believes it understands what a stimulus predicts, it stops paying attention to it, which interferes with later learning.
Perruchet Effect
Definition (EXACT words from the slide):
“Subjects’ conscious expectations may differ from physiological responses during classical conditioning.”
Original Experiment (Perruchet, 1985) — from the slide
Eyeblink conditioning
p(airpuff | tone) = .5
This means:
Every time the tone plays, there is a 50% chance an airpuff will hit the eye.
So sometimes the tone is followed by an airpuff, and sometimes it is not.
Two things were measured:
Whether the person blinked when the tone played (the learned physiological response)
What the person thought would happen (they rated how likely the airpuff was on a 0–7 scale)
What the slide says happened
If several recent trials had no airpuff after the tone:
People’s expectancy ratings went UP
They believed: “The airpuff is now more likely” (gambler’s fallacy)
But their eyeblink response went DOWN
Their body had learned the tone is less predictive
If several recent trials did have airpuffs after the tone:
People’s expectancy ratings went DOWN
But their eyeblink response went UP
What the graph slide shows
As the number of tone-alone trials increases:
Expectancy ratings ↑
Eyeblink probability ↓
They move in opposite directions.
Replication slide (Perruchet et al., 2006)
Same effect using reaction times:
A light sometimes predicts a tone (50% of the time)
Reaction times get faster when light predicts tone
After many trials where the tone does NOT follow the light:
Reaction-time benefit decreases
But people’s predictions that the tone will occur increase
Same mismatch.
Implication (EXACT from slide)
“Classical conditioning in humans could reflect two distinct processes:”
Automatic, unconscious associative processes (shown by the eyeblink response)
Conscious expectations driven by cognitive processes (shown by expectancy ratings)
Explanation (mine)
The person’s body learns from actual experience.
The person’s mind predicts using logic.
So the person thinks the airpuff is more likely…
while their eyeblink shows they’ve learned it’s less likely.
That contradiction is the __ ___.
Evaluative Conditioning
Definition (exact slide words):
“change in affective response (usually ratings) to a previously neutral stimulus after pairing with another (more emotion-provoking) stimulus.”
Step 1 — What “affective response” means
Affective response = how you feel about something
Do you like it? dislike it? feel neutral?
This slide is about changing feelings, not changing reflexes.
Step 2 — What is “previously neutral stimulus”
Something you had no feelings about before.
Example: a random picture.
Step 3 — What does “pairing” mean here
The neutral thing is shown at the same time as something that already makes you feel something.
You are not told to learn.
You are not predicting anything.
You are just experiencing them together.
Slide Example 1 — Razran (1938)
People looked at pictures
While they were eating food
Eating food = pleasant emotional state.
Later…
Those same pictures were rated more attractive.
Why?
Because your brain linked:
picture + feeling good
So now the picture feels good.
Slide Example 2 — Hammerl et al. (1997)
Neutral pictures paired with liked pictures
Neutral pictures paired with disliked pictures
Later shown alone and rated
Results:
Paired with liked → now liked
Paired with disliked → now disliked
Again, the neutral picture did not predict anything.
It just shared emotional space with something else.
What this is teaching (very important)
Earlier in this unit you saw:
Eyeblink conditioning
Fear conditioning
Skin conductance
Those showed:
learning to react
This slide shows:
learning to feel
That is a completely different kind of learning.
Why this matters (connection to phobias slide)
Think about a phobia.
A neutral thing (dog, elevator, airplane)
gets paired with a scary experience.
Now the thing itself feels scary.
That is evaluative conditioning.
Explanation (mine)
You are not learning:
“This predicts something.”
You are learning:
“This makes me feel good”
or
“This makes me feel bad.”
And you don’t even realize it happened.
That’s why this explains:
Phobias
Preferences
Advertising
Emotional biases
Causal Learning
Definition (from slides)
__ ___: How do we decide that one event causes another?
Classical conditioning represents learning of associations between events. Does this sort of learning underlie other sorts of cognitive processing?
Key idea from slides
We often use the same associative principles from classical conditioning when deciding whether one thing causes another.
Major example from slides — Gluck & Bower (1988): medical diagnosis task
Subjects study patient files listing:
Symptoms
Diagnosis (imaginary diseases)
Later they are tested by:
Rating how strongly symptoms are associated with diseases, or
Seeing symptoms and making a diagnosis
This shows how people learn cause–effect relationships the same way they learn CS–US relationships.
Connection to classical conditioning (explicit slide bullets)
__ __ shows the same effects as conditioning:
Frequency of Cause–Effect pairing → learning curve
Effect of relative validity
Effect of redundant information (blocking)
Effect of more salient cause (overshadowing)
These are Rescorla-Wagner effects showing up in human reasoning.
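Code sketch (mine; α and trial counts are arbitrary): the blocking effect in the list above falls straight out of the Rescorla–Wagner rule, because cues presented together share a single prediction error.

```python
# Blocking under the Rescorla-Wagner rule (my illustration).
# With a compound stimulus, each cue updates by alpha * (lam - total
# prediction), so a fully predicted US leaves nothing for a new cue to learn.

def train_compound(strengths, cues, lam, n, alpha=0.3):
    """Train the listed cues together for n trials, updating in place."""
    for _ in range(n):
        v_total = sum(strengths[c] for c in cues)
        error = lam - v_total           # shared prediction error
        for c in cues:
            strengths[c] += alpha * error
    return strengths

v = {"A": 0.0, "B": 0.0}
train_compound(v, ["A"], lam=1.0, n=30)        # Phase 1: A alone -> US
train_compound(v, ["A", "B"], lam=1.0, n=30)   # Phase 2: A+B -> US

# A already predicts the US, so the Phase 2 error is near zero and the
# redundant cue B acquires almost no strength: blocking.
print(round(v["A"], 2), round(v["B"], 2))
```

The same shared-error logic covers overshadowing (a more salient cue gets a larger α and soaks up more of the error) and relative validity.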
Philosophical background from slides
David Hume (1748): We never see causation directly. We judge it when:
Cause and effect are close in time and space
Cause happens before effect
There is no other likely cause
Bertrand Russell (1912): causation is not obvious in the world — it is inferred.
Your professor includes this to show:
Causality is a cognitive construction, built from associations.
Important limitation from slides
Association strength is not the only factor in causal learning.
Other factors:
Our prior knowledge
Understanding of mechanisms
Strong beliefs can create illusory correlations (Chapman & Chapman, 1969) — seeing relationships that do not exist.
What this term is really teaching
Humans use associative learning mechanisms (like conditioning) as a foundation for:
Diagnosing problems
Predicting events
Planning actions
Judging what causes what
Causal reasoning is conditioning plus cognition.
Explanation (mine)
This term shows that classical conditioning is not just about dogs and tones. The same learning rules explain how humans decide that “this causes that.” Blocking, overshadowing, and relative validity are not just lab effects — they shape how we interpret the world.
Delay Conditioning
Definition: Conditioned stimulus starts before unconditioned stimulus and stays on until the unconditioned stimulus ends.
What the slide is showing
The conditioned stimulus and unconditioned stimulus overlap in time.
Example (apply Pavlov)
Tone starts → food comes while tone is still playing → tone stops after food.
The dog hears the tone during the food.
Explanation (mine)
This is the most effective form of classical conditioning because the organism experiences the conditioned stimulus while the important event is happening.
It makes prediction easy: “This sound is happening at the same time as the food.”
Trace Conditioning
Definition: Conditioned stimulus starts before unconditioned stimulus and turns off before the unconditioned stimulus begins.
What the slide is showing
There is a gap between conditioned stimulus and unconditioned stimulus.
Example (apply Pavlov)
Tone plays → tone stops → short pause → food appears.
The dog must remember the tone during the gap.
Explanation (mine)
This requires memory. The organism must keep a “trace” of the conditioned stimulus in mind to connect it to the unconditioned stimulus.
This is harder and usually produces weaker conditioning than delay conditioning.
Sensory Preconditioning
Definition:
Phase 1: Two neutral stimuli are paired together (no food yet)
Phase 2: One of those stimuli is paired with food
Test: The other stimulus now causes salivation
Concrete Example
Phase 1:
A light turns on at the same time as a tone. No food. Nothing happens.
The dog just learns: light and tone go together.
Phase 2:
The tone is now paired with food. The dog salivates to the tone.
Test:
The light alone now causes salivation.
Explanation (mine)
The dog learned the relationship between the light and the tone first, before either meant anything.
When the tone later gained meaning, the light inherited that meaning.
This shows that conditioning involves stimulus–stimulus learning: the association is between the stimuli themselves, not just between a stimulus and a response.
__ ___: Two neutral stimuli (e.g., A and B) are paired before any conditioning. Later, when Stimulus A is conditioned, Stimulus B will also evoke a response, even though it was never directly paired with the unconditioned stimulus.
Second-Order Conditioning: Stimulus A is already conditioned to elicit a response. Then, Stimulus B is paired with Stimulus A, and Stimulus B can now elicit the same response, even though it was never paired with the unconditioned stimulus.
Key difference: In __ __, the pairing of stimuli happens before conditioning. In Second-Order Conditioning, the first stimulus is conditioned before pairing with the second one.
Second-Order Conditioning
Definition (from slide)
Phase 1: A stimulus is paired with food
Phase 2: A new stimulus is paired with that stimulus
Test: The new stimulus causes salivation
Concrete Example
Phase 1:
Tone → food. Dog salivates to tone.
Phase 2:
A light turns on before the tone. No food yet.
Test:
The light alone now causes salivation.
Explanation (mine)
The light never touched the food.
It works because it predicts something that predicts food.
The dog is learning chains of prediction.
Sensory Preconditioning: Two neutral stimuli (e.g., A and B) are paired before any conditioning. Later, when Stimulus A is conditioned, Stimulus B will also evoke a response, even though it was never directly paired with the unconditioned stimulus.
__ ___: Stimulus A is already conditioned to elicit a response. Then, Stimulus B is paired with Stimulus A, and Stimulus B can now elicit the same response, even though it was never paired with the unconditioned stimulus.
Key difference: In Sensory Preconditioning, the pairing of stimuli happens before conditioning. In __ __, the first stimulus is conditioned before pairing with the second one.
Positive Patterning
Definition:
Only the combination of two stimuli predicts food.
Each stimulus alone does not.
Concrete Example
Light + tone together → food
Light alone → no food
Tone alone → no food
Explanation (mine)
The dog learns that both signals together mean food.
Neither one is enough.
This cannot be explained by “adding” associations — the dog treats the pair as a new pattern.
Negative Patterning
Definition: Each stimulus alone predicts food.
Together, they do not.
Concrete Example
Light → food
Tone → food
Light + tone together → no food
Explanation (mine)
The dog learns that when both signals happen at once, food will not come.
This shows the dog is learning configurations, not simple associations.
Morgan’s Canon
Person (from slide):
C. Lloyd Morgan (1903)
Definition (from slide):
Animal behavior should not be interpreted using higher psychological processes if it can be explained using simpler processes
Explanation:
__ __ argues for parsimonious explanations, meaning scientists should choose the simplest explanation that fully accounts for the behavior, rather than assuming complex mental abilities.
Example (aligned with slide intent):
If an animal solves a task through trial-and-error learning, we should not assume reasoning or insight unless simpler explanations fail.
MCQ Memory Anchor
Comparative psychology = compare species
__ __ = use the simplest (parsimonious) explanation
Coolidge Effect
When a male animal’s sexual response decreases after repeated exposure to the same female (habituation). However, when a new female is introduced, the male’s sexual response suddenly increases again.
Composite Effect
Example (from slide):
Participants are shown half of a famous face. When this half is paired with half of another famous face, it becomes much more difficult to recognize. This happens because we treat faces holistically (as a whole), not just as individual parts.
However, it becomes easier to recognize the faces when the two halves are misaligned.
Explanation: This demonstrates that face recognition works better when we process faces as integrated wholes, not by individual features. The __ ___ shows that holistic processing is important for recognizing faces. Misaligning the halves reduces the holistic effect, making it easier to recognize the faces.
Whole Advantage
Example (from slide):
Participants study a full face. When asked to recognize it later, they can distinguish it from another face that differs only in the nose. However, they cannot recognize the nose when it is presented in isolation.
Explanation: This shows that we process whole faces better than individual features. The Whole Advantage implies that face recognition is more accurate when the full face is presented rather than a single feature, such as the nose.
Inversion Effect
Example (from slide):
When faces are presented upside down, it disrupts face recognition, especially sensitivity to spatial relations between facial features (e.g., the distance between eyes, nose, and mouth).
Explanation: The __ __ demonstrates that face recognition is much more difficult when faces are inverted. We process faces better when they are upright, and inversion disrupts our ability to process them holistically and accurately. This effect underscores how specialized face recognition mechanisms are in the brain, optimized for upright faces.
Fear conditioning (Conditioned Emotional Response)
A shock naturally causes freezing. If a tone is repeatedly presented before the shock, the tone alone will later cause freezing. The organism learned the tone predicts danger.
Summation test of conditioned inhibition
Phase 1
A dog is first trained with two separate pairings:
A bell is followed by food. The dog salivates.
A light is followed by food. The dog salivates.
At this point, the dog has learned that both the bell and the light predict food.
Phase 2
Now the experimenter changes the situation:
The bell is presented together with a tone.
No food is given.
This happens many times.
The dog begins to learn that when the tone is present, food will not occur, even though the bell normally predicts food.
Test
The experimenter now presents the light together with the tone.
Normally, the light causes salivation.
But if salivation is greatly reduced when the tone is present, this shows that the tone has become a conditioned inhibitor. The dog has learned that the tone signals the absence of food.
Retardation of acquisition test of conditioned inhibition
Phase 1
The dog is trained:
A bell is followed by food.
The dog salivates to the bell.
Phase 2
Now the bell and a tone are presented together and no food follows.
The dog learns that the tone means food will not happen.
Test
The experimenter now tries to condition the tone with food by pairing the tone with food.
If the dog is very slow to learn that the tone now predicts food, this shows that the tone had already been strongly learned as a signal for the absence of food. This slow learning is called __ _ __.

Thorndike’s Law of Effect
Quoted Slide Information (exact wording from slides)
“Law of Effect – Responses in a situation followed by satisfaction will become stronger. Responses followed by discomfort will become weaker.”
“Cats in a puzzle-box – Random trial-and-error behavior – Gradual change in behavior”
Definition (Clear Explanation)
__ __ states that behaviors followed by satisfying consequences become stronger, and behaviors followed by discomfort become weaker.
Learning occurs through trial-and-error.
Successful responses become strengthened, while unsuccessful responses weaken.
Experiment — Puzzle Box (Thorndike, 1898)
Procedure (step-by-step)
A cat is placed inside a puzzle box.
The box can be opened by performing a specific response (e.g., pulling a loop or lever).
The cat initially shows random trial-and-error behaviors (scratching, clawing, meowing).
By chance, the cat performs the correct response and escapes to obtain food.
The cat is placed back in the box repeatedly across trials.
Results
Escape time decreases gradually across trials.
The learning curve shows a smooth, gradual reduction in escape time.
There is no sudden jump in performance.
This supports the idea that learning occurs through gradual strengthening of successful responses.
What This Shows
Learning is incremental.
Correct responses become strengthened through reinforcement (Thorndike called this “stamping in” successful responses).
Learning does not require reasoning or sudden insight.
Context
Category: Learning about the relationship between stimuli and our behavior.
This principle is the foundation of instrumental (operant) conditioning.
Many later concepts build on this idea, including:
Skinner’s operant conditioning
Reinforcement and punishment
Partial reinforcement
Schedules of reinforcement
Challenges to __ Approach: Contrast Effects
Research later showed that animals compare current rewards to previous rewards.
Negative contrast: large reward → small reward → responding drops below normal.
Positive contrast: small reward → large reward → responding increases above normal.
This suggests behavior is not strengthened blindly; animals form expectations about rewards.
Insight Learning (Köhler)
Köhler observed chimpanzees solving problems such as reaching bananas by stacking boxes or combining sticks.
Instead of gradual trial-and-error learning, apes sometimes:
paused and examined the situation
suddenly performed the correct solution
This sudden change in behavior (insight) challenges Thorndike’s idea that learning always occurs gradually.
Comparison
__ __ → behavior strengthened through consequences.
Classical conditioning → learning associations between stimuli.
Memory Hook
Behavior followed by satisfaction strengthens → __ __

Shaping
Quoted slide information (exact wording from slides)
“Reinforcement of successive approximations to a desired behavior”
Definition (clear explanation)
__ __ is the process of reinforcing small steps that gradually get closer to the final desired behavior.
The full behavior is not reinforced immediately. Instead, behaviors that increasingly resemble the goal behavior are reinforced.
Experiment/Example (Skinner box / operant chamber — step-by-step)
Goal: get a rat to press a lever.
Reinforce when the rat moves toward the lever.
Then only reinforce when the rat gets closer to the lever.
Then only reinforce when the rat touches the lever.
Finally, reinforce only when the rat presses the lever.
Each step is a “successive approximation” of the final behavior.
What this shows
New behaviors can be built gradually through reinforcement history. Learning does not require sudden insight—behavior can be constructed by reinforcing closer-and-closer versions of the target response.
Context
This is part of operant conditioning (learning the relationship between behavior and consequences).
It shows how new behaviors are created in the operant approach (Skinner), which connects back to Thorndike’s idea that consequences strengthen or weaken responses.
It also sets up the next related idea: chaining, where instead of building one behavior, you link multiple behaviors into a sequence.
Difference from Chaining
__ __ = builds one behavior gradually by reinforcing closer approximations.
Chaining = links multiple behaviors into a sequence with reinforcement only after the final response.
Memory Hook
Building a behavior step-by-step → __ __

Chaining
Quoted slide information (exact wording from slides)
“Constructing a sequence of behaviors with reinforcement only occurring after the final response in the sequence”
Definition (clear explanation)
__ __ is the process of linking multiple learned behaviors together into a sequence.
Reinforcement is delivered only after the final response in the chain.
Each completed response produces a stimulus that signals the next response in the sequence.
Example (step-by-step)
Rat must:
Press lever
Run to food tray
Retrieve pellet
Food appears only after the final step.
What This Shows
Complex behaviors are built from smaller learned responses.
Difference From Shaping
Shaping = gradual refinement of one behavior
__ __ = linking several behaviors together
Context
Still within operant conditioning. Explains formation of habits and multi-step routines. Unlike shaping, which builds one behavior gradually, __ __ explains how already-learned behaviors are combined into longer behavioral sequences.

Extinction
Slide information (exact wording from slides)
“___: response eliminated when reinforcement is withheld”
“___ burst: temporary increase of nonreinforced behavior when extinction starts”
“Spontaneous recovery: extinction fades away as a result of the passage of time”
Definition (clear explanation)
____ occurs when a previously reinforced behavior decreases because reinforcement is no longer delivered.
The behavior does not disappear immediately — it gradually weakens.
What Happens When Reinforcement Stops (step-by-step)
Phase 1: Behavior is reinforced
→ Response increases
Phase 2: Reinforcement is removed
→ Behavior initially increases briefly (extinction burst)
→ Then gradually decreases
Later:
→ After time passes, the behavior may briefly return (spontaneous recovery)
Important Subcomponents (do NOT confuse)
Extinction burst
Temporary spike in responding when reinforcement first stops.
Spontaneous recovery
Return of behavior after time has passed following extinction.
These are NOT new learning — they are temporary effects.
What This Shows
The behavior was not erased. Learning remains in memory, but performance decreases when reinforcement stops.
Context
Category: Operant conditioning
Framework: Consequences control behavior.
This connects to:
Partial reinforcement (affects resistance to extinction)
Resurgence (another way extinguished responses return)
Difference From Classical Conditioning Extinction
In operant conditioning → behavior stops because reinforcement stops
In classical conditioning → response stops because the CS no longer predicts the US (i.e., the tone no longer predicts food)
Mechanism is similar, but learning type differs.
In operant conditioning, extinction occurs because the behavior no longer produces reinforcement.
In classical conditioning, extinction occurs because the conditioned stimulus is presented without the unconditioned stimulus.
Memory Hook
No reward → behavior fades → __ __

Resurgence
Quoted slide information (exact wording from slides)
“__: recovery of an extinguished response after extinction of a competing behavior.”
“RESULT: Rat will resume pressing of Lever1 (___) even in absence of food. (This response will gradually extinguish.)”
Definition (clear explanation)
__ __ is the reappearance of a previously extinguished behavior after a different, competing behavior is also extinguished.
It is not spontaneous recovery. It occurs because reinforcement for the alternative behavior stops.
Bouton & Schepers (2013) Procedure — Step-by-Step
Phase 1
Rat in Skinner box with two levers.
Press Lever 1 → food
Press Lever 2 → nothing
Rat learns Lever 1.
Phase 2
Lever 1 no longer gives food.
Lever 1 responding decreases (extinction).
Phase 3
Lever 2 now gives food.
Rat learns Lever 2.
Lever 1 remains extinguished.
Phase 4
Lever 2 no longer gives food.
Now BOTH behaviors are extinguished.
RESULT
Rat resumes pressing Lever 1 — even though it does not produce food. When reinforcement for the second behavior is removed, the original response regains strength because it was previously reinforced and remains stored.
That return of Lever 1 = ____.
Eventually it extinguishes again.
What This Shows
Extinction does not erase learning.
Old behaviors remain stored and can reappear when newer reinforcement patterns collapse.
Context
Operant conditioning
Belongs under: Recovery effects
Connected terms:
Extinction
Spontaneous recovery (different mechanism)
Partial reinforcement (affects persistence)
Difference From Spontaneous Recovery
Spontaneous recovery → behavior returns after passage of time.
__ __ → behavior returns because reinforcement for the alternative behavior is removed.
Behavior returns because reinforcement for an alternative response stops.
Time is not the cause.
Why This Is Important Clinically
When replacing a bad behavior with a better one, if the new behavior stops being reinforced, the old behavior can come back.
This is relapse.
Memory Hook
When you resurge, you go back to the old thing.

Partial Reinforcement Extinction Effect
Quoted slide information (exact wording from slides)
“__ ___ __ __: Slows down acquisition of a response but increases resistance to extinction”
“Discrimination hypothesis: more difficult to distinguish between acquisition phase and extinction phase”
“Frustration hypothesis: responding while frustrated becomes associated with reinforcement”
“Sequential hypothesis: memory of sequence of nonrewarded trials becomes associated with reinforcement”
Definition (clear explanation)
__ __ refers to the finding that behaviors reinforced only some of the time are more resistant to extinction than behaviors reinforced every time.
However, partial reinforcement slows initial learning.
So:
Slower acquisition → because reinforcement does not occur after every response, it takes longer for the subject to learn the response–reinforcement relationship.
Stronger persistence → because the subject is used to responding without reward, it keeps responding much longer once reinforcement stops.
Basic Experimental Comparison
Group 1: Continuous reinforcement
Every response → reward
Group 2: Partial reinforcement
Some responses → reward
After learning, reinforcement stops for both groups.
Result: Continuous group stops responding quickly.
Partial group keeps responding much longer.
That persistence = __ __.
Why Does This Happen? (Three Theories)
1. Discrimination Hypothesis
In continuous reinforcement: Extinction is obvious because reward suddenly stops.
In partial reinforcement: Nonreward is normal. So the animal does not immediately detect extinction.
2. Frustration Hypothesis
During training under partial reinforcement: Sometimes the subject responds and gets no reward. This creates frustration.
Over time, frustration becomes associated with eventual reward.
So during extinction: Frustration does not stop responding — it motivates continued responding.
3. Sequential Hypothesis
The animal remembers sequences like: Nonreward → Nonreward → Reward
During extinction: Nonreward feels like part of a familiar sequence that usually leads to reward.
So responding continues.
What This Shows
Learning history shapes extinction behavior. Resistance to extinction depends on reinforcement history, not just whether reinforcement occurred.
Behavior trained under uncertainty becomes persistent.
Context
Operant conditioning
Belongs under reinforcement principles.
Directly connected to:
Extinction
Schedules of reinforcement (next major term)
Gambling behavior (variable-ratio example)
This explains why gambling is addictive.
Difference From Continuous Reinforcement
Continuous → fast learning, fast extinction
__ __ → slow learning, slow extinction
Memory Hook
Unpredictable reward = hardest behavior to kill → __ __

Schedules of Partial Reinforcement
Quoted slide information (exact wording from slides)
“Fixed-interval: Reinforcement is given for the first response that occurs after a set period of time”
“Variable interval: Reinforcement is given for the first response that occurs after a changing period of time”
“Fixed-ratio: Reinforcement is given for a set number of responses”
“Variable-ratio: Reinforcement is given for a changing number of responses”
“Effects on Behavior of Partial-Reinforcement Schedules
– Pauses after reinforcement on fixed schedules (especially fixed-interval)
– Greater responding on ratio schedules than on interval schedules
– Greatest total number of responses on variable-ratio schedules”
Definition (clear explanation)
__ __ refers to structured rules that determine when reinforcement is delivered under partial reinforcement.
There are four main types:
Two based on time (interval)
Two based on number of responses (ratio)
Each can be fixed or variable
The Four Schedules (clear breakdown)
1⃣ Fixed-Interval (FI)
Reinforcement occurs for the first response after a fixed amount of time.
Example: Every 30 seconds, the first response is rewarded.
Behavior pattern:
Slow after reward
Speeds up as time approaches
That pause is what creates the “scalloped” curve on the graph.
Key feature: After the subject receives reinforcement, responding temporarily drops or stops because another reward cannot occur until the fixed time interval passes. Since the animal “knows” that responding immediately will not produce another reward, it pauses. As the interval gets closer to ending, responding gradually increases again.
2⃣ Variable-Interval (VI)
Reinforcement occurs for the first response after a changing amount of time.
Example: On average every 30 seconds, but unpredictable.
Behavior pattern:
Steady, moderate responding
No large pauses
3⃣ Fixed-Ratio (FR)
Reinforcement after a fixed number of responses.
Example: Every 10 lever presses → food.
Behavior pattern:
Fast responding
Brief pause after reinforcement
4⃣ Variable-Ratio (VR)
Reinforcement after a changing number of responses.
Example: On average every 10 responses, but unpredictable.
Behavior pattern:
Very high rate of responding
No pauses
Most resistant to extinction
This produces the greatest total number of responses.
Major Behavioral Findings (from slide)
Fixed schedules → pauses after reinforcement
Ratio schedules → more responding than interval schedules. Animals press/peck more frequently under ratio schedules.
Why? On a ratio schedule, reinforcement depends on the number of responses, so responding faster brings the reward sooner; the subject learns "more responding = more reward," which produces high, steady responding.
On an interval schedule, reinforcement depends on time passing: no matter how many responses are made, reinforcement only becomes available after the interval ends. Responding faster does NOT increase the reward rate, so responding stays slower or moderate.
Variable-ratio → highest overall response rate
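Code sketch (mine; the FR-5 and FI-5 parameter values are arbitrary): the ratio-vs-interval difference comes down to whether responding faster raises the reward rate.

```python
# Toy sketch (mine) of two reinforcement-schedule rules.
# Each schedule returns True when a response earns reinforcement.

def make_fixed_ratio(n):
    """FR-n: reinforce every n-th response (set number of responses)."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True
        return False
    return respond

def make_fixed_interval(t):
    """FI-t: reinforce the first response after t seconds have passed."""
    available_at = t
    def respond(now):
        nonlocal available_at
        if now >= available_at:
            available_at = now + t   # timer resets
            return True
        return False
    return respond

# One response per second for 20 seconds:
fr_rewards = sum(make_fixed_ratio(5)() for _ in range(20)) if False else None
fr5, fi5 = make_fixed_ratio(5), make_fixed_interval(5.0)
fr_rewards = sum(fr5() for _ in range(20))           # 20 responses
fi_rewards = sum(fi5(now) for now in range(1, 21))   # 20 seconds elapse

# Double the response rate (two per second, same 20 seconds):
fr_fast, fi_fast_sched = make_fixed_ratio(5), make_fixed_interval(5.0)
fr_fast_rewards = sum(fr_fast() for _ in range(40))
fi_fast_rewards = sum(fi_fast_sched(now / 2) for now in range(2, 42))

# Doubling the response rate doubles rewards on the ratio schedule
# (8 vs 4) but not on the interval schedule (still 4): only ratio
# schedules pay for responding faster.
print(fr_rewards, fi_rewards, fr_fast_rewards, fi_fast_rewards)
```

This is why the slide's finding (greater responding on ratio than interval schedules) makes sense: on interval schedules, extra responses are wasted effort.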
Context
Belongs under: Operant conditioning → Reinforcement principles
Directly connected to:
Partial Reinforcement Extinction Effect
Gambling behavior (VR schedule)
Persistence of habits
This explains why:
Slot machines are addictive
Sales commission motivates high performance
Studying increases as a fixed exam date approaches (the fixed-interval pattern)
Difference: Ratio vs Interval
Ratio = based on number of responses
Interval = based on time
Ratio schedules produce higher response rates than interval schedules.
Difference: Fixed vs Variable
Fixed = predictable
Variable = unpredictable
Variable schedules produce steadier and more persistent responding.
Exam Super Hook
Time vs Responses
Fixed vs Unpredictable
Most powerful schedule = __ __

Fixed-Interval Schedule
Quoted slide information (exact wording from slides)
“__ __: Reinforcement is given for the first response that occurs after a set period of time”
“Effects on Behavior of Partial-Reinforcement Schedules
– Pauses after reinforcement on fixed schedules (especially fixed-interval)”
Definition (clear explanation)
__ __ is a reinforcement schedule in which reinforcement is delivered for the first response that occurs after a fixed amount of time has passed.
Reinforcement is based on time, not number of responses.
How It Works (step-by-step example)
Example: Every 30 seconds, the first lever press produces food.
If the rat presses during the 30 seconds → no reinforcement.
When 30 seconds passes → the next response is reinforced.
The timer resets.
Behavior Pattern
After reinforcement:
Responding slows or pauses.
As the time interval approaches, responding increases.
Produces a “scalloped” pattern of responding.
Key feature: Post-reinforcement pause.
What This Shows
Behavior is controlled by predictable timing of reinforcement.
Because the subject learns when reinforcement becomes available, responding increases as that time approaches.
Context
Category: Operant conditioning
Section: Schedules of partial reinforcement
This is a time-based schedule (interval) and a predictable one (fixed).
Compared to ratio schedules, interval schedules produce lower overall response rates.
Difference from Variable-Interval
__ __ = reinforcement after a set amount of time
Variable-interval = reinforcement after an unpredictable amount of time
Fixed schedules produce pauses; variable schedules produce steadier responding.
Memory Hook
Wait for the clock → respond → reward → __ __

Variable-Interval Schedule
Quoted slide information (exact wording from slides)
“__ __: Reinforcement is given for the first response that occurs after a changing period of time”
“Effects on Behavior of Partial-Reinforcement Schedules
– Greater responding on ratio schedules than on interval schedules”
Definition (clear explanation)
__ __ is a reinforcement schedule in which reinforcement is delivered for the first response that occurs after an unpredictable, changing amount of time.
Reinforcement is based on time, not number of responses.
The interval length varies around an average.
How It Works (step-by-step example)
Example: On average every 30 seconds, but the actual interval varies.
Sometimes reinforcement becomes available after 10 seconds.
Sometimes after 45 seconds.
The first response after the interval is reinforced.
The interval resets unpredictably.
Behavior Pattern
Steady, moderate responding.
No large pauses.
Lower overall response rate than ratio schedules.
Because reinforcement timing is unpredictable, responding remains consistent.
What This Shows
When reinforcement timing is unpredictable, the subject cannot anticipate when reward becomes available.
This prevents post-reinforcement pauses seen in fixed schedules.
Context
Category: Operant conditioning
Section: Schedules of partial reinforcement
This is:
Time-based (interval)
Unpredictable (variable)
Compared to fixed-interval, this produces steadier responding. Compared to ratio schedules, this produces lower overall response rates.
Difference from Fixed-Interval
Fixed-interval → reinforcement after a set time → pause then accelerate
__ __ → reinforcement after unpredictable time → steady responding
Memory Hook
Unpredictable clock → steady responding → __ __

Fixed-Ratio Schedule
Quoted slide information (exact wording from slides)
“__ __: Reinforcement is given for a set number of responses”
“Effects on Behavior of Partial-Reinforcement Schedules
– Pauses after reinforcement on fixed schedules
– Greater responding on ratio schedules than on interval schedules”
Definition (clear explanation)
__ __ is a reinforcement schedule in which reinforcement is delivered after a fixed number of responses.
Reinforcement is based on number of responses, not time.
The required number of responses stays constant.
How It Works (step-by-step example)
Example: Every 10 lever presses → food.
Rat presses 9 times → no reinforcement.
On the 10th press → reinforcement delivered.
Count resets.
Cycle repeats.
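The same kind of sketch works for the count-based rule — here reinforcement depends on the number of responses, not on a clock. Again, the function name and the FR-10 requirement are just illustrative.

```python
def fixed_ratio_rewards(num_presses, ratio=10):
    """Return which presses earn reinforcement on a fixed-ratio schedule:
    every `ratio`-th response pays off, then the count resets."""
    reinforced = []
    count = 0
    for press in range(1, num_presses + 1):
        count += 1
        if count == ratio:           # the Nth press produces reinforcement
            reinforced.append(press)
            count = 0                # count resets
    return reinforced

# 25 presses on FR-10: reinforcement after press 10 and press 20;
# presses 21-25 have not yet met the requirement.
print(fixed_ratio_rewards(25))  # → [10, 20]
```

Compare this with the interval sketch: here faster responding earns reward sooner, which is why ratio schedules produce higher response rates than interval schedules.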
Behavior Pattern
High rate of responding.
Brief pause after reinforcement.
Then rapid responding until next reinforcement.
Because reinforcement is predictable after a set number of responses, subjects often pause briefly after reward.
What This Shows
When reinforcement depends on effort/output, response rate increases.
Ratio schedules produce higher overall responding than interval schedules.
Context
Category: Operant conditioning
Section: Schedules of partial reinforcement
This is:
Response-based (ratio)
Predictable (fixed)
Compared to interval schedules → higher response rates.
Compared to variable-ratio → slightly less persistent.
Difference from Variable-Ratio
__ __ = reinforcement after a set number of responses → brief pauses
Variable-ratio = reinforcement after unpredictable number of responses → no pauses
Memory Hook
Fixed number of responses → reward → short break → __ __

Variable-Ratio Schedule
Quoted slide information (exact wording from slides)
“__ __: Reinforcement is given for a changing number of responses”
“Effects on Behavior of Partial-Reinforcement Schedules
– Greatest total number of responses on variable-ratio schedules”
Definition (clear explanation)
__ __ is a reinforcement schedule in which reinforcement is delivered after an unpredictable, changing number of responses.
Reinforcement is based on number of responses, not time.
The number of required responses varies around an average.
How It Works (step-by-step example)
Example: On average every 10 responses → food.
Sometimes reinforcement occurs after 3 responses.
Sometimes after 15.
The subject never knows exactly when reinforcement will occur.
The requirement resets unpredictably after each reward.
Behavior Pattern
Very high rate of responding.
No pauses after reinforcement.
Produces the greatest total number of responses.
Most resistant to extinction.
Because reinforcement is unpredictable and response-based, responding remains persistent and steady.
What This Shows
Unpredictable reinforcement based on effort produces the strongest, most persistent responding.
This schedule generates the highest overall response rate.
Context
Category: Operant conditioning
Section: Schedules of partial reinforcement
This is:
Response-based (ratio)
Unpredictable (variable)
Produces more responding than all other schedules.
This explains behaviors like gambling (slot machines).
Difference from Fixed-Ratio
Fixed-ratio → reinforcement after a set number of responses → pause after reward
__ __ → reinforcement after unpredictable number of responses → no pause
Memory Hook
Unpredictable effort → nonstop responding → __ __

Contrast Effects
Quoted slide information (exact wording from slides)
“Do animals really not know what reinforcement to expect?: __ __”
“Negative (switch from large to small reinforcement)”
“Positive (switch from small to large reinforcement)”
Definition (clear explanation)
__ __ refers to changes in behavior that occur when the value of reinforcement shifts relative to what was previously received.
Behavior is influenced by comparison to past reward, not just the current reward.
Negative Contrast (step-by-step logic)
Animal receives a large reward for a behavior.
Reward is suddenly reduced to a smaller amount.
Behavior decreases to a level below animals that always received the small reward.
Performance drops more than expected.
This shows disappointment or comparison to prior reward.
Positive Contrast (step-by-step logic)
Animal receives a small reward.
Reward is suddenly increased to a larger amount.
Behavior increases to a level above animals that always received the large reward.
Performance exceeds baseline.
What This Shows
Behavior is not determined solely by reinforcement history in a simple strengthening way.
Animals compare current reward to previous reward.
Thorndike’s Law of Effect states that responses followed by satisfying consequences become stronger, and responses followed by discomfort become weaker. The theory assumes behavior is strengthened directly and gradually by its consequences. Contrast effects challenge this strict interpretation because behavior is not determined solely by the current reward. Instead, animals compare the current reward to previous rewards, showing that expectations and relative value influence responding — not just simple reinforcement strengthening.
Context
Appears under “Doubts about Thorndike’s Approach.”
Thorndike suggested gradual strengthening of responses through consequences.
__ __ suggests animals form expectations and compare outcomes.
Introduces cognitive evaluation into instrumental conditioning.
Difference from Law of Effect
Law of Effect → behavior strengthens or weakens based on consequences.
__ __ → behavior depends on comparison between previous and current reward value.
Memory Hook
Big becomes small → performance crashes → Negative __
Small becomes big → performance spikes → Positive __
Comparison drives behavior → __ __

Skinner Box (Operant Chamber)
Quoted slide information (exact wording from slides)
“Skinner box (or operant chamber) – Small apparatus with way for subject to make responses and for experimenter to deliver reinforcement”
“Often rats pressing levers or pigeons pecking response keys”
Definition (clear explanation)
__ __ is a controlled experimental apparatus used to study operant conditioning.
It allows:
The subject to emit a specific response (e.g., lever press, key peck)
The experimenter to deliver reinforcement immediately after the response
How It Works (step-by-step)
An animal (rat or pigeon) is placed inside the chamber.
The chamber contains a response device (lever or key).
When the animal performs the response, reinforcement (e.g., food pellet) can be delivered.
The experimenter controls reinforcement schedules.
This allows precise measurement of response rate and reinforcement timing.
What This Shows
Behavior can be systematically studied by manipulating reinforcement contingencies.
This apparatus made it possible to test:
Shaping
Chaining
Extinction
Partial reinforcement
Reinforcement schedules
Context
Part of Skinner’s operant approach.
Thorndike studied behavior in puzzle boxes. __ __ allowed more precise control and measurement of behavior-reinforcement relationships.
It is the primary tool used in operant conditioning research.
Difference from Thorndike’s Puzzle Box
Puzzle box → escape-based learning, less controlled.
__ __ → controlled environment with precise reinforcement delivery.
Memory Hook
Lever → response → food → controlled learning box → __ __

Extinction Burst
Quoted slide information (exact wording from slides)
“__ __: temporary increase of nonreinforced behavior when extinction starts”
Definition (clear explanation)
__ __ is the temporary increase in responding that occurs when reinforcement is first removed.
When a previously reinforced behavior stops producing reward, the behavior initially increases before it declines.
What Happens (step-by-step)
A behavior is consistently reinforced.
Reinforcement is suddenly withheld.
The organism increases responding (more frequent or more intense).
After this temporary spike, responding gradually decreases.
What This Shows
When reinforcement stops, the organism attempts to regain the lost reward by increasing effort.
Extinction does not immediately reduce behavior — it can temporarily intensify it.
Context
Occurs during operant extinction.
This is not new learning. It is a short-term reaction to the removal of reinforcement.
Often confused with spontaneous recovery, but they occur at different times.
Difference from Spontaneous Recovery
__ __ → immediate spike when reinforcement is removed.
Spontaneous recovery → behavior returns after time has passed.
Memory Hook
No reward? Try harder first → __ __

Spontaneous Recovery
Quoted slide information (exact wording from slides)
“__ __: extinction fades away as a result of the passage of time”
Definition (clear explanation)
__ __ is the reappearance of an extinguished behavior after a period of time has passed following extinction.
No new reinforcement is given.
What Happens (step-by-step)
A behavior is reinforced.
Reinforcement is removed → responding decreases (extinction).
Time passes with no reinforcement.
The behavior briefly returns.
The response typically weakens again quickly if reinforcement is still absent.
What This Shows
Extinction does not erase the original learning.
The memory of reinforcement remains stored.
Extinction reflects reduced performance, not deletion of learning.
Context
Operant conditioning → recovery effects.
This differs from resurgence, which requires extinction of an alternative behavior.
Here, time alone produces the temporary return.
Difference from Extinction Burst
Extinction burst → immediate increase when reinforcement stops.
__ __ → delayed return after time passes.
Difference from Resurgence
Resurgence → behavior returns because competing behavior loses reinforcement.
__ __ → behavior returns because time has passed.
Memory Hook
Time passes → behavior comes back → __ __

Discrimination Hypothesis
Quoted slide information (exact wording from slides)
“__ __: more difficult to distinguish between acquisition phase and extinction phase”
Definition (clear explanation)
__ __ explains the Partial Reinforcement Extinction Effect by arguing that extinction is harder to detect after partial reinforcement.
Because nonreward occurs during acquisition, the shift to extinction is less obvious.
Step-by-Step Logic
During continuous reinforcement:
Every response → reward.
When reward stops → clear signal that extinction has begun.
Responding declines quickly.
During partial reinforcement:
Some responses → no reward.
Nonreward is normal during training.
When reward stops completely → difficult to detect change.
Responding continues longer.
What This Shows
Resistance to extinction under partial reinforcement occurs because the subject cannot clearly discriminate when extinction begins.
The problem is detection, not emotion or memory.
Context
One explanation of the Partial Reinforcement Extinction Effect.
This hypothesis focuses on difficulty distinguishing acquisition from extinction.
It contrasts with:
Frustration hypothesis (emotion-based)
Sequential hypothesis (memory-based)
Difference from Frustration Hypothesis
__ __ → persistence due to difficulty detecting extinction.
Frustration hypothesis → persistence due to frustration becoming associated with reward.
Difference from Sequential Hypothesis
__ __ → persistence due to discrimination difficulty.
Sequential hypothesis → persistence due to learned memory of nonreward sequences.
Memory Hook
Can’t tell extinction started → keep responding → __ __

Frustration Hypothesis
Quoted slide information (exact wording from slides)
“__ __: responding while frustrated becomes associated with reinforcement”
Definition (clear explanation)
__ __ explains the Partial Reinforcement Extinction Effect by arguing that frustration becomes part of the learning process.
Under partial reinforcement, the subject sometimes responds and receives no reward. This creates frustration. Over time, frustration becomes associated with eventual reinforcement.
So during extinction, frustration does not stop responding — it motivates continued responding.
Step-by-Step Logic
During partial reinforcement training:
Subject responds.
Sometimes receives reward.
Sometimes receives no reward.
Nonreward produces frustration.
Eventually, reward still occurs after frustration.
Result: Frustration becomes a cue that reward may still be coming.
During extinction:
No reward is delivered.
Frustration occurs.
But frustration previously predicted reward.
Subject keeps responding.
What This Shows
Persistence under partial reinforcement is driven by emotional conditioning.
The subject has learned to respond despite frustration.
Context
One of three explanations for the Partial Reinforcement Extinction Effect.
This hypothesis focuses on emotional processes.
Contrast with:
Discrimination hypothesis → detection problem.
Sequential hypothesis → memory for trial sequences.
Difference from Discrimination Hypothesis
__ __ → persistence because frustration becomes associated with reinforcement.
Discrimination hypothesis → persistence because extinction is hard to detect.
Difference from Sequential Hypothesis
__ __ → emotional association with frustration.
Sequential hypothesis → memory for sequences of nonrewarded trials.
Memory Hook
Frustration predicts reward → keep going → __ __

Sequential Hypothesis
Quoted slide information (exact wording from slides)
“__ __: memory of sequence of nonrewarded trials becomes associated with reinforcement”
Definition (clear explanation)
__ __ explains the Partial Reinforcement Extinction Effect by arguing that subjects remember sequences of rewarded and nonrewarded trials.
During partial reinforcement, subjects experience patterns like:
Nonreward → Nonreward → Reward
Over time, these sequences become associated with reinforcement.
So during extinction, repeated nonreward feels like part of a familiar sequence that usually ends in reward.
Step-by-Step Logic
During partial reinforcement training:
Subject responds.
Sometimes no reward.
Sometimes multiple nonrewarded trials occur.
Eventually reward follows.
The subject learns: Nonreward can be followed by reward.
During extinction:
No reward occurs.
This feels similar to previous nonreward sequences.
Subject expects reward may still occur.
Responding continues.
What This Shows
Persistence is driven by memory for reinforcement sequences.
Extinction resembles past training patterns.
Context
Third explanation for the Partial Reinforcement Extinction Effect.
This hypothesis focuses on memory processes.
Compare to:
Discrimination hypothesis → difficulty detecting extinction.
Frustration hypothesis → emotional conditioning to frustration.
All three explain why partial reinforcement produces resistance to extinction.
Difference from Frustration Hypothesis
__ __ → persistence due to learned memory of nonreward sequences.
Frustration hypothesis → persistence due to emotional conditioning.
Difference from Discrimination Hypothesis
__ __ → persistence due to remembered sequences.
Discrimination hypothesis → persistence due to failure to detect phase change.
Memory Hook
Nonreward usually ends in reward → keep responding → __ __

What the Axes Mean
Y-axis = cumulative number of responses
X-axis = time
The steeper the line → the faster the responding.
What Each Line Represents
Variable Ratio (steepest line)
Very high responding
No pauses
Straight steep line
Most responses overall
Why? Unpredictable reward based on responses → keep responding constantly.
Fixed Ratio
High responding
Small pauses after reinforcement
Step-like pattern
Pause → rapid responding → pause → rapid responding
Variable Interval
Moderate, steady responding
Straight but less steep than ratio schedules
No big pauses
Because timing is unpredictable.
Fixed Interval
Scalloped pattern
Pause after reinforcement
Then increasing responding as time approaches
Scalloping = slow responding right after reinforcement, then accelerating responding as the fixed time interval gets closer.
This creates the curved “shell-like” shape on the graph.
What You Must Be Able to Say on the Exam
Ratio schedules produce higher response rates than interval schedules.
Variable schedules produce steadier responding than fixed schedules.
Variable ratio produces the highest overall response rate.
Fixed interval produces scalloping.
If you can state those four things, you understand this image.
What They Might Ask
Which schedule produces the steepest line?
Which schedule produces post-reinforcement pauses?
Why does fixed interval show scalloping?
Which schedule is most resistant to extinction? (Variable ratio)

Matching Law
Quoted slide information (exact wording from slides)
“__ __ (Herrnstein, 1961): Responses are distributed to reflect the distribution of reinforcements.”
“The proportion of responses that a subject makes of a certain kind matches the proportion of reinforcement that is received from the responses.”
Core Definition
__ __ states that organisms distribute their responses across options in proportion to the rate of reinforcement each option provides.
Response ratio ≈ Reinforcement ratio.
What This Actually Means (clear explanation)
When two response options are available at the same time, organisms divide their behavior according to how much reinforcement each option produces.
They do not commit exclusively to one option.
They allocate behavior proportionally.
More reinforcement → more responding.
Less reinforcement → less responding.
But in matching proportions.
Classic Operant Setup (this is the real Matching Law example)
Two variable-interval schedules are available simultaneously.
Example: Option A produces 70% of total reinforcement.
Option B produces 30%.
Observed result: About 70% of responses go to A.
About 30% go to B.
That proportional allocation is __ __.
This was demonstrated primarily in animal operant experiments (e.g., pigeons pecking two keys).
Clean Simple Example
If Source A gives food twice as often as Source B, the animal will respond to Source A about twice as often as Source B.
Behavior is divided according to reinforcement rate.
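The proportional rule above can be written as Herrnstein's equation: responses_A / (responses_A + responses_B) = reinforcements_A / (reinforcements_A + reinforcements_B). A one-line sketch (illustrative only):

```python
def matching_prediction(reinf_a, reinf_b):
    """Predicted share of responses allocated to option A under the matching law:
    response ratio ≈ reinforcement ratio."""
    return reinf_a / (reinf_a + reinf_b)

# Option A yields 70% of reinforcements, B yields 30%:
print(matching_prediction(70, 30))  # → 0.7, so ~70% of responses go to A

# A pays off twice as often as B → A gets about two-thirds of responses:
print(matching_prediction(2, 1))    # ≈ 0.67
```

The key point for the exam is that the predicted allocation is proportional — the organism does not put 100% of responses on the richer option.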
Difference from Probability Matching (Important Exam Distinction)
__ __ applies to operant conditioning situations in which organisms distribute responses in proportion to obtained reinforcement rates.
Probability matching refers to a human tendency to match choice frequency to outcome probabilities in prediction tasks, even when consistently choosing the more probable option would yield more total rewards.
Matching Law → reinforcement-based responding in operant tasks.
Probability matching → cognitive probability-based choice in prediction tasks.
They look similar numerically, but they come from different experimental frameworks.
Why This Matters
It shows that organisms often distribute behavior proportionally instead of exclusively choosing the highest-payoff option.
Choice behavior reflects reinforcement distribution.
Memory Hook
Reinforcement ratio = Response ratio → __ __

Probability Matching
Quoted slide information (exact wording from slides)
“__ __: A related pattern is found in human cognition. The probability of choosing an option tends to match the probability of that option succeeding.”
“Example: Friedman et al. (1964): Subjects predict which of two lights will come on. If one light comes on 80% and the other comes on 20%, they’ll pick the first one 80% of the time. This does not maximize successes (one should choose more probable choice all the time).”
Core Definition
__ __ is the tendency for people to choose options in proportion to their probability of success rather than always choosing the most likely option.
Choice probability ≈ Outcome probability.
Step-by-Step Example (from slide)
Two lights:
Light A turns on 80% of the time.
Light B turns on 20% of the time.
Participants must predict which light will turn on.
What people typically do: • Choose Light A about 80% of the time.
• Choose Light B about 20% of the time.
That proportional responding is __ __.
Why This Is Irrational (Important Insight)
If you always chose the 80% light, you would be correct 80% of the time.
But with probability matching:
Accuracy = (0.8 × 0.8) + (0.2 × 0.2)
= 0.64 + 0.04
= 0.68 (68%)
So __ __ produces fewer correct responses than always choosing the higher-probability option.
People behave as if they are trying to track probabilities rather than maximize total reward.
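The accuracy arithmetic above generalizes to any probability p. A small sketch (function names invented for the example) makes the matching-vs-maximizing gap explicit:

```python
def matching_accuracy(p):
    """Expected accuracy if you choose each option with its own probability:
    correct when you pick A and A occurs, or pick B and B occurs."""
    return p * p + (1 - p) * (1 - p)

def maximizing_accuracy(p):
    """Expected accuracy if you always choose the more probable option."""
    return max(p, 1 - p)

p = 0.8
print(matching_accuracy(p))    # 0.8*0.8 + 0.2*0.2 = 0.68
print(maximizing_accuracy(p))  # 0.8 — always picking the 80% light wins
```

Maximizing beats matching for every p other than 0.5, which is why probability matching counts as a failure to maximize.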
Difference from Matching Law
Matching Law occurs in operant conditioning tasks where organisms distribute responses in proportion to obtained reinforcement rates.
__ __ occurs in human prediction tasks where people match their choice frequency to outcome probabilities.
Matching Law is driven by reinforcement contingencies.
__ __ reflects probabilistic reasoning under uncertainty.
They look numerically similar, but the mechanisms and experimental contexts differ.
What This Shows
Humans often allocate behavior proportionally instead of choosing the single best option.
Decision-making under uncertainty does not always maximize reward.
Memory Hook
80% chance → choose it 80% of the time → __ __

Ephemeral Reward Task
Quoted slide information (exact wording from slides)
“In the ERT, subjects are given a choice between two responses on each trial, each leading to identical rewards. If A is chosen, reward is given and trial ends. If B is chosen, reward is given but A response is still available”
“The optimal strategy should be to always first make the response that will keep the other response available (Option B) and get 2 reinforcements.”
“Often, they pick A and B 50-50.”
Core Definition
__ __ is a choice task showing that organisms often fail to choose the option that maximizes total reward.
Even when one choice clearly leads to more reinforcement, subjects frequently choose suboptimally.
Step-by-Step Procedure (Salwiczek et al., 2012)
On each trial, two options:
Option A:
Choose A → get reward
Trial ends
Option B:
Choose B → get reward
Option A remains available
You can then choose A and get another reward
Optimal strategy: Always choose B first. This gives two rewards per trial.
What Actually Happens
Many animals — and many humans — do not consistently choose B first.
Instead, they often choose A and B about 50-50.
They fail to maximize total reinforcement.
Mueller et al. (2024): Humans often report being confused after many trials.
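The cost of the 50-50 strategy is easy to work out. Choosing B first yields 2 rewards (B pays, and A is still available); choosing A first ends the trial with 1 reward. A quick sketch (illustrative names):

```python
def expected_rewards_per_trial(p_choose_b_first):
    """Expected rewards per ERT trial, given the probability of choosing B first.

    B first → 2 rewards (B pays, then A is still available).
    A first → 1 reward (the trial ends).
    """
    return 2 * p_choose_b_first + 1 * (1 - p_choose_b_first)

print(expected_rewards_per_trial(1.0))  # optimal: always B first → 2.0 per trial
print(expected_rewards_per_trial(0.5))  # observed 50-50 choosing → 1.5 per trial
```

So the typical 50-50 pattern gives up a quarter of the available reward — a clear failure to maximize.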
What This Shows
Choice behavior does not always maximize reward.
Organisms sometimes focus on immediate reinforcement rather than long-term optimal strategy.
Even when rewards are identical, structure matters.
Context
This builds on:
Matching Law
Probability matching
All deal with how organisms allocate behavior across options.
This task highlights limits of rational decision-making.
Difference from Matching Law
Matching Law → proportional responding based on reinforcement rates.
__ __ → failure to choose the option that produces the greatest total reward.
Matching involves proportional allocation. This task tests maximizing strategy directly.
Memory Hook
Choose B first = 2 rewards.
But subjects often split choices → __ __

Self-Control
Quoted slide information (exact wording from slides)
“Studied behaviorally by requiring choice between immediate small reward and larger delayed reward”
“Discounting: Future rewards are not valued as highly as immediate rewards; future punishments are not feared as much as immediate punishments”
“Degree of discounting varies across species”
“Within humans, degree of discounting also varies between individuals and experimental conditions”
Core Definition (In Practical Terms)
__ __ is the ability to choose a larger delayed reward instead of a smaller immediate reward.
It is about resisting immediate gratification.
Practical Example (This Is What You Remember)
Would you rather:
• Get $5 right now
OR
• Get $10 next week
If you choose $5 now → low self-control
If you wait for $10 → high self-control
That’s the whole concept.
What “Discounting” Means (Simple and Real)
Your brain treats future rewards as less valuable.
Even if the future reward is bigger, it feels smaller because you have to wait.
The longer the delay → the less valuable it feels.
That decrease in perceived value over time = discounting.
No math. Just psychological shrinking.
What This Explains
Some people: • Can wait for bigger rewards
• Future rewards still feel important
Other people: • Hate waiting
• Future rewards feel almost worthless
• Choose immediate reward
That difference is the degree of discounting.
What the Slide Means by “Degree Varies”
Across species: Some animals discount very steeply (very impulsive).
Within humans: Some people are very patient. Some people are very impulsive. Situations can also change how much people discount.
Context
Operant conditioning → choice behavior.
Earlier terms: • Matching Law → divide responses by reinforcement rate
• Ephemeral Reward Task → failure to maximize
• __ __ → focuses specifically on delay and impulse control
This is about time affecting value.
Exam-Safe Summary
__ __ = choosing bigger later over smaller now.
Discounting = future rewards feel less valuable.
Memory Hook
Small now vs bigger later → __ __

Reward / Positive Reinforcement
Exact Slide Wording (term blanked)
“TWO DIMENSIONS: Reinforcing Consequence (Pleasant, Unpleasant) vs. Response-Outcome Relationship (Produces, Prevents)”
“__ (sometimes called positive reinforcement)”
Definition:
__ happens when a behavior causes something pleasant to appear, and because of that pleasant outcome, the behavior becomes more likely in the future.
Behavior → pleasant outcome appears → behavior increases.
That’s it at the core.
Why the Slide Mentions “Two Dimensions”
Every consequence can be understood by asking two questions:
Did the behavior make (produce) something happen, or stop (prevent) something from happening?
Was that outcome pleasant or unpleasant?
Reward fits into this framework because:
• The behavior makes something happen (it produces it).
• What happens is pleasant.
So reward = behavior causes something good to appear.
That’s why the “dimensions” matter — they show how reward is one specific type of consequence among several possible ones.
What Is the Reward in the Rat Example?
Rat presses lever.
Food appears immediately after.
The reward is the food delivered because of the lever press.
It is the fact that:
The lever press caused the food to appear.
Because the rat experiences food as pleasant, the brain strengthens the lever press.
Next time the rat is in that situation, it is more likely to press again.
That strengthening process is reward.
Why This Connects to the Rest of the Chapter
Thorndike said: Behaviors followed by satisfaction become stronger.
Reward is the modern operant version of that idea.
Every time you see:
• Shaping
• Chaining
• Reinforcement schedules
• Matching Law
What is increasing behavior in all those cases?
Reward.
So reward is not a side concept.
It is the engine behind behavior increasing.
Clean Comparison to Punishment
Reward → behavior causes something pleasant → behavior increases.
Punishment → behavior causes something unpleasant → behavior decreases.
Increase vs decrease is the real difference.
Memory Line
Behavior causes something good to appear → behavior goes up → __

Punishment
Quoted slide information (exact wording from slides)
“Punishment”
(From slide structure under consequences)
Core Definition (Simple and Direct)
____ occurs when a behavior produces an unpleasant consequence, which decreases the likelihood that the behavior will happen again.
Behavior → something bad happens → behavior decreases.
Simple Example
A rat presses a lever → receives a mild shock → presses the lever less.
A child touches a hot stove → feels pain → avoids touching it again.
The unpleasant outcome weakens the behavior.
What This Means in Plain Terms
Reward makes behavior stronger.
___ makes behavior weaker.
It is still about consequences controlling behavior — just in the opposite direction.
When Does It Work? (From Slide)
The slide says punishment effectiveness depends on:
• Consistency (must happen every time)
• Delay (should happen immediately)
• Intensity (must be strong enough to matter)
If punishment is inconsistent, delayed, or weak → behavior may not decrease.
Side Effects (From Slide)
Punishment can produce:
• Conditioned fear
• Aggression
So punishment does not simply “erase” behavior. It can produce emotional consequences.
Context
We are still in Instrumental Conditioning.
Thorndike’s Law of Effect said:
Behaviors followed by satisfaction strengthen.
Behaviors followed by discomfort weaken.
__ __ is the “discomfort” side.
It is one of the core consequence types that define operant conditioning.
Difference from Reward
Reward → behavior produces something pleasant → behavior increases.
___ → behavior produces something unpleasant → behavior decreases.
Memory Hook
Do something → something bad happens → do it less → ___

Omission Training
Core Definition (Simple and Direct)
__ __ occurs when a behavior prevents a pleasant consequence from occurring, which decreases the likelihood of that behavior.
Behavior → you lose something good → behavior decreases.
What This Means (Plain Language)
Instead of adding something bad (like punishment),
You remove something good.
And because something good is taken away, the behavior becomes less likely.
Simple Example
A child talks during class → loses recess time.
The behavior (talking) caused the loss of something pleasant (recess).
Result: talking decreases.
Another example: A rat presses a lever → food that was about to be delivered is cancelled.
Behavior → reward is prevented.
Why This Is Different from Punishment
Punishment → behavior produces something unpleasant.
__ __ → behavior causes something pleasant to be removed or withheld.
Punishment adds something bad.
__ __ removes something good.
Both reduce behavior — but through different consequences.
Context
We are still defining the different ways consequences can change behavior in instrumental conditioning.
Earlier:
Reward increases behavior.
Punishment decreases behavior.
Now: __ __ shows that removing a pleasant consequence can also reduce behavior.
This is sometimes called negative punishment in psychology.
Memory Hook
Do something → lose something good → do it less → __ __

Avoidance / Negative Reinforcement
Quoted slide information (exact wording from slides)
“__”
“two-process (Watson-Mowrer) theory: Classical conditioning of fear to warning signal and reduction of fear as reinforcement”
“cognitive theory of avoidance: subject develops expectations (e.g., no shock if I respond; shock if I don’t)”
Core Definition (Simple and Direct)
____ or negative reinforcement occurs when a behavior prevents an unpleasant stimulus from happening, which increases the likelihood of that behavior.
Behavior → something bad is prevented → behavior increases.
What This Means (Plain Language)
The unpleasant event has NOT happened yet.
You perform a behavior.
Because you perform the behavior, the bad thing never occurs.
That makes you more likely to perform that behavior again.
Simple Example
A rat hears a warning tone that signals shock.
If it presses a lever during the tone, shock does not occur.
The rat learns to press the lever during the tone.
Shock never happens — but the behavior strengthens.
Another example: You leave early to avoid traffic. You never experience the traffic. Leaving early becomes more likely next time.
Key Difference from Escape
Escape → behavior stops something unpleasant that is already happening.
____ → behavior prevents something unpleasant from happening at all.
Why This Term Is Important
Avoidance creates a puzzle:
If the shock never happens, how does the behavior stay strong?
What is reinforcing the behavior?
That question leads directly to:
• Two-process theory
• Cognitive theory of avoidance
So this term is central to theory development.
Context
We are still in instrumental conditioning.
Now we are dealing with aversive control of behavior.
Earlier:
Reward strengthens behavior by producing something pleasant.
Escape strengthens behavior by stopping something unpleasant.
____ strengthens behavior by preventing something unpleasant.
Memory Hook
Stop it from happening at all → ___

Two-Process Theory (Watson-Mowrer theory)
Quoted slide information (exact wording from slides)
“Classical conditioning of fear to warning signal and reduction of fear as reinforcement”
“Evidence against » Sidman avoidance procedure (no warning signal) » No sign of fear after extended training”
Core Definition (Simple and Direct)
__ __ explains avoidance behavior using two learning processes:
Classical conditioning of fear to a warning signal
Instrumental reinforcement through reduction of fear
Step-by-Step Explanation
Process 1: Classical Conditioning
A warning signal (like a tone) is paired with shock.
The tone becomes associated with shock.
The tone now produces fear.
Process 2: Instrumental Conditioning
The subject performs a response (e.g., presses lever).
The warning signal (the tone) stops, and the fear it elicits subsides.
Fear reduction reinforces the behavior.
So:
Tone → fear
Response → fear stops
Fear reduction = reinforcement
That keeps avoidance behavior going.
Why This Theory Was Proposed
Avoidance is confusing because:
If shock never happens, what reinforces the behavior?
This theory says:
The reduction of fear is what reinforces the behavior.
Not shock removal — fear removal.
Evidence Against It (From Slide)
Sidman avoidance procedure: No warning signal is given. Yet animals still learn avoidance. No tone → no conditioned fear.
No sign of fear after extended training: Animals continue avoiding even when fear responses are minimal.
This weakens the classical-conditioning explanation.
Context
We are explaining avoidance behavior.
Earlier:
Escape = remove ongoing shock.
Avoidance = prevent shock.
Now: __ __ attempts to explain why avoidance persists.
This is a bridge between classical and instrumental conditioning.
Difference from Cognitive Theory of Avoidance
__ __ → fear reduction reinforces behavior.
Cognitive theory → behavior is guided by expectations about outcomes.
Two-process = emotional mechanism.
Cognitive theory = expectancy mechanism.
Memory Hook
Fear is learned → fear is reduced → behavior strengthens → __ __

Escape / Negative Reinforcement
Core Definition (Simple and Direct)
___ occurs when a behavior produces the removal of an unpleasant stimulus, which increases the likelihood of that behavior.
Behavior → something bad stops → behavior increases.
What This Means (Plain Language)
Something unpleasant is already happening.
You perform a behavior.
The unpleasant thing goes away.
Because it goes away, you are more likely to perform that behavior again.
Simple Example
A rat receives a mild shock → presses a lever → shock stops.
Because pressing the lever ended the shock, the rat presses it faster next time shock begins.
Another example: You have a headache → take medication → pain decreases.
Taking medication becomes more likely next time you feel pain.
Key Point
The unpleasant stimulus is already present.
The behavior terminates it.
That’s the defining feature.
Difference from Avoidance
____ → behavior stops something unpleasant that is already happening.
Avoidance → behavior prevents something unpleasant from happening at all.
Escape = stop it.
Avoidance = prevent it.
Context
Still in instrumental conditioning.
Earlier:
Reward → behavior produces something pleasant.
Omission training → behavior removes something pleasant.
__ __ → behavior removes something unpleasant.
This term explains how organisms learn to terminate aversive events.
Memory Hook
Something bad is happening → behavior makes it stop → do it again → __ __

Cognitive Theory of Avoidance
Quoted slide information (exact wording from slides)
“___ ___ __ ___: subject develops expectations (e.g., no shock if I respond; shock if I don’t)”
Core Definition (Simple and Direct)
__ __ explains avoidance behavior by proposing that the subject forms expectations about the consequences of responding or not responding.
Behavior is guided by learned beliefs about outcomes.
What This Means (Plain Language)
The subject thinks:
“If I respond, no shock.”
“If I don’t respond, shock.”
So the behavior continues because the subject expects it to prevent the shock.
No fear-reduction mechanism is required.
It’s about prediction.
Why This Theory Was Proposed
Two-process theory said:
Fear reduction reinforces avoidance.
But evidence showed:
Avoidance can occur without a warning signal (the Sidman avoidance procedure).
Animals may not show signs of fear after extended training.
So instead of fear reduction, this theory says:
The subject learns the contingency: Response → no shock
No response → shock
And acts based on that expectation.
Simple Example
A rat learns:
Press lever → no shock
Do nothing → shock
Even if the rat is not visibly afraid, it keeps pressing because it expects that pressing prevents shock.
Context
This is another explanation for avoidance behavior.
Two-process theory → emotional (fear-based).
__ __ → cognitive (expectation-based).
This represents a shift toward cognitive explanations in learning theory.
Difference from Two-Process Theory
Two-process theory → fear reduction reinforces behavior.
__ __ → learned expectation about response–outcome relationship maintains behavior.
Emotion vs expectation.
Memory Hook
“If I respond, no shock” → __ __

The Four Basic Instrumental Conditioning Contingencies
What This Diagram Is Showing
All instrumental conditioning consequences can be understood using two questions:
Does the behavior produce it or prevent it?
Is the outcome pleasant or unpleasant?
That’s it.
Those two questions create four possible combinations.
Step 1: Pleasant vs Unpleasant
Pleasant = appetitive (something good, like food, praise, money)
Unpleasant = aversive (something bad, like shock, pain, noise)
Step 2: Produces vs Prevents
Produces = the behavior makes something happen.
Prevents = the behavior stops something from happening or removes it.
The Four Boxes Explained Clearly
1⃣ Behavior Produces Something Pleasant
→ Behavior increases
This is Reward
Example: Press lever → get food → press more.
2⃣ Behavior Produces Something Unpleasant
→ Behavior decreases
This is Punishment
Example: Touch stove → feel pain → touch less.
3⃣ Behavior Prevents Something Pleasant
→ Behavior decreases
This is Omission Training
Example: Misbehave → lose recess → misbehave less.
4⃣ Behavior Prevents Something Unpleasant
→ Behavior increases
This is Escape / Avoidance (Negative Reinforcement)
Example: Press lever → shock stops (escape)
Press lever → shock never happens (avoidance)
Press more.
The Core Logic You Must Remember
Producing pleasant things strengthens behavior.
Producing unpleasant things weakens behavior.
Preventing pleasant things weakens behavior.
Preventing unpleasant things strengthens behavior.
The Pattern (This Is the Trick)
Pleasant produced → increase
Unpleasant produced → decrease
Pleasant prevented → decrease
Unpleasant prevented → increase
Opposites matter.
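The 2×2 pattern above is just a lookup over two binary questions, which can be sketched in a few lines of Python (a purely illustrative sketch; the function and label names are my own, not from the slides):

```python
# Map (outcome valence, behavior's relation to outcome) -> (contingency, effect).
# Encodes the 2x2 table: produce-pleasant and prevent-unpleasant strengthen
# behavior; produce-unpleasant and prevent-pleasant weaken it.
CONTINGENCIES = {
    ("pleasant", "produces"): ("Reward", "increases"),
    ("unpleasant", "produces"): ("Punishment", "decreases"),
    ("pleasant", "prevents"): ("Omission training", "decreases"),
    ("unpleasant", "prevents"): ("Escape/Avoidance (negative reinforcement)", "increases"),
}

def classify(valence, relation):
    """Return (contingency name, effect on behavior) for one cell of the 2x2 table."""
    return CONTINGENCIES[(valence, relation)]

# Example: a behavior that prevents an aversive stimulus.
print(classify("unpleasant", "prevents"))
```

Answering the quiz question at the end of this section is just `classify("unpleasant", "prevents")`, which names the negative-reinforcement cell and reports that behavior increases.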
Context
This diagram is the structural foundation of instrumental conditioning.
Everything you learned about:
Reward
Punishment
Omission training
Escape
Avoidance
comes directly from this structure.
If you understand this diagram, you understand how consequences control behavior.
If I say:
“A behavior prevents an aversive stimulus.”
Does behavior increase or decrease?