Module 1

Key Concepts

  1. Learning Basics

    • Learning constantly shapes human behavior.

    • Two primary types:
      a. Classical (Pavlovian) Conditioning – learning associations between environmental stimuli and significant events.
      b. Instrumental (Operant) Conditioning – learning associations between our own actions and their consequences (rewards or punishments).

  2. Classical (Pavlovian) Conditioning

    • Famous example: Pavlov’s dog

      • Dog salivates (response) to the bell (stimulus) after learning that the bell predicts food.

    • Key idea: Stimulus → Stimulus association (the bell comes to predict food, so it triggers the response on its own)

    • Real-world relevance:

      • Anxiety in crowded places

      • Dislike for alarms

      • Aversion to foods after one bad experience

    • Extends beyond behavior; it can shape preferences, dislikes, and even aspects of identity.

  3. Instrumental (Operant) Conditioning

    • Focuses on behavior → consequence.

    • Example: Rewarded behavior is likely to repeat; punished behavior is less likely.

    • Shows how our actions are shaped by outcomes.

  4. Observational Learning

    • Learning by watching others, largely separate from classical and operant conditioning.

    • Social Learning Theory identifies four components: attention, retention, initiation (reproduction), and motivation.


Learning Objectives (What You Should Be Able to Do)

  1. Distinguish classical vs. operant conditioning.

  2. Understand how each type of learning works.

  3. Understand their combined influence on behavior outside the lab.

  4. List the four components of observational learning.


Why This Matters

  • Learning principles explain everyday human behavior.

  • They are foundational for understanding normal and disordered behavior, preferences, fears, and habits.

  • Classical conditioning shows that even subtle environmental cues can shape emotions and choices.

  • Operant conditioning explains motivation and self-regulation through consequences.

Classical (Pavlovian) Conditioning

Definition: Learning to associate a neutral stimulus with a psychologically significant event, so that the neutral stimulus eventually triggers a response on its own.

Key Terms:

  • Unconditioned Stimulus (US): Naturally triggers a response (e.g., food → drooling).

  • Unconditioned Response (UR): Natural response to the US (e.g., drooling at food).

  • Conditioned Stimulus (CS): Originally neutral, becomes meaningful after pairing with a US (e.g., bell).

  • Conditioned Response (CR): Learned response to the CS, similar to the UR (e.g., drooling at the bell).

Examples:

  • Pavlov’s Dog: Bell (CS) + food (US) → drooling (CR).

  • Food Poisoning: Fish (CS) + illness (US) → nausea (CR).

  • Alarm Clock: Tone (CS) + waking early (US) → grumpiness (CR).

  • Fast-Food Logo: Logo (CS) + eating food (US) → salivation (CR).

Importance:

  • Explains everyday emotional and physiological responses.

  • Still widely used to study human and animal learning.


Operant (Instrumental) Conditioning

Definition: Learning where a behavior is associated with a consequence (reward or punishment). The behavior is voluntary and operates on the environment.

Key Terms:

  • Operant Behavior: Voluntary actions an organism performs.

  • Reinforcer: A consequence that strengthens the behavior.

Examples:

  • Skinner Box (Rat): Rat accidentally presses lever → receives food → presses lever more often.

  • Video Game Shortcut: Experimenting with different paths → finds faster route → repeats that path.

Importance:

  • Explains how actions are shaped by their consequences.

  • Applies to humans and animals in learning, habit formation, and skill acquisition.


Comparison: Classical vs. Operant Conditioning

Feature        | Classical Conditioning       | Operant Conditioning
-------------- | ---------------------------- | --------------------------------------------
Focus          | Association between stimuli  | Association between behavior and consequence
Behavior Type  | Involuntary/reflexive        | Voluntary
Example        | Salivating to bell           | Pressing lever for food
Key Concept    | CS paired with US → CR       | Behavior reinforced or punished → more/less likely to occur

Operant Conditioning

Definition:
Operant conditioning studies how the consequences of a behavior affect the likelihood of that behavior happening again.

Key Principle – Thorndike’s Law of Effect:

  • Behaviors that lead to satisfying/positive outcomes → more likely to occur again.

  • Behaviors that lead to negative/painful outcomes → less likely to occur again.

Key Terms:

  • Reinforcer: Consequence that increases the probability of a behavior.

  • Punisher: Consequence that decreases the probability of a behavior.


Examples

  1. Rat in a Skinner Box:

    • Presses lever → gets food pellet (reinforcer) → presses lever more.

    • Behavior is voluntary; rat actively engages with environment.

  2. Student and Grades:

    • Good grade (reward) → positive emotion.

    • Speaking up in class on relevant topics → earns participation points (reinforcer) → repeats behavior.

    • Speaking up about irrelevant topics → loses points (punisher) → behavior decreases.
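
As a rough sketch of the law of effect in the student example, the snippet below (not from the notes; the step size and starting probabilities are arbitrary assumptions) nudges the probability of a behavior up after a reinforcer and down after a punisher.

```python
# Toy law-of-effect sketch (illustrative values): the probability of a behavior
# is nudged up after a reinforcer and down after a punisher.

def update(prob, consequence, step=0.2):
    """Return a new probability after one consequence ('reinforcer' or 'punisher')."""
    if consequence == "reinforcer":
        prob += step * (1.0 - prob)   # move toward 1
    elif consequence == "punisher":
        prob -= step * prob           # move toward 0
    return prob

p_relevant = 0.5    # speaking up on relevant topics
p_irrelevant = 0.5  # speaking up on irrelevant topics

for _ in range(5):
    p_relevant = update(p_relevant, "reinforcer")    # earns participation points
    p_irrelevant = update(p_irrelevant, "punisher")  # loses points

print(f"relevant comments:   {p_relevant:.2f}")    # rises toward 1 (~0.84)
print(f"irrelevant comments: {p_irrelevant:.2f}")  # falls toward 0 (~0.16)
```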

Key Insight:

  • Operant conditioning focuses on voluntary behaviors influenced by consequences.

  • Classical conditioning, in contrast, focuses on involuntary behaviors triggered automatically by stimuli (e.g., drooling in Pavlov’s dogs).


Summary Comparison: Classical vs. Operant Conditioning

Feature         | Classical Conditioning   | Operant Conditioning
--------------- | ------------------------ | ----------------------------
Behavior        | Involuntary (reflexive)  | Voluntary
Trigger         | Stimulus → Response      | Behavior → Consequence
Example         | Dog drools to bell       | Rat presses lever for food
Learner's Role  | Passive participant      | Active participant

Takeaway:

  • Operant conditioning shows that voluntary behavior is strongly shaped by rewards and punishments, allowing organisms to adapt their actions to achieve desired outcomes.

Classical Conditioning: Beyond Simple Reflexes

Core Idea:

  • Classical conditioning does more than elicit one simple reflex (like salivation).

  • A conditioned stimulus (CS) can trigger a whole system of responses that prepare the organism for a significant event (US).

Physiological and Behavioral Effects

  1. Food-related CSs:

    • CSs predicting food trigger salivation, gastric acid secretion, pancreatic enzymes, insulin release, and approach behavior.

    • Can cause increased eating even when full.

    • Examples in humans:

      • Sound of a chip bag opening

      • Logos like Coca-Cola

      • Comfortable couch in front of the TV

  2. Nutrient Preferences:

    • Flavors paired with beneficial nutrients (e.g., protein) become preferred automatically.

    • Example: Meat flavor (CS) signals protein (US) → cravings (CR).

  3. Taste Aversion:

    • Flavors associated with illness or negative experiences are avoided.

    • Example: Disliking tequila after getting sick.

    • Clinically relevant: Chemotherapy patients may develop aversions to foods eaten before treatment, or even to the clinic environment.


Emotional Effects of Classical Conditioning

  • CSs can also trigger emotional responses instead of physical reflexes.

  • Example:

    • Tone paired with mild shock → fear (CR) after one or two pairings.

    • This mechanism contributes to phobias and anxiety disorders (e.g., panic in malls or closed spaces).


Drug-Related Effects

  • Drug cues (e.g., room, smell, paraphernalia) can trigger physical or emotional responses similar to the drug itself.

  • Conditioned compensatory responses:

    • Body anticipates the drug and may counteract its effects (e.g., morphine cues → increased pain sensitivity).

    • Implications:

      • Tolerance is higher when drugs are taken with familiar cues.

      • Overdose risk increases in novel environments without familiar cues.

      • Compensatory responses may motivate continued drug use.


Interaction with Operant Behavior

  • Classical CSs motivate ongoing operant behavior.

  • Examples:

    • Rat presses lever harder when lever cues signal drug delivery.

    • Presence of food-related cues increases effort to obtain food.

    • Fear cues motivate avoidance behavior.

Takeaway:

  • Classical conditioning influences physiology, emotion, preferences, aversions, and motivation.

  • CSs often interact with operant behaviors, amplifying or directing voluntary actions.

The Learning Process in Classical Conditioning

Key Idea:

  • Simply pairing a CS (Conditioned Stimulus) and a US (Unconditioned Stimulus) is not always enough for learning to occur.

  • Learning requires surprise or prediction error—the outcome must differ from what is expected.

Example – Blocking (Kamin, 1969):

  1. Phase 1: CS A (bell) → US (food) → learning occurs.

  2. Phase 2: CS A + CS B (light) → US (food)

    • Animal does not learn CS B because CS A already predicts the US.

    • CS B adds no new predictive value; no learning occurs.

Real-World Analogy:

  • Supermarket stickers:

    • Star-shaped stickers indicate discounts → learned first.

    • Orange tags added later → ignored because star stickers already predict discount.
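
The blocking result can be reproduced with a small numerical sketch of prediction-error learning in the spirit of the Rescorla-Wagner model. The learning rate, trial counts, and the assumption that the US supports a maximum associative strength of 1.0 are illustrative choices, not values from the notes.

```python
# Prediction-error (Rescorla-Wagner-style) sketch of blocking, illustrative values.
# V[cs] is the associative strength of each CS; learning on each trial is driven
# by the prediction error: (outcome supported by the US) - (sum of V for CSs present).

ALPHA = 0.3      # learning rate (assumed)
LAMBDA_US = 1.0  # maximum strength the US supports (assumed)

def run_trials(V, present_css, us_present, n_trials):
    """Update associative strengths over n_trials pairings."""
    target = LAMBDA_US if us_present else 0.0
    for _ in range(n_trials):
        error = target - sum(V[cs] for cs in present_css)  # prediction error
        for cs in present_css:
            V[cs] += ALPHA * error
    return V

V = {"A": 0.0, "B": 0.0}                                  # A = bell, B = light

run_trials(V, ["A"], us_present=True, n_trials=20)        # Phase 1: A -> US
run_trials(V, ["A", "B"], us_present=True, n_trials=20)   # Phase 2: A + B -> US

# A already predicts the US in Phase 2, so the error is ~0 and B learns almost nothing.
print(f"V(A) = {V['A']:.2f}, V(B) = {V['B']:.2f}")  # roughly V(A) = 1.00, V(B) = 0.00
```

Training A + B together from the start (skipping Phase 1) gives both cues substantial strength, which is the contrast blocking experiments rely on.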

Factors Strengthening Classical Conditioning:

  • Intensity/Salience: Strong, noticeable CS and US.

  • Novelty: CS and US are relatively new.

  • Preparedness: Organism is biologically predisposed to associate certain stimuli (e.g., flavors with illness).


Erasing Classical Learning (Extinction)

Extinction:

  • CR decreases when CS is repeatedly presented without US.

  • Example: Bell rings without food → drooling gradually stops.
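
Continuing the same prediction-error sketch with illustrative parameters: in extinction the US is absent, so the expected outcome is 0 and associative strength falls across trials. This toy model treats extinction as unlearning; the caveats below stress that real extinction inhibits rather than erases the original association.

```python
# Extinction sketch using a prediction-error update (illustrative values).
ALPHA = 0.3
V_cs = 1.0   # strength after acquisition (the bell fully predicts food)

for trial in range(1, 11):
    error = 0.0 - V_cs          # US absent, so the supported outcome is 0
    V_cs += ALPHA * error       # associative strength (and the CR) declines
    print(f"extinction trial {trial:2d}: V = {V_cs:.2f}")
```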

Clinical Application:

  • Exposure therapy for phobias: Repeated exposure to feared stimulus (CS) without negative outcome (US) reduces fear response (CR).

Important Caveats:

  • Extinction does not erase original learning—it inhibits it.

  • Spontaneous Recovery: an extinguished CR can return after the passage of time.

    • Example: Smell of chalkboards triggers old fear of detention.

  • Renewal Effect: CR can return in a new context.

    • Example: Fear reappears when encountering the CS in a different location.

Implication for Therapy:

  • Extinction is more effective when applied in contexts where relapse is likely.


Instrumental (Operant) Conditioning

Factors Affecting Strength:

  • Larger reinforcers/punishers → stronger learning.

  • Behavior extinguishes if reinforcement stops.

Stimulus Control in Operant Learning:

  • Discriminative Stimulus (SD): Signals when a behavior will be reinforced.

    • Example: Lever press gives food only when light is on → behavior occurs only in presence of light.

  • Real-world analogy: Traffic lights

    • Green arrow → turn; red or green circle → don’t turn.
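
A minimal discrimination-training sketch (all parameters assumed): presses are reinforced only while the light is on, so the tendency to press comes under control of the light.

```python
import random

# Toy discrimination-learning sketch (illustrative parameters).
# The tendency to press is tracked separately for light-on and light-off trials;
# presses pay off only when the light is on.

random.seed(0)
press_tendency = {"light_on": 0.5, "light_off": 0.5}
ALPHA = 0.1

for _ in range(500):
    state = random.choice(["light_on", "light_off"])
    pressed = random.random() < press_tendency[state]
    if pressed:
        reinforced = (state == "light_on")                # food only with the light on
        target = 1.0 if reinforced else 0.0
        press_tendency[state] += ALPHA * (target - press_tendency[state])

print(press_tendency)   # pressing stays high under the light and drops off without it
```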

Key Insight:

  • Stimuli set the occasion for voluntary behavior but do not elicit it like classical CSs.

    • Example: Canvas in front of an artist → opportunity to paint, doesn’t compel painting.

Applications in Research:

  • Stimulus-control methods allow researchers to test perception and sensory abilities in animals.

    • Example: Rats learn to press lever in response to specific colors, ultrasounds, or magnetic fields.


Summary: Classical vs. Operant Conditioning Insights

Feature              | Classical Conditioning                   | Operant Conditioning
-------------------- | ---------------------------------------- | ------------------------------------------------------------------------
Focus                | CS → US → CR                             | Behavior → Consequence (reinforcer/punisher)
Learning Requirement | Surprise / prediction error              | Reinforcement or punishment
Extinction           | CR decreases if CS presented without US  | Behavior decreases if no reinforcement
Stimulus Role        | CS elicits CR                            | Discriminative stimulus signals opportunity but doesn’t elicit behavior



Higher Cognitive Processes in Operant Conditioning

Stimulus Discrimination and Categorization:

  • Animals can learn to categorize stimuli.

    • Example: Pigeons peck buttons for images of flowers, cars, chairs, or people.

    • They can generalize to new, unseen examples.

  • Stimulus-control methods allow researchers to study how these categories are learned.


Operant Conditioning Involves Choice

  • Operant behavior requires choosing one action over alternatives.

    • Example: Student chooses to go out drinking vs. studying; rat chooses to press lever vs. scratching or sleeping.

  • The reinforcers for alternative behaviors affect choice.

  • Quantitative Law of Effect (Herrnstein, 1970):

    • Choice depends on relative reinforcement rates and “cost” of behavior.

    • A reinforcer is less powerful if other reinforcers are available in the environment.
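
Herrnstein's quantitative law of effect is often written as a hyperbola, B = kR / (R + Re), where B is the rate of the behavior, R is the reinforcement that behavior earns, Re is reinforcement available from all other sources, and k is the maximum response rate. The snippet below simply evaluates that formula with made-up numbers to show how richer alternatives weaken a given reinforcer.

```python
# Herrnstein's hyperbola (quantitative law of effect), with made-up numbers.
def response_rate(r_this, r_other, k=100.0):
    """Predicted rate of a behavior given the reinforcement it earns (r_this)
    and the rate of all other reinforcement in the environment (r_other)."""
    return k * r_this / (r_this + r_other)

# Same reinforcement for studying, but poorer vs. richer alternatives:
print(response_rate(r_this=10, r_other=5))    # ~66.7: few alternatives -> more studying
print(response_rate(r_this=10, r_other=40))   # ~20.0: many alternatives -> less studying
```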


Cognition in Instrumental Learning

Goal-Directed Behavior:

  • Animals learn specific consequences of each behavior and perform actions based on how much they value the outcome.

Example – Reinforcer Devaluation Effect:

  1. Rat trained to press two levers → each lever gives a different reinforcer (e.g., sucrose vs. food pellet).

  2. One reinforcer (sucrose) paired with illness → taste aversion.

  3. Test without reinforcers: Rat avoids lever that gave the now-devalued reinforcer.

  • Conclusion: Behavior is goal-directed; animals consider the current value of the outcome, not just the action.

Habits:

  • Repeated, long-term instrumental actions can become automatic habits.

    • Example: Rat continues pressing lever for sucrose even after it is paired with illness.

  • Habits reduce cognitive load in humans (e.g., brushing teeth, making coffee).
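
The goal-directed vs. habit distinction can be sketched as two decision rules (names and values here are purely illustrative): a goal-directed chooser consults the current value of each lever's outcome, while a habitual chooser relies on cached response strengths that do not register the devaluation.

```python
# Reinforcer-devaluation sketch (illustrative): two ways of choosing a lever.

outcome_of = {"left": "sucrose", "right": "pellet"}
current_value = {"sucrose": 0.0, "pellet": 1.0}    # sucrose devalued by taste aversion
cached_strength = {"left": 0.90, "right": 0.88}    # built up during earlier training

def goal_directed_choice():
    # Considers what each lever currently earns -> avoids the devalued outcome.
    return max(outcome_of, key=lambda lever: current_value[outcome_of[lever]])

def habitual_choice():
    # Uses stored response strength only -> insensitive to devaluation.
    return max(cached_strength, key=cached_strength.get)

print("goal-directed choice:", goal_directed_choice())  # 'right' (pellet still valued)
print("habitual choice:", habitual_choice())            # 'left' (keeps pressing anyway)
```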


Classical and Operant Conditioning Together

  • Real-world learning often combines both forms:

    • Stimuli in the environment (S) become associated with reinforcers/outcomes (O).

    • Behavior (R) is performed to obtain outcomes.

Four Key Associations:

  1. R → O (Instrumental): Behavior leads to outcome.

    • Strengthened by reinforcement; influenced by goal value and alternatives.

  2. S → O (Classical): Stimulus predicts outcome.

    • Evokes preparatory responses (e.g., salivation, approach, or avoidance).

  3. S → R (Habit): Stimulus elicits response automatically after extensive practice.

    • Less sensitive to outcome value; behavior becomes habitual.

  4. S → (R → O) (Occasion Setting): Stimulus signals that a particular behavior-outcome relationship is active.

    • Example: Canvas sets occasion for painting → behavior will be reinforced.

Takeaway:

  • This integrated framework explains most learned behavior in daily life, combining classical cues, voluntary actions, habit formation, and contextual signals.

  • It highlights that behavior is influenced not just by reinforcement, but also by context, prediction, and habit strength.

Observational Learning

Definition:

  • Learning that occurs by watching others (social models) rather than through direct reinforcement or trial-and-error.

  • Key idea: you can acquire new behaviors just by observing someone else perform them.

Social Models:

  • People who demonstrate the behavior.

  • Usually higher in status or authority, but peers can also act as models.

  • Examples: parents, teachers, older siblings, peers, or anyone skilled at a task.

Real-Life Examples:

  • A child learns to place a napkin in their lap by watching parents.

  • A customer finds the ketchup at a food stand by watching where other customers go.

  • Watching others play a game to learn the rules and strategies.


Four Components of Observational Learning (Bandura, 1977)

  1. Attention – The learner must notice the model’s behavior.

  2. Retention – The learner must be able to remember the observed behavior.

  3. Initiation (Reproduction) – The learner must have the ability to perform the behavior.

  4. Motivation – The learner must want to perform the behavior.


Key Experiments

Bobo Doll Experiment (Bandura, Ross & Ross, 1961)

  • Children observed an adult interacting with a clown doll (Bobo).

  • Aggressive model: Adult hit and kicked Bobo → children showed more aggressive behavior.

  • Non-aggressive model: Adult played nicely → children showed less aggression.

  • Conclusion: Children imitated observed behavior, showing observational learning.

Vicarious Reinforcement (Bandura, Ross & Ross, 1963)

  • Children saw a model either rewarded or punished for aggressive behavior.

  • Observation of consequences influenced whether children imitated the behavior:

    • If the model was punished → children were less aggressive.

  • Shows that reinforcement doesn’t have to be direct; learning can occur by observing outcomes.


Observational Learning vs. Classical & Operant Conditioning

Feature                 | Classical Conditioning                 | Operant Conditioning                                  | Observational Learning
----------------------- | -------------------------------------- | ----------------------------------------------------- | -----------------------------------------------------------------------------------
How learning occurs     | Association between stimuli (CS → US)  | Association between behavior and consequence (R → O)  | By observing others’ behavior
Requires reinforcement? | No                                     | Yes                                                    | Not required, but observed consequences can influence it (vicarious reinforcement)
Example                 | Salivating at a restaurant logo        | Studying to get good grades                            | Learning manners by watching parents


Takeaway:

  • Observational learning shows that we don’t need direct experience to learn.

  • It complements classical and operant conditioning, explaining how behavior can be acquired through social influence and modeling.

  • Everyday behaviors—fashion choices, restaurant selection, punctuality—can often be explained by a combination of these learning mechanisms.

Classical (Pavlovian) Conditioning

  • Classical conditioning / Pavlovian conditioning: Learning where a neutral stimulus (CS) is paired with a US to produce a conditioned response (CR).

  • Conditioned stimulus (CS): Initially neutral stimulus that elicits a CR after association with a US.

  • Unconditioned stimulus (US): Stimulus that naturally elicits a response without learning.

  • Conditioned response (CR): Learned response to the CS.

  • Unconditioned response (UR): Natural, unlearned response to a US.

  • Conditioned compensatory response: CR that opposes the UR, often seen with drug conditioning.

  • Prediction error: Difference between expected and actual outcomes; drives learning.

  • Blocking: Failure to learn a new CS when paired with an already-conditioned CS because the US is already predicted.

  • Preparedness: Evolutionary tendency to form certain associations more easily (e.g., taste and illness, fear and spiders).

  • Taste aversion learning: Avoiding a taste after it has been paired with illness.

  • Fear conditioning: CS paired with an aversive US to elicit fear.

  • Extinction: Decline of a CR when CS is presented without US.

  • Spontaneous recovery: Reappearance of an extinguished response after a time delay.

  • Renewal effect: Return of an extinguished response when context changes.


Operant (Instrumental) Conditioning

  • Instrumental / Operant conditioning: Learning the association between a behavior and its consequences.

  • Operant: Behavior controlled by its consequences.

  • Reinforcer: Any outcome that strengthens behavior.

  • Punisher: Outcome that weakens behavior.

  • Law of effect: Behaviors followed by positive outcomes increase; those followed by negative outcomes decrease.

  • Quantitative law of effect: Reinforcer effectiveness depends on the availability of other reinforcers in the environment.

  • Discriminative stimulus: Stimulus that signals whether a behavior will be reinforced.

  • Stimulus control: When a behavior is influenced or triggered by a preceding stimulus.

  • Goal-directed behavior: Behavior guided by knowledge of the response-outcome relationship and current value of the outcome.

  • Habit: Behavior performed automatically, insensitive to changes in reinforcer value.

  • Reinforcer devaluation effect: Reduced responding when a previously desired reinforcer is made undesirable.


Observational / Social Learning

  • Observational learning / Social Learning Theory: Learning by watching others.

  • Social models: People whose behavior is observed and learned from.

  • Vicarious reinforcement: Learning by observing consequences experienced by others.


General / Miscellaneous

  • Context: Background stimuli (physical, temporal, or internal) present during learning.

  • Categorize: To sort items into classes or categories.