Association area lesions can result in:
Apraxia (from a frontal lesion) – inability to carry out purposeful movements
Agnosia (from a posterior lesion) – inability to recognize what is perceived
Expressive aphasia – Inability to produce speech
Broca’s Area (Frontal Association Area)
Lesions here lead to speech production issues
Controls motor projection areas for speech: throat, tongue, jaw, lips
Related to general language capacity
Corpus Callosum Severance
Done to reduce severity of seizures
Leaves patient mostly normal but creates subtle independence between left and right brain
Left half of each retina (the left side of each eye) sends information to the left hemisphere
Right half of each retina (the right side of each eye) sends information to the right hemisphere
Result:
Left visual field → Right hemisphere
Right visual field → Left hemisphere
Called contralateral organization
Setup:
Patient looks straight ahead
Picture is flashed faster than the eye can move
Asked, “What did you see?”
Results:
Cup on right → Left hemisphere says “cup”
Spoon on left → Left hemisphere says nothing
When told to reach for the object with the left hand, the right hemisphere grabs the spoon
If asked what it is, the left hemisphere guesses (e.g., "pencil"), and the right hemisphere may frown
Left hemisphere = Language processing
Right hemisphere = No language, but can recognize objects
US (Unconditioned Stimulus) → Input to a reflex (e.g., food in mouth)
UR (Unconditioned Response) → Output of reflex (e.g., salivation)
CS (Conditioned Stimulus) → Initially causes investigatory response, then habituation (e.g., bell)
CR (Conditioned Response) → Learned response to CS (e.g., salivation to bell)
Psychic Reflex: Dog perceives something and responds
Conditioned Reflex: A learned reflex (CS before US)
Strength of Conditioned Response
Measured by: Amplitude, Probability, Latency
Example: Rabbit conditioned to fear a tone (CS) due to an electric shock (US)
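A minimal sketch of how the three strength measures could be computed from a set of conditioning trials (the trial records and field names are hypothetical, made up for illustration):

```python
# Hypothetical trial records: whether a CR occurred, its size (e.g., drops of
# saliva or eyeblink magnitude), and how long after CS onset it began (seconds).
trials = [
    {"responded": True,  "size": 4.0, "onset_s": 2.1},
    {"responded": True,  "size": 6.5, "onset_s": 1.4},
    {"responded": False, "size": 0.0, "onset_s": None},
    {"responded": True,  "size": 7.2, "onset_s": 1.1},
]

responded = [t for t in trials if t["responded"]]

# Amplitude: average size of the CR on trials where it occurred
amplitude = sum(t["size"] for t in responded) / len(responded)

# Probability: fraction of trials on which any CR occurred
probability = len(responded) / len(trials)

# Latency: average delay between CS onset and the start of the CR
latency = sum(t["onset_s"] for t in responded) / len(responded)

print(f"amplitude={amplitude:.2f}, probability={probability:.2f}, latency={latency:.2f}s")
```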
Acquisition Phase:
Strength of CR increases with reinforced trials (CS + US)
Growth rate flattens out when max response is reached (e.g., dog does not have infinite saliva)
Extinction:
CR declines and disappears over trials when US is not presented (e.g., bell without food)
Due to buildup of inhibition
Spontaneous Recovery:
After a rest interval (e.g., 24 hours), the CR reappears at nearly its previous strength but extinguishes again more quickly
Due to dissipation of inhibition
Once a conditioned response is learned, it is never forgotten
Extinction does not erase learning but creates a competing inhibitory association
Dog learns:
Bell → Food (excitatory)
Bell → No food (inhibitory)
After rest, the dog balances between the two, leading to a weaker CR
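The acquisition–extinction–recovery pattern above can be sketched as two competing associations, excitatory and inhibitory, with the visible CR as their difference. This is only a toy simulation under assumed learning rates and decay values, not a model from the notes:

```python
# Toy simulation: CR strength = excitation - inhibition.
# Learning rates, asymptote, and decay factor are illustrative assumptions.
EXC_RATE, INH_RATE, MAX_CR = 0.3, 0.2, 1.0

def acquisition(excitation, trials):
    """Reinforced trials (CS + US): excitation grows toward a ceiling."""
    for _ in range(trials):
        excitation += EXC_RATE * (MAX_CR - excitation)   # growth flattens near max
    return excitation

def extinction(inhibition, excitation, trials):
    """CS-alone trials: inhibition builds up and cancels the CR."""
    for _ in range(trials):
        inhibition += INH_RATE * (excitation - inhibition)
    return inhibition

exc = acquisition(0.0, trials=10)          # bell + food
inh = extinction(0.0, exc, trials=10)      # bell alone
print(f"after extinction: CR = {exc - inh:.2f}")

# Spontaneous recovery: after a rest interval the inhibition partly dissipates;
# excitation is left untouched, reflecting the idea that learning is not erased.
inh *= 0.3                                  # assumed dissipation factor
print(f"after rest:       CR = {exc - inh:.2f} (recovers, but below original {exc:.2f})")
```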
Classical conditioning involves involuntary (elicited) responses
Contiguity: Closeness in time is the basis of acquisition of a conditioned reflex
Optimal Time Interval:
Differs depending on response being conditioned
Dog’s salivation response: 5-30 sec
Human eyeblink response: 0.5 sec
Number of trials required for conditioning varies
Stronger CS = Stronger CR
Example: Louder tone, brighter light → More salivation
Establish CS → (e.g., Bell → Salivation)
Pair a new CS with the old CS, without the US → (e.g., Tone → Bell → Salivation)
Eventually, the new CS produces CR without US → (e.g., Tone → Salivation)
US acts as a reinforcer for the conditioned reflex
In higher-order conditioning, a CS acts like a US (secondary reinforcer)
Higher-order conditioning makes classical conditioning more versatile – a CR can be built up even when the unconditioned stimulus is never presented
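A minimal sketch of the higher-order procedure above, treating the CR as driven by whatever associative strength a stimulus has acquired (the numbers and learning rate are illustrative assumptions):

```python
# Associative strength of each stimulus toward producing the CR (salivation).
strength = {"bell": 0.0, "tone": 0.0}
RATE = 0.4  # assumed learning rate

# Step 1: establish bell -> salivation by pairing the bell with food (US).
for _ in range(10):
    strength["bell"] += RATE * (1.0 - strength["bell"])

# Step 2: pair the tone with the bell, with no food. The bell acts as a
# secondary reinforcer, so the tone can only acquire strength up to
# whatever the bell itself supports.
for _ in range(10):
    strength["tone"] += RATE * (strength["bell"] - strength["tone"])

print(strength)  # the tone ends up producing a CR without the US ever following it
```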
Extinction: Presenting a CS without a US leads to a decline in CR
Generalization:
Similar stimuli produce similar responses (e.g., petting both dogs and cats)
Example: A different pitch tone still produces salivation
Discrimination:
Learning to differentiate between stimuli
Example: Training “CS+” (high tone with US) and “CS-” (low tone without US) → CR only to CS+
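A sketch of generalization and discrimination using a made-up similarity-based response rule (the gradient shape, frequencies, and learning rate are assumptions, not data from the notes):

```python
# Generalization: a CR trained to a 1000 Hz tone spills over to nearby pitches,
# falling off with distance from the trained CS (a generalization gradient).
def generalized_cr(test_hz, trained_hz=1000, trained_cr=1.0, width=200):
    return trained_cr * max(0.0, 1.0 - abs(test_hz - trained_hz) / width)

for hz in (1000, 1050, 1150, 1400):
    print(f"{hz} Hz -> CR {generalized_cr(hz):.2f}")

# Discrimination: reinforce the high tone (CS+) and present the low tone (CS-)
# without the US; responding stays high to CS+ and is trained away from CS-.
cr = {"CS+ (high tone)": 0.5, "CS- (low tone)": 0.5}
for _ in range(15):
    cr["CS+ (high tone)"] += 0.2 * (1.0 - cr["CS+ (high tone)"])   # paired with US
    cr["CS- (low tone)"]  += 0.2 * (0.0 - cr["CS- (low tone)"])    # never paired
print(cr)
```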
CR ≠ UR
CR may be a preparatory response for US
Example:
CS (Tone) → US (Shock) → UR (Fast heartbeat, heavy breathing)
CS (Tone) alone → CR (Slower heartbeat, breathing)
Example:
CS (Injection) → US (Morphine) → UR (Less pain)
CS (Injection) alone → CR (More pain sensitivity)
Pavlov’s View: CS-CR (Conditioned Reflex)
Modern View: CS-US Association → CS provides information about US
Backward Conditioning (US before CS) fails
Corpus Callosum Severance:
Done to reduce seizures
Minimal effect on daily life
Visual Pathways:
Right visual field → Left hemisphere
Left visual field → Right hemisphere
Developed by Edward Thorndike (1898)
Trial-and-error learning
Law of Effect:
Behavior is strengthened when followed by reinforcement (“satisfying state of affairs”)
Behavior is weakened when followed by punishment (“annoying state of affairs”)
Studied animal problem-solving
Reinforcement strengthens response
Insight: Sudden grasp of a problem’s structure
Wolfgang Köhler (1914): Experimented with chimpanzees to test insight learning
| Feature | Operant Conditioning | Classical Conditioning |
| --- | --- | --- |
| Response Type | Voluntary (Emitted) | Involuntary (Elicited) |
| Reinforcement | Depends on response | Occurs regardless |
| What is Learned? | Behavior | Association (CS → US) |
| Mechanism | Law of Effect (consequences matter) | Contiguity (closeness in time matters) |
Delay of reinforcement weakens response
Operant Conditioning Example:
Rat presses a bar in a Skinner Box → Reinforcement strengthens bar-pressing behavior
Terminology:
Emitted Response: Spontaneously produced behavior (Operant Conditioning)
Elicited Response: Behavior drawn out by a stimulus the experimenter presents (Classical Conditioning)
Contingency: Dependency between behavior and reinforcement
Contiguity: Closeness in time makes learning happen
Many responses can be made with little time and effort.
Responses are easily recorded and response rate is the preferred dependent variable (measured by the slope of a cumulative record).
Cumulative record: Graph showing responses over time.
Example: 30 responses in 5 minutes = 6 responses per minute.
Extinction: Shows up as a flat line on the cumulative record (responding stops), rather than the gradual decline in CR strength plotted for classical conditioning.
Common test subjects: Rats and pigeons.
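A small sketch of how response rate can be read off a cumulative record, using made-up response timestamps (nothing here is from a specific experiment):

```python
# Hypothetical response timestamps in minutes. The cumulative record is the
# running count of responses plotted against time, so its average slope over
# an interval is the response rate.
timestamps = [0.2, 0.5, 1.1, 1.3, 2.0, 2.4, 3.0, 3.1, 3.8, 4.2, 4.5, 4.9]

interval_min = 5.0
rate_per_min = len(timestamps) / interval_min
print(f"{len(timestamps)} responses in {interval_min:.0f} min = {rate_per_min:.1f} responses/min")

# Cumulative record as (time, total responses so far) pairs; a steep run of
# points means fast responding, a flat run means extinction (no responding).
cumulative = [(t, i + 1) for i, t in enumerate(timestamps)]
print(cumulative[:3])
```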
Reinforcement always increases behavior (both positive and negative).
Positive reinforcement: Adds an appetitive stimulus (e.g., food, approval).
Negative reinforcement: Removes an aversive stimulus (e.g., shock, alarms, clock noise).
Example: Turning off an alarm clock removes the aversive noise, reinforcing the response that shut it off.
Punishment decreases behavior
Positive punishment: Adds an aversive stimulus (e.g., shock when a response is made).
Negative punishment: Removes a desirable stimulus (e.g., taking away a toy from a child).
| Behavior Change | Present Stimulus | Remove Stimulus |
| --- | --- | --- |
| Increase Behavior | Positive Reinforcement (food, approval) | Negative Reinforcement (removal of shock, alarm) |
| Decrease Behavior | Positive Punishment (shock, scolding) | Negative Punishment (taking away a toy, timeout) |
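The 2x2 table can be restated as a tiny lookup: whether a stimulus is presented or removed, crossed with whether it is appetitive or aversive. The terms and examples are from the notes; the function itself is just an illustrative sketch:

```python
def classify(stimulus_is_aversive: bool, stimulus_is_presented: bool) -> str:
    """Name the operant procedure for one consequence of a response."""
    if stimulus_is_presented:
        # Presenting an appetitive stimulus rewards the response;
        # presenting an aversive stimulus punishes it.
        return "positive punishment" if stimulus_is_aversive else "positive reinforcement"
    # Removing an aversive stimulus rewards the response;
    # removing an appetitive stimulus punishes it.
    return "negative reinforcement" if stimulus_is_aversive else "negative punishment"

print(classify(stimulus_is_aversive=False, stimulus_is_presented=True))   # food       -> positive reinforcement
print(classify(stimulus_is_aversive=True,  stimulus_is_presented=False))  # alarm off  -> negative reinforcement
print(classify(stimulus_is_aversive=True,  stimulus_is_presented=True))   # shock      -> positive punishment
print(classify(stimulus_is_aversive=False, stimulus_is_presented=False))  # toy taken  -> negative punishment
```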
Extinction: When reinforcement is removed, the response stops.
Spontaneous recovery: A previously extinguished response reappears after a delay.
Discriminative stimulus: Signals under what conditions a response will be reinforced.
Example: A rat presses a bar but only gets food when a light is on → eventually, it only presses when the light is on.
The stimulus does not cause the response or reinforcement; it sets the occasion for the response.
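A toy simulation of the light-on example: pressing pays off only when the light is on, so pressing in the dark is gradually extinguished. Starting tendencies and the learning rate are assumptions:

```python
import random

random.seed(0)
press_prob = {"light_on": 0.5, "light_off": 0.5}  # assumed starting tendencies
RATE = 0.1

for trial in range(500):
    state = random.choice(["light_on", "light_off"])
    pressed = random.random() < press_prob[state]
    if pressed:
        reinforced = (state == "light_on")          # food only when the light is on
        target = 1.0 if reinforced else 0.0
        press_prob[state] += RATE * (target - press_prob[state])

print({k: round(v, 2) for k, v in press_prob.items()})
# The rat ends up pressing mostly when the light is on: the light "sets the
# occasion" for the response rather than eliciting it.
```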
Parallel to classical conditioning:
Instead of a conditioned response (CR) → operant response
Instead of an unconditioned stimulus (US) → reinforcement
Instead of a conditioned stimulus (CS) → discriminative stimulus
Order Difference:
Classical Conditioning: Stimulus (CS) → Reinforcement (US) → Response (CR)
Operant Conditioning: Stimulus → Response → Reinforcement
Conditioned (secondary) reinforcer: A stimulus that gains reinforcing properties through classical conditioning.
Example: A clicker for training dogs → initially paired with food → eventually reinforces behavior on its own.
Continuous reinforcement: Every response is reinforced.
Partial reinforcement: Reinforcing only some responses produces stronger, more persistent responding than reinforcing every response.
Types of Partial Reinforcement:
Interval Schedules (Reinforcement based on time)
Fixed Interval (FI): Reinforcement occurs after a fixed time interval (e.g., checking mail at a set time each day).
Variable Interval (VI): Reinforcement occurs after an unpredictable time interval (e.g., checking email, which is delivered at random times).
Predictability shapes the pattern of responding: on fixed intervals, responses cluster near the expected reinforcement time.
Ratio Schedules (Reinforcement based on number of responses)
Fixed Ratio (FR): Reinforcement after a set number of responses (e.g., factory workers paid per item produced).
Variable Ratio (VR): Reinforcement after an average number of responses (e.g., gambling, where payouts are unpredictable).
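A compact sketch of when each schedule delivers reinforcement, with arbitrary parameter values; the logic follows the definitions above and nothing here comes from a specific experiment:

```python
import random

random.seed(1)

def fixed_ratio(n):            # reinforce every n-th response
    count = 0
    def reinforce(_t):
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True
        return False
    return reinforce

def variable_ratio(mean_n):    # reinforce after an unpredictable number of responses
    def reinforce(_t):
        return random.random() < 1.0 / mean_n
    return reinforce

def fixed_interval(secs):      # first response after a fixed time is reinforced
    last = 0.0
    def reinforce(t):
        nonlocal last
        if t - last >= secs:
            last = t
            return True
        return False
    return reinforce

def variable_interval(mean_secs):   # same, but the required wait varies around a mean
    last, wait = 0.0, random.uniform(0, 2 * mean_secs)
    def reinforce(t):
        nonlocal last, wait
        if t - last >= wait:
            last, wait = t, random.uniform(0, 2 * mean_secs)
            return True
        return False
    return reinforce

schedules = {"FR-5": fixed_ratio(5), "VR-5": variable_ratio(5),
             "FI-10s": fixed_interval(10), "VI-10s": variable_interval(10)}

# One response per second for 60 seconds; count how many get reinforced.
for name, schedule in schedules.items():
    rewards = sum(schedule(t) for t in range(1, 61))
    print(f"{name}: {rewards} reinforcements for 60 responses")
```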
Shaping: Reinforcing successive approximations of a behavior to gradually guide toward a desired response.
Example: Training an animal to press a lever by reinforcing behaviors that get closer to the goal.
Chaining: Linking multiple responses into a sequence to train complex behaviors.
Example: Teaching a dog to fetch by reinforcing picking up the toy, bringing it back, and dropping it in order.
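A toy sketch of shaping: reinforce any response that comes closer to the target than the best so far, so the criterion tightens over successive approximations. All numbers are arbitrary assumptions:

```python
import random

random.seed(2)
TARGET = 1.0          # e.g., "press the lever all the way down"
best = 0.0            # closest approximation reinforced so far

for trial in range(300):
    # Behavior varies around the best response reinforced so far.
    attempt = best + random.gauss(0, 0.1)
    if attempt > best:                       # a closer approximation to the target
        best = min(attempt, TARGET)          # reinforce it; the criterion moves up
    if best >= TARGET:
        print(f"target behavior shaped on trial {trial}")
        break

# Chaining would then link several shaped responses into a fixed order,
# e.g. ["run to toy", "pick up toy", "carry it back", "drop it"],
# with each completed link serving as the cue for the next.
```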