R-O associations
Rescorla and Colwill performed an experiment in which they changed the ‘value’ of the outcome after learning and observed how responding changed
R-O association experiment
Phase I: R1 -> O1; R2 -> O2…Phase II: O1 -> LiCl (illness)…Test: R1 vs. R2
Results: rats performed R1 less than R2, showing that they associate R1 with O1; once O1 makes them ill, they stop making the response that produced it
S-O associations
Rescorla and Colwill performed an experiment in which they changed which R-O relationship was in effect depending on the stimulus that was present
S-O association experiment
Phase 1: S1: Rn -> O1; S2: Rn -> O2…Phase 2: R1 -> O1; R2 -> O2…Test: R1 vs. R2 during S1 and S2
Results: in the presence of S1, rats performed R1; in the presence of S2, rats performed R2
hierarchical associations
animals were trained to learn which R-O relationship is in effect during a signal stimulus
hierarchical association experiment
Phase I: S1: R1 -> O1; R2 -> O2; S2: R1 -> O2; R2 -> O1…Phase II: O2 -> LiCl
Results: during S1, rats performed R1 more frequently; during S2, rats performed R2 more frequently (in each case avoiding the response that produced the devalued O2 under that stimulus)
Significance: rats can learn which R -> Outcome relationship is in place during a stimulus cue
concurrent schedules
two schedules are in effect at the same time, each one is reinforced according to its own schedule; the participant is free to switch from one to the other at any time
examples of concurrent schedules
slot machines at casinos, remote control for TV (choose which channel; reinforcement set by program itself), talking to people at a party (choose who to talk to; reinforcement is set by how interesting)
relative rate of responding
rate of responding on choice A / rate of responding on all choices
relative rate of reinforcement
rate of reinforcement on choice A / rate of reinforcement on all choices
Herrnstein’s Matching Law
relative rate of responding = relative rate of reinforcement
the relative rate of responding on an alternative matches the relative rate of reinforcement on that alternative
significance of Herrnstein’s Matching Law
whether a behavior occurs frequently or infrequently depends not only on its own schedule of reinforcement, but also on the rates of reinforcement of other activities the individual may perform
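The matching relation above can be sketched as a quick computation (a minimal illustration; the function name and session data are hypothetical, not from the source):

```python
# Herrnstein's matching law: the relative rate of responding on an
# alternative equals the relative rate of reinforcement on it.
def relative_rate(choice_a, other_choices):
    """Rate on choice A divided by the rate summed over all choices."""
    return choice_a / (choice_a + sum(other_choices))

# Hypothetical session data: responses and reinforcers on two keys
responses_a, responses_b = 60.0, 20.0      # pecks per session
reinforcers_a, reinforcers_b = 45.0, 15.0  # food deliveries per session

rel_responding = relative_rate(responses_a, [responses_b])         # 0.75
rel_reinforcement = relative_rate(reinforcers_a, [reinforcers_b])  # 0.75
# The two relative rates match, as the law predicts for these data.
```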
generalized form of matching law
incorporates two more parameters, s (sensitivity) and b (bias), to account for deviations from strict matching such as “undermatching”
s
sensitivity to differences between the reinforcement schedules (the organism might not notice or understand the difference between the two)
example of s
don’t realize that earning extra credit in one way is more difficult than the alternative
b
response bias: we prefer some activities over others, even when their rate of reinforcement is lower
example of b
may prefer to work on spreadsheets over presentations even though you might receive more praise for your presentations
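The generalized form is commonly written as a power-law ratio, B1/B2 = b·(r1/r2)^s; a minimal sketch of how s and b change the prediction (the function name and example rates are my assumptions):

```python
# Generalized matching law in ratio form: B1/B2 = b * (r1/r2)**s,
# where s is sensitivity to the schedules and b is response bias.
def predicted_response_ratio(r1, r2, s=1.0, b=1.0):
    """Predicted ratio of responding B1/B2 given reinforcement rates r1, r2."""
    return b * (r1 / r2) ** s

# With s = 1 and b = 1 this reduces to strict matching:
strict = predicted_response_ratio(30.0, 10.0)         # 3.0
# s < 1 models undermatching: the ratio is pulled toward 1
under = predicted_response_ratio(30.0, 10.0, s=0.5)   # ~1.73
# b > 1 models a bias toward alternative 1 even at equal rates
biased = predicted_response_ratio(10.0, 10.0, b=2.0)  # 2.0
```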
stimulus control
cues that are present during operant learning will begin to signal (or control) when it is (or is not) appropriate to make the operant response
example of stimulus control
bedroom (context cue) - undress (response) -> appropriate; public (context cue) - undress (response) -> inappropriate
forms of stimulus control
intradimensional discrimination
interoceptive stimuli
configural stimuli
contextual cues
intradimensional discrimination
discrimination between stimuli that differ along a single dimension, e.g., red light vs. green light
interoceptive stimuli
internal sensations of drug withdrawal serve as a signal to engage in drug seeking behaviors
configural stimuli
figure or “whole” image (not just a light or tone) signals when it is appropriate to make the response
contextual cues
learn to navigate about town using landmarks
typical stimulus control experiment steps
training phase: red circle with white triangle -> peck -> food
test: red circle alone (do they peck? some); white triangle alone (do they peck? some)
typical stimulus control experiment significance
the degree of differential responding (responding differently to each stimulus) tells us the degree of control that stimulus has over the behavior
stimulus generalization
an organism responds in a similar fashion to two or more stimuli; this is the opposite of stimulus discrimination and differential responding (more generalization = less stimulus control)
stimulus generalization gradient
steep - more stimulus control (less generalization), flat - more stimulus generalization (less control)
example of stimulus generalization gradient
A pigeon is trained to peck a red light for food. If the pigeon is color-blind (it cannot distinguish one color from another), it will peck at lights of any color (a flat stimulus generalization gradient). If the pigeon is not color-blind, it will peck only at the red light and perhaps at colors close to red.
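One simple way to model a steep versus flat gradient is responding that falls off as a Gaussian around the trained stimulus (the Gaussian shape and the width values here are my modeling assumptions, not from the source):

```python
import math

def responding(test_value, trained_value, width):
    """Relative response rate to a test stimulus.

    Small width -> steep gradient (strong stimulus control);
    large width -> flat gradient (strong generalization)."""
    return math.exp(-((test_value - trained_value) ** 2) / (2 * width ** 2))

# Trained stimulus at wavelength 580 nm; test at 600 nm:
steep = responding(600, 580, width=10)   # ~0.14: sharp discrimination
flat = responding(600, 580, width=100)   # ~0.98: broad generalization
```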
factors that affect stimulus control
sensory capacity and orientation
ease of conditioning
sensory capacity and orientation
physical limits of what our sensory systems can perceive
ease of conditioning
some stimuli are easier to notice, identify, encode, and remember (overshadowing)
types of reinforcers
visual signals gain control more readily over responses that lead to appetitive outcomes
auditory signals gain control more readily over responses that lead to removal of aversive outcomes
stimulus elements
sometimes we perceive individual elements that make up the cue, sometimes cue as a whole (this affects how we generalize learning)
discrimination training (learning)
experience with stimuli (learning about them) may determine the extent to which those stimuli come to control behavior
S+
stimulus that signals reinforcement is available
S-
stimulus that signals reinforcement is not available
example of discrimination training
traffic lights: S+ = green light; S- = red light
will learn discrimination faster if…
S+ and S- are presented simultaneously
will have greater stimulus control to S+ if…
S+ and S- are close in similarity
example of how we use interoceptive cues as S+ and S-
hunger → S+ = hunger (you eat); S- = stomach full (you don’t eat)