Schedules of Reinforcement
Rule specifying which occurrences of a given behaviour, if any, will be reinforced
Continuous Reinforcement
Every correct response is reinforced; fast learning and fast extinction
Partial/Intermittent Reinforcement
Only some correct responses are reinforced; slower learning and slower extinction
Learning Something New
Acquisition: reinforce more; the more you reinforce, the quicker acquisition happens
Maintenance: reinforce less; give fewer reinforcers once the behaviour is established
Schedule thinning
Process of gradually reducing the reinforcement of a behaviour
Partial/Intermittent Reinforcement Advantages
a) reinforcer remains effective longer because satiation takes place more slowly.
b) Behaviour that has been reinforced intermittently tends to take longer to extinguish.
c) Individuals work more consistently on certain intermittent schedules.
d) Behaviour that has been reinforced intermittently is more likely to persist after being transferred to reinforcement in the natural environment.
Ratio Schedules
Reinforcement based on number of responses emitted
Fixed ratio schedule
Reinforcement occurs each time a set number of responses of a particular type is emitted.
Produces a high, steady rate of responding until reinforcement, followed by a post-reinforcement pause
Initially produces high rate of responding during extinction
Produces high resistance to extinction
Ratio strain
Deterioration of responding from increasing an FR schedule too rapidly
Variable-ratio (VR) schedule
The number of responses required to produce reinforcement changes unpredictably from one reinforcement to the next
Produces a high steady rate of responding
Produces no (or at most a very small) post-reinforcement pause
Differences between VR and FR schedules
✅ VR schedules can be increased more abruptly than FR schedules without producing ratio strain
✅ Values of VR that can maintain a behaviour are somewhat higher than those of FR
✅ VR produces higher resistance to extinction than FR of same value does
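The FR and VR contingencies above can be sketched in code. This is a minimal illustration; the function names are my own, and drawing VR requirements uniformly from 1 to 2n − 1 is an assumption for the sketch, not a standard procedure:

```python
import random

def fixed_ratio(n):
    """FR n: every nth response is reinforced."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0      # reinforcement resets the count
            return True    # reinforcer delivered
        return False
    return respond

def variable_ratio(mean_n):
    """VR mean_n: the required number of responses changes unpredictably
    from one reinforcement to the next (drawn here uniformly from
    1..2*mean_n - 1, an illustrative assumption)."""
    required = random.randint(1, 2 * mean_n - 1)
    count = 0
    def respond():
        nonlocal count, required
        count += 1
        if count >= required:
            count = 0
            required = random.randint(1, 2 * mean_n - 1)
            return True
        return False
    return respond

# FR 5 reinforces exactly every 5th response:
fr5 = fixed_ratio(5)
print([fr5() for _ in range(10)])
# -> [False, False, False, False, True, False, False, False, False, True]
```

Under the VR version, reinforcement is unpredictable response by response, which is the feature associated with the steady responding and absent post-reinforcement pause noted above.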
Both VR and FR Schedules
Used when a high rate of responding is desired and each response can be monitored.
It is necessary to count the responses to know when to deliver reinforcement.
FR is more commonly used than VR in behavioural programs because it is simpler to administer
Interval Schedules
Schedules based on time
Fixed Interval (FI) Schedule
The first response after a fixed amount of time following the previous reinforcement is reinforced and a new interval begins
Size of FI schedule: amount of time that must elapse
No limit on how long after the end of the interval a response can occur in order to be reinforced
FI schedules (without access to a clock) produce:
A rate of responding that increases gradually near the end of the interval until reinforcement
A post-reinforcement pause
Length depends on value of FI: higher value = longer pause
Variable Interval (VI) Schedule
A response is reinforced after unpredictable intervals of time
Length of the interval changes from one reinforcement to the next: varies around some mean value
Produces a moderate steady rate of responding and no post-reinforcement pause
Produces high resistance to extinction
Lower rates of responding than FR or VR
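The FI contingency can be sketched the same way; a minimal illustration (names and time units are illustrative), showing that only the first response after the interval elapses is reinforced, that a new interval then begins, and that there is no limit on how late the response can come:

```python
def fixed_interval(interval, now=0):
    """FI: the first response after `interval` time units have elapsed
    since the previous reinforcement is reinforced, and a new interval
    then begins."""
    last_reinforcement = now
    def respond(t):
        nonlocal last_reinforcement
        if t - last_reinforcement >= interval:
            last_reinforcement = t   # new interval starts at reinforcement
            return True              # reinforcer delivered
        return False                 # too early; no reinforcer
    return respond

fi60 = fixed_interval(60)
print(fi60(30))   # only 30 s elapsed, too early -> False
print(fi60(75))   # interval elapsed; reinforced -> True (new interval at t=75)
print(fi60(120))  # only 45 s into the new interval -> False
print(fi60(140))  # 65 s elapsed -> True
```

A VI version would differ only in redrawing `interval` around a mean value after each reinforcement.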
Simple Interval Schedules
“Simple” because only requirement = time must pass
Interval schedules are not often used because:
– FI produces long post-reinforcement pauses
– VI generates lower response rates than ratio schedules
– Simple interval schedules require continuous monitoring of behaviour after each interval until a response occurs
Limited Hold
a deadline for meeting the response requirement of a schedule of reinforcement.
Ex. FR 30/LH 2 minutes
It is not common for a limited hold to be added to ratio schedules
Interval Schedules with Limited Hold
There is a finite time after a reinforcer becomes available during which a response will produce it.
FI/LH
VI/LH
Example: a bus arrives at a specific stop every 20 minutes, and you have a limited time (1 minute) to get on the bus
Interval schedules with short limited holds produce results similar to ratio schedules
For small FIs, FI/LH produce results similar to FR schedules
VI/LH – similar results to VR schedules
Limited holds are used when we want ratio-like behaviour but are unable to count each instance of the behaviour
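The bus example can be sketched as a simplified FI/LH-style model. Note the simplification, flagged in the docstring: the reinforcer here arrives on a fixed timetable regardless of responding (as a bus does), rather than being timed from the previous reinforcement as in a strict FI/LH schedule:

```python
def bus_schedule(period, hold):
    """FI/LH-style sketch: a 'reinforcer' (the bus) arrives every
    `period` minutes and remains available for `hold` minutes.
    Simplification: arrivals follow a fixed timetable regardless of
    responding, unlike a strict FI/LH schedule."""
    def respond(t):
        since_arrival = t % period       # minutes since the latest arrival
        return t >= period and since_arrival <= hold
    return respond

bus = bus_schedule(20, 1)
print(bus(19))    # bus has not arrived yet -> False
print(bus(20.5))  # within the 1-minute hold -> True
print(bus(22))    # missed the hold -> False
print(bus(40))    # caught the next bus -> True
```

The short hold is what makes responding cluster tightly around reinforcer availability, producing the ratio-like behaviour described above.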
Duration Schedules
Reinforcement occurs after the behaviour has been engaged in for a continuous period of time.
Fixed Duration (FD) Ex. FD 5 minutes
Variable-Duration (VD) – interval changes unpredictably
Used only when target behaviour can be measured continuously
Guidelines for Effective Use of Intermittent Reinforcement
Choose an appropriate schedule for behaviour you wish to strengthen and maintain.
Choose a schedule that is convenient to administer.
Use appropriate instruments and materials to determine accurately and conveniently when the behaviour should be reinforced.
Frequency of reinforcement should initially be high enough to maintain desired behaviour, then decrease gradually.
Inform individual of what schedule you are using.
Concurrent Schedules of Reinforcement
Different schedules of reinforcement that are in effect at the same time
Herrnstein’s (1961) matching law:
– The response rate or the time devoted to an activity in a concurrent schedule is proportional to the rate of reinforcement of that activity relative to the rates of reinforcement on the other concurrent activities.
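The matching law's proportionality can be illustrated with a small calculation; the reinforcement rates below are hypothetical, and the function name is my own:

```python
def matching_law_proportion(reinforcement_rates):
    """Matching law prediction: the proportion of responding allocated to
    each concurrent activity equals that activity's share of the total
    reinforcement rate across all concurrent activities."""
    total = sum(reinforcement_rates)
    return [r / total for r in reinforcement_rates]

# Hypothetical concurrent schedules delivering 40 and 10 reinforcers/hour:
print(matching_law_proportion([40, 10]))  # -> [0.8, 0.2]
```

That is, an activity reinforced four times as often is predicted to receive four times the responding (80% vs. 20%).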
Research findings on factors influencing choice of reinforcement:
– Types of schedules that are operating
– The immediacy of reinforcement
– The magnitude of reinforcement
– Response effort involved in different options