Lecture 7: Schedules of Reinforcement

21 Terms

1

Schedules of Reinforcement

Rule specifying which occurrences of a given behaviour, if any, will be reinforced

2

Continuous Reinforcement

Every correct response is reinforced; fast learning and fast extinction

3

Partial/Intermittent Reinforcement

Only some correct responses are reinforced; slower learning and slower extinction

4

Learning Something New

  • Acquisition: more reinforcement

    • The more you reinforce, the quicker acquisition happens

  • Maintenance: less reinforcement

    • Give fewer reinforcers once the behaviour is established

  • Schedule thinning

    • The process of gradually reducing the reinforcement of a behaviour

5

Partial/Intermittent Reinforcement Advantages

a) The reinforcer remains effective longer because satiation takes place more slowly.

b) Behaviour that has been reinforced intermittently tends to take longer to extinguish.

c) Individuals work more consistently on certain intermittent schedules.

d) Behaviour that has been reinforced intermittently is more likely to persist after being transferred to reinforcement in the natural environment.

6

Ratio Schedules

Reinforcement based on number of responses emitted

7

Fixed ratio schedule

  • Reinforcement occurs each time a set number of responses of a particular type are emitted.

    • Produces a high, steady rate of responding until reinforcement, followed by a post-reinforcement pause

    • Initially produces a high rate of responding during extinction

    • Produces high resistance to extinction

8

Ratio strain

Deterioration of responding from increasing an FR schedule too rapidly

9

Variable-ratio (VR) schedule

  • The number of responses required to produce reinforcement changes unpredictably from one reinforcement to the next

  • Produces a high steady rate of responding

  • Produces no (or at most a very small) post-reinforcement pause

10

Differences between VR and FR schedules

VR schedules can be increased more abruptly than FR schedules without producing ratio strain

Values of VR that can maintain a behaviour are somewhat higher than those of FR

VR produces higher resistance to extinction than an FR of the same value does

11

Both VR and FR schedules

  • used when a high rate of responding is desired, and each response can be monitored.

  • It is necessary to count the responses to know when to deliver reinforcement (see the sketch after this card).

  • FR is more commonly used than VR in behavioural programs because it is simpler to administer
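
The counting logic behind these two ratio rules can be sketched in a few lines of Python. This is only an illustration of the definitions above; the VariableRatio class and its uniform randint draw around the mean are assumptions made for the sketch, not a standard implementation.

```python
import random

def fr_reinforced(response_count, ratio):
    """Fixed ratio (FR): every `ratio`-th response is reinforced."""
    return response_count % ratio == 0

class VariableRatio:
    """Variable ratio (VR): the number of responses required changes
    unpredictably from one reinforcement to the next, averaging `mean_ratio`."""

    def __init__(self, mean_ratio, seed=None):
        self.rng = random.Random(seed)
        self.mean_ratio = mean_ratio
        self.required = self._next_requirement()
        self.count = 0

    def _next_requirement(self):
        # Illustrative choice: draw uniformly around the mean.
        return self.rng.randint(1, 2 * self.mean_ratio - 1)

    def respond(self):
        """Register one response; return True if it earns reinforcement."""
        self.count += 1
        if self.count >= self.required:
            self.count = 0
            self.required = self._next_requirement()
            return True
        return False

# FR 5 reinforces the 5th, 10th, 15th, ... responses (predictable positions),
# while VR 5 reinforces after an unpredictable count that averages 5.
vr = VariableRatio(mean_ratio=5, seed=1)
fr_hits = [n for n in range(1, 21) if fr_reinforced(n, ratio=5)]
vr_hits = [n for n in range(1, 21) if vr.respond()]
print("FR 5 reinforced responses:", fr_hits)  # [5, 10, 15, 20]
print("VR 5 reinforced responses:", vr_hits)  # unpredictable positions
```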

12

Interval Schedules

Schedules based on time

13

Fixed Interval (FI) Schedule

  • The first response after a fixed amount of time following the previous reinforcement is reinforced and a new interval begins

  • Size of FI schedule: amount of time that must elapse

  • No limit on how long after the end of the interval a response can occur in order to be reinforced

14

FI schedules (without access to a clock) produce:

  • A rate of responding that increases gradually near the end of the interval until reinforcement

  • A post-reinforcement pause

    • Length depends on value of FI: higher value = longer pause

15

Variable Interval (VI) Schedule

  • A response is reinforced after unpredictable intervals of time

  • Length of the interval changes from one reinforcement to the next: varies around some mean value

  • Produces a moderate steady rate of responding and no post-reinforcement pause

  • Produces high resistance to extinction

  • Lower rates of responding than FR or VR

16

Simple Interval Schedules

  • “Simple” because the only requirement is that a set amount of time must pass

  • Interval schedules are not often used because:

    • FI produces long post-reinforcement pauses

    • VI generates lower response rates than ratio schedules

    • Simple interval schedules require continuous monitoring of behaviour after each interval until a response occurs

17

Limited Hold

  • A deadline for meeting the response requirement of a schedule of reinforcement.

  • Ex. FR 30/LH 2 minutes

  • It is not common for a limited hold to be added to ratio schedules

18

Interval Schedules with Limited Hold

  • There is a finite time after a reinforcer becomes available during which a response will produce it.

    • FI/LH

    • VI/LH

  • Example: a bus arrives at a specific stop every 20 minutes and you have a limited time (1 minute) to get on the bus (see the sketch after this card)

  • Interval schedules with short limited holds → similar results to ratio schedules

  • For small FIs, FI/LH produce results similar to FR schedules

  • VI/LH – similar results to VR schedules

  • Limited holds are used when we want ratio-like behaviour but are unable to count each instance of the behaviour
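
A minimal Python sketch of the limited-hold rule, using the bus example above (FI 20 min / LH 1 min). The function name and the specific times are hypothetical, chosen only for illustration.

```python
def reinforced_with_limited_hold(response_time, availability_time, limited_hold):
    """Return True if the response occurs while the reinforcer is still available.

    The reinforcer becomes available at `availability_time` (the end of the
    interval) and remains available only for `limited_hold` time units.
    """
    return availability_time <= response_time <= availability_time + limited_hold

# The bus pulls in at minute 20 and waits 1 minute.
print(reinforced_with_limited_hold(20.5, availability_time=20, limited_hold=1))  # True: caught the bus
print(reinforced_with_limited_hold(22.0, availability_time=20, limited_hold=1))  # False: missed the hold
```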

19

Duration Schedules

  • Reinforcement occurs after the behaviour has been engaged in for a continuous period of time.

    • Fixed-duration (FD), e.g., FD 5 minutes

    • Variable-duration (VD): the required duration changes unpredictably

  • Used only when target behaviour can be measured continuously

20

Guidelines for Effective Use of Intermittent Reinforcement

  • Choose an appropriate schedule for the behaviour you wish to strengthen and maintain.

  • Choose a schedule that is convenient to administer.

  • Use appropriate instruments and materials to determine accurately and conveniently when the behaviour should be reinforced.

  • Frequency of reinforcement should initially be high enough to maintain desired behaviour, then decrease gradually.

  • Inform the individual of the schedule you are using.

21

Concurrent Schedules of Reinforcement

  • Different schedules of reinforcement that are in effect at the same time

  • Herrnstein’s (1961) matching law:

    – The response rate or the time devoted to an activity in a concurrent schedule is proportional to the rate of reinforcement of that activity relative to the rates of reinforcement on the other concurrent activities (see the formula sketch after this card).

  • Research findings on factors influencing choice of reinforcement:

    – Types of schedules that are operating

    – The immediacy of reinforcement

    – The magnitude of reinforcement

    – Response effort involved in different options
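
The matching law described above is usually written in its two-alternative form. A sketch in conventional textbook notation (B for response rates, R for reinforcement rates; these symbols are standard usage rather than taken from this card):

```latex
\frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2}
```

Here B_1 and B_2 are the response rates (or time allocated) on the two concurrent schedules, and R_1 and R_2 are the reinforcement rates those schedules provide.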