Schedules of Reinforcement

Definition of Schedules of Reinforcement

  • A schedule of reinforcement refers to a rule that describes a contingency of reinforcement.

    • The extremes of reinforcement are:

    • Extinction (referred to as "Nothing")

    • Continuous Reinforcement (referred to as "All")

  • Intermittent reinforcement is positioned between these two extremes.

Maintenance of Behavior Change

  • Maintenance refers to a lasting change in behavior.

    • A major goal of most behavior change programs is to develop naturally occurring activities, stimuli, or events that function as reinforcement.

    • Intermittent reinforcement is usually necessary to achieve effective maintenance of behavior change.

Basic Intermittent Schedules of Reinforcement

  • There are two main classifications for intermittent reinforcement schedules:

    1. Ratio schedules: Require a number of responses before reinforcement is delivered.

    2. Interval schedules: Require that a period of time elapse before a response produces reinforcement.

Fixed vs. Variable Schedules

  • Fixed schedules: The response ratio or the time requirement remains constant.

  • Variable schedules: The response ratio or the time requirement can change from one reinforced response to another.

Four Basic Schedules of Intermittent Reinforcement

  1. Fixed Ratio (FR): Requires the completion of a fixed number of responses for delivery of a reinforcer.

    • Example: (FR5) A participant earns reinforcement after reading 5 pages.

  2. Fixed Interval (FI): Provides reinforcement for the first response following a fixed duration of time.

    • Example: (FI5) The first question answered in class after a 5-minute interval produces reinforcement.

  3. Variable Ratio (VR): Requires the completion of a variable number of responses for delivery of a reinforcer.

    • Example: (VR5) A participant earns reinforcement after reading five pages on average; the specific ratio requirements vary (e.g., after 3 pages, then after 10 pages).

  4. Variable Interval (VI): Provides reinforcement for the first response after a variable duration of time.

    • Example: (VI5) The first question answered in class after an average of 5 minutes produces reinforcement; the length of each interval varies from one reinforcer to the next.
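The four contingencies above can be modeled as simple response or clock rules. The sketch below is a minimal illustration only; the class names, the respond() interface, and the uniform draw used to vary the requirement around its mean are assumptions, not part of the notes:

```python
import random

class FixedRatio:
    """FR n: every n-th response produces reinforcement."""
    def __init__(self, n):
        self.n, self.count = n, 0

    def respond(self):
        self.count += 1
        if self.count == self.n:
            self.count = 0       # ratio requirement met; start over
            return True          # reinforcer delivered
        return False

class VariableRatio:
    """VR n: the requirement varies from reinforcer to reinforcer,
    averaging n (drawn uniformly from 1..2n-1 here, an assumption)."""
    def __init__(self, n):
        self.mean = n
        self.requirement = random.randint(1, 2 * n - 1)
        self.count = 0

    def respond(self):
        self.count += 1
        if self.count >= self.requirement:
            self.count = 0
            self.requirement = random.randint(1, 2 * self.mean - 1)
            return True
        return False

class FixedInterval:
    """FI t: the first response after t time units elapse is reinforced."""
    def __init__(self, t):
        self.t, self.start = t, 0.0

    def respond(self, now):
        if now - self.start >= self.t:
            self.start = now     # next interval begins at reinforcement
            return True
        return False

class VariableInterval:
    """VI t: like FI, but each interval varies around a mean of t."""
    def __init__(self, t):
        self.mean = t
        self.t = random.uniform(0, 2 * t)
        self.start = 0.0

    def respond(self, now):
        if now - self.start >= self.t:
            self.start = now
            self.t = random.uniform(0, 2 * self.mean)
            return True
        return False
```

Note that in the ratio classes only the response count matters, while in the interval classes time must pass and a response is still required; reinforcement is never delivered by the clock alone.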

Details of Fixed Ratio Schedule

  • Fixed Ratios (FR): Reinforcement is delivered following a set number of responses.

    • Effects include:

    • Participants tend to complete the required responses with little hesitation between responses.

    • FR schedules often produce high rates of response.

    • Quick responding maximizes the delivery of reinforcement.

    • A larger ratio requirement leads to a higher rate of response.

    • Typically followed by a post-reinforcement pause.

Details of Variable Ratio Schedule

  • Variable Ratios (VR): Reinforcement is delivered following a variable number of responses.

    • Effects include:

    • Produces quick, consistent, steady rates of response without a post-reinforcement pause.

    • Larger ratio requirements correlate with higher rates of response.

    • Gradually thinning the ratio maintains high levels of responding.

Details of Fixed Interval Schedule

  • Fixed Interval (FI): The first response after a predetermined, fixed amount of time produces reinforcement.

    • Effects include:

    • Slow to moderate rates of response with a post-reinforcement pause.

    • An initially slow but accelerating rate of response, known as the FI scallop, is evident toward the end of the time interval.

Details of Variable Interval Schedule

  • Variable Interval (VI): The first response after a predetermined, variable amount of time produces reinforcement.

    • Effects include:

    • Constant and stable low to moderate rates of responses.

    • Typically few hesitations between responses.

    • A larger average interval correlates with lower overall rates of response.

Interval Schedules with a Limited Hold

  • In a standard interval schedule, once the interval ends, reinforcement remains available until the first response occurs, no matter how long that takes.

  • With a limited hold, there is a restriction in place wherein reinforcement is only available for a set amount of time following the interval.

  • If no response occurs during the limited hold, the reinforcer for that interval is lost and a new interval begins.
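One way to read the limited-hold rule is as a time window appended to the interval. The sketch below assumes an FI base schedule and a hold of h time units; the class name, parameters, and the roll-forward treatment of expired holds are assumptions for illustration:

```python
class IntervalWithLimitedHold:
    """FI t with a limited hold of h: after the interval elapses,
    reinforcement is available only for h more time units. A response
    inside the window is reinforced; a missed window forfeits that
    reinforcer, and the next interval begins."""
    def __init__(self, t, h):
        self.t, self.h = t, h
        self.start = 0.0

    def respond(self, now):
        # Roll forward past any intervals whose hold expired unreinforced.
        while now - self.start > self.t + self.h:
            self.start += self.t + self.h
        if now - self.start >= self.t:
            self.start = now     # reinforced; next interval begins now
            return True
        return False
```

For example, with t=5 and h=2, a response at time 6 is reinforced (the interval ended at 5 and the hold runs to 7), but a response at time 14 is not, because that window closed at 7 and a fresh interval is already running.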

Thinning Intermittent Schedules of Reinforcement

  • Two procedures for thinning schedules:

    1. Gradually increase the response ratio or duration of the time interval of an existing schedule.

    2. Use clear instructions to communicate the schedule of reinforcement during the thinning process.

  • Ratio strain can occur due to abrupt increases in ratio requirements, which may lead to:

    • A situation where the ratio becomes too large for the reinforcement to maintain response levels.

    • Requirements exceeding physiological capabilities of the participant.

Variations on Basic Intermittent Schedules of Reinforcement

  • Differential reinforcement of specific rates of responding includes variations such as:

    • Differential Reinforcement of High Rates (DRH): Reinforcement is contingent upon responses that exceed a predetermined criterion, producing a higher rate of responding.

    • Differential Reinforcement of Low Rates (DRL): Reinforcement is contingent upon a number of responses at or below a predetermined criterion, producing a lower rate of responding.

    • Differential Reinforcement of Diminishing Rates (DRD): Reinforcement is provided after a predetermined interval for responses that are consistently lower than the gradually decreasing criterion.

    • Example: Reinforcement contingent upon fewer than 5 responses in 5 minutes, then fewer than 4 in 5 minutes, and so on.
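The 5-then-4 example can be expressed as a short loop. This is a minimal sketch: the function name and the rule that the criterion drops by one only after a reinforced interval are assumptions layered on the example above:

```python
def drd_outcomes(counts_per_interval, start_criterion):
    """For each observation interval, reinforce (True) when the response
    count falls below the current criterion; after each reinforced
    interval, lower the criterion by one (down to a floor of 1)."""
    outcomes, criterion = [], start_criterion
    for count in counts_per_interval:
        met = count < criterion
        outcomes.append(met)
        if met and criterion > 1:
            criterion -= 1       # gradually decreasing criterion
    return outcomes
```

With response counts of 4, 3, 5, and 2 per 5-minute interval and a starting criterion of 5, the third interval fails (5 responses against a criterion of 3) while the others are reinforced.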

Lag Schedules of Reinforcement

  • A lag schedule reinforces responses that differ in a predetermined way from previous behaviors.

    • Examples:

    • A Lag 1 schedule reinforces any response that differs from the immediately preceding response.

    • A Lag 2 schedule reinforces responses that differ from the previous two responses.
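The Lag 1 and Lag 2 rules reduce to a comparison against the most recent responses. A minimal sketch, with the function name and history-as-list representation assumed for illustration:

```python
def lag_reinforced(response, history, lag):
    """Lag n: reinforce only if `response` differs from each of the
    previous n responses in `history` (most recent last)."""
    return all(response != prior for prior in history[-lag:])
```

So under Lag 2, a response of "A" after a history of ["A", "B"] goes unreinforced, because it repeats one of the two most recent responses.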

Progressive Schedules of Reinforcement

  • These schedules systematically thin each reinforcement opportunity within a session, independent of participant behavior.

  • Ratios are increased until reaching the breaking point where the participant stops responding or until a predetermined duration is met.

    • Example: A student is reinforced on an FR1 schedule, and response requirements gradually increase, leading to a decline in response rates on an FR5 schedule.
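The breaking-point idea can be sketched as a loop that raises the requirement after each reinforcer. Here `completes(ratio)` is a hypothetical stand-in for whether the participant finishes a given requirement, and the step size and ceiling are assumed parameters:

```python
def progressive_ratio_session(completes, step=1, ceiling=50):
    """Raise the ratio requirement by `step` after each reinforcer until
    the participant stops responding (the breaking point) or a
    predetermined ceiling ends the session."""
    ratio = 1
    while ratio <= ceiling:
        if not completes(ratio):
            return ratio          # breaking point reached
        ratio += step
    return None                   # session ended at the ceiling
```

A participant who quits at the fifth requirement yields a breaking point of 5; one who never quits runs until the ceiling ends the session.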

Compound Schedules of Reinforcement

Discriminated Compound Schedules of Reinforcement
  • Concurrent Schedules: Involve two or more reinforcement schedules operating independently and simultaneously.

    • An SD (discriminative stimulus) signals each component schedule.

    • Example: (Conc FR10/FR3) A student can choose between two chores with different reinforcement requirements.

Nondiscriminated Compound Schedules of Reinforcement
  • Multiple Schedule: Composed of two or more basic schedules presented successively, usually in a random sequence.

    • Each schedule has an SD that is correlated with it.

  • Mixed Schedule: Composed of two or more basic schedules presented in sequence without an SD.

    • Example: (Mix FR10/FI5) Reinforcement varies without specific signals for each component schedule.

Chained Schedule of Reinforcement
  • Two or more basic schedules operate in sequence; they may require the same or different behaviors, and an SD is correlated with each component schedule.

    • Example: (Chain VR3/FR10) Reinforcement occurs after completing several steps of behaviors in sequence.

Tandem Schedule of Reinforcement
  • Similar to the chained schedule, but without an SD signal for each component schedule.

    • Example: (Tand FR12/FI4) A participant must complete the specified jobs in sequence, then meet the timed-interval requirement, to earn reinforcement.
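The difference between chained and tandem schedules is the presence of an SD for each component, not the sequencing logic itself. That sequencing can be sketched with any components that expose a respond() method; the class names and interface below are assumptions for illustration:

```python
class FR:
    """Minimal fixed-ratio component: True on every n-th response."""
    def __init__(self, n):
        self.n, self.count = n, 0

    def respond(self):
        self.count += 1
        if self.count == self.n:
            self.count = 0
            return True
        return False

class SequencedSchedule:
    """Chain/tandem sketch: components must be completed in order, and
    only completing the final component delivers the terminal reinforcer.
    A chained schedule would additionally present a distinct SD during
    each component; a tandem schedule runs the same sequence unsignaled."""
    def __init__(self, components):
        self.components, self.index = components, 0

    def respond(self):
        if self.components[self.index].respond():
            if self.index == len(self.components) - 1:
                self.index = 0
                return True       # terminal reinforcer delivered
            self.index += 1       # advance to the next link
        return False
```

With components FR2 then FR3, the terminal reinforcer arrives only on the fifth response: two responses complete the first link, three more complete the second.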