
PSYCH 311

Chapter 5 10/24/2024

Appetitive stimulus: a pleasant or satisfying stimulus that can be used to positively reinforce an instrumental response

Aversive stimulus: an unpleasant or annoying stimulus that can be used to punish an instrumental response 


Positive: Think addition- does not mean good 

Negative: Think removal- does not mean bad 


Reinforcement: Future probability of the behavior increasing

Punishment: Future probability of the behavior decreasing


Four Types of Operant Conditioning

Instrumental conditioning procedures:

Positive reinforcement: a preferred or appetitive stimulus is provided (added) as a result of target behavior and the future probability of the target behavior increases 

Negative reinforcement: an aversive stimulus is removed following a target behavior and the future probability of the target behavior increases

Escape: Behavior that leads to the termination (end) of the aversive stimulus

Avoidance: Behavior that prevents the aversive stimulus from occurring


Negative punishment: an instrumental conditioning procedure in which the instrumental response (behavior) prevents/removes the delivery of a reinforcing stimulus. The future probability of the behavior decreases.

DRO: Differential Reinforcement of Other behavior: reinforcement delivered in the absence of the target response/behavior

Negative Punishment/omission training: an appetitive stimulus is removed as a result of a target behavior and the future probability of the behavior decreases


Positive punishment: an aversive stimulus is added as a result of the target behavior and the future probability of the target behavior decreases. 
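A quick way to keep the four procedures straight is a 2x2 grid (just a summary of the definitions above, nothing new):

                       Behavior increases          Behavior decreases
Stimulus added         Positive reinforcement      Positive punishment
Stimulus removed       Negative reinforcement      Negative punishment (omission)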

10/29/2024


Reminders: 

Quiz number 5,6,7 on blackboard due 11/5

Virtual class on election Day 

Examples…

  • Lyle leaves the theater because the music in the show is too loud. What is this an example of?

    • Negative reinforcement 


  • Brenda steals Kelly’s car because Kelly went to Europe without her. This is an example of?

    • Negative Punishment


  • A rat in a Skinner box receives a food pellet every fifth time it presses the lever. This is an example of?

    • Positive reinforcement


  • Joey's brother started crying because Joey threw his favorite ball out the window while driving home. This is an example of?

    • Positive punishment


  • What makes something a reinforcer? 

    • Satiation and deprivation

    • If the person likes the reinforcer presented


  • A stimulus is only a reinforcer if a behavior that is made contingent upon it increases in the future. 

  • Reinforcement has nothing to do with how pleasant you think something is.


Quality vs. Quantity

  • Several aspects of a reinforcer determine the effect it has on learning and performance of behaviors

  • If the reinforcer is very small and of poor quality it will not increase instrumental responding

  • If the reinforcer is very large with high quality it will probably increase instrumental responding


  • The magnitude of the reinforcer also influences the rate of free operant responding 

  • The bigger the reinforcement and the better the reinforcement the higher the probability of that behavior occurring 

  • The effectiveness of a reinforcer not only depends on the quality but also the quantity of it 

Magnitude of Reinforcer 

  • Studies have shown that: 

    • The effect of a particular amount and type of reinforcer depends on the quantity and quality of reinforcer the individual is used to experiencing

    • Large after small

    • Small after large

  • A large reward is treated as especially good if you are used to small rewards/reinforcements

  • A small reward is not treated as good if you are used to a large reward

    • This is known as behavioral contrast

Behavioral Contrast: Change in the value of a reinforcer produced by prior experience with a reinforcer of a higher or lower value

  • Prior experience with a lower value reinforcer increases the reinforcer value

    • Positive behavioral contrast 

  • Prior experience with a higher value reinforcer reduces reinforcer value

    • Negative behavioral contrast 


  • Drug Addiction and Drug Abuse

    • In a study of laboratory animals, researchers found that if a rat was given cocaine in a distinct chamber, it would choose that area repeatedly over an area where it never received cocaine, showing that the cocaine was reinforcing

  • So the biggest cornerstone of instrumental behavior is that it produces and is controlled by its consequences

  • In some cases there is a strong relation between what a person does and the consequences of that action

    • The relation between behavior and its consequences can also be based on probability

Response + Reinforcer

  • Temporal Relations: the time interval between an instrumental response and the reinforcer (consequence)

    • The time between the response and consequence 

    • Ex. Child breaking the window and getting yelled at by mom vs. being told “wait till your dad gets home”.

  • Temporal Contiguity: the occurrence of two events such as a response and a reinforcer at the same time or very close together in time

    • Two events happening very close together so they begin to be related

    • Ex. Lucky socks

  • Response-reinforcer contingency: the relation of a response to a reinforcer, defined in terms of the probability of getting reinforced for making the response compared to the probability of getting reinforced in the absence of the response

    • In which environment is the response more likely to be reinforced?


How do we know it’s effective?

  • Over time, psychologists have emphasized that instrumental conditioning requires providing the reinforcer immediately after the occurrence of the instrumental response


  • Grice 1948- disruption in learning due to delays

    • If you take too long to give them a consequence it will lower the chances of them responding

    • They recorded that instrumental delays of even 0.5 seconds decrease effectiveness

    • It is sensitive because it is difficult to gauge which response deserves the credit for the reinforcer

Credit Assignment

  • Secondary Reinforcer: A stimulus that becomes an effective reinforcer because of its association with a primary (unconditioned) reinforcer

  • Primary reinforcer (unconditioned): food, water, sex

  • Secondary (conditioned) reinforcer: sight of food, label of the water bottle, money.

Skinner’s Pigeons

  • Skinner’s superstition experiment

  • Superstitious Behavior: behavior that increases because of accidental pairings of the delivery of a reinforcer with occurrences of the behavior

Chapter 6 10/31/2024

  • Self-control: the power of acting without the constraint of necessity or fate; the ability to act at one’s own discretion.

  • Not every behavior will come into contact with a consequence (reinforcer or punisher)

  • The relationship between behavior and reinforcement/punishment is not that simple

    • It is much more complex

  • Time will play a factor in strengthening or weakening of the behavior

    • Specifically: the immediacy of the consequence following the behavior 

Schedules of Reinforcement

  • Defined: a program or rule that determines how and when the occurrence of a response will be followed by the delivery of the reinforcer 

  • Delivery of a consequence (a reinforcer, for this purpose) can depend on many things:

    • Number of responses

    • Presence of certain stimuli

    • Other responses

    • Or a combination of all of these


  • Schedules of reinforcement will influence how a behavior is learned as well as how it is maintained by reinforcement

  • It will reinforce a similar pattern of behavior 


  • B.F. Skinner

    • Studies of schedules of reinforcement are typically conducted using a Skinner box, because a clearly defined response can occur repeatedly, so changes in the rate of responding can be observed and analyzed quickly

  • Simple schedules of intermittent Reinforcement 

    • Ratio schedules 

    • Interval schedules

    • (tick marks mark behavior)

    • In simple schedules: a single factor determines which occurrence of the behavior is reinforced

    • The factor can be how many responses have occurred or how much time has passed before the target response can be reinforced

Ratio Schedules

  • A schedule in which reinforcement depends ONLY on the number of responses the participant performs, no matter when those responses happen

  • In a ratio schedule you are only counting the number of responses that have occurred

  • THEN delivering the reinforcer every time that required number is reached

    • Ex. Chase is biting, so every time he bites we sit him down; the ratio is 1:1, 1 bite = sitting down

    • When Chase wants Animals we give them to him by the second time he asks; the ratio is 2:1: he asks twice for Animals and gets them once he asks the second time.

  • This type of schedule is called a:  Continuous Reinforcement Schedule (CRF) 

one behavior: one reinforcement

  • Defined: A schedule of reinforcement in which every occurrence of the instrumental response (behavior) produces the reinforcer

  • Engaging in the behavior does NOT mean you will always get reinforced

Partial Reinforcement or Intermittent Reinforcement

  • Defined: A schedule of reinforcement in which only some occurrences of the instrumental response are reinforced; also known as intermittent reinforcement

    • Fixed ratio

    • Variable ratio


  • Fixed Ratio: a reinforcement schedule in which the reinforcer is delivered after a fixed number of responses (an FR 1 is the same as CRF)

  • Variable Ratio: a reinforcement schedule in which the number of responses necessary to produce reinforcement varies from trial to trial; the value of the schedule refers to the average number of responses required for reinforcement

  • VR schedules produce high and roughly constant response rates between reinforcers
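The ratio rules above are simple enough to express directly. Below is a minimal Python sketch (my own illustration, not from lecture) of FR and VR schedules, where `fixed_ratio` reinforces after exactly n responses and `variable_ratio` reinforces after a count that merely averages n:

```python
import random

def fixed_ratio(n):
    """FR n: deliver the reinforcer after every nth response."""
    count = 0
    while True:
        count += 1                  # one response (e.g., one lever press)
        reinforced = (count == n)   # required number reached?
        if reinforced:
            count = 0               # start counting the next ratio
        yield reinforced

def variable_ratio(n):
    """VR n: the required count varies trial to trial; n is only the average."""
    required = random.randint(1, 2 * n - 1)  # uniform on 1..2n-1 averages n
    count = 0
    while True:
        count += 1
        reinforced = (count >= required)
        if reinforced:
            required = random.randint(1, 2 * n - 1)
            count = 0
        yield reinforced

# FR 5 reinforces presses 5, 10, 15, ...; VR 5 reinforces after
# 5 presses on average, which is why responding stays high and steady.
fr5 = fixed_ratio(5)
print([next(fr5) for _ in range(10)])  # True only at the 5th and 10th press
```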



PLUG IN NOTES FROM PICTURES from  11/5


11/7/2024

Topics for paper 2- due 11/11 or one class over

Operant Conditioning 

We are the first to present on 11/21; aim for 10 minutes


Sniffy’s Cumulative record

  • How we visualize Sniffy’s responding

    • Records the rate of response

    • Why? You must have a measurable behavior 

  • The slope of the lines represent the speed of responding

    • Steeper slope the faster the responding

    • The shallower the slope, the slower the responding

    • The little blue lines signal the delivery of the reinforcement (food)

    • You can have up to 10 cumulative records (might only get two)

  • The flat horizontal line means that sniffy is not responding, he’s just taking a rest 

  • The height of the cumulative record is always 75 responses

    • Every reset is an additional 75 responses

The alternating solid and dotted vertical lines are time markers

  • Dotted to solid- 5 min

  • Solid to dotted- 5 min

  • Solid to solid- 10 min

  • Dotted to dotted- 10 min
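Since the cumulative record is just a running response count plotted against time, it is easy to compute by hand. A minimal sketch (my own illustration; assumes a list of response timestamps in seconds and the 75-response reset described above):

```python
def cumulative_record(response_times, reset_at=75):
    """Convert response timestamps (seconds) into (time, height) points
    for a cumulative record whose pen resets after `reset_at` responses."""
    points = []
    height = 0
    for t in sorted(response_times):
        height += 1                # the pen steps up one unit per response
        if height > reset_at:
            height = 1             # pen resets to the bottom of the paper
        points.append((t, height))
    return points

# Closely spaced points give a steep slope (fast responding); a long gap
# with no points is the flat line of a resting Sniffy.
print(cumulative_record([1.0, 1.5, 2.0, 9.0, 9.2]))
# [(1.0, 1), (1.5, 2), (2.0, 3), (9.0, 4), (9.2, 5)]
```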


  • With animal behavior, it is important that the ratio of reward to goal behavior is low, especially in the beginning

  • This ratio helps the animal figure out the required number of responses it takes to receive the reward

  • This will help produce quick acquisition


  • It is important to remember that a fixed interval (FI) schedule does not guarantee that the reinforcer will be provided at a certain point in time

  • The interval determines only when the reinforcer becomes available. To receive the reinforcer after it has become available, the participant still has to make the required response.

  • The fixed interval schedule of reinforcement produces a unique pattern of behavior called scalloping 

    • The pattern is typified by a dramatic increase in the goal behavior as the end of the interval approaches, followed by a substantial drop, or post-reinforcement pause.

    • Example: students will often only begin studying right before the interval is about to elapse. After the exam is administered studying drops.

    • A fixed interval graph is always going to be scalloped 
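Because the FI rule is subtle (the interval only makes the reinforcer available, and a response is still required to collect it), here is a minimal sketch of that rule (my own illustration; times in seconds):

```python
def fixed_interval(interval, response_times):
    """FI schedule: a reinforcer becomes available `interval` seconds after
    the last delivery and is delivered on the FIRST response at or after
    that moment. Returns the delivery times."""
    deliveries = []
    available_at = interval            # when the first reinforcer is armed
    for t in sorted(response_times):
        if t >= available_at:          # earlier responses earn nothing
            deliveries.append(t)       # this response collects it
            available_at = t + interval
    return deliveries

# FI 10: responses at 3 s and 8 s earn nothing; the response at 12 s
# collects the reinforcer and the interval restarts from 12.
print(fixed_interval(10, [3, 8, 12, 15, 23]))  # [12, 23]
```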

Downside of Fixed Interval Schedules

  • Of the four types of reinforcement schedules, the fixed interval produces the lowest frequency of the goal behavior

    • Meaning you typically see lower rates of the behavior

    • If the time requirement is not consistent, the schedule is variable; units of time (seconds, minutes, weeks) mark an interval schedule

  • Reinforcement of Inter-Response times

    • Various features of behavior can be increased by reinforcement. The inter-response time (IRT) is one such feature.

    • Defined: the interval between one response and the next. Inter-response times can be differentially reinforced in the same fashion as other aspects of behavior, such as response force

    • If the participant is reinforced for a response that occurs shortly after the preceding one, then a short IRT is reinforced and short IRTs become more likely in the future.

      • On the other hand, if the participant is reinforced for a response that ends a long IRT, then a long IRT is reinforced and long IRTs become more likely in the future

      • A participant who has mostly short IRTs is responding at a high rate. By contrast, a participant who has mostly long IRTs is responding at a low rate

    • How do ratio and interval schedules determine the reinforcement of IRTs?

      • With a ratio schedule there are no time constraints, and the faster the participant completes the ratio requirement, the faster he or she will receive the reinforcer

      • A ratio schedule favors not waiting long between responses; it favors short IRTs

      • Interval schedules provide little advantage for short IRTs; rather, interval schedules favor waiting longer between responses


For ratio schedules:

  • Response rate is directly related to reinforcement rate: the faster the participant responds, the more reinforcers it will earn and the higher its reinforcement rate will be


For interval schedules: 

  • Interval schedules have an upper limit on the number of reinforcers a participant can earn
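IRTs are easy to compute: each IRT is the gap between successive responses, and overall response rate is the inverse of the mean IRT, which is why reinforcing short IRTs amounts to reinforcing a high rate. A small sketch (my own illustration; timestamps in seconds):

```python
def inter_response_times(response_times):
    """IRT = the interval between one response and the next."""
    ts = sorted(response_times)
    return [later - earlier for earlier, later in zip(ts, ts[1:])]

irts = inter_response_times([0.0, 2.0, 3.0, 7.0])
mean_irt = sum(irts) / len(irts)       # 7/3 ~= 2.33 s
print(irts)                            # [2.0, 1.0, 4.0]
print(1 / mean_irt)                    # ~0.43 responses/s: rate = 1 / mean IRT
```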



11/12/2024

Concurrent Schedules of Reinforcement

  • Two different reinforcement schedules

  • In experiments, we can control a lot of variables that are not so easily controlled in the ‘real world’

  • In a Skinner box, a pigeon can peck the only response key in the box, or preen, or move about the chamber

  • People are also constantly having to make choices about what to do. 

    • When you are watching TV and an ad pops up and you pick up your phone, you are on a concurrent schedule of reinforcement.

  • Numerous studies of choice have been conducted in Skinner boxes with two pecking keys

  • In the typical experiment, responding on each key is reinforced on some schedule of reinforcement. The two schedules are in effect at the same time (or concurrently) and the pigeon is free to switch from one key to the other

  • Concurrent Schedules Defined: A complex reinforcement procedure in which the participant can choose any one of two or more simple reinforcement schedules that are available simultaneously. Concurrent schedules allow for the measurement of direct choice between simple schedule alternatives

    • Two schedules are set up simultaneously and are accessed through two different operant behaviors

    • Concurrent schedules allow for continuous measurement of choice because the organism is free to change back and forth between the response alternatives at any time

Measures of Choice Behavior

  • How an organism distributes its behavior between the two response alternatives is greatly influenced by the reinforcement schedule in effect for each response

  • If the same VI reinforcement schedule is available for each response alternative, as in a concurrent VI 60-second procedure, the pigeon will peck the two keys equally often

  • The matching law- Herrnstein (1961)

    • The matching law states that given two behaviors (B1) and (B2), each on its own reinforcement schedule, (R1) and (R2) respectively, the relative frequency of each behavior will match the relative frequency of reinforcement available

  • Two common equations (written out below)

  • Herrnstein conducted an experiment with a concurrent VI 6-min VI 2-min schedule

  • Where a maximum of 10 reinforcers per hour could be obtained by responding on the VI 6-min alternative

  • A maximum of 30 reinforcers per hour could be obtained by responding on the VI 2-min alternative
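The two common equations referred to above are usually written as follows (the standard forms of Herrnstein’s matching law):

```latex
\frac{B_1}{B_2} = \frac{R_1}{R_2}
\qquad \text{or equivalently} \qquad
\frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2}
```

Plugging in this example’s numbers: R1 = 10 and R2 = 30 reinforcers per hour, so B1/(B1 + B2) = 10/40 = 0.25; matching predicts about 25% of responses on the VI 6-min alternative and 75% on the VI 2-min alternative.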

  • The rate of a particular response does not depend on the rate of reinforcement of that response alone. 

  • Whether a behavior occurs frequently or infrequently depends both on its own schedule of reinforcement and on the rates of reinforcement of other activities the individual may perform

  • The matching law has had a profound impact on the way in which scientists think about instrumental behavior

  • The major insight provided by the matching law is that the rate of a particular response does not depend on the rate of reinforcement of that response alone

    • Whether a behavior occurs frequently or infrequently depends not only on its own schedule of reinforcement but also on the rates of reinforcement of other activities the individual may perform

    • A given simple reinforcement schedule that is highly effective in a reward-impoverished environment may have little impact if there are numerous alternative sources of reinforcement

  • Therefore, how we go about training and motivating a particular response (e.g., studying among high school students) has to take into account other activities and sources of reinforcement the individuals have at their disposal


Concurrent-Chain Schedules

  • A complex reinforcement procedure in which the participant is permitted to choose during the first link which of several simple reinforcement schedules will be in effect in the second link. Once a choice has been made, the rejected alternatives become unavailable until the start of the next trial. Concurrent-chain schedules allow for the study of choice with commitment.

  • Once the participant has made a choice, it is stuck with that choice until the end of the trial or schedule

  • Big Take away: Concurrent-chain schedules involve choice with commitment

Chapter 7: Motivational Mechanisms 11/14/2024


For lab: data should come from Sniffy exercises 33-37

Lab topic due: 11/19


Motivation is very specific to an individual


Two approaches:

Associative Approach 

  • Thorndike

  • Relies heavily on associations; compatible with Pavlovian conditioning

  • Research efforts sought to identify the role of Pavlovian mechanisms in instrumental learning


Response Allocation Approach

  • Skinner

  • Relies on broader context of numerous activities organisms are constantly doing

  • Concerned with how instrumental conditioning procedure limits free flow of activities/consequences of limitation

  • Molar perspective; considers long-term goals and how to achieve them in the context of behavioral options


Associative Structure of Instrumental Conditioning

*When you hear associative think Thorndike

  • Realized that instrumental conditioning involves more than just a response and a reinforcer

  • The instrumental response will occur in the context of specific environmental stimuli

    • Ex. Sending a text

S-R-O

S: Stimulus

R: Response

O: Outcome





S-R association + Law of effect

Thorndike:

  • Considered the S-R association to be the key to instrumental learning and central to his law of effect

  • Hypothesized that instrumental conditioning involves the establishment of an S-R association

    • R, instrumental response

    • S, contextual stimuli

  • The law of effect: if a response (R) in the presence of a stimulus (S) is followed by a satisfying event, the association between the stimulus (S) and the response (R) becomes strengthened.

  • If the response is followed by an annoying event the S-R association is weakened 


  • According to the law of effect what is learned is an association between the response and the stimuli present at the time of the response.

    • The consequence of the response is not one of the elements in the association 

  • S-R associations + Law of effect Cont.

    • The law of effect does not involve learning about the reinforcer or response outcome (O), or the relation between the response and the reinforcing outcome (the R-O association)

  • Role of the reinforcer: to “stamp in” or strengthen the S-R association

  • Thorndike thought that, once established, the S-R association was solely responsible for instrumental behavior

  • Habits are things we do automatically and in the same way each time without thinking

    • Habits constitute about 45% of human behavior

    • Wood and Neal (2007): Expectancy of a Reward

      • How might we capture the notion that individuals learn to expect the reinforcer during the course of instrumental conditioning? 

      • One way to look for reward expectancy is to consider how Pavlovian processes may be involved in instrumental learning

Clark Hull (1930)

  • Hypothesized that the instrumental response increases during the course of instrumental conditioning for two reasons:

    • First: presence of the stimulus (S) comes to evoke the instrumental response (R)  directly through Thorndike’s S-R association

    • Second: the instrumental response (R) also comes to be made in response to an S-O association that creates the expectancy of reward. 


Two-Process Theory, Rescorla and Solomon (1967)

  • Two-process theory assumes that there are two distinct types of learning: Pavlovian and instrumental conditioning

    • Instrumental conditioning is based on consequences

    • Pavlovian conditioning is based on associations

  • In particular, during the course of instrumental conditioning, the stimuli (S) in whose presence the instrumental response is reinforced become associated with the response outcome (O) through Pavlovian conditioning, and this results in an S-O association

  • Rescorla and Solomon assumed that the S-O association activates an emotional state that motivates the instrumental behavior

  • The emotional state was assumed to be either positive or negative, depending on whether the reinforcer was an appetitive or an aversive stimulus

    • So far we have considered two different associations that can motivate instrumental behavior: Thorndike’s S-R association and the S-O association, which activates a reward-specific expectancy or emotional state

    • However, the instigation of instrumental behavior involves more than just these two associations

    • Notice that neither the S-R nor the S-O association involves a direct link between the response (R) and the reinforcer or the outcome (O)

  • R-O Associations

    • The most common technique used to demonstrate the existence of R-O associations involves devaluing the reinforcer after conditioning. Reinforcer devaluation involves making the reinforcer less attractive

    • If the reinforcer is food, one can make the food less attractive by conditioning a taste aversion to the food

    • If the instrumental response occurs because of an R-O association, devaluation of the reinforcer should reduce the rate of the instrumental response


Premack Principle (1965)

  • The Premack principle focuses on the difference in the likelihood of the instrumental and reinforcer responses

  • The Premack (or differential probability) principle states:

    • Given two responses, the opportunity to perform the higher probability response after the lower probability response will result in the reinforcement of the lower probability response

1. It means you have the opportunity to engage in two behaviors: 

  • Behavior 1: high probability (you’re typically likely to do this)

  • Behavior 2: low probability (you will do it, but it’s not your top choice)

2. It also means that when the opportunity arrives:

  • The high probability behavior (behavior 1) has the ability to reinforce (strengthen) the low probability behavior (behavior 2)

  • The principle states: the opportunity to perform the higher probability response after the lower probability response will result in the reinforcement of the lower probability response.

    • The low probability behavior does not have the ability to strengthen the high probability behavior

Response Deprivation Hypothesis

Defined: An explanation of reinforcement according to which restricting access to a response below its baseline rate of occurrence (response deprivation) is sufficient to make the opportunity to perform that response an effective reinforcer

  • Restriction of the reinforcer response is the critical factor for instrumental reinforcement

    • For example, depriving you of a low probability response (something you are less likely to do) can make access to that response an effective reinforcer

    • EX. Chase wanted goldfish but we were eating rice cakes. I sat him next to me, poured out the rice cakes, and told him they were mine and not to touch them. A few seconds later he was eating the snack.

    • In most instrumental conditioning procedures, the probability of the reinforcer activity is kept at a high level by restricting access to the reinforcer

  • The response-deprivation hypothesis provided a simple new strategy for creating reinforcers.

  • All instrumental conditioning procedures require withholding the reinforcer until the specified instrumental response has been performed

  • The response-deprivation hypothesis points out that this defining feature of instrumental conditioning is critical for producing a reinforcement effect

Timberlake and Allison (Response Deprivation Hypothesis) took the opposite view of the Premack principle

  • Abandoned the differential probability principle altogether and argued that restriction of the reinforcer activity was the critical factor for instrumental reinforcement 

  • Skinner is responsible for the Response Allocation Approach

    • Which looks at reinforcement and instrumental conditioning from a broader perspective than the Premack principle or the response deprivation hypothesis

    • The response allocation approach considers the broad range of activities that are always available to an individual 

    • It refers to how an individual distributes his or her responses among the various options that are available

    • When we think about the response allocation approach, we should be thinking about how the distribution of our responses becomes altered and what factors determine when, where, and why our behaviors happen.

Chapter 8: 

Identifying stimulus control: when a particular stimulus has control over a behavior


Introduction to the chapter: This chapter focuses on the topic of stimulus control

  • The chapter deals with the ways in which behavior comes under the control of particular stimuli

  • Thorndike and Skinner recognized that operant behavior and reinforcers occur in the presence of particular stimuli which come to control those behaviors


It means: Behaviors (instrumental response) and the availability of some reinforcers, will only happen when a particular stimulus is present 


If that specific stimulus is not present, that behavior or reinforcer will not occur

  • Stimulus control of instrumental behavior is evident in many aspects of life

  • The failure of appropriate stimulus control is often considered abnormal

Reynolds (1961)

  • Reinforced for pecking a circular response key

    • Two pigeons, VI schedule

    • Illuminated response key

  • Takeaway: you can experimentally test whether a behavior is under the control of a particular stimulus

  • Stimulus control of behavior is demonstrated by variations in responding in relation to different stimuli

  • If an organism responds one way in the presence of one stimulus and in a different way in the presence of another stimulus, the behavior has come under the control of those stimuli

  • There was no control over which stimuli would gain stimulus control

  • In the absence of special procedures, one cannot always predict which of the various stimuli an organism experiences will gain control over its instrumental behavior

  • Stimulus Discrimination (SD): Differential responding in the presence of two or more stimuli

    • This is when you treat or respond differently to two or more stimuli

    • Stimulus discrimination and stimulus control go hand in hand

    • You cannot have one without the other

      • If there is no discrimination between the two stimuli, the behavior is not under the control of those cues

Some types of stimulus discrimination  

  • Basic: Color ID, Letter ID, Number ID

  • Complex: Cars, Responding 


Stimulus generalization: Responding to test stimuli that are different from the cues that were present during training

  • Stimulus generalization is the opposite of stimulus discrimination

  • Stimulus generalization is when you present two or more stimuli and the participant/individual responds in a similar fashion to all of them


Pavlov: stimulus generalization was first observed by Pavlov

  • He found that after one stimulus was used as a CS his dogs would also make the conditioned response to other similar stimuli

  • That is they failed to respond differentially to stimuli that were similar to the original CS

  • Stimulus generalization gradient: a gradient of responding that is observed if participants are tested with stimuli that increasingly differ from the stimulus that was present during training.

  • The steepness of a stimulus generalization gradient provides a precise measure of the degree of stimulus control

  • A steep generalization gradient indicates strong control of behavior by the stimulus dimension being tested. In contrast, a flat generalization gradient indicates weak or nonexistent control

  • Use multiple examples during training to support generalization

    • Ex. when teaching a color, give several examples

      • Purple

    • If training a kid to tie shoes, make sure to change up the shoe

  • Make the training procedure incidental to other activities


  • Teaching as many things incidentally as possible makes it easier for generalization to happen

    • Ex. Running: you must train in every condition; on a track and on a treadmill, in the rain, in the cold, and in the heat.

  • Finally, generalization outside a training situation is achieved if the training helps to bring the individual in contact with contingencies of reinforcement available in the natural environment

  • Takeaway: Discrimination vs. Generalization

    • With stimulus discrimination you are responding to one stimulus but not to other stimuli that look different from it

    • With stimulus generalization you are giving a conditioned response to the stimuli that are similar to the one you learned to identify

  • Stimulus control

    • What determines which feature of the stimulus gains control over the behavior?

      • The organism’s sensory capacity and orientation are the most obvious variables that determine which stimulus gains control

      • Meaning you will orient to things you can see and hear

      • Because sensory capacity sets a limit on what stimuli can control behavior, studies of stimulus control are often used to determine what an organism is or is not able to perceive

    • Conditioning Various Stimuli

      • Having the necessary sense organs to detect the stimulus being presented does not guarantee that the organism’s behavior will come under the control of that stimulus

      • Stimulus control also depends on the presence of other cues in the situation

      • In particular, how strongly organisms learn about one stimulus depends on how easily other cues in the situation can become conditioned

    • Context cues

    • A stimulus is said to be discrete if it is presented for a brief period, has a clear beginning and end, and can be easily characterized

      • During its presentation, there are various other events occurring in the presence of that stimulus, also known as contextual cues

      • The contextual cues are various features (visual, auditory, and olfactory) of the room or place where the discrete discriminative stimuli are presented

    • Contextual cues can come to control behavior in a variety of ways

      • Study: Akins (1998)- Sexual Conditioning

        • Context cues were used to signal sexual reinforcement in male quail

    • Meaning that context cues can come to control behavior if they serve as a signal for a US or a reinforcer

  • Control by conditional Relations

    • Relations between two events are called binary relations

    • Relations that involve just two events, such as a CS and US, or a response and a reinforcer

    • Sometimes, however, the nature of a binary relation is determined by a third event, called a modulator

    • Modulator: a stimulus that signals the relation between two other events

    • Overshadowing: interference with the conditioning of a stimulus because of the simultaneous presence of another stimulus that is easier to condition

    • Overshadowing illustrates competition among stimuli for access to the processes of learning

    • Ex. the stimulus of a picture on the book overshadows the words

      • Chase doesn’t know how to read; he just knows the Brown Bear story

    • The child will quickly memorize the story based on the picture rather than the words and will not learn much about the words

    • Overshadowing has been of considerable interest in contemporary studies of spatial navigation

    • People and other animals use a variety of different stimuli to find their way around

    • The availability of one type of cue can sometimes overshadow learning about other types of spatial information

Chapter 9

  • Extinction can only be conducted after a response or association has been established using Pavlovian or instrumental conditioning

  • Often the goal is to reverse the effects of acquisition

    • Meaning you are trying to weaken the association


Extinction: Classical + Operant

Classical Conditioning

Phase 1: CS-US-CR

Phase 2: CS- nothing- no CR

Operant Conditioning

Phase 1: R - S(reinforcer) - R increases

Phase 2: R - nothing - R decreases


So far, our coverage of classical and instrumental conditioning has centered on various aspects of the acquisition (learning) and maintenance of new associations and new responses

  • This chapter in particular is not targeting learning or increasing behavior

Chapter focus: how to reduce the frequency of a previously reinforced behavior by withholding reinforcement

  • Diminishes the rate of response

  • Extinction provides zero probability of reinforcement following any given behavior. It is also a behavioral process of diminishing rate of response

  • Can be applied effectively in many settings, such as homes and schools, with problem behaviors ranging from severe self-injurious behavior to simple disruptive behavior.

  • The effectiveness of extinction in an applied setting depends primarily on the identification of reinforcing consequences and consistent application of the procedure

    • With extinction, you do not need to apply an aversive stimulus to decrease behaviors, and you do not need to provide verbal or physical models of punishment

    • All you do is withhold the reinforcer following the target behavior

    • This can be confusing: when you hear the word “extinction” you naturally think of a behavior dying out, and that is OK, but the procedure of extinction itself is simply withholding reinforcement following a behavior

      • We will also talk about resistance to extinction because although extinction appears to be a simple process its application can be very difficult 

  • Extinction is a technical term that should be used only to identify the procedures of withholding reinforcers that maintain a behavior

    • Four common misuses of the term:

      • Using extinction to refer to any decrease in behavior

      • Confusing forgetting and extinction

      • Confusing response blocking and sensory extinction

      • Confusing noncontingent reinforcement and extinction

  • Using the term to refer to decrease in behavior does not look at what caused the change in behavior only the decrease 

  • Forgetting happens when a behavior is weakened by the passage of time

    • With extinction the behavior is weakened because it is no longer producing reinforcement

    • Forgetting definition: the loss of a learned response that occurs because information about training is irrevocably lost due to the passage of time. Forgetting is contrasted with extinction, which is produced by a specific procedure rather than just the passage of time

    • Response blocking focuses on preventing the behavior from happening altogether

      • Whereas extinction does not stop the behavior from happening; it just no longer produces reinforcement

    • Noncontingent reinforcement decreases the behavior by changing the motivating operations; with extinction you are changing the consequence that follows the behavior

    • Extinction involves omitting the US or reinforcer

    • In classical conditioning, extinction involves repeated presentations of the CS without the US

    • In instrumental conditioning, extinction involves no longer presenting the reinforcer when the response occurs

    • With both types of procedures, conditioned responding declines. Thus the behavior change that occurs in extinction is the reverse of what was observed in acquisition

    • However, in both cases extinction is not simply the reversal of acquisition; instead, a new response is learned that is overlaid on the previously acquired response

    • So following extinction, organisms do not forget how to respond or that they should respond; they also do not unlearn what they had learned before.

  • The extinction procedure does not prevent occurrences of the problem behavior (the interruptions in the previous example); rather, the environment is changed so that the problem behavior no longer produces reinforcement


12/3/2024

  • When a previously reinforced behavior is emitted but is not followed by reinforcement or reinforcing consequences, the occurrence of that behavior should gradually decrease to its pre-reinforcement level

  • Meaning before this behavior was reinforced it was probably very low or nonexistent, and that’s what you’re looking for when extinction is in place

  • Behaviors that are typically placed on extinction are usually associated with predictable characteristics

    • Rate

    • Frequency

    • These effects typically generalize across species

  • With extinction, what we see is gradual reduction in behavior if the procedure is applied correctly and appropriately

    • School 

    • Homes

  • If reinforcement is removed

    • Numerous unreinforced responses can follow 

    • The gradual decrease in response frequency will tend to be sporadic with a gradual increase (first) in pauses between responses

  • Extinction alone is sometimes not recommended because the behavior initially increases in frequency, magnitude, and intensity (instead, extinction is combined with plenty of reinforcement for other behavior)


Effects of Extinction Procedure

    • Two behavioral effects of extinction:

    • Variability in responding 

      • overall decline in responses 

      • Increases in variability of responses

    • Aggression

      • Under certain conditions, frustration may be intense enough to induce aggression 


  • Frustrative non-reward energizes behavior; under certain conditions, frustration may be intense enough to induce aggression

  • Gradual decrease in frequency: with extinction you will typically see a gradual reduction in behavior over time

  • However, you will see new behaviors and unreinforced behaviors pop up

    • Our behavior has to be consistent when it comes to extinction

    • Additionally, responses will be very sporadic and they will come and go

    • Some will last longer and some will be shorter

    • As long as YOU remain consistent the behavior will change

  • Extinction burst: an increase in the frequency of responding when an extinction procedure is initially implemented

    • You will typically see an increase in the frequency of the behavior

      • The behavior will more than likely get worse before it gets better



Extinction Induced Aggression

  • Azrin, Hutchinson, and Hake (1966)

  • Aggression induced by extinction was dramatically demonstrated by an experiment in which two pigeons were placed in the same Skinner box

Variables Affecting Resistance to Extinction 

  • Meaning: continued responding during the extinction procedure

  • The extinction procedure did not stop  the responding or behavior

  • Behavior that continues to occur during extinction is said to have greater resistance to extinction than behavior that diminishes quickly

  • Keep in mind that resistance to extinction is a relative concept; there are three measures typically used to measure it:

  1. The rate of decline in response frequency

  2. The total number of responses emitted before responding reaches a final low level or ceases

  3. Measuring resistance to extinction as the duration of time required to reach criterion

1. The rate of decline in response frequency

  • Remember that intermittent reinforcement (schedules of reinforcement) may produce behavior with greater resistance to extinction compared to the resistance produced by continuous reinforcement

  • Ratio strain: when you go from a constant rate of reinforcement to a large gap between reinforcements

  • Some intermittent schedules may produce more resistance than others

    • Variable ratio and variable interval schedules particularly 

    • Main point: the ‘thinner’ the intermittent schedule of reinforcement, the greater the resistance to extinction

2. The total number of responses emitted before responding reaches a final low level or ceases

  • The number of times a behavior has produced reinforcement may influence the resistance to extinction

  • Behavior with a long history of reinforcement may have more resistance to extinction than a behavior with a short history of reinforcement

3. Measuring resistance to extinction as the duration of time required to reach criterion

  • Sometimes problem behaviors diminish during extinction and then are accidentally re-strengthened with reinforcement

    • Keep track of what aspect of the behavior you are trying to diminish 

      • Frequency 

      • Magnitude 

      • Duration 


Withholding Reinforcement CONSISTENTLY

  • When the reinforcing consequence for the behavior has been identified, you must make sure that you are withholding it consistently

  • All behavior change procedures require consistent application because consistency is essential


  • Extinction involves omitting the US or reinforcer

  • In instrumental conditioning it involves no longer presenting the reinforcer when the response occurs

Recovery from Extinction: how behavior begins again after an extinction procedure 

  1. Spontaneous Recovery

  2. Renewal

  3. Reinstatement

  4. Resurgence


Spontaneous recovery was originally identified by Pavlov

  • The decline in conditioned behavior that occurs with extinction dissipates with time. If a rest period is introduced after extinction training, responding comes back.

  • Because nothing specific was done during the rest period to produce the recovery the effect is called spontaneous recovery

  • Spontaneous recovery is a short-lived and limited reappearance of the behavior following an extinction procedure

    • The reappearance of a behavior after it has diminished to its pre-reinforcement level or stopped entirely

      • When this happens, if the behavior does not produce reinforcement, the behavior will go away again

      • An instance of spontaneous recovery might signal that extinction was ineffective

      • Extinction was successful, but that does not stop the recurrence of the behavior periodically

  • Renewal: reappearance of an extinguished response produced by a shift away from the contextual cues that were present during extinction. In ABA renewal, the shift is back to the context of acquisition. In ABC renewal, the shift is to a familiar context unrelated to either acquisition or extinction

Renewal of Conditioned responses

  • This happens when you train participants in one context, which is labeled context A

  • The next step is to move the participants to a different context, labeled context B, where they receive extinction training

  • Then they are returned to context A to test whether the behavior was extinguished or whether it will reappear after extinction

Implications of the Renewal effect

  • It suggests that even if a therapeutic procedure is effective in extinguishing a pathological fear or phobia in the safety of the therapy office, the conditioned fear may easily return when the client encounters the fear CS in a different context

  • You can generalize excitatory conditioning from one context to another

    • Meaning if you have a fear in one situation, it can more easily generalize to other situations

  • Reinstatement: reappearance of an extinguished response produced by exposure to the US or reinforcer


Reinstatement: you have a learned aversion to fish because you got sick after eating it on a trip.

  • You take nibbles and feel like you’re no longer getting sick from the fish

  • With reinstatement, if you were to become sick again for some reason, your aversion to fish would return even though the new sickness had nothing to do with the fish


Resurgence: the reappearance of an extinguished response caused by the extinction of another behavior

  • The mechanisms of resurgence are not well understood. This is due in part to the fact that a variety of procedures have been used, and different procedures may produce resurgence by different mechanisms

These phenomena indicate that extinction does not erase what was originally learned

  • The extinguished behavior can reappear with: 

    • The passage of time (spontaneous recovery)

    • Change of context (renewal)

    • Re-exposure to the US (reinstatement)

    • The extinction of another response (resurgence)

This is bad news for various forms of exposure therapy that use extinction to eliminate problematic fears, phobias, and habits.