BLP Exam 3


1

operant conditioning

A learning process in which a behavior becomes associated with a consequence. As a result of this association, the consequence influences the probability of that behavior occurring again in the future

2

Edward Thorndike

Conducted experiments with cats in puzzle boxes
Rejected anthropomorphism
Explained operant conditioning in terms of the S-R connection
Proposed the Law of Effect

3

law of effect

behaviors that are followed by a positive outcome are more likely to be repeated, while behaviors that are followed by a negative outcome are less likely to be repeated

4

Pavlovian vs. instrumental conditioning

Pavlovian:

-S-S association

-subject is passive

-CR is involuntary, response is elicited

Instrumental:

-R-O association, actions → consequences

-subject is active

-contingency is necessary

5

B.F. Skinner

behaviorist associated with further defining instrumental conditioning and using it to modify and control behavior. Skinner’s solutions:
Distinguished between classical and operant conditioning
Got rid of the S (of the S-R connection).
Responses are emitted in operant conditioning, not elicited.
Focus on rate of responding
Avoid theoretical speculation

6

positive reinforcement

increasing behavior by adding something

7

negative reinforcement

increasing behavior by removing something

8

positive punishment

decreasing behavior by adding something

9

negative punishment

decreasing behavior by removing something

10

shaping

reinforcing successive approximations of the goal behavior

11

fixed ratio

There is a set number of times you must make the response before it is reinforced

12

fixed interval

There is a set interval of time that must pass before you are reinforced

13

variable ratio

the number of times that you must make the response before you are reinforced varies every trial

14

variable interval

the amount of time that must pass before you get reinforced varies from trial to trial

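The four schedules above differ only in whether the requirement counts responses (ratio) or elapsed time (interval), and in whether that requirement is fixed or varies. A minimal Python sketch of that logic may help; it is illustrative only, and the function names and numbers are made up rather than taken from the course material:

```python
import random

def next_requirement(schedule, value):
    # Fixed schedules (FR, FI) use the same requirement every time;
    # variable schedules (VR, VI) draw a new requirement around that
    # mean after each reinforcer.
    if schedule in ("FR", "FI"):
        return value
    return random.uniform(1, 2 * value)

def simulate(schedule, value, n_responses=200, secs_per_response=2):
    # Ratio schedules (FR, VR) count responses; interval schedules
    # (FI, VI) require time to pass, but the reinforcer is still
    # delivered only when a response is made.
    requirement = next_requirement(schedule, value)
    responses = elapsed = reinforcers = 0
    for _ in range(n_responses):
        responses += 1
        elapsed += secs_per_response
        if schedule in ("FR", "VR"):
            earned = responses >= requirement
        else:
            earned = elapsed >= requirement
        if earned:
            reinforcers += 1
            responses = elapsed = 0
            requirement = next_requirement(schedule, value)
    return reinforcers

# e.g., compare simulate("FR", 5) with simulate("VI", 30)
```
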
15

extinction

when the R→O contingency is broken, the organism stops making the target response

-does not reverse, erase, or eliminate effects of acquisition

-extinguished response reappears under various circumstances

-extinction is specific to the context in which it occurs

16

continuous reinforcement

reinforcement occurs after every target response

17

partial reinforcement

sometimes the target response is reinforced, sometimes it is not

18

partial reinforcement extinction effect

responses that are on a partial reinforcement schedule take longer to extinguish than behaviors on a continuous reinforcement schedule

-seems paradoxical: you’d expect continuous reinforcement to create stronger learning, but it does not

19

PREE Amsel’s Frustration Theory

When an organism makes a response and does not receive reinforcement, the organism experiences frustration

Pigeon-in-the-corner experiment: a pigeon is trained to peck at a light to get food, then switched to extinction; the pigeon pecks at another pigeon in the corner out of frustration

Continuous Reinforcement Schedule
Response → Food
Response → Food
Response → No food → Frustration
Response → No food → Frustration
Partial Reinforcement Schedule
Response → Food
Response → No food → Frustration
(While frustrated) Response → Food
**Subjects are rewarded for responding while frustrated**

20

PREE Capaldi’s Sequential Theory

animals have a memory of the recent non-rewarded trial when experiencing a rewarded trial

Example sequence: RRNRNNRRNRNNNR
(R = reinforced trial, N = non-reinforced trial; the key trials are the reinforced trials that follow a non-reinforced trial)
Argued that the memory of the behavior during the recent non-rewarded trial is rewarded on those trials

21

Instinctive Drift

the tendency for animals to revert to instinctive behaviors that may interfere with the performance of learned behaviors

-raccoons in a bank commercial associated coins with food so strongly that they began treating the coins like food

22

contrast effects/Crespi effects

sudden shifts in behavior after changing the value of the reinforcer

-direction and magnitude of the shift is relative to prior experience with the reinforcer

-if you’ve been getting 1 pellet, 20 pellets is great. if you’ve been getting 100 pellets, 20 pellets sucks

23

depression effect

if individuals have experienced a large reward for their efforts in the past, then shifting to a medium reward will result in an exaggerated decrease in effort

24

elation effect

if individuals have experienced only a small reward for their efforts in the past, then shifting to a medium reward will result in an exaggerated increase in effort

25

Drive Reduction Theory (Hull and Skinner)

things that fulfill one’s basic needs will reinforce behavior

Problems:

  1. if we postulate a new drive for every reinforcer that increases behavior, then we will have an endless list of drives

  2. reinforcers are defined by drive reduction and drives are defined by reinforcement: circular reasoning

26

Evidence against Drive Reduction Theory

Experiment 1: Animals will learn to solve a maze for saccharin (which is sweet but does not reduce hunger).
Experiment 2: Animals will learn to solve a maze for access to a female (even without getting to have sex).
Experiment 3: Premack Principle: more likely behaviors will reinforce less likely behaviors, but not vice versa

27

Premack Principle

more likely behaviors will reinforce less likely behaviors, but not vice versa

28

calculating the relative rate of responding

rate of responses to one choice/total rate of responding

29

calculating the relative rate of reinforcement

rate of reinforcement for one choice/total rate of reinforcement

30

Herrnstein’s Matching Law

the relative rate of responding on a choice matches the relative rate of reinforcement on that choice
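
A worked example may help (the numbers are invented for illustration and are not from the cards): suppose a pigeon makes 75 responses on key A and 25 on key B, while key A delivers 30 reinforcers and key B delivers 10.

\[
\frac{R_A}{R_A + R_B} = \frac{75}{75 + 25} = 0.75
\qquad
\frac{r_A}{r_A + r_B} = \frac{30}{30 + 10} = 0.75
\]

The relative rate of responding (0.75) equals the relative rate of reinforcement (0.75), which is exactly the matching the law describes.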

31

Implications of the Matching Law

Whether a behavior occurs frequently or infrequently depends not only on its own schedule of reinforcement, but also on the rates of reinforcement of other activities the individual may perform.
In an impoverished environment, one reinforcement schedule may be highly effective, but in an enriched environment (where there are many rewarding options to choose from), the same schedule may have little impact.
Example:
Teens living in a reinforcement-barren environment are more likely to engage in sexual activity than teens who have other options for entertainment (e.g., sports, music, art).

32

generalized form of matching law

differing sensitivities (s): when the individual doesn’t notice or can’t tell that the rates of reinforcement are different

differing response biases (b): one type of response/reward is favored over another
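
The card does not give the equation, but the generalized form is conventionally written as follows (a sketch of the standard notation, where B = response rate and r = reinforcement rate):

\[
\frac{B_1}{B_2} = b\left(\frac{r_1}{r_2}\right)^{s}
\qquad\Longleftrightarrow\qquad
\log\frac{B_1}{B_2} = s\,\log\frac{r_1}{r_2} + \log b
\]

With sensitivity s = 1 and bias b = 1, this reduces to the strict matching described by Herrnstein’s law.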

33

Do animals encode the relationship between R and O? (Rescorla and Colwill)

Evidence for R-O associations
Rescorla and Colwill – devaluation experiment
Exp. Design
Phase I: R1→O1; R2→O2
Phase II: O1→LiCl (illness)
Test: R1? or R2?
Results: Rats choose to perform R2.

Significance: Results show that rats associate R1 with O1, because once O1 makes them ill, they stop performing R1.

34

Do animals encode the relationship between the S and the O? (Rescorla and Colwill)

Exp. Design
Phase I: S1 (light): R (nose poke) → O1; S2 (noise): R (nose poke) → O2
Phase II: R1→O1; R2→O2
Test: During S1: R1? or R2?; During S2: R1? or R2?
Results:
When in the presence of S1, rats performed R1; when in the presence of S2, rats performed R2
Significance: Rats can learn S-O associations.

35

Do animals encode a hierarchical relationship (rule changes in different contexts)? (Rescorla and Colwill)

Exp. Design
Phase I: S1: R1→O1; R2→O2
S2: R1→O2; R2→O1 (flipped)
Phase II: O2→LiCl (illness)
Test: S1: R1? or R2?; S2: R1? or R2?
Results:
During S1, rats performed R1 more frequently
During S2, rats performed R2 more frequently
Significance:
Rats can learn which R→outcome relationship is in place during each stimulus cue.
When one outcome is now associated with illness, they will change their responding during each stimulus to ensure they do not encounter that outcome that made them ill.

36

stimulus control

ways in which instrumental behavior comes under the control of particular stimuli

Many behaviors are subject to stimulus control. The stimulus context in which a response is performed is important. In fact, failure to use appropriate stimulus control is often considered abnormal.
Examples:
• Undressing in your bedroom versus a public place.
• Staring at the TV when it is on versus when it is off.
• Talking when you have an audience versus when you don’t.
• Students studying during Fall break: the stimuli present on campus are very different from the ones present on break, so students end up doing very little studying over break.

37

Pigeon Stimulus control experiment

Training Phase: Red circle with white triangle → peck → food
Test: Red circle alone (do they peck?); White triangle alone (do they peck?)
Results: Some pigeons peck to red circle; some pigeons peck to white triangle.
Significance
-The degree of differential responding (responding differently to each stimulus) tells us the degree of control that stimulus has over the behavior.
-Differential responding to the two stimuli indicates that the subjects are treating each stimulus as “different” from the other. This is called stimulus discrimination.
-One cannot predict which stimulus will control responding.

38

stimulus generalization

an organism responds in a similar fashion to two or more stimuli. This is the opposite of stimulus discrimination/differential responding

39

stimulus generalization gradient

measures stimulus control; provides precise information about sensitivity of behavior to systematic stimulus variations

-steep generalization gradient indicates strong control of behavior by the stimulus

-flat generalization gradient indicates weak/nonexistent stimulus control

Example: A pigeon is trained to peck a red light for food. If that pigeon is color-blind (he cannot distinguish one color from another), then he will peck at all lights, no matter what color (flat stimulus generalization gradient). If the pigeon is not color-blind, then he will only peck at the red light and maybe some colors that are close to red.

40

factors that affect stimulus control

• Properties of the Stimulus
o Sensory capacity and orientation
o Ease of conditioning (e.g., salience)
• Type of reinforcement
• Learning factors
o Stimulus discrimination training

41

sensory capacity and orientation

physical limits of what our sensory systems can perceive

• This is the most obvious variable that determines whether a particular stimulus feature controls responding. Sensory capacity determines which stimuli are included in an organism’s sensory world.
• Sensory capacity sets limits on what stimuli can control behavior.
• Studies of stimulus control are often used to determine what the organism is/is not able to perceive.

42

ease of conditioning

some stimuli are easier to notice, identify, encode, and remember. Those stimuli with noticeable and memorable features will be easier to condition.

43

overshadowing

a more salient stimulus will overshadow learning about a less salient one

44

type of reinforcement

certain types of stimuli are more likely to gain control over instrumental behavior in appetitive rather than aversive situations

-visual control predominates when the stimulus acquires positive or appetitive properties

-auditory control predominates when the stimulus acquires negative or aversive properties

45

stimulus elements

Sometimes we perceive individual elements that make up the cue
Sometimes we perceive the cue as a whole.
-This will affect how well we generalize learning

46

Learning (discrimination training)

-experience with stimuli (learning about them) may determine the extent to which those stimuli come to control behavior

Discrimination Training
S+ = Stimulus that signals reinforcement is available
S- = Stimulus that signals reinforcement is not available
Ex: Traffic lights: S+ = Green light (you can cross); S- = Red light (no crossing)
-Will learn discrimination faster if...
1. S+ and S- are presented simultaneously (side by side)
-Will have greater stimulus control by S+ if...
2. S+ and S- are close together in similarity
Example: Learning to discriminate flags was better when training contained flags that were hard to tell apart.
-We can use interoceptive cues as S+ and S-
Example: Hunger: S+ = hunger (you eat); S- = stomach full (you don’t eat)

47

spontaneous recovery

-response reoccurs “spontaneously” after some time has passed

48

renewal

environmental cues present during extinction are removed/changed, and response reoccurs

Money in vending machine (Bowman Hall) → no food (stop putting money in)
Will put money in vending machine (Kent Hall) because it is a different context

49

reinstatement

after extinction, response is reinforced again (usually by accident), response returns

Example: Extinguish fear of bees; stung by bee → reinstates fear of bees

50

Resurgence

extinguish an unhealthy behavior (drinking to relieve stress) by replacing it with a healthy behavior (exercise); if the healthy behavior is later prevented (e.g., by injury), the unhealthy behavior returns

51

factors that affect extinction

  • Number and spacing of extinction trials

    • more trials = stronger extinction

    • distributed trials = longer lasting extinction

    • massed trials = faster extinction, but response more likely to return

  • immediate vs delayed extinction

    • extinction training directly after acquisition = faster extinction

    • delayed extinction training = longer lasting

  • Repeated Extinction Cycles = longer lasting

  • Multiple Contexts

    • extinction is context specific, must complete trials in multiple contexts to facilitate generalization

  • Reminder Cues

    • pair a stimulus w/ extinction training, reproduce in other contexts to enhance memory of training

  • Compound Stimulus Training

    • multiple stimuli that signal extinction training will have a greater impact
