Chp 5- Operant conditioning - Psychology

Last updated 11:19 AM on 5/2/26

54 Terms

1
New cards

Define operant conditioning

A type of learning in which the consequences of a behaviour determine whether it will be repeated.

2

What is the three phase model and who introduced it?

  • model that describes operant conditioning as a sequence in which an antecedent leads to a behaviour followed by its consequences

  • B.F. Skinner, 1938

3

What is an antecedent?

  • the conditions present immediately before a particular response

4

What are the three phases of the three phase model?

  1. Antecedent

  2. Behaviour

  3. Consequences

5

Explain the antecedent phase

  • internal & external conditions present immediately before a particular response

  • Directs behaviour by indicating what action is appropriate in a particular situation

6

Explain the behaviour phase

  • observable actions that occur in response to the antecedent

7

Explain the consequence phase

  • outcomes that follow the behaviour and motivate future behaviour by influencing whether it will be repeated

  • If consequence is rewarding, behaviour more likely to occur

  • If consequence unpleasant, behaviour less likely to occur

  • If behaviour does not produce any change, no consequence will occur

8

What are a reinforcer and a punisher?

Reinforcer: stimulus that increases the likelihood that a response will be repeated

Punisher: stimulus that decreases the likelihood that a response will be repeated

9

Define reinforcement

Process of adding or removing a reinforcer to strengthen behaviour

10

Define punishment

Process of adding or removing a punisher to weaken or discourage behaviour

11

What is positive reinforcement?

  • strengthening a response by adding a desirable stimulus after the response

12

What is negative reinforcement?

  • strengthening a response by removing or preventing an undesirable stimulus after the response

13

What is positive punishment?

Weakening a response by adding an undesirable stimulus after the response

14

What is negative punishment?

Weakening a response by removing a desirable stimulus after the response

15

What are schedules of reinforcement?

  • rules that determine the timing & frequency with which a reinforcer is delivered following a response

  • Formalised by Skinner & Ferster in 1957

2 types: continuous reinforcement, intermittent reinforcement

16

How can schedules of reinforcement be assessed?

  • response rate (frequency with which a behaviour occurs over a given period of time)

  • → indicates how effective reinforcement is at maintaining or increasing a behaviour

  • Extinction rate (how quickly a learned behaviour decreases or stops when reinforcement is no longer provided)

17

What is continuous reinforcement?

  • reinforcement schedule in which a response is reinforced every time it occurs

  • highly effective when learning new behaviour = rapid acquisition

18

What is intermittent reinforcement?

  • a reinforcement schedule in which a response is reinforced only some of the time it occurs

  • used to strengthen and maintain behaviour

19

Compare continuous and intermittent reinforcement

Acquisition speed

  • Continuous: fast initial learning because every desired response is reinforced

  • Intermittent: slow initial learning because not every response is reinforced

Response rate

  • Continuous: moderate

  • Intermittent: high

Extinction rate

  • Continuous: fast, because the reinforcement previously given for each response stops

  • Intermittent: slow, because reinforcement that was only given occasionally continues to be expected

Resistance to extinction

  • Continuous: low, because behaviour tends to stop quickly once reinforcement is withdrawn

  • Intermittent: high, as the learned behaviour persists even when reinforcement is no longer provided

Application

  • Continuous: effective for establishing new behaviours, leading to rapid skill acquisition

  • Intermittent: helps maintain behaviours over the long term, promoting persistence

20

Define a fixed schedule

  • a predictable schedule in which the length of time or number of responses between reinforcements is set

21

Define a variable schedule

  • an unpredictable schedule in which the length of time or number of responses between reinforcements changes

22

Define a ratio schedule

  • a schedule dependent on the number of responses needed before reinforcement is provided

23

Define an interval schedule

  • a schedule reliant on the length of time between reinforcements

24

Compare a fixed ratio and variable ratio

FR: reinforcement is given after a set number of responses

VR: reinforcement is given after an unpredictable number of responses

25

Compare fixed interval and variable interval

FI: reinforcement is given at fixed time intervals

VI: reinforcement is given at irregular time intervals

26

What are the benefits of a Fixed ratio schedule?

  • Fast acquisition (fastest of all intermittent reinforcement schedules)

  • Predictable: because a set number of responses is needed for reinforcement, it is easy to understand

  • → learner knows exactly how many responses are required

  • Fast response rate (speed/frequency of responses after the behaviour has been learnt)

  • → each response brings the learner closer to the reward, which maintains motivation

27

What are the limitations of a fixed ratio schedule?

  • post reinforcement pause: short break in responding immediately after reinforcement

  • → happens because the learner knows reinforcement will not occur again until the set number of responses is completed

  • Less resistant to extinction compared with variable schedules

28

What are the strengths of Variable ratio schedules?

  • produces the highest response rate with steady responding & almost no pauses

  • Greatest resistance to extinction

  • → learners continue responding for long periods due to unpredictability of reinforcement

29

What are the limitations of Variable ratio schedules?

  • high potential for impulsive or excessive behaviour

  • → e.g. gambling addiction

  • Behaviour difficult to reduce or stop because of its persistence

30

What are the strengths of a Fixed Interval schedule?

  • predictable reinforcement pattern

  • → easy to implement

  • suitable for long-term behaviour change

31

What are some limitations of Fixed Interval schedules?

  • slowest acquisition speed

  • Slower response rate

  • fastest extinction rate/ lowest resistance to extinction

  • → because learners know reinforcement will occur at set times

32

What are some strengths to Variable interval schedules?

  • high resistance to extinction

  • → reinforcement unpredictable & time-based, so behaviour continues even when reinforcement stops

  • Produce moderate steady response rate

33

What are the limitations of a variable interval schedule?

  • slow acquisition speed

  • → because reinforcement isn’t directly tied to the number of responses and occurs less frequently

  • Provides fewer reinforcers overall, may reduce motivation for learners

34

What study did Thorndike conduct and when?

Puzzle Box Experiments and the Law of Effect

1898

35

What did Thorndike propose?

  • responses followed by desirable consequences are strengthened because they produce a ‘satisfying state’ inside the organism, and therefore become more likely to be repeated.

  • Responses followed by unpleasant consequences are weakened and become less likely to occur

36

What was the aim of Puzzle Box Experiments and the Law of Effect (Thorndike, 1898)?

  • to examine the influence of reinforcement on the response of cats attempting to escape from a puzzle box in order to reach food

37

What was the research design used in Puzzle Box Experiments and the Law of Effect (Thorndike, 1898)?

  • experimental

  • 13 cats

38

What were the variables in Puzzle Box Experiments and the Law of Effect (Thorndike, 1898)?

IV: consequence of the cat’s response

→ whether performing the correct action led to the desired outcome of escaping the puzzle box & obtaining food

DV: length of time it took for the cats to escape the puzzle boxes

39

What was the procedure of Puzzle Box Experiments and the Law of Effect (Thorndike, 1898)?

  • hungry cat placed inside puzzle box, door closed & the mechanism that kept the box shut was set

  • → Puzzle box wooden box w slats that allowed cat to look out and reach paw through

  • Fish placed near puzzle box to motivate cat to escape

  • Cat’s behaviour (how it managed to trigger release mechanisms & time it took to do so & exit box) was recorded

  • Shortly after the cat exited the box, Thorndike placed it back inside, reset the mechanisms and repeated the process

  • Repeated up to 24 times, multiple cats, 8 diff puzzle boxes

40

What were the key findings of Puzzle Box Experiments and the Law of Effect (Thorndike, 1898)?

  • when first placed in box, cats experienced discomfort (from being confused or from wanting food outside)

  • Instinctively attempted to escape by engaging in ‘random’ behaviours until accidentally performed action that triggered release mechanism & opened door

  • Thorndike proposed cats did not use insight or reasoning to escape

  • → through random process, trial & error, gradually eliminated ineffective behaviours and eventually discovered effective response

  • w repeated trials cats learned to associate escaping box w receiving food & identified specific response required to open door

  • Once association established, cats made conscious decision to perform behaviour that activated release mechanism

  • → choice demonstrates positive reinforcement, & demonstrates law of effect

41

How did Puzzle Box Experiments and the Law of Effect (Thorndike, 1898) contribute to psychology?

  • Laid foundation for later theories of operant conditioning

  • Skinner’s work with rats & other animals developed directly from Thorndike’s

42

What were some limitations and criticisms of Puzzle Box Experiments and the Law of Effect (Thorndike, 1898)?

  • can’t be generalised to humans (because it used cats, which offer a simplified view of learning; humans have more advanced cognitive abilities)

  • → Thorndike later argued that both animal & human behaviour can be understood in terms of trial & error learning

  • Animal ethics not upheld, cats distressed from hunger, confusing situations (puzzle box).

43

What experiment did Skinner conduct and when?

Skinner Box Experiments

1938

44

What did Skinner propose?

  • rejected the idea that behaviour changes because consequences create internal ‘satisfying’ effects

  • Argued that learning is not guided by internal states: what matters is not how the organism felt about consequences but their observable effect on future behaviour

  • If a consequence increases likelihood of a behaviour, it is a reinforcer, regardless of any internal feeling

45

What was the aim of Skinner Box Experiments (Skinner, 1938)?

  • Assess the rate of response (lever pressing) & rate of extinction in rats across different reinforcement schedules

46

What research design was used in Skinner Box Experiments (Skinner, 1938)?

  • experimental

  • White rats (number not specified)

47

What are the variables in Skinner Box Experiments (Skinner, 1938)?

IV: schedule of reinforcement (Fixed interval or Fixed ratio)

DV: response rate (measured by time taken between each response) & extinction rate for each reinforcement schedule

48

What was the pre-procedure for Skinner Box Experiments (Skinner, 1938)? (Not the separate schedules)

  • hungry rats individually placed in an experimental box & remained inside for 1 hr, with multiple rats tested simultaneously in separate boxes

  • Extraneous variables controlled by keeping the box dark, sound-proof, & smooth-walled

  • Well ventilated through a tube that continuously drew air out of the box; water always available

  • Rats not observed directly because any visual monitoring would have required light, which could distract and interfere with rat behaviour

  • When rats lifted their forelegs and pressed down the lever inside the box with their paw, the movement triggered release of food pellets into the food tray

49

What was the procedure for the fixed interval Skinner Box Experiments (Skinner, 1938)?

  • food pellets released automatically through electrical mechanism after set periods of time

50

What was the procedure for the Fixed Ratio Skinner Box Experiments (Skinner, 1938)?

  • Mechanical system used instead: modified lever turned a ratchet with each press, releasing a pellet after a specific number of responses

  • Rats each completed a daily 1 hr test for 54 consecutive days

  • Initially conditioned to ratio 16:1, 24:1, 32:1, 38:1, 64:1, 96:1, 192:1

51

How was the rate of responding and extinction measured?

  • recorded the length of time between one lever press & the next

  • Recorded electrically, producing graph

  • Extinction occurred when lever pressing no longer followed by food reinforcement, achieved by disconnecting food tray from pellet release mechanism. Data recorded

52

What were the key findings of Skinner Box Experiments (Skinner, 1938)?

  • rats initially learned to press lever through trial & error, performing various random behaviours until accidentally pressed lever & received food

  • Food pellets act as reinforcement for behaviour, lever pressing = conditioned response maintained by positive reinforcement

  • Fixed Interval = slower response rate than fixed ratio; rats rapidly stopped responding after each reinforcement, as responses made straight after reinforcement were never reinforced

  • → rats learned to press the lever more quickly when less time elapsed between response & reinforcement; learning occurred more slowly as the delay between response and reinforcement increased

  • Fixed ratio = faster response rate because rats received food more frequently when they pressed the lever more. Because the apparatus did not allow ratios above 192:1, the experiment did not reach the limit of what a fixed-ratio schedule could achieve

53

How did Skinner Box Experiments (Skinner, 1938) Contribute to psychology?

  • Provided detailed insights into how different reinforcement schedules affect behaviour

  • → showing that the timing & frequency of reinforcement influence response rates & resistance to extinction

  • Offered more detailed & systematic understanding of operant conditioning

54

What were some criticisms and limitations of Skinner Box Experiments (Skinner, 1938)?

  • animal ethics for rats not upheld

  • → exposed to periods of starvation (only fed 15 pellets a day) & some died of starvation

  • Cannot be fully generalised to humans.

  • → because rats & humans differ in cognitive & social characteristics

  • External validity: experimental conditions highly artificial, cannot generalise to real-life situations