Reinforcement Schedules and Choice Behaviour (Video)

Description and Tags

49 flashcards focusing on reinforcement schedules, choice behaviour, and related concepts for exam preparation.


49 Terms

1

What does Thorndike's Law of Effect say about responses in the presence of a stimulus followed by a pleasant reward?

They strengthen the stimulus–response (S–R) association.

2

What happens to the S–R association when a response is followed by a punishment?

The S–R association is weakened.

3

What does Partial Reinforcement mean in operant conditioning?

Not all occurrences of a behaviour are reinforced; reinforcement is intermittent.

4

Why does behaviour often persist under partial reinforcement in real life?

Reinforcement is not always immediate or guaranteed, so behaviour can persist.

5

How is a schedule of reinforcement defined?

A rule that determines which occurrence of the instrumental response is followed by the reinforcer.

6

What is a Ratio schedule?

Reinforcement based on the number of responses performed.

7

In a Fixed Ratio (FR) schedule, when is reinforcement given?

For every nth instance of the behaviour.

8

Give an FR9 schedule example.

Reinforcement after every 9 responses.

9

What response pattern is typical of a Fixed Ratio (FR) schedule?

Steady and high rate of responding with a post-reinforcement pause.

10

What is the relationship between the number of required responses and the post-reinforcement pause?

The pause length increases as the number of required responses increases.

11

What is Continuous Reinforcement (CRF)?

Every instance of the behaviour is reinforced; FR1.

12

What FR schedule characterizes CRF?

FR1 (every instance is reinforced).

13

How does CRF affect the pattern of responding?

Steady and moderate rate with brief, unpredictable pauses.

14

What is a Variable Ratio (VR) schedule?

Reinforcement is given for every nth response on average; steady responding with no predictable pauses.

15

Give an example of a VR schedule from the notes.

VR100 – reinforcement on average every 100 responses (payouts can occur at 80th, 120th, etc.).

16

What is a Fixed Interval (FI) schedule?

Reinforcement is available after a fixed amount of time since last reinforcement.

17

Under FI, what happens if a response occurs before the fixed time has elapsed?

It is not reinforced.

18

What is the crosswalk button example used to illustrate?

A Fixed Interval schedule; reinforcement after a fixed time.

19

What pattern is typical for FI performance?

A scalloped pattern with increasing responding as the reinforcement time nears.

20

What is a Variable Interval (VI) schedule?

Reinforcement becomes available after varying time intervals; average interval is specified (e.g., VI 24h).

21

How do responses typically look under VI schedules?

Steady and stable rate with no noticeable pauses.

22

What is the main difference in response patterns between FR/FI and VR/VI schedules?

FR/FI show post-reinforcement pauses and bursts; VR/VI yield steady rates without clear pauses.

23

What does the Molecular Theory say about ratio schedules?

Higher response rate on ratio schedules due to incentives to produce short inter-response times.

24

Why do ratio schedules reinforce short inter-response times according to the Molecular Theory?

Waiting between responses delays reinforcement.

25

How do interval schedules affect inter-response times according to the Molecular Theory?

Waiting to respond can be advantageous since reinforcement is time-based.

26

What is the 'feedback function' in the Molar Theory?

The relationship between responses and reinforcement across the session; ratio is an increasing linear function; interval has an upper limit (asymptote).
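As a sketch of the two feedback functions (assuming B responses in a session of length T, a ratio requirement of n responses, and an interval of length t), the ratio schedule grows linearly with responding while the interval schedule is capped by time:

```latex
R_{\text{ratio}}(B) = \frac{B}{n}
\qquad
R_{\text{interval}}(B) \le \frac{T}{t}
```

On the ratio schedule, responding faster always earns more reinforcers; on the interval schedule, extra responding cannot raise the reinforcement rate past the time-imposed ceiling.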

27

In a 10-minute session with VI-2 min reinforcement, what is the maximum number of reinforcers?

5 reinforcements (limited by the interval).

28

What does 'Limited Hold' mean in reinforcement schedules?

A restriction on how long the reinforcement remains available.

29

What are Concurrent Schedules of Reinforcement?

Two or more reinforcement schedules operate simultaneously for different responses.

30

What does the Concurrent Schedule procedure allow researchers to measure?

Choice behavior and the distribution of responses between options.

31

What does the Matching Law state about relative rates of responding?

They match the relative rates of reinforcement.
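In symbols, with B1 and B2 the rates of responding on two alternatives and R1 and R2 the rates of reinforcement they earn, the Matching Law is usually written as:

```latex
\frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2}
```

That is, the proportion of behaviour allocated to an option equals the proportion of reinforcement obtained from it.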

32

What does the Generalised Matching Law add to the Matching Law?

Parameters for sensitivity and bias to account for imperfect matching.

33

What does the 's' parameter represent in the Generalised Matching Law?

Sensitivity of choice behavior to relative reinforcement; s < 1 indicates undermatching.

34

What does the 'b' parameter represent in the Generalised Matching Law?

Bias or preference for a particular option.
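Putting the s (sensitivity) and b (bias) parameters together, the Generalised Matching Law is commonly stated in log-ratio form:

```latex
\log\!\left(\frac{B_1}{B_2}\right) = s \cdot \log\!\left(\frac{R_1}{R_2}\right) + \log b
```

When s = 1 and b = 1 this reduces to strict matching; s < 1 indicates undermatching, and b ≠ 1 indicates a constant preference for one option regardless of reinforcement rates.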

35

Do people generally prefer ratio or interval schedules?

Generally prefer ratio over interval, and variable over fixed.

36

What is Self-Control in the context of instrumental conditioning?

Complex choice behavior involving prioritizing long-term benefits over short-term costs.

37

How can self-control be trained according to the notes?

Shaping with increasing delay, low-effort tasks, or distraction during the delay.

38

What is Delay Discounting?

The value of a reinforcer decreases as waiting time increases.

39

What do V, M, D, and K stand for in the delay discounting function?

V = value of reinforcer; M = magnitude; D = delay; K = discounting rate.
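These four symbols fit together in the standard hyperbolic discounting function (Mazur's form):

```latex
V = \frac{M}{1 + KD}
```

At zero delay (D = 0) the value equals the full magnitude (V = M); as D grows, V falls toward zero, and it falls faster the larger K is.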

40

What does a steeper delay discounting function imply about a person?

Greater impulsivity and more difficulty showing self-control.
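As a minimal sketch of why a steeper function means more impulsivity, assuming the standard hyperbolic form V = M/(1 + KD): with the same reward and the same delay, a larger discounting rate K leaves the delayed reward with far less subjective value, so the immediate (smaller) option wins more often.

```python
def discounted_value(magnitude, delay, k):
    """Hyperbolic delay discounting: V = M / (1 + K*D)."""
    return magnitude / (1 + k * delay)

# Compare a shallow (k = 0.1) and a steep (k = 1.0) discounter
# for a reward of magnitude 100 delayed by 10 time units.
shallow = discounted_value(100, 10, k=0.1)  # 100 / (1 + 1)  = 50.0
steep = discounted_value(100, 10, k=1.0)    # 100 / (1 + 10) ~ 9.09

print(shallow, steep)
```

For the steep discounter, even a modest immediate reward (say, magnitude 20 at no delay) outweighs the delayed 100, which is the signature of impulsive choice.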

41

What is the essence of the 2-process theory of instrumental conditioning?

Both Pavlovian and instrumental learning contribute to motivation via S–O associations activating emotional states.

42

What is Pavlovian Instrumental Transfer (PIT)?

A Pavlovian S–O association motivates instrumental responding when the CS is present.

43

What is the Response Deprivation Hypothesis?

Restricting access to an activity below its baseline level makes the opportunity to perform it reinforcing, even for low-probability responses.

44

What is the Response Allocation approach?

Examines how restricted conditions alter the distribution of responses from a bliss point baseline.

45

What is a bliss point in behavioral economics?

The unconstrained baseline distribution of responses that maximizes overall benefit.

46

What is elasticity of demand in this context?

How sensitive consumption is to changes in price.

47

What does the way consumption of different items responds to price changes demonstrate in behavioral economics?

That different goods have different price sensitivities and substitutes.

48

What is the link between elasticity and complementary commodities?

Demand for one item can be linked to demand for another (e.g., BBQ buns and hamburgers).

49

What summarizes the overall purpose of using reinforcement schedules in this material?

To understand motivation, choice, and how different schedules influence behavior and preferences.