PSYC 2500 Final Exam (Lundquist)


42 Terms

1

taste aversion learning

a form of learning in which an organism learns to avoid a taste after just one pairing of that taste with illness
e.g., rats:
- CS - the taste of something the rat drinks or eats
- US - injection of poison
- Result: when given the opportunity to taste the CS again, the rat will consume little to none of it!

2

belongingness

biological preparedness to make certain associations
Contrast with Pavlov, who assumed all associations are ARBITRARY and that contiguity causes conditioning

3

Garcia's effect

a special facility for learning taste aversions (taste-illness associations) - difficult for standard classical conditioning to explain, for three reasons:

1) association established in one trial
2) up to 24 hours between CS and US
3) very resistant to extinction

Arbitrariness and contiguity are not a good explanation

4

Garcia and Koelling (slides)

Four groups:
US = shock OR illness
CS = light and sound OR saccharin taste

1) CS - Light/sound + US - shock = the rat avoided the bright noisy water
2) CS - taste + US - shock = did not avoid the saccharin water
3) CS - light/sound + US - Illness = did not avoid the bright noisy water
4) CS - taste + US - illness = avoided the saccharin water

5

What matters more in classical conditioning?

CONTINGENCY (the US depends on the CS), not mere contiguity

6

Robert Rescorla
(3 groups - 40%, 20%, 10%)

did an experiment on what it takes to make a signal work (more than just contiguity)

3 groups of rats heard a tone for two minutes; when the tone was ON, the probability of a shock was 40%

- The three groups have the same degree of contiguity of tone and shock, the shock is on for 48 seconds out of 120 seconds

- But they varied in the probability of shock when the tone was OFF:
Group 1 = w/o the tone playing, the p(shock) was 40%
Results = they showed NO fear conditioning to tone
--- the tone says to this group that your 40% stays the same, it is what it is

Group 2 = w/o tone playing, p(shock) was 20%
Results = showed some fear, but less than group 3

Group 3 = w/o tone playing, p(shock) was 10%
Results = showed strong conditioned fear of tone
--- The tone says to this group that your 10% now goes up to 40%, BE SCARED
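The logic of Rescorla's design can be sketched as a quick calculation (a toy illustration using the card's numbers; "contingency" here is simply the tone-on shock probability minus the tone-off shock probability):

```python
# Rescorla's three groups, per the card: p(shock) was 40% whenever the
# tone was ON, but the groups differed in p(shock) while the tone was OFF.
p_on = 0.40
p_off = {"Group 1": 0.40, "Group 2": 0.20, "Group 3": 0.10}

# Contingency: how much the tone changes the probability of shock.
contingency = {name: p_on - p for name, p in p_off.items()}

for name, delta_p in contingency.items():
    print(f"{name}: tone raises p(shock) by {delta_p:+.2f}")

# Group 1's contingency is zero (the tone is uninformative), matching the
# finding that Group 1 showed no fear conditioning to the tone, while
# Group 3's large contingency matches its strong conditioned fear.
```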

7

Contingency definition
AND Pavlov

how the US depends on the CS - the "probability of the US in the presence of the CS" relative to the "probability of the US in the absence of the CS"

Pavlov - in his experiments, contingency was confounded with contiguity
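In symbols, this contingency is the difference between two conditional probabilities (a standard formulation; the symbol ΔP is conventional, not from the card):

```latex
\Delta P = P(\text{US} \mid \text{CS}) - P(\text{US} \mid \neg\text{CS})
```

ΔP > 0 produces excitatory conditioning (the CS predicts the US), ΔP = 0 produces none (as in Rescorla's Group 1), and ΔP < 0 produces inhibitory conditioning.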

8

Rescorla-Wagner model

the change in conditioned strength or predictive power of a CS on this trial is equal to the salience/noticeability of the CS times the difference between the amount of US there is to predict and the amount currently being predicted by all CS's present on this trial
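The verbal definition above corresponds to the standard Rescorla-Wagner equation (symbols are the conventional ones, not from the card):

```latex
\Delta V_{X} = \alpha_{X}\,(\lambda - V_{\text{total}})
```

where \(\alpha_{X}\) is the salience of CS \(X\), \(\lambda\) is the amount of US there is to predict, and \(V_{\text{total}}\) is the amount currently predicted by all CSs present on the trial. (The full model also includes a US-based learning-rate parameter \(\beta\) multiplying the same difference.)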

9

Instrumental or OPERANT conditioning (Thorndike)

cats in a puzzle box
trial and error; incremental learning

10

Law of Effect (Thorndike)

Thorndike's principle that behaviors followed by favorable consequences (reinforcement) become more likely, and that behaviors followed by unfavorable consequences (punishment) become less likely

11

Operant vs. Classical Conditioning

1) Op Cond. = reinforcement depends on response;
Class Cond. = reinforcement (US) comes regardless

2) Op Cond. = response is EMITTED and voluntary
Class Cond. = response is ELICITED and involuntary


WHAT IS LEARNED?
3) Op Cond. = a BEHAVIOR
Class Cond. = a SIGNAL (CS --> US)

THROUGH WHAT MECHANISM?
4) Op Cond. = Law of Effect: CONSEQUENCES (but delay of reinforcement weakens response!)
Class Cond. = CONTIGUITY

NOTE: "Conditioning," because changing the conditions changes the response frequency; not under conscious control even though voluntary

12

B.F. Skinner Box

1) There are many responses
2) Little time and effort is required
3) Responses are easily recorded

The RESPONSE RATE is the dependent variable

13

Reinforcement: (both Pos and Neg)

ALWAYS INCREASE the rate of responding

- Pos reinforcement: delivers appetitive stimulus (food, approval)
---- when you present a stimulus and it increases the behavior
- neg reinforcement: removes aversive stimulus (shock, alarm clock noise)
---- when you remove a stimulus and it increases the behavior

14

Punishment definition

DECREASES the rate of responding

- When you present a stimulus and it decreases behavior that is POSITIVE PUNISHMENT
(e.g., spanking a child)

- When you remove a stimulus and it decreases behavior that is NEGATIVE PUNISHMENT (e.g., losing access to a toy)

15

Pos (in punishment and reinforcement)

Positive means presenting a stimulus

the difference is the word that follows positive;
Pos Rein = presenting a stimulus and it increases the behavior
Pos Punish = presenting a stimulus, but it decreases the behavior

16

Neg (in punishment and reinforcement)

Negative means removing a stimulus

the difference is the word that follows negative;
Neg Rein = removing a stimulus that increases the behavior
Neg Punish = removing a stimulus that decreases the behavior

17

discriminative stimulus (definition and example)

indicates under what circumstances a response will be reinforced

e.g., a rat presses a bar but only gets food when the LIGHT in the box is ON; eventually it doesn't press the bar unless the light is on

The stimulus does NOT CAUSE a response or SIGNAL reinforcement; rather it SETS THE OCCASION for the response

18

Operant conditioning parallel to classical conditioning

1) Instead of a CR - there is a operant response
2) instead of a US - there is reinforcement
3) instead of a CS - there is a discriminative stimulus

ORDER CHANGES!!

Class Cond. = Stimulus (CS) --> Reinforcement (US) --> Response (CR)

Op Cond. = Stimulus --> Response --> Reinforcement

19

conditioned (secondary) reinforcer

a stimulus that gains its reinforcing power through its association with a primary reinforcer

How does something get to be a conditioned reinforcer? Through CLASSICAL CONDITIONING
e.g., in higher order classical cond. - once a bell is connected with food, it's used like a US

20

partial reinforcement effect

reinforcing ONLY SOME TRIALS produces an even STRONGER response (more resistant to extinction) than reinforcing ALL TRIALS

21

schedules of reinforcement
(and definition of a continuous reinforcement - CR)

describe interval, ratio, fixed, variable
- continuous reinforcement (CR) = all responses get reinforced

22

interval schedule

reinforce next response after some time interval

23

Fixed Interval (FI)

time is FIXED; rat gets food pellet for next bar press, say, 30 seconds after last pellet (e.g., checking mail, delivered daily)

24

Variable Interval (VI)

time is AVERAGE; rat gets food pellet for next bar press 20, 40, 25, 35 seconds after last pellet, etc. - 30 seconds on average (e.g., checking e-mail, delivered whenever)

25

Ratio schedule

reinforcement after some number of responses (ratio of responses to reinforcement)

26

Fixed Ratio (FR)

ratio is FIXED; rat gets food pellet for every 10th bar press (e.g., a factory pieceworker)

27

Variable Ratio (VR)

ratio is AVERAGE; rat gets food pellet after 8, 12, 5, 15 responses - 10th response on average (e.g., gambling)

28

Shaping

an operant conditioning procedure in which reinforcers guide behavior toward closer and closer approximations of the desired behavior

can produce a response the animal would never have made spontaneously on its own

29

Chaining

using operant conditioning to teach a complex response by linking together less complex skills

30

Cumulative record (Skinner)

A learning curve plotting cumulative number of responses against time (so it can only go up or stay flat).

The slope is a "response rate," the main Skinnerian dependent variable; emphasizes maintenance of behavior: the end product of learning rather than the actual process of learning

31

Skinner vs. Thorndike on Operant Conditioning

1) Skinner assumed no neural model or brain states explaining S-R connections
2) Skinner does NOT believe reinforcement strengthens an S-R connection - responses are NOT CAUSED by stimuli, but rather are selected and produced for their reinforcing consequences

32

instinctive drift (Keller & Marian Breland)

tendency for animals to return to innate behaviors following repeated reinforcement

"learned behavior drifts toward instinctive behavior"

goes to show that there are weaknesses in conditioning techniques!

33

Pavlov's definition of reinforcement

reinforcement is the US because when paired with the CS it produces the CR, whereas presenting the CS without the US leads to the extinction of the CR

34

Thorndike's definition of reinforcement

reinforcement is a "satisfying state of affairs," defined Behaviorally: "By satisfying state of affairs is meant one which the animal does nothing to avoid, often doing such things to attain and preserve it"

35

Watson's definition of reinforcement

reinforcement ensures that the desired response is the one most frequently paired with the stimulus, because it's the response that must happen on every trial - each trial continues until the desired response is made, and reinforcement ends the trial

36

Guthrie's definition of reinforcement

reinforcement works not because the animal has some goal or desire but because it changes the stimulus situation - thus protecting the successful response (the most recent one) from being replaced by a new response; no new response can be attached to those stimuli if they're no longer present

e.g., the stimuli inside the puzzle box are no longer present once the cat has gotten out

37

Hull's definition of reinforcement

reinforcement was initially defined as a "drive reduction" i.e., anything that reduces drive, which is linked to biology and an animal's drive to meet their biological needs.

Later it was defined as "drive stimulus reduction" in the sense of merely seeming to meet those biological needs by providing the stimuli associated with them (chewing, swallowing, tasting, etc.), which brought the concept back into the realm of the psychological.

BOOK: Need reduction = Hull's drive reduction
Drive reduction = Hull's drive stimulus reduction

EXAM: know both the earlier and later definitions

38

Tolman's definition of reinforcement

reinforcement doesn't make learning happen, but is instead the motivation for performance

39

Skinner's definition of reinforcement

reinforcement (whether positive or negative) is anything that increases the rate of responding

40

James Olds and reinforcement

showed that electrical stimulation of the brain in certain regions was apparently a strong reinforcer for rats, even being preferred to eating

This suggested that there was an underlying neural basis to all reinforcement

Note: unreplicable findings, eventually abandoned

41

Premack's Theory

a more valued activity can be used to reinforce the performance of a less valued activity

reinforcers may be behaviors rather than stimuli

42

Response Deprivation Theory
(best modern view of reinforcement!!!)

the theory of reinforcement that says a behavior is reinforcing to the extent that the organism has been deprived of performing that behavior

Different from Premack's Principle because it means that even a low-probability behavior can reinforce a high-probability behavior - as long as that low-probability one is restricted to be even less available than it would normally be