Decision Making 1


33 Terms

1

Different paths from stimulus to response

reflex

habit

rule-driven

pavlovian

deliberative

2

different types of input into a decision

perceptual

value based

3

perceptual input into a decision

which response best fits the sensory information

e.g., are there more dots moving to the left or right?

4

perceptual information is generated in

sensory-specific regions (e.g., MT), with greater activity for stronger info

this info feeds into a decision
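Decisions like the moving-dots task are often described as accumulating noisy sensory evidence until it favors one response strongly enough. A minimal illustrative sketch, not from the course material (the function name, parameters, and bound value are all assumptions):

```python
import random

def dots_decision(coherence=0.1, bound=30.0, noise=1.0, seed=1):
    """Toy evidence-accumulation sketch of a motion-discrimination choice.

    Each sample is noisy evidence favoring 'right' (positive) or 'left'
    (negative); samples are summed until the total crosses a bound.
    Stronger coherence (stronger sensory info) reaches the bound faster.
    """
    random.seed(seed)
    evidence, t = 0.0, 0
    while abs(evidence) < bound:
        evidence += coherence + random.gauss(0.0, noise)
        t += 1
    return ("right" if evidence > 0 else "left"), t

choice, rt = dots_decision()
print(choice, rt)
```

With a fixed seed the outcome is reproducible; raising `coherence` makes the "right" choice faster and more likely, mirroring greater MT activity for stronger motion signals.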

5

value based input to a decision

which response best fits my preferences?

6

process of learning (acquiring) in value based decision making

anticipate value (can be automatic/implicit)

experience outcome

update future expectations based on how much better or worse the outcome was than expected (prediction error)

7

FutureValuePrediction =

CurrentPrediction + (learning rate × PredictionError)

8

RewardExpected(t+1) =

RewardExpected(t) + α × (RewardReceived(t) − RewardExpected(t)), where α is the learning rate
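This update rule can be run as a short simulation. A minimal sketch; alpha = 0.2 is an illustrative learning-rate value, not from the course:

```python
def rw_update(expected, received, alpha=0.2):
    """One Rescorla-Wagner-style update: nudge the expectation toward
    the received reward by a fraction alpha of the prediction error."""
    prediction_error = received - expected
    return expected + alpha * prediction_error, prediction_error

# Learning from repeated rewards: the expectation climbs toward 1.0
# and the prediction error shrinks as the reward becomes predicted.
v = 0.0
for trial in range(10):
    v, pe = rw_update(v, received=1.0)
print(round(v, 3))  # -> 0.893
```

Note how the same rule produces the patterns in the next cards: when the expectation is low (reward unlikely), a delivered reward yields a large prediction error; when the expectation is high, the error is small.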

9

Midbrain (VTA) dopamine neurons signal

predicted reward and violations of predictions based on likelihood and delay

think: monkey with juice

10

low likelihood of reward =>

large prediction error when reward occurs

11

high likelihood of reward =>

small prediction error when reward occurs

12

reward occurs around when expected =>

no prediction error

13

reward occurs long after expected =>

large prediction error

14

striatum receives dopamine inputs and shows similar patterns of

expected reward to cues and prediction error to outcomes

15

expected rewards to cues

cues:

greater activity when reward expected to be more likely

16

prediction error to outcomes

outcomes:

greater activity for unexpected wins (positive prediction errors)

lower activity for unexpected no-wins (negative prediction errors)

17

we can learn to associate rewards with stimuli and with actions

pavlovian learning and instrumental learning

18

pavlovian learning

learning to associate rewards with stimuli (e.g., tone)

19

instrumental learning

learning to associate rewards with the actions that caused them (e.g., turn left)

basis for habits

20

learning of stimulus values has been associated with

ventral regions of striatum

21

learning of action values has been associated with

dorsal regions of striatum

22

ventral mPFC and/or ventral striatum activity increases with

greater subjective pleasantness of stimulus (e.g., odor, taste, attractiveness)

23

Assessing value: ventral striatum (vStr)

includes nucleus accumbens (NAcc)

24

Assessing value: ventromedial PFC (vmPFC)

includes medial OFC (mOFC)

25

Overlapping regions track

different kinds of rewards

primary (e.g., food), secondary (e.g., social, $$)

may form “common currency”

26

auction-like procedures encourage participants to indicate how much

they would actually be willing to pay for an item (WTP)

27

lateral orbitofrontal cortex (OFC) may hold

more specific information about stimulus features, vmPFC may integrate these

28

vStr/vmPFC signal value even when

unrelated to the task, and predict future choices

think: car priming

29

vmPFC/vStr activity can predict

the success of a pitch for funding (e.g., crowdfunding, microloans), better than ratings alone

30

The ___ of $100 depends on how much we already have

utility

31

utility

how much pleasure or satisfaction we get from something

32

people weigh ___ 2x more than ___

anticipated losses

anticipated gains
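The 2x weighting of losses over gains can be illustrated with a toy valuation. A minimal sketch; the function name `prospect_value` and the exact weight are illustrative assumptions:

```python
def prospect_value(gains, losses, loss_weight=2.0):
    """Toy loss-averse valuation: anticipated losses count roughly
    twice as much as equal-sized anticipated gains."""
    return gains - loss_weight * losses

# A fair coin flip offering +$100 or -$100 feels like a net loss:
print(prospect_value(gains=100, losses=100))  # -> -100
```

This is why people typically reject a 50/50 gamble with equal stakes: the anticipated loss outweighs the anticipated gain even though the expected monetary value is zero.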

33

some evidence for overlapping gain and loss regions

vmPFC, vStr