Behavioral final cram

Last updated 6:32 PM on 4/20/26
116 Terms

1
New cards

real incentives (experiment)

payment based on decisions/choices

2
New cards

flat fee (experiment)

$$ for participation

3
New cards

lab vs. field experiment

  • lab=controlled experiment

  • field=natural environment

4
New cards

between subject vs. within subject experiment

  • between subject: every subject/group of subjects has a different treatment

  • within subject: every subject is exposed to multiple treatments

5
New cards

deception vs no deception (experiment)

  • deception= don’t reveal the true purpose of the experiment

6
New cards

weak preference

>= ; “is weakly preferred to”

  • transitive

7
New cards

strict preference

> ; “is strictly preferred to”

  • transitive

8
New cards

indifference

~

  • transitive

9
New cards

Transitivity

if x >= y & y>=z, then x>=z

  • strict, weak, & indifference preference relations are all transitive

10
New cards

completeness

either x>=y or y>=x; if both hold, you’re indifferent

  • completeness + transitivity = weak order (*note: this doesn’t mean weak preference; a weak preference relation becomes a weak order once it’s assumed to be complete & transitive)

11
New cards

reflexivity

x >= x; every option x is at least as good as itself

almost always holds

12
New cards

symmetry

~ is symmetric, if x~y, y~x (consistency across equivalent or mirrored situations)

13
New cards

ordinal utility

can replace u by v with v(x) = f(u(x)) for all x, as long as f is an increasing function

  • prefer higher utility

  • utility differences have no meaning (can be 0.007 or 67)

14
New cards

cardinal utility

can replace u by v with v(x) = f(u(x)) for all x, as long as f is a linear increasing function

  • higher utility means preferred

  • a larger utility difference means a stronger preference

15
New cards

(strong) pareto

1 policy is better if no one is worse off & at least 1 person is better off than before

16
New cards

utilitarianism

1 policy is better than another if it generates greater amt of social welfare (cardinal utility, sum of all utility of all ppl in society)

17
New cards

revealed preferences

people’s preference are revealed through their choices

18
New cards

projection bias

people project their current preferences onto the future

  • people’s preferences depend too little on what they will be in the future & too much on the present

19
New cards

duration neglect

ranking of past experiences are insensitive to variations in duration, ppl mostly focus on peaks & ends

20
New cards

peak-end rule

ranking of past experiences based on their peaks & end only (ppl forget bad/boring time consuming parts in the middle)

21
New cards

diversification bias

people overestimate the degree to which they will like variety in the future (think they want more variety, but they would rlly choose the same thing over time)

22
New cards

risk vs. uncertainty

  • risk= the probabilities of the possible outcomes are known

  • uncertainty= the probabilities are unknown

23
New cards

expected value

EV(Lottery) = p1*x1 + … + pn*xn

  • multiplying probabilities by outcomes
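A minimal sketch of the EV formula (the lottery numbers here are made up for illustration):

```python
def expected_value(probs, payoffs):
    # EV = p1*x1 + ... + pn*xn
    return sum(p * x for p, x in zip(probs, payoffs))

# hypothetical lottery: 50% chance of 100, 50% chance of 0
print(expected_value([0.5, 0.5], [100, 0]))  # 50.0
```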

24
New cards

expected utility

EU(Lottery) = p1*u(x1) + … + pn*u(xn)

25
New cards

certainty equivalent

  • the sure amount CE(L) you’re indifferent between receiving for certain & playing the lottery; gives the same utility as the lottery, without the risk (u(CE(L)) = EU(L))
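EU & the certainty equivalent in one quick sketch (u(x) = √x and the 50/50 lottery are illustrative choices):

```python
import math

def expected_utility(probs, payoffs, u):
    # EU = p1*u(x1) + ... + pn*u(xn)
    return sum(p * u(x) for p, x in zip(probs, payoffs))

u = math.sqrt                                    # concave utility => risk averse
probs, payoffs = [0.5, 0.5], [100, 0]
ev = sum(p * x for p, x in zip(probs, payoffs))  # 50.0
eu = expected_utility(probs, payoffs, u)         # 0.5*10 + 0.5*0 = 5.0
ce = eu ** 2                                     # invert u: u(25) = 5, so CE = 25
print(ce, ev, ce < ev)  # 25.0 50.0 True -> CE below EV: risk averse
```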

26
New cards

risk seeking

  • expected value is lower than CE

  • prefers risk/lottery to the expected value

  • convex utility function

27
New cards

risk averse

  • expected value is > CE

  • prefers receiving the expected value of a lottery for sure to the lottery itself

  • concave utility function

28
New cards

risk neutral

  • indifferent between risk or no risk

  • EV(L)~L

  • CE(L)=EV(L)

  • linear utility

29
New cards

Sure thing Principle

if 2 options share a common component x, that component should not affect the preference between the 2

30
New cards

Asian disease problem

  • violation of expected utility

  • =the framing of the situation affects ppl’s choices (shouldn’t occur)

31
New cards

prospect theory

  1. reference points

    1. derive utility (utility seen as gains and losses, which have emotional attachment) from the payoff relative to a reference point (not in cumulative/absolute terms)

  2. diminishing sensitivity & reflection effects

    1. gains concave

    2. losses convex

    3. value function

  3. loss aversion

    1. losses hurt ~2x as much as gains (the value function is steeper for losses)

  4. probability weighing

32
New cards

diminishing marginal sensitivity

a change from 0-10 appears larger than a change from 100-110, both for gains & losses

  • utility=concave for gains & convex for losses

  • risk averse for gains, risk seeking for losses (reflection effect)

33
New cards

Reflection Effect

risk attitudes for gains are opposite to those for losses:

  • risk averse for gains (concave)

  • risk-seeking for losses (convex)

34
New cards

St. Petersburg paradox

fair coin tossed repeatedly until heads is obtained, if it takes n tosses you earn $2^n. how much are you willing to pay to play?

  • expected to pay infinite amount because the expected value is infinite, but nobody pays a large amount (eu is a small #)

    • Violation of expected value

      • explained by expected utility

  • *shows that expected utility describes ppl’s behavior better than expected value
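A quick numeric check of the paradox (truncating the infinite sum at an arbitrary N; log utility is one illustrative choice of concave u):

```python
import math

# P(first heads on toss n) = (1/2)^n, payoff = 2^n; truncate the sum at N
N = 60
ev = sum((0.5 ** n) * (2 ** n) for n in range(1, N + 1))          # each term = 1
eu = sum((0.5 ** n) * math.log(2 ** n) for n in range(1, N + 1))  # converges
print(ev)            # 60.0 -> EV grows without bound as N grows
print(round(eu, 3))  # 1.386 (~2*ln 2): small & finite, like real willingness to pay
```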

35
New cards

Loss Aversion

ppl fear losses more than enjoyment from gains

  • around the origin/reference point, the utility function is steeper for losses than for gains (losses have a bigger impact)

36
New cards

Allais Paradox

  • replacing the value of a sure thing shouldn’t change people’s preferences

    • ex: 89% chance of 60 in choice 1 for both, 89% chance of 0 for both in 2 makes them change preferences

  • violation of expected utility & sure thing principle

  • consistent with prospect theory

  • driven by certainty effect: ppl overweigh outcomes that are certain

37
New cards

maximin

choose option with greatest minimum utility payoff

38
New cards

maximax

choose option with greatest maximum utility payoff

39
New cards

minimax regret

choose option with lowest maximum regret

  • to calculate: for each state, the best option in that state has a regret of 0; each other option’s regret = the difference between its payoff & the best payoff in that state

    • then compare each option’s maximum regret & choose the option with the lowest one
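A sketch of the regret calculation with a made-up 2-option, 2-state payoff table:

```python
# made-up payoff table: rows = options, columns = states of the world
payoffs = {"A": [10, 2], "B": [7, 6]}
n_states = 2

# regret of an option in a state = best payoff in that state minus its payoff
best = [max(payoffs[o][s] for o in payoffs) for s in range(n_states)]
regret = {o: [best[s] - payoffs[o][s] for s in range(n_states)] for o in payoffs}
# regret = {"A": [0, 4], "B": [3, 0]}

# minimax regret: pick the option whose worst-case (maximum) regret is smallest
choice = min(payoffs, key=lambda o: max(regret[o]))
print(choice)  # B (max regret 3 beats max regret 4)
```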

40
New cards

subjective expected utility model

  • assign subjective probabilities to different options & then use expected utility

    • different people have different probabilities (unlike expected utility where everyone has the same probability)

  • still satisfies the sure thing principle

41
New cards

Ellsberg Paradox

  • consistent with ambiguity aversion; we dislike not knowing the probabilities

  • violation of expected utility

  • should prefer bets with known probabilities (ex: betting on blue when 30/50 balls are blue > betting on purple when 20/50 balls are purple or yellow in unknown proportion)

  • calculate the expected utility: should have higher EU from known probabilities

42
New cards

decision time, temporal distance, & consumption time

  1. decision time = when you make the decision

  2. temporal distance = distance between deciding & consumption (the longer this is = further in the future the consequences of our decisions)

  3. consumption time = consequences of the decisions occur

43
New cards

outcome stream

specifies what the consequences of our decision will be at every pt in time

  • transform outcomes into utility levels (turning it into a utility stream)

  • x = (x0, x1, …, xn)

  • utility stream:

    • u = (u0, u1, …, un)

44
New cards

discounted utility (DU(x))

  • value of the utility stream

  • weighted utilities

  • further into the future= utility gets a lower weight

  • equation: DU(x) = u(x0)+d(1)u(x1)+…+d(n)u(xn)

  • impatience in DU = decreasing d(t); larger t (further in the future) = lower discount weight = that utility counts less

45
New cards

impatience

preference for positive utility sooner rather than later

  • impatience for unpleasant events (ex:dentist)

    • you prefer to postpone the event; it will hold a lower weight in the future bc of discounted utility

    • *note: duration neglect & diminishing marginal utility are unrelated to delaying negative utility!!!

46
New cards

reasons for impatience

  1. market interest

  2. risk & uncertainty

  3. pure time preference

  4. health behavior

  5. occupational choice

  6. behavior of children & adolescents

47
New cards

constant impatience

  • adding a common delay to all options will not change preferences

  • adding a delay of sigma to both = unchanged preferences

  • (s:x)>(t:y), then (s+sigma:x)>(t+sigma:y) is still preferred

  • common time delay doesn’t affect preferences

  • time consistency

  • keep consumption time fixed, change decision time = preferences remain the same

48
New cards

decreasing impatience

  • time inconsistent; make plans & don’t stick to them

  • you’re more willing to wait to choose the better option (ex: get more money) if its far into the future.

    • if its recent/soon you will be more impatient & choose whatever is closer in time

49
New cards

Exponential Discounting

  • D(t) = delta^t

  • delta = discount factor

    • (0<delta<1)

    • delta = 1/(1+r)

      • r=discount rate

      • therefore, larger discount factor = smaller discount rate

  • constant impatience & time-consistent behavior
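Sketch of exponential discounting in code (delta = 0.9 & the utility stream are illustrative numbers):

```python
def discounted_utility(utils, delta):
    # DU(x) = u0 + delta*u1 + delta^2*u2 + ...
    return sum((delta ** t) * u for t, u in enumerate(utils))

delta = 0.9  # illustrative discount factor, delta = 1/(1+r)
print(discounted_utility([10, 10, 10], delta))  # ~27.1 (10 + 9 + 8.1)
```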

50
New cards

Quasi-hyperbolic discounting

  • constant impatience when all outcomes are received in the future

  • decreasing impatience if possible to receive an outcome immediately

  • delta=discount factor, beta = present-bias parameter

  • decision changes due to time = time inconsistent
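A sketch of the beta-delta model showing the preference reversal (beta, delta & the payoffs are illustrative; linear utility for simplicity):

```python
beta, delta = 0.7, 0.95  # illustrative present-bias & discount parameters

def weight(t):
    # quasi-hyperbolic: D(0) = 1, D(t) = beta * delta^t for t >= 1
    return 1.0 if t == 0 else beta * delta ** t

def value(amount, t):
    return weight(t) * amount  # linear utility, for simplicity

# today vs tomorrow: take the smaller amount now (present bias)
print(value(100, 0) > value(110, 1))    # True: 100 > ~73.2
# same 1-day gap pushed 30 days out: now willing to wait (reversal)
print(value(110, 31) > value(100, 30))  # True
```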

51
New cards

rational discounting

  • perfectly rational economic agent should behave this way

  • time consistency

  • exponential discounting can be considered rational

52
New cards

self-commitment

commitment to do a choice in the future

53
New cards

2 challenges to assumption that discounted utility function is independent from outcomes & depends only on time

  1. Magnitude effect=large outcomes are discounted @ slower rate than small ones

  2. Sign effect= losses are discounted @ a lower rate than gains (losses hurt more than gains, value function, reflection effect, convexity of losses, etc)

54
New cards

impatience for gains, but not for losses

  • contradiction/violation of discounted utility; prefer to get unpleasant things “over with”

  • dentist today is preferred to dentist in 1 week, despite being unpleasant

  • going to dentist today reduces unpleasant anticipation

55
New cards

preference for variation

ppl choose variety (ex: 2 different restaurants rather than the same one 2 days in a row)

56
New cards

preference for improving profiles

choice between:

a. 50 (today), 100 (1 month), 150 (2 months)

b. 150 (today), 100 (1 month), 50 (2 months)

  • ppl choose A

    • possibly due to loss aversion, prefer to gain over time rather than decrease

    • loss aversion in B

57
New cards

preference for spread

ppl prefer to spread out/distribute things they enjoy over time (rather than all at once or asap)

58
New cards

game theory 2 types of games

  1. simultaneous-move games

    1. all players decide simultaneously; cannot first observe what others have done

      1. Nash eq

  2. sequential-move games

    1. observe what others have done

      1. subgame perfect eq

59
New cards

why people deviate from nasheq/subgame perf. eq

  1. limited strategic reasoning

    1. either/both you yourself are limited or believe that others are

    2. guessing game

  2. utility depends on own payoff & payoff of others

    1. “social preferences”; WB depends on more than your own utility

    2. dictator game, ultimatum game, trust game

60
New cards

guessing game

state a # between [ , ], you win if you’re closest to 2/3 of the mean of all #s chosen.

  • playing 0 is a Nash EQ & there is no other Nash Eq

    • your best response is always to minimize the distance, aka lower the x

  • other method for obtaining Nash Eq: iterated elimination of dominated strategies

    • formula: #*P (p=%/prob given in the question)

    • elimination of all 1st order dominated strategies = eliminate all #s greater than & including #*P

  • Why don’t ppl play the Nash eq?

    • 1. limited strategic reasoning

    • 2. believe that others have limited strategic reasoning
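The iterated elimination above can be sketched numerically (the interval [0, 100] & p = 2/3 are illustrative choices):

```python
# guessing game on [0, 100] with p = 2/3:
# each round of elimination removes every number above p * (current max)
p, upper = 2 / 3, 100.0
for _ in range(50):
    upper *= p  # a best response is never above p times the highest surviving guess
print(upper < 1e-6)  # True: the interval shrinks toward 0, the unique Nash eq
```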

61
New cards

ultimatum game

Steps:

  1. proposer gets an amount S

  2. proposer offers amount x to responder

  3. responder can accept or reject the offer

    1. accept= responder gets x & proposer gets s-x

    2. reject= both get 0

EQ:

  • subgame perfect eq: if player’s utility depends only on their own payoff

    • 2 subgame perfect eq

      • 1. proposer proposes 1 cent & responder accepts bc they accept all positive offers & reject offers of 0

      • 2. proposer proposes 0 & responder accepts bc they accept anything

62
New cards

ultimatum game in practice

  • responders’ utilities cannot only depend on their own payoff (also depend on payoffs/intentions of other players too)

  • 2 options for proposers utilities

    • they derive utility only from their own payoff & expect responders to reject small + offers

    • they derive utility not only from own payoff (also depends on others)

63
New cards

Dictator Game

Steps:

  1. proposer gets amount s

  2. proposer offers x to responder but responder cannot do anything (has no role, cannot accept or reject)

    1. responder gets x & proposer gets s-x

outcome prediction:

  • if proposer’s utility depends strictly on own payoff, then he will propose 0 (no risk of rejection)

  • if the proposer gives anything more than 0, then their utility must depend on more than their own payoff

64
New cards

Trust Game

Steps:

  1. proposer gets amount s

  2. proposer sends x to responder

  3. experimenter increases x to (1+r)x

  4. responder returns y to proposer

  • result: proposer & responder both send 0

    • tragedy bc it’s not pareto optimal: they could both get more than the eq payoff if they could commit

65
New cards

public good

steps:

  1. player 1 starts with an endowment

  2. player 1 contributes to the public good some amount X

  3. *if 1 player uses the public good, this doesn’t prevent other players from using it (all players benefit equally)

payoff eq for player i: pi_i = e - x_i + m * (sum of everyone’s contributions x_j)

  • Nash eq= no one contributes (assume someone else will anyway)

    • again results in a tragedy

    • IRL, ppl do contribute; driven by social preferences, ppl don’t want to be the free rider
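Sketch of the payoff equation with made-up numbers (e = 20, m = 0.5, 4 players):

```python
# payoff for player i: pi_i = e - x_i + m * (sum of everyone's contributions)
def payoff(i, contributions, e=20, m=0.5):
    return e - contributions[i] + m * sum(contributions)

# if all 4 players contribute their full endowment, everyone ends up better off:
print(payoff(0, [20, 20, 20, 20]))  # 20 - 20 + 0.5*80 = 40.0 > 20
# ...but free riding pays even more, so contributing 0 is the Nash eq:
print(payoff(0, [0, 20, 20, 20]))   # 20 - 0 + 0.5*60 = 50.0
```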

66
New cards

Social Preferences

U(x,y) depends on X & Y

*standard preferences = U (x,y) only depends on x

67
New cards

Altruism

u(x,y) increases if y increases

68
New cards

envy

u(x,y) decreases if y increases

69
New cards

rawlsian

u(x,y) increases if the payoff of the worst off increases

70
New cards

inequality aversion

u(x,y) increases if inequality |y-x| decreases

71
New cards

reciprocity

reward players with good intentions & punish those with bad intentions

72
New cards

outcome fairness

derive utility from final allocation of payoffs, not only from our own payoffs

73
New cards

process fairness

derive utility from how we get to the final allocation of payoffs

74
New cards

opportunity cost

the missed value of the best alternative not chosen

75
New cards

sunk costs/sunk cost fallacy

a) costs that are beyond recovery @ time of a decision & should therefore have no effect on the decision

b) the idea that a person/company is more likely to continue w/ a project if they’ve already invested a lot of time/effort into it (even tho this may not be the best decision) (violation of standard economic theory)

  • why do ppl believe in the fallacy?

    • ppl feel a need to justify decisions made in the past

    • ppl tend to be risk seeking when it comes to losses

76
New cards

decoy effect/expansion condition

  • the introduction of an inferior product/irrelevant alternative should not change your mind

    • the decoy is strictly worse than the target in all dimensions but better than the competition in 1 (no one should buy the decoy)

  • expansion c: if you choose x from {x,y} & if you don’t prefer z over x or y, then you must also choose x from {x,y,z}

    • but some ppl fall for the fallacy & change their mind

77
New cards

compromise effect

ppl’s tendency to choose an alternative that represents a compromise/middle option in the menu

ex: 1.5 litre coca cola bottle

78
New cards

endowment effect

an individual values something they already own more than they would if they didn’t own it yet (mugs ex)

*economic model predicts that losses & gains should be valued the same, but ppl weigh losses more heavily (losses loom larger than gains)

  • the gap between WTA & WTP is the endowment effect (they should be the same)

79
New cards

value function

  • used to describe the endowment effect, shows the larger magnitude (steeper line) of losses

  • important pt = reference pt

    • allows you to model if you’re disappointed or surprised by something (ex: firms make use of this by giving you a 30 day free trial)

  • side of losses is steeper than side of gains

80
New cards

Heuristic

shortcuts for the brain, a rule of thumb, can lead to predictable mistakes

  • when thinking of a problem, the brain uses shortcuts instead of computing probabilities & utilities

81
New cards

adjustment

  • subjects misestimate the product 1×2×3×4×5×6 vs 6×5×4×3×2×1 (they think the first is lower than the second; anchoring on the first numbers, they don’t adjust upwards enough)

  • ppl adjust wrongly, in a predictable manner

82
New cards

Anchoring

  • an anchor is an initial value or estimate

  • it can influence the person making the estimation

    • ex to test this:

      • give ppl an irrelevant #. ask them if their estimate is > or < the irrelevant number. ask them what their true estimate is

83
New cards

diminishing sensitivity to gains

  • with a concave utility function for gains, you prefer to segregate gains (aka experience them separately = gives u more utility)

84
New cards

diminishing sensitivity to losses

  • with a convex utility function for losses, you prefer to integrate losses (combine them to decrease the bad feeling/effect)
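Both segregation & integration can be checked with a toy concave/convex function (√ is an illustrative choice, not the actual empirical value function):

```python
import math

u = math.sqrt  # concave value for gains (illustrative)
# segregate gains: two separate gains feel better than one combined gain
print(u(50) + u(50) > u(100))  # True: 7.07 + 7.07 > 10

def v(loss):
    # disvalue of a loss of size `loss`; -sqrt is convex over losses (illustrative)
    return -math.sqrt(loss)

# integrate losses: one combined loss feels better than two separate ones
print(v(100) > v(50) + v(50))  # True: -10 > -14.14
```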

85
New cards

mental accounting

ppl categorize/put money in different categories/accounts in their mind

  • open mental account when a payment is incurred

  • close the account when the benefits arrive

  • *timing plays a role in opening & closing of the different accounts (should not happen)

  • *budgeting is also important bc money is reserved for different budgets & not used intermixably

86
New cards

representativeness

estimating the probability that some outcome was the result of a process by reference to the degree to which the outcome is representative of that process (how similar the outcome is to ppl’s mental representation of that event)

87
New cards

Law of small numbers

ppl exaggerate how much small samples represent the population (could be outliers, etc)

88
New cards

gamblers fallacy

  • thinking that statistical outcomes are corrected in the short run

ex: thinking that throwing 6-6-6-6-6-6-6 is more unlikely than 5-3-4-5-2-5-1

89
New cards

regression to the mean

  • failing to see that statistical processes will return to their average in the long run

  • could be good/bad luck

90
New cards

base rate neglect

  • failing to take the base rate of an event into account (judging by similarity instead of overall frequency)

  • Ex: thinking someone works a specific job bc of personality characteristics, but irl need to consider the % of ppl with that type of job

91
New cards

Probability calculation: independent events

p(a) * p(b)

92
New cards

Probability calculation: bayes Rule

P(B|A) = (p(A|B)*p(B)) / (p(A|B)*p(B) + p(A|-B)*p(-B))
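A sketch of the rule, plus a made-up base-rate example (1% base rate, 90%-accurate signal):

```python
# Bayes' rule: P(B|A) = p(A|B)p(B) / (p(A|B)p(B) + p(A|not B)p(not B))
def bayes(p_a_given_b, p_b, p_a_given_not_b):
    num = p_a_given_b * p_b
    return num / (num + p_a_given_not_b * (1 - p_b))

# base rate neglect: with a 1% base rate, even a 90%-accurate signal
# leaves the posterior low -- far below the 90% people tend to guess
print(round(bayes(0.9, 0.01, 0.1), 3))  # 0.083
```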

93
New cards

Probability calculation: dependent events

p(a|b) * p(b)

94
New cards

availability heuristic

assessing the probability that you think some event will occur based on how easily it comes to your mind (ez=higher prob given)

95
New cards

retrievability of instances

ppl are asked to come up with instances of an event & they place more weight on things that are more salient/they remember more (ex dying of shark over dog, even though dogs are deadlier statistically) MEMORY*

96
New cards

effectiveness of a search set

ppl search for sets in their mind (try to remember/based off memory)

  • ex: how many words that start with the letter k vs have the letter k as a third letter (ppl think start with but its rlly as 3rd letter) MEMORY

97
New cards

imaginability

ppl predict something based on how they can generate a rule in their mind (have to imagine something)

  • ex: how many groups of 3 vs groups of 9 (ppl think there’s more of 3 than 9 but its the same)

98
New cards

confirmation bias

  • ppl look for evidence confirming the bias in their mind already, instead of looking for or considering evidence that disproves it

  • tendency to interpret evidence as supporting prior beliefs to a greater extent than warranted

99
New cards

conjunction fallacy

  • overestimating the probability of a conjunction (string of events all of which must happen)

  • ppl tend to use the probability of one event as an anchor & adjust downward insufficiently

(ex: what’s more probable: Linda works at a bank, or Linda works at a bank & is a feminist; the bank alone is statistically more probable, regardless of Linda’s personality or characteristics)

  • P(A∩B)

100
New cards

disjunction fallacy

  • underestimating the probability of a disjunction (string of events, 1 of which has to happen)

  • P(A or B)

  • ppl tend to use the prob of 1 event as an anchor & adjust upward insufficiently

  • ex: prob of ppl with the same birthday is higher than you expect