ECON 5420 - Final

Last updated 2:40 PM on 4/25/26

50 Terms

1

discrete distribution

outcomes take one of a countable number of values (events)

2

continuous distribution

outcomes come from an uncountably infinite set of values

3

data-generating process

the stochastic mechanism that generates the observed sample

4

likelihood

L(θ|D) = f(D|θ)

f is a joint density for continuous data

f is a joint mass function for discrete data

5

DGP interpretation

parameters are fixed and data are random

6

likelihood analysis interpretation

observed data are fixed and we compare candidate parameter values

7

maximum likelihood estimator

choose parameters that maximize the plausibility of the observed sample under the assumed probability model

8

joint likelihood for continuous variables

L(θ|D) = ∏_{i=1}^{N} f(Di|θ)

9

joint likelihood for discrete variables

L(θ|D) = ∏_{i=1}^{N} P[Di = di|θ]

10

steps in maximum likelihood estimation

  1. gather data (independent draws)

  2. construct the likelihood using the appropriate formulation

  3. maximize the likelihood (typically via log-likelihood)
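Below is a minimal sketch of the three steps in Python for i.i.d. Bernoulli data; the sample size and true probability are illustrative assumptions, not course material.

```python
# A minimal MLE sketch for i.i.d. Bernoulli draws (illustrative values).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
data = rng.binomial(1, 0.3, size=500)        # step 1: gather independent draws

def neg_log_likelihood(p):
    # step 2: joint likelihood of i.i.d. Bernoulli draws, in logs
    return -np.sum(data * np.log(p) + (1 - data) * np.log(1 - p))

# step 3: maximize the log-likelihood, i.e. minimize its negative
result = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(result.x, data.mean())                 # the MLE equals the sample mean here
```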

11

monotonic transformations preserve

the location of extreme points (but not their values); this is why maximizing ln L yields the same θhat as maximizing L

12

key properties of log-likelihood

products become sums

exponents become coefficients

for a local extreme value: df/dx = 0
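A quick numeric illustration of why these properties matter in practice: the product of many per-observation densities underflows in floating point, while the sum of their logs stays well behaved. The density values below are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
densities = rng.uniform(0.01, 0.2, size=2000)  # stand-in per-observation likelihoods

print(np.prod(densities))          # underflows to 0.0 in double precision
print(np.sum(np.log(densities)))   # the log-likelihood remains a finite number
```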

13

under correct specification and standard regularity conditions, many familiar OLS properties are

special cases of general MLE results

  • consistency

  • asymptotic normality

  • asymptotic efficiency within regular parametric models

14

properties of maximum likelihood estimation (MLE)

  • consistency

    • the MLE converges in probability to the true parameter value as the sample size grows

  • asymptotic efficiency

    • the MLE attains the Cramér–Rao lower bound asymptotically: no regular estimator has a smaller large-sample variance

  • pseudo-true parameter under misspecification

    • even with model misspecification, MLE converges to a meaningful parameter value

  • asymptotic normality

    • allows for hypothesis testing and confidence intervals using normal approximations

  • transformation invariance

    • natural plug-in estimators for transformed parameters

15

functional invariance interpretation

once we estimate the primitive parameters, many economically interesting objects are just plug-in transformations of θhat

16

OLS applies directly only when

the model is linear in parameters and has an additive error term

17

MLE lets us estimate models that are

nonlinear in parameters, nonlinear in probabilities, or based on non-Gaussian outcomes

18

hypothesis testing with MLE (single parameters)

  • use the large-sample normal approximation for θhat

    • this justifies estimated standard errors, test statistics, and confidence intervals

    • results are asymptotic approximations, not exact finite-sample properties
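A sketch of such a test for one parameter, with the standard error taken from the inverse Hessian of the negative log-likelihood at the optimum; the Poisson model and data-generating value are illustrative assumptions.

```python
# Large-sample z-test for a single MLE parameter (illustrative Poisson example).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
y = rng.poisson(lam=2.5, size=400)

def nll(theta):
    lam = np.exp(theta[0])                 # lam = exp(theta) keeps the rate positive
    return -np.sum(y * np.log(lam) - lam)  # Poisson log-likelihood up to a constant

fit = minimize(nll, x0=np.array([0.0]), method="BFGS")
theta_hat = fit.x[0]
se = np.sqrt(fit.hess_inv[0, 0])           # SE from the BFGS inverse-Hessian approximation

z = theta_hat / se                         # test H0: theta = 0 (i.e. lam = 1)
p_value = 2 * norm.sf(abs(z))
ci = (theta_hat - 1.96 * se, theta_hat + 1.96 * se)
print(theta_hat, se, z, p_value, ci)
```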

19

functions of parameters

  • use plug-in estimates together with delta method or bootstrap

    • transformation invariance tells us the natural plug-in estimator
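A minimal delta-method sketch for a transformed parameter g(θ) = exp(θ); the MLE output values below are assumed placeholders.

```python
import numpy as np

theta_hat, se_theta = 0.9, 0.08       # assumed MLE estimate and standard error

g_hat = np.exp(theta_hat)             # plug-in estimator from transformation invariance
g_prime = np.exp(theta_hat)           # derivative of g at theta_hat
se_g = abs(g_prime) * se_theta        # first-order (delta method) standard error
print(g_hat, se_g)
```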

20

multiple hypothesis tests

several standard approaches exist: the likelihood ratio, Wald, and Lagrange multiplier (score) tests

21

pseudo R²

  • compares simple benchmark with richer model using likelihoods

  • formula: 1 - [ln(Lu)/ln(L0)], where ln(Lu) = log-likelihood of the unrestricted model being estimated and ln(L0) = log-likelihood of the simple model with only an intercept
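A sketch of the computation from two fitted log-likelihood values (made-up placeholders):

```python
loglik_unrestricted = -412.7  # ln(Lu): richer model (assumed value)
loglik_null = -540.2          # ln(L0): intercept-only model (assumed value)

pseudo_r2 = 1 - loglik_unrestricted / loglik_null
print(pseudo_r2)  # 0 when the models fit equally well; rises as ln(Lu) improves
```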

22

if the true model is simple

L0 = Lu, so pseudo R² = 0

23

if the richer model fits better

Lu > L0, pseudo R² rises above 0

24

two types of binary outcomes

unconditional probability and conditional probability

25

linear probability model (LPM)

  • model: yi = xi’β + ui where yi ∈ {0,1}

  • fitted conditional mean: xi’βhat

  • core limitation: linearity can generate predicted values outside [0,1]
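A sketch of the core limitation on simulated data: OLS fitted values for a binary outcome can land outside [0, 1]. The true coefficients are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=1000)
p = 1 / (1 + np.exp(-(0.5 + 2.0 * x)))           # true probabilities (logistic)
y = rng.binomial(1, p)

X = np.column_stack([np.ones_like(x), x])        # add an intercept
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS
fitted = X @ beta_hat
print((fitted < 0).mean(), (fitted > 1).mean())  # shares of impossible "probabilities"
```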

26

error structure problems in an LPM

  • errors are not normal: given xi, ui can take only two values (1 - xi’β or -xi’β)

  • the error variance depends on the predicted probability: Var(ui|xi) = pi(1 - pi), so the errors are heteroskedastic

  • they cannot be treated as homoskedastic, normally-distributed disturbances

27

a generalized linear model connects a linear index to the conditional mean through

a link function

28

in the LPM, the link is

the identity: pi = xi’β

  • generally we pick a monotone function g such that g(pi) = xi’β

29

logit

g(p) = ln (p/(1-p))

30

probit

g(p) = Φ⁻¹(p)
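The two links written out as code (a small sketch; the probe values are arbitrary):

```python
import numpy as np
from scipy.stats import norm

def logit_link(p):
    return np.log(p / (1 - p))   # g(p) = ln(p/(1-p))

def probit_link(p):
    return norm.ppf(p)           # g(p) = Phi^{-1}(p)

p = np.array([0.1, 0.5, 0.9])
print(logit_link(p))             # symmetric around 0 at p = 0.5
print(probit_link(p))            # same sign pattern, smaller scale
```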

31

latent index

define an unobserved latent variable: yi* = xi’β + ui

  • not directly observed: the latent index exists only in the model

  • interpretation: represents net utility, net benefit, or underlying propensity to choose

  • threshold rule: the binary outcome records whether this latent variable crosses a threshold

32

threshold mapping rule

yi = 1 if yi* ≥ 0 and yi = 0 otherwise; equivalently, yi = 1[yi* ≥ 0]

33

properties of the logit model

  • very easy to compute numerically (closed-form expression)

  • easy to add other terms (not just binary logit)

  • because only the ratio β/σu is identified in the latent-index model, normalizing the error scale is essential

34

logit model

assume the error ui follows a standard logistic distribution. the logistic CDF is:

F(z) = 1/(1 + exp(-z))

35

probit model

assume the error ui follows a standard normal distribution. the standard normal CDF is:

F(z) = ∫_{-∞}^{z} (1/√(2π)) exp(-x²/2) dx = Φ(z)
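A sketch tying the latent index, the threshold rule, and the logistic error assumption together: simulate yi* = xi’β + ui with standard logistic errors, observe yi = 1[yi* ≥ 0], and recover β by maximum likelihood. The true coefficients are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 1.5])
u = rng.logistic(size=n)                     # standard logistic errors => logit
y = (X @ beta_true + u >= 0).astype(float)   # threshold mapping rule

def neg_loglik(beta):
    p = 1 / (1 + np.exp(-(X @ beta)))        # logistic CDF F(x'beta)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print(fit.x)  # close to beta_true in large samples
```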

36

outside option

the alternative of purchasing nothing has normalized mean utility of 0

37

multiple inside goods

allows multiple products indexed by j in addition to the outside option

38

normal shocks

this gives multinomial probit, which is flexible but usually requires simulation

39

type 1 extreme value shocks

this gives a closed-form multinomial logit choice probability
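A sketch of the closed-form choice probabilities with the outside option's mean utility normalized to 0; the inside-good utilities are arbitrary assumptions.

```python
import numpy as np

v = np.array([0.0, 1.2, 0.4, -0.3])   # outside option first, then goods j = 1, 2, 3

def mnl_probabilities(v):
    expv = np.exp(v - v.max())        # subtract the max for numerical stability
    return expv / expv.sum()

print(mnl_probabilities(v))           # choice probabilities sum to 1
```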

40

poisson regression

we usually specify the conditional mean as

ln(E[yi | xi, θ]) = ln(λi) = xi’θ
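A sketch of fitting this specification by MLE on simulated counts; the true θ is an assumption.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 1500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
theta_true = np.array([0.2, 0.7])
y = rng.poisson(np.exp(X @ theta_true))     # E[y|x] = exp(x'theta)

def neg_loglik(theta):
    lam = np.exp(X @ theta)
    return -np.sum(y * np.log(lam) - lam)   # Poisson log-likelihood up to a constant

fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print(fit.x)  # recovers theta_true approximately
```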

41

in the linear fixed-effects model, we remove the individual effect by

demeaning or differencing

  • the within estimator is consistent for β under standard strict-exogeneity conditions

  • but the estimated λi values themselves are based on only T observations per person

  • with small T, those individual effects are noisy
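A sketch of the within estimator on a simulated panel where the individual effect is correlated with the regressor; all dimensions and coefficients are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
N, T, beta = 200, 5, 1.0
ids = np.repeat(np.arange(N), T)
alpha = rng.normal(size=N)[ids]           # individual effects, expanded to N*T rows
x = alpha + rng.normal(size=N * T)        # regressor correlated with the effect
y = alpha + beta * x + rng.normal(size=N * T)

def demean_within(v):
    means = np.bincount(ids, weights=v) / np.bincount(ids)
    return v - means[ids]                 # subtract each individual's time mean

x_dm, y_dm = demean_within(x), demean_within(y)
print((x_dm @ y_dm) / (x_dm @ x_dm))      # within slope: consistent for beta
```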

42

for random effects we need

σ²v: the variance of the idiosyncratic individual-by-time-period shocks

σ²λ: the variance of the individual fixed effects in the population

T: number of time periods
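One standard way these three ingredients combine (a sketch; this quasi-demeaning formula is standard random-effects GLS, not quoted from the cards, and the values are illustrative):

```python
import numpy as np

sigma2_v, sigma2_lambda, T = 1.0, 0.5, 5   # assumed variance components and panel length

# quasi-demeaning weight for random-effects GLS
theta = 1 - np.sqrt(sigma2_v / (sigma2_v + T * sigma2_lambda))
print(theta)  # theta -> 0 gives pooled OLS; theta -> 1 gives the within estimator
```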

43

for continuous random variables, a probability density function describes

how probability is distributed over the support

  • derivative of a CDF at a given point

44

for a discrete random variable, a probability mass function is

the probability that a specific number is drawn

45

the support of a distribution

the range of possible values

46

censored distribution

the unit is observed, but the realized value is only known up to a threshold

47

truncated distribution

the unit is absent from the observed sample whenever its latent value falls outside the admissible range

48

tobit

censoring plus a continuous latent outcome

49

adverse selection

one side of the market has private information

50

Heckman two-step correction (heckit)

step 1: selection equation

  • di* = zi’λ + vi with observed participation indicator: di = 1[di* ≥ 0]

step 2: outcome equation

  • yi = xi’β + ui, estimated on the selected sample (di = 1) with the inverse Mills ratio from step 1 added as a regressor to correct for selection
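A sketch of the two steps on simulated data; the error correlation, the coefficients, and the choice of identical regressors in both equations are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
n = 5000
z = np.column_stack([np.ones(n), rng.normal(size=n)])
x = z.copy()                                   # outcome regressors (assumed identical)
v = rng.normal(size=n)
u = 0.6 * v + rng.normal(scale=0.8, size=n)    # correlated errors => selection bias
d = (z @ np.array([0.3, 1.0]) + v >= 0)        # participation rule di = 1[di* >= 0]
y = x @ np.array([1.0, 0.5]) + u               # outcome, observed only when d is True

# step 1: probit selection equation, fit by MLE
def probit_nll(g):
    p = np.clip(norm.cdf(z @ g), 1e-12, 1 - 1e-12)
    return -np.sum(d * np.log(p) + (1 - d) * np.log(1 - p))

g_hat = minimize(probit_nll, x0=np.zeros(2), method="BFGS").x

# step 2: OLS on the selected sample with the inverse Mills ratio as a regressor
mills = norm.pdf(z @ g_hat) / norm.cdf(z @ g_hat)
Xc = np.column_stack([x[d], mills[d]])
coef = np.linalg.lstsq(Xc, y[d], rcond=None)[0]
print(coef[:-1])  # outcome coefficients corrected for selection
```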