hypothesis testing, t-tests and errors, OLS regression with dummy variables, assumptions
hypothesis testing
state H0 and H1, compute the t statistic (how many SEs Ȳ is from the H0 value µ0), find the p-value, compare p to α; if p < α → reject H0
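The steps above can be sketched in Python. The sample data and µ0 are made up for illustration, and the p-value uses the standard normal CDF as a large-sample approximation to the t distribution:

```python
import math
from statistics import mean, stdev

def norm_cdf(x):
    # Standard normal CDF; for large n the t distribution is
    # well approximated by the normal, so we use it for the p-value.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def t_test(sample, mu0, alpha=0.05):
    """Two-sided test of H0: mu = mu0 (large-sample approximation)."""
    n = len(sample)
    y_bar = mean(sample)
    se = stdev(sample) / math.sqrt(n)      # SE(Ybar) = s / sqrt(n)
    t = (y_bar - mu0) / se                 # how many SEs Ybar is from mu0
    p = 2.0 * (1.0 - norm_cdf(abs(t)))     # two-sided p-value
    return t, p, p < alpha                 # reject H0 if p < alpha

# hypothetical sample; test H0: mu = 4.5
sample = [5.1, 4.8, 5.6, 5.0, 5.3, 4.9, 5.4, 5.2, 5.5, 5.0]
t, p, reject = t_test(sample, mu0=4.5)
```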
what does SE(Ȳ) mean
how much the sample mean varies from sample to sample; SE(Ȳ) = s/√n
which test to use
2-sided is used by default (H1: µ≠µ0, p = P(|T|>|t|)); 1-sided when theory predicts a direction (H1: µ>µ0, p = P(T>t))
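A quick numerical check of the two p-value rules, with a hypothetical t statistic and the normal approximation to the t distribution:

```python
import math

def norm_cdf(x):
    # standard normal CDF (large-sample approximation to the t distribution)
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

t = 1.9  # hypothetical t statistic

p_two = 2.0 * (1.0 - norm_cdf(abs(t)))  # H1: mu != mu0
p_one = 1.0 - norm_cdf(t)               # H1: mu > mu0
# for positive t, the one-sided p is half the two-sided p,
# so a result can be significant one-sided but not two-sided
```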
2 ways to be wrong
type I - false positive and type II - false negative
type I
false positive: rejecting H0 when it is true (probability α)
type II
false negative: failing to reject H0 when it is false (probability β; power = 1 − β)
lowering a
increases β: a stricter cutoff makes rejection harder, so more false negatives (trade-off)
Ordinary least squares
minimises the sum of the squared residuals
Step 1 - intercept only
Yi = β0 + εi → β̂0 = Ȳ (the sample mean minimises the SSR)
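A tiny check on made-up data that the sample mean beats any nearby intercept on the sum of squared residuals:

```python
from statistics import mean

def ssr(b0, y):
    # sum of squared residuals for the intercept-only model Yi = b0 + ei
    return sum((yi - b0) ** 2 for yi in y)

y = [2.0, 3.5, 4.0, 5.5, 6.0]   # hypothetical data
y_bar = mean(y)
# nudging the intercept away from the mean in either direction raises the SSR
```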
Step 2 - Dummy variable
Yi = β0 + β1Di + εi
when D=0
E(Y|D=0) = β0 = ȲB
when D=1
E(Y|D=1) = β0 + β1 = ȲA, so β1 = ȲA − ȲB (the difference in group means)
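This can be verified on hypothetical group data: running OLS with a 0/1 dummy recovers the group means exactly.

```python
from statistics import mean

# hypothetical outcomes for group B (D=0) and group A (D=1)
y_b = [10.0, 12.0, 11.0, 13.0]   # mean 11.5
y_a = [15.0, 16.0, 14.0, 17.0]   # mean 15.5

d = [0] * len(y_b) + [1] * len(y_a)
y = y_b + y_a

# OLS with a single regressor: b1 = Cov(D,Y)/Var(D), b0 = Ybar - b1*Dbar
d_bar, y_bar = mean(d), mean(y)
cov = sum((di - d_bar) * (yi - y_bar) for di, yi in zip(d, y))
var = sum((di - d_bar) ** 2 for di in d)
b1 = cov / var
b0 = y_bar - b1 * d_bar
# b0 recovers the D=0 group mean; b0 + b1 recovers the D=1 group mean
```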
step 3 - continuous X (what if X is not a dummy)
β̂1 = Cov(X,Y) / Var(X), β̂0 = Ȳ − β̂1·X̄
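The two formulas, implemented directly on made-up data generated from Y = 1 + 2X exactly, so OLS should recover intercept 1 and slope 2:

```python
from statistics import mean

def ols(x, y):
    # b1 = Cov(X,Y)/Var(X), b0 = Ybar - b1*Xbar
    x_bar, y_bar = mean(x), mean(y)
    b1 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
         / sum((xi - x_bar) ** 2 for xi in x)
    b0 = y_bar - b1 * x_bar
    return b0, b1

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1 + 2 * xi for xi in x]   # exact line, no noise
b0, b1 = ols(x, y)
```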
assumptions - if they hold, β̂1 is unbiased and consistent (and, with homoskedastic errors, efficient)
zero conditional mean, i.i.d, no large outliers
Assumption 1- zero conditional mean
E(ε|X) = 0: all factors in ε that affect Y must be, on average, unrelated to X
Assumption 2 - i.i.d. observations
independent and identically distributed
Assumption 3 - No large outliers
because squaring magnifies large errors, a few extreme observations can dominate the fit
main threat - OVB
when Z affects both X and Y; true model: Yi = β0 + β1Xi + β2Zi + εi
Bias(β̂1) = β2·Cov(X,Z)/Var(X)
same sign of β2 and Cov(X,Z) = upward bias; opposite signs = downward bias
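The bias formula can be checked by simulation (all numbers hypothetical: β1 = 1, β2 = 2, Cov(X,Z) = 1, Var(X) = 2, so the short regression that omits Z should converge to 1 + 2·(1/2) = 2):

```python
import random
from statistics import mean
random.seed(1)

n = 20000
beta1, beta2 = 1.0, 2.0

# Z affects both X and Y; Cov(X,Z) > 0 and beta2 > 0, so expect upward bias
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]        # Cov(X,Z)=1, Var(X)=2
y = [beta1 * xi + beta2 * zi + random.gauss(0, 1)
     for xi, zi in zip(x, z)]

def slope(x, y):
    x_bar, y_bar = mean(x), mean(y)
    return sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
           / sum((xi - x_bar) ** 2 for xi in x)

b1_short = slope(x, y)   # short regression omits Z
# predicted probability limit: beta1 + beta2*Cov(X,Z)/Var(X) = 2
```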
other violations
reverse causality, selection bias, measurement error
3 reasons why correlation arises
X causes Y, Y causes X, or Z causes both (confounding)
multivariate model - Yi = β0 + β1Xi + β2Zi + εi
adding controls removes omitted variable bias from the included Z; β̂1 = effect of X on Y holding Z constant, β̂2 = effect of Z on Y holding X constant
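"Holding Z constant" can be sketched via the Frisch-Waugh idea: partial Z out of both X and Y, then regress the residuals on each other. On the same kind of simulated confounded data as above (hypothetical β1 = 1, β2 = 2), controlling for Z recovers the true β1:

```python
import random
from statistics import mean
random.seed(2)

def slope_intercept(x, y):
    # simple OLS: b1 = Cov(X,Y)/Var(X), b0 = Ybar - b1*Xbar
    x_bar, y_bar = mean(x), mean(y)
    b1 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
         / sum((xi - x_bar) ** 2 for xi in x)
    return y_bar - b1 * x_bar, b1

n = 20000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]     # Z confounds X
y = [1.0 * xi + 2.0 * zi + random.gauss(0, 1) for xi, zi in zip(x, z)]

# Frisch-Waugh: partial Z out of X and Y, then regress residual on residual
a0, a1 = slope_intercept(z, x)
x_res = [xi - (a0 + a1 * zi) for xi, zi in zip(x, z)]
c0, c1 = slope_intercept(z, y)
y_res = [yi - (c0 + c1 * zi) for yi, zi in zip(y, z)]
_, b1_controlled = slope_intercept(x_res, y_res)
# with Z controlled for, the estimate should be close to the true beta1 = 1
```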
limits of multivariate regression
removes bias only from observed confounders that are included; does not fix reverse causality, selection bias, or bias from unobserved confounders