Interpreting P-value (Significance tests)
Assuming (Ho in context) is true, there is a (p-value) probability of getting a sample (proportion/mean) of (p-hat/x-bar) or (more/less) extreme purely by chance.
Conclusion (Significance tests)
Because the p-value is (less than/greater than) alpha, we (reject/fail to reject) the Ho. We (have/do not have) convincing evidence of (Ha in context).
p-value is less than significance level…
significant → reject null
p-value is greater than significance level…
not significant → fail to reject the null
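The decision rule above as a quick Python sketch (the p-value and alpha here are made-up numbers for illustration):

```python
# Hypothetical values: a p-value from some test, compared against alpha = 0.05.
p_value = 0.032
alpha = 0.05

# p < alpha -> significant -> reject H0; otherwise fail to reject H0.
decision = "reject H0" if p_value < alpha else "fail to reject H0"
print(decision)  # 0.032 < 0.05, so: reject H0
```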
Interpreting Power
If the true (mean/difference in means) of (context) is (alternative value), there is a (power) probability of correctly rejecting the null (Ho).
Type I Error
Truth: H0 true; Conclusion: Reject H0. P(Type I) = alpha (significance level)
Type II Error
Truth: Ha true, H0 false, Conclusion: Fail to reject H0
Consequences…
Health consequences always considered worse
Power
Truth: Ha true; Conclusion: Reject H0. Power = P(Reject H0 | Ha true) = 1 - P(Type II)
Increase Power by:
increasing n, increasing alpha, increasing distance btwn H0 & Ha
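The three power boosters above can be checked numerically. A sketch for a one-sided z test (known sigma; all numbers are made up, scipy assumed available for the normal distribution):

```python
from math import sqrt

from scipy.stats import norm


def power_one_sided_z(mu0, mua, sigma, n, alpha):
    """Power of a one-sided z test (Ha: mu > mu0): P(reject H0 | true mean = mua)."""
    z_crit = norm.ppf(1 - alpha)             # critical z for significance level alpha
    shift = (mua - mu0) / (sigma / sqrt(n))  # standardized distance between H0 and Ha
    return 1 - norm.cdf(z_crit - shift)


# Baseline (hypothetical numbers); each change below should raise power.
base = power_one_sided_z(mu0=100, mua=105, sigma=15, n=30, alpha=0.05)
print(round(base, 3))
print(power_one_sided_z(100, 105, 15, 60, 0.05) > base)   # larger n
print(power_one_sided_z(100, 105, 15, 30, 0.10) > base)   # larger alpha
print(power_one_sided_z(100, 110, 15, 30, 0.05) > base)   # bigger H0-Ha distance
```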
Interpreting CI
We are (conf level)% confident the interval from (A) to (B) captures the true (mean/proportion) of (context).
Interpreting CL
If we make many (conf level)% confidence intervals, we expect about (conf level)% to capture the true (mean/proportion) of (context).
A CI gives…
plausible values
normal/z critical value (z*) smaller than t*…
t* gives a larger MOE for the interval
a smaller critical value gives a smaller margin of error and a narrower confidence interval
Conf. Int. for Mu 1) state:
parameter (true mean…), confidence level
Conf. Int. for Mu 2) plan:
1 sample t-interval, 3 conditions
a) random sample/assignment
b) 10% condition
c) normality (meets one of three options)
population is normal → sample distribution is normal
CLT n >= 30
graph sample data for skew/outliers
Conf. Int. for Mu 3) do:
x-bar +- t* sx/root n (substitute #s in), interval
t* → table B w/ tail % & df=n-1
If df not in table, round down
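The do step above, sketched in Python with made-up sample data (scipy's t distribution stands in for Table B, so no rounding of df is needed):

```python
from math import sqrt
from statistics import mean, stdev

from scipy import stats

# Hypothetical sample data.
x = [4.8, 5.1, 5.6, 4.9, 5.3, 5.0, 5.4, 4.7]
n = len(x)
xbar = mean(x)
sx = stdev(x)                                        # sample SD (n - 1 denominator)

conf = 0.95
t_star = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)   # t* with tail % and df = n - 1
moe = t_star * sx / sqrt(n)                          # MOE: t* * sx / root n
lo, hi = xbar - moe, xbar + moe
print((round(lo, 3), round(hi, 3)))
```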
Conf. Int. for Mu 4) conclude:
we are _% confident…
choosing sample size → t* needs df = n - 1, which is unknown when n is unknown, so use z* instead
Conf. Int. for Difference in Means 1) state:
true difference in means, confidence level
Conf. Int. for Difference in Means 2) plan:
2 sample t-int for Mu1-Mu2
3 conditions (x2)
Conf. Int. for Difference in Means 3) do:
(xbar1-xbar2)+-t*root(s1^2/n1+s2^2/n2)
Conf. Int. for Difference in Means 4) conclude:
if 2 diff. n’s, use smaller n for df/t* → gives a conservative estimate
real df (from the 2-sample formula the calculator uses) is bigger (show formula)
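A Python sketch of the 2-sample interval with both df choices: the conservative hand method (smaller n minus 1) and the Welch-Satterthwaite formula calculators use. Summary statistics are made up for illustration (scipy assumed available):

```python
from math import sqrt

from scipy import stats

# Hypothetical summary statistics for two groups.
xbar1, s1, n1 = 25.0, 4.0, 12
xbar2, s2, n2 = 22.0, 5.0, 15

se = sqrt(s1**2 / n1 + s2**2 / n2)   # standard error of xbar1 - xbar2

# Conservative df: smaller n minus 1 (wider, safer interval).
df_cons = min(n1, n2) - 1

# Welch-Satterthwaite df: bigger, so a slightly narrower interval.
df_welch = (s1**2 / n1 + s2**2 / n2) ** 2 / (
    (s1**2 / n1) ** 2 / (n1 - 1) + (s2**2 / n2) ** 2 / (n2 - 1)
)

for df in (df_cons, df_welch):
    t_star = stats.t.ppf(0.975, df)  # 95% CI -> 2.5% in each tail
    moe = t_star * se
    print(round(df, 2), (round(xbar1 - xbar2 - moe, 2), round(xbar1 - xbar2 + moe, 2)))
```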
(-,-): entire interval negative
x-bar2 is greater (evidence Mu2 > Mu1)
(+,+): entire interval positive
x-bar1 is greater (evidence Mu1 > Mu2)
(-,+): 0 is in the interval
no convincing difference
one sample t-int for Mdiff
x-bardiff +- t* (sdiff/root n)
mean diff → 1 mean = 1 sample
paired data
2 specific data points that must be paired together (usually because they’re both from 1 individual)
Test statistic (t-score)
t = (x-bar - Mu0)/(sx/root n)
Significance Test for Mu and CI
calculate test statistic (t-score) and use table B w/ df & tail probability OR tcdf(lower, upper, df)
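The same calculation as a Python sketch on made-up data, with scipy's `ttest_1samp` playing the role of the calculator cross-check:

```python
from math import sqrt
from statistics import mean, stdev

from scipy import stats

# Hypothetical sample; H0: mu = 5 vs Ha: mu != 5.
x = [5.3, 5.9, 4.8, 5.6, 6.1, 5.2, 5.7, 5.5]
mu0 = 5
n = len(x)

t = (mean(x) - mu0) / (stdev(x) / sqrt(n))   # t = (x-bar - Mu0)/(sx/root n)
p = 2 * stats.t.sf(abs(t), df=n - 1)         # two-sided p: 2 * tail area (tcdf)

# Built-in test should give identical results.
t2, p2 = stats.ttest_1samp(x, mu0)
print(round(t, 4), round(p, 4))
```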
OR
calculator → copy title up to x-bar
If H0 in int.
H0 plausible → fail to reject
If H0 not in int.
H0 not plausible → reject
For 2-sided sig tests ONLY
a C% confidence int. will make the same decision as a test with alpha = 1 - C (e.g. a 95% interval matches alpha = 0.05)
Significance Test for a Difference in Means
t-score = ((xbar1 - xbar2) - (Mu1 - Mu2 from H0)) / root(s1^2/n1 + s2^2/n2)
& graph & tcdf(labeled)
OR
2 sample t-test → copy title up to x
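A Python sketch of the 2-sample test on made-up data; `equal_var=False` in scipy's `ttest_ind` gives the unpooled (Welch) test, which matches the formula above:

```python
from math import sqrt
from statistics import mean, stdev

from scipy import stats

# Hypothetical samples; H0: Mu1 - Mu2 = 0 vs Ha: Mu1 - Mu2 != 0.
g1 = [12.1, 11.4, 13.0, 12.6, 11.8, 12.9]
g2 = [10.9, 11.2, 10.4, 11.7, 10.8, 11.1, 10.6]

s1, s2 = stdev(g1), stdev(g2)
n1, n2 = len(g1), len(g2)
t = (mean(g1) - mean(g2) - 0) / sqrt(s1**2 / n1 + s2**2 / n2)

# Unpooled 2-sample t test; t statistic should match the hand formula.
t2, p2 = stats.ttest_ind(g1, g2, equal_var=False)
print(round(t, 4), round(float(p2), 4))
```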
paired data →
1 sample t-test for Mudiff
t = (xbardiff - Mudiff) / (sdiff/root n)
1 sample
matched pairs creating 1 sample
subtract then average
1 mean (Mudiff)
2 samples
2 groups and 2 means
average then subtract
difference of means (Mu1-Mu2)
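The paired-vs-2-sample distinction can be verified numerically: a paired test is exactly a 1-sample t test on the differences. A sketch with made-up before/after scores for the same individuals (scipy assumed available):

```python
from scipy import stats

# Hypothetical paired data: before/after scores for the same 6 individuals.
before = [72, 65, 80, 77, 68, 74]
after = [75, 66, 84, 79, 72, 75]

diffs = [a - b for a, b in zip(after, before)]  # subtract then average

# Paired test == 1-sample t test on the differences (Mudiff = 0 under H0).
t_rel, p_rel = stats.ttest_rel(after, before)
t_one, p_one = stats.ttest_1samp(diffs, 0)
print(round(t_rel, 4), round(p_rel, 4))
```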