H0 hypothesis
The null hypothesis
HA hypothesis
The alternative hypothesis
Type 1 error
Rejecting the null hypothesis when H0 is true (false positive)
Type 2 error
Not rejecting the null hypothesis when HA is true (false negative)
We almost never know if H0 is true
but we consider all possibilities
Which error are hypothesis tests designed to avoid?
The Type 1 error
Significance level
How often we are comfortable accepting false positives (0.05 = 5%, 0.01 = 1%)
Null Hypothesis Rejection Region (R)
Contains values of our test statistic that provide evidence for HA over H0
HA: p > p0
H0: p ≤ p0
Right-tailed test (rejection region in the right tail)
HA: p < p0
H0: p ≥ p0
Left-tailed test
HA: p ≠ p0
H0: p = p0
Two-tailed (two-sided) test
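A minimal sketch (not part of the cards) of how the three alternatives translate into rejection regions, assuming a z test for a proportion and alpha = 0.05; the variable names are illustrative:

```python
# Critical values that bound the rejection region R for a z test
# on a proportion at alpha = 0.05 (illustrative values).
from scipy.stats import norm

alpha = 0.05

z_right = norm.ppf(1 - alpha)    # HA: p > p0  -> reject H0 when z > ~1.645
z_left = norm.ppf(alpha)         # HA: p < p0  -> reject H0 when z < ~-1.645
z_two = norm.ppf(1 - alpha / 2)  # HA: p != p0 -> reject H0 when |z| > ~1.96

print(z_right, z_left, z_two)
```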
Assuming H0 is true, the probability of our test statistic lying in R is
alpha: P(Phat in R | H0 is true) = alpha
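A small simulation sketch of this definition, assuming a right-tailed one-proportion z test; p0, n, and alpha are made-up values:

```python
# Check by simulation that P(Phat in R | H0 is true) is roughly alpha.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
p0, n, alpha = 0.5, 200, 0.05
z_crit = norm.ppf(1 - alpha)               # right-tail cutoff

# Draw many samples with H0 true (true proportion equals p0).
p_hat = rng.binomial(n, p0, size=100_000) / n
z = (p_hat - p0) / np.sqrt(p0 * (1 - p0) / n)

print((z > z_crit).mean())                 # close to alpha = 0.05
```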
When our observed sample statistic (Pobservation) falls into R, we
Reject H0
When Pobservation falls outside of R
We retain/fail to reject H0
Pcutoff
The cutoff value of the test statistic; the area beyond it (under H0) equals the significance level alpha
P value
The lowest significance level (alpha) for which our data would lead us to reject H0 in favor of HA
Pobservation > Pcutoff (for a right-tailed test)
We reject H0
Comparing the area beyond Pobservation to the area beyond Pcutoff
That is how we tell whether Pobservation is in R
P > Alpha
Fail to reject (retain) the null hypothesis
P ≤ alpha
Reject the null hypothesis (H0)
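A sketch of this p-value decision rule for a right-tailed one-proportion z test; the counts are invented for illustration:

```python
# Compute the p-value (area to the right of the observed statistic)
# and compare it to alpha.
import numpy as np
from scipy.stats import norm

p0, n, successes, alpha = 0.5, 200, 115, 0.05
p_obs = successes / n
z_obs = (p_obs - p0) / np.sqrt(p0 * (1 - p0) / n)

p_value = norm.sf(z_obs)                   # P(Z > z_obs | H0 is true)

if p_value <= alpha:
    print(f"p = {p_value:.3f} <= {alpha}: reject H0")
else:
    print(f"p = {p_value:.3f} > {alpha}: fail to reject H0")
```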
P(Phat > Pcutoff | H0 is true)
Area to the right of Pcutoff (which equals alpha)
P(Phat > Pobservation | H0 is true)
The area to the right of Pobservation (the P value, for a right-tailed test)
If Pobservation is in R
P value ≤ alpha
If Pobservation is not in R
P value > alpha
P value is not
The probability that H0 is true given our observed test statistic (a common misinterpretation)
Lowering the rate of type 1 errors by choosing a lower alpha
Increases the rate of Type 2 errors, which lowers the power of the hypothesis test. It is often a good idea to use the highest alpha you are willing to accept.
Power =
P(reject H0 | HA is true) = 1 - beta, where beta = P(Type 2 error)
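A sketch of the power calculation for a right-tailed one-proportion z test, showing that a lower alpha gives lower power; p0, the true proportion, and n are illustrative:

```python
# Power = P(reject H0 | HA is true) = 1 - beta.
import numpy as np
from scipy.stats import norm

p0, p_true, n = 0.5, 0.55, 200
se0 = np.sqrt(p0 * (1 - p0) / n)            # standard error under H0
se1 = np.sqrt(p_true * (1 - p_true) / n)    # standard error under HA

for alpha in (0.05, 0.01):
    p_cutoff = p0 + norm.ppf(1 - alpha) * se0   # reject H0 when p_hat > p_cutoff
    power = norm.sf((p_cutoff - p_true) / se1)  # P(p_hat > p_cutoff | HA is true)
    print(f"alpha = {alpha}: power = {power:.2f}")
```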
Standing by your alpha
Use the same alpha from test to test; choose it before seeing the data and do not change it afterward
Confidence level
The confidence level of a confidence interval tells us the long-run proportion of intervals that contain the true parameter; 1 minus the confidence level is the long-run rate of false positives
Confidence interval
A range that is likely to contain the true value of the population parameter, such as the mean
False positive in the confidence interval
Confidence interval does not contain the population parameter
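A simulation sketch of this idea: build many 95% confidence intervals for a proportion and count how often they miss the true value; the true proportion and sample size are made up:

```python
# The long-run rate of "false positives" (intervals that miss the true
# parameter) is about 1 - confidence level.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
p_true, n, conf = 0.4, 500, 0.95
z = norm.ppf(1 - (1 - conf) / 2)

p_hat = rng.binomial(n, p_true, size=20_000) / n
half_width = z * np.sqrt(p_hat * (1 - p_hat) / n)
miss = (p_true < p_hat - half_width) | (p_true > p_hat + half_width)

print(miss.mean())                          # close to 1 - 0.95 = 0.05
```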