significance test
Formal procedure for using observed data to decide between two competing claims (the null hypothesis and the alternative hypothesis). The claims are usually statements about parameters. Also called a test of significance, a hypothesis test, or a test of hypotheses.
null hypothesis H₀
Claim we weigh evidence against in a significance test. Often the null hypothesis is a statement of 'no difference.'
alternative hypothesis Hₐ
The claim that we are trying to find evidence for in a significance test.
one-sided alternative hypothesis
An alternative hypothesis is one-sided if it states that the parameter is greater than the null value or that it is less than the null value. Tests with a one-sided alternative hypothesis are sometimes called one-sided tests or one-tailed tests.
two-sided alternative hypothesis
The alternative hypothesis is two-sided if it states that the parameter is different from the null value (it could be either greater than or less than). Tests with a two-sided alternative hypothesis are sometimes called two-sided tests or two-tailed tests.
P-value
The probability of getting evidence for the alternative hypothesis Hₐ as strong as or stronger than the observed evidence when the null hypothesis H₀ is true. The smaller the P-value, the stronger the evidence against H₀ and in favor of Hₐ provided by the data.
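As a minimal sketch (hypothetical numbers, assuming SciPy is available), here is how a P-value is computed from a standard normal test statistic; the value z = 2.05 is made up for illustration, not taken from this glossary:

```python
from scipy.stats import norm

# Hypothetical standardized test statistic computed from a sample.
z = 2.05

# One-sided (upper-tail) P-value: P(Z >= 2.05), assuming H0 is true.
p_one_sided = norm.sf(z)

# Two-sided P-value: evidence at least this strong in either direction.
p_two_sided = 2 * norm.sf(abs(z))

print(p_one_sided)  # about 0.020
print(p_two_sided)  # about 0.040
```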
significance level α
Value that we use as a boundary to decide if an observed result is unlikely to happen by chance alone when the null hypothesis is true. The significance level gives the probability of a Type I error.
standardized test statistic
Value that measures how far a sample statistic is from what we would expect if the null hypothesis H₀ were true, in standard deviation units.
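One common way to write this definition as a formula:

```latex
\text{standardized test statistic}
  = \frac{\text{statistic} - \text{parameter}}
         {\text{standard deviation of the statistic}}
```

When the statistic has an approximately normal distribution, this is the z-statistic used in the P-value sketch above.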
one-sample z test for a proportion
A significance test of the null hypothesis that a population proportion p is equal to a specified value.
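A minimal sketch of this test, assuming SciPy and hypothetical counts (n = 500 with 275 successes, null value p₀ = 0.50):

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical data: sample size, number of successes, null value.
n, successes = 500, 275
p0 = 0.50                        # H0: p = 0.50

p_hat = successes / n
# Standard deviation of p_hat, computed using the null value p0.
se = sqrt(p0 * (1 - p0) / n)
z = (p_hat - p0) / se            # standardized test statistic

# Two-sided alternative Ha: p != 0.50.
p_value = 2 * norm.sf(abs(z))
print(z, p_value)                # z about 2.24, P-value about 0.025
```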
Type I error
An error that occurs if we reject H₀ when H₀ is true. That is, the data give convincing evidence that Hₐ is true when it really isn't.
Type II error
An error that occurs if we fail to reject H₀ when Hₐ is true. That is, the data do not give convincing evidence that Hₐ is true when it really is.
power
The probability that a test will find convincing evidence for Hₐ when a specific alternative value of the parameter is true. The power of a test against any alternative is 1 minus the probability of a Type II error for that alternative; that is, power = 1 - P(Type II error).
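A sketch of a power calculation for a one-sided one-sample z test for a proportion, using hypothetical values (H₀: p = 0.50, Hₐ: p > 0.50, specific alternative p = 0.60, n = 100, α = 0.05) and assuming SciPy:

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical setup: null value, specific alternative, sample size, level.
n, p0, pa, alpha = 100, 0.50, 0.60, 0.05

# Rejection region in terms of p_hat, using the SD of p_hat under H0.
z_star = norm.ppf(1 - alpha)
p_crit = p0 + z_star * sqrt(p0 * (1 - p0) / n)

# Power = P(p_hat lands in the rejection region | p = pa),
# using the SD of p_hat at the alternative value pa.
power = norm.sf((p_crit - pa) / sqrt(pa * (1 - pa) / n))
beta = 1 - power                 # P(Type II error) at this alternative

print(power, beta)               # power about 0.64, beta about 0.36
```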
two-sample z test for the difference in proportions
A significance test of the null hypothesis that the difference in the proportions of successes for two populations or treatments is equal to a specified value (usually 0).
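A sketch with hypothetical counts, assuming SciPy and the usual pooled-proportion standard error for a null value of 0:

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical data: successes and sample sizes for the two groups.
x1, n1 = 120, 400
x2, n2 = 90, 400

p1, p2 = x1 / n1, x2 / n2
# Pooled (combined) proportion, appropriate when H0: p1 - p2 = 0.
p_c = (x1 + x2) / (n1 + n2)
se = sqrt(p_c * (1 - p_c) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se               # standardized test statistic

p_value = 2 * norm.sf(abs(z))    # two-sided alternative Ha: p1 != p2
print(z, p_value)                # z about 2.41, P-value about 0.016
```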