These flashcards cover key concepts related to statistical significance testing, including definitions and explanations of important terms.
Statistical Significance
A determination of whether the results of a study are likely due to chance or represent a true effect.
p value
The probability of obtaining results at least as extreme as those observed, assuming the null hypothesis is true.
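As a concrete illustration of a p value, here is a minimal one-sample two-sided z-test computed with only the standard library; the numbers (sample mean 103, null mean 100, sd 15, n = 36) are hypothetical:

```python
import math

def z_test_p_value(sample_mean, null_mean, pop_sd, n):
    """Two-sided p value for a one-sample z-test (illustrative sketch)."""
    z = (sample_mean - null_mean) / (pop_sd / math.sqrt(n))
    # P(Z > |z|) for a standard normal, via the error function
    upper_tail = 0.5 * (1 - math.erf(abs(z) / math.sqrt(2)))
    return 2 * upper_tail  # two-sided: count both tails

p = z_test_p_value(103, 100, 15, 36)  # z = 1.2, p ≈ 0.23 — not significant at alpha = 0.05
```

A p of roughly 0.23 exceeds the usual alpha of 0.05, so here we would fail to reject the null hypothesis.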
Null Hypothesis
The default assumption that there is no effect or no difference.
Type I Error
Incorrectly rejecting the null hypothesis when it is true; represented by alpha, often set at 0.05.
Type II Error
Failing to reject the null hypothesis when it is false; represented by beta, conventionally targeted at 0.20 or lower (corresponding to at least 80% power).
Sampling Error
The error that occurs when a sample does not accurately represent the target population.
Bias
Systematic error due to how the researcher samples the population.
Central Limit Theorem
States that the sampling distribution of the mean will be approximately normally distributed if the sample size is sufficiently large, regardless of the shape of the population distribution.
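The Central Limit Theorem can be demonstrated by simulation: draw many samples from a deliberately skewed population (here an exponential distribution, chosen purely as an example) and look at the distribution of their means. The mean of the sample means should sit near the population mean (1.0), and their spread should shrink toward sigma divided by the square root of n:

```python
import random
import statistics

random.seed(0)  # fixed seed so the simulation is reproducible
n = 50          # size of each sample
reps = 2000     # number of samples drawn

# Exponential population (mean 1, sd 1) is strongly right-skewed,
# yet the distribution of sample means comes out roughly normal.
sample_means = [
    statistics.mean(random.expovariate(1.0) for _ in range(n))
    for _ in range(reps)
]

center = statistics.mean(sample_means)   # ≈ 1.0 (population mean)
spread = statistics.stdev(sample_means)  # ≈ 1 / sqrt(50) ≈ 0.14
```

A histogram of `sample_means` would look approximately bell-shaped even though the underlying population is not.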
Effect Size
A quantitative measure of the magnitude of the experimental effect.
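One common effect-size measure is Cohen's d, the standardized difference between two means; the values below are hypothetical, used only to show the arithmetic:

```python
def cohens_d(mean1, mean2, pooled_sd):
    """Cohen's d: difference between two means in pooled-sd units."""
    return (mean1 - mean2) / pooled_sd

# Hypothetical: group means 103 and 100, pooled sd 15
d = cohens_d(103, 100, 15)  # 0.2, conventionally a "small" effect
```

By Cohen's rough benchmarks, d of about 0.2 is small, 0.5 medium, and 0.8 large.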
Confidence Interval
A range of values that is likely to contain the true population parameter.
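A 95% confidence interval for a mean (with known population sd, the simplest z-based case) is the sample mean plus or minus 1.96 standard errors; the inputs below are the same hypothetical numbers used earlier:

```python
import math

def mean_ci_95(sample_mean, pop_sd, n):
    """95% z-based confidence interval for a mean (known sd, sketch only)."""
    margin = 1.96 * pop_sd / math.sqrt(n)  # 1.96 = two-sided z critical value
    return (sample_mean - margin, sample_mean + margin)

lo, hi = mean_ci_95(103, 15, 36)  # ≈ (98.1, 107.9)
```

Note the interval contains the null value 100, which is consistent with the non-significant p value: the data do not rule out "no effect".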
Power
The probability of correctly rejecting the null hypothesis when it is false; equal to 1 minus beta.
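Power can be computed directly for a one-sided z-test on a mean shift; this sketch assumes a known sd and an alpha of 0.05, with hypothetical inputs (true shift 5, sd 15, n = 36):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def z_test_power(effect, pop_sd, n, z_alpha=1.645):
    """Power of a one-sided z-test: P(reject H0 | true shift = effect).

    z_alpha = 1.645 is the one-sided critical value for alpha = 0.05.
    """
    noncentrality = effect / (pop_sd / math.sqrt(n))
    return phi(noncentrality - z_alpha)

power = z_test_power(5, 15, 36)  # ≈ 0.64 — below the conventional 0.80 target
```

A power of about 0.64 falls short of the usual 80% target, suggesting a larger sample would be needed to detect a shift of this size reliably.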
Critical Value
The threshold that determines whether the null hypothesis can be rejected based on the test statistic.
Significance Level (alpha)
The probability of making a Type I error; commonly set at 0.05.
Statistical Significance Testing
A method used to evaluate the likelihood that observed results occurred by random chance alone.
Random Sampling Error
Natural deviations that occur when samples are taken from a population.
Clinically Meaningful
Results that have practical significance in a real-world context, not just statistical significance.
Correct Decision
A correct outcome in hypothesis testing: failing to reject the null hypothesis when it is true, or rejecting it when it is false.