These flashcards define key terms and explain core concepts related to statistical significance testing.
Confidence Intervals
Communicates precision by providing a range of plausible values for the population parameter being estimated.
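To make this concrete, here is a minimal sketch (not part of the original cards) of computing a 95% confidence interval for a population mean with NumPy and SciPy; the sample values are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical sample measurements (made up for illustration)
sample = np.array([4.8, 5.1, 5.5, 4.9, 5.3, 5.0, 5.2, 4.7])

mean = sample.mean()
sem = stats.sem(sample)  # standard error of the mean
# 95% CI from the t distribution with n - 1 degrees of freedom
low, high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)
print(f"95% CI for the population mean: ({low:.2f}, {high:.2f})")
```

The interval reports a range of plausible values for the population mean rather than a single point estimate.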
Effect Sizes
Communicates strength by telling us the magnitude of the experimental effect or relationship between variables.
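As one illustration (not part of the original cards), Cohen's d is a common effect size for the standardized difference between two group means; the groups below are hypothetical.

```python
import numpy as np

def cohens_d(group1, group2):
    """Cohen's d: standardized difference between two group means."""
    n1, n2 = len(group1), len(group2)
    # Pooled standard deviation from the two sample variances (ddof=1)
    pooled_var = ((n1 - 1) * np.var(group1, ddof=1) +
                  (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(group1) - np.mean(group2)) / np.sqrt(pooled_var)

# Hypothetical treatment and control scores (made up for illustration)
treatment = [12, 14, 11, 15, 13, 16]
control = [10, 11, 9, 12, 10, 11]
print(f"Cohen's d = {cohens_d(treatment, control):.2f}")
```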
Statistical Significance Testing
Communicates probability, telling us how likely the current result would be if the study’s null hypothesis were true.
Random Sampling Error
The natural deviations that occur when randomly sampling from the population.
Bias
Systematic error introduced by flawed sampling procedures in which the researcher does not use a representative sample.
Sampling Distribution
The distribution of a sample statistic that would be obtained if all possible samples of the same size were drawn from a given population.
Central Limit Theorem
States that as the sample size increases, the sampling distribution of the sample mean approaches a normal distribution, regardless of the shape of the original population distribution.
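A small simulation sketch (not part of the original cards) shows the theorem at work: sample means drawn from a strongly skewed population cluster more tightly, and more symmetrically, as the sample size grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# A strongly skewed "population" (exponential), far from normal
population = rng.exponential(scale=2.0, size=100_000)

# Sampling distribution of the mean: draw many samples of size n and average each
for n in (2, 30, 200):
    means = np.array([rng.choice(population, size=n).mean() for _ in range(5_000)])
    # As n grows, the distribution of these means looks increasingly normal
    # and their spread (the standard error) shrinks
    print(f"n={n:>3}: mean of sample means = {means.mean():.2f}, "
          f"SD of sample means = {means.std(ddof=1):.3f}")
```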
Type I Error (Alpha)
Rejecting the null hypothesis when it is actually true.
Type II Error (Beta)
Failing to reject the null hypothesis when it is actually false.
Power (1-β)
The probability that the test correctly rejects the null hypothesis when it is false.
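As a rough sketch (not part of the original cards), power for a one-sided, one-sample z-test can be computed directly from the standardized effect size and the sample size.

```python
import numpy as np
from scipy import stats

def power_one_sample_z(effect_size, n, alpha=0.05):
    """Power of a one-sided, one-sample z-test for a standardized effect size."""
    z_crit = stats.norm.ppf(1 - alpha)  # critical value for rejecting H0
    # Under the alternative, the test statistic is centered at effect_size * sqrt(n)
    return 1 - stats.norm.cdf(z_crit - effect_size * np.sqrt(n))

# Hypothetical medium effect (d = 0.5) at several sample sizes
for n in (10, 30, 50, 100):
    print(f"n={n:>3}: power = {power_one_sample_z(0.5, n):.2f}")
```

Larger samples and larger effects both raise the probability of correctly rejecting a false null hypothesis.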
P-value
The probability of obtaining a result at least as extreme as the one observed in our treatment group if the null hypothesis were true.
Statistical Significance
A result is statistically significant if its p-value is less than or equal to alpha (typically ≤ 0.05).
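Tying the last few cards together, here is a minimal sketch (not part of the original cards) of a two-sample t-test whose p-value is compared against alpha = 0.05; the measurements are invented for illustration.

```python
from scipy import stats

# Hypothetical treatment and control measurements (made up for illustration)
treatment = [23.1, 25.4, 24.8, 26.0, 24.2, 25.1, 23.9, 25.7]
control = [22.0, 23.5, 22.8, 23.1, 22.4, 23.0, 22.6, 23.3]

alpha = 0.05
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value <= alpha:
    print("p <= alpha: reject the null hypothesis (statistically significant)")
else:
    print("p > alpha: fail to reject the null hypothesis")
```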