effect size
a quantitative measure of the magnitude of an effect or relationship in a study
why effect sizes matter
they help interpret the practical significance of results, not just statistical significance
standardized effect size
an effect size expressed without units, allowing comparison across studies (e.g., Cohen’s d, Pearson’s r)
unstandardized effect size
an effect size in the original measurement units (e.g., a 5-point test score difference)
Cohen’s d
the standardized mean difference between two groups: the difference in means divided by the pooled standard deviation
Hedges’ g
a corrected version of Cohen's d that removes its upward bias in small samples
Glass’s Δ
like Cohen’s d but uses only the standard deviation of the control group
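A minimal NumPy sketch of all three mean-difference measures (the function names and sample data are illustrative, not from any particular library):

```python
import numpy as np

def cohens_d(x, y):
    """Mean difference divided by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) +
                  (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

def hedges_g(x, y):
    """Cohen's d times the bias-correction factor J ≈ 1 - 3/(4·df - 1)."""
    df = len(x) + len(y) - 2
    return cohens_d(x, y) * (1 - 3 / (4 * df - 1))

def glass_delta(treatment, control):
    """Mean difference scaled by the control group's SD only."""
    return (np.mean(treatment) - np.mean(control)) / np.std(control, ddof=1)

rng = np.random.default_rng(0)
treat, ctrl = rng.normal(0.5, 1, 20), rng.normal(0.0, 1, 20)
print(cohens_d(treat, ctrl), hedges_g(treat, ctrl), glass_delta(treat, ctrl))
```

With only 20 cases per group, g comes out noticeably smaller than d, which is the point of the correction.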
Pearson’s r
measures the strength and direction of a linear relationship; ranges from –1 to +1
R² (coefficient of determination)
the proportion of variance in the outcome explained by the predictor(s); equals r² in simple linear regression
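A short sketch of the r/R² relationship on simulated data (the R² = r² identity holds only with a single predictor):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(size=200)      # linear relationship plus noise

r = np.corrcoef(x, y)[0, 1]             # Pearson's r: strength and direction
print(f"r = {r:.3f}, R² = {r**2:.3f}")  # R² = r² for one predictor
```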
partial eta-squared (η²)
the proportion of variance in the outcome attributable to a given effect (main effect or interaction), excluding variance explained by other effects
omega-squared (ω²)
a less biased alternative to η²; more conservative and better for generalizing beyond the sample
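The textbook one-way ANOVA formulas for both, as a sketch; the sums of squares below are invented for illustration:

```python
def partial_eta_squared(ss_effect, ss_error):
    # SS_effect / (SS_effect + SS_error): ignores variance from other effects
    return ss_effect / (ss_effect + ss_error)

def omega_squared(ss_effect, df_effect, ss_total, ms_error):
    # (SS_effect - df_effect * MS_error) / (SS_total + MS_error)
    return (ss_effect - df_effect * ms_error) / (ss_total + ms_error)

# Hypothetical ANOVA table values
print(partial_eta_squared(ss_effect=40, ss_error=160))                       # 0.20
print(omega_squared(ss_effect=40, df_effect=2, ss_total=220, ms_error=2.5))  # ≈ 0.16
```

Note that ω² comes out below η² for the same data, matching its reputation as the more conservative estimate.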
Cohen’s d interpretation benchmarks
0.2 = small, 0.5 = medium, 0.8 = large
Pearson’s r interpretation benchmarks
0.1 = small, 0.3 = medium, 0.5 = large
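The two benchmark sets as a tiny helper function (the cutoffs are Cohen's rough guidelines, not strict rules):

```python
def label_effect(value, cutoffs):
    """Map |effect| onto verbal benchmarks given (small, medium, large) cutoffs."""
    small, medium, large = cutoffs
    v = abs(value)
    return ("large" if v >= large else
            "medium" if v >= medium else
            "small" if v >= small else "negligible")

print(label_effect(0.6, (0.2, 0.5, 0.8)))   # Cohen's d of 0.6  → "medium"
print(label_effect(0.35, (0.1, 0.3, 0.5)))  # Pearson's r of .35 → "medium"
```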
effect sizes in large samples
small effects can still be statistically significant but might not be practically important
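A quick simulation of this point, assuming SciPy is available: with 100,000 cases per group, a trivial true difference (d ≈ 0.05) is still highly significant:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(0.00, 1, 100_000)   # control
b = rng.normal(0.05, 1, 100_000)   # tiny true effect, d ≈ 0.05

t, p = stats.ttest_ind(a, b)
d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
print(f"p = {p:.1e} (significant), d = {d:.3f} (practically negligible)")
```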
effect sizes in small samples
often inflated and more vulnerable to random variation
the Facebook experiment lesson
small effects can be statistically significant in large samples but have little real-world relevance
the hungry judges study
illustrates that a large effect size in observational data does not by itself imply causation
winner’s curse
significant results often overestimate the true effect size due to selection bias, especially in underpowered studies
low power and inflated effects
low-powered studies are more likely to report large, exaggerated effect sizes
MSDE (Minimal Statistically Detectable Effect)
the smallest effect size a study is powered to detect as statistically significant
use of MSDE
helps plan studies and interpret whether a non-significant result reflects no effect or insufficient power
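One way to compute an MSDE for a two-sample t-test is to solve the power equation for the effect size; a sketch using statsmodels' power module:

```python
from statsmodels.stats.power import TTestIndPower

# Smallest Cohen's d detectable with 80% power at α = .05,
# given 50 participants per group (leave effect_size=None to solve for it)
msde = TTestIndPower().solve_power(effect_size=None, nobs1=50,
                                   alpha=0.05, power=0.80, ratio=1.0)
print(f"MSDE ≈ d = {msde:.2f}")  # roughly 0.57 for this design
```

If a study like this returns a non-significant result, effects smaller than d ≈ 0.57 were never detectable in the first place.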
interaction effect
when the effect of one variable depends on the level of another variable
ordinal interaction
effect direction is the same across groups, but the size varies; lines don’t cross
disordinal (crossover) interaction
effect reverses across groups; lines cross
disordinal interaction & effect size
usually has a larger effect size, especially when means are extreme (e.g., 0 vs. 1)
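A worked 2×2 illustration (cell means invented): the interaction contrast is far larger when the simple effects reverse direction than when they merely differ in size:

```python
# Cell means for a 2×2 design: {level of A: (mean at b1, mean at b2)}
ordinal    = {"a1": (4, 6), "a2": (5, 9)}   # same direction, different size
disordinal = {"a1": (2, 8), "a2": (8, 2)}   # direction reverses (lines cross)

def interaction_contrast(cells):
    """(B effect within a1) minus (B effect within a2); nonzero ⇒ interaction."""
    (a1b1, a1b2), (a2b1, a2b2) = cells["a1"], cells["a2"]
    return (a1b2 - a1b1) - (a2b2 - a2b1)

print(interaction_contrast(ordinal))     # (6-4) - (9-5) = -2
print(interaction_contrast(disordinal))  # (8-2) - (2-8) = 12, much larger
```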
adjusted R²
corrects R² for the number of predictors, reducing bias in models with many variables
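The standard correction as a one-line sketch, with n observations and p predictors:

```python
def adjusted_r2(r2, n, p):
    """Adjusted R² = 1 - (1 - R²)(n - 1) / (n - p - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

print(adjusted_r2(0.30, n=50, p=10))   # ≈ 0.12: heavy penalty, many predictors
print(adjusted_r2(0.30, n=500, p=10))  # ≈ 0.29: penalty fades with more data
```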
meta-analysis
uses standardized effect sizes to combine and compare results across studies
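A minimal fixed-effect pooling sketch (the study effects and variances below are made up for illustration): each study is weighted by the inverse of its sampling variance, so precise studies count for more.

```python
import numpy as np

effects   = np.array([0.41, 0.18, 0.55])   # e.g., Hedges' g from three studies
variances = np.array([0.05, 0.02, 0.09])   # their sampling variances

weights = 1 / variances                    # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
se = np.sqrt(1 / np.sum(weights))
print(f"pooled g = {pooled:.2f}, 95% CI ± {1.96 * se:.2f}")
```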
effect size vs p-value
p-value tells you if an effect likely exists; effect size tells you how big or meaningful it is