Analysis of variance (ANOVA)
A statistical test used to compare two or more group means (in practice, usually three or more, since a t test handles two) to see whether at least one differs significantly from the others. It looks at how much of the total variability in scores can be explained by group differences versus random error
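A minimal sketch of what this looks like in code, using SciPy's one-way ANOVA; the three groups and their scores below are made-up illustration data:

```python
from scipy import stats

# Hypothetical scores from three treatment groups (illustration only)
group_a = [4, 5, 6, 5, 4]
group_b = [7, 8, 6, 7, 8]
group_c = [5, 6, 5, 4, 6]

# One-way ANOVA: tests whether at least one group mean differs
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```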
Factor
In analysis of variance, the variable (independent or quasi-independent) that designates the groups being compared is called a factor
Levels
The individual conditions or values that make up a factor are called the levels of the factor
Two-factor design (factorial design)
A design with two independent variables (factors) studied at the same time. It allows researchers to see main effects for each factor and whether they interact (combine in unique ways)
Single-factor design
A study that examines only one independent variable (one factor) with two or more levels
Single-factor independent-measures design
A type of single-factor design where different participants are in each group (between-subjects). Example: one group studies with music, another without
Testwise alpha level
The testwise alpha level is the risk of a Type I error, or alpha level, for an individual hypothesis test
Experimentwise alpha level
When an experiment involves several different hypothesis tests, the experimentwise alpha level is the total probability of a Type I error that is accumulated from all of the individual tests in the experiment. Typically, the experimentwise alpha level is substantially greater than the value of alpha used for any one of the individual tests
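For c independent tests each run at the same testwise alpha, the experimentwise alpha can be approximated as 1 − (1 − alpha)^c. A quick sketch (the test counts are arbitrary):

```python
# Approximate experimentwise alpha for c independent tests at testwise alpha = .05
alpha = 0.05
for c in (1, 3, 5, 10):
    experimentwise = 1 - (1 - alpha) ** c
    print(f"{c} tests: experimentwise alpha = {experimentwise:.3f}")
```

Even at 5 tests the accumulated risk is already over 20%, which is why post hoc procedures control the overall error rate.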
Between-treatments variance
How much the group means differ from each other. This captures both the treatment effect (real differences caused by the experiment) and random error
Treatment effect
The actual change or difference in scores caused by the experimental manipulation (not by chance)
Within-treatments variance
The variability of scores inside each group, caused by individual differences, measurement error, or chance rather than by the treatment itself
F-ratio
The statistic calculated in ANOVA: F = between-treatments variance / within-treatments variance. If F is large, it suggests the group means differ more than chance alone can explain (a worked sketch follows the Mean square entry below)
Error term
Represents the amount of variability due to random, unexplained factors (individual differences, chance, etc.). It’s used as the denominator of the F-ratio
Mean square (MS)
An average of squared deviations (a variance). In ANOVA, MS is calculated for both the between-treatments and within-treatments sources: MS = SS / df (SS = sum of squares, df = degrees of freedom)
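A small worked sketch tying SS, df, MS, and the F-ratio together; the sums of squares and degrees of freedom below are invented for illustration:

```python
# Invented values for illustration: SS and df for each source of variance
ss_between, df_between = 40.0, 2    # e.g. k = 3 groups  -> df = k - 1
ss_within, df_within = 90.0, 27     # e.g. N = 30 scores -> df = N - k

ms_between = ss_between / df_between  # MS = SS / df
ms_within = ss_within / df_within     # the error term (F-ratio denominator)

f_ratio = ms_between / ms_within      # F = MS between / MS within
print(f"MS between = {ms_between:.2f}, MS within = {ms_within:.2f}, F = {f_ratio:.2f}")
```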
Distribution of F-ratios
A theoretical distribution showing the range of possible F-values you could get if the null hypothesis were true. Most F-values are near 1.0 (no difference), but larger values become less likely and may indicate a real effect
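To see how large an F-value must be before it counts as unlikely under the null hypothesis, you can query the F distribution directly; a sketch using SciPy, with the degrees of freedom and the observed F assumed for illustration:

```python
from scipy import stats

df_between, df_within = 2, 27  # assumed degrees of freedom for illustration

# Critical value: the F beyond which only 5% of null-hypothesis F-ratios fall
f_crit = stats.f.ppf(0.95, df_between, df_within)

# p-value for a made-up observed F of 6.0
p = stats.f.sf(6.0, df_between, df_within)
print(f"critical F = {f_crit:.2f}, p for F = 6.0: {p:.4f}")
```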
ANOVA summary table
A table that organizes all the key calculations in an ANOVA—showing sources of variance (between, within, total), their SS, df, MS, F, and significance
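One way to produce such a table in code is statsmodels’ anova_lm; a sketch, with the score/group data invented for illustration:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Made-up long-format data: one score per row, labeled by group
df = pd.DataFrame({
    "group": ["a"] * 5 + ["b"] * 5 + ["c"] * 5,
    "score": [4, 5, 6, 5, 4, 7, 8, 6, 7, 8, 5, 6, 5, 4, 6],
})

model = smf.ols("score ~ C(group)", data=df).fit()
# Rows: C(group) (between) and Residual (within);
# columns: df, sum_sq, mean_sq, F, PR(>F)
print(anova_lm(model))
```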
Eta squared (η²)
A measure of effect size for ANOVA. It shows the proportion of total variability in the data that’s explained by the treatment. It ranges from 0 to 1; larger values mean a stronger effect
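Eta squared falls straight out of the sums of squares; a sketch with invented values:

```python
# Invented sums of squares for illustration
ss_between, ss_total = 40.0, 130.0

# Proportion of total variability explained by the treatment
eta_squared = ss_between / ss_total
print(f"eta squared = {eta_squared:.2f}")  # 0.31 -> about 31% explained
```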
Post hoc tests (or posttests)
Additional tests done after finding a significant ANOVA result to figure out which specific groups differ from each other
Pairwise comparisons
Comparisons made between every possible pair of group means to see which ones are significantly different
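With k groups there are k(k − 1)/2 such pairs; a sketch that enumerates them (the group labels are hypothetical):

```python
from itertools import combinations

groups = ["a", "b", "c", "d"]  # hypothetical group labels

pairs = list(combinations(groups, 2))  # every possible pair of groups
print(len(pairs), pairs)  # 6 pairs: k(k - 1)/2 = 4 * 3 / 2
```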
Tukey’s HSD test (honestly significant difference)
A post hoc test that compares all possible pairs of means while controlling the overall error rate. Its classic form is used when all groups have equal sample sizes and equal variances
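statsmodels provides this test as pairwise_tukeyhsd; a sketch with made-up scores and equal-sized groups:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Made-up scores with equal group sizes (Tukey's HSD assumes equal n)
scores = np.array([4, 5, 6, 5, 4, 7, 8, 6, 7, 8, 5, 6, 5, 4, 6])
groups = np.array(["a"] * 5 + ["b"] * 5 + ["c"] * 5)

result = pairwise_tukeyhsd(scores, groups, alpha=0.05)
print(result)  # one row per pair: mean difference, adjusted p-value, reject yes/no
```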
Scheffé test
A conservative post hoc test that controls the Type I error rate very strictly. It’s flexible (it works with unequal group sizes or complex comparisons) but less powerful, meaning it’s harder to find a significant difference unless the effect is strong
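The Scheffé criterion can be expressed against the ordinary F distribution: a comparison must exceed (k − 1) times the usual critical F. A sketch with an assumed design:

```python
from scipy import stats

k, n_total, alpha = 3, 30, 0.05  # assumed design: 3 groups, 30 total scores

f_crit = stats.f.ppf(1 - alpha, k - 1, n_total - k)  # ordinary ANOVA critical F
scheffe_crit = (k - 1) * f_crit                      # stricter Scheffé cutoff
print(f"ANOVA critical F = {f_crit:.2f}, Scheffé cutoff = {scheffe_crit:.2f}")
```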