interaction effects
the effect of one factor depends on another
how multiple IVs affect the DV together
the number of possible interactions increases with the number of IVs
2 IVs - 1 interaction
3 IVs - 4 interactions
a significant interaction qualifies the interpretation of any main effect
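The counts above follow from combinatorics: every subset of two or more IVs forms one interaction term. A quick base-R check (the helper name is my own, not from any package):

```r
# number of interaction terms in a full factorial design with k IVs:
# every subset of 2 or more IVs forms one interaction
n_interactions <- function(k) sum(choose(k, 2:k))

n_interactions(2)  # 1 (A:B)
n_interactions(3)  # 4 (A:B, A:C, B:C, A:B:C)
```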
single main effect
both lines parallel
no gap between the lines, so only one IV has an effect
double main effect
both lines parallel
one line is higher, showing each IV has an effect but they work separately
stronger effect + no effect
lines not parallel
sign of interaction
IVs work together
opposite effects
aka a crossover interaction
lines not parallel
sign of interaction
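These graph patterns can be checked numerically: in a 2x2 design, parallel lines mean the "difference of differences" between cell means is zero. A minimal base-R sketch with made-up cell means (the values and the helper name are my own):

```r
# hypothetical cell means for a 2x2 design (rows: IV1 levels, cols: IV2 levels)
parallel  <- matrix(c(10, 12,
                      14, 16), nrow = 2, byrow = TRUE)
crossover <- matrix(c(10, 16,
                      16, 10), nrow = 2, byrow = TRUE)

# parallel lines <=> the "difference of differences" is zero
dod <- function(m) (m[1, 1] - m[1, 2]) - (m[2, 1] - m[2, 2])

dod(parallel)   # 0   -> no interaction
dod(crossover)  # -12 -> interaction
```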
visualising interaction effects in R
helps us see whether the effect of one variable depends on another
show whether the lines representing each group:
follow a similar pattern (no interaction)
diverge and cross (possible interaction)
use emmip() function to visualise model-estimated means and their CIs
displays means adjusted for other factors in the model
can plot error bars that reflect the 95% CI
emmip(model_1, BS_IV ~ RM_IV, CIs = TRUE) +
  labs(x = "RM IV",
       y = "DV")
follow-up tests for interaction effects
do once significant interaction is identified
pairwise comparisons tell us which specific combinations of levels are driving it
this shows which specific effects are meaningful within the interaction
exact code used depends on aims, which is largely determined by rationale and hypotheses
code for follow-up tests
pairs(emmeans(model_1, ~ BS_IV | RM_IV), adjust = "bonferroni")
the IVs can be swapped depending on the comparison of interest
emmeans(model_1, ~ BS_IV | RM_IV)
the IVs here can be swapped too
| tells R to compare the IV on its left within each level of the IV on its right (here, the BS IV within each level of the RM IV)
eg. use time|group to see how scores change over time within each group
or use group|time to see how groups differ at each time point
it is recommended to run the comparisons both ways and report the one that makes most sense for the hypothesis
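A rough base-R illustration of the two directions, using hypothetical data (emmeans does this properly with adjusted means and tests; tapply here only shows which cell means each direction compares):

```r
# hypothetical long-format data: score by group (BS IV) and time (RM IV)
d <- data.frame(
  group = rep(c("control", "treatment"), each = 4),
  time  = rep(c("pre", "post"), times = 4),
  score = c(10, 11, 9, 12, 10, 15, 11, 16)
)

# time | group: change over time within each group
with(d, tapply(score, list(group, time), mean))

# group | time: group differences at each time point
with(d, tapply(score, list(time, group), mean))
```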
effect sizes
quantifies the magnitude of a difference
eg. standardised beta, r family
represents strength of relationships between variables
helps us understand the strength/practical importance of our findings, beyond saying it’s significant
they complement p-values: significance depends on sample size, whereas effect sizes are independent of both significance and sample size
useful to report because small, practically unimportant effects can still produce small p-values in large samples
statistical significance doesn't always mean the effect is large
effect sizes are often reported as standardised mean differences
eg. Cohen's d, Hedges' g
they show the difference between 2 means relative to the variability of the scores (in SD units)
d = .50 = groups differ by half a SD
these are suited to 2-group comparisons and don't generalise to multi-factor ANOVAs
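A minimal base-R sketch of the pooled-SD version of Cohen's d (the function name is my own, not from a package):

```r
# Cohen's d for two independent groups (pooled-SD version)
cohens_d <- function(x, y) {
  nx <- length(x); ny <- length(y)
  # pooled SD weights each group's variance by its degrees of freedom
  sd_pooled <- sqrt(((nx - 1) * var(x) + (ny - 1) * var(y)) / (nx + ny - 2))
  (mean(x) - mean(y)) / sd_pooled
}

cohens_d(c(4, 5, 6), c(5, 6, 7))  # -1: groups differ by one pooled SD
```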
mean differences
differences between the means
most direct way of expressing an effect
tells exactly how much lower/higher one group's scores were, in the original measurement units
always interpret results in relation to the mean difference, before moving onto standardised effect sizes
the exact magnitude of an effect will be determined by the original scale the DV is measured on
eg. a mean difference of 5 between 2 conditions is relatively larger on a DV measured from 0-10 than on one measured from 0-100
generalised eta squared (η²G)
most common way to express effect sizes in ANOVA (η²)
tells how much of the total variation in DV is explained by each factor/interaction
answers how much of the difference we see in our data can be attributed to the effect we manipulated
aka standardised effect size
values range from 0-1
higher = larger proportion of variance in outcome is explained by factor of interest
can express this as a %
similar concept to R² values in linear regression
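A sketch of that idea with simulated one-way data: η² is the effect's sum of squares divided by the total, and for a single-factor design it equals the R² of the equivalent regression (variable names and simulated values are my own):

```r
set.seed(1)
d <- data.frame(
  group = rep(c("a", "b", "c"), each = 20),
  score = rnorm(60, mean = rep(c(0, 0.5, 1), each = 20))
)

fit <- aov(score ~ group, data = d)
ss  <- summary(fit)[[1]][["Sum Sq"]]  # SS for group, then residuals

eta2 <- ss[1] / sum(ss)  # proportion of total variance explained by group
eta2

# same idea as R^2 from the equivalent regression
summary(lm(score ~ group, data = d))$r.squared
```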
traditional η² is tricky to compare across designs
as it's influenced by the number and type of factors in the model
eg. a within-subjects ANOVA and a between-subjects ANOVA with the same data structure can yield different η² values, even when the effect is equally strong
solve this by using generalised eta squared (η²G)
adjusts the calculation so it can be fairly compared across BS, WS and mixed designs
useful in research where studies often use complex designs
rough benchmarks for η²/η²G
0.01 - small effect
0.06 - medium effect
0.14 - large effect
interpretation depends on context though
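These benchmarks could be wrapped in a small helper for labelling results (illustrative only; the function name is my own, and as noted above the thresholds are rough and context-dependent):

```r
# rough Cohen-style labels for eta squared (context still matters)
eta2_label <- function(e) {
  as.character(cut(e,
                   breaks = c(-Inf, 0.01, 0.06, 0.14, Inf),
                   labels = c("negligible", "small", "medium", "large"),
                   right = FALSE))
}

eta2_label(c(0.005, 0.03, 0.10, 0.25))
# "negligible" "small" "medium" "large"
```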
ANOVA effect sizes in R (ETA2)
eta_squared(model_1, generalized = TRUE, alternative = "two.sided")
generalized = TRUE requests generalised eta squared (from the effectsize package)
alternative = "two.sided" makes the reported 95% CI two-sided, so it is meaningful
reflecting an observed variable in the ANOVA model
add:
observed = "IV"
at the end of the model call
observed variables represent pre-existing differences without adding new variance
unlike manipulated variables, which do add variance