Review for Midterm 2 - PSYC 2911

When to Use Repeated-Measures ANOVA

  • Compares two or more means from a within-subjects design.

  • Preferred over running multiple dependent-samples t-tests, because those tests:

    • Are limited to comparing two related means at a time.

    • Inflate the experimentwise Type I error rate when run repeatedly.

Assumptions of Repeated-Measures ANOVA

  1. Residuals of the dependent variable are normally distributed.

  2. Homogeneity of variance among levels of the independent variable.

  3. Sphericity: the variances of the pairwise difference scores are equal (assumed to be met in this course); a rough check is sketched below.
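
As a rough illustration of the sphericity assumption, the sketch below computes the variances of the pairwise difference scores for a small invented data set (all numbers are assumptions, and NumPy is assumed to be available). Sphericity roughly holds when these variances are similar.

```python
import numpy as np
from itertools import combinations

# Hypothetical scores: rows are subjects, columns are the three
# levels of the within-subjects factor.
scores = np.array([
    [8.0, 7.0, 6.0],
    [9.0, 8.0, 8.0],
    [7.0, 6.0, 4.0],
    [6.0, 6.0, 5.0],
    [8.0, 7.0, 5.0],
])

# Sphericity concerns the variances of the difference scores for every
# pair of levels; the assumption is that they are (roughly) equal.
for i, j in combinations(range(scores.shape[1]), 2):
    diff = scores[:, i] - scores[:, j]
    print(f"levels {i + 1} vs {j + 1}: "
          f"variance of differences = {diff.var(ddof=1):.2f}")
```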

Purpose

  • Tests for statistically significant differences among related means (an example test run is sketched after this list).

  • Null hypothesis (H0): All means are equal (e.g., μ₁ = μ₂ = μ₃).

  • Alternative hypothesis (HA): At least two means differ.

    • A significant result indicates that differences exist but not which means differ; post-hoc analyses are required.
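
One way to run a repeated-measures ANOVA in Python is sketched below, assuming the pandas and statsmodels packages are available; the subjects, conditions, and scores are invented for illustration.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format data: each subject is measured under every condition.
# All values are hypothetical.
df = pd.DataFrame({
    "subject":   [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "condition": ["a", "b", "c"] * 4,
    "score":     [8, 7, 6, 9, 8, 8, 7, 6, 4, 6, 6, 5],
})

# Omnibus repeated-measures ANOVA: H0 is that all condition means are
# equal; a significant F only says that at least two means differ.
result = AnovaRM(df, depvar="score", subject="subject",
                 within=["condition"]).fit()
print(result.anova_table)  # F value, degrees of freedom, p-value
```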

Variability in Repeated-Measures ANOVA

  • Partition within-group variability into:

    1. Variability due to error

    2. Variability due to subjects

  • Within-group variability = variability due to subjects + variability due to error (a worked partition is sketched below).
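
A worked sketch of the partition using NumPy and a small invented data set (all numbers are assumptions): total variability splits into between-treatments and within-treatments variability, and the within-treatments part splits further into subject variability and error.

```python
import numpy as np

# Hypothetical scores: rows are subjects (n = 5), columns are the
# k = 3 levels of the within-subjects factor.
scores = np.array([
    [8.0, 7.0, 6.0],
    [9.0, 8.0, 8.0],
    [7.0, 6.0, 4.0],
    [6.0, 6.0, 5.0],
    [8.0, 7.0, 5.0],
])
n, k = scores.shape
grand_mean = scores.mean()

# Total variability of all scores around the grand mean.
ss_total = ((scores - grand_mean) ** 2).sum()

# Between-treatments variability (differences among condition means).
ss_treatment = n * ((scores.mean(axis=0) - grand_mean) ** 2).sum()

# Within-treatments variability, then split it into the part due to
# consistent subject differences and the leftover error.
ss_within = ((scores - scores.mean(axis=0)) ** 2).sum()
ss_subjects = k * ((scores.mean(axis=1) - grand_mean) ** 2).sum()
ss_error = ss_within - ss_subjects

print(f"SS_total     = {ss_total:.2f}")
print(f"SS_treatment = {ss_treatment:.2f}")
print(f"SS_within    = {ss_within:.2f} "
      f"(= SS_subjects {ss_subjects:.2f} + SS_error {ss_error:.2f})")
```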

Effect Sizes for Repeated-Measures ANOVA

  • Eta-squared (η²): Proportion of total variance explained by an independent variable (ranges from 0 to 1).

  • Partial eta-squared (ηp²): Proportion of variance explained by a given independent variable after excluding variance attributable to other sources (e.g., subjects in a repeated-measures design); see the formulas sketched below.
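
A minimal sketch of the two formulas for a one-way repeated-measures design, using made-up sums of squares (the numeric values are assumptions, not real data): η² divides the treatment sum of squares by the total, while ηp² excludes the sum of squares due to subjects.

```python
# Hypothetical sums of squares from a one-way repeated-measures ANOVA.
ss_treatment = 12.0   # variability due to the independent variable
ss_subjects = 20.0    # variability due to consistent subject differences
ss_error = 8.0        # leftover (error) variability
ss_total = ss_treatment + ss_subjects + ss_error

# Eta-squared: effect variability relative to ALL variability.
eta_squared = ss_treatment / ss_total

# Partial eta-squared: effect variability relative to effect + error,
# i.e. after removing the variability due to subjects.
partial_eta_squared = ss_treatment / (ss_treatment + ss_error)

print(f"eta^2         = {eta_squared:.3f}")         # 0.300
print(f"partial eta^2 = {partial_eta_squared:.3f}")  # 0.600
```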

Post-Hoc Comparisons

  • Determine which mean differences are statistically significant (a Bonferroni-corrected example follows this list).

  • Common post-hoc tests include:

    • Bonferroni

    • Tukey Honestly Significant Difference (HSD).
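
A minimal sketch of Bonferroni-corrected post-hoc comparisons using paired t-tests (SciPy is assumed to be available; the condition scores are invented): each pairwise p-value is judged against α divided by the number of comparisons.

```python
from itertools import combinations

import numpy as np
from scipy import stats

# Hypothetical scores for the same subjects under three conditions.
conditions = {
    "a": np.array([8, 9, 7, 6, 8], dtype=float),
    "b": np.array([7, 8, 6, 6, 7], dtype=float),
    "c": np.array([6, 8, 4, 5, 5], dtype=float),
}

pairs = list(combinations(conditions, 2))
alpha = 0.05 / len(pairs)  # Bonferroni-adjusted per-comparison alpha

for name1, name2 in pairs:
    # Dependent-samples (paired) t-test for this pair of conditions.
    res = stats.ttest_rel(conditions[name1], conditions[name2])
    verdict = "significant" if res.pvalue < alpha else "not significant"
    print(f"{name1} vs {name2}: t = {res.statistic:.2f}, "
          f"p = {res.pvalue:.4f} -> {verdict} at adjusted alpha = {alpha:.4f}")
```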

Comparison with One-Way ANOVA

  • Repeated-Measures ANOVA: Compares means of dependent groups.

  • One-Way ANOVA: Compares means of independent groups.

  • Repeated-measures studies are more cost-effective but may suffer from:

    • Greater data loss if participants drop out.

    • Potential for order effects.

    • Constraints on assigning participants to treatments when conditions depend on intrinsic participant characteristics.

One-Way ANOVA Basics

  • Focuses on one independent variable with multiple levels (an example analysis follows this list).

  • Examples of levels of an independent variable:

    • Hours of sleep deprivation (0, 5, or 10 hours) in a study of exam performance.

    • Variations in instructional methods.
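
For contrast with the repeated-measures case, a minimal sketch of a one-way ANOVA on independent groups using SciPy's f_oneway; the group scores below are invented.

```python
from scipy import stats

# Hypothetical exam scores for three independent groups, one per level
# of the sleep-deprivation factor (0, 5, and 10 hours).
deprived_0h = [88, 92, 85, 90, 87]
deprived_5h = [82, 80, 84, 79, 85]
deprived_10h = [70, 75, 72, 68, 74]

# One-way ANOVA for independent groups: H0 is that all group means are equal.
result = stats.f_oneway(deprived_0h, deprived_5h, deprived_10h)
print(f"F = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```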

Factorial Designs

  • Involve two or more independent variables with multiple levels (e.g., 2 × 2 or 2 × 3 designs).

  • Main effects and interactions are examined:

    • Main effects: Impact of one independent variable collapsed over others.

    • Interactions: Occur when the effect of one variable differs across levels of another (illustrated in the sketch below).
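
A small sketch, using made-up cell means for a 2 × 2 design, of how main effects and an interaction are read from a table of cell means: main effects compare marginal means collapsed over the other factor, and an interaction is present when the simple effect of one factor changes across levels of the other.

```python
import numpy as np

# Hypothetical cell means for a 2 x 2 factorial design.
# Rows: levels of factor A; columns: levels of factor B.
cell_means = np.array([
    [10.0, 14.0],   # A1
    [12.0, 22.0],   # A2
])

# Main effects: compare marginal means, collapsing over the other factor.
a_marginals = cell_means.mean(axis=1)   # mean for A1, A2 across B
b_marginals = cell_means.mean(axis=0)   # mean for B1, B2 across A
print("Main effect of A (marginal means):", a_marginals)
print("Main effect of B (marginal means):", b_marginals)

# Interaction: the effect of B (B2 - B1) at each level of A. If these
# simple effects differ, the two factors interact.
b_effect_at_a1 = cell_means[0, 1] - cell_means[0, 0]   # 4.0
b_effect_at_a2 = cell_means[1, 1] - cell_means[1, 0]   # 10.0
print("Effect of B at A1:", b_effect_at_a1)
print("Effect of B at A2:", b_effect_at_a2)
```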

Practical Considerations in Factorial Design

  • Random assignment to conditions ensures valid comparisons.

  • The complexity of designs can lead to intricate results, complicating interpretation.

Assumptions of ANOVA

  1. Dependent variable is normally distributed.

  2. Homogeneity of variance across factors.

  3. Independence of scores across factor levels (checks for the first two assumptions are sketched below).
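
A brief sketch of how the first two assumptions might be checked with SciPy (the groups are invented): the Shapiro-Wilk test for normality and Levene's test for homogeneity of variance.

```python
import numpy as np
from scipy import stats

# Hypothetical scores for three independent groups.
groups = [
    np.array([88, 92, 85, 90, 87], dtype=float),
    np.array([82, 80, 84, 79, 85], dtype=float),
    np.array([70, 75, 72, 68, 74], dtype=float),
]

# Normality: Shapiro-Wilk test on each group (one could also test residuals).
for i, g in enumerate(groups, start=1):
    w, p = stats.shapiro(g)
    print(f"group {i}: Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")

# Homogeneity of variance: Levene's test across all groups.
stat, p = stats.levene(*groups)
print(f"Levene's test: W = {stat:.3f}, p = {p:.3f}")
```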

Context of ANOVA Results

  • A result is either statistically significant or it is not; avoid ambiguous terminology about degrees of significance (e.g., "marginally significant").
