Planned Comparisons in ANOVA

Introduction to Planned Comparisons

  • Planned comparisons are guided by theory and determined before data collection.
  • The researcher has specific conditions they want to compare to others based on their hypothesis.

Planned Comparisons vs. Post Hoc Tests

  • Post Hoc Tests: These tests help manage family-wise error by being more conservative.
  • Planned Comparisons: They don't necessarily solve the family-wise error problem, but are different because:
    • Post hoc tests involve 'fishing' for differences without specific predictions.
    • Planned comparisons predict specific conditions will differ and test those.
  • It's the difference between calling your shot in advance and hitting it, versus taking many shots and focusing only on the few that go in.

Family-Wise Error in Planned Comparisons

  • If the number of planned comparisons is small relative to the total conditions, family-wise error is less of a concern.
  • With many comparisons in a large design, you may need a Bonferroni correction.
  • The decision to correct for family-wise error is at the researcher's discretion, subject to review processes.
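As a minimal sketch, the Bonferroni correction simply divides the family-wise alpha across the number of planned comparisons (the function name and the counts in the example are illustrative, not from the lecture):

```python
def bonferroni_alpha(alpha, k):
    """Per-comparison alpha after a Bonferroni correction for k comparisons."""
    return alpha / k

# Illustrative: three planned comparisons at a family-wise alpha of .05
print(round(bonferroni_alpha(0.05, 3), 4))  # 0.0167
```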

Process of Planned Comparisons

  • Experiment Setup: Includes several conditions, some as controls.
  • Theoretical Components: Focus on specific conditions of theoretical importance.
  • Significant Omnibus F: The experiment yields a significant omnibus F statistic.
  • Testing Key Groups: Differences are tested between key, theoretically important groups.
  • Few Comparisons: If only a few comparisons are made, family-wise error is less of a concern.
  • Researchers need to be honest, ensuring comparisons are planned in advance and acknowledging the increased chance of finding significance with more slicing of the data.
  • Consumers of research should be cautious when many comparisons are made, and authors focus only on significant results.

Types of Planned Comparisons

  • Pairwise Comparisons: Simple differences between two means.
  • Complex Comparisons: Differences between sets of means.

Pairwise Comparisons: Example

  • The lecturer wants to see whether the fruit and veggie groups differ significantly.
  • A reasonable analytic strategy for this diet-and-happiness study is to predict that the donut group will differ meaningfully from the fruit and veggie groups.
  • First, demonstrate that there is no meaningful difference between the fruit and veggie groups.
  • Then follow up with a complex comparison of the donut group against the combined fruit and veggie groups.

Statistics for Pairwise Comparisons

  • SCI (Simple Comparison of Interest): Captures the simple difference between two condition means.
    • SCI = mean1 - mean2 (where 1 and 2 are the groups being compared)
  • Sums of Squares for Comparison:
    • SS_{comparison} = \frac{n \cdot SCI^2}{2}
      • Where n = number of people in each group. Assumes equal sample sizes per group.
  • Mean Squares Comparison:
    • MS_{comparison} = \frac{SS_{comparison}}{1} = SS_{comparison}
      • This is a single degree of freedom comparison.
  • F for Comparison:
    • F = \frac{MS_{comparison}}{MS_{within}}
      • MS_{within} is the omnibus error term (within-groups mean squares or residual mean squares from ANOVA).
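The formulas above can be sketched in Python (the function name `pairwise_comparison_f` is my own; like the formulas, it assumes equal n per group):

```python
def pairwise_comparison_f(mean1, mean2, n, ms_within):
    """F ratio for a single-df pairwise planned comparison (equal n per group)."""
    sci = mean1 - mean2             # simple comparison of interest
    ss_comparison = n * sci**2 / 2  # sums of squares for the comparison
    ms_comparison = ss_comparison   # single df, so MS == SS
    return ms_comparison / ms_within
```

With the fruit/veggie numbers used later in the notes (means 3 and 3.2, n = 20, MS_within = 1.096) this returns roughly 0.365.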

Conceptual Importance

  • Planned comparisons use a statistically significant F ratio to see what is driving the significance.
  • By comparing pairs or clusters of means using the omnibus error term, we parse out variability.
  • The omnibus error term is more robust and reliable because it pools within-group variability across all conditions, giving more error degrees of freedom.

Example Computation for Pairwise Comparison

  • Compare means of fruit and veggie groups.
    • Fruit mean = 3, Veggie mean = 3.2
    • SCI = 3.2 - 3 = 0.2
    • n = 20 people in each group.
    • SS_{comparison} = \frac{20 \cdot (0.2)^2}{2} = 0.4
    • MS_{comparison} = 0.4
    • F = \frac{0.4}{1.096} = 0.365 (where 1.096 is the MS_{within} from SPSS output)
  • Find the F critical value in an F distribution table.
    • 1 numerator degree of freedom, 57 denominator degrees of freedom.
    • If 57 is not listed in the table, use the next smaller available value (e.g., df = 55), which is slightly more conservative.
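The worked computation and the decision rule can be reproduced directly (values taken from the notes; 4.02 is the critical value the notes cite for a single-df comparison):

```python
# Worked numbers from the notes: fruit mean 3, veggie mean 3.2, n = 20.
ms_comparison = 20 * (3.2 - 3.0) ** 2 / 2  # SS = MS = 0.4 (single df)
f_obs = ms_comparison / 1.096              # MS_within from the SPSS output
f_crit = 4.02                              # table value near df = (1, 55)
print(round(f_obs, 3), f_obs < f_crit)     # 0.365 True -> non-significant
```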

Using F Table and Interpreting Results

  • Use F distribution table to find critical value at different significance levels and degrees of freedom combinations.
  • If observed F ratio is less than F critical value, the result is non-significant.
  • Online calculators can give exact p-values.

F Ratios vs. T Values

  • SPSS often gives T values, while research articles may report F ratios or T values.
  • T values are helpful for directional hypotheses or one-tailed significance tests.
  • The F distribution is skewed and bounded at zero, so it cannot convey direction and is unsuitable for directional (one-tailed) tests.
  • T distribution is roughly normal, centered at zero.
    • F = t^2
  • The square root can be taken to convert the F value to a T value.
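The conversion is a one-liner; note that the square root discards the sign, so the direction of t must be read from the sign of the SCI (0.365 is the F from the fruit-vs-veggie comparison above):

```python
import math

f_value = 0.365               # F from the fruit-vs-veggie comparison
t_value = math.sqrt(f_value)  # |t|; the sign comes from the direction of SCI
print(round(t_value, 3))      # 0.604
```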

Complex Comparisons: Comparing a Condition Mean to Two Other Condition Means

  • When computing the SCI, compare the average of the fruit and veggie group means to the donut group mean.

Contrast Weights

  • Contrast weights, also known as effects coding, are needed to incorporate desired means into the analysis.
  • Example: demonstrating that the donut group has a higher mean than the fruit and veggie groups combined. The SCI is expressed as a sum of weighted means:
    • SCI = (+0.5 \cdot mean_{fruit}) + (+0.5 \cdot mean_{veggie}) + (-1 \cdot mean_{donut})
  • Coefficients determine which sample means are compared.
  • Means can be excluded by giving them a weight of 0.

Formulas with Coefficients

  • SCI is equal to the sum of weighted means. Positive weights are compared against negative weights.
  • SCI= \Sigma(coefficient \cdot mean)
  • SS_{comparison} = \frac{n \cdot SCI^2}{\Sigma coefficient^2}
    * Still a single degree of freedom comparison, so SS = MS
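A minimal sketch of the contrast formulas, assuming equal n per group. The fruit and veggie means in the usage example come from the notes, but the donut mean of 4.1 is hypothetical, since the notes do not report it:

```python
def contrast_f(means, weights, n, ms_within):
    """F ratio for a single-df planned contrast (equal n per group)."""
    sci = sum(w * m for w, m in zip(weights, means))  # weighted sum of means
    ss = n * sci**2 / sum(w**2 for w in weights)      # SS for the contrast
    return ss / ms_within                             # single df, so MS == SS

# Donut mean (4.1) is hypothetical; weights: fruit + veggie combined vs. donut.
f = contrast_f([3.0, 3.2, 4.1], [0.5, 0.5, -1], n=20, ms_within=1.096)
```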

F Computation for Complex Comparisons

  • F = \frac{MS_{comparison}}{MS_{within}}
  • Critical value remains at about 4.02 (single degree of freedom).
  • Convert F to T by taking the square root, useful for directional comparisons.

SPSS Output and Consistency

  • SPSS output shows contrasts, with fruit and veggie combination compared to the donut group in one contrast, and fruit group versus veggie group in another.
  • Specifying contrast weights is necessary in SPSS to compare means.
  • The signs dictate which groups are compared; the exact values don't matter as long as the contrast weights sum to zero.
  • SPSS T values match square roots of F ratios, leading to consistent conclusions.

Rules for Comparisons: Field

  • Sensible Comparisons: Have a plan driven by theory.
  • Positive vs. Negative Weights: Groups with positive weights are compared to those with negative weights.
  • Sum of Weights: The sum of the weights should equal zero.
  • Zero Weights: Applying a zero weight excludes a mean from the comparison.

Orthogonal Contrasts

  • Following comparison rules leads to orthogonal or independent contrasts.
  • Avoid reanalyzing the same variability portions to mitigate family-wise error.
  • Compare uncorrelated subsets of the variance; the outcome of one comparison should be unrelated to another.
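Orthogonality can be checked by taking the dot product of the weight vectors; for the two contrasts described in the SPSS section above, it is zero:

```python
def dot(w1, w2):
    """Dot product of two contrast-weight vectors."""
    return sum(a * b for a, b in zip(w1, w2))

# The two contrasts from the SPSS section, over (fruit, veggie, donut):
combined_vs_donut = [0.5, 0.5, -1]
fruit_vs_veggie = [1, -1, 0]
print(dot(combined_vs_donut, fruit_vs_veggie))  # 0.0 -> orthogonal
```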

Theoretical Sense and Oddball Circumstances

  • Comparisons should make theoretical sense.
  • Best practice is for comparisons to be orthogonal; if they are not, you need to justify why.
  • In rare cases, if you care about conditions even if the omnibus test isn't significant, follow-up comparisons are permissible.
  • Planned comparisons offer an advantage over post hoc tests in having a grounded plan.
  • You can avoid roadblocks by executing a grounded plan, even if the omnibus isn't significant.