ANOVA



53 Terms

1
New cards

A crucial precondition for any statement of causality is

random assignment in an experiment

2
New cards

Research questions about differences between more than two independent groups (between-subjects design):

• Cannot conduct multiple t-tests (Type I error inflation)

• Use analysis of variance = ANOVA

3
New cards

Analysis of variance characteristics:

• DV is interval/ratio scale

• IV is nominal scale (indicates group, AKA factor)

• ANOVA compares 3 (or more) groups

4
New cards

Hypotheses for 3 groups (k = 3):

H₀: μ₁ = μ₂ = μ₃

H₁: not all group means are equal (at least one μ differs)
5
New cards

Analysis of variance: what does it compare?

between-group variation (variance explained by Group) and within-group variation (unexplained/residual variance)

6
New cards

Test statistic: t-test formula

t = (x̄₁ − x̄₂) / SE, i.e. the difference in sample means divided by the standard error
7
New cards

Standard error:

Measure of variation in sample means

= standard deviation of the sampling distribution

= expected/ average difference under the null hypothesis

8
New cards

In ANOVA, why don’t we look at mean differences directly when comparing more than 2 groups?

Because with more than 2 groups there are multiple pairwise mean differences; instead we compare variances: the between-group variance against the within-group variance.

9
New cards

What is the general formula for the ANOVA test statistic?

F = observed variance in means / expected variance in means

10
New cards

What does "observed variance" refer to in ANOVA?

The variance between the group means, also called model variance: the variance explained by the differences between the groups.

11
New cards

What does "expected variance" refer to in ANOVA?

The variance within the groups, also called residual variance: the variance not explained by the differences between the groups.

12
New cards

How is variance measured in ANOVA?

By Mean Squares (MS).

13
New cards

What is the formula for the F-statistic in terms of Mean Squares?

F = MSₘ / MSᵣ, where:

  • MSₘ = Mean Square for the model (between-group variance)

  • MSᵣ = Mean Square for the residual (within-group variance)
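The F = MSₘ / MSᵣ computation can be sketched in pure Python. The scores and group names below are invented for illustration; they are not from the course data.

```python
# Hypothetical job-satisfaction scores in three groups (all numbers invented).
groups = {
    "free_lunch": [5.0, 6.0, 5.5, 6.5],
    "flex_home":  [7.0, 7.5, 8.0, 6.5],
    "control":    [6.0, 6.5, 7.0, 6.5],
}

def one_way_anova(groups):
    """Return (F, df_model, df_residual) for a one-way between-subjects ANOVA."""
    all_scores = [x for g in groups.values() for x in g]
    n_total = len(all_scores)
    k = len(groups)
    grand_mean = sum(all_scores) / n_total

    # Between-group (model) sum of squares: variance explained by Group
    ss_model = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                   for g in groups.values())
    # Within-group (residual) sum of squares: unexplained variance
    ss_residual = sum((x - sum(g) / len(g)) ** 2
                      for g in groups.values() for x in g)

    df_model = k - 1            # k groups
    df_residual = n_total - k
    ms_model = ss_model / df_model          # MS_m
    ms_residual = ss_residual / df_residual # MS_r
    return ms_model / ms_residual, df_model, df_residual

F, df1, df2 = one_way_anova(groups)
```

With these invented scores the sketch gives F(2, 9) = 6.75, the same mechanics as the F-values reported in the later cards.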

14
New cards
term image

left

15
New cards

Assumptions of ANOVA

1. Random sample

2. Observations are independent

3. Dependent variable is at least interval scale

4. Dependent variable is normally distributed in each group

5. Homogeneity of variances (equality of within-group variances)

16
New cards

Dependent variable should be normally distributed in each group, with no outliers present. How to check?

• Check the “Raincloud plot” in JASP: the residuals should be normally distributed

• Check Q-Q plots of residuals (similar to regression)

17
New cards

Steps in ANOVA

1. Check assumptions (homogeneity of variances, normality)

2. Check significance of the factor

3. Determine effect size for significant factors

4. Check post-hoc tests for significant factors with > 2 levels

5. Report significant results and state conclusions

18
New cards

Homogeneity: Equality of Within-group Variances. What does it check?

Checking the homogeneity of variances assumption:

• We need to check if the spread/variance/standard deviation in each group is (roughly) the same

19
New cards

Homogeneity of variances: Step 1

Create side-by-side boxplots:

• Check if the IQR in each group is (roughly) the same

20
New cards

Homogeneity of variances: Step 2

Check Levene’s test for equality of variances in the output:

• If not significant, assume homogeneity

• Since the test is very strict, we use α = .01 instead of α = .05

Note: p-value > .01, hence the null hypothesis is not rejected → Assumption of homogeneity of variances is not violated, F(2, 44) = 1.22, p = .304.

21
New cards

Levene’s test doesn’t always work well. Problems if:

sample sizes differ a lot (between the groups)

samples are all small (Levene’s test may not indicate problems with variances when – in reality – the variances are different)

samples are very large (Levene’s test may indicate problems when there aren’t any)

22
New cards

Levene’s test: Alternatives. If samples are all small…

rely on boxplot

23
New cards

Levene’s test: Alternatives. If Levene’s test is significant and samples are not small:

Check if largest sample < 4× smallest sample

Check if largest variance < 10× smallest variance

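The two rules of thumb can be sketched as a quick check. The group sizes and variances below are invented for illustration.

```python
# Hypothetical group sizes and variances (invented numbers) to illustrate
# the two rules of thumb used when Levene's test is significant.
sizes = {"free_lunch": 15, "flex_home": 18, "control": 14}
variances = {"free_lunch": 1.2, "flex_home": 2.1, "control": 0.9}

def homogeneity_rules_of_thumb(sizes, variances):
    """Return (sizes_ok, variances_ok) per the two rules of thumb."""
    # Largest sample should be less than 4x the smallest sample
    sizes_ok = max(sizes.values()) < 4 * min(sizes.values())
    # Largest variance should be less than 10x the smallest variance
    variances_ok = max(variances.values()) < 10 * min(variances.values())
    return sizes_ok, variances_ok

sizes_ok, variances_ok = homogeneity_rules_of_thumb(sizes, variances)
```

If both checks pass, a significant Levene's test is usually not a practical problem for the ANOVA.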
24
New cards

Example: checking sample sizes for Levene’s test

knowt flashcard image
25
New cards
Effect of condition: Significance. When significant, what do I write?

Condition is significant, 𝐹(2, 44) = 6.26, 𝑝 = .004; hence job satisfaction differs between the different kinds of work contracts.

26
New cards

Effect size: in general and in ANOVA, and its formula

• Most-used measure of effect size is R-squared

• In ANOVA it is usually denoted eta-squared (𝜂²): 𝜂² = SSₘ / SSₜ
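Assuming the standard definition η² = SSₘ / (SSₘ + SSᵣ), a minimal sketch with invented sums of squares:

```python
def eta_squared(ss_model, ss_residual):
    """Eta-squared: proportion of total variance explained by the factor."""
    return ss_model / (ss_model + ss_residual)

# Invented sums of squares for illustration: SS_m = 4.5, SS_r = 3.0
eta2 = eta_squared(4.5, 3.0)  # 4.5 / 7.5 = 0.6
```

An η² of .60 would mean 60% of the variance in the dependent variable is explained by group membership.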
27
New cards

What is the purpose of post-hoc tests in ANOVA?

To find out which specific groups differ after finding a significant overall effect.

28
New cards

What are post-hoc tests also called?

Pairwise comparisons

29
New cards

What type of statistical test is used in post-hoc comparisons?

t-tests with adjusted p-values

30
New cards

Why do we adjust p-values in post-hoc tests?

To correct for multiple testing and avoid Type I error inflation (false positives).

31
New cards

What is a Type I error (α error)?

The probability of getting a significant result when the null hypothesis is actually true (false positive).

32
New cards

What is a Type II error (β error)?

The probability of not getting a significant result when the null hypothesis is false (false negative).

33
New cards

What is the relationship between β error and power?

β error = 1 − power, so increasing power reduces the chance of a Type II error.

34
New cards

What is the relationship between power and Type I error (α)?

Power is not directly related to Type I error, but both depend on the significance level (α):

  • Power = the probability of correctly rejecting a false null (1 − β)

  • α (Type I error) = the probability of incorrectly rejecting a true null

  • Lowering α reduces the risk of a Type I error, but also tends to lower power, increasing the chance of a Type II error.

  • Raising α increases power, but also increases the risk of a Type I error.

35
New cards

What happens to the overall Type I error rate as the number of tests increases?

The overall α increases, meaning the chance of making at least one false positive increases with more tests.

36
New cards

What is the overall α for 1, 2, and 3 tests with α = .05 per test?

  • 1 test → overall α = .05

  • 2 tests → overall α ≈ .0975

  • 3 tests → overall α ≈ .1426

37
New cards

What is the probability of making at least one incorrect decision across 3 tests if H₀ is true?

1 − (.95³) = .1426
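The overall α values from the two cards above can be reproduced with a short sketch (pure Python, no external libraries):

```python
def familywise_alpha(alpha_per_test, n_tests):
    """P(at least one false positive across n independent tests when H0 is true):
    1 - (1 - alpha)^n."""
    return 1 - (1 - alpha_per_test) ** n_tests

# Overall alpha for 1, 2, and 3 tests at alpha = .05 per test
rates = [round(familywise_alpha(0.05, m), 4) for m in (1, 2, 3)]
# rates -> [0.05, 0.0975, 0.1426]
```

This is exactly why multiple uncorrected t-tests inflate the Type I error rate.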

38
New cards

Why do we need post-hoc correction in ANOVA?

To control the overall Type I error rate when making multiple comparisons.

39
New cards

What are three types of post-hoc corrections?

  • Bonferroni: for small number of groups

  • Tukey’s HSD: for large number of groups

  • LSD: no correction (only when comparing two groups)
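As a sketch of the Bonferroni idea, each raw p-value is multiplied by the number of tests and capped at 1. The raw p-values below are invented for illustration.

```python
def bonferroni_adjust(p_values):
    """Bonferroni-adjusted p-values: multiply each by the number of tests,
    capped at 1."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Hypothetical raw p-values from 3 pairwise comparisons (invented):
adjusted = bonferroni_adjust([0.004, 0.036, 0.20])
```

Equivalently, one can keep the raw p-values and compare them against α / m; JASP reports the adjusted p-values directly.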

40
New cards

What is the trade-off of controlling Type I error in post-hoc testing?

Reducing Type I error also reduces statistical power, making it harder to detect true effects.

41
New cards

interpret this

• On average, employees that receive free lunch but work every day from the office rate their job satisfaction 1.56 points lower than employees with flexible home days. This mean difference is significant (𝑝 = .005).

• On average, employees that receive free lunch but work every day from the office rate their job satisfaction 1.24 points lower than those with a regular contract (1 home day and 4 at the office). This mean difference is significant (𝑝 = .036).

42
New cards

What are post-hoc tests used for in ANOVA?

They are used to compare all possible pairs of groups after finding a significant effect. They use adjusted p-values to correct for multiple testing and reduce Type I error risk.

43
New cards

What are contrasts used for in ANOVA?

They are used to test specific hypotheses about group differences defined before the analysis.

44
New cards

Why are contrasts often more powerful than post-hoc tests? When should you use contrasts instead of post-hoc tests?

Because they involve fewer comparisons and usually don’t require correction for multiple testing.

When you have theoretical expectations or specific comparisons you want to test.

45
New cards

How can ANOVA be interpreted in terms of regression?

ANOVA can be seen as a regression model with dummy variables for categorical predictors.

46
New cards

In the regression model 𝑦̂ = b₀ + b₁FL + b₂FH, what does b₀ represent? and b1?

b₀ is the intercept, representing the mean of the control group (reference category). b₁ is the difference in means between the Free Lunch (FL) condition and the control group.

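A minimal sketch of how the dummy coding recovers the group means; the coefficient values below are invented for illustration.

```python
# Dummy-coded regression from the card: control is the reference category,
# FL and FH are 0/1 indicator variables. Coefficients are invented.
b0, b1, b2 = 6.5, -1.24, 0.32

def predict(FL, FH):
    """y-hat = b0 + b1*FL + b2*FH"""
    return b0 + b1 * FL + b2 * FH

mean_control = predict(FL=0, FH=0)  # = b0
mean_FL = predict(FL=1, FH=0)       # = b0 + b1
mean_FH = predict(FL=0, FH=1)       # = b0 + b2
```

Each coefficient beyond the intercept is thus a mean difference from the reference group, which is why ANOVA and dummy-variable regression give the same model.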
47
New cards

How to interpret Bayes factor?

knowt flashcard image
48
New cards

interpret

The data are 11 times more likely under the alternative hypothesis than under the null hypothesis

• Support for “effect of condition” much larger than for “no effect”

• Post-hoc testing needed to see which conditions differ from each other

49
New cards

What does it mean if Posterior Odds < Prior Odds?

The data provides support for the null hypothesis (H₀) → Bayes Factor (BF) < 1.

50
New cards

What does the "U" in BF₁₀, U mean?

It stands for uncorrected Bayes Factor — not adjusted for multiple comparisons.

51
New cards

How is multiple testing handled in Bayesian post-hoc analysis?

Posterior odds are corrected by fixing the prior probability that H₀ is true across all comparisons.

52
New cards

Odds (Bayes factor): interpretation

Odds = 1: Chance of H0 and H1 are equal

Odds > 1: Chance of H1 larger than H0

Odds < 1: Chance of H1 smaller than H0
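The odds interpretation above follows the standard identity posterior odds = Bayes factor × prior odds. A minimal sketch, reusing the BF of 11 from the earlier interpretation card:

```python
def posterior_odds(bayes_factor, prior_odds=1.0):
    """Posterior odds for H1 over H0: BF10 times the prior odds."""
    return bayes_factor * prior_odds

# BF10 = 11 with equal prior odds: posterior odds favor H1 by 11 to 1
odds = posterior_odds(11.0)
```

With equal prior odds (1:1), the posterior odds equal the Bayes factor itself; a BF below 1 shifts the odds toward H0.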

53
New cards

interpret

The data strongly supports the idea of a difference between FL and FH

The data supports the idea of a difference between FL and C

The data does not support the idea of a difference between FH and C