Analysis of Variance (ANOVA)

34 Terms

1

ANOVA (F = t²)

Hypothesis testing procedure used to evaluate mean differences between two or more treatment groups

  • Provides greater flexibility in interpreting results

Uses sample data to draw conclusions about populations

  • the goal is to determine whether the mean differences observed in the samples provide enough evidence to conclude that there are mean differences among the populations
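
Because the term notes that F = t², here is a minimal sketch (made-up data, assuming SciPy is available) showing that for two groups the one-way ANOVA F statistic equals the square of the independent-samples t statistic:

```python
# Minimal sketch with made-up data: for two groups, the one-way ANOVA
# F statistic equals the square of the independent-samples t statistic.
from scipy import stats

group_a = [3, 5, 4, 6, 5]
group_b = [7, 8, 6, 9, 8]

t, p_t = stats.ttest_ind(group_a, group_b)  # independent-samples t-test
f, p_f = stats.f_oneway(group_a, group_b)   # one-way ANOVA with k = 2

print(f"t^2 = {t ** 2:.4f}, F = {f:.4f}")   # these two values match
print(f"p from t-test = {p_t:.4f}, p from ANOVA = {p_f:.4f}")  # so do the p-values
```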

2

t-tests and treatment groups

t-tests are limited to situations in which there are only two treatments to compare

3

ANOVA Interpretations

  1. No real difference between the populations (or treatments)

  2. The populations (or treatments) have different means that cause systematic differences between the sample means

4

Independent variable

the variable manipulated by the researcher to create the treatment conditions in an experiment

5

Quasi-independent variable

A non-manipulated variable used by the researcher to designate groups

  • age and gender are examples of these variables, since people are not randomly assigned to them

6

What is an ANOVA independent or quasi-independent variable called?

a factor

7

Levels of factors

levels of the independent variable

  • the individual groups or treatment conditions that are used to make up a factor

8

How does the number of factors determine the type of ANOVA?

One factor = One-way ANOVA

Two factors = Two-way ANOVA (or two-factor design)

N factors = n-way ANOVA

9

What designs can a one-way ANOVA be used with?

either an independent-measures or a repeated-measures design

10

Factorial ANOVAs (factorial designs)

ANOVAs with 2 factors or more

11

ANOVA Hypotheses

Null H0: states there are no differences (the population means are all the same)

  • ex: μ1 = μ2 = μ3 (study with 3 conditions)

Alternative H1: states that the population means are not all the same (there is a real treatment effect)

  • there is at least 1 mean difference among the populations

12

standard error

measures how much difference is expected between 2 sample means if there is no treatment effect

  • if H0 is true

13

Role of variance in ANOVA

used to measure sample mean differences when there are 2 or more samples

14

F-ratio ANOVA

based on variance rather than the sample mean difference that the t statistic uses

  • variance between sample means / variance expected with no treatment effect

15

Type I errors

For every hypothesis test, an alpha level is selected that determines the risk of a type I error

  • ex: alpha = .05 = 5% risk of type I error

16

Testwise alpha level

the alpha level selected for each individual hypothesis test

17

Experiment-wise alpha level

The total probability of a type I error accumulated from all of the separate tests in the experiment

  • when the number of separate tests increases, so does the ____ alpha level

  • controlling it keeps the overall probability of making a type I error across all of the tests at alpha
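
As a rough worked example, if the separate tests are independent, the experiment-wise alpha is approximately 1 - (1 - α)^c for c tests; three tests at α = .05 give 1 - (.95)³ ≈ .14, well above the testwise .05.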

18

ANOVA's goal

to measure the amount of variability (the size of the differences)

  • to explain why the scores are different

19

What is the total variability of an entire data set

The combination of all scores from all the separate samples

20

Analysis of variance

"Analysis" means dividing into smaller parts

  • breaking apart the total variability into separate components (2 basic components)

21

2 basic components of total variability

  1. Between-treatments variance: differences between treatment conditions

    • measuring the variance between treatments measures the overall differences between treatment conditions, i.e., the differences between the sample means

    • VARIANCE BETWEEN TREATMENTS IS REALLY MEASURING DIFFERENCES BETWEEN SAMPLE MEANS

  2. Within-treatment variance: the variability within each sample

    • inside each treatment condition
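
Together, these two components account for all of the total variability: SS_total = SS_between + SS_within, and likewise df_total = df_between + df_within.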

22

Between Treatments Variance

Measures how much difference exists between treatment conditions

  • the differences can be explained in two ways:

    1. they are not caused by any treatment effect and are simply random

    2. they are caused by the treatment effects

Differences are caused either by systematic treatment effects or by random, unsystematic factors

23

Within Treatments Variance

Difference that exists within each treatment/sample

  • this difference represents randomness with no treatment effects causing it to occur

Difference caused by random, unsystematic factors

24

F-ratio

The test statistic for ANOVA

  • the comparison of the two basic components of total variability

Variance between treatments / variance within treatments
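
A minimal sketch (made-up data, assuming SciPy is available) that computes this ratio by hand from the between- and within-treatments variances and checks it against scipy.stats.f_oneway:

```python
# Minimal sketch with made-up data: partition total variability into
# between-treatments and within-treatments components, form the F-ratio
# by hand, and check it against SciPy's one-way ANOVA.
from scipy import stats

treatments = [[4, 6, 5, 7], [8, 9, 7, 10], [3, 2, 4, 3]]  # k = 3 conditions, n = 4 each

k = len(treatments)
n = len(treatments[0])
N = k * n
T = [sum(t) for t in treatments]            # treatment totals
G = sum(T)                                  # grand total of all N scores

ss_between = sum(Ti ** 2 / n for Ti in T) - G ** 2 / N
ss_within = sum(sum(x ** 2 for x in t) - sum(t) ** 2 / n for t in treatments)

ms_between = ss_between / (k - 1)           # variance between treatments
ms_within = ss_within / (N - k)             # variance within treatments (error term)
f_by_hand = ms_between / ms_within

f_scipy, p = stats.f_oneway(*treatments)
print(f"F by hand = {f_by_hand:.4f}, F from SciPy = {f_scipy:.4f}")  # they agree
```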

25

What does an F value near 1 mean?

there are no systematic treatment effects

  • the differences between treatments are entirely caused by random unsystematic factors

26

F value that is large

evidence that systematic treatment effects exist

  • the numerator should be substantially larger than the denominator, which measures only the random differences

27

Error term

the denominator of the f-ratio

  • because it only measures random & unsystematic variability

28

ANOVA Notation and Formulas

k = # of treatment conditions/# of lvls of the factor

n = # of scores in each treatment

N = total # of scores in entire study

T = the sum of the scores (ΣX) for each treatment (identified with subscripts)

G = the sum of all the scores in the research study

  • G = ΣT (all N scores added up, or all of the treatment totals added up)
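
For reference, the standard computational formulas in this notation are:

  • SS_total = ΣX² - G²/N

  • SS_between = Σ(T²/n) - G²/N

  • SS_within = the sum of the SS inside each treatment = SS_total - SS_between

  • df_between = k - 1, df_within = N - k, df_total = N - 1

  • MS_between = SS_between / df_between, MS_within = SS_within / df_within

  • F = MS_between / MS_within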

29

Effect size

Tells us…

  1. Magnitude of the effect (how big the difference or relationship is)

  2. Practical significance (not just statistical)

  3. Helps compare results across studies (especially in meta-analysis)

  4. Useful for planning studies (e.g., sample size via power analysis)

30

p-values

tells whether an effect is statistically significant (whether the observed effect is unlikely to be due to chance alone)

31

Small effect

might be meaningful if it impacts people, is cost-effective to implement, or has few side effects

32

Large effect

might not be meaningful if it is too expensive, has harmful trade-offs, or is already known

33

Cohen’s d (means)

As d increases from small to large, it indicates how well whatever is being tested works

  • ex: d = 0.1 means it barely works; d = 0.7 means it works quite well
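
For reference, Cohen's d for two means is d = (M1 - M2) / s_pooled, and Cohen's conventional benchmarks are roughly 0.2 (small), 0.5 (medium), and 0.8 (large).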

34

Post Hoc Tests

statistical analyses conducted after a significant ANOVA result to determine which specific groups or conditions are significantly different from each other

  • to determine which sample mean difference is large enough to be statistically significant
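
Tukey's HSD and the Scheffé test are common post hoc tests. A minimal sketch (made-up data, assuming the statsmodels library is installed) running Tukey's HSD:

```python
# Minimal sketch with made-up data: after a significant one-way ANOVA,
# run Tukey's HSD to see which pairs of treatment conditions differ.
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = [4, 6, 5, 7, 8, 9, 7, 10, 3, 2, 4, 3]   # all N scores
groups = ["A"] * 4 + ["B"] * 4 + ["C"] * 4       # treatment label for each score

result = pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05)
print(result.summary())  # one row per pair: mean difference, adjusted p, reject H0?
```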