t-tests
used when you want to compare two groups on some criteria
It's inefficient and time-consuming, and every t-test you run adds another chance of committing a type I error, inflating the overall error rate
Why can't we use multiple t-tests to compare more than 2 groups?
0.95^x
x = the number of t-tests you would need to run
How to calculate the chances of NOT making a type I error
1 - (chances of NOT making a type I error) = the probability of a type I error
How to calculate the chances of making a type I error
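The inflation above can be checked numerically; a minimal sketch in Python, assuming alpha = .05 and (for simplicity) independent tests:

```python
# Familywise error rate: P(at least one type I error) across several t-tests,
# assuming each test uses alpha = .05 and the tests are independent.
def familywise_error(num_tests, alpha=0.05):
    # 1 - (chance of NOT making a type I error on every test)
    return 1 - (1 - alpha) ** num_tests

# Comparing 3 groups pairwise takes 3 t-tests:
print(familywise_error(3))   # 1 - 0.95**3, noticeably above .05
```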
at least three
How many levels are required for an ANOVA
One-way ANOVA
There is one independent variable with more than two groups
Two-way ANOVA
There are two independent variables, each with two or more levels
Repeated Measures ANOVA
The same participants take part in each of the 3+ levels of the IV
Mixed ANOVA
There are one or more between-subject variables and one or more within subject variables
MANOVA
There is more than one dependent variable
Between-subjects ANOVA
compares means of three or more different groups of people, measured at the same time
only one IV (with at least three levels)
What does one-way refer to?
participants are randomly assigned to one group
What does between subjects refer to?
*same as the t-test
1. homogeneity of variance
2. groups are independent
3. normally distributed and continuous DV
Assumptions of the One-Way Between Subjects ANOVA
Not always, control groups may or may not be necessary depending on the research question(s)
Is there always a control group in a One-Way Between Subjects ANOVA?
When H0 is rejected:
- H0: all groups are equal to each other
- Reject H0 when at least two of the groups are different
When do you perform post hoc tests (f ratio testing)
post hoc tests
tell us where the differences lie, to follow up a significant overall (omnibus) F ratio
As many as there are levels of the IV
How many alternative hypotheses are there in a one-way between subjects ANOVA?
Analysis of variance
What does ANOVA stand for
F statistic
compares the amount of variability between groups against the amount of variability within each group
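As a quick illustration, SciPy's `f_oneway` computes this ratio for independent groups; the scores below are made up:

```python
from scipy import stats

# Hypothetical scores for three independent groups (made-up data)
g1 = [4, 5, 6, 5]
g2 = [7, 8, 9, 8]
g3 = [4, 4, 5, 5]

# F = between-group variability / within-group variability
f_stat, p_value = stats.f_oneway(g1, g2, g3)
print(round(f_stat, 2), round(p_value, 4))
```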
between group variance (SSB)
the distance or deviation of the group means from the grand mean
within group variance (SSW)
- similar to pooled variances
the distance or deviation of raw scores from their group mean
Ronald Fisher, who also set the conventional alpha value (.05)
Who is the F statistics named after?
SS/df
mathematical representation for variance (a mean square); squaring highlights the differences among our groups
No, the F stat. is only one-tailed
Is the F stat. two tailed?
Greater than 1, and larger than the critical value
- it cannot be negative, and an F near 0 or 1 means there is little or no difference among the groups, which would not be significant
To be considered to being significant, what should the F stat. be?
SST = SSB + SSW
mathematical representation for total variance
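The partition SST = SSB + SSW can be checked directly on toy data (the scores below are made up), along with F = MSB/MSW:

```python
# Toy data: three groups of hypothetical scores (made up for illustration)
groups = [[4, 5, 6, 5], [7, 8, 9, 8], [4, 4, 5, 5]]

all_scores = [x for g in groups for x in g]
grand_mean = sum(all_scores) / len(all_scores)      # mean of all data points

# SSB: deviation of each group mean from the grand mean, weighted by group size
ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)

# SSW: deviation of each raw score from its own group mean
ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

# SST: deviation of each raw score from the grand mean
sst = sum((x - grand_mean) ** 2 for x in all_scores)

print(abs((ssb + ssw) - sst) < 1e-9)   # True: SST = SSB + SSW

# F = MSB / MSW, where each mean square is SS/df
k, n_total = len(groups), len(all_scores)
msb = ssb / (k - 1)          # df between = k - 1
msw = ssw / (n_total - k)    # df within = N - k
f_stat = msb / msw
print(round(f_stat, 2))
```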
Grand mean
mean for all our data points
We want the SSB to be bigger than the SSW
which value do we want to be large and which smaller?
SSB
represents IV
SSW
represents random error
k
number of groups or levels of the IV
n1, n2, n3...
number of people in each group
N
the total number of people
ΣXij
the sum of scores for each group
ΣXi
the total sum of scores from all groups
ΣXi²
the total squared sum of scores (all groups)
k-1
number of groups - 1
N-k
total number of participants - groups
N-1
total number of participants - 1
dfn/dfd from our source table
What do we use to find the Fcrit. on the F table
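A sketch of looking up Fcrit programmatically with SciPy's F distribution instead of a printed F table (the df values below are assumed example numbers):

```python
from scipy import stats

# Example: alpha = .05, 3 groups, 12 participants total
k, N, alpha = 3, 12, 0.05
dfn = k - 1     # df numerator (between)
dfd = N - k     # df denominator (within)

# Fcrit: the value F_obs must exceed to be significant (one-tailed)
f_crit = stats.f.ppf(1 - alpha, dfn, dfd)
print(round(f_crit, 2))
```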
high;low
When looking at our source table, we want between group variance to be _ and within group variance to be _ to produce significant results
effect size and statistical power
decreasing within-group variance increases...
To ensure that changes in the IV are responsible for changes in the DV
Why do we want small within group variance?
more
less variance and overlap between samples = _ significant results
less
more variance and more overlap between samples = _ significant results
.01 = small effect size
.06 = medium effect size
.14 = large effect size
What are the values for partial eta squared and what do they mean?
Partial eta squared
tells us how much variability (variance) can be accounted for by an independent variable
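For a one-way ANOVA, partial eta squared reduces to SSB / (SSB + SSW); a minimal sketch with made-up sums of squares:

```python
# Hypothetical sums of squares (made-up numbers for illustration)
ssb, ssw = 28.67, 5.0

# Partial eta squared: proportion of variance accounted for by the IV
partial_eta_sq = ssb / (ssb + ssw)
print(partial_eta_sq > 0.14)   # True: above the .14 cutoff, a large effect
```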
Because Cohen's D only works for 2 groups
Why can't we use Cohen's D for effect size in the one-way ANOVA
When we have significant results
When do we run Post Hoc tests?
1) All three groups are different
2) One condition differs from the other two
3) Two conditions differ from the other one
Three possible ways groups could be different from one another as determined by Post Hoc Tests?
Yes!
Are all groups included in the alternative hypothesis?
Tukey's Honestly Significant Difference (HSD)
investigates multiple comparisons of means when the F is significant; if it is, we use the HSD formula
q in HSD
the studentized range value, based on the level of significance and the total number of groups being compared
decreased (easier to detect differences); increased (harder to detect differences)
For HSD: increased n = _ standard error and decreased n = _ standard error
Type I error
Tukey's HSD helps us find more conservative findings which combat against...
1) Arrange the means from small to largest
2) Subtract the means from each other (largest-smallest)
3) Compare the HSD value to the mean difference for the groups
What are the three rules of Post Hoc difference table?
significant results (compare each mean group with HSD)
If mean difference for the group is greater than HSD, we have what results?
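The three rules above can be sketched in Python, using SciPy's studentized range distribution for q; all group values below (k, n, df, MSW, means) are made up:

```python
import math
from scipy.stats import studentized_range

# Hypothetical values: 3 groups of 4, within-group df = 9, MSW = .556
k, n, df_within, msw, alpha = 3, 4, 9, 0.556, 0.05

# q: critical value of the studentized range for k groups at this alpha
q_crit = studentized_range.ppf(1 - alpha, k, df_within)

# HSD = q * sqrt(MSW / n); assumes equal group sizes
hsd = q_crit * math.sqrt(msw / n)

# 1) arrange means smallest to largest, 2) take pairwise differences,
# 3) compare each difference to HSD
means = sorted([4.5, 5.0, 8.0])
for i in range(len(means)):
    for j in range(i + 1, len(means)):
        diff = means[j] - means[i]
        print(f"{means[i]} vs {means[j]}: diff={diff:.2f}, significant={diff > hsd}")
```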
SSerror (random error)
SSw=