lord help me
What is the IV also called in an ANOVA?
What are conditions/groups also called in ANOVAs?
The independent variable is also called a factor
Each condition/group of the independent variable is called a level
Distributions of one-way ANOVA
The hypotheses are non-directional (two-tailed), BUT
We always test in one tail, because F values can never be negative
The region of rejection always lies in the upper right-hand tail of the F distribution
What are the two types of ANOVA, and what is the difference between them?
Within (dependent) groups ANOVA: used for dependent designs (repeated measures or matched)--- compares means from the same subjects measured multiple times
Between (independent) groups ANOVA: used for independent designs; compares means of different subjects in different groups, focusing on variation between these distinct groups
When would you use an ANOVA rather than a t-test?
You use an ANOVA rather than a t-test when you are comparing the means of 3 or more groups
Experiment-wise error rate:
the probability of making one or more Type I errors across multiple statistical tests conducted within a single experiment
Why not use multiple t-tests instead of ANOVA if we have more than 2 conditions of the IV?
The more tests we run on the same data set, the greater the probability of a Type I error (false positive)
With α = 0.05 there is a 5% chance of committing a Type I error on each individual test, but across three tests those chances compound to roughly a 14% chance of making at least one Type I error (1 - (1 - .05)^3 ≈ .14)
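A quick arithmetic sketch of that compounding (the figure comes from 1 - (1 - α)^k rather than simple multiplication):

```python
# Experiment-wise (familywise) Type I error rate across k tests, each at alpha = .05
alpha = 0.05
for k in (1, 2, 3):
    familywise = 1 - (1 - alpha) ** k   # probability of at least one Type I error
    print(k, round(familywise, 3))      # k = 3 gives ~0.143, i.e. the ~14% figure above
```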
Assumptions of One-Way ANOVA
One IV with 3+ independent conditions
DV normally distributed & interval or ratio
Homogeneity of variance (or sphericity for dependent ANOVAs)
Sphericity:
The assumption that the variances of the differences between all the combinations of pairs of groups are equal.
What does a statistically significant Fobt tell us about the differences between our conditions?
A significant F test means that somewhere among the means at least two of them differ significantly. It does NOT indicate which specific means differ significantly
When the F-test is significant, we perform post hoc comparisons
Performing post-hoc comparisons in one way ANOVA: when/why are they done?
Post-hoc comparisons are done when you find a significant result in an ANOVA to find out which means differ from each other (ANOVA only tells you that there is at least one significant difference somewhere between means, but not which means it is between)
It is unnecessary to perform post hoc comparisons for an F-test involving only two means (the significant F already tells you which two means differ)
Fisher’s Least Significant Difference (LSD) test
A commonly used post hoc test (one way anova) that computes the smallest amount that group means can differ in order to be significant
Compares only two groups at a time
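A minimal sketch of the LSD computation for one pair of groups, using the usual formula LSD = t_crit * sqrt(MSW * (1/n1 + 1/n2)); the MS_within, group sizes, and df here are made-up values for illustration:

```python
from math import sqrt
from scipy import stats

# Hypothetical one-way ANOVA: 3 groups of 10 scores each, so df_within = N - k = 27
ms_within, n1, n2, df_within, alpha = 4.5, 10, 10, 27, 0.05

t_crit = stats.t.ppf(1 - alpha / 2, df_within)       # two-tailed critical t at df_within
lsd = t_crit * sqrt(ms_within * (1 / n1 + 1 / n2))   # smallest mean difference that is significant
print(round(lsd, 3))  # any pair of group means differing by more than this differs significantly
```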
Between-Group (treatment) Variance:
variation due to the treatment or factor being tested. (difference between group means)
You want greater between-groups variance (the larger it is, the greater the chance of significance)
The variability in scores created by different conditions (think different levels of an IV).
(Between-groups variance = treatment variance + error variance)
Within-Group (error) Variance:
random error or unexplained variability within each group. (variability within each condition/group)
You want lower error variance
Less variability → greater chance for significance
This variability is created by the individual differences of each person, which result in slight differences in scores even under the same conditions
What is the F statistic?
The F-statistic is the ratio of between-groups variance to within-groups variance:
F = MS(Between) / MS(Within), indicating how much larger the differences between group means are compared to the natural spread of scores within the groups
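A minimal sketch of that ratio computed by hand for three made-up groups, with scipy's built-in one-way ANOVA as a cross-check:

```python
import numpy as np
from scipy import stats

# Three hypothetical independent groups (three levels of one IV)
groups = [np.array([4, 5, 6, 5]), np.array([7, 8, 6, 7]), np.array([9, 8, 10, 9])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
k, N = len(groups), all_scores.size

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)  # treatment variation
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)             # error variation

ms_between = ss_between / (k - 1)   # df_between = k - 1
ms_within = ss_within / (N - k)     # df_within  = N - k
F = ms_between / ms_within

print(F, stats.f_oneway(*groups).statistic)  # the two F values should match
```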
The F score will increase if:
The between-groups variance increases
The within-groups variance decreases
F ratio when H0 is true: MSB ≈ MSW, and thus F ≈ 1.
F ratio when H0 is false: MSB will be larger than MSW, and thus F > 1.
How to find Fcrit values using Ftable
Find dfB = k - 1, where k = # of groups
Use dfB as the column number (across the top of the table)
Find dfW = N - k, where N = total # of scores across all groups
Use dfW as the row number (down the side of the table)
If Fobt > Fcrit, the result is statistically significant: reject the null
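Instead of reading the printed F table, the same critical value can be pulled from the F distribution directly; a sketch for a hypothetical study with k = 3 groups and N = 30 total scores at α = .05:

```python
from scipy import stats

k, N, alpha = 3, 30, 0.05
df_between = k - 1      # the column of the F table
df_within = N - k       # the row of the F table

f_crit = stats.f.ppf(1 - alpha, df_between, df_within)
print(round(f_crit, 2))  # reject H0 whenever Fobt exceeds this value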
Sum of squares:
measures how widely a set of data points is spread out from the mean. (variation)
Two-way ANOVA:
comparing two independent variables (factors), each with 2 or more levels (e.g., a 2 x 2 design)
Calculate 3 Fs (factor A, factor B, interaction)
One-way ANOVA:
one independent variable comparing 3+ levels
Only calculates 1 F
Within (dependent) groups ANOVA:
used for dependent designs (repeated measures or matched)--- compares means from the same subjects measured multiple times
Partial eta squared (η²partial) is used to assess the effect size of a dependent-groups ANOVA
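The standard formula, which these notes don't spell out (so treat it as an added reference), is:
η²partial = SS(effect) / (SS(effect) + SS(error))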
Between (independent) groups ANOVA:
used for independent designs—compares means of different subjects in different groups, focusing on variation between these distinct groups
Factorial design:
A design used to examine how two or more variables (factors) predict or explain an outcome.
Factor:
A predictor variable in a correlational design or an IV in an experiment or quasi-experiment.
Three major advantages of a multifactor design over a single-factor design?
(finish this one)
Simultaneously test the unique effects of multiple IVs (quicker/more efficient)
Test if the IVs interact with each other to affect the DV
Assumptions of two-way between-subject (independent) ANOVA:
2 factors, each with 2 or more conditions
Groups are independent
DV has normal dist and is interval/ratio
Homogeneity of variance
Cell in a two-way ANOVA:
One specific combination of a level of one factor with a level of the other factor.
(The different conditions of the study, e.g., the combination of the two levels a participant receives)
Marginal Means:
the means for that variable averaged across every level of the other variable
Main effect:
The effect of only one factor (IV) on the DV, controlling for the effect of all other factors (IVs)--- how one variable predicts or affects the outcome
Examining a main effect is like running a study with just that one IV
There are as many main effects as there are factors/ IVs (1 main effect per factor/IV)
Collapse across a factor:
averaging together all scores from all levels of that factor (IV)
Interactions:
When the relationship between one IV and the DV depends on the level of a second IV
The joint effect of 2 IVs on the DV
How do you know if there's an interaction?
If the lines in the line graph are NOT parallel, then there is a good chance that there IS an interaction
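A small numeric sketch (made-up cell means for a hypothetical 2 x 2 design) showing how marginal means come from collapsing across a factor and how non-parallel lines signal a likely interaction:

```python
import numpy as np

# Hypothetical cell means: rows = levels of factor A, columns = levels of factor B
cell_means = np.array([[10.0, 12.0],
                       [11.0, 18.0]])

marginal_A = cell_means.mean(axis=1)  # collapse across B: average each row
marginal_B = cell_means.mean(axis=0)  # collapse across A: average each column

# Effect of B at each level of A; unequal differences = non-parallel lines = likely interaction
effect_of_B_at_A1 = cell_means[0, 1] - cell_means[0, 0]
effect_of_B_at_A2 = cell_means[1, 1] - cell_means[1, 0]
print(marginal_A, marginal_B, effect_of_B_at_A1, effect_of_B_at_A2)  # 2.0 vs 7.0: not parallel
```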
In a two-way ANOVA, the values of n and the values of k must be the same for each factor?
False: n is the # of scores in whatever sample you are looking at, and k is the # of levels
k = # of levels
n = # of scores in a sample
N = total # of scores (all samples added up)
What does FAxB signify and what does a significant value of FAxB indicate? Does it also indicate that FA and FB are significant?
FAxB is the F statistic for the interaction; a significant FAxB indicates a significant interaction between the two factors. It does not indicate that FA and FB are also significant, because each factor on its own isn't guaranteed to have a main effect. It could be that the factors are only significant in their interaction and not independently
To find Fcrit, which df values do we use?
Fcrit for IV1(df1, dfw)
Fcrit for IV2 (df2, dfw)
Fcrit for the interaction (dfinteraction, dfw)
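A sketch of those three lookups for a hypothetical 2 x 3 between-subjects design with 5 scores per cell (so N = 30 and df_within = N minus the number of cells):

```python
from scipy import stats

a, b, n, alpha = 2, 3, 5, 0.05   # levels of factor A, levels of factor B, scores per cell
N = a * b * n
df_A, df_B = a - 1, b - 1
df_AxB = df_A * df_B
df_within = N - a * b            # error df for a two-way between-subjects ANOVA

for name, df in (("A", df_A), ("B", df_B), ("AxB", df_AxB)):
    print(name, round(stats.f.ppf(1 - alpha, df, df_within), 2))  # Fcrit for each effect
```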
Is it necessary to compute post-hoc comparisons for significant Fobt’s in a 2 x 2 design?
If there is a significant main effect for one or both IVs but not a significant interaction, then no (post hoc tests are only needed if one or more of the IVs has 3+ levels)
If there is a significant interaction, then yes a post hoc is needed
Single-sample z-test:
finding z-scores, pop. std (σ) is known, pop. mean (μ) is known
Sampling distribution of the means
Single-sample t-test:
Pop. SD (σ) is not known; sample mean (x̄) and sample SD (s) are known
Compares a sample mean to a known population mean
Uses the sampling distribution of the mean, with s estimating σ
Independent-samples (between-groups) t-test:
Used to test differences between means in a study with two independent groups
Comparing 2 independent samples (different, unrelated individuals)
Both sample means and SDs (s) are known
Uses the sampling distribution of the difference between two means
Determines whether the IV has an effect on the DV
Dependent-samples (within-group) t-test:
Used when we have two sample means from two related samples. — Pair each score in one sample with a particular score in the other sample
matched pairs or repeated measures
Matched pairs design t-test:
person in one condition matched on a characteristic with a person in the other condition --- The pair is randomly assigned to different conditions
Repeated measures design t-test:
each participant is tested under multiple conditions of the IV---randomly assigned to the order of conditions
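A compact sketch (made-up scores) of how the three t-tests above map onto scipy calls:

```python
import numpy as np
from scipy import stats

sample  = np.array([5.1, 4.8, 5.5, 5.0, 4.9, 5.3])  # one sample, known population mean mu = 5.0
group_a = np.array([5.1, 4.8, 5.5, 5.0, 4.9, 5.3])  # two unrelated groups of different people
group_b = np.array([5.8, 6.1, 5.6, 6.0, 5.9, 6.2])
pre     = np.array([5.1, 4.8, 5.5, 5.0, 4.9, 5.3])  # the same people measured twice
post    = np.array([5.6, 5.2, 5.9, 5.4, 5.5, 5.8])

print(stats.ttest_1samp(sample, popmean=5.0))  # single-sample t-test
print(stats.ttest_ind(group_a, group_b))       # independent-samples (between-groups) t-test
print(stats.ttest_rel(pre, post))              # dependent-samples (within-groups) t-test
```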
Within (dependent) groups ANOVA:
used for dependent designs (repeated measures or matched)--- compares means from the same subjects measured multiple times
Partial eta squared (η²partial) is used to assess the effect size of a dependent-groups ANOVA
Between (independent) groups ANOVA:
used for independent designs (independent groups not related)—compares means of different subjects in different groups, focusing on variation between these distinct groups
Two-way between-subjects ANOVA:
comparing two independent variables with multiple levels
Calculate 3 Fs (factor 1, factor 2, interaction)