what does ANOVA test for / what is its purpose?
ANOVA tests whether there is a statistically significant difference between the means of two or more groups
purpose: determine whether at least one group has a different mean from the others, and assess the effect of a single categorical independent variable on a single continuous dependent variable
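A minimal sketch of running a one-way ANOVA, assuming made-up group values and SciPy's f_oneway as the tooling choice (neither comes from these notes):

```python
# Minimal sketch: one-way ANOVA on three hypothetical groups.
from scipy import stats

group_a = [4.1, 5.0, 4.8, 5.3, 4.6]   # made-up data for illustration
group_b = [5.9, 6.2, 5.7, 6.5, 6.0]
group_c = [4.9, 5.1, 5.4, 4.7, 5.2]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) suggests at least one group mean differs.
```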
describe key concepts of ANOVA including: null hypothesis, F-stat, p-value
null hypothesis: all group means are equal/the same
alternative hypothesis: at least one group has a different mean
between-group variance: the variability attributable to differences between the group means
within-group variance: the variability within each group
F-stat: ratio of between-group variance to within-group variance.
the F-statistic follows the F-distribution, which is what allows us to determine the p-value
p-value: the probability of observing an F-statistic at least as large as the one obtained, assuming the null hypothesis is true
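A hedged worked sketch of how these pieces fit together, using hypothetical numbers and NumPy/SciPy (an assumed library choice):

```python
# Minimal sketch: building the F-statistic from between- and within-group variance.
import numpy as np
from scipy import stats

groups = [np.array([4.1, 5.0, 4.8, 5.3, 4.6]),   # made-up data
          np.array([5.9, 6.2, 5.7, 6.5, 6.0]),
          np.array([4.9, 5.1, 5.4, 4.7, 5.2])]

all_values = np.concatenate(groups)
grand_mean = all_values.mean()
k = len(groups)               # number of groups
n_total = len(all_values)     # total observations

# Between-group sum of squares: variability of group means around the grand mean
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# Within-group sum of squares: variability inside each group
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_between = k - 1
df_within = n_total - k
ms_between = ss_between / df_between   # between-group variance
ms_within = ss_within / df_within      # within-group variance

f_stat = ms_between / ms_within
p_value = stats.f.sf(f_stat, df_between, df_within)  # upper-tail probability
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```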
why are statistical tests important?
datasets have measurement error, variability due to experimental conditions, etc.
tests help quantify this variability and tell us whether observed differences are likely to be real or due to chance
what is considered a ‘WEIRD’ sample/data
Western, Educated, Industrialised, Rich and Democratic
what is chance variation the same as?
within group variation
t-tests: what distribution is looked at
t-tests look at the distribution of differences between scores
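For comparison, a minimal sketch of an independent-samples t-test on two hypothetical groups (SciPy's ttest_ind is an assumed tooling choice, not from the notes):

```python
# Minimal sketch: independent-samples t-test on two made-up groups.
from scipy import stats

control = [4.1, 5.0, 4.8, 5.3, 4.6]
treatment = [5.9, 6.2, 5.7, 6.5, 6.0]

t_stat, p_value = stats.ttest_ind(control, treatment)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```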
F-statistic: what does it compare? what does a large F-value suggest?
f-statistic compares the variation between groups and within groups
a large F-value suggests the samples come from populations with different means, so it is likely there was an effect and the null hypothesis is less likely to be true
what is the sum of squares & what is it sensitive to?
a measure of variation (the sum of squared deviations from the mean); it is sensitive to sample size because it grows as more observations are added
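A minimal simulated sketch of that sensitivity, assuming NumPy and an arbitrarily chosen random seed: the sum of squares keeps growing with n even though the population spread never changes.

```python
# Minimal sketch: sum of squares grows with sample size; variance does not.
import numpy as np

rng = np.random.default_rng(0)
for n in (10, 100, 1000):
    x = rng.normal(loc=0, scale=1, size=n)     # same population spread each time
    ss = ((x - x.mean()) ** 2).sum()
    print(f"n = {n:5d}  sum of squares = {ss:8.1f}  variance = {ss / (n - 1):.2f}")
```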
what do the dfs account for?
number of groups and sample size
what does the F-ratio mean? what determines the shape of the distribution?
under the null hypothesis, 95% of F-ratios fall in the main body of the F-distribution, whereas 5% fall in the tail. the p-value tells us where our F-ratio sits in the F-distribution
the dfs determine the shape of the distribution
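A minimal sketch of both points, with arbitrarily chosen degrees of freedom and a hypothetical observed F-ratio (SciPy's f distribution object is an assumed tooling choice):

```python
# Minimal sketch: the dfs fix the F-distribution, and the p-value locates our F within it.
from scipy import stats

df_between, df_within = 2, 12          # e.g. 3 groups, 15 observations (assumed)
critical_f = stats.f.ppf(0.95, df_between, df_within)  # cuts off the upper 5% tail
observed_f = 4.5                        # hypothetical F-ratio
p_value = stats.f.sf(observed_f, df_between, df_within)

print(f"critical F (alpha = .05) = {critical_f:.2f}")
print(f"observed F = {observed_f:.2f}, p = {p_value:.4f}")
```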
p values: define type I error (false positives), type II error (false negatives) - how are they controlled?
type I error: concluding that there is an effect when there isn’t (a false positive). can be controlled via alpha or significance level
type II error: failing to detect an effect when there is one (a false negative). can be controlled by increasing statistical power, e.g. with a larger sample size
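A minimal simulation sketch of both error rates, with an assumed effect size, group size, and number of simulations (none of which come from the notes):

```python
# Minimal sketch: estimating Type I error and power by simulation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n_per_group, n_sims = 0.05, 20, 2000   # arbitrary choices

def rejection_rate(true_effect):
    """Fraction of simulated experiments where the null is rejected."""
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(0, 1, n_per_group)
        b = rng.normal(true_effect, 1, n_per_group)   # only group b shifts
        c = rng.normal(0, 1, n_per_group)
        _, p = stats.f_oneway(a, b, c)
        if p < alpha:
            rejections += 1
    return rejections / n_sims

print("Type I error rate (no true effect):", rejection_rate(0.0))   # roughly alpha
print("Power (true effect present):       ", rejection_rate(0.8))   # rises with n
```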
describe 3 assumptions for ANOVA/statistical testing
independence of observations
the value of one observation does not influence others → violating this can increase Type I and Type II error rates
normally distributed
data within each group should be normally distributed - for all sample sizes
homogeneity of variance
each group must have approximately the same amount of variance, as unequal variances can affect the validity of the F-statistic
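A minimal sketch of checking the last two assumptions, using hypothetical group data and SciPy's shapiro and levene tests (an assumed tooling choice; independence has to come from the study design rather than a test):

```python
# Minimal sketch: checking normality and homogeneity of variance.
from scipy import stats

group_a = [4.1, 5.0, 4.8, 5.3, 4.6]   # made-up data
group_b = [5.9, 6.2, 5.7, 6.5, 6.0]
group_c = [4.9, 5.1, 5.4, 4.7, 5.2]

# Normality within each group (Shapiro-Wilk)
for name, g in [("A", group_a), ("B", group_b), ("C", group_c)]:
    stat, p = stats.shapiro(g)
    print(f"group {name}: Shapiro-Wilk p = {p:.3f}")   # small p suggests non-normality

# Homogeneity of variance across groups (Levene's test)
stat, p = stats.levene(group_a, group_b, group_c)
print(f"Levene's test p = {p:.3f}")   # small p suggests unequal variances
```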
post-hoc tests: why are they used & what does the bonferroni correction do?
post-hoc tests are used to compare each group against every other group to spot where the significant difference lies. the Bonferroni correction divides the significance level (alpha) by the number of comparisons, controlling the inflated Type I error risk that comes with multiple tests
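A minimal sketch of pairwise post-hoc t-tests with a Bonferroni correction, using hypothetical groups and SciPy (the corrected alpha here is simply alpha divided by the number of comparisons):

```python
# Minimal sketch: pairwise post-hoc t-tests with a Bonferroni-corrected alpha.
from itertools import combinations
from scipy import stats

groups = {"A": [4.1, 5.0, 4.8, 5.3, 4.6],   # made-up data
          "B": [5.9, 6.2, 5.7, 6.5, 6.0],
          "C": [4.9, 5.1, 5.4, 4.7, 5.2]}

pairs = list(combinations(groups, 2))
alpha_corrected = 0.05 / len(pairs)    # Bonferroni: divide alpha by number of tests

for name1, name2 in pairs:
    _, p = stats.ttest_ind(groups[name1], groups[name2])
    verdict = "significant" if p < alpha_corrected else "not significant"
    print(f"{name1} vs {name2}: p = {p:.4f} -> {verdict} at alpha = {alpha_corrected:.4f}")
```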