analysis of variance (ANOVA):
allows us to test more than two group means at once
serves the same purpose as t tests
considers both within-group and between-group variability
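To make the within/between idea concrete, here is a minimal pure-Python sketch of a one-way ANOVA F statistic; the three groups and their scores are invented for illustration.

```python
def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of groups."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)

    # between-groups (systematic) variability: group means vs. grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-groups (error) variability: scores vs. their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# made-up data: three groups of three scores each
f, dfb, dfw = one_way_anova([[4, 5, 6], [7, 8, 9], [10, 11, 12]])
print(round(f, 2), dfb, dfw)  # 27.0 2 6
```

A large F means the between-groups variability dwarfs the within-groups (error) variability.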
systematic variability:
variability between our groups
random error:
variability we can't attribute to any known factor
grouping variable:
predictor that explains values in the outcome variable, AKA independent variable
outcome variable:
dependent variable
grand mean (Mg):
the mean across all groups
between-groups variability:
variability arising from group differences
within-groups variability:
variability arising within each group
within-groups sum of squares (SSW):
the summed squared distances between each score and the mean of the group it belongs to
Bonferroni Test:
series of t tests performed on pairs of groups, with the alpha level divided by the number of comparisons to control familywise error
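A quick sketch of the Bonferroni adjustment behind those pairwise t tests: with k groups there are k(k-1)/2 pairs, and the alpha for each individual test is divided by that count (the group labels below are hypothetical).

```python
from itertools import combinations

groups = ["A", "B", "C", "D"]           # hypothetical group labels
pairs = list(combinations(groups, 2))   # every pair gets its own t test
alpha = 0.05
adjusted_alpha = alpha / len(pairs)     # Bonferroni-corrected threshold

print(len(pairs), round(adjusted_alpha, 4))  # 6 0.0083
```

Each pairwise t test is then judged against the adjusted alpha rather than .05.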
factorial ANOVA:
multiple grouping variables
repeated measures ANOVA:
the same participants are measured three or more times
Correlations:
relationships between two continuous variables
covariance:
the degree to which two variables vary together
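As a sketch (data invented), sample covariance is just the average product of deviation scores; it comes out positive when the variables rise together.

```python
def covariance(x, y):
    """Sample covariance: average product of deviation scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)

print(covariance([1, 2, 3], [2, 4, 6]))  # 2.0 (positive: they rise together)
```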
inverse (negative) relationship:
as one variable goes up, another goes down
linear relationships:
the middle of the points in a scatterplot is best represented by a straight line
curvilinear relationships:
line through middle of data will be curved
correlation coefficients:
range between -1.00 and +1.00
magnitude reports strength (conventional benchmarks):
.10 = weak
.30 = moderate
.50 = strong
Pearson’s r:
r acts as a descriptive statistic, like M
tells us the magnitude and direction of a linear relationship
r also acts as a test statistic, like t, because we can compare it to a critical value (r*)
coefficient of determination (r²):
the proportion of variance in one variable explained by the other; can also be reported as an effect size
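A sketch of computing Pearson's r and r² directly from deviation scores; the hours/scores data below is invented.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson's r from deviations about each variable's mean."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

hours = [1, 2, 3, 4, 5]        # made-up predictor scores
scores = [55, 60, 70, 75, 90]  # made-up outcome scores
r = pearson_r(hours, scores)
print(round(r, 3), round(r ** 2, 3))  # strong positive r; r² = variance explained
```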
spurious correlations:
variables that appear related purely by random chance (or because of a third variable), not a true relationship
range restriction:
when our data doesn't include a variable's full range of variability, which can distort the observed correlation
outlier:
datapoint far away from rest of observations in a dataset
Spearman’s rho (ρ):
finds relationships with ordinal (ranked) data
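A sketch of Spearman's rho: convert each variable to ranks, then apply the shortcut formula 1 - 6·Σd²/(n(n² - 1)), which holds when there are no tied ranks (the data and the no-ties assumption are for illustration only).

```python
def rank(values):
    """Rank from 1 (smallest); assumes no ties for this sketch."""
    order = sorted(values)
    return [order.index(v) + 1 for v in values]

x = [10, 20, 30, 40, 50]   # e.g. an ordinal predictor
y = [3, 1, 4, 2, 5]        # e.g. an ordinal outcome

rx, ry = rank(x), rank(y)
d_sq = sum((a - b) ** 2 for a, b in zip(rx, ry))
n = len(x)
rho = 1 - 6 * d_sq / (n * (n ** 2 - 1))
print(rho)  # 0.5
```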
line of best fit:
the central tendency of a scatterplot; the line that comes as close as possible to all points
the distance between the line of best fit and each data point = error = residual
least squares error solution:
the equation of the line of best fit that gives the smallest possible sum of squared errors/residuals
Intercept and Slope:
intercept: where the line crosses the Y axis
slope: the steepness and direction of the line
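The least-squares intercept and slope have a closed form: slope = Σ(x - Mx)(y - My) / Σ(x - Mx)², intercept = My - slope·Mx. A minimal sketch with invented (perfectly linear) data:

```python
def fit_line(x, y):
    """Closed-form least-squares line: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx
    return intercept, slope

# made-up data lying exactly on y = 1 + 2x
b0, b1 = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(b0, b1)  # 1.0 2.0
```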
sum of squares error/residual (SSE):
summed squared distances from each observed score to the line of best fit
sum of squares total (SST):
summed squared distances from each observed score to the mean
sum of squares model (SSM):
summed squared differences from the prediction line to the mean (aka the observed effect / the model's ability to explain variance)
standard error of the estimate:
the average size of the residuals
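The three sums of squares partition neatly (SStotal = SSmodel + SSerror), and the standard error of the estimate is sqrt(SSerror / (n - 2)). A pure-Python sketch with invented, slightly noisy data:

```python
from math import sqrt

x = [1, 2, 3, 4]       # made-up predictor
y = [3, 6, 7, 10]      # made-up outcome
n = len(x)
mx, my = sum(x) / n, sum(y) / n

# least-squares slope and intercept
slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
intercept = my - slope * mx
pred = [intercept + slope * a for a in x]

ss_total = sum((b - my) ** 2 for b in y)               # observed score to mean
ss_error = sum((b - p) ** 2 for b, p in zip(y, pred))  # observed score to line
ss_model = sum((p - my) ** 2 for p in pred)            # line to mean

see = sqrt(ss_error / (n - 2))  # standard error of the estimate
print(round(ss_total, 2), round(ss_model, 2), round(ss_error, 2))
```

Note that the printed SSmodel and SSerror add back up to SStotal.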
multiple regression:
using multiple X variables as predictors of a single Y variable at the same time
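A minimal sketch of multiple regression with two invented predictors, solved from the normal equations (XᵀX)b = Xᵀy using a small hand-rolled Gaussian elimination; because y is built exactly as 1 + 2·x1 + 3·x2, the recovered coefficients should match.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back-substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

x1 = [1, 2, 3, 4, 5]                          # made-up predictor 1
x2 = [2, 1, 4, 3, 5]                          # made-up predictor 2
y = [1 + 2 * a + 3 * b for a, b in zip(x1, x2)]

X = [[1.0, a, b] for a, b in zip(x1, x2)]     # column of 1s for the intercept
XtX = [[sum(row[r] * row[c] for row in X) for c in range(3)] for r in range(3)]
Xty = [sum(X[i][r] * y[i] for i in range(len(X))) for r in range(3)]

coefs = solve(XtX, Xty)
print([round(c, 6) for c in coefs])  # [1.0, 2.0, 3.0]
```

In practice a library routine (e.g. a least-squares solver) replaces the hand-rolled elimination; the sketch just exposes the algebra.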