One Way Within ANOVA
3 or more groups
1 independent variable
Determines: whether group means are different when the same participants take part in every condition (repeated measures)
One Way Between ANOVA + How to interpret
2 or more independent groups
1 independent variable
Determines: whether there is statistical evidence that the associated population means are significantly different
F(4, 156) = 7.80, p < .001, η² = 0.17
F statistic = ratio of between-group variance to within-group variance; a high value means the group means differ significantly
Between-groups df = number of groups − 1 (the 4 here)
Residual df = N − number of groups (the 156 here)
p value = significance; significant if less than .05
Eta squared (η²) = effect size
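A minimal sketch of running a one-way between-groups ANOVA in Python with scipy (an assumed tool choice; the three groups of scores are made up for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Three made-up groups of 30 scores each, with slightly different means
group_a = rng.normal(50, 10, 30)
group_b = rng.normal(55, 10, 30)
group_c = rng.normal(60, 10, 30)

# One-way between-groups ANOVA: a high F means the group means differ
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)

# Between-groups df = 3 - 1 = 2; residual df = 90 - 3 = 87
print(f"F(2, 87) = {f_stat:.2f}, p = {p_value:.4f}")
```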
Factorial ANOVA
2 or more independent variables
Outcome can show consistent differences between the levels of a factor (a main effect)
Correlation + How to interpret
Finding out whether a relationship exists between 2 variables and then determining the magnitude and direction of that relationship
Scatter plots will give us a general sense of how closely related two variables are
Used to assess possible linear associations between 2 continuous variables
r(155) = .54, p < .01, r² = .29, 95% CI [.43, .64]
Correlation coefficient
Degrees of freedom = (n-2)
Bivariate correlation
P Value = Significance
Coefficient of determination (r²) = the proportion of variance the two variables share; how well the data fit the regression line
Confidence interval = if the study were repeated many times, 95% of the intervals computed this way would contain the true correlation
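A minimal sketch of a Pearson correlation with scipy (the two variables below are made up so that a positive linear association exists):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Made-up continuous variables with a built-in linear relationship
hours_studied = rng.normal(5, 2, 157)
exam_score = 60 + 3 * hours_studied + rng.normal(0, 8, 157)

# Pearson's r: strength and direction of the linear association
r, p = stats.pearsonr(hours_studied, exam_score)

# Degrees of freedom = n - 2 = 155, matching the example report above
print(f"r(155) = {r:.2f}, p = {p:.4f}, r² = {r**2:.2f}")
```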
Interpreting effect sizes and meaning
Tells you how meaningful the relationship between variables or the difference between groups is
Large effect size means that a research finding has practical significance
Small effect size indicates limited practical applications
0.2 = small
0.5 = moderate
0.8 = large
Correlation coefficient
r
The correlation coefficient between two variables X and Y
r = -1 = perfect negative relationship
r = 0 = no relationship at all
r = +1 = perfect positive relationship
Homoscedasticity or Homogeneity of Variance
What test to run
ANOVA assumes that the population standard deviation is the same for all groups: we use only one value for it rather than allowing each group to have its own
Levene test: used to test whether k samples have equal variances
Equal variances across samples is called homogeneity of variance
Brown-Forsythe test: test for the equality of group variance based on performing an ANOVA on a transformation of the response variable
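A minimal sketch of both tests with scipy, where levene(center='median') gives the Brown-Forsythe variant (group data made up, with the third group deliberately more variable):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Made-up groups; the third has a larger standard deviation on purpose
groups = [rng.normal(0, sd, 40) for sd in (1.0, 1.2, 2.5)]

# Levene test in its original form (deviations from group means)
w, p = stats.levene(*groups, center='mean')
print(f"Levene: W = {w:.2f}, p = {p:.4f}")

# Brown-Forsythe variant (deviations from group medians, more robust)
w_bf, p_bf = stats.levene(*groups, center='median')
print(f"Brown-Forsythe: W = {w_bf:.2f}, p = {p_bf:.4f}")
```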
Normality
Residuals are assumed to be normally distributed. We can assess this by looking at QQ plots or histograms of the residuals, or by running a Shapiro-Wilk test.
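A minimal sketch of a Shapiro-Wilk test with scipy (the residuals here are simulated rather than taken from a fitted model):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simulated residuals; in practice these come from your fitted model
residuals = rng.normal(0, 1, 100)

# Shapiro-Wilk: the null hypothesis is that the data are normal,
# so a small p-value signals a violation of the normality assumption
w, p = stats.shapiro(residuals)
print(f"Shapiro-Wilk: W = {w:.3f}, p = {p:.4f}")
```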
Independence
Knowing one residual tells you nothing about any other residual
All values are assumed to have been generated without any regard for or relationship to any of the other ones.
Spearman's Rank Correlations
Correlation
Measures the strength and direction of the association between two ranked variables
Gives the measure of monotonicity of the relationship between two variables
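A minimal sketch with scipy's spearmanr, using made-up data that are monotonic but nonlinear, which is exactly where Spearman's rho outperforms Pearson's r:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Made-up monotonic-but-nonlinear relationship
x = rng.uniform(0, 5, 80)
y = np.exp(x) + rng.normal(0, 5, 80)

# Spearman's rho ranks both variables first, so it measures
# monotonicity rather than strict linearity
rho, p = stats.spearmanr(x, y)
print(f"rho = {rho:.2f}, p = {p:.4f}")
```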
What is a linear regression model?
Regression line: a straight line that describes how a response variable y changes as an explanatory variable x changes
The model is ŷ = a + bX: two variables (X and Y) and two coefficients (a and b)
Coefficient a represents the y-intercept of the line
Coefficient b represents the slope of the line
X is the predictor variable (the IV)
Slope Meaning
a regression coefficient of b = -8.94 means that if I increase x by 1, then I'm decreasing y by 8.94
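A minimal sketch of fitting the line with scipy's linregress (data made up with a known slope so the estimates are easy to check against a and b above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Made-up data generated from y = 40 - 8.94x plus noise
x = rng.uniform(0, 10, 100)
y = 40 - 8.94 * x + rng.normal(0, 5, 100)

# Fit y = a + b*x: intercept is a, slope is b
fit = stats.linregress(x, y)
print(f"a (intercept) = {fit.intercept:.2f}, b (slope) = {fit.slope:.2f}")
# A slope near -8.94: each 1-unit increase in x predicts y dropping by 8.94
```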
Multiple Linear Regression + Multiple Regression
In many research projects, you actually have multiple predictors that you want to examine
we add more terms to our regression equation, e.g. ŷ = b₀ + b₁X₁ + b₂X₂ for two predictors
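A minimal sketch of a two-predictor model using statsmodels (an assumed library choice; predictors and outcome are simulated). It also prints the R² and adjusted R² discussed next:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 120

# Two made-up predictors and an outcome that depends on both
x1 = rng.normal(0, 1, n)
x2 = rng.normal(0, 1, n)
y = 2.0 + 1.5 * x1 - 0.8 * x2 + rng.normal(0, 1, n)

# Stack the predictors and add a column of 1s for the intercept b0
X = sm.add_constant(np.column_stack([x1, x2]))
model = sm.OLS(y, X).fit()

print(model.params)        # b0 (intercept), b1, b2
print(model.rsquared)      # R²: proportion of variance in y accounted for
print(model.rsquared_adj)  # adjusted R²: penalized for extra predictors
```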
The R2 Value
Sometimes called the coefficient of determination, it has a simple interpretation:
It is the proportion of the variance in the outcome variable that can be accounted for by the predictor
The relationship between regression and correlation
Running a Pearson correlation is more or less equivalent to running a linear regression model that uses only one predictor variable
The adjusted R2 value
The motivation behind calculating the adjusted R2 value is the observation that adding more predictors into the model will always cause the R2 value to increase
The adjustment is an attempt to take the degrees of freedom into account
The big advantage of the adjusted R2 value is that when you add more predictors to the model, it only increases if the new predictors improve the model's fit by more than you'd expect by chance
Hypothesis tests for regression models
Testing the model as a whole
Test for individual coefficient
Testing the model as a whole
The first hypothesis test you might try is of the null hypothesis that there is no relationship between the predictors and the outcome
The alternative hypothesis is that the data are distributed in exactly the way that the regression model predicts
Test for individual coefficient
If your regression model doesn't produce a significant result for the F-test, then you probably don't have a very good regression model
For our purposes, it is sufficient to point out that the standard error of the estimated regression coefficient depends on both the predictor and outcome variables, and it is somewhat sensitive to violations of the homogeneity of variance assumption; each coefficient is tested with t = b̂ / SE(b̂)
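A minimal, self-contained sketch of both tests using statsmodels (simulated data; the second slope is truly zero, so its t-test should come out non-significant):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 100

# Intercept column plus two made-up predictors; the second slope is 0
X = sm.add_constant(rng.normal(0, 1, (n, 2)))
y = X @ np.array([1.0, 0.9, 0.0]) + rng.normal(0, 1, n)
model = sm.OLS(y, X).fit()

# Model as a whole: F-test of the null that every slope is zero
print(f"F = {model.fvalue:.2f}, p = {model.f_pvalue:.4f}")

# Individual coefficients: t = estimate / standard error, one per coefficient
print(model.tvalues)
print(model.pvalues)
```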
Checking the homogeneity of variance assumption
Levene test: used to test whether k samples have equal variances
Equal variances across samples is called homogeneity of variance
Brown-Forsythe test: test for the equality of group variance based on performing an ANOVA on a transformation of the response variable
Regardless of whether you're doing the standard Levene test or the Brown-Forsythe test, the test statistic is calculated in exactly the same way that the F statistic for the regular ANOVA is calculated
Removing the normality assumption
Easiest solution is to switch to a non-parametric test
When you’ve got three or more groups, you can use the Kruskal-Wallis Rank Sum Test
The logic behind the Kruskal-Wallis test
What we do is rank all of these Y values and conduct our analysis on the ranked data
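A minimal sketch with scipy's kruskal, using made-up skewed data where a normality-assuming ANOVA would be questionable:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Three made-up groups with skewed (exponential) scores
groups = [rng.exponential(scale, 30) for scale in (1.0, 1.5, 2.0)]

# Kruskal-Wallis: rank all scores across groups, then test whether
# the groups' mean ranks differ; no normality assumption needed
h, p = stats.kruskal(*groups)
print(f"H = {h:.2f}, p = {p:.4f}")
```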
What are our degrees of freedom?
For any given factor, the degrees of freedom is equal to the number of levels minus 1 (e.g., a factor with 3 levels has df = 2)
What is an interaction effect?
the effect of Factor A is different depending on which level of Factor B we're talking about
Effect size
We use eta-squared as a simple way to measure how big the overall effect is for any particular term
Eta squared = SS_term / SS_total (e.g., η² = SS_A / SS_total for Factor A)
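A minimal sketch of a 2x2 factorial ANOVA with statsmodels (an assumed tool; data simulated with a built-in interaction). Eta squared here is approximated as each term's sum of squares over the sum of all terms' sums of squares:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(8)
n = 160

# Made-up 2x2 design: factors A and B, plus an A-by-B interaction in y
df = pd.DataFrame({
    "A": rng.choice(["a1", "a2"], n),
    "B": rng.choice(["b1", "b2"], n),
})
df["y"] = (rng.normal(0, 1, n)
           + (df["A"] == "a2") * 1.0
           + ((df["A"] == "a2") & (df["B"] == "b2")) * 1.5)

# C(A) * C(B) expands to both main effects plus their interaction
model = ols("y ~ C(A) * C(B)", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)

# Eta squared per term: SS_term over total SS (including the residual)
table["eta_sq"] = table["sum_sq"] / table["sum_sq"].sum()
print(table)
```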
Analysis of Covariance (ANCOVA)
A variant of ANOVA arises when you have an additional continuous variable that you think might be related to the dependent variable. This additional variable can be added to the analysis as a covariate, in the aptly named analysis of covariance
Values of the dependent variable are adjusted for the influence of the covariate and then the adjusted score means are tested between groups in the usual way
ANCOVA runs the risk of adjusting away real differences between groups, and this should be avoided
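A minimal sketch of an ANCOVA via statsmodels' formula interface (assumed tool; made-up data with a continuous covariate called age):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(9)
n = 90

# Made-up grouping factor plus a continuous covariate
df = pd.DataFrame({
    "group": rng.choice(["ctrl", "tx1", "tx2"], n),
    "age": rng.normal(40, 10, n),
})
df["y"] = (rng.normal(0, 2, n)
           + 0.3 * df["age"]
           + (df["group"] == "tx2") * 2.0)

# Adding the covariate to the model adjusts the group comparison
# for its influence before the group means are tested
model = ols("y ~ C(group) + age", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```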