This set of flashcards covers vocabulary and key concepts from Chapters 14-17, including Two-Way ANOVA, Correlation, Linear and Multiple Regression, and Chi-Square tests.
Two-way ANOVA
A statistical analysis of variance involving two factors.
Factorial design
A research design that includes two or more factors.
Factors
The independent variables in an ANOVA, often identified using letters (e.g., Factor A and Factor B).
Levels
The different groups or categories within a factor, often identified using numbers.
Cells
The groups created by combining the levels of two or more factors in a factorial design.
Complete factorial design
A design in which every level of one factor is combined with every level of every other factor.
Between-subjects design (2-Between)
A factorial design that combines the levels of two between-subjects factors, where the total sample size is calculated as N = n × p × q.
Mixed design (1-Between 1-Within)
A factorial design that includes one between-subjects factor and one within-subjects factor.
Within-subjects design (2-Within)
A factorial design that combines two within-subjects factors, where the number of participants equals the total sample size (n = N).
Main effect
A source of variation used to determine whether group means vary across the levels of a single factor.
Interaction
An A × B test used to determine whether the effect of one factor changes across the levels of another factor.
Error (Within-groups variation)
Sources of variability that are attributed to differences within each group or cell.
Simple main effect test
A test used to analyze the effect of one factor at a single level of another factor, typically computed following a significant interaction.
Pairwise comparisons
Post hoc tests required to determine which specific groups are different, used only if k > 2.
Eta-squared (η² or R²)
A measure of effect size representing the proportion of variance explained, where η² = SS_between-groups / SS_total.
Omega-squared (ω²)
A measure of effect size that is less biased than eta-squared, calculated as ω² = (SS_between-groups − (df_between-groups)(MSE)) / (SS_total + MSE).
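To make the two effect-size formulas concrete, here is a minimal sketch in Python; the sums of squares, degrees of freedom, and MSE values are invented for illustration, not taken from any of the cards:

```python
# Hypothetical values from an ANOVA summary table (made up for illustration).
ss_between = 60.0   # SS for the between-groups source
ss_total = 200.0    # total SS
df_between = 2      # degrees of freedom for the between-groups source
mse = 2.5           # mean square error (within-groups MS)

# Eta-squared: proportion of total variance explained by the effect.
eta_sq = ss_between / ss_total

# Omega-squared: adjusts for eta-squared's positive bias.
omega_sq = (ss_between - df_between * mse) / (ss_total + mse)

print(round(eta_sq, 3), round(omega_sq, 3))  # 0.3 0.272
```

Note that ω² comes out smaller than η² for the same data, which is the expected direction of the bias correction.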
Homogeneity of variance
An assumption for the two-way between-subjects ANOVA requiring that the variance in each population is equal.
Shapiro-Wilk test
A statistical test used in SPSS to evaluate the assumption of normality.
Levene’s test
A test used in SPSS to evaluate the assumption of homogeneity of variance.
Correlation
A statistical procedure that estimates how variables are related and describes the pattern of data points.
Scatter plot
A graphical representation of data points or bivariate plots used to observe the direction and strength of a relationship.
Correlation coefficient (r)
A numerical value that describes the direction and strength of a linear relationship.
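The correlation coefficient can be computed directly from its definition (covariance scaled by each variable's variability); a minimal Python sketch, with sample data made up for illustration:

```python
import math

def pearson_r(x, y):
    """Pearson r: sum of products of deviations, scaled by variability."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sp = sum((a - mx) * (b - my) for a, b in zip(x, y))  # sum of products
    ssx = sum((a - mx) ** 2 for a in x)
    ssy = sum((b - my) ** 2 for b in y)
    return sp / math.sqrt(ssx * ssy)

# Perfectly linear, same-direction data yields r = +1.
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 4))  # 1.0
```

Values near +1 or −1 indicate a strong linear relationship; values near 0 indicate a weak or absent one.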
Positive correlation
A relationship where both variables change in the same direction.
Negative correlation
A relationship where variables change in opposite directions.
Homoscedasticity
An assumption for the Pearson correlation coefficient requiring that the variability of scores for one variable remains constant across all levels of another variable.
Linearity
The assumption that the relationship between two variables can be best described by a straight line.
Coefficient of determination (r²)
A measure of effect size for correlation that results in a value between 0 and +1.
Causality
The direct relationship of cause and effect, which cannot be ruled out or confirmed by correlation alone due to possible reverse causality or confound variables.
Outliers
Scores that fall significantly above or below other scores, which can obscure relationships or change the strength and direction of a correlation.
Restriction of range
A limitation in interpretation where the range of data in a sample is limited, potentially leading to erroneous conclusions.
Spearman rank-order correlation coefficient (r_s)
A nonparametric alternative to Pearson r used when factors are ranked or do not meet interval/ratio scale requirements.
Monotonic relationship
An assumption for the Spearman correlation where the variables tend to move in the same relative direction but not necessarily at a constant rate.
Point-Biserial correlation coefficient (r_pb)
A correlation coefficient used when one factor is continuous and the other factor is dichotomous.
Phi correlation coefficient (φ)
A correlation coefficient used when both factors are dichotomous, where φ = (matches − mismatches) / sample size.
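Under that matches/mismatches form of the formula, φ is a one-line computation; the counts below are hypothetical:

```python
# Hypothetical counts for two dichotomous variables (e.g., yes/no on each).
matches = 34      # pairs where both variables fall in the same category
mismatches = 16   # pairs where they fall in different categories
n = matches + mismatches

phi = (matches - mismatches) / n
print(phi)  # 0.36
```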
Linear regression
A statistical method for describing a linear relationship and predicting the value of one variable based on another.
Predictor variable (X)
The variable used in regression to predict the value of the criterion variable.
Criterion variable (Y)
The variable in regression that is being predicted (the outcome).
Residual
The difference between an observed value and the value predicted by the regression line.
Slope (b)
The amount of change in Y for every one-unit change in X in the equation Y=bX+a, also known as the regression coefficient.
y-intercept (a)
The value of Y when X=0 in the equation Y=bX+a.
Method of least squares
A procedure for finding the best-fitting line by making the sum of the squared residuals (SS_residual) as small as possible.
Standard error of estimate (s_e)
A measure of the accuracy of predictions in regression, calculated as s_e = √(MS_residual).
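The least-squares line and the standard error of estimate can be sketched together in Python; the five (X, Y) pairs are invented for illustration:

```python
import math

def fit_line(x, y):
    """Least-squares slope b and intercept a for the line Y = bX + a."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return b, a

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
b, a = fit_line(x, y)

# SS_residual: squared vertical distances from each point to the line.
ss_residual = sum((yi - (b * xi + a)) ** 2 for xi, yi in zip(x, y))

# s_e = sqrt(MS_residual), with MS_residual = SS_residual / (n - 2).
se = math.sqrt(ss_residual / (len(x) - 2))
print(round(b, 2), round(a, 2), round(se, 3))  # 0.6 2.2 0.894
```

A smaller s_e means the data points sit closer to the regression line, so predictions from the line are more accurate.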
Multiple regression
A statistical method used to predict a single criterion variable using two or more predictor variables.
Multicollinearity
A condition in multiple regression where predictor variables are highly correlated with each other.
Variance inflation factor (VIF)
A measure used to detect the presence of multicollinearity among predictor variables.
Beta coefficient (β)
Standardized regression coefficients used to compare the relative influence or unique contribution of each predictor variable.
Chi-square test (χ²)
A nonparametric test used for nominal or categorical data where variance is not meaningful.
Chi-square goodness-of-fit test
A test used to determine how well observed frequencies (f_o) fit expected frequencies (f_e) for a single categorical variable.
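The goodness-of-fit statistic sums (f_o − f_e)² / f_e across categories; a small Python sketch with made-up observed counts and equal expected frequencies:

```python
# Hypothetical observed counts across four categories (made up for illustration).
observed = [18, 22, 30, 30]
n = sum(observed)                               # total frequency: 100
expected = [n / len(observed)] * len(observed)  # equal f_e of 25 per category

chi_sq = sum((fo - fe) ** 2 / fe for fo, fe in zip(observed, expected))
print(round(chi_sq, 2))  # 4.32
```

The larger the discrepancy between observed and expected frequencies, the larger χ² becomes.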
Chi-square test for independence
A test used to determine whether two categorical variables are independent by comparing frequencies organized into tables.
Cramér’s V
A measure of effect size for a chi-square test for independence involving tables larger than 2 × 2, also known as Cramér’s phi.