Vocabulary flashcards summarising core terms and concepts from the PSYC3010 lecture on standard and hierarchical multiple regression, including key statistics, equations, SPSS outputs, and assignment-related terminology.
Standard Multiple Regression (SMR)
A regression approach where all predictors are entered into the model simultaneously to evaluate their collective and individual contributions to predicting the criterion.
Hierarchical Multiple Regression (HMR)
A regression approach in which predictors are entered sequentially in pre-specified steps (blocks) so that each step’s added variance in the criterion can be evaluated.
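A minimal sketch of the two-step logic, assuming made-up variables (age as a control, self-esteem as the focal predictor) and the statsmodels library; SPSS reports the same R² values in its Model Summary table:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
age = rng.normal(35, 10, n)                 # hypothetical control variable
esteem = rng.normal(0, 1, n)                # hypothetical focal predictor
y = 0.3 * esteem + 0.02 * age + rng.normal(0, 1, n)

# Step 1: control variable only
m1 = sm.OLS(y, sm.add_constant(age)).fit()
# Step 2: control + focal predictor
m2 = sm.OLS(y, sm.add_constant(np.column_stack([age, esteem]))).fit()

delta_r2 = m2.rsquared - m1.rsquared        # variance added at Step 2
print(m1.rsquared, m2.rsquared, delta_r2)
```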
Predictor (X)
An independent variable used to predict scores on the criterion in regression analyses.
Criterion (Y)
The dependent variable whose variance is being predicted or explained by the predictors.
Unstandardised Regression Coefficient (b)
The raw slope indicating the expected change in Y for a 1-unit change in a predictor, holding other predictors constant.
Standardised Regression Coefficient (β)
A scale-free slope indicating the expected SD change in Y for a 1 SD change in a predictor, controlling for other predictors.
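The two slopes are linked through the standard deviations of the predictor and the criterion (β = b × s_X / s_Y); a quick sketch with made-up values:

```python
# Convert an unstandardised slope to a standardised one: beta = b * (sd_x / sd_y)
b, sd_x, sd_y = 1.5, 2.0, 6.0   # made-up slope and standard deviations
beta = b * sd_x / sd_y
print(beta)                      # 0.5: a 1 SD rise in X predicts a 0.5 SD rise in Y
```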
Regression Intercept (a / Constant)
The predicted value of Y when all predictors equal zero in the unstandardised regression equation.
Multiple Correlation Coefficient (R)
The correlation between observed Y scores and the linear composite of all predictors (Ŷ).
Coefficient of Determination (R²)
The proportion of total variance in Y jointly explained by the set of predictors (R squared).
Adjusted R²
An estimate of R² corrected downward for sample size and the number of predictors, counteracting the positive bias of the raw R².
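A sketch of the standard correction (the one SPSS applies), with made-up values for illustration:

```python
def adjusted_r2(r2, n, p):
    """Shrink R-squared for sample size n and p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

print(adjusted_r2(0.25, 100, 3))   # ~0.227: a little lower than the raw 0.25
```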
R² Change (ΔR²)
The increase in explained variance in Y produced by predictors entered at a specific step in HMR.
F_change (FΔ)
The F-test that assesses whether a given ΔR² in hierarchical regression is statistically significant.
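A sketch of the F-change computation from the two R² values, where m is the number of predictors added at the step; the numbers are made up, and scipy supplies the p-value:

```python
from scipy import stats

n, p_full, m = 100, 3, 1            # sample size, predictors in full model, predictors added
r2_step1, r2_step2 = 0.20, 0.25     # made-up R-squared values from the two steps

delta_r2 = r2_step2 - r2_step1
f_change = (delta_r2 / m) / ((1 - r2_step2) / (n - p_full - 1))
p_value = stats.f.sf(f_change, m, n - p_full - 1)
print(f_change, p_value)            # F(1, 96) = 6.40, p ≈ .013
```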
Zero-order Correlation (r)
The simple bivariate correlation between one predictor and Y without adjusting for other predictors.
Partial Correlation (pr)
The correlation between a predictor and Y after variance associated with the other predictors has been removed from both the predictor and Y.
Semipartial (Part) Correlation (sr)
The correlation between a predictor (with shared variance removed) and the full Y; its square (sr²) is the unique variance that predictor explains in Y.
Unique Variance (sr²)
The portion of total variance in Y that is explained solely by one predictor and not shared with the others.
Shared (Overlapping) Variance
Variance in Y that two or more predictors jointly explain; calculated as R² minus the sum of all sr² values.
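In practice, sr² for a predictor equals the drop in R² when that predictor is removed from the full model, which also yields the shared variance from the card above; a sketch using statsmodels with made-up data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = 0.5 * x1 + rng.normal(size=200)       # correlated predictors share variance
y = x1 + x2 + rng.normal(size=200)

full = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
no_x1 = sm.OLS(y, sm.add_constant(x2)).fit()
no_x2 = sm.OLS(y, sm.add_constant(x1)).fit()

sr2_x1 = full.rsquared - no_x1.rsquared    # unique variance of x1
sr2_x2 = full.rsquared - no_x2.rsquared    # unique variance of x2
shared = full.rsquared - sr2_x1 - sr2_x2   # R-squared minus the sum of sr-squared values
print(sr2_x1, sr2_x2, shared)
```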
Standard Error of Estimate
The standard deviation of residuals; reflects average prediction error of the regression model.
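A sketch of the computation from residuals, with p predictors; the N − p − 1 denominator matches the Adjusted Degrees of Freedom card later in this set:

```python
import numpy as np

def standard_error_of_estimate(y, y_hat, p):
    """Root of the residual variance: sqrt(SS_res / (N - p - 1))."""
    resid = np.asarray(y) - np.asarray(y_hat)
    return np.sqrt(np.sum(resid ** 2) / (len(resid) - p - 1))

# Made-up observed and predicted scores for one predictor (p = 1)
print(standard_error_of_estimate([3, 5, 7, 9], [2.8, 5.3, 6.9, 9.1], p=1))  # ~0.274
```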
Omnibus F-test (Overall Model)
The test that determines whether the set of predictors as a whole accounts for a significant amount of variance in Y (tests R²).
t-test for Regression Coefficient
Assesses whether an individual predictor’s b (and β) differs significantly from zero after accounting for other predictors.
Collinearity
The degree of intercorrelation among predictors; high collinearity can obscure unique effects and inflate the standard errors of the coefficient estimates.
Principle of Parsimony
The guideline that, among equally effective models, the simplest (fewest predictors) should be preferred.
Step / Block (in HMR)
A stage of predictor entry in hierarchical regression at which new predictors are added and their incremental contribution is evaluated.
Covariance
A measure of how two variables vary together, forming the basis for correlation and regression coefficients.
Pearson’s r
The standardised covariance, giving the strength and direction of the linear association between two variables.
Variance Components
Breakdowns of Y variance into portions explained and unexplained by predictors (e.g., unique, shared, residual).
Linear Composite (Ŷ)
The predicted score created from the regression equation combining all predictors weighted by their b coefficients.
Least Squares Criterion
The rule used to estimate regression coefficients: minimise the sum of squared residuals between observed and predicted Y.
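A minimal sketch of fitting coefficients by least squares with NumPy; the design matrix gets a column of ones so the intercept a is estimated too:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 2))                      # two made-up predictors
y = 1.0 + 0.8 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(scale=0.5, size=50)

design = np.column_stack([np.ones(50), X])        # intercept column + predictors
coef, *_ = np.linalg.lstsq(design, y, rcond=None) # minimises sum of squared residuals
a, b1, b2 = coef
print(a, b1, b2)                                  # close to 1.0, 0.8, -0.4
```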
Confidence Interval (CI)
A range of values within which the true population parameter (e.g., b) is expected to fall with a specified probability, commonly 95%.
Eta-squared (η²)
An effect-size index for F-tests indicating the proportion of total variance in Y attributable to a factor (used in ANOVA and reported in assignments).
Cohen’s d
An effect-size measure for t-tests expressing mean differences in standard deviation units.
APA 7th Formatting
The writing and reporting style guidelines (e.g., italics for statistics, double spacing, heading structure) required for assignment write-ups.
Omnibus Test
A broad significance test (e.g., overall F) that evaluates whether any effects exist before specific follow-ups are examined.
Follow-up Tests
Additional analyses (e.g., simple effects, pairwise comparisons) conducted only when omnibus tests or interactions are significant.
Model Summary Table (SPSS)
SPSS output section providing R, R², adjusted R², and standard error of estimate for each regression model/step.
Model ANOVA Table (SPSS)
SPSS section reporting SS, df, MS, overall F, and p-value for each model/step, testing significance of R².
Coefficients Table (SPSS)
SPSS output listing b, SE b, β, t, p, zero-order, partial, and part correlations plus 95% CIs for each predictor.
95% Confidence Interval for B
Lower and upper bounds around an unstandardised coefficient, indicating where the true population slope is likely to lie with 95% confidence.
Standard Multiple Regression Equation
Ŷ = b₁X₁ + b₂X₂ + … + bₚXₚ + a ; predicts raw Y scores from unstandardised coefficients.
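A worked prediction with made-up coefficients, to show the raw-score form in action:

```python
# Made-up equation: Y-hat = 0.5*X1 + 2.0*X2 + 10, where a = 10 is the intercept
b1, b2, a = 0.5, 2.0, 10.0
x1, x2 = 20, 3
y_hat = b1 * x1 + b2 * x2 + a
print(y_hat)   # 26.0: predicted raw score on the criterion
```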
Standardised Multiple Regression Equation
ZŶ = β₁Z₁ + β₂Z₂ + … + βₚZₚ ; predicts standardised Y scores without an intercept.
F-ratio Formula from R²
F = [(N – p – 1)R²] / [p(1 – R²)] ; converts R² to the F statistic for testing overall model significance.
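Plugging in made-up values shows the conversion; with N = 100, p = 3, and R² = .25, the overall model F has (3, 96) degrees of freedom:

```python
n, p, r2 = 100, 3, 0.25
f = ((n - p - 1) * r2) / (p * (1 - r2))
print(f)   # 10.67: compared against the F(3, 96) distribution
```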
Hierarchical Regression Rationale
The theoretical or practical justification for the chosen order of predictor entry (e.g., control variables first, focal variables next).
Control Variable
A predictor entered early in HMR to statistically remove its influence before assessing other variables of interest.
Interaction Term
A product variable entered in HMR to test whether the effect of one predictor on Y depends on another predictor.
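A sketch of building a product term from made-up predictors; mean-centring each predictor first is a common recommendation, as it reduces collinearity between the components and their product:

```python
import numpy as np

rng = np.random.default_rng(3)
x1 = rng.normal(50, 10, 120)
x2 = rng.normal(0, 1, 120)

x1_c = x1 - x1.mean()        # mean-centre each predictor
x2_c = x2 - x2.mean()
interaction = x1_c * x2_c    # product term
# HMR entry: Step 1 = x1_c and x2_c; Step 2 = add `interaction` and test its R-squared change
```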
Adjusted Degrees of Freedom (N – p – 1)
The denominator df used in regression F-tests and t-tests: sample size minus the number of predictors, minus one for the intercept.