Decision Errors: Incorrect conclusions drawn from hypothesis testing about the true but unknown state of reality.
Not merely procedural mistakes; correct methods may yield incorrect conclusions.
Errors stem from limitations in using sample statistics to estimate population parameters.
Probability of Errors: Impossible to entirely eliminate errors in hypothesis testing.
Type II Error (𝛽):
Failing to reject the null hypothesis when it is false.
Researchers can try to reduce the chance of this error, but cannot set its level in advance.
Example: Not detecting an existing effect.
Type I Error (𝛼):
Rejecting the null hypothesis when it is actually true.
This type of error can be set in advance (e.g., 5% significance level).
Example: Finding effects that do not actually exist.
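To make the two error types concrete, here is a small simulation sketch (not part of the original notes): it estimates how often a one-sample t-test commits each error; the sample size, true effect, and 𝛼 = .05 are arbitrary illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha, n, n_sims = 0.05, 30, 5_000

# Type I error: H0 is true (population mean really is 0),
# so every rejection is a false positive.
false_positives = 0
for _ in range(n_sims):
    sample = rng.normal(loc=0.0, scale=1.0, size=n)
    if stats.ttest_1samp(sample, popmean=0.0).pvalue < alpha:
        false_positives += 1

# Type II error: H0 is false (true mean is 0.3),
# so every failure to reject is a miss.
misses = 0
for _ in range(n_sims):
    sample = rng.normal(loc=0.3, scale=1.0, size=n)
    if stats.ttest_1samp(sample, popmean=0.0).pvalue >= alpha:
        misses += 1

print(f"Estimated Type I error rate:  {false_positives / n_sims:.3f}")  # should be near alpha
print(f"Estimated Type II error rate: {misses / n_sims:.3f}")           # this is beta
```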
Statistical Power:
Definition: The probability of obtaining a statistically significant result when the research hypothesis (H1) is true.
Reflects a test's ability to detect an effect when one truly exists.
Power = 1 - 𝛽 (Type II error probability).
Allowing a higher Type I error rate (𝛼) lowers the Type II error rate (𝛽) and therefore increases power; a worked power sketch follows below.
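A minimal power calculation sketch, assuming a one-sided one-sample z-test with illustrative values d = 0.5, n = 30, and 𝛼 = .05 (none of these numbers come from the notes):

```python
import numpy as np
from scipy import stats

alpha, d, n = 0.05, 0.5, 30   # assumed significance level, effect size, sample size

# One-sided one-sample z-test: under H1 the test statistic is shifted by d * sqrt(n).
z_crit = stats.norm.ppf(1 - alpha)                # cutoff on the standard normal
beta = stats.norm.cdf(z_crit - d * np.sqrt(n))    # P(fail to reject | H1 is true)
power = 1 - beta

print(f"beta  (Type II error): {beta:.3f}")
print(f"power (1 - beta):      {power:.3f}")
```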
Effect Size:
Definition: Measures the degree of difference between populations, i.e., the magnitude of an effect.
Key to assessing how large an effect is rather than merely whether it exists.
Cohen's d:
Formula: d = \frac{\mu_1 - \mu_0}{\sigma}, where:
\mu_0 = mean under the null hypothesis (H0)
\mu_1 = mean under the alternative hypothesis (H1)
\sigma = common standard deviation of distributions.
Effect sizes categorized as small (d = .2), medium (d = .5), and large (d = .8).
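The formula above is stated in terms of population means and a common σ; in practice d is usually estimated from sample data with a pooled standard deviation. A sketch of that sample-based version, using made-up scores:

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

# Hypothetical scores for a treatment and a control group.
treatment = [23, 25, 28, 30, 27, 26, 29]
control   = [20, 22, 21, 24, 23, 22, 25]
print(f"d = {cohens_d(treatment, control):.2f}")  # by Cohen's benchmarks, 0.8+ counts as 'large'
```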
Analysis of Variance (ANOVA):
Purpose: Assess the effect of one or more factors on a dependent variable by comparing group means.
Sum of Squares (SS): Total variability calculated by summing squared differences from the mean.
SS_{total} = SS_{between} + SS_{within}
Mean Squares (MS): Average variability calculated from SS divided by degrees of freedom (df).
Test Statistic (F): Ratio comparing between-group variance to within-group variance.
F = \frac{MS_{between}}{MS_{within}}
Cutoff F-ratio used to determine significance (e.g., F(df_{between}, df_{within})); a worked sketch follows below.
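A worked sketch of the one-way ANOVA computation on made-up data, partitioning SS_total into SS_between and SS_within and forming the F-ratio (scipy is used only as a cross-check):

```python
import numpy as np
from scipy import stats

# Hypothetical scores for three groups (made-up data).
groups = [np.array([4.0, 5.0, 6.0, 5.0]),
          np.array([7.0, 8.0, 6.0, 7.0]),
          np.array([9.0, 10.0, 8.0, 9.0])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
k, N = len(groups), len(all_scores)

# Partition the total sum of squares: SS_total = SS_between + SS_within.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within  = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_between, df_within = k - 1, N - k
ms_between, ms_within = ss_between / df_between, ss_within / df_within
F = ms_between / ms_within

print(f"F({df_between}, {df_within}) = {F:.2f}")
print("scipy check:", stats.f_oneway(*groups))                 # should give the same F
print("F cutoff at alpha = .05:", stats.f.ppf(0.95, df_between, df_within))
```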
Two-Way (Factorial) ANOVA:
Definition: Examines the effects of two categorical independent variables (factors) on a dependent variable.
Main effect: Individual effect of each factor.
Interaction effect: The combined effect of factors on the dependent variable, which can differ depending on the levels of other factors.
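As an illustrative sketch of a two-way (factorial) analysis, the following uses statsmodels on a hypothetical 2x2 design; the column names (therapy, dosage, score) and the data are invented for the example:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical balanced 2x2 design: two categorical factors and a numeric outcome.
df = pd.DataFrame({
    "therapy": ["A", "A", "A", "A", "B", "B", "B", "B"] * 2,
    "dosage":  ["low", "low", "high", "high"] * 4,
    "score":   [5, 6, 9, 10, 4, 5, 6, 7, 6, 5, 10, 9, 5, 4, 7, 6],
})

# 'C(therapy) * C(dosage)' expands to both main effects plus their interaction.
model = smf.ols("score ~ C(therapy) * C(dosage)", data=df).fit()
print(anova_lm(model, typ=2))  # F and p for each main effect and the interaction
```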
Scatter Plot: Visual representation of the relationship between two variables.
Correlation Coefficient (r): Measures the degree of linear correlation, ranging from -1 (perfect negative) to +1 (perfect positive).
Formula: r = \frac{\Sigma Z_X Z_Y}{N}
Coefficient of Determination (r^2): Proportion of variance accounted for in one variable by the other.
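A short sketch computing r from the z-score formula above and r^2 as its square, on made-up paired data (numpy's corrcoef is used only to verify the result):

```python
import numpy as np

# Hypothetical paired observations (e.g., hours studied vs. exam score).
x = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([65.0, 70.0, 72.0, 80.0, 86.0])

# r = sum(Z_X * Z_Y) / N, using population (ddof=0) z-scores.
z_x = (x - x.mean()) / x.std(ddof=0)
z_y = (y - y.mean()) / y.std(ddof=0)
r = (z_x * z_y).sum() / len(x)

print(f"r   = {r:.3f}")
print(f"r^2 = {r**2:.3f}  (proportion of variance accounted for)")
print("numpy check:", np.corrcoef(x, y)[0, 1])  # should match r
```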
Linear Regression:
Purpose: To examine how well an independent (predictor) variable predicts a dependent (criterion) variable.
Regression Equation: \hat{Y} = a + bX where:
a = intercept
b = slope (change in Y for a unit change in X).
Least Squares Criterion: Minimize the sum of squared differences between observed and predicted values.
Coefficient of Determination (R²): The percentage of variation in the criterion variable explained by the predictor.
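A sketch of the least-squares fit and R² on the same made-up data as the correlation example; the slope and intercept estimates follow directly from the least squares criterion:

```python
import numpy as np

# Same hypothetical predictor/criterion pair as above.
x = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([65.0, 70.0, 72.0, 80.0, 86.0])

# Least-squares estimates: b = slope, a = intercept.
b = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
a = y.mean() - b * x.mean()

y_hat = a + b * x                      # predicted values
ss_res = ((y - y_hat) ** 2).sum()      # residual (unexplained) variation
ss_tot = ((y - y.mean()) ** 2).sum()   # total variation in the criterion
r_squared = 1 - ss_res / ss_tot

print(f"Y_hat = {a:.2f} + {b:.2f} * X")
print(f"R^2 = {r_squared:.3f}")
```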
Chi-Square Test:
Purpose: Assess relationships between categorical variables.
Observed vs. Expected Frequencies: Used to determine if distributions differ significantly from expectations.
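An illustrative chi-square sketch on a hypothetical 2x2 contingency table, comparing observed frequencies with the frequencies expected under independence:

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 contingency table of observed frequencies
# (e.g., treatment group vs. improvement yes/no).
observed = np.array([[30, 10],
                     [18, 22]])

chi2, p, dof, expected = stats.chi2_contingency(observed)

print("Expected frequencies under independence:\n", expected.round(1))
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")  # small p -> distributions differ from expectation
```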
Study Tips:
Understand definitions and formulas.
Familiarize yourself with different types of errors in hypothesis testing and their implications.
Practice ANOVA calculations and understand the significance of F-ratios.
Review correlation coefficients and regression to familiarize yourself with predictive relationships.