Why do you divide by the expected frequencies or probabilities in chi-squared tests?
Dividing by the expected frequencies standardizes the squared differences between observed and expected values, so a given deviation counts for more in a cell with a small expected count; without it, cells with large expected counts would dominate the statistic.
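A minimal numpy sketch with made-up die-roll counts, showing how each squared deviation is scaled by its expected count:

```python
import numpy as np

# Hypothetical counts from 60 rolls of a supposedly fair die.
observed = np.array([8, 12, 9, 11, 10, 10])
expected = np.full(6, 60 / 6)  # 10 expected per face under fairness

# Each squared deviation is divided by its expected count, so a
# deviation of 2 counts for more in a cell expecting 10 than in
# one expecting 100.
chi_sq = np.sum((observed - expected) ** 2 / expected)
print(chi_sq)
```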
Why do the critical values for a chi-squared distribution get larger as the degrees of freedom increase?
Chi-squared critical values grow with degrees of freedom because the distribution becomes more spread out; more independent comparisons require a higher threshold for significance.
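This can be seen directly from scipy's chi-squared quantile function (a sketch, assuming scipy is available):

```python
from scipy.stats import chi2

# 95th-percentile critical values for increasing degrees of freedom:
# each extra degree of freedom shifts the distribution right, raising
# the threshold needed for significance.
for df in (1, 5, 10, 30):
    print(df, round(chi2.ppf(0.95, df), 2))
```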
How do outliers affect the results of a t-test, chi-squared test, and correlation/regression analysis?
Outliers can inflate the standard deviation in t-tests (shrinking the t-statistic) and pull slopes and correlations toward themselves in regression; they matter less for chi-squared tests, which operate on categorical counts rather than numeric values.
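A small numpy illustration of the first point, using hypothetical scores:

```python
import numpy as np

scores = np.array([10.0, 11, 9, 10, 12, 11, 10])
with_outlier = np.append(scores, 40.0)

# One extreme value inflates the sample standard deviation, which
# widens the t-test's standard error and can mask a real mean difference.
print(scores.std(ddof=1), with_outlier.std(ddof=1))
```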
Explain how overgeneralization can affect your predicted values in a regression.
Overgeneralization can lead to inaccurate predictions when a regression model is applied outside the range of observed data, assuming the same relationship holds.
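A sketch with hypothetical study-hours data: the fitted line behaves sensibly inside the observed range but yields an impossible score when extrapolated far beyond it:

```python
import numpy as np

# Hypothetical data: hours studied (2-8) vs. exam score, linear in range.
hours = np.array([2, 3, 4, 5, 6, 7, 8], dtype=float)
score = np.array([55, 60, 66, 70, 76, 80, 85], dtype=float)

slope, intercept = np.polyfit(hours, score, 1)

# Interpolating within the observed 2-8 range is reasonable...
print(slope * 5.5 + intercept)
# ...but extrapolating to 20 hours predicts a score above 100,
# an impossible value: the linear trend cannot hold indefinitely.
print(slope * 20 + intercept)
```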
Why do we need to test for linearity in correlation and regression analysis?
Testing for linearity checks if the assumption of a linear relationship is valid; non-random patterns in residuals indicate non-linearity.
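A quick residual check on deliberately curved (quadratic) data; the U-shaped residual pattern is the non-random signature the answer refers to:

```python
import numpy as np

# Hypothetical curved data: y grows quadratically, not linearly.
x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = x ** 2

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

# Residuals are positive at the ends and negative in the middle:
# a systematic U-shape signals a straight line is the wrong model.
print(residuals)
```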
What are the similarities and/or differences between Pearson’s r and Cohen’s d?
Both measure effect sizes; Pearson’s r assesses the strength and direction of relationships, while Cohen’s d measures standardized mean differences between groups.
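Both can be computed in a few lines of numpy on made-up data (Cohen's d here uses the pooled-SD form, assuming equal group sizes):

```python
import numpy as np

# Pearson's r: strength and direction of a linear relationship.
x = np.array([1.0, 2, 3, 4, 5])
y = np.array([2.0, 4, 5, 4, 5])
r = np.corrcoef(x, y)[0, 1]

# Cohen's d: standardized difference between two group means.
g1 = np.array([10.0, 12, 11, 13, 12])
g2 = np.array([8.0, 9, 10, 9, 8])
pooled_sd = np.sqrt((g1.var(ddof=1) + g2.var(ddof=1)) / 2)
d = (g1.mean() - g2.mean()) / pooled_sd

print(round(r, 2), round(d, 2))
```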
Explain the similarities and/or differences between the chi-squared, t, and F distributions.
All three are sampling distributions used in hypothesis testing; they differ in typical use: chi-squared for categorical counts, t for means with small samples or unknown population variance, and F for comparing variances, as in ANOVA.
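The distributions are also mathematically linked: squaring a t critical value gives the corresponding F(1, df) critical value (a scipy sketch, assuming scipy is available):

```python
from scipy import stats

# Two-tailed t critical value at alpha = .05 with df = 20...
t_crit = stats.t.ppf(0.975, 20)
# ...equals the square root of the F(1, 20) critical value at .05.
f_crit = stats.f.ppf(0.95, 1, 20)
print(round(t_crit ** 2, 4), round(f_crit, 4))
```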
What is the difference between an ANOVA and an independent samples t-test?
The t-test compares means of two groups, while ANOVA compares means of three or more groups by analyzing variance.
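A scipy sketch with simulated data; note that with exactly two groups the two tests coincide, since F = t²:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(10, 2, 30)
b = rng.normal(11, 2, 30)
c = rng.normal(12, 2, 30)

# Two groups: independent-samples t-test.
t, p_t = stats.ttest_ind(a, b)

# Three or more groups: one-way ANOVA.
f, p_f = stats.f_oneway(a, b, c)

# With exactly two groups the tests agree: F = t^2, same p-value.
f2, p_f2 = stats.f_oneway(a, b)
print(round(t ** 2, 6), round(f2, 6))
```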
Explain why F = 1 in an ANOVA when the null hypothesis is true.
When the null hypothesis is true, the between-group and within-group estimates both estimate the same population variance, so their ratio F is expected to be approximately 1 (not exactly 1 in any single sample).
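A simulation sketch with hypothetical normal data: drawing all groups from one population (a true null) and averaging many F-ratios gives a value near 1:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# All three groups come from the same population, so between-group
# variance estimates the same quantity as within-group variance and
# the F-ratio averages close to 1.
fs = [stats.f_oneway(*(rng.normal(50, 10, 20) for _ in range(3)))[0]
      for _ in range(2000)]
print(round(float(np.mean(fs)), 2))
```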
Why do we need to test for homogeneity of variances when conducting an ANOVA?
Testing for homogeneity ensures that groups have similar variability; violations can skew F-ratios and mislead conclusions.
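Levene's test is a common check; a sketch with simulated groups of deliberately unequal spread:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
equal_spread = rng.normal(0, 1, 50)
wide_spread = rng.normal(0, 5, 50)

# Levene's test: a small p-value flags unequal variances, warning
# that the standard ANOVA F-ratio may be unreliable.
stat, p = stats.levene(equal_spread, wide_spread)
print(p < 0.05)
```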
Describe the two ways in which you estimate the population variance in an ANOVA.
Between-group variance measures variability of group means; within-group variance is based on variability of individual scores within each group.
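For equal group sizes, the two estimates can be written in a few lines of numpy (hypothetical scores):

```python
import numpy as np

# Hypothetical scores from three groups of equal size n = 4.
groups = [np.array([4.0, 5, 6, 5]),
          np.array([7.0, 8, 6, 7]),
          np.array([9.0, 10, 11, 10])]
n = 4

# Between-group estimate: variance of the group means, scaled by n.
ms_between = n * np.var([g.mean() for g in groups], ddof=1)

# Within-group estimate: average of the variances inside each group.
ms_within = np.mean([g.var(ddof=1) for g in groups])

print(round(ms_between / ms_within, 2))  # the F-ratio
```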
Are ANOVAs one-tailed or two-tailed tests?
ANOVAs test a non-directional hypothesis (any difference among the group means counts), which is why they are often described as two-tailed, even though only the upper tail of the F distribution forms the rejection region.
Explain the difference between parametric and non-parametric tests.
Parametric tests assume interval/ratio data and approximately normally distributed populations, and are more powerful when those assumptions hold; non-parametric tests make no normality assumption and suit ordinal or heavily skewed data.
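A scipy sketch contrasting the two approaches on deliberately skewed data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Exponential data is heavily skewed, straining the t-test's
# normality assumption.
a = rng.exponential(1.0, 25)
b = rng.exponential(2.0, 25)

# Parametric: compares means, assumes roughly normal sampling distributions.
t, p_t = stats.ttest_ind(a, b)

# Non-parametric: rank-based, so it needs no normality assumption.
u, p_u = stats.mannwhitneyu(a, b)

print(round(p_t, 4), round(p_u, 4))
```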