Distinguish when it would be appropriate to use the different t-tests and ANOVAs, including the more advanced ones.
Choosing the correct statistical test depends on the number of independent variables (IVs) and whether you are measuring different groups or the same people multiple times. When comparing exactly two groups, use a t-test: an Independent Samples T-test compares two separate groups (like men vs. women), while a Dependent (Paired) Samples T-test compares the same people at two different times (like a pre-test and post-test). If your research involves three or more groups, you must move to an ANOVA to avoid inflating your chance of error. A One-Way ANOVA handles a single IV with three or more levels, such as comparing three different medication types. For more complex studies with two or more IVs, use a Factorial ANOVA to see how variables interact, or a Mixed Factorial ANOVA if you have a combination of separate groups and repeated measurements. If you need to "cancel out" the influence of an extra variable you aren't studying, use an ANCOVA, or use a MANOVA if you are measuring multiple related outcomes at the same time.
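The decision rule above can be sketched in SciPy. This is a minimal illustration with made-up group values (not data from the notes), showing which function matches each design:

```python
# Sketch: matching the design to the test (illustration data only).
from scipy import stats

# Two SEPARATE groups -> Independent Samples T-test
men = [5.1, 4.8, 5.5, 5.0, 4.9]
women = [5.6, 5.9, 5.4, 6.0, 5.7]
t_ind, p_ind = stats.ttest_ind(men, women)

# SAME people measured twice -> Dependent (Paired) Samples T-test
pre = [10, 12, 9, 11, 13]
post = [12, 14, 10, 13, 15]
t_dep, p_dep = stats.ttest_rel(pre, post)

# Three or more groups on ONE IV -> One-Way ANOVA
drug_a = [3, 4, 5, 4]
drug_b = [6, 7, 6, 8]
drug_c = [9, 9, 10, 8]
f_val, p_anova = stats.f_oneway(drug_a, drug_b, drug_c)
```

Factorial, mixed, ANCOVA, and MANOVA designs need a model-based package (e.g., statsmodels) rather than these one-call functions.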
Know when to use non-parametric tests
- Parametric tests: When all assumptions are met; requires a normal distribution
- Non-parametric tests: When an assumption is violated; does not require a normal distribution
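A small sketch of that decision in practice, using made-up data: check the normality assumption with a Shapiro-Wilk test, then fall back to the non-parametric Mann-Whitney U test if it is violated.

```python
# Sketch: parametric vs. non-parametric choice (illustration data only).
from scipy import stats

group_a = [1, 2, 2, 3, 3, 4, 100, 120]  # extreme outliers: clearly non-normal
group_b = [5, 6, 7, 8, 9, 10, 11, 12]

# Shapiro-Wilk: a small p-value means normality is violated
w_a, p_norm_a = stats.shapiro(group_a)
w_b, p_norm_b = stats.shapiro(group_b)

if p_norm_a >= .05 and p_norm_b >= .05:
    test_used = "ttest_ind"       # parametric: assumptions met
    stat, p = stats.ttest_ind(group_a, group_b)
else:
    test_used = "mannwhitneyu"    # non-parametric: assumption violated
    stat, p = stats.mannwhitneyu(group_a, group_b)
```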
Distinguish between power and effect size
Power: The probability that a test detects a statistically significant difference when a real difference exists (i.e., correctly rejects a false null hypothesis)
Effect size: How large the difference actually is, independent of sample size (e.g., Cohen's d)
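To make the effect-size half concrete, here is a minimal Cohen's d computation (standardized mean difference using the pooled SD) on made-up control/treated scores:

```python
# Sketch: Cohen's d as a standardized mean difference (illustration data).
import numpy as np

def cohens_d(x, y):
    """Effect size: mean difference divided by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = (((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                  / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

control = [10, 11, 12, 13, 14]
treated = [12, 13, 14, 15, 16]
d = cohens_d(treated, control)  # about 1.26: a large effect by Cohen's benchmarks
```

Power, by contrast, is not computed from the sample alone; it depends on the effect size, alpha level, and sample size together (power-analysis tools such as statsmodels' `TTestIndPower` solve for it).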
Know when you SHOULD use a one-tailed t-test
Use it when a difference is expected ONLY in one direction; for hypotheses that predict a difference in one direction and ignore the possibility of the opposite direction.
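A short sketch of the one-tailed vs. two-tailed distinction in SciPy (the `alternative` argument requires SciPy 1.6+; the scores are made up). The hypothesis here predicts the treated group scores higher, and nothing else:

```python
# Sketch: one-tailed vs. two-tailed t-test (illustration data only).
from scipy import stats

treated = [14, 15, 16, 17, 18]  # predicted to be HIGHER than control
control = [10, 11, 12, 13, 14]

# alternative='greater' tests only mean(treated) > mean(control)
res_one = stats.ttest_ind(treated, control, alternative='greater')
res_two = stats.ttest_ind(treated, control)  # default two-sided

# With the difference in the predicted direction, the one-tailed p-value
# is half the two-tailed p-value, so the one-tailed test has more power.
```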
Define homoscedasticity and multicollinearity
Homoscedasticity: When there is similar variance across different groups
Multicollinearity (must not be present for ANOVAs): The IVs are extremely correlated with each other, making it hard to distinguish their individual effects
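Both ideas can be checked numerically; this is a minimal sketch on made-up values. Levene's test has the null hypothesis of equal variances (homoscedasticity), and a correlation near |1| between two IVs flags multicollinearity:

```python
# Sketch: diagnosing the two assumptions (illustration data only).
import numpy as np
from scipy import stats

# Homoscedasticity: Levene's test, H0 = variances are equal across groups
group1 = [5, 6, 5, 6, 5, 6]      # low spread
group2 = [1, 10, 2, 9, 1, 10]    # high spread
lev_stat, p_levene = stats.levene(group1, group2)
# small p -> variances differ -> homoscedasticity is violated

# Multicollinearity: two IVs that are nearly multiples of each other
iv1 = [1, 2, 3, 4, 5]
iv2 = [2, 4, 6, 8.1, 10]
r = np.corrcoef(iv1, iv2)[0, 1]  # |r| near 1 flags multicollinearity
```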
Determine significance using t and F tables
To determine significance using statistical tables, you compare your calculated test statistic against a "critical value" found in the table at a predetermined alpha level, usually .05 in psychology. When using a t-table, you first identify your degrees of freedom (calculated as n1 + n2 − 2 for independent samples or N(pairs) − 1 for dependent samples) and then locate the intersection of that row with your chosen alpha column. For the F-table used in ANOVAs, you must coordinate two different degrees of freedom: the numerator (df-between, or k − 1) and the denominator (df-within, or N − k). In both cases, the decision rule is identical: if your calculated value is greater than the table's critical value, the result is statistically significant (p < .05), leading you to reject the null hypothesis. While modern software like JASP provides exact p-values that make these tables less necessary for daily analysis, understanding this "cutoff" logic remains fundamental to interpreting whether an observed difference is likely due to your intervention or merely random chance.