Confounding Variables:
These are variables that influence both the IV and DV, making it unclear whether changes in the DV are due to the IV or the confound.
Example: Studying the effect of exercise on weight loss while ignoring diet, which could also impact weight loss.
Internal Validity Threats:
Selection Bias: Non-random assignment of participants to groups.
History: Events outside the experiment influencing results.
Maturation: Natural changes in participants over time.
Regression to the Mean: Extreme scores tend to normalize over time.
Attrition: Participants dropping out affects group composition.
Testing Effects: Practice or fatigue from repeated testing.
Instrumentation: Changes in measurement tools or procedures.
Independent Variables (IV):
The variable that is manipulated to observe its effect on the DV.
Example: Amount of sleep (4, 6, or 8 hours).
Dependent Variables (DV):
The variable that is measured; its value depends on the IV.
Example: Performance on a memory test.
Independent vs. Dependent Groups:
Independent (Between-subjects): Different participants in each group. Each experiences a single condition.
Dependent (Within-subjects): Same participants in all conditions.
Random Assignment vs. Random Selection:
Random Assignment: Ensures equal chance of participants being in any group, reducing bias within the study.
Random Selection: Increases external validity by ensuring the sample represents the population.
Manipulation Checks: Measures used to confirm that the IV was effectively manipulated.
Example: If testing stress effects, measure participants’ stress levels to confirm the manipulation was effective.
Demand Characteristics: Cues that reveal the experiment’s purpose, causing participants to alter their behavior.
Participants alter their behavior based on perceived expectations.
Minimize by using double-blind designs or deception.
Frequency Tables: Organize data into categories and show counts or percentages.
When to use: Categorical data.
Bar Graphs: Compare categories using bars.
When to use: Categorical data.
Histograms: Display frequency distributions of continuous data.
When to use: Continuous data.
Line Graphs: Show trends over time or continuous variables.
When to use: Time-series or interval data.
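A frequency table like the one described above can be built in a few lines of Python (the data here are hypothetical categorical responses, used only to illustrate):

```python
from collections import Counter

# Hypothetical categorical data: participants' preferred study method
responses = ["flashcards", "rereading", "flashcards", "practice tests",
             "flashcards", "practice tests", "rereading", "flashcards"]

counts = Counter(responses)          # category -> count
total = len(responses)

# Print a simple frequency table: category, count, percentage
for category, count in counts.most_common():
    print(f"{category:15s} {count:3d} {100 * count / total:5.1f}%")
```

The same counts could feed a bar graph; a histogram would instead bin continuous scores.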
Measures of Central Tendency:
Mean: Average.
Median: Middle score.
Mode: Most frequent score.
Variability: Spread of scores (range, variance, standard deviation).
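The central tendency and variability measures above can be computed directly with Python's standard library (the quiz scores are hypothetical):

```python
import statistics

# Hypothetical quiz scores (interval data)
scores = [4, 5, 5, 6, 7, 7, 7, 9]

mean = statistics.mean(scores)        # average
median = statistics.median(scores)    # middle score
mode = statistics.mode(scores)        # most frequent score

# Variability: spread of the scores
rng = max(scores) - min(scores)       # range
var = statistics.variance(scores)     # sample variance (n - 1 denominator)
sd = statistics.stdev(scores)         # sample standard deviation
```

Note that `statistics.variance` uses the sample (n − 1) denominator; `statistics.pvariance` gives the population version.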
Types of Distributions: Normal, skewed (positive/negative), bimodal, etc.
Characteristics of a Normal Distribution: Symmetrical, bell-shaped, mean = median = mode.
Z-scores: Standardized scores indicating how far a score is from the mean in standard deviations: z = (X − μ) / σ.
Why useful: Compare scores across different scales.
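A quick sketch of why z-scores let you compare across scales (the test means and SDs are hypothetical):

```python
def z_score(x, mu, sigma):
    """How many standard deviations x lies from the mean."""
    return (x - mu) / sigma

# Hypothetical: compare performance on two tests with different scales
z_math = z_score(82, mu=70, sigma=8)       # 1.5 SDs above the mean
z_verbal = z_score(560, mu=500, sigma=50)  # 1.2 SDs above the mean

# On a standardized scale, the math score is relatively stronger,
# even though the raw numbers are not directly comparable
```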
Sampling Distributions: Distribution of a statistic (e.g., mean) across samples.
Why use: Infer population parameters.
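A sampling distribution can be simulated: draw many samples from a population, record each sample's mean, and the collection of means is the sampling distribution of the mean. A minimal sketch (the population of 1–100 is hypothetical):

```python
import random
import statistics

random.seed(0)  # reproducible simulation

population = list(range(1, 101))            # population: 1..100
pop_mean = statistics.mean(population)      # 50.5

# Each sample's mean is one observation from the sampling distribution
sample_means = [
    statistics.mean(random.sample(population, k=25))
    for _ in range(2000)
]

# The mean of the sample means approximates the population mean,
# which is why sampling distributions let us infer population parameters
grand_mean = statistics.mean(sample_means)
```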
Descriptive vs. Inferential Statistics:
Descriptive: Summarize data (e.g., mean).
Inferential: Draw conclusions about a population from sample data.
Null vs. Alternative Hypothesis:
Null: No effect. (H0)
Alternative: Some effect exists. (H1)
Criterion Values/Alpha Levels: Threshold (e.g., 0.05) for rejecting the null hypothesis.
Representativeness of a Sample: The degree to which the sample reflects the population.
Reject vs. Fail to Reject:
Reject: Evidence supports the alternative hypothesis.
Fail to Reject: Insufficient evidence to support the alternative.
One-Tailed vs. Two-Tailed:
One-tailed: Predicts direction.
Two-tailed: Tests for any difference.
When to Run a z-Test: Compare the sample mean to the population mean with known population SD.
z Obtained vs. z Critical:
z Obtained: Calculated z-score.
z Critical: Cutoff value based on alpha.
Standard Error of the Mean (SEM): SEM = σ / √n; measures variability of the sample mean.
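The z-test pieces above fit together in a few lines; the population values and sample mean here are hypothetical:

```python
import math

# Hypothetical: population mean 100, known population SD 15;
# sample of 36 participants with sample mean 105
mu, sigma, n, sample_mean = 100, 15, 36, 105

sem = sigma / math.sqrt(n)                 # standard error of the mean
z_obtained = (sample_mean - mu) / sem      # calculated z-score
z_critical = 1.96                          # two-tailed cutoff at alpha = .05

reject_null = abs(z_obtained) > z_critical
```

Here z obtained (2.0) exceeds z critical (1.96), so the null is rejected.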
Type I and Type II Errors:
Type I: Rejecting true null hypothesis (false positive).
Type II: Failing to reject false null hypothesis (false negative).
Power: Probability of correctly rejecting the null.
Influenced by: Sample size, effect size, and error variability.
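The influence of sample size and effect size on power can be seen analytically for a one-tailed z-test; this sketch uses only the standard library, and the effect sizes are hypothetical:

```python
import math
from statistics import NormalDist

def z_test_power(effect_size, n, alpha=0.05):
    """Power of a one-tailed z-test: probability of rejecting a false null.
    effect_size is Cohen's d = (mu1 - mu0) / sigma."""
    z_crit = NormalDist().inv_cdf(1 - alpha)
    # Under the alternative, the test statistic is centered at d * sqrt(n)
    return 1 - NormalDist().cdf(z_crit - effect_size * math.sqrt(n))

# Same effect size, larger sample -> higher power
low = z_test_power(effect_size=0.5, n=10)
high = z_test_power(effect_size=0.5, n=50)
```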
One-Sample t-Test:
When to Run: Compare the sample mean to the population mean when the population SD is unknown.
t-Test vs. z-Test:
z-Test: Known population SD.
t-Test: Estimate SD from the sample.
t-Distribution vs. Normal Sampling Distribution: The t-distribution is wider (heavier-tailed) to account for small sample sizes; it approaches the normal distribution as df increase.
Assumptions: Independence, normality, interval/ratio data.
Effect Size: Quantifies magnitude of difference (e.g., Cohen’s d).
Confidence Interval: Range likely to include population mean.
APA Style Reporting: Report as t(df) = value, p = value, along with an effect size (e.g., Cohen's d).
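A one-sample t-test can be computed by hand with the standard library; the sample data and null-hypothesis mean here are hypothetical, and the critical t is a looked-up table value:

```python
import math
import statistics

# Hypothetical sample; null hypothesis: population mean = 50
sample = [52, 48, 55, 53, 51, 49, 54, 56, 50, 52]
mu0 = 50

n = len(sample)
mean = statistics.mean(sample)
sd = statistics.stdev(sample)              # SD estimated from the sample
sem = sd / math.sqrt(n)                    # standard error of the mean

t_obtained = (mean - mu0) / sem
cohens_d = (mean - mu0) / sd               # effect size

# Critical t for df = 9, two-tailed alpha = .05 (from a t-table)
t_critical = 2.262
significant = abs(t_obtained) > t_critical

# 95% confidence interval for the population mean
ci_95 = (mean - t_critical * sem, mean + t_critical * sem)
```

Estimating the SD from the sample (rather than knowing σ) is exactly what distinguishes this from the z-test above.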
Independent- and Paired-Samples t-Tests:
When to Run: Compare the means of two groups.
Assumptions: Independence, normality, equal variances (for independent samples).
Between vs. Within Groups Variability:
Between: Differences between group means.
Within: Variability within each group.
Effect Size: Quantifies group differences.
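An independent-samples t-test, using the pooled variance that the equal-variances assumption licenses (the two groups' scores are hypothetical):

```python
import math
import statistics

# Hypothetical scores for two independent (between-subjects) groups
group_a = [10, 12, 11, 13, 14]
group_b = [8, 9, 10, 7, 11]

n1, n2 = len(group_a), len(group_b)
m1, m2 = statistics.mean(group_a), statistics.mean(group_b)
v1, v2 = statistics.variance(group_a), statistics.variance(group_b)

# Pooled variance: weighted average of the two within-group variances
pooled_var = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
se_diff = math.sqrt(pooled_var * (1 / n1 + 1 / n2))

t_obtained = (m1 - m2) / se_diff                 # df = n1 + n2 - 2
cohens_d = (m1 - m2) / math.sqrt(pooled_var)     # effect size
```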
One-Way ANOVA:
Difference from t-Test: Tests 3+ group means simultaneously.
Why Not Multiple t-Tests: Increases Type I error rate.
Assumptions: Normality, independence, equal variances.
Null vs. Alternative:
Null: All group means equal.
Alternative: At least one differs.
Between vs. Within Groups Variance: Variance due to IV vs. random error.
F-Distribution: Positively skewed; used in ANOVA.
Posthoc Tests: Run when the F-test is significant to identify specific differences.
Assumptions: Independence, normality, equal variances.
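The between- vs. within-groups variance partition behind the F-test can be computed by hand; the three groups of scores are hypothetical:

```python
import statistics

# Hypothetical scores for three independent groups
groups = [
    [3, 4, 5, 4],   # group 1
    [6, 7, 6, 5],   # group 2
    [8, 9, 7, 8],   # group 3
]

all_scores = [x for g in groups for x in g]
grand_mean = statistics.mean(all_scores)
k = len(groups)                  # number of groups
n_total = len(all_scores)

# Between-groups: how far each group mean falls from the grand mean (IV effect)
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
# Within-groups: variability around each group's own mean (random error)
ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)

ms_between = ss_between / (k - 1)          # df between = k - 1
ms_within = ss_within / (n_total - k)      # df within = N - k
f_obtained = ms_between / ms_within
```

A large F means the variance due to the IV dwarfs the error variance; a significant F would then be followed by posthoc tests.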
Factorial ANOVA:
Notation: e.g., 2x3 factorial design (2 IVs, one with 2 levels, one with 3 levels).
Why Use Factorial Designs: Examine interactions between IVs.
Main Effects vs. Interactions:
Main Effects: Independent effect of each IV.
Interactions: Combined effect of IVs.
APA Reporting: Include F(df between, df within) = value, p value, and partial eta squared.
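Main effects and an interaction can be read off a table of cell means; this sketch uses a hypothetical 2x2 design (IV A x IV B):

```python
# Hypothetical cell means for a 2x2 factorial design
cell_means = {
    ("a1", "b1"): 10, ("a1", "b2"): 14,
    ("a2", "b1"): 12, ("a2", "b2"): 24,
}

# Main effect of A: compare marginal means, averaging across levels of B
a1 = (cell_means[("a1", "b1")] + cell_means[("a1", "b2")]) / 2
a2 = (cell_means[("a2", "b1")] + cell_means[("a2", "b2")]) / 2
main_effect_a = a2 - a1

# Main effect of B: compare marginal means, averaging across levels of A
b1 = (cell_means[("a1", "b1")] + cell_means[("a2", "b1")]) / 2
b2 = (cell_means[("a1", "b2")] + cell_means[("a2", "b2")]) / 2
main_effect_b = b2 - b1

# Interaction: the effect of B is not the same at each level of A
b_effect_at_a1 = cell_means[("a1", "b2")] - cell_means[("a1", "b1")]
b_effect_at_a2 = cell_means[("a2", "b2")] - cell_means[("a2", "b1")]
interaction = b_effect_at_a2 - b_effect_at_a1   # nonzero suggests an interaction
```

With these numbers, B raises scores by 4 points at a1 but by 12 points at a2, so the IVs interact rather than acting independently.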