Vocabulary flashcards covering experimental design, statistical measures, and APA report formatting based on the Psychology 2101-11 Midterm II exam materials.
Reversal (ABA) Design
An experimental design involving three steps: establishing a baseline (A), applying an intervention (B), and then reverting to the baseline (A) by removing the intervention.
Baseline
The initial phase of a study where target behavior is measured before any intervention is applied.
Intervention
The phase of a study where a specific treatment, reward, or manipulation is introduced to observe its effect on behavior.
Order Effects
Confounding variables that occur due to the sequence in which conditions are presented; these are controlled in within-subjects designs using counterbalancing.
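Full counterbalancing means running every possible condition order across participants. A minimal sketch with Python's `itertools`, using three hypothetical conditions:

```python
from itertools import permutations

# Hypothetical within-subjects conditions; full counterbalancing
# assigns each participant one of the possible presentation orders.
conditions = ["A", "B", "C"]
orders = list(permutations(conditions))

print(len(orders))  # 3! = 6 possible orders
```

With many conditions the number of orders (k!) grows quickly, which is why partial schemes such as Latin squares are often used instead.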
Factor
An independent variable in a factorial research design.
Main Effect
The separate contribution or individual influence of a single independent variable on the dependent variable in a factorial design.
Interaction
The effect that occurs when the impact of one independent variable depends on the level of another independent variable.
Cohen's d
The most commonly used measure of effect size when comparing the means of two groups on a quantitative dependent variable.
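Cohen's d is the difference between the two group means divided by a pooled standard deviation. A sketch with hypothetical scores, using the simple pooled-SD formula:

```python
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled_sd

# Hypothetical scores for two groups
treatment = [8, 9, 7, 10, 9]
control = [6, 7, 5, 8, 6]
d = cohens_d(treatment, control)  # about 1.93, a large effect by Cohen's guidelines
```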
Predictor Variables
The independent variables used in a multiple regression analysis to estimate the value of a criterion variable.
Criterion Variable
The dependent variable in a multiple regression analysis that researchers attempt to estimate or predict.
Third Variable Problem
A limitation of nonexperimental or cross-sectional designs where an unmeasured factor (e.g., ethnicity) may explain the relationship between variables.
Cross-sectional Design
A research design that compares different groups at one point in time; it is limited because it cannot solve the directionality problem.
2 × 3 × 5 Factorial Design
A design that contains 3 factors and results in 30 unique conditions.
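The number of conditions is the product of the number of levels of each factor. A sketch that enumerates them with `itertools.product`, using hypothetical factor levels:

```python
from itertools import product

# Hypothetical levels for a 2 x 3 x 5 factorial design
factor_a = ["drug", "placebo"]            # 2 levels
factor_b = ["low", "medium", "high"]      # 3 levels
factor_c = [1, 2, 3, 4, 5]                # 5 levels

conditions = list(product(factor_a, factor_b, factor_c))
print(len(conditions))  # 2 * 3 * 5 = 30 unique conditions
```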
Mixed Factorial Design
A research design that includes at least one factor manipulated between participants and at least one factor manipulated within participants.
Statistical Control
The use of statistical techniques, such as multiple regression, to account for and control the influence of possible confounding variables.
Steady State Strategy
A method in single-subject research where the researcher waits until behavior becomes consistent over time before changing experimental conditions.
Multiple Baseline Design
A single-subject experimental design where changes in conditions (interventions) are introduced at different times for different participants, settings, or behaviors.
Visual Inspection
The primary method used to analyze data in single-subject experimental designs rather than comparing means or standard deviations.
Standard Deviation
A measure of variability or spread of scores around the mean; it can be equal to zero but never less than zero.
Median
The midpoint of a frequency distribution: the score at or below which 50% of all the scores fall.
Restriction of Range
A limitation that occurs when the sample data does not include the full range of possible values, moving the correlation coefficient closer to zero.
Abstract
A brief summary of a research article that appears before the Introduction section in APA style formatting.
Method Section
The part of an APA style research report where the researcher explains the design and the plan for data collection.
Discussion Section
The section of an APA style article that interprets the results, addresses research weaknesses, and offers suggestions for future research.
Standard Error of the Mean
The standard deviation of the sampling distribution of the mean; computed as the sample standard deviation divided by the square root of the sample size (s/√n).
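A sketch of the SEM calculation for a hypothetical sample, using the formula s/√n:

```python
from statistics import mean, stdev

# Hypothetical sample of scores
scores = [4, 6, 5, 7, 5, 6, 4, 7]
n = len(scores)
sem = stdev(scores) / n ** 0.5  # SEM = s / sqrt(n), about 0.42 here
```

Because n sits in the denominator, larger samples yield smaller SEMs and therefore more precise estimates of the population mean.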
Skewed to the Right
A distribution of scores where the majority of the data points are relatively low, and the tail extends toward the higher values.
Null Hypothesis (H₀)
The hypothesis that there is no relationship or effect in the population, and that any pattern observed in the sample is due to sampling error (chance)
Alternative Hypothesis (H₁)
The hypothesis that there is a real relationship or effect in the population, and the sample result reflects this true pattern
p Value
The probability of obtaining a sample result at least as extreme as the one observed, assuming the null hypothesis is true. It reflects how unusual the data are under H₀—not the probability that H₀ is true.
Alpha (α)
The predetermined threshold for statistical significance, usually .05, representing the probability of making a Type I error (false positive)
Statistical Significance
A result is statistically significant when p < α (usually .05), leading researchers to reject the null hypothesis and conclude that the effect is unlikely due to chance
Sampling Error
The random, chance difference between a sample statistic and the corresponding population parameter it estimates; it exists because any sample is only a subset of the population
Correct Interpretation of p Value
A p value indicates the probability of the observed data given that the null hypothesis is true, NOT the probability that the hypothesis itself is true
Practical Significance
The extent to which a result is meaningful or useful in real-world terms, regardless of whether it is statistically significant
t Test
A family of statistical tests used to compare means and determine whether differences are statistically significant
One-Sample t Test
Compares the mean of a single sample to a known or hypothesized population mean
Dependent-Samples t Test (Paired-Samples)
Compares two means from the same participants (e.g., pretest vs. posttest) using difference scores
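The paired t statistic is the mean difference score divided by the standard error of the difference scores. A sketch with hypothetical pretest/posttest data:

```python
from statistics import mean, stdev

def paired_t(pre, post):
    """Dependent-samples t: mean difference score over its standard error
    (df = N - 1, where N is the number of pairs)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / n ** 0.5)

# Hypothetical pretest/posttest scores for the same five participants
pre = [10, 12, 9, 11, 10]
post = [13, 14, 12, 15, 12]
t = paired_t(pre, post)  # about 7.48 with df = 4
```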
Independent-Samples t Test
Compares means between two different groups in a between-subjects design
Difference Score
The result of subtracting one measurement from another for the same participant, used in dependent-samples t tests
Degrees of Freedom (df)
A value based on sample size that determines the shape of the sampling distribution; varies by test (e.g., N − 1, N − 2)
ANOVA (Analysis of Variance)
A statistical test used to compare means across three or more groups, producing an F statistic
F Statistic
A ratio of between-group variability to within-group variability; larger values indicate greater likelihood of a real effect
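A sketch of the one-way ANOVA F computation for three small hypothetical groups, built from the between- and within-group sums of squares:

```python
from statistics import mean

def f_statistic(groups):
    """One-way ANOVA F: between-group mean square over within-group mean square."""
    all_scores = [x for g in groups for x in g]
    grand_mean = mean(all_scores)
    k = len(groups)
    n_total = len(all_scores)
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)      # df_between = k - 1
    ms_within = ss_within / (n_total - k)  # df_within = N - k
    return ms_between / ms_within

# Hypothetical scores for three groups with clearly different means
f = f_statistic([[2, 3, 4], [5, 6, 7], [8, 9, 10]])  # F = 27.0
```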
Post Hoc Comparisons
Follow-up tests conducted after a significant ANOVA to determine which specific group means differ, while controlling Type I error
Test of Pearson’s r
A statistical test used to determine whether the correlation between two variables differs significantly from zero in the population
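Pearson's r is the standardized covariance of two variables. A sketch computing it by hand for hypothetical paired measurements:

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson's r: covariance of x and y divided by the product of their SDs."""
    mx, my = mean(x), mean(y)
    n = len(x)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical study-hours and test-score pairs
hours = [1, 2, 3, 4, 5]
score = [2, 4, 5, 4, 5]
r = pearson_r(hours, score)  # about .77, a strong positive correlation
```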
Two-Tailed Test
A test that evaluates the possibility of an effect in both directions; the most common and conservative approach
One-Tailed Test
A test that evaluates an effect in one specified direction only, increasing power but requiring the direction to be specified in advance
Type I Error (False Positive)
Rejecting the null hypothesis when it is actually true; concluding that an effect exists when it does not
Type II Error (False Negative)
Failing to reject the null hypothesis when it is actually false; missing a real effect
Statistical Power
The probability of correctly rejecting a false null hypothesis; the ability of a study to detect a real effect (Power = 1 − Type II error)
Confidence Interval (CI)
A range of values (typically 95%) that is likely to contain the true population parameter, providing more information than a simple significance test
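A sketch of an approximate 95% CI for a hypothetical sample mean, using the large-sample normal critical value 1.96 (with small samples the appropriate t critical value would be used instead):

```python
from statistics import mean, stdev

# Hypothetical sample; CI = mean +/- 1.96 * SEM (normal approximation)
scores = [5, 6, 7, 5, 6, 8, 6, 7, 5, 6]
m = mean(scores)
sem = stdev(scores) / len(scores) ** 0.5
ci_lower, ci_upper = m - 1.96 * sem, m + 1.96 * sem
```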
Publication Bias
The tendency for journals to publish statistically significant results more often than non-significant ones, leading to a distorted scientific literature
File Drawer Problem
The phenomenon where studies with non-significant results remain unpublished, often kept in researchers’ files
Researcher Degrees of Freedom (p-Hacking)
Flexibility in data collection and analysis decisions that can inflate Type I error rates, such as selectively reporting significant results
Replication Crisis
The finding that many published studies in psychology fail to replicate, raising concerns about the reliability of research findings
Meta-Analysis
A statistical technique that combines the results of multiple studies on the same topic to estimate an overall effect size
Moderator (in Meta-Analysis)
A variable that explains variation in effect sizes across studies
Forest Plot
A graphical display showing the effect sizes from multiple studies in a meta-analysis
Open Science
A movement to make research more transparent, accessible, and reproducible
Preregistration
The practice of publicly recording hypotheses, methods, and analyses before data collection, preventing researcher bias
Registered Reports
A publication format where studies are accepted based on their research question and method before results are known, reducing publication bias
Open Data & Materials
The practice of sharing raw data and study materials publicly to allow verification and replication
Registered Replication Reports
Collaborative, preregistered replication studies conducted by multiple labs and published regardless of outcome
Open Science Badges
Journal indicators showing that a study meets standards for data sharing, materials sharing, or preregistration