Quasi-experiment
A study that is similar to an experiment but lacks random assignment to conditions. Researchers examine the effect of an independent variable, but participants are assigned to groups based on factors that are not under the researcher’s control (e.g., gender, classrooms, naturally occurring groups).
Quasi-independent Variable
A variable that functions like an independent variable in a quasi-experiment but is not manipulated by the researcher. It is usually a pre-existing characteristic or grouping (e.g., age group, school attended, before vs. after an event).
Nonequivalent control group posttest-only design
A quasi-experimental design in which participants are assigned to groups (treatment and control) without random assignment, and only a posttest is given to measure outcomes. There is no pretest to assess baseline equivalence.
Nonequivalent control group pretest/posttest-only design
A quasi-experimental design that includes a pretest and posttest for both the treatment and control groups, but participants are not randomly assigned to the groups. This helps assess changes over time and control for initial group differences.
Interrupted time-series design
A quasi-experimental design where a single group is measured on a dependent variable multiple times before and after an intervention or event. The goal is to detect whether the “interruption” caused a significant change in the trend.
Nonequivalent Control Group Interrupted Time-Series Design
A more advanced design combining elements of a nonequivalent control group and an interrupted time-series. Two or more groups are compared over time, with one group experiencing an intervention or event, allowing researchers to evaluate effects more confidently.
Wait-list design
A quasi-experimental design where all participants plan to receive the treatment, but some receive it later than others. The group waiting acts as a control during the delay, helping to assess the treatment's effectiveness.
Small-N Design
A research design that focuses on a small number of participants (often just a few) to gather detailed data over time. Often used in clinical or applied settings to assess individual responses to interventions
Stable-baseline Design
A type of small-N design where researchers observe a participant’s behavior for a long baseline period before introducing the treatment. If behavior remains stable during baseline and changes after treatment, this suggests an effect.
Multiple-baseline Design
A small-N design where researchers stagger the introduction of the treatment across different times, situations, or individuals. This helps control for external factors and strengthens causal inference
Reversal Design
Also known as ABA or ABAB design, this involves introducing and then removing the treatment to see if the behavior returns to baseline. It helps confirm that the treatment, not other factors, caused the change.
Single-N design
A type of small-N design that focuses on just one participant. It involves repeated, systematic measurement and often includes baseline and treatment phases to assess changes in behavior.
Replicable
A study is replicable if its results can be obtained again when the study is repeated. Replication is essential for confirming the reliability of scientific findings.
Direct Replication
A type of replication in which researchers repeat an original study as closely as possible to see whether the same results are obtained with a new sample.
Conceptual Replication
A replication that tests the same hypothesis as the original study, but uses different methods, operational definitions, or procedures to see if the effect generalizes across settings.
Replication-plus-extension
A replication study that repeats the original experiment but adds new variables or conditions to test additional questions or expand on the original findings.
Scientific literature
The collection of all published studies in a particular area of research. It includes original studies, review articles, and theoretical papers.
Meta-analysis
A statistical technique that combines the results of many studies on the same topic to estimate the overall effect size and identify patterns among study results.
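A minimal sketch of the fixed-effect (inverse-variance-weighted) approach a meta-analysis might use: each study's effect size is weighted by the inverse of its variance, so larger, more precise studies count more toward the pooled estimate. The effect sizes and variances below are hypothetical, not from real studies.

```python
def fixed_effect_meta(effects, variances):
    """Return the inverse-variance-weighted mean effect size."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    return pooled

# Hypothetical effect sizes (Cohen's d) and variances from five studies
effects = [0.40, 0.25, 0.55, 0.10, 0.30]
variances = [0.04, 0.02, 0.09, 0.01, 0.05]

print(round(fixed_effect_meta(effects, variances), 3))  # -> 0.216
```

Note how the pooled estimate sits closest to the effects from the low-variance (most precise) studies.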
File Drawer problem
A problem in scientific literature where studies with null or non-significant results are less likely to be published, meaning the published literature may overestimate the true effect size.
HARKing
Stands for "Hypothesizing After the Results are Known." This refers to creating or revising a hypothesis after seeing the results, which threatens the integrity of the research process.
P-hacking
The practice of manipulating data or analysis (e.g., stopping data collection early, removing outliers, testing many variables) to produce a statistically significant result (p < .05).
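A quick simulation of why one p-hacking tactic (testing many outcome variables) inflates false positives: under a true null hypothesis, p-values are uniformly distributed, so each extra test is another 5% chance of a spurious "significant" result. The scenario (20 outcomes, 10,000 simulated researchers) is illustrative.

```python
import random

random.seed(42)  # reproducible simulation

def at_least_one_significant(n_tests, alpha=0.05):
    """Simulate n_tests true-null results; True if any p < alpha."""
    return any(random.random() < alpha for _ in range(n_tests))

trials = 10_000
hits = sum(at_least_one_significant(20) for _ in range(trials))
print(f"False-positive rate with 20 tests: {hits / trials:.2f}")
# Theoretical rate: 1 - 0.95**20, about 0.64 rather than the nominal 0.05
```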
Open science
A movement toward transparency in research, encouraging practices like sharing data, materials, and preregistering studies so others can verify or build on the work.
open data
When researchers share the raw data from their study publicly so that others can analyze it or reproduce the findings.
open materials
When researchers make their study materials (e.g., surveys, stimuli, instructions) publicly available to allow for replication or adaptation.
Preregistration
The practice of registering the study's hypothesis, design, and analysis plan before collecting data, which helps prevent HARKing and p-hacking.
Ecological validity
A type of external validity that refers to how well the study setting or tasks resemble real-world situations.
Theory-testing mode
A research approach focused on testing theories in controlled settings, often prioritizing internal validity over real-world applicability.
generalization mode
A research approach focused on applying findings to the real world and generalizing results to different people, places, or times; emphasizes external validity.
Cultural psychology
A subfield of psychology that studies how cultural contexts shape people’s thoughts, feelings, and behaviors. It often emphasizes generalization mode to understand diverse populations.
WEIRD
An acronym for Western, Educated, Industrialized, Rich, and Democratic. Most psychology samples come from WEIRD populations, which raises questions about whether findings generalize to the rest of the world.
Field setting
A real-world environment where a study is conducted, as opposed to a laboratory. Field settings often have higher external validity
Experimental realism
The degree to which a study is psychologically engaging and participants experience it as real and involving, regardless of whether it looks like the real world.
Experiment
A study in which the researcher manipulates one variable and measures the effect on another.
ex. Giving one group caffeine and another no caffeine to see its effect on memory performance.
Manipulated variable
The variable the researcher changes to test its effects.
Ex: The amount of caffeine given (none, low, high).
Also called the independent variable.
Measured variable
A variable the researcher records rather than manipulates, such as participants’ thoughts, feelings, or behaviors.
Ex. memory test scores after caffeine consumption
Also called the dependent variable.
condition
A level or version of the independent variable.
control variable
factors held constant across conditions so they cannot affect the outcome
Ex. time of day
comparison group
a group used to compare results against the experimental group
Ex. a group that receives no caffeine when testing caffeine effects
control group
a type of comparison group that receives no treatment or a neutral condition
Ex. participants receive a sugar pill instead of caffeine
treatment group
group receiving the manipulated variable
Ex. participants who receive caffeine
placebo group
a control group that receives a fake treatment
Ex. Participants who drink decaffeinated coffee, thinking it has caffeine.
confound
An alternative explanation for a result; a second variable that varies systematically with the independent variable, threatening internal validity.
Ex. If caffeine and sugar are both given, it’s unclear what caused the effect.
design confound
A specific kind of confound that occurs due to poor experimental design.
Ex. The caffeine group gets more attention from researchers than the control group.
Systematic variability
Variability that is related to the independent variable.
ex. Only the caffeine group gets a more engaging researcher
Unsystematic Variability
Random differences across participants that are unrelated to the IV.
Ex. Some participants are naturally more alert than others.
Selection effects
Occurs when participants in different groups are not randomly assigned.
Ex. Participants choose whether they want caffeine or not
random assignment
Participants are randomly placed in groups to avoid selection effects
Ex. Names drawn from a hat to assign groups.
matched groups
participants are matched on a key variable before random assignment
Ex. Matching by age or IQ before assigning to caffeine or control
Independent-group design
each participant experiences only one condition.
ex. One group gets caffeine, another gets a placebo.
Within-group design
Each participant experiences all conditions.
ex. Everyone tries caffeine one week and no caffeine another week.
Posttest only design
Participants are measured only once, after the manipulation.
ex: Memory is tested only after caffeine is consumed.
Advantage: simple
Disadvantage: no way to verify that random assignment created equivalent groups
Pretest/posttest designs
Participants are tested before and after the manipulation
ex: Memory is tested before and after caffeine consumption.
Advantage: controls for failures of random assignment
Disadvantage: demand characteristics
Repeated-measures design
A within-groups design where participants are exposed to each condition and measured after each.
ex. Each person takes a memory test after drinking caffeine and again after no caffeine.
Advantages: equivalence across conditions • increased statistical power • functionally doubles the sample size • decreased noise
Disadvantages: carryover effects • sensitization effects • practice effects • fatigue effects
Concurrent measures designs
Participants are exposed to all levels of the IV at the same time, and a preference or choice is recorded.
ex. Babies are shown two faces at once, one male and one female, and researchers see which they look at longer
Order effect
the order in which conditions are presented affects results
Ex. Participants do better on the second memory test because they practiced
Practice effect
improvement due to repeated exposure
Ex. Participants score higher the second time because they’re more familiar with the test.
Carryover effect
one condition affects performance in the next
Ex. Caffeine taken earlier still affects results during the second condition.
Counterbalancing
Presenting conditions in different orders to cancel out order effects.
ex. Half get caffeine first, half get placebo first
Full counterbalancing:
all possible condition orders are used.
Ex. For two conditions (A and B), participants are split between AB and BA.
partial counterbalancing
Only some of the possible condition orders are used
ex. Randomly choosing a few sequences from all possibilities
latin square
A counterbalancing technique ensuring each condition appears equally often in each ordinal position.
ex: In a study with three tasks, each task is shown first, second, and third equally across participants
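A cyclic Latin square like the one in the example can be generated mechanically: order *i* starts at condition *i* and wraps around, so each condition lands in each position exactly once. This is a minimal sketch (a simple cyclic square, not the "balanced" variant that also equates which condition precedes which).

```python
def latin_square(conditions):
    """Return n orders for n conditions; order i starts at condition i and wraps."""
    n = len(conditions)
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

for row in latin_square(["A", "B", "C"]):
    print(row)
# ['A', 'B', 'C']
# ['B', 'C', 'A']
# ['C', 'A', 'B']
# Each of A, B, C appears once in the first, second, and third position.
```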
Demand characteristic
Participants guess the study’s purpose and change their behavior.
ex: Participants try harder on a memory test if they think that’s the goal
Manipulation check
A test to see if the manipulation worked.
ex: Asking participants how alert they felt after drinking caffeine
pilot study
A small, preliminary study to test the design
Ex. Running the caffeine study on a few people to identify any issues
One-group pretest/posttest design
A study where one group is measured before and after a treatment, but no comparison group is used.
Example: Measuring stress levels before and after a meditation program in the same group.
Maturation Threat
A threat to internal validity where participants change over time naturally.
Example: Kids improve in reading simply because they’re getting older, not due to an intervention.
prevention: use a comparison group
History threat
An external event happens during the study that affects all participants.
Example: A new national health campaign starts while you're testing a health program.
prevention: use comparison group
regression threat
Extremely high or low scores tend to move closer to average on a retest due to chance.
Example: Students scoring very poorly on a pretest do better later simply by chance
prevention: use comparison group
regression to the mean
The phenomenon behind regression threat — extreme scores tend to become less extreme over time.
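Regression to the mean falls out of the idea that an observed score is a stable true score plus random error. In this simulation (hypothetical scores, all true scores equal), a group selected for extremely low pretest scores "improves" on retest with no treatment at all.

```python
import random

random.seed(1)

def observed(true_score, error_sd=10):
    """An observed score: true score plus random measurement error."""
    return true_score + random.gauss(0, error_sd)

# Everyone's true score is 100; observed scores differ only by chance.
pretest = [(i, observed(100)) for i in range(1000)]
# Select the 50 lowest pretest scorers (an "extreme" group).
extreme = sorted(pretest, key=lambda p: p[1])[:50]
retest = [observed(100) for _ in extreme]

pre_mean = sum(score for _, score in extreme) / len(extreme)
post_mean = sum(retest) / len(retest)
print(f"Extreme group's pretest mean: {pre_mean:.1f}")  # well below 100
print(f"Same group's retest mean:    {post_mean:.1f}")  # back near 100
```

The selected group was extreme partly because of unlucky error, and that error does not repeat on retest.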
Attrition Threat
occurs when participants drop out of the study between pre- and posttest.
Example: If only the least stressed participants complete the posttest, it skews results.
prevention: remove dropouts’ pretest scores from the analysis
testing threat
a threat when taking a test more than once affects scores
Example: Practice effects improve scores, not the treatment.
Instrumentation threat
A change in the measuring instrument over time.
Example: Observers become stricter or more lenient in coding behaviors over the course of a study.
Selection-history threat
An outside event affects only one group in a multiple-group design
Example: One class gets a new teacher during an educational experiment
Selection-attrition threat
One group has higher dropout rates than another, potentially biasing results.
Example: More people leave the experimental group than the control group in a therapy study.
Observer bias
bias that occurs when observer expectations influence the interpretation of participant behaviors or the outcome of the study.
Example: A researcher unconsciously rates behaviors as more positive in the treatment group.
double blind study
Neither the participants nor the researchers know who’s in the treatment or control group
This design helps reduce observer bias and demand characteristics.
masked design
Only the observers are unaware of group assignments (also called single-blind).
Example: The person rating a behavior doesn’t know whether the participant got the treatment.
placebo effect
improvement caused by participants' belief in the treatment, not the treatment itself
Example: A person feels less pain after taking a sugar pill they believe is a painkiller.
Double blind placebo control study
Both participants and experimenters are unaware of who gets the placebo
null effect
When there’s no significant difference between groups or conditions.
Example: A study finds no difference in anxiety between a meditation and control group.
ceiling effect
All the scores are high, leaving no room to detect differences.
Example: A math test is too easy, so everyone scores near 100%
floor effect
All the scores are low, making it hard to see improvement.
Example: A reading test is too hard and everyone scores poorly.
measurement error
Inaccuracy in measuring the dependent variable.
Example: A bathroom scale gives different weights for the same person.
insensitive measure
a dependent variable (or measurement tool) that lacks the precision or range needed to detect real differences between experimental groups; in effect, using the wrong tool
noise
Random variability within the data that can obscure true effects.
Example: Differences in lighting, mood, or noise levels during testing
situation noise
External distractions or uncontrolled variables in the environment.
Example: Construction noise outside affecting concentration during testing
power
The likelihood that a study will show a statistically significant result when an independent variable truly has an effect in the population; the probability of not making a Type II error.
Example: A study with high power is more likely to detect a real difference between groups.
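Power can be estimated by simulation: run many fake studies with a known true effect and count how often the test comes out significant. This sketch uses a simple z-style test with known standard deviation (illustrative assumptions, not a full t-test); note how power rises with sample size.

```python
import math
import random

random.seed(7)

def detects_effect(n, effect=0.5, sd=1.0):
    """Simulate one two-group study; True if |z| for the mean difference > 1.96."""
    control = [random.gauss(0, sd) for _ in range(n)]
    treated = [random.gauss(effect, sd) for _ in range(n)]
    diff = sum(treated) / n - sum(control) / n
    se = sd * math.sqrt(2 / n)  # standard error of the difference (known sd)
    return abs(diff / se) > 1.96

powers = []
for n in (20, 50, 100):
    power = sum(detects_effect(n) for _ in range(2000)) / 2000
    powers.append(power)
    print(f"n = {n:3d} per group -> simulated power ~ {power:.2f}")
```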
interaction effect
A result from a factorial design in which the effect of one independent variable depends on the level of the other independent variable
example: you’re testing how exercise (high vs. low) and diet (healthy vs. unhealthy) affect weight loss.
The benefit of exercise may be greater when the diet is healthy, showing an interaction between exercise and diet.
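The exercise-and-diet interaction can be checked numerically as a "difference of differences" between cell means: compute the effect of exercise at each level of diet, then subtract. The weight-loss means below are hypothetical.

```python
# Hypothetical cell means (pounds lost) for a 2x2 design
means = {
    ("high_exercise", "healthy"):   12,
    ("high_exercise", "unhealthy"):  5,
    ("low_exercise",  "healthy"):    6,
    ("low_exercise",  "unhealthy"):  4,
}

# Effect of exercise at each level of diet
effect_healthy = means[("high_exercise", "healthy")] - means[("low_exercise", "healthy")]        # 6
effect_unhealthy = means[("high_exercise", "unhealthy")] - means[("low_exercise", "unhealthy")]  # 1

interaction = effect_healthy - effect_unhealthy
print(interaction)  # -> 5: exercise helps more when the diet is healthy
```

A difference of differences near zero would mean the two lines are parallel and there is no interaction.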
factorial design
experiment with two or more independent variables
Example: A 2×2 factorial design studying: Teaching method (lecture vs. hands-on) & Study time (short vs. long)
cell
a particular combination of the conditions of each independent variable
Participant variable
A variable that is measured, not manipulated, but used as an IV in factorial designs.
Often demographic or trait-based (e.g., age, gender, personality, etc.)
Example: Studying how test anxiety (high vs. low, measured) interacts with study method (manipulated) on performance.
main effect
The overall effect of one independent variable on the dependent variable
Example: If hands-on teaching improves scores regardless of study time, that’s a main effect of teaching method.
marginal means
The mean for one level of one independent variable, averaged across the levels of the other independent variable.
Example: If scores for hands-on = 85 and lecture = 75 (averaged over both study times), these are marginal means.
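The marginal means in the example come from averaging each 2×2 cell mean across the other factor. A small sketch with the hypothetical teaching-method × study-time scores:

```python
# cell_means[teaching_method][study_time], hypothetical test scores
cell_means = {
    "hands-on": {"short": 80, "long": 90},
    "lecture":  {"short": 70, "long": 80},
}

def marginal_mean_method(method):
    """Mean for one teaching method, averaged over study times."""
    times = cell_means[method]
    return sum(times.values()) / len(times)

def marginal_mean_time(time):
    """Mean for one study time, averaged over teaching methods."""
    return sum(cells[time] for cells in cell_means.values()) / len(cell_means)

print(marginal_mean_method("hands-on"))  # 85.0
print(marginal_mean_method("lecture"))   # 75.0 -> main effect of method: 10
print(marginal_mean_time("long"))        # 85.0
print(marginal_mean_time("short"))       # 75.0 -> main effect of time: 10
```

Comparing marginal means within one factor gives that factor's main effect.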
valence
refers to the emotional value associated with a stimulus — whether it is positive, negative, or neutral in emotional tone
bivariate correlation
An association that involves exactly two variables
correlation
Used when both variables are continuous.
t-test
Used when one variable is continuous and one is categorical.
chi-square
Used when both variables are categorical.
Statistical significance
In null-hypothesis significance testing (NHST), a result is declared statistically significant when p < .05, meaning it would be unlikely to occur if the null hypothesis were true.
Replication
The process of conducting a study again to test whether the result is consistent
Outlier
A score that stands out as either much higher or much lower than most of the other scores in a sample