Design Confounds
An accidental second variable varies systematically along with
the intended independent variable
Systematic variability
changes in a dependent variable that are consistently related to the independent variable
Unsystematic variability
random fluctuations in the dependent variable that are not related to the independent variable
Maturation threat
A change in behavior that emerges more or less spontaneously over time
ex. kids getting better at talking
prevent: add an appropriate comparison group
History threat
Something specific has happened between the pretest and posttest
(not just the passage of time)
Something that affects the treatment group at the same time as the treatment
prevent: include comparison group
Regression threats
Regression to the mean refers to the tendency of results that are extreme by
chance on first measurement—i.e., extremely higher or lower than average—to move closer to the average when measured a second time
Occurs only when a group is measured twice, and
only when the group has an extreme score at the pretest
E.g.,
The 40 depressed women might have exceptionally high scores on the depression
pretest due to random factors, such as recent illness or family or relationship problems
prevent: include a comparison group
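The regression threat can be made concrete with a small simulation. This is a hypothetical sketch (the scores, population mean of 50, and noise levels are made up): each observed score is a stable true score plus random error, and a group selected for extreme pretest scores drifts back toward the average at posttest even with no intervention at all.

```python
import random
import statistics

# Hypothetical numbers: true depression score ~ N(50, 10), plus
# random day-to-day noise ~ N(0, 10) at each measurement.
random.seed(42)
N = 10_000
true_scores = [random.gauss(50, 10) for _ in range(N)]
pretest = [t + random.gauss(0, 10) for t in true_scores]
posttest = [t + random.gauss(0, 10) for t in true_scores]

# Select the most extreme 5% of scorers at pretest.
cutoff = sorted(pretest)[int(0.95 * N)]
extreme = [i for i in range(N) if pretest[i] >= cutoff]

pre_mean = statistics.mean(pretest[i] for i in extreme)
post_mean = statistics.mean(posttest[i] for i in extreme)

# The extreme group's posttest mean falls back toward the overall
# mean (50) even though nothing was done to them.
print(round(pre_mean, 1), round(post_mean, 1))
```

Without a comparison group, that spontaneous drop could be mistaken for a treatment effect; a comparison group selected the same way would show the same drop.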
Attrition threat
A reduction in participant numbers that occurs when people drop out
before the end of the study
Problem for internal validity when attrition is systematic – only a certain
kind of participant drops out
prevent: remove the dropped-out participants’ scores from the pretest average too
Testing threats
A change in the participants as a result of taking a test (dependent
measure) more than once
Prevent:
No pretest
Two different forms – one for pretest and one for posttest
Include a comparison group
Instrumentation threat
Occurs when a measuring instrument changes over time, or when a researcher uses different forms for the pretest and posttest but the two forms are not sufficiently equivalent.
ex. if observers in a study start scoring student interactions differently, the perceived effect of an intervention could be skewed, even if the intervention itself is not the cause
prevent:
Use a posttest-only design
Ensure that the pretest and posttest measures are equivalent
Counterbalance the versions of the test
3 potential internal validity threats in any study
observer bias
demand characteristics
placebo effects
Floor effects in the dependent measure
E.g. A difficult test is used as a dependent measure
Solution:
Use a manipulation check
a separate dependent variable to make sure the manipulation
worked
Ceiling effects in the dependent measure:
E.g., an easy test is used as a dependent measure
Solution:
Use a manipulation check
a separate dependent variable to make sure the manipulation
worked
Within-groups variability
Too much unsystematic variability within each group → noise (a.k.a.
error variance or unsystematic variance)
Measurement error
Error in the measurement
All dependent variables involve a certain amount of measurement
error
Solution 1: Use Reliable, Precise Tools
have excellent reliability (internal, interrater, and test-retest)
Solution 2: Measure More Instances.
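Why measuring more instances helps can be shown with a hypothetical sketch (the true score of 100 and error SD of 15 are made-up numbers): each observation is the true score plus random error, so the average of several measurements scatters far less than a single one.

```python
import random
import statistics

# Hypothetical model: observed score = true score + random error.
random.seed(1)
true_score = 100

def observed(k):
    """Mean of k noisy measurements of the same true score."""
    return statistics.mean(true_score + random.gauss(0, 15) for _ in range(k))

one_shot = [observed(1) for _ in range(2000)]
averaged = [observed(10) for _ in range(2000)]

# Averaging 10 measurements shrinks the spread by roughly sqrt(10).
print(round(statistics.stdev(one_shot), 1), round(statistics.stdev(averaged), 1))
```

Less measurement error per participant means less unsystematic within-group variability, which makes a real between-group difference easier to detect.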
Individual differences
Solution 1: Change the Design
Use a Within-groups design instead of independent-groups
design
Solution 2: Add more Participants
Situation noise
Solution: carefully control the surroundings of an experiment
Power
The likelihood that a study will return an accurate result when the
independent variable really has an effect.
increasing power:
Within-groups design
A strong manipulation
A larger number of participants
Less situation noise
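The link between sample size and power can be sketched with a rough normal-approximation formula for a two-group comparison (a simplification, not a full power analysis; the effect size of 0.5 and the sample sizes are illustrative):

```python
import math

def normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def approx_power(effect_size, n_per_group):
    """Approximate power of a two-group comparison (normal approximation).

    effect_size is Cohen's d; n_per_group is participants per group.
    """
    z_crit = 1.96                                   # two-tailed alpha = .05
    ncp = effect_size * math.sqrt(n_per_group / 2)  # expected z of the true effect
    return 1 - normal_cdf(z_crit - ncp)

# More participants -> more power, holding the effect size fixed.
for n in (20, 60, 100):
    print(n, round(approx_power(0.5, n), 2))
```

With d = 0.5, roughly 64 participants per group yields the conventional 80% power target, which matches the familiar rule of thumb.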
interaction effect
a result from a factorial design, in which the difference in the levels
of one independent variable changes, depending on the level of the
other independent variable; a difference in differences. Also called
interaction
factorial design
a study in which there are two or more independent variables, or
factors
cell
A condition in an experiment; in a simple experiment, a cell can
represent the level of one independent variable; in a factorial design, a
cell represents one of the possible combinations of two independent
variables
participant variable
A variable such as age, gender, or ethnicity whose levels are selected
(i.e., measured), not manipulated
Main effect
In a factorial design, the overall effect of one independent variable on
the dependent variable, averaging over the levels of the other
independent variable
Marginal means
In a factorial design, the arithmetic means for each level of an
independent variable, averaging over the levels of another
independent variable
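The factorial-design terms above are just arithmetic on a table of cell means. The sketch below uses a hypothetical 2×2 design (phone use × driver age, with made-up braking-distance numbers) to compute marginal means, a main effect, and the interaction as a difference in differences:

```python
# Hypothetical cell means for a 2 (phone use) x 2 (age) factorial design.
cell_means = {
    ("phone", "young"): 45,
    ("phone", "old"): 70,
    ("no_phone", "young"): 40,
    ("no_phone", "old"): 50,
}

# Marginal means: average each level of one IV over the levels of the other.
phone_mean = (cell_means[("phone", "young")] + cell_means[("phone", "old")]) / 2
no_phone_mean = (cell_means[("no_phone", "young")] + cell_means[("no_phone", "old")]) / 2

# Main effect of phone use = difference between its marginal means.
main_effect_phone = phone_mean - no_phone_mean

# Interaction: does the phone effect differ across age levels?
phone_effect_young = cell_means[("phone", "young")] - cell_means[("no_phone", "young")]
phone_effect_old = cell_means[("phone", "old")] - cell_means[("no_phone", "old")]
interaction = phone_effect_old - phone_effect_young  # difference in differences

print(main_effect_phone, interaction)
```

A nonzero difference in differences (here the phone effect is larger for older drivers) is what the interaction-effect definition above describes.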
Quasi experiments
A research design used to investigate cause-and-effect relationships when it's impossible or unethical to randomly assign participants to different groups. Unlike true experiments, participants are not randomly assigned to treatment and control groups, making it difficult to isolate the impact of the independent variable.
Do not have full experimental control (control groups aren’t mandatory)
First select an independent variable and a dependent variable.
Random assignment might not be possible
Interrupted time-series design
A quasi-experimental research method that measures an outcome at multiple time points before and after an intervention or policy change
Nonequivalent control group
a quasi-experimental research method where researchers use pre-existing groups (not randomly assigned) for a treatment and a control group. This means the groups may have inherent differences before the intervention, which researchers must account for in their analysis
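One common way to account for those pre-existing group differences is to compare each group's change rather than its raw posttest score (a difference-in-differences). A minimal sketch with hypothetical pre/post means:

```python
# Hypothetical pre/post means for a nonequivalent control group design.
treat_pre, treat_post = 60, 75
control_pre, control_post = 55, 60

# Because the groups start at different levels, compare changes, not
# raw posttest scores.
treatment_change = treat_post - treat_pre    # 15
control_change = control_post - control_pre  # 5

# Estimated intervention effect, net of whatever changed for everyone
# (e.g., history or maturation affecting both groups).
effect = treatment_change - control_change
print(effect)
```

Subtracting the control group's change removes threats (like history and maturation) that affect both groups equally, though it cannot rule out selection effects that interact with time.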
Selection effects
• Relevant only for independent-groups designs, not for repeated-measures
designs
• Applies when the kinds of participants at one level of the independent
variable are systematically different from those at the other level
Design confounds
• Some outside variable accidentally and systematically varies with the levels
of the targeted independent variable
ex. studying the effects of an anti-anxiety medicine when the treatment group is also receiving therapy, so we cannot know whether the difference between the placebo and experimental groups is due to the medicine, the therapy, or both
quasi-independent variable
a pre-existing characteristic of participants that cannot be manipulated by the researcher, such as gender, age, or ethnicity.
Small-N design
Obtain a lot of information from just a few cases instead of a little
information from a larger sample
Involve studying the behavior or outcomes of a small number of participants (often 10 or fewer) repeatedly over time. These designs are particularly useful for examining individual responses to interventions and establishing experimental control within each participant, rather than relying on a control group.
disadvantages:
• Issues with Internal Validity
• E.g., it was unclear which part of H.M.’s brain was responsible for each behavioral deficit
• Issues with External Validity
• Participants in small-N studies may not represent the general population
very well.
• Solution: compare case study results to research using other methods.
Stable baseline design
• A researcher observes behavior for an extended baseline period before
beginning a treatment or other intervention
• Behavior during the baseline is stable.
Multiple baseline design
a research design that assesses the effect of an intervention by comparing it to a baseline level of behavior across different individuals, behaviors, or settings, with the intervention implemented at different times for each
Reversal design
a single-subject research design used to assess the effectiveness of an intervention by alternating between a baseline phase (no intervention) and an intervention phase, repeating this cycle to demonstrate a functional relationship. This design involves implementing the intervention, observing the impact on the target behavior, reversing the intervention to baseline, and then implementing the intervention again to verify the initial effects.
Replication (or reproducibility)
• Part of interrogating statistical validity
Size of the estimate (effect size), precision of estimate (95% CI)
• Gives a study credibility
• Crucial part of scientific process
Direct replication
Repeat an original study as closely as possible