Flashcards covering threats to internal validity in experiments, methods to prevent them, and reasons why a study might result in null effects.
What are three common threats to internal validity?
Design confounds, selection effects, and order effects.
What is the internal validity threat known as "history"?
An external event that affects most members of the treatment group at the same time as the treatment, making it unclear if the outcome is due to the treatment or the external event.
Describe the internal validity threat of "maturation."
A change in behavior that emerges spontaneously over time, such as children getting better at reading or people becoming less depressed.
What is "regression to the mean" as a threat to internal validity?
When an extreme score on a first measurement is likely to be closer to the average on a second measurement, regardless of treatment.
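Regression to the mean falls out of simple arithmetic whenever scores mix a stable true value with random noise. A minimal simulation (hypothetical numbers; standard-normal ability and noise are assumptions for illustration) shows that people selected for extreme first scores look closer to average the second time, with no treatment at all:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def simulate_regression_to_mean(n=10_000, cutoff=1.5):
    """Two noisy measurements of the same stable trait.

    Each observed score = true ability (mean 0, sd 1) plus independent
    measurement noise (mean 0, sd 1). We select people whose FIRST score
    was extreme and look at their second score.
    """
    first_selected, second_selected = [], []
    for _ in range(n):
        true_ability = random.gauss(0, 1)
        score1 = true_ability + random.gauss(0, 1)
        score2 = true_ability + random.gauss(0, 1)
        if score1 > cutoff:  # keep only extreme first scores
            first_selected.append(score1)
            second_selected.append(score2)
    mean1 = sum(first_selected) / len(first_selected)
    mean2 = sum(second_selected) / len(second_selected)
    return mean1, mean2

m1, m2 = simulate_regression_to_mean()
print(f"mean of selected first scores: {m1:.2f}")
print(f"mean of their second scores:   {m2:.2f}")  # closer to the average of 0
```

The second mean sits between the first mean and the population average because the lucky noise that made the first scores extreme does not repeat.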
Explain the internal validity threat of "attrition."
When participants drop out of a study before it ends, especially if those who drop out are systematically different from those who stay.
What is "testing threat" in internal validity?
When participants change their response or behavior simply because they have been tested multiple times (e.g., practice effects or fatigue).
Define "instrumentation threat" to internal validity.
When the measuring instrument (e.g., a survey, observer ratings) changes over time from the pretest to the posttest, leading to an observed change not due to the treatment.
What is "observer bias" as a threat to internal validity?
When researchers' expectations influence their interpretation of the outcomes, often in subjective measures.
Describe "demand characteristics" as a threat to internal validity.
When participants guess what the study is supposed to be about and change their behavior accordingly, acting in a way that confirms the hypothesis.
What is a "placebo effect" in the context of internal validity threats?
When people improve simply because they believe they are receiving an effective treatment, not because of the treatment's genuine effects.
How do comparison groups help prevent threats to internal validity?
They provide a baseline for comparison, allowing researchers to see if changes in the treatment group are genuinely due to the treatment or to other factors that affect both groups (e.g., history, maturation).
How do double-blind studies help prevent threats to internal validity?
By ensuring neither the participants nor the researchers know who is in the treatment group and who is in the control group, they prevent observer bias and demand characteristics.
What are some "other design considerations" to prevent internal validity threats?
Using random assignment to deal with selection effects, counterbalancing to deal with order effects, careful measurement to reduce instrumentation, and including placebo control groups.
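The two procedural fixes above, random assignment and counterbalancing, can be sketched in a few lines. This is an illustrative toy (participant labels and condition names are made up), not a full experimental protocol:

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

participants = [f"P{i:02d}" for i in range(1, 9)]

# Random assignment: shuffle, then split. Each participant is equally
# likely to land in either group, so selection effects cannot track
# any pre-existing characteristic.
random.shuffle(participants)
half = len(participants) // 2
treatment, control = participants[:half], participants[half:]
print("treatment:", treatment)
print("control:  ", control)

# Counterbalancing: rotate the order of conditions across participants
# so that practice and fatigue (order effects) average out.
conditions = ["A", "B", "C"]
orders = [conditions[i:] + conditions[:i] for i in range(len(conditions))]
for i, person in enumerate(treatment):
    print(person, "->", "-".join(orders[i % len(orders)]))
```

Every rotated order contains each condition exactly once, so across participants each condition appears equally often in each serial position.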
Define "comparison groups" in experimental design.
Groups in an experiment that do not receive the treatment (or receive a different level of it) and serve as a baseline to evaluate the effect of the treatment group.
What is a "double-blind study"?
A study design where neither the participants nor the researchers administering the treatment and collecting data know which participants are in the experimental group and which are in the control group.
What are the main reasons a study might show a null effect?
Not enough variance between groups, too much variance within groups, or a true null effect.
What are two ways a study might have inadequate variance between groups, leading to a null effect, and how can researchers identify these problems?
Weak manipulations (diagnosed with a manipulation check) and insensitive measures (addressed by using a more precise measure or a different dependent variable).
Why can large variance within groups obscure a true difference between groups?
High variability within each group makes it harder to detect a statistically significant difference between the group means, as any true difference might be hidden by the spread of scores within each group.
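This can be made concrete with a t statistic, which scales the difference between group means by the within-group spread. A small simulation (hypothetical means and standard deviations, assumed for illustration) holds the true 2-point treatment effect constant and only changes the within-group standard deviation:

```python
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible

def t_statistic(group_a, group_b):
    """Welch's t: mean difference divided by its standard error."""
    na, nb = len(group_a), len(group_b)
    mean_diff = statistics.fmean(group_a) - statistics.fmean(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    return mean_diff / ((va / na + vb / nb) ** 0.5)

n = 30
true_difference = 2.0  # the treatment really raises the mean by 2 points

# Low within-group variance: the 2-point difference stands out.
control_low = [random.gauss(50, 1) for _ in range(n)]
treat_low = [random.gauss(50 + true_difference, 1) for _ in range(n)]

# High within-group variance: the same 2-point difference is buried.
control_high = [random.gauss(50, 10) for _ in range(n)]
treat_high = [random.gauss(50 + true_difference, 10) for _ in range(n)]

print(f"t with sd = 1:  {t_statistic(treat_low, control_low):.1f}")
print(f"t with sd = 10: {t_statistic(treat_high, control_high):.1f}")
```

The mean difference is identical in both cases; only the noise within each group changes, and with it the chance of detecting the effect.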
What is "measurement error" as a cause of within-group variance, and how can it be reduced?
Inaccurate recording or measurement of data; it can be reduced by using precise and reliable measures, multiple measurements, or increasing the sample size.
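The "multiple measurements" remedy works because averaging independent noisy readings shrinks the noise. A quick sketch (true score and noise level are made-up numbers for illustration) compares the spread of single readings with the spread of 10-reading averages:

```python
import random
import statistics

random.seed(1)  # fixed seed so the sketch is reproducible

def observed_score(true_score, n_measurements):
    """Average of several noisy readings of the same true score."""
    readings = [true_score + random.gauss(0, 5) for _ in range(n_measurements)]
    return statistics.fmean(readings)

true_score = 100
single = [observed_score(true_score, 1) for _ in range(2000)]
averaged = [observed_score(true_score, 10) for _ in range(2000)]

print(f"sd with 1 measurement:   {statistics.stdev(single):.2f}")   # ~5
print(f"sd with 10 measurements: {statistics.stdev(averaged):.2f}")  # ~5/sqrt(10)
```

The spread of averaged scores falls by roughly the square root of the number of readings, which directly reduces within-group variance due to measurement error.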
How do "individual differences" contribute to within-group variance, and how can this be addressed?
Unique characteristics of each participant (e.g., personality, background) that cause variability in scores; it can be reduced by using a within-subjects design, matched groups design, or increasing the sample size.
What is "situation noise" as a source of within-group variance, and how can it be minimized?
External distractions or irrelevant events in the experimental setting that increase variability; it can be minimized by keeping the environment and procedures consistent for all participants.