Comparison Groups/Conditions
Different groups of participants that experience different levels of the independent variable.
Concurrent-Measures Design
Participants see all conditions at the same time and choose their preference.
Construct Validity
Does the study measure what it claims to measure?
Control Group
The group that receives no treatment or a neutral condition.
Control Variable
Variables that are kept constant so they do not influence results.
Covariance
Two variables change together. When the independent variable changes, the dependent variable also changes.
Dependent Variable
The variable measured to see if it changes.
Design Confound
Another variable accidentally changes along with the independent variable and could explain the results.
External Validity
Can the results apply to other people, places, or times?
Independent Variable
The variable manipulated by the researcher.
Internal Validity
How confident we are that the independent variable actually caused the change in the dependent variable.
Order Effects
The order of conditions affects results.
Post-Test Only Design
Participants experience one level of the independent variable and are measured once afterward.
Pre-Post Design
The dependent variable is measured before and after the treatment.
Repeated-Measures Design
Participants experience every level of the independent variable.
Selection Effects
Groups are different from the start, which may affect results.
Statistical Validity
Are the data analysis and statistical conclusions accurate?
Temporal Precedence
The cause must happen before the effect.
Treatment Group
The group that receives the independent variable being tested.
Unsystematic Variability
Random factors that influence participants, but are not part of the study.
Confound
An additional variable that could explain empirical findings. The presence of a confound threatens the internal validity of the study.
how researchers can design studies to prevent internal validity threats
use random assignment to avoid selection bias.
include control groups to rule out alternative explanations; also useful for detecting history and maturation effects.
keep environment and interactions consistent across all participants
use single or double blind studies
measure participants before and after the intervention (pre-post test design)
control for confounding variables
keep studies engaging to minimize participant dropout
use alternate forms of testing to avoid testing effects
use reliable testing measurement to avoid instrumentation threats
do shorter time frames to reduce maturation effects
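The first countermeasure above, random assignment, can be sketched in a few lines of Python (the function name and data are illustrative, not from any particular study):

```python
import random

def randomly_assign(participants, groups=("treatment", "control"), seed=None):
    """Shuffle participants, then deal them round-robin into groups,
    so pre-existing differences spread evenly across conditions."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    return {g: shuffled[i::len(groups)] for i, g in enumerate(groups)}

assignment = randomly_assign(range(20), seed=1)
# Every participant lands in exactly one group, and group sizes stay balanced.
```

Because assignment depends only on the shuffle, not on any participant characteristic, systematic selection effects are avoided on average.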
how researchers can design studies to minimize possible obscuring factors
maximize statistical power
use sensitive research designs
improve measurement quality
strengthen the independent variable
control extraneous variables
increase between-groups variability
minimize attrition (drop outs)
reduce random error
use multiple measures
maturation threat
changes in the variable that emerge spontaneously over time
history threat
when some external or real-world event affects members of one of the experimental groups
maturation vs history threats
maturation: personal development & people mature
history: contextual development & history happens around people
countermeasure: include a control group
regression
a statistical concept, when an extreme level in an observed variable is likely to return to the mean level in the future
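Regression to the mean can be seen in a quick simulation (standard library only; the means, spreads, and cutoff are made-up numbers). Participants selected for extreme scores at time 1 score closer to the mean at time 2, even though nothing about them changed:

```python
import random

rng = random.Random(42)
# Each observed score = stable true ability + independent measurement noise.
true_ability = [rng.gauss(100, 10) for _ in range(10_000)]
score1 = [a + rng.gauss(0, 10) for a in true_ability]
score2 = [a + rng.gauss(0, 10) for a in true_ability]

# Select the "extreme" group: top scorers at time 1.
extreme = [i for i, s in enumerate(score1) if s > 120]
mean1 = sum(score1[i] for i in extreme) / len(extreme)
mean2 = sum(score2[i] for i in extreme) / len(extreme)
# mean2 falls back toward 100, purely because the time-1 noise
# that pushed these participants into the extreme group does not repeat.
```

This is why an untreated control group matters: without one, the fall-back from an extreme pretest score can masquerade as a treatment effect.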
attrition threat
reduction in participants from pre to post test
countermeasures for attrition threats
remove extreme data points from participants who drop out
investigate why participants drop-out and adapt incentives
use statistical methods that can handle missing data
testing threat
a change in a participant’s response on the DV as a result of experiencing the DV more than once (fatigue effect)
countermeasures for testing threat
include a control group
use different tests/instruments
instrumentation threats
when an instrument or measurement tool changes over time
Ex: a rater changing their rating criteria over time
Ex: using non-equivalent instruments to measure the same construct
countermeasures for instrumentation threats
rigorous training of raters
use post-test only design
testing vs instrumentation threats
testing threat: the participant changes through repeated exposure to the test; the test itself stays the same
instrumentation threat: the measuring instrument itself changes, not the participant
null effects
either not enough between-groups variance or too much within-groups variance
not enough between-groups variability
insensitive measures
ceiling and floor effects
weak manipulations
insufficient power
too much within-groups variability
measurement error
individual differences
situation noise
insufficient power
insensitive measures
the measure hasn’t been operationalized in a way that distinguishes differences in the conceptual variable
Ex: Pass or Fail versus A+ to F grading scale
ceiling and floor effects
questions are “too easy” or “too hard”
“too agreeable” or “too disagreeable”
insufficient variability
insensitive measures vs ceiling/floor effects
both involve a scale limitation, but insensitive measures pose a precision/discrimination problem, while ceiling/floor effects pose a boundary problem
weak manipulation
the manipulation of the IV did not produce enough change in the variable it was intended to change
measurement error
measurement error of psychological instruments introduces noise to the dependent variable
use reliable and valid measures
individual differences
can restrict the sample
can account for individual differences with a within-groups or pre-post design
situation noise
any kind of external distraction that could cause variability within-groups that obscure between-groups differences
control the surroundings of the experiment to minimize situation noise
insufficient power
the study doesn’t have enough ability to detect a real effect
how to maximize power
have a large sample size
use strong experimental manipulations
study theoretically plausible solutions
reduce causes of variability that can cause error
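The first point, sample size, can be illustrated with a small simulation (standard library only; the effect size, trial count, and z cutoff are illustrative assumptions). With a fixed true effect, larger samples detect it far more often:

```python
import random
import statistics

def detection_rate(n, effect=0.5, trials=500, seed=0):
    """Fraction of simulated two-group experiments whose z statistic
    exceeds 1.96 (an illustrative stand-in for a significance test)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        control = [rng.gauss(0, 1) for _ in range(n)]
        treated = [rng.gauss(effect, 1) for _ in range(n)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = (2 / n) ** 0.5  # standard error with known sd = 1 in both groups
        if diff / se > 1.96:
            hits += 1
    return hits / trials

small, large = detection_rate(n=10), detection_rate(n=100)
# The same true effect is detected far more often at n=100 than at n=10.
```

The same simulation also shows the other levers: raising `effect` (a stronger manipulation) or shrinking the noise sd (less error variability) boosts power without adding participants.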
Factorial design
A research design with at least two independent variables. They are either manipulated or specifically selected
Reasons to use factorial design
To test theories and test limits (external validity)
Interaction effect
The degree to which the effect of one independent variable depends on the level of another independent variable.
Interaction effect with words
Ex: Caffeine helps those with low sleep and has little to no effect on people with high sleep.
Interaction effects with tables
Helps to see patterns across conditions
Interaction effects with graphs
Shown by lines that are not parallel; non-parallel lines = interaction is present
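Numerically, an interaction is a difference of differences. Using the deck's caffeine/sleep example with made-up cell means (the scores below are purely illustrative):

```python
# Illustrative 2x2 cell means: alertness scores (made-up numbers).
means = {
    ("low sleep", "no caffeine"): 40,
    ("low sleep", "caffeine"): 70,    # caffeine helps a lot when sleep is low
    ("high sleep", "no caffeine"): 80,
    ("high sleep", "caffeine"): 82,   # almost no effect when sleep is high
}

def caffeine_effect(sleep):
    """Simple effect of caffeine at one level of sleep."""
    return means[(sleep, "caffeine")] - means[(sleep, "no caffeine")]

# Difference of differences: a nonzero value means the lines in the
# graph are not parallel, i.e., an interaction is present.
interaction = caffeine_effect("low sleep") - caffeine_effect("high sleep")
print(interaction)  # 30 - 2 = 28
```

If the two simple effects were equal, the difference of differences would be zero and the graphed lines would be parallel: no interaction.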
Between groups factorial design
Each participant is in only one condition. Ex: a 2×2 design with 4 separate groups; each person experiences only one combination.
Within groups factorial design
Each participant experiences all conditions
Mixed factorial design
Includes both between and within groups.
Example: sleep level (between groups): each person is assigned to either high or low sleep. Caffeine level (within groups): everyone completes both caffeine and no-caffeine trials.
Increasing the number of IV levels
Adding more levels to a variable increases how detailed your data can be.
Increasing the number of IVs
Adding another independent variable increases the design’s complexity