demand characteristics
participants change their behaviour based on what they think the hypothesis is
placebo effect
participants improve only because they believe they’re getting treatment
observer bias
the researcher’s expectations interfere with their observations/results
maturation
people naturally change over time
history
something external happens during the study that affects the results
regression to the mean
extreme scores move towards average over time
testing
taking the same test more than once affects performance
instrumentation
measurement changes over time
attrition
participants leave the study
comparison groups
controls for:
maturation
regression
history
testing
both groups experience those things → lets you see whether the change is due to the IV or to confounds
placebo control groups
control group gets a fake treatment → controls for placebo effect
double blind study
neither the participants nor the researchers know who is in which condition
controls for:
demand characteristics
placebo effects
observer bias
null effect
the study finds no difference between groups (the IV didn't affect the DV)
→ BUT does not mean the IV has no effect in reality
3 possible explanations for a null effect
not enough difference between group results → the IV didn't create a strong enough change
too much variance within groups → people within each group are very different from each other
true null effect → the IV doesn’t affect the DV at all
systematic variability
differences in the DV that are consistently associated with the IV
comes directly from the IV
good because it reflects the true effects of your IV—the “signal” you’re trying to detect
unsystematic variability
differences in the DV that are random or unrelated to the IV
comes from individual differences, measurement errors, situational factors
bad for detecting effects → obscures the IV's effect (the "noise")
effect size
a measure of how strong the effect of the IV is on the DV
high systematic variability → larger effect size
high unsystematic variability → smaller effect size
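One common effect-size measure, Cohen's d, makes this relationship concrete: it divides the mean difference between groups (the systematic part) by the pooled standard deviation within groups (the unsystematic part). A minimal sketch with made-up data — the function and the numbers below are illustrative, not from the notes:

```python
import statistics

def cohens_d(a, b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(a), len(b)
    s1, s2 = statistics.variance(a), statistics.variance(b)  # sample variances
    pooled_sd = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

# Both comparisons have the same 2-point mean difference (same "signal")...
tight_ctrl  = [10, 11, 12, 11, 10, 12]   # low within-group spread
tight_treat = [12, 13, 14, 13, 12, 14]
noisy_ctrl  = [5, 16, 8, 14, 11, 12]     # high within-group spread ("noise")
noisy_treat = [7, 18, 10, 16, 13, 14]

d_tight = cohens_d(tight_treat, tight_ctrl)
d_noisy = cohens_d(noisy_treat, noisy_ctrl)
```

With the spread roughly quadrupled, d shrinks from about 2.2 to 0.5 even though the mean difference is identical — the same point the two bullets above make.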
statistical significance
a measure of how likely the observed effect was due to chance
high systematic variability → easier to reach significance
high unsystematic variability → harder to reach significance
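The same trade-off drives significance: holding the mean difference constant, more within-group variance inflates the standard error and shrinks the test statistic. A rough sketch using Welch's t with made-up data (a real analysis would use a stats package to get the p-value):

```python
import statistics

def welch_t(a, b):
    """Welch's t statistic: mean difference over the standard error.
    Larger |t| -> easier to reach statistical significance."""
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

# Identical 2-point mean difference; only the within-group spread changes.
low_var_t  = welch_t([12, 13, 14, 13, 12, 14], [10, 11, 12, 11, 10, 12])
high_var_t = welch_t([7, 18, 10, 16, 13, 14], [5, 16, 8, 14, 11, 12])
```

The low-variance comparison yields t ≈ 3.9, the high-variance one t ≈ 0.9 — only the first would reach significance at conventional thresholds.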
reasons for an inadequate difference between groups' results
weak manipulation of the IV
ceiling or floor effects → DV scale limits the ability to see differences
insensitive measure → the DV isn't precise enough to detect changes
reasons for large within groups variance
individual differences
measurement error
uncontrolled extraneous variable(s)