causal claim
the boldest type of claim. replaces association verbs with causal ones (e.g. "causes," "leads to")
experiment
researchers manipulate at least one variable and measure another
manipulated variable
aka independent variable. variable whose levels are manipulated by the researcher
conditions
levels of an independent variable
measured variable
aka dependent variable. variable whose levels are observed and recorded
control
any variable the experimenter holds constant across conditions on purpose
control group
participants exposed to “no treatment” level of independent variable
treatment group
participants exposed to intervention level of independent variable
placebo group
participants exposed to an inert treatment
confounds
a variable that accidentally varies systematically with changes in the IV. a possible alternative explanation for changes in the dependent variable
selection effect
when the participants in one level of the IV are systematically different from those at another level
matched groups design
controls for selection effect by placing participants with a similar possible confound into matched sets across experimental conditions so they cancel each other out
independent-groups design
separate groups are put into different levels of IV
posttest only design
type of independent-groups design where participants are only tested once on DV after experimentation
pretest/posttest design
type of independent-groups design where participants are tested on the DV twice: once before and once after IV exposure
within-groups designs
each participant is presented with all levels of IV
repeated-measures design
type of within-groups design where participants are measured on DV after each exposure to each level of IV
concurrent-measures design
type of within-groups design where participants are exposed to all levels of IV at once and then measured just once
order effects
disadvantage of within-groups designs: exposure to the first condition changes how participants react to later condition(s). a threat to internal validity
practice effects
repeat testing causes participants to get better at a task
fatigue effects
repeat testing causes participants to get bored/tired of a task
carryover effects
contamination carries over from one test to the next
counterbalancing
different participants are given the different levels of the IV in different orders so that order effects balance out when all the data are combined
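Full counterbalancing can be sketched in a few lines of code. This is a hypothetical illustration (the condition names and participant IDs are made up, not from the deck): generate every possible order of the IV levels, then assign participants to orders round-robin so each order is used equally often.

```python
from itertools import permutations

# Hypothetical IV with three levels; enumerate every presentation order.
conditions = ["low dose", "medium dose", "high dose"]
orders = list(permutations(conditions))  # 3! = 6 possible orders

# Assign participants to orders round-robin so each order is used equally.
participants = [f"P{i}" for i in range(1, 13)]
assignments = {p: orders[i % len(orders)] for i, p in enumerate(participants)}

for p, order in assignments.items():
    print(p, "->", " then ".join(order))
```

With 12 participants and 6 orders, each order is used exactly twice, so order effects cancel when the data are pooled.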
demand characteristics
a cue that can lead participants to guess the experiment’s hypothesis
observer bias
when observer expectations influence their interpretation of participant behaviours or outcomes of a study
observer effects
when the observer's presence affects the participants' performance
expectancy effects
when researchers' expectations about results unintentionally influence participants' behaviour or study outcomes
maturation threat
when a change in experimental group could have emerged spontaneously over time
history threat
when a specific and systematic outside event happens between pretest and posttest
regression threat
phenomenon in which any extreme finding is likely to be closer to its typical (mean) level the next time it is measured (with or without intervention)
attrition threat
reduction of participants occurs when people drop out. Problematic when it is systematic (eg most rambunctious camper leaves early, saddest people drop out of therapy)
prevention of attrition threat
when a participant drops out, remove their pretest score from the analysis too
prevention of history/regression/maturation threat
comparison group
testing threat
specific kind of order effect. change in participants as a result of taking a dependent measure more than once. includes practice effect.
instrumentation/item effects
occurs when measuring instrument changes over time. (eg changing standards of judgement, different forms of pretest and posttest)
selection-history threat
outside event/factor affects only those at one level of experiment condition
selection-attrition threat
only one of the experimental groups experiences attrition
placebo effects
when people in an experimental treatment experience a change only because they believe in the treatment
placebo-controlled double blind study
study that uses a treatment and placebo group but neither researchers nor participants know which is which
Cohen's d
index of how much two sample means differ, expressed in standard deviation units
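The definition above translates directly into a formula: divide the difference between the two sample means by their pooled standard deviation. A minimal sketch (the example scores are invented for illustration):

```python
import statistics

def cohens_d(group1, group2):
    """Cohen's d: difference between two sample means in pooled-SD units."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)  # sample variances
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical treatment and control scores:
treatment = [5.1, 6.2, 5.8, 6.5, 5.9]
control = [4.2, 4.8, 4.5, 5.0, 4.4]
print(round(cohens_d(treatment, control), 2))
```

By convention, d ≈ 0.2 is considered a small effect, 0.5 medium, and 0.8 large.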
statistical power
the ability to detect an effect if one exists. the probability of correctly rejecting the null hypothesis. a low-power study may miss true effects
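Power can be estimated by simulation, which makes the definition concrete. In this sketch (a hypothetical illustration using a large-sample normal approximation, not a formal power analysis), power is the fraction of simulated experiments in which a real group difference is detected at alpha = .05:

```python
import random
import statistics

def estimate_power(effect=0.5, n=30, trials=2000, seed=42):
    """Monte Carlo power estimate: fraction of simulated two-group
    experiments that detect a true difference of `effect` SDs."""
    random.seed(seed)
    detections = 0
    for _ in range(trials):
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        treatment = [random.gauss(effect, 1.0) for _ in range(n)]
        se = (statistics.variance(control) / n
              + statistics.variance(treatment) / n) ** 0.5
        z = (statistics.mean(treatment) - statistics.mean(control)) / se
        if abs(z) > 1.96:  # two-tailed alpha = .05, normal approximation
            detections += 1
    return detections / trials

print(estimate_power(effect=0.5, n=30))   # medium effect, modest sample
print(estimate_power(effect=0.5, n=100))  # same effect, larger sample
```

Running both calls shows why sample size matters: the same medium effect is detected far more often with n = 100 per group than with n = 30.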
manipulation check
a separate dependent variable that experimenters include in a study to check whether the operationalization worked
ceiling effect
obscuring factor to explain null result. all scores clustered together at the high end leading to not enough variability between levels
floor effect
obscuring factor to explain null result. all scores squeezed together at the low end. leads to not much variability between levels
situation noise
obscuring factor to explain null result. external distractions. leads to too much variability within levels
measurement error
obscuring factor to explain null result. human or instrument factor randomly changes result