manipulated variable
a variable that is controlled by the researchers, who assign participants to experience particular levels of the variable
measured variable
records of thoughts, feelings, or behaviors, not directly influenced by the researchers
control variables
any variable the researcher intentionally holds constant across conditions
independent variable
the manipulated variables in an experiment
conditions
the levels or versions of the independent variable
dependent variable
the measured variable in an experiment
what are two ways to check construct validity of manipulated variables
pilot study
manipulation check
pilot study
conducted before the actual study to check the construct validity of a manipulation
manipulation check
an extra measure designed to see how well a manipulation worked
ex. watch a positive thing, manipulation check to see if desired change occurred, then do dependent variable
different types of variables
manipulated variable
measured variable
control variable
independent variable (and conditions)
dependent variable
types of conditions
control group
treatment group
placebo group
control group
a condition that is supposed to represent no treatment, or a neutral state
treatment group
the conditions of interest, which are compared to the control group (not all experiments have a true control vs. treatment design)
placebo group
a control group whose members believe they’re in a treatment group, with the goal of ruling out expectancy effects (very few psych experiments have a true placebo group)
what do well-designed experiments do
maximize internal validity compared to other designs
confound
anything that differs between your groups other than the levels of the independent variables
design confound
something that inherently varies along with the independent variable
creates systematic variability (problem!)
unsystematic variability
created when something differs among participants but does not systematically co-occur with the independent variable
systematic variability
people in one condition vary from people in the other condition in more than just the way you’re manipulating
true experiment essential characteristics
manipulation of one or more independent variables
random assignment to conditions
random assignment
each participant has an equal chance of being in each condition
maximizes the likelihood that unintended variability is unsystematic instead of systematic
avoids selection effects
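The procedure above (equal chance of each condition, unintended variability kept unsystematic) can be sketched in Python. This is a minimal illustration, not any standard library's API; the function name and shuffle-then-deal approach are my own choices for keeping group sizes equal while preserving equal chances.

```python
import random

def random_assignment(participants, conditions, seed=None):
    """Shuffle participants, then deal them into conditions round-robin,
    so each participant has an equal chance of each condition and group
    sizes stay as equal as possible."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(shuffled):
        # the shuffle, not the deal order, is what randomizes assignment
        groups[conditions[i % len(conditions)]].append(p)
    return groups

groups = random_assignment(range(20), ["control", "treatment"], seed=0)
```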
selection effects
when the kinds of people in one condition are systematically different from those in other conditions
chance is lumpy
randomness doesn’t always end up looking random
can create a ‘failure of random assignment’
matched groups (or matching)
ensuring that your groups are equivalent in important ways
pair (match) people on the characteristic of interest, then split the pair across conditions through random assignment
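The matching step (pair on the characteristic of interest, then split each pair randomly across conditions) can be sketched as follows. A hypothetical illustration, assuming two conditions and a pretest score as the matching variable; the function and group names are invented for the example.

```python
import random

def matched_assignment(participants, score, seed=None):
    """Sort participants on the matching variable, pair adjacent ones,
    then randomly split each pair across two conditions."""
    rng = random.Random(seed)
    ordered = sorted(participants, key=score)
    groups = {"control": [], "treatment": []}
    for i in range(0, len(ordered) - 1, 2):
        pair = [ordered[i], ordered[i + 1]]
        rng.shuffle(pair)  # random assignment happens within each pair
        groups["control"].append(pair[0])
        groups["treatment"].append(pair[1])
    return groups

pretest = {"a": 10, "b": 11, "c": 50, "d": 52}
g = matched_assignment(["a", "b", "c", "d"], score=lambda p: pretest[p], seed=1)
```

Because each pair contributes one member to each group, the groups start out equivalent on the matched characteristic.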
types of experiments/designs
between subjects design
posttest-only design
pretest-posttest design
within subjects design
concurrent measures
repeated measures
between subjects design
each participant is only in one experimental condition
posttest-only design
between subjects, participants undergo the manipulation (just one condition) and then complete the measures (once)
pretest-posttest design
between subjects design, participants first complete the measures, then the manipulation, then the measures again
advantages of between subjects
test and control for selection effects (check whether the subjects assigned to different groups differ systematically)
test and control for failures of random assignment
disadvantages of between subjects design
might create demand characteristics
people might think they should be consistent in their responses
demand characteristics
cues that might tip participants off to what the researchers are interested in, so responses might change because of participants' beliefs
within subjects design
each participant is in all experimental conditions
concurrent measures
within subjects design, participants experience all levels of the independent variable at once
repeated measures design
within subjects design, participants experience levels of the independent variable one after the other, with the measures following each level of the independent variable
why would you want a within subjects design
guarantees equivalence of groups (no selection effects)
functionally doubles your sample size (for two conditions)
increases statistical power (p-values are a function of effect size and sample size)
statistical power
ability of a study to get a statistically significant effect, assuming the effect is real
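The sample-size point above can be made concrete with a simulation: power is the fraction of studies that reach significance when the effect is real, and it rises with n. A rough sketch, assuming normal data, a true effect of `effect_size` standard deviations, and |z| > 1.96 as a large-sample stand-in for a two-tailed α = .05 test; the function name is my own.

```python
import math
import random
from statistics import mean, variance

def simulated_power(effect_size, n_per_group, n_sims=2000, seed=1):
    """Monte Carlo estimate of power for a two-group comparison."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect_size, 1.0) for _ in range(n_per_group)]
        se = math.sqrt(variance(a) / n_per_group + variance(b) / n_per_group)
        z = (mean(b) - mean(a)) / se
        if abs(z) > 1.96:  # significant at (approximately) alpha = .05
            hits += 1
    return hits / n_sims
```

With the same effect size, quadrupling the per-group n raises the estimated power substantially, which is why the doubled effective sample of a within-subjects design matters.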
why would you want a between subjects design
reduces possibility of order effects
order effects
a confound that occurs when experiencing one condition changes how participants react to subsequent conditions in a within-subjects design
types of order effects
practice effects
fatigue effects
carryover effects
sensitization effects
practice effects
participants get better at the measures
fatigue effects
participants get worse at the measures
carryover effects
effects of one condition contaminate subsequent responses
sensitization effects
participants become suspicious or clued in from earlier conditions
how do you deal with some order effects
counterbalancing (doesn’t FIX order effects; it spreads them across conditions and allows you to check for them)
counterbalancing
randomly assigning participants to experience the conditions in different orders
types of counterbalancing
full
partial
full counterbalancing
all possible orders are represented
partial counterbalancing
only some orders are represented
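The full vs. partial distinction can be sketched in a few lines: full counterbalancing enumerates every order of the conditions, while one simple form of partial counterbalancing samples a subset of those orders (Latin squares are another common choice). The function names are illustrative, not from any standard toolkit.

```python
import itertools
import random

def full_counterbalance(conditions):
    """Every possible order of the conditions (n! orders in total)."""
    return list(itertools.permutations(conditions))

def partial_counterbalance(conditions, n_orders, seed=None):
    """A random subset of the possible orders."""
    rng = random.Random(seed)
    return rng.sample(full_counterbalance(conditions), n_orders)

orders = full_counterbalance(["A", "B", "C"])  # 3! = 6 orders
```

Full counterbalancing becomes impractical quickly (4 conditions already need 24 orders), which is the usual reason for going partial.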