Double-blind study -
A study design in which neither the participants nor the experimenters working with the participants know who is in the control group and who is in the experimental group
Blind design -
A study design in which the observers are unaware of the experimental conditions to which participants have been assigned
one-group, pretest/posttest design -
An experiment in which a researcher recruits one group of participants, measures them on a pretest, exposes them to a treatment, intervention, or change, and then measures them on a posttest (the weakest experimental design)
List of 12 threats to internal validity
regression to the mean
attrition
testing
instrumentation
observer bias
demand characteristics
Placebo effects
design confound
selection effects
order effects
maturation
history
Regression to the mean
An experimental group whose average is extremely low (or high) at pretest will get better (or worse) over time because the random events that caused the extreme pretest scores do not recur the same way at Posttest
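The regression-to-the-mean pattern can be sketched as a small simulation (an illustrative sketch, not from the source; all numbers are made up, assuming normally distributed true scores plus random testing noise):

```python
import random

random.seed(1)

# Each person has a stable true score plus independent random noise
# at each testing occasion.
true_scores = [random.gauss(50, 5) for _ in range(10000)]
pretest = [t + random.gauss(0, 10) for t in true_scores]
posttest = [t + random.gauss(0, 10) for t in true_scores]

# Select only the people with extremely low pretest scores.
extreme = [i for i, p in enumerate(pretest) if p < 35]

pre_mean = sum(pretest[i] for i in extreme) / len(extreme)
post_mean = sum(posttest[i] for i in extreme) / len(extreme)

print(f"pretest mean of extreme group:  {pre_mean:.1f}")
print(f"posttest mean of extreme group: {post_mean:.1f}")  # closer to 50
```

No treatment was applied, yet the extreme group "improves" at posttest, because the unlucky random noise that put them in the extreme group does not recur the same way.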
Attrition
An experimental group changes over time, but only because the most extreme cases have systematically dropped out and their scores are not included in the posttest.
Testing
A type of order effect: An experimental group changes over time because repeated testing has affected the participants. Practice effects and fatigue effects are subtypes.
Instrumentation
An experimental group changes over time, but only because the measurement instrument has changed.
Observer bias
An experimental group’s ratings differ from those of a comparison group but only because the researcher expects the groups’ ratings to differ.
Demand characteristic
Participants guess what the study’s purpose is and change their behavior in the expected direction.
Placebo effect -
Participants in an experimental group improve only because they believe in the efficacy of the therapy or drug they receive.
Design confound
A second variable that unintentionally varies systematically with the independent variable.
Selection effect
In an independent groups design, when the two independent variable groups have systematically different kinds of participants in them.
Order effect
In a repeated-measures design, when the effect of the independent variable is confounded with carryover from one level to the other, or with practice, fatigue, or boredom.
Maturation
An experimental group improves over time only because of natural development or spontaneous improvement.
History
An experimental group changes over time because of an external factor that affects all or most members of the group
Which threats to internal validity especially apply to one-group pretest/posttest designs?
Maturation threats, history threats, regression threats, attrition threats, testing threats, and instrumentation threats (design confounds, selection effects, and order effects apply to designs with more than one group or condition)
Which threats to internal validity apply to all studies, including one-group pretest/posttest designs?
observer bias, demand characteristics, and placebo effects
Null hypothesis -
The hypothesis that there is no difference or no effect; a null effect is a result consistent with this hypothesis (we fail to reject it)
null effect
A finding that an independent variable did not make a difference in the dependent variable; there is no significant covariance between the two.
Selection-history threat -
An outside event or factor systematically affects participants at only one level of the independent variable
Selection-attrition threat -
Participants in only one experimental group experience attrition
Jane's possible causes for a null effect
- Perhaps there is not enough difference between groups
- Perhaps within-groups variability obscured the group differences (within-groups variance is large relative to between-groups variance)
- Perhaps there really is no difference
Insensitive measures -
Sometimes a null result occurs because the researchers haven't operationalized the dependent variable with enough sensitivity. For example, "your prep course will improve students' scores by 20%" is a precise measure, whereas "if you take it you will be in the high group, or pass" is not. The more quantifiable increments a measure has, the more sensitive the study is.
Ceiling and floor effects
Types of insensitive measures on the dependent variable: if the measure you are using is too easy or too hard, you will not obtain an effect, because no matter what you do, everyone scores about the same. If you give a calculus test to second graders, they will all score at the floor unless you change the measure.
Manipulation checks
Help detect weak manipulations and ceiling or floor effects. In an experiment, an extra dependent variable that researchers can include to determine how well a manipulation worked.
Design confound acting in reverse -
You can deliberately introduce a confound (something you know will have an effect) at the levels of your independent variable to check that your measures are sensitive: if even that produces no effect, your operational definition of the dependent variable is probably not sensitive enough.
Book's possible causes of null effects
Not enough variability between levels
Too much variability within levels
If the study was sound, place in context of the body of evidence
Not enough variability between levels - null effect
- Ineffective manipulation of independent variable
- Insufficiently sensitive measurement of dependent variable
- Ceiling or floor effects on independent variable
- Ceiling or floor effects on dependent variable
Too much variability within levels
- Measurement error
- Individual differences
- Situation noise
If the study was sound, place in context of the body of evidence
- The independent variable could, in truth, have almost no effect on the dependent variable
- The independent variable could have a true effect on the dependent variable, but because of random errors of measurement or sampling, this one study didn’t detect it.
floor effect -
An experimental design problem in which independent variable groups score almost the same on a dependent variable, such that all scores fall at the low end of their possible distribution.
ceiling effect -
An experimental design problem in which independent variable groups score almost the same on a dependent variable, such that all scores fall at the high end of their possible distribution.
Reverse confound -
A confound built into a study (deliberately or by accident) that acts in the opposite direction, counteracting or reversing a true effect of the independent variable
Error Variance-
Unsystematic variability among the members of a group in an experiment, which might be caused by situation noise, individual differences, or measurement error.
Does a between-groups (across) or within-groups (inside) design have less variance?
A within-groups design has less variance.
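Why within-groups designs have less variance can be sketched with a small simulation (illustrative only, not from the source; the effect size and noise values are made up): when each person serves in both conditions, stable individual differences cancel out of the comparison, leaving only measurement noise as error variance.

```python
import random

random.seed(2)

n = 1000
effect = 5  # true effect of the manipulation

def observe(person_baseline, treated):
    # Observed score = stable individual level + treatment effect + noise.
    return person_baseline + (effect if treated else 0) + random.gauss(0, 3)

# Within-groups (repeated measures): each person is measured in both
# conditions, so individual differences cancel in the difference scores.
baselines = [random.gauss(100, 15) for _ in range(n)]
diffs = [observe(b, True) - observe(b, False) for b in baselines]

# Between-groups: different people in each condition, so individual
# differences remain in the comparison as error variance.
group_a = [observe(random.gauss(100, 15), True) for _ in range(n)]
group_b = [observe(random.gauss(100, 15), False) for _ in range(n)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

within_var = var(diffs)                    # measurement noise only
between_var = var(group_a) + var(group_b)  # noise + individual differences
print(f"within-design comparison variance:  {within_var:.0f}")
print(f"between-design comparison variance: {between_var:.0f}")
```

Both designs estimate the same true effect, but the between-groups comparison carries far more error variance.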
Placebo effects -
This effect is present when people receive a treatment and improve, but only because they believe they are receiving a valid or effective treatment.
One way to evaluate or assess whether there is a placebo effect is to
add a placebo comparison (placebo control) group.
You can also make the study double-blind to remove bias.
The most important thing is to design a study in which you can evaluate the placebo effect in relation to the
independent and dependent variables.