Internal validity
Outcome of the study happened for the hypothesized reasons
Are there other factors that could have influenced your results?
Rule out influence of extraneous variables
External validity
Can the study results be extended to the general population?
To individuals other than the ones who were participants in the study
Generalizability can relate to populations, settings, treatment variables, measurement variables
Threats to internal validity
History
Maturation
Statistical regression
Instrumentation
Selection
Mortality
History
Something that happened during the course of the study (outside the treatment) that could have impacted the results
Maturation
Improvements over time due to growth & development
Especially important in research with children
Could also relate to recovery
Accounting for maturation
Use a control group that did not receive the treatment. This shows whether the treatment had an effect: if both groups improved by the same amount, the change is likely just maturation; if the treatment group improved far more, the treatment likely contributed.
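A minimal sketch of the comparison this implies, using invented pre/post scores: compute the average gain in each group and see whether the treatment group improved more than the no-treatment control group.

```python
# Hypothetical pre/post scores for illustration only; a real study would use its own data.
treatment_pre  = [42, 38, 45, 40, 44]
treatment_post = [55, 52, 60, 54, 58]
control_pre    = [41, 39, 43, 40, 45]
control_post   = [45, 43, 47, 44, 49]

def mean_gain(pre, post):
    """Average post-minus-pre change for one group."""
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

treatment_gain = mean_gain(treatment_pre, treatment_post)
control_gain = mean_gain(control_pre, control_post)

# Similar gains suggest maturation alone; a much larger treatment gain
# suggests the treatment contributed beyond maturation.
print(f"Treatment gain: {treatment_gain:.1f}, control gain: {control_gain:.1f}")
```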
Statistical regression (regression to the mean)
Participants are often selected because of how poorly they perform on a pre-study selection measure
Scores vary over time; if participants are tested repeatedly, scores will tend to move toward the mean
If a participant scores very low, by statistical probability the next test score will likely be higher (closer to the mean)
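A small simulation (illustrative only; all numbers invented) of regression to the mean: participants selected for low scores on a first test tend to score closer to the population mean on a second test, even when nothing is done between the two tests.

```python
import random

random.seed(1)
POP_MEAN = 100

# Each observed score = stable true score + random measurement noise.
true_scores = [random.gauss(POP_MEAN, 10) for _ in range(1000)]
test1 = [t + random.gauss(0, 10) for t in true_scores]
test2 = [t + random.gauss(0, 10) for t in true_scores]

# Select the 100 lowest scorers on test 1, as a pre-study selection measure would.
selected = sorted(range(len(test1)), key=lambda i: test1[i])[:100]

mean1 = sum(test1[i] for i in selected) / len(selected)
mean2 = sum(test2[i] for i in selected) / len(selected)

# mean2 is typically noticeably higher than mean1 (closer to 100),
# even though no treatment occurred between the two tests.
print(f"Selected group, test 1: {mean1:.1f}; test 2: {mean2:.1f}")
```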
Overcoming statistical regression
Control group
Alternate treatment groups
Multiple testing
Instrumentation
Asks: “Did a change in instrumentation lead to changes in participants’ behaviors over time?”
Physical instruments
Human “instruments” that record data, observe behavior, etc.
What are ways to control the instrumentation threat to internal validity?
Check instruments as frequently as possible.
Take breaks and distribute work to reduce fatigue.
Save and back up data as frequently as possible.
Consider how an update to a machine might change your data.
Selection
How were participants selected? This threat happens more often when selection is not random
Decreases the ability to establish a cause/effect relationship
Can co-occur with threats like maturation or history
This is why it is best to randomly assign participants to groups if at all possible
Mortality
Participants drop out before the end of the study
Can happen even with random assignment to groups
Can happen for seemingly no reason.
Can have an impact on the interpretation of results.
Quasi-experimental
No random assignment but still has manipulation.
Why even do Quasi-experimental research?
Can still be valuable research, but may be at greater risk for threats to internal validity.
Cause/effect conclusions are weaker than in true experimental designs
Nonequivalent control group design
Identify two pre-existing groups
Assign one group to experimental condition, one to control condition
May randomly assign which group gets the experimental vs. control condition
This random assignment may help reduce investigator bias, but is not as strong as randomly assigning individual participants to groups
Nonequivalent control group design is typically
pretest – posttest design
Pre-test helps determine that
the groups are not significantly different on key variables before treatment
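One common way to check this is an independent-samples t-test on the pretest scores of the two nonequivalent groups. A sketch, assuming SciPy is available and using made-up scores:

```python
from scipy import stats

# Hypothetical pretest scores for two pre-existing (nonrandom) groups.
group_a_pretest = [23, 25, 22, 27, 24, 26, 23]
group_b_pretest = [24, 22, 26, 25, 23, 27, 24]

t, p = stats.ttest_ind(group_a_pretest, group_b_pretest)

# A large p-value does not prove the groups are equivalent, but a small one
# warns that posttest differences may reflect pre-existing differences.
print(f"t = {t:.2f}, p = {p:.3f}")
```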
Notations: X
treatment
Notations: N
nonrandom assignment
Notations: O
observation/measurement
Pretest – posttest nonequivalent control group design
N O X O
N O _ O
Pretest – posttest nonequivalent treatment and control groups
N O X1 O
N O X2 O
N O _ O
Switching replication design with nonequivalent control group
N O X O _ O
N O _ O X O
Double pretest – posttest nonequivalent control group
N O O X O
N O O _ O
Repeated measures group design
One group of participants
Multiple measurements/observations
With this description (1 group, multiple measurements)
May be non-experimental or experimental in nature
What is the benefit of a repeated measures group design?
Compared to a two-group design, fewer participants are needed
Experimental repeated measures design is more feasible if
the multiple measures are task manipulations rather than treatments
Experimental repeated measures design may also be possible when
the conditions are treatments
X1 O X2 O
Experimental repeated measures design with counterbalancing
R* X1 O X2 O
R* X2 O X1 O
R*: random assignment to treatment/condition order
All participants (in one group) receive both treatments/conditions; half receive one order and half receive the opposite order.
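A minimal sketch of counterbalanced order assignment: every participant receives both conditions, and participants are randomly split so that roughly half get each order. The participant IDs and condition labels are invented.

```python
import random

random.seed(42)
participants = [f"P{i:02d}" for i in range(1, 11)]
orders = [("X1", "X2"), ("X2", "X1")]   # the two counterbalanced orders

# Shuffle participants, then alternate orders so the split is as even as possible.
random.shuffle(participants)
assignment = {p: orders[i % 2] for i, p in enumerate(participants)}

for participant, order in sorted(assignment.items()):
    print(participant, "->", " then ".join(order))
```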
Group designs can be helpful, but may not be beneficial in all cases:
Number of participants is small (e.g., a disorder with low prevalence)
Individual participants are expected to act in distinctive ways
Group measures may not allow you to see the distinctive trends
Levels of evidence: Strength of Evidence
In EBP, you have to be able to evaluate the strength of the evidence
Part of the level of evidence relates to
the research design used in the study.
The strongest designs are true experimental control group designs
Randomized clinical trials
Levels of Evidence: Depth of Evidence
Systematic reviews
Meta-analysis
Systematic reviews
Rigorous evaluation of previous studies
Specific methods of finding studies, seeing if studies can be included, critically evaluating the research
Done to attempt to answer a research question
“…aims to provide an objective & comprehensive literature search to identify empirical studies addressing the same research questions…”
Meta-analysis
“systematic evaluation of the aggregated findings of multiple studies”
Allows for an estimate of effectiveness of intervention
“combines and synthesizes results from separate studies to provide a quantitative summary of research findings using statistical tools”
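A minimal sketch of the inverse-variance (fixed-effect) weighting commonly used to pool effect sizes in a meta-analysis; the study effect sizes and standard errors below are invented for illustration.

```python
import math

# (effect size, standard error) for several hypothetical studies.
studies = [(0.45, 0.20), (0.30, 0.15), (0.60, 0.25), (0.20, 0.10)]

# Inverse-variance weights: more precise studies count more toward the pooled estimate.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect.
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"Pooled effect: {pooled:.2f}, 95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
```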
ASHA Levels of evidence Ia
Well-designed meta-analysis of >1 randomized controlled trial
ASHA Levels of evidence Ib
Well-designed randomized controlled study
ASHA Levels of evidence IIa
Well-designed controlled study without randomization
ASHA Levels of evidence IIb
Well-designed quasi-experimental study
ASHA Levels of evidence III
Well-designed non-experimental studies, i.e., correlational and case studies
ASHA Levels of evidence IV
Expert committee report, consensus conference, clinical experience of respected authorities