Why do you need research methods?
Every clinician is a scientist
Every therapy session is an experiment
Every treatment target must be carefully measured to determine:
Is the client changing?
Is the treatment responsible for any observed changes?
Am I really a Scientist?
Scientific Method:
Ask a question
Make a hypothesis & predict what will happen if the hypothesis is true
Test the hypothesis through experimentation/observation
Analyze the results
Replicate the results
Characteristics of scientific method
Systematic
Empirical
Control
Critical Examination
Systematic
a logical sequence is followed from data collection through analysis and interpretation
Empirical
objective, observable data is documented to avoid subjectivity
Control
factors other than the manipulated variable(s) are controlled (not allowed to vary)
Critical Examination
results are checked by the researcher and others to permit validation and reduce bias
Types of Research
Quantitative
Qualitative
Basic (AKA Bench research)
Applied
Translational (AKA bench to bedside research)
Quantitative
uses numerical data to measure outcomes under controlled, standardized conditions
Qualitative
uses narrative description based on observation and interviews/open-ended questions to understand a phenomenon in the context where it takes place
Basic (AKA bench research)
designed to develop, test, or refine a theory based on empirical data without regard for a practical application
Applied
designed to test theories and solve problems that directly impact practice; often conducted under practice conditions with the population in question
Translational (AKA bench to bedside research)
designed to apply results from basic research to answer clinical questions AND create scientific questions rooted in clinical problems
Experimental (research)
involves researcher controlling and manipulating variables to determine the impact on an outcome
ex: randomized controlled trials (RCTs), single case research designs, quasi-experimental studies
Nonexperimental/observational
involves observing and describing variables without controlling or manipulating them
ex: case study
Constructs
unobservable ideas abstracted from empirical observations and constructed by the scientists to explain phenomena
Models
analogy or description that symbolically represents an unobservable phenomenon to help scientists better understand that phenomenon
Theories
a collection of ideas that describes how variables are related
allow scientists to summarize knowledge, explain events, predict future events and promote further inquiry
Deductive reasoning (top-down)
drawing specific conclusions based on general observations
Inductive reasoning (bottom-up)
drawing general conclusions based on specific observations
Evidence-Based Practice - ASHA
Clinical expertise/expert opinion (including your clinical data)
External scientific evidence
Client/Patient/Caregiver perspectives
Steps of EBP
Frame the clinical question
Find the evidence
Assess the evidence
Make the Clinical Decision
KEY Steps of EBP
Pose an answerable question
Search for the evidence
Critically appraising evidence for its validity and relevance
Make a decision by integrating the evidence with clinical experience and patient/client values
Evaluate the performance after acting on the evidence
Pose an Answerable Question
Are people with vocal nodules who received surgery more or less likely to have a recurrence of nodules than similar patients who received behavioral therapy (VFEs)?
PICO
Population: People with vocal nodules
Intervention: VFEs
Comparison: microsurgery
Outcome: recurrence of nodules
Types of PICO Questions
Treatment, prevention, diagnosis, prognosis, etiology
PICO: Population
disease, disorder
PICO: Intervention
variable of interest, exposure, risk factor
PICO: Comparison
placebo, business as usual, absence of risk factor
PICO: Outcome
risk of impairment, accuracy of diagnosis, rate of occurrence
Critically Evaluate the Evidence
Focus on designs and methods used by researchers
Levels of evidence
Hierarchy of research designs
Organized according to the strength of the conclusions that can be made about cause-effect relationships between treatments and outcomes
Hierarchy of Evidence
1a. Well designed syntheses of multiple Randomized Controlled Trials
1b. Well designed Randomized Controlled Trial
2a. Well designed, non-randomized (quasi-experimental) design
2b. Well designed single case research design
Quantitative Reviews (e.g., “5 case studies reported that 60%…”)
Narrative reviews (e.g., “this article was about…”)
Non-experimental (case reports; descriptive studies)
Expert opinion (e.g., textbooks, lectures)
Research Questions
A question about a specific problem that can be answered with a single study and is:
Important: meaningful and useful with a strong potential impact (on practice or theory)
Answerable: empirically testable through observation
Feasible: able to be completed by the researcher (& team) within a realistic timeframe and in an ethical manner
Descriptive RQ
asks “what is” to improve characterization of a phenomenon
Psychometric RQ
asks whether a measurement tool has sufficient evidence of reliability and validity to be used for a specific purpose
Relationship RQ
Asks whether there are associations, linkages, or predictions between two variables, without requiring a cause/effect
Difference RQ
asks whether there are similarities, differences, comparisons, influences or effects of an IV on a DV
Diagnostic/Prognostic
usually asks about how one variable predicts the presence of another
Causation
usually asks whether a certain exposure causes an outcome
Disorder Characteristics
Do _____ (population) produce/comprehend (task) _____ (S/L element) during ___ condition?
Severity Changes
Is _____ in ______ (population) more/less severe after ____?
Therapeutic
usually asks about the effectiveness, safety, and/or tolerability of a specific treatment
Treatment Comparison
Does ___ (behavior) in ____ (population) decrease more after ____ treatment than ____ treatment?
Treatment
Does ___ treatment increase/decrease ______ (behavior) in ____ (population)?
Target Population
the group of people that will be studied (and to whom we hope to generalize results)
Consider specifying age, diagnosis, functioning level, etc
Independent Variable
Conditions that cause changes in behavior; the manipulated variable
Dependent Variable
Behavior that is/are measured for change based on manipulation of the IV; usually expected to change (outcome variable)
Active IV
Variable that is actively manipulated, such as an intervention
e.g.: medical/behavioral intervention, listening conditions (noise vs. quiet)
Attribute IV
variable that cannot be actively manipulated, but causes/predicts changes to outcome
e.g.: SES, diagnosis, age, sex
Demographic Variables
are NOT manipulated
Is the variable controlled through matching or mentioned as being similar across groups? Is the variable listed only as a way of describing the participants?
Independent Variables
ARE manipulated
Is the variable being tested to see whether it causes a change in the outcome?
Hypothesis
statements that predict how an IV and DV are related (or different) in a specified population
Null hypothesis: prediction that no relationship or difference exists between the IV and DV
We do not accept the Null Hypothesis. We reject or fail to reject the null hypothesis
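A small Python sketch (hypothetical scores; the function name and data are ours) can make this concrete: a permutation test estimates the p-value for a difference between group means, and we reject the null hypothesis when the p-value falls below a chosen alpha (e.g., 0.05); otherwise we fail to reject it — we never "accept" it.

```python
import random
import statistics

def permutation_test(group_a, group_b, n_perm=5000, seed=0):
    """Estimate a p-value for the difference in group means by shuffling labels."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = group_a + group_b
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly reassign scores to the two groups
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            count += 1
    return count / n_perm

# Hypothetical outcome scores for two treatment groups
treatment = [14, 15, 16, 17, 18, 19]
control = [10, 11, 12, 13, 14, 15]
p = permutation_test(treatment, control)
# Reject H0 if p < alpha (e.g., 0.05); otherwise fail to reject H0
```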
Critically Evaluate the Evidence
Focus on designs and methods used by researchers
Levels of Evidence
Hierarchy of Research Designs
Organized according to the strength of the conclusions that can be made about cause-effect relationships between treatments and outcomes
Filtered Information
Systematic Reviews, Evidence Syntheses + Clinical Guidelines
Unfiltered Information
Randomized Control Trials
Cohort Studies
Case Studies
Background Information and Expert Opinion
How to Critically Appraise a Study?
Consider how the study was conducted:
Sampling
Control of Extraneous (“nuisance”) variables
Use of a control group
Initial group similarity
Extraneous ("Nuisance") Variable
Any variable other than the independent variable that could affect the dependent variable
Selection of Homogeneous Subjects
Choose only subjects who have the same characteristics of the extraneous variable
Blocking
Build extraneous attribute variables into the design by using them as independent variables, creating blocks of subjects that are homogeneous for the different levels of the variable
Matching
Match subjects on specific characteristics across groups
Using Subjects as their own Control
Expose subjects to all levels of the independent variable, creating a repeated measures design
Analysis of Covariance
Select an extraneous variable as a covariate, adjusting scores statistically to control for differences on the extraneous variable
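As a rough illustration of the adjustment idea (a simplified sketch with made-up numbers, not a full ANCOVA), the Python below removes the linear effect of a covariate from the outcome scores, so group comparisons are made on covariate-adjusted values:

```python
def covariate_adjusted(scores, covariate):
    """Adjust DV scores by removing the linear effect of a covariate (simplified ANCOVA idea)."""
    n = len(scores)
    mean_x = sum(covariate) / n
    mean_y = sum(scores) / n
    # Slope of the least-squares line predicting scores from the covariate
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(covariate, scores))
    sxx = sum((x - mean_x) ** 2 for x in covariate)
    slope = sxy / sxx
    # Shift each score as if its covariate value equaled the covariate mean
    return [y - slope * (x - mean_x) for x, y in zip(covariate, scores)]

# Made-up example: outcome depends perfectly linearly on the covariate,
# so after adjustment every score equals the grand mean
adjusted = covariate_adjusted(scores=[7, 9, 11, 13], covariate=[1, 2, 3, 4])
print(adjusted)  # [10.0, 10.0, 10.0, 10.0]
```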
Handling of Lost/incomplete data
e.g., someone leaves halfway through or doesn't complete the treatment
Blinding
Double blind - neither the participants nor the experimenters know the condition assignments until data collection is complete
Single blind - only one party (e.g., the experimenter taking the measurements) is blind to condition assignment
Evaluate
Integrate the evidence with SLP clinical experience and client/caregiver values (e.g., student and parent factors). If performance after acting on the evidence does not match the evidence, go back to the beginning and start again
Experimental Control
Methods used to prevent extraneous variables from interfering with our ability to detect a relationship between an independent and dependent variable
Statistical Conclusion Validity
Relates to using appropriate statistical methods to analyze data
Use of inappropriate statistics can result in misinterpretation of findings
Insufficient power
Violation of statistical assumptions
Internal Validity
Relates to the ways in which extraneous variables can interfere with cause-effect relationships between IVs and DVs
So how do we know if cause-effect relationship exists?
Cause (intervention) must precede effect (change in outcome) in time
Effect (change in outcome) should:
Be present when the cause (intervention) occurs
Not be present when the cause does not occur
No other reasonable explanation can account for the effect
Threats to Internal Validity
History
Maturation
Attrition
Testing (i.e., repeated measurements)
Instrumentation
Regression
Construct Validity
Relates to how the theoretical constructs associated with IVs and DVs are conceptualized
Operational definitions
Order effects
Experimental bias
External Validity
Relates to the generalizability of study findings to new circumstances
Answers “How useful are findings outside of the study?”
Interaction of Treatment and Selection
Interaction of Treatment and Setting
Interaction of Treatment and History
Efficacy and Effectiveness
are terms that capture the effects of treatment
Efficacy
examines the effects of treatment on an outcome measure under randomized and controlled conditions
“efficaCy” (C for controlled)
Effectiveness
examine the effects of treatment on an outcome measure under applied, “real world” practice conditions
“EffectiVeness” (EV for external validity)
Experimental Design
Research methods used to assess the cause-effect relationship between one or more IVs (condition/intervention) and one or more DVs (outcome measure)
True experimental designs require:
Experimental control (to prevent extraneous variables from affecting the outcome)
Randomization of participants to IV conditions
Control and random assignment help to mitigate threats to internal validity
Randomized controlled trial (RCT)
randomly assigning people to control and experimental groups
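A minimal Python sketch of the random-assignment step (participant IDs and the function name are hypothetical):

```python
import random

def assign_randomly(participants, seed=None):
    """Randomly split participants into experimental and control groups."""
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"experimental": shuffled[:half], "control": shuffled[half:]}

# Hypothetical participant IDs P1..P20
groups = assign_randomly([f"P{i}" for i in range(1, 21)], seed=42)
print(len(groups["experimental"]), len(groups["control"]))  # 10 10
```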
History
Background info (parents cannot afford slp so inconsistent therapy)
Maturation
growing up changes e.g.: phonological errors go away when correct age
Attrition
Loss of participants
Testing
Child might get used to test and score higher
Instrumentation
reliability of how the dependent variable is measured
Regression
Scores that fall at the extreme ends of the measurements
Operationally define
state exactly what the target behaviors are
Independent Group
Each group receives only one level of the independent variable
Between Groups Design
Comparing the different groups
Pretest/Posttest
The dependent variable is measured (pretest) before the independent variable is introduced, then measured again (posttest) afterward
Posttest Only
The dependent variable is measured only after the independent variable is introduced
Factorial Design
two or more independent variables with two or more conditions
Repeated Measures/Within Group Designs
Each person gets all of the conditions, can compare how they did with the different conditions
Mixed Designs
Involves elements of both between- and within-groups designs
Quasi-Experimental Design
Research methods that lack one or more elements required for true experimental designs
Control group
OR
Randomization of participants to IV conditions
May be more susceptible to threats to internal validity
One group pretest-posttest
Lacks control group
Time Series Design
repeatedly measuring the dependent variable over time
multiple measurements are taken both before and after the intervention
Nonequivalent Pretest-Posttest (posttest only) control group design
Lacks random assignment to group. Still have control group but participants not randomly assigned to group
Primary Outcome Measure
Only ONE primary question/measure
Main question of the study - study designed to answer this question
Secondary outcome measures
additional question(s) of interest
Study NOT designed to answer these questions
Interpret with caution
Measure construct you want to measure
Need to isolate construct of interest from other constructs
Ex: think you are measuring oral reading, but really you are measuring apraxia of speech
Is the measure appropriate, accessible, feasible, and practical for the population?
Language/cognitive level
Age/development
Culture
Length
Complexity
Are results interpretable/comprehensible to the people who need the information?
Will others understand your results?
Consider how results are presented: raw score, T-score, standard score, % correct