Evidence-based medicine
the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients
Evidence Based Practice
taking into account the research evidence, your practice experience, and the client's values when making clinical decisions
EBP is multifaceted and includes:
- external scientific evidence
- practitioner's experiences
- client/family/caregiver situation and values
External Evidence is typically obtained from
- research articles
- scientific journals
Steps of the Scientific Method
1. Ask a question
2. Gather information
3. Formulate a hypothesis
4. Test/Implement the hypothesis
5. Examine and report the evidence
Important characteristic of the Scientific Method
replication
Importance of Practitioner Experience in EBP
- Research cannot keep up with clinical practice
- Diversity of clients
- Different settings
- pragmatic (logistical) constraints of the "real world"
The reflective practitioner
incorporates theory into their decision-making
Client-centered practice
- Emphasizes the client's choice
- Appreciates the client's expertise in their own life situation
Shared decision-making
- Client shares personal values and lived experience; client has the final word
- Practitioner shares research and clinical experience; acts as a trusted advisor
Why evidence based practice?
- Carries more weight when backed with evidence
- Increases the confidence of clients & colleagues
- Improves quality of service
- Facilitates communication w/ colleagues, agencies, and clients
- Produces more positive outcomes
Process of EBP
(Mirrors the Scientific Method)
1. Formulate a question based on a clinical problem
2. Identify relevant evidence
3. Evaluate evidence
4. Implement useful findings
5. Evaluate the outcomes
1. Formulate a question based on a clinical problem
1. Identification of a problem
2. Formulation of a question to narrow the focus
Common types of questions
- Efficacy of an intervention
- Description of a condition
- Prediction of an outcome
- Lived experience of a client
- Usefulness of an assessment
2. Identify relevant information; evidence should include:
- Client's experience & values
- Practitioner's experience
- Research
3. Evaluate the Evidence
- Strength
- Applicability
3. Evaluation of evidence strength includes
- sample size
- study design
- outcome measures used
3. Evaluation of evidence applicability includes
- practice situation/setting
- client circumstances
4. Implement useful findings
Collaborative!!
- shared decision-making
- apply evidence
5. Evaluate the outcome
Were the goals achieved?
- Did it provide the desired information?
- Did it meet the client's goals?
- Was the prediction consistent with the client's outcome?
- Did the client's lived experience resonate with the research evidence?
Questions on Efficacy of an Intervention
P= population/participant
I= intervention/exposure
C= comparison/control
O= outcome
Efficacy (common design methods)
- randomized trials
- nonrandomized trials
- pretest/posttest
- single subject
Usefulness of an assessment (common design methods)
- psychometric methods
- reliability studies
- validity studies
- sensitivity/specificity studies
Description of a condition (common design methods)
- incidence & prevalence studies
- group comparisons
- surveys/interviews
Prediction of an outcome (common design methods)
- correlation & regression studies
- cohort studies
Client lived experience (common design methods)
- qualitative studies
- ethnography
- phenomenology
Levels of Evidence
a hierarchical system for evaluating the strength of evidence for efficacy questions
Randomized controlled trial
- at least 2 groups (experimental & control/comparison)
- participants are randomly assigned to groups
- an intervention must be applied to the experimental group
Nonrandomized controlled trial
- at least 2 groups (experimental & control/comparison)
- participants are not randomly assigned to groups
- an intervention must be applied to the experimental group
Single group trial w/ pre- and posttest (pre-experimental)
- 1 group (experimental, no control/comparison)
- comparison comes from variation between pre-intervention assessment & post-intervention assessment
Systematic Review
An analysis that pools the findings of two or more randomized studies
Level I Evidence
Systematic Review
Level II Evidence
Randomized controlled trials
Level III Evidence
Nonrandomized controlled trial
Level IV Evidence
One group trial with pre & post test (no control)
Level V Evidence
Case reports and expert opinion
What is the significance of randomized trials
Randomization reduces biases that could confound the findings of a study
Internal Validity
ability to indicate that the intervention, rather than another influence caused the outcome (causation > correlation)
Drawback of Level IV evidence
Variation in scores could be due to natural maturation/healing or a placebo effect
Which evidence is often used for initial pilot programs
Level IV because there are fewer resources required
Research designs used in usefulness assessments
- Validity
- Reliability
Reliability study
how consistent the results are across different conditions
Validity study
Ability of an assessment to measure what the assessment is intended to measure
Research designs used in assessment studies
- Sensitivity
- Specificity
Sensitivity
proportion of individuals who are accurately identified as possessing the condition of interest
Specificity
the proportion of individuals who are accurately identified as not having the condition of interest
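The two proportions above come straight from a 2x2 screening table. A minimal sketch with made-up counts (all numbers here are illustrative, not from any real study):

```python
# Hypothetical 2x2 table for a screening test (illustrative counts only):
#                 condition present   condition absent
# test positive        TP = 45             FP = 10
# test negative        FN = 5              TN = 140
TP, FP, FN, TN = 45, 10, 5, 140

# Sensitivity: of everyone WITH the condition, how many did the test catch?
sensitivity = TP / (TP + FN)   # 45 / 50 = 0.90

# Specificity: of everyone WITHOUT the condition, how many did the test clear?
specificity = TN / (TN + FP)   # 140 / 150 ≈ 0.93

print(f"Sensitivity: {sensitivity:.2f}")
print(f"Specificity: {specificity:.2f}")
```

Note that the two denominators differ: sensitivity is computed over the column of people who truly have the condition, specificity over the column of people who truly do not.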
Research designs used in descriptive studies
- Prevalence
- Incidence
- Cross-sectional research
- Longitudinal research
Prevalence
Proportion of a population found to have a condition
Incidence
risk of developing a condition within a period of time
ex: 784,000 adults over the age of 65 developed this disease in 2019
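The distinction between the two proportions can be sketched with made-up numbers (a hypothetical town, not real data): prevalence counts everyone living with the condition at one point in time, while incidence counts only new cases among those at risk during a time window.

```python
# Hypothetical town of 10,000 residents (illustrative numbers only)
population = 10_000
existing_cases = 250        # everyone living with the condition right now
new_cases_this_year = 40    # diagnosed during a one-year window

# Only people who don't already have the condition can become new cases
at_risk = population - existing_cases

prevalence = existing_cases / population    # 250 / 10,000 = 2.5%
incidence = new_cases_this_year / at_risk   # 40 / 9,750 ≈ 0.41% per year

print(f"Prevalence: {prevalence:.1%}")
print(f"Incidence:  {incidence:.2%} per year")
```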
Cross-sectional research
data collected at a single point in time
Longitudinal research
data collected over an extended period of time
Designs used in predictive studies
- cross sectional
- longitudinal (higher level of evidence)
Designs about client's lived experience
(qualitative)
- interviews
- extensive observation in natural contexts
Critically Appraised Topic (CAT)
Summarizes evidence from several studies on a topic; geared toward application
- often published by professional organizations
Critically Appraised Papers (CAP)
Analysis of a single study
- includes identification of strengths and weaknesses of a study
Clinical Bottom Line
summary statement of a CAP with condensed recommendations from the author
The PICO format is most consistently applied to which type of research?
Efficacy
The question "Which factors are most correlated with fall risk?" would best be answered with what type of question?
Prediction of an outcome
What feature of systematic reviews makes them stronger?
Replication
PICO question format
Includes:
Population, Intervention, Comparison/control, and Outcome
ex: "In adults with arthritis (P), what is the effect of occupational therapy (I), compared to not receiving care (C), on reducing levels of pain (O)?"