Intro to Evidence-Based Practice (DPT 6112 – 2024)
Introduction to Evidence-Based Practice (EBP)
• Course context: “Intro to Evidence-Based Practice – DPT 6112 – RDI – 2024”.
• Central metaphor (blind men & the elephant): each observer thinks the elephant is a spear, snake, fan, etc. → illustrates the distorting influence of bias and fragmented perspectives.
• Goal of EBP: obtain a complete, minimally biased picture by deliberately integrating research, expertise, and patient values.
Bias & How to Minimize It
• Bias = any systematic deviation that distorts truth.
• Bayes’ Theorem – practical mantra: “Update your priors.”
– Step 1 (Prior): Begin with an initial belief P(A) about phenomenon A.
– Step 2 (New Evidence): Observe information B with likelihood P(B|A).
– Step 3 (Update): Revise belief to posterior P(A|B) = P(B|A) · P(A) / P(B).
• Overcoming bias therefore demands openness to new data and iterative probability revision.
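The update rule above can be sketched numerically. All figures below (a 2% prevalence prior, 90% sensitivity, 80% specificity) are invented for illustration, not taken from the course material:

```python
# Worked Bayes update (all numbers invented for illustration):
# prior P(A) = 2% prevalence; test sensitivity P(B|A) = 90%;
# specificity = 80%, so false-positive rate P(B|not A) = 20%.
def posterior(prior, sensitivity, specificity):
    """P(A|B): probability of the condition given a positive test."""
    p_b = sensitivity * prior + (1 - specificity) * (1 - prior)  # total P(B)
    return sensitivity * prior / p_b

print(round(posterior(0.02, 0.90, 0.80), 3))  # ≈ 0.084
```

Even a fairly accurate test yields only an ~8% posterior at low prevalence, which is exactly the kind of counterintuitive revision the "update your priors" mantra demands.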
Roles of Statistics & Research
• THREE core purposes:
- Describe the world (descriptive statistics).
- Assess causal effects between variables (inferential/analytic statistics).
- Predict future outcomes (predictive analytics & modeling).
Defining Evidence-Based Practice
• Sackett (1996) classic phrasing: EBP is “the conscientious, explicit, and judicious use of the current best evidence in making decisions about the care of individual patients.”
• Three-legged stool diagram:
– Research evidence.
– Clinical expertise.
– Patient values/preferences.
• NONE of the components alone is sufficient; synergy is required.
Decision-Making Models
• Traditional model: experience + clinical circumstances + patient preferences.
• EBP model: traditional triad PLUS scientific evidence → more explicit, transparent, and reproducible.
Why Do We Need EBP?
• Information overload: “drinking from a fire hydrant” – exponential publication growth.
• Research waste: ≈50% loss at each stage of the research pipeline compounds to ≈85% of research investment wasted overall (Chalmers & Glasziou 2009).
• Historical mistakes: e.g., hormone-replacement therapy was expected on theoretical grounds to lower cardiovascular risk, yet trials showed increased breast-cancer incidence.
• Variation & authority bias: clinicians tend to practice as first taught; large geographic and inter-provider variation.
• Gap between knowledge & practice necessitates structured translation mechanisms.
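The research-waste figure can be illustrated with back-of-the-envelope compounding. The three-stage breakdown below is an assumption for illustration, not the exact accounting in Chalmers & Glasziou:

```python
# Back-of-the-envelope compounding: if ~50% of research value survives
# each of three successive stages (e.g., design, publication, reporting),
# only 12.5% remains usable — i.e., roughly 85-90% waste, consistent in
# spirit with the Chalmers & Glasziou estimate.
surviving = 0.5 ** 3
print(f"usable: {surviving:.1%}, wasted: {1 - surviving:.1%}")
```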
Clinical Example – Hypertension Treatment
• Factors influencing physician decision to treat hypertension:
– Absolute BP level.
– Patient age.
– Physician’s graduation year (practice inertia).
– Degree of target-organ damage.
• Illustrates non-evidence factors (training era) driving care.
Maintaining Up-to-Date Skills
• SUNY EBM Course graphic: hierarchy of question generation → publication → appraisal.
• Common failures (research waste diagram):
– Irrelevant questions, poor methodology, inadequate bias control.
– Non-publication (>50%), selective outcome reporting (>50%), incomplete intervention description (>30%).
Barriers to EBP in Physical Therapy
• APTA/Section on Research survey – top four barriers:
- No time to read.
- Lack of relevant research for given population.
- Difficult access.
- Insufficient time to learn/apply EBP methods.
• Jette et al 2003:
– Majority value EBP, yet 34% have low search confidence and 44% low interpretation confidence.
– Older PTs show lower training and familiarity.
• Stroke PT practice survey (Salbach et al): organizational/practitioner barriers—insufficient time, poor generalizability, statistical illiteracy, isolation, absent mandates.
“Evidence” Clarified
• Einstein quote: “Not everything that can be counted counts, and not everything that counts can be counted.”
• Emphasis on “Best Available External Clinical Evidence” – quality & relevance trump mere quantity of data.
Theory vs Evidence
• Theory: explains WHY an intervention SHOULD work (biomechanics, physiology).
• Evidence: demonstrates IF it DOES work in real patients.
• Example:
– Theoretical rationale for lumbar stabilization vs.
– RCT (O’Sullivan 1997) showing sustained pain/function improvement at 1–3 yrs.
• Clinicians must balance mechanistic plausibility with empirical verification.
APTA Strategic Plan (Education & Practice)
• Objectives:
- Reduce unwarranted practice variation; standardize via outcomes & evidence.
- Integrate movement-system paradigm.
- Harmonize educational readiness/performance.
- Provide faculty development & resources.
- Promote diversity & inclusion; expand PT roles in primary care; bolster health-services & outcomes research.
Fundamental Principles of Evidence
• Evidence is NECESSARY but NEVER SUFFICIENT.
• Other determinants:
– Benefits vs. risks.
– Patient inconvenience.
– Costs/feasibility.
– Individual values & preferences.
Two Evidence Questions
- Where does EACH individual study lie on the evidence hierarchy?
- What does the PREPONDERANCE of literature collectively say? (weight of evidence approach)
Levels & Hierarchy of Evidence (Individual Studies)
• 1a = Systematic Review (SR) of RCTs.
• 1b = Individual RCT.
• 2a = SR of cohort studies.
• 2b = Individual cohort.
• 2c = Outcomes study.
• 3a = SR of case-control.
• 3b = Individual case-control.
• 4 = Case series.
• 5 = Expert opinion.
• Pyramid graphic also includes guidelines sitting ABOVE SRs/meta-analyses for decision-making utility.
Randomized Controlled Trials (RCTs)
• Definition: subjects randomly allocated to ≥2 groups/interventions; one serves as the control.
• Advantages:
– Randomization balances confounders (“washes out” bias).
– Facilitates blinding.
– Familiar statistical frameworks.
– Well-defined populations.
• Disadvantages:
– Expensive & time-intensive.
– Volunteer bias limits generalizability.
– Ethical/logistical issues; attrition.
• Example: Bech et al 2018 – β-alanine supplementation showed no effect on force decline or kayak performance.
Cohort Studies
• Prospective (or retrospective) tracking of groups defined by exposure → compare outcome incidence.
• Advantages: matched cohorts, standardized criteria, cheaper/faster than RCTs.
• Disadvantages: potential confounding, no randomization, blinding difficult, long latency outcomes.
• Example: Magill et al (healthy pediatric athletes) – baseline limb asymmetries on return-to-sport tests.
Case-Control Studies
• Retrospective comparison of exposure frequency between disease cases vs. controls.
• Computes odds ratios.
• Useful for rare diseases or long latency.
• Advantages: time-efficient, multiple risk factors, ethical for harmful exposures.
• Disadvantages: recall bias, control selection challenges, unsuitable for diagnostic accuracy assessment.
• Example: Harkey et al 2018 – ultrasound assessment of femoral cartilage in ACL-reconstructed knees vs. uninjured controls.
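Because case-control studies sample on outcome rather than exposure, their effect measure is the odds ratio. A minimal sketch, with a 2×2 table of made-up counts (not data from any study cited here):

```python
# Odds ratio from a 2x2 case-control table (counts are invented).
# a = exposed cases, b = exposed controls,
# c = unexposed cases, d = unexposed controls.
def odds_ratio(a, b, c, d):
    """OR = (a/c) / (b/d) = (a*d) / (b*c)."""
    return (a * d) / (b * c)

# e.g., 30 exposed cases, 10 exposed controls,
#       20 unexposed cases, 40 unexposed controls:
print(odds_ratio(30, 10, 20, 40))  # 6.0
```

An OR of 6.0 here would mean the odds of prior exposure are six times higher among cases than controls.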
Case Reports / Case Series
• Narrative of a single patient (or a small series) with a unique or unexpected presentation/outcome.
• Lowest evidence tier yet crucial for hypothesis generation.
• Example: Downhill gait-training post-TKA improved quadriceps strength & gait symmetry; suggests feasibility for future trials.
Systematic Reviews (SR)
• Panel-driven exhaustive search, appraisal, and synthesis of ALL relevant studies for a specific question.
• Advantages: broad generalizability, evidence-based resource, less costly than new trials.
• Disadvantages: extremely time-consuming; heterogeneity may limit pooling.
• Example: Veerbeek et al 2014 – SR of PT post-stroke identifying 53 interventions and strong evidence for high-dose, task-oriented training.
Meta-Analyses
• Quantitative subset of SRs combining effect sizes to yield pooled estimate with greater statistical power.
• Use cases: resolve conflicting findings, refine magnitude of effect, detect small effects.
• Advantages: stronger statistics, confirmatory, broader inference.
• Disadvantages: requires homogeneous data, advanced statistical skill, publication bias risk.
Practice Guidelines (PGs)
• Expert-panel statements translating evidence into actionable recommendations.
• Qualities: clear scope, evidence appraisal, integration with values/costs, regular updates.
• Example: APTA Neck Pain CPG (2017) linked to ICF; assigns grades A–F based on evidence strength; multimodal interventions recommended by chronicity stage.
GRADE Framework
• Modern replacement of rigid hierarchies; rates quality of evidence across studies (high → very low) and strength of recommendations.
• Downgrading factors: study limitations, inconsistency, indirectness, imprecision, publication bias.
• Upgrading factors: large effect magnitude, dose-response gradient, and situations where all plausible confounding would only reduce the observed effect.
Steps in the EBP Process
- Identify information need & craft answerable question.
- Acquire best evidence (search).
- Appraise validity, impact, applicability.
- Apply evidence with clinical expertise & patient values.
- Assess outcomes & personal performance → feed back into cycle.
The Evidence Cycle Mnemonic
Ask → Acquire → Appraise → Apply → Assess (5 A’s).
Background vs. Foreground Questions
• Background: broad foundational; e.g., “What is the typical ACL injury mechanism?”
• Foreground: patient-specific, PICO-structured; e.g., “In adults 35–50 after ACL reconstruction, does CPM use improve return-to-sport time?”
Formulating PICO Questions
- Patient/Problem.
- Intervention.
- Comparison.
- Outcome(s).
• Employs focused keywords → efficient database searches.
Information Management – How Much to Read?
• Haynes et al 2006: clinicians can remain current by reading ≈ 20 key articles/yr (≈1–2% of total output).
• Necessitates discerning selection (pre-appraised resources, point-of-care summaries, alerts).
Key Takeaways & Practical Implications
• EBP is an ethical imperative amid exploding knowledge and finite resources.
• High-quality evidence sits atop a structured hierarchy but must be contextualized via GRADE and patient preferences.
• Clinicians must develop skills in questioning (PICO), searching, appraisal, and application while mitigating common barriers (time, access, confidence).
• Continuous updating and evaluation complete the recursive evidence cycle, ultimately improving patient outcomes and professional consistency.