• Course context: “Intro to Evidence-Based Practice – DPT 6112 – RDI – 2024”.
• Central metaphor (blind men & the elephant): each observer thinks the elephant is a spear, snake, fan, etc. → illustrates the distorting influence of bias and fragmented perspectives.
• Goal of EBP: obtain a complete, minimally biased picture by deliberately integrating research, expertise, and patient values.
• Bias = any systematic deviation that distorts truth.
• Bayes’ Theorem – practical mantra: “Update your priors.”
– Step 1 (Prior): Begin with an initial belief P(A) about phenomenon A.
– Step 2 (New Evidence): Observe information B with likelihood P(B | A).
– Step 3 (Update): Revise belief to the posterior P(A | B) = P(B | A) · P(A) / P(B).
• Overcoming bias therefore demands openness to new data and iterative probability revision (numeric sketch below).
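• A minimal numeric sketch of one such update, in Python (probabilities invented for illustration; `bayes_update` is a hypothetical helper, not course material):

```python
# One Bayesian update: prior belief -> posterior belief after observing B.
def bayes_update(prior: float, p_b_given_a: float, p_b_given_not_a: float) -> float:
    """Return P(A|B) from P(A), P(B|A), and P(B|not A)."""
    # Total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
    p_b = p_b_given_a * prior + p_b_given_not_a * (1.0 - prior)
    return p_b_given_a * prior / p_b

# Example: prior belief that a treatment works = 0.30; a favorable
# observation is 4x as likely if the treatment truly works (0.80 vs 0.20).
posterior = bayes_update(prior=0.30, p_b_given_a=0.80, p_b_given_not_a=0.20)
print(f"posterior = {posterior:.2f}")  # posterior = 0.63
```

• One favorable observation moves the belief from 0.30 to ≈ 0.63; further evidence simply iterates the same rule on the new prior.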
• THREE core purposes:
• Sackett (1996) classic phrasing: EBP is “the conscientious, explicit, and judicious use of the current best evidence in making decisions about the care of individual patients.”
• Three-legged stool diagram:
– Research evidence.
– Clinical expertise.
– Patient values/preferences.
• NONE of the components alone is sufficient; synergy is required.
• Traditional model: experience + clinical circumstances + patient preferences.
• EBP model: traditional triad PLUS scientific evidence → more explicit, transparent, and reproducible.
• Information overload: “drinking from a fire hydrant” – exponential publication growth.
• Research waste: ~50% waste at each research step → aggregate ≈ 85% overall loss (Chalmers & Glasziou 2009).
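• The aggregate figure reflects compounding losses; a back-of-envelope version (a three-stage staging of my own, not the paper's exact breakdown):

```latex
% If only ~50% of value survives each of three successive stages:
\[
  \text{surviving fraction} = 0.5^{3} = 0.125
  \quad\Longrightarrow\quad
  \text{aggregate waste} \approx 1 - 0.125 = 87.5\% \;(\approx 85\%)
\]
```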
• Historical mistakes: e.g., hormone-replacement therapy was expected on theoretical grounds to lower cardiovascular risk, yet it raised breast-cancer incidence.
• Variation & authority bias: clinicians tend to practice as first taught; large geographic and inter-provider variation.
• Gap between knowledge & practice necessitates structured translation mechanisms.
• Factors influencing physician decision to treat hypertension:
– Absolute BP level.
– Patient age.
– Physician’s graduation year (practice inertia).
– Degree of target-organ damage.
• Illustrates non-evidence factors (training era) driving care.
• SUNY EBM Course graphic: hierarchy of question generation → publication → appraisal.
• Common failures (research waste diagram):
– Irrelevant questions, poor methodology, inadequate bias control.
– Non-publication (>50%), selective outcome reporting (>50%), incomplete intervention description (>30%).
• APTA/Section on Research survey – top four barriers:
• Einstein quote: “Not everything that can be counted counts, and not everything that counts can be counted.”
• Emphasis on “Best Available External Clinical Evidence” – quality & relevance trump mere quantity of data.
• Theory: explains WHY an intervention SHOULD work (biomechanics, physiology).
• Evidence: demonstrates IF it DOES work in real patients.
• Example:
– Theoretical rationale for lumbar stabilization vs. RCT evidence (O’Sullivan 1997) showing sustained pain/function improvement at 1–3 yrs.
• Clinicians must balance mechanistic plausibility with empirical verification.
• Objectives:
• Evidence is NECESSARY but NEVER SUFFICIENT.
• Other determinants:
– Benefits vs. risks.
– Patient inconvenience.
– Costs/feasibility.
– Individual values & preferences.
• Levels of evidence (Oxford CEBM hierarchy):
– 1a = Systematic Review (SR) of RCTs.
– 1b = Individual RCT.
– 2a = SR of cohort studies.
– 2b = Individual cohort study.
– 2c = Outcomes research.
– 3a = SR of case-control studies.
– 3b = Individual case-control study.
– 4 = Case series.
– 5 = Expert opinion.
• Pyramid graphic also includes guidelines sitting ABOVE SRs/meta-analyses for decision-making utility.
• Definition (randomized controlled trial, RCT): subjects randomly allocated to ≥2 interventions; one serves as control.
• Advantages:
– Randomization balances confounders (“washes out” bias; allocation sketch after this block).
– Facilitates blinding.
– Familiar statistical frameworks.
– Well-defined populations.
• Disadvantages:
– Expensive & time-intensive.
– Volunteer bias limits generalizability.
– Ethical/logistical issues; attrition.
• Example: Bech et al 2018 – β-alanine supplementation showed no effect on force decline or kayak performance.
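• A minimal sketch of the allocation step in Python (hypothetical subject IDs; simple 1:1 randomization rather than any specific trial’s scheme):

```python
import random

def randomize(subject_ids: list[str], seed: int = 42) -> dict[str, str]:
    """Simple 1:1 random allocation to 'treatment' or 'control'."""
    rng = random.Random(seed)      # fixed seed makes the allocation auditable
    shuffled = subject_ids[:]      # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    # Because assignment ignores subject characteristics, known AND unknown
    # confounders are balanced across arms on average.
    return {s: ("treatment" if i < half else "control")
            for i, s in enumerate(shuffled)}

allocation = randomize([f"S{i:02d}" for i in range(20)])
print(allocation)
```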
• Cohort study: prospective (or retrospective) tracking of groups defined by exposure → compare outcome incidence (relative-risk example below).
• Advantages: matched cohorts, standardized criteria, cheaper/faster than RCTs.
• Disadvantages: potential confounding, no randomization, blinding difficult, long latency outcomes.
• Example: Magill et al (healthy pediatric athletes) – baseline limb asymmetries on return-to-sport tests.
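• A worked relative-risk calculation on a hypothetical 2×2 cohort table (counts invented for illustration):

```latex
%              outcome   no outcome
% exposed       a = 20     b = 80
% unexposed     c = 10     d = 90
\[
  RR = \frac{a/(a+b)}{c/(c+d)} = \frac{20/100}{10/100} = 2.0
\]
% Exposed subjects show twice the outcome incidence of unexposed subjects.
```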
• Case-control study: retrospective comparison of exposure frequency between disease cases and controls.
• Computes odds ratios (worked example below).
• Useful for rare diseases or long latency.
• Advantages: time-efficient, multiple risk factors, ethical for harmful exposures.
• Disadvantages: recall bias, control selection challenges, unsuitable for diagnostic accuracy assessment.
• Example: Harkey et al 2018 – ultrasound assessment of femoral cartilage in ACL-reconstructed knees vs. uninjured controls.
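• A worked odds-ratio calculation on a hypothetical case-control 2×2 table (counts invented for illustration):

```latex
%              exposed   unexposed
% cases         a = 30     b = 20
% controls      c = 15     d = 35
\[
  OR = \frac{a/b}{c/d} = \frac{ad}{bc} = \frac{30 \times 35}{20 \times 15} = 3.5
\]
% Cases had 3.5x the odds of exposure relative to controls.
```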
• Case report/series: narrative of a single patient (or small series) with a unique or unexpected presentation/outcome.
• Lowest evidence tier yet crucial for hypothesis generation.
• Example: Downhill gait-training post-TKA improved quadriceps strength & gait symmetry; suggests feasibility for future trials.
• Systematic review: panel-driven exhaustive search, appraisal, and synthesis of ALL relevant studies for a specific question.
• Advantages: broad generalizability, evidence-based resource, less costly than new trials.
• Disadvantages: extremely time-consuming; heterogeneity may limit pooling.
• Example: Veerbeek et al 2014 – SR of PT post-stroke identifying 53 interventions and strong evidence for high-dose, task-oriented training.
• Meta-analysis: quantitative subset of SRs combining effect sizes to yield a pooled estimate with greater statistical power (pooling sketch below).
• Use cases: resolve conflicting findings, refine magnitude of effect, detect small effects.
• Advantages: stronger statistics, confirmatory, broader inference.
– Disadvantages: requires reasonably homogeneous data and advanced statistical skill; vulnerable to publication bias.
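• A minimal fixed-effect, inverse-variance pooling sketch in Python (effect sizes and standard errors invented; `pooled_effect` is an illustrative helper, not from any cited review):

```python
import math

def pooled_effect(effects: list[float], ses: list[float]) -> tuple[float, float]:
    """Fixed-effect inverse-variance pooling of study effect sizes."""
    weights = [1.0 / se ** 2 for se in ses]     # weight = precision = 1/variance
    total_w = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total_w
    pooled_se = math.sqrt(1.0 / total_w)        # SE shrinks as studies accumulate
    return pooled, pooled_se

# Three hypothetical trials: small effects, differing precision.
est, se = pooled_effect([0.30, 0.10, 0.25], [0.15, 0.10, 0.20])
print(f"pooled effect = {est:.2f}, SE = {se:.2f}")  # SE < any single study's SE
```

• The pooled SE is smaller than any individual study’s SE, which is the “greater statistical power” claim in numeric form.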
• Clinical practice guidelines (CPGs): expert-panel statements translating evidence into actionable recommendations.
• Qualities: clear scope, evidence appraisal, integration with values/costs, regular updates.
• Example: APTA Neck Pain CPG (2017) linked to ICF; assigns grades A–F based on evidence strength; multimodal interventions recommended by chronicity stage.
• GRADE: modern replacement for rigid hierarchies; rates quality of evidence across studies (high → very low) and strength of recommendations.
• Downgrading factors: study limitations, inconsistency, indirectness, imprecision, publication bias.
– Upgrading factors: large effect magnitude, dose-response gradient, and situations where all plausible confounding would only underestimate the true effect.
• EBP cycle: Ask → Acquire → Appraise → Apply → Assess (the 5 A’s).
• Background: broad foundational; e.g., “What is the typical ACL injury mechanism?”
• Foreground: patient-specific, PICO-structured; e.g., “In adults 35–50 post-ACL repair, does CPM (continuous passive motion) use improve return-to-sport time?”
• Haynes et al 2006: clinicians can remain current by reading ≈ 20 key articles/yr (≈1–2% of total output).
• Necessitates discerning selection (pre-appraised resources, point-of-care summaries, alerts).
• EBP is an ethical imperative amid exploding knowledge and finite resources.
• High-quality evidence sits atop a structured hierarchy but must be contextualized via GRADE and patient preferences.
• Clinicians must develop skills in questioning (PICO), searching, appraisal, and application while mitigating common barriers (time, access, confidence).
• Continuous updating and evaluation complete the recursive evidence cycle, ultimately improving patient outcomes and professional consistency.