Evidence-Based Practice, Research & Performance Improvement – Comprehensive Study Notes (Chapter 5)
Age of Accountability & Rationale for Evidence-Based Practice (EBP)
Modern nursing operates in an “age of accountability”: quality, safety, cost, transparency.
Public awareness of medical errors + payer penalties (e.g. CMS Hospital-Acquired Condition penalties) → pressure to use current science, not tradition.
ANA Scope & Standards of Practice, Standard 14: RNs must integrate “scholarship, evidence, and research” into practice.
Research priorities (Sun & Prufeta, 2019): nursing workflow, communication, collaboration, patient/family satisfaction, infection prevention, outcomes, safety.
Benefits & Foundations of EBP
Combines:
Best current evidence (research, guidelines, expert opinion)
Clinician expertise
Patient preferences/values
Available resources
Outcomes documented: ↑ patient satisfaction, ↓ costs, ↑ clinician empowerment, ↑ quality/consistency (Melnyk & Fineout-Overholt, 2019).
Aligned with QSEN competencies; 13 EBP competencies validated for practicing RNs, plus 11 more for APRNs.
Relationship to Clinical Judgment
Evidence informs interpretation of assessment data, identification of problems, selection of interventions.
Must still individualize: consider culture, beliefs, expectations.
7-Step EBP Process (Melnyk & Fineout-Overholt)
Step 0: Cultivate spirit of inquiry & supportive culture.
Step 1: Ask clinical question in PICOT format.
Step 2: Search for best evidence.
Step 3: Critically appraise evidence.
Step 4: Integrate evidence with expertise, patient values, resources.
Step 5: Evaluate outcomes.
Step 6: Communicate outcomes.
Step 0 – Cultivating Inquiry
Characteristics of supportive organizations: mentors, evidence-based P&Ps, infrastructure/tools, leaders who model EBP, inclusion in evaluations, recognition programs.
Step 1 – PICOT Questions
P = Patient/Population (age, gender, condition)
I = Intervention / area of Interest
C = Comparison (standard care)
O = Outcome (nondirectional statement)
T = Time frame (optional)
Types of triggers:
Problem-focused (e.g., ↑ CLABSI trend)
Knowledge-focused (new guideline, new drug)
Background vs foreground questions → foreground framed in PICOT.
Examples developed by Cathy & Tom:
"Does 2% chlorhexidine (I) vs alcohol (C) for skin prep in hospitalized patients (P) affect CLABSI incidence (O) within 6 months (T)?"
"Do sterile barrier techniques (I) vs sterile gloves only (C) during insertion affect CLABSI (O) in post-surgical patients (P) during hospitalization (T)?"
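The PICOT structure above can be captured in a small template; a minimal Python sketch (the class, field names, and example values simply restate the chlorhexidine question — none of this is from the textbook):

```python
from dataclasses import dataclass

@dataclass
class PICOT:
    """Holds the five PICOT elements; time is optional per the notes."""
    population: str
    intervention: str
    comparison: str
    outcome: str
    time: str = ""

    def question(self) -> str:
        # Assemble the foreground question in standard PICOT order
        q = (f"In {self.population} (P), does {self.intervention} (I), "
             f"compared with {self.comparison} (C), affect {self.outcome} (O)")
        if self.time:
            q += f" within {self.time} (T)"
        return q + "?"

q = PICOT(
    population="hospitalized patients with central lines",
    intervention="2% chlorhexidine skin prep",
    comparison="alcohol skin prep",
    outcome="CLABSI incidence",
    time="6 months",
)
print(q.question())
```

Writing the question this way makes it easy to check that every element is present before searching the databases in Step 2.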
Step 2 – Searching for Evidence
Sources: P&P manuals, QI data, practice guidelines, journals.
Partner with a medical librarian; refine and combine keywords/search terms.
Major databases (Table 5.1): CINAHL, MEDLINE, PubMed, EMBASE, PsycINFO, Cochrane, AHRQ, Worldviews on Evidence-Based Nursing.
Hierarchy of evidence (Fig. 5.2):
Level I: Systematic review / meta-analysis of RCTs; evidence-based guidelines
Level II: Single RCT
Level III: Controlled trial without randomization
Level IV: Case-control / cohort / correlational (non-experimental)
Level V: Systematic review of descriptive & qualitative studies
Level VI: Single descriptive / qualitative study
Level VII: Expert opinion & committee reports
Step 3 – Critical Appraisal
Determine value, feasibility, utility.
Use critical appraisal guides (purpose, sample, method, validity, results, limitations).
Elements of a research article:
Abstract, Introduction, Literature Review/Background, Manuscript Narrative (Purpose, Methods, Analysis, Results/Conclusions, Clinical Implications).
Statistical significance: p < 0.05 → less than 5% probability that the result is due to chance.
Cathy & Tom: Level I systematic review found no dressing difference; CDC guidelines strongly recommend chlorhexidine & sterile barriers; Level IV cohort supported bundled interventions.
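The p < 0.05 decision during appraisal can be illustrated with a two-proportion z-test, one common way to compare infection rates between two groups; this is a stdlib-only sketch, and the CLABSI counts below are hypothetical, not from the chapter:

```python
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int):
    """Two-sided two-proportion z-test; returns (z, p).
    Illustrative only; assumes samples large enough for the normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal: P(|Z| > z) = erfc(|z|/sqrt(2))
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# Hypothetical counts: 18 CLABSIs in 400 insertions vs 6 in 400 after a protocol change
z, p = two_proportion_z(18, 400, 6, 400)
print(f"z = {z:.2f}, p = {p:.4f}, significant at 0.05: {p < 0.05}")
```

A p-value below 0.05 means the difference would arise by chance in fewer than 1 in 20 comparable samples, supporting (but not by itself deciding) adoption of the change.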
Step 4 – Integrating Evidence
Direct application to single patient OR larger-scale change (new protocol).
Gain stakeholder buy-in: administrators (cost/outcome), staff (workflow impact), providers (care implications).
Use mentors, education (seminars, newsletters), pilot testing (≈ 3 months) before system-wide roll-out.
Update P&Ps whenever new evidence emerges (not just annually).
Step 5 – Evaluation
Compare pre- & post-intervention data over months.
Determine effectiveness, need for modification, or discontinuation.
Example: UPC audits CLABSI rates & staff adherence; unexpected ↑ infections would trigger reevaluation.
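The pre/post comparison in this step can be sketched numerically. CLABSI rates are conventionally reported per 1,000 central-line days; the audit counts below are hypothetical:

```python
def clabsi_rate(infections: int, line_days: int) -> float:
    """CLABSI rate per 1,000 central-line days (the conventional reporting unit)."""
    return infections / line_days * 1000

# Hypothetical pre/post audit data around a bundle rollout
pre = clabsi_rate(9, 4500)    # before the intervention
post = clabsi_rate(3, 4600)   # after the intervention
change = (post - pre) / pre * 100
print(f"pre: {pre:.2f}, post: {post:.2f}, change: {change:.1f}%")
```

An unexpected rise in the post-intervention rate is exactly the signal that would trigger the reevaluation described above.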
Step 6 – Communication & Sustainability
Share via huddles, Gemba boards, newsletters, councils, posters, conferences, publications.
Strategies to sustain (Box 5.2): leadership vision, EBP workgroups/journal clubs, certification, academic partnerships, visual management tools, fellowship programs.
Scientific Method (Research Foundation)
Observation of problem.
Literature review & data gathering.
Form research question/hypothesis.
Conduct rigorous study (quantitative or qualitative) → collect empirical data.
Analyze & draw conclusions (validity, reliability, generalizability).
Comparison: Nursing Process vs Scientific Method (Table 5.2)
Assessment ↔ Observe & review literature
Diagnosis ↔ Formulate research question/hypothesis
Planning ↔ Design study (methodology, sampling, variables, analytics)
Implementation ↔ Conduct study & collect data
Evaluation ↔ Analyze data, draw conclusions, disseminate findings
Types & Methods of Research
Exploratory, Descriptive, Correlational, Historical, Evaluation, Experimental.
Quantitative Research
Objective, numeric, statistical.
Experimental (RCTs): random assignment; highest causal evidence.
Quasi-experimental (non-randomized controlled): potential bias.
Non-experimental/Descriptive: explain/predict phenomena (e.g., case-control, cohort).
Surveys: measure frequency, distribution; must minimize sampling error.
Qualitative Research
Subjective, narrative; explores meaning/experience.
Uses inductive reasoning.
Designs: Phenomenology, Ethnography, Grounded Theory.
Data = interview transcripts, field notes; analysis → themes.
Translation Research (Implementation Science)
Tests strategies to implement & sustain EBP in real-world settings.
Goal: determine what works, for whom, in which context.
5-phase continuum: Basic research → Phase 1 (safety) → Phases 2 & 3 (efficacy) → Phase 4 (practice) → Phase 5 (community/population outcomes).
Example: testing adoption strategies for Naylor’s Transitional Care Model.
Differs from EBP (which applies known evidence); translation research creates evidence about implementation strategies.
Outcomes Research
Examines benefits, risks, costs, holistic effects of treatments.
Care delivery outcomes = measurable effects on recipients (not providers).
Nurse-sensitive indicators: falls, pressure injuries, CLABSI, etc.
Outcome components: indicator, measurement method, parameters (scale/range); e.g., a patient-satisfaction score on a defined scale.
Performance Improvement (PI)
Local, systematic analysis of existing processes; aims at efficiency & safety; results usually not generalizable.
Continuous effort to meet “triple aim”: better care, better health, lower cost.
Common models (Table 5.3):
PDSA (Plan → Do → Study → Act)
Root Cause Analysis (RCA) for sentinel events (identifies active vs latent errors)
Six Sigma
Balanced Scorecard
Just Culture: focuses on system/process failures rather than individual blame, encouraging error reporting.
Comparing EBP, Research, & PI (Table 5.4)
Purpose: implement best evidence vs generate new knowledge vs improve local processes.
Data sources: multiple studies/expert opinion vs defined sample vs unit/hospital records.
IRB: required for research, sometimes for EBP/PI when patient data or novel interventions involved.
Funding: internal for EBP/PI, often external grants for research.
Databases & Resources (exam ready)
AHRQ, CINAHL, MEDLINE/PubMed, EMBASE, PsycINFO, Cochrane Library.
Free sources: PubMed, Cochrane abstracts; institutional access often via OVID/EBSCO.
Ethical & Practical Implications
Informed consent elements: complete info, comprehension, voluntariness, confidentiality.
p-value threshold (p < 0.05) denotes statistical significance; guides decision to adopt change.
Must balance patient preferences with strongest evidence; cultural sensitivity (e.g., therapeutic touch example).
Nurses obligated to question status quo, avoid tradition-only care.
Numerical / Statistical Reminders
CMS HAC penalties: lowest-performing hospitals lose reimbursement.
p-value: p<0.05 → <5 % chance result due to randomness.
Pilot implementation suggested duration ≈ 3 months.
PICOT time element: 6 months in the CLABSI example.