Evidence-Based Practice: Comprehensive Notes

Evidence-Based Practice: Key Concepts

  • Evidence-based Practice (EBP) is the integration of the best available research evidence with clinical expertise and the patient’s unique circumstances, including patient values and needs, while delivering high-quality, cost-effective health care.

  • Pre-exercise history and assessment are central to understanding the patient and practicing patient-centered care.

  • Understanding differing levels of evidence and their reliability is essential for making correct health care decisions.

  • EBP also emphasizes applying findings from literature to exercise prescription while considering practical, ethical, and real-world relevance.


Objectives of the Lecture (as stated)

  • Know how to search library databases

  • Describe major observational and experimental study designs

  • Articulate strengths and weaknesses of each design

  • Read the methods section to determine the design used

  • Critically appraise robustness of studies to inform exercise prescription/practice


Using Findings from Pre-Exercise Assessment & Literature Understanding

  • Avoid injury and assess level of risk

  • Prescribe evidence-based exercise for current and potential future conditions

  • Prescribe exercise to augment current medical treatment

  • Prescribe exercise to offset side effects of medications

  • Understand barriers to adoption/adherence (client-centered care)

  • Communicate findings to the health care team implementing the prescription


Screening for Exercise in Cardiometabolic Disease: Summary

  • Take a good history

  • Ascertain patient goals

  • Confirm the reason for referral

  • Identify diseases requiring exercise

  • Identify absolute or temporary contraindications to exercise

  • Identify conditions requiring modification of general guidelines for adults

  • Ask about current/past exercise habits and injuries

  • Ask about current symptoms at rest or with exercise that may indicate risk of adverse events

  • Perform targeted physical exam and review other practitioners’ results

  • Perform or refer for pre-exercise functional and exercise capacity assessments


Putting It All Together: Optimal Management

  • Optimal management requires: pre-exercise assessments, an evidence-based exercise prescription, and patient-centered care within a multidisciplinary team


Goals

  • (See lecture slides for goals; summarized here as alignment with EBP practice and literature appraisal.)


INTRODUCTION TO EBP: How to Decide in Clinical Practice

  • What is Evidence-Based Practice (EBP)?


WHAT IS EBP?

  • EBP is the integration of best research evidence with clinical expertise and the patient’s unique circumstances, including patient values and needs.

  • Pre-exercise history and assessment underpin patient-centered care.

  • Understanding different levels of evidence and their reliability is crucial for correct health care decisions.


WEIGHING THE EVIDENCE

  • Weigh potential benefits against potential risks when considering evidence.

  • Visual cue: balance between potential benefits and risks.


STEPS TO EVALUATING THE EVIDENCE

  • Find the evidence.

  • Determine the level of evidence available.

  • Determine the overall strength of the evidence:

    • Quantity

    • Quality (Internal Validity)

    • Generalizability


FINDING THE EVIDENCE: KEY DATABASES

  • Medline (via Ovid)

  • Embase (via Ovid)

  • AMED: Allied and Complementary Medicine (via Ovid SP)

  • PsycINFO (via Ovid)

  • Pre-Medline (via Ovid)

  • EBM Review – Cochrane Central Register of Controlled Trials

  • EBM Review – Cochrane Database of Systematic Reviews

  • CINAHL

  • SPORTDiscus

  • PEDro

  • Access: https://library.sydney.edu.au/


Library Resources Interface Note (context for research workflow)

  • Examples shown include Ovid MEDLINE, EndNote imports, PubMed searches, and EndNote tips for quick scoping searches.

  • Practical demonstrations include searching for topics like pilates and related interventions, and exporting references to EndNote.


Brainstorming Your Question: Quick Guide

1) Find background information about topic
2) Identify main key concepts
3) Brainstorm synonyms, alternative spellings, and variant forms
4) Identify limits
5) Select and search databases


Identify Main Concepts (Example Topic)

  • Example topic: Impact of exercise therapy on psychological distress among Cardiac Rehab patients

  • Core concepts: exercise therapy; psychological distress; cardiac rehab patients


Brainstorming Synonyms and Variant Forms

  • Consider: literature from Google Scholar, Library Search, Scopus

  • Include synonyms and related terms: e.g., mental health, mood, depression, anxiety; physical activity; rehab; coronary disease; heart disease

  • Spelling variations (US/UK English) and plurals


Building the Clinical Question: Conceptual Mapping

  • Exercise therapy; psychological distress; Cardiac Rehab patient

  • Related terms: mental health, mood, physical activity, coronary artery disease, heart disease, depression

  • Expand with additional related concepts and outcomes


Using OR and AND in Searches

  • OR combines alternative keywords (e.g., cardiac rehab patient OR coronary artery disease)

  • AND combines different concepts (e.g., exercise therapy AND cardiac rehab patient)
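
To make the grouping concrete, here is a minimal Python sketch that assembles a search string for the cardiac-rehab example above; the synonym lists and phrasing are illustrative, not a validated search strategy.

    # OR joins synonyms within a concept; AND joins the different concepts.
    # Parentheses keep the grouping explicit when the string is pasted into a database.
    concepts = {
        "exercise therapy": ["exercise therapy", "physical activity", "exercise training"],
        "psychological distress": ["psychological distress", "depression", "anxiety", "mental health"],
        "cardiac rehab": ["cardiac rehabilitation", "coronary artery disease", "heart disease"],
    }

    query = " AND ".join(
        "(" + " OR ".join(f'"{term}"' for term in synonyms) + ")"
        for synonyms in concepts.values()
    )
    print(query)
    # ("exercise therapy" OR "physical activity" OR ...) AND ("psychological distress" OR ...) AND (...)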


Refining Your Search

  • Manage results: too many vs too few

  • Add limits: date, country; limit to title field if needed

  • Use more specific terms; remove irrelevant words

  • Limit to keywords in title/abstract; adjust strategy; consider different databases

  • Check and revise search statement; remove/adjust limits as needed


Example: Medline Search (EndNote workflow title)

  • Topic: Pilates; example results show randomized trials and related topics

  • Demonstrates importing references into EndNote and using search terms like "pilates" and related terms


Finding Information in Health & Medical Databases (Practice Slide)

  • Find resources and learn referencing skills

  • Access assignment support and librarian help

  • Use subject guides to locate best sources


Understanding Study Design: WHEN, WHY, WHAT, HOW

  • Distinguish observational vs experimental study designs

  • Observational studies observe individuals without influencing responses

  • Experimental studies involve assignment to interventions and control conditions


TYPES OF STUDY DESIGNS: OBSERVATIONAL

  • Observe outcomes without manipulating any variables, allowing for the assessment of real-world effects and relationships.

  • Descriptive observational studies

    • Correlational studies

    • Case Reports / Case Series

    • Cross-sectional surveys

  • Analytical observational studies

    • Case-control studies

    • Cohort studies

    • Retrospective and Prospective designs


CORRELATIONAL STUDY: Definition & Examples

  • Definition: Describe relationships between events and characteristics of a population (e.g., age, gender, toxin exposure, diet, physical activity, geography)

  • Examples include associations like fat consumption and breast cancer prevalence, or salt intake and hypertension prevalence

  • Beware of SPURIOUS correlations: correlations can be high by chance or due to confounding variables

  • Not evidence of causation, only association
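
As a rough illustration of what a correlational analysis computes, the sketch below calculates a Pearson correlation from made-up country-level data; the values are invented, and any resulting r describes association only.

    import numpy as np

    # Hypothetical country-level data: per-capita fat consumption vs breast-cancer prevalence.
    fat_intake = np.array([40, 60, 80, 100, 120, 140])       # g/day (made-up values)
    cancer_rate = np.array([3.0, 4.1, 5.2, 6.8, 7.5, 9.0])   # cases per 10,000 (made-up values)

    r = np.corrcoef(fat_intake, cancer_rate)[0, 1]
    print(f"Pearson r = {r:.2f}")   # a high r describes an association only, not causation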


CASE REPORTS / CASE SERIES

  • Description of a series of cases with an outcome of interest, no control group

  • Useful for generating hypotheses but limited for establishing causality

  • Classic examples include Kaposi’s sarcoma in the early AIDS era and retinal hemorrhage associated with weight-lifting


CROSS-SECTIONAL STUDY & CROSS-SECTIONAL SURVEY

  • Cross-sectional: population observed at a single point in time or a time interval; exposure and outcome determined simultaneously

  • Cross-sectional surveys exemplified by UK Biobank study on associations between alcohol consumption and brain volumes


COHORT STUDIES

  • Cohort studies observe groups before they develop a disease or outcome; can assess multiple outcomes from a single exposure

  • Prolonged follow-up; can estimate relative risk (RR)

  • Prospective cohorts are costly and time-consuming but powerful for establishing temporal relationships
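
A minimal sketch of how a relative risk and an approximate 95% confidence interval are computed from a cohort-style 2×2 table; the counts are hypothetical.

    import math

    # Hypothetical cohort: events among exposed vs unexposed participants.
    a, n_exposed = 30, 1000      # events / total in the exposed group
    c, n_unexposed = 15, 1000    # events / total in the unexposed group

    rr = (a / n_exposed) / (c / n_unexposed)

    # Approximate 95% CI on the log scale: SE(ln RR) = sqrt(1/a - 1/n1 + 1/c - 1/n2)
    se = math.sqrt(1 / a - 1 / n_exposed + 1 / c - 1 / n_unexposed)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")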


PROSPECTIVE COHORT EXAMPLE (Nuts and Healthy Aging in Women)

  • Sample: 33,931 participants at midlife

  • Outcome: healthy aging at older ages; 16% became “healthy agers”

  • Key finding: higher nut consumption at midlife associated with higher odds of healthy aging, strongest effect for walnuts after full confounder control

  • Reported effect: OR for ≥2 servings/week vs none = 1.20 (95% CI 1.00 to 1.44)

  • Conclusion: nut consumption may be a simple intervention to promote healthy aging
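
For context on the reported odds ratio, the sketch below shows the standard 2×2 calculation of an OR and its approximate 95% CI; the counts are invented (chosen only to land near an OR of 1.20) and are not the study’s data.

    import math

    # Hypothetical 2x2 table: healthy agers vs not, by nut consumption (>=2 servings/week vs none).
    a, b = 220, 1100   # higher nut intake: healthy agers / not healthy agers (made-up counts)
    c, d = 190, 1140   # no nut intake:     healthy agers / not healthy agers (made-up counts)

    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    print(f"OR = {or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")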


CORRELATION VS CAUSATION & CONFOUNDING

  • Confounding variable: a factor that may influence both the independent and dependent variables, potentially skewing the results and leading to misinterpretation of the causal relationship.

  • Correlation does not imply causation; a confounding variable may influence observed associations

  • Common in observational studies; media may misinterpret observational links as causal
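
A minimal simulation of confounding, assuming arbitrary variable names and coefficients: a shared factor drives both the exposure and the outcome, producing a clear correlation even though neither directly affects the other.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    age = rng.normal(size=n)                        # confounder
    exposure = 0.8 * age + rng.normal(size=n)       # exposure driven by the confounder
    outcome = 0.8 * age + rng.normal(size=n)        # outcome driven by the confounder only

    r = np.corrcoef(exposure, outcome)[0, 1]
    print(f"observed r = {r:.2f}")   # clearly non-zero despite no direct exposure -> outcome effect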


EXPERIMENTAL STUDIES & RANDOMIZED CONTROLLED TRIALS (RCTs)

  • RCTs are the strongest design for attributing cause and effect and influencing policy/practice

  • Strengths: minimize chance, bias, and confounding through randomization and control groups

  • Weaknesses: can be costly; external validity may be questioned if participants are not representative

  • Important features: randomization, blinding, control groups, intention-to-treat analyses, predefined outcomes


TYPES OF EXPERIMENTAL STUDIES

  • True experimental: Randomized Controlled Trials (RCTs)

  • Other designs: quasi-experimental studies, uncontrolled trials, cross-over studies, non-randomized controlled trials, pseudo-randomized trials, cluster randomization

  • Factorial design: test multiple interventions and interactions simultaneously


ESSENTIAL CONCEPTS: RANDOMIZED CLINICAL TRIALS (RCTs)

  • Randomization: equal chance assignment to each group to balance known and unknown confounders

  • Blinding: single-blind, double-blind, or placebo-controlled designs to reduce bias

  • Control groups: account for behaviour changes due to trial participation (Hawthorne effect)

  • Outcome assessment: standardized, objective measures; clearly defined primary and secondary outcomes
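
A minimal sketch of one common way to generate an allocation sequence (permuted-block randomization with an arbitrary block size and group labels); it is illustrative, not a description of any particular trial’s procedure.

    import random

    def block_randomization(n_participants, block_size=4, seed=42):
        """Generate a 1:1 allocation sequence using permuted blocks."""
        rng = random.Random(seed)
        sequence = []
        while len(sequence) < n_participants:
            block = ["intervention"] * (block_size // 2) + ["control"] * (block_size // 2)
            rng.shuffle(block)      # random order within each block keeps group sizes balanced
            sequence.extend(block)
        return sequence[:n_participants]

    print(block_randomization(10))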


RCTs: STRENGTHS vs WEAKNESSES

  • Strengths: high internal validity; strong evidence for causality when well-conducted

  • Weaknesses: cost; external validity concerns if not representative

  • Internal validity maximized by design/analysis strategies: pre-specified outcomes, adequate sample size, proper randomization, blinding, standardized data collection, complete follow-up, intention-to-treat analyses

  • Chance: the likelihood that the observed results occurred by chance (type I error), or that a real effect was missed (type II error); minimized by:

    • choosing a small p value/alpha level

    • selecting a large enough sample size (a sample-size sketch follows this section)

    • using randomization and blinding, along with strategies to maintain high compliance, to reduce bias and strengthen the validity of the results

    • selecting a population in which events of interest occur frequently enough to detect differences between comparison groups

  • Statistical significance: this refers to the likelihood that the relationship observed in the study is not due to chance, typically measured with a p-value of less than 0.05.

    • Does not imply clinical meaningfulness

  • Clinical meaningfulness: this concept assesses whether a statistically significant result has practical implications for patient care or treatment outcomes, indicating that the difference observed is relevant for making clinical decisions.

  • Understanding the distinction between statistical significance and clinical meaningfulness is crucial for practitioners, as it guides the application of research findings to real-world scenarios.
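
A minimal sample-size sketch using the standard normal-approximation formula for comparing two means; the effect size, alpha, and power values are illustrative assumptions.

    import math
    from statistics import NormalDist

    def n_per_group(effect_size, alpha=0.05, power=0.80):
        """Approximate n per group for detecting a standardized mean difference (two-sided test)."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 when alpha = 0.05
        z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
        return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

    print(n_per_group(0.5))   # about 63 per group for a medium standardized effect (d = 0.5)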


External Validity (Generalizability)

  • External validity (generalizability) is the extent to which findings apply to broader populations beyond the study sample

  • It is meaningful only if the study has strong internal validity

  • Key questions:

    • Can results be generalized to different ages, sexes, disease severities, comorbidities?

    • Are results applicable to other drugs or doses in the same class?

    • Can results be generalized across care settings (primary, secondary, tertiary)?

    • What about other related outcomes not assessed, and the impact of follow-up duration or harms?

    • Should this intervention be used for your clients?


Threats to External Validity

  • Cohort studies with unusual samples or very restrictive criteria

  • Interventions not common or feasible in other settings

  • Outcome measures not defined universally


BIAS: TYPES & MINIMIZATION

  • Types of bias: selection, observation (measurement), recall, misclassification

    • Selection: the process by which certain individuals or groups are systematically included or excluded from a study, potentially skewing the results and affecting the generalizability of the findings.

    • Observation (measurement): Bias that occurs when the data collected does not accurately reflect the true values due to flaws in measurement techniques or tools used, which can lead to misleading conclusions.

    • Recall: This bias arises when participants have difficulty remembering past events or experiences, leading to inaccuracies in the data reported by them, which can negatively impact the reliability of the study findings.

    • Misclassification: This occurs when individuals are incorrectly categorized into exposure or outcome groups, resulting in possible dilution of the true associations being investigated.

  • Hawthorne effect: behaviour changes due to awareness of being observed

  • Minimizing bias:

    • Blinding participants to hypotheses during recruitment

    • Baseline assessments before randomization

    • Blinding interventionists and outcome assessors

    • Blinding analysts until analyses are completed


CONFOUNDING: CONCEPTS & ROLE OF RANDOMIZATION

  • Confounding: when factors other than the exposure/treatment influence the outcome

  • Randomization addresses both known and unknown confounders

  • If randomization is unsuccessful, differences between groups may persist and require adjustment in analyses


EVIDENCE TYPES AND STUDY DESIGNS: SUMMARY LIST

  • Observational studies

    • Descriptive observational studies

    • Correlational studies

    • Case reports / case series

    • Cross-sectional studies

  • Analytical observational studies

    • Case-control studies

    • Cohort studies (retrospective & prospective)

  • Experimental studies

    • Randomized Controlled Trials (true experiments)

    • Quasi-experimental studies (uncontrolled, cross-over, non-randomized, etc.)

    • Cluster randomized trials

    • Factorial designs


REPORTING GUIDELINES AND QUALITY ASSESSMENT

  • CONSORT: guidelines for reporting randomized trials; updated versions include CONSORT 2010 and CONSORT 2025 (explanation/elaboration)

  • EQUATOR Network: hub for reporting guidelines across study types (CONSORT, STROBE, PRISMA, CERT, PEDro, etc.)

  • Other reporting guidelines: STROBE (observational), PRISMA (systematic reviews), CARE (case reports), SPIRIT (study protocols), TRIPOD (diagnostic/prognostic prediction models), CERT (Exercise Reporting Template)


CONSORT: What to Look For in Trials

  • Identification as a randomized trial

  • Structured abstract with trial design, methods, results, conclusions

  • Trial registration details and protocol/analysis plans

  • Data sharing, funding, and conflicts of interest

  • Details on trial design (parallel vs crossover), allocation ratio, and framework (superiority, etc.)

  • Eligibility criteria, interventions and comparators with sufficient detail to replicate

  • Prespecified primary and secondary outcomes with measurement details and time points

  • Harms: definition and assessment of adverse events

  • Sample size: calculation assumptions and interim analyses if any

  • Randomization: sequence generation, allocation concealment, implementation

  • Blinding: who was blinded and how blinding was achieved

  • Statistical methods: analyses for primary/secondary outcomes, missing data handling, and prespecified vs post hoc analyses

  • Flow diagram: eligibility, allocation, follow-up, and analysis


PEDro Scale: What It Assesses in Trials

  • Eligibility criteria specified

  • Random allocation to groups

  • Allocation concealment

  • Baseline comparability of groups

  • Blinding of subjects

  • Blinding of therapists delivering therapy

  • Blinding of assessors

  • Outcomes obtained from >85% of initial participants

  • Intention-to-treat analysis or equivalent

  • Between-group statistical comparisons for at least one key outcome

  • Reporting of both point estimates and variability for at least one key outcome
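
A small sketch of how PEDro-style ratings translate into a total score; the ratings below are hypothetical, and the eligibility-criteria item is recorded but not counted in the 0-10 total.

    # Hypothetical yes/no ratings for a single trial against the PEDro items listed above.
    ratings = {
        "eligibility_criteria_specified": True,   # rated, but not counted in the 0-10 score
        "random_allocation": True,
        "concealed_allocation": False,
        "baseline_comparability": True,
        "blinded_subjects": False,
        "blinded_therapists": False,
        "blinded_assessors": True,
        "followup_over_85_percent": True,
        "intention_to_treat": True,
        "between_group_comparison": True,
        "point_estimates_and_variability": True,
    }

    score = sum(v for k, v in ratings.items() if k != "eligibility_criteria_specified")
    print(f"PEDro score: {score}/10")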


CERT: Exercise Reporting Template

  • 16 items to ensure transparent reporting of exercise interventions

  • WHAT (exercise content and equipment; who delivers it; supervision; adherence reporting; progression rules; replication details)

  • WHERE (setting and location of exercises)

  • WHEN and HOW MUCH (dosage details: sets, reps, duration, intensity)

  • TAILORING (whether generic or individualized)

  • HOW WELL (delivery fidelity and adherence reporting)

  • NONEXERCISE COMPONENTS (home programs, etc.)

  • Adverse events documentation and management


NHMRC Levels of Evidence and Grades for Recommendations (2009)

  • Grade A: Body of evidence can be trusted to guide practice

  • Grade B: Body of evidence can be trusted to guide practice in most situations

  • Grade C: Evidence provides some support for recommendations; apply with caution

  • Grade D: Evidence is weak; recommendations should be applied with caution


Practical Application: Assessing Evidence Quality

  • Look for large, robust randomized controlled trials or recent high-quality systematic reviews

  • Check adherence to CONSORT reporting requirements

  • Use quality tools to quantify flaws:

    • PEDro scale or Cochrane Risk of Bias Tool (RoB 2)

  • Assess the exercise intervention using CERT and apply exercise physiology principles

  • Determine clinical meaningfulness: is the observed effect larger than the Minimum Clinically Important Difference (MCID) for the outcome?

  • Evaluate generalizability to your clients and feasibility in your setting


Minimal Clinically Important Difference (MCID): Concept

  • MCID represents the smallest change in an outcome that patients perceive as beneficial

  • Example: for a physical performance measure like the Short Physical Performance Battery (SPPB), the clinically meaningful change is about 0.5 to 1.0 points

  • When interpreting study results, compare the observed difference to the MCID to judge clinical relevance


Example: Interpreting an RCT Result (Illustrative)

  • Hypothetical trial: exercise intervention vs control in very elderly during hospitalization

  • Primary endpoint: change in SPPB; observed between-group difference = -5.0 (95% CI: -6.8 to -3.2)

  • Significant improvement in functional capacity with intervention; consider MCID and clinical meaningfulness
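
A minimal sketch, using the illustrative numbers above and the SPPB MCID range quoted earlier, of one way to check whether a trial result clears the MCID; the threshold choice and the decision rule (whole CI beyond the MCID) are assumptions for illustration.

    # Illustrative values from the hypothetical trial above (keeping its sign convention).
    difference, ci_low, ci_high = -5.0, -6.8, -3.2   # between-group difference in SPPB points
    mcid = 1.0                                       # upper end of the 0.5-1.0 point range quoted earlier

    # Treat the result as clinically meaningful if the whole CI exceeds the MCID in magnitude.
    meaningful = min(abs(ci_low), abs(ci_high)) >= mcid
    print(f"Exceeds MCID: {meaningful} (|difference| = {abs(difference)}, MCID = {mcid})")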


Summary: How to Approach Evidence for Practice

  • EBP integrates client goals, clinical context, and strength of evidence

  • Seek large, robust RCTs or recent high-quality systematic reviews

  • Assess conformity to CONSORT reporting for design and reporting quality

  • Apply tools (PEDro or RoB 2) to quantify design/reporting flaws

  • Use CERT to evaluate the quality of the exercise intervention

  • Determine whether results are clinically meaningful (MCID) and generalizable to your clients and setting


Quick Reference: External Resources

  • EQUATOR Network for reporting guidelines: https://www.equator-network.org/

  • CONSORT guidelines (2010; 2025 update): detailed reporting standards for RCTs

  • Cochrane RoB 2: risk-of-bias tool for randomized trials (https://www.riskofbias.info/welcome/rob-2-0-tool/current-version-of-rob-2)

  • PEDro: Physiotherapy Evidence Database for trial quality assessment (https://www.pedro.org.au/)

  • CERT: Exercise Reporting Template (as part of exercise intervention reporting)


References to Key Concepts Shown in Slides

  • Definitions and aims of EBP; patient-centered care; levels of evidence

  • Steps to evaluate evidence: find, rank, assess strength (quantity, quality, generalizability)

  • Search strategies: brainstorming, synonyms, Boolean operators, refining searches

  • Basics of study designs: observational vs experimental; descriptive vs analytical; cross-sectional; cohort; case reports/series

  • Bias and confounding: types and mitigation strategies (blinding, randomization, baseline assessments, intention-to-treat)

  • RCT design principles and internal/external validity

  • Reporting standards: CONSORT, STROBE, PRISMA; trial flow diagrams; trial registration

  • Tools to judge quality: PEDro, RoB 2, CERT

  • NHMRC levels of evidence and grades for recommendations

  • Clinical meaningfulness: MCID concepts and interpretation


Quick Mnemonic for Review (optional)

  • EBP: Evidence, Bias, Practice

  • CONSORT: Clear reporting for trials

  • PEDro: Trial quality checklist

  • CERT: Exercise intervention reporting

  • MCID: Clinically meaningful changes

  • NHMRC: Levels of evidence and grades


Final Checkpoints Before Application

  • Identify study design and assess internal validity first

  • Check for blinding and allocation concealment where possible

  • Evaluate outcome measures and follow-up completeness

  • Confirm that results are clinically meaningful and generalizable to your population

  • Ensure the evidence aligns with patient goals and practical constraints in your setting