EBP Lec 2: Clinical questioning and critical appraisal

Learning Outcomes for this lecture

  • Construct PICO questions to ask clinical questions

  • Construct a search strategy using Boolean operators

  • Use university library databases to conduct a literature search

  • Recall basic experimental design principles

  • Identify sources of variability in experimental design

  • Identify sources of bias and acknowledge the impact it may have on experimental outcomes

  • Identify confidence intervals and explain their significance to research findings

  • Recall different appraisal checklists (qualitative and quantitative) and explain their role in appraising research

Clinical Scenario Used Throughout the Lecture

  • Setting: Audiologist in a hospital cochlear-implant centre.

  • Patient profile

    • 62-year-old adult.

    • Post-lingual severe-to-profound hearing loss (≥ 2 years).

    • No medical contraindications to anaesthesia.

    • Wears a hearing aid in the right ear; aided speech-recognition scores are 40 % (right ear) and 10 % (left ear).

  • Patient concern: "Can I implant only the left ear and still notice a positive change? I rely heavily on the right ear."

  • Purpose: Use evidence-based practice (EBP) to answer the patient’s question.

Constructing the PICO Question

  • P = Adults with significant (severe–profound) hearing loss.

  • I = Bilateral cochlear implantation

  • C = Unilateral cochlear implantation

  • O = Improvement in quality of life (QoL) or other outcomes (e.g. speech recognition in noise, listening effort).

  • Resulting research question:

    • “In adults with significant hearing loss, is bilateral cochlear implantation superior to unilateral implantation in improving patient quality of life?”

Identify Main Concepts
  1. Adults / hearing-impaired population.

  2. Bilateral vs. unilateral cochlear implantation.

  3. Quality of life (QoL) / speech perception, etc.

Building a Literature-Search Strategy

Boolean Operators (MUST be uppercase)
  • AND = intersection of different concepts (narrows search).

  • OR = union of synonyms within the same concept (broadens search).

  • NOT = excludes unwanted concepts.

Auxiliary Search Symbols
  • Quotation marks " " : exact phrase ("sound localization").

  • Wildcards (varies by database):

    • ? or # to replace a single character (locali?ation for localisation/localization).

  • Truncation * : retrieves multiple word endings (implant* → implant, implants, implantation).

Example Advanced Search String
(adult* OR "older adult*" OR "hearing impair*" OR deaf OR "hard of hearing")
  AND
("bilateral" OR "two ear*" OR "both side*")
  AND
("unilateral" OR "one ear" OR "one side")
  AND
("cochlear implant*")
  AND
("quality of life" OR QOL)
  • Place synonyms for each concept in parentheses.

  • Connect synonymous terms with OR.

  • Connect different PICO elements with AND.

  • Exclude irrelevant tech with NOT (e.g. NOT "bone anchored").
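These assembly rules can be sketched in code. Below is a minimal Python helper (hypothetical, not tied to any particular database's syntax) that ORs synonyms within each concept, ANDs across concepts, and appends NOT exclusions:

```python
# Build a Boolean search string from PICO concept groups.
# Illustrative helper only; real databases differ in wildcard and
# phrase-search syntax, so the output may need per-database tweaks.

def build_query(concept_groups, exclude=None):
    """OR synonyms within a concept, AND across concepts, NOT exclusions."""
    def quote(term):
        # Quote multi-word terms so they are searched as exact phrases.
        return f'"{term}"' if " " in term else term

    parts = ["(" + " OR ".join(quote(t) for t in group) + ")"
             for group in concept_groups]
    query = " AND ".join(parts)
    if exclude:
        query += "".join(f" NOT {quote(t)}" for t in exclude)
    return query

pico = [
    ["adult*", "older adult*", "hearing impair*", "deaf", "hard of hearing"],
    ["bilateral", "two ear*", "both side*"],
    ["unilateral", "one ear", "one side"],
    ["cochlear implant*"],
    ["quality of life", "QOL"],
]
print(build_query(pico, exclude=["bone anchored"]))
```

Keeping the synonym lists as data makes it easy to iterate on the search: add or drop a synonym and regenerate the whole string.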

Databases Demonstrated
  • CINAHL (Allied Health‐focused; accessed via UoM library > Databases).

  • PubMed (preferred by lecturer for broader medical & psychoacoustic coverage).

  • Both interfaces support Boolean logic but differ in wildcard symbols and filters.

Practical Tips
  • Start simple, then layer specificity (add NOT terms, phrase searching, filters).

  • Use database-specific tutorials (UoM Library video recommended).

  • Expect to iterate – first returns may be thousands; refine to a manageable subset.

Primer on Experimental Design

Core Terms
  • Population: group of interest (e.g. CI candidates, school-age children).

  • Independent Variable (IV): manipulated factor.

  • Dependent Variable (DV): measured outcome.

Simple Example
  • Study: "Do clients report higher satisfaction with Phonak vs Oticon premium hearing aids?"

    • Population = hearing-aid clients.

    • IV = brand (Phonak, Oticon).

    • DV = satisfaction score.

Balanced vs. Incomplete Designs

  • Balanced: equal observations per cell (e.g. 15 participants per brand).

  • Complete: every combination of factor levels is tested.

  • Incomplete: some combinations absent.

  • Unbalanced: unequal sample sizes.

Design Taxonomy Illustrated
  1. One-factor balanced design (2 brands × 15 subjects each).

  2. Two-factor balanced design (Brand × Consultation length 15 / 30 / 45 min).

  3. Complete, unbalanced design (cells populated, but n differs).

  4. Incomplete design – certain cells empty (e.g. an animal study with left- vs right-ear implantation carried out only in some subjects).

Block Design
  • Subjects blocked (grouped) on a nuisance variable, then randomised.

  • Example: Handedness might influence task accuracy; assign equal left-/right-handed persons to each intervention arm.
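The handedness example can be sketched as a short block-randomisation routine. Everything here (the participant records, the `block_randomise` helper, the fixed seed) is invented for illustration:

```python
import random

# Block randomisation sketch: group participants by the nuisance
# variable (handedness), then assign intervention arms within each
# block so both arms receive equal numbers of each block.

def block_randomise(participants, block_key, arms=("A", "B"), seed=1):
    rng = random.Random(seed)
    blocks = {}
    for p in participants:
        blocks.setdefault(p[block_key], []).append(p)
    assignment = {}
    for members in blocks.values():
        rng.shuffle(members)  # random order within the block
        for i, p in enumerate(members):
            assignment[p["id"]] = arms[i % len(arms)]  # alternate arms
    return assignment

# Invented sample: 4 left-handed and 4 right-handed participants.
people = [{"id": i, "hand": "L" if i < 4 else "R"} for i in range(8)]
groups = block_randomise(people, "hand")
```

Because arms alternate within each shuffled block, every arm ends up with the same number of left- and right-handed participants, removing handedness as a confound.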

Nested Design
  • Levels of one factor exist only within a single level of another.

  • Example: Receiver size (small, medium) nested within brand (Phonak, Oticon).

Single-Subject (A–B–A–B) Designs
  • One participant measured across sessions & conditions.

  • Diagnostic plots demonstrate:

    • Ceiling effect (always ~100 %).

    • Fatigue effect (rise → plateau → drop).

    • Order/learning effects (performance improves across sessions).

Sources of Variability in Audiological Measurement

  1. Measurement error

    • Examiner technique (pace, instructions).

    • Equipment calibration (5 dB step size in PTA).

  2. Sampling error

    • Non-representative pool (age, geography).

  3. Environmental factors

    • Booth noise, external vibrations, time of day.

  4. Inherent random variability

    • Listener attention, cognition, transient hearing status.

  5. Unknown/uncontrolled factors

    • Medication, fatigue, etc.

Illustrative discussion: Two clinicians run PTAs in adjacent rooms—differences in booth noise, tester rhythm, or calibration all shift thresholds.

Experimental Pitfalls & Effects to Avoid

  • Ceiling effect – task too easy → all near 100 %.

  • Floor effect – task too hard → all near 0 %.

  • Learning/practice effect – improvement due solely to repetition.

  • Order effect – fixed stimulus order conflates IV and list difficulty.

  • Confounding variable – uncontrolled factor correlated with IV (e.g. list difficulty with program 1).

  • Latin-square counterbalancing mitigates order effects in incomplete designs.
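One simple construction is the cyclic Latin square sketched below (illustrative; fully balancing first-order carryover effects needs more elaborate designs such as Williams squares):

```python
# Cyclic Latin square for counterbalancing presentation order: each
# condition appears exactly once in every row (participant) and every
# column (session position), so no list is tied to a fixed position.

def latin_square(conditions):
    n = len(conditions)
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

orders = latin_square(["List1", "List2", "List3", "List4"])
# Participant i receives the lists in the order given by orders[i % 4].
```

Across four participants, every list is heard first exactly once, so list difficulty is no longer confounded with session position.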

Bias in Research & How to Detect It

Common Sources
  • Funding or conflicts of interest (hearing-aid manufacturers).

  • Participant selection (convenience sampling, language barriers).

  • Over-testing or excluding outliers based on preconceived expectations.

  • Reporting bias (publishing only significant findings).

Cochrane Collaboration Risk-of-Bias Tool (Quantitative)

Domains to evaluate:

  1. Selection bias – randomisation & allocation concealment.

  2. Performance bias – blinding of participants/researchers.

  3. Detection bias – blinding of outcome assessors, placebo control.

  4. Attrition bias – incomplete outcome data, dropout patterns.

  5. Reporting bias – selective outcome reporting.

  6. Other bias – funding sources, design quirks.
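The six domains can be captured in a small data structure when appraising a paper. A minimal sketch with invented judgements; the `summarise` helper is an illustration, not part of the official Cochrane tool:

```python
# Record a Cochrane-style risk-of-bias judgement per domain
# ("low", "high", "unclear") and summarise a study's profile.
# Domain names follow the list above; judgements are invented.

DOMAINS = ("selection", "performance", "detection",
           "attrition", "reporting", "other")

def summarise(judgements):
    """Collect domains judged high risk; insist every domain is rated."""
    missing = [d for d in DOMAINS if d not in judgements]
    if missing:
        raise ValueError(f"unjudged domains: {missing}")
    high = [d for d in DOMAINS if judgements[d] == "high"]
    return {"high_risk": high, "any_high": bool(high)}

study = {"selection": "low", "performance": "high", "detection": "unclear",
         "attrition": "low", "reporting": "low", "other": "low"}
profile = summarise(study)
```

Forcing a judgement for every domain mirrors how the checklist is meant to be used: "unclear" is an explicit rating, not a skipped row.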

Quantitative vs Qualitative Research
  • Quantitative: numerical data, patterns, statistical inference; strives to eliminate bias.

  • Qualitative: experiences, attitudes, "how/why" questions; acknowledges & addresses researcher bias (researcher positionality statements common).

  • Mixed-methods studies combine both.

Practical Take-Aways for Students

  • Start every EBP task with a well-formed PICO question.

  • Decompose the question into discrete search concepts.

  • Master Boolean logic; iterate search strings; exploit wildcards & phrase searching.

  • Always scrutinise Methods:

    • Are designs balanced/complete?

    • Any confounding factors or unblinded procedures?

  • Use formal appraisal checklists (Cochrane, CASP, JBI) to score risk of bias.

  • Recognise that funding, language, and accessibility systematically shape published evidence—stay critical.

  • Quantitative and qualitative paradigms answer different facets of a problem; appreciate both.


3. Constructing a PICO Question

3.1 Clinical Scenario

You are an audiologist in a cochlear implant centre guiding patients through candidacy testing and recommendations to improve quality of life.

Case Details:

  • Patient: 62-year-old adult

  • Hearing loss: Severe-to-profound post-lingual hearing loss for ≥2 years

  • Health: Fit for anesthesia

  • Hearing aids: Not providing adequate benefit; left aid unused

  • Speech recognition with aids: Left ear 10%, right ear 40%

Patient concern:

“I’m nervous about implanting my right ear. Can I get an implant in just one ear and still notice a positive change?”


3.2 Formulating the PICO Components

| Component | Example from Case |
| --- | --- |
| P (Population) | Adults with significant hearing loss |
| I (Intervention) | Bilateral cochlear implantation |
| C (Comparison) | Unilateral cochlear implantation |
| O (Outcome) | Improvement in quality of life (could also be speech perception in noise, etc.) |

Resulting PICO Question:

In adults with significant hearing loss, is bilateral cochlear implantation superior to unilateral implantation in improving quality of life?


3.3 Identifying Main Concepts

Breaking down into core search concepts:

  • Significant hearing loss

  • Bilateral / unilateral

  • Cochlear implantation

  • Quality of life


4. Building a Search Strategy

4.1 Boolean Operators

Used to combine or exclude search terms:

| Operator | Function | Example |
| --- | --- | --- |
| AND | Combines different concepts | hearing aids AND sound localization |
| OR | Combines synonyms or related terms | hearing loss OR hearing impairment |
| NOT | Excludes unwanted topics | hearing aids NOT cochlear implants |

Example from lecturer’s own search:

“sound localization AND hearing aids NOT cochlear implants” — narrowed down irrelevant results.


4.2 Punctuation & Symbols in Searches

| Symbol | Function | Example |
| --- | --- | --- |
| " " (quotation marks) | Search exact phrase | "sound localization" |
| # or ? (wildcard) | Accounts for spelling variants | locali#ation → localization / localisation |
| * (asterisk / truncation) | Includes all word endings | hear* → hear, hearing, hearing-impaired |

(Wildcards differ between databases: e.g., #, ?, or *)


4.3 Constructing a Search Statement

Example (Population → Intervention → Comparison → Outcome):

(adults AND (significant hearing loss OR profound hearing loss OR hearing impairment OR deaf OR hard of hearing))
AND (bilateral OR "two ears" OR "both sides")
AND (unilateral OR "one ear" OR "one side")
AND ("cochlear implant*" OR "cochlear implantation")
AND ("quality of life" OR QOL)

Tips:

  • Use quotation marks for exact phrases.

  • Use OR for synonyms.

  • Use AND to combine main concepts.


4.4 Searching in Databases (Demonstration)

Databases introduced:

  • CINAHL: Allied health–focused.

  • PubMed: Covers allied health, medicine, psychoacoustics.

Steps:

  1. Open the University Library portal.

  2. Select Databases from the left column.

  3. Choose CINAHL or PubMed.

  4. Input Boolean search.

  5. Refine filters (peer-reviewed, publication date, population).

Observation from demo:

  • Poorly constructed questions lead to broad or irrelevant results.

  • Adding specific Boolean operators refines the search.

  • A precise PICO question ensures relevant and manageable results.


5. Experimental Design Principles

Once relevant papers are found, understanding design helps you interpret methods and outcomes.

5.1 Core Elements

  • Population: Target group (e.g., adults, children)

  • Independent Variable (IV): The factor manipulated (e.g., hearing aid brand)

  • Dependent Variable (DV): The outcome measured (e.g., satisfaction, performance)

Example:

“Do clients report higher satisfaction with Phonak or Oticon premium hearing aids?”

  • IV = Hearing aid brand

  • DV = Satisfaction level

  • Population = Hearing aid users


6. Types of Experimental Designs

| Design Type | Description | Example |
| --- | --- | --- |
| Balanced design | Equal sample size per condition | 15 test Phonak, 15 test Oticon |
| Two-factor balanced design | Two factors with equal group sizes | Handedness (L/R) × hair colour (red/black/blonde) |
| Complete design | Data in every condition combination | Brand (2 levels) × consultation length (3 levels: 15/30/45 min) |
| Incomplete design | Some combinations missing | Only certain ears implanted in each animal |
| Block design | Groups balanced by a nuisance variable (e.g., handedness) | Group A (L-handers), Group B (R-handers); randomize mirror condition |
| Nested design | One factor's levels exist only within another factor's levels | Receiver size (small/medium) nested within brand (Phonak/Oticon) |


7. Common Pitfalls in Design & Bias Effects

7.1 Poor Design → Systematic Errors

| Effect | Description | Example |
| --- | --- | --- |
| Ceiling effect | Task too easy → all perform near 100% | Sound localization task with large speaker angles (22.5°) |
| Floor effect | Task too hard → all perform near 0% | Speech test at too high a noise level |
| Learning effect | Improvement due to repetition, not intervention | Speech test repeated many times in one session |
| Order effect | Biased presentation sequence | Always using the "easy list" first for all participants |
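A rough screening check for the first two effects can be automated when inspecting a dataset. The 5 % band and 80 % cut-off below are arbitrary illustrative thresholds, not standard values:

```python
# Flag a score column where most values pile up at the top or bottom
# of the scale, suggesting a ceiling or floor effect. The 5% band and
# 80% fraction are illustrative choices, not established criteria.

def ceiling_floor(scores, lo=0.0, hi=100.0, frac=0.8):
    n = len(scores)
    band = 0.05 * (hi - lo)
    near_top = sum(s >= hi - band for s in scores) / n
    near_bottom = sum(s <= lo + band for s in scores) / n
    if near_top >= frac:
        return "ceiling"
    if near_bottom >= frac:
        return "floor"
    return "ok"
```

A "ceiling" or "floor" flag suggests the task's difficulty, not the intervention, is driving the scores, so group differences may be masked.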


8. Sources of Variability in Data Collection

8.1 Causes of Variability

  • Measurement Error:

    • Tester rhythm, tone length, calibration differences.

  • Sampling Error:

    • Participant-related (e.g., testing children vs adults).

  • Environmental Conditions:

    • Background noise differences, booth isolation.

  • Random Variability:

    • Subject’s attention, fatigue, or natural variation.

8.2 Test–Retest Variability

  • Small differences expected across repeated sessions (e.g., PTA results vary ±5 dB).


9. Recognising Bias in Research

9.1 Common Sources

| Source | Description |
| --- | --- |
| Funding bias | Sponsorship from hearing aid companies may influence findings. |
| Sampling bias | Urban vs rural populations may differ. |
| Language bias | Non-English studies often excluded or inaccessible. |
| Accessibility bias | Participants unavailable due to schedule/location limitations. |
| Researcher bias | Over-testing or favouring certain participants. |

9.2 Example

A German psychoacoustics study was inaccessible due to language — illustrates over-representation of English-language research.


10. Tools for Assessing Bias

10.1 Cochrane Collaboration Tool (Quantitative Research)

| Bias Type | Description | Example |
| --- | --- | --- |
| Selection bias | Were participants randomly assigned? | Mild HL only → limits generalizability |
| Performance bias | Were researchers/participants blinded? | Encouraging certain participants |
| Detection bias | Were outcome assessors blinded? | Audiologist aware of hearing aid brand |
| Attrition bias | Were dropouts handled properly? | Severe HL participants withdrew |
| Reporting bias | Selective reporting of positive results | Only significant findings published |
| Other biases | Funding or design flaws | Hearing aid manufacturer sponsorship |


11. Quantitative vs Qualitative Research

| Feature | Quantitative | Qualitative |
| --- | --- | --- |
| Data type | Numerical, measurable | Descriptive, narrative |
| Goal | Identify patterns, test hypotheses | Understand experiences, beliefs |
| Focus | "How much?", "How many?" | "How?" and "Why?" |
| Analysis | Statistical tests | Thematic / narrative analysis |
| Bias treatment | Eliminated or minimized | Acknowledged and contextualized |
| Example | Speech perception scores | Patient interview on coping with HL |

Mixed Methods combine both for comprehensive insight.


11.1 Acknowledging Bias in Qualitative Research

  • Researchers explicitly state personal context to enhance transparency.

    • e.g., “I am an occupational therapist from Colombia; my professional and cultural background shapes my interpretation.”

  • Rooted in anthropology and ethnography — observing communities, beliefs, and lived experiences.