Quality management
The coordinated set of activities that ensures laboratory processes consistently produce accurate, reliable, and timely results that are appropriate for patient care and public health decision-making
ISO 15189, DOH-LTO, CAP, NEQAS
Regulatory compliance and accreditation
Accuracy
Denotes closeness to the true value (low bias)
Precision
Denotes the closeness of repeated results (low imprecision)
Systematic error
High precision with poor accuracy
Consistent, predictable deviations from the true value that occur in the same direction each time a measurement is made
Calibration errors, reagent instability, or instrument malfunction
Corrected through QC and calibration verification
Random error
Poor precision (reproducibility)
Unpredictable fluctuations in measurements caused by chance variations in technique, environment, or instrument performance
Cannot be completely eliminated but can be minimized through proper technique and maintenance
Standard deviation and F-test
Parameters and test for precision
Mean and t-test
Parameters and test for accuracy
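A minimal sketch, using invented reference- and candidate-method results, of how these statistics separate the two error types: the F statistic compares variances (precision), while the t statistic compares means (accuracy/bias). All values and method names are hypothetical.

```python
# Hypothetical paired method-comparison data; standard library only.
import statistics

reference = [4.0, 4.1, 3.9, 4.0, 4.2, 3.8, 4.1, 4.0]   # reference method (mmol/L)
candidate = [4.3, 4.4, 4.2, 4.5, 4.3, 4.4, 4.2, 4.3]   # candidate method (mmol/L)

mean_ref, mean_new = statistics.mean(reference), statistics.mean(candidate)
sd_ref, sd_new = statistics.stdev(reference), statistics.stdev(candidate)

# F statistic (ratio of variances) screens for a difference in precision;
# t statistic (difference of means) screens for a difference in accuracy (bias).
f_stat = max(sd_ref, sd_new) ** 2 / min(sd_ref, sd_new) ** 2
pooled_sd = (((len(reference) - 1) * sd_ref**2 + (len(candidate) - 1) * sd_new**2)
             / (len(reference) + len(candidate) - 2)) ** 0.5
t_stat = (mean_new - mean_ref) / (pooled_sd * (1 / len(reference) + 1 / len(candidate)) ** 0.5)

print(f"F = {f_stat:.2f} (compare with the critical F value to judge precision)")
print(f"t = {t_stat:.2f} (compare with the critical t value to judge accuracy/bias)")
```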
Sensitivity / true-positive rate
Measures the ability of a test to correctly identify individuals who truly have the disease
Specificity / true-negative rate
Measures the ability of a test to correctly identify individuals who do not have the disease
Positive predictive value (PPV)
Represents the probability that a person with a positive test result truly has the disease
Negative predictive value (NPV)
Represents the probability that a person with a negative test result truly does not have the disease
Do not change with prevalence
Relationship of sensitivity and specificity with disease prevalence
Change with prevalence
Relationship of PPV and NPV with prevalence
High prevalence
High PPV, low NPV
Low prevalence
Low PPV, high NPV
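A minimal sketch tying these definitions together with a hypothetical 2×2 table: the same test (95% sensitivity, 90% specificity) is applied to a high-prevalence and a low-prevalence population to show that sensitivity and specificity stay fixed while PPV falls and NPV rises as prevalence drops. The cohort size and performance figures are invented for illustration.

```python
# Diagnostic performance from 2x2 counts (TP, FP, FN, TN); values are hypothetical.
def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    ppv = tp / (tp + fp)           # probability of disease given a positive result
    npv = tn / (tn + fn)           # probability of no disease given a negative result
    return sensitivity, specificity, ppv, npv

# Same test applied to two populations of 10,000: 50% vs 1% prevalence.
for prevalence in (0.50, 0.01):
    diseased = int(10_000 * prevalence)
    healthy = 10_000 - diseased
    tp, fn = round(diseased * 0.95), diseased - round(diseased * 0.95)
    tn, fp = round(healthy * 0.90), healthy - round(healthy * 0.90)
    se, sp, ppv, npv = diagnostic_metrics(tp, fp, fn, tn)
    print(f"prevalence={prevalence:.0%}  Se={se:.2f}  Sp={sp:.2f}  PPV={ppv:.2f}  NPV={npv:.2f}")
```

Running this shows PPV near 0.90 at 50% prevalence but under 0.10 at 1% prevalence, while NPV moves in the opposite direction and sensitivity/specificity do not change.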
Calibration
Process of establishing the relationship between the measurement response of an instrument and the known concentration or activity of an analyte in a reference material
Calibrators
Specimens with assigned, traceable values, often derived from reference standards
Calibration curve
Plot of instrument response versus calibrator concentration, used to convert measured signals into reportable results; ensures that test results are accurate and comparable over time
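A minimal sketch of fitting and using a calibration curve; the calibrator concentrations and absorbance readings are hypothetical, and a simple unweighted least-squares line stands in for whatever curve model a real method specifies.

```python
# Hypothetical calibrators as (concentration, absorbance) pairs; linear least-squares fit.
calibrators = [(0.0, 0.02), (2.0, 0.21), (4.0, 0.40), (6.0, 0.61), (8.0, 0.79)]

n = len(calibrators)
sum_x = sum(c for c, _ in calibrators)
sum_y = sum(a for _, a in calibrators)
sum_xy = sum(c * a for c, a in calibrators)
sum_xx = sum(c * c for c, _ in calibrators)

slope = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
intercept = (sum_y - slope * sum_x) / n

unknown_signal = 0.50                                  # absorbance of a patient specimen
concentration = (unknown_signal - intercept) / slope   # convert signal back to concentration
print(f"curve: y = {slope:.4f}x + {intercept:.4f}; unknown ≈ {concentration:.2f} units")
```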
Controls
Biological or synthetic specimens with known or established target values used to monitor the precision and accuracy of analytical measurements
Not used to establish the calibration curve; used to verify system stability and the reliability of results
Standard
Pure substance of known composition and concentration used to prepare calibrators or assign values to control materials
Reference point for accuracy
Primary standard
A highly pure chemical that can be weighed or measured directly
e.g. anhydrous sodium chloride for chloride assays
Secondary standard
A solution standardized against a primary standard; used more routinely because it is stable and easier to prepare
Active errors
Errors committed by the frontline laboratory personnel during direct interaction with instruments, patients, or specimens
Occur at the “sharp end” of the system
Usually individual-level rather than systemic flaws
Latent errors
Hidden system weaknesses that create conditions for active errors to occur; organizational, procedural, or design flaws that may remain undetected
Occur at the “blunt end” of the system—management, policy, or workflow level
Often involve inadequate resources, unclear procedures, or poor safety culture
Pre-analytical phase / pre-examination phase
All activities from test ordering and patient preparation up to the point at which a specimen is ready for analysis; controls the inputs to testing
Active errors in pre-analytical phase
Wrong patient/wrong label, wrong tube or QNS, inadequate mixing, hemolysis
Latent errors in pre-analytical phase
No barcode verification, ambiguous requisitions, inadequate staffing/layout
Analytical phase
All activities from loading an accepted specimen onto a validated method through the generation of verified analytical data
Governs measurement performance
Systematic error in analytical phase
Miscalibration, deteriorated reagent lot, wrong blank
Random error in analytical phase
Pipetting bubbles, transient temperature/voltage fluctuation
Active error in analytical phase
Overriding critical instrument flags
Latent error in analytical phase
Unclear SOPs, deferred maintenance, inadequate QC frequency
Post-analytical phase
All activities from analytical validation through result reporting, communication, archiving, and follow-up
Controls the outputs to clinical decision-making
Active errors in the post-analytical phase
Wrong unit or reference interval, wrong patient result posted, delayed critical call
Latent errors in post-analytical phase
No double-check policy, poorly configured auto-verification rules, inadequate TAT monitoring
Quality control
Refers to the system or set of procedures used to monitor the performance of analytical systems, detect errors, and maintain accuracy and precision of results
Internal quality control (IQC)
To ensure reliability within the laboratory on a daily basis
External quality control (proficiency testing / EQA)
To verify inter-laboratory comparability and overall performance by testing standardized specimens from an external provider
Levey-Jennings chart
Plots QC results over time against the mean (x̄) and control limits (±1, ±2, ±3 SD)
Visualizes shifts (sudden change in mean) and trends (gradual drift)
Primary tool for daily QC monitoring
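A minimal sketch of the Levey-Jennings logic with invented QC values: the mean and SD are established from baseline control data, and each new result is expressed in SD units (a z-score) and checked against the ±2 SD and ±3 SD limits.

```python
# Hypothetical control results; standard library only.
import statistics

baseline = [100, 102, 99, 101, 98, 100, 103, 99, 101, 100,
            102, 98, 100, 101, 99, 100, 102, 99, 101, 100]   # >= 20 baseline QC results
mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)

daily_qc = [101, 104, 97, 100, 107]
for day, result in enumerate(daily_qc, start=1):
    z = (result - mean) / sd                     # distance from the mean in SD units
    if abs(z) > 3:
        status = "outside ±3 SD (reject run)"
    elif abs(z) > 2:
        status = "outside ±2 SD (warning)"
    else:
        status = "within ±2 SD (accept)"
    print(f"day {day}: result={result}  z={z:+.2f}  {status}")
```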
Westgard rules
Applies statistical decision rules to identify QC violations
Enhances error detection power and reduces false rejections
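A minimal sketch of a few common Westgard multirules (1_3s, 2_2s, R_4s, 4_1s, 10_x) evaluated on a hypothetical series of z-scores; a real QC program would decide which of these act as rejection rules and which as warning rules for each analyte.

```python
# Hypothetical z-scores (QC results expressed in SD units from the target mean).
def westgard_violations(z):
    """Return the rules violated by the most recent point in a z-score series."""
    violations = []
    if abs(z[-1]) > 3:
        violations.append("1_3s: one point beyond 3 SD")
    if len(z) >= 2 and ((z[-1] > 2 and z[-2] > 2) or (z[-1] < -2 and z[-2] < -2)):
        violations.append("2_2s: two consecutive points beyond 2 SD on the same side")
    if len(z) >= 2 and abs(z[-1] - z[-2]) > 4:
        violations.append("R_4s: two consecutive points spanning more than 4 SD")
    if len(z) >= 4 and (all(v > 1 for v in z[-4:]) or all(v < -1 for v in z[-4:])):
        violations.append("4_1s: four consecutive points beyond 1 SD on the same side")
    if len(z) >= 10 and (all(v > 0 for v in z[-10:]) or all(v < 0 for v in z[-10:])):
        violations.append("10_x: ten consecutive points on the same side of the mean")
    return violations

history = [0.4, -0.8, 1.2, 2.3, 2.6]   # today's result (2.6) completes a 2_2s violation
print(westgard_violations(history))
```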
Youden plot (Two-control plot)
Compares two control levels (e.g., normal and high) on X-Y axes
Points clustering away from the target along the 45° line indicate systematic bias; scattered points indicate random error
External QC/PT reviews; reagent lot crossover
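A minimal sketch of the Youden-plot interpretation using hypothetical paired z-scores for two control levels: deviations in the same direction on both levels suggest systematic error, while a deviation on only one level (or in opposite directions) suggests random error.

```python
# Hypothetical paired control results: (level 1 z-score, level 2 z-score) per run.
pairs = [(+0.3, +0.2), (+2.4, +2.2), (-0.4, +2.6), (-2.5, -2.8)]

for z1, z2 in pairs:
    if abs(z1) > 2 and abs(z2) > 2 and (z1 > 0) == (z2 > 0):
        verdict = "both levels shifted the same way -> suspect systematic error (bias)"
    elif abs(z1) > 2 or abs(z2) > 2:
        verdict = "only one level (or opposite directions) out -> suspect random error"
    else:
        verdict = "both levels within limits"
    print(f"L1 z={z1:+.1f}, L2 z={z2:+.1f}: {verdict}")
```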
Cusum (Cumulative sum) chart
Tracks cumulative deviation from the target mean
Detects subtle trends earlier than Levey-Jennings charts
Low-sigma methods; critical analytes needing tight drift control
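A minimal sketch of a tabular CUSUM with hypothetical z-scores and the common textbook settings k = 0.5 and h = 4 (both in SD units): small positive deviations that would never trip a ±2 SD limit accumulate until the decision interval is crossed.

```python
# Hypothetical QC z-scores showing a subtle upward drift.
k, h = 0.5, 4.0          # allowance and decision interval, in SD units (assumed settings)
z_scores = [0.2, 0.6, 0.9, 1.1, 0.8, 1.3, 1.0, 1.2, 0.9, 1.4]

cusum_high = cusum_low = 0.0
for run, z in enumerate(z_scores, start=1):
    cusum_high = max(0.0, cusum_high + z - k)    # accumulates positive drift
    cusum_low = min(0.0, cusum_low + z + k)      # accumulates negative drift
    flag = "  <- drift detected" if cusum_high > h or cusum_low < -h else ""
    print(f"run {run:2d}: z={z:+.1f}  CUSUM+={cusum_high:.2f}  CUSUM-={cusum_low:.2f}{flag}")
```

None of the individual points exceeds 1.4 SD, yet the cumulative sum flags the upward drift by the tenth run, illustrating the earlier trend detection noted above.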