Quality Control

  • Application Of Statistical Methods To The Evaluation Of The Quality Of Products And Services

  • refers specifically to the activities directed toward monitoring the individual elements of care.

  • monitors the processes related to the examination phase of testing and allows for detecting errors in the testing system

  • examining “control” materials of known composition along with patient samples to monitor the accuracy and precision of the complete analytic process.

  • A system of ensuring accuracy and precision in the laboratory by including QC reagents in every series of measurements.

  • process of ensuring that analytical results are correct by testing known samples that resemble patient samples

  • involves monitoring the characteristics of the analytical process, detecting analytical errors during testing, and ultimately preventing the reporting of inaccurate patient test results.

  • one component of the quality assurance system and is part of the performance monitoring that occurs after a test has been established

Parameters of Quality Control

  1. Sensitivity- it is the ability of an analytical method to measure the smallest concentration of the analyte of interest.

  2. Specificity- the ability of an analytical method to measure only the analyte of interest

  3. Accuracy- nearness or closeness of the assayed value to the true or target value

    • estimated using 3 types of studies: recovery, interference, and patient sample comparison

    • Recovery study determines how much of the analyte can be identified in the sample

    • Interference study determines whether specific conditions, such as hemolysis, turbidity (lipemia), and icterus, affect the laboratory tests

    • A sample comparison study is used to assess the presence of error (inaccuracy) in the actual patient sample.

  4. Precision or Reproducibility- the ability of an analytical method to give repeated results on the same sample that agree with one another.

  5. Practicability- it is the degree to which a method is easily repeated.

  6. Reliability- the ability of an analytical method to maintain accuracy and precision over an extended period of time during which equipment, reagents, and personnel may change

  7. Diagnostic Sensitivity- the ability of the analytical method to detect the proportion of individuals with the disease

    • indicates the ability of the test to generate more true positive results and fewer false negatives.

    • Screening tests require high sensitivity so that no case is missed.

      Sensitivity (%) = 100 x the number of diseased individuals with a positive test/total number of diseased individuals tested.

  8. Diagnostic Specificity- the ability of an analytical method to detect the proportion of individuals without the disease.

    • reflects the ability of the method to detect true negatives with very few false positives

    • Confirmatory tests require high specificity to be certain of the diagnosis.

    Specificity (%) = 100 x the number of individuals without the disease with a negative test/total number of individuals without the disease tested.

    Note : 100% sensitivity and specificity indicate that the test or method detects every patient with the disease and that the test is negative for every patient without the disease.
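The two formulas above can be sketched in Python; the counts tp, fn, tn, and fp (true positives, false negatives, true negatives, false positives) are illustrative names, not from the source:

```python
def sensitivity(tp, fn):
    """Percent of diseased individuals with a positive test."""
    return 100 * tp / (tp + fn)

def specificity(tn, fp):
    """Percent of individuals without the disease with a negative test."""
    return 100 * tn / (tn + fp)
```

For example, if 95 of 100 diseased patients test positive and 90 of 100 healthy patients test negative, sensitivity is 95% and specificity is 90%.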

Objectives of Quality Control

  1. To check the stability of the machine

  2. To check the quality of reagents

  3. To check technical (operator) errors

Control Solutions (QC Materials)

-the accuracy of any assay depends on the control solutions: how they are originally constituted and how stable they remain over time

-General chemistry assays use two levels of control solutions, while immunoassays use three levels

-To establish statistical quality control on a new instrument or on a new lot number of control materials, the different levels of control material must be analyzed for 20 days.

-For highly precise assays (with CV less than 1%) such as blood gases, analysis for 5 days is adequate.

Control Limits (Control Values)

  • these are expected values represented by intervals of acceptable values with upper and lower limits.

  • if the expected (control) values are within the desired control limits, the clinicians are assured that the test results are accurate and precise.

  • Control limits are calculated from the mean and standard deviation (SD)

  • The ideal control/reference limit is between ±2 SD

  • Use of a single lot for an extended period of time allows reliable interpretation criteria to be established which will permit efficient identification of an assay problem

  • When changing to a new lot number, laboratorians use the newly calculated mean value as the target mean but retain the previous SD value; as more data are obtained, all values should be averaged to get the best estimate of the mean and SD.

  • Determination of the mean and SD for unassayed controls is also advisable because this process improves the performance characteristics of statistical control procedures.
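A minimal sketch of deriving ±2 SD control limits from accumulated control data, using only Python's standard library; the control values below are illustrative:

```python
# Illustrative control data; in practice at least 20 days of results are
# accumulated (per the 20-day guideline above) before limits are set.
import statistics

control_values = [100.2, 99.8, 101.0, 100.5, 99.5, 100.1, 98.9, 100.8,
                  99.9, 100.4, 100.0, 99.6, 100.7, 99.3, 100.3]

mean = statistics.mean(control_values)
sd = statistics.stdev(control_values)        # sample SD (n - 1 denominator)
lower, upper = mean - 2 * sd, mean + 2 * sd  # +/-2 SD control limits

def in_control(value):
    """True if a new control result falls within the +/-2 SD limits."""
    return lower <= value <= upper
```

A new control result outside these limits signals that the run may need review before patient results are released.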

Characteristics of an Ideal QC material

  1. Resembles human sample

  2. Inexpensive and stable for long periods.

  3. No risk of transmitting communicable disease

  4. No matrix effects/known matrix effects

  5. With known analyte concentration (assayed control)

  6. Convenient packaging for easy dispensing and storage.

Notes to remember:

-should resemble a human sample and be available for a minimum of one year (same lot number); different lot numbers of the same material have different concentrations, which require new estimates of the mean and standard deviation.

-human control materials are preferred, but due to limited sources and biohazard considerations, bovine control materials are used.

-Bovine QC materials are not suitable for immunochemistry, dye-binding, and certain bilirubin assays

-QC materials should be of the same matrix as the specimens being tested

-Matrix effects are the results of improper product manufacturing, use of unpurified human and non-human analyte additives, and altered protein components.

-Control materials can be purchased with or without assayed values.

-Assayed controls are expensive but can be used as external checks for accuracy.

-Reconstitution of lyophilized control materials must be properly done to avoid incorrect control values.

-Stabilized frozen controls do not require reconstitution but may have different characteristics compared to actual specimens.

Quality Assurance

A program in which the overall activities conducted by the institution are directed toward assuring the quality of the products and services provided.

• Focused on the recipient, namely, the patient.

• Focused on the monitoring of outcomes or indicators of care

• Risk management, in-service and continuing education, safety programs, quality control, and peer review are all part of the quality assurance program

Quality Assurance (QA) - can be envisioned as a tripod, with program development, assessment and monitoring, and quality improvement forming the three legs

  • It is a systematic action necessary to provide adequate confidence that laboratory services will satisfy the given medical needs for patient care.

Philosophy of QA

  1. Quality is important to customers

  2. Quality can be assessed and monitored

  3. Quality can be improved

  4. Quality’s benefits exceed its costs.

Primary Goal of QA - To deliver quality services and products to customers.

In 1985 JCAHO finally published its 10-step QA monitoring process.

1. Assign responsibility for QA plan.

2. Define scope of patient care.

3. Identify important aspects of care.

4. Construct indicators.

5. Define thresholds for evaluation

6. Collect and organize data.

7. Evaluate data.

8. Develop corrective action plan.

9. Assess action; document improvement.

10. Communicate relevant information.

TOTAL QUALITY MANAGEMENT AND CONTINUOUS QUALITY IMPROVEMENT

TQM/CQI quickly replaced the QA model because of its expanded emphasis on satisfying the needs of the customer, especially in its ultimate definition of quality: “a delighted customer.”

• To accomplish this goal, TQM/CQI held that the total enterprise, as well as each unit within the organization (and especially each employee), had to successfully perform, and meet the obligations of, three simultaneous roles:

1. Customer

2. Producer

3. Supplier

• Quality requires the inclusion of each component in the creation process, from the acquisition of supplies to active follow-up after the product or service has been received by a delighted customer.

QUALITY ASSESSMENT AND IMPROVEMENT & CONTINUOUS PERFORMANCE IMPROVEMENT

QA&I incorporates the concepts of quality assurance and TQM/CQI, especially the idea that quality is a continuous process of improving the system, not just an end point measurement, and that it requires the direct support and active participation of the leadership of the organization.

Focuses on the success of the organization in designing and meeting set goals and objectives, hence the term “continuous performance improvement (CPI).”

JCAHO has also established nine dimensions of performance (the “what” and “how” of CPI and patient care) that must be included and measured in the design of the organization’s quality assessment and performance improvement plan:

  1. Efficacy

  2. Appropriateness

  3. Availability

  4. Timeliness

  5. Effectiveness

  6. Continuity

  7. Safety

  8. Efficiency

  9. Respect and caring

MAJOR FIGURES IN QUALITY MANAGEMENT

Armand Feigenbaum - coined the term total quality management.

Walter Shewhart - his work served as a basis for the multirule-based Westgard rules

- is known as the father of statistical quality control.

Philip Crosby - Frequently referred to as the “evangelist” of quality management,

- preached the need for quality practices in the book Quality Is Free and through the worldwide consulting network of quality colleges. - propounded that:

  1. Quality is free.

  2. Poor quality is expensive.

  3. Do things right the first time.

  4. “Zero defects” is the only legitimate goal of a quality program.

W. Edwards Deming - a statistician who worked with Shewhart; introduced the use of statistical tools in decision-making, problem-solving, and troubleshooting the production process.

- among Deming’s more prominent contributions to the language of TQM are the “Fourteen Points,” the “delighted customer” definition of quality, the “seven old tools,” and the “seven deadly diseases.”

Joseph Juran • Established the concept that quality is a continuous improvement process that requires managers’ active pursuit in reaching and setting goals for improvement.

• Introduced the Pareto principle, or 80/20 rule, which states that 80 percent (80%) of serious problems arise from only 20 percent (20%) of the causes or trouble points.

• A leader in promoting participatory management styles.

James O. Westgard

• Applied Shewhart's multi-rule system to the evaluation of the quality control data in the medical laboratory, particularly the multi-ranged controls used in clinical chemistry.

• The six rules for accepting or rejecting a control run are now commonly referred to as the Westgard rules

BASIC QUALITY CONTROL STATISTICS: ACCURACY & PRECISION

Accuracy - refers to the closeness of a result to the actual value of an analyte when performing a test, more commonly called “hitting the bull’s-eye.”

Precision - by contrast, is determined by how well a procedure reproduces a value. - also called reproducibility: the ability of an analytical method to give repeated results on the same sample that agree with one another

• Accuracy- closeness of the value to the target/true value

• Precision- Closeness of the value to the repeated value.

• T-test is a test for accuracy. It determines the difference between means of two groups of data.

• F-Test is a test for precision. It determines the difference between SD of two groups of data.

• Reliability- ability to maintain both accuracy and precision

BASIC QUALITY CONTROL STATISTICS: POPULATION & SAMPLE

Data Population • the term population is used in statistics to describe and define the items that are being studied at a particular time.

• is defined by the interest of the person doing the statistical study.

Population Sample • is a part of a population that is used to analyze the characteristics of that population.

• It is a particularly useful technique when evaluating a population with a large number of entities, which makes it impractical to include every member in the study.

• to be truly representative, and to avoid bias, samples should be selected at random (i.e., a probability sample)

Gaussian Distribution • Many terms are associated with the Gaussian distribution, including “bell-shaped curve,” “normal distribution,” “frequency polygon,” and “Levey-Jennings charts.”

• Occurs when the data elements are centered around the mean with most elements close to the mean.

• The total area under the curve is 1.0 or 100%

• The acceptable range is the 95% confidence limit, which is equivalent to ±2 SD

BASIC QUALITY CONTROL STATISTICS: PERCENTAGE AND PROBABILITY

Probability - is usually expressed in statistical notation as a decimal (0.0 to 1.0) according to the likelihood of an event occurring

- the nearer to 0, the less likely the event is to occur; the nearer to 1, the more likely it is to happen.

- probability is often expressed as a percentage

BASIC QUALITY CONTROL STATISTICS: MEAN, MEDIAN, MODE

Mean (X) - is simply the arithmetic average for all the data contained in a sample population (or an algebraic set)

- obtained by computing the sum of the values contained in the population and dividing by the number of values included in the calculation.

- associated with symmetrical or normal distribution.

Median - obtained by aligning the population from the smallest to the largest unit and selecting the midpoint, the point at which exactly 50 percent of the population falls on either side

Mode - the most frequent observation; it is used to describe data with two centers (bimodal)
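All three measures of central tendency can be computed with Python's standard library; the glucose results below are illustrative:

```python
import statistics

glucose = [90, 92, 92, 95, 98, 100, 140]  # illustrative results, mg/dL

print(statistics.mean(glucose))    # arithmetic average: 101
print(statistics.median(glucose))  # midpoint of the sorted values: 95
print(statistics.mode(glucose))    # most frequent observation: 92
```

Note how the single outlying value (140) pulls the mean above the median, which is why the mean is associated with symmetrical distributions.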

BASIC QUALITY CONTROL STATISTICS: STANDARD DEVIATION

Standard deviation (SD) - is a measurement of precision, or the tendency of the values in each population to cluster, center, or scatter around the mean.

A range of ±2 standard deviations (±2 SD) is generally considered the limit for an individual control value to be acceptable, because 95 percent of all legitimate values should fall within this range.

Coefficient of Variation (CV) - an index of precision - expressed as a percent and calculated by dividing the standard deviation by the mean and multiplying the result by 100.

- CV = SD/Mean x 100
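The CV formula above can be sketched directly; the replicate values in the usage note are illustrative:

```python
import statistics

def cv_percent(values):
    """Coefficient of variation: CV = SD / mean x 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100
```

For five replicates of 4.0, 4.1, 3.9, 4.2, and 3.8 mg/dL, the CV works out to about 4%, i.e., the SD is roughly 4% of the mean.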

GRAPHIC AND SYSTEMATIC PRESENTATION OF INFORMATION: STANDARD DATA PLOTTING TECHNIQUES

- The first task of any evaluation plan is to arrange and present the data in a manner that facilitates further analysis.

- This procedure is referred to as the “orderly array of data”

- from lowest to highest value, chronologically by run or date, in groups by sex and age, and so on

BASIC STATISTICAL GRAPHS

  1. Circle or Pie Charts - circular figures with areas marked off, shaded, or sketched according to the percentage of each component, compared to the whole.

  2. Bar graphs - may be helpful in presenting comparative interpopulation and intrapopulation factors

  3. Line graphs - are particularly useful for plotting and tracking data over a period of time.

  4. GAUSSIAN DISTRIBUTION DISPLAYS. There are two popular methods of displaying the frequency distribution characteristics of a population: histograms and frequency polygons

    1. Histogram - uses a bar graph format to show the relative size or frequency of each “class interval.

    2. Frequency polygons - are the very familiar line graphs that give the frequency distribution its descriptive name, “bell curve”

GRAPHIC AND SYSTEMATIC PRESENTATION OF INFORMATION: SEVEN OLD METHODS

  1. Flow Chart - used in identifying and describing the exact sequence of work tasks and checking out ways for improvement by modeling alternative work routes.

  2. CONTROL CHARTS - Levey-Jennings chart - used to plot control measurements against standards (ex. upper and lower limits, usually equal to the numerical value of +2 SD) used to identify whether a process is in or out of control.

  3. PARETO CHARTS - This is the term assigned to a bar chart that is designed to illustrate the classical Pareto principle, which states that 80 percent of all problems can be attributed to 20 percent of the possible causes

  4. CAUSE-AND-EFFECT DIAGRAMS - “Ishikawa diagrams,” after Kaoru Ishikawa - “fishbone diagrams” - is used to identify the possible causes or contributing factors of problems or quality defects.

  5. RUN CHARTS - is a line graph used to display data over a period of time. – are also called “trend charts”, as they are designed to show patterns of performance.

  6. SCATTER DIAGRAMS - This method is used to show the relationship between one variable and another.

  7. STORY BOARDS - refer to the technique of using a pictorial sequence on a flip chart or other visual aid to “tell the story” of a quality management project.

GRAPHIC AND SYSTEMATIC PRESENTATION OF INFORMATION: SPECIALIZED LABORATORY DATA EVALUATION METHODS

  1. LEVEY-JENNINGS (L-J) CHART - most widely used QC chart in the clinical laboratory - are control charts used to plot quality control values against previously set limits to determine if a procedure is in or out of control

  2. YOUDEN PLOT - are used to demonstrate and compare the performance of a laboratory on paired samples with other laboratories using common control lots or survey material.

  3. MULTIRULE ANALYSIS - commonly referred to in the laboratory as the “Westgard rules” - there are six basic rules proposed by Westgard and Barry for accepting or rejecting a control run based on the expected Gaussian distribution of sample values

Multirule Analysis

  1. 12s - a warning rule that is violated when a single control observation is outside the ±2s limits. - run is accepted when both control results are within 2 SD limits from the mean value

  2. 13s - Observed when one control result exceeds the mean ±3 SD limits. - The run is considered out of control when one of the control results exceeds the ±3 SD limits

  3. 22s - Observed when the last two control results (or 2 results from the same run) exceed the same mean plus 2 SD or mean minus 2 SD limit.

  4. R4s - The range or difference between the highest and lowest control result within an analytical run exceeds 4s. - Reject when 1 control measurement in a group exceeds the mean plus 2s and another exceeds the mean minus 2s

  5. 41s - the last four (or any four) consecutive control results exceed the same mean plus 1 SD or mean minus 1 SD limit

  6. 10X - reject when 10 consecutive control measurements fall on one side of the mean

  7. 31s -reject when 3 consecutive control measurements exceed the same mean plus 1s or mean minus 1s control limit.
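A sketch of how a few of these rules could be checked in Python, assuming a single control level whose observations are first converted to SD units (z-scores) against the established mean and SD; the function names are illustrative, not a standard API:

```python
# Hypothetical helpers for a few Westgard rules on one control level.
def z_scores(values, mean, sd):
    """Convert raw control results to SD units relative to the target mean."""
    return [(v - mean) / sd for v in values]

def rule_13s(z):
    """Reject: any single observation beyond +/-3 SD."""
    return any(abs(x) > 3 for x in z)

def rule_22s(z):
    """Reject: two consecutive observations beyond the SAME 2 SD limit."""
    return any((z[i] > 2 and z[i + 1] > 2) or (z[i] < -2 and z[i + 1] < -2)
               for i in range(len(z) - 1))

def rule_r4s(z):
    """Reject: one observation exceeds +2 SD and another exceeds -2 SD."""
    return max(z) > 2 and min(z) < -2

def rule_10x(z):
    """Reject: ten consecutive observations on one side of the mean."""
    return any(all(x > 0 for x in z[i:i + 10]) or
               all(x < 0 for x in z[i:i + 10])
               for i in range(len(z) - 9))
```

For example, a run of 103.5 and 97.5 against a mean of 100 and an SD of 1 gives z-scores of 3.5 and −2.5, violating both 13s and R4s.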

-Westgard recommends that at least 40 samples, and preferably 100 samples, be run in a comparison-of-methods experiment (test method vs. reference method)

-The combination of the control rules used in conjunction with a control chart has been called the Multirule Shewhart procedure.

-Multirules establish criteria for deciding whether an analytical process is out of control

-The sensitivity of the multi-rule can be increased to detect smaller systematic errors by increasing the number of observations considered.

-False rejections can happen because of the control limits design without actually identifying a problem with the method.

-“Westgard rules” are generally used with 2 or 4 control measurements per run, which means they are appropriate when two different control materials are measured 1 or 2 times per material, which is the case in many chemistry applications

-Some alternative control rules are more suitable when three control materials are analyzed, which is common for applications in hematology, coagulation, and immunoassays

INTERPRETATIVE STRATEGY

  1. Error - The concept of error is related to accept/reject, and problem, no problem decisions.

    - Errors may be classified into two types:

    1. Random errors - which may occur at any time and place within the testing or service process

    2. Systematic errors - which occur in a consistent direction or pattern

    - evidenced by a change in the mean of the control values. The change in the mean may be gradual and demonstrated as a trend in control values or it may be abrupt and demonstrated as a shift in control values.

  1. Random Error- an error that occurs unpredictably due to poor precision

    Remedy: Sample is retested using the same reagent

    Test: Replication Experiment

    Westgard Rules: 12s, 13s, R4s

  2. Systematic Error- an error that occurs predictably once a pattern of recognition is established.

    Remedy: Step-wise evaluation

    Test: Comparison of Methods

    Westgard Rules: 22s, 41s, 10x

Possible Causes of Random Error

  1. Mislabeling a specimen

  2. Pipetting error

  3. Improper mixing of samples

  4. Voltage fluctuations

  5. Temperature fluctuations

Possible Causes of Systematic Error

  1. Improper calibration

  2. Deterioration of reagent

  3. Sample instability

  4. Instrument drift

  5. Change in standard materials

2. Manipulation of Data

Statistical Bias - a set of numbers (i.e., a sample) that does not truly reflect the characteristics of the whole population.

Outlier - values that are far from the main set of values - highly deviating values - caused by random or systematic error

3. Examination of Control Charts - deviations from the symmetrical bell-shaped appearance of a frequency polygon are called skewed curves and serve as a signal that the data do not accurately reflect the parameters of the population.

- Distribution curves can become skewed in either direction (to the right or left of the mean) because of non-representatively small sample size or by the inclusion of data that are flawed because of sampling or process (technical, administrative) errors.

Three problem-related patterns may be detected by studying how data appear when plotted on a control chart.

  1. Trends - are marked by a systematic drift in one direction away from the established mean. - It is formed by control values that either increase or decrease for six consecutive days.

  2. Dispersion - occurs when control values are widely scattered in an unusual and unexplained pattern around the control chart, particularly with increasing out-of-control (i.e., exceeding ±2 SD and ±3 SD) results. - a sign of increased random error (loss of precision)

  3. Shifts - are a sudden switch of data points to another area of the control chart away from the previous mean.

Trend Main Cause: Deterioration of reagent

Shift Main Cause: Improper calibration of the instrument
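A minimal sketch of detecting these patterns on a series of control values; the six-point window follows the "six consecutive days" trend criterion above, while six consecutive points on one side of the mean is used here as one common shift criterion (an assumption, as the source does not give a count):

```python
# Hypothetical helpers; function and parameter names are illustrative.

def has_trend(values, n=6):
    """True if any n consecutive values steadily increase or decrease."""
    for i in range(len(values) - n + 1):
        diffs = [values[i + j + 1] - values[i + j] for j in range(n - 1)]
        if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
            return True
    return False

def has_shift(values, mean, n=6):
    """True if any n consecutive values fall on one side of the mean."""
    for i in range(len(values) - n + 1):
        window = values[i:i + n]
        if all(v > mean for v in window) or all(v < mean for v in window):
            return True
    return False
```

Running these checks on each day's plotted control values would flag, for example, six steadily rising results (trend) or six results all above the mean (shift).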

EXTERNAL QUALITY ASSESSMENT PROGRAMS

- The laboratory may participate, on a voluntary or mandatory basis, in external programs initiated through the laboratory community, such as proficiency testing, accreditation, and licensure activities, and in in-hospital programs such as PRO, risk management, JCAHO, Medicare, and facility licensing.

- External quality assessment programs may be roughly divided into two types for review purposes: (1) proficiency surveys - in which blind specimens are sent by an external agency for analysis and comparison with other laboratories, and (2) licensure and accreditation programs - which involve on-site inspections

Kinds of Quality Control

  1. Intralab Quality Control (Internal QC)

    -involves the analyses of control samples together with the patient specimens

    -detects changes in performance between the present operation and the “stable”

    -important for daily monitoring of accuracy and precision of analytical methods

    -detects both random and systematic errors on a daily basis

    -allows identification of analytical errors within a one-week cycle

  2. Interlab Quality Control

    -involves proficiency testing programs that periodically provide samples of unknown concentration to participating clinical labs

    -important in maintaining the long-term accuracy of analytical methods

  3. The College of American Pathologists (CAP) is the gold standard for clinical lab external QC testing.

National Reference Laboratories

National Kidney Transplant Institute- Hematology, Immunohematology and Urinalysis

-Anatomic Pathology for Renal Diseases and Unassigned Organ systems

  • Cellular-based product testing

Lung Center of the Philippines- General Clinical Chemistry

-Anatomic Pathology for Pulmonary and Pleural diseases

San Lazaro Hospital -Immunology and Serology

-(HIV/AIDS, Hepatitis, Syphilis and other STI’s)

Research Institute for Tropical Medicine- Microbiology and Parasitology

-Confirmatory Testing of Blood Unit

East Avenue Medical Center- Drug Testing, Environmental, Occupational Health, Toxicology, and Micronutrient Assay

UP-National Institutes of Health- Newborn Screening

Philippine Heart Center- Cardiac Markers

-Anatomic Pathology of Cardiac Diseases

INDICATORS OF QUALITY PERFORMANCE: INSTITUTIONAL PROGRAMS

  1. Utilization Review And Peer Review Organizations

    Utilization Review - Hospital and physician review of the necessity of care mainly focused on reducing patient length of stay in the hospital.

    Peer Review Organization -A federally mandated program that appoints an agency for a state or region to review hospital case records for quality of care and reimbursement decisions

  2. Critical-Care Pathways - A hospital-wide quality care management program that places emphasis on the outcomes of treatment received by the patient as the definition of quality.