
CHEM-2200: Analytical Chemistry - Chapter 5: Quality Assurance and Calibration Methods

1. Introduction to Quality Assurance

Quality Assurance (QA) refers to the processes and methodologies employed to ensure that analytical results are accurate and reliable enough to support the decisions that rest on them, such as determining whether a substance is safe for consumption. Results need only meet the level of precision and accuracy those decisions require; pursuing quality beyond that threshold wastes time and resources.
The chapter begins with fundamental vocabulary and concepts essential for quality assurance, followed by a discussion of two critical analytical techniques.

2. Basics of Quality Assurance

In analytical chemistry, our work involves transforming raw measurements into treated data and ultimately into results.

  • Raw data: These are the initial measurements taken.

  • Treated data: These are concentrations calculated from raw data through calibration methods.

  • Results: These are the final quantities reported after applying statistical analyses to the treated data.

Quality assurance encompasses all planning, steps, and metrics established before an analysis to confirm the correctness of the outcome. In contrast, Quality Control (QC) involves checks performed during the analytical process to guarantee the necessary accuracy and precision.

3. Objectives of Quality Assurance

The primary aim of quality assurance is to ensure results align with user or customer needs. Key components include:

  • Use Objectives: A concise articulation of how data and results will be applied, which prevents misuse of the data.

  • Specifications: Statements outlining the quality standards required for analytical results, which cover:

    • How samples are collected.

    • The quantity of samples needed.

    • Any necessary precautions to avoid sample degradation.

    • Practical constraints such as cost, time, and material availability.

    • The acceptable level of accuracy.

    • Acceptable rates of false positives and false negatives.

4. Understanding False Positives and False Negatives
  • False Positive: This occurs when a measurement indicates the presence of a condition that is not actually present. For example, a reported concentration of an analyte exceeds a critical limit despite it being below that limit in reality.

  • False Negative: This indicates the absence of a condition when it actually exists, such as a reported concentration falling below a threshold when the true concentration is above it.

5. Key Concepts in Quality Assurance

Certain fundamental concepts are crucial for understanding quality assurance:

  • Selectivity: This is the ability of an analytical method to differentiate the analyte from other components in a sample, thereby avoiding interference.

  • Sensitivity: This refers to the capability of the method to respond consistently and measurably to changes in analyte concentration, quantified by the slope of the calibration curve.

  • Blank Samples: These are samples that are not intended to contain the analyte but are essential for accounting for interference and trace amounts of analytes from the reagents or previous samples.

    • Method Blank: Contains all components except the analyte and goes through all analytical procedures to verify that the method is functioning correctly.

    • Reagent Blank: Similar to a method blank, but it does not undergo the complete sample preparation procedure.

    • Field Blank: Exposed to the sampling environment, which helps indicate whether contamination from field conditions has occurred.

6. Matrix and Spike Concept

The response to an analyte can be influenced by the matrix, defined as all components present in the unknown sample excluding the analyte. A spike is a defined quantity of analyte added to a sample, used to evaluate if the response aligns with expectations based on known calibration curves. Discrepancies can indicate issues of contamination, loss, or interference from the matrix.

7. Understanding Contamination and Loss
  • Contamination: The unintentional addition of analyte to samples or standards from various sources including dirty glassware, impurities in reagents, or the laboratory environment.

  • Loss: Refers to the removal of the analyte from samples or standards due to incomplete transfers, decomposition during storage, evaporation, precipitation, or adsorption to glassware. Both contamination and loss are random factors and are particularly significant for samples with low analyte concentrations.

8. Example: Spike Recovery

An unknown sample was analyzed and found to contain 10.0 µg/L of analyte. A spike of 5.0 µg/L was added to a replicate of this sample, and analysis of the spiked replicate gave a concentration of 14.6 µg/L. To determine the percent recovery of the spike:
$$\%\text{Recovery} = \frac{C_s - C_{us}}{C_{spiked}} \times 100\%$$

Where:

  • $C_s = 14.6\ \text{µg/L}$ (spiked sample concentration)

  • $C_{us} = 10.0\ \text{µg/L}$ (unknown sample concentration)

  • $C_{spiked} = 5.0\ \text{µg/L}$ (amount of spike added)

Carrying out the calculation yields

$$\%\text{Recovery} = \frac{14.6 - 10.0}{5.0} \times 100\% = 92\%$$

The acceptable recovery range is specified as 96% to 104%, so this recovery is not acceptable.
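The spike-recovery arithmetic is easy to script. A minimal sketch (the function name is ours, not from the notes):

```python
def percent_recovery(c_spiked_sample, c_unspiked, c_added):
    """Spike recovery: %R = (C_s - C_us) / C_spiked * 100."""
    return (c_spiked_sample - c_unspiked) / c_added * 100.0

# Values from the worked example above
r = percent_recovery(14.6, 10.0, 5.0)
print(f"{r:.0f}% recovery")       # 92% recovery
print(96.0 <= r <= 104.0)         # False: outside the acceptable 96-104% range
```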

9. Periodic Calibration Checks

It is critical to perform regular calibration checks to monitor potential drift—slow changes in response often caused by alterations in environmental conditions or degradation of reagents. Calibration check solutions should differ from those used to create the original calibration curve.
Performance test samples (also known as quality control samples) can provide known concentrations presented as unknowns to analysts. Results are then typically verified by a supervisor to diminish biases that can arise from prior knowledge of the calibration check sample concentrations.

10. Standard Operating Procedures (SOP)

A Standard Operating Procedure (SOP) outlines the specific steps to be taken during an analytical procedure, and control experiments are integrated into these procedures to detect anomalies. Adhering to the SOP counteracts the common temptation to skip steps on the unfounded assumption that they do not matter. SOPs exist for instruments, samples, and more. Furthermore, the Chain of Custody is the documented route of a sample from collection to analysis, with sign-offs at each transition and inspection of the sample's condition at every transfer point.

11. Assessment and Compliance

The assessment process involves:

  1. Collecting data to demonstrate that analytical methods operate within predetermined limits.

  2. Verifying that the final results align with the use objectives; if they do, the method is confirmed to be fit for purpose.

Regulatory agencies publish standard methods that define the expectations for precision, accuracy, blank requirements, replicates, and calibration checks necessary for certified analyses.

12. Method Validation Process

Method Validation refers to the procedure of confirming that an analytical method is suitable for its intended application. Key validation parameters required for regulatory submission include, but are not limited to:

  • Selectivity: The ability of the method to distinguish the analyte from other sample components such as degradation products and excipients.

  • Linearity: This measures how well a calibration curve adheres to a straight line, thereby indicating whether the response is proportional to the analyte quantity. A key performance metric for linearity is the square of the correlation coefficient, $R^2$, with values of roughly 0.995 to 0.999 or higher indicating a good fit.

$$R^2 = \frac{[\text{Cov}(X,Y)]^2}{\text{Var}(X)\,\text{Var}(Y)}$$

  • Trueness and Accuracy: These two terms express how closely results approach the true value. Demonstration can be achieved through various approaches including testing certified reference materials, comparing outcomes from disparate methods, analyzing blank samples with known analyte additions, and employing standard additions of analyte to unknown samples. Spiking is commonly used for accuracy evaluation due to the limited availability of reference materials.

  • Precision: Represents the degree of agreement among replicate measurements, typically expressed in terms of standard deviation, relative standard deviation, or confidence intervals. Notably, precision can fluctuate based on concentration levels, necessitating analysis near the concentrations relevant to the study. Two categories of precision are defined:

    • Repeatability: Variability in results when one individual uses one method to analyze the same sample repeatedly.

    • Reproducibility: Variability in results when multiple individuals across different laboratories apply the same analytical method.
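Two of the validation metrics above can be checked numerically: the covariance form of $R^2$ for linearity, and the relative standard deviation (%RSD) for precision. A minimal sketch; the calibration and replicate data are hypothetical placeholders, not from the notes:

```python
import statistics

def r_squared(x, y):
    """R^2 = Cov(X,Y)^2 / (Var(X) * Var(Y)); the 1/n factors cancel."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov ** 2 / (var_x * var_y)

# Hypothetical calibration data: concentration (µg/L) vs. detector signal
conc = [0.0, 2.0, 4.0, 6.0, 8.0]
signal = [0.01, 2.04, 3.97, 6.02, 7.99]
print(round(r_squared(conc, signal), 5))

# Precision of hypothetical replicate results (µg/L), expressed as %RSD
replicates = [10.1, 10.3, 9.9, 10.2, 10.0]
rsd = 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)
print(round(rsd, 2))
```

Note that precision should be evaluated near the concentrations relevant to the study, as the section above points out.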

13. Range, Robustness, and Limitations

Range signifies the concentration interval over which linearity, accuracy, and precision are deemed acceptable.

  • Linear Range: Defined as the concentration range where there is a direct proportional response from the detector to analyte concentration change.

  • Dynamic Range: Represents the concentration range in which a measurable response is observed, even where the response is no longer linear.

Robustness refers to an analytical method's resilience to minor deliberate variations in operational parameters, such as solvent composition, pH, buffer concentration, temperature, injection volume, and detector wavelength.

14. Detection and Quantification Limits
  • Limit of Detection (LOD): The minimum analyte quantity that can be distinguished from the background, usually taken as the concentration giving a signal at least three times the noise (the standard deviation of the blank).

  • Limit of Quantification (LOQ): The smallest analyte quantity that can be measured with acceptable accuracy and precision, typically represented as ten times the standard deviation of the blank.
$$\text{LOD} = \frac{3s}{m} \qquad \text{LOQ} = \frac{10s}{m}$$

where $s$ is the standard deviation of the blank signal and $m$ is the slope of the calibration curve.
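Both limits follow directly from the blank's standard deviation $s$ and the calibration slope $m$. A minimal sketch with hypothetical numbers (the function name and values are ours):

```python
def detection_limits(s_blank, slope):
    """Return (LOD, LOQ) = (3s/m, 10s/m) in concentration units."""
    return 3.0 * s_blank / slope, 10.0 * s_blank / slope

# Hypothetical: blank std dev 0.30 (signal units), slope 1.5 signal per µg/L
lod, loq = detection_limits(0.30, 1.5)
print(round(lod, 3), round(loq, 3))   # LOD and LOQ in µg/L
```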

15. Reporting Limits

The Reporting Limit is the concentration below which regulations require an analyte to be reported as "not detected." This does not imply that the analyte is absent; it means only that the concentration is below the specified threshold. Reporting limits are typically set at least five to ten times above the LOD to avoid ambiguity. For trans fats, the reporting limit is 0.5 g per serving in the USA and 0.05 g per serving in Canada.

16. Standard Addition Methodology

Standard Addition involves adding known quantities of the analyte to an unknown to determine its original concentration based on increases in response signals. This method is particularly beneficial when the unknown sample’s composition is complex or not well understood since creating reliable standards/blanks can be challenging. It helps ensure the calibration curve is applicable to the actual sample composition.

  • The relationship between the initial and final (spiked) signals can be expressed as:

$$\frac{[\text{X}]_i}{[\text{X}]_f + [\text{S}]_f} = \frac{I_i}{I_f}$$

    where $[\text{X}]_i$ is the analyte concentration in the original unknown, $[\text{X}]_f$ and $[\text{S}]_f$ are the concentrations of analyte and added standard after dilution in the spiked sample, and $I_i$, $I_f$ are the corresponding signals.

  • Example of Standard Addition: A serum sample analyzed by atomic emission gave a signal of 4.41 mV. After 5.00 mL of 2.08 M NaCl was added to 95.0 mL of serum, the spiked serum gave a signal of 7.82 mV. Applying the signal-ratio relationship, with both the analyte and the spike corrected for dilution to the 100.0 mL total volume, yields the original Na⁺ concentration.
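The serum example can be worked through directly from the signal-ratio relationship, accounting for the dilution of both the analyte and the spike (the helper name is ours):

```python
def standard_addition(c_std, v_std, v_sample, sig_initial, sig_final):
    """Solve [X]_i / ([X]_f + [S]_f) = I_i / I_f for the original [X]_i.

    [X]_f = [X]_i * v_sample / (v_sample + v_std)  (analyte diluted by the spike)
    [S]_f = c_std * v_std / (v_sample + v_std)     (spike diluted into the sample)
    """
    v_total = v_sample + v_std
    s_f = c_std * v_std / v_total        # diluted spike concentration
    d = v_sample / v_total               # dilution factor for the analyte
    ratio = sig_initial / sig_final      # I_i / I_f
    return ratio * s_f / (1.0 - ratio * d)

# Numbers from the serum example above
c_na = standard_addition(2.08, 5.00, 95.0, 4.41, 7.82)
print(f"[Na+] = {c_na:.3f} M")   # approximately 0.126 M
```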

17. Internal Standards

Using Internal Standards involves adding a known amount of a different compound (the internal standard) to the unknown samples. This compound remains distinct in its signal from the analyte but should be chemically similar to mitigate variations in response due to matrix effects. Internal standards are beneficial when different quantities of samples may be analyzed across runs or when sample loss occurs prior to analysis. If the internal standard's signal increases by a certain percentage due to a larger injection volume, the analyte's signal is expected to reflect a similar increase.

18. Example of Using an Internal Standard

In a scenario where a mixture containing the analyte and internal standard yields peak areas of 423 and 347 arbitrary units respectively:

  • Adding 10.0 mL of 0.146 M S to 10.0 mL of an unknown and subsequently diluting to 25.0 mL provides necessary data points for evaluation. This approach allows the calculation of the original concentration of the analyte in the unknown using the internal standard relationship.
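The response-factor method behind this example can be sketched as follows. The notes do not give the concentrations in the known mixture or the peak areas of the unknown run, so the values marked hypothetical below are placeholders for illustration only:

```python
def response_factor(area_x, conc_x, area_s, conc_s):
    """Response factor F from the relation A_X / [X] = F * (A_S / [S])."""
    return (area_x / conc_x) / (area_s / conc_s)

# Known mixture: peak areas 423 (analyte) and 347 (standard) from the notes;
# equal concentrations of 0.0500 M are a hypothetical assumption.
F = response_factor(423, 0.0500, 347, 0.0500)

# Standard in the unknown run: 10.0 mL of 0.146 M S diluted to 25.0 mL total
conc_s_unknown = 0.146 * 10.0 / 25.0    # 0.0584 M after dilution

# Hypothetical measured peak areas in the diluted unknown run
area_x_u, area_s_u = 553, 582
conc_x_diluted = area_x_u / (F * area_s_u) * conc_s_unknown
conc_x_original = conc_x_diluted * 25.0 / 10.0  # undo dilution of the 10.0 mL unknown
```

Because both the analyte and the internal standard experience the same injection-volume and matrix variations, their area ratio, rather than either absolute area, carries the concentration information.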

19. Recommended Exercises and Problems

Exercises and Problems for the 11th edition include:

  • Exercises: 5-A(b,c), 5-B, 5-E

  • Problems: Quality Assurance and Method Validation 5-1, 5-3, 5-7, 5-8, 5-9, 5-15, 5-21

  • Problems: Standard Addition 5-25, 5-26

  • Problems: Internal Standard 5-32, 5-33, 5-34, 5-37

Recommended problems for the 10th edition cover the same concepts.

These extensive notes serve as a detailed resource covering the essential themes and methodologies surrounding quality assurance and method validation in analytical chemistry, enabling students to prepare thoroughly for their studies and applications in this critical area of the discipline.