Diffraction, PSF & Deconvolution Notes
Useful Reading
Cox G (2012) Optical Imaging Techniques in Cell Biology. Chapter 1 covers The Light Microscope; Chapter 5 (pages 65-68) covers The Confocal Microscope; Chapter 7, Aberrations and Their Consequences, details how aberrations degrade image quality and resolution; and Chapter 10, Deconvolution and Image Processing (pages 137-141), explains algorithms and practical considerations.
Wallace W, Schaefer LH, Swedlow JR (2001) A Working Person's Guide to Deconvolution in Light Microscopy. BioTechniques 31:1076-1097. Offers step-by-step instructions and visual aids, particularly on pages 1076-1084 and in Figures 2 and 3. An updated version with tutorials is available at http://micro.magnet.fsu.edu/primer/digitalimaging/deconvolution/deconvolutionhome.html.
Conchello J-A, Lichtman JW (2005) Optical Sectioning Microscopy, featured in Nature Methods 2:920-931, discusses advanced techniques and applications in optical sectioning, especially on pages 928-930. The article is accessible at http://www.nature.com/nmeth/journal/v2/n12/pdf/nmeth815.pdf.
iBiology Lectures cover topics such as Intensity Integrated Over a Wavelength, Huygens Wavelets (wave behavior and propagation), Constructive/Destructive Interference and Diffraction (how interference affects image formation), Aliasing, the relationship between PSF size and shorter wavelength light, Point Spread Function, Deconvolution Microscopy, and Resolution of a Microscope.
Lecture Overview
Part 1: Causes of Image Blur
Microscope Objectives and Aberrations: A detailed examination of lens quality is crucial as aberrations can significantly degrade image quality. Understanding and correcting these is essential for high-resolution imaging.
Sources of Image Degradation: Noise, scatter, and glare can obscure fine details and reduce overall image clarity. Minimizing these is essential for accurate imaging.
The Point Spread Function (PSF) in 2D and 3D: The PSF characterizes how a point source of light is spread in the image, providing a mathematical description of image blur. It's essential for deconvolution techniques aiming to restore the image.
Diffraction of Light: Diffraction limits the achievable resolution in microscopy. Understanding its effects is key to optimizing imaging parameters.
Resolution and the Airy Disc in 3D: The Airy disc represents the smallest point to which a perfect lens can focus light, explaining fundamental limits of resolution. Its size determines the resolution limit of the microscope.
Formation of an Image in the Microscope: Tracing the path of light helps optimize alignment and understand how each component contributes to the final image.
How to Remove Out-of-Focus Light and Obtain Sharp Images: Techniques like confocal microscopy and deconvolution are used to eliminate or reassign out-of-focus light, enhancing image clarity.
Part 2: 3D Deconvolution Microscopy
The Principle of Deconvolution: Deconvolution uses computational algorithms to remove blur and restore image sharpness; it is particularly useful for thick samples.
Deblurring Approaches (the quick and dirty method): These simple methods are fast but may introduce artifacts or fail to fully correct blur. They are suitable for quick assessments but not for quantitative analysis.
Image Restoration Approaches (proper deconvolution): These methods iteratively reassign out-of-focus light to improve resolution and contrast. They are computationally intensive but yield better results.
Improvement of Signal-to-Noise Ratio: Deconvolution enhances the signal-to-noise ratio, making it easier to detect faint structures and improving quantitative analysis.
Practical Considerations: Accurate PSF measurement is crucial for successful deconvolution, and minimizing artifacts ensures the restored image accurately represents the sample.
Deconvolution Versus Confocal and Related Techniques: Both approaches provide optical sectioning but offer different advantages; the choice depends on the sample, desired resolution, and available resources.
Software Packages and Information Sources: Various software packages offer deconvolution algorithms; choosing the right one depends on the microscope system and specific requirements.
Microscope Objectives and Aberrations
Spherical Aberration: Rays from the margins of the lens are focused closer than those near the axis, resulting in a 'circle of least confusion' and an imperfect focus. Correction involves multiple lens surfaces with different curvatures, including aspheric designs, which are crucial for high-quality imaging and are well corrected in modern objectives. Use of correction collars on high-NA objectives to adjust for coverslip thickness and refractive index mismatch can further aid in correcting spherical aberrations.
Curvature of Field: This naturally follows from the spherical surfaces of lens elements. Correction can be achieved using aspheric surface geometry (though difficult to manufacture) and multiple lens element designs. Modern objectives use field-flattening lenses to correct for curvature of field, ensuring that the image is sharp across the entire field of view.
Chromatic Aberration: This results from dispersion, where the refractive index (n) varies across wavelengths, impacting image sharpness and color fidelity. Different colors of light are focused at different points, leading to color fringes and reduced image sharpness, making it crucial in multicolour stainings.
Corrections for Chromatic Aberrations
Corrections are always imperfect; strategies include the use of low-dispersion glass (e.g., fluorite) and the addition of lens elements with different dispersion and curvature.
Partial Correction (2 colors) = Achromat: Achromatic objectives correct for chromatic aberration at two wavelengths (typically red and blue), providing improved color correction compared to uncorrected objectives.
Correction Across the Visible Range = Apochromat: Apochromatic objectives provide the highest level of chromatic aberration correction, correcting for three or more wavelengths across the visible spectrum. The use of extra-low dispersion glass and multiple lens elements helps to minimize chromatic aberration and improve image sharpness and color fidelity.
Types of Objectives and Corrections
Plan = Flat Field: Plan objectives are designed to correct for field curvature, ensuring that the image is flat and in focus across the entire field of view.
Apo = Apochromat: Apochromat objectives correct for chromatic aberration across the whole visible spectrum, enabling high-resolution multicolor imaging. They are essential for applications requiring accurate color rendering and high resolution, such as colocalization studies and spectral imaging.
Table 1: Objective Correction for Optical Aberration
| Objective Type | Spherical Aberration | Chromatic Aberration | Field Curvature |
|---|---|---|---|
| Achromat | Corrected for one color | Corrected for two colors | No |
| Plan Achromat | Corrected for one color | Corrected for two colors | Yes |
| Fluorite | Corrected for two to three colors | Corrected for two to three colors | No |
| Plan Fluorite | Corrected for three to four colors | Corrected for two to four colors | Yes |
| Plan Apochromat | Highly corrected for three to four colors | Highly corrected for four to five colors | Yes |
Objective Information
Corrections for aberrations are designed for specific glass coverslip thicknesses, indicated in mm (e.g. 0.17). Using a correction collar adjusts for coverslip variations and is essential for high-NA objectives.
Considerations for Objectives
Objectives are corrected to avoid aberrations, but this applies only under certain conditions. Some important factors to consider include:
Thickness of Coverslip: Using the correct coverslip thickness is crucial for achieving optimal image quality with high-NA objectives. Deviations from the specified thickness can introduce spherical aberration and degrade image resolution.
Specifications of Immersion Oil: Immersion oil must match the refractive index of the objective to minimize refraction and maximize light collection. Using the wrong immersion oil can result in reduced image brightness, contrast, and resolution.
Nature of the Sample: The distance of the sample from the coverslip can affect image quality, particularly in high-resolution microscopy. Objectives with correction collars can compensate for variations in sample distance, ensuring optimal image quality.
Suboptimal conditions will induce spherical aberrations, and environmental conditions such as temperature fluctuations can affect the refractive index of optical components.
Causes of Image Blur in Microscopy
Even with corrected high-numerical-aperture plan apochromats, images can still appear fuzzy. For example, consider a single isolated cell or a cell fluorescing in a histological section. Cells are typically 5-15 µm thick, and at NA = 1.4 the depth of focus is <300 nm, meaning most of the cell is not in the focal plane! This shallow depth of focus results in out-of-focus blur from regions above and below the focal plane, reducing image clarity. Excitation of fluorophores still occurs in other (deeper or shallower) planes, and the image is further degraded by noise (the quantal nature of light, or sensor noise), glare (lateral reflections from bright light outside the view area), scatter (optical inhomogeneity), and diffraction.
Causes of Image Degradation
Noise is caused by random processes like photon statistics and electronic noise. Photon shot noise arises from the discrete nature of light and follows a Poisson distribution, while electronic noise is generated by the detector and can be reduced by cooling the sensor.
Glare results from lateral reflections from bright light outside the view area. Glare can be minimized by using appropriate filters and light baffles.
Scatter is due to optical inhomogeneity in the sample. Scattering can be reduced by using clearing agents to match the refractive index of the sample to the surrounding medium.
Blur is caused by optical aberrations, which can distort the image and reduce resolution. Correcting for these aberrations is essential for high-quality imaging. Even with perfect optics, blur is also caused by the diffraction of light as it passes through the imaging system. This non-random spreading of light is predictable.
Newton’s View of Optics
Sir Isaac Newton (1704) in “Opticks” wrote that "Light is never known to follow crooked passages nor to bend into the shadow,” indicating light travels in straight lines (‘ray of light’). Tracing rays (the paths that single photons take) as straight lines is still useful to understand the path of light through microscopes (or Newton’s telescope). Ray optics provides a simple model for understanding how lenses and mirrors focus light to form images.
3D Blur
Conventional fluorescence images captured at two different depths from a single neuron dendritic tree (neuron injected with a fluorescent tracer) show that features sharp at one level of focus are completely out of focus at another. This 3D blur is caused by the limited depth of focus of the objective lens. Out-of-focus light from regions above and below the focal plane contributes to the blur, reducing image clarity.
Point Spread Function (PSF)
Newton’s geometric view of optics still predicts a 3-dimensional spread of light away from the focus: this 3D blur structure is called the point spread function (PSF). The PSF is critical for understanding and correcting image blur. The PSF describes how a point source of light is spread in three dimensions by the microscope. Understanding the PSF is essential for optimizing image restoration techniques, such as deconvolution, that aim to remove blur and improve resolution.
Wave Optics and Diffraction
Despite its quantal nature (discrete photons), light behaves as a wave. Diffraction is associated with this wave property and limits the achievable resolution in microscopy. Wave optics provides a more accurate model for understanding the behavior of light at high resolution than ray optics.
More on PSF
The PSF describes how light from an infinitely small, light-emitting object will be spread in three dimensions due to the diffraction of light in the microscope. Visualizing the PSF involves capturing images at multiple focal planes around a point light source, aiding in deconvolution. Understanding the PSF is crucial for optimizing image restoration techniques. The PSF is affected by the numerical aperture (NA) of the objective lens, the wavelength of light, and the refractive index of the imaging medium.
Diffraction Pattern
Even the best focus of a point object is still a diffraction pattern, with a bright central disc (the Airy disc) and a sequence of faint alternating light and dark rings. The Airy disc size determines the resolution limit of the microscope. The diameter of the Airy disk is inversely proportional to the NA of the objective lens and directly proportional to the wavelength of light.
Rayleigh and Abbe
Rayleigh and Abbe both quantified the diffraction limit for resolving objects in the microscope. Lord Rayleigh, an astronomer, and Ernst Karl Abbe, who worked with microscopes, contributed to this understanding. The Rayleigh criterion states that two point sources are just resolvable when the center of the Airy disk of one point source is located at the first minimum of the Airy disk of the other point source. Abbe's equation relates the resolution of a microscope to the wavelength of light and the NA of the objective lens.
Numerical Aperture
The numerical aperture is NA = n sin θ, where n is the refractive index of the medium between the objective and the sample and θ is the half-angle of the cone of light the objective can accept. Since θ cannot exceed 90°, sin θ < 1, and the NA is limited by n; very good oil-immersion lenses reach NA = 1.4. The NA determines the light-gathering ability and resolution of the objective lens.
3 Dimensions of the PSF
Computed (theoretical) diffraction patterns for an ideal, aberration-free objective: Rayleigh’s criterion (in astronomy) only considers the effect of diffraction on resolution in 2 dimensions (i.e., at the best focal plane). These axes are called X and Y, hence ‘lateral’ resolution. In most samples, it is also necessary to resolve features lying at different depths within the sample along the Z axis. This is resolution in Z, or ‘axial’ resolution. With a perfect lens, this function is symmetric along the optical axis.
Axial Resolution
In the axial direction, the intensity distribution is similar in shape to the Airy disk. Axial resolution is typically lower than lateral resolution. The axial resolution is affected by the NA of the objective lens and the refractive index of the imaging medium.
Lateral and Axial Resolution
The first and second dimensions represent lateral resolution, which is circularly symmetrical because the aperture is circular. The third (Z) axis represents 'axial' resolution, which has an elongated shape and is broader along the axial than lateral planes. Rayleigh's criterion can be applied to both components, but the equations are different: r_lateral = 0.61 λ / NA and r_axial = 2 λ n / NA². These equations show that lateral resolution is inversely proportional to the NA, while axial resolution is inversely proportional to the square of the NA.
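As a quick numerical check, the two Rayleigh formulas above can be evaluated directly. The wavelength, NA, and refractive index below are illustrative values, not from the notes:

```python
def lateral_resolution_nm(wavelength_nm, na):
    """Rayleigh lateral resolution: r = 0.61 * lambda / NA."""
    return 0.61 * wavelength_nm / na

def axial_resolution_nm(wavelength_nm, na, n):
    """Axial resolution: r = 2 * lambda * n / NA**2."""
    return 2.0 * wavelength_nm * n / na**2

# Illustrative example: green emission (520 nm), NA 1.4 oil objective (n = 1.515)
r_xy = lateral_resolution_nm(520, 1.4)        # ~227 nm
r_z = axial_resolution_nm(520, 1.4, 1.515)    # ~804 nm
```

Note the asymmetry: at NA 1.4 the axial resolution is roughly 3-4 times worse than the lateral resolution, which is why the PSF is elongated along Z.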
PSF Examples
PSFs can be computed from theory (i.e., knowing λ and NA) or for known optical errors (aberrations) in lens designs, with computation accounting for lens imperfections. Both lateral and axial resolution depend on λ and NA, but axial resolution scales as 1/NA². Aberrations in the objective lens can distort the PSF and reduce image quality.
PSF and the Superposition Principle
An object can be considered as a collection of points, each differing in light emission (fluorescence) or absorption (bright field). Light from each point in the object gives rise to a three-dimensional PSF. The image at the current plane of focus is formed by the central part of the PSFs from features in that plane and the deeper (or more superficial) parts of PSFs from features in other object planes. If structures contain many features fluorescing at the same (or nearby) XY locations but located in different axial (Z) planes, the image contrast is degraded for features in the current plane by the out-of-focus (diffracted) light from other planes. Each 3-D point in the image (voxel) has a brightness consisting of the sum of the local brightness of the PSF from every other point.
Image Degradation - Noise
Noise comprises random processes: the statistical distribution of photons and electronic noise generated by the detector. It is only predictable in a statistical sense, but it can be reduced in three ways: capturing the image several times and averaging, which reduces random noise (at the cost of temporal resolution); using a longer exposure time (camera), which increases the signal-to-noise ratio but can also lead to photobleaching and increased background fluorescence; and increasing the intensity of illumination (more emitted photons = less noise), though this carries a risk of photobleaching and sample damage.
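The effect of frame averaging on shot noise can be simulated with Poisson statistics; the photon counts and frame numbers below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)
mean_photons = 100.0  # assumed mean photon count per pixel

# One noisy frame versus the average of 16 frames of the same scene:
one_frame = rng.poisson(mean_photons, size=10_000).astype(float)
avg_16 = rng.poisson(mean_photons, size=(16, 10_000)).mean(axis=0)

# Averaging N frames reduces the random-noise standard deviation by ~sqrt(N),
# at the cost of temporal resolution.
noise_single = one_frame.std()   # ~10 (Poisson: std = sqrt of the mean)
noise_avg = avg_16.std()         # ~2.5
```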
Image Degradation - Scatter
Scatter refers to optical inhomogeneity and is not predictable. It can be reduced by using thinner samples (not always practical, and thin sections may not be representative of the entire sample) or by tissue clearing: matching the refractive index of the immersion medium to that of the tissue and optimizing the sample itself (e.g., by lipid removal or hydrogel substitution) improves light transmission. Tissue clearing makes the sample more transparent, reducing scatter and allowing deeper imaging.
Image Degradation - Glare
Glare is only somewhat predictable. It can be reduced by better anti-reflection coatings on optics, which minimize reflections from lens surfaces; by using fewer lens elements (e.g., extra-low dispersion glass instead of triplets for reducing chromatic aberration), although fewer elements may also reduce the ability to correct aberrations; and by better light-absorbing materials in lens tubes and mounts, which absorb stray light and minimize internal reflections.
Image Degradation - Blur
Blur, in contrast, is largely predictable: the theoretical PSF can be computed, and the actual PSF can be measured (e.g., by imaging a fluorescent nanobead). Optical blur is caused by optical aberrations, which distort the image and reduce resolution, and by the diffraction of light as it passes through the imaging system; this spreading of light is non-random.
How to Remove Out-of-Focus Light
There are two major and fundamentally different ways to remove out-of-focus light: confocal microscopy (and related optical sectioning techniques) and deconvolution. Confocal microscopy uses a pinhole to physically block out-of-focus light, resulting in sharper images. Deconvolution, in contrast, computationally reassigns blurred out-of-focus light back to its source (or computationally removes it).
Deconvolution and Convolution
The acquired blurred image can be mathematically modeled as the result of convolving the observed objects with a 3D point-spread function (PSF). The formation of an image can be regarded as a convolution, where the 3D image results from convolving actual light intensities from each object point with the PSF. Convolution is a mathematical operation that describes how the shape of one function modifies the shape of another function.
Convolution Explained
Convolution is a mathematical operation related to multiplication and cross-correlation. Consider 2 mathematical functions, f and h, where f describes the distribution of light for an object in 3 dimensions, and h defines the blur of a single point in space (i.e. the PSF). Convolution of these two functions produces a 3rd function (f * h) that describes how the shape of one function modifies the shape of the other. Technically, (f * h) is the integral of the product of f and a shifted copy of h: g(x, y, z) = (f * h)(x, y, z) = ∫∫∫ f(ξ, η, ζ) h(x − ξ, y − η, z − ζ) dξ dη dζ. This allows prediction of the effect of the PSF on a known object. Convolution is used in image processing to blur, sharpen, and enhance images.
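The idea can be illustrated with a toy 3D convolution, using an anisotropic Gaussian as a stand-in for a real PSF (a rough approximation of the elongated widefield PSF, not a physically computed one):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# f: a single point emitter in an otherwise dark 3D volume (axes Z, Y, X)
obj = np.zeros((32, 32, 32))
obj[16, 16, 16] = 1.0

# h: stand-in PSF -- a Gaussian blur that is broader axially (Z) than
# laterally, mimicking the elongated shape of a widefield PSF.
image = gaussian_filter(obj, sigma=(4.0, 1.5, 1.5))

# Convolution spreads the point's light over many voxels but conserves
# the total intensity; the central voxel is now much dimmer.
peak_drop = image[16, 16, 16] / obj[16, 16, 16]
```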
Deconvolution
Deconvolution in microscopy attempts to estimate f, given a model for g and on the assumption that the image represents (f * h) + noise. Deconvolution is a general term, encompassing a large family of different methods operating in 2D and 3D. Deconvolution algorithms use knowledge of the PSF to remove blur and restore image sharpness.
Deconvolution Methods
With knowledge about the PSF of a microscope, it is possible to estimate the contribution of blurred out-of-focus light to an image. Out-of-focus light can then either be reassigned back to its estimated point of origin to deconvolve the image (also known as image restoration or real deconvolution) or the blurred light can simply be subtracted from each image plane ('de-blurring'). Image restoration methods are more computationally intensive but produce better results than de-blurring methods.
Deconvolution Process
The starting point involves collecting a stack of images (‘XYZ’ series or ‘Z-stack’) from different focal planes, typically using a wide-field fluorescence microscope. Each image (slice) contains features blurred by diffraction and out-of-focus light, as well as noise and other forms of degradation. The goal of deconvolution is either to remove out-of-focus blur or re-assign it back to its point of origin. The Z-stack should be acquired with an appropriate step size to ensure that the PSF is adequately sampled.
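A back-of-the-envelope check for the Z step size. This is a rule of thumb assuming the axial resolution formula r_axial = 2 λ n / NA²; dedicated Nyquist calculators that use the full OTF support give somewhat stricter values:

```python
def suggested_z_step_nm(wavelength_nm, na, n):
    """Sample the axial resolution at least twice (Nyquist rule of thumb)."""
    r_axial = 2.0 * wavelength_nm * n / na**2
    return r_axial / 2.0

# Illustrative: green emission (520 nm), NA 1.4 oil immersion (n = 1.515)
step = suggested_z_step_nm(520, 1.4, 1.515)   # ~400 nm
```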
De-blurring Approaches
These are quasi-2-dimensional techniques that operate on each image plane individually to decrease blur by removing out-of-focus light; however, they do not improve the signal-to-noise ratio. An example is nearest-neighbor subtraction, which assumes that out-of-focus information in one image plane results from in-focus information in the planes immediately above and below it in the stack (the nearest neighbors). The algorithm subtracts a blurred version of the nearest-neighbor images from each focal plane. Note that this is a subtraction of signal, so a lot of signal is lost and the signal-to-noise ratio does not improve. Nearest-neighbor subtraction is simple and fast but can introduce artifacts.
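A minimal sketch of nearest-neighbor subtraction. The Gaussian blur and the scaling factor `c` are hypothetical stand-ins for the plane-to-plane defocus kernel and weighting used by real implementations:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nearest_neighbor_deblur(stack, sigma=2.0, c=0.45):
    """De-blur each plane by subtracting blurred copies of its neighbors.

    stack: 3D array (Z, Y, X); sigma and c are illustrative parameters.
    """
    out = np.empty_like(stack)
    nz = stack.shape[0]
    for z in range(nz):
        above = gaussian_filter(stack[max(z - 1, 0)], sigma)
        below = gaussian_filter(stack[min(z + 1, nz - 1)], sigma)
        # Subtraction of signal: negatives are clipped, and SNR does not improve.
        out[z] = np.clip(stack[z] - c * (above + below), 0.0, None)
    return out
```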
Image Restoration Algorithms
These algorithms operate simultaneously on every pixel in the whole image stack, attempting to re-assign blurred light to its proper in-focus location, which can improve the signal-to-noise ratio. They are computationally demanding and can be divided roughly into inverse filtering (involving Fourier transforms of the PSF and image data) and iterative methods such as maximum likelihood estimation and expectation maximization. Image restoration algorithms are more computationally intensive than de-blurring methods but produce better results.
Mathematical Model for Image Formation
The mathematical model for the image formation process in the spatial domain is that the true object (f) is convolved with the PSF (h) and contaminated by the noise (n) to give the observed image (g): g = h ⊗ f + n. Since the convolution operation (⊗) transforms to a multiplication in the frequency domain, it is often easier to process the data using the Fourier transform (ℱ): G = ℱ(g) = H ⋅ F + N, where G, H, F, and N are the Fourier transforms of g, h, f, and n, respectively. The Fourier transform of the PSF, H, is also known as the optical transfer function (OTF). Ignoring the noise, it may appear that the simplest approach to estimate the true data (f̂) is to divide the observed frequency data by the OTF, often referred to as an inverse filter: f̂ = ℱ⁻¹(G/H). However, this is rarely possible because of zeros in the OTF and the amplification of broadband noise contamination that overwhelms the result. The mathematical model provides a framework for understanding the image formation process and developing deconvolution algorithms.
Inverse Filter Algorithms
Types of inverse filter algorithms include the Moore-Penrose pseudo-inverse (MPPI), which is a generalization of the inverse of a matrix that can be used to solve systems of linear equations that do not have a unique solution, and the Wiener-Helstrom filter (WHF), which is a linear filter that minimizes the mean square error between the estimated image and the true image. Inverse filter algorithms are simple to implement but are sensitive to noise and can produce artifacts.
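A minimal Wiener-Helstrom sketch in a 1D toy model. The constant noise-to-signal ratio K is an assumed tuning parameter here; real implementations estimate the signal and noise spectra from the data:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256

# Toy model: two point sources blurred by a normalized Gaussian PSF + noise.
f = np.zeros(N); f[100] = 1.0; f[140] = 0.7
x = np.arange(N) - N // 2
h = np.exp(-x**2 / (2 * 4.0**2)); h /= h.sum()
H = np.fft.fft(np.fft.ifftshift(h))
g = np.real(np.fft.ifft(np.fft.fft(f) * H)) + 0.01 * rng.standard_normal(N)

# Wiener-Helstrom: F_hat = G * conj(H) / (|H|^2 + K). The constant K damps
# exactly those frequencies where |H| ~ 0, so noise is not amplified.
K = 1e-3
G = np.fft.fft(g)
f_wiener = np.real(np.fft.ifft(G * np.conj(H) / (np.abs(H)**2 + K)))
```

Unlike the naive inverse filter, the estimate stays bounded and the point sources are partially recovered, at the cost of some residual blur set by K.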
Statistical Image Estimation
Statistical image estimation techniques use iterative methods, including maximum likelihood methods, expectation maximization, and maximum-entropy methods. Maximum likelihood methods estimate the true image by maximizing the likelihood function: the probability of observing the data given the true image. Expectation maximization (EM) is an iterative algorithm that alternates between an expectation step and a maximization step. Maximum-entropy methods maximize the entropy of the image, subject to constraints that keep the estimated image consistent with the data. Statistical image estimation methods are more robust to noise than inverse filter algorithms but are more computationally intensive.
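A minimal 1D Richardson-Lucy iteration (the classic maximum-likelihood EM scheme for Poisson noise) gives the flavor of these methods; real packages work in 3D with measured PSFs and additional regularization:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(g, h, n_iter=30, eps=1e-12):
    """Richardson-Lucy iteration:
    f_{k+1} = f_k * [ (g / (f_k conv h)) conv h_mirror ].
    The multiplicative update keeps the estimate non-negative."""
    f = np.full_like(g, g.mean())       # flat, positive initial guess
    h_mirror = h[::-1]                  # mirrored PSF (the correlation step)
    for _ in range(n_iter):
        blurred = fftconvolve(f, h, mode="same")
        ratio = g / np.maximum(blurred, eps)
        f = f * fftconvolve(ratio, h_mirror, mode="same")
    return f
```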
Iterative Deconvolution
This process starts with an initial guess of the real object (i.e., what the real images should be like) and convolves (blurs) this estimated object using the PSF. It then compares the blurred images to the raw observed images and computes an error criterion. This comparison is used to improve the estimate of the object, and the process is repeated until the error criterion is minimized. The estimate of the object at this stage is the restored image. The update formula is: f_{k+1} = f_k + (g − h ⊗ f_k) ⊗ h. Iterative deconvolution algorithms are more computationally intensive than inverse filter algorithms but produce better results and are less sensitive to noise.
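The update rule above can be sketched directly; with a normalized, symmetric PSF this is a Landweber/Van Cittert-style scheme (toy 1D example with an assumed Gaussian PSF):

```python
import numpy as np
from scipy.signal import fftconvolve

def iterative_restore(g, h, n_iter=25):
    """f_{k+1} = f_k + (g - h conv f_k) conv h, starting from f_0 = g.

    Each pass blurs the current estimate, compares it with the observed
    image, and feeds the re-blurred residual back into the estimate.
    """
    f = g.copy()
    for _ in range(n_iter):
        residual = g - fftconvolve(f, h, mode="same")
        f = f + fftconvolve(residual, h, mode="same")
    return f
```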
Improvement of Signal-to-Noise Ratio
Image restoration algorithms for deconvolution re-assign out-of-focus light signals (photons) to the planes where they likely came from. This improves the signal-to-noise ratio compared to the raw images! Data can be used quantitatively, making this approach an economical use of light with good sensitivity, suitable for sensitive samples with faint signals. The improvement in signal-to-noise ratio allows for more accurate quantification of image data.
Practical Considerations
How is the PSF obtained? There are three ways: calculating a theoretical PSF, blind deconvolution, and direct measurement of the PSF.
A theoretical PSF can be calculated from the NA, the emission λ, and the refractive index. This method assumes optically ideal conditions, which is never the case in practice; theoretical PSFs are easy to obtain but may not accurately represent the actual PSF of the microscope.
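As a small taste of a theoretical PSF, the in-focus (focal-plane) cross-section of the ideal, aberration-free case is the Airy pattern; this sketch covers only that lateral profile, with illustrative wavelength and NA values:

```python
import numpy as np
from scipy.special import j1

def airy_intensity(r_um, wavelength_um, na):
    """In-focus lateral PSF of an ideal lens: I(r) = [2*J1(v)/v]^2,
    with v = 2*pi*NA*r/lambda (J1: first-order Bessel function)."""
    v = 2.0 * np.pi * na * np.asarray(r_um, dtype=float) / wavelength_um
    out = np.ones_like(v)              # limit value I(0) = 1
    nz = v != 0
    out[nz] = (2.0 * j1(v[nz]) / v[nz]) ** 2
    return out

# Locate the first dark ring for 520 nm emission at NA 1.4:
r = np.linspace(0.0, 0.5, 5001)        # radius in micrometres
I = airy_intensity(r, 0.52, 1.4)
first_minimum_um = r[np.argmax(np.diff(I) > 0)]   # ~0.227 um = 0.61*lambda/NA
```

The first minimum matches the Rayleigh lateral resolution formula, tying the diffraction pattern back to the resolution limit discussed earlier.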
Some deconvolution algorithms can estimate both the PSF and the real object from the raw images using blind deconvolution. In this case, no knowledge about PSF is needed beforehand, and the PSF is iteratively updated at the same time as the estimation of the real object. Blind deconvolution algorithms do not require prior knowledge of the PSF but are more computationally intensive and may not produce accurate results.
To measure the PSF, collect an image stack from a sub-resolution fluorescent bead (regarded as a point source of light). Software can then calculate a PSF from this data set. This approach takes the conditions of the particular microscope system into account and should be carried out with preparations of exactly the same type as the real samples. Measuring the PSF with fluorescent beads is the most accurate method but requires careful sample preparation and imaging.
More Practical Considerations
Artefacts: Interpret with care! Deconvolution algorithms can easily lead to artefacts in the images. Artefacts can be caused by noise, inaccurate PSF measurements, and inappropriate algorithm settings.
Sources of error: The quality of the raw data is very important. Successful restoration requires an accurately determined PSF that is valid for the imaging conditions, avoidance of spherical aberration (which distorts the PSF and reduces image quality), low noise, compensation for photobleaching (which can affect the accuracy of deconvolution), a stable light source (lamp flickering introduces noise into the images), and an unsaturated detector (saturation leads to inaccurate data).
Worth it? Even for thin samples, resolution approximately doubles following real deconvolution. The benefits depend on the sample, imaging conditions, and algorithm settings.
Software Packages
Types of software packages include 3D Huygens Deconvolution software (www.svi.nl), Microvolution (www.microvolution.com), AutoQuant X3 (http://www.mediacy.com/autoquantx3), Imaris (https://imaris.oxinst.com/), and Plug-ins for FiJi / ImageJ (free software). Deconvolution is often included in software packages for running microscopes and acquiring images, such as MetaMorph Microscopy Automation & Image Analysis Software (www.moleculardevices.com), ZEN (Zeiss digital imaging software) (www.zeiss.com/