How MRI Works - Part 3 - Fourier Transform and K-Space

Sinusoids

Only three pieces of information are needed to uniquely specify the entire function:

  1. Frequency (\omega): Specifies the number of wiggles within a given span of the independent variable. Units are radians per second or cycles per second (Hz) when time is the independent variable.

  2. Amplitude (A): Specifies the sinusoid crest height; must be a positive real number.

  3. Phase (\phi): Specifies a shift of the sinusoid. The convention is that \phi = 0 when a sinusoid crest occurs at time t = 0. The range is between -\pi and \pi, since the sinusoid repeats itself every 2\pi radians.

Representing Amplitude/Phase Combo with a Complex Number

The amplitude/phase combination can be represented with a single complex number, which contains two pieces of information: real and imaginary, or magnitude and phase. In the Argand plane (complex plane), the length of the complex vector corresponds to the sinusoid amplitude, and the phase corresponds to the sinusoid's phase.

The real part of a complex number is just the magnitude A times cos(\phi), and the imaginary part is A times sin(\phi), so by employing Euler's relation, this complex number specifying our amplitude/phase combo is written as Ae^{i\phi}.

This complex number serves the purpose of scaling and shifting our sinusoid, so we call it a 'phasor' (short for phase vector).
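
As an illustrative aside (not from the original notes), here is a minimal NumPy sketch of how a single complex number carries both amplitude and phase; the values are arbitrary examples:

```python
import numpy as np

# A phasor A*e^{i*phi} packs amplitude and phase into one complex number.
A, phi = 2.0, np.pi / 3                   # arbitrary example amplitude and phase
phasor = A * np.exp(1j * phi)

print(phasor.real, A * np.cos(phi))       # real part = A*cos(phi)
print(phasor.imag, A * np.sin(phi))       # imaginary part = A*sin(phi)
print(np.abs(phasor), np.angle(phasor))   # amplitude and phase recovered back
```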

Functional Form of a Sinusoid: The Complex Exponential

The complex exponential is given by: e^{i\omega t}

By Euler's relation, we get two sinusoids:

  • A real sinusoid (green).

  • An imaginary sinusoid (purple), which is shifted from the real part by a quarter of a cycle or \pi/2.

If \omega is a positive number, the real part will lead the imaginary; if \omega is a negative number, the real part will lag the imaginary.

Positive and negative frequencies are distinct for a complex exponential, unlike regular sines and cosines.

To scale the amplitude and shift the phase of this sinusoid, multiply e^{i\omega t} by a phasor Ae^{i\phi}. In many physics applications, it is common practice to work the math with complex exponentials, then just ignore the imaginary part of the solution.
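
A small NumPy sketch (illustrative values only) of scaling and shifting the complex exponential with a phasor, then keeping only the real part of the result:

```python
import numpy as np

# Multiplying the complex exponential by a phasor scales and shifts the sinusoid.
A, phi, omega = 1.5, np.pi / 4, 2 * np.pi * 3   # arbitrary example values
t = np.linspace(0, 1, 1000)

signal = A * np.exp(1j * phi) * np.exp(1j * omega * t)   # phasor * e^{i*omega*t}

# Keeping only the real part gives the familiar scaled, shifted cosine.
assert np.allclose(signal.real, A * np.cos(omega * t + phi))
```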

Fourier Theory

Any function can be represented as a sum of sinusoids. A function f(t) can be built by adding sinusoids together: each frequency is scaled and shifted according to a corresponding complex phasor, and once the infinite number of scaled and shifted frequencies have all been added and accounted for, f(t) has been perfectly specified as a function of frequency (complex phasors) rather than as an input/output function of t.

A grid can be drawn showing all the sinusoids which yield f(t) when summed. Every real-number frequency will have a corresponding phasor describing its contribution to f(t). This works with any function f(t), including functions with non-differentiable sharp points, non-periodic functions, and discontinuous, non-periodic functions. Sinusoid functions of different frequencies are mathematically orthogonal to each other.

Each sinusoid frequency can be characterized by a single phasor. Since every frequency \omega has an associated phasor, a new function is created. For every input frequency \omega, it returns the amplitude and phase of that particular frequency. Instead of drawing a grid of phasors, a plot is made where frequency \omega is the independent variable, and at each frequency, the magnitude of its associated phasor is plotted. This function is called F(\omega). Each function f(t) has its own unique phasor function F(\omega), and it has peaks wherever those sinusoidal frequencies tend to dominate in f(t).

Fourier Transform

The Fourier Transform is a mathematical operation which allows you to represent a function f(t) in terms of the different sinusoidal frequencies \omega of which that function is composed. You give it a function f(t), and it returns the complex function F(\omega) specifying the precise phasor associated with every possible sinusoid frequency contained in the function.

In the F(\omega) plot, every location on the independent \omega axis corresponds to a particular sinusoid of that frequency. Higher or lower \omega values correspond to higher or lower sinusoid frequencies. At every frequency \omega, there is a corresponding phasor, F(\omega), which multiplies the complex exponential, thereby scaling and shifting its respective sinusoid according to its relative contribution to f(t).
If you add together all the scaled and shifted sinusoids as dictated by F(\omega), then you get back your original function f(t). Adding all the sinusoids back together is a simple integral over all \omega:

f(t) = \int_{-\infty}^{\infty} F(\omega) e^{i\omega t} d\omega

This expression is called the inverse Fourier transform since it takes in the function F(\omega) and returns f(t).
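
As a discrete, numerical sketch of this idea (using NumPy's FFT as a stand-in for the continuous transform), decomposing a signal into phasors and then summing the sinusoids back together recovers the original function:

```python
import numpy as np

# Discrete sketch: decompose f(t) into phasors with an FFT, then sum the
# scaled/shifted sinusoids back up (inverse FFT) to recover f(t).
t = np.linspace(0, 1, 512, endpoint=False)
f = np.sin(2 * np.pi * 5 * t) + 0.3 * np.cos(2 * np.pi * 12 * t)

F = np.fft.fft(f)                   # one complex phasor per discrete frequency
f_reconstructed = np.fft.ifft(F)    # "adding all the sinusoids back together"

assert np.allclose(f, f_reconstructed.real)
```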

A third axis can be added to extend the plot into three dimensions to indicate phase. The independent axis denotes the frequency \omega, one of the remaining axes serves as a real axis, and the other as an imaginary axis. Thus, every frequency \omega has its own Argand plane which can display complete phasor information. Just like before, a location on the \omega axis corresponds to a particular sinusoid frequency \omega_0 and carries with it a particular phasor F(\omega_0), which scales and shifts that sinusoid according to its contribution within f(t).

Drawing 3D Fourier transforms is not common. To display Fourier transform plots without losing phase information, the real and imaginary components of F(\omega) are plotted together, with the real part in green and the imaginary in purple. If phase isn't important, a single plot of the modulus of F(\omega) can be displayed since this would just represent the amplitudes of contributing sinusoidal frequencies in f(t).

Example: Fourier Transform of sin(\omega_0 t)

The frequencies present in sin(\omega_0 t) should have non-zero phasors in Fourier space. The phasor F(\omega_0) multiplies the sinusoid e^{i\omega_0 t}. The amplitude of sine is 1. Sine is a cosine shifted to the right by a quarter cycle, so the \omega_0 sinusoid must be shifted to the right by \pi/2, which means the phasor must have a phase of -\frac{\pi}{2}. Since f(t) doesn't have an imaginary component, another frequency must also be included in sin(\omega_0 t): the negative frequency -\omega_0. If -\omega_0 is included and also shifted to the right by \pi/2, then the real components add constructively and the imaginary components cancel.

The phasor for \omega_0 is F(\omega_0) = \frac{1}{2} e^{\frac{-i\pi}{2}}, whereas the phasor for -\omega_0 is F(-\omega_0) = \frac{1}{2} e^{\frac{i\pi}{2}}. Sine has an amplitude of 1, so each sinusoid's real part must have an amplitude of 1/2 so they can constructively sum to 1.

Positive frequencies progress forward in t, and negative frequencies progress backward in t. Positive phase shifts pull the sinusoid progression backward, and negative phase shifts do the opposite. The Fourier transform of sine can be written as:

\begin{cases}
F(\omega_0) = \frac{1}{2} e^{\frac{-i\pi}{2}} \\
F(-\omega_0) = \frac{1}{2} e^{\frac{i\pi}{2}}
\end{cases}

Sine can be written in terms of complex exponentials according to this identity:

sin(\omega_0 t) = \frac{1}{2i} (e^{i\omega_0 t} - e^{-i\omega_0 t})

This is sine written explicitly as a sum of its individual Fourier components:

  • Two frequencies, positive and negative.

  • Each multiplied by its corresponding phasor.
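
A quick numerical check of the identity above (an illustrative NumPy sketch; the example frequency is arbitrary):

```python
import numpy as np

# Numerical check that sin(w0*t) equals its two Fourier components combined.
omega0 = 2 * np.pi * 4                     # arbitrary example frequency
t = np.linspace(0, 1, 1000)

lhs = np.sin(omega0 * t)
rhs = (np.exp(1j * omega0 * t) - np.exp(-1j * omega0 * t)) / (2j)

assert np.allclose(lhs, rhs.real)          # the real parts reproduce sine
assert np.allclose(rhs.imag, 0)            # the imaginary parts cancel
```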

Properties of Fourier Transforms

For an entirely real function, the Fourier transform must be Hermitian

F(-\omega) = F^*(\omega)

where * denotes the complex conjugate. This is because the imaginary components must sum to 0. A real function contains one piece of information per input t, while a Fourier transform contains two pieces of information (magnitude and phase) for every \omega, so this redundancy makes sense.
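
An illustrative NumPy check of the Hermitian property, using the discrete Fourier transform of an arbitrary real signal:

```python
import numpy as np

# For an entirely real signal, the discrete Fourier transform is Hermitian:
# the component at -w is the complex conjugate of the component at +w.
rng = np.random.default_rng(0)
f = rng.standard_normal(256)          # any real-valued signal will do

F = np.fft.fft(f)
N = len(F)
for k in range(1, N):
    # In FFT indexing, frequency -k lives at index N - k.
    assert np.isclose(F[k], np.conj(F[N - k]))
```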

Using this property, useful relationships between functions and their Fourier transforms can be devised:

  • Real functions have Hermitian Fourier transforms.

  • Hermitian functions have real Fourier transforms.

  • Even functions have real Fourier transforms.

  • Odd functions have imaginary Fourier transforms.

Dirac Delta Distribution

Consider the following function: g(t) is 0 everywhere, except between -\frac{1}{2A} and \frac{1}{2A} where it has a constant amplitude of A. The area underneath this curve would be:

Area = A * \frac{1}{A} = 1

If A is quadrupled, the function is a quarter as wide but 4x taller, giving an area under the curve of 1. If A goes to infinity, the function is an infinitely narrow, infinitely high spike, but the area underneath it is still 1. This is called a Dirac delta distribution, \delta(t). It is defined by the property that if you integrate through the spike, you'll get the area underneath the spike. So:

\int_{-\infty}^{\infty} \delta(t) dt = 1

Of course, the spike can be placed at whatever time t = a you wish, giving \delta(t-a). Extending this property, if any function f(t) is multiplied by the Dirac delta \delta(t) and integrated, you get back the value of f(t) at the location of the spike:

\int_{-\infty}^{\infty} f(t) \delta(t) dt = f(0)

So the Fourier transform of sine has phasors that are Dirac delta spikes with areas underneath the spikes of 1/2. The delta spikes account for the fact that sine and cosine extend infinitely in time.
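
A rough numerical sketch of the sifting property, approximating the delta with a narrow rectangular pulse of unit area (the grid spacing, pulse height, and test function are arbitrary):

```python
import numpy as np

# A pulse of width 1/A and height A has unit area; for large A, integrating
# f(t) against it picks out (approximately) f(0).
A = 1e4
dt = 1e-6
t = np.arange(-1, 1, dt)

delta_approx = np.where(np.abs(t) < 1 / (2 * A), A, 0.0)
f = np.cos(3 * t) + 0.5 * t**2          # arbitrary smooth test function, f(0) = 1

print(np.sum(delta_approx) * dt)        # area under the spike, approximately 1
print(np.sum(f * delta_approx) * dt)    # approximately f(0) = 1
```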

Common Fourier Transform Pairs

  • Decaying exponential and the Lorentzian distribution.

  • Square pulse and the sinc function.

  • Gaussians.

  • Fourier shift theorem.

Decaying Exponential

Models the Free Induction Decay (FID) in NMR. Write the decay function as:

f(t) = e^{-\alpha t}

where \alpha is a positive real number and f(t) is defined on the interval t \in [0, \infty), with t = 0 marking the beginning of the resonance phenomenon.

The Fourier transform of this function is:

F(\omega) = \int_{0}^{\infty} f(t) e^{-i\omega t} dt = \int_{0}^{\infty} e^{-\alpha t} e^{-i\omega t} dt

= \int_{0}^{\infty} e^{-(\alpha + i\omega)t} dt = \frac{-1}{\alpha + i\omega} \left[ e^{-(\alpha + i\omega)t} \right]_{0}^{\infty}

= \frac{-1}{\alpha + i\omega} [0 - 1] = \frac{1}{\alpha + i\omega}

This is the complex form of what's called the Lorentzian distribution. Multiplying the top and bottom by the complex conjugate of the denominator yields:

F(\omega) = \frac{1}{\alpha + i\omega} * \frac{\alpha - i\omega}{\alpha - i\omega} = \frac{\alpha - i\omega}{\alpha^2 + \omega^2} = \frac{\alpha}{\alpha^2 + \omega^2} - i \frac{\omega}{\alpha^2 + \omega^2}

The real term has a single peak at \omega = 0 with amplitude \frac{1}{\alpha}; this is called the absorption Lorentzian. The imaginary part is essentially the real part scaled by -\frac{\omega}{\alpha} and is called the dispersion Lorentzian.
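
An illustrative numerical check (NumPy, with an arbitrary \alpha) that the Fourier transform of the decaying exponential matches the complex Lorentzian:

```python
import numpy as np

# Compare the numerically integrated transform of e^{-alpha*t} (t >= 0)
# with the analytic complex Lorentzian 1/(alpha + i*omega).
alpha = 2.0
dt = 1e-4
t = np.arange(0, 50, dt)                # long enough that the decay is complete
f = np.exp(-alpha * t)

for omega in [-5.0, 0.0, 1.0, 3.0]:     # a few arbitrary test frequencies
    numeric = np.sum(f * np.exp(-1j * omega * t)) * dt
    analytic = 1 / (alpha + 1j * omega)
    assert abs(numeric - analytic) < 1e-3
```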

A position on the \omega axis corresponds to a particular sinusoid frequency. The real and imaginary values at that position form F(\omega), the complex phasor which precisely scales and shifts that sinusoid according to its contribution to f(t).

The DC offset occurs at \omega = 0. If 0 is plugged into the complex exponential, it just becomes 1, and we're left with just F(0), which is a complex constant. DC stands for direct current; the nomenclature comes from electronics, where voltage in direct current does not oscillate like alternating current (AC) but is constant. The DC offset is just a constant which shifts the entire function f(t) up or down the y-axis. Every other frequency \omega not equal to zero would be a form of AC.

A larger \alpha results in a faster decay in f(t) and a corresponding increase in the width of the Lorentzian peak. The rate at which an FID decays reflects the amount of time over which spins remain coherent with each other following excitation, characterized by the time constant T2 (or T2* when considering field inhomogeneity), so \alpha is just 1/T2. A longer T2 corresponds to a narrower Lorentzian peak, and a shorter T2 to a wider Lorentzian peak.

The Lorentzian can be thought of as a sort of histogram specifying the relative number of spins which have a given \omega. If the spins dephase very quickly, the resultant Lorentzian is broad and could confound your ability to resolve peaks of nuclei with neighboring Larmor resonances, such as protons in water vs protons in fat.

Square Pulse

The square pulse is a function that is a constant A over some time τ and is symmetric about t=0.

The Fourier transform is:

F(\omega) = \int_{-\frac{\tau}{2}}^{\frac{\tau}{2}} A e^{-i\omega t} dt = A \frac{1}{-i\omega} \left[ e^{-i\omega t} \right]_{-\frac{\tau}{2}}^{\frac{\tau}{2}}

= \frac{A}{-i\omega} \left[ e^{-i\omega \frac{\tau}{2}} - e^{i\omega \frac{\tau}{2}} \right] = A \frac{e^{i\omega \frac{\tau}{2}} - e^{-i\omega \frac{\tau}{2}}}{i\omega} = A \frac{2 \sin(\omega \frac{\tau}{2})}{\omega} = A \tau \, sinc(\omega \frac{\tau}{2})

where the sinc function is defined as:

sinc(x) = \frac{sin(x)}{x}

There exists a relationship between the duration of the square pulse \tau and the frequency band \Omega spanned by the sinc function's middle lobe:

\tau * \Omega = 4\pi

Longer pulse durations will excite a narrower band of frequencies, and shorter pulses will excite a wider band of frequencies.
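
A numerical sketch of the square-pulse/sinc pair and the \tau \Omega = 4\pi relationship (arbitrary example values for A and \tau):

```python
import numpy as np

# Numerically transform a square pulse of height A and width tau and compare
# against A*tau*sinc(omega*tau/2); the central lobe spans Omega = 4*pi/tau.
A, tau = 1.0, 0.5
dt = 1e-4
t = np.arange(-tau / 2, tau / 2, dt)

omega = np.linspace(-100, 100, 2001)
F = np.array([np.sum(A * np.exp(-1j * w * t)) * dt for w in omega])

# np.sinc(x) is sin(pi*x)/(pi*x), so sin(w*tau/2)/(w*tau/2) = np.sinc(w*tau/(2*pi)).
analytic = A * tau * np.sinc(omega * tau / (2 * np.pi))
assert np.allclose(F.real, analytic, atol=1e-3)

Omega = 4 * np.pi / tau                 # first zeros of the sinc sit at +/- 2*pi/tau
print(tau * Omega)                      # the product is always 4*pi
```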

Gaussian

The Fourier transform of a Gaussian is a Gaussian. A broad Gaussian in the time domain yields a narrow Gaussian in the frequency domain and vice versa.
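
An illustrative check that a Gaussian transforms into a Gaussian, using an arbitrary width \sigma (the analytic transform of e^{-t^2/(2\sigma^2)} is \sigma\sqrt{2\pi}\, e^{-\sigma^2\omega^2/2}, so a narrow time-domain Gaussian gives a broad frequency-domain Gaussian):

```python
import numpy as np

# Numerically transform a Gaussian and compare with the analytic Gaussian result.
dt = 1e-3
t = np.arange(-10, 10, dt)
sigma = 0.5                             # arbitrary example width
f = np.exp(-t**2 / (2 * sigma**2))

omega = np.linspace(-20, 20, 401)
F = np.array([np.sum(f * np.exp(-1j * w * t)) * dt for w in omega])

analytic = sigma * np.sqrt(2 * np.pi) * np.exp(-sigma**2 * omega**2 / 2)
assert np.allclose(F.real, analytic, atol=1e-3)   # width in omega scales as 1/sigma
```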

Fourier Shift Theorem

If f(t) is multiplied by a sinusoid of frequency \omega_0, e^{i\omega_0 t}, its entire Fourier transform shifts over by \omega_0.

Engineers might call the f(t) curve the envelope function which contains the carrier frequency, \omega_0. This is just a Fourier shift.

Electronically, these shifts are accomplished through heterodyning - just like a radio!
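
A minimal NumPy sketch of the shift theorem in its discrete form: multiplying by a complex exponential rolls the FFT over by the corresponding number of bins (the envelope and shift are arbitrary examples):

```python
import numpy as np

# Fourier shift theorem: multiplying f(t) by e^{i*w0*t} shifts its spectrum by w0.
n = 1024
t = np.arange(n)                                      # unit time steps
envelope = np.exp(-0.5 * ((t - n / 2) / 50) ** 2)     # a Gaussian envelope

k0 = 100                                              # shift, in FFT bins (cycles per record)
shifted = envelope * np.exp(2j * np.pi * k0 * t / n)  # "carrier" multiplication

F_env = np.fft.fft(envelope)
F_shift = np.fft.fft(shifted)

# The shifted spectrum is the original spectrum rolled over by k0 bins.
assert np.allclose(F_shift, np.roll(F_env, k0))
```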

Digital Signal Processing

In digital signal processing:

  • A continuous signal is digitized.

  • Values are recorded and stored at regular temporal spacing \Delta t, where \Delta t is called the dwell time.

  • The spectrometer operates with a particular sample rate: the number of samples or points it will collect in a given amount of time, where the sample rate is the inverse of dwell time.

  • The analog to digital converter (ADC) does not have infinite precision and will further cast its measurements into signal amplitude bins. The number of bins available to the ADC is determined by the bit depth, where 2 raised to the number of bits determines the number of available levels (see the sketch after this list).

  • Higher or lower sample rates result in more or fewer acquired points in time.

  • Higher or lower bit depths result in more or less precision in the measured values.
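
An illustrative sketch of digitization (arbitrary dwell time and bit depth), mapping an analog signal into 2^bits amplitude bins:

```python
import numpy as np

# ADC sketch: sample at a fixed dwell time, then quantize into 2**bit_depth levels.
dwell_time = 1e-3                       # seconds between samples
sample_rate = 1 / dwell_time            # 1000 samples per second
bit_depth = 4                           # 2**4 = 16 available amplitude bins
n_levels = 2 ** bit_depth

t = np.arange(0, 0.05, dwell_time)
signal = np.sin(2 * np.pi * 40 * t)     # analog signal, assumed to span [-1, 1]

# Map [-1, 1] onto integer bins 0..n_levels-1: the digitized record.
digitized = np.round((signal + 1) / 2 * (n_levels - 1)).astype(int)
print(digitized[:10])
```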

The most important consequence of the sample rate is that there is a maximum frequency which the spectrometer can accurately record.

Aliasing

As the input signal frequency increases, eventually we reach a point where the frequency information is lost in the sampled data, and in fact, the sampled data looks as though it were recording a lower frequency than it actually is. This is called aliasing and it can be avoided by employing the Nyquist criterion:

  • Acquire at least 2 data points per cycle of the highest frequency you want to record.

  • Make sure your sample rate is at least twice as high as your max frequency.

Given a defined sample rate, the maximum frequency which can be recorded unaliased is half the sample rate and is called the Nyquist frequency.
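
A small NumPy demonstration of aliasing: sampled at 100 Hz, a 70 Hz cosine is point-for-point identical to a 30 Hz cosine (the values are chosen purely for illustration):

```python
import numpy as np

# Aliasing sketch: sampled below the Nyquist rate, a high frequency is
# indistinguishable from a lower ("aliased") one.
sample_rate = 100.0                    # Hz
nyquist = sample_rate / 2              # 50 Hz
t = np.arange(0, 1, 1 / sample_rate)

f_true = 70.0                          # above the Nyquist frequency
f_alias = sample_rate - f_true         # folds back to 30 Hz

high = np.cos(2 * np.pi * f_true * t)
low = np.cos(2 * np.pi * f_alias * t)

# The two sampled signals are identical point for point.
assert np.allclose(high, low)
```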

Having a super high bit depth doesn't help if you don't properly amplify your signal before it reaches the ADC. If your signal is too weak, then you're not taking advantage of all the bits available to you, and if it's too strong, the signal can only be cast into the highest and lowest bins, resulting in clipping and the introduction of unwanted harmonics in your Fourier spectrum.

Fast Fourier Transform (FFT)

The fast Fourier transform (FFT) is an algorithm that computes discrete Fourier transforms very quickly. The number of points in the FFT output will equal the number of points from the input. There is a specific order to the returned vector:

  • DC offset.

  • Positive frequencies increasing from left to right.

  • Nyquist frequency in the middle.

  • Negative frequencies, decreasing in magnitude from left to right.

The units of frequency in FFTs are 'cycles per field of view (FOV)'. If the independent axis is time, the FOV would be the duration of acquisition; if it's space, then it would be the full distance being sampled over, etc. The DC offset is 0 cycles per field of view. The Nyquist frequency is half of the sampling frequency. A minimum of 2 points per full cycle is needed.
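
For reference, NumPy's fftfreq shows this ordering directly (an 8-point example, expressed in cycles per field of view):

```python
import numpy as np

# FFT frequency ordering: DC first, increasing positive frequencies, the
# Nyquist bin, then the negative frequencies.
n = 8
print(np.fft.fftfreq(n) * n)
# -> [ 0.  1.  2.  3. -4. -3. -2. -1.]   (the -4 bin is the Nyquist frequency)
```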

K-Space

K-space is a 2D plane of possible frequencies kx and ky. The Fourier function is F(kx, ky). The location on the kx-ky plane corresponds to a particular 2D sinusoid frequency. A Fourier transform of a function of space would yield a function of spatial frequencies carrying units of inverse distance such as radians per meter.

Kernel for 2D sinusoids:

e^{i kx x} * e^{i ky y} = e^{i (kx x + ky y)}

The 2D function f(x,y) will increase in phase, and thus sinusoidal wiggles, according to the sum of phases kx x and ky y. These individual 2D sinusoid kernels are called 'spatial frequencies'. Low spatial frequencies have few wiggles per unit space, and high spatial frequencies have many wiggles per unit space.

Our location in k-space dictates a certain 2D sinusoid or spatial frequency where the progression of phase now has a 2D direction. The higher kx, the more wiggles in the x direction; the higher ky, the more wiggles in the y direction. Each point in k-space contains a complex phasor which serves the same purpose of scaling the amplitude and shifting the phase of our 2D sinusoid.

Images are 2D functions of space, f(x,y). The image f(x,y) is the elementwise sum of all the scaled and shifted 2D sinusoids dictated by the k-space phasor function. The process of adding all the scaled/shifted sinusoids together is a double integral over kx and ky.
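
A discrete sketch of this reconstruction with NumPy: the 2D inverse FFT sums all the scaled and shifted spatial frequencies back into the image (the random "image" below is just a stand-in):

```python
import numpy as np

# An image is recovered by summing all of its scaled/shifted 2D sinusoids,
# which numerically is just the inverse 2D FFT of its k-space.
rng = np.random.default_rng(0)
image = rng.random((64, 64))                 # stand-in for an MR image, f(x, y)

kspace = np.fft.fft2(image)                  # one phasor per (kx, ky) spatial frequency
reconstructed = np.fft.ifft2(kspace)         # the "double integral" over kx and ky

assert np.allclose(image, reconstructed.real)
```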

Frequencies near the center of k-space seem to have larger phasors than those near the edge. The very center frequency at kx=0, ky=0 is the DC offset and is generally orders of magnitude higher than most of the other frequencies. Since the pixel values of images are all higher than 0, the entire image rests at an average pixel value much greater than 0.

If f(x,y) is continuous and infinite, then k-space will also be continuous and infinite. Since the discussion is focused on images (discrete functions of space), k-space is a discrete function of spatial frequencies.

K-Space is shown as a grayscale image where the brightness of a voxel corresponds to that phasor's magnitude. For display, the logarithm or fractional root of the magnitude might be performed first - otherwise, all the frequencies would be dwarfed in brightness by the DC offset. Phase is generally ignored in these displays.
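
An illustrative sketch of how such a display might be produced (log-compressed magnitude of the shifted 2D FFT; the random image is again a stand-in):

```python
import numpy as np

# Typical k-space display: magnitude only, log-compressed so the huge DC term
# at the center doesn't dwarf every other spatial frequency.
rng = np.random.default_rng(1)
image = rng.random((128, 128)) + 1.0          # all-positive pixels -> large DC offset

kspace = np.fft.fftshift(np.fft.fft2(image))  # shift the DC term to the center of the grid
display = np.log1p(np.abs(kspace))            # phase is discarded for display
print(display.shape, np.abs(kspace).max())
```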

K-Space Details

A point in k-space can be described by a vector from the k-space origin to the point of interest. The length of the vector reflects the spatial frequency, or number of cycles per distance, and the direction of the vector corresponds to the direction of the fringe progression.

For real functions, the Fourier transform must be symmetric about k=0 because necessarily the imaginary components must add to 0. k-Space of real functions is Hermitian. If the image is entirely real, necessarily reflected positive and negative frequency components will sort of 'counter-mirror' each other; they are each other's complex conjugates.

The center of k-space contains low frequency sinusoids. Information regarding gradual changes in intensity in the image is largely contained here. The high frequency information closer to the Nyquist edges