Modulation Systems EMS310 - Ch 6 Sampling and A/D Conversion
Sampling Theorem
- Consider a signal g(t) band-limited to B Hz, meaning its Fourier transform G(f) = 0 for |f| > B.
- The signal g(t) can be perfectly reconstructed from discrete time samples taken at a rate of R samples per second, where R ≥ 2B.
- The minimum sampling rate for perfect signal recovery is f_s = 2B Hz, known as the Nyquist rate.
Sampling Theorem - Proof
- The sampled signal ḡ(t) is given by ḡ(t) = g(t)\,δ_{T_s}(t) = ∑_n g(nT_s)\,δ(t − nT_s).
- The Fourier series expansion of the periodic impulse train δ_{T_s}(t) is δ_{T_s}(t) = \frac{1}{T_s} ∑_{n=−∞}^{∞} e^{jnω_s t}, where ω_s = \frac{2π}{T_s} = 2πf_s.
- Therefore, ḡ(t) = g(t)\,δ_{T_s}(t) = \frac{1}{T_s} ∑_{n=−∞}^{∞} g(t)\,e^{jn2πf_s t}.
- Taking the Fourier transform of ḡ(t) yields Ḡ(f) = \frac{1}{T_s} ∑_{n=−∞}^{∞} G(f − nf_s).
- This implies that Ḡ(f) consists of G(f), scaled by the constant \frac{1}{T_s} and repeated periodically with period f_s = \frac{1}{T_s}.
Sampling Theorem - Question
- Can g(t) be reconstructed from ḡ(t) without loss or distortion?
- Perfect recovery is possible if there is no overlap between replicas of G(f), i.e., if f_s \geq 2B (Nyquist rate for g(t)).
- The sampling interval must therefore satisfy T_s \leq \frac{1}{2B}; the limiting value T_s = \frac{1}{2B} is the Nyquist interval for g(t).
- Note: for a sinusoid at exactly f = B, the strict inequality f_s > 2B is required, since sampling at exactly 2B can place every sample on a zero crossing.
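The borderline case can be checked numerically; a minimal sketch (B is an arbitrary illustrative value) showing that sampling sin(2πBt) at exactly f_s = 2B lands every sample on a zero crossing, so the sinusoid vanishes from the samples:

```python
import numpy as np

B = 100.0                # sinusoid frequency in Hz (illustrative value)
fs = 2 * B               # sampling at exactly the Nyquist rate
n = np.arange(8)         # sample indices
samples = np.sin(2 * np.pi * B * n / fs)   # sin(pi*n): every sample is zero
peak = np.max(np.abs(samples))
```

All samples are (numerically) zero, so no reconstruction scheme can recover the sinusoid from them.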
Signal reconstruction from sampled signal
- The process of reconstructing a continuous signal from its sampled version is called interpolation.
- The sampled signal is: ḡ(t) = g(t)δ{Ts}(t) = ∑n g(nTs)δ(t − nT_s).
- The signal g(t) can be recovered by passing ḡ(t) through an ideal low-pass filter (LPF) given by H(f) = T_s\,Π(\frac{ω}{4πB}) = T_s\,Π(\frac{f}{2B}).
Ideal reconstruction
- The impulse response of the ideal LPF is h(t) = 2BT_s \text{sinc}(2πBt).
- Assuming sampling at the Nyquist rate, 2BT_s = 1, then h(t) = \text{sinc}(2πBt).
Ideal reconstruction (2)
- The output of the ideal LPF is the convolution of the sampled impulse train with h(t) = 2BT_s\,\text{sinc}(2πBt).
- Assuming sampling at the Nyquist rate, 2BT_s = 1, then
g(t) = ∑_k g(kT_s)\,h(t − kT_s)
= ∑_k g(kT_s)\,\text{sinc}[2πB(t − kT_s)]
= ∑_k g(kT_s)\,\text{sinc}(2πBt − kπ)
- This is the interpolation formula: a weighted sum of shifted sinc functions.
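The interpolation formula translates directly into code; a sketch using NumPy (note that np.sinc(x) = sin(πx)/(πx), so np.sinc(2Bt − k) matches sinc(2πBt − kπ) in the notes' notation):

```python
import numpy as np

def sinc_interp(samples, Ts, t):
    """Reconstruct g(t) from Nyquist-rate samples g(k*Ts) using
    g(t) = sum_k g(k*Ts) * sinc(2*pi*B*t - k*pi), with 2B = 1/Ts."""
    k = np.arange(len(samples))
    # np.sinc((t - k*Ts)/Ts) equals sinc(2*pi*B*t - k*pi) above
    kernel = np.sinc((t[None, :] - k[:, None] * Ts) / Ts)
    return samples @ kernel
```

At the sample instants the formula returns the samples exactly, since sinc equals 1 at zero and 0 at all other integers.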
Example 6.1
- Find a signal g(t) band-limited to B Hz whose samples are: g(0) = 1 and g(±Ts) = g(±2Ts) = g(±3Ts) = ··· = 0, where the sampling interval Ts is the Nyquist interval for g(t), i.e., T_s = \frac{1}{2B}.
- Using the interpolation formula, since all but one of the Nyquist samples are zero, we have: g(t) = \text{sinc}(2πBt).
Practical Signal Reconstruction
- Ideal reconstruction requires a non-causal unrealizable filter (sinc impulse response).
- For practical applications, we need to implement realizable reconstruction systems to reconstruct a continuous-time (CT) signal from a uniform discrete-time (DT) sampled signal.
- The reconstruction pulse p(t) needs to be easy to generate.
- We need to determine how accurate the reconstructed signal using p(t) is.
Practical Signal Reconstruction - Accuracy
- The reconstructed signal approximation using p(t) is given by g̃(t) = ∑_n g(nT_s)\,p(t − nT_s).
- From the convolution property, g̃(t) = p(t) ∗ \Big[∑_n g(nT_s)\,δ(t − nT_s)\Big] = p(t) ∗ ḡ(t).
- In the frequency domain, the reconstructed signal is G̃(f) = P(f)\,\frac{1}{T_s} ∑_n G(f − nf_s).
Practical Signal Reconstruction - Equalization
- An equalizer E(f) is used to obtain distortionless reconstruction, such that G(f) = E(f)\,G̃(f) = E(f)\,P(f)\,\frac{1}{T_s} ∑_n G(f − nf_s).
- All replicas in \frac{1}{T_s} ∑_n G(f − nf_s) must be removed except the baseband one (n = 0), i.e., E(f)P(f) = 0 for |f| > f_s − B.
- Also, distortionless reconstruction requires that E(f)P(f) = T_s for |f| < B.
- Note: E(f) must be a lowpass filter acting as the inverse of P(f) in the passband.
Special Case - Rectangular Reconstruction Pulse
- Consider a rectangular reconstruction pulse p(t) = Π(\frac{t − 0.5T_p}{T_p}).
- The reconstructed signal before equalization takes the form g̃(t) = ∑_n g(nT_s)\,Π(\frac{t − nT_s − 0.5T_p}{T_p}).
Special Case - Rectangular Reconstruction Pulse (2)
- The transfer function of the pulse is P(f), the Fourier transform of Π(·), i.e., P(f) = Tp \text{sinc}(πfTp)e^{−jπfT_p}.
- As such, the equalizer frequency response should satisfy:
E(f) = \begin{cases} \frac{T_s}{P(f)} & \text{if } |f| \leq B \\ \text{flexible} & \text{if } B < |f| < \frac{1}{T_s} − B \\ 0 & \text{if } |f| > \frac{1}{T_s} − B \end{cases}
Special Case - Rectangular Reconstruction Pulse (3)
- To make the equalizer passband response realizable, a time delay is added: E(f) = T_s · \frac{πf}{\sin(πfT_p)}\,e^{−j2πft_0} for |f| < B.
- For the passband to be well defined (E(f) must not become infinite at zeros of P(f)), T_p must be short enough that \sin(πfT_p) \neq 0 for |f| < B.
- This equivalently requires that T_p < \frac{1}{B}.
- In practice, T_p can be made very small, so that \sin(πfT_p) ≈ πfT_p and E(f) = T_s · \frac{πf}{\sin(πfT_p)} ≈ \frac{T_s}{T_p} for |f| < B.
- This means that very little distortion remains when very short rectangular pulses are used.
Special Case - Rectangular Reconstruction Pulse (4)
- When Tp = Ts, you get a zero-order hold filter or staircase interpolation.
- First-order hold is linear interpolation.
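Both hold operations can be sketched with NumPy (the sample values here are arbitrary illustrative numbers):

```python
import numpy as np

samples = np.array([0.0, 1.0, 0.5, -0.2])   # hypothetical Nyquist samples
up = 4                                      # output points per sample interval

# Zero-order hold: hold each sample constant for one interval (staircase)
zoh = np.repeat(samples, up)

# First-order hold: straight-line (linear) interpolation between samples
t_fine = np.arange(len(samples) * up) / up
foh = np.interp(t_fine, np.arange(len(samples)), samples)
```

The staircase output jumps at each sampling instant, while the first-order hold connects successive samples linearly.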
Realizability of Reconstruction Filters
- Implication of the Paley-Wiener criterion: an ideal (brick-wall) filter is unrealizable, so practical reconstruction filters can only approximate it.
Problem of Aliasing
- All practical signals are time-limited.
- This implies that all practical signals have infinite bandwidth!
- All replicas of G(f) in Ḡ(f) will overlap.
- This can be fixed using an anti-aliasing filter.
Anti-aliasing
- An anti-aliasing filter is applied before sampling to limit the bandwidth of the signal, preventing overlap of the replicas in the frequency domain after sampling.
- Remember: two pieces of information per second per Hz of bandwidth!
- Self-study: Read Section 6.1.3 in detail!
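Without an anti-aliasing filter, an out-of-band component folds back into baseband; a small sketch of where such a component lands (frequencies are illustrative values):

```python
def aliased_frequency(f, fs):
    """Apparent (folded) frequency of a sinusoid at f Hz after sampling
    at fs Hz with no anti-aliasing filter: the spectral replica of f
    that falls closest to baseband."""
    return abs(f - round(f / fs) * fs)
```

For example, a 7 kHz tone sampled at 8 kHz appears as a 1 kHz tone, indistinguishable from a genuine 1 kHz component.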
Nonideal Practical Sampling Analysis
- The sampler does NOT take an instantaneous sample but averages over a short time period.
- Self-study: Read Section 6.1.4 in detail!
Applications of the Sampling Theorem
- Sampling is powerful, allowing us to replace a continuous-time (CT) signal with a discrete-time sequence of numbers.
- Allows for the transmission of CT signals using a sequence of numbers (or pulse trains).
- Several ways in which pulses can be modulated:
- Pulse Amplitude Modulation (PAM)
- Pulse Width Modulation (PWM)
- Pulse Position Modulation (PPM)
- Pulse Code Modulation (PCM) - most important today
- Pulse modulation allows for the simultaneous transmission of several signals, a technique called Time Division Multiplexing (TDM).
Time Division Multiplexing
- Time Division Multiplexing (TDM) involves interleaving samples from multiple signals for simultaneous transmission.
Pulse Code Modulation
- PCM involves sampling, quantizing, and encoding a signal into a digital bit stream.
Advantages of Digital Communication Techniques
- Digital communication can withstand channel noise and distortion much better than analog, within limits
- Regenerative repeaters can be used
- Digital hardware is flexible and permits the use of microprocessors, digital switching, and large-scale integrated circuits
- Digital signals can be encoded to yield extremely low error rates
- It is easier to multiplex digital signals
- Digital communication is more efficient than analog at exchanging SNR for bandwidth
- Digital signal storage is easy and inexpensive
- Digital messages can be reproduced extremely reliably and without deterioration
- The cost of digital hardware halves every two or three years
Quantizing
- Assume the message signal m(t) remains in the range [−m_p, m_p].
- Anything outside this range is simply clipped off.
- The range [−mp, mp] is divided into L uniformly spaced intervals with size ∆v = \frac{2m_p}{L}.
- Two types of error at the receiver: quantization errors (from having discrete intervals) and pulse errors from incorrectly detecting pulses.
- The latter is small and can be ignored.
Quantization Noise
- If m(kT_s) is the kth sample of m(t), and m̂(kT_s) is the kth quantized sample, then according to the interpolation formula:
m(t) = ∑_k m(kT_s)\,\text{sinc}(2πBt − kπ)
m̂(t) = ∑_k m̂(kT_s)\,\text{sinc}(2πBt − kπ)
where m̂(t) is the signal reconstructed from the quantized samples.
- Consider the distortion signal q(t) = m̂(t) − m(t); then
q(t) = ∑_k [m̂(kT_s) − m(kT_s)]\,\text{sinc}(2πBt − kπ)
= ∑_k q(kT_s)\,\text{sinc}(2πBt − kπ)
- The signal q(t) is known as the quantization noise!
Quantization Noise Power
- The power or mean square value of the quantization noise is:
\overline{q^2(t)} = \lim_{T→∞} \frac{1}{T} ∫_{−T/2}^{T/2} q^2(t)\,dt
= \lim_{T→∞} \frac{1}{T} ∫_{−T/2}^{T/2} \Big[∑_k q(kT_s)\,\text{sinc}(2πBt − kπ)\Big]^2 dt
- According to Prob. 3.7-4, the signals \text{sinc}(2πBt − mπ) and \text{sinc}(2πBt − nπ) are orthogonal; hence,
∫_{−∞}^{∞} \text{sinc}(2πBt − mπ)\,\text{sinc}(2πBt − nπ)\,dt = \begin{cases} 0 & \text{if } m \neq n \\ \frac{1}{2B} & \text{if } m = n \end{cases}
Quantization Noise Power (2)
- Because of this orthogonality, the cross-product terms resulting from the squaring are zero; thus:
\overline{q^2(t)} = \lim_{T→∞} \frac{1}{T} ∫_{−T/2}^{T/2} ∑_k q^2(kT_s)\,\text{sinc}^2(2πBt − kπ)\,dt
= \lim_{T→∞} \frac{1}{T} ∑_k q^2(kT_s) ∫_{−T/2}^{T/2} \text{sinc}^2(2πBt − kπ)\,dt
- From the orthogonality relationship above,
\overline{q^2(t)} = \lim_{T→∞} \frac{1}{2BT} ∑_k q^2(kT_s)
- Since the Nyquist sampling rate is 2B, there are 2BT samples in T seconds; the expression above is therefore the average (mean) of the squared quantization error.
Quantization Noise Power (3)
- A quantized sample value is at the midpoint of an interval with size ∆v = \frac{2m_p}{L}.
- As such, the quantization error lies in the range [−\frac{∆v}{2}, \frac{∆v}{2}]; the maximum quantization error is ±\frac{∆v}{2}.
- Assuming that the error is equally likely in the interval [−\frac{∆v}{2}, \frac{∆v}{2}], we average the squared quantization error through:
\overline{q^2} = \frac{1}{∆v} ∫_{−∆v/2}^{∆v/2} q^2\,dq = \frac{(∆v)^2}{12} = \frac{m_p^2}{3L^2}
- Since \overline{q^2(t)} is the quantization noise power, we have \overline{q^2(t)} = N_q = \frac{m_p^2}{3L^2}.
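The (∆v)²/12 result can be checked by simulation; a sketch assuming test samples uniformly distributed over the full range (m_p, L, and the sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
mp, L = 1.0, 64                    # range [-mp, mp], L quantization levels
dv = 2 * mp / L                    # step size

m = rng.uniform(-mp, mp, 100_000)  # test samples filling the range
# Uniform quantizer: map each sample to the midpoint of its interval
mq = np.clip(np.floor(m / dv) * dv + dv / 2, -mp + dv / 2, mp - dv / 2)

noise_power = np.mean((mq - m) ** 2)
theory = dv ** 2 / 12              # equals mp**2 / (3 * L**2)
```

The measured noise power agrees with (∆v)²/12 to within sampling error.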
Signal to Quantization Noise Ratio
- Assuming a negligible pulse detection error, the reconstructed signal m̂(t) at the receiver output is m̂(t) = m(t) + q(t).
- Given a signal power S_o = \overline{m^2(t)} and a quantization noise power N_o = N_q = \frac{m_p^2}{3L^2}, the signal-to-quantization-noise ratio (SQNR) is given by
\frac{S_o}{N_o} = \frac{3L^2\,\overline{m^2(t)}}{m_p^2}
- Thus, the SQNR is a linear function of the signal power \overline{m^2(t)}, but a quadratic function of the number of quantization levels L.
- Ideally, we would like a constant SQNR for all signal powers.
- Speech can vary by as much as 40 dB in power (a ratio of 10^4).
- Noise will significantly increase for a soft speaker.
- Smaller amplitudes dominate in speech, and as such, will be at a poor SQNR most of the time.
- At the root of the problem is uniform quantization steps of ∆v = \frac{2m_p}{L}.
- On the other hand, N_q = \frac{(∆v)^2}{12} - proportional to the square of the step size.
- One solution: the smaller the signal, the smaller the step sizes (compression).
- A compressor maps input signal increments ∆m into larger increments ∆y when the message signal is small and vice versa for large message signals.
Compression
- Two compression law standards are defined by the ITU.
- The µ-law curve (for positive amplitudes) is given by y = \frac{1}{\ln(1 + µ)} \ln\left(1 + µ\,\frac{m}{m_p}\right) for 0 ≤ \frac{m}{m_p} ≤ 1.
- The A-law curve (for positive amplitudes) is given by:
y = \begin{cases} \frac{A}{1 + \ln A}\,\frac{m}{m_p} & 0 \leq \frac{m}{m_p} \leq \frac{1}{A} \\ \frac{1}{1 + \ln A}\left(1 + \ln\left(A\,\frac{m}{m_p}\right)\right) & \frac{1}{A} \leq \frac{m}{m_p} \leq 1 \end{cases}
- Compressed signals must be expanded at the output of the PCM channel.
- The compressor and expander together constitute a compandor.
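A sketch of the µ-law compressor and its expander (the compandor pair), for normalized input m/m_p in [−1, 1]:

```python
import numpy as np

def mu_compress(x, mu=255.0):
    """mu-law compressor: y = ln(1 + mu*|x|) / ln(1 + mu), sign preserved."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_expand(y, mu=255.0):
    """Expander: the inverse of the compressor."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu
```

µ = 255 is the value used in North American telephony; applying the expander after the compressor recovers the original amplitude, which is what makes the pair a compandor.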
µ- and A-law curves
- When a µ-law compandor is used, the output SQNR is: \frac{S_o}{N_o} = \frac{3L^2}{[\ln(1 + µ)]^2}\,\frac{\overline{m^2(t)}}{m_p^2}
PCM Encoding
- A PAM output is applied to the input of the encoder.
- A digit-at-a-time encoder makes n sequential comparisons to generate an n-bit codeword.
- The sample is compared sequentially against n reference voltages proportional to 2^{n−1}, 2^{n−2}, . . . , 2^2, 2^1, 2^0.
- The reference voltages are conveniently generated by a bank of resistors R, 2R, 2^2R, . . . , 2^nR.
- The first digit is 0 or 1, depending on whether the sample is in the lower or upper half of the range.
- The second digit is 0 or 1, depending on whether the sample is in the lower or upper half of the subinterval defined by the first digit, and so on. For example, the 8-bit word 10010110 represents the number 150.
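The comparison procedure is a binary search over the signal range; a sketch (the [0, 256) range is chosen so that the slide's example of 150 ↦ 10010110 comes out directly):

```python
def pcm_encode(sample, n, lo=0.0, hi=256.0):
    """Digit-at-a-time PCM encoder: n sequential comparisons, each asking
    whether the sample lies in the lower (0) or upper (1) half of the
    remaining interval."""
    bits = []
    for _ in range(n):
        mid = (lo + hi) / 2
        if sample >= mid:
            bits.append('1')
            lo = mid           # keep the upper half
        else:
            bits.append('0')
            hi = mid           # keep the lower half
    return ''.join(bits)
```

Each comparison halves the remaining interval, so n comparisons resolve the sample to one of 2^n levels.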
Transmission Bandwidth
- Binary PCM: Assign a group of bits to each of the L quantization levels.
- A sequence of n binary digits can assume 2^n values/patterns, where L = 2^n or n = \log_2 L.
- Each quantized sample is encoded into n bits.
- Assuming m(t) is bandlimited to B Hz, it requires a minimum of 2B samples per second or 2nB bits per second to transmit the corresponding PCM signal.
- A unit bandwidth (1 Hz) can transmit 2 pieces of information per second; therefore, we need a minimum channel bandwidth BT Hz, where BT = nB Hz.
- This is the theoretical minimum transmission bandwidth to transmit a PCM signal.
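A worked numeric instance (a 4 kHz message bandwidth with 256 levels is an illustrative choice, matching telephone-quality audio):

```python
B = 4000                  # message bandwidth in Hz (illustrative value)
L = 256                   # quantization levels
n = L.bit_length() - 1    # bits per sample: log2(256) = 8
bit_rate = 2 * B * n      # 2B samples/s * n bits/sample
BT = n * B                # theoretical minimum transmission bandwidth in Hz
```

With two pieces of information per second per Hz, the 64 kbit/s stream needs at least 32 kHz of channel bandwidth.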
Effect of Transmission Bandwidth on Output SNR
- We know that L^2 = 2^{2n}, so the output SNR can be expressed as \frac{S_o}{N_o} = c · 2^{2n}, where
c = \begin{cases} \frac{3\,\overline{m^2(t)}}{m_p^2} & \text{(uncompressed case)} \\ \frac{3}{[\ln(1 + µ)]^2} & \text{(compressed case)} \end{cases}
- Substituting n = \frac{B_T}{B} (from B_T = nB), we have \frac{S_o}{N_o} = c · 2^{\frac{2B_T}{B}}
- From the equation above, it is clear that the SNR (SQNR) increases exponentially with the transmission bandwidth B_T.
Digital Telephone Systems: PCM in T1
- Digital telephone systems using PCM in T1
History of Digital Systems
- PCM was invented 20 years before its implementation.
- Before transistors, the only available electronic switches were vacuum tubes.
- Vacuum tubes were prone to overheating.
- Transistors allowed for almost perfect electronic switching at low power.
- Digital telephony followed analog telephony; existing infrastructure was designed for voice audio (0-4kHz).
- The only way to use the existing infrastructure was to use regenerative repeaters 1.8km apart.
- Led to Bell Systems’ T1 carrier system.
- Designed for voice channels of 4 kHz bandwidth, T1 carried PCM signals at a bit rate of 1.544 Mbit/s.
T1 Carrier System Details
- Commutators are high-speed electronic switching circuits (switch between 24 8-bit channels).
- Sampling is performed by electronic gates (such as bridge diode circuits) opened periodically by narrow pulses of 2µs duration.
- The 1.544 Mbit/s T1 signal is called digital signal level 1 (DS1).
- The DS1 signal is further multiplexed into higher-level (bitrate) signals DS2, DS3, and DS4 (explained in the next section).
- Similar standards were later adopted in other countries (e.g., through the ITU-T).
T1 Synchronization and Signaling
- A codeword for each channel consists of 8 bits.
- The collection of all 24 channels’ codewords is called a frame.
- Each frame has 24 × 8 = 192 information bits.
- A framing bit is added to the beginning of each frame (total of 193 bits in the frame).
- The framing bit is chosen such that a sequence of framing bits over several frames forms a unique sequence.
- If the sequence is incorrect at the receiver, then synchronization loss is detected, and the next bit is examined to see whether THAT bit is the framing bit.
- Low-rate signaling information is carried in the least significant bit of every sixth sample.
- Every sixth frame has 168 information bits and 24 signaling bits; this is called 7⅚-bit encoding or robbed-bit signaling.
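The T1 bit rate follows directly from the frame structure (each voice channel is sampled at 8 kHz, so 8000 frames are sent per second):

```python
channels, bits_per_sample = 24, 8
frame_bits = channels * bits_per_sample + 1   # 192 information bits + 1 framing bit
frames_per_second = 8000                      # one frame per 8 kHz sampling instant
t1_rate = frame_bits * frames_per_second      # bits per second
```

The 193-bit frame at 8000 frames/s gives exactly the 1.544 Mbit/s DS1 rate quoted above.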
T1 Synchronization and Signaling (2)
- Given that every sixth frame has signaling information, signaling frames need to be identified by the receiver.
- A superframe was developed for this purpose.
- Framing bits form a pattern over 12 frames: 100011011100.
- It allows for identification of frame boundaries, as well as the identification of every sixth frame.
- Results in four possible signaling bit patterns (states) per superframe: 00, 01, 10, 11.
- The extended superframe (ESF) allows for 16-state signaling.
- The ESF format allowed cyclic redundancy check (CRC) error checking and other extensions.
- The information in this section is mostly for historical purposes and has been replaced by newer technologies.
Digital Multiplexing
- Multiplexing (interleaving) several T1 channels to form DS2, DS3, etc., channels at higher bit rates.
- Sometimes, propagation speed changes owing to temperature changes in the propagation medium, so tributary bit rates drift.
- A rate drifting upward by a fraction produces extra received bits, which must be buffered before being multiplexed.
- A rate drifting downward by a fraction produces vacant slots, which the multiplexer cannot handle; such slots are stuffed with dummy digits (pulse stuffing).
- This leads to the plesiochronous (almost synchronous) digital hierarchy.
- Self-study: SECTION 6.4!
Differential Pulse Code Modulation (DPCM)
Differential Pulse Code Modulation (DPCM) - Simple Prediction
- PCM is not very efficient, generating many bits per sample and requiring significant bandwidth to transmit.
- DPCM is an attempt to improve the efficiency of A/D conversion.
- Consider transmitting the difference between successive values instead of transmitting sample values.
- If m[k] is the kth sample, we transmit the difference d[k] = m[k] − m[k − 1].
- At the receiver, knowing m[k − 1] and receiving d[k], m[k] can be reconstructed.
- The differences between successive samples are typically much smaller than the sample values themselves.
- As such, the peak amplitude mp is reduced considerably.
Differential Pulse Code Modulation (DPCM) - Simple Prediction (2)
- Since the quantization interval is ∆v = \frac{2m_p}{L} for a given L (or n),
- the quantization noise power \frac{(∆v)^2}{12} can be reduced.
- As such, for a given n (or Tx bandwidth), we can increase the SNR.
- OR for a given SNR, we can reduce n (or Tx bandwidth).
Differential Pulse Code Modulation (DPCM) - Further Improvement
- At the transmitter: for some estimate m̂[k], transmit the difference d[k] = m[k] − m̂[k] (or prediction error).
- At the receiver: obtain the estimate m̂[k] from previous samples and generate m[k] by adding d[k] to the estimate m̂[k].
- The scheme is known as differential PCM (DPCM).
- DPCM is superior to simple prediction, which is a special case of DPCM where m̂[k] = m[k − 1].
About Signal Predictors
How can one predict a signal (function)?
Consider the Taylor series expansion of the signal m(t + T_s):
m(t + T_s) = m(t) + T_s m'(t) + \frac{T_s^2}{2!}m''(t) + \frac{T_s^3}{3!}m'''(t) + … ≈ m(t) + T_s m'(t) \text{ for small } T_s
Consider the kth sample of m(t), with m(kT_s ± T_s) = m[k ± 1].
Noting that m'(kT_s) ≈ \frac{m(kT_s) − m(kT_s − T_s)}{T_s}, we get m[k + 1] ≈ m[k] + T_s\,\frac{m[k] − m[k − 1]}{T_s} = 2m[k] − m[k − 1]
Hence, a crude prediction of the (k + 1)th sample can be obtained from the two previous samples.
About Signal Predictors (2)
- The larger the number of previous samples used, the better the prediction.
- A general formula is given by m[k] ≈ a_1 m[k − 1] + a_2 m[k − 2] + … + a_N m[k − N].
- The RHS of the above equation is the approximated signal m̂[k]; therefore, an Nth-order linear predictor is given by m̂[k] = a_1 m[k − 1] + a_2 m[k − 2] + … + a_N m[k − N].
- The prediction coefficients a_1, a_2, …, a_N are determined from the statistical correlation between samples.
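The coefficients can be estimated from the signal itself; a least-squares sketch (a stand-in for the correlation-based design, which solves the same normal equations):

```python
import numpy as np

def predictor_coeffs(m, N):
    """Fit a_1..a_N of the Nth-order linear predictor
    m_hat[k] = a_1*m[k-1] + ... + a_N*m[k-N] by least squares over m."""
    # Row k holds [m[k-1], m[k-2], ..., m[k-N]]; target is m[k]
    rows = np.array([m[k - N:k][::-1] for k in range(N, len(m))])
    return np.linalg.lstsq(rows, m[N:], rcond=None)[0]
```

For a signal that is exactly a linear recursion of its past samples, the fit recovers the recursion coefficients.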
Analysis of DPCM
- There is a difficulty with DPCM: m[k − 1], m[k − 2], . . . are not available at the receiver.
- Only their quantized versions, mq[k − 1], mq[k − 2], . . . are available.
- As such, the estimate m̂[k] cannot be determined at the receiver, but instead, only m̂_q[k] can be determined.
- This will increase the error in reconstruction.
- A better strategy is for the transmitter to also work with the quantized samples, so that both ends determine the same estimate m̂_q[k].
Analysis of DPCM - Tx
- Assume the predictor input is m_q[k] and the predictor output is m̂_q[k].
- Difference: d[k] = m[k] − m̂_q[k].
- Quantized difference: d_q[k] = d[k] + q[k], where q[k] is the quantization error.
- The predictor output m̂_q[k] is fed back to the input such that
m_q[k] = m̂_q[k] + d_q[k] = m[k] − d[k] + d_q[k] = m[k] + q[k]
Analysis of DPCM - Tx (2)
- This shows that m_q[k] is a quantized version of m[k].
- The predictor input is m_q[k] as assumed.
- The difference signal d_q[k] is now transmitted over the channel.
Analysis of DPCM - Rx
- The input to the receiver predictor is the same as the transmitter predictor.
- Hence, the receiver predictor output must be the same, and the receiver reconstructs m_q[k] = m̂_q[k] + d_q[k] = m[k] + q[k].
- The desired signal is received with some quantization noise.
- The received signals are decoded and low-pass filtered for D/A conversion.
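The transmitter loop and receiver above can be simulated end to end; a sketch with the simple first-order predictor m̂_q[k] = m_q[k−1] (the step size and test signal are arbitrary choices):

```python
import numpy as np

def dpcm(m, dv):
    """DPCM with predictor m_hat_q[k] = m_q[k-1].  The quantizer (midpoint
    levels, step dv) sits inside the Tx loop, so Tx and Rx predictors are
    driven by the same quantized history."""
    quantize = lambda d: (np.floor(d / dv) + 0.5) * dv
    dq_stream = []
    mq = 0.0                      # Tx predictor state, m_q[k-1]
    for mk in m:
        dq = quantize(mk - mq)    # d[k] = m[k] - m_hat_q[k], then quantize
        mq = mq + dq              # m_q[k] = m_hat_q[k] + d_q[k] = m[k] + q[k]
        dq_stream.append(dq)
    rx, out = 0.0, []
    for dq in dq_stream:          # Rx runs the identical recursion
        rx = rx + dq
        out.append(rx)
    return np.array(out)
```

The reconstruction error never exceeds half a quantization step, confirming m_q[k] = m[k] + q[k] with |q[k]| ≤ ∆v/2.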
DPCM SQNR Improvement over PCM
- Let mp and dp be the peak amplitudes of m(t) and d(t), respectively.
- If the same number of quantization levels L is used in both cases, the quantization step size ∆v is reduced by the factor \frac{m_p}{d_p} (i.e., multiplied by \frac{d_p}{m_p}).
- Since the quantization noise power is \frac{(∆v)^2}{12}, it is multiplied by \left(\frac{d_p}{m_p}\right)^2.
- The SQNR is therefore multiplied by \left(\frac{m_p}{d_p}\right)^2.
- In other words, the SQNR improvement Gp due to DPCM is at least Gp = \frac{Pm}{Pd}, where Pm and Pd are the powers of m(t) and d(t), respectively.
- In dB: 10 \log_{10}\left(\frac{P_m}{P_d}\right).
- In practice, improvements of 5.6 dB to 25 dB can be obtained.
Adaptive Differential PCM (ADPCM)
- DPCM uses a fixed number of quantization levels L or fixed ∆v.
- The efficiency of DPCM can be further improved by using an adaptive quantizer.
- If a fixed quantization step is applied, either:
- the quantization error is too large because ∆v is too large, OR
- the quantizer cannot cover the entire range (clips) because ∆v is too small.
- The quantized prediction error d_q[k] can be a good indicator of prediction error size:
- when d_q[k] varies close to the max value, ∆v needs to increase, and
- when d_q[k] oscillates around zero, ∆v needs to decrease.
- The same algorithm needs to be used at Tx and Rx such that ∆v is adjusted identically.
- Changing from DPCM to ADPCM results in half the bandwidth usage.
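A toy adaptation rule illustrating the idea (the thresholds and growth factor here are arbitrary assumptions, not the standard ADPCM tables):

```python
def adapt_step(dv, dq, dv_min=1e-4, dv_max=1.0):
    """Adjust the step size from the quantized difference d_q[k]:
    grow when |d_q| is near the quantizer's outer levels (risk of
    clipping), shrink when |d_q| hugs zero (step too coarse)."""
    if abs(dq) >= 3 * dv:      # difference saturating: enlarge the step
        dv *= 1.5
    elif abs(dq) <= dv:        # difference tiny: refine the step
        dv /= 1.5
    return min(max(dv, dv_min), dv_max)
```

Because the rule depends only on the transmitted d_q values, the receiver can run the same code and keep its ∆v identical to the transmitter's, as the bullet above requires.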
Delta Modulation Intro
- Delta modulation is a special case of DPCM.
- Sample correlation exploited by oversampling (typically 4x the Nyquist rate).
- Results in a small enough prediction error that can be encoded using one bit, i.e., L = 2.
- Delta modulation is a