Modulation Systems EMS310

General Information

  • Lecturer: Prof Pieter de Villiers
  • Assistant Lecturer: Mr Neil-John Lord
  • Department: Electrical, Electronic and Computer Engineering (EECE), University of Pretoria
  • Contact Details:
    • Module Co-ordinator: Prof Pieter de Villiers
      • Room: 7-54, Eng. 3
      • Telephone: 012 420 2872
      • Email: pieter.devilliers@up.ac.za
    • Assistant Lecturer: Mr Neil-John Lord
      • Room: 7-56, Eng. 3
      • Email: nj.lord@tuks.co.za
    • Lab. Instructors: TBA
    • Tutor and Teaching Assistant(s): TBA
    • EECE undergr. admin.: Ms Cornel Freislich, Ms Mari Ferreira

Course Introduction

  • Communication System Block Diagram:
    • Source -> Transducer -> Transmitter -> Channel -> Receiver -> Transducer -> Sink
    • Input message -> Input signal -> Transmitted signal -> Received signal -> Output signal -> Output message
    • Noise and distortion are introduced in the channel.
  • Analog vs Digital Information:
    • Analog sources:
      • Microphone speech signal
      • Music signal to speakers
    • Digital sources:
      • Printed English
      • Morse code
      • Music notes
    • Noise resistance: Digital is better due to regenerative repeaters.
    • M-ary vs binary signals
  • Digital Transmission:
    • Digital signals can use regenerative repeaters to combat distortion and noise.
    • A/2 and -A/2 represent signal amplitudes.
  • Sampling: Analog to Digital
    • Nyquist Sampling Theorem: If signal bandwidth is B, sample at a rate greater than 2B.
    • Pulse Code Modulation (PCM): quantized level 7 -> 4-bit PCM code 0111
  • Channel Effect, SNR
    • Channel Bandwidth (B): Range of frequencies for transmission with reasonable fidelity.
    • Signal to Noise Ratio (SNR):
      • SNR = P_s/P_n
      • P_s - signal power
      • P_n - noise power (typically constant)
    • Noise arises from many sources and is generally outside the designer's control.
    • Improving signal quality therefore comes down to managing the signal power relative to the noise.
    • The dB scale is used: SNR_{dB} = 10 \log_{10}(P_s/P_n).
    • An increase of 3 dB represents a doubling of power.
    • Tradeoff between B and Ps: Can compensate for smaller B by using more Ps.
  • Channel Capacity
    • Shannon Channel Capacity:
      • C = B \log_2(1 + SNR) bits/s
      • C - channel capacity
      • SNR - signal to noise ratio
    • Channel capacity is the upper bound on the rate of information transmission per second.
    • If there is no noise (SNR \rightarrow \infty), an infinite amount of information can be transmitted (see the capacity sketch below).
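As a side illustration (not part of the original slides), here is a minimal Python sketch of the Shannon capacity formula; the function name and the 3 kHz / 30 dB example values are my own choices:

```python
import numpy as np

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """C = B * log2(1 + SNR), with the SNR supplied in dB."""
    snr_linear = 10 ** (snr_db / 10)      # undo the dB scale
    return bandwidth_hz * np.log2(1 + snr_linear)

# Example: a 3 kHz channel at 30 dB SNR supports roughly 29.9 kbit/s.
print(shannon_capacity(3e3, 30.0))
```

The B versus P_s tradeoff appears directly in the formula: halving B can be offset by (roughly) squaring the linear SNR.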
  • Modulation and Demodulation
    • Baseband Signal m(t): The message signal; analog, or digital (e.g. obtained by sampling).
    • Carrier Signal: High-frequency sinusoidal signal used to convey data over the channel.
    • Modulation: Using the baseband signal to modify (modulate) a property (amplitude/phase/frequency) of the carrier signal.
    • Demodulation: Reverse process of modulation to recover m(t) at the receiver.
  • Modulation Types:
    • Amplitude Modulation (AM)
    • Frequency Modulation (FM)
  • Simultaneous Transmission of Multiple Signals
    • Frequency division multiplexing (FDM): Transmission over non-overlapping frequency bands.
    • Time division multiplexing (TDM): Transmission during non-overlapping allocated time slots.
  • Digital Source Coding and Error Correction Coding
    • Source coding: Remove redundancy to represent data in as few bits as possible.
    • Error correction coding: Add redundancy to make transmission robust to noise and channel effects.
  • Source Coding
    • Source coding: Remove redundancy to represent data in as few bits as possible
    • Randomness:
      • Both associated with noise and data
      • Information content: A fully predictable signal contains no information
      • Data needs to be as random as possible - this is the premise of source coding
      • Requires uniform distribution over equal length symbols
      • Leads to efficiency in the representation and transmission of data
  • Error Correction Coding:
    • Error correction coding - Add redundancy to make transmission robust to noise and channel effects
    • Redundancy in English:
      • English is 50% redundant
      • You may hear only part of a conversation and still use context to infer the topic
      • If redundancy were to be removed, errors could not be corrected
    • Parity check example
      • Add a parity bit to 0001, resulting in 00011
      • The codeword then always contains an even number of ones
      • If a single bit error occurs, the parity is violated and we notice
      • Shortcomings? (see the sketch below)
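A small Python sketch (my own illustration) of the even-parity example above, including the shortcoming hinted at in the last bullet:

```python
def add_even_parity(bits: str) -> str:
    """Append a parity bit so the codeword has an even number of ones."""
    return bits + ('1' if bits.count('1') % 2 else '0')

def parity_ok(codeword: str) -> bool:
    """Check the even-parity condition."""
    return codeword.count('1') % 2 == 0

word = add_even_parity('0001')   # '00011', as in the slide
print(parity_ok(word))           # True
print(parity_ok('00010'))        # False: a single bit flip is noticed
# Shortcoming: a double error such as '00101' still has an even number
# of ones, so it passes the check; and a detected error can only be
# noticed, not located or corrected.
```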

Signals and the Signal Space

  • Signals and Systems
    • Signals: A set of information or data that varies with time or space (time/space is the independent variable).
    • Systems: Processes signals by modifying them (time domain/frequency domain). Systems have inputs and processed outputs.
  • Size of a Signal - Signal Energy
    • Signal Energy - E_g
    • Defined as the energy that a voltage signal g(t) dissipates in a 1 Ω resistor
    • E_g = \int_{-\infty}^{\infty} g^2(t) dt, for a real signal (1)
    • E_g = \int_{-\infty}^{\infty} |g(t)|^2 dt, for a complex-valued signal (2)
  • Energy and power signals
    • Examples of signals:
      • Signal with finite energy.
      • Signal with finite power.
  • Size of a Signal - Signal Power
    • For signals where g(t) \nrightarrow 0 as |t| \rightarrow \infty, we have E_g \rightarrow \infty
    • Signal Power - P_g
    • Signal power is the time average of energy, and is suitable for signals where g(t) \nrightarrow 0 as |t| \rightarrow \infty
    • P_g = \lim_{T \rightarrow \infty} \frac{1}{T} \int_{-T/2}^{T/2} g^2(t) dt, for a real signal (3)
    • P_g = \lim_{T \rightarrow \infty} \frac{1}{T} \int_{-T/2}^{T/2} |g(t)|^2 dt, for a complex-valued signal (4)
  • Energy and power units
    • Energy - Joules
    • Power - Watts
    • In dB scale
      • dB Watts - [10 \log_{10} P] dBW, with P in watts
      • dB milliwatts - [30 + 10 \log_{10} P] dBm, with P in watts
  • Energy and power signals - Example
    • Example 2.1
    • Determine the suitable measures of the signals in Fig. 2.2.
    • The signal in Fig. 2.2a approaches 0 as |t| \rightarrow \infty. Therefore, the suitable measure for this signal is its energy E_g, given by
    • E_g = \int_{-\infty}^{\infty} g^2(t) dt = \int_{-1}^{0} (2)^2 dt + \int_{0}^{\infty} (2e^{-t/2})^2 dt = 4 + 4\int_{0}^{\infty} e^{-t} dt = 4 + 4 = 8
    • The signal in Fig. 2.2b does not approach 0 as |t| \rightarrow \infty. However, it is periodic, and therefore its power exists. We can use Eq. (2.3) to determine its power. For periodic signals, we can simplify the procedure by observing that a periodic signal repeats regularly each period (2 seconds in this case). Therefore, averaging g^2(t) over an infinitely large interval is equivalent to averaging it over one period (2 seconds in this case). Thus
    • P_g = \frac{1}{T} \int_{-T/2}^{T/2} g^2(t) dt = \frac{1}{2} \int_{-1}^{1} t^2 dt = \frac{1}{3}
    • Recall that the signal power is the square of its rms value. Therefore, the rms value of this signal is \frac{1}{\sqrt{3}}. (A numerical check of this example follows below.)
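A quick numerical check of Example 2.1 in Python. The waveforms are my reading of Fig. 2.2, which is not reproduced here: (a) g(t) = 2 on [-1, 0], decaying as 2e^{-t/2} for t > 0; (b) g(t) = t on [-1, 1], repeated with period 2:

```python
import numpy as np

# (a) Energy signal: integrate g^2 over a long window (the tail is negligible).
t = np.linspace(-1, 50, 1_000_001)
g = np.where(t < 0, 2.0, 2.0 * np.exp(-t / 2))
print(np.trapz(g**2, t))            # ~ 8.0 = E_g

# (b) Power signal: average g^2 over one period of T = 2 seconds.
tp = np.linspace(-1, 1, 100_001)
P_g = np.trapz(tp**2, tp) / 2.0
print(P_g, np.sqrt(P_g))            # ~ 1/3, rms ~ 1/sqrt(3)
```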
  • Classification of Signals
    1. Continuous and discrete time
    2. Analog and digital
    3. Periodic and aperiodic
    4. Energy and power
    5. Deterministic and probabilistic
  • Continuous and Discrete Time
    • (a) Continuous time signal.
    • (b) Discrete time signals.
  • Analog and Digital
    • Analog signal has infinite levels (continuous in range)
    • Digital signal has M levels, for example binary M = 2
  • Periodic and Aperiodic
    • An analog signal is said to be periodic if there exists a positive constant T_0 such that g(t) = g(t + T_0) for all t (5)
    • The smallest value of T_0 for which (5) holds is the period of the signal
    • A periodic signal is shift invariant for a shift of T_0
    • A periodic signal has an infinite domain
  • Energy and Power Signals
    • Energy Signal: Signal with finite energy
    • \int_{-\infty}^{\infty} |g(t)|^2 dt < \infty (6)
    • Power Signal: Signal with finite power
    • \lim_{T \rightarrow \infty} \frac{1}{T} \int_{-T/2}^{T/2} |g(t)|^2 dt < \infty (7)
  • Deterministic and Probabilistic Signals
    • Deterministic Signal:
      • Past, current, future values known exactly
      • Described by a function
    • Probabilistic Signal:
      • Values only known once observed - cannot be predicted
      • Described by a probability distribution function
      • Example: noise
  • Unit Impulse (Dirac delta)
    • The unit impulse (Dirac delta) has the following properties:
      • \delta(t) = 0, t \neq 0 (8)
      • \int_{-\infty}^{\infty} \delta(t) dt = 1 (9)
    • QUESTION: What is the value of \delta(0)?
  • Multiplication by \delta(t)
    • Multiply a function \phi(t) with unit impulse \delta(t):
      • \delta(t)\phi(t) = \delta(t)\phi(0) (10)
    • Similarly for impulse at t = T:
      • \delta(t - T)\phi(t) = \delta(t - T)\phi(T) (11)
      • provided \phi(T) is defined.
  • Sampling property of \delta(t)
    • From (11):
      • \int_{-\infty}^{\infty} \delta(t - T)\phi(t) dt = \phi(T) \int_{-\infty}^{\infty} \delta(t - T) dt = \phi(T) (12)
      • provided \phi(t) is continuous at t = T.
    • Sampling property: The area under the product of a function with an impulse \delta(t) is equal to the value of that function where the unit impulse is located (illustrated numerically below).
    • More generally (where T may or may not lie within the integration limits):
      • \int_{a}^{b} \delta(t-T)\phi(t) dt = \phi(T) \int_{a}^{b} \delta(t-T) dt = \begin{cases} \phi(T) & a \leq T < b \\ 0 & T < a \text{ or } T \geq b \end{cases} (13)
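The sampling property can be checked numerically by standing a narrow unit-area pulse in for \delta(t - T); this sketch is my own illustration, with \phi(t) = \cos t and T = 2 chosen arbitrarily:

```python
import numpy as np

T = 2.0
t = np.linspace(-10, 10, 2_000_001)
eps = 1e-3                                   # pulse width
pulse = np.where(np.abs(t - T) < eps / 2, 1.0, 0.0)
pulse /= np.trapz(pulse, t)                  # force unit area, like delta

phi = np.cos(t)                              # any function continuous at T
print(np.trapz(pulse * phi, t), np.cos(T))   # both ~ cos(2) = -0.416
```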
  • Unit step function u(t)
    • Unit step function defined as:
      • u(t) = \begin{cases} 1 & t \geq 0, \\ 0 & t < 0. \end{cases} (14)
    • A causal signal is any signal for which g(t) = 0, t < 0 (15); multiplying an arbitrary signal by u(t) makes it causal.
    • Relationship with \delta(t)
      • u(t) = \int_{-\infty}^{t} \delta(\tau) d\tau = \begin{cases} 0 & t < 0, \\ 1 & t \geq 0, \end{cases} therefore \frac{du}{dt} = \delta(t) (16)
  • Signals as Vectors
    • Consider a signal g(t) over [a, b]
    • Pick N uniformly spaced points on the interval [a, b], denoted by t_1, t_2, \ldots, t_N
    • Resulting signal vector g = [g(t_1), g(t_2), \ldots, g(t_N)]^T (17)
    • As N \rightarrow \infty: \lim_{N \rightarrow \infty} g = g(t), t \in [a, b] (18)
    • Dot (inner) product of vectors g and x:
      • <g, x> = ||g|| \cdot ||x|| \cos \theta (19)
      • where \theta is the angle between g and x, and || \cdot || denotes the norm (length) of the vector
    • Special case of the dot product of a vector with itself:
      • <g, g> = ||g||^2 (20)
  • Vector components
    • A vector g can be expressed in terms of x through
      • g = cx + e. (21)
    • Decomposition is not unique:
      • g = c_1 x + e_1 = c_2 x + e_2. (22)
  • Vector components (2)
    • Say we want to approximate g according to g \approx \hat{g} = cx
    • Best way to decompose g??
    • Which error vector e minimises the distance between g and cx?
    • Vector e which minimises ||e|| is orthogonal to x
  • Vector components (3)
    • Magnitude of component of g along x?
      • ||g|| \cos \theta = c||x|| (23)
    • Multiply both sides with ||x||:
      • ||g|| \cdot ||x|| \cos \theta = c ||x||^2 = <g, x> (24)
      • therefore:
        • c = \frac{<g, x>}{||x||^2} = \frac{1}{||x||^2} <g, x> (25)
      • (a short numerical sketch of this projection follows below)
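In NumPy the projection formula is one line, and the minimum-norm error e is orthogonal to x as claimed; the vector values below are arbitrary:

```python
import numpy as np

g = np.array([2.0, 3.0])
x = np.array([4.0, 1.0])

c = np.dot(g, x) / np.dot(x, x)   # c = <g, x> / ||x||^2
e = g - c * x                     # error of the best approximation g ~ c*x
print(c, np.dot(e, x))            # <e, x> ~ 0: e is orthogonal to x
```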
  • Orthogonality
    • Two vectors g and x are orthogonal if
      • <g, x> = 0 (26)
  • Signal decomposition and Signal components
    • Consider approximating a real signal g(t) by using some other signal x(t) over an interval [t1,t2], i.e.
      • g(t) \approx cx(t), t_1 \leq t \leq t_2. (27)
    • The error signal e(t) is given by
      • e(t) = \begin{cases} g(t) - cx(t) & t_1 \leq t \leq t_2, \\ 0 & \text{otherwise}. \end{cases} (28)
    • The best approximation minimises the norm (energy) of e(t), defined as
      • E_e = \int_{t_1}^{t_2} e^2(t) dt, (29)
      • = \int_{t_1}^{t_2} (g(t) - cx(t))^2 dt (30)
  • Signal decomposition and Signal components (2)
    • To minimise the error energy E_e, the derivative of E_e with respect to the parameter c must be equal to 0, i.e.
      • \frac{dE_e}{dc} = 0. (31)
    • Hence
      • \frac{d}{dc} \left[ \int_{t_1}^{t_2} (g(t) - cx(t))^2 dt \right] = 0. (32)
    • Expanding the square in the integral:
      • \frac{d}{dc} \left[ \int_{t_1}^{t_2} g^2(t) dt \right] - \frac{d}{dc} \left[ 2c \int_{t_1}^{t_2} g(t)x(t) dt \right] + \frac{d}{dc} \left[ c^2 \int_{t_1}^{t_2} x^2(t) dt \right] = 0. (33)
  • Signal decomposition and Signal components (3)
    • Remember:
      • \frac{d}{dc} \left[ \int_{t_1}^{t_2} g^2(t) dt \right] - \frac{d}{dc} \left[ 2c \int_{t_1}^{t_2} g(t)x(t) dt \right] + \frac{d}{dc} \left[ c^2 \int_{t_1}^{t_2} x^2(t) dt \right] = 0. (34)
    • Therefore:
      • -2 \int_{t_1}^{t_2} g(t)x(t) dt + 2c \int_{t_1}^{t_2} x^2(t) dt = 0. (35)
      • c = \frac{\int_{t_1}^{t_2} g(t)x(t) dt}{\int_{t_1}^{t_2} x^2(t) dt} = \frac{1}{E_x} \int_{t_1}^{t_2} g(t)x(t) dt. (36)
    • To summarise, a signal g(t) can be approximated as follows:
      • g(t) \approx cx(t), t_1 \leq t \leq t_2, (37)
      • with c given by (36).
  • Inner product for signals
    • The inner product of two real-valued signals g(t) and x(t):
      • <g(t), x(t)> = \int_{t_1}^{t_2} g(t)x(t) dt (38)
    • Recall the “discrete version” (dot product) for length-N vectors:
      • <g, x> = \sum_{n=1}^{N} g(t_n)x(t_n) (39)
    • Two signals are orthogonal if
      • <g(t), x(t)> = \int_{t_1}^{t_2} g(t)x(t) dt = 0 (40)
    • As with vectors, the norm of a signal is given by
      • ||g(t)|| = \sqrt{<g(t), g(t)>} = \sqrt{E_g} (41)
  • Example - Signal Approximation (1)
    • Example 2.2
    • For the square signal g(t) shown in Fig. 2.10 find the component in g(t) of the form of sin t. In other words, approximate g(t) in terms of sin t:
      • g(t) \sim c \sin t, 0 \leq t \leq 2\pi
      • so that the energy of the error signal is minimum.
  • Example - Signal Approximation (2)
    • In this case
      • x(t) = \sin t
    • and
      • E_x = \int_0^{2\pi} \sin^2 t\ dt = \pi
    • From Eq. (2.24), we find
      • c = \frac{1}{\pi} \int_0^{2\pi} g(t) \sin t\ dt = \frac{1}{\pi} \left[ \int_0^{\pi} (1) \sin t\ dt + \int_{\pi}^{2\pi} (-1) \sin t\ dt \right]
      • = \frac{1}{\pi} \left[ \int_0^{\pi} \sin t\ dt - \int_{\pi}^{2\pi} \sin t\ dt \right] = \frac{4}{\pi}
    • Therefore
      • g(t) \sim \frac{4}{\pi} \sin t
      • represents the best approximation of g(t) by the function \sin t, which minimises the error signal energy. This sinusoidal component of g(t) is shown shaded in Fig. 2.10. As in vector spaces, we say that the square function g(t) shown in Fig. 2.10 has a component of the signal \sin t with magnitude \frac{4}{\pi}. (A numerical check follows below.)
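A numerical confirmation of Example 2.2 (my own sketch; the square wave is taken as +1 on [0, π) and -1 on [π, 2π), as in Fig. 2.10):

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 200_001)
g = np.where(t < np.pi, 1.0, -1.0)   # square wave of Fig. 2.10
x = np.sin(t)

E_x = np.trapz(x**2, t)              # ~ pi
c = np.trapz(g * x, t) / E_x
print(c, 4 / np.pi)                  # both ~ 1.2732
```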
  • Complex signal space
    • The inner product of two complex-valued signals g(t) and x(t) is given by
      • <g(t), x(t)> = \int_{t_1}^{t_2} g(t)x^*(t) dt, (42)
      • where x^*(t) denotes the complex conjugate of x(t).
    • The norm of a complex signal is given by:
      • ||g(t)|| = \sqrt{\int_{t_1}^{t_2} g(t)g^*(t) dt}. (43)
    • Consider an approximation:
      • g(t) \approx cx(t), t_1 \leq t \leq t_2. (44)
    • Following a similar derivation to the real case (see textbook), the optimal value of c in the complex case is given by
      • c = \frac{1}{E_x} \int_{t_1}^{t_2} g(t)x^*(t) dt. (45)
  • Energy of the sum of orthogonal signals
    • From linear algebra: if x and y are orthogonal, and if z = x + y, then
      • ||z||^2 = ||x||^2 + ||y||^2 (46)
    • Similarly for orthogonal signals over interval [t1,t2]:
      • E_z = E_x + E_y (47)
    • PLEASE REFER TO TEXTBOOK FOR THE PROOF
  • Vector correlation
    • Inner product and norm lay foundation of signal comparison
    • Vectors g and x are similar if g has a large component along x
    • Quantity c could be a measure of such similarity, but varies with lengths of g and x
    • Proceed to define a normalised version:
      • Correlation coefficient
        • \rho = \cos \theta = \frac{<g, x>}{||g|| \cdot ||x||}, (48)
        • where -1 \leq \rho \leq 1
        • Special cases?
  • Signal correlation
    • Signal correlation coefficient
      • \rho = \frac{1}{\sqrt{E_g E_x}} \int_{-\infty}^{\infty} g(t)x(t) dt, (49)
      • where -1 \leq \rho \leq 1 (proof via the Cauchy-Schwarz inequality); a numerical sketch follows below
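A short sketch of the signal correlation coefficient, illustrating the special cases ρ = 1, -1 and 0 (the helper name and test signals are my own choices):

```python
import numpy as np

def signal_rho(g, x, t):
    """Correlation coefficient of two real signals sampled on grid t."""
    return np.trapz(g * x, t) / np.sqrt(np.trapz(g**2, t) * np.trapz(x**2, t))

t = np.linspace(0, 2 * np.pi, 100_001)
print(signal_rho(np.sin(t), np.sin(t), t))    # identical:   rho =  1
print(signal_rho(np.sin(t), -np.sin(t), t))   # inverted:    rho = -1
print(signal_rho(np.sin(t), np.cos(t), t))    # orthogonal:  rho ~  0
```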
  • Correlation functions
    • Define a type of correlation for all time shifts of one of the signals
    • Useful in cases of transmission time delay (for example radar)
    • Cross correlation function
    • \psi_{gz}(\tau) \equiv \int_{-\infty}^{\infty} z(t)g^*(t - \tau) dt = \int_{-\infty}^{\infty} z(t + \tau)g^*(t) dt (50)
    • Therefore \psi_{gz}(\tau) is an indication of similarity between g(t) and z(t) advanced by \tau seconds
  • Correlation functions (2)
    • Cross correlation of a signal with itself is called autocorrelation
    • Autocorrelation function
      • \psi_g(\tau) \equiv \int_{-\infty}^{\infty} g(t + \tau)g^*(t) dt (51)
      • Therefore \psi_g(\tau) is an indication of the similarity between g(t) and itself for different time shifts (a discrete sketch follows below)
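For sampled data the autocorrelation integral becomes a sum over lags, which np.correlate evaluates directly; this is my own minimal sketch with an arbitrary decaying pulse:

```python
import numpy as np

dt = 0.01
t = np.arange(0, 10, dt)
g = np.exp(-t)                                # causal decaying pulse

psi = np.correlate(g, g, mode='full') * dt    # psi_g at lags -(N-1)*dt..(N-1)*dt
lags = np.arange(-len(g) + 1, len(g)) * dt

print(lags[np.argmax(psi)])                   # peak occurs at tau = 0 ...
print(psi.max(), np.trapz(g**2, t))           # ... and equals E_g ~ 0.5
```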
  • Orthogonal bases
    • [Figure 2.12: Representation of a vector in three-dimensional space.]
  • Orthogonal vector bases
    • Vectors x_1, x_2 and x_3 form a complete set of orthogonal basis vectors, where
      • g = c_1 x_1 + c_2 x_2 + c_3 x_3 (52)
    • If \{x_i\} is not complete, an approximation error would exist (as in the previous examples)
    • Basis vectors are not unique and depend on the choice of coordinate system
    • The constants c_i are given by
      • c_i = \frac{<g, x_i>}{<x_i, x_i>} (53)
      • = \frac{<g, x_i>}{||x_i||^2}, for i = 1, 2, 3 (54)
  • Orthogonal signals
    • Orthogonal vectors
      • Vectors x_m and x_n are orthogonal if
        • <x_m, x_n> = 0, m \neq n (55)
    • Orthogonal signals
      • Consider a set of signals \{x_1(t), x_2(t), \ldots, x_N(t)\}
      • The signals are mutually orthogonal if
        • \int_{t \in \Theta} x_m(t)x_n^*(t) dt = 0, m \neq n (56)
    • Orthonormal signals
      • Special case where all signal energies E_n = 1
      • Can be achieved by dividing x_n(t) by \sqrt{E_n}
  • Signal Approximation
    • Consider the problem of approximating a signal g(t) over the time interval \Theta by a set of N mutually orthogonal signals x_1(t), x_2(t), \ldots, x_N(t):
      • g(t) \approx c_1 x_1(t) + c_2 x_2(t) + \ldots + c_N x_N(t) (57)
      • = \sum_{n=1}^{N} c_n x_n(t) (58)
    • If the energy E_e of the error signal e(t) is minimised:
      • c_n = \frac{\int_{t \in \Theta} g(t)x_n^*(t) dt}{\int_{t \in \Theta} |x_n(t)|^2 dt} (59)
      • = \frac{1}{E_n} \int_{t \in \Theta} g(t)x_n^*(t) dt (60)
  • Signal Approximation - Completeness
    • Completeness
    • An orthogonal set is said to be complete, if the error energy E_e \rightarrow 0
    • Mathematically:
      • \lim_{N \rightarrow \infty} \int_{t \in \Theta} |e_N(t)|^2 dt = 0, (61)
      • where e_N(t) = g(t) - (c_1 x_1(t) + c_2 x_2(t) + \ldots + c_N x_N(t)) = g(t) - \sum_{n=1}^{N} c_n x_n(t), t \in \Theta. (62)
  • Generalised Fourier series
    • Generalised Fourier series
    • Given a complete orthogonal set, any signal g(t) can be reconstructed as follows:
      • g(t) = c_1 x_1(t) + c_2 x_2(t) + \ldots (63)
      • = \sum_{n=1}^{\infty} c_n x_n(t), t \in \Theta (64)
  • Parseval’s theorem
    • Parseval’s theorem
    • The energy of a signal reconstructed from orthogonal components is equal to the sum of the energies of the individual components (verified numerically in the sketch below), i.e.
      • E_r = c_1^2 E_1 + c_2^2 E_2 + c_3^2 E_3 + \ldots (65)
      • = \sum_{n} c_n^2 E_n (66)
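A sketch combining the generalised Fourier series and Parseval's theorem: approximate the Example 2.2 square wave with the orthogonal set {sin nt} and watch the component energies approach E_g = 2π. The basis choice and N are mine:

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 400_001)
g = np.where(t < np.pi, 1.0, -1.0)    # square wave of Example 2.2

E_r, N = 0.0, 99
for n in range(1, N + 1):
    x_n = np.sin(n * t)
    E_n = np.trapz(x_n**2, t)         # = pi for every n
    c_n = np.trapz(g * x_n, t) / E_n  # 4/(n*pi) for odd n, 0 for even n
    E_r += c_n**2 * E_n               # Parseval: accumulate component energy

print(E_r, np.trapz(g**2, t))         # E_r -> E_g = 2*pi as N grows
```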
  • Exponential Fourier Series
    • Orthogonal signal representation NOT unique
    • The trigonometric Fourier series is a good representation of periodic signals, but the exponential Fourier series is simpler
    • Orthogonality of set of exponentials
      • The set of exponentials e^{jn\omega_0 t}, n = 0, \pm 1, \pm 2, \ldots is orthogonal over any interval of duration T_0 = 2\pi/\omega_0, that is
        • \int_{T_0} e^{jn\omega_0 t}(e^{jm\omega_0 t})^* dt = \int_{T_0} e^{j(n-m)\omega_0 t} dt = \begin{cases} 0 & m \neq n \\ T_0 & m = n \end{cases} (67)
    • The above set is complete.
  • Exponential Fourier Series (2)
    • From equations (59) and (63), any signal g(t) can be expressed over an interval of duration T_0 seconds as an exponential Fourier series.
    • Exponential Fourier Series
      • g(t) = \sum_{n=-\infty}^{\infty} D_n e^{jn\omega_0 t}
      • = \sum_{n=-\infty}^{\infty} D_n e^{jn2\pi f_0 t},
      • where
      • D_n = \frac{1}{T_0} \int_{T_0} g(t)e^{-jn2\pi f_0 t} dt
    • The set \{D_n\} are the Fourier coefficients.
  • Exponential Fourier Series Example (Example 2.3)
    • Find the exponential Fourier series for the periodic signal \phi(t) = e^{-t/2} over 0 \leq t < \pi, repeating with period T_0 = \pi (so \omega_0 = 2\pi/T_0 = 2).
  • Exponential Fourier Series Example (2)
    • where
      • D_n = \frac{1}{T_0} \int_{T_0} \phi(t)e^{-j2nt} dt = \frac{1}{\pi} \int_0^{\pi} e^{-t/2}e^{-j2nt} dt = \frac{1}{\pi} \int_0^{\pi} e^{-(\frac{1}{2}+j2n)t} dt
      • = \frac{1}{\pi} \left[ \frac{-1}{\frac{1}{2}+j2n} e^{-(\frac{1}{2}+j2n)t} \right]_0^{\pi} = \frac{-1}{\pi} \frac{e^{-(\frac{1}{2}+j2n)\pi} - 1}{\frac{1}{2}+j2n} = \frac{1}{\pi} \frac{1 - e^{-\pi/2}}{\frac{1}{2} + j2n} \text{ (using } e^{-j2n\pi} = 1\text{)} = \frac{0.504}{1 + j4n}
    • and
    • \phi(t) = \sum_{n=-\infty}^{\infty} D_n e^{j2nt} = 0.504 \sum_{n=-\infty}^{\infty} \frac{e^{j2nt}}{1 + j4n}
    • = 0.504 \left[ 1 + \frac{1}{1+j4} e^{j2t} + \frac{1}{1+j8} e^{j4t} + \frac{1}{1+j12} e^{j6t} + \ldots \right.
    • \left. + \frac{1}{1-j4} e^{-j2t} + \frac{1}{1-j8} e^{-j4t} + \frac{1}{1-j12} e^{-j6t} + \ldots \right]
    • Observe that the coefficients \{D_n\} are complex. Moreover, D_n and D_{-n} are conjugates, as expected. (A numerical check of D_n follows below.)
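A numerical check of the coefficients D_n from Example 2.3 against the closed form 0.504/(1 + j4n); this sketch is my own, using a plain grid-based integration of the defining formula:

```python
import numpy as np

T0 = np.pi                                    # period, so omega_0 = 2
t = np.linspace(0, T0, 1_000_001)
phi = np.exp(-t / 2)

for n in (0, 1, -1, 2):
    D_n = np.trapz(phi * np.exp(-2j * n * t), t) / T0
    print(n, D_n, 0.504 / (1 + 4j * n))       # numerical vs closed form
```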
  • Exponential Fourier Series Spectra
    • Exponential spectrum: plot the (complex) coefficients D_n as a function of \omega
    • Two possible plots?
    • Prefer |D_n| and \angle D_n