The signal in Fig. 2.2b does not approach 0 as |t| \rightarrow \infty. However, it is periodic, and therefore its power exists. We can use Eq. (2.3) to determine its power. For periodic signals the procedure simplifies: because a periodic signal repeats identically every period (2 seconds in this case), averaging g^2(t) over an infinitely large interval is equivalent to averaging it over a single period. Thus
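As a numerical check of this idea, the sketch below averages g^2(t) over one period and over 50 periods for a hypothetical periodic signal (a sawtooth with period 2 s; the actual waveform of Fig. 2.2b is not reproduced here) and shows the two averages agree:

```python
import numpy as np

# Hypothetical periodic signal with period T0 = 2 s (a sawtooth, NOT the
# actual signal of Fig. 2.2b): g(t) = t for -1 <= t < 1, repeated.
T0 = 2.0
def g(t):
    return ((t + 1) % T0) - 1  # wraps t into [-1, 1)

# Average g^2(t) over one period
N = 200000
t1, dt1 = np.linspace(-1, 1, N, endpoint=False, retstep=True)
P_one = np.sum(g(t1)**2) * dt1 / T0

# Average g^2(t) over a much longer interval (50 periods)
t2, dt2 = np.linspace(-50, 50, 50 * N, endpoint=False, retstep=True)
P_long = np.sum(g(t2)**2) * dt2 / 100.0

print(P_one, P_long)  # both approach 1/3, the exact power of this sawtooth
```

For this sawtooth the exact power is (1/2) \int_{-1}^{1} t^2 dt = 1/3, and both averages converge to it.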
Sampling property: The area under the product of a function with an impulse \delta(t) is equal to the value of that function where the unit impulse is located.
More generally (the impulse at t = T may or may not lie within the integration limits):
\int_{a}^{b} \delta(t-T)\,\phi(t)\, dt = \phi(T) \int_{a}^{b} \delta(t-T)\, dt = \begin{cases} \phi(T), & a \leq T < b \\ 0, & T < a \leq b \text{ or } T \geq b > a \end{cases} \quad (13)
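The sampling property can be illustrated numerically by replacing \delta(t) with a narrow unit-area pulse and letting its width shrink (a sketch; the test function \phi(t) = \cos t and the pulse shape are our own choices):

```python
import numpy as np

# Approximate the impulse by a unit-area Gaussian of width eps; as eps -> 0,
# the integral of phi(t) * delta_eps(t - T) approaches phi(T).
def delta_eps(t, eps):
    return np.exp(-t**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

N = 400001
t, dt = np.linspace(-10.0, 10.0, N, retstep=True)
phi = np.cos(t)

T = 1.0  # impulse located inside the integration limits
for eps in (0.1, 0.01, 0.001):
    val = np.sum(phi * delta_eps(t - T, eps)) * dt
    print(eps, val)  # approaches cos(1.0) ~ 0.5403 as eps shrinks
```

With the impulse located outside the integration limits, the same sum is essentially zero, matching the second case of Eq. (13).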
Unit step function u(t)
The unit step function is defined as:
u(t) = \begin{cases} 1, & t \geq 0 \\ 0, & t < 0 \end{cases}
A causal signal is one for which g(t) = 0 for t < 0; equivalently, multiplying any signal by u(t) yields a causal signal. (15)
To summarise, a signal g(t) can be approximated as follows:
g(t) \approx c\,x(t), \quad t_1 \leq t \leq t_2, \quad (37)
with c given by (36).
Inner product for signals
The inner product of two real-valued signals g(t) and x(t) over an interval [t_1, t_2] is
\langle g(t), x(t) \rangle = \int_{t_1}^{t_2} g(t)\,x(t)\, dt
Recall the “discrete version” (dot product) for length-N vectors:
\langle \mathbf{g}, \mathbf{x} \rangle = \sum_{n=1}^{N} g_n x_n
Two signals are orthogonal if
\langle g(t), x(t) \rangle = 0
As with vectors, the norm of a signal is given by
\|g(t)\| = \sqrt{\langle g(t), g(t) \rangle} = \left[ \int_{t_1}^{t_2} g^2(t)\, dt \right]^{1/2}
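These definitions can be evaluated numerically; the sketch below (signals and interval are our own choices) computes the inner product of \sin t and \cos t over [0, 2\pi], which is zero, and the norm of \sin t, which is \sqrt{\pi}:

```python
import numpy as np

# Numerical inner product <g, x> = integral of g(t) x(t) dt over [t1, t2]
N = 100000
t, dt = np.linspace(0, 2*np.pi, N, endpoint=False, retstep=True)

g = np.sin(t)
x = np.cos(t)

inner = np.sum(g * x) * dt            # ~ 0: sin and cos are orthogonal on [0, 2*pi]
norm_g = np.sqrt(np.sum(g * g) * dt)  # ||g|| = sqrt(<g, g>) = sqrt(E_g) ~ sqrt(pi)

print(inner, norm_g)
```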
Example - Signal Approximation (1)
Example 2.2
For the square signal g(t) shown in Fig. 2.10, find the component in g(t) of the form \sin t. In other words, approximate g(t) in terms of \sin t:
g(t) \sim c \sin t, \quad 0 \leq t \leq 2\pi,
so that the energy of the error signal is minimum.
Example - Signal Approximation (2)
In this case
x(t) = \sin t
and
E_x = \int_0^{2\pi} \sin^2 t \, dt = \pi
From Eq. (2.24), we find
c = \frac{1}{\pi} \int_0^{2\pi} g(t) \sin t \, dt = \frac{1}{\pi} \left[ \int_0^{\pi} (1) \sin t \, dt + \int_{\pi}^{2\pi} (-1) \sin t \, dt \right]
= \frac{1}{\pi} \left[ \int_0^{\pi} \sin t \, dt - \int_{\pi}^{2\pi} \sin t \, dt \right] = \frac{4}{\pi}
Therefore
g(t) \sim \frac{4}{\pi} \sin t
represents the best approximation of g(t) by the function \sin t, in the sense that it minimises the error signal energy. This sinusoidal component of g(t) is shown shaded in Fig. 2.10. As in vector spaces, we say that the square function g(t) shown in Fig. 2.10 has a component of the signal \sin t with magnitude \frac{4}{\pi}.
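The coefficient of Example 2.2 can be verified numerically (the grid resolution is our own choice):

```python
import numpy as np

# Square wave of Example 2.2: g(t) = 1 on [0, pi), -1 on [pi, 2*pi)
N = 400000
t, dt = np.linspace(0, 2*np.pi, N, endpoint=False, retstep=True)
g = np.where(t < np.pi, 1.0, -1.0)
x = np.sin(t)

Ex = np.sum(x * x) * dt        # energy of sin t over one period: pi
c = np.sum(g * x) * dt / Ex    # optimum coefficient: 4/pi ~ 1.2732

print(c)
```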
Complex signal space
The inner product of two complex-valued signals g(t) and x(t) over [t_1, t_2] is given by
\langle g(t), x(t) \rangle = \int_{t_1}^{t_2} g(t)\,x^*(t)\, dt,
where x^*(t) denotes the complex conjugate of x(t).
The autocorrelation function \psi_g(\tau) = \int_{-\infty}^{\infty} g(t)\,g(t+\tau)\, dt (for real g(t)) is the inner product of g(t) with a copy of itself shifted by \tau. Therefore \psi_g(\tau) is an indication of similarity between g(t) and itself for different time shifts.
Orthogonal bases
[Insert Figure 2.12: Representation of a vector in three-dimensional space.]
Orthogonal vector bases
Vectors x_1, x_2 and x_3 form a complete set of orthogonal basis vectors, where
g = c_1 x_1 + c_2 x_2 + c_3 x_3 \quad (52)
If {x_i} is not complete, an approximation error would exist (as per previous examples)
Basis vectors are not unique and depend on choice of coordinate system
Constants c_i are given by
c_i = \frac{\langle g, x_i \rangle}{\langle x_i, x_i \rangle} = \frac{\langle g, x_i \rangle}{\|x_i\|^2}, \quad \text{for } i = 1, 2, 3 \quad (54)
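The expansion in Eq. (52) with coefficients from Eq. (54) can be checked directly (the basis and the vector g below are our own illustrative choices):

```python
import numpy as np

# An orthogonal (not orthonormal) basis of R^3, chosen for illustration
x1 = np.array([1.0, 1.0, 0.0])
x2 = np.array([1.0, -1.0, 0.0])
x3 = np.array([0.0, 0.0, 2.0])

g = np.array([3.0, -1.0, 4.0])

# c_i = <g, x_i> / ||x_i||^2
c = [g @ xi / (xi @ xi) for xi in (x1, x2, x3)]

# Because the basis is complete, the expansion recovers g exactly
reconstructed = c[0]*x1 + c[1]*x2 + c[2]*x3
print(c, reconstructed)
```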
Orthogonal signals
Orthogonal vectors
Vectors are orthogonal if
\langle \mathbf{x}, \mathbf{y} \rangle = 0
Orthogonal signals
Consider a set of signals \{x_1(t), x_2(t), \ldots, x_N(t)\} defined over a time interval \Theta
Signals are orthogonal if
\int_{\Theta} x_m(t)\,x_n(t)\, dt = 0, \quad m \neq n
Orthonormal signals
Special case where all signal energies E_n = 1
Can be achieved by dividing x_n(t) by \sqrt{E_n}
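As a concrete check (the set \sin(nt) on [0, 2\pi] is our own illustrative choice), the sketch below verifies mutual orthogonality and then normalises each signal by \sqrt{E_n} so the Gram matrix of inner products becomes the identity:

```python
import numpy as np

N = 200000
t, dt = np.linspace(0, 2*np.pi, N, endpoint=False, retstep=True)

# x_n(t) = sin(nt), n = 1..4, are mutually orthogonal on [0, 2*pi]
signals = [np.sin(n * t) for n in range(1, 5)]
energies = [np.sum(x * x) * dt for x in signals]  # each E_n = pi

# Dividing each x_n(t) by sqrt(E_n) yields an orthonormal set
ortho = [x / np.sqrt(E) for x, E in zip(signals, energies)]
gram = np.array([[np.sum(xm * xn) * dt for xn in ortho] for xm in ortho])

print(np.round(gram, 6))  # ~ 4x4 identity matrix
```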
Signal Approximation
Consider the problem of approximating a signal g(t) over the time interval \Theta by a set of N mutually orthogonal signals x_1(t), x_2(t), \ldots, x_N(t):
g(t) \approx c_1 x_1(t) + c_2 x_2(t) + \ldots + c_N x_N(t)
= \sum_{n=1}^{N} c_n x_n(t)
If the energy E_e of the error signal e(t) is minimised, the optimum coefficients are
c_n = \frac{\int_{\Theta} g(t)\,x_n(t)\, dt}{\int_{\Theta} x_n^2(t)\, dt} = \frac{1}{E_n} \int_{\Theta} g(t)\,x_n(t)\, dt
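The procedure can be sketched numerically by approximating the square wave of Example 2.2 with the orthogonal set \sin(nt) on [0, 2\pi] (our own choice of basis) and watching the error energy shrink as N grows:

```python
import numpy as np

# Square wave of Example 2.2, approximated by N orthogonal signals sin(nt)
M = 400000
t, dt = np.linspace(0, 2*np.pi, M, endpoint=False, retstep=True)
g = np.where(t < np.pi, 1.0, -1.0)

def approx(N):
    gh = np.zeros_like(t)
    for n in range(1, N + 1):
        xn = np.sin(n * t)
        En = np.sum(xn * xn) * dt        # E_n = pi for every n
        cn = np.sum(g * xn) * dt / En    # c_n = 4/(n*pi) for odd n, 0 for even n
        gh += cn * xn
    return gh

# The error energy E_e decreases as more orthogonal signals are included
for N in (1, 3, 5, 19):
    e = g - approx(N)
    print(N, np.sum(e * e) * dt)
```

With N = 1 the error energy equals E_g - c_1^2 E_1 = 2\pi - 16/\pi, matching the single-component result of Example 2.2.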