Fourier Series and Integrals With Applications To Signal Analysis
$$ f_N(t) = \sum_{n=-N}^{N} \hat f_n\, e^{i2\pi n t/T}, \qquad (2.1) $$
where
$$ K_N(t-t') = \frac{1}{T}\sum_{n=-N}^{N} e^{i2\pi n (t-t')/T} = \sum_{n=-N}^{N}\left(\frac{1}{\sqrt{T}}\,e^{i2\pi n t/T}\right)\left(\frac{1}{\sqrt{T}}\,e^{i2\pi n t'/T}\right)^{*}, \qquad (2.4) $$
and, as shown in the following, approaches a delta function at points of continuity as N approaches infinity. The last form highlights the fact that this kernel can be represented as a sum of symmetric products of expansion functions in conformance with the general result in (1.301) and (1.302). Using the geometric series sum formula we readily obtain
$$ K_N(t-t') = \frac{\sin\left[2\pi(N+1/2)(t-t')/T\right]}{T\sin\left[\pi(t-t')/T\right]}, \qquad (2.5) $$
which is known as the Fourier series kernel. As is evident from (2.4) this kernel is periodic with period T and is comprised of an infinite series of regularly spaced peaks, each similar to the aperiodic sinc function kernel encountered in (1.254). A plot of T K_N(τ) for N = 5 as a function of (t − t')/T ≡ τ/T is shown in Fig. 2.1. The peak value attained by T K_N(τ) at τ/T = 0, ±1, ±2, . . . is 2N + 1 = 11.
Figure 2.1: The Fourier series kernel T K_N(τ) plotted versus τ/T for N = 5.
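The kernel lends itself to a quick numerical check. The following is a minimal sketch in Python/NumPy (the values T = 1 and the grid are illustrative assumptions, not taken from the text); it compares the defining sum (2.4) with the closed form (2.5) and confirms that the peak of T K_N is 2N + 1 = 11 for N = 5.

```python
import numpy as np

T, N = 1.0, 5                                   # illustrative values (N = 5 as in Fig. 2.1)
tau = np.linspace(-2*T, 2*T, 2000)              # sample grid
tau = tau[np.abs(np.sin(np.pi*tau/T)) > 1e-6]   # drop points at the removable singularities 0, +-T, +-2T

# Closed form (2.5): K_N(tau) = sin[2*pi*(N + 1/2)*tau/T] / (T*sin[pi*tau/T])
K_closed = np.sin(2*np.pi*(N + 0.5)*tau/T) / (T*np.sin(np.pi*tau/T))

# Defining sum (2.4): K_N(tau) = (1/T) * sum_{n=-N}^{N} exp(i*2*pi*n*tau/T)
n = np.arange(-N, N + 1)
K_sum = np.exp(2j*np.pi*np.outer(tau, n)/T).sum(axis=1).real / T

print(np.max(np.abs(K_closed - K_sum)))   # machine-precision agreement of the two forms
print(K_sum.max()*T, 2*N + 1)             # sampled peak of T*K_N is close to 2N + 1 = 11
```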
where ε is an arbitrarily small positive number. Let us first assume that f(t') is smooth, i.e., piecewise differentiable and continuous. In that case g(t, t') has the same properties provided t' ≠ t. This is true for the function in the integrand of the first and third integrals in (2.9). Hence by the RLL these vanish, so that I(t) is determined solely by the middle integral
$$ I(t) = \lim_{N\to\infty}\int_{t-\epsilon/2}^{t+\epsilon/2} f(t')\,\frac{\sin\left[2\pi(N+1/2)(t-t')/T\right]}{T\sin\left[\pi(t-t')/T\right]}\,dt'. \qquad (2.10) $$
with coefficients given by (2.2) converges in the interval −T /2 < t < T /2.
where
$$ \lambda_N(t) = \int_{t_1}^{T/2}\frac{\sin\left[2\pi(N+1/2)(t-t')/T\right]}{T\sin\left[\pi(t-t')/T\right]}\,dt'. \qquad (2.17) $$
The limiting form of the first integral on the right of (2.16) as N → ∞ has already been considered, so that
$$ \lim_{N\to\infty} f_N(t) = f_s(t) + \left[f\!\left(t_1^{+}\right) - f\!\left(t_1^{-}\right)\right]\lim_{N\to\infty}\lambda_N(t) \qquad (2.18) $$
and only the last limit introduces novel features. Confining our attention to this term we distinguish three cases: the interval −T/2 < t < t_1, wherein t' ≠ t so that the RLL applies, the interval t_1 < t < T/2, and the point of discontinuity t = t_1. In the first case λ_N(t) approaches zero. In the second case we divide the integration interval into three subintervals as in (2.9). Proceeding in identical fashion we find that λ_N(t) approaches unity. For t = t_1 we subdivide the integration interval into two subintervals as follows:
$$ \lambda_N(t_1) = \int_{t_1}^{t_1+\epsilon/2}\frac{\sin\left[2\pi(N+1/2)(t_1-t')/T\right]}{T\sin\left[\pi(t_1-t')/T\right]}\,dt' + \int_{t_1+\epsilon/2}^{T/2}\frac{\sin\left[2\pi(N+1/2)(t_1-t')/T\right]}{T\sin\left[\pi(t_1-t')/T\right]}\,dt', \qquad (2.19) $$
where again ε is an arbitrarily small positive quantity. In the second integral t' ≠ t_1 so that again the RLL applies and we obtain zero in the limit. Hence the limit is given by the first integral, which we compute as follows:
$$
\lim_{N\to\infty}\lambda_N(t_1) = \lim_{N\to\infty}\int_{t_1}^{t_1+\epsilon/2}\frac{\sin\left[2\pi(N+1/2)(t_1-t')/T\right]}{T\sin\left[\pi(t_1-t')/T\right]}\,dt'
= \lim_{N\to\infty}\int_{0}^{\frac{\pi\epsilon(2N+1)}{2T}}\frac{\sin x}{\pi(2N+1)\sin\frac{x}{2N+1}}\,dx
= \lim_{N\to\infty}\int_{0}^{\frac{\pi\epsilon(2N+1)}{2T}}\frac{\sin x}{\pi x}\,dx = \int_{0}^{\infty}\frac{\sin x}{\pi x}\,dx = \frac12 . \qquad (2.20)
$$
Summarizing the preceding results we have
$$ \lim_{N\to\infty}\lambda_N(t) = \begin{cases} 0; & -T/2 < t < t_1,\\ 1/2; & t = t_1,\\ 1; & t_1 < t < T/2. \end{cases} \qquad (2.21) $$
Returning to (2.18) and taking account of the continuity of fs (t) we have the
final result
$$ \lim_{N\to\infty} f_N(t_1) = \frac12\left[f\!\left(t_1^{+}\right) + f\!\left(t_1^{-}\right)\right]. \qquad (2.22) $$
Clearly this generalizes to any number of finite discontinuities within the expansion interval. Thus, for a piecewise differentiable function with step discontinuities the Fourier series statement (2.13) should be replaced by
$$ \frac12\left[f\!\left(t^{+}\right) + f\!\left(t^{-}\right)\right] = \sum_{n=-\infty}^{\infty}\hat f_n\, e^{i2\pi n t/T}. \qquad (2.23) $$
Although the limiting form (2.23) tells us what happens when the number
of terms in the series is infinite, it does not shed any light on the behavior of
the partial approximating sum for finite N. To assess the rate of convergence
we should examine (2.17) as a function of t with increasing N. For this purpose
let us introduce the function
$$ \mathrm{Sis}(x, N) = \int_{0}^{x}\frac{\sin\left[(N+1/2)\theta\right]}{2\sin(\theta/2)}\,d\theta \qquad (2.24) $$
so that the dimensionless parameter x is a measure of the distance from the step discontinuity (x = 0). The integrand in (2.24) is just the sum $(1/2)\sum_{n=-N}^{N}\exp(-in\theta)$, which we integrate term by term to obtain the alternative form
$$ \mathrm{Sis}(x, N) = \frac{x}{2} + \sum_{n=1}^{N}\frac{\sin(nx)}{n}. \qquad (2.25) $$
Note that for any N the preceding gives Sis(π, N) = π/2. As N → ∞ with 0 < x < π this series converges to π/2. A plot of (2.25) for N = 10 and N = 20 is shown in Fig. 2.2. For larger values of N the oscillatory behavior of Sis(x, N)
Figure 2.2: Sis(x, N) plotted versus x for N = 10 and N = 20.
damps out and the function approaches the asymptotes ±π/2 for x ≠ 0. Note that as N is increased the peak amplitude of the oscillations does not diminish but migrates toward the location of the step discontinuity, i.e., x = 0. The numerical value of the overshoot is ±1.852, or about 18% above (below) the positive (negative) asymptote. When expressed in terms of (2.25), (2.17) reads
$$ \lambda_N(t) = \frac{1}{\pi}\,\mathrm{Sis}\!\left[(T/2 - t)\,2\pi/T,\, N\right] - \frac{1}{\pi}\,\mathrm{Sis}\!\left[(t_1 - t)\,2\pi/T,\, N\right]. \qquad (2.26) $$
Taking account of the limiting forms of (2.25) we note that, as long as t < T/2, in the limit as N → ∞ the contribution from the first term on the right of (2.26) approaches 1/2, while the second term tends to −1/2 for t < t_1, 1/2 for t > t_1, and 0 for t = t_1, in agreement with the limiting forms enumerated in (2.21). Results of sample calculations of λ_N(t) (with t_1 = 0) for N = 10, 20, and 50 are plotted in Fig. 2.3. Examining these three curves we again observe that increasing N does not lead to a diminution of the maximum amplitude of the oscillations.
Figure 2.3: λ_N(t) plotted versus t/T for N = 10, 20, and 50 (t_1 = 0).
For large N we may replace the sine in the denominator of (2.24) by its argument, which gives the asymptotic form Sis(x, N) ≈ Si[(N + 1/2)x], where Si(z) is the sine integral function defined in (1.278e) and plotted in Fig. 1.15. If we use this asymptotic form in (2.26), we get
$$ \lambda_N(t) \approx \frac{1}{\pi}\,\mathrm{Si}\!\left[(N+1/2)(T/2 - t)\,2\pi/T\right] - \frac{1}{\pi}\,\mathrm{Si}\!\left[(N+1/2)(t_1 - t)\,2\pi/T\right], $$
which shows directly that N enters as a scaling factor of the abscissa. Thus as the number of terms in the approximation becomes infinite the oscillatory behavior in Fig. 2.3 compresses into two vanishingly small time intervals which in the limit may be represented by a pair of infinitely thin spikes at t = 0+ and t = 0−. Since in the limit these spikes enclose zero area we have here a direct demonstration of convergence in the mean (i.e., the LMS error rather than the error itself tending to zero with increasing N). This type of convergence, characterized by the appearance of an overshoot as a step discontinuity is approached, is referred to as the Gibbs phenomenon, in honor of Willard Gibbs, one of America's greatest physicists. The Gibbs phenomenon results whenever an LMS approximation is employed for a function with step discontinuities and is by no means limited to approximations by sinusoids (i.e., Fourier series). In fact the numerical example in Fig. 1.11 demonstrates it for Legendre polynomials.
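The overshoot figure quoted above is easy to reproduce numerically from (2.25). The following sketch (the particular values of N are arbitrary choices, and the quadrature for Si(π) is only an assumed convenience) evaluates Sis(x, N) near its first maximum and compares the peak with Si(π):

```python
import numpy as np

def sis(x, N):
    """Sis(x, N) = x/2 + sum_{n=1}^{N} sin(n*x)/n, Eq. (2.25)."""
    n = np.arange(1, N + 1)
    return x/2 + np.sum(np.sin(np.outer(x, n))/n, axis=1)

t = np.linspace(1e-9, np.pi, 200001)
Si_pi = np.trapz(np.sin(t)/t, t)                   # Si(pi) = 1.8519...

# The overshoot sits in the first lobe, near x = pi/(N + 1/2); its height tends to Si(pi).
for N in (10, 20, 100, 1000):
    x = np.linspace(1e-6, 3*np.pi/(N + 0.5), 4001)
    peak = sis(x, N).max()
    print(N, peak, peak/(np.pi/2) - 1)             # peak -> Si(pi), i.e. ~18% above pi/2
print(Si_pi)
```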
Another aspect of the Gibbs phenomenon worth mentioning is that it affords an example of nonuniform convergence. For, as we have seen, lim_{N→∞} λ_N(t_1) = 1/2. On the other hand, the limit approached when N is allowed to approach infinity first and the function is subsequently evaluated at t as it is made to approach t_1 (say, through positive values) is evidently unity. Expressed in symbols, these two alternative ways of approaching the limit are
In other words, the result of the limiting process depends on the order in which the limits are taken, a characteristic of nonuniform convergence. We can view this as a detailed interpretation of the limiting processes implied in the Fourier series at step discontinuities which the notation (2.23) does not make explicit.
Replacing f_s(t') by f_s^{ext}(t') in (2.28), we can mimic the limiting process employed following (2.17) without change. Carrying this out we get an identical result at each endpoint, viz., [f_s(−T/2) + f_s(T/2)]/2. Clearly, as far as any “real” discontinuity at an interior point of the original expansion interval is concerned, say at t = t_1, its contribution to the limit is obtainable by simply adding the last term in (2.27). Hence
Figure: partial Fourier sum f_10(t) plotted versus t.
and the same number of expansion functions. When the discontinuity occurs in the interior of the interval, the convergence is also marred by the Gibbs oscillations, as illustrated in Fig. 2.7 for the pulse p_{.5}(t − .5), again using 21 sinusoids. Figure 2.8 shows a stem diagram of the magnitude of the Fourier coefficients fˆ_n plotted as a function of m = n + 10, n = −10, −9, . . . , 10. Such Fourier coefficients are frequently referred to as (discrete) spectral lines and are intimately related to the concept of the frequency spectrum of a signal, as will be discussed in detail in connection with the Fourier integral.
Figure 2.7: Fourier series approximation of the pulse p_{.5}(t − .5) using 21 sinusoids, plotted versus t.
Figure 2.8: Magnitude of Fourier series coefficients for the pulse in Fig. 2.7
Alternatively, we can replace the kernel by the original geometric series and write
$$ \sum_{n=-\infty}^{\infty}\left(\frac{1}{\sqrt{T}}\,e^{i2\pi n t/T}\right)\left(\frac{1}{\sqrt{T}}\,e^{i2\pi n t'/T}\right)^{*} = \frac{1}{T}\sum_{n=-\infty}^{\infty} e^{i2\pi n(t-t')/T} = \sum_{k=-\infty}^{\infty}\delta(t - t' - kT). \qquad (2.31) $$
are of particular interest better than the original series. This broad subject is
treated in detail in books specializing in spectral estimation. Here we merely
illustrate the technique with the so-called Fejer summation approach, wherein
the modified trigonometric sum actually does converge to the original function.
In fact this representation converges uniformly to the given function and thus
completely eliminates the Gibbs phenomenon.
The Fejer [16] summation approach is based on the following result from the theory of limits. Given a sequence f_N such that lim_{N→∞} f_N = f exists, the arithmetic average
$$ \sigma_M = \frac{1}{M+1}\sum_{N=0}^{M} f_N \qquad (2.32) $$
converges to the same limit, i.e.,
$$ \lim_{M\to\infty}\sigma_M = f. \qquad (2.33) $$
In the present case we take for f N = f N (t) , i.e., the partial Fourier series
summation. Thus if this partial sum approaches f (t) as N → ∞, the preceding
theorem states that σ M = σ M (t) will also converge to f (t). Since f N (t) is just
a finite sum of sinusoids we should be able to find a closed-form expression for
σ M (t) by a geometrical series summation. Thus
$$
\sigma_M(t) = \frac{1}{M+1}\Big\{\hat f_0 + \big[\hat f_0 + \hat f_1 e^{i2\pi t/T} + \hat f_{-1}e^{-i2\pi t/T}\big] + \big[\hat f_0 + \hat f_1 e^{i2\pi t/T} + \hat f_2 e^{i2(2\pi t/T)} + \hat f_{-1}e^{-i2\pi t/T} + \hat f_{-2}e^{-i2(2\pi t/T)}\big] + \dots\Big\}
$$
$$
= \frac{1}{M+1}\Big\{(M+1)\hat f_0 + \sum_{k=1}^{M}\hat f_k\,(M-k+1)\,e^{ik(2\pi t/T)} + \sum_{k=1}^{M}\hat f_{-k}\,(M-k+1)\,e^{-ik(2\pi t/T)}\Big\}.
$$
After changing the summation index from k to −k in the last sum we get
$$ \sigma_M(t) = \sum_{k=-M}^{M}\hat f_k\left(1 - \frac{|k|}{M+1}\right)e^{ik(2\pi t/T)}, \qquad (2.34) $$
whose coefficients are obtained by multiplying the Fourier series coefficients fˆ_k by the triangular spectral window
$$ \hat w_k(M) = 1 - \frac{|k|}{M+1}, \qquad k = 0, \pm1, \pm2, \dots, \pm M. \qquad (2.35) $$
We can view (2.34) from another perspective if we substitute the integral representation (2.3) of the partial sum f_N(t) into (2.32) and carry out the summation on the Fourier series kernel (2.5). Thus after setting ξ = 2π(t − t')/T we get the following alternative form:
$$ \sigma_M(t) = \frac{1}{M+1}\int_{-T/2}^{T/2} f(t')\sum_{N=0}^{M}\frac{\sin\left[(N+1/2)\xi\right]}{T\sin(\xi/2)}\,dt' = \frac{1}{M+1}\int_{-T/2}^{T/2}\frac{f(t')\,dt'}{T\sin(\xi/2)}\sum_{N=0}^{M}\left(\frac{e^{i(N+1/2)\xi}}{2i} - \frac{e^{-i(N+1/2)\xi}}{2i}\right). \qquad (2.36) $$
The sum over N can be carried out with the aid of the geometric series formula
$$ \sum_{N=0}^{M} e^{iN\xi} = e^{i\xi M/2}\,\frac{\sin\left[(M+1)\,\xi/2\right]}{\sin(\xi/2)}. $$
This representation of σ_M(t) is very much in the spirit of (2.3). Indeed, in view of (2.33), σ_M(t) must converge to the same limit as the associated Fourier series. The new kernel function
$$ K_M(t-t') = \frac{\sin^2\left[(M+1)\,\pi(t-t')/T\right]}{T\,(M+1)\sin^2\left[\pi(t-t')/T\right]} \qquad (2.38) $$
is called the Fejer kernel and (2.34) the Fejer sum. Just like the Fourier series kernel, the Fejer kernel is periodic with period T so that in virtue of (2.33) we may write
$$ \lim_{M\to\infty}\frac{\sin^2\left[(M+1)\,\pi(t-t')/T\right]}{T\,(M+1)\sin^2\left[\pi(t-t')/T\right]} = \sum_{k=-\infty}^{\infty}\delta(t - t' - kT). \qquad (2.39) $$
Note that in the Fejer sum the Gibbs oscillations are absent but that the ap-
proximation underestimates the magnitude of the jump at the discontinuity.
In effect, to achieve a good fit to the “corners” at a jump discontinuity the
penalty one pays with the Fejer sum is that more terms are needed than with
a Fourier sum to approximate the smooth portions of the function. To get
some idea of the rate of convergence to the “corners” plots of Fejer sums for
M = 10, 25, 50, and 100 are shown in Fig. 2.10, where (for t > 0.5) σ 10 (t) <
σ 25 (t) < σ 50 (t) < σ 100 (t) .
Figure: comparison of a Fourier sum and a Fejer sum (curves annotated “← Fourier sum” and “← Fejer sum”), plotted versus t.
In passing we remark that the Fejer sum (2.34) is not a partial Fourier series sum because the expansion coefficients themselves, σ̂_k = ŵ_k(M) fˆ_k, are functions of M. Trigonometric sums of this type are not unique. In fact, by forming the arithmetic mean of the Fejer sum itself,
$$ \sigma_M^{(1)}(t) = \frac{1}{M+1}\sum_{N=0}^{M}\sigma_N(t), \qquad (2.40) $$
we can again avail ourselves of the limit theorem in (2.32) and (2.33) and conclude that the partial sum σ_M^{(1)}(t) must approach f(t) in the limit of large M, i.e.,
$$ \lim_{M\to\infty}\sigma_M^{(1)}(t) = f(t). \qquad (2.41) $$
For any finite M we may regard σ_M^{(1)}(t) as the second-order Fejer approximation. Upon replacing M by N in (2.34) and substituting for σ_N(t) we can easily carry
out one of the sums and write the final result in the form
$$ \sigma_M^{(1)}(t) = \sum_{k=-M}^{M}\hat f_k\,\hat w_k^{(1)}(M)\,e^{ik(2\pi t/T)}, \qquad (2.42) $$
where
$$ \hat w_k^{(1)}(M) = \frac{1}{M+1}\sum_{n=1}^{M-|k|+1}\frac{n}{|k|+n}, \qquad k = 0, \pm1, \pm2, \dots, \pm M \qquad (2.43) $$
is the new spectral window. We see that we no longer have the simple linear
taper that obtains for the first-order Fejer approximation. Unfortunately this
sum does not appear to lend itself to further simplification. A plot of (2.43) in
the form of a stem diagram is shown in Fig. 2.11 for M = 12. Figure 2.12 shows
plots of the first- and second-order Fejer approximations for a rectangular pulse
using M = 25. We see that the second-order approximation achieves a greater
degree of smoothing but underestimates the pulse amplitude significantly more
than does the first-order approximation. Apparently, to reduce the amplitude error to the same level as achieved with the first-order approximation, much larger spectral widths (values of M) are required. This is consistent with the
concave nature of the spectral taper in Fig. 2.11 which, for the same bandwidth,
will tend to remove more energy from the original signal spectrum than a lin-
ear taper.
Clearly, higher order Fejer approximations can be generated recursively with the formula
$$ \sigma_M^{(m)}(t) = \frac{1}{M+1}\sum_{k=0}^{M}\sigma_k^{(m-1)}(t), \qquad (2.44a) $$
Figure 2.11: The spectral window ŵ_k^{(1)}(M) for M = 12 (stem diagram).
Figure 2.12: First- and second-order Fejer approximations of a rectangular pulse using M = 25.
wherein σ_k^{(0)}(t) ≡ σ_k(t). It should be noted that Fejer approximations of all orders obey the limiting property
$$ \lim_{M\to\infty}\sigma_M^{(m-1)}(t) = \frac12\left[f\!\left(t^{+}\right) + f\!\left(t^{-}\right)\right]; \qquad m = 1, 2, 3, \dots, \qquad (2.44b) $$
i.e., at step discontinuities the partial sums converge to the arithmetic average
of the given function, just like ordinary Fourier series. The advantage of higher
order Fejer approximations is that they provide for a greater degree of smoothing
in the neighborhood of step discontinuities. This is achieved at the expense of
more expansion terms (equivalently, requiring wider bandwidths) to reach a
given level of approximation accuracy.
we can still interpret them as the projections of the signal f (t) along the
basis functions ei2πnt/T and think of them geometrically as in Fig. 1.3. Because
each fˆn is uniquely associated with a radian frequency of oscillation ω n , with
ωn /2π = n/T Hz, f is said to constitute the frequency domain representation
of the signal, and the elements of f the signal (line) spectrum. A very important
relationship between the frequency domain and the time domain representations
of the signal is Parseval's formula
$$ \frac{1}{T}\int_{-T/2}^{T/2}\left|f(t)\right|^2 dt = \sum_{n=-\infty}^{\infty}\left|\hat f_n\right|^2. \qquad (2.46) $$
We now suppose that the Fourier series coefficients fˆn and ĝn of f (t) and g (t),
defined within −T /2, T /2, are known. What will be the Fourier coefficients ĥm
of h (t) when expanded in the same interval? The answer is readily obtained
when we represent f (τ ) by its Fourier series (2.13) and similarly g (t − τ ) . Thus
$$
h(t) = \frac{1}{T}\int_{-T/2}^{T/2}\left(\sum_{n=-\infty}^{\infty}\hat f_n\, e^{i2\pi n\tau/T}\right)\left(\sum_{m=-\infty}^{\infty}\hat g_m\, e^{i2\pi m(t-\tau)/T}\right)d\tau
= \frac{1}{T}\sum_{m=-\infty}^{\infty}\hat g_m\, e^{i2\pi m t/T}\sum_{n=-\infty}^{\infty}\hat f_n\int_{-T/2}^{T/2} e^{i2\pi(n-m)\tau/T}\,d\tau
$$
$$
= \frac{1}{T}\sum_{m=-\infty}^{\infty}\hat g_m\, e^{i2\pi m t/T}\sum_{n=-\infty}^{\infty}\hat f_n\,T\,\delta_{nm}
= \sum_{m=-\infty}^{\infty}\hat g_m\hat f_m\, e^{i2\pi m t/T} = \sum_{m=-\infty}^{\infty}\hat h_m\, e^{i2\pi m t/T} \qquad (2.50)
$$
from which we identify ĥm = ĝm fˆm . A dual situation frequently arises when we
need the Fourier coefficients of the product of the two functions, e.g., q(t) ≡ f (t)
g (t) . Here we can proceed similarly
Symmetries
Frequently (but not always ) the signal in the time domain will be real. In that
case the formula for the coefficients gives
which means that the magnitude of the line spectrum is symmetrically disposed
with respect to the index n = 0. Simplifications also arise when the signal is
either an even or an odd function with respect to t = 0. In case of an even
function f (t) = f (−t) we obtain
$$ \hat f_n = \frac{2}{T}\int_{0}^{T/2} f(t)\cos(2\pi n t/T)\,dt \qquad (2.53) $$
It is worth noting that (2.53)–(2.54) hold for complex functions in general, independent of (2.52).
Figure 2.13: Even extension of f(t) into the interval −T, 0.
As may be verified directly, they are orthogonal over the interval 0, T . In our
compact notation this reads
$$ (\phi_{cn}, \phi_{cm}) = (T/\epsilon_n)\,\delta_{nm}, $$
where we have introduced the abbreviation
$$ \epsilon_n = \begin{cases} 1; & n = 0,\\ 2; & n > 0, \end{cases} $$
which is usually referred to as the Neumann symbol.
The convergence properties of the cosine series at points of continuity and
at jump discontinuities within the interval are identical to those of the com-
plete Fourier series from which, after all, the cosine series may be derived.
The cosine expansion functions form a complete set in the space of piecewise
differentiable functions whose derivatives must vanish at the interval endpoints.
This additional restriction arises because of the vanishing of the derivative of
cos(πnt/T) at t = 0 and t = T. In accordance with (1.303), the formal statement of completeness may be phrased in terms of an infinite series of products of the orthonormal expansion functions √(ε_n/T) φ_cn(t) as follows:
$$ \delta(t-t') = \sum_{n=0}^{\infty}\sqrt{\frac{\epsilon_n}{T}}\cos(\pi n t/T)\,\sqrt{\frac{\epsilon_n}{T}}\cos(\pi n t'/T). \qquad (2.61) $$
Sine Series
If instead of an even extension of f (t) into the interval −T, 0 as in Fig. 2.13, we
employ an odd extension, as in Fig. 2.15, and expand the function f (|t|) sign(t)
in a Fourier series within the interval −T, T , we find that the cosine terms
Figure 2.15: Odd extension of f(t) into the interval −T, 0.
vanish and the resulting Fourier series is comprised entirely of sines. Within the
original interval 0, T it converges to the prescribed function f (t) and constitutes
the so-called sine series expansion, to wit,
$$ f(t) = \sum_{n=0}^{\infty}\hat f_n^{\,s}\sin(\pi n t/T), \qquad (2.62) $$
where
$$ \hat f_n^{\,s} = \frac{2}{T}\int_{0}^{T} f(t)\sin(\pi n t/T)\,dt. \qquad (2.63) $$
Evidently because the sine functions vanish at the interval endpoints the sine
series will necessarily converge to zero there. Since at a discontinuity a Fourier
series always converges to the arithmetic mean of the left and right endpoint
values, we see from Fig. 2.15 that the convergence of the sine series to zero at
the endpoints does not require that the prescribed function also vanishes there.
Of course, if this is not the case, only LMS convergence is guaranteed at the
endpoints and an approximation by a finite number of terms will be vitiated
by the Gibbs effect. A representative illustration of the expected convergence
behavior in such cases can be had by referring to Fig. 2.5. For this reason the
sine series is to be used only with functions that vanish at the interval endpoints.
In such cases convergence properties very similar to those of cosine series are
achieved. A case in point is the approximation shown in Fig. 2.6.
The sine expansion functions
they form a complete set in the space of piecewise differentiable functions that
vanish at the interval endpoints. Again the formal statement of this complete-
ness may be summarized by the delta function representation
$$ \delta(t-t') = \sum_{n=0}^{\infty}\sqrt{\frac{2}{T}}\sin(\pi n t/T)\,\sqrt{\frac{2}{T}}\sin(\pi n t'/T). \qquad (2.66) $$
$$ f(t) = \sum_{n=-N}^{N} c_n\, e^{i\frac{2\pi n t}{T}}; \qquad 0 \le t \le T. \qquad (2.67) $$
$$ f(\ell\Delta t) = \sum_{m=0}^{M-1} c_{m-N}\, e^{i\frac{2\pi\ell(m-N)}{M}}. \qquad (2.69) $$
$$ \sum_{\ell=0}^{M-1} e^{i\frac{2\pi\ell(m-k)}{M}} = M\,\delta_{mk}. \qquad (2.70) $$
Upon multiplying both sides of (2.69) by $e^{-i\frac{2\pi\ell k}{M}}$, summing on ℓ, and using (2.70), we obtain the solution for the coefficients
$$ c_{m-N} = \frac{1}{M}\sum_{\ell=0}^{M-1} f(\ell\Delta t)\, e^{-i\frac{2\pi\ell(m-N)}{M}}. \qquad (2.71) $$
$$ c_n = \frac{1}{2N+1}\sum_{\ell=0}^{2N} f(\ell\Delta t)\, e^{-i\frac{2\pi\ell n}{2N+1}}. \qquad (2.72) $$
On the other hand we know that the solution for c_n in (2.67) is also given by the integral
$$ c_n = \frac{1}{T}\int_{0}^{T} f(t)\,e^{-i\frac{2\pi n t}{T}}\,dt. \qquad (2.73) $$
If in (2.72) we replace 1/ (2N + 1) by its equivalent Δt/T , we can interpret (2.71)
as a Riemann sum approximation to (2.73). However we know from the fore-
going that (2.72) is in fact an exact solution of (2.69). Thus whenever f (t)
is comprised of a finite number of sinusoids the Riemann sum will represent
the integral (2.73) exactly provided 2N +1 is chosen equal to or greater than the
number of sinusoids. Evidently, if the number of sinusoids is exactly 2N +1, the
cn as computed using either (2.73) or (2.72) must be identically zero whenever
|n| > N. If f (t) is a general piecewise differentiable function, then (2.67) with
the coefficients determined by (2.72) provides an interpolation to f (t) in terms
of sinusoids. In fact by substituting (2.72) into (2.67) and again summing a
geometric series we obtain the following explicit interpolation formula:
$$ f(t) = \sum_{\ell=0}^{M-1} f(\ell\Delta t)\,\frac{\sin\left[\pi\left(\frac{t}{\Delta t} - \ell\right)\right]}{M\sin\left[\frac{\pi}{M}\left(\frac{t}{\Delta t} - \ell\right)\right]}. \qquad (2.74) $$
Unlike the LMS approximation problem underlying the classical Fourier series,
the determination of the coefficients in the interpolation problem does not re-
quire the evaluation of integrals. This in itself is of considerable computational
advantage. How do interpolation-type approximations compare with LMS ap-
proximations? Figure 2.16 shows the interpolation of e−t achieved with 11
sinusoids while Fig. 2.17 shows the approximation with the same number of si-
nusoids using the LMS approximation. We note that the fit is comparable in the two cases except at the endpoints where, as we know, the LMS approximation necessarily converges to (1 + e^{−1})/2. As the number of terms in the interpolation is increased the fit within the interval improves. Nevertheless, the interpolated function continues to show considerable undamped oscillatory behavior near the endpoints, as shown by the plot in Fig. 2.18.
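The following sketch illustrates the coefficient formula (2.72) and the synthesis (2.67) for f(t) = e^{−t} with 2N + 1 = 11 sinusoids. The sampling setup is an assumption chosen to mirror the discussion around Fig. 2.16; only the qualitative behavior (exact interpolation at the samples, periodic endpoint value) is the point here.

```python
import numpy as np

T, N = 1.0, 5                      # 2N + 1 = 11 sinusoids, as in the example above (assumed)
M = 2*N + 1
dt = T/M
l = np.arange(M)
samples = np.exp(-l*dt)            # f(l*dt) for f(t) = exp(-t)

n = np.arange(-N, N + 1)
# c_n = (1/(2N+1)) * sum_l f(l*dt) exp(-i 2 pi n l / (2N+1)), Eq. (2.72)
c = (samples*np.exp(-2j*np.pi*np.outer(n, l)/M)).sum(axis=1)/M

t = np.linspace(0, T, 1001)
interp = (c[None, :]*np.exp(2j*np.pi*np.outer(t, n)/T)).sum(axis=1).real   # Eq. (2.67)

print(np.abs(interp[200:800] - np.exp(-t[200:800])).max())  # modest error in the interior
print(interp[0], interp[-1])       # both equal f(0) = 1: the interpolant is T-periodic
```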
Figure 2.16: Interpolation of e^{−t} using 11 sinusoids.
Figure 2.17: LMS approximation of e^{−t} using 11 sinusoids.
$$ f(m\Delta t) = \sum_{n=0}^{M-1} c_n^{c}\cos\left[\pi n m/(M - 1/2)\right]; \qquad m = 0, 1, 2, \dots, M-1, \qquad (2.76) $$
Figure 2.18: Interpolation of e^{−t} with an increased number of sinusoids.
where the ccn are the unknown coefficients. The solution for the ccn is made
somewhat easier if one first extends the definition of f (mΔt) to negative indices
as in Fig. 2.13 and rewrites (2.76) in terms of complex exponentials. Thus
$$ f(m\Delta t) = \sum_{n=-(M-1)}^{M-1} c_n^{c}\, e^{i2\pi n m/(2M-1)}; \qquad m = 0, \pm1, \pm2, \dots, \pm(M-1), \qquad (2.77) $$
$$ \sum_{n=-(M-1)}^{M-1} e^{i2\pi n(m-k)/(2M-1)} = \frac{\sin\left[\pi(m-k)\right]}{\sin\left[\pi(m-k)/(2M-1)\right]} \equiv (2M-1)\,\delta_{mk} \qquad (2.79) $$
$$
c_n^{c} = \frac{1}{2M-1}\sum_{m=-(M-1)}^{M-1} f(m\Delta t)\, e^{-i2\pi n m/(2M-1)}
= \frac{1}{2M-1}\sum_{m=0}^{M-1}\epsilon_m\, f(m\Delta t)\cos\left[2\pi n m/(2M-1)\right]
= \frac{1}{M-1/2}\sum_{m=0}^{M-1}(\epsilon_m/2)\, f(m\Delta t)\cos\left[\pi n m/(M-1/2)\right].
$$
Since the coefficients in (2.76) combine the +n and −n terms of (2.77), they are larger by the factor ε_n, and we obtain
$$
c_n^{c} = \frac{2}{M-1/2}\sum_{m=0}^{M-1}(\epsilon_n\epsilon_m/4)\, f(m\Delta t)\cos\left[\pi n m/(M-1/2)\right]; \qquad n = 0, 1, 2, \dots, M-1. \qquad (2.80)
$$
The final interpolation formula now follows through a direct substitution of (2.80) into
$$ f(t) = \sum_{n=0}^{M-1} c_n^{c}\cos(\pi n t/T). \qquad (2.81) $$
$$ f(t) = \frac{1}{M-1/2}\sum_{m=0}^{M-1}(\epsilon_m/2)\, f(m\Delta t)\left\{1 + k_M(t/\Delta t - m) + k_M(t/\Delta t + m)\right\}, \qquad (2.82) $$
where
$$ k_M(t) = \cos\left(\frac{\pi M}{2M-1}\,t\right)\frac{\sin\left[\frac{\pi(M-1)}{2(M-1/2)}\,t\right]}{\sin\left[\frac{\pi}{2(M-1/2)}\,t\right]}. \qquad (2.83) $$
$$ \Delta t = \frac{T}{2M} \qquad (2.84) $$
and forcing the first and the last step size to equal Δt/2 we replace (2.76) by
$$ f\left[\Delta t\,(m + 1/2)\right] = \sum_{n=0}^{M-1}\hat c_n^{\,c}\cos\left[\pi n(2m+1)/2M\right]; \qquad m = 0, 1, 2, \dots, M-1. \qquad (2.85) $$
With the aid of the geometrical sum formula we can readily verify the orthogo-
nality relationship
$$ \sum_{m=0}^{M-1}\cos\left[\pi n(2m+1)/2M\right]\cos\left[\pi k(2m+1)/2M\right] = \frac{M}{\epsilon_n}\,\delta_{nk}, \qquad (2.86) $$
so that
$$ \hat c_n^{\,c} = \frac{\epsilon_n}{M}\sum_{m=0}^{M-1} f\left[\Delta t\,(m+1/2)\right]\cos\left[\pi n(2m+1)/2M\right]. \qquad (2.87) $$
$$ f(t) = \frac{1}{M}\sum_{m=0}^{M-1} f\left[\Delta t\,(m+1/2)\right]\left\{1 + \hat k_M\!\left(\tau_{+}\right) + \hat k_M\!\left(\tau_{-}\right)\right\}, \qquad (2.88) $$
where τ_± = t/Δt ± (m + 1/2) and
$$ \hat k_M(t) = \cos(\pi t/2)\,\frac{\sin\left[\frac{\pi(M-1)}{2M}\,t\right]}{\sin\left[\frac{\pi}{2M}\,t\right]}. \qquad (2.89) $$
Equation (2.85) together with (2.87) is usually referred to as the discrete cosine
transform pair. Here we have obtained it as a by-product along our route toward
a particular interpolation formula comprised of cosine functions.
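The statement that (2.85) and (2.87) form an exact transform pair can be verified directly. The sketch below starts from arbitrary (randomly assumed) coefficients, synthesizes the samples with (2.85), and recovers the coefficients with (2.87); the value of M is an assumption.

```python
import numpy as np

M = 8
rng = np.random.default_rng(0)
c_true = rng.standard_normal(M)                     # assumed coefficients c^c_n

m = np.arange(M)
n = np.arange(M)
C = np.cos(np.pi*np.outer(n, 2*m + 1)/(2*M))        # C[n, m] = cos[pi n (2m+1)/(2M)]

samples = c_true @ C                                # Eq. (2.85): f[dt(m+1/2)]

eps = np.where(n == 0, 1.0, 2.0)                    # Neumann symbol
c_rec = (eps/M)*(C @ samples)                       # Eq. (2.87)

print(np.abs(c_rec - c_true).max())                 # ~1e-15: the discrete cosine pair is exact
```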
$$ f(t) \sim \sum_{n=1}^{N}\hat f_n\,\psi_n(t), \qquad (2.90) $$
wherein
$$ \psi_n(t) = A_n\sin\mu_n t + B_n\cos\mu_n t \qquad (2.91) $$
and A_n and B_n are suitable normalization constants. It is not hard to show that as long as all the μ_n are distinct the Gram matrix Γ_nm = (ψ_n, ψ_m) is nonsingular, so that the normal equations yield a unique set of expansion coefficients fˆ_n. Of course their computation would be significantly simplified if it were possible to choose a set of radian frequencies μ_n such that the Gram matrix is diagonal, or, equivalently, such that the ψ_n are orthogonal over the chosen
interval. We know that this is always the case for harmonically related radian
frequencies. It turns out that orthogonality also obtains when the radian fre-
quencies are not harmonically related provided they are chosen such that for
a given pair of real constants α and β the ψ n (t) satisfy the following endpoint
conditions:
To prove orthogonality we first observe that the ψ_n(t) satisfy the differential equation of the harmonic oscillator, i.e.,
$$ \frac{d^2\psi_n}{dt^2} + \mu_n^2\,\psi_n = 0, \qquad (2.93) $$
where we may regard ψ_n as an eigenvector and μ_n^2 as the eigenvalue of the differential operator −d²/dt². Next we multiply (2.93) by ψ_m and integrate the result over a ≤ t ≤ b to obtain
$$ \psi_m\frac{d\psi_n}{dt}\bigg|_{a}^{b} - \int_{a}^{b}\frac{d\psi_m}{dt}\frac{d\psi_n}{dt}\,dt + \mu_n^2\int_{a}^{b}\psi_m\psi_n\,dt = 0. \qquad (2.94) $$
We now observe that substitution of the endpoint conditions (2.92) into the left side of (2.96) yields zero. This implies orthogonality provided we assume that for n ≠ m the eigenvalues μ_m and μ_n are distinct. For then
$$ \int_{a}^{b}\psi_n\psi_m\,dt = 0; \qquad n \ne m. \qquad (2.97) $$
The fact that the eigenvalues μ_n^2 are distinct follows from a direct calculation. To compute the eigenvalues we first substitute (2.91) into (2.92), which yields the following set of homogeneous algebraic equations:
$$ (\sin\mu_n a - \alpha\mu_n\cos\mu_n a)\,A_n + (\cos\mu_n a + \alpha\mu_n\sin\mu_n a)\,B_n = 0, \qquad (2.98a) $$
$$ (\sin\mu_n b - \beta\mu_n\cos\mu_n b)\,A_n + (\cos\mu_n b + \beta\mu_n\sin\mu_n b)\,B_n = 0. \qquad (2.98b) $$
Figure 2.21: Graphical determination of the eigenvalues, with the intersection points marked at μ_1 b, μ_2 b, and μ_3 b.
From Fig. 2.21 we note that as n increases the abscissas of the points where the straight line intersects the tangent curves approach (π/2)(2n − 1) ≈ nπ. Hence for large n the radian frequencies of the anharmonic expansion (2.90) are asymptotically harmonic, i.e.,
$$ \mu_n \underset{n\to\infty}{\sim} n\pi/b. \qquad (2.103) $$
Taking account of (2.103) in (2.102) we also observe that for large n formula (2.102) represents the expansion coefficient of a sine Fourier series (2.63). Thus the anharmonic character of the expansion appears to manifest itself only for a finite number of terms. Hence we would expect the convergence properties of anharmonic expansions to be essentially the same as those of harmonic Fourier series.
An anharmonic series may be taken as a generalization of a Fourier series.
For example, it reduces to the (harmonic) sine series in (2.62) when α = β = 0
and when α = β → ∞ to the (harmonic) cosine series (2.58), provided f (a) = 0
and f (b) = 0. When the endpoint conditions (2.92) are replaced by a periodicity
condition we obtain the standard Fourier series.
Thus unlike in the case of a discrete set of sinusoids the unknown “coefficients”
fˆ (ω ) now span a continuum. In fact, according to (2.106), to find fˆ (ω ) we
must solve an integral equation.
where we have set F (ω) = 2π fˆ (ω) which shall be referred to as the Fourier
Integral (or the Fourier transform) of f (t) . Substituting this in (2.104) and
integrating with respect to ω we get
$$ f_\Omega(t) = \int_{-\infty}^{\infty} f(t')\,\frac{\sin\left[(t-t')\,\Omega\right]}{\pi(t-t')}\,dt'. \qquad (2.109) $$
The corresponding LMS error ε_{Ω min} is
$$ \epsilon_{\Omega\,\min} = \left(f - f_\Omega,\; f - f_\Omega\right) = (f, f) - \left(f, f_\Omega\right) \ge 0, \qquad (2.110) $$
where the inner products are taken over the infinite time domain and account has been taken of the projection theorem (1.75). Substituting for f_Ω from (2.104), the preceding is equivalent to
$$
\epsilon_{\Omega\,\min} = \int_{-\infty}^{\infty}\left|f(t)\right|^2 dt - \int_{-\infty}^{\infty} f^{*}(t)\,dt\int_{-\Omega}^{\Omega}\hat f(\omega)\,e^{i\omega t}\,d\omega
= \int_{-\infty}^{\infty}\left|f(t)\right|^2 dt - 2\pi\int_{-\Omega}^{\Omega}\left|\hat f(\omega)\right|^2 d\omega
= \int_{-\infty}^{\infty}\left|f(t)\right|^2 dt - \frac{1}{2\pi}\int_{-\Omega}^{\Omega}\left|F(\omega)\right|^2 d\omega \ge 0, \qquad (2.111)
$$
we have
$$ \lim_{\Omega\to\infty} f_\Omega(t) = \frac12\left[f\!\left(t^{+}\right) + f\!\left(t^{-}\right)\right] \qquad (2.112) $$
or, equivalently, using (2.104) with F(ω) = 2π fˆ(ω),
$$ \lim_{\Omega\to\infty}\frac{1}{2\pi}\int_{-\Omega}^{\Omega} F(\omega)\,e^{i\omega t}\,d\omega = \frac12\left[f\!\left(t^{+}\right) + f\!\left(t^{-}\right)\right]. \qquad (2.113) $$
At the same time the MS error in (2.111) approaches zero and we obtain
$$ \int_{-\infty}^{\infty}\left|f(t)\right|^2 dt = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left|F(\omega)\right|^2 d\omega, \qquad (2.114) $$
which is Parseval's theorem for the Fourier transform. Equation (2.113) is usually written in the abbreviated form
$$ f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)\,e^{i\omega t}\,d\omega \qquad (2.115) $$
In addition, we shall at times find it useful to express the direct and inverse
transform pair as
F {f (t)} = F (ω) , (2.117)
which is just an abbreviation of the statement “the Fourier transform of f (t) is
F (ω).” We shall adhere to the convention of designating the time domain signal
by a lowercase letter and its Fourier transform by the corresponding uppercase
letter.
and note that for any finite Ω the integration yields sin [Ω (t − t )] /π(t − t ).
The representation (2.119) bears a formal resemblance to the completeness
relationship for orthonormal discrete function sets, (1.302), and, more directly,
to the completeness statement for Fourier series in (2.31). This resemblance can
be highlighted by rewriting (2.119) to read
$$ \delta(t-t') = \int_{-\infty}^{\infty}\left(\frac{1}{\sqrt{2\pi}}\,e^{i\omega t}\right)\left(\frac{1}{\sqrt{2\pi}}\,e^{i\omega t'}\right)^{*}d\omega \qquad (2.120) $$
so that a comparison with (2.31) shows that the functions φ_ω(t) ≡ (1/√{2π}) exp(iωt) play a role analogous to that of the orthonormal functions φ_n(t) ≡ (1/√T) exp(2πint/T), provided we view the continuous variable ω in (2.120) as proportional to a summation index. In fact a direct comparison of the variables between (2.31) and (2.120) gives the correspondence
$$ \omega \longleftrightarrow \frac{2\pi n}{T}, \qquad (2.121a) $$
$$ d\omega \longleftrightarrow \frac{2\pi}{T}. \qquad (2.121b) $$
Thus as the observation period T of the signal increases, the quantity 2π/T may
be thought of as approaching the differential dω while the discrete spectral lines
occurring at 2πn/T merge into a continuum corresponding to the frequency
variable ω. Moreover the orthogonality over the finite interval −T /2, T /2, as in
(1.213), becomes in the limit as T −→ ∞
$$ \delta(\omega-\omega') = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{it(\omega-\omega')}\,dt = \int_{-\infty}^{\infty}\left(\frac{1}{\sqrt{2\pi}}\,e^{it\omega}\right)\left(\frac{1}{\sqrt{2\pi}}\,e^{it\omega'}\right)^{*}dt, \qquad (2.122) $$
i.e., the identity matrix represented by the Kronecker symbol δ mn goes over into
a delta function, which is the proper identity transformation for the continuum.
A more direct but qualitative connection between the Fourier series and the
Fourier transform can be established if we suppose that the function f (t) is
initially truncated to |t| < T /2 in which case its Fourier transform is
$$ F(\omega) = \int_{-T/2}^{T/2} f(t)\,e^{-i\omega t}\,dt. \qquad (2.123) $$
The coefficients in the Fourier series that represents this function within the interval −T/2, T/2 can now be expressed as fˆ_n = F(2πn/T)/T, so that
$$ f(t) = \sum_{n=-\infty}^{\infty}\frac{1}{T}\,F(2\pi n/T)\,e^{i2\pi n t/T}. \qquad (2.124) $$
Thus in view of (2.121) we can regard the Fourier transform inversion formula (2.115) as a limiting form of (2.124) as T → ∞. Figure 2.22 shows the amplitude of such a spectrum plotted as a function of ωτ.
$$ \lim_{T_1\to\infty}\lim_{T_2\to\infty}\int_{-T_1}^{T_2} f(t)\,dt, \qquad (2.125) $$
which means that the integral converges when the upper and lower limits approach infinity independently. This definition turns out to be too restrictive in many situations of physical interest. An alternative and more encompassing definition is the following:
$$ \lim_{T\to\infty}\int_{-T}^{T} f(t)\,dt. \qquad (2.126) $$
Here we stipulate that the upper and lower limits must approach infinity at the same rate. It is obvious that (2.125) implies (2.126). The converse is, however, not true. The class of functions for which the integral exists in the sense of (2.126) is much larger than under definition (2.125). In particular, all (piecewise differentiable) bounded odd functions are integrable in the sense of (2.126)
wise differentiable) bounded odd functions are integrable in the sense of (2.126)
and the integral yields zero. Under these circumstances (2.125) would gener-
ally diverge, unless of course the growth of the function at infinity is suitably
restricted. When the limit is taken symmetrically in accordance with (2.126)
the integral is said to be defined in terms of the Cauchy Principal Value (CPV).
We have in fact already employed this definition implicitly on several occasions,
in particular in (2.113). A somewhat different form of the CPV limit is also of
interest in Fourier transform theory. This form arises whenever the integral is
improper in virtue of one or more simple pole singularities within the integration
interval. For example, the integral $\int_{-2}^{8}\frac{dt}{t-1}$ has a singularity at t = 1 where the integrand becomes infinite. The first inclination would be to consider this integral simply as divergent. On the other hand, since the integrand changes sign as one moves through the singularity, it is not unreasonable to seek a definition of a limiting process which would facilitate the mutual cancellation of the positive and negative infinite contributions. For example, suppose we define
$$ I(\epsilon_1, \epsilon_2) = \int_{-2}^{1-\epsilon_1}\frac{dt}{t-1} + \int_{1+\epsilon_2}^{8}\frac{dt}{t-1}, $$
where ε_1 and ε_2 are small positive numbers, so that the integration is carried out up to and past the singularity. By direct calculation we find I(ε_1, ε_2) = ln(7ε_1/3ε_2). We see that if we let ε_1 and ε_2 approach zero independently the integral diverges. On the other hand, by setting ε_1 = ε_2 = ε the result is always finite. Apparently, when the singularity is approached symmetrically from both sides
the two infinite contributions cancel. This limiting procedure constitutes the CPV definition of the integral whenever the singularity falls within the integration interval. Frequently a special symbol is used to indicate a CPV evaluation. We shall indicate it by prefixing the letter P to the integration symbol. Thus $P\!\int_{-2}^{8}\frac{dt}{t-1} = \ln(7/3)$. When more than one singularity is involved the CPV limiting procedure must be applied to each. For example,
$$
I = P\!\int_{-5}^{9}\frac{dt}{(t-1)(t-2)}
= \lim_{\epsilon\to 0}\left\{\int_{-5}^{1-\epsilon}\frac{dt}{(t-1)(t-2)} + \int_{1+\epsilon}^{2-\epsilon}\frac{dt}{(t-1)(t-2)} + \int_{2+\epsilon}^{9}\frac{dt}{(t-1)(t-2)}\right\}
= \ln(3/4).
$$
The following example illustrates the CPV evaluation of an integral with infinite limits of integration:
$$
I = P\!\int_{-\infty}^{\infty}\frac{dt}{t-2} = \lim_{\substack{\epsilon\to 0\\ T\to\infty}}\left\{\int_{-T}^{2-\epsilon}\frac{dt}{t-2} + \int_{2+\epsilon}^{T}\frac{dt}{t-2}\right\}
= \lim_{\substack{\epsilon\to 0\\ T\to\infty}}\ln\frac{(2-\epsilon-2)(T-2)}{(-T-2)(2+\epsilon-2)} = 0.
$$
Consider now the integral
$$ I = P\!\int_{a}^{b}\frac{f(t)}{t-q}\,dt, $$
where a < q < b and f(t) is a bounded function within a, b and differentiable at t = q. We can represent this integral as a sum of an integral of a bounded function and a CPV integral which can be evaluated in closed form as follows:
$$
I = P\!\int_{a}^{b}\frac{f(t) - f(q) + f(q)}{t-q}\,dt
= \int_{a}^{b}\frac{f(t) - f(q)}{t-q}\,dt + f(q)\,P\!\int_{a}^{b}\frac{dt}{t-q}
= \int_{a}^{b}\frac{f(t) - f(q)}{t-q}\,dt + f(q)\ln\frac{b-q}{q-a}. \qquad (2.128)
$$
Note that the integrand in the first integral in the last expression is finite at
t = q so that the integral can be evaluated, if necessary, numerically using
standard techniques.
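The splitting in (2.128) translates directly into a small numerical routine. The sketch below (the second test integrand and its limits are assumptions made only for illustration) reproduces ln(7/3) for the earlier example and evaluates a less trivial principal value:

```python
import numpy as np

def cpv(f, a, b, q, n=200001):
    """P int_a^b f(t)/(t - q) dt via Eq. (2.128): integrate the regularized part
    (f(t) - f(q))/(t - q) numerically and add f(q)*ln[(b - q)/(q - a)] in closed form."""
    t = np.linspace(a, b, n)
    g = np.zeros_like(t)
    mask = t != q
    g[mask] = (f(t[mask]) - f(q))/(t[mask] - q)   # removable singularity at t = q
    return np.trapz(g, t) + f(q)*np.log((b - q)/(q - a))

# f(t) = 1: the regular part vanishes and the result is ln[(b-q)/(q-a)] = ln(7/3) = 0.8473...
print(cpv(lambda t: np.ones_like(t), -2.0, 8.0, 1.0), np.log(7/3))

# A less trivial check: P int_{-1}^{1} e^t/t dt = Ei(1) - Ei(-1) = 2.1145...
print(cpv(np.exp, -1.0, 1.0, 0.0))
```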
Let us now apply the CPV procedure to the evaluation of the Fourier trans-
form of f (t) = 1/t. Even though a signal of this sort might appear quite artificial
it will be shown to play a pivotal role in the theory of the Fourier transform.
Writing the transform as a CPV integral we have
$$ F(\omega) = P\!\int_{-\infty}^{\infty}\frac{e^{-i\omega t}}{t}\,dt = P\!\int_{-\infty}^{\infty}\frac{\cos\omega t}{t}\,dt - iP\!\int_{-\infty}^{\infty}\frac{\sin\omega t}{t}\,dt. $$
Since $P\!\int_{-\infty}^{\infty}\frac{\cos\omega t}{t}\,dt = 0$, and $\frac{\sin\omega t}{t}$ is free of singularities, we have
$$ F(\omega) = -i\int_{-\infty}^{\infty}\frac{\sin\omega t}{t}\,dt. \qquad (2.129) $$
Recalling that $\int_{-\infty}^{\infty}\frac{\sin x}{x}\,dx = \pi$, we obtain by setting ωt = x in (2.129)
$$ \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\sin\omega t}{t}\,dt = \operatorname{sign}(\omega) = \begin{cases} 1; & \omega > 0,\\ -1; & \omega < 0. \end{cases} \qquad (2.130) $$
Exponential Functions
Since the Fourier transform is a representation of signals in terms of exponentials
we would expect exponential functions to play a special role in Fourier analysis.
In the following we distinguish three cases: a purely imaginary argument, a
purely real argument with the function truncated to the positive time axis, and
a real exponential that decays symmetrically for both negative and positive
times. In the first case we get from the definition of the delta function (2.119)
and real ω0
$$ e^{i\omega_0 t} \;\stackrel{F}{\Longleftrightarrow}\; 2\pi\,\delta(\omega - \omega_0). \qquad (2.138) $$
This result is in perfect consonance with the intuitive notion that a single
tone, represented in the time domain by a unit amplitude sinusoidal oscillation
of infinitely long duration, should correspond in the frequency domain to a sin-
gle number, i.e., the frequency of oscillation, or, equivalently, by a spectrum
consisting of a single spectral line. Here this spectrum is represented symbol-
ically by a delta function at ω = ω0 . Such a single spectral line, just like the
$$ \frac{t^{n-1}}{(n-1)!}\,e^{-p_0 t}\,U(t) \;\stackrel{F}{\Longleftrightarrow}\; \frac{1}{(p_0 + i\omega)^n}; \qquad n \ge 1. \qquad (2.142) $$
Using this formula in conjunction with the partial fraction expansion technique
constitutes one of the basic tools in the evaluation of inverse Fourier transforms
of rational functions.
Gaussian Function
A rather important idealized signal is the Gaussian function
$$ f(t) = \frac{1}{\sqrt{2\pi\sigma_t^2}}\,e^{-\frac{t^2}{2\sigma_t^2}}, $$
where we have adopted the normalization (√f, √f) = 1. We compute its FT as follows:
$$
F(\omega) = \frac{1}{\sqrt{2\pi\sigma_t^2}}\int_{-\infty}^{\infty} e^{-\frac{t^2}{2\sigma_t^2}}\,e^{-i\omega t}\,dt = \frac{1}{\sqrt{2\pi\sigma_t^2}}\int_{-\infty}^{\infty} e^{-\frac{1}{2\sigma_t^2}\left[t^2 + 2i\omega\sigma_t^2 t\right]}\,dt
= \frac{e^{-\frac12\sigma_t^2\omega^2}}{\sqrt{2\pi\sigma_t^2}}\int_{-\infty}^{\infty} e^{-\frac{1}{2\sigma_t^2}\left[t + i\omega\sigma_t^2\right]^2}\,dt
= \frac{e^{-\frac12\sigma_t^2\omega^2}}{\sqrt{2\pi\sigma_t^2}}\int_{-\infty+i\omega\sigma_t^2}^{\infty+i\omega\sigma_t^2} e^{-\frac{z^2}{2\sigma_t^2}}\,dz = e^{-\frac12\sigma_t^2\omega^2}.
$$
Note that except for a scale factor the Gaussian function is its own FT. Here we see another illustration of the inverse relationship between the signal duration and bandwidth. If we take σ_t as the nominal duration of the pulse in the time domain, then a similar definition for the effective bandwidth of F(ω) yields σ_ω = 1/σ_t.
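A direct numerical check of this transform pair is simple; the value of σ_t below is an arbitrary assumption and the quadrature grid is chosen wide enough that the truncated tails are negligible:

```python
import numpy as np

sigma = 0.7                                     # assumed value of sigma_t
t = np.linspace(-12*sigma, 12*sigma, 40001)
f = np.exp(-t**2/(2*sigma**2))/np.sqrt(2*np.pi*sigma**2)

omega = np.linspace(-6/sigma, 6/sigma, 25)
F_num = np.array([np.trapz(f*np.exp(-1j*w*t), t) for w in omega])
F_exact = np.exp(-sigma**2*omega**2/2)

print(np.abs(F_num - F_exact).max())            # tiny: the Gaussian is (up to scale) its own FT
```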
Symmetries
For any Fourier transform pair
$$ f(t) \;\stackrel{F}{\Longleftrightarrow}\; F(\omega) $$
we also have, by a simple substitution of variables,
$$ F(t) \;\stackrel{F}{\Longleftrightarrow}\; 2\pi f(-\omega). \qquad (2.143) $$
For example, using this variable replacement in (2.141), we obtain
$$ \frac{\alpha}{\pi\left(\alpha^2 + t^2\right)} \;\stackrel{F}{\Longleftrightarrow}\; e^{-\alpha|\omega|}. \qquad (2.143^{*}) $$
The Fourier transform of the complex conjugate of a function follows through the variable replacement
$$ f^{*}(t) \;\stackrel{F}{\Longleftrightarrow}\; F^{*}(-\omega). \qquad (2.144) $$
Frequently we shall be interested in purely real signals. If f (t) is real, the
preceding requires
F ∗ (−ω) = F (ω) . (2.145)
If we decompose F (ω) into its real and imaginary parts
F (ω) = R (ω) + iX (ω) , (2.146)
we note that (2.145) is equivalent to
$$ R(\omega) = R(-\omega), \qquad X(\omega) = -X(-\omega), \qquad (2.147) $$
so that for a real signal the real part of the Fourier transform is an even function while the imaginary part is an odd function of frequency. The even and odd symmetries carry over to the amplitude and phase of the transform. Thus writing
$$ F(\omega) = A(\omega)\,e^{i\theta(\omega)}, \qquad (2.148) $$
wherein
$$ A(\omega) = |F(\omega)| = \sqrt{\left[R(\omega)\right]^2 + \left[X(\omega)\right]^2}, \qquad (2.149a) $$
$$ \theta(\omega) = \tan^{-1}\frac{X(\omega)}{R(\omega)}, \qquad (2.149b) $$
we have, in view of (2.147),
$$ A(\omega) = A(-\omega), \qquad (2.150a) $$
$$ \theta(\omega) = -\theta(-\omega). \qquad (2.150b) $$
The last expression shows that a real physical signal can be represented as
the real part of a fictitious complex signal whose spectrum equals twice the
spectrum of the real signal for positive frequencies but is identically zero for
negative frequencies. Such a complex signal is referred to as an analytic signal,
a concept that finds extensive application in the study of modulation to be
discussed in 2.3.
$$ f(t)\,A\cos(\omega_0 t + \theta_0) \;\stackrel{F}{\Longleftrightarrow}\; \frac{A}{2}e^{i\theta_0}F(\omega-\omega_0) + \frac{A}{2}e^{-i\theta_0}F(\omega+\omega_0). \qquad (2.154) $$
If we suppose that F (ω) is negligible outside the band defined by |ω| < Ω, and
also assume that ω0 > 2Ω, the relationship among the spectra in (2.154) may
be represented schematically as in Fig. 2.23
Figure 2.23: The baseband spectrum F(ω), occupying |ω| < Ω, and the frequency-shifted components (A/2)e^{−iθ_0}F(ω + ω_0) and (A/2)e^{iθ_0}F(ω − ω_0) centered at −ω_0 and ω_0.
Differentiation
If f(t) is everywhere differentiable, then a simple integration by parts gives
$$ \int_{-\infty}^{\infty} f'(t)\,e^{-i\omega t}\,dt = f(t)\,e^{-i\omega t}\Big|_{-\infty}^{\infty} + i\omega\int_{-\infty}^{\infty} f(t)\,e^{-i\omega t}\,dt = i\omega F(\omega). \qquad (2.155) $$
Clearly if f(t) is differentiable n times we obtain by repeated integration
$$ f^{(n)}(t) \;\stackrel{F}{\Longleftrightarrow}\; (i\omega)^n F(\omega). \qquad (2.156) $$
Actually this formula may still be used even if f (t) is only piecewise dif-
ferentiable and discontinuous with discontinuous first and even higher order
derivatives at a countable set of points. We merely have to replace f (n) (t) with
a generalized derivative defined in terms of singularity functions, an approach
we have already employed for the first derivative in (1.280). For example, the
Fourier transform of (1.280) is
$$ f'(t) \;\stackrel{F}{\Longleftrightarrow}\; i\omega F(\omega) = F\{f_s'(t)\} + \sum_{k}\left[f\!\left(t_k^{+}\right) - f\!\left(t_k^{-}\right)\right]e^{-i\omega t_k}. \qquad (2.157) $$
Convolution
We have already encountered the convolution of two functions in connection with
Fourier series, (2.49). Since in the present case the time domain encompasses
the entire real line the appropriate definition is
$$ h(t) = \int_{-\infty}^{\infty} f(\tau)\,g(t-\tau)\,d\tau. $$
Note that
$$ \int_{-\infty}^{\infty} f(\tau)\,g(t-\tau)\,d\tau = \int_{-\infty}^{\infty} g(\tau)\,f(t-\tau)\,d\tau, $$
as one can readily convince oneself through a change of the variable of integration. This can also be expressed as f ∗ g = g ∗ f, i.e., the convolution operation is commutative. In view of (2.152), g(t − τ) ⟺ G(ω)e^{−iωτ}, so that
$$ \int_{-\infty}^{\infty} f(\tau)\,g(t-\tau)\,d\tau \;\stackrel{F}{\Longleftrightarrow}\; \int_{-\infty}^{\infty} f(\tau)\,G(\omega)\,e^{-i\omega\tau}\,d\tau = F(\omega)\,G(\omega). \qquad (2.164) $$
Integration
When the Fourier transform is applied to integro-differential equations one sometimes needs to evaluate the transform of the integral of a function. For example, with $g(t) = \int_{-\infty}^{t} f(\tau)\,d\tau$ we would like to determine G(ω) in terms of F(ω). We can do this by first recognizing that $\int_{-\infty}^{t} f(\tau)\,d\tau = \int_{-\infty}^{\infty} f(\tau)\,U(t-\tau)\,d\tau$. Using (2.164) and (2.135) we have
$$ \int_{-\infty}^{\infty} f(\tau)\,U(t-\tau)\,d\tau \;\stackrel{F}{\Longleftrightarrow}\; F(\omega)\left[\pi\delta(\omega) + \frac{1}{i\omega}\right]. $$
This is certainly compatible with (2.166) since ωδ(ω) = 0. However the solution of (2.167) for G(ω) by simply dividing both sides by iω is in general not permissible since G(ω) ≠ F(ω)/iω unless F(0) = 0.
so that f(t) = f_e(t) + f_o(t) for any signal. Since f_e(t) = f_e(−t) and f_o(t) = −f_o(−t), (2.168a) and (2.168b) are referred to as the even and odd parts of f(t), respectively. Now
$$ F\{f_e(t)\} = \frac12\int_{-\infty}^{\infty}\left[f(t) + f(-t)\right]\left[\cos(\omega t) - i\sin(\omega t)\right]dt = \int_{-\infty}^{\infty} f(t)\cos(\omega t)\,dt \qquad (2.169a) $$
and
$$ F\{f_o(t)\} = \frac12\int_{-\infty}^{\infty}\left[f(t) - f(-t)\right]\left[\cos(\omega t) - i\sin(\omega t)\right]dt = -i\int_{-\infty}^{\infty} f(t)\sin(\omega t)\,dt. \qquad (2.169b) $$
In view of the definition (2.146), for a real f(t), (2.169a) and (2.169b) are equivalent to
$$ f_e(t) \;\stackrel{F}{\Longleftrightarrow}\; R(\omega), \qquad (2.170a) $$
$$ f_o(t) \;\stackrel{F}{\Longleftrightarrow}\; iX(\omega). \qquad (2.170b) $$
Evidently the even and odd parts are not independent for
which can be rephrased in more concise fashion with the aid of the sign function
as follows:
These relations show explicitly that the real and imaginary parts of the Fourier transform of a causal signal may not be prescribed independently. For example, if we know R(ω), then X(ω) can be determined uniquely by (2.173a). Since $P\!\int_{-\infty}^{\infty}\frac{d\eta}{\omega-\eta} = 0$, an R(ω) that is constant for all frequencies gives a null result for X(ω). Consequently, (2.173b) determines R(ω) from X(ω) only within a constant.
The integral transform $\frac{1}{\pi}P\!\int_{-\infty}^{\infty}\frac{R(\eta)\,d\eta}{\omega-\eta}$ is known as the Hilbert transform, which shall be denoted by H{R(ω)}. Using this notation we rewrite (2.173) as
X (ω) = −H {R (ω)} , (2.174a)
R (ω) = H {X (ω)} . (2.174b)
Since (2.174b) is the inverse of (2.174a) the inverse Hilbert transform is obtained
by a change in sign. As an example, suppose R (ω) = pΩ (ω) . Carrying out the
simple integration yields
$$ X(\omega) = \frac{1}{\pi}\ln\left|\frac{\omega - \Omega}{\omega + \Omega}\right|, \qquad (2.175) $$
which is plotted in Fig. 2.24. The Hilbert Transform in the time domain is
defined similarly. Thus for a signal f (t)
$$ H\{f(t)\} = \frac{1}{\pi}\,P\!\int_{-\infty}^{\infty}\frac{f(\tau)\,d\tau}{t - \tau}. \qquad (2.176) $$
Figure 2.24: R(ω) = p_Ω(ω) and its Hilbert transform X(ω), plotted versus ω/Ω.
Since by assumption f'(t) exists for t > 0, or, equivalently, f(t) is smooth, F{f'(t)} approaches zero as ω → ∞ (c.f. (2.158)). Under these conditions the last equation yields
$$ \lim_{\omega\to\infty} i\omega F(\omega) = f\!\left(0^{+}\right), \qquad (2.181) $$
a result known as the initial value theorem. Note that f_e(0) ≡ f(0), but according to (2.171) 2f_e(0) = f(0^{+}). Hence
$$ f(0) = \frac12 f\!\left(0^{+}\right), \qquad (2.182) $$
which is consistent with the fact that the FT converges to the arithmetic mean at a step discontinuity.
Consider now the limit
$$
\lim_{\omega\to 0}\left[i\omega F(\omega) - f\!\left(0^{+}\right)\right] = \lim_{\omega\to 0}\int_{0^{+}}^{\infty} f'(t)\,e^{-i\omega t}\,dt
= \int_{0^{+}}^{\infty} f'(t)\lim_{\omega\to 0} e^{-i\omega t}\,dt
= \lim_{t\to\infty} f(t) - f\!\left(0^{+}\right).
$$
Figure: the inversion formula $f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}F(\omega)\,e^{i\omega t}\,d\omega$ and its Fourier series counterpart $f(t) = \sum_{n=-\infty}^{\infty}\frac{F(2\pi n/T)}{T}\,e^{i2\pi n t/T}$, each shown on the interval (−T/2, T/2).
It is easy to see that g (t) is periodic with period T. We take the FT to obtain
$$ \sum_{n=-\infty}^{\infty} f(t - nT) \;\stackrel{F}{\Longleftrightarrow}\; F(\omega)\sum_{n=-\infty}^{\infty} e^{-i\omega nT}. $$
Since the left side in the last expression must be identical to (2.185) we are justified in writing
$$ \sum_{n=-\infty}^{\infty} f(t - nT) = \sum_{\ell=-\infty}^{\infty}\frac{F(2\pi\ell/T)}{T}\,e^{i2\pi\ell t/T}. \qquad (2.186) $$
Figure 2.26: A unit step together with 1/2 + (1/π)Si(Ωt) for Ω = 10, 20, and 50.
integral function. This is illustrated in Fig. 2.26 which shows a unit step together
with plots of 1/2 + (1/π) Si(Ωt) for Ω = 10, 20, and 50.
instead of (2.32) we must resort to the following fundamental theorem from the theory of limits. Given a function f(Ω) integrable over any finite interval 0, Ω we define, by analogy with (2.135), the average σ_Ω by
$$ \sigma_\Omega = \frac{1}{\Omega}\int_{0}^{\Omega} f(\Omega')\,d\Omega'. \qquad (2.189) $$
If lim_{Ω→∞} f(Ω) exists, then σ_Ω approaches the same limit. Applying this to the partial inversion f_Ω(t) and using (2.112), we obtain
$$ \lim_{\Omega\to\infty}\sigma_\Omega(t) = \frac12\left[f\!\left(t^{+}\right) + f\!\left(t^{-}\right)\right]. \qquad (2.190) $$
By integrating the right side of (2.109) with respect to Ω and using (2.189) we
obtain
$$ \sigma_\Omega(t) = \int_{-\infty}^{\infty} f(t')\,\frac{\sin^2\left[(\Omega/2)(t-t')\right]}{\pi\,(\Omega/2)\,(t-t')^2}\,dt'. \qquad (2.191) $$
Unlike the kernel (2.38) in the analogous formula for Fourier series in (2.37), the kernel
$$ K_\Omega(t-t') = \frac{\sin^2\left[(\Omega/2)(t-t')\right]}{\pi\,(\Omega/2)\,(t-t')^2} \qquad (2.192) $$
is not periodic. We leave it as an exercise to show that
$$ \lim_{\Omega\to\infty}\frac{\sin^2\left[(\Omega/2)(t-t')\right]}{\pi\,(\Omega/2)\,(t-t')^2} = \delta(t-t'), \qquad (2.193) $$
Since the right side of (2.191) is a convolution in the time domain, its FT
yields a product of the respective transforms. Therefore using (2.194) we can
rewrite (2.191) as an inverse FT as follows:
$$ \sigma_\Omega(t) = \frac{1}{2\pi}\int_{-\Omega}^{\Omega} F(\omega)\left(1 - \frac{|\omega|}{\Omega}\right)e^{i\omega t}\,d\omega. \qquad (2.195) $$
Figure: Fejer and Fourier approximations compared, plotted versus Ωt.
where
$$ K_\Omega^{(1)}(t) = \frac{1}{\pi t^2}\int_{0}^{\Omega t}\frac{1 - \cos x}{x}\,dx. \qquad (2.200) $$
Figure: Fourier and Fejer approximations compared, plotted versus t/T.
One can show directly that lim_{Ω→∞} K_Ω^{(1)}(t) = δ(t), consistent with (2.198). A plot of 4πK_Ω^{(1)}(t)/Ω² as a function of Ωt together with the (first-order) Fejer and Fourier kernels is shown in Fig. 2.29. Unlike the Fourier integral and the (first-order) Fejer kernels, K_Ω^{(1)}(t) decreases monotonically on both sides of the
maximum, i.e., the functional form is free of sidelobes. At the same time its single lobe is wider than the main lobe of the other two kernels. It can be shown that for large Ωt
$$ K_\Omega^{(1)}(t) \sim \frac{\ln(\Omega|t|\,\gamma)}{\pi(\Omega t)^2}, \qquad (2.201) $$
where ln γ = 0.577215… is the Euler constant. Because of the presence of the logarithmic term, (2.201) represents a decay rate somewhere between that of the Fourier integral kernel (1/Ωt) and that of the (first-order) Fejer kernel (1/(Ωt)²).
The Fourier transform of K_Ω^{(1)}(t) furnishes the corresponding spectral window. An evaluation of the FT by directly transforming (2.200) is somewhat cumbersome. A simpler approach is the following:
$$
F\{K_\Omega^{(1)}(t)\} = F\left\{\frac{1}{\Omega}\int_{0}^{\Omega}\frac{\sin^2\left[(a/2)t\right]}{\pi(a/2)t^2}\,da\right\} = \frac{1}{\Omega}\int_{0}^{\Omega} F\left\{\frac{\sin^2\left[(a/2)t\right]}{\pi(a/2)t^2}\right\}da
= \frac{1}{\Omega}\int_{0}^{\Omega}\left(1 - \frac{|\omega|}{a}\right)p_a(\omega)\,da = \begin{cases}\dfrac{1}{\Omega}\displaystyle\int_{|\omega|}^{\Omega}\left(1 - \frac{|\omega|}{a}\right)da; & |\omega| < \Omega,\\[2mm] 0; & |\omega| > \Omega.\end{cases}
$$
A plot of this spectral window is shown in Fig. 2.30 which is seen to be quite
similar to its discrete counterpart in Fig. 2.11.
Figure 2.30: The spectral window corresponding to K_Ω^{(1)}(t), plotted versus ω/Ω.
$$ \Im m\{w(t)\} = \frac{1}{\pi}\int_{0}^{\infty} A(\omega)\sin\left[\omega t + \theta(\omega)\right]d\omega. \qquad (2.204b) $$
Taking Hilbert transforms of both sides of (2.204a) and using trigonometric sum
formulas together with (2.179) and (2.180) we obtain
$$
\hat z(t) = H\{z(t)\} = H\left\{\frac{1}{\pi}\int_{0}^{\infty} A(\omega)\cos\left[\omega t + \theta(\omega)\right]d\omega\right\}
= \frac{1}{\pi}\int_{0}^{\infty} A(\omega)\,H\{\cos\left[\omega t + \theta(\omega)\right]\}\,d\omega
= \frac{1}{\pi}\int_{0}^{\infty} A(\omega)\left(\cos\left[\theta(\omega)\right]H\{\cos(\omega t)\} - \sin\left[\theta(\omega)\right]H\{\sin(\omega t)\}\right)d\omega
$$
$$
= \frac{1}{\pi}\int_{0}^{\infty} A(\omega)\left(\cos\left[\theta(\omega)\right]\sin(\omega t) + \sin\left[\theta(\omega)\right]\cos(\omega t)\right)d\omega
= \frac{1}{\pi}\int_{0}^{\infty} A(\omega)\sin\left[\omega t + \theta(\omega)\right]d\omega = \Im m\{w(t)\}
$$
out entirely in terms of the FT, as already remarked in connection with the
frequency domain calculation in (2.180).
The complex function
Using this in the FT of (2.206) yields W(ω) = Z(ω) + i[−i sign(ω) Z(ω)], which is equivalent to
$$ W(\omega) = \begin{cases} 2Z(\omega); & \omega > 0,\\ 0; & \omega < 0. \end{cases} \qquad (2.208) $$
Because of (2.207) the energy of an analytic signal is shared equally by the real
signal and its Hilbert transform.
for real negative values of frequency, i.e., is represented by the integral (2.203), is an analytic function of t in the upper half of the complex t plane (i.e., ℑm(t) > 0). (See Appendix, pages 341–348.)
Figure 2.31: (a) A baseband spectrum Z(ω) confined to |ω| ≤ ω_max; (b) the spectrum W(ω) of the corresponding analytic signal, occupying only 0 ≤ ω ≤ ω_max.
sinusoids. Thus we say that the signal r cos (ωt + ψ 0 ) has amplitude r, fre-
quency ω and a fixed phase reference ψ 0 , where for purposes of analysis we
sometimes find it more convenient to deal directly with a fictitious complex sig-
nal r exp [i (ωt + ψ 0 )] with the tacit understanding that physical processes are
to be associated only with the real part of this signal. A generalization of this
construct is an analytic signal. In addition to simplifying the algebra such com-
plex notation also affords novel points of view. For example, the exponential of
magnitude r and phase angle ψ (t) = ωt + θ0 can be interpreted graphically as
d
a phasor of length r rotating at the constant angular velocity ω = dt (ωt + θ0 ) .
Classically for a general nonsinusoidal (real) signal z (t) the concepts of fre-
quency, amplitude, and phase are associated with each sinusoidal component
comprising the signal Fourier spectrum, i.e., in this form these concepts appear
to have meaning only when applied to each individual spectral component of
the signal. On the other hand we can see intuitively that at least in special
cases the concept of frequency must bear a close relationship to the rate of zero
crossings of a real signal. For pure sinusoids this observation is trivial, e.g.,
the number of zero crossings of the signal cos(10t) per unit time is twice that of cos(5t). Suppose instead we take the signal cos(10t²). Here the number of zero crossings varies linearly with time and the corresponding complex signal, as represented by the phasor exp(i10t²), rotates at the rate d/dt(10t²) = 20t rps. Thus we conclude that the frequency of this signal varies linearly with time. The new concept here is that of instantaneous frequency, which is clearly not identical with the frequency associated with each Fourier component of the signal (except of course in the case of a pure sinusoid). We extend this definition to
We can, of course, not “evaluate” this integral without knowing the specific
signal. However for signals characterized by a large time-bandwidth product
we can carry out an approximate evaluation utilizing the so-called principle
of stationary phase. To illustrate the main ideas without getting sidetracked
by peripheral generalities consider the real part of the exponential in (2.212),
i.e., cos [q(t)] with q(t) = ψ(t) − ωt. Figure 2.32 shows a plot of cos [q(t)] for
the special choice ψ(t) = 5t² and ω = 50. This function is seen to oscillate rapidly except in the neighborhood of t = t₀ = 5 = ω/10, which point corresponds to q'(5) = 0. The value t₀ = 5 in the neighborhood of which the phase varies slowly is referred to as the stationary point of q(t) (or a point of stationary phase). If we suppose that the function r(t) is slowly varying relative to these oscillations, we would expect the contributions to an integral of the form $\int_{-\infty}^{\infty} r(t)\cos(5t^2 - 50t)\,dt$ from points not in the immediate vicinity of t = 5 to mutually cancel. Consequently the dominant contributions to the integral would arise only from the values of r(t) and ψ(t) in the immediate neighborhood of the point of stationary phase. We note in passing that in this example the product t₀ω = 250 ≫ 1. It is not hard to show that the larger this dimensionless quantity (time-bandwidth product) the narrower the time band within which the phase is stationary and therefore the more nearly localized the contribution to the overall integral. In the general case the stationary point is determined by
$$ q'(t) = \psi'(t) - \omega = 0, \qquad (2.213) $$
which coincides with the definition of the instantaneous frequency in (2.211). When we expand the argument of the exponential in a Taylor series about t = t₀ we obtain
$$ q(t) = \psi(t_0) - \omega t_0 + \tfrac12(t - t_0)^2\,\psi''(t_0) + \dots \qquad (2.214) $$
Similarly we have for r(t)
$$ r(t) = r(t_0) + (t - t_0)\,r'(t_0) + \dots \qquad (2.215) $$
In accordance with the localization principle just discussed we expect, given a
sufficiently large ωt0 , that in the exponential function only the first two Taylor
series terms need to be retained. Since r (t) is assumed to be relatively slowly
varying it may be replaced by r (t0 ) . Therefore (2.212) may be approximated by
$$ W(\omega) \sim r(t_0)\,e^{i\left[\psi(t_0) - \omega t_0\right]}\int_{-\infty}^{\infty} e^{\,i\frac12(t-t_0)^2\psi''(t_0)}\,dt. \qquad (2.216) $$
When the preceding standard Gaussian integral is evaluated we obtain the final formula
$$ W(\omega) \underset{\omega t_0\to\infty}{\sim} r(t_0)\sqrt{\frac{2\pi}{\left|\psi''(t_0)\right|}}\;e^{i\left[\psi(t_0) - \omega t_0\right]}\,e^{i\frac{\pi}{4}\operatorname{sign}\left[\psi''(t_0)\right]}. \qquad (2.217) $$
From (2.221) we see that the FT of g(t) approaches the constant (AT/2)√(2/M) within the frequency band r < |f| < 1 and vanishes outside this range, except at the band edges (i.e., f = ±1 and ±r) where it equals one-half this constant. Since g(t) is of finite duration it is asymptotically simultaneously bandlimited and timelimited. Even though for any finite M the signal spectrum will not be bandlimited, this asymptotic form is actually consistent with Parseval's theorem. For applying Parseval's formula to (2.221) we get
$$ \frac{1}{2\pi}\int_{-\infty}^{\infty}\left|G(\omega)\right|^2 d\omega = \frac{A^2 T}{4B}\,2B = \left(A^2/2\right)T. \qquad (2.222) $$
Figure 2.33: |G| · 2√(BT)/(AT) plotted versus frequency × (1 − r)/B for BT = 5, 10, and 1,000.
On the other hand we recognize the last term as the asymptotic form (i.e., for
large ω 0 T ) of the total energy of a sinusoid of fixed frequency, amplitude A,
and duration T . Thus apparently if the time-bandwidth product is sufficiently
large we may approximate the energy of a constant amplitude sinusoid with
variable phase by the same simple formula (A2 /2)T. Indeed this result actually
generalizes to signals of the form A cos [φ(t)]. How large must M be for (2.221)
to afford a reasonable approximation to the signal spectrum? Actually quite
large, as is illustrated by the plots in Fig. 2.33 for BT = 5, 10, and 1, 000, where
the lower (nominal) band edge is defined by r = 0.2 and the magnitude of the
asymptotic spectrum equals unity within .2 < f < 1 and 1/2 at f = .2 and
f = 1.
When this constraint is satisfied the spectrum of z (t) may in fact extend down
to zero frequency (as, e.g., in Fig. 2.31a) so that theoretically the spectra of
x (t) and y(t) are allowed to occupy the entire bandwidth |ω| ≤ ω 0 . However
in practice there will generally also be a lower limit on the band occupancy
of Z (ω), say ω min . Thus the more common situation is that of a bandpass
spectrum illustrated in Fig. 2.34 wherein the nonzero spectral energy of z (t)
occupies the band ω min < ω < ω max for positive frequencies and the band
-ωmax < ω < −ωmin for negative frequencies.
Figure 2.34: A bandpass spectrum Z(ω) occupying ω_min < |ω| < ω_max, its sidebands shifted by ∓ω_0 (occupying ω_min − ω_0 to ω_max − ω_0 and ω_0 − ω_max to ω_0 − ω_min), and the resulting baseband spectrum X(ω) occupying ω_min − ω_0 to ω_0 − ω_min.
In the case depicted ω min < ω0 < ω max and ω 0 − ω min > ωmax − ω 0 .
The synthesis of X (ω) from the two frequency-shifted sidebands follows
from (2.225a) resulting in a total band occupancy of 2 |ω 0 − ωmin |. It is
easy to see from (2.225b) that Y (ω) must occupy the same bandwidth. Ob-
serve that shifting ω0 closer to ω min until ω max − ω 0 > ω0 − ω min results in a
142 2 Fourier Series and Integrals with Applications to Signal Analysis
total band occupancy of 2 |ωmax − ω 0 | and that the smallest possible baseband
bandwidth is obtained by positioning ω 0 midway between ωmax and ω min .
The two real baseband signals x (t) and y(t) are referred to as the inphase
and quadrature signal components. It is convenient to combine them into the
single complex baseband signal
b(t) = x(t) + iy(t). (2.227)
The analytic signal w(t) = z(t) + iẑ(t) follows from a substitution of (2.224a)
and (2.224b)
w(t) = x (t) cos (ω 0 t) − y(t) sin (ω0 t)
+i[x (t) sin (ω 0 t) + y(t) cos (ω 0 t)]
= [x(t) + iy(t)] eiω0 t = b (t) eiω0 t . (2.228)
The FT of (2.228) reads
W (ω) = X (ω − ω0 ) + iY (ω − ω0 ) = B (ω − ω 0 ) (2.229)
or, solving for B (ω) ,
B (ω) = W (ω + ω0 ) = 2U (ω + ω 0 ) Z(ω + ω 0 ) = X (ω) + iY (ω) . (2.230)
In view of (2.224a) the real bandpass z(t) signal is given by the real part
of (2.228), i.e.,
& '
z(t) =
e b (t) eiω0 t . (2.228*)
Taking the FT we get
1
Z(ω) = [B (ω − ω 0 ) + B ∗ (−ω − ω0 )] , (2.228**)
2
which reconstructs the bandpass spectrum in terms of the baseband spectrum.
As the preceding formulation indicates, given a bandpass signal z (t) , the
choice of ω 0 at the receiver effectively defines the inphase and quadrature
components. Thus a different choice of (local oscillator) frequency, say ω 1 ,
ω1 = ω 0 leads to the representation
z (t) = x1 (t) cos (ω 1 t) − y1 (t) sin (ω 1 t) , (2.229a)
ẑ (t) = x1 (t) sin (ω1 t) + y1 (t) cos (ω 1 t) , (2.229b)
wherein the x1 (t) and y1 (t) are the new inphase and quadrature components.
The relationship between x (t) , y (t) and x1 (t) and y1 (t) follows upon equat-
ing (2.229) to (2.224):
cos (ω0 t) − sin (ω 0 t) x (t) cos (ω1 t) − sin (ω 1 t) x1 (t)
= ,
sin (ω 0 t) cos (ω 0 t) y (t) sin (ω 1 t) cos (ω 1 t) y1 (t)
which yields
x (t) cos [(ω 0 − ω1 ) t] sin [(ω0 − ω 1 ) t] x1 (t)
= . (2.230)
y (t) − sin [(ω0 − ω 1 ) t] cos [(ω 0 − ω1 ) t] y1 (t)
2.3 Modulation and Analytic Signal Representation 143
This demonstrates directly that the analytic signal and the complex baseband
signal have the same envelope r (t) which is in fact independent of the frequency
of the reference carrier. We shall henceforth refer to r(t) as the signal enve-
lope. Unlike the signal envelope, the phase of the complex baseband signal does
depend on the carrier reference. Setting
x (t) x1 (t)
θ (t) = tan−1 , θ1 (t) = tan−1 , (2.232)
y (t) y1 (t)
we see that with a change in the reference carrier the analytic signal undergoes
the transformation
or, equivalently, that the two phase angles transform in accordance with
It should be noted that in general the real and imaginary parts of a complex
baseband signal need not be related by Hilbert transforms. In fact suppose x (t)
and y (t) are two arbitrary real signals, bandlimited to |ω| < ωx and |ω| < ω y ,
respectively. Then, as may be readily verified, for any ω0 greater than ω x /2
and ω y /2 the Hilbert transform of the bandpass signal z (t) defined by (2.224a)
is given by (2.224b).
With
F
Rzz (τ ) ⇐⇒ Szz (ω) (2.241)
we have in view of (2.238) and (2.207)
F
Rẑz (τ ) = R̂zz (τ ) ⇐⇒ −iSzz (ω) sign (ω) . (2.242)
Denoting the spectral density of w (t) by Sww (ω) , (2.240) together with (2.241)
and (2.242) gives
4Szz (ω) ; ω > 0,
Sww (ω) = (2.243)
0; ω < 0.
so that the spectral density of the analytic complex process has only positive
frequency content. The correlation functions of the baseband (inphase) x (t) and
(quadrature) process y (t) follow from (2.223). By direct calculation we get
Recall that for any two real stationary processes Rxy (τ ) = Ryx (−τ ). Using
this relation in (2.244c) we get
Ryx (τ ) = −Rxy (τ ). (2.245)
Also according to (2.244a) and (2.244b) the autocorrelation functions of the in-
phase and quadrature components of the stochastic baseband signal are identical
and consequently so are the corresponding power spectra. These are
The autocorrelation function of the real bandpass process can then be repre-
sented in terms of the autocorrelation function of the complex baseband process
as follows:
1 & '
Rzz (τ ) =
e Rbb (τ )eiω0 τ . (2.250)
2
146 2 Fourier Series and Integrals with Applications to Signal Analysis
LPF Ax(t) / 2
z(t)
Acos(w0 t)
s(t) BPF
−Asin(w0 t)
z(t)
LPF Ay(t) / 2
The signal s (t) is first bandpass filtered to the bandwidth of interest and
then split into two separate channels each of which is heterodyned with a local
oscillator with a 90 degree relative phase shift. The inphase and quadrature
components are obtained after lowpass filtering to remove the second harmonic
contribution generated in each mixer. To determine the power spectral density
of the bandpass signal requires measurement of the auto and cross spectra of
x (t) and y (t). The power spectrum can then be computed with the aid of
(2.246) and (2.247) which give
1
Szz (ω + ω 0 ) = [Sxx (ω) − iSxy (ω)] . (2.252)
2
This procedure assumes that the process s (t) is stationary so that Sxx (ω) =
Syy (ω) . Unequal powers in the two channels would be an indication of non-
stationarity on the measurement timescale. A rather common form of nonsta-
tionarity is the presence of an additive deterministic signal within the bandpass
process.
2.3 Modulation and Analytic Signal Representation 147
Szz
N0 / 4 N0 / 4
ω
−ω0 − 2π B −ω0 −ω0 + 2π B ω0 − 2π B ω0 ω0 + 2π B
Sww
N0
ω
ω0 − 2π B ω0 ω0 + 2π B
Sbb
N0
ω
− 2π B 2π B
The corresponding spectrum Sxx (ω) = Syy (ω) occupies the band |ω| ≤ 2πB +
Δω. Unlike in the symmetric case, the power spectrum is no longer flat but
exhibits two steps caused by the spectral shifts engendered by cos (τ Δω), as
shown in Fig. 2.37.
Sxx = Syy
N0 / 2
N0 / 4
ω
− 2π B − Δω −2π B + Δω 2π B − Δω 2π B +Δω
Figure 2.37: Baseband I&Q power spectra for assymmetric local oscillator fre-
quency positioning
From this follows (see Appendix) that F (z) is an analytic function of the com-
plex variable z in the closed lower half of the complex z plane, i.e., Im z ≤ 0.
Moreover, for Im z ≤ 0,
lim F (z) → 0 (2.260)
|z|→∞
−R ω0 − ε ω0 + ε R
ω
θR θε
cε
CR
Figure 2.38: Integration contour ΓR for the derivation of the Hilbert transforms
wherein ω 0 is real, taken in the clockwise direction along the closed path ΓR as
shown in Fig. 2.38. We note that ΓR is comprised of the two linear segments
(−R, ω0 − ε), (ω 0 + ε, R) along the axis of reals, the semicircular contour cε of
radius ε with the circle centered at ω = ω0 , and the semicircular contour CR
150 2 Fourier Series and Integrals with Applications to Signal Analysis
of radius R in the lower half plane with the circle centered at ω = 0. Since
the integrand in (2.261) is analytic within ΓR , we have IR (ω 0 ) ≡ 0, so that
integrating along each of the path-segments indicated in Fig. 2.38 and adding
the results in the limit as ε → 0 and R → ∞, we obtain
R
ω 0 −ε
F (ω) F (ω)
0 = lim dω + dω
ε→0, R→∞ −R ω − ω0 ω 0 +ε ω − ω 0
F (z) F (z)
+ lim dz + lim dz. (2.262)
ε→0 z − ω0 R→∞ z − ω0
cε CR
so that in view of (2.260) in the limit of large R the last integral in (2.262) tends
to zero. On cε we set z − ω 0 = εeiθε and substituting into the third integral in
(2.262) evaluate it as follows:
0
F (z) !
lim dz = lim F ω 0 + εeiθε idθ = iπF (ω0 ) .
ε→0 z − ω0 ε→0 −π
cε
Now the limiting form of the first two integrals in (2.262) are recognized as the
definition a CPV integral so that collecting our results we have
∞
F (ω)
0=P dω + iπF (ω 0 ). (2.263)
−∞ ω − ω0
By writing F (ω) = R(ω) + iX(ω) and similarly for F (ω 0 ), substituting
in (2.263), and setting the real and the imaginary parts to zero we obtain
∞
1 R (ω)
X(ω0 ) = P dω, (2.264a)
π −∞ ω − ω0
∞
1 R (ω)
R(ω0 ) = − P dω, (2.264b)
π −∞ ω − ω0
which, apart from a different labeling of the variables, are the Hilbert Transforms
in (2.173a) and (2.173b). Because the real and imaginary parts of the FT
evaluated on the real frequency axis are not independent it should be possible
to determine the analytic function F (z) either from R(ω) of from X (ω) . To
obtain such formulas let z0 be a point in the lower half plane (i.e., Im z0 < 0)
and apply the Cauchy integral formula
4
1 F (z)
F (z0 ) = − dz (2.265)
2πi z − z0
Γ̂R
2.4 Fourier Transforms and Analytic Function Theory 151
−R R ω
z0
•
CR
taken in the counterclockwise direction over the contourΓ̂R as shown in Fig. 2.39
and comprised of the line segment (−R, R) and the semicircular contour CR of
radius R. Again because of (2.260) the contribution over CR vanishes as R is
allowed to approach infinity so that (2.265) may be replaced by
∞
1 F (ω)
F (z0 ) = − dω
2πi −∞ ω − z0
∞ ∞
1 R (ω) 1 X (ω)
= − dω − dω. (2.266)
2πi −∞ ω − z0 2π −∞ ω − z0
In the last integral we now substitute for X (ω) its Hilbert Transform from
(2.264a) to obtain
∞ ∞ ∞
1 X (ω) 1 R (η)
− dω = − 2 dωP dη
2π −∞ ω − z0 2π −∞ −∞ (ω − z 0 ) (η − ω)
∞ ∞
1 dω
= R (η) dηP .
2π2 −∞ −∞ (ω − z 0 ) (ω − η)
(2.267)
The last CPV integral over ω is evaluated using the calculus of residues as
follows:
∞ 4
dω dz 1
P = − iπ , (2.268)
−∞ (ω − z 0 ) (ω − η) (z − z 0 ) (z − η) η − z0
ΓR
where ΓR is the closed contour in Fig. 2.38 and where the location of the simple
pole at ω 0 is now designated by η. The contour integral in (2.268) is performed
in the clockwise direction and the term −iπ/ (η − z0 ) is the negative of the
contribution from the integration over the semicircular contour cε . The only
152 2 Fourier Series and Integrals with Applications to Signal Analysis
contribution to the contour integral arises from the simple pole at z = z0 which
equals −i2π/ (z0 − η) resulting in a net contribution in (2.268) of iπ/ (η − z0 ) .
Substituting this into (2.267) and then into (2.266) gives the final result
i ∞ R (η)
F (z) = dη, (2.269)
π −∞ η − z
The factor multiplying R (η) in integrand will be recognized as the delta function
kernel in (1.250) so that lim R(ω, δ) as −δ → 0 is in fact R (ω) .
and set
A(ω) = eα(ω) . (2.272)
Taking logarithms we have
Based on the results of the preceding subsection it appears that if ln F (ω) can
be represented as an analytic function in the lower half plane one should be
able to employ Hilbert Transforms to relate the phase to the log amplitude of
the signal FT. From the nature of the logarithmic function we see that this is
not possible for an arbitrary FT of a causal signal but only for signals whose
FT, when continued analytically into the complex z-domain via formula (2.257)
or (2.269), has no zeros in the lower half of the z-plane. Such transforms are
said to be of the minimum-phaseshift type. If f (t) is real so that A(ω) and
θ (ω) is, respectively, an even and an odd function of ω, we can express θ (ω) in
terms of α (ω) using contour integration, provided the FT decays at infinity in
accordance with
−k
|F (ω)| ∼ O |ω| for some k > 0. (2.274)
ω→∞
2.4 Fourier Transforms and Analytic Function Theory 153
− R −ω0 −ε − ω0 +ε ω0 −ε ω0 + ε R ω
• •
−ω0 ω0
cε− cε+
R
CR
taken in the clockwise direction over the closed contour ΓR comprised of the
three linear segments (−R, −ω0 − ε) , (−ω0 + ε, ω 0 − ε) ,(ω 0 + ε, R) , the two
semicircular arcs cε− and cε+ each with radius ε, and the semicircular arc CR
with radius R, as shown in Fig. 2.40. By assumption F (z) is free of zeros within
the closed contour so that IR ≡ 0. In the limit as R → ∞ and ε → 0 the integral
over the line segments approaches a CPV integral while the integrals cε and c+ ε
each approach iπ times the residue at the respective poles. The net result can
then be written as follows:
∞
ln F (ω) ln F (−ω 0 ) ln F (ω 0 )
0 = P 2 2
dω + iπ + iπ
−∞ ω 0 − ω 2ω0 −2ω0
4
ln F (z)
+ lim dz. (2.276)
R→∞ ω20 − z 2
CR
In view of (2.274) for sufficiently large R the last integral may be bounded as
follows:
4 π
ln F (z) k ln R
dz ≤ constant ×
ω2 − z 2 |ω 2 − R2 ei2θ | Rdθ. (2.277)
0 0 0
CR
Since ln R < R for R > 1, the last integral approaches zero as R → ∞ so that
the contribution from CR in (2.276) vanishes. Substituting from (2.273) into
154 2 Fourier Series and Integrals with Applications to Signal Analysis
the first three terms on the right of (2.276) and taking account of the fact that
α (ω) is even while θ (ω) is odd, one obtains
∞
α (ω) + iθ (ω) α (ω0 ) − iθ (ω 0 ) α (ω 0 ) + iθ (ω 0 )
0=P 2 − ω2 dω + iπ + iπ
−∞ ω 0 2ω 0 −2ω0
Observe that the terms on the right involving α (ω0 ) cancel while the integration
involving θ (ω) vanishes identically. As a result we can solve for θ (ω 0 ) with the
result ∞
2ω0 α (ω)
θ (ω 0 ) = P 2 − ω2
dω. (2.278)
π 0 ω 0
Proceeding similarly with the aid of the contour integral
4
ln F (z)
IR = dz (2.279)
z (ω 20 − z 2 )
ΓR
which is termed the Paley–Wiener condition [15]. Note that it precludes A(ω)
from being identically zero over any finite segment of the frequency axis.
Ámw
•i
Âew
• −2i
Ámw
CR+
•i
Âew
−R R
•−2i
Ámw
·i
-R R Âew
·- 2i
CR-
Since now the exponential decays in the lower half plane, Jordan’s lemma again
guarantees that the limit of the integral over CR− vanishes. Thus the final result
reads −t
e /3 ; t ≥ 0,
f (t) = (2.283)
e2t /3 ; t ≤ 0.
This procedure is readily generalized to arbitrary rational functions. Thus sup-
pose F (ω) = N (ω) /D(ω) with N (ω) and D (ω) polynomials in ω. We shall
assume that2 degree N (ω) < degree D (ω) so that F (ω) vanishes at infinity,
2 If N and D are of the same degree, then the FT contains a delta function which can be
as required by the Jordan lemma. If D (ω) has no real zeros, then proceeding
as in the preceding example we find that the inverse FT is given by the residue
sums ⎧
N (ω) iωt
⎨ i
res e ; t ≥ 0,
f (t) =
k;Im ω k >0
D(ω) ω=ωk (2.284)
⎩ −i
res N (ω) iωt
; t ≤ 0.
k;Im ω <0 k D(ω) e ω=ω k
2
For example, suppose F (ω) = i/(ω + 2i) (ω − i) which function has a double
pole at ω = −2i and a simple pole at ω = i. For t ≥ 0 the contribution comes
from the simple pole in the upper half plane and we get
ie−t e−t
f (t) = i = ; t ≥ 0.
(i + 2i)2 9
For t ≤ 0 the double pole in the lower half plane contributes. Hence
The case of D (ω) having real roots requires special consideration. First, if
the order of any one of the zeros is greater than 1, the inverse FT does not
exist.3 On the other hand, as will be shown in the sequel, if the zeros are
simple the inverse FT can computed by suitably modifying the residue formu-
las (2.284). Before discussing the general case we illustrate the procedure by
a specific example. For this purpose consider the time function given by the
inversion formula
∞
1 eiωt
f (t) = P 2 2
dω, (2.285)
2π −∞ (ω − 4)(ω + 1)
with
( )
−2−ε 2−ε R
1 eiωt
IR,ε = + + dω. (2.287)
2π −R −2+ε 2+ε (ω 2 − 4)(ω 2 + 1)
3 The corresponding time functions are unbounded at infinity and are best handled using
Laplace transforms.
158 2 Fourier Series and Integrals with Applications to Signal Analysis
over a closed path Γ that includes IR,ε as a partial contribution. For t > 0
the contour Γ is closed with the semicircle of radius R and includes the two
semicircles cε+ and cε− of radius ε centered, respectively, at ω = 2 and ω = −2,
as shown in Fig. 2.44.
Ám w
CR+
·i
e e Âew
· ·
-R - 2 -e -2+e 2-e 2+e R
·-i
Taking account of the residue contribution at ω = i we get for the integral over
the closed path
eiωt e−t
IˆR,ε = i 2 |ω=i = − .
(ω − 4)(2ω) 10
As ε → 0 the integrals over cε− and cε− each contribute −2πi times one-half
the residue at the respective simple pole (see Appendix A) and a R → ∞ the
integral over CR+ vanishes by the Jordan lemma. Thus taking the limits and
summing all the contributions in (2.289) we get
e−t 1 eiωt 1 eiωt
− = f (t) − i |ω=−2 + |ω=2
10 2 (2ω)(ω 2 + 1) 2 (2ω)(ω 2 + 1)
1
= f (t) + sin(2t)
20
and solving for f (t),
1 e−t
f (t) = − sin(2t) − ; t > 0. (2.290)
20 10
2.5 Time-Frequency Analysis 159
et 1 1
− + sin(2t) = f (t) − sin(2t).
10 10 10
Solving for f (t) and combining with (2.290) we have for the final result
1 e−|t|
f (t) = − sin(2t) sign(t) − . (2.291)
20 10
Note that we could also have used an integration contour with the semicircles
cε− and cε+ in the lower half plane. In that case we would have picked up the
residue at ω = ±2 for t > 0.
Based on the preceding example it is not hard to guess how to generalize
(2.284) when D(ω) has simple zeros for real ω. Clearly for every real zero at
(ω) iωt
ω = ωk we have to add the contribution sign(t) (i/2) res N D(ω) e |ω=ωk .
Hence we need to replace (2.284) by
N (ω) iωt
f (t) = (i/2) sign(t) res e |ω=ωk
D (ω)
k;Im ω k =0
⎧
N (ω) iωt
⎨ i
res e ; t ≥ 0,
+
k;Im ωk >0
D(ω) ω=ωk (2.292)
⎩ −i
res N (ω) iωt
; t ≤ 0.
k;Im ω <0
k D(ω) e
ω=ω k
For example, for F (ω) = iω/(ω20 − ω2 ), the preceding formula yields f (t) =
1
2 sign(t) cos ω 0 t and setting ω 0 = 0 we find that the FT of sign(t) is 2/iω, in
agreement with our previous result.
and ∞
2
E= |f (t)| dt (2.295)
−∞
are the signal energies. We can accept this as a plausible measure of signal
duration if we recall that σ 2t corresponds algebraically to the variance of a ran-
2
dom variable with probability density |f (t)| /E wherein the statistical mean
has been replaced by < t >. This quantity we may term “the average time of
signal occurrence”.4 Although definition (2.295) holds formally for any signal
(provided, of course, that the integral converges), it is most meaningful, just
like the corresponding concept of statistical average in probability theory, when
the magnitude of the signal is unimodal. For example, using these parameters
a real Gaussian pulse takes the form
√
E (t− < t >)2
f (t) = exp − . (2.296)
(2πσ 2 )
1/4 4σ 2t
t
To get an idea how the signal spectrum F (ω) affect the rms signal duration we
first change the variables of integration in (2.293) from t to t = t− < t > and
write it in the following alternative form:
1 ∞ 2 2
σ 2t = t |f (t + < t >)| dt . (2.297)
E −∞
Using the identities F {−itf (t)} = dF (ω)/dω and F {f (t+ < t >)} =
F (ω) exp iω < t > we apply Parseval’s theorem to (2.297) to obtain
1 d [F (ω) exp iω < t >] 2
∞
σ 2t = dω
2πE dω
−∞
∞ 2
1 dF (ω)
= + i < t > F (ω) dω. (2.298)
2πE −∞ dω
This shows that the rms signal duration is a measure of the integrated fluctu-
ations of the amplitude and phase of the signal spectrum. We can also express
4 For a fuller discussion of this viewpoint see Chap. 3 in Leon Cohen, “Time-Frequency
the average time of signal occurrence < t > in terms of the signal spectrum
by first rewriting the integrand in (2.294) as the product tf (t)f (t)∗ and using
F {tf (t)} = idF (ω)/dω together with Parseval’s theorem. This yields
∞
1 dF (ω) ∗
< t >= i F (ω) dω.
2πE −∞ dω
where ψ (t) = dψ (t) /dt is the instantaneous frequency. This equation provides
another interpretation of < ω >, viz., as the average instantaneous frequency
2
with respect to the density |f (t)| /E, a result which may be considered a sort
of dual to (2.299).
The rms signal duration and rms bandwidth obey a fundamental inequality,
known as the uncertainty relationship, which we now proceed to derive. For
this purpose let us apply the Schwarz inequality to the following two functions:
(t− < t >) f (t) and df (t) /dt − i < ω > f (t) . Thus
∞ ∞ 2
df (t)
2 2
(t− < t >) |f (t)| dt
dt − i < ω > f (t) dt
−∞ −∞
∞ 2
df (t)
≥ (t− < t >) f ∗ (t) − i < ω > f (t) dt . (2.305)
−∞ dt
Substituting for the first two integrals in (2.305) the σ 2t and σ 2ω from (2.297)
and (2.303), respectively, the preceding becomes
∞ 2
df (t)
σ 2t σ 2ω E 2 ≥ ∗
(t− < t >) f (t) − i < ω > f (t) dt
dt
−∞
∞ 2
df (t)
= (t− < t >) f ∗
(t) dt
dt , (2.306)
−∞
,∞ 2
where in view of (2.294) we have set −∞ (t− < t >) |f (t)| dt = 0. We now
integrate the last integral by parts as follows:
∞
df (t)
(t− < t >) f ∗ (t) dt
−∞ dt
∞
2 d [(t− < t >) f ∗ (t)]
= (t− < t >) |f (t)| ∞ −∞ − f (t) dt
−∞ dt
∞
2 df ∗ (t)
= (t− < t >) |f (t)| ∞ −∞ − E − (t− < t >) f (t) dt. (2.307)
−∞ dt
√
Because f (t) has finite
energy it must decay at infinity faster than 1/ t so that
(t− < t >) |f (t)| ∞
2
−∞ = 0. Therefore after transposing the last term in (2.307)
to the left of the equality sign we can rewrite (2.307) as follows:
∞ 2
∗ df (t)
Re (t− < t >) f (t) dt = −E/2. (2.308)
−∞ dt
2.5 Time-Frequency Analysis 163
Since the magnitude of a complex number is always grater or equal to the mag-
nitude of its real part the right side of (2.306) equals at least E 2 /4. Cancelling
of E 2 and taking the square root of both sides result in
1
σt σω ≥ , (2.309)
2
which is the promised uncertainty relation. Basically it states that simultaneous
localization of a signal in time and frequency is not achievable to within arbi-
trary precision: the shorter the duration of the signal the greater its spectral
occupancy and conversely. We note that except for a constant factor on the
right (viz., Planck’s constant ), (2.309) is identical to the Heisenberg uncer-
tainty principle in quantum mechanics where t and ω stand for any two canoni-
cally conjugate variables (e.g., particle position and particle momentum). When
does (2.309) hold with equality? The answer comes from the Schwarz inequal-
ity (2.305) wherein equality can be achieved if and only if (t− < t >) f (t) and
df (t)
dt − i < ω > f (t) are proportional. Calling this proportionality constant −α
results in the differential equation
df (t)
− i < ω > f (t) + α (t− < t >) f (t) = 0. (2.310)
dt
This is easily solved for f (t) with the result
5 α α 6
2
f (t) = A exp − (t− < t >) + < t >2 +i < ω > t , (2.311)
2 2
where A is a proportionality constant. Thus the optimum signal from the stand-
point of simultaneous localization in time and frequency has the form of a Gaus-
sian function. Taking account of the normalization (2.295) we obtain after a
simple calculation
& '
α = 1/2σ2t , A = E/2πσ 2t exp − < t >2 /2σ2t . (2.312)
for all t. For if we now multiply both sides of (2.314) by g ∗ (t − τ ) and integrate
with respect to τ we obtain
∞ ∞
1
f (t) = S (ω, τ ) g ∗ (t − τ ) eiωt dωdτ . (2.316)
2π −∞ −∞
The two-dimensional function S (ω, τ ) is referred to as the short-time
Fourier transform5 (STFT) of f (t) and (2.316) the corresponding inversion
formula. The STFT can be represented graphically in various ways. The most
common is the spectrogram, which is a two-dimensional plot of the magnitude
of S (ω, τ ) in the τ ω plane. Such representations are commonly used as an aid
in the analysis of speech and other complex signals.
Clearly the characteristics of the STFT will depend not only on the signal
but also on the choice of the window. In as much as the entire motivation
for the construction of the STFT arises from a desire to provide simultaneous
localization in frequency and time it is natural to choose for the window function
the Gaussian function since, as shown in the preceding, it affords the optimum
localization properties. This choice was originally made by Gabor [6] and the
STFT with a Gaussian window is referred to as the Gabor transform. Here we
adopt the following parameterization:
21/4 πt2
g (t) = √ e− s2 . (2.317)
s
√
Reference to (2.311) and (2.312) shows that σ t = s/ (2 π) . Using (2.142*) we
have for the FT √ 2 2
G (ω) = 21/4 se−s ω /4π (2.318)
√
from which we obtain σ ω = π/s so that σ t σ ω = 1/2, as expected. !
As an example, let us compute the Gabor transform of exp αt2 /2 . We
obtain
2πτ
!2
√ 1/4 π s − iωs πτ 2
S (ω, τ ) / s = 2 2
exp − 2
− 2 . (2.319)
iαs /2 − π 4 (iαs /2 − π) s
5 Also referred to as the sliding-window Fourier transform
2.5 Time-Frequency Analysis 165
& '
Figure 2.45: Magnitude of Gabor Transform of exp i 21 αt2
√
A relief map of the magnitude of S (ω, τ ) / s (spectrogram) as a function of the
nondimensional variables τ /s (delay) and ωs (frequency) is shown in Fig. 2.45.
In this plot the dimensionless parameter (1/2) αs2 equals 1/2. The map
shows a single ridge corresponding to a straight line ω = ατ corresponding
to the instantaneous frequency at time τ . As expected only positive frequency
components are picked up by the & transform.
' On the other hand, if instead
we transform the real signal cos 12 αt2 , we get a plot as in Fig. 2.46. Since
the cosine contains exponentials of both signs the relief map shows a second
ridge running along the line ω = −ατ corresponding to negative instantaneous
frequencies.
As a final example consider the signal plotted in Fig. 2.47. Even though
this signal looks very much like a slightly corrupted sinusoid, it is actually
comprised of a substantial band of frequencies with a rich spectral structure.
This can be seen from Fig. 2.48 which shows a plot of the squared magnitude of
the FT. From this spectral plot we can estimate the total signal energy and the
relative contributions of the constitutive spectral components that make up the
total signal but not their positions in the time domain. This information can
be inferred from the Gabor spectrogram whose contour map is represented in
Fig. 2.49. This spectrogram shows us that the spectral energy of the signal is in
fact confined to a narrow sinuous band in the time-frequency plane. The width of
this band is governed by the resolution properties of the sliding Gaussian window
(5 sec. widths in this example) and its centroid traces out approximately the
locus of the instantaneous frequency in the time-frequency plane.
166 2 Fourier Series and Integrals with Applications to Signal Analysis
&1 2
'
Figure 2.46: Magnitude of Gabor Transform of cos 2 αt
1.5
0.5
f(t)
-0.5
-1
-1.5
-2
0 10 20 30 40 50 60 70 80 90 100
Time (sec)
12000
10000
Signal Energy/Hz
8000
6000
4000
2000
0
0 0.5 1 1.5 2 2.5 3
Frequency (Hz)
200
0.4
150
1.6
Hz*100
0.8 0.2
1.4 0.2
100 0.6
1.61.2
0.4
50 1
1.2
100 200 300 400 500 600 700 800 900 1000
sec*10.24
Figure 2.49: Contour map of the Gabor Transform of the signal in Fig. 2.48
168 2 Fourier Series and Integrals with Applications to Signal Analysis
where
ω n +Δω/2
1
zn (t) = eiωt e−iψ(ω) F (ω) dω (2.324)
π ω n −Δω/2
zn (t) is the corresponding analytic signal (assuming real f (t) and ψ(ω) =
−ψ(−ω)) and the integration is carried out over the shaded band in Fig. 2.50.
F(w) ,y (w)
y¢ (wn )
y (w)
F(w)
· · · · · ·
w
wn-1 wn wn+1
Dw
Clearly the complete signal y (t) can be represented correctly by simply sum-
ming over the totality of such non-overlapping frequency bands, i.e.,
y(t) = yn (t) . (2.325)
n
For sufficiently small Δω/ωn the phase function within each band may be
approximated by
ψ(ω) ∼ ψ(ω n ) + (ω − ω n ) ψ (ω n ), (2.325*)
where ψ (ω n ) is the slope of the dispersion curve at the center of the band in
Fig. 2.50. If we also approximate the signal spectrum F (ω) by its value at the
band center, (2.324) can be replaced by
ω n +Δω/2
1
zn (t) ∼ F (ω n ) eiωt e−i[ψ(ωn )+(ω−ωn )ψ (ω n )]
dω.
π ω n −Δω/2
and upon setting F (ωn ) = A (ωn ) eiθ(ωn ) the real signal (2.323) assumes the
form
!
sin Δω/2 t − ψ (ω n )
yn (t) ∼ A (ω n ) sin [ω n t + θ (ωn ) + π − ψ(ω n )] ! (2.327)
π t − ψ (ω n )
a representative plot of which is shown in Fig. 2.51. Equation (2.327) has the
form of a sinusoidal carrier at frequency ω n that has been phase shifted by
-1
-2
-3
-4
-1 -0.8 -0.6 -0.4 -0.2 0 0.2 0.4 0.6 0.8 1
t-Tg
Figure 2.51: Plot of (2.327) for Δω/2θ (ωn ) = 10, ωn = 200rps and A (ω n ) = 1
ψ(ω n ) radians. Note that the carrier is being modulated by an envelope in form
of a sinc function delayed in time by ψ (ω n ). Evidently this envelope is the time
domain representation of the spectral components contained within the band
Δω all of which are undergoing the same time delay as a “group.” Accordingly
ψ (ω n ) is referred to as the group delay (Tg ) while the time (epoch) delay of the
carrier θ (ω n ) /ω n is referred to as the phase delay (T ϕ). One may employ these
concepts to form a semi-quantitative picture of signal distortion by assigning to
each narrow band signal constituent in the sum in (2.325) its own phase and
group delay. Evidently if the dispersion curve changes significantly over the
2.6 Frequency Dispersion 171
0.35
0.3 t1 t2 t3 t4 t5
0.25
0.2
t-x/v
0.15
0.1
0.05
0
-5 -4 -3 -2 -1 0 1 2 3 4 5
x
Figure 2.52: Self-preserving spatial pattern at successive instants of time (t1 <
t2 < t3 < t4 < t5)
or, explicitly, by
Ω
(ω 0 )η ]L iηt dη
ŝ (t) = P (η) e−i[β(η+ω0 )−β e . (2.340)
−Ω 2π
We shall obtain an approximation to this integral under the following two
assumptions:
ω0 Ω, (2.341a)
2
Ω β (ω 0 )L 1. (2.341b)
The first of these is the conventional narrow band approximation while the
second implies a long propagation path.7 Thus in view of (2.341a) we may
approximate β (η + ω0 ) by
1
β (η + ω 0 ) ∼ β (ω 0 ) + β (ω 0 ) η + β (ω 0 ) η 2 . (2.342)
2
Substituting this into (2.340) leads to the following series of algebraic steps:
Ω
−iβ(ω 0 )L L 2 dη
ŝ (t) ∼ e P (η) e−i 2 β (ω0 )η eiηt
−Ω 2π
Ω
η 2
−i L β (ω 0 )Ω2 ( Ω ) −2( Ωη ) Ωβ (ω
t dη
= e−iβ(ω0 )L P (η) e 2 0 )L
−Ω 2π
1
L 2 2
−i β (ω 0 )Ω ν −2ν Ωβ (ω )L t dν
= Ωe−iβ(ω0 )L P (νΩ) e 2 0
−1 2π
1 2
t2 2 dν
−iβ(ω 0 )L −i 2Lβ (ω 0 )
−i L t
2 β (ω 0 )Ω ν− Ωβ (ω 0 )L
= Ωe e P (νΩ) e
−1 2π
t2
−i
= Ωe−iβ(ω0 )L e 2Lβ (ω0 )
1−t/Ωβ (ω0 )L
L 2 2 dx
P xΩ + t/β (ω0 ) L e−i 2 β (ω0 )Ω x . (2.343)
−1−t/Ωβ (ω 0 )L 2π
Since we are interested primarily in assessing pulse distortion the range of the
time variable of interest is on the order of t ∼ 1/Ω we have in view of (2.341b)
t/Ωβ (ω 0 ) L 1 . (2.344)
Consequently the limits in the last integral in (2.343) may be replaced by −1, 1.
Again in view of (2.341b) we may evaluate this integral by appealing to the
principle of stationary phase. Evidently the point of stationary phase is at
x = 0 which leads to the asymptotic result
* +
1 t2 t
−iπ/4sign[β (ω0 )] −iβ(ω 0 )L −i 2Lβ (ω0 )
ŝ (t) ∼ e e e P .
2π β (ω 0 )L β (ω0 ) L
(2.345)
7 Note (2.341b) necessarily excludes the special case β (ω 0 ) = 0.
2.6 Frequency Dispersion 175
Parseval’s theorem tells us that the energies of the input and output signals
must be identical. Is this still the case for the approximation (2.346)? Indeed
it is as we verify by a direct calculation:
∞ ∞
!
2
|ŝ (t)| dt = (1/2π β (ω0 ) L) P t/β (ω 0 ) L 2 dt
−∞ −∞
∞ Ω
1 1
= |P (ω)|2 dω ≡ |P (ω)|2 dω.
2π −∞ 2π −Ω
Equation (2.346) states that the envelope of a pulse propagating over a suf-
ficiently long path assumes the shape of its Fourier transform wherein the
timescale is determined only by the path length and the second derivative of
the propagation constant at the band center. For example, for a pulse of unit
amplitude and duration T we obtain
sin2 2βtT
(ω )L
2 0
|ŝ (t)| ∼ 4 2
t
β (ω 0 )L
so that
Ω
L
(ω 0 )η 2 −i L
(ω 0 )η 3 iηt dη
ŝ (t) ∼ e−iβ(ω0 )L P (η) e−i 2 β 6β e . (2.351)
−Ω 2π
We shall not evaluate (2.351) for general pulse shapes but confine our attention
to a Gaussian pulse. In that case we may replace the limits in (2.351) by ±∞
and require only that (2.341a) hold but not necessarily (2.341b). Using the
parameterization in (2.296) we have
21/4 πt2
p (t) = √ e− T 2 , (2.352)
T
where we have relabeled the nominal pulse width s by T . The corresponding
FT then reads √ 2 2
P (ω) = 21/4 T e−T ω /4π (2.353)
so that (2.351) assumes the form
1/4
√ −iβ(ω )L ∞ −T 2 η2 /4π −i L β (ω )η2 −i L β (ω )η3 iηt dη
ŝ (t) ∼ 2 Te 0
e e 2 0 6 0
e
−∞ 2π
√ ∞ Lβ (ω0 ) 3 2 dη
= 21/4 T e−iβ(ω0 )L e−i 6 [η +Bη −Cη] , (2.354)
−∞ 2π
where
3β (ω 0 ) 3T 2
B = − i , (2.355a)
β (ω0 ) 2πLβ (ω0 )
6t
C = . (2.355b)
Lβ (ω 0 )
β (ω 0 ) T
q = , (2.358a)
β (ω 0 )
β (ω0 ) L
p = , (2.358b)
T3
2β (ω 0 ) L
χ = 2qp = . (2.358c)
T2
Introducing these into (2.357) we obtain
5 2 6
1/4
√ χq i i 2 6 t
−iβ(ω 0 )L −i 6 (1− πχ ) (1− πχ ) + χq ( T )
ŝ (t) ∼ 2 (1/ T )e e
( )
p 2/3 * i
+2
t
−1/3 2
(p/2) Ai −q 1− +4 . (2.359)
2 πχ qχT
Let us first examine this expression for the case in which the third derivative
term in (2.350) can be neglected. Clearly this is tantamount to dropping the
cubic term in (2.351). The integral then represents the FT of a Gaussian func-
tion and can be evaluated exactly. On the other hand, from the definition of q
in (2.358a) we note that β (ω0 ) → 0 and β (ω0 ) = 0 correspond to q → ∞.
Hence we should be able to obtain the same result by evaluating (2.359) in the
limit as q → ∞. We do this with the aid of the first-order asymptotic form of
the Airy function for large argument the necessary formula for which is given
in [1]. It reads
π
Ai(−z) ∼ π −1/2 z −1/4 sin(ζ + ), (2.360)
4
where
2
ζ = z 3/2 ; |arg(z)| < π. (2.361)
3
Thus we obtain for8 |q| ∼ ∞
( )
p 2/.3 * i
+2
t
−1/3 2
(p/2) Ai −q 1− +4
2 πχ qχT
−1/2
i
∼ −i πχ(1 − )
πχ
⎛ ( 3/2 ) ⎞
2
3
⎜ exp i (p/3) q 1 − πχ + 4 qχT
i t π
+ i4 ⎟
⎜ ⎟
⎜ ( 3/2 ) ⎟ , (2.362)
⎜ 2 ⎟
⎝ ⎠
− exp −i (p/3) q 3 1 − πχ + 4 qχT
i t
−i4π
* +3
2
! i
= i χq /6 1 −
πχ
⎡ ⎤
⎢ t t2 3 ⎥
⎣1 + 6 2 + 6 4 + o(1/q )⎦
qχT 1 − πχ i
(qχT )2 1 − πχ
i
* +3 * +
2
! i t i
= i χq /6 1 − + iq 1−
πχ T πχ
* + −1
t2 i
+i 2 1 − + o(1/q). (2.363)
χT πχ
In identical fashion we can expand the argument of the second exponential which
would differ from (2.363) only by a minus sign. It is not hard to show that for
sufficiently large |q| is real part will be negative provided
1
χ2 > 2 . (2.364)
3π
In that case the second exponential in (2.362) asymptotes to zero and may be
ignored. Neglecting the terms o(1/q) in (2.363) we now substitute (2.362) into
(2.359) and note that the first two terms in the last line of (2.363) cancel against
the exponential in (2.359). The final result then reads
√
ŝ (t) ∼ 21/4 (1/ T )e−iβ(ω0 )L
( −1/2 ) ( * +−1 )
i t2 i π
−i πχ(1 − ) exp i 2 1 − +i
πχ χT πχ 4
√ −1/2 πt2 −1
= 21/4 (1/ T )e−iβ(ω0 )L (1 + iπχ) exp − 2 (1 + iπχ) . (2.365)
T
For the squared magnitude of the pulse envelope we get
√
2 !−1/2 πt2
|ŝ (t)|2 ∼ 1 + π 2 χ2 exp − 2 . (2.366)
T (T /2) (1 + π 2 χ2 )
√
The nominal duration of this Gaussian signal may be defined by (T /2 π)
1
1 + π 2 χ2 so that χ plays the role of a pulse-stretching parameter. When
χ 1 (2.366) reduces to
2.6 Frequency Dispersion 179
√
2 2t2 T T 2 t2
|ŝ (t)|2 ∼ exp − 2 2 = √ exp − . (2.367)
πχT πT χ π 2β (ω 0 ) L 2πβ (ω 0 ) L
The same result also follows more directly from the asymptotic form (2.346)
as is readily verified by the substitution of the FT of the Gaussian pulse (2.353)
into (2.346). Note that with χ = 0 in (2.366) we recover the squared magnitude
of the original (input) Gaussian pulse (2.352). Clearly this substitution violates
our original assumption |q| ∼ ∞ under which (2.366) was derived for in accor-
dance with (2.358) χ = 0 implies q = 0. On the other hand if β (ω0 ) is taken to
be identically zero (2.366) is a valid representation of the pulse envelope for all
values of χ. This turns out to be the usual assumption in the analysis of pulse
dispersion effects in optical fibers. In that case formula (2.366) can be obtained
directly from (2.351) by simply completing the square in the exponential and
integrating the resulting Gaussian function. When β (ω 0 ) = 0 with q arbitrary
numerical calculations of the output pulse can be carried out using (2.359). For
this purpose it is more convenient to eliminate χ in favor of the parameters p
and q. This alternative form reads
5 6
1/4
√ p i i 2 3 t
−iβ(ω 0 )L −i 3 (q− 2πp ) (q− 2πp ) + p ( T )
ŝ (t) ∼ 2 (1/ T )e e
( * + * +2 )
2/3
−1/3 |p| i t
(|p| /2) Ai − q− +2 . (2.368)
2 2πp pT
1.5
p=0
1 p=-0.2 p=0.2
T*abs(s)2
p=-0.5 p=0.5
0.5
p=-1.0 p=1.0
0
-4 -3 -2 -1 0 1 2 3 4
t/T
To assess the influence of the third derivative of the phase on the pulse envelope
we set q = 0 and obtain the series of plots for several values of p as shown
in Fig. 2.53. The center pulse labeled p = 0 corresponds to the undistorted
Gaussian pulse (χ = 0 in (2.366)). As p increases away from zero the pulse
envelope broadens with a progressive increase in time delay. For sufficiently
large p the envelope will tend toward multimodal quasi-oscillatory behavior the
onset of which is already noticeable for p as low as 0.2. For negative p the pulse
shapes are seen to be a mirror images with respect to t = 0 of those for positive
p so that pulse broadening is accompanied by a time advance.
9 Note
√ !
that T0 = 1/ 2π T where T represents the definition of pulse width in (2.352).
√
Also A = 21/4 / T .
2.6 Frequency Dispersion 181
* +
κ t
ω (t) = ω0 1 − (2.373)
ω 0 T0 T0
so that over the nominal pulse interval −T0 ≤ t ≤ T0 the fractional change in
the instantaneous frequency is 2κ/ω0 T0 . Presently we view this chirping as the
intrinsic drift in the carrier frequency during the formation of the pulse. How
does this intrinsic chirp affect pulse shape when this pulse has propagated over
a transmission medium with transfer function exp − β (ω) L ? If we neglect the
effects of the third and higher order derivatives of the propagation constant the
answer is straightforward. We first compute the FT of (2.372) as follows:
* +
∞ 2 ∞ 2 2
iωT0 ω 2 T0
4
− t 2 (1+iκ) −iωt − (1+iκ)
2 t− 1+iκ + (1+iκ) 2
2T
P (ω) = A e 2T0 e dt = A e 0 dt
−∞ −∞
* +
∞ 2 2
iωT0
ω 2 T0
2
− 2(1+iκ) − (1+iκ)
2 t− 1+iκ
2T0
= Ae e dt
−∞
ω 2 T0
2π − 2(1+iκ) 2
= AT0 e , (2.374)
1 + iκ
where the last result follows from the formula √ for the Gaussian error func-
tion with (complex) variance parameter T02 / 1 + iκ. Next we substitute (2.374)
in (2.340) with Ω = ∞ together with the approximation (2.342) to obtain
∞
−iβ(ω0 )L 1 2 dη
ŝ (t) = e P (η) e−i 2 β (ω0 )η L eiηt (2.375)
−∞ 2π
Simplifying,
∞
T02
Lβ (ω0 )
2π −iβ(ω0 )L − 2(1+iκ) +i 2 η2 dη
s(t) = AT0 e e eiηt . (2.376)
1 + iκ −∞ 2π
Setting Q = T02 / [2 (1
+ iκ)] + iLβ (ω 0 ) /2 we complete the square in the
exponential as follows:
it 2 t2 t2 2
2 −Q (η− 2Q ) + 4Q it
e−Qη +iηt
=e 2
= e− 4Q e−Q(η− 2Q ) . (2.377)
From this we note that the complex variance parameter is 1/(2Q) so that (2.376)
integrates to
AT0 2π −iβ(ω0 )L − 4Q t2 π
ŝ (t) = e e
2π 1 + iκ Q
A
= √ e−iβ(ω0 )L
1 + iκ
T0 t2 (1 + iκ)
1 exp − .
T02 + iβ (ω0 ) L (1 + iκ) 2 T02 + iβ (ω 0 ) L (1 + iκ)
(2.378)
182 2 Fourier Series and Integrals with Applications to Signal Analysis
Expression for the pulse width and chirp is obtained by separating the argument
of the last exponential into real and imaginary parts as follows:
t2 (1 + iκ)
exp − 2
2 T0 + iβ (ω 0 ) L (1 + iκ)
T02 t2
= exp − 5 2 2 6 exp −iψ, (2.379)
2 T02 − β (ω0 ) Lκ + β (ω 0 ) L
where
κt2 T02 − β (ω 0 ) L(1 + κ)
ψ = 5 2 2 6 . (2.380)
2 T02 − β (ω 0 ) Lκ + β (ω 0 ) L
!
Defining the magnitude of (2.379) as exp −t2 / 2TL2 we get for the pulse
length TL
* +2 * +2
β (ω 0 ) Lκ β (ω 0 ) L
TL = T0 1− + . (2.381)
T02 T02
When the input pulse is unchirped κ = 0, and we get
* +2
β (ω0 ) L
TL = T02 + . (2.382)
T0
We see from (2.381) that when κ = 0, TL may be smaller or larger than the right
side of (2.382) depending on the sign of κ. and the magnitude of L. Note, how-
ever, that for sufficiently large L, (2.381) is always larger than (2.382) regardless
of the sign of κ. The quantity
LD = T02 /β (ω 0 ) (2.383)
a(t) <
< cos(ω 0 t + φ(t)), (2.385)
2.6 Frequency Dispersion 183
To get the response that results after this random waveform has propagated over
a transmission medium with transfer function exp −iβ(ω)L we have to replace
P (ω − ω0 ) in (2.336) by the right side of (2.387). Thus we obtain
( ∞ 2 )
ω 0 +Ω
1 −iβ(ω)L iωt dω
y(t) =
e 2 (1/2) P (ω−ξ−ω0 ) X(ξ)dξ e e
ω 0 −Ω 2π −∞ 2π
( Ω ∞ 2 )
1 dη
=
e eiω0 t P (η − ξ) X(ξ)dξ e−iβ(η+ω0 )L eiηt
−Ω 2π −∞ 2π
5
6
=
e e iω 0 t
s3(t − β (ω0 )L ,
where
Ω ∞ 2
1 −i β(η+ω 0 )−β (ω 0 )η L iηt dη
s3(t) = P (η − ξ) X(ξ)dξ e e (2.388)
−Ω 2π −∞ 2π
Assuming a Gaussian pulse with the FT as in (2.374) the inner integral in (2.391)
can be expressed in the following form:
2
1 Ω −iβ (ω0 )η 2 L/2 iηt
A2 T02 π
P (η − ξ)e e dη = 2 √ f (ξ) , (2.392)
(2π) −Ω
3 (2π) 1 + κ2 |Q|
where Q = T02 / [2 (1 + iκ)] + iLβ (ω0 ) /2,
ξ 2T 2
f (ξ) = e2
e{Qb } e−
2 0 1 1
[ 1+iκ + 1−iκ ]
2 (2.393)
and
ξT02 it
b= + . (2.394)
2Q(1 + iκ) 2Q
To complete the calculation of the average pulse envelope we need the functional
form of the power spectral density of the phase fluctuations. The form depends
on the physical process responsible for these fluctuations. For example for high
quality solid state laser sources the spectral line width is Lorenzian, i.e., of the
form
2/W
F (ω − ω 0 ) = !2 . (2.395)
1 + ω−ω
W
0
Unfortunately for this functional form the integration in (2.391) has to be carried
out numerically. On the other hand, an analytical expression is obtainable if we
assume the Gaussian form
1
F (ω − ω0 ) = √ exp − (ω − ω 0 ) /2W 2 . (2.396)
2πW 2
After some algebra we get
!2 !2
2 A2 T02 π T02 −β (ω 0 ) Lκ + β (ω 0 ) L
|EN V | = 2√
! !
(2π) 1 + κ2 |Q| T02 −β (ω 0 ) Lκ 2 +(1+2W 2 T02 ) β (ω 0 ) L 2
t2 T02
exp − !2 !2 . (2.397)
T02 − β (ω 0 ) Lκ + (1 + 2W 2 T02 ) β (ω 0 ) L
Note that the preceding is the squared envelope so that to get the effective
pulse length of the envelope itself an additional factor of 2 needs to be inserted
(see (2.381)). We then get
* +2 * +2
β (ω 0 ) Lκ 2 T 2)
β (ω 0 ) L
TL = T0 1− + (1 + 2W 0 . (2.398)
T02 T02
It should be noted that this expression is not valid when β (ω0 ) = 0 as then
the cubic phase term dominates. In that case the pulse is no longer Gaussian.
The pulse width can then be defined as an r.m.s. duration. The result reads
* +2 * +2
β (ω 0 ) Lκ 2T 2 )
β (ω0 ) L
TL = T0 1− + (1 + 2W 0 + C, (2.399)
T02 T02
2.7 Fourier Cosine and Sine Transforms 185
where * +2
2 2 β (ω 0 ) L
C = (1/4) (1 + κ + 2W T02 ) . (2.400)
T03
where fˆc (ω) if the expansion (coefficient) function. In accordance with (1.100)
the normal equation reads
T Ω T
cos(ωt)f (t) dt = fˆc (ω ) dω cos(ωt) cos(ω t)dt. (2.402)
0 0 0
For arbitrary T this integral equation does not admit of simple analytical solu-
tions. An exceptional case obtains when T is allowed to approach infinity for
then the two Fourier Integral kernels approach delta functions. Because Ω > 0
only the first of these contributes. Assuming that fˆc (ω ) is a smooth function
and we obtain in the limit
∞
Fc (ω) = cos(ωt)f (t) dt, (2.404)
0
At the same time lim εΩ min = 0 so that (2.407) gives the identity
Ω→∞
∞ ∞
2 2
|f (t)| dt = |Fc (ω)|2 dω. (2.410)
0 π 0
Problems
1. Using (2.37) compute the limit as M → ∞, thereby verifying (2.39).
2. Prove (2.48).
188 2 Fourier Series and Integrals with Applications to Signal Analysis
f(t)
• • • • • •
t2
t
−2 0 2 4 6
w(t)
1 1
t
−3 −2 2 3
,∞ sin4 x
16. With the aid of Parseval’s theorem evaluate −∞ x4 dx.
17. With F (ω) = R(ω) + iX (ω) the FT of a causal signal find X (ω) when a)
2
R(ω) = 1/(1 + ω2 ) b) R(ω) = sinω22ω .
!
18. Given the real signal 1/ 1 + t2 construct the corresponding analytic sig-
nal and its FT.
19. Derive (2.217).
20. For the signal z(t) = cos 5t
1+t2 compute and plot the spectra of the inphase
and quadrature components x (t) and y (t) for ω 0 = 5, 10, 20. Interpret
your results in view of the constraint (2.226).
1
21. The amplitude of a minimum phase FT is given by |F (ω)| = 1+ω 2n , n > 1.
Compute the phase.
http://www.springer.com/978-1-4614-3286-9