
Chapter 2

Fourier Series and Integrals with Applications to Signal Analysis

Perhaps the most important orthogonal functions in engineering applications are trigonometric functions. These were briefly discussed in 1.5.2 as one example of LMS approximation by finite orthogonal function sets. In this chapter we reexamine the LMS approximation problem in terms of infinite trigonometric function sets. When the approximating sum converges to the given function we obtain a Fourier Series; in the case of a continuous summation index (i.e., an integral as in (1.92)) the converging approximating integral is referred to as a Fourier Integral.

2.1 Fourier Series


2.1.1 Pointwise Convergence at Interior Points for Smooth
Functions
We return to 1.5.2 and the LMS approximation of f(t) within the interval −T/2 < t < T/2 by a sum of 2N + 1 complex exponentials as given by (1.211).
The approximating sum reads


f_N(t) = Σ_{n=−N}^{N} f̂_n e^{i2πnt/T},    (2.1)

while for the expansion coefficients we find from (1.214)


f̂_n = (1/T) ∫_{−T/2}^{T/2} f(t) e^{−i2πnt/T} dt.    (2.2)


Upon substituting (2.2) in (2.1) and interchanging summation and integration we obtain

f_N(t) = ∫_{−T/2}^{T/2} f(t′) K_N(t − t′) dt′,    (2.3)

where

K_N(t − t′) = (1/T) Σ_{n=−N}^{N} e^{i2πn(t−t′)/T} = Σ_{n=−N}^{N} [(1/√T) e^{i2πnt/T}] [(1/√T) e^{i2πnt′/T}]*,    (2.4)
and as shown in the following, approaches a delta function at points of con-
tinuity as N approaches infinity. The last form highlights the fact that this
kernel can be represented as a sum of symmetric products of expansion func-
tions in conformance with the general result in (1.301) and (1.302). Using the
geometrical series sum formula we readily obtain

K_N(t − t′) = sin[2π(N + 1/2)(t − t′)/T] / (T sin[π(t − t′)/T]),    (2.5)

which is known as the Fourier series kernel. As is evident from (2.4) this kernel is periodic with period T and is comprised of an infinite series of regularly spaced peaks, each similar to the aperiodic sinc function kernel encountered in (1.254). A plot of T K_N(τ) for N = 5 as a function of (t − t′)/T ≡ τ/T is shown in Fig. 2.1. The peak value attained by T K_N(τ) at τ/T = 0, ±1, ±2,

Figure 2.1: Fourier series kernel (N = 5)



is in general 2N + 1, as may be verified directly with the aid of (2.4) or (2.5).


As the peaks of these principal lobes grow in proportion with N their widths
diminish with increasing N. In fact we readily find directly from (2.5) that
the peak-to-first null lobe width is Δτ = T /(2N + 1). We note that Δτ KN
(±kT ) = 1, so that the areas under the principal lobes should be on the order
of unity for sufficiently large N. This suggests that the infinite series of peaks
in Fig. 2.1 should tend to an infinite series of delta functions as the number
N increases without bound. This is in fact the case. To prove this we must
show that for any piecewise differentiable function defined in any of the intervals
(k − 1/2)T < t < (k + 1/2) T, k = 0, ±1, . . . the limit
lim_{N→∞} ∫_{(k−1/2)T}^{(k+1/2)T} f(t′) K_N(t − t′) dt′ = (1/2)[f(t^+) + f(t^−)]    (2.6)
holds. Of course, because of the periodicity of the Fourier Series kernel it suffices
if we prove (2.6) for k = 0 only. The proof employs steps very similar to those
following (1.263) except that the constraint on the behavior of f (t) at infinity,
(1.266), presently becomes superfluous since the integration interval is finite.
Consequently the simpler form of the RLL given by (1.262) applies. As in
(1.267) we designate the limit by I (t) and write
I(t) = lim_{N→∞} ∫_{−T/2}^{T/2} g(t, t′) sin[2π(N + 1/2)(t − t′)/T] dt′,    (2.7)

where by analogy with (1.264) we have defined the function

g(t, t′) = f(t′) / (T sin[π(t − t′)/T]).    (2.8)
In (2.7) we may identify the large parameter 2π(N + 1/2)/T with ω in (1.262) and apply the RLL provided we again exclude the point t′ = t where g(t, t′) becomes infinite. We then proceed as in (1.265) to obtain
I(t) = lim_{N→∞} ∫_{−T/2}^{t−ε/2} g(t, t′) sin[2π(N + 1/2)(t − t′)/T] dt′
     + lim_{N→∞} ∫_{t−ε/2}^{t+ε/2} g(t, t′) sin[2π(N + 1/2)(t − t′)/T] dt′
     + lim_{N→∞} ∫_{t+ε/2}^{T/2} g(t, t′) sin[2π(N + 1/2)(t − t′)/T] dt′,    (2.9)

where ε is an arbitrarily small positive number. Let us first assume that f(t′) is smooth, i.e., piecewise differentiable and continuous. In that case g(t, t′) has the same properties provided t′ ≠ t. This is true for the integrands of the first and third integrals in (2.9). Hence by the RLL these vanish so that I(t) is determined solely by the middle integral
I(t) = lim_{N→∞} ∫_{t−ε/2}^{t+ε/2} f(t′) sin[2π(N + 1/2)(t − t′)/T] / (T sin[π(t − t′)/T]) dt′.    (2.10)

Since ε is arbitrarily small, f(t′) can be approximated as closely as desired by f(t) and therefore factored out of the integrand. Also for small ε the sin[π(t − t′)/T] in the denominator can be replaced by its argument. With these changes (2.10) becomes

I(t) = f(t) lim_{N→∞} ∫_{t−ε/2}^{t+ε/2} sin[2π(N + 1/2)(t − t′)/T] / (π(t − t′)) dt′.    (2.11)
The final evaluation becomes more transparent when the integration variable is changed from t′ to x = 2π(N + 1/2)(t − t′)/T, which transforms (2.11) into

I(t) = f(t) lim_{N→∞} (1/π) ∫_{−π(N+1/2)ε/T}^{π(N+1/2)ε/T} (sin x / x) dx = f(t) (1/π) ∫_{−∞}^{∞} (sin x / x) dx = f(t).    (2.12)
This establishes the delta function character of the Fourier series kernel. Equiv-
alently, we have proven that for any smooth function f (t) the Fourier series


f(t) = Σ_{n=−∞}^{∞} f̂_n e^{i2πnt/T}    (2.13)

with coefficients given by (2.2) converges in the interval −T /2 < t < T /2.
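
As a quick numerical check of this convergence one can evaluate the coefficients (2.2) by quadrature and form the partial sum (2.1) directly. The following is only an illustrative sketch (not from the text); it assumes NumPy is available and uses an arbitrary smooth T-periodic test function.

import numpy as np

T = 1.0
t = np.linspace(-T/2, T/2, 2001)             # dense grid for quadrature and evaluation
f = np.exp(np.cos(2*np.pi*t/T))              # a smooth periodic test function (an assumption)

def fourier_coeff(n):
    # f_hat_n = (1/T) * integral of f(t) exp(-i 2 pi n t / T) dt, eq. (2.2), trapezoidal rule
    return np.trapz(f*np.exp(-1j*2*np.pi*n*t/T), t)/T

def partial_sum(N):
    # f_N(t) = sum_{n=-N}^{N} f_hat_n exp(i 2 pi n t / T), eq. (2.1)
    fN = np.zeros_like(t, dtype=complex)
    for n in range(-N, N + 1):
        fN += fourier_coeff(n)*np.exp(1j*2*np.pi*n*t/T)
    return fN.real

for N in (2, 5, 10):
    print(f"N = {N:3d}, max |f - f_N| = {np.max(np.abs(f - partial_sum(N))):.2e}")
# For a smooth function the maximum error drops rapidly with N, as the argument above predicts.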

2.1.2 Convergence at Step Discontinuities


Note that in the preceding limiting argument we have excluded the endpoints of
the interval, i.e., we have shown convergence only in the open interval. In fact,
as we shall shortly see, pointwise convergence can in general not be achieved by a
Fourier series at t = ±T /2 even for a function with smooth behavior in the open
interval. It turns out that convergence at the endpoints is intimately related
to convergence at a step discontinuity, to which we now turn our attention.
Thus suppose our function possesses a finite number of step discontinuities in
the (open) interval under consideration. We can then represent it as a sum
comprising a smooth function fs (t) and a sum of step functions as in (1.280).
In order not to encumber the development with excessive notation we confine
the discussion to one typical discontinuity, say at t = t1 , and write
f(t) = f_s(t) + [f(t_1^+) − f(t_1^−)] U(t − t_1).    (2.14)
The Fourier coefficients follow from (2.2), so that

f̂_n = (1/T) ∫_{−T/2}^{T/2} f_s(t) e^{−i2πnt/T} dt + ([f(t_1^+) − f(t_1^−)]/T) ∫_{t_1}^{T/2} e^{−i2πnt/T} dt    (2.15)

and substitution in (2.1) yields the partial sum

f_N(t) = ∫_{−T/2}^{T/2} f_s(t′) sin[2π(N + 1/2)(t − t′)/T] / (T sin[π(t − t′)/T]) dt′ + [f(t_1^+) − f(t_1^−)] λ_N(t),    (2.16)
where

λ_N(t) = ∫_{t_1}^{T/2} sin[2π(N + 1/2)(t − t′)/T] / (T sin[π(t − t′)/T]) dt′.    (2.17)
The limiting form of the first integral on the right of (2.16) as N −→ ∞ has
already been considered so that
lim_{N→∞} f_N(t) = f_s(t) + [f(t_1^+) − f(t_1^−)] lim_{N→∞} λ_N(t)    (2.18)

and only the last limit introduces novel features. Confining our attention to this
term we distinguish three cases: the interval −T/2 < t < t_1, wherein t′ ≠ t so
that the RLL applies, the interval t_1 < t < T/2, and the point of discontinuity
t = t1 . In the first case λN (t) approaches zero. In the second case we divide the
integration interval into three subintervals as in (2.9). Proceeding in identical
fashion we find that λN (t) approaches unity. For t = t1 we subdivide the
integration interval into two subintervals as follows:
λ_N(t_1) = ∫_{t_1}^{t_1+ε/2} sin[2π(N + 1/2)(t_1 − t′)/T] / (T sin[π(t_1 − t′)/T]) dt′
         + ∫_{t_1+ε/2}^{T/2} sin[2π(N + 1/2)(t_1 − t′)/T] / (T sin[π(t_1 − t′)/T]) dt′,    (2.19)
where again ε is an arbitrarily small positive quantity. In the second integral t′ ≠ t_1, so that again the RLL applies and we obtain zero in the limit. Hence the limit is given by the first integral, which we compute as follows:
lim_{N→∞} λ_N(t_1) = lim_{N→∞} ∫_{t_1}^{t_1+ε/2} sin[2π(N + 1/2)(t_1 − t′)/T] / (T sin[π(t_1 − t′)/T]) dt′
 = lim_{N→∞} ∫_0^{π(2N+1)ε/(2T)} sin x / (π(2N + 1) sin[x/(2N + 1)]) dx
 = lim_{N→∞} ∫_0^{π(2N+1)ε/(2T)} (sin x)/(πx) dx = ∫_0^{∞} (sin x)/(πx) dx = 1/2.    (2.20)
Summarizing the preceding results we have

lim_{N→∞} λ_N(t) = { 0 for −T/2 < t < t_1;   1/2 for t = t_1;   1 for t_1 < t < T/2. }    (2.21)
Returning to (2.18) and taking account of the continuity of fs (t) we have the
final result
lim_{N→∞} f_N(t_1) = (1/2)[f(t_1^+) + f(t_1^−)].    (2.22)
Clearly this generalizes to any number of finite discontinuities within the ex-
pansion interval. Thus, for a piecewise differentiable function with step discon-
tinuities the Fourier series statement (2.13) should be replaced by

(1/2)[f(t^+) + f(t^−)] = Σ_{n=−∞}^{∞} f̂_n e^{i2πnt/T}.    (2.23)
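
The behavior at a step can also be seen numerically. The sketch below is illustrative only (it assumes NumPy and uses a unit step at t_1 = 0 on (−1/2, 1/2), whose line spectrum is known in closed form); it shows the partial sum settling at the arithmetic mean 1/2 at the jump, in accordance with (2.23), while the peak overshoot does not decay with N.

import numpy as np

def f_hat(n):
    # coefficients (2.2) of the unit step U(t) on (-1/2, 1/2), T = 1, jump at t1 = 0:
    # f_hat_0 = 1/2, and f_hat_n = (1 - (-1)^n)/(i 2 pi n) for n != 0 (zero for even n)
    return 0.5 if n == 0 else (1 - (-1)**n)/(1j*2*np.pi*n)

def partial_sum(t, N):
    # f_N(t) per (2.1)
    return sum(f_hat(n)*np.exp(1j*2*np.pi*n*t) for n in range(-N, N + 1)).real

for N in (10, 50, 200):
    at_jump = partial_sum(np.array([0.0]), N)[0]
    peak = partial_sum(np.linspace(0, 0.25, 2000), N).max()
    print(f"N = {N:4d}: f_N(0) = {at_jump:.4f}, peak value = {peak:.4f}")
# f_N(0) stays at 0.5 (the midpoint of the jump), while the peak hovers near 1.09,
# i.e., about 9% of the unit jump above the upper level, no matter how large N is.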

Although the limiting form (2.23) tells us what happens when the number
of terms in the series is infinite, it does not shed any light on the behavior of
the partial approximating sum for finite N. To assess the rate of convergence
we should examine (2.17) as a function of t with increasing N. For this purpose
let us introduce the function
Sis(x, N) = ∫_0^x sin[(N + 1/2)θ] / (2 sin(θ/2)) dθ    (2.24)
so that the dimensionless parameter x is a measure of the distance from
the step discontinuity (x = 0). The integrand in (2.24) is just the sum (1/2) Σ_{n=−N}^{N} exp(−inθ), which we integrate term by term to obtain the alternative form

Sis(x, N) = x/2 + Σ_{n=1}^{N} sin(nx)/n.    (2.25)
Note that for any N the preceding gives Sis(π, N) = π/2. As N → ∞ with 0 < x < π this series converges to π/2. A plot of (2.25) for N = 10 and N = 20 is shown in Fig. 2.2. For larger values of N the oscillatory behavior of Sis(x, N)

Figure 2.2: FS convergence at a step discontinuity for N = 10 and N = 20

damps out and the function approaches the asymptotes ±π/2 for x ≠ 0. Note that as N is increased the peak amplitude of the oscillations does not diminish but merely migrates toward the location of the step discontinuity, i.e., toward x = 0. The
numerical value of the overshoot is ±1.852 or about 18% above (below) the
positive (negative) asymptote. When expressed in terms of (2.25), (2.17) reads
λ_N(t) = (1/π) Sis[(T/2 − t)2π/T, N] − (1/π) Sis[(t_1 − t)2π/T, N].    (2.26)

Taking account of the limiting forms of (2.25) we note that, as long as t < T/2, in the limit as N → ∞ the contribution from the first term on the right of (2.26) approaches 1/2, while the second term tends to −1/2 for t < t_1, 1/2 for t > t_1, and 0 for t = t_1, in agreement with the limiting forms enumerated in (2.21).
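
The overshoot value quoted above is easy to reproduce. A minimal sketch (assuming NumPy; illustrative only) evaluates the partial sums in (2.25) and locates their first maximum:

import numpy as np

def Sis(x, N):
    # Sis(x, N) = x/2 + sum_{n=1}^{N} sin(n x)/n, eq. (2.25)
    n = np.arange(1, N + 1)
    return x/2 + np.sum(np.sin(np.outer(x, n))/n, axis=1)

x = np.linspace(1e-4, np.pi, 8001)
for N in (10, 20, 100, 400):
    y = Sis(x, N)
    print(f"N = {N:4d}: max Sis = {y.max():.4f} at x = {x[np.argmax(y)]:.5f}")
# The maximum stays close to 1.85 (about 18% above the asymptote pi/2 = 1.5708),
# while its location migrates toward the discontinuity at x = 0 as N grows.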
Results of sample calculations of λ_N(t) (with t_1 = 0) for N = 10, 20, and 50 are plotted in Fig. 2.3.

Figure 2.3: Convergence at a step discontinuity

Examining these three curves we again observe that increasing N does not lead to a diminution of the maximum amplitude of the

oscillations. On the contrary, except for a compression of the timescale, the


oscillations for N = 50 have essentially the same peak amplitudes as those for
N = 10 and in fact exhibit the same overshoot as in Fig. 2.2. Thus N appears to
enter into the argument in (2.26) merely as a scaling factor of the abscissa, so that
the magnitude of the peak overshoot appears to persist no matter how large N
is chosen. The reason for this behavior can be demonstrated analytically by
approximating (2.24) for large N. We do this by first changing the variable of
integration in (2.24) to y = (N + 1/2)θ to obtain
Sis(x, N) = ∫_0^{(N+1/2)x} sin y / ((2N + 1) sin[y/(2N + 1)]) dy.    (2.26*)
Before proceeding with the next algebraic step we note that as N → ∞ the
numerator in (2.24) will be a rapidly oscillating sinusoid so that its contributions
to the integral will mutually cancel except for those in the neighborhood of
small θ. In terms of the variables in (2.26*) this means that for large N the
argument y/(2N + 1) of the sine function will remain small. In that case we
may replace the sine by its argument which leads to the asymptotic form
Sis(x, N) ∼ Si[(N + 1/2)x],    (2.26**)

where Si(z) is the sine integral function defined in (1.278e) and plotted in
Fig. 1.15. If we use this asymptotic form in (2.26), we get
λ_N(t) ≈ (1/π) Si[(N + 1/2)(T/2 − t)2π/T] − (1/π) Si[(N + 1/2)(t_1 − t)2π/T],
which shows directly that N enters as a scaling factor of the abscissa. Thus
as the number of terms in the approximation becomes infinite the oscillatory
behavior in Fig. 2.3 compresses into two vanishingly small time intervals which
in the limit may be represented by a pair of infinitely thin spikes at t = 0+ and
t = 0− . Since in the limit these spikes enclose zero area we have here a direct
demonstration of convergence in the mean (i.e., the LMS error rather than the
error itself tending to zero with increasing N ). This type of convergence, charac-
terized by the appearance of an overshoot as a step discontinuity is approached,
is referred to as the Gibbs phenomenon, in honor of Willard Gibbs, one of the
America’s greatest physicists. Gibbs phenomenon results whenever an LMS ap-
proximation is employed for a function with step discontinuities and is by no
means limited to approximations by sinusoids (i.e., Fourier series). In fact the
numerical example in Fig. 1.11 demonstrates it for Legendre Polynomials.
Another aspect of the Gibbs phenomenon worth mentioning is that it af-
fords an example of nonuniform convergence. For, as we have seen, lim_{N→∞} λ_N(t_1) = 1/2. On the other hand, the limit approached when N is allowed to approach infinity first and the function is subsequently evaluated at t as it is made to approach t_1 (say, through positive values) is evidently unity. Expressed in symbols, these two alternative ways of approaching the limit are

lim_{N→∞} lim_{t→t_1^+} λ_N(t) = 1/2,    (2.26***a)

lim_{t→t_1^+} lim_{N→∞} λ_N(t) = 1.    (2.26***b)

In other words, the result of the limiting process depends on the order in which
the limits are taken, a characteristic of nonuniform convergence. We can
view (2.26***) as a detailed interpretation of the limiting processes implied
in the Fourier series at step discontinuities which the notation (2.23) does not
make explicit.

2.1.3 Convergence at Interval Endpoints


The preceding discussion applies only to convergence properties of the Fourier
series within the open interval. To complete the discussion of convergence we
must still consider convergence at the interval endpoints ±T /2. We start with
the approximate form (2.20) (c.f. Fig. 2.3) which, together with the periodicity
of λN (t) based on the exact form (2.17), gives

lim_{N→∞} λ_N(±T/2) = 1/2.

Thus in view of (2.16) we have at the endpoints

lim_{N→∞} f_N(±T/2) = lim_{N→∞} ∫_{−T/2}^{T/2} f_s(t′) sin[2π(N + 1/2)(±T/2 − t′)/T] / (T sin[π(±T/2 − t′)/T]) dt′ + (1/2)[f(t_1^+) − f(t_1^−)].    (2.27)
Since the observation points ±T /2 coincide with the integration limits, the lim-
iting procedure following (2.9) is not directly applicable. Rather than examining
the limiting form of the integral in (2.27) directly, it is more instructive to in-
fer the limit in the present case from (2.24) and the periodicity of the Fourier
series kernel. This periodicity permits us to increment the integration limits
in (2.27) by an arbitrary amount, say τ , provided we replace fs (t) by its peri-
odic extension

f_s^ext(t) = Σ_{n=−∞}^{∞} f_s(t − nT).    (2.28)
With this extension the endpoints ±T /2 now become the interior points in an
infinite sequence of expansion intervals . . . (τ − 3T /2, τ − T /2) , (τ − T /2, τ+
T /2) . . .. These intervals are all of length T and may be viewed as centered
at t = τ ± nT , as may be inferred from Fig. 2.4. We note that unless fs (T /2)
= fs (−T /2) the periodic extension of the originally smooth fs (t) will have a
step discontinuity at the new interior points of the amount fs (−T /2)−fs (T /2) .
Thus with a suitable shift of the expansion interval and the replacement of
Figure 2.4: Step discontinuity introduced by a periodic extension of fs (t)

f_s(t′) by the f_s^ext(t′) in (2.28) we can mimic the limiting process employed following (2.17) without change. Carrying this out we get an identical result at each endpoint, viz., [f_s(−T/2) + f_s(T/2)]/2. Clearly, as far as any “real” discontinuity at an interior point of the original expansion interval is concerned, say at t = t_1, its contribution to the limit is obtainable by simply adding the last term in (2.27). Hence

lim_{N→∞} f_N(±T/2) = [f(−T/2) + f(T/2)]/2.    (2.29)

Of course, as in the convergence at an interior discontinuity point, the


limit (2.29) gives us only part of the story, since it sidesteps the very important
issue of Gibbs oscillations for finite N. A representative example of what hap-
pens when the given function assumes different values at the two endpoints is
demonstrated by the Fourier expansion of e^{−t} as shown in Fig. 2.5, where the expansion interval is (0, 1) and 21 terms (N = 10) are employed. Clearly the convergence at t = 0 and t = 1 is quite poor. This should be contrasted with the plot in Fig. 2.6, which shows the expansion of e^{−|t−1/2|} over the same interval

Figure 2.5: Fourier series approximation of e^{−t} with 21 sinusoids

and the same number of expansion functions. When the discontinuity occurs in
the interior of the interval, the convergence is also marred by the Gibbs oscilla-
tions, as illustrated in Fig. 2.7 for the pulse p_{.5}(t − .5), again using 21 sinusoids. Fig. 2.8 shows a stem diagram of the magnitude of the Fourier coefficients f̂_n plotted as a function of m (m = n + 10, n = −10, −9, . . . , 10). Such Fourier coef-
ficients are frequently referred to as (discrete) spectral lines and are intimately
related to the concept of the frequency spectrum of a signal as will be discussed
in detail in connection with the Fourier integral.

2.1.4 Delta Function Representation


The convergence properties of Fourier series can be succinctly phrased in terms
of delta functions. Thus the Fourier series kernel can be formally represented
by the statement


lim_{N→∞} sin[2π(N + 1/2)(t − t′)/T] / (T sin[π(t − t′)/T]) = Σ_{k=−∞}^{∞} δ(t − t′ − kT).    (2.30)

Figure 2.6: Fourier series approximation of e^{−|t−1/2|} with 21 sinusoids

Figure 2.7: Fourier series approximation of a pulse using 21 terms



Figure 2.8: Magnitude of Fourier series coefficients for the pulse in Fig. 2.7

Alternatively, we can replace the kernel by the original geometric series and write

Σ_{n=−∞}^{∞} [(1/√T) e^{i2πnt/T}] [(1/√T) e^{i2πnt′/T}]* = (1/T) Σ_{n=−∞}^{∞} e^{i2πn(t−t′)/T} = Σ_{k=−∞}^{∞} δ(t − t′ − kT).    (2.31)

These expressions, just as the corresponding completeness statements for general


orthogonal sets discussed in 1.7.1, are to be understood as formal notational
devices invented for efficient analytical manipulations; their exact meaning is
to be understood in terms of the limiting processes discussed in the preceding
subsection.

2.1.5 The Fejer Summation Technique


The poor convergence properties exhibited by Fourier series at step disconti-
nuities due to the Gibbs phenomenon can be ameliorated if one is willing to
modify the expansion coefficients (spectral lines) by suitable weighting factors.
The technique, generally referred to as “windowing,” involves the multiplication
of the Fourier series coefficients by a suitable (spectral) “window” and summa-
tion of the new trigonometric sum having modified coefficients. In general,
the new series will not necessarily converge to the original function over the
entire interval. The potential practical utility of such a scheme rests on the fact that the approximating sum may represent certain features of the given function that

are of particular interest better than the original series. This broad subject is
treated in detail in books specializing in spectral estimation. Here we merely
illustrate the technique with the so-called Fejer summation approach, wherein
the modified trigonometric sum actually does converge to the original function.
In fact this representation converges uniformly to the given function and thus
completely eliminates the Gibbs phenomenon.
The Fejer [16] summation approach is based on the following result from the theory of limits. Given a sequence f_N such that lim_{N→∞} f_N = f exists, the arithmetic average

σ_M = (1/(M + 1)) Σ_{N=0}^{M} f_N    (2.32)

approaches the same limit as M → ∞, i.e.,

lim_{M→∞} σ_M = f.    (2.33)

In the present case we take f_N = f_N(t), i.e., the partial Fourier series sum. Thus if this partial sum approaches f(t) as N → ∞, the preceding theorem states that σ_M = σ_M(t) will also converge to f(t). Since f_N(t) is just a finite sum of sinusoids we should be able to find a closed-form expression for σ_M(t) by a geometrical series summation. Thus

σ_M(t) = (1/(M + 1)) { f̂_0 + [f̂_0 + f̂_1 e^{i2πt/T} + f̂_{−1} e^{−i2πt/T}] + [f̂_0 + f̂_1 e^{i2πt/T} + f̂_{−1} e^{−i2πt/T} + f̂_2 e^{i2(2πt/T)} + f̂_{−2} e^{−i2(2πt/T)}] + . . . }.

This can be rewritten as follows:

σ_M(t) = (1/(M + 1)) { (M + 1) f̂_0 + M [f̂_1 e^{i2πt/T} + f̂_{−1} e^{−i2πt/T}] + (M − 1) [f̂_2 e^{i2(2πt/T)} + f̂_{−2} e^{−i2(2πt/T)}] + . . . }
       = (1/(M + 1)) { (M + 1) f̂_0 + Σ_{k=1}^{M} f̂_k (M − k + 1) e^{ik(2πt/T)} + Σ_{k=1}^{M} f̂_{−k} (M − k + 1) e^{−ik(2πt/T)} }.

After changing the summation index from k to −k in the last sum we get

σ_M(t) = Σ_{k=−M}^{M} f̂_k (1 − |k|/(M + 1)) e^{ik(2πt/T)},    (2.34)

which we now identify as the expansion of the function σ_M(t) in terms of 2M + 1 trigonometric (exponential) functions. We note that the expansion coefficients are obtained by multiplying the Fourier series coefficients f̂_k by the triangular spectral window

ŵ_k(M) = 1 − |k|/(M + 1),   k = 0, ±1, ±2, . . . , ±M.    (2.35)
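
The effect of the triangular window (2.35) is easy to demonstrate numerically. The sketch below is illustrative only (it assumes NumPy and reuses the unit-step line spectrum from the earlier sketch); it compares the raw partial sum (2.1) with the Fejer sum (2.34).

import numpy as np

def f_hat(n):
    # line spectrum of the unit step U(t) on (-1/2, 1/2)
    return 0.5 if n == 0 else (1 - (-1)**n)/(1j*2*np.pi*n)

def fourier_sum(t, M):
    # ordinary partial sum, eq. (2.1) with N = M
    return sum(f_hat(k)*np.exp(1j*2*np.pi*k*t) for k in range(-M, M + 1)).real

def fejer_sum(t, M):
    # eq. (2.34): the same lines weighted by the triangular window (2.35)
    return sum((1 - abs(k)/(M + 1))*f_hat(k)*np.exp(1j*2*np.pi*k*t)
               for k in range(-M, M + 1)).real

t = np.linspace(-0.5, 0.5, 4001)
M = 25
print("max of Fourier sum:", fourier_sum(t, M).max())   # overshoots 1 (Gibbs oscillations)
print("max of Fejer sum  :", fejer_sum(t, M).max())     # stays below 1: no overshoot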
We can view (2.34) from another perspective if we substitute the integral representation (2.3) of the partial sum f_N(t) into (2.32) and carry out the summation on the Fourier series kernel (2.5). Thus after setting ξ = 2π(t − t′)/T we get the following alternative form:

σ_M(t) = (1/(M + 1)) ∫_{−T/2}^{T/2} f(t′) Σ_{N=0}^{M} sin[(N + 1/2)ξ] / (T sin[ξ/2]) dt′
       = (1/(M + 1)) ∫_{−T/2}^{T/2} (f(t′) dt′ / (T sin(ξ/2))) Σ_{N=0}^{M} [e^{i(N+1/2)ξ}/(2i) − e^{−i(N+1/2)ξ}/(2i)].    (2.36)

Using the formula

Σ_{N=0}^{M} e^{iNξ} = e^{iξM/2} sin[(M + 1)ξ/2] / sin[ξ/2]

to sum the two geometric series transforms (2.36) into

σ_M(t) = ∫_{−T/2}^{T/2} f(t′) sin²[(M + 1)π(t − t′)/T] / (T(M + 1) sin²[π(t − t′)/T]) dt′.    (2.37)

This representation of σ M (t) is very much in the spirit of (2.3). Indeed in view
of (2.33) σ M (t) must converge to the same limit as the associated Fourier series.
The new kernel function
K_M(t − t′) = sin²[(M + 1)π(t − t′)/T] / (T(M + 1) sin²[π(t − t′)/T])    (2.38)

is called the Fejer kernel and (2.34) the Fejer sum. Just like the Fourier series
kernel the Fejer kernel is periodic with period T so that in virtue of (2.33) we
may write
lim_{M→∞} sin²[(M + 1)π(t − t′)/T] / (T(M + 1) sin²[π(t − t′)/T]) = Σ_{k=−∞}^{∞} δ(t − t′ − kT).    (2.39)

Alternatively with the aid of limiting arguments similar to those employed


in (2.11) and (2.12) one can easily verify (2.39) directly by evaluating the limit
in (2.37) as M → ∞.
Figure 2.9 shows the approximation achieved with the Fejer sum (2.34) (or
its equivalent (2.37)) for f (t) = U (t − 0.5) with 51 sinusoids (M = 25). Also
shown for comparison is the partial Fourier series sum for the same value of M .

Note that in the Fejer sum the Gibbs oscillations are absent but that the ap-
proximation underestimates the magnitude of the jump at the discontinuity.
In effect, to achieve a good fit to the “corners” at a jump discontinuity the
penalty one pays with the Fejer sum is that more terms are needed than with
a Fourier sum to approximate the smooth portions of the function. To get
some idea of the rate of convergence to the “corners” plots of Fejer sums for
M = 10, 25, 50, and 100 are shown in Fig. 2.10, where (for t > 0.5) σ 10 (t) <
σ 25 (t) < σ 50 (t) < σ 100 (t) .

Figure 2.9: Comparison of Fejer and Fourier convergence

In passing we remark that the Fejer sum (2.34) is not a partial Fourier series sum because the expansion coefficients themselves, σ̂_k = ŵ_k(M) f̂_k, are functions of M. Trigonometric sums of this type are not unique. In fact, by forming the arithmetic mean of the Fejer sum itself,

σ_M^{(1)}(t) = (1/(M + 1)) Σ_{N=0}^{M} σ_N(t),    (2.40)

we can again avail ourselves of the limit theorem in (2.32) and (2.33) and conclude that the partial sum σ_M^{(1)}(t) must approach f(t) in the limit of large M, i.e.,

lim_{M→∞} σ_M^{(1)}(t) = f(t).    (2.41)

For any finite M we may regard σ_M^{(1)}(t) as the second-order Fejer approximation. Upon replacing M by N in (2.34) and substituting for σ_N(t) we can easily carry

Figure 2.10: Convergence of the Fejer approximation

out one of the sums and write the final result in the form

σ_M^{(1)}(t) = Σ_{k=−M}^{M} f̂_k ŵ_k^{(1)}(M) e^{ik(2πt/T)},    (2.42)

where

ŵ_k^{(1)}(M) = (1/(M + 1)) Σ_{n=1}^{M−|k|+1} n/(|k| + n),   k = 0, ±1, ±2, . . . , ±M    (2.43)
is the new spectral window. We see that we no longer have the simple linear
taper that obtains for the first-order Fejer approximation. Unfortunately this
sum does not appear to lend itself to further simplification. A plot of (2.43) in
the form of a stem diagram is shown in Fig. 2.11 for M = 12. Figure 2.12 shows
plots of the first- and second-order Fejer approximations for a rectangular pulse
using M = 25. We see that the second-order approximation achieves a greater
degree of smoothing but underestimates the pulse amplitude significantly more
than does the first-order approximation. Apparently to reduce the amplitude
error to the same level as achieved with the first-order approximation much
larger spectral width (values of M ) are required. This is consistent with the
concave nature of the spectral taper in Fig. 2.11 which, for the same bandwidth,
will tend to remove more energy from the original signal spectrum than a lin-
ear taper.
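
Although (2.43) has no simple closed form, it is a short finite sum that is easy to tabulate. The following minimal sketch (assuming NumPy; illustrative only) evaluates it for M = 12, as in Fig. 2.11, and compares it with the first-order (triangular) window (2.35):

import numpy as np

def w1(k, M):
    # second-order Fejer window, eq. (2.43): (1/(M+1)) * sum_{n=1}^{M-|k|+1} n/(|k| + n)
    n = np.arange(1, M - abs(k) + 2)
    return np.sum(n/(abs(k) + n))/(M + 1)

M = 12
for k in range(M + 1):
    print(f"k = {k:2d}: second-order = {w1(k, M):.3f}   triangular = {1 - k/(M + 1):.3f}")
# For every k > 0 the second-order taper lies below the triangular one, which is why,
# for the same bandwidth, it removes more of the signal spectrum (cf. Fig. 2.11).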
Clearly higher order Fejer approximations can be generated recursively with
the formula
σ_M^{(m)}(t) = (1/(M + 1)) Σ_{k=0}^{M} σ_k^{(m−1)}(t),    (2.44a)

Figure 2.11: Second-order Fejer spectral window

Figure 2.12: First- and second-order Fejer approximations

wherein σ_k^{(0)}(t) ≡ σ_k(t). It should be noted that Fejer approximations of all orders obey the limiting property

lim_{M→∞} σ_M^{(m−1)}(t) = (1/2)[f(t^+) + f(t^−)];   m = 1, 2, 3, . . . ,    (2.44b)
i.e., at step discontinuities the partial sums converge to the arithmetic average
of the given function, just like ordinary Fourier series. The advantage of higher

order Fejer approximations is that they provide for a greater degree of smoothing
in the neighborhood of step discontinuities. This is achieved at the expense of
more expansion terms (equivalently, requiring wider bandwidths) to reach a
given level of approximation accuracy.

2.1.6 Fundamental Relationships Between the Frequency and Time Domain Representations

Parseval Formula
Once all the Fourier coefficients of a given function are known they may be
used, if desired, to reconstruct the original function. In fact, the specification
of the coefficients and the time interval within which the function is defined is,
in principle, equivalent to the specification of the function itself. Even though
the fˆn are components of the infinite-dimensional vector

f = [. . . f̂_n . . .]^T,    (2.45)

we can still interpret them as the projections of the signal f (t) along the
basis functions ei2πnt/T and think of them geometrically as in Fig. 1.3. Because
each fˆn is uniquely associated with a radian frequency of oscillation ω n , with
ωn /2π = n/T Hz, f is said to constitute the frequency domain representation
of the signal, and the elements of f the signal (line) spectrum. A very important
relationship between the frequency domain and the time domain representations
of the signal is the Parseval formula

(1/T) ∫_{−T/2}^{T/2} |f(t)|² dt = Σ_{n=−∞}^{∞} |f̂_n|².    (2.46)

This follows as a special case of (1.305) and is a direct consequence


√ of the LMS

error in the approximation tending to zero. With f ≡ T f we can rewrite
(2.46) using the notation
2
(f, f ) = f   , (2.47)
which states that the norm in the frequency domain is identical to that in the
time domain. Since physically the time average on the left of (2.46) may gener-
ally be interpreted as the average signal power (or some quantity proportional
to it), Parseval formula in effect states that the average power in the time and
frequency domains is preserved.
Given the two functions, f (t) and g (t) within the interval −T /2, T /2 with
Fourier coefficients fˆn and ĝn , it is not hard to show (problem 2-2) that (2.46)
generalizes to
 
(1/T) ∫_{−T/2}^{T/2} f(t) g*(t) dt = Σ_{n=−∞}^{∞} f̂_n ĝ_n*.    (2.48)
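
A numerical check of (2.46) is straightforward. The following is only an illustrative sketch (assuming NumPy and an arbitrary smooth test signal): it compares the time average of |f|² with the sum of |f̂_n|² over a truncated line spectrum.

import numpy as np

T = 1.0
t = np.linspace(-T/2, T/2, 4001)
f = np.exp(np.cos(2*np.pi*t/T))             # smooth test signal: only a few lines matter

def f_hat(n):
    # eq. (2.2) via trapezoidal quadrature
    return np.trapz(f*np.exp(-1j*2*np.pi*n*t/T), t)/T

N = 20
lines = np.array([f_hat(n) for n in range(-N, N + 1)])
power_time = np.trapz(np.abs(f)**2, t)/T     # left side of (2.46)
power_freq = np.sum(np.abs(lines)**2)        # right side, truncated at |n| = N
print(power_time, power_freq)                # the two agree to quadrature/truncation accuracy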

Time and Frequency Domain Convolution


An important role in linear system analysis is played by the convolution integral.
From the standpoint of Fourier series this integral is of the form
h(t) = (1/T) ∫_{−T/2}^{T/2} f(τ) g(t − τ) dτ.    (2.49)

We now suppose that the Fourier series coefficients fˆn and ĝn of f (t) and g (t),
defined within −T /2, T /2, are known. What will be the Fourier coefficients ĥm
of h (t) when expanded in the same interval? The answer is readily obtained
when we represent f (τ ) by its Fourier series (2.13) and similarly g (t − τ ) . Thus

h(t) = (1/T) ∫_{−T/2}^{T/2} [Σ_{n=−∞}^{∞} f̂_n e^{i2πnτ/T}] [Σ_{m=−∞}^{∞} ĝ_m e^{i2πm(t−τ)/T}] dτ
     = (1/T) Σ_{m=−∞}^{∞} ĝ_m e^{i2πmt/T} Σ_{n=−∞}^{∞} f̂_n ∫_{−T/2}^{T/2} e^{i2π(n−m)τ/T} dτ
     = (1/T) Σ_{m=−∞}^{∞} ĝ_m e^{i2πmt/T} Σ_{n=−∞}^{∞} f̂_n T δ_nm
     = Σ_{m=−∞}^{∞} ĝ_m f̂_m e^{i2πmt/T} = Σ_{m=−∞}^{∞} ĥ_m e^{i2πmt/T},    (2.50)

from which we identify ĥ_m = ĝ_m f̂_m. A dual situation frequently arises when we need the Fourier coefficients of the product of two functions, e.g., q(t) ≡ f(t) g(t). Here we can proceed similarly:

q(t) ≡ f(t) g(t) = [Σ_{n=−∞}^{∞} f̂_n e^{i2πnt/T}] [Σ_{m=−∞}^{∞} ĝ_m e^{i2πmt/T}]
     = Σ_{n=−∞}^{∞} Σ_{m=−∞}^{∞} f̂_n ĝ_m e^{i2π(n+m)t/T}
     = Σ_{k=−∞}^{∞} [Σ_{n=−∞}^{∞} f̂_n ĝ_{k−n}] e^{i2πkt/T}
     = Σ_{n=−∞}^{∞} [Σ_{m=−∞}^{∞} f̂_m ĝ_{n−m}] e^{i2πnt/T} = Σ_{n=−∞}^{∞} q̂_n e^{i2πnt/T},    (2.51)

where in the last step we identify the Fourier coefficient of q(t) as q̂_n = Σ_{m=−∞}^{∞} f̂_m ĝ_{n−m}, which is a convolution sum formed with the Fourier coefficients of the two functions.
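
Both identifications are easy to verify numerically. The sketch below (illustrative only; it assumes NumPy and two arbitrary band-limited test signals) compares the directly computed coefficients of the product q(t) = f(t) g(t) with the convolution of the separate line spectra.

import numpy as np

T = 1.0
t = np.linspace(-T/2, T/2, 8001)
f = np.cos(2*np.pi*t/T) + 0.5*np.sin(4*np.pi*t/T)     # lines at n = +-1, +-2
g = 1.0 + np.cos(6*np.pi*t/T)                          # lines at n = 0, +-3

def coeffs(x, N):
    # f_hat_n per (2.2) for n = -N..N, trapezoidal quadrature
    return np.array([np.trapz(x*np.exp(-1j*2*np.pi*n*t/T), t)/T for n in range(-N, N + 1)])

N = 8
fh, gh = coeffs(f, N), coeffs(g, N)
qh_direct = coeffs(f*g, N)
# q_hat_n = sum_m f_hat_m g_hat_{n-m}: np.convolve of the length-(2N+1) sequences
# returns lags -2N..2N; the middle 2N+1 entries are the lags -N..N we want.
qh_conv = np.convolve(fh, gh)[N:3*N + 1]
print(np.max(np.abs(qh_direct - qh_conv)))   # small (quadrature error only)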

Symmetries
Frequently (but not always) the signal in the time domain will be real. In that case the formula for the coefficients gives

f̂_{−n} = f̂_n*,    (2.52)

which means that the magnitude of the line spectrum is symmetrically disposed
with respect to the index n = 0. Simplifications also arise when the signal is
either an even or an odd function with respect to t = 0. In case of an even
function f (t) = f (−t) we obtain
f̂_n = (2/T) ∫_0^{T/2} f(t) cos(2πnt/T) dt    (2.53)

and since f̂_{−n} = f̂_n the Fourier series reads

f(t) = f̂_0 + 2 Σ_{n=1}^{∞} f̂_n cos(2πnt/T).    (2.54)

In case of an odd function f(t) = −f(−t) the coefficients simplify to

f̂_n = (−i2/T) ∫_0^{T/2} f(t) sin(2πnt/T) dt    (2.55)

and since f̂_{−n} = −f̂_n we have for the Fourier series

f(t) = i2 Σ_{n=1}^{∞} f̂_n sin(2πnt/T).    (2.56)

It is worth noting that (2.53-2.54) hold for complex functions in general, inde-
pendent of (2.52).

2.1.7 Cosine and Sine Series


In our discussion of convergence of Fourier series we noted that whenever a func-
tion assumes unequal values at the interval endpoints its Fourier series converges
at either endpoint to the arithmetic mean of the two endpoint values. An illus-
tration of how the approximation manifests itself when finite partial sums are
involved may be seen from the plot in Fig. 2.5 for an exponential function. It
turns out that these pathological convergence properties can actually be elimi-
nated by a judicious choice of the expansion interval. The approach rests on the
following considerations. Suppose function f (t) to be expanded is defined in the
interval 0, T while the nature of its periodic extension is outside the domain of
the problem of interest and, consequently, at our disposal. In that case we may
artificially extend the expansion interval to −T, T and define a function over
this new interval as f (|t|), as shown in Fig. 2.13. This function is continuous

Figure 2.13: Extension of the function for the cosine series

at t = 0 and moreover assumes identical values at −T and T. Hence its periodic


extension is also continuous at these endpoints which means that its Fourier
series will converge uniformly throughout the closed interval −T, T to f (|t|)
and, in particular, to the prescribed function f (t) throughout the desired range
0 ≤ t ≤ T. Of course, since f (|t|) is even with respect to t = 0, this Fourier
series contains only cosine terms. However, because the expansion interval is
2T rather than T, the arguments of the expansion functions are πnt/T rather
than 2πnt/T. Hence
f̂_n = (1/(2T)) ∫_{−T}^{T} f(|t|) cos(πnt/T) dt = (1/T) ∫_0^T f(t) cos(πnt/T) dt.    (2.57)
The Fourier cosine series reads

f(t) = f̂_0 + 2 Σ_{n=1}^{∞} f̂_n cos(πnt/T) = Σ_{n=0}^{∞} f̂_n^c cos(πnt/T),    (2.58)

where

f̂_n^c = (1/T) ∫_0^T f(t) dt for n = 0,  and  f̂_n^c = (2/T) ∫_0^T f(t) cos(πnt/T) dt for n > 0.    (2.59)
The approximation to e−t using a cosine series comprised of 10 terms is plotted
in Fig. 2.14. We note a significant improvement in the approximation over that
obtained with the conventional partial Fourier series sum in Fig. 2.5, where 21
terms are employed to approximate the same function.
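
The improvement is easy to reproduce numerically. A minimal sketch (assuming NumPy; illustrative only) evaluates the coefficients (2.59) by quadrature for f(t) = e^{−t} on the interval (0, 1) and forms the 10-term cosine sum:

import numpy as np

T = 1.0
t = np.linspace(0, T, 2001)
f = np.exp(-t)

def c_hat(n):
    # eq. (2.59): (1/T) * int f dt for n = 0, (2/T) * int f cos(pi n t / T) dt for n > 0
    w = 1.0 if n == 0 else 2.0
    return w*np.trapz(f*np.cos(np.pi*n*t/T), t)/T

N = 10
approx = sum(c_hat(n)*np.cos(np.pi*n*t/T) for n in range(N))
print("max error, cosine series with 10 terms:", np.max(np.abs(f - approx)))
# There is no Gibbs overshoot here: the even periodic extension of f is continuous,
# in contrast to the ordinary Fourier expansion of the same function (Fig. 2.5).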
It should be noted that the coefficients of the cosine series (2.59) are nothing
more than the solution to the normal equations for the LMS problem phrased
in terms of the cosine functions
φ_n^c(t) = cos(πnt/T),   n = 0, 1, 2, . . .    (2.60)

Figure 2.14: Cosine series approximation (N = 10)

As may be verified directly, they are orthogonal over the interval 0, T. In our compact notation this reads

(φ_n^c, φ_m^c) = (T/ε_n) δ_nm,

where we have introduced the abbreviation

ε_n = 1 for n = 0  and  ε_n = 2 for n > 0,

which is usually referred to as the Neumann symbol.
The convergence properties of the cosine series at points of continuity and at jump discontinuities within the interval are identical to those of the complete Fourier series from which, after all, the cosine series may be derived. The cosine expansion functions form a complete set in the space of piecewise differentiable functions whose derivatives must vanish at the interval endpoints. This additional restriction arises because of the vanishing of the derivative of cos(πnt/T) at t = 0 and t = T. In accordance with (1.303), the formal statement of completeness may be phrased in terms of an infinite series of products of the orthonormal expansion functions √(ε_n/T) φ_n^c(t) as follows:

δ(t − t′) = Σ_{n=0}^{∞} √(ε_n/T) cos(πnt/T) √(ε_n/T) cos(πnt′/T).    (2.61)

Sine Series
If instead of an even extension of f (t) into the interval −T, 0 as in Fig. 2.13, we
employ an odd extension, as in Fig. 2.15, and expand the function f (|t|) sign(t)
in a Fourier series within the interval −T, T , we find that the cosine terms

Figure 2.15: Function extension for sine series

vanish and the resulting Fourier series is comprised entirely of sines. Within the
original interval 0, T it converges to the prescribed function f (t) and constitutes
the so-called sine series expansion, to wit,

f(t) = Σ_{n=0}^{∞} f̂_n^s sin(πnt/T),    (2.62)

where

f̂_n^s = (2/T) ∫_0^T f(t) sin(πnt/T) dt.    (2.63)
Evidently because the sine functions vanish at the interval endpoints the sine
series will necessarily converge to zero there. Since at a discontinuity a Fourier
series always converges to the arithmetic mean of the left and right endpoint
values, we see from Fig. 2.15 that the convergence of the sine series to zero at
the endpoints does not require that the prescribed function also vanishes there.
Of course, if this is not the case, only LMS convergence is guaranteed at the
endpoints and an approximation by a finite number of terms will be vitiated
by the Gibbs effect. A representative illustration of the expected convergence
behavior in such cases can be had by referring to Fig. 2.5. For this reason the
sine series is to be used only with functions that vanish at the interval endpoints.
In such cases convergence properties very similar to those of cosine series are
achieved. A case in point is the approximation shown in Fig. 2.6.
The sine expansion functions

φ_n^s(t) = sin(πnt/T),   n = 1, 2, 3, . . .    (2.64)

possess the orthogonality properties

(φ_n^s, φ_m^s) = (T/2) δ_nm;    (2.65)

they form a complete set in the space of piecewise differentiable functions that vanish at the interval endpoints. Again the formal statement of this completeness may be summarized by the delta function representation

δ(t − t′) = Σ_{n=0}^{∞} √(2/T) sin(πnt/T) √(2/T) sin(πnt′/T).    (2.66)

2.1.8 Interpolation with Sinusoids


Interpolation Using Exponential Functions
Suppose f (t) can be represented exactly by the sum


f(t) = Σ_{n=−N}^{N} c_n e^{i2πnt/T};   0 ≤ t ≤ T.    (2.67)

If f(t) is specified at M = 2N + 1 points within the given interval, (2.67) can be viewed as a system of M linear equations for the M unknown coefficients c_n. A particularly simple formula for the coefficients results if we suppose that the function is specified on uniformly spaced points within the interval. To derive it we first change the summation index in (2.67) from n to m = N + n to obtain

f(t) = Σ_{m=0}^{2N} c_{m−N} e^{i2π(m−N)t/T}.    (2.68)

With t = ℓΔt and Δt = T/M, (2.68) becomes

f(ℓΔt) = Σ_{m=0}^{M−1} c_{m−N} e^{i2π(m−N)ℓ/M}.    (2.69)

From the geometric series Σ_{m=0}^{M−1} e^{imα} = e^{iα(M−1)/2} sin(Mα/2)/sin(α/2) we readily establish the orthogonality relationship

Σ_{ℓ=0}^{M−1} e^{i2π(m−k)ℓ/M} = M δ_mk.    (2.70)
Upon multiplying both sides of (2.69) by e^{−i2πkℓ/M}, summing on ℓ, and using (2.70) we obtain the solution for the coefficients

c_{m−N} = (1/M) Σ_{ℓ=0}^{M−1} f(ℓΔt) e^{−i2π(m−N)ℓ/M}.    (2.71)

Reverting to the index n and M = 2N + 1, the preceding is equivalent to

c_n = (1/(2N + 1)) Σ_{ℓ=0}^{2N} f(ℓΔt) e^{−i2πnℓ/(2N+1)}.    (2.72)

On the other hand we know that the solution for cn in (2.67) is also given by
the integral

c_n = (1/T) ∫_0^T f(t) e^{−i2πnt/T} dt.    (2.73)
If in (2.72) we replace 1/ (2N + 1) by its equivalent Δt/T , we can interpret (2.71)
as a Riemann sum approximation to (2.73). However we know from the fore-
going that (2.72) is in fact an exact solution of (2.69). Thus whenever f (t)
is comprised of a finite number of sinusoids the Riemann sum will represent
the integral (2.73) exactly provided 2N +1 is chosen equal to or greater than the
number of sinusoids. Evidently, if the number of sinusoids is exactly 2N +1, the
cn as computed using either (2.73) or (2.72) must be identically zero whenever
|n| > N. If f (t) is a general piecewise differentiable function, then (2.67) with
the coefficients determined by (2.72) provides an interpolation to f (t) in terms
of sinusoids. In fact by substituting (2.72) into (2.67) and again summing a
geometric series we obtain the following explicit interpolation formula:
f(t) = Σ_{ℓ=0}^{M−1} f(ℓΔt) sin[π(t/Δt − ℓ)] / (M sin[(π/M)(t/Δt − ℓ)]).    (2.74)

Unlike the LMS approximation problem underlying the classical Fourier series,
the determination of the coefficients in the interpolation problem does not re-
quire the evaluation of integrals. This in itself is of considerable computational
advantage. How do interpolation-type approximations compare with LMS ap-
proximations? Figure 2.16 shows the interpolation of e−t achieved with 11
sinusoids while Fig. 2.17 shows the approximation with the same number of si-
nusoids using the LMS approximation. We note that the fit is comparable in
the two cases except at the endpoints where, as we know, the LMS approximation necessarily converges to (1 + e^{−1})/2. As the number of terms in the
interpolation is increased the fit within the interval improves. Nevertheless,
the interpolated function continues to show considerable undamped oscillatory
behavior near the endpoints as shown by the plot in Fig. 2.18.
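
A direct implementation of the coefficient formula (2.72) together with the expansion (2.67) might look as follows. This is only an illustrative sketch (it assumes NumPy and uses f(t) = e^{−t} on (0, 1) with 11 sinusoids, as in Fig. 2.16); the equivalent closed form (2.74) could be used instead.

import numpy as np

def trig_interp(samples, t, T):
    # samples: f(l*dt), l = 0..M-1, with dt = T/M and M = 2N+1 odd
    M = len(samples)
    N = (M - 1)//2
    l = np.arange(M)
    # eq. (2.72): c_n = (1/M) sum_l f(l dt) exp(-i 2 pi n l / M)
    c = np.array([np.sum(samples*np.exp(-1j*2*np.pi*n*l/M))/M for n in range(-N, N + 1)])
    # eq. (2.67): f(t) = sum_n c_n exp(i 2 pi n t / T)
    return sum(c[n + N]*np.exp(1j*2*np.pi*n*t/T) for n in range(-N, N + 1)).real

T, M = 1.0, 11
dt = T/M
samples = np.exp(-np.arange(M)*dt)
t = np.linspace(0, T, 1001)
fi = trig_interp(samples, t, T)
# the sum reproduces the samples exactly ...
print(np.max(np.abs(trig_interp(samples, np.arange(M)*dt, T) - samples)))
# ... but oscillates between them, most strongly near the endpoints (cf. Figs. 2.16, 2.18)
print(np.max(np.abs(fi - np.exp(-t))))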

Interpolation Using Cosine Functions


Recalling the improvement in the LMS approximation achieved with the co-
sine series over the complete Fourier expansion, we might expect a similar im-
provement in case of interpolation. This turns out actually to be the case.
As will be demonstrated, the oscillatory behavior near the endpoints in Fig. 2.18
can be completely eliminated and a substantially better fit to the prescribed
function achieved throughout the entire approximating interval using an alter-
native interpolation that employs only cosine functions, i.e., an interpolation
formula based on (2.58) rather than (2.67). In this case we set the interpolation
interval to
Δt = T /(M − 1/2) (2.75)

Figure 2.16: Interpolation of e^{−t} using 11 sinusoids

Figure 2.17: LMS approximation to e^{−t} using 11 sinusoids

and with t = mΔt in (2.58) we obtain

f(mΔt) = Σ_{n=0}^{M−1} c_n^c cos[πnm/(M − 1/2)];   m = 0, 1, 2, . . . , M − 1,    (2.76)

Figure 2.18: Interpolation of e^{−t} using 101 sinusoids

where the c_n^c are the unknown coefficients. The solution for the c_n^c is made somewhat easier if one first extends the definition of f(mΔt) to negative indices as in Fig. 2.13 and rewrites (2.76) in terms of complex exponentials. Thus

f(mΔt) = Σ_{n=−(M−1)}^{M−1} c̃_n^c e^{i2πnm/(2M−1)};   m = 0, ±1, ±2, . . . , ±(M − 1),    (2.77)

where in addition to f(mΔt) = f(−mΔt) we postulated that c_n^c = c_{−n}^c and defined

c̃_n^c = c_0^c for n = 0  and  c̃_n^c = c_n^c/2 for n ≠ 0.    (2.78)
Again using the geometric series sum formula we have the orthogonality

Σ_{n=−(M−1)}^{M−1} e^{i2πn(m−k)/(2M−1)} = sin[π(m − k)] / sin[π(m − k)/(2M − 1)] ≡ (2M − 1) δ_mk,    (2.79)

with the aid of which the solution for the c̃_n^c in (2.78) follows at once:

c̃_n^c = (1/(2M − 1)) Σ_{m=−(M−1)}^{M−1} f(mΔt) e^{−i2πnm/(2M−1)}
     = (1/(2M − 1)) Σ_{m=0}^{M−1} ε_m f(mΔt) cos[2πnm/(2M − 1)]
     = (1/(M − 1/2)) Σ_{m=0}^{M−1} (ε_m/2) f(mΔt) cos[πnm/(M − 1/2)].

Taking account of (2.78) we obtain the final result

c_n^c = (2/(M − 1/2)) Σ_{m=0}^{M−1} (ε_n ε_m/4) f(mΔt) cos[πnm/(M − 1/2)];   n = 0, 1, 2, . . . , M − 1.    (2.80)
The final interpolation formula now follows through a direct substitution of (2.80) into

f(t) = Σ_{n=0}^{M−1} c_n^c cos(πnt/T).    (2.81)

After summation over n we obtain

f(t) = (1/(M − 1/2)) Σ_{m=0}^{M−1} (ε_m/2) f(mΔt) {1 + k_M(t/Δt − m) + k_M(t/Δt + m)},    (2.82)

where

k_M(t) = cos(πMt/(2M − 1)) sin[π(M − 1)t/(2(M − 1/2))] / sin[πt/(2(M − 1/2))].    (2.83)

Fig. 2.19 shows the interpolation of e−t using 11 cosine functions.


The improvement over the interpolation wherein both sines and cosines were
employed, Fig. 2.16, is definitely noticeable. A more important issue with general sinusoids is the crowding of the oscillations toward the interval endpoints, as in Fig. 2.18. With
the cosine interpolation these oscillations are completely eliminated, as may be
seen from the plot in Fig. 2.20.
By choosing different distributions of the locations and sizes of the interpo-
lation intervals the interpolation properties can be tailored to specific classes of
functions. Of course, a nonuniform distribution of interpolation intervals will
in general not lead to analytically tractable forms of expansion coefficients and
will require a numerical matrix inversion. We shall not deal with nonuniform
distribution of intervals. There is, however, a slightly different way of specify-
ing a uniform distribution of interpolation intervals from the one we have just
considered which is worth mentioning since it leads to formulas for the so-called
discrete cosine transform commonly employed in data and image compression
work. Using the seemingly innocuous modification of (2.75) to

Δt = T/(2M)    (2.84)

Figure 2.19: Interpolation of e^{−t} with 11 cosine functions

Figure 2.20: Interpolation of e^{−t} with 101 cosine functions

and forcing the first and the last step size to equal Δt/2 we replace (2.76) by

f[Δt(m + 1/2)] = Σ_{n=0}^{M−1} ĉ_n^c cos[πn(2m + 1)/(2M)];   m = 0, 1, 2, . . . , M − 1.    (2.85)

With the aid of the geometrical sum formula we can readily verify the orthogonality relationship

Σ_{m=0}^{M−1} cos[πn(2m + 1)/(2M)] cos[πk(2m + 1)/(2M)] = (M/ε_n) δ_nk,    (2.86)

with the aid of which we solve for the coefficients in (2.85):

ĉ_n^c = (ε_n/M) Σ_{m=0}^{M−1} f[Δt(m + 1/2)] cos[πn(2m + 1)/(2M)].    (2.87)

Replacing c_n^c in (2.81) by ĉ_n^c of (2.87) yields the interpolation formula

f(t) = (1/M) Σ_{m=0}^{M−1} f[Δt(m + 1/2)] {1 + k̂_M(τ^+) + k̂_M(τ^−)},    (2.88)

where

τ^+ = t/Δt − (m + 1/2),    (2.88a)
τ^− = t/Δt + (m + 1/2),    (2.88b)

and

k̂_M(t) = cos(πt/2) sin[π(M − 1)t/(2M)] / sin[πt/(2M)].    (2.89)
Equation (2.85) together with (2.87) is usually referred to as the discrete cosine
transform pair. Here we have obtained it as a by-product along our route toward
a particular interpolation formula comprised of cosine functions.
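
The pair (2.87) and (2.85) can be checked on its own, independently of any interpolation question: applying (2.87) to a set of samples and then resumming with (2.85) must return the samples exactly, by virtue of the orthogonality (2.86). A minimal sketch (assuming NumPy; illustrative only):

import numpy as np

def dct_forward(x):
    # eq. (2.87): c_hat_n = (eps_n / M) * sum_m x_m cos[pi n (2m + 1) / (2M)],
    # with eps_0 = 1 and eps_n = 2 for n > 0 (the Neumann symbol)
    M = len(x)
    m = np.arange(M)
    return np.array([(1.0 if n == 0 else 2.0)/M
                     * np.sum(x*np.cos(np.pi*n*(2*m + 1)/(2*M))) for n in range(M)])

def dct_inverse(c):
    # eq. (2.85): x_m = sum_n c_hat_n cos[pi n (2m + 1) / (2M)]
    M = len(c)
    n = np.arange(M)
    return np.array([np.sum(c*np.cos(np.pi*n*(2*m + 1)/(2*M))) for m in range(M)])

x = np.exp(-np.linspace(0, 1, 16))            # any sample set will do
print(np.max(np.abs(dct_inverse(dct_forward(x)) - x)))   # ~ 1e-15: the transform pair is exact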

2.1.9 Anharmonic Fourier Series


Suppose we approximate the signal f (t) in the LMS sense by a sum of sinusoids
with radian frequencies μ1 , μ2 , . . . μN which are not necessarily harmonically
related. Assuming the signal is specified in the interval a ≤ t ≤ b we write this
approximating sum as follows:


f(t) ∼ Σ_{n=1}^{N} f̂_n ψ_n(t),    (2.90)

wherein
ψ_n(t) = A_n sin μ_n t + B_n cos μ_n t    (2.91)
and An and Bn are suitable normalization constants. It is not hard to show
that as long as all the μn are distinct the Gram matrix Γnm = (ψ n , ψ m ) is
nonsingular so that the normal equations yield a unique set of expansion co-
efficients fˆn . Of course their computation would be significantly simplified if

it were possible to choose a set of radian frequencies μ_n such that the Gram matrix is diagonal or, equivalently, such that the ψ_n are orthogonal over the chosen interval. We know that this is always the case for harmonically related radian frequencies. It turns out that orthogonality also obtains when the radian frequencies are not harmonically related, provided they are chosen such that for a given pair of real constants α and β the ψ_n(t) satisfy the following endpoint conditions:

ψ_n(a) = α ψ_n′(a),    (2.92a)
ψ_n(b) = β ψ_n′(b).    (2.92b)

To prove orthogonality we first observe that the ψ_n(t) satisfy the differential equation of the harmonic oscillator, i.e.,

d²ψ_n/dt² + μ_n² ψ_n = 0,    (2.93)

where we may regard ψ_n as an eigenvector and μ_n² as the eigenvalue of the differential operator −d²/dt². Next we multiply (2.93) by ψ_m and integrate the result over a ≤ t ≤ b to obtain

ψ_m (dψ_n/dt) |_a^b − ∫_a^b (dψ_m/dt)(dψ_n/dt) dt + μ_n² ∫_a^b ψ_m ψ_n dt = 0,    (2.94)

where the second derivative has been eliminated by an integration by parts. An interchange of indices in (2.94) gives

ψ_n (dψ_m/dt) |_a^b − ∫_a^b (dψ_n/dt)(dψ_m/dt) dt + μ_m² ∫_a^b ψ_n ψ_m dt = 0    (2.95)

and subtraction of (2.95) from (2.94) yields

ψ_m (dψ_n/dt) |_a^b − ψ_n (dψ_m/dt) |_a^b = (μ_n² − μ_m²) ∫_a^b ψ_n ψ_m dt.    (2.96)

We now observe that substitution of the endpoint conditions (2.92) into the left side of (2.96) yields zero. This implies orthogonality, provided we assume that for n ≠ m the eigenvalues μ_m and μ_n are distinct. For then

∫_a^b ψ_n ψ_m dt = 0;   n ≠ m.    (2.97)

The fact that the eigenvalues μ_n² are distinct follows from a direct calculation. To compute the eigenvalues we first substitute (2.91) into (2.92), which yields the following set of homogeneous algebraic equations:

(sin μ_n a − αμ_n cos μ_n a) A_n + (cos μ_n a + αμ_n sin μ_n a) B_n = 0,    (2.98a)
(sin μ_n b − βμ_n cos μ_n b) A_n + (cos μ_n b + βμ_n sin μ_n b) B_n = 0.    (2.98b)

Figure 2.21: Diagram of the transcendental equation −0.2x = tan x

A nontrivial solution for A_n and B_n is only possible if the determinant of the coefficients vanishes. Computing this determinant and setting the result to zero yields the following equation for μ_n:

(β − α)μ_n cos[μ_n(b − a)] − (1 + αβμ_n²) sin[μ_n(b − a)] = 0.    (2.99)
This transcendental equation possesses an infinite set of distinct positive simple
zeros μn . For an arbitrary set of parameters these roots can only be determined
numerically. Many standard root finding algorithms are available for this pur-
pose. Generally these are iterative techniques that require a “good” first guess
of the root. In case of (2.99) an approximate location to start the iteration can
be obtained from a graphical construction. We illustrate it for α = 0 and a = 0, in which case (2.99) becomes βμ_n cos μ_n b − sin μ_n b = 0, which is equivalent to

βμ_n = tan(μ_n b).    (2.100)
Defining the nondimensional variable x = μn b in (2.100) we obtain the roots
from the intersection of the straight line y = xβ/b with the curves defined by
the various branches of y = tan x as shown in Fig. 2.21 for β/b = −0.2. The first
three roots expressed in terms of the nondimensional quantities μ1 b, μ2 b and μ3 b
may be read off the abscissa. When α = 0 and a = 0 (2.98a) requires that Bn
= 0 so that the expansion functions that correspond to the solutions of (2.100)
are
ψ n (t) = An sin μn t. (2.101)
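
The roots of (2.100) needed in (2.101) are easily obtained numerically. The sketch below is illustrative only (it assumes NumPy and β/b = −0.2 as in Fig. 2.21); for a negative slope each root is bracketed between an asymptote of tan x and the next multiple of π, and plain bisection then refines it.

import numpy as np

b, beta = 1.0, -0.2
s = beta/b                      # slope of the straight line y = (beta/b) x in Fig. 2.21

def g(x):
    return np.tan(x) - s*x      # zeros of g give the roots of (2.100) in x = mu_n * b

def root_on_branch(n, tol=1e-12):
    # for beta < 0 the n-th root lies between the asymptote (n - 1/2) pi and n pi
    lo, hi = (n - 0.5)*np.pi + 1e-9, n*np.pi
    for _ in range(200):        # plain bisection
        mid = 0.5*(lo + hi)
        if g(lo)*g(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5*(lo + hi)

mu = np.array([root_on_branch(n)/b for n in range(1, 6)])
print(mu)              # the first few mu_n (compare mu_n * b with the abscissas in Fig. 2.21)
print(np.diff(mu))     # the spacing approaches pi/b: asymptotically harmonic, cf. (2.103)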
Setting A_n = 1 we compute the normalization constant

Q_n = ∫_0^b sin²(μ_n t) dt = ∫_0^b (1 − cos 2μ_n t)/2 dt = b/2 − sin(2μ_n b)/(4μ_n) = (b/2)(1 − sin(μ_n b) cos(μ_n b)/(μ_n b)).

The expansion coefficients in (2.90) for this case are

f̂_n = (2/b)(1 − sin(μ_n b) cos(μ_n b)/(μ_n b))^{−1} ∫_0^b f(t) sin(μ_n t) dt.    (2.102)

From Fig. 2.21 we note that as n increases the abscissas of the points where the straight line intersects the tangent curves approach (2n − 1)π/2 ≈ nπ. Hence for large n the radian frequencies of the anharmonic expansion (2.90) are asymptotically harmonic, i.e.,

μ_n ∼ nπ/b,   n → ∞.    (2.103)

Taking account of (2.103) in (2.102) we also observe that for large n formula
(2.102) represents the expansion coefficient of a sine Fourier series (2.63). Thus
the anharmonic character of the expansion appears to manifest itself only for a finite number of terms. Hence we would expect the convergence properties of anharmonic expansions to be essentially the same as those of harmonic Fourier series.
An anharmonic series may be taken as a generalization of a Fourier series.
For example, it reduces to the (harmonic) sine series in (2.62) when α = β = 0
and when α = β → ∞ to the (harmonic) cosine series (2.58), provided f (a) = 0
and f (b) = 0. When the endpoint conditions (2.92) are replaced by a periodicity
condition we obtain the standard Fourier series.

2.2 The Fourier Integral


2.2.1 LMS Approximation by Sinusoids Spanning
a Continuum
Instead of approximating f (t) by a sum of 2N + 1 sinusoids with discrete fre-
quencies ωn = 2πn/T we now suppose that the frequencies ω span a continuum
between −Ω and Ω. With
f^Ω(t) = ∫_{−Ω}^{Ω} f̂(ω) e^{iωt} dω    (2.104)

we seek a function f̂(ω) such that the MS error

ε_Ω(T) ≡ ∫_{−T/2}^{T/2} |f(t) − f^Ω(t)|² dt    (2.105)

is minimized. As we know, this minimization leads to the normal equation (1.99), where we identify φ(ω, t) = e^{iωt}, a = −T/2, b = T/2, so that with the aid of (1.100) we obtain

∫_{−T/2}^{T/2} f(t) e^{−iωt} dt = ∫_{−Ω}^{Ω} f̂(ω′) · 2 sin[(ω − ω′)T/2] / (ω − ω′) dω′.    (2.106)

Thus unlike in the case of a discrete set of sinusoids the unknown “coefficients”
fˆ (ω  ) now span a continuum. In fact, according to (2.106), to find fˆ (ω  ) we
must solve an integral equation.

2.2.2 Transition to an Infinite Observation Interval:


The Fourier Transform
For any finite time interval T the solution of (2.106) for fˆ (ω  ) can be expressed
in terms of spheroidal functions [23]. Here we confine our attention to the case
of an infinite time interval, which is the conventional domain of the Fourier
integral. In that case we can employ the limiting form of the Fourier kernel in
(1.269) (with Ω replaced by T /2) so that the right side of (2.106) becomes
lim_{T→∞} ∫_{−Ω}^{Ω} f̂(ω′) · 2 sin[(ω − ω′)T/2] / (ω − ω′) dω′ = 2π f̂(ω).    (2.107)
Hence as the expansion interval in the time domain is allowed to approach infinity the solution of (2.106) reads

∫_{−∞}^{∞} f(t) e^{−iωt} dt = F(ω),    (2.108)

where we have set F(ω) = 2π f̂(ω), which shall be referred to as the Fourier Integral (or the Fourier transform) of f(t). Substituting this in (2.104) and integrating with respect to ω we get

f^Ω(t) = ∫_{−∞}^{∞} f(t′) sin[(t − t′)Ω] / (π(t − t′)) dt′.    (2.109)
The corresponding LMS error ε_Ω min is

ε_Ω min = (f − f^Ω, f − f^Ω) = (f, f) − (f, f^Ω) ≥ 0,    (2.110)

where the inner products are taken over the infinite time domain and account has been taken of the projection theorem (1.75). Substituting for f^Ω from (2.104) the preceding is equivalent to

ε_Ω min = ∫_{−∞}^{∞} |f(t)|² dt − ∫_{−∞}^{∞} f*(t) dt ∫_{−Ω}^{Ω} f̂(ω) e^{iωt} dω
       = ∫_{−∞}^{∞} |f(t)|² dt − 2π ∫_{−Ω}^{Ω} |f̂(ω)|² dω
       = ∫_{−∞}^{∞} |f(t)|² dt − (1/(2π)) ∫_{−Ω}^{Ω} |F(ω)|² dω ≥ 0,    (2.111)

which is the Bessel inequality for the Fourier transform. As Ω → ∞ the integrand in (2.109) approaches a delta function and in accordance with (1.285) we have

lim_{Ω→∞} f^Ω(t) = (1/2)[f(t^+) + f(t^−)]    (2.112)

or, equivalently, using (2.104) with F(ω) = 2π f̂(ω),

lim_{Ω→∞} (1/(2π)) ∫_{−Ω}^{Ω} F(ω) e^{iωt} dω = (1/2)[f(t^+) + f(t^−)].    (2.113)

At the same time the MS error in (2.111) approaches zero and we obtain
 ∞  ∞
2 1 2
|f (t)| dt = |F (ω)| dω, (2.114)
−∞ 2π −∞

which is Parseval theorem for the Fourier transform. Equation (2.113) is usually
written in the abbreviated form
f(t) = (1/2π) ∫_{−∞}^{∞} F(ω) e^{iωt} dω    (2.115)

and is referred to as the inverse Fourier transform or the Fourier transform


inversion formula. It will be frequently convenient to designate both (2.115)
and the direct transform (2.108) by the concise statement
F
f (t) ⇐⇒ F (ω) . (2.116)

In addition, we shall at times find it useful to express the direct and inverse
transform pair as
F {f (t)} = F (ω) , (2.117)
which is just an abbreviation of the statement “the Fourier transform of f (t) is
F (ω).” We shall adhere to the convention of designating the time domain signal
by a lowercase letter and its Fourier transform by the corresponding uppercase
letter.
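
The following Python sketch (an illustration added here, not part of the original development; the test signal and grids are arbitrary choices) checks the pair (2.108)/(2.115) numerically for f(t) = exp(−|t|), whose transform is 2/(1 + ω²) — the pair (2.141) further below with α = 1.

# Sketch (hedged): numerical check of (2.108)/(2.115) for f(t) = exp(-|t|),
# whose transform is F(w) = 2/(1 + w^2) (cf. (2.141) with alpha = 1).
import numpy as np

t = np.linspace(-40.0, 40.0, 80001)
dt = t[1] - t[0]
f = np.exp(-np.abs(t))

def F_direct(w):
    # F(w) = integral of f(t) exp(-i w t) dt, approximated by a Riemann sum
    return np.sum(f * np.exp(-1j * w * t)) * dt

w = np.linspace(-10.0, 10.0, 11)
err = max(abs(F_direct(wi) - 2.0 / (1.0 + wi**2)) for wi in w)
print(err)                                   # small: quadrature and truncation error only

# inversion formula (2.115) evaluated at t = 0 recovers f(0) = 1
wf = np.linspace(-400.0, 400.0, 400001)
print(np.sum(2.0 / (1.0 + wf**2)) * (wf[1] - wf[0]) / (2 * np.pi))   # ~ 1 (slow 1/w^2 tail)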

2.2.3 Completeness Relationship and Relation to Fourier


Series
Proceeding in a purely formal way we replace F (ω) in (2.115) by (2.108) and
interchange the order of integration and obtain
f(t) = ∫_{−∞}^{∞} f(t′) {(1/2π) ∫_{−∞}^{∞} e^{iω(t−t′)} dω} dt′.    (2.118)

The quantity in braces can now be identified as the delta function


δ(t − t′) = (1/2π) ∫_{−∞}^{∞} e^{iω(t−t′)} dω,    (2.119)

which is a slightly disguised version of (1.254). To see this we merely have to


rewrite (2.119) as the limiting form
lim_{Ω→∞} (1/2π) ∫_{−Ω}^{Ω} e^{iω(t−t′)} dω
and note that for any finite Ω the integration yields sin[Ω(t − t′)]/[π(t − t′)].
The representation (2.119) bears a formal resemblance to the completeness
relationship for orthonormal discrete function sets, (1.302), and, more directly,
to the completeness statement for Fourier series in (2.31). This resemblance can
be highlighted by rewriting (2.119) to read
δ(t − t′) = ∫_{−∞}^{∞} [(1/√(2π)) e^{iωt}] [(1/√(2π)) e^{iωt′}]* dω    (2.120)
so that a comparison with (2.31) shows that the functions φ_ω(t) ≡ (1/√(2π)) exp(iωt)
play an analogous role to the orthonormal functions φ_n(t) ≡ (1/√T) exp(i2πnt/T)
provided we view the continuous variable ω in (2.120) as
proportional to a summation index. In fact a direct comparison of the variables
between (2.31) and (2.120) gives the correspondence
ω ←→ 2πn/T,    (2.121a)
dω ←→ 2π/T.    (2.121b)
Thus as the observation period T of the signal increases, the quantity 2π/T may
be thought of as approaching the differential dω while the discrete spectral lines
occurring at 2πn/T merge into a continuum corresponding to the frequency
variable ω. Moreover the orthogonality over the finite interval −T /2, T /2, as in
(1.213), becomes in the limit as T −→ ∞
δ(ω − ω′) = (1/2π) ∫_{−∞}^{∞} e^{it(ω−ω′)} dt
          = ∫_{−∞}^{∞} [(1/√(2π)) e^{itω}] [(1/√(2π)) e^{itω′}]* dt    (2.122)
i.e., the identity matrix represented by the Kronecker symbol δ mn goes over into
a delta function, which is the proper identity transformation for the continuum.
A more direct but qualitative connection between the Fourier series and the
Fourier transform can be established if we suppose that the function f (t) is
initially truncated to |t| < T /2 in which case its Fourier transform is
F(ω) = ∫_{−T/2}^{T/2} f(t) e^{−iωt} dt.    (2.123)

The coefficients in the Fourier series that represents this function within the
interval −T/2, T/2 can now be expressed as f̂_n = F(2πn/T)/T so that
f(t) = ∑_{n=−∞}^{∞} (1/T) F(2πn/T) e^{i2πnt/T}.    (2.124)

Thus in view of (2.121) we can regard the Fourier transform inversion for-
mula (2.115) as a limiting form of (2.124) as T −→ ∞. Figure 2.22 shows the

Figure 2.22: Continuous and discrete spectra (amplitude plotted versus ωτ)

close correspondence between the discrete spectrum defined by Fourier series


coefficients and the continuous spectrum represented by the Fourier transform.
The time domain signal is the exponential exp(−2|t/τ|). For the discrete spectrum
the time interval is truncated to −T/2 ≤ t ≤ T/2 (with T/2τ = 2) and the Fourier
series coefficients (T/τ) f̂_n (stem diagram) are plotted as a function of 2πnτ/T.
Superposed for comparison is the continuous spectrum represented by 4/[4 + (ωτ)²],
the Fourier transform of (1/τ) exp(−2|t/τ|).
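
A short Python sketch of this comparison follows (added as an illustration; the grid sizes are arbitrary choices, and the discrepancy printed at the end reflects only the truncation of the exponential to |t| ≤ T/2).

# Sketch (hedged): Fourier series coefficients of exp(-2|t/tau|) truncated to |t| <= T/2
# (with T/(2 tau) = 2) versus the continuous spectrum 4/(4 + (w tau)^2).
import numpy as np

tau = 1.0
T = 4.0 * tau
t = np.linspace(-T/2, T/2, 40001)
dt = t[1] - t[0]
f = np.exp(-2.0 * np.abs(t / tau))

n = np.arange(-20, 21)
# f_n = (1/T) * integral_{-T/2}^{T/2} f(t) exp(-i 2 pi n t / T) dt
fn = np.array([np.sum(f * np.exp(-1j * 2*np.pi*k*t/T)) * dt / T for k in n])
discrete = (T / tau) * fn.real                 # stem values, plotted against 2 pi n tau / T
w_n = 2*np.pi*n*tau/T
continuous = 4.0 / (4.0 + w_n**2)
print(np.max(np.abs(discrete - continuous)))   # a few percent: effect of truncating to |t| < T/2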

2.2.4 Convergence and the Use of CPV Integrals


The convergence properties of the Fourier integral are governed by the delta
function kernel (2.109). In many respects they are qualitatively quite simi-
lar to the convergence properties of Fourier series kernel (2.5). For example,
as we shall show explicitly in 2.2.7, the convergence at points of discontinu-
ity is again accompanied by the Gibbs oscillatory behavior. The one conver-
gence issue that does not arise with Fourier series, but is unavoidable with the
Fourier Integral, relates to the behavior of the functions at infinity, a problem
we had already dealt with in Chap. 1 in order to arrive at the limit state-
ment (1.269). There we found that it was sufficient to require that f (t) satisfy
(1.266) which, in particular, is satisfied by square integrable functions (Prob-
lem 1-18). Unfortunately this constraint does not apply to several idealized
signals that have been found to be of great value in simplifying system analysis.
To accommodate these signals, the convergence of the Fourier transform has to
112 2 Fourier Series and Integrals with Applications to Signal Analysis

be examined on a case-by-case basis. In certain cases this requires a special def-


inition of the limiting process underlying the improper integrals that define the
Fourier transform and its inverse. In the following we provide a brief account
of this limiting process.
An improper integral of the form ∫_{−∞}^{∞} f(t) dt, unless stated to the contrary,
implies the limit (1.278c)

lim_{T₁→∞} lim_{T₂→∞} ∫_{−T₁}^{T₂} f(t) dt,    (2.125)

which means that integral converges when the upper and lower limits approach
infinity independently. This definition turns out to be too restrictive in many
situations of physical interest. An alternative and more encompassing definition
is the following:
lim_{T→∞} ∫_{−T}^{T} f(t) dt.    (2.126)

Here we stipulate that upper and lower limits must approach infinity at the
same rate. It is obvious that (2.126) implies (2.125). The converse is, however,
not true. The class of functions for which the integral exists in the sense of
(2.126) is much larger than under definition (2.125). In particular, all (piece-
wise differentiable) bounded odd functions are integrable in the sense of (2.126)
and the integral yields zero. Under these circumstances (2.125) would gener-
ally diverge, unless of course the growth of the function at infinity is suitably
restricted. When the limit is taken symmetrically in accordance with (2.126)
the integral is said to be defined in terms of the Cauchy Principal Value (CPV).
We have in fact already employed this definition implicitly on several occasions,
in particular in (2.113). A somewhat different form of the CPV limit is also of
interest in Fourier transform theory. This form arises whenever the integral is
improper in virtue of one or more simple pole singularities within the integration
interval. For example, the integral ∫_{−2}^{8} dt/(t − 1) has a singularity at t = 1 where the
integrand becomes infinite. The first inclination would be to consider this inte-
gral simply as divergent. On the other hand since the integrand changes sign as
one moves through the singularity it is not unreasonable to seek a definition of
a limiting process which would facilitate the mutual cancellation of the positive
and negative infinite contributions. For example, suppose we define
I(ε₁, ε₂) = ∫_{−2}^{1−ε₁} dt/(t − 1) + ∫_{1+ε₂}^{8} dt/(t − 1),

where ε₁ and ε₂ are small positive numbers so that the integration is carried
out up to and past the singularity. By direct calculation we find I(ε₁, ε₂) =
ln(7ε₁/3ε₂). We see that if we let ε₁ and ε₂ approach zero independently the
integral diverges. On the other hand, setting ε₁ = ε₂ = ε always yields a finite
result. Apparently, approaching the singularity symmetrically from both sides

results in a cancellation of the positive and negative infinite contributions and
yields a convergent integral. The formal expression for the limit is
lim_{ε→0} {∫_{−2}^{1−ε} dt/(t − 1) + ∫_{1+ε}^{8} dt/(t − 1)} = ln(7/3).

This limiting procedure constitutes the CPV definition of the integral whenever
the singularity falls within the integration interval. Frequently a special symbol
is used to indicate a CPV evaluation. We shall indicate it by prefixing the letter
P to the integration symbol. Thus P∫_{−2}^{8} dt/(t − 1) = ln(7/3). When more than one
singularity is involved the CPV limiting procedure must be applied to each. For
example,
I = P∫_{−5}^{9} dt/[(t − 1)(t − 2)]
  = lim_{ε→0} {∫_{−5}^{1−ε} dt/[(t − 1)(t − 2)] + ∫_{1+ε}^{2−ε} dt/[(t − 1)(t − 2)]
      + ∫_{2+ε}^{9} dt/[(t − 1)(t − 2)]}
  = ln(3/4).

The following example illustrates the CPV evaluation of an integral with infinite
limits of integration:
I = P∫_{−∞}^{∞} dt/(t − 2) = lim_{ε→0, T→∞} {∫_{−T}^{2−ε} dt/(t − 2) + ∫_{2+ε}^{T} dt/(t − 2)}
  = lim_{ε→0, T→∞} ln {[(2 − ε − 2)(T − 2)]/[(−T − 2)(2 + ε − 2)]} = 0.

Note that the symbol P in this case pertains to a CPV evaluation at t = −∞


and t = ∞. A generic form of an integral that is frequently encountered is
I = P∫_{a}^{b} f(t)/(t − q) dt,    (2.127)

where a < q < b and f (t) is a bounded function within a, b and differentiable
at t = q. We can represent this integral as a sum of an integral of a bounded
function and a CPV integral which can be evaluated in closed form as follows:
I = P∫_{a}^{b} [f(t) − f(q) + f(q)]/(t − q) dt
  = ∫_{a}^{b} [f(t) − f(q)]/(t − q) dt + f(q) P∫_{a}^{b} dt/(t − q)
  = ∫_{a}^{b} [f(t) − f(q)]/(t − q) dt + f(q) ln[(b − q)/(q − a)].    (2.128)

Note that the integrand in the first integral in the last expression is finite at
t = q so that the integral can be evaluated, if necessary, numerically using
standard techniques.
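
The Python sketch below (illustrative only; scipy's quadrature routine is one possible choice of "standard technique") implements (2.128) directly: the regularized integrand is handled by ordinary quadrature and the logarithmic term is added in closed form.

# Sketch (hedged): numerical CPV evaluation following (2.128),
#   P int_a^b f(t)/(t-q) dt = int_a^b [f(t)-f(q)]/(t-q) dt + f(q) ln[(b-q)/(q-a)].
# Checked on P int_{-2}^{8} dt/(t-1) = ln(7/3), i.e. f(t) = 1, q = 1.
import numpy as np
from scipy.integrate import quad

def cpv(f, a, b, q):
    fq = f(q)
    def regular(t):
        # bounded at t = q provided f is differentiable there
        return (f(t) - fq) / (t - q) if t != q else 0.0
    val, _ = quad(regular, a, b, points=[q])
    return val + fq * np.log((b - q) / (q - a))

print(cpv(lambda t: 1.0, -2.0, 8.0, 1.0), np.log(7.0/3.0))   # both ~ 0.8473
print(cpv(np.cos, -1.0, 2.0, 0.5))                           # P int cos(t)/(t-0.5) dt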
Let us now apply the CPV procedure to the evaluation of the Fourier trans-
form of f (t) = 1/t. Even though a signal of this sort might appear quite artificial
it will be shown to play a pivotal role in the theory of the Fourier transform.
Writing the transform as a CPV integral we have
F(ω) = P∫_{−∞}^{∞} (e^{−iωt}/t) dt = P∫_{−∞}^{∞} (cos ωt / t) dt − iP∫_{−∞}^{∞} (sin ωt / t) dt.
Since P∫_{−∞}^{∞} (cos ωt / t) dt = 0, and (sin ωt)/t is free of singularities, we have
F(ω) = −i ∫_{−∞}^{∞} (sin ωt / t) dt.    (2.129)
Recalling that ∫_{−∞}^{∞} (sin x / x) dx = π we obtain by setting ωt = x in (2.129)
(1/π) ∫_{−∞}^{∞} (sin ωt / t) dt = sign(ω) = { 1, ω > 0;  −1, ω < 0. }    (2.130)

Thus we arrive at the transform pair


1/(πt) ⟺ −i sign(ω).    (2.131)
By using the same procedure for the inverse transform of 1/ω we arrive at the pair
sign(t) ⟺ 2/(iω).    (2.132)

Several idealized signals may be termed canonical in that they form the essential
building blocks in the development of analytical techniques for evaluation of
Fourier transforms and also play a fundamental role in the characterization of
linear system. One such canonical signal is the sign function just considered.
We consider several others in turn.

2.2.5 Canonical Signals and Their Transforms


The Delta Function
That Fourier transform of δ (t) equals 1 follows simply from the basic property
of the delta function as an identity transformation. The consistency of this with
the inversion formula follows from (2.119). Hence
F
δ (t) ⇐⇒ 1. (2.133)
In identical fashion we get
F
1 ⇐⇒ 2πδ (ω) . (2.134)

The Unit Step Function


From the identity U(t) = (1/2)[1 + sign(t)] we get in conjunction with (2.132) and (2.134)
U(t) ⟺ πδ(ω) + 1/(iω).    (2.135)

The Rectangular Pulse Function


Using the definition for pT (t) in (1.6-40b) we obtain by direct integration the
pair
p_T(t) ⟺ 2 sin(ωT)/ω,    (2.136)
where we again find the familiar Fourier integral kernel. If, on the other hand,
pΩ (ω) describes a rectangular frequency window, then a direct evaluation of the
inverse transform yields
sin(Ωt)/(πt) ⟺ p_Ω(ω).    (2.137)
The transition of (2.137) to (2.133) as Ω → ∞ and of (2.136) to (2.134) as
T → ∞ should be evident.

Triangular Pulse Function


Another signal that we should like to add to our catalogue of canonical trans-
forms is the triangular pulse qT (t) defined in (1.278d) for which we obtain the
pair
q_T(t) ⟺ T sin²(ωT/2)/(ωT/2)².    (2.137*)

Exponential Functions
Since the Fourier transform is a representation of signals in terms of exponentials
we would expect exponential functions to play a special role in Fourier analysis.
In the following we distinguish three cases: a purely imaginary argument, a
purely real argument with the function truncated to the positive time axis, and
a real exponential that decays symmetrically for both negative and positive
times. In the first case we get from the definition of the delta function (2.119)
and real ω0
F
eiω0 t ⇐⇒ 2πδ (ω − ω 0 ) . (2.138)
This result is in perfect consonance with the intuitive notion that a single
tone, represented in the time domain by a unit amplitude sinusoidal oscillation
of infinitely long duration, should correspond in the frequency domain to a sin-
gle number, i.e., the frequency of oscillation, or, equivalently, by a spectrum
consisting of a single spectral line. Here this spectrum is represented symbol-
ically by a delta function at ω = ω0 . Such a single spectral line, just like the

corresponding tone of infinite duration, are convenient abstractions never realiz-


able in practice. A more realistic model should consider a tone of finite duration,
say −T < t < T. We can do this either by truncating the limits of integration
in the evaluation of the direct transform, or, equivalently, by specifying this
truncation in terms of the pulse function pT (t). The resulting transform pair
then reads
p_T(t) e^{iω₀t} ⟺ 2 sin[(ω − ω₀)T]/(ω − ω₀),    (2.139)
so that the form of the spectrum is the Fourier kernel (2.136) whose peak has
been shifted to ω 0 . One can show that slightly more than 90% of the energy is
contained within the frequency band defined by the first two nulls on either side
of the principal peak. It is therefore reasonable to take this bandwidth as the
nominal spectral linewidth of the tone. Thus we see that a tone of duration 2T
has a spectral width of 2π/T which is sometimes referred to as the Rayleigh res-
olution limit. This inverse relationship between the signal duration and spectral
width is of fundamental importance in spectral analysis. Its generalization to a
wider class of signals is embodied in the so-called uncertainty principle discussed
in 2.5.1.
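
The "slightly more than 90%" figure quoted above is easy to confirm numerically; the sketch below (an added illustration with an arbitrary choice of T) integrates the squared magnitude of (2.139) over the main lobe and compares it with the total energy 2T given by Parseval's theorem.

# Sketch (hedged): fraction of the energy of p_T(t) e^{i w0 t} between the first spectral
# nulls w0 +/- pi/T of (2.139). Total energy is 2T; in-band energy is
# (1/2pi) * integral over |x| < pi/T of |2 sin(xT)/x|^2 dx, with x = w - w0.
import numpy as np
from scipy.integrate import quad

T = 1.0
spectrum_sq = lambda x: (2.0*np.sin(x*T)/x)**2 if x != 0 else (2.0*T)**2
inband, _ = quad(spectrum_sq, -np.pi/T, np.pi/T)
print(inband / (2.0*np.pi) / (2.0*T))    # ~ 0.903, i.e. slightly more than 90%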
With α > 0 and the exponential truncated to the nonnegative time axis
we get
e^{−αt} U(t) ⟺ 1/(α + iω).    (2.140)
For the exponential e−α|t| defined over the entire real line the transform pair
reads
e^{−α|t|} ⟺ 2α/(α² + ω²).    (2.141)
Formula (2.140) also holds when α is replaced by the complex number p0 =
α − iω 0 where ω 0 is real. A further generalization follows if we differentiate the
right side of (2.140) n − 1 times with respect to ω. The result is

[t^{n−1}/(n − 1)!] e^{−p₀t} U(t) ⟺ 1/(p₀ + iω)ⁿ;  n ≥ 1.    (2.142)

Using this formula in conjunction with the partial fraction expansion technique
constitutes one of the basic tools in the evaluation of inverse Fourier transforms
of rational functions.

Gaussian Function
A rather important idealized signal is the Gaussian function
f(t) = (1/√(2πσ_t²)) e^{−t²/(2σ_t²)},

where we have adopted the normalization (√f, √f) = 1. We compute its FT as follows:
F(ω) = (1/√(2πσ_t²)) ∫_{−∞}^{∞} e^{−t²/(2σ_t²)} e^{−iωt} dt = (1/√(2πσ_t²)) ∫_{−∞}^{∞} e^{−[t² + 2iωσ_t²t]/(2σ_t²)} dt
     = (e^{−σ_t²ω²/2}/√(2πσ_t²)) ∫_{−∞}^{∞} e^{−[t + iωσ_t²]²/(2σ_t²)} dt
     = (e^{−σ_t²ω²/2}/√(2πσ_t²)) ∫_{−∞+iωσ_t²}^{∞+iωσ_t²} e^{−z²/(2σ_t²)} dz.

The last integral may be interpreted as an integral in the complex z plane


with the path of integration running along the straight line with endpoints
(−∞ + iωσ 2t , ∞ + iωσ 2t ). Since the integrand is analytic in the entire finite z
plane we can shift this path to run along the axis of reals so that
∫_{−∞+iωσ_t²}^{∞+iωσ_t²} e^{−z²/(2σ_t²)} dz = ∫_{−∞}^{∞} e^{−z²/(2σ_t²)} dz = √(2πσ_t²).

Thus we obtain the transform pair
(1/√(2πσ_t²)) e^{−t²/(2σ_t²)} ⟺ e^{−σ_t²ω²/2}.    (2.142*)

Note that except for a scale factor the Gaussian function is its own FT. Here we
see another illustration of the inverse relationship between the signal duration
and bandwidth. If we take σ t as the nominal duration of the pulse in the time
domain, then a similar definition for the effective bandwidths of F (ω) yields
σ ω = 1/ σ t .

2.2.6 Basic Properties of the FT


Linearity
The Fourier transform is a linear operator, which means that for any set of
functions f_n(t), n = 1, 2, . . ., N, and corresponding transforms F_n(ω) we have
F{∑_{n=1}^{N} α_n f_n(t)} = ∑_{n=1}^{N} α_n F_n(ω),

where the αn are constants. This property is referred to as the superposition


principle. We shall return to it in Chap. 3 when we discuss linear systems. This
superposition principle carries over to a continuous index. Thus if
f(ξ, t) ⟺ F(ξ, ω)
holds for a continuum of values of ξ, then
F{∫ f(ξ, t) dξ} = ∫ F(ξ, ω) dξ.

Symmetries
For any Fourier transform pair
F
f (t) ⇐⇒ F (ω)
we also have, by a simple substitution of variables,
F
F (t) ⇐⇒ 2πf (−ω) . (2.143)
For example, using this variable replacement in (2.141), we obtain
α/[π(α² + t²)] ⟺ e^{−α|ω|}.    (2.143*)
The Fourier transform of the complex conjugate of a function follows through
the variable replacement
F
f ∗ (t) ⇐⇒ F ∗ (−ω) . (2.144)
Frequently we shall be interested in purely real signals. If f (t) is real, the
preceding requires
F ∗ (−ω) = F (ω) . (2.145)
If we decompose F (ω) into its real and imaginary parts
F (ω) = R (ω) + iX (ω) , (2.146)
we note that (2.145) is equivalent to

R (ω) = R (−ω) , (2.147a)


X (ω) = −X (−ω) , (2.147b)

so that for a real signal the real part of the Fourier transforms is even function
while the imaginary part an odd function of frequency. The even and odd
symmetries carry over to the amplitude and phase of the transform. Thus
writing
F (ω) = A (ω) eiθ(ω) , (2.148)
wherein
A(ω) = |F(ω)| = √([R(ω)]² + [X(ω)]²),    (2.149a)
θ(ω) = tan⁻¹[X(ω)/R(ω)],    (2.149b)
we have in view of (2.147)
A (ω) = A (−ω) (2.150a)
θ (ω) = −θ (−ω) . (2.150b)

As a result the inversion formula can be put into the form
f(t) = (1/π) ∫_{0}^{∞} A(ω) cos[ωt + θ(ω)] dω
     = ℜe {(1/2π) ∫_{0}^{∞} 2F(ω) e^{iωt} dω}.    (2.151)

The last expression shows that a real physical signal can be represented as
the real part of a fictitious complex signal whose spectrum equals twice the
spectrum of the real signal for positive frequencies but is identically zero for
negative frequencies. Such a complex signal is referred to as an analytic signal,
a concept that finds extensive application in the study of modulation to be
discussed in 2.3.

Time Shift and Frequency Shift


For any real T we have
F
f (t − T ) ⇐⇒ F (ω) e−iωT (2.152)
and similarly for any real ω 0
F
f (t) eiω 0 t ⇐⇒ F (ω − ω0 ) . (2.153)
The last formula is the quantification of the modulation of a high frequency
CW carrier by a baseband signal comprised of low frequency components.
For example, for the carrier of A cos (ω0 t + θ0 ) and a baseband signal f (t) we
get

f(t) A cos(ω₀t + θ₀) ⟺ (A/2) e^{iθ₀} F(ω − ω₀) + (A/2) e^{−iθ₀} F(ω + ω₀).    (2.154)
If we suppose that F (ω) is negligible outside the band defined by |ω| < Ω, and
also assume that ω0 > 2Ω, the relationship among the spectra in (2.154) may
be represented schematically as in Fig. 2.23

Figure 2.23: Modulation by a CW carrier



Differentiation
If f(t) is everywhere differentiable, then a simple integration by parts gives
∫_{−∞}^{∞} f′(t) e^{−iωt} dt = f(t) e^{−iωt} |_{−∞}^{∞} + iω ∫_{−∞}^{∞} f(t) e^{−iωt} dt
                            = iωF(ω).    (2.155)
Clearly if f (t) is differentiable n times we obtain by repeated integration
F
f (n) (t) ⇐⇒ (iω)n F (ω) . (2.156)

Actually this formula may still be used even if f (t) is only piecewise dif-
ferentiable and discontinuous with discontinuous first and even higher order
derivatives at a countable set of points. We merely have to replace f (n) (t) with
a generalized derivative defined in terms of singularity functions, an approach
we have already employed for the first derivative in (1.280). For example, the
Fourier transform of (1.280) is
f′(t) ⟺ iωF(ω) = F{f′_s(t)} + ∑_k [f(t_k⁺) − f(t_k⁻)] e^{−iωt_k}.    (2.157)

In the special case of only one discontinuity at t = 0 and f(0⁻) = 0, (2.157) becomes
f′_s(t) ⟺ iωF(ω) − f(0⁺).    (2.158)
What about the Fourier transform of higher order derivatives? Clearly if the
first derivative is continuous at t = 0, the Fourier transform of f″_s(t) may be
obtained by simply multiplying the right side of (2.158) by iω. However in case of
a discontinuity in the first derivative the magnitude of the jump in the derivative
must be subtracted. Again assuming f′(0⁻) = 0 we have
f″_s(t) ⟺ iω[iωF(ω) − f(0⁺)] − f′(0⁺).    (2.159)

Higher order derivatives can be handled similarly.


Since an n − th order derivative in the time domain transforms in the fre-
quency domain to a multiplication by (iω)n , the Fourier transform of any linear
differential operator with constant coefficients is a polynomial in iω. This feature
makes the Fourier transform a natural tool for the solution of linear differen-
tial equations with constant coefficients. For example, consider the following
differential equation:
x″(t) + 2x′(t) + x(t) = 0.    (2.160)
We seek a solution for x(t) for t ≥ 0 with initial conditions x(0⁺) = 2 and
x′(0⁺) = 6. Then
x′(t) ⟺ iωX(ω) − 2,
x″(t) ⟺ −ω²X(ω) − i2ω − 6.

The solution for X(ω) reads
X(ω) = (i2ω + 10)/(−ω² + i2ω + 1),
while the signal x(t) is to be computed from
x(t) = (1/2π) ∫_{−∞}^{∞} [(i2ω + 10)/(−ω² + i2ω + 1)] e^{iωt} dω.    (2.161)
The integral can be evaluated by contour integration as will be shown in 2.4.4
(see also (A.96) in the Appendix).
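
Alternatively, (2.161) can be inverted with the partial fraction technique based on (2.142); the short sketch below (added here as an illustration, not the book's own solution path) carries this out and verifies the result against the differential equation and its initial conditions.

# Sketch (hedged): inverting (2.161) by partial fractions and (2.142). With p = i*w the
# denominator is p^2 + 2p + 1 = (p + 1)^2, so
#   X(w) = (2p + 10)/(p + 1)^2 = 2/(p + 1) + 8/(p + 1)^2,
# and (2.142) with p0 = 1 gives x(t) = (2 + 8t) e^{-t} U(t).
import numpy as np

t = np.linspace(0.0, 10.0, 1001)
x = (2.0 + 8.0*t) * np.exp(-t)

xp = np.gradient(x, t)            # numerical first and second derivatives
xpp = np.gradient(xp, t)
print(np.max(np.abs(xpp[2:-2] + 2*xp[2:-2] + x[2:-2])))   # small (finite-difference error only)
print(x[0], xp[0])                                        # ~ 2 and ~ 6, the initial conditions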

Inner Product Invariance


We compute the inner product of two functions in the time domain and with the
aid of the inversion formulas transform it into an inner product in the frequency
domain as follows:
(f₁, f₂) = ∫_{−∞}^{∞} f₁*(t) f₂(t) dt
         = ∫_{−∞}^{∞} [(1/2π) ∫_{−∞}^{∞} F₁*(ω) e^{−iωt} dω] [(1/2π) ∫_{−∞}^{∞} F₂(ω′) e^{iω′t} dω′] dt
         = (1/2π) ∫_{−∞}^{∞} ∫_{−∞}^{∞} F₁*(ω) F₂(ω′) [(1/2π) ∫_{−∞}^{∞} e^{i(ω′−ω)t} dt] dω′ dω
         = (1/2π) ∫_{−∞}^{∞} ∫_{−∞}^{∞} F₁*(ω) F₂(ω′) δ(ω − ω′) dω′ dω
         = (1/2π) ∫_{−∞}^{∞} F₁*(ω) F₂(ω) dω.
The final result may be summarized to read
∫_{−∞}^{∞} f₁*(t) f₂(t) dt = (1/2π) ∫_{−∞}^{∞} F₁*(ω) F₂(ω) dω,    (2.162)

which is recognized as a generalization of Parseval’s formula.

Convolution
We have already encountered the convolution of two functions in connection with
Fourier series, (2.49). Since in the present case the time domain encompasses
the entire real line the appropriate definition is
h(t) = ∫_{−∞}^{∞} f(τ) g(t − τ) dτ.

We shall frequently employ the abbreviated notation


∫_{−∞}^{∞} f(τ) g(t − τ) dτ = f ∗ g.    (2.163)

Note that
∫_{−∞}^{∞} f(τ) g(t − τ) dτ = ∫_{−∞}^{∞} g(τ) f(t − τ) dτ
as one can readily convince oneself through a change of the variable of integration.
This can also be expressed as f ∗ g = g ∗ f, i.e., the convolution operation
is commutative. In view of (2.152) g(t − τ) ⟺ G(ω) e^{−iωτ} so that
∫_{−∞}^{∞} f(τ) g(t − τ) dτ ⟺ ∫_{−∞}^{∞} f(τ) G(ω) e^{−iωτ} dτ = F(ω) G(ω).    (2.164)

In identical manner we establish that
f(t) g(t) ⟺ (1/2π) ∫_{−∞}^{∞} F(η) G(ω − η) dη = (1/2π) F ∗ G.    (2.165)

Integration
When the Fourier transform is applied to integro-differential equations one sometimes
needs to evaluate the transform of the integral of a function. For example
with g(t) = ∫_{−∞}^{t} f(τ) dτ we would like to determine G(ω) in terms of F(ω).
We can do this by first recognizing that ∫_{−∞}^{t} f(τ) dτ = ∫_{−∞}^{∞} f(τ) U(t − τ) dτ.
Using (2.164) and (2.135) we have
∫_{−∞}^{∞} f(τ) U(t − τ) dτ ⟺ F(ω) [πδ(ω) + 1/(iω)]

with the final result
∫_{−∞}^{t} f(τ) dτ ⟺ πF(0)δ(ω) + F(ω)/(iω) = G(ω).    (2.166)

Note that the integral implies g′(t) = f(t) so that
iωG(ω) = F(ω).    (2.167)

This is certainly compatible with (2.166) since ωδ (ω) = 0. However the solu-
tion of (2.167) for G (ω) by simply dividing both sides by iω is in general not
permissible since G (ω) = F (ω) /iω unless F (0) = 0.

Causal Signals and the Hilbert Transform [16]


Let
f_e(t) = [f(t) + f(−t)]/2,    (2.168a)
f_o(t) = [f(t) − f(−t)]/2,    (2.168b)

so that f (t) = fe (t) + fo (t) for any signal. Since fe (t) = fe (−t) and fo (t) =
−fo (−t) (2.168a) and (2.168b) are referred to as the even and odd parts of f (t),
respectively. Now

F{f_e(t)} = (1/2) ∫_{−∞}^{∞} [f(t) + f(−t)] [cos(ωt) − i sin(ωt)] dt
          = ∫_{−∞}^{∞} f(t) cos(ωt) dt    (2.169a)
and
F{f_o(t)} = (1/2) ∫_{−∞}^{∞} [f(t) − f(−t)] [cos(ωt) − i sin(ωt)] dt
          = −i ∫_{−∞}^{∞} f(t) sin(ωt) dt.    (2.169b)

In view of the definition (2.146), for a real f (t) (2.169a) and (2.169b) are
equivalent to
F
fe (t) ⇐⇒ R (ω) , (2.170a)
F
fo (t) ⇐⇒ iX (ω) . (2.170b)

In the following we shall be concerned only with real signals.


As will be discussed in Chap. 3, signals that vanish for negative values of the
argument play a special role in linear time-invariant systems. Such signals are
said to be causal. Suppose f (t) is a causal signal. Then according to (2.168)

f(t) = { 2f_e(t) = 2f_o(t),  t > 0;   0,  t < 0. }    (2.171)

Evidently the even and odd parts are not independent for

f_e(t) = f_o(t),  t > 0,
f_e(t) = −f_o(t),  t < 0,

which can be rephrased in more concise fashion with the aid of the sign function
as follows:

fo (t) = sign (t) fe (t) (2.172a)


fe (t) = sign (t) fo (t) . (2.172b)

Taking account of (2.170), (2.132), and (2.165) Fourier Transformation of both


sides of (2.172) results in the following pair of equations:
X(ω) = −(1/π) P∫_{−∞}^{∞} R(η) dη/(ω − η),    (2.173a)
R(ω) = (1/π) P∫_{−∞}^{∞} X(η) dη/(ω − η).    (2.173b)

These relations show explicitly that the real and imaginary parts of the Fourier
transform of a causal signal may not be prescribed independently. For example
if we know R(ω), then X(ω) can be determined uniquely by (2.173a). Since
P∫_{−∞}^{∞} dη/(ω − η) = 0, an R(ω) that is constant for all frequencies gives a null result
for X(ω). Consequently, (2.173b) determines R(ω) from X(ω) only within a constant.
The integral transform (1/π) P∫_{−∞}^{∞} R(η) dη/(ω − η) is known as the Hilbert Transform
which shall be denoted by H{R(ω)}. Using this notation we rewrite (2.173) as
X (ω) = −H {R (ω)} , (2.174a)
R (ω) = H {X (ω)} . (2.174b)
Since (2.174b) is the inverse of (2.174a) the inverse Hilbert transform is obtained
by a change in sign. As an example, suppose R (ω) = pΩ (ω) . Carrying out the
simple integration yields

X(ω) = (1/π) ln |(ω − Ω)/(ω + Ω)|,    (2.175)
which is plotted in Fig. 2.24. The Hilbert Transform in the time domain is
defined similarly. Thus for a signal f (t)
H{f(t)} = (1/π) P∫_{−∞}^{∞} f(τ) dτ/(t − τ).    (2.176)

Figure 2.24: R (ω) and its Hilbert transform



Particularly simple results are obtained for Hilbert transforms of sinusoids.


For example, with f (t) = cos (ωt) (with ω a real constant) we have
(1/π) P∫_{−∞}^{∞} cos(ωτ) dτ/(t − τ) = (1/π) P∫_{−∞}^{∞} cos[ω(t − τ)] dτ/τ
    = cos(ωt) (1/π) P∫_{−∞}^{∞} cos(ωτ) dτ/τ + sin(ωt) (1/π) P∫_{−∞}^{∞} sin(ωτ) dτ/τ.
We note that the first of the two preceding integrals involves an odd function and
therefore vanishes, while in virtue of (2.130) the second integral yields sign(ω).
Hence
H {cos (ωt)} = sign (ω) sin (ωt) . (2.177)
In identical fashion we obtain
H {sin (ωt)} = −sign (ω) cos (ωt) . (2.178)
We shall have occasion to employ the last two formulas in connection with
analytic signal representations.
The Hilbert transform finds application in signal analysis, modulation the-
ory, and spectral analysis. In practical situations the evaluation of the Hilbert
transform must be carried out numerically for which purpose direct use of the
defining integral is not particularly efficient. The preferred approach is to carry
out the actual calculations in terms of the Fourier transform which can be com-
puted efficiently using the FFT algorithm. To see how this may be arranged,
let us suppose that R (ω) is given and we wish to find X (ω) . By taking the
inverse FT we first find fe (t) , in accordance with (2.170a). In view of (2.171),
if we now multiply the result by 2, truncate it to nonnegative t, and take the
direct FT, we should obtain F (ω) . Thus
2f_e(t) U(t) ⟺ F(ω)    (2.179)
and X(ω) follows by taking the imaginary part of F(ω). In summary we have
H{R} = −ℑm {∫_{0}^{∞} 2F⁻¹{R} e^{−iωt} dt} = −X(ω),    (2.180)
where
F⁻¹{R} ≡ (1/2π) ∫_{−∞}^{∞} R(ω′) e^{iω′t} dω′.
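
On sampled data the FFT-based recipe just described reduces to the familiar discrete construction sketched below (an added illustration, not part of the original text): multiply the spectrum of the samples by −i sign(ω), as dictated by (2.207), and transform back. The check uses (2.177), H{cos(ω₀t)} = sin(ω₀t) for ω₀ > 0.

# Sketch (hedged): discrete Hilbert transform via the FFT, i.e. spectral multiplication
# by -i sign(w) per (2.207); verified against (2.177).
import numpy as np

N = 4096
n = np.arange(N)
k = 200                                   # integer number of cycles => periodic record
z = np.cos(2*np.pi*k*n/N)

Z = np.fft.fft(z)
w = np.fft.fftfreq(N)                     # only the sign of the discrete frequency matters
zhat = np.fft.ifft(-1j*np.sign(w)*Z).real
print(np.max(np.abs(zhat - np.sin(2*np.pi*k*n/N))))   # ~ 1e-13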

Initial and Final Value Theorems


Again assume that f (t) is a causal signal and that it is piecewise differentiable
for all t > 0. Then
F{f′(t)} = ∫_{0⁺}^{∞} f′(t) e^{−iωt} dt = f(t) e^{−iωt} |_{0⁺}^{∞} + iω ∫_{0⁺}^{∞} f(t) e^{−iωt} dt
         = iωF(ω) − f(0⁺).

Since by assumption f′(t) exists for t > 0, or, equivalently, f(t) is smooth,
F{f′(t)} approaches zero as ω → ∞ (c.f. (2.158)). Under these conditions the
last equation yields
lim_{ω→∞} iωF(ω) = f(0⁺),    (2.181)

a result known as the initial value theorem. Note that f_e(0) ≡ f(0) but according
to (2.171) 2f_e(0) = f(0⁺). Hence
f(0) = (1/2) f(0⁺),    (2.182)
which is consistent with the fact that the FT converges to the arithmetic mean
of the step discontinuity.
Consider now the limit
lim_{ω→0} [iωF(ω) − f(0⁺)] = lim_{ω→0} ∫_{0⁺}^{∞} f′(t) e^{−iωt} dt
                            = ∫_{0⁺}^{∞} f′(t) [lim_{ω→0} e^{−iωt}] dt
                            = lim_{t→∞} f(t) − f(0⁺).

Upon cancelling f(0⁺) we get
lim_{ω→0} [iωF(ω)] = lim_{t→∞} f(t),    (2.183)

which is known as the final value theorem.

Fourier Series and the Poisson Sum Formula


Given a function f (t) within the finite interval −T /2, T /2 we can represent it
either as a Fourier integral, (2.123), comprised of a continuous spectrum of

1 ∞
f (t ) = ∫ F (ω )ejω tdω
2π −∞

−T / 2 T /2
⎛ 2π n⎞

F⎜ ⎟ 2π n
f (t ) = ∑ ⎝
T ⎠ jT
e
n = −∞ T

−T /2 T /2

Figure 2.25: Fourier integral and Fourier series representations



sinusoids, or as a Fourier series, (2.124), comprised of discrete harmonically


related sinusoids. In the former case the representation converges to zero outside
the interval in question while in the latter case we obtain a periodic repetition
(extension) of the given function, as illustrated in Fig. 2.25. The significant point
to note here is that the Fourier series coefficients are given by the FT formula.
Note also that the Fourier transform of the periodic extension of f(t) (taken
over the entire real-time axis) is an infinite series comprised of delta functions, i.e.,
∑_{n=−∞}^{∞} f̂_n e^{i2πnt/T} ⟺ 2π ∑_{n=−∞}^{∞} f̂_n δ(ω − 2πn/T).    (2.184)

In the following we present a generalization of (2.124), known as the Poisson


sum formula wherein the function f (t) may assume nonzero values over the
entire real line. We start by defining the function g (t) through the sum


g (t) = f (t − nT ) . (2.185)
n=−∞

It is easy to see that g (t) is periodic with period T. We take the FT to obtain

∑_{n=−∞}^{∞} f(t − nT) ⟺ F(ω) ∑_{n=−∞}^{∞} e^{−iωnT}.

In view of (2.31) the sum of exponentials can be replaced by a sum comprised


of delta functions. Thus
F(ω) ∑_{n=−∞}^{∞} e^{−iωnT} = F(ω) ∑_{ℓ=−∞}^{∞} 2πδ(ωT − 2πℓ)
                            = (2π/T) ∑_{ℓ=−∞}^{∞} F(2πℓ/T) δ(ω − 2πℓ/T).

Inverting the FT gives



∑_{ℓ=−∞}^{∞} [F(2πℓ/T)/T] e^{i2πℓt/T} ⟺ (2π/T) ∑_{ℓ=−∞}^{∞} F(2πℓ/T) δ(ω − 2πℓ/T).

Since the left side in the last expression must be identical to (2.185) we are
justified in writing

∑_{n=−∞}^{∞} f(t − nT) = ∑_{ℓ=−∞}^{∞} [F(2πℓ/T)/T] e^{i2πℓt/T},    (2.186)

which is the desired Poisson sum formula.


As an example, suppose f (t) = 1/(1 + t2 ). Then F (ω) = πe−|ω| (see
(2.143*)) and with T = 2π we get

∑_{n=−∞}^{∞} 1/[1 + (t − 2πn)²] = (1/2) ∑_{ℓ=−∞}^{∞} e^{−(|ℓ| − iℓt)} = (e² − 1)/(2[e² − 2e cos t + 1]).    (2.187)
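
A quick numerical check of (2.187) (added for illustration; the test point and truncation are arbitrary) follows.

# Sketch (hedged): numerical check of the Poisson sum formula as specialized in (2.187).
import numpy as np

t = 0.7
n = np.arange(-2000, 2001)
lhs = np.sum(1.0 / (1.0 + (t - 2*np.pi*n)**2))
rhs = (np.e**2 - 1) / (2 * (np.e**2 - 2*np.e*np.cos(t) + 1))
print(lhs, rhs)        # agree up to the small truncation error of the finite sum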

2.2.7 Convergence at Discontinuities


The convergence of the FT at a step discontinuity exhibits the Gibbs oscillatory
behavior similar to Fourier series. Thus suppose f (t) has step discontinuities
at t = tk , k = 1, 2, . . . and we represent it as in (1.282). Then with f Ω (t) as in
(2.109) we have
 
f^Ω(t) = f_s^Ω(t) + ∑_k [f(t_k⁺) − f(t_k⁻)] ∫_{t_k}^{∞} (sin[(t − t′)Ω]/[π(t − t′)]) dt′
       = f_s^Ω(t) + ∑_k [f(t_k⁺) − f(t_k⁻)] (1/π) ∫_{−∞}^{(t−t_k)Ω} (sin x / x) dx
       = f_s^Ω(t) + ∑_k [f(t_k⁺) − f(t_k⁻)] {1/2 + (1/π) Si[(t − t_k)Ω]}.    (2.188)

As Ω → ∞ the fsΩ (t) tends uniformly to fs (t) whereas the convergence of


each member in the sum is characterized by the oscillatory behavior of the sine

Figure 2.26: Convergence of the Fourier transform at a step discontinuity

integral function. This is illustrated in Fig. 2.26 which shows a unit step together
with plots of 1/2 + (1/π) Si(Ωt) for Ω = 10, 20, and 50.
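
These curves are easily generated with the sine integral routine in scipy; the short sketch below (an added illustration) also prints the Gibbs overshoot, which is independent of Ω.

# Sketch (hedged): the curves of Fig. 2.26, 1/2 + (1/pi) Si(Omega t), via scipy's sine integral.
import numpy as np
from scipy.special import sici

t = np.linspace(-1.0, 1.0, 2001)
for Omega in (10.0, 20.0, 50.0):
    si = np.sign(t) * sici(Omega * np.abs(t))[0]   # Si is an odd function of its argument
    approx = 0.5 + si / np.pi
    print(Omega, approx.max())                     # overshoot ~ 1.0895 for every Omega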

2.2.8 Fejer Summation


In 2.1.5 it was shown that the Gibbs oscillations at step discontinuities arising
in partial sums of Fourier series can be suppressed by employing the Fejer sum-
mation technique. An analogous procedure works for the Fourier Integral where
2.2 The Fourier Integral 129

instead of (2.32) we must resort to the following fundamental theorem from the
theory of limits. Given a function f (Ω) integrable over any finite interval 0, Ω
we define, by analogy with (2.135), the average σ Ω by
σ_Ω = (1/Ω) ∫_{0}^{Ω} f(Ω) dΩ.    (2.189)

It can be shown that if lim_{Ω→∞} f(Ω) = f exists then so does lim_{Ω→∞} σ_Ω = f.
Presently for the function f(Ω) we take the partial “sum” f^Ω(t) in (2.109)
and denote the left side of (2.189) by σ_Ω(t). If we suppose that lim_{Ω→∞} f^Ω(t) =
(1/2)[f(t⁺) + f(t⁻)], then by the above limit theorem we also have
lim_{Ω→∞} σ_Ω(t) = (1/2)[f(t⁺) + f(t⁻)].    (2.190)
By integrating the right side of (2.109) with respect to Ω and using (2.189) we obtain
σ_Ω(t) = ∫_{−∞}^{∞} f(t′) (sin²[(Ω/2)(t − t′)]/[π(Ω/2)(t − t′)²]) dt′.    (2.191)
Unlike the kernel (2.38) in the analogous formula for Fourier series in (2.37),
the kernel
K_Ω(t − t′) = sin²[(Ω/2)(t − t′)]/[π(Ω/2)(t − t′)²]    (2.192)
is not periodic. We leave it as an exercise to show that
lim_{Ω→∞} sin²[(Ω/2)(t − t′)]/[π(Ω/2)(t − t′)²] = δ(t − t′),    (2.193)

which may be taken as a direct verification of (2.190). A plot of the Fejer


kernel together with the Fourier integral kernel is shown in Fig. 2.27, where
the maximum of each kernel has been normalized to unity. Note that the Fejer
kernel is always nonnegative with a wider main lobe than the Fourier kernel and
exhibits significantly lower sidelobes. One can readily show that
2 2
sin (Ω/2) t 1 − |ω|
Ω ; |ω| < Ω,
F = (2.194)
π (Ω/2) t 2 0; |ω| > Ω.

Since the right side of (2.191) is a convolution in the time domain, its FT
yields a product of the respective transforms. Therefore using (2.194) we can
rewrite (2.191) as an inverse FT as follows:
σ_Ω(t) = (1/2π) ∫_{−Ω}^{Ω} F(ω) (1 − |ω|/Ω) e^{iωt} dω.    (2.195)

Figure 2.27: Fejer and Fourier integral kernels

We see that the Fejer “summation” (2.195) is equivalent to the multiplication


of the signal transform F (ω) by the triangular spectral window:
* +
|ω|
W (ω) = 1 − pΩ (ω) (2.196)
Ω
quite analogous to the discrete spectral weighting of the Fourier series coeffi-
cients in (2.34). Figure 2.28 shows a rectangular pulse together with the Fejer
and Fourier approximations using a spectral truncation of Ω = 40/T. These
results are seen to be very similar to those plotted in Fig. 2.9 for Fourier series.
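
The spectral-window viewpoint of (2.195)–(2.196) lends itself to a direct numerical comparison; the sketch below (added for illustration, with Ω = 40/T as in Fig. 2.28) reconstructs the rectangular pulse from its transform (2.136) with the abrupt cutoff and with the triangular window, and prints the peak values, which exhibit the Gibbs overshoot in the first case only.

# Sketch (hedged): Fourier (rectangular cutoff) versus Fejer (triangular window)
# reconstruction of p_T(t) from its transform 2 sin(wT)/w, cf. (2.136), (2.195), (2.196).
import numpy as np

T = 1.0
Omega = 40.0 / T
w = np.linspace(-Omega, Omega, 4001)
dw = w[1] - w[0]
F = np.empty_like(w)
nz = np.abs(w) > 1e-12
F[nz] = 2*np.sin(w[nz]*T)/w[nz]
F[~nz] = 2*T

t = np.linspace(-2*T, 2*T, 401)
E = np.exp(1j*np.outer(t, w))                       # inversion kernel e^{i w t}
fourier = (E @ F) * dw / (2*np.pi)
fejer = (E @ (F*(1 - np.abs(w)/Omega))) * dw / (2*np.pi)
print(fourier.real.max(), fejer.real.max())         # ~1.09 (Gibbs) versus <= 1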
Just like for Fourier series, we can also introduce higher order Fejer approximations.
For example, the second-order approximation σ_Ω^{(1)}(t) can be defined by
σ_Ω^{(1)}(t) = σ̄_Ω = (1/Ω) ∫_{0}^{Ω} σ_a(t) da    (2.197)
again with the property
lim_{Ω→∞} σ_Ω^{(1)}(t) = (1/2)[f(t⁺) + f(t⁻)].    (2.198)
Substituting (2.191) with Ω replaced by the integration variable a into (2.197)
one can show that
σ_Ω^{(1)}(t) = ∫_{−∞}^{∞} f(t′) K_Ω^{(1)}(t − t′) dt′,    (2.199)
where
K_Ω^{(1)}(t) = (1/πt²) ∫_{0}^{Ωt} [(1 − cos x)/x] dx.    (2.200)

Figure 2.28: Comparison of Fejer and Fourier integral approximations

One can show directly that lim_{Ω→∞} K_Ω^{(1)}(t) = δ(t), consistent with (2.198).
A plot of 4πK_Ω^{(1)}(t)/Ω² as a function of Ωt together with the (first-order) Fejer
and Fourier kernels is shown in Fig. 2.29. Unlike the Fourier Integral and the
(first-order) Fejer kernels, K_Ω^{(1)}(t) decreases monotonically on both sides of the

Figure 2.29: Comparison of Fourier and Fejer kernels



maximum, i.e., the functional form is free of sidelobes. At the same time its
single lobe is wider than the main lobe of the other two kernels. It can be shown
that for large Ωt
K_Ω^{(1)}(t) ∼ ln(Ω|t|γ)/[π(Ωt)²],    (2.201)
where ln γ = 0.577215 . . . is the Euler constant. Because of the presence of
the logarithmic term (2.201) represents a decay rate somewhere between that
of the Fourier Integral kernel (1/Ωt) and that of the (first-order) Fejer kernel
[1/(Ωt)²].
The Fourier transform of K_Ω^{(1)}(t) furnishes the corresponding spectral window.
An evaluation of the FT by directly transforming (2.200) is somewhat cumbersome.
A simpler approach is the following:
F{K_Ω^{(1)}(t)} = F{(1/Ω) ∫_{0}^{Ω} (sin²[(a/2)t]/[π(a/2)t²]) da} = (1/Ω) ∫_{0}^{Ω} F{sin²[(a/2)t]/[π(a/2)t²]} da
             = (1/Ω) ∫_{0}^{Ω} (1 − |ω|/a) p_a(ω) da = { (1/Ω) ∫_{|ω|}^{Ω} (1 − |ω|/a) da,  |ω| < Ω;   0,  |ω| > Ω. }

The last integral is easily evaluated with the final result
F{K_Ω^{(1)}(t)} ≡ W^{(1)}(ω) = {1 + (|ω|/Ω)[ln(|ω|/Ω) − 1]} p_Ω(ω).    (2.202)

A plot of this spectral window is shown in Fig. 2.30 which is seen to be quite
similar to its discrete counterpart in Fig. 2.11.

2.3 Modulation and Analytic Signal


Representation
2.3.1 Analytic Signals
Suppose z (t) is a real signal with a Fourier transform Z (ω) = A (ω) eiθ(ω) .
According to (2.151) this signal can be expressed as a real part of the complex
signal whose Fourier transform vanishes for negative frequencies and equals
twice the transform of the given real signal for positive frequencies. Presently
we denote this complex signal by w (t) so that
w(t) = (1/2π) ∫_{0}^{∞} 2Z(ω) e^{iωt} dω,    (2.203)

whence the real and imaginary parts are, respectively,
ℜe{w(t)} = z(t) = (1/π) ∫_{0}^{∞} A(ω) cos[ωt + θ(ω)] dω,    (2.204a)

Figure 2.30: FT of second-order Fejer kernel

ℑm{w(t)} = (1/π) ∫_{0}^{∞} A(ω) sin[ωt + θ(ω)] dω.    (2.204b)

We claim that ℑm{w(t)}, which we presently denote by ẑ(t), is the Hilbert
transform of z(t), i.e.,
ẑ(t) = (1/π) P∫_{−∞}^{∞} z(τ)/(t − τ) dτ.    (2.205)

Taking Hilbert transforms of both sides of (2.204a) and using trigonometric sum
formulas together with (2.177) and (2.178) we obtain
ẑ(t) = H{z(t)} = H{(1/π) ∫_{0}^{∞} A(ω) cos[ωt + θ(ω)] dω}
     = (1/π) ∫_{0}^{∞} A(ω) H{cos[ωt + θ(ω)]} dω
     = (1/π) ∫_{0}^{∞} A(ω) (cos[θ(ω)] H{cos(ωt)} − sin[θ(ω)] H{sin(ωt)}) dω
     = (1/π) ∫_{0}^{∞} A(ω) (cos[θ(ω)] sin(ωt) + sin[θ(ω)] cos(ωt)) dω
     = (1/π) ∫_{0}^{∞} A(ω) sin[ωt + θ(ω)] dω = ℑm{w(t)}

as was to be demonstrated. As a by-product of this derivation we see that


the evaluation of the Hilbert transform of any signal can always be carried
134 2 Fourier Series and Integrals with Applications to Signal Analysis

out entirely in terms of the FT, as already remarked in connection with the
frequency domain calculation in (2.180).
The complex function

w (t) = z (t) + iẑ (t) (2.206)

of a real variable t is referred to as an analytic signal.1 By construction the


Fourier transform of such a signal vanishes identically for negative frequen-
cies. This can also be demonstrated directly by Fourier transforming both sides
of (2.206). This entails recognition of (2.205) as a convolution of z (t) with 1/πt
and use of (2.164) and (2.131). As a result we get the transform pair
F
ẑ (t) ⇐⇒ −i sign (ω) Z (ω) . (2.207)

Using this in the FT of (2.206) yields W (ω) = Z (ω) + i [−i sign (ω) Z (ω)]
which is equivalent to

W(ω) = { 2Z(ω),  ω > 0;   0,  ω < 0. }    (2.208)

In practical situations a signal will invariably have negligible energy above a


certain frequency. It is frequently convenient to idealize this by assuming that
the FT of the signal vanishes identically above a certain frequency. Such a
signal is said to be bandlimited (or effectively bandlimited). For example if z (t)
is bandlimited to |ω| < ω max the magnitude of its FT may appear as shown in
Fig. 2.31a. In conformance with (2.208) the magnitude of the Fourier spectrum
of the corresponding analytic signal then appears as in Fig. 2.31b. It is common
to refer to the spectrum in Fig. 2.31a as double sided and to that in Fig. 2.31b as
the single sided. In practical applications the use of the latter is more common.
The energy balance between the time and the frequency domains follows from
Parseval's theorem
∫_{−∞}^{∞} |w(t)|² dt = (2/π) ∫_{0}^{ω_max} |Z(ω)|² dω.

Because of (2.207) the energy of an analytic signal is shared equally by the real
signal and its Hilbert transform.
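
These two properties — a one-sided spectrum and the equal energy split — are easy to exhibit on sampled data; the sketch below (an added illustration using scipy's analytic-signal routine) does so for an arbitrary two-tone test signal.

# Sketch (hedged): discrete analytic signal and the equal energy split implied by (2.207).
import numpy as np
from scipy.signal import hilbert

N = 4096
n = np.arange(N)
z = np.cos(2*np.pi*60*n/N) + 0.5*np.sin(2*np.pi*23*n/N)   # real, zero-mean test signal

w = hilbert(z)                               # analytic signal w = z + i*zhat
W = np.fft.fft(w)
print(np.max(np.abs(W[N//2+1:])))            # negative-frequency bins ~ 0 (single-sided spectrum)
print(np.sum(z**2), np.sum(w.imag**2))       # energies of z and zhat are equal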

2.3.2 Instantaneous Frequency and the Method


of Stationary Phase
The analytic signal furnishes a means of quantifying the amplitude, phase,
and frequency of signals directly in the time domain. We recall that these
concepts have their primitive origins in oscillatory phenomena described by
1 The term “analytic” refers to the fact that a signal whose Fourier transform vanishes

for real negative values of frequency, i.e., is represented by the integral (2.203), is an analytic
function of t in the upper half of the complex t plane (i.e., m (t) > 0). (See Appendix, pages
341–348).

Figure 2.31: Spectrum of z(t) and z(t) + iẑ(t)

sinusoids. Thus we say that the signal r cos (ωt + ψ 0 ) has amplitude r, fre-
quency ω and a fixed phase reference ψ 0 , where for purposes of analysis we
sometimes find it more convenient to deal directly with a fictitious complex sig-
nal r exp [i (ωt + ψ 0 )] with the tacit understanding that physical processes are
to be associated only with the real part of this signal. A generalization of this
construct is an analytic signal. In addition to simplifying the algebra such com-
plex notation also affords novel points of view. For example, the exponential of
magnitude r and phase angle ψ(t) = ωt + θ₀ can be interpreted graphically as
a phasor of length r rotating at the constant angular velocity ω = (d/dt)(ωt + θ₀).
Classically for a general nonsinusoidal (real) signal z (t) the concepts of fre-
quency, amplitude, and phase are associated with each sinusoidal component
comprising the signal Fourier spectrum, i.e., in this form these concepts appear
to have meaning only when applied to each individual spectral component of
the signal. On the other hand we can see intuitively that at least in special
cases the concept of frequency must bear a close relationship to the rate of zero
crossings of a real signal. For pure sinusoids this observation is trivial, e.g.,
the number of zero crossings of the signal cos(10t) per unit time is twice that
of cos(5t). Suppose instead we take the signal cos(10t²). Here the number of
zero crossings varies linearly with time and the corresponding complex signal,
as represented by the phasor exp(i10t²), rotates at the rate (d/dt)(10t²) = 20t
rps. Thus we conclude that the frequency of this signal varies linearly with
time. The new concept here is that of instantaneous frequency which is clearly
not identical with the frequency associated with each Fourier component of the
signal (except of course in case of a pure sinusoid). We extend this definition to

arbitrary real signals z (t) through an analytic signal constructed in accordance


with (2.203). We write it presently in the form
w(t) = r(t) e^{iψ(t)},    (2.209)
where
r(t) = √(z²(t) + ẑ²(t))    (2.210)
is the (real, nonnegative) time-varying amplitude, or envelope, ψ(t) the instantaneous
phase, and
ω̃(t) = dψ(t)/dt    (2.211)
the instantaneous frequency. Note also that the interpretation of ω̃(t) as a
zero crossing rate requires that it be nonnegative which is compatible with the
analytic signal having only positive frequency components. To deduce the rela-
tionship between the instantaneous frequency and the signal Fourier spectrum
let us formulate an estimate of the spectrum of w(t) :
W(ω) = ∫_{−∞}^{∞} r(t) e^{i[ψ(t)−ωt]} dt.    (2.212)

We can, of course, not “evaluate” this integral without knowing the specific
signal. However for signals characterized by a large time-bandwidth product
we can carry out an approximate evaluation utilizing the so-called principle
of stationary phase. To illustrate the main ideas without getting sidetracked
by peripheral generalities consider the real part of the exponential in (2.212),
i.e., cos [q(t)] with q(t) = ψ(t) − ωt. Figure 2.32 shows a plot of cos [q(t)] for

Figure 2.32: Plot of cos(5t2 − 50t) (stationary point at t = 5)



the special choice ψ (t) = 5t2 and ω = 50. This function is seen to oscillate
rapidly except in the neighborhood of t = t₀ = 5 = ω/10, which point corresponds
to q′(5) = 0. The value t₀ = 5 in the neighborhood of which the phase varies
slowly is referred to as the stationary point of q(t) (or a point of stationary
phase). If we suppose that the function r(t) is slowly varying relative to these
oscillations, we would expect the contributions to an integral of the form
∫_{−∞}^{∞} r(t) cos(5t² − 50t) dt from points not in the immediate vicinity
of t = 5 to mutually cancel. Consequently the dominant contributions to the
integral would arise only from the values of r (t) and ψ (t) in the immediate
neighborhood of the point of stationary phase. We note in passing that in this
example the product t0 ω = 250 >>1. It is not hard to show that the larger this
dimensionless quantity (time bandwidth product) the narrower the time band
within which the phase is stationary and therefore the more nearly localized the
contribution to the overall integral. In the general case the stationary point is
determined by
q′(t) = ψ′(t) − ω = 0,    (2.213)
which coincides with the definition of the instantaneous frequency in (2.211).
When we expand the argument of the exponential in a Taylor series about t = t0
we obtain
q(t) = ψ(t₀) − ωt₀ + (1/2)(t − t₀)² ψ″(t₀) + . . .    (2.214)
Similarly we have for r (t)
r(t) = r(t₀) + (t − t₀) r′(t₀) + . . .    (2.215)
In accordance with the localization principle just discussed we expect, given a
sufficiently large ωt0 , that in the exponential function only the first two Taylor
series terms need to be retained. Since r (t) is assumed to be relatively slowly
varying it may be replaced by r (t0 ) . Therefore (2.212) may be approximated by
W(ω) ∼ r(t₀) e^{i[ψ(t₀)−ωt₀]} ∫_{−∞}^{∞} e^{(i/2)(t−t₀)²ψ″(t₀)} dt.    (2.216)

When the preceding standard Gaussian integral is evaluated we obtain the final formula
W(ω) ∼_{ωt₀→∞} r(t₀) e^{i[ψ(t₀)−ωt₀]} √(2π/|ψ″(t₀)|) e^{i(π/4) sign[ψ″(t₀)]}.    (2.217)

It should be noted that the variable t0 is to be expressed in terms of ω by


inverting (2.213), a procedure that in general is far from trivial. When this
is done (2.217) provides an asymptotic approximation to the signal Fourier
spectrum for large ωt0 .

To illustrate the relationship between the instantaneous frequency of a signal


and its frequency content as defined by Fourier synthesis consider the signal

g(t) = { A cos(at² + βt),  0 ≤ t ≤ T;   0 elsewhere, }
whose instantaneous frequency increases linearly from ω min = β (β > 0) to
ωmax = 2aT +β rps. Based on this observation it appears reasonable to define
the nominal bandwidth of this signal by B = aT /π Hz. The relationship between
B and the bandwidth as defined by the signal Fourier spectrum is more readily
clarified in terms of the dimensionless parameters M = 2BT (the nominal time-
bandwidth product) and r = ω min /ωmax < 1. Using these parameters we put
the signal in the form
g(t) = { A cos[(π/2) M (t/T)² + π (r/(1 − r)) M (t/T)],  0 ≤ t ≤ T;   0 elsewhere. }    (2.218)

The FT of (2.218) can be expressed in terms of Fresnel integrals whose standard


forms read
C(x) = ∫_{0}^{x} cos[(π/2)ξ²] dξ,    (2.219a)
S(x) = ∫_{0}^{x} sin[(π/2)ξ²] dξ.    (2.219b)
One then finds
G(ω) = [AT/(2√M)] { e^{−i(π/2)M[(r−f′)/(1−r)]²} [ C{√M (1 − f′)/(1 − r)} + iS{√M (1 − f′)/(1 − r)}
              − C{√M (r − f′)/(1 − r)} − iS{√M (r − f′)/(1 − r)} ]
      + e^{i(π/2)M[(r+f′)/(1−r)]²} [ C{√M (1 + f′)/(1 − r)} − iS{√M (1 + f′)/(1 − r)}
              − C{√M (r + f′)/(1 − r)} + iS{√M (r + f′)/(1 − r)} ] },    (2.220)

where we have introduced the normalized frequency variable f′ = ω(1 − r)/2πB.
Using the asymptotic forms of the Fresnel integrals for large arguments, i.e.,
C(±∞) = ±1/2 and S(±∞) = ±1/2, we find that as the nominal time-bandwidth
product (M/2) approaches infinity, the rather cumbersome expression (2.220)
assumes the simple asymptotic form
G(ω) ∼_{M→∞} { (AT/2)√(2/M) e^{−i(π/2)M[(r−f′)/(1−r)]²},  r < f′ < 1;
               (AT/2)√(2/M) e^{i(π/2)M[(r+f′)/(1−r)]²},  −1 < f′ < −r;
               0,  |f′| > 1 and |f′| < r. }    (2.221)

From (2.221) we see that the FT of g(t) approaches the constant (AT/2)√(2/M) in magnitude
within the frequency band r < |f′| < 1 and vanishes outside this range, except
at the band edges (i.e., f′ = ±1 and ±r) where it equals one-half this constant.
Since g(t) is of finite duration it is asymptotically simultaneously bandlimited
and timelimited. Even though for any finite M the signal spectrum will not be
bandlimited this asymptotic form is actually consistent with Parseval's theorem.
For applying Parseval's formula to (2.221) we get
(1/2π) ∫_{−∞}^{∞} |G(ω)|² dω = (A²T/4B) · 2B = (A²/2) T    (2.222)

Figure 2.33: Magnitude of the FT of a linear FM pulse for different time-bandwidth
products (curves for BT = 5, 10, and 1,000; vertical axis |G(ω)|·2√(BT)/(AT),
horizontal axis f′ = ω(1 − r)/2πB)

On the other hand we recognize the last term as the asymptotic form (i.e., for
large ω 0 T ) of the total energy of a sinusoid of fixed frequency, amplitude A,
and duration T . Thus apparently if the time-bandwidth product is sufficiently
large we may approximate the energy of a constant amplitude sinusoid with
variable phase by the same simple formula (A2 /2)T. Indeed this result actually
generalizes to signals of the form A cos [φ(t)]. How large must M be for (2.221)
to afford a reasonable approximation to the signal spectrum? Actually quite
large, as is illustrated by the plots in Fig. 2.33 for BT = 5, 10, and 1, 000, where
the lower (nominal) band edge is defined by r = 0.2 and the magnitude of the
asymptotic spectrum equals unity within 0.2 < f′ < 1 and 1/2 at f′ = 0.2 and
f′ = 1.
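
Conversely, the instantaneous frequency and envelope of such a pulse can be recovered directly in the time domain from the analytic signal of (2.209)–(2.211); the sketch below (an added illustration with arbitrarily chosen parameters) does this for a sampled linear FM pulse and compares the estimate with 2at + β.

# Sketch (hedged): envelope and instantaneous frequency of a linear FM pulse
# recovered from the analytic signal, cf. (2.209)-(2.211).
import numpy as np
from scipy.signal import hilbert

T, A = 1.0, 1.0
a, beta = 200.0, 50.0                      # w_min = beta, w_max = 2*a*T + beta (rad/s)
fs = 20000.0
t = np.arange(0.0, T, 1.0/fs)
g = A * np.cos(a*t**2 + beta*t)

w = hilbert(g)                             # analytic signal
envelope = np.abs(w)                       # r(t), eq. (2.210)
psi = np.unwrap(np.angle(w))               # instantaneous phase
omega_inst = np.gradient(psi, t)           # eq. (2.211)

mid = slice(len(t)//10, -len(t)//10)       # stay away from the pulse edges
print(np.max(np.abs(omega_inst[mid] - (2*a*t[mid] + beta))))   # small compared with w_max
print(envelope[mid].mean())                                    # ~ A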

2.3.3 Bandpass Representation


The construct of an analytic signal affords a convenient tool for describing the
process of modulation of a low frequency (baseband) signal by a high frequency
carrier as well as the demodulation of a transmitted bandpass signal down to
baseband frequencies. We have already encountered a modulated signal in sim-
plified form in connection with the frequency shifting properties of the FT in
(2.154). Adopting now a more general viewpoint we take an arbitrary real sig-
nal z (t) together with its Hilbert transform ẑ (t) and a positive constant ω 0 to
define two functions x (t) and y (t) as follows:

x (t) = z (t) cos (ω0 t) + ẑ(t) sin (ω 0 t) , (2.223a)


y (t) = −z (t) sin (ω 0 t) + ẑ(t) cos (ω0 t) , (2.223b)

which are easily inverted to yield

z (t) = x (t) cos (ω 0 t) − y(t) sin (ω 0 t) (2.224a)


ẑ(t) = x (t) sin (ω 0 t) + y(t) cos (ω 0 t) . (2.224b)

Equations (2.223) and (2.224) constitute a fundamental set of relations that


are useful in describing rather general modulation and demodulation processes.
In fact the form of (2.224a) suggests an interpretation of z (t) as a signal
modulated by a carrier of frequency ω₀, a special case of which is represented by
the left side of (2.154). Comparison with (2.224a) yields x (t) = Af (t) cos(θ0 )
and y (t) = Af (t) sin(θ 0 ). We note that in this special case x (t) and y(t) are
linearly dependent which need not be true in general.
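
The demodulation/remodulation relations (2.223)–(2.224) are readily exercised numerically; the sketch below (added as an illustration, with arbitrarily chosen carrier and baseband messages) extracts the inphase and quadrature components of a sampled bandpass signal and reconstructs the signal from (2.224a).

# Sketch (hedged): quadrature demodulation per (2.223) and remodulation per (2.224a).
import numpy as np
from scipy.signal import hilbert

fs = 10000.0
t = np.arange(0.0, 1.0, 1.0/fs)
w0 = 2*np.pi*1000.0                          # carrier (rad/s)
xm = 1.0 + 0.5*np.cos(2*np.pi*15*t)          # slowly varying baseband messages
ym = 0.3*np.sin(2*np.pi*7*t)
z = xm*np.cos(w0*t) - ym*np.sin(w0*t)        # bandpass signal, eq. (2.224a)

zhat = hilbert(z).imag                       # Hilbert transform of z
x = z*np.cos(w0*t) + zhat*np.sin(w0*t)       # eq. (2.223a)
y = -z*np.sin(w0*t) + zhat*np.cos(w0*t)      # eq. (2.223b)

mid = slice(500, -500)                       # discard edge effects of the discrete Hilbert transform
print(np.max(np.abs(x[mid] - xm[mid])), np.max(np.abs(y[mid] - ym[mid])))
print(np.max(np.abs((x*np.cos(w0*t) - y*np.sin(w0*t))[mid] - z[mid])))   # reconstruction check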
Let us now suppose that the only datum at our disposal is the signal z (t)
and that the carrier frequency ω 0 is left unspecified. As far as the mathematical
representation (2.223) and (2.224) is concerned it is, of course, perfectly valid
and consistent for any choice of (real ) ω 0 . However, if x (t) and y(t) in (2.223) are
to represent baseband signals at the receiver resulting from the demodulation of
z (t) by the injection of a local oscillator with frequency ω 0 , then the bandwidth
of x (t) and y(t) (centered at ω = 0) should certainly be less than 2ω0 . A more
precise interrelation between the constraints on signal bandwidth and carrier
frequency ω0 is readily deduced from the FT of x (t) and y(t). Denoting these,
respectively, by X (ω) and Y (ω), we obtain using (2.223) and (2.207)

X (ω) = U (ω 0 − ω)Z (ω − ω 0 ) + U (ω + ω 0 )Z (ω + ω 0 ) , (2.225a)


Y (ω) = iU (ω0 − ω)Z (ω − ω0 ) − iU (ω + ω 0 )Z (ω + ω 0 ) . (2.225b)

On purely physical grounds we would expect Z (ω) to be practically zero above


some finite frequency, say ωmax . If the bandwidth of X(ω) and Y (ω) is to
be limited to |ω| ≤ ω 0 , then ωmax − ω 0 may not exceed ω 0 . This follows
directly from (2.225) or from the graphical superposition of the spectra shown
in Fig. 2.34. In other words if x (t) and y(t) are to represent baseband signals,
we must have
ω0 ≥ ω max /2. (2.226)

When this constraint is satisfied the spectrum of z (t) may in fact extend down
to zero frequency (as, e.g., in Fig. 2.31a) so that theoretically the spectra of
x (t) and y(t) are allowed to occupy the entire bandwidth |ω| ≤ ω 0 . However
in practice there will generally also be a lower limit on the band occupancy
of Z (ω), say ω min . Thus the more common situation is that of a bandpass
spectrum illustrated in Fig. 2.34 wherein the nonzero spectral energy of z (t)
occupies the band ω min < ω < ω max for positive frequencies and the band
-ωmax < ω < −ωmin for negative frequencies.

Figure 2.34: Bandpass and demodulated baseband spectra

In the case depicted ω min < ω0 < ω max and ω 0 − ω min > ωmax − ω 0 .
The synthesis of X (ω) from the two frequency-shifted sidebands follows
from (2.225a) resulting in a total band occupancy of 2 |ω 0 − ωmin |. It is
easy to see from (2.225b) that Y (ω) must occupy the same bandwidth. Ob-
serve that shifting ω0 closer to ω min until ω max − ω 0 > ω0 − ω min results in a

total band occupancy of 2 |ωmax − ω 0 | and that the smallest possible baseband
bandwidth is obtained by positioning ω 0 midway between ωmax and ω min .
The two real baseband signals x (t) and y(t) are referred to as the inphase
and quadrature signal components. It is convenient to combine them into the
single complex baseband signal
b(t) = x(t) + iy(t). (2.227)
The analytic signal w(t) = z(t) + iẑ(t) follows from a substitution of (2.224a)
and (2.224b)
w(t) = x (t) cos (ω 0 t) − y(t) sin (ω0 t)
+i[x (t) sin (ω 0 t) + y(t) cos (ω 0 t)]
= [x(t) + iy(t)] eiω0 t = b (t) eiω0 t . (2.228)
The FT of (2.228) reads
W (ω) = X (ω − ω0 ) + iY (ω − ω0 ) = B (ω − ω 0 ) (2.229)
or, solving for B (ω) ,
B (ω) = W (ω + ω0 ) = 2U (ω + ω 0 ) Z(ω + ω 0 ) = X (ω) + iY (ω) . (2.230)
In view of (2.224a) the real bandpass z(t) signal is given by the real part
of (2.228), i.e.,
z(t) = ℜe{b(t) e^{iω₀t}}.    (2.228*)
Taking the FT we get
Z(ω) = (1/2)[B(ω − ω₀) + B*(−ω − ω₀)],    (2.228**)
which reconstructs the bandpass spectrum in terms of the baseband spectrum.
As the preceding formulation indicates, given a bandpass signal z (t) , the
choice of ω 0 at the receiver effectively defines the inphase and quadrature
components. Thus a different choice of (local oscillator) frequency, say ω 1 ,
ω1 = ω 0 leads to the representation
z (t) = x1 (t) cos (ω 1 t) − y1 (t) sin (ω 1 t) , (2.229a)
ẑ (t) = x1 (t) sin (ω1 t) + y1 (t) cos (ω 1 t) , (2.229b)
wherein the x1 (t) and y1 (t) are the new inphase and quadrature components.
The relationship between x (t) , y (t) and x1 (t) and y1 (t) follows upon equat-
ing (2.229) to (2.224):
     
cos (ω0 t) − sin (ω 0 t) x (t) cos (ω1 t) − sin (ω 1 t) x1 (t)
= ,
sin (ω 0 t) cos (ω 0 t) y (t) sin (ω 1 t) cos (ω 1 t) y1 (t)
which yields
    
x (t) cos [(ω 0 − ω1 ) t] sin [(ω0 − ω 1 ) t] x1 (t)
= . (2.230)
y (t) − sin [(ω0 − ω 1 ) t] cos [(ω 0 − ω1 ) t] y1 (t)
2.3 Modulation and Analytic Signal Representation 143

The linear transformations defined by the 2 × 2 matrices in (2.224), (2.229),


and (2.230) are all orthogonal so that
2
z 2 (t) + ẑ 2 (t) = x2 (t) + y 2 (t) = x21 (t) + y12 (t) = |b (t)| = r2 (t) . (2.231)

This demonstrates directly that the analytic signal and the complex baseband
signal have the same envelope r (t) which is in fact independent of the frequency
of the reference carrier. We shall henceforth refer to r(t) as the signal enve-
lope. Unlike the signal envelope, the phase of the complex baseband signal does
depend on the carrier reference. Setting

x (t) x1 (t)
θ (t) = tan−1 , θ1 (t) = tan−1 , (2.232)
y (t) y1 (t)

we see that with a change in the reference carrier the analytic signal undergoes
the transformation

w (t) = r (t) eiθ(t) eiω0 t


= r (t) eiθ1 (t) eiω1 t (2.233)

or, equivalently, that the two phase angles transform in accordance with

θ (t) + ω 0 t = θ1 (t) + ω 1 t. (2.234)

It should be noted that in general the real and imaginary parts of a complex
baseband signal need not be related by Hilbert transforms. In fact suppose x (t)
and y (t) are two arbitrary real signals, bandlimited to |ω| < ωx and |ω| < ω y ,
respectively. Then, as may be readily verified, for any ω0 greater than ω x /2
and ω y /2 the Hilbert transform of the bandpass signal z (t) defined by (2.224a)
is given by (2.224b).

2.3.4 Bandpass Representation of Random Signals*


In the preceding discussion it was tacitly assumed that the signals are deter-
ministic. The notion of an analytic signal is equally useful when dealing with
stochastic signals. For example, if z (t) is a real wide-sense stationary stochas-
tic process, we can always append its Hilbert transform to form the complex
stochastic process
w (t) = z (t) + iẑ (t) . (2.235)
By analogy with (2.206) we shall refer to it as an analytic stochastic process. As
we shall show in the sequel its power spectrum vanishes for negative frequencies.
First we note that in accordance with (2.207) the magnitude of the transfer
function that transforms z (t) into ẑ (t) is unity. Hence the power spectrum as
well as the autocorrelation function of ẑ (t) are the same as that of z (t) , i.e.,

z (t + τ ) z (t) ≡ Rzz (τ ) = ẑ (t + τ ) ẑ (t) = Rẑẑ (τ ) . (2.236)


144 2 Fourier Series and Integrals with Applications to Signal Analysis

The cross-correlation between z (t) and its Hilbert transform is then


 

1 ∞ z τ z (t) 
Rẑz (τ ) = ẑ (t + τ ) z (t) = dτ
π −∞ t + τ − τ 
  
 
1 ∞ Rzz τ − t  1 ∞ Rzz (ξ)
=  dτ = dξ. (2.237)
π −∞ t + τ − τ π −∞ τ − ξ
The last expression states that the cross-correlation between a stationary
stochastic process and its Hilbert transform is the Hilbert transform of the
autocorrelation function of the process. In symbols
Rẑz (τ ) = R̂zz (τ ) . (2.238)
Recall that the Hilbert transform of an even function is an odd function and
conversely. Thus, since the autocorrelation function of a real stochastic process
is always even, Rẑz (τ ) is odd. Therefore we have

Rzẑ (τ ) ≡ Rẑz (−τ ) = −Rẑz (τ ) . (2.239)

The autocorrelation function of w (t) then becomes



Rww (τ ) = w (t + τ ) w (t) = 2 [Rzz (τ ) + iRẑz (τ )]
 
= 2 Rzz (τ ) + iR̂zz (τ ) . (2.240)

With
F
Rzz (τ ) ⇐⇒ Szz (ω) (2.241)
we have in view of (2.238) and (2.207)
F
Rẑz (τ ) = R̂zz (τ ) ⇐⇒ −iSzz (ω) sign (ω) . (2.242)
Denoting the spectral density of w (t) by Sww (ω) , (2.240) together with (2.241)
and (2.242) gives

4Szz (ω) ; ω > 0,
Sww (ω) = (2.243)
0; ω < 0.
so that the spectral density of the analytic complex process has only positive
frequency content. The correlation functions of the baseband (inphase) x (t) and
(quadrature) process y (t) follow from (2.223). By direct calculation we get

Rxx (τ ) = {z (t + τ ) cos [ω0 (t + τ )] + ẑ (t + τ ) sin [ω 0 (t + τ )]}


{z (t) cos (ω 0 t) + ẑ (t) sin (ω0 t)}
= Rzz (τ ) cos (ω0 τ ) + Rẑz (τ ) sin (ω 0 τ ) (2.244a)
Ryy (τ ) = {−z (t + τ ) sin [ω 0 (t + τ )] + ẑ (t + τ ) cos [ω0 (t + τ )]}
{−z (t) sin (ω0 t) + ẑ (t) cos (ω 0 t)}
= Rzz (τ ) cos (ω0 τ ) + Rẑz (τ ) sin (ω 0 τ ) = Rxx (τ ) (2.244b)
2.3 Modulation and Analytic Signal Representation 145

Rxy (τ ) = {z (t + τ ) cos [ω0 (t + τ )] + ẑ (t + τ ) sin [ω 0 (t + τ )]}


{−z (t) sin (ω0 t) + ẑ (t) cos (ω 0 t)}
= −Rẑz (τ ) cos (ω 0 τ ) + Rzz (τ ) sin (ω 0 τ )
= −R̂zz (τ ) cos (ω 0 τ ) + Rzz (τ ) sin (ω 0 τ ) . (2.244c)

Recall that for any two real stationary processes Rxy (τ ) = Ryx (−τ ). Using
this relation in (2.244c) we get
Ryx (τ ) = −Rxy (τ ). (2.245)
Also according to (2.244a) and (2.244b) the autocorrelation functions of the in-
phase and quadrature components of the stochastic baseband signal are identical
and consequently so are the corresponding power spectra. These are

Sxx (ω) = Syy (ω) =


1
[1 − sign (ω − ω 0 )] Szz (ω − ω 0 )
2
1
+ [1 + sign (ω + ω 0 )] Szz (ω + ω0 ) . (2.246)
2
From (2.244c) we note that Rxy (0) ≡ 0 but that in general Rxy (τ ) = 0 when
τ = 0. The FT of this quantity, i.e., the cross-spectrum, is
i
Sxy (ω) = [sign (ω − ω 0 ) − 1] Szz (ω − ω 0 )
2
i
+ [sign (ω + ω 0 ) + 1] Szz (ω + ω0 ) . (2.247)
2
By constructing a mental picture of the relative spectral shifts dictated by
(2.247) it is not hard to see that the cross spectrum vanishes identically (or,
equivalently, Rxy (τ ) ≡ 0) when Szz (ω) , the spectrum of the band-pass process,
is symmetric about ω 0 .
Next we compute the autocorrelation function Rbb (τ )of the complex stochas-
tic baseband process b(t) = x(t) + iy(t). Taking account of Rxx (τ ) = Ryy (τ )
and (2.245) we get
Rbb (τ ) = b(t + τ )b∗ (t) = 2[Rxx (τ ) + iRyx (τ )]. (2.248)
In view of (2.240) and (2.228) the autocorrelation function of the analytic
bandpass stochastic process is
 
Rww (τ ) = 2 Rzz (τ ) + iR̂zz (τ ) = Rbb (τ )eiω 0 τ . (2.249)

The autocorrelation function of the real bandpass process can then be repre-
sented in terms of the autocorrelation function of the complex baseband process
as follows:
1 & '
Rzz (τ ) =
e Rbb (τ )eiω0 τ . (2.250)
2
146 2 Fourier Series and Integrals with Applications to Signal Analysis

With the definition


F
Rbb (τ ) ⇐⇒ Sbb (ω)
the FT of (2.250) reads
1
Szz (ω) = [Sbb (ω + ω 0 ) + Sbb (−ω − ω 0 )] . (2.251)
4
Many sources of noise can be modeled (at least on a limited timescale) as sta-
tionary stochastic processes. The spectral distribution of such noise is usually
of interest only in a relatively narrow pass band centered about some frequency,
say ω0 . The measurement of the power spectrum within a predetermined pass
band can be accomplished by using a synchronous detector that separates the
inphase and quadrature channels, as shown in Fig. 2.35.

LPF Ax(t) / 2

z(t)
Acos(w0 t)

s(t) BPF

−Asin(w0 t)
z(t)

LPF Ay(t) / 2

Figure 2.35: Synchronous detection

The signal s (t) is first bandpass filtered to the bandwidth of interest and
then split into two separate channels each of which is heterodyned with a local
oscillator with a 90 degree relative phase shift. The inphase and quadrature
components are obtained after lowpass filtering to remove the second harmonic
contribution generated in each mixer. To determine the power spectral density
of the bandpass signal requires measurement of the auto and cross spectra of
x (t) and y (t). The power spectrum can then be computed with the aid of
(2.246) and (2.247) which give
1
Szz (ω + ω 0 ) = [Sxx (ω) − iSxy (ω)] . (2.252)
2
This procedure assumes that the process s (t) is stationary so that Sxx (ω) =
Syy (ω) . Unequal powers in the two channels would be an indication of non-
stationarity on the measurement timescale. A rather common form of nonsta-
tionarity is the presence of an additive deterministic signal within the bandpass
process.
2.3 Modulation and Analytic Signal Representation 147

A common model is a rectangular bandpass power spectral density. Assuming


that ω 0 is chosen symmetrically disposed with respect to the bandpass power
spectrum, the power spectra corresponding to the analytic and baseband
stochastic processes are shown in Fig. 2.36. In this case the baseband autocor-
relation function is purely real and equal to
sin(2πBτ )
Rbb (τ ) = N0 = 2Rxx (τ ) = 2Ryy (τ ) (2.253)
πτ

Szz

N0 / 4 N0 / 4
ω
−ω0 − 2π B −ω0 −ω0 + 2π B ω0 − 2π B ω0 ω0 + 2π B
Sww
N0

ω
ω0 − 2π B ω0 ω0 + 2π B
Sbb
N0

ω
− 2π B 2π B

Figure 2.36: Bandpass-to-baseband transformation for a symmetric power spec-


trum
while Rxy (τ ) ≡ 0. What happens when the local oscillator frequency is set
to ω = ω 1 = ω0 − Δω, i.e., off the passband center by Δω? In that case the
baseband power spectral density will be displaced by Δω and equal to

N0 ; − 2πB + Δω < ω < 2πB + Δω,
Sbb (ω) = (2.254)
0 ; otherwise.
The baseband autocorrelation function is now the complex quantity
sin(2πBτ )
Rbb (τ ) = eiτ Δω N0 . (2.255)
πτ
In view of (2.248) the auto and crosscorrelation functions of the inphase and
quadrature components are
sin(2πBτ )
Rxx (τ ) = Ryy (τ ) = cos (τ Δω) N0 , (2.256a)
2πτ
sin(2πBτ )
Ryx (τ ) = sin (τ Δω) N0 . (2.256b)
2πτ
148 2 Fourier Series and Integrals with Applications to Signal Analysis

The corresponding spectrum Sxx (ω) = Syy (ω) occupies the band |ω| ≤ 2πB +
Δω. Unlike in the symmetric case, the power spectrum is no longer flat but
exhibits two steps caused by the spectral shifts engendered by cos (τ Δω), as
shown in Fig. 2.37.

Sxx = Syy

N0 / 2

N0 / 4

ω
− 2π B − Δω −2π B + Δω 2π B − Δω 2π B +Δω

Figure 2.37: Baseband I&Q power spectra for assymmetric local oscillator fre-
quency positioning

2.4 Fourier Transforms and Analytic Function


Theory
2.4.1 Analyticity of the FT of Causal Signals
Even though both the direct and the inverse FT have been initially defined
strictly for functions of a real variables one can always formally replace t and
(or) ω by complex numbers and, as long as the resulting integrals converge,
define the signal f (t) and (or) the frequency spectrum F (ω) as functions of a
complex variable. Those unfamiliar with complex variable theory should consult
the Appendix, and in particular A.4.
Let us examine the analytic properties of the FT in the complex domain of
a causal signal. To this end we replace ω by the complex variable z = ω + iδ
and write  ∞
F (z) = f (t) e−izt dt, (2.257)
0
wherein F (ω) is F (z) evaluated on the axis of reals. Furthermore let us as-
sume that
 ∞
|f (t)| dt < ∞. (2.258)
0
To put the last statement into the context of a physical requirement let us
suppose that the signal f (t) is the impulse response of a linear time-invariant
system. In that case, as will be shown in 3.1.4, absolute integrability in the sense
of (2.258) is a requirement for system stability. Using (2.257) we obtain in view
of (2.258) for all Im z = δ ≤ 0 the bound
 ∞  ∞

−izt
f (t) e dt ≤ |f (t)| eδt dt < ∞. (2.259)
0 0
2.4 Fourier Transforms and Analytic Function Theory 149

From this follows (see Appendix) that F (z) is an analytic function of the com-
plex variable z in the closed lower half of the complex z plane, i.e., Im z ≤ 0.
Moreover, for Im z ≤ 0,
lim F (z) → 0 (2.260)
|z|→∞

as we see directly from (2.259) by letting δ approach −∞. In other words,


the FT of the impulse response of a causal linear time-invariant system is an
analytic function of the complex frequency variable z in the closed lower half
plane. This feature is of fundamental importance in the design and analysis of
frequency selective devices (filters).

2.4.2 Hilbert Transforms and Analytic Functions


A direct consequence of the analyticity of F (z) is that the real and imaginary
parts of F (ω) may not be specified independently. In fact we have already
established in 2.2.6 that for a causal signal they are linearly related through the
Hilbert transform. The properties of analytic functions afford an alternative
derivation. For this purpose consider the contour integral
4
F (z)
IR (ω0 ) = dz, (2.261)
z − ω0
ΓR

−R ω0 − ε ω0 + ε R
ω
θR θε

CR

Figure 2.38: Integration contour ΓR for the derivation of the Hilbert transforms

wherein ω 0 is real, taken in the clockwise direction along the closed path ΓR as
shown in Fig. 2.38. We note that ΓR is comprised of the two linear segments
(−R, ω0 − ε), (ω 0 + ε, R) along the axis of reals, the semicircular contour cε of
radius ε with the circle centered at ω = ω0 , and the semicircular contour CR
150 2 Fourier Series and Integrals with Applications to Signal Analysis

of radius R in the lower half plane with the circle centered at ω = 0. Since
the integrand in (2.261) is analytic within ΓR , we have IR (ω 0 ) ≡ 0, so that
integrating along each of the path-segments indicated in Fig. 2.38 and adding
the results in the limit as ε → 0 and R → ∞, we obtain
  R 
ω 0 −ε
F (ω) F (ω)
0 = lim dω + dω
ε→0, R→∞ −R ω − ω0 ω 0 +ε ω − ω 0
 
F (z) F (z)
+ lim dz + lim dz. (2.262)
ε→0 z − ω0 R→∞ z − ω0
cε CR

On CR we set z = ReiθR so that dz = iReiθR dθ R and we have


!
  −π
F (z) F ReiθR
dz = Rdθ
z − ω0 ReiθR − ω 0
R

0
CR

so that in view of (2.260) in the limit of large R the last integral in (2.262) tends
to zero. On cε we set z − ω 0 = εeiθε and substituting into the third integral in
(2.262) evaluate it as follows:
  0
F (z) !
lim dz = lim F ω 0 + εeiθε idθ = iπF (ω0 ) .
ε→0 z − ω0 ε→0 −π

Now the limiting form of the first two integrals in (2.262) are recognized as the
definition a CPV integral so that collecting our results we have
 ∞
F (ω)
0=P dω + iπF (ω 0 ). (2.263)
−∞ ω − ω0
By writing F (ω) = R(ω) + iX(ω) and similarly for F (ω 0 ), substituting
in (2.263), and setting the real and the imaginary parts to zero we obtain
 ∞
1 R (ω)
X(ω0 ) = P dω, (2.264a)
π −∞ ω − ω0
 ∞
1 R (ω)
R(ω0 ) = − P dω, (2.264b)
π −∞ ω − ω0

which, apart from a different labeling of the variables, are the Hilbert Transforms
in (2.173a) and (2.173b). Because the real and imaginary parts of the FT
evaluated on the real frequency axis are not independent it should be possible
to determine the analytic function F (z) either from R(ω) of from X (ω) . To
obtain such formulas let z0 be a point in the lower half plane (i.e., Im z0 < 0)
and apply the Cauchy integral formula
4
1 F (z)
F (z0 ) = − dz (2.265)
2πi z − z0
Γ̂R
2.4 Fourier Transforms and Analytic Function Theory 151

−R R ω

z0

CR

Figure 2.39: Integration contour for the evaluation of Eq. (2.265)

taken in the counterclockwise direction over the contourΓ̂R as shown in Fig. 2.39
and comprised of the line segment (−R, R) and the semicircular contour CR of
radius R. Again because of (2.260) the contribution over CR vanishes as R is
allowed to approach infinity so that (2.265) may be replaced by
 ∞
1 F (ω)
F (z0 ) = − dω
2πi −∞ ω − z0
 ∞  ∞
1 R (ω) 1 X (ω)
= − dω − dω. (2.266)
2πi −∞ ω − z0 2π −∞ ω − z0

In the last integral we now substitute for X (ω) its Hilbert Transform from
(2.264a) to obtain
 ∞  ∞  ∞
1 X (ω) 1 R (η)
− dω = − 2 dωP dη
2π −∞ ω − z0 2π −∞ −∞ (ω − z 0 ) (η − ω)
 ∞  ∞
1 dω
= R (η) dηP .
2π2 −∞ −∞ (ω − z 0 ) (ω − η)
(2.267)
The last CPV integral over ω is evaluated using the calculus of residues as
follows:
 ∞ 4
dω dz 1
P = − iπ , (2.268)
−∞ (ω − z 0 ) (ω − η) (z − z 0 ) (z − η) η − z0
ΓR

where ΓR is the closed contour in Fig. 2.38 and where the location of the simple
pole at ω 0 is now designated by η. The contour integral in (2.268) is performed
in the clockwise direction and the term −iπ/ (η − z0 ) is the negative of the
contribution from the integration over the semicircular contour cε . The only
152 2 Fourier Series and Integrals with Applications to Signal Analysis

contribution to the contour integral arises from the simple pole at z = z0 which
equals −i2π/ (z0 − η) resulting in a net contribution in (2.268) of iπ/ (η − z0 ) .
Substituting this into (2.267) and then into (2.266) gives the final result

i ∞ R (η)
F (z) = dη, (2.269)
π −∞ η − z

where we have replaced the dummy variable ω by η and z0 by z ≡ ω + iδ.


Unlike (2.257), the integral (2.269) defines the analytic function F (z) only in
the open lower half plane, i.e., for Im z < 0. On the other hand, one would
expect that in the limit as δ → 0, F (z) → F (ω) . Let us show that this limit is
actually approached by the real part. Thus using (2.269) we get with z = ω + iδ
 ∞
−δ
Re F (z) = R(ω, δ) = R (η)   dη. (2.270)
2
−∞ π (η − ω) + δ 2

The factor multiplying R (η) in integrand will be recognized as the delta function
kernel in (1.250) so that lim R(ω, δ) as −δ → 0 is in fact R (ω) .

2.4.3 Relationships Between Amplitude and Phase


We again suppose that F (ω) is the FT of a causal signal. Presently we write it
in terms of its amplitude and phase

F (ω) = A(ω)eiθ(ω) (2.271)

and set
A(ω) = eα(ω) . (2.272)
Taking logarithms we have

ln F (ω) = α (ω) + iθ (ω) . (2.273)

Based on the results of the preceding subsection it appears that if ln F (ω) can
be represented as an analytic function in the lower half plane one should be
able to employ Hilbert Transforms to relate the phase to the log amplitude of
the signal FT. From the nature of the logarithmic function we see that this is
not possible for an arbitrary FT of a causal signal but only for signals whose
FT, when continued analytically into the complex z-domain via formula (2.257)
or (2.269), has no zeros in the lower half of the z-plane. Such transforms are
said to be of the minimum-phaseshift type. If f (t) is real so that A(ω) and
θ (ω) is, respectively, an even and an odd function of ω, we can express θ (ω) in
terms of α (ω) using contour integration, provided the FT decays at infinity in
accordance with  
−k
|F (ω)| ∼ O |ω| for some k > 0. (2.274)
ω→∞
2.4 Fourier Transforms and Analytic Function Theory 153

− R −ω0 −ε − ω0 +ε ω0 −ε ω0 + ε R ω
• •
−ω0 ω0

cε− cε+
R

CR

Figure 2.40: Integration contour for relating amplitude to phase

For this purpose we consider the integral


4
ln F (z)
IR = dz (2.275)
ω 20 − z 2
ΓR

taken in the clockwise direction over the closed contour ΓR comprised of the
three linear segments (−R, −ω0 − ε) , (−ω0 + ε, ω 0 − ε) ,(ω 0 + ε, R) , the two
semicircular arcs cε− and cε+ each with radius ε, and the semicircular arc CR
with radius R, as shown in Fig. 2.40. By assumption F (z) is free of zeros within
the closed contour so that IR ≡ 0. In the limit as R → ∞ and ε → 0 the integral
over the line segments approaches a CPV integral while the integrals cε and c+ ε
each approach iπ times the residue at the respective poles. The net result can
then be written as follows:
 ∞
ln F (ω) ln F (−ω 0 ) ln F (ω 0 )
0 = P 2 2
dω + iπ + iπ
−∞ ω 0 − ω 2ω0 −2ω0
4
ln F (z)
+ lim dz. (2.276)
R→∞ ω20 − z 2
CR

In view of (2.274) for sufficiently large R the last integral may be bounded as
follows:

4  π
ln F (z) k ln R
dz ≤ constant ×
ω2 − z 2 |ω 2 − R2 ei2θ | Rdθ. (2.277)
0 0 0
CR

Since ln R < R for R > 1, the last integral approaches zero as R → ∞ so that
the contribution from CR in (2.276) vanishes. Substituting from (2.273) into
154 2 Fourier Series and Integrals with Applications to Signal Analysis

the first three terms on the right of (2.276) and taking account of the fact that
α (ω) is even while θ (ω) is odd, one obtains
 ∞
α (ω) + iθ (ω) α (ω0 ) − iθ (ω 0 ) α (ω 0 ) + iθ (ω 0 )
0=P 2 − ω2 dω + iπ + iπ
−∞ ω 0 2ω 0 −2ω0

Observe that the terms on the right involving α (ω0 ) cancel while the integration
involving θ (ω) vanishes identically. As a result we can solve for θ (ω 0 ) with the
result  ∞
2ω0 α (ω)
θ (ω 0 ) = P 2 − ω2
dω. (2.278)
π 0 ω 0
Proceeding similarly with the aid of the contour integral
4
ln F (z)
IR = dz (2.279)
z (ω 20 − z 2 )
ΓR

one obtains the formula


 ∞
2ω2 θ (ω)
α(ω 0 ) = α(0) − 0 P dω. (2.280)
π 0 ω (ω 2 − ω20 )

It is worth noting that the assumed rate of decay at infinity in (2.274) is


crucial to the vanishing of the contribution over the semicircular contour CR in
Fig. 2.40 and hence the validity of (2.278). Indeed if the decay of the FT is too
rapid the contribution from CR will not vanish and can in fact diverge as, e.g.,
for A(ω) = exp(−ω 2 ). Note that in this case (2.278) also diverges. This means
that for an arbitrary A (ω) one cannot find a θ(ω) such that A (ω) exp −iθ(ω)
has a causal inverse, i.e., an f (t) that vanishes for negative t. What properties
must A (ω) possess for this to be possible? An answer can be given if A (ω)
is square integrable over (−∞, ∞). In that case the necessary and sufficient
condition for a θ(ω) to exist is the convergence of the integral
 ∞
|ln A(ω)|
2
dω < ∞,
−∞ 1 + ω

which is termed the Paley–Wiener condition [15]. Note that it precludes A(ω)
from being identically zero over any finite segment of the frequency axis.

2.4.4 Evaluation of Inverse FT Using Complex Variable


Theory
The theory of functions of a complex variable provides a convenient tool for
the evaluation of inverse Fourier transforms. The evaluation is particularly
straightforward when the FT is a rational function. For example, let us evaluate
 ∞
1 eiωt dω
f (t) = . (2.281)
2π −∞ ω2 + iω + 2
2.4 Fourier Transforms and Analytic Function Theory 155

Ámw

•i

Âew

• −2i

Figure 2.41: Deformation of integration path within the strip of analyticity

The only singularities of F (ω) = 1/(ω2 + iω + 2) in the complex ω plane are


poles corresponding to the two simple zeros of ω 2 + iω + 2 = (ω − i)(ω + 2i) = 0,
namely ω 1 = i and ω 2 = −2i. Therefore the integration path in (2.281) may
be deformed away from the real axis into any path P lying within the strip of
analyticity bounded by −2 < Im ω < 1, as depicted in Fig. 2.41. The exponential
multiplying F (ω) decays for t > 0 in the upper half plane (Im ω > 0) and for
t < 0 in the lower half plane (Im ω < 0) . For t > 0 we form the contour integral
4

IR = eiωt F (ω) (2.282)

taken in the counterclockwise direction over the closed path formed by the linear
segment (−R, R) along P and the circular contour CR+ lying in the upper half
plane, as shown in Fig. 2.42. The residue evaluation at the simple pole at ω = i
gives IR = e−t /3. As R is allowed to approach infinity the integral over the
linear segment becomes just f (t). Therefore
4

e−t /3 = f (t) + lim eiωt F (ω) .
R→∞ 2π
C R+

Since F (ω) → 0 as ω → ∞, and the exponential decays on CR+ Jordan lemma


(see Appendix A) applies so that in the limit the integral over CR+ vanishes
and we obtain f (t) = e−t /3 ; t > 0. When t < 0 the contour integral (2.282)
is evaluated in the clockwise direction over the closed path in Fig. 2.43 with a
circular path CR− in the lower half plane. The residue evaluation at the simple
pole at ω = −2i now gives IR = e2t /3 so that
4

e2t /3 = f (t) + lim eiωt F (ω) .
R→∞ 2π
C R−
156 2 Fourier Series and Integrals with Applications to Signal Analysis

Ámw

CR+

•i
Âew
−R R

•−2i

Figure 2.42: Integration contour for t > 0

Ámw

·i
-R R Âew

·- 2i

CR-

Figure 2.43: Integration contour for t < 0

Since now the exponential decays in the lower half plane, Jordan’s lemma again
guarantees that the limit of the integral over CR− vanishes. Thus the final result
reads −t
e /3 ; t ≥ 0,
f (t) = (2.283)
e2t /3 ; t ≤ 0.
This procedure is readily generalized to arbitrary rational functions. Thus sup-
pose F (ω) = N (ω) /D(ω) with N (ω) and D (ω) polynomials in ω. We shall
assume that2 degree N (ω) < degree D (ω) so that F (ω) vanishes at infinity,

2 If N and D are of the same degree, then the FT contains a delta function which can be

identified by long division to obtain N/D =constant+


  N̂ /D, with degree N̂ <degree D. The
inverse FT then equals constant× δ (t) + F−1 N̂ /D .
2.4 Fourier Transforms and Analytic Function Theory 157

as required by the Jordan lemma. If D (ω) has no real zeros, then proceeding
as in the preceding example we find that the inverse FT is given by the residue
sums ⎧  
N (ω) iωt
⎨ i
res e ; t ≥ 0,
f (t) =
k;Im ω k >0
D(ω) ω=ωk (2.284)
⎩ −i
res N (ω) iωt
; t ≤ 0.
k;Im ω <0 k D(ω) e ω=ω k
2
For example, suppose F (ω) = i/(ω + 2i) (ω − i) which function has a double
pole at ω = −2i and a simple pole at ω = i. For t ≥ 0 the contribution comes
from the simple pole in the upper half plane and we get

ie−t e−t
f (t) = i = ; t ≥ 0.
(i + 2i)2 9

For t ≤ 0 the double pole in the lower half plane contributes. Hence

d eiωt (ω − i) iteiωt − eiωt


f (t) = −i i |ω=−2i = 2 |ω=−2i
dω (ω − i) (ω − i)
1 − 3t −2t
= e ; t ≤ 0.
9

The case of D (ω) having real roots requires special consideration. First, if
the order of any one of the zeros is greater than 1, the inverse FT does not
exist.3 On the other hand, as will be shown in the sequel, if the zeros are
simple the inverse FT can computed by suitably modifying the residue formu-
las (2.284). Before discussing the general case we illustrate the procedure by
a specific example. For this purpose consider the time function given by the
inversion formula
 ∞
1 eiωt
f (t) = P 2 2
dω, (2.285)
2π −∞ (ω − 4)(ω + 1)

where F (ω) = 1/ (ω 2 − 4)(ω 2 + 1) has two simple zeros at ω = ±i and two


at ω = ±2 with the latter forcing a CPV interpretation of the integral. Before
complementing (2.285) with a suitable contour integral it may be instructive to
make the CPV form of (2.285) explicit. Thus

f (t) = lim IR,ε (2.286)


ε→0,R→∞

with
(   )
−2−ε 2−ε R
1 eiωt
IR,ε = + + dω. (2.287)
2π −R −2+ε 2+ε (ω 2 − 4)(ω 2 + 1)

3 The corresponding time functions are unbounded at infinity and are best handled using

Laplace transforms.
158 2 Fourier Series and Integrals with Applications to Signal Analysis

To evaluate (2.286) by residues we define a contour integral


4

IˆR,ε = eiωt F (ω) (2.288)

Γ

over a closed path Γ that includes IR,ε as a partial contribution. For t > 0
the contour Γ is closed with the semicircle of radius R and includes the two
semicircles cε+ and cε− of radius ε centered, respectively, at ω = 2 and ω = −2,
as shown in Fig. 2.44.

Ám w

CR+

·i
e e Âew
· ·
-R - 2 -e -2+e 2-e 2+e R
·-i

Figure 2.44: Integration contour for CPV integral

Writing (2.287) out in terms of its individual contributors we have


  
ˆ iωt dω iωt dω dω
IR,ε = IR,ε + e F (ω) + e F (ω) + eiωt F (ω) . (2.289)
2π 2π 2π
c ε− c ε+ C R+

Taking account of the residue contribution at ω = i we get for the integral over
the closed path
eiωt e−t
IˆR,ε = i 2 |ω=i = − .
(ω − 4)(2ω) 10
As ε → 0 the integrals over cε− and cε− each contribute −2πi times one-half
the residue at the respective simple pole (see Appendix A) and a R → ∞ the
integral over CR+ vanishes by the Jordan lemma. Thus taking the limits and
summing all the contributions in (2.289) we get
 
e−t 1 eiωt 1 eiωt
− = f (t) − i |ω=−2 + |ω=2
10 2 (2ω)(ω 2 + 1) 2 (2ω)(ω 2 + 1)
1
= f (t) + sin(2t)
20
and solving for f (t),
1 e−t
f (t) = − sin(2t) − ; t > 0. (2.290)
20 10
2.5 Time-Frequency Analysis 159

For t < 0 we close the integration path (−R, −2 − ε) + cε− + (−2 + ε, 2 − ε) +


(2 + ε, R) in Fig. 2.44 with a semicircular path in the lower half plane and carry
out the integration in the clockwise direction. Now Γ encloses in addition to the
pole at the pole at ω = −i, the two poles at ω = ±2. Hence
 
eiωt eiωt eiωt
IR,ε = −i 2
ˆ |ω=−i − i |ω=−2 + |ω=2
(ω − 4)(2ω) (2ω)(ω 2 + 1) (2ω)(ω 2 + 1)
e−t 1
= − + sin(2t).
10 10
Summing the contributions as in (2.289) and taking limits we get

et 1 1
− + sin(2t) = f (t) − sin(2t).
10 10 10
Solving for f (t) and combining with (2.290) we have for the final result

1 e−|t|
f (t) = − sin(2t) sign(t) − . (2.291)
20 10
Note that we could also have used an integration contour with the semicircles
cε− and cε+ in the lower half plane. In that case we would have picked up the
residue at ω = ±2 for t > 0.
Based on the preceding example it is not hard to guess how to generalize
(2.284) when D(ω) has simple zeros for real ω. Clearly for every real zero at
(ω) iωt
ω = ωk we have to add the contribution sign(t) (i/2) res N D(ω) e |ω=ωk .
Hence we need to replace (2.284) by
  
N (ω) iωt
f (t) = (i/2) sign(t) res e |ω=ωk
D (ω)
k;Im ω k =0
⎧  
N (ω) iωt
⎨ i
res e ; t ≥ 0,
+
k;Im ωk >0
D(ω) ω=ωk (2.292)
⎩ −i
res N (ω) iωt
; t ≤ 0.
k;Im ω <0
k D(ω) e
ω=ω k

For example, for F (ω) = iω/(ω20 − ω2 ), the preceding formula yields f (t) =
1
2 sign(t) cos ω 0 t and setting ω 0 = 0 we find that the FT of sign(t) is 2/iω, in
agreement with our previous result.

2.5 Time-Frequency Analysis


2.5.1 The Uncertainty Principle
A common feature shared by simple idealized signals such as rectangular, tri-
angular, or Gaussian pulses is the inverse scaling relationship between signal
duration and its bandwidth. Qualitatively a relationship of this sort actually
holds for a large class of signals but its quantitative formulation ultimately
160 2 Fourier Series and Integrals with Applications to Signal Analysis

depends on the nature of the signal as well as on the definition of signal


duration and bandwidth. A useful definition which also plays a prominent role
not only in signal analysis but also in other areas where Fourier transforms are
part of the basic theoretical framework is the so-called rms signal duration σ t ,
defined by 
1 ∞
σ 2t = (t− < t >)2 |f (t)|2 dt, (2.293)
E −∞
where  ∞
1 2
< t >= t |f (t)| dt (2.294)
E −∞

and  ∞
2
E= |f (t)| dt (2.295)
−∞

are the signal energies. We can accept this as a plausible measure of signal
duration if we recall that σ 2t corresponds algebraically to the variance of a ran-
2
dom variable with probability density |f (t)| /E wherein the statistical mean
has been replaced by < t >. This quantity we may term “the average time of
signal occurrence”.4 Although definition (2.295) holds formally for any signal
(provided, of course, that the integral converges), it is most meaningful, just
like the corresponding concept of statistical average in probability theory, when
the magnitude of the signal is unimodal. For example, using these parameters
a real Gaussian pulse takes the form

E (t− < t >)2
f (t) = exp − . (2.296)
(2πσ 2 )
1/4 4σ 2t
t

To get an idea how the signal spectrum F (ω) affect the rms signal duration we
first change the variables of integration in (2.293) from t to t = t− < t > and
write it in the following alternative form:

1 ∞ 2 2
σ 2t = t |f (t + < t >)| dt . (2.297)
E −∞

Using the identities F {−itf (t)} = dF (ω)/dω and F {f (t+ < t >)} =
F (ω) exp iω < t > we apply Parseval’s theorem to (2.297) to obtain

1 d [F (ω) exp iω < t >] 2

σ 2t = dω
2πE dω
−∞
 ∞ 2
1 dF (ω)
= + i < t > F (ω) dω. (2.298)

2πE −∞ dω

This shows that the rms signal duration is a measure of the integrated fluctu-
ations of the amplitude and phase of the signal spectrum. We can also express
4 For a fuller discussion of this viewpoint see Chap. 3 in Leon Cohen, “Time-Frequency

Analysis,” Prentice Hall PTR, Englewood Cliffs, New Jersey (1995).


2.5 Time-Frequency Analysis 161

the average time of signal occurrence < t > in terms of the signal spectrum
by first rewriting the integrand in (2.294) as the product tf (t)f (t)∗ and using
F {tf (t)} = idF (ω)/dω together with Parseval’s theorem. This yields
 ∞
1 dF (ω) ∗
< t >= i F (ω) dω.
2πE −∞ dω

With F (ω) = A (ω) eiθ(ω) the preceding becomes


 ∞
1    2
< t >= −θ (ω) |F (ω)| dω, (2.299)
2πE −∞
where θ (ω) = dθ (ω) /dω. In 2.6.1 we shall identify the quantity −θ (ω) as the
signal group delay. Equation then (2.299) states that the group delay, when
2
averaged with the “density” function |F (ω)| /2πE, is identical to the average
time of signal occurrence.
We now apply the preceding definitions of spread and average location in
the frequency domain. Thus the rms bandwidth σ ω will be defined by
 ∞
1 2 2
σ 2ω = (ω− < ω >) |F (ω)| dω, (2.300)
2πE −∞
where  ∞
1 2
< ω >= ω |F (ω)| dω. (2.301)
2πE −∞
We can view < ω > as the center of mass of the amplitude of the frequency
spectrum. Clearly for real signals < ω >≡ 0. By analogy with (2.297) we change
the variable of integration in (2.300) to ω  = ω− < ω > and rewrite it as follows:
 ∞
2 1 2
σω = ω2 |F (ω  + < ω >)| dω (2.302)
2πE −∞
F {df (t)/dt} = iωF (ω) and Parseval’s theorem obtain the dual to (2.295), viz.,
 2
2 1 ∞ d [f (t) exp −i < ω > t]
σω = dt
E −∞ dt
 2
1 ∞ df (t)
= − i < ω > f (t) dt. (2.303)
E −∞ dt
Thus the rms bandwidth increases in proportion to the norm of the rate of
change of the signal. In other words, the more rapid the variation of the signal
in a given time interval the greater the frequency band occupancy. This is
certainly compatible with the intuitive notion of frequency as a measure of the
number of zero crossings per unit time as exemplified, for instance, by signals
of the form cos [ϕ (t)] .
Again using F {df (t)/dt} = iωF (ω) and Parseval’s theorem we trans-
form (2.301) into 
1 ∞ df (t) ∗
< ω >= −i f (t) dt.
E −∞ dt
162 2 Fourier Series and Integrals with Applications to Signal Analysis

If f (t) = r (t) exp iψ (t) is an analytic signal, the preceding yields



1 ∞ 
< ω >= ψ (t) |f (t)|2 dt, (2.304)
E −∞

where ψ  (t) = dψ (t) /dt is the instantaneous frequency. This equation provides
another interpretation of < ω >, viz., as the average instantaneous frequency
2
with respect to the density |f (t)| /E, a result which may be considered a sort
of dual to (2.299).
The rms signal duration and rms bandwidth obey a fundamental inequality,
known as the uncertainty relationship, which we now proceed to derive. For
this purpose let us apply the Schwarz inequality to the following two functions:
(t− < t >) f (t) and df (t) /dt − i < ω > f (t) . Thus
 ∞  ∞ 2
df (t)
2 2
(t− < t >) |f (t)| dt
dt − i < ω > f (t) dt
−∞ −∞
 ∞   2
df (t)
≥ (t− < t >) f ∗ (t) − i < ω > f (t) dt . (2.305)
−∞ dt
Substituting for the first two integrals in (2.305) the σ 2t and σ 2ω from (2.297)
and (2.303), respectively, the preceding becomes
 ∞   2
df (t)
σ 2t σ 2ω E 2 ≥ ∗
(t− < t >) f (t) − i < ω > f (t) dt
dt
−∞
 ∞ 2
df (t)
= (t− < t >) f ∗
(t) dt
dt , (2.306)
−∞
,∞ 2
where in view of (2.294) we have set −∞ (t− < t >) |f (t)| dt = 0. We now
integrate the last integral by parts as follows:
 ∞
df (t)
(t− < t >) f ∗ (t) dt
−∞ dt
 ∞
2 d [(t− < t >) f ∗ (t)]
= (t− < t >) |f (t)| ∞ −∞ − f (t) dt
−∞ dt
 ∞
2 df ∗ (t)
= (t− < t >) |f (t)| ∞ −∞ − E − (t− < t >) f (t) dt. (2.307)
−∞ dt

Because f (t) has finite
energy it must decay at infinity faster than 1/ t so that
(t− < t >) |f (t)| ∞
2
−∞ = 0. Therefore after transposing the last term in (2.307)
to the left of the equality sign we can rewrite (2.307) as follows:
 ∞ 2
∗ df (t)
Re (t− < t >) f (t) dt = −E/2. (2.308)
−∞ dt
2.5 Time-Frequency Analysis 163

Since the magnitude of a complex number is always grater or equal to the mag-
nitude of its real part the right side of (2.306) equals at least E 2 /4. Cancelling
of E 2 and taking the square root of both sides result in
1
σt σω ≥ , (2.309)
2
which is the promised uncertainty relation. Basically it states that simultaneous
localization of a signal in time and frequency is not achievable to within arbi-
trary precision: the shorter the duration of the signal the greater its spectral
occupancy and conversely. We note that except for a constant factor on the
right (viz., Planck’s constant ), (2.309) is identical to the Heisenberg uncer-
tainty principle in quantum mechanics where t and ω stand for any two canoni-
cally conjugate variables (e.g., particle position and particle momentum). When
does (2.309) hold with equality? The answer comes from the Schwarz inequal-
ity (2.305) wherein equality can be achieved if and only if (t− < t >) f (t) and
df (t)
dt − i < ω > f (t) are proportional. Calling this proportionality constant −α
results in the differential equation
df (t)
− i < ω > f (t) + α (t− < t >) f (t) = 0. (2.310)
dt
This is easily solved for f (t) with the result
5 α α 6
2
f (t) = A exp − (t− < t >) + < t >2 +i < ω > t , (2.311)
2 2
where A is a proportionality constant. Thus the optimum signal from the stand-
point of simultaneous localization in time and frequency has the form of a Gaus-
sian function. Taking account of the normalization (2.295) we obtain after a
simple calculation
& '
α = 1/2σ2t , A = E/2πσ 2t exp − < t >2 /2σ2t . (2.312)

2.5.2 The Short-Time Fourier Transform


Classical Fourier analysis draws a sharp distinction between the time and fre-
quency domain representations of a signal. Recall that the FT of a signal of
duration T can be computed only after the signal has been observed in its en-
tirety. The computed spectrum furnishes the relative amplitude concentrations
within the frequency band and the relative phases but information as to the
times at which the particular frequency components have been added to the
spectrum is not provided. Asking for such information is of course not always
sensible particularly in cases of simple and essentially single scale signals such
as isolated pulses. On the other hand for signals of long duration possessing
complex structures such as speech, music, or time series of environmental pa-
rameters the association of particular spectral features with the times of their
generation not only is meaningful but in fact also constitutes an essential step
164 2 Fourier Series and Integrals with Applications to Signal Analysis

in data analysis. A possible approach to the frequency/time localization prob-


lem is to multiply f (t), the signal to be analyzed, by a sliding window function
g (t − τ ) and take the FT of the product. Thus
 ∞
S (ω, τ ) = f (t)g (t − τ ) e−iωt dt (2.313)
−∞
whence in accordance with the FT inversion formula
 ∞
1
f (t)g (t − τ ) = S (ω, τ ) eiωt dω. (2.314)
2π −∞
We can obtain an explicit formula for determining f (t) from S (ω, τ ) by requiring
that the window function satisfies
 ∞
2
|g (t − τ )| dτ = 1 (2.315)
−∞

for all t. For if we now multiply both sides of (2.314) by g ∗ (t − τ ) and integrate
with respect to τ we obtain
 ∞ ∞
1
f (t) = S (ω, τ ) g ∗ (t − τ ) eiωt dωdτ . (2.316)
2π −∞ −∞
The two-dimensional function S (ω, τ ) is referred to as the short-time
Fourier transform5 (STFT) of f (t) and (2.316) the corresponding inversion
formula. The STFT can be represented graphically in various ways. The most
common is the spectrogram, which is a two-dimensional plot of the magnitude
of S (ω, τ ) in the τ ω plane. Such representations are commonly used as an aid
in the analysis of speech and other complex signals.
Clearly the characteristics of the STFT will depend not only on the signal
but also on the choice of the window. In as much as the entire motivation
for the construction of the STFT arises from a desire to provide simultaneous
localization in frequency and time it is natural to choose for the window function
the Gaussian function since, as shown in the preceding, it affords the optimum
localization properties. This choice was originally made by Gabor [6] and the
STFT with a Gaussian window is referred to as the Gabor transform. Here we
adopt the following parameterization:
21/4 πt2
g (t) = √ e− s2 . (2.317)
s

Reference to (2.311) and (2.312) shows that σ t = s/ (2 π) . Using (2.142*) we
have for the FT √ 2 2
G (ω) = 21/4 se−s ω /4π (2.318)

from which we obtain σ ω = π/s so that σ t σ ω = 1/2, as expected. !
As an example, let us compute the Gabor transform of exp αt2 /2 . We
obtain
 2πτ
!2
√ 1/4 π s − iωs πτ 2
S (ω, τ ) / s = 2 2
exp − 2
− 2 . (2.319)
iαs /2 − π 4 (iαs /2 − π) s
5 Also referred to as the sliding-window Fourier transform
2.5 Time-Frequency Analysis 165

& '
Figure 2.45: Magnitude of Gabor Transform of exp i 21 αt2


A relief map of the magnitude of S (ω, τ ) / s (spectrogram) as a function of the
nondimensional variables τ /s (delay) and ωs (frequency) is shown in Fig. 2.45.
In this plot the dimensionless parameter (1/2) αs2 equals 1/2. The map
shows a single ridge corresponding to a straight line ω = ατ corresponding
to the instantaneous frequency at time τ . As expected only positive frequency
components are picked up by the & transform.
' On the other hand, if instead
we transform the real signal cos 12 αt2 , we get a plot as in Fig. 2.46. Since
the cosine contains exponentials of both signs the relief map shows a second
ridge running along the line ω = −ατ corresponding to negative instantaneous
frequencies.
As a final example consider the signal plotted in Fig. 2.47. Even though
this signal looks very much like a slightly corrupted sinusoid, it is actually
comprised of a substantial band of frequencies with a rich spectral structure.
This can be seen from Fig. 2.48 which shows a plot of the squared magnitude of
the FT. From this spectral plot we can estimate the total signal energy and the
relative contributions of the constitutive spectral components that make up the
total signal but not their positions in the time domain. This information can
be inferred from the Gabor spectrogram whose contour map is represented in
Fig. 2.49. This spectrogram shows us that the spectral energy of the signal is in
fact confined to a narrow sinuous band in the time-frequency plane. The width of
this band is governed by the resolution properties of the sliding Gaussian window
(5 sec. widths in this example) and its centroid traces out approximately the
locus of the instantaneous frequency in the time-frequency plane.
166 2 Fourier Series and Integrals with Applications to Signal Analysis

&1 2
'
Figure 2.46: Magnitude of Gabor Transform of cos 2 αt

1.5

0.5
f(t)

-0.5

-1

-1.5

-2
0 10 20 30 40 50 60 70 80 90 100
Time (sec)

Figure 2.47: Constant amplitude signal comprising multiple frequencies


2.5 Time-Frequency Analysis 167

12000

10000
Signal Energy/Hz

8000

6000

4000

2000

0
0 0.5 1 1.5 2 2.5 3
Frequency (Hz)

Figure 2.48: Squared magnitude of the FT of the signal in Fig. 2.47

Contours of Squared Magnitude of Gabor Transform


250

200

0.4
150
1.6
Hz*100

0.8 0.2

1.4 0.2
100 0.6
1.61.2

0.4

50 1

1.2

100 200 300 400 500 600 700 800 900 1000
sec*10.24

Figure 2.49: Contour map of the Gabor Transform of the signal in Fig. 2.48
168 2 Fourier Series and Integrals with Applications to Signal Analysis

2.6 Frequency Dispersion


2.6.1 Phase and Group Delay
In many physical transmission media the dominant effect on the transmitted
signal is the distortion caused by unequal time delays experienced by different
frequency components. In the frequency domain one can characterize such a
transmission medium by the transfer function e−iψ(ω) where ψ(ω) is real. The
FT F (ω) of the input signal f (t) is then transformed into the FT Y (ω) of the
output signal y(t) in accordance with
Y (ω) = e−iψ(ω) F (ω) . (2.320)
The time domain representation of the output then reads
 ∞
1
y(t) = eiωt e−iψ(ω) F (ω) dω (2.321)
2π −∞
so that by Parseval’s theorem the total energy of the output signal is identical
to that of the input signal. However its spectral components are in general
delayed by different amounts so that in the time domain the output appears
as a distorted version of the input. The exceptional case arises whenever the
transfer phase ψ(ω) is proportional to frequency for then with ψ(ω) = ωT the
output is merely a time delayed version of the input:
y(t) = f (t − T ) . (2.322)
Such distortionless transmission is attainable in certain special situations, the
most notable of which is EM propagation through empty space. It may also be
approached over limited frequency bands in certain transmission lines (coaxial
cable, microstrip lines). In most practical transmission media however one has
to count on some degree of phase nonlinearity with frequency, particularly as
the signal bandwidth is increased. Clearly for any specific signal and transfer
phase the quantitative evaluation of signal distortion can proceed directly via
a numerical evaluation of (2.321). Nevertheless, guidance for such numerical
investigations must be provided by a priori theoretical insights. For example,
at the very minimum one should like to define and quantify measures of signal
distortion. Fortunately this can usually be accomplished using simplified and
analytically tractable models.
Let us first attempt to define the delay experienced by a typical signal.
Because each spectral component of the signal will be affected by a different
amount, it is sensible to first attempt to quantify the delay experienced by a
typical narrow spectral constituent of the signal. For this purpose we conceptu-
ally subdivided the signal spectrum F (ω) into narrow bands, each of width Δω,
as indicated in Fig. 2.50 (also shown is a representative plot of ψ(ω), usually
referred to as the medium dispersion curve). The contribution to the output
signal from such a typical band (shown shaded in the figure) is
yn (t) =
e {zn (t)} , (2.323)
2.6 Frequency Dispersion 169

where
 ω n +Δω/2
1
zn (t) = eiωt e−iψ(ω) F (ω) dω (2.324)
π ω n −Δω/2

zn (t) is the corresponding analytic signal (assuming real f (t) and ψ(ω) =
−ψ(−ω)) and the integration is carried out over the shaded band in Fig. 2.50.

F(w) ,y (w)
y¢ (wn )

y (w)
F(w)

· · · · · ·
w
wn-1 wn wn+1

Dw

Figure 2.50: Group delay of signal component occupying a narrow frequency


band

Clearly the complete signal y (t) can be represented correctly by simply sum-
ming over the totality of such non-overlapping frequency bands, i.e.,

y(t) = yn (t) . (2.325)
n
For sufficiently small Δω/ωn the phase function within each band may be
approximated by
ψ(ω) ∼ ψ(ω n ) + (ω − ω n ) ψ  (ω n ), (2.325*)
where ψ  (ω n ) is the slope of the dispersion curve at the center of the band in
Fig. 2.50. If we also approximate the signal spectrum F (ω) by its value at the
band center, (2.324) can be replaced by
 ω n +Δω/2
1 
zn (t) ∼ F (ω n ) eiωt e−i[ψ(ωn )+(ω−ωn )ψ (ω n )]
dω.
π ω n −Δω/2

After changing the integration variable to η = ω − ω n this becomes


 Δω/2
1 
zn (t) ∼ F (ωn ) ei(ωn t−ψ(ωn )) eiη (t−ψ (ωn )) dη
π −Δω/2
170 2 Fourier Series and Integrals with Applications to Signal Analysis
 !
sin Δω/2 t − ψ  (ω n )
= 2iF (ωn ) ei(ωn t−ψ(ωn )) ! (2.326)
π t − ψ  (ω n )

and upon setting F (ωn ) = A (ωn ) eiθ(ωn ) the real signal (2.323) assumes the
form
 !
sin Δω/2 t − ψ  (ω n )
yn (t) ∼ A (ω n ) sin [ω n t + θ (ωn ) + π − ψ(ω n )] ! (2.327)
π t − ψ  (ω n )

a representative plot of which is shown in Fig. 2.51. Equation (2.327) has the
form of a sinusoidal carrier at frequency ω n that has been phase shifted by

-1

-2

-3

-4
-1 -0.8 -0.6 -0.4 -0.2 0 0.2 0.4 0.6 0.8 1
t-Tg

Figure 2.51: Plot of (2.327) for Δω/2θ (ωn ) = 10, ωn = 200rps and A (ω n ) = 1

ψ(ω n ) radians. Note that the carrier is being modulated by an envelope in form
of a sinc function delayed in time by ψ  (ω n ). Evidently this envelope is the time
domain representation of the spectral components contained within the band
Δω all of which are undergoing the same time delay as a “group.” Accordingly
ψ  (ω n ) is referred to as the group delay (Tg ) while the time (epoch) delay of the
carrier θ (ω n ) /ω n is referred to as the phase delay (T ϕ). One may employ these
concepts to form a semi-quantitative picture of signal distortion by assigning to
each narrow band signal constituent in the sum in (2.325) its own phase and
group delay. Evidently if the dispersion curve changes significantly over the
2.6 Frequency Dispersion 171

signal bandwidth no single numerical measure of distortion is possible. Thus it


is not surprising that the concept of group delay is primarily of value for signals
having sufficiently narrow bandwidth. How narrow must Δω be chosen for the
representation (2.327) to hold? Clearly in addition to Δω/ωn << 1 the next
term in the Taylor expansion in (2.325*) must be negligible by comparison with
(ω − ω n ) ψ  (ω n ). Since |ω − ω n | ≤ Δω/2 this additional constraint translates
into 
4ψ (ω n )

Δω   (2.328)
ψ (ω n )
which evidently breaks down when ψ  (ω n ) = 0.

2.6.2 Phase and Group Velocity


Phase and group delay are closely linked to phase and group velocities associated
with wave motion. To establish the relationship we start with the definition of
an the elementary wave
f (t, x) = f (t − x/v), (2.329)
where t is time x, represents space, and v a constant. Considered as a function of

0.35

0.3 t1 t2 t3 t4 t5

0.25

0.2
t-x/v

0.15

0.1

0.05

0
-5 -4 -3 -2 -1 0 1 2 3 4 5
x

Figure 2.52: Self-preserving spatial pattern at successive instants of time (t1 <
t2 < t3 < t4 < t5)

x which is sampled at discrete instances of time we can display it as in Fig. 2.52.


Such a spatial display may be regarded as a sequence of snapshots of the function
f (ξ) which executes a continuous motion in the direction of the positive x-axis.
172 2 Fourier Series and Integrals with Applications to Signal Analysis

Clearly the speed of this translation may be defined unambiguously by the


condition that the functional argument t − x/v be maintained constant in time
for a continuum of x. The derivative of the argument is then zero so that
dx
= v. (2.330)
dt
We take (2.330) as the definition of the velocity of the wave. Note that this
definition is based entirely on the requirement that the functional form f (ξ)
be preserved exactly. This characterizes what is usually designated as disper-
sionless propagation. It is an idealization just as is distortionless transmission
mentioned in the preceding subsection. Evidently as long as x is fixed the two
concepts are identical as we see by setting x/v = T in (2.322). In general the
preservation of the waveform is approached only by narrow band signals. Hence
we can again examine initially the propagation of a single sinusoid and appeal
to Fourier synthesis to formulate the general case. For a time-harmonic signal
the elementary wave function (2.329) reads
eiω(t−x/v(ω)) = eiωt e−iβ(ω)x , (2.331)
wherein we now allow the speed of propagation vϕ (ω) to depend on frequency.
Note, however, that even though mathematically the functional forms (2.329)
and (2.331) are identical, (2.331) represents an infinitely long periodic pattern
so that we cannot really speak of the velocity of the translation of an identi-
fiable space limited pattern (as, e.g., displayed in Fig. 2.52). Thus if we want
to associate vϕ (ω) with the motion of some identifiable portion of the spatial
pattern, we have only a phase reference at our disposal. Quite aptly then vϕ (ω)
is referred to as the phase velocity. The quantity β (ω) = ω/v(ω) in (2.331)
represents the propagation constant and may be taken as a fundamental char-
acteristic of the propagation medium. The time domain representation of a
general signal with spectrum F (ω) that has propagated through a distance x is
obtained by multiplying (2.331) by F (ω) and taking the inverse FT. Thus
 ∞
1
y(t, x) = eiωt e−iβ(ω)x F (ω) dω, (2.332)
2π −∞
which is just (2.321) with the phase shift relabeled as β (ω) x. Note that in the
special case β (ω) = ω/v and with v a constant (2.332) reduces to (2.329), i.e.,
the propagation is dispersionless. In the general case we proceed as in (2.326).
After replacing ψ(ω) with β (ω) x in (2.327) we obtain
 !
sin Δω/2 t − β  (ω n ) x
yn (t, x) ∼ A (ω n ) sin [ω n t + θ (ωn ) + π − β (ω n ) x] ! .
π t − β  (ω n ) x
(2.333)
Unlike (2.327), (2.333) depends on both space and time. It does not, however,
have the same simple interpretation as the wavefunction defined in (2.329) be-
cause the speed of propagation of the carrier phase and the envelope differ. Thus
while the carrier phase moves with the phase velocity
vϕn = ω n /β (ω n ) (2.334)
2.6 Frequency Dispersion 173

the envelope6 moves with velocity



vgn = 1/β  (ωn ) = β=β(ω n ). (2.335)

The latter is referred to as the group velocity and is the speed of propagation
of the energy contained within the frequency band Δω in Fig. 2.50. By con-
trast, the phase velocity has generally no connection with energy transport but
represents merely the translation of a phase reference point.

2.6.3 Effects of Frequency Dispersion on Pulse Shape


Thus far we have not explicitly addressed quantitative measures of signal distor-
tion. For this purpose consider a pulse with a (baseband) spectrum P (ω) most
of whose energy is confined to the nominal frequency band (−Ω, Ω) . The pulse,
after having been modulated by a carrier of frequency ω 0 , propagates through
a medium of length L characterized by the propagation constant β (ω) . The
output signal occupies the frequency band ω 0 − Ω < ω < ω0 + Ω with and has
the time domain representation
(  )
ω 0 +Ω
−iβ(ω)L iωt dω
y(t, ω0 ) =
e 2 (1/2)P (ω − ω 0 ) e e
ω 0 −Ω 2π
(  Ω )

=
e eiω0 t P (η) e−iβ(η+ω0 )L eiηt , (2.336)
−Ω 2π
where we have assumed that the pulse is a real function. From the last expression
we identify the complex baseband output signal as
 Ω

s (t) = P (η) e−iβ(η+ω0 )L eiηt . (2.337)
−Ω 2π
Irrespective of the nature of the pulse spectrum the frequencies at the band
center ω = ω 0 will be delayed by the group delay β  (ω 0 ) L. In order to focus on
pulse distortion (e.g., pulse broadening) it will be convenient to subtract this
delay. We do this by initially adding and subtracting ηβ  (ω 0 ) L from the phase
of the integrand in (2.337) as follows:
 Ω
  dη
s (t) = P (η) e−i[β(η+ω0 )−β (ω0 )η]L eiη [t−β (ω0 )L] . (2.338)
−Ω 2π
Observe that this integral defines the time delayed version of ŝ (t) defined by
 
s (t) = ŝ t − β  (ω 0 ) L (2.339)
6 When the emphasis is on wave propagation rather than signal analysis, it is customary

to represent the wavefunction (2.332) as a superposition of propagation constants β, in terms


of the so-called wavenumber spectrum. In that case the envelope in (2.333) (usually referred
to as a wavepacket) assumes the form
sin[Δβ/2(vgn t−x)]
vgn π (vgn t−x)
,
where Δβ is the range of propagation constants corresponding to the frequency band Δω.
174 2 Fourier Series and Integrals with Applications to Signal Analysis

or, explicitly, by
 Ω

(ω 0 )η ]L iηt dη
ŝ (t) = P (η) e−i[β(η+ω0 )−β e . (2.340)
−Ω 2π
We shall obtain an approximation to this integral under the following two
assumptions:
ω0  Ω, (2.341a)
2 
Ω β (ω 0 )L  1. (2.341b)
The first of these is the conventional narrow band approximation while the
second implies a long propagation path.7 Thus in view of (2.341a) we may
approximate β (η + ω0 ) by
1
β (η + ω 0 ) ∼ β (ω 0 ) + β  (ω 0 ) η + β  (ω 0 ) η 2 . (2.342)
2
Substituting this into (2.340) leads to the following series of algebraic steps:
 Ω
−iβ(ω 0 )L L  2 dη
ŝ (t) ∼ e P (η) e−i 2 β (ω0 )η eiηt
−Ω 2π
 Ω 
η 2
 
−i L β  (ω 0 )Ω2 ( Ω ) −2( Ωη ) Ωβ (ω
t dη
= e−iβ(ω0 )L P (η) e 2 0 )L

−Ω 2π
 1   
L  2 2
−i β (ω 0 )Ω ν −2ν Ωβ  (ω )L t dν
= Ωe−iβ(ω0 )L P (νΩ) e 2 0

−1 2π
 1  2
t2  2 dν
−iβ(ω 0 )L −i 2Lβ  (ω 0 )
−i L t
2 β (ω 0 )Ω ν− Ωβ  (ω 0 )L
= Ωe e P (νΩ) e
−1 2π
t2
−i
= Ωe−iβ(ω0 )L e 2Lβ (ω0 )
 1−t/Ωβ  (ω0 )L
  L  2 2 dx
P xΩ + t/β  (ω0 ) L e−i 2 β (ω0 )Ω x . (2.343)

−1−t/Ωβ (ω 0 )L 2π
Since we are interested primarily in assessing pulse distortion the range of the
time variable of interest is on the order of t ∼ 1/Ω we have in view of (2.341b)
t/Ωβ  (ω 0 ) L  1 . (2.344)
Consequently the limits in the last integral in (2.343) may be replaced by −1, 1.
Again in view of (2.341b) we may evaluate this integral by appealing to the
principle of stationary phase. Evidently the point of stationary phase is at
x = 0 which leads to the asymptotic result
* +
1 t2 t
−iπ/4sign[β  (ω0 )] −iβ(ω 0 )L −i 2Lβ (ω0 )
ŝ (t) ∼ e e e P .
2π β  (ω 0 ) L β  (ω0 ) L
(2.345)
7 Note (2.341b) necessarily excludes the special case β  (ω 0 ) = 0.
2.6 Frequency Dispersion 175

In many applications (e.g., intensity modulation in fiber optic communication


systems) only the pulse envelope is of interest. In that case (2.345) assumes the
compact form
* + 2
1 t
2
|ŝ (t)| ∼ 
P (2.346)

2π β (ω 0 ) L β (ω 0 ) L


Parseval’s theorem tells us that the energies of the input and output signals
must be identical. Is this still the case for the approximation (2.346)? Indeed
it is as we verify by a direct calculation:
 ∞  ∞
 !
2
|ŝ (t)| dt = (1/2π β (ω0 ) L) P t/β  (ω 0 ) L 2 dt
−∞ −∞
 ∞  Ω
1 1
= |P (ω)|2 dω ≡ |P (ω)|2 dω.
2π −∞ 2π −Ω

Equation (2.346) states that the envelope of a pulse propagating over a suf-
ficiently long path assumes the shape of its Fourier transform wherein the
timescale is determined only by the path length and the second derivative of
the propagation constant at the band center. For example, for a pulse of unit
amplitude and duration T we obtain
 
sin2 2βtT
(ω )L
2 0
|ŝ (t)| ∼ 4  2
t
β  (ω 0 )L

giving a peak-to-first null pulsewidth of



2πβ  (ω 0 ) L
TL = .
(2.347)
T
In optical communications pulse broadening is usually described by the group index $N(\omega)$ defined as the ratio of the speed of light in free space to the group velocity in the medium:
$$N(\omega) = \frac{c}{v_g(\omega)} = c\,\beta'(\omega). \qquad (2.348)$$
Expressed in terms of the group index the pulse width in (2.347) reads
$$T_L = \frac{2\pi L}{cT}\,\frac{d}{d\omega}N(\omega)\Big|_{\omega=\omega_0}. \qquad (2.349)$$
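To attach some numbers to (2.347), the following short snippet evaluates the peak-to-first-null width for assumed, fiber-like values of β''(ω₀), L, and T; these figures are order-of-magnitude illustrations of mine and are not taken from the text.

```python
import math

# Assumed order-of-magnitude values (illustrative only, not from the text)
beta2 = 21.7e-27     # beta''(w0) in s^2/m  (~21.7 ps^2/km)
L = 100e3            # path length, m
T = 25e-12           # duration of the unit-amplitude input pulse, s

TL = 2*math.pi*beta2*L/T          # peak-to-first-null width, Eq. (2.347)
print(f"T_L = {TL*1e12:.0f} ps for a {T*1e12:.0f} ps pulse over {L/1e3:.0f} km")
```

With these numbers the output width is roughly half a nanosecond, i.e., more than an order of magnitude broader than the input pulse.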

In view of (2.341b) these results break down whenever β  (ω0 ) = 0, i.e., at


the inflection points (if they exist) of the dispersion curve. To include the
case of inflection points requires the retention of the third derivative in Taylor
expansion (2.342), i.e.,
1 1
β (η + ω 0 ) ∼ β (ω 0 ) + β  (ω 0 ) η + β  (ω 0 ) η 2 + β  (ω 0 ) η 3 (2.350)
2 6
so that
$$\hat{s}(t) \sim e^{-i\beta(\omega_0)L}\int_{-\Omega}^{\Omega}P(\eta)\,e^{-i\frac{L}{2}\beta''(\omega_0)\eta^2-i\frac{L}{6}\beta'''(\omega_0)\eta^3}\,e^{i\eta t}\,\frac{d\eta}{2\pi}. \qquad (2.351)$$
We shall not evaluate (2.351) for general pulse shapes but confine our attention to a Gaussian pulse. In that case we may replace the limits in (2.351) by $\pm\infty$ and require only that (2.341a) hold but not necessarily (2.341b). Using the parameterization in (2.296) we have
$$p(t) = \frac{2^{1/4}}{\sqrt{T}}\,e^{-\frac{\pi t^2}{T^2}}, \qquad (2.352)$$
where we have relabeled the nominal pulse width $s$ by $T$. The corresponding FT then reads
$$P(\omega) = 2^{1/4}\sqrt{T}\,e^{-T^2\omega^2/4\pi} \qquad (2.353)$$
so that (2.351) assumes the form
$$\begin{aligned}
\hat{s}(t) &\sim 2^{1/4}\sqrt{T}\,e^{-i\beta(\omega_0)L}\int_{-\infty}^{\infty}e^{-T^2\eta^2/4\pi}\,e^{-i\frac{L}{2}\beta''(\omega_0)\eta^2}\,e^{-i\frac{L}{6}\beta'''(\omega_0)\eta^3}\,e^{i\eta t}\,\frac{d\eta}{2\pi}\\
&= 2^{1/4}\sqrt{T}\,e^{-i\beta(\omega_0)L}\int_{-\infty}^{\infty}e^{-i\frac{L\beta'''(\omega_0)}{6}\left[\eta^3+B\eta^2-C\eta\right]}\,\frac{d\eta}{2\pi}, \qquad (2.354)
\end{aligned}$$
where
$$B = \frac{3\beta''(\omega_0)}{\beta'''(\omega_0)} - i\,\frac{3T^2}{2\pi L\beta'''(\omega_0)}, \qquad (2.355a)$$
$$C = \frac{6t}{L\beta'''(\omega_0)}. \qquad (2.355b)$$
Changing the variable of integration to $z$ via $\eta = z-B/3$ eliminates the quadratic term in the polynomial in the exponential of (2.354), resulting in
$$\eta^3+B\eta^2-C\eta = z^3-z\left(B^2/3+C\right)+(2/27)B^3+BC/3.$$
Because of the analyticity of the integrand the integration limits in (2.354) may be kept at $\pm\infty$. A subsequent change of the integration variable from $z$ to $w = \left[L\beta'''(\omega_0)/2\right]^{1/3}z$ transforms (2.354) into
$$\hat{s}(t) \sim 2^{1/4}\sqrt{T}\,e^{-i\beta(\omega_0)L}\,e^{-i\frac{\beta'''(\omega_0)L}{6}\left[(2/27)B^3+BC/3\right]}\left[\frac{\beta'''(\omega_0)L}{2}\right]^{-1/3}\operatorname{Ai}\!\left(-\left[\frac{\beta'''(\omega_0)L}{2}\right]^{2/3}\left[B^2/9+C/3\right]\right), \qquad (2.356)$$
where $\operatorname{Ai}(x)$ is the Airy function defined by the integral
$$\operatorname{Ai}(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-i\left(w^3/3+xw\right)}dw. \qquad (2.357)$$
The interpretation of (2.356) will be facilitated if we introduce the following dimensionless parameters:
$$q = \frac{\beta''(\omega_0)T}{\beta'''(\omega_0)}, \qquad (2.358a)$$
$$p = \frac{\beta'''(\omega_0)L}{T^3}, \qquad (2.358b)$$
$$\chi = 2qp = \frac{2\beta''(\omega_0)L}{T^2}. \qquad (2.358c)$$
Introducing these into (2.356) we obtain
$$\hat{s}(t) \sim 2^{1/4}\frac{1}{\sqrt{T}}\,e^{-i\beta(\omega_0)L}\,e^{-i\frac{\chi q^2}{6}\left(1-\frac{i}{\pi\chi}\right)\left[\left(1-\frac{i}{\pi\chi}\right)^2+\frac{6}{\chi q}\left(\frac{t}{T}\right)\right]}\,(p/2)^{-1/3}\operatorname{Ai}\!\left(-q^2\left(\frac{p}{2}\right)^{2/3}\left[\left(1-\frac{i}{\pi\chi}\right)^2+\frac{4t}{q\chi T}\right]\right). \qquad (2.359)$$
Let us first examine this expression for the case in which the third derivative term in (2.350) can be neglected. Clearly this is tantamount to dropping the cubic term in (2.351). The integral then represents the FT of a Gaussian function and can be evaluated exactly. On the other hand, from the definition of $q$ in (2.358a) we note that $\beta'''(\omega_0)\to 0$ with $\beta''(\omega_0)\neq 0$ corresponds to $q\to\infty$. Hence we should be able to obtain the same result by evaluating (2.359) in the limit as $q\to\infty$. We do this with the aid of the first-order asymptotic form of the Airy function for large argument, the necessary formula for which is given in [1]. It reads
$$\operatorname{Ai}(-z) \sim \pi^{-1/2}z^{-1/4}\sin\!\left(\zeta+\frac{\pi}{4}\right), \qquad (2.360)$$
where
$$\zeta = \frac{2}{3}z^{3/2}; \qquad \left|\arg(z)\right| < \pi. \qquad (2.361)$$
Thus we obtain for$^{8}$ $|q|\sim\infty$
$$\begin{aligned}
&(p/2)^{-1/3}\operatorname{Ai}\!\left(-q^2\left(\frac{p}{2}\right)^{2/3}\left[\left(1-\frac{i}{\pi\chi}\right)^2+\frac{4t}{q\chi T}\right]\right)\\
&\quad\sim -i\left[\pi\chi\left(1-\frac{i}{\pi\chi}\right)\right]^{-1/2}\\
&\qquad\times\left(\exp\!\left\{i\,\frac{p}{3}\,q^3\left[\left(1-\frac{i}{\pi\chi}\right)^2+\frac{4t}{q\chi T}\right]^{3/2}+i\frac{\pi}{4}\right\}
-\exp\!\left\{-i\,\frac{p}{3}\,q^3\left[\left(1-\frac{i}{\pi\chi}\right)^2+\frac{4t}{q\chi T}\right]^{3/2}-i\frac{\pi}{4}\right\}\right), \qquad (2.362)
\end{aligned}$$
8 $q$ is real but may be of either sign.
where in the algebraic term corresponding to $z^{-1/4}$ in (2.360) we have dropped the term $o(1/q)$. Next we expand the argument of the first exponential term in (2.362) as follows:
$$\begin{aligned}
&i\left(\chi q^2/6\right)\left[\left(1-\frac{i}{\pi\chi}\right)^2+\frac{4t}{q\chi T}\right]^{3/2}\\
&\quad= i\left(\chi q^2/6\right)\left(1-\frac{i}{\pi\chi}\right)^3\left[1+\frac{4t}{q\chi T\left(1-\frac{i}{\pi\chi}\right)^2}\right]^{3/2}\\
&\quad= i\left(\chi q^2/6\right)\left(1-\frac{i}{\pi\chi}\right)^3\left[1+\frac{6t}{q\chi T\left(1-\frac{i}{\pi\chi}\right)^2}+\frac{6t^2}{(q\chi T)^2\left(1-\frac{i}{\pi\chi}\right)^4}+o(1/q^3)\right]\\
&\quad= i\left(\chi q^2/6\right)\left(1-\frac{i}{\pi\chi}\right)^3+i\,\frac{qt}{T}\left(1-\frac{i}{\pi\chi}\right)+i\,\frac{t^2}{\chi T^2}\left(1-\frac{i}{\pi\chi}\right)^{-1}+o(1/q). \qquad (2.363)
\end{aligned}$$
In identical fashion we can expand the argument of the second exponential, which would differ from (2.363) only by a minus sign. It is not hard to show that for sufficiently large $|q|$ its real part will be negative provided
$$\chi^2 > \frac{1}{2}. \qquad (2.364)$$
In that case the second exponential in (2.362) asymptotes to zero and may be ignored. Neglecting the terms $o(1/q)$ in (2.363) we now substitute (2.362) into (2.359) and note that the first two terms in the last line of (2.363) cancel against the exponential in (2.359). The final result then reads
$$\begin{aligned}
\hat{s}(t) &\sim 2^{1/4}\frac{1}{\sqrt{T}}\,e^{-i\beta(\omega_0)L}\left\{-i\left[\pi\chi\left(1-\frac{i}{\pi\chi}\right)\right]^{-1/2}\right\}\exp\!\left[i\,\frac{t^2}{\chi T^2}\left(1-\frac{i}{\pi\chi}\right)^{-1}+i\frac{\pi}{4}\right]\\
&= 2^{1/4}\frac{1}{\sqrt{T}}\,e^{-i\beta(\omega_0)L}\left(1+i\pi\chi\right)^{-1/2}\exp\!\left[-\frac{\pi t^2}{T^2}\left(1+i\pi\chi\right)^{-1}\right]. \qquad (2.365)
\end{aligned}$$
For the squared magnitude of the pulse envelope we get
$$\left|\hat{s}(t)\right|^2 \sim \frac{\sqrt{2}}{T}\left(1+\pi^2\chi^2\right)^{-1/2}\exp\!\left[-\frac{\pi t^2}{\left(T^2/2\right)\left(1+\pi^2\chi^2\right)}\right]. \qquad (2.366)$$
The nominal duration of this Gaussian signal may be defined by $\left(T/2\sqrt{\pi}\right)\sqrt{1+\pi^2\chi^2}$, so that $\chi$ plays the role of a pulse-stretching parameter. When $\chi\gg 1$, (2.366) reduces to
$$\left|\hat{s}(t)\right|^2 \sim \frac{\sqrt{2}}{\pi\chi T}\exp\!\left[-\frac{2t^2}{\pi T^2\chi^2}\right] = \frac{T}{\sqrt{2}\,\pi\beta''(\omega_0)L}\exp\!\left[-\frac{T^2t^2}{2\pi\left[\beta''(\omega_0)L\right]^2}\right]. \qquad (2.367)$$
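It is instructive to see how quickly the stretch factor $\sqrt{1+\pi^2\chi^2}$ implied by (2.366) approaches the large-χ behavior used in obtaining (2.367). The short sketch below simply tabulates both; nothing is assumed beyond the two formulas themselves.

```python
import math

# Stretch factor of (2.366) versus its large-chi limit (pi*chi) used for (2.367)
for chi in (0.0, 0.5, 1.0, 2.0, 5.0, 10.0):
    exact = math.sqrt(1.0 + (math.pi*chi)**2)
    print(f"chi = {chi:5.1f}   sqrt(1+pi^2*chi^2) = {exact:8.3f}   pi*chi = {math.pi*chi:8.3f}")
```

Already for χ of order unity the two agree to within a few percent, which is why the large-χ form (2.367) is adequate whenever the propagation path is long.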
The same result also follows more directly from the asymptotic form (2.346), as is readily verified by substitution of the FT of the Gaussian pulse (2.353) into (2.346). Note that with $\chi=0$ in (2.366) we recover the squared magnitude of the original (input) Gaussian pulse (2.352). Clearly this substitution violates our original assumption $|q|\sim\infty$ under which (2.366) was derived, for in accordance with (2.358) $\chi=0$ implies $q=0$. On the other hand, if $\beta'''(\omega_0)$ is taken to be identically zero, (2.366) is a valid representation of the pulse envelope for all values of $\chi$. This turns out to be the usual assumption in the analysis of pulse dispersion effects in optical fibers. In that case formula (2.366) can be obtained directly from (2.351) by simply completing the square in the exponential and integrating the resulting Gaussian function. When $\beta'''(\omega_0)\neq 0$ with $q$ arbitrary, numerical calculations of the output pulse can be carried out using (2.359). For this purpose it is more convenient to eliminate $\chi$ in favor of the parameters $p$
and $q$. This alternative form reads
$$\hat{s}(t) \sim 2^{1/4}\frac{1}{\sqrt{T}}\,e^{-i\beta(\omega_0)L}\,e^{-i\frac{p}{3}\left(q-\frac{i}{2\pi p}\right)\left[\left(q-\frac{i}{2\pi p}\right)^2+\frac{3}{p}\left(\frac{t}{T}\right)\right]}\left(|p|/2\right)^{-1/3}\operatorname{Ai}\!\left(-\left(\frac{|p|}{2}\right)^{2/3}\left[\left(q-\frac{i}{2\pi p}\right)^2+\frac{2t}{pT}\right]\right). \qquad (2.368)$$
[Figure 2.53: Distortion of Gaussian pulse envelope by cubic phase nonlinearities in the propagation constant. The plot shows $T\,|\hat{s}(t)|^2$ versus $t/T$ for $p = 0,\ \pm 0.2,\ \pm 0.5,\ \pm 1.0$.]

To assess the influence of the third derivative of the phase on the pulse envelope
we set q = 0 and obtain the series of plots for several values of p as shown
in Fig. 2.53. The center pulse labeled p = 0 corresponds to the undistorted
Gaussian pulse (χ = 0 in (2.366)). As p increases away from zero the pulse
envelope broadens with a progressive increase in time delay. For sufficiently
large p the envelope will tend toward multimodal quasi-oscillatory behavior the
onset of which is already noticeable for p as low as 0.2. For negative p the pulse
shapes are seen to be a mirror images with respect to t = 0 of those for positive
p so that pulse broadening is accompanied by a time advance.
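Curves of the kind shown in Fig. 2.53 are straightforward to regenerate from (2.368). The sketch below evaluates $T|\hat{s}(t)|^2$ with $q=0$ for a few values of $p$ using SciPy's Airy function; it is an illustrative implementation of (2.368) that prints a few samples rather than plotting, and it is not code supplied with the text.

```python
import numpy as np
from scipy.special import airy

def T_envelope_sq(t_over_T, p, q=0.0):
    """T*|s_hat(t)|^2 from Eq. (2.368); the unimodular factor exp(-i*beta*L) is dropped."""
    w = q - 1j/(2*np.pi*p)                               # recurring combination q - i/(2*pi*p)
    phase = np.exp(-1j*(p/3.0)*w*(w**2 + (3.0/p)*t_over_T))
    arg = -((abs(p)/2.0)**(2.0/3.0))*(w**2 + 2.0*t_over_T/p)
    Ai = airy(arg)[0]                                    # airy() returns (Ai, Ai', Bi, Bi')
    s = 2**0.25 * phase * (abs(p)/2.0)**(-1.0/3.0) * Ai
    return abs(s)**2

t_grid = np.linspace(-2.0, 2.0, 5)
for p in (0.2, 0.5, 1.0, -0.5):
    row = "  ".join(f"{T_envelope_sq(x, p):.3f}" for x in t_grid)
    print(f"p = {p:+.1f}  (t/T = -2,-1,0,1,2):  {row}")
```

Feeding a dense t/T grid into the same routine and plotting reproduces the qualitative behavior described above: broadening and delay for positive p, the mirror-image advance for negative p.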

2.6.4 Another Look at the Propagation of a Gaussian


Pulse When β  (ω 0 ) = 0
As was pointed out above, in the absence of cubic (and higher order) nonlinearities (2.366) is an exact representation of the pulse envelope. In fact we can also get the complete waveform in the time domain with the aid of (2.336), (2.339), and (2.365). Thus
$$y(t,\omega_0) = \frac{2^{1/4}}{\sqrt{T}}\left\{e^{i\left[\omega_0 t-\frac{\pi^2\chi\tilde{t}^2}{T^2\left(1+\pi^2\chi^2\right)}\right]}\,e^{-i\left[\beta(\omega_0)L+(1/2)\tan^{-1}(\pi\chi)\right]}\left(1+\pi^2\chi^2\right)^{-1/4}\exp\!\left[-\frac{\pi\tilde{t}^2}{T^2\left(1+\pi^2\chi^2\right)}\right]\right\}, \qquad (2.369)$$
where
$$\tilde{t} = t-\beta'(\omega_0)L. \qquad (2.370)$$
Note that the instantaneous frequency of this complex waveform varies linearly with time, i.e.,
$$\omega(t) = \omega_0 - \frac{2\pi^2\chi\left[t-\beta'(\omega_0)L\right]}{T^2\left(1+\pi^2\chi^2\right)}. \qquad (2.371)$$
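The linear frequency sweep asserted by (2.371) can be checked numerically by differentiating the phase of the complex envelope (2.365). The sketch below fits the slope of the numerically obtained sweep and compares its magnitude with $2\pi^2\chi/[T^2(1+\pi^2\chi^2)]$; the values of $T$ and $\chi$ are arbitrary illustrative choices, and only the magnitude of the slope is compared since the overall sign depends on the carrier convention adopted in (2.336).

```python
import numpy as np

# Assumed illustrative values
T, chi = 1.0, 2.0
t = np.linspace(-2*T, 2*T, 4001)          # t here plays the role of t - beta'(w0)*L

# Complex envelope from (2.365); carrier and constant phase factors omitted
env = (1 + 1j*np.pi*chi)**(-0.5) * np.exp(-np.pi*t**2/(T**2*(1 + 1j*np.pi*chi)))

# Numerical instantaneous-frequency deviation: d/dt of the unwrapped envelope phase
dev = np.gradient(np.unwrap(np.angle(env)), t)
slope = np.polyfit(t, dev, 1)[0]          # the sweep should be linear in t

print("fitted |slope|    =", abs(slope))
print("(2.371) magnitude =", 2*np.pi**2*chi/(T**2*(1 + (np.pi*chi)**2)))
```

The two numbers agree to machine-level accuracy, confirming that the dispersion-induced distortion is a pure linear FM ("chirp") of the envelope.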
In fiber optics such a pulse is referred to as a chirped pulse. This “chirping” (or linear FM modulation) is just a manifestation of the fact that the pulse distortion is due entirely to the quadratic nonlinearity in the phase rather than in the amplitude of the effective transfer function. On the other hand, chirping can occur also due to intrinsic characteristics of the transmitter generating the input pulse. We can capture this effect using the analytic form
$$p(t) = A\,e^{-\frac{t^2}{2T_0^2}(1+i\kappa)}, \qquad (2.372)$$
where $A$ is a constant, $\kappa$ the so-called chirp factor, and $2T_0$ the nominal pulse width.9

9 Note that $T_0 = \left(1/\sqrt{2\pi}\right)T$ where $T$ represents the definition of pulse width in (2.352). Also $A = 2^{1/4}/\sqrt{T}$.

Evidently when this pulse gets upconverted to the carrier frequency $\omega_0$ its instantaneous frequency becomes
$$\omega(t) = \omega_0\left[1-\frac{\kappa}{\omega_0 T_0}\left(\frac{t}{T_0}\right)\right], \qquad (2.373)$$
so that over the nominal pulse interval $-T_0\le t\le T_0$ the fractional change in the instantaneous frequency is $2\kappa/\omega_0T_0$. Presently we view this chirping as the
intrinsic drift in the carrier frequency during the formation of the pulse. How does this intrinsic chirp affect pulse shape when this pulse has propagated over a transmission medium with transfer function $\exp[-i\beta(\omega)L]$? If we neglect the effects of the third and higher order derivatives of the propagation constant the answer is straightforward. We first compute the FT of (2.372) as follows:
$$P(\omega) = A\int_{-\infty}^{\infty}e^{-\frac{t^2}{2T_0^2}(1+i\kappa)}\,e^{-i\omega t}\,dt
= A\,e^{-\frac{\omega^2T_0^2}{2(1+i\kappa)}}\int_{-\infty}^{\infty}e^{-\frac{(1+i\kappa)}{2T_0^2}\left(t+\frac{i\omega T_0^2}{1+i\kappa}\right)^2}dt
= A\,T_0\sqrt{\frac{2\pi}{1+i\kappa}}\;e^{-\frac{\omega^2T_0^2}{2(1+i\kappa)}}, \qquad (2.374)$$
where the last result follows from the formula for the Gaussian error function with (complex) variance parameter $T_0^2/(1+i\kappa)$. Next we substitute (2.374) in (2.340) with $\Omega=\infty$ together with the approximation (2.342) to obtain
$$\hat{s}(t) = e^{-i\beta(\omega_0)L}\int_{-\infty}^{\infty}P(\eta)\,e^{-i\frac{1}{2}\beta''(\omega_0)\eta^2L}\,e^{i\eta t}\,\frac{d\eta}{2\pi}. \qquad (2.375)$$
Simplifying,
$$\hat{s}(t) = A\,T_0\sqrt{\frac{2\pi}{1+i\kappa}}\;e^{-i\beta(\omega_0)L}\int_{-\infty}^{\infty}e^{-\left[\frac{T_0^2}{2(1+i\kappa)}+i\frac{L\beta''(\omega_0)}{2}\right]\eta^2}\,e^{i\eta t}\,\frac{d\eta}{2\pi}. \qquad (2.376)$$
Setting $Q = T_0^2/\left[2(1+i\kappa)\right]+iL\beta''(\omega_0)/2$ we complete the square in the exponential as follows:
$$e^{-Q\eta^2+i\eta t} = e^{-Q\left(\eta-\frac{it}{2Q}\right)^2+\frac{(it)^2}{4Q}} = e^{-\frac{t^2}{4Q}}\,e^{-Q\left(\eta-\frac{it}{2Q}\right)^2}. \qquad (2.377)$$
From this we note that the complex variance parameter is $1/(2Q)$, so that (2.376) integrates to
$$\begin{aligned}
\hat{s}(t) &= \frac{A\,T_0}{2\pi}\sqrt{\frac{2\pi}{1+i\kappa}}\;e^{-i\beta(\omega_0)L}\,e^{-\frac{t^2}{4Q}}\sqrt{\frac{\pi}{Q}}\\
&= \frac{A\,T_0}{\sqrt{T_0^2+i\beta''(\omega_0)L(1+i\kappa)}}\;e^{-i\beta(\omega_0)L}\exp\!\left[-\frac{t^2(1+i\kappa)}{2\left[T_0^2+i\beta''(\omega_0)L(1+i\kappa)\right]}\right]. \qquad (2.378)
\end{aligned}$$
Expressions for the pulse width and chirp are obtained by separating the argument of the last exponential into real and imaginary parts as follows:
$$\exp\!\left[-\frac{t^2(1+i\kappa)}{2\left[T_0^2+i\beta''(\omega_0)L(1+i\kappa)\right]}\right]
= \exp\!\left[-\frac{T_0^2\,t^2}{2\left\{\left[T_0^2-\beta''(\omega_0)L\kappa\right]^2+\left[\beta''(\omega_0)L\right]^2\right\}}\right]\exp(-i\psi), \qquad (2.379)$$
where
$$\psi = \frac{t^2\left[\kappa T_0^2-\beta''(\omega_0)L\left(1+\kappa^2\right)\right]}{2\left\{\left[T_0^2-\beta''(\omega_0)L\kappa\right]^2+\left[\beta''(\omega_0)L\right]^2\right\}}. \qquad (2.380)$$
Defining the magnitude of (2.379) as $\exp\!\left[-t^2/\left(2T_L^2\right)\right]$ we get for the pulse length $T_L$
$$T_L = T_0\sqrt{\left(1-\frac{\beta''(\omega_0)L\kappa}{T_0^2}\right)^2+\left(\frac{\beta''(\omega_0)L}{T_0^2}\right)^2}. \qquad (2.381)$$
When the input pulse is unchirped, $\kappa=0$, and we get
$$T_L = \sqrt{T_0^2+\left(\frac{\beta''(\omega_0)L}{T_0}\right)^2}. \qquad (2.382)$$
We see from (2.381) that when $\kappa\neq 0$, $T_L$ may be smaller or larger than the right side of (2.382), depending on the sign of $\kappa$ and the magnitude of $L$. Note, however, that for sufficiently large $L$, (2.381) is always larger than (2.382) regardless of the sign of $\kappa$. The quantity
$$L_D = T_0^2/\beta''(\omega_0) \qquad (2.383)$$
is known as the dispersion length. Using this in (2.381) we have
$$T_L = T_0\sqrt{\left(1-\frac{L}{L_D}\kappa\right)^2+\left(\frac{L}{L_D}\right)^2}. \qquad (2.384)$$

The significance of $L_D$ is that with $\kappa=0$, for $L\ll L_D$ the effect of dispersion may be neglected.
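Equation (2.384) is also convenient for exploring the interplay of chirp sign and dispersion length numerically; the following few lines tabulate $T_L/T_0$ versus $L/L_D$ for several values of $\kappa$. This is pure arithmetic on (2.384), with nothing assumed beyond it.

```python
import math

def TL_over_T0(L_over_LD, kappa):
    """Normalized pulse length from Eq. (2.384)."""
    x = L_over_LD
    return math.sqrt((1 - kappa*x)**2 + x**2)

for kappa in (-2.0, 0.0, 2.0):
    vals = [TL_over_T0(x, kappa) for x in (0.0, 0.2, 0.4, 0.8, 1.6)]
    print(f"kappa = {kappa:+.0f}:", "  ".join(f"{v:.3f}" for v in vals))

# For kappa > 0 (with beta'' > 0) the pulse first compresses, with a minimum
# at L/L_D = kappa/(1 + kappa^2), before the broadening term takes over.
```

The tabulated values make explicit the statement following (2.382): an appropriately signed chirp initially compresses the pulse, but for sufficiently large L the output is always broader than in the unchirped case.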

2.6.5 Effects of Finite Transmitter Spectral Line Width*

In the preceding it was assumed that the carrier modulating the pulse is monochromatic, i.e., an ideal single frequency sinusoid with constant phase. In practice this will not be the case. Instead the carrier will have a fluctuating amplitude and phase which we may represent as
$$\tilde{a}(t)\cos\!\left(\omega_0 t+\tilde{\phi}(t)\right), \qquad (2.385)$$
where $\tilde{a}(t)$ and $\tilde{\phi}(t)$ are random functions of time and $\omega_0$ is the nominal carrier frequency, which itself has to be quantified as a statistical average. In the following we assume that only the phase is fluctuating and that the carrier amplitude is fixed. Reverting to complex notation we then assume that the pulse $p(t)$ upon modulation is of the form
$$p(t)\,e^{i\omega_0 t}\,e^{i\tilde{\phi}(t)}. \qquad (2.386)$$
If we denote the FT of $e^{i\tilde{\phi}(t)}$ by the random function $\tilde{X}(\omega)$, we get for the FT of (2.386)
$$\int_{-\infty}^{\infty}p(t)\,e^{i\omega_0 t}\,e^{i\tilde{\phi}(t)}\,e^{-i\omega t}\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty}P\left(\omega-\xi-\omega_0\right)\tilde{X}(\xi)\,d\xi. \qquad (2.387)$$
To get the response that results after this random waveform has propagated over a transmission medium with transfer function $\exp[-i\beta(\omega)L]$ we have to replace $P(\omega-\omega_0)$ in (2.336) by the right side of (2.387). Thus we obtain
$$\begin{aligned}
y(t) &= \operatorname{Re}\left\{2\int_{\omega_0-\Omega}^{\omega_0+\Omega}(1/2)\left[\frac{1}{2\pi}\int_{-\infty}^{\infty}P\left(\omega-\xi-\omega_0\right)\tilde{X}(\xi)\,d\xi\right]e^{-i\beta(\omega)L}\,e^{i\omega t}\,\frac{d\omega}{2\pi}\right\}\\
&= \operatorname{Re}\left\{e^{i\omega_0 t}\int_{-\Omega}^{\Omega}\left[\frac{1}{2\pi}\int_{-\infty}^{\infty}P(\eta-\xi)\tilde{X}(\xi)\,d\xi\right]e^{-i\beta(\eta+\omega_0)L}\,e^{i\eta t}\,\frac{d\eta}{2\pi}\right\}\\
&= \operatorname{Re}\left\{e^{i\omega_0 t}\,\tilde{s}\left(t-\beta'(\omega_0)L\right)\right\},
\end{aligned}$$
where
$$\tilde{s}(t) = \int_{-\Omega}^{\Omega}\left[\frac{1}{2\pi}\int_{-\infty}^{\infty}P(\eta-\xi)\tilde{X}(\xi)\,d\xi\right]e^{-i\left[\beta(\eta+\omega_0)-\beta'(\omega_0)\eta\right]L}\,e^{i\eta t}\,\frac{d\eta}{2\pi} \qquad (2.388)$$
is the complex random envelope of the pulse. It is reasonable to characterize this envelope by its statistical average, which we denote by
$$\overline{\left|ENV\right|^2} \equiv \overline{\left|\tilde{s}(t)\right|^2}. \qquad (2.389)$$
In evaluating (2.389) we shall assume that $e^{i\tilde{\phi}(t)}$ is a WSS process so that its spectral components are uncorrelated, i.e.,
$$\overline{\tilde{X}(\xi)\tilde{X}^*(\xi')} = 2\pi F(\xi)\,\delta\!\left(\xi-\xi'\right), \qquad (2.390)$$
where $F(\xi)$ is the spectral power density of $e^{i\tilde{\phi}(t)}$. If we approximate the propagation constant in (2.388) by the quadratic form (2.342) and substitute (2.388) into (2.389), we obtain with the aid of (2.390)
$$\overline{\left|ENV\right|^2} = \int_{-\infty}^{\infty}F(\xi)\,d\xi\;\frac{1}{(2\pi)^3}\left|\int_{-\Omega}^{\Omega}P(\eta-\xi)\,e^{-i\beta''(\omega_0)\eta^2L/2}\,e^{i\eta t}\,d\eta\right|^2. \qquad (2.391)$$
Assuming a Gaussian pulse with the FT as in (2.374), the inner integral in (2.391) can be expressed in the following form:
$$\frac{1}{(2\pi)^3}\left|\int_{-\Omega}^{\Omega}P(\eta-\xi)\,e^{-i\beta''(\omega_0)\eta^2L/2}\,e^{i\eta t}\,d\eta\right|^2 = \frac{A^2T_0^2\,\pi}{(2\pi)^2\sqrt{1+\kappa^2}\,|Q|}\,f(\xi), \qquad (2.392)$$
where $Q = T_0^2/\left[2(1+i\kappa)\right]+iL\beta''(\omega_0)/2$,
$$f(\xi) = e^{2\operatorname{Re}\left\{Qb^2\right\}}\,e^{-\frac{\xi^2T_0^2}{2}\left[\frac{1}{1+i\kappa}+\frac{1}{1-i\kappa}\right]}, \qquad (2.393)$$
and
$$b = \frac{\xi T_0^2}{2Q(1+i\kappa)}+\frac{it}{2Q}. \qquad (2.394)$$
To complete the calculation of the average pulse envelope we need the functional form of the power spectral density of the phase fluctuations. The form depends on the physical process responsible for these fluctuations. For example, for high quality solid state laser sources the spectral line shape is Lorentzian, i.e., of the form
$$F(\omega-\omega_0) = \frac{2/W}{1+\left(\frac{\omega-\omega_0}{W}\right)^2}. \qquad (2.395)$$
Unfortunately for this functional form the integration in (2.391) has to be carried out numerically. On the other hand, an analytical expression is obtainable if we assume the Gaussian form
$$F(\omega-\omega_0) = \frac{1}{\sqrt{2\pi W^2}}\exp\!\left[-\left(\omega-\omega_0\right)^2/2W^2\right]. \qquad (2.396)$$
After some algebra we get
$$\overline{\left|ENV\right|^2} = \frac{A^2T_0^2\,\pi}{(2\pi)^2\sqrt{1+\kappa^2}\,|Q|}\sqrt{\frac{\left[T_0^2-\beta''(\omega_0)L\kappa\right]^2+\left[\beta''(\omega_0)L\right]^2}{\left[T_0^2-\beta''(\omega_0)L\kappa\right]^2+\left(1+2W^2T_0^2\right)\left[\beta''(\omega_0)L\right]^2}}\;\exp\!\left[-\frac{t^2\,T_0^2}{\left[T_0^2-\beta''(\omega_0)L\kappa\right]^2+\left(1+2W^2T_0^2\right)\left[\beta''(\omega_0)L\right]^2}\right]. \qquad (2.397)$$
Note that the preceding is the squared envelope, so that to get the effective pulse length of the envelope itself an additional factor of 2 needs to be inserted (see (2.381)). We then get
$$T_L = T_0\sqrt{\left(1-\frac{\beta''(\omega_0)L\kappa}{T_0^2}\right)^2+\left(1+2W^2T_0^2\right)\left(\frac{\beta''(\omega_0)L}{T_0^2}\right)^2}. \qquad (2.398)$$
It should be noted that this expression is not valid when $\beta''(\omega_0)=0$, as then the cubic phase term dominates. In that case the pulse is no longer Gaussian. The pulse width can then be defined as an r.m.s. duration. The result reads
$$T_L = T_0\sqrt{\left(1-\frac{\beta''(\omega_0)L\kappa}{T_0^2}\right)^2+\left(1+2W^2T_0^2\right)\left(\frac{\beta''(\omega_0)L}{T_0^2}\right)^2+C}, \qquad (2.399)$$
where
$$C = (1/4)\left(1+\kappa^2+2W^2T_0^2\right)\left(\frac{\beta'''(\omega_0)L}{T_0^3}\right)^2. \qquad (2.400)$$
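For orientation, (2.399) and (2.400) can be evaluated directly. In the sketch below the parameter values (pulse width, chirp, line width, dispersion coefficients, path length) are assumed purely for illustration and are not taken from the text.

```python
import math

# Assumed illustrative values (not from the text)
T0    = 10e-12        # 10 ps input pulse
kappa = 1.0           # chirp factor
W     = 50e9          # spectral line width of the source, rad/s
L     = 50e3          # 50 km path
beta2 = 20e-27        # beta''(w0), s^2/m
beta3 = 0.1e-39       # beta'''(w0), s^3/m

# r.m.s. output duration from Eqs. (2.399)-(2.400)
a = 1 - beta2*L*kappa/T0**2
b = (1 + 2*W**2*T0**2)*(beta2*L/T0**2)**2
C = 0.25*(1 + kappa**2 + 2*W**2*T0**2)*(beta3*L/T0**3)**2
TL = T0*math.sqrt(a**2 + b + C)
print(f"T_L = {TL*1e12:.1f} ps  (input T0 = {T0*1e12:.0f} ps)")
```

For these numbers the cubic term C is negligible, so the result is essentially (2.398); the finite line width enters only through the factor (1 + 2W²T0²) multiplying the quadratic dispersion term.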

2.7 Fourier Cosine and Sine Transforms


In Sect. 2.2 we took as the starting point in our development of the FT theory the LMS approximation of a function defined in $(-T/2, T/2)$ in terms of sinusoids with frequencies spanning the interval $(-\Omega,\Omega)$. The formal solution can then be phrased in terms of the integral equation (2.106) for the unknown coefficients (functions). For arbitrary finite intervals no simple analytical solution of the normal equation appears possible. On the other hand, when both the expansion interval in the time domain and the range of admissible frequencies are allowed to approach infinity, the normal equations admit a simple solution which we have identified with the FT. As we shall see in the following, a suitable set of normal equations can also be solved analytically when the expansion intervals in the time domain and in the frequency domain are chosen as semi-infinite.
We suppose that $f(t)$ is defined over $(0,T)$ and seek its LMS approximation in terms of $\cos(\omega t)$ with $\omega$ in the interval $(0,\Omega)$:
$$f(t) \sim \int_0^{\Omega}\cos(\omega t)\hat{f}_c(\omega)\,d\omega = f_c^{\Omega}(t), \qquad (2.401)$$
where $\hat{f}_c(\omega)$ is the expansion (coefficient) function. In accordance with (1.100) the normal equation reads
$$\int_0^T\cos(\omega t)f(t)\,dt = \int_0^{\Omega}\hat{f}_c(\omega')\,d\omega'\int_0^T\cos(\omega t)\cos(\omega't)\,dt. \qquad (2.402)$$
Using the identity $\cos(\omega t)\cos(\omega't) = (1/2)\left\{\cos[t(\omega-\omega')]+\cos[t(\omega+\omega')]\right\}$ we carry out the integration with respect to $t$ to obtain
$$\int_0^T\cos(\omega t)f(t)\,dt = \frac{\pi}{2}\int_0^{\Omega}\hat{f}_c(\omega')\,d\omega'\left\{\frac{\sin[(\omega-\omega')T]}{\pi(\omega-\omega')}+\frac{\sin[(\omega+\omega')T]}{\pi(\omega+\omega')}\right\}. \qquad (2.403)$$
For arbitrary $T$ this integral equation does not admit of simple analytical solutions. An exceptional case obtains when $T$ is allowed to approach infinity, for then the two Fourier integral kernels approach delta functions. Because $\Omega>0$ only the first of these contributes. Assuming that $\hat{f}_c(\omega')$ is a smooth function, we obtain in the limit
$$F_c(\omega) = \int_0^{\infty}\cos(\omega t)f(t)\,dt, \qquad (2.404)$$
where we have defined
$$F_c(\omega) = \frac{\pi}{2}\hat{f}_c(\omega). \qquad (2.405)$$
Inserting (2.404) into (2.401) the LMS approximation to $f(t)$ reads
$$\begin{aligned}
f_c^{\Omega}(t) &= \frac{2}{\pi}\int_0^{\Omega}\cos(\omega t)\int_0^{\infty}\cos(\omega t')f(t')\,dt'\,d\omega\\
&= \int_0^{\infty}f(t')\,dt'\;\frac{2}{\pi}\int_0^{\Omega}\cos(\omega t)\cos(\omega t')\,d\omega\\
&= \int_0^{\infty}f(t')\,dt'\;(1/\pi)\int_0^{\Omega}\left\{\cos[\omega(t-t')]+\cos[\omega(t+t')]\right\}d\omega\\
&= \int_0^{\infty}f(t')\,dt'\left\{\frac{\sin[(t-t')\Omega]}{\pi(t-t')}+\frac{\sin[(t+t')\Omega]}{\pi(t+t')}\right\}. \qquad (2.406)
\end{aligned}$$
Using the orthogonality principle the corresponding LMS error $\varepsilon_{\Omega\min}$ is
$$\varepsilon_{\Omega\min} = \int_0^{\infty}\left|f(t)\right|^2dt-\int_0^{\infty}f^*(t)\,f_c^{\Omega}(t)\,dt$$
and using (2.401) and (2.404)
$$\varepsilon_{\Omega\min} = \int_0^{\infty}\left|f(t)\right|^2dt-\int_0^{\infty}f^*(t)\int_0^{\Omega}\cos(\omega t)\hat{f}_c(\omega)\,d\omega\,dt
= \int_0^{\infty}\left|f(t)\right|^2dt-\frac{2}{\pi}\int_0^{\Omega}\left|F_c(\omega)\right|^2d\omega \ge 0. \qquad (2.407)$$
As $\Omega\to\infty$ the two Fourier kernels yield the limiting form
$$\lim_{\Omega\to\infty}f_c^{\Omega}(t) = \frac{f(t^+)+f(t^-)}{2}. \qquad (2.408)$$
We may then write in lieu of (2.401)
$$\frac{f(t^+)+f(t^-)}{2} = \frac{2}{\pi}\int_0^{\infty}\cos(\omega t)F_c(\omega)\,d\omega. \qquad (2.409)$$
At the same time $\lim_{\Omega\to\infty}\varepsilon_{\Omega\min}=0$, so that (2.407) gives the identity
$$\int_0^{\infty}\left|f(t)\right|^2dt = \frac{2}{\pi}\int_0^{\infty}\left|F_c(\omega)\right|^2d\omega. \qquad (2.410)$$
When $f(t)$ is a smooth function (2.409) may be replaced by
$$f(t) = \frac{2}{\pi}\int_0^{\infty}\cos(\omega t)F_c(\omega)\,d\omega. \qquad (2.411)$$
The quantity $F_c(\omega)$ defined by (2.404) is the Fourier Cosine Transform (FCT) and (2.411) the corresponding inversion formula. Evidently (2.410) is the corresponding Parseval formula. As in the case of the FT we can use the compact notation
$$f(t)\overset{F_c}{\Longleftrightarrow}F_c(\omega). \qquad (2.412)$$
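A quick numerical illustration of the FCT pair (2.404)/(2.411) and the Parseval relation (2.410) may be helpful. The sketch below uses $f(t)=e^{-at}$, for which the closed form $F_c(\omega)=a/(a^2+\omega^2)$ is standard; the value of $a$ and the evaluation points are arbitrary choices for illustration.

```python
import numpy as np
from scipy.integrate import quad

a = 2.0
f  = lambda t: np.exp(-a*t)
Fc = lambda w: a/(a**2 + w**2)          # closed-form FCT of exp(-a t), per (2.404)

# Forward transform (2.404) at a few frequencies
for w in (0.0, 1.0, 5.0):
    num, _ = quad(lambda t: np.cos(w*t)*f(t), 0, np.inf)
    print(f"w = {w}:  numeric Fc = {num:.6f}   closed form = {Fc(w):.6f}")

# Inversion formula (2.411) at an interior point
t0 = 0.7
inv, _ = quad(lambda w: (2/np.pi)*np.cos(w*t0)*Fc(w), 0, np.inf, limit=200)
print(f"inversion at t = {t0}: {inv:.6f}   f(t) = {f(t0):.6f}")

# Parseval (2.410): both sides should equal 1/(2a)
lhs, _ = quad(lambda t: f(t)**2, 0, np.inf)
rhs, _ = quad(lambda w: (2/np.pi)*Fc(w)**2, 0, np.inf)
print(f"Parseval: {lhs:.6f} = {rhs:.6f}")
```

All three checks agree to the quadrature tolerance, as expected for a smooth function on $(0,\infty)$.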

Replacing $F_c(\omega)$ in (2.411) by (2.404) yields the identity
$$\delta(t-t') = \int_0^{\infty}\sqrt{\frac{2}{\pi}}\cos(\omega t)\,\sqrt{\frac{2}{\pi}}\cos(\omega t')\,d\omega, \qquad (2.413)$$
which may be taken as the completeness relationship for the FCT.
Note that the derivative of $f_c^{\Omega}(t)$ at $t=0$ vanishes identically. This means that pointwise convergence for the FCT is only possible for functions that possess a zero derivative at $t=0$. This is, of course, also implied by the fact that the completeness relationship (2.413) is comprised entirely of cosine functions.
What is the relationship between the FT and the FCT? Since the FCT involves the cosine kernel one would expect that the FCT can be expressed in terms of the FT of an even function. This is actually the case. Thus suppose $f(t)$ is even; then
$$F(\omega) = 2\int_0^{\infty}f(t)\cos(\omega t)\,dt \qquad (2.414)$$
so that $F(\omega)$ is also even. Therefore the inversion formula becomes
$$f(t) = \frac{1}{\pi}\int_0^{\infty}F(\omega)\cos(\omega t)\,d\omega. \qquad (2.415)$$
Evidently with $F_c(\omega) = F(\omega)/2$, (2.414) and (2.415) correspond to (2.404) and (2.411), respectively.
In a similar manner, using the sine kernel, one can define the Fourier Sine Transform (FST):
$$F_s(\omega) = \int_0^{\infty}\sin(\omega t)f(t)\,dt. \qquad (2.416)$$
The corresponding inversion formula (which can be established either formally in terms of the normal equation as above or derived directly from the FT representation of an odd function) reads
$$f(t) = \frac{2}{\pi}\int_0^{\infty}F_s(\omega)\sin(\omega t)\,d\omega. \qquad (2.417)$$
Upon combining (2.416) and (2.417) we get the corresponding completeness relationship
$$\delta(t-t') = \int_0^{\infty}\sqrt{\frac{2}{\pi}}\sin(\omega t)\,\sqrt{\frac{2}{\pi}}\sin(\omega t')\,d\omega. \qquad (2.418)$$
Note that (2.417) and (2.418) require that $f(0)=0$, so that only for such functions is a pointwise convergent FST representation possible.
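Analogously for the FST, one can check (2.416) and (2.417) with $f(t)=t\,e^{-at}$, which satisfies the requirement $f(0)=0$ just noted; the closed form $F_s(\omega)=2a\omega/(a^2+\omega^2)^2$ is standard, and the numbers below are arbitrary illustrative choices of mine.

```python
import numpy as np
from scipy.integrate import quad

a = 2.0
f  = lambda t: t*np.exp(-a*t)
Fs = lambda w: 2*a*w/(a**2 + w**2)**2     # closed-form FST of t*exp(-a t), per (2.416)

w0 = 3.0
num, _ = quad(lambda t: np.sin(w0*t)*f(t), 0, np.inf)
print(f"Fs({w0}) numeric = {num:.6f}   closed form = {Fs(w0):.6f}")

t0 = 0.5
inv, _ = quad(lambda w: (2/np.pi)*Fs(w)*np.sin(w*t0), 0, np.inf, limit=200)
print(f"inversion (2.417) at t = {t0}: {inv:.6f}   f(t) = {f(t0):.6f}")
```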

Problems
1. Using (2.37) compute the limit as M → ∞, thereby verifying (2.39).
2. Prove (2.48).

3. Derive the second-order Fejer sum (2.42).


4. For the periodic function shown in the following sketch:
[Figure P4: Periodic function with step discontinuities. The sketch shows segments of $t^2$ repeated periodically, with the $t$ axis marked at $t = -2, 0, 2, 4, 6$.]

(a) Compute the FS coefficients $\hat{f}_n$.
(b) Compute and plot the partial sum $f^N(t)$ for $N=5$ and $N=20$. Also compute the corresponding LMS errors.
(c) Repeat (b) for the first-order Fejer sum.
(d) Repeat (c) for the second-order Fejer sum.

5. Derive the interpolation formula (2.82).

6. Derive the interpolation formula (2.88).

7. Approximate the signal $f(t) = te^{-t}$ in the interval $(0,4)$ by the first five terms of a Fourier sine series and an anharmonic Fourier series with expansion functions as in (2.101), assuming (a) $\beta = -1/3$ and (b) $\beta = -1$. Plot $f^5(t)$ for the three cases together with $f(t)$ on the same set of axes. Account for the different values attained by the three approximating sums at $t=4$.
8. The integral
$$I = P\int_{-2}^{2}\frac{t\,dt}{(t-1)(t^2+1)\sin t}$$
is defined in the CPV sense. Evaluate it numerically.
9. Derive formulas (2.137), (2.141), and (2.142).
10. The differential equation
$$x''(t)+x'(t)+3x(t) = 0$$
is to be solved for $t\ge -2$ using the FT. Assuming initial conditions $x(-2)=3$ and $x'(-2)=1$, write down the general solution in terms of the FT inversion formula.

11. Derive formula (2.73).

12. Prove that $K_{\Omega}^{(1)}(t)$ in (2.200) is a delta function kernel.

13. Prove the asymptotic form (2.201).
14. Compute the Fourier transform of the following signals:
(a) $e^{-3t}\cos(4t)\,U(t)$  (b) $e^{-4|t|}\sin 7t$
(c) $te^{-5t}\sin(4t)\,U(t)$  (d) $\sum_{n=0}^{\infty}4^{-n}\delta(t-nT)$
(e) $\left(\dfrac{\sin at}{at}\right)\left(\dfrac{\sin\left[2a(t-1)\right]}{a(t-1)}\right)$  (f) $\sum_{n=-\infty}^{\infty}e^{-|t-4n|}$

15. Compute the Fourier transform of the following signals:
(a) $f(t) = \dfrac{\sin at}{\pi t}\,U(t)$
(b) $f(t) = \int_{-\infty}^{\infty}g(t+x)g^*(x)\,dx$ with $g(t) = e^{-at}U(t-2)$
(c) $f(t) = w(t)$ with $w(t)$ defined in the following sketch.

[Sketch: $w(t)$ has unit amplitude with breakpoints at $t = -3, -2, 2, 3$.]

16. With the aid of Parseval's theorem evaluate $\int_{-\infty}^{\infty}\dfrac{\sin^4 x}{x^4}\,dx$.

17. With $F(\omega) = R(\omega)+iX(\omega)$ the FT of a causal signal, find $X(\omega)$ when (a) $R(\omega) = 1/(1+\omega^2)$ (b) $R(\omega) = \dfrac{\sin^2 2\omega}{\omega^2}$.
18. Given the real signal $1/\left(1+t^2\right)$ construct the corresponding analytic signal and its FT.
19. Derive (2.217).

20. For the signal $z(t) = \dfrac{\cos 5t}{1+t^2}$ compute and plot the spectra of the inphase and quadrature components $x(t)$ and $y(t)$ for $\omega_0 = 5, 10, 20$. Interpret your results in view of the constraint (2.226).
21. The amplitude of a minimum phase FT is given by $|F(\omega)| = \dfrac{1}{1+\omega^{2n}}$, $n > 1$. Compute the phase.