Fourier Transforms: An Introduction for Engineers (1995)
by
Robert M. Gray
Joseph W. Goodman
SPRINGER SCIENCE+BUSINESS MEDIA, LLC
ISBN 978-1-4613-6001-8 ISBN 978-1-4615-2359-8 (eBook)
DOI 10.1007/978-1-4615-2359-8
Preface
parameter) and finite and infinite duration. The DFT is emphasized early
because it is the easiest to work with and its properties are the easiest
to demonstrate without cumbersome mathematical details, the so-called
"delta-epsilontics" of real analysis. Its importance is enhanced by the fact
that virtually all digital computer implementations of the Fourier trans-
form eventually reduce to a DFT. Furthermore, a slight modification of
the DFT provides Fourier series for infinite duration periodic signals and
thereby generalized Fourier transforms for such signals.
This approach has several advantages. Treating the basic signal types
in parallel emphasizes the common aspects of these signal types and avoids
repetitive proofs of similar properties. It allows the basic properties to
be proved in the simplest possible context. The general results are then
believable as simply the appropriate extensions of the simple ones even
though the detailed proofs are omitted. This approach should provide more
insight than the common engineering approach of quoting a result such as
the basic Fourier integral inversion formula without proof. Lastly, this
approach emphasizes the interrelations among the various signal types, for
example, the production of discrete time signals by sampling continuous
time signals or the production of a finite duration signal by windowing
an infinite duration signal. These connections help in understanding the
corresponding different types of Fourier transforms.
Synopsis
The topics covered in this book are:
1. Signals and Systems. This chapter develops the basic definitions and
examples of signals, the mathematical objects on which Fourier trans-
forms operate, the inputs to the Fourier transform. Included are
continuous time and discrete time signals, two-dimensional signals,
infinite and finite duration signals, time-limited signals, and periodic
signals. Combinations of signals to produce new signals and systems
which operate on an input signal to produce an output signal are
defined and basic examples considered.
4. Basic Properties. This chapter is the heart of the book, developing the
basic properties of Fourier transforms that make the transform useful
in applications and theory. Included are linearity, shifts, modulation,
Parseval's theorem, sampling, the Poisson summation formula, alias-
ing, pulse amplitude modulation, stretching, downsampling and up-
sampling, differentiating and differencing, moment generating, band-
width and pulse width, and symmetry properties.
The final two topics are relatively advanced and may be tackled in any
order as time and interest permit. The construction of Fourier transform
tables is treated in the Appendix and can be referred to as appropriate
throughout the course.
Instructional Use
This book is intended as an introduction and survey of Fourier analysis for
engineering students and practitioners. It is a mezzanine level course in the
sense that it is aimed at senior engineering students or beginning Master's
level students. The basic core of the course consists of the unstarred sections
of Chapters 1 through 6. The starred sections contain additional details
and proofs that can be skipped or left for background reading without
classroom presentation in a one quarter course. This core plus one of the
topics from the final chapters constitutes a one quarter course. The entire
book, including many of the starred sections, can be covered in a semester.
Many of the figures were generated using Matlab™ on both Unix™
and Apple Macintosh™ systems and the public domain NIH Image pro-
gram (written by Wayne Rasband at the U.S. National Institutes of Health
and available from the Internet by anonymous ftp from zippy.nimh.nih.gov
or on floppy disk from NTIS, 5285 Port Royal Rd., Springfield, VA 22161,
part number PB93-504568) on Apple Macintosh™ systems.
The problems in each chapter are intended to test both fundamentals
and the mechanics of the algebra and calculus necessary to find transforms.
Many of these are old exam problems and hence often cover material from
previous chapters as well as the current chapter. Whenever yes/no answers
are called for, the answer should be justified, e.g., by a proof for a positive
answer or a counterexample for a negative one.
Recommended Texts
Fourier analysis has been the subject of numerous texts and monographs,
ranging from books of tables for practical use to advanced mathematical
treatises. A few are mentioned here for reference. Some of the classic texts
still make good reading.
The two most popular texts for engineers are The Fourier Transform
and its Applications by R. Bracewell [6] and The Fourier Integral and its
Applications by A. Papoulis [24]. Both books are aimed at engineers and
emphasize the infinite duration, continuous time Fourier transform. Cir-
cuits, Signals, and Systems by W. McC. Siebert [30] is an excellent (and
enormous) treatment of all forms of Fourier analysis applied to basic circuit
and linear system theory. It is full of detailed examples and emphasizes ap-
plications. A detailed treatment of the fast Fourier transform may be found
in O. Brigham's The Fast Fourier Transform and its Applications [9]. Treatments of two-dimensional Fourier transforms can be found in Goodman [18]
and Bracewell [8] as well as in books on image processing or digital image
processing. For example, Gonzales and Wintz [17] contains a variety of
applications of Fourier techniques to image enhancement, restoration, edge
detection, and filtering.
Mathematical treatments include Wiener's classic text The Fourier In-
tegral and Certain of its Applications [36], Carslaw's An Introduction to the
Theory of Fourier's Series and Integrals [10], Bochner's classic Lectures on
Fourier Integrals [3], Walker's Fourier Analysis [33], and Titchmarsh's In-
troduction to the Theory of Fourier Integrals [32]. An advanced and modern
(and inexpensive) mathematical treatment can also be found in An Intro-
duction to Harmonic Analysis by Y. Katznelson [21]. An elementary and
entertaining introduction to Fourier analysis applied to music may be found
in The Science of Musical Sound, by John R. Pierce [25].
Discrete time Fourier transforms are treated in depth in several books
devoted to digital signal processing such as the popular text by Oppenheim
and Schafer [23].
Some Notation
We will deal with a variety of functions of real variables. Let R denote
the real line. Given a subset T of the real line R, a real-valued function g
of a real variable t with domain of definition T is an assignment of a real
number g(t) to every point in T. Thus denoting a function g is shorthand
for the more careful and complete notation {g(t); t ∈ T}, which specifies
the name of the function (g) and the collection of values of its argument
for which it is defined. The most common cases of interest for a domain of
definition are intervals of the various forms defined below.
• T = R, the entire real line.
• T = (a, b) = {r : a < r < b}, an open interval consisting of the points
between a and b but not a and b themselves. The real line itself is
often written in this form as R = (−∞, ∞).
• T = [a, b] = {r : a ≤ r ≤ b}, a closed interval consisting of the points
between a and b together with the endpoints a and b (a and b both
finite).
• T = [a, b) = {r : a ≤ r < b}, a half open (or half closed) interval con-
sisting of the points between a and b together with the lower endpoint
a (a finite).
• T = (a, b] = {r : a < r ≤ b}, a half open (or half closed) interval
consisting of the points between a and b together with the upper
endpoint b (b finite).
• T = Z = {..., −1, 0, 1, ...}, the collection of integers.
• T = Z_N = {0, 1, ..., N − 1}, the collection of integers from 0 through
N − 1.
z = x + iy,
where x = ℜ(z) is the real part of z and y = ℑ(z) is the imaginary part
of z.

[Figure: the point z in the complex plane, with real part x on the real axis
and imaginary part y on the imaginary axis.]
A signal g = {g(t); t ∈ R} is said to be even if

g(−t) = g(t); t ∈ R.
It is said to be odd if
g(−t) = −g(t); t ∈ R.
A slight variation of this definition is common: strictly speaking, an odd
signal must satisfy g(0) = 0 since −g(0) = g(0). This condition is some-
times dropped so that the definition becomes g(−t) = −g(t) for all t ≠ 0.
For example, the usual definition of the sign function meets the strict def-
inition, but the alternative definition (which is +1 for t ≥ 0 and −1 for
t < 0) does not. The alternative definition is, however, an odd function if
one ignores the behavior at t = 0. A signal is Hermitian if

g(−t) = g*(t); t ∈ R.

For example, a complex exponential g(t) = e^{i2πf_0 t} is Hermitian. A signal
is anti-Hermitian if

g(−t) = −g*(t); t ∈ R.

As examples, sin t and te^{−λ|t|} are odd functions of t ∈ R, while cos t
and e^{−λ|t|} are even.
We shall have occasion to deal with modular arithmetic. Given a posi-
tive real number T > 0, any real number a can be written uniquely in the
form a = kT + r where the "remainder" term is in the interval [0, T). This
formula defines a mod T = r, that is, a mod T is what is left of a when the
largest possible number of integer multiples of T is subtracted from a. This
is often stated as "a modulo T." The definition can be summarized by
a mod T = r if a = kT + r where r E [0, T) and k E Z. (0.1)
More generally we can define modular arithmetic on any interval [a, b),
b > a, by

x mod [a, b) = r if x = k(b − a) + r where r ∈ [a, b) and k ∈ Z. (0.2)
Thus the important special case a mod T is an abbreviation for a mod
[0, T). By "modular arithmetic" is meant doing addition and subtraction
within an interval. For example, (0.5 + 0.9) mod 1 = 1.4 mod 1 = 0.4 and
(−0.3) mod 1 = 0.7.
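The reduction of Eq. (0.1) and its interval form in Eq. (0.2) can be sketched in a few lines of Python (our illustration, not the book's; the function names are ours):

```python
# Sketch of modular arithmetic: reduce a real number into [0, T) as in
# Eq. (0.1), or into a general interval [a, b) as in Eq. (0.2).

def mod_T(x, T):
    """x mod T: the remainder r in [0, T) with x = k*T + r, k an integer."""
    return x % T

def mod_interval(x, a, b):
    """x mod [a, b): the r in [a, b) with x = k*(b - a) + r, k an integer."""
    return (x - a) % (b - a) + a

print(mod_T(0.5 + 0.9, 1))        # ~0.4  (1.4 mod 1)
print(mod_T(-0.3, 1))             # ~0.7
print(mod_interval(1.7, -0.5, 0.5))  # ~-0.3
```

Python's `%` operator already returns a remainder with the sign of the divisor, which is exactly the convention needed here for negative arguments.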
Acknowledgements
We gratefully acknowledge our debt to the many students who suffered
through early versions of this book and who considerably improved it by
their corrections, comments, suggestions, and questions. We also acknowledge the Industrial Affiliates Program of the Information Systems Laboratory, Stanford University, whose continued generous support provided the
computer facilities used to write and design this book.
Chapter 1
it could mean either a finite extent domain of definition or a signal with
an infinite extent index set with the property that the signal is 0 except
on a finite region. We adopt the first meaning, however, and hence "finite
duration" is simply a short substitute for the more precise but clumsy
"finite extent domain of definition." The infinite duration signal with the
property that it is 0 except for a finite region will be called a time-limited
signal. For example, the signal g = {sin t; t ∈ [0, 2π)} has finite duration,
2 CHAPTER 1. SIGNALS AND SYSTEMS
h(t) = { sin t,  t ∈ [0, 2π)
       { 0,      t ∈ R, t ∉ [0, 2π)
g = {g(t); t ∈ T}

when the index set T is clear from context. It is also fairly common practice
to use boldface to denote the entire signal; that is, g = {g(t); t ∈ T}.
Signals can also be sequences, such as sampled sinusoids {sin(nT); n ∈
Z}, where Z is the set of all integers {..., −2, −1, 0, 1, 2, ...}, a geometric
progression {r^n; n = 0, 1, 2, ...}, or a sequence of binary data {u_n; n ∈ Z},
where all of the u_n are either 1 or 0. Analogous to the waveform case we
can denote such a sequence as {g(t); t ∈ T} with the index set T now
being a set of integers. It is more common, however, to use subscripts
rather than functional notation and to use indices like k, l, n, m instead
of t for the index for sequences. Thus a sequence will often be denoted
by {g_n; n ∈ T}. We still use the generic notation g for a signal of this
type. The only difference between the first and second types is the nature
of the index set T. When T is a discrete set such as the integers or the
nonnegative integers, the signal g is called a discrete time signal, discrete
parameter signal, sequence, or time series. As in the waveform case, T may
have infinite duration (e.g., all integers) or finite duration (e.g., the integers
from 0 through N − 1).
In the above examples the index set T is one-dimensional, that is, con-
sists of some collection of real numbers. Some signals are best modeled as
having multidimensional index sets. For example, a two-dimensional square
sampled image intensity raster could be written as {g_{n,k}; n = 1, ..., K; k =
1, ..., K}, where each g_{n,k} represents the intensity (a nonnegative number)
of a single picture element or pixel in the image, the pixel located in the
nth column and kth row of the square image. Note that in this case the
1.1. WAVEFORMS AND SEQUENCES 3
256 × 256 square array of pixel intensities, each represented by an integer
from 0 (black, no light) to 2^9 − 1 = 511 (white, fully illuminated). As we
shall wish to display such images on screens which support only 8 and not
9 bits in order to generate examples, however, we shall consider MR images
to consist of a 256 × 256 square array of pixel intensities, each represented by
an integer from 0 to 2^8 − 1 = 255. We note that, in fact, the raw data used
to generate MR images constitute (approximately) the Fourier transform of
the MR image. Thus when MR images are rendered for display, the basic
operation is an inverse Fourier transform.
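The rendering step just described can be sketched numerically (our illustration, not the book's; here the "raw data" is fabricated by transforming a test image, whereas a scanner would measure it directly):

```python
# Sketch: if the raw MR data is (approximately) the 2-D Fourier transform
# of the image, the displayed image is recovered by an inverse 2-D transform.
import numpy as np

image = np.zeros((64, 64))
image[24:40, 24:40] = 255.0          # a bright square as a stand-in image

raw_data = np.fft.fft2(image)        # idealized "measured" frequency data
recovered = np.fft.ifft2(raw_data)   # the rendering step: inverse 2-D FFT

# up to floating-point error the original image is recovered
print(np.max(np.abs(recovered.real - image)))
```

With numpy's conventions, `ifft2` includes the 1/(MN) normalization, so the round trip recovers the image exactly up to rounding.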
A square continuous image raster might be represented by a wave-
form depending on two arguments {g(x, y); x ∈ [0, a], y ∈ [0, a]}. A
three-dimensional sequence of image rasters could be expressed by a sig-
nal {g_{n,k,l}; n = 0, 1, 2, ..., k = 1, 2, ..., K, l = 1, 2, ..., K}, where now
n is the time index (any nonnegative integer) and k and l are the spatial
indices. Here the index set T is three-dimensional and includes both time
and space.
In all of these different signal types, the signal has the general form

g = {g(t); t ∈ T},

where T is the domain of definition or index set of the signal, and where g(t)
denotes the value of the signal at "time" or parameter or dummy variable
t. In general, T can be finite, infinite, continuous, discrete, or even vector
valued. Similarly g(t) can take on vector values, that is, values in Euclidean
space. We shall, however, usually focus on signals that are real or complex
valued, that is, signals for which g(t) is either a real number or a complex
number for all t ∈ T. As mentioned before, when T is discrete we will often
write g_t or g_n or something similar instead of g(t).
In summary, a signal is just a function whose domain of definition is T
and whose range is the space of real or complex numbers. The nature of T
determines whether the signal is continuous time or discrete time and finite
duration or infinite duration. The signal is real-valued or complex-valued
depending on the possible values of g(t).
Although T appears to be quite general, we will need to impose some
structure on it to get useful results and we will focus on a few special cases
that are the most important examples for engineering applications. The
most common index sets for the four basic types of signals are listed in
Table 1.1. The subscripts of the domains inherit their meaning from their
place in the table; that is, DTFD stands for discrete time finite duration,
CTFD for continuous time finite duration, DTID for discrete time infinite
duration, and CTID for continuous time infinite duration.

Table 1.1: Common index sets for the four basic signal types

Duration    Discrete Time                             Continuous Time
Finite      T^(1)_DTFD = Z_N = {0, 1, ..., N − 1}     T^(1)_CTFD = [0, T)
Infinite    T^(2)_DTID = Z                            T^(2)_CTID = R = (−∞, ∞)
It should be pointed out that the index sets of Table 1.1 are not the
only possibilities for the given signal types; they are simply the most com-
mon. The two finite duration examples are said to be one-sided since only
nonnegative indices are considered. The two infinite duration examples are
two-sided in that negative and nonnegative indices are considered. Com-
mon alternatives are to use two-sided sets for finite duration and one-sided
sets for infinite duration as in Table 1.2. Superscripts are used to dis-
tinguish between one- and two-sided time domains when convenient. They
will be dropped when the choice is clear from context. The modifications in
Table 1.2: Alternative index sets

Duration    Discrete Time                                 Continuous Time
Finite      T^(2)_DTFD = {−N, ..., −1, 0, 1, ..., N}      T^(2)_CTFD = [−T/2, T/2)
Infinite    T^(1)_DTID = {0, 1, ...}                      T^(1)_CTID = [0, ∞)
We shall emphasize the choices of Table 1.1, but we shall often encounter
examples from Table 1.2. The careful reader may have noticed the use
of half open intervals in the definitions of finite duration continuous time
signals and be puzzled as to why the apparently simpler open or closed
intervals were not used. This was done for later convenience when we con-
struct periodic signals by repeating finite duration signals. In this case one
endpoint of the domain is not included in the domain as it will be provided
by another domain that will be concatenated.
A portion of this signal is plotted in Figure 1.2. Note that the period has
as shown in Figure 1.3. Here the figure shows the entire signal, which is not
possible for infinite duration signals. For convenience we have chosen the
number of time values to be a power of 2; this will lead to simplifications
when we consider numerical evaluation of Fourier transforms. Also for
convenience we have chosen the signal to contain an integral number of
1.2. BASIC SIGNAL EXAMPLES 7
Figure 1.3: One-Sided Finite Duration Discrete Time Sine Signal: Period 8
where A > 0 and a is some real constant. The signal is depicted in Figure 1.4
for the case a = 1 and A = .9. This signal is commonly considered as a
two-sided signal {g(t); t ∈ R} by defining

g(t) = { ae^{−At},  t ≥ 0
       { 0,         otherwise.    (1.4)
The discrete time analog of the exponential signal is the geometric sig-
nal. For example, consider the signal given by the finite sequence
This signal has discrete time and finite duration and is called the finite
duration geometric signal because it is a finite length piece of a geometric
progression. It is sometimes also called a discrete time exponential. The
signal is plotted in Figure 1.5 for the case of r = .9 and N = 32.
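The finite duration geometric signal can be generated in a few lines (our sketch, not the book's code), using the case plotted in Figure 1.5:

```python
# Sketch: the finite duration geometric signal {r^n; n = 0, 1, ..., N-1}
# for r = .9 and N = 32, the case shown in Figure 1.5.
r, N = 0.9, 32
g = [r ** n for n in range(N)]

print(g[0], g[1])  # 1.0 0.9
```

Each sample is r times the previous one, which is what makes the sequence a geometric progression.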
Given any real T > 0, the box function ⊓_T(t) is defined for any real t
by

⊓_T(t) = { 1,  |t| ≤ T
         { 0,  otherwise.    (1.6)
⊓_T(t) will prove useful, especially when used to define a discrete time sig-
nal, since then discontinuities do not pose problems. The notation rect(t)
is also used for ⊓(t).
As an example, a portion of the two-sided infinite duration discrete time
box signal {⊓_5(n); n ∈ Z} is depicted in Figure 1.6 and a finite duration
one-sided box signal {⊓_5(n); n ∈ {0, 1, ..., 15}} is depicted in Figure 1.7.
The corresponding continuous time signal {⊓_5(t); t ∈ R} is depicted in
Figure 1.8.
The Kronecker delta function δ_t is defined for any real t by

δ_t = { 1,  t = 0
      { 0,  t ≠ 0.
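The Kronecker delta and the discrete box signal can both be realized as functions of an integer index (our sketch, not the book's code):

```python
# Sketch: the Kronecker delta and the discrete time box signal of the text,
# written as functions of an integer index n.
def kronecker_delta(n):
    # 1 at n = 0 and 0 everywhere else
    return 1 if n == 0 else 0

def box(n, T=5):
    # the discrete box signal: 1 for |n| <= T, else 0
    return 1 if abs(n) <= T else 0

print([kronecker_delta(n) for n in range(-2, 3)])  # [0, 0, 1, 0, 0]
print([box(n) for n in range(-7, 8)])
```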
This should not be confused with the Dirac delta or unit impulse which
will be introduced later when generalized functions are considered. The
Kronecker delta is primarily useful as a discrete time signal, as exemplified
in Figure 1.9. The Kronecker delta is sometimes referred to as a "unit
sample" or as the "discrete time impulse" or "discrete impulse" in the lit-
erature, but the latter terms should be used with care as "impulse" is most
associated with the Dirac delta and the two deltas have several radically
different properties. The Kronecker and Dirac delta functions will play sim-
ilar roles in discrete time and continuous time systems, respectively, but the
The notation indicates that the unit step function is one of a class of special
functions u_k(t) related to each other by integration and differentiation. See,
e.g., Siebert [30]. The continuous time step function and the discrete time
step function are depicted in Figures 1.11 and 1.12.
H(t) = { 1,    t > 0
       { 1/2,  t = 0
       { 0,    t < 0.    (1.8)
The Heaviside step function is used for the same purpose as the rectangle
function; the definition of its value at discontinuities as the midpoint be-
tween the values above and below the discontinuity will be useful when
forming Fourier transform pairs. Both step functions share a common
Fourier transform, but the inverse continuous time Fourier transform yields
the Heaviside step function.
The signum or sign function also has two common forms: The most
common (especially for continuous time) is
sgn(t) = { +1,  t > 0
         { 0,   t = 0    (1.9)
         { −1,  t < 0.
The most popular alternative is to replace the 0 at the origin by +1. The
principal difference is that the first definition has three possible values while
the second has only two. The second is useful, for example, when modeling
the action of a hard limiter (or binary quantizer) which has two possible
outputs depending on whether the input is smaller than a given threshold or
not. Rather than add further clutter to the list of names of special functions,
we simply point out that both definitions are used. Unless otherwise stated,
the first definition will be used. We will explicitly point out when the second
definition is being used.
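The two conventions can be captured in a short sketch (our code, not the book's):

```python
# Sketch: the two sign-function conventions discussed in the text.
def sgn(t):
    # three-valued definition (1.9): +1, 0, or -1
    return (t > 0) - (t < 0)

def sgn_alt(t):
    # two-valued alternative: the 0 at the origin is replaced by +1,
    # as produced by a hard limiter (binary quantizer)
    return 1 if t >= 0 else -1

print(sgn(0), sgn_alt(0))  # 0 1
```

The difference matters only at t = 0, but it is exactly that point which distinguishes a three-valued map from a two-output limiter.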
The continuous time and discrete time sgn(t) signals are illustrated in
Figures 1.13-1.14.
Another common signal is the triangle or wedge ∧(t) defined for all real
t by

∧(t) = { 1 − |t|,  |t| < 1
       { 0,        otherwise.    (1.10)
The continuous time triangle signal is depicted in Figure 1.15. In order to
simplify the definition of the discrete time triangle signal, we introduce first
the time scaled triangle function ∧_T(t) defined for all real t and any T > 0:

∧_T(t) = ∧(t/T) = { 1 − |t|/T,  |t| < T
                  { 0,          otherwise.    (1.11)

Thus ∧(t) = ∧_1(t). The discrete time triangle is defined as {∧_T(n); n ∈ Z}
for any positive integer T. Figure 1.16 shows the discrete time triangle
signal ∧_5(n).
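A sketch of the scaled triangle and its discrete time version (our code, assuming the scaling ∧_T(t) = ∧(t/T) = 1 − |t|/T for |t| < T):

```python
# Sketch: the time-scaled triangle (wedge) signal of the text.
def wedge(t, T=1.0):
    # 1 - |t|/T inside (-T, T), and 0 outside
    return 1 - abs(t) / T if abs(t) < T else 0.0

# the discrete time triangle of Figure 1.16: wedge sampled at integers, T = 5
signal = [wedge(n, T=5) for n in range(-8, 9)]
print(wedge(0, 5), wedge(5, 5))  # 1.0 0.0
```

Sampling the continuous wedge at the integers is what turns ∧_T into the discrete time triangle signal.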
The Bessel function J_n(t) is defined for any real t and integer n by

J_n(t) = (1/2π) ∫_{−π}^{π} e^{i t sin φ − i n φ} dφ.    (1.12)
Bessel functions arise as solutions to a variety of applied mathematical
problems, especially in nonlinear systems such as frequency modulation
(FM) and quantization, as will be seen in Chapter 8. Figure 1.17 shows a
plot of J_n(t) for various indices n.
Table 1.3 summarizes several examples of signals and their index sets
along with their signal type. ω, λ > 0, and x are fixed real parameters, and
m is a fixed integer parameter.
1.4 Systems
A common focus of many of the application areas mentioned in the in-
troduction is the action of systems on signals to produce new signals. A
system is simply a mapping which takes one signal, often called the input,
and produces a new signal, often called the output. A particularly trivial
system is the identity system which simply passes the input signal through
to the output without change (an ideal "wire"). Another trivial system is
one which sets the output signal equal to 0 regardless of the input (an ideal
"ground"). More complicated systems can perform a variety of linear or
w(t) = L_t(v).
Note that the output of a system at a particular time can, in principle,
depend on the entire past and future of the input signal (if indeed t cor-
responds to "time"). While this may seem unphysical, it is a useful ab-
straction for introducing properties of systems in their most general form.
We shall later explore several physically motivated constraints on system
structure.
The ideal wire mentioned previously is modeled as a system by L(v) = v.
An ideal ground is defined simply by L(v) = 0, where here 0 denotes a signal
that is 0 for all time.
In many applications the input and output signals are of the same type,
that is, T_i and T_o are the same; but they need not always be. Several
examples of systems with different input and output signal types will be
encountered in section 1.8. As the case of identical signal types for input
and output is the most common, it is usually safe to assume that this is the
case unless explicitly stated otherwise (or implied by the use of different
symbols for the input and output time domains of definition).
The key thing to remember when dealing with systems is that they map
an entire input signal v into a complete output signal w.
A particularly simple type of system is a memoryless system. A memoryless
system is one which maps an input signal v = {v(t); t ∈ T} into an
output signal w = {w(t); t ∈ T} via a mapping of the form

w(t) = a_t(v(t)); t ∈ T
so that the output at time t depends only on the current input and not on
any past or future inputs (or outputs).
{ag(t); t ∈ T}; that is, the new signal formed by multiplying all values of
the original signal by a. This production of a new signal from an old one
provides another simple example of a system, where here the system L is
defined by L_t(g) = ag(t); t ∈ T.
Similarly, given two signals g and h and two complex numbers a and b,
define a linear combination of signals ag + bh as the signal {ag(t) + bh(t); t ∈
T}. We have effectively defined an algebra on the space of signals. This
linear combination can also be considered as a system if we extend the
definition to include multiple inputs; that is, here we have a system L with
two input signals and an output signal defined by L_t(g, h) = ag(t) + bh(t);
t ∈ T.
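These pointwise operations can be sketched directly, treating signals as Python functions over the index set (our illustration, not the book's code):

```python
# Sketch: signals as functions of t, with scaling and linear combination
# a*g + b*h defined pointwise as in the text.
import math

def scale(a, g):
    # the system L_t(g) = a*g(t)
    return lambda t: a * g(t)

def lin_comb(a, g, b, h):
    # the two-input system L_t(g, h) = a*g(t) + b*h(t)
    return lambda t: a * g(t) + b * h(t)

f = lin_comb(2, math.sin, 3, math.cos)   # the signal 2 sin t + 3 cos t
print(f(0.0))  # 3.0
```

Returning a new function mirrors the text's view of a system: it maps entire signals to entire signals, not sample values to sample values.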
As a first step toward what will become Fourier analysis, consider the
specific example of the signal shown in Figure 1.19 obtained by adding two
sines together as follows:
g(n) = [sin(πn/8) + sin(πn/4)] / 2.
The resulting signal is clearly not a sine, but it is equally clearly quite well
behaved and periodic. One might guess that given such a signal one should
be able to decompose it into its sinusoidal components. Furthermore, one
the original sinusoid from the sum. In fact, several classical problems in
detection theory involve such sums of sinusoids and noise. For example:
given a signal that is known to be either noise alone or a sinusoid plus noise,
how does one intelligently decide if the sinusoid is present or not? Given
a signal that is known to be noise plus a sinusoid, how does one estimate
the amplitude or period or the phase of the sinusoid? Fourier methods are
crucial to the solutions of such problems. Although such applications are
beyond the scope of this book, we will later suggest how they are approached
by simply computing and looking at some transforms.
Linear Systems
A system is linear if linear combinations of input signals yield the corre-
sponding linear combination of outputs; that is, if given input signals v(1)
1.5. LINEAR COMBINATIONS 23
L(av) = aL(v)
for any complex constant a.
Common examples of linear systems include systems that produce an
output by adding (in discrete time) or integrating (in continuous time)
the input signal times a weighting function. Since integrals and sums are
linear operations, using them to define systems results in linear systems. For
example, the systems with output w defined in terms of the input v by

w(t) = ∫_{−∞}^{∞} v(τ) h_t(τ) dτ

in continuous time, or

w_n = Σ_{k=−∞}^{∞} v_k h_{n,k}

in discrete time, are linear.
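For finite length signals the discrete time weighted sum becomes a finite sum, which can be sketched directly (our demo, with the weights h_{n,k} supplied as a small matrix):

```python
# Sketch: the discrete time linear system w_n = sum_k v_k * h_{n,k},
# restricted to finite length signals so the sum is finite.
def linear_system(v, h):
    N = len(v)
    return [sum(h[n][k] * v[k] for k in range(N)) for n in range(N)]

# linearity check: L(a*v) should equal a*L(v)
h = [[1, 2], [3, 4]]
v = [1.0, -1.0]
a = 2.5
lhs = linear_system([a * x for x in v], h)
rhs = [a * y for y in linear_system(v, h)]
print(lhs == rhs)  # True
```

The same check with superposed inputs L(v + u) = L(v) + L(u) would verify the other half of linearity.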
By contrast, the systems defined by

L_t(v) = v²(t)
L_t(v) = a + bv(t)
L_t(v) = sgn(v(t))
L_t(v) = sin(v(t))
L_t(v) = e^{−i2πv(t)}

are nonlinear.
(Note that all of the above systems are also memoryless.) Thus a square
law device, a hard limiter (or binary quantizer), a sinusoidal mapping, and
a phase-modulator are all nonlinear systems.
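The first entry in the list, the square law device, makes the failure of linearity concrete (our sketch): scaling the input by a scales the output by a², not by a.

```python
# Sketch: the memoryless square law device L_t(v) = v(t)^2 is not linear,
# since L(a*v) = a^2 * L(v), which differs from a * L(v) for a != 0, 1.
def square_law(v_t):
    return v_t ** 2

a, v_t = 3.0, 2.0
print(square_law(a * v_t), a * square_law(v_t))  # 36.0 12.0
```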
1.6 Shifts
The notion of a shift is fundamental to the association of the argument of a
signal, the independent variable t called "time," with the physical notion of
time. Shifting a signal means starting the signal sooner or later, but leaving
its basic shape unchanged. Alternatively, shifting a signal means redefining
the time origin. If the independent variable corresponds to space instead of
time, shifting the signal corresponds to moving the signal in space without
changing its shape. In order to define a shift, we need to confine interest
to certain index sets T. Suppose that we have a signal g = {g(t); t ∈ T}
and suppose also that T has the property that if t ∈ T and τ ∈ T, then
also t − τ ∈ T. This is obviously the case when T = R or T = Z, but it
is not immediately true in the finite duration case (which we shall remedy
shortly). We can define a new signal

g^{(τ)} = {g^{(τ)}(t); t ∈ T} = {g(t − τ); t ∈ T}

as the original signal shifted or delayed by τ. Since by assumption t − τ ∈ T
for all t ∈ T, the values g(t − τ) are well-defined. The shifted signal can be
thought of as a signal that starts τ seconds after g(t) does and then mimics
it. The property that the difference (or sum) of any two members of T is
also a member of T is equivalent to saying that T is a group in mathematical
terms. For the two-sided infinite cases the shift has the normal meaning.
For example, a continuous time wedge signal with width 2T is depicted in
Figure 1.21 and the same signal shifted by 2T is shown in Figure 1.22.
In the discrete time case, the natural analog is used. The periodic extension
g̃ = {g̃_n; n ∈ Z} of a finite duration signal g = {g_n; n ∈ Z_N} is defined by

g̃_n = g(n mod N); n ∈ Z.    (1.15)
for the triangle signal. The choice of index set [0, T) = {t : 0 ≤ t < T}
does not include the endpoint T because the periodic extension starts its
replication of the signal at T; that is, the signal at time T is the same as
the signal at time 0.
An alternative and equivalent definition of a cyclic shift is to simply
redefine our "time arithmetic" t − τ to mean difference modulo T (thereby
again making T a group) and hence we are defining the shift of {g(t); t ∈
[0, T)} to be the signal {g((t − τ) mod T); t ∈ [0, T)}. Since (t − τ) mod T ∈
[0, T), the shifted signal is well defined.
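The discrete time cyclic shift, with index arithmetic taken modulo N, can be sketched as follows (our code, not the book's):

```python
# Sketch: cyclic shift of a finite duration discrete time signal
# {g_n; n in Z_N}, with "time arithmetic" done modulo N.
def cyclic_shift(g, tau):
    N = len(g)
    return [g[(n - tau) % N] for n in range(N)]

g = [0, 1, 2, 3]
print(cyclic_shift(g, 1))  # [3, 0, 1, 2]
```

Samples shifted past the end wrap around to the beginning, which is exactly what the periodic extension of Eq. (1.15) produces.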
Time-Invariant Systems
We have seen that linear systems handle linear combinations of signals in
a particularly simple way. This fact will make Fourier methods particulary
amenable to the analysis of linear systems. In an analogous manner, some
systems handle shifts of inputs in a particularly simple way and this will
result in further simplifying the application of Fourier methods.
A system L is said to be time invariant or shift invariant or stationary
if shifting the input results in a corresponding shift of the output. To be
precise, a system is time invariant if for any input signal v and any shift τ,
the shifted input signal v^{(τ)} = {v(t − τ); t ∈ T} yields the shifted output
signal

L(v^{(τ)}) = w^{(τ)} = {w(t − τ); t ∈ T}.    (1.16)

In other words, if w(t) = L_t({v(t); t ∈ T}) is the output at time t when
v is the input, then w(t − τ) is the output at time t when the shifted signal
v^{(τ)} is the input.
One can think of a time-invariant system as one which behaves in the
same way at any time. If you apply a signal to the system next week at
this time the effect will be the same as if you apply a signal to the system
now except that the results will occur a week later.
Examples of time-invariant systems include the ideal wire, the ideal
ground, a simple scaling, and an ideal delay. A memoryless system defined
by w(t) = a_t(v(t)) is time invariant if a_t does not depend on t, in which
case we drop the subscript.
As an example of a system that is linear but not time invariant, consider
the infinite duration continuous time system defined by

w(t) = v(t) cos(2πf₀t); t ∈ R.

Shifting the input by π/2 does not shift the output by π/2. Alternatively, the system
is time-varying because it always produces an output of 0 when 2πf₀t is
an odd multiple of π/2. Thus the action of the system at such times is
different from that at other times.
Another example of a time-varying system is given by the infinite duration
continuous time system

w(t) = v(t)Π(t); t ∈ R.

This system can be viewed as one which closes a switch and passes the
input during the interval [−1/2, 1/2], but leaves the switch open (producing a
zero output) otherwise. This system is easily seen to be linear by direct
substitution, but it is clearly not time invariant. For example, shifting an
input of v(t) = Π(t) by 1 time unit produces an output of 0, not a shifted
square pulse. Another way of thinking about a time-invariant system is
that its action is independent of the definition of the time origin t = 0.
A more subtle example of a time-varying system is given by the continuous
time "stretch" system, a system which compresses or expands the
time scale of the input signal. Consider the system which maps a signal
{v(t); t ∈ R} into a stretched signal defined by {v(at); t ∈ R}; i.e., we have
a system mapping L that maps an input signal {v(t); t ∈ R} into an output
signal {w(t); t ∈ R} where w(t) = v(at). Assume for simplicity that a > 0
so that no time reversal is involved.
Shift the input signal to form a new input signal {v^τ(t); t ∈ R} defined
by v^τ(t) = v(t − τ). If this signal is put into the system, the output signal,
say w₀(t), is defined by w₀(t) = v^τ(at) = v(at − τ).
On the other hand, if the unshifted v is put into the system to get
w(t) = v(at), and then the output signal is delayed by τ, then w^τ(t) =
w(t − τ) = v(a(t − τ)) = v(at − aτ), since now w directly plugs t − τ into
the functional form defining w.
Since w₀(t) and w(t − τ) are not equal, the system is not time invariant.
The above shows that it makes a difference in which order the stretch and
shift are done.
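The non-commutativity of stretching and shifting is easy to check numerically. A sketch under assumed values a = 2 and τ = 1 (our choices, purely for illustration): shifting then stretching gives v(at − τ), while stretching then shifting gives v(at − aτ), and the two disagree whenever aτ ≠ τ.

```python
def v(t):
    """An arbitrary test input; any non-constant signal will do."""
    return t * t + 1.0

a, tau, t = 2.0, 1.0, 3.0

# Shift the input first, then stretch: the system output is v(at - tau).
w0 = v(a * t - tau)

# Stretch first, then shift the output: w(t) = v(at), so w(t - tau) = v(a(t - tau)).
w_tau = v(a * (t - tau))

print(w0, w_tau)  # unequal whenever a != 1, so the stretch system is time varying
```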
1. 7 Two-Dimensional Signals
Recall that a two-dimensional or 2D signal is taken to mean a signal of the
form {g(x, y); x ∈ T_x, y ∈ T_y}, that is, a signal with a two-dimensional
domain of definition. Two-dimensional signal processing is growing in importance.
Application areas include image processing, seismology, radio astronomy,
and computerized tomography. In addition, signals depending on
two independent variables are important in applied probability and random
or, equivalently,
in two dimensions. This we do using the public domain NIH Image program
as in Figure 1.25. Here the light intensity at each pixel is proportional to the
signal value, i.e., the larger the signal value, the whiter the pixel appears.
Image rescales the pixel values of a signal to run from the smallest value to
the largest value and hence the image appears as a light square in a dark
background.
Both mesh and image representations provide depictions of the same
2D signal.
The above 2D signal was easy to describe because it could be written as
a product, in two "separable" pieces. It is a product of separate signals in
each of the two rectangular coordinates. Another way to construct simple
signals that separate into product terms is to use polar coordinates. To
convert rectangular coordinates (x, y) into polar coordinates (r, θ), set
x = r cos θ and y = r sin θ.
Consider for example the one-dimensional signals g_R(r) = sinc r for all
positive real r and g_Θ(θ) = 1 for all θ ∈ [−π, π). Form the 2D signal from
these two 1D signals by

g(x, y) = g_R(r)g_Θ(θ)

for all real x and y. Once again the signal is separable, but this time in
polar coordinates.
A simple and common special case of separable signals in polar coordi-
nates is obtained by setting
g_Θ(θ) = 1 for all θ

so that

g(x, y) = g_R(r). (1.21)
256 x 256 section of a digitized version of the Mona Lisa taken from the NIH
collection of image examples and depicted in Figure 1.31. The second image
is a magnetic resonance (MR) brain scan image, which we shall refer to as
"Eve." This image is 256 x 256 pixels and is 8-bit gray scale as previously
discussed. The printed version is, however, half-toned.
it as a piece of an infinite duration sine wave {sin(ωt); t ∈ R} or even as a
time-limited waveform that lasts forever and assumes the value 0 for t not
in [0, T). Which model is more "correct"? Neither; the appropriate choice
for a particular problem depends on convenience and the goal of the analysis.
If one only cares about system behavior during [0, T), then the finite
duration model is simpler and leads to finite limits of integrals and sums.
If, however, the signal is to be used in a system whose behavior outside
this time range is important, then the infinite (or at least larger) duration
model may be better. Knowing only the output during time [0, T) may
force one to guess the behavior for the rest of time, and this can be
done in more than one way. If we know the oscillator behaves identically
for a long time, then repeating the sinusoid is a good idea. If we do not
know what mechanism produced the sinusoid, however, it may make more
sense to set unknown values to zero or something else. The only general
1.8. SAMPLING, WINDOWING, AND EXTENDING 37
g_n = g(nT); n ∈ Z; (1.24)
the original and the original may or may not be reconstructible from its
sampled version. In other words, the sampling operation is not necessarily
invertible. One of the astonishing results in Fourier analysis (which we
will prove in a subsequent chapter) is the Whittaker-Shannon-Kotelnikov
sampling theorem which states that under certain conditions having to do
with the shape of the Fourier transform of 9 and the sampling period T,
the original waveform can (in theory) be perfectly reconstructed from its
samples. This result is fundamental to sampled-data systems and digital
signal processing of continuous waveforms. The sampling idea also can be
used if the duration of the original signal is finite.
Figure 1.1 shows a continuous time sinusoid having a frequency of one
Hz; that is, g(t) = sin(2πt). Figure 1.33 shows the resulting discrete time
signal formed by sampling the continuous time signal using a sampling
period of T = .1, that is, sin(2πt) for t = n/10 and integer n. Note that
the sampled waveform looks different in shape, but it is still periodic and
resembles the sinusoid. Figure 1.2 shows the resulting discrete time signal
g_n = sin(2πn/10), where we have effectively scaled the time axis so that
there is one time unit between each sample.
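The sampling operation itself is a one-liner. A sketch reproducing g_n = sin(2πn/10) with the sampling period T = 0.1 used in the text:

```python
import math

T = 0.1  # sampling period from the text
g = [math.sin(2 * math.pi * n * T) for n in range(40)]  # g_n = g(nT) = sin(2*pi*n/10)

# The sampled signal repeats every 10 samples, mirroring the 1 Hz period
# of the underlying continuous time sinusoid.
print(g[:11])
```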
as to have one time unit between consecutive samples yields the discrete
time signal g_n = sin(2n/3) shown in Figure 1.35. This discrete time signal
is not periodic in n (for example, it never returns to the value 0 = sin 0)
and it bears less resemblance to the original continuous time signal. This
simple example shows that discrete time signals obtained from continuous
time signals can be quite different in appearance and behavior even if the
original signal being sampled is fixed.
p(t) = { 1  if 0 ≤ t < T
         0  otherwise (1.25)

as depicted in Figure 1.36 for T = .1. This pulse is an example of a time-limited
signal, an infinite duration signal that is nonzero only on an interval
a continuous time signal {g(t); t ∈ R}, we can define a continuous time
window function w(t) with w(t) = 0 for t not in [0, T) and then define the
windowed and truncated signal g̃ = {g̃(t) = g(t)w(t); t ∈ [0, T)}. Once
again, the constant window is called a "boxcar" window. If the continuous
1 Hz sine wave (a portion of which is shown in Figure 1.1) is truncated
to the time interval [−.5, .5], then the resulting finite duration signal is as
shown in Figure 1.39.
g̃_n = { g_n  if n ∈ Z_N
        0    otherwise (1.30)
The infinite duration signal has simply taken the finite duration signal and
inserted zeros for all other times. Observe that if the finite duration signal
was originally obtained by windowing an infinite duration signal, then it is
likely that the infinite duration signal constructed as above from the finite
duration signal will differ from the original infinite duration signal. The
one notable exception will be if the original infinite duration signal was in
fact 0 outside of the window ZN, in which case the original signal will be
perfectly reconstructed. Extending a finite duration signal by zero filling
always produces a time-limited signal.
In a similar fashion, a continuous time finite duration signal can be
extended by zero filling. The signal {g(t); t ∈ [0, T)} can be extended to
the infinite duration signal

g̃(t) = { g(t)  if t ∈ [0, T)
         0     otherwise.
As an example, extending the finite duration sinusoid of Figure 1.39 by
zero filling produces an infinite duration signal which has one period of
a sinusoid and is zero elsewhere, as illustrated in Figure 1.40. Another
example is given by noting that the two-sided continuous time ideal pulse
can be viewed as a one-sided box function extended by zero filling.
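Extension by zero filling can be sketched in a few lines (the function name is ours, not the book's):

```python
def zero_fill(g, n):
    """Extend a finite duration discrete time signal {g_n; n in Z_N}
    to all integers n by inserting zeros outside the index set."""
    return g[n] if 0 <= n < len(g) else 0.0

g = [1.0, 2.0, 3.0]
print([zero_fill(g, n) for n in range(-2, 5)])  # zeros surround the original samples
```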
Periodic Extension
Another approach to constructing an infinite duration signal from a finite
duration signal is to replicate the finite duration signal rather than just
insert zeros. For example, given a discrete time finite duration signal g =
{g_n; n = 0, 1, …, N − 1} we can form an infinite duration signal g̃ =
{g̃_n; n ∈ Z} by defining

g̃_n = g_{n mod N}, (1.31)

where the mod operation was defined in (0.1). Note that the infinite duration
signal g̃ has the property that g̃_{n+N} = g̃_n for all integers n; that is, it is
periodic with period N.
A discrete parameter signal {p(x); x ∈ T} with the properties that
p(x) ≥ 0 for all x and

∑_{x∈T} p(x) = 1 (1.35)

is called a probability mass function or pmf for short. (Here T is called the
alphabet.) A continuous parameter signal {p(x); x ∈ T} with the properties
that p(x) ≥ 0 for all x and

∫_{x∈T} p(x) dx = 1 (1.36)

is called a probability density function or pdf.
1.10 Problems
1.1. Which of the following signals are periodic and, if so, what is the
period?
(a) {sin(2πft); t ∈ (−∞, ∞)}
(b) {sin(2πfn); n ∈ {…, −1, 0, 1, …}} with f a rational number.
(c) Same as the previous example except that f is an irrational
number.
(d) {∏_{n=1}^{N} sin(2πf_n t); t ∈ (−∞, ∞)}
(e) {sin(2πf₀t) + sin(2πf₁t); t ∈ (−∞, ∞)} with f₀ and f₁ relatively
prime; that is, their only common divisor is 1.
1.2. Is the sum of two continuous time periodic signals having different
periods itself periodic? Is your conclusion the same for discrete time
signals? Here and hereafter questions requiring a yes/no answer also
require a justification of the answer, e.g., a proof for a positive answer
or a counterexample for a negative answer.
1.3. Suppose that g = {g(t); t ∈ R} is an arbitrary signal. Prove that the
signal

g̃(t) = ∑_{n=−∞}^{∞} g(t − nT)

is periodic with period T. Sketch g̃ for the wedge signal

g(t) = { 1 − |t|/T  if |t| ≤ T
         0          else

and for the box signal

g(t) = { 1  if |t| ≤ T
         0  else.
1.4. Prove the basic geometric progression formulas:

∑_{n=0}^{N−1} r^n = (1 − r^N)/(1 − r) (1.37)

and, if |r| < 1,

∑_{n=0}^{∞} r^n = 1/(1 − r).
(a) w_n = v_n − v_{n−1}.
(b) w_n = sgn(v_n).
(c) w_n = r^{v_n}, |r| < 1.
(d) w_n = ∑_{k=−∞}^{n} v_k.
(e) w_n = a v_n + b, a, b real constants.
1.8. The following systems describe infinite duration continuous time sys-
tems with input v(t) and output w(t). Are the systems linear? time
invariant? (Justify your answers!)
1.10. Define the infinite duration continuous time signal g(t) = e^{−t}u_{−1}(t)
for all t ∈ R.
approximation ĝ defined by
for all real x, y. Provide a labeled sketch of the mesh and image forms
of the signal
signals). For reasons that will be made clear in the next section, we do not
always need to consider the Fourier transform of a signal to be defined for
all real f; each signal type will have a corresponding domain of definition
for f. There is nothing magic about the sign of the exponential, but the
choice of the negative sign is the most common for the Fourier transform.
The inverse Fourier transform will later be seen to take a similar form except
that the sign of the exponent will be reversed.
We sometimes refer to the original signal g as a time domain signal and
the second signal G as a frequency domain signal or spectrum. We will denote
the general mapping by F; that is,

G = F(g). (2.2)

When we wish to emphasize the value of the transform for a particular
frequency f we write
• Have we lost information by taking the transform; that is, can the
original signal be recovered from its spectrum? Is the Fourier trans-
form invertible?
• What are the basic properties of the mapping, e.g., linearity and
symmetry?
• What happens to the spectrum if we do something to the original
signal such as scale it, shift it, filter it, scale its argument, or modulate
it? By filtering we include, for example, integrating or differentiating
continuous time signals and summing or differencing discrete time
signals.
• Suppose that we are given two signals and their transforms. If we
combine the signals to form a new signal, e.g., using addition, mul-
tiplication, or convolution, how does the transform of the new signal
relate to those of the old signals?
• What happens to the spectrum if we change the signal type, e.g.,
sample a continuous signal or reconstruct a continuous signal from a
discrete one?
Before specializing the basic definitions to the most important cases,
it is useful to make several observations regarding the definitions and the
quantities involved. The basic definitions require that the sum or integral
exists, e.g., the limits defining the Riemann integrals converge. If the sum
or integral exists, we say the Fourier transform exists. To distinguish the
two cases of discrete and continuous T we often speak of the sum form as a
discrete time (or parameter) Fourier transform or DTFT, and the integral
form as the continuous time (or parameter) Fourier transform or CTFT
or integral Fourier transform. Note that even if the original signal is real,
its transform is in general complex valued because of the multiplicative
complex exponential e^{−i2πft}.
The dimensions of the frequency variable f are inverse to those of t.
Thus if t has seconds as units, f has cycles per second or hertz as units.
If t has meters as units, f has cycles/meter as units. If t uses the dimensionless
spatial units of distance/wavelength, then f has cycles as units. The symbol
ω = 2πf is also commonly used as the frequency variable, the units of ω being
radians per second (or radians per meter, etc.).
One fundamental difference between the discrete and continuous time
cases follows from the fact that the exponential e^{−i2πfn} is a periodic function
in f with period one for every fixed integer n; that is,

e^{−i2π(f+1)n} = e^{−i2πfn}e^{−i2πn} = e^{−i2πfn}, all f ∈ R.
This means that if we consider a DTFT G(f) to be defined for all real f,
it is a periodic function with period 1 (since sums of periodic functions of
a common period are also periodic). Thus

G(f + 1) = G(f) (2.4)

for the DTFT of discrete time signals. G(f) does not exhibit this behavior in
the CTFT case; that is, e^{−i2πft} is not periodic in f with a fixed period for
all values of t ∈ T when T is continuous. The periodicity of the spectrum
in the discrete time case means that we can restrict consideration of the
spectrum to only a single period of its argument when performing our
analysis.
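The period-1 behavior in (2.4) is easy to verify numerically for a short signal (the signal values below are chosen arbitrarily):

```python
import cmath

def dtft(g, f):
    """DTFT of a discrete time signal supported on n = 0, ..., len(g) - 1."""
    return sum(gn * cmath.exp(-2j * cmath.pi * f * n) for n, gn in enumerate(g))

g = [1.0, -0.5, 0.25, 2.0]  # an arbitrary short signal
f = 0.3
# The spectrum is periodic in f with period 1: G(f + 1) = G(f).
print(abs(dtft(g, f + 1) - dtft(g, f)))
```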
In addition to distinguishing the Fourier transforms of discrete time
signals and continuous time signals, the transforms will exhibit different
behavior depending on whether or not the index set T is finite or infinite,
that is whether or not the signal has finite duration or infinite duration. For
example, if g has a finite duration index set T = Z_N = {0, 1, …, N − 1},
then

G(f) = ∑_{n=0}^{N−1} g(n)e^{−i2πfn}. (2.5)
To define a Fourier transform completely we need to specify the domain
of definition of the frequency variable f. While the transforms appear to
be defined for all real f, in many cases only a subset of real frequencies will
be needed in order to recover the original signal and have a useful theory.
We have already seen, for example, that all the information in the spec-
trum of a discrete time signal can be found in a single period and hence if
T = Z, we could take the frequency domain to be S = [0,1) or [-1/2,1/2),
for example, since knowing G(f) for f ∈ [0, 1) gives us G(f) for all real
f by taking the periodic extension of G(f) of period 1. We introduce the
appropriate frequency domains at this point so as to complete the defi-
nitions of the Fourier transforms and to permit a more detailed solution
of the examples. The reasons for these choices, however, will not become
clear until the next chapter. The four basic types of Fourier transform are
presented together with their most common choice of frequency domain of
definition in Table 2.1. When evaluating Fourier transforms it will often
be convenient first to find the functional form for arbitrary real f and then
to specialize to the appropriate set of frequencies for the given signal type.
This is particularly true when we may be considering differing signal types
having a common functional form.
Common alternatives are to use a two-sided finite duration DTFT

G(f) = ∑_{n=−N}^{N} g_n e^{−i2πfn};
Duration    Discrete Time               Continuous Time
Finite      f ∈ {0, 1/N, …, (N−1)/N}    f ∈ {k/T; k ∈ Z}
Infinite    f ∈ [−1/2, 1/2)             f ∈ (−∞, ∞)

Table 2.1: Frequency domains for the four types of Fourier transform.

defined for the frequencies

f ∈ {−N/(2N+1), …, −1/(2N+1), 0, 1/(2N+1), …, N/(2N+1)},

and a two-sided finite duration CTFT

G(f) = ∫_{−T}^{T} g(t)e^{−i2πft} dt; f ∈ {k/T; k ∈ Z}.
It is also common to replace the frequency domain for the infinite duration
DTFT by [0, 1). There is some arbitrariness in these choices, but as we shall
see the key point is to use a frequency domain which suffices to invert the
transform. The reader is likely to encounter an alternative notation for the
DTFT. Many books that treat the DTFT as a variation on the z transform
(which we will consider later) write G(e^{i2πf}) instead of the simpler G(f).
A discrete time finite duration Fourier transform or finite duration
DTFT defined for the frequencies {0, 1/N, …, (N − 1)/N} is also called a
discrete Fourier transform or DFT because of the discrete nature of both
the time domain and frequency domain. It is common to express the transform
as G(k) instead of G(k/N) in order to simplify the notation, but one
should keep in mind that the frequency is the normalized k/N and not the
integer k.
The DFT can be expressed in the form of vectors and matrices: Given a
signal g = {g_n; n = 0, 1, …, N − 1}, suppose that we consider it as a column
vector g = (g₀, g₁, …, g_{N−1})^t, where the superscript denotes the transpose
of the vector (which makes the row vector written in line with the text a
column vector). We will occasionally use boldface notation for vectors when
we wish to emphasize that we are considering them as column vectors and
we are doing elementary linear algebra using vectors and matrices. Similarly
let G denote the DFT vector (G(0), G(1/N), …, G((N − 1)/N))^t. Lastly,
define the N × N square matrix W by

W = {e^{−i2πjk/N}; k = 0, 1, …, N − 1; j = 0, 1, …, N − 1}. (2.6)
G = Wg =

[ 1   1                  1                  …   1
  1   e^{−i2π/N}         e^{−i4π/N}         …   e^{−i2π(N−1)/N}
  1   e^{−i4π/N}         e^{−i8π/N}         …   e^{−i4π(N−1)/N}
  ⋮   ⋮                  ⋮                      ⋮
  1   e^{−i2π(N−1)/N}    e^{−i4π(N−1)/N}    …   e^{−i2π(N−1)(N−1)/N} ] g. (2.7)
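The matrix form (2.7) maps directly into code. A sketch using NumPy, whose `fft` routine computes exactly the same DFT sum (only faster):

```python
import numpy as np

def dft_matrix(N):
    """The N x N matrix W of (2.6) with entries exp(-i 2 pi j k / N)."""
    k = np.arange(N).reshape(-1, 1)  # row (frequency) index
    j = np.arange(N).reshape(1, -1)  # column (time) index
    return np.exp(-2j * np.pi * j * k / N)

N = 8
g = np.cos(2 * np.pi * np.arange(N) / N)  # an arbitrary real test signal
G = dft_matrix(N) @ g                     # G(k/N) as a matrix-vector product

# numpy's FFT evaluates the same sum G(k/N) = sum_j g_j exp(-i 2 pi j k / N).
print(np.allclose(G, np.fft.fft(g)))
```

The matrix product costs O(N²) operations; the FFT reorganizes the same computation into O(N log N), which is why practical implementations reduce to it.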
During much of the book we will attempt to avoid actually doing integra-
tion or summation to find transforms, especially when the calculus strongly
resembles something already done. Instead the properties of transforms will
be combined with an accumulated collection of simple transforms to obtain
new, more complicated transforms. The simple examples to be treated can
be considered as a "bootstrap" for this approach; a modicum of calculus
now will enable us to take many shortcuts later.
G(f) = ∑_{n=−∞}^{∞} δ_n e^{−i2πfn} = 1. (2.8)

The shifted delta is given by

δ_{n−l} = { 1  if n = l
            0  else. (2.10)
Its DTFT is

∑_{n=−∞}^{∞} δ_{n−l} e^{−i2πfn} = e^{−i2πfl}.
In a similar manner we can show for the DFT and the cyclic shift that
for any l ∈ Z_N

g_n = ∑_l a_l δ_{n−l}, (2.13)

where a_l = g_l. Since summations are linear, taking the Fourier transform
of the signal g thus amounts to taking a Fourier transform of a sum of
scaled and shifted delta functions, which yields a sum of scaled complex
exponentials. Each sample of the input signal yields a single scaled complex
exponential in the Fourier domain, so the entire signal yields a weighted
sum of complex exponentials.
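That decomposition can be spelled out for the DFT: summing one scaled exponential per sample, with weights a_l = g_l, reproduces the transform. A sketch (signal values arbitrary):

```python
import numpy as np

N = 6
g = np.array([2.0, -1.0, 0.5, 3.0, 0.0, 1.0])  # arbitrary finite duration signal

k = np.arange(N)
# The DFT of the shifted delta delta_{n-l} is the exponential exp(-i 2 pi (k/N) l);
# weighting by a_l = g_l and summing over l rebuilds the DFT of g.
G_from_deltas = sum(g[l] * np.exp(-2j * np.pi * k * l / N) for l in range(N))

print(np.allclose(G_from_deltas, np.fft.fft(g)))
```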
Consider next the infinite duration continuous time signal

g(t) = { e^{−λt}  t ≥ 0
         0        otherwise,

which can be written more compactly as

g(t) = e^{−λt}u_{−1}(t); t ∈ R.

Straightforward integration yields the transform

G(f) = 1/(λ + i2πf); f ∈ R. (2.15)

The magnitude and phase of this transform are depicted in Figure 2.1. (The
units of phase are radians here.) In this example the Fourier transform
DTFT) is given by (2.18). Alternatively, L'Hôpital's rule of calculus can be used to find the result from
the geometric progression. Applying the appropriate frequency domain of
definition S_DTFD we have found the Fourier transform (here the DFT):
Figure 2.3: DFT of Finite Duration Geometric Signal with shifted frequency
domain: o=magnitude, *=phase
g_n = { r^n  n = 0, 1, …
        0    n < 0, (2.24)

where now we require that |r| < 1. We can write this signal more compactly
using the unit step function as

g_n = r^n u_{−1}(n); n ∈ Z. (2.25)
Observe that the functional form of the time dependence is the same here
as in the previous finite duration example; the difference is that now the
functional form is valid for all integer times rather than just for a finite set.
The DTFT is

G(f) = ∑_{n=0}^{∞} r^n e^{−i2πfn}, (2.26)

which is given by the geometric progression formula as

G(f) = 1/(1 − re^{−i2πf}). (2.27)

This transform is the discrete time version of (2.15). Note the transforms
do not clearly resemble each other as much as the original signals do.
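The closed form (2.27) can be checked against a partial sum of (2.26); for |r| < 1 the geometric series converges rapidly. A sketch with arbitrarily chosen r and f:

```python
import cmath

r, f = 0.8, 0.17  # assumed test values with |r| < 1
closed_form = 1.0 / (1.0 - r * cmath.exp(-2j * cmath.pi * f))
partial_sum = sum(r ** n * cmath.exp(-2j * cmath.pi * f * n) for n in range(200))

print(abs(partial_sum - closed_form))  # negligibly small: the series has converged
```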
As another example of a DTFT consider the two-sided discrete time box
function

g_n = □_N(n) = { 1  if |n| ≤ N
                 0  n = ±(N + 1), ±(N + 2), …. (2.28)

The DTFT is

G(f) = ∑_{n=−N}^{N} e^{−i2πfn}
     = [cos(2πfN) − cos(2πf(N + 1))] / [1 − cos(2πf)]
     = sin(2πf(N + 1/2)) / sin(πf). (2.29)
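The closed form in (2.29), the Dirichlet kernel, can be verified numerically. A sketch for N = 5 and an arbitrary frequency:

```python
import cmath
import math

N, f = 5, 0.123  # arbitrary test values (f not an integer)
lhs = sum(cmath.exp(-2j * cmath.pi * f * n) for n in range(-N, N + 1))
rhs = math.sin(2 * math.pi * f * (N + 0.5)) / math.sin(math.pi * f)

# The imaginary parts of the sum cancel in +n/-n pairs, leaving the real ratio.
print(abs(lhs - rhs))
```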
This spectrum for the case of Figure 1.6 (N = 5) is thus purely real and
is plotted in Figure 2.7. Note the resemblance to the sinc function.
Note that the calculus would yield G(f) if we considered g to be an infinite
duration signal for which g(t) = 0 for t not in the interval [0, 1). Restricting
the result to the frequency domain of definition for finite duration signals
yields the transform
{g(t); t ∈ [0, 1)} ⊃ {G(k); k ∈ Z}, where

G(k) = { 1/2       k = 0
         i/(2πk)   k ∈ Z, k ≠ 0. (2.32)

For the corresponding infinite duration signal,

{g(t); t ∈ R} ⊃ {G(f); f ∈ R}, where

G(f) = { 1/2                                           f = 0
         ie^{−i2πf}/(2πf) + (e^{−i2πf} − 1)/(2πf)²     f ∈ R, f ≠ 0. (2.33)
Π(t) = { 1    if |t| < 1/2
         1/2  if t = ±1/2
         0    otherwise (2.34)

G(f) = ∫_{−∞}^{∞} Π(t)e^{−i2πft} dt = ∫_{−1/2}^{1/2} e^{−i2πft} dt
     = [e^{−i2πft}/(−i2πf)]_{t=−1/2}^{1/2} = sin(πf)/(πf) = sinc(f). (2.35)
The sinc function was shown in Figure 1.10. Although that was sinc(t) and
this is sinc(J), obviously the name of the independent variable has nothing
to do with the shape of the function.
Riemann integrals are not affected by changing the integrand by a finite
amount at a finite number of values of its argument. As a result, the Fourier
transforms of the two box functions □_{1/2}(t) and Π(t), which differ in value
only at the two endpoints, are the same; hence, we cannot count on inverting
the Fourier transform in an unambiguous fashion. This fact is important
and merits emphasis:
Different signals may have the same Fourier transform and hence
the Fourier transform may not have a unique inverse. As in the
box example, however, it turns out that two signals with the same
Fourier transform must be the same for most values of their argu-
ment ("almost everywhere," to be precise).
G(f) = ∫_{−∞}^{∞} □_T(t)e^{−i2πft} dt = ∫_{−T}^{T} e^{−i2πft} dt
     = [e^{−i2πft}/(−i2πf)]_{t=−T}^{T} = sin(2πTf)/(πf) = 2T sinc(2Tf), (2.37)

and hence

{□_T(t); t ∈ R} ⊃ {2T sinc(2Tf); f ∈ R}. (2.38)
This is another example of a real-valued spectrum. Note the different forms
of the spectra of the discrete time box function of (2.29) and the continuous
time transform of (2.37). We also remark in passing that the spectrum of
(2.37) has the interesting property that its samples at frequencies of the
form k/2T are 0:

G(k/2T) = 0; k = ±1, ±2, …, (2.39)

as can be seen in Figure 1.10. Thus the zeros of the sinc function are
uniformly spaced.
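The uniformly spaced zeros of (2.39) appear immediately when the closed form (2.37) is evaluated numerically. A sketch with T = 1/2 (an assumed value chosen so the zeros fall on the integers):

```python
import math

def box_spectrum(T, f):
    """Closed form (2.37): the transform of the box on [-T, T] is 2T sinc(2Tf)."""
    if f == 0.0:
        return 2.0 * T
    return math.sin(2.0 * math.pi * T * f) / (math.pi * f)

T = 0.5
# Samples at f = k/(2T), k = +-1, +-2, ..., fall exactly on the sinc zeros.
print([box_spectrum(T, k / (2.0 * T)) for k in range(1, 4)])
```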
We have completed the evaluation, both analytically and numerically,
of a variety of Fourier transforms. Many of the transforms just developed
will be seen repeatedly throughout the book. The reader is again referred
to the Appendix where these and other Fourier transform relations are
summarized.
G(f) = ∫_{t∈T} g(t) cos(2πft) dt − i ∫_{t∈T} g(t) sin(2πft) dt
     = C_f(g) − iS_f(g), (2.40)

where C_f(g) is called the cosine transform of g(t) and S_f(g) is called the sine
transform. (There often may be a factor of 2 included in the definitions.)
Observe that if the signal is real-valued, then the Fourier transform can be
found by evaluating two real integrals. The cosine transform is particularly
important in image processing where a variation of the two-dimensional
discrete time cosine transform is called the discrete cosine transform or
DCT in analogy to the DFT. Its properties and computation are studied
in detail in Rao and Yip [27].
then the resulting transform is called the Hartley transform and many of its
properties and applications strongly resemble those of the Fourier transform
since the Hartley transform can be considered as a simple variation on the
Fourier transform.
An alternative way to express the Hartley transform is by defining the
cas function

cas(x) ≜ cos(x) + sin(x) (2.42)

and then writing

H_f(g) = ∫_{t∈T} g(t) cas(2πft) dt. (2.43)
provided that the sum exists, i.e., that the sum converges to something finite.
The region of z in the complex plane where the sum converges is called
the region of convergence or ROC. When the sum is taken over a two-sided
index set T such as the set Z of all integers or {−N, …, −1, 0, 1, …, N},
the transform is said to be two-sided or bilateral. If it is taken over the
set of nonnegative integers or a set of the form 0, 1, …, N, it is said to be
one-sided or unilateral. Both unilateral and bilateral transforms have their
uses and their properties tend to be similar, but there are occasionally differences
in details. We shall focus on the bilateral transform as it is usually
the simplest. If G(f) is the Fourier transform of g_n, then formally we have
that

G(f) = G_z(e^{i2πf});
that is, the Fourier transform is just the z transform evaluated at e^{i2πf}. For
this reason texts treating the z transform as the primary transform often
use the z transform notation G(e^{i2πf}) to denote the DTFT, but we prefer
the simpler Fourier notation of G(f). If we restrict f to be real, then the
Fourier transform is just the z transform evaluated on the unit circle in the
z plane. If, however, we permit f to be complex, then the two are equally
general and are simply notational variants of one another.
Why let f be complex or, equivalently, let z be an arbitrary complex
number? Provided |z| ≠ 0, we can write z in magnitude-phase notation as
z = r^{−1}e^{iθ}. Then the z transform becomes

G_z(z) = ∑_{n∈T} (g_n r^n) e^{−inθ}.

This can be interpreted as a Fourier transform of the new signal g_n r^n and
this transform might exist even if that of the original signal g_n does not, since
the r^n can serve as a damping factor. Put another way, a Fourier transform
was said to exist if G(f) made sense for all f, but the z transform does
not need to exist for all z to be useful, only within its ROC. In fact, the
existence of the Fourier transform of an infinite duration signal is equivalent
to the ROC of its z transform containing the unit circle {z : |z| = 1}, which
is the region of all z = e^{i2πf} for real f.
As an example, consider the signal g_n = u_{−1}(n) for n ∈ Z. Then the
ordinary Fourier transform of g does not exist for all f, e.g., it blows up for
f = 0. Choosing |r| < 1, however, yields a modified signal g_n r^n which, as
we have seen, has a transform. In summary, the z transform will exist for
some region of possible values of z even though the Fourier transform may
not. The two theories, however, are obviously intimately related.
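The damping idea can be seen concretely for the unit step (the parameter values below are our choices, for illustration): undamped, the sum at f = 0 is 1 + 1 + ⋯ and diverges; damped by r^n with |r| < 1 it converges to 1/(1 − r).

```python
import cmath

def damped_dtft(r, f, terms=500):
    """Truncated DTFT of g_n = u_{-1}(n) r^n, the unit step damped by r^n;
    this is the z transform of the step evaluated off the unit circle."""
    return sum(r ** n * cmath.exp(-2j * cmath.pi * f * n) for n in range(terms))

# With r = 0.9 the sum at f = 0 converges to 1 / (1 - 0.9) = 10;
# with r = 1 (no damping) it just grows linearly with the number of terms.
print(damped_dtft(0.9, 0.0))
```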
The Laplace transform plays the same role for continuous time waveforms.
The Laplace transform of a continuous time signal g is defined by

G_L(s) = ∫_{t∈T} g(t)e^{−st} dt. (2.45)

As for the z transform, for the case where T is the real line, one can define a
bilateral and a unilateral transform, the latter being the bilateral transform
of g(t)u_{−1}(t). As before, we focus on the bilateral case.
If we replace i2πf in the Fourier transform by s we get the Laplace
transform, although the Laplace transform is more general since s can be
complex instead of purely imaginary. Letting the f in a Fourier transform
take on complex values is equivalent in generality. Once again, the Laplace
transform can exist more generally since, with proper choice of s, the orig-
inal signal is modified in an exponentially decreasing fashion before taking
a Fourier transform.
The primary advantage of Laplace and z-transforms over Fourier trans-
forms in engineering applications is that the one-sided transforms provide a
natural means of incorporating initial conditions into linear systems analy-
sis. Even in such applications, however, two-sided infinite duration Fourier
transforms can be used if the initial conditions are incorporated using delta
functions.
It is natural to inquire why these two variations of the Fourier transform
with the same general goal are accomplished in somewhat different ways.
In fact, one could define a Laplace transform for discrete time signals by
replacing the integral by a sum, and one could define a z-transform for con-
tinuous time signals by replacing the sum by an integral. The latter
transform is called a Mellin transform and it is used in the mathematical
theory of Dirichlet series. The differences of notation and approach for
what are clearly closely related transforms are attributable to history; they
arose in different fields and were developed independently.
The limits of integration depend on the index set \mathcal{T}; they can be finite or
infinite. Likewise the frequency domain is chosen according to the nature
of \mathcal{T}.
In the discrete parameter case the transform is the same with integrals
replaced by sums:

G(f) = \sum_{t \in \mathcal{T}} g(t)\, e^{-i2\pi tf}   (2.48)

where

G_x(f_y) \triangleq \sum_{y} g(x,y)\, e^{-i2\pi y f_y}.   (2.51)
Separable 2D Signals
The evaluation of Fourier transforms of 2D signals is much simplified in the
special case of 2D signals that are separable in rectangular coordinates; i.e.,
if

g(x,y) = g_x(x)\, g_y(y),

then the computation of 2D Fourier transforms becomes particularly simple.
For example, in the continuous time case

G(f_x, f_y) = G_x(f_x)\, G_y(f_y),

the product of two 1-D transforms. In effect, separability in the space do-
main implies a corresponding separability in the frequency domain. As a
simple example of a 2D Fourier transform, consider the continuous param-
eter 2D box function of (1.17): g(x,y) = \Box_T(x)\,\Box_T(y) for all real x and y.
The separability of the signal makes its evaluation easy:
G(\rho,\phi) = \int_0^{\infty} dr\, r\, g_R(r) \int_0^{2\pi} d\theta\, e^{-i2\pi[r\rho\cos\theta\cos\phi + r\rho\sin\theta\sin\phi]}
= \int_0^{\infty} dr\, r\, g_R(r) \int_0^{2\pi} d\theta\, e^{-i2\pi r\rho\cos(\theta-\phi)}.

To simplify this integral we use an identity for the zero-order Bessel
function of the first kind:

J_0(a) = \frac{1}{2\pi} \int_0^{2\pi} e^{-ia\cos(\theta-\phi)}\, d\theta.   (2.53)

Thus, with g_R(r) = \Box_1(r),

G(\rho) = 2\pi \int_0^{\infty} r\, \Box_1(r)\, J_0(2\pi r\rho)\, dr
= 2\pi \int_0^1 r\, J_0(2\pi r\rho)\, dr.

Make the change of variables r' = 2\pi r\rho so that r = r'/(2\pi\rho) and dr =
dr'/(2\pi\rho). Then

G(\rho) = \frac{1}{2\pi\rho^2} \int_0^{2\pi\rho} r'\, J_0(r')\, dr'.

It is a property of Bessel functions that

\int_0^{x} \zeta J_0(\zeta)\, d\zeta = x J_1(x),

where J_1(x) is a first-order Bessel function. Thus

G(\rho) = \frac{J_1(2\pi\rho)}{\rho}.
large. We then form a Riemann sum approximation to the integral as

G(f) = \int g(t)\, e^{-i2\pi ft}\, dt \approx \sum_{n=-M}^{M} g(nT)\, e^{-i2\pi fnT}\, T.   (2.55)

Defining g_n \triangleq g((n-M)T) for n = 0, \ldots, N-1 with N = 2M+1, this becomes

G(f) \approx \sum_{n=-M}^{M} g(nT)\, e^{-i2\pi fnT}\, T
= e^{i2\pi fMT}\, T \sum_{n=0}^{N-1} g_n\, e^{-i2\pi fnT}
= e^{i2\pi fMT}\, T\, \hat{G}(fT),   (2.56)

where \hat{G} is the Fourier transform of the finite duration sequence \{g_n\}.
In particular, setting f = k/(NT) yields

G\!\left(\frac{k}{NT}\right) \approx e^{i2\pi kM/N}\, T\, G_k;\quad k = -M, \ldots, 0, \ldots, M,   (2.58)
which provides an approximation for a large discrete set of frequencies that
becomes increasingly dense in the real line as T shrinks and NT grows.
The approximation is given in terms of the DFT scaled by a complex ex-
ponential; i.e., a phase term with unit magnitude, and by the sampling
period.
The above argument makes the point that the DFT is useful more gener-
ally than in its obvious environment of discrete time finite duration signals.
It can be used to numerically evaluate the Fourier transform of continuous
time signals by approximating the integrals by Riemann sums.
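As a sketch of this procedure (the Gaussian test signal and the step sizes are my own choices, not the book's), the CTFT of g(t) = e^{-\pi t^2}, known in closed form as G(f) = e^{-\pi f^2}, is approximated very well by the Riemann sum of (2.55):

```python
import numpy as np

# Hypothetical example: g(t) = exp(-pi t^2) has exact CTFT G(f) = exp(-pi f^2).
T = 0.01          # sampling period (assumed small)
M = 2000          # the sum covers |t| <= 20, where g is negligible
n = np.arange(-M, M + 1)
g = np.exp(-np.pi * (n * T) ** 2)

f = 1.0
# Riemann sum approximation to the Fourier integral, as in (2.55)
approx = np.sum(g * np.exp(-2j * np.pi * f * n * T)) * T
exact = np.exp(-np.pi * f ** 2)
print(abs(approx - exact))  # very small
```

Because the Gaussian is smooth and rapidly decaying, the Riemann sum is accurate far beyond what the step size alone would suggest.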
2.6. THE FAST FOURIER TRANSFORM 81

(a + ib)(c + id) = (ac - bd) + i(ad + bc),   (2.59)

which consists of four real multiplies and two real adds, and a complex
addition has the form

(a + ib) + (c + id) = (a + c) + i(b + d).

Computation of all N DFT coefficients G(0), \ldots, G((N-1)/N) then re-
quires a total of N^2 complex-multiply-and-adds, or 4N^2 real multiplies and
4N^2 real adds.
If k represents the time required for one complex-multiply-and-add, then
the computation time T_d required for this "direct" method of computing
a DFT is T_d = kN^2. An approach requiring computation proportional to
N \log_2 N instead of N^2 was popularized by Cooley and Tukey in 1965 and
dubbed the fast Fourier transform or FFT [13]. The basic idea of the algo-
rithm had in fact been developed by Gauss and considered earlier by other
authors, but Cooley and Tukey are responsible for introducing the algo-
rithm into common use. The reduction from N^2 to N \log_2 N is significant
if N is large. These numbers do not quite translate into proportional com-
putation time because they do not include the non-arithmetic operations
of shuffling data to and from memory.
for all n such that nM \in \mathcal{T}. Thus g_n^{(M)} is formed by taking every Mth
sample of g_n. This new signal is called a downsampled version of the original
signal. Downsampling is also called decimation after the Roman army
practice of decimating legions with poor performance (by killing every tenth
soldier). We will try to avoid this latter nomenclature as it leads to silly
statements like "decimating a signal by a factor of 3" which is about as
sensible as saying "halving a loaf into three parts." Furthermore, it is an
incorrect use of the term since the decimated legion referred to the survivors
and hence to the 90% of the soldiers who remained, not to the every tenth
soldier who was killed. Unfortunately, however, the use of the term is so
common that we will need to use it on occasion to relate our discussion to
the existing literature.
We change notation somewhat in this section in order to facilitate the
introduction of several new sequences that arise and in order to avoid re-
For an 8-point g(n), the downsampled signals g_0(n) and g_1(n) have four
points as in Figure 2.8.
The direct Fourier transforms of the downsampled signals g_0 and g_1,
say G_0 and G_1, can be computed as

G_0(m) = \sum_{n=0}^{\frac{N}{2}-1} g_0(n)\, e^{-i\frac{4\pi}{N}mn} = \sum_{n=0}^{\frac{N}{2}-1} g(2n)\, W^{2mn}   (2.64)

G_1(m) = \sum_{n=0}^{\frac{N}{2}-1} g_1(n)\, e^{-i\frac{4\pi}{N}mn} = \sum_{n=0}^{\frac{N}{2}-1} g(2n+1)\, W^{2mn},   (2.65)

where m = 0, 1, \ldots, \frac{N}{2} - 1. As we have often done before, we now observe
that the Fourier sums above can be evaluated for all integers m and that the
sums are periodic in m, the period here being N/2. Rather than formally
define the periodic extensions of G_0 and G_1 and cluttering the notation
further (in the past we put tildes over the function being extended), here
we just consider the above sums to define G_0 and G_1 for all m and keep in
mind that the functions are periodic.
[Figure 2.8: An 8-point signal g(n) and the two 4-point downsampled signals g_0(n) and g_1(n).]
G(m) = \sum_{n=0}^{N-1} g(n)\, e^{-i\frac{2\pi}{N}mn}
= \sum_{n=0}^{\frac{N}{2}-1} g(2n)\, e^{-i\frac{2\pi}{N}m(2n)} + \sum_{n=0}^{\frac{N}{2}-1} g(2n+1)\, e^{-i\frac{2\pi}{N}m(2n+1)}
= G_0(m) + G_1(m)\, e^{-i\frac{2\pi m}{N}};\quad m = 0, 1, \ldots, N-1.

Note that this equation makes sense because we extended the definition of
G_0(m) and G_1(m) from \mathcal{Z}_{N/2} to \mathcal{Z}_N.
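This combining identity is easy to verify numerically; the following sketch (using an arbitrary random 8-point signal of my choosing) checks G(m) = G_0(m) + G_1(m)e^{-i2\pi m/N} with the N/2-point DFTs extended periodically:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
g = rng.standard_normal(N)

G = np.fft.fft(g)            # the N-point DFT
G0 = np.fft.fft(g[0::2])     # N/2-point DFT of the even-indexed samples
G1 = np.fft.fft(g[1::2])     # N/2-point DFT of the odd-indexed samples

m = np.arange(N)
# Periodic extension: index the N/2-point DFTs modulo N/2.
combined = G0[m % (N // 2)] + np.exp(-2j * np.pi * m / N) * G1[m % (N // 2)]
print(np.allclose(G, combined))  # True
```

Note that numpy's fft uses the same e^{-i2\pi kn/N} sign convention as the text, so the identity holds term by term.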
We have thus developed the scheme for computing G(m) given the
smaller sample DFTs Go and G l as depicted in Figure 2.9 for the case
of N = 8. In the figure, arrows are labeled by their gain, with unlabeled
arrows having a gain of 1. When two arrow heads merge, the signal at that
point is the sum of the signals entering through the arrows. The connection
pattern in the figure is called a butterfly pattern.
The total number of complex-multiply-and-adds in this case is now

\underbrace{2\left(\frac{N}{2}\right)^2}_{\text{Two } N' = 4 \text{ DFTs}} + \underbrace{N}_{\text{Combining Step}}
2.6. THE FAST FOURIER TRANSFORM 85
[Figure 2.9: Butterfly flow graph combining two N' = 4 DFTs (of the even and odd samples) into the 8-point DFT G(0), \ldots, G(7).]
This implies that we can expand on the left side of the previous flow graph
to get the flow graph shown in Figure 2.10. Recall that any unlabeled
branch has a gain of 1. We preserve the branch labels of W O to highlight
the structure of the algorithm.
The number of computations now required is

\underbrace{4\left(\frac{N}{4}\right)^2}_{\text{Four } N'' = 2 \text{ DFTs}} + \underbrace{N}_{\text{This Step Combination}} + \underbrace{N}_{\text{Previous Step Combination}}
[Figure 2.10: Flow graph computing the 8-point DFT from four N'' = 2 DFTs (with outputs G_{00}, G_{01}, G_{10}, G_{11}) followed by two combining steps.]
In other words, the DFTs of the one-point signals are given by the signals
themselves. We can now work backwards to find G_{00}, G_{01}, G_{10}, and G_{11}
as follows:
G_{00}(m) = g(0) + g(4)\, W^{4m};\quad m = 0, 1
G_{01}(m) = g(2) + g(6)\, W^{4m};\quad m = 0, 1
G_{10}(m) = g(1) + g(5)\, W^{4m};\quad m = 0, 1
G_{11}(m) = g(3) + g(7)\, W^{4m};\quad m = 0, 1.
\boxed{N \log_2 N \text{ complex-multiply-and-adds}}

This is the generally accepted computational complexity of the FFT.
There are, however, further tricks and variations that can yield lower com-
plexity. For example, of the N \log_2 N multiplies, \frac{N}{2}\log_2 N can be elimi-
nated by noting that W^4 = -W^0, W^5 = -W^1, W^6 = -W^2, and W^7 =
-W^3, allowing half the multiplies to be replaced by inverters as shown in
Fig. 2.12.
The astute reader will have observed a connection between the binary
vector subscripts of the final single sample signals (and the correspond-
ing trivial DFTs) with the corresponding sample of the original signal in
Eq. 2.69. If the index of the original signal is represented in binary and
then reversed, one gets the subscript of the single sample final downsampled
sequence. For example, writing the argument 3 of g(3) in binary yields 011,
which yields 110 when the bits are reversed, so that g(3) = g_{110}(0).
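The bit-reversed input ordering is easy to generate in a few lines of code; this sketch (the helper name `bit_reverse` is my own) checks the g(3) \to 110 example and the full N = 8 ordering:

```python
def bit_reverse(i, bits):
    """Reverse the low `bits` bits of the integer i."""
    r = 0
    for _ in range(bits):
        r = (r << 1) | (i & 1)  # shift result left, append the low bit of i
        i >>= 1
    return r

# For N = 8 (3 bits) the FFT inputs appear in bit-reversed index order.
order = [bit_reverse(i, 3) for i in range(8)]
print(order)              # [0, 4, 2, 6, 1, 5, 3, 7]
print(bit_reverse(3, 3))  # 3 = 011 reverses to 110 = 6
```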
The reduction of computation from roughly N2 to N log2 N may not
seem all that significant at first glance. A simple example shows that it is
[Figure 2.12: The N = 8 FFT flow graph with inputs in bit-reversed order g(0), g(4), g(2), g(6), g(1), g(5), g(3), g(7) and outputs in natural order G(0), \ldots, G(7).]
times as many computations. In our example with N = 2^{16}, this means the
brute force approach will take roughly 2^{16}/2^4 = 2^{12} = 4096 times as long.
On a Macintosh IIci the Matlab FFT of one of these images took about 39
seconds. A brute force evaluation would take more than 45 hours! (In fact
it can take much longer because of the optimized code of an FFT and the
brute force evaluation of the powers of the complex exponentials required
before the multiply and adds.)
As a more extreme example, a typical digitized x-ray image has 2048 \times
2048 pixels, yielding a computational complexity of 22 \times 2^{22} with the FFT
in comparison to 2^{44} for the brute force method!
FFT Examples
We have already considered 1D examples of the FFT when we computed
the DFT of the random signal and the sinusoid plus the random signal
in Figures 2.4-2.5. The advantages of the FFT become more clear when
computing 2D DFTs, e.g., of image signals. Before doing so, however, we
point out an immediate problem. Since images are usually represented as
a nonnegative signal (the intensity at each pixel is a nonnegative number),
they tend to have an enormous DC value. In other words, the value of
the Fourier transform for (f_x, f_y) = (0,0) is just the average of the entire
image, which is often a large positive number. This large DC value dwarfs
the values at other frequencies and can make the resulting DFT look like
a spike at the origin with 0 everywhere else. For this reason it is common
to weight plots of the spectrum so as to deemphasize the low frequencies
and enhance the higher frequencies. The most common such weighting is
logarithmic: instead of plotting the actual magnitude spectrum |G(f_x, f_y)|,
it is common to instead plot

G_{\log}(f_x, f_y) = \log\left(1 + |G(f_x, f_y)|\right).

The term 1 in this expression is added to assure that when |G| has value
zero, so will G_{\log}(f_x, f_y). Although other weightings are possible (the 1
and the magnitude spectrum can be multiplied by constants or one can use
a power of the magnitude spectrum rather than the log), this form seems
the most popular. Figures 2.13-2.24 show the Fourier transforms of several
of the 2D signal examples. Both mesh and image plots are shown for the
log weighted and unweighted versions of the simple box and disk functions.
For the Mona Lisa and MR images the mesh figures are not shown as the
number of pixels is so high as to render the mesh figures too black.
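A minimal sketch of this log weighting (the 64 \times 64 box image is an arbitrary stand-in for the figures' examples, not the book's data):

```python
import numpy as np

# Hypothetical image: a centered white box on a black background.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0

G = np.fft.fftshift(np.fft.fft2(img))  # 2D DFT with DC moved to the center
mag = np.abs(G)
Glog = np.log(1.0 + mag)               # the log weighting described above

# The DC term remains the largest value, now at the center of the array.
print(np.unravel_index(np.argmax(Glog), Glog.shape))  # (32, 32)
```

Plotting `Glog` instead of `mag` compresses the dynamic range so the higher-frequency structure is visible alongside the DC spike.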
and infinite summation. Some of the details are described in the starred
subsections. For those who do not wish to (or are not asked to) read these
sections, the key points are summarized below.
If this sum is finite and equals, say, M, then the spectrum G(f)
exists for all f \in \mathcal{S}_{DTID} and

\lim_{N\to\infty} \int_{-\frac{1}{2}}^{\frac{1}{2}} \left| G(f) - \sum_{n=-N}^{N} g(n)\, e^{-i2\pi fn} \right|^2 df = 0,   (2.74)

that is, the error energy \int |G(f) - G_N(f)|^2\, df between G and the
truncated sum G_N goes to 0.
If a continuous time signal does not satisfy the absolute integrability
condition

\int_{t\in\mathcal{T}} |g(t)|\, dt < \infty,   (2.76)
then the Fourier transform need not exist for all frequencies. For
example, the signal g(t) = 1/t for 0 < t < 1 and 0 otherwise does
not have a Fourier transform at f = O. Finite energy is, however,
a sufficient condition for the existence of a Fourier transform in a
mean square sense or limit in the mean or L2 sense analogous to
the discrete time case. We will not treat such transforms in much
detail because of the complicated analysis required. We simply
point out that the mathematical machinery exists to generalize
most results of this book from the absolutely integrable case to
the finite energy case.
* Discrete Time

The most common infinite duration DTFT has \mathcal{T} equal to the set of all
integers and hence

G(f) = \sum_{n=-\infty}^{\infty} g_n\, e^{-i2\pi fn}.
This infinite sum is in fact a limit of finite sums and the limit may blow
up or not converge for some values of f. Thus the DTFT may or may not
exist, depending on the signal. To be precise, an infinite sum is defined as

\sum_{n=-\infty}^{\infty} a_n = \lim_{N\to\infty,\,K\to\infty} \sum_{n=-N}^{K} a_n

if the double limit exists, i.e., converges. Mathematically, the double limit
exists and equals, say, a if for any \epsilon > 0 there exist numbers M and L such
that for all N \geq M and K \geq L,

\left| \sum_{n=-N}^{K} a_n - a \right| \leq \epsilon.

2.7. * EXISTENCE CONDITIONS 99
Note that this means that if we fix either N or K large enough (bigger than
M or L above, respectively) and let the other go to \infty, then the sum cannot
be more than \epsilon from its limit. For example, the sum \sum_{n=-N}^{K} 1 = K + N + 1
does not have a limit, since if we fix N the sum blows up as K \to \infty.
A Fourier transform is often said to exist if the limiting sum exists in
the more general Cauchy sense or Cauchy principal value sense, that is, if
the limit
\sum_{n=-\infty}^{\infty} a_n = \lim_{N\to\infty} \sum_{n=-N}^{N} a_n
exists. We will not dwell on the differences between these limits; we simply
point out that care must be taken in interpreting and evaluating infinite
sums. The infinite sum \sum_{n=-\infty}^{\infty} n does exist in the Cauchy principal
value sense (every symmetric partial sum \sum_{n=-N}^{N} n is 0) although not in
the strict sense. Similarly, the sum

\sum_{k=-\infty,\,k\neq 0}^{\infty} \frac{1}{k}

does not exist in the strict sense, but it does exist in the Cauchy sense.
Several of the generalizations of Fourier transforms encountered here and
in the literature are obtained by using a weaker or more general definition
for an infinite sum.
A sufficient condition for the existence of the sum (and hence of the
transform) can be shown (using real analysis) to be

\sum_{n=-\infty}^{\infty} |g_n| < \infty;

that is, if the signal is absolutely summable then the Fourier transform
exists. Indeed,

\sum_{n=-\infty}^{\infty} |g_n| = M < \infty   (2.79)

implies that

|G(f)| = \left| \sum_{n=-\infty}^{\infty} g_n\, e^{-i2\pi fn} \right|
\leq \sum_{n=-\infty}^{\infty} \left| g_n\, e^{-i2\pi fn} \right|
= \sum_{n=-\infty}^{\infty} |g_n|\, \left| e^{-i2\pi fn} \right|
= \sum_{n=-\infty}^{\infty} |g_n| = M.
if it satisfies (2.73). For example, the sequence \{g(n) = 1/n;\ n = 1, 2, \ldots\}
has finite energy but is not absolutely summable. If a signal has finite
energy, then it can be proved that the Fourier transform G(f) exists in the
following sense:

\lim_{N\to\infty} \int_{-\frac{1}{2}}^{\frac{1}{2}} \left| G(f) - \sum_{n=-N}^{N} g(n)\, e^{-i2\pi fn} \right|^2 df = 0.   (2.80)
When this limit exists we say that the Fourier transform G(f) exists in the
sense of the limit in the mean. This is sometimes expressed as

G(f) = \mathop{\mathrm{l.i.m.}}_{N\to\infty} \sum_{n=-N}^{N} g(n)\, e^{-i2\pi fn},

where "l.i.m." stands for "limit in the mean." Even when this sense is
intended, we often write the familiar and simpler form

G(f) = \sum_{n=-\infty}^{\infty} g(n)\, e^{-i2\pi fn},
but if finite energy signals are being considered, the infinite sum should be
interpreted as an abbreviation of (2.80). Note in particular that when a
Fourier transform of this type is being considered, we cannot say anything
about the ordinary convergence of the sum \sum_{n=-K}^{N} g(n)\, e^{-i2\pi fn} for any
particular frequency f as K and N go to 00, we can only know that an
integral of the form (2.80) converges.
It is not expected that this definition will be natural at first glance,
but the key point is that one can extend Fourier analysis to finite energy
infinite duration signals, but that the definitions are somewhat different.
We also note that discrete time finite energy signals are sometimes called
\ell^2 sequences in the mathematical literature.
In the discrete time case, the property of finite energy is indeed more
general than that of absolute summability; that is, absolute summability
implies finite energy but not vice versa. For example, the sequence g_n = 1/n
for n \geq 1 (and 0 otherwise) has finite energy but is not absolutely summable.
* Continuous Time
The most common finite duration continuous time Fourier transform con-
siders a signal of the form g = \{g(t);\ t \in [0, T)\} and has the form

G(f) = \int_0^T g(t)\, e^{-i2\pi ft}\, dt.   (2.83)

In the infinite duration case the transform is defined as the limit

\int_{-\infty}^{\infty} g(t)\, e^{-i2\pi ft}\, dt = \lim_{S\to\infty} \lim_{T\to\infty} \int_{-S}^{T} g(t)\, e^{-i2\pi ft}\, dt,   (2.84)
if the limits exist. For such a double limit to exist, one must get the
same answer when taking S and T to their limits separately in any manner
whatsoever.
We formalize the statement of the basic existence theorem so as to ease
comparison with later existence theorems. No proof is given (it is standard
integration theory).
The theorem requires that the
signal be absolutely integrable:

\int_{-\infty}^{\infty} |g(t)|\, dt < \infty.
As in the finite duration case, this is sufficient but not necessary for
the existence of the CTFT. Signals violating the condition include \{t(1 +
t^2)^{-1};\ t \in \mathcal{R}\} and \{\mathrm{sinc}\, t;\ t \in \mathcal{R}\}. Observe that if a signal is absolutely
integrable, then its transform is bounded in magnitude:

|G(f)| \leq \int_{-\infty}^{\infty} |g(t)|\, dt < \infty.
This is more general than the usual notion. (That is, if the integral exists as
an improper Riemann integral, then it also exists in the Cauchy principal
value sense. The converse, however, is not always true.) The following
theorem gives sufficient conditions for the Fourier transform to exist in this
sense. A proof may be found in Papoulis [24].
This general condition need not hold for all interesting signals. For
example, if g(t) is equal to 1 for all t, the condition of (2.88) is violated. The
most important example of a signal meeting the conditions of this theorem
is g(t) = \mathrm{sinc}(t). This signal is not absolutely integrable, but it meets the
conditions of the theorem with \phi_0 = 0, \omega_0 = \pi, and f(t) = 1/(\pi t).
To prepare for a final existence theorem (for the present, at least), we
say that the Fourier transform of a signal g = \{g(t);\ t \in \mathcal{T}\} exists in a
limit-in-the-mean sense if the following conditions are satisfied:
• The signal can be approximated arbitrarily closely by a sequence of
signals gN = {gN(t); t E T} in the sense that the error energy goes
to zero as N -t 00:
\lim_{N\to\infty} \int_{\mathcal{T}} |g(t) - g_N(t)|^2\, dt = 0,   (2.89)

\lim_{N\to\infty} \int_{\mathcal{S}} |G(f) - G_N(f)|^2\, df = 0.   (2.92)

(The convergence is called \ell_2 for the sum in the first case and L_2 for
the integral in the second case.)
If these conditions are met then G is a Fourier transform of g. (We say "a
Fourier transform" not "the Fourier transform" since it need not be unique.
For example, changing a G in a finite way at a finite collection of points
yields another G with the desired properties since Riemann integrals are
not affected by changing the integrand at a finite number of points.) This
is usually expressed formally by writing

G(f) = \int_{t\in\mathcal{T}} g(t)\, e^{-i2\pi ft}\, dt,   (2.93)

but in the current situation this formula is an abbreviation for the more
exact definition above.

2.8. PROBLEMS 107
2.8 Problems
2.1. Let g be the signal in problem 1.11. Find the Fourier transforms of g
and its zero-filled extension \tilde{g}.
2.2. Find the DTFT of the following infinite duration (\mathcal{T} = \mathcal{Z})
signals:

(a) g_n = r^{|n|}, where |r| < 1. What happens if r = 1?
(b) g_n = \delta_{n-k}, the shifted Kronecker delta function, where k is a
fixed integer.
(c) g_n = \sum_{k=-N}^{N} \delta_{n-k}.
(d) g_n = a for |n| \leq N and g_n = 0 otherwise.
(e) g_n = r^n for n \geq 0 and g_n = 0 otherwise. Assume that |r| < 1.

2.3. Find the DFT of the following sequences:

(a) g_1 = \{2, 2, 2, 2, 2, 2, 2, 2\}
(b) g_2 = \{e^{i\pi n/2};\ n = 0, 1, 2, 3, 4, 5, 6, 7\}
(c) g_3 = \{e^{i\pi(n-2)/2};\ n = 0, 1, 2, 3, 4, 5, 6, 7\}
(d) g_4 = \{0, 0, 0, 0, 0, 0, 1, 0\}
2.8. Find the CTFT of the following signals using the following special
signals (\mathcal{T} is the real line in all cases): The rectangle function

\sqcap(t) = \begin{cases} 1 & \text{if } -\frac{1}{2} < t < \frac{1}{2} \\ \frac{1}{2} & \text{if } |t| = \frac{1}{2} \\ 0 & \text{otherwise} \end{cases}

and the step function

H(t) = \begin{cases} 1 & \text{if } t > 0 \\ \frac{1}{2} & \text{if } t = 0 \\ 0 & \text{otherwise} \end{cases}

(c)

g(t) = \begin{cases} A\left(1 - \frac{|t|}{a}\right) & -a \leq t \leq a \\ 0 & \text{otherwise} \end{cases}
2.10. By direct integration, find the Fourier transform of the infinite dura-
tion continuous time signal g(t) = t\Lambda(t).

2.11. Find the Fourier transform of the signals \{e^{-\lambda|t|};\ t \in \mathcal{R}\} and
\{\mathrm{sgn}(t)\, e^{-\lambda|t|};\ t \in \mathcal{R}\}.

(a) g(t) = At for t \in [-T/2, T/2] and 0 for t \in \mathcal{R} but t \notin [-T/2, T/2].
(b) \{|\sin t|;\ |t| < w\}.
(c)
    for 0 < t < T/2
    for t = 0
    for -T/2 < t < 0
    for |t| \geq T/2.
(a) Prove this result for the special cases of the DFT (finite duration
discrete time Fourier transform) and the infinite duration CT
Fourier transform.
(a) Prove this result for the special cases of the infinite duration
DTFT.
(b) Use this result to evaluate the Fourier transform of the signal
g = \{g_n;\ n \in \mathcal{Z}\} defined by

    for n = 0, 1, \ldots
    otherwise

and verify your answer by finding the Fourier transform directly.
2.15. Suppose that g is the finite duration discrete time signal \{\delta_n;\ n =
0, \ldots, N-1\}. Find \mathcal{F}(\mathcal{F}(g)), that is, the Fourier transform of the
Fourier transform of g. Repeat for the signal h defined by h_n = 1 for
n = k (k a fixed integer in \{0, \ldots, N-1\}) and h_n = 0 otherwise.
2.16. Suppose you know the Fourier transform of a real-valued signal. How
can you find the Hartley transform? (Hint: Combine the transform
and its complex conjugate to find the sine and cosine transforms.)
Can you go the other way, that is, construct the Fourier transform
from the Hartley transform?
2.17. Suppose that g = \{g(h, v);\ h \in [0, H], v \in [0, V)\} (h represents hor-
izontal and v represents vertical) represents the intensity of a sin-
gle frame of a video signal. Suppose further that g is entirely black
(g(h, v) = 0) except for a centered white rectangle (g(h, v) = 1) of
width aH and height aV (a < 1). Find the two-dimensional Fourier
transform \mathcal{F}(g).
2.18. Suppose that g = \{g(h, v);\ h \in [0, H), v \in [0, V)\} is a two dimen-
sional signal (a continuous parameter image raster). The independent
variables h and v stand for "horizontal" and "vertical", respectively.
The signal g( h, v) can take on three values: 0 for black, 1/2 for grey,
and 1 for white. Consider the specific signal of Figure 2.25.
[Figure 2.25: An image raster with horizontal bands, g = 0 in one band and g = 1 in another, with band boundaries at heights V/5, 2V/5, 3V/5, and 4V/5.]
(a) Write a simple expression for g(h, v) in terms of the box function

\Box_T(x) = \begin{cases} 1 & |x| \leq T \\ 0 & \text{otherwise} \end{cases}

(You can choose g to have any convenient values on the bound-
aries as this makes no difference to the Fourier transform.)
(b) Find the 2-D Fourier transform G(f_h, f_v) of g.

(2.94)

Note that g^t W^t = (Wg)^t.
2.21. Consider an N = 4 FFT.
(b) Draw a modified flow graph using inverters to eliminate half the
multiplies.
2.22. (a) Express the DFT of the 9-point sequence \{g_0, g_1, \ldots, g_8\} in terms
of the DFTs of the 3-point sequences

g_a(n) = \{g_0, g_3, g_6\}
g_b(n) = \{g_1, g_4, g_7\}
g_c(n) = \{g_2, g_5, g_8\}.
(b) Draw a legible flow graph for the "base 3" method for computing
the FFT, as suggested above, for N = 9.
2.23. Suppose you have an infinite duration discrete time signal g = \{g_n;\ n \in
\mathcal{Z}\} and that its Fourier transform is G = \{G(f);\ f \in [-1/2, 1/2)\}.
Consider the new signals

h_n = g_{2n};\quad n \in \mathcal{Z}
w_n = g_{2n+1};\quad n \in \mathcal{Z}
v_n = \begin{cases} g_{n/2} & \text{if } n \text{ is an even number} \\ 0 & \text{otherwise} \end{cases}
with Fourier transforms H, W, and V, respectively. h and w are ex-
amples of downsampling or subsampling. v is called upsampling. Note
that downsampling and upsampling are not generally inverse opera-
tions, i.e., downsampling followed by upsampling need not recover the
original signal.
(a) Find an expression for V in terms of G.
(b) Find an expression for G in terms of Wand H. (This is a
variation on the fundamental property underlying the FFT.)
(c) Suppose now that r = \{r_n;\ n \in \mathcal{Z}\} is another signal and that p
is the upsampled version of r, i.e., p_n = r_{n/2} for even n and 0
otherwise. We now form a signal x defined by

x_n = v_n + p_{n+1};\quad n \in \mathcal{Z}.
2.24. Consider the signal g = \{g_n = n r^n u_{-1}(n);\ n \in \mathcal{Z}\}, where |r| < 1.

(a) Is this signal absolutely summable?
(b) Find a simple upper bound to |G(f)| that holds for all f.
(c) Find the Fourier transform of the signal g.
(d) Consider the signal h = \{h_n;\ n \in \mathcal{Z}\} defined by h_n = g_{2n}. (h is
a downsampled version of g.) Find the DTFT of h.
(e) Find a simple upper bound to |H(f)| that holds for all f.
(f) Consider the signal w = \{w_n;\ n \in \mathcal{Z}\} defined by w_{2n} = h_n and
w_{2n+1} = 0 for all integer n. w is called an upsampled version of
h. Find the DTFT of w.
(g) Find the DTFT of the signal g - w.
(g) Find the DTFT of the signal g - w.
Chapter 3
Fourier Inversion
Having defined the Fourier transform and examined several examples, the
next issue is that of invertibility: if G = \mathcal{F}(g), can g be recovered from the
spectrum G? More specifically, is there an inverse Fourier transform \mathcal{F}^{-1}
with the property that

\mathcal{F}^{-1}(\mathcal{F}(g)) = g?   (3.1)

When this is the case, we shall call g and G a Fourier transform pair and
write

g \leftrightarrow G,   (3.2)

where the double arrow notation emphasizes that the signal and its Fourier
transform together form a Fourier transform pair. We have already seen
that Fourier transforms are not always invertible in the strict sense, since
changing a continuous signal at a finite number of points does not change
the value of the Riemann integral giving the Fourier transform. For exam-
ple, \{\Box_{1/2}(t);\ t \in \mathcal{R}\} and \{\sqcap(t);\ t \in \mathcal{R}\} have the same transform. In this
chapter we shall see that except for annoying details like this, the Fourier
transform can usually be inverted.
The first case considered is the DFT, the finite duration discrete time
Fourier transform. This case is considered first since an affirmative answer
is easily proved by a constructive demonstration. The remaining cases are
handled with decreasing rigor, but the key ideas are accurate.
y_n = \frac{1}{N} \sum_{k=0}^{N-1} G\!\left(\frac{k}{N}\right) e^{i2\pi \frac{k}{N} n}.   (3.3)

Since

G\!\left(\frac{k}{N}\right) = \sum_{l=0}^{N-1} g_l\, e^{-i2\pi \frac{k}{N} l},   (3.4)

we have that

y_n = \frac{1}{N} \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} g_l\, e^{-i2\pi \frac{k}{N} l}\, e^{i2\pi \frac{k}{N} n},   (3.5)
and hence exchanging the order of summation, which is always valid for
finite sums, yields

y_n = \sum_{l=0}^{N-1} g_l\, \frac{1}{N} \sum_{k=0}^{N-1} e^{i2\pi \frac{k}{N}(n-l)}.   (3.6)

For m a nonzero integer that is not a multiple of N, the geometric series
formula gives

\frac{1}{N} \sum_{k=0}^{N-1} e^{i2\pi \frac{k}{N} m} = \frac{1}{N}\, \frac{1 - e^{i2\pi m}}{1 - e^{i2\pi \frac{m}{N}}} = 0.
3.1. INVERTING THE DFT 117
Readers may recognize this as a variation on the fact that the sum of all
roots of unity of a particular order is O. Alternatively, adding up N equally
spaced points on a circle gives their center of gravity, which is just the origin
of the circle. Observe that the m = 0 result is consistent with the m \neq 0
result if we apply L'Hopital's rule to the latter.
Recalling the definition of the Kronecker delta function \delta_m:

\delta_m = \begin{cases} 1 & \text{if } m = 0 \\ 0 & \text{otherwise,} \end{cases}   (3.7)
then

\frac{1}{N} \sum_{k=0}^{N-1} e^{i2\pi \frac{k}{N} m} = \delta_m;\quad m = -(N-1), \ldots, N-1,   (3.9)

and therefore

y_n = \sum_{l=0}^{N-1} g_l\, \delta_{n-l} = g_n;   (3.10)

that is,

g_n = \frac{1}{N} \sum_{k=0}^{N-1} G\!\left(\frac{k}{N}\right) e^{i2\pi \frac{k}{N} n};\quad n = 0, 1, \ldots, N-1.   (3.11)
In summary, we have shown that the following are a Fourier transform pair:

G\!\left(\frac{k}{N}\right) = \sum_{n=0}^{N-1} g_n\, e^{-i2\pi \frac{k}{N} n};\quad k \in \mathcal{Z}_N   (3.13)

g_n = \frac{1}{N} \sum_{k=0}^{N-1} G\!\left(\frac{k}{N}\right) e^{i2\pi \frac{k}{N} n};\quad n \in \mathcal{Z}_N.   (3.14)
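The pair (3.13)-(3.14) can be checked directly with a small matrix computation (the random length-16 test signal is my own choice):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)

k = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(k, k) / N)  # DFT matrix, entries e^{-i2pi kn/N}

G = W @ g                     # forward transform (3.13)
g_rec = (np.conj(W) @ G) / N  # inversion (3.14)

print(np.allclose(g, g_rec))  # True
```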
• Recall from the matrix form of the DFT of (2.7) that G = Wg,
where W = \{e^{-i2\pi \frac{kj}{N}};\ k = 0, 1, \ldots, N-1;\ j = 0, 1, \ldots, N-1\}. From
elementary linear algebra this implies that g = W^{-1}G.
\mathcal{S}_{DTFD}^{(2)} = \left\{ -\frac{N}{2N+1}, \ldots, -\frac{1}{2N+1},\, 0,\, \frac{1}{2N+1}, \ldots, \frac{N}{2N+1} \right\}

g_n = \frac{1}{2N+1} \sum_{k=-N}^{N} G\!\left(\frac{k}{2N+1}\right) e^{i2\pi \frac{k}{2N+1} n};   (3.18)
n \in \{-N, \ldots, 0, \ldots, N\}.
\sum_{k=0}^{N-1} \phi_k^{(n)}\, \phi_k^{(l)*} = C_n\, \delta_{n-l}   (3.19)

for C_n \neq 0; i.e., two signals are orthogonal if the sum of the coordinate
products of one signal with the complex conjugate of the other is 0
for different signals, and nonzero for two equal signals. If C_n = 1 for
all appropriate n, the signals are said to be orthonormal. Eq. (3.9)
implies that the exponential family \{e^{i2\pi \frac{k}{N} m};\ k = 0, 1, \ldots, N-1\} for
m = 0, 1, \ldots, N-1 are orthogonal and the scaled signals

\left\{ \frac{1}{\sqrt{N}}\, e^{i2\pi \frac{k}{N} m};\ k = 0, 1, \ldots, N-1 \right\}

are orthonormal.
The matrix form of this relation also crops up. Let W be the expo-
nential matrix defined in (2.6) and define
(3.21)
(3.23)
\delta_n = \frac{1}{N} \sum_{k=0}^{N-1} e^{i2\pi \frac{k}{N} n};\quad n \in \mathcal{Z}_N.   (3.25)

This again has the general form of (3.24), this time with c_k = 1/N
for all k \in \mathcal{Z}_N.
3.2. DISCRETE TIME FOURIER SERIES 121
where

c_k = \frac{1}{N}\, G\!\left(\frac{k}{N}\right) = \frac{1}{N} \sum_{l=0}^{N-1} g_l\, e^{-i2\pi l \frac{k}{N}};\quad k \in \mathcal{Z}_N.   (3.27)

Thus

\tilde{g}_n = \sum_{k=0}^{N-1} \frac{1}{N}\, G\!\left(\frac{k}{N}\right) e^{i2\pi \frac{k}{N} n};\quad n \in \mathcal{Z},   (3.28)

or, equivalently,

\tilde{g}_n = \sum_{k=0}^{N-1} c_k\, e^{i2\pi \frac{k}{N} n};\quad n \in \mathcal{Z}   (3.29)

c_k = \frac{1}{N} \sum_{l=0}^{N-1} \tilde{g}_l\, e^{-i2\pi l \frac{k}{N}};\quad k \in \mathcal{Z}_N.   (3.30)
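A small sketch (the period-4 values are chosen arbitrarily) confirms that the Fourier series (3.29)-(3.30) reproduces the periodic extension for indices far outside a single period:

```python
import numpy as np

N = 4
g = np.array([1.0, 2.0, 0.0, -1.0])  # one period of the signal
c = np.fft.fft(g) / N                # c_k = G(k/N)/N, as in (3.30)

n = np.arange(-8, 9)                 # indices well outside 0..N-1
k = np.arange(N)
g_tilde = np.exp(2j * np.pi * np.outer(n, k) / N) @ c   # the series (3.29)

print(np.allclose(g_tilde, g[n % N]))  # True: the series is N-periodic
```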
This provides a Fourier representation for discrete time infinite duration
periodic signals. This is a fact of some note because the signal, an infinite
duration discrete time signal, violates the existence conditions for ordinary
Fourier transforms (unless it is trivial, i.e., gn = 0 for all n). If gn is ever
nonzero and it is periodic, then it will not be absolutely summable nor
will it have finite energy. Hence such periodic signals do not have Fourier
transforms in the strict sense. We will later see in Chapter 5 that they have
G(f) = \sum_{k=-\infty}^{\infty} g_k\, e^{-i2\pi f k}   (3.31)

y_n = \int_{-\frac{1}{2}}^{\frac{1}{2}} G(f)\, e^{i2\pi f n}\, df
= \int_{-\frac{1}{2}}^{\frac{1}{2}} \left( \sum_{k=-\infty}^{\infty} g_k\, e^{-i2\pi k f} \right) e^{i2\pi f n}\, df
= \sum_{k=-\infty}^{\infty} g_k \int_{-\frac{1}{2}}^{\frac{1}{2}} e^{i2\pi f (n-k)}\, df.   (3.32)
To complete the evaluation observe for m = 0 that

\int_{-\frac{1}{2}}^{\frac{1}{2}} e^{i2\pi m f}\, df = \int_{-\frac{1}{2}}^{\frac{1}{2}} df = 1,

while for m \neq 0

\int_{-\frac{1}{2}}^{\frac{1}{2}} e^{i2\pi m f}\, df = 0,

and hence

\int_{-\frac{1}{2}}^{\frac{1}{2}} e^{i2\pi m f}\, df = \delta_m;\quad m \in \mathcal{Z}.   (3.33)
3.3. INVERTING THE INFINITE DURATION DTFT 123
Thus

y_n = \sum_{k=-\infty}^{\infty} g_k \int_{-\frac{1}{2}}^{\frac{1}{2}} e^{i2\pi f (n-k)}\, df = \sum_{k=-\infty}^{\infty} g_k\, \delta_{n-k} = g_n,   (3.34)

so that

g_n = \int_{-\frac{1}{2}}^{\frac{1}{2}} G(f)\, e^{i2\pi f n}\, df;\quad n \in \mathcal{Z}   (3.36)
and hence that the right hand side above indeed gives the inverse Fourier
transform. For example, application of the discrete time infinite duration
inversion formula to the signal r^n u_{-1}(n);\ n \in \mathcal{Z} for |r| < 1 yields

\int_{-\frac{1}{2}}^{\frac{1}{2}} \frac{e^{i2\pi f n}}{1 - r\, e^{-i2\pi f}}\, df = r^n u_{-1}(n);\quad n \in \mathcal{Z}.   (3.37)
Unlike (3.14) where a discrete time finite duration signal was represented
by a weighted sum of complex exponentials (a Fourier series), here a dis-
crete time infinite duration signal is represented as a weighted integral of
complex exponentials, where the weighting is the spectrum. Instead of a
Fourier series, in this case we have a Fourier integral representation of a sig-
nal. Intuitively, a finite duration signal can be perfectly represented by only
a finite combination of sinusoids. An infinite duration signal, however, re-
quires a continuum of frequencies in general. As an example, (3.33) provides
the infinite duration analog to the Fourier series representation (3.25) of a
finite duration discrete time Kronecker delta: the infinite duration discrete
time signal \{\delta_n;\ n \in \mathcal{Z}\} has the Fourier integral representation

\delta_n = \int_{-\frac{1}{2}}^{\frac{1}{2}} e^{i2\pi f n}\, df;\quad n \in \mathcal{Z}.   (3.38)
More generally, define the two-sided DTFT

G(f) = \sum_{n=-\infty}^{\infty} g_n\, e^{-i2\pi f n};\quad f \in \left[-\frac{1}{2}, \frac{1}{2}\right).   (3.39)
With this definition the Fourier inversion of (3.12) extends to the more
general case of the two-sided DTFT. Note, however, the key difference be-
tween these two cases: in the case of the DFT the spectrum was discrete in
that only a finite number of frequencies were needed and the inverse trans-
form, like the transform itself, was a sum. This resulted in a Fourier series
representation for the original signal. In the infinite duration DTFT case,
however, only time is discrete, the frequencies take values in a continuous
interval and the inverse transform is an integral, resulting in a Fourier inte-
gral representation of the signal. We can still interpret the representation
of the original signal as a weighted average of exponentials, but the average
is now an integral instead of a sum.
We have not really answered the question of how generally the result of
(3.39)-(3.40) is valid. We have given without proof a sufficient condition
under which it holds: if the original sequence is absolutely summable then
(3.39)-(3.40) are valid.
One might ask at this point what happens if one begins with a spectrum
\{G(f);\ f \in \mathcal{S}\} and defines the sequence via the inverse Fourier transform.
Under what conditions on G(f) will the formulas of (3.39)-(3.40) still hold;
that is, when will one be able to recover gn from G(f) using the given
formulas? This question may now appear academic, but it will shortly gain
in importance. Unfortunately we cannot give an easy answer, but we will
describe some fairly general analogous conditions when we treat the infinite
duration CTFT.
Analogous to the remarks following the DFT inversion formula, we could
also consider different frequency domains of definition, in particular any
unit length interval such as [0,1) would work, and we could consider dif-
ferent time domains of definition, such as the nonnegative integers which
Frequency Scaling

Instead of having a frequency with values in an interval of unit length, we
can scale the frequency by an arbitrary positive constant and adjust the
formulas accordingly. For example, we could fix f_0 > 0 and define a Fourier
transform as

G_{f_0}(f) = \sum_{n=-\infty}^{\infty} g_n\, e^{-i2\pi (f/f_0) n};\quad f \in \left[-\frac{f_0}{2}, \frac{f_0}{2}\right).

We now restate the Fourier transform pair relation with the scaled fre-
quency value and change the name of the signal and the spectrum in an
attempt to minimize confusion. Replace the sequence g_n by h_n and the
transform G_{f_0}(f) by H(f). We have now proved that for a given sequence
h, the following are a Fourier transform pair (provided the technical as-
sumptions used in the proof hold):

H(f) = \sum_{n=-\infty}^{\infty} h_n\, e^{-i2\pi (f/f_0) n};\quad f \in \left[-\frac{f_0}{2}, \frac{f_0}{2}\right)   (3.42)

with inverse

h_n = \frac{1}{f_0} \int_{-f_0/2}^{f_0/2} H(f)\, e^{i2\pi (f/f_0) n}\, df;\quad n \in \mathcal{Z}.
The idea is that we can scale the frequency parameter and change its
range and the Fourier transform pair relation still holds with minor
modifications. The most direct application of this form of transform is in
sampled data systems when one begins with a continuous time signal
{g(t); t ∈ R} and forms a discrete time signal {g(nT); n ∈ Z}. In this
situation the scaled frequency form of the Fourier transform is often used
with a scaling f0 = 1/T. More immediately, however, the scaled representation
will prove useful in the next case treated.
126 CHAPTER 3. FOURIER INVERSION
but are not equal. If the limits exist and equal g(t) at t, g(t) = g(t+) = g(t−),
then g(t) is continuous at t.
A real valued signal {g(t); t ∈ T} is said to be piecewise continuous on
an interval (a, b) ⊂ T if it has only a finite number of jump discontinuities
in (a, b) and if the lower limit exists at b and the upper limit exists at a.
A real valued signal {g(t); t ∈ T} is said to be piecewise smooth on an
interval (a, b) if its derivative dg(t)/dt is piecewise continuous on (a, b).
A real valued signal {g(t); t ∈ T} is said to be piecewise continuous
(piecewise smooth) if it is piecewise continuous (piecewise smooth) on all
intervals (a, b) ⊂ T. Piecewise smooth signals are a class of "nice" signals
for which an extension of Fourier inversion works.
One further detail is required before we can formally treat the inversion
of finite duration continuous time signals. Suppose that T = [0, T). What
3.4. INVERTING THE CTFT 127
if the discontinuity occurs at the origin where only the upper limit g(t+)
makes sense? (We do not need to worry about the point T because we have
purposefully excluded it from the time domain of definition [0, T).) We
somewhat arbitrarily redefine the lower limit of a signal defined on [0, T)
at 0 by

g(0−) ≜ lim_{t→T, t<T} g(t); (3.45)
that is, the limit of g(t) as t approaches T. This definition can be interpreted
as providing the ordinary lower limit for the periodic extension g(t) =
g(t mod T) of the finite duration signal. Alternatively, it can be considered
as satisfying the ordinary definition (3.44) if we interpret addition and
subtraction of time modulo T; that is, t − τ means (t − τ) mod T. This will
later be seen to be a reasonable interpretation when we consider shifts for
finite duration signals.
We can now state the inversion theorems for continuous time signals.
The reader should concentrate on their similarities to the discrete time
analogs for the moment, the chief difference being the special treatment
given to jump discontinuities in the signal.
• g is piecewise smooth.
Define the Fourier transform by

G(n/T) = ∫_0^T g(t) e^{-i2π(n/T)t} dt; n ∈ Z. (3.47)

Then

(g(t+) + g(t−))/2 = Σ_{n=-∞}^{∞} (1/T) G(n/T) e^{i2π(n/T)t}. (3.48)

If g is continuous at t, then

g(t) = Σ_{n=-∞}^{∞} (1/T) G(n/T) e^{i2π(n/T)t}. (3.49)
g(t) ~ Σ_{n=-∞}^{∞} (1/T) G(n/T) e^{i2π(n/T)t}. (3.50)
This formula means that the two sides are equal at points t of continuity
of g(t), but that the more complicated formula (3.48) holds if g(t) has a
jump discontinuity at t. With this notation we can easily summarize the
theorem as stating that under suitable conditions, the following is a Fourier
transform pair:
g(t) ~ Σ_{n=-∞}^{∞} (1/T) G(n/T) e^{i2π(n/T)t}; t ∈ [0, T). (3.52)

t = 1/2 + Σ_{k≠0, k∈Z} (i/(2πk)) e^{i2πkt}; t ∈ (0, 1). (3.53)
G(f) = ∫_{−T/2}^{T/2} g(t) e^{-i2πft} dt, f ∈ {k/T; k ∈ Z}, (3.54)

T > 1, which has discontinuities at ±1/2. From (2.35) the Fourier trans-
form for this signal is found by restricting the frequency domain to the
integer multiples of 1/T, that is,

G(k/T) = sinc(k/T); k ∈ Z. (3.55)

Using the continuous time finite duration inversion formula then yields

Π(t) = Σ_{k=-∞}^{∞} (1/T) sinc(k/T) e^{i2π(k/T)t}; t ∈ [−T/2, T/2). (3.56)
Since the rectangle function has values at discontinuities equal to the mid-
points of the upper and lower limits, the Fourier inversion works. Although
□_{1/2}(t) shares the same Fourier transform, (3.56) does not hold with □_{1/2}
replacing Π(t) because the right-hand side does not agree with the □_{1/2}
signal at the discontinuities. There are two common ways to handle this
difficulty. The first is to modify all interesting signals with discontinuities
so that their values at the discontinuities are the midpoints. The second
is to simply realize that the Fourier inversion formula for continuous time
signals can only be trusted to hold at times where the signal is continuous.
This latter approach is accomplished by recalling the notation

□_{1/2}(t) ~ Σ_{k=-∞}^{∞} (1/T) sinc(k/T) e^{i2π(k/T)t}; t ∈ [−T/2, T/2), (3.57)

to denote the fact that the right hand side gives the left hand side only at
points of continuity. The right hand side gives the midpoints of the left
hand side's upper and lower limits at points of discontinuity.
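The midpoint behavior at the jumps can be seen numerically from a symmetric partial sum of the series (3.56). A sketch follows; T = 4 and the truncation N = 2000 are arbitrary choices, not from the text.

```python
import numpy as np

# Partial-sum check of (3.56): the Fourier series of the unit-width rectangle
# on [-T/2, T/2) converges to the signal at points of continuity and to the
# midpoint 1/2 at the jumps.
T, N = 4.0, 2000
k = np.arange(-N, N + 1)
c = np.sinc(k / T) / T          # numpy's sinc(x) is sin(pi x)/(pi x)

def gN(t):
    """Symmetric partial sum of the Fourier series at time t."""
    return float(np.real(np.sum(c * np.exp(2j * np.pi * (k / T) * t))))

print(round(gN(0.0), 2))        # interior of the pulse: ≈ 1
print(round(gN(1.0), 2))        # outside the pulse: ≈ 0
print(round(gN(0.5), 2))        # at the jump: ≈ 0.5, the midpoint
```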
Fourier transform. Then

∫_{-∞}^{∞} G(f) e^{i2πft} df =
    g(t)                   if t is a point of continuity,
    (g(t+) + g(t−))/2      otherwise.                        (3.59)
G(f) = ∫_{-∞}^{∞} g(t) e^{-i2πft} dt; f ∈ R, (3.60)
e^{−t}H(t) = ∫_{-∞}^{∞} (1/(1 + i2πf)) e^{i2πft} df; t ∈ R. (3.62)
with its inverse. Putting this together yields the guess that the appropriate
Fourier transform pair is given by
g(t) = Σ_{n=-∞}^{∞} (1/T) G(n/T) e^{i2π(n/T)t}; t ∈ [0, T). (3.64)
Note that as in the DFT inversion formula, we again have the original
signal represented as a Fourier series. It is not expected that the above pair
should be obvious given the corresponding result for the DTFT, only that
it should be a plausible guess. We now prove that it is in fact equivalent
to the DTFT result. To do this we make the substitutions of Table 3.1 in
the scaled-frequency DTFT Fourier transform pair.
Eq. 3.42               Eq. 3.47
f (frequency)          t (time)
H(f) (spectrum)        g(t) (signal)
f0                     T
h_n (signal)           (1/T) G(−n/T) (spectrum)

Table 3.1: Duality of the Infinite Duration DTFT and the Finite Duration
CTFT
g(t) = Σ_{n=-∞}^{∞} (1/T) G(−n/T) e^{-i2π(n/T)t}.

Changing the summation dummy variable sign yields

g(t) = Σ_{n=-∞}^{∞} (1/T) G(n/T) e^{i2π(n/T)t},

where

G(n/T) = ∫_0^T g(t) e^{-i2π(n/T)t} dt.
2. g(t) cannot "wiggle" too much in a way made precise below, and
g_N(t) = Σ_{n=-N}^{N} (1/T) G(n/T) e^{i2π(n/T)t}

as N → ∞. We have that

g_N(t) = (1/T) ∫_0^T g(τ) ( Σ_{n=-N}^{N} e^{i2π(t−τ)(n/T)} ) dτ. (3.65)
Σ_{n=-N}^{N} e^{i2πxn} = sin(2πx(N + 1/2)) / sin(πx), (3.66)

so that

g_N(t) = ∫_0^T g(τ) [ sin(2π((t−τ)/T)(N + 1/2)) / (T sin(π(t−τ)/T)) ] dτ. (3.67)
What happens as N -+ oo? The term multiplying the signal inside the
integral can be expressed in terms of the Dirichlet kernel defined by
D_N(t) ≜ sin(2πt(N + 1/2)) / sin(πt),

so that

g_N(t) = ∫_0^T g(τ) (1/T) D_N((t − τ)/T) dτ. (3.70)
Figure 3.1: The Dirichlet Kernel D_N(t) on [−0.5, 0.5]: N = 1 (solid line), 2 (dashed line),
4 (dash-dot line)
Making this line of argument precise, that is, actually proving that
g_N(t) → g(t) as N → ∞, would require the Riemann–Lebesgue Lemma, one
of the fundamental results of Fourier series. Alternatively, one can prove that
the limit of the Dirichlet kernels behaves like a generalized function, the
Dirac delta to be considered in Chapter 5. We here will content ourselves
with the above intuitive argument, which is reinforced by the similarity of
the result to the discrete time inversion formulas, which were proved in
some detail.
The inversion formula can be extended from absolutely integrable sig-
nals to finite energy signals by considering the infinite sum to be a limit
in the mean. In particular, it can be shown that if the signal 9 has finite
energy, then it is true that
lim_{N→∞} ∫_0^T |g(t) − g_N(t)|² dt = 0. (3.71)
Note that this form of inversion says nothing about pointwise convergence
of the sum to g(t) for a particular value of t and is not affected by the
values of g(t) at simple isolated discontinuities.
transform G = F(g) defined by

G(f) = ∫_{-∞}^{∞} g(t) e^{-i2πft} dt; (3.73)

that is, that the integral

∫_{-∞}^{∞} G(f) e^{i2πft} df

exists (in a Cauchy principal value sense) and has the desired value. As in
the DFT and DTFT case, we begin by inserting the formula for G(f) into
the (truncated) inversion formula. If the function is suitably well behaved,
we can change the order of integration:

g_a(t) = ∫_{-∞}^{∞} dx g(x) ( ∫_{−a}^{a} e^{i2πf(t−x)} df ) (3.75)
where

F_a(t) = 2a sinc(2at) = sin(2πat)/(πt) (3.78)

is called the Fourier integral kernel and is simply the CTFT of the box
function {□_a(t); t ∈ R} with t substituted for f. Like the Dirichlet kernel
it is symmetric about the origin and has unit integral. Thus

g_a(t) = ∫_{-∞}^{∞} g(x) F_a(t − x) dx,

a convolution integral. Figure 3.2 shows the Fourier integral kernel for several
values of a. As with the Dirichlet kernel encountered in the finite duration
continuous time case, the sinc functions become increasingly concentrated
around their center as a becomes large. Thus essentially the same argument
Figure 3.2: The Fourier Kernel Fa(t): a=l (solid line), 2 (dashed line), 4
(dash-dot line)
yields the inversion formula as in the finite duration case. The Riemann–
Lebesgue Lemma implies that the limit of this function as a → ∞ is as
stated in the theorem under the given conditions.
We also consider an alternative intuitive "proof" of the Fourier integral
theorem for the special case where the signal is continuous. The proof is a
3.5. CONTINUOUS TIME FOURIER SERIES 137
G(f) = lim_{T→∞} ∫_{−T/2}^{T/2} g(t) e^{-i2πft} dt = ∫_{-∞}^{∞} g(t) e^{-i2πft} dt,

the usual CTFT, and (3.55) can be considered as a Riemann sum (1/T
becomes df) which in the limit as T → ∞ approximates the integral

g(t) = lim_{T→∞} g_T(t) = ∫_{-∞}^{∞} G(f) e^{i2πft} df,
which is the claimed inverse transform. This "proof" at least points out
that the form of the infinite duration CTFT is consistent with that for the
finite duration CTFT. Thus the Fourier integral transform relations can be
viewed as a limiting form of the Fourier series relations for continuous time
signals. This is not, however, the way that such results are properly proved.
This is not a rigorous proof because the Riemann integrals are improper
(have infinite limits) and there is no guarantee that the Riemann sums will
converge to the integrals. Furthermore, the improper Riemann integrals in
the argument have only been considered in a Cauchy principal value sense.
g(t) ~ Σ_{k=-∞}^{∞} c_k e^{i2π(k/T)t}, (3.81)

c_k = (1/T) ∫_0^T g(t) e^{-i2π(k/T)t} dt, (3.82)

g(t) ~ a_0/2 + Σ_{k=1}^{∞} [ a_k cos(2π(k/T)t) + b_k sin(2π(k/T)t) ]. (3.83)
The ak and bk can be determined from the Ck and vice-versa. The details
are left as an exercise.
As yet another form of this relation, consider the case where T =
[−T/2, T/2) and replace c_k by d_k/T:

g(t) ~ Σ_{k=-∞}^{∞} (d_k/T) e^{i2π(k/T)t}; t ∈ [−T/2, T/2), (3.84)

d_k = ∫_{−T/2}^{T/2} g(t) e^{-i2π(k/T)t} dt; k ∈ Z. (3.85)
t mod 1 ~ 1/2 + Σ_{k≠0, k∈Z} (i/(2πk)) e^{i2πkt}. (3.87)

From the continuous time finite duration inversion formula this series will
have a value of 1/2 at the discontinuities which occur at integer t.
This series can be put in a somewhat simpler form by observing that
the sum can be rewritten as

Σ_{k=1}^{∞} (i/(2πk)) (e^{i2πkt} − e^{-i2πkt}) = −Σ_{k=1}^{∞} sin(2πkt)/(πk),

so that

t mod 1 ~ 1/2 − Σ_{k=1}^{∞} sin(2πkt)/(πk). (3.89)
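The sine series can be checked directly by summing a long partial sum; the truncation K = 20000 below is an arbitrary choice.

```python
import numpy as np

# Check of the sine series (3.89): t mod 1 ~ 1/2 - sum_{k>=1} sin(2 pi k t)/(pi k).
# At points of continuity the partial sums approach t mod 1; at integer t every
# sine term vanishes and the series gives the midpoint 1/2.
K = 20000
k = np.arange(1, K + 1)

def saw(t):
    return float(0.5 - np.sum(np.sin(2 * np.pi * k * t) / (np.pi * k)))

print(round(saw(0.25), 3))      # ≈ 0.25
print(round(saw(1.70), 3))      # ≈ 0.7  (= 1.70 mod 1)
print(round(saw(3.00), 3))      # ≈ 0.5  (midpoint at the jump)
```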
for real ψ. The quantity z can be considered as a parameter. This signal
is periodic in ψ with period 2π and hence can be expanded into a Fourier
series
e^{iz sin ψ} = Σ_{k=-∞}^{∞} c_k e^{ikψ},

where

c_k = (1/2π) ∫_0^{2π} e^{iz sin ψ} e^{-ikψ} dψ = J_k(z), (3.90)

the kth order Bessel function of (1.12). Thus we have the expansion

e^{iz sin ψ} = Σ_{k=-∞}^{∞} J_k(z) e^{ikψ}. (3.92)
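This expansion can be verified numerically without any Bessel-function library: compute the coefficients of (3.90) by numerical integration and confirm that the truncated series reproduces the signal. The values z = 2.0, the grid size 4096, and the truncation K = 20 are arbitrary choices.

```python
import numpy as np

# Numerical check of e^{i z sin psi} = sum_k J_k(z) e^{i k psi} (3.92):
# Fourier coefficients c_k of (3.90) via the rectangle rule on [0, 2 pi).
z = 2.0
M = 4096
psi = 2 * np.pi * np.arange(M) / M
sig = np.exp(1j * z * np.sin(psi))

K = 20
kk = np.arange(-K, K + 1)
# c_k = (1/2 pi) int_0^{2 pi} e^{i z sin psi} e^{-i k psi} d psi
c = np.array([np.mean(sig * np.exp(-1j * k * psi)) for k in kk])

recon = np.exp(1j * np.outer(psi, kk)) @ c
assert np.max(np.abs(recon - sig)) < 1e-10          # series reproduces the signal
print(round(c[K].real, 4))                          # c_0 = J_0(2) ≈ 0.2239
```

The coefficients decay extremely fast in |k| (as Bessel functions of fixed argument do), so a small truncation already matches the signal to machine precision.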
3.6 Duality
When inferring the inversion formula for the finite duration continuous time
Fourier transform from that for the infinite duration discrete time Fourier
transform, we often mentioned the duality of the two cases: interchanging
the role of time and frequency turned one Fourier transform pair into
another. We now consider this idea more carefully.
3.6. DUALITY 141
Knowing one transform often easily gives the result of a seemingly dif-
ferent transform. In fact, we have already taken advantage of this duality
in proving the finite duration continuous time Fourier inversion formula by
viewing it as a dual result to the infinite duration discrete time inversion
result.
Suppose, for example, you know that the Fourier transform of an infinite
duration continuous time signal g = {g(t); t ∈ R} is G(f), e.g., we found
that the transform of g(t) = e^{−t}H(t) is G(f) = (1 + i2πf)^{−1}. Now suppose
that you are asked to find the Fourier transform of the continuous time
signal r(t) = (1 + i2πt)^{−1}. This is easily done by noting that r(t) = G(t);
that is, r has the same functional dependence on its argument that G
has. We know, however, that the inverse Fourier transform of G(f) is g(t)
and that the inverse Fourier transform in the infinite duration continuous
time case is identical to a Fourier transform except for the sign of the
exponent. Putting these facts together we know that the Fourier transform
of r(t) = G(t) will be g(−f). The details might add some insight: Given
g(t) and G(f), then the inversion formula says that
get) and G(f), then the inversion formula says that
get) = J00
G(f)ei21rlt df
-00
= J00
-00
G(a)ei21rQt da,
where the name of the dummy variable is changed to help minimize confusion
when interchanging the roles of f and t. Now if r(t) = G(t), its Fourier
transform is found using the previous formula to be
F_f(r) = ∫_{-∞}^{∞} r(t) e^{-i2πft} dt = ∫_{-∞}^{∞} G(α) e^{i2πα(−f)} dα = g(−f).
To summarize: if

F_f({g(t); t ∈ R}) = G(f); f ∈ R,

then also

F_f({G(t); t ∈ R}) = g(−f); f ∈ R. (3.93)

Similarly,

F_f({sinc(t); t ∈ R}) = Π(f); f ∈ R. (3.94)
Almost the same thing happens in the DFT case because both transforms
have the same form; that is, they are both sums. Thus with suitable
scaling, every DFT transform pair has its dual pair formed by properly
reversing the roles of time and frequency as above.
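The DFT version of this duality is easy to verify numerically: applying the forward DFT twice returns the index-reversed signal scaled by N, the discrete counterpart of F(F(g))(f) = g(−f). A sketch:

```python
import numpy as np

# DFT duality: taking the DFT of a DFT returns N times the index-reversed
# (mod N) signal.
rng = np.random.default_rng(0)
N = 8
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)

G = np.fft.fft(g)                      # G(k/N) = sum_n g_n e^{-i 2 pi k n / N}
GG = np.fft.fft(G)                     # treat the spectrum itself as a signal

g_reversed = g[(-np.arange(N)) % N]    # g_{(-n) mod N}
assert np.allclose(GG, N * g_reversed)
print("DFT(DFT(g)) = N * g[(-n) mod N]")
```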
The finite duration continuous time and infinite duration discrete time
are a little more complicated because the transforms and inverses are not
of the same form: one is a sum and the other an integral. Note, however,
that every transform pair for a finite duration continuous time signal has as
its dual a transform pair for an infinite duration discrete time signal given
appropriate scaling. Suppose, for example, that we have an infinite duration
discrete time signal {gn; n E Z} with Fourier transform {G(f); f E [0, I)}
and hence

g_n = ∫_0^1 G(f) e^{i2πfn} df. (3.95)
Define the finite duration continuous time signal

x(t) = G(t/T); t ∈ [0, T)

(where now the scaling is needed to permit a time domain more general
than [0, 1)). In the specific example the waveform becomes
X(k/T) = ∫_0^T G(α/T) e^{-i2π(k/T)α} dα = T ∫_0^1 G(β) e^{-i2πkβ} dβ = T g_{−k},

which also has the flavor of the infinite duration continuous time result
except that the additional scaling is required. We can summarize this
duality result as follows: in the specific one-sided example the result is
nonzero only for k = 0, −1, −2, ··· and is zero otherwise.
3.7 Summary
The Fourier transform pairs for the four cases considered are summarized in
Table 3.2. The ~ notation in the continuous time inversion formulas can be
changed to = if the signals are continuous at t. Note in particular the ranges
of the time and frequency variables for the four cases: both are discrete in
the finite duration discrete time case (the DFT) and both are continuous in
the infinite duration continuous time case. In the remaining two cases one
parameter is continuous and the other discrete and these two cases can be
viewed as duals in the sense that the roles of time and frequency have been
reversed. These results assume that the transforms and inverses exist and
that the functions of continuous parameters are piecewise continuous. Note
also that the finite duration inversion formulas both involve a normalization
by the length of the duration, a normalization not required in the infinite
duration formulas. In many treatments, this normalization constant is di-
vided between the Fourier and inverse Fourier transforms to make them
more symmetric, e.g., both transform and inverse transform incorporate a
normalization factor of l/vN (discrete time) or l/VT (continuous time).
Such changes of the definitions by a constant do not affect any of the the-
ory, but one should be consistent. If one uses a stretched frequency scale
(instead of [0, 1) or [-1/2,1/2» for the infinite duration discrete time case,
then one needs a scaling of 1/8 in the inversion formula, where 8 is the
length of the frequency domain.
It is also informative to consider a similar table describing the nature
of the frequency domains for the various signal types. This is done in
Table 3.3. The focus is on the two attributes of the frequency domain S. As
with the time domain T, we have seen that the frequency can be discrete
or continuous. The table points out that also like the time domain, the
frequency domain can be "finite duration" or "infinite duration" in the sense
of being defined for a time interval of finite or infinite length. We dub these
cases finite bandwidth and infinite bandwidth and observe that continuous
time signals yield infinite bandwidth frequency domains and discrete time
signals yield finite bandwidth frequency domains. These observations add
to the duality of the time and frequency domains and between signals and
spectra: as there are four basic signal types, there are also four basic spectra
types. Lastly observe that discrete time finite duration signals yield discrete
frequency finite bandwidth spectra and continuous time infinite duration
signals yield continuous frequency infinite bandwidth spectra. Thus in these
cases the behavior of the time and frequency domains are the same. On the
other hand, continuous time finite duration signals yield discrete frequency
infinite bandwidth spectra and discrete time infinite duration signals yield
continuous frequency finite bandwidth spectra. In these cases the time and
3.8. * ORTHONORMAL BASES 145

Table 3.2 (finite duration entries):

Discrete time:
G(k/N) = Σ_{n=0}^{N−1} g_n e^{-i2π(k/N)n}; k ∈ Z_N
g_n = (1/N) Σ_{k=0}^{N−1} G(k/N) e^{i2π(k/N)n}; n ∈ Z_N

Continuous time:
G(k/T) = ∫_0^T g(t) e^{-i2π(k/T)t} dt; k ∈ Z
g(t) ~ Σ_{k=-∞}^{∞} (1/T) G(k/T) e^{i2π(k/T)t}; t ∈ [0, T)
Table 3.3:

Duration    Discrete Time                           Continuous Time
Finite      Discrete Frequency, Finite Bandwidth    Discrete Frequency, Infinite Bandwidth
Infinite    Continuous Frequency, Finite Bandwidth  Continuous Frequency, Infinite Bandwidth
We demonstrate this for the special case of discrete time finite duration sig-
nals. The case for continuous time finite duration signals is considered in
the exercises and similar ideas extend to infinite duration signals. In the
next section we consider an important special case, the discrete wavelet
transform.
We here confine interest to signals of the form g = {g(n); n = 0, 1, ..., N−1},
where N = 2^L for some integer L. Let G_N denote the collection of all
such signals, that is, the space of all real valued discrete time signals of
duration N.
A collection of signals ψ_k = {ψ_k(n); n = 0, 1, ..., N−1}; k = 0, 1, ..., K−1,
is said to form an orthonormal basis for the space G_N if (1) the signals
are orthonormal in the sense that

Σ_{n=0}^{N−1} ψ_k(n) ψ_l*(n) = δ_{k−l}, (3.96)
and (2), the set of signals is complete in the sense that any signal g ∈ G_N
can be expressed in the form

g(n) = Σ_{k=0}^{K−1} a_k ψ_k(n). (3.97)
For example, the signals

ψ_k = { e^{i2π(k/N)n}/√N ; n ∈ Z_N }, k = 0, ..., N−1, (3.98)
are orthonormal and the DFT inversion formula (3.14) guarantees that any
signal g ∈ G_N can be written as a weighted sum of the ψ_k and hence the
complex exponentials indeed form an orthonormal basis. Thus the idea
of an orthonormal basis can be viewed as a generalization of the Fourier
transform and its inverse.
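The orthonormality of the DFT exponentials is a one-line matrix computation; the Gram matrix of pairwise inner products (3.96) should be the identity. A sketch with N = 8:

```python
import numpy as np

# Check that psi_k(n) = e^{i 2 pi (k/N) n}/sqrt(N) of (3.98) are orthonormal.
N = 8
n = np.arange(N)
Psi = np.exp(2j * np.pi * np.outer(np.arange(N), n) / N) / np.sqrt(N)  # row k is psi_k

Gram = Psi @ Psi.conj().T              # Gram[k, l] = sum_n psi_k(n) psi_l*(n)
assert np.allclose(Gram, np.eye(N))
print("the DFT exponentials form an orthonormal basis")
```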
The smallest integer K for which there exists an orthonormal basis for a
space is called the dimension of the space. In the case of ON the dimension
is N. While we will not actually prove this, it should be believable since
the Fourier example proves the dimension is not more than N.
The general case mimics the Fourier example in another way. If we wish
to compute the linear weights ak in the expansion, observe that
Σ_{n=0}^{N−1} g(n) ψ_l*(n) = Σ_{k=0}^{N−1} a_k Σ_{n=0}^{N−1} ψ_l*(n) ψ_k(n)
= Σ_{k=0}^{N−1} a_k δ_{l−k} = a_l
using the orthonormality. Thus the "coefficients" a_k are calculated in general
in the same way as in the Fourier special case: multiply the signal by
ψ_l* and sum over time to get a_l. This is sometimes abbreviated using the
inner product or scalar product notation

⟨g, ψ_k⟩ ≜ Σ_{n=0}^{N−1} g(n) ψ_k*(n) (3.99)
Let K denote the set of all possible indices (m, k) specified in the above
collection.
With the exception of the (0,0) signal, these signals are all formed
by dilations and shifts of the basic function ψ. m is called the dilation
parameter and k is called the shift parameter. We say that ψ(t) is a wavelet
or mother wavelet if the collection {ψ_{m,k}; (m, k) ∈ K} forms an orthonormal
basis for G_N. (This is not the usual definition, but it will suit our purposes.)
If this is the case, then we can expand any signal g ∈ G_N in a series of the
form
g(n) = Σ_{(m,k)∈K} a_{m,k} ψ_{m,k}(n), (3.102)
ψ(t) = 1 for 0 ≤ t < 1/2, ψ(t) = −1 for 1/2 ≤ t < 1, and ψ(t) = 0 otherwise.
ψ_{2,0} = (1/√4)(1, 1, −1, −1, 0, 0, 0, 0)
ψ_{2,1} = (1/√4)(0, 0, 0, 0, 1, 1, −1, −1)
ψ_{3,0} = (1/√8)(1, 1, 1, 1, −1, −1, −1, −1)
ψ_{0,0} = (1/√8)(1, 1, 1, 1, 1, 1, 1, 1).
These signals are easily seen to be orthogonal. The fact that they form a
basis is less obvious, but it follows from the fact that the space is known
to have dimension N and hence N orthonormal signals must form a basis.
It also follows from the fact that we can write every shifted delta function
{δ_{n−k}; n = 0, 1, ..., N−1}, k = 0, 1, ..., N−1, as a linear combination of
the ψ_{m,k} and hence, since the shifted deltas form a basis, so do the ψ_{m,k}.
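The orthonormality of the listed vectors, and the coefficient formula a_k = ⟨g, ψ_k⟩, can be checked directly. Note that only the four vectors quoted above are used here, so they span a 4-dimensional subspace of G_8 rather than the whole space; the test signal below is an arbitrary choice that happens to lie in that span.

```python
import numpy as np

# Orthonormality and expansion check for the four listed length-8 signals.
psi_20 = np.array([1, 1, -1, -1, 0, 0, 0, 0]) / np.sqrt(4)
psi_21 = np.array([0, 0, 0, 0, 1, 1, -1, -1]) / np.sqrt(4)
psi_30 = np.array([1, 1, 1, 1, -1, -1, -1, -1]) / np.sqrt(8)
psi_00 = np.ones(8) / np.sqrt(8)
B = np.stack([psi_20, psi_21, psi_30, psi_00])

assert np.allclose(B @ B.T, np.eye(4))     # <psi_j, psi_k> = delta_{j-k}

g = np.array([2.0, 2.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0])
a = B @ g                                  # coefficients a_k = <g, psi_k>
g_hat = B.T @ a                            # expansion sum_k a_k psi_k
print(np.allclose(g_hat, g))               # True: this g lies in the span
```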
For example,
j E ZN, where as usual the shift is modulo N, verify the following formula:
3.9. * DISCRETE TIME WAVELET TRANSFORMS 151
b_{1,0} = g_0 + g_1
b_{1,1} = g_2 + g_3
b_{1,2} = g_4 + g_5
b_{1,3} = g_6 + g_7.
Thus the b_{1,k} sequence replaces the differences used for a_{1,k} by sums. Note
for later that summing up the b_{1,k} will give a_{0,0}.
Next we wish to compute the coefficients a_{2,k} = ⟨g, ψ_{2,k}⟩, but we
can use the computations already done to assist this. These coefficients
can be found by forming differences (as we used to find the a_{1,k}) on the
auxiliary b_{1,k}; i.e., form

a_{2,0} = (b_{1,0} − b_{1,1})/√4,  a_{2,1} = (b_{1,2} − b_{1,3})/√4.

We can also form the auxiliary sequence as before by replacing these
differences by sums,
and a_{0,0} as the (suitably normalized) sum of the b_{1,k}.

In the circularly symmetric infinite duration case we have the Fourier
transform pair given by the Hankel transform, with the signal recovered by

g_R(r) = 2π ∫_0^∞ ρ G(ρ) J_0(2πrρ) dρ
at all points of continuity of gR. Thus the transform and inverse transform
operations are identical in the circularly symmetric case.
3.11. PROBLEMS 153
3.11 Problems
3.1. Given a signal g = {g(t); t ∈ T}, what is F(F(g))? What is F(F(F(g)))?
3.2. Prove the two-sided discrete time finite duration Fourier inversion
formula (3.18).
3.3. Recall that the DCT of a signal {g(k,j); k = 0, 1, ..., N−1; j =
0, 1, ..., N−1} is defined by

g(k, j) = (2/N) Σ_{l=0}^{N−1} Σ_{m=0}^{N−1} C(l) C(m) G(l, m)
            cos((2k+1)lπ/(2N)) cos((2j+1)mπ/(2N)). (3.109)
Warning: This problem takes some hacking, but it is an important
result for engineering practice.
3.4. Find a Fourier series for the two-sided discrete time signal {r^{−|n|}; n =
−N, ..., 0, ..., N} for the cases |r| < 1, |r| = 1, and r > 1. Compare the
result with the Fourier series of the one-sided discrete time geometric
signal of (3.23). Write the Fourier series for the periodic extensions
of both signals (period N for the one-sided signal and period 2N+1
for the two-sided signal) and sketch the two periodic signals.
3.5. Find a Fourier series for the discrete time infinite duration signal
defined by g_n = n mod 10, n ∈ Z. Is the Fourier series accurate
for all integer n? Compare the result to the Fourier series for the
continuous time ramp function g(t) = t mod 10, t ∈ R.
3.6. Define the "roundoff" function q(x) which maps real numbers x into
the nearest integer, that is, q(x) = n if n − 1/2 < x ≤ n + 1/2. Define
the roundoff error by ε(x) = q(x) − x. Find a Fourier series in x for
ε(x).
3.7. What signal has Fourier transform e^{−|f|} for all real f?
3.8. Suppose that G is the infinite duration CT Fourier transform of a
signal g. Then the definition at f = 0 gives the formula

G(0) = ∫_{-∞}^{∞} g(t) dt. (3.110)
(a) Find an analogous result for the finite duration CT Fourier trans-
form.
(b) Repeat for the finite duration DT Fourier transform.
(c) Suppose now we have an infinite duration time signal h defined
by h(t) = G(t) for all real t. In words, we are now looking at
the function G as a time signal instead of a spectrum. What is
the Fourier transform H of h (in terms of g)?
(d) Use the previous part to find the dual of Eq. (3.110), that is,
relate ∫_{-∞}^{∞} G(f) df to g in a simple way for an infinite duration
CT signal g.
(e) Use the previous part to evaluate the integral

∫_{-∞}^{∞} sinc(t) dt.
(To appreciate this shortcut you might try to evaluate this inte-
gral by straightforward calculus.)
3.9. What is the Fourier transform of the finite duration, continuous time
signal

g(t) = sin(2πt(N + 1/2))/sin(πt); t ∈ [−1/2, 1/2)?

Find a Fourier series representation for the periodic extension (having
period 1) of this signal.
3.10. What infinite duration discrete time signal {g_n; n ∈ Z} has Fourier
transform {Π(4f); f ∈ [−1/2, 1/2)}?
3.11. Define the finite duration continuous time signal g = {g(t); t ∈
[−T/2, T/2)} by g(t) = A for −T/4 ≤ t ≤ T/4 and g(t) = 0 for
|t| > T/4. Find the Fourier transform G(f). Is the inverse Fourier
transform of G(f) equal to g(t)?
Now let g(t) be the infinite duration continuous time signal formed by
zero-filling g(t). Again find the Fourier transform and inverse Fourier
transform. (Note that the Fourier transform has the same functional
form in both cases, but the frequency domain of definition is different.
The inverse Fourier transforms, however, are quite different.)
3.12. What is the Fourier transform of the continuous time finite duration
signal g = {t; t ∈ [−1/2, 1/2)}? Find an exponential Fourier series
representation for g. Find a trigonometric Fourier series representation
for g.
3.13. Find a trigonometric Fourier series representation for the infinite
duration continuous time periodic signal g(t) = (t − 1/2) mod 1 − 1/2.
(First sketch the waveform.) For what values of t is the Fourier series
not accurate? Repeat for the signal g(t − 1/2).
3.14. Prove the Fourier transform pair relationship of (3.84)-(3.85) by di-
rect substitution.
3.15. Show how the a_n and b_n are related to g(t) in Eq. 3.83. Express the
a_n and b_n in terms of the c_n of the exponential Fourier series.
3.16. Orthogonal Expansions
A collection of signals {φ_i(t); t ∈ [0, T]}, i = −N, ..., 0, 1, 2, ..., N,
is said to be orthonormal on [0, T] if

∫_0^T φ_i(t) φ_j*(t) dt = δ_{i−j};

that is, the integral is 1 if the functions are the same and 0 otherwise.
(a) Suppose that you are told that a real-valued signal g is given by

g(t) = Σ_{k=−N}^{N} b_k φ_k(t).

How do you find the b_k from g(t) and the φ_k(t)? Evaluate the
energy

ε_g = ∫_0^T |g(t)|² dt

in terms of the b_i. (This is an example of Parseval's theorem.)
(b) Are the functions A sin(2πkt/T); k = 1, 2, ..., N orthonormal
on [0, T]?
(c) Suppose that we have an orthonormal set of functions {φ_k(t); t ∈
[0, T]}, k ∈ Z, and that we have an arbitrary signal g(t), t ∈
[0, T]. We want to construct an approximation p(t) to g(t) of
the form

p(t) = Σ_{n=−N}^{N} c_n φ_n(t).
(1/T) ∫_0^T |e(t)|² dt = (1/T) ∫_0^T |g(t) − q(t)|² dt
                        + (1/T) ∫_0^T |q(t) − p(t)|² dt
and

ε_G = ||G||² = G*G = Σ_{n=0}^{N−1} |G(n/N)|²,
Show that the MSE in the original time domain is proportional to that
in the frequency domain and find the constant of proportionality.
Hint: See Problem 3.21.
Now suppose that we wish to "compress" the image by throwing away
some of the transform coefficients. In other words, instead of keeping
all N floating point numbers describing the G(n/N); n = 0, 1, ..., N−1,
we only keep M < N of these coefficients and assume all the remain-
ing N - M coefficients are 0 for purposes of reconstruction. Assuming
a fixed number of bytes, say m, for representing each floating point
number on a digital computer, we have reduced the storage require-
ment for the signal from Nm bytes to Mm bytes, achieving a com-
pression ratio of N : M. Obviously this comes at a cost as the setting
h(t) = 6 − t for t ∈ [0, 6), and h(t) = 0 otherwise.
(c) Find the Fourier series coefficients c_g(k) for the signal g.
(d) What is the period of r(t)?
(e) Express the Fourier series coefficients c_r(k) for the periodic signal
r(t) in terms of the coefficients c_h(k) and c_g(k) that you found
in parts (b) and (c).
Basic Properties
4.1 Linearity
Recall that a linear combination of two signals g and h is a signal of the
form ag + bh = {ag(t) + bh(t); t ∈ T}. The most important elementary
property of Fourier transforms is given by the following theorem.
Theorem 4.1 The Fourier transform is linear; that is, given two signals
g and h and two complex numbers a and b, then

F(ag + bh) = aF(g) + bF(h).
The theorem follows immediately from the fact that the Fourier trans-
form is defined by a sum or integral and that sums and integrals have the
linearity property. We have already seen that the DFT is linear by express-
ing it in matrix form. Linearity can also easily be proved directly from the
definitions in this case:
Σ_{n=0}^{N−1} (a g_n + b h_n) e^{-i2πfn}
= a Σ_{n=0}^{N−1} g_n e^{-i2πfn} + b Σ_{n=0}^{N−1} h_n e^{-i2πfn}
= aG(f) + bH(f)
as claimed.
The linearity property is also sometimes called the superposition prop-
erty.
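The superposition property is trivial to confirm numerically for the DFT; the random test vectors and constants below are arbitrary choices.

```python
import numpy as np

# DFT linearity check: the transform of a*g + b*h is a*G + b*H for arbitrary
# complex constants a and b.
rng = np.random.default_rng(1)
N = 16
g = rng.standard_normal(N)
h = rng.standard_normal(N)
a, b = 2.0 - 1.0j, 0.5 + 3.0j

lhs = np.fft.fft(a * g + b * h)
rhs = a * np.fft.fft(g) + b * np.fft.fft(h)
assert np.allclose(lhs, rhs)
print("superposition holds for the DFT")
```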
162 CHAPTER 4. BASIC PROPERTIES
4.2 Shifts
In the previous section the effect on transforms of linear combinations of
signals was considered. Next the effect on transforms of shifting or delaying
a signal is treated.
4.2. SHIFTS 163
F_{k/N}(g_τ) = Σ_{n=0}^{N−1} g_{(n−τ) mod N} e^{-i2π(k/N)n} (4.4)

= Σ_{n=0}^{N−1} g_{(n−τ) mod N} e^{-i2π(k/N)(n−τ)} e^{-i2πτ(k/N)}

= Σ_{n=0}^{N−1} g_{(n−τ) mod N} e^{-i2π(k/N)((n−τ) mod N)} e^{-i2πτ(k/N)}

= e^{-i2πτ(k/N)} Σ_{n=0}^{N−1} g_n e^{-i2π(k/N)n} = e^{-i2πτ(k/N)} F_{k/N}(g); k ∈ Z_N,
n=O
proving the result. Note that we have used the fact that e^{-i2π(k/N)n} =
e^{-i2π(k/N)(n mod N)}, which holds since for n = KN + l with 0 ≤ l ≤ N−1,

e^{-i2π(k/N)(KN+l)} = e^{-i2πkK} e^{-i2π(k/N)l} = e^{-i2π(k/N)l}.
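The cyclic shift theorem just proved can be verified numerically; `np.roll` implements exactly the modulo-N delay used above.

```python
import numpy as np

# Cyclic shift theorem check: the DFT of the cyclically delayed signal
# g_{(n - tau) mod N} is e^{-i 2 pi tau k / N} times the DFT of g.
rng = np.random.default_rng(2)
N, tau = 16, 5
g = rng.standard_normal(N)
k = np.arange(N)

G_shifted = np.fft.fft(np.roll(g, tau))    # np.roll(g, tau)[n] = g[(n - tau) mod N]
assert np.allclose(G_shifted, np.exp(-2j * np.pi * tau * k / N) * np.fft.fft(g))
print("a cyclic delay multiplies the spectrum by a linear phase")
```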
Next consider the infinite duration CTFT. Here we simply change variables
α = t − τ to find

F_f(g_τ) = ∫_{-∞}^{∞} g(t − τ) e^{-i2πft} dt
= ∫_{-∞}^{∞} g(α) e^{-i2πf(α+τ)} dα = e^{-i2πfτ} F_f(g).
The finite duration CTFT and the infinite duration DTFT follow by similar
methods.
As an example, consider the infinite duration continuous time pulse
p(t) = A for t E [0, T) and 0 otherwise. This pulse can be considered as a
scaled and shifted box function

p(t) = A □_{T/2}(t − T/2),

and hence using linearity and the shift theorem

P(f) = A e^{-iπfT} T sinc(Tf). (4.5)
4.3 Modulation
The modulation theorem treats the modulation of a signal by a complex
exponential. It will be seen to be a dual result to the shift theorem; that is,
it can be viewed as the shift theorem with the roles of time and frequency
interchanged.
Suppose that g = {g(t); t ∈ T} is a signal with Fourier transform
G = {G(f); f ∈ S}. Consider the new signal g_e(t) = g(t) e^{i2πf0t}; t ∈ T,
where f0 ∈ S is a fixed frequency (sometimes called the carrier frequency).
The signal g_e(t) is said to be formed by modulating the complex
exponential e^{i2πf0t}, called the carrier, by the original signal g(t). In general,
modulating is the methodical alteration of one waveform, here the complex
exponential, by another waveform, called the signal. When the signal
and the carrier are simply multiplied together, the modulation is called
amplitude modulation or AM. In general AM includes multiplication by a
complex exponential or by sinusoids as in g_c(t) = g(t) cos(2πf0t); t ∈ T
and g_s(t) = g(t) sin(2πf0t); t ∈ T. Often AM is used in a strict sense to
mean signals of the form g_a(t) = A[1 + m g(t)] cos(2πf0t), which contains
a separate carrier term A cos(2πf0t). g_a(t) is referred to as double side-
band (DSB) or double sideband amplitude modulation (DSB-AM), while
the simpler forms of g_c or g_s are called double sideband suppressed carrier
4.3. MODULATION 165
(DSB-SC). The parameter m is called the modulation index and sets the
relative strengths of the signal and the carrier. Typically it is required that
m and g are chosen so that |m g(t)| < 1 for all t.
Amplitude modulation without the carrier term, i.e., g_c or g_s rather than g_a, is called linear modulation because the modulation is accomplished by a linear operation, albeit a time-varying one.
The operation of modulation can be considered as a system in a math-
ematical sense: the original signal put into the system produces at the
output a modulated version of the input signal. It is perhaps less obvious
in this case than in the ideal delay case that the resulting system is linear
(for the type of modulation considered - other forms of modulation can
result in nonlinear systems).
Theorem 4.3 The Modulation Theorem.
Given a signal {g(t); t ∈ 𝒯} with spectrum {G(f); f ∈ 𝒮}, then

$$ \{g(t)e^{i2\pi f_0 t};\ t \in \mathcal{T}\} \supset \{G(f - f_0);\ f \in \mathcal{S}\} $$

$$ \{g(t)\cos(2\pi f_0 t);\ t \in \mathcal{T}\} \supset \left\{\frac{1}{2}G(f - f_0) + \frac{1}{2}G(f + f_0);\ f \in \mathcal{S}\right\} $$

$$ \{g(t)\sin(2\pi f_0 t);\ t \in \mathcal{T}\} \supset \left\{\frac{i}{2}G(f + f_0) - \frac{i}{2}G(f - f_0);\ f \in \mathcal{S}\right\} $$
Proof: The remaining signal types are left as an exercise. In the infinite duration continuous time case,

$$ G_e(f) = \int_{-\infty}^{\infty}\left(g(t)e^{i2\pi f_0 t}\right)e^{-i2\pi ft}\,dt = G(f - f_0). $$

The results for cosine and sine modulation then follow via Euler's relations.
For the DFT we have that, with a frequency f₀ = k₀/N,

$$ G_e\!\left(\frac{k}{N}\right) = \sum_{n=0}^{N-1}\left(g_n e^{i2\pi\frac{k_0}{N}n}\right)e^{-i2\pi\frac{k}{N}n} = \sum_{n=0}^{N-1} g_n e^{-i2\pi\frac{k-k_0}{N}n} = G\!\left(\frac{k-k_0}{N}\right). $$
Thus, for example, the Fourier transform of ⊓(t)cos(πt) is given by

$$ \frac{1}{2}\,\mathrm{sinc}\!\left(f - \frac{1}{2}\right) + \frac{1}{2}\,\mathrm{sinc}\!\left(f + \frac{1}{2}\right). $$
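The DFT version of the modulation theorem says that multiplying g_n by e^{i2πk₀n/N} circularly shifts the DFT by k₀ bins. A quick numerical sanity check (not from the text; numpy assumed, whose `np.fft.fft` uses the same e^{−i2πkn/N} forward convention as the book's DFT):

```python
import numpy as np

N, k0 = 16, 3
rng = np.random.default_rng(0)
g = rng.standard_normal(N)

n = np.arange(N)
G = np.fft.fft(g)                          # forward DFT, kernel e^{-i 2 pi k n / N}
Ge = np.fft.fft(g * np.exp(2j * np.pi * k0 * n / N))

# Modulation theorem: G_e(k/N) = G((k - k0)/N), a circular shift of the spectrum
print(np.allclose(Ge, np.roll(G, k0)))     # True
```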
4.4 Parseval's Theorem

The energy of a continuous time infinite duration signal is defined by

$$ \mathcal{E}_g = \int_{-\infty}^{\infty} |g(t)|^2\,dt $$

and it has the interpretation of being the energy dissipated in a one ohm resistor if g is considered to be a voltage. It can also be viewed as a measure of the size of a signal. In a similar manner we can define the energy of the appropriate Fourier transform as

$$ \mathcal{E}_G = \int_{-\infty}^{\infty} |G(f)|^2\,df. $$

These two energies are easily related by substituting the definition of the transform, changing the order of integration, and using the inversion formula:

$$ \mathcal{E}_G = \int_{-\infty}^{\infty} G(f)G^*(f)\,df = \int_{-\infty}^{\infty} G(f)\left(\int_{-\infty}^{\infty} g(t)e^{-i2\pi ft}\,dt\right)^{\!*} df $$

$$ = \int_{-\infty}^{\infty} g^*(t)\left(\int_{-\infty}^{\infty} G(f)e^{i2\pi ft}\,df\right) dt = \int_{-\infty}^{\infty} g^*(t)g(t)\,dt = \mathcal{E}_g, $$

proving that the energies in the two domains are the same for the continuous time infinite duration case.
The corresponding result for the DFT can be proved by the analogous string of equalities for discrete time finite duration signals or by matrix manipulation as in Problem 3.21. In that case the result can be expressed in terms of the energies defined by

$$ \mathcal{E}_g = \sum_{n=0}^{N-1} |g(n)|^2 $$

and

$$ \mathcal{E}_G = \sum_{n=0}^{N-1} |G(n/N)|^2 $$

as

$$ \mathcal{E}_g = \frac{1}{N}\,\mathcal{E}_G. $$

The following theorem summarizes the general result and its specialization to the various signal types.
Theorem 4.4 Parseval's Theorem.

1. If the signals are infinite duration continuous time signals, then

$$ \mathcal{E}_g = \int_{-\infty}^{\infty} |g(t)|^2\,dt = \int_{-\infty}^{\infty} |G(f)|^2\,df = \mathcal{E}_G. $$

2. If the signals are finite duration continuous time signals, then

$$ \mathcal{E}_g = \int_0^T |g(t)|^2\,dt = \frac{1}{T}\sum_{n=-\infty}^{\infty}\left|G\!\left(\frac{n}{T}\right)\right|^2 = \frac{1}{T}\,\mathcal{E}_G. \tag{4.7} $$
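The DFT form of Parseval's theorem, E_g = E_G/N, is easy to confirm numerically. A minimal sketch (not from the text; numpy assumed, whose unnormalized forward FFT matches the book's DFT convention):

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.standard_normal(64) + 1j * rng.standard_normal(64)   # arbitrary complex signal
G = np.fft.fft(g)

Eg = np.sum(np.abs(g) ** 2)    # time-domain energy
EG = np.sum(np.abs(G) ** 2)    # frequency-domain energy
print(np.isclose(Eg, EG / len(g)))   # True: E_g = E_G / N
```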
The same ideas extend to inner products of two signals. Consider two infinite duration signals g and h with Fourier transforms G and H and consider the integral

$$ \langle g, h\rangle = \int_{-\infty}^{\infty} g(t)h^*(t)\,dt $$

together with the corresponding frequency domain quantity

$$ \langle G, H\rangle = \int_{-\infty}^{\infty} G(f)H^*(f)\,df. $$

Exactly as in the earlier case where g = h, we have that

$$ \langle G, H\rangle = \int_{-\infty}^{\infty} g(t)\left(\int_{-\infty}^{\infty} H(f)e^{i2\pi ft}\,df\right)^{\!*} dt = \int_{-\infty}^{\infty} g(t)h^*(t)\,dt = \langle g, h\rangle. $$
In a similar fashion we can define inner products for discrete time finite duration signals in the natural way as

$$ \langle g, h\rangle = \sum_{n=0}^{N-1} g(n)h^*(n) $$

and

$$ \langle G, H\rangle = \sum_{n=0}^{N-1} G\!\left(\frac{n}{N}\right)H^*\!\left(\frac{n}{N}\right), $$

and derive by a similar argument that for the DFT case

$$ \langle g, h\rangle = \frac{1}{N}\langle G, H\rangle. $$

In summary, we have the following relations for the various signal types.

1. If the signals are infinite duration continuous time signals, then

$$ \langle g, h\rangle = \int_{-\infty}^{\infty} g(t)h^*(t)\,dt = \int_{-\infty}^{\infty} G(f)H^*(f)\,df = \langle G, H\rangle. $$

2. If the signals are finite duration continuous time signals, then

$$ \langle g, h\rangle = \int_0^T g(t)h^*(t)\,dt = \frac{1}{T}\sum_{n=-\infty}^{\infty} G\!\left(\frac{n}{T}\right)H^*\!\left(\frac{n}{T}\right) = \frac{1}{T}\langle G, H\rangle. $$

3. If the signals are infinite duration discrete time signals, then

$$ \langle g, h\rangle = \sum_{n=-\infty}^{\infty} g_n h_n^* = \int_{-1/2}^{1/2} G(f)H^*(f)\,df = \langle G, H\rangle. $$
We shall later see that Parseval's theorem is itself just a special case of
the convolution theorem, but we do not defer its statement as it is a handy
result to have without waiting for the additional ideas required for the more
general result.
Parseval's theorem is extremely useful for evaluating integrals. For example, the integral

$$ \int_{-\infty}^{\infty} \mathrm{sinc}^2(t)\,dt $$

is difficult to evaluate using straightforward calculus. Since sinc(t) ↔ ⊓(f), where the double arrow was defined in (3.2) as denoting that the signal and spectrum are a Fourier transform pair, Parseval's Theorem can be applied to yield

$$ \int_{-\infty}^{\infty} \mathrm{sinc}^2(t)\,dt = \int_{-\infty}^{\infty} \sqcap^2(f)\,df = \int_{-1/2}^{1/2} df = 1. $$
As an example of the general theorem observe that

$$ \int_{-\infty}^{\infty} \mathrm{sinc}^3(t)\,dt = \int_{-\infty}^{\infty} \mathrm{sinc}(t)\,\mathrm{sinc}^2(t)\,dt = \int_{-\infty}^{\infty} \sqcap(f)\wedge(f)\,df = 2\int_0^{1/2}(1 - f)\,df = \frac{3}{4}. \tag{4.8} $$
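Both integrals can be checked by brute-force numerical integration. A sketch (not from the text; numpy assumed, with `np.sinc` matching the book's sin(πx)/(πx) convention; the finite integration window introduces a small truncation error):

```python
import numpy as np

# Midpoint-rule approximations over a wide but finite window
h = 1e-3
t = np.arange(-200, 200, h) + h / 2
s = np.sinc(t)

I2 = np.sum(s ** 2) * h    # approximately 1     (Parseval with box spectrum)
I3 = np.sum(s ** 3) * h    # approximately 3/4   (inner product of box and triangle)
print(I2, I3)
```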
4.5 The Sampling Theorem

Suppose that g = {g(t); t ∈ ℛ} is an infinite duration continuous time signal that is bandlimited: its spectrum satisfies G(f) = 0 for |f| ≥ W. Since G is nonzero only on the finite interval (−W, W), it can be expanded there in a Fourier series

$$ G(f) = \sum_{n=-\infty}^{\infty} c_n e^{-i2\pi\frac{n}{2W}f};\quad f \in (-W, W), $$

where

$$ c_n = \frac{1}{2W}\int_{-W}^{W} G(f)e^{i2\pi\frac{n}{2W}f}\,df. $$
Note that the only thing unusual in this derivation is the interchange
of signs in the exponentials and the fact that we have formed a Fourier
series for a finite bandwidth spectrum instead of a Fourier series for a finite
duration signal. It is this interchange of roles for time and frequency that
suggests the corresponding changes in the signs of the exponentials.
Since G(f) is assumed to be zero outside of (−W, W), we can rewrite the coefficients as

$$ c_n = \frac{1}{2W}\int_{-W}^{W} G(f)e^{i2\pi\frac{n}{2W}f}\,df = \frac{1}{2W}\int_{-\infty}^{\infty} G(f)e^{i2\pi\frac{n}{2W}f}\,df = \frac{1}{2W}\,g\!\left(\frac{n}{2W}\right);\quad n \in \mathcal{Z}. $$

Thus the spectrum of a bandlimited signal is completely determined by the samples of the signal:

$$ G(f) = \begin{cases} \displaystyle\frac{1}{2W}\sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right)e^{-i2\pi\frac{n}{2W}f} & f \in (-W, W)\\[1ex] 0 & \text{otherwise.}\end{cases} \tag{4.10} $$
This formula yields an interesting observation. Suppose that we define the discrete time signal γ = {γ_n; n ∈ 𝒵} by the samples of g, i.e.,

$$ \gamma_n = g\!\left(\frac{n}{2W}\right);\quad n \in \mathcal{Z}. \tag{4.12} $$
We will see in the next section how to generalize this relation between the
transform of a continuous time signal to the DTFT of a sampled version
of the same signal to the case where Ts > 1/2Wmin, i.e., the signal is not
bandlimited or it is bandlimited but the sampling period is too large for
the above analysis to hold.
Knowing the spectrum in terms of the samples of the signal means that
we can take an inverse Fourier transform and find the original signal in
terms of its samples! In other words, knowing g(n/2W) for all n E Z
determines the entire continuous time signal. Taking the inverse Fourier
transform we have that
$$ g(t) = \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right)\mathrm{sinc}(2Wt - n). \tag{4.14} $$
A dual result holds in the frequency domain: if g(t) = 0 for |t| ≥ T, then the spectrum can be recovered from its samples as

$$ G(f) = \sum_{n=-\infty}^{\infty} G\!\left(\frac{n}{2T}\right)\mathrm{sinc}\!\left[2T\!\left(f - \frac{n}{2T}\right)\right]. \tag{4.17} $$
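The sampling expansion can be tried numerically. The sketch below (not from the text; numpy assumed) uses g(t) = sinc²(t), whose spectrum is the triangle ∧(f) and hence zero for |f| ≥ 1, so W = 1 and the samples g(n/2W) taken at rate 2W = 2 determine the whole signal; a truncated version of the sum already reconstructs g to high accuracy:

```python
import numpy as np

W = 1.0
g = lambda t: np.sinc(t) ** 2       # bandlimited: spectrum /\(f) vanishes for |f| >= 1

n = np.arange(-2000, 2001)          # truncate the infinite expansion
samples = g(n / (2 * W))

t = np.linspace(-3.3, 3.3, 7)       # arbitrary off-sample test points
recon = np.array([np.sum(samples * np.sinc(2 * W * tt - n)) for tt in t])
print(np.max(np.abs(recon - g(t))))  # small truncation error only
```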
4.6 The DTFT of a Sampled Signal

Suppose that g = {g(t); t ∈ ℛ} is an infinite duration continuous time signal with CTFT G and that we form the discrete time signal γ = {γ_n; n ∈ 𝒵} by sampling with period T_s:

$$ \gamma_n = g(nT_s);\quad n \in \mathcal{Z}; $$

that is, γ is the sampled version of g. Unlike the previous section, no assumptions are made to the effect that g is bandlimited, so there is no guarantee that the sampling theorem holds or that g can be reconstructed from γ. The immediate question is the following: how does the DTFT Γ of the sampled signal, defined by

$$ \Gamma(f) = \sum_{k=-\infty}^{\infty} \gamma_k e^{-i2\pi fk};\quad f \in \left[-\frac{1}{2}, \frac{1}{2}\right), \tag{4.18} $$

relate to the CTFT G of the original signal?
First Approach

By the inverse DTFT, the samples can be written

$$ \gamma_n = \int_{-1/2}^{1/2} \Gamma(f)e^{i2\pi fn}\,df. \tag{4.19} $$

On the other hand, the inverse CTFT gives

$$ \gamma_n = g(nT_s) = \int_{-\infty}^{\infty} G(f)e^{i2\pi fnT_s}\,df = \sum_{k=-\infty}^{\infty}\int_{(k-1/2)/T_s}^{(k+1/2)/T_s} G(f)e^{i2\pi fnT_s}\,df, \tag{4.20} $$

where we have broken up the integral into an infinite sum of integrals over disjoint intervals of length 1/T_s. Each of these integrals becomes, with a change of variables f′ = fT_s − k,

$$ \int_{(k-1/2)/T_s}^{(k+1/2)/T_s} G(f)e^{i2\pi fnT_s}\,df = \frac{1}{T_s}\int_{-1/2}^{1/2} G\!\left(\frac{f'+k}{T_s}\right)e^{i2\pi f'n}\,df', $$

using the fact that e^{i2πkn} = 1. Relabeling the summation index (k → −k) and interchanging the sum and integral in (4.20) yields the formula

$$ \gamma_n = \int_{-1/2}^{1/2} e^{i2\pi fn}\left[\frac{1}{T_s}\sum_{k=-\infty}^{\infty} G\!\left(\frac{f-k}{T_s}\right)\right] df. \tag{4.22} $$

Comparison with (4.19) identifies the term in brackets as the DTFT of γ; that is,

$$ \Gamma(f) = \frac{1}{T_s}\sum_{k=-\infty}^{\infty} G\!\left(\frac{f-k}{T_s}\right). \tag{4.23} $$
Second Approach
We establish this relation in an indirect manner, but one which yields an interesting and well known side result. Let Γ̃ denote the periodic extension of Γ with period 1; that is,

$$ \tilde{\Gamma}(f) = \sum_{n=-\infty}^{\infty} \gamma_n e^{-i2\pi fn};\quad f \in \mathcal{R}. \tag{4.24} $$

In much of the literature the same notation is used for Γ and Γ̃ with the domain of definition left to context, but we will distinguish them.

Consider the following function of frequency formed by adding an infinite number of scaled and shifted versions of G:

$$ a(f) = \frac{1}{T_s}\sum_{k=-\infty}^{\infty} G\!\left(\frac{f-k}{T_s}\right);\quad f \in \mathcal{R}. \tag{4.25} $$

Since a(f) is periodic with period 1, it can (under suitable conditions) be expanded in a Fourier series

$$ a(f) = \sum_{k=-\infty}^{\infty} c_k e^{-i2\pi fk}, \tag{4.26} $$

where

$$ c_k = \int_{-1/2}^{1/2} a(f)e^{i2\pi fk}\,df. $$

Before evaluating these coefficients, note the similarity of (4.24) and (4.26). In fact we will demonstrate that c_k = γ_k = g(kT_s), thereby showing that Γ̃(f) = a(f), and hence Γ(f) = a(f) for f ∈ [−1/2, 1/2), since the two continuous functions have the same Fourier series. This will provide the desired formula relating Γ and G.
Substituting (4.25) and changing variables f′ = (f − n)/T_s in each term:

$$ c_k = \frac{1}{T_s}\sum_{n=-\infty}^{\infty}\int_{-1/2}^{1/2} G\!\left(\frac{f-n}{T_s}\right)e^{i2\pi fk}\,df = \sum_{n=-\infty}^{\infty}\int_{\frac{-n-1/2}{T_s}}^{\frac{-n+1/2}{T_s}} G(f')e^{i2\pi k(T_sf'+n)}\,df' $$

$$ = \sum_{n=-\infty}^{\infty}\int_{\frac{-n-1/2}{T_s}}^{\frac{-n+1/2}{T_s}} G(f')e^{i2\pi kT_sf'}\,df' = \int_{-\infty}^{\infty} G(f')e^{i2\pi kT_sf'}\,df' = g(kT_s). \tag{4.29} $$
Thus c_k = γ_k as claimed, so Γ̃(f) = a(f), and we have shown that

$$ \frac{1}{T_s}\sum_{k=-\infty}^{\infty} G\!\left(\frac{f-k}{T_s}\right) = \sum_{n=-\infty}^{\infty} g(nT_s)e^{-i2\pi nf};\quad f \in \mathcal{R}. \tag{4.30} $$
In particular, if f = 0, then

$$ \frac{1}{T_s}\sum_{k=-\infty}^{\infty} G\!\left(\frac{k}{T_s}\right) = \sum_{n=-\infty}^{\infty} g(nT_s). \tag{4.31} $$

Restricting Γ̃ to a single period recovers the DTFT of the sampled signal:

$$ \Gamma(f) = \frac{1}{T_s}\sum_{k=-\infty}^{\infty} G\!\left(\frac{f-k}{T_s}\right);\quad f \in \left[-\frac{1}{2}, \frac{1}{2}\right). \tag{4.32} $$
[Figure: a spectrum G(f) bandlimited to (−W, W).]

Each term in the sum (4.32) is a shifted replica of G(f/T_s), the original spectrum with the argument "stretched." We depict this basic waveform by simply relabeling the frequency axis as in Figure 4.2.
If the sampling period T_s is chosen so that 1/T_s ≥ 2W, or equivalently WT_s ≤ 1/2, then the DTFT of the sampled signal is given by (4.29) and the individual terms in the sum do not overlap, yielding the picture of Figure 4.3 with separate "islands" for each term in the sum. Only one term, the k = 0 term, will be nonzero in the frequency region [−1/2, 1/2]. In this case,

$$ \Gamma(f) = \frac{1}{T_s}\,G\!\left(\frac{f}{T_s}\right);\quad f \in \left(-\frac{1}{2}, \frac{1}{2}\right) \tag{4.33} $$
[Figure 4.3: the DTFT Γ(f) of the sampled signal, with separate non-overlapping spectral islands.]
and the DTFT and CTFT are simply frequency and amplitude scaled versions of each other, and the continuous time signal g can be recovered from the discrete time signal γ by inverting:

$$ g(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\mathrm{sinc}\!\left(\frac{t}{T_s} - n\right), $$

which is the same as (4.13) and provides another proof of the sampling theorem! If g is not bandlimited, however, the separate scaled images of G in the sum giving Γ(f) will overlap as depicted in Figure 4.4, so that taking the sum to form the spectrum will yield a distorted version in (−1/2, 1/2), as shown in Figure 4.5. This prevents recovery of G and hence of g in general. This phenomenon is known as aliasing.
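Aliasing is easy to exhibit concretely. In the sketch below (not from the text; numpy assumed), a 7 Hz cosine sampled at 10 Hz violates the Nyquist condition and produces exactly the same samples as a 3 Hz cosine — the two frequencies are indistinguishable after sampling:

```python
import numpy as np

fs = 10.0                 # sampling rate, below the Nyquist rate 2*7 = 14 Hz
n = np.arange(50)
x_fast  = np.cos(2 * np.pi * 7.0 * n / fs)   # 7 Hz signal
x_alias = np.cos(2 * np.pi * 3.0 * n / fs)   # its alias at |7 - 10| = 3 Hz
print(np.allclose(x_fast, x_alias))          # True
```

The identity follows from cos(2π·7n/10) = cos(2π·7n/10 − 2πn) = cos(2π·3n/10).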
[Figures 4.4 and 4.5: overlapping spectral replicas when g is not bandlimited, and the resulting distorted (aliased) spectrum Γ̃(f).]
A byproduct of this development is a useful representation for the periodic extension of a signal:

$$ \sum_{n=-\infty}^{\infty} g(t - nT) = \frac{1}{T}\sum_{k=-\infty}^{\infty} G\!\left(\frac{k}{T}\right)e^{i2\pi\frac{k}{T}t}. \tag{4.36} $$
Consider next the continuous time signal formed from the samples γ_n = g(nT_s) by pulse modulation:

$$ r(t) = \sum_{n=-\infty}^{\infty} \gamma_n\,p(t - nT_s), $$

where the signal (pulse) p = {p(t); t ∈ ℛ} has a Fourier transform P(f). In the ideal sampling expansion, the pulses would be sinc functions and the transforms box functions; a more realistic pulse might be used in practice. Taking the transform of r term by term,

$$ R(f) = \sum_{n=-\infty}^{\infty} \gamma_n P(f)e^{-i2\pi nfT_s}, \tag{4.39} $$

where the last step used the shift theorem for continuous time Fourier transforms. Pulling P(f) out of the sum we are left with

$$ R(f) = \frac{P(f)}{T_s}\sum_{k=-\infty}^{\infty} G\!\left(f - \frac{k}{T_s}\right). \tag{4.41} $$
differ in important ways for the different signal types and hence we are
forced to consider the cases separately.
Proof: We here consider only the case a > 0. The case of negative a is left as an exercise. If a is strictly positive, just change variables τ = at to obtain

$$ \int_{-\infty}^{\infty} g(at)e^{-i2\pi ft}\,dt = \frac{1}{a}\int_{-\infty}^{\infty} g(\tau)e^{-i2\pi\frac{f}{a}\tau}\,d\tau = \frac{1}{a}\,G\!\left(\frac{f}{a}\right). $$
4.9 * Downsampling
Stretching the time domain variable has a much different behavior in discrete time. Suppose that {g_n; n ∈ 𝒵} is a discrete time signal. If, analogous to the continuous time case, we try to form a new signal {g_{an}; n ∈ 𝒵}, we run into trouble unless a is an integer, since otherwise g_{an} is not defined. Taking a to be a positive integer, say a = M, yields the downsampled signal g^{[M]} = {g_{nM}; n ∈ 𝒵} with DTFT

$$ G^{[M]}(f) = \sum_{n=-\infty}^{\infty} g_{nM}e^{-i2\pi fn};\quad f \in \left[-\frac{1}{2}, \frac{1}{2}\right). $$

From the inversion formula,

$$ g_n = \int_{-1/2}^{1/2} G(f)e^{i2\pi fn}\,df, $$

and hence we can find the downsampled signal by simply plugging in the appropriate time index:

$$ g_{nM} = \int_{-1/2}^{1/2} G(f)e^{i2\pi fnM}\,df. \tag{4.43} $$
This formula can be compared with the inverse Fourier transform representation for the downsampled process:

$$ g_{nM} = \int_{-1/2}^{1/2} G^{[M]}(f)e^{i2\pi fn}\,df. \tag{4.44} $$

It is tempting to identify G^{[M]}(f) with G(f/M)/M, but this inference cannot in general be made because of the differing regions of integration. Suppose, however, that G(f/M) = 0 for |f| ≥ 1/2; that is, that

$$ G(f) = 0 \ \text{for}\ |f| \geq \frac{1}{2M}. $$

Then changing variables in (4.43) gives

$$ g_{nM} = \int_{-1/2}^{1/2} \frac{1}{M}\,G\!\left(\frac{f}{M}\right)e^{i2\pi fn}\,df, \tag{4.45} $$

and this formula combined with (4.44) and the uniqueness of Fourier transforms proves the following discrete time stretch theorem.
[Figure: a spectrum G(f) confined to |f| < 1/(2M), and the stretched spectrum G^{[M]}(f) = G(f/M)/M of the downsampled signal.]
Theorem (Discrete Time Stretch Theorem) If {g_n; n ∈ 𝒵} has Fourier transform G(f) satisfying G(f) = 0 for |f| ≥ 1/(2M), then

$$ G^{[M]}(f) = \frac{1}{M}\,G\!\left(\frac{f}{M}\right);\quad f \in \left[-\frac{1}{2}, \frac{1}{2}\right), $$

and

$$ g_n = \sum_{k=-\infty}^{\infty} g_{kM}\,\mathrm{sinc}\!\left(\frac{n}{M} - k\right), $$

and hence the original signal can be recovered from its downsampled version.

Proof: We have, using the bandlimited property and making a change of variables, that

$$ g_n = \int_{-1/2}^{1/2} G^{[M]}(f)e^{i2\pi f\frac{n}{M}}\,df = \sum_{k=-\infty}^{\infty} g_{kM}\int_{-1/2}^{1/2} e^{i2\pi f\left(\frac{n}{M}-k\right)}\,df = \sum_{k=-\infty}^{\infty} g_{kM}\,\mathrm{sinc}\!\left(\frac{n}{M} - k\right). $$
The theorem can also be proved in a manner much like the continuous time sampling theorem. One begins by expanding the bandlimited transform G(f) on the region [−1/(2M), 1/(2M)) into a Fourier series, the coefficients of which are in terms of the samples. An inverse DTFT then yields the theorem.
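The recovery formula can be tested numerically. The sketch below (not from the text; numpy assumed) builds a discrete time signal whose DTFT is a box confined to |f| < 1/(4M), well inside the required band, downsamples it by M, and reconstructs the original samples from the downsampled ones:

```python
import numpy as np

M = 4
a = 1.0 / (2 * M)                  # g_n = a*sinc(a*n) has DTFT = box on |f| < a/2 = 1/(4M)
g = lambda n: a * np.sinc(a * n)

k = np.arange(-4000, 4001)         # truncated version of the infinite sum
down = g(k * M)                    # downsampled signal g_{kM}

n_test = np.arange(-10, 11)
recon = np.array([np.sum(down * np.sinc(nn / M - k)) for nn in n_test])
print(np.max(np.abs(recon - g(n_test))))   # small truncation error only
```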
4.10 * Upsampling
As previously mentioned, another method of stretching the time variable in
discrete time is to permit a scaling constant that is not an integer, but to
extend the definition of the signal to the new time indices. The most com-
mon example of this is to take the scale constant to be of the form a = 11M,
4.11. THE DERIVATIVE AND DIFFERENCE THEOREMS 187
L
00
H(f) = hne-i21f/n
n=-oo
L
00
= hnMe-i21f/nM
n=-oo
00. 11
= '"' 9 e-,21f/nM.
L...J n ,
f E [-- -)
2'2·
n=-oo
The latter formula resembles G(f), the Fourier transform of g, with a scale
change of the frequency range. Strictly speaking, G(f) is only defined for
f E [-t, t), but the right-hand side of the above equation has frequencies
f M which vary from [- ~ , ~). Thus H (f) looks like M periodic replicas
of a compressed G(f). In particular, if a(f) is the periodic extension of
G(f), then
- 1 1
H(f) = G(Mf)j f E [-2' 2)·
[Figure: a spectrum G(f) and the DTFT H(f) of the upsampled signal, consisting of M compressed replicas of G.]
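In DFT terms the same replication appears when zeros are inserted between samples (compare Problem 4.17). A quick check (not from the text; numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
g = rng.standard_normal(8)
M = 3

h = np.zeros(len(g) * M)
h[::M] = g                        # upsample: insert M-1 zeros between samples

G = np.fft.fft(g)
H = np.fft.fft(h)
# H(k) = G(k mod N): the spectrum of g repeated M times
print(np.allclose(H, np.tile(G, M)))   # True
```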
4.11 The Derivative and Difference Theorems

Suppose that g = {g(t); t ∈ ℛ} is an infinite duration continuous time signal with derivative g′, and let G(f) denote the Fourier transform of g. Then if the signal is nice enough for the Fourier inversion formula to hold and for us to be able to interchange the order of differentiation and integration:

$$ g'(t) = \frac{d}{dt}\int_{-\infty}^{\infty} G(f)e^{i2\pi ft}\,df = \int_{-\infty}^{\infty} G(f)\frac{d}{dt}e^{i2\pi ft}\,df = \int_{-\infty}^{\infty} \left(i2\pi f\right)G(f)e^{i2\pi ft}\,df. $$

From the inversion formula we can identify i2πfG(f) as the Fourier transform of g′, yielding the following result:

$$ \{g'(t);\ t \in \mathcal{R}\} \supset \{i2\pi f\,G(f);\ f \in \mathcal{R}\}. \tag{4.49} $$
The discrete time analog of the derivative is the first difference g_n − g_{n−1}. Its DTFT follows from linearity and the shift theorem:

$$ \sum_{n=-\infty}^{\infty}\left(g_n - g_{n-1}\right)e^{-i2\pi fn} = \sum_{n=-\infty}^{\infty} g_n e^{-i2\pi fn} - \sum_{n=-\infty}^{\infty} g_n e^{-i2\pi f(n+1)} = \left(1 - e^{-i2\pi f}\right)G(f). \tag{4.51} $$
Note that the kth order difference of a signal can be defined iteratively by

$$ g_n^{(k)} = g_n^{(k-1)} - g_{n-1}^{(k-1)}, $$

e.g.,

$$ g_n^{(2)} = g_n^{(1)} - g_{n-1}^{(1)} = g_n - 2g_{n-1} + g_{n-2}, $$

and so on.
The derivative and difference theorems are similar, but the dependence on f in the extra term is different. In particular, the multiplier in the discrete time case is periodic in f with period 1 (as it should be).

Both results give further transform pairs via the dual results. In the continuous time case we have that the Fourier transform of the signal tg(t) is

$$ F_f(\{tg(t);\ t \in \mathcal{R}\}) = \frac{i}{2\pi}\,G'(f);\quad f \in \mathcal{R}. $$

This can be proved directly by differentiating the formula for G(f) with respect to f, which results in a multiplier of −i2πt. The discrete time equivalent is left as an exercise.
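A finite-length (DFT) analog of the difference theorem uses the circular first difference, for which the same multiplier 1 − e^{−i2πf} appears at the DFT frequencies f = k/N. A sketch (not from the text; numpy assumed):

```python
import numpy as np

N = 32
rng = np.random.default_rng(3)
g = rng.standard_normal(N)

d = g - np.roll(g, 1)              # circular first difference g_n - g_{n-1}
D = np.fft.fft(d)

f = np.arange(N) / N               # DFT frequencies k/N
print(np.allclose(D, (1 - np.exp(-2j * np.pi * f)) * np.fft.fft(g)))   # True
```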
4.12 * Moment Generating

For the other signal types the defining sums and integrals range over the appropriate domain of the independent variable, but the basic ideas of evaluating moments from transforms remain the same.
We focus on moments in the time domain, but duality can be invoked
to find similar formulas for frequency domain moments. We will consider
both continuous and discrete time, but we focus on the infinite duration
case for simplicity. It is often convenient to normalize moments, as we shall
do later.
We begin with the general definition of moments (unnormalized) and we will then consider several special cases. Given a signal g, define for any nonnegative integer n the nth order moment

$$ M_g^{(n)} = \begin{cases} \displaystyle\int_{-\infty}^{\infty} t^n g(t)\,dt & \text{continuous time}\\[1ex] \displaystyle\sum_{k=-\infty}^{\infty} k^n g_k & \text{discrete time.}\end{cases} $$

The zeroth order moment is just the area of the signal:

$$ M_g^{(0)} = \begin{cases} \displaystyle\int_{-\infty}^{\infty} g(t)\,dt & \text{continuous time}\\[1ex] \displaystyle\sum_{k=-\infty}^{\infty} g_k & \text{discrete time.}\end{cases} $$

This is easily recognized in both cases as being G(0), where G(f) is the Fourier transform of g. Thus for both continuous and discrete time,

$$ M_g^{(0)} = G(0). \tag{4.52} $$
Analogous definitions apply in the frequency domain, where the spectral moments are integrals. Similar results could be obtained for the finite duration counterparts, but that is left as an exercise.
As an example of the use of the area property, consider the continuous time signal g(t) = J₀(2πt), a zeroth order ordinary Bessel function. From the transform tables,

$$ G(f) = \frac{1}{\pi\sqrt{1 - f^2}};\quad |f| < 1, $$

and 0 otherwise. Thus from the area property,

$$ \int_{-\infty}^{\infty} J_0(2\pi t)\,dt = G(0) = \frac{1}{\pi}. $$
The first moment can similarly be found from the derivative of the spectrum at the origin, since G′(f) = −i2π∫ t g(t)e^{−i2πft} dt. The above formula of course holds only if the derivative and integral can be interchanged. This is the case if the absolute moment exists, ∫_{−∞}^{∞}|tg(t)| dt < ∞; then

$$ M_g^{(1)} = \frac{i}{2\pi}\,G'(0). \tag{4.55} $$

As an example, consider the signal with transform G(f) = sinc(f)e^{−6iπf} (a unit box centered at t = 3). Then

$$ \frac{dG(f)}{df} = -6\pi i\,\mathrm{sinc}(f)e^{-6i\pi f} + e^{-6i\pi f}\frac{d\,\mathrm{sinc}(f)}{df}. $$

At f = 0, d sinc(f)/df = 0 (any even function has zero slope at the origin if the derivative is well-defined) and therefore G′(0) = −6πi, and hence

$$ \int_{-\infty}^{\infty} t\,g(t)\,dt = \frac{iG'(0)}{2\pi} = \frac{6\pi}{2\pi} = 3. $$
More generally, the nth moment can be found from the nth derivative of the spectrum, provided the signal satisfies the condition

$$ \int_{-\infty}^{\infty} |t^n g(t)|\,dt < \infty\quad\text{continuous time,} \tag{4.56} $$

$$ \sum_{k=-\infty}^{\infty} |k^n g_k| < \infty\quad\text{discrete time;} \tag{4.57} $$

then

$$ M_g^{(n)} = \left(\frac{i}{2\pi}\right)^{\!n} G^{(n)}(0), \tag{4.58} $$

where G^{(n)} denotes the nth derivative of G. To see this in the continuous time case, differentiate under the integral sign:

$$ \frac{d^nG(f)}{df^n} = \frac{d^n}{df^n}\int_{-\infty}^{\infty} g(t)e^{-i2\pi ft}\,dt = \int_{-\infty}^{\infty} (-2\pi it)^n g(t)e^{-i2\pi ft}\,dt = (-2\pi i)^n\int_{-\infty}^{\infty} t^n g(t)e^{-i2\pi ft}\,dt. $$

Setting f = 0 then yields the theorem. Not setting f = 0, however, leaves one with another result of interest, which we state formally.
Theorem 4.16 Given a continuous time infinite duration signal g(t) with spectrum G(f),

$$ \{t^n g(t);\ t \in \mathcal{R}\} \supset \left\{\left(\frac{i}{2\pi}\right)^{\!n} G^{(n)}(f);\ f \in \mathcal{R}\right\}. \tag{4.59} $$
The n = 2 case of (4.58),

$$ M_g^{(2)} = -\frac{1}{4\pi^2}\,G''(0), $$

is called the second moment property. This result has an interesting implication. Suppose that the signal is nonnegative and hence can be thought
of as a density (say of mass or probability) so that the moment of inertia
can be thought of as a measure of the spread of the signal. In other words,
if the second moment is small the signal is clumped around the origin. If
it is large, there is significant "mass" away from the origin. The above
second moment property implies that a low moment of inertia corresponds
to a spectrum with a low negative second derivative which means a small
curvature or relatively flat behavior around the origin. Correspondingly,
a large moment of inertia means that the spectrum has a large curvature
at the origin and hence is very "peaky". Thus a signal with a peak at the
origin produces a spectrum that is very flat at the origin and a signal that
is very flat produces a spectrum that is very steep. This apparent tradeoff
between steepness in one domain and flatness in another will be explored
in more depth later.
As an example that provides a simple evaluation of an important integral, consider the continuous time signal g(t) = e^{−πt²}, which has spectrum G(f) = e^{−πf²}. We have that

$$ G'(f) = -2\pi f\,e^{-\pi f^2} $$

and hence

$$ \int_{-\infty}^{\infty} e^{-\pi t^2}\,dt = G(0) = 1, $$

$$ \int_{-\infty}^{\infty} t\,e^{-\pi t^2}\,dt = \frac{i}{2\pi}\,G'(0) = 0, $$

$$ \int_{-\infty}^{\infty} t^2 e^{-\pi t^2}\,dt = -\frac{1}{4\pi^2}\,G''(0) = \frac{1}{2\pi}. $$
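These three Gaussian moments are easy to confirm by direct numerical integration. A sketch (not from the text; numpy assumed; the window [−10, 10] is wide enough that the truncated tails are negligible):

```python
import numpy as np

h = 1e-3
t = np.arange(-10, 10, h) + h / 2     # symmetric midpoint grid
g = np.exp(-np.pi * t ** 2)

m0 = np.sum(g) * h                    # ~ 1          = G(0)
m1 = np.sum(t * g) * h                # ~ 0          = (i/2pi) G'(0)
m2 = np.sum(t ** 2 * g) * h           # ~ 1/(2 pi)   = -G''(0)/(4 pi^2)
print(m0, m1, m2)
```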
As a second example, and an object lesson in caution when dealing with moments, consider the continuous time signal g(t) = 2/(1 + (2πt)²) and its transform G(f) = e^{−|f|}. The 0th moment is

$$ \int_{-\infty}^{\infty} g(t)\,dt = G(0) = 1. $$

If one tries to find the first moment, however, one cannot use the moment theorem because G′ is not continuous at 0! In particular, G′(0⁺) = −1 and G′(0⁻) = +1. The problem here is that the integrand in

$$ \int_{-\infty}^{\infty} t\,g(t)\,dt = \int_{-\infty}^{\infty} \frac{2t}{1 + (2\pi t)^2}\,dt $$

falls as 1/t for large t, which is not integrable. In other words, the first moment blows up. The integral does exist in a Cauchy sense (it is 0). In fact, this signal corresponds to the so-called Cauchy distribution in probability theory. Note that it violates the sufficient condition for the moment theorem to hold, i.e., |tg(t)| is not integrable.
As a final example pointing out a more serious peril, consider the signal sinc(t). Its spectrum is ⊓(f), which is infinitely differentiable at the origin, and the derivatives there are all 0. Thus one would suspect that the moments are all 0 (except for the area). This is easily seen to not be the case for the second moment, however, by direct integration. Integration by parts shows that in fact

$$ \int_{-T}^{T} t^2\,\mathrm{sinc}(t)\,dt = \frac{2}{\pi^3}\left(\sin(\pi T) - \pi T\cos(\pi T)\right) $$

does not converge as T → ∞, and hence the second moment does not exist.
The problem is that the conditions for validity of the theorem are violated
since the second absolute moment does not exist. Thus existence of the
derivatives is not sufficient to ensure that the formula makes sense. In
order to apply the formula, one needs to at least argue or demonstrate by
other means that the desired moments exist.
* Normalized Moments
Normalizing moments allows us to bring out more clearly some of their basic properties. If the signal is nonnegative, the normalized signal can be treated like a probability density, and the normalized moments are the moments of that density. The normalized first moment is called the centroid or mean of the signal. In the continuous time case this is given by

$$ \langle t\rangle_g = \frac{\int_{-\infty}^{\infty} t\,g(t)\,dt}{\int_{-\infty}^{\infty} g(t)\,dt} = \frac{i}{2\pi}\,\frac{G'(0)}{G(0)}, $$

where the moment theorem has been used. The normalized second moment is called the mean squared abscissa and it is given in the continuous time case by

$$ \langle t^2\rangle_g = \frac{\int_{-\infty}^{\infty} t^2 g(t)\,dt}{\int_{-\infty}^{\infty} g(t)\,dt} = -\frac{1}{4\pi^2}\,\frac{G''(0)}{G(0)}. $$

The variance is defined as σ_g² = ⟨(t − ⟨t⟩_g)²⟩_g.
The variance or its square root, the standard deviation, is often used as
a measure of the spread or width of a signal. If the signal is unimodal,
then the "hump" in the signal will be wide (narrow) if the variance is
large (small). This interpretation must be made with care, however, as the
variance may not be a good measure of the physical width of a signal. It
can be negative, for example, if the signal is not required to be nonnegative.
When computing the variance, it is usually easier to use the fact that

$$ \sigma_g^2 = \langle t^2\rangle_g - \langle t\rangle_g^2. \tag{4.61} $$
A real and even signal has a real and even spectrum. Since an even function has zero derivative at the origin (if the derivative is well-defined), this means that ⟨t⟩_g = 0 for any real and even signal. If the centroid is 0, then the variance and the mean squared abscissa are equal.
4.13 Bandwidth and Pulse Width

In this section we introduce definitions of the width of a signal along with some properties. We will introduce the definitions for signals, but they have obvious counterparts for spectra. The entire section concentrates on the case of continuous time infinite duration signals so that indices in both time and frequency are continuous. The notion of width for discrete time or discrete frequency is of much less interest.
Equivalent Width

The simplest notion of the width of a signal is its equivalent width, defined as the width of a rectangle signal with the same area and the same maximum height as the given signal. The area of the rectangle ⊓(t/T) is T. Given a signal g with maximum height g_max and area ∫_{−∞}^{∞} g(t) dt = G(0) (using the moment property), we define the equivalent width W_g so that the rectangular pulse g_max⊓(t/W_g) has the same area as g; that is, g_max W_g = G(0). Thus

$$ W_g = \frac{G(0)}{g_{\max}}. $$

In the special but important case where g(t) attains its maximum at the origin, this becomes

$$ W_g = \frac{G(0)}{g(0)}. $$
An obvious drawback to the above definition arises when a signal has zero area and hence is assigned zero width. For example, the signal ⊓(t − 1/2) − ⊓(t + 1/2) is assigned a zero width when its actual width should be 2. Another shortcoming is that the definition makes no sense for an idealized pulse like the impulse or Dirac delta function.

The equivalent width is usually easy to find. The equivalent width of ⊓(t) is 1, as are the equivalent widths of ∧(t) and sinc(t). These signals have very different physical widths, but their equal areas result in equal equivalent widths.
When used in the time domain, the equivalent width is often called the
equivalent pulse width. We shall also use the term equivalent time width.
The same idea can be used in the frequency domain to define the equivalent bandwidth:

$$ W_G = \frac{\int_{-\infty}^{\infty} G(f)\,df}{G_{\max}} = \frac{g(0)}{G_{\max}}. $$

In the common special case where G(f) attains its maximum value at the origin, this becomes

$$ W_G = \frac{g(0)}{G(0)}. $$
Note that even if G(f) is complex, its area will be real if g(t) is real. Note also that if the signal and spectrum achieve their maxima at the origin, then

$$ W_g W_G = \frac{G(0)}{g(0)}\cdot\frac{g(0)}{G(0)} = 1, $$

and hence the width in one domain is indeed inversely proportional to the width in the other domain.
Now consider the equivalent bandwidth defined with the magnitude spectrum, B_eq = ∫_{−∞}^{∞}|G(f)| df / G_max. For any t,

$$ |g(t)| = \left|\int_{-\infty}^{\infty} G(f)e^{i2\pi ft}\,df\right| \leq \int_{-\infty}^{\infty}\left|G(f)e^{i2\pi ft}\right|\,df = \int_{-\infty}^{\infty}|G(f)|\,df = B_{eq}G_{\max}; $$

then choosing t so that g(t) = g_max we have that

$$ B_{eq} \geq \frac{g_{\max}}{G_{\max}} = \frac{1}{W_g}. $$
4.14 Symmetry Properties

Given a signal g = {g(t); t ∈ ℛ}, define its even part g_e and odd part g_o by

$$ g_e(t) = \frac{g(t) + g(-t)}{2},\qquad g_o(t) = \frac{g(t) - g(-t)}{2}. $$

By construction the two functions are even and odd and their sum is g(t).
The representation is unique, since if it were not, there would be another
even function e(t) and odd function o(t) with g(t) = e(t) + o(t). But this
would mean that
e(t) + o(t) = ge(t) + go(t)
and hence
e(t) - ge(t) = go(t) - o(t).
Since the left-hand side is even and the right-hand side is odd, this is
only possible if both sides are everywhere 0; that is, if e(t) = ge(t) and
o(t) = go(t).
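The decomposition is mechanical to compute on a symmetric grid of time points. A sketch (not from the text; numpy assumed; reversing the array plays the role of t → −t):

```python
import numpy as np

t = np.linspace(-5, 5, 2001)       # symmetric grid, so g[::-1] represents g(-t)
g = np.exp(-t) * (t > 0)           # a real, one-sided test signal

ge = 0.5 * (g + g[::-1])           # even part (g(t) + g(-t)) / 2
go = 0.5 * (g - g[::-1])           # odd part  (g(t) - g(-t)) / 2

print(np.allclose(ge + go, g))     # the parts recover g
print(np.allclose(ge, ge[::-1]))   # ge is even
print(np.allclose(go, -go[::-1]))  # go is odd
```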
Remarks
1. The choices of ge(t) and go(t) depend on the time origin, e.g., cost is
even while cos( t - 7r /2) is odd.
2. ∫_{−∞}^{∞} g_o(t) dt = 0, at least in the Cauchy principal value sense. It is true in the general improper integral sense if g_o(t) is absolutely integrable. (This problem is pointed out by the function

$$ \mathrm{sgn}(t) = \begin{cases} 1 & t > 0\\ 0 & t = 0\\ -1 & t < 0 \end{cases} \tag{4.64} $$

which has a 0 integral in the Cauchy principal value sense but which is not integrable in the usual improper Riemann integral sense. The function g(t) = 1/t is similarly unpleasant.)
3. If e₁(t) and e₂(t) are even functions and o₁(t) and o₂(t) are odd functions, then e₁(t) ± e₂(t) is even, o₁(t) ± o₂(t) is odd, e₁(t)e₂(t) is even, o₁(t)o₂(t) is even, and e₁(t)o₂(t) is odd. The proof is left as an exercise.
4. All of the ideas and results for even and odd signals can be applied
to infinite duration two-sided discrete time signals.
We can now consider the Fourier transforms of even and odd functions. Again recall that g(t) = g_e(t) + g_o(t), where the even and odd parts in general can be complex. Then

$$ G(f) = \int_{-\infty}^{\infty} g(t)e^{-i2\pi ft}\,dt = \int_{-\infty}^{\infty}\left(g_e(t) + g_o(t)\right)\left(\cos(2\pi ft) - i\sin(2\pi ft)\right)dt. $$

Since the second and third terms in the expanded expression are the integrals of odd functions, they are zero and hence

$$ G(f) = \int_{-\infty}^{\infty} g_e(t)\cos(2\pi ft)\,dt - i\int_{-\infty}^{\infty} g_o(t)\sin(2\pi ft)\,dt = G_e(f) + G_o(f), \tag{4.65} $$

where G_e(f) is the cosine transform of the even part of g(t) and G_o(f) is −i times the sine transform of the odd part. (Recall that the cosine and sine transforms may have normalization constants for convenience.) Note that if g(t) is an even (odd) function of t, then G(f) is an even (odd) function of f.
As an interesting special case, suppose that g(t) is a real-valued signal and hence that g_e(t) and g_o(t) are also both real. Then the real and imaginary parts of the spectrum are immediately identifiable as

$$ \Re(G(f)) = \int_{-\infty}^{\infty} g_e(t)\cos(2\pi ft)\,dt \tag{4.66} $$

and

$$ \Im(G(f)) = -\int_{-\infty}^{\infty} g_o(t)\sin(2\pi ft)\,dt. $$

Observe that the real part of G(f) is even in f and the imaginary part of G(f) is odd in f. Observe also that if g(t) is real and even (odd) in t, then G(f) is real (imaginary) and even (odd) in f.

If g(t) is real valued we further have that

$$ G(-f) = G^*(f); \tag{4.67} $$

that is, the spectrum is Hermitian. If instead g(t) is purely imaginary and hence g_e(t) and g_o(t) are imaginary, then ℜ(G(f)) is odd in f, ℑ(G(f)) is even in f, and

$$ G(-f) = -G^*(f); \tag{4.68} $$

that is, the spectrum is anti-Hermitian.
We can summarize the symmetry properties for a general complex signal g(t), with the even and odd parts decomposed into real and imaginary components (g_e = e_R + i e_I, g_o = o_R + i o_I), as follows:

g(t) ↔ G(f)
g_e(t) ↔ G_e(f)
g_o(t) ↔ G_o(f)
e_R(t) ↔ E_R(f)    (4.71)
e_I(t) ↔ E_I(f)
o_R(t) ↔ iO_I(f)
o_I(t) ↔ −iO_R(f)
4.15 Problems
4.15 Problems
4.1. What is the DFT of the signal
n = 0, 1,2,···
otherwise
Find the Fourier transform G of g.
4.4. Prove that the Fourier transform of the infinite duration continuous time signal {g(at − b); t ∈ ℛ} is (1/|a|)G(f/a)e^{−i2πfb/a}, where G is the Fourier transform of g.
4.5. Find the Fourier transform of the following continuous time infinite duration signal: g(t) = e^{−|t−3|}; t ∈ ℛ. Repeat for the discrete time case (now t ∈ 𝒵).

4.6. Suppose that g(t) and G(f) are infinite duration continuous time Fourier transform pairs. What is the transform of cos²(2πf₀t)g(t)?

4.7. What is the Fourier transform of {sinc(t)cos(2πt); t ∈ ℛ}?
4.8. State and prove the following properties for the two-dimensional finite
duration discrete time Fourier transform (the two-dimensional DFT):
Linearity, the shift theorem, and the modulation theorem.
4.9. Given a discrete time, infinite duration signal g_n = r^n for n ≥ 0 and g_n = 0 for n < 0 with |r| < 1, suppose that we form a new signal h_n which is equal to g_n whenever n is a multiple of 10 and is 0 otherwise: h_n = g_n for n = …, −20, −10, 0, 10, 20, …. What is the Fourier transform H(f) of h_n?
4.13. Evaluate the integral

$$ \int_{-\infty}^{\infty} \mathrm{sinc}\!\left(2B\!\left(t - \frac{n}{2B}\right)\right)\mathrm{sinc}\!\left(2B\!\left(t - \frac{m}{2B}\right)\right)dt $$

for all integers n, m. What does this say about the signals sinc(2B(t − n/2B)) for integer n?
4.14. This problem considers a simplified model of "oversampling" techniques used in CD player audio reconstruction.

A continuous time infinite duration signal g = {g(t); t ∈ ℛ} is bandlimited to ±22 kHz. It is sampled at f₀ = 44 kHz to form a discrete time signal u = {u_n; n ∈ 𝒵}, where u_n = g(n/f₀).

A new discrete time signal h = {h_n; n ∈ 𝒵} is formed by repeating each value of u four times; that is, h_n = u_{⌊n/4⌋}. Note that this step can be expressed in terms of the upsampling operation applied to a signal r = {r_n; n ∈ 𝒵}. Finally, a continuous time signal is formed from h as

$$ h(t) = \sum_{n=-\infty}^{\infty} h_n\,\mathrm{sinc}(176000\,t - n). $$
4.16. Develop an analog of the discrete time sampling theorem for the DFT,
that is, for finite duration discrete time signals.
4.17. The DFT of the sequence {g₀, g₁, …, g_{N−1}} is {G₀, G₁, …, G_{N−1}}. What is the DFT of the sequence {g₀, 0, g₁, 0, …, g_{N−1}, 0}?
4.18. If g = {g_n; n = 0, 1, …, N − 1} has Fourier transform

$$ G(f) = \sum_{n=0}^{N-1} g_n e^{-i2\pi fn}, $$
where u_{−1}(k) is the unit step function defined as 1 for k ≥ 0 and 0 otherwise.

(b) Form the truncated signal ỹ = {ỹ_k; k ∈ 𝒵₁₆} defined by ỹ_n = y_n for n ∈ 𝒵₁₆. Find the Fourier transform Ỹ of ỹ for the case where x is defined by

$$ x_k = \sin\!\left(2\pi\frac{k}{16}\right);\quad k \in \mathcal{Z}. $$
where the shift inside G is cyclic on the frequency domain, e.g., using
[0,1) as the frequency domain,
g(t) = sinc2(~).
(a) What is G(f)? (Give an explicit expression.) Make a sketch
labeling points of interest in frequency and amplitude.
(b) The signal g is now sampled at the Nyquist rate and a discrete
time sequence h n is formed with the samples: h n = g(nTs).
What is the transform of the sampled sequence? (Give an ex-
plicit expression for H(f) for the given g.) Make a labeled sketch.
(c) h n is then upsampled by 5 to form
4.22. Consider the 9 point sequence g = {1, 2, 0, 1, 2, 0, 1, 2, 1}. Define the Fourier transform (DFT) of g to be G = {G(k/9); k = 0, 1, …, 8}. Let g̃ be the periodic extension of g. Define two new sequences: h_n = r^n g̃_n u_{−1}(n), |r| < 1; and v_n = h_{3n+1}.
4.23. Sketch and find the Fourier transform of the infinite duration discrete time signal g_n = 2⊓₄(n) − ⊓₂(n). What is the inverse Fourier transform of the resulting spectrum?
(b) Derive the Fourier transform of the upsampled signal g_{1/3} defined by

$$ g_{1/3}(n) = \begin{cases} g(n/3) & \text{if } n \text{ is an integer multiple of } 3\\ 0 & \text{otherwise.}\end{cases} $$

(c) Find the Fourier transform of the signal w defined by w_n = g_{1/3}(n) − g(n). Compare this signal and its Fourier transform with h and its Fourier transform.
4.27. An infinite duration continuous time signal g = {g(t); t ∈ ℛ} is bandlimited to (−W, W), i.e., its Fourier transform G satisfies G(f) = 0 for |f| ≥ W. We showed that we can expand G in a Fourier series on [−W, W) in this case. Use this expansion to find an expression for

$$ \int_{-\infty}^{\infty} |G(f)|^2\,df. $$
(a) Find g.
(b) Write a sampling expansion for g using a sampling period of T
and state for which T the formula is valid.
For the remainder of this problem assume that T meets
this condition.
(c) Find the DTFT Γ for the sampled sequence γ = {γ_n = g(nT); n ∈ 𝒵}.
(d) Evaluate the sum

$$ \sum_{n=-\infty}^{\infty} g(nT). $$

(e) Show that

$$ \sum_{n=-\infty}^{\infty} g^2(nT) = \frac{1}{T}\int_{-\infty}^{\infty} g^2(t)\,dt. $$
should find that h is trivially related to 9 in one case and has a simple
relation in the other. Note that neither of these p are really physical
pulses, but both can be approximated by physical pulses.
4.31. What is the Fourier transform of g = {te^{−πt²}; t ∈ ℛ}?
4.32. Define the signal g = {g(t); t ∈ ℛ} by

$$ g(t) = \begin{cases} 1 + e^{-|t|} & |t| < \frac{1}{2}\\[0.5ex] e^{-|t|} & |t| \geq \frac{1}{2}. \end{cases} $$

(a) Find the Fourier transform G of g. Write a Fourier integral formula for g in terms of G. Does the formula hold for all t?

(b) What signal h has Fourier transform {H(f); f ∈ ℛ} given by H(f) = 2G(f)cos(8πf)? Provide a labeled sketch of h.

(c) Define the truncated finite duration signal g̃ = {g̃(t); t ∈ [−1/2, 1/2)}, where g̃(t) = g(t) for t ∈ [−1/2, 1/2). Find the Fourier transform G̃ of g̃ and write a Fourier series representation for g̃. Does the Fourier series give g̃ for all t ∈ [−1/2, 1/2)?
4.33. (a) Define the discrete time, finite duration signal g = {g₀, g₁, g₂, g₃, g₄, g₅} by

g = {+1, −1, +1, −1, +1, −1}

and define the signal h by

h = {+1, −1, +1, +1, −1, +1}.

Find the DFTs G and H of g and h, respectively. Compare and contrast G and H. (Remark on any similar or distinct properties.)

(b) Define the continuous time finite duration pulse p = {p(t); t ∈ [0, 6)} by

$$ p(t) = \begin{cases} 1 & 0 \leq t < 1\\ 0 & \text{otherwise.}\end{cases} $$

Find the Fourier transform P of p.

(c) Define the continuous time signal

$$ h(t) = \sum_{n=0}^{5} h_n\,p(t - n);\quad t \in [0, 6). $$
in terms of G.
(b) Suppose now that
where r > 0 is a real parameter. What is g? (Your final answer
should be a closed form, not an infinite sum.) What did you
have to assume about r to get this answer?
(c) Given g as in part (b) of this problem, evaluate

$$ \int_0^1 g(t)\,dt \qquad\text{and}\qquad \sum_{k=-\infty}^{\infty} G(k). $$
4.35. State and prove the moment theorem for finite duration discrete time
signals.
4.36. State and prove the moment theorem for finite duration continuous
time signals.
4.37. For the function g(t) = ∧(t)cos(πt); t ∈ ℛ, find

$$ \int_{-\infty}^{\infty} g^2(t)\,dt \qquad\text{and}\qquad \int_{-\infty}^{\infty} g(t)g(-t)\,dt. $$
4.40. Find the odd and even parts of the following signals (𝒯 = ℛ):

(a) e^{it}
(b) e^{−it}H(t) (where H(t) is the Heaviside step function)
(c) |t| sin(t − π/4)
(d) e^{iπ sin(t)}
4.41. Find the even and odd parts of the continuous time infinite duration
signals
4.15. PROBLEMS 213
4.43. Is it true that the magnitude spectrum IG(f)1 of a real signal must
be even?
4.44. Match the signals in the first list with their corresponding DFTs in the second. The DFTs are rounded to one decimal place.

A (1, 2, 3, 4, 3, 4, 3, 2)
B (0, 2, 3, 4, 0, -4, -3, -2)
C (i, 3i, 2i, 4i, 3i, 4i, 2i, 3i)
D (1, 2, 2, 3, 3, 4, 4, 3)
E (0, 3, 2, 5i, 0, -5i, -2, -3)

1 (22, -3.4+3.4i, -2, -.6-.6i, -2, -.6+.6i, -2, -3.4-3.4i)
2 (22i, -3.4i, 0, -.6i, -.6i, -.6i, 0, -3.4i)
3 (22, -4.8, -2, .8, -2, .8, -2, -4.8)
4 (0, 7.1-8.2i, -10-6i, 7.1-.2i, 0, -7.1+.2i, 10+6i, -7.1+8.2i)
5 (0, -14.5i, 4i, -2.5i, 0, 2.5i, -4i, 14.5i)
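A matching like this can also be checked by machine. The sketch below (Python with numpy — an anachronistic assumption, not part of the text) computes the DFT of signal A in numpy's convention, which agrees with the book's G_k = \sum_n g_n e^{-i2\pi kn/N}, and verifies the conjugate symmetry that a real signal forces on its DFT:

```python
import numpy as np

# DFT via numpy's convention G_k = sum_n g_n exp(-i 2 pi k n / N)
A = np.array([1, 2, 3, 4, 3, 4, 3, 2], dtype=complex)
GA = np.fft.fft(A)
rounded = np.round(GA, 1)        # rounded to one decimal place, as in the lists

# A is real, so its DFT must be conjugate symmetric: G_{N-k} = conj(G_k).
# This alone rules out candidates without that symmetry.
sym = np.allclose(GA[1:], np.conj(GA[1:][::-1]))
```

Signal A sums to 22 and has zero-phase (real) DFT values, which singles out one row of the candidate list immediately.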
4.45. Match the signals in the first list with their corresponding DFTs in the second. The DFTs are rounded to one decimal place.

A (-1, -1, -3, 4, 1, 4, -3, -1)
B (0, 1, -3, 4, 0, -4, 3, -1)
C (j, -j, -4, 2, j, -2, 4, -j)
D (6, -4, j, -2, 6, -2, -j, -4)
E (0, j, 4j, -5j, 0, -5j, 4j, j)
F (-2, 3, j, -2, 1, -2, -j, 3)
G (2+j, 0, 0, 0, -2-j, 0, 0, 0)

1 (0, 8.5j, -8j, -8.5j, 16j, -8.5j, -8j, 8.5j)
2 (0, -1.1j, 6j, -13.1j, 0, 13.1j, -6j, 1.1j)
3 (0, -9.1, 6, 5.1, -12, 5.1, 6, -9.1)
4 (0, 4+2j, 0, 4+2j, 0, 4+2j, 0, 4+2j)
5 (0, 3.8j, 6j, -9.4j, 4j, 12.2j, -2j, -6.6j)
6 (1, 6.1, -1, -12.1, -3, -8.1, -1, 2.1)
7 (0, -0.8, 12, 0.8, 24, 4.8, 12, -4.8)
4.46. Table 4.1 has two lists of functions. For each of the functions on the
left, show which functions are Fourier transform pairs by means of an
arrow drawn between that function and a single function on the right
(as illustrated in the top case).
4.47. What can you say about the Fourier transform of a signal that is
(a) real and even?
(b) real and odd?
(c) imaginary and even?
(d) complex and even?
(e) even?
(f) odd?
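The symmetry questions above can be explored numerically. A minimal numpy sketch (the random test signals are of course not from the text) checks two of the cases — real-even signals have real, even DFTs, and real-odd signals have imaginary, odd DFTs:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
g = rng.normal(size=N)

def mirror(x):
    # index negation modulo N: (mirror(x))_n = x_{(-n) mod N}
    return np.roll(x[::-1], 1)

# real and even signal -> DFT is real and even
g_even = g + mirror(g)
G = np.fft.fft(g_even)
real_even = np.allclose(G.imag, 0) and np.allclose(G.real, mirror(G.real))

# real and odd signal -> DFT is purely imaginary and odd
g_odd = g - mirror(g)
H = np.fft.fft(g_odd)
imag_odd = np.allclose(H.real, 0) and np.allclose(H.imag, -mirror(H.imag))
```

The remaining cases of the problem follow from the same experiment by symmetrizing the imaginary part instead.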
Chapter 5

Generalized Transforms
and Functions
This signal clearly violates the absolute integrability criterion since the integral of its absolute magnitude is infinite. Can a meaningful transform be defined? One approach is to consider a sequence of better behaved signals that converge to g(t). If the corresponding sequence of Fourier transforms also converges to something, then that something is a candidate for the Fourier transform of g(t) (in a generalized sense). One candidate sequence is

g_k(t) = \begin{cases} e^{-t/k} & t > 0 \\ 0 & t = 0 \\ -e^{t/k} & t < 0 \end{cases} ; \quad k = 1, 2, \ldots \qquad (5.2)
The signal sequence is depicted in Figure 5.1 for k = 1,10,100 along with
the step function. These signals are absolutely integrable and piecewise
Figure 5.1: Function Sequence: The solid line is the sgn signal.
smooth and

\lim_{k \to \infty} g_k(t) = g(t). \qquad (5.3)
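The limiting-transform idea is easy to exercise numerically. In the sketch below the exponential form of g_k is an assumption (reconstructed from the figure), and the ordinary transforms G_k are evaluated in closed form rather than by numerical integration:

```python
import numpy as np

# Assumed reconstruction of (5.2): g_k(t) = e^{-t/k} for t > 0, -e^{t/k} for t < 0.
# Its ordinary Fourier transform in closed form:
def G_k(f, k):
    return 1.0 / (1.0 / k + 2j * np.pi * f) - 1.0 / (1.0 / k - 2j * np.pi * f)

f = 0.25
limit = 1.0 / (1j * np.pi * f)      # candidate generalized transform of sgn at f
errs = [abs(G_k(f, k) - limit) for k in (1, 10, 100, 1000)]
# errs shrinks as k grows: G_k(f) converges pointwise for f != 0
```

The pointwise limit of G_k away from f = 0 is what the text adopts as the generalized transform of sgn.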
The transforms are easily found to be

G_k(f) = \frac{1}{1/k + i2\pi f} - \frac{1}{1/k - i2\pi f} = \frac{-i 4\pi f}{k^{-2} + 4\pi^2 f^2} \qquad (5.4)

(do the integral for practice). We could then define the Fourier transform of sgn(t) to be

\lim_{k\to\infty} G_k(f) = \frac{1}{i\pi f}. \qquad (5.5)

5.2 Periodic Signals and Fourier Series

In this section we develop Fourier transforms for an important class of signals for which the basic definitions fail. In particular, we show that the transforms of finite duration signals can be used to define useful transforms for periodic infinite duration signals, signals which are not absolutely summable or integrable and hence signals for which the usual definitions cannot be used. We begin with the simpler case of discrete time signals.
Recall that an infinite duration discrete time signal g = \{g_n;\ n \in Z\} is periodic with period N if

g_{n+N} = g_n \qquad (5.9)

for all integers n. Recall also that if we truncate such a signal to produce a finite duration discrete time signal

\tilde{g} = \{g_n;\ n \in Z_N\}, \qquad (5.10)

then a Fourier series representation for the infinite duration signal is given from (3.28) as

g_n = \sum_{k=0}^{N-1} \frac{\tilde{G}(\frac{k}{N})}{N}\, e^{i2\pi \frac{k}{N} n};\ n \in Z. \qquad (5.11)
Exactly the same idea works for a continuous time infinite duration periodic signal g. Suppose that g(t) has period T, that is, g(t+T) = g(t) for all t \in R. Then the finite duration CTFT of the signal \tilde{g} = \{g(t);\ t \in [0,T)\} is given by

\tilde{G}(f) = \int_0^T g(t)\, e^{-i2\pi f t}\,dt \qquad (5.12)

and the Fourier series representation is

g(t) = \sum_{n=-\infty}^{\infty} \frac{\tilde{G}(\frac{n}{T})}{T}\, e^{i2\pi \frac{n}{T} t}. \qquad (5.13)
Consider the discrete time infinite duration complex exponential e = \{e_n = e^{i2\pi f_0 n};\ n \in Z\}, where f_0 = m/N for some integer m. This signal has period N (and no smaller period if m/N is in lowest terms). For the moment the frequency f_0 will be considered as a fixed parameter. From the linearity of the Fourier transform, knowing the DTFT of any single exponential signal would then imply the DTFT for any discrete time periodic signal because of the representation of (5.11) of any such signal as a finite sum of weighted exponentials! Observe that the discrete time exponential is unchanged if we replace m by m + MN for any integer M. In other words, all that matters is m mod N.

The DFT of one period of the signal is

\tilde{E}(f) = \begin{cases} N & f = f_0 = \frac{m}{N} \\ 0 & f = \frac{k}{N};\ k = 0, \ldots, N-1,\ k \ne m. \end{cases}
The ordinary DTFT of the signal defined by

E(f) = \sum_{n=-\infty}^{\infty} e_n\, e^{-i2\pi f n}

does not exist (because the limiting sum does not converge). Observe also that e_n is clearly neither absolutely summable nor does it have finite energy since

\sum_{n=-\infty}^{\infty} |e_n| = \sum_{n=-\infty}^{\infty} |e_n|^2 = \infty.
Suppose for the moment that the transform did exist and was equal to some function of f which we call for the moment E(f). What properties should E(f) have? Ideally we should be able to use the DTFT inversion formula on E(f) to recover the original signal e_n; that is, we would like E(f) to solve the integral equation

e^{i2\pi f_0 n} = \int_0^1 E(f)\, e^{i2\pi f n}\,df;\ n \in Z. \qquad (5.15)

What E(f) will do this; that is, what E(f) is such that integrating E(f) times an exponential e^{i2\pi f n} will exactly produce the value of the exponential in the integrand with f = f_0 for all n? The answer is that no ordinary function E(f) will accomplish this, but by using the idea of generalized functions we will be able to make rigorous something like (5.15). So for the moment we continue the fantasy of supposing that there is a function E(f) for which (5.15) holds and we look at the implications of the formula. This will eventually lead up to a precise definition.
Before continuing it is convenient to introduce a special notation for E(f), even though it has not yet been precisely defined. Intuitively we would like something which has unit area, i.e.,

\int_0^1 E(f)\,df = 1,

yet is concentrated entirely at the single frequency f_0; we denote it by \delta(f - f_0). Now suppose that X(f) is a frequency domain signal with a Fourier series

X(f) = \sum_{n} X_n\, e^{-i2\pi f n},

where

X_n = \int_0^1 X(f)\, e^{i2\pi f n}\,df.
We have changed the signs in the exponentials because we have reversed the usual roles of time and frequency, i.e., we are writing a Fourier series for a frequency domain signal rather than a time domain signal. We assume for simplicity that X(f) is continuous at f_0 (that is, X(f_0 + \epsilon) and X(f_0 - \epsilon) go to X(f_0) as \epsilon \to 0) so that the Fourier series actually holds with equality at f_0. Then

\int_0^1 \delta(f - f_0)\, X(f)\,df = \int_0^1 \delta(f - f_0) \sum_{n \in Z} X_n\, e^{-i2\pi f n}\,df = \sum_{n \in Z} X_n\, e^{-i2\pi n f_0} = X(f_0), \qquad (5.16)
where we have used the property (5.15) that \delta(f - f_0) sifts complex exponentials at f_0. The point is we have shown that if \delta(f - f_0) sifts complex exponentials, it also sifts all other continuous frequency domain signals (assuming they are well behaved enough to have Fourier series).

We now summarize our hand-waving development to this point: If the signal \{e^{i2\pi f_0 n};\ n \in Z\} has a Fourier transform \{\delta(f - f_0);\ f \in [0,1)\}, then this Fourier transform should satisfy the sifting property, i.e., for any suitably well behaved continuous frequency domain signal G(f)
\int_0^1 G(f)\, \delta(f - f_0)\,df = G(f_0). \qquad (5.17)

If G is not continuous at f_0, the corresponding property is

\int_0^1 G(f)\, \delta(f - f_0)\,df = \frac{G(f_0^+) + G(f_0^-)}{2}, \qquad (5.18)

the midpoint of the upper and lower limits of G(f) at f_0. This is the general form of the sifting property. Again observe that no ordinary function has this property.
Before making the Dirac delta rigorous, we return to the original ques-
tion of finding a generalized DTFT for periodic signals and show how the
sifting property provides a solution.
If we set E(f) = \delta(f - f_0), then (5.15) holds. Thus we could consider the DTFT of an exponential to be a Dirac delta function; that is, we would have the Fourier transform pair for k \in Z_N

\{e^{i2\pi \frac{k}{N} n};\ n \in Z\} \supset \{\delta(f - \tfrac{k}{N});\ f \in [0,1)\}, \qquad (5.19)

where the frequency difference f - k/N is here taken modulo 1, that is, \delta(f - k/N) = \delta((f - k/N) \bmod 1).

If (5.19) were true, then (5.11) and linearity would imply that the DTFT G of a periodic discrete time signal g would be given by

G(f) = \sum_{k=0}^{N-1} \frac{\tilde{G}(\frac{k}{N})}{N}\, \delta(f - \tfrac{k}{N});\ f \in [0,1). \qquad (5.20)
The inversion formula then gives

g_n = \int_0^1 G(f)\, e^{i2\pi f n}\,df = \int_0^1 \sum_{k=0}^{N-1} \frac{\tilde{G}(\frac{k}{N})}{N}\, \delta(f - \tfrac{k}{N})\, e^{i2\pi f n}\,df,
which is the same as the previous equation because of the sifting properties of Dirac delta functions. Thus we can represent g_n either as a sum of
weighted exponentials as in (5.21) (usually referred to as the Fourier series
representation) or by an integral of weighted exponentials (the Fourier in-
tegral representation). Both forms are Fourier transforms, however. Which
form is best? The Fourier series representation is probably the simplest to
use when it suffices, but if one wants to consider both absolutely summable
and periodic infinite duration signals together, then the integral represen-
tation using delta functions allows both signal types to be handled using
the same notation.
To summarize the discussion thus far: given a discrete time periodic function g with period N, the following can be considered to be a Fourier transform pair:

G(f) = \sum_{k=0}^{N-1} \frac{\tilde{G}(\frac{k}{N})}{N}\, \delta(f - \tfrac{k}{N});\ f \in [0,1) \qquad (5.22)

and

g_n = \int_0^1 G(f)\, e^{i2\pi f n}\,df;\ n \in Z, \qquad (5.23)

where

\tilde{G}(\tfrac{k}{N}) = \sum_{n=0}^{N-1} g_n\, e^{-i2\pi \frac{k}{N} n};\ k = 0, \ldots, N-1. \qquad (5.24)
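The pair (5.22)-(5.24) is easy to exercise numerically: \tilde{G}(k/N) is an ordinary DFT, and the sifting property collapses (5.23) to the Fourier series. A numpy sketch (the test signal is arbitrary):

```python
import numpy as np

# One period of a period-N signal; G~(k/N) is its DFT, as in (5.24).
N = 6
g_period = np.array([2.0, -1.0, 0.5, 3.0, 0.0, 1.0])
G_tilde = np.fft.fft(g_period)              # G~(k/N), k = 0, ..., N-1

def g(n):
    # Fourier series (5.23) after the deltas sift: (1/N) sum_k G~(k/N) e^{i2 pi k n/N}
    k = np.arange(N)
    return np.sum(G_tilde * np.exp(2j * np.pi * k * n / N)) / N

vals = np.array([g(n) for n in range(-3, 9)])
expected = np.array([g_period[n % N] for n in range(-3, 9)])
ok = np.allclose(vals, expected)
```

The reconstruction is valid for all integers n, not just one period, which is the point of the periodic-extension view.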
The same scenario works for continuous time periodic functions assuming the supposed properties of the Dirac delta function if we also assume that the Fourier transform is linear in a countable sense; that is, the transform of a countable sum of signals is the sum of the corresponding transforms. In this case the fundamental Fourier transform pair is that of the complex exponential, and the transform of a periodic signal g with period T is

G(f) = \sum_{k=-\infty}^{\infty} \frac{\tilde{G}(\frac{k}{T})}{T}\, \delta(f - \tfrac{k}{T});\ f \in R, \qquad (5.26)

where

\tilde{G}(\tfrac{k}{T}) = \int_0^T g(t)\, e^{-i2\pi \frac{k}{T} t}\,dt;\ k \in Z. \qquad (5.28)

5.3 Generalized Functions

A generalized function (or distribution) D is an operator that assigns a complex number D(g) to each suitably well behaved function g and that satisfies the following conditions:
1. (Linearity) Given two functions g_1 and g_2 and complex constants a_1 and a_2, then

D(a_1 g_1 + a_2 g_2) = a_1 D(g_1) + a_2 D(g_2).

2. (Continuity) If \lim_{n \to \infty} g_n(x) = g(x) for all x, then also

\lim_{n \to \infty} D(g_n) = D(g).
As an example, suppose that h(x) is a fixed ordinary function and define the generalized function D_h by

D_h(g) = \int_{-\infty}^{\infty} g(x)\, h(x)\,dx,

that is, D_h(g) assigns the value to g equal to the integral of the product of g with the fixed function h. The properties of integration then guarantee that D_h meets the required conditions to be a generalized function. Note that this generalized function has nothing strange about it; the above integral is an ordinary integral.
As a second example of a generalized function, consider the operator D_\delta defined as follows:

D_\delta(g) = \begin{cases} g(0) & \text{if } g(t) \text{ is continuous at } t = 0 \\ \frac{g(0^+) + g(0^-)}{2} & \text{otherwise}, \end{cases} \qquad (5.29)

where g(0^+) and g(0^-) are the upper and lower limits of g at 0, respectively. We assume that g(t) is sufficiently well behaved to ensure the existence of these limits, e.g., g(t) is piecewise smooth.
We write this generalized function symbolically as

D_\delta(g) = \int_{-\infty}^{\infty} \delta(x)\, g(x)\,dx.

The shifted Dirac delta D_{\delta_{x_0}} is interpreted in a similar fashion. That is, when we write the integral \int_{-\infty}^{\infty} \delta(x - x_0)\, g(x)\,dx we mean

D_{\delta_{x_0}}(g) = \frac{g(x_0^+) + g(x_0^-)}{2}.
The shifted Dirac delta can be related to the unshifted Dirac delta with a shifted argument. Define the shifted signal g_{x_0}(x) by

g_{x_0}(x) = g(x + x_0).

Then it is easy to see that

D_\delta(g_{x_0}) = D_{\delta_{x_0}}(g). \qquad (5.30)

This relationship becomes more familiar if we use the integral notation for the generalized function:

\int_{-\infty}^{\infty} \delta(x)\, g(x + x_0)\,dx = \int_{-\infty}^{\infty} \delta(x - x_0)\, g(x)\,dx,

a simple change of variables if instead of a generalized function \delta we had an ordinary function h. Moreover, if h_n is a sequence of ordinary functions defining the Dirac delta, then also

\int_{-\infty}^{\infty} h_n(x - x_0)\,dx = 1;\ n = 0, 1, 2, \ldots
and hence the shifted sequence satisfies the properties of a shifted Dirac
delta. In fact, we can also prove (5.30) by defining the generalized function D_{\delta_{x_0}} in terms of the limiting behavior of integrals of the shifted functions h_n(x - x_0) in the above sense. This provides a very useful general approach to proving properties for the Dirac delta: find a sequence h_n describing the generalized function, prove the property for the members of the sequence using ordinary signals and integrals, then take the limit to get the implied property for the generalized function.

It is important to note that we are NOT making the claim that the delta function is itself the limit of the h_n, i.e., that "\delta(t) = \lim_{n\to\infty} h_n(t)." In fact, most sequences of functions h_n(t) satisfying the required conditions will not have a finite limit at t = 0! In spite of this warning, it is sometimes useful to think of an impulse as a limit of functions satisfying these conditions. We next consider a few such sequences.
• h_n(x) = n \sqcap_{\frac{1}{2n}}(x). (See Figure 5.4.) Alternatively, one can use n \sqcap_{\frac{1}{2}}(nx). This is the simplest such sequence, a rectangle with vanishing width and exploding height while preserving constant area. If g(t) is continuous at t = 0, then the mean value theorem of calculus implies that

n \int_{-\frac{1}{2n}}^{\frac{1}{2n}} g(t)\,dt \approx n\, \frac{g(0)}{n} = g(0).
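A quick numerical check of the box-sequence sifting (numpy assumed; the test function g is arbitrary):

```python
import numpy as np

def sift(g, n, pts=200001):
    # integrate n * box_{1/(2n)}(x) * g(x) over the box's support
    x = np.linspace(-0.5 / n, 0.5 / n, pts)
    dx = x[1] - x[0]
    return np.sum(n * g(x)) * dx

g = lambda x: np.cos(3 * x) + x**2       # continuous at 0, g(0) = 1
approx = [sift(g, n) for n in (1, 10, 100)]
errs = [abs(a - 1.0) for a in approx]    # shrinks as the box narrows
```

As the box narrows the integral approaches g(0), exactly the behavior the mean value theorem predicts.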
Figure 5.4: Impulse via Box Functions: n \sqcap_{\frac{1}{2n}}(t)
Note that h_n(x) has 0 as a limit as n \to \infty for all x except the point x = 0. Although this sequence is simple, it has the disadvantage that its derivative does not exist at the edges.

• The sequence h_n(x) = n \Lambda(nx) also has the required properties, where \Lambda is the triangle function of (1.10). (See Figure 5.5.) It is also not differentiable at its edges and at the origin. It is piecewise smooth. Again h_n(x) has a limit (0) everywhere except the point x = 0.
Figure 5.5: Impulse via Triangle Functions: n \Lambda(nt)
• The sequence h_n(x) = n\,\mathrm{sinc}(nx) also satisfies the required conditions. Recall that

\lim_{x \to 0} \frac{\sin x}{x} = 1

and

\lim_{x \to 0} \frac{\sin(ax)}{x} = a.

Thus

\lim_{x \to 0} \frac{\sin(n\pi x)}{\pi x} = n.
These function sequences are useful for proving properties of Dirac delta functions. For example, consider the meaning of \delta(ax), a delta function with a scaled argument. For a > 0 we can argue that this should behave under the integral sign like

\int_{-\infty}^{\infty} \delta(ax)\, g(x)\,dx = \lim_{n\to\infty} \int_{-\infty}^{\infty} n \sqcap_{\frac{1}{2n}}(ax)\, g(x)\,dx = \lim_{n\to\infty} \int_{-\infty}^{\infty} n \sqcap_{\frac{1}{2n}}(y)\, g(\tfrac{y}{a})\, \frac{dy}{a} = \frac{g(0)}{a} = \int_{-\infty}^{\infty} \frac{1}{a}\, \delta(x)\, g(x)\,dx,

suggesting that

\delta(ax) = \frac{\delta(x)}{|a|}

for any a \ne 0. The proof is left as an exercise. (See Problem 6.) Other properties of Dirac \delta functions are also developed in the exercises. One such property that is often useful is

g(t)\,\delta(t - t_0) = g(t_0)\,\delta(t - t_0). \qquad (5.35)

5.4 Fourier Transforms of Generalized Functions

The sifting property makes the Fourier transform of a Dirac delta immediate; for example, in the continuous time case we have that

\int_{-\infty}^{\infty} \delta(t - t_0)\, e^{-i2\pi f t}\,dt = e^{-i2\pi f t_0},

so that for t_0 = 0 the Fourier transform of a Dirac delta at the time origin is a constant. This result is the continuous time analog to the result that the DTFT of a Kronecker delta is a constant.
We have already argued that the Fourier transform of a complex exponential \{e^{i2\pi f_0 t};\ t \in R\} should have the sifting property, that is, behave like the generalized function defining a Dirac delta. Thus we can also define

F(\{e^{i2\pi f_0 t};\ t \in R\}) = \lim_{N\to\infty} \int_{-N/2}^{N/2} e^{i2\pi f_0 t}\, e^{-i2\pi f t}\,dt = \lim_{N\to\infty} N\,\mathrm{sinc}[N(f - f_0)] = \delta(f - f_0),

which "converges" to \delta(f - f_0) in the sense that the function N \mathrm{sinc}[N(f - f_0)] inside an integral has the sifting property defining the Dirac delta. (The formula makes no sense as an ordinary limit because the final limit does not exist.)

Again the special case of f_0 = 0 yields the relation

F(\{1;\ t \in R\}) = \{\delta(f);\ f \in R\};

that is, the Fourier transform of a continuous time infinite duration signal that is a constant (a dc) is a delta function in frequency.

Note that these two above results are duals: Transforming a delta in one domain yields a complex exponential in the other domain.
In the discrete time case the intuition is slightly different and there is not the nice duality because there is no such thing as a Dirac delta in discrete time; the corresponding generalized transform pair is

\{e^{i2\pi f_0 n};\ n \in Z\} \supset \{\delta((f - f_0) \bmod 1);\ f \in [0,1)\}. \qquad (5.40)

We have seen that the intuitive ideas of Dirac deltas can be made precise using generalized functions and that this provides a means of defining DTFTs and CTFTs for infinite duration periodic signals, even though these signals violate the sufficient conditions for the existence of ordinary DTFTs and CTFTs. Generalized functions also provide a means of carefully proving conjectured properties of Dirac delta functions either by limiting arguments or from the properties of distributions. We will occasionally have need to derive such properties. One should always be careful about treating delta functions as ordinary functions.

5.5 * Derivatives of Delta Functions

The previously derived properties of Fourier transforms extend to generalized transforms involving delta functions. For example, the differentiation theorem gives consistent results for some signals which are not strictly speaking differentiable. Consider, for example, the box function \{\sqcap_T(t);\ t \in R\}. This function is not differentiable at -T and +T in the usual sense, but one can define the derivative as a generalized function as

\frac{d}{dt} \sqcap_T(t) = \delta(t + T) - \delta(t - T),

since if one integrates the generalized function on the right, one gets \sqcap_T(t). Now the transform of \sqcap_T(t) is 2T\,\mathrm{sinc}(2Tf) and hence the differentiation theorem implies that the transform of \frac{d}{dt}\sqcap_T(t) should be i2\pi f \cdot 2T\,\mathrm{sinc}(2Tf) = 2i\sin(2\pi T f), which is easily seen from the sifting property and Euler's relations to be the transform of \delta(t + T) - \delta(t - T).
Consider a sequence of functions h_n which "converges" to the Dirac delta in the sense that (5.31) holds. This sequence of functions has a derivative, and a difference quotient computation gives

\lim_{n\to\infty} \left[ n g(\tfrac{1}{2n}) - n g(-\tfrac{1}{2n}) \right] = \lim_{\Delta x \to 0} \frac{g(\frac{\Delta x}{2}) - g(-\frac{\Delta x}{2})}{\Delta x} = \left.\frac{dg(x)}{dx}\right|_{x=0} = g'(0).

Thus

\lim_{n\to\infty} \int h_n'(x)\, g(x)\,dx = -g'(0). \qquad (5.42)
The intuition here is as follows: If the functions h_n "converge" to a Dirac delta, then their derivatives should "converge" to the derivative of a Dirac delta, say \delta'(x) (the doublet), which should behave under the integral sign as above; that is, for any g(x) that is differentiable at the origin,

\int \delta'(x)\, g(x)\,dx = -g'(0). \qquad (5.43)

Shifting the argument as with the ordinary Dirac delta, we obtain the formula

\int \delta'(x - t)\, g(x)\,dx = -g'(t). \qquad (5.44)
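The difference-quotient view of (5.42) can be checked numerically. The sketch below (numpy assumed) uses the triangle sequence h_n(x) = n\Lambda(nx), whose derivative is +n^2 on (-1/n, 0) and -n^2 on (0, 1/n):

```python
import numpy as np

def doublet_action(g, n, pts=200001):
    # integral of h_n'(x) g(x) dx for h_n(x) = n * tri(n x):
    # h_n' = +n^2 on (-1/n, 0) and -n^2 on (0, 1/n)
    xl = np.linspace(-1.0 / n, 0.0, pts)
    xr = np.linspace(0.0, 1.0 / n, pts)
    dxl = xl[1] - xl[0]
    dxr = xr[1] - xr[0]
    return n**2 * (np.sum(g(xl)) * dxl - np.sum(g(xr)) * dxr)

g = lambda x: np.sin(2 * x) + 1.0        # g'(0) = 2
vals = [doublet_action(g, n) for n in (10, 100, 1000)]
# vals approach -g'(0) = -2
```

The sign flip relative to g'(0) is exactly the minus sign in (5.42) and (5.43).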
The Fourier transform of the doublet is easily found to be

F(\{\delta'(t);\ t \in R\}) = \{i2\pi f;\ f \in R\}.

5.6 * The Generalized Function \delta(g(t))

One can demonstrate that for any well-behaved function r(t),

\int_{-\infty}^{\infty} r(t)\, \delta(g(t))\,dt = A \sum_k r(t_k),

where the t_k are the zeros of g(t). We will sketch how this result is proved and find the constant A in the process.
Let h_n(t) = n \sqcap_{\frac{1}{2n}}(t) denote the box sequence of functions converging to the Dirac delta under the integral sign and consider the limit defining the new generalized function:

\int_{-\infty}^{\infty} r(t)\, \delta(g(t))\,dt = \lim_{n\to\infty} \int_{-\infty}^{\infty} r(t)\, h_n(g(t))\,dt.

To see how this behaves in the limit as n \to \infty, we suppose that t is very near a zero t_0 of g and expand g(t) in a Taylor series around t_0 as

g(t) \approx g(t_0) + g'(t_0)(t - t_0) = g'(t_0)(t - t_0). \qquad (5.49)
Carrying out the integration near each zero then yields the result; that is,

\int_{-\infty}^{\infty} r(t)\, \delta(g(t))\,dt = \sum_k \frac{r(t_k)}{|g'(t_k)|}. \qquad (5.50)
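Formula (5.50) can be checked by replacing the delta with a narrow box n\sqcap_{1/(2n)} and integrating numerically; the example g(t) = t^2 - 1 below has zeros at \pm 1 with |g'(\pm 1)| = 2 (numpy assumed):

```python
import numpy as np

n = 1000.0
def box_delta(y):
    # n * box_{1/(2n)}(y): a tall narrow box standing in for delta(y)
    return np.where(np.abs(y) < 0.5 / n, n, 0.0)

t = np.linspace(-3.0, 3.0, 2000001)
dt = t[1] - t[0]
g = t**2 - 1.0                   # zeros at t = +/-1, |g'(+/-1)| = 2
r = np.cos(t)

numeric = np.sum(r * box_delta(g)) * dt
exact = np.cos(1.0) / 2 + np.cos(-1.0) / 2     # sum_k r(t_k) / |g'(t_k)|
```

The 1/|g'(t_k)| weights appear because the narrow box in g is stretched by the local slope of g, exactly as in the Taylor expansion (5.49).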
5.7 Impulse Trains

This section develops a Fourier series formula for the delta function which can be proved both formally, pretending that the delta function is an ordinary signal, and carefully, using the ideas of generalized functions. This representation leads naturally to periodic impulse trains as infinite duration signals, the so-called ideal sampling function (or sampling waveform or impulse train). As some of the results may be counter-intuitive at first glance, it is helpful to first develop them in the simple context of a finite duration signal. The generalizations then come immediately using the periodic extension via Fourier series.
Suppose that we consider \delta = \{\delta(t);\ -T/2 \le t < T/2\} as a finite duration signal which equals the Dirac delta function during T = [-T/2, T/2). Assume for the moment that we can treat this as an ordinary finite duration continuous time signal and derive its Fourier series; that is, we can represent \delta(t) on T by a series of the form

\delta(t) = \sum_{n=-\infty}^{\infty} \frac{e^{i2\pi \frac{n}{T} t}}{T};\ t \in [-\tfrac{T}{2}, \tfrac{T}{2}), \qquad (5.51)

since the Fourier series coefficients are formally

\frac{1}{T} \int_{-T/2}^{T/2} \delta(t)\, e^{-i2\pi \frac{n}{T} t}\,dt = \frac{1}{T}. \qquad (5.52)
This still leaves a problem of rigor, however, as the infinite sum inside the right-hand integral may not exist; in fact it cannot exist if it is to equal a delta function. In order to make sense of the generalized function we wish to call \sum_{n=-\infty}^{\infty} e^{i2\pi \frac{n}{T} t}/T, we resort to the limiting definition of delta functions. Define the sequence of functions h_k(t) by

h_k(t) = \sum_{n=-k}^{k} \frac{e^{i2\pi \frac{n}{T} t}}{T}. \qquad (5.53)

Then

\int_{-T/2}^{T/2} h_k(t)\,dt = \sum_{n=-k}^{k} \frac{1}{T} \int_{-T/2}^{T/2} e^{i2\pi \frac{n}{T} t}\,dt = \sum_{n=-k}^{k} \delta_n = 1,

where \delta_n is a Kronecker \delta and we have used the fact that the integral of a complex exponential over one period is 0 unless the exponent is 0. Thus the h_k satisfy the first condition for defining an impulse as a limit. There is no problem with the interchange of integral and sum because the summation is finite. Next observe that if g(t) is continuous at the origin,
is finite. Next observe that if g(t) is continuous at the origin,
f I.
E
2 k e i2 11' ion
lim /
k--.oo
hk(t)g(t) dt = lim
k--.oo
/ g(t) - T - dt
n=-k
-f -'2
T
E /2 g(t)-T- dt
1:
k e i2 11'+n
= lim
k--.oo
n=-k_1:
2
G(-Tf)
= lim
k
~
k--.oo T
n=-k
G(Tf)
E
00
= ----;y-'
n=-oo
Does the last infinite sum above exist as implied? If the original signal
g(t) is well behaved in the sense that it has a Fourier transform G, then
5.7. IMPULSE TRAINS 241
from the inversion formula for the finite duration continuous time Fourier
transform we can write
G(lf)
L -r'
00
g(O) = (5.54)
n=-oo
that is, the claimed infinite sum exists and equals g(0). But this implies that the sequence h_k also satisfies the second condition required of the limiting definition of a delta function:
\lim_{k\to\infty} \int_{-T/2}^{T/2} h_k(t)\, g(t)\,dt = g(0)

for continuous functions g. This shows that indeed (5.51) is valid in a distribution sense.
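The partial sums h_k are Dirichlet kernels, and their sifting behavior can be checked numerically. For a trigonometric-polynomial test signal the sum \sum_{|n| \le k} \tilde{G}(n/T)/T is exact once k covers all of its frequencies, so the quadrature below should return g(0) for every k shown (numpy assumed):

```python
import numpy as np

T = 2.0

def h_k(t, k):
    # Dirichlet kernel: (1/T) sum_{n=-k}^{k} exp(i 2 pi n t / T), as in (5.53)
    n = np.arange(-k, k + 1)
    return np.sum(np.exp(2j * np.pi * np.outer(t, n) / T), axis=1) / T

t = np.linspace(-T / 2, T / 2, 20001)
dt = t[1] - t[0]
g = 1.0 + np.cos(2 * np.pi * t / T)      # trig polynomial, g(0) = 2
approx = [np.real(np.sum(h_k(t, k) * g) * dt) for k in (1, 5, 50)]
```

For signals that are not trigonometric polynomials the same computation converges to g(0) as k grows, which is the content of (5.54).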
We omit the details of what happens when the function g(t) is not continuous at t = 0, but the proof can be completed by the stout of heart.

Eq. (5.51) provides a representation for a Dirac delta defined as a finite duration signal. As always with a Fourier series, however, we can consider the series to be defined for all time since it is periodic with period T.
Thus the Fourier series provides immediately the periodic extension of the original finite duration signal. In our case this consists of periodic replicas of a delta function. This argument leads to the formula for an impulse train

\mathrm{III}_T(t) = \sum_{n=-\infty}^{\infty} \delta(t - nT) = \sum_{n=-\infty}^{\infty} \frac{e^{i2\pi \frac{n}{T} t}}{T};\ t \in R.

Thus the Fourier series for a Dirac delta considered as a finite duration signal of duration T gives an infinite impulse train when considered as an infinite duration signal. The infinite impulse train is sometimes referred to as the "bed of nails" or "comb" function because of its symbolic appearance. It is also referred to as the ideal sampling function since using the properties of delta functions (especially (5.35)), we have that

g(t)\, \mathrm{III}_T(t) = \sum_{n=-\infty}^{\infty} g(nT)\, \delta(t - nT).
Similarly, a frequency domain impulse train with spacing S has the Fourier series expansion

\mathrm{III}_S(f) = \sum_{n=-\infty}^{\infty} \delta(f - nS) = \sum_{n=-\infty}^{\infty} \frac{1}{S}\, e^{i2\pi \frac{n}{S} f}.

The summation index can be negated without changing the sum to yield

\mathrm{III}_S(f) = \sum_{n=-\infty}^{\infty} \frac{1}{S}\, e^{-i2\pi \frac{n}{S} f}.
Recall that the Fourier transform of a periodic continuous time signal g has the form

G(f) = \sum_{n=-\infty}^{\infty} \frac{\tilde{G}(\frac{n}{T})}{T}\, \delta(f - \tfrac{n}{T}),

where

\tilde{G}(\tfrac{n}{T}) = \int_0^T g(t)\, e^{-i2\pi \frac{n}{T} t}\,dt; \qquad (5.57)

that is, the spectrum of a periodic signal g(t) is the sampled Fourier transform of the finite duration signal \{g(t);\ t \in [0,T)\} consisting of a single period of the periodic waveform, when that transform is defined for all real f.
We have seen a Fourier series representation for the sampling function. An alternative Fourier representation for such a periodic signal is a Fourier transform. If we proceed formally this is found to be

F(\{\mathrm{III}_T(t);\ t \in R\})(f) = \sum_{n=-\infty}^{\infty} \int_{-\infty}^{\infty} \delta(t - nT)\, e^{-i2\pi f t}\,dt = \sum_{n=-\infty}^{\infty} e^{-i2\pi f nT}.
This exponential sum, however, is almost identical to the Fourier series that we have already seen for a sampling function; the only difference is that we have replaced the time variable t/T by the frequency variable fT. Thus we can conclude that

F(\{\mathrm{III}_T(t);\ t \in R\})(f) = \sum_{n=-\infty}^{\infty} e^{-i2\pi f nT} = \frac{1}{T} \sum_{n=-\infty}^{\infty} \delta(f - \tfrac{n}{T}) = \frac{1}{T}\, \mathrm{III}_{\frac{1}{T}}(f); \qquad (5.58)
that is, the Fourier transform of the sampling function in time is another
sampling function in frequency!
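Equation (5.58) has an exact finite-dimensional analogue that is easy to verify: the DFT of a length-N comb with spacing P (P dividing N) is another comb, with spacing N/P and height N/P. A numpy sketch:

```python
import numpy as np

N, P = 24, 4
comb = np.zeros(N)
comb[::P] = 1.0                      # unit samples at n = 0, P, 2P, ...
C = np.fft.fft(comb)

expected = np.zeros(N, dtype=complex)
expected[:: N // P] = N / P          # impulses at k = 0, N/P, 2N/P, ...
ok = np.allclose(C, expected)
```

A narrow comb in time (small P) thus becomes a widely spaced comb in frequency, mirroring the T versus 1/T spacings in (5.58).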
The sampling function provides an example in which we can use the properties of delta functions to demonstrate that generalized Fourier transforms can possess the same basic properties as ordinary Fourier transforms, as one would hope. The stretch theorem provides an example. Consider the periodic functions formed by replicating finite duration signals. Let \tilde{g} = \{\tilde{g}(t);\ t \in R\} be the periodic extension of some signal g, and \tilde{g}_a the periodic extension of g_a = \{g(at);\ t \in [0, T/a)\}, with a > 0. Then \tilde{g} has period T, while \tilde{g}(at) has period T/a. The Fourier transform of \tilde{g}(t) is

\tilde{G}(f) = \sum_{k=-\infty}^{\infty} \frac{1}{T}\, G(\tfrac{k}{T})\, \delta(f - \tfrac{k}{T})
and the Fourier transform of \tilde{g}(at) is, from the finite-duration argument,

\sum_{k=-\infty}^{\infty} \frac{1}{T/a}\, \frac{1}{a}\, G(\tfrac{k}{T})\, \delta(f - \tfrac{ka}{T}) = \sum_{k=-\infty}^{\infty} \frac{1}{T}\, G(\tfrac{k}{T})\, \delta(f - \tfrac{ka}{T}).

Alternatively, the stretch theorem applied to \tilde{G} gives the transform of \tilde{g}(at) as

\frac{1}{a}\, \tilde{G}(\tfrac{f}{a}) = \frac{1}{aT} \sum_{k=-\infty}^{\infty} G(\tfrac{k}{T})\, \delta(\tfrac{f}{a} - \tfrac{k}{T}),

which from the stretch theorem for the Dirac delta function is

\sum_{k=-\infty}^{\infty} \frac{1}{T}\, G(\tfrac{k}{T})\, \delta(f - \tfrac{ka}{T}),

which agrees with the previous result.
Define the unit impulse train \mathrm{III}(t) = \mathrm{III}_1(t) = \sum_{n=-\infty}^{\infty} \delta(t - n); then

\mathrm{III}_T(t) = \sum_{n=-\infty}^{\infty} \delta(t - nT) = T^{-1}\, \mathrm{III}(t/T). \qquad (5.61)

Observe from the earlier results for sampling functions that \mathrm{III} is its own Fourier transform; that is,

F(\{\mathrm{III}(t);\ t \in R\}) = \{\mathrm{III}(f);\ f \in R\}. \qquad (5.62)
Impulse Pairs

We close this section with two related generalized Fourier transforms. First consider the continuous time signal (actually, generalized function) defined by

\mathrm{II}(t) = \delta(t + \tfrac{1}{2}) + \delta(t - \tfrac{1}{2});\ t \in R, \qquad (5.63)

which is called an even impulse pair. Taking the generalized Fourier transform we have that

F(\{\mathrm{II}(t);\ t \in R\})(f) = e^{i\pi f} + e^{-i\pi f} = 2\cos(\pi f). \qquad (5.64)
5.8 Problems

5.1. Use the limiting transform method to find the Fourier transform of sgn(t + 3).

5.2. Suppose that we attempt to find the Fourier transform of sgn(t) using the sequence h_n(t) defined by

h_n(t) = \begin{cases} 1 & 0 < t \le n \\ -1 & -n \le t < 0 \\ 0 & \text{otherwise} \end{cases}

instead of the exponential sequence actually used. Does this approach yield the same result?
5.4. We have seen that a discrete time infinite duration periodic signal g_n with period N can be expressed as a Fourier series

g_n = \sum_{k} b_k\, e^{i2\pi \frac{k}{N} n}.

Suppose more generally that

h_n = \sum_{k=-\infty}^{\infty} b_k\, e^{i2\pi \lambda_k n},

and assume that any limit interchanges are valid. This form of generalized Fourier analysis is useful for a class of signals known as almost periodic signals and has been used extensively in studying quantization noise.
5.5. Find the DTFT of the signal g_n = n \bmod N;\ n \in Z. Find the DTFT of the periodic extension of the signal g given by

g_n = \begin{cases} 1 & n = 0, 1, \ldots, N/2 - 1 \\ 0 & n = N/2, \ldots, N - 1. \end{cases}
5.9. Evaluate

\int_{-\infty}^{+\infty} \delta(-2x + 3)\, \Lambda(\tfrac{x}{3})\,dx.
5.10. Given an ordinary function h that is continuous at t = 0 and a Dirac delta function (a distribution) \delta(x), show that the product h(x)\delta(x) can also be considered to be a distribution D_{h\delta} with the property

D_{h\delta}(g) = h(0)\, g(0)

if g is continuous at the origin. This is symbolically written as

\int_{-\infty}^{\infty} h(x)\, \delta(x)\, g(x)\,dx = h(0)\, g(0).
5.12. Are t\delta'(t) and -\delta(t) equal? (That is, are the corresponding distributions identical?)
What is the Fourier transform of

\sum_{k=0}^{N} a\, e^{-b_k(t - \tau_k)}\, u_{-1}(t),

where b_k > 0 for all k, the \tau_k and a are fixed, and where u_{-1}(t) is the unit step function?
(d) \sum_{n=-\infty}^{\infty} e^{-\pi(t-n)^2}. (Hint: Use the Poisson summation formula.)

(e) |\cos(\pi t)|.
5.16. Consider a continuous time finite duration signal g = \{g(t);\ -1/2 \le t < 1/2\} defined by

g(t) = -1 \text{ if } -\tfrac{1}{2} < t < -\tfrac{1}{4}.

(b) Find a Fourier series for g(t). Does the series equal g(t) exactly for all -1/2 \le t < 1/2?

(c) Suppose that \{h_n;\ n \in Z\} is a discrete time signal with Fourier transform H(f) = g(f), where g is as above. What is h_n?

(d) Let \tilde{g}(t) denote the periodic extension of g(t) having period 1. Sketch \tilde{g}(t) and write a Fourier series for it.

(e) Sketch the shifted signal \{\tilde{g}(t - 1/4);\ t \in R\} and find a Fourier series for this signal.

(f) What is the Fourier transform of \tilde{g}(t)?
f(t) = \sum_{n=0}^{\infty} g_n\, p(t - nT).

Sketch the signal f(t) for a simple p and find its Fourier transform F(f) in terms of the given information. What happens if p is allowed to be a Dirac delta?

(c) For an ordinary signal p (not a Dirac delta), find the energy of f.

y_n = \begin{cases} g_n & \text{if } n \text{ is odd} \\ 0 & \text{if } n \text{ is even} \end{cases}

w_n = y_{2n+1}.

Find the Fourier transform W(f) of w in terms of G(f).
Note: This is a challenging problem since you may not assume that G is bandlimited here. You can avoid the use of generalized functions if you see the trick, but straightforward analysis will lead you to an impulse train in the frequency domain.
5.20. Define a pulse train (pulses, not impulses!)
5.21. List all the signals you know that are their own Fourier transform or generalized Fourier transform. What unusual properties do these signals have? (For example, what do the various properties derived for Fourier transforms of signals imply in this case?)
Chapter 6
Convolution and
Correlation
We have thus far considered Fourier transforms of single signals and of lin-
ear combinations of signals. In this chapter we consider another means of
combining signals: convolution integrals and sums. This leads naturally
to the related topics of correlation and products of signals. As with the
transforms themselves, the details of the various definitions may differ de-
pending on the signal type, but the definitions and the Fourier transform
properties will have the same basic form.
We begin with an introduction to the convolution operation in the con-
text of perhaps its most well known and important application: linear
time-invariant systems.
6.1 Linear Systems and Convolution

Weighted sums and integrals of an input signal provide important examples of linear systems. For example, the systems with output u defined in terms of the input v by

u(t) = \int_{-\infty}^{\infty} v(\tau)\, h_t(\tau)\,d\tau

in the infinite duration continuous time case or the analogous

u_n = \sum_{k=-\infty}^{\infty} v_k\, h_{n,k}
in the discrete time case yield linear systems. In both cases h_t(\tau) is a weighting which depends on the output time t and is summed or integrated over the input times \tau. We shall see that these weighted integrals and sums are sufficiently general to describe all linear systems. A special case will yield the convolution operation that forms the focus of this chapter. First, however, some additional ideas are required.
The \delta-Response
Suppose that we have a system L with input and output signals of the same type, e.g., they are both continuous time signals or both discrete time signals and the domains of definition are the same, say T_i = T_o = T. Suppose that the input signal is a delta function at time \tau; that is, if the system operates on discrete time signals, then the input signal v = \{v_n;\ n \in Z\} is a Kronecker delta delayed by \tau, v_n = \delta_{n-\tau}, and if the system operates on continuous time signals, then the input signal v = \{v(t);\ t \in R\} is a Dirac delta delayed by \tau, v(t) = \delta(t - \tau). In both cases we can call the input signal \delta^{(\tau)} to denote a delta delayed by \tau. The output signal for this special case, \{h(t, \tau);\ t \in T\}, is called the delta response or \delta-response. For continuous time systems it is commonly called the impulse response and for discrete time systems it is often called a unit sample response. The name impulse response is also used for the discrete time case, but we avoid that use here as the word "impulse" or "unit impulse" is more commonly associated with the Dirac delta, a generalized function, than with the Kronecker delta, an ordinary function. While the two types of \delta functions play analogous roles in discrete and continuous times, the Dirac delta or unit impulse is a far more complicated object mathematically than is the Kronecker delta or unit sample.
In discrete time

h_{n,\tau} = L_n(\{\delta_{k-\tau};\ k \in Z\}). \qquad (6.4)
Observe that if the system is time invariant, and if \{w(t) = h(t,0);\ t \in T\} is the response to a \delta at time \tau = 0, then shifting the \delta must yield a response \{w(t - \tau) = h(t, \tau);\ t \in T\}. Rewriting this as h(t, \tau) = w(t - \tau) for all t and \tau emphasizes the fact that the \delta-response of a time invariant system depends on its arguments only through their difference. Alternatively, if a system is time invariant, then for all allowable t, \tau and a

h(t - a, \tau - a) = h(t, \tau). \qquad (6.6)

Both views imply that if a system is time invariant, then there is some function of a single dummy variable, say h(t), such that h(t, \tau) = h(t - \tau).
Superposition

The \delta-response plays a fundamental role in describing linear systems. To see why, consider the case of an infinite duration discrete time signal v as input to a linear system L. Recall that

v_n = \sum_{k=-\infty}^{\infty} v_k\, \delta_{n-k}

and assume that the system satisfies the extended linearity property. Then the output of the system is given by

u_n = L_n(v) = L_n(\{v_k;\ k \in Z\}) = \sum_{l=-\infty}^{\infty} v_l\, L_n(\{\delta_{k-l};\ k \in Z\}) = \sum_{l=-\infty}^{\infty} v_l\, h_{n,l}. \qquad (6.9)
A similar argument holds for the infinite duration continuous time case, where now the integral form of extended linearity is needed. In the continuous time case

v(t) = \int_{-\infty}^{\infty} v(\tau)\, \delta(t - \tau)\,d\tau

and

u(t) = L_t(v) = L_t(\{v(\tau);\ \tau \in R\}) = L_t\!\left( \int_{-\infty}^{\infty} v(\tau)\, \{\delta(r - \tau);\ r \in R\}\,d\tau \right) = \int_{-\infty}^{\infty} v(\tau)\, L_t(\{\delta(r - \tau);\ r \in R\})\,d\tau.
Vn =L VkOn-k.
k=O
un L:.n({vk;kEZN})
N-I
L vlL:.n({Ok-l; k E ZN})
1=0
256 CHAPTER 6. CONVOLUTION AND CORRELATION
N-l
= L vlh n ,/. (6.11)
/=0
Note that here ordinary linearity suffices; that is, we need not assume
extended linearity.
A similar form can be derived for the finite duration continuous time
case.
We have seen that if a system is time invariant, then it must have a
δ-response of the form h(t, τ) = h(t − τ). Conversely, if a system has a
δ-response of this form, then it follows from the superposition integral or
sum that the system is also time invariant. Thus we can determine whether
or not a system is time invariant by examination of its δ-response.
LTI Systems
Provided that the input and output signals to a linear system are of the
same type, the system always satisfies either the superposition integral
formula or the superposition sum formula expressing the output of the
system as a weighted average (sum or integral) of the inputs. We now
consider in more detail the simplifications that result when the system is
also time invariant.
Suppose that a system ℒ is both linear and time invariant, a special
case which we refer to as an LTI system or LTI filter. Note that this is
well-defined for all input and output signal types. Suppose further that
the input and output time domains are the same so that the superposition
integral or summation formula holds. Since in this case h(t, τ) = h(t − τ),
the superposition summation and integral reduce to simpler forms. For
example, in the infinite duration discrete time case we have that

    u_n = Σ_{k=−∞}^{∞} v_k h_{n−k} = Σ_{k=−∞}^{∞} h_k v_{n−k}.    (6.12)

This operation on the signals v and h to form the signal u is called the
convolution sum and is denoted by

    u = v ∗ h.    (6.13)
Similarly, in the infinite duration continuous time case we have that

    u(t) = ∫_{−∞}^{∞} v(τ)h(t − τ) dτ = ∫_{−∞}^{∞} h(τ)v(t − τ) dτ,

the convolution integral.
6.2 Convolution
First suppose that v = {v(t); t ∈ R} and h = {h(t); t ∈ R} are two infinite
duration continuous time signals. We formally define the convolution (or
convolution integral) of these two signals by the signal g = {g(t); t ∈ R}
given by

    g(t) = ∫_{−∞}^{∞} v(ζ)h(t − ζ) dζ; t ∈ R,    (6.17)

the integral of the product of one signal with the time reversed and shifted
version of the other signal. We abbreviate this operation on signals by

    g = v ∗ h.

We also use the asterisk notation as g(t) = v ∗ h(t) when we wish to emphasize the value of the output signal at a specific time. The notation
g(t) = v(t) ∗ h(t) is also common, but beware of the potential confusion of dummy variables: the convolution operation depends on the entire
history of the two signals; that is, one is convolving {v(t); t ∈ T} with
{h(t); t ∈ T}, not just the specific output values v(t) with h(t).
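The convolution integral can be sketched numerically by a Riemann-sum approximation; the grid spacing and the example signals below are our choices, not the book's.

```python
import numpy as np

# A numerical sketch of the convolution integral (6.17): approximate
# g(t) = integral of v(zeta) h(t - zeta) d(zeta) by a Riemann sum.
dt = 0.001
t = np.arange(0.0, 3.0, dt)
v = np.where(t < 1.0, 1.0 - t, 0.0)   # a decaying ramp on [0, 1)
h = np.where(t < 2.0, 1.0, 0.0)       # a box of height 1 on [0, 2)

# np.convolve computes the discrete convolution sum; multiplying by the
# grid spacing turns it into a Riemann sum for the integral.
g = np.convolve(v, h)[:len(t)] * dt

# For 1 <= t < 2 the ramp lies entirely inside the box and g(t) = 1/2.
assert abs(g[1500] - 0.5) < 0.01      # g at t = 1.5
```

The same slice-by-`dt` trick approximates any continuous time convolution on a fine enough grid.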
The corresponding definition for infinite duration discrete time signals is
the convolution sum

    g_n = Σ_{k=−∞}^{∞} v_k h_{n−k}; n ∈ Z.    (6.18)

For finite duration continuous time signals we define the cyclic convolution

    g = {g(t); t ∈ [0, T)} = v ∗ h

by

    g(t) = ∫_0^T v(ζ)h̃(t − ζ) dζ; t ∈ [0, T),

where h̃ denotes the periodic extension of h.
* Signal Algebra
Suppose that we now consider the space of all signals of the form g =
{g(t); t ∈ R}. While we will emphasize the infinite duration continuous
time case in this section, the same results and conclusions hold in all cases
for which we have defined a convolution operation. We have defined two
operations on such signals: addition, denoted by +, and convolution, denoted by ∗. This resembles the constructions of arithmetic, algebra, and
group theory where we have a collection of elements (such as numbers,
polynomials, functions) and a pair of operations. A natural question is
whether or not the operations currently under consideration have useful
algebraic properties such as the commutative law, the distributive law, and
the associative law. The following result answers this question affirmatively.
1. Commutative Law

    g ∗ h = h ∗ g.    (6.21)

2. Distributive Law

    g ∗ (h₁ + h₂) = g ∗ h₁ + g ∗ h₂.    (6.22)

3. Associative Law

    g ∗ (h₁ ∗ h₂) = (g ∗ h₁) ∗ h₂.    (6.23)
Proof: Consider the Commutative Law for infinite duration continuous
time signals: the signal f ∗ h is defined by

    f ∗ h(t) = ∫_{−∞}^{∞} f(ζ)h(t − ζ) dζ.

Changing variables by defining η = t − ζ this becomes

    ∫_{∞}^{−∞} f(t − η)h(η) (−dη) = ∫_{−∞}^{∞} f(t − η)h(η) dη,

which is just h ∗ f, as claimed. The result follows for the other signal types
similarly. The Distributive Law follows from the linearity of integration.
The proof of the Associative Law is left as an exercise.
In order to have an algebra of signals with the convolution and sum
operations, we also need an identity signal; that is, a signal which when
convolved with any other signal yields that other signal. (The signal that
is identically 0 for all time is the additive identity.) This role is filled
by the Kronecker delta function in discrete time and by the Dirac delta
function in continuous time since if we define the signal δ by {δ(t); t ∈ T}
for continuous time or {δ_n; n ∈ T} for discrete time, then δ ∗ g = g. For
example, in the discrete time case

    δ ∗ g_n = Σ_{k=−∞}^{∞} δ_k g_{n−k} = g_n,

so that

    δ ∗ g = g.    (6.25)
A detail not yet treated which is needed for our demonstration that
the space of signals (including generalized functions) is an algebra is the
fact that we can convolve generalized functions with each other; that is, the
convolution of two δ functions is well-defined. In fact, if δ is to play the role
of the convolution identity, we should have that δ ∗ δ = δ. This is immediate
for the Kronecker δ in discrete time. To verify it in the continuous time
case suppose that h_n(t) is a sequence of pulses yielding the Dirac delta in
the sense of the limiting definition of a distribution. Then for a test function g

    ∫_{−∞}^{∞} (δ ∗ δ)(t)g(t) dt = lim_{n→∞} ∫_{−∞}^{∞} dζ h_n(ζ) ∫_{−∞}^{∞} dt h_n(t − ζ)g(t).

The rightmost integral approaches g(ζ) in the limit and hence the overall
integral approaches g(0). Thus the convolution of two Dirac delta functions
is another Dirac delta function.
The final requirement for demonstrating that our signal space indeed
forms an algebra is the demonstration of an inverse for addition and for
convolution. The additive inverse is obvious (the negative of a signal
is its additive inverse, g + (−g) = 0), but the inverse with respect to
convolution is not so obvious. What is needed is a means of finding for
a given suitably well-behaved signal g another signal, say g⁻¹, with the
property that g ∗ g⁻¹ = δ. This is the signal space analog of the ordinary
multiplicative inverse a(1/a) = 1. This property we postpone until we have
proved the convolution theorem in a later section.
6.3. EXAMPLES OF CONVOLUTION 263

−1/2 and hence when t = 0. To perform the convolution and find the signal

    g(t) = ∫_{−∞}^{∞} f(τ)h(t − τ) dτ,

where f(t) = 1 − t for 0 ≤ t < 1 (and 0 otherwise) and h(t) = 1 for
0 ≤ t < 2 (and 0 otherwise), we consider the ranges of t separately.

1. t < 0. With reference to Figure 6.2, the waveforms f(τ) and h(t − τ)
do not overlap and hence g(t) = 0.

2. 0 ≤ t < 1. With reference to Figure 6.3,

    g(t) = ∫_0^t (1 − τ) dτ = (τ − τ²/2)|_0^t = t − t²/2.

3. 1 ≤ t < 2. With reference to Figure 6.4, g(t) is the area of the product
of the waveforms in the region where both are nonzero, which is now

    g(t) = ∫_0^1 (1 − τ) dτ = 1/2.

4. 2 ≤ t < 3. With reference to Figure 6.5,

    g(t) = ∫_{t−2}^1 (1 − τ) dτ = 1/2 − (t − 2) + (t − 2)²/2.

5. 3 ≤ t. With reference to Figure 6.6, the waveforms no longer overlap
and g(t) = 0.

[Figures 6.2 through 6.6 sketch f(τ) and h(t − τ) for the cases t < 0,
0 ≤ t < 1, 1 ≤ t < 2, 2 ≤ t < 3, and 3 ≤ t; Figure 6.7 shows the resulting
f ∗ h(t), which is nonzero on 0 ≤ t < 3.]

The final waveform is depicted in Figure 6.7. Exercise: Prove that the
observe that h_{n−k} is 1 if n − k = 0, 1, …, N and hence if k = n, n − 1, …, n − N.
Thus the sum becomes

    g_n = Σ_{k=n−N}^{n} f_k.

1. n < 0. In this case g_n = 0 since the summand is 0 for the indexes being
summed over.

2. 0 ≤ n < N. In this case

    g_n = Σ_{k=0}^{n} p^k = (1 − p^{n+1})/(1 − p).

Note that the above formula is not valid if p = 1.

3. N ≤ n. In this case

    g_n = Σ_{k=n−N}^{n} p^k
        = p^{n−N} Σ_{k=n−N}^{n} p^{k−(n−N)}
        = p^{n−N} Σ_{j=0}^{N} p^j
        = p^{n−N} (1 − p^{N+1})/(1 − p).    (6.27)
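The piecewise closed form above can be checked against a direct numerical convolution; the truncation length L and the values of p and N below are our choices.

```python
import numpy as np

# Check the closed form (6.27) for the convolution of f_k = p^k (k >= 0)
# with a pulse of N+1 ones, using a long truncation of f.
p, N, L = 0.8, 4, 50
f = p ** np.arange(L)
h = np.ones(N + 1)
g = np.convolve(f, h)

for n in range(L - N):                # indices unaffected by truncating f
    if n < N:
        expect = (1 - p ** (n + 1)) / (1 - p)
    else:
        expect = p ** (n - N) * (1 - p ** (N + 1)) / (1 - p)
    assert np.isclose(g[n], expect)
```

The two formulas agree at the boundary index n = N, as they must.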
case recall that convolving an impulse with the signal simply produces the
original signal:

    ∫_{−∞}^{∞} g(τ)δ(t − τ) dτ = g(t).

Thus

    g ∗ ш_T(t) = ∫_{−∞}^{∞} g(τ) Σ_{n=−∞}^{∞} δ(t − nT − τ) dτ
               = Σ_{n=−∞}^{∞} ∫_{−∞}^{∞} g(τ)δ(t − nT − τ) dτ
               = Σ_{n=−∞}^{∞} g(t − nT).

6.4. THE CONVOLUTION THEOREM 267
Proof: First consider the case of infinite duration continuous time signals.
In this case

    F_f(g ∗ h) = ∫_{−∞}^{∞} (∫_{−∞}^{∞} g(ζ)h(t − ζ) dζ) e^{−i2πft} dt
               = ∫_{−∞}^{∞} g(ζ)e^{−i2πfζ} ∫_{−∞}^{∞} h(t − ζ)e^{−i2πf(t−ζ)} dt dζ
               = G(f)H(f).

The infinite duration discrete time case is the same with sums in place of
integrals. In the finite duration discrete time (DFT) case the same interchange yields

    F_{n/N}(g ∗ h) = Σ_{k=0}^{N−1} g(k)e^{−i2πkn/N} H(n/N) = H(n/N)G(n/N).
The above proofs make an important point: in all cases the proofs look
almost the same; the only differences are minor. We used the functional
notation g(k) throughout instead of using g_k for the discrete time case to
emphasize the similarity. We omit the proof for the case of finite duration
continuous time signals since the modifications required to the above proofs
should be clear.
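The DFT form of the convolution theorem is easy to verify numerically: the DFT of a cyclic convolution equals the pointwise product of the DFTs. The signal lengths and random test signals below are our choices.

```python
import numpy as np

# Convolution theorem, DFT case: DFT(cyclic conv of g and h) = G * H
# pointwise at the frequencies k/N.
rng = np.random.default_rng(0)
N = 8
g = rng.standard_normal(N)
h = rng.standard_normal(N)

# direct cyclic convolution from the definition
cyc = np.array([sum(g[l] * h[(n - l) % N] for l in range(N))
                for n in range(N)])

assert np.allclose(np.fft.fft(cyc), np.fft.fft(g) * np.fft.fft(h))
```

This identity is also the standard fast way to compute cyclic convolutions: transform, multiply, invert.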
We state without proof the dual result to the convolution theorem:
Theorem 6.4 The Dual Convolution Theorem
Given two signals g and h with spectra G and H, then

    F({g(t)h(t); t ∈ T}) = cG ∗ H,    (6.30)

where c = 1 for infinite duration signals, 1/N for the DFT of duration N,
and 1/T for the CTFT of duration T. In words, multiplication in the time
domain corresponds to convolution in the frequency domain.
The extra factor comes in from the Fourier inversion formula. For example,
for the DFT case the Fourier transform of {g_n h_n; n = 0, …, N − 1} at
frequency k/N is

    Σ_{n=0}^{N−1} g_n h_n e^{−i2πnk/N}
        = Σ_{n=0}^{N−1} g_n [N^{−1} Σ_{m=0}^{N−1} H(m/N)e^{i2πnm/N}] e^{−i2πnk/N}
        = N^{−1} Σ_{m=0}^{N−1} H(m/N) Σ_{n=0}^{N−1} g_n e^{−i2πn(k−m)/N}
        = N^{−1} Σ_{m=0}^{N−1} H(m/N) G((k − m mod N)/N)
        = N^{−1} H ∗ G(k/N).
    g_n = { p^n,  n ≥ 0;  0, otherwise };
    f_n = δ_n − pδ_{n−1} = { 1,  n = 0;  −p,  n = 1;  0, otherwise }.

Then the convolution is

    g ∗ f(n) = Σ_{k=−∞}^{∞} g_k f_{n−k}
             = { 1,  n = 0;  p^n − p × p^{n−1} = 0,  n ≥ 1;  0, otherwise }

and hence

    g ∗ f(n) = δ_n.

In this case

    G(f) = 1/(1 − pe^{−i2πf});    F(f) = 1 − pe^{−i2πf}.
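The inverse-filter relation g ∗ f = δ can be confirmed numerically; the value of p and the truncation length are our choices, and the truncation tail is simply discarded.

```python
import numpy as np

# g_n = p^n u_n convolved with f = (1, -p) gives a Kronecker delta,
# so f is the convolution inverse of g (up to the truncation tail).
p, L = 0.9, 100
g = p ** np.arange(L)
f = np.array([1.0, -p])
d = np.convolve(g, f)[:L]         # drop the single truncation sample

assert np.isclose(d[0], 1.0)      # d_0 = 1
assert np.allclose(d[1:], 0.0)    # d_n = p^n - p * p^(n-1) = 0, n >= 1
```

In the frequency domain this is just G(f)F(f) = 1, the pointwise product of the two spectra above.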
Theorem 6.6 Let ℒ be an LTI system with common input and output
signal types. Let T be the time domain of definition and S the frequency
domain of definition. Then for any f₀ ∈ S, the signal

    e = {e^{i2πf₀t}; t ∈ T}
as claimed.
If the system is instead an infinite duration continuous time system,
in which case S is the real line, then v(t) = e^{i2πf₀t} has Fourier transform
V(f) = δ(f − f₀) and the same equality chain as above proves the result.
If the system is finite duration and discrete time, then S = {k/N; k ∈ Z_N}
and v_n = e^{i2πf₀n} (with f₀ = k/N for some k ∈ Z_N) has Fourier transform
δ_{f−f₀} for f ∈ S. As previously (except that now only f ∈ {k/N; k ∈ Z_N}
are possible)

    W(f) = H(f) δ_{f−f₀} = H(f₀) δ_{f−f₀}

which implies that
to each, and then using the fact that if h is real, then H(−f) = H*(f) to
write the resulting sum as ℜ(H(f₀)e^{i2πf₀t}).) Thus if H(f) also is purely
real, then the cosine is in fact an eigenfunction. In general, however, transfer functions are not purely real and cosines are not eigenfunctions.
Another corollary to these results is that in an LTI system, only frequencies appearing at the input can appear at the output. This is not the
case if the system is either nonlinear or time varying. For example, if the
system output w(t) for an input v(t) is given by w(t) = v²(t) (a memoryless
square law device), then if v(t) = cos(2πf₀t) we have that

    w(t) = cos²(2πf₀t) = 1/2 + (1/2)cos(2π(2f₀)t),

which contains a frequency, 2f₀, not present in the input.
of φ(t) can be easily found from the convolution theorem by using the fact
that

    φ(t) = u₋₁ ∗ g(t) = ∫_{−∞}^{∞} u₋₁(t − ζ)g(ζ) dζ.    (6.35)

Since the Fourier transform of the step function is (1/2)δ(f) + (1/(i2πf))(1 − δ_f), the
Fourier transform of φ(t) is given by the convolution theorem as

    Φ(f) = (1/2)G(0)δ(f) + (G(f)/(i2πf))(1 − δ_f).
We have now proved the following result:
6.7. SAMPLING REVISITED 275
The first term can be thought of as half the transform of the DC component of g(t) represented by its area ∫_{−∞}^{∞} g(t) dt. The second term shows
that integration in the time domain corresponds to division by f in the
frequency domain (except where f = 0).
The discrete time analog to the integration theorem is the Fourier transform of the sum (or discrete time integral) of a discrete time signal. Given
{g_n; n ∈ Z}, what is the Fourier transform of

    {Σ_{k=−∞}^{n} g_k; n ∈ Z}?

Using the convolution theorem, the fact that the ш function is its own
transform, and the scaling formula for delta functions yields

    Ĝ(f) = (1/T) Σ_{n=−∞}^{∞} G(f − n/T); f ∈ R.    (6.38)
For example, given an input signal spectrum G(f) having the shape
depicted in Figure 6.8, Ĝ(f) is the sum of an infinite sequence of shifted
copies of G(f), as depicted in Figure 6.9.

[Figure 6.8: the original spectrum G(f), nonzero only for −W ≤ f ≤ W.
Figure 6.9: the sampled spectrum Ĝ(f), with copies of G(f) centered at
the multiples of 1/T.]

If the copies do not overlap, an ideal low pass filter selecting only the
central island recovers

    T Ĝ(f) Π(f/2B) = G(f),    (6.39)

the spectrum of the original signal! This action is depicted in Figure 6.10
where the dashed box is the filter magnitude which selects only the central
island.

[Figure 6.10: Ĝ(f) and, dashed, the low pass filter selecting the central
island. Figure 6.11: Ĝ(f) when the sampling period T is too large, so that
the shifted copies overlap.]
remove by low pass filtering. The figure shows the repeated copies and the
final spectrum is the sum of these copies, indicated by the curve forming the
"roof" of the overlapping islands. The low pass spectrum will be corrupted
by portions of other islands and this will cause the resulting signal to differ
from g. This distortion is called aliasing and some invariably occurs in any
physical system since no physical signal can be perfectly band-limited.
The final comment above merits some elaboration. The basic argument
is that all physical systems are time-limited assuming that the universe has
a finite lifetime. A signal cannot be both time-limited and band-limited
since, if it were, we could write for sufficiently large T and W that

    G(f) = [G ∗ T sinc(T·)](f) = G(f)Π(f/2W).

This yields a contradiction, however, since the convolution with a sinc function expands the bandwidth of G(f), while multiplication by Π(f/2W) in
general limits the extent of the spectrum.
As a final note, the sampling theorem states that a signal can be com-
pletely reconstructed from its samples provided l/T > 2W and that it is
not possible for l/T < 2W because the resulting aliasing by overlapping
spectral islands results in a distorted spectrum at low frequencies. In order
to avoid such aliasing when sampling too slowly, the original signal can be
first passed through a sharp low pass filter, that is, have its spectrum multiplied by Π(f/2W), and then this new signal will be recreated perfectly from
its sampled version. The original low pass filtering introduces distortion,
but it results in a signal that can be sampled without further distortion.
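Aliasing itself is easy to see numerically: sampling a sinusoid below the Nyquist rate produces samples indistinguishable from those of a lower-frequency tone in a shifted spectral island. The particular frequencies below are our choices.

```python
import numpy as np

# Sampling cos(2*pi*f0*t) at rate fs < 2*f0 aliases it onto |fs - f0|.
f0, fs = 7.0, 10.0                 # 7 Hz tone sampled at 10 Hz (Nyquist 14 Hz)
n = np.arange(100)
samples = np.cos(2 * np.pi * f0 * n / fs)
alias = np.cos(2 * np.pi * (fs - f0) * n / fs)   # a 3 Hz tone

# The two sample sequences are identical: cos(2*pi*7n/10) = cos(2*pi*3n/10).
assert np.allclose(samples, alias)
```

No processing of the samples can undo this: the 7 Hz and 3 Hz tones are literally the same sequence once sampled.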
6.8 Correlation
Correlation is an operation on signals that strongly resembles convolution
and which will be seen to have very similar properties. Its applications,
however, are somewhat different. The principal use of correlation functions
is in signal detection and estimation problems and in communications the-
ory where they provide a measure of how similar a signal is to a delay of
itself or to another signal. It also is crucial in defining bandwidth of sig-
nals and filters (as we shall see) and in describing the frequency domain
behavior of the energy of a signal.
Suppose, as earlier, that we have two signals g = {g(t); t ∈ T} and
h = {h(t); t ∈ T}. The cross correlation function r_gh(τ) of g and h is
defined for the various signal types as follows:

    ∫_{−∞}^{∞} g*(t − τ)h(t) dt = ∫_{−∞}^{∞} g*(t)h(t + τ) dt    CTID;

    Σ_{n=−∞}^{∞} g*(n − τ)h(n) = Σ_{n=−∞}^{∞} g*(n)h(n + τ)    DTID;

    ∫_0^T g̃*(t − τ)h(t) dt = ∫_0^T g*(t)h̃(t + τ) dt    CTFD;

    Σ_{n=0}^{N−1} g̃*(n − τ)h(n) = Σ_{n=0}^{N−1} g*(n)h̃(n + τ)    DTFD,

where as usual g̃ denotes the periodic extension and where the finite duration correlation is a cyclic correlation (as was convolution). Analogous to
the asterisk notation for convolution we abbreviate the correlation operation by a star: r_gh = g ⋆ h. The argument of the correlation function (τ
above) is often called the lag.
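The discrete time definition can be evaluated directly; the example signals below are our choices, with h a one-sample delay of g so that the correlation peaks at lag 1.

```python
import numpy as np

# Discrete cross correlation r_gh(k) = sum_n conj(g_n) h_{n+k},
# computed directly from the definition for finitely supported signals.
g = np.array([1.0, 2.0, 1.0, 0.0])
h = np.array([0.0, 1.0, 2.0, 1.0])   # g delayed by one sample

def r_gh(g, h, k):
    total = 0.0
    for n in range(len(g)):
        if 0 <= n + k < len(h):       # both signals are 0 outside support
            total += np.conj(g[n]) * h[n + k]
    return total

vals = [r_gh(g, h, k) for k in range(-3, 4)]
# The peak sits at the lag by which h is delayed relative to g.
assert max(vals) == r_gh(g, h, 1)
assert np.isclose(r_gh(g, h, 1), 6.0)   # equals the energy of g
```

The peak value equals the energy of g because h is an exact shifted copy, a fact made precise by the Cauchy-Schwarz bound below.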
For example, in the infinite duration continuous time case

    r_hg(τ) = ∫_{−∞}^{∞} h*(t)g(t + τ) dt
            = ∫_{−∞}^{∞} h*(ζ − τ)g(ζ) dζ
            = (∫_{−∞}^{∞} g*(ζ)h(ζ − τ) dζ)*
            = r*_gh(−τ).    (6.40)
The same result holds for discrete time and finite duration signals.
A function r( 7) with the property that r( -7) = r* (7) is said to be
Hermitian and hence we have proved that autocorrelation functions are
Hermitian. If g is a real function, this implies that the autocorrelation
function is even.
All of the definitions for correlation functions were blithely written as-
suming that the various integrals or sums exist. As usual, this is trivial in
the discrete time finite duration case. It can be shown that the other defi-
nitions all make sense (the integral or the limiting integral or sum exists) if
6.8. CORRELATION 281
the signals have finite energy. Recall that the energy of a signal g is defined
by

    E_g = { ∫_{t∈T} |g(t)|² dt,  continuous time;
            Σ_{n∈T} |g(n)|²,     discrete time. }

Note that the autocorrelation of a signal evaluated at 0 lag is exactly this
energy; that is,

    r_g(0) = E_g.    (6.42)
It is often convenient to normalize correlation functions by the signal
energies. Towards this end we define the correlation coefficient

    γ_gh(τ) = r_gh(τ)/√(E_g E_h),    (6.43)

which satisfies

    |γ_gh(τ)| ≤ 1.    (6.44)

This is a consequence of the Cauchy-Schwarz inequality: if {g(t); t ∈ T}
and {h(t); t ∈ T} are complex-valued continuous time signals, then

    |∫_{t∈T} g(t)h(t) dt|² ≤ ∫_{t∈T} |g(t)|² dt ∫_{t∈T} |h(t)|² dt    (6.45)

with equality if g(t) = Kh*(t) for some complex K; that is, g is a complex
constant times the complex conjugate of h.
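The zero-lag correlation coefficient can be sketched in a few lines; the test signals are our choices, one pair proportional (so |γ| = 1) and one pair not.

```python
import numpy as np

# Correlation coefficient at zero lag: gamma = r_gh(0) / sqrt(Eg * Eh).
def gamma0(g, h):
    r0 = np.sum(np.conj(g) * h)
    return r0 / np.sqrt(np.sum(np.abs(g) ** 2) * np.sum(np.abs(h) ** 2))

g = np.array([1.0, -2.0, 3.0])
h = 2.5 * g                        # proportional: |gamma| = 1 exactly
assert np.isclose(abs(gamma0(g, h)), 1.0)

h2 = np.array([1.0, 0.0, 0.0])     # not proportional: |gamma| < 1
assert abs(gamma0(g, h2)) < 1.0
```

This is the Cauchy-Schwarz bound in action: equality holds only for (conjugate) scalar multiples.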
Similarly, if {g_n; n ∈ T} and {h_n; n ∈ T} are two complex-valued
discrete time signals on T, then
Proof: We prove the result only for the continuous time (integral) case. The
corresponding result for discrete time follows in exactly the same manner
by replacing the integrals by sums. Most proofs in the literature use a
calculus of variations argument which is needlessly complicated. The proof
below is much simpler. It is based on a simple trick and the fact that the
answer is known and we need only prove it. (The calculus of variations is
mainly useful when you do not know the answer first and need to find it.)
Define as usual the energy of a signal by

    E_g = ∫_{t∈T} |g(t)|² dt.

If either signal has infinite energy, then the inequality is trivially true.
Hence we can assume that both signals have finite energy. Observe that
obviously

    (|g(t)|/√E_g − |h(t)|/√E_h)² ≥ 0,

or

    |g(t)|²/E_g + |h(t)|²/E_h ≥ 2|g(t)||h(t)|/√(E_g E_h).

Integrating over t then yields

    1 + 1 ≥ (2/√(E_g E_h)) ∫_{t∈T} |g(t)h(t)| dt.

The right hand side above can be bounded from below using the fact that
for any complex valued function x(t)

    ∫_{t∈T} |x(t)| dt ≥ |∫_{t∈T} x(t) dt|.    (6.47)
Applying the Cauchy-Schwarz inequality to the correlation gives

    |r_gh(τ)| ≤ √(E_g E_h)    (6.49)

and, in the autocorrelation case,

    |r_g(τ)| ≤ r_g(0) = E_g,    (6.50)

since the energy of g* is also the energy of g(t) and the energy in h(t + τ)
is also the energy in h(t).
We summarize for later use the principal properties of autocorrelation
functions:
i:
with infinite energy but finite average power by a suitable normalization.
For example, if
Ig(tW dt = 00, (6.51)
but
lim 21T
T ..... oo
jT Ig(t)1 2dt <
-T
00, (6.52)
To prove the correlation theorem, observe that the signal {g*(−t); t ∈ R}
has Fourier transform

    ∫_{−∞}^{∞} g*(−ζ)e^{−i2πfζ} dζ = ∫_{−∞}^{∞} g*(ζ)e^{i2πfζ} dζ
                                   = (∫_{−∞}^{∞} g(ζ)e^{−i2πfζ} dζ)*
                                   = G*(f).

Thus, since the correlation g ⋆ h is the convolution of {g*(−t); t ∈ R} with h,
the convolution theorem gives

    F_f(g ⋆ h) = G*(f)H(f).
An implication of the correlation theorem is that the Fourier transform
of the autocorrelation is real and nonnegative.
Note that all phase information in the spectrum G(f) is lost in |G(f)|².
This implies that many functions (differing from one another only in phase)
have the same autocorrelation function. Thus the mapping of g(t) into
r_g(τ) is many to one. In general, without further a priori information or
restrictions, a unique g(t) cannot be found from r_g(τ).
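The loss of phase can be demonstrated directly: a signal and a delayed copy of it differ only in the phase of their spectra, so their autocorrelations coincide. The cyclic (DFT) setting below and the random test signal are our choices.

```python
import numpy as np

# Phase is lost in |G(f)|^2: a signal and its circular shift share the
# same (cyclic) autocorrelation.
rng = np.random.default_rng(1)
g = rng.standard_normal(16)
g_delayed = np.roll(g, 5)          # pure phase shift in the DFT domain

def autocorr(x):
    # inverse DFT of |X|^2 -- the cyclic form of the correlation theorem
    return np.fft.ifft(np.abs(np.fft.fft(x)) ** 2).real

assert np.allclose(autocorr(g), autocorr(g_delayed))
```

Both autocorrelations are the inverse transform of the same |G|², so no processing of r_g can recover the delay.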
6.9. PARSEVAL'S THEOREM REVISITED 285
In the infinite duration continuous time case this is

    r_gh(τ) = ∫_{−∞}^{∞} G*(f)H(f)e^{i2πfτ} df.

Application of the inversion formula then implies that r_gh(τ) must be the
inverse transform of G*(f)H(f). Applying this result to the special case
where τ = 0 immediately yields the general form of Parseval's theorem of
Theorem 4.5. The general form is

    ∫_{−∞}^{∞} g*(t)h(t) dt = ∫_{−∞}^{∞} G*(f)H(f) df.
Autocorrelation Width
Because of the shortcomings of the definition of equivalent width, it is
desirable to find a better definition not having these problems. Toward
this end we introduce the autocorrelation width of a signal, defined simply
as the equivalent width of the autocorrelation of the signal. Since the
autocorrelation function has its maximum at the origin, the autocorrelation
width of g is defined by

    ∫_{−∞}^{∞} r_g(τ) dτ / r_g(0).

From the correlation theorem, the Fourier transform of r_g(t) is |G(f)|² and
hence from the zeroth moment property this width equals

    |G(0)|² / ∫_{−∞}^{∞} |G(f)|² df,

where the area property was used to express the denominator in terms of
the spectrum. The right hand side is just one over the equivalent width of
the energy spectrum |G(f)|².
Mean-Squared Width
Yet another definition of width is the standard deviation of the instantaneous power or the mean squared width, which we denote Δt_g. It is defined
as the square root of the variance

    σ²_{|g|²} = <t²>_{|g|²} − (<t>_{|g|²})²
              = ∫_{−∞}^{∞} t²|g(t)|² dt / ∫_{−∞}^{∞} |g(t)|² dt
                − (∫_{−∞}^{∞} t|g(t)|² dt / ∫_{−∞}^{∞} |g(t)|² dt)².

This width can be infinite; for the sinc function, for example,

    ∫_{−∞}^{∞} t² sinc²(t) dt = (1/π²) ∫_{−∞}^{∞} sin²(πt) dt = ∞.
6.10. * BANDWIDTH AND PULSEWIDTH REVISITED 287
These two quantities have a famous relation to each other called the
uncertainty relation which is given in the following theorem.
Theorem 6.10 The Uncertainty Relation
For any continuous time infinite duration signal g(t) with spectrum
G(f), the timewidth-bandwidth product Δt_g Δf_G (called the uncertainty product) satisfies the following inequality:

    Δt_g Δf_G ≥ 1/4π.    (6.57)
Proof of the Uncertainty Relation: For convenience we assume that

    ∫_{−∞}^{∞} |g(t)|² dt = ∫_{−∞}^{∞} |G(f)|² df = 1

and

    ∫_{−∞}^{∞} t|g(t)|² dt = ∫_{−∞}^{∞} f|G(f)|² df = 0.

Define

    (Δt)² = ∫_{−∞}^{∞} t²|g(t)|² dt;    (Δf)² = ∫_{−∞}^{∞} f²|G(f)|² df.

We must show that

    (Δt)(Δf) ≥ 1/4π.

1. If Δt = ∞, the relationship is obviously true, so we assume that
Δt < ∞. If this is true, then lim_{t→±∞} t|g(t)|² = 0 since otherwise the
integral would blow up.
Thus

4. Now consider the left-hand side. For any complex number z we have
that z + z* = 2ℜ(z) ≤ 2|z|, whence

    g*(dg/dt) + g(dg*/dt) = (d/dt)(gg*) = (d/dt)|g|².

Thus we have, integrating by parts,

    ∫_{−∞}^{∞} t (d/dt)|g(t)|² dt = [t|g(t)|²]_{−∞}^{∞} − ∫_{−∞}^{∞} |g(t)|² dt.

But ∫_{−∞}^{∞} |g|² dt = 1 and from point 1, t|g|² = 0 at −∞ and ∞.
6.11. * THE CENTRAL LIMIT THEOREM 289
6. Finally, we have

    (1/4)(−1)² ≤ 4π²(Δt)²(Δf)²

or

    Δt Δf ≥ 1/4π.

When does Δt Δf actually equal 1/4π? In the first inequality (step
(2)), equality is achieved if and only if tg(t) = k(d/dt)g(t) for some complex
constant k. In the second inequality (step (4)) equality is achieved if and
only if z = ∫_{−∞}^{∞} tg(t)(d/dt)g(t)* dt is real valued. Thus if tg(t) = k(d/dt)g(t) for
some real constant k, then both conditions are satisfied since then z =
∫_{−∞}^{∞} t²|g(t)|² dt / k. Thus the lower bound will hold if

    k(d/dt)g(t) − tg(t) = 0,

which has solution

    g(t) = c e^{t²/2k}.

For this to be a finite energy signal k must be negative, say k = −1/2α
with α > 0, in which case g is the unit energy Gaussian

    g(t) = (2α/π)^{1/4} e^{−αt²}

for α > 0. It can be shown that a signal achieving the lower bound necessarily has this form.
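A numerical check that this Gaussian meets the bound with equality; the grid spacing, grid extent, and the closed form used for |G(f)|² (itself a Gaussian) are our choices.

```python
import numpy as np

# Check dt * df = 1/(4*pi) for g(t) = (2a/pi)^(1/4) exp(-a t^2) by
# Riemann sums on a fine, wide grid.
a = 1.0
step = 1e-3
t = np.arange(-20, 20, step)
p = np.sqrt(2 * a / np.pi) * np.exp(-2 * a * t ** 2)   # |g(t)|^2, unit area
dt = np.sqrt(np.sum(t ** 2 * p) * step)

# |G(f)|^2 is proportional to exp(-2 pi^2 f^2 / a); normalize to unit area.
f = np.arange(-20, 20, step)
q = np.exp(-2 * np.pi ** 2 * f ** 2 / a)
q /= np.sum(q) * step
df = np.sqrt(np.sum(f ** 2 * q) * step)

assert abs(dt * df - 1 / (4 * np.pi)) < 1e-4
```

Here dt = 1/(2√α) and df = √α/(2π), so the product is 1/(4π) independent of α, as the stretch theorem predicts.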
The proof of the theorem will take the remainder of the subsection.
First, however, some comments are in order.
The small f approximation may seem somewhat arbitrary, but it can
hold under fairly general conditions. For example, suppose that G(f) has
a Taylor series at f = 0:

    G(f) = Σ_{k=0}^{∞} b_k f^k,

where

    b_k = G^{(k)}(0)/k!

(the derivatives of all orders exist and are finite). Suppose further that

1. b₁ = G′(0) = 0. (This is assumed for convenience.)
2. b₀ = G(0) > 0. Define a = b₀.
3. b₂ = G″(0)/2 < 0. Define c = −b₂.

These assumptions imply (6.58), an equation which is commonly written
as

    G(f) = a − cf² + o(f²),

where o(f²) means a term that goes to zero faster than f². In fact this is
all we need and we could have used this as the assumption.
then the moments of g can be computed from the derivatives of G at the
origin. This formula provides one of the reasons that moments are of interest: they
can be used to compute (or approximate) the original signal or spectrum if
it is nice enough to have a Taylor series.
The condition that b₁ = G′(0) = 0 is satisfied by any real even G(f).
From the first moment property this implies also that <t>_g = 0.
Proof of the Central Limit Theorem: From the stretch theorem, the signal
√n g(√n t) has Fourier transform G(f/√n). From the convolution theorem,
(√n g(√n t))^{∗n} has Fourier transform G^n(f/√n). Thus

    a^{−n} G^n(f/√n) = a^{−n} (a − c(f²/n) + o(f²/n))^n
                     = (1 − (c/a)(f²/n) + (1/a)o(f²/n))^n,

where the o(f²/n) notation means a term that goes to zero with increasing
n faster than f²/n. From elementary real analysis, as n → ∞ the rightmost term goes to e^{−cf²/a}. This result is equivalent to the fact that for
small ε,

    ln(1 − ε) ≈ −ε.

Thus for large n

    lim_{n→∞} (√n g(√n t))^{∗n} / a^n = √(πa/c) e^{−π(√(πa/c) t)²},

and hence

    G^n(f) ≈ a^n e^{−(c/a)nf²}.

Inverting this result we have that

    g^{∗n}(t) ≈ a^n √(πa/nc) e^{−π²at²/(nc)}.

As an example where the conditions fail, consider the signal g(t) with
spectrum

    G(f) = Π(f)/(π√(1 − f²)).
Here G(0) > 0 as required, but G″(0) > 0 violates the sufficient condition
and hence the CLT cannot be applied. What is happening here is that for
small f

    G(f) = (1/π)(1 − f²)^{−1/2} ≈ 1/π + f²/(2π).

Thus a = 1/π and c = −1/(2π). A positive c is required, however, for the
derivation to hold.
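The central limit effect is easy to see numerically: repeated self-convolution of a box pulse converges to a Gaussian. The number of convolutions, grid spacing, and tolerance below are our choices.

```python
import numpy as np

# n-fold convolution of a unit box approaches a Gaussian (CLT).
h = 0.01
box = np.ones(int(1 / h))          # box of height 1 on [0, 1); area 1
g = box.copy()
n = 12
for _ in range(n - 1):
    g = np.convolve(g, box) * h    # '* h' keeps the Riemann-sum scaling

# Compare with the Gaussian of the same (empirical) mean and variance.
t = np.arange(len(g)) * h
area = np.sum(g) * h
mean = np.sum(t * g) * h / area
var = np.sum((t - mean) ** 2 * g) * h / area
gauss = np.exp(-(t - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

assert np.max(np.abs(g - gauss)) < 0.02
```

Here the box spectrum is a sinc, which satisfies the sufficient conditions (positive at 0, zero first derivative, negative second derivative), so the theorem applies.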
6.12 Problems
6.1. A certain discrete time system takes an input signal x = {xn; n E Z}
and forms an output signal Y = {Yn; n E Z} in such a way that the
input and output are related by the formula
Yn = Xn - aYn-l, n E Z;
6.3. Suppose that an infinite duration, continuous time linear time invariant system has transfer function H(f) given by
This filter causes a 90° phase shift for negative frequencies and a −90°
phase shift for positive frequencies. Let h(t) denote the corresponding
impulse response. Let v(t) be a real valued signal with spectrum V(f)
which is band limited to [-W, W]. Define the signal v = v * h, the
convolution of the input signal and the impulse response.
6.9. Find and sketch the following convolutions and cross-correlations. All
signals are infinite duration continuous time signals.
    g_n = { 1, n = 0, 1, 2, 3, 4;  0, otherwise }
and
    h_n = { (1/2)^n, n = 0, 1, …;  0, otherwise. }
6.15. Find the autocorrelation function and the Fourier transform of the
signal {e^{i2πk/8}; k = 0, 1, …, 7}. Find the circular convolution of this
signal with itself and the Fourier transform of the resulting signal.
6.16. Find and sketch the autocorrelations and the cross-correlation of the
discrete time signals
n = 0,1, ... ;
otherwise
    h_n = { e^{−in}, n = −N, −N + 1, …, −1, 0, 1, …, N;  0, otherwise. }
6.17. Evaluate the continuous time convolution δ(t − 1) ∗ δ(t − 2) ∗ Π(t − 1).
6.18. Find a continuous time signal g(t) whose autocorrelation function is
6.20. Evaluate the continuous time convolutions δ(2t + 1) ∗ Π(t/3) and
δ(t − 0.5) ∗ δ(t + 1) ∗ sinc(t).
6.21. Two finite duration discrete time complex-valued signals 9 = {gn; n =
0,1, ... , N -I} and h = {h n ; n = 0,1, ... , N -I} are to be convolved
(using cyclic or circular convolution) to find a new signal y = 9 * h.
This can be done in two ways:
• by brute force evaluation of the convolution sum, or
• by first taking the DFT of each signal, then forming the product,
and then taking the IDFT to obtain 9 * h.
(a) What is the maximum value of |X(τ, f)| over all τ and f? When
is it achieved?
(b) Evaluate X(τ, f) for g(t) = Π(t).
(a) x and y
(b) {x_{n−5}; n ∈ Z}
(c) {x_{5−n}; n ∈ Z}
(d) {x_{|n|}; n ∈ Z}
(e) {x_n cos(2πn/9); n ∈ Z}
(f) {x_n y_n; n ∈ Z}
(g) x ∗ y (the convolution of x and y)
(h) x + y
(i) {x~; n E Z}
6.24. Suppose that you are told two continuous time, infinite duration sig-
nals x = {x(t); t E R} and h = {h(t); t E R} are convolved to
form a new signal y = x * h = {y(t); t E R}. You are also told that
x(t) = e^{−t}u_{−1}(t) and

    y(t) = Π(t − 1/2) − e^{−t}u_{−1}(t) + e^{−(t−1)}u_{−1}(t − 1).
6.25. Find the circular convolution of the sequence {1, 1, 0, 1, 0, 0, 0, 0} with
itself.
6.26. Let g(t) be a bandlimited signal with spectrum G(f) = 0 for |f| ≥ W.
We wish to sample g(t) at the slowest possible rate that will allow
recovery of

    ∫_{−∞}^{∞} g(t) dt.
(a)
(b)
(c)

    ∫_{−∞}^{∞} (sinc(t + 1/2) + sinc(t − 1/2))² dt.
6.28. Evaluate the following integral using Fourier transforms (there is an
easy way).
6.29. Suppose that an infinite duration continuous time signal g(t) has a
zero spectrum outside the interval (−1/2, 1/2). It is asserted that the
magnitude of such a function can never exceed the square root of its
energy. For what nontrivial signal g(t) is equality achieved? That is,
for what function is the magnitude of g(t) actually equal to √E_g for
some t? Prove your answer. (Hint: G(f) = Π(f)G(f).)
6.32. A continuous time signal g(t) is bandlimited and has 0 spectrum G(f)
for all f with |f| ≥ W.
6.33. What is the convolution of the infinite duration continuous time signals

    1/(1 + [2π(t − 1)]²)

and

    1/(1 + [2π(t + 1)]²)?
6.34. Name two different nontrivial signals or generalized signals g that
have the property that g ∗ g = g, i.e., the signal convolved with itself
is itself. (A trivial signal is zero everywhere.) What can you say
about the spectrum of such a signal?
6.36. You are given a linear, time-invariant system with transfer function

    H(f) = e^{−πf²}, f ∈ R.

(a) Suppose the input to the system is x(t) = e^{−πt²}, t ∈ R. Write
the output v(t) as a convolution integral of the input with another function h(t), which you must specify.
(b) Find the output v(t) with the input as given in part (a).
(c) A signal g(t) = 3 cos 3πt, t ∈ R, is input to the system. Find the
output.
(d) The output to a particular input signal y(t) is e^{−πt²}, t ∈ R.
What is the input y(t)?
(e) Now suppose that the signal x(t) of part (a) is instead put into
a filter with impulse response Π(t − 1/2) to form an output w(t).
Evaluate the integrals

    ∫_{−∞}^{∞} w(t) dt

and

    ∫_{−∞}^{∞} t w(t) dt.
(b) What are the frequencies of the three most powerful tones?
(c) What are the frequencies of the next two most powerful tones?
(d) What is the power difference in dB between the most powerful
of the three terms and the next two most powerful?
    p(t) = { +1, 0 ≤ t < 1;  −1, 1 ≤ t < 2. }
(a) Find the Fourier transform P of p.
(b) Find a Fourier series representation for p.
(c) Find a Fourier series representation for p̃, the periodic extension
of p with period 2. Provide a labeled sketch of p̃.
(d) Find the Fourier transform P̃ of p̃.
(e) Suppose that p is put into a filter with an impulse response h
whose Fourier transform H is defined by
6.40. Suppose that a discrete time system with input x = {x_n; n ∈ Z} and
output y = {y_n; n ∈ Z} is defined by the difference equations

    y_n = a x_n − Σ_{k=1}^{M} a_k y_{n−k}; n ∈ Z,
where a > 0 and the ak are all real. This system is linear and time
invariant and hence has a Kronecker delta response h with DTFT
H. Assume that h is a causal filter, that is, that h n = 0 for n < O.
(Physically, the difference equations cannot produce an output before
time 0 if the input is 0 for all negative time.) Assume also that the
ak are such that h is absolutely summable.
    A(f) = Σ_{k=−∞}^{∞} a_k e^{−i2πfk},
This filter h will have the property that when a Kronecker delta
is input, the autocorrelation of the output will equal ("match")
the measured autocorrelation for the first M + 1 values.
Use the previous parts of this problem to provide a set of equations in terms of a, the a_k, and r_s that will satisfy (6.62).
    G(f) = { 1, |f| ≤ 1/16;  0, otherwise. }
(b)

    ∫_{−∞}^{∞} [2/(1 + (2πt)²)] sinc(t) dt

(c)

    ∫_{−∞}^{∞} [sinc(t) + e^{−|t|} sgn(t)]² dt

(d)

for real t.

(e)

    ∫_{−1/2}^{1/2} e^{i2π4f}/(1 + e^{−i2πf}/a) df,

where |a| > 1.
6.47. Does the signal sinc³(t) satisfy the conditions for the central limit
theorem?
6.48. For the function g(t) = Λ(t) cos(πt); t ∈ R, find
(a) The Fourier transform of g.
(b) The equivalent width of g.
(c) The autocorrelation width of g.
6.49. For the function g(t) = sinc²(t) cos(πt); t ∈ R, find
6.50. Find the equivalent width and the autocorrelation width of g(t) =
Π(t) + δ(t − 2).
6.55. Given a band-limited signal g(t) with spectrum G(f) = G(f)Π(f/2W),
define the semi-inverse signal ğ(t) as the inverse Fourier transform of
(1/G(f))Π(f/2W). Evaluate σ²_g, σ²_ğ, and σ²_{g∗ğ} in terms of G(f) and its
derivatives.

Chapter 7
Chapter 7
The basic definitions for two-dimensional (2D) Fourier transforms were in-
troduced in Chapter 2 as Fourier transforms of signals with two arguments.
As in the one-dimensional case, a variety of two-dimensional signal types
are possible. In this chapter we focus on a particular signal type, and
demonstrate the natural extensions of many of the one-dimensional results
to two-dimensional Fourier transforms. We here consider briefly several extensions of previously described results to the two-dimensional case. More
extensive treatments of two-dimensional Fourier analysis to images may be
found in Goodman [18] and Bracewell [8].
For this chapter we consider two dimensional signals having continuous
"time" and infinite duration. The word "time" is in quotes because in most
two-dimensional applications the parameters correspond to space rather
than time. Thus a signal will have the form

    g = {g(x, y); x ∈ R, y ∈ R}.

As we only consider this case in this chapter, we can safely abbreviate the
full notation to just g(x, y) when appropriate. Repeating the definition for
the two dimensional Fourier transform using this notation, we have

    G(f_x, f_y) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y) e^{−i2π(f_x x + f_y y)} dx dy.    (7.1)
1. Linearity
If g_k(x, y) ⊃ G_k(f_X, f_Y) for k = 1, ..., K, then
Σ_{k=1}^{K} α_k g_k(x, y) ⊃ Σ_{k=1}^{K} α_k G_k(f_X, f_Y). (7.2)
2. Shift
If g(x, y) ⊃ G(f_X, f_Y), then
g(x − x₀, y − y₀) ⊃ G(f_X, f_Y) e^{−i2π(f_X x₀ + f_Y y₀)}. (7.3)
4. Stretch (Similarity)
If g(x, y) ⊃ G(f_X, f_Y), then
g(ax, by) ⊃ (1/|ab|) G(f_X/a, f_Y/b). (7.4)
6. Parseval's Theorem
∫_{-∞}^{∞} ∫_{-∞}^{∞} g₁(x, y) g₂*(x, y) dx dy = ∫_{-∞}^{∞} ∫_{-∞}^{∞} G₁(f_X, f_Y) G₂*(f_X, f_Y) df_X df_Y. (7.6)
When the functions g₁ and g₂ are both equal to the same function g, Parseval's theorem reduces to a statement that the energy of g can be calculated as the volume lying under either |g|² or |G|².
7. Convolution
If g ⊃ G and h ⊃ H, then
g(x, y) * h(x, y) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} g(ξ, η) h(x − ξ, y − η) dξ dη ⊃ G(f_X, f_Y) H(f_X, f_Y).
A separable form of the 2-D unit impulse, which satisfies the usual sifting property, is
δ(x, y) = δ(x)δ(y).
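The 2-D Parseval relation has a direct discrete analog that is easy to check numerically. A minimal sketch with NumPy's unnormalized 2-D DFT, where the frequency-domain energy sum carries a 1/(MN) factor (array sizes and the random test image are arbitrary):

```python
import numpy as np

# Discrete analog of the 2-D Parseval relation: with numpy's unnormalized
# fft2, sum |g|^2 over the image equals sum |G|^2 / (M*N) over frequencies.
rng = np.random.default_rng(0)
g = rng.standard_normal((32, 48))      # arbitrary real "image"
G = np.fft.fft2(g)

energy_space = np.sum(np.abs(g) ** 2)
energy_freq = np.sum(np.abs(G) ** 2) / g.size
assert np.allclose(energy_space, energy_freq)
```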
[Figure: a point-spread function h(x, y) and its reflection h(−x, −y).]
However, other forms of the 2-D unit impulse are also possible. For example,
a circularly symmetric form could be defined through a limiting sequence
of circ functions, i.e.
δ(x, y) = lim_{n→∞} (n²/π) circ(√((nx)² + (ny)²)),
where the factor 1/ 7r assures unit volume. Of course the above equation
should not be taken literally, for the limit does not exist at x = y = 0, but
equality of the left and right will hold if the limit is applied to integrals of
the right hand side.
In two dimensions, the unit impulse response of a linear system is often called the point-spread function. By direct analogy with the 1-D case, the output of a space-invariant linear system is the convolution of the input with the point-spread function:
v(x, y) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} u(ξ, η) h(x − ξ, y − η) dξ dη.
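As a numerical sanity check of the 2-D convolution property, the following sketch (all names illustrative) compares a direct circular 2-D convolution of an input with a point-spread function against the inverse DFT of the product of their DFTs:

```python
import numpy as np

# Check of the 2-D convolution theorem on tiny arrays: a direct circular
# convolution of an input u with a point-spread function h matches the
# inverse DFT of the product of their DFTs.
def circ_conv2(u, h):
    # Direct circular 2-D convolution; O(N^4), fine for tiny arrays.
    M, N = u.shape
    out = np.zeros((M, N))
    for x in range(M):
        for y in range(N):
            for a in range(M):
                for b in range(N):
                    out[x, y] += u[a, b] * h[(x - a) % M, (y - b) % N]
    return out

rng = np.random.default_rng(1)
u = rng.standard_normal((8, 8))        # input "object"
h = rng.standard_normal((8, 8))        # point-spread function
direct = circ_conv2(u, h)
via_fft = np.real(np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(h)))
assert np.allclose(direct, via_fft)
```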
[Figure: imaging geometry; an object plane at distance z₀ in front of a lens is imaged onto an image plane at distance z₁ behind it.]
(The magnification can be shown to be the ratio of the distance of the image plane from the lens to the distance of the object plane from the lens.) We assume that
proper normalizations of the object coordinate system have been carried
out to make the system space invariant.
To find the transfer function H(ρ) = F_ρ{h(r)}, recall that
circ(r) ⊃ J₁(2πρ)/ρ. (7.12)
The scaling theorem for the zero-order Hankel transform can be stated
g(ar) ⊃ (1/a²) G(ρ/a), (7.13)
and therefore
(7.14)
Thus
(7.15)
1. The first zero occurs at ρ₀ = 0.610/f. The larger f (i.e., the larger the defocus), the smaller ρ₀ and hence the smaller the bandwidth of the system.
[Figure: X-ray projection geometry. In the top view, rays from the source travel along the y′ direction of coordinates (x′, y′) rotated by angle θ with respect to (x, y) and are collected by a detector; the side view shows the source, object, and detector.]
The integral is required because absorption occurs along the entire y′ path, yielding the projection of the absorption coefficient
p_θ(x′) = ∫_{-∞}^{∞} g(x′ cos θ − y′ sin θ, x′ sin θ + y′ cos θ) dy′. (7.23)
cutting through the cylinder has area p_θ(x′) for the specific x′. To find this
area we need the height of the rectangle, which is unity, and its width. To
find the width refer to Fig. 7.7. The result is:
Proof:
This integral is to be carried out over the entire (x′, y′) plane. Equivalently, we can integrate over the entire (x, y) plane.
[Figure: a function g(x, y), its projection p_θ(x′), and (b) the corresponding 1-D spectrum G_θ(f).]
Solution:
1. Any function that is separable in the (x, y) domain has a Fourier
transform that is separable in the (fx, fy) domain. Therefore
g(x, y) = g_X(x) g_Y(y) ⊃ G_X(f_X) G_Y(f_Y).
2. First find the projection through g(x, y) for θ = 0:
p₀(x) = ∫_{-∞}^{∞} g(x, y) dy = g_X(x) ∫_{-∞}^{∞} g_Y(y) dy.
3. Next find the projection through g(x, y) at angle θ = π/2. We obtain
p_{π/2}(x′) = g_Y(x′) ∫_{-∞}^{∞} g_X(x) dx.
Thus the 1-D transform of this projection with respect to the variable x′ yields information about the f_Y dependence of the spectrum, up to an unknown multiplier G_X(0).
4. Now the undefined multipliers must be found. Note that the value of the 1-D spectrum of either projection at zero frequency yields the product of these two constants, so
P₀(f_X) P_{π/2}(f_Y) / P₀(0) = G_X(f_X) G_Y(0) G_X(0) G_Y(f_Y) / (G_X(0) G_Y(0)) = G_X(f_X) G_Y(f_Y).
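The four steps above can be checked numerically for a hypothetical separable choice of g (two Gaussians, evaluated by simple Riemann sums on a grid); the 1-D spectrum of the θ = 0 projection should equal G_X(f_X) G_Y(0):

```python
import numpy as np

x = np.linspace(-8, 8, 2001)
dx = x[1] - x[0]
gx = np.exp(-np.pi * x**2)                 # gx(x)
gy = np.exp(-np.pi * (x / 2) ** 2)         # gy(y), on the same grid

p0 = gx * (dx * gy.sum())                  # projection at theta = 0

f = 0.37                                   # an arbitrary test frequency
kernel = np.exp(-2j * np.pi * f * x)
P0 = dx * np.sum(p0 * kernel)              # 1-D spectrum of the projection
Gx = dx * np.sum(gx * kernel)              # Gx(f)
Gy0 = dx * gy.sum()                        # Gy(0)
assert np.allclose(P0, Gx * Gy0)
```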
Solution:
It can be shown (see, e.g., Bracewell, p. 249) that
πa J₀(2πar) ⊃ (1/2)δ(ρ − a) = G(ρ).
Therefore
P_θ(f) = G(f) = (1/2)δ(f − a) + (1/2)δ(f + a)
for any θ and hence
p_θ(x′) = cos 2πax′.
2. We cannot begin the final inverse FFT until all data has been gathered
and transformed.
This leads to an alternative approach, developed by Bracewell in the
1950s, called convolution and backprojection. The goal is to reconstruct
g(x, y) from the projections p_θ(x′) taken at all θ between 0 and π. We will
neglect the fact that () is sampled discretely.
The starting point is the definition of the inverse 2-D Fourier transform,
g(x, y) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} G(f_X, f_Y) e^{i2π(f_X x + f_Y y)} df_X df_Y.
Convert from (f_X, f_Y) coordinates to (s, θ), where s scans R and θ runs from 0 to π:
f_X = s cos θ
f_Y = s sin θ
df_X df_Y = |s| ds dθ.
Then by straightforward substitution,
g(x, y) = ∫_0^π dθ ∫_{-∞}^{∞} ds |s| G(s cos θ, s sin θ) e^{i2πs(x cos θ + y sin θ)},
that is,
g(x, y) = ∫_0^π dθ f_θ(x cos θ + y sin θ), (7.25)
where
f_θ(x′) = ∫_{-∞}^{∞} |s| G(s cos θ, s sin θ) e^{i2πsx′} ds. (7.26)
Note that f_θ(x′) is not the projection p_θ(x′) itself but the projection filtered so that its 1-D spectrum is weighted by |s|; in general
g(x, y) ≠ ∫_0^π dθ p_θ(x cos θ + y sin θ).
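A minimal numerical sketch of (7.25)-(7.26), assuming the test signal g(x, y) = e^{−π(x²+y²)}, whose projection at every angle is e^{−πx′²}: filter the projection with |s| in the frequency domain and backproject at the origin, where the angular integral is trivial and should return g(0, 0) = 1.

```python
import numpy as np

# Filtered backprojection at (0, 0) for a circularly symmetric Gaussian.
N, dx = 4096, 0.01
xp = (np.arange(N) - N // 2) * dx           # x' grid
p = np.exp(-np.pi * xp**2)                  # projection, independent of theta

s = np.fft.fftfreq(N, dx)                   # frequency grid for the 1-D FT
P = dx * np.fft.fft(np.fft.ifftshift(p))    # approximate continuous FT of p
f0 = np.sum(np.abs(s) * P).real / (N * dx)  # filtered projection at x' = 0

g00 = np.pi * f0                            # trivial integral over theta in [0, pi)
assert abs(g00 - 1.0) < 5e-3
```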
The geometrical pattern in which the samples are taken (i.e., the sampling grid) is also quite flexible. Most common is the use of a rectangular sampling function defined by
s(x, y) = Σ_{n=-∞}^{∞} Σ_{m=-∞}^{∞} δ(x − nX) δ(y − mY)
and illustrated in Figure 7.12. However, other sampling grids are also
possible. For example, the sampling function
can be thought of as resulting from a rotation of the (x, y) axes with respect
to the delta functions by 45 degrees, thus preserving the volume under the
delta functions but changing their locations. The arguments of the two ill
functions are simultaneously zero at the new locations
(7.29)
where nand m run over the integers. The above locations are the new
locations of the delta functions in this modified sampling grid.
Figure 7.12: Rectangular sampling grid. 8 functions are located at each
dot.
The spectrum of the sampled signal is then
[Σ_{n=-∞}^{∞} Σ_{m=-∞}^{∞} δ(f_X − n/X) δ(f_Y − m/Y)] * G(f_X, f_Y)
= Σ_{n=-∞}^{∞} Σ_{m=-∞}^{∞} G(f_X − n/X, f_Y − m/Y). (7.30)
As in the 1-D case, the effect of the sampling operation is to replicate the
signal spectrum an infinite number of times in the frequency domain, as
shown in Figure 7.13.
In order for the zero-order (n = 0, m = 0) term in the frequency domain
to be recoverable, it is necessary that there be no overlap of the various
spectral terms. This will be assured when the sampling intervals X and
Yare chosen sufficiently small. For the original rectangular sampling grid,
if all frequency components of g(x, y) lie within a rectangle of dimensions 2B_X × 2B_Y in the f_X and f_Y directions, respectively, centered on the origin in the frequency domain, then sampling intervals X = 1/(2B_X) and Y = 1/(2B_Y) or
[Figure 7.13: (a) the spectrum of the original signal; (b) the replicated spectral islands of the sampled signal in the (f_X, f_Y) plane.]
smaller will suffice. Under such conditions, the spectral islands will separate
and the n = 0, m = 0 island will be recoverable by proper filtering.
To recover g(x, y), the sampled data is passed through a 2-D linear invariant filter which passes the (n = 0, m = 0) spectral island without change and completely eliminates all other islands. For this sampling grid,
a suitable linear filter is one with transfer function
H(f_X, f_Y) = Π(f_X/(2B_X)) Π(f_Y/(2B_Y)).
Applying this filter to the replicated spectrum recovers G(f_X, f_Y), and inverse transforming yields the interpolation formula
g(x, y) = Σ_{n=-∞}^{∞} Σ_{m=-∞}^{∞} g(n/(2B_X), m/(2B_Y)) sinc(2B_X x − n) sinc(2B_Y y − m),
which holds provided X ≤ 1/(2B_X) and Y ≤ 1/(2B_Y). This equation is the 2-D equivalent of
(4.15) which was valid in one dimension. Note that at least 4Bx By samples
per unit area are required for reconstruction of the original function.
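A sketch of the separable interpolation formula, using a hypothetical band-limited test signal (a shifted 2-D sinc) sampled at the Nyquist intervals X = 1/(2B_X) and Y = 1/(2B_Y); the truncated Shannon sums should reproduce the signal at an arbitrary point:

```python
import numpy as np

Bx, By = 1.0, 2.0
X, Y = 1 / (2 * Bx), 1 / (2 * By)          # Nyquist sampling intervals

def g(x, y):
    # band-limited to |fx| <= Bx, |fy| <= By (np.sinc(x) = sin(pi x)/(pi x))
    return np.sinc(2 * Bx * (x - 0.3)) * np.sinc(2 * By * (y + 0.1))

n = np.arange(-600, 601)
x0, y0 = 0.17, -0.42                       # arbitrary evaluation point

# The separable 2-D interpolation factors into two 1-D Shannon sums.
gx = np.sum(np.sinc(2 * Bx * (n * X - 0.3)) * np.sinc(2 * Bx * x0 - n))
gy = np.sum(np.sinc(2 * By * (n * Y + 0.1)) * np.sinc(2 * By * y0 - n))
assert abs(gx * gy - g(x0, y0)) < 2e-3
```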
This discussion would not be complete without some mention of the
wide variety of other possible sampling theorems that can be derived in the
2-D case. For example, suppose that the reconstruction or interpolation
filter were chosen to be circularly symmetric, rather than separable in the
(x, y) coordinate system. A filter with transfer function
circ(ρ/ρ_c), inscribed within the rectangular replication cell, leads to the interpolation formula
g(x, y) = (π/4) Σ_{n=-∞}^{∞} Σ_{m=-∞}^{∞} g(n/(2ρ_c), m/(2ρ_c)) [2J₁(2πρ_c r_{nm})] / (2πρ_c r_{nm}), (7.32)
with
r_{nm} = √((x − n/(2ρ_c))² + (y − m/(2ρ_c))²),
where the factor π/4 is the ratio of the frequency-domain area covered by the transfer function of the interpolation filter (πρ_c²) to the area of the unit rectangular cell in the 2-D frequency-domain replication grid (4ρ_c²). Thus it is
clear that even for a fixed choice of sampling grid there are many possible
forms for the sampling theorem.
If the sampling grid is allowed to change, then additional forms of the
sampling theorem can be found. For example, a hexagonal sampling grid
plays an important role in the sampling of functions with circularly band-
limited spectra. Such a grid provides the densest possible packing of circular
regions in the frequency domain.
We will not pursue the subject of sampling further here. The purpose
of the discussion has been to point out the extra richness of the theory that
occurs when the dimensionality of the signals is raised from 1 to 2.
7.8 Problems
7.1. Find the following convolutions.
(a) g(r) = (J₁(2πr)/r) * (J₁(5πr)/r).
(b) g(x, y) = Π(x) Π(y) * Π(x) Π(y).
7.2. Evaluate the integral
g(r) = πa J₀(2πar).
(b) A projection through a 2-D circularly symmetric function is found to be
p(x) = e^{−πx²}.
Chapter 8

Memoryless Nonlinearities
We have seen that Fourier analysis is a powerful tool for describing and
analyzing linear systems. A particularly important application has been
that of sampling continuous time signals to produce discrete time signals
and the quantifying of the conditions under which no information is lost
by sampling. The purpose of this chapter is twofold. First, we demon-
strate that Fourier analysis can also be a useful tool for analyzing simple
nonlinear systems. The techniques used in this chapter are a relatively minor variation of techniques already seen, but this particular application of Fourier theory is often overlooked in the engineering literature. Although a standard component of courses devoted to nonlinear systems, the relative scarcity of such courses and the lack of examples in engineering transform texts have led to a common, near mythological belief that Fourier methods are useful only for linear systems. Using an idea originally due to Rice [28] and popularized by Davenport and Root [15] as the "transform
method," we show how the behavior of memoryless nonlinear systems can
be studied by applying the Fourier transform to the nonlinearity rather
than to the signals themselves.
The second goal of the chapter is to consider the second step in the
conversion of continuous signals to digital signals: quantization. A uniform
quantizer provides an excellent and important example of a memoryless
nonlinearity and it plays a fundamental role at the interface of analog and
digital signal processing. Just as sampling "discretizes" time, quantization
converts a continuous amplitude signal into a discrete amplitude signal.
The combination of sampling and quantization produces a signal that is
discrete in both time and amplitude, that is, a digital signal. A benefit of
focusing on this example is that we can use the tools of Fourier analysis
to consider the accuracy of popular models of and approximations for the
334 CHAPTER 8. MEMORYLESS NONLINEARITIES
w(t) = α_t(v(t)); t ∈ T,
so that the output at time t depends only on the current input and not on any past or future inputs (or outputs). We will emphasize real-valued nonlinearities, i.e., α_t(v) ∈ R for all v ∈ R. When α_t does not depend on t, the memoryless nonlinearity is said to be time invariant and we drop the subscript.
Let A ⊂ R denote the range of possible values for the input signal v(t), that is, v(t) ∈ A for all t ∈ T. Depending on the system, A could be the entire real line R or only some finite length interval of the real line, e.g., [−V, V]. We assume that A is a continuous set since our principal interest will be an α_t that quantizes the input, i.e., that maps a continuous input into a discrete output.
The function α_t maps A into the real line; that is, α_t can be thought of as being a signal α_t = {α_t(v); v ∈ A}. This simple observation is the fundamental idea needed. Since α_t is a signal, it has a Fourier transform
(infinite length is better terminology here). In this case we have that
α_t(x) = ∫_{-∞}^{∞} A_t(f) e^{i2πfx} df; x ∈ A. (8.1)
If, on the other hand, A has finite length, say L (we don't use T since the
parameter no longer corresponds to time), then the inversion is a Fourier
series instead of a Fourier integral:
α_t(x) = Σ_{n=-∞}^{∞} (A_t(n/L)/L) e^{i2π(n/L)x}; x ∈ A. (8.2)
sinusoids are not the fundamental building blocks for nonlinear systems that they are for linear systems. Nonetheless sinusoids form an important class of "test signals" commonly used to describe the behavior of nonlinear systems.
Suppose now that u(t) = a sin(2πf₀t) for t ∈ T and suppose that the Fourier series representation of the memoryless nonlinearity is given. Then
w(t) = α(u(t)) = Σ_{n=-∞}^{∞} (A(n/L)/L) e^{i2π(n/L)a sin(2πf₀t)}. (8.4)
The exponential term is itself now periodic in t and can be further expanded in a Fourier series, which is exactly the Jacobi-Anger formula of (3.91):
e^{iz sin(2πf₀t)} = Σ_{k=-∞}^{∞} J_k(z) e^{i2πkf₀t}, (8.5)
where J_k is the kth order Bessel function of (1.12). Incorporating (8.5) into (8.4) yields
w(t) = Σ_{n=-∞}^{∞} (A(n/L)/L) Σ_{k=-∞}^{∞} J_k(2π(n/L)a) e^{i2πf₀kt}
     = Σ_{k=-∞}^{∞} e^{i2πf₀kt} Σ_{n=-∞}^{∞} (A(n/L)/L) J_k(2π(n/L)a). (8.6)
Thus the output can be written as
w(t) = Σ_{k=-∞}^{∞} c_k e^{i2πkf₀t} (8.7)
with coefficients
c_k = Σ_{n=-∞}^{∞} (A(n/L)/L) J_k(2π(n/L)a). (8.8)
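The Jacobi-Anger step (8.5) underlying this expansion is easy to verify numerically: the Fourier coefficients of one period of e^{iz sin θ} are the Bessel values J_k(z). The sketch below compares DFT coefficients against a power-series evaluation of J_k (helper names are illustrative):

```python
import numpy as np
from math import factorial

def bessel_j(k, z, terms=30):
    # Power-series evaluation of the k-th order Bessel function (k >= 0).
    return sum((-1) ** m / (factorial(m) * factorial(m + k)) * (z / 2) ** (2 * m + k)
               for m in range(terms))

z, N = 1.7, 256
theta = 2 * np.pi * np.arange(N) / N
# Fourier coefficients of exp(i z sin theta); c_k should equal Jk(z).
coeffs = np.fft.fft(np.exp(1j * z * np.sin(theta))) / N

for k in range(5):
    assert abs(coeffs[k] - bessel_j(k, z)) < 1e-10
```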
In the discrete time case the difference from an ordinary Fourier series is that the frequencies do not have the form n/N for an integer period N (since there is no period). A series of this form behaves in many ways like a Fourier series and is an example of a generalized Fourier series or a Bohr-Fourier series. The output signal turns out to be an example of an almost periodic function. See, e.g., Bohr for a thorough treatment [4].
In both the continuous and discrete time case, (8.7) immediately gives
the Fourier transform of the output signal as
W(f) = Σ_{k=-∞}^{∞} c_k δ(f − kf₀).
A uniform quantizer with M levels covering an input range of total width 2b divides the range into M bins, each of width
Δ = 2b/M.
The number R = log2 M is called the rate or bit rate of the quantizer and is
measured in bits per sample. We usually assume for convenience that M is
a power of 2 and hence R is an integer. If the input falls within a bin, then
the corresponding quantizer output is the midpoint (or Euclidean centroid)
of the bin. If the input is outside of the bins, its representative value is the
midpoint of the closest bin. Thus a uniform quantizer is represented as a
"staircase" function. Fig. 8.1 provides an example of a uniform quantizer with M = 8 and hence R = 3 and b = 4Δ. The operation of the quantizer can
also be summarized as in Table 8.1.
[Figure 8.1: the staircase function q(v) of a uniform quantizer with M = 8.]
Table 8.1: Quantizer bins and output levels (partial):
[3Δ, ∞)  : 7Δ/2
[2Δ, 3Δ) : 5Δ/2
[Δ, 2Δ)  : 3Δ/2
[0, Δ)   : Δ/2
[−Δ, 0)  : −Δ/2
with the remaining negative bins mapped symmetrically.
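A minimal sketch of the uniform quantizer just described, assuming M bins of width Δ with midpoint outputs and clipping of overloaded inputs (the function name and defaults are illustrative):

```python
import numpy as np

def uniform_quantize(v, M=8, delta=1.0):
    # M bins of width delta centered on the origin; outputs are bin midpoints.
    cells = np.floor(np.asarray(v, dtype=float) / delta)
    cells = np.clip(cells, -M // 2, M // 2 - 1)   # clip overloaded inputs
    return (cells + 0.5) * delta

# Matches Table 8.1 for M = 8, delta = 1: inputs in [3, inf) map to 7/2, etc.
assert uniform_quantize(3.7) == 3.5
assert uniform_quantize(0.2) == 0.5
assert uniform_quantize(-0.4) == -0.5
assert uniform_quantize(100.0) == 3.5       # overload: clipped to the top bin
```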
In the no-overload region the quantizer can be expressed as
q(v) = Δ (⌊v/Δ⌋ + 1/2). (8.12)
We write it in this form so that the quantizer output can be written as its input plus an error term, viz.
q(v) = v + Δ e(v). (8.13)
[Figure: the quantizer modeled as its input plus an additive error, and the normalized error e(v) plotted for v between −4Δ and 4Δ.]
Several important facts can be inferred from the picture. First, the error
function satisfies
|e(v)| ≤ 1/2 if |v| ≤ 4Δ.
In other words, the normalized error cannot get too big provided the input
does not lie outside of the M quantizer bins. For general M the condition
becomes
|e(v)| ≤ 1/2 if |v| ≤ (M/2) Δ = b,
that is, provided the input does not exceed the nominal range b. When an
input falls outside this range [-b, b] we say that the quantizer is overloaded
or saturated. When a quantizer is not overloaded, the normalized quanti-
zation error magnitude cannot exceed 1/2. In this case the quantization
error is often called granular noise. If the quantizer is overloaded, the error
is called overload noise. We here consider only granular noise and assume
that the input range is indeed [-b, b] and hence e(v) E [-1/2,1/2]. If this
is not true in a real system, it is often forced to be true by clipping or lim-
iting the input signal to lie within the range of the quantizer. Sometimes
it is useful to model the quantizer as having an infinite number of levels, in
which case the no-overload region is the entire real line.
Next observe that e( u) is a periodic function of u for u E [-b, b] and that
its period is ~. In other words, the error is periodic within the no-overload
region. In fact it can be shown by direct substitution and some algebra
that
e(u) = 1/2 − (u/Δ mod 1); u ∈ [−b, b]. (8.14)
Expanding this sawtooth in a Fourier series,
e(u) = Σ_{l≠0} (1/(i2πl)) e^{i2πlu/Δ} = Σ_{l=1}^{∞} (1/(πl)) sin(2πlu/Δ). (8.15)
This series gives the error function except at the points of discontinuity.
This is the Fourier series of the time-invariant memoryless nonlinearity α(u) = e(u) of interest in this section.
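The sawtooth formula (8.14) and its Fourier series (8.15) can be checked numerically; away from the bin edges, a truncated series should match the exact error closely:

```python
import numpy as np

delta = 0.25
u = np.linspace(-0.9, 0.9, 401)
e_exact = 0.5 - np.remainder(u / delta, 1.0)     # (8.14) inside [-b, b]

L = 2000                                         # series truncation order
l = np.arange(1, L + 1)
e_series = np.sin(2 * np.pi * np.outer(u, l) / delta) @ (1.0 / (np.pi * l))

# Keep points away from the bin edges, where the series converges slowly.
interior = np.abs(e_exact) < 0.45
assert np.max(np.abs(e_exact[interior] - e_series[interior])) < 0.01
```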
Now suppose that the input to the quantizer is a sampled signal u_n = u(nT_s) as previously. We now have that the resulting normalized error sequence e_n = e(u_n) is given by
e_n = Σ_{l≠0} (1/(i2πl)) e^{i2πlu_n/Δ}, (8.16)
which becomes a generalized Fourier series
e_n = Σ_{k=-∞}^{∞} b_k e^{i2πf₀kn}, (8.17)
where here the frequency shift is modulo the frequency domain of definition, and where the coefficients are
b_k = Σ_{l≠0} (1/(i2πl)) J_k(2πl a/Δ), (8.19)
and hence, since J_k(−z) = (−1)^k J_k(z), the even-indexed terms cancel:
b_k = { Σ_{l≠0} (1/(i2πl)) J_k(2πl a/Δ),  k odd
      { 0,                                k even, (8.20)
which yields
e_n = Σ_{m=-∞}^{∞} e^{i2πf₀(2m−1)n} ( Σ_{l≠0} (1/(i2πl)) J_{2m−1}(2πl a/Δ) ), (8.21)
that is,
e_n = Σ_{m=-∞}^{∞} c_m e^{i2πλ_m n}, (8.22)
where
λ_m = f₀(2m − 1)
(taken mod 1 so that it is in [0, 1)) and
c_m = Σ_{l≠0} (1/(i2πl)) J_{2m−1}(2πl a/Δ).
We thus have a (generalized) Fourier series for the error signal when
a sinusoid is input to a sampler and a uniform quantizer. The Fourier
transform can be defined in the same way as it was for periodic signals: it is a sum of impulses at frequencies λ_m having area c_m. Note that although
a single frequency sinusoid is put into the system, an infinity of harmonics is produced, a behavior not possible with a time invariant linear system.
As one might guess, this technique is not useful for all memoryless non-
linearities. It only works if the input/output mapping in fact has a Fourier
transform or a Fourier series. This can fail in quite ordinary cases; for example, a square-law device α(x) = x² acting on an unbounded input cannot be handled using ordinary Fourier transforms. Laplace transforms can play a useful role for such cases. See, for example, Davenport and Root [15].
8.5 Problems
1. Frequency modulation (FM) of an input signal g = {g(t); t ∈ R} is defined by the signal
We here collect several of the Fourier transform pairs developed in the book,
including both ordinary and generalized forms. This provides a handy
summary and reference and makes explicit several results implicit in the
book. We also use the elementary properties of Fourier transforms to extend
some of the results.
We begin in Tables A.1 and A.2 with several of the basic transforms derived for the continuous time infinite duration case. Note that both the Dirac delta δ(x) and the Kronecker delta δ_x appear in the tables. The Kronecker delta is useful if the argument x is continuous or discrete for representations of the form h(x) = f(x)δ_x + g(x)(1 − δ_x), which means that h(0) = f(0) and h(x) = g(x) when x ≠ 0.
The transforms in Table A.2 are all obtained from transforms in Table A.1 by the duality property, that is, by reversing the roles of time and frequency.
Several of the previous signals are time-limited (i.e., are infinite duration signals which are nonzero only in a finite interval) and hence have corresponding finite duration signals. The Fourier transforms are the same for
any fixed real frequency I, but we have seen that the appropriate frequency
domain S is no longer the real line but only a discrete subset. Table A.3
provides some examples.
Table A.4 collects several discrete time infinite duration transforms.
Remember that for these results a difference or sum in the frequency domain
is interpreted modulo that domain.
Table A.5 collects some of the more common closed form DFTs.
Table A.6 collects several two-dimensional transforms.
348 APPENDIX A. FOURIER TRANSFORM TABLES
g ⊃ F(g)
{sgn(t); t ∈ R} ⊃ {1/(iπf); f ∈ R}
{u₋₁(t); t ∈ R} ⊃ {(1/2)δ(f) + (1/(i2πf))(1 − δ_f); f ∈ R}
{δ(t − t₀); t ∈ R} ⊃ {e^{−i2πft₀}; f ∈ R}
{δ(at + b); t ∈ R} ⊃ {(1/|a|) e^{i2πfb/a}; f ∈ R}
{δ(t); t ∈ R} ⊃ {1; f ∈ R}
{δ′(t); t ∈ R} ⊃ {i2πf; f ∈ R}
{Ψ_T(t) = Σ_{n=-∞}^{∞} δ(t − nT); t ∈ R} ⊃ {(1/T)Ψ_{1/T}(f); f ∈ R}
{ill(t); t ∈ R} ⊃ {ill(f); f ∈ R}
{(1/2)(δ(t + 1/2) + δ(t − 1/2)); t ∈ R} ⊃ {cos(πf); f ∈ R}
{(1/2)(δ(t + 1/2) − δ(t − 1/2)); t ∈ R} ⊃ {i sin(πf); f ∈ R}
{sech(πt); t ∈ R} ⊃ {sech(πf); f ∈ R}
g ⊃ F(g)
{1/(πt); t ∈ R} ⊃ {−i sgn(f); f ∈ R}
{e^{−i2πf₀t}; t ∈ R} ⊃ {δ(f + f₀); f ∈ R}
{1; t ∈ R} ⊃ {δ(f); f ∈ R}
{cos(πt); t ∈ R} ⊃ {(1/2)(δ(f + 1/2) + δ(f − 1/2)); f ∈ R}
{sin(πt); t ∈ R} ⊃ {(i/2)(δ(f + 1/2) − δ(f − 1/2)); f ∈ R}
{(1/2)δ(t) + (i/(2πt))(1 − δ_t); t ∈ R} ⊃ {u₋₁(f); f ∈ R}
g ⊃ F(g)
{sgn(n); n ∈ Z} ⊃ {(1 − e^{i2πf})/(1 − cos 2πf); f ∈ [−1/2, 1/2)}
{δ_{n−n₀}; n ∈ Z} ⊃ {e^{−i2πfn₀}; f ∈ [−1/2, 1/2)}
{δ_n; n ∈ Z} ⊃ {1; f ∈ [−1/2, 1/2)}
[4] H. Bohr. Almost Periodic Functions. Chelsea, New York, 1947. Translation by Harvey Cohn.
[7] R. Bracewell. The Hartley Transform. Oxford University Press, New York, 1987.
unilateral, 72
unit impulse function, 223
unit sample, 9
unit sample response, 253
unit step, 11
unitary, 119
upper limit, 126
upsampling, 187
variance, 196
waveform, 2
wavelet, 147
Haar, 148
wedge, 14
Whittaker-Shannon-Kotelnikov sampling theorem, 173
width
autocorrelation, 285
mean squared, 286
window, 43
windowing, 43
zero filling, 44