Fundamentals of Linear Systems and Signal Processing
Authored by:
BEKHITI Belkacem
NAIL Bachir
2021
Professor Kamel Hariche
The Breadth and Depth of
Signal Processing
(By Steven W. Smith)
Signal processing is one of the most powerful technologies that will shape science and
engineering in the twenty-first century. Revolutionary changes have already been made in a
broad range of fields: communications, medical imaging, radar & sonar, high fidelity music
reproduction, and oil prospecting, to name just a few. Each of these areas has developed a
deep Digital Signal Processing (DSP) technology, with its own algorithms, mathematics, and
specialized techniques. This combination of breadth and depth makes it impossible for any
one individual to master all of the DSP technology that has been developed.
II. The Roots of DSP: Signal processing is distinguished from other areas in computer
science by the unique type of data it uses: signals. In most cases, these signals originate as
sensory data from the real world: seismic vibrations, visual images, sound waves, etc.
Signal processing is the mathematics, the algorithms, and the techniques used to
manipulate these signals after they have been converted into a digital form. This includes a
wide variety of goals, such as: enhancement of visual images, recognition and generation of
speech, compression of data for storage and transmission, etc. Suppose we attach an
analog-to-digital converter to a computer and use it to acquire a chunk of real world data.
DSP answers the question: What next?
The roots of DSP are in the 1960s and 1970s when digital computers first became
available. Computers were expensive during this era, and DSP was limited to only a few
critical applications. Pioneering efforts were made in four key areas: radar & sonar, where
national security was at risk; oil exploration, where large amounts of money could be made;
space exploration, where the data are irreplaceable; and medical imaging, where lives could
be saved.
The personal computer revolution of the 1980s and 1990s caused DSP to explode with
new applications. Rather than being motivated by military and government needs, DSP was
suddenly driven by the commercial marketplace. Anyone who thought they could make
money in the rapidly expanding field was suddenly a DSP vendor. DSP reached the public
in such products as: mobile telephones, compact disc players, and electronic voice mail. The
next page illustrates a few of these varied applications.
DSP Applications
Space: space photograph enhancement; data compression; intelligent sensory analysis by remote space probes.
Medical: diagnostic imaging (CT, MRI, ultrasound, and others); electrocardiogram analysis; medical image storage/retrieval.
Commercial: image and sound compression for multimedia presentation; movie special effects; video conference calling.
Telephone: voice and data compression; echo reduction; signal multiplexing; filtering.
Military: radar; sonar; ordnance guidance; secure communication.
Industrial: oil and mineral prospecting; process monitoring & control; nondestructive testing; CAD and design tools.
This recent history is more than a curiosity; it has a tremendous impact on your ability to
learn and use DSP. Suppose you encounter a DSP problem, and turn to textbooks or other
publications to find a solution. What you will typically find is page after page of equations,
obscure mathematical symbols, and unfamiliar terminology. It's a nightmare! Such material
is intended for a very specialized audience. State-of-the-art researchers need this kind of
detailed mathematics to understand the theoretical implications of the work.
As you go through each application, notice that DSP is very interdisciplinary, relying on the
technical work in many adjacent fields. As Fig. 1-2 suggests, the borders between DSP and
other technical disciplines are not sharp and well defined, but rather fuzzy and
overlapping. If you want to specialize in DSP, these are the allied areas you will also need to
study.
III. DSP in Telephone Applications:
III.I. Multiplexing: Multiplexing is the process of combining multiple signals into one signal
over a shared medium, from which the individual signals can be recovered.
III.II. Compression: When a voice signal is digitized at 8000 samples/sec, most of the
digital information is redundant. That is, the information carried by any one sample is
largely duplicated by the neighboring samples. To avoid duplication we use a data
compression algorithm.
III.III. Echo Control: Echoes are a serious problem in long-distance telephone
connections. When you speak into a telephone, a signal representing your voice travels to
the connecting receiver, where a portion of it returns as an echo. DSP deals with this
problem by means of adaptive filters that estimate the echo and subtract it from the
returned signal (echo cancellation).
III.IV. Speech Generation and Recognition: Speech generation and recognition are used
to communicate between humans and machines. Rather than using your hands and eyes,
you use your mouth and ears. This is very convenient when your hands and eyes should be
doing something else, such as: driving a car, performing surgery, or (unfortunately) firing
your weapons at the enemy. Electronic devices built around DSP are what make this work well.
IV. DSP in Military Applications: The military of every country has its own
communication network, which is usually much more technically sophisticated than the
civilian network. Examples of such applications are radar signal detection, sonar detection,
cryptology and cryptanalysis, electronic warfare systems, etc.
IV.II. Sonar: Sonar is an acronym for SOund NAvigation and Ranging. It is divided into two
categories, active and passive. In active sonar, sound pulses between 2 kHz and 40 kHz are
transmitted into the water, and the resulting echoes are detected and analyzed. Uses of
active sonar include: detection & localization of undersea bodies, navigation,
communication, and mapping the sea floor. A maximum operating range of 10 to 100
kilometers is typical. In comparison, passive sonar simply listens to underwater sounds,
which include: natural turbulence, marine life, and mechanical sounds from submarines
and surface vessels. Since passive sonar emits no energy, it is ideal for covert operations.
You want to detect the other guy, without him detecting you. The most important
application of passive sonar is in military surveillance systems that detect and track
submarines. Passive sonar typically uses lower frequencies than active sonar because they
propagate through the water with less absorption. Detection ranges can be thousands of
kilometers.
V. Industrial and Petroleum Applications: As early as the 1920s, geophysicists
discovered that the structure of the earth's crust could be probed with sound. Prospectors
could set off an explosion and record the echoes from boundary layers more than ten
kilometers below the surface. These echo seismograms were interpreted by eye to
map the subsurface structure. The reflection seismic method rapidly became the primary
method for locating petroleum and mineral deposits, and remains so today.
In the ideal case, a sound pulse sent into the ground produces a single echo for each
boundary layer the pulse passes through. Unfortunately, the situation is not usually this
simple. Each echo returning to the surface must pass through all the other boundary
layers above where it originated. This can result in the echo bouncing between layers,
giving rise to echoes of echoes being detected at the surface. These secondary echoes can
make the detected signal very complicated and difficult to interpret. Digital Signal
Processing has been widely used since the 1960s to isolate the primary from the secondary
echoes in reflection seismograms. How did the early geophysicists manage without DSP?
The answer is simple: they looked in easy places, where multiple reflections were
minimized. DSP allows oil to be found in difficult locations, such as under the ocean.
In 1895, Wilhelm Conrad Röntgen discovered that x-rays could pass through substantial
amounts of matter. Medicine was revolutionized by the ability to look inside the living human body.
Medical x-ray systems spread throughout the world in only a few years. In spite of its
obvious success, medical x-ray imaging was limited by four problems until DSP and related
techniques came along in the 1970s. First, overlapping structures in the body can hide
behind each other. For example, portions of the heart might not be visible behind the ribs.
Second, it is not always possible to distinguish between similar tissues. For example, it
may be possible to separate bone from soft tissue, but not to distinguish a tumor from the liver.
Third, x-ray images show anatomy, the body's structure, and not physiology, the body's
operation. The x-ray image of a living person looks exactly like the x-ray image of a dead
one! Fourth, x-ray exposure can cause cancer, requiring it to be used sparingly and only
with proper justification.
The problem of overlapping structures was solved in 1971 with the introduction of the first
computed tomography scanner (formerly called computed axial tomography, or CAT
scanner). Computed tomography (CT) is a classic example of Digital Signal Processing. X-
rays from many directions are passed through the section of the patient's body being
examined. Instead of simply forming images with the detected x-rays, the signals are
converted into digital data and stored in a computer. The information is then used to
calculate images that appear to be slices through the body. These images show much
greater detail than conventional techniques, allowing significantly better diagnosis and
treatment. The impact of CT was nearly as large as the original introduction of x-ray
imaging itself. Within only a few years, every major hospital in the world had access to a CT
scanner. In 1979, two of CT's principal contributors, Godfrey N. Hounsfield and Allan M.
Cormack, shared the Nobel Prize in Medicine. That's good DSP!
The last three x-ray problems have been solved by using penetrating energy other than x-
rays, such as radio and sound waves. DSP plays a key role in all these techniques. For
example, Magnetic Resonance Imaging (MRI) uses magnetic fields in conjunction with radio
waves to probe the interior of the human body. Properly adjusting the strength and
frequency of the fields causes the atomic nuclei in a localized region of the body to resonate
between quantum energy states. This resonance results in the emission of a secondary
radio wave, detected with an antenna placed near the body. The strength and other
characteristics of this detected signal provide information about the localized region in
resonance. Adjustment of the magnetic field allows the resonance region to be scanned
throughout the body, mapping the internal structure. This information is usually presented
as images, just as in computed tomography. Besides providing excellent discrimination
between different types of soft tissue, MRI can provide information about physiology, such
as blood flow through arteries. MRI relies totally on Digital Signal Processing techniques,
and could not be implemented without them.
VI. Signal Processing in Space Applications: Sometimes, you just have to make the most
out of a bad picture. This is frequently the case with images taken from unmanned
satellites and space exploration vehicles. No one is going to send a repairman to Mars just
to tweak the knobs on a camera! DSP can improve the quality of images taken under
extremely unfavorable conditions in several ways: brightness and contrast adjustment,
edge detection, noise reduction, focus adjustment, motion blur reduction, etc. Images that
have spatial distortion, such as encountered when a flat image is taken of a spherical
planet, can also be warped into a correct representation. Many individual images can also
be combined into a single database, allowing the information to be displayed in unique
ways; for example, a video sequence simulating an aerial flight over the surface of a
distant planet.
VII. Signal Processing in Commercial Imaging Products: The large information content
in images is a problem for systems sold in mass quantity to the general public. Commercial
systems must be cheap, and this doesn't mesh well with large memories and high data
transfer rates. One answer to this dilemma is image compression. Just as with voice
signals, images contain a tremendous amount of redundant information, and can be run
through algorithms that reduce the number of bits needed to represent them. Television
and other moving pictures are especially suitable for compression, since most of the image
remains the same from frame to frame. Commercial imaging products that take advantage
of this technology include: video telephones, computer programs that display moving
pictures, and digital television.
VIII. Organization of this Book: For most practical systems, input and output signals are
continuous and these signals can be processed using continuous systems. However, due to
advances in digital systems technology and numerical algorithms, it is advantageous to
process continuous signals using digital systems (systems using digital devices) by
converting the input signal into a digital signal. Therefore, the study of both continuous
and digital systems is required. As most real systems are continuous and the concepts are
relatively easier to understand, we describe analog signals and systems first, immediately
followed by the corresponding description of digital signals and systems.
In this book, many illustrative examples are included in each chapter for easy
understanding of the fundamentals and methodologies of signals and systems. An
attractive feature of this book is the inclusion of MATLAB-based examples with codes to
encourage readers to implement exercises on their personal computers in order to become
confident with the fundamentals and to gain more insight into signals and systems.
This book is divided into 10 chapters. Chapter 1 presents an introduction to signals and
systems with basic classification of signals, elementary operations on signals, and some
examples of signals and systems. Chapter 2 introduces linear time-invariant systems.
Chapter 3 gives Laplace transform analysis of continuous-time signals and systems. The
z-transform analysis of discrete-time signals and systems is covered in Chapter 4.
Chapter 5 deals with the Fourier transform and analysis of continuous-time signals and
systems. Chapter 6 deals with the discrete-time Fourier transform and analysis of
discrete-time signals and systems. The Fast Fourier Transform, implementation of linear
systems, and state-space representation of LTI systems are discussed in Chapters 7 and 8.
Sampling theory, reconstruction of a band-limited signal from its samples, and
analog/digital and digital/analog conversion are discussed in Chapter 9. Lastly, in Chapter
10 ideal continuous-time filters, practical (analog and digital) filter approximations, design
methodologies, and the design of special classes of filters are briefly detailed.
CHAPTER I:
Introduction to
Signals and Systems
I. Introduction
II. Elementary Operations on Signals
III. Classification of Signals
1. Real, Complex, Even and odd signals
2. Continuous-time and discrete-time signals (Analog and digital signals)
3. Periodic and aperiodic signals
4. Energy and power signals (Measuring signals)
5. Deterministic and probabilistic signals
IV. Some Useful Signal Models
IV.I. The Step Signal (Heaviside Function)
IV.II The Impulse Signal
IV.III The Ramp Signal
IV.IV The Gate-Signal (П-Signal)
IV.V The Sign Signal
IV.VI The Exponential Signal
IV.VII The Sinusoidal Signals
V. Solved Problems
A signal is a function of one or more variables that conveys information about some
(usually physical) phenomenon. Signals can be classified based on the number of
independent variables with which they are associated. A signal that is a function of only
one variable is said to be one dimensional (1D). Similarly, a signal that is a function of two
or more variables is said to be multidimensional. A signal can also be classified on the
basis of whether it is a function of continuous or discrete variables. A system is an entity
that processes one or more input signals in order to produce one or more output signals.
Such an entity is represented mathematically by a system of one or more equations. In a
communication system, the input might represent the message to be sent, and the output
might represent the received message. In a robotics system, the input might represent the
desired position of the end effector (e.g., gripper), while the output could represent the
actual position.
Introduction to Signals
and Systems
I. Introduction: It is natural and logical, when dealing with a new topic, to ask such
questions as: what is it? what does it stand for? This is especially true when the matter is a
science with its own concepts and terminologies. Accordingly, it can be said that:
A Signal is a term used to denote an information-carrying quantity being transmitted to or
from an entity such as a device, instrument, or physiological source. Mathematically
speaking, it is a function (or a sequence) of an independent variable 𝑡, typically representing
time, that represents a physical quantity or variable and carries information. Thus, a
continuous-time signal is denoted 𝒙(𝑡) and a discrete-time signal is denoted 𝒙[𝑛]. We can
also represent a signal by a waveform.
Remark: All continuous signals that are functions of time are continuous-time, but not all
continuous-time signals are continuous functions of time.
A system whose output signal is 𝑦(𝑡) and input signal is 𝑢(𝑡) is said to be a transformation of
the signal space 𝑋 to another signal space 𝑌 such that 𝑦(𝑡) = 𝑻(𝑢(𝑡)). 𝑻 is called an operator
because the output function 𝑦(𝑡) can be viewed as being produced by an operation on the
input function 𝑢(𝑡). Another equivalent way of thinking about a system is to consider the
input 𝑢(𝑡) being mapped into the output 𝑦(𝑡). This viewpoint can be conceptualized as shown
in the figure, where the set of all possible inputs is denoted by 𝑋 and the set of all possible
outputs is denoted by 𝑌.
Computer Example: write a short MATLAB program to validate 𝑦(𝑡) = 𝑥(𝑡) ± 𝑧(𝑡)
t = 0:0.01:2;          % time grid (assumed setup)
x1 = sin(2*pi*t);      % first input signal (an assumed example)
x2 = cos(2*pi*t);      % second input signal (an assumed example)
for k=1:1:length(t)
x3(k) = x1(k) + x2(k); x4(k) = x1(k) - x2(k);
end
subplot(411)
plot(t,x1,'k','linewidth',3)
grid on
subplot(412)
plot(t,x2,'r','linewidth',3)
grid on
subplot(413)
plot(t, x3 ,'b','linewidth',3)
grid on
subplot(414)
plot(t, x4 ,'g','linewidth',3)
grid on
Computer Example: write a short MATLAB code to validate 𝑦₁,₂(𝑡) = (𝑒^{2𝜋𝑖𝑡} ± 𝑒^{−2𝜋𝑖𝑡})/2
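A minimal sketch (the time grid is an assumption) verifying Euler's identities numerically: the symmetric combination gives cos(2𝜋𝑡) and the antisymmetric one gives 𝑖 sin(2𝜋𝑡).
t = 0:0.01:2;                                 % assumed time grid
y1 = (exp(2*pi*1i*t) + exp(-2*pi*1i*t))/2;    % should equal cos(2*pi*t)
y2 = (exp(2*pi*1i*t) - exp(-2*pi*1i*t))/2;    % should equal 1i*sin(2*pi*t)
err1 = max(abs(y1 - cos(2*pi*t)))             % ~ 0 up to round-off
err2 = max(abs(y2 - 1i*sin(2*pi*t)))          % ~ 0 up to round-off
subplot(211), plot(t,real(y1),'k','linewidth',3), grid on
subplot(212), plot(t,imag(y2),'r','linewidth',3), grid on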
Even and Odd Signals: A signal is even if it is identical to its time-reversed counterpart:
𝑥(𝑡) = 𝑥(−𝑡). A signal is odd if 𝑥(𝑡) = −𝑥(−𝑡). An odd signal must be 0 at 𝑡 = 0; in other
words, an odd signal passes through the origin. Any signal 𝑥(𝑡) or 𝑥[𝑛] can be expressed as
the sum of two signals, one even and one odd: 𝑥(𝑡) = 𝑥ₑ(𝑡) + 𝑥ₒ(𝑡), with
𝑥ₑ(𝑡) = (𝑥(𝑡) + 𝑥(−𝑡))/2 and 𝑥ₒ(𝑡) = (𝑥(𝑡) − 𝑥(−𝑡))/2.
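As a quick illustration, here is a short sketch (with an assumed test signal 𝑥(𝑡) = 𝑒^{−𝑡}𝑢(𝑡)) that computes the even and odd parts numerically on a symmetric time grid and checks that they reconstruct 𝑥(𝑡):
t  = -5:0.01:5;                  % symmetric grid, so x(-t) is just a flip
x  = exp(-t).*(t >= 0);          % assumed test signal
xr = fliplr(x);                  % x(-t)
xe = (x + xr)/2;                 % even part
xo = (x - xr)/2;                 % odd part
max(abs(x - (xe + xo)))          % ~ 0: the two parts reconstruct x
plot(t,xe,'b',t,xo,'r','linewidth',2), grid on
legend('x_e(t)','x_o(t)')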
Periodic and Aperiodic Signals: A signal that does not repeat its pattern after any interval
of time is called aperiodic (non-periodic). A signal that repeats its pattern over a period is
called periodic. A continuous-time signal 𝑥(𝑡) is said to be periodic with period 𝑇 if there is
𝑇 ∈ ℝ⁺ for which 𝑥(𝑡 + 𝑚𝑇) = 𝑥(𝑡) for all 𝑡 ∈ ℝ and 𝑚 ∈ ℤ. The fundamental period 𝑇₀ of 𝑥(𝑡)
is the smallest positive value of 𝑇 for which this equation holds.
Remark: Note that a sequence 𝑥[𝑛] obtained by uniform sampling of a periodic continuous-
time signal may not be periodic. Also, a discrete-time sinusoid is not necessarily periodic.
Energy and power signals: The size of any entity is a number that indicates the
largeness or strength of that entity. Generally speaking, the signal amplitude varies with
time. How can a signal that exists over a certain time interval with varying amplitude be
measured by one number that will indicate the signal size or signal strength? Such a
measure must consider not only the signal amplitude, but also its duration. In this
manner, we may consider the area under a signal 𝑓(𝑡) as a possible measure of its size,
because it takes account of not only the amplitude, but also the duration. However, this
will be a defective measure because 𝑓(𝑡) could be a large signal, yet its positive and
negative areas could cancel each other, indicating a signal of small size. This difficulty can
be corrected by defining the signal size as the area under 𝑓²(𝑡), which is always positive. We
call this measure the signal energy 𝐸𝑥, defined (for a real signal) as
𝐸𝑥 = ∫_{−∞}^{+∞} |𝑥(𝑡)|² 𝑑𝑡  (continuous time)    and    𝐸𝑥 = ∑_{𝑛=−∞}^{+∞} |𝑥[𝑛]|²  (discrete time)
The signal energy 𝐸𝑥 must be finite for it to be a meaningful measure of the signal size. A
necessary condition for the energy to be finite is that the signal amplitude goes to zero as
time increases (𝑡 → ∞). For a signal 𝑥(𝑡) that does not satisfy this, we define instead its
power, the time-averaged energy:
𝑃𝑥 = lim_{𝑇→∞} (1/𝑇) ∫_{−𝑇/2}^{+𝑇/2} |𝑥(𝑡)|² 𝑑𝑡
Remark: 1 Energy signals are time-limited while power signals can exist over infinite time.
Non-periodic signals are typically energy signals while power signals are (almost always)
periodic. The power of an energy signal is zero and the energy of a power signal is infinite.
Remark: 2 If a signal is a power signal, then it cannot be an energy signal, and vice versa;
power and energy signals are mutually exclusive. A signal may be neither a power nor an
energy signal, if neither of the two conditions is met. Almost all periodic functions of
practical interest are power signals.
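Computer Example (a numerical sketch; the value 𝑎 = 1 and the finite window standing in for [0, ∞) are assumptions): check that 𝑥(𝑡) = 𝑒^{−𝑎𝑡}𝑢(𝑡) is an energy signal with 𝐸 = 1/(2𝑎):
a = 1;                            % assumed decay rate
t = 0:0.001:20;                   % [0, 20] approximates [0, inf)
x = exp(-a*t);
E = trapz(t, abs(x).^2)           % ~ 0.5 = 1/(2a): finite energy
P = E/t(end)                      % average power over the window -> 0 as the window grows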
Continuous (Analog) & Discrete (Digital) Signals: A signal 𝑥(𝑡) is a continuous-time
signal if 𝑡 is a continuous variable. If the signal is defined only at discrete times, then it is a
discrete-time signal, denoted 𝑥[𝑛], where 𝑛 is an integer. A discrete-time signal 𝑥[𝑛] may
represent a phenomenon for which the independent variable is inherently discrete, such as
the daily closing value of a stock price, or it may be obtained by sampling a continuous-time
signal 𝑥(𝑡) at 𝑡 = 𝑛𝑇𝑠, where 𝑇𝑠 is the sampling period. To convert an analog signal into a
digital signal, the analog signal must be sampled and quantized; therefore a digital signal is
an amplitude-quantized discrete-time signal.
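Computer Example (a minimal sketch; the sampling period and quantization step are assumptions): first sample a sine wave in time, then quantize its amplitude to obtain a digital signal:
Ts = 0.05;                        % sampling period (assumption)
n = 0:20;                         % sample indices
xs = sin(2*pi*n*Ts);              % sampled (discrete-time) signal
Delta = 0.25;                     % quantization step (assumption)
xq = Delta*round(xs/Delta);       % quantized (digital) signal
stem(n,xs,'b'), hold on, stem(n,xq,'r'), grid on
legend('sampled x[n]','quantized x_q[n]')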
IV. Some Useful Signal Models In the area of signals and systems, the step, the impulse,
the ramp and the exponential functions are very useful. They not only serve as a basis for
representing other signals, but their use can simplify many aspects of the signals and
systems. Therefore in this section we present definitions of the following basic control
signals: the step function, the gate (window) function, the impulse function, the ramp
function, the exponential function, and the sinusoidal function.
IV.I The Step Signal (Heaviside Function): Heaviside step function, or the unit step
function, usually denoted by ℍ or 𝜃 (but sometimes 𝑢, 1 or 𝟙), is a discontinuous function,
named after Oliver Heaviside (1850–1925), whose value is zero for negative argument and
one for positive argument. The function was originally developed in operational calculus for
the solution of differential equations, where it represents a signal that switches on at a
specified time and stays switched on indefinitely. Oliver Heaviside, who developed the
operational calculus as a tool in the analysis of telegraphic communications, represented
the function as 1.
Existence of Step functions in the real world: In real-life situations, a step signal can be
viewed as a constant force of magnitude |𝐴| Newtons applied at time zero to a certain object
and held for a long time. In another situation, 𝐴𝑢(𝑡) can be a voltage of constant magnitude
applied to a load resistor 𝑅 at time 𝑡 = 0.
Fourier: 𝑢(𝑡) = lim_{𝜀→0⁺} (1/2𝜋𝑖) ∫_{−∞}^{+∞} 𝑒^{𝑖𝑡𝜔}/(𝜔 − 𝑖𝜀) 𝑑𝜔    Laplace: 𝑢(𝑡) = lim_{𝜀→0⁺} (1/2𝜋𝑖) ∫_{𝜀−𝑗∞}^{𝜀+𝑗∞} 𝑒^{𝑡𝑠}/𝑠 𝑑𝑠
The simplest definition of the Heaviside function is as the derivative of the ramp function or
in term of the sign (signum) function:
𝑢(𝑡) := (𝑑/𝑑𝑡) max{𝑡, 0}  for 𝑡 ≠ 0    and    𝑢(𝑡) := 1/2 + (1/2) sgn(𝑡)
The Heaviside function can also be defined as the integral of the Dirac delta function:
𝑢′ = 𝛿. This is sometimes written as
𝑢(𝑡) := ∫_{−∞}^{𝑡} 𝛿(𝑥) 𝑑𝑥    or    𝛿(𝑡) = 𝑑𝑢(𝑡)/𝑑𝑡
Later we will see the meaning & proof of these last equations.
clear all
clc
a = [0.5 1 2 4 8 16 32];   % slope values (an assumed choice; larger a gives a sharper step)
for i=1:7
x=-2:0.01:2;
y=(1./(1+exp(-2.*a(i).*x)));
plot(x,y,'-','linewidth',3)
hold on
grid on
y2=1/2 +(1/2)*tanh(a(i).*x);
plot(x,y2,'--','linewidth',3)
hold on
pause(0.5)
end
IV.II The Impulse Signal: Among the limit representations of the impulse is
𝛿(𝑡 − 𝑡₀) = lim_{𝑇→∞} (1/𝜋) ∫₀^{𝑇} cos(𝜔(𝑡 − 𝑡₀)) 𝑑𝜔    (Fourier II)
Computer Example: write a short program to plot a smooth approximation of the impulse
signal using the Gauss distribution.
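A minimal sketch (the width values 𝑠 are assumptions): the zero-mean Gauss density g(𝑡) = 𝑒^{−𝑡²/2𝑠²}/(𝑠√2𝜋) has unit area for every 𝑠 and concentrates at 𝑡 = 0 as 𝑠 → 0, so it approximates 𝛿(𝑡):
clear all
clc
t = -2:0.01:2;
s = [1 0.5 0.25 0.1 0.05];        % shrinking widths (assumed values)
for i = 1:length(s)
g = exp(-t.^2/(2*s(i)^2))/(s(i)*sqrt(2*pi));   % unit-area Gaussian
plot(t,g,'-','linewidth',3)
hold on
grid on
pause(0.5)
end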
Unit Impulse as a Generalized Function: The definition of the unit impulse function
given above is not mathematically rigorous, which leads to serious difficulties. First, the
impulse function does not define a unique function: for example, it can be shown that
𝛿(𝑡) + 𝑑𝛿/𝑑𝑡 also satisfies the definition. Moreover, 𝛿(𝑡) is not even a true function in the
ordinary sense. An ordinary function is specified by its values for all time 𝑡. The impulse
function is zero everywhere except at 𝑡 = 0, and at this, the only interesting part of its
range, it is undefined. These difficulties are resolved by defining the impulse as a generalized function
rather than an ordinary function. A generalized function is defined by its effect on other
functions instead of by its value at every instant of time.
In this approach the impulse function is defined by the sampling property (its effect on
other functions named 〈𝜙(𝑡), 𝛿(𝑡)〉 ). We say nothing about what the impulse function is or
what it looks like. Instead, the impulse function is defined in terms of its effect on a test
function 𝜙(𝑡). We define a unit impulse as a function for which the area under its product
with a function 𝜙(𝑡) is equal to the value of the function 𝜙(𝑡) at the instant where the
impulse is located. It is assumed that 𝜙(𝑡) is continuous at the location of the impulse.
⟨𝜙(𝑡), 𝛿(𝑡)⟩ = ∫_{−∞}^{+∞} 𝜙(𝑡)𝛿(𝑡) 𝑑𝑡 = 𝜙(0) ∫_{−∞}^{+∞} 𝛿(𝑡) 𝑑𝑡 = 𝜙(0)
Recall that the sampling property is the consequence of the classical (Dirac) definition of
impulse. In contrast, the sampling property defines the impulse function in the generalized
function approach.
This result shows that the practical derivative 𝑑𝑢/𝑑𝑡 satisfies the sampling property of 𝛿(𝑡).
Therefore it is an impulse 𝛿(𝑡) in the generalized sense; that is, 𝑑𝑢/𝑑𝑡 = 𝛿(𝑡). Consequently
∫_{−∞}^{𝑡} 𝛿(𝜏) 𝑑𝜏 = 𝑢(𝑡) = { 0, 𝑡 < 0 ;  1, 𝑡 ≥ 0 }
Practical derivative of 𝑢(𝑡) by the graphical approach:
𝑢_𝜏(𝑡) = { 0, 𝑡 < −𝜏/2 ;  𝑡/𝜏 + 1/2, −𝜏/2 ≤ 𝑡 ≤ 𝜏/2 ;  1, 𝑡 > 𝜏/2 }  ⟹  𝑢(𝑡) = lim_{𝜏→0} 𝑢_𝜏(𝑡) = { 0, 𝑡 < 0 ;  1/2, 𝑡 = 0 ;  1, 𝑡 > 0 }
𝑑𝑢(𝑡)/𝑑𝑡 = lim_{𝜏→0} 𝑑𝑢_𝜏(𝑡)/𝑑𝑡 = lim_{𝜏→0} { 0, 𝑡 < −𝜏/2 ;  1/𝜏, −𝜏/2 < 𝑡 < 𝜏/2 ;  0, 𝑡 > 𝜏/2 } = 𝛿(𝑡) = { ∞, 𝑡 = 0 ;  0, 𝑡 ≠ 0 }
Remarks: The last derivative of 𝑢(𝑡) is called operational derivative or distributional
derivative. Derivatives of impulse function can also be defined as generalized functions.
Exercise: 01 Let us demonstrate the validity of the last (thirteenth) property. We know that
the first derivative of 𝑥(𝑡)𝛿(𝑡) is given by: [𝑥(𝑡)𝛿(𝑡)]′ = 𝑥̇(𝑡)𝛿(𝑡) + 𝑥(𝑡)𝛿̇(𝑡). Integrating both
sides of this equation, we get
∫_{−∞}^{+∞} 𝛿̇(𝑡)𝑥(𝑡) 𝑑𝑡 = [𝑥(𝑡)𝛿(𝑡)]_{−∞}^{+∞} − ∫_{−∞}^{+∞} 𝑥̇(𝑡)𝛿(𝑡) 𝑑𝑡
We know that [𝑥(𝑡)𝛿(𝑡)]_{−∞}^{+∞} = 0, therefore ∫_{−∞}^{+∞} 𝛿̇(𝑡)𝑥(𝑡) 𝑑𝑡 = −∫_{−∞}^{+∞} 𝑥̇(𝑡)𝛿(𝑡) 𝑑𝑡
Now, take the 2nd derivative of 𝑥(𝑡)𝛿(𝑡): [𝑥(𝑡)𝛿(𝑡)]″ = 𝑥̈(𝑡)𝛿(𝑡) + 2𝑥̇(𝑡)𝛿̇(𝑡) + 𝑥(𝑡)𝛿̈(𝑡).
Integrating both sides of this equation, we get
∫_{−∞}^{+∞} 𝛿̈(𝑡)𝑥(𝑡) 𝑑𝑡 = [𝑑²/𝑑𝑡² (𝑥(𝑡)𝛿(𝑡))]_{−∞}^{+∞} − ∫_{−∞}^{+∞} (𝑥̈(𝑡)𝛿(𝑡) + 2𝑥̇(𝑡)𝛿̇(𝑡)) 𝑑𝑡
Exercise: 02 Demonstrate the truth of 𝑡𝛿′(𝑡) = −𝛿(𝑡). To prove it, take the derivative of
𝑡𝛿(𝑡), which is 𝑑(𝑡𝛿(𝑡))/𝑑𝑡 = 𝛿(𝑡) + 𝑡𝛿′(𝑡); but 𝑡𝛿(𝑡) = 0 ⟺ 𝑡𝛿′(𝑡) = −𝛿(𝑡)
Exercise: 03 Prove that 𝛿(𝑡) is an even function and 𝛿′(𝑡) is an odd function.
𝟙. Let us start with 𝛿(𝑡) being even; to do this we propose to prove |𝑎|𝛿(𝑎𝑡) = 𝛿(𝑡).
∫_{−∞}^{+∞} 𝛿(𝑎𝑡)𝑥(𝑡) 𝑑𝑡 = (1/|𝑎|) ∫_{−∞}^{+∞} 𝛿(𝜉)𝑥(𝜉/𝑎) 𝑑𝜉 = (1/|𝑎|) 𝑥(0)
Substituting 𝑥(0) = ∫_{−∞}^{+∞} 𝛿(𝑡)𝑥(𝑡) 𝑑𝑡 gives ∫_{−∞}^{+∞} 𝛿(𝑎𝑡)𝑥(𝑡) 𝑑𝑡 = (1/|𝑎|) ∫_{−∞}^{+∞} 𝛿(𝑡)𝑥(𝑡) 𝑑𝑡, i.e. 𝛿(𝑎𝑡) = 𝛿(𝑡)/|𝑎|
Taking the magnitude of 𝑎 is necessary, as is clear if the substitution is carried out with
different signs of 𝑎. |𝑎|𝛿(𝑎𝑡) = 𝛿(𝑡) with 𝑎 = −1 implies that 𝛿(−𝑡) = 𝛿(𝑡).
𝟚. To prove 𝛿′(𝑡) is an odd function, we use ∫_{−∞}^{+∞} 𝛿̇(𝑡)𝑥(𝑡) 𝑑𝑡 = −∫_{−∞}^{+∞} 𝑥̇(𝑡)𝛿(𝑡) 𝑑𝑡
With 𝑥(𝑡) = 1, ∫_{−∞}^{+∞} 𝛿̇(𝑡) 𝑑𝑡 = 0, consistent with 𝛿′(𝑡) being an odd function.
𝐄𝐱𝐞𝐫𝐜𝐢𝐬𝐞: 𝟎𝟒 Prove that (𝑡ⁿ/𝑛!) 𝛿⁽ⁿ⁾(𝑡) = (−1)ⁿ 𝛿(𝑡). In Exercise 01 we obtained
∫_{−∞}^{+∞} 𝑥(𝑡)𝛿⁽ⁿ⁾(𝑡) 𝑑𝑡 = (−1)ⁿ ∫_{−∞}^{+∞} 𝑥⁽ⁿ⁾(𝑡)𝛿(𝑡) 𝑑𝑡
If we take 𝑥(𝑡) = 𝑡ⁿ we get ∫_{−∞}^{+∞} 𝑡ⁿ 𝛿⁽ⁿ⁾(𝑡) 𝑑𝑡 = (−1)ⁿ ∫_{−∞}^{+∞} 𝑛! 𝛿(𝑡) 𝑑𝑡, which ends the proof.
As a consequence we deduce that ∫_{−∞}^{+∞} (−1)ⁿ (𝑡ⁿ/𝑛!) 𝛿⁽ⁿ⁾(𝑡) 𝑑𝑡 = 1
𝐄𝐱𝐞𝐫𝐜𝐢𝐬𝐞: 𝟎𝟓 Prove that ∫_{−∞}^{+∞} 𝛿(𝑡 − 𝑎)𝛿(𝑡 − 𝑏) 𝑑𝑡 = 𝛿(𝑎 − 𝑏)
Existence of the Dirac function in the real world: Again, in real life, this signal can represent
a situation where a person hits an object with a hammer with a force of 𝐴 Newtons for a
very short period of time (picoseconds). We sometimes refer to this kind of signal as a
shock. In another real-life situation, the impulse signal can be as simple as closing and
opening an electrical switch in a very short time. Another situation where a spring-carrying
mass is hit upward can be seen as an impulsive force. You may realize that it is impossible
to generate a pure impulse signal for zero duration and infinite magnitude. To create an
approximation of an impulse, we can generate a pulse signal where the duration of the
signal is very short compared to the response of the system.
Properties of the Dirac function (Discrete-time Case): In this case 𝛿[𝑛] = { 1, 𝑛 = 0 ;  0, 𝑛 ≠ 0 }
❶ ∑_{𝑘=−∞}^{+∞} 𝑥[𝑘]𝛿[𝑘] = 𝑥[0]   ❷ ∑_{𝑘=−∞}^{+∞} 𝑥[𝑘]𝛿[𝑘 − 𝑘₀] = 𝑥[𝑘₀]   ❸ 𝑥[𝑘]𝛿[𝑘] = 𝑥[0]𝛿[𝑘]
❹ 𝑥[𝑘]𝛿[𝑘 − 𝑘₀] = 𝑥[𝑘₀]𝛿[𝑘 − 𝑘₀]   ❺ 𝛿[𝑘] = 𝛿[−𝑘]   ❻ 𝑢[𝑛] = ∑_{𝑘=0}^{+∞} 𝛿[𝑛 − 𝑘]
❼ 𝛿[𝑛] = 𝑢[𝑛] − 𝑢[𝑛 − 1]   ❽ ∑_{𝑘=−∞}^{+∞} 𝛿[𝑛 − 𝑘] = 1 = 𝑢[𝑛] + 𝑢[−𝑛] − 𝛿[𝑛]
𝐄𝐱𝐞𝐫𝐜𝐢𝐬𝐞: prove that ❶ 2𝛿[𝑛] = −1 + 2𝑢[𝑛] + (𝑢[−𝑛] − 𝑢[𝑛 − 1]),   ❷ 𝑢[𝑛] = ∑_{𝑚=−∞}^{𝑛} 𝛿[𝑚]
IV.III The Ramp Signal: The unit ramp is defined by 𝑅(𝑡) = 𝑡𝑢(𝑡) = max{𝑡, 0}, and it satisfies:
▪ 𝑑𝑅(𝑡)/𝑑𝑡 = 𝑢(𝑡),  ▪ 𝑑²𝑅(𝑡)/𝑑𝑡² = 𝛿(𝑡),  ▪ 𝑅(𝑅(𝑡)) = 𝑅(𝑡),  ▪ 𝑅(𝑡) = 𝑢(𝑡) ⋆ 𝑢(𝑡)
Existence of Ramp functions in the real world: In real-life situations this signal can be
viewed as a signal that increases linearly with time. An example is where a person starts
applying a force to an object at time 𝑡 = 0, and keeps pressing the object with increasing
force for a long time, the rate of increase of the applied force being constant. Consider
another situation where a radar system, an anti-aircraft gun, and an incoming jet interact.
The radar antenna can provide an angular position input. The motion of the jet forces this
angle to change uniformly with time. This produces a ramp input signal to the anti-aircraft
gun, since it will have to track the jet.
IV.IV The Gate-Signal (П-Signal): The rectangular function (also known as the rectangle
function, rect function, Pi function, gate function, unit pulse, or the normalized boxcar
function) is defined as
rect(𝑡) := ∏(𝑡) = { 0, if |𝑡| > 1/2 ;  1/2, if |𝑡| = 1/2 ;  1, if |𝑡| < 1/2 }
The pulse function may also be expressed as a limit of a rational function and/or step
functions:
rect(𝑡) := lim_{𝑛→∞, 𝑛∈ℤ} 1/(1 + (2𝑡)²ⁿ) := 𝑢(𝑡 + 0.5) − 𝑢(𝑡 − 0.5)
Remark: The unit gate function is usually used to zero all values of another function,
outside a certain time interval.
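Computer Example (a short sketch; the exponents 𝑛 are assumed values): watch the rational approximation 1/(1 + (2𝑡)²ⁿ) approach the unit gate as 𝑛 grows:
t = -1.5:0.001:1.5;
for n = [1 2 4 8 16 32]           % assumed sequence of exponents
plot(t, 1./(1 + (2*t).^(2*n)),'-','linewidth',2)
hold on, grid on, pause(0.5)
end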
Such a pulse's origination may be a natural occurrence or man-made and can occur as a
radiated, electric, or magnetic field or a conducted electric current, depending on the
source.
IV.VI The Exponential Signal: The exponential function is the function 𝑓(𝑡) = 𝐴𝑒^{𝑎𝑡} and its
graphical representation is shown in the figure.
Exercise:
𝑥[𝑛] = 𝑒^{𝑗(6𝜋/7)𝑛} ⟹ Ω₀ = 6𝜋/7 ⟹ Ω₀/2𝜋 = 3/7 (rational) ⟹ 𝑥[𝑛] is a periodic signal
𝑥[𝑛] = 𝑒^{𝑗√2𝑛} ⟹ Ω₀/2𝜋 = √2/(2𝜋) ≠ rational number ⟹ 𝑥[𝑛] is a non-periodic signal
Remark: 01 In discrete-time case all signals separated by a frequency of 2𝜋 are identical.
Remark: 02 In the continuous-time case, if a signal is periodic then it can be represented by
a sum of complex exponential signals (the Fourier series 𝑥(𝑡) = ∑ 𝑎ₖ𝑒^{𝑗ω₀𝑘𝑡}).
clear all,
clc,
% Partial sums of the Fourier series of a square wave:
% y = sum over odd harmonics of sin((2k+1)t)/(2k+1)
y=0;
t=0:0.01:4*pi;
Nmax=8;
for k=0:Nmax
a=(2*k+1);          % odd harmonic index
y=y+sin(a*t)/a;     % accumulate the next partial sum
plot(t,y,'-','linewidth',3)
hold on, grid on, pause(0.5)
end
If the continuous-time signal 𝑥(𝑡) is periodic then it can be written as a linear combination
of complex exponentials 𝑥(𝑡) = ∑ 𝑎ₖ𝑒^{𝑗ω₀𝑘𝑡}, such that 𝜷₁ = {𝜙ₖ(𝑡) = 𝑒^{𝑗ω₀𝑘𝑡}, 𝑘 = 0, ±1, ±2, …}
is a basis. In the language of Hilbert spaces, the set of functions 𝜙ₖ(𝑡) is an orthonormal
basis for the space 𝐿² of square-integrable functions on [−𝜋, 𝜋]. In the case of discrete-time
signals the basis is 𝜷₂ = {𝜙ₖ[𝑛] = 𝑒^{𝑗𝑘(2𝜋/𝑁)𝑛}, 𝑘 = 0, ±1, ±2, …}; let us see what happens if we
look at 𝜙_{𝑘+𝑁}[𝑛] = 𝑒^{𝑗(𝑘+𝑁)(2𝜋/𝑁)𝑛} = 𝑒^{𝑗𝑘(2𝜋/𝑁)𝑛} = 𝜙ₖ[𝑛] ⟹ 𝜙_{𝑘+𝑁}[𝑛] = 𝜙ₖ[𝑛]. This equality
tells us that there exist only 𝑁 distinct signals to form a basis. In other words the basis of
discrete-time signals is 𝜷₂ = {𝜙₁[𝑛], 𝜙₂[𝑛], …, 𝜙_𝑁[𝑛]}.
Remark: 03 From the above study we notice that the periodic continuous-time signals span
an infinite-dimensional space, while the space of periodic discrete-time signals is finite
dimensional.
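A small numerical sketch (𝑁 = 8 is an assumption) supporting this remark: the 𝑁 distinct sequences 𝜙ₖ[𝑛] = 𝑒^{𝑗𝑘(2𝜋/𝑁)𝑛}, 𝑘 = 0, …, 𝑁 − 1, are mutually orthogonal over one period, so they form a finite basis:
N = 8;                             % assumed period
n = 0:N-1;
Phi = exp(1j*2*pi*(0:N-1)'*n/N);   % row k+1 holds phi_k[n]
G = Phi*Phi'/N;                    % Gram matrix of normalized inner products
disp(abs(G))                       % ~ identity matrix: orthonormal set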
Computer Example: Sinusoidal signals for both continuous time and discrete time will
become important building blocks for more general signals, and the representation using
sinusoidal signals will lead to a very powerful set of ideas for representing signals and for
analyzing an important class of systems. As an application of this type of function, consider
the Fourier series, which states that "any periodic signal can be decomposed as an infinite
linear combination of sinusoidal signals"; in this example we plot a periodic triangular wave
using a Fourier series approximation (the 1/𝑘² cosine coefficients below belong to the
triangular wave).
clear all,clc,
% Partial sum of the Fourier cosine series with A0 = 4 and
% a_k = 4*A*(1-(-1)^k)/(pi^2*k^2), nonzero only for odd k
y=4; A=4;
t=0:0.05:4*pi;
Nmax=10;
for k=1:Nmax
a=4*A*(1-(-1)^k)/((pi^2)*(k^2));
y=y + a*cos(k*t);
end
plot(t,y,'-','linewidth',3)
hold on, grid on, % pause(0.5)
The useful basic test signals (continuous-time):
𝑢(𝑡) Step signal: ■ lim_{𝑎→∞} 1/(1 + 𝑒^{−2𝑎𝑡}),  ■ (𝑑/𝑑𝑡)((𝑡 + |𝑡|)/2),  ■ lim_{𝑎→∞} (1/2 + (1/2)tanh(𝑎𝑡))
𝑟(𝑡) Ramp signal: ■ max{0, 𝑡},  ■ (𝑡 + |𝑡|)/2,  ■ ∫_{−∞}^{𝑡} 𝑢(𝜗) 𝑑𝜗
sgn(𝑡) Signum signal: ■ lim_{𝜀→0} 𝑡/√(𝑡² + 𝜀²),  ■ 𝑡/|𝑡|,  ■ 2𝑢(𝑡) − 1,  ■ 𝑑|𝑡|/𝑑𝑡
Π(𝑡) Gate signal: ■ lim_{𝑛→∞} 1/(1 + (2𝑡)²ⁿ),  ■ 𝑢(𝑡 + 𝑇) − 𝑢(𝑡 − 𝑇)
Λ(𝑡) Triangle signal: ■ 𝐴(1 − |𝑡|/𝑇),  0 ≤ |𝑡| ≤ 𝑇
The useful basic test signals (discrete-time):
𝑅[𝑛] = { 𝑛, 𝑛 ≥ 0 ;  0, 𝑛 < 0 }   𝑢[𝑛] = { 1, 𝑛 ≥ 0 ;  0, 𝑛 < 0 }   sgn[𝑛] = { 1, 𝑛 > 0 ;  0, 𝑛 = 0 ;  −1, 𝑛 < 0 }   𝛿[𝑛] = { 1, 𝑛 = 0 ;  0, 𝑛 ≠ 0 }
Solved Problems:
Exercise 1: Compute the following integrals
𝟏. 𝐼 = ∫₁^{3/2} sin(5𝑡 − 𝜃)𝛿(𝑡 − 1/2) 𝑑𝑡    𝟐. 𝐼 = ∫_{−∞}^{+∞} 𝑒^{−𝑡}𝛿′(𝑡) 𝑑𝑡
𝟑. 𝐼 = ∫₁² (3𝑡² + 1)𝛿(𝑡) 𝑑𝑡    𝟒. 𝐼 = ∫_{−∞}^{+∞} (𝑡² + cos(𝜋𝑡))𝛿(𝑡 − 1) 𝑑𝑡
Ans:
𝟏. 𝐼 = ∫₁^{3/2} sin(5𝑡 − 𝜃)𝛿(𝑡 − 1/2) 𝑑𝑡 = 0 because 𝑡 = 1/2 ∉ [1, 3/2]
𝟐. 𝐼 = ∫_{−∞}^{+∞} 𝑒^{−𝑡}𝛿′(𝑡) 𝑑𝑡 = −∫_{−∞}^{+∞} (𝑑/𝑑𝑡)(𝑒^{−𝑡})𝛿(𝑡) 𝑑𝑡 = 𝑒⁰ = 1
𝟑. 𝐼 = ∫₁² (3𝑡² + 1)𝛿(𝑡) 𝑑𝑡 = 0 because 𝑡 = 0 ∉ [1, 2]
𝟒. 𝐼 = ∫_{−∞}^{+∞} (𝑡² + cos(𝜋𝑡))𝛿(𝑡 − 1) 𝑑𝑡 = 1 + cos(𝜋) = 0
Exercise 2: Find and sketch the first derivatives of
𝟏) 𝑥(𝑡) = 𝑢(𝑡) − 𝑢(𝑡 − 𝑎), 𝑎 > 0    𝟐) 𝑥(𝑡) = sgn(𝑡) = { 1, if 𝑡 > 0 ;  −1, if 𝑡 < 0 }
𝟑) 𝑥(𝑡) = 𝑡(𝑢(𝑡) − 𝑢(𝑡 − 𝑎)), 𝑎 > 0
Ans:
𝟏) 𝑑𝑥/𝑑𝑡 = 𝛿(𝑡) − 𝛿(𝑡 − 𝑎)    𝟐) sgn(𝑡) = 2𝑢(𝑡) − 1 ⟹ 𝑑𝑥/𝑑𝑡 = 2𝛿(𝑡)
𝟑) 𝑑𝑥/𝑑𝑡 = [𝑢(𝑡) − 𝑢(𝑡 − 𝑎)] − 𝑎𝛿(𝑡 − 𝑎)
Exercise 3: A discrete-time signal 𝑥[𝑛] is shown in the figure. Sketch and label each of the
following signals.
(𝒂) 𝑥[𝑛]𝑢[1 − 𝑛];   (𝒃) 𝑥[𝑛]{𝑢[𝑛 + 2] − 𝑢[𝑛]};   (𝒄) 𝑥[𝑛]𝛿[𝑛 − 1]
Exercise 4: Determine whether the following signals are energy signals, power signals, or
neither.
(𝒂) 𝑥(𝑡) = 𝑒^{−𝑎𝑡}𝑢(𝑡), 𝑎 > 0   (𝒃) 𝑥(𝑡) = 𝐴 cos(𝜔𝑡 + 𝜃)   (𝒄) 𝑥(𝑡) = 𝑡𝑢(𝑡)
Ans:
𝒂) 𝐸 = ∫_{−∞}^{∞} |𝑒^{−𝑎𝑡}𝑢(𝑡)|² 𝑑𝑡 = ∫₀^{∞} 𝑒^{−2𝑎𝑡} 𝑑𝑡 = 1/(2𝑎) < ∞ ⟹ Energy signal
𝒃) The sinusoidal signal 𝑥(𝑡) = 𝐴 cos(𝜔𝑡 + 𝜃) is periodic with 𝑇 = 2𝜋/𝜔, so the average
power is given by:
𝑃 = (1/𝑇) ∫₀^{𝑇} 𝐴² cos²(𝜔𝑡 + 𝜃) 𝑑𝑡 = (𝐴²𝜔/2𝜋) ∫₀^{2𝜋/𝜔} (1/2)(1 + cos(2𝜔𝑡 + 2𝜃)) 𝑑𝑡 = 𝐴²/2 < ∞ ⟹ Power signal
𝒄) 𝐸 = lim_{𝑇→∞} ∫_{−𝑇/2}^{𝑇/2} |𝑡𝑢(𝑡)|² 𝑑𝑡 = lim_{𝑇→∞} ∫₀^{𝑇/2} 𝑡² 𝑑𝑡 = lim_{𝑇→∞} (𝑇/2)³/3 = ∞
𝑃 = lim_{𝑇→∞} (1/𝑇) ∫_{−𝑇/2}^{𝑇/2} |𝑡𝑢(𝑡)|² 𝑑𝑡 = lim_{𝑇→∞} (1/𝑇) ∫₀^{𝑇/2} 𝑡² 𝑑𝑡 = lim_{𝑇→∞} (𝑇/2)³/(3𝑇) = ∞
so 𝑥(𝑡) = 𝑡𝑢(𝑡) is neither an energy signal nor a power signal.
Exercise 5: Determine whether the following signals are periodic; if so, find the fundamental period.
Ans:
❶ 𝑥(𝑡) = cos(𝜋𝑡/3) + sin(𝜋𝑡/4) → 𝑇₁ = 6 & 𝑇₂ = 8 → 𝑇₁/𝑇₂ = 3/4 = rational number
Then 𝑥(𝑡) is periodic with fundamental period 𝑇₀ = LCM(𝑇₁, 𝑇₂), meaning that 𝑇₀ = 3𝑇₂ = 4𝑇₁ = 24
❷ 𝑥(𝑡) = sin²(𝑡) = (1/2)(1 − cos(2𝑡)) = 𝑥₁(𝑡) + 𝑥₂(𝑡) → fundamental period 𝑇₀ = 𝜋
❸ 𝑥(𝑡) = sin²(𝑡) + cos²(𝑡) = 1, a 'DC signal' ⇒ 𝑥(𝑡) is periodic with no fundamental period.
❹ 𝑥(𝑡) = cos(𝑡) + sin(√2𝑡) = 𝑥₁(𝑡) + 𝑥₂(𝑡) with 𝑇₁ = 2𝜋 & 𝑇₂ = √2𝜋 → 𝑇₁/𝑇₂ = 2𝜋/(√2𝜋) = √2
Since 𝑇₁/𝑇₂ = √2 ≠ rational number, 𝑥(𝑡) is not a periodic signal
❺ 𝑥[𝑛] = cos²(𝜋𝑛/8) = (1/2)(1 + cos(𝜋𝑛/4)) = 𝑥₁[𝑛] + 𝑥₂[𝑛], 𝑁₀ = LCM(𝑁₁, 𝑁₂)
With 𝑥₁[𝑛] = 1/2 = (1/2)(1)ⁿ & 𝑥₂[𝑛] = (1/2)cos(𝜋𝑛/4), 𝑥₁[𝑛] is periodic with fundamental
period 𝑁₁ = 1 and 𝑥₂[𝑛] is periodic with fundamental period 𝑁₂ = 8. Since we have
𝑁₁/𝑁₂ = 1/8 = rational number, we deduce that 𝑥[𝑛] is periodic with fundamental
period 𝑁₀ = LCM(𝑁₁, 𝑁₂) = 8
❻ 𝑥(𝑡) = 𝑒^{𝑗2𝑡} + 𝑒^{𝑗3𝑡} = 𝑥₁(𝑡) + 𝑥₂(𝑡) with 𝑥₁(𝑡) = 𝑒^{𝑗2𝑡} & 𝑥₂(𝑡) = 𝑒^{𝑗3𝑡}. It is well known that the
signal 𝑥₁(𝑡) is periodic with fundamental period 𝑇₁ = 𝜋 and 𝑥₂(𝑡) is periodic with period
𝑇₂ = 2𝜋/3 → 𝑇₁/𝑇₂ = 3/2 = rational number ⟹ 𝑥(𝑡) is periodic with fundamental
period 𝑇₀ = LCM(𝑇₁, 𝑇₂) = 2𝑇₁ = 3𝑇₂ = 2𝜋.
❼ 𝑥(𝑡) = 𝑒^{𝑗2𝑡} + 𝑒^{𝑗𝜋𝑡} is aperiodic (not periodic), because 𝑇₁/𝑇₂ = 𝜋/2 ≠ rational number.
❽ 𝑥(𝑡) = |sin(𝑡)| + |cos(𝑡)| = 𝑥₁(𝑡) + 𝑥₂(𝑡). If you are asked to find the period of a sum of two
functions 𝑥₁(𝑡) + 𝑥₂(𝑡), given that the period of 𝑥₁ is 𝑇₁ and the period of 𝑥₂ is 𝑇₂, then the
period of 𝑥₁(𝑡) + 𝑥₂(𝑡) will be LCM(𝑇₁, 𝑇₂). But this technique has constraints, as it does not
give the correct answer in some cases. One such case is this one: with 𝑥₁(𝑡) = |sin(𝑡)| and
𝑥₂(𝑡) = |cos(𝑡)|, the period of 𝑥₁(𝑡) + 𝑥₂(𝑡) should be 𝜋 as per the above rule, but the period of
𝑥₁(𝑡) + 𝑥₂(𝑡) is not 𝜋 but 𝜋/2. So in general it can be difficult to identify the correct period;
in most cases a graph will help.
❾ 𝑥(𝑡) = sin³(𝑡) = (3/4)sin(𝑡) − (1/4)sin(3𝑡) = 𝑥₁(𝑡) + 𝑥₂(𝑡) → 𝑇₀ = LCM(𝑇₁, 𝑇₂) = 2𝜋. Also:
𝑥(𝑡) = cos³(𝑡) = (3/4)cos(𝑡) + (1/4)cos(3𝑡) = 𝑥₁(𝑡) + 𝑥₂(𝑡) → 𝑇₀ = LCM(𝑇₁, 𝑇₂) = 2𝜋
❿ 𝑥(𝑡) = sin²(𝑡) + cos⁴(𝑡) = 1 + cos⁴(𝑡) − cos²(𝑡) = 1 + cos²(𝑡)(cos²(𝑡) − 1) → 𝑥(𝑡) is periodic
𝑥(𝑡) = 1 − cos²(𝑡)sin²(𝑡) = 1 − (sin(𝑡)cos(𝑡))² = 1 − (1/4)sin²(2𝑡) → 𝑇₀ = 𝜋/2
⓫ 𝑥(𝑡) = sin⁴(𝑡) + cos⁴(𝑡) = (sin²(𝑡) + cos²(𝑡))² − 2sin²(𝑡)cos²(𝑡) → 𝑥(𝑡) is periodic
𝑥(𝑡) = 1 − 2sin²(𝑡)cos²(𝑡) = 1 − (1/2)sin²(2𝑡) → 𝑇₀ = LCM(𝑇₁, 𝑇₂) = 𝜋/2
⓬ 𝑥(𝑡) = cos⁴(𝑡) − sin⁴(𝑡) = (cos²(𝑡) − sin²(𝑡))(cos²(𝑡) + sin²(𝑡)) → 𝑥(𝑡) is periodic
𝑥(𝑡) = cos²(𝑡) − sin²(𝑡) = 1 − 2sin²(𝑡) = cos(2𝑡) → 𝑇₀ = LCM(𝑇₁, 𝑇₂) = 𝜋
Exercise 6: The continuous signal 𝑥(𝑡) = cos(15𝑡) is sampled with sampling period 𝑇𝑠.
(a) Find the values of 𝑇𝑠 for which the sequence 𝑥[𝑛] = 𝑥(𝑛𝑇𝑠) is periodic. (b) Find the
fundamental period of 𝑥[𝑛] when 𝑇𝑠 = 𝜋/10.
Ans: a) The sinusoidal sequence 𝑥[𝑛] = 𝑥(𝑛𝑇𝑠) is periodic if 𝑥[𝑛 + 𝑁] = 𝑥[𝑛], i.e.
cos(15𝑛𝑇𝑠 + 15𝑁𝑇𝑠) = cos(15𝑛𝑇𝑠) ⟹ 𝑇𝑠 = 2𝜋𝑚/(15𝑁), where 𝑚, 𝑁 ∈ ℕ.
b) 𝑇𝑠 = 𝜋/10 ⟹ 2𝜋𝑚/(15𝑁) = 𝜋/10 ⟹ 𝑁 = 20𝑚/15 = (4/3)𝑚, and since
the fundamental period is the smallest integer value of 𝑁, 𝑁₀ = min{(4/3)𝑚} = 4 (for 𝑚 = 3).
Remark: sampled data need not be periodic, even if the underlying continuous signal is
periodic. Sampling (discretization) does not preserve periodicity.
Exercise 8: Show that if 𝑥(𝑡) is periodic with fundamental period 𝑇0 , then the normalized
average power 𝑃 of 𝑥(𝑡) is the same as the average power of 𝑥(𝑡) over any interval of length
𝑇0 , that is,
𝑃 = lim_{𝑇→∞} (1/𝑇) ∫_{−𝑇/2}^{+𝑇/2} |𝑥(𝑡)|² 𝑑𝑡 = (1/𝑇₀) ∫₀^{𝑇₀} |𝑥(𝑡)|² 𝑑𝑡
Ans: Let us define 𝑇 = 𝑘𝑇₀. Then
𝑃 = lim_{𝑇→∞} (1/𝑇) ∫_{−𝑇/2}^{+𝑇/2} |𝑥(𝑡)|² 𝑑𝑡 = lim_{𝑘→∞} (1/𝑘𝑇₀) ∫_{−𝑘𝑇₀/2}^{+𝑘𝑇₀/2} |𝑥(𝑡)|² 𝑑𝑡 = lim_{𝑘→∞} (𝑘/𝑘𝑇₀) ∫₀^{𝑇₀} |𝑥(𝑡)|² 𝑑𝑡 = (1/𝑇₀) ∫₀^{𝑇₀} |𝑥(𝑡)|² 𝑑𝑡
where the middle step uses the fact that an interval of length 𝑘𝑇₀ contains exactly 𝑘 periods.
Exercise 9: Find the odd and even parts of 𝑥(𝑡) = 𝑒^{𝑗𝑡}
𝐀𝐧𝐬: 𝑥(𝑡) = 𝑒^{𝑗𝑡} = cos(𝑡) + 𝑖 sin(𝑡) ⇒ 𝑥ₑ(𝑡) = (1/2)(𝑥(𝑡) + 𝑥(−𝑡)) = cos(𝑡) and 𝑥ₒ(𝑡) = (1/2)(𝑥(𝑡) − 𝑥(−𝑡)) = 𝑖 sin(𝑡)
Exercise 10: The following equalities are used on many occasions in this text. Prove their
validity:
𝒂) ∑_{𝑛=𝑘}^{∞} 𝛼ⁿ = 𝛼ᵏ/(1 − 𝛼), |𝛼| < 1   and   𝒃) ∑_{𝑛=0}^{∞} 𝑛𝛼ⁿ = 𝛼/(1 − 𝛼)², |𝛼| < 1
𝐀𝐧𝐬: 𝒂) Let us define the sum 𝑆 = ∑_{𝑛=0}^{∞} 𝛼ⁿ = lim_{𝑛→∞} (1 − 𝛼ⁿ)/(1 − 𝛼) = 1/(1 − 𝛼) and split it
as 𝑆 = ∑_{𝑛=0}^{𝑘−1} 𝛼ⁿ + ∑_{𝑛=𝑘}^{∞} 𝛼ⁿ. Then we have
∑_{𝑛=𝑘}^{∞} 𝛼ⁿ = ∑_{𝑛=0}^{∞} 𝛼ⁿ − ∑_{𝑛=0}^{𝑘−1} 𝛼ⁿ = 1/(1 − 𝛼) − (1 − 𝛼ᵏ)/(1 − 𝛼) = 𝛼ᵏ/(1 − 𝛼)
𝒃) Differentiating ∑_{𝑛=0}^{∞} 𝛼ⁿ = 1/(1 − 𝛼) term by term with respect to 𝛼 gives
∑_{𝑛=0}^{∞} 𝑛𝛼ⁿ⁻¹ = 1/(1 − 𝛼)², and multiplying both sides by 𝛼 yields ∑_{𝑛=0}^{∞} 𝑛𝛼ⁿ = 𝛼/(1 − 𝛼)².
Exercise 11: Given the continuous signal 𝑥(𝑡) = cos(𝜔𝑡²), 𝜔 = 5𝜋/4, prove that 𝑥(𝑡) is an
aperiodic signal and also prove that its sampled signal 𝑥[𝑛] is periodic when 𝑇𝑠 = 0.5. Find
the period of the discrete signal.
Ans: Assume that 𝑥(𝑡) is periodic, so that cos(𝜔(𝑡 + 𝑇)²) = cos(𝜔𝑡²) and hence 2ℓ𝜋 = 𝜔𝑇² + 2𝜔𝑡𝑇.
As we can see, 𝑇 depends on the value of 𝑡 and hence is not a constant. So cos(𝜔𝑡²) is
not periodic. This can also be observed in the graph of 𝑥(𝑡) = cos(𝜔𝑡²).
The sampled signal is 𝑥[𝑛] = cos(𝜔𝑇𝑠²𝑛²) where 𝑇𝑠 = 0.5; looking for the period of this
sequence, 𝑥[𝑛 + 𝑁] = 𝑥[𝑛] ⇒ 𝑁 = 16. This can also be observed in the graph of 𝑥[𝑛].
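A quick numerical check (a sketch, not a proof): with 𝑇𝑠 = 0.5 and 𝜔 = 5𝜋/4, the sequence 𝑥[𝑛] = cos(𝜔𝑇𝑠²𝑛²) indeed repeats with period 𝑁 = 16:
w = 5*pi/4; Ts = 0.5;
n = 0:63;
x = cos(w*Ts^2*n.^2);
max(abs(x(1:48) - x(17:64)))      % ~ 0, i.e. x[n+16] = x[n]
stem(n,x,'filled'), grid on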
Exercise 12: Prove that, if we let 𝑓(𝑡) = 𝑓ₑ(𝑡) + 𝑓ₒ(𝑡), the even and odd parts have the
following product properties:
Ans: Using 𝑓ₑ(𝑡) = (𝑓(𝑡) + 𝑓(−𝑡))/2 and 𝑓ₒ(𝑡) = (𝑓(𝑡) − 𝑓(−𝑡))/2:
𝟙. 𝑦(𝑡) = 𝑓ₑ(𝑡) × 𝑓ₒ(𝑡) = (1/4)(𝑓²(𝑡) − 𝑓²(−𝑡)) ⇒ 𝑦(𝑡) = −𝑦(−𝑡). The result is an odd function.
𝟚. 𝑦(𝑡) = 𝑓ₒ(𝑡) × 𝑓ₒ(𝑡) = (1/4)(𝑓²(𝑡) + 𝑓²(−𝑡) − 2𝑓(𝑡)𝑓(−𝑡)) ⇒ 𝑦(𝑡) = 𝑦(−𝑡). The result is an even
function.
𝟛. 𝑦(𝑡) = 𝑓ₑ(𝑡) × 𝑓ₑ(𝑡) = (1/4)(𝑓²(𝑡) + 𝑓²(−𝑡) + 2𝑓(𝑡)𝑓(−𝑡)) ⇒ 𝑦(𝑡) = 𝑦(−𝑡). The result is an even
function.
Exercise 13: Let g(𝑡) be any real valued function where g(𝑡) = g 𝑜 (𝑡) + g 𝑒 (𝑡). Prove that
g 𝑒 (𝑡) is symmetrical about the vertical axis, and g 𝑜 (𝑡) is symmetrical about the origin. As a
consequence deduce that:
∫_{−𝑎}^{+𝑎} gₑ(𝑡) 𝑑𝑡 = 2∫₀^{𝑎} gₑ(𝑡) 𝑑𝑡   &   ∫_{−𝑎}^{+𝑎} gₒ(𝑡) 𝑑𝑡 = 0
Exercise 14: Prove that
𝐸 = ∫_{−∞}^{+∞} 𝑥²(𝑡) 𝑑𝑡 = 𝐸ₑ + 𝐸ₒ = ∫_{−∞}^{+∞} 𝑥ₑ²(𝑡) 𝑑𝑡 + ∫_{−∞}^{+∞} 𝑥ₒ²(𝑡) 𝑑𝑡
Ans: We know that 𝑥(𝑡) = 𝑥ₑ(𝑡) + 𝑥ₒ(𝑡) ⇒ 𝑥²(𝑡) = 𝑥ₑ²(𝑡) + 𝑥ₒ²(𝑡) + 2𝑥ₑ(𝑡)𝑥ₒ(𝑡)
𝐸 = ∫_{−∞}^{+∞} 𝑥²(𝑡) 𝑑𝑡 = ∫_{−∞}^{+∞} 𝑥ₑ²(𝑡) 𝑑𝑡 + ∫_{−∞}^{+∞} 𝑥ₒ²(𝑡) 𝑑𝑡 + 2∫_{−∞}^{+∞} 𝑥ₑ(𝑡)𝑥ₒ(𝑡) 𝑑𝑡
and the last integral vanishes because 𝑥ₑ(𝑡)𝑥ₒ(𝑡) is an odd function (see Exercise 13).
Exercise 16: Given the complex exponential sequence 𝑥[𝑛] = 𝑒^{𝑗Ω₀𝑛},
prove that 𝑥[𝑛] is a periodic sequence with period 𝑁 (> 0) if and only if Ω₀ satisfies
Ω₀/2𝜋 = 𝑚/𝑁, 𝑚 a positive integer. Thus the sequence 𝑥[𝑛] = 𝑒^{𝑗Ω₀𝑛} is not periodic for
every value of Ω₀: it is periodic only if Ω₀/2𝜋 is a rational number. If Ω₀ satisfies the
periodicity condition, Ω₀ ≠ 0, and 𝑁 & 𝑚 have no factors in common, then the fundamental
period of the sequence 𝑥[𝑛] is given by 𝑁₀ = 𝑚(2𝜋/Ω₀). Note that this property is quite
different from that of the continuous-time signal 𝑒^{𝑗ω₀𝑡}, which is periodic for any
value of ω₀.
Exercise 17: Find the even and odd parts of the signal 𝑥[𝑛] = 𝛼 𝑛 𝑢[𝑛]
Exercise 18: If 𝑥[𝑛] = 0 for 𝑛 < 0 and 𝑥ₑ[𝑛] = 2 × (0.9)^{|𝑛|}, find 𝑥[𝑛]
Hint: write 𝑥ₑ[𝑛] in bracket form: 𝑥ₑ[𝑛] = { 2 × (0.9)ⁿ, 𝑛 ≥ 1 ;  2, 𝑛 = 0 ;  2 × (0.9)^{−𝑛}, 𝑛 ≤ −1 }
𝐀𝐧𝐬: Since 𝑥[𝑛] = 0 for 𝑛 < 0, we have 𝑥ₑ[𝑛] = 𝑥[𝑛]/2 for 𝑛 ≥ 1 and 𝑥ₑ[0] = 𝑥[0]; hence
𝑥[𝑛] = 2𝛿[𝑛] + 4 × (0.9)ⁿ 𝑢[𝑛 − 1]
Exercise 19: Express 𝛿((𝑡 − 𝑎)(𝑡 − 𝑏)), 𝑎 ≠ 𝑏, in terms of shifted impulses. Near the two
roots 𝑡 = 𝑎 and 𝑡 = 𝑏,
𝛿((𝑡 − 𝑎)(𝑡 − 𝑏)) = 𝛿(𝑡 − 𝑏)/|2𝑡 − (𝑎 + 𝑏)| + 𝛿(𝑡 − 𝑎)/|2𝑡 − (𝑎 + 𝑏)|
Using the property g(𝑡)𝛿(𝑡 − 𝑎) = g(𝑎)𝛿(𝑡 − 𝑎) we get:
𝛿((𝑡 − 𝑎)(𝑡 − 𝑏)) = 𝛿(𝑡 − 𝑏)/|𝑏 − 𝑎| + 𝛿(𝑡 − 𝑎)/|𝑎 − 𝑏|
Because |𝑏 − 𝑎| = |𝑎 − 𝑏| regardless of sign,
𝛿((𝑡 − 𝑎)(𝑡 − 𝑏)) = (1/|𝑏 − 𝑎|)(𝛿(𝑡 − 𝑎) + 𝛿(𝑡 − 𝑏))
Generally we have
𝛿(g(𝑡)) = ∑ᵢ 𝛿(𝑡 − 𝑡ᵢ)/|g′(𝑡ᵢ)|, where the 𝑡ᵢ are the real (simple) roots of g(𝑡) = 0
Computer program: This program provides the student with good support to plot & simulate
multistep (piecewise-defined) functions.
clear all
clc
% Piecewise definition of a triangular pulse:
% x(t) = t on [0,1], x(t) = 2-t on (1,2], and 0 elsewhere
t=-6:0.1:6;
k=length(t);
for i=1:k
if t(i)>=0 && t(i)<=1
x(i)=t(i);
elseif t(i)>1 && t(i)<=2
x(i)=-t(i)+2;
else
x(i)=0;
end
end
plot(t,x,'r','linewidth',3)
grid('minor')
Summary:
A signal is a set of information or data. A system is an organized relationship among
functioning units or components. A system may be made up of physical components
(hardware realization) or may be an algorithm (software realization).
A convenient measure of the size of a signal is its energy if it is finite. If the signal energy
is infinite, the appropriate measure is its power, if it exists. A signal whose physical
description is known completely in a mathematical or graphical form is a deterministic
signal. A random signal is known only in terms of its probabilistic description such as
mean value, mean square value, and so on, rather than its mathematical or graphical form.
The unit step function 𝑢(𝑡) is very useful in representing causal signals and signals with
different mathematical descriptions over different intervals.
In the classical definition, the unit impulse function 𝛿(𝑡) is characterized by unit area,
and the fact that it is concentrated at a single instant 𝑡 = 0. The impulse function has a
sampling (or sifting) property, which states that the area under the product of a function
with a unit impulse is equal to the value of that function at the instant where the impulse
is located (assuming the function to be continuous at the impulse location). In the modern
approach, the impulse function is viewed as a generalized function and is defined by the
sampling property.
The Dirac impulse function or unit impulse function or simply the delta function 𝛿(𝑡) is
not a function in the strict mathematical sense. It is defined in advanced texts and courses
using the theory of distribution.
A signal that is symmetrical about the vertical axis (𝑡 = 0) is an even function of time,
and a signal that is anti-symmetrical about the vertical axis is an odd function of time. The
product of an even function with an odd function results in an odd function. However, the
product of an even function with an even function or an odd function with an odd function
results in an even function. The area under an odd function from 𝑡 = −𝑎 to 𝑎 is always zero
regardless of the value of 𝑎. On the other hand, the area under an even function from
𝑡 = −𝑎 to 𝑎 is two times the area under the same function from 𝑡 = 0 to 𝑎 (or 𝑡 = −𝑎 to 0).
Every signal can be expressed as a sum of odd and even function of time.
A system processes input signals to produce output signals (responses). The input is the
cause and the output is its effect. In general, the output is affected by two causes: the
internal conditions of the system (such as the initial conditions) and the external input.
If you are asked to find the period of a sum of two functions f(𝑥) + g(𝑥), given that the
period of f is a and the period of g is b, then the period of f(𝑥) + g(𝑥) will be LCM(a, b). But
this technique has constraints, as it does not give the correct answer in some cases. One
such case: if you take f(𝑥) = |sin(𝑥)| and g(𝑥) = |cos(𝑥)|, then the period of f(𝑥) + g(𝑥) should
be 𝜋 as per the above rule, but the period of f(𝑥) + g(𝑥) is not 𝜋 but 𝜋/2. So in general it can
be difficult to identify the correct period; in most cases a graph will help.
Additional Computer programs: (the sampling step and time grid below are an assumed setup)
clear all, clc
Ts=0.01;                 % sampling step (assumption)
t=0:Ts:20;               % time grid (assumption)
y1=abs(cos(t));
[~,peaklocs] = findpeaks(y1);
T1= mean(Ts*diff(peaklocs))
y2=abs(sin(t));
[~,peaklocs] = findpeaks(y2);
T2= mean(Ts*diff(peaklocs))
y=y1+y2;
[~,peaklocs] = findpeaks(y);
T= mean(Ts*diff(peaklocs))
plot(t,y1,'-r','linewidth',2)
grid on
figure
plot(t,y2,'-b','linewidth',2)
grid on
figure
plot(t,y,'-k','linewidth',2)
Typical output:
T1 = 3.1425   % period 𝝅
T2 = 3.1420   % period 𝝅
T = 1.5700    % period 𝝅/2
T=10; t=-10:0.01:10;           % assumed period and time grid
y1=sign(sin((2*pi/T)*t));      % square wave of period T
%y1=sign(cos((2*pi/T)*t));
plot(t,y1,'-r','linewidth',2)
grid on
clear all,clc
dt=0.01;
t=-10:dt:10;
T=10;
y1=atan(cot((2*pi/T)*t));
plot(t,y1,'-r','linewidth',2)
grid on
y2=asin(sin((2*pi/T)*t));      % triangular wave via asin(sin(.))
plot(t,y2,'-b','linewidth',2)
grid on
Motion: The motion of an object can be considered to be a signal, and can be monitored by
various sensors to provide electrical signals. For example, radar can provide an
electromagnetic signal for following aircraft motion. A motion signal is one-dimensional
(time), and the range is generally three-dimensional. Position is thus a 3-vector signal;
position and orientation of a rigid body is a 6-vector signal. Orientation signals can be
generated using a gyroscope.
Sound: Since a sound is a vibration of a medium (such as air), a sound signal associates a
pressure value to every value of time and three space coordinates. A sound signal is
converted to an electrical signal by a microphone, generating a voltage signal as an analog
of the sound signal, making the sound signal available for further signal processing. Sound
signals can be sampled at a discrete set of time points; for example, compact discs (CDs)
contain discrete signals representing sound, recorded at 44,100 samples per second; each
sample contains data for a left and right channel, which may be considered to be a 2-vector
signal (since CDs are recorded in stereo). The CD encoding is converted to an electrical
signal by reading the information with a laser, converting the sound signal to an optical
signal.
Videos: A video signal is a sequence of images. A point in a video is identified by its two-
dimensional position and by the time at which it occurs, so a video signal has a three-
dimensional domain. Analog video has one continuous domain dimension (across a scan
line) and two discrete dimensions (frame and line).
Biology: In biology, the value of a signal is often an electric potential ("voltage"). The domain is more
difficult to establish. Some cells or organelles have the same membrane potential
throughout; neurons generally have different potentials at different points. These signals
have very low energies, but are enough to make nervous systems work; they can be
measured in aggregate by the techniques of electrophysiology.
Computer program: The Fourier cosine series of the simple linear function 𝑓(𝑥) = 𝑥 converges
to the even periodic extension of 𝑓(𝑥) = 𝑥, which is a triangular wave. Note the very fast
convergence, compared to the sine series.
clear;
hold off
L = 1;                          % Length of the interval
x = linspace(-3*L, 3*L, 300);   % Create 300 points on the interval [-3L, 3L]
Const = -4*L/pi^2;              % Constant in the expression for A_n (odd n)
Cn = L / 2;                     % The baseline of the cosine series: A0 = L/2
f1 = Cn + Const*cos(pi*x/L);            % partial sum A_0 + A_1 cos(pi x/L)
f2 = f1 + (Const/9)*cos(3*pi*x/L);      % add the n = 3 term (A_3 = Const/9)
plot(x, f1, 'b', x, f2, 'r', 'linewidth', 1); hold on
plot(x, abs(mod(x+1,2)-1), 'k--', 'linewidth', 1); % Trickiest part of the code: create the triangular wave (even periodic extension of f(x)=x)
xlabel('x'); ylabel('Partial sums of the cosine series');
title('Sum of first few terms in cosine series A_0+A_ncos(n\pix) (L=1)');
legend('A_0+A_1cos(\pix)', 'A_0+A_1cos(\pix)+A_3cos(3\pix)', 'Even extension of f(x)=x');
CHAPTER II:
Linear Time Invariant
Systems
I. Introduction
II. Block Diagrams of Systems and Interconnection
III. Time Invariant and Time Varying Systems
IV. Impulse response continuous-time systems
V. Impulse response discrete-time systems
VI. Step response
VII. Properties of LTI Systems
VII.I Causality
VII.II Stability
VII.III Memoryless
VII.IV Invertible system
VIII. Eigenfunction of Continuous-time LTI Systems
IX. Eigenfunction of Discrete-time LTI Systems
X. Solved Problems
XI. Review Questions
In system analysis, among other fields of study, a linear time-invariant system (or
"LTI system") is a system that produces an output signal from any input signal
subject to the constraints of linearity and time-invariance; these terms are briefly
defined next. LTI system theory is an area of applied mathematics which has direct
applications in electrical circuit analysis and design, signal processing and filter
design, control theory, mechanical engineering, image processing, NMR spectroscopy,
and many other technical areas where systems of ordinary differential equations
present themselves. (Wiki)
Linear Time Invariant
Systems
I. Introduction: A system whose output is proportional to its input is an example of a
linear system. But linearity implies more than this; it also implies the additivity property,
meaning that if several causes are acting on a system, then the total effect on the system
due to all these causes can be determined by considering each cause separately while
assuming all the other causes to be zero. The total effect is then the sum of all the
component effects.
In this book, we are concerned only with linear systems, so that the corresponding linear
mapping characterizes the linear system completely: 𝑦(𝑡) = 𝑻(𝑢(𝑡)), where 𝑻 is a linear mapping.
Additivity: 𝑻(𝑢₁(𝑡)) + 𝑻(𝑢₂(𝑡)) = 𝑻(𝑢₁(𝑡) + 𝑢₂(𝑡))   Homogeneity: 𝛼𝑻(𝑢(𝑡)) = 𝑻(𝛼𝑢(𝑡))
Notes: A linear system is a mathematical model of a system based on the use of a linear
operator. Any physical system that does not satisfy the superposition principle is classified
as a nonlinear system. Also notice that a consequence of the homogeneity (or scaling)
property of linear systems is that a zero input yields a zero output. This follows readily by
setting 𝛼 = 0. This is another important property of linear systems.
Example: Consider the system 𝑦(𝑡) = 𝑥(𝑡) cos(3𝑡). Using the superposition theorem, we can
prove that the system is linear. For any given input 𝑥₁(𝑡), the output is 𝑦₁(𝑡) = 𝑥₁(𝑡) cos(3𝑡)
and for 𝑥₂(𝑡), the output is 𝑦₂(𝑡) = 𝑥₂(𝑡) cos(3𝑡). For input [𝑥₁(𝑡) + 𝑥₂(𝑡)], the output is
𝑦(𝑡) = [𝑥₁(𝑡) + 𝑥₂(𝑡)] cos(3𝑡) = 𝑦₁(𝑡) + 𝑦₂(𝑡). Hence the system is linear.
Example: Now, let's check another system, 𝑦(𝑡) = 𝑥(𝑡)²:
𝑦₁(𝑡) = 𝑻(𝑥₁(𝑡)) = 𝑥₁(𝑡)²  and  𝑦₂(𝑡) = 𝑻(𝑥₂(𝑡)) = 𝑥₂(𝑡)²  ⟹  𝑻(𝑥₁(𝑡) + 𝑥₂(𝑡)) = (𝑥₁(𝑡) + 𝑥₂(𝑡))² ≠ 𝑦₁(𝑡) + 𝑦₂(𝑡)
so the system is nonlinear.
Example: Given a system described by its in/out relationship (mapping) 𝑦(𝑡) = 𝑻(𝑥(t)), say
if the system is linear or not?
❶ 𝑦(𝑡) = cos(𝑥(𝑡)),   ❷ 𝑑𝑦(𝑡)/𝑑𝑡 + 3𝑦(𝑡) = 𝑥(𝑡),   ❸ 𝑦(𝑡) = 𝑥(𝑡 − 𝛼)
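A numerical sketch (the test inputs are assumptions): a single counterexample suffices to disprove additivity, as for system ❶ above. Note that numerical agreement alone would not prove linearity.
t = 0:0.01:2;
x1 = sin(2*pi*t); x2 = 0.5*t;      % assumed test inputs
T = @(x) cos(x);                   % system (1): y(t) = cos(x(t))
err = max(abs(T(x1+x2) - (T(x1)+T(x2))))   % clearly nonzero: additivity fails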
II. Block Diagrams of Systems and Interconnection: Large systems may consist of an
enormous number of components or elements. Analyzing such systems all at once could be
next to impossible. In such cases, it is convenient to represent a system by suitably
interconnected subsystems, each of which can be readily analyzed. Each subsystem can be
characterized in terms of its input-output relationships. Subsystems may be interconnected
by using three elementary types of interconnections: cascade, parallel, and feedback.
III. Time Invariant and Time Varying Systems: A system is called time-invariant if a
time shift (delay or advance) in the input signal causes the same time shift in the output
signal. Thus, for a continuous-time system, the system is time-invariant if 𝑦(𝑡) = 𝑻(𝑥(𝑡))
implies 𝑦(𝑡 − 𝜏) = 𝑻(𝑥(𝑡 − 𝜏)) for any shift 𝜏.
⦁ The system 𝑦(𝑡) = 𝑥(2𝑡) is not time invariant. To check this, use a counterexample.
Consider {𝑥₁(𝑡) = 1 only if |𝑡| ≤ 2}; the resulting output is {𝑦₁(𝑡) = 1 only if |𝑡| ≤ 1}. If the
input is shifted by 2, that is, if we consider {𝑥₂(𝑡) = 1 only if 0 ≤ 𝑡 ≤ 4}, we obtain the
resulting output {𝑦₂(𝑡) = 1 only if 0 ≤ 𝑡 ≤ 2}. It is clearly seen that 𝑦₂(𝑡) ≠ 𝑦₁(𝑡 − 2), so the
system is not time invariant.
IV. Impulse response of continuous-time systems: The fundamental result in LTI system
theory is that any LTI system can be characterized entirely by a single function called the
system's impulse response, that is, the output of a plant when the input is the delta
function, i.e. ℎ(𝑡) = 𝑻(𝛿(𝑡)). Question: How important is ℎ(𝑡) in system theory? Can we
always obtain the output 𝑦(𝑡) in terms of ℎ(𝑡)? To answer this question, assume that we are
given the response to the Dirac signal, ℎ(𝑡) = 𝑻(𝛿(𝑡)), and recall from the properties of the
Dirac function that
𝑥(𝑡) = ∫_{−∞}^{+∞} 𝑥(𝜉)𝛿(𝑡 − 𝜉) 𝑑𝜉 ⟹ 𝑦(𝑡) = 𝑻(𝑥(𝑡)) = 𝑻(∫_{−∞}^{+∞} 𝑥(𝜉)𝛿(𝑡 − 𝜉) 𝑑𝜉) = ∫_{−∞}^{+∞} 𝑥(𝜉)𝑻(𝛿(𝑡 − 𝜉)) 𝑑𝜉 = ∫_{−∞}^{+∞} 𝑥(𝜉)ℎ(𝑡 − 𝜉) 𝑑𝜉
where linearity lets 𝑻 pass inside the integral and time invariance gives 𝑻(𝛿(𝑡 − 𝜉)) = ℎ(𝑡 − 𝜉).
Remark: 01 This last integral equation is called the convolution between the input and ℎ(𝑡);
this linear operation is denoted by a star '⋆' and 𝑦(𝑡) = 𝑢(𝑡) ⋆ ℎ(𝑡) = ℎ(𝑡) ⋆ 𝑢(𝑡)
Remark: 02 Here, in the integral equation 𝑦(𝑡) = ∫_{−∞}^{+∞} 𝑢(𝜉)ℎ(𝑡 − 𝜉) 𝑑𝜉, the function 𝑢(𝑡) is not
necessarily a step signal; it can be any general input (it is better to denote it by 𝑥(𝑡)).
Homework: prove that the convolution operator is commutative. Hint: you can use change
of variables.
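A numerical sketch supporting the homework (it does not replace the change-of-variables proof): convolve two assumed finite sequences in both orders and compare.
x = [1 2 3 0 -1];                  % assumed sequences
h = [2 -1 4];
max(abs(conv(x,h) - conv(h,x)))    % = 0: x ⋆ h = h ⋆ x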
Remark: 03 The impulse response ℎ(𝑡) relates the input to the output through the
convolution operation, and characterizes the system completely (i.e. the impulse response
ℎ(𝑡) reveals the system properties): properties of the system ⟺ properties of ℎ(𝑡).
Because convolution is fundamental to the analysis and description of LSI systems, in this
section we look at the mechanics of performing convolutions. We begin by listing some
properties of convolution that may be used to simplify the evaluation of the convolution
integral (or sum in discrete).
In the next page you gave shown a schematic diagram of the convolution properties
Computer program: This program gives the student good support for plotting and simulating the convolution integral and the convolution sum. We write a nested program that convolves two analytic functions, in both the continuous- and discrete-time cases (here the two signals are taken as rectangular pulses sampled with period Ts = 0.01, an assumption made for illustration):
Ts=0.01;                              % sampling period
t1=-0.5:Ts:0.5; x=ones(1,length(t1)); % x(t): rectangular pulse (assumed)
t2=-1:Ts:1;     h=ones(1,length(t2)); % h(t): rectangular pulse (assumed)
n1=length(x); n2=length(h);
X=[x,zeros(1,n2)];                    % zero-padding
H=[h,zeros(1,n1)];
for i=1:n1+n2-1
Y(i)=0;
for j=1:n1
if i-j+1>0
Y(i)=Y(i)+X(j)*H(i-j+1);              % convolution sum
end;
end;
end
t=-1.5:Ts:1.5;                        % support of y = sum of the two supports
plot(t1,x,'b','linewidth',3)
grid on
figure
plot(t2,h,'b','linewidth',3)
grid on
figure
plot(t,Ts*Y,'r','linewidth',3)        % scaling by Ts approximates the integral
grid on
[Figure: x(t), h(t), and y(t) = x(t) ⋆ h(t)]
V. Impulse response discrete-time systems: The
output of a discrete time LTI system is completely
determined by the input and the system's response
to a unit impulse.
By the sifting property of impulses, any signal can be decomposed in terms of an infinite sum of shifted, scaled impulses,
x[n] = ∑_{k=−∞}^{+∞} x[k]δ[n − k]
where, if 𝑯 is the system operator, because of the LTI property of the system we have
y[n] = 𝑯(x[n]) = 𝑯(∑_{k=−∞}^{+∞} x[k]δ[n − k]) = ∑_{k=−∞}^{+∞} x[k]𝑯(δ[n − k]) = ∑_{k=−∞}^{+∞} x[k]h[n − k]
This is the process known as Convolution. Since we are in discrete time, this is the Discrete
Time Convolution Sum. When a system is "shocked" by a delta function, it produces an
output known as its impulse response. For an LTI system, the impulse response completely
determines the output of the system given any arbitrary input. The output can be found
using discrete time convolution.
Example: Prove that the operation of convolution has the following property (differencing/shifting) for all discrete time signals x₁[n], x₂[n], where 𝕕_T is the time shift operator with T ∈ ℤ:
x₁[n] ⋆ 𝕕_T(x₂[n]) = 𝕕_T(x₁[n]) ⋆ x₂[n]
Proof:
x₁[n] ⋆ 𝕕_T(x₂[n]) = ∑_{k=−∞}^{+∞} x₁[k]x₂[(n − T) − k] = ∑_{k=−∞}^{+∞} x₂[k]x₁[(n − T) − k] = 𝕕_T(x₁[n]) ⋆ x₂[n]
Observation: Notice that the discrete convolution has the following property
Duration(𝑥1 [𝑛] ⋆ 𝑥2 [𝑛]) = Duration(𝑥1 [𝑛]) + Duration(𝑥2 [𝑛]) − 1
for all discrete time signals 𝑥1 [𝑛], 𝑥2 [𝑛] where Duration(𝑥[𝑛]) gives the duration of a signal 𝑥.
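A quick numerical check of this duration property, with two arbitrarily chosen short sequences:
x1 = [1 1 2 -1];                    % Duration(x1) = 4
x2 = [1 2 3];                       % Duration(x2) = 3
y  = conv(x1, x2);                  % discrete convolution
length(y)                           % returns 6 = 4 + 3 - 1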
x=[0 1 1 0 1 1];
h=[0 1 2 3 2 1 0];
n1=length(x); n2=length(h);
X=[x,zeros(1,n2)];
H=[ h,zeros(1,n1)];
s= n1+n2-1;
for i=1:s
Y(i)=0;
for j=1:n1
if i-j+1>0
Y(i)=Y(i)+X(j)*H(i-j+1);
else
end;
end;
end
stem(1:n1,x,'b','linewidth',3)
grid on
figure
stem(1:n2,h,'b','linewidth',3)
grid on
figure
stem(1:s, Y,'r','linewidth',3)
grid on
[Figure: x[n], h[n], and y[n] = x[n] ⋆ h[n]]
VI. Step response The step response 𝑠(𝑡) of a continuous-time LTI system (represented by
the operator 𝑻) is defined to be the response of the system when the input is a Heaviside
function 𝑢(𝑡); that is, 𝑠(𝑡) = 𝑻(𝑢(𝑡)). In many applications, the step response 𝑠(𝑡) is also a
useful characterization of the system. The step response 𝑠(𝑡) can be easily determined by
s(t) = 𝑻(u(t)) = ∫_{−∞}^{+∞} u(ξ)h(t − ξ)dξ = ∫_{0}^{∞} h(t − ξ)dξ
Take the change of variable μ = t − ξ (so dμ = −dξ); we get:
s(t) = ∫_{0}^{∞} h(t − ξ)dξ = ∫_{−∞}^{t} h(μ)dμ ⟹ h(t) = ds(t)/dt
Equivalently,
h(t) = 𝑻(δ(t)) = 𝑻(du(t)/dt) = (d/dt){𝑻(u(t))} = ds(t)/dt
Remark: The step response of a discrete-time LTI system is the convolution of the unit step with the impulse response:
s[n] = u[n] ⋆ h[n] = ∑_{k=−∞}^{+∞} h[k]u[n − k] = ∑_{k=−∞}^{n} h[k]
VII. Properties of LTI Systems: Some of the most important properties of a system are causality, invertibility, stability and memorylessness.
A. Causality: (السببية) Causal systems do not include future input samples; such a system is practically realizable, i.e. it can actually be constructed. Generally, all real-time systems (in nature or in physical reality) are causal: a causal system does not respond to an input event until that event actually occurs.
A continuous LTI system y(t) = h(t) ⋆ x(t) = ∫_{−∞}^{∞} h(ξ)x(t − ξ)dξ is causal if the output depends only on present and past input values x(t − ξ) with ξ ≥ 0, i.e. if h(ξ) = 0 for ξ < 0:
y(t) = ∫_{−∞}^{∞} h(ξ)x(t − ξ)dξ = ∫_{−∞}^{0} h(ξ)x(t − ξ)dξ + ∫_{0}^{∞} h(ξ)x(t − ξ)dξ = ∫_{0}^{∞} h(ξ)x(t − ξ)dξ
Therefore, for a causal continuous-time LTI system, we have h(t) = 0 for all t < 0; in other words, y(t) = ∫_{0}^{∞} h(ξ)x(t − ξ)dξ.
Example: The following examples are for systems with an input x and output y; say whether each system is causal or not.
Remark: 01 Causality is a necessity if the independent variable is time, but not all systems have time as the independent variable. For example, a system that processes still images does not need to be causal (the independent variable is spatial). Non-causal systems can also be built and are useful in many circumstances, for instance in non-real-time (offline) processing.
B. Stability: (الاستقرارية) There are many types of stability of systems, but in this section we are interested only in so-called BIBO stability, which requires that a bounded input applied to an LTI system produce a bounded output. Assume that |u(t)| < M for all t ∈ ℝ; what about |y(t)|?
|y(t)| = |∫_{−∞}^{+∞} u(τ)h(t − τ)dτ| ≤ ∫_{−∞}^{+∞} |u(τ)h(t − τ)|dτ ≤ ∫_{−∞}^{+∞} |u(τ)||h(t − τ)|dτ ≤ M∫_{−∞}^{+∞} |h(t − τ)|dτ
Then the output y(t) is absolutely bounded iff h(t) is absolutely integrable, that is,
∫_{−∞}^{+∞} |h(t − τ)|dτ < ∞ or ∫_{−∞}^{+∞} |h(η)|dη < ∞
Remarks: As a physical interpretation of stability: the energy of an unforced system cannot remain constant; it keeps being dissipated, decreasing until it eventually reaches zero. When the system is forced, its energy may remain constant, but it must not grow without bound. If a system is perturbed from its rest, or equilibrium, position, then it starts to move; we can roughly say that the equilibrium position is stable if the system does not go far from this position for small initial perturbations.
Example: You are given the impulse response of an LTI system; which of them are BIBO stable? (t > 0)
■ h(t) = e^{−t²}   ■ h(t) = e^{t²}   ■ h(t) = e^{−0.5t}   ■ h(t) = A·sin(ωt)
Answer: The 2nd and 4th are unstable, but the 1st and 3rd are BIBO stable. You are asked to check them at home (do it as homework!).
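As a numerical hint for this homework, the first and third impulse responses can be integrated directly in MATLAB; the finite values confirm absolute integrability (the second and fourth integrands grow or keep oscillating, so their integrals diverge):
I1 = integral(@(t) exp(-t.^2),  0, Inf)   % = sqrt(pi)/2, finite -> BIBO stable
I3 = integral(@(t) exp(-0.5*t), 0, Inf)   % = 2, finite -> BIBO stable
% exp(t.^2) and A*sin(w*t) are not absolutely integrable on [0, Inf)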
C. Memoryless systems: A linear time invariant system is called memoryless if the output depends only on the input at the present time, that is y(t) = ku(t), or equivalently h(t) = kδ(t).
A memoryless system is always causal (as it doesn't depend on future input values), but a causal system need not be memoryless (because it may depend on past input or output values).
Gear ratio: n = N₁/N₂ and θ_l/θ_m = ω_l/ω_m = n
Example: (Dynamic system) A latch or flip-flop is a circuit that has two stable states and can be used to store finite state information. Latches and flip-flops are used as data storage elements; such data storage can be used for storage of state (the smallest storage element of memory): Q(n) = Logic_function(Q(n−1), S, R).
Answer: The impulse response of this system is h(t) = sin(ωt + φ)δ(t). From the properties of the delta function we can write h(t) = Mδ(t), where M = sin(φ). The system is BIBO stable because |y(t)| = |sin(ωt + φ)x(t)| ≤ |sin(ωt + φ)|·|x(t)| ≤ |x(t)| < ∞ for any bounded input. The system is also memoryless and linear time-varying.
V_out/V_in = (1 + R_f/R_g) = const ⇒ h(t) = (1 + R_f/R_g)δ(t)
A system is memoryless if its output, for each value of the independent variable at a given time, depends only on the input at that same time. For example, y[n] = (2x[n] − x²[n])² is memoryless. A resistor is a memoryless system, since the input current and output voltage are related by y[n] = Kx[n]. An example of a discrete-time system with memory is an accumulator or summer; another example is a delay. A capacitor and an inductor are examples of continuous-time systems with memory.
D. Invertible system: (النظام العكوس) If we can obtain the input u(t) back from the output y(t) by some operation, the system S is said to be invertible. For a noninvertible system, different inputs can result in the same output (as in a rectifier), and it is impossible to determine the input for a given output; in a rectifier circuit we have y(t) = √(u²(t)) = |u(t)|.
Therefore, for an invertible system, it is essential that distinct inputs result in distinct
outputs so that there is one-to-one mapping between an input and the corresponding
output. This ensures that every output has a unique input.
Example: An ideal differentiator, y(t) = −RC·du(t)/dt, is noninvertible because integration of its output cannot restore the original signal unless we know one piece of information about the signal (the constant of integration, e.g. its initial value).
Homework: Show that a system described by the input output relation 𝑦(𝑡) = 𝑢2 (𝑡) is
noninvertible.
Definition: 1 An LTI system with impulse response h(t) is said to be invertible (i.e. both left- and right-invertible) if and only if there exists a continuous function g(t) such that g(t) ⋆ h(t) = δ(t) and h(t) ⋆ g(t) = δ(t).
Example: Find the inverse system of an accumulator y[n] = ∑_{k=−∞}^{n} x[k]. Ans: using the properties of the sum, one can verify that x[n] = y[n] − y[n−1].
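A minimal numerical check of this inverse pair (accumulator and first difference), using an arbitrary test sequence:
x = [3 -1 4 1 5];
y = cumsum(x);                      % accumulator: y[n] = sum of x[k] for k <= n
x_rec = [y(1), diff(y)]             % first difference recovers x exactly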
VIII. Eigenfunctions of Continuous-time LTI Systems: When we excite an LTI system with a phasor input v(t) = e^{st}, the response is the convolution of h(t) with v(t), and it turns out to be proportional to the input: y(t) = λe^{st} = h(t) ⋆ e^{st}.
Proof: The convolution of the phasor input v(t) = e^{st} with h(t) gives
y(t) = 𝑻{e^{st}} = ∫_{−∞}^{+∞} h(τ)e^{s(t−τ)}dτ = e^{st} ∫_{−∞}^{+∞} h(τ)e^{−sτ}dτ = λe^{st} ⬌ y(t) = λv(t) ■
Remark: Every LTI system has v(t) = e^{st} as an eigenfunction, where s ∈ ℂ is any complex number. We now define a new integral transformation λ = H(s) = ∫_{−∞}^{+∞} h(τ)e^{−sτ}dτ, which is called mathematically the Laplace transform of the impulse response; in system engineering it is named the transfer function of the LTI system, and it characterizes the system completely (λ is the corresponding characteristic value, or eigenvalue).
IX. Eigenfunction of Discrete-time LTI Systems As what we have seen before, a linear
time invariant system is a linear operator defined on a function space that commutes with
every time shift operator on that function space. Thus, we can also consider the eigenvector
functions, or eigenfunctions, of a system. It is particularly easy to calculate the output of a
system when an eigenfunction is the input as the output is simply the eigenfunction scaled
by the associated eigenvalue. As will be shown, discrete time complex exponentials serve as
eigenfunctions of linear time invariant systems operating on discrete time signals.
Consider a linear time invariant system 𝑯 with impulse response ℎ[𝑛] operating on some
space of infinite length discrete time signals. Recall that the output 𝑯(𝑥[𝑛]) of the system
for a given input 𝑥[𝑛] is given by the discrete time convolution of the impulse response with
the input
𝑯(x[n]) = ∑_{k=−∞}^{+∞} h[k]x[n − k]
Now consider the input x[n] = zⁿ where z ∈ ℂ. Computing the output for this input,
𝑯(zⁿ) = ∑_{k=−∞}^{+∞} h[k]z^{n−k} = zⁿ ∑_{k=−∞}^{+∞} h[k]z^{−k} = H(z)zⁿ
In steady state, the response to a complex exponential (or a sinusoid) of a certain frequency
is the same complex exponential (or sinusoid).
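A small numerical illustration of this eigenfunction property, assuming the first-order system y[n] = 0.5y[n−1] + x[n] (whose impulse response is h[n] = (1/2)ⁿu[n]): the ratio of output to input settles to the eigenvalue H(z) once the transient dies out.
z = exp(1j*pi/4);                   % any point on the unit circle
n = 0:60;
x = z.^n;                           % complex exponential input
y = filter(1, [1 -0.5], x);         % simulate y[n] = 0.5 y[n-1] + x[n]
Hz = 1/(1 - 0.5/z);                 % predicted eigenvalue H(z)
[y(end)/x(end); Hz]                 % the two values agree in steady state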
Solved Problems:
Exercise 1: determine whether the following continuous time system is linear or not
𝟏. 𝑦(𝑡) = 𝑥(sin(𝑡)) 𝟐. 𝑦(𝑡) = sin(𝑥(𝑡))
𝟑. 𝑦(𝑡) = 𝑡 2 𝑥(𝑡 − 1) 𝟒. 𝑦(𝑡) = 𝑒 𝑥(𝑡)
Exercise 2: determine whether the following discrete time system is linear or not
Exercise 3: check whether the following continuous time systems are causal or not
𝟏. y(t) = tx(t)   𝟐. y(t) = x(t²)   𝟑. y(t) = x²(t)   𝟒. y(t) = x(sin(t))   𝟓. y(t) = ∫_{−∞}^{4t} x(τ)dτ   𝟔. y(t) = (d/dt)x(t)
Ans: causal, non-causal, causal, causal, non-causal, causal.
Exercise 4: check whether the following continuous time systems are stable or not (i.e. compute lim_{t→∞}|y(t)/x(t)| when |x(t)| < M)
𝟏. y(t) = e^{x(t)}   𝟐. y(t) = sin(t)·x(t)   𝟑. y(t) = eᵗx(t)   𝟒. y(t) = √(x(sin(t)))   𝟓. y(t) = ∫_{−∞}^{t} x(τ)dτ   𝟔. y(t) = x(t − 2)/(t² + 1)
Exercise 5: determine whether the following continuous time systems are invertible or not (i.e. one-to-one operators or not)
𝟑. There are an infinite number of examples, but we give a few to make things clear:
y(t) = 1/(x(t) − M) & y(t) = e^{tx(t)} & y(t) = 1/(tx(t) + 1)
Exercise 7: check whether the LTI continuous time system described by its impulse response h(t) = e^{−at}sin(at)u(t), a > 0, is stable or not.
Ans:
∫_{−∞}^{+∞} |h(t)|dt = ∫_{−∞}^{+∞} |e^{−at}sin(at)u(t)|dt < ∫_{0}^{+∞} e^{−at}|sin(at)|dt < ∫_{0}^{+∞} e^{−at}dt = 1/a
Hence ∫_{−∞}^{+∞} |h(t)|dt < 1/a < ∞: the system is BIBO stable.
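The bound can be checked numerically; here a = 1 is chosen for illustration, and the upper limit 50 stands in for ∞ since the integrand is negligible beyond it:
a = 1;
I = integral(@(t) exp(-a*t).*abs(sin(a*t)), 0, 50)   % approx 0.55
bound = 1/a                                          % I is indeed below 1/a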
Exercise 8: compute the output y[n] = x[n] ⋆ h[n] for the following input/impulse-response pairs.
𝟏. x[n] = u[n] + u[−n−1] = 1 ∀n and h[n] = (1/2)ⁿu[n] + 2ⁿu[−n−1]; then
y[n] = ∑_{k=−∞}^{+∞} h[k]x[n − k] = ∑_{k=−∞}^{+∞} h[k]
⟹ y[n] = ∑_{k=0}^{+∞} (1/2)ᵏ + ∑_{k=−∞}^{−1} 2ᵏ = 2(∑_{k=0}^{+∞} (1/2)ᵏ) − 1 = 2/(1 − 1/2) − 1 = 3 ∀n
𝟐. x[n] = (1/3)ⁿu[n] and h[n] = (1/6)ⁿu[n]:
y[n] = x[n] ⋆ h[n] = ∑_{k=−∞}^{+∞} (1/3)ᵏu[k]·(1/6)^{n−k}u[n − k] = ∑_{k=0}^{n} (1/3)ᵏ(1/6)^{n−k}
⟹ y[n] = (1/6)ⁿ ∑_{k=0}^{n} 2ᵏ = (1/6)ⁿ{(1 − 2^{n+1})/(1 − 2)}u[n] = [2(1/3)ⁿ − (1/6)ⁿ]u[n]
𝟑. x[n] = u[n] and h[n] = 2ⁿu[−n]:
y[n] = ∑_{k=−∞}^{+∞} 2ᵏu[−k]u[n − k] = { ∑_{k=−∞}^{0} 2ᵏ if n ≥ 0 ; ∑_{k=−∞}^{n} 2ᵏ if n < 0 }
with
∑_{k=−∞}^{0} 2ᵏ = ∑_{k=0}^{∞} (1/2)ᵏ = 2 for n ≥ 0
∑_{k=−∞}^{n} 2ᵏ = ∑_{k=−n}^{∞} (1/2)ᵏ = 2ⁿ ∑_{k=0}^{∞} (1/2)ᵏ = 2^{n+1} for n < 0
𝟒. x[n] = (1/2)ⁿu[n] and h[n] = (1/4)ⁿu[n]:
y[n] = x[n] ⋆ h[n] = ∑_{k=−∞}^{+∞} (1/2)ᵏu[k]·(1/4)^{n−k}u[n − k] = (1/4)ⁿ ∑_{k=0}^{n} 2ᵏ ⟺ y[n] = (1/4)ⁿ[2^{n+1} − 1]u[n]
𝟓. x[n] = u[n] and h[n] = (1/2)ⁿu[n] + 2ⁿu[−n−1]:
y[n] = { ∑_{k=−∞}^{−1} 2ᵏ + ∑_{k=0}^{n} (1/2)ᵏ = 1 + 2(1 − (1/2)^{n+1}) if n ≥ 0 ; ∑_{k=−∞}^{n} 2ᵏ = 2^{n+1} if n < 0 }
⟹ y[n] = {3 − (1/2)ⁿ}u[n] + 2^{n+1}u[−n−1]
Exercise 10: check whether the following LTI discrete time systems, described by their impulse responses (IR), are stable or not, causal or not?
𝟏. h[n] = n(1/2)ⁿu[n]   𝟐. h[n] = 4ⁿu[2 − n]   𝟑. h[n] = (1/2)ⁿu[n] + (1.01)ⁿu[1 − n]
Ans: ❶ h[n] = n(1/2)ⁿu[n] is causal (h[n] = 0 for n < 0) and BIBO stable because h[n] is absolutely summable, i.e.
∑_{−∞}^{+∞}|h[n]| = ∑_{−∞}^{+∞}|n(1/2)ⁿu[n]| = ∑_{n=0}^{+∞} n(1/2)ⁿ = (1/2)/(1 − 1/2)² = 2 < ∞ → BIBO stable
We have used the fact that if |x| < 1 then ∑_{n=0}^{+∞} nxⁿ = x·(d/dx)∑_{n=0}^{+∞} xⁿ = x·(d/dx)(1/(1 − x)) = x/(1 − x)².
❷ h[n] = 4ⁿu[2 − n] is not causal (h[n] ≠ 0 for n < 0), but it is BIBO stable, since ∑_{n=−∞}^{2} 4ⁿ = 4²·∑_{k=0}^{∞}(1/4)ᵏ = 16·(4/3) = 64/3 < ∞.
❸ h[n] = (1/2)ⁿu[n] + (1.01)ⁿu[1 − n] is BIBO stable because h[n] is absolutely summable, i.e.
∑_{−∞}^{+∞}|h[n]| ≤ ∑_{k=0}^{+∞}(1/2)ᵏ + ∑_{k=−∞}^{1}(1.01)ᵏ
where ∑_{k=0}^{+∞}(1/2)ᵏ = 1/(1 − 1/2) = 2 and ∑_{k=−∞}^{1}(1.01)ᵏ = 1.01·∑_{k=0}^{+∞}(1/1.01)ᵏ = 1.01·(1/(1 − 1/1.01)) = 1.01 × 101 = 102.01
Hence ∑_{−∞}^{+∞}|h[n]| ≤ 104.01 < ∞. The system is not causal because h[n] ≠ 0 for n < 0.
Exercise 11: you are given input/output sequences {x₀[n], y₀[n]} for a particular LTI discrete system, where x₀[n] = {0, 1, 𝟐, 1, 0} & y₀[n] = {−1, −2, 𝟎, 2, 1} (bold marks n = 0). Find a relation between the input and output sequences, determine the response for x₁[n] = {𝟎, 1, 2, 3, 4, 3, 2, 1, 0}, and determine the impulse response (IR).
Exercise 12: determine and sketch the impulse response of the LTI system described by:
y[n] = x[n] − 2x[n−2] + x[n−3] − 3x[n−4]
Ans: because of the LTI properties we can say y[n] = 𝐓{x[n]} ↔ h[n] = 𝐓{δ[n]}, which means h[n] = δ[n] − 2δ[n−2] + δ[n−3] − 3δ[n−4].
Exercise 13: determine the impulse response of the accumulator y[n] = ∑_{k=−∞}^{n} x[k] and check its stability.
Ans: h[n] = 𝐓{δ[n]} = ∑_{k=0}^{∞} δ[n − k] = u[n]; the system implements a digital integrator (i.e. accumulator). The system is not stable because h[n] is not absolutely summable: ∑_{−∞}^{+∞}|h[n]| = ∞.
Exercise 14: a discrete time LTI system is described by its impulse response h[n] = αⁿu[n].
Ans: 𝟏. The system is always causal because h[n] = 0 ∀ n < 0, but the stability depends on the value of α:
∑_{−∞}^{+∞}|h[n]| = ∑_{−∞}^{+∞}|αⁿu[n]| = ∑_{n=0}^{+∞}|α|ⁿ
Case I: |α| < 1: ∑_{n=0}^{+∞}|α|ⁿ = 1/(1 − |α|) < ∞ ⟹ BIBO stable system
Case II: |α| ≥ 1: ∑_{n=0}^{+∞}|α|ⁿ = ∞ ⟹ unstable system
The system is stable when the exponent α lies inside the unit disk. This fact is extremely important: it is the property of locating the eigenvalues of the system within the unit disk. We will discuss this topic in the next chapters when talking about digital systems.
𝟐. Let us compute the output of the system when x[n] = u[n]; we have
y[n] = ∑_{k=−∞}^{+∞} x[k]h[n − k] = ∑_{k=−∞}^{+∞} u[k]α^{n−k}u[n − k] = αⁿ ∑_{k=0}^{n} α^{−k} = αⁿ·(1 − (1/α)^{n+1})/(1 − 1/α) = (α^{n+1} − 1)/(α − 1)
When α ≠ 1: lim_{n→∞} y[n] = lim_{n→∞} (α^{n+1} − 1)/(α − 1) = { 1/(1 − α) if |α| < 1 ; ∞ if |α| ≥ 1 }
When α = 1: y[n] = x[n] ⋆ h[n] = (n + 1)u[n], a ramp.
The figure below shows the output converging towards a constant value when the exponent is less than one.
clear all
clc
a=0.75;                         % |a| < 1: the step response converges to 1/(1-a)
n=0;
for i=1:20
y(i)=(1-a^(n+1))/(1-a);         % y[n] = (1 - a^(n+1))/(1 - a)
n=n+1;
end
stem(1:n,y,'b','linewidth',4)
grid on
Now let us compute the output of the system when x[n] = βⁿu[n]:
y[n] = ∑_{k=−∞}^{+∞} αᵏu[k]·β^{n−k}u[n − k] = βⁿ ∑_{k=0}^{n} αᵏβ^{−k} = βⁿ ∑_{k=0}^{n} (α/β)ᵏ
y[n] = { βⁿ·((α/β)^{n+1} − 1)/((α/β) − 1)·u[n] if α ≠ β ; βⁿ(n + 1)u[n] if α = β }
Exercise 15: a discrete time LTI system described by its impulse response h[n] = αⁿu[n] (0 < α < 1) is driven by the input x[n] = α^{−n}u[−n]; find the output.
Ans: The output is y[n] = x[n] ⋆ h[n] = 𝑻{x[n]}, which means y[n] = ∑_{k=−∞}^{+∞} αᵏu[k]·α^{k−n}u[k − n]; here we have two cases:
n ≥ 0: y[n] = α^{−n} ∑_{k=n}^{+∞} α^{2k} = α^{−n}·α^{2n}/(1 − α²) = αⁿ/(1 − α²)
n < 0: y[n] = α^{−n} ∑_{k=0}^{+∞} α^{2k} = α^{−n}/(1 − α²)
When 0 < α < 1 the output can be combined into one formula:
y[n] = α^{|n|}/(1 − α²) for all n
The figure below shows the output decaying towards zero when the exponent is less than one.
clear all
clc
a=0.75;
m=[ ];
n=-10;
for i=1:21
m=[m,n];                        % time axis n = -10,...,10
y(i)=(a^abs(n))/(1-a^2);        % y[n] = a^|n|/(1 - a^2)
n=n+1;
end
stem(m,y,'b','linewidth',4)
grid on
Exercise 16: Let y(t) = u(t) ⋆ h(t), and let A_y, A_u and A_h be the areas under the graphs of y(t), u(t), h(t) respectively. Prove that y(t) = u(t) ⋆ h(t) ⟹ A_y = A_u·A_h, or equivalently
∫_{−∞}^{+∞} y(t)dt = (∫_{−∞}^{+∞} u(t)dt)(∫_{−∞}^{+∞} h(t)dt)
Ans: A_y = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} u(ξ)h(t − ξ)dξ dt = ∫_{−∞}^{+∞} u(ξ)(∫_{−∞}^{+∞} h(t − ξ)dt)dξ = A_u·A_h
Exercise 17: Show that if y[n] = x[n] ⋆ h[n], then
▪ y[n] − y[n−1] = x[n] ⋆ (h[n] − h[n−1]) = (x[n] − x[n−1]) ⋆ h[n]
Exercise 18: Let y(t) = x₁(t) ⋆ x₂(t). Then show that x₁(t − t₁) ⋆ x₂(t − t₂) = y(t − t₁ − t₂):
y(t − t₁ − t₂) = 𝕕_{t₁}(𝕕_{t₂}(x₁(t) ⋆ x₂(t))) = 𝕕_{t₁}{x₁(t) ⋆ 𝕕_{t₂}(x₂(t))} = 𝕕_{t₁}(x₁(t)) ⋆ 𝕕_{t₂}(x₂(t)) = x₁(t − t₁) ⋆ x₂(t − t₂)
Exercise 19: Consider a stable continuous-time LTI system with impulse response ℎ(𝑡) that
is real and even. Show that cos(𝜔𝑡) and sin(𝜔𝑡) are eigenfunctions of this system with the
same real eigenvalue.
Exercise 20: Consider the system described by ẏ(t) + 2y(t) = ẋ(t) + x(t). Find the impulse response h(t) of the system.
Ans: The Laplace transform gives H(s) = (s + 1)/(s + 2) = 1 − 1/(s + 2) ⟹ h(t) = δ(t) − e^{−2t}u(t)
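The partial fraction step can be reproduced with MATLAB's residue function:
num = [1 1]; den = [1 2];
[r, p, k] = residue(num, den)
% r = -1, p = -2, k = 1, i.e. H(s) = 1 - 1/(s+2) -> h(t) = delta(t) - exp(-2t)u(t)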
Exercise 21: For each of the following systems, determine whether the system is
▪ Linear or nonlinear
▪ Causal or noncausal
▪ Time invariant or time varying
▪ Memory or memoryless (Dynamic/static)
▪ BIBO stable/unstable
In all cases 𝑥(𝑡) is an arbitrary input signal and 𝑦(𝑡) is the output
Exercise 22: A continuous linear system is described by the operator y(t) = sin(at + b)x(t − 1).
Ans: ▪ The system is linear (it obeys the superposition principle) but time varying, because of the time-dependent coefficient sin(at + b).
▪ h(t) = 𝑻{δ(t)} = sin(at + b)δ(t − 1) = sin(a + b)δ(t − 1). The system is BIBO stable because h(t) is absolutely integrable:
∫_{−∞}^{∞}|h(t)|dt = |sin(a + b)| ∫_{−∞}^{∞}|δ(t − 1)|dt = |sin(a + b)| < ∞
▪ H(s) = sin(a + b)e^{−s}
Exercise 23: Find the impulse response h(t) of a given LTI system where ẋ(t) = x(t) ⋆ h(t).
Ans: X(s)H(s) = sX(s) ⟹ H(s) = s ⟹ h(t) = δ′(t); or use the convolution property directly: ẋ(t) = x(t) ⋆ h(t) ⟹ δ̇(t) = δ(t) ⋆ h(t) = h(t).
Exercise 24: Show that the system described by the following differential equation is linear:
ẏ(t) + t²y(t) = (2t + 3)u(t)
Exercise 25: Show that the system described by the following differential equation is nonlinear:
y(t)ẏ(t) + 3y(t) = u(t)
Exercise 26: Show that different mathematical models can describes the same behavioral
system.
Answer: In power electronics the rectifier bridge circuit can be modeled by different math
representations:
1. 𝑦(𝑡) = sgn(𝑥(𝑡))𝑥(𝑡)
2. 𝑦(𝑡) = |𝑥(𝑡)|
3. 𝑦(𝑡) = √𝑥 2 (𝑡)
4. y(t) = {d|x(t)|/dx(t)}·x(t)
Exercise: 27 given a mapping, say whether the system is linear, causal and time invariant or not.
Solution:
Method I: From the Dirac properties and integration by parts we know that
[δ(t − ξ)x(ξ)]_{−∞}^{+∞} = ∫_{−∞}^{+∞} δ(t − ξ)ẋ(ξ)dξ − ∫_{−∞}^{+∞} δ̇(t − ξ)x(ξ)dξ
From the above result we obtain the general formula. When we apply an impulse δ(t) to the system as input, the system responds with h(t), and we can write
dh/dt = δ(t) − δ(t − 1) ⤇ h(t) = u(t) − u(t − 1) ⤇ H(s) = (1 − e^{−s})/s
∫_{−∞}^{∞}|h(t)|dt = ∫_{−∞}^{∞}|u(t) − u(t − 1)|dt = Area = 1 < ∞
Exercise: 30 Consider the system below, with h₁[n] = u[n + 1] and h₂[n] = −u[n] connected in parallel. Find y[n] when
x[n] = cos(2πn/7) + sin(πn/8)
Solution: we have y[n] = x[n] ⋆ {h₁[n] + h₂[n]} = x[n] ⋆ {u[n + 1] − u[n]} = x[n] ⋆ δ[n + 1]. Then the output will be
y[n] = x[n + 1] = cos(2π(n + 1)/7) + sin(π(n + 1)/8)
Exercise: 31 Write a program to convolve Π(t) with itself, given a sampling period Ts.
clear all
clc
Ts=0.01;
t=-1/2:Ts:1/2;                 % support of the gate
n=length(t);
f=ones(1,n);                   % sampled gate function
Y=conv(f,f);                   % numerical convolution
t=-1:Ts:1;                     % the support doubles
plot(t,Ts*Y,'k','linewidth',3) % scaling by Ts approximates the integral
grid on
The result is the triangle: Π(t) ⋆ Π(t) = Λ(t).
Exercise: 32 Prove the commutativity (f ⋆ g)(t) = (g ⋆ f)(t) and the associativity of convolution. For commutativity, we have
(f ⋆ g)(t) = ∫₀ᵗ f(t − u)g(u)du
and the change of variable v = t − u gives
(f ⋆ g)(t) = ∫₀ᵗ f(v)g(t − v)dv = (g ⋆ f)(t)
For associativity, define m(t − η) = ∫₀^∞ g(λ)h(t − η − λ)dλ, i.e. m(t) = g(t) ⋆ h(t); then
{(f ⋆ g) ⋆ h}(t) = ∫₀^∞ f(η)m(t − η)dη = f(t) ⋆ m(t) = {f ⋆ (g ⋆ h)}(t)
Exercise: 35 given a discrete linear time invariant (DLTI) system described by its mathematical operator y[n] = 𝑻{x[n]}, impulse response h[n], an input sequence x[n] and an output sequence y[n], check the validity of the following discrete signal and system properties:
❑ u[n] = ∑_{k=0}^{∞} δ[n − k]   ❑ y[n] = ∑_{k=−∞}^{∞} x[n − k]h[k]
❑ y[n] = 𝑻{zⁿ} = λzⁿ with λ = ∑_{k=−∞}^{+∞} h[k]z^{−k}   ❑ δ[−n] = δ[n] & δ[an] ≠ (1/|a|)δ[n]
❑ Π[n] = u[n − a] − u[n − b]   ❑ x[n] ⋆ h[n] = h[n] ⋆ x[n]
❑ (f[n] ⋆ h[n]) ⋆ g[n] = f[n] ⋆ (h[n] ⋆ g[n])   ❑ x[n]δ[n − k] = x[k]δ[n − k]
❑ BIBO stable if ∑_{−∞}^{+∞}|h[n]| < ∞   ❑ x[n] ⋆ δ[n] = x[n]
Solution: We solve some of them and leave the rest to students as a homework assignment.
❑ Let us verify u[n] = ∑_{k=0}^{∞} δ[n − k]. Writing the differences and summing telescopically:
δ[n] = u[n] − u[n−1]
δ[n−1] = u[n−1] − u[n−2]
⋮
δ[n−k] = u[n−k] − u[n−k−1]
Sum: u[n] = ∑_{k=0}^{∞} δ[n − k].
Equivalence between continuous and discrete time:
u[n] = ∑_{k=0}^{∞} δ[n − k] ⟺ u(t) = ∫_{−∞}^{t} δ(θ)dθ and δ[n] = u[n] − u[n−1] ⟺ δ(t) = du(t)/dt
❑ Let us verify y[n] = ∑_{k=−∞}^{+∞} x[k]h[n − k] = x[n] ⋆ h[n]. We know that h[n] = 𝐓{δ[n]}, the response to the Dirac impulse. The general response is of the form y[n] = 𝐓{x[n]} = 𝐓{∑_{k=−∞}^{+∞} x[k]δ[n − k]}; because of the LTI property we can say y[n] = ∑_{k=−∞}^{+∞} x[k]𝐓{δ[n − k]} = ∑_{k=−∞}^{+∞} x[k]h[n − k]. This last equation is called the discrete time convolution. Taking the change of variable m = n − k, we get
y[n] = ∑_{m=−∞}^{+∞} x[n − m]h[m] = h[n] ⋆ x[n] (commutativity property)
Thus this equation indicates that the complex exponential functions zⁿ are eigenfunctions of any LTI system, with λ = H(z) = ∑_{k=−∞}^{+∞} h[k]z^{−k}. The eigenvalue of a discrete-time LTI system associated with the eigenfunction zⁿ is given by H(z), a complex constant whose value is determined by the value of z via ∑_{k=−∞}^{+∞} h[k]z^{−k}. The above results underlie the definitions of the z-transform and the discrete-time Fourier transform, which will be discussed later.
Exercise: 36 Convolve the following two sequences using the polynomial method:
x[n] = {0 0 1 𝟏 0 1 1} & h[n] = {−2 1 𝟎 1 1}
where the bold entries mark n = 0. You can check the result with the MATLAB instructions below.
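The MATLAB instructions referred to here were presumably along these lines; conv evaluates the polynomial product, which is exactly the convolution:
x = [0 0 1 1 0 1 1];
h = [-2 1 0 1 1];
y = conv(x, h)                      % y[n] = x[n] * h[n]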
Solution: the products are arranged in a multiplication table (shown here for the pair x[n] = {0, 1, 2, 3}, h[n] = {−1, −1, 3, 4}) and the convolution values are the sums along the anti-diagonals:
  ⋆ |  −1  −1   3   4
  0 |   0   0   0   0
  1 |  −1  −1   3   4
  2 |  −2  −2   6   8
  3 |  −3  −3   9  12
Exercise: 38 Convolve the following two sequences using the sum-of-columns method: x[n] = {1 1 2 −1} & h[n] = {1 2 3 −1}.
Solution: y[n] = h[n] ⋆ x[n]; each row is x[n] scaled by one sample of h[n] and shifted accordingly, and the columns are summed:
 1   1   2  −1
     2   2   4  −2
         3   3   6  −3
            −1  −1  −2   1
 1   3   7   5   3  −5   1
y[n] = {1, 3, 7, 5, 3, −5, 1}
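The column sums can be verified in one line with MATLAB:
y = conv([1 1 2 -1], [1 2 3 -1])    % returns 1 3 7 5 3 -5 1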
Review Questions:
In the mapping, we understand that the whole signal x(t) is transformed into the whole signal y(t). You may encounter the notation y(t) = 𝑻[x(t)]. This notation can be misleading: it can also be read as saying that the value of the signal y at time t is functionally related only to the value of the signal x at the same time t. When we want to indicate the functional relationship between values, we will use a different, explicit notation.
When an LTI system is used to modify the spectrum of a signal, it is called a filter. We can classify filters according to their amplitude response. Let H(s) be the transfer function.
Summary:
CHAPTER III:
Analysis of Continuous
LTI Systems by Laplace
Transform
I. Introduction
II. Laplace transformation
II.I. The Region of Convergence
II.II Properties of The ROC
II.III Properties of The Laplace Transform
III. Inverse of the Laplace transforms
III.I. Inversion by Partial fraction expansions
III.II Popular application of the Laplace transform
IV. System Response and Transfer Function
IV.I Transfer Function
IV.II Poles and Zeros of the System Function
IV.III System Stability
IV.IV Interconnected system
V. Solved Problems
The Laplace transform plays a particularly important role in the analysis and
representation of continuous-time LTI systems. Many properties of a system can be
tied directly to characteristics of the poles, zeros, and region of convergence of the
system function 𝑯(𝑠). Due to its convolution property, Laplace transform is a
powerful tool for analyzing LTI systems. As discussed before, when the input is an
eigenfunction of an LTI system, i.e., 𝐱(t) = e^{λt}, the action of the system
𝐲(t) = 𝑻(𝐱(t)) can be found by multiplying the input by the system's eigenvalue 𝑯(λ):
𝐲(t) = 𝑻(e^{λt}) = 𝑯(λ)e^{λt}. If an LTI system is causal (with a right-sided impulse
response function 𝐡(t) = 0 for t < 0), then the ROC of its transfer function 𝑯(s) is a
right-sided plane. In particular, when 𝑯(s) is rational, the system is causal if
and only if its ROC is the half-plane to the right of the rightmost pole, and the
order of the numerator is no greater than that of the denominator.
A causal LTI system with a rational transfer function 𝑯(𝑠) is stable if and only if all
poles of 𝑯(𝑠) are in the left half of the s-plane, i.e., the real parts of all poles are
negative.
Analysis of Continuous
LTI Systems by Laplace
Transform
I. Introduction: In system analysis, among other fields of study, a linear time-invariant
system (or "LTI system") is a system that produces an output signal from any input signal
subject to the constraints of linearity and time-invariance. These properties apply to many
important physical systems, in which case the response 𝑦(𝑡) of the system to an arbitrary
input 𝑥(𝑡) can be found directly using convolution: 𝑦(𝑡) = 𝑥(𝑡) ⋆ ℎ(𝑡) where ℎ(𝑡) is called the
system's impulse response and ⋆ represents convolution. What's more, there are systematic
methods for solving any such system (determining ℎ(𝑡)), whereas systems not meeting both
properties are generally more difficult (or impossible) to solve analytically.
LTI system theory is an area of applied mathematics which has direct applications in
electrical circuit analysis and design, signal processing and filter design, control theory,
mechanical engineering, image processing, the design of measuring instruments of many
sorts, NMR spectroscopy, and many other technical areas where systems of ordinary
differential equations present themselves.
The main goal in analysis of any dynamic system described by ordinary differential
equations is to find its response to a given input. The system response in general has two
components: zero-state response due to external forcing signals and zero-input response
due to system initial conditions. The Laplace transform will produce both the zero-input
and zero-state components of the system response. We will also present procedures for
obtaining the system impulse, step, and ramp responses. It is important to point out that the
Laplace transform is very convenient for dealing with the system input signals that have
jump discontinuities (and delta impulses).
The first thing to do when you are dealing with the analysis and design of a technical
problem is to develop an appropriate model, for example if we are given a physical,
biological, or social problem, we may first develop a mathematical model for it, and then
solve the model, and interpret its solution with respect to the problem statement. Modeling
is a process of abstraction of a real system, and the abstracted model may be logical or
mathematical. A mathematical model is a mathematical function governing the properties and interactions in the system.
The mathematical models of systems are obtained by applying the fundamental physical
laws governing the nature of the components making these systems. In real life many
systems are time-variant or nonlinear, but they can be linearized around certain operating
ranges about their equilibrium.
Consider the following LTI system described by its ordinary differential equation:
∑_{k=0}^{n} a_k dᵏy(t)/dtᵏ = ∑_{k=0}^{m} b_k dᵏx(t)/dtᵏ
As is well known, ordinary linear differential equations can be solved in the time domain or in the frequency domain. On the other hand, finding the impulse response of a linear system is the same as finding solutions to the differential equation; among the methods that can be used in the frequency domain are the Fourier method and the Laplace method.
However, in the use of Fourier transform, we have been able to find only the zero-state
system response. While the Laplace transform is a powerful integral transform used to
switch a function from the time domain to the s-domain and can be used to solve linear
differential equations with given initial conditions.
Example: Consider the functions f(t) = e^{t²}, f(t) = tⁿ, f(t) = t^{1/2}, and f(t) = ln(1 + t).
Remark: When one says "the Laplace transform" without qualification, the unilateral or
one-sided transform is normally intended. The Laplace transform can be alternatively
defined as the bilateral Laplace transform or two-sided Laplace transform by extending the
limits of integration to be the entire real axis. If that is done the common unilateral
transform simply becomes a special case of the bilateral transform where the definition of
the function being transformed is multiplied by the Heaviside step function.
Example: Consider the signal f(t) = e^{−at}u(t), a real. Then, by the definition of the Laplace transform of f(t),
F(s) = ∫_{−∞}^{+∞} e^{−at}e^{−st}u(t)dt = ∫_{0}^{+∞} e^{−(a+s)t}dt = [−e^{−(a+s)t}/(a + s)]₀^{+∞} = 1/(s + a)
with Re(s) > −a, because lim_{t→∞} e^{−(a+s)t} = 0 only if Re(s + a) > 0, i.e. Re(s) > −a.
Thus, the ROC for this example is specified as Re(s) > −a and is displayed in the complex plane as shown in the figure by the shaded area to the right of the line Re(s) = −a. In Laplace transform applications, the complex plane is commonly referred to as the s-plane. The horizontal and vertical axes are sometimes referred to as the σ-axis and the jω-axis, respectively.
[Figure: the s-plane with the ROC Re(s) > −a shaded, for a > 0 and a < 0]
II.II Properties of The ROC: Determining the ROC from the definition requires some effort. However, there are some properties of the ROC which simplify its determination; furthermore, some important properties of the function f(t) can be read directly from its ROC. We discuss some of these properties in this section because they will be important in our later discussions.
The ROC of a function f(t) is unaffected by a time shift of the function: for any value of t₀, the ROCs of f(t) and f(t − t₀) are the same.
The ROC of any function is always of the form σ₁ ≤ σ ≤ σ₂; that is, the ROC is always an interval of the σ-axis and not a disjoint set of intervals.
For rational Laplace transforms, the ROC does not contain any poles.
If f(t) is of finite duration and is absolutely integrable, then the ROC is the entire s-plane.
II.III Properties of The Laplace Transform: Basic properties of the Laplace transform are
presented in the following.
2. Time Shifting (Delay): The Laplace transform of the shifted (i.e. delayed) function f(t ± α)u(t ± α) is given by ℒ{f(t ± α)u(t ± α)} = e^{±αs}F(s). This property can be verified by taking the change of variable ξ = t ± α. This shift theorem is very important, because in the analysis and synthesis of engineering problems we are not always faced with transient responses starting at time t = 0; rather, a delay in the forcing function causes a shift in the system response.
3. Shifting in the s-Domain: Let the Laplace transform of f₁(t) be F₁(s) with the ROC σₐ < σ < σ_b. Then the Laplace transform of e^{±s₀t}f₁(t), in which s₀ = σ₀ + jω₀, is ℒ{e^{±s₀t}f₁(t)} = F₁(s ∓ s₀), with the ROC shifted accordingly: σₐ ± σ₀ < σ < σ_b ± σ₀.
4. Time Scaling: Let the Laplace transform of f₁(t) be F₁(s) with the ROC σₐ < σ < σ_b. Then the Laplace transform of f₁(at) is
F(s) = ℒ{f₁(at)} = ∫_{0}^{+∞} f₁(at)e^{−st}dt = (1/|a|)∫_{0}^{+∞} f₁(ξ)e^{−sξ/a}dξ = (1/|a|)F₁(s/a)
Time Reversal: A special case of interest is a = −1. For this special case we obtain ℒ{f(−t)} = F(−s). Thus, time reversal of f(t) produces a reversal of both the σ- and jω-axes in the s-plane, with the ROC −σ_b < σ < −σₐ or, equivalently, σₐ < −σ < σ_b.
5. Differentiation in the Time Domain:
ℒ{f^{(n)}(t)} = sⁿF(s) − ∑_{k=1}^{n} s^{n−k}·(d^{k−1}f(t)/dt^{k−1})|_{t=0}
Comment: if the function f(t) is assumed to be n-times differentiable, with an nth derivative of exponential type, then ℒ{f^{(n)}(t)} follows by mathematical induction.
6. Differentiation in the s-Domain: This property is the dual of the time-differentiation property. Let the Laplace transform of f₁(t) be F₁(s) with the ROC σₐ < σ < σ_b. Then the frequency-differentiation property states that
ℒ{tf₁(t)} = ∫_{0}^{+∞} tf₁(t)e^{−st}dt = −∫_{0}^{+∞} f₁(t)(d/ds·e^{−st})dt = −(d/ds)∫_{0}^{+∞} f₁(t)e^{−st}dt = −dF₁(s)/ds
and, more generally,
ℒ{tⁿf₁(t)} = (−1)ⁿ dⁿF₁(s)/dsⁿ
7. Integration in the Time Domain: When dealing with differential equations, it is necessary to know the Laplace transforms of the derivative and of the integral of the time function. The construction of the Laplace transform of ∫_{−∞}^{t} f₁(τ)u(τ)dτ is just a consequence of the differentiation property discussed before. Let the Laplace transform of f₁(t) be F₁(s) with the ROC σₐ < σ < σ_b. Then the Laplace transform of this integral is
F(s) = ℒ{∫_{−∞}^{t} f₁(τ)u(τ)dτ} = (1/s)F₁(s) − (1/s)(∫_{−∞}^{t} f₁(τ)u(τ)dτ|_{t=0})
To validate this fact, we perform integration by parts on ℒ{∫₀ᵗ f₁(τ)dτ}: we let
u = ∫₀ᵗ f₁(τ)dτ, dv = e^{−st}dt, so that du = f₁(t)dt, v = −(1/s)e^{−st}
Then
F(s) = ∫_{0}^{+∞}(∫₀ᵗ f₁(τ)dτ)e^{−st}dt = −(1/s)[(∫₀ᵗ f₁(τ)dτ)e^{−st}]_{0⁻}^{+∞} + (1/s)∫_{0⁻}^{+∞} f₁(t)e^{−st}dt
F(s) = (1/s)F₁(s) − (1/s)(∫₀ᵗ f₁(τ)dτ|_{t=0}) = (1/s)F₁(s)
This last equation shows that the Laplace-transform operation corresponding to time-domain integration is multiplication by 1/s; this is expected, since integration is the inverse operation of differentiation. The integration produces a pole at zero (i.e. 1/s) that has an effect on the region of convergence.
8. Convolution in the Time Domain: Let f(t) = f₁(t) ⋆ f₂(t), where the Laplace transforms of the individual time functions are F₁(s) and F₂(s); then the Laplace transform of f(t) is F(s), with
ℒ{f₁(t) ⋆ f₂(t)} = ℒ{∫₀ᵗ f₁(λ)f₂(t − λ)dλ}
which means
F(s) = ∫_{0}^{+∞} e^{−st}[∫₀ᵗ f₁(λ)f₂(t − λ)dλ]dt
Since e^{−st} does not depend on λ, we can move this factor inside the inner integral. If we do this and also reverse the order of integration, the result is
F(s) = ∫_{0}^{+∞} f₁(λ)[∫_{λ}^{+∞} e^{−st}f₂(t − λ)dt]dλ
Now make the substitution x = t − λ:
F(s) = ∫_{0}^{+∞} e^{−sλ}f₁(λ)[∫_{0}^{+∞} e^{−sx}f₂(x)dx]dλ = F₂(s)∫_{0}^{+∞} e^{−sλ}f₁(λ)dλ = F₁(s)F₂(s)
9. Integration in the s-Domain: Let the Laplace transform of f₁(t) be F₁(s) with the ROC σₐ < σ < σ_b. We want the time-domain equivalent of F(s) = ∫_{s}^{+∞} F₁(μ)dμ:
F(s) = ∫_{s}^{+∞}[∫_{0}^{+∞} e^{−μt}f₁(t)dt]dμ = ∫_{0}^{+∞} f₁(t)[∫_{s}^{+∞} e^{−μt}dμ]dt = ∫_{0}^{+∞} (f₁(t)/t)e^{−st}dt ⟹ ℒ{f₁(t)/t} = ∫_{s}^{+∞} F₁(μ)dμ
Comment: This is deduced using the nature of frequency differentiation and conditional convergence.
10. Multiplication: The multiplication property is the dual of convolution in the time domain and is often referred to as the frequency-convolution theorem: multiplication in the time domain becomes convolution in the frequency domain,
F(s) = ℒ{f₁(t)f₂(t)} = (1/2πi)·F₁(s) ⋆ F₂(s)
In other words,
F(s) = ∫_{0}^{+∞} f₁(t)f₂(t)e^{−st}dt = lim_{T→∞} (1/2πi)∫_{c−iT}^{c+iT} F₁(λ)F₂(s − λ)dλ
The inversion formula for the Laplace transform (you will see it later) gives
∫_{0}^{+∞} f₁(t)f₂(t)e^{−st}dt = ∫_{0}^{+∞}((1/2πi)∫_{γ−i∞}^{γ+i∞} F₁(λ)e^{λt}dλ)f₂(t)e^{−st}dt
= (1/2πi)∫_{γ−i∞}^{γ+i∞} F₁(λ)(∫_{0}^{∞} f₂(t)e^{−(s−λ)t}dt)dλ = lim_{T→∞}(1/2πi)∫_{c−iT}^{c+iT} F₁(λ)F₂(s − λ)dλ
11. Periodic function: Consider a periodic signal f(t) of period T; it can be expressed as a sum of time-shifted copies of its first period f₁(t):
f(t) = ∑_{k=0}^{∞} f₁(t − kT) ⟹ F(s) = F₁(s)(1 + e^{−Ts} + e^{−2Ts} + ⋯) = F₁(s)/(1 − e^{−Ts})
12. Initial & Final value theorems: The initial value f(0) and final value f(∞) are used to relate frequency-domain expressions to the time-domain behavior as time approaches zero and infinity, respectively. These properties can be derived using the differentiation property:
ℒ{f′(t)} = ∫_{0}^{+∞} (df(t)/dt)e^{−st}dt = sF(s) − f(0)
❇ If we consider s → ∞, the integral on the left vanishes, and we obtain the initial-value theorem: f(0) = lim_{t→0} f(t) = lim_{s→∞} sF(s).
Warning: The initial value theorem should be applied only if 𝐹(𝑠) is strictly proper (𝑚 < 𝑛),
because for 𝑚 ≥ 𝑛, lim𝑠⟶∞ 𝑠𝐹(𝑠) does not exist, and the theorem does not apply.
❇ If we consider s → 0, then
lim_{s→0}{sF(s) − f(0)} = lim_{s→0}(∫_{0}^{+∞} (df(t)/dt)e^{−st}dt) = [f(t)]₀^{∞} = f(∞) − f(0) ➡ f(∞) = lim_{t→∞} f(t) = lim_{s→0} sF(s)
This is known as the final-value theorem. In the final-value theorem, all poles of F(𝑠) must
be located in the left half of the s-plane. The final value theorem is useful because it gives
the long-term behavior without having to perform partial fraction decompositions or other
difficult algebra. If F(s) has a pole in the right-hand plane or poles on the imaginary axis
(e.g., if f(𝑡) = 𝑒 𝑎𝑡 or f(𝑡) = sin(𝑡)), the behavior of this formula is undefined.
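As an illustration of the final-value theorem, take f(t) = 1 − e^{−t}, whose transform is F(s) = 1/(s(s+1)) and whose final value is 1; the limit can be checked symbolically (assuming the Symbolic Math Toolbox is available):
syms s
F = 1/(s*(s+1));                    % L{1 - exp(-t)}
limit(s*F, s, 0)                    % returns 1 = lim f(t) as t -> inf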
II.IV Laplace Transforms of Some Common Signals: The Laplace transforms of most of
the commonly encountered functions can be derived from knowledge of the transform for
only a few elementary functions. In these derivations, it is assumed that f(t) = 0 for t < 0,
i.e. all the functions are multiplied by u(t).
f(t)                          F(s)                       ROC
u(t)                          1/s                        σ > 0
tu(t)                         1/s²                       σ > 0
(t^{n−1}/(n−1)!)u(t)          1/sⁿ                       σ > 0
e^{−at}u(t)                   1/(s + a)                  σ > −a
−e^{−at}u(−t)                 1/(s + a)                  σ < −a
tⁿu(t)                        n!/s^{n+1}                 σ > 0
te^{−at}u(t)                  1/(s + a)²                 σ > −a
(t^{n−1}e^{−at}/(n−1)!)u(t)   1/(s + a)ⁿ                 σ > −a
sin(ω₀t)u(t)                  ω₀/(s² + ω₀²)              σ > 0
cos(ω₀t)u(t)                  s/(s² + ω₀²)               σ > 0
sinh(ω₀t)u(t)                 ω₀/(s² − ω₀²)              σ > ω₀
cosh(ω₀t)u(t)                 s/(s² − ω₀²)               σ > ω₀
e^{−at}sin(ω₀t)u(t)           ω₀/((s + a)² + ω₀²)        σ > −a
e^{−at}cos(ω₀t)u(t)           (s + a)/((s + a)² + ω₀²)   σ > −a
t·sin(ω₀t)u(t)                2ω₀s/(s² + ω₀²)²           σ > 0
t·cos(ω₀t)u(t)                (s² − ω₀²)/(s² + ω₀²)²     σ > 0
A. Unit Impulse Function δ(t): It is well known from the definition of a unit impulse that
∫_{−∞}^{+∞} δ(t)φ(t)dt = φ(0) ⬌ ∫_{−∞}^{+∞} δ(t)e^{−st}dt = e^{−st}|_{t=0} = 1 ⬌ ℒ{δ(t)} = 1
C. Unit Step Function u(t): Setting a = 0 in the transform of the exponential function e^{−at}u(t), namely 1/(s + a), we get the step function transform: ℒ{u(t)} = s⁻¹.
D. Unit Ramp Function tu(t): Treating the ramp function as the multiplication of the unit step function by t (i.e. f(t) = tu(t)) and using integration by parts, we get
ℒ{tu(t)} = ∫_{−∞}^{+∞} tu(t)e^{−st}dt = ∫_{0}^{+∞} te^{−st}dt = 1/s² ⬌ ℒ{tu(t)} = s⁻²
E. Sinusoidal Functions: Treating the sinusoidal signals as sums of exponential functions, i.e. 2i·f(t) = e^{iat} − e^{−iat} and 2·g(t) = e^{iat} + e^{−iat},
ℒ{f(t)} = ℒ{sin(at)} = (1/2i)ℒ{e^{iat} − e^{−iat}} = a/(s² + a²)
ℒ{g(t)} = ℒ{cos(at)} = (1/2)ℒ{e^{iat} + e^{−iat}} = s/(s² + a²)
The Laplace transforms of some common signals are tabulated in the table shown above. Instead of having to re-evaluate the transform of a given signal, we can simply refer to such a table and read off the desired transform.
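A few entries of the table can be double-checked symbolically (assuming the Symbolic Math Toolbox is available; laplace treats its argument as zero for t < 0, so the factor u(t) is implicit):
syms t s
syms a w positive
laplace(exp(-a*t)*cos(w*t), t, s)   % (s + a)/((s + a)^2 + w^2)
laplace(t*sin(w*t), t, s)           % 2*w*s/(s^2 + w^2)^2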
III. Inverse of the Laplace transforms: Inversion of the Laplace transform to find the signal f(t) from its Laplace transform F(s) is called the inverse Laplace transform, symbolically denoted as f(t) = ℒ⁻¹{F(s)}. There is a procedure applicable to all classes of transform functions that involves the evaluation of a line integral in the complex s-plane; that is,
f(t) = ℒ⁻¹{F(s)} = (1/2πi)∫_{γ−i∞}^{γ+i∞} F(s)e^{st}ds
In this integral, the real 𝛾 is to be selected such that if the region of convergence ROC of
F(𝑠) is 𝜎1 < Re(𝑠) < 𝜎2 , then 𝜎1 < 𝛾 < 𝜎2 . The evaluation of this inverse Laplace transform
integral requires an understanding of complex variable theory. In fact, calculating this
integral requires a thorough knowledge of the integration of complex functions over lines in
C, an extensive subject which is outside the scope of this book. Hence, the fundamental
theorem of the Laplace transform will not be used in the remainder of this book; partial
fraction expansion will be used to obtain the inverse Laplace transform of a rational
function F(𝑠). It will be assumed that F(𝑠) has real coefficients; in practice this is usually
the case. We will now describe in a number of steps how the inverse Laplace transform of
such a rational function F(𝑠) can be determined.
F(s) = (b_m sᵐ + ⋯ + b₁s + b₀)/(a_n sⁿ + ⋯ + a₁s + a₀) = N(s)/D(s)
The function F(s) is improper if m ≥ n and proper if m < n. An improper function can always be separated into the sum of a polynomial in s and a proper function:
if m ≥ n, F(s) is improper and F(s) = Q(s) + N′(s)/D′(s); if m < n, F(s) is proper and F(s) = N(s)/D(s).
Example: Consider, for example, the function
F(s) = N(s)/D(s) = (2s³ + 9s² + 11s + 2)/(s² + 4s + 3)
Because this is an improper function, we divide the numerator by the denominator until the remainder has a lower degree than the denominator. Therefore, F(s) can be expressed as
F(s) = (2s + 1) + (s − 1)/(s² + 4s + 3)
where (2s + 1) is the polynomial part and the remaining term is the proper fraction.
A proper function can be further expanded into partial fractions. This method consists of writing a rational function as a sum of appropriate partial fractions with unknown coefficients, which are determined by clearing fractions and equating the coefficients of like powers on the two sides:
F(s) = N(s)/D(s) = N(s)/[(s − p₁)(s − p₂)⋯(s − p_n)] = R₁/(s − p₁) + R₂/(s − p₂) + ⋯ + R_n/(s − p_n)
Example: Consider F(s) = 1/(s³(s − 1)), which has a triple pole at s = 0 and a simple pole at s = 1; then s³F(s) = 1/(s − 1), and
R₁₁ = lim_{s→0} (1/0!){s³F(s)} = −1,  R₁₂ = lim_{s→0} (1/1!)(d/ds){s³F(s)} = −1/(s − 1)²|_{s=0} = −1
R₁₃ = lim_{s→0} (1/2!)(d²/ds²){s³F(s)} = 1/(s − 1)³|_{s=0} = −1,  R₂₁ = lim_{s→1}{(s − 1)F(s)} = 1
so that F(s) = −1/s − 1/s² − 1/s³ + 1/(s − 1).
Computer Example: Find the inverse Laplace transform of the following functions using
partial fraction expansion method (Matlab):
F(s) = (2s² + 7s + 4)/(s³ + 5s² + 8s + 4)
num=[2 7 4]; den=[1 5 8 4]; [r,p,k]=residue(num,den)
>> r = 3, 2, -1
>> p = -2, -2, -1
>> k = [ ]
Hence,
F(s) = 3/(s + 2) + 2/(s + 2)² − 1/(s + 1) ⇒ f(t) = (3e^{−2t} + 2te^{−2t} − e^{−t})u(t)
A popular application of the Laplace transform: is in solving linear differential
equations with constant coefficients. In this case, the motivation for using the Laplace
transform is to simplify the solution of the differential equation. There is some analogy
between logarithms and Laplace transforms, that is the logarithms are able to reduce the
multiplication of two numbers to the sum of their logarithms, while the Laplace transform
reduces the solution of a differential equation to an algebraic equation.
IV. System Response and Transfer Function: The Laplace transform method gives the
total response, which includes zero input and zero-state components. It is possible to
separate the two components if we so desire. The initial condition terms in the response
give rise to the zero-input response.
Let us apply the Laplace transform to a linear time invariant system described by an ordinary differential equation (initial value problem); we get
∑_{k=0}^{n} a_k dᵏy(t)/dtᵏ = ∑_{k=0}^{m} b_k dᵏx(t)/dtᵏ ⟺
∑_{k=0}^{n} a_k (sᵏY(s) − ∑_{i=1}^{k} s^{k−i}·(d^{i−1}y(t)/dt^{i−1})|_{t=0}) = ∑_{k=0}^{m} b_k (sᵏX(s) − ∑_{i=1}^{k} s^{k−i}·(d^{i−1}x(t)/dt^{i−1})|_{t=0})
Simplification of this last equation gives
Y(s) = [∑_{k=0}^{n} a_k ∑_{i=1}^{k} s^{k−i} y^{(i−1)}(0) − ∑_{k=0}^{m} b_k ∑_{i=1}^{k} s^{k−i} x^{(i−1)}(0) + (∑_{k=0}^{m} b_k sᵏ)X(s)] (∑_{k=0}^{n} a_k sᵏ)^{−1}
This response can be decomposed into two main parts, Y(s) = Y_Force(s) + Y_Initial(s), where:
Y_Force(s) = (∑_{k=0}^{m} b_k sᵏ)(∑_{k=0}^{n} a_k sᵏ)^{−1} X(s)
Y_Initial(s) = (∑_{k=1}^{n} a_k ∑_{i=1}^{k} s^{k−i} y^{(i−1)}(0) − ∑_{k=1}^{m} b_k ∑_{i=1}^{k} s^{k−i} x^{(i−1)}(0))(∑_{k=0}^{n} a_k sᵏ)^{−1}
⦁ If we are interested in the oscillatory behavior of the system, a steady-state plus transient decomposition is used: Y(s) = (Steady state + Transient)_response.
⦁ If we are interested in the influence (or impact) of initial conditions on the system, a forced plus free (non-forced) decomposition is used: Y(s) = (Forced + Non-forced)_responses.
⦁ If we are interested in the algebraic structure of the solution, a homogeneous plus particular decomposition is used: Y(s) = (Homogeneous + Particular)_solutions.
IV.I Transfer Function: In LTI systems the response to a phasor of frequency s is also a phasor; the output phasor is proportional to the input one, and the constant of proportionality is called the transfer function. Since an LTI system is a linear operator, we can say that the phasors are the "eigenfunctions" of LTI systems while the transfer function is the "eigenvalue". As we have seen before, the fundamental result in LTI system theory is that any LTI system can be characterized entirely by a single function called the system's impulse response, which is completely independent of initial conditions.
Remark: The transfer function is defined for, and is meaningful to, LTI systems only. It
does not exist for nonlinear or time-varying systems in general.
The transfer function is a property of a system itself, independent of the magnitude and
nature of the input or driving function. It includes the units necessary to relate the input to
the output; however, it does not provide any information concerning the physical structure
of the system (i.e. the transfer functions of many physically different systems can be
identical). If it is known, the output or response can be studied for various forms of inputs
with a view toward understanding the nature of the system. It may be established
experimentally by introducing known inputs and studying the output of the system. Once
established, a transfer function gives a full description of the dynamic characteristics of the
system, as distinct from its physical description. When the LTI system is used to modify the
spectrum of a signal, it is called a filter. (i.e. transfer function≡filter).
IV.II Poles and Zeros of the System Function: The poles and zeros of a transfer function are the frequencies for which the denominator and numerator of the transfer function become zero, respectively. The values of the poles and zeros of a system determine whether the system is stable and how well the system performs. Physically realizable systems must have a number of poles greater than or equal to the number of zeros; systems that satisfy this relationship are called proper.
As 𝑠 approaches a zero, the numerator of the transfer function (and therefore the transfer
function itself) approaches the value 0. When 𝑠 approaches a pole, the denominator of the
transfer function approaches zero, and the value of the transfer function approaches
infinity.
∑_{k=0}^{n} a_k dᵏy(t)/dtᵏ = ∑_{k=0}^{m} b_k dᵏx(t)/dtᵏ ⟹ H(s) = N(s)/D(s) = ratio of two polynomials = (∑_{k=0}^{m} b_k sᵏ)/(∑_{k=0}^{n} a_k sᵏ)
The z_i's are the roots of the equation N(s) = 0 (or lim_{s→z_i} H(s) = 0) and are defined to be the system zeros, and the p_i's are the roots of the equation D(s) = 0 (or lim_{s→p_i} H(s) = ∞) and are defined to be the system poles. If all of the coefficients of the polynomials N(s) and D(s) are real, then the poles and zeros must be either purely real or appear in complex-conjugate pairs. In general, for the poles, either p_i = σ_i, or else p_i, p_{i+1} = σ_i ± jω_i. Similarly, the system zeros are either real or appear in complex-conjugate pairs.
Remark: The existence of a single complex pole without a corresponding conjugate pole would generate complex coefficients in the polynomial D(s).
Because the transfer function completely represents a system differential equation, its poles and zeros effectively define the system response. In particular, the system poles directly define the components in the homogeneous response. The unforced response of a linear system to a set of initial conditions is y_h(t) = ∑_{k=1}^{n} β_k e^{λ_k t}, where the constants β_k are determined from the given set of initial conditions and the exponents λ_k are the roots of the characteristic equation, i.e. the system eigenvalues. The characteristic equation of the system is D(s) = ∑_{k=0}^{n} a_k sᵏ = 0 and its roots are the system poles, that is λ_k = p_k. In a stable system all components of the homogeneous response must decay to zero as time increases; if any pole has a positive real part, there is a component in the output that increases without bound, causing the system to be unstable. Indeed, expanding the transfer function into partial fractions over its poles,
H(s) = ∑_{k=1}^{n} c_k ℒ{e^{p_k t}u(t)} = ∑_{k=1}^{n} c_k (∫_{0}^{∞} e^{(p_k − s)t}dt) = ∑_{k=1}^{n} c_k/(s − p_k)
Example: Plot the magnitude of H(s) = (s² + 2s + 17)/(s² + 4s + 104), showing its poles and zeros.
In order for a linear system to be stable, all of its poles must have negative real parts, that
is they must all lie within the left-half of the s-plane. An “unstable” pole, lying in the right
half of the s-plane, generates a component in the system homogeneous response that
increases without bound from any finite initial conditions.
A system having one or more poles lying on the imaginary axis of the s-plane has non-
decaying oscillatory components in its homogeneous response, and is defined to be
marginally stable.
Example: To get a big picture of the system response and its stability, try to plot the step and impulse responses of the system
H(s) = (s + a)/(s² + bs + c)
taking all the possible cases of pole location. Represent each response in a small window near its pole locations in the s-plane.
IV.IV Interconnected system: Interconnections are very common in systems engineering.
The system that is to be processed commonly referred to as the plant may itself be the
result of interconnecting various sorts of subsystems in series, in parallel, and in feedback.
In addition, the plant is interfaced with sensors, actuators and the control system. Our
model for the overall system represents all of these components in some idealized or
nominal form, and will also include components introduced to represent uncertainties in,
or neglected aspects of the nominal description.
Solved Problems
Exercise: 1 For the given signal f(𝑡) find the Laplace transform F(𝑠)
Solution:
❇ f(t) = u(t) ⟹ F(s) = ∫_{0⁻}^{+∞} u(t)e^{−st}dt = [−(1/s)e^{−st}]_{0⁻}^{∞} = 1/s
❇ f(t) = δ(t) ⟹ F(s) = ∫_{0⁻}^{+∞} δ(t)e^{−st}dt = ∫_{0⁻}^{+∞} δ(t)dt = 1
❇ f(t) = e^{at}u(t) ⟹ F(s) = ∫_{0⁻}^{+∞} e^{at}e^{−st}dt = [(1/(a − s))e^{(a−s)t}]_{0⁻}^{∞} = 1/(s − a)
❇ f(t) = cos(at)u(t) ⟹ F(s) = (1/2)∫_{0⁻}^{+∞} {e^{iat−st} + e^{−(iat+st)}}dt = s/(s² + a²)
❇ ℒ(f(t)) = ℒ{A[u(t − a) − u(t − b)]} = A(e^{−as} − e^{−bs})/s
❇ ℒ(te^{λt}) = ∫_{0⁻}^{+∞} te^{(λ−s)t}dt = 1/(s − λ)² and ℒ(t²e^{λt}) = 2/(s − λ)³
❇ ℒ(e^{−at}cos(bt)u(t)) = (s + a)/((s + a)² + b²), ℒ(e^{−at}sin(bt)u(t)) = b/((s + a)² + b²)
❇ ℒ{sin(at + φ)} = (s·sinφ + a·cosφ)/(s² + a²), ℒ{cos(at + φ)} = (s·cosφ − a·sinφ)/(s² + a²)
❇ ℒ{sinh(at)} = (1/2){∫_{0⁻}^{+∞} e^{at}e^{−st}dt − ∫_{0⁻}^{+∞} e^{−at}e^{−st}dt} = a/(s² − a²)
❇ ℒ{cosh(at)} = (1/2){∫_{0⁻}^{+∞} e^{at}e^{−st}dt + ∫_{0⁻}^{+∞} e^{−at}e^{−st}dt} = s/(s² − a²)
❇ ℒ{e^{−a|t|}} = ℒ{e^{−at}u(t) + e^{at}u(−t)} = 1/(s + a) − 1/(s − a) = 2a/(a² − s²)
Remark: The two-sided exponential decay f(t) = e^{−a|t|} can be transformed only by using the bilateral Laplace transform, with ROC −a < Re(s) < a (a strip, which is nonempty only when a > 0).
Exercise: 3 find the Laplace transform of the triangle signal shown in the figure.
We have
x(t) = (1/2T)(1 − |t|/2T) for |t| ≤ 2T
ẋ(t) = (1/2T)²{(u(t + 2T) − u(t)) − (u(t) − u(t − 2T))}
sX(s) = (1/2T)²[(e^{2sT} − 1)/s − (1 − e^{−2sT})/s]
s²X(s) = (1/2T)²[e^{2sT} + e^{−2sT} − 2] = ((1/2T)(e^{sT} − e^{−sT}))²
X(s) = (sinh(sT)/(sT))²
Exercise: 4 Show, using the Laplace transform, that Π(t) ⋆ Π(t) = Λ(t).
Solution:
x(t) = Π(t) ⋆ Π(t) ⇔ X(s) = (Π(s))², where Π(s) = (e^{s/2} − e^{−s/2})/s
X(s) = (eˢ + e^{−s} − 2)/s² ⇒ x(t) = (t + 1)u(t + 1) + (t − 1)u(t − 1) − 2tu(t)
x(t) = (t + 1)u(t + 1) + (t − 1)u(t − 1) − 2tu(t) = (1 − |t|) for |t| ≤ 1, i.e. the triangle Λ(t)
[Figure: Π(t) ⋆ Π(t) = Λ(t)]
Exercise: 5 Given that 𝛿(𝑡) = 𝑢̇ (𝑡), by virtue of this compute the convolution 𝑦(𝑡) = f(𝑡) ⋆ g(𝑡)
where f(𝑡) = cos(𝑡) 𝑢(𝑡) and g(𝑡) = 𝛿̇ (𝑡) + 𝑢(𝑡), deduce that the functions f(𝑡) and g(𝑡) are the
inverse composition of each other.
Solution:
Convolution method:
We know that y(t) = f(t) ⋆ g(t) = {cos(t)u(t)} ⋆ δ̇(t) + {cos(t)u(t)} ⋆ u(t), and also that x(t) ⋆ δ(t) = x(t) & x(t) ⋆ δ̇(t) = ẋ(t); then
y(t) = (d/dt){cos(t)u(t)} + {cos(t)u(t)} ⋆ u(t)
y(t) = cos(t)δ(t) − sin(t)u(t) + sin(t)u(t)
y(t) = cos(t)δ(t) = cos(0)δ(t) = δ(t)
Laplace method:
F(s) = ℒ(cos(t)u(t)) = s/(s² + 1), G(s) = ℒ(δ̇(t) + u(t)) = s + 1/s
Y(s) = F(s)G(s) = (s/(s² + 1))(s + 1/s) = 1 ⟺ y(t) = f(t) ⋆ g(t) = δ(t)
Easily we conclude that 𝐹(𝑠) = 𝐺 −1 (𝑠) ⟺ f(𝑡) ⋆ g(𝑡) = 𝛿(𝑡) then f(t) and g(t) are the inverse
composition of each other.
Exercise: 6 A technique that can be used to determine the bilateral Laplace transform of some functions is to obtain a differential equation of which the transform is a solution, and then solve that differential equation for the transform. In this problem we further illustrate this technique by determining the Laplace transform of
f(t) = e^{−(α/2)t²}, α > 0, given that ∫_{−∞}^{+∞} e^{−(α/2)t²}dt = √(2π/α)
❂ Show that f(t) satisfies the differential equation
df(t)/dt + αtf(t) = 0
❂ Show that the Laplace transform of f(t) satisfies the differential equation
dF(s)/ds − (1/α)sF(s) = 0
❂ Solve this differential equation and find the Laplace transform of f(t).
Indeed, df(t)/dt = −αte^{−(α/2)t²} = −αtf(t) ⇒ df(t)/dt + αtf(t) = 0 ———(I)
Transforming (bilaterally, so ℒ{f′(t)} = sF(s) and ℒ{tf(t)} = −dF(s)/ds):
sF(s) + α(−dF(s)/ds) = 0 ⇔ dF(s)/ds − (1/α)sF(s) = 0 ———(II)
❂ Let us solve this differential equation. Notice that equations (I) and (II) are similar to each other; since the solution of Eq. (I) is known, we can directly deduce the form of the solution of Eq. (II). The essential difference is that α has been replaced by −1/α. From this observation we conclude that the solution of the differential equation must be
F(s) = Ke^{(1/2α)s²}
To find the constant K, we compute F(0) = K from the Laplace definition:
F(0) = K = lim_{s→0} ∫_{−∞}^{∞} f(t)e^{−st}dt = ∫_{−∞}^{∞} e^{−(α/2)t²}dt = √(2π/α)
Hence F(s) = √(2π/α)·e^{(1/2α)s²}.
❂ The convolution g(t) = f(t) ⋆ f(t) can be deduced from the Laplace transform:
G(s) = F²(s) = (2π/α)e^{(1/α)s²} ⤇ dG/ds − (2s/α)G(s) = 0
If we define 1/β = 2/α (i.e. β = α/2), we get dG/ds − (s/β)G(s) = 0 ⤇ dg/dt + βtg(t) = 0, and therefore g(t) = Me^{−(β/2)t²}.
Let us compute G(0): 2π/α = ∫_{−∞}^{∞} Me^{−(β/2)t²}dt = M√(2π/β) = M√(4π/α) ⤇ M = √(π/α)
g(t) = (e^{−(α/2)t²}) ⋆ (e^{−(α/2)t²}) = √(π/α)·e^{−(α/4)t²}
Exercise: 7 Find the convolution y(t) = (e^{at}u(t)) ⋆ (e^{bt}u(t)). Solution:
Y(s) = (1/(s − a))(1/(s − b)) = (1/(a − b))(1/(s − a) − 1/(s − b))
⟹ y(t) = (1/(a − b))(e^{at} − e^{bt})u(t)
Exercise: 8 Find the convolution y(t) = (e^{−t}u(t)) ⋆ (u(t) − u(t − 1)). Solution:
Y(s) = (1/(s + 1))((1 − e^{−s})/s) = (1 − e^{−s})/(s(s + 1)) = {1/s − 1/(s + 1)} − {1/s − 1/(s + 1)}e^{−s}
⟹ y(t) = (1 − e^{−t})u(t) − (1 − e^{−(t−1)})u(t − 1)
Exercise: 9 Find the impulse response of the system H(s) = (s + b)/(s + a). Solution:
H(s) = (s + b)/(s + a) = s/(s + a) + b/(s + a) ⤇ h(t) = (d/dt)(e^{−at}u(t)) + be^{−at}u(t)
h(t) = −ae^{−at}u(t) + e^{−at}δ(t) + be^{−at}u(t) ⤇ h(t) = (b − a)e^{−at}u(t) + δ(t)
Exercise: 10 Find the inverse of H(s) = (s + 1)/s. Ans: H(s) = 1 + s⁻¹ ⤇ h(t) = δ(t) + u(t)
Exercise: 11 Given a sampled input u and impulse response g with sampling period Ts, plot the convolution y(t) = u(t) ⋆ g(t):
y= Ts*conv(u,g);          % scaling by Ts approximates the convolution integral
t=-20:Ts:5; t=t';
plot(t,y(1:length(t)));
grid on
Exercise: 12 Two systems with operator descriptions H₁(s) & H₂(s) are cascaded (connected) in series:
X(s) → H₁(s) → H₂(s) → Y(s)
H₁(s) = (2 − s)/(1 + s/2), H₂(s) = 1/(1 − s/2 + s²/4)
❏ Determine the DE relating the input to the output, and designate its poles.
❏ Give the realization of this differential equation.
Solution:
❏ H(s) = H₁(s)H₂(s) = ((2 − s)/(1 + s/2))(1/(1 − s/2 + s²/4)) = (2 − s)/(1 + s³/8) = (16 − 8s)/(s³ + 8) = Y(s)/X(s)
d³y(t)/dt³ + 8y(t) = 16x(t) − 8dx(t)/dt ⇔ s³ = −8 ⟹ s = −2 and s = 1 ± i√3
❏ In the parallel realization of this system we need: 2 adders, 2 gains, and 4 differentiators.
Exercise: 13 The output of an LTI system is y(t) = e^{t/3}u(t) when the system is subjected to the excitation
x(t) = −(1/4)e^{t/2}u(t) + (d/dt)(e^{t/2}u(t))
Find the impulse response h(t) and give a realization of the system.
Solution: we have
❏ x(t) = −(1/4)e^{t/2}u(t) + (1/2)e^{t/2}u(t) + e^{t/2}δ(t) = (1/4)e^{t/2}u(t) + δ(t)
X(s) = (1/4)/(s − 1/2) + 1 = (s − 1/4)/(s − 1/2) & Y(s) = 1/(s − 1/3)
H(s) = Y(s)/X(s) = (1/(s − 1/3))·((s − 1/2)/(s − 1/4)) = (s − 1/2)/(s² − (7/12)s + 1/12) = A/(s − 1/3) + B/(s − 1/4)
A = lim_{s→1/3}(s − 1/3)H(s) = −2, B = lim_{s→1/4}(s − 1/4)H(s) = 3
h(t) = (3e^{t/4} − 2e^{t/3})u(t)
❏ Realization (implementation): write
H(s) = Y(s)/X(s) = (Y(s)/W(s))·(W(s)/X(s)) with W(s)/X(s) = 1/(s² − (7/12)s + 1/12) and Y(s)/W(s) = s − 1/2
Y(s) = sW(s) − (1/2)W(s) ⬌ y(t) = ẇ(t) − (1/2)w(t)
X(s) = s²W(s) − (7/12)sW(s) + (1/12)W(s) ⬌ w(t) = 12x(t) − 12ẅ(t) + 7ẇ(t)
Exercise: 14 For the graphed signal f(t), use the definition F(s) = ∫₀^{+∞} f(t)e^{−st}dt to evaluate F(0):
F(0) = ∫₀^{+∞} f(t)dt = Area of f(t) = 1/2 + 1 + 1.5 + 2 = 5
Exercise: 15 Find the Laplace transform of the signal f(t) shown in the following graph. [Figure: graph of f(t)]
Exercise: 16 find the inverse of 𝑒 −2𝑠 /(𝑠 2 + 1) Solution: f(𝑡) = 𝑢(𝑡 − 2) sin(𝑡 − 2)
Exercise: 17 find the inverse of
F(s) = (1 + e^{−πs})/(s² + 1)
Solution:
F(s) = 1/(s² + 1) + e^{−πs}/(s² + 1) ⇒ f(t) = sin(t)u(t) + sin(t − π)u(t − π) = { sin(t) for 0 ≤ t < π ; 0 for t ≥ π }
Exercise: 18 Solve the initial value problem
d²y/dt² − y = sin(t), with initial conditions y(0) = −1 & ẏ(0) = 0
Solution: Applying the Laplace transform we get
(s²Y(s) + s) − Y(s) = 1/(s² + 1) ⇔ (s² − 1)Y(s) = 1/(s² + 1) − s
Y(s) = −s/(s² − 1) + 1/((s² − 1)(s² + 1)) = (−1/2)/(s − 1) + (−1/2)/(s + 1) + (1/2)(1/(s² − 1) − 1/(s² + 1))
Y(s) = (−1/4)/(s − 1) + (−3/4)/(s + 1) − (1/2)/(s² + 1) ⇔ y(t) = −((1/4)eᵗ + (3/4)e^{−t} + (1/2)sin(t))
F₁(s) = (1 − e^((1−s)T))/(s−1) = 1/(s−1) − e^((1−s)T)/(s−1)
F(s) = (1/s²)(e^(−s) − e^(−2s))

Solution:

f(t) = (t−1)u(t−1) − (t−2)u(t−2) = { 0, t < 1 ; t−1, 1 ≤ t < 2 ; 1, t ≥ 2 }
Exercise: 25 Find the following transformations

❑ ℒ⁻¹(6/(s−1)⁴)    ❑ ℒ(sin(t)/t)

Solution: We know that ℒ⁻¹(n!/(s+a)^(n+1)) = tⁿe^(−at)u(t)

Take n = 3 and a = −1; we get ℒ⁻¹(6/(s−1)⁴) = t³eᵗu(t)

❑ ℒ(sin(t)/t) = ∫ₛ^∞ dμ/(μ²+1) = tan⁻¹(μ)|ₛ^∞ = tan⁻¹(∞) − tan⁻¹(s) = π/2 − tan⁻¹(s)

ℒ(sin(t)/t) = tan⁻¹(1/s)   Note that tan⁻¹(s) + tan⁻¹(1/s) = π/2
Exercise: 26 Find the Laplace transform for the following graphical representations (seven waveforms ❶–❼: a triangle of height A on [0,T], a truncated ramp, the derivative of the triangle, a ±A square wave, a periodic pulse train, a rectified sine, and a single sine lobe; figures not reproduced).

Solution:

❶ x(t) = { 2At/T, 0 ≤ t < T/2 ; 2A(1 − t/T), T/2 ≤ t < T ; 0 elsewhere } = A(1 − |t − T/2|/(T/2)) on [0,T]

x(t) = (2At/T){u(t) − u(t − T/2)} + 2A(1 − t/T){u(t − T/2) − u(t − T)}

From the graph of its derivative (i.e. the third waveform ❸):

(T/(2A)) dx/dt = {u(t) − u(t − T/2)} − {u(t − T/2) − u(t − T)}

(T/(2A)) sX(s) = (1/s)(1 − e^(−sT/2)) − (1/s)(e^(−sT/2) − e^(−sT)) = (1/s)(1 − 2e^(−sT/2) + e^(−sT)) = (1/s)(1 − e^(−sT/2))²

X(s) = (2A/(Ts²))(1 − e^(−sT/2))²
❷ x(t) = { At/T, 0 ≤ t < T ; 0, t > T } = (At/T){u(t) − u(t−T)}

x(t) = (A/T){t·u(t) − (t−T)u(t−T) − T·u(t−T)}

⇒ X(s) = (A/T)[1/s² − e^(−sT)/s² − Te^(−sT)/s] = (A/(Ts²))[1 − e^(−sT)(1 + sT)]

❸ x(t) = (d/dt)Λ(t), where Λ(t) is the triangle of ❶ ⟺ X(s) = sΛ(s) = (2A/(Ts))(1 − e^(−sT/2))²
❹ x(t) is periodic; over one period x₁(t) = { A, 0 ≤ t < T/2 ; −A, T/2 ≤ t < T }

x₁(t) = A[{u(t) − u(t − T/2)} − {u(t − T/2) − u(t − T)}]

X₁(s) = (A/s)[(1 − e^(−sT/2)) − (e^(−sT/2) − e^(−sT))] = (A/s)[1 − 2e^(−sT/2) + e^(−sT)] = (A/s)(1 − e^(−sT/2))²

X(s) = X₁(s)/(1 − e^(−sT)) = X₁(s)/((1 − e^(−sT/2))(1 + e^(−sT/2))) = (A/s)·(1 − e^(−sT/2))/(1 + e^(−sT/2)) = (A/s) tanh(sT/4)
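The tanh form can be checked numerically by integrating one period and dividing by 1 − e^(−sT) (a sketch; A, T and the test value of s are arbitrary):

A=2; T=1; s=3;                                     % arbitrary test values (assumed)
x1 = @(t) A*((t<T/2) - (t>=T/2));                  % one period of the +/-A square wave
X_num = integral(@(t) x1(t).*exp(-s*t), 0, T) / (1 - exp(-s*T));
X_th  = (A/s)*tanh(s*T/4);
disp([X_num X_th])                                 % the two values should agree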
❺ x(t) is a piecewise periodic signal; over one period it is x₁(t) = { A, 0 ≤ t < a ; 0, a ≤ t < T } = A[u(t) − u(t−a)]

X₁(s) = (A/s)[1 − e^(−as)]  &  X(s) = X₁(s)/(1 − e^(−Ts))

X(s) = (A/s)·(1 − e^(−as))/(1 − e^(−Ts))

❻ For the rectified sine (one period x₁(t) = A sin(t), 0 ≤ t < T, with T = π):

X(s) = X₁(s)/(1 − e^(−Ts)) = (A/(s²+1))·(1 + e^(−Ts))/(1 − e^(−Ts))

Remark: for the full-wave rectified sine of period T one has, more generally,

ℒ{A|sin(πt/T)|} = (A(π/T)/(s² + (π/T)²))·(1 + e^(−Ts))/(1 − e^(−Ts)), which reduces to the expression above when T = π.

❼ x(t) = { A sin(t), 0 ≤ t < T ; 0 elsewhere }, with T = π, so x(t) = A[sin(t)u(t) + sin(t−T)u(t−T)]

X(s) = (A/(s²+1))[1 + e^(−Ts)]
❑ ℒ⁻¹(1/(s⁴+1))  ❑ ℒ(∏(t))  ❑ ℒ(g(t) = 3ᵗ)  ❑ ℒ(sgn(t))

❶ We first invert s/(s⁴+1). Since s⁴ + 1 = (s² − √2s + 1)(s² + √2s + 1),

ℒ⁻¹(s/(s⁴+1)) = ℒ⁻¹((1/(2√2))[1/((s − √2/2)² + 1/2) − 1/((s + √2/2)² + 1/2)])

= (1/(2√2))(√2 e^(t/√2) sin(t/√2) − √2 e^(−t/√2) sin(t/√2)) = sin(t/√2) sinh(t/√2)

Finally we deduce that ℒ⁻¹{1/(s⁴+1)} = ∫₀ᵗ sin(τ/√2) sinh(τ/√2) dτ

❷ To compute the Laplace transform of the gate function we use the definition:

ℒ(∏(t) = u(t+a) − u(t−a)) = ∫₋ₐ^a e^(−st)dt = [−(1/s)e^(−st)]₋ₐ^a = (2/s) sinh(as)

❸ g(t) = 3ᵗ = e^(t·Ln(3)) ⟹ G(s) = 1/(s − Ln(3))

❹ The Laplace transform of the signum function does not exist, since it has no region of convergence.
ℒ{t·f(t)} = −dF(s)/ds ⟺ ℒ{t·cos(ω₀t)} = −(d/ds)(s/(s²+ω₀²)) = (s² − ω₀²)/(s²+ω₀²)²

ℒ{(1/(2ω₀))[sin(ω₀t) + ω₀t·cos(ω₀t)]} = (1/(2ω₀))(ω₀/(s²+ω₀²) + ω₀(s²−ω₀²)/(s²+ω₀²)²) = s²/(s²+ω₀²)²

ℒ{(1/(2ω₀³))[sin(ω₀t) − ω₀t·cos(ω₀t)]} = (1/(2ω₀³))(ω₀/(s²+ω₀²)·... − ω₀(s²−ω₀²)/(s²+ω₀²)²)·(2ω₀³/(2ω₀³)) = 1/(s²+ω₀²)²

❸ The same as before:

ℒ{(1/(ω₂²−ω₁²))[cos(ω₁t) − cos(ω₂t)]} = (1/(ω₂²−ω₁²))(s/(s²+ω₁²) − s/(s²+ω₂²)) = s/((s²+ω₁²)(s²+ω₂²))
Y(s) = ω_n²/(s(s² + 2ξω_n s + ω_n²)), 0 < ξ < 1

Y(s) = ω_n²/(s(s² + 2ξω_n s + ω_n²)) = R₁/s + R₂/(s + p₁) + R₃/(s + p₂)

Y(s) = ω_n²/(s(s² + 2ξω_n s + ω_n²)) = A/s + (Bs + C)/((s + ξω_n)² + ω_n²(1 − ξ²))

After simplifying, you will get the values of A, B and C as 1, −1 and −2ξω_n respectively. Substituting these values in the above partial fraction expansion:

Y(s) = 1/s − (s + ξω_n)/((s + ξω_n)² + ω_n²(1 − ξ²)) − ξω_n/((s + ξω_n)² + ω_n²(1 − ξ²))

Y(s) = 1/s − (s + ξω_n)/((s + ξω_n)² + (ω_n√(1−ξ²))²) − (ξ/√(1−ξ²))·(ω_n√(1−ξ²))/((s + ξω_n)² + (ω_n√(1−ξ²))²)

With ω_d = ω_n√(1−ξ²):

y(t) = {1 − (e^(−ξω_n t)/√(1−ξ²)){(√(1−ξ²))cos(ω_d t) + ξ sin(ω_d t)}} u(t)

If √(1−ξ²) = sin(φ), then ξ = cos(φ). Substituting these values in the above equation:

y(t) = {1 − (e^(−ξω_n t)/√(1−ξ²)){sin(φ)cos(ω_d t) + cos(φ)sin(ω_d t)}} u(t)

y(t) = {1 − (e^(−ξω_n t)/√(1−ξ²)) sin(ω_d t + φ)} u(t), with 0 < φ = tan⁻¹(√(1−ξ²)/ξ) < π/2

ℒ{1 − (e^(−ξω_n t)/√(1−ξ²)) sin(ω_d t + φ)} = ω_n²/(s(s² + 2ξω_n s + ω_n²))

d²y/dt² + 2ξω_n dy/dt + ω_n² y(t) = ω_n² u(t) ⇔ y(t) = {1 − (e^(−ξω_n t)/√(1−ξ²)) sin(ω_d t + φ)} u(t)

Remark: This result will be needed in the next chapters; it is the response of a second-order system to a fixed input signal (a step). In the following graphical diagram we show the variation of y(t) with time as ζ changes.
Computer Program
clear all, clc
Wn=1; zeta=[0:0.1:0.9, 1+eps, 2, 3, 5]; yy=[]; t=0:0.1:12;
for z=zeta
    Wd=sqrt(z^2-1)*Wn;    % imaginary for z<1: cosh/sinh then reduce to cos/sin
    y=1-exp(-z*Wn*t).*(cosh(Wd*t)+z*Wn*sinh(Wd*t)/Wd);
    yy=[yy; real(y)];     % discard the vanishing imaginary round-off for z<1
end
plot(t,yy,'linewidth',3), grid on
xlabel('Time(secs)'); ylabel('Amplitude'); title('Closed-loop step')
figure
surf(t,zeta,yy)
Exercise: 31 Evaluate the integral I = ∫₀^{+∞} e^(−sλ²)dλ and use it in the evaluation of the following unilateral transforms:

ℒ{1/√t} and ℒ{√t}

Solution: Let us evaluate I by using the double integral

∫_{−∞}^{+∞}∫_{−∞}^{+∞} e^(−sx²)e^(−sy²)dxdy = (∫_{−∞}^{+∞} e^(−sx²)dx)(∫_{−∞}^{+∞} e^(−sy²)dy) = (∫_{−∞}^{+∞} e^(−sx²)dx)²

Notice that e^(−sλ²) is an even function, that is ∫_{−∞}^{+∞} e^(−sλ²)dλ = 2∫₀^{+∞} e^(−sλ²)dλ, so

∫_{−∞}^{+∞}∫_{−∞}^{+∞} e^(−sx²)e^(−sy²)dxdy = 4(∫₀^{+∞} e^(−sx²)dx)²

(∫₀^{+∞} e^(−sx²)dx)² = (1/4)∫_{−∞}^{+∞}∫_{−∞}^{+∞} e^(−s(x²+y²))dxdy = (1/4)∫₀^{2π}∫₀^{+∞} re^(−sr²)drdθ

(∫₀^{+∞} e^(−sx²)dx)² = (1/4){2π(−e^(−sr²)/(2s))|₀^∞} = π/(4s) ⇒ I = ∫₀^{+∞} e^(−sx²)dx = (1/2)√(π/s)

❑ ℒ{1/√t} = ∫₀^{+∞}(1/√t)e^(−st)dt. Let λ = √t; then ℒ{1/√t} = ∫₀^{+∞}(1/λ)e^(−sλ²)·2λdλ

ℒ{1/√t} = 2∫₀^{+∞} e^(−sλ²)dλ = √(π/s)

❑ ℒ{√t} = ℒ{t·(1/√t)} = −(d/ds)(√(π/s)) = √π/(2s^(3/2))
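A quick numerical confirmation of the first result (a sketch; the test value of s is arbitrary, and the integrable singularity at t = 0 is handled by MATLAB's integral):

s = 2.5;                                            % arbitrary test value (assumed)
F_num = integral(@(t) exp(-s*t)./sqrt(t), 0, Inf);  % direct Laplace integral
disp([F_num sqrt(pi/s)])                            % both should print ~1.1210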
Exercise: 32 Give the inverse Laplace transform of the following s-function

F(s) = 1/(s⁴ + s² + 1)

Solution: Let us first compute the inverse of sF(s). Since s⁴ + s² + 1 = (s² + s + 1)(s² − s + 1),

ℒ⁻¹{sF(s)} = ℒ⁻¹{s/(s⁴+s²+1)} = ℒ⁻¹{(s/2)[(s+1)/(s²+s+1) − (s−1)/(s²−s+1)]}

= (1/2)ℒ⁻¹{(s²+s)/(s²+s+1) − (s²−s)/(s²−s+1)} = (1/2)ℒ⁻¹{1 − 1/(s²+s+1) − 1 + 1/(s²−s+1)}

= (1/2)ℒ⁻¹{1/(s²−s+1) − 1/(s²+s+1)} = (1/2)ℒ⁻¹{1/((s−1/2)² + 3/4) − 1/((s+1/2)² + 3/4)}

ḟ(t) = ℒ⁻¹{sF(s)} = (1/√3)ℒ⁻¹{(√3/2)/((s−1/2)² + 3/4) − (√3/2)/((s+1/2)² + 3/4)}

ḟ(t) = (1/√3){e^(t/2)sin((√3/2)t) − e^(−t/2)sin((√3/2)t)} = (2/√3) sin((√3/2)t) sinh((1/2)t)

Since f(0) = 0, we integrate:

f(t) = ∫₀ᵗ (2/√3) sin((√3/2)τ) sinh((1/2)τ) dτ
(s² + 1)Y(s) = ℒ{e^(−t²)} ⟺ Y(s) = ℒ{e^(−t²)}/(s² + 1) ⇒ y(t) = e^(−t²) ⋆ sin(t)

The transform ℒ{e^(−t²)} involves a special (non-elementary) sigmoid-shaped integral, closely related to the error function, that occurs often in probability, statistics, and the partial differential equations describing diffusion.
Exercise 37: Perform long division and determine the quotient and the remainder
𝑠 2 + 9𝑠 + 3 3𝑠 3 + 17𝑠 2 + 33𝑠 + 15
𝐹(𝑠) = , 𝐹(𝑠) =
𝑠 2 + 3𝑠 + 2 𝑠 3 + 6𝑠 2 + 11𝑠 + 6
Exercise 39: Compute the Laplace transform & the inverse of the given functions.

𝟏. F(s) = a(s + ib)/(s² + b²)    𝟐. f(t) = { A(1 − |t − π|/π), 0 < t ≤ 2π ; f(t + 2π) }

Ans:

𝟏. f(t) = ae^(ibt)    𝟐. F(s) = (A/(πs²)) tanh(πs/2)
Exercise 40: Compute the Laplace transform of the half-wave rectification of sin(at), denoted g(t), in which the negative cycles of sin(at) have been canceled to create g(t):

g(t) = (A/2)(sin(at) + |sin(at)|)

Ans: G(s) = (A/2){(a/(s² + a²))(1 + coth(πs/(2a)))}
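This periodic-signal transform can be checked by integrating one period (a sketch; A, a and the test value of s are arbitrary choices):

A=1; a=2; s=1.5; T=2*pi/a;                 % T is the period of the rectified wave (assumed values)
g1 = @(t) (A/2)*(sin(a*t)+abs(sin(a*t)));  % one period of g(t)
G_num = integral(@(t) g1(t).*exp(-s*t), 0, T) / (1 - exp(-s*T));
G_th  = (A/2)*(a/(s^2+a^2))*(1+coth(pi*s/(2*a)));
disp([G_num G_th])                         % the two values should agree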
Exercise 41: Compute the Laplace transform of: (a) f(t) = sin³(ωt) (b) g(t) = cos³(ωt).
Ans: use the identities sin³(θ) = (1/4)(3sin(θ) − sin(3θ)) and cos³(θ) = (1/4)(3cos(θ) + cos(3θ)).

Exercise 42: Compute the Laplace transform of Ln(t).
Ans: with the change of variable η = st (so dt = dη/s),

F(s) = ∫₀^∞ Ln(t)e^(−st)dt = (1/s)∫₀^∞ Ln(η/s)e^(−η)dη

F(s) = (1/s){∫₀^∞ Ln(η)e^(−η)dη − ∫₀^∞ Ln(s)e^(−η)dη} = (1/s){∫₀^∞ Ln(η)e^(−η)dη − Ln(s)}

F(s) = −(γ + Ln(s))/s, where γ = −∫₀^∞ Ln(η)e^(−η)dη
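Here γ is the Euler–Mascheroni constant (≈ 0.57722), and the result can be verified numerically (a sketch; the test value of s is arbitrary):

s = 2; gamma_c = 0.577215664901533;                 % Euler-Mascheroni constant (assumed value)
F_num = integral(@(t) log(t).*exp(-s*t), 0, Inf);   % direct Laplace integral of Ln(t)
F_th  = -(gamma_c + log(s))/s;
disp([F_num F_th])                                  % the two values should agree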
Exercise 43: Find the IR h(t) of a given LTI system such that ẋ(t) = x(t) ⋆ h(t)
Exercise 44: Show that the system described by the following differential equation is linear
ẏ(t) + t²y(t) = (2t + 3)u(t)
Exercise 45: Show that the system described by the following differential equation is nonlinear
y(t)ẏ(t) + 3y(t) = u(t)
Exercise 46: Show that:
ℒ{δ(at + b)} = (1/|a|)e^(sb/a)
Exercise 47: Find the Laplace transform of the following functions:
𝟏. f(t) = cos²(t)  𝟐. f(t) = sin²(t)  𝟑. f(t) = cos³(t)  𝟒. f(t) = sin³(t)
𝟓. f(t) = cos(4t)sin(3t)  𝟔. f(t) = cos⁴(t) − sin⁴(t)  𝟕. f(t) = (−1)ᵗ  𝟖. f(t) = cos²(t) + sin²(t)
Answer: Use the following help: 2cos(a)sin(b) = sin(a+b) − sin(a−b) and
cos²(t) = (1/2){1 + cos(2t)}, sin²(t) = (1/2){1 − cos(2t)}, (−1)ᵗ = e^(iπt)
cos³(t) = (1/4){3cos(t) + cos(3t)}, sin³(t) = (1/4){3sin(t) − sin(3t)}
cos⁴(t) − sin⁴(t) = cos(2t), cos(4t)sin(3t) = (1/2){sin(7t) − sin(t)}
------------------------------------
𝟏. F(s) = (s² + 2)/(s(s² + 4))  𝟐. F(s) = 2/(s(s² + 4))  𝟑. F(s) = (1/4)(s/(s² + 9)) + (3/4)(s/(s² + 1))
𝟒. F(s) = (3/4)(1/(s² + 1)) − (3/4)(1/(s² + 9))  𝟓. F(s) = (1/2)(7/(s² + 49) − 1/(s² + 1))
𝟔. F(s) = s/(s² + 4)  𝟕. F(s) = 1/(s − iπ)  𝟖. F(s) = 1/s
Exercise 48: (Introduction to the next chapter) Given a sequence x[n], we define the mapping T such that X(z) = T{x[n]} = Σ_{n=−∞}^{+∞} x[n]z^(−n).
𝟏. Compute the value of T{x[n−1]}. 𝟐. For which input x₁[n] to the mapping T is T{x₁[n]} = −z·dX(z)/dz? 𝟑. Solve the difference equation nx[n] − x[n−1] = 0, where x[n] = 0 for negative arguments and x[0] = 1.

Ans: 𝟏. T{x[n−1]} = Σ_{n=−∞}^{+∞} x[n−1]z^(−n) = Σ_{m=−∞}^{+∞} x[m]z^(−m−1) = z^(−1)Σ_{m=−∞}^{+∞} x[m]z^(−m) = z^(−1)X(z)

𝟐. −z(d/dz)X(z) = −z(d/dz)Σ x[n]z^(−n) = −zΣ(−n)x[n]z^(−n−1) = Σ n·x[n]z^(−n) ⟺ x₁[n] = n·x[n]

𝟑. Solving this difference equation by using the operator X(z) = T{x[n]} we get:

nx[n] − x[n−1] = 0 ⟺ z(d/dz)X(z) + z^(−1)X(z) = 0 ⟺ z²(d/dz)X(z) + X(z) = 0

z²(dX/dz) + X(z) = 0 ⟺ dX/X = −dz/z² ⟺ ln(X(z)) = 1/z + c ⟺ X(z) = Ae^(1/z)

We know that X(z) = Ae^(1/z) = Σ_{n=0}^{+∞} A(1/n!)(1/z)ⁿ = T{(A/n!)u[n]} ⇒ x[n] = (A/n!)u[n]

From the initial condition x[0] = 1 we obtain A = 1 ⇒ x[n] = (1/n!)u[n]
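A quick numerical sanity check of part 3 (a sketch; the range of n is arbitrary):

n = 1:8;
x = 1./factorial(n);                  % x[n] = 1/n! for n >= 1 (x[0] = 1)
lhs = n.*x;                           % n*x[n]
rhs = [1, 1./factorial(n(1:end-1))];  % x[n-1]
disp(max(abs(lhs - rhs)))             % should be ~0: the recurrence is satisfied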
CHAPTER IV:
Analysis of Discrete LTI
Systems by Z-Transform
I. Introduction
II. The Z Transform of Some Commonly Occurring Functions
III. Some Properties of the Z Transform
IV. Transfer Function (System Function) and DC Gain
V. Inverse Z-transform
V.I. Contour Integration
V.II Partial Fraction Expansion
V.III Inversion by Power Series
VI. Solved Problems
Many discrete transforms may not exist for all sequences due to the convergence
condition, whereas the z-transform exists for many sequences for which other
transforms do not. Also, the z-transform allows simple algebraic manipulations. As
such, the z-transform has become a powerful tool in the analysis and design of digital
systems. This chapter introduces the z-transform, its properties, the inverse
z-transform, and methods for finding it. Also, in this chapter, the importance of the
z-transform in the analysis of LTI systems is established. Further, the one-sided
z-transform and the solution of difference equations of discrete-time LTI systems are
presented.
The ROC decides whether a system is stable or unstable, whether a sequence is causal
or anti-causal, and whether a sequence is of finite or infinite duration.
Analysis of Discrete
LTI Systems by
Z-Transform
I. Introduction: The Z-transform is the discrete-time counterpart of the Laplace transform.
The Z-transform is introduced to represent discrete-time signals (or sequences) in the z-
domain (i.e. complex frequency-domain where z is complex variable), and the concept of the
system function for a discrete-time LTI system will be described. The Laplace transform
converts integro-differential equations into algebraic equations. In a similar manner, the Z-
transform converts difference equations into algebraic equations, thereby simplifying the
analysis of discrete-time systems. The properties of the Z-transform closely parallel those of
the Laplace transform. However, we will see some important distinctions between the z-
transform and the Laplace transform.
There are a number of ways to represent the sampling process mathematically. One way
that is commonly used is to immediately represent the sampled signal by a series x[n]. This
technique has the advantage of being very simple to understand, but it obscures the
connection between the sampled signal and the Laplace transform. Instead, we represent the
sampled signal as a weighted impulse train:

x⋆(t) = Σ_{k=0}^{∞} x(kT)δ(t − kT)
Since we now have a time domain signal, we wish to see what kind of analysis we can do in
a transformed domain. Let's start by taking the Laplace Transform of the sampled signal:
X⋆(s) = ℒ{x⋆(t)} = ℒ{Σ_{k=0}^{∞} x[k]δ(t − kT)}

Since each x[k] is a constant, we can (because of linearity) bring the Laplace transform inside the summation:

X⋆(s) = Σ_{k=0}^{∞} x[k]ℒ{δ(t − kT)} = Σ_{k=0}^{∞} x[k]e^(−ksT)

To simplify the expression a little, we introduce the notation z = e^(sT), so that z^(−k) = e^(−ksT). We will call this the Z transform and define it as

X(z) = Σ_{k=0}^{∞} x[k]z^(−k)
Region of convergence: The region of convergence (ROC) is the set of points in the
complex plane for which the Z-transform summation converges.
Example Let us now consider a signal that is the real exponential f[k] = αᵏu[k]:

F(z) = Σ_{k=−∞}^{+∞} αᵏu[k]z^(−k) = Σ_{k=0}^{∞}(α/z)ᵏ = z/(z − α), |z| > |α|

Example Let us now consider the left-sided real exponential f[k] = −αᵏu[−k−1]:

F(z) = Σ_{k=−∞}^{+∞} −αᵏu[−k−1]z^(−k) = −Σ_{k=−∞}^{−1}(α/z)ᵏ = −Σ_{k=1}^{∞}(z/α)ᵏ = 1 − Σ_{k=0}^{∞}(z/α)ᵏ = z/(z − α), |z| < |α|

Note that the two transforms coincide; only the ROC distinguishes the right-sided sequence from the left-sided one.
Remark: If you are primarily interested in application to one-sided signals, then the z-
transform used is restricted to causal signals (i.e., signals with zero values for negative
time) and the one-sided z-transform.
II The Z Transform of Some Commonly Occurring Functions: The z-transform is an
important tool in the analysis and design of discrete-time systems. It simplifies the solution
of discrete-time problems by converting LTI difference equations to algebraic equations and
convolution to multiplication. Thus, it plays a role similar to that served by Laplace
transforms in continuous-time problems. Now let us give some commonly used discrete-
time signals such as the sampled step, exponential, and the discrete-time impulse.
❶ The Unit Impulse Function: In discrete time systems the unit impulse is defined
somewhat differently than in continuous time systems.
δ[k] = { 1, k = 0 ; 0, k ≠ 0 } = u[k] − u[k−1]

where u[k] is defined by u[k] = { 1, k ≥ 0 ; 0, k < 0 }

X(z) = Σ_{k=0}^{∞} δ[k]z^(−k) = δ[0]z⁰ + Σ_{k=1}^{∞} δ[k]z^(−k) = 1 + 0 = 1
❷ The Unit Step Function: The unit step is one when 𝑘 is zero or positive.
X(z) = Σ_{k=0}^{∞} u[k]z^(−k) = Σ_{k=0}^{∞} z^(−k) = 1/(1 − z^(−1)) = z/(z − 1)

❺ The Discrete Exponential Function: with the Z-transform it is more common to get solutions in the form of a power series:

f[k] = aᵏu[k] ⟹ F(z) = Σ_{k=0}^{∞} aᵏu[k]z^(−k) = 1/(1 − az^(−1)) = z/(z − a), |z| > |a|
Exercise: Find the Z-Transform of the following signal f[k] = cos(ak)u[k] Solution:

F(z) = Σ_{k=0}^{∞} cos(ak)u[k]z^(−k) = Σ_{k=0}^{∞}((e^(iak) + e^(−iak))/2)z^(−k) = (1/2){Σ_{k=0}^{∞} e^(iak)u[k]z^(−k) + Σ_{k=0}^{∞} e^(−iak)u[k]z^(−k)}

𝒵(f[k]) = F(z) = (1/2){z/(z − e^(ia)) + z/(z − e^(−ia))} = z(z − cos(a))/(z² − 2z·cos(a) + 1)

Similarly, for f[k] = sin(ak)u[k]:

F(z) = z·sin(a)/(z² − 2z·cos(a) + 1)

Remarks: for all of these signals the Z-transform X(z) is a rational function of the complex variable z, and the same complex function can also be expressed in terms of z^(−1).
III. Some Properties of the Z Transform As we found with the Laplace Transform, it will
often be easier to work with the Z Transform if we develop some properties of the transform
itself. The z-transform can be derived from the Laplace transform. Hence, it shares several
useful properties with the Laplace transform, which can be stated without proof. These
properties can also be easily proved directly, and the proofs are left as an exercise for the
reader. Proofs are provided for properties that do not obviously follow from the Laplace
transform.
❶ Linearity: Given x[k] = a·f[k] + b·g[k], the following property holds: X(z) = a·F(z) + b·G(z)

Example: Find the z-transform of the causal sequence x[k] = 2u[k] + 4δ[k], k = 0,1,…

X(z) = Σ_{k=0}^{∞}(2u[k] + 4δ[k])z^(−k) = Σ_{k=0}^{∞} 2u[k]z^(−k) + Σ_{k=0}^{∞} 4δ[k]z^(−k) = 2z/(z − 1) + 4 = (6z − 4)/(z − 1)
❷ Time Shift (Delay): An important property of the Z Transform is the time shift. To see
why this might be important consider that a discrete-time approximation to a derivative is
given by:
𝑑𝑓(𝑡) 𝑓((𝑘 + 1)𝑇) − 𝑓(𝑘𝑇) 𝑓[𝑘 + 1] − 𝑓[𝑘]
| = =
𝑑𝑡 𝑡=𝑘𝑇 𝑇 𝑇
Let's examine what effect such a shift has upon the Z transform. Assume that the
sequence x[k] has a Z-transform X(z), and we want the Z-transform of x[k − k₀]:

𝒵{x[k − k₀]} = Σ_{k=0}^{∞} x[k − k₀]z^(−k) = z^(−k₀)Σ_{m=−k₀}^{∞} x[m]z^(−m) = z^(−k₀)X(z)   (for causal x[k])

Example: x[k] = { 4, k = 2,3,… ; 0, otherwise }

The given sequence is a step function starting at k = 2 rather than k = 0 (i.e., it is delayed by two sampling periods). Using the delay property, we have

X(z) = 4Σ_{k=0}^{∞} u[k−2]z^(−k) = 4z^(−2)Σ_{m=−2}^{∞} u[m]z^(−m) = 4z^(−2)Σ_{m=0}^{∞} z^(−m) = 4z^(−2)·z/(z−1) = 4/(z(z−1))
❸ Time Advance: Let's explore it in the same way as we did the shift to the right. Consider the same sequence x[k] as before; this time we shift it to the left by a samples to get x[k + a]:

𝒵{x[k + a]} = Σ_{k=0}^{∞} x[k + a]z^(−k) = zᵃ(X(z) − Σ_{k=0}^{a−1} x[k]z^(−k))
❹ Multiplication by 𝒛𝒏𝟎 in Time Domain: this property is also known as the Z-domain
scaling and it says that: if 𝑥[𝑘] ⟷ 𝑋(𝑧) with ROC = 𝑅 then
z₀ᵏx[k] ⟷ Σ_{k=0}^{∞} z₀ᵏx[k]z^(−k) = Σ_{k=0}^{∞} x[k](z/z₀)^(−k) = X(z/z₀), with ROC = |z₀|R
❺ Time Scaling: This property deals with the effect on the frequency-domain
representation of a signal if the time variable is altered. The most important concept to
understand for the time scaling property is that signals that are narrow in time will be
broad in frequency and vice versa.
❻ Differentiation in the z-Domain: since kx[k] ⟷ −z·dX(z)/dz, applying the property twice gives

k²x[k] ⟷ z(d/dz){z(d/dz)X(z)} = z²(d²/dz²)X(z) + z(d/dz)X(z)

x₁[k] = k²u[k] ⟷ z²(d²/dz²)(z/(z−1)) + z(d/dz)(z/(z−1)) = 2z²/(z−1)³ − z/(z−1)² = (z² + z)/(z−1)³

x₂[k] = k²aᵏu[k] ⟷ z²(d²/dz²)(z/(z−a)) + z(d/dz)(z/(z−a)) = 2az²/(z−a)³ − az/(z−a)² = az(z + a)/(z−a)³
Therefore, a pole (or zero) in X(z) at z = z_k moves to 1/z_k after time reversal. The
relationship ROC = 1/R indicates the inversion of R, reflecting the fact that a right-sided
sequence becomes left-sided if time-reversed, and vice versa.
❾ Convolution: The convolution product between two signals is an inner operator denoted by a star and is defined by

x[k] ⋆ h[k] = Σ_{i=−∞}^{+∞} x[k − i]h[i]

Now the Z-transform of this convolution is Y(z) = 𝒵(x[k] ⋆ h[k]) = X(z)H(z). This relationship plays a central role in the analysis and design of discrete-time LTI systems, in analogy with the continuous-time case.

Y(z) = 𝒵(x[k] ⋆ h[k]) = Σ_{k=−∞}^{∞}(Σ_{i=−∞}^{∞} x[k−i]h[i])z^(−k) = Σ_{i=−∞}^{∞}(h[i]Σ_{k=−∞}^{∞} x[k−i]z^(−k))

We know that Σ_{k=−∞}^{∞} x[k−i]z^(−k) = z^(−i)X(z), hence

Y(z) = Σ_{i=−∞}^{∞} h[i]z^(−i)X(z) = H(z)X(z)
Example Determine the Z-transform of 𝛿[𝑘 − 𝑘0 ] and 𝑦[𝑘] = 𝑥[𝑘] ⋆ 𝛿[𝑘 − 𝑘0 ], given that 𝑋(𝑧)
be Z-transform of 𝑥[𝑘].
Z−transform Z−transform
𝛿[𝑘] ↔ 1 ⟹ 𝛿[𝑘 − 𝑘0 ] ↔ 𝑧 −𝑘0
𝐻(𝑧) 𝐻(𝑧)
𝛿[𝑘] ℎ[𝑘] and more general 𝑥[𝑘] 𝑦[𝑘]
LTI systems LTI systems
𝐻(𝑧)
𝑥[𝑘] = 𝑥[𝑘] ⋆ 𝛿[𝑘] 𝑦[𝑘] = 𝑥[𝑘] ⋆ ℎ[𝑘]
LTI systems
⓬ Parseval's Theorem:

Σ_{k=−∞}^{+∞} x₁[k]x₂⋆[k] = (1/2πj)∮ X₁(z)X₂⋆(1/z⋆)z^(−1)dz

Proof: Let x₁[k] ⟷ X₁(z) and x₂[k] ⟷ X₂(z). The proof of this identity is based on the inverse relation; later on we will see the inverse formula of the Z-transform. For now assume that the inverse formula is known:

x₁[k] = (1/2πj)∮ X₁(z)z^(k−1)dz

Σ_{k=−∞}^{+∞} x₁[k]x₂⋆[k] = Σ_k {(1/2πj)∮ X₁(z)z^(k−1)dz} x₂⋆[k] = (1/2πj)∮ X₁(z)(Σ_k x₂⋆[k]z^(k−1))dz

= (1/2πj)∮ X₁(z)(Σ_k x₂⋆[k](1/z)^(−k))z^(−1)dz = (1/2πj)∮ X₁(z)(Σ_k x₂[k](1/z⋆)^(−k))⋆ z^(−1)dz

= (1/2πj)∮ X₁(z)X₂⋆(1/z⋆)z^(−1)dz

⓭ Multiplication in Time: similarly one obtains

x₁[k]x₂[k] ⟷ (1/2πj)∮ X₁(v)X₂(z/v)v^(−1)dv
⓮ Initial value theorem: If x[k] is causal (i.e. its Z-transform contains no positive powers of z) with Z-transform X(z), then the initial value theorem can be written as x[0] = lim_{z→∞} X(z).

Proof: X(z) = Σ_{k=0}^{∞} x[k]z^(−k) = x[0] + x[1]z^(−1) + x[2]z^(−2) + ⋯ ⟹ lim_{z→∞} X(z) = x[0]
⓯ Final value theorem: The final value theorem allows us to calculate the limit of a sequence as k tends to infinity, if one exists, from the z-transform of the sequence. It states that if x[k] is causal and all the poles of (z−1)X(z) are inside the unit circle, then its final value, denoted x(∞), can be written as

lim_{z→1}{(z−1)X(z) − z·x[0]} = lim_{N→∞} x[N+1] − x[0] ⟹ (lim_{z→1}(z−1)X(z)) − x[0] = lim_{N→∞} x[N+1] − x[0]

So

lim_{z→1}(z−1)X(z) = lim_{N→∞} x[N+1] = x(∞)

The main pitfall of the theorem is that there are important cases where the limit does not exist — notably when X(z) has poles on or outside the unit circle (other than a simple pole at z = 1). The reader is cautioned against blindly using the final value theorem, because this can yield misleading results.

Example: Verify the final value theorem using the z-transform of a decaying exponential sequence and its limit as k tends to infinity. Take |a| < 1 and

x₁[k] = aᵏu[k] ⟷ X₁(z) = z/(z − a)  &  x₂[k] = a^(−k)u[k] ⟷ X₂(z) = az/(az − 1)

For x₁[k]: lim_{z→1}(z−1)X₁(z) = 0, which agrees with lim_{k→∞} aᵏ = 0. For the second signal x₂[k] the theorem cannot even be applied: it is a growing sequence without a limit (i.e. an unbounded sequence), which violates the conditions of the theorem. Thus, the final value theorem cannot be used.
The transfer function: We considered a discrete-time LTI system for which input 𝑥[𝑘] and
output 𝑦[𝑘] satisfy the general linear constant-coefficient difference equation of the form
Σ_{i=0}^{n} aᵢy[k−i] = Σ_{i=0}^{m} bᵢx[k−i]

The transfer function H(z) of a system is defined as the Z-transform of its output y[k] divided by the Z-transform of its forcing function x[k] (with zero initial conditions); it is a complex rational function of two polynomials, the numerator and the denominator.

Applying the z-transform, with the time-shift property and the linearity property, to this difference equation, we obtain

(Σ_{i=0}^{n} aᵢz^(−i))Y(z) = (Σ_{i=0}^{m} bᵢz^(−i))X(z) ⟹ H(z) = Y(z)/X(z) = (Σ_{i=0}^{m} bᵢz^(−i))/(Σ_{i=0}^{n} aᵢz^(−i))
The DC gain: is the ratio of the steady-state output to the steady-state input when the input is a unit step. The DC gain is an important parameter, especially in control applications. For a unit-step input,

Y(z) = H(z)X(z) = H(z)·z/(z − 1)

Using the final value theorem we obtain

x[∞] = lim_{z→1}(1 − z^(−1))X(z) = lim_{z→1}(1 − z^(−1))·z/(z − 1) = 1

y[∞] = lim_{z→1}(1 − z^(−1))Y(z) = lim_{z→1}(1 − z^(−1))H(z)·z/(z − 1) = H(1)

The DC gain is

DC gain = y[∞]/x[∞] = H(1) = lim_{z→1}(1 − z^(−1))Y(z)
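The identity DC gain = H(1) can be illustrated by simulating the difference equation directly with filter() (a sketch; the coefficients are those of the stable system worked out in Exercise 4 of the solved problems below):

b = [1 -0.5]; a = [1 -7/12 1/12];   % H(z) = (1 - 0.5 z^-1)/(1 - 7/12 z^-1 + 1/12 z^-2)
dc = sum(b)/sum(a);                 % H(1): evaluate the rational function at z = 1
y  = filter(b, a, ones(1,100));     % step response via the difference equation
disp([dc y(end)])                   % the steady-state output matches H(1) = 1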
Contour Integration: We now discuss how to obtain the sequence x[k] by contour integration, given its Z-transform. Recall that the two-sided Z-transform is defined by

X(z) = Σ_{k=−∞}^{∞} x[k]z^(−k)

Let us multiply both sides by z^(n−1) and integrate over a closed contour 𝒞 within the ROC of X(z); let the contour enclose the origin:

∮_𝒞 X(z)z^(n−1)dz = ∮_𝒞 Σ_{k=−∞}^{∞} x[k]z^(n−1−k)dz

where 𝒞 denotes the closed contour within the ROC, taken in a counterclockwise direction. As the curve 𝒞 is inside the ROC, the sum converges on every part of 𝒞 and, as a result, the integral and the sum on the right-hand side can be interchanged. Since

(1/2πj)∮_𝒞 z^(n−1−k)dz = { 1, k = n ; 0, k ≠ n }

where 𝒞 is any contour that encloses the origin, the right-hand side reduces to the single term 2πj·x[n], and hence we obtain the formula

(1/2πj)∮_𝒞 X(z)z^(n−1)dz = Σ_{k=−∞}^{∞} x[k]((1/2πj)∮_𝒞 z^(n−1−k)dz) = x[n]

x[n] = (1/2πj)∮_𝒞 X(z)z^(n−1)dz
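The contour integral can be evaluated numerically by parametrizing a circle z = re^(jθ) inside the ROC, so that dz = jz dθ (a sketch; a, r and n are arbitrary choices):

a = 0.5; r = 0.9; n = 3;                 % recover x[n] = a^n from X(z) = z/(z-a), ROC |z| > a
th = linspace(0, 2*pi, 2001); z = r*exp(1j*th);
X  = z./(z - a);
x_n = real(trapz(th, X.*z.^(n-1).*1j.*z)/(2*pi*1j));  % (1/2*pi*j) * closed contour integral
disp([x_n a^n])                          % both should print 0.125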
Because the inverse Z-transform of 𝑧/(𝑧 − 𝑎) is given by 𝑎𝑘 𝑢[𝑘], we split 𝐻(𝑧) into partial
fractions, as given below:
𝐻1 𝐻2 𝐻𝑚
𝐻(𝑧) = + + ⋯+
𝑧 − 𝑎1 𝑧 − 𝑎2 𝑧 − 𝑎𝑚
where we have assumed that H(z) has m simple poles at a₁, a₂, …, a_m. The coefficients
H₁, H₂, …, H_m are known as the residues at the corresponding poles. The residues are calculated
using the formula Hᵢ = lim_{z→aᵢ}(z − aᵢ)H(z), i = 1,2,…,m
In case of repeated poles (i.e. each root 𝑎𝑖 is repeated ℓ𝑖 times):
H(z) = Σ_{i=1}^{r} Σ_{j=1}^{ℓᵢ} Hᵢⱼ/(z − aᵢ)ʲ

where

Hᵢⱼ = (1/(ℓᵢ − j)!) lim_{z→aᵢ} (d^(ℓᵢ−j)/dz^(ℓᵢ−j))((z − aᵢ)^(ℓᵢ)H(z))

Example:

H(z) = (11z² − 15z + 6)/((z − 2)(z − 1)²)

H₁₁ = lim_{z→2}(z − 2)H(z) = lim_{z→2}(11z² − 15z + 6)/(z − 1)² = 20

H(z) = 20/(z − 2) − 9/(z − 1) − 2/(z − 1)² = 𝒵{20z^(−1)/(1 − 2z^(−1)) − 9z^(−1)/(1 − z^(−1)) − 2z^(−2)/(1 − z^(−1))²}

⟹ h[k] = (20 × 2^(k−1) − 9 − 2(k − 1))u[k − 1]
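MATLAB's built-in residue() computes the same expansion; a quick check of this example (a sketch):

num = [11 -15 6];                       % 11z^2 - 15z + 6
den = conv([1 -2], [1 -2 1]);           % (z-2)(z-1)^2
[r, p, k] = residue(num, den);          % residues r at poles p (k is the direct term)
disp([r p])                             % expect residues 20, -9, -2 at poles 2, 1, 1
                                        % (the last residue is the coefficient of 1/(z-1)^2)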
Example:

H(z) = (z³ − z² + 3z − 1)/((z − 1)(z² − z + 1))

Because it is not strictly proper, we first divide the numerator by the denominator and obtain

H(z) = 1 + z(z + 1)/((z − 1)(z² − z + 1)) = 1 + G(z)

As G(z) has a zero at the origin, we can divide by z:

G(z)/z = (z + 1)/((z − 1)(z² − z + 1)) = (z + 1)/((z − 1)(z − e^(jπ/3))(z − e^(−jπ/3)))

Note that complex poles or complex zeros, if any, always occur in conjugate pairs for real sequences.

G(z)/z = 2/(z − 1) − 1/(z − e^(jπ/3)) − 1/(z − e^(−jπ/3))

We cross-multiply by z and invert:

H(z) = 1 + 2z/(z − 1) − z/(z − e^(jπ/3)) − z/(z − e^(−jπ/3)) ⟹ h[k] = δ[k] + (2 − 2cos(πk/3))u[k]
Sometimes it helps to work directly in powers of z^(−1). We illustrate this in the next example.

H(z) = (3 − (5/6)z^(−1))/((1 − (1/4)z^(−1))(1 − (1/3)z^(−1))), |z| > 1/3

There are two poles, one at z = 1/3 and one at z = 1/4. As the ROC lies outside the outermost pole, the inverse transform is a right-handed sequence:

H(z) = (3 − (5/6)z^(−1))/((1 − (1/4)z^(−1))(1 − (1/3)z^(−1))) = H₁/(1 − (1/4)z^(−1)) + H₂/(1 − (1/3)z^(−1))

H(z) = 1/(1 − (1/4)z^(−1)) + 2/(1 − (1/3)z^(−1)) ⟹ h[k] = {(1/4)ᵏ + 2(1/3)ᵏ}u[k]
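When the Signal Processing Toolbox is available, residuez() performs this expansion directly in powers of z^(−1) (a sketch checking the example above):

b = [3 -5/6]; a = conv([1 -1/4], [1 -1/3]);
[r, p, k] = residuez(b, a);   % residues r = 1 and 2 at poles p = 1/4 and 1/3 (up to ordering)
disp([r p])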
H(z) = 1/(z² + (√2 − 1)z − √2), |z| > √2

As the degree of the numerator polynomial is less than that of the denominator polynomial, we first divide by z and do a partial fraction expansion:

H(z)/z = 1/(z(z² + (√2 − 1)z − √2)) = α/z + β/(z − 1) + γ/(z + √2)

H(z) = −(1/√2) + (1/(1 + √2))·z/(z − 1) + (1/(2 + √2))·z/(z + √2)

h[k] = −(1/√2)δ[k] + {(1/(1 + √2)) + (1/(2 + √2))(−√2)ᵏ}u[k]
❷ Inversion by Power Series Now we will present another method of inversion. In this,
both the numerator and the denominator are written in powers of 𝑧 −1 and we divide the
former by the latter through long division. Because we obtain the result in a power series,
this method is known as the power series method. We illustrate it with an example.
Example: Determine the inverse Z-transform of H(z) = log(1 + az^(−1)), |z| > |a|.

log(1 + v) = Σ_{i=1}^{∞}(−1)^(i+1)vⁱ/i, |v| < 1 ⟹ log(1 + az^(−1)) = Σ_{k=1}^{∞}(−1)^(k+1)aᵏz^(−k)/k, |z| > |a|

H(z) = log(1 + az^(−1)) = 𝒵{ (−1)^(k+1)aᵏ/k, k > 0 ; 0, k ≤ 0 } ⟹ h[k] = −((−a)ᵏ/k)u[k − 1]

Example: Determine the inverse Z-transform of H(z) = e^(z^(−1)).

Let us use the power series expansion of e^(z^(−1)):

H(z) = e^(z^(−1)) = e^(1/z) = Σ_{k=0}^{∞}(1/k!)z^(−k)

h[k] = (1/k!)u[k]
For convenience, the very important Z-transform properties are summarized in the following table (x[n] ⟷ X(z), x₁[n] ⟷ X₁(z), x₂[n] ⟷ X₂(z)):

Linearity                  αx₁[n] + βx₂[n]          αX₁(z) + βX₂(z)
Time shifting              x[n − n₀]                z^(−n₀)X(z)
Frequency scaling 1        e^(jω₀n)x[n]             X(e^(−jω₀)z)
Frequency scaling 2        z₀ⁿx[n]                  X(z/z₀)
Time reversal              x[−n]                    X(1/z)
Frequency differentiation  n·x[n]                   −z·dX(z)/dz
Integration                Σ_{m=−∞}^{n} x[m]        X(z)/(1 − z^(−1))
Convolution                x₁[n] ⋆ x₂[n]            X₁(z)X₂(z)
Parseval's theorem         Σ_{n=−∞}^{∞} x₁[n]x₂⋆[n] = (1/2πj)∮ X₁(z)X₂⋆(1/z⋆)z^(−1)dz
Remark: The notation |𝑧| stands for the magnitude of complex numbers
Solved Problems:
Exercise 1: Consider the discrete-time LTI system described by

y[n−1] − (5/2)y[n] + y[n+1] = x[n]

1. Determine the poles and zeros of the system function (is it stable?)
2. Determine the impulse response IR h[n]

H(z) = 1/(z^(−1) − 5/2 + z) = z/(z² − (5/2)z + 1) = z/((z − 2)(z − 1/2))

There are two poles p₁ = 1/2, p₂ = 2 and two zeros z₁ = 0, z₂ = ∞; taken as a causal system it is unstable, because not all the poles are inside the unit disc.

H(z)/z = (2/3){1/(z − 2) − 1/(z − 1/2)} ⟹ H(z) = (2/3){1/(1 − 2z^(−1)) − 1/(1 − (1/2)z^(−1))}

Case 01: |z| > 2      h[n] = (2/3)(2ⁿ − (1/2)ⁿ)u[n]
Case 02: |z| < 1/2    h[n] = (2/3)((1/2)ⁿ − 2ⁿ)u[−n−1]
Case 03: 1/2 < |z| < 2    h[n] = −(2/3)(2ⁿu[−n−1] + (1/2)ⁿu[n])
Exercise 2: Consider the Discrete time LTI system with input 𝑥[𝑛] and impulse response
ℎ[𝑛] where
𝑎𝑛 for 𝑛≥0 1 for 0 ≤ 𝑛 ≤ 𝑁 − 1
ℎ[𝑛] = { 𝑥[𝑛] = {
0 for 𝑛<0 0 elsewhere
Determine the output of the system using the Z-transform
Ans: We know that for any LTI system 𝑌(𝑧) = 𝑋(𝑧)𝐻(𝑧) or equivalently 𝑦[𝑛] = 𝑥[𝑛] ⋆ ℎ[𝑛]
H(z) = 𝒵(h[n]) = Σ_{n=−∞}^{+∞} aⁿu[n]z^(−n) = Σ_{n=0}^{+∞}(a/z)ⁿ = 1/(1 − az^(−1))

X(z) = 𝒵(x[n]) = 𝒵(u[n] − u[n−N]) = Σ_{n=0}^{+∞} z^(−n) − Σ_{n=N}^{+∞} z^(−n) = Σ_{n=0}^{N−1} z^(−n) = (1 − z^(−N))/(1 − z^(−1))

Y(z) = X(z)H(z) = (1 − z^(−N))/((1 − az^(−1))(1 − z^(−1))) = 1/((1 − az^(−1))(1 − z^(−1))) − z^(−N)/((1 − az^(−1))(1 − z^(−1)))

Y(z) = [(1/(1−a))·1/(1 − z^(−1)) + (1/(1−a^(−1)))·1/(1 − az^(−1))] − z^(−N)[(1/(1−a))·1/(1 − z^(−1)) + (1/(1−a^(−1)))·1/(1 − az^(−1))]

y[n] = (1/(1−a))(u[n] − u[n−N]) + (1/(1−a^(−1)))(aⁿu[n] − a^(n−N)u[n−N])
clear all, clc
N=30; a=0.7;
for n=1:N            % MATLAB indices start at 1: entry n holds sample n-1
    h(n)=a^(n-1);    % h[n-1] = a^(n-1), starting from a^0 = 1
    x(n)=1;          % x[n-1] = 1 for the first N samples
end
y=conv(x,h);
stem(y),
grid on
Other method for programing discrete LTI systems using difference equations
clear all, clc,
Ans: 1. We have

h₁[n] = u[n+1] ⟹ H₁(z) = 𝒵(h₁[n]) = z/(1 − z^(−1)) = z²/(z − 1) ⟹ unstable (pole at z = 1)

h₂[n] = −u[n] ⟹ H₂(z) = 𝒵(h₂[n]) = −1/(1 − z^(−1)) = −z/(z − 1) ⟹ unstable (pole at z = 1)

In other words, h₁[n] and h₂[n] are unstable because they are not absolutely summable.

2. To determine y[n], let us look at

X(z) = 𝒵(cos(2πn/7) + sin(πn/8)) = (1 − [cos(2π/7)]z^(−1))/(1 − [2cos(2π/7)]z^(−1) + z^(−2)) + ([sin(π/8)]z^(−1))/(1 − [2cos(π/8)]z^(−1) + z^(−2))

It is very difficult to obtain y[n] in the frequency domain because of the very complicated formula of X(z). To avoid this complexity we instead look at h[n] = h₁[n] + h₂[n]:

h[n] = u[n+1] − u[n] = δ[n+1] ⟹ y[n] = x[n] ⋆ h[n] = x[n] ⋆ δ[n+1] = x[n+1]

h[n] = δ[n+1] ⟹ y[n] = x[n+1] = cos(2π(n+1)/7) + sin(π(n+1)/8)
Exercise 4: Consider the discrete-time LTI system with input-output sequences

x[n] = (1/2)ⁿu[n] − (1/4)(1/2)^(n−1)u[n−1],  y[n] = (1/3)ⁿu[n]

1. Find the transfer function H(z). 2. Give the difference equation. 3. Give a realization.
Ans: 1. We have

X(z) = 1/(1 − (1/2)z^(−1)) − ((1/4)z^(−1))/(1 − (1/2)z^(−1)) = (1 − (1/4)z^(−1))/(1 − (1/2)z^(−1)),  Y(z) = 1/(1 − (1/3)z^(−1))

⇕

H(z) = Y(z)/X(z) = (1 − (1/2)z^(−1))/((1 − (1/4)z^(−1))(1 − (1/3)z^(−1))) = (1 − (1/2)z^(−1))/(1 − (7/12)z^(−1) + (1/12)z^(−2))

2. The difference equation is

y[n] − (7/12)y[n−1] + (1/12)y[n−2] = x[n] − (1/2)x[n−1]

3. We introduce an intermediate variable w (direct-form realization; the delay-block diagram is not reproduced):

H(z) = Y(z)/X(z) = (Y(z)/W(z))(W(z)/X(z)),  Y(z)/W(z) = 1 − (1/2)z^(−1),  W(z)/X(z) = 1/(1 − (7/12)z^(−1) + (1/12)z^(−2))

{ w[n] = x[n] + (7/12)w[n−1] − (1/12)w[n−2]   (⟹ x[n] = w[n] − (7/12)w[n−1] + (1/12)w[n−2])
{ y[n] = w[n] − (1/2)w[n−1]
Exercise 5: Determine the inverse of the Z-transforms

X₁(z) = 1/(1 − (1/2)z^(−1)), |z| < 1/2

X₂(z) = (1 − (1/2)z^(−1))/(1 − (3/4)z^(−1) + (1/8)z^(−2)), |z| > 1/2
Ans: 1. From the region of convergence we deduce that x₁[n] is a left-sided signal, so

X₁(z) = 1/(1 − (1/2)z^(−1)) ⟹ x₁[n] = −(1/2)ⁿu[−n−1]

2. X₂(z) = (1 − (1/2)z^(−1))/((1 − (1/2)z^(−1))(1 − (1/4)z^(−1))) = 1/(1 − (1/4)z^(−1)) ⟹ x₂[n] = (1/4)ⁿu[n]

Exercise 6: Find the z-transform and the ROC of the following sequences:

𝟏. x[n] = δ[n] − (1/2)δ[n−6];  𝟐. x[n] = (1/2)^(n−1)u[n−1];  𝟑. x[n] = (1/2)^|n|
Ans:

𝟏. X(z) = 1 − (1/2)z^(−6); the ROC is the entire z-plane except z = 0.

𝟐. X(z) = z^(−1)/(1 − (1/2)z^(−1)); the ROC is |z| > 1/2.

𝟑. X(z) = Σ_{n=1}^{∞}((1/2)z)ⁿ + Σ_{n=0}^{∞}((1/2)z^(−1))ⁿ ⟹ |(1/2)z| < 1 and |(1/2)z^(−1)| < 1 ⟹ the ROC is {z : 1/2 < |z| < 2}

X(z) = ((1/2)z)/(1 − (1/2)z) + 1/(1 − (1/2)z^(−1)), ROC {z : 1/2 < |z| < 2}
Exercise 8: Determine the impulse response of the causal system described by

y[n] + (3/4)y[n−1] + (1/8)y[n−2] = x[n]

Ans:

(1 + (3/4)z^(−1) + (1/8)z^(−2))Y(z) = X(z) ⟹ H(z) = 1/(1 + (3/4)z^(−1) + (1/8)z^(−2)) = 2/(1 + (1/2)z^(−1)) − 1/(1 + (1/4)z^(−1))

h[n] = [2(−1/2)ⁿ − (−1/4)ⁿ]u[n]
Exercise 9: Consider the discrete-time LTI system with impulse response

h[n] = (1/2)ⁿu[n]

Determine the output for each of the inputs:

▪ x[n] = (3/4)ⁿu[n]   ▪ x[n] = (n+1)(1/4)ⁿu[n]
Ans:

▪ Y(z) = 1/((1 − (1/2)z^(−1))(1 − (3/4)z^(−1))) = −2/(1 − (1/2)z^(−1)) + 3/(1 − (3/4)z^(−1)) ⟹ y[n] = [−2(1/2)ⁿ + 3(3/4)ⁿ]u[n]

▪ x[n] = (n+1)(1/4)ⁿu[n] = n(1/4)ⁿu[n] + (1/4)ⁿu[n] ⟹ X(z) = ((1/4)z^(−1))/(1 − (1/4)z^(−1))² + 1/(1 − (1/4)z^(−1)) = 1/(1 − (1/4)z^(−1))²

h[n] = (1/2)ⁿu[n] ⟹ H(z) = 1/(1 − (1/2)z^(−1))

Y(z) = X(z)H(z) = 1/((1 − (1/4)z^(−1))²(1 − (1/2)z^(−1))) = k₁/(1 − (1/2)z^(−1)) + k₂/(1 − (1/4)z^(−1)) + k₃/(1 − (1/4)z^(−1))², with { k₁ = 4 ; k₂ = −2 ; k₃ = −1 }

y[n] = [4(1/2)ⁿ − 2(1/4)ⁿ − (n+1)(1/4)ⁿ]u[n] = [4(1/2)ⁿ − (n+3)(1/4)ⁿ]u[n]
Exercise 10: Determine h[n], the inverse of the given Z-transform, taking into account that h[n] is an absolutely summable sequence:

H(z) = 3/(z − 1/4 − (1/8)z^(−1)), |z| > 1/2
Ans:

H(z) = 3z/(z² − (1/4)z − 1/8) ⟹ H(z)/z = 3/(z² − (1/4)z − 1/8) = 4/(z − 1/2) − 4/(z + 1/4) ⟹ H(z) = 4/(1 − (1/2)z^(−1)) − 4/(1 + (1/4)z^(−1))

h[n] = 4((1/2)ⁿ − (−1/4)ⁿ)u[n]
Exercise 11: Determine h[n] given

H(z) = (z − 1)/(z − a)

Ans:

H(z) = 1/(1 − az^(−1)) − z^(−1)/(1 − az^(−1)) ⟹ h[n] = aⁿu[n] − a^(n−1)u[n−1]

Since aⁿu[n] = aⁿu[n−1] + δ[n], we can rewrite h[n] = δ[n] + (a − 1)a^(n−1)u[n−1].

Now take H(z) = (z − a)/(1 − az), knowing that a^(n+1)u[n] = a^(n+1)u[n−1] + aδ[n].

Ans:

H(z) = (z − a)/(1 − az) = −(1/a)(1 − az^(−1))/(1 − a^(−1)z^(−1)) = −(1/a) + ((a² − 1)/a²)·z^(−1)/(1 − a^(−1)z^(−1))

h[n] = −(1/a)δ[n] + ((a² − 1)/a²)(1/a)^(n−1)u[n−1] = −(1/a)δ[n] + ((a² − 1)/a^(n+1))u[n−1] = −aδ[n] + ((a² − 1)/a^(n+1))u[n]

Each of these expressions is correct.
Exercise 13: Determine the impulse response h[n] and the step response s[n] of the system

H(z) = 1/(1 − (3/4)z^(−1) + (1/8)z^(−2)) = 2/(1 − (1/2)z^(−1)) − 1/(1 − (1/4)z^(−1)) and h[n] = [2(1/2)ⁿ − (1/4)ⁿ]u[n]

S(z) = (8/3)/(1 − z^(−1)) − 2/(1 − (1/2)z^(−1)) + (1/3)/(1 − (1/4)z^(−1)) ⟹ s[n] = [8/3 − 2(1/2)ⁿ + (1/3)(1/4)ⁿ]u[n]
Exercise 14: Find the z-transform of the following: x[n] = 3(0.5)ⁿu[n] − 2(0.25)ⁿu[−n−1]
Ans: X(z) does not exist, because the ROC would have to satisfy both |z| > 0.5 and |z| < 0.25 — an empty intersection.
Exercise 15: A z-transform has poles at z = 1, z = 2 and z = 3. 1. What are the possible ROCs? 2. For which ROC is the sequence causal?
Ans: 1. The possible ROCs are |z| < 1, 1 < |z| < 2, 2 < |z| < 3, and |z| > 3.
2. The sequence x[n] will be causal only if it is right-sided, that is |z| > 3.
In representing and analyzing linear, time-invariant systems, our basic approach has
been to decompose the system inputs into a linear combination of basic signals and
exploit the fact that for a linear system the response is the same linear combination of
the responses to the basic inputs. The convolution sum and convolution integral grew
out of a particular choice for the basic signals in terms of which we carried out the
decomposition, specifically delayed unit impulses. This choice has the advantage that
for systems which are time-invariant in addition to being linear, once the response to
an impulse at one time position is known, then the response is known at all time
positions. Complex exponentials as basic building blocks for representing the input and
output of LTI systems have a considerably different motivation than the use of
impulses. Complex exponentials are eigenfunctions of LTI systems; that is, the
response of an LTI system to any complex exponential signal is simply a scaled replica
of that signal. Consequently, if the input to an LTI system is represented as a linear
combination of complex exponentials, then the effect of the system can be described
simply in terms of a weighting applied to each coefficient in that representation. This
very important and elegant relationship between LTI systems and complex exponentials
leads to some extremely powerful concepts and results.
Fourier-Analysis of
Continuous LTI
Systems
I. Introduction: In mathematics, Fourier analysis is the study of the way general functions
may be represented or approximated by sums of simpler trigonometric functions (or
complex exponentials). Fourier analysis grew from the study of
Fourier series, and is named after Joseph Fourier (1768 –1830),
who showed that representing a function as a sum of trigonometric
functions greatly simplifies the study of heat transfer.
In the previous chapters we showed that a periodic signal can be represented as a sum of
sinusoidal signals (i.e., complex exponentials), but the method for determining the phase
and magnitude of the sinusoids was not discussed. This section will describe how to
determine the frequency domain representation of the signal. For now we will consider only
periodic signals, though the concept of the frequency domain can be extended to signals
that are not periodic (using what is called the Fourier Transform).
Important Note: In mathematics, the term Fourier analysis often refers to the study of
both operations (i.e. process of decomposing and rebuilding the function). Therefore, the
Fourier analysis includes two main categories named Fourier series and Fourier Transform.
Fourier analysis has many scientific applications: physics, partial differential equations,
number theory, combinatorics, digital signal processing, probability theory, statistics,
forensics, option pricing, cryptography, numerical analysis, acoustics, oceanography,
sonar, optics, diffraction, geometry, protein structure analysis, and other areas.
As it is very well known from linear algebra that every Hilbert space admits an orthonormal
basis, and each vector in the Hilbert space can be expanded as a series in terms of this
orthonormal basis. So, we suggest studying Fourier series on Hilbert spaces. There exists a
large number of orthogonal signal sets which can be used as basis signals for generalized
Fourier series. Some well-known signal sets are exponential functions (sometimes
replaced by sinusoids or trigonometric functions), Walsh functions, Bessel functions,
Legendre polynomials, Laguerre functions, Jacobi polynomials, Hermite polynomials, and
trigonometric and the exponential sets discussed in the rest of the chapter. In order to
make things clear we prefer to introduce the concept of orthogonal functions.
II. Orthogonal Functions: The concept of orthogonality with regards to functions is like a
more general way of talking about orthogonality with regards to vectors. Orthogonal vectors
are geometrically perpendicular because their dot product is equal to zero. When you take
the dot product of two vectors you multiply their entries and add them together; but if you
wanted to take the "dot" or inner product of two functions, you would treat them as though
they were vectors with infinitely many entries and taking the dot product would become
multiplying the functions together and then integrating over some interval. It turns out that
for the inner product (for arbitrary real number 𝐿)
𝐿
〈𝐟(𝑡), 𝐠(𝑡)〉 = ∫ 𝐟(𝑡)𝐠(𝑡)𝑑𝑡
−𝐿
Definition: Two non-zero functions, 𝐟(𝑡) and 𝐱(𝑡), are said to be orthogonal on 𝑡1 ≤ 𝑡 ≤ 𝑡2 if
𝑡2
∫ 𝐟(𝑡)𝐱(𝑡)𝑑𝑡 = 0
𝑡1
II.I. Orthogonal Signals and Gram-Schmidt: Now, consider approximating a signal f(t) over the interval [t₁, t₂] by a set of N mutually orthogonal signals x₁(t), x₂(t), …, x_N(t) as

f(t) ≈ Σ_{k=1}^{N} c_k x_k(t)

Because we are dealing with infinite-dimensional spaces, we can say that in our approximation the error e(t) = f(t) − Σ_{k=1}^{N} c_k x_k(t) will approach zero as N → ∞. So when we express a function in such a way we are actually performing the Gram-Schmidt process, by projecting the function onto the basis {x₁(t), x₂(t), …, x_N(t)} with

c_k = ⟨f(t), x_k(t)⟩/⟨x_k(t), x_k(t)⟩ = (∫_{t₁}^{t₂} f(t)x_k(t)dt)/(∫_{t₁}^{t₂}(x_k(t))²dt) = (1/E_k)∫_{t₁}^{t₂} f(t)x_k(t)dt,  k = 1,2,…,N

Proof: Let us prove the formula for the coefficients c_k, beginning from the error e(t) in the above approximation:

E_e = ∫_{t₁}^{t₂} e²(t)dt,  e(t) = f(t) − Σ_{k=1}^{N} c_k x_k(t)

Notice that the square (x₁(t) + ⋯ + x_N(t))² integrates like x₁²(t) + ⋯ + x_N²(t) because of mutual orthogonality (the cross terms integrate to zero), so we can write

E_e = ∫_{t₁}^{t₂} e²(t)dt = ∫_{t₁}^{t₂} f²(t)dt − 2Σ_{k=1}^{N} c_k ∫_{t₁}^{t₂} f(t)x_k(t)dt + Σ_{k=1}^{N} c_k² ∫_{t₁}^{t₂} x_k²(t)dt

Note that the right-hand side is a definite integral with t as the dummy variable. Hence E_e is a function of the parameters c_k (not t), and E_e is minimum for some choice of c_k. To minimize E_e, a necessary condition is

∂E_e/∂c_k = 0

from which we obtain

−2∫_{t₁}^{t₂} f(t)x_k(t)dt + 2c_k ∫_{t₁}^{t₂} x_k²(t)dt = 0 ⟹ c_k ∫_{t₁}^{t₂} x_k²(t)dt = ∫_{t₁}^{t₂} f(t)x_k(t)dt

⟹ c_k = (∫_{t₁}^{t₂} f(t)x_k(t)dt)/(∫_{t₁}^{t₂}(x_k(t))²dt) = (1/E_k)∫_{t₁}^{t₂} f(t)x_k(t)dt ■
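The projection formula is easy to illustrate numerically (a sketch; the target f(t) = t, the basis {sin(kt)} on [−π, π] and the truncation N = 5 are arbitrary choices):

t = linspace(-pi, pi, 10001); f = t;
approx = zeros(size(t));
for k = 1:5
    xk = sin(k*t);                             % mutually orthogonal basis signals
    ck = trapz(t, f.*xk) / trapz(t, xk.^2);    % c_k = <f, x_k> / E_k
    approx = approx + ck*xk;
end
plot(t, f, t, approx, '--'), grid on           % c_k = 2(-1)^(k+1)/k, as the theory predicts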
II.II. Elementary Energy Signals: It is striking and very interesting to look at the relationship that links the energy stored in the basic signals to the total energy. We know that the square of a sum of two orthogonal signals integrates as the sum of the squares of the two signals, that is ∫(x₁(t) + x₂(t))²dt = ∫x₁²(t)dt + ∫x₂²(t)dt. More generally, if a set {xᵢ(t)} is mutually orthogonal, then ∫(x₁(t) + ⋯ + x_N(t))²dt = ∫x₁²(t)dt + ⋯ + ∫x_N²(t)dt; hence, for f(t) = Σ_{k=1}^{N} c_k x_k(t),

∫_{t₁}^{t₂} f²(t)dt = Σ_{k=1}^{N} c_k²(∫_{t₁}^{t₂} x_k²(t)dt) ⟺ E_f = ∫_{t₁}^{t₂} f²(t)dt = Σ_{k=1}^{N} c_k²E_k

This means that the total energy of the original function is the sum of the energies of all the orthogonal components. This equation goes under the name of Parseval's theorem.
III. Fourier Series (Continuous-Time Signals): As we said before, the French mathematician Fourier found that any periodic waveform, that is, a waveform that repeats itself after some time, can be expressed as a series of harmonically related sinusoids, i.e., sinusoids whose frequencies are multiples of a fundamental frequency (or first harmonic).

III.I Trigonometric Fourier series: There are two common forms of the Fourier series, the "Trigonometric Form" and the "Exponential Form"; here we are going to start with the first one. Consider the signal set {1, cos(ω₀t), cos(2ω₀t), …, cos(nω₀t), sin(ω₀t), sin(2ω₀t), …, sin(nω₀t)}. In this set the sinusoid of frequency ω₀ is called the fundamental, and a sinusoid of frequency nω₀ is called the nᵗʰ harmonic.
Now we show that this set is orthogonal over any interval of duration T₀ = 2π/ω₀, which is the period of the fundamental. To prove the orthogonality we use the facts that

⦁ ∫_{t₁}^{t₁+T₀} cos(nω₀t)cos(mω₀t)dt = { 0 if n ≠ m ; T₀ if n = m = 0 ; T₀/2 if n = m ≠ 0 }

⦁ ∫_{t₁}^{t₁+T₀} sin(nω₀t)sin(mω₀t)dt = { 0 if n ≠ m ; T₀/2 if n = m ≠ 0 }

⦁ ∫_{t₁}^{t₁+T₀} sin(nω₀t)cos(mω₀t)dt = 0 for all n and m

Therefore, we can express a signal f(t) by a trigonometric Fourier series over any interval of duration T₀ seconds as

f(t) = a₀ + Σ_{n=1}^{∞} aₙcos(nω₀t) + Σ_{n=1}^{∞} bₙsin(nω₀t)

To find the coefficient bₙ, multiply both sides of the equation by sin(mω₀t) for an arbitrary but fixed positive integer m, and integrate over one period. Using the above facts we get

b_m = (2/T₀)∫_{T₀} f(t)sin(mω₀t)dt for each m = 1,2,…

Since m was arbitrary, you can change this to an n and get the formula for the coefficients of the sine terms. A similar argument for the cosine terms establishes the formula for the coefficient a_m:

If m = 0 then
a₀ = (1/T₀)∫_{T₀} f(t)dt
Elseif m = 1,2,… then
a_m = (2/T₀)∫_{T₀} f(t)cos(mω₀t)dt
End If
Remark: if we define a new function h(t) = f(t)g(t) we obtain the following result:

∫_{−L}^{L} h(t)dt = { 2∫₀^{L} h(t)dt if h(t) is an even function ; 0 if h(t) is an odd function }

Note that this fact is only valid on a "symmetric" interval, i.e. an interval of the form [−L, L]. If we aren't integrating on a "symmetric" interval then the fact may or may not be true. When studying Fourier series we are specifically looking at the space of square-(Lebesgue)-integrable functions L²[−π, π], which has the inner product ⟨f(t), g(t)⟩ = ∫_{−π}^{π} f(t)g(t)dt.

The series can also be written in the compact harmonic form

f(t) = C₀ + Σ_{n=1}^{∞} Cₙcos(nω₀t + θₙ)
III.II Exponential Fourier series: To represent the Fourier series in exponential form, the sine and cosine terms in the Fourier series are expressed in terms of the exponential function:

sin(nω₀t) = (1/(2j))(e^(jnω₀t) − e^(−jnω₀t)) and cos(nω₀t) = (1/2)(e^(jnω₀t) + e^(−jnω₀t))

which results in the exponential Fourier series

f(t) = a₀ + Σ_{n=1}^{∞}(aₙcos(nω₀t) + bₙsin(nω₀t)) = Σ_{k=−∞}^{+∞} c_k e^(jkω₀t)

where

c_k = (1/2)(a_k − jb_k), c_{−k} = (1/2)(a_k + jb_k), a_k = 2Re[c_k], b_k = −2Im[c_k], a₀ = c₀ and c_{−k} = c_k⋆

Exercise: As developed above, any periodic signal f(t) can be represented by its exponential Fourier series f(t) = Σ_{k=−∞}^{+∞} c_k e^(jkω₀t). Prove that

c_k = (1/T₀)∫_{T₀} f(t)e^(−jkω₀t)dt

Solution: given the complex basis signals {x_k(t) = e^(jkω₀t)}, which are mutually orthogonal, we are able to perform the Gram-Schmidt projection of f(t) onto the basis {x_k(t)}, using the complex inner product c_k = (1/E_k)∫_{T₀} f(t)x_k⋆(t)dt, with

E_k = ∫_{T₀} x_k(t)x_k⋆(t)dt = ∫_{T₀} e^(jkω₀t)e^(−jkω₀t)dt = T₀ ⟹ c_k = (1/T₀)∫_{T₀} f(t)e^(−jkω₀t)dt
A more compact representation of the Fourier series uses complex exponentials. In this case we end up with the following synthesis and analysis equations:

f(t) = Σ_{k=−∞}^{+∞} c_k e^(jkω₀t)  (synthesis)   and   c_k = (1/T₀)∫_{T₀} f(t)e^(−jkω₀t)dt  (analysis)

Parseval's theorem takes the form (1/T₀)∫_{T₀}|f(t)|²dt = Σ_{k=−∞}^{+∞}|c_k|²; the interpretation of this form of the theorem is that the total energy (average power) of a signal can be calculated from its Fourier series coefficients.
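The analysis equation is easy to check numerically by integrating one period with trapz (a sketch; the signal is the ±A square wave treated in Example 06 below, whose odd-harmonic coefficients are 2A/(jπk)):

A=1; T=2; w0=2*pi/T; t=linspace(0,T,20001);
x = A*((t<T/2) - (t>=T/2));                 % one period of the +/-A square wave
for k = 1:2:5
    ck = trapz(t, x.*exp(-1j*k*w0*t))/T;    % (1/T0) * integral over one period
    disp([ck, 2*A/(1j*pi*k)])               % numerical and analytic c_k should agree
end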
III.III Convergence of Fourier Series: A function f(t) defined on an interval [a, b] is said to be piecewise continuous if it is continuous on the interval except for a finite number of jump discontinuities. The Dirichlet conditions are:

1. f(t) is absolutely integrable over any period, that is, ∫_{T₀}|f(t)|dt < ∞
2. f(t) has a finite number of maxima and minima within any finite interval of t.
3. f(t) has a finite number of discontinuities within any finite interval of t, and each of these discontinuities is finite.

Note that the Dirichlet conditions are sufficient but not necessary conditions for the Fourier series representation.
III.IV Properties of Continuous Fourier Series: If the continuous-time functions f(t), g(t) are two periodic signals with the same period, having the exponential Fourier series

f(t) ⟷ α_k = (1/T₀)∫_{T₀} f(t)e^(−jkω₀t)dt and g(t) ⟷ β_k = (1/T₀)∫_{T₀} g(t)e^(−jkω₀t)dt

then some of the important properties of Fourier series are summarized below.

a. Linearity: Af(t) + Bg(t) ⟷ c_k = Aα_k + Bβ_k

b. Time Shifting: f(t − τ) ⟷ b_k = (1/T₀)∫_{T₀} f(t − τ)e^(−jkω₀t)dt = e^(−jkω₀τ)α_k

c. Frequency Shifting: e^(jℓω₀t)f(t) ⟷ b_k = (1/T₀)∫_{T₀} f(t)e^(−j(k−ℓ)ω₀t)dt = α_{k−ℓ}

d. Time Reversal: f(−t) ⟷ b_k = (1/T₀)∫_{T₀} f(−t)e^(−jkω₀t)dt = α_{−k}

e. Time Scaling: f(at) ⟷ b_k = α_k, where f(at) is periodic with fundamental period T₀/a.

f. Differentiation and Integration:

(d/dt)f(t) ⟷ b_k = (1/T₀)∫_{T₀}((d/dt)f(t))e^(−jkω₀t)dt = (jkω₀)α_k

∫_{−∞}^{t} f(τ)dτ ⟷ b_k = (1/T₀)∫_{T₀}(∫_{−∞}^{t} f(τ)dτ)e^(−jkω₀t)dt = (1/(jkω₀))α_k

g. Multiplication and Convolution:

f(t)g(t) ⟷ c_k = Σ_{ℓ=−∞}^{+∞} α_ℓβ_{k−ℓ} = α_k ⋆ β_k

f(t) ⋆ g(t) ⟷ c_k = (1/T₀)(∫_{T₀} f(t)e^(−jkω₀t)dt)(∫_{T₀} g(t)e^(−jkω₀t)dt) = T₀α_kβ_k

h. Conjugate:

f⋆(t) ⟷ c_k = (1/T₀)∫_{T₀} f⋆(t)e^(−jkω₀t)dt = ((1/T₀)∫_{T₀} f(t)e^(jkω₀t)dt)⋆ = α⋆_{−k}
i. Response of LTI: Assume that x(t) is the input of a linear system whose transfer function is H(s) = Y(s)/X(s), and the response is y(t), with

x(t) ⟷ α_k ⟹ y(t) ⟷ β_k = H(jkω₀)α_k

Proof: In previous chapters we showed that the response of an LTI system with transfer function H(s) to an exponential input e^(st) is also an exponential, H(s)e^(st). Therefore, the system response to the exponential e^(jωt) is the exponential H(jω)e^(jωt). This input-output pair can be displayed as: e^(jωt) (input) ⟹ H(jω)e^(jωt) (output). Therefore, from the linearity property,

x(t) = Σ_{k=−∞}^{+∞} α_k e^(jkω₀t) ⟹ y(t) = Σ_{k=−∞}^{+∞} α_k H(jkω₀)e^(jkω₀t)

The response y(t) is obtained in the form of an exponential Fourier series, and is therefore a periodic signal of the same period as that of the input, with β_k = H(jkω₀)α_k.
Remark: An odd function can be represented by a Fourier sine series and an even function can be represented by a Fourier cosine series, so it is not surprising that we use sinusoids:

f(t) = f_e(t) = a₀ + Σ_{n=1}^{∞} aₙcos(nω₀t), with bₙ = 0 & aₙ = (2/T₀)∫_{T₀} f(t)cos(nω₀t)dt

f(t) = f_o(t) = Σ_{n=1}^{∞} bₙsin(nω₀t), with aₙ = 0 & bₙ = (2/T₀)∫_{T₀} f(t)sin(nω₀t)dt
Example: 01 Find the Fourier series coefficients c_k of the signal x(t) = A|sin(πt)|.
Solution: The signal x(t) = A|sin(πt)| is a periodic function with fundamental period T₀ = 1; the Fourier series is x(t) = Σ_{k=−∞}^{+∞} c_k e^(jkω₀t), with c_k = (1/T₀)∫_{T₀} x(t)e^(−jkω₀t)dt and ω₀ = 2π/T₀ = 2π. One finds c_k = 2A/(π(1 − 4k²)), which the following script sums up:

A=1; Nmax=10; w0=2*pi; t=0:0.01:2; x1=zeros(size(t));   % setup (values assumed)
for k=-Nmax:1:Nmax
    x1=x1+(2*A/(pi*(1-4*k^2)))*exp((i*k*w0)*t);
    if k>2
        plot(t,real(x1),'-','linewidth',1.5)
        hold on, grid on
    end
end

x(t) = 2A/π + Σ_{k=1}^{+∞}(4A/(π(1 − 4k²)))cos(2πkt)
Example: 02 Find the trigonometric Fourier coefficients for the periodic signal

x(t) = (At/T₀)(u(t) − u(t − T₀)), 0 < t < T₀ (repeated with period T₀)

a₀ = (1/T₀)∫₀^{T₀}(At/T₀)dt = A/2 and aₙ = (2/T₀)∫₀^{T₀}(At/T₀)cos(2nπt/T₀)dt = 0

bₙ = (2/T₀)∫₀^{T₀}(At/T₀)sin(2nπt/T₀)dt = (2A/T₀²)(−T₀²cos(2nπ)/(2nπ)) = −A/(nπ)

The coefficients aₙ = 0 because x(t) − A/2 is an odd function. The Fourier series is therefore given by

x(t) = A/2 − (A/π)Σ_{k=1}^{+∞}(1/k)sin(2kπt/T₀)
Example: 03 Find the exponential and trigonometric Fourier coefficients for the signal

x(t) = { A, 0 < t < L ; 0, L < t < T } with L = T/2

c₀ = (1/T)∫₀^{T/2} A dt = A/2 and c_k = (1/T)∫_{−T/2}^{T/2} x(t)e^(−jkω₀t)dt = (1/T)∫₀^{T/2} Ae^(−jkω₀t)dt = (A/(2kπj))(1 − e^(−jkπ))
Example: 04 Find the trigonometric Fourier coefficients for the periodic signal

x(t) = { A, |t| < d ; 0, d < |t| < T/2 }

c_k = (1/T)∫_{−d}^{d} Ae^(−jkω₀t)dt = (A/(kπ))sin(kω₀d), summed by the following script:

A=1; d=0.25; T=1; Nmax=30; w0=2*pi/T; t=-T:0.01:T; x1=zeros(size(t)); % setup (values assumed)
for k=-Nmax:1:Nmax
    a=(A/(pi*(k+eps)));
    x1=x1+a*(sin(w0*k*d))*exp((i*k*w0)*t);
    if k>=12
        plot(t,real(x1),'-','linewidth',1.5)
        hold on, grid on
    end
end

The trigonometric Fourier series is

a_k = 2Re[c_k] = (2A/(kπ))sin(dω₀k), b_k = −2Im[c_k] = 0, a₀ = c₀ = 2Ad/T

x(t) = 2Ad/T + (2A/π)Σ_{k=1}^{∞}(sin(dω₀k)/k)cos(kω₀t)

x1=(2*A*d)/T; t=-T:0.01:T;      % partial sums of the trigonometric form
Nmax=30; w0=2*pi/T;
for k=1:Nmax
    x1=x1+(2*A/pi)*(sin(k*w0*d)/k)*cos(k*w0*t);
    if k>25
        plot(t,x1,'-','linewidth',1.5)
        hold on, grid on
    end
end
Example: 05 Find the trigonometric Fourier coefficients for the periodic signal

x(t) = { A, |t| < T/4 ; 0, T/4 < |t| < T/2 }

The coefficients bₙ = 0 because x(t) is an even function. The trigonometric Fourier series is

a_k = 2Re[c_k] = (2A/(kπ))sin(kπ/2), b_k = −2Im[c_k] = 0, a₀ = c₀ = A/2

x(t) = A/2 + Σ_{k=1}^{∞} A(sin(kπ/2)/(kπ/2))cos(kω₀t)
Example: 06 Find the trigonometric Fourier coefficients for the periodic signal x(t)

x(t) = { A, 0 < t < T/2 ; −A, T/2 < t < T }

x(t) = x₁(t) − A with x₁(t) = { 2A, 0 < t < T/2 ; 0, T/2 < t < T }

Let us compute the exponential Fourier series of x₁(t) (we have seen it before):

x₁(t) = Σ_{k=−∞}^{+∞} c_k e^(jkω₀t) with c₀ = (1/T)∫₀^{T} x₁(t)dt = A and c_k = (1/T)∫₀^{T/2} 2Ae^(−jkω₀t)dt = (A/(jkπ))(1 − e^(−jkπ))

The coefficients vanish for even k ≠ 0 and equal 2A/(jkπ) for odd k, hence

x(t) = x₁(t) − A = (2A/(πj))Σ_{m=−∞}^{+∞}(1/(2m+1))e^(j(2m+1)ω₀t)   (Exponential Form)

x(t) = (4A/π)Σ_{m=0}^{+∞}(1/(2m+1))sin((2πt/T)(2m+1))   (Trigonometric Form)
Example: 07 Find the trigonometric Fourier coefficients for the periodic signal x(t)

x(t) = { (2A/T)t, 0 < t < T/2 ; (2A/T)(T − t), T/2 < t < T }

Solution: The signal x(t) is a periodic, piecewise-differentiable function, and

x_new(t) = (d/dt)x(t) = { 2A/T, 0 < t < T/2 ; −2A/T, T/2 < t < T }

From Example 06 (with amplitude 2A/T) we get

x_new(t) = (d/dt)x(t) = (8A/(πT))Σ_{m=0}^{+∞}(1/(2m+1))sin((2πt/T)(2m+1))

Integrating term by term:

x(t) = c − (4A/π²)Σ_{m=0}^{+∞}(1/(2m+1)²)cos((2πt/T)(2m+1))

The constant of integration can be determined by c = (1/T)∫₀^{T} x(t)dt = (1/T)·Area = A/2

x(t) = A/2 − (4A/π²)Σ_{m=0}^{+∞}(1/(2m+1)²)cos((2πt/T)(2m+1))
Example: 08 Find the trigonometric Fourier coefficients for the periodic signal x(t)

x(t) = { (A/T)(T + t), −T < t < 0 ; (A/T)(T − t), 0 < t < T }

Solution: The signal x(t) is periodic with period L = 2T and is an even function, so we deduce that bₙ = 0; we only need to determine aₙ:

a₀ = (1/2T)∫_{−T}^{T} x(t)dt = (1/2T)·Area = A/2,  aₙ = (2/2T)∫_{−T}^{T} x(t)cos(nω₀t)dt

From Example 07 and the shift property of Fourier series we get

x(t) = A/2 + Σ_{k=1}^{+∞}(4A/((2k−1)²π²))cos((2k−1)ω₀t)

A=1; L=2; t=-L:0.01:L; x1=A/2;   % setup (values assumed); the period is L = 2T
Nmax=5; w0=2*pi/L;
for k=1:Nmax
    x1=x1+(4*A/((2*k-1)*pi)^2)*cos((2*k-1)*w0*t);
    if k>3
        plot(t,x1,'-','linewidth',1.5)
        hold on, grid on
    end
end
Example: 09 Find the trigonometric Fourier coefficients for the periodic signal x(t)

x(t) = (2A/T)t when |t| < T/2, and the period is T

Solution: This signal is odd, which implies that aₙ = 0; using the result of Example 02 with the shift property we get

bₙ = e^(−jnω₀T/2)(−2A/(πn)) = 2A(−1)^(n+1)/(πn)

x(t) = Σ_{k=1}^{∞}(2A(−1)^(k+1)/(πk))sin(kω₀t)

A=1; T=1; w0=2*pi/T; t=-T:0.01:T; x1=zeros(size(t)); Nmax=10;  % setup (values assumed)
for k=1:Nmax
    x1=x1+((2*A*(-1)^(k+1))/(pi*k))*sin(k*w0*t);
    if k>4
        plot(t,x1,'-','linewidth',1.5)
        hold on, grid on
    end
end
Example: 10 Find the trigonometric Fourier coefficients for the periodic signal x(t)

x(t) = A(1 − t/T) when 0 < t < T, and the period is T

Solution: To make things clearer, let us go back to Example 02, where the function studied is of the form

y(t) = (A/T)t when 0 < t < T ⟹ x(t) = A − y(t)

In Example 02 we obtained

y(t) = A/2 − (A/π)Σ_{k=1}^{∞}(1/k)sin(kω₀t) ⟹ x(t) = A − y(t) = A/2 + (A/π)Σ_{k=1}^{∞}(1/k)sin(kω₀t)
Example: 11 Find the trigonometric Fourier coefficients for the periodic square wave x(t) (equal to A on the first half-period and −A on the second; graph not reproduced, ω₀ = 1 in the script below):

aₙ = (2/T)∫₀^{T} x(t)cos(nω₀t)dt = 0

bₙ = (2/T)∫₀^{T} x(t)sin(nω₀t)dt = (4A/T)∫₀^{T/2} sin(nω₀t)dt = (2A/(nπ))(1 − cos(nω₀T/2))

b_{2k−1} = 4A/((2k−1)π) and b_{2k} = 0 ⟹ x(t) = (4A/π)Σ_{k=1}^{∞}(1/(2k−1))sin((2k−1)ω₀t)

A=1; t=-2*pi:0.01:2*pi; x1=zeros(size(t)); Nmax=10;   % setup (values assumed), w0 = 1
for k=1:Nmax
    x1=x1+(4*A/(pi*(2*k-1)))*sin((2*k-1)*t);
    if k>4
        plot(t,x1,'-','linewidth',1.5)
        hold on, grid on
    end
end
Example: 12 Find the Fourier series of x(t) = t², periodic on the interval (−π, π), with ω₀ = 1.

a₀ = (1/2π)∫_{−π}^{π} t²dt = π²/3

aₙ = (2/2π)∫_{−π}^{π} t²cos(nt)dt = (2/π)∫₀^{π} t²cos(nt)dt = (2/π)(2π·cos(nπ)/n²) = (4/n²)(−1)ⁿ

x(t) = π²/3 + Σ_{k=1}^{∞}(4/k²)(−1)ᵏcos(kt)

x1=pi^2/3; t=-pi:0.01:pi; Nmax=10;   % setup (values assumed)
for k=1:Nmax
    x1=x1+(4/(k^2))*(-1)^k*cos(k*t);
    if k>2
        plot(t,x1,'-','linewidth',1.5)
        hold on, grid on
    end
end
Example: 13 Find the Fourier series of x(t) = t², periodic on the interval (0, 2π), with ω₀ = 1.
Short answer: you are asked to verify that the Fourier series of x(t) is given by

x(t) = 4π²/3 + Σ_{k=1}^{∞}(4/k²)cos(kt) − Σ_{k=1}^{∞}(4π/k)sin(kt)

x1=0; x2=0; t=0:0.01:2*pi; Nmax=10;   % setup (values assumed)
for k=1:Nmax
    x1=x1+(4/(k^2))*cos(k*t);
    x2=x2+(-4*pi/k)*sin(k*t);
    x=((4*pi^2)/3)+x1+x2;
    if k>4
        plot(t,x,'-','linewidth',1.5)
        hold on, grid on
    end
end
Example: 14 Find the Fourier series of the periodic signal of period 4 given by y(t) = t on (0, 2) and y(t) = 0 on (−2, 0).

Solution: This function is neither even nor odd, so we expect both sine and cosine terms to be present. The period is 4 = 2L, so L = 2. Because y(t) = 0 on the interval (−2, 0), each of the integrals in the Euler formulas, which should be an integral from t = −2 to t = 2, can be replaced with an integral from t = 0 to t = 2. Thus, the Euler formulas give

a₀ = (1/T)∫₀^{T} y(t)dt = (1/4)∫₀^{2} t dt = 1/2

aₙ = (1/2)∫₀^{2} t·cos(nπt/2)dt   (let x = (nπ/2)t, so t = 2x/(nπ) and dt = (2/(nπ))dx)

= (1/2)∫₀^{nπ}(2x/(nπ))cos(x)(2/(nπ))dx = (2/(n²π²))∫₀^{nπ} x·cos(x)dx = (2/(n²π²))(cos(nπ) − 1)

= (2/(n²π²))((−1)ⁿ − 1) = { 0 when n = 2k ; −4/(n²π²) when n = 2k − 1 }

bₙ = (1/2)∫₀^{2} t·sin(nπt/2)dt = (2/(n²π²))∫₀^{nπ} x·sin(x)dx = (2/(n²π²))[sin(x) − x·cos(x)]₀^{nπ}

= −(2/(n²π²))(nπ·cos(nπ)) = (2/(nπ))(−1)^(n+1)
Example: 15 Compute the Fourier series of the function x(t) = sin³(t) Solution: This can be handled by trig identities, reducing it to a finite sum of terms of the form sin(nt):

sin³(t) = sin(t)sin²(t) = (1/2)sin(t)(1 − cos(2t)) = (1/2)sin(t) − (1/2)sin(t)cos(2t)

= (1/2)sin(t) − (1/2)((1/2)(sin(3t) + sin(−t))) = (3/4)sin(t) − (1/4)sin(3t)
Example: 16 Consider the periodic pulse x(t) equal to 1 for 0 < t < h and 0 for h < t < 2π, extended 2π-periodically, and examine the limit h → 0. Solution: For this function, it is more convenient to compute the aₙ and bₙ using integration over the interval [0, 2π] rather than the interval [−π, π]. As h → 0 the graph becomes a series of infinite-height spikes of width 0. This looks like an infinite sum of Dirac delta functions, i.e. the delta function extended to be periodic with period 2π. That is,

lim_{h→0} x(t)/h = Σ_{k=−∞}^{+∞} δ(t − 2kπ)

Now compute the Fourier coefficients aₙ/h and bₙ/h as h approaches 0:

aₙ/h = sin(nh)/(nhπ) → 1/π and bₙ/h = (1 − cos(nh))/(nhπ) → 0

Also, a₀/h → 1/(2π). Thus, the 2π-periodic delta function has the Fourier series

Σ_{k=−∞}^{+∞} δ(t − 2kπ) ≈ 1/(2π) + (1/π)Σ_{k=1}^{∞} cos(kt) ≈ 1/(2π) + (1/π)(cos(t) + cos(2t) + cos(3t) + ⋯)

≈ (1/(2π))(1 + e^(it) + e^(−it) + e^(2it) + e^(−2it) + e^(3it) + e^(−3it) + ⋯)

≈ (1/(2π)) lim_{k→∞} (e^(i(k+1/2)t) − e^(−i(k+1/2)t))/(e^(it/2) − e^(−it/2)) = (1/(2π)) lim_{k→∞} sin((k + 1/2)t)/sin(t/2)
IV. Fourier Transform (Continuous-Time Signals): Can Fourier series be applied to functions f(t) that are not periodic? Strictly speaking the answer is no. But we can generalize the approach to provide a positive answer. The trick is to take the periodicity length 2L to infinity, so that the function becomes periodic with an infinite period, which is the same thing as not being periodic at all. A consequence of this limiting procedure is that the set of wave numbers implicated in the Fourier expansion will no longer be discrete, but will form a continuum. Discrete sums will therefore be replaced by integrals, and the standard Fourier series will become a Fourier transform (it is closely related to the Fourier series). In other words, the Fourier Transform is a mathematical technique that transforms a function of time, f(t), to a function of frequency, F(ω).
The Fourier transform can be viewed as an extension of the above Fourier series to non-
periodic functions, and this is the topic of this section. The Fourier Transform of a function
can be derived as a special case of the Fourier series when the period, 𝑇 → ∞. Start with the
Fourier series synthesis equation
∞
𝐟(𝑡) = ∑ 𝑐𝑘 𝑒 𝑗𝑘𝜔0 𝑡
𝑘=−∞
As 𝑇 → ∞ the fundamental frequency, 𝜔0 = 2𝜋/𝑇, becomes extremely small and the quantity
𝑘𝜔0 becomes a continuous quantity that can take on any value (since k has a range of ±∞)
so we define a new variable 𝜔 = 𝑘𝜔0 ; we also let 𝐅(𝜔) = 𝑇𝑐𝑘 . Making these substitutions in
the previous equation yields the analysis equation for the Fourier Transform (also called
the Forward Fourier Transform or the Fourier integral).
+∞
𝐅(𝜔) = lim 𝑇𝑐𝑘 = ∫ 𝐟(𝑡)𝑒 −𝑗𝜔𝑡 𝑑𝑡
𝑇→∞ −∞
Likewise, we can derive the Inverse Fourier Transform (i.e., the synthesis equation) by
starting with the synthesis equation for the Fourier series (and multiply and divide by T).
f(t) = Σ_{k=−∞}^{∞} c_k e^(jkω₀t) = Σ_{k=−∞}^{∞}(1/T)(Tc_k)e^(jkω₀t)

As T → ∞, 1/T = ω₀/2π. Since ω₀ is very small (as T gets large), replace ω₀ by the quantity dω. As before, we write ω = kω₀ and F(ω) = Tc_k. A little work (and replacing the sum by an integral) yields the synthesis equation of the Fourier Transform:

f(t) = Σ_{k=−∞}^{∞}(1/T)(Tc_k)e^(jkω₀t) = Σ_{k=−∞}^{∞}(dω/2π)F(ω)e^(jωt) = (1/2π)∫_{−∞}^{+∞} F(ω)e^(jωt)dω
Example: 1 Let us find the Fourier series of the continuous-time periodic square wave (pulse train) of period T = 2π/ω₀ and duration h. The coefficients Tc_k are denoted by X(ω); plotting X(ω) for fixed h and different values of T (i.e. for different frequencies ω₀) shows the samples kω₀ filling in a continuous envelope as T grows.
Definition of the Fourier Transform (and Inverse): If 𝐟(𝑡) is a continuous, integrable signal,
then the forward and inverse Fourier Transforms are:
𝐅(𝜔) = ∫_{−∞}^{+∞} 𝐟(𝑡)𝑒^{−𝑗𝜔𝑡}𝑑𝑡 (𝐀𝐧𝐚𝐥𝐲𝐬𝐢𝐬 𝐄𝐪𝐮𝐚𝐭𝐢𝐨𝐧)
𝐟(𝑡) = (1/2𝜋)∫_{−∞}^{+∞} 𝐅(𝜔)𝑒^{𝑗𝜔𝑡}𝑑𝜔 (𝐒𝐲𝐧𝐭𝐡𝐞𝐬𝐢𝐬 𝐄𝐪𝐮𝐚𝐭𝐢𝐨𝐧)
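As a quick numerical sanity check of the analysis equation, the following MATLAB sketch
approximates the integral by a Riemann sum for a gate pulse and compares it to the
closed form 2𝐴 sin(𝜔ℎ)/𝜔 derived later in this section (the values of A, h, the time-axis
truncation, and the frequency grid are illustrative assumptions, not from the text):
% Numerical approximation of F(w) = integral of f(t) e^{-jwt} dt
A = 1; h = 2;                     % gate pulse of height A on |t| < h (assumed values)
dt = 1e-3; t = -10:dt:10;         % truncated time axis; f(t) is zero outside [-h,h]
f = A*(abs(t) < h);               % f(t)
w = linspace(-10, 10, 500);       % frequency grid (avoids w = 0 exactly)
F = arrayfun(@(wk) sum(f .* exp(-1j*wk*t))*dt, w);   % Riemann sum of the integral
Fexact = 2*A*sin(w*h)./w;         % closed form derived below for the gate pulse
max(abs(F - Fexact))              % small discretization error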
Example: 2 Let us find the inverse Fourier transform of the frequency pulse function
𝐅(𝜔) = { 1/2𝐴 when |𝜔| < 𝐴 ; 0 when |𝜔| > 𝐴 }
⟹ 𝐟(𝑡) = (1/2𝜋)∫_{−∞}^{+∞} 𝐅(𝜔)𝑒^{𝑗𝜔𝑡}𝑑𝜔 = (1/4𝜋𝐴)∫_{−𝐴}^{+𝐴} 𝑒^{𝑗𝜔𝑡}𝑑𝜔 = (1/2𝜋) sin(𝐴𝑡)/(𝐴𝑡)
Remark: Let us see what happens if 𝐴 → 0. We start by checking the frequency domain:
lim_{𝐴→0} 𝐅(𝜔) = lim_{𝐴→0} { 1/2𝐴 when |𝜔| < 𝐴 ; 0 when |𝜔| > 𝐴 } = 𝛿(𝜔)
And in the time domain we have
lim_{𝐴→0} 𝐟(𝑡) = lim_{𝐴→0} (1/2𝜋) sin(𝐴𝑡)/(𝐴𝑡) = 1/2𝜋
This means that
𝔽{1/2𝜋} = 𝛿(𝜔) ⟺ 𝔽{𝐟(𝑡) = 1} = 2𝜋𝛿(𝜔)
Example: 3 Assume that the function 𝐟(𝑡) is continuous with Fourier transform 𝐅(𝜔);
find the Fourier transform of 𝑑𝐟(𝑡)/𝑑𝑡.
𝐟(𝑡) = (1/2𝜋)∫_{−∞}^{+∞} 𝐅(𝜔)𝑒^{𝑗𝜔𝑡}𝑑𝜔 ⟺ 𝑑𝐟(𝑡)/𝑑𝑡 = (1/2𝜋)(𝑑/𝑑𝑡)∫_{−∞}^{+∞} 𝐅(𝜔)𝑒^{𝑗𝜔𝑡}𝑑𝜔
⟺ 𝑑𝐟(𝑡)/𝑑𝑡 = (1/2𝜋)∫_{−∞}^{+∞} 𝐅(𝜔)(𝑑𝑒^{𝑗𝜔𝑡}/𝑑𝑡)𝑑𝜔
⟺ 𝑑𝐟(𝑡)/𝑑𝑡 = (1/2𝜋)∫_{−∞}^{+∞} 𝑗𝜔𝐅(𝜔)𝑒^{𝑗𝜔𝑡}𝑑𝜔
Therefore,
𝔽{𝑑𝐟(𝑡)/𝑑𝑡} = 𝑗𝜔𝐅(𝜔)
IV.I Alternate Forms of the Fourier Transform There are alternate forms of the Fourier
Transform that you may see in different references. Different forms of the Transform result
in slightly different transform pairs (i.e., 𝐟(𝑡) and 𝐅(𝜔)), so if you use other references, make
sure that the same definitions of the forward and inverse transforms are used.
Existence of the Fourier Transform requires that 𝐟(𝑡) be absolutely integrable,
∫_{−∞}^{+∞} |𝐟(𝑡)|𝑑𝑡 < ∞
Existence of the Fourier Transform requires that discontinuities in 𝐟(𝑡) must be finite (i.e.
|𝐟(𝛼 + ) − 𝐟(𝛼 − )| < ∞). This presents no difficulty for the kinds of functions we will consider
(i.e., functions that we can produce in the lab).
Note: the absolute-integrability requirement would seem to present a problem, because
common signals such as the sine and cosine are not absolutely integrable. We will finesse
this problem later by allowing impulse functions 𝛿(𝛼) in the transform, which are not
functions in the strict sense since their value is not defined at 𝛼 = 0.
IV.II Connection between the Fourier Transform and the Laplace Transform: Since the
Laplace transform may be considered a generalization of the Fourier transform in which
the frequency is generalized from 𝑗𝜔 to 𝑠 = 𝑎 + 𝑗𝜔, the complex variable 𝑠 is often referred to
as the complex frequency.
𝐅(𝜔) = 𝐅(𝑠)|_{𝑠=𝑗𝜔} = ∫_{−∞}^{+∞} 𝐟(𝑡)𝑒^{−𝑠𝑡}𝑑𝑡|_{𝑠=𝑗𝜔} = ∫_{−∞}^{+∞} 𝐟(𝑡)𝑒^{−𝑗𝜔𝑡}𝑑𝑡
Although the Fourier transform can be written as the Laplace transform with 𝑠 = 𝑗𝜔, it
should not be assumed automatically that the Fourier transform of a signal 𝐱(𝑡) is its
Laplace transform with 𝑠 replaced by 𝑗𝜔. If 𝐱(𝑡) is absolutely integrable, the Fourier
transform of 𝐱(𝑡) can be obtained from the Laplace transform of 𝐱(𝑡) with 𝑠 = 𝑗𝜔; this is
not generally true of signals which are not absolutely integrable.
Setting 𝑠 = 𝑎 + 𝑗𝜔 in Laplace transform, we have
𝐅(𝑠) = ∫_{−∞}^{+∞} 𝐟(𝑡)𝑒^{−𝑠𝑡}𝑑𝑡 = ∫_{−∞}^{+∞} 𝐟(𝑡)𝑒^{−(𝑎+𝑗𝜔)𝑡}𝑑𝑡 = ∫_{−∞}^{+∞} {𝑒^{−𝑎𝑡}𝐟(𝑡)}𝑒^{−𝑗𝜔𝑡}𝑑𝑡 = 𝔽{𝑒^{−𝑎𝑡}𝐟(𝑡)}
which indicates that the bilateral Laplace transform of 𝐟(𝑡) can be interpreted as the
Fourier transform of 𝑒 −𝑎𝑡 𝐟(𝑡).
Symmetry Property
Remark: If 𝐅(𝜔) is the Fourier transform of 𝐟(𝑡), then the symmetry between the analysis
and synthesis equations of the Fourier transform states that
2𝜋𝐟(𝑡) = ∫_{−∞}^{+∞} 𝐅(𝜔)𝑒^{𝑗𝜔𝑡}𝑑𝜔 ⟹ 2𝜋𝐟(−𝑡) = ∫_{−∞}^{+∞} 𝐅(𝜔)𝑒^{−𝑗𝜔𝑡}𝑑𝜔
𝐟(−𝜔) = (1/2𝜋)∫_{−∞}^{+∞} 𝐅(𝑡)𝑒^{−𝑗𝜔𝑡}𝑑𝑡 ⟺ 𝐅(𝑡) ↔ 2𝜋𝐟(−𝜔)
⦁ Gate Function (Pulse): We define a gate function as a pulse of height 𝐴 and width 2ℎ,
centered at the origin,
∏(𝑡) = { 𝐴 when |𝑡| < ℎ ; 0 when |𝑡| > ℎ } ⟹ 𝐗(𝜔) = ∫_{−∞}^{+∞} 𝐱(𝑡)𝑒^{−𝑗𝜔𝑡}𝑑𝑡 = ∫_{−ℎ}^{+ℎ} 𝐴𝑒^{−𝑗𝜔𝑡}𝑑𝑡 = 2𝐴 sin(𝜔ℎ)/𝜔
𝔽{∏(𝑡)} = 2𝐴 sin(𝜔ℎ)/𝜔  and, by symmetry,  𝔽{2𝐴 sin(ℎ𝑡)/𝑡} = 2𝜋∏(𝜔)
⦁ Delta Function (Dirac): Consider the unit impulse function 𝛿(𝑡). The Laplace transform of
𝛿(𝑡) is 1, which directly produces the Fourier transform:
∫_{−∞}^{+∞} 𝛿(𝑡)𝑒^{−𝑠𝑡}𝑑𝑡 = 1 ⟹ 𝔽{𝛿(𝑡)} = ∫_{−∞}^{+∞} 𝛿(𝑡)𝑒^{−𝑗𝜔𝑡}𝑑𝑡 = 1 ⟺ 𝔽{𝛿(𝑡)} = 1
And from the symmetry between the analysis and synthesis equations of the Fourier
transform we have 𝔽{𝐟(𝑡) = 1} = 2𝜋𝛿(−𝜔) = 2𝜋𝛿(𝜔)
⦁ Signum Function (Sign): Consider the sign function sgn(𝑡) = 𝑢(𝑡) − 𝑢(−𝑡), whose derivative
can be written in terms of 𝛿(𝑡):
(𝑑/𝑑𝑡)sgn(𝑡) = (𝑑/𝑑𝑡)(𝑢(𝑡) − 𝑢(−𝑡)) = 2𝛿(𝑡) ⟹ 𝑗𝜔𝔽{sgn(𝑡)} = 𝔽{2𝛿(𝑡)} = 2 ⟹ 𝔽{sgn(𝑡)} = 2/𝑗𝜔
But what about the complex exponential function 𝐱(𝑡) = 𝑒^{𝑗𝜔0𝑡} with 𝜔0 > 0?
𝐗(𝜔) = ∫_{−∞}^{+∞} 𝑒^{𝑗𝜔0𝑡}𝑒^{−𝑗𝜔𝑡}𝑑𝑡 = ∫_{−∞}^{+∞} 𝑒^{−𝑗(𝜔−𝜔0)𝑡}𝑑𝑡 = 2𝜋𝛿(𝜔 − 𝜔0)
where the last step follows from the symmetry between the analysis and synthesis equations:
since 𝔽{𝛿(𝑡 − 𝑡0)} = ∫_{−∞}^{+∞} 𝛿(𝑡 − 𝑡0)𝑒^{−𝑗𝜔𝑡}𝑑𝑡 = 𝑒^{−𝑗𝜔𝑡0}, duality gives 𝔽{𝑒^{𝑗𝜔0𝑡}} = 2𝜋𝛿(𝜔 − 𝜔0).
⦁ Step function: Consider the unit step function 𝐱(𝑡) = 𝑢(𝑡); then the Fourier transform is
𝐗(𝜔) = ∫_{−∞}^{+∞} 𝑢(𝑡)𝑒^{−𝑗𝜔𝑡}𝑑𝑡 = ∫_{0}^{+∞} 𝑒^{−𝑗𝜔𝑡}𝑑𝑡 = [−𝑒^{−𝑗𝜔𝑡}/𝑗𝜔]_{0}^{+∞}
But this expression does not converge, so we use the following trick: sgn(𝑡) = 2𝑢(𝑡) − 1. If
we express the step function in terms of sgn(𝑡), the corresponding Fourier transform is
𝔽{𝑢(𝑡)} = 𝔽{1/2 + (1/2)sgn(𝑡)} = 𝔽{1/2} + (1/2)𝔽{sgn(𝑡)} = 𝜋𝛿(𝜔) + 1/𝑗𝜔
𝔽{𝑢(𝑡)} = 𝜋𝛿(𝜔) + 1/𝑗𝜔
Thus, the Fourier transform of 𝑢(𝑡) cannot be obtained from its Laplace transform. Note
that the unit step function 𝑢(𝑡) is not absolutely integrable.
⦁ Sine and Cosine functions: Consider the Fourier transforms of sin(𝜔0𝑡) and cos(𝜔0𝑡):
𝔽{sin(𝜔0𝑡)} = ∫_{−∞}^{+∞} sin(𝜔0𝑡)𝑒^{−𝑗𝜔𝑡}𝑑𝑡 = ∫_{−∞}^{+∞} ((𝑒^{𝑗𝜔0𝑡} − 𝑒^{−𝑗𝜔0𝑡})/2𝑗)𝑒^{−𝑗𝜔𝑡}𝑑𝑡
= (1/2𝑗)∫_{−∞}^{+∞} 𝑒^{𝑗(𝜔0−𝜔)𝑡}𝑑𝑡 − (1/2𝑗)∫_{−∞}^{+∞} 𝑒^{−𝑗(𝜔0+𝜔)𝑡}𝑑𝑡
= −𝑗𝜋[𝛿(𝜔 − 𝜔0) − 𝛿(𝜔 + 𝜔0)]
𝔽{cos(𝜔0𝑡)} = ∫_{−∞}^{+∞} cos(𝜔0𝑡)𝑒^{−𝑗𝜔𝑡}𝑑𝑡 = ∫_{−∞}^{+∞} ((𝑒^{𝑗𝜔0𝑡} + 𝑒^{−𝑗𝜔0𝑡})/2)𝑒^{−𝑗𝜔𝑡}𝑑𝑡
= 𝜋[𝛿(𝜔 − 𝜔0) + 𝛿(𝜔 + 𝜔0)]
For a real signal 𝐱(𝑡) it then follows that |𝐗(−𝜔)| = |𝐗(𝜔)| and 𝝓(𝜔) = −𝝓(−𝜔). Hence, as in
the case of periodic signals, the amplitude spectrum |𝐗(𝜔)| is an even function and the
phase spectrum 𝝓(𝜔) is an odd function of 𝜔.
Example: Consider the 𝑅𝐶 filter in which the output 𝐲(𝑡) and the input 𝐱(𝑡) are related by
𝑅𝐶𝐲̇(𝑡) + 𝐲(𝑡) = 𝐱(𝑡). Taking the Fourier transforms of both sides of this equation, the
frequency response 𝐇(𝜔) of the 𝑅𝐶 filter is given by
𝐇(𝜔) = 𝐘(𝜔)/𝐗(𝜔) = 1/(1 + 𝑗𝜔𝑅𝐶) = 1/(1 + 𝑗𝜔/𝜔0)
where 𝜔0 = 1/𝑅𝐶. Thus, the amplitude response |𝐇(𝜔)| and phase 𝜽𝐻(𝜔) are given by
|𝐇(𝜔)| = 1/|1 + 𝑗𝜔/𝜔0| = 1/[1 + (𝜔/𝜔0)²]^{1/2},  𝜽𝐻(𝜔) = −tan⁻¹(𝜔/𝜔0)
IV.V Fourier Transforms of Periodic Functions While the Fourier transform was
originally conceived as a way to deal with aperiodic signals, we can still use it to analyze
periodic ones! The method is twofold. First, we know that any periodic signal can be
decomposed into a Fourier series, which is a summation of complex exponentials.
Second, we know that each exponential has as its Fourier transform a delta function in the
frequency domain. By simply using superposition, we then expect the Fourier transform
of a periodic signal to consist of a sequence of delta functions in the frequency domain,
uniformly spaced by the fundamental frequency of the periodic signal in the time domain.
We know that 𝔽{𝑒 𝑗𝜔0 𝑡 } = 2𝜋𝛿(𝜔 − 𝜔0 ) and from the linearity of the FT we can write
𝑎𝑘 𝑒^{𝑗𝑘𝜔0𝑡} ↔ 2𝜋𝑎𝑘 𝛿(𝜔 − 𝑘𝜔0)
⟹ 𝐱(𝑡) = ∑_{𝑘=−∞}^{+∞} 𝑎𝑘 𝑒^{𝑗𝑘𝜔0𝑡} ↔ 𝐗(𝜔) = ∑_{𝑘=−∞}^{+∞} 2𝜋𝑎𝑘 𝛿(𝜔 − 𝑘𝜔0)
IV.VI Properties of the Continuous-Time Fourier Transform We now study some of the
basic and important properties of the Fourier transform and their implications as well as
applications (Many are presented with proofs, but a few are simply stated). Also many of
these properties are similar to those of the Laplace transform.
❶ Linearity: The Fourier Transform is linear. The Fourier Transform of a sum of functions
is the sum of the Fourier Transforms of the functions. Also, if you multiply a function by a
constant, the Fourier Transform is multiplied by the same constant.
❷ Time Scaling: The scaling property states that time compression of a signal results in its
spectral expansion, and time expansion of the signal results in its spectral compression.
𝐲(𝑡) = 𝐱(𝑡/𝑎) ⟹ 𝐘(𝜔) = ∫_{−∞}^{+∞} 𝐱(𝑡/𝑎)𝑒^{−𝑗𝜔𝑡}𝑑𝑡 = |𝑎|∫_{−∞}^{+∞} 𝐱(𝑢)𝑒^{−𝑗𝜔𝑎𝑢}𝑑𝑢 = |𝑎|𝐗(𝑎𝜔)
That is,
𝐱(𝑡) ↔ 𝐗(𝜔) ⟹ 𝐲(𝑡) = 𝐱(𝑎𝑡) ↔ 𝐘(𝜔) = (1/|𝑎|)𝐗(𝜔/𝑎)
This indicates that scaling the time variable 𝑡 by the factor 𝑎 causes an inverse scaling of
the frequency variable 𝜔 by 1/𝑎, as well as an amplitude scaling of 𝐗(𝜔/𝑎) by 1/|𝑎|. Thus,
the scaling property implies that time compression of a signal (𝑎 > 1) results in its spectral
expansion and that time expansion of the signal (𝑎 < 1) results in its spectral compression.
❸ Time Shift: If a function is delayed in time, this corresponds to multiplication by a
complex exponential in frequency domain. The complex exponential has a magnitude of 1,
so this is equivalent to a phase shift of −𝑎𝜔 radians.
𝐲(𝑡) = 𝐱(𝑡 − 𝑎) ⟹ 𝐘(𝜔) = ∫_{−∞}^{+∞} 𝐱(𝑡 − 𝑎)𝑒^{−𝑗𝜔𝑡}𝑑𝑡 = ∫_{−∞}^{+∞} 𝐱(𝑢)𝑒^{−𝑗𝜔(𝑢+𝑎)}𝑑𝑢 = 𝑒^{−𝑗𝜔𝑎}𝐗(𝜔)
This equation shows that the effect of a shift in the time domain is simply to add a linear
term −𝑎𝜔, to the original phase spectrum 𝝓(𝜔). This is known as a linear phase shift of the
Fourier transform 𝐗(𝜔).
❹ Time Reversal: The time reversal of 𝐱(𝑡) produces a like reversal of the frequency axis
for 𝐗(𝜔). Time reversal is readily obtained by setting 𝑎 = −1 in time scaling.
𝔽
𝐲(𝑡) = 𝐱(−𝑡) ↔ 𝐘(𝜔) = 𝐗(−𝜔)
❺ Frequency Shifting: Because 𝑒 𝑗𝜔0 𝑡 is not a real function that can be generated,
frequency shifting in practice is achieved by multiplying 𝐱(𝑡) by a sinusoid. This assertion
follows from the fact that
cos(𝜔0𝑡)𝐱(𝑡) = (1/2)(𝑒^{𝑗𝜔0𝑡}𝐱(𝑡) + 𝑒^{−𝑗𝜔0𝑡}𝐱(𝑡)) ⟺ cos(𝜔0𝑡)𝐱(𝑡) ↔ (1/2)(𝐗(𝜔 + 𝜔0) + 𝐗(𝜔 − 𝜔0))
sin(𝜔0𝑡)𝐱(𝑡) = (1/2𝑗)(𝑒^{𝑗𝜔0𝑡}𝐱(𝑡) − 𝑒^{−𝑗𝜔0𝑡}𝐱(𝑡)) ⟺ sin(𝜔0𝑡)𝐱(𝑡) ↔ (𝑗/2)(𝐗(𝜔 + 𝜔0) − 𝐗(𝜔 − 𝜔0))
𝐲(𝑡) = 𝑒^{𝑗𝜔0𝑡}𝐱(𝑡) ↔ 𝐘(𝜔) = ∫_{−∞}^{+∞} 𝐱(𝑡)𝑒^{𝑗𝜔0𝑡}𝑒^{−𝑗𝜔𝑡}𝑑𝑡 = ∫_{−∞}^{+∞} 𝐱(𝑡)𝑒^{−𝑗(𝜔−𝜔0)𝑡}𝑑𝑡 = 𝐗(𝜔 − 𝜔0)
❻ Duality (or Symmetry): From the analysis and synthesis equations of the Fourier
transform we can show an interesting fact: the direct and the inverse transform operations
are remarkably similar. These operations, required to go from 𝐱(𝑡) to 𝐗(𝜔) and then from
𝐗(𝜔) to 𝐱(𝑡),
𝐱(𝑡) = (1/2𝜋)∫_{−∞}^{+∞} 𝐗(𝜔)𝑒^{𝑗𝜔𝑡}𝑑𝜔 and 𝐗(𝜔) = ∫_{−∞}^{+∞} 𝐱(𝑡)𝑒^{−𝑗𝜔𝑡}𝑑𝑡 ⟺ 𝐱(−𝜔) = (1/2𝜋)∫_{−∞}^{+∞} 𝐗(𝑡)𝑒^{−𝑗𝜔𝑡}𝑑𝑡
𝐗(𝑡) ↔ 2𝜋𝐱(−𝜔)  or  (1/2𝜋)𝐗(−𝑡) ↔ 𝐱(𝜔)
❼ Even-Odd Properties: Consider only the case when 𝐱(𝑡) is real (the complex case is not
much more difficult, but does not pertain to the signals being considered). Represent 𝐱(𝑡) as
the sum of an even and an odd function, 𝐱(𝑡) = 𝐱𝑒(𝑡) + 𝐱𝑜(𝑡), and expand
𝑒^{−𝑗𝜔𝑡} = cos(𝜔𝑡) − 𝑗 sin(𝜔𝑡) in the analysis equation.
Recall that the product of two odd functions or two even functions is an even function, and
the product of an odd and an even function is odd. Recall, also, that the integral of an odd
function from −𝑎 to +𝑎 is zero. Since 𝐱𝑜(𝑡)cos(𝜔𝑡) and 𝐱𝑒(𝑡)sin(𝜔𝑡) are odd, their integrals
vanish, and the expansion simplifies to
𝐗(𝜔) = ∫_{−∞}^{+∞} (𝐱𝑒(𝑡)cos(𝜔𝑡) − 𝑗𝐱𝑜(𝑡)sin(𝜔𝑡))𝑑𝑡
= ∫_{−∞}^{+∞} 𝐱𝑒(𝑡)cos(𝜔𝑡)𝑑𝑡 − 𝑗∫_{−∞}^{+∞} 𝐱𝑜(𝑡)sin(𝜔𝑡)𝑑𝑡 = 𝐗𝑒(𝜔) + 𝑗𝐗𝑜(𝜔)
with 𝐗𝑒(𝜔) = ∫_{−∞}^{+∞} 𝐱𝑒(𝑡)cos(𝜔𝑡)𝑑𝑡 = 𝐗𝑒(−𝜔) and 𝐗𝑜(𝜔) = −∫_{−∞}^{+∞} 𝐱𝑜(𝑡)sin(𝜔𝑡)𝑑𝑡 = −𝐗𝑜(−𝜔)
❽ Differentiation in the Time Domain: the effect of differentiation in the time domain is
the multiplication of 𝐗(𝜔) by 𝑗𝜔 in the frequency domain (see Example 3):
𝐲(𝑡) = (𝑑/𝑑𝑡)𝐱(𝑡) ⟹ 𝑑𝐱(𝑡)/𝑑𝑡 = (1/2𝜋)∫_{−∞}^{+∞} 𝑗𝜔𝐗(𝜔)𝑒^{𝑗𝜔𝑡}𝑑𝜔 ⟹ 𝐘(𝜔) = 𝔽{𝑑𝐱(𝑡)/𝑑𝑡} = 𝑗𝜔𝐗(𝜔)
and more generally
𝔽{𝑑ⁿ𝐱(𝑡)/𝑑𝑡ⁿ} = (𝑗𝜔)ⁿ𝐗(𝜔)
❾ Differentiation in the Frequency Domain: differentiating the analysis equation with
respect to 𝜔 gives
𝑑ⁿ𝐗(𝜔)/𝑑𝜔ⁿ = (𝑑ⁿ/𝑑𝜔ⁿ)∫_{−∞}^{+∞} 𝐱(𝑡)𝑒^{−𝑗𝜔𝑡}𝑑𝑡 = ∫_{−∞}^{+∞} 𝐱(𝑡)(𝑑ⁿ𝑒^{−𝑗𝜔𝑡}/𝑑𝜔ⁿ)𝑑𝑡 = ∫_{−∞}^{+∞} 𝐱(𝑡)(−𝑗𝑡)ⁿ𝑒^{−𝑗𝜔𝑡}𝑑𝑡
(−𝑗𝑡)ⁿ𝐱(𝑡) ↔ 𝑑ⁿ𝐗(𝜔)/𝑑𝜔ⁿ
❿ Integration in the Time Domain: This property is the counterpart of the previous one,
but it is based on the Fourier transform of the step function
𝐲(𝑡) = ∫_{−∞}^{𝑡} 𝐱(𝜃)𝑑𝜃 = 𝐱(𝑡) ⋆ 𝒖(𝑡) ↔ 𝐘(𝜔) = 𝐗(𝜔)𝐔(𝜔) = 𝐗(𝜔)/𝑗𝜔 + 𝜋𝐗(0)𝛿(𝜔)
Proof:
𝐘(𝜔) = 𝔽{∫_{−∞}^{𝑡} 𝐱(𝜉)𝑑𝜉} = ∫_{−∞}^{∞} (∫_{−∞}^{𝑡} 𝐱(𝜉)𝑑𝜉)𝑒^{−𝑗𝜔𝑡}𝑑𝑡
Notice that the integration of 𝐱(𝑡) can be written in terms of convolution with the step function:
∫_{−∞}^{𝑡} 𝐱(𝜉)𝑑𝜉 = 𝐱(𝑡) ⋆ 𝒖(𝑡) = ∫_{−∞}^{+∞} 𝐱(𝜉)𝒖(𝑡 − 𝜉)𝑑𝜉
𝔽{∫_{−∞}^{𝑡} 𝐱(𝜉)𝑑𝜉} = ∫_{−∞}^{∞} (∫_{−∞}^{+∞} 𝐱(𝜉)𝒖(𝑡 − 𝜉)𝑑𝜉)𝑒^{−𝑗𝜔𝑡}𝑑𝑡
= ∫_{−∞}^{+∞} (∫_{−∞}^{+∞} 𝑒^{−𝑗𝜔𝑡}𝐱(𝜉)𝒖(𝑡 − 𝜉)𝑑𝑡)𝑑𝜉
= ∫_{−∞}^{+∞} 𝐱(𝜉)(∫_{−∞}^{+∞} 𝑒^{−𝑗𝜔(𝜉+𝜏)}𝒖(𝜏)𝑑𝜏)𝑑𝜉   (with 𝜏 = 𝑡 − 𝜉)
= (∫_{−∞}^{+∞} 𝐱(𝜉)𝑒^{−𝑗𝜔𝜉}𝑑𝜉)(∫_{−∞}^{+∞} 𝑒^{−𝑗𝜔𝜏}𝒖(𝜏)𝑑𝜏)
= 𝐗(𝜔)(1/𝑗𝜔 + 𝜋𝛿(𝜔)) = 𝐗(𝜔)/𝑗𝜔 + 𝜋𝐗(0)𝛿(𝜔)
⓫ Convolution: This property is also called time convolution theorem, and it states that
convolution in the time domain becomes multiplication in the frequency domain. As in the
case of the Laplace transform, this convolution property plays an important role in the
𝔽
study of continuous-time LTI systems 𝐲(𝑡) = 𝐱1 (𝑡) ⋆ 𝐱 2 (𝑡) ↔ 𝐘(𝜔) = 𝐗1 (𝜔)𝐗 2 (𝜔)
Proof:
𝐘(𝜔) = 𝔽{𝐱1(𝑡) ⋆ 𝐱2(𝑡)} = 𝔽{∫_{−∞}^{+∞} 𝐱1(𝜉)𝐱2(𝑡 − 𝜉)𝑑𝜉}
= ∫_{−∞}^{∞} (∫_{−∞}^{+∞} 𝐱1(𝜉)𝐱2(𝑡 − 𝜉)𝑑𝜉)𝑒^{−𝑗𝜔𝑡}𝑑𝑡
= ∫_{−∞}^{∞} 𝐱1(𝜉)(∫_{−∞}^{+∞} 𝑒^{−𝑗𝜔𝑡}𝐱2(𝑡 − 𝜉)𝑑𝑡)𝑑𝜉
= ∫_{−∞}^{∞} 𝐱1(𝜉)(∫_{−∞}^{+∞} 𝑒^{−𝑗𝜔(𝜉+𝜏)}𝐱2(𝜏)𝑑𝜏)𝑑𝜉   (with 𝜏 = 𝑡 − 𝜉)
= (∫_{−∞}^{∞} 𝐱1(𝜉)𝑒^{−𝑗𝜔𝜉}𝑑𝜉)(∫_{−∞}^{+∞} 𝐱2(𝜏)𝑒^{−𝑗𝜔𝜏}𝑑𝜏)
= 𝐗1(𝜔)𝐗2(𝜔)
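The convolution theorem is easy to verify numerically; the following MATLAB sketch
compares a direct time-domain convolution with multiplication of (zero-padded) DFT
spectra, for two arbitrarily chosen test signals:
% Numerical sanity check of the convolution theorem
dt = 1e-2; t = 0:dt:10;
x1 = exp(-t);  x2 = exp(-2*t);            % two test signals (illustrative choices)
y  = conv(x1, x2) * dt;                   % time-domain convolution (scaled Riemann sum)
N  = numel(y);                            % conv length = 2*numel(t) - 1
Y1 = fft(x1, N); Y2 = fft(x2, N);         % zero-padded DFTs
y2 = real(ifft(Y1 .* Y2)) * dt;           % multiply spectra, transform back to time
max(abs(y - y2))                          % agreement up to round-off error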
⓬ Multiplication (Frequency Convolution): The multiplication property is the dual
property of convolution and is often referred to as the frequency convolution theorem.
Thus, multiplication in the time domain becomes convolution in the frequency domain
𝐲(𝑡) = 𝐱1(𝑡)𝐱2(𝑡) ↔ 𝐘(𝜔) = (1/2𝜋)𝐗1(𝜔) ⋆ 𝐗2(𝜔)
Proof:
𝐘(𝜔) = 𝔽{𝐱1(𝑡)𝐱2(𝑡)} = ∫_{−∞}^{∞} 𝐱1(𝑡)𝐱2(𝑡)𝑒^{−𝑗𝜔𝑡}𝑑𝑡
= ∫_{−∞}^{∞} ((1/2𝜋)∫_{−∞}^{+∞} 𝐗1(𝜗)𝑒^{𝑗𝜗𝑡}𝑑𝜗)𝐱2(𝑡)𝑒^{−𝑗𝜔𝑡}𝑑𝑡
= (1/2𝜋)∫_{−∞}^{∞} 𝐗1(𝜗)(∫_{−∞}^{+∞} 𝐱2(𝑡)𝑒^{−𝑗(𝜔−𝜗)𝑡}𝑑𝑡)𝑑𝜗
= (1/2𝜋)∫_{−∞}^{∞} 𝐗1(𝜗)𝐗2(𝜔 − 𝜗)𝑑𝜗 = (1/2𝜋)𝐗1(𝜔) ⋆ 𝐗2(𝜔)
⓭ Parseval's Relations: If we denote the normalized energy content of 𝐱(𝑡) by 𝐸𝐱, then
Parseval's identity says that the energy content 𝐸𝐱 can be computed by integrating |𝐗(𝜔)|²
over all frequencies 𝜔. For this reason |𝐗(𝜔)|² is often referred to as the energy-density
spectrum of 𝐱(𝑡), and Parseval's theorem is also known as the energy theorem.
⦁ ∫_{−∞}^{∞} 𝐱1(𝑡)𝐱2(𝑡)𝑑𝑡 = (1/2𝜋)∫_{−∞}^{∞} 𝐗1(𝜗)𝐗2(−𝜗)𝑑𝜗
⦁ ∫_{−∞}^{∞} 𝐱1(𝜆)𝐗2(𝜆)𝑑𝜆 = ∫_{−∞}^{∞} 𝐗1(𝜆)𝐱2(𝜆)𝑑𝜆
⦁ ∫_{−∞}^{∞} |𝐱(𝑡)|²𝑑𝑡 = (1/2𝜋)∫_{−∞}^{∞} |𝐗(𝜔)|²𝑑𝜔
Since the multiplication property holds for all values of 𝜔, it must also hold for 𝜔 = 0,
where it reduces to the first relation:
∫_{−∞}^{∞} 𝐱1(𝑡)𝐱2(𝑡)𝑑𝑡 = (1/2𝜋)∫_{−∞}^{∞} 𝐗1(𝜗)𝐗2(−𝜗)𝑑𝜗
For the special case 𝐱2(𝑡) = 𝐱1⋆(𝑡), using the conjugation property 𝔽{𝐱⋆(𝑡)} = 𝐗⋆(−𝜔),
we obtain:
∫_{−∞}^{∞} 𝐱(𝑡)𝐱⋆(𝑡)𝑑𝑡 = (1/2𝜋)∫_{−∞}^{∞} 𝐗(𝜗)𝐗⋆(−(−𝜗))𝑑𝜗 = (1/2𝜋)∫_{−∞}^{∞} 𝐗(𝜔)𝐗⋆(𝜔)𝑑𝜔
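A short MATLAB sketch makes the energy theorem concrete, using the two-sided
exponential pair derived in Exercise 11 below (the truncation limits and grids are
illustrative assumptions):
% Numerical check of Parseval's relation for x(t) = e^{-|t|} (a = 1)
dt = 1e-3; t = -20:dt:20;
x  = exp(-abs(t));
Et = sum(x.^2) * dt;                          % time-domain energy, exactly 1
w  = linspace(-200, 200, 2e4);
X  = 2 ./ (1 + w.^2);                         % F{e^{-|t|}} = 2a/(a^2 + w^2) with a = 1
Ew = sum(abs(X).^2) * (w(2)-w(1)) / (2*pi);   % (1/2pi) * integral of |X(w)|^2
[Et, Ew]                                      % both values are close to 1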
In LTI systems we know that 𝐲(𝑡) = 𝐱(𝑡) ⋆ 𝐡(𝑡) means that 𝐘(𝜔) = 𝐗(𝜔)𝐇(𝜔) and the complex
exponential signal 𝑒 𝑗𝜔0 𝑡 is an eigenfunction of the LTI system with corresponding
eigenvalue 𝐇(𝜔0 ).
Thus, the behavior of a continuous-time LTI system in the frequency domain is completely
characterized by its frequency response 𝐇(𝜔). Let 𝐗(𝜔) = |𝐗(𝜔)|𝑒 𝑗𝜽𝑋 (𝜔) , 𝐘(𝜔) = |𝐘(𝜔)|𝑒 𝑗𝜽𝑌 (𝜔)
with |𝐘(𝜔)| = |𝐗(𝜔)||𝐇(𝜔)| and 𝜽𝑌(𝜔) = 𝜽𝑋(𝜔) + 𝜽𝐻(𝜔). Hence, the magnitude spectrum
|𝐗(𝜔)| of the input is multiplied by the magnitude response |𝐇(𝜔)| of the system to determine
the magnitude spectrum |𝐘(𝜔)| of the output, and the phase response 𝜽𝐻(𝜔) of the system is
added to the phase spectrum 𝜽𝑋(𝜔) of the input to produce the phase spectrum 𝜽𝑌(𝜔) of the
output. The magnitude response |𝐇(𝜔)| is sometimes referred to as the gain of the system.
For convenience, the Fourier transform properties and theorems are summarized in the
following table.
Property                      Signal                      Fourier transform
                              𝐱(𝑡)                        𝐗(𝜔)
                              𝐱1(𝑡)                       𝐗1(𝜔)
                              𝐱2(𝑡)                       𝐗2(𝜔)
Linearity                     𝛼𝐱1(𝑡) + 𝛽𝐱2(𝑡)            𝛼𝐗1(𝜔) + 𝛽𝐗2(𝜔)
Time shifting                 𝐱(𝑡 − 𝑡0)                   𝑒^{−𝑗𝜔𝑡0}𝐗(𝜔)
Frequency shifting            𝑒^{𝑗𝜔0𝑡}𝐱(𝑡)               𝐗(𝜔 − 𝜔0)
Time scaling                  𝐱(𝑎𝑡)                       (1/|𝑎|)𝐗(𝜔/𝑎)
Time reversal                 𝐱(−𝑡)                       𝐗(−𝜔)
Duality                       𝐗(𝑡)                        2𝜋𝐱(−𝜔)
Time differentiation          𝑑𝐱(𝑡)/𝑑𝑡                    𝑗𝜔𝐗(𝜔)
Frequency differentiation     −𝑗𝑡𝐱(𝑡)                     𝑑𝐗(𝜔)/𝑑𝜔
Integration                   ∫_{−∞}^{𝑡}𝐱(𝜏)𝑑𝜏            𝐗(𝜔)/𝑗𝜔 + 𝜋𝐗(0)𝛿(𝜔)
Convolution                   𝐱1(𝑡) ⋆ 𝐱2(𝑡)               𝐗1(𝜔)𝐗2(𝜔)
Multiplication                𝐱1(𝑡)𝐱2(𝑡)                  (1/2𝜋)𝐗1(𝜔) ⋆ 𝐗2(𝜔)
Parseval's theorem:
⦁ ∫_{−∞}^{∞} 𝐱1(𝑡)𝐱2(𝑡)𝑑𝑡 = (1/2𝜋)∫_{−∞}^{∞} 𝐗1(𝜗)𝐗2(−𝜗)𝑑𝜗
⦁ ∫_{−∞}^{∞} 𝐱1(𝜆)𝐗2(𝜆)𝑑𝜆 = ∫_{−∞}^{∞} 𝐗1(𝜆)𝐱2(𝜆)𝑑𝜆
⦁ ∫_{−∞}^{∞} |𝐱(𝑡)|²𝑑𝑡 = (1/2𝜋)∫_{−∞}^{∞} |𝐗(𝜔)|²𝑑𝜔
⓯ Area Under 𝐱(𝑡) and Area Under 𝐗(𝜔):
𝐱(0) = (1/2𝜋)∫_{−∞}^{+∞} 𝐗(𝜔)𝑑𝜔 (𝐀𝐫𝐞𝐚 𝐔𝐧𝐝𝐞𝐫 𝐗(𝜔), up to the factor 1/2𝜋)
𝐗(0) = ∫_{−∞}^{+∞} 𝐱(𝑡)𝑑𝑡 (𝐀𝐫𝐞𝐚 𝐔𝐧𝐝𝐞𝐫 𝐱(𝑡))
Solved Problems:
Exercise 1: Compute the Fourier Transform of 𝛾(𝑡) = ∫_{−∞}^{𝑡} 𝛿(𝜃)𝑑𝜃
𝐘(𝜔) = 𝔽{𝛾(𝑡)} = 𝔽{∫_{−∞}^{𝑡} 𝛿(𝜃)𝑑𝜃} = 𝜋𝐗(0)𝛿(𝜔) + 𝐗(𝜔)/𝑗𝜔 = 𝜋𝛿(𝜔) + 1/𝑗𝜔 (since 𝐗(𝜔) = 𝔽{𝛿(𝑡)} = 1)
Exercise 2: Compute the Fourier Transform of the signal ∏(𝑡) = 𝛾(𝑡 + 1/2) − 𝛾(𝑡 − 1/2), where
𝛾(𝑡) is the step function.
Exercise 3: Compute the Fourier Transform of the signal Λ(𝑡) = ∏(𝑡) ⋆ ∏(𝑡), where ∏(𝑡) is
the unit pulse gate function and Λ(𝑡) is the triangle function.
Solution: Let us define 𝐗(𝜔) = 𝔽{∏(𝑡)} and 𝐘(𝜔) = 𝔽{∏(𝑡) ⋆ ∏(𝑡)}; by the convolution
property we have
𝐘(𝜔) = 𝐗(𝜔)𝐗(𝜔) = 4 sin²(𝜔/2)/𝜔²
Exercise 4: Compute the Fourier Transform of the unit sawtooth signal
𝐱(𝑡) = { 𝑡 for 0 < 𝑡 < 1 ; 0 for 1 < 𝑡 } = ∫_{−∞}^{𝑡} ∏(𝜃 − 1/2)𝑑𝜃 − ∫_{−∞}^{𝑡} 𝛿(𝜃 − 1)𝑑𝜃
where ∏(𝑡) is the pulse gate function.
Solution: we know that 𝐗1(𝜔) = 𝔽{∏(𝑡 − 1/2)} = (2/𝜔)𝑒^{−𝑗𝜔/2}sin(𝜔/2) and 𝐗2(𝜔) = 𝔽{𝛿(𝑡 − 1)} = 𝑒^{−𝑗𝜔},
𝐗(𝜔) = (𝐗1(𝜔)/𝑗𝜔 + 𝜋𝐗1(0)𝛿(𝜔)) − (𝐗2(𝜔)/𝑗𝜔 + 𝜋𝐗2(0)𝛿(𝜔))
= (2/𝑗𝜔²)𝑒^{−𝑗𝜔/2}sin(𝜔/2) + 𝜋𝛿(𝜔) − 𝑒^{−𝑗𝜔}/𝑗𝜔 − 𝜋𝛿(𝜔)
= (𝑒^{−𝑗𝜔/2}/𝑗𝜔)(sin(𝜔/2)/(𝜔/2) − 𝑒^{−𝑗𝜔/2}) = (𝑒^{−𝑗𝜔} − 1 + 𝑗𝜔𝑒^{−𝑗𝜔})/𝜔²
Exercise 5: Compute the Fourier Transform of the unit sawtooth signal using the form
𝐱(𝑡) = { 𝑡 for 0 < 𝑡 < 1 ; 0 for 1 < 𝑡 } = 𝑡·∏(𝑡 − 1/2)
Solution: The sawtooth can also be created by a delayed pulse multiplied by time; using
the frequency-differentiation (time-multiplication) property, we get
𝐗(𝜔) = 𝑗(𝑑/𝑑𝜔)(𝔽{∏(𝑡 − 1/2)}) = 𝑗(𝑑/𝑑𝜔)(𝑒^{−𝑗𝜔/2} sin(𝜔/2)/(𝜔/2)) = (𝑒^{−𝑗𝜔} − 1 + 𝑗𝜔𝑒^{−𝑗𝜔})/𝜔²
Exercise 6: Compute the Fourier Transform of the unit sawtooth signal using the form
𝐱(𝑡) = { 𝑡 for 0 < 𝑡 < 1 ; 0 for 1 < 𝑡 } = Λ(𝑡 − 1)·∏(𝑡 − 1/2)
where ∏(𝑡) is the unit pulse gate function and Λ(𝑡) is the unit triangle function.
Solution: There are a number of other methods. A class of methods that might seem like
obvious choices involves multiplying expressions in time, i.e., 𝐱(𝑡) = Λ(𝑡 − 1)·∏(𝑡 − 1/2):
𝐱1(𝑡) = Λ(𝑡 − 1) ↔ 𝐗1(𝜔) = 4𝑒^{−𝑗𝜔} sin²(𝜔/2)/𝜔²
𝐱2(𝑡) = ∏(𝑡 − 1/2) ↔ 𝐗2(𝜔) = 𝑒^{−𝑗𝜔/2} sin(𝜔/2)/(𝜔/2)
𝐱(𝑡) = Λ(𝑡 − 1)·∏(𝑡 − 1/2) ↔ 𝐗(𝜔) = (1/2𝜋)𝐗1(𝜔) ⋆ 𝐗2(𝜔)
The resulting convolution (on the right of the arrow) is quite laborious, so this is not a good
method. In general multiplication of two functions in the time domain is not a useful
technique because of the resulting convolution in the frequency domain.
Exercise 7: Compute the Fourier Transform of the Windowed Sine Wave defined by
Solution: The windowed sine function is just the product of a sine wave and a rectangular
pulse 𝐱(𝑡) = sin(𝜔0 𝑡) . ∏(𝑡/𝑇𝑝 )
So the Fourier Transform is the convolution of the transforms of the sine and rectangular
pulse in the frequency domain (divided by 2𝜋). This is depicted below, followed by the
required math.
𝐗(𝜔) = (1/2𝜋)(𝑗𝜋[𝛿(𝜔 + 𝜔0) − 𝛿(𝜔 − 𝜔0)]) ⋆ (𝑇𝑝 sin(𝜔𝑇𝑝/2)/(𝜔𝑇𝑝/2))
Taking 𝜔0 = 2𝜋 and 𝑇𝑝 = 6 (the values implied by the result below):
= 𝑗([𝛿(𝜔 + 2𝜋) − 𝛿(𝜔 − 2𝜋)]) ⋆ (sin(3𝜔)/𝜔)
= 𝑗{sin(3(𝜔 + 2𝜋))/(𝜔 + 2𝜋) − sin(3(𝜔 − 2𝜋))/(𝜔 − 2𝜋)}
Exercise 8: Find the Fourier Transform of the Damped Cosine Wave defined by
Solution: To calculate the Fourier transform it is helpful to convert the cosine into complex
exponentials.
For 𝑡 > 0 we write 𝐱(𝑡) = 𝑒^{−𝑘𝑡}cos(𝛺𝑡) = (𝑒^{−𝑘𝑡}/2)(𝑒^{𝑗𝛺𝑡} + 𝑒^{−𝑗𝛺𝑡}) = (1/2)(𝑒^{𝑗(𝛺+𝑗𝑘)𝑡} + 𝑒^{−𝑗(𝛺−𝑗𝑘)𝑡})
while for 𝑡 < 0, 𝐱(𝑡) = 𝑒^{𝑘𝑡}cos(𝛺𝑡) = (𝑒^{𝑘𝑡}/2)(𝑒^{𝑗𝛺𝑡} + 𝑒^{−𝑗𝛺𝑡}) = (1/2)(𝑒^{𝑗(𝛺−𝑗𝑘)𝑡} + 𝑒^{−𝑗(𝛺+𝑗𝑘)𝑡})
Exercise 9: Find the Fourier Transform of the Truncated Cosine Function described by
cos(𝑡) |𝑡| < 𝜋⁄2
𝐱(𝑡) = { }
0 otherwise
Solution: To calculate the Fourier transform it is helpful to convert the cosine into complex
exponentials.
𝐗(𝜔) = 𝔽{𝐱(𝑡)} = ∫_{−𝜋/2}^{𝜋/2} cos(𝑡)𝑒^{−𝑗𝜔𝑡}𝑑𝑡 = (1/2)∫_{−𝜋/2}^{𝜋/2} (𝑒^{𝑗(1−𝜔)𝑡} + 𝑒^{−𝑗(1+𝜔)𝑡})𝑑𝑡
= (1/2)(2 sin((1 − 𝜔)𝜋/2)/(1 − 𝜔) + 2 sin((1 + 𝜔)𝜋/2)/(1 + 𝜔)) = cos(𝜔𝜋/2)(1/(1 − 𝜔) + 1/(1 + 𝜔))
= 2 cos(𝜔𝜋/2)/(1 − 𝜔²)
Exercise 10: Find the Fourier Transform of the function defined by
𝐱(𝑡) = { (𝐴/𝑇)𝑡 for |𝑡| < 𝑇 ; 0 for |𝑡| > 𝑇 }
𝐱(𝑡) = (𝐴/𝑇)𝑡·∏(𝑡/2𝑇) = (𝐴/𝑇)𝑡(𝑢(𝑡 + 𝑇) − 𝑢(𝑡 − 𝑇))
Solution: Let us calculate the Fourier transform using the second expression:
𝐗(𝜔) = 𝔽{𝐱(𝑡)} = ∫_{−𝑇}^{𝑇} (𝐴/𝑇)𝑡𝑒^{−𝑗𝜔𝑡}𝑑𝑡
Using integration by parts,
∫ 𝑥𝑒^{𝑎𝑥}𝑑𝑥 = (𝑒^{𝑎𝑥}/𝑎²)(𝑎𝑥 − 1)
we obtain
𝐗(𝜔) = ∫_{−𝑇}^{𝑇} (𝐴/𝑇)𝑡𝑒^{−𝑗𝜔𝑡}𝑑𝑡 = −(𝐴/𝑇)[(𝑒^{−𝑗𝜔𝑡}/(𝑗𝜔)²)(𝑗𝜔𝑡 + 1)]_{−𝑇}^{+𝑇} = (𝐴/𝑇𝜔²){𝑒^{−𝑗𝜔𝑇}(𝑗𝜔𝑇 + 1) − 𝑒^{𝑗𝜔𝑇}(1 − 𝑗𝜔𝑇)}
Alternatively, using the frequency-differentiation property,
𝐱(𝑡) = (𝐴/𝑇)𝑡·∏(𝑡/2𝑇) ⟺ 𝐗(𝜔) = 𝑗(𝑑/𝑑𝜔)(2𝐴 sin(𝜔𝑇)/(𝜔𝑇)) = (𝑗2𝐴/𝑇𝜔²)[(𝜔𝑇)cos(𝜔𝑇) − sin(𝜔𝑇)]
Exercise 11: Find the Fourier Transforms of the following functions:
❶ 𝐱(𝑡) = 𝑢(−𝑡)                        ❺ 𝐱(𝑡) = 1/(𝑡² + 𝑎²)
❷ 𝐱(𝑡) = 𝑒^{−𝑎𝑡}𝑢(𝑡), 𝑎 > 0          ❻ 𝐱(𝑡) = 𝑒^{−𝑎𝑡²}, 𝑎 > 0
❸ 𝐱(𝑡) = 𝑡𝑒^{−𝑎𝑡}𝑢(𝑡), 𝑎 > 0         ❼ 𝐱(𝑡) = 1/𝑡
❹ 𝐱(𝑡) = 𝑒^{−𝑎|𝑡|}, 𝑎 > 0            ❽ 𝐱(𝑡) = 1/𝑡²
Solution:
❶ We know that 𝐱(𝑡) ↔ 𝐗(𝜔) ⟺ 𝐱(−𝑡) ↔ 𝐗(−𝜔), so we have
𝔽{𝑢(𝑡)} = 𝜋𝛿(𝜔) + 1/𝑗𝜔 ⟹ 𝔽{𝑢(−𝑡)} = 𝜋𝛿(−𝜔) − 1/𝑗𝜔 ⟹ 𝐗(𝜔) = 𝜋𝛿(𝜔) − 1/𝑗𝜔
❷
𝔽{𝐱(𝑡) = 𝑒^{−𝑎𝑡}𝑢(𝑡)} = ∫_{−∞}^{+∞} 𝐱(𝑡)𝑒^{−𝑗𝜔𝑡}𝑑𝑡 = ∫_{−∞}^{+∞} 𝑒^{−𝑎𝑡}𝑢(𝑡)𝑒^{−𝑗𝜔𝑡}𝑑𝑡 = ∫_{0}^{+∞} 𝑒^{−(𝑎+𝑗𝜔)𝑡}𝑑𝑡 = 1/(𝑎 + 𝑗𝜔)
❸ We know that 𝐱(𝑡) = 𝑡𝐱1(𝑡) ↔ 𝐗(𝜔) = 𝑗(𝑑/𝑑𝜔)𝐗1(𝜔), so we let 𝐱1(𝑡) = 𝑒^{−𝑎𝑡}𝑢(𝑡) ⟹ 𝐗1(𝜔) = 1/(𝑎 + 𝑗𝜔)
(𝑑/𝑑𝜔)𝐗1(𝜔) = (𝑑/𝑑𝜔)(1/(𝑎 + 𝑗𝜔)) = −𝑗/(𝑎 + 𝑗𝜔)² ⟺ 𝐗(𝜔) = 𝔽{𝑡𝑒^{−𝑎𝑡}𝑢(𝑡)} = 𝑗(−𝑗/(𝑎 + 𝑗𝜔)²) = 1/(𝑎 + 𝑗𝜔)²
❹ For the fourth case we use directly the definition of the Fourier Transform:
𝔽{𝑒^{−𝑎|𝑡|}} = ∫_{−∞}^{+∞} 𝑒^{−𝑎|𝑡|}𝑒^{−𝑗𝜔𝑡}𝑑𝑡 = ∫_{−∞}^{0} 𝑒^{𝑎𝑡}𝑒^{−𝑗𝜔𝑡}𝑑𝑡 + ∫_{0}^{+∞} 𝑒^{−𝑎𝑡}𝑒^{−𝑗𝜔𝑡}𝑑𝑡
= ∫_{−∞}^{0} 𝑒^{−(𝑗𝜔−𝑎)𝑡}𝑑𝑡 + ∫_{0}^{+∞} 𝑒^{−(𝑗𝜔+𝑎)𝑡}𝑑𝑡 = 1/(𝑎 − 𝑗𝜔) + 1/(𝑎 + 𝑗𝜔) = 2𝑎/(𝑎² + 𝜔²)
❺ For the fifth case we use duality:
𝐱1(𝑡) = 𝑒^{−𝑎|𝑡|} ↔ 𝐗1(𝜔) = 2𝑎/(𝑎² + 𝜔²) ⟺ 𝐗1(𝑡) = 2𝑎/(𝑎² + 𝑡²) ↔ 2𝜋𝐱1(𝜔) = 2𝜋𝑒^{−𝑎|𝜔|}
(1/2𝑎)𝐗1(𝑡) = 1/(𝑎² + 𝑡²) ↔ (𝜋/𝑎)𝑒^{−𝑎|𝜔|}
Now we may write
𝔽{𝑒^{−𝑎|𝑡|}} = 2𝑎/(𝑎² + 𝜔²)  and  𝔽{1/(𝑎² + 𝑡²)} = (𝜋/𝑎)𝑒^{−𝑎|𝜔|}
❻ It is very difficult to evaluate the Fourier transform of 𝑒^{−𝑎𝑡²} directly, so we use a
different strategy to avoid this obstacle:
𝐗(𝜔) = 𝔽{𝑒^{−𝑎𝑡²}} = ∫_{−∞}^{+∞} 𝑒^{−𝑎𝑡²}𝑒^{−𝑗𝜔𝑡}𝑑𝑡 ⟹ (𝑑/𝑑𝜔)𝐗(𝜔) = −𝑗∫_{−∞}^{+∞} 𝑡𝑒^{−𝑎𝑡²}𝑒^{−𝑗𝜔𝑡}𝑑𝑡 = (𝑗/2𝑎)∫_{−∞}^{+∞} (−2𝑎𝑡)𝑒^{−𝑎𝑡²}𝑒^{−𝑗𝜔𝑡}𝑑𝑡
We use integration by parts:
(𝑑/𝑑𝜔)𝐗(𝜔) = (𝑗/2𝑎){[𝑒^{−𝑎𝑡²}𝑒^{−𝑗𝜔𝑡}]_{−∞}^{+∞} + 𝑗𝜔∫_{−∞}^{+∞} 𝑒^{−𝑎𝑡²}𝑒^{−𝑗𝜔𝑡}𝑑𝑡} = −(𝜔/2𝑎)𝐗(𝜔)
Solving the differential equation (𝑑/𝑑𝜔)𝐗(𝜔) = −(𝜔/2𝑎)𝐗(𝜔) gives 𝐗(𝜔) = 𝐴𝑒^{−𝜔²/4𝑎}. To
determine the constant 𝐴 we use the initial condition 𝐴 = 𝐗(0) = ∫_{−∞}^{+∞} 𝑒^{−𝑎𝑡²}𝑑𝑡 = √(𝜋/𝑎). Hence
𝐗(𝜔) = 𝔽{𝑒^{−𝑎𝑡²}} = √(𝜋/𝑎)·𝑒^{−𝜔²/4𝑎}
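The Gaussian pair can be checked numerically with a short MATLAB sketch (the value of
a, the truncation of the time axis, and the frequency grid are illustrative assumptions):
% Numerical check that F{e^{-a t^2}} = sqrt(pi/a) e^{-w^2/(4a)}
a = 2; dt = 1e-3; t = -10:dt:10;       % the Gaussian is negligible beyond |t| = 10
w = linspace(-10, 10, 401);
X = arrayfun(@(wk) sum(exp(-a*t.^2).*exp(-1j*wk*t))*dt, w);  % Riemann sum
Xexact = sqrt(pi/a)*exp(-w.^2/(4*a));
max(abs(X - Xexact))                   % small; a Gaussian transforms into a Gaussian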
❼ By using the duality principle on 𝑗𝔽{sgn(𝑡)} = 2/𝜔 we obtain 𝔽{2/𝑡} = −𝑗2𝜋 sgn(𝜔), that is,
𝔽{1/𝑡} = (𝜋/𝑗)sgn(𝜔)
❽ 𝐱(𝑡) = 1/𝑡² = (1/𝑡)(1/𝑡) ↔ 𝐗(𝜔) = (1/2𝜋)((𝜋/𝑗)sgn(𝜔)) ⋆ ((𝜋/𝑗)sgn(𝜔)) = −(𝜋/2)(sgn(𝜔)) ⋆ (sgn(𝜔))
Notice that this convolution does not converge, because 𝐱(𝑡) is not absolutely integrable.
Exercise 12: Find the Fourier transform of the following graphical representations
Solution:
❶ 𝐟(𝑡) = { −𝐴 for −𝑇 < 𝑡 ≤ 0 ; 𝐴 for 0 < 𝑡 ≤ 𝑇 ; 0 for |𝑡| > 𝑇 }
𝐅(𝜔) = 𝔽{𝐟(𝑡)} = ∫_{−𝑇}^{0} (−𝐴)𝑒^{−𝑗𝜔𝑡}𝑑𝑡 + ∫_{0}^{𝑇} 𝐴𝑒^{−𝑗𝜔𝑡}𝑑𝑡 = (4𝐴/𝑗𝜔)sin²(𝜔𝑇/2)
❷ Now we find the Fourier transform of the triangular function 𝚲(𝑡/𝑇); notice that the
previous function 𝐟(𝑡) is the derivative of a scaled triangular function:
𝐟(𝑡) = −𝐴𝑇·(𝑑/𝑑𝑡)𝚲(𝑡/𝑇)
Assume that 𝐗(𝜔) is the Fourier transform of 𝚲(𝑡/𝑇); then
−𝐴𝑇·𝑗𝜔𝐗(𝜔) = 𝐅(𝜔) ⟹ 𝐗(𝜔) = (𝑗/𝐴𝑇𝜔)((4𝐴/𝑗𝜔)sin²(𝜔𝑇/2)) = 𝑇(sin(𝜔𝑇/2)/(𝜔𝑇/2))²
❸ For the ramp of amplitude 𝐵 (compare Exercise 10):
𝐗(𝜔) = (𝑗2𝐵/𝑇𝜔²)[(𝜔𝑇)cos(𝜔𝑇) − sin(𝜔𝑇)]
❹ The student is asked to check that
𝐗(𝜔) = (𝐴/𝑇1)[(𝑒^{−𝑗𝜔𝑇1} − 1)/𝜔²] − 𝐴𝑒^{−𝑗𝜔𝑇2}/𝑗𝜔
Exercise 13: Find the Fourier transform of the following graphical representations
𝐗(𝜔) = 𝑷2(𝜔) + 𝑷4(𝜔) = 2 sin(𝜔)/𝜔 + 4 sin(2𝜔)/2𝜔 = (2/𝜔)(sin(𝜔) + sin(2𝜔))
Exercise 14: ❶ Use the Fourier transform to find the transfer function 𝐇(𝜔) of an LTI
system described by its differential equation:
(−𝜔² + 𝑎1𝑗𝜔 + 𝑎2)𝐘(𝜔) = (−𝑏0𝜔² + 𝑏1𝑗𝜔 + 𝑏2)𝐗(𝜔) ⟹ 𝐇(𝜔) = 𝐘(𝜔)/𝐗(𝜔) = (−𝑏0𝜔² + 𝑏1𝑗𝜔 + 𝑏2)/(−𝜔² + 𝑎1𝑗𝜔 + 𝑎2)
❷ 𝐇(𝜔) = 𝐘(𝜔)/𝐗(𝜔) = (𝑗𝜔 + 2)/(−𝜔² + 4𝑗𝜔 + 3)
We have −𝜔² + 4𝑗𝜔 + 3 = (𝑗𝜔 + 1)(𝑗𝜔 + 3), so taking a partial fraction expansion we obtain
𝐇(𝜔) = (𝑗𝜔 + 2)/((𝑗𝜔 + 1)(𝑗𝜔 + 3)) = (1/2)/(𝑗𝜔 + 1) + (1/2)/(𝑗𝜔 + 3) ⟹ ℎ(𝑡) = ((1/2)𝑒^{−𝑡} + (1/2)𝑒^{−3𝑡})𝑢(𝑡)
∫_{0}^{∞} |ℎ(𝑡)|𝑑𝑡 = ∫_{0}^{∞} |(1/2)𝑒^{−𝑡} + (1/2)𝑒^{−3𝑡}|𝑑𝑡 = 1/2 + 1/6 = 2/3 < ∞
so the system is stable.
Exercise 15: An ideal bandpass filter (BPF) is specified by its transfer function 𝐇(𝜔)
defined by
𝐇(𝜔) = (1/2)∏(𝜔 − 𝜔0) + (1/2)∏(𝜔 + 𝜔0)
with
⦁ ∏(𝜔 − 𝜔0) = { 1 for 𝜔1 < 𝜔 < 𝜔2 ; 0 otherwise }
∏(𝜔 − 𝜔0) = 𝔽(𝑒^{𝑗𝜔0𝑡} sin(𝑎𝑡)/𝜋𝑡)  and  ∏(𝜔 + 𝜔0) = 𝔽(𝑒^{−𝑗𝜔0𝑡} sin(𝑎𝑡)/𝜋𝑡)
Exercise 16: ❶ If the DE is 𝑦̇(𝑡) + 2𝑦(𝑡) = 𝑥(𝑡) + 𝑥̇(𝑡), use the Fourier transform to find the
impulse response ℎ(𝑡). ❷ If the DE is 𝑦̇(𝑡) + 2𝑦(𝑡) = 𝑥(𝑡), find the output (response) of the
system when
⦁ 𝑥(𝑡) = 𝑒^{−𝑡}𝑢(𝑡)
⦁ 𝑥(𝑡) = 𝑢(𝑡)
Solution: ❶ we have
𝑦̇(𝑡) + 2𝑦(𝑡) = 𝑥(𝑡) + 𝑥̇(𝑡) ⟺ (𝑗𝜔 + 2)𝐘(𝜔) = (𝑗𝜔 + 1)𝐗(𝜔) ⟹ 𝐇(𝜔) = 𝐘(𝜔)/𝐗(𝜔) = (𝑗𝜔 + 1)/(𝑗𝜔 + 2)
𝐇(𝜔) = 1 − 1/(𝑗𝜔 + 2) ⟹ ℎ(𝑡) = 𝛿(𝑡) − 𝑒^{−2𝑡}𝑢(𝑡)
❷ we have
𝐇(𝜔) = 𝐘(𝜔)/𝐗(𝜔) = 1/(𝑗𝜔 + 2) ⟹ 𝐘(𝜔) = 𝐗(𝜔)/(𝑗𝜔 + 2)
⦁ 𝑥(𝑡) = 𝑒^{−𝑡}𝑢(𝑡) ⟹ 𝐘(𝜔) = 1/((𝑗𝜔 + 1)(𝑗𝜔 + 2)) = 1/(𝑗𝜔 + 1) − 1/(𝑗𝜔 + 2) ⟹ 𝑦(𝑡) = (𝑒^{−𝑡} − 𝑒^{−2𝑡})𝑢(𝑡)
⦁ 𝑥(𝑡) = 𝑢(𝑡) ⟹ 𝐘(𝜔) = (𝜋𝛿(𝜔) + 1/𝑗𝜔)(1/(𝑗𝜔 + 2)) = 𝜋𝛿(𝜔)/(𝑗𝜔 + 2) + 1/(𝑗𝜔(𝑗𝜔 + 2))
⟹ 𝐘(𝜔) = (𝜋/2)𝛿(𝜔) + (1/2)(1/𝑗𝜔 − 1/(𝑗𝜔 + 2)) = (1/2)(𝜋𝛿(𝜔) + 1/𝑗𝜔) − (1/2)/(𝑗𝜔 + 2) ⟹ 𝑦(𝑡) = (1/2)(1 − 𝑒^{−2𝑡})𝑢(𝑡)
Exercise 17: Consider the LTI system with frequency response
𝐇(𝜔) = { −𝑗 for 0 < 𝜔 ; 𝑗 for 𝜔 < 0 } = −𝑗·sgn(𝜔)
❶ Determine the impulse response ℎ(𝑡) of this system.
Since sgn(𝑡) ↔ 2/𝑗𝜔, by duality we obtain ℎ(𝑡) = 1/𝜋𝑡 ↔ 𝐇(𝜔) = −𝑗·sgn(𝜔), and hence
𝑦(𝑡) = 𝑥(𝑡) ⋆ ℎ(𝑡) = 𝑥(𝑡) ⋆ (1/𝜋𝑡) = (1/𝜋)∫_{−∞}^{+∞} (𝑥(𝜏)/(𝑡 − 𝜏))𝑑𝜏
This impulse response ℎ(𝑡) defines a specific linear operator called the Hilbert transform,
which takes a function 𝑥(𝑡) of a real variable and produces another function 𝑦(𝑡) of a real
variable. The Hilbert transform was first introduced by David Hilbert in this setting, to solve
a special case of the Riemann–Hilbert problem for analytic functions.
Now let 𝑥(𝑡) = cos(𝜔0𝑡); then
𝐘(𝜔) = 𝐗(𝜔)𝐇(𝜔) = 𝑗𝜋[𝛿(𝜔 + 𝜔0) − 𝛿(𝜔 − 𝜔0)] ⟺ 𝑦(𝑡) = (1/𝜋)∫_{−∞}^{+∞} (cos(𝜔0𝜏)/(𝑡 − 𝜏))𝑑𝜏 = sin(𝜔0𝑡)
Exercise 18: Consider a causal continuous LTI system described by its transfer function.
Solution: observe that we can decompose the impulse response into even and odd parts:
ℎ(𝑡) + ℎ(−𝑡) = 2ℎ𝑒(𝑡) and ℎ(𝑡) − ℎ(−𝑡) = 2ℎ𝑜(𝑡)
but ℎ(−𝑡) = 0 for 𝑡 > 0 because the system is causal, which means that for 𝑡 > 0
ℎ(𝑡) = 2ℎ𝑒(𝑡) = 2ℎ𝑜(𝑡)
From the even/odd property we obtain 𝐇(𝜔) = 𝐇𝑒(𝜔) + 𝑗𝐇𝑜(𝜔), that is, 𝐇𝑒(𝜔) = 𝐀(𝜔)
and 𝐇𝑜(𝜔) = 𝐁(𝜔), so
ℎ𝑒(𝑡) = 𝔽⁻¹(𝐀(𝜔)) and ℎ𝑜(𝑡) = 𝔽⁻¹(𝐁(𝜔))
ℎ(𝑡) = 2𝔽⁻¹(𝐀(𝜔)) = 2𝔽⁻¹(𝐁(𝜔)) for 𝑡 > 0
Exercise 19: Consider a causal continuous LTI system with a frequency response.
Solution: In the previous exercise we have seen that for a causal LTI system ℎ(𝑡) = 2ℎ𝑒(𝑡) = 2ℎ𝑜(𝑡) for 𝑡 > 0, that is,
{ ℎ𝑒(𝑡) = −ℎ𝑜(𝑡) for 𝑡 < 0 ; ℎ𝑒(𝑡) = ℎ𝑜(𝑡) for 𝑡 > 0 } ⟹ { ℎ𝑒(𝑡) = sgn(𝑡)ℎ𝑜(𝑡) ; ℎ𝑜(𝑡) = sgn(𝑡)ℎ𝑒(𝑡) }
Using the facts that 𝐀(𝜔) = 𝔽(ℎ𝑒(𝑡)) and 𝐁(𝜔) = 𝔽(ℎ𝑜(𝑡)), we obtain
𝐀(𝜔) = 𝔽(sgn(𝑡)ℎ𝑜(𝑡)) = (1/2𝜋)𝔽(ℎ𝑜(𝑡)) ⋆ 𝔽(sgn(𝑡)) = (1/2𝜋)(𝑗𝐁(𝜔) ⋆ 2/𝑗𝜔) = (1/𝜋)∫_{−∞}^{+∞} (𝐁(𝜆)/(𝜔 − 𝜆))𝑑𝜆
𝑗𝐁(𝜔) = 𝔽(sgn(𝑡)ℎ𝑒(𝑡)) = (1/2𝜋)𝔽(ℎ𝑒(𝑡)) ⋆ 𝔽(sgn(𝑡)) = (1/2𝜋)(𝐀(𝜔) ⋆ 2/𝑗𝜔) = (1/𝑗𝜋)∫_{−∞}^{+∞} (𝐀(𝜆)/(𝜔 − 𝜆))𝑑𝜆
Finally we deduce that, for any causal continuous LTI system, 𝐀(𝜔) is the Hilbert transform
of 𝐁(𝜔) and 𝐁(𝜔) is the Hilbert transform of −𝐀(𝜔).
Exercise 20: Consider a signal whose Fourier transform is
𝐗(𝜔) = { 1 for |𝜔| < 1 ; 0 for |𝜔| > 1 }
If this signal is an excitation of an LTI system defined by 𝑦(𝑡) = 𝑥̈(𝑡), then find the output
energy defined by
∫_{−∞}^{+∞} |𝑦(𝑡)|²𝑑𝑡
Solution: In the frequency domain we have 𝐘(𝜔) = −𝜔²𝐗(𝜔), and from Parseval's theorem
∫_{−∞}^{+∞} |𝑦(𝑡)|²𝑑𝑡 = (1/2𝜋)∫_{−∞}^{+∞} |𝐘(𝜔)|²𝑑𝜔 = (1/2𝜋)∫_{−∞}^{+∞} |−𝜔²𝐗(𝜔)|²𝑑𝜔 = (1/2𝜋)∫_{−1}^{+1} 𝜔⁴𝑑𝜔 = 1/5𝜋
Exercise 21: Find the Fourier transform for the signal 𝐱(𝑡) = 𝑒 2𝑡 𝑢(−𝑡)
❹ 𝐗(𝜔) = ∏𝑎(𝜔 + 3/2) + ∏𝑎(𝜔 − 3/2) with ∏𝑎(𝜔 − 𝜔0) = { 1 for |𝜔 − 𝜔0| < 𝑎 ; 0 for |𝜔 − 𝜔0| > 𝑎 } and 𝑎 = 1
For the signal 𝐗(𝜔) = 𝜋𝑒^{−|𝜔|} we have
𝐱(𝑡) = (1/2𝜋)∫_{−∞}^{+∞} 𝐗(𝜔)𝑒^{𝑗𝜔𝑡}𝑑𝜔 = (1/2𝜋)∫_{−∞}^{0} 𝜋𝑒^{𝜔}𝑒^{𝑗𝜔𝑡}𝑑𝜔 + (1/2𝜋)∫_{0}^{∞} 𝜋𝑒^{−𝜔}𝑒^{𝑗𝜔𝑡}𝑑𝜔
= (1/2)∫_{−∞}^{0} 𝑒^{(𝑗𝑡+1)𝜔}𝑑𝜔 + (1/2)∫_{0}^{∞} 𝑒^{(𝑗𝑡−1)𝜔}𝑑𝜔 = (1/2)/(𝑗𝑡 + 1) − (1/2)/(𝑗𝑡 − 1) = 1/(1 + 𝑡²)
so that 𝔽(1/(1 + 𝑡²)) = 𝜋𝑒^{−|𝜔|}
❹ In the fourth signal we have
𝔽⁻¹(∏𝑎(𝜔 + 3/2) + ∏𝑎(𝜔 − 3/2)) = (sin(𝑡)/𝜋𝑡)(𝑒^{−𝑗3𝑡/2} + 𝑒^{𝑗3𝑡/2}) = 2cos(3𝑡/2) sin(𝑡)/𝜋𝑡
Exercise 24: Find the Fourier transform of the signal
𝐱(𝑡) = sin(𝑡/2)sin(𝑡)/(𝜋𝑡²)
Solution:
𝐱(𝑡) = sin(𝑡/2)sin(𝑡)/(𝜋𝑡²) = 𝜋(sin(𝑡)/𝜋𝑡)(sin(𝑡/2)/𝜋𝑡) = 2𝜋((1/2)𝐱1(𝑡)𝐱2(𝑡)) with 𝐱1(𝑡) = sin(𝑡)/𝜋𝑡 and 𝐱2(𝑡) = sin(𝑡/2)/𝜋𝑡
By the multiplication property, and since 𝐗1(𝜔) = ∏1(𝜔) and 𝐗2(𝜔) = ∏1/2(𝜔), the
convolution of the two rectangles gives a trapezoid:
𝐗(𝜔) = (1/2)𝐗1(𝜔) ⋆ 𝐗2(𝜔) = { (1/2)(𝜔 + 3/2) for −3/2 ≤ 𝜔 < −1/2
                                1/2 for |𝜔| ≤ 1/2
                                (1/2)(3/2 − 𝜔) for 1/2 < 𝜔 ≤ 3/2
                                0 elsewhere }
% Numerically convolve two rectangular pulses and plot the resulting trapezoid
clear all, clc,
Ts=0.01; t1=-0.5:Ts:0.5; t2=-1:Ts:1;
x1=ones(1,length(t1));            % pulse of width 1
x2=ones(1,length(t2));            % pulse of width 2
n1=length(x1); n2=length(x2);
X1=[x1,zeros(1,n2)]; X2=[x2,zeros(1,n1)];
X=zeros(1,n1+n2-1);               % preallocate the convolution result
for i=1:n1+n2-1                   % direct evaluation of the convolution sum
    for j=1:n1
        if i-j+1>0
            X(i)=X(i)+X1(j)*X2(i-j+1);
        end
    end
end
t=-1.5:Ts:1.5;
plot(t,0.5*Ts*X,'r','linewidth',3)  % scale by Ts (Riemann sum) and the factor 1/2
grid on
The preceding MATLAB code evaluates this Fourier transform (the frequency-domain
convolution) numerically.
Exercise 25: Find the inverse Fourier transform of each of the following spectra and state
whether the resulting signal is real or complex, even or odd:
❶ 𝐗(𝜔) = (𝜋/2){∏𝑎(𝜔 + 1/2) − ∏𝑎(𝜔 − 1/2)} with ∏𝑎(𝜔 − 𝜔0) = { 1 for |𝜔 − 𝜔0| < 𝑎 ; 0 for |𝜔 − 𝜔0| > 𝑎 } and 𝑎 = 1
❷ 𝐗(𝜔) = 𝜔^{−2} + 𝑗𝜔^{−3}    ❸ 𝐗(𝜔) = ∑_{𝑘=−∞}^{+∞} (1/2)^{|𝑘|} 𝛿(𝜔 − 𝑘𝜋/4)
Solution:
❶ 𝐱(𝑡) = 𝔽⁻¹((𝜋/2){∏𝑎(𝜔 + 1/2) − ∏𝑎(𝜔 − 1/2)}) = (𝜋/2)(𝑒^{−𝑗𝑡/2} − 𝑒^{𝑗𝑡/2})(sin(𝑡)/𝜋𝑡) = sin(𝑡)sin(𝑡/2)/𝑗𝑡 ⟹ complex, odd
❷ (𝑗𝜔)³𝐗(𝜔) = 1 − 𝑗𝜔 ⟺ 𝔽⁻¹((𝑗𝜔)³𝐗(𝜔)) = 𝛿(𝑡) − 𝑑𝛿(𝑡)/𝑑𝑡, i.e., 𝑑³𝐱(𝑡)/𝑑𝑡³ = 𝛿(𝑡) − 𝑑𝛿(𝑡)/𝑑𝑡
Since Re{𝐗(𝜔)} is even and Im{𝐗(𝜔)} is odd, 𝐗(−𝜔) = 𝐗⋆(𝜔) ⟹ real, neither even nor odd
❸ 𝐗(𝜔) = ∑_{𝑘=−∞}^{+∞} (1/2)^{|𝑘|} 𝛿(𝜔 − 𝑘𝜋/4) ⟹ 𝐱(𝑡) = (1/2𝜋)∑_{𝑘=−∞}^{+∞} (1/2)^{|𝑘|} 𝑒^{𝑗𝑘𝜋𝑡/4} ⟹ real, even
Exercise 26: For the continuous-time LTI system below, let 𝛼, 𝛽 > 0:
𝐱(𝑡) = 𝑒^{−𝛼𝑡}𝑢(𝑡) ⟶ [ LTI system: 𝐡(𝑡) = 𝑒^{−𝛽𝑡}𝑢(𝑡) ] ⟶ 𝐲(𝑡) = 𝐡(𝑡) ⋆ 𝐱(𝑡)
Find the output of this system (take into consideration all cases for 𝛼 and 𝛽).
Solution:
𝐘(𝜔) = 𝐗(𝜔)𝐇(𝜔) = (1/(𝛼 + 𝑗𝜔))(1/(𝛽 + 𝑗𝜔)) = 1/(𝛼𝛽 − 𝜔² + 𝑗(𝛼 + 𝛽)𝜔)
⦁ 𝛼 ≠ 𝛽:
𝐘(𝜔) = 𝑘1/(𝛼 + 𝑗𝜔) + 𝑘2/(𝛽 + 𝑗𝜔) with 𝑘1 = lim_{𝑗𝜔→−𝛼}(𝛼 + 𝑗𝜔)𝐘(𝜔) and 𝑘2 = lim_{𝑗𝜔→−𝛽}(𝛽 + 𝑗𝜔)𝐘(𝜔)
⟹ 𝐘(𝜔) = (1/(𝛽 − 𝛼)){1/(𝛼 + 𝑗𝜔) − 1/(𝛽 + 𝑗𝜔)} ⟹ 𝐲(𝑡) = (1/(𝛽 − 𝛼))(𝑒^{−𝛼𝑡} − 𝑒^{−𝛽𝑡})𝑢(𝑡)
In summary:
𝐲(𝑡) = 0 for 𝑡 ≤ 0; 𝐲(𝑡) = (1/(𝛽 − 𝛼))(𝑒^{−𝛼𝑡} − 𝑒^{−𝛽𝑡})𝑢(𝑡) for 𝛽 ≠ 𝛼; 𝐲(𝑡) = 𝑡𝑒^{−𝛼𝑡}𝑢(𝑡) for 𝛽 = 𝛼
Method II: using the convolution, and knowing that 𝐲(𝑡) = 0 for 𝑡 ≤ 0:
𝐲(𝑡) = ∫_{−∞}^{+∞} 𝑒^{−𝛼𝜏}𝑢(𝜏)𝑒^{−𝛽(𝑡−𝜏)}𝑢(𝑡 − 𝜏)𝑑𝜏, where the integrand is nonzero for 0 < 𝜏 < 𝑡
⦁ 𝛼 ≠ 𝛽:
𝐲(𝑡) = ∫_{0}^{𝑡} 𝑒^{−𝛼𝜏}𝑒^{−𝛽(𝑡−𝜏)}𝑑𝜏 = 𝑒^{−𝛽𝑡}∫_{0}^{𝑡} 𝑒^{(𝛽−𝛼)𝜏}𝑑𝜏 = 𝑒^{−𝛽𝑡}[𝑒^{(𝛽−𝛼)𝜏}/(𝛽 − 𝛼)]_{𝜏=0}^{𝜏=𝑡} = (1/(𝛽 − 𝛼))(𝑒^{−𝛼𝑡} − 𝑒^{−𝛽𝑡})
⦁ 𝛼 = 𝛽:
𝐲(𝑡) = ∫_{0}^{𝑡} 𝑒^{−𝛼𝜏}𝑒^{−𝛼(𝑡−𝜏)}𝑑𝜏 = 𝑒^{−𝛼𝑡}∫_{0}^{𝑡} 𝑑𝜏 = 𝑡𝑒^{−𝛼𝑡}
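A quick MATLAB sketch verifies the 𝛼 ≠ 𝛽 result numerically (the values of alpha and
beta are illustrative assumptions):
% Numeric check of the convolution result for alpha ~= beta
alpha = 1; beta = 3;
dt = 1e-3; t = 0:dt:10;
x = exp(-alpha*t); h = exp(-beta*t);
y = conv(x, h) * dt; y = y(1:numel(t));            % keep the causal part
y_exact = (exp(-alpha*t) - exp(-beta*t)) / (beta - alpha);
max(abs(y - y_exact))                              % small discretization error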
Exercise 27: Using MATLAB, plot the triangular wave
𝐱1(𝑡) = { (𝑇/2) + (2𝐴/𝑇)𝑡 for 0 ≤ 𝑡 ≤ 𝑇/2 ; (𝑇/2) − (2𝐴/𝑇)𝑡 for 𝑇/2 ≤ 𝑡 ≤ 𝑇 }
together with 𝐱2(𝑡) = cos(𝑎𝑡) and their product.
% Plot a triangular wave, a cosine, and their product
clear all,clc,
dt=0.01; t=-10:dt:10;
T=10; a=20;                       % triangle period and cosine frequency
y1=asin(sin((2*pi/T)*t));         % triangular wave generated via asin(sin(.))
y2=cos(a*t);
y=y1.*y2;                         % the modulated product
subplot(311)
plot(t,y1,'-b','linewidth',2)
grid on, hold on;
subplot(312)
plot(t,y2,'-r','linewidth',2)
grid on, hold on;
subplot(313)
plot(t,y,'-r','linewidth',2)
grid on
Exercise 28: Prove that the Fourier transform of the pulse train is
𝔽(𝐱(𝑡) = ∑_{𝑘=−∞}^{+∞} 𝛿(𝑡 − 𝑘𝑇)) = (2𝜋/𝑇)∑_{𝑘=−∞}^{+∞} 𝛿(𝜔 − 𝑘·2𝜋/𝑇)
❷ Now if we let 𝑡 = 𝜋 we obtain sin(𝑘𝜋) = 0 and cos(𝑘𝜋) = (−1)ᵏ, but the function 𝐱(𝑡) has a
jump at 𝑡 = 𝜋, where the series converges to the average value, so we let
𝐱(𝜋) = (𝑒^{𝜋} + 𝑒^{−𝜋})/2 = cosh(𝜋) = (sinh(𝜋)/𝜋)(1 + 2∑_{𝑘=1}^{∞} 1/(1 + 𝑘²)) ⟹ 𝑠 = ∑_{𝑘=1}^{∞} 1/(1 + 𝑘²) = (1/2)(𝜋/tanh(𝜋) − 1)
Exercise 30: Given an LTI system described by its differential equation 𝐲(𝑡) = 𝐱̇(𝑡), assume
that the input signal is periodic with Fourier series
𝐱(𝑡) = ∑_{𝑘=−∞}^{∞} 𝐶𝑘^𝑥 𝑒^{𝑗𝜔0𝑘𝑡}
Find 𝐶𝑘^𝑦, the exponential Fourier series coefficients of the signal 𝐲(𝑡).
Solution:
𝐲(𝑡) = 𝑑𝐱(𝑡)/𝑑𝑡 = (𝑑/𝑑𝑡)∑_{𝑘=−∞}^{∞} 𝐶𝑘^𝑥 𝑒^{𝑗𝜔0𝑘𝑡} = ∑_{𝑘=−∞}^{∞} 𝑗𝜔0𝑘𝐶𝑘^𝑥 𝑒^{𝑗𝜔0𝑘𝑡} ⟹ 𝐶𝑘^𝑦 = 𝑗𝑘𝜔0𝐶𝑘^𝑥
If we let 𝜔 = 𝑘𝜔0, then 𝐶𝑘^𝑦 = 𝑗𝜔𝐶𝑘^𝑥.
Exercise 31: Find the Fourier series coefficients of the half-cosine wave signal
𝐱(𝑡) = { 0 for −𝑇/2 ≤ 𝑡 ≤ −𝑇/4 ; 𝐴cos(𝜔0𝑡) for |𝑡| ≤ 𝑇/4 ; 0 for 𝑇/4 ≤ 𝑡 ≤ 𝑇/2 }
with 𝑇 = 8 and 𝜔0 = 𝜋/4.
Solution:
𝑎0 = (1/𝑇)∫_𝑇 𝐱(𝑡)𝑑𝑡 = (area over one period)/𝑇 = 𝐴/𝜋
𝑎𝑘 = (2/𝑇)∫_𝑇 𝐱(𝑡)cos(𝑘𝜔0𝑡)𝑑𝑡 = (4/𝑇)∫_{0}^{𝑇/4} 𝐱(𝑡)cos(𝑘𝜔0𝑡)𝑑𝑡 = (4𝐴/𝑇)∫_{0}^{𝑇/4} cos(𝜔0𝑡)cos(𝑘𝜔0𝑡)𝑑𝑡
sin((1 + 𝑘)𝜋/2) = 0 = sin((1 − 𝑘)𝜋/2) for 𝑘 odd (𝑘 ≠ 1)
sin((1 + 𝑘)𝜋/2) = sin((1 − 𝑘)𝜋/2) = cos(𝑘𝜋/2) = (−1)^{𝑘/2} for 𝑘 even
Hence, to avoid using 𝑘 = 2, 4, 6, … and to ease the computation, we replace 𝑘 by 2ℓ:
𝐱(𝑡) = 𝐴/𝜋 + (𝐴/2)cos(𝜔0𝑡) − (2𝐴/𝜋)∑_{ℓ=1}^{∞} (−1)^ℓ cos(2ℓ𝜔0𝑡)/((2ℓ)² − 1)
ℓ=1
% Partial sums of the Fourier series of the half-cosine wave (A = 1)
t=0:0.01:2;
Nmax=5; w0=2*pi;                     % plot over two periods of a period-1 wave
x1=1/pi+(1/2)*cos(w0*t);             % dc term plus fundamental
for k=1:1:Nmax
    a=((2*(-1)^k)/(pi*(1-(2*k)^2))); % coefficient of cos(2k*w0*t)
    x1=x1+a*cos((2*w0*k)*t);
    if k>2                           % draw only the higher partial sums
        plot(t,x1,'-','linewidth',1.5)
        hold on, grid on
    end
end
Exercise 32: Find the Fourier series coefficients of the half-sine wave signal
𝐱(𝑡) = { 𝐴sin(𝜔0𝑡) for 0 ≤ 𝑡 ≤ 𝑇/2 ; 0 for 𝑇/2 ≤ 𝑡 ≤ 𝑇 }
with 𝑇 = 8 and 𝜔0 = 𝜋/4.
Solution: here we give a short answer and leave the details to the students:
𝐱(𝑡) = 𝐴/𝜋 + (𝐴/2)sin(𝜔0𝑡) − (2𝐴/𝜋)∑_{ℓ=1}^{∞} cos(2ℓ𝜔0𝑡)/((2ℓ)² − 1)
Exercise 33: Find the Fourier series coefficients of the given signal.
𝐱(𝜋/2) = 𝜋/2 = ∑_{𝑘=1}^{∞} (2/𝑘)(−1)^{𝑘+1}sin(𝑘𝜋/2) ⟹ 𝜋/4 = ∑_{ℓ=0}^{∞} (−1)^ℓ/(2ℓ + 1)
Exercise 34: Find the Fourier transform of the given signal
𝐟(𝑡) = { 𝐴(𝑏 + 𝑡)/(𝑏 − 𝑎) for −𝑏 ≤ 𝑡 ≤ −𝑎 ; 𝐴 for −𝑎 ≤ 𝑡 ≤ 𝑎 ; 𝐴(𝑏 − 𝑡)/(𝑏 − 𝑎) for 𝑎 ≤ 𝑡 ≤ 𝑏 }
𝑑²𝐟(𝑡)/𝑑𝑡² = (𝐴/(𝑏 − 𝑎))(𝛿(𝑡 + 𝑏) − 𝛿(𝑡 + 𝑎) − 𝛿(𝑡 − 𝑎) + 𝛿(𝑡 − 𝑏))
𝔽(𝑑²𝐟(𝑡)/𝑑𝑡²) = (𝐴/(𝑏 − 𝑎))(𝑒^{𝑗𝑏𝜔} − 𝑒^{𝑗𝑎𝜔} − 𝑒^{−𝑗𝑎𝜔} + 𝑒^{−𝑗𝑏𝜔}) = (2𝐴/(𝑏 − 𝑎))(cos(𝑏𝜔) − cos(𝑎𝜔))
Also we know that
𝔽(𝑑²𝐟(𝑡)/𝑑𝑡²) = −𝜔²𝔽(𝐟(𝑡)) ⟹ 𝔽(𝐟(𝑡)) = −(1/𝜔²)𝔽(𝑑²𝐟(𝑡)/𝑑𝑡²) = (2𝐴/(𝜔²(𝑏 − 𝑎)))(cos(𝑎𝜔) − cos(𝑏𝜔))
𝐅(𝜔) = (2𝐴/(𝜔²(𝑏 − 𝑎)))(cos(𝑎𝜔) − cos(𝑏𝜔))
CHAPTER VI:
Fourier-Analysis of
Discrete LTI Systems
I. Introduction
II. Discrete-Time Fourier Series (DTFS)
II.I Properties of Discrete Fourier Series
III. Discrete-Time Fourier Transform (DTFT)
III.I Connection between Fourier and z-Transform
III.II Common Discrete-time Fourier Transform Pairs
III.III DTFT Convergence Issues
III.IV Properties of the (DTFT) Fourier Transform
III.V General Comments on Fourier Transforms
V. Solved Problems
Because the basic complex exponentials {𝜙𝑘 [𝑛] = 𝑒 𝑗𝑘Ω0 𝑛 Ω0 = 2𝜋𝑘/𝑁} repeat periodically in
frequency, two alternative interpretations arise for the behavior of the Fourier series
coefficients. One interpretation is that there are only 𝑁 coefficients. The second is that the
sequence representing the Fourier series coefficients can run on indefinitely but repeats
periodically. Both interpretations, of course, are equivalent because in either case there are
only 𝑁 unique Fourier series coefficients.
II. Discrete-Time Fourier Series (DTFS): The exponential Fourier series consists of the
exponentials {𝜙𝑘 [𝑛] = 𝑒 𝑗𝑘Ω0 𝑛 } There would be an infinite number of harmonics, except for
the property proved before: that discrete-time exponentials whose frequencies are
separated by 2𝜋 (or integral multiples of 2𝜋) are identical because 𝜙𝑘 [𝑛] = 𝜙𝑘+𝑁 [𝑛]. The
consequence of this result is that the 𝑘 th harmonic is identical to the (𝑘 + 𝑁)th harmonic.
Since complex exponentials are Eigen-functions of linear time-invariant (LTI) systems 𝕋,
calculating the output of an LTI system given 𝑒 𝑗Ω𝑛 as an input amounts to simple
multiplication,
LTI system
𝑥[𝑛] = 𝑒 𝑗Ω𝑛 𝑦[𝑛] = 𝐻[𝑘]𝑒 𝑗Ω𝑛
𝕋
where Ω = 𝑘Ω0 = 2𝜋𝑘/𝑁, and 𝐻[𝑘] ∈ ℂ is the eigenvalue corresponding to k. As shown in the
figure, a simple exponential input would yield the output 𝑦[𝑛] = 𝐻[𝑘]𝑒 𝑗Ω𝑛 . Using this and
the fact that 𝕋 is linear, calculating 𝑦[𝑛] for combinations of complex exponentials is also
straightforward.
LTI system
𝑐1 𝑒 𝑗Ω1 𝑛 + 𝑐2 𝑒 𝑗Ω2 𝑛 𝑐1 𝐻[𝑘1 ]𝑒 𝑗Ω1 𝑛 + 𝐻[𝑘2 ]𝑐2 𝑒 𝑗Ω2 𝑛
𝕋
⋮ ⋮
⋮ ⋮
LTI system
∑ 𝑐𝑘 𝑒 𝑗𝑘Ω0 𝑛 ∑ 𝑐𝑘 𝐻[𝑘]𝑒 𝑗𝑘Ω0 𝑛
𝕋
𝑘 𝑘
The action of 𝕋 on an input such as those in the two equations above is easy to explain: 𝕋
independently scales each exponential component 𝑒^{𝑗𝑘Ω0𝑛} by a different complex number
𝐻[𝑘] ∈ ℂ. As such, if we can write a function 𝑦[𝑛] as a combination of complex exponentials,
it allows us to easily calculate the output of a system. Moreover, the periodicity of the
complex exponentials 𝜙𝑘[𝑛] tells us that there exist only 𝑁 distinct signals to form a basis,
which leads to the fact that a discrete-time periodic function 𝐱[𝑛] = 𝐱[𝑛 + 𝑁] can be written
as a linear combination of only 𝑁 harmonic complex sinusoids:
𝐱[𝑛] = ∑_{𝑘=0}^{𝑁−1} 𝑐𝑘𝑒^{𝑗𝑘Ω0𝑛} with Ω0 = 2𝜋/𝑁
Theorem: The exponential terms {𝑒^{𝑗Ω𝑛}, 𝑛 = 0, 1, …, 𝑁 − 1} are vectors in ℂᴺ, and if we define
𝜙𝑘[𝑛] = (1/√𝑁)𝑒^{𝑗2𝜋𝑘𝑛/𝑁} with 𝑘 = 0, 1, …, 𝑁 − 1
then {𝜙𝑘[𝑛]} is an orthonormal set. It is also a basis, since there are 𝑁 vectors which are
linearly independent (orthogonality implies linear independence). We have thus shown
that the harmonic sinusoids {𝜙𝑘[𝑛]} form an orthonormal basis for ℂᴺ ▪
Theorem: Given a discrete-time, periodic signal 𝐱[𝑛] (i.e., a vector in ℂᴺ), we can write:
𝐱[𝑛] = ∑_{𝑘=0}^{𝑁−1} 𝑐𝑘𝑒^{𝑗𝑘Ω0𝑛} with 𝑐𝑘 = (1/𝑁)∑_{𝑛=0}^{𝑁−1} 𝐱[𝑛]𝑒^{−𝑗2𝜋𝑘𝑛/𝑁}
Proof: In order to obtain the DTFS coefficients we project the signal onto the basis
{𝜙0[𝑛], 𝜙1[𝑛], …, 𝜙𝑁−1[𝑛]} (as in the Gram–Schmidt process):
𝑐𝑘 = ⟨𝐱[𝑛], 𝜙𝑘[𝑛]⟩/⟨𝜙𝑘[𝑛], 𝜙𝑘[𝑛]⟩ = (∑_{𝑛=0}^{𝑁−1} 𝐱[𝑛]𝑒^{−𝑗2𝜋𝑘𝑛/𝑁})/(∑_{𝑛=0}^{𝑁−1} 𝑒^{𝑗2𝜋𝑘𝑛/𝑁}𝑒^{−𝑗2𝜋𝑘𝑛/𝑁}) = (1/𝑁)∑_{𝑛=0}^{𝑁−1} 𝐱[𝑛]𝑒^{−𝑗2𝜋𝑘𝑛/𝑁}
Example: 01 Given a discrete-time square wave as shown in the figure (one period of
length 𝑁, with 𝐱[𝑛] = 1 for |𝑛| ≤ 𝑁1), find its DTFS.
Solution: We have
𝑐𝑘 = (1/𝑁)∑_{𝑛=0}^{𝑁−1} 𝐱[𝑛]𝑒^{−𝑗2𝜋𝑘𝑛/𝑁} = (1/𝑁)∑_{⟨𝑁⟩} 𝐱[𝑛]𝑒^{−𝑗2𝜋𝑘𝑛/𝑁} ⟹ 𝑐0 = (2𝑁1 + 1)/𝑁
Just like the continuous-time Fourier series, we can take the summation over any interval,
so we have
𝑐𝑘 = (1/𝑁)∑_{𝑛=−𝑁1}^{𝑁1} 𝑒^{−𝑗2𝜋𝑘𝑛/𝑁}
Let ℓ = 𝑛 + 𝑁1 (so we get a geometric series starting at 0):
𝑐𝑘 = (1/𝑁)∑_{ℓ=0}^{2𝑁1} 𝑒^{−𝑗Ω0𝑘(ℓ−𝑁1)} = (𝑒^{𝑗Ω0𝑘𝑁1}/𝑁)∑_{ℓ=0}^{2𝑁1} (𝑒^{−𝑗Ω0𝑘})^ℓ
= (𝑒^{𝑗Ω0𝑘𝑁1}/𝑁)(1 − 𝑒^{−𝑗Ω0𝑘(1+2𝑁1)})/(1 − 𝑒^{−𝑗Ω0𝑘}) = (1/𝑁)(𝑒^{𝑗Ω0𝑘(1+2𝑁1)/2} − 𝑒^{−𝑗Ω0𝑘(1+2𝑁1)/2})/(𝑒^{𝑗Ω0𝑘/2} − 𝑒^{−𝑗Ω0𝑘/2})
= (1/𝑁) sin((2𝜋𝑘/𝑁)(𝑁1 + 1/2))/sin(𝜋𝑘/𝑁) = "digital sinc"
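The digital-sinc coefficients are easy to visualize in MATLAB (the values of N and N1
below are illustrative choices):
% Plot the DTFS coefficients of the discrete square wave
N = 20; N1 = 2; k = -N/2:N/2;
ck = sin(2*pi*k*(N1+1/2)/N) ./ (N*sin(pi*k/N));
ck(k==0) = (2*N1+1)/N;            % fill in the 0/0 limit value at k = 0
stem(k, ck), grid on, xlabel('k'), ylabel('c_k')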
Example: 02 Given a periodic Discrete Time Sine-Wave 𝐱[𝑛] = sin(Ω0 𝑛). Find its DTFS
Remark: The Fourier coefficients 𝑐𝑘 , are often referred to as the spectral coefficients of 𝐱[𝑛].
Remark: Since the discrete Fourier series is a finite series, in contrast to the continuous-
time case, there are no convergence issues with discrete Fourier series.
II.I Properties of Discrete Fourier Series: In this section we will discuss the basic
properties of the Discrete-Time Fourier Series (only some of the important properties). Let
𝐱[𝑛], 𝐲[𝑛] be two periodic signals with the same period 𝑁 and having Fourier series:
𝐱[𝑛] = ∑_{𝑘=0}^{𝑁−1} 𝑐𝑘𝑒^{𝑗𝑘Ω0𝑛} ↔ 𝑐𝑘 = (1/𝑁)∑_{𝑛=0}^{𝑁−1} 𝐱[𝑛]𝑒^{−𝑗𝑘Ω0𝑛}
𝐲[𝑛] = ∑_{𝑘=0}^{𝑁−1} 𝑑𝑘𝑒^{𝑗𝑘Ω0𝑛} ↔ 𝑑𝑘 = (1/𝑁)∑_{𝑛=0}^{𝑁−1} 𝐲[𝑛]𝑒^{−𝑗𝑘Ω0𝑛}
a. Linearity and periodicity: 𝐴𝐱[𝑛] + 𝐵𝐲[𝑛] ↔ 𝐴𝑐𝑘 + 𝐵𝑑𝑘 and 𝑐𝑘+𝑁 = 𝑐𝑘
b. Time Shifting: 𝐱[𝑛 − 𝑛0] ↔ 𝑒^{−𝑗𝑘Ω0𝑛0}𝑐𝑘
c. Frequency Shifting: 𝑒^{𝑗ℓΩ0𝑛}𝐱[𝑛] ↔ 𝑐𝑘−ℓ
d. Time Reversal and Conjugate: 𝐱[−𝑛] ↔ 𝑐−𝑘 and 𝐱⋆[𝑛] ↔ 𝑐⋆−𝑘
e. Periodic Convolution: ∑_{𝑟=0}^{𝑁−1} 𝐱[𝑟]𝐲[𝑛 − 𝑟] ↔ 𝑁𝑐𝑘𝑑𝑘
f. Duality: 𝐱[𝑛] ↔ 𝑐𝑘 ⟹ 𝑁·𝑐[𝑛] ↔ 𝐱[−𝑘] "very important"
g. Even/Odd Sequences: 𝐱[𝑛] = 𝐱𝑒[𝑛] + 𝐱𝑜[𝑛] ⟹ 𝐱𝑒[𝑛] ↔ Re(𝑐𝑘) and 𝐱𝑜[𝑛] ↔ 𝑗·Im(𝑐𝑘)
h. Multiplication: 𝐱[𝑛]·𝐲[𝑛] ↔ ∑_{ℓ=⟨𝑁⟩} 𝑐ℓ𝑑𝑘−ℓ
i. Difference: 𝐱[𝑛] − 𝐱[𝑛 − 1] ↔ (1 − 𝑒^{−𝑗𝑘Ω0})𝑐𝑘
j. Parseval's theorem: ∑_{𝑛=0}^{𝑁−1} |𝐱[𝑛]|² = 𝑁∑_{𝑘=0}^{𝑁−1} |𝑐𝑘|²
k. Summation and Difference in the Fourier Domain: ∑_{𝜂=0}^{𝑁−1} 𝐱[𝜂] ↔ (1/𝑗𝑘Ω0)𝑐𝑘 and (𝑑/𝑑𝑛)𝐱[𝑛] ↔ 𝑗𝑘Ω0𝑐𝑘
Example: 04 In this example we will develop two MATLAB codes to implement DTFS
analysis and synthesis equations. The first code given below computes the DTFS
coefficients for the periodic signal 𝐱[𝑘]. The vector “x” holds one period of the signal 𝐱[𝑘] for
𝑛 = 0, … , 𝑁 − 1. The vector “idx” holds the values of the index 𝑘 for which the DTFS
coefficients 𝑐𝑘 are to be computed. The coefficients are returned in the vector “c”
The second code implements the DTFS synthesis equation. The vector “c” holds one period
of the DTFS coefficients ck for k=0,...,N−1. The vector “idx” holds the values of the index n
for which the signal samples x[n] are to be computed. The synthesized signal x[n] is
returned in the vector “x”
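A minimal sketch of such an analysis/synthesis pair, consistent with the description above,
might read as follows (the function names are illustrative assumptions; in MATLAB each
function would normally be saved in its own file):
% DTFS analysis: c_k = (1/N) sum_{n=0}^{N-1} x[n] e^{-j 2 pi k n / N}
% x holds one period of the signal (row vector); idx holds the indices k
function c = dtfs_analysis(x, idx)
    N = length(x); n = 0:N-1; c = zeros(size(idx));
    for m = 1:length(idx)
        c(m) = sum(x .* exp(-1j*2*pi*idx(m)*n/N)) / N;
    end
end
% DTFS synthesis: x[n] = sum_{k=0}^{N-1} c_k e^{j 2 pi k n / N}
% c holds one period of the coefficients (row vector); idx holds the indices n
function x = dtfs_synthesis(c, idx)
    N = length(c); k = 0:N-1; x = zeros(size(idx));
    for m = 1:length(idx)
        x(m) = sum(c .* exp(1j*2*pi*k*idx(m)/N));
    end
end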
Let 𝐱[𝑛] be a nonperiodic sequence of finite duration; that is, 𝐱[𝑛] = 0 for |𝑛| > 𝑁1 for some
positive integer 𝑁1. Such a sequence is shown in the figure. Let 𝐱̃[𝑛] be a periodic sequence
formed by repeating 𝐱[𝑛] with fundamental period 𝑁, as shown in the figure. If we let
𝑁 → ∞, we have lim_{𝑁→∞} 𝐱̃[𝑛] = 𝐱[𝑛].
Since 𝐱̃[𝑛] = 𝐱[𝑛] for |𝑛| ≤ 𝑁1, and since 𝐱[𝑛] = 0 outside this interval, we can write
𝑐𝑘 = (1/𝑁)∑_{𝑛=−𝑁1}^{𝑁1} 𝐱[𝑛]𝑒^{−𝑗𝑘Ω0𝑛} = (1/𝑁)∑_{𝑛=−∞}^{∞} 𝐱[𝑛]𝑒^{−𝑗𝑘Ω0𝑛}
Let us define 𝑿(Ω) as
𝑿(Ω) = ∑_{𝑛=−∞}^{∞} 𝐱[𝑛]𝑒^{−𝑗Ω𝑛} with Ω = 𝑘Ω0 = 2𝜋𝑘/𝑁 ⟹ 𝑁𝑐𝑘 = 𝑿(𝑘Ω0)
𝐱̃[𝑛] = ∑_{⟨𝑁⟩} (𝑿(𝑘Ω0)/𝑁)𝑒^{𝑗𝑘Ω0𝑛} = (1/2𝜋)∑_{⟨𝑁⟩} 𝑿(𝑘Ω0)𝑒^{𝑗𝑘Ω0𝑛}Ω0
The term 𝑿(Ω) is periodic with period 2𝜋 and so is 𝑒^{𝑗Ω𝑛}; thus, the product 𝑿(Ω)𝑒^{𝑗Ω𝑛} will
also be periodic with period 2𝜋. Each term in the last summation represents the area of a
rectangle of height 𝑿(𝑘Ω0)𝑒^{𝑗𝑘Ω0𝑛} and width Ω0. As 𝑁 → ∞, Ω0 becomes infinitesimal
(Ω0 → 0) and the summation in 𝐱̃[𝑛] passes to an integral. Since 𝑿(Ω)𝑒^{𝑗Ω𝑛} is periodic with
period 2𝜋, the interval of integration can be taken as any interval of length 2𝜋:
𝐱[𝑛] = (1/2𝜋) lim_{𝑁→∞} ∑_{⟨𝑁⟩} 𝑿(𝑘Ω0)𝑒^{𝑗𝑘Ω0𝑛}Ω0 = (1/2𝜋) lim_{Ω0→0} ∑_{⟨𝑁⟩} 𝑿(𝑘Ω0)𝑒^{𝑗𝑘Ω0𝑛}Ω0 = (1/2𝜋)∫_{2𝜋} 𝑿(Ω)𝑒^{𝑗Ω𝑛}𝑑Ω
The discrete-time Fourier transform (DTFT) of a discrete set of real or complex numbers
𝐱[𝑛], for all integers 𝑛, is given by
𝑿(Ω) = ∑_{𝑛=−∞}^{∞} 𝐱[𝑛]𝑒^{−𝑗Ω𝑛}
The discrete-time Fourier transform 𝑿(Ω) of 𝐱[𝑛] is, in general, a complex-valued
continuous function and can be expressed as 𝑿(Ω) = |𝑿(Ω)|𝑒^{𝑗𝜙(Ω)}. As in continuous time,
the Fourier transform 𝑿(Ω) of a nonperiodic sequence 𝐱[𝑛] is the frequency-domain
specification of 𝐱[𝑛] and is referred to as the spectrum (or Fourier spectrum) of 𝐱[𝑛]. The
quantity |𝑿(Ω)| is called the magnitude spectrum of 𝐱[𝑛], and 𝜙(Ω) is called the phase
spectrum of 𝐱[𝑛]. Furthermore, if 𝐱[𝑛] is real, the amplitude spectrum |𝑿(Ω)| is an even
function and the phase spectrum 𝜙(Ω) is an odd function of Ω.
Remark: Just as in the case of continuous time, the sufficient condition for the
convergence of 𝑿(Ω) is that 𝐱[𝑛] is absolutely summable, that is, ∑∞
−∞|𝐱[𝑛]| < ∞.
III.I Connection between the Fourier Transform and the z-Transform: The z-transform
of a signal evaluated on a unit circle is equal to the Fourier transform of that signal.
𝑿(Ω) = 𝑿(𝑧)|_{𝑧=𝑒^{𝑗Ω}} ⟺ 𝑿(Ω) = ∑_{𝑛=−∞}^{∞} 𝐱[𝑛]𝑧^{−𝑛}|_{𝑧=𝑒^{𝑗Ω}} = ∑_{𝑛=−∞}^{∞} 𝐱[𝑛]𝑒^{−𝑗Ω𝑛}
We see that if the ROC of 𝑿(𝑧) contains the unit circle, then the Fourier transform 𝑿(Ω) of
𝐱[𝑛] equals 𝑿(𝑧) evaluated on the unit circle, that is, 𝑿(Ω) = 𝑿(𝑧)|_{𝑧=𝑒^{𝑗Ω}}.
Note that since the summation in the z-transform is denoted by 𝑿(𝑧), the summation in the
DTFT may be denoted as 𝑿(𝑒^{𝑗Ω}). Thus, in the remainder of this text, both 𝑿(Ω) and
𝑿(𝑒^{𝑗Ω}) mean the same thing whenever we connect the Fourier transform with the
z-transform. Although the Fourier transform can be obtained from the z-transform by
setting 𝑧 = 𝑒^{𝑗Ω}, it should not be assumed automatically that the Fourier transform of a
sequence 𝐱[𝑛] is its z-transform with 𝑧 replaced by 𝑒^{𝑗Ω}. If 𝐱[𝑛] is absolutely summable,
the Fourier transform of 𝐱[𝑛] can be obtained from the z-transform of 𝐱[𝑛] with 𝑧 = 𝑒^{𝑗Ω},
since the ROC of 𝑿(𝑧) will then contain the unit circle (|𝑒^{𝑗Ω}| = 1). This is not generally
true of sequences which are not absolutely summable.
Example: 05 Consider the unit impulse sequence 𝛿[𝑛] where the z-transform of 𝛿[𝑛] is
𝑿(𝑧) = ∑_{𝑛=−∞}^{∞} 𝛿[𝑛]𝑧^{−𝑛} = 1 = ∑_{𝑛=−∞}^{∞} 𝛿[𝑛]𝑒^{−𝑗Ω𝑛} = 𝑿(𝑒^{𝑗Ω})
Thus, the z-transform and the Fourier transform of 𝛿[𝑛] are the same. Note that 𝛿[𝑛] is
absolutely summable and that the ROC of the z-transform of 𝛿[𝑛] contains the unit circle.
Example: 06 Consider the causal exponential sequence 𝐱[𝑛] = 𝑎ⁿ𝑢[𝑛]. The z-transform of
𝐱[𝑛] is given by
𝑿(𝑧) = 1/(1 − 𝑎𝑧^{−1}), |𝑧| > |𝑎|
Thus, 𝑿(𝑒^{𝑗Ω}) exists for |𝑎| < 1 because the ROC of 𝑿(𝑧) then contains the unit circle.
That is,
𝑿(𝑒^{𝑗Ω}) = 1/(1 − 𝑎𝑒^{−𝑗Ω}), |𝑎| < 1
Consider now the unit step sequence 𝑢[𝑛], whose z-transform is
𝑿(𝑧) = 1/(1 − 𝑧^{−1}), |𝑧| > 1
The Fourier transform of 𝑢[𝑛] cannot be obtained from its z-transform, because the ROC of
the z-transform of 𝑢[𝑛] does not include the unit circle. Note that the unit step sequence
𝑢[𝑛] is not absolutely summable. The Fourier transform of 𝑢[𝑛] is given by
𝑿(Ω) = 𝜋𝛿(Ω) + 1/(1 − 𝑒^{−𝑗Ω}), |Ω| < 𝜋
To prove this, let us consider
𝔽(𝛿[𝑛]) = 1 = (1 − 𝑒^{−𝑗Ω})/(1 − 𝑒^{−𝑗Ω}) = (1 − 𝑒^{−𝑗Ω})/(1 − 𝑒^{−𝑗Ω}) + 𝜋𝛿(Ω)(1 − 𝑒^{−𝑗Ω}) = 1 + 𝜋𝛿(Ω)(1 − 𝑒^{−𝑗Ω})
The second term is always zero, because at Ω = 0 we have 1 − 𝑒^{−𝑗Ω} = 0, and 𝛿(Ω) is zero
at every other point of |Ω| < 𝜋. The same result is obtained by applying the shift property
to 𝛿[𝑛] = 𝑢[𝑛] − 𝑢[𝑛 − 1]. In the next section we will see more detail about the unit step
sequence.
III.II Common Discrete-time Fourier Transform Pairs: For every time domain sequence
there is a corresponding frequency domain waveform, and vice versa. For example, a digital
rectangular pulse in the time domain coincides with a sinc function [i.e. ,sin(x)/x] in the
frequency domain. Duality provides that the reverse is also true; a rectangular pulse in the
frequency domain matches a sinc function in the time domain. Waveforms that correspond
to each other in this manner are called Fourier transform pairs. Several common pairs are
presented in this section (Some Fourier transform pairs can be computed quite easily
directly from the definition).
Unit Dirac Impulse: To compute the Fourier transform of an impulse we apply the
definition of the Fourier transform: 𝑿(Ω) = 𝔽(𝛿[𝑛]) = ∑_{𝑛=−∞}^{∞} 𝛿[𝑛]𝑒^{−𝑗Ω𝑛} = 1. More
generally, 𝔽(𝛿[𝑛 − 𝑛0]) = 𝑒^{−𝑗Ω𝑛0}.
Remark: 𝔽(𝐱[𝑛 − 𝑛0]) = 𝑒^{−𝑗Ω𝑛0}𝑿(Ω) is very obvious and is called the shift property.
DC Gain Sequence: The term DC stands for direct current, which is a constant current.
To compute the DTFT of the sequence {𝐱[𝑛] = 1 ∀𝑛} it is preferred to compute the inverse of
𝑿(Ω) = 2𝜋∑_{𝑘=−∞}^{∞} 𝛿(Ω − 2𝜋𝑘)
We know that 𝑒^{𝑗Ω𝑛}𝛿(Ω − 2𝜋𝑘) = 𝑒^{𝑗2𝜋𝑘𝑛}𝛿(Ω − 2𝜋𝑘) = 𝛿(Ω − 2𝜋𝑘), so
𝐱[𝑛] = 𝔽⁻¹(𝑿(Ω)) = (1/2𝜋)∫_{−𝜋}^{+𝜋} 2𝜋∑_{𝑘=−∞}^{∞} 𝑒^{𝑗Ω𝑛}𝛿(Ω − 2𝜋𝑘)𝑑Ω = ∫_{−𝜋}^{+𝜋} 𝛿(Ω)𝑑Ω = 1 ∀𝑛
𝐱[𝑛] = 1 for all 𝑛 ↔ 𝑿(Ω) = 2𝜋∑_{𝑘=−∞}^{∞} 𝛿(Ω − 2𝜋𝑘)  or  𝔽(∑_{𝑘=−∞}^{∞} 𝛿[𝑛 − 𝑘]) = 2𝜋∑_{𝑘=−∞}^{∞} 𝛿(Ω − 2𝜋𝑘)
Since the continuous F.T. of 𝐱(𝑡) = 1 (for all 𝑡) is 2𝜋𝛿(𝜔), the DTFT of 𝐱[𝑛] = 1 should be an
impulse train (or impulse comb), and it turns out to be 2𝜋∑_{𝑘=−∞}^{∞} 𝛿(Ω − 2𝜋𝑘).
Sampling property: the DTFT of a continuous signal 𝐱(𝑡) sampled with period 𝑇 is obtained
by a periodic duplication of the continuous Fourier transform 𝑿(ω) with a period ω𝑠 = 2𝜋/𝑇
and scaled by 𝑇. This fact will be used later on in sampling theory.
Unit Step Sequence: Since the step function is not absolutely summable, so there is no
ordinary method for obtaining the DTFT. But how can we go beyond this obstacle?
We define 𝑢[𝑛] = 𝑢1[𝑛] + 𝑢2[𝑛] with 𝑢1[𝑛] ↔ 𝑿1(Ω), 𝑢2[𝑛] ↔ 𝑿2(Ω), and
𝑢1[𝑛] = 1/2 for −∞ < 𝑛 < +∞, and 𝑢2[𝑛] = (1/2)sgn[𝑛]
We express the impulse as 𝛿[𝑛] = 𝑢2[𝑛] − 𝑢2[𝑛 − 1]; using the facts that 𝔽(𝛿[𝑛]) = 1 and
𝔽(𝑢2[𝑛] − 𝑢2[𝑛 − 1]) = 𝑿2(Ω) − 𝑒^{−𝑗Ω}𝑿2(Ω) = (1 − 𝑒^{−𝑗Ω})𝑿2(Ω)
we get the following DTFT:
𝑿2(Ω) = 1/(1 − 𝑒^{−𝑗Ω})
We have seen that 𝑢1[𝑛] = 1/2 for all 𝑛 ↔ 𝑿1(Ω) = 𝜋∑_{𝑘=−∞}^{∞} 𝛿(Ω − 2𝜋𝑘). Adding these two
results, we obtain the final result 𝑿(Ω) = 𝑿1(Ω) + 𝑿2(Ω), which is
𝑿(Ω) = 𝔽(𝑢[𝑛]) = 𝜋∑_{𝑘=−∞}^{∞} 𝛿(Ω − 2𝜋𝑘) + 1/(1 − 𝑒^{−𝑗Ω}), −∞ < Ω < +∞
Complex Exponential Sequence: Let us see what the DTFT of 𝐱[𝑛] = 𝑒^{𝑗Ω0𝑛} is. Going back
to the dc gain sequence 𝐱1[𝑛] = 1 and using the frequency-shift property, we get
𝑿(Ω) = 𝔽(𝑒^{𝑗Ω0𝑛}) = 2𝜋∑_{𝑘=−∞}^{∞} 𝛿(Ω − Ω0 − 2𝜋𝑘)
and over one period we get 𝑿(Ω) = 𝔽(𝑒^{𝑗Ω0𝑛}) = 2𝜋𝛿(Ω − Ω0), |Ω| < 𝜋.
Sine and Cosine Sequences: Let us see what the DTFT of 𝐱[𝑛] = cos(Ω0𝑛) is. In fact this
is just a consequence of the previous result:
1 𝔽
𝐱[𝑛] = cos(Ω0 𝑛) = (𝑒 𝑗Ω0 𝑛 + 𝑒 −𝑗Ω0 𝑛 ) ↔ 𝑿(Ω) = 𝜋(𝛿(Ω + Ω0 ) + 𝛿(Ω − Ω0 )) |Ω| < 𝜋
2
1 𝔽
𝐱[𝑛] = sin(Ω0 𝑛) = (𝑒 𝑗Ω0 𝑛 − 𝑒 −𝑗Ω0 𝑛 ) ↔ 𝑿(Ω) = 𝜋𝑗(𝛿(Ω + Ω0 ) − 𝛿(Ω − Ω0 )) |Ω| < 𝜋
2𝑗
The Sequences 𝐱[𝑛] = 𝛿[𝑛 + 𝑛0] ± 𝛿[𝑛 − 𝑛0]: What is the Fourier transform of a pair of
impulses located at ±𝑛0? Applying the definition yields:
𝔽(𝛿[𝑛 + 𝑛0] + 𝛿[𝑛 − 𝑛0]) = 𝑒^{𝑗Ω𝑛0} + 𝑒^{−𝑗Ω𝑛0} = 2cos(Ω𝑛0) and 𝔽(𝛿[𝑛 + 𝑛0] − 𝛿[𝑛 − 𝑛0]) = 2𝑗sin(Ω𝑛0)
Rectangular Pulse Sequence: Consider the pulse
𝐱[𝑛] = { 1 for |𝑛| ≤ 𝑁 ; 0 for |𝑛| > 𝑁 }
To compute the Fourier transform of a pulse we apply the definition of the Fourier
transform:
𝑿(Ω) = ∑_{𝑛=−𝑁}^{𝑁} 𝑒^{−𝑗Ω𝑛}
Using the geometric-series sum
∑_{𝑛=−𝑁}^{𝑁} 𝑟ⁿ = (𝑟^{−𝑁} − 𝑟^{𝑁+1})/(1 − 𝑟) for 𝑟 ≠ 1, and 2𝑁 + 1 for 𝑟 = 1
we get
𝑿(Ω) = ∑_{𝑛=−𝑁}^{𝑁} 𝑒^{−𝑗Ω𝑛} = (𝑒^{−𝑗Ω(𝑁+1)} − 𝑒^{𝑗Ω𝑁})/(𝑒^{−𝑗Ω} − 1) = sin(Ω(𝑁 + 1/2))/sin(Ω/2) for Ω ≠ 0, and 2𝑁 + 1 for Ω = 0
because
(𝑒^{−𝑗Ω(𝑁+1)} − 𝑒^{𝑗Ω𝑁})/(𝑒^{−𝑗Ω} − 1) = 𝑒^{−𝑗Ω/2}(𝑒^{−𝑗Ω(𝑁+1/2)} − 𝑒^{𝑗Ω(𝑁+1/2)})/(𝑒^{−𝑗Ω/2}(𝑒^{−𝑗Ω/2} − 𝑒^{𝑗Ω/2})) = sin(Ω(𝑁 + 1/2))/sin(Ω/2)
and since lim_{Ω→0} sin(Ω(𝑁 + 1/2))/sin(Ω/2) = (𝑁 + 1/2)/(1/2) = 2𝑁 + 1, the two cases agree
in the limit. Hence
𝐱[𝑛] = { 1 for |𝑛| ≤ 𝑁 ; 0 for |𝑛| > 𝑁 } ↔ 𝑿(Ω) = sin(Ω(𝑁 + 1/2))/sin(Ω/2)
Dually,
𝐱[𝑛] = sin(𝑊𝑛)/𝜋𝑛, 0 < 𝑊 < 𝜋 ↔ 𝑿(Ω) = { 1 for 0 ≤ |Ω| ≤ 𝑊 ; 0 for 𝑊 < |Ω| ≤ 𝜋 }
III.III DTFT Convergence Issues: Recall the DTFT analysis and synthesis equations
𝑿(Ω) = ∑_{𝑛=−∞}^{∞} 𝐱[𝑛]𝑒^{−𝑗Ω𝑛} and 𝐱[𝑛] = (1/2𝜋)∫_{2𝜋} 𝑿(Ω)𝑒^{𝑗Ω𝑛}𝑑Ω
There are no convergence issues associated with the synthesis equation, since the integral
is over a finite interval. For example, unlike the CTFT case, the Gibbs phenomenon is
absent when (1/2𝜋)∫_{2𝜋} 𝑿(Ω)𝑒^{𝑗Ω𝑛}𝑑Ω is used.
III.IV Properties of the (DTFT) Fourier Transform Basic properties of the Fourier
transform are presented in the following. There are many similarities to and several
differences from the continuous-time case. Many of these properties are also similar to
those of the z-transform when the ROC of 𝑿(𝑧) includes the unit circle.
❷ Linearity: Since the DTFT is an infinite sum, it should come as no surprise that it is a
linear operator. Nevertheless, it is a helpful property to know. Suppose we have the
𝔽 𝔽
following DTFT pairs: 𝐱1 [𝑛] ↔ 𝑿1 (Ω) & 𝐱2 [𝑛] ↔ 𝑿2 (Ω). Then by the linearity of the
DTFT we have that, for any constants 𝛼1 and 𝛼2 :
𝔽
𝛼1 𝐱1 [𝑛] + 𝛼2 𝐱 2 [𝑛] ↔ 𝛼1 𝑿1 (Ω) + 𝛼2 𝑿2 (Ω)
❸ DTFT Frequencies: Because 𝑿(Ω) is essentially the inner product of a signal 𝐱[𝑛] with
the signal 𝑒 𝑗Ω𝑛 , we can say that 𝑿(Ω) tells us how strongly the signal 𝑒 𝑗Ω𝑛 appears in 𝐱[𝑛].
𝑿(Ω), then, is a measure of the "frequency content" of the signal 𝐱[𝑛]. Consider the plot
below of the DTFT of some signal 𝐱[𝑛]:
This plot shows us that the signal 𝐱[𝑛] has a significant amount of low-frequency content
(frequencies around 𝜔 = 0), and less high-frequency content (frequencies around 𝜔 = ±𝜋 --
remember that the DTFT is 2π periodic).
❹ The DTFT and Time Shifts: If a signal is shifted in time, what effect might this have on
𝔽
its DTFT? Supposing 𝐱[𝑛] and 𝑿(Ω) are a DTFT pair, we have that: 𝐱[𝑛 − 𝑛0 ] ↔ 𝑒 −𝑗Ω𝑛0 𝑿(Ω)
So shifting a signal in time corresponds to a modulation (multiplication by a complex
sinusoid) in frequency. We can use the DTFT formula to prove this relationship, by way of a
change of variables 𝑚 = 𝑛 − 𝑛0 :
𝔽(𝐱[𝑛 − 𝑛0]) = ∑_{𝑛=−∞}^{∞} 𝐱[𝑛 − 𝑛0]𝑒^{−𝑗Ω𝑛} = ∑_{𝑚=−∞}^{∞} 𝐱[𝑚]𝑒^{−𝑗Ω(𝑚+𝑛0)} = 𝑒^{−𝑗Ω𝑛0}𝑿(Ω)
❺ The DTFT and Time Modulation: We saw above how a shift in time corresponds to
modulation in frequency. What do you suppose happens when a signal is modulated in
time? If you guessed that it is shifted in frequency, you're right! If a signal 𝐱[𝑛] has a DTFT
𝔽
of 𝑿(Ω), then we have this DTFT pair: 𝑒 𝑗Ω0 𝑛 𝐱[𝑛] ↔ 𝑿(Ω − Ω0 ) Below is the proof:
𝔽(𝑒^{𝑗Ω0𝑛}𝐱[𝑛]) = ∑_{𝑛=−∞}^{∞} 𝑒^{𝑗Ω0𝑛}𝐱[𝑛]𝑒^{−𝑗Ω𝑛} = ∑_{𝑛=−∞}^{∞} 𝐱[𝑛]𝑒^{−𝑗(Ω−Ω0)𝑛} = 𝑿(Ω − Ω0)
❻ The DTFT and Convolution Suppose that the impulse response of an LTI system is
𝒉[𝑛], the input to the system is 𝐱[𝑛], and the output is 𝐲[𝑛]. Because the system is LTI,
these three signals have a special relationship: 𝐲[𝑛] = 𝐱[𝑛] ⋆ 𝒉[𝑛] = ∑∞
𝑘=−∞ 𝐱[𝑘]𝒉[𝑛 − 𝑘]. The
output 𝐲[𝑛] is the convolution of 𝐱[𝑛] with 𝒉[𝑛]. Just as with the other DTFT properties, it
turns out there is also a relationship in the frequency domain. Consider the DTFT of each
of those signals; call them 𝑯(Ω), 𝑿(Ω), and 𝒀(Ω). The convolution of the signals 𝐱[𝑛] and
𝒉[𝑛] in time corresponds to the multiplication of their DTFTs in frequency:
𝔽
𝐲[𝑛] = 𝐱[𝑛] ⋆ 𝒉[𝑛] ↔ 𝒀(Ω) = 𝑿(Ω)𝑯(Ω)
Proof: For the proof, we take the DTFT of 𝐲[𝑛], using a change of variables along the way:
∞ ∞ ∞
−𝑗Ω𝑛
𝒀(Ω) = ∑ 𝐱[𝑛] ⋆ 𝒉[𝑛]𝑒 = ∑ { ∑ 𝐱[𝑘]𝒉[𝑛 − 𝑘]} 𝑒 −𝑗Ω𝑛
𝑛=−∞ 𝑛=−∞ 𝑘=−∞
= ∑_{𝑘=−∞}^{∞} 𝐱[𝑘] ∑_{𝑛=−∞}^{∞} 𝒉[𝑛 − 𝑘]𝑒^{−𝑗Ω𝑛} = ∑_{𝑘=−∞}^{∞} 𝐱[𝑘]𝑒^{−𝑗Ω𝑘}𝑯(Ω) = 𝑿(Ω)𝑯(Ω)
This relationship is very important. It gives insight, showing us how LTI systems modify the
frequencies of input signals. It is also useful, because it gives us an alternative way of
finding the output of a system. We could take the DTFTs of the input and impulse
response, multiply them together, and then take the inverse DTFT of the result to find the
output. There are some cases where this process might be easier than finding the
convolution sum.
❼ The DTFT and Duality The duality property of a continuous-time Fourier transform is
𝔽
expressed as 𝑿(𝑡) ↔ 2𝜋𝐱(−𝜔). There is no discrete-time counterpart of this property.
However, there is a duality between the discrete-time Fourier transform and the
𝔽
continuous-time Fourier series. Let 𝐱[𝑛] ↔ 𝑿(Ω) = 𝑿(Ω + 2𝜋) = ∑∞𝑛=−∞ 𝐱[𝑛]𝑒
−𝑗Ω𝑛
. Since Ω is
a continuous variable, letting Ω = 𝑡 and 𝑛 = −𝑘 in the 𝑿(Ω) equation we get:
𝑿(𝑡) = ∑_{𝑘=−∞}^{∞} 𝐱[−𝑘]𝑒^{𝑗𝑘𝑡}
Since 𝑿(𝑡) is periodic with period 𝑇0 = 2𝜋 and the fundamental frequency Ω0 = 1, the last
equation indicates that the Fourier series coefficients of 𝑿(𝑡) will be 𝐱[−𝑘]. This duality
𝔽𝕊
relationship is denoted by 𝑿(𝑡) ↔ 𝑐𝑘 = 𝐱[−𝑘], where 𝔽𝕊 denotes the Fourier series and
𝑐𝑘 are its Fourier coefficients.
Remark: There are other symmetry relationships as well. For example, signals that are
purely imaginary and odd have DTFTs that are purely real and odd. These types of
symmetry are a result of a property of the complex exponentials which build up the DTFTs.
Any signal of the form 𝑒 𝑗Ω𝑛 is conjugate symmetric, meaning that its real part is even and
its imaginary part is odd. Additionally, for conjugate symmetric signals, their magnitude is
even and their phase is odd.
❽ Time Reversal and Conjugation: Time reversal of the signal causes angular frequency
reversal of the transform. This property will be useful when we consider symmetry
𝔽 𝔽
properties of the DTFT. 𝐱[𝑛] ↔ 𝑿(Ω) ⟹ 𝐱[−𝑛] ↔ 𝑿(−Ω).
Conjugation of the signal causes both conjugation and angular frequency reversal of the
transform. This property will also be useful when we consider symmetry properties of the
𝔽 𝔽
DTFT. 𝐱[𝑛] ↔ 𝑿(Ω) ⟹ 𝐱 ⋆ [𝑛] ↔ 𝑿⋆ (−Ω)
❾ Frequency Differentiation:
Proof: For the proof, we take the DTFT and then differentiate with respect to Ω:
∞ ∞
𝑑 𝑑 −𝑗Ω𝑛 𝔽 𝑑
𝑿(Ω) = ∑ 𝐱[𝑛] 𝑒 = ∑ −𝑗𝑛𝐱[𝑛]𝑒 −𝑗Ω𝑛 ⟹ 𝑛𝐱[𝑛] ↔ 𝑗 𝑿(Ω)
𝑑Ω 𝑑Ω 𝑑Ω
𝑛=−∞ 𝑛=−∞
❿ Differencing and Accumulation: The sequence 𝐱[𝑛] − 𝐱[𝑛 − 1] is called the first
difference sequence. Equation 𝔽(𝐱[𝑛] − 𝐱[𝑛 − 1]) = (1 − 𝑒 −𝑗Ω )𝑿(Ω) is easily obtained from the
𝔽
linearity and the time-shifting properties. 𝐲[𝑛] = 𝐱[𝑛] − 𝐱[𝑛 − 1] ↔ 𝒀(Ω) = (1 − 𝑒 −𝑗Ω )𝑿(Ω)
Note that accumulation 𝐲[𝑛] = ∑_{𝑘=−∞}^{𝑛} 𝐱[𝑘] is the discrete-time counterpart of integration.
This formula can be written in terms of the convolution of 𝐱[𝑛] with the step sequence 𝒖[𝑛],
𝐲[𝑛] = ∑_{𝑘=−∞}^{𝑛} 𝐱[𝑘] = 𝐱[𝑛] ⋆ 𝒖[𝑛]
That is,
𝒀(Ω) = 𝑿(Ω)𝑼(Ω) = 𝑿(Ω)(𝜋𝛿(Ω) + 1/(1 − 𝑒^{−𝑗Ω})) ⟺ 𝒀(Ω) = 𝔽(∑_{𝑘=−∞}^{𝑛} 𝐱[𝑘]) = 𝜋𝑿(0)𝛿(Ω) + 𝑿(Ω)/(1 − 𝑒^{−𝑗Ω})
The impulse term on the right-hand side of Eq. reflects the dc or average value that can
result from the accumulation.
⓬ Odd-Even Properties: If 𝐱[𝑛] is real, let 𝐱[𝑛] = 𝐱𝑒[𝑛] + 𝐱𝑜[𝑛], where 𝐱𝑒[𝑛] and 𝐱𝑜[𝑛] are the
even and odd components of 𝐱[𝑛], respectively; then 𝐱𝑒[𝑛] ↔ Re{𝑿(Ω)} and 𝐱𝑜[𝑛] ↔ 𝑗Im{𝑿(Ω)}.
Moreover, 𝑿(−Ω) = 𝑿⋆(Ω) is the necessary and sufficient condition for 𝐱[𝑛] to be real.
⓭ Parseval's Relations: The Parseval or the energy theorem for DTFT states that
∑_{𝑛=−∞}^{+∞} 𝐱1[𝑛]𝐱2[𝑛] = (1/2𝜋)∫_{2𝜋} 𝑿1(𝜃)𝑿2(−𝜃)𝑑𝜃  and  ∑_{𝑛=−∞}^{+∞} |𝐱[𝑛]|² = (1/2𝜋)∫_{2𝜋} |𝑿(𝜃)|²𝑑𝜃
Property                      Signal                      Fourier transform
                              𝐱[𝑛]                        𝐗(Ω)
                              𝐱1[𝑛]                       𝐗1(Ω)
                              𝐱2[𝑛]                       𝐗2(Ω)
Linearity                     𝛼𝐱1[𝑛] + 𝛽𝐱2[𝑛]            𝛼𝐗1(Ω) + 𝛽𝐗2(Ω)
Time shifting                 𝐱[𝑛 − 𝑛0]                   𝑒^{−𝑗Ω𝑛0}𝐗(Ω)
Differencing                  𝐱[𝑛] − 𝐱[𝑛 − 1]             (1 − 𝑒^{−𝑗Ω})𝐗(Ω)
Frequency shifting            𝑒^{𝑗Ω0𝑛}𝐱[𝑛]                𝐗(Ω − Ω0)
Conjugation                   𝐱⋆[𝑛]                       𝐗⋆(−Ω)
Time reversal                 𝐱[−𝑛]                       𝐗(−Ω)
Frequency differentiation     𝑛𝐱[𝑛]                       𝑗𝑑𝐗(Ω)/𝑑Ω
Accumulation                  ∑_{𝑚=−∞}^{𝑛}𝐱[𝑚]            𝐗(Ω)/(1 − 𝑒^{−𝑗Ω}) + 𝜋𝐗(0)𝛿(Ω)
Multiplication                𝐱1[𝑛]𝐱2[𝑛]                  (1/2𝜋)𝐗1(Ω) ⊗ 𝐗2(Ω)
Convolution                   𝐱1[𝑛] ⋆ 𝐱2[𝑛]               𝐗1(Ω)𝐗2(Ω)
Parseval's theorem:
∑_{𝑛=−∞}^{+∞} 𝐱1[𝑛]𝐱2[𝑛] = (1/2𝜋)∫_{2𝜋} 𝑿1(𝜃)𝑿2(−𝜃)𝑑𝜃
▪ The Fourier series (FS) is discrete in the frequency domain, since it is a discrete set of
exponentials (integer multiples of 𝛺0) that makes up the signal: only a discrete set of
frequencies is required to construct a periodic signal.
▪ DTFT only exists for sequences that are absolutely summable, and is periodic with 2𝜋
▪ The DTFT of the impulse response is the frequency response of the system, and is
important in the filter design. If you want to design a filter that blocks a certain frequency
𝜔𝑐 , then we design the system such that 𝑯(𝜔𝑐 ) = 0; and if we want the system to pass a
certain frequency 𝜔𝑝 pass then we make sure that 𝑯(𝜔𝑝 ) = 1.
▪ The impulse response of an ideal LPF is infinitely long. ⟹ This is an IIR filter. In fact ℎ[𝑛]
is not absolutely summable ⟹ its DTFT cannot be computed ⟹ an ideal ℎ[𝑛] cannot be
realized! (Example is the LPF 𝑯(Ω) = rect(Ω) ⟹ ℎ[𝑛] = sinc[𝑛]). One possible solution is to
truncate ℎ[𝑛], say with a window function, and then take its DTFT to obtain the frequency
response of a realizable FIR filter.
▪ The Fourier transform is really not used in stability analysis. This is because with the
other transforms (Laplace and Z-transform), each can be totally described by its poles and
zeros and the residues at the poles, as the domains are two-dimensional. Poles and zeros
are not applicable to understanding and application of the Fourier transform, because its
domain is one-dimensional.
▪ The Fourier transform is a tool which allows us to represent a signal f(𝑡) as a continuous
sum of exponentials of the form 𝑒 𝑗𝜔𝑡 , whose frequencies are restricted to the imaginary axis
in the complex plane (𝑠=𝑗𝜔). As we saw previously, such a representation is quite valuable
in the analysis and processing of signals. In the area of system analysis, however, the use
of Fourier transform leaves much to be desired. First, the Fourier transform exists only for
a restricted class of signals and, therefore, cannot be used for such inputs as growing
exponentials. Second, the Fourier transform cannot be used easily to analyze unstable or
even marginally stable systems.
Solved Problems:
Exercise 1: Consider the signal
2𝜋𝑛 2𝜋𝑛
𝐱[𝑛] = cos ( ) + sin ( )
3 7
1. Determine the period of this signal
2. Determine the Fourier series representation
Ans: 1. If we let 𝐱[𝑛] = 𝐱1 [𝑛] + 𝐱 2 [𝑛] such that the period of each is 𝑁1 = 3𝑚 and 𝑁2 = 7ℓ then
the total period is the (LCM) least common multiple of the two periods 𝑁 = 3 × 7 = 21, this is
the fundamental period of 𝐱[𝑛].
2. The Fourier series representation:
𝐱[𝑛] = (1/2){𝑒^{𝑗2𝜋𝑛/3} + 𝑒^{−𝑗2𝜋𝑛/3}} + (1/2𝑗){𝑒^{𝑗2𝜋𝑛/7} − 𝑒^{−𝑗2𝜋𝑛/7}}
= (1/2)𝑒^{𝑗7(2𝜋/21)𝑛} + (1/2)𝑒^{−𝑗7(2𝜋/21)𝑛} + (1/2𝑗)𝑒^{𝑗3(2𝜋/21)𝑛} − (1/2𝑗)𝑒^{−𝑗3(2𝜋/21)𝑛}
𝑐−3 = −1/2𝑗, 𝑐3 = 1/2𝑗, 𝑐7 = 𝑐−7 = 1/2, and the others are zero
Exercise 2: Determine the Fourier series of the output signal 𝐲[𝑛] of the LTI system
𝐱[𝑛] = ∑_{k=−∞}^{∞} δ(𝑛 − 4𝑘) ⟶ 𝐡[𝑛] = (1/2)ⁿ ⟶ 𝐲[𝑛]
𝐱[𝑛] ↔ 𝑎_k ⟹ 𝐲[𝑛] ↔ 𝑏_k = 𝑎_k 𝐇(Ω)
But the problem here is the existence of 𝐇(Ω): 𝐡[𝑛] is not absolutely summable, since the two-sided sequence (1/2)ⁿ grows without bound as 𝑛 → −∞, so
∑_{n=−∞}^{∞} |𝐡[𝑛]| = ∑_{n=−∞}^{∞} (1/2)ⁿ diverges ⟹ 𝐇(Ω) does not exist.
Exercise 3: Determine the Fourier series representation of the signal
𝐱[𝑛] = sin(2π𝑛/3) cos(π𝑛/2)
Ans: We know that sin(α)cos(β) = [sin(α + β) + sin(α − β)]/2, so we obtain
𝐱[𝑛] = sin(2π𝑛/3)cos(π𝑛/2) = (1/2)sin(2π(7𝑛)/12) + (1/2)sin(2π𝑛/12)
The total period is the least common multiple of the two periods: 𝑁 = 12. As we have seen before, we can use the Euler identity for sine and cosine to get
𝑐₋₁ = −1/4j, 𝑐₁ = 1/4j, 𝑐₇ = 1/4j, 𝑐₋₇ = −1/4j, and the other coefficients are zero.
𝑐₀ = 1, 𝑐₁ = (3/2 − j/2), 𝑐₋₁ = (3/2 + j/2), 𝑐₂ = j/2, 𝑐₋₂ = −j/2, and the other coefficients are zero.
Exercise 5: Determine the Fourier series of the signals
𝐱₁[𝑛] = cos(π𝑛/4)   and   𝐱₂[𝑛] = cos²(π𝑛/8)
Ans: The first signal is periodic with period 𝑁 = 8; using the Euler identity for cosine we get
𝐱₁[𝑛] = cos(π𝑛/4) = (1/2){e^{j2π𝑛/8} + e^{−j2π𝑛/8}} ⟹ 𝑐₁ = 1/2, 𝑐₋₁ = 1/2
𝐱₂[𝑛] = cos²(π𝑛/8) = 1/2 + (1/2)cos(π𝑛/4) = (1/2)(1 + 𝐱₁[𝑛])
Therefore 𝑐₀ = 1/2, 𝑐₁ = 1/4, 𝑐₋₁ = 1/4.
Exercise 6: Determine the Fourier series of the output signal 𝐲[𝑛] of the LTI system
𝐱[𝑛] = ∑_{k=−∞}^{∞} δ(𝑛 − 4𝑘) ⟶ 𝐡[𝑛] = (1/2)^{|𝑛|} ⟶ 𝐲[𝑛]
𝐱[𝑛] ↔ 𝑎_k = 1/4 ⟹ 𝐲[𝑛] ↔ 𝑏_k = 𝑎_k 𝐇(Ω) with Ω = 2π𝑘/4
Here 𝐇(Ω) does exist, since 𝐡[𝑛] is absolutely summable:
∑_{n=−∞}^{∞} |𝐡[𝑛]| = ∑_{n=−∞}^{∞} (1/2)^{|𝑛|} = ∑_{n=−∞}^{−1} (1/2)^{−𝑛} + ∑_{n=0}^{∞} (1/2)ⁿ = (2 ∑_{n=0}^{∞} (1/2)ⁿ) − 1 = 3 < ∞
Notice that
𝐡[𝑛] = (1/2)^{|𝑛|} = (1/2)ⁿ𝑢[𝑛] + (1/2)^{−𝑛}𝑢[−𝑛 − 1] ⟹ 𝐇(Ω) = 2/(2 − e^{−jΩ}) − 1/(1 − 2e^{−jΩ}) = −3e^{−jΩ}/(2 − 5e^{−jΩ} + 2e^{−2jΩ})
𝑏_k = 𝑎_k 𝐇(Ω)|_{Ω=πk/2} = −3e^{−jπ𝑘/2}/(8 − 20e^{−jπ𝑘/2} + 8e^{−jπ𝑘}) = {3/20, 1/12, 3/20, 3/4}, where 𝑘 = 1:4
Exercise 7: Given an LTI discrete system described by its difference equation
𝐲[𝑛] − (3/4)𝐲[𝑛 − 1] + (1/8)𝐲[𝑛 − 2] = 2𝐱[𝑛]
find its impulse response using the Discrete-time Fourier transform (DTFT).
Ans: We apply the DTFT to both sides:
(1 − (3/4)e^{−jΩ} + (1/8)e^{−2jΩ})𝒀(Ω) = 2𝑿(Ω) ⟹ 𝑯(Ω) = 𝒀(Ω)/𝑿(Ω) = 16/(8 − 6e^{−jΩ} + e^{−2jΩ})
𝑯(Ω) = 2/((1 − (1/2)e^{−jΩ})(1 − (1/4)e^{−jΩ})) = 4/(1 − (1/2)e^{−jΩ}) − 2/(1 − (1/4)e^{−jΩ})
𝐡[𝑛] = 4(1/2)ⁿ𝑢[𝑛] − 2(1/4)ⁿ𝑢[𝑛] = (4 − 2(1/2)ⁿ)(1/2)ⁿ𝑢[𝑛]
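As a quick numerical check of this result (our own addition), the sketch below compares the closed-form 𝐡[𝑛] with the impulse response returned by MATLAB's impz for the same coefficients:
b = 2; a = [1 -3/4 1/8];              % coefficients of the difference equation above
n = (0:9)';
h_closed = 4*(1/2).^n - 2*(1/4).^n;   % closed-form impulse response
h_num = impz(b, a, 10);               % numerical impulse response
norm(h_closed - h_num)                % should be ~0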
Exercise 8: Determine the Discrete-time Fourier transform of the signals
▪ 𝐱₁[𝑛] = (1/2)^{𝑛−1}𝑢[𝑛 − 1]   ▪ 𝐱₂[𝑛] = (1/2)^{|𝑛−1|}
Ans:
𝑿₁(Ω) = ∑_{n=−∞}^{+∞} (1/2)^{𝑛−1}𝑢[𝑛 − 1]e^{−jΩ𝑛} = e^{−jΩ} ∑_{m=0}^{∞} ((1/2)e^{−jΩ})^{m} = e^{−jΩ}/(1 − (1/2)e^{−jΩ})
For 𝐱₂[𝑛], by the time-shift property applied to the DTFT of (1/2)^{|𝑛|} (computed in Exercise 25 below): 𝑿₂(Ω) = e^{−jΩ} ⋅ 3/(5 − 4cos(Ω)).
Exercise 9: Determine the inverse DTFT of
𝑿(Ω) = { 2j for 0 ≤ Ω ≤ π;  −2j for −π < Ω < 0 }
Ans: We use the definition
𝐱[𝑛] = (1/2π)∫₀^π 2j e^{jΩ𝑛}dΩ − (1/2π)∫_{−π}^0 2j e^{jΩ𝑛}dΩ = (1/2π)[2e^{jΩ𝑛}/𝑛]_{Ω=0}^{Ω=π} − (1/2π)[2e^{jΩ𝑛}/𝑛]_{Ω=−π}^{Ω=0}
     = (1/𝑛π)(e^{jπ𝑛} + e^{−jπ𝑛} − 2) = (1/𝑛π)(e^{jπ𝑛/2} − e^{−jπ𝑛/2})² = −4sin²(π𝑛/2)/(𝑛π)
Exercise 11: Determine a difference equation that describes the given Discrete-LTI system
𝐱[𝑛] = (4/5)ⁿ𝑢[𝑛] ⟶ 𝐡[𝑛] = ? ⟶ 𝐲[𝑛] = 𝑛(4/5)ⁿ𝑢[𝑛]
Ans: Applying the DTFT to the input and output we get
𝑿(Ω) = 1/(1 − (4/5)e^{−jΩ})  and  𝒀(Ω) = j d𝑿(Ω)/dΩ = (4/5)e^{−jΩ}/(1 − (4/5)e^{−jΩ})²
𝑯(Ω) = 𝒀(Ω)/𝑿(Ω) = (4/5)e^{−jΩ}/(1 − (4/5)e^{−jΩ}) ⟹ 𝐲[𝑛] − (4/5)𝐲[𝑛 − 1] = (4/5)𝐱[𝑛 − 1]
Exercise 12: Determine the inverse Discrete-time Fourier transform of the given 𝒀(Ω)
𝒀(Ω) = 1/(1 − 𝑎e^{−jΩ})²
Ans: Let us define
𝑿(Ω) = 1/(1 − 𝑎e^{−jΩ}) ⟹ j d𝑿(Ω)/dΩ = 𝑎e^{−jΩ}/(1 − 𝑎e^{−jΩ})² ⟹ 𝑿(Ω) + j d𝑿(Ω)/dΩ = 1/(1 − 𝑎e^{−jΩ})² = 𝒀(Ω)
𝒀(Ω) = 𝑿(Ω) + j d𝑿(Ω)/dΩ ⟺ 𝑦[𝑛] = 𝑥[𝑛] + 𝑛𝑥[𝑛] = (𝑛 + 1)𝑥[𝑛]
But it is well known that 𝑥[𝑛] = 𝑎ⁿ𝑢[𝑛], which means that 𝑦[𝑛] = (𝑛 + 1)𝑎ⁿ𝑢[𝑛].
Exercise 13: Determine the inverse Discrete-time Fourier transform of 𝑿(Ω)
𝑿(Ω) = cos²(Ω) = 1/2 + (1/4)e^{−2jΩ} + (1/4)e^{2jΩ} ⟹ 𝐱[𝑛] = (1/2)δ[𝑛] + (1/4)δ[𝑛 − 2] + (1/4)δ[𝑛 + 2]
Exercise 14: Consider the signal shown in the figure, and let 𝑿(Ω) be its DTFT. Since
𝐱[𝑛] = (1/2π)∫_{−π}^{π} 𝑿(Ω)e^{jΩ𝑛}dΩ ⟹ ∫_{−π}^{π} 𝑿(Ω)dΩ = 2π𝐱[0] = 4π
Exercise 15: Consider a causal Discrete-time LTI system with 2𝐲[𝑛] + 𝐲[𝑛 − 1] = 2𝐱[𝑛]
a- Determine 𝑯(Ω)
b- Determine 𝐲[𝑛] for the inputs { 1. 𝐱[𝑛] = δ[𝑛] + (1/2)δ[𝑛 − 1];  2. 𝑿(Ω) = 1 + 2e^{−3jΩ} }
Ans:
a- (1 + (1/2)e^{−jΩ})𝒀(Ω) = 𝑿(Ω) ⟹ 𝑯(Ω) = 1/(1 + (1/2)e^{−jΩ}) ⟹ 𝐡[𝑛] = (−1/2)ⁿ𝑢[𝑛]
b- We start with the first one: 𝐲[𝑛] = 𝐱[𝑛] ⋆ 𝐡[𝑛] = δ[𝑛] ⋆ 𝐡[𝑛] + (1/2)δ[𝑛 − 1] ⋆ 𝐡[𝑛]
𝐲[𝑛] = (−1/2)ⁿ𝑢[𝑛] + (1/2)(−1/2)^{𝑛−1}𝑢[𝑛 − 1] = (−1/2)ⁿ{𝑢[𝑛] − 𝑢[𝑛 − 1]} = δ[𝑛]
Now we deal with the second signal: 𝑿(Ω) = 1 + 2e^{−3jΩ} ⟹ 𝐱[𝑛] = δ[𝑛] + 2δ[𝑛 − 3]
𝐲[𝑛] = 𝐱[𝑛] ⋆ 𝐡[𝑛] = (−1/2)ⁿ𝑢[𝑛] + 2(−1/2)^{𝑛−3}𝑢[𝑛 − 3]
Exercise 16: Given a Discrete-time LTI system with 𝐡[𝑛] = 𝑛(1/2)ⁿ𝑢[𝑛] and 𝐱[𝑛] = 1 ∀𝑛.
❶ Is the system stable? ❷ What is the output of the system?
Ans:
❶ ∑_{n=−∞}^{+∞} |𝐡[𝑛]| = ∑_{n=0}^{+∞} 𝑛(1/2)ⁿ = (1/2)/(1 − 1/2)² = 2 < ∞ ⟹ Stable system
❷ 𝐲[𝑛] = 𝐱[𝑛] ⋆ 𝐡[𝑛] = ∑_{k=−∞}^{+∞} 𝐱[𝑛 − 𝑘]𝐡[𝑘] = ∑_{k=−∞}^{+∞} 𝐡[𝑘] = 2 ∀𝑛
𝐱[𝑛] = 1 ∀𝑛 ⟶ 𝐡[𝑛] = 𝑛(1/2)ⁿ𝑢[𝑛] ⟶ 𝐲[𝑛] = 2 ∀𝑛
Exercise 17: The Fourier transforms of the input and output of a discrete system are related by
𝒀(Ω) = 2𝑿(Ω) + e^{−jΩ}𝑿(Ω) − d𝑿(Ω)/dΩ
❶ Is the system linear? Time invariant? (Justify)
❷ Find the impulse response of this system; is it stable?
Ans:
𝒀(Ω) = 2𝑿(Ω) + e^{−jΩ}𝑿(Ω) − d𝑿(Ω)/dΩ ⟺ 𝐲[𝑛] = 2𝐱[𝑛] + 𝐱[𝑛 − 1] + j𝑛𝐱[𝑛]
❶ The system is linear because it is described by a linear difference equation (DE), but it is not time invariant, because the DE contains a time-varying coefficient (the term j𝑛).
❷ To find the impulse response we replace 𝐱[𝑛] by the unit impulse δ[𝑛] in the difference equation and see what happens at the output: 𝐡[𝑛] = 2δ[𝑛] + δ[𝑛 − 1] + j𝑛δ[𝑛] = 2δ[𝑛] + δ[𝑛 − 1], since 𝑛δ[𝑛] = 0.
This impulse response does not characterize the system, because the system is time-varying.
Exercise 18: Determine the inverse Discrete-time Fourier transform of 𝑿(Ω)
𝑿(Ω) = 6e^{−jΩ}/(6 + e^{−jΩ} − e^{−2jΩ})
Ans: Using the partial fraction expansion we get
𝑿(Ω) = (6/5)[1/(1 − (1/3)e^{−jΩ}) − 1/(1 + (1/2)e^{−jΩ})] ⟹ 𝐱[𝑛] = (6/5)((1/3)ⁿ − (−1/2)ⁿ)𝑢[𝑛]
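The partial-fraction step can be checked numerically; the following sketch (our own addition) applies MATLAB's residuez to the rational function above, which should return residues ±6/5 at the poles 1/3 and −1/2, up to ordering:
b = [0 6];                  % numerator of X: 6*z^{-1}
a = [6 1 -1];               % denominator: 6 + z^{-1} - z^{-2}
[r, p, k] = residuez(b, a)  % expect r = [6/5; -6/5], p = [1/3; -1/2] (order may differ)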
Exercise 19: Let a system have the frequency response 𝑯(Ω) = −e^{jΩ} + 2e^{−2jΩ} + e^{4jΩ}. For the given input 𝑿(Ω),
Ans:
𝒀(Ω) = 𝑯(Ω)𝑿(Ω) = 4e^{−5jΩ} − 2e^{−3jΩ} + 6e^{−jΩ} + 1 + e^{jΩ} − 3e^{2jΩ} − e^{3jΩ} + e^{4jΩ} + 3e^{5jΩ}
Exercise 20: Find the impulse response of the system below; is the system stable, causal or not?
Ans: Applying the Fourier transform (DTFT) we get
𝑯(Ω) = 𝒀(Ω)/𝑿(Ω) = e^{jΩ}/(e^{2jΩ} − e^{jΩ} − 1) ⟹ 𝑯(Ω)/e^{jΩ} = 1/(e^{2jΩ} − e^{jΩ} − 1) = 𝑘₁/(e^{jΩ} − α) + 𝑘₂/(e^{jΩ} − β)
𝑘₁ = −𝑘₂ = 1/(α − β),  α = (1 + √5)/2,  β = (1 − √5)/2
𝑯(Ω) = (1/(α − β)){e^{jΩ}/(e^{jΩ} − α) − e^{jΩ}/(e^{jΩ} − β)} ⟹ 𝐡[𝑛] = ((αⁿ − βⁿ)/(α − β))𝑢[𝑛]
The system is causal, but unstable because one pole (α ≈ 1.618) is outside the unit circle.
Exercise 21: Consider the impulse response signal shown in the figure
𝐡[𝑛] = { 1 for 𝑛 = 0;  2 for 𝑛 = ±1;  0 elsewhere } = δ[𝑛] + 2(δ[𝑛 + 1] + δ[𝑛 − 1])
❶ The system is non-causal because 𝐡[𝑛] ≠ 0 for 𝑛 < 0. The system is stable because 𝐡[𝑛] is absolutely summable:
𝑯(Ω) = 2e^{−jΩ} + 1 + 2e^{jΩ} = (1/e^{jΩ})(2e^{2jΩ} + e^{jΩ} + 2) ⟹ ∑_{−∞}^{∞} |𝐡[𝑛]| = 𝑯(0) = 5
❷ The output of the system for the pulse 𝐱[𝑛] = 𝑢[𝑛] − 𝑢[𝑛 − 5] is
𝐲[𝑛] = 𝐱[𝑛] ⋆ 𝐡[𝑛] = 𝐱[𝑛] ⋆ (δ[𝑛] + 2(δ[𝑛 + 1] + δ[𝑛 − 1])) = 𝐱[𝑛] + 2𝐱[𝑛 − 1] + 2𝐱[𝑛 + 1]
     = 𝑢[𝑛] + 2(𝑢[𝑛 + 1] + 𝑢[𝑛 − 1]) − 𝑢[𝑛 − 5] − 2𝑢[𝑛 − 4] − 2𝑢[𝑛 − 6]
Exercise 23: ❶ Convolve the signals 𝐱[𝑛] and 𝐡[𝑛] shown in the figure below (using the DTFT).
(Figure: stem plots of the two finite sequences 𝐱[𝑛] and 𝐡[𝑛] to be convolved.)
❷ Let 𝐡[𝑛] be the impulse response of an LTI system; the question is: is the system causal, stable?
❸ Find the output accumulation ∑_{−∞}^{+∞} 𝐲_new[𝑛] of this system, if the new input is 𝐱_new[𝑛] = 𝐱[𝑛 − 1].
We know that
𝒀(Ω) = 𝑿(Ω)𝑯(Ω) ⟹ 𝒀(Ω) = −e^{4jΩ} + 3e^{3jΩ} − 3e^{2jΩ} + 5e^{jΩ} − 6 + 5e^{−jΩ} − 3e^{−2jΩ} + 3e^{−3jΩ} − e^{−4jΩ}
❷ We have seen that 𝑯(Ω) = −e^{2jΩ} + 2e^{jΩ} + 2e^{−jΩ} − e^{−2jΩ}, i.e. 𝐡[𝑛] = −δ[𝑛 + 2] + 2δ[𝑛 + 1] + 2δ[𝑛 − 1] − δ[𝑛 − 2]: the system is non-causal (𝐡[𝑛] ≠ 0 for 𝑛 < 0) but stable (finite-length 𝐡[𝑛]).
❸ The new output is 𝒀_new(Ω) = 𝑿_new(Ω)𝑯(Ω) = (e^{−jΩ}𝑿(Ω))𝑯(Ω) = e^{−jΩ}𝒀(Ω), because of time invariance:
𝐱[𝑛] ⟶ 𝐡[𝑛] ⟶ 𝐲[𝑛] ⟹ 𝐱_new[𝑛] = 𝐱[𝑛 − 1] ⟶ 𝐡[𝑛] ⟶ 𝐲_new[𝑛] = 𝐲[𝑛 − 1]
Therefore ∑_{−∞}^{+∞} 𝐲_new[𝑛] = 𝒀_new(0) = 𝒀(0) = 2.
Exercise 24: Given
𝐱[𝑛] = 1 ∀𝑛 ⟶ 𝐡[𝑛] = { (1/2)ⁿ for 𝑛 ≥ 0;  2ⁿ for 𝑛 < 0 } ⟶ 𝐲[𝑛] = ∑_{k=−∞}^{+∞} 𝐡[𝑘] = 3 ∀𝑛
In this exercise we cannot use the DTFT directly on 𝐱[𝑛], because 𝐱[𝑛] is not absolutely summable.
Exercise 25: Given
𝐡[𝑛] = (1/2)^{|𝑛|}   and   𝐱[𝑛] = sin(3π𝑛/4)
find the Fourier series representation of 𝐲[𝑛] = 𝐱[𝑛] ⋆ 𝐡[𝑛].
Ans:
𝐲[𝑛] = 𝐱[𝑛] ⋆ 𝐡[𝑛] = ∑_{r=−∞}^{+∞} 𝐱[𝑛 − 𝑟]𝐡[𝑟] = ∑_{r=−∞}^{+∞} (∑_{⟨N⟩} 𝑎_k e^{jk(3π/4)(𝑛−𝑟)}) 𝐡[𝑟]
⟹ 𝑏_k = 𝑎_k 𝑯(Ω) with Ω = 3π𝑘/4
Since 𝐡[𝑛] is absolutely summable, 𝑯(Ω) exists, so we can compute it:
𝑯(Ω) = ∑_{n=−∞}^{+∞} 𝐡[𝑛]e^{−jΩ𝑛} = ∑_{n=−∞}^{−1} 2ⁿe^{−jΩ𝑛} + ∑_{n=0}^{+∞} (1/2)ⁿe^{−jΩ𝑛} = −1 + ∑_{n=0}^{+∞} (1/2)ⁿe^{jΩ𝑛} + ∑_{n=0}^{+∞} (1/2)ⁿe^{−jΩ𝑛}
     = 1/(1 − (1/2)e^{−jΩ}) + 1/(1 − (1/2)e^{jΩ}) − 1 = 3/(5 − 4cos(Ω))
Let us compute the Fourier series coefficients of 𝐱[𝑛]. Since 2π/(3π/4) = 8/3, the smallest integer period of 𝐱[𝑛] is 𝑁 = 8. Indexing the harmonics in multiples of 3π/4:
𝐱[𝑛] = sin(3π𝑛/4) = (1/2j)e^{j3π𝑛/4} − (1/2j)e^{−j3π𝑛/4} ⟹ { 𝑎₁ = 1/2j, 𝑎₋₁ = −1/2j } and the others are zero
𝑏_k = 𝑎_k 𝑯(3π𝑘/4) ⟹ 𝑏₋₁ = 3j/(10 + 4√2)  and  𝑏₁ = −3j/(10 + 4√2)
Exercise 26: Given an LTI system described by 𝐡[𝑛] = (0.5)ⁿ𝑢[𝑛], compute the output of this system for ❶ 𝐱[𝑛] = (3/4)ⁿ𝑢[𝑛] ❷ 𝐱[𝑛] = (𝑛 + 1)(1/4)ⁿ𝑢[𝑛]
Ans:
❶ 𝒀(Ω) = 𝑯(Ω)𝑿(Ω) = (1 − 0.5e^{−jΩ})⁻¹(1 − 0.75e^{−jΩ})⁻¹ = 3(1 − 0.75e^{−jΩ})⁻¹ − 2(1 − 0.5e^{−jΩ})⁻¹
⟹ 𝐲[𝑛] = (3(3/4)ⁿ − 2(1/2)ⁿ)𝑢[𝑛]
❷ From Exercise 12, (𝑛 + 1)(1/4)ⁿ𝑢[𝑛] ↔ 1/(1 − (1/4)e^{−jΩ})²:
𝑿(Ω) = 1/(1 − (1/4)e^{−jΩ}) + (1/4)e^{−jΩ}/(1 − (1/4)e^{−jΩ})² = 1/(1 − (1/4)e^{−jΩ})²,   𝑯(Ω) = 1/(1 − (1/2)e^{−jΩ})
𝒀(Ω) = 𝑯(Ω)𝑿(Ω) = 1/((1 − (1/4)e^{−jΩ})²(1 − (1/2)e^{−jΩ})) = 4/(1 − (1/2)e^{−jΩ}) − 2/(1 − (1/4)e^{−jΩ}) − 1/(1 − (1/4)e^{−jΩ})²
𝐲[𝑛] = (4(1/2)ⁿ − 2(1/4)ⁿ − (𝑛 + 1)(1/4)ⁿ)𝑢[𝑛] = (4(1/2)ⁿ − (𝑛 + 3)(1/4)ⁿ)𝑢[𝑛]
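For part ❶ the closed-form output is easy to confirm with MATLAB's filter; this short check is our own addition:
b = 1; a = [1 -0.5];                  % H(z) of h[n] = (0.5)^n u[n]
n = 0:14;
y_num = filter(b, a, (3/4).^n);       % drive the system with x[n] = (3/4)^n u[n]
y_closed = 3*(3/4).^n - 2*(1/2).^n;   % closed-form output derived above
norm(y_num - y_closed)                % should be ~0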
Exercise 27: Given
𝑯(Ω) = (1 − 2e^{−jΩ})/(1 − (1/4)e^{−jΩ})
find 𝐡[𝑛] and realize the system.
Ans:
𝑯(Ω) = 1/(1 − (1/4)e^{−jΩ}) − 2e^{−jΩ}/(1 − (1/4)e^{−jΩ}) ⟹ 𝐡[𝑛] = (1/4)ⁿ𝑢[𝑛] − 2(1/4)^{𝑛−1}𝑢[𝑛 − 1] = δ[𝑛] − 7(1/4)ⁿ𝑢[𝑛 − 1]
Exercise 28: Given
𝑯(Ω) = (1 − (7/4)e^{−jΩ} − (1/2)e^{−2jΩ})/(1 + (1/4)e^{−jΩ} − (1/8)e^{−2jΩ})
find 𝐡[𝑛] (in two different ways).
Ans: Extracting the direct term first and expanding in partial fractions, noting that 1 + (1/4)e^{−jΩ} − (1/8)e^{−2jΩ} = (1 + (1/2)e^{−jΩ})(1 − (1/4)e^{−jΩ}):
𝑯(Ω) = 4 + (5/3)/(1 + (1/2)e^{−jΩ}) − (14/3)/(1 − (1/4)e^{−jΩ}) ⟹ 𝐡[𝑛] = 4δ[𝑛] + ((5/3)(−1/2)ⁿ − (14/3)(1/4)ⁿ)𝑢[𝑛]
Alternatively, working with positive powers of e^{jΩ}:
𝑯(Ω) = (e^{2jΩ} − (7/4)e^{jΩ} − 1/2)/(e^{2jΩ} + (1/4)e^{jΩ} − 1/8) = 1 − ((5/6)/(e^{jΩ} + 1/2) + (7/6)/(e^{jΩ} − 1/4))
𝐡[𝑛] = δ[𝑛] − ((5/6)(−1/2)^{𝑛−1} + (7/6)(1/4)^{𝑛−1})𝑢[𝑛 − 1]
The two formulas for 𝐡[𝑛] are equivalent (use MATLAB to check this).
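A minimal sketch of the suggested MATLAB check (our own addition), comparing both closed forms against impz for the coefficients above:
b = [1 -7/4 -1/2]; a = [1 1/4 -1/8];
n = (0:14)';
h1 = 4*(n==0) + (5/3)*(-1/2).^n - (14/3)*(1/4).^n;                 % first form
h2 = (n==0) - ((5/6)*(-1/2).^(n-1) + (7/6)*(1/4).^(n-1)).*(n>=1);  % second form
h0 = impz(b, a, 15);                                               % reference
[norm(h1 - h0), norm(h2 - h0)]                                     % both should be ~0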
Exercise 29: Determine the signal if its Discrete-time Fourier transform (DTFT) is
𝟏) 𝑿(Ω) = { 0 for 0 ≤ |Ω| ≤ 𝑊;  1 for 𝑊 < |Ω| ≤ π }   𝟐) 𝑿(Ω) = ∑_{k=−∞}^{+∞} (−1)ᵏ δ(Ω − 𝑘π/2)
Ans:
𝟏) 𝐱[𝑛] = (1/2π)∫_{2π} 𝑿(Ω)e^{jΩ𝑛}dΩ ⟹ 2π𝐱[𝑛] = ∫_{−π}^{−𝑊} e^{jΩ𝑛}dΩ + ∫_{𝑊}^{π} e^{jΩ𝑛}dΩ
𝐱[𝑛] = (sin(π𝑛) − sin(𝑊𝑛))/(𝑛π) = sin(π𝑛)/(𝑛π) − sin(𝑊𝑛)/(𝑛π) = δ[𝑛] − sin(𝑊𝑛)/(𝑛π)
𝟐) 𝐱[𝑛] = (1/2π)∫_{2π} ∑_k (−1)ᵏ δ(Ω − 𝑘π/2)e^{jΩ𝑛}dΩ = ∑_k (−1)ᵏ (1/2π)∫_{2π} δ(Ω − 𝑘π/2)e^{jΩ𝑛}dΩ = ∑_k ((−1)ᵏ/2π)e^{j𝑘π𝑛/2}
Exercise 30: The input 𝐫[𝑛] and output 𝐱[𝑛] of an LTI system are related by
𝐫[𝑛] ⟶ 𝑯(Ω) ⟶ 𝐱[𝑛] ⟶ 𝑯_d(Ω) = 1/𝑯(Ω) ⟶ 𝐲[𝑛]
𝑯_d(Ω) = e^{8jΩ}/(e^{8jΩ} − e^{−8𝑎}) = 1/(1 − e^{−8𝑎}𝑧⁻⁸) with 𝑧 = e^{jΩ}
The eight poles of 𝑯_d(Ω) are located at 𝑧⁸ = e^{−8𝑎} ⟹ |𝑧| = e^{−𝑎}; the poles are distributed along the circle of radius 𝑟 = e^{−𝑎} < 1 (assuming 𝑎 > 0) ⟹ stable system.
Exercise 31: Given the following system
𝐱[𝑛] ⟶ 𝑯₁(Ω) ⟶ 𝐖[𝑛] ⟶ 𝑯₂(Ω) ⟶ 𝐲[𝑛]
𝑯₁(Ω) = (2 − e^{−jΩ})/(1 + (1/2)e^{−jΩ})   and   𝑯₂(Ω) = 1/(1 − (1/2)e^{−jΩ} + (1/4)e^{−2jΩ})
𝑯(Ω) = 𝑯₁(Ω)𝑯₂(Ω) = {(2 − e^{−jΩ})/(1 + (1/2)e^{−jΩ})}{1/(1 − (1/2)e^{−jΩ} + (1/4)e^{−2jΩ})} = (2 − e^{−jΩ})/(1 + (1/8)e^{−3jΩ})
We deduce that
𝐲[𝑛] + (1/8)𝐲[𝑛 − 3] = 2𝐱[𝑛] − 𝐱[𝑛 − 1]
Exercise 32: Given the following system, where the input and the impulse response are given by 𝐱[𝑛] = cos(Ω₀𝑛)𝑢[𝑛] and 𝐡[𝑛] = 𝑢[𝑛]. We will need the identities
(e^{−jΩ₀(𝑛+1)/2} − e^{jΩ₀(𝑛+1)/2})/(e^{−jΩ₀/2} − e^{jΩ₀/2}) = sin(Ω₀(𝑛 + 1)/2)/sin(Ω₀/2)   and   (1/2)(e^{jΩ₀(𝑛+1)/2}/e^{jΩ₀/2} + e^{−jΩ₀(𝑛+1)/2}/e^{−jΩ₀/2}) = cos(Ω₀𝑛/2)
Now we start the convolution between the input and the impulse response:
𝐲[𝑛] = ∑_{k=−∞}^{+∞} cos(Ω₀𝑘)𝑢[𝑛 − 𝑘]𝑢[𝑘] = ∑_{k=0}^{𝑛} cos(Ω₀𝑘) = (1/2)∑_{k=0}^{𝑛}(e^{jΩ₀𝑘} + e^{−jΩ₀𝑘})
     = (1/2)((1 − e^{jΩ₀(𝑛+1)})/(1 − e^{jΩ₀}) + (1 − e^{−jΩ₀(𝑛+1)})/(1 − e^{−jΩ₀}))
     = (1/2)(e^{jΩ₀(𝑛+1)/2}/e^{jΩ₀/2})((e^{−jΩ₀(𝑛+1)/2} − e^{jΩ₀(𝑛+1)/2})/(e^{−jΩ₀/2} − e^{jΩ₀/2})) + (1/2)(e^{−jΩ₀(𝑛+1)/2}/e^{−jΩ₀/2})((e^{jΩ₀(𝑛+1)/2} − e^{−jΩ₀(𝑛+1)/2})/(e^{jΩ₀/2} − e^{−jΩ₀/2}))
     = (1/2)(e^{jΩ₀𝑛/2} + e^{−jΩ₀𝑛/2}) ⋅ sin(Ω₀(𝑛 + 1)/2)/sin(Ω₀/2) = cos(Ω₀𝑛/2) ⋅ sin(Ω₀(𝑛 + 1)/2)/sin(Ω₀/2)
Finally
𝐱[𝑛] = cos(Ω₀𝑛)𝑢[𝑛] ⟶ 𝐡[𝑛] = 𝑢[𝑛] ⟶ 𝐲[𝑛] = cos(Ω₀𝑛/2) ⋅ sin(Ω₀(𝑛 + 1)/2)/sin(Ω₀/2)
CHAPTER VII:
The Fast Fourier
Transform and Discrete
Time Systems
The discrete Fourier transform (DFT) is the most important discrete transform, used to perform Fourier analysis in many practical applications. Since it deals with a finite amount of data, it can be implemented in computers by numerical algorithms or even dedicated hardware. These implementations usually employ efficient fast Fourier transform (FFT) algorithms; so much so that the terms "FFT" and "DFT" are often used interchangeably. Prior to its current usage, the "FFT" initialism may have also been used for the ambiguous term "finite Fourier transform".
The Fast Fourier Transform
and Discrete Time Systems
I. The Discrete Fourier Transform (DFT): The discrete-time Fourier transform (DTFT) may not be practical for analysis, because it is a function of the continuous frequency variable and we cannot use a digital computer to calculate a continuum of functional values. The DFT is a frequency analysis tool for aperiodic, finite-duration discrete-time signals which is practical because it is discrete in frequency.
In the DTFT, a discrete, aperiodic time-domain signal is transformed into a continuous, periodic frequency-domain signal. The DFT then takes discrete samples of this continuous DTFT. Moreover, the DFT is mainly used in computer-based analysis, as computers store data in discrete sequences of finite length; storing frequency coefficients over a continuous domain is not possible in digital computations.
Let the signal 𝐱[𝑛] be a finite-length sequence of length 𝑁, that is, 𝐱[𝑛] = 0 outside the range 0 ≤ 𝑛 ≤ 𝑁 − 1. The DFT of 𝐱[𝑛], denoted as 𝑿[𝑘], is defined by
𝑿[𝑘] = ∑_{n=0}^{𝑁−1} 𝐱[𝑛]𝑊_N^{𝑘𝑛},  𝑘 = 0, 1, …, 𝑁 − 1
where 𝑊_N is the 𝑁ᵗʰ root of unity given by 𝑊_N = e^{−j2π/𝑁}. The DFT pair is denoted by 𝐱[𝑛] ↔ 𝑿[𝑘].
Example: 01 Determine the DFT of the sequences
a) 𝐱[𝑛] = [1, 0, −1, 0];  b) 𝐱[𝑛] = [𝑗, 0, 𝑗, 1];  c) 𝐱[𝑛] = [1, 1, 1, 1, 1, 1, 1, 1];
d) 𝐱[𝑛] = cos(0.25π𝑛), 𝑛 = 0, …, 7;  e) 𝐱[𝑛] = 0.9ⁿ, 𝑛 = 0, …, 7
For c):
𝑿[𝑘] = ∑_{n=0}^{7} 𝐱[𝑛](𝑊₈)^{𝑘𝑛} = { (1 − 𝑊₈⁸ᵏ)/(1 − 𝑊₈ᵏ) = 0 when 𝑘 ≠ 0;  8 when 𝑘 = 0 }
since (𝑊₈)⁸ = (e^{−j2π/8})⁸ = 1. Therefore 𝑿[𝑘] = [8, 0, 0, 0, 0, 0, 0, 0].
For e):
𝑿[𝑘] = [5.69, 0.38 − 0.67𝑗, 0.31 − 0.28𝑗, 0.30 − 0.11𝑗, 0.30, 0.30 + 0.11𝑗, 0.31 + 0.28𝑗, 0.38 + 0.67𝑗]
I.I. Matrix form of the DFT: The DFT can be expressed in matrix-operation form as 𝑿 = 𝑭_N 𝐱 with
𝑿 = [𝑿[0]; 𝑿[1]; ⋮; 𝑿[𝑁 − 1]],   𝐱 = [𝐱[0]; 𝐱[1]; ⋮; 𝐱[𝑁 − 1]]
𝑭_N = [1 1 1 ⋯ 1;
       1 𝑊_N 𝑊_N² ⋯ 𝑊_N^{𝑁−1};
       1 𝑊_N² 𝑊_N⁴ ⋯ 𝑊_N^{2(𝑁−1)};
       ⋮ ⋮ ⋮ ⋱ ⋮;
       1 𝑊_N^{𝑁−1} 𝑊_N^{2(𝑁−1)} ⋯ 𝑊_N^{(𝑁−1)(𝑁−1)}]
Important features of the DFT are the following:
1. There is a one-to-one correspondence between 𝐱[𝑛] and 𝑿[𝑘].
2. There is an extremely fast algorithm, called the fast Fourier transform for its calculation.
3. The DFT is closely related to the discrete Fourier series and the Fourier transform.
4. The DFT is the appropriate Fourier representation for digital computer realization
because it is discrete and of finite length in both the time and frequency domains.
I.II. Relationship of the DFT to the DTFT: Notice that the DFT can be derived from the DTFT:
𝑿[𝑘] = ∑_{n=0}^{𝑁−1} 𝐱[𝑛]𝑊_N^{𝑘𝑛} = ∑_{n=0}^{𝑁−1} 𝐱[𝑛]e^{−j𝑘(2π/𝑁)𝑛} = 𝑿(Ω)|_{Ω=2π𝑘/𝑁} = 𝑿(2π𝑘/𝑁)
Thus, 𝑿[𝑘] corresponds to 𝑿(Ω) sampled at the uniformly spaced frequencies Ω = 2π𝑘/𝑁 for integer 𝑘. Also, the DFT 𝑿[𝑘] of the finite sequence 𝐱[𝑛] can be interpreted as the coefficients 𝑐_k in the discrete Fourier series representation of its periodic extension, multiplied by the period 𝑁. That is,
𝑿[𝑘] = DFT(𝐱[𝑛]) = 𝑿(Ω)|_{Ω=2π𝑘/𝑁}   and   𝑿[𝑘] = 𝑁𝑐_k
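This sampling relationship is easy to see numerically. In the sketch below (our own illustration, using the finite sequence 0.9ⁿ from Example 01), the N-point fft coincides with the closed-form DTFT evaluated at Ω = 2πk/N:
N = 8; n = 0:N-1; x = 0.9.^n;            % finite-length test sequence
Xk = fft(x);                             % N-point DFT
Om = 2*pi*(0:N-1)/N;                     % the DFT sampling frequencies
Xdtft = (1 - 0.9^N*exp(-1j*Om*N))./(1 - 0.9*exp(-1j*Om));  % DTFT of the same finite sequence
norm(Xk - Xdtft)                         % should be ~0: X[k] = X(2*pi*k/N)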
Example: 02 Consider the discrete-time pulse 𝐱[𝑛] = 𝑢[𝑛] − 𝑢[𝑛 − 10], a length-10 sequence (𝑁 = 10). The DTFT of this pulse was determined before:
𝑿(Ω) = 𝔽(𝑢[𝑛] − 𝑢[𝑛 − 10]) = e^{−j9Ω/2} sin(5Ω)/sin(Ω/2)
The DFT of this pulse is
𝑿[𝑘] = 𝑿(Ω)|_{Ω=2πk/10} = e^{−j0.9π𝑘} sin(π𝑘)/sin(π𝑘/10) = [10, 0, 0, 0, 0, 0, 0, 0, 0, 0]
I.III. Zero padding: Note that the choice of the number of points 𝑁 is not fixed. If 𝐱[𝑛] has length 𝑁₁ < 𝑁, we can give 𝐱[𝑛] length 𝑁 by simply appending (𝑁 − 𝑁₁) samples with a value of 0. This addition of dummy samples is known as zero padding. The resultant 𝐱[𝑛] is often referred to as an N-point sequence, and 𝑿[𝑘] is referred to as an N-point DFT. By a judicious choice of 𝑁, such as choosing it to be a power of 2, computational efficiencies can be gained.
Example: 03 (Zero padding the discrete-time pulse) Consider again the length-10 discrete-time pulse used in Example 02. Create a new signal 𝒒[𝑛] by zero-padding it to 20 samples, and compare the 20-point DFT of 𝒒[𝑛] to the DTFT of 𝐱[𝑛], as sketched below.
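A minimal MATLAB sketch of this comparison (our own addition): both the 10-point and the 20-point DFT samples lie on the same underlying DTFT curve; zero padding only samples it more densely.
x = ones(1,10);                          % the length-10 pulse of Example 02
q = [x zeros(1,10)];                     % zero-padded to 20 samples
X10 = fft(x); Q20 = fft(q);
Om = linspace(1e-6, 2*pi - 1e-6, 512);   % avoid the 0/0 point at Omega = 0
Xdtft = exp(-1j*Om*9/2).*sin(5*Om)./sin(Om/2);   % DTFT of the pulse
plot(Om, abs(Xdtft)), hold on, grid on
stem(2*pi*(0:9)/10,  abs(X10), 'filled') % 10-point DFT samples
stem(2*pi*(0:19)/20, abs(Q20))           % 20-point DFT samples: denser, same curve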
Remark: After some exemplifications it will become apparent that the properties of the DFT
are similar to those of DTFS and DTFT with one significant difference: Any shifts in the
time domain or the transform domain are circular shifts rather than linear shifts. Also, any
time reversals used in conjunction with the DFT are circular time reversals rather than
linear ones.
Example: 04 Let 𝑿[𝑘] = [1, 𝑗, −1, −𝑗] and 𝑯[𝑘] = [0, 1, −1, 1] be the DFTs of the two sequences 𝐱[𝑛] and 𝐡[𝑛], respectively. Determine the DFT of the following sequences without computing 𝐱[𝑛] and 𝐡[𝑛].
Solution: Notice that if 𝐱[𝑛] = [𝐱[0], 𝐱[1], 𝐱[2], 𝐱[3]], then 𝐱[𝑛 − 1]_{mod(4)} = [𝐱[3], 𝐱[0], 𝐱[1], 𝐱[2]] = 𝐱[𝑛 + 3]_{mod(4)}. This basic property is called a circular shift; the DFT shift is always a circular shift.
In this case 𝑁 = 4 and 𝐿 = 1 ⟹ DFT(𝐱[𝑛 − 1]_{mod(4)}) = e^{−j𝑘2π/4}𝑿[𝑘] = (−𝑗)ᵏ𝑿[𝑘], 𝑘 = 0, 1, 2, 3
DFT((−1)ⁿ𝐱[𝑛]) = DFT(e^{j2(2π/4)𝑛}𝐱[𝑛]) = 𝑿[𝑘 − 2]_{mod(4)} = [−1, −𝑗, 1, 𝑗]
e) DFT((𝑗)ⁿ𝐱[𝑛]) = DFT(e^{j(2π/4)𝑛}𝐱[𝑛]) = 𝑿[𝑘 − 1]_{mod(4)} = [−𝑗, 1, 𝑗, −1]
Example: 06 Let 𝐱[𝑛] have the DFT 𝑿[𝑘] = [1, 𝑗, 1, −𝑗]; determine the following DFTs:
Solution:
a) 𝒀[𝑘] = DFT((−1)ⁿ𝐱[𝑛]) = DFT(e^{j2(2π/4)𝑛}𝐱[𝑛]) = 𝑿[𝑘 − 2]_{mod(4)} = [1, −𝑗, 1, 𝑗]
b) 𝒀[𝑘] = DFT(𝐱[𝑛 + 1]_{mod(4)}) = (𝑗)ᵏ𝑿[𝑘] = [1, −1, −1, −1]
c) 𝒀[𝑘] = DFT(𝐱[𝑛] ⋆ δ[𝑛 − 2]) = 𝑿[𝑘] ⋅ DFT(δ[𝑛 − 2]) = 𝑿[𝑘] ⋅ e^{−j2(2π/4)𝑘} = (−1)ᵏ𝑿[𝑘]
d) 𝒀[𝑘] = DFT(𝐱[−𝑛]_{mod(4)}) = 𝑿[−𝑘]_{mod(4)} = [𝑿[0], 𝑿[3], 𝑿[2], 𝑿[1]] = [1, −𝑗, 1, 𝑗]
Example: 07 Two finite sequences 𝐡[𝑛] and 𝐱[𝑛] have known four-point DFTs 𝑯[𝑘] and 𝑿[𝑘]. Let 𝐲[𝑛] = 𝐡[𝑛]⨂𝐱[𝑛] be the four-point circular convolution of the two sequences. Using the properties of the DFT (do not compute 𝐱[𝑛] and 𝐡[𝑛]):
Solution:
a) DFT(𝐱[𝑛 − 1]_{mod(4)}) = e^{−j(2π/4)𝑘}𝑿[𝑘] = (−𝑗)ᵏ𝑿[𝑘] = [1, 2𝑗, −1, −2𝑗]
   DFT(𝐡[𝑛 + 2]_{mod(4)}) = e^{j2(2π/4)𝑘}𝑯[𝑘] = (−1)ᵏ𝑯[𝑘] = [1, −𝑗, 1, 𝑗]
b) Recall that
𝐲[𝑛] = IDFT(𝒀[𝑘]) = (1/𝑁)∑_{k=0}^{𝑁−1} 𝒀[𝑘]e^{j(2π/𝑁)𝑘𝑛} ⟹ 𝐲[𝑛₀] = (1/𝑁)∑_{k=0}^{𝑁−1} 𝒀[𝑘]e^{j(2π/𝑁)𝑘𝑛₀}
Example: 08 Let 𝐱[𝑛] be a finite sequence with DFT 𝑿[𝑘] = DFT(𝐱[𝑛]) = [0, 1 + 𝑗, 1, 1 − 𝑗]. Using the properties of the DFT, determine the DFTs of the following:
a) 𝐲[𝑛] = e^{j(π/2)𝑛}𝐱[𝑛]
b) 𝐲[𝑛] = cos(𝑛π/2)𝐱[𝑛]
c) 𝐲[𝑛] = 𝐱[𝑛 − 1]_{mod(4)}
d) 𝐲[𝑛] = [0, 0, 1, 0]⨂𝐱[𝑛], with ⨂ denoting circular convolution
Solution:
a) Since e^{j(π/2)𝑛}𝐱[𝑛] = e^{j(2π/4)𝑛}𝐱[𝑛], then 𝒀[𝑘] = 𝑿[𝑘 − 1]_{mod(4)} = [1 − 𝑗, 0, 1 + 𝑗, 1]
b) In this case 𝐲[𝑛] = cos(𝑛π/2)𝐱[𝑛] = (e^{j(2π/4)𝑛}𝐱[𝑛] + e^{−j(2π/4)𝑛}𝐱[𝑛])/2, and therefore its DFT is
𝒀[𝑘] = (1/2)𝑿[𝑘 + 1]_{mod(4)} + (1/2)𝑿[𝑘 − 1]_{mod(4)} = [1, 1/2, 1, 1/2]
c) 𝒀[𝑘] = DFT(𝐱[𝑛 − 1]_{mod(4)}) = e^{−j(2π/4)𝑘}𝑿[𝑘] = (−𝑗)ᵏ𝑿[𝑘] = [0, 1 − 𝑗, −1, 1 + 𝑗]
d) DFT([0, 0, 1, 0]⨂𝐱[𝑛]) = DFT(δ[𝑛 − 2]⨂𝐱[𝑛]) = DFT(𝐱[𝑛 − 2]_{mod(4)}) = (−𝑗)²ᵏ𝑿[𝑘] = (−1)ᵏ𝑿[𝑘]
Example: 09 Let 𝐱[𝑛] be an N-point sequence with DFT 𝑿[𝑘]; express in terms of 𝑿[𝑘] the DFTs of a) 𝐱⋆[𝑛], b) 𝐱[−𝑛]_{mod(𝑁)}, c) Re(𝐱[𝑛]), d) Im(𝐱[𝑛]).
Solution:
a) 𝒀[𝑘] = DFT(𝐱⋆[𝑛]) = ∑_{⟨N⟩} 𝐱⋆[𝑛]e^{−j(2π/𝑁)𝑘𝑛} = (∑_{⟨N⟩} 𝐱[𝑛]e^{j(2π/𝑁)𝑘𝑛})⋆ = 𝑿⋆[−𝑘]_{mod(𝑁)}
b) 𝒀[𝑘] = DFT(𝐱[−𝑛]_{mod(𝑁)}) = ∑_{⟨N⟩} 𝐱[−𝑛]_{mod(𝑁)}e^{−j(2π/𝑁)𝑘𝑛} = ∑_{⟨N⟩} 𝐱[𝑛]e^{j(2π/𝑁)𝑘𝑛} = 𝑿[−𝑘]_{mod(𝑁)}
c) 𝒀[𝑘] = DFT(Re(𝐱[𝑛])) = (1/2)DFT(𝐱[𝑛]) + (1/2)DFT(𝐱⋆[𝑛]) = (1/2)(𝑿[𝑘] + 𝑿⋆[−𝑘]_{mod(𝑁)})
d) 𝒀[𝑘] = DFT(Im(𝐱[𝑛])) = (1/2𝑗)DFT(𝐱[𝑛]) − (1/2𝑗)DFT(𝐱⋆[𝑛]) = (1/2𝑗)(𝑿[𝑘] − 𝑿⋆[−𝑘]_{mod(𝑁)})
Example: 11 Let 𝐱[𝑛] = [2, 3, −1, 4]. Write a MATLAB code to determine the DFT of 𝐱[𝑛] using the FFT algorithm, as given below.
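A minimal version of the requested code (our own completion, since the listing is missing in the source):
x = [2 3 -1 4];      % the given sequence
X = fft(x)           % its 4-point DFT via the FFT algorithm
xr = ifft(X);        % the inverse transform recovers x
norm(x - xr)         % should be ~0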
II. The Fast Fourier Transform (FFT): A fast Fourier transform (FFT) is an efficient
algorithm that computes the discrete Fourier transform (DFT) of a sequence, or its inverse
(IDFT). Fourier analysis converts a signal from its original domain (often time or space) to a
representation in the frequency domain and vice versa. The DFT is obtained by
decomposing a sequence of values into components of different frequencies. This operation
is useful in many fields, but computing it directly from the definition is often too slow to be
practical. An FFT rapidly computes such transformations by factorizing the DFT matrix
into a product of sparse (mostly zero) factors. The basic ideas were popularized in 1965 by
James Cooley and John Tukey. In 1994, Gilbert Strang described the FFT as "the most
important numerical algorithm of our lifetime", and it was included in Top 10 Algorithms of
20th Century by the IEEE magazine Computing in Science & Engineering.
If we take the 2-point DFT and 4-point DFT and generalize them to 8-point, 16-point, …,
2r-point, we get the FFT algorithm.
II.I Two-point DFT (N=2): In the case of the radix-2 Cooley–Tukey algorithm, the butterfly is simply a DFT of size 2 that takes two inputs (𝐱[0], 𝐱[1]) (corresponding outputs of the two sub-transforms) and gives two outputs (𝐗[0], 𝐗[1]) by the formula:
𝐗[0] = 𝐱[0] + 𝐱[1],  𝐗[1] = 𝐱[0] − 𝐱[1]
II.II Four-point DFT (N=4): Here is a generalization of the derivation presented in the previous case to a four-point DFT. This DFT of size 4 takes four inputs (𝐱[0], 𝐱[1], 𝐱[2], 𝐱[3]) and gives four outputs (𝐗[0], 𝐗[1], 𝐗[2], 𝐗[3]) by the formula:
𝐗[0] = (𝐱[0] + 𝐱[2]) + (𝐱[1] + 𝐱[3]),  𝐗[1] = (𝐱[0] − 𝐱[2]) − 𝑗(𝐱[1] − 𝐱[3])
𝐗[2] = (𝐱[0] + 𝐱[2]) − (𝐱[1] + 𝐱[3]),  𝐗[3] = (𝐱[0] − 𝐱[2]) + 𝑗(𝐱[1] − 𝐱[3])
Also it can be observed that the above diagram of the 4-point DFT may be rearranged as
We see that the 4-point DFT can be computed by the generation of two 2-point DFTs,
followed by a recomposition of terms as shown in the signal flow graph. This second method
of evaluating 𝑿[𝑘] is known as the decimation-in-time fast Fourier transform (FFT) algorithm.
II.III N-points DFT (N=2r): To start the generalization of the algorithm let we consider
some basic conventions, such as: the sequence 𝐱[𝑛] is often referred to as an N-point
sequence, and 𝑿[𝑘] is referred to as an N-point DFT. By a judicious choice of 𝑁, such as
choosing it to be a power of 2, computational efficiencies can be gained. Suppose, we
separated the Fourier Transform into even and odd indexed sub-sequences, we obtain
𝑿[𝑘] = ∑_{n=0}^{𝑁−1} 𝐱[𝑛]𝑊_N^{𝑘𝑛} = ∑_{n even} 𝐱[𝑛]𝑊_N^{𝑘𝑛} + ∑_{n odd} 𝐱[𝑛]𝑊_N^{𝑘𝑛}  with 𝑊_N = e^{−j2π/𝑁} and 𝑘 = 0, 1, 2, …, 𝑁 − 1
Let us set 𝑛 = 2𝑟 for the even terms and 𝑛 = 2𝑟 + 1 for the odd terms. Notice that 𝑊_N^{2𝑘} = e^{−j(2π/𝑁)2𝑘} = e^{−j(2π/(𝑁/2))𝑘} = 𝑊_{N/2}^{𝑘}. Therefore, we can write
𝑿[𝑘] = ∑_{r=0}^{𝑁/2−1} 𝐱[2𝑟]𝑊_{N/2}^{𝑟𝑘} + 𝑊_N^{𝑘} ∑_{r=0}^{𝑁/2−1} 𝐱[2𝑟 + 1]𝑊_{N/2}^{𝑟𝑘} = 𝑮[𝑘] + 𝑊_N^{𝑘}𝑯[𝑘],  where 𝐠[𝑟] = 𝐱[2𝑟] and 𝐡[𝑟] = 𝐱[2𝑟 + 1]
Although the 𝑁/2-point DFTs of 𝐠[𝑟] and 𝐡[𝑟] are sequences of length 𝑁/2, the periodicity of
the complex exponentials allows us to write: 𝑮[𝑘] = 𝑮[𝑘 + 𝑁/2], & 𝑯[𝑘] = 𝑯[𝑘 + 𝑁/2]
Therefore, 𝑿[𝑘] may be computed from the 𝑁/2-point DFTs 𝑮[𝑘] and 𝑯[𝑘].
𝑁
𝑿[𝑘] = 𝑮[𝑘] + 𝑊𝑁𝑘 𝑯[𝑘] ⟹ 𝑿 [𝑘 + ] = 𝑮[𝑘] − 𝑊𝑁𝑘 𝑯[𝑘]
2
Flow graph of a
K -point DFT using two
(K /2)-point DFTs for K =8
Remark: In the diagram of the 8-point FFT above, note that the inputs aren't in normal order 𝐱[0], 𝐱[1], 𝐱[2], 𝐱[3], 𝐱[4], 𝐱[5], 𝐱[6], 𝐱[7]; they're in the seemingly bizarre order 𝐱[0], 𝐱[4], 𝐱[2], 𝐱[6], 𝐱[1], 𝐱[5], 𝐱[3], 𝐱[7]. Why this sequence? Because the recursive even/odd splitting sorts the inputs into bit-reversed index order: writing each index with 3 bits and reversing the bits gives 000, 100, 010, 110, 001, 101, 011, 111, i.e. 0, 4, 2, 6, 1, 5, 3, 7.
Example: 12 Determine the DFTs of
𝐱[𝑛] = [1, 2, 3, 0]  and  𝐡[𝑛] = [2, 1, 0, 0]
𝐗[𝑘] = [6, (−2 − 2𝑗), 2, (−2 + 2𝑗)],  𝐇[𝑘] = [3, (2 − 𝑗), 1, (2 + 𝑗)]
We can take the IDFT by using the FFT procedure (only conjugating the above).
Example: 13 Determine the DFT of the 8-point discrete time domain sequence
Remark: The FFT reduces the number of computations needed for a problem of size 𝑁 from
𝒪(𝑁 2 ) to 𝒪(𝑁 log(𝑁)).
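The operation-count saving is easy to feel in practice; this rough, hypothetical comparison pits the N² matrix method against fft on the same vector (timings will vary by machine):
N = 1024; x = randn(1, N);
F = dftmtx(N);                  % full DFT matrix: N^2 entries
tic, y1 = x*F;  t_direct = toc  % direct N^2 method
tic, y2 = fft(x); t_fft = toc   % FFT: about N*log2(N) operations
norm(y1 - y2)                   % same result up to round-off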
II.IV Matrix Form of the DFT: In applied mathematics, a DFT matrix is an expression of a discrete Fourier transform (DFT) as a transformation matrix, which can be applied to a signal through matrix multiplication.
𝑿[𝑘] = DFT(𝐱[𝑛]) = ∑_{n=0}^{𝑁−1} 𝐱[𝑛]𝑊_N^{𝑘𝑛} ⟺ [𝑿[0]; 𝑿[1]; ⋮; 𝑿[𝑁 − 1]] = [1 1 ⋯ 1; 1 𝑊_N 𝑊_N² ⋯ 𝑊_N^{𝑁−1}; 1 𝑊_N² 𝑊_N⁴ ⋯ 𝑊_N^{2(𝑁−1)}; ⋮; 1 𝑊_N^{𝑁−1} 𝑊_N^{2(𝑁−1)} ⋯ 𝑊_N^{(𝑁−1)(𝑁−1)}][𝐱[0]; 𝐱[1]; ⋮; 𝐱[𝑁 − 1]]
Note that 𝑭_N is symmetric, that is, 𝑭_Nᵀ = 𝑭_N, where 𝑭_Nᵀ is the transpose of 𝑭_N. Also it can be shown that (1/√𝑁)𝑭_N is a unitary matrix, that is:
((1/√𝑁)𝑭_N^H)((1/√𝑁)𝑭_N) = 𝑰 ⟺ 𝑭_N⁻¹ = (1/𝑁)𝑭_N^H
This means that 𝑿 = 𝑭_N𝐱 ⟺ 𝐱 = (1/𝑁)𝑭_N^H𝑿.
Example: 14 Here is a MATLAB code for the computation of the DFT by the matrix method (not fast); we compare this method against the standard fft algorithm:
clear all, clc, x = [ones(1,6) zeros(1,26)]; n = length(x);
y1 = fft(x);
y2 = x*dftmtx(n);
norm(y1-y2)
subplot(211); stem(abs(y2)), grid on
subplot(212); stem(angle(y2)), grid on
Good Idea: (How can we make this matrix multiplication as fast as possible?) We want to multiply 𝑭_N times 𝐱 as quickly as possible. Normally a matrix times a vector takes 𝑁² separate multiplications: the matrix has 𝑁² entries. You might think it is impossible to do better. (If the matrix has zero entries, then multiplications can be skipped. But the Fourier matrix has no zeros!) By using the special pattern 𝑊_N = e^{−2jπ/𝑁} for its entries, 𝑭_N can be factored in a way that produces many zeros. This is the FFT.
The key idea is to connect 𝑭_N with the half-size Fourier matrix 𝑭_{N/2}. Assume that 𝑁 is a power of 2 (say 𝑁 = 2¹⁰ = 1024). We will connect 𝑭₁₀₂₄ to 𝑭₅₁₂, or rather to two copies of 𝑭₅₁₂. When 𝑁 = 4, the key is in the relation between the matrices
𝑭₄ = [1 1 1 1; 1 𝑗 𝑗² 𝑗³; 1 𝑗² 𝑗⁴ 𝑗⁶; 1 𝑗³ 𝑗⁶ 𝑗⁹]   and   [𝑭₂ 𝟎₂ₓ₂; 𝟎₂ₓ₂ 𝑭₂] = [1 1 0 0; 1 𝑗² 0 0; 0 0 1 1; 0 0 1 𝑗²]
On the left is 𝑭₄, with no zeros. On the right is a matrix that is half zero. The work is cut in half. But wait, those matrices are not the same. The block matrix with two copies of the half-size 𝑭 is one piece of the picture but not the only piece. Here is the factorization of 𝑭₄ with many zeros:
𝑭₄ = [1 0 1 0; 0 1 0 𝑗; 1 0 −1 0; 0 1 0 −𝑗][1 1 0 0; 1 𝑗² 0 0; 0 0 1 1; 0 0 1 𝑗²][1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1]
The matrix on the right is a permutation that puts the even-indexed components of 𝐱 first and the odd-indexed components last. The middle matrix performs separate half-size transforms on the evens and odds. The matrix at the left combines the two half-size outputs in a way that produces the correct full-size output 𝑿 = 𝑭_N𝐱. The same idea applies when 𝑁 = 1024 and 𝑀 = 512. The Fourier matrix 𝑭₁₀₂₄ is full of powers of 𝑊_N = e^{−2jπ/𝑁}. The first stage of the FFT is the great factorization discovered by Cooley and Tukey (and foreshadowed in 1805 by Gauss):
𝑭₁₀₂₄ = [𝑰₅₁₂ 𝑫₅₁₂; 𝑰₅₁₂ −𝑫₅₁₂][𝑭₅₁₂ 𝟎; 𝟎 𝑭₅₁₂]𝑷₁₀₂₄
The term 𝑰₅₁₂ is the identity matrix, 𝑫₅₁₂ is the diagonal matrix with entries (1, 𝑊_N, …, 𝑊_N⁵¹¹), and 𝑷₁₀₂₄ is the even-odd permutation. The two copies of 𝑭₅₁₂ are what we expected. Don't forget that they use the 512ᵗʰ root of unity.
If you have read this far, you have probably guessed what comes next. We reduced 𝑭_N to 𝑭_{N/2}. Keep going to 𝑭_{N/4}. The matrices 𝑭₅₁₂ lead to 𝑭₂₅₆ (in four copies). Then 256 leads to 128. That is recursion. It is a basic principle of many fast algorithms, and here is the second stage with four copies of 𝑭 = 𝑭₂₅₆ and 𝑫 = 𝑫₂₅₆:
[𝑭₅₁₂ 𝟎; 𝟎 𝑭₅₁₂] = [𝑰 𝑫 𝟎 𝟎; 𝑰 −𝑫 𝟎 𝟎; 𝟎 𝟎 𝑰 𝑫; 𝟎 𝟎 𝑰 −𝑫][𝑭 𝟎 𝟎 𝟎; 𝟎 𝑭 𝟎 𝟎; 𝟎 𝟎 𝑭 𝟎; 𝟎 𝟎 𝟎 𝑭][pick 0, 4, 8, …; pick 2, 6, 10, …; pick 1, 5, 9, …; pick 3, 7, 11, …]
We will count the individual multiplications, to see how much is saved. Before the FFT was invented, the count was the usual 𝑁² = (1024)². This is about a million multiplications. I am not saying that they take a long time. The cost becomes large when we have many, many transforms to do, which is typical. Then the saving by the FFT is also large:
The final count for size 𝑁 = 2^ℓ is reduced from 𝑁² to (1/2)𝑁ℓ.
The number 𝑁 = 1024 is 2¹⁰, so ℓ = 10. The original count of (1024)² is reduced to (5)(1024). The saving is a factor of 200. A million is reduced to five thousand. That is why the FFT has revolutionized signal processing.
Example: 15 This example is added only for going deeper (to clarify things more). Splitting the columns of 𝑭₄ into even-indexed and odd-indexed groups:
𝑭₄[𝐱₀; 𝐱₁; 𝐱₂; 𝐱₃] = [1 1; 1 𝑊₄²; 1 𝑊₄⁴; 1 𝑊₄⁶][𝐱₀; 𝐱₂] + [1 1; 𝑊₄ 𝑊₄³; 𝑊₄² 𝑊₄⁶; 𝑊₄³ 𝑊₄⁹][𝐱₁; 𝐱₃]
The second block factors as diag(1, 𝑊₄, 𝑊₄², 𝑊₄³) times a copy of the first block, and since 𝑊₄² = 𝑊₂, each half-size block stacks two copies of 𝑭₂. Defining 𝑭₂ = [1 1; 1 𝑊₂] and 𝑫₂ = [1 0; 0 𝑊₄], we obtain
𝑭₄𝐱 = [𝑭₂𝐱_even + 𝑫₂𝑭₂𝐱_odd; 𝑭₂𝐱_even − 𝑫₂𝑭₂𝐱_odd] = [𝑰₂ 𝑫₂; 𝑰₂ −𝑫₂][𝑭₂ 𝟎₂; 𝟎₂ 𝑭₂]𝑷₄𝐱
At the next (final) level,
𝑭₂𝐱_even = [𝑭₁𝐱₀ + 𝑫₁𝑭₁𝐱₂; 𝑭₁𝐱₀ − 𝑫₁𝑭₁𝐱₂] = [𝐱₀ + 𝐱₂; 𝐱₀ − 𝐱₂],   𝑭₂𝐱_odd = [𝑭₁𝐱₁ + 𝑫₁𝑭₁𝐱₃; 𝑭₁𝐱₁ − 𝑫₁𝑭₁𝐱₃] = [𝐱₁ + 𝐱₃; 𝐱₁ − 𝐱₃]
where 𝑷_N is the 𝑁ᵗʰ-order permutation matrix that performs the needed even-odd reordering.
This code is designed to help learn and understand the fast Fourier algorithm. For this purpose it is kept as short and simple as possible.
function y = mfft(x)
% Recursive radix-2 decimation-in-time FFT.
% Length of x must be a power of two.
len = length(x);
if len >= 2
    % In MATLAB's 1-based indexing, x(1:2:len) holds the even-indexed
    % samples x[0], x[2], ... and x(2:2:len) the odd-indexed ones.
    g = mfft(x(1:2:len));   % FFT of the even-indexed samples: G[k]
    h = mfft(x(2:2:len));   % FFT of the odd-indexed samples: H[k]
    % multiply H[k] by the twiddle factors W_N^k = exp(-2i*pi*k/len)
    h = exp((0:len/2-1)*(-2i*pi/len)).*h;
    y = [g+h, g-h];         % X[k] = G[k]+W^k*H[k], X[k+len/2] = G[k]-W^k*H[k]
else
    y = x;                  % end of recursion, do nothing
end
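A one-line sanity check against the built-in fft (our own addition; the input length must be a power of two):
x = randn(1, 16);
norm(mfft(x) - fft(x))   % should be ~0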
CHAPTER VIII:
Practical Implementation
of Linear Systems
I. Introduction
II. Realization of Continuous Systems
II.I The Direct Form (First Canonical Form)
II.II The Transposed Form (Second Canonical Form)
II.III Continuous State Space Representation
III. Realization of Discrete Systems
III.I Basic Structures for IIR Systems
III.II Basic Structures for FIR Systems
III.III Discrete State Space Representation
IV. Solved Problems
We know that all the studied systems can be considered filters, but we differentiate between what was primarily built for this purpose and what performs such an action as an accidental result. Generally, from a physical perspective, we are talking about systems that may or may not be filters. Here we are going to introduce some systematic procedures for the implementation of those systems. As is well known, the transfer function of such a system, made of lumped linear elements, is a rational function (a ratio of two polynomials), and based on this information we will see that its realizability depends strongly on the degree of the denominator polynomial, which is called the order of the system.
For large 𝑠 (𝑠 → ∞) we obtain 𝑯(𝑠) ≈ 𝑏₀𝑠^{𝑚−𝑛}. Therefore, for 𝑚 > 𝑛, the system acts as an (𝑚 − 𝑛)ᵗʰ-order differentiator. For this reason, we restrict 𝑚 ≤ 𝑛 for practical systems. With this restriction, the most general case is 𝑚 = 𝑛.
II.I The Direct Form (First Canonical Form): The method of obtaining a block diagram
from an s-domain system function will be derived using a third-order system, but its
generalization to higher-order system functions is quite straightforward. Consider a CTLTI
system described by a system function 𝑯(𝑠).
𝑯(𝑠) = (𝑏₀𝑠³ + 𝑏₁𝑠² + 𝑏₂𝑠 + 𝑏₃)/(𝑠³ + 𝑎₁𝑠² + 𝑎₂𝑠 + 𝑎₃) = (𝑏₀ + 𝑏₁/𝑠 + 𝑏₂/𝑠² + 𝑏₃/𝑠³)/(1 + 𝑎₁/𝑠 + 𝑎₂/𝑠² + 𝑎₃/𝑠³) = (𝑏₀ + 𝑏₁/𝑠 + 𝑏₂/𝑠² + 𝑏₃/𝑠³)(1/(1 + 𝑎₁/𝑠 + 𝑎₂/𝑠² + 𝑎₃/𝑠³))
In this derivation we consider 𝑿(𝑠) to be the input of the system and 𝑾(𝑠) an intermediate variable:
𝑯(𝑠) = (𝑯₁(𝑠)𝑯₂(𝑠)) = (𝑾(𝑠)/𝑿(𝑠))(𝒀(𝑠)/𝑾(𝑠))
This is one of the two canonical realizations (also known as the first canonical form or direct-form realization). Observe that 2𝑛 integrators are required for this realization of an 𝑛ᵗʰ-order transfer function. In order to avoid the use of such a large number of integrators, we can construct another form of implementation by a rearrangement of the cascade
𝑯(𝑠) = (𝑯₂(𝑠)𝑯₁(𝑠)) = (𝒀(𝑠)/𝑽(𝑠))(𝑽(𝑠)/𝑿(𝑠))
More importantly, this form requires only 𝑛 integrators, the minimum number for an 𝑛ᵗʰ-order differential equation. Block diagrams that use the minimum number of energy stores (integrators) for the realization of an n-order differential equation are also called canonical forms.
II.II The Transposed Form (Second Canonical Form): The recursive procedure for calculating the response of a differential equation is extremely useful in implementing causal systems. However, it is important to recognize that, either in algebraic terms or in terms of block diagrams, a differential equation can often be rearranged into other forms leading to implementations that may have particular advantages. Here we present an alternative realization called the second canonical form.
𝝃_{k+1}(𝑠) = (1/𝑠)(𝑏_{N−k}𝑿(𝑠) − 𝑎_{N−k}𝒀(𝑠) + 𝝃_k(𝑠))
⟹ 𝒀(𝑠) = 𝑏₀𝑿(𝑠) + 𝝃_N(𝑠)
⟹ 𝒀(𝑠) = 𝑏₀𝑿(𝑠) + ∑_{k=1}^{N} (1/𝑠ᵏ)(𝑏_k𝑿(𝑠) − 𝑎_k𝒀(𝑠))
⟹ 𝑯(𝑠) = 𝒀(𝑠)/𝑿(𝑠) = (∑_{k=0}^{N} 𝑏_k𝑠^{N−k})/(∑_{k=0}^{N} 𝑎_k𝑠^{N−k})  with 𝑎₀ = 1
▪ Finding the equation(s) relating the state variables to the input(s) (the state equation).
▪ Finding the output variables in terms of the state variables (the output equation).
The analysis procedure, therefore, consists of solving the state equation first, and then
solving the output equation. The state space description is capable of determining every
possible system variable (or output) from the knowledge of the input and the initial state
(conditions) of the system. For this reason it is an internal description of the system.
dⁿ𝐲/d𝑡ⁿ + 𝑎₁ d^{𝑛−1}𝐲/d𝑡^{𝑛−1} + ⋯ + 𝑎_{n−1} d𝐲/d𝑡 + 𝑎_n𝐲(𝑡) = 𝐮(𝑡)
We can define 𝑛 new variables, 𝐱₁ through 𝐱_n: 𝐱₁ = 𝐲, 𝐱₂ = 𝐲̇, …, 𝐱_n = 𝐲^{(𝑛−1)}, so that 𝐱̇₁ = 𝐱₂, …, 𝐱̇_{n−1} = 𝐱_n and 𝐱̇_n = −𝑎_n𝐱₁ − ⋯ − 𝑎₁𝐱_n + 𝐮(𝑡).
These 𝑛 simultaneous first-order differential equations are the state equations of the system. The output equation is 𝐲(𝑡) = 𝐱₁(𝑡).
In this general formulation, all matrices are allowed to be time-variant (i.e. their elements
can depend on time); however, in the common LTI case, matrices will be time invariant. The
time variable 𝑡 can be continuous (e.g. 𝑡 ∈ ℝ) or discrete (e.g. 𝑡 ∈ ℤ). In the latter case, the
time variable 𝑘 is usually used instead of 𝑡. The "state space" is the Euclidean space in
which the variables on the axes are the state variables. The state of the system can be
represented as a vector within that space. If the dynamical system is linear, time-invariant,
and finite-dimensional, then the differential and algebraic equations may be written in
matrix form. The state-space method is characterized by significant algebraization of
general system theory, which makes it possible to use vector-matrix structures.
By its nature, the state variable analysis is eminently suited for multiple-input, multiple-
output (MIMO) systems. In addition, the state-space techniques are useful for several other
reasons, including the following:
Remark: It is of great importance to note that the state space representation is not unique. As a simple example, we could simply reorder the variables 𝐱 ↔ 𝐱_new from the example above. This results in a new state space representation. (This is due to the fact that similar matrices can represent the same system.)
The above equations (i.e. state and output equations) can be solved in both the time
domain and frequency domain (Laplace transform). The latter requires fewer new concepts
and is therefore easier to deal with than the time-domain solution. For this reason, we shall
first consider the Laplace transform solution.
This equation gives the desired solution. Observe the two components of the solution. The
first component yields 𝐱(𝑡) when the input 𝐮(𝑡) = 0. Hence the first component is the zero-
input component. In a similar manner, we see that the second component is the zero-state
component.
The output equation is given by 𝐲(𝑡) = 𝑪𝐱(𝑡) + 𝑫𝐮(𝑡) ⟺ 𝐘(𝑠) = 𝑪𝐗(𝑠) + 𝑫𝐔(𝑠)
The zero-state response (that is, the response 𝐘(𝑠) when 𝐱(0) = 𝟎) is given by 𝐘(𝑠) = (𝑪(𝑠𝑰 − 𝑨)⁻¹𝑩 + 𝑫)𝐔(𝑠), so the transfer function matrix is 𝐇(𝑠) = 𝑪(𝑠𝑰 − 𝑨)⁻¹𝑩 + 𝑫.
Remark: The idea behind a transfer function is to figure out what a system does with a
“delta” or impulse input (In other words a very short, very large "spiky" input). After you
know that any other response is just a sum (integral) of a bunch of different impulse
responses (the impulse response is a different way to think of the transfer function)
Now, in order to see how a system reacts to just an impulse input, you need to let the
system stop reacting to any previous inputs. In other words, the system has to be at rest
(have zero state or initial conditions).
Later on we will demonstrate that 𝝓(𝑡) = 𝕃⁻¹(𝛟(𝑠)) = e^{𝑨𝑡}. Now, assuming it is known that 𝝓(𝑡) = e^{𝑨𝑡} is the time-domain function of the transition matrix 𝛟(𝑠), the whole response can be simulated in MATLAB with lsim (the system, input and initial state below are illustrative assumptions):
A = [-1 1; -2 -3]; B = [0; 1]; C = [1 0]; D = 0;   % assumed example system
sys = ss(A, B, C, D);
t = 0:0.01:10; u = ones(size(t)); x0 = [1; -1];    % assumed input and initial state
[y, t, x] = lsim(sys, u, t, x0);
subplot(311), plot(t, y), grid on                  % output y(t)
subplot(312), plot(t, x(:,1)), grid on             % state x1(t)
subplot(313), plot(t, x(:,2)), grid on             % state x2(t)
A state variable representation of a system is not unique. In fact there are infinitely many
representations. Methods for transforming from one set of state variables to another are
discussed below, followed by an example.
We can define a new set of independent variables (i.e., 𝑻 is invertible) 𝐳(𝑡) = 𝑻𝐱(𝑡). Though it
may not be obvious we can use this new set of variables as state variables. Start by solving
for 𝐱: 𝐱(𝑡) = 𝑻−1 𝐳(𝑡).
We can now rewrite the state space model by replacing 𝐱 in the original equations:
{𝐱̇(𝑡) = 𝑨𝐱(𝑡) + 𝑩𝐮(𝑡); 𝐲(𝑡) = 𝑪𝐱(𝑡) + 𝑫𝐮(𝑡)} ⟺ {𝑻⁻¹𝐳̇(𝑡) = 𝑨𝑻⁻¹𝐳(𝑡) + 𝑩𝐮(𝑡); 𝐲(𝑡) = 𝑪𝑻⁻¹𝐳(𝑡) + 𝑫𝐮(𝑡)} ⟺ {𝐳̇(𝑡) = 𝑻𝑨𝑻⁻¹𝐳(𝑡) + 𝑻𝑩𝐮(𝑡); 𝐲(𝑡) = 𝑪𝑻⁻¹𝐳(𝑡) + 𝑫𝐮(𝑡)}
The process of converting transfer function to state space form is not unique. There are
various “realizations” possible. All realizations are “equivalent” (i.e. properties do not
change). However, one representation may have some advantages over others for a
particular task.
Remember that the 𝑖𝑗 𝑡ℎ element of the transfer function matrix 𝐇(𝑠) represents the transfer
function that relates the output 𝐲𝑖 (𝑡) to the input 𝐮𝑗 (𝑡).
𝐇prop(𝑠) = 𝑵(𝑠)/𝑫(𝑠) = (∑_{k=0}^{𝑚} 𝑏_k𝑠^{𝑚−𝑘})/(∑_{k=0}^{𝑛} 𝑎_k𝑠^{𝑛−𝑘}) = 𝑏₀ ∏_{k=1}^{𝑚}(𝑠 − 𝑧_k)/∏_{k=1}^{𝑛}(𝑠 − 𝑝_k)  with 𝑎₀ = 1 and 𝑚 < 𝑛
Where 𝑧𝑘 ∈ ℂ are called the system zeros and 𝑝𝑘 ∈ ℂ are called the system poles. Therefore,
the Zeros are defined as the roots of the polynomial of the numerator of a transfer function
and poles are defined as the roots of the denominator of a transfer function.
Poles and Zeros of a transfer function are the frequencies for which the value of the
denominator and numerator of transfer function becomes zero respectively. The values of
the poles and the zeros of a system determine whether the system is stable, and how well
the system performs. Control systems, in the simplest sense, can be designed simply by
assigning specific values to the poles and zeros of the system.
Remark: Identifying the poles and zeros of a transfer function aids in understanding the
behavior of the system. The standardized form of a transfer function is like a template that
helps us to quickly determine the system’s characteristics.
Physically realizable (causal) systems must have at least as many poles as zeros. Systems that satisfy this relationship are called proper (strictly proper when 𝑚 < 𝑛).
Assuming that there are no repeated poles, then by using partial fraction expansion we obtain
𝐇prop(𝑠) = 𝑵(𝑠)/𝑫(𝑠) = 𝑏₀ ∏_{k=1}^{𝑚}(𝑠 − 𝑧_k)/∏_{k=1}^{𝑛}(𝑠 − 𝑝_k) = 𝒉₁/(𝑠 − 𝑝₁) + 𝒉₂/(𝑠 − 𝑝₂) + ⋯ + 𝒉_n/(𝑠 − 𝑝_n)
Knowing that 𝑝_k = σ_k + jω_k, we can therefore write
𝐡(𝑡) = ∑_{k=1}^{𝑛} 𝒉_k e^{𝑝_k𝑡}𝑢(𝑡)  with  lim_{t→∞} e^{𝑝_k𝑡} = lim_{t→∞} e^{σ_k𝑡}e^{jω_k𝑡} = 0 only if σ_k < 0
From the last two rules, we can see that all poles of the system must have negative real
parts to be stable. We will discuss stability in later chapters.
MATLAB offers several instructions for the calculation of eigenvalues and zeros of a system.
One can calculate the eigenvalues of 𝑨 with the command 𝐞𝐢𝐠. We can also go through the
calculation of the characteristic polynomial of 𝑨 with the command poly, then calculate
the roots of the characteristic polynomial with the command roots. Another way to do this
is to define a sys system object and then calculate its zeros with the zero command and
calculate its eigenvalues with the pole command. It is also possible to calculate the
eigenvalues and the zeros of the system with the command pzmap.
clear all, clc, A=[-1 1 1;-1 -1 1;0 0 2]; B=[1 -1 2]'; C=[0 2 1]; D=1;
% ---------------
Eig_Val=eig(A)
% ---------------
Char_Poly=poly(A)
Eig_Val=roots(Char_Poly)
% ---------------
sys=ss(A,B,C,D)
% ---------------
Eig_Val =pole(sys)
zeros=zero(sys)
% ---------------
[Eig_Val,zeros]=pzmap(sys)
What happens if some poles are repeated?
In the case of repeated poles we obtain the term 1/(𝑠 − 𝑝_k)² in the partial fraction expansion, which corresponds to 𝑡e^{𝑝_k𝑡} in the time domain:
lim_{t→∞} 𝑡e^{𝑝_k𝑡} = lim_{t→∞} 𝑡e^{σ_k𝑡}e^{jω_k𝑡} = { 0 if σ_k < 0;  ∞ if σ_k = 0 }
As a result: A system is asymptotically stable if all its poles have negative real parts. A
system is unstable if at least one pole has a positive real part, or if there are any repeated
poles on the imaginary axis. A system is marginally stable if it has one or more distinct
poles on the imaginary axis, and any remaining poles have negative real parts.
This equation is known as the characteristic equation of the matrix 𝑨, and 𝑝1, 𝑝2 ,..., 𝑝𝑛 are
the characteristic roots of 𝑨. The term eigenvalue, meaning "characteristic value" in
German, is also commonly used in the literature. Thus, we have shown that the
characteristic roots of a system are the eigenvalues (characteristic values) of the matrix 𝑨.
▪ All poles of the system are eigenvalue but not necessarily the converse (because of the
possible cancellation between poles and zeros)
▪ Poles are invariant under similarity transformation, means that all different state space
representations have the same set of eigenvalues.
Time-Domain Solution of State Equations: The state equations of a linear system are 𝑛
simultaneous linear differential equations of the first order. The same techniques of solving
scalar linear differential equations can be applied to state equations without any
modification. However, it is more convenient to carry out the solution in the framework of
matrix notation.
𝐱̇ (𝑡) = 𝑨𝐱(𝑡) + 𝑩𝐮(𝑡)
In the set of solved problems we show that the infinite series is absolutely and uniformly convergent for all values of 𝑡. Consequently, it can be differentiated or integrated term by term. Thus, to find (d/d𝑡)e^{𝑨𝑡}, we differentiate the series defining e^{𝑨𝑡} term by term:
de^{𝑨𝑡}/d𝑡 = 𝑨 + 𝑨²𝑡 + 𝑨³𝑡²/2! + 𝑨⁴𝑡³/3! + ⋯ = 𝑨e^{𝑨𝑡} = e^{𝑨𝑡}𝑨
Also note that from the definition, it follows that
𝑑 𝑑𝑷 𝑑𝑸
(𝑷𝑸) = 𝑸+𝑷
𝑑𝑡 𝑑𝑡 𝑑𝑡
Using this relationship, we observe that
𝑑 −𝑨𝑡 𝑑𝑒 −𝑨𝑡 𝑑𝐱
(𝑒 𝐱) = 𝐱(𝑡) + 𝑒 −𝑨𝑡
𝑑𝑡 𝑑𝑡 𝑑𝑡
𝑑𝐱
= −𝑒 −𝑨𝑡 𝑨𝐱(𝑡) + 𝑒 −𝑨𝑡
𝑑𝑡
= −𝑒 −𝑨𝑡 𝑨𝐱(𝑡) + 𝑒 −𝑨𝑡 (𝑨𝐱(𝑡) + 𝑩𝐮(𝑡))
= e^{−𝑨𝑡}𝑩𝐮(𝑡)
Integrating both sides from 0 to 𝑡 gives e^{−𝑨𝑡}𝐱(𝑡) − 𝐱(0) = ∫₀ᵗ e^{−𝑨τ}𝑩𝐮(τ)dτ, that is,
𝐱(𝑡) = e^{𝑨𝑡}𝐱(0) + ∫₀ᵗ e^{𝑨(𝑡−τ)}𝑩𝐮(τ)dτ
This is the desired solution. The first term on the right-hand side represents 𝐱(𝑡) when the input 𝐮(𝑡) = 0. Hence it is the zero-input component. The second term, by a similar argument, is seen to be the zero-state component.
This result of can be easily generalized for any initial value of 𝑡. It is left as an exercise for
the reader to show that the solution of the state equation can be expressed as
𝐱(𝑡) = e^{𝑨(𝑡−𝑡₀)}𝐱(𝑡₀) + ∫_{𝑡₀}^{𝑡} e^{𝑨(𝑡−τ)}𝑩𝐮(τ)dτ
Determining 𝑒 𝑨𝑡 : The exponential 𝑒 𝑨𝑡 required in 𝐱(𝑡) can be computed from the definition
above. Unfortunately, this is an infinite series, and its computation can be quite laborious.
Moreover, we may not be able to recognize the closed-form expression for the answer. There
are several efficient methods of determining 𝑒 𝑨𝑡 in closed form (see BEKHITI algebra books
2020). The Cayley-Hamilton theorem can be used to evaluate functions of a square matrix
𝑨, as shown below. Consider a function f(𝑨) in the form of an infinite power series f(𝑨) = ∑_{k=0}^{∞} α_k𝑨ᵏ, with scalar counterpart f(λ) = ∑_{k=0}^{∞} α_kλᵏ. By the characteristic equation, λⁿ can be expressed in terms of lower powers: λⁿ = −(𝑎₁λ^{𝑛−1} + ⋯ + 𝑎_n).
If we multiply both sides by 𝜆, the left-hand side is 𝜆𝑛+1, and the right-hand side contains
the terms 𝜆𝑛 , 𝜆𝑛−1 , … , 𝜆. If we substitute 𝜆𝑛 in terms of 𝜆𝑛−1 , 𝜆𝑛−2 , … , 𝜆, the highest power on
the right-hand side is reduced to 𝑛 − 1. Continuing in this way, we see that 𝜆𝑛+𝑘 can be
expressed in terms of 𝜆𝑛 , 𝜆𝑛−1 , … , 𝜆 for any 𝑘. Hence, the infinite series of f(𝜆) can always be
expressed in terms as f(𝜆) = 𝛽0 + 𝛽1 𝜆 + 𝛽2 𝜆2 + ⋯ + 𝛽𝑛−1 𝜆𝑛−1 . If we assume that there are 𝑛
distinct eigenvalues 𝜆1 , 𝜆2 , … 𝜆𝑛 , then the finite series of f(𝜆) holds for these n values of 𝜆. The
substitution of these values in f(𝜆) yields 𝑛 simultaneous equations
(β₀; β₁; ⋮; β_{n−1}) = (1 λ₁ λ₁² ⋯ λ₁^{𝑛−1}; 1 λ₂ λ₂² ⋯ λ₂^{𝑛−1}; ⋮ ⋮ ⋮ ⋱ ⋮; 1 λ_n λ_n² ⋯ λ_n^{𝑛−1})⁻¹ (f(λ₁); f(λ₂); ⋮; f(λ_n))
Since 𝑨 also satisfies the characteristic equation, we may advance a similar argument to
show that if f(𝑨) is a function of a square matrix 𝑨 expressed as an infinite power series in
𝑨, then f(𝑨) = 𝛽0 + 𝛽1 𝑨 + 𝛽2 𝑨2 + ⋯ + 𝛽𝑛−1 𝑨𝑛−1 , in which the coefficients 𝛽𝑖 ′𝑠 are found from
the above matrix equation. If some of the eigenvalues are repeated (multiple roots), the results are somewhat modified, using the generalized Vandermonde matrix.
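The following MATLAB sketch (our own illustration, for an assumed 2×2 matrix with distinct eigenvalues) solves the Vandermonde system for f(λ) = e^{λt} and compares the resulting polynomial in 𝑨 with expm:
A = [0 1; -2 -3]; t = 0.5;        % assumed example; eigenvalues -1 and -2 are distinct
lam = eig(A); n = length(A);
V = fliplr(vander(lam));          % Vandermonde rows [1, lam, lam^2, ...]
beta = V \ exp(lam*t);            % coefficients beta_0 ... beta_{n-1}
E = zeros(n); Ak = eye(n);
for k = 1:n
    E = E + beta(k)*Ak;           % accumulate beta_{k-1} * A^{k-1}
    Ak = Ak*A;
end
norm(E - expm(A*t))               % should be ~0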
The FIR versus IIR: Before starting the talk about the basic structures of digital system
we should differentiate two important types of filters, named IIR and FIR filters. If the
impulse response of the filter falls to zero after a finite period of time, it is an FIR (Finite
Impulse Response) filter. However, if the impulse response exists indefinitely, it is an IIR
(Infinite Impulse Response) filter.
The output values of IIR filters are calculated by adding the weighted sum of previous
and current input values to the weighted sum of previous output values. If the input values
are 𝐱[𝑛] and the output values 𝐲[𝑛], the difference equation defines the IIR filter:
𝐲[𝑛] = (1/𝑎₀)(∑_{k=0}^{𝑀} 𝑏_k𝐱[𝑛 − 𝑘] − ∑_{k=1}^{𝑁} 𝑎_k𝐲[𝑛 − 𝑘]) ⟹ 𝐇(𝑧) = (∑_{k=0}^{𝑀} 𝑏_k𝑧⁻ᵏ)/(𝑎₀ + ∑_{k=1}^{𝑁} 𝑎_k𝑧⁻ᵏ)
The number of forward coefficients 𝑀 and the number of reverse coefficients 𝑁 is usually
equal and is the filter order. The higher the filter order, the more the filter resembles an
ideal filter. If we do long division (power series) we obtain
𝐇(𝑧) = (∑_{k=0}^{𝑀} 𝑏_k𝑧⁻ᵏ)/(𝑎₀ + ∑_{k=1}^{𝑁} 𝑎_k𝑧⁻ᵏ) = ∑_{k=0}^{∞} ℎ_k𝑧⁻ᵏ ⟹ 𝐡[𝑛] = ∑_{k=0}^{∞} ℎ_kδ[𝑛 − 𝑘]
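This long-division view is exactly what MATLAB's impz computes; the small check below (our own addition, with assumed first-order coefficients) confirms it against driving the difference equation with a unit impulse:
b = [1 0.5]; a = [1 -0.9];               % assumed example coefficients
N = 20;
h1 = impz(b, a, N);                      % power-series (long-division) coefficients h_k
h2 = filter(b, a, [1 zeros(1, N-1)])';   % impulse pushed through the difference equation
norm(h1 - h2)                            % should be ~0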
FIR filters are also known as non-recursive filters, convolution filters, or moving-average filters, because the output values of an FIR filter are described by a finite convolution:
𝐲[𝑛] = ∑_{k=0}^{𝑀} 𝑏_k𝐱[𝑛 − 𝑘] = ∑_{k=0}^{𝑀} 𝐡[𝑘]𝐱[𝑛 − 𝑘]
The output values of a FIR filter depend only on the current and past input values.
Because the output values do not depend on past output values, the impulse response
decays to zero in a finite period of time. FIR filters have the following properties:
• FIR filters can achieve a linear phase response and pass a signal without phase distortion.
• They are easier to implement than IIR filters.
• A causal FIR filter has all its poles at 𝑧 = 0.
• FIR filters are always stable, because they have no poles outside the unit circle.
Comparison: The advantage of IIR filters over FIR filters is that IIR filters usually require fewer coefficients to execute similar filtering operations, work faster, and require less memory space. The disadvantage of IIR filters is the nonlinear phase response. IIR filters are well suited for applications that require no phase information, for example, monitoring signal amplitudes. FIR filters are better suited for applications that require a linear phase response. The necessary and sufficient condition for IIR filters to be stable is that all poles lie inside the unit circle. IIR filters consist of zeros and poles and require less memory than FIR filters, whereas FIR filters consist only of zeros. IIR filters can become difficult to implement; delay and distortion adjustments can alter the poles and zeros, which can make the filters unstable, whereas FIR filters remain stable. FIR filters cannot simulate analog filter responses, but IIR filters are designed to do that accurately. The high computational efficiency of IIR filters, with short delays, often makes the IIR approach a popular alternative. In digital feedback systems, as well as in other applications, FIR filters can become too long and cause problems.
III.I Basic Structures for IIR Systems: A filter processes the input and obtains the output
through three types of operations: delay, multiplication, and addition, as evidenced from
the difference equation, system function, or convolution sum. These operations are done by
three elements that are either physical or conceptual, or implemented through hardware or
software tools. The elements are adder, gain (multiplier) and delay element.
The basic structures IIR systems include the direct form I, the direct form II, the cascade
form and the parallel form. These structures, as well as other structures for IIR systems,
have feedback loops.
⟹ 𝒀(𝑧) = 𝑏₀𝑿(𝑧) + ∑_{k=1}^{𝑁} 𝑧⁻ᵏ(𝑏_k𝑿(𝑧) − 𝑎_k𝒀(𝑧))
⟹ 𝑯(𝑧) = 𝒀(𝑧)/𝑿(𝑧) = (∑_{k=0}^{𝑁} 𝑏_k𝑧⁻ᵏ)/(1 + ∑_{k=1}^{𝑁} 𝑎_k𝑧⁻ᵏ)
III.II Basic Structures for FIR Systems: Implementation of a filter (as a hardware or by a
software program) requires an interconnected set, or network, of the above elements, which
according to the filter's description produces the output 𝐲[𝑛] to an incoming 𝐱[𝑛]. Such a
network is called filter's structure. Implementation may be achieved through different
structures. Each structure is associated with the memory requirements, computational
complexity, and accuracy limitations (i.e., the effect of finite-word length). Such
considerations play a central role in choosing a structure that is best suited for a situation.
An analysis of resources or design processes that would determine the best structure for a
filter are not addressed in what follows. The aim is limited to presenting several commonly
used structures for FIR filters, and briefly pointing out some obvious choice criteria.
𝐲[𝑛] = ∑_{k=0}^{𝑁} 𝒃_k𝐱[𝑛 − 𝑘]
The recommended (default) structure within the Filter Designer is the Direct Form
Transposed structure, as this offers superior numerical accuracy when using floating point
arithmetic. This can be readily seen by analyzing the difference equations below (used for
implementation), as the undesirable effects of numerical swamping are minimized, since
floating point addition is performed on numbers of similar magnitude.
𝐲[𝑛] = 𝒃₀𝐱[𝑛] + 𝐰₁[𝑛 − 1]
𝐰₁[𝑛] = 𝒃₁𝐱[𝑛] + 𝐰₂[𝑛 − 1]
𝐰₂[𝑛] = 𝒃₂𝐱[𝑛] + 𝐰₃[𝑛 − 1]
⋮
𝐰_{N−1}[𝑛] = 𝒃_{N−1}𝐱[𝑛] + 𝐰_N[𝑛 − 1]
𝐰_N[𝑛] = 𝒃_N𝐱[𝑛]
⟹ (by back-substitution)  𝐲[𝑛] = ∑_{k=0}^{𝑁} 𝒃_k𝐱[𝑛 − 𝑘]
The Transposed Form structure is considered the most numerically accurate for floating
point implementation, as the undesirable effects of numerical swamping are minimized as
seen by analyzing the difference equations.
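These update equations translate directly into a loop; the sketch below (a hypothetical helper, not code from the text) implements the transposed-form FIR and can be checked against filter(b, 1, x):
function y = fir_transposed(b, x)
% Direct-form transposed FIR: y[n] = b0*x[n] + w1[n-1], wk[n] = bk*x[n] + w_{k+1}[n-1]
% Assumes numel(b) >= 2 (at least one delay element).
N = numel(b) - 1;
w = zeros(1, N);                      % internal states w1..wN (zero initial conditions)
y = zeros(size(x));
for n = 1:numel(x)
    y(n) = b(1)*x(n) + w(1);          % output uses the state from the previous sample
    for k = 1:N-1
        w(k) = b(k+1)*x(n) + w(k+1);  % w(k+1) still holds its previous-sample value
    end
    w(N) = b(N+1)*x(n);
end
end
For any coefficients b and input x, y = fir_transposed(b, x) should match filter(b, 1, x) to round-off.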
Note: Under infinite precision arithmetic any given realization of a digital filter behaves
identically to any other equivalent structure. However, in practice, due to the finite word-
length limitations, a specific realization behaves totally differently from its other equivalent
realizations. Hence, it is important to choose a structure that has the least quantization
effects when implemented using finite precision arithmetic.
FIR filters do not use feedback circuitry, while IIR filters make use of feedback loop in order
to provide previous output in conjunction with current input.
FIR filters are mainly used in bandpass and bandstop filtering applications. While low pass
and anti-aliasing filtering applications require IIR digital filters.
III.III Discrete State Space Representation: Discrete-time systems are either inherently discrete (e.g. models of bank accounts, national economy growth models, population growth models, digital words) or they are obtained as a result of sampling (discretization) of continuous-time systems. In such systems the inputs, state variables, and outputs have discrete form, and the system models can be represented in the form of transition tables. The mathematical model of a discrete-time system can be written in terms of a recursive formula by using linear matrix difference equations as
𝐱[𝑘 + 1] = 𝑨𝐱[𝑘] + 𝑩𝐮[𝑘],  𝐲[𝑘] = 𝑪𝐱[𝑘] + 𝑫𝐮[𝑘]
Similarly to continuous-time linear systems, discrete state space equations can be derived from difference equations. Consider an 𝑛ᵗʰ-order difference equation; to derive the state space equation we introduce a new intermediate variable 𝑾(𝑧), and obtain the companion form
[𝐱₁[𝑘 + 1]; 𝐱₂[𝑘 + 1]; ⋮; 𝐱_n[𝑘 + 1]] = [0 1 0 ⋯ 0; 0 0 1 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ 1; −𝑎₀ −𝑎₁ −𝑎₂ ⋯ −𝑎_{n−1}][𝐱₁[𝑘]; 𝐱₂[𝑘]; ⋮; 𝐱_n[𝑘]] + [0; 0; ⋮; 0; 1]𝐮[𝑘]
𝐱[𝑛] = 𝝓[𝑛]𝐱[0] + ∑_{k=0}^{𝑛−1} 𝝓[𝑛 − 𝑘 − 1]𝑩𝐮[𝑘]
where the first term is the zero-input component and the sum is the zero-state component.
Note that the discrete-time state transition matrix relates the state of an input-free system at the initial time (𝑛 = 0) to the state of the system at any other time 𝑛 > 0, that is, 𝐱[𝑛] = 𝝓[𝑛]𝐱[0] = 𝑨ⁿ𝐱[0].
Remark: If the initial value of the state vector is not 𝐱[0] but 𝐱[𝑛₀], then the solution has to be modified into
𝐱[𝑛] = 𝝓[𝑛 − 𝑛₀]𝐱[𝑛₀] + ∑_{k=𝑛₀}^{𝑛−1} 𝝓[𝑛 − 𝑘 − 1]𝑩𝐮[𝑘]
It is easy to verify that the discrete-time state transition matrix has the following properties:
▪ 𝝓[0] = 𝑨⁰ = 𝑰
▪ 𝝓[𝑛₂ − 𝑛₀] = 𝝓[𝑛₂ − 𝑛₁]𝝓[𝑛₁ − 𝑛₀] = 𝑨^{𝑛₂−𝑛₁}𝑨^{𝑛₁−𝑛₀} = 𝑨^{𝑛₂−𝑛₀}
▪ 𝝓ᵏ[𝑛] = 𝝓[𝑛𝑘] = (𝑨ⁿ)ᵏ = 𝑨^{𝑛𝑘}
▪ 𝝓[𝑛 + 1] = 𝑨𝝓[𝑛]
The output of the system at an instant 𝑛 is obtained by substituting 𝐱[𝑛] into the output
equation 𝐲[𝑛] = 𝑪𝐱[𝑛] + 𝑫𝐮[𝑛], producing
𝑛−1
𝐲[𝑛] = 𝑪𝝓[𝑛]𝐱[0] + ∑ 𝑪𝝓[𝑛 − 𝑘 − 1]𝑩𝐮[𝑘] + 𝑫𝐮[𝑛] = 𝑪𝑨𝑛 𝐱[0] + 𝐡[𝑛] ⋆ 𝐮[𝑛] = 𝐲𝒉 [𝑛] + 𝐲𝒑 [𝑛]
𝑘=0
Where 𝐲𝒉 [𝑛] = 𝑪𝑨𝑛 𝐱[0] is the homogenous solution (the input free response) and 𝐲𝒑 [𝑛] is
the particular solution (the forced response).
𝐡[𝑛] = 𝑪𝑨^{𝑛−1}𝑩 + 𝑫δ[𝑛] = { 𝑫 for 𝑛 = 0;  𝑪𝑨^{𝑛−1}𝑩 for 𝑛 > 0 }
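This formula is easy to verify against MATLAB's discrete-time impulse response; the system below is an assumed stable example (eigenvalues 0.2 and 0.4), not one from the text:
A = [0 1; -0.08 0.6]; B = [0; 1]; C = [1 0]; D = 0;  % assumed example system
N = 8;
h = zeros(N, 1); h(1) = D;                % h[0] = D
for n = 1:N-1
    h(n+1) = C*A^(n-1)*B;                 % h[n] = C*A^(n-1)*B for n > 0
end
sys = ss(A, B, C, D, 1);                  % discrete system, unit sample time
hh = impulse(sys, 0:N-1);
norm(h - hh)                              % should be ~0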
The discrete-time state transition matrix defined by 𝝓[𝑛] = 𝑨𝑛 can be evaluated efficiently
for large values of 𝑛 by using a method based on the Cayley–Hamilton theorem and
described in. It can be also evaluated by using the Z-transform method, to be derived in the
next subsection.
Solution Using the Z-transform: Applying the Z-transform to the state space equation of a discrete-time linear system gives 𝑧𝐗(𝑧) − 𝑧𝐱[0] = 𝑨𝐗(𝑧) + 𝑩𝐔(𝑧), so that
𝐗(𝑧) = (𝑧𝑰 − 𝑨)⁻¹𝑧𝐱[0] + (𝑧𝑰 − 𝑨)⁻¹𝑩𝐔(𝑧)
We conclude that 𝝓[𝑛] = ℤ⁻¹[𝑧(𝑧𝑰 − 𝑨)⁻¹] = 𝑨ⁿ, 𝑛 = 1, 2, 3, …, and 𝝓(𝑧) = 𝑧(𝑧𝑰 − 𝑨)⁻¹.
The inverse transform of the second term on the right-hand side is obtained directly by the application of the discrete-time convolution, which produces ∑_{k=0}^{𝑛−1} 𝑨^{𝑛−𝑘−1}𝑩𝐮[𝑘].
The frequency-domain form of the output vector 𝐘(𝑧) is obtained if the Z-transform is applied to the output equation and 𝐗(𝑧) is eliminated, leading to 𝐘(𝑧) = 𝑪(𝑧𝑰 − 𝑨)⁻¹𝑧𝐱[0] + (𝑪(𝑧𝑰 − 𝑨)⁻¹𝑩 + 𝑫)𝐔(𝑧).
From this expression, for zero initial conditions, the discrete matrix transfer function is defined by 𝐇(𝑧) = 𝑪(𝑧𝑰 − 𝑨)⁻¹𝑩 + 𝑫.
Now let us see the effect of the poles on the shape of the response and/or the stability of a discrete-time system.
𝐇prop(𝑧) = 𝑪(𝑧𝑰 − 𝑨)⁻¹𝑩 = 𝑪(Adj(𝑧𝑰 − 𝑨)/|𝑧𝑰 − 𝑨|)𝑩 = ∑_{k=1}^{𝑛} 𝑬_k𝑧⁻¹/(1 − λ_k𝑧⁻¹) ⟺ 𝐡prop[𝑛] = ∑_{k=1}^{𝑛} 𝑬_k(λ_k)^{𝑛−1}
Hence lim_{n→∞} 𝐡prop[𝑛] = 0 only if |λ_k| < 1 for all 𝑘, with λ_k = σ_k + jω_k = 𝑟_k e^{jθ_k} ∈ ℂ.
From the last equation, we can see that all poles of the discrete-time system must be inside the unit disk for the system to be stable.
In the case of repeated poles we obtain the term 𝑛(λ_k)^{𝑛−1} in the time domain. Hence, we get
lim_{n→∞} 𝑛(λ_k)^{𝑛−1} = lim_{n→∞} 𝑛(𝑟_k)^{𝑛−1}e^{jθ_k(𝑛−1)} = { ∞ if 𝑟_k = 1;  0 if 𝑟_k < 1 }
As a result: A discrete-time system is asymptotically stable if all its poles are inside the unit disk. A system is unstable if at least one pole is outside the unit disk, or if there are any repeated poles on the unit circle. A system is marginally stable if it has one or more distinct poles on the unit circle, and any remaining poles are inside the unit disk.
Solved Problems:
Recall that: The Cayley–Hamilton theorem says that any matrix satisfies its own characteristic equation: Δ(𝐀) = 𝟎, where Δ(λ) = det(λ𝑰 − 𝐀) is the characteristic polynomial. This theorem follows immediately from the fact that the minimal polynomial 𝑚(λ) divides Δ(λ). Hence the 𝑛ᵗʰ power of 𝑨, and inductively all higher powers, are expressible as a linear combination of 𝑰, 𝑨, …, 𝑨^{𝑛−1}. Thus any power series in 𝑨 can be reduced to a polynomial in 𝑨 of degree at most 𝑛 − 1; in particular
e^{𝑨𝑡} = ∑_{k=0}^{∞} 𝑡ᵏ𝑨ᵏ/𝑘! = ∑_{k=0}^{𝑛−1} α_k(𝑡)𝑨ᵏ
Exercise: 01 Prove that (d/d𝑡)e^{𝑨𝑡} = 𝑨e^{𝑨𝑡} = e^{𝑨𝑡}𝑨.
Solution:
(d/d𝑡)e^{𝑨𝑡} = lim_{h→0} (1/ℎ)(e^{𝑨(𝑡+ℎ)} − e^{𝑨𝑡}) = lim_{h→0} (1/ℎ)(e^{𝑨ℎ} − 𝑰)e^{𝑨𝑡} = lim_{h→0} (∑_{k=1}^{∞} (ℎ^{𝑘−1}/𝑘!)𝑨ᵏ) e^{𝑨𝑡} = 𝑨e^{𝑨𝑡} = e^{𝑨𝑡}𝑨
Exercise: 02 Prove that, if 𝑨(𝑡) ∈ ℝ𝑛×𝑠 and 𝑩(𝑡) ∈ ℝ𝑠×𝑚 are differentiable matrix-valued
functions of 𝑡, then the matrix product 𝑨(𝑡)𝑩(𝑡) is differentiable, and its derivative is
(d/d𝑡)(𝑨(𝑡)𝑩(𝑡)) = (d𝑨(𝑡)/d𝑡)𝑩(𝑡) + 𝑨(𝑡)(d𝑩(𝑡)/d𝑡)
Solution:
(d/d𝑡)(𝑨(𝑡)𝑩(𝑡))_{ij} = (d/d𝑡) ∑_{k=1}^{𝑠} 𝑎_{ik}𝑏_{kj} = ∑_{k=1}^{𝑠} (d/d𝑡)(𝑎_{ik}𝑏_{kj}) = ∑_{k=1}^{𝑠} ((d𝑎_{ik}/d𝑡)𝑏_{kj} + 𝑎_{ik}(d𝑏_{kj}/d𝑡))
⟺ (d/d𝑡)(𝑨(𝑡)𝑩(𝑡)) = (d𝑨(𝑡)/d𝑡)𝑩(𝑡) + 𝑨(𝑡)(d𝑩(𝑡)/d𝑡) ■
Exercise: 03 Prove that, first-order linear differential equation 𝐱̇ (𝑡) = 𝑨𝐱(𝑡), 𝐱 0 = 𝐱(0) has
the unique solution 𝐱(𝑡) = 𝑒 𝑨𝑡 𝐱 0 .
Solution:
(d/d𝑡)𝐱(𝑡) = (d/d𝑡)(e^{𝑨𝑡}𝐱₀) = 𝑨e^{𝑨𝑡}𝐱₀ = 𝑨𝐱(𝑡)  with  𝐱(0) = e^{𝑨0}𝐱₀ = 𝐱₀
Therefore, 𝐱(𝑡) = e^{𝑨𝑡}𝐱₀ is a solution of 𝐱̇(𝑡) = 𝑨𝐱(𝑡), 𝐱(0) = 𝐱₀. Let us prove the uniqueness of this solution: for any solution 𝐱(𝑡), multiply by e^{−𝑨𝑡}, so
(d/d𝑡)(e^{−𝑨𝑡}𝐱(𝑡)) = −𝑨e^{−𝑨𝑡}𝐱(𝑡) + e^{−𝑨𝑡}𝑨𝐱(𝑡) = 𝟎 ⟹ e^{−𝑨𝑡}𝐱(𝑡) = 𝑪
Therefore e^{−𝑨𝑡}𝐱(𝑡) is a constant column vector, say 𝑪, and 𝐱(𝑡) = e^{𝑨𝑡}𝑪. As 𝐱(0) = 𝐱₀, we obtain 𝑪 = 𝐱₀. Consequently, 𝐱(𝑡) = e^{𝑨𝑡}𝐱₀ is the only solution. ■
Exercise: 04 Prove that if 𝑨𝑩 = 𝑩𝑨, then e^{𝑨+𝑩} = e^{𝑨}e^{𝑩} = e^{𝑩}e^{𝑨}.
Solution: Since 𝑨 and 𝑩 commute, the binomial theorem applies to (𝑨 + 𝑩)ᵏ:
e^{𝑨+𝑩} = ∑_{k=0}^{∞} (1/𝑘!)(𝑨 + 𝑩)ᵏ = ∑_{k=0}^{∞} (1/𝑘!) ∑_{s=0}^{𝑘} (𝑘!/(𝑠!(𝑘 − 𝑠)!))𝑨ˢ𝑩^{𝑘−𝑠} = ∑_{k=0}^{∞} ∑_{s+r=k} (1/𝑠!)(1/𝑟!)𝑨ˢ𝑩ʳ
     = (∑_{s=0}^{∞} (1/𝑠!)𝑨ˢ)(∑_{r=0}^{∞} (1/𝑟!)𝑩ʳ) = (∑_{r=0}^{∞} (1/𝑟!)𝑩ʳ)(∑_{s=0}^{∞} (1/𝑠!)𝑨ˢ)
Exercise: 06 Prove that, if 𝐟(𝑡) = 𝑒 𝑨𝑡 ∈ ℝ𝑛×𝑛 is matrix-valued function of 𝑡 then its Laplace
transform is given by
𝐅(𝑠) = ∫₀^∞ e^{−𝑠𝑡}𝐟(𝑡)d𝑡 = (𝑠𝑰 − 𝑨)⁻¹
Solution: Notice that
∫₀^∞ e^{−𝑠𝑡}e^{𝑨𝑡}d𝑡 = ∫₀^∞ e^{−(𝑠𝑰−𝑨)𝑡}d𝑡 = ∫₀^∞ (𝑠𝑰 − 𝑨)⁻¹(𝑠𝑰 − 𝑨)e^{−(𝑠𝑰−𝑨)𝑡}d𝑡
     = (𝑠𝑰 − 𝑨)⁻¹ ∫₀^∞ (𝑠𝑰 − 𝑨)e^{−(𝑠𝑰−𝑨)𝑡}d𝑡 = (𝑠𝑰 − 𝑨)⁻¹(−∫₀^∞ (d/d𝑡)e^{−(𝑠𝑰−𝑨)𝑡}d𝑡) = (𝑠𝑰 − 𝑨)⁻¹
(for Re(𝑠) large enough that e^{−(𝑠𝑰−𝑨)𝑡} → 𝟎 as 𝑡 → ∞).
Exercise: 07 First we define ‖𝑨‖ = max{|𝑎𝑖𝑗 | , 1 ≤ 𝑖, 𝑗 ≤ 𝑛} thus |𝑎𝑖𝑗 | ≤ ‖𝑨‖ for all 𝑖, 𝑗. This is
one of several possible “norms” on ℝ𝑛×𝑛 . Prove that, if 𝑨, 𝑩 ∈ ℝ𝑛×𝑛 then ‖𝑨𝑩‖ ≤ 𝑛‖𝑨‖‖𝑩‖
and ‖𝑨𝑘 ‖ ≤ 𝑛𝑘−1 ‖𝑨‖𝑘
Solution: For any 𝑖, 𝑗 we have |(𝑨𝑩)_{ij}| = |∑_{k=1}^{n} 𝑎_{ik}𝑏_{kj}| ≤ ∑_{k=1}^{n} |𝑎_{ik}||𝑏_{kj}| ≤ 𝑛‖𝑨‖‖𝑩‖. Thus ‖𝑨𝑩‖ ≤ 𝑛‖𝑨‖‖𝑩‖. The second inequality follows from the first by induction. ■
Exercise: 08 Prove that, if 𝑨 ∈ ℝ^{n×n} then (𝑒^{𝑨𝑡})^⊤ = 𝑒^{𝑨^⊤𝑡} and ‖𝑒^{𝑨𝑡}‖ ≤ 𝑒^{𝑡𝑛‖𝑨‖} (using the norm defined before).
❶ Transposing the series term by term: (𝑒^{𝑨𝑡})^⊤ = (∑_{k=0}^{∞} (𝑡^k/k!)𝑨^k)^⊤ = ∑_{k=0}^{∞} (𝑡^k/k!)(𝑨^⊤)^k = 𝑒^{𝑨^⊤𝑡}.
❷ Using the previous exercise, ‖𝑨^k‖ ≤ 𝑛^{k−1}‖𝑨‖^k, hence
‖𝑒^{𝑨𝑡}‖ = ‖∑_{k=0}^{∞} (𝑡^k/k!)𝑨^k‖ ≤ ∑_{k=0}^{∞} (𝑡^k/k!)‖𝑨^k‖ ≤ ∑_{k=0}^{∞} (𝑡^k/k!)𝑛^{k−1}‖𝑨‖^k ≤ ∑_{k=0}^{∞} (𝑡^k/k!)𝑛^k‖𝑨‖^k = 𝑒^{𝑡𝑛‖𝑨‖}
Exercise: 09 Prove that, if 𝑨 ∈ ℝ𝑛×𝑛 and 𝐱 ∈ ℝ𝑛 then there exists a constant 𝐶 such that
‖𝑨𝐱‖ ≤ 𝐶‖𝑨‖‖𝐱‖ and show that if 𝜆 is an eigenvalue of 𝑨, then |𝜆| ≤ 𝑛‖𝑨‖
Exercise: 10 Let 𝑨, 𝑩 ∈ ℝ^{n×n} and 𝐱 ∈ ℝ^n, and define the vector norm ‖𝐱‖ = √(∑_{i=1}^{n}|𝑥_i|²) and the matrix Frobenius (or Schur) norm ‖𝑨‖ = √(∑_{ij}|𝑎_{ij}|²). Prove that
❶ ‖𝑨^{−1}‖ ≥ 1/‖𝑨‖
❷ ‖𝑨^{−1} − 𝑩^{−1}‖ ≤ ‖𝑨^{−1}‖ ‖𝑩^{−1}‖ ‖𝑨 − 𝑩‖
❸ ‖𝑨^{−1}‖ ‖𝑩‖ < 1 ⟹ ‖(𝑨 − 𝑩)^{−1}‖ ≤ 1/(‖𝑨^{−1}‖^{−1} − ‖𝑩‖)
❹ If we define a polynomial 𝒑_k(𝑥) = ∑_{i=0}^{k} 𝑎_i𝑥^i and the matrix evaluation 𝒑_k(𝑨) = ∑_{i=0}^{k} 𝑎_i𝑨^i, then ‖𝒑_k(𝑨)‖ ≤ 𝒑_k(‖𝑨‖)
❺ If ‖𝑨‖ < 1, then 1/(1 + ‖𝑨‖) ≤ ‖(𝑰 ± 𝑨)^{−1}‖ ≤ 1/(1 − ‖𝑨‖)
Solution:
1 = ‖𝑰‖ ≤ ‖(𝑰 + 𝑨)^{−1}‖ ‖𝑰 + 𝑨‖ ≤ ‖(𝑰 + 𝑨)^{−1}‖(‖𝑰‖ + ‖𝑨‖) ≤ ‖(𝑰 + 𝑨)^{−1}‖(1 + ‖𝑨‖)
⟹ 1/(1 + ‖𝑨‖) ≤ ‖(𝑰 + 𝑨)^{−1}‖   (I)
‖(𝑰 + 𝑨)^{−1}‖ = ‖𝑰 − (𝑰 + 𝑨)^{−1}𝑨‖ ≤ ‖𝑰‖ + ‖(𝑰 + 𝑨)^{−1}𝑨‖ ≤ ‖𝑰‖ + ‖(𝑰 + 𝑨)^{−1}‖ ‖𝑨‖
⟹ (1 − ‖𝑨‖)‖(𝑰 + 𝑨)^{−1}‖ ≤ 1 ⟹ ‖(𝑰 + 𝑨)^{−1}‖ ≤ 1/(1 − ‖𝑨‖)   (II)
If we replace 𝑨 by −𝑨 we obtain
1/(1 + ‖𝑨‖) ≤ ‖(𝑰 − 𝑨)^{−1}‖ ≤ 1/(1 − ‖𝑨‖)
Exercise: 11 For the linear system 𝑨𝐱 = 𝒃, bound the relative error of the solution 𝐱 caused by ❶ a perturbation 𝛿𝒃 of the right-hand side and ❷ a perturbation 𝛿𝑨 of the matrix.
❶ 𝑨𝛿𝐱 = 𝛿𝒃 ⟹ 𝛿𝐱 = 𝑨^{−1}𝛿𝒃 ⟹ ‖𝛿𝐱‖ ≤ ‖𝑨^{−1}‖ ‖𝛿𝒃‖; also we have ‖𝒃‖ ≤ ‖𝑨‖ ‖𝐱‖, so we get
‖𝛿𝐱‖/‖𝐱‖ ≤ ‖𝑨^{−1}‖ ‖𝑨‖ (‖𝛿𝒃‖/‖𝒃‖) ⟺ ‖𝛿𝐱‖/‖𝐱‖ ≤ cond(𝑨) (‖𝛿𝒃‖/‖𝒃‖)
❷ (𝑨 + 𝛿𝑨)(𝐱 + 𝛿𝐱) = 𝒃 ⟹ 𝑨𝛿𝐱 = −𝛿𝑨(𝐱 + 𝛿𝐱) ⟹ 𝛿𝐱 = −𝑨^{−1}𝛿𝑨(𝐱 + 𝛿𝐱); taking the norm of both sides we obtain ‖𝛿𝐱‖ ≤ ‖𝑨^{−1}‖ ‖𝛿𝑨‖ ‖𝐱 + 𝛿𝐱‖ ≤ ‖𝑨^{−1}‖ ‖𝛿𝑨‖ (‖𝐱‖ + ‖𝛿𝐱‖), so
‖𝛿𝐱‖(1 − ‖𝑨^{−1}‖ ‖𝛿𝑨‖) ≤ ‖𝑨^{−1}‖ ‖𝛿𝑨‖ ‖𝐱‖ ⟹ ‖𝛿𝐱‖/‖𝐱‖ ≤ ‖𝑨^{−1}‖ ‖𝛿𝑨‖/(1 − ‖𝑨^{−1}‖ ‖𝛿𝑨‖)
Remark: Some of the exercises above are included only for completeness of the state-space/matrix analysis (they are, in fact, matrix-algebra material).
CHAPTER IX:
Sampling and
Reconstruction
If we choose to index the temperature values with integers, as shown in the third row of the table, then we can view the result as a discrete-time signal 𝐱[𝑛].
Generalizing the temperature example used above, the relationship between the continuous
time signal 𝐱(𝑡) and its discrete-time counterpart 𝐱[𝑛] is 𝐱[𝑛] = 𝐱(𝑡)|𝑡=𝑛𝑇𝑠 = 𝐱(𝑛𝑇𝑠 ) where 𝑇𝑠 is
the sampling interval, that is, the time interval between consecutive samples. It is also
referred to as the sampling period. The reciprocal of the sampling interval is called the
sampling rate or the sampling frequency: 𝑓𝑠 = 1/𝑇𝑠 Hz.
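For illustration, the relation 𝐱[𝑛] = 𝐱(𝑛𝑇_s) can be realized directly in MATLAB; the 5 Hz tone and the 100 Hz rate below are arbitrary choices of ours:
% Sampling a continuous-time signal: x[n] = x(n*Ts).
fs = 100; Ts = 1/fs;           % Sampling rate and sampling interval
x  = @(t) cos(2*pi*5*t);       % A 5 Hz test tone
n  = 0:49;                     % Sample index
xn = x(n*Ts);                  % Discrete-time signal x[n]
stem(n, xn); xlabel('n'); ylabel('x[n]');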
Thus the act of sampling allows us to obtain a discrete-time signal 𝐱[𝑛] from the
continuous time signal 𝐱(𝑡) . While any signal can be sampled with any time interval
between consecutive samples, there are certain questions that need to be addressed before
we can be confident that 𝐱[𝑛] provides an accurate representation of 𝐱(𝑡) .
Can we always reconstruct the original signal from the discrete one? If not, at what sampling rate will the reconstruction be good and accurate? For which sampling period do the discrete-time samples provide enough information about the original signal?
The process of converting from digital back to analog is called reconstruction. When we plot a function on a computer we do not actually plot the true continuous-time waveform. Instead, we plot values of the waveform only at isolated (discrete) points in time and then connect those points with straight lines. Mathematicians call this reconstruction process interpolation, because it may be represented as a time-domain interpolation formula.
𝐲(𝑡) is a train of impulses spaced every 𝑇 seconds (impulse-sampled version of 𝐱(𝑡)). The
strength of an impulse at 𝑡 = 𝑛𝑇 is equal to the magnitude of 𝐱(𝑡) at that point. We would
like to find conditions and methods for recovering 𝐱(𝑡) from 𝐲(𝑡).
At this point, we need to pose a critical question: How dense must the impulse train 𝐬(𝑡) be
so that the impulse-sampled signal 𝐲(𝑡) is an accurate and complete representation of the
original signal 𝐱(𝑡)? In other words, what are the restrictions on the sampling interval 𝑇𝑠 or,
equivalently, the sampling rate 𝑓𝑠 ? In order to answer this question, we need to develop
some insight into how the frequency spectrum of the impulse-sampled signal 𝐲(𝑡) relates to
the spectrum of the original signal 𝐱(𝑡).
As discussed in previous chapters, 𝐬(𝑡) = ∑_{k=−∞}^{∞} 𝛿(𝑡 − 𝑘𝑇_s) can be represented by an exponential Fourier series expansion of the form 𝐬(𝑡) = ∑_{k=−∞}^{∞} 𝑐_k 𝑒^{𝑗𝑘𝜔_s𝑡}, where 𝜔_s is both the sampling rate in rad/s and the fundamental frequency of the impulse train. It is computed as 𝜔_s = 2𝜋𝑓_s = 2𝜋/𝑇_s. The exponential FS coefficients for 𝐬(𝑡) are found by integrating over one period:
𝑐_k = (1/𝑇_s) ∫_{−𝑇_s/2}^{𝑇_s/2} 𝐬(𝑡)𝑒^{−𝑗𝑘𝜔_s𝑡} d𝑡 = (1/𝑇_s) ∫_{−𝑇_s/2}^{𝑇_s/2} 𝛿(𝑡)𝑒^{−𝑗𝑘𝜔_s𝑡} d𝑡 = 1/𝑇_s for all 𝑘
Substituting these exponential FS coefficients into 𝐬(𝑡), the impulse train becomes
𝐬(𝑡) = (1/𝑇_s) ∑_{k=−∞}^{∞} 𝑒^{𝑗𝑘𝜔_s𝑡} ⟹ 𝐲(𝑡) = 𝐱(𝑡)𝐬(𝑡) = (1/𝑇_s) ∑_{k=−∞}^{∞} 𝐱(𝑡)𝑒^{𝑗𝑘𝜔_s𝑡}
In order to determine the frequency spectrum of the impulse-sampled signal 𝐲(𝑡), let us take the Fourier transform of both sides of the last equation:
𝔽(𝐲(𝑡)) = 𝔽((1/𝑇_s) ∑_{k=−∞}^{∞} 𝐱(𝑡)𝑒^{𝑗𝑘𝜔_s𝑡}) = (1/𝑇_s) ∑_{k=−∞}^{∞} 𝔽(𝐱(𝑡)𝑒^{𝑗𝑘𝜔_s𝑡})
Linearity property of the Fourier transform was used in obtaining the result. Furthermore,
using the frequency shifting property of the Fourier transform, the term inside the
summation becomes: 𝔽(𝐱(𝑡)𝑒 𝑗𝑘𝜔𝑠 𝑡 ) = 𝑿(𝜔 − 𝑘𝜔𝑠 ). The Fourier transform of the impulse-
sampled signal 𝐲(𝑡) is related to the Fourier transform of the original signal by
𝒀(𝜔) = (1/𝑇_s) ∑_{k=−∞}^{∞} 𝔽(𝐱(𝑡)𝑒^{𝑗𝑘𝜔_s𝑡}) = (1/𝑇_s) ∑_{k=−∞}^{∞} 𝑿(𝜔 − 𝑘𝜔_s)
This relation is called pulse modulation.
For the impulse-sampled signal to be an accurate and complete representation of the
original signal, 𝐱(𝑡) should be recoverable from 𝐲(𝑡). This in turn requires that the
frequency spectrum 𝑿(𝜔) be recoverable from the frequency spectrum 𝒀(𝜔). In the above
figure the spectrum 𝑿(𝜔) used for the original signal is bandlimited to the frequency range
|𝜔| ≤ 𝜔max . Sampling rate 𝜔𝑠 is chosen such that the repetitions of 𝑿(𝜔) do not overlap with
each other in the construction of 𝒀(𝜔), that is 𝜔𝑠 ≥ 2𝜔max . As a result, the shape of the
original spectrum 𝑿(𝜔) is preserved within the sampled spectrum 𝒀(𝜔). This ensures that
𝐱(𝑡) is recoverable from 𝐲(𝑡).
Remark: When the spectral sections overlap, the spectrum is corrupted by the sampling process (the sampling rate 𝜔_s was not chosen carefully). In other words, if a signal is not band-limited, or if the sampling rate is too low, the spectral components of the signal will overlap one another, and this condition is called aliasing. Once the spectrum is aliased, the original signal is no longer recoverable from its sampled version.
Theorem: (Nyquist-Shannon Theorem) Let 𝐱(𝑡) be any energy signal which is band-limited
with its highest frequency component less than 𝜔max (that is 𝑿(𝜔) = 0 for all |𝜔| > 𝜔max
with a bandwidth 𝜔max ). When 𝐱(𝑡) is sampled using a sampling frequency 𝜔𝑠 ≥ 2𝜔max ,
then it is possible to reconstruct this signal from its samples 𝐱[𝑛].
The Nyquist-Shannon Theorem is sometimes called the Sampling Theorem. The sampling rate of 2𝜔_max is called the Nyquist rate. The selected sampling frequency is denoted 𝑓_s, and we will denote the Nyquist frequency 𝑓_s/2 as 𝑓_N. For the impulse-sampled signal to form an
accurate representation of the original signal, the sampling rate must be at least twice the
highest frequency in the spectrum of the original signal. This is known as the Nyquist
sampling criterion. It was named after Harry Nyquist (1889-1976) who first introduced the
idea in his work on telegraph transmission. Later it was formally proven by his colleague
Claude Shannon (1916-2001) in his work that formed the foundations of information theory.
When an analog signal is sampled, the most important factor is the selection of the
sampling frequency 𝑓𝑠 . In simple words, Sampling Theorem may be stated as, “Sampling
frequency is appropriate when one can recover the analog signal back from the signal
samples.” If the signal cannot be faithfully recovered, then the sampling frequency needs
correction.
In practice, the condition 𝑓_s ≥ 2𝑓_max is usually met with strict inequality, with sufficient margin between the two terms to allow for the imperfections of practical samplers and reconstruction systems. In practical implementations of samplers, the sampling rate 𝑓_s is
typically fixed by the constraints of the hardware used. On the other hand, the highest
frequency of the actual signal to be sampled is not always known a priori. One example of
this is the sampling of speech signals where the highest frequency in the signal depends on
the speaker, and may vary. In order to ensure that the Nyquist sampling criterion in
𝑓𝑠 ≥ 2𝑓max is met regardless, the signal is processed through an anti-aliasing filter before it is
sampled, effectively removing all frequencies that are greater than half the sampling rate.
This is illustrated in the next figure
II.I. DTFT of sampled signal: The relationship between the Fourier transforms of the
continuous-time signal and its impulse-sampled version is given by the following equation
𝒀(𝜔) = {∑∞𝑘=−∞ 𝑿(𝜔 − 𝑘𝜔𝑠 )}/𝑇𝑠 . As discussed before, the purpose of sampling is to ultimately
create a discrete-time signal 𝐱[𝑛] from a continuous time signal 𝐱(𝑡). The discrete-time
signal can then be converted to a digital signal suitable for storage and manipulation on
digital computers.
Let 𝐱[𝑘] be defined in terms of 𝐱(𝑡) as 𝐱[𝑘] = 𝐱(𝑡)|_{𝑡=𝑘𝑇_s}. The Fourier transform of 𝐱(𝑡) is defined by 𝑿(𝜔) = ∫_{−∞}^{∞} 𝐱(𝑡)𝑒^{−𝑗𝜔𝑡} d𝑡. Similarly, the DTFT of the discrete-time signal 𝐱[𝑘] is 𝑿(Ω) = ∑_{k=−∞}^{∞} 𝐱[𝑘]𝑒^{−𝑗Ω𝑘}.
We would like to understand the relationship between the two transforms 𝑿(𝜔) and 𝑿(Ω). Let us define Ω = 𝜔𝑇_s; this leads to the conclusion
𝒀(𝜔) = (1/𝑇_s) ∑_{k=−∞}^{∞} 𝑿(𝜔 − 𝑘𝜔_s) ⟹ 𝒀(Ω) = (1/𝑇_s) ∑_{k=−∞}^{∞} 𝑿((Ω − 2𝜋𝑘)/𝑇_s)
where 𝒀(Ω) and 𝑿(Ω) denote 𝒀(𝜔)|_{𝜔=Ω/𝑇_s} and 𝑿(𝜔)|_{𝜔=Ω/𝑇_s}, respectively.
1. What would happen to the spectrum if we used a pulse train instead of an impulse train?
2. How would the use of pulses affect the methods used in recovering the original signal
from its sampled version?
When pulses are used instead of impulses, there are two variations of the sampling
operation that can be used, namely natural sampling and zero-order hold sampling. The
former is easier to generate electronically while the latter lends itself better to digital coding
through techniques known as pulse-code modulation and delta modulation. We will review
each sampling technique briefly.
Natural sampling: Instead of using the periodic impulse train 𝐬(𝑡) = ∑_{k=−∞}^{∞} 𝛿(𝑡 − 𝑘𝑇_s), let the multiplying signal 𝐬(𝑡) be defined as a periodic pulse train with a duty cycle of 𝑑:
𝐬(𝑡) = ∑_{k=−∞}^{∞} Π((𝑡 − 𝑘𝑇_s)/(𝑑𝑇_s))
with
Π(𝑡) = { 1 for |𝑡| ≤ 1/2 ; 0 for |𝑡| > 1/2 }
The ∏(𝑡) signal represents a unit pulse, that is, a pulse with unit amplitude and unit width
centered around the time origin 𝑡 = 0. The period of the pulse train is 𝑇𝑠 , the same as the
sampling interval. The width of each pulse is 𝑑𝑇𝑠 as shown in Figure.
Multiplication of the signal 𝐱(𝑡) with 𝐬(𝑡) yields a naturally sampled version of the signal 𝐱(𝑡):
𝐲(𝑡) = 𝐱(𝑡)𝐬(𝑡) = 𝐱(𝑡) ∑_{k=−∞}^{∞} Π((𝑡 − 𝑘𝑇_s)/(𝑑𝑇_s))
In order to derive the relationship between the frequency spectra of the signal 𝐱(𝑡) and its naturally sampled version 𝐲(𝑡), we will make use of the exponential Fourier series representation of 𝐬(𝑡). The exponential FS coefficients for a pulse train with duty cycle 𝑑 were found as
𝑐_k = (1/𝑇_s) ∫_{𝑇_s} 𝐬(𝑡)𝑒^{−𝑗𝑘𝜔_s𝑡} d𝑡 = 𝑑 (sin(𝜋𝑘𝑑)/(𝜋𝑘𝑑)) = 𝑑 sinc(𝑘𝑑)
Therefore the EFS representation of 𝐬(𝑡) is 𝐬(𝑡) = 𝑑 ∑_{k=−∞}^{∞} sinc(𝑘𝑑) 𝑒^{𝑗𝑘𝜔_s𝑡}. The fundamental frequency is the same as the sampling rate 𝜔_s = 2𝜋/𝑇_s. Using 𝐲(𝑡) = 𝐱(𝑡)𝐬(𝑡), the naturally sampled signal is
𝐲(𝑡) = 𝐱(𝑡) ∑_{k=−∞}^{∞} 𝑑 sinc(𝑘𝑑) 𝑒^{𝑗𝑘𝜔_s𝑡} ⟹ 𝒀(𝜔) = ∫_{−∞}^{∞} (𝐱(𝑡) ∑_{k=−∞}^{∞} 𝑑 sinc(𝑘𝑑) 𝑒^{𝑗𝑘𝜔_s𝑡}) 𝑒^{−𝑗𝜔𝑡} d𝑡
Interchanging the order of integration and summation and rearranging terms, we obtain
𝒀(𝜔) = 𝑑 ∑_{k=−∞}^{∞} sinc(𝑘𝑑) [∫_{−∞}^{∞} 𝐱(𝑡)𝑒^{−𝑗(𝜔−𝑘𝜔_s)𝑡} d𝑡] = ∑_{k=−∞}^{∞} (sin(𝜋𝑘𝑑)/(𝜋𝑘)) 𝑿(𝜔 − 𝑘𝜔_s)
When the FET transistor is on, the analog voltage is shorted to ground; when it is off, the FET is essentially open, so the analog signal sample appears at the output. Op-amp 1 is a noninverting amplifier that isolates the analog input channel from the switching function. The resistor 𝑅 limits the output current of op-amp 1 when the FET is on and forms a voltage divider with 𝑟_d, the drain-to-source resistance of the FET (𝑟_d is low but not zero).
Flat Top Sampling: In natural sampling the tops of the pulses are not flat, but are
rather shaped by the signal 𝐱(𝑡). This behavior is not always desired, especially when the
sampling operation is to be followed by conversion of each pulse to digital format. An
alternative is to hold the amplitude of each pulse constant, equal to the value of the signal
at the left edge of the pulse. This is referred to as zero-order hold sampling or flat-top
sampling, and is illustrated in Figure.
Also, the flat-top pulse signal 𝐲̅(𝑡) is mathematically equivalent to the convolution of the instantaneous samples with a pulse 𝒉_{zoh}(𝑡):
𝐲̅(𝑡) = 𝒉_{zoh}(𝑡) ⋆ 𝐲(𝑡)
𝑯_{ZOH}(𝑗𝜔) = 𝑒^{−𝑗𝜔𝑑𝑇_s/2} (sin(𝜔𝑑𝑇_s/2)/(𝜔/2))
𝒀̅(𝜔) = 𝑯_{ZOH}(𝑗𝜔)𝒀(𝜔) = (sin(𝜔𝑑𝑇_s/2)/(𝜔𝑇_s/2)) 𝑒^{−𝑗𝜔𝑑𝑇_s/2} ∑_{k=−∞}^{∞} 𝑿(𝜔 − 𝑘𝜔_s)
Remark: In most applications, we try to obey the constraint of the sampling theorem by
sampling at a rate higher than twice the highest frequency 𝑓𝑠 > 2𝑓𝑚𝑎𝑥 in order to avoid the
problems of aliasing. This is called over-sampling. When 𝑓𝑠 < 2𝑓𝑚𝑎𝑥 , the signal is under-
sampled and we say that aliasing has occurred.
III. Construction of Continuous-Time Signal: The sampling theorem suggests that a
process exists for reconstructing a continuous-time signal from its samples. This
reconstruction process would undo the A/D conversion so it is called D/A conversion
𝐱[𝑛] → 𝐷/𝐴 → 𝐱(𝑡). Since the sampling process of the ideal A/D converter is defined by the
mathematical substitution 𝑡 = 𝑛/𝑓𝑠 , we would expect the same relationship to govern the
ideal D/A converter, that is, 𝐱(𝑡) = 𝐱[𝑛]|𝑛=𝑓𝑠 𝑡 = 𝐱[𝑓𝑠 𝑡].
If a discrete time signal has an infinite length, we can terminate the signal at a desired
finite number of terms, by multiplying it by a window function. There are several window
functions such as the rectangular, triangular, Hamming, Hanning, Kaiser, etc. However, we
must choose a suitable window function; otherwise, the sequence will be terminated
abruptly producing the effect of leakage.
III.I. Zero-Order Hold Interpolation: The zero-order hold (ZOH) is a mathematical model
of the practical signal reconstruction done by a conventional digital-to-analog converter
(DAC). That is, it describes the effect of converting a discrete-time signal to a continuous-
time signal by holding each sample value for one sample interval. It has several
applications in electrical communication.
Zero-order hold interpolation can be achieved by processing the impulse sampled signal
𝐲(𝑡) through zero-order hold reconstruction filter, a linear system the impulse response of
which is a rectangle with unit amplitude and a duration of 𝑇𝑠 .
Notice the similarity between 𝒉𝑧𝑜ℎ (𝑡) for zero-order hold interpolation and 𝒉𝑧𝑜ℎ (𝑡) derived in
the discussion of zero-order hold sampling. The two become the same if the duty cycle is
set equal to 𝑑 = 1.
ZOH Sampler: 𝒉_{zoh}(𝑡) = Π((𝑡 − 0.5𝑑𝑇_s)/(𝑑𝑇_s))    ZOH Interpolator: 𝒉_{zoh}(𝑡) = Π((𝑡 − 0.5𝑇_s)/𝑇_s)
III.II. Higher-Order Interpolation Filter: Given a signal 𝐱(𝑡), the reconstruction of 𝐱(𝑡)
from the sampled waveform 𝐲(𝑡) = 𝐱(𝑡)𝐬(𝑡) can be carried out as follows. First, suppose that
𝐱(𝑡) has bandwidth 𝑊; that is, 𝐗(𝜔) = 0 for |𝜔| > 𝑊
Then if 𝜔𝑠 ≥ 2𝑊 in the expression 𝒀(𝜔) = [∑∞ 𝑘=−∞ 𝑿(𝜔 − 𝑘𝜔𝑠 )]/𝑇𝑠 for 𝒀(𝜔) the replicas of 𝑿(𝜔)
do not overlap in frequency. Thus if the sampled signal 𝐲(𝑡) is applied to an ideal lowpass
filter with the frequency function shown in figure below, the only component of 𝒀(𝜔) that is
passed is 𝑿(𝜔). Hence, the output of the filter is equal to 𝐱(𝑡), which shows that the original
signal 𝐱(𝑡) can be completely and exactly reconstructed from the sampled waveform 𝐲(𝑡).
So, the reconstruction of 𝐱(𝑡) from the sampled signal 𝐲(𝑡) = 𝐱(𝑡)𝐬(𝑡) can be accomplished
by a simple low-pass filtering of the sampled signal. The process is illustrated in Figure
above. The filter in this figure is sometimes called an interpolation filter, since it reproduces
𝐱(𝑡) from the values of 𝐱(𝑡) at the time points 𝑡 = 𝑘𝑇𝑠 .
From the figure above, the output of the interpolating filter is given by
𝐱(𝑡) = 𝐡(𝑡) ⋆ 𝐲(𝑡) = ∫_{−∞}^{∞} 𝐲(𝜏)𝐡(𝑡 − 𝜏) d𝜏 = ∫_{−∞}^{∞} (∑_{k=−∞}^{∞} 𝐱[𝑘𝑇_s]𝛿(𝜏 − 𝑘𝑇_s)) 𝐡(𝑡 − 𝜏) d𝜏
= ∑_{k=−∞}^{∞} ∫_{−∞}^{∞} 𝐱[𝑘𝑇_s]𝛿(𝜏 − 𝑘𝑇_s)𝐡(𝑡 − 𝜏) d𝜏 = ∑_{k=−∞}^{∞} 𝐱[𝑘𝑇_s]𝐡(𝑡 − 𝑘𝑇_s) = ∑_{k=−∞}^{∞} 𝐱[𝑘] (sin((𝜔_s/2)(𝑡 − 𝑘𝑇_s))/((𝜔_s/2)(𝑡 − 𝑘𝑇_s)))
Now let us work through some MATLAB exercises to make things clear; later on we will develop the mathematics further in the next sections.
MATLAB-Solved Problems:
MATLAB Exercise: 01 (Spectral relations in impulse sampling). Consider the continuous-
time signal 𝐱 𝑎 (𝑡) = 𝑒 −|𝑡| . Its Fourier transform is
𝑿_a(𝜔) = 2/(1 + 𝜔²) or 𝑿_a(𝑓) = 2/(1 + 4𝜋²𝑓²)
Compute and graph the spectrum of 𝐱 𝑎 (𝑡). If the signal is impulse-sampled using a
sampling rate of 𝑓𝑠 = 1 𝐻𝑧 to obtain the signal 𝐱 𝑠 (𝑡), compute and graph the spectrum of
the impulse-sampled signal. Afterwards repeat with 𝑓𝑠 = 2 𝐻𝑧.
Solution: The script below utilizes an anonymous function to define the transform 𝑿_a(𝑓). It then uses 𝑿_s(𝑓) = (∑_{k=−∞}^{∞} 𝑿_a(𝑓 − 𝑘𝑓_s))/𝑇_s to compute and graph 𝑿_s(𝑓) superimposed with the contributing terms.
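The original listing is not reproduced in this excerpt; a minimal sketch consistent with the description (all names and plotting choices below are our own) might read:
% Impulse-sampled spectrum Xs(f) = (1/Ts)*sum_k Xa(f - k*fs), truncated.
Xa = @(f) 2./(1 + 4*pi^2*f.^2);   % Transform of xa(t) = exp(-|t|)
fs = 1; Ts = 1/fs;                % Repeat with fs = 2 for the second part
f  = -3:0.01:3;                   % Frequency axis (Hz)
Xs = zeros(size(f));
for k = -5:5                      % A few shifted replicas suffice here
    Xs = Xs + Xa(f - k*fs)/Ts;
end
plot(f, Xs, 'b', f, Xa(f)/Ts, 'r--'); grid on;
xlabel('f (Hz)'); legend('X_s(f)', 'X_a(f)/T_s');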
MATLAB Exercise: 02 (Spectral relations in natural sampling) Repeat MATLAB Exercise 1 for natural sampling with duty cycle 𝑑.
Solution: The spectrum given by 𝑿_s(𝜔) = ∑_{k=−∞}^{∞} (sin(𝜋𝑘𝑑)/𝜋𝑘) 𝑿(𝜔 − 𝑘𝜔_s) may be written using 𝑓 instead of 𝜔 as 𝑿_s(𝑓) = ∑_{k=−∞}^{∞} (sin(𝜋𝑘𝑑)/𝜋𝑘) 𝑿(𝑓 − 𝑘𝑓_s). The script to compute and graph 𝑿_s(𝑓) is obtained by modifying the previous script; the sinc envelope sin(𝜋𝑘𝑑)/(𝜋𝑘) is also shown.
MATLAB Exercise: 03 (ZOH sampling) Two-sided exponential signal 𝐱 𝑎 (𝑡) = 𝑒 −|𝑡| is sampled
using a zero-order hold sampler with a sampling rate of 𝑓𝑠 = 3 𝐻𝑧 and a duty cycle of
𝑑 = 0.3 . Compute and graph |𝑿𝑠 (𝑓)| in the frequency interval −12 ≤ 𝑓 ≤ 12 Hz.
The script to compute and graph 𝑿𝑠 (𝑓) is listed below. It is obtained by modifying the script
developed in MATLAB Exercise 1.
The function zohsamp(..) evaluates and returns a zero-order hold sampled version of the
signal.
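The body of zohsamp(..) is not listed in this excerpt; a minimal implementation consistent with the flat-top description above (the argument order is our assumption) could be:
function xs = zohsamp(x, Ts, d, t)
% Zero-order hold sampling of the signal handle "x" on the time grid "t":
% hold the sample taken at the left edge of each pulse of width d*Ts.
n  = floor(t/Ts);                % Index of the sampling interval of each t
xs = zeros(size(t));
inPulse = (t - n*Ts) < d*Ts;     % Points lying inside the flat-top pulse
xs(inPulse) = x(n(inPulse)*Ts);  % Held (flat-top) value; zero elsewhere
end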
The function natsamp(..) can be tested with the double sided exponential signal using the
following statements:
>> x = @(t) exp (-abs(t));
>> t = [-4:0.001:4];
>> xnat = natsamp(x ,0.2 ,0.5 , t);
>> plot(t,xnat);
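The body of natsamp(..) is likewise not listed here; judging from the call natsamp(x, 0.2, 0.5, t), a sketch with the assumed argument order (signal handle, period, duty cycle, time grid) is:
function xnat = natsamp(x, Ts, d, t)
% Natural sampling: multiply x(t) by a periodic unit-amplitude pulse train
% of period Ts and duty cycle d, so the pulse tops follow the signal.
s = double(mod(t, Ts) < d*Ts);   % Pulse train s(t)
xnat = x(t).*s;
end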
The sampling rate can be modified by editing line 2 of the code. The graph generated by
this function is shown in figure below, for sampling rates 200 Hz and 400 Hz.
Modifying this script to produce first-order hold interpolation is almost trivial. The modified
script is given below.
clear all, clc,
fs = 200;                   % Sampling rate
Ts = 1/fs;                  % Sampling interval
% Set index limits "n1" & "n2" to cover time interval from -25ms to +75ms
n1 = -fs/40;
n2 = -3*n1;
n = n1:n2;
t = n*Ts;                   % Vector of time instants
xs = exp(-100*t).*(n >= 0); % Samples of the signal
clf;
stem(t, xs, '^'); grid;
hold on;
plot(t, xs, 'r-');          % Line 13: plot(..) instead of stairs(..)
hold off;
axis([-0.030, 0.080, -0.2, 1.1]);
title('Reconstruction using first-order hold');
xlabel('t (sec)');
ylabel('Amplitude');
text(0.015, 0.7, sprintf('Sampling rate = %.3g Hz', fs));
The only functional change is in line 13 where we use the function plot(..) instead of the
function stairs(..). The graph generated by this modified script is shown in Figure below for
sampling rates 200 Hz and 400 Hz.
Reconstruction through bandlimited interpolation requires a bit more work. The script for
this purpose is given below. Note that we have added a new section to the previous script to
compute the shifted sinc functions.
𝐲(𝑡) = ∑_{k=−∞}^{∞} 𝐲[𝑘] sinc((𝑡 − 𝑘𝑇_s)/𝑇_s) = ∑_{k=−∞}^{∞} 𝑇_s (sin(𝜋(𝑡 − 𝑘𝑇_s)/𝑇_s)/(𝜋(𝑡 − 𝑘𝑇_s))) 𝐲[𝑘] = ∑_{k=−∞}^{∞} 𝐲[𝑘] (sin((𝜔_s/2)(𝑡 − 𝑘𝑇_s))/((𝜔_s/2)(𝑡 − 𝑘𝑇_s)))
clear all, clc,
fs = 200;                   % Sampling rate
Ts = 1/fs;                  % Sampling interval
% Set index limits "n1" & "n2" to cover time interval from -25ms to +75ms
n1 = -fs/40;
n2 = -3*n1;
n = n1:n2;
t = n*Ts;                   % Vector of time instants
xs = exp(-100*t).*(n >= 0); % Samples of the signal
% Generate the sinc interpolating functions
t2 = -0.025:0.0001:0.1;
xr = zeros(size(t2));
for n = n1:n2
    nn = n - n1 + 1;        % Because MATLAB indices start at 1
    xr = xr + xs(nn)*sinc((t2 - n*Ts)/Ts);
end
clf;
stem(t, xs, '^'); grid;
hold on;
plot(t2, xr, 'r-');
hold off;
axis([-0.030, 0.080, -0.2, 1.1]);
title('Reconstruction using bandlimited interpolation');
xlabel('t (sec)'); ylabel('Amplitude');
text(0.015, 0.7, sprintf('Sampling rate = %.3g Hz', fs));
The graph generated by this script is shown in Figure below for sampling
rates 200 Hz and 400 Hz.
IV. Linear Systems and Sampling: Models for continuous-time dynamical systems often
arise from the application of physical laws such as conservation of mass, momentum, and
energy. These models typically take the form of linear or nonlinear differential equations,
where the parameters involved can usually be interpreted in terms of physical properties of
the system. In practice, however, these kinds of models are not appropriate to interact with
digital devices. In any situation where digital controllers have to act on a real system, this
action can be applied (or updated) only at some specific time instants. Similarly, if we are
interested in collecting information from signals of a given system, this data can usually
only be recorded (and stored) at specific instants. This constitutes nowadays an
unavoidable paradigm: continuous-time systems interact with actuators and sensors that
are accessible only at discrete-time instants. As a consequence, the sampling process of
continuous-time systems is a key problem both for estimation and control purposes.
In this context, the current section considers sampled-data models for linear systems. The focus is on describing, in discrete time, the relationship between the input signals and the samples of the continuous-time system outputs.
There are several discretization and interpolation methods for converting dynamic system
from continuous time to discrete time and for resampling discrete-time models. Some
methods tend to provide a better frequency-domain match between the original and
converted systems, while others provide a better match in the time domain.
There are many methods to find the discrete equivalent system; among them we mention
𝐱̇ ≝ (𝐱[𝑘 + 1] − 𝐱[𝑘])/𝑇    (the Forward rule)
𝐱̇ ≝ (𝐱[𝑘] − 𝐱[𝑘 − 1])/𝑇    (the Backward rule)
(1/2)(𝐱̇(𝑡) + 𝐱̇(𝑡 + 𝑇)) ≝ (𝐱[𝑘 + 1] − 𝐱[𝑘])/𝑇    (the Bilinear rule)
The operation can be carried out directly on the system function (transfer function) if one
translates the above equations into the frequency domain.
𝑠𝑿(𝑠) ↔ ((𝑧 − 1)/𝑇) 𝑿(𝑧) ⟺ 𝑠 ≅ (𝑧 − 1)/𝑇    (the Forward rule)
𝑠𝑿(𝑠) ↔ ((𝑧 − 1)/(𝑇𝑧)) 𝑿(𝑧) ⟺ 𝑠 ≅ (𝑧 − 1)/(𝑇𝑧)    (the Backward rule)
((𝑧 + 1)/2) 𝑠𝑿(𝑠) ↔ ((𝑧 − 1)/𝑇) 𝑿(𝑧) ⟺ 𝑠 ≅ (2/𝑇)((𝑧 − 1)/(𝑧 + 1))    (the Bilinear rule)
Remark: the trapezoidal rule is also called the Tustin's method or bilinear transformation.
Now it is interesting to see how the stable region of s-plane can be mapped to the z-plane.
Forward method
𝐱̇ ≝ (𝐱[𝑘 + 1] − 𝐱[𝑘])/𝑇 or 𝑠 ≅ (𝑧 − 1)/𝑇
Writing 𝑧 = 𝜎 + 𝑗𝜔, we get
continuous stable ⟹ Re(𝑠) < 0 ⟹ Re((𝜎 − 1)/𝑇 + 𝑗𝜔/𝑇) < 0 ⟹ 𝜎 < 1
The stable half of the s-plane is therefore mapped onto the half-plane 𝜎 < 1, which extends outside the unit circle. Hence Discrete Stable ⟹ Continuous Stable, but not conversely: we can have a continuous stable system whose forward-rule discretization is unstable.
Backward method
𝐱̇ ≝ (𝐱[𝑘] − 𝐱[𝑘 − 1])/𝑇 or 𝑠 ≅ (𝑧 − 1)/(𝑇𝑧)
continuous stable ⟹ Re(𝑠) < 0 ⟹ Re(((𝜎 − 1) + 𝑗𝜔)/(𝑇(𝜎 + 𝑗𝜔))) < 0 ⟹ (𝜎 − 1/2)² + 𝜔² < (1/2)²    (a disc equation)
Notice that the obtained disc lies inside the unit circle, so we can interpret this result by the implication Continuous Stable ⟹ Discrete Stable. In other words, we can have a stable discrete system even though the continuous one is unstable.
Bilinear method (Tustin's method)
𝑧 = 𝑒^{𝑠𝑇} ⟹ 𝑠 = (1/𝑇) ln(𝑧) ≅ (2/𝑇)((𝑧 − 1)/(𝑧 + 1))
continuous stable ⟹ Re(𝑠) < 0 ⟹ Re(((𝜎 − 1) + 𝑗𝜔)/((𝜎 + 1) + 𝑗𝜔)) < 0 ⟹ 𝜎² + 𝜔² < 1    (the unit disc)
discrete stable ⟹ |𝑧| < 1 ⟹ |(2 + 𝑇𝑠)/(2 − 𝑇𝑠)| < 1 ⟹ Re(𝑠) < 0 (left half-plane) ⟹ continuous stable
Notice that the obtained disc is exactly the unit disc, so we can interpret this result by the equivalence Continuous Stable ⟺ Discrete Stable.
Example: Use the previous methods to find the 𝐻(𝑧) equivalent to the given 𝐻(𝑠) = 𝑎/(𝑠 + 𝑎).
The Forward rule: 𝐻(𝑧) = 𝑎/((𝑧 − 1)/𝑇 + 𝑎) = 𝑎𝑇/(𝑧 + (𝑎𝑇 − 1))
The Backward rule: 𝐻(𝑧) = 𝑎/((𝑧 − 1)/(𝑇𝑧) + 𝑎) = 𝑎𝑇𝑧/((𝑎𝑇 + 1)𝑧 − 1)
The Bilinear rule: 𝐻(𝑧) = 𝑎/((2/𝑇)((𝑧 − 1)/(𝑧 + 1)) + 𝑎) = 𝑎𝑇(𝑧 + 1)/((𝑎𝑇 + 2)𝑧 + (𝑎𝑇 − 2))
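As a cross-check, the bilinear result can be compared with MATLAB's c2d (assumed available from the Control System Toolbox); a = 2 and T = 0.1 are arbitrary example values of ours:
% Compare the hand-derived rules with c2d for H(s) = a/(s + a).
a = 2; T = 0.1;
Hs   = tf(a, [1 a]);
Hfwd = tf(a*T, [1, a*T - 1], T)   % Forward rule, from the formula above
Hbil = c2d(Hs, T, 'tustin')       % Bilinear (Tustin) rule via c2d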
❷ Step invariant method (Zero-Order Hold): First let us find the transfer function of the zero-order hold system.
The transfer function of the ZOH can be obtained by Laplace transformation of the impulse response. As shown in the figure, the impulse response is a unit pulse of width 𝑇. A pulse can be represented as a positive step at time zero followed by a negative step at time 𝑇. Using the Laplace transform of a unit step and the time-delay theorem for Laplace transforms,
𝑯_{ZOH}(𝑠) = 𝕃(𝑢(𝑡) − 𝑢(𝑡 − 𝑇)) = (1 − 𝑒^{−𝑠𝑇})/𝑠
Next, we consider the frequency response of the ZOH:
𝑯_{ZOH}(𝑗𝜔) = (1 − 𝑒^{−𝑗𝜔𝑇})/(𝑗𝜔) = 𝑒^{−𝑗𝜔𝑇/2}(𝑒^{𝑗𝜔𝑇/2} − 𝑒^{−𝑗𝜔𝑇/2})/(𝑗𝜔) = 𝑇𝑒^{−𝑗𝜔𝑇/2} (sin(𝜔𝑇/2)/(𝜔𝑇/2))
The basic idea behind the step invariance method is to choose a step response for the discrete system that matches the step response of the analog system; this is why it is called step invariant. Let us excite a system by a step signal as forcing function; then its response will be
𝐲(𝑡) = 𝕃^{−1}{𝑯(𝑠)/𝑠}    (continuous system)
𝐲[𝑘] = 𝒵^{−1}{𝑯(𝑧)/(1 − 𝑧^{−1})}    (discrete system)
Explanations (1st method): Let us define the hold system by ℎ_0(𝑡) = 𝑢(𝑡) − 𝑢(𝑡 − 𝑇), so that
𝑯_0(𝑠) = 1/𝑠 − 𝑒^{−𝑠𝑇}/𝑠 = (1 − 𝑒^{−𝑠𝑇})/𝑠
Matching the step responses then gives
𝑯(𝑧) = (1 − 𝑧^{−1}) 𝒵{𝕃^{−1}{𝑯(𝑠)/𝑠}|_{𝑡=𝑘𝑇}}
Remark: the advantage of setting the input to a step 𝑢[𝑘] is that a step function is invariant to the ZOH, i.e., 𝑢(𝑡) is also a step over every sampling period (unaffected by the ZOH).
Explanations (2nd method): Let us define the hold system by ℎ_0(𝑡) = 𝑢(𝑡) − 𝑢(𝑡 − 𝑇), with
𝑯_0(𝑠) = 𝕃{ℎ_0(𝑡)} = (1 − 𝑒^{−𝑠𝑇})/𝑠
Before commencing the development we put the following setting:
𝑮(𝑠) = 𝑯(𝑠)/𝑠, so g(𝑡) = 𝕃^{−1}{𝑯(𝑠)/𝑠} and 𝕃^{−1}{𝑒^{−𝑠𝑇}𝑮(𝑠)} = g(𝑡 − 𝑇)
𝕃^{−1}{𝑯_0(𝑠)𝑯(𝑠)} = 𝕃^{−1}{𝑯(𝑠)/𝑠 − 𝑒^{−𝑠𝑇}𝑯(𝑠)/𝑠} = 𝕃^{−1}{𝑮(𝑠) − 𝑒^{−𝑠𝑇}𝑮(𝑠)} = g(𝑡) − g(𝑡 − 𝑇)
𝑯(𝑧) = 𝒵{𝕃^{−1}{𝑯_0(𝑠)𝑯(𝑠)}|_{𝑡=𝑘𝑇}} = 𝒵{(g(𝑡) − g(𝑡 − 𝑇))|_{𝑡=𝑘𝑇}} = (1 − 𝑧^{−1}) 𝒵{g(𝑡)|_{𝑡=𝑘𝑇}}
= (1 − 𝑧^{−1}) 𝒵{𝕃^{−1}{𝑯(𝑠)/𝑠}|_{𝑡=𝑘𝑇}}
Example: 01 MATLAB code for the ZOH design:
clear all, close all, clc,
t0 = 0; dt = 1e-3; tn = 1;
t = t0:dt:tn; % Time domain
x = @(t)(sin(2*pi*t)); % Original signal
% x = @(t)(4*exp(3*t));
Ts = 3e-2; % Sampling period
zohImpl = ones(1,Ts/dt); % Impulse ZOH ℎ0(𝑡)
nSamples = tn/Ts;
samples = 0:nSamples; % Samples
xSampled = zeros(1,length(t));
xSampled(1:Ts/dt:end)= x(samples*Ts); % Sampled signal
%Convolution with IR response
xZoh1 = conv(zohImpl,xSampled);
xZoh1 = xZoh1(1:length(t));
figure(3);
hold all;
plot(t,x(t),'-r','linewidth',3);
stem(t,xSampled);
plot(t,xZoh1,'-b','linewidth',3);
❸ First-Order Hold Equivalence: We now consider a first-order hold method, which extrapolates samples by connecting them with straight lines. The impulse response of the FOH is shown in the figure:
𝐡_1(𝑡) = (1/𝑇)[𝐑(𝑡 − 𝑇)𝑢(𝑡 − 𝑇) + 𝐑(𝑡 + 𝑇)𝑢(𝑡 + 𝑇) − 2𝐑(𝑡)𝑢(𝑡)]
where 𝐑(𝑡) is the ramp signal. Now, knowing that the mapping between the s-plane and the z-plane is 𝑧 = 𝑒^{𝑠𝑇}, the transfer function of this FOH is given by
𝐇_1(𝑠) = (𝑒^{𝑠𝑇} + 𝑒^{−𝑠𝑇} − 2)/(𝑇𝑠²) = (1/𝑇)((𝑒^{𝑠𝑇/2} − 𝑒^{−𝑠𝑇/2})/𝑠)² = (1/𝑇) 𝑒^{−𝑠𝑇}((𝑒^{𝑠𝑇} − 1)/𝑠)² = ((𝑧 − 1)²/(𝑇𝑧)) (1/𝑠²)
{𝑒 𝑝𝑠 𝑡 } = ⟺ {𝑒 𝑝𝑠 𝑡 |𝑡=𝑘𝑇 } = {(𝑒 𝑝𝑠 𝑇 )𝑘 } =
𝑠 − 𝑝𝑠 𝑧 − 𝑒 𝑝𝑠 𝑇
𝑛 𝛼𝑘 𝑛 𝛼𝑘 𝑧
𝑯(𝑠) = ∑ ⟷ 𝑯(𝑧) = ∑ 𝑠 𝑇
𝑘=1 𝑠 − 𝑠𝑘 𝑘=1 𝑧 − 𝑒 𝑘
Notice that 𝑝𝑧 = 𝑒 𝑝𝑠 𝑇 = 𝑒 𝜎𝑇 𝑒 𝑗𝜔𝑑 𝑇 = 𝑒 𝜎𝑇 𝑒 𝑗(𝜔𝑑 𝑇+2𝑘𝜋) . Thus, pole locations are a periodic function
of the damped natural frequency 𝜔𝑑 with period (𝜔𝑠 = 2𝜋/𝑇) . The mapping of distinct s-
domain poles to the same z-domain location is clearly undesirable in situations where a
sampled waveform is used to represent its continuous counterpart. The strip of width
𝜔𝑠 over which no such ambiguity occurs (frequencies in the range [(−𝜔𝑠 /2), 𝜔𝑠 /2] rad/s) is
known as the primary strip.
We know from the equation 𝑝_z = 𝑒^{𝑝_s𝑇} that discretization maps an s-plane pole at 𝑝_s to a z-plane pole at 𝑒^{𝑝_s𝑇}, but no rule exists for mapping zeros. In pole-zero matching, a discrete approximation is obtained from an analog filter by mapping both poles and zeros using 𝑝_z = 𝑒^{𝑝_s𝑇}. If the analog filter has 𝑛 poles and 𝑚 zeros, then we say that the filter has 𝑛 − 𝑚 zeros at infinity. For these we add 𝑛 − 𝑚 or 𝑛 − 𝑚 − 1 digital filter zeros at 𝑧 = −1. If the zeros are not added, it can be shown that the resulting system will include a time delay. The second choice gives a strictly proper filter where the computation of the output is easier, since it only requires values of the input at past sampling points. Finally, we adjust the gain of the digital filter so that it equals that of the analog filter at a critical frequency that depends on the filter type.
The idea of pole-zero matching is to use the mapping 𝑝𝑧 = 𝑒 𝑝𝑠 𝑇 in order to determine the
zeros as well.
Example: Find a pole-zero matched digital filter approximation for the analog filters. In filters 𝑯_1, 𝑯_2 & 𝑯_4 consider the low-pass frequency range, and in 𝑯_3 consider the high-pass frequency range.
▪ 𝑯_1(𝑠) = 𝑎/(𝑠 + 𝑎) ⟺ 𝑯_1(𝑧) = 𝐾(𝑧 + 1)/(𝑧 − 𝑒^{−𝑎𝑇}) and 𝑯_1(𝑠)|_{𝑠=0} = 𝑯_1(𝑧)|_{𝑧=1} ⟹ 𝐾 = (1 − 𝑒^{−𝑎𝑇})/2
▪ 𝑯_3(𝑠) = 𝑠/(𝑠 + 𝛼) ⟺ 𝑯_3(𝑧) = 𝐾((𝑧 − 1)/(𝑧 − 𝑒^{−𝛼𝑇})) and 𝑯_3(𝑠)|_{𝑠=∞} = 𝑯_3(𝑧)|_{𝑧=−1} ⟹ 𝐾 = (1 + 𝑒^{−𝛼𝑇})/2
▪ 𝑯(𝑠) = 𝑠 + 𝛼 ⟺ 𝑯(𝑧) = 𝐾((𝑧 − 𝑒^{−𝛼𝑇})/(𝑧 + 1)) and 𝑯(𝑠)|_{𝑠=0} = 𝑯(𝑧)|_{𝑧=1} ⟹ 𝐾 = 2𝛼/(1 − 𝑒^{−𝛼𝑇})
MATLAB Example: MATLAB code for the zero-pole mapping of
𝑯(𝑠) = 𝜔_n²/(𝑠² + 2𝜉𝜔_n𝑠 + 𝜔_n²)
clear all, close all, clc,
wn=5;zeta=0.5; % Undamped natural frequency, damping ratio
ga=tf([wn^2],[1,2*zeta*wn,wn^2]) % Analog transfer function
g=c2d(ga,0.1,'matched') % Transformation with a sampling period 0.1
If we are given a continuous-time impulse response 𝒉(𝑡), we can consider transforming it to a discrete system with 𝒉[𝑘] consisting of equally spaced samples, 𝒉[𝑘] = 𝒉(𝑘𝑇), so that 𝑯(𝑧) = ∑_{k=0}^{∞} 𝒉(𝑘𝑇)𝑧^{−𝑘}. With the mapping 𝑧 = 𝑒^{𝑠𝑇}, this is exactly the z-transform of 𝒉[𝑘]. In terms of a partial fraction expansion we can write
𝑯(𝑠) = ∑_{k=1}^{n} 𝐴_k/(𝑠 − 𝑝_k) ⟷ 𝑯(𝑧) = ∑_{k=1}^{n} 𝐴_k/(1 − 𝑒^{𝑝_k𝑇}𝑧^{−1})
This impulse invariant method can be extended to the case when the poles are not simple.
Example: Find the impulse-invariant discrete equivalent of
𝑯(𝑠) = (𝑠 + 𝑏)/((𝑠 + 𝑏)² + 𝑐²)
Solution: The inverse Laplace transform of 𝑯(𝑠) yields 𝒉(𝑡) = 𝑒^{−𝑏𝑡} cos(𝑐𝑡) 𝑢(𝑡). Sampling 𝒉(𝑡) with sampling period 𝑇, we get
𝒉[𝑘] = 𝑒^{−𝑏𝑘𝑇} cos(𝑐𝑘𝑇) 𝑢(𝑘𝑇) ⟹ 𝑯(𝑧) = ∑_{k=0}^{∞} 𝑒^{−𝑏𝑘𝑇} cos(𝑐𝑘𝑇) 𝑧^{−𝑘}
⟹ 𝑯(𝑧) = (1/2) ∑_{k=0}^{∞} 𝑒^{−𝑏𝑘𝑇}𝑧^{−𝑘}(𝑒^{𝑗𝑐𝑘𝑇} + 𝑒^{−𝑗𝑐𝑘𝑇})
⟹ 𝑯(𝑧) = (1/2)[1/(1 − 𝑒^{−(𝑏−𝑗𝑐)𝑇}𝑧^{−1}) + 1/(1 − 𝑒^{−(𝑏+𝑗𝑐)𝑇}𝑧^{−1})]
⟹ 𝑯(𝑧) = (1 − 𝑒^{−𝑏𝑇} cos(𝑐𝑇) 𝑧^{−1})/(1 − 2𝑒^{−𝑏𝑇} cos(𝑐𝑇) 𝑧^{−1} + 𝑒^{−2𝑏𝑇}𝑧^{−2})
Note: The impulse invariance method is appropriate only for band-limited filters, i.e., low-
pass and band-pass filters, but not suitable for high-pass or band-stop filters where
additional band limiting is required to avoid aliasing. Thus, there is a need for another
mapping method such as bilinear transformation technique which avoids aliasing.
If we have a continuous-time state equation 𝐱̇(𝑡) = 𝑨𝐱(𝑡) + 𝑩𝒖(𝑡), we can derive the digital version of this equation as discussed above. We take the Laplace transform of the equation: 𝑿(𝑠) = (𝑠𝑰 − 𝑨)^{−1}𝐱(0) + (𝑠𝑰 − 𝑨)^{−1}𝑩𝑼(𝑠). Now, taking the inverse Laplace transform gives us the time-domain system, keeping in mind that the inverse Laplace transform of the (𝑠𝑰 − 𝑨)^{−1} term is the state-transition matrix:
𝐱(𝑡) = 𝑒^{𝑨(𝑡−𝑡_0)}𝐱(𝑡_0) + ∫_{𝑡_0}^{𝑡} 𝑒^{𝑨(𝑡−𝜏)}𝑩𝒖(𝜏) d𝜏
Now, we apply a zero-order hold on the input to make the system digital. Notice that we set the start time 𝑡_0 = 𝑘𝑇, because we are only interested in the behavior of the system during a single sample period: 𝒖(𝑡) = 𝒖(𝑘𝑇) for 𝑘𝑇 ≤ 𝑡 ≤ (𝑘 + 1)𝑇.
𝐱(𝑡) = 𝑒^{𝑨(𝑡−𝑘𝑇)}𝐱(𝑘𝑇) + ∫_{𝑘𝑇}^{𝑡} 𝑒^{𝑨(𝑡−𝜏)}𝑩𝒖(𝑘𝑇) d𝜏
We were able to remove 𝒖(𝑘𝑇) from the integral because it does not depend on 𝜏:
𝐱(𝑡) = 𝑒^{𝑨(𝑡−𝑘𝑇)}𝐱(𝑘𝑇) + (∫_{𝑘𝑇}^{𝑡} 𝑒^{𝑨(𝑡−𝜏)}𝑩 d𝜏) 𝒖(𝑘𝑇)
Let 𝑡 = (𝑘 + 1)𝑇; then
𝐱[𝑘 + 1] = 𝑒^{𝑨𝑇}𝐱[𝑘] + (∫_{𝑘𝑇}^{(𝑘+1)𝑇} 𝑒^{𝑨((𝑘+1)𝑇−𝜏)}𝑩 d𝜏) 𝒖[𝑘]
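These are the formulas implemented by ZOH discretization routines; a small numeric check (the matrices are arbitrary examples of ours, and c2d is assumed available from the Control System Toolbox):
% Ad = expm(A*T) and Bd = integral of expm(A*tau)*B over [0, T].
A = [0 1; -2 -3]; B = [0; 1]; C = [1 0]; D = 0; T = 0.1;
sysd = c2d(ss(A, B, C, D), T, 'zoh');
Ad = sysd.A        % Matches expm(A*T)
Bd = sysd.B        % Matches the ZOH input integral above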
In the next table we summarize, without proof, the most common numerical approximation methods used to convert a continuous-time state space model into a discrete-time one.
➳ The sampling process is just a mapping between the s-plane and z-plane 𝑧 = 𝐟(𝑠)
➳ Strip of the s-plane is mapped into the whole z-plane. There are infinitely many values of
𝑠 for each value of 𝑧. It is possible to partition the s-plane into horizontal strips of width 𝜔𝑠 .
𝑧 = 𝑒 𝑠𝑇 = 𝑒 𝜎𝑇 𝑒 𝑗𝜔𝑇 = 𝑟𝑒 𝑗𝜃
➳ The 𝑗𝜔-axis of s-plane is mapped into the unit circle in z-plane. 𝑧 = 𝑒 𝑗𝜔𝑇 ⟹ |𝑧| = 1
The pulse transfer function is the equivalent discrete transfer function, and it is useful when we deal with interconnected systems in the presence of sampling devices. Note that the presence of samplers complicates the algebra of block diagrams, since the existence and expression of any input-output function depend on the number and location of the samplers.
Example: Determine the (PTF) pulse transfer functions for the given continuous systems:
𝑯(𝑠) = 1/(𝑠 + 𝑎) and 𝑮(𝑠) = 1/(𝑠 + 𝑏)
Solution: Here the PTF is obtained by the use of the impulse modulation method:
𝑯(𝑧) = 𝒵{𝕃^{−1}{𝑯(𝑠)}|_{𝑡=𝑘𝑇}} = 𝒵{𝑒^{−𝑎𝑡}|_{𝑡=𝑘𝑇}} = 𝒵{(𝑒^{−𝑎𝑇})^k} = ∑_{k=0}^{∞} (𝑒^{−𝑎𝑇})^k 𝑧^{−𝑘} = 1/(1 − 𝑒^{−𝑎𝑇}𝑧^{−1})
𝑮(𝑧) = 𝒵{𝕃^{−1}{𝑮(𝑠)}|_{𝑡=𝑘𝑇}} = 𝒵{𝑒^{−𝑏𝑡}|_{𝑡=𝑘𝑇}} = 𝒵{(𝑒^{−𝑏𝑇})^k} = ∑_{k=0}^{∞} (𝑒^{−𝑏𝑇})^k 𝑧^{−𝑘} = 1/(1 − 𝑒^{−𝑏𝑇}𝑧^{−1})
Hence
𝑯(𝑧)𝑮(𝑧) = (1/(1 − 𝑒^{−𝑎𝑇}𝑧^{−1}))(1/(1 − 𝑒^{−𝑏𝑇}𝑧^{−1}))
But
𝑯𝑮(𝑧) = 𝒵{𝕃^{−1}{(1/(𝑠 + 𝑎))(1/(𝑠 + 𝑏))}|_{𝑡=𝑘𝑇}} = 𝒵{𝕃^{−1}{(1/(𝑏 − 𝑎))(1/(𝑠 + 𝑎) − 1/(𝑠 + 𝑏))}|_{𝑡=𝑘𝑇}}
= (1/(𝑏 − 𝑎))(1/(1 − 𝑒^{−𝑎𝑇}𝑧^{−1}) − 1/(1 − 𝑒^{−𝑏𝑇}𝑧^{−1}))
= (1/(𝑏 − 𝑎))[(𝑒^{−𝑎𝑇} − 𝑒^{−𝑏𝑇})𝑧^{−1}/((1 − 𝑒^{−𝑎𝑇}𝑧^{−1})(1 − 𝑒^{−𝑏𝑇}𝑧^{−1}))] ≠ 𝑯(𝑧)𝑮(𝑧)
Example: Determine the pulse transfer function for the given schematic diagram.
Solution:
𝑬(𝑠) = 𝑹(𝑠) − 𝑯(𝑠)𝒀(𝑠) and 𝒀(𝑠) = 𝑮(𝑠)𝑬^⋆(𝑠)
⟹ 𝑬(𝑠) = 𝑹(𝑠) − 𝑯(𝑠)𝑮(𝑠)𝑬^⋆(𝑠)
⟹ 𝑬^⋆(𝑠) = 𝑹^⋆(𝑠) − [𝑯(𝑠)𝑮(𝑠)]^⋆𝑬^⋆(𝑠) and 𝒀^⋆(𝑠) = 𝑮^⋆(𝑠)𝑬^⋆(𝑠)
⟹ 𝒀^⋆(𝑠) = 𝑮^⋆(𝑠)𝑹^⋆(𝑠)/(1 + [𝑯(𝑠)𝑮(𝑠)]^⋆) ⟹ 𝒀(𝑧)/𝑹(𝑧) = 𝑮(𝑧)/(1 + 𝑯𝑮(𝑧))
Example: Determine the (PTF) pulse transfer function for the given continuous system:
𝑯(𝑠) = 10(𝑠 + 2)/(𝑠(𝑠 + 1)²)
Solution: Here the PTF is obtained by the use of the impulse modulation method. Expanding in partial fractions,
𝑯(𝑧) = 𝒵{𝕃^{−1}{𝑯(𝑠)}|_{𝑡=𝑘𝑇}} = 𝒵{𝕃^{−1}{𝛼_1/𝑠 + 𝛼_2/(𝑠 + 1)² + 𝛼_3/(𝑠 + 1)}|_{𝑡=𝑘𝑇}}
𝛼_1 = 𝑠𝑯(𝑠)|_{𝑠=0} = 20,  𝛼_2 = (𝑠 + 1)²𝑯(𝑠)|_{𝑠=−1} = −10,  𝛼_3 = (d/d𝑠)((𝑠 + 1)²𝑯(𝑠))|_{𝑠=−1} = −20
Hence
𝑯(𝑧) = 𝒵{(20 − 20𝑒^{−𝑡} − 10𝑡𝑒^{−𝑡})𝑢(𝑡)|_{𝑡=𝑘𝑇}} = 𝒵{(20 − 20𝑒^{−𝑘𝑇} − 10𝑘𝑇𝑒^{−𝑘𝑇})𝑢(𝑘𝑇)}
In this example it is useful to recall the z-transforms of 𝑇𝑘𝑒^{−𝑘𝑇}𝑢(𝑘𝑇) and (𝑘𝑇)²𝑢(𝑘𝑇):
𝑇𝑘𝑒^{−𝑘𝑇}𝑢(𝑘𝑇) ⟷ 𝑇𝑒^{−𝑇}𝑧^{−1}/(1 − 𝑒^{−𝑇}𝑧^{−1})² and (𝑘𝑇)²𝑢(𝑘𝑇) ⟷ 𝑇²𝑧^{−1}(1 + 𝑧^{−1})/(1 − 𝑧^{−1})³
Therefore
𝑯(𝑧) = 20/(1 − 𝑧^{−1}) − 20/(1 − 𝑒^{−𝑇}𝑧^{−1}) − 10𝑇𝑒^{−𝑇}𝑧^{−1}/(1 − 𝑒^{−𝑇}𝑧^{−1})²
Solved Problems:
Exercise: 01 Let the analog signal be represented as
𝐱(𝑡) = 3 cos(50𝜋𝑡) + 2 sin(300𝜋𝑡) − 4 cos(100𝜋𝑡)
What is the Nyquist rate for this signal? If the signal is sampled with a sampling frequency of 200 Hz, what will be the DT signal obtained after sampling? What will be the recovered signal?
Solution: The signal contains three frequencies, namely 25 Hz, 150 Hz and 50 Hz. The highest frequency is 150 Hz. The recommended rate of sampling is 300 Hz (2 × 150 Hz).
If the signal is sampled with a sampling frequency of 200 Hz, substituting 𝑡 = 𝑘𝑇_s = 𝑘/𝑓_s we get
𝐱[𝑘𝑇_s] = 3 cos(50𝜋𝑘/200) + 2 sin(300𝜋𝑘/200) − 4 cos(100𝜋𝑘/200)
so that
𝐱[𝑘] = 3 cos(𝜋𝑘/4) + 2 sin(3𝜋𝑘/2) − 4 cos(𝜋𝑘/2) = 3 cos(𝜋𝑘/4) − 2 sin(𝜋𝑘/2) − 4 cos(𝜋𝑘/2)
The discrete-time signal obtained after sampling has only two digital frequencies, namely 𝑓_1 = 1/8 and 𝑓_2 = 1/4. We see that the 150 Hz component is aliased to 200 − 150 = 50 Hz with a 180° phase shift. The recovered signal contains only two frequencies, namely 25 Hz and 50 Hz, whereas the original signal had three frequencies: 25 Hz, 150 Hz and 50 Hz.
Exercise: 02 A 1-kHz sinusoidal signal is sampled at 𝑡_1 = 0 and 𝑡_2 = 250 𝜇s. The sample values are 𝑥_1 = 0 and 𝑥_2 = −1, respectively. Find the signal's amplitude and phase.
Solution: Writing 𝐱(𝑡) = 𝐴 cos(2𝜋10³𝑡 + 𝜑), 𝐱(0) = 𝐴 cos 𝜑 = 0 gives 𝜑 = 𝜋/2, and 𝐱(250 𝜇s) = 𝐴 cos(𝜋/2 + 𝜋/2) = −𝐴 = −1 gives 𝐴 = 1.
Exercise: 03 A sinusoid 𝐱(𝑡) = cos(2𝜋f_0𝑡) sampled at 1 kHz yields 𝐱[𝑘] = cos(4𝜋𝑘/5). Can f_0 be determined uniquely from the samples?
𝐱[𝑘] = 𝐱(𝑡)|_{𝑡=𝑘𝑇} = cos(2𝜋f_0𝑘10^{−3}) = cos([2/5 + 𝑚]2𝜋𝑘) ⟹ 2𝜋f_0/10³ = 4𝜋/5 + 2𝜋𝑚 ⟹ f_0 = (10³𝑚 + 400) Hz
Solution: The answer is no. Any frequency f_0 = (10³𝑚 + 400) Hz yields the same samples; for instance, sampling the sum of two sinusoids with frequencies 400 and 1,400 Hz at the 1-kHz rate produces 𝐱[𝑘] = 𝐴 cos(4𝜋𝑘/5).
Exercise: 04 The signal 𝐲(𝑡) = 𝐱_1(𝑡)𝐱_2(𝑡) is the product of two band-limited signals of bandwidths 𝜔_1 and 𝜔_2 (here 𝜔_1 + 𝜔_2 = 300 rad/s). What is the maximum sampling period that avoids aliasing?
Solution: In the frequency domain we have 𝒀(𝜔) = 𝑿_1(𝜔) ⋆ 𝑿_2(𝜔)/2𝜋 ⟹ the signal 𝒀(𝜔) has bandwidth 𝜔_1 + 𝜔_2; to avoid aliasing we require 𝜔_s ≥ 2(𝜔_1 + 𝜔_2):
2𝜋/𝑇_s ≥ 2(𝜔_1 + 𝜔_2) ⟹ 𝑇_s^{max} = 2𝜋/(2(𝜔_1 + 𝜔_2)) = 𝜋/300 s ≈ 10.466 ms
Exercise: 06 What should be the maximum value of the sampling period to be able to
recover the signal 𝐱(𝑡) = sin(3𝜋𝑡) /𝜋𝑡 from its samples 𝐱(𝑘𝑇𝑠 )?
Solution: In the frequency domain we have 𝑿(𝜔) = {1 for |𝜔| < 3𝜋; 0 for |𝜔| ≥ 3𝜋}. This is a bandlimited signal with cut-off frequency 𝜔_m = 3𝜋 ⟹ 𝜔_s ≥ 2(3𝜋) ⟹ 𝑇_s < 1/3 s, in other words 𝑇_s^{max} = 1/3 s.
Exercise: 07 Consider the signal 𝐱(𝑡) = cos(𝜔_0𝑡) with f_0 = 5 kHz. The signal is sampled at a frequency of f_s = 8 kHz. Can we recover the signal from its samples? And what is the frequency of the recovered signal?
Solution: We cannot recover this signal from its samples because f_s < 2f_0. There exists an alias with frequency f = f_s − f_0 = 3 kHz. The recovered signal is 𝐱(𝑡) = cos((𝜔_s − 𝜔_0)𝑡), where 𝜔_s − 𝜔_0 = 6𝜋 × 10³ rad/s.
Exercise: 08 Consider the impulse response of a continuous LTI system 𝐡(𝑡) = cos(𝜔_0𝑡). Find its corresponding pulse transfer function 𝑯(𝑧) using the impulse invariant method.
Solution:
𝑯(𝑧) = 𝒵{𝕃^{−1}{𝑯(𝑠)}|_{𝑡=𝑘𝑇_s}} = 𝒵{𝐡(𝑡)|_{𝑡=𝑘𝑇_s}} = 𝒵{(1/2)(𝑒^{𝑗𝜔_0𝑘𝑇_s} + 𝑒^{−𝑗𝜔_0𝑘𝑇_s})}
𝑯(𝑧) = (1/2) ∑_{k=0}^{∞} 𝑒^{𝑗𝜔_0𝑘𝑇_s}𝑧^{−𝑘} + (1/2) ∑_{k=0}^{∞} 𝑒^{−𝑗𝜔_0𝑘𝑇_s}𝑧^{−𝑘} = (1/2)(1/(1 − 𝑒^{𝑗𝜔_0𝑇_s}𝑧^{−1}) + 1/(1 − 𝑒^{−𝑗𝜔_0𝑇_s}𝑧^{−1}))
= (1 − cos(𝜔_0𝑇_s)𝑧^{−1})/(1 − 2 cos(𝜔_0𝑇_s)𝑧^{−1} + 𝑧^{−2})
Similarly, for 𝐡(𝑡) = sin(𝜔_0𝑡) one obtains
𝑯(𝑧) = sin(𝜔_0𝑇_s)𝑧^{−1}/(1 − 2 cos(𝜔_0𝑇_s)𝑧^{−1} + 𝑧^{−2})
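A direct numeric check of the cosine result above: the impulse response of 𝑯(𝑧) should reproduce cos(𝜔_0𝑘𝑇_s) sample by sample (𝜔_0 and 𝑇_s below are arbitrary values of ours):
% Impulse response of H(z) versus the sampled cosine.
w0 = 3; Ts = 0.1; k = 0:20;
b = [1, -cos(w0*Ts)];                  % Numerator of H(z)
a = [1, -2*cos(w0*Ts), 1];             % Denominator of H(z)
h = filter(b, a, [1, zeros(1, 20)]);   % First 21 impulse-response samples
maxErr = max(abs(h - cos(w0*k*Ts)))    % ~ 0 up to rounding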
Exercise: 09 Find the ZOH (step-invariant) discrete equivalent of
𝑯(𝑠) = 1/(𝑠 − 𝑎)
Solution: We know that
𝑯(𝑧) = (1 − 𝑧^{−1}) 𝒵{𝕃^{−1}{𝑯(𝑠)/𝑠}|_{𝑡=𝑘𝑇_s}} = (1 − 𝑧^{−1}) 𝒵{𝕃^{−1}{(1/𝑎)(1/(𝑠 − 𝑎) − 1/𝑠)}|_{𝑡=𝑘𝑇_s}}
= ((1 − 𝑧^{−1})/𝑎) 𝒵{(𝑒^{𝑎𝑡} − 1)𝑢(𝑡)|_{𝑡=𝑘𝑇_s}} = ((1 − 𝑧^{−1})/𝑎) 𝒵{(𝑒^{𝑎𝑘𝑇_s} − 1)𝑢(𝑘𝑇_s)}
= (1/𝑎)((1 − 𝑧^{−1})/(1 − 𝑒^{𝑎𝑇_s}𝑧^{−1}) − 1) = (𝑒^{𝑎𝑇_s} − 1)𝑧^{−1}/(𝑎(1 − 𝑒^{𝑎𝑇_s}𝑧^{−1}))
Remark: 𝑧 = 𝑒^{𝑎𝑇_s} is a pole of the new equivalent system, so the sampling period is a design parameter in digital control and can alter the stability of the system.
Exercise: 10 Determine the ZOH discrete-time equivalent of the state space
𝐱̇(𝑡) = [0 1; −1 0] 𝐱(𝑡) + [0; 1] 𝐮(𝑡),  𝐲(𝑡) = [1 1] 𝐱(𝑡)
Solution: We know 𝑨_d = 𝑒^{𝑨𝑇_s} and 𝑩_d = ∫_0^{𝑇_s} 𝑒^{𝑨𝜏}𝑩 d𝜏, but what is 𝑒^{𝑨𝑡}?
𝑒^{𝑨𝑡} = 𝕃^{−1}{(𝑠𝑰 − 𝑨)^{−1}} = 𝕃^{−1}{[𝑠 −1; 1 𝑠]^{−1}} = 𝕃^{−1}{(1/(𝑠² + 1))[𝑠 1; −1 𝑠]} = [cos(𝑡) sin(𝑡); −sin(𝑡) cos(𝑡)]
𝑨_d = 𝑒^{𝑨𝑇_s} = [cos(𝑇_s) sin(𝑇_s); −sin(𝑇_s) cos(𝑇_s)]
𝑩_d = ∫_0^{𝑇_s} 𝑒^{𝑨𝜏}𝑩 d𝜏 = ∫_0^{𝑇_s} [sin(𝜏); cos(𝜏)] d𝜏 = [1 − cos(𝑇_s); sin(𝑇_s)]
𝑯(𝑧) = 𝑪_d(𝑧𝑰 − 𝑨_d)^{−1}𝑩_d = [1 1] [𝑧 − cos(𝑇_s) −sin(𝑇_s); sin(𝑇_s) 𝑧 − cos(𝑇_s)]^{−1} [1 − cos(𝑇_s); sin(𝑇_s)]
= ((1 − cos(𝑇_s) + sin(𝑇_s))𝑧 + (1 − cos(𝑇_s) − sin(𝑇_s)))/(𝑧² − 2 cos(𝑇_s)𝑧 + 1)
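A numeric sanity check of this closed form (𝑇_s = 0.5 is an arbitrary choice of ours; integral and expm are standard MATLAB functions):
% Verify Ad, Bd and H(z) of Exercise 10 numerically.
Ts = 0.5; A = [0 1; -1 0]; B = [0; 1]; C = [1 1];
Ad = expm(A*Ts);                                  % [cos sin; -sin cos]
Bd = integral(@(tau) expm(A*tau)*B, 0, Ts, 'ArrayValued', true);
Hz = tf(ss(Ad, Bd, C, 0, Ts))                     % Compare with the formula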
Exercise: 11 A continuous system is described by its state space equations
𝐱̇(𝑡) = [0 1; −1 −2] 𝐱(𝑡) + [0; 1] 𝐮(𝑡),  𝐲(𝑡) = [1 1] 𝐱(𝑡)
Determine the ZOH discrete-time equivalent of this state space, and then determine 𝑯(𝑧).
Solution: We know 𝑨_d = 𝑒^{𝑨𝑇_s} and 𝑩_d = ∫_0^{𝑇_s} 𝑒^{𝑨𝜏}𝑩 d𝜏, but what is 𝑒^{𝑨𝑡}?
𝑒^{𝑨𝑡} = 𝕃^{−1}{(𝑠𝑰 − 𝑨)^{−1}} = 𝕃^{−1}{[𝑠 −1; 1 𝑠 + 2]^{−1}} = 𝕃^{−1}{(1/(𝑠 + 1)²)[𝑠 + 2 1; −1 𝑠]} = [(1 + 𝑡)𝑒^{−𝑡} 𝑡𝑒^{−𝑡}; −𝑡𝑒^{−𝑡} (1 − 𝑡)𝑒^{−𝑡}]
𝑨_d = 𝑒^{𝑨𝑇_s} = [(1 + 𝑇_s)𝑒^{−𝑇_s} 𝑇_s𝑒^{−𝑇_s}; −𝑇_s𝑒^{−𝑇_s} (1 − 𝑇_s)𝑒^{−𝑇_s}]
𝑩_d = ∫_0^{𝑇_s} [𝜏𝑒^{−𝜏}; (1 − 𝜏)𝑒^{−𝜏}] d𝜏 = [1 − (1 + 𝑇_s)𝑒^{−𝑇_s}; 𝑇_s𝑒^{−𝑇_s}]
For the transfer function we have
𝑯(𝑠) = 𝑪(𝑠𝑰 − 𝑨)^{−1}𝑩 = [1 1] (1/(𝑠 + 1)²)[𝑠 + 2 1; −1 𝑠] [0; 1] = (𝑠 + 1)/(𝑠 + 1)² = 1/(𝑠 + 1)
Therefore,
𝑯(𝑧) = (1 − 𝑧^{−1}) 𝒵{𝕃^{−1}{1/𝑠 − 1/(𝑠 + 1)}|_{𝑡=𝑘𝑇_s}} = (1 − 𝑧^{−1})(1/(1 − 𝑧^{−1}) − 1/(1 − 𝑒^{−𝑇_s}𝑧^{−1}))
= 1 − (1 − 𝑧^{−1})/(1 − 𝑒^{−𝑇_s}𝑧^{−1}) = (1 − 𝑒^{−𝑇_s})𝑧^{−1}/(1 − 𝑒^{−𝑇_s}𝑧^{−1})
Exercise: 12 Consider the transfer function of a continuous LTI system 𝑯(𝑠) = 1/(𝑠² + 1). Find its corresponding pulse transfer function 𝑯(𝑧) using the ZOH method.
Solution:
𝑯(𝑧) = (1 − 𝑧^{−1}) 𝒵{𝕃^{−1}{1/(𝑠(𝑠² + 1))}|_{𝑡=𝑘𝑇_s}} = (1 − 𝑧^{−1}) 𝒵{𝕃^{−1}{1/𝑠 − (1/2)/(𝑠 + 𝑗) − (1/2)/(𝑠 − 𝑗)}|_{𝑡=𝑘𝑇_s}}
= (1 − 𝑧^{−1}) 𝒵{(1 − (1/2)𝑒^{𝑗𝑘𝑇_s} − (1/2)𝑒^{−𝑗𝑘𝑇_s})𝑢(𝑘𝑇_s)} = 1 − ((1 − 𝑧^{−1})/2)/(1 − 𝑒^{−𝑗𝑇_s}𝑧^{−1}) − ((1 − 𝑧^{−1})/2)/(1 − 𝑒^{𝑗𝑇_s}𝑧^{−1})
𝑯(𝑧) = 1 + (1 − 𝑧^{−1})(cos(𝑇_s)𝑧^{−1} − 1)/(1 − 2 cos(𝑇_s)𝑧^{−1} + 𝑧^{−2})
Exercise: 13 Find the ZOH equivalent of the given transfer function
𝑯(𝑠) = (𝑠 + 1)/(𝑠² + 1)
Solution: We know that
𝑯(𝑠)/𝑠 = (𝑠 + 1)/(𝑠(𝑠² + 1)) = 1/𝑠 + 1/(𝑠² + 1) − 𝑠/(𝑠² + 1) ↔ (1 + sin(𝑡) − cos(𝑡))𝑢(𝑡)
and the computation then proceeds exactly as in the previous exercise.
Exercise: 14 A system has the impulse response 𝐡(𝑡) = (𝑡 + 1)𝑒^{𝑡}𝑢(𝑡), that is
𝑯(𝑠) = 1/(𝑠 − 1)² + 1/(𝑠 − 1) = 𝑠/(𝑠 − 1)²
Find its ZOH equivalent. What are the poles and zeros of the equivalent discrete-time system?
Solution: We obtain
𝑯(𝑠)/𝑠 = 1/(𝑠 − 1)² ↔ 𝑡𝑒^{𝑡}𝑢(𝑡)
𝑯(𝑧) = (1 − 𝑧^{−1}) 𝒵{𝑘𝑇_s𝑒^{𝑘𝑇_s}𝑢(𝑘𝑇_s)} = 𝑇_s ((𝑧 − 1)/𝑧) ∑_{k=0}^{∞} 𝑘(𝑒^{𝑇_s}/𝑧)^k = 𝑇_s𝑒^{𝑇_s}(𝑧 − 1)/(𝑧 − 𝑒^{𝑇_s})²
We have a double finite pole at 𝑧 = 𝑒^{𝑇_s} and a single zero at 𝑧 = 1. Since 𝑒^{𝑇_s} > 1 for any 𝑇_s > 0, the discrete-time system is unstable, like the analog one. Moreover, the analog filter is not bandlimited, so the sampled representation suffers from aliasing; it is therefore preferable to use an antialiasing filter in cascade with the sampler to avoid this problem.
Exercise: 15 Find the ZOH equivalent of
𝑯(𝑠) = 1/((𝑠 + 1)(𝑠 + 2))
Solution: We know that
𝑯(𝑧) = (1 − 𝑧^{−1}) 𝒵{𝕃^{−1}{1/(𝑠(𝑠 + 1)(𝑠 + 2))}|_{𝑡=𝑘𝑇_s}} = (1 − 𝑧^{−1}) 𝒵{𝕃^{−1}{(1/2)/𝑠 − 1/(𝑠 + 1) + (1/2)/(𝑠 + 2)}|_{𝑡=𝑘𝑇_s}}
𝑯(𝑧) = (1 − 𝑧^{−1})((1/2)/(1 − 𝑧^{−1}) − 1/(1 − 𝑒^{−𝑇_s}𝑧^{−1}) + (1/2)/(1 − 𝑒^{−2𝑇_s}𝑧^{−1}))
= 1/2 − (1 − 𝑧^{−1})/(1 − 𝑒^{−𝑇_s}𝑧^{−1}) + (1/2)(1 − 𝑧^{−1})/(1 − 𝑒^{−2𝑇_s}𝑧^{−1})
Exercise: 16 The analog system 𝐲̇(𝑡) = 𝑎(𝐱(𝑡) − 𝐲(𝑡)), i.e., 𝑯(𝑠) = 𝑎/(𝑠 + 𝑎), is to be approximated by a digital filter. Derive the discrete equivalent obtained by integrating over one sampling interval.
Solution: From the given digital filter we notice a backward difference approximation, so we use this type of discretization. Integrating both sides over [(𝑘 − 1)𝑇_s, 𝑘𝑇_s]:
∫_{(𝑘−1)𝑇_s}^{𝑘𝑇_s} (d𝐲(𝑡)/d𝑡) d𝑡 = ∫_{(𝑘−1)𝑇_s}^{𝑘𝑇_s} 𝑎(𝐱(𝑡) − 𝐲(𝑡)) d𝑡 ⟺ 𝐲[𝑘𝑇_s] − 𝐲[(𝑘 − 1)𝑇_s] = 𝑎 ∫_{(𝑘−1)𝑇_s}^{𝑘𝑇_s} (𝐱(𝑡) − 𝐲(𝑡)) d𝑡
For a small enough sampling period 𝑇_s we can consider the function 𝐠(𝑡) = 𝐱(𝑡) − 𝐲(𝑡) to be constant over the interval, that is 𝐠(𝑡) ≈ 𝐠(𝑘𝑇_s), therefore
𝐲[𝑘𝑇_s] − 𝐲[(𝑘 − 1)𝑇_s] = 𝑎𝐠(𝑘𝑇_s) ∫_{(𝑘−1)𝑇_s}^{𝑘𝑇_s} d𝑡 = 𝑎𝑇_s(𝐱[𝑘𝑇_s] − 𝐲[𝑘𝑇_s])
For simplicity we omit the sampling period 𝑇_s from the indexing and take the z-transform:
𝑯(𝑧) = 𝐘(𝑧)/𝐗(𝑧) = 𝑎/((1 − 𝑧^{−1})/𝑇_s + 𝑎) = 𝑯(𝑠)|_{𝑠=(1−𝑧^{−1})/𝑇_s}
This type of mapping between the s-plane and the z-plane is called the backward difference method:
𝑠 = (1 − 𝑧^{−1})/𝑇_s = (1/𝑇_s)((𝑧 − 1)/𝑧)
Exercise: 17 Repeat the previous exercise, this time integrating over [𝑘𝑇_s, (𝑘 + 1)𝑇_s]:
𝐲[(𝑘 + 1)𝑇_s] − 𝐲[𝑘𝑇_s] = ∫_{𝑘𝑇_s}^{(𝑘+1)𝑇_s} 𝑎(𝐱(𝑡) − 𝐲(𝑡)) d𝑡
Solution: For a small enough sampling period 𝑇_s we can again consider 𝐠(𝑡) = 𝐱(𝑡) − 𝐲(𝑡) to be constant, 𝐠(𝑡) ≈ 𝐠(𝑘𝑇_s), therefore
𝐲[(𝑘 + 1)𝑇_s] − 𝐲[𝑘𝑇_s] = 𝑎𝐠(𝑘𝑇_s) ∫_{𝑘𝑇_s}^{(𝑘+1)𝑇_s} d𝑡 = 𝑎𝑇_s(𝐱[𝑘𝑇_s] − 𝐲[𝑘𝑇_s])
Omitting 𝑇_s from the indexing and taking the z-transform:
𝑯(𝑧) = 𝐘(𝑧)/𝐗(𝑧) = 𝑎/((𝑧 − 1)/𝑇_s + 𝑎) = 𝑯(𝑠)|_{𝑠=(𝑧−1)/𝑇_s}
This type of mapping between the s-plane and the z-plane is called the forward difference method:
𝑠 = (1 − 𝑧^{−1})/(𝑧^{−1}𝑇_s) = (𝑧 − 1)/𝑇_s
Exercise: 18 Consider an analog integrator characterized by the transfer function 𝑯_{𝑨𝑰}(𝑠) = 1/𝑠 and assume that its response to an excitation 𝐱(𝑡) is 𝐲(𝑡). The impulse response of the integrator is 𝒉_{𝑨𝑰}(𝑡) = 𝑢(𝑡), and its response at instant 𝑡 to an arbitrary right-sided excitation 𝐱(𝑡) (i.e., 𝐱(𝑡) = 0 for 𝑡 < 0) is given by the convolution integral
𝐱(𝑡) ⟶ 𝑯_{𝑨𝑰}(𝑠) = 1/𝑠 ⟶ 𝐲(𝑡) = ∫_0^{𝑡} 𝐱(𝜏)𝒉_{𝑨𝑰}(𝑡 − 𝜏) d𝜏
Prove that this response can be approximated by the following digital filter:
𝐲[𝑛] − 𝐲[𝑛 − 1] = (𝑇/2){𝐱[𝑛] + 𝐱[𝑛 − 1]}
Solution: (Derivation of the Bilinear-Transformation Method) If 0⁺ < 𝑡_1 < 𝑡_2, we can write
𝐲(𝑡_2) − 𝐲(𝑡_1) = ∫_0^{𝑡_2} 𝐱(𝜏)𝒉_{𝑨𝑰}(𝑡_2 − 𝜏) d𝜏 − ∫_0^{𝑡_1} 𝐱(𝜏)𝒉_{𝑨𝑰}(𝑡_1 − 𝜏) d𝜏
For 0⁺ < 𝜏 ≤ 𝑡_1, 𝑡_2 we have 𝒉_{𝑨𝑰}(𝑡_2 − 𝜏) = 𝒉_{𝑨𝑰}(𝑡_1 − 𝜏) = 1, and thus 𝐲(𝑡_2) − 𝐲(𝑡_1) simplifies to
𝐲(𝑡_2) − 𝐲(𝑡_1) = ∫_{𝑡_1}^{𝑡_2} 𝐱(𝜏) d𝜏
Setting 𝑡_1 = (𝑛 − 1)𝑇, 𝑡_2 = 𝑛𝑇 and approximating the integral by the trapezoidal rule gives 𝐲[𝑛] − 𝐲[𝑛 − 1] = (𝑇/2){𝐱[𝑛] + 𝐱[𝑛 − 1]}; taking the z-transform,
𝑯_{𝑫𝑰}(𝑧) = 𝐘(𝑧)/𝐗(𝑧) = (𝑇/2)((𝑧 + 1)/(𝑧 − 1))
In effect, a digital integrator can be obtained from an analog integrator by simply applying
the bilinear transformation. Generally, applying the bilinear transformation to the transfer
function of an arbitrary analog filter will yield a digital filter characterized by the discrete-
time transfer function. And this type of approximation is known in mathematics as the
trapezoidal method.
𝑠 = (2/𝑇)((𝑧 − 1)/(𝑧 + 1)) ⟷ 𝑧 = (2 + 𝑇𝑠)/(2 − 𝑇𝑠)
𝑧 = (1 + 𝑤)/(1 − 𝑤) ⟷ 𝑤 = (𝑧 − 1)/(𝑧 + 1)
Exercise: 19 A discrete-time system has the impulse response
𝐡[𝑘] = (10𝛿[𝑘] − 10(1/√2)^k cos(𝑘𝜋/4) + 16(1/√2)^k sin(𝑘𝜋/4)) 𝑢[𝑘]
Determine the continuous transfer function 𝑯(𝑠) using the bilinear-transformation method.
Solution: First of all we find the discrete-time transfer function, then substitute 𝑧 = (2 + 𝑇𝑠)/(2 − 𝑇𝑠):
𝑯(𝑧) = (3𝑧 + 1)/(𝑧² − 𝑧 + 1/2) ⟹ 𝑯(𝑠) = 𝑯(𝑧)|_{𝑧=(2+𝑇𝑠)/(2−𝑇𝑠)} = 4{8 − 2𝑇𝑠 − (𝑇𝑠)²}/(5(𝑇𝑠)² + 4𝑇𝑠 + 4)
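A hedged cross-check using the Control System Toolbox: converting 𝑯(𝑧) back to continuous time with d2c and the Tustin method should reproduce 𝑯(𝑠) for 𝑇 equal to the chosen sampling period (0.5 s here is arbitrary):
% H(z) = (3z + 1)/(z^2 - z + 1/2) mapped back with the bilinear rule.
T  = 0.5;
Hz = tf([3 1], [1 -1 0.5], T);
Hs = d2c(Hz, 'tustin')   % Compare with 4{8 - 2Ts - (Ts)^2}/(5(Ts)^2 + 4Ts + 4)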
Exercise: 20 Given a system setup as shown in the figure, assume that we are required to digitalize the analog controller
𝒖(𝑡) = 𝐾(𝒆(𝑡) + (1/𝑇_I) ∫_0^{𝑡} 𝒆(𝜉) d𝜉 + 𝑇_D (d𝒆(𝑡)/d𝑡))
with 𝒖(𝑡) the output of the controller, 𝒆(𝑡) the input of the controller, 𝐾 the proportional constant, 𝑇_I the integral constant, and 𝑇_D the derivative constant. For the best approximation use the trapezoidal rule for the integration term and the backward difference for the derivative term.
Solution: First of all we convert the analog controller equation to the frequency domain:
𝑼(𝑠) = 𝐾(1 + (1/𝑇_I)𝑠^{−1} + 𝑇_D𝑠)𝑬(𝑠) ⟺ 𝑯_c(𝑠) = 𝐾(1 + (1/𝑇_I)𝑠^{−1} + 𝑇_D𝑠)
Applying the trapezoidal rule to the integral term and the backward difference to the derivative term:
𝑯_c(𝑧) = 𝐾{1 + (𝑇_s/(2𝑇_I))((1 + 𝑧^{−1})/(1 − 𝑧^{−1})) + (𝑇_D/𝑇_s)(1 − 𝑧^{−1})}
= (𝐾 − 𝐾𝑇_s/(2𝑇_I)) + (𝐾𝑇_s/𝑇_I)(1/(1 − 𝑧^{−1})) + (𝐾𝑇_D/𝑇_s)(1 − 𝑧^{−1})
Let us put
𝐾_P = 𝐾 − 𝐾𝑇_s/(2𝑇_I),  𝐾_I = 𝐾𝑇_s/𝑇_I,  𝐾_D = 𝐾𝑇_D/𝑇_s
We obtain
𝑯_c(𝑧) = 𝐾_P + 𝐾_I/(1 − 𝑧^{−1}) + 𝐾_D(1 − 𝑧^{−1}) ⟺ 𝒖[𝑘] = 𝐾_P𝒆[𝑘] + 𝐾_I ∑_{m=0}^{k} 𝒆[𝑚] + 𝐾_D∇𝒆[𝑘]
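A small simulation sketch of this digital PID law (all gains and the unit-step error sequence below are arbitrary example values of ours):
% Discrete PID: u[k] = KP*e[k] + KI*sum(e[0..k]) + KD*(e[k] - e[k-1]).
K = 2; TI = 1; TD = 0.1; Ts = 0.01;
KP = K - K*Ts/(2*TI); KI = K*Ts/TI; KD = K*TD/Ts;
e = ones(1, 100);                 % Example error sequence (unit step)
u = zeros(size(e)); s = 0; eprev = 0;
for k = 1:numel(e)
    s = s + e(k);                 % Running sum (discrete integral term)
    u(k) = KP*e(k) + KI*s + KD*(e(k) - eprev);
    eprev = e(k);
end
stairs(u); grid on; xlabel('k'); ylabel('u[k]');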
Exercise: 21 Given a hybrid closed-loop system as shown in the figure, find its pulse transfer function (PTF), if it exists.
Solution: Here we give just a short answer for each block diagram.
Exercise: 22 Given a hybrid closed-loop system as shown in the figure, find its pulse transfer function (PTF), if it exists.
Exercise: 23 Given a hybrid closed-loop system as shown in the figure, find its pulse transfer function (PTF), if it exists.
Exercise: 24 Given a hybrid closed-loop system as shown in the figure, find its pulse transfer function (PTF), if it exists.
Exercise: 25 Given a hybrid closed-loop system as shown in the figure, find its pulse transfer function (PTF), if it exists.
Exercise: 26 Write a MATLAB code to simulate a 2nd-order digital filter
clear all, clc,
a0 = 1; a1 = 3; b0 = 1/2; b1 = -1;        % Filter coefficients
x0 = 0; x1 = a1;                          % Initial conditions
u0 = 1; u1 = 0;                           % Impulse input
x = [x0, x1]; n = 18;
for k = 1:n
    x2 = -b1*x1 - b0*x0 + a1*u1 + a0*u0;  % Difference equation
    x = [x, x2];
    x0 = x1; x1 = x2;
    u0 = u1;                              % Input sample shifts out
end
plot(x, 'o'); grid
Consider the modulated signal 𝐲(𝑡) = 𝐱(𝑡)𝐬(𝑡) (i.e., impulse-train sampled). Its Laplace transform is
𝕃{𝐲(𝑡)} = 𝕃{∑_{k=−∞}^{∞} 𝐱(𝑘𝑇_s)𝛿(𝑡 − 𝑘𝑇_s)} = ∑_{k=−∞}^{∞} 𝐱(𝑘𝑇_s)𝑒^{−𝑠𝑘𝑇_s}
The Laplace transform of the sampled signal can thus be converted into an equivalent z-transform through the mapping 𝑧 = 𝑒^{𝑠𝑇_s}. The various discretization rules can now be recovered from this relationship between the s-plane and the z-plane.
▪ The Forward rule: we apply the Taylor expansion to 𝑧 = 𝑒^{𝑠𝑇}
𝑧 = 𝑒^{𝑠𝑇} ⟺ 𝑧 ≈ 1 + 𝑠𝑇 ⟺ 𝑠 = (𝑧 − 1)/𝑇 = (1 − 𝑧^{−1})/(𝑇𝑧^{−1})
▪ The Backward rule: we apply the Taylor expansion to 𝑧^{−1} = 𝑒^{−𝑠𝑇}
𝑧 = 𝑒^{𝑠𝑇} ⟺ 𝑧^{−1} = 𝑒^{−𝑠𝑇} ⟺ 𝑧^{−1} ≈ 1 − 𝑠𝑇 ⟺ 𝑠 = (𝑧 − 1)/(𝑇𝑧) = (1 − 𝑧^{−1})/𝑇
▪ The Bilinear (Trapezoidal) rule: we apply the Taylor expansion to 𝑠𝑇 = ln(𝑧)
𝑧 = 𝑒^{𝑠𝑇} ⟺ 𝑠 = (1/𝑇) ln(𝑧) ≈ (2/𝑇)((𝑧 − 1)/(𝑧 + 1)) ≈ (2/𝑇)((1 − 𝑧^{−1})/(1 + 𝑧^{−1}))
CHAPTER X:
Analog and Digital
Filters in Linear
Systems
Filtering can be used to select one or more desirable bands of frequency components and simultaneously reject one or more undesirable bands. For example, we use lowpass filtering to select a band of preferred low frequencies and reject a band of undesirable high frequencies; we use highpass filtering to select a band of preferred high frequencies and reject a band of undesirable low frequencies; we use bandpass filtering to select a band of frequencies and reject both low and high frequencies; lastly, we use bandstop filtering to reject a band of frequencies while keeping the low and high frequencies.
Analog and Digital Filters in
Linear Systems
I. Introduction: In many signal processing applications the need arises to change the
strength, or the relative significance, of various frequency components in a given signal.
Sometimes we may need to eliminate certain frequency components in a signal; at other
times we may need to boost the strength of a range of frequencies over others. This act of
changing the relative amplitudes of frequency components in a signal is referred to as
filtering, and the system that facilitates this is referred to as a filter. In a general sense any
continuous time LTI system can be seen as a filter. In the most general sense, a "filter" is a
device or a system that alters in a prescribed way the input that passes through it. In
essence, a filter converts inputs into outputs in such a fashion that certain desirable
features of the inputs are retained in the outputs while undesirable features are
suppressed. There are many kinds of filters; only a few examples are given here. In
automobiles, the oil filter removes unwanted particles that are suspended in the oil passing
through the filter; the air filter passes air but prevents dirt and dust from reaching the
carburetor. Colored glass may be used as an optical filter to absorb light of certain
wavelengths, thus altering the light that reaches the sensitized film in a camera.
An electrical filter is designed to separate and pass a desired signal from a mixture of
desired and undesired signals. Typical examples of complex electrical filters are
televisions and radios. More specifically, when a television is tuned to a particular channel, say Channel 2, it will pass those signals (audio and visual) transmitted by Channel 2 and block out all other signals. On a smaller scale, filters are basic electronic
components in many communication systems such as the telephone, television, radio,
radar, and sonar. Electrical filters can also be found in power conversion circuits and
power systems in general. In fact, electrical filters permeate modern technology so much
that it is difficult to think of any moderately complex electronic device that does not employ
a filter in one form or another. Electrical filters may be classified in a number of ways.
Analog filters are used to process analog or continuous-time signals; digital filters are used
to process digital signals (discrete-time signals with quantized magnitude levels). Analog
filters may be classified as lumped or distributed depending on the frequency ranges for
which they are designed. Finally, analog filters may also be classified as passive or active
depending on the type of elements used in their realizations.
The inverting amplifier: has its noninverting input connected to ground or common. A
signal is applied through input resistor 𝑍1 , and negative current feedback is implemented
through feedback resistor 𝑍2 . Output 𝑣𝑜 has polarity opposite that of input 𝑣𝑖 .
𝑣_o/𝑣_i = −𝑍_2/𝑍_1 ⟺ 𝑣_o = (−𝑍_2/𝑍_1)𝑣_i
The noninverting amplifier: is realized by grounding 𝑍_1 and applying the input signal at the noninverting op-amp terminal. When 𝑣_i is positive, 𝑣_o is positive and the current 𝑖_1 is positive. The voltage 𝑣_1 = 𝑖_1𝑍_1 is then applied to the inverting terminal as negative voltage feedback. Since 𝑖_d = 0 ⟹ 𝑣_− = 𝑣_+ = 𝑣_i, therefore
𝑖_1 = −𝑣_i/𝑍_1 = 𝑖_2 = (𝑣_i − 𝑣_o)/𝑍_2 ⟹ 𝑣_o = (1 + 𝑍_2/𝑍_1)𝑣_i
Remark: if we want the gain of the op-amp stage to be less than one, we add a divider circuit as shown below:
𝑣_o = (1 + 𝑍_2/𝑍_1)𝑣_+ and 𝑣_+ = (𝑍_x/(𝑍_x + 𝑍_y))𝑣_i ⟹ 𝑣_o = (1 + 𝑍_2/𝑍_1)(𝑍_x/(𝑍_x + 𝑍_y))𝑣_i
Combining the inverting and noninverting paths (the difference amplifier) gives
𝑣_o = (−𝑍_2/𝑍_1)𝑣_1 + (1 + 𝑍_2/𝑍_1)(𝑍_x/(𝑍_x + 𝑍_y))𝑣_2
and if 𝑍_2 = 𝑍_1 and 𝑍_x = 𝑍_y we get 𝑣_o = 𝑣_2 − 𝑣_1.
The summing amplifier combines several inputs:
𝑣_{out} = −((𝑅_f/𝑅_1)𝑣_1 + (𝑅_f/𝑅_2)𝑣_2 + ⋯ + (𝑅_f/𝑅_n)𝑣_n)
and the summing integrator integrates their weighted sum:
𝑣_{out} = −(1/𝐶) ∫_{−∞}^{𝑡} ((1/𝑅_1)𝑣_1 + (1/𝑅_2)𝑣_2 + ⋯ + (1/𝑅_n)𝑣_n) d𝑡
III. Bode Plot: (What Is It?) The Bode plot of the frequency response 𝐅(𝜔) of an LTI system
is the graph of 20 log|𝐅(𝜔)| (magnitude in dB) and ∠𝐅(𝜔) (phase angle) both plotted versus
log 𝜔. The main advantage of using the Bode diagram is that multiplication of magnitudes
can be converted into addition. To see this construction, first consider the system with the
transfer function
𝐅(𝜔) = 𝐾(1 + 𝑗𝜔/𝑧_1)⋯(1 + 𝑗𝜔/𝑧_m) / [𝑗𝜔(1 + 𝑗𝜔/𝑝_1)⋯(1 + 𝑗𝜔/𝑝_{n−1})]
20 log|𝐅(𝜔)| = 20{log|𝐾| − log|𝜔| + ∑_{k=1}^{m} log|1 + 𝑗𝜔/𝑧_k| − ∑_{k=1}^{n−1} log|1 + 𝑗𝜔/𝑝_k|}
The use of logarithmic scale makes it possible to display both the low- and high-frequency
characteristics of the transfer function in one graph. Even though zero frequency cannot be
included in the logarithmic scale (since log 0 = – ∞), it is not a serious problem as one can
go to as low a frequency as is required for analysis and design of practical control system.
The principal factors that may be present in a transfer function 𝑭(𝑗𝜔) = ∏𝑛𝑖=1 𝑭𝑖 (𝑗𝜔), in
general, are:
1. Constant gain 𝐾
2. Pure integral and derivative factors (𝑗𝜔)±𝑛
3. First-order factors (1 + 𝑗𝜔𝑇)±1
4. Quadratic factors [1 + 2𝜉(𝑗𝜔/𝜔𝑛 ) + (𝑗𝜔/𝜔𝑛 )2 ]±1
Once the logarithmic plots of these basic factors are known, it will be convenient to add
their contributions graphically to get the composite plot of the multiplying factors of
𝑮(𝑗𝜔)𝑯(𝑗𝜔), since the product of terms become additions of their logarithms. It will be
discovered soon that in the logarithmic scale, the actual amplitude and phase of the
principal factors of 𝑮(𝑗𝜔)𝑯(𝑗𝜔) may be approximated by straight line asymptotes, which is
the added advantage of the Bode plot. The errors in the approximation in most cases are
definite and known and when necessary, corrections can be incorporated easily to get an
accurate plot.
III.I. The Gain K: A number greater than unity has a positive value in decibels, while a
number smaller than unity has a negative value. The log-magnitude curve for a constant
gain K is a horizontal straight line at the magnitude of 20 log K decibels. The phase angle of
the gain K is zero. The effect of varying the gain K in the transfer function is that it raises
or lowers the log-magnitude curve of the transfer function by the corresponding constant
amount, but it has no effect on the phase curve.
𝐅(𝑗𝜔) = 𝐾,  𝐅_{db}(𝑗𝜔) = 20 log_{10}(𝐾),  ∠𝐅(𝑗𝜔) = { 0° for 𝐾 > 0 ; −180° for 𝐾 < 0 }
III.II. Integral and Derivative Factors (Pole and Zero at the origin): The logarithmic
magnitude (in decibels) and the phase angle of the integrator 𝐅(𝑗𝜔) = 1/𝑗𝜔 are
𝐅𝐝𝐛 (𝑗𝜔) = 20 log10 |1/𝑗𝜔| = −20 log10 |𝜔| and ∠𝐅(𝑗𝜔) = −90∘
Similarly, the log magnitude and phase of the differentiator 𝐅(𝑗𝜔) = 𝑗𝜔 are 𝐅_{db}(𝑗𝜔) = 20 log_{10}|𝜔| and ∠𝐅(𝑗𝜔) = +90°.
III.III. First-Order Factors: Now let us look at 𝐅(𝑗𝜔) = (1 + 𝑗𝜔𝑇). An advantage of the Bode diagram is that for reciprocal factors, for example the factor 1/(1 + 𝑗𝜔𝑇), the log-magnitude and phase-angle curves need only be changed in sign, since 20 log_{10}|1 + 𝑗𝜔𝑇| = −20 log_{10}|1/(1 + 𝑗𝜔𝑇)| and ∠(1/(1 + 𝑗𝜔𝑇)) = −tan^{−1}(𝜔𝑇).
III.IV. Quadratic Factors: Linear time invariant systems often possess quadratic factors of
the form 𝐅(𝑗𝜔) = 1/[1 + 2𝜉(𝑗𝜔/𝜔𝑛 ) + (𝑗𝜔/𝜔𝑛 )2 ]. The nature of roots in this expression
depends on the value of 𝜉. For 𝜉 > 1, both the roots will be real and the quadratic factor
can be expressed as a product of two first-order factors with real poles. The quadratic
factor can be expressed as the product of two complex conjugate factors for values of 𝜉
satisfying 0 < 𝜉 < 1. Asymptotic approximations to the frequency response curves will not
be accurate for the quadratic factor for small values of 𝜉. This is because of the fact that
the magnitude as well as the phase of the quadratic factor depend on both the corner
frequency and the damping ratio 𝜉.
At high frequencies the asymptotic log magnitude is −40 log(𝜔/𝜔_n); one decade above any frequency it becomes
−40 log(10𝜔/𝜔_n) = −40 − 40 log(𝜔/𝜔_n)
so the high-frequency asymptote falls at −40 dB/decade.
Low-Pass filter: A filter whose passband is from frequency 0 to 𝜔_p and whose stopband extends from some frequency 𝜔_s to infinity, where 𝜔_p < 𝜔_s.
High-Pass filter: A filter whose passband is from some frequency 𝜔_p to infinity and whose stopband is from 0 to 𝜔_s, where 𝜔_s < 𝜔_p.
Bandpass filter: filter whose passband is from some frequency 𝜔𝑝1 to some other frequency
𝜔𝑝2 and whose stopbands are from 0 to 𝜔𝑠1 and from 𝜔𝑠2 to ∞, where 𝜔𝑠1 < 𝜔𝑝1 < 𝜔𝑝2 < 𝜔𝑠2 .
Band-Reject filter: A filter whose passbands are from 0 to 𝜔𝑝1 and from 𝜔𝑝2 to ∞ and
whose stopband is from 𝜔𝑠1 to 𝜔𝑠2 , where 𝜔𝑝1 < 𝜔𝑠1 < 𝜔𝑠2 < 𝜔𝑝2 .
All-Pass filter: A filter whose magnitude is 1 for all frequencies (i.e., whose passband is
from 0 to ∞). This type of filter is used mainly for phase compensation and phase shifting
purposes.
The system function of a practical filter, which is made of lumped linear elements, is a
rational function (a ratio of two polynomials). The degree of the denominator polynomial is
called the order of the filter. Because of the practical importance of first- and second-order filters and their widespread applications, we discuss these two classes of filters in detail and present examples of circuits to realize them. It is noted that first- and second-order
filters are special cases of first- and second-order LTI systems. The properties and
characteristics of these systems (such as impulse and step responses, natural frequencies,
damping ratio, quality factor, overdamping, underdamping, and critical damping) apply to
these filters as well and will be addressed.
IV.I. First-Order Low-Pass Filters: Assuming that the op-amp is ideal, it functions as a
voltage follower, preventing the impedance Z from loading the RC circuit. The ideal op-amp,
therefore, has no effect on the frequency response of the RC circuit, and in this analysis we
need not consider it.
KVL ⟹ −V₁(s) + V_R(s) + V_C(s) = 0 ⟹ −V₁(s) + R I₁(s) + (1/Cs) I₁(s) = 0 ⟹ I₁(s) = (Cs/(1 + RCs)) V₁(s)
The voltage across the capacitor terminals is given by
V₂(s) = V_C(s) = (1/Cs) I₁(s) = (1/(1 + RCs)) V₁(s) ⟹ H(s) = V₂(s)/V₁(s) = 1/(1 + RCs)
The system function and frequency response of a first-order, low-pass filter are
H(s) = 1/(1 + τs), with τ = RC
⇕
H(ω) = 1/(1 + j(ω/ω₀)), with ω₀ = 1/(RC)
The input-output differential equation and responses to unit-impulse and unit-step inputs
are given below.
dv₂/dt + (1/RC)v₂ = (1/RC)v₁ ⇔ v₁ ⟶ h(t) = (1/RC)e^(−t/RC) u(t) ⟶ v₂
The unit-step response: g(t) = ∫_{−∞}^{t} h(τ) dτ = (1 − e^(−t/RC)) u(t)
H(s) = V₂(s)/V₁(s) = 1/(1 + s/ω₀) ⟹ |H(jω)| = 1/√(1 + (ω/ω₀)²) ⟹ |H(jω₀)| = 1/√2
For an LR circuit τ = 1/ω₀ = L/R, and for an RC circuit τ = 1/ω₀ = RC.
When a periodic waveform passes through a low-pass filter, its Fourier coefficients undergo different gains at different frequencies; some harmonics are attenuated more strongly than others and are thus filtered out.
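As a quick numerical check, the following MATLAB sketch (R = 1 kΩ and C = 1 µF are assumed example values, giving ω₀ = 1000 rad/s) plots |H(jω)| in dB and confirms the 3-dB point at ω₀:
% First-order RC low-pass filter: H(jw) = 1/(1 + j*w/w0)
R = 1e3; C = 1e-6;            % assumed example values
w0 = 1/(R*C);                 % corner frequency (rad/s)
w  = logspace(1, 6, 500);     % frequency grid
H  = 1./(1 + 1i*w/w0);        % frequency response
semilogx(w, 20*log10(abs(H))); grid on
xlabel('\omega (rad/s)'); ylabel('|H(j\omega)| (dB)');
20*log10(abs(1/(1+1i)))       % magnitude at w0: -3.01 dB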
IV.II. Practical Integrators: The system function of an integrator is 𝑯(𝑠) = 𝛼/𝑠 and its
frequency response 𝑯(𝜔) = 𝛼/(𝑗𝜔). Integration may be done by an RC (or RL) circuit and an
op-amp. Here are two implementations
The first-order phase shifter is a stable system with a pole at −𝜔0 and a zero at 𝜔0 . Figure
below shows plots of phase versus frequency for 𝜔0 = 1, 2, 5, and 10. In all cases the phase
shift is 90° at ω₀. Within the neighborhood of ω₀, a small frequency perturbation δω produces a phase perturbation approximately proportional to −δω/ω₀, which translates into a nearly constant time-shift perturbation of 1/ω₀.
The first-order phase shifter may be implemented by a passive circuit (with gain
restriction), or by an active circuit as given in figure below.
Let us find the transfer function of the passive circuit. Applying Kirchhoff's voltage law around the outer loop gives V₁(s) = V_c(s) + R I₂(s) = (1 + RCs) V_c(s) ⟹ V_c(s) = (1 + RCs)⁻¹ V₁(s); a similar relation is obtained on the inner loop.
For the active circuit we use the difference amplifier rule (with 𝑣2 = 𝑣1 )
V₂(s) = {(−Z₂/Z₁) + (1 + Z₂/Z₁)(Z_x/(Z_x + Z_y))} V₁(s) ⟹ H(s) = V₂(s)/V₁(s) = (1 − RCs)/(1 + RCs)
IV.V. Lead and Lag Filters: Lead and lag compensators are first-order filters with a single pole and a single zero, chosen such that a prescribed phase lead or lag is produced in a sinusoidal input. In this way, when placed in a control loop, they reshape an overall system
function to meet desired characteristics (used to improve an undesirable frequency
response in feedback systems and it is a fundamental building block in classical control
theory). Electrical lead and lag networks can be made of passive RLC elements, or employ
operational amplifiers. In this section we briefly describe their system functions and
frequency responses. Lead–lag compensators influence disciplines as varied as robotics,
satellite control, automobile diagnostics, LCD displays and laser frequency stabilization.
They are an important building block in analog control systems, and can also be used in
digital control.
▪ Lead Network A lead network made of passive electrical elements is shown in figure,
along with its pole-zero and Bode plots.
Let us find the transfer function of this passive circuit. Applying Kirchhoff's voltage law around the loop gives
V₁(s) = {(R₁ ∥ 1/(Cs)) + R₂} I(s) ⟺ I(s) = {(1 + R₁Cs)/(R₁ + R₂ + R₁R₂Cs)} V₁(s)
V₂(s) = R₂ I(s) ⟹ H(s) = V₂(s)/V₁(s) = R₂(1 + R₁Cs)/(R₁ + R₂ + R₁R₂Cs) = α(ω₁ + s)/(ω₁ + αs), with ω₁ = 1/(R₁C) and α = R₂/(R₁ + R₂)
The system function has a zero at 𝑧 = −𝜔1 and a pole at 𝑝 = −𝜔1 /𝛼 . The pole of the
system is located to the left of its zero, both being on the negative real axis in the s-plane.
The magnitude is normally expressed in dB.
H_dB(jω) = 20 log10|H(jω)| = 20 log10|α(ω₁ + jω)/(ω₁ + αjω)|
= 20 log10(α) + 20 log10 √(1 + (ω/ω₁)²) − 20 log10 √(1 + α²(ω/ω₁)²)
φ(ω) = tan⁻¹(ω/ω₁) − tan⁻¹(αω/ω₁)
There is a frequency 𝜔𝑚 at which the lead compensator provides maximum phase lead. This
frequency is important in the design process.
dφ/dω = (1/ω₁)/(1 + (ω/ω₁)²) − (α/ω₁)/(1 + (αω/ω₁)²) = 0 ⟹ 1 + (αω/ω₁)² = α + α(ω/ω₁)²
⟹ 1 − α = α(ω/ω₁)²(1 − α)
⟹ α(ω/ω₁)² = 1
⟹ ω_m = ω₁/√α
The maximum phase lead occurs at the frequency ω_m = ω₁/√α; this frequency is the geometric mean of the two cut-off frequencies of the compensator:
log10(ω_m) = log10(ω₁/√α) = (1/2)(log10(ω₁) + log10(ω₁/α))
φ_m = φ(ω_m) = tan⁻¹(ω_m/ω₁) − tan⁻¹(αω_m/ω₁) = tan⁻¹(1/√α) − tan⁻¹(√α) = tan⁻¹((1 − α)/(2√α))
⟹ tan(φ_m) = (1 − α)/(2√α) ⟹ sin(φ_m) = (1 − α)/(1 + α) ⟹ φ_m = sin⁻¹((1 − α)/(1 + α))
The lead compensator provides a maximum phase lead of
φ_m = sin⁻¹((1 − α)/(1 + α)) at ω_m = ω₁/√α, with ω₁ = 1/(R₁C) and α = R₂/(R₁ + R₂)
One may sketch the magnitude of the frequency response in dB and its phase in degrees on
semilog paper by the following qualitative argument. The low-frequency asymptote of the
magnitude plot is the horizontal line at 20 log α dB. The high-frequency asymptote is the
horizontal line at 0 dB. The magnitude plot is a monotonically increasing curve. Traversing
from low frequencies toward high frequencies it first encounters 𝜔1 (the zero). The zero
pushes the magnitude plot up with a slope of 20 dB per decade. As the frequency increases
it encounters 𝜔1 /𝛼 (the pole). The pole pulls the magnitude plot down with a slope of −20
dB per decade and, therefore, starts neutralizing the upward effect of the zero. The
magnitude plot is eventually stabilized at the 0 dB level. The zero and the pole are the
break frequencies of the magnitude plot. If 𝛼 << 1, the pole and the zero are far enough
from each other and constitute frequencies of 3-dB deviation from the asymptotes. At 𝜔1
(the zero) the magnitude is 3 dB above the low-frequency asymptote and the phase is 45◦.
At 𝜔1 /𝛼 (the pole) the magnitude is 3 dB below 0 dB (the high-frequency asymptote) and
the phase is 90° − 45° = 45° . The maximum phase lead in the output occurs at 𝜔𝑚 = 𝜔1 /√𝛼,
which is the geometric mean of the two break frequencies.
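A minimal MATLAB sketch of these plots (α = 0.1 and ω₁ = 10 rad/s are assumed example values) verifies that the maximum lead sin⁻¹((1 − α)/(1 + α)) occurs at ω_m = ω₁/√α:
% Lead network H(jw) = alpha*(w1 + jw)/(w1 + alpha*jw); assumed example values
alpha = 0.1; w1 = 10;
w = logspace(-1, 4, 1000);
H = alpha*(w1 + 1i*w)./(w1 + alpha*1i*w);
subplot(211); semilogx(w, 20*log10(abs(H))); grid on; ylabel('|H| (dB)');
subplot(212); semilogx(w, angle(H)*180/pi); grid on; ylabel('phase (deg)');
wm   = w1/sqrt(alpha);                 % frequency of maximum lead, ~31.6 rad/s
phim = asin((1-alpha)/(1+alpha));      % predicted maximum lead, ~54.9 deg
[wm, phim*180/pi]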
▪ Lag Network In many respects characteristics of the lag network are mirror images of
those of the lead network.
Let us find the transfer function of this passive circuit. Applying Kirchhoff's voltage law around the loop gives
((R₁ + R₂) + 1/(Cs)) I(s) = V₁(s) ⟹ I(s) = (Cs/(1 + (R₁ + R₂)Cs)) V₁(s)
V₂(s) = (R₂ + 1/(Cs)) I(s) = ((1 + R₂Cs)/(1 + (R₁ + R₂)Cs)) V₁(s) ⟹ H(s) = V₂(s)/V₁(s) = (1 + R₂Cs)/(1 + (R₁ + R₂)Cs)
There is a frequency ω_m at which the lag compensator provides maximum phase lag. This frequency is important in the design process.
dφ/dω = (α/ω₂)/(1 + (αω/ω₂)²) − (1/ω₂)/(1 + (ω/ω₂)²) = 0 ⟹ 1 + (αω/ω₂)² = α + α(ω/ω₂)²
⟹ 1 − α = α(ω/ω₂)²(1 − α)
⟹ α(ω/ω₂)² = 1
⟹ ω_m = ω₂/√α
The maximum phase lag occurs at the frequency ω_m = ω₂/√α; this frequency is the geometric mean of the two cut-off frequencies of the compensator:
log10(ω_m) = log10(ω₂/√α) = (1/2)(log10(ω₂) + log10(ω₂/α))
φ_m = φ(ω_m) = tan⁻¹(αω_m/ω₂) − tan⁻¹(ω_m/ω₂) = tan⁻¹(√α) − tan⁻¹(1/√α) = −tan⁻¹((1 − α)/(2√α))
⟹ tan(φ_m) = −(1 − α)/(2√α) ⟹ sin(φ_m) = −(1 − α)/(1 + α) ⟹ φ_m = −sin⁻¹((1 − α)/(1 + α))
The lag compensator provides a maximum phase lag of
|φ_m| = sin⁻¹((1 − α)/(1 + α)) at ω_m = ω₂/√α, with α = R₂/(R₁ + R₂) and ω₂ = α/(R₂C)
The DC gain is unity (0 dB) and the high-frequency gain is 𝛼 = 𝑅2 /(𝑅1 + 𝑅2 ). The system has
a pole at 𝑝 = −𝜔2 and a zero at 𝑧 = −𝜔2 /𝛼 to the left of the pole, both being on the
negative real axis in the s-plane. The magnitude plot of the lag network displays a
horizontal flip of that of the lead network, and the phase is a vertical flip of that of the lead network. The phase of H(jω) varies from 0 (at low frequencies, ω = 0) to a minimum
(i.e., a maximum phase lag) of (2 tan−1 √𝛼 − 90° ) at 𝜔𝑚 = 𝜔2 /√𝛼 and returns back to zero at
high frequencies. Lag compensators are essentially low-pass filters. Therefore, lag
compensation permits a high gain at low frequencies (which improves the steady-state
performance) and reduces gain in the higher critical range of frequencies so as to improve
the phase margin.
Comments: The PD controller can be approximated by a phase-lead filter: K_P + K_D s ≅ K_P (1 + αTs)/(1 + Ts), with T ≪ 1 and K_D/K_P = αT, where α > 1. The PI controller can be approximated by a phase-lag filter: K_P + K_I/s ≅ K_I (1 + (K_P/K_I)s)/(ε + s) ≅ K (1 + αTs)/(1 + Ts), with K = K_I/ε, T = 1/ε ≫ 1, K_P/K_I = αT and α < 1.
▪ Lead-Lag Network A single lag or lead compensator may not satisfy the design specifications. For an unstable uncompensated system, lead compensation provides fast response but may not provide enough phase margin, whereas lag compensation stabilizes the system but may not provide enough bandwidth. So we may need multiple compensators in cascade. Given below is the circuit diagram for the phase lag-lead compensation network.
H(s) = (s + a₁)(s + b₂)/((s + b₁)(s + a₂))
Let us find the transfer function of this passive circuit. Applying Kirchhoff's voltage law around the loop gives
((R₁ ∥ 1/(C₁s)) + R₂ + 1/(C₂s)) I(s) = V₁(s) ⟹ I(s) = (1/(Z₁ + Z₂)) V₁(s)
with Z₁ = R₁ ∥ 1/(C₁s) = R₁/(1 + R₁C₁s) and Z₂ = R₂ + 1/(C₂s) = (1 + R₂C₂s)/(C₂s)
V₂(s) = Z₂ I(s) = (Z₂/(Z₁ + Z₂)) V₁(s) ⟹ H(s) = V₂(s)/V₁(s) = Z₂/(Z₁ + Z₂)
Therefore,
H(s) = (s + ω₁)(s + ω₂)/((s + βω₁)(s + ω₂/β))
Remark: The PID controller can be approximated by a phase lead-lag compensator, and
then the three action corrector is just special case of the phase lead-lag compensator.
Example: (Lead Compensator) Given a second-order system H(s) = 4/(s² + 2s), design a lead network that reshapes the system into a new form having a zero at s = −4.41 and three poles at s = −6.4918 and s = −6.9541 ± 8.0592i.
The transfer function of the lead compensator is C(s) = K(s + α)/(s + β). The closed-loop transfer functions of the uncompensated and compensated systems are given, respectively, by
H_closed1(s) = 4/(s² + 2s + 4)
and
H_closed2(s) = (166.8s + 735.588)/(s³ + 20.4s² + 203.6s + 735.588)
In order to examine the transient-response characteristics of the designed closed-loop
system, we shall obtain the unit-step and unit-ramp response curves of the compensated
and uncompensated system with MATLAB.
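A minimal MATLAB sketch of this comparison, using the two closed-loop transfer functions above (the ramp response is obtained with lsim):
% Step and ramp responses of the uncompensated and lead-compensated loops
H1 = tf(4, [1 2 4]);                                % uncompensated closed loop
H2 = tf([166.8 735.588], [1 20.4 203.6 735.588]);   % compensated closed loop
subplot(211); step(H1, 'b', H2, 'r', 5); grid on; title('Unit-step responses');
t = (0:0.01:5)';                                    % time grid for the ramp input
subplot(212); lsim(H1, t, t); hold on; lsim(H2, t, t); grid on
title('Unit-ramp responses');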
The transfer function of the lag compensator is C(s) = K(s + α)/(s + β). The transfer functions of the compensator and the closed-loop compensated system are given, respectively, by
C(s) = 0.5 (s + 1/10)/(s + 1/100) and H_closed(s) = (50s + 5)/(50s⁴ + 150.5s³ + 101.5s² + 51s + 5)
IV.VI. Second-Order Low-Pass Filters: A second-order low-pass filter has two poles and
no zeros at finite frequencies. Its system function and frequency response are
H(s) = 1/(s² + 2ξωₙs + ωₙ²) and H(jω) = 1/((ωₙ² − ω²) + 2ξωₙjω)
Generally speaking, second-order systems are low-pass filters (except when the quality factor Q = 1/(2ξ) is high). Here we consider the frequency response behavior of such systems. Based on the quality factor Q = 1/(2ξ), we recognize three cases.
Q < 0.5 or ξ > 1: (overdamped second-order system) The filter has two distinct poles on the negative real axis: s₁,₂ = −ω₁,₂ = −ξωₙ ± ωₙ√(ξ² − 1). Note that ω₁ω₂ = ωₙ² and ω₁ + ω₂ = 2ξωₙ. Its frequency response may be written as
H(jω) = (1/ωₙ²) [1/(1 + j(ω/ω₁))] [1/(1 + j(ω/ω₂))]
The filter is equivalent to the cascade of two first-order low-pass filters with 3-dB
frequencies at 𝜔1 and 𝜔2 , discussed previously. The magnitude Bode plot is shown on
figure. The plot has three asymptotic lines, with slopes of 0, −20, and −40 dB/decade, for
low, intermediate, and high frequencies, respectively.
Q = 0.5 or ξ = 1: (critically damped second-order system) The filter has a pole of order 2 on the negative real axis at s = −ω₀, where ω₀ = ωₙ. This is a critically damped system. The frequency response may be written as
H(jω) = (1/ω₀²) [1/(1 + j(ω/ω₀))]²
The filter is equivalent to the cascade of two identical first-order low-pass filters with 3-dB
frequencies at 𝜔0 . The Bode plot is shown on figure. The low frequency asymptote is at 0
dB. The high-frequency asymptote is a line with a slope of −40 dB per decade. The break
frequency is at 𝜔0 . Attenuation at 𝜔0 is 6 dB.
𝑄 > 0.5 or 𝜉 < 1: (underdamped second-order system) The filter has two complex
conjugate poles with negative real parts, 𝑠1,2 = −𝜎 ± 𝑗𝜔𝑑 , where 𝜎 = 𝜉𝜔𝑛 and 𝜔𝑑 = 𝜔𝑛 √1 − 𝜉 2 .
The frequency response may be written as
H(jω) = (1/ωₙ²) / [(1 − (ω/ωₙ)²) + 2ξ(ω/ωₙ)j]
This is an underdamped system. Its poles and its Bode plot are shown in Figure.
% Phase (first figure) and magnitude in dB (second figure) of
% H(s) = 1/(s^2 + 2*ksi*wn*s + wn^2) for a sweep of damping ratios ksi
clear all, clc, wn=10;
for ksi=0.1:0.1:2
w=logspace(-2,4,1001); s=1i*w;        % evaluate on s = jw
H=1./(s.^2+2*ksi*wn*s+wn^2);
plot(log10(w),angle(H)); grid on, hold on
end
figure
for ksi=0.01:0.1:4
w=logspace(-2,4,1001); s=1i*w;
H=1./(s.^2+2*ksi*wn*s+wn^2);
plot(log10(w),20*log10(abs(H))); grid on, hold on
end
Other commands that can be more convenient or flexible for generating Bode plots are often used; the Bode plot for the current system function may also be generated by the following kind of program, which provides more flexibility and is used for most Bode plots in this chapter.
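A minimal sketch using the Control System Toolbox functions tf and bode (ωₙ = 10 as above; ξ = 0.3 is an assumed representative value):
% Bode plot of H(s) = 1/(s^2 + 2*ksi*wn*s + wn^2)
wn = 10; ksi = 0.3;                 % assumed example values
sys = tf(1, [1 2*ksi*wn wn^2]);
w = logspace(-2, 4, 1001);
bode(sys, w); grid on               % magnitude (dB) and phase (deg)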
IV.VII. Second-Order High-Pass Filters: The system function and frequency response of a
second-order high-pass filter may be written as
H(s) = s²/(s² + 2ξωₙs + ωₙ²) and H(jω) = −ω²/((ωₙ² − ω²) + 2jξωₙω), with Q = 1/(2ξ)
The filter has a double zero at s = 0 (DC) and, being a stable system, two poles in the LHP. It works as the opposite of the low-pass filter. The frequency response has a zero
magnitude at 𝜔 = 0, which grows to 1 at 𝜔 = ∞. The low-frequency magnitude asymptote is
a line with a slope of 40 dB per decade. The high-frequency asymptote is the 0 dB
horizontal line. Between the two limits (especially near 𝜔𝑛 ) the frequency response is
shaped by the location of the poles. As in any second-order system, we recognize the three
cases of the filter being overdamped, critically damped, and underdamped. Here again, the
frequency behavior of the filter may be analyzed in a unified way in terms of 𝜔0 = 𝜔𝑛 and 𝑄.
% Magnitude (dB) of the second-order high-pass filter for several damping ratios
clear all, clc, wn=10;
for ksi=0.1:0.5:2
w=logspace(-1,4,1001);
num=[1 0 0]; den=[1 2*ksi*wn wn^2]; sys=tf(num,den);
[mag,phs]=bode(sys,w); semilogx(w,20*log10(squeeze(mag))); hold on
end
IV.VIII. Second-Order Bandpass Filters: we can construct a circuit that acts as a band-
pass filter, passing mainly those frequencies within a certain frequency range. The analysis
of a simple second-order band-pass filter (i.e., a filter with two energy-storage elements)
can be conducted by analogy with the preceding discussions of the low-pass and high-pass
filters. The system function and frequency response of the basic second-order bandpass
filter are
H(s) = 2ξωₙs/(s² + 2ξωₙs + ωₙ²) and H(jω) = 2jξωₙω/((ωₙ² − ω²) + 2jξωₙω), with Q = 1/(2ξ)
The filter has a zero at s = 0 (DC) and two poles at s₁,₂ = −ωₙ(1 ± √(1 − 4Q²))/(2Q), which,
depending on the value of 𝑄, are either real or complex. The frequency response has zero
magnitude at 𝜔 = 0 and ∞. It attains its peak at 𝜔𝑛 , sometimes called the resonance
frequency, where |𝑯(𝜔𝑛 )| = 1. The half-power bandwidth ∆𝜔 is defined as the frequency
range within which |𝑯(𝜔)/𝑯(𝜔𝑛 )| ≥ √2/2 (i.e., the gain is within 3 dB below its maximum,
and thus called the 3-dB bandwidth).
|H(ω)/H(ωₙ)| = √2/2 ⟺ 4ξ²ωₙ²ω²/((ωₙ² − ω²)² + 4ξ²ωₙ²ω²) = 1/2 ⟺ |ωₙ² − ω²| = 2ξωₙω
The lower and upper limits of the half-power frequency band are a solution of the above Eq
ω_{h,l} = ωₙ(√(ξ² + 1) ± ξ) = ωₙ(√(1/(4Q²) + 1) ± 1/(2Q)) = (ωₙ/(2Q))(√(1 + 4Q²) ± 1)
Δω = ω_h − ω_l = ωₙ/Q
In the present analysis we observe parallels with the cases of low-pass and high-pass
filters, and, depending on the value of the quality factor Q (which controls the location of a
filter’s poles and thus its bandwidth), we recognize the three familiar states of overdamped
(Q < 0.5), critically damped (Q = 0.5), and underdamped (Q > 0.5). The filter then becomes
wideband (low Q) or narrowband (high Q). The sharpness of the peak is determined by the
quality factor Q. In what follows we will discuss the shape of the Bode plot for three regions
of Q values.
Q < 0.5: The system has two distinct negative real poles s₁,₂ = −ωₙ(1 ± √(1 − 4Q²))/(2Q) = −ω₁,₂. Note that ω₁ω₂ = ωₙ² and ω₁ + ω₂ = ωₙ/Q.
H(ω) = (1/Q) j(ω/ωₙ) / ([1 + j(ω/ω₁)][1 + j(ω/ω₂)])
The slopes of the asymptotic lines in the magnitude Bode plot are 20, 0, and −20
dB/decade for low, intermediate, and high frequencies, respectively. The filter is a
wideband bandpass filter. It is equivalent to the cascade of a first-order high-pass filter and
a first-order low-pass filter with separate break points.
Q = 0.5: The filter has a double pole at s = −ωₙ. The frequency response may be written as
H(ω) = 2j(ω/ωₙ)/[1 + j(ω/ωₙ)]²
The asymptotic slopes of the plot are 20 dB/decade (low frequencies) and −20 dB/decade
(high frequencies). The filter is bandpass, equivalent to the cascade of a first-order high-
pass filter and a first-order low-pass sharing the same break frequency 𝜔𝑛 .
𝑄 > 0.5: The filter has two complex conjugate poles with negative real parts 𝑠1,2 = −𝜎 ± 𝑗𝜔𝑑 ,
where 𝜎 = 𝜔𝑛 /(2𝑄) and 𝜔𝑑 = 𝜔𝑛 (√4𝑄 2 − 1) /(2𝑄). Note that: 𝜎 2 + 𝜔𝑑2 = 𝜔𝑛2 . The asymptotic
slopes of the Bode plot are 20 dB/decade (low frequency) and −20 dB/decade (high
frequency).
High 𝑄: For a bandpass system with high 𝑄 (e.g., 𝑄 ≥ 10) the high and low 3-dB frequencies
are approximately symmetrical on the upper and lower sides of the center frequency:
ω_{h,l} ≈ ωₙ ± ωₙ/(2Q)
IV.IX. Second-Order Notch Filters: The system function and frequency response of a second-order notch filter may be written as
H(s) = (s² + ωₙ²)/(s² + 2ξωₙs + ωₙ²) and H(jω) = (ωₙ² − ω²)/((ωₙ² − ω²) + 2jξωₙω), with Q = 1/(2ξ)
The filter has two zeros at ±𝑗𝜔𝑛 (notch frequency) and two poles in the LHP. It works as the
opposite of the bandpass filter. The frequency response has a zero magnitude at 𝜔𝑛 . The
low- and high-frequency magnitude asymptotes are 0-dB horizontal lines. Between the two
limits, especially near 𝜔𝑛 , the frequency response is shaped by the location of the poles. As
in any second-order system, we recognize the three cases of the filter being overdamped,
critically damped, and underdamped. The sharpness of the dip at the notch frequency is
controlled by Q. Higher Qs produce narrower dips. As in the case of bandpass filters we can
define a 3-dB band for the dip. In this case the notch band identifies frequencies around 𝜔𝑛
within which the attenuation is greater than 3 dB. The notch filter described above is
functionally equivalent to subtracting the output of a bandpass filter from the input signal
traversing through a direct path as in figure.
Example: One application of narrow-band filters is in rejecting interference due to AC line
power. Any undesired 60-Hz signal originating in the AC line power can cause serious
interference in sensitive instruments. In medical instruments such as the electrocardiograph, a 60-Hz notch filter is provided to reduce the effect of this interference on cardiac measurements. Figure below depicts a circuit in which the effect of 60-Hz noise is
represented by way of a 60-Hz sinusoidal generator connected in series with a signal
source (𝐕S ), representing the desired signal. In this example we design a 60-Hz narrow-
band (or notch) filter to remove the unwanted 60-Hz noise.
Let 𝑅𝑆 = 50𝛺. To determine the appropriate capacitor and inductor values, we write the
expression for the notch filter impedance:
H(ω) = V_L(ω)/V_S(ω) = R_L/(R_S + R_L + Z_LC) = R_L/(R_S + R_L + jωL/(1 − ω²LC))
Note that when 𝜔2 𝐿𝐶 = 1, the impedance of the circuit is infinite! The frequency 𝜔𝑛 = 1/√𝐿𝐶
is the resonant frequency of the LC circuit. If this resonant frequency were selected to be
equal to 60 Hz, then the series circuit would show an infinite impedance to 60-Hz currents,
and would therefore block the interference signal, while passing most of the other
frequency components. We thus select values of L and C that result in 𝜔𝑛 = 2𝜋 × 60. Let
𝐿 = 100 mH. Then 𝐶 = 1/(𝜔𝑛2 𝐿) = 70.36 𝜇F
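A quick MATLAB check of this design (R_L is not specified in the text, so R_L = 1 kΩ is assumed for illustration):
% 60-Hz LC notch: H(w) = RL/(RS + RL + j*w*L/(1 - w^2*L*C))
RS = 50; RL = 1e3;                  % RL assumed for illustration
L  = 0.1; C = 1/((2*pi*60)^2*L);    % C = 70.36 uF
f  = 10:0.1:200; w = 2*pi*f;
Zlc = 1i*w*L./(1 - w.^2*L*C);       % LC impedance, infinite at resonance
H   = RL./(RS + RL + Zlc);
plot(f, abs(H)); grid on; xlabel('f (Hz)'); ylabel('|H|');   % deep null at 60 Hz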
IV.X. Second-Order All-Pass Filters: The second-order all-pass filter has a constant gain H₀ and a phase that varies with frequency. The system function and frequency response are
H(s) = H₀ (s² − 2ξωₙs + ωₙ²)/(s² + 2ξωₙs + ωₙ²) and H(jω) = H₀ ((ωₙ² − ω²) − 2jξωₙω)/((ωₙ² − ω²) + 2jξωₙω)
To be able to build a causal filter from circuit components, it is necessary that the filter
transfer function 𝑯(𝑠) be rational in 𝑠. For ease of implementation, the order of 𝑯(𝑠) (i.e.,
the degree of the denominator) should be as small as possible. However, there is always a
trade-off between the magnitude of the order and desired filter characteristics such as the
amount of attenuation in the stopbands and the width of the transition regions.
From the other side, in order to avoid phase distortion in the output of a causal filter, the
phase function should be linear over the passband of the filter. However, the phase
function of a causal filter with a rational transfer function cannot be exactly linear over the
passband, and thus there will always be some phase distortion. The amount of phase
distortion that can be tolerated is often included in the list of filter specifications in the
design process.
In this section we will focus on designing analog filters to meet a set of user specifications.
It is known that filters with ideal characteristics are not realizable since they are non-
causal. In practical applications of filtering we seek realizable approximations to ideal filter
behavior. The requirements on the frequency spectrum of the filter must be relaxed in
order to obtain realizable filters. In the design of analog filters, specifications for the desired
filter are usually given in terms of a set of tolerance limits for the magnitude response
|𝑯(𝜔)|.
A number of approximation formulas exist in the literature for specifying the squared magnitude of a realizable low-pass filter spectrum. Some of the better known formulas will be mentioned here: Butterworth filters, Chebyshev filters, elliptic filters, and others.
The Butterworth low-pass filter of order N with cutoff frequency ω_c has the squared-magnitude response
|H(ω)|² = 1/[1 + (ω/ω_c)^(2N)]
The Chebyshev (type-I) low-pass filter has the squared-magnitude response
|H(ω)|² = 1/[1 + ε²C_N²(ω/ω_c1)]
where ε is a positive constant. The frequency ω_c1 is the passband edge frequency. The function C_N(ν) in the denominator represents the Chebyshev polynomial of order N. The elliptic low-pass filter has
|H(ω)|² = 1/[1 + ε²ψ_N²(ω/ω_c)]
The parameter ε is a positive constant. The function ψ_N(ν) is called a Chebyshev rational function, and is defined in terms of Jacobi elliptic functions.
The magnitude characteristic |𝑯(𝜔)| for a Butterworth filter is said to be maximally flat.
Magnitude spectra for Butterworth filters of various orders are shown in Figure.
clear all, clc, wc=2; N=20; w=-10:0.1:10;
magGjw=sqrt(1./(1+(w/wc).^(2*N)));   % |H(w)| for an order-N Butterworth
plot(w,magGjw); grid on
xlabel('Frequency in rad/sec'); ylabel('Magnitude of H(s)');
title('Magnitude of Butterworth filter');
We know that the squared-magnitude function can be expressed as the product of the
system function and its complex conjugate, that is |𝑯(𝜔)|2 = 𝑯(𝜔)𝑯⋆ (𝜔). In addition, since
the impulse response ℎ(𝑡) of the filter is real-valued, its system function exhibits conjugate
symmetry: 𝑯⋆ (𝜔) = 𝑯(−𝜔) so we can write |𝑯(𝜔)|2 = 𝑯(𝜔)𝑯(−𝜔). Using the s-domain
system function 𝑯(𝑠), the problem of designing a Butterworth filter can be stated as
follows: Given the values of the two filter parameters 𝜔𝑐 and 𝑁, find the s-domain system
function 𝑯(𝑠) for which the squared-magnitude of the function 𝑯(𝜔) matches the defined
equation. For a system function 𝑯(𝑠) with real coefficients, it can be shown that
|H(ω)|² = H(s)H(−s)|_{s=jω} ⟷ H(s)H(−s) = [1 + (−s²/ω_c²)^N]⁻¹
The poles of H(s)H(−s) are
p_k = ω_c e^(jkπ/N), k = 0, …, 2N − 1, if N is odd
p_k = ω_c e^(j(2k+1)π/(2N)), k = 0, …, 2N − 1, if N is even
Remark: All the poles of the product 𝑯(𝑠)𝑯(−𝑠) are located on a circle with radius equal to
𝜔𝑐 . Furthermore, the poles are equally spaced on the circle, and the angle between any two
adjacent poles is 𝜋/𝑁 radians. Since 𝑯(𝑠) and 𝑯(−𝑠) have only real coefficients, all complex
poles appear in conjugate pairs.
We are interested in obtaining a filter that is both causal and stable. It is therefore
necessary to use the poles in the left half of the s-plane for 𝑯(𝑠). The remaining poles, the
ones in the right half of the s-plane, must be associated with 𝑯(−𝑠). The system function
𝑯(𝑠) for the Butterworth low-pass filter is constructed in the form
H(s) = ω_c^N / ∏_k (s − p_k), for all k that satisfy π/2 < ∠p_k < 3π/2 (the left-half-plane poles)
Example: Assume that we require a filter such that the pass-band magnitude is a constant
and equal to 1dB for frequency below 0.2𝜋, and the stop-band attenuation is greater than
15dB for frequency 0.3𝜋.
Solving the passband and stopband attenuation constraints simultaneously gives N = 5.8858, which is rounded up to N = 6; a cutoff frequency lying between the values that satisfy the two constraints exactly, ω_c ≈ 0.70474 rad/s, then meets both specifications. With these values,
H(s) = ω_c^N / ∏_k (s − p_k) = 0.12093/((s² + 0.3640s + 0.4945)(s² + 0.9945s + 0.4945)(s² + 1.3585s + 0.4945))
The Chebyshev polynomial of order 𝑁 is defined as 𝐶𝑁 (𝜈) = cos(𝑁 cos −1 (𝜈)). A better
approach for understanding the definition 𝐶𝑁 (𝜈) = cos(𝑁 cos −1(𝜈)) would be to split it into
two equations as 𝜈 = cos(𝜃) and 𝐶𝑁 (𝜈) = cos(𝑁𝜃)
The Chebyshev polynomials of the first kind are obtained from the recurrence relation C_{N+1}(ν) = 2νC_N(ν) − C_{N−1}(ν). Let us prove this identity. We know that cos(x + y) + cos(x − y) = 2cos(x)cos(y); with the change of variables x = θ and y = Nθ this becomes
cos((N + 1)θ) + cos((N − 1)θ) = 2cos(θ)cos(Nθ)
Therefore, from the definition we obtain C_{N+1}(ν) = 2νC_N(ν) − C_{N−1}(ν), so these polynomials can be generated recursively.
For example, starting from C₀(ν) = 1 and C₁(ν) = ν:
C₂(ν) = 2ν² − 1
C₃(ν) = 2ν(2ν² − 1) − ν = 4ν³ − 3ν
C₄(ν) = 2ν(4ν³ − 3ν) − (2ν² − 1) = 8ν⁴ − 8ν² + 1
Remarks: it should be noted that C_N²(ν) = cos²(N cos⁻¹(ν)) varies between zero and unity as ν varies between −1 and 1; but if |ν| > 1 then cos⁻¹(ν) is imaginary, and C_N(ν) = cosh(N cosh⁻¹(ν)) grows monotonically (hyperbolic behavior).
|H(ω_c1)|² = α²/[1 + ε²C_N²(ω_c1/ω_c1)] = α²/[1 + ε²C_N²(1)] = α²/(1 + ε²)
|H(0)|² = α²/[1 + ε²C_N²(0)] = { α², if N is odd ; α²/(1 + ε²), if N is even }
The relation |H(ω_c1)|² = α²/(1 + ε²) states, in other words, that the passband is the range over which the ripple oscillates with constant bounds; this is the range from DC to ω_c1. From this formula we observe that only when ε = 1 is the magnitude at the cutoff frequency α/√2, i.e., the same as in other types of filters. When 0 < ε < 1, the attenuation at ω_c1 is less than 3 dB, so the conventional 3-dB cutoff frequency is greater than ω_c1.
Proof: The poles 𝑝𝑘 of 𝑯(𝑠)𝑯(−𝑠) are the solutions of the equation 1 + 𝜀 2 𝐶𝑁2 (𝑝𝑘 /𝑗𝜔𝑐1 ) = 0
for 𝑘 = 0, . . . ,2𝑁 – 1. Let us define 𝜈𝑘 = 𝑝𝑘 /𝑗𝜔𝑐1 so that 1 + 𝜀 2 𝐶𝑁2 (𝜈𝑘 ) = 0
Using the definition of the Chebyshev polynomial we can rewrite 1 + ε²C_N²(ν_k) = 0 as
1 + ε²cos²(Nθ_k) = 0, where ν_k = cos(θ_k)
Since ν_k is complex, write θ_k = α_k + jβ_k; then cos(Nθ_k) = cos(Nα_k)cosh(Nβ_k) − j sin(Nα_k)sinh(Nβ_k) = ±j/ε. To satisfy the real part, cos(Nα_k)cosh(Nβ_k) = 0; since cosh never vanishes, the cosine term must be set equal to zero, leading to
cos(Nα_k) = 0 ⟹ α_k = (2k + 1)π/(2N), k = 0, …, 2N − 1
Using this value of α_k in the equation sin(Nα_k)sinh(Nβ_k) = ±1/ε results in
sinh(Nβ_k) = ±1/ε and β_k = (1/N) sinh⁻¹(1/ε)
The poles of the product H(s)H(−s) can now be determined. Using p_k = jω_c1 ν_k:
p_k = jω_c1 ν_k = jω_c1 cos(θ_k) = jω_c1 [cos(α_k)cosh(β_k) − j sin(α_k)sinh(β_k)], k = 0, …, 2N − 1
It can be shown that those poles are on an elliptical trajectory. The ones in the left half s-
plane are associated with 𝑯(𝑠) in order to obtain a causal and stable filter.
Example: Design a third-order Chebyshev type-I analog low-pass filter with a passband
edge frequency of 𝜔𝑐1 = 1 rad/s and 𝜀 = 0.4. Afterwards compute and graph the magnitude
response of the designed filter.
p₀ = 0.2885 + j,  p₃ = −0.2885 − j
p₁ = 0.5771,     p₄ = −0.5771
p₂ = 0.2885 − j,  p₅ = −0.2885 + j
The first three poles, namely 𝑝0 , 𝑝1 and 𝑝2 are in the right half s-plane; therefore they
belong to 𝑯(−𝑠). The system function 𝑯(𝑠) should be constructed using the remaining three
poles.
H(s) = 0.6250/((s + 0.2885 + j)(s + 0.5771)(s + 0.2885 − j))
= 0.6250/(s³ + 1.1542s² + 1.4161s + 0.6250)
The numerator of H(s) was adjusted to achieve |H(0)| = 1 (for odd N the Chebyshev magnitude is unity at DC). The magnitude and the phase of H(s) are shown in Figure.
Example: Design Chebyshev low-pass filter such that |𝑯 (𝜔𝑐1 )| = 0.84, for 𝜔𝑐1 = 150 rad/s
and |𝑯 (𝜔2 )| = 0.0316, for 𝜔2 = 300 rad/s. Given that |𝑯(𝜔)|2 = [1 + 𝜀 2 𝐶𝑁2 (𝜔/𝜔𝑐1 )]−1
V.III Inverse Chebyshev Low-Pass Filters: The squared-magnitude function for an inverse
Chebyshev low-pass filter, also referred to as a Chebyshev type-II low-pass filter, is
|H(ω)|² = ε²C_N²(ω_c2/ω) / [1 + ε²C_N²(ω_c2/ω)]
The denominator of the squared magnitude function |𝑯(𝜔)|2 is very similar to that of the
type-I Chebyshev filter squared magnitude response given before. Consequently, most of
the results obtained in the previous section through the derivation of the poles of
Chebyshev type-I low-pass filter will be usable. For an inverse Chebyshev filter the poles 𝑝𝑘
of the product 𝑯(𝑠)𝑯(−𝑠) are the solutions of the equation
1 + ε²C_N²(jω_c2/p_k) = 0, for k = 0, …, 2N − 1.
Example: Design of low-pass filters using MATLAB (i.e., all of the approximation methods above). A minimal sketch is given below; the edge frequencies and tolerances are assumed example values.
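% Analog low-pass design with all four approximations (assumed example specs)
wp = 1; ws = 2;          % passband / stopband edges (rad/s), assumed
Rp = 1; Rs = 40;         % passband ripple / stopband attenuation (dB), assumed
w  = logspace(-1, 1, 1000);
[N1,wn1] = buttord(wp, ws, Rp, Rs, 's');  [b1,a1] = butter(N1, wn1, 's');
[N2,wn2] = cheb1ord(wp, ws, Rp, Rs, 's'); [b2,a2] = cheby1(N2, Rp, wn2, 's');
[N3,wn3] = cheb2ord(wp, ws, Rp, Rs, 's'); [b3,a3] = cheby2(N3, Rs, wn3, 's');
[N4,wn4] = ellipord(wp, ws, Rp, Rs, 's'); [b4,a4] = ellip(N4, Rp, Rs, wn4, 's');
plot(w, abs(freqs(b1,a1,w)), w, abs(freqs(b2,a2,w)), ...
     w, abs(freqs(b3,a3,w)), w, abs(freqs(b4,a4,w))); grid on
legend('Butterworth','Chebyshev I','Chebyshev II','Elliptic');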
Let 𝑮(𝑠) be the system function of an analog lowpass filter, and let 𝑯(𝜆) represent the new
filter to be obtained from it. For the new filter we use 𝜆 as the Laplace transform variable.
H(λ) is obtained from G(s) through a transformation such that H(λ) = G(s)|_{s=f(λ)}. The function s = f(λ) is the transformation that converts the lowpass filter into the type of filter desired.
Low-pass to high-pass: It is desired to obtain the high-pass filter system function H(λ) from the lowpass filter system function G(s). The transformation s = ω₀/λ with ω₀ = ω_L1ω_H2 can be used for this purpose. The magnitudes of the two filters are identical at their respective passband edge frequencies, that is, |H(ω_H2)| = |G(ω_L1)|. The stopband edges of the two filters are related by ω_L2ω_H1 = ω_L1ω_H2, so that we have |H(ω_H1)| = |G(ω_L2)|.
Example: Recall that a third-order Chebyshev type-I analog low-pass filter was designed in
before with a cutoff frequency of 𝜔𝑐1 = 1 rad/s and 𝜀 = 0.4. The system function of the filter
was found to be
H(s) = 0.6250/(s³ + 1.1542s² + 1.4161s + 0.6250)
Convert this filter to a high-pass filter with a critical frequency of 𝜔𝐻2 = 3 rad/s. Afterwards
compute and graph the magnitude response of the designed filter.
Solution: The critical frequency of the low-pass filter is ω_L1 = 1 rad/s; therefore, we will use the transformation s = ω₀/λ with ω₀ = ω_L1ω_H2 = 3, which leads to the system function
H(λ) = 0.6250/((3/λ)³ + 1.1542(3/λ)² + 1.4161(3/λ) + 0.6250)
= 0.6250λ³/(0.6250λ³ + 4.2482λ² + 10.3875λ + 27)
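The same conversion can be checked with the Signal Processing Toolbox function lp2hp (a sketch):
% Low-pass to high-pass transformation s -> w0/lambda, with w0 = 3 rad/s
num = 0.6250; den = [1 1.1542 1.4161 0.6250];
[numh, denh] = lp2hp(num, den, 3);    % high-pass version at w0 = 3
w = logspace(-1, 2, 500);
plot(w, abs(freqs(numh, denh, w))); grid on
xlabel('\omega (rad/s)'); ylabel('|H(j\omega)|');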
Solution: We have ω_H2 = 100 rad/s. Using the transformation s = ω₀/λ with ω₀ = ω_L1ω_H2, and assuming a normalized low-pass prototype with passband edge ω_L1 = 1 rad/s (so ω₀ = 100), the high-pass specifications translate into low-pass specifications as follows:
stopband 0 ≤ |λ| ≤ 50 rad/s ⟺ |ω| = |ω₀/λ| ≥ ω₀/50 = 2 rad/s
passband |λ| ≥ 100 rad/s ⟺ |ω| = |ω₀/λ| ≤ ω₀/100 = 1 rad/s
In other words, the resulting fourth-order Butterworth low-pass prototype is
H(s) = 2.8909/(s⁴ + 3.407s³ + 5.8050s² + 5.7934s + 2.8909)
To derive the transfer function of the required high-pass filter, we use the transformation
𝑠 = 𝜔0 /𝜆 with 𝜔0 = 100 rad/s. The transfer function of the highpass filter is given by
H(λ) = 2.8909/((100/λ)⁴ + 3.407(100/λ)³ + 5.8050(100/λ)² + 5.7934(100/λ) + 2.8909)
= λ⁴/(λ⁴ + 2.004×10²λ³ + 2.008×10⁴λ² + 1.179×10⁶λ + 3.459×10⁷)
The MATLAB code for the design of the high-pass filter required in this example using the
Butterworth, Type I Chebyshev, Type II Chebyshev, and elliptic implementations is
included below. In each case, MATLAB automatically designs the high-pass filter. No
explicit transformations are needed.
% Specs from the example: passband edge 100 rad/s, stopband edge 50 rad/s.
% Rp and Rs below are assumed illustrative values (not given in the text).
wp = 100; ws = 50; Rp = 1; Rs = 20; w = logspace(0,3,1000);
[N,wn] = buttord(wp,ws,Rp,Rs,'s');  [num1,den1] = butter(N,wn,'high','s');
[N,wn] = cheb1ord(wp,ws,Rp,Rs,'s'); [num2,den2] = cheby1(N,Rp,wn,'high','s');
[N,wn] = cheb2ord(wp,ws,Rp,Rs,'s'); [num3,den3] = cheby2(N,Rs,wn,'high','s');
[N,wn] = ellipord(wp,ws,Rp,Rs,'s'); [num4,den4] = ellip(N,Rp,Rs,wn,'high','s');
H = freqs(num3,den3,w);
plot(w,abs(H),'r','linewidth',1.5); grid on, figure
H = freqs(num4,den4,w);
plot(w,abs(H),'r','linewidth',1.5); grid on
Low-pass to bandpass transformation: Specification diagrams in figure below show a
low-pass filter with magnitude response |𝑮(𝑗𝜔)| and a bandpass filter with magnitude
response |𝑯(𝑗𝜔)|.
It is desired to obtain the bandpass filter system function 𝑯(𝜆) from the low-pass filter 𝑮(𝑠).
The transformation s = (λ² + ω₀²)/(Bλ), with ω₀ = √(ω_B2ω_B3) and B = (ω_B3 − ω_B2)/ω_L1, can be used for this purpose. We require |H(ω_B2)| = |G(−ω_L1)| and |H(ω_B3)| = |G(ω_L1)|.
The parameter 𝜔0 is the geometric mean of the passband edge frequencies of the bandpass
filter. The parameter 𝐵 is the ratio of the bandwidth of the bandpass filter to the bandwidth
of the low-pass filter.
In order to obtain the band-reject filter system function 𝑯(𝜆) from the lowpass filter system
function 𝑮(𝑠), the transformation to be used is in the form 𝑠 = 𝜔𝐿1 (𝜔𝑆4 − 𝜔𝑆1 )𝜆/(𝜆2 + 𝜔𝑆1 𝜔𝑆4 )
In this section we will briefly discuss the design of discrete-time filters. Discrete-time filters
are viewed under two broad categories: infinite impulse response (IIR) filters, and finite
impulse response (FIR) filters. The system function of an IIR filter has both poles and zeros.
Consequently the impulse response of the filter is of infinite length. It should be noted,
however, that the impulse response of a stable IIR filter must also be absolutely summable,
and must therefore decay over time. In contrast, the behavior of an FIR filter is controlled
only by the placement of the zeros of its system function. For causal FIR filters all of the
poles are at the origin, and they do not contribute to the magnitude characteristics of the
filter.
For a given set of specifications, an IIR filter is generally more efficient than a comparable
FIR filter. On the other hand, FIR filters are always stable. Additionally, a linear phase
characteristic is possible with FIR filters whereas causal and stable IIR filters cannot have
linear phase. The significance of a linear phase characteristic is that the time delay is
constant for all frequencies. This is desirable in some applications, and requires the use of
an FIR filter.
VI.I Design of IIR filters: The most common method of designing IIR filters is to start with
an appropriate analog filter, and to convert its system function to the system function of a
discrete-time filter by means of some transformation. Designing a discrete-time filter by
this approach involves a three step procedure:
1. The specifications of the desired discrete-time filter are converted to the specifications of
an appropriate analog filter that can be used as a prototype. Let the desired discrete-time
filter be specified through critical frequencies 𝛺1 and 𝛺2 along with tolerance values 𝛥1 and
𝛥2 . Analog prototype filter parameters 𝜔1 and 𝜔2 need to be determined. (If the filter type is
bandpass or band-reject, two additional frequencies need to be specified.)
2. An analog prototype filter that satisfies the design specifications in step 1 is designed. Its system function G(s) is constructed.
3. The system function G(s) of the analog prototype is converted to the system function H(z) of the discrete-time filter by means of a suitable transformation (impulse invariance or the bilinear transformation, discussed below).
Example: Recall the third-order Chebyshev type-I low-pass filter designed earlier,
G(s) = 0.6250/(s³ + 1.1542s² + 1.4161s + 0.6250)
Convert this filter to a discrete-time filter using the impulse invariance technique with
𝑇 = 0.2 𝑠. Afterwards compute and graph the magnitude response of the discrete-time
filter.
Solution: The system function G(s) can be written in partial fraction form as
G(s) = 0.5771/(s + 0.5771) + (−0.2885 − j0.0833)/(s + 0.2885 − j) + (−0.2885 + j0.0833)/(s + 0.2885 + j)
The system function of the discrete-time filter H(z) is found by using the impulse-invariance method as
G(s) = Σ_{k=1}^{N} h_k/(s − p_k) ⟹ H(z) = Σ_{k=1}^{N} T h_k z/(z − e^(p_kT))
H(z) = (−0.0577 − j0.0167)z/(z − 0.9251 − j0.1875) + (−0.0577 + j0.0167)z/(z − 0.9251 + j0.1875) + 0.1154z/(z − 0.8910)
= (0.0023z² + 0.0021z)/(z³ − 2.7412z² + 2.5395z − 0.7939)
clear all, clc, w=0:0.01:pi; z=exp(1i*w);   % evaluate H(z) on the unit circle
num=0.0023*z.^2+0.0021*z;
den=z.^3-2.7412*z.^2+2.5395*z-0.7939;
H=num./den;
magG=abs(H); plot(w,magG); grid on
xlabel('Frequency \Omega in rad/sample'); ylabel('Magnitude of H(z)');
title('Magnitude of digital low-pass filter');
Note that aliasing can be kept at a negligible level with this choice of T; the analog frequency ω₁ = 1 rad/s corresponds to the discrete-time frequency Ω = 0.2 radians. In general there are some limitations on the choice of T, but sometimes the choice is irrelevant, and we often use T = 1 s for simplicity.
a. With T = 1 s, the critical frequencies of the analog prototype filter are ω₁ = Ω₁/T = 0.2π and ω₂ = Ω₂/T = 0.25π. Let the system function for the analog prototype filter be G₁(ω), yielding an impulse response g₁(t). The impulse response of the discrete-time filter is h₁[n] = g₁(nT) = g₁(n).
b. With 𝑇 = 2𝑠, the critical frequencies of the analog prototype filter are
𝜔1 = 𝛺1 /𝑇 = 0.1𝜋, 𝜔2 = 𝛺2 /𝑇 = 0.125𝜋
Let the system function for the analog prototype filter be 𝑮2 (𝜔)so that its impulse response
is 𝐠 2 (𝑡). The impulse response of the discrete-time filter is obtained by sampling 𝐠 2 (𝑡) every
𝑇 = 2 seconds, that is, 𝒉2 [𝑛] = 𝐠 2 (𝑛𝑇) = 𝐠 2 (2𝑛). We have thus obtained two discrete-time
filters with the two choices of the sampling interval 𝑇. What is the relationship between
these two filters? Let us realize that 𝑮1 (0.2𝜋) = 𝑮2 (0.1𝜋) and 𝑮1 (0.25𝜋) = 𝑮2 (0.125𝜋).
Generalizing these relationships we have G₁(ω) = G₂(ω/2). Based on the scaling property of the Fourier transform this implies that g₁(t) ∝ g₂(2t), so h₁[n] = h₂[n] (the T factor in the impulse-invariance formula absorbs the scaling constant). The two IIR filters designed are identical, independent of the choice of T. ■
The bilinear transformation provides a one-to-one mapping from the s-plane to the z-
plane. The mapping equation is given by
s = (2/T)(z − 1)/(z + 1) and ω = (2/T) tan(Ω/2)
Let us prove this statement, starting from the mapping z = e^(sT) and using the trapezoidal-rule approximation:
z = e^(sT) = e^(sT/2)/e^(−sT/2) ≈ (2 + Ts)/(2 − Ts) ⇔ s = (2/T)(z − 1)/(z + 1)
Substituting s = jω and z = e^(jΩ) gives the frequency mapping
ω = (2/T) tan(Ω/2)
This is called the pre-warping equation.
Example: Using bilinear transformation design a Butterworth low-pass filter with the
following specifications: 𝛺1 = 0.2𝜋 , 𝛺2 = 0.36𝜋 , 𝑅𝑝 = 2 dB, 𝐴𝑠 = 20 dB
Solution: We will use T = 1 s. The critical frequencies of the analog prototype filter are found using the pre-warping equation applied to Ω₁ and Ω₂:
ω₁ = 2 tan(Ω₁/2) = 2 tan(0.1π) ≈ 0.6498 rad/s, ω₂ = 2 tan(Ω₂/2) = 2 tan(0.18π) ≈ 1.2692 rad/s
(ω₁/ω₂)^(2N) = [10^(Rp/10) − 1]/[10^(As/10) − 1] ⟹ 2N log10(ω₁/ω₂) = log10([10^(Rp/10) − 1]/[10^(As/10) − 1])
N = log10(√([10^(Rp/10) − 1]/[10^(As/10) − 1]))/log10(ω₁/ω₂) = 3.8326
The filter order needs to be chosen as 𝑁 = 4. The 3-dB cutoff frequency is found by setting
the magnitude at the pass-band edge to −𝑅𝑝 dB by solving
(ω₁/ω_c)^(2N) = 10^(Rp/10) − 1 ⟺ (0.6498/ω_c)⁸ = 10^(0.2) − 1 ⟹ ω_c = 0.6949 rad/s.
The analog prototype filter can now be designed using Butterworth low-pass filter design
technique described before with the values of 𝑁 and 𝜔𝑐 found. The system function is
G(s) = 0.2331/(s⁴ + 1.8157s³ + 1.6485s² + 0.8767s + 0.2331)
The system function for the discrete-time filter is found through bilinear transformation
using the replacement
s = (2/T)(1 − z⁻¹)/(1 + z⁻¹)
which results in the system function H(z) of the discrete-time filter.
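Carrying out this substitution by hand is tedious; a MATLAB sketch using bilinear (with fs = 1/T = 1) produces H(z) directly:
% Bilinear transformation of the analog prototype G(s), T = 1 s
num = 0.2331; den = [1 1.8157 1.6485 0.8767 0.2331];
fs = 1;                                   % sampling frequency = 1/T
[numd, dend] = bilinear(num, den, fs);    % H(z) coefficients
W = 0:0.01:pi;
H = freqz(numd, dend, W);
plot(W, abs(H)); grid on; xlabel('\Omega (rad)'); ylabel('|H(\Omega)|');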
The sample amplitudes of the impulse response 𝒉[𝑛] for 𝑛 = 0, . . . , 𝑁 − 1 are often referred to
as the filter coefficients. In this section we will present two approaches to the problem of
designing FIR filters: one very simplistic approach that is nevertheless instructive, and one
elegant approach that is based on computer optimization. In a general sense, the design
procedure for FIR filters consists of the following steps:
1. Start with the desired frequency response. Select the appropriate length N of the filter.
This choice may be an educated guess, or may rely on empirical formulas.
2. Choose a design method that attempts to minimize, in some way, the difference between
the desired frequency response and the actual frequency response that results.
3. Determine the filter coefficients 𝒉[𝑛] using the design method chosen.
4. Compute the system function and decide if it is satisfactory. If not, repeat the process
with a different value of 𝑁 and/or a different design method.
It was discussed earlier in this chapter that one of the main reasons for preferring FIR
filters over IIR filters is the possibility of a linear phase characteristic. Linear phase is
desirable since it leads to a time-delay characteristic that is constant independent of
frequency. As far as real-time implementations of IIR and FIR filters are concerned, we have
seen that IIR filters are mathematically more efficient and computationally less demanding
of hardware resources compared to FIR filters. If linear phase is not a significant concern in
a particular application, then an IIR filter may be preferred. If linear phase is a
requirement, on the other hand, an FIR filter must be chosen even though its
implementation may be more costly. Therefore, in the discussion of FIR design we will
focus on linear-phase FIR filters.
Example: Consider the length-5 FIR filter with impulse response
h[n] = {3, 2, 1, 2, 3}, n = 0, 1, …, 4
Show that the phase characteristic of H(Ω) is a linear function of Ω.
Solution: The DTFT of the impulse response is H(Ω) = 3 + 2e^(−jΩ) + e^(−j2Ω) + 2e^(−j3Ω) + 3e^(−j4Ω). Let us factor out e^(−j2Ω) and write the result as
H(Ω) = e^(−j2Ω)[3e^(j2Ω) + 2e^(jΩ) + 1 + 2e^(−jΩ) + 3e^(−j2Ω)] = A(Ω)e^(−j2Ω)
The expression in square brackets contains symmetric exponentials. Using Euler's formula, it becomes
A(Ω) = 3e^(j2Ω) + 2e^(jΩ) + 1 + 2e^(−jΩ) + 3e^(−j2Ω) = 1 + 4cos(Ω) + 6cos(2Ω)
Here A(Ω) is purely real, so H(Ω) = A(Ω)e^(−j2Ω). The phase response of the filter is
φ(Ω) = −2Ω (up to jumps of π where A(Ω) changes sign)
corresponding to a time delay of 𝑛𝑑 = 2 samples. The magnitude and the phase of the
system function are shown in Figure.
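A short MATLAB check (a sketch) confirms the linear-phase behavior:
% Verify linear phase of h = [3 2 1 2 3]
h = [3 2 1 2 3];
[H, W] = freqz(h, 1, 512);
subplot(211); plot(W, abs(H)); grid on; ylabel('|H|');
subplot(212); plot(W, unwrap(angle(H))); grid on; ylabel('phase (rad)');
% phase is piecewise linear with slope -2 (pi jumps where A changes sign)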
We are now ready to discuss FIR filter design. First the Fourier series design method will be
discussed. The use of window functions to overcome the fundamental problems
encountered will be explored. Afterwards we will briefly discuss the Parks McClellan
technique.
Fourier Design of FIR Filters: Generally, FIR filters are designed directly from the
impulse response of an ideal low-pass filter. It can be shown that the impulse response of
an ideal low-pass filter is a sinc function, and therefore that an ideal low-pass filter is non-
causal and IIR. Now, we are going to show that a causal low-pass FIR filter can be obtained
by delaying the ideal impulse response by 𝑀 time units and truncating the impulse
response.
An ideal low-pass discrete-time filter with a cutoff frequency 𝛺𝑐 has the system function
1, |𝛺| < 𝛺𝑐
𝑯𝑑 (𝛺) = {
0, 𝛺𝑐 < 𝛺 < 𝜋
The impulse response of the ideal discrete-time low-pass filter is found by taking the
inverse DTFT of 𝑯𝑑 (𝛺) given by
h_d[n] = (1/2π) ∫_{−π}^{π} H_d(Ω)e^(jΩn) dΩ = (1/2π) ∫_{−Ω_c}^{Ω_c} e^(jΩn) dΩ = (Ω_c/π) sinc(Ω_c n/π)
The result obtained for h_d[n] is valid for all n. Therefore, h_d[n] is infinitely long and cannot be the impulse response of an FIR filter. On the other hand, the sample amplitudes of h_d[n] get smaller as the sample index increases in both directions.
h_T[n] = { h_d[n], |n| ≤ M ; 0, otherwise }
The truncated impulse response has 2𝑀 + 1 samples for 𝑛 = −𝑀, . . . , 𝑀. Truncation of the
ideal impulse response causes the spectrum of the filter to deviate from the ideal spectrum.
The system function for the resulting filter is
H_T(Ω) = Σ_{n=−∞}^{∞} h_T[n]e^(−jΩn) = Σ_{n=−M}^{M} h_d[n]e^(−jΩn)
Delaying h_T[n] by M samples gives h[n] = h_T[n − M], which is nonzero only for n = 0, …, 2M. Thus, h[n] corresponds to a causal FIR filter. Using the time-shifting property of the DTFT, the system function H(Ω) of the causal filter is related to H_T(Ω) by H(Ω) = H_T(Ω)e^(−jΩM)
The addition of the M-sample delay only affects the phase of the system function and not
its magnitude. Since 𝐡[𝑛] is both causal and finite-length, it is the impulse response of a
valid FIR filter.
Remark: It is worth observing that the impulse response 𝐡𝑑 [𝑛] is an even function of 𝑛.
Even symmetry is preserved in 𝐡𝑇 [𝑛]. Finally, when 𝐡[𝑛] is obtained by time shifting 𝐡𝑇 [𝑛]
by 𝑀 samples, the symmetry necessary for linear phase is preserved. Therefore, any filter
designed using this technique will have linear phase.
Example: (FIR Design by Fourier Method) Using the Fourier series method, design a length-
15 FIR low-pass filter to approximate an ideal low-pass filter with 𝛺𝑐 = 0.3𝜋 rad.
The impulse response of the FIR filter is h[n] = h_T[n − M] = 0.3 sinc(0.3(n − 7)), n = 0, 1, …, 14 (here M = 7). The magnitude response of the designed filter is shown in the figure.
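A minimal MATLAB sketch of this design and its magnitude response:
% Length-15 FIR low-pass by the Fourier series method, Omega_c = 0.3*pi
n = 0:14;
h = 0.3*sinc(0.3*(n-7));        % delayed, truncated ideal impulse response
[H, W] = freqz(h, 1, 512);
plot(W, abs(H)); grid on; xlabel('\Omega (rad)'); ylabel('|H(\Omega)|');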
Truncating h_d[n] to |n| ≤ M is equivalent to multiplying it by the rectangular window
w[n] = { 1, |n| ≤ M ; 0, otherwise }
Based on the multiplication property of the DTFT, the spectrum of h_T[n] is the convolution of the spectra of h_d[n] and w[n].
H_T(Ω) = (1/2π) ∫_{−π}^{π} H_d(λ) W(Ω − λ) dλ
As the spectra in Figure above reveal, the reason behind the oscillatory behavior of 𝑯 𝑇 (𝛺)
especially around 𝛺 = 𝛺𝑐 is the shape of the spectrum 𝑾(𝛺). The high-frequency content of
𝑾(𝛺), on the other hand, is mainly due to the abrupt transition of the rectangular window
from unit amplitude to zero amplitude at its two edges. The solution is to use an alternative
window function, one that smoothly tapers down to zero at its edges, in place of the
rectangular window. The chosen window function must also be an even function of 𝑛 in
order to keep the symmetry of 𝐡𝑇 [𝑛] for linear phase. A large number of window functions
exist in the literature. A few of them, in causal form for n = 0, …, N − 1, are listed below:
▪ Rectangular: w[n] = 1
▪ Bartlett (triangular): w[n] = 1 − |2n − (N − 1)|/(N − 1)
▪ Hanning: w[n] = 0.5 − 0.5 cos(2πn/(N − 1))
▪ Hamming: w[n] = 0.54 − 0.46 cos(2πn/(N − 1))
▪ Blackman: w[n] = 0.42 − 0.5 cos(2πn/(N − 1)) + 0.08 cos(4πn/(N − 1))
The next figure shows the decibel magnitude spectra for rectangular, Hamming and
Blackman windows for comparison.
It is seen that the rectangular window has stronger side lobes than the other two window
functions. Hamming window provides side lobes that are at least 40 dB below the main
lobe, and the side lobes for the Blackman window are at least 60 dB below the main lobe.
The downside to side lobe suppression is the widening of the main lobe.
Example: Redesign the filter of the previous example using Hamming and Blackman
windows.
clear all, clc, w=-pi:0.01:pi; H1=0; H2=0; W1=0; W2=0;
% Windowed impulse responses (N = 15, M = 7): Hamming (w1) and Blackman (w2)
for n=0:14
m=n+1;
h(m)=0.3*sinc(0.3*(n-7));                          % ideal (delayed) low-pass
w1(m)=0.54-0.46*cos(pi*n/7);                       % Hamming, 2*pi*n/(N-1)
w2(m)=0.42-0.5*cos(pi*n/7)+0.08*cos(2*pi*n/7);     % Blackman
end
h1=h.*w1; h2=h.*w2;
% DTFTs of the two windowed designs
for n=0:14
m=n+1;
H1=H1+h1(m).*exp(-1i*w*n); H2=H2+h2(m).*exp(-1i*w*n);
end
subplot(221);
plot(w,abs(H1),'r','linewidth',1.5); grid on; hold on
plot(w,abs(H2),'b','linewidth',1.5);
subplot(222);
plot(w,angle(H1),'r','linewidth',1.5), grid on; hold on
plot(w,angle(H2),'b','linewidth',1.5)
figure
% Decibel magnitude spectra of the two windows themselves
for n=0:14
m=n+1;
W1=W1+w1(m).*exp(-1i*w*n); W2=W2+w2(m).*exp(-1i*w*n);
end
plot(w,20*log10(abs(W1)),'r','linewidth',1.5); grid on, hold on
plot(w,20*log10(abs(W2)),'b','linewidth',1.5)
In the discussion above we have concentrated on the design of low-pass FIR filters. Other
types of filters can also be designed; the only modification that is needed is in determining
the ideal impulse response 𝐡𝑑 [𝑛]. Expressions for the ideal impulse responses of high-pass,
bandpass and band-reject filters can be derived from 𝐡𝑑 [𝑛] = 𝛺𝑐 sinc(𝛺𝑐 𝑛/𝜋)/𝜋 as follows:
Highpass:
H_d(Ω) = { 1, Ω_c < |Ω| < π ; 0, |Ω| < Ω_c } ⟺ h_d[n] = δ[n] − (Ω_c/π) sinc(Ω_c n/π)
Bandpass:
H_d(Ω) = { 1, Ω_c1 < |Ω| < Ω_c2 ; 0, otherwise } ⟺ h_d[n] = (Ω_c2/π) sinc(Ω_c2 n/π) − (Ω_c1/π) sinc(Ω_c1 n/π)
Band-reject:
H_d(Ω) = { 0, Ω_c1 < |Ω| < Ω_c2 ; 1, otherwise } ⟺ h_d[n] = δ[n] − (Ω_c2/π) sinc(Ω_c2 n/π) + (Ω_c1/π) sinc(Ω_c1 n/π)
Example: Using the Fourier series design method with a triangular (Bartlett) window,
design a 24th order FIR bandpass filter with passband edge frequencies 𝛺1 = 0.4𝜋 and
𝛺2 = 0.7𝜋 as shown in Figure.
Solution: The order of the FIR filter is 𝑁 − 1 = 24. The following two statements create a
length-25 Bartlett window and then use it for designing the bandpass filter required.
wn = bartlett (25);
hn = fir1 (24 ,[0.4 ,0.7] , wn)
Some important details need to be highlighted. The function bartlett(. . ) uses the filter
length N (this applies to other window generation functions such as hamming(. . ), hann(. . ),
and blackman(. . ) as well). On the other hand, the design function fir1(. . ) uses the filter
order which is 𝑁 − 1. The second argument to the function fir1(. . ) is a vector of two
normalized edge frequencies which results in a bandpass filter being designed.
clear all, clc, wn = bartlett(25); hn = fir1(24,[0.4,0.7],wn);
Omg = [-256:255]/256*pi; H = fftshift(fft(hn,512));
Omgd = [-1,-0.7,-0.7,-0.4,-0.4,0.4,0.4,0.7,0.7,1]*pi;   % ideal-response breakpoints
Hd = [0,0,1,1,0,0,1,1,0,0];
plot(Omg,abs(H),Omgd,Hd); grid;
axis([-pi,pi,-0.1,1.1]);
title('|H(\Omega)|');
xlabel('\Omega (rad)'); ylabel('Magnitude');
MATLAB Problems:
Design a 48th-order FIR bandpass filter with passband 0.35π ≤ Ω ≤ 0.65π rad/sample. Visualize its magnitude and phase responses; a sketch is given below.
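% 48th-order FIR bandpass (fir1 applies a Hamming window by default)
b = fir1(48, [0.35 0.65]);          % edges normalized to pi rad/sample
freqz(b, 1, 512)                    % magnitude (dB) and phase responses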
Exercise: 04 A third-order lowpass filter with passband edge 𝜔𝑝 = 1 rad/s and passband
ripple 𝐴𝑝 = 1.0 dB is required. Obtain the poles and multiplier constant of the transfer
function assuming a Chebyshev approximation.
(a) Find the minimum filter order that would satisfy the specifications.
(b) Obtain the required transfer function.
Exercise: 06 A lowpass filter is required that would satisfy the following specifications:
• Passband edge 𝜔𝑝 : 2000 rad/s • Maximum passband loss 𝐴𝑝 : 0.4 dB
• Stopband edge 𝜔𝑠 : 7000 rad/s • Minimum stopband loss 𝐴𝑠 : 45.0 dB
(a) Assuming a Butterworth approximation, find the required order n and the value of the transformation parameter λ.
(b) Form H(s).
Exercise: 07 Repeat Prob. 06 for the case of a Chebyshev approximation and compare the
design obtained with that obtained in Prob. 06.
Exercise: 08 Repeat Prob. 06 for the case of an inverse-Chebyshev approximation and
compare the design obtained with that obtained in Prob. 06.
Exercise: 09 A highpass filter is required that would satisfy the following specifications:
• Passband edge 𝜔𝑝 : 2000 rad/s • Maximum passband loss 𝐴𝑝 : 0.5 dB
• Stopband edge 𝜔𝑠 : 1000 rad/s • Minimum stopband loss 𝐴𝑠 : 40.0 dB
(a) Assuming a Butterworth approximation, find the required order n and the value of the
transformation parameter λ.
(b) Form H(s).
Exercise: 11 A bandstop filter is required that would satisfy the following specifications:
• Lower passband edge 𝜔𝑝1: 20 rad/s
• Upper passband edge 𝜔𝑝2: 80 rad/s
• Lower stopband edge ω_s1: 48 rad/s
• Upper stopband edge ω_s2: 52 rad/s
• Maximum passband loss 𝐴𝑝 : 1.0 dB
• Minimum stopband loss 𝐴𝑠 : 25.0 dB
(a) Assuming a Butterworth approximation, find the required order n and the value of the
transformation parameters 𝐵 and 𝜔0 .
(b) Form H(s).
H_A(s) = 1/((s + 1)(s² + s + 1))
The sampling frequency is 10 rad/s.
H(s) = 5/(s + 2.5) + (5s + 13)/(s² + 4s + 25/4)
Exercise: 17 Obtain a system function for each of the given analog filter network
Exercise: 18 Obtain a system function for each of the given analog filter network
Exercise: 19 Obtain a system function for each of the given analog filter network
Exercise: 20 Obtain a system function for each of the given active analog filter network
Exercise: 21 Obtain a system function for each of the given active analog filter network
Exercise: 22 Obtain a system function for the given active analog filter network
Exercise: 23 Obtain a system function for the given active analog filter network
Exercise: 24 Obtain a system function for the given passive analog filter network
This page intentionally left blank
Review Questions (set N°1)
▪ Define analog signal and give appropriate example.
▪ Define DT signal and illustrate it with a plot.
▪ What is uniform sampling? What is a DT signal?
▪ State the sampling theorem in time domain. Explain the aliasing effect.
▪ What is Nyquist rate of sampling?
▪ How does one decide the sampling frequency for any signal to be sampled?
▪ Can a signal with infinite bandwidth be sampled?
▪ Explain the process of sampling using the concept of a train of pulses.
▪ Explain the occurrence of replicated spectrum when the signal is sampled.
▪ How can one recover the signal from its samples using a sinc function?
▪ Explain the aliasing effect for a signal which is sampled below the Nyquist rate.
▪ What is a practical way to recover the signal from signal samples?
▪ When does a phase reversal occur for a recovered aliased signal?
▪ What is the need for an anti-aliasing filter?
▪ What is a signal? What are scalar-valued signals and multiple-valued signals?
▪ How will you make use of test signals for system testing before it is actually implemented?
▪ Define even and odd property of a signal. Define different combinations of even and odd
signals and state whether they will be even or odd.
▪ What is linearity? Define additivity and homogeneity. Is the transfer curve for a linear
system always linear? Explain physical significance of linearity.
▪ Explain physical significance of shift invariance property.
▪ Explain property of superposition.
▪ When will you say that the system is memoryless? Give one example.
▪ Define causality for a system. Can we design and use a non-causal system?
▪ Is a causal system a requirement for spatial systems?
▪ Explain the meaning of causality for a system of a human being.
▪ What is invertibility? Can we use a non-invertible transform for processing a signal?
▪ What is a zero input response? Explain the procedure to find the zero input response for
CT and DT systems.
▪ What is a zero state response? How will you find the response to any input signal?
▪ How will you calculate the impulse response of the system? How will you find the step
response for the system?
▪ Explain the property of memory in terms of impulse response of the CT and DT system.
▪ Explain the property of causality in terms of impulse response of the CT and DT system.
▪ Explain the property of stability in terms of impulse response of the CT and DT system.
Review Questions (set N°3)
▪ How will you classify the signals as periodic or aperiodic? What is FS representation of the
periodic signal?
▪ Can you call the signal as a vector? What are basis functions? Why they should be
orthogonal?
▪ Why are the exponential functions used in place of sine and cosine functions?
▪ Why at all use sinusoidal or exponential functions as basis functions?
▪ State Dirichlet's conditions?
▪ Explain the difference between the phase responses of right-handed and left handed
exponential signals.
▪ Explain the use of Dirac delta function for finding the FT of exponential function, sine
function, cosine function, etc.
▪ Explain the procedure of IFT calculation using partial fraction expansion.
▪ Explain why the DTFT is periodic with a period of 2π.
▪ Explain the procedure for obtaining the DTFT in the form of a closed-form expression.
▪ Explain how the DT signal is obtained?
▪ Explain the DTFT of the train of impulses.
▪ What is linearity property of FT? What are its applications?
▪ Explain time shifting, time reversal and time scaling property of FT and DTFT along with
the physical significance of each property.
▪ Explain time differentiation and time integration property of FT. Does it have any
significance for DTFT?
▪ Explain frequency shifting and frequency differentiation property of FT and DTFT.
▪ Explain modulation and convolution properties of FT and DTFT with the applications.
▪ What is Parseval's theorem? What is its significance?
▪ How will you use FT to analyze the LTI system? Give a suitable example to explain.
▪ How will you do frequency analysis of the signal? How will you analyze frequency response
of the LTI system.
Review Questions (set N○4)
▪ Define a complex frequency plane and define the Laplace transform (LT) of any signal x(t).
▪ What is the physical significance of LT? State properties of LT.
▪ Can we find the LT of a signal that is not absolutely integrable? Give a suitable example.
▪ Explain the relation between FT and LT. Compare LT and FT.
▪ Prove that the complex exponential 𝑒 𝑠𝑡 is an Eigen function of LTI system.
▪ Explain the procedure to find the total response of the system. Clearly state the meaning of
natural response and forced response.
▪ How will you find the zero input response and the zero state response?
▪ Can you analyze the stability of the causal and non-causal LTI system given the pole
locations?
▪ How will you find the frequency response of the system from the transfer function?
▪ Explain the use of MATLAB commands to plot the poles and zeros and to plot the impulse
response of the system.
[Formula sheet: definitions of signal power and energy, even/odd decomposition, convolution, Fourier series, Fourier transform and DTFT analysis/synthesis pairs and their basic properties, and a table of standard integrals. Prepared by Prof. Dr. Hasan AMCA – Electrical and Electronic Engineering Department - Eastern Mediterranean University – May 2018]
Laplace Transform Table
F(s) ⟷ f(t), 0 ≤ t
1. 1 ⟷ δ(t), unit impulse at t = 0
2. 1/s ⟷ 1 or u(t), unit step starting at t = 0
3. 1/s² ⟷ t·u(t), ramp function
4. 1/sⁿ ⟷ t^(n−1)/(n − 1)!, n = positive integer
5. (1/s)e^(−as) ⟷ u(t − a), unit step starting at t = a
6. (1/s)(1 − e^(−as)) ⟷ u(t) − u(t − a), rectangular pulse
7. 1/(s + a) ⟷ e^(−at), exponential decay
8. 1/(s + a)ⁿ ⟷ (1/(n − 1)!) t^(n−1) e^(−at), n = positive integer
9. 1/(s(s + a)) ⟷ (1/a)(1 − e^(−at))
10. 1/(s(s + a)(s + b)) ⟷ (1/ab)(1 − (b/(b − a))e^(−at) + (a/(b − a))e^(−bt))
11. (s + α)/(s(s + a)(s + b)) ⟷ (1/ab)[α − (b(α − a)/(b − a))e^(−at) + (a(α − b)/(b − a))e^(−bt)]
12. 1/((s + a)(s + b)) ⟷ (1/(b − a))(e^(−at) − e^(−bt))
13. s/((s + a)(s + b)) ⟷ (1/(a − b))(a e^(−at) − b e^(−bt))
24. 1/((s + a)² + b²) ⟷ (1/b) e^(−at) sin(bt)
24a. 1/(s² + 2ζωₙs + ωₙ²) ⟷ (1/(ωₙ√(1 − ζ²))) e^(−ζωₙt) sin(ωₙ√(1 − ζ²) t)
25. (s + a)/((s + a)² + b²) ⟷ e^(−at) cos(bt)
26. (s + α)/((s + a)² + b²) ⟷ (√((α − a)² + b²)/b) e^(−at) sin(bt + φ), φ = atan2(b, α − a)
26a. (s + α)/(s² + 2ζωₙs + ωₙ²) ⟷ √((α/ωₙ − ζ)²/(1 − ζ²) + 1) · e^(−ζωₙt) sin(ωₙ√(1 − ζ²) t + φ), φ = atan2(ωₙ√(1 − ζ²), α − ζωₙ)
27. 1/(s[(s + a)² + b²]) ⟷ 1/(a² + b²) + (1/(b√(a² + b²))) e^(−at) sin(bt − φ), φ = atan2(b, −a)
27a. 1/(s(s² + 2ζωₙs + ωₙ²)) ⟷ (1/ωₙ²)[1 − (1/√(1 − ζ²)) e^(−ζωₙt) sin(ωₙ√(1 − ζ²) t + φ)], φ = cos⁻¹ζ
28. (s + α)/(s[(s + a)² + b²]) ⟷ α/(a² + b²) + (1/b)√(((α − a)² + b²)/(a² + b²)) e^(−at) sin(bt + φ)
28a. (s + α)/(s(s² + 2ζωₙs + ωₙ²)) ⟷ α/ωₙ² + (1/(ωₙ√(1 − ζ²)))√((α/ωₙ − ζ)² + (1 − ζ²)) · e^(−ζωₙt) sin(ωₙ√(1 − ζ²) t + φ)
34. (s + α)/(s(s + a)²) ⟷ (1/a²)[α − αe^(−at) + a(a − α)t e^(−at)]
38. (s + α)/((s² + ω²)[(s + a)² + b²]) ⟷ (1/ω)√((α² + ω²)/c) sin(ωt + φ₁) + (1/b)√(((α − a)² + b²)/c) e^(−at) sin(bt + φ₂),
with c = (2aω)² + (a² + b² − ω²)², φ₁ = atan2(−2aω, a² + b² − ω²), φ₂ = atan2(2ab, a² − b² + ω²)
z-Transform Table
F(s) ⟷ f(t) ⟷ f(kT) or f(k) ⟷ F(z)
6. 2/s³ ⟷ t² ⟷ (kT)² ⟷ T²z⁻¹(1 + z⁻¹)/(1 − z⁻¹)³
7. 6/s⁴ ⟷ t³ ⟷ (kT)³ ⟷ T³z⁻¹(1 + 4z⁻¹ + z⁻²)/(1 − z⁻¹)⁴
8. a/(s(s + a)) ⟷ 1 − e^(−at) ⟷ 1 − e^(−akT) ⟷ (1 − e^(−aT))z⁻¹/((1 − z⁻¹)(1 − e^(−aT)z⁻¹))
9. (b − a)/((s + a)(s + b)) ⟷ e^(−at) − e^(−bt) ⟷ e^(−akT) − e^(−bkT) ⟷ (e^(−aT) − e^(−bT))z⁻¹/((1 − e^(−aT)z⁻¹)(1 − e^(−bT)z⁻¹))
10. 1/(s + a)² ⟷ t e^(−at) ⟷ kT e^(−akT) ⟷ T e^(−aT)z⁻¹/(1 − e^(−aT)z⁻¹)²
11. s/(s + a)² ⟷ (1 − at)e^(−at) ⟷ (1 − akT)e^(−akT) ⟷ (1 − (1 + aT)e^(−aT)z⁻¹)/(1 − e^(−aT)z⁻¹)²
12. 2/(s + a)³ ⟷ t²e^(−at) ⟷ (kT)²e^(−akT) ⟷ T²e^(−aT)(1 + e^(−aT)z⁻¹)z⁻¹/(1 − e^(−aT)z⁻¹)³
13. a²/(s²(s + a)) ⟷ at − 1 + e^(−at) ⟷ akT − 1 + e^(−akT) ⟷ [(aT − 1 + e^(−aT)) + (1 − e^(−aT) − aT e^(−aT))z⁻¹]z⁻¹/((1 − z⁻¹)²(1 − e^(−aT)z⁻¹))
14. ω/(s² + ω²) ⟷ sin ωt ⟷ sin ωkT ⟷ z⁻¹ sin ωT/(1 − 2z⁻¹ cos ωT + z⁻²)
15. s/(s² + ω²) ⟷ cos ωt ⟷ cos ωkT ⟷ (1 − z⁻¹ cos ωT)/(1 − 2z⁻¹ cos ωT + z⁻²)
16. ω/((s + a)² + ω²) ⟷ e^(−at) sin ωt ⟷ e^(−akT) sin ωkT ⟷ e^(−aT)z⁻¹ sin ωT/(1 − 2e^(−aT)z⁻¹ cos ωT + e^(−2aT)z⁻²)
17. (s + a)/((s + a)² + ω²) ⟷ e^(−at) cos ωt ⟷ e^(−akT) cos ωkT ⟷ (1 − e^(−aT)z⁻¹ cos ωT)/(1 − 2e^(−aT)z⁻¹ cos ωT + e^(−2aT)z⁻²)
18. – ⟷ – ⟷ aᵏ ⟷ 1/(1 − az⁻¹)
19. – ⟷ – ⟷ aᵏ⁻¹, k = 1, 2, 3, … ⟷ z⁻¹/(1 − az⁻¹)
20. – ⟷ – ⟷ k aᵏ⁻¹ ⟷ z⁻¹/(1 − az⁻¹)²
21. – ⟷ – ⟷ k² aᵏ⁻¹ ⟷ z⁻¹(1 + az⁻¹)/(1 − az⁻¹)³
22. – ⟷ – ⟷ k³ aᵏ⁻¹ ⟷ z⁻¹(1 + 4az⁻¹ + a²z⁻²)/(1 − az⁻¹)⁴
23. – ⟷ – ⟷ k⁴ aᵏ⁻¹ ⟷ z⁻¹(1 + 11az⁻¹ + 11a²z⁻² + a³z⁻³)/(1 − az⁻¹)⁵
24. – ⟷ – ⟷ aᵏ cos kπ ⟷ 1/(1 + az⁻¹)