
Institute of Aeronautics & Space Studies - Blida University (Algeria)

Fundamentals of Linear
Systems and Signal
Processing
Authored by:
BEKHITI Belkacem
NAIL Bachir

2021
Professor Kamel Hariche
The Breadth and Depth of
Signal Processing
(By Steven W. Smith)

I. What Is Signal Processing? It is the science of analyzing, synthesizing, sampling, encoding, transforming, decoding, enhancing, transporting, archiving, and in general manipulating signals in some way. With the rapid advances in very-large-scale integrated (VLSI) circuit technology and computer systems, the subject of signal processing has mushroomed into a multifaceted discipline, with each facet deserving its own volume. This book is concerned primarily with the branch of signal processing that entails the spectral characteristics and properties of signals. The spectral representation and analysis of signals are carried out through mathematical transforms, e.g., the Fourier series and the Fourier transform. If the processing entails modifying, reshaping, or transforming the spectrum of a signal in some way, then the processing involved will be referred to as filtering.

Signal processing is one of the most powerful technologies shaping science and engineering in the twenty-first century. Revolutionary changes have already been made in a broad range of fields: communications, medical imaging, radar & sonar, high-fidelity music reproduction, and oil prospecting, to name just a few. Each of these areas has developed a deep Digital Signal Processing (DSP) technology, with its own algorithms, mathematics, and specialized techniques. This combination of breadth and depth makes it impossible for any one individual to master all of the DSP technology that has been developed.

II. The Roots of DSP: Signal processing is distinguished from other areas in computer
science by the unique type of data it uses: signals. In most cases, these signals originate as
sensory data from the real world: seismic vibrations, visual images, sound waves, etc.
Signal processing is the mathematics, the algorithms, and the techniques used to
manipulate these signals after they have been converted into a digital form. This includes a
wide variety of goals, such as: enhancement of visual images, recognition and generation of
speech, compression of data for storage and transmission, etc. Suppose we attach an
analog-to-digital converter to a computer and use it to acquire a chunk of real world data.
DSP answers the question: What next?

The roots of DSP are in the 1960s and 1970s when digital computers first became
available. Computers were expensive during this era, and DSP was limited to only a few
critical applications. Pioneering efforts were made in four key areas: radar & sonar, where
national security was at risk; oil exploration, where large amounts of money could be made;
space exploration, where the data are irreplaceable; and medical imaging, where lives could
be saved.

The personal computer revolution of the 1980s and 1990s caused DSP to explode with new applications. Rather than being motivated by military and government needs, DSP was suddenly driven by the commercial marketplace. Anyone who thought they could make money in the rapidly expanding field was suddenly a DSP vendor. DSP reached the public in such products as mobile telephones, compact disc players, and electronic voice mail. The next page illustrates a few of these varied applications.
DSP Applications

Space:
− Space photograph enhancement
− Data compression
− Intelligent sensory analysis by remote space probes

Medical:
− Diagnostic imaging (CT, MRI, ultrasound, and others)
− Electrocardiogram analysis
− Medical image storage/retrieval

Commercial:
− Image and sound compression for multimedia presentation
− Movie special effects
− Video conference calling

Telephone:
− Voice and data compression
− Echo reduction
− Signal multiplexing
− Filtering

Military:
− Radar
− Sonar
− Ordnance guidance
− Secure communication

Industrial:
− Oil and mineral prospecting
− Process monitoring & control
− Nondestructive testing
− CAD and design tools
This recent history is more than a curiosity; it has a tremendous impact on your ability to
learn and use DSP. Suppose you encounter a DSP problem, and turn to textbooks or other
publications to find a solution. What you will typically find is page after page of equations,
obscure mathematical symbols, and unfamiliar terminology. It's a nightmare! It is just
intended for a very specialized audience. State-of-the-art researchers need this kind of
detailed mathematics to understand the theoretical implications of the work.

As you go through each application, notice that DSP is very interdisciplinary, relying on the
technical work in many adjacent fields. As Fig. 1-2 suggests, the borders between DSP and
other technical disciplines are not sharp and well defined, but rather fuzzy and
overlapping. If you want to specialize in DSP, these are the allied areas you will also need to
study.

III. Signal Processing in Telecommunications: Telecommunication is about transferring


information from one location to another. This includes many forms of information:
telephone conversations, television signals, computer files, and other types of data. To
transfer the information, you need a channel between the two locations. This may be a wire
pair, radio signal, optical fiber, etc. Telecommunications companies receive payment for
transferring their customers' information, while they must pay to establish and maintain
the channel. The financial bottom line is simple: the more information they can pass
through a single channel, the more money they make. DSP has revolutionized the
telecommunications industry in many areas: signaling tone generation and detection,
frequency band shifting, filtering to remove power line hum, etc. Three specific examples
from the telephone network will be discussed here: multiplexing, compression, and echo
control.

III.I. Multiplexing: the process of combining multiple signals into one signal over a shared medium, from which the individual signals can be recovered.

III.II. Compression: When a voice signal is digitized at 8000 samples/sec, most of the
digital information is redundant. That is, the information carried by any one sample is
largely duplicated by the neighboring samples. To avoid duplication we use a data
compression algorithm.
III.III. Echo Control: Echoes are a serious problem in long-distance telephone connections. When you speak into a telephone, a signal representing your voice travels to the connecting receiver, where a portion of it returns as an echo. DSP deals with this problem by means of echo-cancellation filters, which estimate the echo and subtract it from the return signal.

III.IV. Speech Generation and Recognition: Speech generation and recognition are used to communicate between humans and machines. Rather than using your hands and eyes, you use your mouth and ears. This is very convenient when your hands and eyes should be doing something else, such as driving a car, performing surgery, or (unfortunately) firing your weapons at the enemy. Electronic devices built for this purpose rely on DSP to do it well.

IV. DSP in Military Applications: The military of every country has its own communication network, which is usually much more technically sophisticated than the civilian network. Examples of such applications are radar signal detection, sonar detection, cryptology and cryptanalysis, electronic warfare systems, etc.

IV.I. Radar: The word radar is an acronym for 𝐑𝐀dio 𝐃etection 𝐀nd 𝐑anging. In the simplest radar system, a radio transmitter produces a pulse of radio frequency energy a few microseconds long. This pulse is fed into a highly directional antenna, where the resulting radio wave propagates away at the speed of light. Aircraft in the path of this wave will reflect a small portion of the energy back toward a receiving antenna, situated near the transmission site. To determine the target position (range and angle), radar equipment performs deep and complex DSP on the reflected signal.

IV.II. Sonar: Sonar is an acronym for 𝐒𝐎und 𝐍𝐀vigation and 𝐑anging. It is divided into two categories, active and passive. In active sonar, sound pulses between 2 kHz and 40 kHz are transmitted into the water, and the resulting echoes are detected and analyzed. Uses of active sonar include detection & localization of undersea bodies, navigation, communication, and mapping the sea floor. A maximum operating range of 10 to 100 kilometers is typical. In comparison, passive sonar simply listens to underwater sounds, which include natural turbulence, marine life, and mechanical sounds from submarines and surface vessels. Since passive sonar emits no energy, it is ideal for covert operations. You want to detect the other guy without him detecting you. The most important application of passive sonar is in military surveillance systems that detect and track submarines. Passive sonar typically uses lower frequencies than active sonar because they propagate through the water with less absorption. Detection ranges can be thousands of kilometers.
V. Industrial and Petroleum Applications: As early as the 1920s, geophysicists discovered that the structure of the earth's crust could be probed with sound. Prospectors could set off an explosion and record the echoes from boundary layers more than ten kilometers below the surface. These echo seismograms were interpreted by eye to map the subsurface structure. The reflection seismic method rapidly became the primary method for locating petroleum and mineral deposits, and remains so today.

In the ideal case, a sound pulse sent into the ground produces a single echo for each
boundary layer the pulse passes through. Unfortunately, the situation is not usually this
simple. Each echo returning to the surface must pass through all the other boundary
layers above where it originated. This can result in the echo bouncing between layers,
giving rise to echoes of echoes being detected at the surface. These secondary echoes can
make the detected signal very complicated and difficult to interpret. Digital Signal
Processing has been widely used since the 1960s to isolate the primary from the secondary
echoes in reflection seismograms. How did the early geophysicists manage without DSP?
The answer is simple: they looked in easy places, where multiple reflections were
minimized. DSP allows oil to be found in difficult locations, such as under the ocean.

VI. Signal Processing in Medical Applications: Images are signals with special characteristics. First, they are a measure of a parameter over space (distance), while most signals are a measure of a parameter over time. Second, they contain a great deal of information. For example, more than 10 megabytes can be required to store one second of television video. This is more than a thousand times greater than for a voice signal of similar length. Third, the final judge of quality is often a subjective human evaluation, rather than an objective criterion. These special characteristics have made image processing a distinct subgroup within DSP.

In 1895, Wilhelm Conrad Röntgen discovered that x-rays could pass through substantial amounts of matter. Medicine was revolutionized by the ability to look inside the living human body. Medical x-ray systems spread throughout the world in only a few years. In spite of its obvious success, medical x-ray imaging was limited by four problems until DSP and related techniques came along in the 1970s. First, overlapping structures in the body can hide behind each other. For example, portions of the heart might not be visible behind the ribs. Second, it is not always possible to distinguish between similar tissues. For example, it may be possible to separate bone from soft tissue, but not to distinguish a tumor from the liver. Third, x-ray images show anatomy, the body's structure, and not physiology, the body's operation. The x-ray image of a living person looks exactly like the x-ray image of a dead one! Fourth, x-ray exposure can cause cancer, requiring it to be used sparingly and only with proper justification.

The problem of overlapping structures was solved in 1971 with the introduction of the first
computed tomography scanner (formerly called computed axial tomography, or CAT
scanner). Computed tomography (CT) is a classic example of Digital Signal Processing. X-
rays from many directions are passed through the section of the patient's body being
examined. Instead of simply forming images with the detected x-rays, the signals are
converted into digital data and stored in a computer. The information is then used to
calculate images that appear to be slices through the body. These images show much
greater detail than conventional techniques, allowing significantly better diagnosis and
treatment. The impact of CT was nearly as large as the original introduction of x-ray
imaging itself. Within only a few years, every major hospital in the world had access to a CT
scanner. In 1979, two of CT's principal contributors, Godfrey N. Hounsfield and Allan M. Cormack, shared the Nobel Prize in Medicine. That's good DSP!

The last three x-ray problems have been solved by using penetrating energy other than x-
rays, such as radio and sound waves. DSP plays a key role in all these techniques. For
example, Magnetic Resonance Imaging (MRI) uses magnetic fields in conjunction with radio
waves to probe the interior of the human body. Properly adjusting the strength and frequency of the fields causes the atomic nuclei in a localized region of the body to resonate
between quantum energy states. This resonance results in the emission of a secondary
radio wave, detected with an antenna placed near the body. The strength and other
characteristics of this detected signal provide information about the localized region in
resonance. Adjustment of the magnetic field allows the resonance region to be scanned
throughout the body, mapping the internal structure. This information is usually presented
as images, just as in computed tomography. Besides providing excellent discrimination
between different types of soft tissue, MRI can provide information about physiology, such
as blood flow through arteries. MRI relies totally on Digital Signal Processing techniques,
and could not be implemented without them.

VII. Signal Processing in Space Applications: Sometimes, you just have to make the most out of a bad picture. This is frequently the case with images taken from unmanned
satellites and space exploration vehicles. No one is going to send a repairman to Mars just
to tweak the knobs on a camera! DSP can improve the quality of images taken under
extremely unfavorable conditions in several ways: brightness and contrast adjustment,
edge detection, noise reduction, focus adjustment, motion blur reduction, etc. Images that
have spatial distortion, such as encountered when a flat image is taken of a spherical
planet, can also be warped into a correct representation. Many individual images can also be combined into a single database, allowing the information to be displayed in unique ways, for example, a video sequence simulating an aerial flight over the surface of a distant planet.

VIII. Signal Processing in Commercial Imaging Products: The large information content in images is a problem for systems sold in mass quantity to the general public. Commercial systems must be cheap, and this doesn't mesh well with large memories and high data transfer rates. One answer to this dilemma is image compression. Just as with voice signals, images contain a tremendous amount of redundant information and can be run through algorithms that reduce the number of bits needed to represent them. Television and other moving pictures are especially suitable for compression, since most of the image remains the same from frame to frame. Commercial imaging products that take advantage of this technology include video telephones, computer programs that display moving pictures, and digital television.
IX. Organization of this Book: For most practical systems, input and output signals are
continuous and these signals can be processed using continuous systems. However, due to
advances in digital systems technology and numerical algorithms, it is advantageous to
process continuous signals using digital systems (systems using digital devices) by
converting the input signal into a digital signal. Therefore, the study of both continuous
and digital systems is required. As most real systems are continuous and the concepts are
relatively easier to understand, we describe analog signals and systems first, immediately
followed by the corresponding description of digital signals and systems.

In this book, many illustrative examples are included in each chapter for easy
understanding of the fundamentals and methodologies of signals and systems. An
attractive feature of this book is the inclusion of MATLAB-based examples with codes to
encourage readers to implement exercises on their personal computers in order to become
confident with the fundamentals and to gain more insight into signals and systems.

This book is divided into 10 chapters. Chapter 1 presents an introduction to signals and systems, with a basic classification of signals, elementary operations on signals, and some examples of signals and systems. Chapter 2 introduces linear time-invariant systems. Chapter 3 gives the Laplace-transform analysis of continuous-time signals and systems. The z-transform analysis of discrete-time signals and systems is covered in Chapter 4. Chapter 5 deals with the Fourier transform and the analysis of continuous-time signals and systems. Chapter 6 deals with the discrete-time Fourier transform and the analysis of discrete-time signals and systems. The Fast Fourier Transform, implementation of linear systems, and state-space representation of LTI systems are discussed in Chapters 7 and 8. Sampling theory, reconstruction of a band-limited signal from its samples, and analog/digital and digital/analog conversion are discussed in Chapter 9. Lastly, in Chapter 10, ideal continuous-time filters, practical (analog and digital) filter approximations and design methodologies, and the design of special classes of filters are briefly detailed.
CHAPTER I:
Introduction to
Signals and Systems
I. Introduction
II. Elementary Operations on Signals
III. Classification of Signals
1. Real, Complex, Even and odd signals
2. Continuous-time and discrete-time signals (Analog and digital signals)
3. Periodic and aperiodic signals
4. Energy and power signals (Measuring signals)
5. Deterministic and probabilistic signals
IV. Some Useful Signal Models
IV.I. The Step Signal (Heaviside Function)
IV.II The Impulse Signal
IV.III The Ramp Signal
IV.IV The Gate-Signal (П-Signal)
IV.V The Sign Signal
IV.VI The Exponential Signal
IV.VII The Sinusoidal Signals
V. Solved Problems

A signal is a function of one or more variables that conveys information about some
(usually physical) phenomenon. Signals can be classified based on the number of
independent variables with which they are associated. A signal that is a function of only
one variable is said to be one dimensional (1D). Similarly, a signal that is a function of two
or more variables is said to be multidimensional. A signal can also be classified on the
basis of whether it is a function of continuous or discrete variables. A system is an entity
that processes one or more input signals in order to produce one or more output signals.
Such an entity is represented mathematically by a system of one or more equations. In a
communication system, the input might represent the message to be sent, and the output
might represent the received message. In a robotics system, the input might represent the
desired position of the end effector (e.g., gripper), while the output could represent the
actual position.
Introduction to Signals
and Systems
I. Introduction: It is natural and logical, when dealing with a topic, to ask such questions as: what is this, and what does it stand for? This is especially true when the matter is a science with its own concepts and terminology. Accordingly, it can be said that

A Signal is a term used to denote the information-carrying quantity being transmitted to or from an entity such as a device, instrument, or physiological source. Mathematically speaking, it is a function (or a sequence) of an independent variable 𝑡, typically representing time; it represents a physical quantity or variable, and it carries information. Thus, a continuous-time signal is denoted 𝒙(𝑡) and a discrete-time signal is denoted 𝒙[𝑛]. We can also represent a signal by a waveform.

Remark: All continuous signals that are functions of time are continuous-time, but not all
continuous-time signals are continuous functions of time.

Examples: (types of signals in the real-life world)

– Radio and Television Signals.
– Telecommunications and Computer Signals.
– Biomedical Engineering Signals.
– Electrical current flowing through a resistor.
– Sonar sound waves propagating under water.
– Seismic (Earthquakes) and Geodesic signals.
– A picture (image) consists of a brightness/color signal.
– A video signal is a sequence of images.
A System is a term used to denote an entity that processes or operates on signal(s) to transform one signal into another (manipulate, change, record, transmit). Mathematically speaking, a system is a function mapping input signals into output signals. A system can be a piece of code/software, a physical device, or a black box whose input is a signal, which performs some processing on that signal and whose output is a signal. The input is known as the excitation and the output is known as the response.

Examples: (systems in real life)

– Amplifiers, Radios, Televisions
– Telephone, Modem, Computer
– Oscilloscopes, ECG, EEG, EMG
– Aircraft, Anti-Aircraft gun, Ship
– Radar, Sonar, and Antenna.

A system whose output signal is 𝑦(𝑡) and input signal is 𝑢(𝑡) is said to be a transformation from a signal space 𝑋 to another signal space 𝑌 such that 𝑦(𝑡) = 𝑻(𝑢(𝑡)). 𝑻 is called an operator because the output function 𝑦(𝑡) can be viewed as being produced by an operation on the input function 𝑢(𝑡). Another equivalent way of thinking about a system is to consider the input 𝑢(𝑡) being mapped into the output 𝑦(𝑡). This viewpoint can be conceptualized as shown in the figure, where the set of all possible inputs is denoted by 𝑋 and the set of all possible outputs is denoted by 𝑌.

In terms of the operator concept, a system is said to be a many-to-one mapping, because many different inputs can result in a particular output, but a given input cannot result in more than one output. Our discussion in this text will center on systems with inputs and outputs that are functions of time.
II. Elementary Operations on Signals: Several basic operations by which new signals are formed from given signals are familiar from the algebra and calculus of functions. A short sketch of the time operations follows this list.
• Amplitude Scale and Shift: 𝑦(𝑡) = 𝑎𝑥(𝑡), 𝑦(𝑡) = 𝑥(𝑡) + 𝑏, with 𝑎, 𝑏 ∈ ℝ (or possibly complex)
• Addition: 𝑦(𝑡) = 𝑥(𝑡) + 𝑧(𝑡) and Subtraction: 𝑦(𝑡) = 𝑥(𝑡) − 𝑧(𝑡)
• Multiplication: 𝑦(𝑡) = 𝑥(𝑡)𝑧(𝑡) and Time Inversion: 𝑦(𝑡) = 𝑥(−𝑡), i.e. 𝑦(−𝑡) = 𝑥(𝑡)
• Time Scale and Shift: 𝑦(𝑡) = 𝑥(𝑎𝑡), 𝑦(𝑡) = 𝑥(𝑡 − 𝑡₀), and 𝑦(𝑡) = 𝑥(𝑎𝑡 − 𝑡₀)
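Computer Example (added sketch): To visualize the time-scale and time-shift operations, the following minimal MATLAB sketch applies them to a hypothetical test signal x(t) = exp(−t²); the signal, the scale a = 2, and the shift t0 = 1 are illustrative assumptions, not taken from the text.

clear all, clc
t = -5:0.01:5;
x = @(s) exp(-s.^2);               % hypothetical test signal x(t)
a = 2;  t0 = 1;
subplot(311), plot(t, x(t),        'linewidth', 2), grid on  % x(t)
subplot(312), plot(t, x(a*t),      'linewidth', 2), grid on  % x(at): compressed
subplot(313), plot(t, x(a*t - t0), 'linewidth', 2), grid on  % x(at - t0): compressed and shifted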

Computer Example: Write a short MATLAB program to validate 𝑦(𝑡) = 𝑥(𝑡) ± 𝑧(𝑡)

clear all, clc
t = -3:0.01:3;

% x1(t): triangular pulse on [-2, 2]
for k = 1:length(t)
    if t(k) < -2
        x1(k) = 0;
    elseif t(k) >= -2 && t(k) < 0
        x1(k) = t(k) + 2;
    elseif t(k) >= 2
        x1(k) = 0;
    else
        x1(k) = -t(k) + 2;
    end
end

% x2(t): gate of height 2 on [-2, 2]
for k = 1:length(t)
    if t(k) < -2
        x2(k) = 0;
    elseif t(k) >= -2 && t(k) <= 2
        x2(k) = 2;
    else
        x2(k) = 0;
    end
end

% sum and difference
for k = 1:length(t)
    x3(k) = x1(k) + x2(k);
    x4(k) = x1(k) - x2(k);
end

subplot(411), plot(t, x1, 'k', 'linewidth', 3), grid on
subplot(412), plot(t, x2, 'r', 'linewidth', 3), grid on
subplot(413), plot(t, x3, 'b', 'linewidth', 3), grid on
subplot(414), plot(t, x4, 'g', 'linewidth', 3), grid on

Computer Example: Write a short MATLAB code to validate $y_{1,2}(t) = (e^{2\pi i t} \pm e^{-2\pi i t})/2$

clear all, clc
t  = 1:0.01:5;
x1 = exp(2*pi*1i*t);
x2 = exp(-2*pi*1i*t);
y1 = (x1 + x2)/2;                  % = cos(2*pi*t), purely real
y2 = (x1 - x2)/2;                  % = 1i*sin(2*pi*t), purely imaginary

figure(1), plot3(real(x1), t, imag(x1)), grid on
figure(2), plot3(real(x2), t, imag(x2)), grid on
figure(3), plot3(real(y1), t, imag(y1)), grid on
figure(4), plot3(real(y2), t, imag(y2)), grid on
III. Classification of Signals: There are several classes of signals. Here we shall consider only the following classes, which are suitable for the scope of this book:

1. Real, complex, even and odd signals
2. Continuous-time and discrete-time signals (analog and digital signals)
3. Periodic and aperiodic signals
4. Energy and power signals (measuring signals)
5. Deterministic and probabilistic signals

 Even and Odd Signals: A signal is referred to as even if it is identical to its time-reversed counterpart, 𝑥(𝑡) = 𝑥(−𝑡). A signal is odd if 𝑥(𝑡) = −𝑥(−𝑡). An odd signal must be 0 at 𝑡 = 0; in other words, an odd signal passes through the origin. Any signal 𝑥(𝑡) or 𝑥[𝑛] can be expressed as a sum of two signals, one of which is even and the other odd:

Continuous-time: $x_e(t) = \{x(t) + x(-t)\}/2$ (even part of 𝑥(𝑡)), $x_o(t) = \{x(t) - x(-t)\}/2$ (odd part of 𝑥(𝑡))
Discrete-time: $x_e[n] = \{x[n] + x[-n]\}/2$ (even part of 𝑥[𝑛]), $x_o[n] = \{x[n] - x[-n]\}/2$ (odd part of 𝑥[𝑛])
Note that (do it as an exercise):

⦁ The product of two even signals or of two odd signals is an even signal.
⦁ The product of an even signal and an odd signal is an odd signal.
⦁ The derivative of an even/odd function is odd/even.
⦁ The integral of an even/odd function is an odd/even function, plus a constant.
Example: Decompose the signal shown in the figure into its odd and even parts.

Solution: (sketched in the figure; a numerical check follows)
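Computer Example (added sketch): Since the figure's waveform is not reproduced here, the following minimal MATLAB sketch carries out the even/odd decomposition for an assumed sample signal x(t) = e^(−t)u(t):

clear all, clc
t  = -5:0.01:5;
x  = exp(-t).*(t >= 0);            % assumed sample signal x(t)
xr = exp(t).*(t <= 0);             % x(-t), the time-reversed copy
xe = (x + xr)/2;                   % even part
xo = (x - xr)/2;                   % odd part
subplot(311), plot(t, x,  'k', 'linewidth', 2), grid on   % x(t)
subplot(312), plot(t, xe, 'b', 'linewidth', 2), grid on   % even part
subplot(313), plot(t, xo, 'r', 'linewidth', 2), grid on   % odd part
max(abs(xe + xo - x))              % sanity check: should be ~0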

 Periodic and Aperiodic Signals: A signal which does not repeat itself (i.e., its pattern) after a specific interval of time is called aperiodic or non-periodic. A signal that repeats its pattern over a period is called periodic. A continuous-time signal 𝑥(𝑡) is said to be periodic with period 𝑇 if there is 𝑇 ∈ ℝ⁺ for which 𝑥(𝑡 + 𝑚𝑇) = 𝑥(𝑡) for all 𝑡 ∈ ℝ and 𝑚 ∈ ℤ. The fundamental period 𝑇₀ of 𝑥(𝑡) is the smallest positive value of 𝑇 for which this equation holds.

Remark: Note that a sequence 𝑥[𝑛] obtained by uniform sampling of a periodic continuous-
time signal may not be periodic. Also, a discrete-time sinusoid is not necessarily periodic.
Examples:

 Energy and power signals: The size of any entity is a number that indicates the
largeness or strength of that entity. Generally speaking, the signal amplitude varies with
time. How can a signal that exists over a certain time interval with varying amplitude be
measured by one number that will indicate the signal size or signal strength? Such a
measure must consider not only the signal amplitude, but also its duration. In this
manner, we may consider the area under a signal 𝑓(𝑡) as a possible measure of its size,
because it takes account of not only the amplitude, but also the duration. However, this
will be a defective measure because 𝑓(𝑡) could be a large signal, yet its positive and negative areas could cancel each other, indicating a signal of small size. This difficulty can be corrected by defining the signal size as the area under 𝑓²(𝑡), which is always positive. We call this measure the signal energy $E_f$, defined (for a real signal) as

Continuous-time signal: $E_x = \int_{-\infty}^{+\infty} x^2(t)\,dt$        Discrete-time signal: $E_x = \sum_{n=-\infty}^{+\infty} x^2[n]$

This definition can be generalized to a complex-valued signal 𝑥(𝑡) as

Continuous-time signal: $E_x = \int_{-\infty}^{+\infty} |x(t)|^2\,dt$        Discrete-time signal: $E_x = \sum_{n=-\infty}^{+\infty} |x[n]|^2$

The signal energy $E_x$ must be finite for it to be a meaningful measure of the signal size. A necessary condition for the energy to be finite is that the signal amplitude goes to zero as time increases (𝑡 → ∞). For a signal 𝑥(𝑡), we define its power

Continuous-time signal: $P_x = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{+T/2} |x(t)|^2\,dt$        Discrete-time signal: $P_x = \lim_{N\to\infty}\frac{1}{2N+1}\sum_{n=-N}^{+N} |x[n]|^2$

Remark 1: Energy signals are typically time-limited or decay with time, while power signals can exist over infinite time. Aperiodic signals of finite extent are energy signals, while power signals are (almost always) periodic. The power of an energy signal is zero, and the energy of a power signal is infinite.

Remark 2: If a signal is a power signal, then it cannot be an energy signal, and vice versa; power and energy signals are mutually exclusive. A signal may be neither a power nor an energy signal, if neither of the two conditions is met. Almost all periodic functions of practical interest are power signals.
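Computer Example (added sketch): The limits above can be checked numerically. Assuming the sample signals x(t) = e^(−t)u(t) (an energy signal, E = 1/2) and y(t) = cos(t) (a power signal, P = 1/2), a minimal MATLAB sketch is:

clear all, clc
dt = 1e-4;
t  = 0:dt:50;                      % a large window standing in for the limit
x  = exp(-t);                      % energy signal e^(-t)u(t)
E  = sum(x.^2)*dt                  % ~0.5 = 1/(2a) with a = 1
y  = cos(t);                       % power signal
P  = sum(y.^2)*dt/t(end)           % ~0.5 = A^2/2 with A = 1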
 Continuous (Analog) & Discrete (Digital) Signals: A signal 𝑥(𝑡) is a continuous-time signal if 𝑡 is a continuous variable. If 𝑡 is a discrete variable, that is, if the signal is defined only at discrete times, then it is a discrete-time signal, often denoted 𝑥[𝑛], where 𝑛 is an integer. A discrete-time signal 𝑥[𝑛] may represent a phenomenon for which the independent variable is inherently discrete, such as the daily closing value of a stock price, or it may be obtained by sampling a continuous-time signal 𝑥(𝑡) at 𝑡 = 𝑛𝑇ₛ, where 𝑇ₛ is the sampling period. To convert an analog signal into a digital signal, the analog signal needs to be sampled and quantized; therefore, a digital signal is an amplitude-quantized discrete-time signal. A short sketch of sampling and quantization follows below.

A signal dependent on a continuum of values of the independent variable 𝑡 is called a continuous-time signal or, more generally, a continuous-data signal or (less frequently) an analog signal. A signal defined at, or of interest at, only discrete (distinct) instants of the independent variable 𝑡 (upon which it depends) is called a discrete-time, discrete-data, sampled-data, or digital signal.
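Computer Example (added sketch): A minimal MATLAB illustration of the analog-to-digital chain, assuming a 3 Hz sinusoid, a sampling period Ts = 0.05 s, and an 8-level uniform quantizer (all illustrative choices):

clear all, clc
Ts = 0.05;                                   % sampling period
t  = 0:0.001:1;  x = sin(2*pi*3*t);          % "analog" signal on a dense grid
n  = 0:floor(1/Ts);
xs = sin(2*pi*3*n*Ts);                       % sampled signal x[n] = x(n*Ts)
L  = 8;                                      % number of quantization levels
xq = round((xs+1)/2*(L-1))/(L-1)*2 - 1;      % uniform quantizer on [-1, 1]
plot(t, x), hold on, grid on
stem(n*Ts, xq, 'filled')                     % the resulting digital signal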

IV. Some Useful Signal Models: In the area of signals and systems, the step, impulse, ramp, and exponential functions are very useful. They not only serve as a basis for representing other signals, but their use can also simplify many aspects of signals and systems. Therefore, in this section we present definitions of the following basic test signals: the step function, the gate (window) function, the impulse function, the ramp function, the exponential function, and the sinusoidal function.
IV.I The Step Signal (Heaviside Function): Heaviside step function, or the unit step
function, usually denoted by ℍ or 𝜃 (but sometimes 𝑢, 1 or 𝟙), is a discontinuous function,
named after Oliver Heaviside (1850–1925), whose value is zero for negative argument and
one for positive argument. The function was originally developed in operational calculus for
the solution of differential equations, where it represents a signal that switches on at a
specified time and stays switched on indefinitely. Oliver Heaviside, who developed the
operational calculus as a tool in the analysis of telegraphic communications, represented
the function as 1.

Continuous-time step function: $u(t) = \begin{cases} 0, & t < 0 \\ 1, & t \ge 0 \end{cases}$        Discrete-time step function: $u[n] = \begin{cases} 0, & n < 0 \\ 1, & n \ge 0 \end{cases}$
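Computer Example (added sketch): A minimal MATLAB plot of both definitions (the grids chosen here are illustrative):

clear all, clc
t  = -2:0.001:2;  ut = double(t >= 0);       % continuous-time step, sampled densely
n  = -10:10;      un = double(n >= 0);       % discrete-time step
subplot(211), plot(t, ut, 'linewidth', 2), grid on, axis([-2 2 -0.2 1.2])
subplot(212), stem(n, un, 'filled'), grid on, axis([-10 10 -0.2 1.2])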

Existence of step functions in the real-life world: In real-life situations, a step signal can be viewed as a constant force of magnitude |𝐴| newtons applied at time zero to a certain object and held for a long time. In another situation, 𝐴𝑢(𝑡) can be a voltage of constant magnitude applied to a load resistor 𝑅 at time 𝑡 = 0.

For a smooth approximation to the step function, one can use the logistic function:

$u(t) = \lim_{\alpha\to\infty} f_\alpha(t)$, where $f_\alpha(t) = \dfrac{1}{1 + e^{-2\alpha t}}$

Notice that

$\tanh(\alpha t) = \dfrac{e^{\alpha t} - e^{-\alpha t}}{e^{\alpha t} + e^{-\alpha t}} = \dfrac{2}{1 + e^{-2\alpha t}} - 1 \;\Rightarrow\; \dfrac{1}{1 + e^{-2\alpha t}} = \dfrac{1}{2} + \dfrac{1}{2}\tanh(\alpha t) \;\Rightarrow\; u(t) = \lim_{\alpha\to+\infty}\left(\dfrac{1}{2} + \dfrac{1}{2}\tanh(\alpha t)\right)$
Often an integral representation of the Heaviside step function is useful:

Fourier: $u(t) = \lim_{\varepsilon\to 0^+} \dfrac{1}{2\pi i}\displaystyle\int_{-\infty}^{+\infty} \dfrac{e^{it\omega}}{\omega - i\varepsilon}\,d\omega$        Laplace: $u(t) = \lim_{\varepsilon\to 0^+} \dfrac{1}{2\pi i}\displaystyle\int_{\varepsilon-j\infty}^{\varepsilon+j\infty} \dfrac{e^{ts}}{s}\,ds$

The simplest definition of the Heaviside function is as the derivative of the ramp function, or in terms of the sign (signum) function:

$u(t) := \dfrac{d}{dt}\max\{t, 0\}$ for $t \neq 0$,  and  $u(t) := \dfrac{1}{2} + \dfrac{1}{2}\,\mathrm{sgn}(t)$

The Heaviside function can also be defined as the integral of the Dirac delta function, $u' = \delta$. This is sometimes written as

$u(t) := \displaystyle\int_{-\infty}^{t} \delta(x)\,dx$  or  $\delta(t) = \dfrac{du(t)}{dt}$

Later we will see the meaning & proof of these last equations.

clear all, clc
a = [10 1 0.5 0.4 0.3 0.2 0.1];    % steepness values of the approximation
x = -2:0.01:2;
for i = 1:length(a)
    % logistic approximation of u(t)
    y = 1./(1 + exp(-2*a(i)*x));
    plot(x, y, '-', 'linewidth', 3), hold on, grid on
    % equivalent tanh form: 1/2 + (1/2)tanh(a t)
    y2 = 1/2 + (1/2)*tanh(a(i)*x);
    plot(x, y2, '--', 'linewidth', 3)
    pause(0.5)
end

IV.II The Impulse Signal: In mathematics, the Dirac delta function is a generalized function or distribution introduced by the physicist Paul Dirac. It is used to model the density of an idealized point mass or point charge as a function equal to zero everywhere except at zero, and whose integral over the entire real line is equal to one. The unit impulse function 𝛿(𝑡), also known as the Dirac delta function, plays a central role in system analysis. Traditionally, 𝛿(𝑡) is often defined as the limit of a suitably chosen conventional function having unity area over an infinitesimal time interval, as shown in the next figure, and possesses the following properties (see the next pages):

$\delta(t) = \lim_{\varepsilon\to 0}\delta_\varepsilon(t)$, where $\delta_\varepsilon(t) = \begin{cases} 1/\varepsilon, & 0 < t < \varepsilon \\ 0, & \text{elsewhere} \end{cases}$;  therefore $\delta(t) = \begin{cases} \infty, & t = 0 \\ 0, & \text{elsewhere} \end{cases}$

Let us compute the area under the curve 𝛿(𝑡):

$\text{Area} = \int_{-\infty}^{+\infty}\delta(t)\,dt = \lim_{\varepsilon\to 0}\int_{-\infty}^{+\infty}\delta_\varepsilon(t)\,dt = \varepsilon\times\frac{1}{\varepsilon} = 1$

where $\varepsilon\times\frac{1}{\varepsilon}$ = width × height. Therefore $\int_{-\infty}^{+\infty}\delta(t)\,dt = 1$.

A good smooth approximation of the Dirac delta function 𝛿(𝑡) is given by the following pulse functions:

$\delta(t - t_0) = \lim_{\varepsilon\to 0}\frac{1}{\pi}\frac{\sin((t-t_0)/\varepsilon)}{t-t_0} = \lim_{\alpha\to 0}\frac{1}{\alpha\sqrt{2\pi}}\exp\!\left(-\frac{(t-t_0)^2}{2\alpha^2}\right)$

Other pulses, such as the exponential pulse or the triangular pulse, may also be used in impulse approximation. The important feature of the unit impulse function is not its shape but the fact that its effective duration (pulse width) approaches zero while its area remains at unity.

■ $\delta(t) = \lim_{\alpha\to\infty}\alpha e^{-\alpha t}u(t)$, where $\text{Area} = \lim_{\alpha\to\infty}\int_0^{+\infty}\alpha e^{-\alpha t}\,dt = 1$

■ $\delta(t) = \frac{d}{dt}\left(\frac{1}{2}\mathrm{sgn}(t)\right) = \lim_{a\to\infty}\frac{d}{dt}\left(\frac{1}{2}\tanh(at)\right)$, where $\text{Area} = \lim_{a\to\infty}\int_{-\infty}^{+\infty}\frac{a}{2}\,\mathrm{sech}^2(at)\,dt = 1$

$\delta(t - t_0) = \begin{cases} \lim_{\alpha\to\infty}\dfrac{1}{\pi}\dfrac{\sin(\alpha(t-t_0))}{t-t_0} & \text{(Dirichlet)} \\[4pt] \lim_{\alpha\to 0}\dfrac{1}{|\alpha|\sqrt{2\pi}}\exp\!\left(-\dfrac{(t-t_0)^2}{2\alpha^2}\right) & \text{(Gaussian)} \\[4pt] \lim_{\alpha\to 0}\dfrac{1}{\pi}\dfrac{\alpha}{(t-t_0)^2+\alpha^2} & \text{(Lorentzian)} \\[4pt] \lim_{T\to\infty}\dfrac{1}{2\pi}\displaystyle\int_{-T}^{T}\exp(i\omega(t-t_0))\,d\omega & \text{(Fourier I)} \\[4pt] \lim_{T\to\infty}\dfrac{1}{\pi}\displaystyle\int_{0}^{T}\cos(\omega(t-t_0))\,d\omega & \text{(Fourier II)} \end{cases}$
Computer Example: Write a short program to plot a smooth approximation of the impulse signal using the Gaussian distribution.

clear all, clc
a = [1 0.8 0.5 0.3 0.1 0.05];      % decreasing widths
x = -4:0.01:4;
for i = 1:length(a)
    % Gaussian pulse of width a(i); its area stays 1 as a(i) -> 0
    y = (1/(abs(a(i))*sqrt(2*pi))) * exp(-x.^2/(2*a(i)^2));
    plot(x, y, '-', 'linewidth', 3), hold on, grid on
    pause(0.5)
end

Unit Impulse as a Generalized Function: The definition of the unit impulse function given above is not mathematically rigorous, which leads to serious difficulties. First, it does not define a unique function: for example, it can be shown that 𝛿(𝑡) + 𝑑𝛿/𝑑𝑡 also satisfies the definition. Moreover, 𝛿(𝑡) is not even a true function in the ordinary sense. An ordinary function is specified by its values for all time 𝑡. The impulse function is zero everywhere except at 𝑡 = 0, and at this, the only interesting part of its range, it is undefined. These difficulties are resolved by defining the impulse as a generalized function rather than an ordinary function. A generalized function is defined by its effect on other functions instead of by its value at every instant of time.

In this approach the impulse function is defined by the sampling property (its effect on other functions, written 〈𝜙(𝑡), 𝛿(𝑡)〉). We say nothing about what the impulse function is or what it looks like. Instead, the impulse function is defined in terms of its effect on a test function 𝜙(𝑡). We define a unit impulse as a function for which the area under its product with a function 𝜙(𝑡) is equal to the value of 𝜙(𝑡) at the instant where the impulse is located. It is assumed that 𝜙(𝑡) is continuous at the location of the impulse:

$\langle\phi(t), \delta(t)\rangle = \int_{-\infty}^{+\infty}\phi(t)\,\delta(t)\,dt = \phi(0)\int_{-\infty}^{+\infty}\delta(t)\,dt = \phi(0)$

Recall that the sampling property is the consequence of the classical (Dirac) definition of
impulse. In contrast, the sampling property defines the impulse function in the generalized
function approach.

We now present an interesting application of the generalized-function definition of an impulse. Because the unit step function 𝑢(𝑡) is discontinuous at 𝑡 = 0, its derivative 𝑑𝑢/𝑑𝑡 does not exist at 𝑡 = 0 in the ordinary sense. We now show that this derivative does exist in the generalized sense, and it is, in fact, 𝛿(𝑡). As a proof, let us evaluate the integral of (𝑑𝑢/𝑑𝑡)𝜙(𝑡), using integration by parts:

$\int_{-\infty}^{+\infty}\frac{du}{dt}\phi(t)\,dt = u(t)\phi(t)\Big|_{-\infty}^{+\infty} - \int_{-\infty}^{+\infty}u(t)\frac{d\phi(t)}{dt}\,dt = \phi(\infty) - \int_{0}^{+\infty}\frac{d\phi(t)}{dt}\,dt = \phi(\infty) - \phi(t)\Big|_{0}^{\infty} = \phi(0)$

This result shows that 𝑑𝑢/𝑑𝑡 satisfies the sampling property of 𝛿(𝑡). Therefore it is an impulse 𝛿(𝑡) in the generalized sense; that is, 𝑑𝑢/𝑑𝑡 = 𝛿(𝑡). Consequently

$\int_{-\infty}^{t}\delta(\tau)\,d\tau = u(t) = \begin{cases} 0, & t < 0 \\ 1, & t \ge 0 \end{cases}$
Practical derivative of 𝑢(𝑡) by the graphical approach:

$u_\tau(t) = \begin{cases} 0, & t < -\tau/2 \\ \frac{1}{\tau}t + \frac{1}{2}, & -\tau/2 \le t \le \tau/2 \\ 1, & t > \tau/2 \end{cases} \;\Longrightarrow\; u(t) = \lim_{\tau\to 0}u_\tau(t) = \begin{cases} 0, & t < 0 \\ \frac{1}{2}, & t = 0 \\ 1, & t > 0 \end{cases}$

$\frac{du(t)}{dt} = \lim_{\tau\to 0}\frac{du_\tau(t)}{dt} = \lim_{\tau\to 0}\begin{cases} 0, & t < -\tau/2 \\ \frac{1}{\tau}, & -\tau/2 < t < \tau/2 \\ 0, & t > \tau/2 \end{cases} = \delta(t) = \begin{cases} \infty, & t = 0 \\ 0, & t \neq 0 \end{cases}$
Remarks: This derivative of 𝑢(𝑡) is called the operational derivative or distributional derivative. Derivatives of the impulse function can also be defined as generalized functions.

Properties of the Dirac function (continuous-time case):

❶ $\int_{-\infty}^{+\infty} x(t)\,\delta(t)\,dt = x(0)$
❷ $\int_{-\infty}^{+\infty} x(t)\,\delta(t-\tau)\,dt = x(\tau)$
❸ $x(t)\,\delta(t) = x(0)\,\delta(t)$
❹ $x(t)\,\delta(t-\tau) = x(\tau)\,\delta(t-\tau)$
❺ $\delta(t) = \delta(-t)$ (even function)
❻ $\delta(at) = \frac{1}{|a|}\delta(t)$
❼ $\int_{-\infty}^{+\infty} e^{-st}\,\delta(t)\,dt = 1$
❽ $\int_{-\infty}^{+\infty} \delta(t)\,dt = 1$
❾ $t\,\delta'(t) = -\delta(t)$
❿ $\frac{t^n}{n!}\,\delta^{(n)}(t) = (-1)^n\,\delta(t)$
⓫ $\int_{-\infty}^{+\infty} (-1)^n\,\frac{t^n}{n!}\,\delta^{(n)}(t)\,dt = 1$
⓬ $\int_{-\infty}^{+\infty} t^n\,\delta^{(k)}(t)\,dt = \left[(-1)^k\,\frac{d^k}{dt^k}(t^n)\right]_{t=0}$
⓭ $\int_{-\infty}^{+\infty} x(t)\,\delta^{(n)}(t)\,dt = (-1)^n\int_{-\infty}^{+\infty} x^{(n)}(t)\,\delta(t)\,dt$
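Computer Example (added sketch): Property ❷ can be sanity-checked numerically by replacing 𝛿(𝑡 − 𝜏) with a narrow Gaussian pulse; the test signal, 𝜏, and the pulse width below are illustrative assumptions:

clear all, clc
dt   = 1e-4;  t = -5:dt:5;
tau  = 1.3;   w = 0.01;                          % impulse location and pulse width
x    = cos(t) + t.^2;                            % arbitrary smooth test signal
dEps = exp(-(t-tau).^2/(2*w^2))/(w*sqrt(2*pi));  % narrow Gaussian ~ delta(t - tau)
approx = sum(x.*dEps)*dt                         % numerical integral ~ x(tau)
exact  = cos(tau) + tau^2                        % sampling-property prediction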

Exercise 01: Let us demonstrate the validity of the last (thirteenth) property. We know that the first derivative of 𝑥(𝑡)𝛿(𝑡) is given by $[x(t)\delta(t)]' = \dot x(t)\delta(t) + x(t)\dot\delta(t)$. Integrating both sides of this equation, we get

$\int_{-\infty}^{+\infty}\dot\delta(t)x(t)\,dt = \big[x(t)\delta(t)\big]_{-\infty}^{+\infty} - \int_{-\infty}^{+\infty}\dot x(t)\delta(t)\,dt$

We know that $\big[x(t)\delta(t)\big]_{-\infty}^{+\infty} = 0$; therefore $\int_{-\infty}^{+\infty}\dot\delta(t)x(t)\,dt = -\int_{-\infty}^{+\infty}\dot x(t)\delta(t)\,dt$.

Now, take the second derivative of 𝑥(𝑡)𝛿(𝑡): $[x(t)\delta(t)]'' = \ddot x(t)\delta(t) + 2\dot x(t)\dot\delta(t) + x(t)\ddot\delta(t)$. Integrating both sides, we get

$\int_{-\infty}^{+\infty}\ddot\delta(t)x(t)\,dt = \left[\frac{d^2}{dt^2}\big(x(t)\delta(t)\big)\right]_{-\infty}^{+\infty} - \left(\int_{-\infty}^{+\infty}\ddot x(t)\delta(t) + 2\dot x(t)\dot\delta(t)\,dt\right)$

We know from the previous derivation that

$\int_{-\infty}^{+\infty}\dot x(t)\dot\delta(t)\,dt = -\int_{-\infty}^{+\infty}\ddot x(t)\delta(t)\,dt$  and  $\left[\frac{d^2}{dt^2}\big(x(t)\delta(t)\big)\right]_{-\infty}^{+\infty} = 0$

We obtain as a final result $\int_{-\infty}^{+\infty}\ddot\delta(t)x(t)\,dt = \int_{-\infty}^{+\infty}\ddot x(t)\delta(t)\,dt$. Following the same procedure we can generalize to

$\int_{-\infty}^{+\infty}x(t)\,\delta^{(n)}(t)\,dt = (-1)^n\int_{-\infty}^{+\infty}x^{(n)}(t)\,\delta(t)\,dt$

As a result, $\int_{-\infty}^{+\infty}x(t)\,\delta^{(n)}(t)\,dt = (-1)^n x^{(n)}(0)$ and

$\int_{-\infty}^{+\infty}x(t)\,\delta^{(n)}(t-\alpha)\,dt = \left[(-1)^n\frac{d^n}{dt^n}x(t)\right]_{t=\alpha}$

Exercise 02: Demonstrate that $t\,\delta'(t) = -\delta(t)$. To prove it, take the derivative of 𝑡𝛿(𝑡), which is $d(t\delta(t))/dt = \delta(t) + t\,\delta'(t)$; but $t\,\delta(t) = 0$, hence $t\,\delta'(t) = -\delta(t)$.

Exercise 03: Prove that 𝛿(𝑡) is an even function and 𝛿′(𝑡) is an odd function.
𝟙. Let us start with 𝛿(𝑡) being even; to do this we propose to prove $|a|\,\delta(at) = \delta(t)$:

$\int_{-\infty}^{+\infty}\delta(at)\,x(t)\,dt = \frac{1}{|a|}\int_{-\infty}^{+\infty}\delta(\xi)\,x\!\left(\frac{\xi}{a}\right)d\xi = \frac{1}{|a|}\,x(0)$

Substituting $x(0) = \int_{-\infty}^{+\infty}\delta(t)\,x(t)\,dt$ gives $\int_{-\infty}^{+\infty}\delta(at)\,x(t)\,dt = \frac{1}{|a|}\int_{-\infty}^{+\infty}\delta(t)\,x(t)\,dt$, i.e. $\delta(at) = \frac{1}{|a|}\delta(t)$.

Taking the magnitude of 𝑎 is necessary, which is clear if the substitution is carried out for the two signs of 𝑎. $|a|\,\delta(at) = \delta(t)$ with 𝑎 = −1 implies that $\delta(-t) = \delta(t)$.

𝟚. To prove 𝛿′(𝑡) is an odd function, we use $\int_{-\infty}^{+\infty}\dot\delta(t)\,x(t)\,dt = -\int_{-\infty}^{+\infty}\dot x(t)\,\delta(t)\,dt$. With $x(t) = 1$, $\int_{-\infty}^{+\infty}\dot\delta(t)\,dt = 0$, so 𝛿′(𝑡) is an odd function.
Exercise 04: Prove that $\frac{t^n}{n!}\,\delta^{(n)}(t) = (-1)^n\,\delta(t)$. In Exercise 01 we obtained

$\int_{-\infty}^{+\infty}x(t)\,\delta^{(n)}(t)\,dt = (-1)^n\int_{-\infty}^{+\infty}x^{(n)}(t)\,\delta(t)\,dt$

If we take $x(t) = t^n$, we get $\int_{-\infty}^{+\infty}t^n\,\delta^{(n)}(t)\,dt = (-1)^n\int_{-\infty}^{+\infty}n!\,\delta(t)\,dt$, which ends the proof.

As a consequence we deduce that $\int_{-\infty}^{+\infty}(-1)^n\,\frac{t^n}{n!}\,\delta^{(n)}(t)\,dt = 1$

Exercise 05: Prove that $\int_{-\infty}^{+\infty}\delta(t-a)\,\delta(t-b)\,dt = \delta(a-b)$
−∞

Existence of the Dirac function in the real-life world: Again, in real life, this signal can represent a situation where a person hits an object with a hammer with a force of 𝐴 newtons for a very short period of time (picoseconds). We sometimes refer to this kind of signal as a shock. In another real-life situation, the impulse signal can be as simple as closing and opening an electrical switch in a very short time. Another situation, where a spring-carried mass is hit upward, can also be seen as an impulsive force. You may realize that it is impossible to generate a pure impulse signal of zero duration and infinite magnitude. To create an approximation of an impulse, we can generate a pulse signal whose duration is very short compared to the response time of the system.
Properties of the Dirac function (discrete-time case): In this case $\delta[n] = \begin{cases} 1, & n = 0 \\ 0, & n \neq 0 \end{cases}$

❶ $\sum_{k=-\infty}^{+\infty} x[k]\,\delta[k] = x[0]$
❷ $\sum_{k=-\infty}^{+\infty} x[k]\,\delta[k-k_0] = x[k_0]$
❸ $x[k]\,\delta[k] = x[0]\,\delta[k]$
❹ $x[k]\,\delta[k-k_0] = x[k_0]\,\delta[k-k_0]$
❺ $\delta[k] = \delta[-k]$
❻ $u[n] = \sum_{k=0}^{+\infty} \delta[n-k]$
❼ $\delta[n] = u[n] - u[n-1]$
❽ $\sum_{k=-\infty}^{+\infty} \delta[n-k] = 1 = u[n] + u[-n] - \delta[n]$

Exercise: prove that ❶ $2\delta[n] = -1 + 2u[n] + (u[-n] - u[n-1])$ and ❷ $u[n] = \sum_{m=-\infty}^{n}\delta[m]$
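Computer Example (added sketch): Property ❼ and the running-sum identity in the exercise can be verified directly in MATLAB on a finite grid (the grid is an illustrative assumption):

clear all, clc
n = -10:10;
u = double(n >= 0);                % unit step u[n]
d = double(n == 0);                % unit impulse delta[n]
isequal(d, u - [0 u(1:end-1)])     % delta[n] = u[n] - u[n-1]  (first difference)
isequal(u, cumsum(d))              % u[n] = running sum of delta[m] up to n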

IV.III The Ramp Signal: The ramp function is a unary real function whose graph is shaped like a ramp. It can be expressed by numerous definitions, for example "0 for negative inputs, output equals input for non-negative inputs". The term "ramp" can also be used for other functions obtained by scaling and shifting; the function in this module is the unit ramp function (slope 1, starting at 0). This function has numerous applications in mathematics and engineering, and goes by various names, depending on the context.

Properties of the ramp function:

▪ $R(t) := \max\{t, 0\} = \begin{cases} t, & t \ge 0 \\ 0, & t < 0 \end{cases} = \dfrac{t + |t|}{2} = t\,u(t) = \displaystyle\int_{-\infty}^{t} u(\xi)\,d\xi$

▪ $\dfrac{dR(t)}{dt} = u(t)$,  ▪ $\dfrac{d^2R(t)}{dt^2} = \delta(t)$,  ▪ $R(R(t)) = R(t)$,  ▪ $R(t) = u(t) \star u(t)$
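Computer Example (added sketch): A minimal MATLAB check that the numerical derivative of 𝑅(𝑡) recovers 𝑢(𝑡) (the grid and step size are illustrative):

clear all, clc
dt = 0.01;  t = -3:dt:3;
R  = max(t, 0);                    % unit ramp R(t) = t*u(t)
du = diff(R)/dt;                   % ~ u(t) away from t = 0
subplot(211), plot(t, R, 'linewidth', 2), grid on
subplot(212), plot(t(1:end-1), du, 'linewidth', 2), grid on, axis([-3 3 -0.2 1.2])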
Existence of ramp functions in the real-life world: In real-life situations this signal can be viewed as a signal that increases linearly with time. An example is where a person starts applying a force to an object at time 𝑡 = 0 and keeps pressing the object with increasing force for a long time, the rate of increase in the applied force being constant. Consider another situation in which a radar system, an anti-aircraft gun, and an incoming jet interact. The radar antenna can provide an angular-position input, and the motion of the jet forces this angle to change uniformly with time. This produces a ramp input signal to the anti-aircraft gun, since it has to track the jet.

IV.IV The Gate-Signal (П-Signal): The rectangular function (also known as the rectangle function, rect function, Pi function, gate function, unit pulse, or the normalized boxcar function) is defined as

$\mathrm{rect}(t) := \Pi(t) = \begin{cases} 0, & |t| > \frac{1}{2} \\ 1/2, & |t| = \frac{1}{2} \\ 1, & |t| < \frac{1}{2} \end{cases}$

The pulse function may also be expressed as a limit of a rational function and/or in terms of step functions:

$\mathrm{rect}(t) := \lim_{n\to\infty,\ n\in\mathbb{Z}} \frac{1}{1 + (2t)^{2n}} := u(t + 0.5) - u(t - 0.5)$
Remark: The unit gate function is usually used to zero all values of another function,
outside a certain time interval.

Existence of pulses in the real-life world: Pulses from a controlled switching circuit often approximate the form of a rectangular or "square" pulse. In a pulse train, such as from a digital clock circuit, the waveform is repeated at regular intervals; a single complete pulse cycle is sufficient to characterize such a regular, repetitive train. Pulses also exist in the switching actions of electrical circuitry, whether isolated or repetitive (as a pulse train). Electric motors can create a train of pulses as the internal electrical contacts make and break connections while the armature rotates; gasoline engine ignition systems create a train of pulses as the spark plugs are energized or fired; and the continual switching actions of digital electronic circuitry do the same.

Computer Example: Write a short program to plot a smooth approximation of the rectangular signal using the rational function.

clear all, clc
n = [1 2 3 4 5 10];                % exponents: larger n gives sharper edges
x = -2:0.01:2;
for i = 1:length(n)
    % rational approximation of rect(t)
    y = 1./(1 + (2*x).^(2*n(i)));
    plot(x, y, '-', 'linewidth', 3), hold on, grid on
    pause(0.5)
end

Remark: Pulses are typically characterized by:

■ The type of energy (radiated, electric, magnetic, or conducted).
■ The range or spectrum of frequencies present.
■ The pulse waveform: shape, duration, and amplitude.

Such a pulse may originate naturally or be man-made, and can occur as a radiated, electric, or magnetic field or as a conducted electric current, depending on the source.

IV.V The Sign Signal: In mathematics, the sign function or signum function (from signum, Latin for "sign") is an odd mathematical function that extracts the sign of a real number. In mathematical expressions the sign function is often represented as sgn. The signum function of a real number 𝑡 is defined as follows:

$\mathrm{sgn}(t) = \begin{cases} 1, & t > 0 \\ 0, & t = 0 \\ -1, & t < 0 \end{cases}$  Alternatively: $\mathrm{sgn}(t) = \dfrac{d|t|}{dt},\; t \neq 0$
Computer Example: Write a short program to plot a smooth approximation of the signum signal using the tanh(𝑎𝑥) function.

clear all, clc
a = [1 2 3 4 5 10];                % steepness of the tanh approximation
x = -2:0.01:2;
for i = 1:length(a)
    y = tanh(a(i)*x);              % sgn(t) ~ tanh(a t), sharper as a grows
    plot(x, y, '-', 'linewidth', 3), hold on, grid on, pause(0.5)
end
e = [1 .8 .5 .4 .2 .1];            % widths of the algebraic approximation
for i = 1:length(e)
    y2 = x./sqrt(x.^2 + e(i)^2);   % sgn(t) ~ t/sqrt(t^2 + e^2), sharper as e -> 0
    plot(x, y2, '-', 'linewidth', 3), hold on, grid on, pause(0.5)
end

Properties of the sign function:

 Any real number can be expressed as the product of its absolute value and its sign function: $t = |t|\,\mathrm{sgn}(t)$; we can also ascertain that $\mathrm{sgn}(t^n) = (\mathrm{sgn}(t))^n$.
 It follows that whenever 𝑡 is not equal to 0 we have $\mathrm{sgn}(t) = \dfrac{t}{|t|} = \dfrac{|t|}{t}$.
 Similarly, for any real number 𝑡, $|t| = \mathrm{sgn}(t)\,t$.
 The signum function is differentiable with derivative 0 everywhere except at 0. It is not differentiable at 0 in the ordinary sense, but under the generalized notion of differentiation in distribution theory, the derivative of the signum function is two times the Dirac delta function, which can be demonstrated using the identity
$\mathrm{sgn}(t) = 2u(t) - 1 \;\Longrightarrow\; \dfrac{d}{dt}\mathrm{sgn}(t) = 2\,\dfrac{d}{dt}u(t) = 2\delta(t)$
 For 𝑘 ≫ 1, a smooth approximation of the sign function is $\mathrm{sgn}(t) \approx \tanh(kt)$; another approximation is $\mathrm{sgn}(t) \approx t/\sqrt{t^2 + \varepsilon^2}$, which gets sharper as 𝜀 → 0.

IV.VI The Exponential Signal: The exponential function is the function $f(t) = Ae^{at}$, and its graphical representation is shown in the figure.

Remark: All functions presented in this section can be expressed in terms of exponential functions or derived from the exponential function, a fact which makes the exponential function very interesting.

Furthermore, a periodic function can be expressed as a linear combination of exponential functions (Fourier series). Moreover, it is worth mentioning that the exponential function is used to describe many physical phenomena, such as the response of a system and the radiation of nuclear isotopes.

The Complex Exponential Signal: The continuous-time complex exponential signal is given by $x(t) = Ae^{j\omega_0 t} = A\cos(\omega_0 t) + jA\sin(\omega_0 t)$, which is a periodic signal with fundamental period $T_0 = 2\pi/\omega_0$; hence 𝑥(𝑡) is periodic for every 𝜔₀ ∈ [0, ∞). But what about the discrete-time complex exponential $x[n] = Ae^{j\Omega_0 n}$?

$x[n+N] = Ae^{j\Omega_0(n+N)} = Ae^{j\Omega_0 n}e^{j\Omega_0 N}$

To check the periodicity, let us look for $x[n+N] = x[n]$:

$x[n+N] = x[n] \;\Longrightarrow\; Ae^{j\Omega_0 n}e^{j\Omega_0 N} = Ae^{j\Omega_0 n} \;\Longrightarrow\; e^{j\Omega_0 N} = 1 \;\Longrightarrow\; \Omega_0 N = 2\pi k$

𝑥[𝑛] is a periodic signal if and only if $\Omega_0/2\pi = k/N$ is a rational number.

Exercise:
$x[n] = e^{j\frac{6\pi}{7}n} \Rightarrow \Omega_0 = \frac{6\pi}{7} \Rightarrow \frac{\Omega_0}{2\pi} = \frac{3}{7} \Rightarrow x[n]$ is a periodic signal.
$x[n] = e^{j\sqrt{2}\,n} \Rightarrow \frac{\Omega_0}{2\pi} = \frac{\sqrt{2}}{2\pi}$ is not a rational number $\Rightarrow x[n]$ is a non-periodic signal.
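Computer Example (added sketch): The two cases in the exercise can be checked numerically by comparing 𝑥[𝑛] with 𝑥[𝑛 + 7] (the candidate period 𝑁 = 7 and the grid are illustrative):

clear all, clc
n = 0:50;
x = exp(1j*(6*pi/7)*n);            % Omega0/(2*pi) = 3/7, so N = 7
max(abs(x(8:end) - x(1:end-7)))    % ~0: x[n+7] = x[n]
y = exp(1j*sqrt(2)*n);             % irrational Omega0/(2*pi)
max(abs(y(8:end) - y(1:end-7)))    % clearly nonzero: not periodic with N = 7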
Remark 01: In the discrete-time case, all signals whose frequencies are separated by a multiple of 2𝜋 are identical.
Remark 02: In the continuous-time case, if 𝑥(𝑡) is periodic then it can be represented by a sum of complex exponential signals (the Fourier series $x(t) = \sum_k a_k e^{j\omega_0 k t}$).

clear all, clc
y = 0;
t = 0:0.01:4*pi;
Nmax = 8;
% partial sums of the square-wave Fourier series: odd harmonics sin((2k+1)t)/(2k+1)
for k = 0:Nmax
    a = 2*k + 1;
    y = y + sin(a*t)/a;
    plot(t, y, '-', 'linewidth', 3)
    hold on, grid on, pause(0.5)
end

If the continuous-time signal 𝑥(𝑡) is periodic, then it can be written as a linear combination of complex exponentials $x(t) = \sum_k a_k e^{j\omega_0 k t}$, such that $\boldsymbol{\beta}_1 = \{\phi_k(t) = e^{j\omega_0 k t},\ k = 0, \pm1, \pm2, \dots\}$ is a basis. In the language of Hilbert spaces, the set of functions $\phi_k(t)$ is an orthonormal basis for the space $L^2$ of square-integrable functions on $[-\pi, \pi]$. In the case of discrete-time signals the basis is $\boldsymbol{\beta}_2 = \{\phi_k[n] = e^{jk(2\pi/N)n},\ k = 0, \pm1, \pm2, \dots\}$; let us see what happens if we look at $\phi_{k+N}[n] = e^{j(k+N)(2\pi/N)n} = e^{jk(2\pi/N)n} = \phi_k[n]$. The equality $\phi_{k+N}[n] = \phi_k[n]$ tells us that there exist only 𝑁 distinct signals to form a basis; in other words, the basis of discrete-time signals is $\boldsymbol{\beta}_2 = \{\phi_1[n], \phi_2[n], \dots, \phi_N[n]\}$.
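Computer Example (added sketch): The orthonormality of the discrete-time basis can be checked directly: for $\phi_k[n] = e^{jk(2\pi/N)n}$ on one period, $\frac{1}{N}\sum_n \phi_k[n]\overline{\phi_m[n]}$ is 1 when 𝑘 = 𝑚 and 0 otherwise. 𝑁 = 8 and the indices below are illustrative choices:

clear all, clc
N   = 8;  n = 0:N-1;
phi = @(k) exp(1j*k*(2*pi/N)*n);   % k-th basis signal
abs(sum(phi(3).*conj(phi(3))))/N   % = 1  (k = m)
abs(sum(phi(3).*conj(phi(5))))/N   % ~ 0  (k ~= m)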

Remark 03: From the above study we notice that continuous-time periodic signals live in an infinite-dimensional space, while the space of discrete-time periodic signals is finite-dimensional:

dim(𝜷₁) = maximum number of linearly independent signals = ∞
dim(𝜷₂) = maximum number of linearly independent signals = 𝑁
IV.VII The Sinusoidal Signals: A sine wave or sinusoid is a mathematical curve that describes a smooth periodic oscillation. A sine wave is a continuous wave. It is named after the sine function, of which it is the graph. It occurs often in pure and applied mathematics, as well as physics, engineering, signal processing, and many other fields. Its most basic form as a function of time (𝑡) is $y(t) = A\sin(\omega t + \varphi)$, where:
 𝐴 = the amplitude, the peak deviation of the function from zero;
 𝑓 = the ordinary frequency, the number of oscillations (cycles) that occur each second of time;
 𝜔 = 2𝜋𝑓, the angular frequency, the rate of change of the function argument in units of radians per second;
 𝜑 = the phase, which specifies (in radians) where in its cycle the oscillation is at 𝑡 = 0.

Computer Example: Sinusoidal signals for both continuous time and discrete time will become important building blocks for more general signals, and the representation using sinusoidal signals leads to a very powerful set of ideas for representing signals and for analyzing an important class of systems. As an application of this type of function, consider the Fourier series, which states that "any periodic signal can be decomposed as an infinite linear combination of sinusoidal signals". In this example we plot a periodic pulse-like waveform using a Fourier series approximation (the coefficients $4A(1-(-1)^k)/(\pi^2 k^2)$ used below are those of a triangular wave).

clear all, clc
A = 4;
y = A;                             % DC term
t = 0:0.05:4*pi;
Nmax = 10;
for k = 1:Nmax
    % Fourier coefficients; nonzero only for odd k
    a = 4*A*(1-(-1)^k)/((pi^2)*(k^2));
    y = y + a*cos(k*t);
end
plot(t, y, '-', 'linewidth', 3)
hold on, grid on

Some useful analytic formulas for the basic test signals:

$\delta(t)$ (Dirac signal): ■ $\lim_{a\to\infty}\frac{1}{\pi}\frac{\sin(at)}{t}$, ■ $\lim_{a\to 0}\frac{1}{|a|\sqrt{2\pi}}e^{-\frac{1}{2}(t/a)^2}$, ■ $\lim_{a\to\infty}\frac{d}{dt}\left(\frac{1}{2}\tanh(at)\right)$

$u(t)$ (Step signal): ■ $\lim_{a\to\infty}\frac{1}{1+e^{-2at}}$, ■ $\frac{d}{dt}\left(\frac{t+|t|}{2}\right)$, ■ $\lim_{a\to\infty}\left(\frac{1}{2}+\frac{1}{2}\tanh(at)\right)$

$r(t)$ (Ramp signal): ■ $\max\{0,t\}$, ■ $\frac{t+|t|}{2}$, ■ $\int_{-\infty}^{t}u(\vartheta)\,d\vartheta$

$\mathrm{sgn}(t)$ (Signum signal): ■ $\lim_{\varepsilon\to 0}\frac{t}{\sqrt{t^2+\varepsilon^2}}$, ■ $\frac{t}{|t|}$, ■ $2u(t)-1$, ■ $\frac{d|t|}{dt}$

$\Pi(t)$ (Gate signal): ■ $\lim_{n\to\infty}\frac{1}{1+(2t)^{2n}}$, ■ $u(t+T)-u(t-T)$

$\Lambda(t)$ (Triangle signal): ■ $A\left(1-\frac{|t|}{T}\right)$, $0 \le |t| \le T$
The useful basic test signals (discrete-time):

$R[n] = \begin{cases} n, & n \ge 0 \\ 0, & n < 0 \end{cases}$   $u[n] = \begin{cases} 1, & n \ge 0 \\ 0, & n < 0 \end{cases}$   $\mathrm{sgn}[n] = \begin{cases} 1, & n > 0 \\ 0, & n = 0 \\ -1, & n < 0 \end{cases}$   $\delta[n] = \begin{cases} 1, & n = 0 \\ 0, & n \neq 0 \end{cases}$

Solved Problems:
Exercise 1: Compute the following integrals:

𝟏. $I = \int_{1}^{3/2}\sin(5t-\theta)\,\delta\!\left(t-\tfrac{1}{2}\right)dt$    𝟐. $I = \int_{-\infty}^{+\infty}e^{-t}\,\delta'(t)\,dt$
𝟑. $I = \int_{1}^{2}(3t^2+1)\,\delta(t)\,dt$    𝟒. $I = \int_{-\infty}^{+\infty}\left(t^2+\cos(\pi t)\right)\delta(t-1)\,dt$

Ans:
𝟏. $I = 0$, because $t = \tfrac{1}{2} \notin [1, \tfrac{3}{2}]$
𝟐. $I = \int_{-\infty}^{+\infty}e^{-t}\,\delta'(t)\,dt = -\int_{-\infty}^{+\infty}\frac{d}{dt}(e^{-t})\,\delta(t)\,dt = e^0 = 1$
𝟑. $I = 0$, because $t = 0 \notin [1, 2]$
𝟒. $I = 1 + \cos(\pi) = 0$

Exercise 2: Find and sketch the first derivatives of
𝟏) $x(t) = u(t) - u(t-a)$, $a > 0$    𝟐) $x(t) = \mathrm{sgn}(t) = \begin{cases} 1, & t > 0 \\ -1, & t < 0 \end{cases}$
𝟑) $x(t) = t\,(u(t) - u(t-a))$, $a > 0$

Ans:
𝟏) $\dfrac{dx}{dt} = \delta(t) - \delta(t-a)$    𝟐) $\mathrm{sgn}(t) = 2u(t) - 1 \Rightarrow \dfrac{dx}{dt} = 2\delta(t)$
𝟑) $\dfrac{dx}{dt} = [u(t) - u(t-a)] - a\,\delta(t-a)$

Exercise 3: A discrete-time signal 𝑥[𝑛] is shown in the figure. Sketch and label each of the following signals:

(𝒂) $x[n]\,u[1-n]$;    (𝒃) $x[n]\,\{u[n+2] - u[n]\}$;    (𝒄) $x[n]\,\delta[n-1]$
Exercise 4: Determine whether the following signals are energy signals, power signals, or neither.

𝒂) $x(t) = e^{-at}u(t)$, $a > 0$    𝒃) $x(t) = A\cos(\omega t + \theta)$
𝒄) $x(t) = t\,u(t)$    𝒅) $x[n] = (-0.5)^n u[n]$
𝒆) $x[n] = u[n]$    𝒇) $x[n] = 2e^{j3n}$
𝒈) $x(t) = e^{j2t} + e^{j3t}$    𝒉) $x(t) = \delta(t+2) - \delta(t-2)$

Ans:
𝒂) $E = \int_{-\infty}^{\infty}\left|e^{-at}u(t)\right|^2 dt = \int_{0}^{\infty}e^{-2at}\,dt = \frac{1}{2a} < \infty \Rightarrow$ energy signal.

𝒃) The sinusoidal signal $x(t) = A\cos(\omega t + \theta)$ is periodic with $T = 2\pi/\omega$, so the average power is given by

$P = \frac{1}{T}\int_0^T A^2\cos^2(\omega t + \theta)\,dt = \frac{\omega}{2\pi}\int_0^{2\pi/\omega} A^2\cos^2(\omega t + \theta)\,dt = \frac{A^2\omega}{2\pi}\int_0^{2\pi/\omega}\frac{1}{2}\left(1 + \cos(2\omega t + 2\theta)\right)dt = \frac{A^2}{2} < \infty \Rightarrow$ power signal.

𝒄) $E = \lim_{T\to\infty}\int_{-T/2}^{T/2}|t\,u(t)|^2\,dt = \lim_{T\to\infty}\int_0^{T/2}t^2\,dt = \lim_{T\to\infty}\frac{(T/2)^3}{3} = \infty$

$P = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2}|t\,u(t)|^2\,dt = \lim_{T\to\infty}\frac{1}{T}\int_0^{T/2}t^2\,dt = \lim_{T\to\infty}\frac{(T/2)^3}{3T} = \infty$

The signal $x(t) = t\,u(t)$ is neither an energy nor a power signal.

𝒅) $E = \sum_{n=-\infty}^{\infty}|x[n]|^2 = \sum_{n=-\infty}^{\infty}\left|(-0.5)^n u[n]\right|^2 = \sum_{n=0}^{\infty}\left(\frac{1}{2}\right)^{2n} = \sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^n = \frac{1}{1-\frac{1}{4}} = \frac{4}{3} < \infty$

The signal $x[n] = (-0.5)^n u[n]$ is an energy signal.

𝒆) $P = \lim_{N\to\infty}\frac{1}{2N+1}\sum_{n=-N}^{N}|x[n]|^2 = \lim_{N\to\infty}\frac{1}{2N+1}\sum_{n=0}^{N}1^2 = \lim_{N\to\infty}\frac{N+1}{2N+1} = \frac{1}{2} < \infty$

The signal $x[n] = u[n]$ is a power signal.

𝒇) $P = \lim_{N\to\infty}\frac{1}{2N+1}\sum_{n=-N}^{N}\left|2e^{j3n}\right|^2 = \lim_{N\to\infty}\frac{1}{2N+1}\sum_{n=-N}^{N}2^2 = \lim_{N\to\infty}4\left(\frac{2N+1}{2N+1}\right) = 4 < \infty$

The signal $x[n] = 2e^{j3n}$ is a power signal.

𝒈) $x(t) = e^{j2t} + e^{j3t} = 2e^{j\frac{5}{2}t}\left\{\frac{1}{2}\left(e^{j\frac{t}{2}} + e^{-j\frac{t}{2}}\right)\right\} = 2e^{j\frac{5}{2}t}\cos\left(\frac{t}{2}\right) \Rightarrow |x(t)| = 2\left|\cos\left(\frac{t}{2}\right)\right|$

$|x(t)|^2 = 4\cos^2\left(\frac{t}{2}\right) = 2(1+\cos(t)) \Rightarrow P = \lim_{T\to\infty}\frac{2}{T}\int_{-T/2}^{T/2}(1+\cos(t))\,dt = 2 < \infty$

The signal $x(t) = e^{j2t} + e^{j3t}$ is a power signal.
Exercise 5: Determine whether or not each of the following signals is periodic. If a signal is periodic, determine its fundamental period.

❶ $x(t) = \cos\left(\frac{\pi}{3}t\right) + \sin\left(\frac{\pi}{4}t\right)$  ❷ $x(t) = \sin^2(t)$  ❸ $x(t) = \sin^2(t) + \cos^2(t)$
❹ $x(t) = \cos(t) + \sin(\sqrt{2}\,t)$  ❺ $x[n] = \cos^2\left[\frac{\pi}{8}n\right]$  ❻ $x(t) = e^{j2t} + e^{j3t}$
❼ $x(t) = e^{j2t} + e^{j\pi t}$  ❽ $x(t) = |\sin(t)| + |\cos(t)|$  ❾ $x(t) = \sin^3(t)$
❿ $x(t) = \sin^2(t) + \cos^4(t)$  ⓫ $x(t) = \sin^4(t) + \cos^4(t)$  ⓬ $x(t) = \cos^4(t) - \sin^4(t)$

Ans:
❶ $x(t) = \cos\left(\frac{\pi}{3}t\right) + \sin\left(\frac{\pi}{4}t\right) \to T_1 = 6$ and $T_2 = 8 \to \frac{T_1}{T_2} = \frac{3}{4}$, a rational number.
Then 𝑥(𝑡) is periodic with fundamental period $T_0 = \mathrm{LCM}(T_1, T_2) = 4T_1 = 3T_2 = 24$.

❷ $x(t) = \sin^2(t) = \frac{1}{2}(1 - \cos(2t)) = x_1(t) + x_2(t) \to$ fundamental period $T_0 = \pi$.

❸ $x(t) = \sin^2(t) + \cos^2(t) = 1$, a DC signal $\Rightarrow$ 𝑥(𝑡) is periodic with no fundamental period.

❹ $x(t) = \cos(t) + \sin(\sqrt{2}\,t) = x_1(t) + x_2(t)$ with $T_1 = 2\pi$ and $T_2 = \sqrt{2}\,\pi \to \frac{T_1}{T_2} = \frac{2}{\sqrt{2}}$, which is not a rational number, so 𝑥(𝑡) is not periodic.

❺ $x[n] = \cos^2\left[\frac{\pi}{8}n\right] = \frac{1}{2}\left(1 + \cos\left[\frac{\pi}{4}n\right]\right) = x_1[n] + x_2[n]$, with $x_1[n] = \frac{1}{2} = \frac{1}{2}(1)^n$ and $x_2[n] = \frac{1}{2}\cos\left[\frac{\pi}{4}n\right]$, such that 𝑥₁[𝑛] is periodic with fundamental period $N_1 = 1$ and 𝑥₂[𝑛] is periodic with fundamental period $N_2 = 8$. Since $N_1/N_2 = 1/8$ is a rational number, we deduce that 𝑥[𝑛] is periodic with fundamental period $N_0 = \mathrm{LCM}(N_1, N_2) = 8$.

❻ $x(t) = e^{j2t} + e^{j3t} = x_1(t) + x_2(t)$ with $x_1(t) = e^{j2t}$ and $x_2(t) = e^{j3t}$. The signal 𝑥₁(𝑡) is periodic with fundamental period $T_1 = \pi$ and 𝑥₂(𝑡) is periodic with period $T_2 = 2\pi/3 \to T_1/T_2 = 3/2$, a rational number $\Rightarrow$ 𝑥(𝑡) is periodic with fundamental period $T_0 = \mathrm{LCM}(T_1, T_2) = 2T_1 = 3T_2 = 2\pi$.

❼ $x(t) = e^{j2t} + e^{j\pi t}$ is aperiodic (not periodic), because $T_1/T_2 = \pi/2$ is not a rational number.

❽ $x(t) = |\sin(t)| + |\cos(t)| = x_1(t) + x_2(t)$. If you are supposed to find the period of a sum of two functions $x_1(t) + x_2(t)$, given that the period of 𝑥₁ is 𝑇₁ and the period of 𝑥₂ is 𝑇₂, then the period of the total $x_1(t) + x_2(t)$ will be $\mathrm{LCM}(T_1, T_2)$. But this technique has constraints, as it does not give correct answers in some cases. One of those cases is this one: with $x_1(t) = |\sin(t)|$ and $x_2(t) = |\cos(t)|$, the period of $x_1(t) + x_2(t)$ should be 𝜋 by the above rule, but the actual period is 𝜋/2. So in general it can be difficult to identify the correct answer to questions regarding the period; in most cases a graph will help.

❾ $x(t) = \sin^3(t) = \frac{3}{4}\sin(t) - \frac{1}{4}\sin(3t) = x_1(t) + x_2(t) \to T_0 = \mathrm{LCM}(T_1, T_2) = 2\pi$. Also:
$x(t) = \cos^3(t) = \frac{3}{4}\cos(t) + \frac{1}{4}\cos(3t) = x_1(t) + x_2(t) \to T_0 = \mathrm{LCM}(T_1, T_2) = 2\pi$

❿ $x(t) = \sin^2(t) + \cos^4(t) = 1 + \cos^4(t) - \cos^2(t) = 1 + \cos^2(t)\left(\cos^2(t) - 1\right) \to$ 𝑥(𝑡) is periodic:
$x(t) = 1 - \cos^2(t)\sin^2(t) = 1 - (\sin(t)\cos(t))^2 = 1 - \frac{1}{4}\sin^2(2t) \to T_0 = \frac{\pi}{2}$

⓫ $x(t) = \sin^4(t) + \cos^4(t) = \left(\sin^2(t) + \cos^2(t)\right)^2 - 2\sin^2(t)\cos^2(t) \to$ 𝑥(𝑡) is periodic:
$x(t) = 1 - 2\sin^2(t)\cos^2(t) = 1 - \frac{1}{2}\sin^2(2t) \to T_0 = \mathrm{LCM}(T_1, T_2) = \frac{\pi}{2}$

⓬ $x(t) = \cos^4(t) - \sin^4(t) = \left(\cos^2(t) - \sin^2(t)\right)\left(\cos^2(t) + \sin^2(t)\right) \to$ 𝑥(𝑡) is periodic:
$x(t) = \cos^2(t) - \sin^2(t) = 1 - 2\sin^2(t) = \cos(2t) \to T_0 = \mathrm{LCM}(T_1, T_2) = \pi$

Exercise 6: Consider the sinusoidal signal 𝑥(𝑡) = cos(15𝑡)


a) Find the formula of the sampling interval 𝑇𝑠 such that 𝑥[𝑛] = 𝑥(𝑛𝑇𝑠 ) is a periodic
sequence.
b) Find the fundamental period of 𝑥[𝑛] = 𝑥(𝑛𝑇𝑠 ) if 𝑇𝑠 = 0.1𝜋 seconds.

Ans: a) The sinusoidal signal 𝑥[𝑛] = 𝑥(𝑛𝑇𝑠 ) is periodic sequence if 𝑥[𝑛 + 𝑁] = 𝑥[𝑛], i.e.
cos(15𝑛𝑇𝑠 + 15𝑁𝑇𝑠 ) = cos(15𝑛𝑇𝑠 ) ⟹ 𝑇𝑠 = {2𝜋𝑚}/{15𝑁}, where 𝑚, 𝑛 ∈ ℕ.

𝜋 2𝜋𝑚 𝜋 20𝑚 4
b) 𝑇𝑠 = ⟹ = ⟹ 𝑁= = 𝑚, It is very well known that:
10 15𝑁 10 15 3
4
A fundamental period is the smallest integer value of 𝑁 ⟹ 𝑁0 = min {3 𝑚} = 4

Remark: sampled data not always necessarily be periodic, even if its continuous signal is
periodic. Sampling (Discretization) does not preserve periodicity.

Exercise 7: Show that if 𝑥(𝑡 + 𝑇) = 𝑥(𝑡) , then


𝛽 𝛽+𝑇 𝑇 𝑎+𝑇
∫ 𝑥(𝑡)𝑑𝑡 = ∫ 𝑥(𝑡)𝑑𝑡 and ∫ 𝑥(𝑡)𝑑𝑡 = ∫ 𝑥(𝑡)𝑑𝑡
𝛼 𝛼+𝑇 0 𝑎
Ans:
𝛽 𝛽
𝑑𝜉 𝑑𝑡
𝑖) 𝐼 = ∫ 𝑥(𝑡)𝑑𝑡 = ∫ 𝑥(𝑡 + 𝑇)𝑑𝑡 take the change of variable 𝜉 = 𝑡 + 𝑇 ⇒ =
𝛼 𝛼 𝑑𝑡 𝑑𝑡
𝛽 𝛽+𝑇 𝛽+𝑇
𝐼 = ∫ 𝑥(𝑡)𝑑𝑡 = ∫ 𝑥(𝜉)𝑑𝜉 = ∫ 𝑥(𝑡)𝑑𝑡
𝛼 𝛼+𝑇 𝛼+𝑇

𝑎+𝑇 0 𝑎+𝑇 𝑇 𝑎+𝑇 𝑇


𝑖𝑖) ∫ 𝑥(𝑡)𝑑𝑡 = ∫ 𝑥(𝑡)𝑑𝑡 + ∫ 𝑥(𝑡)𝑑𝑡 = ∫ 𝑥(𝑡)𝑑𝑡 + ∫ 𝑥(𝑡)𝑑𝑡 = ∫ 𝑥(𝑡)𝑑𝑡
𝑎 𝑎 0 𝑎+𝑇 0 0

Exercise 8: Show that if 𝑥(𝑡) is periodic with fundamental period 𝑇0 , then the normalized
average power 𝑃 of 𝑥(𝑡) is the same as the average power of 𝑥(𝑡) over any interval of length
𝑇0 , that is,
1 +𝑇/2 1 𝑇0
𝑃f = lim ∫ |𝑥(𝑡)|2 𝑑𝑡 = ∫ |𝑥(𝑡)|2 𝑑𝑡
𝑇⟶∞ 𝑇 −𝑇/2 𝑇0 0
Ans: Let we define 𝑇 = 𝑘𝑇0

1 +𝑇/2 2
1 +𝑘𝑇0 /2 2
1 𝑘𝑇0
𝑃f = lim ∫ |𝑥(𝑡)| 𝑑𝑡 = lim ∫ |𝑥(𝑡)| 𝑑𝑡 = lim ∫ |𝑥(𝑡)|2 𝑑𝑡
𝑇⟶∞ 𝑇 −𝑇/2 𝑘⟶∞ 𝑘𝑇0 −𝑘𝑇 /2 𝑘⟶∞ 𝑘𝑇0 0
0

1 𝑘𝑇0 2
𝑘 𝑇0 1 𝑇0
𝑃f = lim ∫ |𝑥(𝑡)| 𝑑𝑡 = lim ∫ |𝑥(𝑡)| 𝑑𝑡 = ∫ |𝑥(𝑡)|2 𝑑𝑡
2
𝑘⟶∞ 𝑘𝑇0 0 𝑘⟶∞ 𝑘𝑇0 0 𝑇0 0
Exercise 9: Find an odd and even parts of , 𝑥(𝑡) = 𝑒 𝑗𝑡
1
𝑥𝑒 (𝑡) = (𝑥(𝑡) + 𝑥(−𝑡)) = cos(𝑡)
𝐀𝐧𝐬: 𝑥(𝑡) = 𝑒 𝑗𝑡 = cos(𝑡) + 𝑖 sin(𝑡) ⇒ { 2
1
𝑥𝑜 (𝑡) = (𝑥(𝑡) − 𝑥(−𝑡)) = 𝑖 sin(𝑡)
2
Exercise 10: The following equalities are used on many occasions in this text. Prove their
validity
∞ ∞
𝛼𝑘 𝑛
𝛼
𝒂) ∑ 𝛼 = |𝛼| < 1 and 𝒃) ∑ 𝑛𝛼 𝑛 = |𝛼| < 1
1−𝛼 (1 − 𝛼)2
𝑛=𝑘 𝑛=0
∞ 𝑘−1 ∞
𝑛
1−𝛼
𝐀𝐧𝐬: 𝒂) Let we define the sum S = ∑ 𝛼 𝑛 = lim ( ) = ∑ 𝛼𝑛 + ∑ 𝛼𝑛 Then we have
𝑛→∞ 1−𝛼
𝑛=0 𝑛=0 𝑛=𝑘

∞ ∞ 𝑘−1
1 − 𝛼𝑛 1 − 𝛼𝑘 1 1 − 𝛼𝑘 𝛼𝑘
∑ 𝛼 = ∑ 𝛼 − ∑ 𝛼 𝑛 = { lim (
𝑛 𝑛
)} − ={ − }=
𝑛→∞ 1−𝛼 1−𝛼 1−𝛼 1−𝛼 1−𝛼
𝑛=𝑘 𝑛=0 𝑛=0

𝒃) Let we take the derivative of S with respect to 𝛼 we get:


∞ ∞ ∞
𝑑S 𝑑 𝑑 1 1 𝛼
= (∑ 𝛼 𝑛 ) = ( ) ⟺ ∑ 𝑛𝛼 𝑛−1 = ⇒ ∑ 𝑛𝛼 𝑛
=
𝑑𝛼 𝑑𝛼 𝑑𝛼 1 − 𝛼 (1 − 𝛼)2 (1 − 𝛼)2
𝑛=0 𝑛=0 𝑛=0

Exercise 11: Given a continuous signal 𝑥(𝑡) = cos(𝜔𝑡 2 ), 𝜔 = 5𝜋/4, prove that 𝑥(𝑡) is
aperiodic signal and also prove that, its sampled signal 𝑥[𝑛] is periodic when 𝑇𝑠 = 0.5. Find
the period of the discrete signal

Ans: Assume that 𝑥(𝑡) is periodic such that: cos(𝜔(𝑡 + 𝑇)2 ) = cos(𝜔𝑡 2 ) so 2ℓ𝜋 = 𝜔𝑇 2 + 2𝜔𝑡𝑇.
As we can see, 𝑇 is dependent on the value of 𝑡 and hence, is not a constant. So cos(𝜔𝑡 2 ) is
not periodic. This can also be observed in the graph for 𝑥(𝑡) = cos(𝜔𝑡 2 ).

The sampled signal is 𝑥[𝑛] = cos(𝜔𝑇𝑠 2 𝑛2 ) where 𝑇𝑠 = 0.5, let we look for the period of this
sequence 𝑥[𝑛 + 𝑁] = 𝑥[𝑛] ⇒ 𝑁 = 16. This can also be observed in the graph for 𝑥[𝑛].
Exercise 12: Prove that, if we let f(𝑡) = f𝑒 (𝑡) + f𝑜 (𝑡) then there exist g(𝑡) = g 𝑒 (𝑡) + g 𝑜 (𝑡) such
that the even and odd functions have the following property:

𝟙. f𝑒 (𝑡) × f𝑜 (𝑡) = g 𝑜 (𝑡) 𝟚. f𝑜 (𝑡) × f𝑜 (𝑡)= g 𝑒 (𝑡) 𝟛. f𝑒 (𝑡) × f𝑒 (𝑡)= g 𝑒 (𝑡)

Ans:

𝟙. 𝑦(𝑡) = f𝑒 (𝑡) × f𝑜 (𝑡) = f 2 (𝑡) − f 2 (−𝑡) ⇒ 𝑦(𝑡) = −𝑦(−𝑡) Then, it results to an odd function.

𝟚. 𝑦(𝑡) = f𝑜 (𝑡) × f𝑜 (𝑡) = f 2 (𝑡) + f 2 (−𝑡) − 2f(𝑡)f(−𝑡) ⇒ 𝑦(𝑡) = 𝑦(−𝑡) Then, it results to an even
function.

𝟛. 𝑦(𝑡) = f𝑒 (𝑡) × f𝑒 (𝑡) = f 2 (𝑡) + f 2 (−𝑡) + 2f(𝑡)f(−𝑡) ⇒ 𝑦(𝑡) = 𝑦(−𝑡) Then, it results to an even
function.

Exercise 13: Let g(𝑡) be any real valued function where g(𝑡) = g 𝑜 (𝑡) + g 𝑒 (𝑡). Prove that
g 𝑒 (𝑡) is symmetrical about the vertical axis, and g 𝑜 (𝑡) is symmetrical about the origin. As a
consequence deduce that:
+𝑎 +𝑎 +𝑎
∫ g 𝑒 (𝑡)𝑑𝑡 = 2 ∫ g 𝑒 (𝑡)𝑑𝑡 & ∫ g 𝑜 (𝑡)𝑑𝑡 = 0
−𝑎 0 −𝑎
Exercise 14: Prove that,
+∞ +∞ +∞
2 (𝑡)𝑑𝑡
𝐸=∫ 𝑥 = 𝐸𝑒 + 𝐸𝑜 = ∫ 𝑥𝑒2 (𝑡)𝑑𝑡 +∫ 𝑥𝑜2 (𝑡)𝑑𝑡
−∞ −∞ −∞

Ans: we know that 𝑥(𝑡) = 𝑥𝑒 (𝑡) + 𝑥𝑜 (𝑡) ⇒ 𝑥 2 (𝑡) = 𝑥𝑒2 (𝑡) + 𝑥𝑜2 (𝑡) + 2𝑥𝑒 (𝑡)𝑥𝑜 (𝑡)
+∞ +∞ +∞ +∞
𝐸=∫ 𝑥 2 (𝑡)𝑑𝑡 = ∫ 𝑥𝑒2 (𝑡)𝑑𝑡 + ∫ 𝑥𝑜2 (𝑡)𝑑𝑡 + 2 ∫ 𝑥𝑒 (𝑡)𝑥𝑜 (𝑡)𝑑𝑡
−∞ −∞ −∞ −∞

From the previous exercise we have seen that


+∞ +∞
∫ 𝑥𝑒 (𝑡)𝑥𝑜 (𝑡)𝑑𝑡 = ∫ g o (𝑡)𝑑𝑡 = 0
−∞ −∞
Therefore
+∞ +∞ +∞
𝐸=∫ 𝑥 2 (𝑡)𝑑𝑡 = 𝐸𝑒 + 𝐸𝑜 = ∫ 𝑥𝑒2 (𝑡)𝑑𝑡 + ∫ 𝑥𝑜2 (𝑡)𝑑𝑡
−∞ −∞ −∞
Exercise 15: The unit impulse (or unit sample) and the unit step sequences 𝛿[𝑛], 𝑢[𝑛] are
1 𝑛=𝑘 1 𝑛≥𝑘
defined as: 𝛿[𝑛 − 𝑘] = { 𝑢[𝑛 − 𝑘] = {
0 𝑛≠𝑘 0 𝑛<𝑘
Prove that 𝛿[𝛼𝑛] = 𝛿[𝑛] and check the following,
+∞
 𝛿[𝑛] = 𝑢[𝑛] − 𝑢[𝑛 − 1]  𝑢[𝑛] = ∑ 𝛿[𝑛 − 𝑘] (accumulation⬌integration)
𝑘=0
+∞
 𝑥[𝑛]𝛿[𝑛 − 𝑘] = 𝑥[𝑘]𝛿[𝑛 − 𝑘]  𝑥[𝑛] = ∑ 𝑥[𝑘] 𝛿[𝑛 − 𝑘] 𝛿[𝑛] = 𝛿[−𝑛]
𝑘−∞

Exercise 16: Given the complex exponential sequence which is of the form

𝑥[𝑛] = 𝑒 𝑖Ω0 𝑛 = cos(Ω0 𝑛) + 𝑖 sin(Ω0 𝑛)

Prove that 𝑥[𝑛] is periodic sequence with period 𝑁 (> 0), if and only if Ω0 satisfy the
following Ω0 /2𝜋 = 𝑚/𝑁, 𝑚 = positive integer. Thus the sequence 𝑥[𝑛] = 𝑒 𝑖Ω0 𝑛 not periodic for
any value of Ω0 . It is periodic only if Ω0 /2𝜋 is a rational number. Thus, if Ω0 , satisfies the
periodicity condition, Ω0 ≠ 0, and 𝑁 & 𝑚 have no factors in common, then the fundamental
period of the sequence 𝑥[𝑛] is given by 𝑁𝑜 = 𝑚(2𝜋/𝑁). Note that this property is quite
different from the property that the continuous-time signal 𝑒 𝑖ω0 𝑡 that is periodic for any
value of ω0 .

Exercise 17: Find the even and odd parts of the signal 𝑥[𝑛] = 𝛼 𝑛 𝑢[𝑛]
Exercise 18: If 𝑥[𝑛] = 0 for 𝑛 < 0 and 𝑥𝑒 [𝑛] = 2 × (0.9)|𝑛| 𝑢[|𝑛|], find 𝑥[𝑛]

2 × (0.9)𝑛 𝑛≥1
Hint: write 𝑥𝑒 [𝑛] in the form of brakts 𝑥𝑒 [𝑛] = {2𝛿[𝑛] 𝑛=0
2 × (0.9)−𝑛 𝑛 ≤ −1
𝐀𝐧𝐬: 𝑥[𝑛] = 𝛿[𝑛] + 2 × (0.9)𝑛 𝑢[𝑛 − 1]

Exercise 19: prove that |𝑎|𝛿(𝑎𝑡) = 𝛿(𝑡) Ans: we know that

1 − 𝑢(𝑡) 𝑎<0 𝑑 −𝛿(𝑡) 𝑎<0


𝑢(𝑎𝑡) = { ⟺ 𝑢(𝑎𝑡) = 𝑎 𝛿(𝑎𝑡) = {
𝑢(𝑡) 𝑎<0 𝑑𝑡 𝛿(𝑡) 𝑎<0
1
𝛿(𝑎𝑡) = 𝛿(𝑡)
|𝑎|
Exercise 20: prove that
1
𝛿((𝑡 − 𝑎)(𝑡 − 𝑏)) = (𝛿(𝑡 − 𝑎) + 𝛿(𝑡 − 𝑏))
|𝑎 − 𝑏|
Ans: we know that
1 𝑡<𝑎
𝑢((𝑡 − 𝑎)(𝑡 − 𝑏)) = {0 𝑎 < 𝑡 < 𝑏 = 1 − (𝑢(𝑡 − 𝑎) − 𝑢(𝑡 − 𝑏))
1 𝑡<𝑏
= 𝑢(𝑡 2 − (𝑎 + 𝑏)𝑡 + 𝑎𝑏)
𝑑 𝑑
⇒ 𝑢((𝑡 − 𝑎)(𝑡 − 𝑏)) = (1 − (𝑢(𝑡 − 𝑎) − 𝑢(𝑡 − 𝑏))) = −𝛿(𝑡 − 𝑎) + 𝛿(𝑡 − 𝑏)
𝑑𝑡 𝑑𝑡
Also it is true to say that
𝑑
𝑢((𝑡 − 𝑎)(𝑡 − 𝑏)) = {2𝑡 − (𝑎 + 𝑏)}𝛿((𝑡 − 𝑎)(𝑡 − 𝑏))
𝑑𝑡
= −𝛿(𝑡 − 𝑎) + 𝛿(𝑡 − 𝑏)

𝛿(𝑡 − 𝑏) 𝛿(𝑡 − 𝑎)
𝛿((𝑡 − 𝑎)(𝑡 − 𝑏)) = −
{2𝑡 − (𝑎 + 𝑏)} {2𝑡 − (𝑎 + 𝑏)}
Using the property of g(𝑡)𝛿(𝑡 − 𝑎) = g(𝑎)𝛿(𝑡 − 𝑎) we get:

𝛿(𝑡 − 𝑏) 𝛿(𝑡 − 𝑎)
𝛿((𝑡 − 𝑎)(𝑡 − 𝑏)) = −
𝑏−𝑎 𝑎−𝑏
Because of 𝑏 − 𝑎 < 0 or 𝑏 − 𝑎 > 0 then
1
𝛿((𝑡 − 𝑎)(𝑡 − 𝑏)) = (𝛿(𝑡 − 𝑎) + 𝛿(𝑡 − 𝑏))
|𝑏 − 𝑎|
Generally we have
𝛿(𝑡 − 𝑡𝑖 )
𝛿(g(𝑡)) = ∑ , where 𝑡𝑖 is a real root of g(𝑡) = 0
𝑖 |ǵ (𝑡𝑖 )|

In the integral form the generalized scaling property may be written as



f(𝑡𝑖 )
∫ f(𝑡)𝛿(g(𝑡))𝑑𝑡 = ∑
−∞ 𝑖 |ǵ (𝑡𝑖 )|
Review Questions:
1. Define a system model and a system itself.
2. What is the need of a system model?
3. How do system models expedite the design and development of systems.
4. What are model validation and model verification?
5. How do we validate and verify a given model?
6. How can we classify models?
8. Distinguish between feedback control and feedforward control systems.
9. Give a classification of systems.
10. What is the difference between a set and a system?
11. What do you understand by “whole is more than sum of its parts?”
12. What do you mean by the complexity of systems?
13. Differentiate the linear and nonlinear systems.
14. Explain the natural systems and manmade systems.
15. Explain in brief about system thinking.
16. Differentiate between hard and soft systems.

Computer program: This program provides a student a good support to plot & simulate
multistep functions

clear all
clc

t=-6:0.1:6;
k=length(t);
for i=1:k
if t(i)>=0 & t(i)<=1
x(i)=t(i);
elseif t(i)>1 & t(i)<=2
x(i)=-t(i)+2;
else
x(i)=0;
end
end
plot(t,x,'r','linewidth',3)
grid('minor')
Summary:
 A signal is a set of information or data. A system is an organized relationship among
functioning units or components. A system may be made up of physical components
(hardware realization) or may be an algorithm (software realization).

 A convenient measure of the size of a signal is its energy if it is finite. If the signal energy
is infinite, the appropriate measure is its power, if it exists. A signal whose physical
description is known completely in a mathematical or graphical form is a deterministic
signal. A random signal is known only in terms of its probabilistic description such as
mean value, mean square value, and so on, rather than its mathematical or graphical form.

 The unit step function 𝑢(𝑡) is very useful in representing causal signals and signals with
different mathematical descriptions over different intervals.

 In the classical definition, the unit impulse function 𝛿(𝑡) is characterized by unit area,
and the fact that it is concentrated at a single instant 𝑡 = 0. The impulse function has a
sampling (or sifting) property, which states that the area under the product of a function
with a unit impulse is equal to the value of that function at the instant where the impulse
is located (assuming the function to be continuous at the impulse location). In the modern
approach, the impulse function is viewed as a generalized function and is defined by the
sampling property.

 The Dirac impulse function or unit impulse function or simply the delta function 𝛿(𝑡) is
not a function in the strict mathematical sense. It is defined in advanced texts and courses
using the theory of distribution.

 The exponential function 𝑒 𝑠𝑡 , where s is complex, encompasses a large class of signals


that includes a constant, a monotonic exponential, a sinusoid, and an exponentially
varying sinusoid.

 A signal that is symmetrical about the vertical axis (𝑡 = 0) is an even function of time,
and a signal that is anti-symmetrical about the vertical axis is an odd function of time. The
product of an even function with an odd function results in an odd function. However, the
product of an even function with an even function or an odd function with an odd function
results in an even function. The area under an odd function from 𝑡 = −𝑎 to 𝑎 is always zero
regardless of the value of 𝑎. On the other hand, the area under an even function from
𝑡 = −𝑎 to 𝑎 is two times the area under the same function from 𝑡 = 0 to 𝑎 (or 𝑡 = −𝑎 to 0).
Every signal can be expressed as a sum of odd and even function of time.

 A system processes input (i.e. insert) signals to produce another output signals
(response). The input is the cause and the output is its effect. In general, the output is
affected by two causes: the internal conditions of the system (such as the initial conditions)
and the external input.

 If you are supposed to find period of sum of two function such that, f(𝑥) + g(𝑥) given that
period of f is a and period of g is b then period of total f(𝑥) + g(𝑥) will be LCM(a,b). But this
technique has some constrain as it will not give correct answers in some cases. One of
those case is, if you take f(𝑥) = |sin(𝑥)| and g(𝑥) = |cos(𝑥)|, then period of f(𝑥) + g(𝑥) should
be 𝜋 as per the above rule but, period of f(𝑥) + g(𝑥) is not 𝜋 but 𝜋/2. So in general it is very
difficult to identify correct answers for the questions regarding period. Most of the cases
graph will help.
Additional Computer programs:

clear all,clc, Ts=0.01;


t=-3*pi:Ts:3*pi;

y1=abs(cos(t));
[~,peaklocs] = findpeaks(y1);
T1= mean(Ts*diff(peaklocs))

y2=abs(sin(t));
[~,peaklocs] = findpeaks(y2);
T2= mean(Ts*diff(peaklocs))

y=y1+y2;
[~,peaklocs] = findpeaks(y);
T= mean(Ts*diff(peaklocs))

plot(t,y1,'-r','linewidth',2)
grid on
figure
>>
plot(t,y2,'-b','linewidth',2)
grid on T1 = 3.1425 % period 𝝅
figure T2 = 3.1420 % period 𝝅
plot(t,y,'-k','linewidth',2) T = 1.5700 % period 𝝅/2

List of some Non-smooth periodic functions:


clear all,clc
dt=0.01;
t=-3*pi:dt:3*pi; T=2

y1=sign(sin((2*pi/T)*t));
%y1=sign(cos((2*pi/T)*t));

plot(t,y1,'-r','linewidth',2)
grid on

clear all,clc
dt=0.01;
t=-10:dt:10;
T=10;
y1=atan(cot((2*pi/T)*t));
plot(t,y1,'-r','linewidth',2)
grid on
y2=asin(sin((2*pi/T)*t));
grid on
plot(t,y2,'-b','linewidth',2)

Sign signal as a smooth function


sgn = @(x,a,c) 1./(1 + exp(-a.*(x-c))); % Create Function
x = -4:0.1:12;
a=5; c=4;
y = sgn(x, a, c);
plot(x,y,'-b','linewidth',2)
xlabel('sigmf, P = [2 4]')
ylim([-0.05 1.05])
Signals in nature can be treated electronically by various sensors:

Motion: The motion of an object can be considered to be a signal, and can be monitored by
various sensors to provide electrical signals. For example, radar can provide an
electromagnetic signal for following aircraft motion. A motion signal is one-dimensional
(time), and the range is generally three-dimensional. Position is thus a 3-vector signal;
position and orientation of a rigid body is a 6-vector signal. Orientation signals can be
generated using a gyroscope.

Sound: Since a sound is a vibration of a medium (such as air), a sound signal associates a
pressure value to every value of time and three space coordinates. A sound signal is
converted to an electrical signal by a microphone, generating a voltage signal as an analog
of the sound signal, making the sound signal available for further signal processing. Sound
signals can be sampled at a discrete set of time points; for example, compact discs (CDs)
contain discrete signals representing sound, recorded at 44,100 samples per second; each
sample contains data for a left and right channel, which may be considered to be a 2-vector
signal (since CDs are recorded in stereo). The CD encoding is converted to an electrical
signal by reading the information with a laser, converting the sound signal to an optical
signal.

Images: A picture or image consists of a brightness or color signal, a function of a two-


dimensional location. The object's appearance is presented as an emitted or reflected
electromagnetic wave, one form of electronic signal. It can be converted to voltage or
current waveforms using devices such as the charge-coupled device. A 2D image can have
a continuous spatial domain, as in a traditional photograph or painting; or the image can
be discretized in space, as in a raster scanned digital image. Color images are typically
represented as a combination of images in three primary colors, so that the signal is vector-
valued with dimension three.

Videos: A video signal is a sequence of images. A point in a video is identified by its two-
dimensional position and by the time at which it occurs, so a video signal has a three-
dimensional domain. Analog video has one continuous domain dimension (across a scan
line) and two discrete dimensions (frame and line).

Biology: The value of the signal is an electric potential ("voltage"). The domain is more
difficult to establish. Some cells or organelles have the same membrane potential
throughout; neurons generally have different potentials at different points. These signals
have very low energies, but are enough to make nervous systems work; they can be
measured in aggregate by the techniques of electrophysiology.
Computer program: Fourier cosine series of a simple linear function 𝑓(𝑥) = 𝑥 converges to
an even periodic extension of 𝑓(𝑥) = 𝑥, which is a traingular wave. Note the very fast
convergence, compared to the sine series

clear;
hold off
L = 1; % Length of the interval
x = linspace(-3*L, 3*L, 300); % Create 300 points on the interval [-3L, 3L]
Const = -4*L/pi^2; % Constant in the expression for A_n
Cn = L / 2; % The baseline of the cosine series: A0 = L/2

for n = 1 : 2 : 3 % Only sum over odd integers


An = Const/n^2; % Coefficients inversely proportional to n^2
Fn = An * cos(n*pi*x/L); % Calculate Fourier cosine term
Cn = Cn + Fn; % Add the term to Fourier cosine series sum
plot(x, Cn, 'linewidth', 2); % Plot the cosine fourier sum
hold on;
end

xlabel('x'); ylabel('Sum(B_nsin(n\pix/L))');
title('Sum of first few terms in cosine series A_0+A_ncos(n\pix) (L=1');
plot(x, abs(mod(x+1,2)-1), 'k--', 'linewidth', 1); % Trickiest part of the
code: create triagular wave
legend('A_0+A_1cos(\pix)', 'A_0+A_1cos(\pix)+A_3cos(3\pix)', 'Even extension
of f(x)=x');

References
1. Katsuhiko Ogata., Modern Control Engineering, Prentice Hall Boston, 2010.
2. Jagan. N.C., Control Systems, Second Edition, BS Publications, New Jersey, 2008.
3. Richard C. Dorf., Modern Control Systems, Twelfth Edition Prentice Hall Boston, 2011.
4. Stanislaw H. Zak, Systems and Control, Oxford University Press, New York, 2003.
5. Martin Schetzen, Linear Time-Invariant Systems, Wiley, New York, 2003.
6. Kailath, T., Linear Systems, Prentice-Hall, Englewood Cliffs, New Jersey, 1980.
7. Lathi, B.P., Signals, Systems, and Communication, Wiley, New York, 1965.
8. Lathi, B.P., Signal Processing and Linear Systems, Berkeley-Cambridge Press, New York, 1998.
CHAPTER II:
Linear Time Invariant
Systems

I. Introduction
II. Block Diagrams of Systems and Interconnection
III. Time Invariant and Time Varying Systems
IV. Impulse response continuous-time systems
V. Impulse response discrete-time systems
VI. Step response
VII. Properties of LTI Systems
V.I. Causality
VII.II Stability
VII.III Memoryless
VII.IV Invertible system
VIII. Eigenfunction of Continuous-time LTI Systems
IX. Eigenfunction of Discrete-time LTI Systems
X. Solved Problems
XI. Review Questions

In system analysis, among other fields of study, a linear time-invariant system (or
"LTI system") is a system that produces an output signal from any input signal
subject to the constraints of linearity and time-invariance; these terms are briefly
defined next. LTI system theory is an area of applied mathematics which has direct
applications in electrical circuit analysis and design, signal processing and filter
design, control theory, mechanical engineering, image processing, NMR spectroscopy,
and many other technical areas where systems of ordinary differential equations
present themselves. (Wiki)
Linear Time Invariant
Systems
I. Introduction: A system whose output is proportional to its input is an example of a
linear system. But linearity implies more than this; it also implies additivity property,
implying that if several causes are acting on a system, then the total effect on the system
due to all these causes can be determined by considering each cause separately while
assuming all the other causes to be zero. The total effect is then the sum of all the
component effects.

In this book, we are concerned only with linear 𝑢(𝑡) LTI Sys 𝑦(𝑡)
systems, so that the corresponding linear ∑:
T(•)
mapping characterizes completely the linear
system. 𝑦(𝑡) = 𝑻(𝑢(𝑡)) Where 𝑻 is linear mapping

What is linearity? A general deterministic system can be described by an operator 𝑇, that


maps an input 𝑢(𝑡), as a function of 𝑡 to an output 𝑦(𝑡), a type of black box description.
Linear systems satisfy the property of superposition. Given two valid inputs 𝑢1 (𝑡), 𝑢2 (𝑡) as
well as their respective outputs 𝑦1 (𝑡) = 𝑻(𝑢1 (𝑡)) and 𝑦2 (𝑡) = 𝑻(𝑢2 (𝑡)) then a linear system
must satisfy 𝛼𝑦1 (𝑡) + 𝛽𝑦2 (𝑡) = 𝛼𝑻(𝑢1 (𝑡)) + 𝛽𝑻(𝑢2 (𝑡)) = 𝑻(𝛼𝑢1 (𝑡) + 𝛽𝑢2 (𝑡))
Linearity ≡ Superposition ≡ (Additivity + Homogeneity)

Additivity: 𝑻(𝑢1 (𝑡)) + 𝑻(𝑢2 (𝑡)) = 𝑻(𝑢1 (𝑡) + 𝑢2 (𝑡)) Homogeneity: 𝛼𝑻(𝑢(𝑡)) = 𝑻(𝛼𝑢(𝑡))
Notes: Any physical linear system is a
mathematical model of a system based
on the use of a linear operator. Any
physical system that does not satisfy
superposition principal is classified as
a nonlinear system. Also you have to
notice that: a consequence of the
homogeneity (or scaling) property of
linear systems is that a zero input
yields a zero output. This follows
readily by setting 𝛼 = 0. This is another
important property of linear systems.

Warning: saying that a system is


linear doesn't mean the same
significance as in physics or in
geometry, for example the system
described by linear equation 𝑦 = 𝑎𝑥 + 𝑏 where 𝑥 and 𝑦 are the input and output of the
system, respectively, and 𝑎 and 𝑏 are constants, this is nonlinear system for every 𝑏 ≠ 0.
The graph of 𝑦 = 𝑎𝑥 + 𝑏 is a straight line, yet it is nonlinear system! The above system is
said to have a bias. Systems with bias are incrementally linear. The incrementally linear
system will have a graph of 𝛥𝑦 vs 𝛥𝑥 as linear and it will pass through the origin.
Example: Let's check the linearity of the system: 𝑦(𝑡) = 𝑥(𝑡) cos(3𝑡). Where: 𝑥(𝑡) it the input
and 𝑦(𝑡) is the output.

Using the superposition theorem, we can prove that the system is linear. For any given
input 𝑥1 (t), the output is 𝑦1 (𝑡) = 𝑥1 (𝑡) cos(3𝑡) and for 𝑥2 (𝑡), the output is 𝑦2 (𝑡) = 𝑥2 (𝑡) cos(3𝑡).
For input [𝑥1 (𝑡) + 𝑥2 (𝑡)], the output is 𝑦(𝑡) = [𝑥1 (𝑡) + 𝑥2 (𝑡)] cos(3𝑡) That is, 𝑦(𝑡) = 𝑦1 (𝑡) + 𝑦2 (𝑡).
Hence the system is linear.
Example: Now, let's check another system, 𝑦(𝑡) = 𝑥(𝑡)2 (•)𝟐
𝑢(𝑡) 𝑦(𝑡)
y1 (𝑡) = 𝑻(𝑥1 (𝑡)) = 𝑥1 (𝑡)2 2
{ 2
⤇ 𝑻(𝑥1 (𝑡) + 𝑥2 (𝑡)) = (𝑥1 (𝑡) + 𝑥2 (𝑡)) ≠ y1 (𝑡) + y2 (𝑡)
y2 (𝑡) = 𝑻(𝑥2 (𝑡)) = 𝑥2 (𝑡)

y(𝑡) = 𝑻(𝑥1 (𝑡) + 𝑥2 (𝑡)) ≠ 𝑻(𝑥1 (𝑡)) + 𝑻(𝑥2 (𝑡))

Obviously, this system is non-linear.

Example: Given a system described by its in/out relationship (mapping) 𝑦(𝑡) = 𝑻(𝑥(t)), say
if the system is linear or not?

𝑑𝑦(𝑡)
❶ y(𝑡) = cos(𝑥(𝑡)) , ❷ + 3y(𝑡) = 𝑥(t) ❸ 𝑦(𝑡) = 𝑥(𝑡 − 𝛼)
𝑑𝑡
𝑡 𝑙

❹ y(𝑡) = ∫ 𝑥(𝜉)𝑑𝜉 ❺ 𝑦[𝑛] = |𝑥[𝑛]| ❻ 𝑦[𝑛] = ∑ (−1)𝑘 𝑥[𝑛 − 𝑘]


−∞ 𝑘=𝑚
❼ 𝑦[𝑛] + 𝑦[𝑛 − 1] = 𝑥[𝑛] − 𝑥[𝑛 − 2], ❽ 𝑦(t) = ln(𝑥(t)) ❾ 𝑦(t) = 𝑥(t) ⋆ 𝑥(t)

Answer: ❶ Nonlinear, ❷ Linear, ❸ Linear, ❹ Linear, ❺ Nonlinear, ❻ Linear, ❼ Linear


❽ Nonlinear, ❾ Nonlinear.

II. Block Diagrams of Systems and Interconnection: Large systems may consist of an
enormous number of components or elements. Analyzing such systems all at once could be
next to impossible. In such cases, it is convenient to represent a system by suitably
interconnected subsystems, each of which can be readily analyzed. Each subsystem can be
characterized in terms of its input-output relationships. Subsystems may be interconnected
by using three elementary types of interconnections: cascade, parallel, and feedback.
III. Time Invariant and Time Varying Systems: A system is called rime-invariant if a
time shift (delay or advance) in the input signal causes the same time shift in the output
signal. Thus, for a continuous-time system, the system is time-invariant if

𝑦(𝑡) = 𝑻(𝑥(𝑡)) ⟹ 𝑦(𝑡 − 𝑡0 ) = 𝑻(𝑥(𝑡 − 𝑡0 ))


Such processes are regarded as a class of systems in the field of system analysis. More
generally, the relationship between the input and output is 𝑦(𝑡) = 𝑻(𝑥(𝑡), 𝑡) , and its
variation with time is
𝜕𝑦 𝜕𝑻 𝜕𝑻 𝜕𝑥
= +
𝜕𝑡 𝜕𝑡 𝜕𝑥 𝜕𝑡
For time-invariant systems, the system properties remain constant with time, 𝜕𝑻/𝜕𝑡 = 0, for
example 𝑦(𝑡) = 𝑡𝑥(𝑡) gives
𝜕𝑦 𝜕𝑻
= 𝑥(𝑡) + 𝑡𝑥̇ (𝑡) ⤇ = 𝑥(𝑡) ≠ 0 in general, so not time − invariant.
𝜕𝑡 𝜕𝑡
A major subclass of linear systems, which in addition have parameters that do not change
with time, called linear time invariant (LTI) systems.

Example: is the given a system linear/nonlinear and invariant/time varying?

𝐚) 𝑦(t) = 𝑻(𝑥(𝑡)) = 𝑥 2 (𝑡) 𝒃) 𝑦(𝑡) = 𝑻(𝑢(𝑡)) = 𝑒 𝑡 𝑢(𝑡)


Answer: 𝐚) Nonlinear time invariant, 𝐛) Linear time varying
Example: give some of different examples for the validity of time invariant systems.

⦁ The system 𝑦(𝑡) = sin(𝑥(𝑡)) is time invariant.


⦁ The system 𝑦[𝑛] = 𝑛𝑥[𝑛] is not time invariant. This can be demonstrated by using
counterexample. Consider the input signal 𝑥1 [𝑛] = 𝛿[𝑛], which yields 𝑦1 [𝑛] = 𝑛𝛿[𝑛] = 0.
However, the input 𝑥2 [𝑛] = 𝛿[𝑛 − 1] yields the output 𝑦2 [𝑛] = 𝑛𝛿[𝑛 − 1] = 𝛿[𝑛 − 1]. Thus,
while 𝑥2 [𝑛] is the shifted version of 𝑥1 [𝑛], [ ] 𝑦2 [𝑛] is not the shifted version of 𝑦1 [𝑛].

⦁ The system 𝑦(𝑡) = 𝑥(2𝑡) is not time invariant. To check it uses counterexample. Consider
{𝑥1 (𝑡) = 1 only if |𝑡| ≤ 2}, the resulting output {𝑦1 (𝑡) = 1 only if |𝑡| ≤ 1}. If the input is
shifted by 2, that is, consider {𝑥2 (𝑡) = 1 only if 0 ≤ 𝑡 ≤ 4},, we obtain the resulting output
{𝑦2 (𝑡) = 1 only if 0 ≤ 𝑡 ≤ 2}. It is clearly seen that 𝑦2 (𝑡) ≠ 𝑦1 (𝑡 − 2), so the system is not
time invariant.
IV. Impulse response continuous-time systems: The fundamental result in LTI system
theory is that any LTI system can be characterized entirely by a single function called the
system's impulse response, that is, the output of a plant when the input is the delta
function i.e. ℎ(𝑡) = 𝑻(𝛿(𝑡)). Question: How is important ℎ(𝑡) in system theory? Can always
we obtain the output 𝑦(t) in terms of ℎ(t)? To answer this question let we assume that, we
are given the response of Dirac signal ℎ(t) = 𝑻(𝛿(𝑡)) and also we know from the properties of
Dirac function
+∞ +∞
𝑥(𝑡) = ∫ 𝑥(𝜉)𝛿(𝑡 − 𝜉)𝑑𝜉 ⟺ 𝑦(𝑡) = 𝑻(𝑥(𝑡)) = 𝑻 (∫ 𝑥(𝜉)𝛿(𝑡 − 𝜉)𝑑𝜉 )
−∞ −∞

Because of the LTI property of the system we have


+∞ +∞ +∞
𝑻(𝑥(𝑡)) = ∫ 𝑥(𝜉)𝑻(𝛿(𝑡 − 𝜉))𝑑𝜉 = ∫ 𝑥(𝜉)ℎ(𝑡 − 𝜉)𝑑𝜉 ⟹ 𝑦(𝑡) = ∫ 𝑥(𝜉)ℎ(𝑡 − 𝜉)𝑑𝜉
−∞ −∞ −∞

Remark: 01 This last integral equation is called the convolution between the input 𝑢(𝑡)
and ℎ(𝑡) where this linear operator is denoted by star ‘⋆’ and 𝑦(𝑡) = 𝑢(𝑡) ⋆ ℎ(𝑡) = ℎ(𝑡) ⋆ 𝑢(𝑡)
+∞
Remark: 02 Here, in the integral equation 𝑦(𝑡) = ∫−∞ 𝑢(𝜉)ℎ(𝑡 − 𝜉)𝑑𝜉 the function 𝑢(𝑡) is not
necessarily a step signal, but it can be any general input (it is better to denote it by 𝑥(𝑡)).

Homework: prove that the convolution operator is commutative. Hint: you can use change
of variables.

Remark: 03 The impulse response ℎ(𝑡) relates the input to output by inner operator called
convolution, and characterize completely the system. (i.e. The impulse response ℎ(𝑡) reveals
the system properties). Properties of system properties of the impulse response ℎ(𝑡)

Because convolution is fundamental to the analysis and description of LSI systems, in this
section we look at the mechanics of performing convolutions. We begin by listing some
properties of convolution that may be used to simplify the evaluation of the convolution
integral (or sum in discrete).

Properties of convolution in continuous-time systems:

 Commutative: 𝑥1 (𝑡) ⋆ 𝑥2 (𝑡) = 𝑥2 (𝑡) ⋆ 𝑥1 (𝑡)


 Associativity: 𝑥1 (𝑡) ⋆ {𝑥2 (𝑡) ⋆ 𝑥3 (𝑡)} = {𝑥1 (𝑡) ⋆ 𝑥2 (𝑡)} ⋆ 𝑥3 (𝑡)
 Distributivity: 𝑥1 (𝑡) ⋆ {𝑥2 (𝑡) + 𝑥3 (𝑡)} = 𝑥1 (𝑡) ⋆ 𝑥2 (𝑡) + 𝑥1 (𝑡) ⋆ 𝑥3 (𝑡)
𝑑𝑥(𝑡) 𝑑𝑦(𝑡)
 Dirivability: 𝑑(𝑥(𝑡) ⋆ 𝑦(𝑡))/𝑑𝑡 = ⋆ 𝑦(𝑡) = 𝑥(𝑡) ⋆
𝑑𝑡 𝑑𝑡
−1 (𝑡)
 Multiplicative identity: 𝑥(𝑡) ⋆ 𝛿(𝑡) = 𝑥(𝑡) and 𝑥 ⋆ 𝑥(𝑡) = 𝛿(𝑡).

Where 𝑥 −1 (𝑡) stands for the inverse function of 𝑥(t).

In the next page you gave shown a schematic diagram of the convolution properties
Computer program: This program provides a student a good support to plot & simulate
convolution integral & convolution sum. We write a nested program that will convolve two
analytic function f and g in both cases continous and discrete time signals.

clear all, clc,Ts=0.01;


t1=-0.5:Ts:0.5; t2=-1:Ts:1;
x=ones(1,length(t1));
h=(1-abs(t2)).*ones(1,length(t2));
n1=length(x); n2=length(h);
X=[x,zeros(1,n2)];H=[h,zeros(1,n1)];

for i=1:n1+n2-1
Y(i)=0;
for j=1:n1 ℎ(𝑡) 𝑥(𝑡)
if i-j+1>0

Y(i)= Y(i)+ X(j)* H(i-j+1);
else
end;
end;
end

t=-1.5:Ts:1.5;
plot(t1,x,'b','linewidth',3)
grid on
figure
plot(t2,h,'b','linewidth',3)
grid on
figure
plot(t,Ts*Y,'r','linewidth',3)
grid on 𝑦(𝑡) = 𝑥(𝑡) ⋆ ℎ(𝑡)
V. Impulse response discrete-time systems: The
output of a discrete time LTI system is completely
determined by the input and the system's response
to a unit impulse.

We can determine the system's output, y[n], if we


know the system's impulse response, h[n], and the
input, x[n]. The output for a unit impulse input is
called the impulse response.

By the sifting property of impulses, any signal can be decomposed in terms of an infinite
sum of shifted, scaled impulses.
+∞ +∞

𝑥[𝑛] = ∑ 𝑥[𝑘]𝛿[𝑛 − 𝑘] ⟺ 𝑦[𝑛] = 𝑯(𝑥[𝑛]) = 𝑯 (∑ 𝑥[𝑘]𝛿[𝑛 − 𝑘])


−∞ −∞

where 𝑯 is the system operator, and because of the LTI property of the system we have
+∞ +∞ 𝑘=+∞

𝑯(𝑥[𝑛]) = ∑ 𝑥[𝑘]𝑯(𝛿[𝑛 − 𝑘]) = ∑ 𝑥[𝑘]ℎ[𝑛 − 𝑘] ⟺ 𝑦[𝑛] = ∑ 𝑥[𝑘]ℎ[𝑛 − 𝑘]


−∞ −∞ 𝑘=−∞

This is the process known as Convolution. Since we are in discrete time, this is the Discrete
Time Convolution Sum. When a system is "shocked" by a delta function, it produces an
output known as its impulse response. For an LTI system, the impulse response completely
determines the output of the system given any arbitrary input. The output can be found
using discrete time convolution.

Properties of convolution in discrete-time systems:


 Commutative: 𝑥1 [𝑛] ⋆ 𝑥2 [𝑛] = 𝑥2 [𝑛] ⋆ 𝑥1 [𝑛]
 Associativity: 𝑥1 [𝑛] ⋆ {𝑥2 [𝑛] ⋆ 𝑥3 [𝑛]} = {𝑥1 [𝑛] ⋆ 𝑥2 [𝑛]} ⋆ 𝑥3 [𝑛]
 Distributivity: 𝑥1 [𝑛] ⋆ {𝑥2 [𝑛] + 𝑥3 [𝑛]} = 𝑥1 [𝑛] ⋆ 𝑥2 [𝑛] + 𝑥1 [𝑛] ⋆ 𝑥3 [𝑛]
 Differencing: 𝑥[𝑛 − 𝑘0 ] ⋆ 𝑦[𝑛] = 𝑥[𝑛] ⋆ 𝑦[𝑛 − 𝑘0 ]
 Multilinearity identity: 𝛼(𝑥1 [𝑛] ⋆ 𝑥2 [𝑛]) = (𝛼𝑥1 [𝑛]) ⋆ 𝑥2 [𝑛] = 𝑥1 [𝑛] ⋆ (𝛼𝑥2 [𝑛])
❻ Conjugation identity: ̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅
𝑥1[𝑛] ⋆ 𝑥2[𝑛] = ̅̅̅̅̅̅̅̅̅
𝑥1[𝑛] ⋆ ̅̅̅̅̅̅̅̅̅
𝑥2[𝑛]
❼ Multiplicative identity: 𝑥[𝑛] ⋆ 𝛿[𝑛] = 𝑥[𝑛] and 𝑥−1 [𝑛] ⋆ 𝑥[𝑛] = 𝛿[𝑛], where 𝑥 −1 [𝑛] stands for the
inverse function of 𝑥[𝑛].

Example: Prove that the operation of convolution has the following property (Differencing)

𝕕𝑇 (𝑥1 [𝑛] ⋆ 𝑥2 [𝑛]) = 𝕕𝑇 (𝑥1 [𝑛]) ⋆ 𝑥2 [𝑛] = 𝑥1 [𝑛] ⋆ 𝕕𝑇 (𝑥2 [𝑛])

for all discrete time signals 𝑥1 [𝑛], 𝑥2 [𝑛] where 𝕕𝑇 is the time shift operator with 𝑇 ∈ 𝑍.

Solution: we start from one side to arive at the other one


+∞

𝕕𝑇 (𝑥1 [𝑛] ⋆ 𝑥2 [𝑛]) = 𝕕𝑇 ( ∑ 𝑥1 [𝑘]𝑥2 [𝑛 − 𝑘])


𝑘=−∞
+∞

= ∑ 𝑥1 [𝑘]𝑥2 [(𝑛 − 𝑇) − 𝑘]
𝑘=−∞
+∞

= ∑ 𝑥1 [𝑘]𝕕𝑇 (𝑥2 [𝑛 − 𝑘])


𝑘=−∞
= 𝑥1 [𝑛] ⋆ 𝕕𝑇 (𝑥2 [𝑛])
+∞

= ∑ 𝑥2 [𝑘]𝑥1 [(𝑛 − 𝑇) − 𝑘]
𝑘=−∞
= 𝕕𝑇 (𝑥1 [𝑛]) ⋆ 𝑥2 [𝑛]

Observation: Notice that the discrete convolution has the following property
Duration(𝑥1 [𝑛] ⋆ 𝑥2 [𝑛]) = Duration(𝑥1 [𝑛]) + Duration(𝑥2 [𝑛]) − 1
for all discrete time signals 𝑥1 [𝑛], 𝑥2 [𝑛] where Duration(𝑥[𝑛]) gives the duration of a signal 𝑥.

clear all, clc

x=[0 1 1 0 1 1];
h=[0 1 2 3 2 1 0];
n1=length(x); n2=length(h);

X=[x,zeros(1,n2)];
H=[ h,zeros(1,n1)];
s= n1+n2-1;
for i=1:s
Y(i)=0; 𝑥[𝑛] ⋆ ℎ[𝑛]
for j=1:n1
if i-j+1>0
Y(i)=Y(i)+X(j)*H(i-j+1);
else
end;
end;
end
stem(1:n1,x,'b','linewidth',3)
grid on
figure
stem(1:n2,h,'b','linewidth',3)
grid on
figure
stem(1:s, Y,'r','linewidth',3)
grid on
𝑦[𝑛] = 𝑥[𝑛] ⋆ ℎ[𝑛]
VI. Step response The step response 𝑠(𝑡) of a continuous-time LTI system (represented by
the operator 𝑻) is defined to be the response of the system when the input is a Heaviside
function 𝑢(𝑡); that is, 𝑠(𝑡) = 𝑻(𝑢(𝑡)). In many applications, the step response 𝑠(𝑡) is also a
useful characterization of the system. The step response 𝑠(𝑡) can be easily determined by
+∞ ∞
𝑠(𝑡) = 𝑻(𝑢(𝑡)) = ∫ 𝑢(𝜉)ℎ(𝑡 − 𝜉)𝑑𝜉 = ∫ ℎ(𝑡 − 𝜉)𝑑𝜉
−∞ 0
Take change of variable 𝜇 = 𝑡 − 𝜉 and 𝑑𝜇 = −𝑑𝜉 we get:
∞ 𝑡
𝑑𝑠(𝑡)
𝑠(𝑡) = ∫ ℎ(𝑡 − 𝜉)𝑑𝜉 = ∫ ℎ(𝜇)𝑑𝜇 ⟹ ℎ(𝑡) =
0 −∞ 𝑑𝑡

Also you can arrive at the same thing using:

ℎ(𝑡) = 𝑇(𝛿(𝑡))
𝑑𝑢(𝑡) 𝑑
= 𝑇( ) = {𝑇(𝑢(𝑡))}
𝑑𝑡 𝑑𝑡
𝑑𝑠(𝑡)
=
𝑑𝑡

Remark: The step response of a discrete-time LTI system is the convolution of the unit step
with the impulse response:
+∞ +∞ 𝑛

𝑠[𝑛] = 𝑻(𝑢[𝑛]) = ∑ 𝑢[𝑘]ℎ[𝑛 − 𝑘] = ∑ ℎ[𝑘]𝑢[𝑛 − 𝑘] = ∑ ℎ[𝑘] ⟹ ℎ[𝑛] = 𝑠[𝑛] − 𝑠[𝑛 − 1]


−∞ −∞ −∞

VII. Properties of LTI Systems Some of the most important properties of a system are
causality, invertibility, stability and memoryless.

A. Causality: (‫ )السببية‬causal systems do not include future input samples; such system is
practically realizable (i.e. can be implemented), that mean such system can be constructed
practically. Generally all real time systems (either nature or physical reality) are causal
systems, a causal system does not respond to an input event until that event actually
occurs.

Continuous LTI system 𝑦(𝑡) = ℎ(𝑡) ⋆ 𝑥(𝑡) = ∫−∞ ℎ(𝜉)𝑥(𝑡 − 𝜉)𝑑𝜉 is causal if 𝑡 − 𝜉 < 𝑡 ⟹ 𝜉 > 0

∞ 0 ∞ ∞
𝑦(𝑡) = ∫ ℎ(𝜉)𝑥(𝑡 − 𝜉)𝑑𝜉 = ∫ ℎ(𝜉)𝑥(𝑡 − 𝜉)𝑑𝜉 + ∫ ℎ(𝜉)𝑥(𝑡 − 𝜉)𝑑𝜉 = ∫ ℎ(𝜉)𝑥(𝑡 − 𝜉)𝑑𝜉
−∞ −∞ 0 0

Therefore, for a causal continuous-time LTI system, we have ℎ(𝑡) = 0 for all 𝑡 < 0, in other

word we can say: 𝑦(𝑡) = ∫0 ℎ(𝜉)𝑥(𝑡 − 𝜉)𝑑𝜉.

Example: The following examples are for systems with an input 𝑥 and output 𝑦, say if the
system is causal or not?

❶ 𝑦(𝑡) = 1 − 𝑥(𝑡)cos(𝜔𝑡) causal


+∞
❷ 𝑦(𝑡) = ∫ sin(𝑡 + 𝜏) 𝑥(𝜏)𝑑𝜏 non_causal
−∞
1 1
❸ 𝑦𝑛 = 𝑥𝑛−1 + 𝑥𝑛+1 non_causal
2 2
+∞
❹ 𝑦(𝑡) = ∫ 𝑥(𝑡 − 𝜏) 𝑒 −𝛽𝜏 𝑑𝜏 causal
0
❶ The first one is obvious ❷ in the second we take the change of variable 𝜃 = 𝑡 + 𝜏 we
+∞
obtain 𝑦(𝑡) = ∫−∞ sin(𝜃) 𝑥(𝜃 − 𝑡)𝑑𝜃 therefore the system will be causal only if 𝜃 − 𝑡 < 𝑡 which
means that 𝜃 < 2𝑡 but this is not the case, because −∞ < 𝜃 < ∞ hence noncausal system.
+∞
❸ The third one is obvious and lastly ❹ 𝑦(𝑡) = ∫0 𝑥(𝑡 − 𝜏) 𝑒 −𝛽𝜏 𝑑𝜏 the system will be causal
only if 𝑡 − 𝜏 < 𝑡 which means that 𝜏 > 0 which is the case, hence a causal system.

Example: The following examples are for systems with an input 𝑥 and output 𝑦, say if the
system is causal or not?

▪ 𝑦(𝑡) = 𝑥(𝑡) + 3𝑥 2 (𝑡 − 1) causal system


▪ 𝑦(𝑡) = cos 2 (𝑡) 𝑥(𝑡) causal system
𝑡−2 ∞
▪ 𝑦(𝑡) = 2 + ∫−∞ 𝑥(𝑡 − 𝜏)𝑑𝜏 noncausal system, 𝑦(𝑡) = 2 + ∫2 𝑥(𝜃)𝑑𝜃 (i.e. 𝜃 is not less than 𝑡)
𝑡−1
▪ 𝑦(𝑡) = 𝑒 −𝑡 ∫−∞ 𝑒 𝜏 𝑥(𝜏)𝑑𝜏 causal system, because is linear and ℎ(𝑡) = 𝑒 −𝑡 𝑢(𝑡 − 1)
𝑡 2𝑡
▪ 𝑦(𝑡) = ∫−∞ 𝑥(2𝜏)𝑑𝜏 noncausal system, 𝑦(𝑡) = 0.5 ∫−∞ 𝑥(𝜃)𝑑𝜃 (i.e. 𝜃 is not less than 𝑡)
▪ 𝑦(𝑡) = 𝑥(−𝑡) noncausal system, because we need 𝑡 less than 0 (not the case)
▪ 𝑦(𝑡) = 𝑥(3𝑡) noncausal system, because always 3𝑡 > 𝑡
𝑡 ∞
▪ 𝑦(𝑡) = ∫−∞ 𝑒 𝑡−𝜎 𝑥 2 (𝜎)𝑑𝜎 causal system, because 𝑦(𝑡) = ∫0 𝑒 𝜃 𝑥 2 (𝑡 − 𝜃)𝑑𝜃 ⟹ 𝜃 > 0
𝑡 ∞
▪ 𝑦(𝑡) = 𝑥(𝑡) ∫−∞ 𝑒 −𝜏 𝑥(𝜏)𝑑𝜏 causal system, because 𝑦(𝑡) = 𝑒 −𝑡 𝑥(𝑡) ∫0 𝑒 𝜃 𝑥(𝑡 − 𝜃)𝑑𝜃 ⟹ 𝜃 > 0
𝑡 ∞
▪ 𝑦(𝑡) = ∫−∞ 𝑒 𝑡−𝜎 𝑥 2 (𝜎)𝑑𝜎 causal system, because 𝑦(𝑡) = ∫0 𝑒 𝜃 𝑥 2 (𝑡 − 𝜃)𝑑𝜃 ⟹ 𝜃 > 0
▪ 𝑦(𝑡) = 3𝑥(𝑡 + 1) + 4 noncausal system
𝑡
▪ 𝑦(𝑡) = −3𝑥(𝑡) + ∫0 3𝑥(𝜎)𝑑𝜎 causal system

Remark: 01 Causality is a necessity if the independent variable is time, but not all systems
have time as an independent variable. For example, a system that processes still images
does not need to be causal (spatial independent variable). Also non-causal systems can be
built and can be useful in many circumstances. Even non-real systems can be built and
are very useful in many contexts.

Remark: 02 Unlike continuous-time 'CT' systems, non-causal discrete-time 'DT' systems


can be realized. It is trivial to make an acausal FIR system causal by adding delays. It is
even possible to make acausal IIR (Applications of DSP).

B. Stability: (‫ )اإلستقرارية‬There are many types of stability of systems, but this section we
are interested only by the so called BIBO stability which means that if a LTI system is
bounded input then it should be of bounded output. Assume that |𝑢(𝑡)| < 𝑀 for all 𝑡 ∈ ℝ
what is about |𝑦(𝑡)| ?
+∞ +∞
|𝑦(𝑡)| = |∫ 𝑢(𝜏)ℎ(𝑡 − 𝜏) 𝑑𝜏| ≤ ∫ |𝑢(𝜏)ℎ(𝑡 − 𝜏)|𝑑𝜏
−∞ −∞
+∞
≤∫ |𝑢(𝜏)||ℎ(𝑡 − 𝜏)|𝑑𝜏
−∞
+∞
≤ 𝑀∫ |ℎ(𝑡 − 𝜏)|𝑑𝜏
−∞

Then the output 𝑦(𝑡) is absolutely bounded iff is absolutely integrible, that is,
+∞ +∞
∫ |ℎ(𝑡 − 𝜏)|𝑑𝜏 < ∞ 𝑜𝑟 ∫ |ℎ(𝜂)|𝑑𝜂 < ∞
−∞ −∞
Remarks: As a physical interpretation of a stability, it means that the dissipated energy of
a system cannot remain constant, hence it keep decreasing until it eventually reach zero,
this is when we are dealing with non-forced systems. But when a system is forced it may
remain constant but not ∞. If a system is perturbed from its rest, or equilibrium position,
then it starts to move. We can roughly say that the equilibrium position is stable if the
system does not go far from this position for small initial perturbations.

Example: You are given the impulse response of LTI system, which one of them is BIBO
stable? (𝑡 > 0)
2 2
■ ℎ(𝑡) = 𝑒 −𝑡 ■ ℎ(𝑡) = 𝑒 𝑡 ■ ℎ(𝑡) = 𝑒 −0.5𝑡 ■ ℎ(𝑡) = 𝐴. sin(𝜔𝑡)

Answer: The 2nd and 4th are unstable, but 1st and 3rd are BIBO stable. You are asked to
check them at home (do it as homework!!)

C. Memoryless: (‫ )التعلقية اللحظية‬as observed earlier, a system's output at any instant 𝑡


generally depends upon the entire past input. However, in a special class of systems, the
output at any instant 𝑡 depends only on its input at that instant. Such systems are said to
be instantaneous or memoryless systems. Otherwise, the system is said to be dynamic (or a
system with memory). Networks containing inductive and capacitive elements generally
have infinite memory because the response of such networks at any instant 𝑡 is determined
by their inputs over the entire past (−∞ 0). In this module we will generally examine
dynamic systems. Instantaneous systems are a special case of dynamic systems, and also a
special case of causal systems.

A linear time invariant system is called memoryless if the output depends only on the input
at the present time that is 𝑦(𝑡) = 𝑘𝑢(𝑡) or equivalently ℎ(𝑡) = 𝑘𝛿(𝑡)

A memoryless system is always causal (as it doesn't depend on future input values), but a
causal system doesn't need to be memoryless (because it may depend on past input or
output values).

Example: (Memoryless system-1) As an example of memoryless systems we have a “Gear


train, rotational transformer”, where A gear or cogwheel is a rotating machine part having
cut teeth, and those geared devices can change the speed, torque, and direction of a power
source.

𝑁1 𝜃𝑙 𝜔𝑙
Gear ratio = 𝑛 = and = =𝑛
𝑁2 𝜃𝑚 𝜔𝑚
Example: (Dynamic system) A latch or a flip-flop is a circuit that
has two stable states and can be used to store finite state
information. Latches and flip-flops are used as data storage
elements. Such data storage can be used for storage of state,
(smallest storage element of memory)𝑸(𝒏)=𝐋𝐨𝐠𝐢𝐜_𝐟𝐮𝐧𝐜𝐭𝐢𝐨𝐧(𝑸(𝒏−𝟏),𝑺,𝑹)

Example: (Memoryless system-2) given a system defined by its


input output map (operator): 𝑦(𝑡) = sin(𝜔𝑡 + 𝜑) 𝑥(𝑡). Check if the system is BIBO
stable/unstable, memoryless/dynamic and time invariant/varying.

Answer: The impulse response of this system ℎ(𝑡) = sin(𝜔𝑡 + 𝜑) 𝛿(𝑡). From the properties of
delta function we can write ℎ(𝑡) = 𝑀𝛿(𝑡), where: 𝑀 = sin(𝜑), the system is BIBO stable
because |𝑦(𝑡)| = |sin(𝜔𝑡 + 𝜑) 𝑥(𝑡)| < |sin(𝜔𝑡 + 𝜑)|. |𝑥(𝑡)| < |𝑥(𝑡)| = 𝑀 < ∞. The system also is
memoryless and linear time varying.

Example: (Memoryless system-3) The input output


operator of an operational Amplifier is given by:

𝑉out 𝑅f 𝑅f
= (1 + ) = const ⇒ ℎ(𝑡) = (1 + ) 𝛿(𝑡)
𝑉𝑖𝑛 𝑅g 𝑅g

A system is memoryless if its output for each value of the independent variable as a given
time is dependent only on the input at the same time. For example: 𝑦[𝑛] = (2𝑥[𝑛] − 𝑥 2 [𝑛])2 is
memoryless. A resistor is a memoryless system, since the input current and output voltage
has the relationship 𝑦[𝑛] = 𝐾𝑥[𝑛]. An example of a discrete-time system with memory is an
accumulator or summer. Another example is a delay. A capacitor and inductor are examples
of a continuous-time system with memory.

D. Invertible system: (‫ )النظام العكوس‬If we can obtain the input 𝑢(𝑡) back from the output 𝑦(𝑡)
by some operation, the system 𝑆 is said to be invertible. For a noninvertible system,
different inputs can result in the same output (as in a rectifier), and it is impossible to
determine the input for a given output. In rectifier circuit we have 𝑦(𝑡) = √𝑢2 (𝑡) = |𝑢(𝑡)|.
Therefore, for an invertible system, it is essential that distinct inputs result in distinct
outputs so that there is one-to-one mapping between an input and the corresponding
output. This ensures that every output has a unique input.
Example: An ideal differentiator is noninvertible because
integration of its output cannot restore the original signal
unless we know one piece of information about the signal.

𝑑𝑢(𝑡)
𝑦(𝑡) = −𝑅𝐶
𝑑𝑡

Example: Encoder in communication systems is an example of invertible system, that is,


the input to the encoder must be exactly recoverable from the output.

Homework: Show that a system described by the input output relation 𝑦(𝑡) = 𝑢2 (𝑡) is
noninvertible.

Definition: 1 LTI system that have ℎ(𝑡) as its impulse response is said to be invertible (i.e.
both left and right invertiblity) if and only if there exist a contiounous function 𝑔(𝑡) such
that: 𝑔(𝑡) ⋆ ℎ(𝑡) = 𝛿(𝑡) and ℎ(𝑡) ⋆ 𝑔(𝑡) = 𝛿(𝑡)

𝑦(𝑡) = 𝑻{𝑢(𝑡)} 𝑢(𝑡) = 𝑳{𝑦(𝑡)} = 𝑻−1 {𝑦(𝑡)}

Therefore we have: ℎ(𝑡) = 𝑻{𝛿(𝑡)} ⬌ 𝛿(𝑡) = 𝑻−1 {ℎ(𝑡)} = 𝑳(ℎ(𝑡))

Mathematical abstraction: In terms of the mapping concept, a system is said to be a many-


to-one mapping because many different inputs can result in a particular output, but a
given input cannot result in more than one output. Note that the system inverse is a
system, so that its operator must be a many-to-one mapping of the set of its inputs, 𝑌, to
the set of its outputs, 𝑋. We thus conclude that a system inverse exists if and only if the
system operator is a one-to-one mapping.

Example: Find the inverse system of an accumulator 𝑦[𝑛] = ∑𝑘=𝑛𝑘=−∞ 𝑥[𝑘]. Ans: by using the
properties of sum one can verify that 𝑥[𝑛] = 𝑦[𝑛] − 𝑦[𝑛 − 1].

Properties of Continuous -time Systems Properties of Discrete-time Systems

Causality: ℎ(𝑡) = 0 fot 𝑡 < 0 Causality: ℎ[𝑛] = 0 fot 𝑛 < 0


+∞ +∞
Stability: ∫−∞ |ℎ(𝜏)|𝑑𝜏 < ∞ Stability: ∑−∞|ℎ[𝑘]| < ∞
Memoryless: ℎ(𝑡) = 𝑀𝛿(𝑡) Memoryless: ℎ[𝑛] = 𝐾𝛿[𝑛]
Invertiblity: ∃𝑔(𝑡) such that 𝑔(𝑡) ⋆ ℎ(𝑡) = 𝛿(𝑡) Invertiblity: ∃𝑔[𝑛] such that 𝑔[𝑛] ⋆ ℎ[𝑛] = 𝛿[𝑛]

VIII. Eigenfunction of Continuous-time LTI Systems In mathematics, an eigenfunction


of a linear operator 𝑻 defined on some function space is any non-zero function f in that
space that, when acted upon by 𝑻, is only multiplied by some scaling factor called an
eigenvalue. As an equation, this condition can be written as 𝑻(𝐟) = 𝜆 𝐟 . In system
engineering a linear system maps inputs 𝑢(𝑡) to outputs 𝑦(𝑡), assume that the
eigenfunction of LTI system is denoted by 𝑣(𝑡) therefore, 𝑦(𝑡) = 𝑻{𝑣(𝑡)} = 𝜆𝑣(𝑡).

Where: ⦁ 𝜆 is called the characteristic or the eigenvalue of the operator 𝑻 .


⦁ 𝑣(𝑡) is called the eigenfunction of LTI operator 𝑻 corresponding to 𝜆.
Lemma: If 𝑆 is a linear-time-invariant system modeled mathematically by linear mapping
𝑻, then 𝐚 𝐩𝐡𝐚𝐬𝐨𝐫 𝑣(𝑡) = 𝑒 𝑠𝑡 is an eigenfunction of 𝑆.

Proof: The convolution of the phasor input 𝑣(𝑡) = 𝑒 𝑠𝑡 and ℎ(𝑡) gives:

𝑒 𝑠𝑡 𝜆 𝑒 𝑠𝑡
+∞
𝑦(𝑡) = ∫ 𝑒 𝑠𝜏 ℎ(𝑡 − 𝜏)𝑑𝜏 = 𝑻{𝑒 𝑠𝑡 }
−∞ 𝜆 𝑒 𝑠𝑡 = ℎ(𝑡) ⋆ 𝑒 𝑠𝑡

When we excite LTI system by a phasor input 𝑣(𝑡) = 𝑒 𝑠𝑡 then the response will be the
convolution of ℎ(𝑡) with 𝑣(𝑡), that is:
+∞ +∞
𝑦(𝑡) = 𝑻{𝑒 𝑠𝑡 } = ∫ ℎ(𝜏)𝑒 𝑠(𝑡−𝜏) 𝑑𝜏 = 𝑒 𝑠𝑡 ∫ ℎ(𝜏)𝑒 −𝑠𝜏 𝑑𝜏 = 𝜆𝑒 𝑠𝑡 ⬌ 𝑦(𝑡) = 𝜆𝑣(𝑡) ■
−∞ ⏟−∞
𝐻(𝑠)

Remark: Every LTI system has 𝑣(𝑡) = 𝑒 𝑠𝑡 as its eigenfunction with 𝑠 ∈ ℂ is any complex
+∞
number. Now we define a new integral transformation 𝜆 = 𝐻(𝑠) = ∫−∞ ℎ(𝜏)𝑒 −𝑠𝜏 𝑑𝜏 which is
called mathematically the Laplace transformation of impulse response, and in system
engineering is named transfer function of the LTI system and it characterize completely the
system “i.e. characteristic value”.

IX. Eigenfunction of Discrete-time LTI Systems As what we have seen before, a linear
time invariant system is a linear operator defined on a function space that commutes with
every time shift operator on that function space. Thus, we can also consider the eigenvector
functions, or eigenfunctions, of a system. It is particularly easy to calculate the output of a
system when an eigenfunction is the input as the output is simply the eigenfunction scaled
by the associated eigenvalue. As will be shown, discrete time complex exponentials serve as
eigenfunctions of linear time invariant systems operating on discrete time signals.

Consider a linear time invariant system 𝑯 with impulse response ℎ[𝑛] operating on some
space of infinite length discrete time signals. Recall that the output 𝑯(𝑥[𝑛]) of the system
for a given input 𝑥[𝑛] is given by the discrete time convolution of the impulse response with
the input
𝑘=+∞

𝑯(𝑥[𝑛]) = ∑ ℎ[𝑘]𝑥[𝑛 − 𝑘]
𝑘=−∞
𝑛
Now consider the input 𝑥[𝑛] = 𝑧 where 𝑧 ∈ ℂ . Computing the output for this input,
𝑘=+∞ 𝑘=+∞ 𝑘=+∞

𝑯(𝑧 𝑛 ) = ∑ ℎ[𝑘]𝑧 𝑛−𝑘 = ( ∑ ℎ[𝑘]𝑧 −𝑘 ) 𝑧 𝑛 = 𝜆𝑧 𝑛 where 𝜆 = ∑ ℎ[𝑘]𝑧 −𝑘


𝑘=−∞ 𝑘=−∞ 𝑘=−∞

Therefore, 𝜆 is the eigenvalue corresponding to the eigenvector 𝑧 𝑛 .

Remark: Now we define a new accumulation transformation 𝜆 = 𝐻(𝑧) = ∑+∞ −𝑘


−∞ ℎ[𝑘]𝑧 , which
is called mathematically the Z-transformation, and in system engineering is named discrete
transfer function of the LTI system and it characterize completely the system.

In steady state, the response to a complex exponential (or a sinusoid) of a certain frequency
is the same complex exponential (or sinusoid).
Solved Problems:
Exercise 1: determine whether the following continuous time system is linear or not
𝟏. 𝑦(𝑡) = 𝑥(sin(𝑡)) 𝟐. 𝑦(𝑡) = sin(𝑥(𝑡))
𝟑. 𝑦(𝑡) = 𝑡 2 𝑥(𝑡 − 1) 𝟒. 𝑦(𝑡) = 𝑒 𝑥(𝑡)

Ans: linear, nonlinear, linear, nonlinear.

Exercise 2: determine whether the following discrete time system is linear or not

𝟏. 𝑦[𝑛] = ln(𝑥[𝑛]) 𝟐. 𝑦[𝑛] = 𝑥[𝑛] − 𝑥[𝑛 − 1]


𝟑. 𝑦[𝑛] = 𝑥 2 [𝑛] + 𝑥 2 [𝑛 − 1] 𝟒. 𝑦[𝑛] = 2𝑥[𝑛] + 4

Ans: nolinear, linear, nonlinear, nonlinear.

Exercise 3: check whether the following continuous time system is causal or not
4𝑡
𝟓. 𝑦(𝑡) = ∫ 𝑥(𝑡)𝑑𝑡
𝟏. 𝑦(𝑡) = 𝑡𝑥(𝑡) 𝟐. 𝑦(𝑡) = 𝑥(𝑡 2 ) −∞
𝟑. 𝑦(𝑡) = 𝑥 2 (𝑡) 𝟒. 𝑦(𝑡) = 𝑥(sin(𝑡)) 𝑑
𝟔. 𝑦(𝑡) = 𝑥(𝑡)
𝑑𝑡
Ans: causal, non-causal, causal, causal, non-causal, causal.

Exercise 4: check whether the following continuous time system is stable or not (i.e.
compute the lim𝑡→∞ |𝑦(𝑡)/𝑥(𝑡)| when |𝑥(𝑡)| < 𝑀)
𝑡

𝟏. 𝑦(𝑡) = 𝑒 𝑥(𝑡)
𝟐. 𝑦(𝑡) = sin(𝑡) 𝑥(𝑡) 𝟓. 𝑦(𝑡) = ∫ 𝑥(𝑡)𝑑𝑡
−∞
𝟑. 𝑦(𝑡) = 𝑒 𝑡 𝑥(𝑡) 𝟒. 𝑦(𝑡) = √𝑥(sin(𝑡)) 𝑥(𝑡 − 2)
𝟔. 𝑦(𝑡) = 2
𝑡 +1

Ans: stable, stable, non-stable, stable, non-stable, stable.

Exercise 5: determine whether the following continuous time system is invertible or not
(i.e. one to one operator or not)

𝟏. 𝑦(𝑡) = 𝑥 3 (𝑡) 𝟐. 𝑦(𝑡) = 3𝑥(𝑡 − 2) + 4𝑡


4
𝟑. 𝑦(𝑡) = 𝑥 (𝑡 − 1) 𝟒. 𝑦(𝑡) = |𝑥(𝑡)| + 𝑡

Ans: invertible, invertible, non-invertible, non-invertible.

Exercise 6: this problem demonstrates the significations of absolute integrability of the


Impulse response for BIBO stability. LTI system has impulse response (IR)
∞ (−1)𝑘
ℎ(𝑡) = ∑ 𝛿(𝑘 − 𝑡)
𝑘=1 𝑘
+∞
𝟏. Show that ℎ(𝑡) is integrable ∫−∞ ℎ(𝑡)𝑑𝑡 < ∞
+∞
𝟐. Show that ℎ(𝑡) is not absolute integrable ∫−∞ |ℎ(𝑡)|𝑑𝑡 ⟶ ∞
𝟑. Provide an example of a bounded input that yields an unbounded output.
𝑥 𝑘−1 ∞ ∞ (−1)𝑘−1 1 ∞ (−1)𝑘
𝐀𝐧𝐬: 𝟏. Because ln(1 − 𝑥) = ∑ ⇒ ln(2) = ∑ ⇒ ln ( ) = ∑
𝑘=1 𝑘 𝑘=1 𝑘 2 𝑘=1 𝑘
+∞ +∞ (−1)𝑘 (−1)𝑘
∞ ∞ 1
⇒ ∫ ℎ(𝑡)𝑑𝑡 = ∑ ∫ 𝛿(𝑘 − 𝑡) 𝑑𝑡 = ∑ = ln ( )
−∞ 𝑘=1 −∞ 𝑘 𝑘=1 𝑘 2
+∞ ∞ +∞ (−1)𝑘 ∞ 1
𝟐. ∫ |ℎ(𝑡)|𝑑𝑡 = ∑ ∫ | | 𝛿(𝑘 − 𝑡) 𝑑𝑡 = ∑ =∞
−∞ 𝑘=1 −∞ 𝑘 𝑘=1 𝑘

𝟑. There are an infinite number of examples but we give a few to make things clear

1 1
𝑦(𝑡) = & 𝑦(𝑡) = 𝑒 𝑡𝑥(𝑡) & 𝑦(𝑡) =
𝑥(𝑡) − 𝑀 𝑡𝑥(𝑡) + 1

Exercise 7: check whether the following LTI continuous time system described by its
impulse response is stable or not? ℎ(𝑡) = 𝑒 −𝑎𝑡 sin(𝑎𝑡) 𝑢(𝑡)

Ans:
+∞ +∞ +∞ +∞
∫ |ℎ(𝑡)|𝑑𝑡 = ∫ |𝑒 −𝑎𝑡 sin(𝑎𝑡) 𝑢(𝑡)|𝑑𝑡 < ∫ 𝑒 −𝑎𝑡 |sin(𝑎𝑡)|𝑑𝑡 < ∫ 𝑒 −𝑎𝑡 𝑑𝑡
−∞ −∞ 0 0

+∞
1
∫ |ℎ(𝑡)|𝑑𝑡 < The system is BIBO stable.
−∞ 𝑎

Exercise 8: convolve the following signals (i.e. 𝑦[𝑛] = 𝑥[𝑛] ⋆ ℎ[𝑛])


1 𝑛
𝟏. 𝑥[𝑛] = 𝑢[𝑛] − 𝑢[−𝑛 − 1] & ℎ[𝑛] = { 2)
( 𝑛≥0
2𝑛 𝑛<0
1 𝑛 1 𝑛
𝟐. 𝑥[𝑛] = ( ) 𝑢[𝑛] & ℎ[𝑛] = ( ) 𝑢[𝑛]
3 6
𝟑. 𝑥[𝑛] = 2𝑛 𝑢[−𝑛] & ℎ[𝑛] = 𝑢[𝑛]
1 𝑛 1 𝑛
𝟒. 𝑥[𝑛] = ( ) 𝑢[𝑛] & ℎ[𝑛] = ( ) 𝑢[𝑛]
2 4
Ans:
+∞ +∞
𝟏. 𝑦[𝑛] = 𝑥[𝑛] ⋆ ℎ[𝑛] = ∑ 𝑥[𝑘] ℎ[𝑛 − 𝑘] = ∑ ℎ[𝑘]𝑥[𝑛 − 𝑘] But we have:
𝑘=−∞ 𝑘=−∞

+∞ +∞
𝑥[𝑛] = 𝑢[𝑛] − 𝑢[−𝑛 − 1] = 1 ∀𝑛 then 𝑦[𝑛] = ∑ ℎ[𝑘]𝑥[𝑛 − 𝑘] = ∑ ℎ[𝑘]
𝑘=−∞ 𝑘=−∞

+∞ 1 𝑘 +∞
𝑘=−1
𝑘
+∞ 1 𝑘 2
⟹ 𝑦[𝑛] = ∑ ℎ[𝑘] = ∑ ( ) +∑ 2 = 2 (∑ ( ) )−1= −1=3 ∀𝑛
𝑘=−∞ 𝑘=0 2 𝑘=−∞ 𝑘=0 2 1
1−2

+∞ 1 𝑘 1 𝑛−𝑘 𝑛 1 𝑘 1 𝑛−𝑘
𝟐. 𝑦[𝑛] = 𝑥[𝑛] ⋆ ℎ[𝑛] = ∑ ( ) 𝑢[𝑘] ( ) 𝑢[𝑛 − 𝑘] = ∑ ( ) ( )
𝑘=−∞ 3 6 𝑘=0 3 6

1 𝑛 𝑛
𝑘
1 𝑛 1 − 2𝑛+1 1 𝑛 1 𝑛
⟹ 𝑦[𝑛] = ( ) ∑ 2 = ( ) { } 𝑢[𝑛] = [2 ( ) − ( ) ] 𝑢[𝑛]
6 𝑘=0 6 1−2 3 6
0
+∞ ∑ 2𝑘 = 2 if 𝑛 ≥ 0
𝑘 −∞
𝟑. 𝑦[𝑛] = ∑ 2 𝑢[−𝑘]𝑢[𝑘]𝑢[𝑛 − 𝑘] = 𝑘=𝑛
𝑘=−∞
∑ 2𝑘 if 𝑛 < 0
{ −∞

0 1 𝑘
𝑘

∑ 2 =∑ ( ) =2 for 𝑛 ≥ 0
−∞ 0 2
𝑦[𝑛] =
𝑘=𝑛 ∞ 1 𝑘 ∞ 1 𝑘
∑ 2 = ∑ ( ) = 2 ∑ ( ) = 2𝑛+1
𝑘 𝑛
for 𝑛 < 0
{ −∞ −𝑛 2 0 2

1 𝑘 1 𝑛−𝑘
+∞ 1 𝑛 𝑛
𝑘
1 𝑛 𝑛
𝟒. 𝑦[𝑛] = 𝑥[𝑛] ⋆ ℎ[𝑛] = ∑ ( ) 𝑢[𝑘] ( ) 𝑢[𝑛 − 𝑘] = ( ) ∑ 2 ⟺ 𝑦[𝑛] = ( ) [2 − 1]𝑢[𝑛]
𝑘=−∞ 2 4 4 0 4

Exercise 9: consider LTI discrete system described by it impulse response (IR)


1 𝑛
ℎ[𝑛] = { 2)
( 𝑛≥0
2𝑛 𝑛<0
𝟏. Is the system causal, stable? Justify 𝟐. Determine 𝑦[𝑛] for the input 𝑥[𝑛] = 𝑢[𝑛]
Ans:
1 𝑛
𝟏. ℎ[𝑛] = ( ) 𝑢[𝑛] + 2𝑛 𝑢[−𝑛 − 1] ⇒ ℎ[𝑛] ≠ 0 when 𝑛 < 0 ⇒ noncausal
2
+∞ +∞ 1 𝑘 −1
𝑘
+∞ 1 𝑘
∑ |ℎ[𝑛]| ≤ ∑ ( ) + ∑ 2 = 2 (∑ ( ) ) − 1 = 3 ⇒ BIBO stable
−∞ 0 2 −∞ 0 2
𝑛
↗ case1 𝑛 ≥ 0
𝟐. 𝑦[𝑛] = 𝑢[𝑛] ⋆ ℎ[𝑛] = ∑ ℎ[𝑘]
−∞ ↘ case2 𝑛 < 0

−1 𝑛 1 𝑘 1 𝑛+1
∑ 2𝑘 + ∑ ( ) = 1 + 2 (1 − ( ) ) if 𝑛 ≥ 0
−∞ 0 2 2
𝑦[𝑛] =
𝑘=𝑛 1 𝑘 ∞ 1 𝑘 ∞
∑ 2 = ∑ ( ) = 2 ∑ ( ) = 2𝑛+1 if 𝑛 < 0
𝑘 𝑛
{ −∞ −𝑛 2 0 2

1 𝑛
⟹ 𝑦[𝑛] = {3 − ( ) } 𝑢[𝑛] + 2𝑛+1 𝑢[−𝑛 − 1]
2

Exercise 10: check whether the following LTI discrete time system described by its impulse
response (IR) is stable or not, causal or not?
1 𝑛 1 𝑛
𝟏. ℎ[𝑛] = 𝑛 ( ) 𝑢[𝑛] 𝟐. ℎ[𝑛] = 4𝑛 𝑢[2 − 𝑛] 𝟑. ℎ[𝑛] = ( ) 𝑢[𝑛] + (1.01)𝑛 𝑢[1 − 𝑛]
2 2
1 𝑛
Ans: ❶ ℎ[𝑛] = 𝑛 (2) 𝑢[𝑛] is BIBO stable because ℎ[𝑛] is absolutely summable i.e.
1
+∞ 1 𝑛 +∞
+∞ 1 𝑛 2
∑ |ℎ[𝑛]| = ∑ |𝑛 ( ) 𝑢[𝑛]| = ∑ 𝑛 ( ) = = 2 < ∞ → BIBO stable
−∞ −∞ 2 0 2 1 2
(1 − 2)
+∞ 𝑑 +∞ 𝑑 1 𝑥
We have used the fact that if 𝑥 < 1 then: ∑ 𝑛𝑥 𝑛 = 𝑥 ∑ 𝑥𝑛 = 𝑥 ( )=
0 𝑑𝑥 0 𝑑𝑥 1 − 𝑥 (1 − 𝑥)2

Also the system is causal because ℎ[𝑛] = 0 ∀𝑛 <0.


❷ ℎ[𝑛] = 4𝑛 𝑢[2 − 𝑛] is BIBO stable because ℎ[𝑛] is absolutely summable i.e.
+∞ +∞ 2 +∞ 1 𝑛
∑ |ℎ[𝑛]| = ∑ |4𝑛 𝑢[2 − 𝑛]| = ∑ 𝑛
4 =∑ ( )
−∞ −∞ −∞ −2 4
+∞ 1 −2 1 −1 +∞ 1 𝑛 1 8
∑ |ℎ[𝑛]| = ( ) + ( ) + ∑ ( ) = 20 + =
4 4 𝑛=0 4 1
−∞ 1−4 3

The system is not causal because ℎ[𝑛] ≠ 0 for 𝑛 < 0 .

1 𝑛
❸ ℎ[𝑛] = (2) 𝑢[𝑛] + (1.01)𝑛 𝑢[1 − 𝑛]is BIBO stable because ℎ[𝑛] is absolutely summable i.e.

+∞ +∞ 1 𝑘 1 Where
∑ |ℎ[𝑛]| ≤ ∑ ( ) + ∑ (1.01)𝑘
−∞ 𝑘=0 2 −∞
+∞ 1 𝑘
∞ 1 𝑘 1
+∞ ∞ 1 𝑘 ∑ ( ) = = 101
∑ |ℎ[𝑛]| ≤ ∑ ( ) +∑ ( ) 𝑘=0 1.01 1
−∞ 𝑘=0 2 −1 1.01 1 −
1.01
+∞ +∞ 1 𝑘 ∞ 1 𝑘 +∞ 1 𝑘 1
∑ |ℎ[𝑛]| ≤ ∑ ( ) +∑ ( ) + 1.01 ∑ ( ) = =2
−∞ 𝑘=0 2 𝑘=0 1.01 𝑘=0 2 1
1−2
Hence, ∑+∞
−∞|ℎ[𝑛]| ≤ 104.01. The system is not causal because ℎ[𝑛] ≠ 0 for 𝑛 < 0 .

Exercise 11: you are given an input output sequences {𝑥0 [𝑛], 𝑦0 [𝑛]} for a particular LTI
discrete system, where 𝑥0 [𝑛] = {0,1, 𝟐, 1,0} & 𝑦0 [𝑛] = {−1, −2, 𝟎, 2,1} try to find a relation
between input output sequences and determine the response for 𝑥1 [𝑛] = {𝟎, 1,2,3,4,3,2,1,0}
And determine the impulse response (IR).

Ans: graphically we notice that 𝑦0 [𝑛] = 𝑥0 [𝑛 − 1] − 𝑥0 [𝑛 + 1]

And because of LTI properties we can say that 𝑦1 [𝑛] = 𝑥1 [𝑛 − 1] − 𝑥1 [𝑛 + 1]

𝑥1 [𝑛] ={𝟎, 1,2,3,4,3,2,1,0} ⟹ 𝑦1 [𝑛] ={−𝟏, −2, −2, −2,0,2,2,2,1}


↑ ↑
𝑦[𝑛] = 𝑥[𝑛 − 1] − 𝑥[𝑛 + 1] ↔ ℎ[𝑛] = 𝛿[𝑛 − 1] − 𝛿[𝑛 + 1] = {0, −1, 𝟎, 1,0}

Exercise 12: determine and sketch the impulse response of LTI system described by:
𝑦[𝑛] = 𝑥[𝑛] − 2𝑥[𝑛 − 2] + 𝑥[𝑛 − 3] − 3𝑥[𝑛 − 4]

Ans: we know that 𝑦[𝑛] = 𝐓{𝑥[𝑛]} ↔ ℎ[𝑛] = 𝐓{𝛿[𝑛]}, means that:

ℎ[𝑛] = 𝛿[𝑛] − 2𝛿[𝑛 − 2] + 𝛿[𝑛 − 3] − 3𝛿[𝑛 − 4] = {𝟏, 0, −2,1, −3}


Exercise 13: consider the system described by: 𝑦[𝑛] = ∑∞
𝑘=0 𝑥[𝑛 − 𝑘]. Determine its impulse
response. What is its application? Is it stable? Justify

Ans: because of LTI properties we can say 𝑦[𝑛] = 𝐓{𝑥[𝑛]} ↔ ℎ[𝑛] = 𝐓{𝛿[𝑛]}, means that:
ℎ[𝑛] = ∑∞
𝑘=0 𝛿[𝑛 − 𝑘] = 𝑢[𝑛]. The system implements a digital integrator (i.e. accumulator) The
system is not stable because is not absolutely summable ∑+∞ −∞|ℎ[𝑛]| = ∞.

Exercise 14: a discrete time LTI system described by it impulse response ℎ[𝑛] = 𝛼 𝑛 𝑢[𝑛]

𝟏. Is the system causal, stable? Justify


𝟐. Determine 𝑦[𝑛] for the input 𝑥[𝑛] = 𝑢[𝑛], and for the input 𝑥[𝑛] = 𝛽 𝑛 𝑢[𝑛].

Ans: 𝟏. The system is always causal because {ℎ[𝑛] = 0 ∀ 𝑛 < 0}, but the stability depends
+∞
on the value of 𝛼 ∑+∞ +∞ 𝑛
−∞|ℎ[𝑛]| = ∑−∞|𝛼 𝑢[𝑛]| = ∑0 |𝛼|
𝑛

+∞ 1
Case I: |𝛼| < 1 ∑+∞ 𝑛
−∞|ℎ[𝑛]| = ∑0 |𝛼| = 1−𝛼 < ∞ ⟹ BIBO stable system
Case II: |𝛼| ≥ 1 ∑+∞
−∞|ℎ[𝑛]| = ∑+∞
0 |𝛼|
𝑛
=∞ ⟹ unstable system

The system is stable when the absolute value of the exponent 𝛼 is inside the unit disk. This
fact is extremely important and it is the property of locating the eigenvalues of the system
within the unitary disk. We will discuss this topic later in the next chapters when talking
about digital systems.

𝟐. Let we compute the output of the system when 𝑥[𝑛] = 𝑢[𝑛] we have

𝑦[𝑛] = 𝑇{𝑥[𝑛]} = 𝑥[𝑛] ⋆ ℎ[𝑛] = ℎ[𝑛] ⋆ 𝑥[𝑛]

1 𝑛+1
+∞ +∞ 𝑛 1 − (𝛼 ) 𝛼 𝑛+1 − 1
𝑦[𝑛] = ∑ 𝑥[𝑘] ℎ[𝑛 − 𝑘] = ∑ 𝑢[𝑘]𝛼 𝑛−𝑘 𝑢[𝑛 − 𝑘] = 𝛼 𝑛 ∑ 𝛼 −𝑘 = 𝛼 𝑛 =
𝑘=−∞ 𝑘=−∞ 𝑘=0 1 𝛼−1
1−𝛼
1
𝛼 𝑛+1 − 1 |𝛼| < 1
When 𝛼 ≠ 1: lim 𝑦[𝑛] = lim ={ 𝛼−1
𝑛→∞ 𝑛→∞ 𝛼 − 1
∞ |𝛼| ≥ 1
When: 𝛼 = 1 𝑦[𝑛] = 𝑥[𝑛] ⋆ ℎ[𝑛] = 𝑛 𝑢[𝑛] ′ramp′

The figure below shows the output convergence towards a constant value if the exponent
value is less than one

clear all
clc

a=0.75;
n=0;
for i=1:20
y(i)=(1-a^(n+1))/(1-a);
n=n+1;
end
stem(1:n,y,'b','linewidth',4)
grid on
Now let we compute the output of the system when 𝑥[𝑛] = 𝛽 𝑛 𝑢[𝑛]
+∞ 𝑛 𝑛 𝛼 𝑘
𝑦[𝑛] = ∑ 𝛼 𝑘 𝑢[𝑘] 𝛽 𝑛−𝑘 𝑢[𝑛 − 𝑘] = 𝛽 𝑛 ∑ 𝛼 𝑘 𝛽 −𝑘 = 𝛽 𝑛 ∑ ( )
𝑘=−∞ 𝑘=0 𝑘=0 𝛽

𝛼 𝑛+1
( ) −1
𝛽
𝛽𝑛 𝛼≠𝛽
𝛼
( )−1
𝑦[𝑛] = { 𝛽 }

{ 𝛽 𝑛 (𝑛 + 1)𝑢[𝑛] 𝛼=𝛽

Exercise 15: a discrete time LTI system described by it impulse response ℎ[𝑛] = 𝛼 𝑛 𝑢[𝑛]

Determine 𝑦[𝑛] for the input 𝑥[𝑛] = ℎ[−𝑛] = 𝛼 −𝑛 𝑢[−𝑛].

Ans: The output is 𝑦[𝑛] = 𝑥[𝑛] ⋆ ℎ[𝑛] = 𝑻{𝑥[𝑛]} means that 𝑦[𝑛] = ∑+∞ 𝑘
𝑘=−∞ 𝛼 𝑢[𝑘] 𝛼
−𝑛+𝑘
𝑢[𝑘 − 𝑛]
here we have two cases

Case I: 𝑛 ≥ 0 𝑦[𝑛] = ∑+∞ 𝑘 −𝑛+𝑘


𝑘=𝑛 𝛼 𝛼 = 𝛼 −𝑛 ∑+∞
𝑘=𝑛 𝛼
2𝑘
= 𝛼 −𝑛 ∑+∞
𝑚=0 𝛼
2(𝑛+𝑚)

1 − 𝛼 2𝑛
𝑦[𝑛] = lim 𝛼 𝑛
𝑛→∞ 1 − 𝛼2

Case II: 𝑛 ≤ 0 𝑦[𝑛] = ∑+∞ 𝑘 −𝑛+𝑘


𝑘=0 𝛼 𝛼 = 𝛼 −𝑛 ∑+∞
𝑘=0 𝛼
2𝑘

1 − 𝛼 2𝑛
𝑦[𝑛] = lim 𝛼 −𝑛
𝑛→∞ 1 − 𝛼2
When 0 < 𝛼 < 1 the output can be combined in one formula

𝛼 |𝑛|
𝑦[𝑛] = for all 𝑛
1 − 𝛼2
The figure below shows the output decays towards a zero value if the exponent value is less
than one

clear all
clc

a=0.75;
m=[ ];
n=-10;
for i=1:21
m=[m,n];
y(i)=(a^abs(n))/(1-a^2);
n=n+1;
end
stem(m,y,'b','linewidth',4)
Exercise 16: Let 𝑦(𝑡) = 𝑢(𝑡) ⋆ ℎ(𝑡), and let 𝐴𝑦 , 𝐴𝑢 𝑎𝑛𝑑 𝐴ℎ are the areas under the graphs of
𝑦(𝑡), 𝑢(𝑡), ℎ(𝑡) respectively. Prove that: 𝑦(𝑡) = 𝑢(𝑡) ⋆ ℎ(𝑡) ⟹ 𝐴𝑦 = 𝐴𝑢 𝐴ℎ or equivalently

+∞ +∞ +∞

∫ 𝑦(𝑡)𝑑𝑡 = ( ∫ 𝑥(𝑡)𝑑𝑡) ( ∫ ℎ(𝑡)𝑑𝑡)


−∞ −∞ −∞

Ans:
+∞ +∞ +∞ +∞ +∞

𝐴𝑦 = ∫ 𝑦(𝑡)𝑑𝑡 = ∫ ( ∫ 𝑢(𝜏)ℎ(𝑡 − 𝜏)𝑑𝜏) 𝑑𝑡 = ∫ [𝑢(𝜏) ( ∫ ℎ(𝑡 − 𝜏)𝑑𝑡)] 𝑑𝜏


−∞ −∞ −∞ −∞ −∞
+∞ +∞

Take the change of variable 𝜃 = 𝑡 − 𝜏 ⟹ ∫ ℎ(𝑡 − 𝜏)𝑑𝑡 = ∫ ℎ(𝜃)𝑑𝜃 hence


−∞ −∞
+∞ +∞ +∞ +∞ +∞ +∞

𝐴𝑦 = ∫ [𝑢(𝜏) ( ∫ ℎ(𝑡 − 𝜏)𝑑𝑡)] 𝑑𝜏 = ∫ 𝑢(𝜏) { ∫ ℎ(𝜃)𝑑𝜃} 𝑑𝜏 = ( ∫ ℎ(𝜃)𝑑𝜃) ( ∫ 𝑢(𝜏)𝑑𝜏) ■


−∞ −∞ −∞ −∞ −∞ −∞

Exercise 17: Let 𝑦[𝑛] = 𝑥[𝑛] ⋆ ℎ[𝑛] prove that

▪ 𝑦[𝑛] − 𝑦[𝑛 − 1] = 𝑥[𝑛] ⋆ (ℎ[𝑛] − ℎ[𝑛 − 1]) = (𝑥[𝑛] − 𝑥[𝑛 − 1]) ⋆ ℎ[𝑛]
+∞ +∞ +∞

▪ ∑ 𝑦[𝑘] = ( ∑ 𝑥[𝑘]) ( ∑ ℎ[𝑘])


𝑘=−∞ 𝑘=−∞ 𝑘=−∞

Exercise 18: Let 𝑦(𝑡) = 𝑥1 (𝑡) ⋆ 𝑥2 (𝑡). Then show that: 𝑥1 (𝑡 − 𝑡1 ) ⋆ 𝑥2 (𝑡 − 𝑡2 ) = 𝑦(𝑡 − 𝑡1 − 𝑡2 )

Ans: using the shift operator property we get


+∞
𝕕𝑡1 (𝑥1 (𝑡) ⋆ 𝑥2 (𝑡)) = 𝕕𝑇 (∫ 𝑥1 (𝜏)𝑥2 (𝑡 − 𝜏)𝑑𝜏)
−∞
+∞
=∫ 𝑥1 (𝜏)𝑥2 (𝑡 − 𝑡1 − 𝜏)𝑑𝜏
−∞
+∞
=∫ 𝑥1 (𝜏)𝕕𝑡1 (𝑥2 (𝑡 − 𝜏))𝑑𝜏
−∞
= 𝑥1 (𝑡) ⋆ 𝕕𝑡1 (𝑥2 (𝑡))
+∞
=∫ 𝑥2 (𝜏)𝑥1 (𝑡 − 𝑡1 − 𝜏)𝑑𝜏
−∞
= 𝑥2 (𝑡) ⋆ 𝕕𝑡1 (𝑥1 (𝑡))
If we apply this operator twice we get

𝑦(𝑡 − 𝑡1 − 𝑡2 ) = 𝕕𝑡1 (𝕕𝑡2 (𝑥1 (𝑡) ⋆ 𝑥2 (𝑡))) = 𝕕𝑡1 {𝑥1 (𝑡) ⋆ 𝕕𝑡2 (𝑥2 (𝑡))}
= 𝕕𝑡1 (𝑥1 (𝑡)) ⋆ 𝕕𝑡2 (𝑥2 (𝑡)) = 𝑥1 (𝑡 − 𝑡1 ) ⋆ 𝑥2 (𝑡 − 𝑡2 )

Exercise 19: Consider a stable continuous-time LTI system with impulse response ℎ(𝑡) that
is real and even. Show that cos(𝜔𝑡) and sin(𝜔𝑡) are eigenfunctions of this system with the
same real eigenvalue.
Exercise 20: Consider the system described by 𝑦̇ (𝑡) + 2𝑦(𝑡) = 𝑥̇ (𝑡) + 𝑥(𝑡). Find the impulse
response ℎ(𝑡) of the system.

Ans: Laplace transform gives 𝐻(𝑠) = (𝑠 + 1)/(𝑠 + 2) = 1 − 1/(𝑠 + 2) ⟹ ℎ(𝑡) = (1 − 𝑒 −2𝑡 )𝑢(𝑡)

Exercise 21: For each of the following signals, determine if the system is

▪ Linear or nonlinear
▪ Causal or noncausal
▪ Time invariant or time varying
▪ Memory or memoryless (Dynamic/static)
▪ BIBO stable/unstable

In all cases 𝑥(𝑡) is an arbitrary input signal and 𝑦(𝑡) is the output

𝟏. y(t) = e^{x(t)},   𝟐. y(t) = sin(t) x(t),   𝟑. y(t) = x(t)³

𝟒. y(t) = t² x(t),   𝟓. y(t) = x(t + 1),   𝟔. y(t) = |x(t)|

𝟕. y(t) = sin(x(t)) sin(t),   𝟖. y(t) = ∫_{−∞}^{t} x(ξ + 1) dξ,   𝟗. y(t) = (d/dt)√(x²(t − 1))

Exercise 22: Continuous linear time invariant system described by its linear operator
defined by: 𝑦(𝑡) = sin(𝑎𝑡 + 𝑏) 𝑥(𝑡 − 1)

▪ What is about the linearity and time variance of this system?


▪ Find the impulse response ℎ(𝑡) of this system, is it BIBO stable?
▪ Is the Laplace transform of h(t) equal to the transfer function?

Ans: ▪ The system is linear but time varying: it obeys the superposition principle, but its response depends on when the input is applied.

▪ h(t) = T{δ(t)} = sin(at + b) δ(t − 1) = sin(a + b) δ(t − 1). The system is BIBO stable since h(t) is absolutely integrable:
∞ ∞
∫ |ℎ(𝑡)| 𝑑𝑡 = |sin(𝑎 + 𝑏)| ∫ |𝛿(𝑡 − 1)| 𝑑𝑡 = |sin(𝑎 + 𝑏)| < ∞
−∞ −∞

▪ 𝐻(𝑠) = sin(𝑎 + 𝑏) 𝑒 −𝑠

Exercise 23: Find the impulse response ℎ(𝑡), of a given LTI system where: 𝑥̇ (𝑡) = 𝑥(𝑡) ⋆ ℎ(𝑡)

Ans: X(s)H(s) = sX(s) ⟹ H(s) = s ⟹ h(t) = δ′(t), or you can use directly the convolution property: ẋ(t) = x(t) ⋆ h(t) ⟹ δ̇(t) = δ(t) ⋆ h(t) = h(t)

Exercise 24: Show that the system described by the following differential equation is linear
ẏ(t) + t² y(t) = (2t + 3)u(t)

Exercise 25: Show that the system described by the following differential equation is nonlinear
y(t)ẏ(t) + 3y(t) = u(t)
Exercise 26: Show that different mathematical models can describe the same system behavior.

Answer: In power electronics the rectifier bridge circuit can be modeled by different math
representations:

1. 𝑦(𝑡) = sgn(𝑥(𝑡))𝑥(𝑡)

2. 𝑦(𝑡) = |𝑥(𝑡)|

3. 𝑦(𝑡) = √𝑥 2 (𝑡)

4. y(t) = {d|x(t)| / dx(t)} x(t)

Exercise: 27 Given a mapping, say whether the system is linear, causal, and time invariant or not

❇ 𝑦(𝑡) = 𝑇{𝑢(𝑡)} = 𝑢2 (𝑡) 𝐀𝐧𝐬: nonlinear, causal, time invariant

❇ 𝑦(𝑡) = 𝑇{𝑢(𝑡)} = 𝑡𝑢(𝑡) 𝐀𝐧𝐬: linear, causal, time variant

❇ 𝑦(𝑡) = 𝑇{𝑢(𝑡)} = 𝑢(𝑡 + 1) 𝐀𝐧𝐬: linear, Noncausal, time invariant

❇ 𝑦(𝑡) = 𝑇{𝑢(𝑡)} = sin(𝑢(𝑡)) sin(𝑡) 𝐀𝐧𝐬: Nonlinear, causal, time variant

Exercise: 28 prove that the following convolution operation is true

❇ 𝑦(𝑡) = 𝛿̇ (𝑡) ⋆ 𝑥(𝑡) = 𝑥̇ (𝑡) ⋆ 𝛿(𝑡) = 𝑥̇ (𝑡)

Solution:

Method I: From the Dirac properties and integration by parts we know that

[δ(t − ξ) x(ξ)]_{ξ=−∞}^{+∞} = ∫_{−∞}^{+∞} δ(t − ξ) ẋ(ξ) dξ − ∫_{−∞}^{+∞} δ̇(t − ξ) x(ξ) dξ

where [δ(t − ξ) x(ξ)]_{ξ=−∞}^{+∞} = 0, so this formula can be written as

∫_{−∞}^{+∞} δ̇(t − ξ) x(ξ) dξ = ∫_{−∞}^{+∞} δ(t − ξ) ẋ(ξ) dξ = ẋ(t) ⬌ δ̇(t) ⋆ x(t) = ẋ(t) ⋆ δ(t) = ẋ(t)

Method II: From the Laplace transform we can deduce that

(𝛿̇ (𝑡) ⋆ 𝑥(𝑡)) = 𝑠𝑋(𝑠) & (𝑥̇ (𝑡) ⋆ 𝛿(𝑡)) = 𝑠𝑋(𝑠)

From the above results we can say the following general formula

𝑥̇ 1 (𝑡) ⋆ 𝑥2 (𝑡) = 𝑥1 (𝑡) ⋆ 𝑥̇ 2 (𝑡)


Exercise: 29 given a differential equation of LTI system 𝑑𝑦/𝑑𝑡 = 𝑥(𝑡) − 𝑥(𝑡 − 1) Is it stable?
Justify
Solution:

[Figure: h(t) is a rectangular pulse of height 1 on 0 ≤ t ≤ 1]

When we give an impulse 𝛿(𝑡) to the system as an input then the system will respond
by ℎ(𝑡) and we can write

dh/dt = δ(t) − δ(t − 1) ⤇ h(t) = u(t) − u(t − 1) ⤇ H(s) = (1 − e^{−s})/s

∫_{−∞}^{∞} |h(t)| dt = ∫_{−∞}^{∞} |u(t) − u(t − 1)| dt = Area = 1 < ∞

h(t) is absolutely integrable ⤇ BIBO stable system.

Exercise: 30 Consider the system below with h₁[n] = u[n + 1], h₂[n] = −u[n]. Find y[n] when

x[n] = cos((2π/7)n) + sin((π/8)n)

Solution: we have y[n] = x[n] ⋆ {h₁[n] + h₂[n]} = x[n] ⋆ {u[n + 1] − u[n]}. Then the output will be

y[n] = x[n] ⋆ δ[n + 1] = x[n + 1] = cos((2π/7)(n + 1)) + sin((π/8)(n + 1))

Exercise: 31 Write a program to convolve ∏(𝑡) with itself, given a sampling period 𝑇𝑠

clear all
clc

Ts=0.01;
t=-1/2:Ts:1/2;
n=length(t);
f=ones(1,n);
Y=conv(f,f);

t=-1:Ts:1;
plot(t,Ts*Y,'k','linewidth',3)
grid on
∏(𝑡) ⋆ ∏(𝑡) = Λ(𝑡)
Exercise: 32 Proving the commutativity of convolution (f ⋆ g)(𝑡) = (g ⋆ f)(𝑡). We have
(f ⋆ g)(t) = ∫₀^{t} f(t − u) g(u) du

Let 𝑣 = 𝑡 − 𝑢, implies that 𝑑𝑣 = −𝑑𝑢, 𝑢 = 0 → 𝑣 = 𝑡, 𝑢=𝑡→𝑣=0


(f ⋆ g)(t) = −∫_{t}^{0} f(v) g(t − v) dv = ∫₀^{t} f(v) g(t − v) dv = (g ⋆ f)(t)

Exercise: 33 Proving the associativity of convolution {(f ⋆ g) ⋆ h}(𝑡) = {f ⋆ (g ⋆ h)}(𝑡). We have



{(f ⋆ g) ⋆ h}(t) = ∫₀^{∞} (f ⋆ g)(θ) h(t − θ) dθ = ∫₀^{∞} [∫₀^{∞} f(η) g(θ − η) dη] h(t − θ) dθ

Substituting 𝜆 = 𝜃 − 𝜂 and changing the order of integration


{(f ⋆ g) ⋆ h}(t) = ∫₀^{∞} f(η) [∫₀^{∞} g(λ) h(t − η − λ) dλ] dη


Let us define m(t − η) = ∫₀^{∞} g(λ) h(t − η − λ) dλ, i.e. m(t) = g(t) ⋆ h(t). Then

{(f ⋆ g) ⋆ h}(t) = ∫₀^{∞} f(η) m(t − η) dη = f(t) ⋆ m(t) = {f ⋆ (g ⋆ h)}(t)
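Commutativity and associativity can also be confirmed numerically for finite sequences (a minimal sketch; the three test sequences are arbitrary choices):

f=[1 2 -1]; g=[3 0 1 2]; h=[-1 4];
max(abs(conv(f,g)-conv(g,f)))                  % commutativity: ~0
max(abs(conv(conv(f,g),h)-conv(f,conv(g,h))))  % associativity: ~0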

Exercise: 34 Proving the derivative property of convolution:

(d/dt)(f ⋆ g)(t) = (f ⋆ (dg/dt))(t) = ((df/dt) ⋆ g)(t)

𝐀𝐧𝐬𝐰𝐞𝐫:
(d/dt)(f ⋆ g)(t) = (d/dt) ∫_{−∞}^{∞} f(v) g(t − v) dv = ∫_{−∞}^{∞} f(v) (d/dt) g(t − v) dv = (f ⋆ (dg/dt))(t)

The derivative can also be placed on the other factor: since f ⋆ g = g ⋆ f by commutativity,

(d/dt)(f ⋆ g)(t) = (d/dt)(g ⋆ f)(t) = ∫_{−∞}^{∞} g(v) (d/dt) f(t − v) dv = ((df/dt) ⋆ g)(t)

Exercise: 35 given a discrete linear time invariant system (DLTI) described by its
mathematical operator 𝑦[𝑛] = 𝑻{𝑥[𝑛]}, impulse response ℎ[𝑛], an input sequence 𝑥[𝑛] and
an output sequence 𝑦[𝑛] . Check the validity of the following discrete signal and system
properties

❑ δ[n] = u[n] − u[n − 1]
❑ y[n] = ∑_{k=−∞}^{+∞} x[k] h[n − k]
❑ u[n] = ∑_{k=0}^{∞} δ[n − k]
❑ y[n] = ∑_{k=−∞}^{+∞} x[n − k] h[k]
❑ y[n] = T{zⁿ} = λzⁿ with λ = ∑_{k=−∞}^{+∞} h[k] z^{−k}
❑ δ[−n] = δ[n] and δ[an] ≠ (1/|a|) δ[n]
❑ ∏[n] = u[n − a] − u[n − b]
❑ x[n] ⋆ h[n] = h[n] ⋆ x[n]
❑ (f[n] ⋆ h[n]) ⋆ g[n] = f[n] ⋆ (h[n] ⋆ g[n])
❑ x[n] δ[n − k] = x[k] δ[n − k]
❑ BIBO stable if ∑_{n=−∞}^{+∞} |h[n]| < ∞
❑ x[n] ⋆ δ[n] = x[n]
Solution: We solve some of them and leave the rest to students as a homework
assignment

❑ Let us verify u[n] = ∑_{k=0}^{∞} δ[n − k]:

δ[n]     = u[n] − u[n − 1]
δ[n − 1] = u[n − 1] − u[n − 2]
⋮
δ[n − k] = u[n − k] − u[n − k − 1]
--------------------------------------------
sum:  u[n] = ∑_{k=0}^{∞} δ[n − k]

This is the discrete-time counterpart of u(t) = ∫_{−∞}^{t} δ(θ) dθ, just as δ[n] = u[n] − u[n − 1] corresponds to δ(t) = du(t)/dt.

❑ Let us verify y[n] = ∑_{k=−∞}^{+∞} x[k] h[n − k] = x[n] ⋆ h[n]. We know that h[n] = 𝐓{δ[n]}, the response to the Dirac impulse. The general response is of the form y[n] = 𝐓{x[n]} = 𝐓{∑_{k=−∞}^{+∞} x[k] δ[n − k]}; because of the LTI property we can say y[n] = 𝐓{x[n]} = ∑_{k=−∞}^{+∞} x[k] 𝐓{δ[n − k]} = ∑_{k=−∞}^{+∞} x[k] h[n − k]. This last equation is called the discrete-time convolution.

❑ Let us verify y[n] = x[n] ⋆ h[n] = h[n] ⋆ x[n]:

y[n] = ∑_{k=−∞}^{+∞} x[k] h[n − k] = x[n] ⋆ h[n]; take the change of variable m = n − k to get

y[n] = ∑_{m=−∞}^{+∞} x[n − m] h[m] = h[n] ⋆ x[n]   (commutativity property)

❑ Let us verify y[n] = 𝑻{zⁿ} = λzⁿ with λ = ∑_{k=−∞}^{+∞} h[k] z^{−k}, where zⁿ is called an eigenfunction (or characteristic function) of the operator T, and the constant λ is called the eigenvalue (or characteristic value) corresponding to the eigenfunction.

y[n] = 𝑻{zⁿ} = {zⁿ} ⋆ h[n] = ∑_{k=−∞}^{+∞} h[k] z^{n−k} = zⁿ (∑_{k=−∞}^{+∞} h[k] z^{−k}) ⇒ y[n] = 𝑻{zⁿ} = λzⁿ

Thus this equation indicates that the complex exponential functions zⁿ are eigenfunctions of any LTI system, with λ = H(z) = ∑_{k=−∞}^{+∞} h[k] z^{−k}. The eigenvalue of a discrete-time LTI system associated with the eigenfunction zⁿ is given by H(z), a complex constant whose value is determined by the value of z via ∑_{k=−∞}^{+∞} h[k] z^{−k}. The above results underlie the definitions of the z-transform and the discrete-time Fourier transform, which will be discussed later.
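A short numerical illustration of the eigenfunction property (a minimal sketch; the FIR response h and the value z₀ are arbitrary choices): once the FIR memory is filled, the output of filter equals H(z₀)·z₀ⁿ sample by sample.

h=[1 0.5 0.25];                     % FIR impulse response
z0=0.9*exp(1j*pi/8); n=0:30;
x=z0.^n;                            % input: the eigenfunction z0^n
y=filter(h,1,x);                    % system output
H0=h(1)+h(2)*z0^(-1)+h(3)*z0^(-2);  % eigenvalue H(z0) = sum h[k] z0^(-k)
max(abs(y(3:end)-H0*x(3:end)))      % ~0 after the 2-sample transient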
Equivalence between continuous and discrete time:

Discrete time                                     Continuous time
u[n] = ∑_{k=0}^{∞} δ[n − k]                       u(t) = ∫_{−∞}^{t} δ(ν) dν
δ[n] = u[n] − u[n − 1]                            δ(t) = du(t)/dt
y[n] = 𝑻{zⁿ} = λzⁿ                                y(t) = 𝑻{e^{st}} = λe^{st}
h[n] = 𝑻{δ[n]}                                    h(t) = 𝑻{δ(t)}
λ = H(z) = ∑_{k=−∞}^{+∞} h[k] z^{−k}              λ = H(s) = ∫_{−∞}^{+∞} h(t) e^{−st} dt
y[n] = x[n] ⋆ h[n] = h[n] ⋆ x[n]                  y(t) = x(t) ⋆ h(t) = h(t) ⋆ x(t)
x[n] ⋆ h[n] = ∑_{k=−∞}^{+∞} x[k] h[n − k]         x(t) ⋆ h(t) = ∫_{−∞}^{+∞} x(ξ) h(t − ξ) dξ
h[n] ⋆ x[n] = ∑_{k=−∞}^{+∞} h[k] x[n − k]         h(t) ⋆ x(t) = ∫_{−∞}^{+∞} h(ξ) x(t − ξ) dξ
stable: ∑_{k=−∞}^{+∞} |h[k]| < ∞                  stable: ∫_{−∞}^{+∞} |h(ξ)| dξ < ∞
causal: h[k] = 0 if k < 0                         causal: h(t) = 0 if t < 0

Exercise: 36 Convolve the following two sequences using the polynomial method

x[n] = {0  0  1  𝟏  0  1  1}   &   h[n] = {−2  1  𝟎  1  1}

(the bold sample marks the origin n = 0)

Solution: x[n] ⋆ h[n] = h[n] ⋆ x[n] = ∑_{k=−∞}^{∞} x[k] h[n − k] = ∑_{k=−∞}^{∞} x[n − k] h[k]

We have to put: P_x = x^{−1} + 1 + x² + x³  &  P_h = −2x^{−2} + x^{−1} + x + x²

y[n] = h[n] ⋆ x[n] ⇔ P_y = P_x P_h = −2x^{−3} − x^{−2} + x^{−1} − 1 + x + 2x² + x³ + 2x⁴ + x⁵

y[n] = {−2, −1, 1, −𝟏, 1, 2, 1, 2, 1}

You can declare this statement by using the following MATLAB instructions

x=[0 0 1 1 0 1 1]; h=[-2 1 0 1 1]; y=conv(x,h)


Exercise: 37 Convolve the following two sequences using the tabular method

𝑥[𝑛] = {0 1 2 3} & ℎ[𝑛] = {−1 − 1 3 4}

Solution:
ℎ[𝑛]
⋆ -1 -1 3 4
𝑥[𝑛]
0 0 0 0 0

1 -1 -1 3 4

2 -2 -2 6 8

3 -3 -3 9 12

𝑦[𝑛] = ℎ[𝑛] ⋆ 𝑥[𝑛] = {0, −1, −3, −2, 7, 17,12}

Exercise: 38 Convolve the following two sequences using the sum of columns
method 𝑥[𝑛] = {1 1 2 − 1} & ℎ[𝑛] = {1 2 3 − 1}

Solution: write one sequence above the other, multiply each sample of the second sequence by the whole first sequence, shift each partial product one column to the right, and sum the columns:

         1    1    2   −1
         1    2    3   −1
        ------------------
         1    1    2   −1
              2    2    4   −2
                   3    3    6   −3
                       −1   −1   −2    1
        ---------------------------------
 sum:    1    3    7    5    3   −5    1

y[n] = h[n] ⋆ x[n] = {1, 3, 7, 5, 3, −5, 1}

Exercise: 39 Let us define the following signal as shown in the figure.

You know that the Laplace transform of f(𝑡) is of the form

+∞
F(𝑠) = ∫ f(𝑡)𝑒 −𝑠𝑡 𝑑𝑡
0

Compute the value of F(0)

Solution:
+∞
1
F(0) = ∫ f(𝑡)𝑑𝑡 = Area of f(𝑡) = + 1 + 1.5 + 2 = 5
0 2
Review Questions:

1. What is the full form of the LTI system?
a) Linear time inverse system
b) Late time inverse system
c) Linearity times invariant system
d) Linear Time Invariant system

2. What is a unit impulse response?
a) The output of a linear system
b) The response of an invariant system
c) The output of an LTI system due to a unit impulse signal
d) The output of an input signal

3. How do you define convolution?
a) Weighted superposition of time-shifted responses
b) Addition of responses of an input signal
c) Multiplication of various shifted responses of a stable system
d) Superposition of various outputs

4. Which properties are most important in the case of LTI signals and systems?
a) Linearity and time invariance
b) Linearity and stability
c) Stability and invariance
d) Linearity and causality

5. Why are linear time invariant systems important?
a) They can be structured as wanted
b) They can be molded in any domain
c) They are easy to define
d) They can be represented as a linear combination of signals

6. Which special property listed below holds good only for an LTI system?
a) Memory
b) Stability
c) Causality
d) Distributive property

7. What are the three special properties that only LTI systems follow?
a) Commutative property, Associative property, Causality
b) Associative property, Distributive property, Causality
c) Commutative property, Distributive property, Associative property
d) Distributive property, Stability, Causality

8. What properties does an LTI system possess other than the Associative, Commutative and Distributive properties?
a) Memory, invertibility, causality, stability
b) Memory and non-causality
c) Invertibility and stability
d) Causality only

9. An LTI system is memoryless only if
a) It does not store the previous value of the input
b) It does not depend on any previous value of the input
c) It does not depend on stored values of the system
d) It does not depend on the present value of the input

10. A continuous time LTI system has memory only when
a) It does not depend on the present value of the input
b) It only depends on the past values of the input
c) Its output always depends both on the previous and past values of the input
d) Its output might depend on the present value and the previous value of the input

11. An important property for causality of the system is
a) Initial rest
b) Final rest
c) It is memoryless
d) It is unstable

12. What are the tools used in the graphical method of finding the convolution of discrete signals?
a) Plotting, shifting, folding, multiplication, and addition in order
b) Scaling, shifting, multiplication, and addition in order
c) Scaling, multiplication and addition in order
d) Scaling, plotting, shifting, multiplication and addition in order

13. How can we solve discrete time convolution problems?
a) The graphical method only
b) Graphical method and tabular method
c) Graphical method, tabular method and matrix method
d) Graphical method, tabular method, matrix method and summation method

14. Which method is closest to the graphical method for discrete time convolution?
a) Matrix method only
b) Tabular method
c) Tabular method and matrix method
d) Summation method
------------------------------------------------------------------------------
General remarks Signals are processed by systems. By the word system, we understand a
mapping from a signal set (input signals) to another signal set (output signals).

In the mapping, we understand that the whole signal 𝑥(𝑡) is transformed into the whole
signal 𝑦(𝑡). You can encounter the notation: 𝑦(𝑡) = 𝑻[𝑥(𝑡)]. This notation can be
misleading. It can also mean that the value of the signal y at the time 𝑡 is functionally
related to only the value of the signal 𝑥 at 𝑡. When we want to indicate the functional
relationship between values, we will use the following notation:

y(𝑡) = 𝐓[𝑡, x(𝜇), 𝜇 ∈ [𝑡1 , 𝑡2 ]]


This means that the value of the output y at the time 𝑡 depends on all the values of the
input signal at times 𝜇 between 𝑡1 and 𝑡2 and also on the state of the system at 𝑡.

The response of a phasor of frequency 𝑠, is also a phasor. The output phasor is


proportional to the input one. The constant of proportionality is the transfer function.
Since an LTI system is a linear operator, we can say that the phasors are the
"eigenfunctions" of LTI systems while the transfer function is the "eigenvalue".

When the LTI system is used to modify the spectrum of a signal, it is called a filter. We
can classify filters according to their amplitude response. Let H(s) be the transfer
function.

If |H(jω)| = 0 for |ω| > W, the filter is called Lowpass.
If |H(jω)| = 0 for |ω| < W, the filter is called Highpass.
If |H(jω)| ≠ 0 only for ω₁ < |ω| < ω₂, the filter is called Bandpass.
If |H(jω)| = constant for all frequencies, the filter is an Allpass filter.

Summary:
CHAPTER III:
Analysis of Continuous
LTI Systems by Laplace
Transform
I. Introduction
II. Laplace transformation
II.I. The Region of Convergence
II.II Properties of The ROC
II.III Properties of The Laplace Transform
III. Inverse of the Laplace transforms
III.I. Inversion by Partial fraction expansions
III.II Popular Application of the Laplace Transform
IV. System Response and Transfer Function
IV.I Transfer Function
IV.II Poles and Zeros of the System Function
IV.III System Stability
IV.IV Interconnected system
V. Solved Problems

The Laplace transform plays a particularly important role in the analysis and
representation of continuous-time LTI systems. Many properties of a system can be
tied directly to characteristics of the poles, zeros, and region of convergence of the
system function 𝑯(𝑠). Due to its convolution property, Laplace transform is a
powerful tool to analyze LTI systems. As discussed before, when the input is the
Eigen-function of all LTI system, i.e., 𝐱(𝑡) = 𝑒 𝜆𝑡 , the operation on this input by the
system 𝐲(𝑡 ) = 𝑻(𝐱(𝑡 )) can be found by multiplying the system's eigenvalue 𝑯(𝜆) to the
input: 𝐲(𝑡 ) = 𝑻(𝑒 𝜆𝑡 ) = 𝑯(𝜆)𝑒 𝜆𝑡 . If an LTI system is causal (with a right sided impulse
response function 𝐡(𝑡) = 0 for 𝑡 < 0), then the ROC of its transfer function 𝑯(𝑠) is a
right sided plane. In particular, when 𝑯(𝑠) is rational, then the system is causal if
and only if its ROC is the right half plane to the right of the rightmost pole, and the
order of numerator is no greater than that of the denominator.

A causal LTI system with a rational transfer function 𝑯(𝑠) is stable if and only if all
poles of 𝑯(𝑠) are in the left half of the s-plane, i.e., the real parts of all poles are
negative.
Analysis of Continuous
LTI Systems by Laplace
Transform
I. Introduction: In system analysis, among other fields of study, a linear time-invariant
system (or "LTI system") is a system that produces an output signal from any input signal
subject to the constraints of linearity and time-invariance. These properties apply to many
important physical systems, in which case the response 𝑦(𝑡) of the system to an arbitrary
input 𝑥(𝑡) can be found directly using convolution: 𝑦(𝑡) = 𝑥(𝑡) ⋆ ℎ(𝑡) where ℎ(𝑡) is called the
system's impulse response and ⋆ represents convolution. What's more, there are systematic
methods for solving any such system (determining ℎ(𝑡)), whereas systems not meeting both
properties are generally more difficult (or impossible) to solve analytically.

LTI system theory is an area of applied mathematics which has direct applications in
electrical circuit analysis and design, signal processing and filter design, control theory,
mechanical engineering, image processing, the design of measuring instruments of many
sorts, NMR spectroscopy, and many other technical areas where systems of ordinary
differential equations present themselves.

The main goal in analysis of any dynamic system described by ordinary differential
equations is to find its response to a given input. The system response in general has two
components: zero-state response due to external forcing signals and zero-input response
due to system initial conditions. The Laplace transform will produce both the zero-input
and zero-state components of the system response. We will also present procedures for
obtaining the system impulse, step, and ramp responses. It is important to point out that the
Laplace transform is very convenient for dealing with the system input signals that have
jump discontinuities (and delta impulses).

The first thing to do when you are dealing with the analysis and design of a technical
problem is to develop an appropriate model, for example if we are given a physical,
biological, or social problem, we may first develop a mathematical model for it, and then
solve the model, and interpret its solution with respect to the problem statement. Modeling
is a process of abstraction of a real system, and the abstracted model may be logical or
mathematical. A mathematical model is a mathematical function governing the properties
and interactions in the system.

The mathematical models of systems are obtained by applying the fundamental physical
laws governing the nature of the components making these systems. In real life many
systems are time-variant or nonlinear, but they can be linearized around certain operating
ranges about their equilibrium.

The study of linear systems is important for two reasons:


■ Majority of engineering situations are linear at least within “specified range”.
■ Exact solutions of behavior of linear systems can usually be found by standard
techniques.
Remark: 01 If the system is linear time varying, then it can be modeled by linear, time-varying ordinary differential equations whose coefficients change in time. But our mathematical treatment in this book will be limited to linear, time-invariant ordinary differential equations whose coefficients do not change in time.

Remark: 02 LTI systems can be modeled mathematically either by


⦁ Linear, time-invariant ordinary differential equations, or by
⦁ Impulse response (also step and ramp responses), or by
⦁ Transfer function (using Laplace transform), or by
⦁ State space representation (1st order Matrix differential equation)

Consider the following LTI system described by its ordinary differential equation:

∑_{k=0}^{n} a_k d^k y(t)/dt^k = ∑_{k=0}^{m} b_k d^k x(t)/dt^k

As is well known, ordinary linear differential equations can be solved in the time domain or in the frequency domain. On the other hand, finding the impulse response of a linear system is the same as finding solutions to the differential equation; among the methods that can be used in the frequency domain are the Fourier method and the Laplace method. However, using the Fourier transform we have been able to find only the zero-state system response, while the Laplace transform is a powerful integral transform used to switch a function from the time domain to the s-domain that can be used to solve linear differential equations with given initial conditions.

II. Laplace transformation: In mathematics, the Laplace


transform is an integral transform named after its discoverer
Pierre-Simon Laplace. It takes a function of a real variable 𝑡
(often time) to a function of a complex variable 𝑠 (complex
frequency). Laplace transformation transforms quantities from
the time domain to the frequency domain, transforms
differential equations into algebraic equations and transforms
convolution into multiplication. It has many applications in
the sciences and technology. Pierre-Simon Laplace
1749-1827
Definition: The Laplace transform of a function f(t), defined for all real numbers t ≥ 0, is the function F(s), a unilateral transform defined by

F(s) = ∫_{0⁻}^{+∞} f(t) e^{−st} dt = ℒ{f(t)}
Remark: The Laplace transform is a conversion between
the time domain and frequency domain. The variable 𝑠 is
generally complex-valued and is expressed as 𝑠 = 𝜎 + 𝑗𝜔
with real numbers 𝜎 and ω.

We are now in a position to ask: when does the Laplace transform exist?
Existence of Laplace Transform

Definition: Before discussing the possible existence of ℒ{f(t)}, it is helpful to define


certain terms. A function f is said to be piecewise continuous on an interval 𝛼 ≤ 𝑡 ≤ 𝛽 if the
interval can be partitioned by a finite number of points 𝛼 = 𝑡0 < 𝑡1 < ⋯ < 𝑡𝑛 = 𝛽 so that

1. f is continuous on each open subinterval 𝑡𝑖−1 < 𝑡 < 𝑡𝑖 .


2. f approaches a finite limit as the endpoints of each subinterval are approached from
within the subinterval.

In other words, f is piecewise continuous on


𝛼 ≤ 𝑡 ≤ 𝛽 if it is continuous there except for a
finite number of jump discontinuities. If f is
piecewise continuous on 𝛼 ≤ 𝑡 ≤ 𝛽 for every 𝛽 > 𝛼,
then f is said to be piecewise continuous on 𝑡 > 𝛼.

Definition: A function f is said to be of exponential order on [0, ∞) if there exist constants K and α such that |f(t)| ⩽ Ke^{αt} for all t > 0 where f is defined.

Theorem: (Existence Theorem for Laplace Transform) Suppose that


1. f is piecewise continuous on the interval 0 ≤ 𝑡 ≤ 𝐴 for any positive 𝐴.
2. |f(t)| ⩽ Ke^{αt} when t ⩾ M. In this inequality K, α and M are real constants, K and M necessarily positive.
Then the Laplace transform ℒ{f(t)} = F(s), defined above, exists for s > α.
Proof: When f is piecewise continuous, then

∫_{0⁻}^{+∞} |f(t)| e^{−st} dt ≤ ∫_{0⁻}^{+∞} K e^{αt} e^{−st} dt = K ∫_{0⁻}^{+∞} e^{−(s−α)t} dt = K lim_{ξ→∞} (1 − e^{−(s−α)ξ})/(s − α) = K/(s − α)   if s > α

and the comparison theorem implies that ℒ{f(t)} exists for all s > α. ■

Example: Consider the functions f(t) = e^{t²}, f(t) = tⁿ, f(t) = t^{1/2}, and f(t) = ln(1 + t).

Which of them are of exponential order and which are not?

f(t) = e^{t²} is not of exponential order on [0, ∞) because
lim_{t→∞} (e^{t²}/e^{αt}) = lim_{t→∞} e^{t(t−α)} = ∞   ➽ the Laplace transform does not exist

f(t) = tⁿ is of exponential order on [0, ∞) because
lim_{t→∞} (tⁿ/e^{αt}) = 0   ➽ the Laplace transform exists

f(t) = t^{1/2} is of exponential order on [0, ∞) because
lim_{t→∞} (t^{1/2}/e^{αt}) = 0   ➽ the Laplace transform exists

f(t) = ln(1 + t) is of exponential order on [0, ∞) because
lim_{t→∞} (ln(1 + t)/e^{αt}) = 0   ➽ the Laplace transform exists

Remark: When one says "the Laplace transform" without qualification, the unilateral or
one-sided transform is normally intended. The Laplace transform can be alternatively
defined as the bilateral Laplace transform or two-sided Laplace transform by extending the
limits of integration to be the entire real axis. If that is done the common unilateral
transform simply becomes a special case of the bilateral transform where the definition of
the function being transformed is multiplied by the Heaviside step function.

The bilateral Laplace transform is defined as follows,


+∞
𝐹(𝑠) = ∫ f(𝑡)𝑒 −𝑠𝑡 𝑑𝑡
−∞
II.I The Region of Convergence: The range of values of the complex variables 𝑠 for which
the Laplace transform converges is called the region of convergence (ROC). By definition,
the ROC of any function, f(t), is defined to be those values of 𝜎 for which
+∞
𝐼=∫ |f(𝑡)𝑒 −𝜎𝑡 |𝑑𝑡 < ∞
−∞
To illustrate the Laplace transform and the associated ROC let us consider some examples.

Example: Consider the signal f(t) = e^{−at} u(t), a real. Then by definition the Laplace transform of f(t) is

F(s) = ∫_{−∞}^{+∞} e^{−at} e^{−st} u(t) dt = ∫₀^{+∞} e^{−(a+s)t} dt = [−e^{−(a+s)t}/(a + s)]₀^{+∞} = 1/(s + a)

with Re(s) > −a, because lim_{t→∞} e^{−(a+s)t} = 0 only if Re(s + a) > 0, i.e., Re(s) > −a.

Thus, the ROC for this example is specified as Re(s) > −a and is displayed in the complex plane as shown in the figure by the shaded area to the right of the line Re(s) = −a. In Laplace transform applications, the complex plane is commonly referred to as the s-plane. The horizontal and vertical axes are sometimes referred to as the σ-axis and the jω-axis, respectively.

[Figure: the s-plane with the ROC Re(s) > −a shaded, drawn for a > 0 and for a < 0]
II.II Properties of The ROC: The determination of ROC requires some effort, and so is
difficult. However, there are some properties of the ROC which simplify its determination.
Furthermore, some important properties of the function, f(t), can be determined directly
from its ROC. We shall discuss some of these properties in this section because they will be
important in our later discussions.

The ROC of a function f(𝑡) is unaffected by a time shift of the function. That is, for any
value of 𝑡0 , the ROC of f(𝑡) and f(𝑡 − 𝑡0 )are the same.
The ROC of any function always will be 𝜎1 ≤ 𝜎 ≤ 𝜎2 . That is, the ROC of any function
always will be an interval of the 𝜎-axis and not a disjointed set of intervals.
For rational Laplace transforms, the ROC does not contain any poles.
If f(t) is of finite duration and is absolutely integrable, then the ROC is the entire s-plane.

II.III Properties of The Laplace Transform: Basic properties of the Laplace transform are
presented in the following.

1. Linearity and Amplitude Scaling: The Laplace transformation is a linear operation (because of the properties of the integral): if f₁(t) ⟷ F₁(s) and f₂(t) ⟷ F₂(s), then ℒ{αf₁(t) + βf₂(t)} = αF₁(s) + βF₂(s). This property can be verified directly from the definition of the Laplace transform. "Laplace transformation is a linear mapping".

2. Time Shifting (Delay): The Laplace transform of the shifted (i.e. delayed) function f(t − α)u(t − α) is given by ℒ{f(t − α)u(t − α)} = e^{−αs}F(s). This property can be verified by taking the change of variable ξ = t − α. This shift theorem is very important, because in the analysis and synthesis of some engineering problems we are not always faced with transient responses starting at time t = 0; rather, some delay affects the forcing function and causes a shift in the system response.

[Figure: the ramp f(t) = t, its causal version f(t)u(t), the truncated f(t)u(t − a), and the shifted f(t − a)u(t − a)]

3. Shifting in the s-Domain: Let the Laplace transform of f₁(t) be F₁(s) with the ROC σ_a < σ < σ_b. Then the Laplace transform of e^{±s₀t} f₁(t), in which s₀ = σ₀ + jω₀, is:

F(s) = ℒ{e^{±s₀t} f₁(t)} = F₁(s ∓ s₀)

4. Time Scaling: Let the Laplace transform of f₁(t) be F₁(s) with the ROC σ_a < σ < σ_b. Then the Laplace transform of f₁(at) is F(s) = ℒ{f₁(at)}:

F(s) = ∫₀^{+∞} f₁(at) e^{−st} dt = (1/|a|) ∫₀^{+∞} f₁(ξ) e^{−sξ/a} dξ = (1/|a|) F₁(s/a)

ℒ{f₁(at)} = (1/|a|) F₁(s/a)

This indicates that scaling the time variable t by the factor a causes an inverse scaling of the variable s by 1/a as well as an amplitude scaling of F₁(s/a) by 1/|a|.

Time Reversal: A special case of interest is that for which a = −1. For this special case, we obtain ℒ{f(−t)} = F(−s). Thus, time reversal of f(t) produces a reversal of both the σ- and jω-axes in the s-plane, with the ROC −σ_b < σ < −σ_a or, equivalently, σ_a < −σ < σ_b.

5. Differentiation in the Time Domain: Let us look at time differentiation first by


considering a time function f(t) whose Laplace transform 𝐹(𝑠) is known to exist. We want to
transform the first derivative of f(t),
ℒ{f′(t)} = ∫₀^{+∞} (df(t)/dt) e^{−st} dt

This can be integrated by parts with

u = e^{−st},   dv = (df(t)/dt) dt

∫_{0⁻}^{+∞} (df(t)/dt) e^{−st} dt = [f(t) e^{−st}]_{0⁻}^{+∞} + s ∫_{0⁻}^{+∞} f(t) e^{−st} dt

The term f(t)e^{−st} must approach zero as t increases without limit; otherwise the transform would not exist. Hence, ℒ{f′(t)} = sF(s) − f(0⁻).
This equation shows that the effect of differentiation in the time domain is multiplication of
the corresponding Laplace transform by s. The associated ROC is unchanged unless there
is a pole-zero cancellation at s = 0. We shall have important applications for this property
in subsequent sections. Similar relationship may be developed for higher order derivatives

ℒ{f^{(n)}(t)} = sⁿ F(s) − ∑_{k=1}^{n} s^{n−k} (d^{(k−1)} f(t)/dt^{k−1} |_{t=0})

Comment: if the function f(𝑡) is assumed to be 𝑛-times differentiable, with 𝑛𝑡ℎ derivative of
exponential type then {f (𝑛) (𝑡)} follows by mathematical induction.
6. Differentiation in the s-Domain: The last important property we discuss is a dual of
the time-differentiation property. Let the Laplace transform of f1 (𝑡) be F1 (𝑠) with the ROC
𝜎𝑎 < 𝜎 < 𝜎𝑏 . Then the frequency-differentiation property states that
−dF₁(s)/ds = −(d/ds) ∫₀^{+∞} f₁(t) e^{−st} dt = ∫₀^{+∞} t f₁(t) e^{−st} dt

F(s) = ℒ{t f₁(t)} = −dF₁(s)/ds


Similar relationship may be developed for higher order derivatives

ℒ{tⁿ f₁(t)} = (−1)ⁿ dⁿF₁(s)/dsⁿ
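This property can be cross-checked symbolically (a minimal sketch, assuming the Symbolic Math Toolbox is available; the test function e^{−2t} is an arbitrary choice):

% Check L{t*f(t)} = -dF/ds for f(t)=exp(-2t)u(t)
syms t s
F=laplace(exp(-2*t),t,s);                     % F(s)=1/(s+2)
simplify(laplace(t*exp(-2*t),t,s)+diff(F,s))  % returns 0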
7. Integration in the Time Domain: It is necessary, when dealing with differential equations, to know the Laplace transform of the derivative and of the integral of the time function. The construction of the Laplace transform of ∫_{−∞}^{t} f₁(τ) dτ is just a consequence of the differentiation property discussed before. Let the Laplace transform of f₁(t) be F₁(s) with the ROC σ_a < σ < σ_b. Then the Laplace transform of this integral is

F(s) = ℒ{∫_{−∞}^{t} f₁(τ) dτ} = (1/s) F₁(s) + (1/s) ∫_{−∞}^{0⁻} f₁(τ) dτ

To validate this fact for the causal part we integrate ℒ{∫₀^{t} f₁(τ) dτ} by parts, taking the well-known change of variables

u = ∫₀^{t} f₁(τ) dτ,   dv = e^{−st} dt   or   du = f₁(t) dt,   v = −(1/s) e^{−st}

Then

F(s) = ∫₀^{+∞} (∫₀^{t} f₁(τ) dτ) e^{−st} dt = [−(1/s)(∫₀^{t} f₁(τ) dτ) e^{−st}]_{0⁻}^{+∞} + (1/s) ∫_{0⁻}^{+∞} f₁(t) e^{−st} dt = (1/s) F₁(s)

since the evaluated term vanishes at both limits.

This last equation shows that the Laplace transform operation corresponding to time-domain integration is multiplication by 1/s; this is expected since integration is the inverse operation of differentiation. The integration produces a pole at zero (i.e. 1/s) that has an effect on the region of convergence.

Remark: Time-domain integration is equivalent to a convolution of f₁(t) with the Heaviside function, that is ℒ{u(t) ⋆ f₁(t)} = s^{−1} F₁(s).
8. Convolution: In engineering applications, applying the Laplace transform often yields an expression of the form F(s) = ℒ{f(t)} = F₁(s)F₂(s), involving the product of two or more transforms. There is a way to deal with this situation: a method that involves a special product of functions, namely the convolution product.

Let f(𝑡) = f1 (𝑡) ⋆ f2 (𝑡) where the Laplace of each time function is F1 (𝑠), F2 (𝑠) then total
Laplace transformation is F(𝑠) such that
𝑡
{f1 (𝑡) ⋆ f2(𝑡)} = {∫ f1 (𝜆)f2 (𝑡 − 𝜆)𝑑𝜆 }
0
Means that:
+∞ 𝑡
F(𝑠) = ∫ 𝑒 −𝑠𝑡 [∫ f1 (𝜆)f2 (𝑡 − 𝜆)𝑑𝜆 ] 𝑑𝑡
0 0
Since e^{−st} does not depend upon λ, we can move this factor inside the inner integral. If we do this and also reverse the order of integration, the result is:

F(s) = ∫₀^{+∞} [∫_λ^{+∞} e^{−st} f₁(λ) f₂(t − λ) dt] dλ = ∫₀^{+∞} f₁(λ) [∫_λ^{+∞} e^{−st} f₂(t − λ) dt] dλ

Now make the substitution x = t − λ:

F(s) = ∫₀^{+∞} e^{−sλ} f₁(λ) [∫₀^{+∞} e^{−sx} f₂(x) dx] dλ = F₂(s) ∫₀^{+∞} e^{−sλ} f₁(λ) dλ = F₁(s)F₂(s)

F(s) = ℒ{f₁(t) ⋆ f₂(t)} = F₁(s)F₂(s)


Convolution operator 'symbolized *' has important algebraic properties, but the most
significant property for us right now is that the Laplace transform of a convolution of two
functions is equal to the product of the Laplace transforms of these two functions. The
ROC for this new Laplace function is the intersection of both ROC's.
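The property can be verified symbolically on a known pair (a minimal sketch, assuming the Symbolic Math Toolbox; here (e^{−t}u(t)) ⋆ (e^{−2t}u(t)) = (e^{−t} − e^{−2t})u(t), cf. Exercise 7 below with a = −1, b = −2):

syms t s
y=exp(-t)-exp(-2*t);   % known convolution of exp(-t)u(t) and exp(-2t)u(t)
simplify(laplace(y,t,s)-laplace(exp(-t),t,s)*laplace(exp(-2*t),t,s))  % 0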

9. Integration in the s-Domain: Let the Laplace transform of f₁(t) be F₁(s) with the ROC σ_a < σ < σ_b. We want to find the time-domain equivalent of the s-function F(s) = ∫_s^{+∞} F₁(μ) dμ:

F(s) = ∫_s^{+∞} [∫₀^{+∞} e^{−μt} f₁(t) dt] dμ

Interchanging the order of integration we get

F(s) = ∫_s^{+∞} F₁(μ) dμ = ∫₀^{+∞} f₁(t) [∫_s^{+∞} e^{−μt} dμ] dt = ∫₀^{+∞} (f₁(t)/t) e^{−st} dt

ℒ{t^{−1} f₁(t)} = ∫_s^{+∞} F₁(μ) dμ

Comment: This is deduced using the nature of frequency differentiation and conditional
convergence.

10. Multiplication: The multiplication property is the dual property of convolution in time-
Domain and is often referred to as the frequency convolution theorem. Thus, multiplication
in the time domain becomes convolution in the frequency domain

F(s) = ℒ{f₁(t)f₂(t)} = (1/2πi) F₁(s) ⋆ F₂(s)

In other words,

F(s) = ∫₀^{+∞} f₁(t)f₂(t) e^{−st} dt = lim_{T→∞} (1/2πi) ∫_{c−iT}^{c+iT} F₁(λ) F₂(s − λ) dλ

The inversion formula for the Laplace transformation (you will see it later) gives

∫₀^{+∞} f₁(t)f₂(t) e^{−st} dt = ∫₀^{+∞} ((1/2πi) ∫_{γ−i∞}^{γ+i∞} F₁(λ) e^{λt} dλ) f₂(t) e^{−st} dt

 = (1/2πi) ∫_{γ−i∞}^{γ+i∞} F₁(λ) (∫₀^{∞} f₂(t) e^{−(s−λ)t} dt) dλ = lim_{T→∞} (1/2πi) ∫_{c−iT}^{c+iT} F₁(λ) F₂(s − λ) dλ
11. Periodic function: Consider a periodic signal f(𝑡) and it can be expressed as a sum of
time-shifted functions as

f(t) = f₁(t) + f₂(t) + f₃(t) + ⋯

f(t) = f₁(t) + u(t − T) f₁(t − T) + u(t − 2T) f₁(t − 2T) + ⋯
Where: 0 < 𝑡 < 𝑇. By applying the Laplace transform and the time-shifting property, we get

F(𝑠) = F1 (𝑠) + 𝑒 −𝑇𝑠 F1 (𝑠) + 𝑒 −2𝑇𝑠 F1 (𝑠) … = F1 (𝑠)(1 + 𝑒 −𝑇𝑠 + 𝑒 −2𝑇𝑠 + ⋯ )

Using the geometric series sum 1 + r + r² + ⋯ = 1/(1 − r) we get:

F(s) = F₁(s) (1/(1 − e^{−Ts})) = (∫₀^{T} f₁(t) e^{−st} dt)/(1 − e^{−Ts})
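For instance, for a square wave of period T whose first cycle is f₁(t) = u(t) − u(t − T/2), this formula gives

F(s) = (1 − e^{−sT/2})/(s(1 − e^{−Ts})) = 1/(s(1 + e^{−sT/2}))

since 1 − e^{−Ts} = (1 − e^{−sT/2})(1 + e^{−sT/2}).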

12. Initial & Final value theorems: The initial value f(0) and final value f(∞) are used to
relate frequency domain expressions to the time-domain as time approaches zero and
infinity, respectively. These properties can be derived by using the differentiation property

❇ If we consider s → ∞: starting from

ℒ{f′(t)} = ∫₀^{+∞} (df(t)/dt) e^{−st} dt = sF(s) − f(0)

and taking limits, we get

lim_{s→∞} [sF(s) − f(0)] = lim_{s→∞} ∫₀^{+∞} (df(t)/dt) e^{−st} dt = 0   ➡   f(0) = lim_{s→∞} sF(s)
This is called the initial-value theorem. f(0) can be found using F(𝑠), without using the
inverse transform of f(𝑡).

Warning: The initial value theorem should be applied only if 𝐹(𝑠) is strictly proper (𝑚 < 𝑛),
because for 𝑚 ≥ 𝑛, lim𝑠⟶∞ 𝑠𝐹(𝑠) does not exist, and the theorem does not apply.

❇ If we consider s → 0, then

lim_{s→0} [sF(s) − f(0)] = lim_{s→0} ∫₀^{+∞} (df(t)/dt) e^{−st} dt = [f(t)]₀^{∞} = f(∞) − f(0)   ➡   f(∞) = lim_{t→∞} f(t) = lim_{s→0} sF(s)

This is known as the final-value theorem. In the final-value theorem, all poles of F(𝑠) must
be located in the left half of the s-plane. The final value theorem is useful because it gives
the long-term behavior without having to perform partial fraction decompositions or other
difficult algebra. If F(s) has a pole in the right-hand plane or poles on the imaginary axis
(e.g., if f(𝑡) = 𝑒 𝑎𝑡 or f(𝑡) = sin(𝑡)), the behavior of this formula is undefined.
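As a quick illustration, take F(s) = 1/(s(s + 1)), i.e. f(t) = (1 − e^{−t})u(t): the initial-value theorem gives f(0) = lim_{s→∞} sF(s) = lim_{s→∞} 1/(s + 1) = 0, and the final-value theorem gives f(∞) = lim_{s→0} 1/(s + 1) = 1, both in agreement with the time function.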
II.IV Laplace Transforms of Some Common Signals: The Laplace transforms of most of
the commonly encountered functions can be derived from knowledge of the transform for
only a few elementary functions. In these derivations, it is assumed that f(t) = 0 for t < 0,
i.e. all the functions are multiplied by u(t).

f(t) ⟷ F(s), with ROC:

δ(t)                          ⟷ 1                           all s
u(t)                          ⟷ 1/s                         σ > 0
t u(t)                        ⟷ 1/s²                        σ > 0
(t^{n−1}/(n−1)!) u(t)         ⟷ 1/sⁿ                        σ > 0
e^{−at} u(t)                  ⟷ 1/(s + a)                   σ > −a
−e^{−at} u(−t)                ⟷ 1/(s + a)                   σ < −a
tⁿ u(t)                       ⟷ n!/s^{n+1}                  σ > 0
t e^{−at} u(t)                ⟷ 1/(s + a)²                  σ > −a
(t^{n−1} e^{−at}/(n−1)!) u(t) ⟷ 1/(s + a)ⁿ                  σ > −a
tⁿ e^{−at} u(t)               ⟷ n!/(s + a)^{n+1}            σ > −a
sin(ω₀t) u(t)                 ⟷ ω₀/(s² + ω₀²)               σ > 0
cos(ω₀t) u(t)                 ⟷ s/(s² + ω₀²)                σ > 0
sinh(ω₀t) u(t)                ⟷ ω₀/(s² − ω₀²)               σ > ω₀
cosh(ω₀t) u(t)                ⟷ s/(s² − ω₀²)                σ > ω₀
e^{−at} sin(ω₀t) u(t)         ⟷ ω₀/((s + a)² + ω₀²)         σ > −a
e^{−at} cos(ω₀t) u(t)         ⟷ (s + a)/((s + a)² + ω₀²)    σ > −a
t sin(ω₀t) u(t)               ⟷ 2ω₀s/(s² + ω₀²)²            σ > 0
t cos(ω₀t) u(t)               ⟷ (s² − ω₀²)/(s² + ω₀²)²      σ > 0

A. Unit Impulse Function δ(t): It is very well known from the definition of a unit impulse that

∫_{−∞}^{+∞} δ(t)φ(t) dt = φ(0) ⬌ ∫_{−∞}^{+∞} δ(t) e^{−st} dt = e^{−st}|_{t=0} = 1 ⬌ ℒ{δ(t)} = 1

The Laplace transform of an impulse of strength k will be simply k.


B. Exponential Function, f(t) = 𝒆^{𝒂𝒕}: As in the earlier example (with a replaced by −a), ℒ{e^{at}u(t)} = (s − a)^{−1}.

C. Unit Step Function 𝒖(t): If a is made equal to zero in the exponential function e^{at}, we get the step function. Therefore, making a = 0 in the last equation we get ℒ{u(t)} = s^{−1}.

D. Unit Ramp Function 𝒕𝒖(t): Treating the ramp function as a multiplication of the unit step function by t (i.e. f(t) = t u(t)) and using integration by parts we get

ℒ{t u(t)} = ∫_{−∞}^{+∞} t u(t) e^{−st} dt = ∫₀^{+∞} t e^{−st} dt = 1/s² ⬌ ℒ{t u(t)} = s^{−2}
E. Sinusoidal Functions sin(at), cos(at): Treating the sinusoids as sums of exponential functions, i.e. 2i f(t) = e^{iat} − e^{−iat} and 2g(t) = e^{iat} + e^{−iat}:

ℒ{f(t)} = ℒ{sin(at)} = (1/2i) ℒ{e^{iat} − e^{−iat}} = a/(s² + a²)
ℒ{g(t)} = ℒ{cos(at)} = (1/2) ℒ{e^{iat} + e^{−iat}} = s/(s² + a²)

The Laplace transforms of some common signals are tabulated in Table shown below.
Instead of having to reevaluate the transform of a given signal, we can simply refer to such
a table and read out the desired transform.

III. Inverse of the Laplace transform: Inversion of the Laplace transform to find the signal f(t) from its Laplace transform F(s) is called the inverse Laplace transform, symbolically denoted as f(t) = ℒ^{−1}{F(s)}. There is a procedure that is applicable to all classes of transform functions and that involves the evaluation of a line integral in the complex s-plane; that is,

f(t) = (1/2πi) ∫_{γ−i∞}^{γ+i∞} F(s) e^{st} ds = ℒ^{−1}{F(s)}

In this integral, the real 𝛾 is to be selected such that if the region of convergence ROC of
F(𝑠) is 𝜎1 < Re(𝑠) < 𝜎2 , then 𝜎1 < 𝛾 < 𝜎2 . The evaluation of this inverse Laplace transform
integral requires an understanding of complex variable theory. In fact, calculating this
integral requires a thorough knowledge of the integration of complex functions over lines in
C, an extensive subject which is outside the scope of this book. Hence, the fundamental
theorem of the Laplace transform will not be used in the remainder of this book; partial
fraction expansion will be used to obtain the inverse Laplace transform of a rational
function F(𝑠). It will be assumed that F(𝑠) has real coefficients; in practice this is usually
the case. We will now describe in a number of steps how the inverse Laplace transform of
such a rational function F(𝑠) can be determined.

Inversion by Partial fraction expansions: The partial fraction expansion of rational


functions is a technique which is used to determine a primitive of a rational function (i.e.
residue theory). In this we shall see that the same technique can be used to determine the
inverse transform of a rational function. In the analysis of linear time invariant systems, we
encounter functions that are ratios of two polynomials in a certain variable, say 𝑠. Such
functions are known as rational functions. A rational function F(𝑠) can be expressed as

𝑏𝑚 𝑠 𝑚 + ⋯ + 𝑏1 𝑠 + 𝑏0 𝑁(𝑠)
F(𝑠) = =
𝑎𝑛 𝑠 𝑛 + ⋯ + 𝑎1 𝑠 + 𝑎0 𝐷(𝑠)

The function F(𝑠) is improper if 𝑚 ≥ 𝑛 and proper if 𝑚 < 𝑛. An improper function can always
be separated into the sum of a polynomial in 𝑠 and a proper function.

If m ≥ n, F(s) is improper and F(s) = Q(s) + N′(s)/D′(s)
If m < n, F(s) is proper and F(s) = N(s)/D(s)
Example: Consider, for example, the function
𝑁(𝑠) 2𝑠 3 + 9𝑠 2 + 11𝑠 + 2
F(𝑠) = =
𝐷(𝑠) 𝑠 2 + 4𝑠 + 3

Because this is an improper function, we divide the numerator by the denominator until
the remainder has a lower degree than the denominator. Therefore, F(𝑠) can be expressed
as
F(s) = N(s)/D(s) = (2s + 1) + (s − 1)/(s² + 4s + 3)
                   [polynomial]  [proper fraction]

A proper function can be further expanded into partial fractions. This method consists of
writing a rational function as a sum of appropriate partial fractions with unknown
coefficients, which are determined by clearing fractions and equating the coefficients of
similar powers on the two sides.

A. Case of distinct poles (roots of 𝐷(𝑠)):

𝑁(𝑠) 𝑁(𝑠) 𝑅1 𝑅2 𝑅𝑛
F(𝑠) = = = + +⋯+
𝐷(𝑠) (𝑠 − 𝑝𝑛 )(𝑠 − 𝑝𝑛−1 ) … (𝑠 − 𝑝𝑖 ) … (𝑠 − 𝑝1 ) (𝑠 − 𝑝1 ) (𝑠 − 𝑝2 ) (𝑠 − 𝑝𝑛 )

where Rᵢ = lim_{s→pᵢ} (s − pᵢ) F(s).


B. Case of repeated poles (i.e. each root pᵢ is repeated ℓᵢ times): If D(s) has multiple roots, that is, if it contains factors of the form (s − pᵢ)^{ℓᵢ}, we say that pᵢ is a multiple pole of F(s) with multiplicity ℓᵢ. Then the expansion of F(s) will consist of terms of the form

F(s) = N(s)/D(s) = ∑_{i=1}^{k} ∑_{j=1}^{ℓᵢ} Rᵢⱼ/(s − pᵢ)ʲ   with   Rᵢⱼ = lim_{s→pᵢ} [ (1/(ℓᵢ − j)!) (d^{ℓᵢ−j}/ds^{ℓᵢ−j}) {(s − pᵢ)^{ℓᵢ} F(s)} ]

Example: Consider the following fractional s-function and determine its PFE:

F(s) = 1/(s³(s − 1)) = R₁₁/s + R₁₂/s² + R₁₃/s³ + R₂₁/(s − 1)

R₁₃ = lim_{s→0} [(1/0!) {s³F(s)}] = 1/(s − 1)|_{s=0} = −1,   R₁₂ = lim_{s→0} [(1/1!)(d/ds){s³F(s)}] = −1/(s − 1)²|_{s=0} = −1

R₁₁ = lim_{s→0} [(1/2!)(d²/ds²){s³F(s)}] = (1/2)·2/(s − 1)³|_{s=0} = −1,   R₂₁ = lim_{s→1} [(s − 1)F(s)] = 1
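The same expansion can be cross-checked with MATLAB's residue function (a minimal sketch; note that for a repeated pole, residue lists the coefficients in increasing powers of 1/(s − pᵢ)):

num=1; den=[1 -1 0 0 0];   % F(s) = 1/(s^3(s-1)) = 1/(s^4 - s^3)
[r,p,k]=residue(num,den)
% r = [1 -1 -1 -1]', p = [1 0 0 0]', k = [] : matches R21, R11, R12, R13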

Computer Example: Find the inverse Laplace transform of the following functions using
partial fraction expansion method (Matlab):

2𝑠 2 + 7𝑠 + 4
F(𝑠) = 3
𝑠 + 5𝑠 2 + 8𝑠 + 4
num=[2 7 4]; den=[1 5 8 4]; [r,p,k]=residue(num,den)
>> r = 3, 2, -1
>> p = -2, -2, -1
>> k=[ ]
Hence,
3 2 1
F(𝑠) = + − ⇒ f(𝑡) = (3𝑒 −2𝑡 + 2𝑡𝑒 −2𝑡 − 𝑒 −𝑡 )𝑢(𝑡)
(𝑠 + 2) (𝑠 + 2)2 (𝑠 + 1)
A popular application of the Laplace transform: is in solving linear differential
equations with constant coefficients. In this case, the motivation for using the Laplace
transform is to simplify the solution of the differential equation. There is some analogy
between logarithms and Laplace transforms, that is the logarithms are able to reduce the
multiplication of two numbers to the sum of their logarithms, while the Laplace transform
reduces the solution of a differential equation to an algebraic equation.

IV. System Response and Transfer Function: The Laplace transform method gives the
total response, which includes zero input and zero-state components. It is possible to
separate the two components if we so desire. The initial condition terms in the response
give rise to the zero-input response.

Let us apply the Laplace transform to a linear time invariant system described by an ordinary differential equation (initial value problem). Writing y^{(i)}(0) for d^i y/dt^i |_{t=0}, we get

∑_{k=0}^{n} a_k d^k y(t)/dt^k = ∑_{k=0}^{m} b_k d^k x(t)/dt^k ⟺

∑_{k=0}^{n} a_k (s^k Y(s) − ∑_{i=1}^{k} s^{k−i} y^{(i−1)}(0)) = ∑_{k=0}^{m} b_k (s^k X(s) − ∑_{i=1}^{k} s^{k−i} x^{(i−1)}(0))

Solving this last equation for Y(s) gives

Y(s) = [∑_{k=0}^{m} b_k s^k X(s) + ∑_{k=1}^{n} a_k ∑_{i=1}^{k} s^{k−i} y^{(i−1)}(0) − ∑_{k=1}^{m} b_k ∑_{i=1}^{k} s^{k−i} x^{(i−1)}(0)] (∑_{k=0}^{n} a_k s^k)^{−1}

This response can be decomposed into two main parts, Y(s) = Y_Force(s) + Y_Initial(s), where:

Y_Force(s) = (∑_{k=0}^{m} b_k s^k)(∑_{k=0}^{n} a_k s^k)^{−1} X(s)

Y_Initial(s) = (∑_{k=1}^{n} a_k ∑_{i=1}^{k} s^{k−i} y^{(i−1)}(0) − ∑_{k=1}^{m} b_k ∑_{i=1}^{k} s^{k−i} x^{(i−1)}(0))(∑_{k=0}^{n} a_k s^k)^{−1}

Remark: Depending on the problem requirements, many other types of response decomposition can take place.

⦁ If we are looking at the oscillation behavior of the system, then a steady-state and transient factorization takes place: Y(s) = (Steady state + Transient)_response
⦁ If we are looking at the influence (or impact) of initial conditions on the system, then a forced and free response factorization takes place: Y(s) = (Forced + Non-forced)_responses
⦁ If we are looking at the algebraic structure of the solution, then a homogeneous and particular factorization takes place: Y(s) = (Homogeneous + Particular)_solutions

IV.I Transfer Function: In LTI systems the response of a phasor of frequency 𝑠, is also a
phasor. The output phasor is proportional to the input one. The constant of proportionality
is called the transfer function. Since an LTI system is a linear operator, we can say that the
phasors are the "eigenfunctions" of LTI systems while the transfer function is the
"eigenvalue". As what we have seen before, the fundamental result in LTI system theory is
that any LTI system can be characterized entirely by a single function called the system's
impulse response, which is completely independent on initial conditions.

Transfer function = ℒ{h(t)} = H(s) = Y(s)/X(s)


The transfer function 𝐻(𝑠) is defined as the ratio of the output response 𝑌(𝑠) to the input
excitation 𝑋(𝑠), assuming all initial conditions are zero.

Remark: The transfer function is defined for, and is meaningful to, LTI systems only. It
does not exist for nonlinear or time-varying systems in general.

The transfer function is a property of a system itself, independent of the magnitude and
nature of the input or driving function. It includes the units necessary to relate the input to
the output; however, it does not provide any information concerning the physical structure
of the system (i.e. the transfer functions of many physically different systems can be
identical). If it is known, the output or response can be studied for various forms of inputs
with a view toward understanding the nature of the system. It may be established
experimentally by introducing known inputs and studying the output of the system. Once
established, a transfer function gives a full description of the dynamic characteristics of the
system, as distinct from its physical description. When the LTI system is used to modify the
spectrum of a signal, it is called a filter. (i.e. transfer function≡filter).

IV.II Poles and Zeros of the System Function: Poles and Zeros of a transfer function are
the frequencies for which the value of the denominator and numerator of transfer function
becomes zero respectively. The values of the poles and the zeros of a system determine
whether the system is stable, and how well the system performs. Physically realizable
systems must have a number of poles greater than the number of zeros. Systems that
satisfy this relationship are called Proper.

As 𝑠 approaches a zero, the numerator of the transfer function (and therefore the transfer
function itself) approaches the value 0. When 𝑠 approaches a pole, the denominator of the
transfer function approaches zero, and the value of the transfer function approaches
infinity.
∑_{k=0}^{n} a_k d^k y(t)/dt^k = ∑_{k=0}^{m} b_k d^k x(t)/dt^k ⟹ H(s) = N(s)/D(s) = ratio of two polynomials = (∑_{k=0}^{m} b_k s^k)/(∑_{k=0}^{n} a_k s^k)

The 𝑧𝑖 ’𝑠 are the roots of the equation 𝑁(𝑠) = 0 (or lim𝑠→𝑧𝑖 𝐻(𝑠) = 0) and are defined to be the
system zeros, and the 𝑝𝑖 ’𝑠 are the roots of the equation 𝐷(𝑠) = 0 (or lim𝑠→𝑝𝑖 𝐻(𝑠) = ∞) and
are defined to be the system poles. If all of the coefficients of polynomials 𝑁(𝑠) and 𝐷(𝑠) are
real, then the poles and zeros must be either purely real, or appear in complex conjugate
pairs. In general for the poles, either 𝑝𝑖 = 𝜎𝑖 , or else 𝑝𝑖 , 𝑝𝑖+1 = 𝜎𝑖 ± 𝑗𝜔𝑖 . Similarly, the system
zeros are either real or appear in complex conjugate pairs.

Remark: The existence of a single complex pole without a corresponding conjugate pole
would generate complex coefficients in the polynomial 𝐷(𝑠).

Because the transfer function completely represents a system differential equation, its
poles and zeros effectively define the system response. In particular the system poles
directly define the components in the homogeneous response. The unforced response of a
linear system to a set of initial conditions is 𝑦ℎ (𝑡) = ∑𝑛𝑘=1 𝛽𝑘 𝑒 𝜆𝑘𝑡 where the constants 𝛽𝑘 are
determined from the given set of initial conditions and the exponents 𝜆𝑘 are the roots of the
characteristic equation or the system eigenvalues. The characteristic equation of the
system is 𝐷(𝑠) = ∑𝑛𝑘=0 𝑎𝑘 𝑠 𝑘 = 0 and its roots are the system poles, that is 𝜆𝑘 = 𝑝𝑘 .

Example: Plot the magnitude of 𝐻(𝑠) = (𝑠 2 + 2𝑠 + 17)/(𝑠 2 + 4𝑠 + 104) showing poles and zeros

clear all, clc


b=[1 2 17]; a=[1 4 104];
sigma=linspace(-20,20,40);
omega=linspace(-20,20,40);
[s,w] = meshgrid(sigma,omega);
sgrid = s+j*w;
H1 = polyval(b,sgrid)./polyval(a,sgrid);
h=mesh(sigma,omega,10*log10(abs(H1)));
dir = [1 1 0];
ang=atan(0.3);
rotate(h,dir,-rad2deg(ang));
%axis equal
IV.III System Stability: The stability of a linear system may be determined directly from
its transfer function. An 𝑛𝑡ℎ order linear system is asymptotically stable only if all of the
components in the homogeneous response 𝑦ℎ (𝑡) = ∑𝑛𝑘=1 𝛽𝑘 𝑒 𝑝𝑘𝑡 from a finite set of initial
conditions decay to zero as time increases, or
lim_{t→∞} ∑_{k=1}^{n} β_k e^{p_k t} = 0

Where 𝑝𝑘 are the system poles. In a stable system all components of the homogeneous
response must decay to zero as time increases. If any pole has a positive real part there is a
component in the output that increases without bound, causing the system to be unstable.

From the previous development we have seen


∑_{k=0}^{n} a_k d^k h(t)/dt^k = ∑_{k=0}^{m} b_k d^k δ(t)/dt^k ➡ h(t) = ∑_{k=1}^{n} c_k e^{p_k t} ➡ H(s) = ∫₀^{∞} (∑_{k=1}^{n} c_k e^{p_k t}) e^{−st} dt

H(s) = ∑_{k=1}^{n} c_k (∫₀^{∞} e^{(p_k − s)t} dt) = ∑_{k=1}^{n} c_k/(s − p_k)

In order for a linear system to be stable, all of its poles must have negative real parts, that
is they must all lie within the left-half of the s-plane. An “unstable” pole, lying in the right
half of the s-plane, generates a component in the system homogeneous response that
increases without bound from any finite initial conditions.

A system having one or more poles lying on the imaginary axis of the s-plane has non-
decaying oscillatory components in its homogeneous response, and is defined to be
marginally stable.
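In practice this condition is checked directly on the roots of the denominator polynomial (a minimal sketch, reusing the denominator of the pole-zero example above):

den=[1 4 104];            % denominator of H(s)=(s^2+2s+17)/(s^2+4s+104)
p=roots(den)              % poles: -2 +/- 10j
isStable=all(real(p)<0)   % true: both poles are in the left half-plane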

Example: To get a big picture on the system response and its stability try to plot the step
and impulse response of the next system (take all the possible cases of pole location)
𝑠+𝑎
𝐻(𝑠) =
𝑠2 + 𝑏𝑠 + 𝑐
Represent each response in a small window near its poles location in the s-plane.
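A possible starting point for this experiment (a minimal sketch, assuming the Control System Toolbox and one concrete stable choice of a, b, c):

% One stable case: the poles of s^2+2s+5 are -1 +/- 2j (left half-plane)
a=1; b=2; c=5;
sys=tf([1 a],[1 b c]);
subplot(1,2,1), step(sys),    grid on
subplot(1,2,2), impulse(sys), grid on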
IV.IV Interconnected system: Interconnections are very common in systems engineering.
The system that is to be processed commonly referred to as the plant may itself be the
result of interconnecting various sorts of subsystems in series, in parallel, and in feedback.
In addition, the plant is interfaced with sensors, actuators and the control system. Our
model for the overall system represents all of these components in some idealized or
nominal form, and will also include components introduced to represent uncertainties in,
or neglected aspects of the nominal description.

Solved Problems

Exercise: 1 For the given signal f(𝑡) find the Laplace transform F(𝑠)

❇ f(t) = u(t)   ❇ f(t) = δ(t)   ❇ f(t) = e^{at}u(t)   ❇ f(t) = cos(at)u(t)
❇ f(t) = A[u(t) − u(t − a)]   ❇ f(t) = t e^{λt}   ❇ f(t) = t² e^{λt}   ❇ f(t) = e^{−a|t|}
❇ f(t) = A[u(t − a) − u(t − b)], where a < b   ❇ f(t) = e^{−at} cos(bt) u(t)
❇ f(t) = e^{−at} sin(bt) u(t)   ❇ f(t) = sin(ωt + φ)   ❇ f(t) = cos(ωt + φ)
❇ f(t) = (1 − e^{−at})u(t)   ❇ f(t) = sinh(at)   ❇ f(t) = cosh(at)

Solution:

❇ f(t) = u(t) ⟹ F(s) = ∫_{0⁻}^{+∞} u(t) e^{−st} dt = [−(1/s) e^{−st}]_{0⁻}^{∞} = 1/s

❇ f(t) = δ(t) ⟹ F(s) = ∫_{0⁻}^{+∞} δ(t) e^{−st} dt = ∫_{0⁻}^{+∞} δ(t) dt = 1

❇ f(t) = e^{at} u(t) ⟹ F(s) = ∫_{0⁻}^{+∞} e^{at} e^{−st} dt = [e^{(a−s)t}/(a − s)]_{0⁻}^{∞} = 1/(s − a)

❇ f(t) = cos(at) u(t) ⟹ F(s) = (1/2) ∫_{0⁻}^{+∞} {e^{iat−st} + e^{−(iat+st)}} dt = s/(s² + a²)

❇ ℒ{A[u(t − a) − u(t − b)]} = A (e^{−as} − e^{−bs})/s

❇ ℒ(t e^{λt}) = ∫_{0⁻}^{+∞} t e^{(λ−s)t} dt = 1/(s − λ)²   and   ℒ(t² e^{λt}) = 2/(s − λ)³

❇ ℒ(e^{−at} cos(bt) u(t)) = (s + a)/((s + a)² + b²),   ℒ(e^{−at} sin(bt) u(t)) = b/((s + a)² + b²)

❇ ℒ{sin(at + φ)} = (s sin φ + a cos φ)/(s² + a²),   ℒ{cos(at + φ)} = (s cos φ − a sin φ)/(s² + a²)

❇ ℒ{sinh(at)} = (1/2){∫_{0⁻}^{+∞} e^{at} e^{−st} dt − ∫_{0⁻}^{+∞} e^{−at} e^{−st} dt} = a/(s² − a²)

❇ ℒ{cosh(at)} = (1/2){∫_{0⁻}^{+∞} e^{at} e^{−st} dt + ∫_{0⁻}^{+∞} e^{−at} e^{−st} dt} = s/(s² − a²)

❇ ℒ{e^{−a|t|}} = ℒ{e^{−at} u(t) + e^{at} u(−t)} = 1/(s + a) − 1/(s − a) = 2a/(a² − s²)
Remark: The two-sided exponential decay f(t) = e^{−a|t|} can be transformed only by using the bilateral Laplace transformation, with ROC −a < Re(s) < a (which requires a > 0).
Exercise: 2 prove that the following convolution operation is true


❇ 𝑦(𝑡) = 𝛿̇ (𝑡) ⋆ 𝑥(𝑡) = 𝑥̇ (𝑡) ⋆ 𝛿(𝑡) = 𝑥̇ (𝑡)
Solution:
Method I: From the Dirac properties and integration by parts we know that

[δ(t − ξ) x(ξ)]_{ξ=−∞}^{+∞} = ∫_{−∞}^{+∞} δ(t − ξ) ẋ(ξ) dξ − ∫_{−∞}^{+∞} δ̇(t − ξ) x(ξ) dξ

where [δ(t − ξ) x(ξ)]_{ξ=−∞}^{+∞} = 0, so this formula can be written as

∫_{−∞}^{+∞} δ̇(t − ξ) x(ξ) dξ = ∫_{−∞}^{+∞} δ(t − ξ) ẋ(ξ) dξ = ẋ(t) ⬌ δ̇(t) ⋆ x(t) = ẋ(t) ⋆ δ(t) = ẋ(t)
Method II: From the Laplace transform we can deduce that

(𝛿̇ (𝑡) ⋆ 𝑥(𝑡)) = 𝑠𝑋(𝑠) & (𝑥̇ (𝑡) ⋆ 𝛿(𝑡)) = 𝑠𝑋(𝑠)


From the above results we can say the following general formula

𝑥̇ 1 (𝑡) ⋆ 𝑥2 (𝑡) = 𝑥1 (𝑡) ⋆ 𝑥̇ 2 (𝑡)

Exercise: 3 Find the Laplace transform of the triangle signal shown in the figure.

Solution: We have

x(t) = (1/2T)(1 − |t|/2T)

And also we know that the derivative of this signal is given by

ẋ(t) = (1/2T)² {(u(t + 2T) − u(t)) − (u(t) − u(t − 2T))}

The Laplace transform is

sX(s) = (1/2T)² [(e^{2sT} − 1)/s − (1 − e^{−2sT})/s]

s²X(s) = (1/2T)² [e^{2sT} + e^{−2sT} − 2] = (1/2T)² (e^{sT} − e^{−sT})²

X(s) = (sinh(sT)/(sT))²

Exercise: 4 Given the pi signal ∏(t) = {1 for |t| < 1/2 ; 0 elsewhere}, find the convolution x(t) = ∏(t) ⋆ ∏(t).

Solution:

x(t) = ∏(t) ⋆ ∏(t) ⇔ X(s) = (∏(s))²   where   ∏(s) = (e^{s/2} − e^{−s/2})/s

X(s) = (e^{s} + e^{−s} − 2)/s² ⇒ x(t) = (t + 1)u(t + 1) + (t − 1)u(t − 1) − 2t u(t)

x(t) = (t + 1)u(t + 1) + (t − 1)u(t − 1) − 2t u(t) = (1 − |t|) for |t| ≤ 1, i.e. the triangle Λ(t).

[Figure: ∏(t) ⋆ ∏(t) = Λ(t)]
Exercise: 5 Given that 𝛿(𝑡) = 𝑢̇ (𝑡), by virtue of this compute the convolution 𝑦(𝑡) = f(𝑡) ⋆ g(𝑡)
where f(𝑡) = cos(𝑡) 𝑢(𝑡) and g(𝑡) = 𝛿̇ (𝑡) + 𝑢(𝑡), deduce that the functions f(𝑡) and g(𝑡) are the
inverse composition of each other.

Solution:

Convolution method:

We know that 𝑦(𝑡) = f(𝑡) ⋆ g(𝑡) = {cos(𝑡) 𝑢(𝑡)} ⋆ 𝛿̇ (𝑡) + {cos(𝑡) 𝑢(𝑡)} ⋆ 𝑢(𝑡). And also we know
that: 𝑥(𝑡) ⋆ 𝛿(𝑡) = 𝑥(𝑡) & 𝑥(𝑡) ⋆ 𝛿̇ (𝑡) = 𝛿(𝑡) ⋆ 𝑥̇ (𝑡) then

𝑑
𝑦(𝑡) = {cos(𝑡) 𝑢(𝑡)} + {cos(𝑡) 𝑢(𝑡)} ⋆ 𝑢(𝑡)
𝑑𝑡
𝑦(𝑡) = cos(𝑡) 𝛿(𝑡) − sin(𝑡) 𝑢(𝑡) + sin(𝑡) 𝑢(𝑡)
𝑦(𝑡) = cos(𝑡) 𝛿(𝑡) = cos(0) 𝛿(𝑡) = 𝛿(𝑡)
Laplace method:
F(s) = ℒ(cos(t)u(t)) = s/(s² + 1),   G(s) = ℒ(δ̇(t) + u(t)) = s + 1/s

Y(s) = F(s)G(s) = (s/(s² + 1))(s + 1/s) = 1 ⟺ y(t) = f(t) ⋆ g(t) = δ(t)

We easily conclude that F(s) = G^{−1}(s) ⟺ f(t) ⋆ g(t) = δ(t); then f(t) and g(t) are convolution inverses of each other.

Exercise: 6 A technique that can be used to determine the bilateral Laplace transform of some functions is to obtain a differential equation of which the transform is a solution and then solve the differential equation for the transform. In this problem, we further illustrate this technique by determining the Laplace transform of

f(t) = e^{−(α/2)t²},  α > 0,   given that  ∫_{−∞}^{+∞} e^{−(α/2)t²} dt = √(2π/α)

❂ First show that the ROC of f(t) is −∞ < σ < ∞.
❂ Show that f(t) satisfies the differential equation "DE"

df(t)/dt + αt f(t) = 0

❂ Show that the Laplace transform of f(t) satisfies the differential equation

dF(s)/ds − (1/α) s F(s) = 0

❂ Solve this differential equation and find the Laplace transform of f(t)
❂ Compute the following convolution g(t) = f(t) ⋆ f(t)


Solution:

❂ We have lim_{t→∞} e^{−(α/2)t²}/e^{ct} = 0 for every c, without any constraint, so the ROC is −∞ < σ < ∞.

❂ df(t)/dt = −αt e^{−(α/2)t²} = −αt f(t) ⇒ df(t)/dt + αt f(t) = 0 ———————— (I)

❂ Using the Laplace transform of this DE we get

sF(s) + α(−dF(s)/ds) = 0 ⇔ dF(s)/ds − (1/α) s F(s) = 0 ———————— (II)

❂ Let us solve this differential equation: Notice that equations (I) and (II) are similar to each other; since the solution of Eq. (I) is known, we can directly deduce the form of the solution of Eq. (II). The essential difference is that α has been replaced with −1/α. From this observation, we conclude that the solution of the differential equation must be

F(s) = K e^{s²/(2α)}

To find the constant K, compute F(0) = K from the Laplace definition:

F(0) = K = lim_{s→0} ∫_{−∞}^{∞} f(t) e^{−st} dt = ∫_{−∞}^{∞} e^{−(α/2)t²} dt = √(2π/α)

F(s) = √(2π/α) e^{s²/(2α)}

❂ The convolution g(t) = f(t) ⋆ f(t) can be deduced from the Laplace transform:

G(s) = F²(s) = (2π/α) e^{s²/α} ⤇ dG/ds − (2s/α) G(s) = 0

If we define 1/β = 2/α we get dG/ds − (s/β) G(s) = 0 ⤇ dg/dt + βt g(t) = 0, therefore g(t) = M e^{−(β/2)t²}

Computing G(0) = 2π/α = ∫_{−∞}^{∞} M e^{−(β/2)t²} dt = M√(2π/β) = M√(4π/α) ⤇ M = √(π/α)

g(t) = (e^{−(α/2)t²}) ⋆ (e^{−(α/2)t²}) = √(π/α) e^{−(α/4)t²}
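A numerical check of this Gaussian convolution (a minimal sketch; α = 2 is an arbitrary choice, and the grid must be fine and wide enough for the Gaussians to decay):

a=2; Ts=0.01; t=-10:Ts:10;
f=exp(-(a/2)*t.^2);
g=Ts*conv(f,f,'same');            % numerical f*f on the same time grid
gf=sqrt(pi/a)*exp(-(a/4)*t.^2);   % closed-form result above
max(abs(g-gf))                    % small (discretization error only)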
Exercise: 7 Find the convolution y(t) = (e^{at}u(t)) ⋆ (e^{bt}u(t)), a ≠ b.   Solution:

Y(s) = (1/(s − a))(1/(s − b)) = (1/(a − b))(1/(s − a) − 1/(s − b))

⟹ y(t) = (1/(a − b))(e^{at} − e^{bt}) u(t)
Exercise: 8 Find the convolution y(t) = (e^{−t}u(t)) ⋆ (u(t) − u(t − 1)).   Solution:

Y(s) = (1/(s + 1))((1 − e^{−s})/s) = (1 − e^{−s})/(s(s + 1)) = {1/s − 1/(s + 1)} − {1/s − 1/(s + 1)} e^{−s}

y(t) = [1 − e^{−t}]u(t) − [1 − e^{−(t−1)}]u(t − 1)

Exercise: 9 Find the inverse of H(s) = (s + b)/(s + a).   Solution:

H(s) = s/(s + a) + b/(s + a) ⤇ h(t) = (d/dt)(e^{−at}u(t)) + b e^{−at}u(t)

h(t) = (b − a) e^{−at}u(t) + e^{−at}δ(t) ⤇ h(t) = (b − a) e^{−at}u(t) + δ(t)

Exercise: 10 Find the inverse of H(s) = (s + 1)/s.   Ans: H(s) = 1 + s^{−1} ⤇ h(t) = δ(t) + u(t)

Exercise: 11 write a program to do the convolution of the following

ℎ(𝑡) = 𝑢(𝑡) g(𝑡) = 2𝑒 −𝑡 𝑢(𝑡) − 2𝑒 2𝑡 𝑢(−𝑡)

clear all, clc
Ts = 0.01;
t1 = -10:Ts:-Ts; t1 = t1';          % negative-time grid (t < 0)
g1 = -2*exp(2*t1);                  % g(t) = -2e^{2t} for t < 0
t2 = 0:Ts:10;    t2 = t2';          % non-negative-time grid (t >= 0)
g2 = 2*exp(-t2);                    % g(t) = 2e^{-t} for t >= 0
u = [zeros(size(g1)); ones(size(g2))];   % h(t) = u(t) sampled on the same grid
g = [g1; g2];

y = Ts*conv(u,g);                   % scaled discrete approximation of h * g
t = -20:Ts:5; t = t';

plot(t, y(1:length(t)));
grid on

Exercise: 12 Two systems having an operator description 𝐻1 (𝑠) & 𝐻2 (𝑠) are cascaded
(connected) in series where

𝑋(𝑠) 𝑌(𝑠)
𝐻1 (𝑠) 𝐻2 (𝑠)

H1(s) = (2 − s)/(1 + (1/2)s), H2(s) = 1/(1 − (1/2)s + (1/4)s²)
❏ Determine the DE relating input to the output, and designate its poles
❏ Give the realization of this differential equation.

Solution:
❏ H(s) = H1(s)H2(s) = ((2 − s)/(1 + (1/2)s))(1/(1 − (1/2)s + (1/4)s²)) = (2 − s)/(1 + (1/8)s³) = (16 − 8s)/(s³ + 8) = Y(s)/X(s)

(d³/dt³)y(t) + 8y(t) = 16x(t) − 8 dx/dt ⇔ s³ = −8 ⇒ poles: s = −2 and s = 1 ± i√3
❏ In the parallel realization of this system we need: 2 adders, 2 gains, 4 diff
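The pole locations are quick to confirm numerically (a small sketch using MATLAB's roots; nothing here is specific to the exercise beyond the coefficient vector):

% Poles of H(s): roots of s^3 + 8 = 0
p = roots([1 0 0 8])    % returns -2 and 1 +/- i*sqrt(3)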

Exercise: 13 The output of an LTI system is y(t) = e^{t/3}u(t) when the system is subjected to the excitation

x(t) = −(1/4)e^{t/2}u(t) + (d/dt)(e^{t/2}u(t))

❏ Determine the impulse response of this system


❏ Realize it with minimum number of diff, adders, gains etc…

Solution: we have

❏ x(t) = −(1/4)e^{t/2}u(t) + (1/2)e^{t/2}u(t) + e^{t/2}δ(t) = (1/4)e^{t/2}u(t) + δ(t)

X(s) = (1/4)/(s − 1/2) + 1 = (s − 1/4)/(s − 1/2) & Y(s) = 1/(s − 1/3)

H(s) = Y(s)/X(s) = (s − 1/2)/((s − 1/3)(s − 1/4)) = (s − 1/2)/(s² − (7/12)s + 1/12) = A/(s − 1/3) + B/(s − 1/4)

A = lim_{s→1/3}(s − 1/3)H(s) = −2, B = lim_{s→1/4}(s − 1/4)H(s) = 3

h(t) = (3e^{t/4} − 2e^{t/3})u(t)
❏ Realization (implementation)
H(s) = Y(s)/X(s) = (W(s)/X(s))(Y(s)/W(s)) = (1/(s² − (7/12)s + 1/12))(s − 1/2)

{ Y(s) = sW(s) − (1/2)W(s)
  X(s) = s²W(s) − (7/12)sW(s) + (1/12)W(s) }  ⬌  { y(t) = ẇ(t) − (1/2)w(t)
                                                   w(t) = 12x(t) − 12ẅ(t) + 7ẇ(t) }

Exercise: 14 Consider the signal f(t) shown in the figure.

[Figure: piecewise-linear signal f(t); the graph carries amplitude labels 1 and 2 and time labels −4, −3, −2, −1]

You know that the Laplace transform of f(t) is of the form

F(s) = ∫₀^{+∞} f(t)e^{−st} dt

Compute the value of F(0):

F(0) = ∫₀^{+∞} f(t) dt = Area of f(t) = 1/2 + 1 + 1.5 + 2 = 5

Exercise: 15 Find the Laplace transform of the signal in the following graph.

[Figure: staircase signal equal to 2 on 0 ≤ t < 2, −1 on 2 ≤ t < 3, and 0 afterwards]

Solution: f(t) = 2u(t) − 3u(t − 2) + u(t − 3)

F(s) = (1/s)[2 − 3e^{−2s} + e^{−3s}]

Exercise: 16 find the inverse of 𝑒 −2𝑠 /(𝑠 2 + 1) Solution: f(𝑡) = 𝑢(𝑡 − 2) sin(𝑡 − 2)
Exercise: 17 find the inverse of
1 + 𝑒 −𝜋𝑠
𝐹(𝑠) = 2
𝑠 +1
Solution:
1 𝑒 −𝜋𝑠 sin(𝑡) 0≤𝑡<𝜋
𝐹(𝑠) = + ⇒ f(𝑡) = sin(𝑡) + 𝑢(𝑡 − 𝜋) sin(𝑡 − 𝜋) = {
𝑠2 + 1 𝑠2 + 1 0 𝑡≥𝜋

Exercise: 18 solve the following differential equation


𝑑2 𝑦
+𝑦 =𝑡 with initial conditions 𝑦(𝜋) = 𝑦̇ (𝜋) = 0
𝑑𝑡 2
𝑤̈ (𝑡) + 𝑤(𝑡) = 𝑡 + 𝜋
Solution: let we put 𝑤(𝑡) = 𝑦(𝑡 + 𝜋) ⇒ {
𝑤(0) = 𝑤̇ (0) = 0
We solve this last differential equation by using Laplace transform
s²W(s) + W(s) = 1/s² + π/s ⇔ W(s) = (1 + πs)/(s²(s² + 1))

W(s) = (π/s − πs/(s² + 1)) + (1/s² − 1/(s² + 1)) = π/s + 1/s² − πs/(s² + 1) − 1/(s² + 1)
In time domain we can write: 𝑤(𝑡) = 𝑡 + 𝜋 − 𝜋 cos(𝑡) − sin(𝑡) but we have 𝑤(𝑡 − 𝜋) = 𝑦(𝑡)

𝑦(𝑡) = (𝑡 − 𝜋 cos(𝑡 − 𝜋) − sin(𝑡 − 𝜋))𝑢(𝑡 − 𝜋)

Exercise: 19 solve the following differential equation


𝑑2 𝑦 𝑑𝑦
+ = 𝑒𝑡 with initial conditions 𝑦(0) = 1 & 𝑦̇ (0) = 0
𝑑𝑡 2 𝑑𝑡
Solution: Applying the Laplace transform we get
(s²Y(s) − s) + (sY(s) − 1) = 1/(s − 1) ⇔ (s² + s)Y(s) = 1/(s − 1) + s + 1 = s²/(s − 1)

Y(s) = s/(s² − 1) = (1/2)(1/(s − 1) + 1/(s + 1)) ⇔ y(t) = (1/2)(e^{t} + e^{−t}) = cosh(t)
𝑠 −1 2 𝑠−1 𝑠+1 2
Exercise: 20 solve the following differential equation
𝑑 2 𝑦 𝑑𝑦
+ =𝜋 with initial conditions 𝑦(0) = 𝜋 & 𝑦̇ (0) = 0
𝑑𝑡 2 𝑑𝑡
Solution: Applying the Laplace transform we get
(s²Y(s) − sπ) + (sY(s) − π) = π/s ⇔ (s² + s)Y(s) = π(s + 1 + 1/s) ⇔ Y(s) = π(s² + s + 1)/(s²(s + 1))

Taking the partial fraction expansion we obtain

Y(s) = π(1/s² + 1/(s + 1)) ⇔ y(t) = π(t + e^{−t})
𝑠 𝑠+1

Exercise: 21 solve the following differential equation

𝑑2𝑦
− 𝑦 = sin(𝑡) with initial conditions 𝑦(0) = −1 & 𝑦̇ (0) = 0
𝑑𝑡 2
Solution: Applying the Laplace transform we get
(s²Y(s) + s) − Y(s) = 1/(s² + 1) ⇔ (s² − 1)Y(s) = 1/(s² + 1) − s

Y(s) = −s/(s² − 1) + 1/((s² − 1)(s² + 1)) = (−1/2)(1/(s − 1) + 1/(s + 1)) + (1/2)(1/(s² − 1) − 1/(s² + 1))

Y(s) = (−1/4)(1/(s − 1)) − (3/4)(1/(s + 1)) − (1/2)(1/(s² + 1)) ⇔ y(t) = −((1/4)e^{t} + (3/4)e^{−t} + (1/2)sin(t))

Exercise: 22 Find the inverse of the following s-functions


❑ Y(s) = (2/s)(e^{−3s} − e^{−4s})  ❑ Y(s) = ((2s + 1)/s²)e^{−2s} − ((3s + 1)/s²)e^{−3s}  ❑ Y(s) = e^{−(π/2)s}/(s² + 9)

Solution:

y(t) = ℒ⁻¹((2/s)(e^{−3s} − e^{−4s})) = 2[u(t − 3) − u(t − 4)]

y(t) = ℒ⁻¹(((2s + 1)/s²)e^{−2s} − ((3s + 1)/s²)e^{−3s}) = t[u(t − 2) − u(t − 3)]

y(t) = ℒ⁻¹(e^{−(π/2)s}/(s² + 9)) = (1/3)cos(3t)u(t − π/2)

Exercise: 23 Find the inverse of the following s-function


1 − 𝑒 (1−𝑠)𝑇
𝐹(𝑠) = , with 𝑇 is some constant
(𝑠 − 1)(1 − 𝑒 −𝑠𝑇 )

Solution: Let we define f1 (𝑡) which is a function over one period

f1(t) = ℒ⁻¹((1 − e^{(1−s)T})/(s − 1)) = ℒ⁻¹(1/(s − 1) − e^{T}e^{−sT}/(s − 1))

f1(t) = e^{t}u(t) − e^{T}e^{t−T}u(t − T) = e^{t}(u(t) − u(t − T)) for one period

Exercise: 24 Find the inverse of the following s-function 𝐹(𝑠) where

1 −𝑠
𝐹(𝑠) = (𝑒 − 𝑒 −2𝑠 )
𝑠2
Solution:

0 𝑡<1
(𝑡 (𝑡
f(𝑡) = − 1)𝑢(𝑡 − 1) − − 2)𝑢(𝑡 − 2) = {𝑡 − 1 1≤𝑡<2
1 𝑡≥2
Exercise: 25 Find the following transformation
❑ ℒ⁻¹(6/(s − 1)⁴)  ❑ ℒ(sin(t)/t)

𝐒𝐨𝐥𝐮𝐭𝐢𝐨𝐧: We know that ℒ⁻¹(n!/(s + a)^{n+1}) = t^{n}e^{−at}u(t)

Take n = 3 and a = −1; we get ℒ⁻¹(6/(s − 1)⁴) = t³e^{t}u(t)

❑ ℒ(sin(t)/t) = ∫_{s}^{∞} dμ/(μ² + 1) = tan⁻¹(μ)|_{s}^{∞} = tan⁻¹(∞) − tan⁻¹(s) = π/2 − tan⁻¹(s)

ℒ(sin(t)/t) = tan⁻¹(1/s)   Note that tan⁻¹(s) + tan⁻¹(1/s) = π/2

Exercise: 26 Find the Laplace transform for the following graphical representation


[Figure: seven waveforms ❶–❼: a triangular pulse of peak A and width T, a sawtooth ramp of peak A over [0, T], the derivative of the triangular pulse, a square wave of amplitude ±A, a rectangular pulse train, a full-wave rectified sine, and a single sine hump]

Solution:

❶ x(t) = { (2A/T)t, 0 ≤ t < T/2 ; 2A(1 − t/T), T/2 ≤ t < T ; 0, elsewhere } = A(1 − |t − T/2|/(T/2)) for 0 ≤ t < T

x(t) = (2At/T){u(t) − u(t − T/2)} + 2A(1 − t/T){u(t − T/2) − u(t − T)}

From the graph of its derivative (i.e. the third waveform ❸) we know that

(T/(2A)) dx/dt = {u(t) − u(t − T/2)} − {u(t − T/2) − u(t − T)}

(T/(2A)) sX(s) = (1/s)(1 − e^{−sT/2}) − (1/s)(e^{−sT/2} − e^{−sT}) = (1/s)(1 − 2e^{−sT/2} + e^{−sT}) = (1/s)(1 − e^{−sT/2})²

X(s) = (2A/(Ts²))(1 − e^{−sT/2})²
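Since the factor 2A/T is easy to drop, a quick numerical check of this transform is worthwhile (a minimal sketch; the values A = 1, T = 2 and the test point s = 1.3 are arbitrary choices):

% Check X(s) = (2A/(T*s^2))*(1 - exp(-s*T/2))^2 for the triangular pulse
A = 1; T = 2; s = 1.3;                        % assumed test values
x = @(t) A*(1 - abs(t - T/2)/(T/2));          % triangle on [0, T], peak A
Xnum = integral(@(t) x(t).*exp(-s*t), 0, T);
Xth  = (2*A/(T*s^2))*(1 - exp(-s*T/2))^2;
[Xnum Xth]                                    % the two values should agree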

❷ x(t) = { At/T, 0 ≤ t < T ; 0, t > T } = (At/T){u(t) − u(t − T)}

x(t) = (A/T){t u(t) − (t − T)u(t − T) − T u(t − T)}

⇒ X(s) = (A/T)[1/s² − e^{−sT}/s² − T e^{−sT}/s] = (A/(Ts²))[1 − e^{−sT}(1 + sT)]

❸ This is the derivative of the triangle signal Λ(t) of ❶, which means that

x(t) = (d/dt)Λ(t) ⟺ X(s) = sΛ(s) = (2A/(Ts))(1 − e^{−sT/2})²

❹ x(t) = { A, 0 ≤ t < T/2 ; −A, T/2 ≤ t < T }, periodic with period T

x(t) for one period is

x1(t) = A[{u(t) − u(t − T/2)} − {u(t − T/2) − u(t − T)}]

X1(s) = (A/s)[(1 − e^{−sT/2}) − (e^{−sT/2} − e^{−sT})] = (A/s)[1 − 2e^{−sT/2} + e^{−sT}] = (A/s)(1 − e^{−sT/2})²

X(s) = X1(s)/(1 − e^{−sT}) = (A/s)(1 − e^{−sT/2})²/((1 − e^{−sT/2})(1 + e^{−sT/2})) = (A/s)(1 − e^{−sT/2})/(1 + e^{−sT/2}) = (A/s)tanh(sT/4)
❺ x(t) is a periodic pulse train; over one period it is x1(t) = { A, 0 ≤ t < a ; 0, a ≤ t < T } = A[u(t) − u(t − a)]

X1(s) = (A/s)[1 − e^{−as}] & X(s) = X1(s)/(1 − e^{−Ts}) = (A/s)(1 − e^{−as})/(1 − e^{−Ts})

❻ for one period (T = π here, one hump of sin(t)) we have

X1(s) = (A/(s² + 1))[1 + e^{−Ts}]

X(s) = X1(s)/(1 − e^{−Ts}) = (A/(s² + 1))[(1 + e^{−Ts})/(1 − e^{−Ts})]

Remark: for a general period T,

ℒ{x(t)} = ℒ{A|sin((π/T)t)|} = (A(π/T)/(s² + (π/T)²))[(1 + e^{−Ts})/(1 − e^{−Ts})]

❼ x(t) = { A sin(t), 0 ≤ t < T ; 0, elsewhere } = A[sin(t)u(t) + sin(t − T)u(t − T)] (one hump, T = π)

X(s) = (A/(s² + 1))[1 + e^{−Ts}]

Exercise: 27 compute the following integrals


■ ∫₀^{∞} (1/t)[cos(at) − cos(bt)] dt  ■ ∫₀^{∞} (1/t)[e^{at} − e^{bt}] dt (a, b < 0)

𝐒𝐨𝐥𝐮𝐭𝐢𝐨𝐧: We know that ∫₀^{∞} (f(t)/t)e^{−st} dt = ∫_{s}^{∞} F(μ) dμ ⟹ ∫₀^{∞} (f(t)/t) dt = ∫₀^{∞} F(μ) dμ

■ ∫₀^{∞} (1/t)[cos(at) − cos(bt)] dt = ∫₀^{∞} [μ/(μ² + a²) − μ/(μ² + b²)] dμ = (1/2)ln[(μ² + a²)/(μ² + b²)]|₀^{∞} = ln(b/a)

■ ∫₀^{∞} (1/t)[e^{at} − e^{bt}] dt = ∫₀^{∞} [1/(μ − a) − 1/(μ − b)] dμ = ln[(μ − a)/(μ − b)]|₀^{∞} = ln(b/a)

Exercise: 28 Find the following transformation


❑ ℒ⁻¹(H(s) = 1/(s⁴ + 1))  ❑ ℒ(∏(t) = u(t + a) − u(t − a))

❑ ℒ(g(t) = 3^{t})  ❑ ℒ(sgn(t))

❶ we can decompose the denominator into


𝑠 4 + 1 = (𝑠 4 + 2𝑠 2 + 1) − 2𝑠 2 = (𝑠 2 + 1)2 − 2𝑠 2 = (𝑠 2 + √2𝑠 + 1)(𝑠 2 − √2𝑠 + 1)
Now we can use partial fraction expansion on the next expression:

ℒ⁻¹(sH(s)) = ℒ⁻¹{s/((s² + √2s + 1)(s² − √2s + 1))} = ℒ⁻¹{(1/(2√2))/(s² − √2s + 1) − (1/(2√2))/(s² + √2s + 1)}

= (1/(2√2)) ℒ⁻¹{1/((s − √2/2)² + 1/2) − 1/((s + √2/2)² + 1/2)}

= (1/(2√2))(√2 e^{t/√2} sin(t/√2) − √2 e^{−t/√2} sin(t/√2)) = sin(t/√2) sinh(t/√2)

Finally we deduce that ℒ⁻¹{1/(s⁴ + 1)} = ∫₀^{t} sin(τ/√2) sinh(τ/√2) dτ
❷ To compute the Laplace transform of the gate function we use the definition

ℒ(∏(t) = u(t + a) − u(t − a)) = ∫_{−a}^{a} e^{−st} dt = [−(1/s)e^{−st}]_{−a}^{a} = (2/s)sinh(as)

❸ The Laplace transform of the general exponential function is ℒ{g(t) = 3^{t}} = ℒ(e^{(ln 3)t}) = (s − ln 3)^{−1}.

❹ The (bilateral) Laplace transform of the signum function does not exist, since it does not have an ROC.

Exercise: 29 Find Laplace transform of the following functions


❶ (1/(2ω0))[sin(ω0t) + ω0t cos(ω0t)]  ❷ (1/(2ω0³))[sin(ω0t) − ω0t cos(ω0t)]  ❸ (1/(ω2² − ω1²))[cos(ω1t) − cos(ω2t)]

Solution: ❶ from the Laplace transform properties we know that

ℒ{tf(t)} = −(d/ds)F(s) ⟺ ℒ{t cos(ω0t)} = −(d/ds)(s/(s² + ω0²)) = (s² − ω0²)/(s² + ω0²)²

ℒ{(1/(2ω0))[sin(ω0t) + ω0t cos(ω0t)]} = (1/(2ω0))(ω0/(s² + ω0²) + ω0(s² − ω0²)/(s² + ω0²)²) = s²/(s² + ω0²)²

❷ The same thing as before

ℒ{(1/(2ω0³))[sin(ω0t) − ω0t cos(ω0t)]} = (1/(2ω0³))(ω0/(s² + ω0²) − ω0(s² − ω0²)/(s² + ω0²)²) = 1/(s² + ω0²)²
❸ The same thing as before

ℒ{(1/(ω2² − ω1²))[cos(ω1t) − cos(ω2t)]} = (1/(ω2² − ω1²))(s/(s² + ω1²) − s/(s² + ω2²)) = s/((s² + ω1²)(s² + ω2²))

Exercise: 30 Find the inverse of the Laplace transform of the following

𝜔𝑛 2
𝑌(𝑠) = , 0<𝜉<1
𝑠(𝑠 2 + 2𝜉𝜔𝑛 𝑠 + 𝜔𝑛 2 )

Solution: Using the partial fraction expansion we get:

𝜔𝑛 2 𝑅1 𝑅2 𝑅3
𝑌(𝑠) = = + +
𝑠(𝑠 + 2𝜉𝜔𝑛 𝑠 + 𝜔𝑛 )
2 2 𝑠 𝑠 + 𝑝1 𝑠 + 𝑝2

We can modify the denominator of the 𝑌(𝑠) as: (𝑠 2 + 2𝜉𝜔𝑛 𝑠 + 𝜔𝑛 2 ) = (𝑠 + 𝜉𝜔𝑛 )2 + 𝜔𝑛 2 (1 − 𝜉 2 )

𝜔𝑛 2 𝐴 𝐵𝑠 + 𝐶
𝑌(𝑠) = = +
𝑠(𝑠 2 + 2𝜉𝜔𝑛 𝑠 + 𝜔𝑛 2 ) 𝑠 (𝑠 + 𝜉𝜔𝑛 )2 + 𝜔𝑛 2 (1 − 𝜉 2 )
After simplifying, you will get the values of A, B and C as 1, −1 and −2ξωn respectively.
Substitute these values in the above partial fraction expansion of
1 𝑠 + 𝜉𝜔𝑛 𝜉𝜔𝑛
𝑌(𝑠) = − −
𝑠 (𝑠 + 𝜉𝜔𝑛 ) + 𝜔𝑛 (1 − 𝜉 ) (𝑠 + 𝜉𝜔𝑛 ) + 𝜔𝑛 2 (1 − 𝜉 2 )
2 2 2 2

Y(s) = 1/s − (s + ξωn)/((s + ξωn)² + (ωn√(1 − ξ²))²) − (ξ/√(1 − ξ²)) (ωn√(1 − ξ²))/((s + ξωn)² + (ωn√(1 − ξ²))²)

Substitute, 𝜔𝑛 √1 − 𝜉 2 as 𝜔𝑑 in the above equation.


Y(s) = 1/s − (s + ξωn)/((s + ξωn)² + ωd²) − (ξ/√(1 − ξ²)) ωd/((s + ξωn)² + ωd²)

Apply inverse Laplace transform on both the sides.

y(t) = {1 − (e^{−ξωnt}/√(1 − ξ²)){(√(1 − ξ²))cos(ωdt) + ξ sin(ωdt)}} u(t)

If √1 − 𝜉 2 = sin(𝜙) , then ‘𝜉’ will be cos(𝜙). Substitute these values in the above equation.

y(t) = {1 − (e^{−ξωnt}/√(1 − ξ²)){sin(φ)cos(ωdt) + cos(φ)sin(ωdt)}} u(t)

y(t) = {1 − (e^{−ξωnt}/√(1 − ξ²)) sin(ωdt + φ)} u(t), with 0 < φ = tan⁻¹(√(1 − ξ²)/ξ) < π/2

ℒ{(1 − (e^{−ξωnt}/√(1 − ξ²)) sin(ωdt + φ))u(t)} = ωn²/(s(s² + 2ξωns + ωn²))

(d²/dt²)y + 2ξωn(d/dt)y + ωn²y(t) = ωn²u(t) ⇔ y(t) = {1 − (e^{−ξωnt}/√(1 − ξ²)) sin(ωdt + φ)} u(t)
Remark: This result will be needed in the following chapters, where the responses (reactions) of second-order systems to fixed input signals (e.g. the step) are studied; the graphical diagram produced by the program below shows the variation of y(t) with time as ξ (zeta) changes.

Computer Program
clear all, clc
Wn = 1;                                   % natural frequency
zeta = [0:0.1:0.9, 1+eps, 2, 3, 5];       % damping ratios (1+eps avoids Wd = 0)
yy = []; t = 0:0.1:12;
for z = zeta
    Wd = sqrt(z^2-1)*Wn;                  % imaginary for z < 1: cosh/sinh then act as cos/sin
    y = 1 - Wn*exp(-z*Wn*t).*[cosh(Wd*t)/Wn + z*sinh(Wd*t)/Wd];
    yy = [yy; real(y)];                   % drop the residual imaginary round-off
end
plot(t,yy,'linewidth',3), grid on
xlabel('Time(secs)'); ylabel('Amplitude'); title('Closed-loop step')
figure
surf(t,zeta,yy)

Exercise: 31 Evaluate the integral I = ∫₀^{+∞} e^{−sλ²} dλ and use it in the evaluation of the following unilateral transforms

ℒ{1/√t} and ℒ{√t}
Solution: Let we evaluate 𝐼 by using the double integral
∫_{−∞}^{+∞}∫_{−∞}^{+∞} e^{−sx²}e^{−sy²} dx dy = (∫_{−∞}^{+∞} e^{−sx²} dx)(∫_{−∞}^{+∞} e^{−sy²} dy) = (∫_{−∞}^{+∞} e^{−sx²} dx)²

Notice that e^{−sλ²} is an even function, that is ∫_{−∞}^{+∞} e^{−sλ²} dλ = 2∫₀^{+∞} e^{−sλ²} dλ, hence

∫_{−∞}^{+∞}∫_{−∞}^{+∞} e^{−sx²}e^{−sy²} dx dy = 4(∫₀^{+∞} e^{−sx²} dx)²

Passing to polar coordinates,

(∫₀^{+∞} e^{−sx²} dx)² = (1/4)∫_{−∞}^{+∞}∫_{−∞}^{+∞} e^{−s(x²+y²)} dx dy = (1/4)∫₀^{2π}∫₀^{+∞} r e^{−sr²} dr dθ = (1/4){2π(−e^{−sr²}/(2s))|₀^{∞}} = π/(4s)

⇒ I = ∫₀^{+∞} e^{−sx²} dx = (1/2)√(π/s)

❑ ℒ{1/√t} = ∫₀^{+∞} (1/√t)e^{−st} dt. Let λ = √t; then ℒ{1/√t} = ∫₀^{+∞} (1/λ)e^{−sλ²} 2λ dλ = 2∫₀^{+∞} e^{−sλ²} dλ = √(π/s)

Remark: we can use the change of variable η = st; then we get

ℒ{1/√t} = ∫₀^{+∞} (1/√t)e^{−st} dt = (1/√s)∫₀^{+∞} (1/√η)e^{−η} dη = Γ(1/2)/√s

Where Γ is the gamma function, Γ(1/2) = √π

❑ ℒ{√t} = ℒ{t·(1/√t)} = −(d/ds)√(π/s) = √π/(2s^{3/2})
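Both closed forms are easy to confirm numerically (a minimal sketch; the test point s = 2 is an arbitrary choice; MATLAB's integral handles the integrable endpoint singularity):

% Check L{1/sqrt(t)} = sqrt(pi/s) and L{sqrt(t)} = sqrt(pi)/(2*s^(3/2))
s = 2;                                           % assumed test value
F1 = integral(@(t) exp(-s*t)./sqrt(t), 0, Inf);
F2 = integral(@(t) sqrt(t).*exp(-s*t), 0, Inf);
[F1 sqrt(pi/s); F2 sqrt(pi)/(2*s^1.5)]           % rows should match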

Exercise: 32 give the inverse of Laplace transform for the following s-function

F(s) = 1/(s⁴ + s² + 1)
Solution: Let us first compute the inverse of sF(s):

ℒ⁻¹{sF(s)} = ℒ⁻¹{s/(s⁴ + s² + 1)} = ℒ⁻¹{(s/2)[(s + 1)/(s² + s + 1) − (s − 1)/(s² − s + 1)]}

= (1/2)ℒ⁻¹{(s² + s)/(s² + s + 1) − (s² − s)/(s² − s + 1)} = (1/2)ℒ⁻¹{1 − 1/(s² + s + 1) − 1 + 1/(s² − s + 1)}

= (1/2)ℒ⁻¹{1/(s² − s + 1) − 1/(s² + s + 1)} = (1/2)ℒ⁻¹{1/((s − 1/2)² + 3/4) − 1/((s + 1/2)² + 3/4)}

ḟ(t) = ℒ⁻¹{sF(s)} = (1/√3)ℒ⁻¹{(√3/2)/((s − 1/2)² + 3/4) − (√3/2)/((s + 1/2)² + 3/4)}

ḟ(t) = (1/√3){e^{t/2} sin((√3/2)t) − e^{−t/2} sin((√3/2)t)} = (2/√3) sin((√3/2)t) sinh((1/2)t)

f(t) = ∫₀^{t} (2/√3) sin((√3/2)τ) sinh((1/2)τ) dτ

Exercise: 33 Determine the Laplace transform of the following time functions

f(t) = (1/t)(cos(at) − cos(bt)) and g(t) = (1 − e^{−at})/t

Solution:

F(s) = ∫_{s}^{+∞} [η/(η² + a²) − η/(η² + b²)] dη = (1/2)ln((η² + a²)/(η² + b²))|_{s}^{+∞} = (1/2)ln((s² + b²)/(s² + a²))

G(s) = ∫_{s}^{+∞} [1/η − 1/(η + a)] dη = ln(η/(η + a))|_{s}^{+∞} = ln((s + a)/s)
Exercise: 34 express the solution of the next differential equation in terms of convolution
2
product. 𝑦̈ + 𝑦 = 𝑒 −𝑡 𝑤𝑖𝑡ℎ 𝑦(0) = 𝑦̇ (0) = 0

Solution: Applying directly the Laplace transform we get

(s² + 1)Y(s) = ℒ{e^{−t²}} ⟺ Y(s) = ℒ{e^{−t²}}/(s² + 1) ⇒ y(t) = e^{−t²} ⋆ sin(t)

Exercise: 35 prove that


ℒ{sin³(t)} = (1/4)ℒ{3 sin(t) − sin(3t)} = 6/((s² + 1)(s² + 9))

ℒ{(2/t)(u(t) − cos(t))} = ln(1 + s^{−2})
2
Exercise: 36 find the unilateral Laplace transform of the function f(t) = e^{−t²}. Solution: Applying the direct definition we get

F(s) = ∫₀^{+∞} e^{−t²}e^{−st} dt = ∫₀^{+∞} e^{−(t² + st)} dt = ∫₀^{+∞} e^{−(t² + st + s²/4 − s²/4)} dt

F(s) = e^{s²/4} ∫₀^{+∞} e^{−(t + s/2)²} dt = e^{s²/4} ∫_{s/2}^{+∞} e^{−η²} dη (we put η = t + s/2)

We know from basic mathematics that

erfc(x) = (2/√π)∫_{x}^{+∞} e^{−η²} dη, therefore ℒ{e^{−t²}} = e^{s²/4}(√π/2)erfc(s/2)

Remark: erfc(𝑥) is the complementary error function defined by:


erfc(x) = (2/√π)∫_{x}^{+∞} e^{−η²} dη = 1 − erf(x) ⟺ erf(x) = (2/√π)∫₀^{x} e^{−η²} dη

The error function erf(x) satisfies the symmetry property

erf(x) = (1/√π)∫_{−x}^{x} e^{−η²} dη = (2/√π)∫₀^{x} e^{−η²} dη

This integral is a special (non-elementary) and sigmoid function that occurs often in
probability, statistics, and partial differential equations describing diffusion.
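MATLAB ships both integral and erfc, so the boxed transform can be verified directly (a minimal sketch; s = 1.5 is an arbitrary test value):

% Check L{e^{-t^2}} = e^{s^2/4} * (sqrt(pi)/2) * erfc(s/2)
s = 1.5;                                            % assumed test value
Fnum = integral(@(t) exp(-t.^2 - s*t), 0, Inf);
Fth  = exp(s^2/4)*(sqrt(pi)/2)*erfc(s/2);
[Fnum Fth]                                          % should agree closely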

Exercise 37: Perform long division and determine the quotient and the remainder

F(s) = (s² + 9s + 3)/(s² + 3s + 2), F(s) = (3s³ + 17s² + 33s + 15)/(s³ + 6s² + 11s + 6)

Exercise 38: Compute the Laplace transform of the given function.


𝟏. f(t) = ∫₀^{t} (1/η)[cos(aη) − cos(bη)] dη  𝟐. f(t) = (1/t)[2 sin(t) sinh(t)]

𝟑. f(t) = |sin(πt/a)|  𝟒. f(t) = { 1 for 0 ≤ t ≤ a ; −1 for a ≤ t ≤ 2a ; f(t + 2a) = f(t) }  𝟓. f(t) = t^{3/2}

Hint: in the 2nd one you can use:

tan⁻¹(x) − tan⁻¹(y) = tan⁻¹((x − y)/(1 + xy))

Ans:

𝟏. (1/(2s)) ln((s² + b²)/(s² + a²))  𝟐. tan⁻¹(2/s²)  𝟑. (πa/(a²s² + π²)) coth(as/2)  𝟒. (1/s) tanh(as/2)  𝟓. (3√π)/(4s^{5/2})

Exercise 39: Compute the Laplace transform & the inverse of of the given functions.
𝟏. F(s) = a(s + ib)/(s² + b²)  𝟐. f(t) = { A(1 − |t − π|/π), 0 < t ≤ 2π ; f(t + 2π) = f(t) }

Ans:

𝟏. f(t) = a e^{ibt}  𝟐. F(s) = (A/(πs²)) tanh(πs/2)

Exercise 40: Compute the Laplace transform of the half wave rectification of sin(t), denoted
g(t), in which the negative cycles of sin(t) have been canceled to create g(t).

g(t) = (A/2)(sin(at) + |sin(at)|)

𝐀𝐧𝐬: G(s) = (A/2){(a/(s² + a²))(1 + coth(πs/(2a)))}

Exercise 41: Compute the Laplace of: (a) f(t)=sin3(ωt) (b) g(t)=cos3(ωt).

Exercise 42: Compute the Laplace of: f(t)=Ln(t)

Ans:

F(s) = ∫₀^{∞} Ln(t)e^{−st} dt = ∫₀^{∞} Ln(η/s)e^{−η} (dη/s) (change of variable η = st, dt = dη/s)

F(s) = (1/s){∫₀^{∞} Ln(η)e^{−η} dη − ∫₀^{∞} Ln(s)e^{−η} dη} = (1/s){∫₀^{∞} Ln(η)e^{−η} dη − Ln(s)} = −(γ + Ln(s))/s

where γ = −∫₀^{∞} Ln(η)e^{−η} dη is the Euler–Mascheroni constant.
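A numeric spot-check of this result (a minimal sketch; the test point s = 2 and the numerical value of γ are the only assumptions):

% Check L{ln t} = -(gamma + ln s)/s at one test point
s = 2;                                           % assumed test value
gammaE = 0.577215664901533;                      % Euler-Mascheroni constant
Fnum = integral(@(t) log(t).*exp(-s*t), 0, Inf);
Fth  = -(gammaE + log(s))/s;
[Fnum Fth]                                       % should agree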

Exercise 43: Find the IR ℎ(𝑡), of a given LTI system such that: 𝑥̇ (𝑡) = 𝑥(𝑡) ⋆ ℎ(𝑡)

Answer: 𝑋(𝑠)𝐻(𝑠) = 𝑠𝑋(𝑠) ⟹ 𝐻(𝑠) = 𝑠 ⟹ ℎ(𝑡) = 𝛿 ′ (𝑡)

Exercise 44: Show that the system described by the following differential equation is linear
ẏ(t) + t²y(t) = (2t + 3)u(t)

Exercise 45: Show that the system described by the following differential equation is nonlinear
y(t)ẏ(t) + 3y(t) = u(t)
Exercise 46: Show that:

ℒ{δ(at + b)} = (1/|a|)e^{sb/a}

What about ℒ{u(t) = 1} and ℒ{sgn(t)}?

Exercise 47: Find the Laplace transform of the following functions

𝟏. f(𝑡) = cos2 (𝑡) 𝟐. f(𝑡) = sin2 (𝑡) 𝟑. f(𝑡) = cos 3 (𝑡) 𝟒. f(𝑡) = sin3(𝑡)

𝟓. f(𝑡) = cos(4𝑡) sin(3𝑡) 𝟔. f(𝑡) = cos4 (𝑡) − sin4 (𝑡) 𝟕. f(𝑡) = (−1)𝑡 𝟖. f(𝑡) = cos 2 (𝑡) + sin2 (𝑡)

Answer: Use the following help: 2 cos(a) sin(b) = sin(a + b) − sin(a − b) and

cos²(t) = (1/2){1 + cos(2t)}, sin²(t) = (1/2){1 − cos(2t)}, (−1)^{t} = e^{iπt}

cos³(t) = (1/4){3 cos(t) + cos(3t)}, sin³(t) = (1/4){3 sin(t) − sin(3t)}

cos⁴(t) − sin⁴(t) = cos(2t), cos(4t) sin(3t) = (1/2){sin(7t) − sin(t)}
------------------------------------

𝟏. F(s) = (s² + 2)/(s(s² + 4))  𝟐. F(s) = 2/(s(s² + 4))  𝟑. F(s) = (1/4)(s/(s² + 9)) + (3/4)(s/(s² + 1))

𝟒. F(s) = (3/4)(1/(s² + 1)) − (1/4)(3/(s² + 9))  𝟓. F(s) = (1/2)(7/(s² + 49) − 1/(s² + 1))

𝟔. F(s) = s/(s² + 4)  𝟕. F(s) = 1/(s − iπ)  𝟖. F(s) = 1/s

Exercise 48: (Introduction to the next Chapter) given a sequence x[n], we define the mapping 𝑻 such that X(z) = 𝑻{x[n]} = ∑_{n=−∞}^{+∞} x[n]z^{−n}.

𝟏. Compute the value of 𝑻{𝑥[𝑛 − 1]}. 𝟐. For which input 𝑥1 [𝑛] to the mapping 𝑻 such that
𝑻{𝑥1 [𝑛] } =– 𝑧{𝑑𝑋(𝑧)/𝑑𝑧} 𝟑. Solve the following difference equation 𝑛𝑥[𝑛] − 𝑥[𝑛 − 1] = 0 where
𝑥[𝑛] = 0 for negative arguments and 𝑥[0] = 1
𝐀𝐧𝐬: 𝟏. 𝑻{x[n − 1]} = ∑_{n=−∞}^{+∞} x[n − 1]z^{−n} = ∑_{m=−∞}^{+∞} x[m]z^{−m−1} = z^{−1}∑_{m=−∞}^{+∞} x[m]z^{−m}

𝑻{x[n]} = X(z) ⟺ 𝑻{x[n − 1]} = z^{−1}X(z)

𝟐. −z(d/dz)X(z) = −z(d/dz)∑_{n=−∞}^{+∞} x[n]z^{−n} = −z∑_{n=−∞}^{+∞}(−n)x[n]z^{−n−1} = ∑_{n=−∞}^{+∞} nx[n]z^{−n} ⟺ x1[n] = nx[n]

𝟑. Solving this difference equation by using the operator X(z) = 𝑻{x[n]} we get:

nx[n] − x[n − 1] = 0 ⟺ z(d/dz)X(z) + z^{−1}X(z) = 0 ⟺ z²(d/dz)X(z) + X(z) = 0

z²(d/dz)X(z) + X(z) = 0 ⟺ dX(z)/X(z) = −dz/z² ⟺ ln(X(z)) = 1/z + c ⟺ X(z) = A e^{1/z}

We know that X(z) = A e^{1/z} = ∑_{n=−∞}^{+∞} A(u[n]/n!)(1/z)^{n} = 𝑻{(A/n!)u[n]} ⇒ x[n] = (A/n!)u[n]

From the initial condition x[0] = 1 we obtain A = 1 ⇒ x[n] = (1/n!)u[n]
CHAPTER IV:
Analysis of Discrete LTI
Systems by Z-Transform

I. Introduction
II. The Z Transform of Some Commonly Occurring Functions
III. Some Properties of the Z Transform
IV. Transfer Function (System Function) and DC Gain
V. Inverse Z-transform
V.I. Contour Integration
V.II Partial Fraction Expansion
V.III Inversion by Power Series
VI. Solved Problems

Many discrete transforms may not exist for all sequences because of the convergence condition, whereas the z-transform exists for many sequences for which other transforms do not. Also, the z-transform allows simple algebraic manipulations. As such, the z-transform has become a powerful tool in the analysis and design of digital systems. This chapter introduces the z-transform, its properties, the inverse z-transform, and methods for finding it. Also, in this chapter, the importance of the z-transform in the analysis of LTI systems is established. Further, the one-sided z-transform and the solution of difference equations of discrete-time LTI systems are presented.

The Z-transform is an infinite power series, because the summation index varies from -∞ to ∞. But it is useful only for values of z for which the sum is finite. The values of z for which F(z) is finite lie within a region called the "region of convergence" (ROC). The Z-transform is used for linear filtering, and also for finding the linear convolution, cross-correlation and auto-correlation of sequences. Using the Z-transform, one can characterize an LTI system (stable/unstable, causal/anti-causal) and its response to various signals by the placement of its poles and zeros.

The ROC decides whether the system is stable or unstable, whether the sequence is causal or anti-causal, and whether it has finite or infinite duration.
Analysis of Discrete
LTI Systems by
Z-Transform
I. Introduction: The Z-transform is the discrete-time counterpart of the Laplace transform.
The Z-transform is introduced to represent discrete-time signals (or sequences) in the z-
domain (i.e. complex frequency-domain where z is complex variable), and the concept of the
system function for a discrete-time LTI system will be described. The Laplace transform
converts integro-differential equations into algebraic equations. In a similar manner, the Z-
transform converts difference equations into algebraic equations, thereby simplifying the
analysis of discrete-time systems. The properties of the Z-transform closely parallel those of
the Laplace transform. However, we will see some important distinctions between the z-
transform and the Laplace transform.

In mathematics and signal processing, the Z-transform converts a discrete-time signal,


which is a sequence of real or complex numbers, into a complex frequency-domain
representation. It can be considered as a discrete-time equivalent of the Laplace transform.
This similarity is explored in the theory of time-scale calculus. Z-transform gives a
tractable way to solve linear, constant-coefficient difference equations. The idea contained
within the Z-transform is also known in mathematical literature as the method of
generating functions. From a mathematical view the Z-transform can also be viewed as a
Laurent series where one views the sequence of numbers under consideration as the
(Laurent) expansion of an analytic function. The Z-transform can be defined as either a
one-sided or two-sided transform.

It is well-known that computers are increasingly being integrated into physical


systems. For the computer, time is not continuous, it passes in discrete intervals. So
whenever a computer is being used, it is important to understand the ramifications of the
inherently discrete nature of time. As with the Laplace Transform, we will assume that
functions of interest are equal to zero for time less than zero. In the diagram below 𝑥(𝑡)
represents a continuous-time signal that is sampled every 𝑇 seconds, the resulting signal is
called 𝑥 ⋆ (𝑡) = 𝑥(𝑘𝑇). This represents a continuous-time signal that is measured by a
computer every 𝑇 seconds that results in a sampled signal.

There are a number of ways to represent the sampling process mathematically. One that is commonly used is to immediately represent the sampled signal by a series x[n]. This technique has the advantage of being very simple, but it hides the connection of the sampled signal to the Laplace Transform; for that reason we represent the sampled signal as a train of weighted impulses:

x^{⋆}(t) = ∑_{k=0}^{∞} x(kT)δ(t − kT)

Since we now have a time domain signal, we wish to see what kind of analysis we can do in a transformed domain. Let's start by taking the Laplace Transform of the sampled signal:

X^{⋆}(s) = ℒ{x^{⋆}(t)} = ℒ{∑_{k=0}^{∞} x[k]δ(t − kT)}

Since x[k] is a constant, we can (because of Linearity) bring the Laplace Transform inside the summation:

X^{⋆}(s) = ∑_{k=0}^{∞} x[k] ℒ{δ(t − kT)} = ∑_{k=0}^{∞} x[k]e^{−skT}

To simplify the expression a little bit, we will use the notation z = e^{sT} (so that z^{−k} = e^{−skT}). We will call this the Z Transform and define it as

𝒵(x[k]) = X(z) = ∑_{k=0}^{∞} x[k]z^{−k}

Definition: The Z-transform of a sequence {x[k]} is denoted by X(z) and is calculated using the formula X(z) = ∑_{k=0}^{∞} x[k]z^{−k}, where z is chosen such that ∑_{k=0}^{∞}|x[k]z^{−k}| < ∞, which is the convergence condition. We will also use the following notation to represent a sequence {x[k]} and its Z-transform X(z): x[k] ⟷ X(z)

Alternatively, in cases where x[k] is defined from −∞ to +∞, the two-sided transform is defined as X(z) = ∑_{k=−∞}^{+∞} x[k]z^{−k}.

Region of convergence: The region of convergence (ROC) is the set of points in the complex plane for which the Z-transform summation converges:

ROC = {z : ∑_{k=−∞}^{+∞} |x[k]z^{−k}| < ∞}

Example Let us now consider a signal that is the real exponential: f[k] = α^{k}u[k]

F(z) = ∑_{k=−∞}^{+∞} α^{k}u[k]z^{−k} = ∑_{k=0}^{∞} (α/z)^{k} = z/(z − α), |z| > |α|

F(z) converges only if |z| > |α|

Example Let us now consider the left-sided real exponential: f[k] = −α^{k}u[−k − 1]

F(z) = ∑_{k=−∞}^{+∞} −α^{k}u[−k − 1]z^{−k} = −∑_{k=−∞}^{−1} (α/z)^{k} = −∑_{k=1}^{∞} (z/α)^{k} = 1 − ∑_{k=0}^{∞} (z/α)^{k} = z/(z − α)

But F(z) converges only if |z| < |α|

Remark: If one is primarily interested in applications to one-sided signals, the z-transform is restricted to causal signals (i.e., signals with zero values for negative time) and the one-sided z-transform is used.
II The Z Transform of Some Commonly Occurring Functions: The z-transform is an
important tool in the analysis and design of discrete-time systems. It simplifies the solution
of discrete-time problems by converting LTI difference equations to algebraic equations and
convolution to multiplication. Thus, it plays a role similar to that served by Laplace
transforms in continuous-time problems. Now let us give some of commonly used discrete-
time signals such as the sampled step, exponential, and the discrete time impulse.

❶ The Unit Impulse Function: In discrete time systems the unit impulse is defined
somewhat differently than in continuous time systems.

δ[k] = { 1, k = 0 ; 0, k ≠ 0 } = u[k] − u[k − 1]

Where u[k] is defined by: u[k] = { 1, k ≥ 0 ; 0, k < 0 }

X(z) = ∑_{k=0}^{∞} δ[k]z^{−k} = δ[0]z^{0} + ∑_{k=1}^{∞} δ[k]z^{−k} (the second sum is zero) = 1

❷ The Unit Step Function: The unit step is one when 𝑘 is zero or positive.
X(z) = ∑_{k=0}^{∞} u[k]z^{−k} = ∑_{k=0}^{∞} z^{−k} = 1/(1 − z^{−1}) = z/(z − 1)

Which is only valid for |𝑧| > 1; This implies


that the z-transform expression we obtain has
a region of convergence outside the unit disk

❸ The Unit Ramp Function: Consider the ramp sequence

f[k] = k u[k] ⟹ F(z) = ∑_{k=0}^{∞} k z^{−k} = −z(d/dz)(∑_{k=0}^{∞} z^{−k}) = z/(z − 1)², |z| > 1

❹ The Exponential Function: Consider the exponential function



f[k] = e^{−αkT}u[k] ⟹ F(z) = ∑_{k=0}^{∞} e^{−αkT}u[k]z^{−k} = 1/(1 − e^{−αT}z^{−1}) = z/(z − e^{−αT}), |z| > e^{−αT}

❺ The Discrete Exponential Function: With the Z-Transform it is more common to get solutions in the form of a power series

f[k] = a^{k}u[k] ⟹ F(z) = ∑_{k=0}^{∞} a^{k}u[k]z^{−k} = 1/(1 − a z^{−1}) = z/(z − a), |z| > |a|

Exercise: Find the Z-Transform of the following signal f[𝑘] = cos(𝑎𝑘) 𝑢[𝑘] Solution:
F(z) = ∑_{k=0}^{∞} cos(ak)u[k]z^{−k} = ∑_{k=0}^{∞} ((e^{iak} + e^{−iak})/2)z^{−k} = (1/2){∑_{k=0}^{∞} e^{iak}z^{−k} + ∑_{k=0}^{∞} e^{−iak}z^{−k}}

𝒵(f[k]) = F(z) = (1/2){z/(z − e^{ia}) + z/(z − e^{−ia})} = z(z − cos(a))/(z² − 2z cos(a) + 1)

Also it can be checked that if f[k] = sin(ak)u[k] then

F(z) = z sin(a)/(z² − 2z cos(a) + 1)
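A short numeric check of these two pairs by truncating the defining series (a minimal sketch; a = 0.9, z = 1.3 and the truncation length are arbitrary choices, valid since |z| > 1 makes the tail negligible):

% Partial-sum check of Z{cos(ak)u[k]} and Z{sin(ak)u[k]}
a = 0.9; z = 1.3; k = 0:500;                 % assumed test values
Fc = sum(cos(a*k).*z.^(-k));
Fs = sum(sin(a*k).*z.^(-k));
Fc_th = z*(z - cos(a))/(z^2 - 2*z*cos(a) + 1);
Fs_th = z*sin(a)/(z^2 - 2*z*cos(a) + 1);
[Fc Fc_th; Fs Fs_th]                         % rows should match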

Remarks: for signals such as the ones above, the Z-Transform X(z) is a rational function of the complex variable z, and the same complex function can also be expressed in terms of z^{−1}:

X(z) = n(z)/d(z) = z(z + 0.5)/((z + 1)(z + 2)) = (1 + 0.5z^{−1})/((1 + z^{−1})(1 + 2z^{−1}))

Properties of Region of convergence: Let 𝑋(𝑧) = 𝑛(𝑧)/𝑑(𝑧) be a rational function, so if we


define the roots of the numerator polynomial 𝑛(𝑧) as zeros of 𝑋(𝑧) and roots of the
denominator 𝑑(𝑧) as poles of 𝑋(𝑧) then, The region of convergence has a number of
properties that are dependent on the characteristics of the signal, 𝑥[𝑘].

▪ The ROC cannot contain any poles.


▪ If 𝑥[𝑘] is a finite-duration sequence, then the ROC is the entire z-plane, centered at origin.
▪ If 𝑥[𝑘] is a right-sided sequence, then the ROC extends outward from the outermost pole
in 𝑋(𝑧). And the ROC is outside of a given circle.
▪ If 𝑥[𝑘] is a left-sided sequence, then the ROC extends inward from the innermost pole in
𝑋(𝑧). And the ROC is inside of a given circle.
▪ If 𝑥[𝑘] is a two-sided sequence, the ROC will be a ring in the z-plane that is bounded on
the interior and exterior by a pole.

III. Some Properties of the Z Transform As we found with the Laplace Transform, it will
often be easier to work with the Z Transform if we develop some properties of the transform
itself. The z-transform can be derived from the Laplace transform. Hence, it shares several
useful properties with the Laplace transform, which can be stated without proof. These
properties can also be easily proved directly, and the proofs are left as an exercise for the
reader. Proofs are provided for properties that do not obviously follow from the Laplace
transform.

❶ Linearity: As with the Laplace Transform, the Z Transform is linear.

Given 𝑥[𝑘] = 𝑎. 𝑓[𝑘] + 𝑏. 𝑔[𝑘], the following property holds: 𝑋(𝑧) = 𝑎. 𝐹(𝑧) + 𝑏. 𝐺(𝑧)

Example: Find the z-transform of the causal sequence 𝑥[𝑘] = 2𝑢[𝑘] + 4𝛿[𝑘] 𝑘 = 0,1, …
X(z) = ∑_{k=0}^{∞}(2u[k] + 4δ[k])z^{−k} = ∑_{k=0}^{∞} 2u[k]z^{−k} + ∑_{k=0}^{∞} 4δ[k]z^{−k} = 2z/(z − 1) + 4 = (6z − 4)/(z − 1)

❷ Time Shift (Delay): An important property of the Z Transform is the time shift. To see
why this might be important consider that a discrete-time approximation to a derivative is
given by:
df(t)/dt |_{t=kT} ≈ (f((k + 1)T) − f(kT))/T = (f[k + 1] − f[k])/T

Let's examine what effect such a shift has upon the Z Transform. Assume that the sequence x[k] has a Z-Transform X(z), and we want to get the Z-Transform of x[k − k0] (for a causal sequence, x[m] = 0 for m < 0):

𝒵(x[k − k0]) = ∑_{k=0}^{∞} x[k − k0]z^{−k} = ∑_{m=−k0}^{∞} x[m]z^{−m−k0} = z^{−k0}∑_{m=0}^{∞} x[m]z^{−m} = z^{−k0}X(z)

Example: Find the z-transform of the causal sequence

4, 𝑘 = 2,3 …
𝑥[𝑘] = {
0, otherwise

The given sequence is a step function starting at 𝑘 = 2 rather than 𝑘 = 0 (i.e., it is delayed
by two sampling periods). Using the delay property, we have
X(z) = 4∑_{k=0}^{∞} u[k − 2]z^{−k} = 4z^{−2}∑_{m=0}^{∞} u[m]z^{−m} = 4z^{−2}(z/(z − 1)) = 4/(z(z − 1))

❸ Time Advance: Let's explore it in the same way as we did the shift to the
right. Consider the same sequence, 𝑥[𝑘], as before. This time we shift it to the left by 𝑎
samples to get 𝑥[𝑘 + 𝑎].
X_new(z) = ∑_{k=0}^{∞} x[k + a]z^{−k} = z^{a}X(z) − ∑_{ℓ=0}^{a−1} x[ℓ]z^{a−ℓ}

Proof:

X_new(z) = ∑_{k=0}^{∞} x[k + a]z^{−k} = ∑_{ℓ=a}^{∞} x[ℓ]z^{a−ℓ} = z^{a}∑_{ℓ=a}^{∞} x[ℓ]z^{−ℓ}

= z^{a}(∑_{ℓ=0}^{∞} x[ℓ]z^{−ℓ} − ∑_{ℓ=0}^{a−1} x[ℓ]z^{−ℓ}) = z^{a}X(z) − ∑_{ℓ=0}^{a−1} x[ℓ]z^{a−ℓ}

❹ Multiplication by 𝒛𝒏𝟎 in Time Domain: this property is also known as the Z-domain
scaling and it says that: if 𝑥[𝑘] ⟷ 𝑋(𝑧) with ROC = 𝑅 then
z0^{k}x[k] ⟷ ∑_{k=0}^{∞} z0^{k}x[k]z^{−k} = ∑_{k=0}^{∞} x[k](z/z0)^{−k} = X(z/z0) with ROC = |z0|R

❺ Time Scaling: This property deals with the effect on the frequency-domain
representation of a signal if the time variable is altered. The most important concept to
understand for the time scaling property is that signals that are narrow in time will be
broad in frequency and vice versa.
x[k/a] ⟷ ∑_{k=0}^{∞} x[k/a]z^{−k} = ∑_{m=0}^{∞} x[m]z^{−am} = ∑_{m=0}^{∞} x[m](z^{a})^{−m} = X(z^{a})

(here x[k/a] is taken to be zero whenever a does not divide k)

❻ Differentiation and Integration in z-Domain: if 𝑥[𝑘] ⟷ 𝑋(𝑧) with ROC = 𝑅 then


kx[k] ⟷ ∑_{k=0}^{∞} kx[k]z^{−k} = −z(d/dz)∑_{k=0}^{∞} x[k]z^{−k} = −z(d/dz)X(z) with ROC = R

k^{−1}x[k] ⟷ ∑_{k=1}^{∞} k^{−1}x[k]z^{−k} = −∫ (X(z)/z) dz with ROC = R

Proof: The derivative rule is straightforward; let us prove only the integration rule (assuming x[0] = 0 so the k = 0 term vanishes):

∫ (X(z)/z) dz = ∫ (∑_{k=1}^{∞} x[k]z^{−k−1}) dz = ∑_{k=1}^{∞} (−k^{−1})x[k]z^{−k} = −𝒵(k^{−1}x[k])

Example Determine the Z-transform of 𝑥1 [𝑘] = 𝑘 2 𝑢[𝑘] and 𝑥2 [𝑘] = 𝑘 2 𝑎𝑘 𝑢[𝑘].

▪ Applying the derivative rule twice we get:

k²x[k] ⟷ −z(d/dz){−z(d/dz)X(z)} = z²(d²/dz²)X(z) + z(d/dz)X(z)

x1[k] = k²u[k] ⟷ z²(d²/dz²)(z/(z − 1)) + z(d/dz)(z/(z − 1)) = 2z²/(z − 1)³ − z/(z − 1)² = (z² + z)/(z − 1)³

▪ Redo the same process as before; we obtain

x2[k] = k²a^{k}u[k] ⟷ z²(d²/dz²)(z/(z − a)) + z(d/dz)(z/(z − a)) = 2az²/(z − a)³ − za/(z − a)² = az(z + a)/(z − a)³

❼ Time Reversal: if 𝑥[𝑘] ⟷ 𝑋(𝑧) with ROC = 𝑅 then


x[−k] ⟷ ∑_{k=−∞}^{+∞} x[−k]z^{−k} = ∑_{k=−∞}^{+∞} x[k](1/z)^{−k} = X(1/z) with ROC = 1/R

Therefore, a pole (or zero) in X(z) at z = zk moves to 1/zk after time reversal. The
relationship ROC = 1/𝑅 indicates the inversion of 𝑅, reflecting the fact that a right-sided
sequence becomes left-sided if time-reversed, and vice versa.

❽ Time Forward Difference: if 𝑥[𝑘] ⟷ 𝑋(𝑧) with ROC = 𝑅 then

Forward: ∆𝑥𝑘 = 𝑥[𝑘 + 1] − 𝑥[𝑘] ⟷ (𝑧 − 1)𝑋(𝑧) − 𝑧𝑥[0]


Backward: ∇𝑥𝑘 = 𝑥[𝑘] − 𝑥[𝑘 − 1] ⟷ (1 − 𝑧 −1 )𝑋(𝑧)

❾ Convolution: The convolution product between two signals is an inner operator denoted
by star and is defined by
y[k] = x[k] ⋆ h[k] = h[k] ⋆ x[k] = ∑_{i=−∞}^{∞} h[k − i]x[i] = ∑_{i=−∞}^{∞} x[k − i]h[i]

Now the Z-transform of this convolution is 𝑌(𝑧) = (𝑥[𝑘] ⋆ ℎ[𝑘]) = 𝑋(𝑧)𝐻(𝑧). This
relationship plays a central role in the analysis and design of discrete-time LTI systems, in
analogy with the continuous-time case.

Proof: Applying the definition of Z-transform on convolution product and make an


interchanging the order of summations we get

Y(z) = 𝒵(x[k] ⋆ h[k]) = ∑_{k=−∞}^{∞} (∑_{i=−∞}^{∞} x[k − i]h[i]) z^{−k} = ∑_{i=−∞}^{∞} (h[i] ∑_{k=−∞}^{∞} x[k − i]z^{−k})

We know that ∑_{k=−∞}^{∞} x[k − i]z^{−k} = z^{−i}X(z), hence

Y(z) = ∑_{i=−∞}^{∞} h[i]z^{−i}X(z) = X(z) ∑_{i=−∞}^{∞} h[i]z^{−i} = X(z)H(z)

Example Determine the Z-transform of δ[k − k0] and y[k] = x[k] ⋆ δ[k − k0], given that X(z) is the Z-transform of x[k].

δ[k] ⟷ 1 ⟹ δ[k − k0] ⟷ z^{−k0}

y[k] = x[k] ⋆ δ[k − k0] ⟹ Y(z) = z^{−k0}X(z)

Also it can be verified by using the Dirac properties:

y[k] = x[k] ⋆ δ[k − k0] = ∑_{m=−∞}^{+∞} x[m]δ[k − m − k0] = x[k − k0]
The convolution relationship plays a central role in the analysis and design of discrete-time
LTI systems, in analogy with the continuous-time case.

δ[k] → [LTI system H(z)] → h[k], and more generally x[k] → [LTI system H(z)] → y[k]

Because of linearity and time invariance, the convolution of an arbitrary signal with a Dirac at the input will produce the same signal convolved with h[k] at the output, so

x[k] = x[k] ⋆ δ[k] → [LTI system H(z)] → y[k] = x[k] ⋆ h[k]

❿ Time Accumulation: if 𝑥[𝑘] ⟷ 𝑋(𝑧) with ROC = 𝑅 then


𝒵(∑_{i=−∞}^{k} x[i]) = X(z)/(1 − z^{−1})

Proof: It is well-known that

x[k] ⋆ u[k] = ∑_{i=−∞}^{∞} u[k − i]x[i] = ∑_{i=−∞}^{k} x[i] ⟹ 𝒵(∑_{i=−∞}^{k} x[i]) = 𝒵(x[k] ⋆ u[k]) = X(z)/(1 − z^{−1})

⓫ Conjugation: if 𝑥[𝑘] ⟷ 𝑋(𝑧) with ROC = 𝑅 then 𝑥 ⋆ [𝑘] ⟷ 𝑋 ⋆ (𝑧 ⋆ ) with ROC = 𝑅


∑_{k=0}^{∞} x^{⋆}[k]z^{−k} = (∑_{k=0}^{∞} x[k](z^{⋆})^{−k})^{⋆} = X^{⋆}(z^{⋆})

⓬ Parseval's Theorem:
∑_{k=−∞}^{+∞} x1[k]x2^{⋆}[k] = (1/(2πj)) ∮ X1(z)X2^{⋆}(1/z^{⋆}) z^{−1} dz

Proof: Let 𝑥1 [𝑘] ⟷ 𝑋1 (𝑧) and 𝑥2 [𝑘] ⟷ 𝑋2 (𝑧) The proof of this identity is based on the inverse
relation and later on we will see the inverse formula of the Z-transform. Now assume that
the inverse formula is well-known
x1[k] = (1/(2πj)) ∮ X1(z)z^{k−1} dz

∑_{k=−∞}^{+∞} x1[k]x2^{⋆}[k] = ∑_{k=−∞}^{+∞} {(1/(2πj)) ∮ X1(z)z^{k−1} dz} x2^{⋆}[k] = (1/(2πj)) ∮ X1(z) (∑_{k=−∞}^{+∞} x2^{⋆}[k]z^{k−1}) dz

= (1/(2πj)) ∮ X1(z) (∑_{k=−∞}^{+∞} x2[k](1/z^{⋆})^{−k})^{⋆} z^{−1} dz = (1/(2πj)) ∮ X1(z)X2^{⋆}(1/z^{⋆}) z^{−1} dz

⓭ Multiplication in Time It gives the change in Z-domain of the signal when


multiplication takes place. 𝑥1 [𝑘] ⟷ 𝑋1 (𝑧) and 𝑥2 [𝑘] ⟷ 𝑋2 (𝑧)

1 𝑧
𝑥1 [𝑘]𝑥2 [𝑘] ⟷ ∮ 𝑋1 (𝑣)𝑋2 ( ) 𝑣 −1 𝑑𝑣
2𝜋𝑗 𝑣

Proof: Let 𝑥1 [𝑘] ⟷ 𝑋1 (𝑧) and 𝑥2 [𝑘] ⟷ 𝑋2 (𝑧)


∑_{k=0}^{∞} x1[k]x2[k]z^{−k} = ∑_{k=0}^{∞} ((1/(2πj)) ∮ X1(v)v^{k−1} dv) x2[k]z^{−k} = (1/(2πj)) ∮ X1(v) (∑_{k=0}^{∞} x2[k]v^{k−1}z^{−k}) dv

= (1/(2πj)) ∮ X1(v) (∑_{k=0}^{∞} x2[k](z/v)^{−k}) v^{−1} dv = (1/(2πj)) ∮ X1(v)X2(z/v) v^{−1} dv

⓮ Initial value theorem: If x[k] is causal (i.e. there are no positive powers of z in its Z-transform X(z)), then the initial value theorem can be written as:

lim 𝑥[𝑘] = lim 𝑋(𝑧)


𝑘→0 𝑧→∞

𝐏𝐫𝐨𝐨𝐟: X(z) = ∑_{k=0}^{∞} x[k]z^{−k} = x[0] + x[1]z^{−1} + x[2]z^{−2} + ⋯ ⟹ lim_{z→∞} X(z) = x[0] = lim_{k→0} x[k]

⓯ Final value theorem: The final value theorem allows us to calculate the limit of a
sequence as 𝑘 tends to infinity, if one exists, from the z-transform of the sequence.

The Final Value Theorem states that if x[k] is causal and the poles of (1 − z^{−1})X(z) are all inside the unit circle, then its final value, denoted x(∞), can be written as

𝑥(∞) = lim 𝑥[𝑘] = lim(1 − 𝑧 −1 )𝑋(𝑧) = lim(𝑧 − 1)𝑋(𝑧)


𝑘→∞ 𝑧→1 𝑧→1

Proof: Start with the definition of the z-transform


X(z) = lim_{N→∞} ∑_{k=0}^{N} x[k]z^{−k}

and also use the time advance theorem of the z-transform


z(X(z) − x[0]) = lim_{N→∞} ∑_{k=0}^{N} x[k + 1]z^{−k}
Now subtract the first from the second
(z − 1)X(z) − zx[0] = lim_{N→∞} (∑_{k=0}^{N} (x[k + 1] − x[k])z^{−k})
Then take the limit as 𝑧 → 1

lim{(𝑧 − 1)𝑋(𝑧) − 𝑧𝑥[0]} = lim 𝑥[𝑁 + 1] − 𝑥[0] ⟹ (lim(𝑧 − 1)𝑋(𝑧)) − 𝑥[0] = lim 𝑥[𝑁 + 1] − 𝑥[0]
𝑧→1 𝑁→∞ 𝑧→1 𝑁→∞

So
lim(𝑧 − 1)𝑋(𝑧) = lim 𝑥[𝑁 + 1] = 𝑥(∞)
𝑧→1 𝑁→∞

Alternately: Using the time delay theorem of the z-transform


z^{−1}X(z) = lim_{N→∞} ∑_{k=0}^{N} x[k − 1]z^{−k}
we can get to
(1 − z^{−1})X(z) = lim_{N→∞} (∑_{k=0}^{N} x[k]z^{−k} − ∑_{k=0}^{N} x[k − 1]z^{−k})

and assuming x[k] = 0 for k < 0 and taking the limit as z → 1,

lim_{z→1}(1 − z^{−1})X(z) = lim_{N→∞} x[N] − x[−1] = lim_{N→∞} x[N] = x(∞)

The main pitfall of the theorem is that there are important cases where the limit does not
exist. The two main cases are as follows:

1. An unbounded sequence i.e. poles are outside the unit circle.


2. An oscillatory sequence i.e. poles are on the unit circle.

The reader is cautioned against blindly using the final value theorem, because this can
yield misleading results.

Example: Verify the final value theorem using the z-transforms of the exponential sequences x1[k] = a^{k}u[k] and x2[k] = a^{−k}u[k] with a > 1, and their limits as k tends to infinity.

Solution The z-transform pairs are

a^{k}u[k] ⟷ X1(z) = z/(z − a) & a^{−k}u[k] ⟷ X2(z) = az/(az − 1)

Applying the final value theorem:

x1(∞) = lim_{z→1}(z − 1)X1(z) = lim_{z→1} z(z − 1)/(z − a) = 0

x2(∞) = lim_{z→1}(z − 1)X2(z) = lim_{z→1} az(z − 1)/(az − 1) = 0

For the first signal x1[k] (pole at z = a > 1, outside the unit circle) the theorem cannot even be applied: the sequence grows without a limit, which violates the conditions of the theorem, even though the formula blindly returns 0. For the second signal x2[k] (pole at z = 1/a, inside the unit circle) the result is true.
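A two-line numeric illustration of this pitfall (a minimal sketch; a = 2 is an arbitrary choice with a > 1):

% FVT formula vs. actual behaviour for x1[k] = a^k u[k], a > 1
a = 2; z = 1 + 1e-9;                 % evaluate the limit numerically near z = 1
(z-1)*z/(z-a)                        % formula returns ~0 ...
k = 50; a^k                          % ... yet the sequence itself diverges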


⓰ Periodic Functions: If a function 𝑥1 [𝑘] is identically zero except for 0 ≤ 𝑘 ≤ 𝑇, then for
the periodic function 𝑥[𝑘] defined by

x[k] = ∑_{m=0}^{∞} x1[k − mT]

The Z-transform X(z) is given by

X(z) = (1/(1 − z^{−T})) ∑_{k=0}^{∞} x1[k]z^{−k} = X1(z)/(1 − z^{−T})
Proof: Let we define 𝑥ℓ [𝑘] = 𝑥1 [𝑘 − ℓ𝑇] then

𝑥[𝑘] = 𝑥1 [𝑘] + 𝑥2 [𝑘] + 𝑥3 [𝑘] + ⋯


= 𝑥1 [𝑘] + 𝑥1 [𝑘 − 𝑇]𝑢[𝑘 − 𝑇] + 𝑥1 [𝑘 − 2𝑇]𝑢[𝑘 − 2𝑇] + ⋯

applying the Z-transform and the time-shifting property, we get


𝑋1 (𝑧)
𝑋(𝑧) = 𝑋1 (𝑧) + 𝑧 −𝑇 𝑋1 (𝑧) + +𝑧 −2𝑇 𝑋1 (𝑧) + ⋯ = 𝑋1 (𝑧)(1 + 𝑧 −𝑇 + 𝑧 −2𝑇 + ⋯ ) =
1 − 𝑧 −𝑇
IV. Transfer Function (System Function) and DC Gain: The theory of linear difference
equations closely parallels the theory of linear differential equations. The major distinction
between these two mathematical models is that a differential equation expresses a
relationship between continuous input and continuous output functions, whereas a
difference equation expresses a relationship between discrete input and discrete output
functions (sequences).

 The transfer function: We considered a discrete-time LTI system for which input 𝑥[𝑘] and
output 𝑦[𝑘] satisfy the general linear constant-coefficient difference equation of the form
∑_{i=0}^{n} ai y[k − i] = ∑_{i=0}^{m} bi x[k − i]

The transfer function of a system 𝐻(𝑧) is defined as the Z-transform of its output
𝑦[𝑘] divided by the Z-transform of its forcing function 𝑥[𝑘], so it is a complex rational
function of two polynomials named numerator and denominator.

Applying the z-transform and using the time-shift property and the linearity property of the
z-transform to this difference equation, we obtain
(∑_{i=0}^{n} ai z^{−i}) Y(z) = (∑_{i=0}^{m} bi z^{−i}) X(z) ⟹ H(z) = Y(z)/X(z) = (∑_{i=0}^{m} bi z^{−i})/(∑_{i=0}^{n} ai z^{−i})

Hence, 𝐻(𝑧) is always rational.

 The DC gain: is that ratio of the steady-state output to the steady-state input, especially
we take the input as unit step. The DC gain is an important parameter, especially in
control applications.
Y(z) = H(z)X(z) = H(z)·z/(z − 1)

Using the final value theorem we obtain

x[∞] = lim_{z→1}(1 − z^{−1})X(z) = lim_{z→1}(1 − z^{−1}) z/(z − 1) = 1

y[∞] = lim_{z→1}(1 − z^{−1})Y(z) = lim_{z→1}(1 − z^{−1})H(z) z/(z − 1) = H(1)

The DC gain is

DC gain = y[∞]/x[∞] = H(1) = lim_{z→1}(1 − z^{−1})Y(z)
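In MATLAB the DC gain of a rational H(z) stored as coefficient vectors is one line (a minimal sketch; the b, a values below are an arbitrary example system, not one from the text):

% DC gain H(1) of H(z) = (b0 + b1 z^-1 + ...)/(a0 + a1 z^-1 + ...)
b = [1 -0.5]; a = [1 -0.75 0.125];     % assumed example coefficients
dcgain = polyval(b,1)/polyval(a,1)     % = sum(b)/sum(a) = H(1)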

As mentioned earlier, Z-transformation is carried out to simplify


time domain calculations. After completing all the calculations in the Z domain, we have to
map the results back to the time domain so as to be of use.

❶ Contour Integration We now discuss how to obtain the sequence x[k] by contour integration, given its Z-transform. Recall that the two-sided Z-transform is defined by

𝑋(𝑧) = ∑ 𝑥[𝑘]𝑧 −𝑘
𝑘=−∞
Let us multiply both sides by 𝑧 𝑛−1 and integrate over a closed contour within ROC of 𝑋(𝑧);
let the contour enclose the origin. We have

∮ 𝑋(𝑧)𝑧 𝑛−1 𝑑𝑧 = ∮ ( ∑ 𝑥[𝑘]𝑧 −𝑘 ) 𝑧 𝑛−1 𝑑𝑧


𝒞 𝒞 𝑘=−∞

where 𝒞 denotes the closed contour within ROC, taken in a counterclockwise direction. As
the curve 𝒞 is inside ROC, the sum converges on every part of 𝒞 and, as a result, the
integral and the sum on the right-hand side can be interchanged. The above equation
becomes

∮ 𝑋(𝑧)𝑧 𝑛−1 𝑑𝑧 = ∑ 𝑥[𝑘] ∮ 𝑧 𝑛−1−𝑘 𝑑𝑧


𝒞 𝑘=−∞ 𝒞

Now we make use of the Cauchy integral theorem, according to which

1 1 𝑘=𝑛
∮ 𝑧 𝑛−1−𝑘 𝑑𝑧 = {
2𝜋𝑗 𝒞 0 𝑘≠𝑛

Here, 𝒞 is any contour that encloses the origin. Using the above equation, the right hand
side becomes 2𝜋𝑗𝑥[𝑘] and hence we obtain the formula

1 1
∮ 𝑋(𝑧)𝑧 𝑛−1 𝑑𝑧 = ∑ 𝑥[𝑘] ( ∮ 𝑧 𝑛−1−𝑘 𝑑𝑧) = 𝑥[𝑛]
2𝜋𝑗 𝒞 2𝜋𝑗 𝒞
𝑘=−∞

1
𝑥[𝑛] = ∮ 𝑋(𝑧)𝑧 𝑛−1 𝑑𝑧
2𝜋𝑗 𝒞

❷ Partial Fraction Expansion In case the given Z-transform is more complicated, it is


decomposed into a sum of standard fractions, and one can then calculate the overall
inverse Z-transform using the linearity property. We will restrict our attention to inversion
of Z-transforms that are proper.

Because the inverse Z-transform of 𝑧/(𝑧 − 𝑎) is given by 𝑎𝑘 𝑢[𝑘], we split 𝐻(𝑧) into partial
fractions, as given below:
𝐻1 𝐻2 𝐻𝑚
𝐻(𝑧) = + + ⋯+
𝑧 − 𝑎1 𝑧 − 𝑎2 𝑧 − 𝑎𝑚
where we have assumed that H(z) has m simple poles at a1, a2, …, am. The coefficients H1, H2, …, Hm are known as residues at the corresponding poles. The residues are calculated using the formula Hi = lim_{z→ai}(z − ai)H(z), i = 1, 2, …, m

In case of repeated poles (i.e. each root ai is repeated ℓi times):

H(z) = ∑_{i=1}^{r} ∑_{j=1}^{ℓi} Hij/(z − ai)^{j}

Where

Hij = (1/(ℓi − j)!) lim_{z→ai} (d^{ℓi−j}/dz^{ℓi−j})((z − ai)^{ℓi} H(z))

Example: Obtain the inverse of 𝐻(𝑧), defined by

11𝑧 2 − 15𝑧 + 6
𝐻(𝑧) =
(𝑧 − 2)(𝑧 − 1)2

We begin with partial fraction expansion:


𝐻11 𝐻21 𝐻22
𝐻(𝑧) = + +
(𝑧 − 2) (𝑧 − 1) (𝑧 − 1)2

11𝑧 2 − 15𝑧 + 6
𝐻11 = lim(𝑧 − 2)𝐻(𝑧) = lim = 20
𝑧→2 𝑧→2 (𝑧 − 1)2

Multiplying 𝐻(𝑧) by (𝑧 − 1)2 , we obtain the following equation:

11𝑧 2 − 15𝑧 + 6 𝐻11 (𝑧 − 1)


(𝑧 − 1)2 𝐻(𝑧) = = + 𝐻21 (𝑧 − 1) + 𝐻22
(𝑧 − 2) (𝑧 − 2)

Substituting 𝑧 = 1, we obtain 𝐻22 = −2 . On differentiating once with respect to 𝑧 and


substituting 𝑧 = 1, we obtain

H21 = lim_{z→1} [(z − 2)(22z − 15) − (11z² − 15z + 6)]/(z − 2)² = −9

H(z) = 20/(z − 2) − 9/(z − 1) − 2/(z − 1)² = 20z^{−1}/(1 − 2z^{−1}) − 9z^{−1}/(1 − z^{−1}) − 2z^{−2}/(1 − z^{−1})²

⟹ h[k] = (20 × 2^{k−1} − 9 − 2(k − 1))u[k − 1]
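The same residues can be obtained with MATLAB's residue function (a minimal sketch; residue expects the numerator and the expanded denominator coefficient vectors):

% Residues of H(z) = (11z^2 - 15z + 6)/((z-2)(z-1)^2)
b = [11 -15 6];
a = conv([1 -2], conv([1 -1],[1 -1]));   % (z-2)(z-1)^2 = z^3 - 4z^2 + 5z - 2
[r, p, k] = residue(b, a)                % r = [20; -9; -2], poles p = [2; 1; 1]
% For the repeated pole, the last entry of r is the 1/(z-1)^2 coefficient.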

The above method is known as inversion. It is also known as realization, because it is


through this methodology that we can realize transfer functions in real life.

Example: Obtain the inverse of proper function 𝐻(𝑧), defined by

𝑧 3 − 𝑧 2 + 3𝑧 − 1
𝐻(𝑧) =
(𝑧 − 1)(𝑧 2 − 𝑧 + 1)

Because it is proper, we first divide the numerator by the denominator and obtain
𝑧(𝑧 + 1)
𝐻(𝑧) = 1 + = 1 + 𝐺(𝑧)
(𝑧 − 1)(𝑧 2 − 𝑧 + 1)
As 𝐺(𝑧) has a zero at the origin, we can divide by 𝑧:
𝐺(𝑧) (𝑧 + 1) (𝑧 + 1)
= =
𝑧 (𝑧 − 1)(𝑧 2 − 𝑧 + 1) (𝑧 − 1)(𝑧 − 𝑒 𝑗𝜋/3 )(𝑧 − 𝑒 −𝑗𝜋/3 )

Note that complex poles or complex zeros, if any, would always occur in conjugate pairs for
real sequences.
G(z)/z = 2/(z − 1) − 1/(z − e^{jπ/3}) − 1/(z − e^{−jπ/3})

We cross multiply by z and invert:

H(z) = 1 + 2z/(z − 1) − z/(z − e^{jπ/3}) − z/(z − e^{−jπ/3}) ⟹ h[k] = δ[k] + (2 − 2cos(πk/3))u[k]

Sometimes it helps to work directly in powers of z−1. We illustrate this in the next example.

Example: Obtain the inverse of proper function 𝐻(𝑧), defined by

H(z) = (3 − (5/6)z^{−1})/((1 − (1/4)z^{−1})(1 − (1/3)z^{−1})), |z| > 1/3

There are two poles, one at z = 1/3 and one at z = 1/4. As the ROC lies outside the outermost pole, the inverse transform is a right-handed sequence:

H(z) = H1/(1 − (1/4)z^{−1}) + H2/(1 − (1/3)z^{−1})

It is straightforward to see that H1 = 1 & H2 = 2

H(z) = 1/(1 − (1/4)z^{−1}) + 2/(1 − (1/3)z^{−1}) ⟹ h[k] = {(1/4)^{k} + 2(1/3)^{k}}u[k]

Example: Determine the inverse Z-transform of

H(z) = (z² + 2z)/((z − 2)(z + 1)²), 1 < |z| < 2

As the degree of the numerator polynomial is less than that of the denominator polynomial, and as it has a zero at the origin, first divide by z and do a partial fraction expansion:

H(z)/z = (z + 2)/((z − 2)(z + 1)²) = H11/(z − 2) + H21/(z + 1) + H22/(z + 1)²

After computation as before we get

H(z)/z = (4/9)/(z − 2) − (4/9)/(z + 1) − (1/3)/(z + 1)²

Because the ROC is given to be the annulus 1 < |z| < 2, and because it must not contain the poles, the terms split as follows:

H(z) = [−(4/9) z/(z + 1) − (1/3) z/(z + 1)²] (the |z| > 1 part) + [(4/9) z/(z − 2)] (the |z| < 2 part)

We obtain the inverse as

h[k] = (k/3 − 4/9)(−1)^{k}u[k] − (4/9)2^{k}u[−k − 1]

Example: Determine the inverse Z-transform of


H(z) = 1/(z² + (√2 − 1)z − √2)

H(z)/z = 1/(z(z² + (√2 − 1)z − √2)) = 1/(z(z − 1)(z + √2)) = α/z + β/(z − 1) + γ/(z + √2)

After evaluation of α, β & γ we get

α = −1/√2, β = 1/(1 + √2), and γ = 1/(2 + √2)

Therefore the partial fraction expansion is

H(z) = −1/√2 + (1/(1 + √2)) z/(z − 1) + (1/(2 + √2)) z/(z + √2)

h[k] = −(1/√2)δ[k] + {1/(1 + √2) + (1/(2 + √2))(−√2)^{k}}u[k]

❸ Inversion by Power Series Now we will present another method of inversion. In this, both the numerator and the denominator are written in powers of z^{−1} and we divide the former by the latter through long division. Because we obtain the result in a power series, this method is known as the power series method. We illustrate it with an example.

Example: Determine the inverse Z-transform of H(z) = log(1 + az^{−1}), |z| > |a|

log(1 + v) = ∑_{i=1}^{∞} (−1)^{i+1}v^{i}/i, |v| < 1 ⟹ log(1 + az^{−1}) = ∑_{k=1}^{∞} (−1)^{k+1}a^{k}z^{−k}/k, |z| > |a|

H(z) = log(1 + az^{−1}) = 𝒵{h[k]} with h[k] = { (−1)^{k+1}a^{k}/k, k > 0 ; 0, k ≤ 0 } ⟹ h[k] = −((−a)^{k}/k)u[k − 1]
0 𝑘≤0
−1
Example: Determine the inverse Z-transform of 𝐻(𝑧) = 𝑒 𝑧 .
−1
Let we use the power series expansion for 𝑒 𝑧
−1
∞ 1 −𝑘
𝐻(𝑧) = 𝑒 𝑧 = 𝑒 1/𝑧 = ∑ 𝑧
𝑘=0 𝑘!

Compare this to the Z-transform formula we get

1
ℎ[𝑘] = 𝑢[𝑘]
𝑘!
For convenience, the very important Z-transform properties are summarized in Table

Property                    Signal                       Z-transform

                            𝐱[𝑛]                         𝐗(𝑧)
                            𝐱1[𝑛]                        𝐗1(𝑧)
                            𝐱2[𝑛]                        𝐗2(𝑧)
Linearity                   𝛼𝐱1[𝑛] + 𝛽𝐱2[𝑛]              𝛼𝐗1(𝑧) + 𝛽𝐗2(𝑧)
Time shifting               𝐱[𝑛 − 𝑛0]                    𝑧^{−𝑛0}𝐗(𝑧)
Frequency scaling 1         𝑒^{𝑗𝜔0𝑛}𝐱[𝑛]                 𝐗(𝑒^{−𝑗𝜔0}𝑧)
Frequency scaling 2         𝑧0^{𝑛}𝐱[𝑛]                   𝐗(𝑧/𝑧0)
Time reversal               𝐱[−𝑛]                        𝐗(1/𝑧)
Frequency differentiation   𝑛𝐱[𝑛]                        −𝑧 𝑑𝐗(𝑧)/𝑑𝑧
Accumulation                ∑_{𝑚=−∞}^{𝑛} 𝐱[𝑚]            𝐗(𝑧)/(1 − 𝑧^{−1})
Convolution                 𝐱1[𝑛] ⋆ 𝐱2[𝑛]                𝐗1(𝑧)𝐗2(𝑧)

Parseval's theorem:

∑_{𝑛=−∞}^{∞} 𝐱1[𝑛]𝐱2^{⋆}[𝑛] = (1/(2𝜋𝑗)) ∮ 𝐗1(𝑧)𝐗2^{⋆}(1/𝑧^{⋆})𝑧^{−1}𝑑𝑧
−∞

Common Z-Transforms Pairs


𝐱[𝑛]                 𝐗(𝑧)
𝛿[𝑛]                 1
𝛿[𝑛 − 𝑛0]            𝑧^{−𝑛0}
𝑢[𝑛]                 1/(1 − 𝑧^{−1}) with |𝑧| > 1
−𝑢[−𝑛 − 1]           1/(1 − 𝑧^{−1}) with |𝑧| < 1
cos(𝑎𝑛)𝑢[𝑛]          𝑧(𝑧 − cos(𝑎))/(𝑧² − 2𝑧 cos(𝑎) + 1) with |𝑧| > 1
sin(𝑎𝑛)𝑢[𝑛]          𝑧 sin(𝑎)/(𝑧² − 2𝑧 cos(𝑎) + 1) with |𝑧| > 1
𝑎^{𝑛}𝑢[𝑛]            1/(1 − 𝑎𝑧^{−1}) with |𝑧| > |𝑎|
−𝑎^{𝑛}𝑢[−𝑛 − 1]      1/(1 − 𝑎𝑧^{−1}) with |𝑧| < |𝑎|
𝑛𝑎^{𝑛}𝑢[𝑛]           𝑎𝑧^{−1}/(1 − 𝑎𝑧^{−1})² with |𝑧| > |𝑎|
−𝑛𝑎^{𝑛}𝑢[−𝑛 − 1]     𝑎𝑧^{−1}/(1 − 𝑎𝑧^{−1})² with |𝑧| < |𝑎|

Remark: The notation |𝑧| stands for the magnitude of complex numbers
Solved Problems:
Exercise 1: Consider the Discrete time LTI system described by
y[n − 1] − (5/2)y[n] + y[n + 1] = x[n]
1. Determine the poles and zeros of the system function (is it stable?)
2. Determine the impulse response IR ℎ[𝑛]

Ans: 1. We take the Z-transform to the difference equation we get


(z^{−1} + z − 5/2)Y(z) = X(z) ⟹ H(z) = 1/(z^{−1} + z − 5/2) = z/(z² − (5/2)z + 1) = z/((z − 1/2)(z − 2))

There are two poles 𝑝1 = 1⁄2 , 𝑝2 = 2 and two zeros 𝑧1 = 0 , 𝑧2 = ∞, the system is unstable
because poles are not inside unit disc.

2. Here we take a partial fraction expansion

H(z)/z = (2/3){1/(z − 2) − 1/(z − 1/2)} ⟹ H(z) = (2/3){1/(1 − 2z^{−1}) − 1/(1 − (1/2)z^{−1})}

𝐂𝐚𝐬𝐞 𝟎𝟏: |z| > 2, h[n] = (2/3)(2^{n} − (1/2)^{n})u[n]
𝐂𝐚𝐬𝐞 𝟎𝟐: |z| < 1/2, h[n] = (2/3)((1/2)^{n} − 2^{n})u[−n − 1]
𝐂𝐚𝐬𝐞 𝟎𝟑: 1/2 < |z| < 2, h[n] = −(2/3)(2^{n}u[−n − 1] + (1/2)^{n}u[n])

Exercise 2: Consider the Discrete time LTI system with input 𝑥[𝑛] and impulse response
ℎ[𝑛] where
h[n] = { a^{n}, n ≥ 0 ; 0, n < 0 }   x[n] = { 1, 0 ≤ n ≤ N − 1 ; 0, elsewhere }
Determine the output of the system using the Z-transform

Ans: We know that for any LTI system 𝑌(𝑧) = 𝑋(𝑧)𝐻(𝑧) or equivalently 𝑦[𝑛] = 𝑥[𝑛] ⋆ ℎ[𝑛]
 H(z) = 𝒵(h[n]) = ∑_{n=0}^{∞} a^{n}z^{−n} = ∑_{n=0}^{∞} (a/z)^{n} = 1/(1 − az^{−1})

 X(z) = 𝒵(x[n]) = 𝒵(u[n] − u[n − N]) = ∑_{n=0}^{N−1} z^{−n} = (1 − z^{−N})/(1 − z^{−1})

 Y(z) = X(z)H(z) = (1 − z^{−N})/((1 − az^{−1})(1 − z^{−1})) = 1/((1 − az^{−1})(1 − z^{−1})) − z^{−N}/((1 − az^{−1})(1 − z^{−1}))

Y(z) = [(1/(1 − a)) 1/(1 − z^{−1}) + (1/(1 − a^{−1})) 1/(1 − az^{−1})] − z^{−N}[(1/(1 − a)) 1/(1 − z^{−1}) + (1/(1 − a^{−1})) 1/(1 − az^{−1})]

y[n] = (1/(1 − a))(u[n] − u[n − N]) + (1/(1 − a^{−1}))(a^{n}u[n] − a^{n−N}u[n − N])
clear all, clc
N = 30; a = 0.7;

for n = 1:1:N
    h(n) = a^n;     % samples of h[n] = a^n
    x(n) = 1;       % rectangular input x[n] = 1
end

y = conv(x,h);      % discrete convolution y = x * h
stem(y),
grid on

Other method for programing discrete LTI systems using difference equations
clear all, clc,

y(1)=0; y(2)=0; N=50; N1=10; N2=30;


k=0:1:N-1;
x=[zeros(1,N1) ones(1,N2-N1) zeros(1,N-N2)];
subplot(211)
stem(k,x,'b','linewidth',1.5);
for n=2:N
y(n)=(0.7)*y(n-1)+x(n);
end
subplot(212)
stem(k(1:end-1),y(2:end),'r','linewidth',1.5);
grid on

Exercise 3: Consider the Discrete time system shown in figure

h1[n] = u[n + 1], h2[n] = −u[n] (the two LTI subsystems are connected in parallel: x[n] feeds both h1[n] and h2[n], and their outputs add to give y[n])

1. Are h1[n], h2[n] stable systems?
2. Determine y[n] if the input x[n] is given by

x[n] = cos((2π/7)n) + sin((π/8)n)

3. Is the overall system stable?

Ans: 1. We have

h1[n] = u[n + 1] ⟹ H1(z) = 𝒵(h1[n]) = z·1/(1 − z^{−1}) = z²/(z − 1) ⟹ unstable (pole at z = 1)

h2[n] = −u[n] ⟹ H2(z) = 𝒵(h2[n]) = −1/(1 − z^{−1}) = −z/(z − 1) ⟹ unstable (pole at z = 1)

In other words h1[n], h2[n] are unstable because they are not absolutely summable.

2. To determine y[n] let us look for X(z):

X(z) = 𝒵(cos((2π/7)n) + sin((π/8)n)) = (1 − cos(2π/7)z^{−1})/(1 − 2cos(2π/7)z^{−1} + z^{−2}) + sin(π/8)z^{−1}/(1 − 2cos(π/8)z^{−1} + z^{−2})
It is very difficult to obtain 𝑦[𝑛] from the frequency domain because of the very complicated
formula of 𝑋(𝑧). To avoid complexity we will look for ℎ[𝑛] = ℎ1 [𝑛] + ℎ2 [𝑛]

ℎ[𝑛] = 𝑢[𝑛 + 1] − 𝑢[𝑛] = 𝛿[𝑛 + 1] ⟹ 𝑦[𝑛] = 𝑥[𝑛] ⋆ ℎ[𝑛] = 𝑥[𝑛] ⋆ 𝛿[𝑛 + 1] = 𝑥[𝑛 + 1]
2𝜋 𝜋
ℎ[𝑛] = 𝛿[𝑛 + 1] ⟹ 𝑦[𝑛] = 𝑥[𝑛 + 1] = cos ( (𝑛 + 1)) + sin ( (𝑛 + 1))
7 8

3. The overall system is stable, since ∑_{n=−∞}^{∞}|h[n]| = ∑_{n=−∞}^{∞} δ[n + 1] = 1; but a hidden instability is there because of cancellation.

Exercise 4: Consider the Discrete time LTI system with an input output sequences

x[n] = (1/2)^{n}u[n] − (1/4)(1/2)^{n−1}u[n − 1]   y[n] = (1/3)^{n}u[n]

1. Determine ℎ[𝑛] the impulse response of the system


2. Find the difference equation relation input to outputs
3. Realize this system with min number of components (adders delays etc..)

Ans: 1. We have

X(z) = 1/(1 − (1/2)z^{−1}) − ((1/4)z^{−1})/(1 − (1/2)z^{−1}) = (1 − (1/4)z^{−1})/(1 − (1/2)z^{−1}), Y(z) = 1/(1 − (1/3)z^{−1})

H(z) = Y(z)/X(z) = (1 − (1/2)z^{−1})/((1 − (1/4)z^{−1})(1 − (1/3)z^{−1})) = (1 − (1/2)z^{−1})/(1 − (7/12)z^{−1} + (1/12)z^{−2})
2. The difference equation is

y[n] − (7/12)y[n − 1] + (1/12)y[n − 2] = x[n] − (1/2)x[n − 1]
3. We introduce an intermediate variable w:

H(z) = Y(z)/X(z) = (Y(z)/W(z))(W(z)/X(z))

Y(z)/W(z) = 1 − (1/2)z^{−1}, W(z)/X(z) = 1/(1 − (7/12)z^{−1} + (1/12)z^{−2})

w[n] = x[n] + (7/12)w[n − 1] − (1/12)w[n − 2]
y[n] = w[n] − (1/2)w[n − 1]

[Figure: direct-form II realization — x[n] enters an adder feeding w[n]; two delay elements D with feedback gains 7/12 and −1/12 produce w[n − 1] and w[n − 2]; the output adder forms y[n] = w[n] − (1/2)w[n − 1]]
Exercise 5: Determine the inverse of the Z–transform of
X1(z) = 1/(1 − (1/2)z^{−1}), |z| < 1/2

X2(z) = (1 − (1/2)z^{−1})/(1 − (3/4)z^{−1} + (1/8)z^{−2}), |z| > 1/2

Ans: 1. From the region of convergence we deduce that x1[n] is a left-sided signal, so

X1(z) = 1/(1 − (1/2)z^{−1}) ⟹ x1[n] = −(1/2)^{n}u[−n − 1]

2. It is preferred to do a partial fraction expansion:

X2(z) = (1 − (1/2)z^{−1})/(1 − (3/4)z^{−1} + (1/8)z^{−2}) = (1 − (1/2)z^{−1})/((1 − (1/2)z^{−1})(1 − (1/4)z^{−1})) = 1/(1 − (1/4)z^{−1})

The ROC |z| > 1/2 guarantees |z| > 1/4, so

x2[n] = (1/4)^{n}u[n]

Exercise 6: Determine the Z–transform and the corresponding ROC

𝟏. x[n] = δ[n] − (1/2)δ[n − 6]; 𝟐. x[n] = (1/2)^{n−1}u[n − 1]; 𝟑. x[n] = (1/2)^{|n|}

Ans:

𝟏. X(z) = 1 − (1/2)z^{−6}; the ROC is the entire z-plane except z = 0

𝟐. X(z) = z^{−1}/(1 − (1/2)z^{−1}); the ROC is |z| > 1/2

𝟑. X(z) = ∑_{n=1}^{∞} ((1/2)z)^{n} + ∑_{n=0}^{∞} ((1/2)z^{−1})^{n} ⟹ |(1/2)z| < 1 and |(1/2)z^{−1}| < 1 ⟹ the ROC is {z : 1/2 < |z| < 2}

X(z) = ((1/2)z)/(1 − (1/2)z) + 1/(1 − (1/2)z^{−1}), ROC {z : 1/2 < |z| < 2}

Exercise 7: Convolve the following signals using the Z-transform (the bold samples sit at n = 0)

x[n] = {1, −1, 𝟏, −1, 1}   h[n] = {−1, 2, 𝟎, 2, −1}

X(z) = z² − z + 1 − z^{−1} + z^{−2} & H(z) = −z² + 2z + 2z^{−1} − z^{−2}

Y(z) = X(z)H(z) = −z⁴ + 3z³ − 3z² + 5z − 6 + 5z^{−1} − 3z^{−2} + 3z^{−3} − z^{−4}

⟹ y[n] = {−1, 3, −3, 5, −𝟔, 5, −3, 3, −1}
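Since multiplying the two Laurent polynomials is exactly a convolution of their coefficient vectors, MATLAB's conv reproduces the result at once (a minimal sketch):

% Coefficients listed from z^2 down to z^-2; conv multiplies the polynomials
x = [1 -1 1 -1 1];
h = [-1 2 0 2 -1];
y = conv(x,h)        % returns -1 3 -3 5 -6 5 -3 3 -1 (powers z^4 ... z^-4)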
Exercise 8: Determine the impulse response of the given Discrete LTI system

y[n] + (3/4)y[n − 1] + (1/8)y[n − 2] = x[n]

Ans:

(1 + (3/4)z^{−1} + (1/8)z^{−2})Y(z) = X(z) ⟹ H(z) = 1/(1 + (3/4)z^{−1} + (1/8)z^{−2}) = 2/(1 + (1/2)z^{−1}) − 1/(1 + (1/4)z^{−1})

h[n] = [2(−1/2)^{n} − (−1/4)^{n}]u[n]

Exercise 9: Consider the Discrete time LTI system with an impulse response

h[n] = (1/2)^{n}u[n]

Find the output y[n] for each of the given inputs:

▪ x[n] = (3/4)^{n}u[n]   ▪ x[n] = (n + 1)(1/4)^{n}u[n]

Ans:

▪ Y(z) = 1/((1 − (1/2)z^{−1})(1 − (3/4)z^{−1})) = −2/(1 − (1/2)z^{−1}) + 3/(1 − (3/4)z^{−1}) ⟹ y[n] = [−2(1/2)^{n} + 3(3/4)^{n}]u[n]

▪ x[n] = (n + 1)(1/4)^{n}u[n] = n(1/4)^{n}u[n] + (1/4)^{n}u[n] ⟹ X(z) = (1/4)z^{−1}/(1 − (1/4)z^{−1})² + 1/(1 − (1/4)z^{−1}) = 1/(1 − (1/4)z^{−1})²

h[n] = (1/2)^{n}u[n] ⟹ H(z) = 1/(1 − (1/2)z^{−1})

Y(z) = X(z)H(z) = 1/((1 − (1/4)z^{−1})²(1 − (1/2)z^{−1})) = k1/(1 − (1/2)z^{−1}) + k2/(1 − (1/4)z^{−1}) + k3/(1 − (1/4)z^{−1})² with { k1 = 4 ; k2 = −2 ; k3 = −1 }

y[n] = [4(1/2)^{n} − 2(1/4)^{n} − (n + 1)(1/4)^{n}]u[n] = [4(1/2)^{n} − (n + 3)(1/4)^{n}]u[n]

Exercise 10: Determine ℎ[𝑛] the inverse of a given Z–transform take into account that ℎ[𝑛]
is an absolutely summable sequence.

H(z) = 3/(z − 1/4 − (1/8)z^{−1}), |z| > 1/2

Ans:

H(z) = 3z/(z² − (1/4)z − 1/8) ⟹ H(z)/z = 3/((z − 1/2)(z + 1/4)) = 4/(z − 1/2) − 4/(z + 1/4) ⟹ H(z) = 4/(1 − (1/2)z^{−1}) − 4/(1 + (1/4)z^{−1})

Here we distinguish three cases for the ROC:

𝐂𝐚𝐬𝐞 𝟎𝟏: |z| > 1/2, h[n] = 4((1/2)^{n} − (−1/4)^{n})u[n]
𝐂𝐚𝐬𝐞 𝟎𝟐: |z| < 1/4, h[n] = 4((−1/4)^{n} − (1/2)^{n})u[−n − 1]
𝐂𝐚𝐬𝐞 𝟎𝟑: 1/4 < |z| < 1/2, h[n] = 4(−(1/2)^{n}u[−n − 1] − (−1/4)^{n}u[n])

But only the first is absolutely summable:

h[n] = 4((1/2)^{n} − (−1/4)^{n})u[n]

Exercise 11: Determine ℎ[𝑛] the inverse of a given Z–transform

H(z) = (z − 1)/(z − a)
Ans:
H(z) = 1/(1 − az^{−1}) − z^{−1}/(1 − az^{−1}) ⟹ h[n] = a^{n}u[n] − a^{n−1}u[n − 1]

Since we have a^{n}u[n] = a^{n}u[n − 1] + δ[n], we can rewrite h[n] = δ[n] + (a − 1)a^{n−1}u[n − 1]

Exercise 12: Determine ℎ[𝑛] the inverse of a given Z–transform

H(z) = (z − a)/(1 − az), knowing that a^{n+1}u[n] = a^{n+1}u[n − 1] + aδ[n]

Ans:

H(z) = (z − a)/(1 − az) = −(1/a)(1 − az^{−1})/(1 − a^{−1}z^{−1}) = z^{−1}/(1 − a^{−1}z^{−1}) − (1/a)/(1 − a^{−1}z^{−1})

h[n] = (1/a)^{n−1}u[n − 1] − (1/a)(1/a)^{n}u[n] = −aδ[n] + ((a² − 1)/a^{n+1})u[n] = −(1/a)δ[n] + ((a² − 1)/a^{n+1})u[n − 1]

(each of these expressions is correct)

Exercise 13: A causal discrete-time LTI system is described by


y[n] − (3/4)y[n − 1] + (1/8)y[n − 2] = x[n]

where x[n] and y[n] are the input and output of the system, respectively.

(a) Determine the system function H(z).
(b) Find the impulse response h[n] of the system.
(c) Find the step response s[n] of the system.

Ans: Proceeding as in Exercise 8 (the coefficients now carry minus signs, so the denominator factors as (1 − (1/2)z^{−1})(1 − (1/4)z^{−1})) we deduce that

H(z) = 1/(1 − (3/4)z^{−1} + (1/8)z^{−2}) = 2/(1 − (1/2)z^{−1}) − 1/(1 − (1/4)z^{−1}) and h[n] = [2(1/2)^{n} − (1/4)^{n}]u[n]

Now let us find the step response s[n] of the system:

S(z) = X(z)H(z) = 1/((1 − z^{−1})(1 − (1/2)z^{−1})(1 − (1/4)z^{−1}))

Taking the partial-fraction expansion, we have

S(z) = (8/3)/(1 − z^{−1}) − 2/(1 − (1/2)z^{−1}) + (1/3)/(1 − (1/4)z^{−1}) ⟹ s[n] = [8/3 − 2(1/2)^{n} + (1/3)(1/4)^{n}]u[n]
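The step response can be cross-checked with MATLAB's filter, which simulates the difference equation directly (a minimal sketch):

% Simulated step response vs. the closed form above
b = 1; a = [1 -3/4 1/8];
N = 20; n = 0:N-1;
s_sim = filter(b, a, ones(1,N));
s_th  = 8/3 - 2*(1/2).^n + (1/3)*(1/4).^n;
max(abs(s_sim - s_th))                 % should be at machine-precision level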

Exercise 14: Find the z-transform of the following 𝑥[𝑛] = 3(0.5)𝑛 𝑢[𝑛] − 2(0.25)𝑛 𝑢[−𝑛 − 1]

Ans: X(z) does not exist: the first term requires |z| > 0.5 and the second |z| < 0.25, and these two regions do not intersect.

Exercise 15: Given the z-transform of a sequence 𝑥[𝑛]


𝑧(𝑧 − 4)
𝑋(𝑧) =
(𝑧 − 1)(𝑧 − 2)(𝑧 − 3)

1. State all the possible regions of convergence.


2. For which ROC is 𝑋(𝑧) the z-transform of a causal sequence?

Ans: 1. The possible ROCs are |𝑧| < 1, 1 < |𝑧| < 2, 2 < |𝑧| < 3, and |𝑧| > 3

2. The sequence 𝑥[𝑛] will be causal only if it is a right sided, that is |𝑧| > 3.

Exercise 16: Show the following properties for the z-transform.

1. If 𝑥[𝑛] is even, then 𝑋(𝑧 −1 ) = 𝑋(𝑧).


2. If 𝑥[𝑛] is odd, then 𝑋(𝑧 −1 ) = −𝑋(𝑧).
3. If 𝑥[𝑛] is odd, then there is a zero in 𝑋(𝑧) at 𝑧 = 1.

Ans: 1. 𝑥[−𝑛] = 𝑥[𝑛] ⟹ 𝑋(𝑧 −1 ) = 𝑋(𝑧) 2. 𝑥[−𝑛] = −𝑥[𝑛] ⟹ 𝑋(𝑧 −1 ) = −𝑋(𝑧)

3. Use the result from part 2

𝑋(𝑧 −1 )|𝑧=1 = −𝑋(𝑧)|𝑧=1 ⟹ 𝑋(1) = −𝑋(1) ⟹ 𝑋(1) = 0 ⟹ zero in 𝑋(𝑧) at 𝑧 = 1


CHAPTER V:
Fourier-Analysis of
Continuous LTI
Systems
I. Introduction
II. Orthogonal Functions
II.I. Orthogonal Signals and Gram-Schimdt
II.II. Elementary Energy Signals
III. Fourier-Series (Continuous-Time Signals)
III.I Trigonometric Fourier series
III.II Exponential Fourier series
III.III Convergence of Fourier Series
III.IV Properties of Continuous Fourier Series
IV. Fourier-Transform (Continuous-Time Signals)
IV.I Alternate Forms of the Fourier Transform
IV.II Connection between Fourier and Laplace Transforms
IV.III Transforms of Some Useful Functions
IV.IV Fourier Spectra
IV.V Fourier Transforms of Periodic Functions
IV.VI Properties of the Continuous-Time Fourier Transform
V. Solved Problems

In representing and analyzing linear, time-invariant systems, our basic approach has
been to decompose the system inputs into a linear combination of basic signals and
exploit the fact that for a linear system the response is the same linear combination of
the responses to the basic inputs. The convolution sum and convolution integral grew
out of a particular choice for the basic signals in terms of which we carried out the
decomposition, specifically delayed unit impulses. This choice has the advantage that
for systems which are time-invariant in addition to being linear, once the response to
an impulse at onetime position is known, then the response is known at all time
positions. Complex exponentials as basic building blocks for representing the input and
output of LTI systems have a considerably different motivation than the use of
impulses. Complex exponentials are Eigen-functions of LTI systems; that is, the
response of an LTI system to any complex exponential signal is simply a scaled replica
of that signal. Consequently, if the input to an LTI system is represented as a linear
combination of complex exponentials, then the effect of the system can be described
simply in terms of a weighting applied to each coefficient in that representation. This
very important and elegant relationship between LTI systems and complex exponentials
leads to some extremely powerful concepts and results.
Fourier-Analysis of
Continuous LTI
Systems
I. Introduction: In mathematics, Fourier analysis is the study of the way general functions
may be represented or approximated by sums of simpler trigonometric functions (or
complex exponentials). Fourier analysis grew from the study of
Fourier series, and is named after Joseph Fourier (1768 –1830),
who showed that representing a function as a sum of trigonometric
functions greatly simplifies the study of heat transfer.

Today, the subject of Fourier analysis encompasses a vast


spectrum of mathematics. In the sciences and engineering, the
process of decomposing a function into oscillatory components is
often called Fourier analysis, while the operation of rebuilding the
function from these pieces is known as Fourier synthesis.

In the previous chapters we showed that a periodic signal can be represented as a sum of
sinusoidal signals (i.e., complex exponentials), but the method for determining the phase
and magnitude of the sinusoids was not discussed. This section will describe how to
determine the frequency domain representation of the signal. For now we will consider only
periodic signals, though the concept of the frequency domain can be extended to signals
that are not periodic (using what is called the Fourier Transform).

Important Note: In mathematics, the term Fourier analysis often refers to the study of
both operations (i.e. process of decomposing and rebuilding the function). Therefore, the
Fourier analysis includes two main categories named Fourier series and Fourier Transform.

Fourier analysis has many scientific applications: physics, partial differential equations,
number theory, combinatorics, digital signal processing, probability theory, statistics,
forensics, option pricing, cryptography, numerical analysis, acoustics, oceanography,
sonar, optics, diffraction, geometry, protein structure analysis, and other areas.

As it is very well known from linear algebra that every Hilbert space admits an orthonormal
basis, and each vector in the Hilbert space can be expanded as a series in terms of this
orthonormal basis. So, we suggest studying Fourier series on Hilbert Spaces. There exist a
large number of orthogonal signal sets which can be used as basis signals for generalized
Fourier series. Some well-known signal sets are exponential functions (i.e. sometimes
replaced by sinusoids or trigonometric functions), Walsh functions, Bessel functions,
Legendre polynomials, Laguerre functions, .Jacobi polynomials, Hermite polynomials, and
Chebyshev polynomials. The functions that concern us most in this book are the
trigonometric and the exponential sets discussed in the rest of the chapter. In order to
make things clear we prefer to introduce the concept of orthogonal functions.
II. Orthogonal Functions: The concept of orthogonality with regards to functions is like a
more general way of talking about orthogonality with regards to vectors. Orthogonal vectors
are geometrically perpendicular because their dot product is equal to zero. When you take
the dot product of two vectors you multiply their entries and add them together; but if you
wanted to take the "dot" or inner product of two functions, you would treat them as though
they were vectors with infinitely many entries and taking the dot product would become
multiplying the functions together and then integrating over some interval. It turns out that
for the inner product (for arbitrary real number 𝐿)
𝐿
〈𝐟(𝑡), 𝐠(𝑡)〉 = ∫ 𝐟(𝑡)𝐠(𝑡)𝑑𝑡
−𝐿

Definition: Two non-zero functions, 𝐟(𝑡) and 𝐱(𝑡), are said to be orthogonal on 𝑡1 ≤ 𝑡 ≤ 𝑡2 if
𝑡2
∫ 𝐟(𝑡)𝐱(𝑡)𝑑𝑡 = 0
𝑡1

A set of non-zero functions, {𝐱 𝑖 (𝑡)}, is said to be mutually orthogonal on 𝑡1 ≤ 𝑡 ≤ 𝑡2 (or just


an orthogonal set if we’re being lazy) if 𝐱 𝑖 (𝑡) and 𝐱𝑗 (𝑡) are orthogonal for every 𝑖 ≠ 𝑗. In
other words,
𝑡2
0 If 𝑖 ≠ 𝑗
〈𝐱 𝑖 (𝑡), 𝐱𝑗 (𝑡)〉 = ∫ 𝐱 𝑖 (𝑡)𝐱𝑗 (𝑡)𝑑𝑡 = {
𝑡
𝐸𝑖 > 0 If 𝑖 = 𝑗
1

Remarks: Here some comments on linear combination of function on Hilbert space:

⦁ The set {𝐱 𝑖 (𝑡)} is called a set of basis functions or basis signals.


⦁ Note that in the case of 𝑖 = 𝑗 for the second definition we know that we’ll get a positive
𝑡 𝑡 2
value from the integral because, 𝐸𝑖 = ∫𝑡 2 𝐱 𝑖 (𝑡)𝐱 𝑖 (𝑡)𝑑𝑡 = ∫𝑡 2(𝐱 𝑖 (𝑡)) 𝑑𝑡 > 0.
1 1
⦁ Recall that when we integrate a positive function we know the result will be positive as
well. The positive scalar value 𝐸𝑖 is the energy content in the signal 𝐱 𝑖 .
⦁ If the energies 𝐸𝑘 = 1 for all 𝑘 , then the set of functions {𝐱 𝑖 (𝑡)} is normalized and is called
an orthonormal set.

II.I. Orthogonal Signals and Gram-Schimdt: Now, consider approximating a signal 𝐟(𝑡)
over the interval [𝑡1 𝑡2 ] by a set of 𝑁 mutually orthogonal signals 𝐱1 (𝑡), 𝐱 2 (𝑡), … , 𝐱 𝑁 (𝑡) as
𝑁 𝑁

𝐟(𝑡) = ∑ 𝑐𝑘 𝐱 𝑘 (𝑡) + 𝒆(𝑡) or 𝐟(𝑡) = lim ∑ 𝑐𝑘 𝐱 𝑘 (𝑡)


𝑁→∞
𝑘=1 𝑘=1

Because we are dealing with infinite dimensional spaces, we can say that in our
approximation the error 𝒆(𝑡) = 𝐟(𝑡) − ∑𝑁𝑘=1 𝑐𝑘 𝐱 𝑘 (𝑡) will approaches zero as 𝑁 → ∞. So that
when we express a function in such way we are actually performing the Gram-Schimdt
process, by projecting a function onto a basis of {𝐱1 (𝑡), 𝐱 2 (𝑡), … , 𝐱 𝑁 (𝑡)} with
𝑡 1
〈𝐟(𝑡), 𝐱 𝑘 (𝑡)〉 ∫𝑡 𝐟(𝑡)𝐱 𝑘 (𝑡)𝑑𝑡 1 𝑡1
1
𝑐𝑘 = = 𝑡 2 = ∫ 𝐟(𝑡)𝐱 𝑘 (𝑡)𝑑𝑡 𝑘 = 1,2, … , 𝑁
〈𝐱 𝑘 (𝑡), 𝐱 𝑘 (𝑡)〉 1
∫𝑡 (𝐱 𝑘 (𝑡)) 𝑑𝑡 𝐸𝑘 𝑡1
1
Proof: Let we prove the formula of the coefficients 𝑐𝑘 beginning from the error 𝒆(𝑡) in the
above approximation
𝑁

𝐟(𝑡) − ∑ 𝑐𝑘 𝐱 𝑘 (𝑡) If 𝑡 ∈ [𝑡1 𝑡2 ]


𝒆(𝑡) = {
𝑘=1
0 otherwise
We now select some criterion for the 'best approximation'. We know that the signal energy
is one possible measure of a signal size. For best approximation, we need to minimize the
error signal-that is, minimize its size, which is its energy 𝐸𝑒 over the interval [𝑡1 𝑡2 ] given
by
𝑁 2 𝑁 𝑁 2
𝑡2 𝑡2 𝑡2
𝐸𝑒 = ∫ 𝒆2 (𝑡)𝑑𝑡 = ∫ (𝐟(𝑡)– ∑ 𝑐𝑘 𝐱 𝑘 (𝑡)) 𝑑𝑡 = ∫ {𝐟 2 (𝑡) − 2 ∑ 𝑐𝑘 𝐟(𝑡)𝐱 𝑘 (𝑡) + (∑ 𝑐𝑘 𝐱 𝑘 (𝑡)) } 𝑑𝑡
𝑡1 𝑡1 𝑘=1 𝑡1 𝑘=1 𝑘=1

2
Notice that (𝐱1 (𝑡) + ⋯ + 𝐱 𝑁 (𝑡)) = 𝐱12 (𝑡) + ⋯ + 𝐱 𝑁
2 (𝑡)
because of mutual orthogonality, so we
have to write
𝑡2 𝑡2 𝑁 𝑡2 𝑁 𝑡2
𝐸𝑒 = ∫ 𝒆 2 (𝑡)𝑑𝑡
=∫ 𝐟 2 (𝑡)𝑑𝑡
− 2 ∑ 𝑐𝑘 ∫ 𝐟(𝑡)𝐱 𝑘 (𝑡)𝑑𝑡 + ∑ 𝑐𝑘2 ∫ 𝐱 𝑘2 (𝑡)𝑑𝑡
𝑡1 𝑡1 𝑘=1 𝑡1 𝑘=1 𝑡1

Note that the right-hand side is a definite integral with 𝑡 as the dummy variable. Hence, 𝐸𝑒
is a function of the parameter 𝑐𝑘 (not 𝑡) and 𝐸𝑒 is minimum for some choice of 𝑐𝑘 . To
minimize 𝐸𝑒 , a necessary condition is
𝜕𝐸𝑒
=0
𝜕𝑐𝑘
From which we obtain
𝑁 𝑡2 𝑁 𝑡2 𝑡2 𝑡2
− ∑ ∫ 𝐟(𝑡)𝐱 𝑘 (𝑡)𝑑𝑡 + ∑ 𝑐𝑘 ∫ 𝐱 𝑘2 (𝑡)𝑑𝑡 = 0 ⟹ 𝑐𝑘 ∫ 𝐱 𝑘2 (𝑡)𝑑𝑡 = ∫ 𝐟(𝑡)𝐱 𝑘 (𝑡)𝑑𝑡
𝑘=1 𝑡1 𝑘=1 𝑡1 𝑡1 𝑡1
𝑡1
∫𝑡 𝐟(𝑡)𝐱 𝑘 (𝑡)𝑑𝑡 1 𝑡1
1
⟹ 𝑐𝑘 = 𝑡1 2 = ∫ 𝐟(𝑡)𝐱 𝑘 (𝑡)𝑑𝑡 ■
∫𝑡 (𝐱 𝑘 (𝑡)) 𝑑𝑡 𝐸𝑘 𝑡1
1

II.II. Elementary Energy Signals: It is striking and very interesting to look at the
relationship that links the energy stored in the basic signals with the total energy. We know
that the square of a sum of two orthogonal signals is equal to the sum of the squares two
2
signals, that is (𝐱1 (𝑡) + 𝐱 2 (𝑡)) = 𝐱12 (𝑡) + 𝐱 22 (𝑡). More generally if a set {𝐱 𝑖 (𝑡)} are mutually
2
orthogonal then (𝐱1 (𝑡) + ⋯ + 𝐱 𝑁 (𝑡)) = 𝐱12 (𝑡) + ⋯ + 𝐱 𝑁
2 (𝑡),
hence

𝑁 2 𝑁 𝑡2 𝑡2 𝑁 𝑁 𝑡2
𝐟 2 (𝑡)
= (∑ 𝑐𝑘 𝐱 𝑘 (𝑡)) = ∑ 𝑐𝑘2 𝐱 𝑘2 (𝑡) ⟺ ∫ 𝐟 2 (𝑡)𝑑𝑡
= ∫ ∑ 𝑐𝑘2 𝐱 𝑘2 (𝑡) 𝑑𝑡 = ∑ 𝑐𝑘2 (∫ 𝐱 𝑘2 (𝑡)𝑑𝑡)
𝑘=1 𝑘=1 𝑡1 𝑡1 𝑘=1 𝑘=1 𝑡1

𝑡2 𝑁

⟺ 𝐸f = ∫ 𝐟 2 (𝑡)𝑑𝑡 = ∑ 𝑐𝑘2 𝐸𝑘
𝑡1 𝑘=1

This means that the total energy of the original function is the sum of the energies of all the
orthogonal components. This equation goes under the name of Parseval's theorem.
III. Fourier-Series (Continuous-Time Signals): As what we gave said before, the French
mathematician Fourier found that any periodic waveform, that is, a waveform that repeats
itself after some time, can be expressed as a series of harmonically related sinusoids, i.e.,
sinusoids whose frequencies are multiples of a fundamental frequency (or first harmonic).

III.I Trigonometric Fourier series: There are two common forms of the Fourier series,
"Trigonometric Form" and the "Exponential Form", here we are going to start by the first
one. Consider a signal set: {1, cos(ω0 𝑡), cos(2ω0 𝑡), … , cos(𝑛ω0 𝑡), sin(ω0 𝑡), sin(2ω0 𝑡), … sin(𝑛ω0 𝑡)}.
In this set the sinusoid of frequency ω0 , called the fundamental, and a sinusoid of
frequency 𝑛ω0 is called the 𝑛𝑡ℎ harmonic.

Now we try to show that this set is orthogonal over any interval of duration 𝑇0 = 2𝜋/ω0 ,
which is the period of the fundamental. To prove the orthogonality we use the fact that

0 If 𝑛 ≠ 𝑚
𝑡1 +𝑇0
𝑇0 If 𝑛 = 𝑚 = 0
⦁∫ cos(𝑛ω0 𝑡)cos(𝑚ω0 𝑡)𝑑𝑡 =
𝑡1
𝑇0
{2 If 𝑛 = 𝑚 ≠ 0
0 If 𝑛 ≠ 𝑚
𝑡1 +𝑇0
⦁∫ sin(𝑛ω0 𝑡)sin(𝑚ω0 𝑡)𝑑𝑡 = {𝑇
0
𝑡1 If 𝑛 = 𝑚 ≠ 0
2
𝑡1 +𝑇0
⦁∫ sin(𝑛ω0 𝑡)cos(𝑚ω0 𝑡)𝑑𝑡 = 0 for all 𝑛 and 𝑚
𝑡1

Therefore, we can express a signal 𝐟(𝑡) by a trigonometric Fourier series over any interval of
duration 𝑇0 seconds as
∞ ∞

𝐟(𝑡) = 𝑎0 + ∑ 𝑎𝑛 cos(𝑛ω0 𝑡) + ∑ 𝑏𝑛 sin(𝑛ω0 𝑡)


𝑛=1 𝑛=1

To find the coefficient 𝑏𝑛 , multiply both sides of the equation by sin(𝑚ω0 𝑡) for an arbitrary
but fixed positive integer 𝑚, and integrate over one period. Using the above facts we get

2
𝑏𝑚 = ∫ 𝐟(𝑡)sin(𝑚ω0 𝑡)𝑑𝑡 for each 𝑚 = 1,2, …
𝑇0 𝑇0

Since 𝑚 was arbitrary, you can change this to an 𝑛 and get the formula for the coefficients
for the sine terms. A similar argument for the cosine terms establishes the formula for the
coefficient 𝑎𝑚 .

If 𝑚 = 0 then
1 𝑇0
𝑎0 = ∫ 𝐟(𝑡)𝑑𝑡
𝑇0 0
Elseif 𝑚 = 1,2, … then
2
𝑎𝑚 = ∫ 𝐟(𝑡)cos(𝑚ω0 𝑡)𝑑𝑡
𝑇0 𝑇0
End If
Remark: if we define new function 𝐡(𝑡) = 𝐟(𝑡)𝐠(𝑡) we obtain the following result
𝐿
𝐿
2 ∫ 𝐡(𝑡)𝑑𝑡 If 𝐡(𝑡) is an even function
∫ 𝐡(𝑡)𝑑𝑡 = { 0
−𝐿
0 If 𝐡(𝑡) is an odd function

Note that this fact is only valid on a “symmetric” interval, i.e. an interval in the form [−𝐿, 𝐿].
If we aren’t integrating on a “symmetric” interval then the fact may or may not be true.
However when studying Fourier series, we're specifically looking at the space of square-
𝜋
(Lebesgue)-integrable 𝐿2 [−𝜋, 𝜋], which has an inner product, 〈𝐟(𝑡), 𝐠(𝑡)〉 = ∫−𝜋 𝐟(𝑡)𝐠(𝑡)𝑑𝑡

The trigonometric Fourier series 𝐟(𝑡) = 𝑎0 + ∑∞ ∞


𝑛=1 𝑎𝑛 cos(𝑛ω0 𝑡) + ∑𝑛=1 𝑏𝑛 sin(𝑛ω0 𝑡) contains sine
and cosine terms of the same frequency. We can combine the two terms to obtain a single
sinusoid of the same frequency using: 𝑎𝑛 cos(𝑛ω0 𝑡) + 𝑏𝑛 sin(𝑛ω0 𝑡) = 𝐶𝑛 cos(𝑛ω0 𝑡 + 𝜃), where
𝐶𝑛 = √𝑎𝑛2 + 𝑏𝑛2 and 𝜃 = tan−1 (−𝑏𝑛 /𝑎𝑛 ). For consistency, we denote the dc term 𝑎0 by 𝐶0 , that is

𝐟(𝑡) = 𝐶0 + ∑ 𝐶𝑛 cos(𝑛ω0 𝑡 + 𝜃)
𝑛=1

III.II Exponential Fourier series: To represent the Fourier series in exponential form, the
sine and cosine terms in the Fourier series are expressed in terms of exponential function
1 1
sin(𝑛ω0 𝑡) = (𝑒 𝑗𝑛ω0 𝑡 − 𝑒 −𝑗𝑛ω0 𝑡 ) and cos(𝑛ω0 𝑡) = (𝑒 𝑗𝑛ω0 𝑡 + 𝑒 −𝑗𝑛ω0 𝑡 )
2𝑗 2
that results in exponential Fourier series
∞ ∞

𝐟(𝑡) = 𝑎0 + ∑ 𝑎𝑛 cos(𝑛ω0 𝑡) + ∑ 𝑏𝑛 sin(𝑛ω0 𝑡)


𝑛=1 𝑛=1

𝑎𝑛 𝑗𝑛ω 𝑡 𝑏𝑛
= 𝑎0 + ∑ (𝑒 0 + 𝑒 −𝑗𝑛ω0 𝑡 ) + (𝑒 𝑗𝑛ω0 𝑡 − 𝑒 −𝑗𝑛ω0 𝑡 )
2 2𝑗
𝑛=1

1
= 𝑎0 + ∑(𝑎𝑛 − 𝑗𝑏𝑛 )𝑒 𝑗𝑛ω0 𝑡 + (𝑎𝑛 + 𝑗𝑏𝑛 )𝑒 −𝑗𝑛ω0 𝑡
2
𝑛=1

= 𝑐0 + ∑ 𝑐𝑛 𝑒 𝑗𝑛ω0 𝑡 + 𝑐−𝑛 𝑒 −𝑗𝑛ω0 𝑡


𝑛=1
𝑘=+∞

= ∑ 𝑐𝑘 𝑒 𝑗𝑘ω0 𝑡
𝑘=−∞

Where

1 1
𝑐𝑘 = (𝑎 − 𝑗𝑏𝑘 ), 𝑐−𝑘 = (𝑎𝑘 + 𝑗𝑏𝑘 ),
2 𝑘 2
𝑎𝑘 = 2Re[𝑐𝑘 ], 𝑏𝑘 = −2Im[𝑐𝑘 ], 𝑎0 = 𝑐0 and 𝑐−𝑘 = 𝑐𝑘 ⋆
Exercise: As developed above we observe that, any periodic signal 𝐟(𝑡) can be represented
by it exponential Fourier series 𝐟(𝑡) = ∑𝑘=+∞
𝑘=−∞ 𝑐𝑘 𝑒
𝑗𝑘ω0 𝑡
. Prove that

1
𝑐𝑘 = ∫ 𝐟(𝑡)𝑒 −𝑗𝑘ω0 𝑡 𝑑𝑡
𝑇0 𝑇0

Solution: given a complex basis signals {𝐱 𝑘 (𝑡) = 𝑒 −𝑗𝑘ω0 𝑡 } which are mutually orthogonal
signals, so we are able to perform the Gram-Schimdt process, by projecting a function
𝐟(𝑡) onto a basis of {𝐱 𝑘 (𝑡)} with

〈𝐟(𝑡), 𝐱 𝑘 (𝑡)〉 ∫𝑇 𝐟(𝑡)𝐱 𝑘⋆ (𝑡)𝑑𝑡 1 1


𝑐𝑘 = = 0 ⋆ = ∫ 𝐟(𝑡)𝐱 𝑘⋆ (𝑡)𝑑𝑡 = ∫ 𝐟(𝑡)𝑒 −𝑗𝑘ω0 𝑡 𝑑𝑡
〈𝐱 𝑘 (𝑡), 𝐱 𝑘 (𝑡)〉 ∫𝑇 𝐱 𝑘 (𝑡)𝐱 𝑘 (𝑡)𝑑𝑡 𝐸𝑘 𝑇0 𝐸𝑘 𝑇0
0

1
𝐸𝑘 = ∫ 𝐱 𝑘 (𝑡)𝐱 𝑘⋆ (𝑡)𝑑𝑡 = ∫ 𝑒 𝑗𝑘ω0 𝑡 𝑒 −𝑗𝑘ω0 𝑡 𝑑𝑡 = 𝑇0 ⟹ 𝑐𝑘 = ∫ 𝐟(𝑡)𝑒 −𝑗𝑘ω0 𝑡 𝑑𝑡
𝑇0 𝑇0 𝑇0 𝑇0

A more compact representation of the Fourier series uses complex exponentials. In this
case we end up with the following synthesis and analysis equations:
𝑘=+∞

𝐟(𝑡) = ∑ 𝑐𝑘 𝑒 𝑗𝑘ω0 𝑡 Synthesis


𝑘=−∞
1
𝑐𝑘 = ∫ 𝐟(𝑡)𝑒 −𝑗𝑘ω0 𝑡 𝑑𝑡 Analysis
𝑇0 𝑇0

Parseval's identity, named after Marc-Antoine Parseval, is a fundamental result on the


summability of the Fourier series of a function. Geometrically, it is a generalized
Pythagorean theorem for inner-product spaces (which can have an uncountable infinity of
basis vectors). It is also known as Rayleigh's energy theorem, or Rayleigh's identity, after
John William Strutt, Lord Rayleigh. Although the term "Parseval's theorem" is often used to
describe the unitarity of any Fourier transform, especially in physics, the most general
form of this property is more properly called the Plancherel theorem.
𝑁 𝑁 𝑁
1
𝐸f = ∫ |𝐟(𝑡)|2 𝑑𝑡 = ∑ 𝑐𝑘2 𝐸𝑘 = 𝑇0 ∑ 𝑐𝑘2 ⟺ ∫ |𝐟(𝑡)|2 𝑑𝑡 = ∑ 𝑐𝑘2
𝑇0 𝑇0 𝑇0
𝑘=1 𝑘=1 𝑘=1

The interpretation of this form of the theorem is that the total energy of a signal can be
calculated by the energy of its Fourier series coefficients.

III.III Convergence of Fourier Series: A function 𝐟(𝑡) defined on an interval [𝑎, 𝑏] is said to
be piecewise continuous if it is continuous on the interval
except for a finite number of jump discontinuities.

If there is a jump discontinuity, then the Fourier series


approximation has oscillations (overshoot) near the jump
discontinuity. This phenomenon is called Gibbs
phenomenon.
It is known that a periodic signal 𝐟(𝑡) has a Fourier series representation if it satisfies the
following Dirichlet conditions:

1. 𝐟(𝑡) is absolutely integrable over any period, that is, 𝐟(𝑡) = ∫𝑇 |𝐟(𝑡)|𝑑𝑡 < ∞
0
2. 𝐟(𝑡) has a finite number of maxima and minima within any finite interval of 𝑡.
3. 𝐟(𝑡) has a finite number of discontinuities within any finite interval of 𝑡, and each of
these discontinuities is finite.

Note that the Dirichlet conditions are sufficient but not necessary conditions for the
Fourier series representation.

III.IV Properties of Continuous Fourier Series: If the continuous time functions 𝐟(𝑡),
𝐠(𝑡) are two periodic signals with the same period and having exponential Fourier series:

F_series 1 F_series 1
𝐟(𝑡) ↔ 𝛼𝑘 = ∫ 𝐟(𝑡)𝑒 −𝑗𝑘ω0 𝑡 𝑑𝑡 and 𝐠(𝑡) ↔ 𝛽𝑘 = ∫ 𝐠(𝑡)𝑒 −𝑗𝑘ω0 𝑡 𝑑𝑡
𝑇0 𝑇0 𝑇0 𝑇0

Then some of the important properties of Fourier series are summarized below.
F_series
a. Linearity Property: 𝐴𝐟(𝑡) + 𝐵𝐠(𝑡) ↔ 𝑐𝑘 = (𝐴𝛼𝑘 + 𝐵𝛽𝑘 )
F_series 1
b. Time Shifting: 𝐟(𝑡 − 𝜏) ↔ 𝑏𝑘 = 𝑇 ∫𝑇 𝐟(𝑡 − 𝜏)𝑒 −𝑗𝑘ω0 𝑡 𝑑𝑡 = 𝑒 −𝑗𝑘ω0𝜏 𝛼𝑘
0 0
F_series 1
c. Frequency Shifting: 𝑒 𝑗ℓω0𝑡 𝐟(𝑡) ↔ 𝑏𝑘 = 𝑇 ∫𝑇 𝐟(𝑡)𝑒 −𝑗(𝑘−ℓ)ω0 𝑡 𝑑𝑡 = 𝛼𝑘−ℓ
0 0
F_series 1
d. Time Reversal: 𝐟(−𝑡) ↔ 𝑏𝑘 = 𝑇 ∫𝑇 𝐟(−𝑡)𝑒 −𝑗𝑘ω0 𝑡 𝑑𝑡 = 𝛼−𝑘
0 0
F_series
e. Time Scaling: 𝐟(𝑎𝑡) ↔ 𝑏𝑘 = 𝛼𝑘 Where 𝐟(𝑎𝑡) is periodic with fundamental period 𝑇0 /𝑎.
f. Differentiation and Integration:
𝑑 F_series 1 𝑑
𝐟(𝑡) ↔ 𝑏𝑘 = ∫ ( 𝐟(𝑡)) 𝑒 −𝑗𝑘ω0𝑡 𝑑𝑡 = (𝑗𝑘ω0 )𝛼𝑘
𝑑𝑡 𝑇0 𝑇0 𝑑𝑡
𝑡 𝑡
F_series 1 1
∫ 𝐟(𝑡)𝑑𝑡 ↔ 𝑏𝑘 = ∫ (∫ 𝐟(𝑡)𝑑𝑡) 𝑒 −𝑗𝑘ω0 𝑡 𝑑𝑡 = ( )𝛼
−∞ 𝑇0 𝑇0 −∞ 𝑗𝑘ω0 𝑘
g. Multiplication and Convolution:
F_series +∞
𝐟(𝑡)𝐠(𝑡) ↔ 𝑐𝑘 = ∑ 𝛼ℓ 𝛽𝑘−ℓ = 𝛼𝑘 ⋆ 𝛽𝑘
ℓ=−∞
Fseries 1
𝐟(𝑡) ⋆ 𝐠(𝑡) ↔ 𝑐𝑘 = (∫ 𝐟(𝑡)𝑒 −𝑗𝑘ω0 𝑡 𝑑𝑡) (∫ 𝐠(𝑡)𝑒 −𝑗𝑘ω0 𝑡 𝑑𝑡) = 𝑇0 𝛼𝑘 𝛽𝑘
𝑇0 𝑇0 𝑇0
h. Conjugate:

⋆ (𝑡)
F_series 1 1 ⋆
𝐟 ↔ 𝑐𝑘 = ∫ 𝐟 ⋆ (𝑡)𝑒 −𝑗𝑘ω0 𝑡 𝑑𝑡 = (∫ 𝐟(𝑡)𝑒 𝑗𝑘ω0𝑡 𝑑𝑡) = 𝛼−𝑘
𝑇0 𝑇0 𝑇0 𝑇0
i. Response of LTI: Assume that 𝐱(𝑡) be an input of linear system whose transfer function
𝑯(𝑠) = 𝐘(𝑠)/𝐗(𝑠) and the response is 𝐲(𝑡) with
F_series F_series
𝐱(𝑡) ↔ 𝛼𝑘 ⟹ 𝐲(𝑡) ↔ 𝛽𝑘 = 𝑯(𝑗ω)𝛼𝑘

Proof: In previous chapters we showed that the response of an LTI system with transfer
function 𝑯(𝑠) to an exponential input 𝑒 𝑠𝑡 is also an exponential 𝑯(𝑠)𝑒 𝑠𝑡 Therefore, the
system response to exponential 𝑒 𝑗ω𝑡 is an exponential 𝑯(𝑗ω)𝑒 𝑗ω𝑡 . This input-output pair can
𝑗ω𝑡
be displayed as: 𝑒⏟ ⟹ ⏟ 𝑯(𝑗ω)𝑒 𝑗ω𝑡 . Therefore, from linearity property
Input Output

𝑘=+∞ 𝑘=+∞
∑ 𝛼𝑘 𝑒 𝑗𝑘ω0 𝑡 ⟹ ∑ 𝛼𝑘 𝑯(𝑗ω)𝑒 𝑗𝑘ω0 𝑡
⏟ 𝑘=−∞ ⏟ 𝑘=−∞
Input 𝐱(𝑡) Output 𝐲(𝑡)

The response 𝐲(𝑡) is obtained in the form of an exponential Fourier series, and is therefore
a periodic signal of the same period as that of the input with 𝛽𝑘 = 𝑯(𝑗ω)𝛼𝑘 .

Remark: An odd function can be represented by a Fourier Sine series and an even function
can be represented by a Fourier cosine series, so it is not surprising that we use sinusoids.

2
𝐟(𝑡) = 𝐟𝑒 (𝑡) = 𝑎0 + ∑ 𝑎𝑛 cos(𝑛ω0 𝑡) 𝑏𝑛 = 0 & 𝑎𝑛 = ∫ 𝐟(𝑡)cos(𝑛ω0 𝑡)𝑑𝑡
𝑇0 𝑇0
𝑛=1

2
𝐟(𝑡) = 𝐟𝑜 (𝑡) = ∑ 𝑏𝑛 sin(𝑛ω0 𝑡) 𝑎𝑛 = 0 & 𝑏𝑛 = ∫ 𝐟(𝑡)sin(𝑛ω0 𝑡)𝑑𝑡
𝑇0 𝑇0
𝑛=1

Example: 01 Find the Fourier series coefficients 𝑐𝑘 of the signal 𝐱(𝑡) = 𝐴|sin(𝜋𝑡)|.
Solution: The signal 𝐱(𝑡) = 𝐴|sin(𝜋𝑡)| is periodic function with fundamental period 𝑇0 = 1,
1 2𝜋
the Fourier series is 𝐱(𝑡) = ∑𝑘=+∞
𝑘=−∞ 𝑐𝑘 𝑒
𝑗𝑘ω0 𝑡
, with 𝑐𝑘 = 𝑇 ∫𝑇 𝐱(𝑡)𝑒 −𝑗𝑘ω0 𝑡 𝑑𝑡 and ω0 = = 2𝜋
0 0 𝑇0

The Fourier series coefficients can be computed by


1 1
2𝐴
𝑐0 = ∫ 𝐱(𝑡)𝑑𝑡 = ∫ 𝐴|sin(𝜋𝑡)|𝑑𝑡 =
0 0 𝜋
1 1
𝐴 1 𝑗𝜋(1−2𝑘)𝑡
𝑐𝑘 = ∫ 𝐴 sin(𝜋𝑡) 𝑒 −𝑗𝑘ω0 𝑡 𝑑𝑡 = ∫ 𝐴 sin(𝜋𝑡) 𝑒 −𝑗2𝜋𝑘𝑡 𝑑𝑡 = ∫ (𝑒 − 𝑒 −𝑗𝜋(1+2𝑘)𝑡 )𝑑𝑡
0 0 2𝑗 0
𝑗𝜋(1−2𝑘)𝑡 −𝑗𝜋(1+2𝑘)𝑡 𝑡=1
𝐴 𝑒 −1 𝑒 +1 2𝐴
= [ + ] =
2𝑗 𝑗𝜋(1 − 2𝑘) 𝑗𝜋(1 + 2𝑘) 𝑡=0 𝜋(1 − 4𝑘 2 )

clear all, clc,


x1=0; t=0:0.01:2;
Nmax=5; A=2; w0=2*pi;

for k=-Nmax:1:Nmax
x1=x1+(2*A/(pi*(1-4*k^2)))*exp((i*k*w0)*t);
if k>2
plot(t,x1,'-','linewidth',1.5)
hold on, grid on
end
end

The trigonometric Fourier series of this signal is


4𝐴 2𝐴
𝑎𝑘 = 2Re[𝑐𝑘 ] = , 𝑏𝑘 = −2Im[𝑐𝑘 ] = 0, 𝑎0 = 𝑐0 =
𝜋(1 − 4𝑘 2 ) 𝜋

2𝐴 𝑘=+∞ 4𝐴
𝐱(𝑡) = +∑ ( ) cos(2𝜋𝑘𝑡)
𝜋 𝑘=1 𝜋(1 − 4𝑘 2 )

Example: 02 Find the trigonometric Fourier coefficients for the periodic signal
𝐴𝑡
𝐱(𝑡) = (𝑢(𝑡) − 𝑢(𝑡 − 𝑇0 )) 0 < 𝑡 < 𝑇0
𝑇0

Solution: The components of the Fourier series are therefore given by

1 𝑇0 𝐴𝑡 𝐴 2 𝑇0 𝐴𝑡 2𝑛𝜋
𝑎0 = ∫ 𝑑𝑡 = and 𝑎𝑛 = ∫ cos ( 𝑡) 𝑑𝑡 = 0
𝑇0 0 𝑇0 2 𝑇0 0 𝑇0 𝑇0
2 𝑇0 𝐴𝑡 2𝑛𝜋 2𝐴 −𝑇02 cos(2𝑛𝜋) −𝐴
𝑏𝑛 = ∫ sin ( 𝑡) 𝑑𝑡 = 2 ( )=
𝑇0 0 𝑇0 𝑇0 𝑇0 2𝑛𝜋 𝑛𝜋
The coefficients 𝑎𝑛 = 0 because 𝐱(𝑡) is an odd function. The Fourier series is therefore given
by
𝐴 𝐴 +∞ 1 2𝑘𝜋
𝐱(𝑡) = − ∑ sin ( 𝑡)
2 𝜋 𝑘=1 𝑘 𝑇0

clear all, clc, T = 1.5; Nmax=5; A=5;


t = 0:0.01:3*T; w0=2*pi/T; x1=A/2;
x = A*sawtooth(w0*t); x=(x+A)/2;
plot(t,x,'k','linewidth',1.5)
grid on
hold on
for k=1:1:Nmax
x1=x1-(A/(pi*k))*sin((w0*k)*t);
if k>2
plot(t,x1,'-','linewidth',1.5)
hold on, grid on
end
end

Example: 03 Find the exponential and trigonometric Fourier coefficients for the signal

𝐴 0<𝑡<𝐿 𝑇
𝐱(𝑡) = { with 𝐿=
0 𝐿<𝑡<𝑇 2

Solution: The signal 𝐱(𝑡) is periodic function where


𝑇 𝑇
1 2 1 2 𝐴
𝑐0 = ∫ 𝐱(𝑡)𝑑𝑡 = ∫ 𝐴𝑑𝑡 =
𝑇 −𝑇 𝑇 0 2
2

𝑇 𝑇
1 2 1 2 𝐴
and 𝑐𝑘 = ∫ 𝐱(𝑡)𝑒 −𝑗𝑘ω0 𝑡 𝑑𝑡 = ∫ 𝐴𝑒 −𝑗𝑘ω0 𝑡 𝑑𝑡 = (1 − 𝑒 −𝑗𝑘𝜋 )
𝑇 −𝑇 𝑇 0 2𝑘𝜋𝑗
2

It is well known that


0 𝑘 = 2𝑚 ≠ 0
𝐴
1 𝑘 = 2𝑚 𝑘=0
𝑒 −𝑗𝑘𝜋 = { ⟹ 𝑐𝑘 = 2
−1 𝑘 = 2𝑚 + 1 𝐴
𝑘 = 2𝑚 + 1
{ 𝑘𝜋𝑗

The exponential Fourier approximation is


𝑘=+∞ 𝑚=+∞ 𝑚=+∞
𝑗𝑘ω0 𝑡 𝑗2𝑚ω0 𝑡
𝐱(𝑡) = ∑ 𝑐𝑘 𝑒 = ∑ 𝑐2𝑚 𝑒 + ∑ 𝑐2𝑚+1 𝑒 𝑗(2𝑚+1)ω0 𝑡
𝑘=−∞ 𝑚=−∞ 𝑚=−∞
𝑚=+∞
𝐴 𝐴
= + ∑ 𝑒 𝑗(2𝑚+1)ω0 𝑡
2 (2𝑚 + 1)𝜋𝑗
𝑚=−∞
2𝐴
⦁ If 𝑘 = 0 𝑎0 = 𝑐0 =
𝜋
2𝐴
⦁ If 𝑘 = 2𝑚 + 1 𝑎𝑘 = 2Re[𝑐𝑘 ] = 0, 𝑏𝑘 = −2Im[𝑐𝑘 ] = ,
(2𝑚 + 1)𝜋
⦁ If 𝑘 = 2𝑚 𝑎𝑘 = 2Re[𝑐𝑘 ] = 0, 𝑏𝑘 = −2Im[𝑐𝑘 ] = 0,
𝐴 𝑚=+∞ 2𝐴 2𝜋𝑡
𝐱(𝑡) = +∑ ( ) sin ( (2𝑚 + 1))
2 𝑚=0 (2𝑚 + 1)𝜋 𝑇

clear all, clc, A=5; x1=A/2; Nmax=5;


T=2; t=-T:0.01:T; w0=2*pi/T;
y1=A*sign(sin((2*pi/T)*t));
y1=(y1+5)/2;
plot(t,y1,'-r','linewidth',2)
grid on, hold on
for k=0:1:Nmax
a=(2*A/(pi*(2*k+1)));
x1=x1+(a)*sin((2*pi*(2*k+1))*t/T);
if k>2
plot(t,x1,'-','linewidth',1.5)
hold on, grid on
end
end

Example: 04 Find the trigonometric Fourier coefficients for the periodic signal

𝐴 |𝑡| < 𝑑
𝐱(𝑡) = { 𝑇
0 𝑑 < |𝑡| <
2

Solution: The signal 𝐱(𝑡) is periodic function where


𝑇
𝐱(𝑡)
2 2𝐴𝑑
𝑐0 = ∫ 𝑑𝑡 =
𝑇 𝑇 𝑇

2
𝑇
𝑑
𝐱(𝑡) −𝑗𝑘ω 𝑡
2 𝐴 −𝑗𝑘ω 𝑡 𝐴 𝐴
𝑐𝑘 = ∫ 𝑒 0 𝑑𝑡 = ∫ 𝑒 0 𝑑𝑡 = [𝑒 𝜃 − 𝑒 −𝜃 ]𝜃=𝑗𝑘ω 𝑑 = sin(𝑑ω0 𝑘)
𝑇 𝑇
− −𝑑 𝑇 2𝑘𝜋𝑗 0 𝑘𝜋
2

clear all, clc, T=2; d=0.5; A=5;


x1=(2*A*d)/T; t=-T:0.01:T;
Nmax=15; w0=2*pi/T; eps=0.0001;

for k=-Nmax:1:Nmax
a=(A/(pi*(k+eps)));
x1=x1+a*(sin(w0*k*d))*exp((i*k*w0)*t);
if k>=12
plot(t,x1,'-','linewidth',1.5)
hold on, grid on
end
end
The trigonometric Fourier series is

2𝐴
𝑎𝑘 = 2Re[𝑐𝑘 ] = sin(𝑑ω0 𝑘), 𝑏𝑘 = −Im[𝑐𝑘 ] = 0, 𝑎0 = 𝑐0
𝑘𝜋

2𝐴𝑑 2𝐴 sin(𝑑ω0 𝑘)
𝐱(𝑡) = + ∑ cos(𝑘ω0 𝑡)
𝑇 𝜋 𝑘
𝑘=1

clear all, clc, T=2; d=0.5; A=5;

x1=(2*A*d)/T; t=-T:0.01:T;

Nmax=30; w0=2*pi/T;

for k=1:Nmax
x1=x1+(2*A/pi)*(sin(k*w0*d)/k)*cos(k*w0*t);

if k>25
plot(t,x1,'-','linewidth',1.5)
hold on, grid on
end

end

Example: 05 Find the trigonometric Fourier coefficients for the periodic signal

𝑇
𝐴 |𝑡| <
𝐱(𝑡) = { 4
𝑇 𝑇
0 < |𝑡| <
4 2

Solution: The signal 𝐱(𝑡) is periodic function where


𝑇 𝑇
1 4 𝐴 1 4
𝑐0 = ∫ 𝐱(𝑡)𝑑𝑡 = and 𝑐𝑘 = ∫ 𝐱(𝑡)𝑒 −𝑗𝑘ω0 𝑡 𝑑𝑡
𝑇 −𝑇 2 𝑇 −𝑇
4 4

Let we compute the explicitly the formula of 𝑐𝑘


𝑇
1 4 𝐴 𝑇 𝑇 𝐴 𝜋 𝜋 𝐴 𝜋
𝑐𝑘 = ∫ 𝐴𝑒 −𝑗𝑘ω0 𝑡 𝑑𝑡 = (𝑒 𝑗𝑘ω0 4 − 𝑒 −𝑗𝑘ω0 4 ) = (𝑒 𝑗𝑘 2 − 𝑒 −𝑗𝑘 2 ) = sin (𝑘 )
𝑇 − 𝑇 2𝑘𝜋𝑗 2𝑘𝜋𝑗 𝑘𝜋 2
4

The coefficients 𝑏𝑛 = 0 because 𝐱(𝑡) is an even function. The trigonometric Fourier series is

2𝐴 𝜋 𝐴
𝑎𝑘 = 2Re[𝑐𝑘 ] = sin (𝑘 ), 𝑏𝑘 = −Im[𝑐𝑘 ] = 0, 𝑎0 = 𝑐0 =
𝑘𝜋 2 2
∞ 𝜋
𝐴 sin (𝑘 2)
𝐱(𝑡) = + ∑ 𝐴 ( 𝜋 ) cos(𝑘ω0 𝑡)
2 𝑘
𝑘=1 2
Example: 06 Find the trigonometric Fourier coefficients for the periodic signal 𝐱(𝑡)

𝑇
𝐴 0<𝑡<
𝐱(𝑡) = { 2
𝑇
−𝐴 <𝑡<𝑇
2

Solution: The signal 𝐱(𝑡) is periodic function which can be written as

𝑇
2𝐴 0<𝑡<
𝐱(𝑡) = 𝐱1 (𝑡) − 𝐴 with 𝐱1 (𝑡) = { 2
𝑇
0 <𝑡<𝑇
2
Let we compute the exponential Fourier series of 𝐱1 (𝑡) (we have seen it before)
𝑘=+∞ 𝑇 𝑇
1 2 1 2
𝐱1 (𝑡) = ∑ 𝑐𝑘 𝑒 𝑗𝑘ω0𝑡 with 𝑐0 = ∫ 𝐱1 (𝑡)𝑑𝑡 = 𝐴 and 𝑐𝑘 = ∫ 𝐱1 (𝑡)𝑒 −𝑗𝑘ω0 𝑡 𝑑𝑡
𝑇 0 𝑇 −𝑇
𝑘=−∞ 2

As what we have done in exercise 3 we get

𝐴 𝑚=+∞ 2𝐴 2𝜋𝑡 2𝐴 𝑚=+∞ 1


𝐱1 (𝑡) = 2 ( + ∑ ( ) sin ( (2𝑚 + 1))) = 𝐴 + ∑ 𝑒 𝑗(2𝑚+1)ω0 𝑡
2 𝑚=0 (2𝑚 + 1)𝜋 𝑇 𝜋𝑗 𝑚=−∞ (2𝑚 + 1)

Which implies that:

2𝐴 𝑚=+∞ 1
𝐱(𝑡) = 𝐱1 (𝑡) − 𝐴 = ∑ 𝑒 𝑗(2𝑚+1)ω0 𝑡 Exponential Form
𝜋𝑗 𝑚=−∞ (2𝑚 + 1)
4𝐴 𝑚=+∞ 1 2𝜋𝑡
= ∑ ( ) sin ( (2𝑚 + 1)) Trigonometric Form
𝜋 𝑚=0 (2𝑚 + 1) 𝑇

Example: 07 Find the trigonometric Fourier coefficients for the periodic signal 𝐱(𝑡)

2𝐴 𝑇
𝑡 0<𝑡<
𝐱(𝑡) = { 𝑇 2
2𝐴 𝑇
(𝑇 − 𝑡) <𝑡<𝑇
𝑇 2

Solution: The signal 𝐱(𝑡) is periodic differentiable function which can be written as

2𝐴 𝑇
𝑑 0<𝑡<
𝐱 𝑛𝑒𝑤 (𝑡) = 𝐱(𝑡) = { 𝑇 2
𝑑𝑡 2𝐴 𝑇
− <𝑡<𝑇
𝑇 2
From exercise 6 we get
𝑑 8𝐴 𝑚=+∞ 1 2𝜋𝑡
𝐱 𝑛𝑒𝑤 (𝑡) = 𝐱(𝑡) = ∑ ( ) sin ( (2𝑚 + 1))
𝑑𝑡 𝜋𝑇 𝑚=0 (2𝑚 + 1) 𝑇

By the integration of we 𝐱 𝑛𝑒𝑤 (𝑡) obtain

4𝐴 𝑚=+∞ 1 2𝜋𝑡
𝐱(𝑡) = 𝑐 − ∑ cos ( (2𝑚 + 1))
𝑚=0 (2𝑚 + 1)
𝜋 2 2 𝑇
1 𝑇 1 𝐴
The constant of integration can be determined by 𝑐 = 𝑇 ∫0 𝐱(𝑡)𝑑𝑡 = 𝑇 Area = 2

𝐴 4𝐴 𝑚=+∞ 1 2𝜋𝑡
𝐱(𝑡) = − 2∑ cos ( (2𝑚 + 1))
𝑚=0 (2𝑚 + 1)
2 𝜋 2 𝑇

Example: 08 Find the trigonometric Fourier coefficients for the periodic signal 𝐱(𝑡)

𝐴
(𝑇 + 𝑡) −𝑇 <𝑡 <0
𝐱(𝑡) = { 𝑇
𝐴
(𝑇 − 𝑡) 0<𝑡<𝑇
𝑇

Solution: The signal 𝐱(𝑡) is periodic signal with period equal to 𝐿 = 2𝑇 and is even function
so we deduce that 𝑏𝑛 = 0, only we should determine the value of 𝑎𝑛

1 𝑇 1 𝐴 2 𝑇
𝑎0 = ∫ 𝐱(𝑡)𝑑𝑡 = Area = 𝑎𝑛 = ∫ 𝐱(𝑡) cos(𝑛ω0 𝑡) 𝑑𝑡
𝑇 −𝑇 𝑇 2 𝑇 −𝑇

From the exercise 07 and the shift property of Fourier series we get
𝑘=+∞
𝐴 4𝐴
𝐱(𝑡) = + ∑ cos((2𝑘 − 1)ω0 𝑡)
2 (2𝑘 − 1)2 𝜋 2
𝑘=1

clear all, clc, T=3, L=2*T;


A=5; x1=A/2; t=-L:0.01:L;

Nmax=5; w0=2*pi/L;

for k=1:Nmax
x1=x1+(4*A/((2*k-1)*pi)^2)*cos((2*k-1)*w0*t);
if k>3
plot(t,x1,'-','linewidth',1.5)
hold on, grid on
end
end

Example: 09 Find the trigonometric Fourier coefficients for the periodic signal 𝐱(𝑡)

2𝐴 𝑇
𝐱(𝑡) = 𝑡 when |𝑡| <
{ 𝑇 2}
and the period is 𝑇
Solution: This signal is odd which implies that 𝑎𝑛 = 0 and using the result of exercise 2
with the shift property we get
− 𝑗𝑛ω0 −2𝐴
𝑇 2𝐴(−1)𝑛+1
𝑏𝑛 = 𝑒 2 ( )=
𝜋𝑛 𝜋𝑛

2 𝑇/2 2 𝑇/2 2𝐴(−1)𝑛+1


𝑎𝑛 = ∫ 𝐱(𝑡) cos(𝑛ω0 𝑡) 𝑑𝑡 = 0 and 𝑏𝑛 = ∫ 𝐱(𝑡) sin(𝑛ω0 𝑡) 𝑑𝑡 =
𝑇 −𝑇/2 𝑇 −𝑇/2 𝜋𝑛

∞2𝐴(−1)𝑘+1
𝐱(𝑡) = ∑ sin(𝑘ω0 𝑡)
𝑘=1 𝜋𝑘

clear all, clc, L=3; % half of period


T=2*L; A=10;
x1=0; t=-T:0.01:T;
Nmax=10; w0=2*pi/T;

for k=1:Nmax
x1=x1+((2*A*(-1)^(k+1))/(pi*k))*sin(k*w0*t);
if k>4
plot(t,x1,'-','linewidth',1.5)
hold on, grid on
end
end

Example: 10 Find the trigonometric Fourier coefficients for the periodic signal 𝐱(𝑡)

𝑡
𝐱(𝑡) = 𝐴 (1 − ) when 0<𝑡<𝑇
{ 𝑇 }
and the period is 𝑇

Solution: To make things more clear let we go back to exercise 2 where the function that
we studied is of the form

𝐴
𝐲(𝑡) = 𝑡 when 0 < 𝑡 < 𝑇 ⟹ 𝐱(𝑡) = 𝐴 − 𝐲(𝑡)
𝑇
In exercise 2 we get

𝐴 𝐴 ∞ 1 𝐴 𝐴 ∞ 1
𝐲(𝑡) = − ∑ sin(𝑘ω0 𝑡) ⟹ 𝐱(𝑡) = 𝐴 − 𝐲(𝑡) = + ∑ sin(𝑘ω0 𝑡)
2 𝜋 𝑘=1 𝑘 2 𝜋 𝑘=1 𝑘

clear all, clc, T = 1.5; Nmax=5; A=5;


t = 0:0.01:3*T; w0=2*pi/T; x1=A/2;
x = A*sawtooth(w0*t); x=(x+A)/2;
x = A-x;
plot(t,x,'k','linewidth',1.5)
grid on
hold on
for k=1:1:Nmax
x1=x1+(A/(pi*k))*sin((w0*k)*t);
if k>2
plot(t,x1,'-','linewidth',1.5)
hold on, grid on
end
end

Example: 11 Find the trigonometric Fourier coefficients for the periodic signal 𝐱(𝑡)

𝐴 when 0 < 𝑡 < 𝑇/2


𝐱(𝑡) = {
−𝐴 when 𝑇/2 < 𝑡 < 𝑇

Solution: This signal is odd which implies that 𝑎𝑛 = 0 so we have

2 𝑇
𝑎𝑛 = ∫ 𝐱(𝑡) cos(𝑛ω0 𝑡) 𝑑𝑡 = 0
𝑇 0
2 𝑇
𝑏𝑛 = ∫ 𝐱(𝑡) sin(𝑛ω0 𝑡) 𝑑𝑡
𝑇 0
4𝐴 𝑇/2
= ∫ sin(𝑛ω0 𝑡) 𝑑𝑡
𝑇 0
2𝐴 𝑇
= (1 − cos (𝑛ω0 ))
𝑛𝜋 2

It is very easy to check that cos(𝑛ω0 𝑇/2) = cos(𝑛𝜋) so

4𝐴 4𝐴 ∞ 1
𝑏2𝑘−1 = and 𝑏2𝑘 = 0 ⟹ 𝐱(𝑡) = ∑ sin((2𝑘 − 1)ω0 𝑡)
(2𝑘 − 1)𝜋 𝜋 𝑘=1 (2𝑘 − 1)

clear all, clc,

x1=0; t=0:0.01:10; Nmax=8;


A=5;

for k=1:Nmax
x1=x1+(4*A/(pi*(2*k-1)))*sin((2*k-1)*t);
if k>4
plot(t,x1,'-','linewidth',1.5)
hold on, grid on
end
end
Example: 12 Find the Fourier series of 𝐱(𝑡) = 𝑡 2 periodic on interval (−𝜋, 𝜋) and ω0 = 1.

Solution: The function is symmetric so it is going to consist of cosine terms, all 𝑏𝑛 = 0. We


are going to use integration by parts to evaluate 𝑎𝑛 .

1 𝜋 2 1 2 2𝜋 2
𝑎0 = ∫ 𝑡 𝑑𝑡 = ∫ 𝑡𝑑𝑡 =
𝜋 −𝜋 4 0 3
𝜋
1 2
2 𝜋 2 −4 cos(𝑛𝜋) 4
𝑎𝑛 = ∫ 𝑡 cos(𝑛𝑡) 𝑑𝑡 = ∫ 𝑡 cos(𝑛𝑡) 𝑑𝑡 = (−𝜋 ) = 2 (−1)𝑛
𝜋 −𝜋 𝜋 0 𝑛𝜋 𝑛 𝑛

2𝜋 2 ∞ 4
𝐱(𝑡) = +∑ 2
(−1)𝑘 cos(𝑘𝑡)
3 𝑘=1 𝑘

clear all, clc,

x1=(2*pi^2)/3; t=-10:0.01:10; Nmax=5;

for k=1:Nmax
x1=x1+(4/(k^2))*(-1)^k*cos(k*t);
if k>2
plot(t,x1,'-','linewidth',1.5)
hold on, grid on
end
end

Example: 13 Find the Fourier series of 𝐱(𝑡) = 𝑡 2 periodic on interval (0, 2𝜋) and ω0 = 1.

Short answer: you are asked to verify that the Fourier series of 𝐱(𝑡) is given by

4𝜋 2 ∞ 4 ∞ 4𝜋
𝐱(𝑡) = +∑ 2
cos(𝑘𝑡) − ∑ sin(𝑘𝑡)
3 𝑘=1 𝑘 𝑘=1 𝑘

clear all, clc,

x1=0; x2=0; t=-10:0.01:10; Nmax=8;

for k=1:Nmax
x1=x1+(4/(k^2))*cos(k*t);
x2=x2+(-4*pi/k)*sin(k*t);
x=((4*pi^2)/3)+x1+x2;
if k>4
plot(t,x,'-','linewidth',1.5)
hold on, grid on
end
end
Example: 14 for the periodic signal

0 when −2≤𝑡 <0


𝐲(𝑡) = { 𝐲(𝑡) = 𝐲(𝑡 + 4)
𝑡 when 0≤𝑡<2

Prove that the corresponding Fourier series is of the form

1 4 ∞ 1 (2𝑘 + 1)𝜋 2 ∞ (−1)𝑘+1 𝑘𝜋


𝐲(𝑡) = − 2∑ cos ( 𝑡) + ∑ sin ( 𝑡)
𝑘=0 (2𝑘 + 1)
2 𝜋 2 2 𝜋 𝑘 2
𝑘=1

Solution: This function is neither even nor odd, so we expect both sine and cosine terms to
be present. The period is 4 = 2𝐿 so 𝐿 = 2. Because 𝐲(𝑡) = 0 on the interval (−2,0), each of
the integrals in the Euler formulas, which should be an integral from 𝑡 = −2 to 𝑡 = 2, can
be replaced with an integral from 𝑡 = 0 to 𝑡 = 2. Thus, the Euler formulas give

1 𝑇 1 2 1
𝑎0 = ∫ 𝐲(𝑡)𝑑𝑡 = ∫ 𝑡𝑑𝑡 =
𝑇 0 4 0 2
1 2 𝑛𝜋 𝑛𝜋 2𝑥 2𝑑𝑥
𝑎𝑛 = ∫ 𝑡 cos ( 𝑡) 𝑑𝑡 (Let 𝑥 = 𝑡 so 𝑡 = and 𝑑𝑡 = )
2 0 2 2 𝑛𝜋 𝑛𝜋
1 𝑛𝜋 2𝑥 2𝑑𝑥 2 𝑛𝜋
2
= ∫ cos(𝑥) = 2 2 ∫ 𝑥 cos(𝑥) 𝑑𝑥 = 2 2 (cos(𝑛𝜋) − 1)
2 0 𝑛𝜋 𝑛𝜋 𝑛 𝜋 0 𝑛 𝜋
1 when 𝑛 = 0
2 0 when 𝑛 = 2𝑘
= 2 2 ((−1)𝑛 − 1) = { −4
𝑛 𝜋
when 𝑛 = 2𝑘 − 1
𝑛2 𝜋 2
1 2 𝑛𝜋 2𝑥 2𝑑𝑥
𝑏𝑛 = ∫ 𝑡 sin(𝑛ω0 𝑡) 𝑑𝑡 (Let 𝑥 = 𝑡 so 𝑡 = and 𝑑𝑡 = )
2 0 2 𝑛𝜋 𝑛𝜋
1 𝑛𝜋 2𝑥 2𝑑𝑥 2 𝑛𝜋
2 𝑥=𝑛𝜋
= ∫ sin(𝑥) = 2 2 ∫ 𝑥 sin(𝑥) 𝑑𝑥 = 2 2 [sin(𝑥) − 𝑥 cos(𝑥)]𝑥=0
2 0 𝑛𝜋 𝑛𝜋 𝑛 𝜋 0 𝑛 𝜋
−2 2
= 2 2 (𝑛𝜋 cos(𝑛𝜋)) = (−1)𝑛+1
𝑛 𝜋 𝑛𝜋
Example: 15 Compute the Fourier series of the function 𝐱(𝑡) = sin3(𝑡) Solution: This can
be handled by trig identities to reduce it to a finite sum of terms of the form sin(𝑛𝑡).

1 1 1
sin3(𝑡) = sin(𝑡) sin2(𝑡) =
sin(𝑡) (1 − cos(2𝑡)) = sin(𝑡) − sin(𝑡) cos(2𝑡)
2 2 2
1 1 1 3 1
= sin(𝑡) − ( (sin(3𝑡) + sin(−𝑡))) = sin(𝑡) − sin(3𝑡)
2 2 2 4 4

The right hand side is the Fourier series of sin3 (𝑡).


Example: 16 compute the Fourier series of the square pulse wave function of period 2𝜋
given by
1 when 0≤𝑡<ℎ
𝐱(𝑡) = { 𝐱(𝑡) = 𝐱(𝑡 + 2𝜋)
0 when ℎ ≤ 𝑡 < 2𝜋

Solution: For this function, it is more convenient to compute the 𝑎𝑛 and 𝑏𝑛 using
integration over the interval [0,2𝜋] rather than the interval [−𝜋, 𝜋].Thus,

1 2𝜋 Area ℎ and the Fourier series is


𝑎0 = ∫ 𝐱(𝑡)𝑑𝑡 = =
2𝜋 0 2𝜋 2𝜋
1 ℎ sin(𝑛ℎ)
𝑎𝑛 = ∫ cos(𝑛𝑡) 𝑑𝑡 = ℎ 1 ∞ sin(𝑘ℎ) 1 − cos(𝑘ℎ)
𝜋 0 𝑛𝜋 𝐱(𝑡) = + ∑ ( ) cos(𝑘𝑡) + ( ) sin(𝑘𝑡)

2𝜋 𝜋 𝑘=1 𝑘 𝑘
1 1 − cos(𝑛ℎ)
𝑏𝑛 = ∫ sin(𝑛𝑡) 𝑑𝑡 =
𝜋 0 𝑛𝜋
If the square pulse wave 𝐱(𝑡) is divided by ℎ, then one obtains a function whose graph is a
series of tall thin rectangles of height 1/ℎ and base ℎ, sothat each of the rectangles with the
bases starting at 2𝑛𝜋 has area 1. Now consider the limiting case where ℎ approaches 0.

The graph becomes a series of infinite height spikes of width 0. This looks like an infinite
sum of Dirac delta functions, which is the regular delta function extended to be periodic of
period 2𝜋. That is,
𝐱(𝑡) 𝑘=+∞
lim =∑ 𝛿(𝑡 − 2𝑘𝜋)
ℎ→0 ℎ 𝑘=−∞
Now compute the Fourier coefficients 𝑎𝑛 /ℎ and 𝑏𝑛 /ℎ as ℎ approaches 0.
𝑎𝑛 sin(𝑛ℎ) 1 𝑏𝑛 1 − cos(𝑛ℎ)
= = and = ⟶0
ℎ 𝑛ℎ𝜋 𝜋 ℎ 𝑛ℎ𝜋
Also, 𝑎0 /ℎ = 1/2𝜋. Thus, the 2𝜋-periodic delta function has a Fourier series
𝑘=+∞ 1 1 ∞ 1 1
∑ 𝛿(𝑡 − 2𝑘𝜋) ≈ + ∑ cos(𝑘𝑡) ≈ + (cos(𝑡) + cos(2𝑡) + cos(3𝑡) + ⋯ )
𝑘=−∞ 2𝜋 𝜋 𝑘=1 2𝜋 𝜋
1
≈ (1 + 𝑒 𝑖𝑡 + 𝑒 −𝑖𝑡 + 𝑒 2𝑖𝑡 + 𝑒 −2𝑖𝑡 + 𝑒 3𝑖𝑡 + 𝑒 −3𝑖𝑡 + ⋯ )
2𝜋
1
1
𝑖(𝑘+ )𝑡
1
−𝑖(𝑘+ )𝑡 sin ((𝑘 + 2) 𝑡)
1 𝑒 2 −𝑒 2 1
≈ ( )= lim
2𝜋 𝑖𝑡

𝑖𝑡 2𝜋 𝑘→∞ 1
𝑒 −𝑒
2 2 sin (2 𝑡)
{ }
IV. Fourier-Transform (Continuous-Time Signals): Can Fourier series be applied to
functions 𝐟(𝑡) that are not periodic? Strictly speaking the answer is no. But we can
generalize the approach to provide a positive
answer. The trick is to take the periodicity
length 2𝐿 to infinity, so that the function
becomes periodic with an infinite period ---
which is the same thing as not being periodic
at all. A consequence of this limiting procedure
is that the set of wave numbers implicated in
the Fourier expansion will no longer be
discrete, but will form a continuum. Discrete
sums will therefore be replaced by integrals, and the standard Fourier series will become a
Fourier transform (It is closely related to the Fourier series). In otherward, the Fourier
Transform is a mathematical technique that transforms a function of time, 𝐟(𝑡), to a
function of frequency, 𝐅(ω).

The Fourier transform can be viewed as an extension of the above Fourier series to non-
periodic functions, and this is the topic of this section. The Fourier Transform of a function
can be derived as a special case of the Fourier series when the period, 𝑇 → ∞. Start with the
Fourier series synthesis equation

𝐟(𝑡) = ∑ 𝑐𝑘 𝑒 𝑗𝑘𝜔0 𝑡
𝑘=−∞

where 𝑐𝑘 is given by the Fourier Series analysis equation,


𝑇/2
1 −𝑗𝑘𝜔0 𝑡
𝑐𝑘 = ∫ 𝐟(𝑡)𝑒 𝑑𝑡 ⟺ 𝑇𝑐𝑘 = ∫ 𝐟(𝑡)𝑒 −𝑗𝑘𝜔0 𝑡 𝑑𝑡
𝑇 𝑇 −𝑇/2

As 𝑇 → ∞ the fundamental frequency, 𝜔0 = 2𝜋/𝑇, becomes extremely small and the quantity
𝑘𝜔0 becomes a continuous quantity that can take on any value (since k has a range of ±∞)
so we define a new variable 𝜔 = 𝑘𝜔0 ; we also let 𝐅(𝜔) = 𝑇𝑐𝑘 . Making these substitutions in
the previous equation yields the analysis equation for the Fourier Transform (also called
the Forward Fourier Transform or the Fourier integral).
+∞
𝐅(𝜔) = lim 𝑇𝑐𝑘 = ∫ 𝐟(𝑡)𝑒 −𝑗𝜔𝑡 𝑑𝑡
𝑇→∞ −∞
Likewise, we can derive the Inverse Fourier Transform (i.e., the synthesis equation) by
starting with the synthesis equation for the Fourier series (and multiply and divide by T).
∞ ∞
𝑗𝑘𝜔0 𝑡
1
𝐟(𝑡) = ∑ 𝑐𝑘 𝑒 = ∑ 𝑇𝑐𝑘 𝑒 𝑗𝑘𝜔0 𝑡
𝑇
𝑘=−∞ 𝑘=−∞

As 𝑇 → ∞, 1/𝑇 = 𝜔0 /2𝜋. Since 𝜔0 is very small (as 𝑇 gets large, replace 𝜔0 by the quantity
𝑑𝜔). As before, we write 𝜔 = 𝑘𝜔0 and 𝐅(𝜔) = 𝑇𝑐𝑘 . A little work (and replacing the sum by an
integral) yields the synthesis equation of the Fourier Transform.
∞ ∞
𝑗𝑘𝜔0 𝑡
1 𝑑𝜔 1 +∞
𝐟(𝑡) = ∑ 𝑇𝑐𝑘 𝑒 = ∑ 𝐅(𝜔)𝑒 𝑗𝜔𝑡 = ∫ 𝐅(𝜔)𝑒 𝑗𝜔𝑡 𝑑𝜔
𝑇 2𝜋 2𝜋 −∞
𝑘=−∞ 𝑘=−∞

Example: 1 Let we find the Fourier series of the continuous-time periodic square wave
(pulse train) function of period 𝑇 = 2𝜋/𝜔0 and duration ℎ

1 when |𝑡| < ℎ


𝐱(𝑡) = { 𝐱(𝑡) = 𝐱(𝑡 + 𝑇)
0 when ℎ ≤ |𝑡| < 𝑇/2

The exponential Fourier series can be shown as

2 sin(𝑘𝜔0 ℎ) sin(𝑘𝜔0 ℎ) 2sin(𝜔ℎ)


𝑐𝑘 = ⟹ 𝑇𝑐𝑘 = 2 ( ) ⟹ 𝐗(𝜔) = 𝑇𝑐𝑘 = |
𝑘𝜔0 𝑇 𝑘𝜔0 𝜔 𝜔=𝑘𝜔 0

The coefficients 𝑇𝑐𝑘 are denoted by 𝐗(𝜔). Let we plot the graph of 𝐗(𝜔) for fixed ℎ and
different values for 𝑇 (i.e. for different frequencies 𝜔).
Definition of the Fourier Transform (and Inverse): If 𝐟(𝑡) is a continuous, integrable signal,
then the forward Fourier Transform is:
+∞
𝐅(𝜔) = ∫ 𝐟(𝑡)𝑒 −𝑗𝜔𝑡 𝑑𝑡 (𝐀𝐧𝐚𝐥𝐲𝐬𝐢𝐬 𝐄𝐪𝐮𝐚𝐭𝐢𝐨𝐧)
−∞

And the inverse Fourier Transform is:


1 +∞
𝐟(𝑡) = ∫ 𝐅(𝜔)𝑒 𝑗𝜔𝑡 𝑑𝜔 (𝐒𝐲𝐧𝐭𝐡𝐞𝐬𝐢𝐬 𝐄𝐪𝐮𝐚𝐭𝐢𝐨𝐧)
2𝜋 −∞

Example: 2 Let we find the inverse of Fourier transform for the frequency pulse function

1
when |𝜔| < 𝐴 1 +∞ 1 +𝐴
1 sin(𝐴𝑡)
𝐅(𝜔) = { 2𝐴 ⟹ 𝐟(𝑡) = ∫ 𝐅(𝜔)𝑒 𝑗𝜔𝑡 𝑑𝜔 = ∫ 𝑒 𝑗𝜔𝑡 𝑑𝜔 =
2𝜋 −∞ 4𝜋𝐴 −𝐴 2𝜋 𝐴𝑡
0 when |𝜔| > 𝐴

Remark: Let we see what's happen if 𝐴 → 0. We start by checking the frequency domain

1
when |𝜔| < 𝐴
lim 𝐅(𝜔) = lim { 2𝐴 = 𝛿(𝜔)
𝐴→0 𝐴→0
0 when |𝜔| > 𝐴
And in the time domain we have
1 sin(𝐴𝑡) 1
lim 𝐟(𝑡) = lim =
𝐴→0 𝐴→0 2𝜋 𝐴𝑡 2𝜋
This means that
1
𝔽 { } = 𝛿(𝜔) ⟺ 𝔽{𝐟(𝑡) = 1} = 2𝜋𝛿(𝜔)
2𝜋
Example: 3 Assume that the function 𝐟(𝑡) is a continuous with Fourier transform 𝐅(𝜔),
then find the Fourier transform of 𝑑𝐟(𝑡)/𝑑𝑡

1 +∞ 𝑗𝜔𝑡
𝑑𝐟(𝑡) 1 𝑑 +∞
𝐟(𝑡) = ∫ 𝐅(𝜔)𝑒 𝑑𝜔 ⟺ = ( ∫ 𝐅(𝜔)𝑒 𝑗𝜔𝑡 𝑑𝜔)
2𝜋 −∞ 𝑑𝑡 2𝜋 𝑑𝑡 −∞
+∞
𝑑𝐟(𝑡) 1 𝑑𝑒 𝑗𝜔𝑡
⟺ = (∫ 𝐅(𝜔) 𝑑𝜔)
𝑑𝑡 2𝜋 −∞ 𝑑𝑡
+∞
𝑑𝐟(𝑡) 𝑗𝜔
⟺ = (∫ 𝐅(𝜔)𝑒 𝑗𝜔𝑡 𝑑𝜔)
𝑑𝑡 2𝜋 −∞
Therefore,
𝑑𝐟(𝑡)
𝔽{ } = 𝑗𝜔𝐅(𝜔)
𝑑𝑡

IV.I Alternate Forms of the Fourier Transform There are alternate forms of the Fourier
Transform that you may see in different references. Different forms of the Transform result
in slightly different transform pairs (i.e., 𝐟(𝑡) and 𝐅(𝜔)), so if you use other references, make
sure that the same definition of forward and inverse transform are used.

Symmetric Form: Radian Frequency Symmetric Form: Hertz Frequency


+∞ +∞
1
𝐅(𝜔) = ∫ 𝐟(𝑡)𝑒 −𝑗𝜔𝑡 𝑑𝑡 𝐅(𝑓) = ∫ 𝐟(𝑡)𝑒 −𝑗2𝜋𝑓𝑡 𝑑𝑡
√2𝜋 −∞ −∞
∞ ∞
1
𝐟(𝑡) = ∫ 𝐅(𝜔)𝑒 𝑗𝜔𝑡 𝑑𝜔 𝐟(𝑡) = ∫ 𝐅(𝑓)𝑒 𝑗2𝜋𝑓𝑡 𝑑𝑓
√2𝜋 −∞ −∞

Existence of the Fourier Transform requires that the 𝐟(𝑡) be absolutely integrable,
+∞
∫ |𝐟(𝑡)|𝑑𝑡 < ∞
−∞
Existence of the Fourier Transform requires that discontinuities in 𝐟(𝑡) must be finite (i.e.
|𝐟(𝛼 + ) − 𝐟(𝛼 − )| < ∞). This presents no difficulty for the kinds of functions we will consider
(i.e., functions that we can produce in the lab).

Note: absolutely integrable functions would seem to present a problem, because common
signals such as the sine and cosine are not absolutely integrable. We will finesse this
problem, later, by considering impulse functions, 𝛿(𝛼), which are not functions in the strict
sense since the value isn't defined at 𝛼 = 0.

IV.II Connection between the Fourier Transform and the Laplace Transform: Since the
Laplace transform may be considered a generalization of the Fourier transform in which
the frequency is generalized from 𝑗𝜔 to 𝑠 = 𝑎 + 𝑗𝜔, the complex variable 𝑠 is often referred to
as the complex frequency.
+∞ +∞
−𝑠𝑡
𝐅(𝜔) = 𝐅(𝑠)|𝑠=𝑗𝜔 = ∫ 𝐟(𝑡)𝑒 𝑑𝑡| =∫ 𝐟(𝑡)𝑒 −𝑗𝜔𝑡 𝑑𝑡
−∞ 𝑠=𝑗𝜔 −∞

Because the Fourier transform is the Laplace transform with 𝑠 = 𝑗𝜔, it should not be
assumed automatically that the Fourier transform of a signal 𝐱(𝑡) is the Laplace transform
with 𝑠 replaced by 𝑗𝜔. If 𝐱(𝑡) is absolutely integrable, the Fourier transform of 𝐱(𝑡) can be
obtained from the Laplace transform of 𝐱(𝑡) with 𝑠 = 𝑗𝜔. This is not generally true of signals
which are not absolutely integrable.
Setting 𝑠 = 𝑎 + 𝑗𝜔 in Laplace transform, we have
+∞ +∞ +∞
𝐅(𝑠) = ∫ 𝐟(𝑡)𝑒 −𝑠𝑡 𝑑𝑡 = ∫ 𝐟(𝑡)𝑒 −(𝑎+𝑗𝜔)𝑡 𝑑𝑡 = ∫ {𝑒 −𝑎𝑡 𝐟(𝑡)}𝑒 −𝑗𝜔𝑡 𝑑𝑡 = 𝔽{𝑒 −𝑎𝑡 𝐟(𝑡)}
−∞ −∞ −∞

which indicates that the bilateral Laplace transform of 𝐟(𝑡) can be interpreted as the
Fourier transform of 𝑒 −𝑎𝑡 𝐟(𝑡).

Symmetry Property

Remark: If 𝐅(𝜔) is the Fourier transform of 𝐟(𝑡), then the symmetry between the analysis
and synthesis equations of the Fourier transform states that
+∞ +∞
2𝜋𝐟(𝑡) = ∫ 𝐅(𝜔)𝑒 𝑗𝜔𝑡 𝑑𝜔 ⟹ 2𝜋𝐟(−𝑡) = ∫ 𝐅(𝜔)𝑒 −𝑗𝜔𝑡 𝑑𝜔
−∞ −∞

If we Interchange 𝑡 and 𝜔, we obtain

1 +∞ 𝔽
𝐟(−𝜔) = ∫ 𝐅(𝑡)𝑒 −𝑗𝜔𝑡 𝑑𝑡 ⟺ 𝐅(𝑡) ↔ 2𝜋𝐟(−𝜔)
2𝜋 −∞

IV.III Transforms of Some Useful Functions For convenience, we now introduce a


compact notation for some useful functions such as gate, triangle, and interpolation
functions etc....

⦁ Gate Function (Pulse): We define a gate function rect(𝑡/ℎ) as a gate pulse of height 𝐴 and
width 2ℎ, centered at the origin,
+∞ +ℎ
𝐴 when |𝑡| < ℎ sin(𝜔ℎ)
∏(𝑡) = { ⟹ 𝐗(𝜔) = ∫ 𝐱(𝑡)𝑒 −𝑗𝜔𝑡 𝑑𝑡 = ∫ 𝐴𝑒 −𝑗𝜔𝑡 𝑑𝑡 = 2𝐴
0 when |𝑡| > ℎ −∞ −ℎ 𝜔
⟵⟵⟵
sin(𝜔ℎ) sin(ℎ𝑡)
𝔽{∏(𝑡)} = 2𝐴 and by symmetry 𝔽 {2𝐴 } = 2𝜋∏(𝜔)
𝜔 𝑡
⟶⟶⟶
⦁ Delta Function (Dirac): Consider the unit impulse function 𝛿(𝑡). The Laplace transform of
𝛿(𝑡) is 1 which can produce directly the Fourier transform
+∞ +∞
∫ 𝛿(𝑡)𝑒 −𝑠𝑡
𝑑𝑡 ⟹ 𝔽{𝛿(𝑡)} = ∫ 𝛿(𝑡)𝑒 −𝑗𝜔𝑡 𝑑𝑡 = 1 ⟺ 𝔽{𝛿(𝑡)} = 1
−∞ −∞

And from the symmetry between the analysis and synthesis equations of the Fourier
transform we have 𝔽{𝐟(𝑡) = 1} = 2𝜋𝛿(−𝜔) = 2𝜋𝛿(𝜔)

⦁ Signum Function (Sign): Consider the sign function sgn(𝑡) = (𝑢(𝑡) − 𝑢(−𝑡)), which can be
written in terms of 𝛿(𝑡)

𝑑𝐟(𝑡) 𝑑 𝑑 𝑑𝐟(𝑡)
= sgn(𝑡) = (𝑢(𝑡) − 𝑢(−𝑡)) = 2𝛿(𝑡) ⟹ 𝔽 { } = 𝔽{2𝛿(𝑡)} = 2
𝑑𝑡 𝑑𝑡 𝑑𝑡 𝑑𝑡

and from the derivative property we have


⟵⟵⟵
𝑑𝐟(𝑡) 1 𝜋 2
𝔽{ } = 𝑗𝜔𝐅(𝜔) = 2 ⟹ 𝐅(𝜔) = 𝔽{sgn(𝑡)} = and by symmetry 𝔽 { } = sgn(𝜔)
𝑑𝑡 𝑗𝜔 𝑡 𝑗
⟶⟶⟶
⦁ Exponential signal: let we start by real exponential function 𝐱(𝑡) = 𝑒 −𝑎𝑡 𝑢(𝑡) with 𝑎 > 0
+∞ +∞
1 1
𝐗(𝜔) = ∫ 𝑒 −𝑎𝑡 𝑢(𝑡)𝑒 −𝑗𝜔𝑡 𝑑𝑡 = ∫ 𝑒 −(𝑎+𝑗𝜔)𝑡 𝑑𝑡 = ⟺ 𝔽{𝑒 −𝑎𝑡 𝑢(𝑡)} =
−∞ 0 𝑎 + 𝑗𝜔 𝑎 + 𝑗𝜔

Note that 𝐱(𝑡) is absolutely integrable, so 𝐗(𝜔) = 𝐗(𝑠)|𝑠=𝑗𝜔

But what is about the complex exponential function (𝑡) = 𝑒 𝑗𝜔0 𝑡 with 𝜔0 > 0 ?
+∞ +∞
𝐗(𝜔) = ∫ 𝑒 𝑗𝜔0 𝑡 𝑒 −𝑗𝜔𝑡 𝑑𝑡 = ∫ 𝑒 −𝑗(𝜔−𝜔0 )𝑡 𝑑𝑡
−∞ −∞

If we let 𝜃 = 𝜔 − 𝜔0 we can write


+∞ +∞
∫ 𝑒 −𝑗(𝜔−𝜔0 )𝑡 𝑑𝑡 = ∫ 𝑒 −𝑗𝜃𝑡 𝑑𝑡 = 2𝜋𝛿(𝜃) ⟺ 𝔽{𝑒 𝑗𝜔0 𝑡 } = 2𝜋𝛿(𝜔 − 𝜔0 )
−∞ −∞

And from the symmetry between the analysis and synthesis equations of the Fourier
+∞
transform we have 𝔽{𝛿(𝑡 − 𝑡0 )} = ∫−∞ 𝛿(𝑡 − 𝑡0 )𝑒−𝑗𝜔𝑡 𝑑𝑡 = 𝑒𝑗𝜔𝑡0

⦁ Step function: Consider the unit step function 𝐱(𝑡) = 𝑢(𝑡) then the Fourier transform is

+∞ +∞ +∞
−𝑗𝜔𝑡 −𝑗𝜔𝑡
𝑒 −𝑗𝜔𝑡
𝐗(𝜔) = ∫ 𝑢(𝑡)𝑒 𝑑𝑡 = ∫ 𝑒 𝑑𝑡 = − |
−∞ 0 𝑗𝜔 0

But this expression is not convergent so we will use the following trick sgn(𝑡) = 2𝑢(𝑡) − 1. If
we define a step function in terms of sgn(𝑡) then the corresponding Fourier transform is

1 1 1 1 1
𝔽{𝑢(𝑡)} = 𝔽 { + sgn(𝑡)} = 𝔽 { } + 𝔽 { sgn(𝑡)} = 𝜋𝛿(𝜔) +
2 2 2 2 𝑗𝜔

1
𝔽{𝑢(𝑡)} = 𝜋𝛿(𝜔) +
𝑗𝜔

Thus, the Fourier transform of 𝑢(𝑡) cannot be obtained from its Laplace transform. Note
that the unit step function 𝑢(𝑡) is not absolutely integrable.

⦁ Sine and Cosine functions: Consider the Fourier transform of sin(𝜔0 𝑡) and cos(𝜔0 𝑡)
+∞ +∞
𝑒 𝜔0 𝑗𝑡 − 𝑒 −𝜔0𝑗𝑡 −𝑗𝜔𝑡
𝔽{sin(𝜔0 𝑡)} = ∫ sin(𝜔0 𝑡) 𝑒 −𝑗𝜔𝑡 𝑑𝑡 = ∫ ( )𝑒 𝑑𝑡
−∞ −∞ 2𝑗
+∞ +∞
1 1
= ∫ 𝑒 𝑗(𝜔0 −𝜔)𝑡 𝑑𝑡 + ∫ 𝑒 −𝑗(𝜔0 +𝜔)𝑡 𝑑𝑡
2𝑗 −∞ 2𝑗 −∞
= −𝑗𝜋[𝛿(𝜔 − 𝜔0 ) − 𝛿(𝜔 + 𝜔0 )]
+∞ +∞
𝑒𝜔0𝑗𝑡 + 𝑒−𝜔0𝑗𝑡
𝔽{cos(𝜔0 𝑡)} = ∫ cos(𝜔0 𝑡) 𝑒 −𝑗𝜔𝑡
𝑑𝑡 = ∫ ( ) 𝑒−𝑗𝜔𝑡 𝑑𝑡
−∞ −∞ 2
= 𝜋[𝛿(𝜔 − 𝜔0 ) + 𝛿(𝜔 + 𝜔0 )]

𝔽{sin(𝜔0 𝑡)} = 𝑗𝜋[𝛿(𝜔 + 𝜔0 ) − 𝛿(𝜔 − 𝜔0 )] and 𝔽{cos(𝜔0 𝑡)} = 𝜋[𝛿(𝜔 + 𝜔0 ) + 𝛿(𝜔 − 𝜔0 )]


IV.IV Fourier Spectra The Fourier transform 𝐗(𝜔) of 𝐱(𝑡) is, in general, complex, and it
can be expressed as 𝐗(𝜔) = |𝐗(𝜔)|𝑒 𝝓(𝜔) . We can plot the spectrum 𝐗(𝜔) as a function of 𝜔.
Since 𝐗(𝜔) is complex, we have both amplitude and angle (or phase) spectra, in which
|𝐗(𝜔)| is the amplitude and 𝝓(𝜔) is the angle (or phase) of 𝐗(𝜔).

If 𝐱(𝑡) is a real signal, then we get


+∞
𝐗(−𝜔) = ∫ 𝐱(𝑡)𝑒 𝑗𝜔𝑡 𝑑𝑡 = 𝐗 ⋆ (𝜔)
−∞

Then it follows that |𝐗(−𝜔)| = |𝐗(𝜔)| and 𝝓(𝜔) = −𝝓(−𝜔). Hence, as in the case of periodic
signals, the amplitude spectrum |𝐗(𝜔)| is an even function and the phase spectrum 𝝓(𝜔) is
an odd function of 𝜔.

As an example of a simple continuous-time causal frequency-selective filter, we consider


the 𝑅𝐶 filter shown

The output 𝐲(𝑡) and the input 𝐱(𝑡) are related 𝑅𝐶𝐲̇ (𝑡) + 𝐲(𝑡) = 𝐱(𝑡). Taking the Fourier
transforms of both sides of the above equation, the frequency response 𝐇(𝜔) of the 𝑅𝐶 filter
is given by
𝐘(𝜔) 1 1
𝐇(𝜔) = = =
𝐗(𝜔) 1 + 𝑗𝜔𝑅𝐶 1 + 𝑗𝜔/𝜔0

where 𝜔0 = 1/𝑅𝐶. Thus, the amplitude response |𝐇(𝜔)| and phase 𝜽𝑯 (𝜔) are given by

1 1 𝜔
|𝐇(𝜔)| = = , 𝜽𝑯 (𝜔) = − tan−1 ( )
|1 + 𝑗𝜔/𝜔0 | 1/2 𝜔0
[1 + (𝜔/𝜔0 )2 ]
IV.V Fourier Transforms of Periodic Functions While the Fourier transform was
originally incepted as a way to deal with aperiodic signals, we can still use it to analyze
period ones! The tackle method is twofold. First we know that any periodic signals can be
decomposed in terms of a Fourier series which is a summation of complex exponentials.
Second we know that each exponential has the Fourier transform as a delta function in the
frequency domain. By simply using superposition, then we would expect that the Fourier
transform of the periodic signal to be comprised of a sequence of delta functions in the
frequency domain, uniformly spaced by the fundamental frequency of the periodic signal in
the time domain.

We know that 𝔽{𝑒 𝑗𝜔0 𝑡 } = 2𝜋𝛿(𝜔 − 𝜔0 ) and from the linearity of the FT we can write
𝔽
𝑎𝑘 𝑒 𝑗𝑘𝜔0 𝑡 ↔ 2𝜋𝑎𝑘 𝛿(𝜔 − 𝑘𝜔0 )



𝑘=+∞ 𝑘=+∞
𝑗𝑘𝜔0 𝑡
𝔽
𝐱(𝑡) = ∑ 𝑎𝑘 𝑒 ↔ 𝐗(𝜔) = ∑ 2𝜋𝑎𝑘 𝛿(𝜔 − 𝑘𝜔0 )
𝑘=−∞ 𝑘=−∞

IV.VI Properties of the Continuous-Time Fourier Transform We now study some of the
basic and important properties of the Fourier transform and their implications as well as
applications (Many are presented with proofs, but a few are simply stated). Also many of
these properties are similar to those of the Laplace transform.

❶ Linearity The Fourier Transform is linear. The Fourier Transform of a sum of functions,
is the sum of the Fourier Transforms of the functions. Also, if you multiply a function by a
constant, the Fourier Transform is multiplied by the same constant.

𝐱(𝑡) = 𝛼𝐱1 (𝑡) + 𝛽𝐱 2 (𝑡)


+∞
𝐗(𝜔) = ∫ (𝛼𝐱1 (𝑡) + 𝛽𝐱 2 (𝑡))𝑒 −𝑗𝜔𝑡 𝑑𝑡
−∞
+∞ +∞
= 𝛼∫ 𝐱1 (𝑡)𝑒 −𝑗𝜔𝑡 𝑑𝑡 + 𝛽 ∫ 𝐱 2 (𝑡)𝑒 −𝑗𝜔𝑡 𝑑𝑡
−∞ −∞
= 𝛼𝐗1 (𝜔) + 𝛽𝐗 2 (𝜔)

❷ Time Scaling: The scaling property states that time compression of a signal results in its
spectral expansion, and time expansion of the signal results in its spectral compression.
+∞ +∞
𝑡 𝑡
𝐲(𝑡) = 𝐱 ( ) ⟹ 𝐘(𝜔) = ∫ 𝐱 ( ) 𝑒 −𝑗𝜔𝑡 𝑑𝑡 = |𝑎| ∫ 𝐱(𝑢)𝑒 −𝑗𝜔𝑎𝑢 𝑑𝑢 = |𝑎|𝐗(𝑎𝜔)
𝑎 −∞ 𝑎 −∞
That is
𝔽 𝔽 1 𝜔
𝐱(𝑡) ↔ 𝐗(𝜔) ⟹ 𝐲(𝑡) = 𝐱(𝑎𝑡) ↔ 𝐘(𝜔) = 𝐗( )
|𝑎| 𝑎

This indicates that scaling the time variable 𝑡 by the factor 𝑎 causes an inverse scaling of
the frequency variable 𝜔 by 1/𝑎 , as well as an amplitude scaling of 𝐗(𝜔/𝑎) by 1/𝑎. Thus,
the scaling property implies that time compression of a signal (𝑎 > 1) results in its spectral
expansion and that time expansion of the signal (𝑎 < 1) results in its spectral compression.
❸ Time Shift: If a function is delayed in time, this corresponds to multiplication by a
complex exponential in frequency domain. The complex exponential has a magnitude of 1,
so this is equivalent to a phase shift of −𝑎𝜔 radians.
+∞ +∞
𝐲(𝑡) = 𝐱(𝑡 − 𝑎) ⟹ 𝐘(𝜔) = ∫ 𝐱(𝑡 − 𝑎)𝑒 −𝑗𝜔𝑡 𝑑𝑡 = 𝑎 ∫ 𝐱(𝑢)𝑒 −𝑗𝜔(𝑢+𝑎) 𝑑𝑢 = 𝑒 −𝑗𝜔𝑎 𝐗(𝜔)
−∞ −∞

This equation shows that the effect of a shift in the time domain is simply to add a linear
term −𝑎𝜔, to the original phase spectrum 𝝓(𝜔). This is known as a linear phase shift of the
Fourier transform 𝐗(𝜔).

❹ Time Reversal: The time reversal of 𝐱(𝑡) produces a like reversal of the frequency axis
for 𝐗(𝜔). Time reversal is readily obtained by setting 𝑎 = −1 in time scaling.

𝔽
𝐲(𝑡) = 𝐱(−𝑡) ↔ 𝐘(𝜔) = 𝐗(−𝜔)

Alternatively we can prove this by change of variable


+∞ +∞
𝐘(𝜔) = 𝔽(𝐱(−𝑡)) = ∫ 𝐱(−𝑡)𝑒 −𝑗𝜔𝑡 𝑑𝑡 = ∫ 𝐱(𝑡)𝑒 𝑗𝜔𝑡 𝑑𝑡 = 𝐗(−𝜔)
−∞ −∞

❺ Frequency Shifting: Because 𝑒 𝑗𝜔0 𝑡 is not a real function that can be generated,
frequency shifting in practice is achieved by multiplying 𝐱(𝑡) by a sinusoid. This assertion
follows from the fact that

1 𝔽 1
cos(𝜔0 𝑡) 𝐱(𝑡) = (𝑒 𝑗𝜔0 𝑡 𝐱(𝑡) + 𝑒 −𝑗𝜔0 𝑡 𝐱(𝑡)) cos(𝜔0 𝑡) 𝐱(𝑡) ↔ (𝐗(𝜔 + 𝜔0 ) + 𝐗(𝜔 − 𝜔0 ))
2 2
1 𝑗𝜔 𝑡 ⟺ 𝔽 𝑗
sin(𝜔0 𝑡) 𝐱(𝑡) = (𝑒 0 𝐱(𝑡) − 𝑒 −𝑗𝜔0 𝑡 𝐱(𝑡)) sin(𝜔0 𝑡) 𝐱(𝑡) ↔ (𝐗(𝜔 + 𝜔0 ) − 𝐗(𝜔 − 𝜔0 ))
2𝑗 2
+∞ +∞
𝔽
𝐲(𝑡) = 𝑒 𝑗𝜔0 𝑡
𝐱(𝑡) ↔ 𝐘(𝜔) = ∫ 𝐱(𝑡)𝑒 𝑗𝜔0 𝑡 −𝑗𝜔𝑡
𝑒 𝑑𝑡 = ∫ 𝐱(𝑡)𝑒 −𝑗(𝜔−𝜔0 )𝑡 𝑑𝑡 = 𝐗(𝜔 − 𝜔0 )
−∞ −∞

This result shows that the


multiplication of a signal 𝐱(𝑡) by a
sinusoid of frequency 𝜔0 shifts the
spectrum 𝐗(𝜔) by ±𝜔0. Multiplication
of 𝐱(𝑡) by a sinusoid amounts to
modulating the sinusoid amplitude.
This type of modulation is known as
amplitude modulation. The sinusoid
cos(𝜔0 𝑡) is called the carrier, the signal
𝐱(𝑡) is the modulating signal, and the
signal cos(𝜔0 𝑡) 𝐱(𝑡) is the modulated
signal.

❻ Duality (or Symmetry): From the analysis and synthesis equations of the Fourier
transform we can show an interesting fact: the direct and the inverse transform operations
are remarkably similar. These operations, required to go from 𝐱(𝑡) to 𝐗(𝜔) and then from
𝐗(𝜔) to 𝐱(𝑡),
1 +∞
𝐱(𝑡) = ∫ 𝐗(𝜔)𝑒 𝑗𝜔𝑡 𝑑𝜔
2𝜋 −∞ 1 +∞
+∞ ⟺ 𝐱(−𝜔) = ∫ 𝐗(𝑡)𝑒 −𝑗𝜔𝑡 𝑑𝑡
−𝑗𝜔𝑡
2𝜋 −∞
𝐗(𝜔) = ∫ 𝐱(𝑡)𝑒 𝑑𝑡
{ −∞ }
𝔽 1 𝔽
𝐗(𝑡) ↔ 2𝜋𝐱(−𝜔) or 𝐗(−𝑡) ↔ 𝐱(𝜔)
2𝜋
❼ Even-Odd Properties: Consider only the case when x(t) is real (the complex case is not
much more difficult, but does not pertain to the signals being considered). Represent x(t) as
the sum of an even function and an odd function

𝐱(𝑡) = 𝐱 𝑜 (𝑡) + 𝐱 𝑒 (𝑡)


+∞ +∞
𝐗(𝜔) = ∫ 𝐱(𝑡)𝑒 −𝑗𝜔𝑡 𝑑𝑡 = ∫ (𝐱 𝑜 (𝑡) + 𝐱 𝑒 (𝑡))𝑒 −𝑗𝜔𝑡 𝑑𝑡
−∞ −∞
+∞
=∫ (𝐱 𝑜 (𝑡) + 𝐱 𝑒 (𝑡))(cos(𝜔𝑡) − 𝑗 cos(𝜔𝑡))𝑑𝑡
−∞
+∞ +∞
=∫ (𝐱 𝑜 (𝑡) cos(𝜔𝑡) + 𝐱 𝑒 (𝑡) cos(𝜔𝑡))𝑑𝑡 − ∫ (𝐱 𝑜 (𝑡) sin(𝜔𝑡) + 𝐱 𝑒 (𝑡) sin(𝜔𝑡))𝑑𝑡
−∞ −∞

Recall that the product of two odd functions or two even functions is an even function, and
the product of an odd and an even function is odd. Recall, also, that the integral of an odd
function from −𝑎 to +𝑎 is zero. Since 𝐱 𝑜 (𝑡) cos(𝜔𝑡) and 𝐱 𝑒 (𝑡) sin(𝜔𝑡) are odd, their integrals
go to zero so the previous result simplifies to
+∞
𝐗(𝜔) = ∫ (𝐱 𝑒 (𝑡) cos(𝜔𝑡) − 𝑗𝐱 𝑜 (𝑡) sin(𝜔𝑡))𝑑𝑡
−∞
+∞ +∞
=∫ 𝐱 𝑒 (𝑡) cos(𝜔𝑡) 𝑑𝑡 − 𝑗 ∫ 𝐱 𝑜 (𝑡) sin(𝜔𝑡) 𝑑𝑡
−∞ −∞
= 𝐗 𝑒 (𝜔) + 𝑗𝐗 𝑜 (𝜔)
+∞ +∞
With 𝐗 𝑒 (𝜔) = ∫−∞ 𝐱 𝑒 (𝑡) cos(𝜔𝑡) 𝑑𝑡 = 𝐗 𝑒 (−𝜔) and 𝐗 𝑜 (𝜔) = −𝑗 ∫−∞ 𝐱 𝑜 (𝑡) sin(𝜔𝑡) 𝑑𝑡 = −𝐗 𝑜 (−𝜔)

❽ Differentiation in the Time Domain: the effect of differentiation in the time domain is
the multiplication of 𝐗(𝜔) by 𝑗𝜔 in the frequency domain (see Exercise 03)
+∞
𝑑 𝑑𝐱(𝑡) 𝑗𝜔 𝑑𝐱(𝑡)
𝐲(𝑡) = 𝐱(𝑡) ⟹ = (∫ 𝐗(𝜔)𝑒 𝑗𝜔𝑡 𝑑𝜔) ⟹ 𝐘(𝜔) = 𝔽 { } = 𝑗𝜔𝐗(𝜔)
𝑑𝑥 𝑑𝑡 2𝜋 −∞ 𝑑𝑡

𝑑𝑛 𝐱(𝑡)
𝔽{ } = (𝑗𝜔)𝑛 𝐗(𝜔)
𝑑𝑡𝑛

❾ Differentiation in the Frequency Domain: Differentiating a function in frequency


domain is said to amplify the higher time components because of the additional multiplying
factor t. this property is dual of the previous one.

𝑑𝑛 𝑑𝑛 +∞ −𝑗𝜔𝑡
+∞
𝑑 𝑛 𝑒 −𝑗𝜔𝑡 +∞
𝐗(𝜔) = ∫ 𝐱(𝑡)𝑒 𝑑𝑡 = ∫ 𝐱(𝑡) 𝑑𝑡 = ∫ 𝐱(𝑡)(−𝑗𝑡)𝑛 𝑒 −𝑗𝜔𝑡 𝑑𝑡
𝑑𝜔 𝑛 𝑑𝜔 𝑛 −∞ −∞ 𝑑𝜔 𝑛
−∞

𝔽 𝑑𝑛
(−𝑗𝑡)𝑛 𝐱(𝑡) ↔ 𝐗(𝜔)
𝑑𝜔 𝑛
❿ Integration in the Time Domain: This property is the counterpart of the previous one,
but it is based on the Fourier transform of the step function
𝑡
𝔽 𝐗(𝜔)
𝐲(𝑡) = ∫ 𝐱(𝜃)𝑑𝜃 = 𝐱(𝑡) ⋆ 𝒖(𝑡) ↔ 𝐘(𝜔) = 𝐗(𝜔)𝐔(𝜔) = + 𝜋𝐗(0)𝛿(𝜔)
−∞ 𝑗𝜔
Proof:
𝑡 ∞ 𝑡
𝐘(𝜔) = 𝔽 {∫ 𝐱(𝜉)𝑑𝜉 } = ∫ (∫ 𝐱(𝜉)𝑑𝜉 ) 𝑒 −𝑗𝜔𝑡 𝑑𝑡
−∞ −∞ −∞

Notice that the integration of 𝐱(𝑡) can be written in terms of convolution with step function
𝑡 +∞
∫ 𝐱(𝜉)𝑑𝜉 = 𝐱(𝑡) ⋆ 𝒖(𝑡) = ∫ 𝐱(𝜉)𝒖(𝑡 − 𝜉)𝑑𝜉
−∞ −∞

𝑡 ∞ +∞
𝔽 {∫ 𝐱(𝜉)𝑑𝜉 } = ∫ (∫ 𝐱(𝜉)𝒖(𝑡 − 𝜉)𝑑𝜉 ) 𝑒 −𝑗𝜔𝑡 𝑑𝑡
−∞ −∞ −∞
+∞ +∞
=∫ (∫ 𝑒 −𝑗𝜔𝑡 𝐱(𝜉)𝒖(𝑡 − 𝜉) 𝑑𝑡) 𝑑𝜉
−∞ −∞
+∞ +∞
=∫ 𝐱(𝜉) (∫ 𝑒 −𝑗𝜔𝑡 𝒖(𝑡 − 𝜉) 𝑑𝑡) 𝑑𝜉
−∞ −∞
+∞ +∞
=∫ 𝐱(𝜉) (∫ 𝑒 −𝑗𝜔(𝜉+𝜏) 𝒖(𝜏) 𝑑𝜏) 𝑑𝜉
−∞ −∞
+∞ +∞
−𝑗𝜔𝜉
=∫ 𝐱(𝜉)𝑒 (∫ 𝑒 −𝑗𝜔𝜏 𝒖(𝜏) 𝑑𝜏) 𝑑𝜉
−∞ −∞
+∞ +∞
= (∫ 𝐱(𝜉)𝑒 −𝑗𝜔𝜉 𝑑𝜉) (∫ 𝑒 −𝑗𝜔𝜏 𝒖(𝜏) 𝑑𝜏)
−∞ −∞
1 𝐗(𝜔)
= 𝐗(𝜔) ( + 𝜋𝛿(𝜔)) = + 𝜋𝐗(0)𝛿(𝜔)
𝑗𝜔 𝑗𝜔

⓫ Convolution: This property is also called time convolution theorem, and it states that
convolution in the time domain becomes multiplication in the frequency domain. As in the
case of the Laplace transform, this convolution property plays an important role in the
𝔽
study of continuous-time LTI systems 𝐲(𝑡) = 𝐱1 (𝑡) ⋆ 𝐱 2 (𝑡) ↔ 𝐘(𝜔) = 𝐗1 (𝜔)𝐗 2 (𝜔)
Proof:
+∞
𝐘(𝜔) = 𝔽{𝐱1 (𝑡) ⋆ 𝐱 2 (𝑡)} = 𝔽 {∫ 𝐱1 (𝜉)𝐱 2 (𝑡 − 𝜉)𝑑𝜉 }
−∞
∞ +∞
= ∫ (∫ 𝐱1 (𝜉)𝐱 2 (𝑡 − 𝜉)𝑑𝜉 ) 𝑒 −𝑗𝜔𝑡 𝑑𝑡
−∞ −∞
∞ +∞
= ∫ 𝐱1 (𝜉) (∫ 𝑒 −𝑗𝜔𝑡 𝐱 2 (𝑡 − 𝜉)𝑑𝑡) 𝑑𝜉
−∞ −∞
∞ +∞
= ∫ 𝐱1 (𝜉) (∫ 𝑒 −𝑗𝜔(𝜉+𝜏) 𝐱 2 (𝜏)𝑑𝜏) 𝑑𝜉
−∞ −∞
∞ +∞
= ∫ 𝐱1 (𝜉)𝑒 −𝑗𝜔𝜉 (∫ 𝑒 −𝑗𝜔𝜏 𝐱 2 (𝜏)𝑑𝜏) 𝑑𝜉
−∞ −∞
∞ +∞
= (∫ 𝐱1 (𝜉)𝑒 −𝑗𝜔𝜉 𝑑𝜉 ) (∫ 𝐱 2 (𝜏)𝑒 −𝑗𝜔𝜏 𝑑𝜏)
−∞ −∞
= 𝐗1 (𝜔)𝐗 2 (𝜔)
⓬ Multiplication (Frequency Convolution): The multiplication property is the dual
property of convolution and is often referred to as the frequency convolution theorem.
Thus, multiplication in the time domain becomes convolution in the frequency domain
𝔽 1
𝐲(𝑡) = 𝐱1 (𝑡)𝐱 2 (𝑡) ↔ 𝐘(𝜔) = 𝐗 (𝜔) ⋆ 𝐗 2 (𝜔)
2𝜋 1
Proof:

𝐘(𝜔) = 𝔽{𝐱1 (𝑡)𝐱 2 (𝑡)} = ∫ (𝐱1 (𝑡)𝐱 2 (𝑡))𝑒 −𝑗𝜔𝑡 𝑑𝑡
−∞
∞ +∞
1
=∫ (∫ 𝐗1 (𝜗)𝑒 𝑗𝜗𝑡 𝑑𝜗) 𝐱 2 (𝑡)𝑒 −𝑗𝜔𝑡 𝑑𝑡
−∞ 2𝜋 −∞
1 ∞ +∞
= ∫ 𝐗 (𝜗) (∫ 𝐱 2 (𝑡)𝑒 −𝑗(𝜔−𝜗)𝑡 𝑑𝑡) 𝑑𝜗
2𝜋 −∞ 1 −∞
1 ∞ 1
= ∫ 𝐗1 (𝜗)𝐗 2 (𝜔 − 𝜗) 𝑑𝜗 = 𝐗 (𝜔) ⋆ 𝐗 2 (𝜔)
2𝜋 −∞ 2𝜋 1

⓭ Parseval's Relations: If we denote the normalized energy content of 𝐱(𝑡) by 𝐸𝐱 then the
Parseval's identity says that the energy content 𝐸𝐱 can be computed by integrating |𝐗(𝜔)|2
over all frequencies 𝜔. For this reason |𝐗(𝜔)|2 is often referred to as the energy-density
spectrum of 𝐱(𝑡), and the Parseual's theorem is also known as the energy theorem.

1 ∞
⦁ ∫ 𝐱1 (𝑡)𝐱 2 (𝑡)𝑑𝑡 = ∫ 𝐗 (𝜗)𝐗 2 (−𝜗) 𝑑𝜗
−∞ 2𝜋 −∞ 1 ∞ ∞
|| ⦁ ∫ 𝐱1 (𝜆)𝐗 2 (𝜆)𝑑𝜆 = ∫ 𝐗1 (𝜆)𝐱 2 (𝜆) 𝑑𝜆

1 ∞ −∞ −∞
⦁∫ |𝐱(𝑡)|2 𝑑𝑡 = ∫ |𝐗(𝜔)|2 𝑑𝜔
−∞ 2𝜋 −∞

Proof: From the frequency convolution property,



1 ∞
∫ (𝐱1 (𝑡)𝐱 2 (𝑡))𝑒 −𝑗𝜔𝑡 𝑑𝑡 = ∫ 𝐗 (𝜗)𝐗 2 (𝜔 − 𝜗) 𝑑𝜗
−∞ 2𝜋 −∞ 1

This must hold for all values of 𝜔, it must also be true for 𝜔 = 0, and under this condition,
it reduces to

1 ∞
∫ 𝐱1 (𝑡)𝐱 2 (𝑡)𝑑𝑡 = ∫ 𝐗 (𝜗)𝐗 2 (−𝜗) 𝑑𝜗
−∞ 2𝜋 −∞ 1

For the special case 𝐱 2 (𝑡) = 𝐱1⋆ (𝑡), and the conjugate functions property 𝔽{𝐱⋆1 (𝑡)} = 𝐗⋆1 (−𝜔),
we obtain:

⋆ (𝑡))𝑑𝑡
1 ∞ ⋆
1 ∞
∫ (𝐱(𝑡)𝐱 = ∫ 𝐗(𝜗)𝐗 (−(−𝜗)) 𝑑𝜗 = ∫ 𝐗(𝜔)𝐗 ⋆ (𝜔) 𝑑𝜔
−∞ 2𝜋 −∞ 2𝜋 −∞

Since 𝐱(𝑡)𝐱 ⋆ (𝑡) = |𝐱(𝑡)|2 and 𝐗(𝜔)𝐗 ⋆ (𝜔) = |𝐗(𝜔)|2



1 ∞
∫ |𝐱(𝑡)|2 𝑑𝑡 = ∫ |𝐗(𝜔)|2 𝑑𝜔
−∞ 2𝜋 −∞
∞ ∞
1 +∞
∫ 𝐱1 (𝜆)𝐗 2 (𝜆)𝑑𝜆 = ∫ 𝐱1 (𝜆) ( ∫ 𝐱 (𝑡)𝑒 −𝑗𝜆𝑡 𝑑𝑡) 𝑑𝜆
−∞ −∞ 2𝜋 −∞ 2
∞ +∞
1 +∞
∫ 𝐱1 (𝜆)𝐗 2 (𝜆)𝑑𝜆 = ∫ ( ∫ 𝐱1 (𝜆)𝑒 −𝑗𝜆𝑡 𝑑𝜆) 𝐱 2 (𝑡)𝑑𝑡
−∞ −∞ 2𝜋 −∞
∞ +∞ +∞
∫ 𝐱1 (𝜆)𝐗 2 (𝜆)𝑑𝜆 = ∫ 𝐗1 (𝑡)𝐱 2 (𝑡)𝑑𝑡 = ∫ 𝐗1 (𝜆)𝐱 2 (𝜆)𝑑𝜆
−∞ −∞ −∞
⓮ The Frequency Response of Continuous-Time LTI Systems We have shown that a
continuous-time LTI system can be represented in the time domain by its impulse response
and in the frequency-domain by its frequency response (i.e. Transfer function).

In LTI systems we know that 𝐲(𝑡) = 𝐱(𝑡) ⋆ 𝐡(𝑡) means that 𝐘(𝜔) = 𝐗(𝜔)𝐇(𝜔) and the complex
exponential signal 𝑒 𝑗𝜔0 𝑡 is an eigenfunction of the LTI system with corresponding
eigenvalue 𝐇(𝜔0 ).

𝐱(𝑡) = 𝑒 𝑗𝜔0 𝑡  LTI System  𝐲(𝑡) = 𝐇(𝜔0 )𝑒 𝑗𝜔0 𝑡

If 𝐱(𝑡) is periodic signal with its Fourier series 𝐱(𝑡) = ∑∞


𝑘=−∞ 𝑐𝑘 𝑒
𝑗𝑘𝜔0 𝑡
then the corresponding

output 𝐲(𝑡) is also periodic with the Fourier series 𝐲(𝑡) = ∑𝑘=−∞ 𝑐𝑘 𝐇(𝑘𝜔0 )𝑒 𝑗𝑘𝜔0 𝑡 . If 𝐱(𝑡) is
not periodic, then from
1 ∞ 𝑗𝜔𝑡
1 ∞
𝐱(𝑡) = ∫ 𝐗(𝜔)𝑒 𝑑𝜔 ⟹ 𝐲(𝑡) = ∫ 𝐗(𝜔)𝐇(𝜔)𝑒 𝑗𝜔𝑡 𝑑𝜔
2𝜋 −∞ 2𝜋 −∞

Thus, the behavior of a continuous-time LTI system in the frequency domain is completely
characterized by its frequency response 𝐇(𝜔). Let 𝐗(𝜔) = |𝐗(𝜔)|𝑒 𝑗𝜽𝑋 (𝜔) , 𝐘(𝜔) = |𝐘(𝜔)|𝑒 𝑗𝜽𝑌 (𝜔)
with |𝐘(𝜔)| = |𝐗(𝜔)||𝐇(𝜔)| and 𝜽𝑌 (𝜔) = 𝜽𝑋 (𝜔) + 𝜽𝐻 (𝜔). Hence, the magnitude spectrum
|𝐗(𝜔)| of the input is multiplied by the magnitude response |𝐇(𝜔)| f the system to determine
the magnitude spectrum |𝐘(𝜔)| f the output, and the phase response 𝜽𝑋 (𝜔) is added to the
phase spectrum 𝜽𝐻 (𝜔) of the input to produce the phase spectrum 𝜽𝑌 (𝜔) of the output. The
magnitude response |𝐇(𝜔)| is sometimes referred to as the gain of the system.

For convenience, the Fourier transform properties and theorems are summarized in Table
Property Signal Fourier transform

𝐱(𝑡) 𝐗(𝜔)
𝐱1 (𝑡) 𝐗1 (𝜔)
𝐱 2 (𝑡) 𝐗 2 (𝜔)
Linearity 𝛼𝐱 2 (𝑡) + 𝛽𝐱 2 (𝑡) 𝛼𝐗1 (𝜔) + 𝛽𝐗 2 (𝜔)
Time shifting 𝐱(𝑡 − 𝑡0 ) 𝑒 −𝑗𝜔𝑡0 𝐗(𝜔)
Frequency shifting 𝑒 𝑗𝜔0 𝑡 𝐱(𝑡) 𝐗(𝜔 − 𝜔0 )
1
Time scaling 𝐱(𝑎𝑡) |𝑎|
𝐗 2 (𝜔/𝑎)
Time reversal 𝐱(−𝑡) 𝐗(−𝜔)
Duality 𝐗(𝑡) 2𝜋𝐱(−𝜔)
Time differentiation 𝑑𝐱(𝑡)/𝑑𝑡 𝑗𝜔𝐗(𝜔)
Frequency differentiation −𝑗𝑡𝐱(𝑡) 𝑑𝐗(𝜔)/𝑑𝜔
𝑡 1
Integration ∫−∞ 𝐱(𝑡)𝑑𝑡 𝐗(𝜔) + 𝜋𝐗(0)𝛿(𝜔)
𝑗𝜔
Convolution 𝐱1 (𝑡) ⋆ 𝐱1 (𝑡) 𝐗1 (𝜔)𝐗 2 (𝜔)
Multiplication 2𝜋𝐱1 (𝑡)𝐱1 (𝑡) 𝐗1 (𝜔) ⋆ 𝐗 2 (𝜔)

Parseual's theorem − − − − − − − − − − − − − − − − − − − − − − − − − − − − − − − − − −

1 ∞
⦁ ∫ 𝐱1 (𝑡)𝐱 2 (𝑡)𝑑𝑡 = ∫ 𝐗 (𝜗)𝐗 2 (−𝜗) 𝑑𝜗
−∞ 2𝜋 −∞ 1 ∞ ∞
|| ⦁ ∫ 𝐱1 (𝜆)𝐗 2 (𝜆)𝑑𝜆 = ∫ 𝐗1 (𝜆)𝐱 2 (𝜆) 𝑑𝜆

1 ∞ −∞ −∞
⦁ ∫ |𝐱(𝑡)|2 𝑑𝑡 = ∫ |𝐗(𝜔)|2 𝑑𝜔
−∞ 2𝜋 −∞
⓯ Area Under 𝐱(𝑡) and Area Under 𝐗(𝜔)
+∞
𝐱(0) = ∫ 𝐗(𝜔) 𝑑𝜔 𝐀𝐫𝐞𝐚 𝐔𝐧𝐝𝐞𝐫 𝐗(𝜔)
−∞
+∞
𝐗(0) = ∫ 𝐱(𝑡) 𝑑𝑡 𝐀𝐫𝐞𝐚 𝐔𝐧𝐝𝐞𝐫 𝐱(𝑡)
−∞

Common Fourier Transforms Pairs


𝐱(𝑡) 𝐗(𝜔)
𝛿(𝑡) 1
1 2𝜋𝛿(𝜔)
𝛿(𝑡 − 𝑡0 ) 𝑒 −𝑗𝜔𝑡0
𝑒 𝑗𝜔0 𝑡 2𝜋𝛿(𝜔 − 𝜔0 )
cos(𝑡) 𝜋[𝛿(𝜔 + 𝜔0 ) + 𝛿(𝜔 − 𝜔0 )]
sin(𝑡) 𝑗𝜋[𝛿(𝜔 + 𝜔0 ) − 𝛿(𝜔 − 𝜔0 )]
1
𝑢(𝑡) 𝜋𝛿(𝜔) + 𝑗𝜔
1
𝑢(−𝑡) 𝜋𝛿(𝜔) − 𝑗𝜔
1
𝑒 −𝑎𝑡 𝑢(𝑡), 𝑎 > 0
𝑗𝜔+𝑎
1
𝑡𝑒 −𝑎𝑡 𝑢(𝑡), 𝑎 > 0
(𝑗𝜔+𝑎) 2
2𝑎
𝑒 −𝑎|𝑡| , 𝑎 > 0
𝑎2 +𝜔2
1 𝜋 −𝑎|𝜔|
𝑒
𝑎2 +𝑡 2 𝑎
2
−𝑎𝑡2 𝜋 −𝜔
𝑒 , 𝑎>0 √𝑎 𝑒 4𝑎
1 𝑎 > |𝑡| sin(𝑎𝜔)
∏(𝑡) = { 2𝑎
0 𝑎 < |𝑡| 𝑎𝜔
sin(𝑎𝑡) 1 𝑎 > |𝜔|
∏(𝜔) = {
𝜋𝑡 0 𝑎 < |𝜔|
2
sgn(𝑡)
𝑗𝜔
1 𝜋
sgn(𝜔)
𝑡 𝑗
2𝜋
∑+∞
𝑘=−∞ 𝛿(𝑡 − 𝑘𝑇) 𝜔0 ∑+∞
𝑘=−∞ 𝛿(𝜔 − 𝑘𝜔0 ) , 𝜔0 = 𝑇

Solved Problems:
𝑡
Exercise 1: Compute the Fourier Transforms of 𝛾(𝑡) = ∫−∞ 𝛿(𝜃)𝑑𝜃

Solution: we use the integral property and 𝐗(𝜔) = 𝔽{𝛿(𝑡)} = 1 we obtain

𝑡
𝐗(𝜔) 1
𝐘(𝜔) = 𝔽{𝛾(𝑡)} = 𝔽 {∫ 𝛿(𝜃)𝑑𝜃} = 𝜋𝐗(0)𝛿(𝜔) + = 𝜋𝛿(𝜔) +
−∞ 𝑗𝜔 𝑗𝜔
1 1
Exercise 2: Compute the Fourier Transform of the signal ∏(𝑡) = 𝛾 (𝑡 + 2) − 𝛾 (𝑡 − 2) where
𝛾(𝑡) is the step function

Solution: Let we define 𝐗(𝜔) = 𝔽{𝛾(𝑡)} so we have


1 1 𝑗𝜔 𝑗𝜔

𝐘(𝜔) = 𝔽{∏(𝑡)} = 𝔽 {𝛾 (𝑡 + ) − 𝛾 (𝑡 − )} = (𝑒 2 − 𝑒 2 ) 𝐗(𝜔)
2 2
𝑗𝜔 1 𝑗𝜔 1

= 𝑒 2 (𝜋𝛿(𝜔) + ) − 𝑒 2 (𝜋𝛿(𝜔) + )
𝑗𝜔 𝑗𝜔
𝑗𝜔 𝑗𝜔

𝑒 2 𝑒 2
= (𝜋𝛿(𝜔) + ) − (𝜋𝛿(𝜔) + )
𝑗𝜔 𝑗𝜔
𝑗𝜔 𝑗𝜔 𝜔
− sin ( 2 )
𝑒 2 𝑒 2
= − =
𝑗𝜔 𝑗𝜔 𝜔/2

Exercise 3: Compute the Fourier Transform of the signal Λ(𝑡) = ∏(𝑡) ⋆ ∏(𝑡) where: ∏(𝑡) is
the unit pulse gate function and Λ(𝑡) is the triangle function

Solution: Let we define 𝐗(𝜔) = 𝔽{∏(𝑡)} and 𝐘(𝜔) = 𝔽{= ∏(𝑡) ⋆ ∏(𝑡)} so by using the
convolution property we have
𝜔
sin2 ( 2 )
𝐘(𝜔) = 𝐗(𝜔)𝐗(𝜔) = 4
𝜔2
Exercise 4: Compute the Fourier Transform of the unit Sawtooth signal
𝑡 𝑡
𝑡 0<𝑡<1 1
𝐱(𝑡) = { } = ∫ ∏ (𝜃 − ) 𝑑𝜃 − ∫ 𝛿(𝜃 − 1)𝑑𝜃
0 1<𝑡 −∞ 2 −∞

where: ∏(𝑡) is the pulse gate function and Λ(𝑡) is the triangle function
2 𝑗𝜔
Solution: we know that 𝐗1 (𝜔) = 𝔽 {∏ (𝑡 − 2)} = 𝜔 𝑒 − 2 sin(𝜔/2) & 𝐗 2 (𝜔) = 𝔽{𝛿(𝑡 − 1)} = 𝑒 −𝑗𝜔 ,
1

and by using the integration property we get

𝐗1 (𝜔) 𝐗 2 (𝜔)
𝐗(𝜔) = ( + 𝜋𝐗1 (0)𝛿(𝜔)) − ( + 𝜋𝐗 2 (0)𝛿(𝜔))
𝑗𝜔 𝑗𝜔
𝜔
𝑗𝜔 sin ( ) −𝑗𝜔

=𝑒 2 2 + 𝜋𝛿(𝜔) − 𝑒 − 𝜋𝛿(𝜔)
𝜔2 𝑗𝜔
(𝑗 2 )
𝑗𝜔 𝜔

𝑒 2 sin ( 2 ) −
𝑗𝜔 𝑒 −𝑗𝜔 − 1 + 𝑗𝜔𝑒 −𝑗𝜔
= ( −𝑒 2 )=
𝑗𝜔 𝜔/2 𝜔2

Exercise 5: Compute the Fourier Transform of the unit Sawtooth signal using the following

𝑡 0<𝑡<1 1
𝐱(𝑡) = { } = 𝑡. ∏ (𝑡 − )
0 1<𝑡 2

where: ∏(𝑡) is the unit pulse gate function

Solution: The sawtooth can also be created by a delayed pulse multiplied by time (and by
using the time multiplication property), we get
𝑑 1 𝑑 −
𝑗𝜔 sin(𝜔/2) 𝑒 −𝑗𝜔 − 1 + 𝑗𝜔𝑒 −𝑗𝜔
𝐗(𝜔) = 𝑗 (𝔽 {∏ (𝑡 − )}) = 𝑗 (𝑒 2 . )=
𝑑𝜔 2 𝑑𝜔 𝜔/2 𝜔2

Exercise 6: Compute the Fourier Transform of the unit Sawtooth signal using the following

𝑡 0<𝑡<1 1
𝐱(𝑡) = { } = Λ(𝑡 − 1). ∏ (𝑡 − )
0 1<𝑡 2

where: ∏(𝑡) is the unit pulse gate function and Λ(𝑡) is the unit triangle function

Solution: There are a number of other methods. A class of methods that might seem like
1
obvious choices involve multiplying equations in time, i.e., 𝐱(𝑡) = Λ(𝑡 − 1). ∏ (𝑡 − 2)
𝜔
𝔽 sin2 ( 2 )
𝐱1 (𝑡) = Λ(𝑡 − 1) ↔ 𝐗1 (𝜔) = 4𝑒 −𝑗𝜔
𝜔2
𝜔
1 𝔽 𝜔 sin ( 2 )
−𝑗 2
𝐱 2 (𝑡) = ∏ (𝑡 − ) ↔ 𝐗 2 (𝜔) = 𝑒 𝜔
2
2
1 𝔽 1
𝐱(𝑡) = Λ(𝑡 − 1). ∏ (𝑡 − ) ↔ 𝐗(𝜔) = 𝐗 (𝜔) ⋆ 𝐗 2 (𝜔)
2 2𝜋 1
The resulting convolution (on the right of the arrow) is quite laborious, so this is not a good
method. In general multiplication of two functions in the time domain is not a useful
technique because of the resulting convolution in the frequency domain.

Exercise 7: Compute the Fourier Transform of the Windowed Sine Wave defined by

sin(𝜔0 𝑡) |𝑡| < 𝑇𝑝 ⁄2 2𝜋


𝐱(𝑡) = { } 𝑇𝑝 = 6𝑇0 = 6 ( )
0 otherwise 𝜔0

Solution: The windowed sine function is just the product of a sine wave and a rectangular
pulse 𝐱(𝑡) = sin(𝜔0 𝑡) . ∏(𝑡/𝑇𝑝 )

So the Fourier Transform is the convolution of the transforms of the sine and rectangular
pulse in the frequency domain (divided by 2𝜋). This is depicted below, followed by the
required math.

𝜔
1 sin (𝑇𝑝 )
𝐗(𝜔) = (𝑗𝜋 [𝛿(𝜔 + 𝜔0 ) − 𝛿(𝜔 − 𝜔0 )]) ⋆ (𝑇𝑝 2 )
2𝜋 𝑇𝑝 𝜔/2

sin(3𝜔)
= 𝑗([𝛿(𝜔 + 2𝜋) − 𝛿(𝜔 − 2𝜋)]) ⋆ ( )
𝜔
sin(3(𝜔 + 2𝜋)) sin(3(𝜔 − 2𝜋))
= 𝑗{ − }
(𝜔 + 2𝜋) (𝜔 − 2𝜋)
Exercise 8: Find the Fourier Transform of the Damped Cosine Wave defined by

𝐱(𝑡) = 𝑒 −𝑘|𝑡| cos(𝛺𝑡)

where κ is the damping constant, and Ω is the wave's angular frequency.

Solution: To calculate the Fourier transform it is helpful to convert the cosine into complex
exponentials.
1 1
For 𝑡 > 0 we write 𝐱(𝑡) = 𝑒 −𝑘𝑡 cos(𝛺𝑡) = 𝑒 −𝑘𝑡 (𝑒 𝑗𝛺𝑡 + 𝑒 −𝑗𝛺𝑡 ) = (𝑒 𝑗(𝛺+𝑗𝑘)𝑡 + 𝑒 −𝑗(𝛺−𝑗𝑘)𝑡 )
2 2
1 1
while for 𝑡 < 0, 𝐱(𝑡) = 𝑒 𝑘𝑡 cos(𝛺𝑡) = 𝑒 𝑘𝑡 (𝑒 𝑗𝛺𝑡 + 𝑒 −𝑗𝛺𝑡 ) = (𝑒 𝑗(𝛺−𝑗𝑘)𝑡 + 𝑒 −𝑗(𝛺+𝑗𝑘)𝑡 )
2 2

Then the Fourier transform is


1 0 1 +∞
𝐗(𝜔) = 𝔽{𝐱(𝑡)} = ∫ (𝑒 𝑗(𝛺−𝑗𝑘)𝑡 + 𝑒 −𝑗(𝛺+𝑗𝑘)𝑡 )𝑒 −𝑗𝜔𝑡 𝑑𝑡 + ∫ (𝑒 𝑗(𝛺+𝑗𝑘)𝑡 + 𝑒 −𝑗(𝛺−𝑗𝑘)𝑡 )𝑒 −𝑗𝜔𝑡 𝑑𝑡
2 −∞ 2 0
1 1 1 1 1
= { − − + }
2𝑗 𝛺 − 𝜔 − 𝑗𝑘 𝛺 − 𝜔 + 𝑗𝑘 𝛺 + 𝜔 + 𝑗𝑘 𝛺 + 𝜔 − 𝑗𝑘
𝑘 𝑘
={ + }
(𝜔 + 𝛺)2 + 𝑘 2 (𝜔 − 𝛺)2 + 𝑘 2

The function and its Fourier transform are displayed in Fig

Exercise 9: Find the Fourier Transform of the Truncated Cosine Function described by
cos(𝑡) |𝑡| < 𝜋⁄2
𝐱(𝑡) = { }
0 otherwise
Solution: To calculate the Fourier transform it is helpful to convert the cosine into complex
exponentials.
𝜋⁄2
1 𝜋⁄2
𝐗(𝜔) = 𝔽{𝐱(𝑡)} = ∫ cos(𝑡) 𝑒 −𝑗𝜔𝑡 𝑑𝑡 = ∫ (𝑒 𝑗(1−𝜔)𝑡 + 𝑒 −𝑗(1+𝜔)𝑡 )𝑑𝑡
−𝜋⁄2 2 −𝜋⁄2
1 𝑗 𝑗 𝜋 𝜋
= { − } (𝑒 𝑗𝜔 2 + 𝑒 −𝑗𝜔 2 )
2𝑗 𝜔 + 1 𝜔 − 1
𝜋
cos (𝜔 2 )
=2
(1 − 𝜔 2 )
Exercise 10: Find the Fourier Transform of the function defined by

𝐴
𝑡 |𝑡| < 𝑇
𝐱(𝑡) = { 𝑇 }
0 |𝑡| > 𝑇

𝐴 𝑡 𝐴
𝐱(𝑡) = 𝑡. ∏ ( ) = 𝑡(𝑢(𝑡 + 𝑇) − 𝑢(𝑡 − 𝑇))
𝑇 2𝑇 𝑇

Solution: Let we calculate the Fourier transform using the second expression
𝑇
𝐴 −𝑗𝜔𝑡
𝐗(𝜔) = 𝔽{𝐱(𝑡)} = ∫ 𝑡𝑒 𝑑𝑡
−𝑇 𝑇
Using the integration by parts
𝑎𝑥
𝑒 𝑎𝑥
∫ 𝑥𝑒 𝑑𝑥 = 2 (𝑎𝑥 − 1)
𝑎
We obtain
𝑇 +𝑇
𝐴 −𝑗𝜔𝑡 𝐴 𝑒 −𝑗𝜔𝑡 𝐴
𝐗(𝜔) = 𝔽{𝐱(𝑡)} = ∫ 𝑡𝑒 𝑑𝑡 = − (𝑗𝜔𝑡 + 1)| = {𝑒 −𝑗𝜔𝑡 (𝑗𝜔𝑇 + 1) − 𝑒 𝑗𝜔𝑡 (1 − 𝑗𝜔𝑇)}
−𝑇 𝑇 𝑇 (𝑗𝜔) 2 𝑇𝜔 2
−𝑇

After simplifications we get


𝑗2𝐴
𝐗(𝜔) = [(𝜔𝑇). cos(𝜔𝑇) − sin(𝜔𝑇)]
𝑇𝜔 2
Remark: we may use the following method (use differentiation and time scale properties)

𝐴 𝑡 𝑑 sin(𝜔𝑇) 𝑗2𝐴
𝐱(𝑡) = 𝑡. ∏ ( ) ⟺ 𝐗(𝜔) = 𝑗 (2𝐴 )= [(𝜔𝑇). cos(𝜔𝑇) − sin(𝜔𝑇)]
𝑇 2𝑇 𝑑𝜔 𝜔𝑇 𝑇𝜔 2

Exercise 11: Find the Fourier Transform of the following functions defined by
1
❶ 𝐱(𝑡) = 𝑢(−𝑡) ❺ 𝐱(𝑡) =
𝑡2 + 𝑎2
2
❷ 𝐱(𝑡) = 𝑒 −𝑎𝑡 𝑢(𝑡) 𝑎>0 ❻ 𝐱(𝑡) = 𝑒 −𝑎𝑡 𝑎>0
1
❸ 𝐱(𝑡) = 𝑡𝑒 −𝑎𝑡 𝑢(𝑡) 𝑎>0 ❼ 𝐱(𝑡) =
𝑡
1
❹ 𝐱(𝑡) = 𝑒 −𝑎|𝑡| 𝑎>0 ❽ 𝐱(𝑡) = 2
𝑡
Solution:
𝔽 𝔽
❶ We know that 𝐱(𝑡) ↔ 𝐗(𝜔) ⟺ 𝐱(−𝑡) ↔ 𝐗(−𝜔) so we have

1 1 1
𝔽{𝑢(𝑡)} = 𝜋𝛿(𝜔) + ⟹ 𝔽{𝑢(−𝑡)} = 𝜋𝛿(−𝜔) − ⟹ 𝐗(𝜔) = 𝜋𝛿(𝜔) −
𝑗𝜔 𝑗𝜔 𝑗𝜔

+∞ +∞ +∞
1
𝔽{𝐱(𝑡) = 𝑒 −𝑎𝑡
𝑢(𝑡)} = ∫ 𝐱(𝑡)𝑒 −𝑗𝜔𝑡
𝑑𝑡 = ∫ 𝑒 −𝑎𝑡
𝑢(𝑡)𝑒 −𝑗𝜔𝑡
𝑑𝑡 = ∫ 𝑒−(𝑎+𝑗𝜔)𝑡 𝑑𝑡 =
−∞ −∞ 0 𝑎 + 𝑗𝜔
𝔽 𝑑 1
❸ We know that 𝐱(𝑡) = 𝑡𝐱1(𝑡) ↔ 𝐗(𝜔) = 𝑗 𝐗 (𝜔) so we let 𝐱1(𝑡) = 𝑒−𝑎𝑡𝑢(𝑡) ⟹ 𝐗1(𝜔) =
𝑑𝜔 1 𝑎 + 𝑗𝜔

𝑑 𝑑 1 −𝑗 −𝑗 1
𝐗1 (𝜔) = ( )= ⟺ 𝐗(𝜔) = 𝔽 {𝑡𝑒 −𝑎𝑡
𝑢(𝑡)} = 𝑗 =
𝑑𝜔 𝑑𝜔 𝑎 + 𝑗𝜔 (𝑎 + 𝑗𝜔)2 (𝑎 + 𝑗𝜔)2 (𝑎 + 𝑗𝜔)2

❹ For the fourth case we use directly the definition of Fourier Transform
+∞ 0 +∞
𝔽{𝑒 −𝑎|𝑡|
}=∫ 𝑒 −𝑎|𝑡| −𝑗𝜔𝑡
𝑒 𝑑𝑡 = ∫ 𝑒 𝑒 𝑎𝑡 −𝑗𝜔𝑡
𝑑𝑡 + ∫ 𝑒 −𝑎𝑡 𝑒 −𝑗𝜔𝑡 𝑑𝑡
−∞ −∞ 0
0 +∞
1 1 2𝑎
= ∫ 𝑒 −(𝑗𝜔−𝑎)𝑡 𝑑𝑡 + ∫ −(𝑗𝜔+𝑎)𝑡
𝑒 𝑑𝑡 = + = 2
−∞ 0 𝑎 − 𝑗𝜔 𝑎 + 𝑗𝜔 𝑎 + 𝜔 2

❺ By using the duality principal and the pervious result we obtain

𝔽 2𝑎 2𝑎 𝔽
𝐱1 (𝑡) = 𝑒 −𝑎|𝑡| ↔ 𝐗1 (𝜔) = ⟺ 𝐗1 (𝑡) = ↔ 2𝜋𝐱1 (𝜔) = 2𝜋𝑒 −𝑎|𝜔|
𝑎2 + 𝜔 2 𝑎2 + 𝑡 2
1 1 𝔽 𝜋 𝜋
𝐗(𝑡) = 𝐗1 (𝑡) = 2 ↔ 𝐱(𝜔) = 𝐱1 (𝜔) = 𝑒 −𝑎|𝜔|
2𝑎 𝑎 + 𝑡2 𝑎 𝑎
Now we have the permission to write

2𝑎 1 𝜋
𝔽{𝑒−𝑎|𝑡| } = and 𝔽{ } = 𝑒−𝑎|𝜔|
𝑎2 + 𝜔 2 𝑎2 + 𝑡 2 𝑎
2
❻ It is very difficult to evaluate the Fourier transform of 𝑒 −𝑎𝑡 so we use a tactical strategy
to avoid this obstacle
+∞ +∞
2 2 𝑑 2
𝐗(𝜔) = 𝔽{𝑒 −𝑎𝑡 } = ∫ 𝑒 −𝑎𝑡 𝑒 −𝑗𝜔𝑡 𝑑𝑡 ⟹ 𝐗(𝜔) = −𝑗 ∫ 𝑡𝑒 −𝑎𝑡 𝑒 −𝑗𝜔𝑡 𝑑𝑡
−∞ 𝑑𝜔 −∞
𝑑 𝑗 +∞ 2
⟹ 𝐗(𝜔) = ∫ −2𝑎𝑡𝑒 −𝑎𝑡 𝑒 −𝑗𝜔𝑡 𝑑𝑡
𝑑𝜔 2𝑎 −∞
We use integration by part
+∞
𝑑 𝑗 −𝑎𝑡 2 −𝑗𝜔𝑡
+∞ 2
𝐗(𝜔) = {[𝑒 𝑒 ]−∞ + 𝑗𝜔 ∫ 𝑒 −𝑎𝑡 𝑒 −𝑗𝜔𝑡 𝑑𝑡}
𝑑𝜔 2𝑎 −∞ 𝑑 −𝜔
+∞ ⟹ 𝐗(𝜔) = 𝐗(𝜔)
−𝜔 2 −𝜔 𝑑𝜔 2𝑎
= ∫ 𝑒 −𝑎𝑡 𝑒 −𝑗𝜔𝑡 𝑑𝑡 = 𝐗(𝜔)
2𝑎 −∞ 2𝑎

To get 𝐗(𝜔) we must solve this last linear differential equation

𝑑 −𝜔 𝑑𝐗(𝜔) −𝜔 𝑑𝐗(𝜔) −1 −𝜔2


𝐗(𝜔) = 𝐗(𝜔) ⟺ = 𝑑𝜔 ⟺ ∫ = ∫ 𝜔𝑑𝜔 ⟺ ln(𝐗(𝜔)) = +𝑐
𝑑𝜔 2𝑎 𝐗(𝜔) 2𝑎 𝐗(𝜔) 2𝑎 4𝑎
Hence the corresponding Fourier transform is
−𝜔2 −𝜔 2
( +𝑐)
𝐗(𝜔) = 𝑒 4𝑎 = 𝐴𝑒 4𝑎

+∞ 2 𝜋
To determine the constant 𝐴 we use initial condition 𝐴 = 𝐗(0) = ∫−∞ 𝑒 −𝑎𝑡 𝑑𝑡 = √𝑎

𝜋 −𝜔2
−𝑎𝑡 2
𝐗(𝜔) = 𝔽{𝑒 } = √ 𝑒 4𝑎
𝑎
❼ By using the duality principal of 𝑗𝔽{sgn(𝑡)} = 2/𝜔 we obtain 𝔽{2/𝑡} = −𝑗2𝜋sgn(𝜔)

1 𝜋
𝔽 { } = sgn(𝜔)
𝑡 𝑗

❽ By using the multiplication property we get

1 1 1 𝔽 1 𝜋 𝜋 𝜋
𝐱(𝑡) = = . ↔ 𝐗(𝜔) = ( sgn(𝜔)) ⋆ ( sgn(𝜔)) = − (sgn(𝜔)) ⋆ (sgn(𝜔))
𝑡2 𝑡 𝑡 2𝜋 𝑗 𝑗 2
Notice that this convolution doesn't converge because 𝐱(𝑡) is not absolutely integrable.

Exercise 12: Find the Fourier transform of the following graphical representations

Solution:

❶ First we find the Fourier transform of

−𝐴 −𝑇 <𝑡 ≤0
𝐟(𝑡) = { 𝐴 0<𝑡≤𝑇
0 |𝑡| > 𝑇
0 𝑇 0 𝑇
𝐴𝑒 −𝑗𝜔𝑡 𝐴𝑒 −𝑗𝜔𝑡 4𝐴 2 𝜔𝑇
𝐅(𝜔) = 𝔽{𝐟(𝑡)} = ∫ −𝐴 𝑒 −𝑗𝜔𝑡
𝑑𝑡 + ∫ 𝐴 𝑒 −𝑗𝜔𝑡
𝑑𝑡 = | − | = sin ( )
−𝑇 0 𝑗𝜔 −𝑇 𝑗𝜔 0 𝑗𝜔 2
❷ Now we will find the Fourier transform of the triangular function 𝚲(𝑡/𝑇), notice that the
previous function 𝐟(𝑡) is one class of derivatives of triangular function.
𝑑
𝐟(𝑡) = −𝐴𝑇. 𝚲(𝑡/𝑇)
𝑑𝑡
Assume that 𝐗(𝜔) is Fourier transform of 𝚲(𝑡/𝑇) then

𝑗 4𝐴 2 𝜔𝑇 𝜔𝑇 𝜔𝑇 2
−𝐴𝑇. 𝑗𝜔𝐗(𝜔) = 𝐅(𝜔) ⟹ 𝐗(𝜔) = ( sin ( )) = 𝑇 (sin ( ) / )
𝐴𝑇𝜔 𝑗𝜔 2 2 2

❸ This function is solved before (see Exercise 10)

𝑗2𝐵
𝐗(𝜔) = [(𝜔𝑇). cos(𝜔𝑇) − sin(𝜔𝑇)]
𝑇𝜔 2
❹ The student is asked to check that

𝐴 𝑒 −𝑗𝜔𝑇1 − 1 𝐴𝑒 −𝑗𝜔𝑇2
𝐗(𝜔) = [ ] −
𝑇1 𝜔2 𝑗𝜔

Exercise 13: Find the Fourier transform of the following graphical representations

Solution: ❶ The Fourier transform of the pulse gate is


1
2
𝐗(𝜔) = ∫ 𝑒 −𝑗𝜔𝑡 𝑑𝑡 = sin(𝜔)
−1 𝜔
❷ The Fourier transform of the second signal is a sum of two pulses with different periods

sin(𝜔) sin(𝜔) 2
𝐗(𝜔) = 𝑷2 (𝜔) + 𝑷4 (𝜔) = 2 +4 = (sin(𝜔) + sin(2𝜔))
𝜔 2𝜔 𝜔
Exercise 14: ❶ using Fourier transform to find the transfer function 𝐇(𝜔) of an LTI system
described by it differential equation

𝑦̈ (𝑡) + 𝑎1 𝑦̇ (𝑡) + 𝑎2 𝑦(𝑡) = 𝑏0 𝑥̈ (𝑡) + 𝑏1 𝑥̇ (𝑡) + 𝑏2 𝑥(𝑡)

❷ If we let 𝑎1 = 4, 𝑎2 = 3, 𝑏0 = 0, 𝑏1 = 1, 𝑏2 = 2 find the impose response of the system


❸ say if the system stable or not, and of which type this filter is?

Solution: ❶ when we apply the Fourier transform we get

𝐘(𝜔) −𝑏0 𝜔2 + 𝑏1 𝑗𝜔 + 𝑏2
(−𝜔2 2
+ 𝑎1 𝑗𝜔 + 𝑎2 )𝐘(𝜔) = (−𝑏0 𝜔 + 𝑏1 𝑗𝜔 + 𝑏2 )𝐗(𝜔) ⟹ 𝐇(𝜔) = =
𝐗(𝜔) −𝜔 2 + 𝑎1 𝑗𝜔 + 𝑎2
𝐘(𝜔) 𝑗𝜔 + 2
❷ 𝐇(𝜔) = =
𝐗(𝜔) −𝜔 + 4𝑗𝜔 + 3
2

we heve: −𝜔2 + 4𝑗𝜔 + 3 = (𝑗𝜔 + 1)(𝑗𝜔 + 3) so take a partial fraction expansion of we obtain

𝑗𝜔 + 2 1/2 1/2 1 1
𝐇(𝜔) = = + ⟹ ℎ(𝑡) = ( 𝑒 −𝑡 + 𝑒 −3𝑡 ) 𝑢(𝑡)
(𝑗𝜔 + 1)(𝑗𝜔 + 3) 𝑗𝜔 + 1 𝑗𝜔 + 3 2 2

❸ The system is stable because ℎ(𝑡) is absolutely integrable

∞ ∞
1 1
∫ |ℎ(𝑡)| 𝑑𝑡 = ∫ | 𝑒 −𝑡 + 𝑒 −3𝑡 | 𝑑𝑡 = 1 < ∞
0 0 2 2

Exercise 15: An ideal bandpass filter (BPF) is specified by its transfer function
𝐇(𝜔) defined by

1 1
𝐇(𝜔) = ∏(𝜔 − 𝜔0 ) + ∏(𝜔 + 𝜔0 )
2 2
With
1 𝜔1 < 𝜔 < 𝜔2
⦁ ∏(𝜔 − 𝜔0 ) = {
0 otherwise

1 − 𝜔1 < 𝜔 < −𝜔2


⦁ ∏(𝜔 + 𝜔0 ) = {
0 otherwise
𝜔1 + 𝜔2
⦁ 𝜔0 = , 𝜔0 − 𝜔1 = 𝜔2 − 𝜔0 = 𝑎
2

Find and sketch ℎ(𝑡)


𝔽
Solution: we have seen that sin(𝑎𝑡) /𝜋𝑡 ↔ ∏(𝜔) and by using the shift property

sin(𝑎𝑡) sin(𝑎𝑡)
∏(𝜔 − 𝜔0 ) = 𝔽 (𝑒 𝑗𝜔0 𝑡 ) and ∏(𝜔 + 𝜔0 ) = 𝔽 (𝑒 −𝑗𝜔0 𝑡 )
𝜋𝑡 𝜋𝑡

And from the linearity of the Fourier transform we get

1 1 1 sin(𝑎𝑡) sin(𝑎𝑡) sin(𝑎𝑡)


𝐇(𝜔) = ∏(𝜔 − 𝜔0 ) + ∏(𝜔 + 𝜔0 ) = 𝔽 (𝑒 −𝑗𝜔0 𝑡 + 𝑒 𝑗𝜔0 𝑡 ) ⟹ ℎ(𝑡) = cos(𝜔0 𝑡)
2 2 2 𝜋𝑡 𝜋𝑡 𝜋𝑡
clear all, clc, t=-10:0.01:10;
a=1; w0=10; k=0;
y1=(sin(a*t).*cos(w0*t)./(pi*t));
y2=sin(a*t)./(pi*t); % the envelope
subplot(221)
plot(t,y1,'r','linewidth',3)
grid on, hold on
plot(t,y2,'--b','linewidth',1.5)
hold on
plot(t,-y2,'--b','linewidth',1.5)
%---------Fourier transform---------%
for w=-20:0.1:20
k=k+1; X(k)=trapz(t,y1.*exp(-j*w*t));
end
w=-20:.1:20;
subplot(222)
plot(w,X,'r','linewidth',3); grid on

Exercise 16: An LTI system described by it deferential equation (DE) as given

❶ If the DE is 𝑦̇ (𝑡) + 2𝑦(𝑡) = 𝑥(𝑡) + 𝑥̇ (𝑡) using the Fourier transform to find the IR ℎ(𝑡)
❷ If the DE is 𝑦̇ (𝑡) + 2𝑦(𝑡) = 𝑥(𝑡) find the output (response) of the system when

⦁ 𝑥(𝑡) = 𝑒 −𝑡 𝑢(𝑡)
⦁ 𝑥(𝑡) = 𝑢(𝑡)
Solution: ❶ we have
𝐘(𝜔) 𝑗𝜔 + 1
𝑦̇ (𝑡) + 2𝑦(𝑡) = 𝑥(𝑡) + 𝑥̇ (𝑡) ⟺ (𝑗𝜔 + 2)𝐘(𝜔) = (𝑗𝜔 + 1)𝐗(𝜔) ⟹ 𝐇(𝜔) = =
𝐗(𝜔) 𝑗𝜔 + 2
1
𝐇(𝜔) = 1 − ⟹ ℎ(𝑡) = 𝛿(𝑡) − 𝑒 −2𝑡 𝑢(𝑡)
𝑗𝜔 + 2
❷ we have
𝐘(𝜔) 1 𝐗(𝜔)
𝐇(𝜔) = = ⟹ 𝐘(𝜔) =
𝐗(𝜔) 𝑗𝜔 + 2 𝑗𝜔 + 2

1 1 1
⦁ 𝑥(𝑡) = 𝑒 −𝑡 𝑢(𝑡) ⟹ 𝐘(𝜔) = = − ⟹ 𝑦(𝑡) = (𝑒 −𝑡 − 𝑒 −2𝑡 )𝑢(𝑡)
(𝑗𝜔 + 1)(𝑗𝜔 + 2) (𝑗𝜔 + 1) (𝑗𝜔 + 2)
1 1 𝜋𝛿(𝜔) 1
⦁ 𝑥(𝑡) = 𝑢(𝑡) ⟹ 𝐘(𝜔) = (𝜋𝛿(𝜔) + ) = +
(𝑗𝜔 + 2) 𝑗𝜔 (𝑗𝜔 + 2) 𝑗𝜔(𝑗𝜔 + 2)
1
𝜋 1 1 1 1 1 2
⟹ 𝐘(𝜔) = 𝛿(𝜔) + ( − ) = (𝜋𝛿(𝜔) + ) − ⟹ 𝑦(𝑡) = (1 − 𝑒 −2𝑡 )𝑢(𝑡)
2 2 𝑗𝜔 (𝑗𝜔 + 2) 2 𝑗𝜔 (𝑗𝜔 + 2)

Exercise 17: An LTI system described by it transfer function given by


𝜋
−𝑗
𝐇(𝜔) = {𝑒 𝜋 0<𝜔
2

𝑒𝑗2 𝜔<0
❶ Determine the impulse response ℎ(𝑡) for this system

❷ Find the general expression of the output 𝑦(𝑡)


❸ Find the exact formula for 𝑦(𝑡) if the input is 𝑥(𝑡) = cos(𝜔0 𝑡) with 𝜔0 > 0
Solution: ❶ notice that the transfer function is a sign function

−𝑗 0<𝜔
𝐇(𝜔) = { = −𝑗. sgn(𝜔)
𝑗 𝜔<0

𝔽 2 1 𝔽
sgn(𝑡) ↔ by duality we obtain ℎ(𝑡) = ↔ 𝐇(𝜔) = −𝑗. sgn(𝜔)
𝑗𝜔 𝜋𝑡

❷ The general expression of 𝑦(𝑡) is

1 1 +∞ 𝑥(𝜏)
𝑦(𝑡) = 𝑥(𝑡) ⋆ ℎ(𝑡) = 𝑥(𝑡) ⋆ = ∫ ( ) 𝑑𝜏
𝜋𝑡 𝜋 −∞ 𝑡 − 𝜏

This impulse response ℎ(𝑡) is a specific linear operator called the Hilbert transform which
takes a function, 𝑥(𝑡) of a real variable and produces another function 𝑦(𝑡) of a real
variable. The Hilbert transform was first introduced by David Hilbert in this setting, to solve
a special case of the Riemann–Hilbert problem for analytic functions.

❸ The exact expression of 𝑦(𝑡) when 𝑥(𝑡) = cos(𝜔0 𝑡)

1 +∞ cos(𝜔0 𝜏)
𝑦(𝑡) = ∫ ( ) 𝑑𝜏
𝜋 −∞ 𝑡−𝜏

It is so difficult to evaluate such expression, so it is better to go frequency domain

𝐘(𝜔) = 𝐗(𝜔)𝐇(𝜔) = −𝑗𝜋[𝛿(𝜔 + 𝜔0 ) + 𝛿(𝜔 − 𝜔0 )]sgn(𝜔)


= −𝑗𝜋[𝛿(𝜔 + 𝜔0 )sgn(𝜔) + 𝛿(𝜔 − 𝜔0 )sgn(𝜔)]
= −𝑗𝜋[𝛿(𝜔 + 𝜔0 )sgn(−𝜔0 ) + 𝛿(𝜔 − 𝜔0 )sgn(𝜔0 )]

Since sgn(−𝜔0 ) = −1, sgn(𝜔0 ) = 1 therefore

1 +∞ cos(𝜔0 𝜏)
𝐘(𝜔) = 𝐗(𝜔)𝐇(𝜔) = 𝑗𝜋[𝛿(𝜔 + 𝜔0 ) − 𝛿(𝜔 − 𝜔0 )] ⟺ 𝑦(𝑡) = ∫ ( ) 𝑑𝜏 = sin(𝜔0 𝑡)
𝜋 −∞ 𝑡−𝜏

Exercise 18: Consider a causal continuous LTI system described by it transfer function

𝐇(𝜔) = 𝐀(𝜔) + 𝑗𝐁(𝜔)

Show that ℎ(𝑡) can be obtained from 𝐀(𝜔) and 𝐁(𝜔)

Solution: observe that we can decompose the impulse response into even odd parts

ℎ(𝑡) = ℎ𝑒 (𝑡) + ℎ𝑜 (𝑡) ⟹ ℎ(−𝑡) = ℎ𝑒 (𝑡) − ℎ𝑜 (𝑡)

but {ℎ(−𝑡) = 0 for 𝑡 > 0} because the system is causal, which means that
ℎ(𝑡) + ℎ(−𝑡) = 2ℎ𝑒 (𝑡)
⟺ ℎ(𝑡) = 2ℎ𝑒 (𝑡) = 2ℎ𝑜 (𝑡)
ℎ(𝑡) − ℎ(−𝑡) = 2ℎ𝑜 (𝑡)

and from the even/odd property we obtained 𝐇(𝜔) = 𝐇𝑒 (𝜔) + 𝑗𝐇𝑜 (𝜔) that is 𝐇𝑒 (𝜔) = 𝐀(𝜔)
and 𝐇𝑜 (𝜔) = 𝐁(𝜔) so
−1 −1
ℎ𝑒 (𝑡) = 𝔽 (𝐀(𝜔)) and ℎ𝑜 (𝑡) = 𝔽 (𝐁(𝜔))

−1 −1
ℎ(𝑡) = 2𝔽 (𝐀(𝜔)) = 2𝔽 (𝐁(𝜔))
Exercise 19: Consider a causal continuous LTI system with a frequency response

𝐇(𝜔) = 𝐀(𝜔) + 𝑗𝐁(𝜔)


Show that
1 +∞ 𝐁(𝜆) 1 +∞ 𝐀(𝜆)
𝐀(𝜔) = ∫ ( ) 𝑑𝜆 and 𝐁(𝜔) = − ∫ ( ) 𝑑𝜆
𝜋 −∞ 𝜔 − 𝜆 𝜋 −∞ 𝜔 − 𝜆

Solution: In the previous exercise we have seen that for causal LTI ℎ(𝑡) = 2ℎ𝑒 (𝑡) = 2ℎ𝑜 (𝑡)
that is
ℎ (𝑡) = −ℎ𝑜 (𝑡) for 𝑡 < 0 ℎ (𝑡) = sgn(𝑡)ℎ𝑜 (𝑡)
{ 𝑒 ⟹ { 𝑒
ℎ𝑒 (𝑡) = ℎ𝑜 (𝑡) for 𝑡 > 0 ℎ𝑜 (𝑡) = sgn(𝑡)ℎ𝑒 (𝑡)

We use the fact that 𝐀(𝜔) = 𝔽(ℎ𝑒 (𝑡)) and 𝐁(𝜔) = 𝔽(ℎ𝑜 (𝑡)) we obtain
1 1 2 1 +∞ 𝐁(𝜆)
𝐀(𝜔) = 𝔽(sgn(𝑡)ℎ𝑜 (𝑡)) = 𝔽(ℎ𝑜 (𝑡)) ⋆ 𝔽(sgn(𝑡)) = (𝑗𝐁(𝜔) ⋆ ) = ∫ ( ) 𝑑𝜆
2𝜋 2𝜋 𝑗𝜔 𝜋 −∞ 𝜔 − 𝜆
1 1 2 1 +∞ 𝐀(𝜆)
𝑗𝐁(𝜔) = 𝔽(sgn(𝑡)ℎ𝑒 (𝑡)) = 𝔽(ℎ𝑒 (𝑡)) ⋆ 𝔽(sgn(𝑡)) = (𝐀(𝜔) ⋆ ) = ∫ ( ) 𝑑𝜆
2𝜋 2𝜋 𝑗𝜔 𝑗𝜋 −∞ 𝜔 − 𝜆

Finally we deduce that: for any causal continuous LTI system 𝐀(𝜔) is the Hilbert transform
of 𝐁(𝜔) and 𝐁(𝜔) is the Hilbert transform of −𝐀(𝜔).

Exercise 20: Let 𝐱(𝑡) be a signal with Fourier transform 𝐗(𝜔)

1 |𝜔| < 1
𝐗(𝜔) = {
0 |𝜔| > 1

If this signal is an excitation of a LTI system defined by 𝑦(𝑡) = 𝑥̈ (𝑡), then find the output
energy defined by
+∞
∫ |𝑦(𝑡)|2 𝑑𝑡
−∞

Solution: In the frequency domain we have 𝐘(𝜔) = −𝜔2 𝐗(𝜔) and from Parseval theorem
+∞
1 +∞ 1 +∞ +∞ 4 2 (𝜔)
𝜔 𝐗 +1 4
𝜔 1
∫ |𝑦(𝑡)|2 𝑑𝑡 = 2
∫ |𝐘(𝜔)| 𝑑𝜔 = 2 2
∫ |−𝜔 𝐗(𝜔)| 𝑑𝜔 = ∫ 𝑑𝜔 = ∫ 𝑑𝜔 =
−∞ 2𝜋 −∞ 2𝜋 −∞ −∞ 2𝜋 −1 2𝜋 5𝜋

Exercise 21: Find the Fourier transform for the signal 𝐱(𝑡) = 𝑒 2𝑡 𝑢(−𝑡)

Solution: In the frequency we know that


1 1
𝐗(−𝜔) = 𝔽(𝐱(−𝑡)) = 𝔽(𝑒 −2𝑡 𝑢(𝑡)) = ⟹ 𝐗(𝜔) =
2 + 𝑗𝜔 2 − 𝑗𝜔

Exercise 22: Let we define two signals

𝐱1 (𝑡) = 𝑢(−𝑡) − 𝑢(−𝑡 + 2)


𝐱 2 (𝑡) = −𝑢(𝑡) + 𝑢(𝑡 − 2)
❶ Sketch both of 𝐱1 (𝑡) and 𝐱 2 (𝑡)
❷ show that they have the same Fourier transform.

Solution: ❶ We left the sckech to the student


❷ In the frequency we know that
𝔽 1 𝔽 1
𝑢(𝑡) ↔ (𝜋𝛿(𝜔) + ) and 𝑢(𝑡 − 2) ↔ 𝑒 −2𝑗𝜔 (𝜋𝛿(𝜔) + )
𝑗𝜔 𝑗𝜔
𝔽 1 𝔽 1
𝑢(−𝑡) ↔ (𝜋𝛿(𝜔) − ) and 𝑢(−𝑡 + 2) ↔ 𝑒 −2𝑗𝜔 (𝜋𝛿(𝜔) − )
𝑗𝜔 𝑗𝜔
So we can write
1 𝑒 −2𝑗𝜔 − 1
𝐗1 (𝜔) = 𝔽(𝐱1 (𝑡)) = (1 − 𝑒 −2𝑗𝜔 ) (𝜋𝛿(𝜔) − )=
𝑗𝜔 𝑗𝜔
1 𝑒 −2𝑗𝜔 − 1
𝐗 2 (𝜔) = 𝔽(𝐱2 (𝑡)) = (−1 + 𝑒 −2𝑗𝜔 ) (𝜋𝛿(𝜔) + ) =
𝑗𝜔 𝑗𝜔
Exercise 23: Find the inverse of Fourier transform for the signal

2 cos(𝜔) |𝜔| < 𝜋


❶ 𝐗(𝜔) = { ❷ 𝐗(𝜔) = 3 𝛿(𝜔 − 4) ❸ 𝐗(𝜔) = 𝜋𝑒 −|𝜔|
0 |𝜔| > 𝜋

3 3 1 |𝜔 − 𝜔0 | < 𝑎
❹ 𝐗(𝜔) = ∏𝑎 (𝜔 + 2) + ∏𝑎 (𝜔 − 2) with ∏𝑎 (𝜔 + 𝜔0 ) = { & 𝑎=1
0 |𝜔 − 𝜔0 | > 𝑎

Solution: ❶ In the first signal we use the integration by part


1 +∞ 𝑗𝜔𝑡
1 +𝜋
𝐱(𝑡) = ∫ 𝐗(𝜔) 𝑒 𝑑𝜔 = ∫ 2 cos(𝜔) 𝑒 𝑗𝜔𝑡 𝑑𝜔
2𝜋 −∞ 2𝜋 −𝜋
+𝜋 +𝜋
+𝜋
⦁ [cos(𝜔) 𝑒 𝑗𝜔𝑡 ]−𝜋 = 𝑗𝑡 ∫ cos(𝜔) 𝑒 𝑗𝜔𝑡 𝑑𝜔 − ∫ sin(𝜔) 𝑒 𝑗𝜔𝑡 𝑑𝜔
−𝜋 −𝜋
𝑗𝜔𝑡 +𝜋 +𝜋 𝑗𝜔𝑡 +𝜋
sin(𝜔) 𝑒 𝑒
⦁ [ ] =∫ cos(𝜔) 𝑑𝜔 + ∫ sin(𝜔) 𝑒 𝑗𝜔𝑡 𝑑𝜔
𝑗𝑡 −𝜋 −𝜋 𝑗𝑡 −𝜋

By adding those two formulas together we get


+𝜋
+𝜋
sin(𝜔) 1 cos(𝜔) 𝑒 𝑗𝜔𝑡 1
[( + cos(𝜔)) 𝑒 𝑗𝜔𝑡 ] = 𝜋 ( + 𝑗𝑡) ∫ 𝑑𝜔 = 𝜋 ( + 𝑗𝑡) 𝐱(𝑡)
𝑗𝑡 𝑗𝑡 −𝜋 𝜋 𝑗𝑡
−𝜋
+𝜋
sin(𝜔) 2cos(𝜋𝑡) cos(𝜋𝑡)
Since [( + cos(𝜔)) 𝑒 𝑗𝜔𝑡 ] = −2cos(𝜋𝑡) ⟹ 𝐱(𝑡) = − = −2𝑗𝑡 |𝑡| ≠ 1
𝑗𝑡 1 𝜋(1 − 𝑡 2 )
−𝜋 𝜋 (𝑗𝑡 + 𝑗𝑡)
❷ In the second signal we use direct definition
1 +∞ 𝑗𝜔𝑡
3 +∞
𝐱(𝑡) = ∫ 𝐗(𝜔) 𝑒 𝑑𝜔 = ∫ 𝛿(𝜔 − 4) 𝑒 𝑗𝜔𝑡 𝑑𝜔 = 2𝑒 4𝑡
2𝜋 −∞ 2𝜋 −∞

❸ In the third signal we have

1 +∞ 1 0 1 ∞ −𝜔 𝑗𝜔𝑡
𝐱(𝑡) = ∫ 𝐗(𝜔) 𝑒 𝑗𝜔𝑡 𝑑𝜔 = ∫ 𝜋𝑒 𝜔 𝑒 𝑗𝜔𝑡 𝑑𝜔 + ∫ 𝜋𝑒 𝑒 𝑑𝜔
2𝜋 −∞ 2𝜋 −∞ 2𝜋 0
1 0 1 ∞ 1/2 1/2 1/2
= ∫ 𝑒 (𝑗𝑡+1)𝜔 𝑑𝜔 + ∫ 𝑒 (𝑗𝑡−1)𝜔 𝑑𝜔 = ( )−( )=
2 −∞ 2 0 𝑗𝑡 + 1 𝑗𝑡 − 1 1 + 𝑡2

1/2
𝔽( ) = 𝜋𝑒−|𝜔|
1 + 𝑡2
❹ In the fourth signal we have

3 3 3 3 sin(𝑡) 3 sin(𝑡)
𝔽−1 (∏𝑎 (𝜔 + ) + ∏𝑎 (𝜔 − )) = (𝑒 −2𝑗𝑡 + 𝑒 2𝑗𝑡 ) = 2 cos ( 𝑡)
2 2 𝜋𝑡 2 𝜋𝑡

Exercise 24: Find the inverse of Fourier transform for the signal
sin(𝑡/2)
𝐱(𝑡) = sin(𝑡)
𝜋𝑡 2
Solution:
sin(𝑡)
sin(𝑡/2) sin(𝑡) sin(𝑡/2) 1 𝐱1 (𝑡) =
𝐱(𝑡) = sin(𝑡) = 𝜋( )( ) = 2𝜋 ( 𝐱1 (𝑡)𝐱 2 (𝑡)) with { 𝜋𝑡
𝜋𝑡 2 𝜋𝑡 𝜋𝑡 2 sin(𝑡/2)
𝐱 2 (𝑡) =
𝜋𝑡
1 1
𝜔+ for |𝜔| <
2 2
1 1 1 1 3
𝐗(𝜔) = ∏1 (𝜔)∏ (𝜔) = 𝐗1 (𝜔) ⋆ 𝐗2 (𝜔) = 1 for ≤ 𝜔 <
2
1
2 2 2 2
2
1 3 5
𝜔+ for ≤ 𝜔<
2 2 2
{ 0 elsewhere
clear all, clc,
Ts=0.01; t1=-0.5:Ts:0.5; t2=-1:Ts:1;
x1=ones(1,length(t1));
x2=ones(1,length(t2));
n1=length(x1); n2=length(x2);
X1=[x1,zeros(1,n2)];X2=[x2,zeros(1,n1)];
for i=1:n1+n2-1
X(i)=0; for j=1:n1
if i-j+1>0
X(i)= X(i)+ X1(j)* X2(i-j+1);
else
end; end; end
t=-1.5:Ts:1.5;
plot(t,0.5*Ts*X,'r','linewidth',3)
grid on

This Fourier transform can be evaluated directly using the next MATLAB code

clear all, clc, t=-10:0.01:10;


a=1; w0=10; k=0;
y1=(sin(t)./(pi*t));
y2=sin((1/2)*t)./(pi*t);
y=pi*y1.*y2;
subplot(221)
plot(t,y,'r','linewidth',3),grid on,
%---------Fourier transform---------%
for w=-2:0.05:2
k=k+1; X(k)=trapz(t,y.*exp(-j*w*t));
end
w=-2:.05:2;
subplot(222)
plot(w,X,'r','linewidth',3); grid on
Exercise 25: Determine whether the time domain signal of the following 𝐗(𝜔) is real or
complex valued and even or odd.

𝜋 1 1 1 |𝜔 − 𝜔0 | < 𝑎
❶ 𝐗(𝜔) = {∏𝑎 (𝜔 + ) − ∏𝑎 (𝜔 − )} with ∏𝑎 (𝜔 + 𝜔0 ) = { & 𝑎=1
2 2 2 0 |𝜔 − 𝜔0 | > 𝑎
𝑘
𝑘=+∞ 1 𝜋
❷ 𝐗(𝜔) = 𝜔−2 + 𝑗𝜔−3 ❸ 𝐗(𝜔) = ∑ ( ) 𝛿 (𝜔 − 𝑘 )
𝑘=−∞ 2 4

Solution:
−1 𝜋 1 1 𝜋 −𝑗𝑡 𝑗𝑡 sin(𝑡) sin2(𝑡)
❶ 𝐱(𝑡) = 𝔽 ( {∏𝑎 (𝜔 + ) − ∏𝑎 (𝜔 − )}) = (𝑒 2 −𝑒 )
2 = ⟹ complex, odd
2 2 2 2 𝜋𝑡 𝑗𝑡

−1 𝑑𝛿(𝑡) 1 𝑑𝛿(𝑡)
❷ (𝑗𝜔)3 𝐗(𝜔) = 1 − 𝑗𝜔 ⟺ 𝔽 ((𝑗𝜔)3 𝐗(𝜔)) = 𝛿(𝑡) − ⟺ 𝐗(𝜔) = 𝔽 ( 𝛿 (𝑡) − )
𝑑𝑡 (𝑗𝜔)3 𝑑𝑡

𝑑 3 𝛿(𝑡) 𝑑 4 𝛿(𝑡)
𝐱(𝑡) = − ⟹ real, nor even neither odd
𝑑𝑡 3 𝑑𝑡 4
𝑘 𝑘=+∞ 1 𝑘 𝜋
𝑘=+∞ 1 𝜋 𝑘 𝑡
❸ 𝐗(𝜔) = ∑ ( ) 𝛿 (𝜔 − 𝑘 ) ⟹ 𝐱(𝑡) = ∑ ( ) 𝑒 4 ⟹ real, even
𝑘=−∞ 2 4 𝑘=−∞ 2

Exercise 26: For the continuous time LTI system let 𝛼, 𝛽 > 0 and

LTI System
𝐱(𝑡) = 𝑒 −𝛼𝑡 𝑢(𝑡)   𝐲(𝑡) = 𝐡(𝑡) ⋆ 𝐱(𝑡)
𝐡(𝑡) = 𝑒 −𝛽𝑡 𝑢(𝑡)

Find the output of this system (Take into a consideration all cases for 𝛼 & 𝛽)

Solution:

Method I: using Fourier transform


𝔽 1 𝔽 1
𝐱(𝑡) = 𝑒 −𝛼𝑡 𝑢(𝑡) ↔ 𝐗(𝜔) = and 𝐡(𝑡) = 𝑒 −𝛽𝑡 𝑢(𝑡) ↔ 𝐇(𝜔) =
𝛼 + 𝑗𝜔 𝛽 + 𝑗𝜔

1 1 1
𝐘(𝜔) = 𝐗(𝜔)𝐇(𝜔) = ( )( )= 2
𝛼 + 𝑗𝜔 𝛽 + 𝑗𝜔 𝛼𝛽 − 𝜔 + 𝑗(𝛼 + 𝛽)𝜔

We distinguish two cases (distinct and repeated roots)

⦁ 𝛼 = 𝛽 𝐘(𝜔) = (𝛼 + 𝑗𝜔)−2 ⟹ 𝐲(𝑡) = 𝑡𝑒 −𝛼𝑡 𝑢(𝑡)

⦁𝛼≠𝛽
1 1 𝑘1 𝑘2 𝑘1 = lim (𝛼 + 𝑗𝜔)𝐘(𝜔)
𝑗𝜔→−𝛼
𝐘(𝜔) = ( )( )= + with {
𝛼 + 𝑗𝜔 𝛽 + 𝑗𝜔 𝛼 + 𝑗𝜔 𝛽 + 𝑗𝜔 𝑘2 = lim (𝛽 + 𝑗𝜔)𝐘(𝜔)
𝑗𝜔→−𝛽
1 1 1 1
⟹ 𝐘(𝜔) = { − } ⟹ 𝐲(𝑡) = (𝑒 −𝛼𝑡 − 𝑒 −𝛽𝑡 )𝑢(𝑡)
𝛽 − 𝛼 𝛼 + 𝑗𝜔 𝛽 + 𝑗𝜔 𝛽−𝛼

0 for 𝑡≤0
1
𝐲(𝑡) = (𝑒 −𝛼𝑡 − 𝑒 −𝛽𝑡 )𝑢(𝑡) 𝛽≠𝛼
{𝛽 − 𝛼 } for 𝑡>0
−𝛼𝑡
{ 𝑡𝑒 𝑢(𝑡) 𝛽≠𝛼
Method II: using the convolution and knowing that for 𝑡≤0 𝐲(𝑡) = 0
+∞
𝜏<𝑡
𝐲(𝑡) = ∫ 𝑒 −𝛼𝜏 𝑢(𝜏)𝑒 −𝛽(𝑡−𝜏) 𝑢(𝑡 − 𝜏)𝑑𝜏 ⟹ {
−∞
𝜏 >0
⦁𝛼≠𝛽
𝜏=𝑡
𝑡
−𝛼𝜏 −𝛽(𝑡−𝜏) −𝛽𝑡
𝑡
(𝛽−𝛼)𝜏 −𝛽𝑡
𝑒 (𝛽−𝛼)𝜏 1
𝐲(𝑡) = ∫ 𝑒 𝑒 𝑑𝜏 = 𝑒 ∫𝑒 𝑑𝜏 = 𝑒 [ ] = (𝑒 −𝛼𝑡 − 𝑒 −𝛽𝑡 )
0 0 𝛽 − 𝛼 𝜏=0 𝛽 − 𝛼
⦁𝛼=𝛽
𝑡 𝑡
−𝛼𝜏 −𝛼(𝑡−𝜏) −𝛽𝑡
𝐲(𝑡) = ∫ 𝑒 𝑒 𝑑𝜏 = 𝑒 ∫ 𝑑𝜏 = 𝑡𝑒 −𝛽𝑡
0 0

So, we deduce that


0 for 𝑡≤0
1
𝐲(𝑡) = (𝑒 −𝛼𝑡 − 𝑒 −𝛽𝑡 )𝑢(𝑡) 𝛽≠𝛼
{𝛽 − 𝛼 } for 𝑡>0
−𝛼𝑡
{ 𝑡𝑒 𝑢(𝑡) 𝛽≠𝛼

Exercise 27: write a program to do an amplitude modulation (AM)

𝑇 2𝐴 𝑇
+ 𝑡 for 0≤𝑡≤
𝐱(𝑡) = { 2 𝑇 2 with 𝐱(𝑡) = cos(𝑎𝑡)
𝑇 2𝐴 𝑇
− 𝑡 for ≤𝑡≤𝑇
2 𝑇 2
clear all,clc,
dt=0.01; t=-10:dt:10;
T=10; a=20;

y1=asin(sin((2*pi/T)*t));
y2=cos(a*t);
y=y1.*y2;
subplot(311)
plot(t,y1,'-b','linewidth',2)
grid on, hold on;
subplot(312)
plot(t,y2,'-r','linewidth',2)
grid on, hold on;
subplot(313)
plot(t,y,'-r','linewidth',2)
grid on

Exercise 28: Prove that the Fourier transform of the pulse train is
𝑘=+∞ 𝑘=+∞
2𝜋 2𝜋
𝔽 (𝐱(𝑡) = ∑ 𝛿(𝑡 − 𝑘𝑇)) = ∑ 𝛿 (𝜔 − 𝑘 )
𝑇 𝑇
𝑘=−∞ 𝑘=−∞

Solution: Firstly we must find the exponential Fourier series coefficients


𝑘=+∞ 𝑘=+∞
1 1 1 1
𝑐𝑘 = ∫ 𝐱(𝑡)𝑒 −𝑗𝜔0 𝑘𝑡 𝑑𝑡 = ∫ ∑ 𝛿(𝑡 − 𝑘𝑇) 𝑒 −𝑗𝜔0 𝑘𝑡 𝑑𝑡 = { ∑ ∫ 𝛿(𝑡 − 𝑘𝑇)𝑒 −𝑗𝜔0 𝑘𝑡 𝑑𝑡} =
𝑇 𝑇 𝑇 𝑇 𝑇 𝑇 𝑇
𝑘=−∞ 𝑘=−∞
So use the periodic function property we get
𝑘=+∞ 𝑘=+∞
1 𝔽 1 2𝜋
𝐱(𝑡) = ∑ 𝑒 𝑗𝜔0 𝑘𝑡 ↔ 𝐗(𝜔) = 2𝜋 ∑ 𝛿(𝜔 − 𝑘𝜔0 ) with 𝜔0 =
𝑇 𝑇 𝑇
𝑘=−∞ 𝑘=−∞

Exercise 29: Given a periodic signal as shown in figure where


𝑇
𝐱(𝑡) = 𝑒 𝑡 when |𝑡| <
2

❶ If we let 𝑇 = 2𝜋 find the exponential and trigonometric Fourier series coefficients


❷ Based on this Fourier series determine the following sum

1
𝑠=∑
1 + 𝑘2
𝑘=1
Solution: ❶ 𝜔0 = 2𝜋/𝑇 = 1 and
1 −𝑗𝜔0 𝑘𝑡
1 𝜋 𝑡 −𝑗𝜔 𝑘𝑡
𝑐𝑘 = ∫ 𝐱(𝑡)𝑒 𝑑𝑡 = ∫ 𝑒 𝑒 0 𝑑𝑡
𝑇 𝑇 2𝜋 −𝜋
𝑡=+𝜋
1 𝜋 (1−𝑗𝜔 𝑘)𝑡 𝑒 (1−𝑗𝜔0 𝑘)𝑡
= ∫ 𝑒 0 𝑑𝑡 = |
2𝜋 −𝜋 2𝜋(1 − 𝑗𝜔0 𝑘) 𝑡=−𝜋
1
= (𝑒 (1−𝑗𝜔0 𝑘)𝜋 − 𝑒 −(1−𝑗𝜔0 𝑘)𝜋 )
2𝜋(1 − 𝑗𝜔0 𝑘)
1
= (𝑒 𝜋 𝑒 −𝑗𝜔0 𝑘𝜋 − 𝑒 −𝜋 𝑒 𝑗𝜔0 𝑘𝜋 )
2𝜋(1 − 𝑗𝜔0 𝑘)
(−1)𝑘 𝑒 𝜋 − 𝑒 −𝜋 (−1)𝑘 (1 + 𝑗𝑘)
= ( )= sinh(𝜋)
𝜋(1 − 𝑗𝑘) 2 𝜋(1 + 𝑘 2 )
(−1)𝑘 (−1)𝑘 𝑘
= sinh(𝜋) + 𝑗 sinh(𝜋)
𝜋(1 + 𝑘 2 ) 𝜋(1 + 𝑘 2 )

Finally we obtain the complex coefficients

(−1)𝑘 sinh(𝜋) (−1)𝑘 sinh(𝜋)


𝑐𝑘 = + 𝑗𝑘
𝜋(1 + 𝑘 2 ) 𝜋(1 + 𝑘 2 )

The trigonometric coefficients are

sinh(𝜋) (−1)𝑘 sinh(𝜋) (−1)𝑘+1 sinh(𝜋)


𝑎0 = 𝑐0 = , 𝑎𝑘 = 2Re(𝑐𝑘 ) = 2 , 𝑏𝑘 = −2Im(𝑐𝑘 ) = 2𝑘
𝜋 𝜋(1 + 𝑘 2 ) 𝜋(1 + 𝑘 2 )

sinh(𝜋) (−1)𝑘 (−1)𝑘+1
𝐱(𝑡) = (1 + 2 ∑ cos(𝑘𝑡) + 𝑘 sin(𝑘𝑡))
𝜋 (1 + 𝑘 2 ) (1 + 𝑘 2 )
𝑘=1

❷ Now if we let 𝑡 = 𝜋 we obtain sin(𝜋𝑡) = 0 and cos(𝜋𝑡) = ±1 but the function 𝐱(𝑡) is not
defined at 𝑡 = 𝜋 so we let
∞ ∞
𝑒 𝜋 + 𝑒 −𝜋 sinh(𝜋) 1 1 1 𝜋
𝐱(𝜋) = = cosh(𝜋) = (1 + 2 ∑ ) ⟹ 𝑠=∑ = ( − 1)
2 𝜋 (1 + 𝑘 2 ) (1 + 𝑘 2 ) 2 tanh(𝜋)
𝑘=1 𝑘=1

Exercise 30: Given LTI system described by its differential equation 𝐲(𝑡) = 𝐱̇ (𝑡) assume that
the input signal is periodic with Fourier series
𝑘=∞

𝐱(𝑡) = ∑ 𝐶𝑘𝑥 𝑒 −𝑗𝜔0 𝑘𝑡


𝑘=−∞

𝑦
Find 𝐶𝑘 the exponential Fourier series coefficients for the signal 𝐲(𝑡)

Solution:
𝑘=∞ 𝑘=∞ 𝑘=∞
𝑑𝐱(𝑡) 𝑑 𝑑 𝑦
𝐲(𝑡) = = ∑ 𝐶𝑘𝑥 𝑒 −𝑗𝜔0 𝑘𝑡 = ∑ 𝐶𝑘𝑥 𝑒 −𝑗𝜔0 𝑘𝑡 = ∑ 𝑗𝜔0 𝑘𝐶𝑘𝑥 𝑒 −𝑗𝜔0 𝑘𝑡 ⟹ 𝐶𝑘 = 𝑗𝑘𝜔0 𝐶𝑘𝑥
𝑑𝑡 𝑑𝑡 𝑑𝑡
𝑘=−∞ 𝑘=−∞ 𝑘=−∞
𝑦
If we let 𝜔 = 𝑘𝜔0 then 𝐶𝑘 = 𝑗𝜔𝐶𝑘𝑥

Exercise 31: Find the Fourier series coefficients for the half cosine wave signal

𝑇 𝑇
0 for − ≤𝑡≤−
2 4
𝑇 𝑇=8
𝐱(𝑡) = 𝐴cos(𝜔0 𝑡) for |𝑡| ≤ Let
4 𝜔0 = 𝜋/4
𝑇 𝑇
{ 0 for ≤𝑡≤
4 2
Solution:
1 𝐴
𝑎0 = ∫ 𝐱(𝑡)𝑑𝑡 = Area over on period =
𝑇 𝑇 𝜋
2 4 𝑇/4 4𝐴 𝑇/4
𝑎𝑘 = ∫ 𝐱(𝑡)cos(𝑘𝜔0 𝑡)𝑑𝑡 = ∫ 𝐱(𝑡)cos(𝑘𝜔0 𝑡)𝑑𝑡 = ∫ cos(𝜔0 𝑡)cos(𝑘𝜔0 𝑡)𝑑𝑡
𝑇 𝑇 𝑇 0 𝑇 0

We have 2cos(𝜔0 𝑡)cos(𝑘𝜔0 𝑡) = cos((1 + 𝑘)𝜔0 𝑡) + cos((1 − 𝑘)𝜔0 𝑡) then


𝜋 𝜋
2𝐴 𝑇/4 𝐴 sin ((1 + 𝑘) 2) sin ((1 − 𝑘) 2)
𝑎𝑘 = ∫ cos((1 + 𝑘)𝜔0 𝑡) + cos((1 − 𝑘)𝜔0 𝑡)𝑑𝑡 = { + }
𝑇 0 𝜋 (1 + 𝑘) (1 − 𝑘)
For 𝑘 = 1 we have
4𝐴 𝑇/4 𝐴
𝑎1 = ∫ cos(𝜔0 𝑡)cos(𝜔0 𝑡)𝑑𝑡 =
𝑇 0 2

For 𝑘 = odd (𝑘 = 3,5, … ), (𝑘 + 1) and (𝑘 − 1) are both even, so

𝜋 𝜋
sin ((1 + 𝑘) ) = 0 = sin ((1 − 𝑘) ) 𝑘 = odd
2 2

For 𝑘 = even (𝑘 = 2,4,6, … ), (𝑘 + 1) and (𝑘 − 1) are both odd, so

𝜋 𝜋 𝜋
sin ((1 + 𝑘) ) = −sin ((1 − 𝑘) ) = cos (𝑘 ) = (−1)𝑘/2 𝑘 = even
2 2 2

Hence, To avoid using 𝑘 = 2,4,6, ... and also to ease computation, we can replace 𝑘 by 2ℓ

𝐴 𝐴 2𝐴 (−1)ℓ cos(2ℓ𝜔0 𝑡)
𝐱(𝑡) = + cos(𝜔0 𝑡) − ∑
𝜋 2 𝜋 (2ℓ)2 − 1
ℓ=1

clear all, clc,

t=0:0.01:2;
Nmax=5; w0=2*pi;
x1=1/pi+(1/2)*cos(w0*t);

for k=1:1:Nmax
a=((2*(-1)^k)/(pi*(1-(2*k)^2))) ;
x1=x1+a*cos((2*w0*k)*t);
if k>2
plot(t,x1,'-','linewidth',1.5)
hold on, grid on
end
end

Exercise 32: Find the Fourier series coefficients for the half sine wave signal

𝑇
𝐴sin(𝜔0 𝑡) for 0≤𝑡≤
𝐱(𝑡) = { 2
𝑇
0 for ≤𝑡≤𝑇
2
𝑇=8
Let 𝜋
𝜔0 =
4

Solution: here we give a short answer and we let the detail to students


𝐴 𝐴 2𝐴 cos(2ℓ𝜔0 𝑡)
𝐱(𝑡) = + sin(𝜔0 𝑡) − ∑
𝜋 2 𝜋 (2ℓ)2 − 1
ℓ=1
Exercise 33: Find the Fourier series coefficients for the given signal

𝐱(𝑡) = 𝑡 for |𝑡| < 𝜋 and 𝐱(𝑡) = 𝐱(𝑡 + 2𝜋)


And try to evaluate 𝜋/4

Solution: In example 09 we have proved that



2
𝐱(𝑡) = ∑(−1)𝑘+1 sin(𝑘𝜔0 𝑡) with 𝑇 = 2𝜋 ⟹ 𝜔0 = 1
𝑘
𝑘=1

∞ ∞
𝜋 𝜋 2 𝜋 𝜋 (−1)ℓ
𝐱 ( ) = = ∑(−1)𝑘+1 sin (𝑘 ) ⟹ =∑
2 2 𝑘 2 4 2ℓ + 1
𝑘=1 ℓ=0

Exercise 34: Find the Fourier transform for the given signal

Solution: Let we derive an expression for this signal

𝐴(𝑏 + 𝑡)
for − 𝑏 ≤ 𝑡 ≤ −𝑎
𝑏−𝑎
𝐟(𝑡) = 𝐴 for −𝑎 ≤𝑡 ≤ 𝑎
𝐴(𝑏 − 𝑡)
{ 𝑏−𝑎 for 𝑎≤𝑡≤𝑏

The second derivative of this expression is given by

𝑑2 𝐴
𝐟(𝑡) = (𝛿(𝑡 + 𝑏) − 𝛿(𝑡 + 𝑎) − 𝛿(𝑡 − 𝑎) + 𝛿(𝑡 − 𝑏))
𝑑𝑡 2 𝑏−𝑎
𝑑2 𝐴
𝔽 ( 2 𝐟(𝑡)) = (𝑒 𝑗𝑏𝜔 − 𝑒 𝑗𝑎𝜔 − 𝑒 −𝑗𝑎𝜔 + 𝑒 −𝑗𝑏𝜔 )
𝑑𝑡 𝑏−𝑎
𝑑2 2𝐴
𝔽 ( 2 𝐟(𝑡)) = (cos(𝑏𝜔) − cos(𝑎𝜔))
𝑑𝑡 𝑏−𝑎
Also we know taht
𝑑2 1 𝑑2 2𝐴
𝔽 ( 2 𝐟(𝑡)) = −𝜔2 𝔽(𝐟(𝑡)) ⟹ 𝔽(𝐟(𝑡)) = − 𝔽( 𝐟 ( 𝑡) ) = (cos(𝑎𝜔) − cos(𝑏𝜔))
𝑑𝑡 𝜔 2 𝑑𝑡2 𝜔 2 ( 𝑏 − 𝑎)

2𝐴
𝐅(𝜔) = (cos(𝑎𝜔) − cos(𝑏𝜔))
𝜔 2 (𝑏
− 𝑎)
CHAPTER VI:
Fourier-Analysis of
Discrete LTI Systems

I. Introduction
II. Discrete-Time Fourier Series (DTFS)
II.I Properties of Discrete Fourier Series
III. Discrete-Time Fourier Transform (DTFT)
III.I Connection between Fourier and z-Transform
III.II Common Discrete-time Fourier Transform Pairs
III.III DTFT Convergence Issues
III.IV Properties of the (DTFT) Fourier Transform
III.V General Comments on Fourier Transforms
V. Solved Problems

Fourier analysis is fundamental to understanding the behavior of signals and


systems. This is a result of the fact that sinusoids are Eigen-functions of linear, time-
invariant (LTI) systems. This is to say that if we pass any particular sinusoid through
a LTI system, we get a scaled version of that same sinusoid on the output. Then,
since Fourier analysis allows us to redefine the signals in terms of sinusoids, all we
need to do is determine how any given system affects all possible sinusoids (its
transfer function) and we have a complete understanding of the system.
Furthermore, since we are able to define the passage of sinusoids through a system
as multiplication of that sinusoid by the transfer function at the same frequency, we
can convert the passage of any signal through a system from convolution (in time) to
multiplication (in frequency). These ideas are what give Fourier analysis its power.
Fourier-Analysis of
Discrete LTI Systems
I. Introduction: In this chapter, we consider the representation of discrete-time signals
through decomposition as a linear combination of complex exponentials. For periodic
signals this representation becomes the discrete-time Fourier series, and for aperiodic
signals it becomes the discrete-time Fourier transform. The motivation for representing
discrete-time signals as a linear combination of complex exponentials is identical in both
continuous time and discrete time. Complex exponentials are Eigen-functions of linear,
time-invariant systems, and consequently the effect of an LTI system on each of these basic
signals is simply a (complex) amplitude change. Thus with signals decomposed in this way,
an LTI system is completely characterized by a spectrum of scale factors which it applies at
each frequency. In representing discrete-time periodic signals through the Fourier series,
we again use harmonically related complex exponentials with fundamental frequencies that
are integer multiples of the fundamental frequency of the periodic sequence to be
represented. However, as we discussed before, an important distinction between
continuous-time and discrete-time complex exponentials is that in the discrete-time case,
they are unique only as the frequency variable spans a range of 2𝜋.

Because the basic complex exponentials {𝜙𝑘 [𝑛] = 𝑒 𝑗𝑘Ω0 𝑛 Ω0 = 2𝜋𝑘/𝑁} repeat periodically in
frequency, two alternative interpretations arise for the behavior of the Fourier series
coefficients. One interpretation is that there are only 𝑁 coefficients. The second is that the
sequence representing the Fourier series coefficients can run on indefinitely but repeats
periodically. Both interpretations, of course, are equivalent because in either case there are
only 𝑁 unique Fourier series coefficients.

The discrete-time Fourier transform developed as we have just described corresponds to a


decomposition of an aperiodic signal as a linear combination of a continuum of complex
exponentials. The synthesis equation is then the limiting form of the Fourier series sum,
specifically an integral. The analysis equation is the same one we used previously in
obtaining the envelope of the Fourier series coefficients. Here we see that while there was a
duality in the expressions between the discrete-time Fourier series analysis and synthesis
equations, the duality is lost in the discrete-time Fourier transform since the synthesis
equation is now an integral and the analysis equation a summation. This represents one
difference between the discrete-time Fourier transform and the continuous-time Fourier
transform. Another important difference is that the discrete-time Fourier transform is
always a periodic function of frequency. Consequently, it is completely defined by its
behavior over a frequency range of 2𝜋 in contrast to the continuous-time Fourier transform,
which extends over an infinite frequency range.

II. Discrete-Time Fourier Series (DTFS): The exponential Fourier series consists of the
exponentials {𝜙𝑘 [𝑛] = 𝑒 𝑗𝑘Ω0 𝑛 } There would be an infinite number of harmonics, except for
the property proved before: that discrete-time exponentials whose frequencies are
separated by 2𝜋 (or integral multiples of 2𝜋) are identical because 𝜙𝑘 [𝑛] = 𝜙𝑘+𝑁 [𝑛]. The
consequence of this result is that the 𝑘 th harmonic is identical to the (𝑘 + 𝑁)th harmonic.
Since complex exponentials are Eigen-functions of linear time-invariant (LTI) systems 𝕋,
calculating the output of an LTI system given 𝑒 𝑗Ω𝑛 as an input amounts to simple
multiplication,
LTI system
𝑥[𝑛] = 𝑒 𝑗Ω𝑛   𝑦[𝑛] = 𝐻[𝑘]𝑒 𝑗Ω𝑛
𝕋

where Ω = 𝑘Ω0 = 2𝜋𝑘/𝑁, and 𝐻[𝑘] ∈ ℂ is the eigenvalue corresponding to k. As shown in the
figure, a simple exponential input would yield the output 𝑦[𝑛] = 𝐻[𝑘]𝑒 𝑗Ω𝑛 . Using this and
the fact that 𝕋 is linear, calculating 𝑦[𝑛] for combinations of complex exponentials is also
straightforward.
LTI system
𝑐1 𝑒 𝑗Ω1 𝑛 + 𝑐2 𝑒 𝑗Ω2 𝑛   𝑐1 𝐻[𝑘1 ]𝑒 𝑗Ω1 𝑛 + 𝐻[𝑘2 ]𝑐2 𝑒 𝑗Ω2 𝑛
𝕋
⋮ ⋮
⋮ ⋮
LTI system
∑ 𝑐𝑘 𝑒 𝑗𝑘Ω0 𝑛   ∑ 𝑐𝑘 𝐻[𝑘]𝑒 𝑗𝑘Ω0 𝑛
𝕋
𝑘 𝑘

The action of 𝕋 on an input such as those in the two equations above is easy to explain. 𝕋
independently scales each exponential component 𝑒 𝑗𝑘Ω0 𝑛 by a different complex number
𝐻[𝑘] ∈ ℂ . As such, if we can write a function 𝑦[𝑛] as a combination of complex exponentials
it allows us to easily calculate the output of a system. But, the periodicity of the complex
exponential 𝜙𝑘 [𝑛] tell as that: there exist 𝑁-distinct signals to form a basis, so far this lead
to the fact that discrete time-periodic function 𝐱[𝑛] = 𝐱[𝑛 + 𝑁] can be written as a linear
combination of only 𝑁 harmonic complex sinusoids. 𝐱[𝑛] = ∑𝑁−1 𝑘=0 𝑐𝑘 𝑒
𝑗𝑘Ω0 𝑛
with Ω0 = 2𝜋/𝑁

Theorem: Let we have an exponential term {𝑒 𝑗Ω𝑛 with 𝑛 = 1,2, … , 𝑁 − 1} which is a vector in
ℂ𝑁 , and if we define
1 𝑗2𝜋𝑘𝑛
𝜙𝑘 [𝑛] = 𝑒 𝑁 with and 𝑘 = 1,2, … , 𝑁 − 1
√𝑁

then 𝜙𝑘 [𝑛] is an orthonormal basis for ℂ𝑁 .

Proof First of all, we must show 𝜙𝑘 is orthonormal, i.e. ⟨𝜙𝑘 , 𝜙ℓ ⟩ = 𝛿𝑘ℓ


𝑁−1 𝑁−1 𝑁−1
1 2𝜋𝑘 2𝜋ℓ 1 2𝜋
⟨𝜙𝑘 , 𝜙ℓ ⟩ = ∑ 𝜙𝑘 [𝑛]𝜙ℓ⋆ [𝑛] = ∑ 𝑒 𝑗 𝑁 𝑛 𝑒 −𝑗 𝑁 𝑛 = ∑ 𝑒 𝑗 𝑁 𝑛(𝑘−ℓ)
𝑁 𝑁
𝑛=0 𝑛=0 𝑛=0

If 𝑘 = ℓ, then ⟨𝜙𝑘 , 𝜙ℓ ⟩ = (∑𝑁−1


𝑛=0 1)/𝑁 = 1

If 𝑘 ≠ ℓ then we must use the "partial summation formula" shown below:


𝑁−1 𝑁−1
1 − 𝛼𝑁 1 2𝜋 1 1 − 𝑒 𝑗2𝜋(𝑘−ℓ) 1 1−1
∑𝛼 = 𝑛
⟹ ⟨𝜙𝑘 , 𝜙ℓ ⟩ = ∑ 𝑒 𝑗 𝑁 𝑛(𝑘−ℓ) = ( 2𝜋 )= ( 2𝜋 )=0
1−𝛼 𝑁 𝑁 𝑗 (𝑘−ℓ) 𝑁 𝑗 (𝑘−ℓ)
𝑛=0 𝑛=0 1−𝑒 𝑁 1−𝑒 𝑁

Therefore: 𝜙𝑘 [𝑛] is an orthonormal set. 𝜙𝑘 [𝑛] is also a basis, since there are 𝑁-vectors which
are linearly independent (orthogonality implies linear independence).

And finally, we have shown that the harmonic sinusoids {𝜙𝑘 [𝑛]} form an orthonormal basis
for ℂ𝑁 ▪
Theorem: Given a discrete-time, periodic signal 𝐱[𝑛] (i.e. vector in ℂ𝑁 ), we can write:
𝑁−1 𝑁−1 𝑁−1
1 2𝜋𝑘
𝐱[𝑛] = ∑ 𝑐𝑘 𝜙𝑘 [𝑛] = ∑ 𝑐𝑘 𝑒 𝑗𝑘Ω0 𝑛 with 𝑐𝑘 = ∑ 𝐱[𝑛]𝑒 −𝑗 𝑁 𝑛
𝑁
𝑘=0 𝑘=0 𝑛=0

Proof: In order to obtain the DTFS coefficients we perform the Gram-Schimdt process, by
projecting a function onto a basis of {𝜙0 [𝑛], 𝜙1 [𝑛], … , 𝜙𝑁−1 [𝑛]} with
2𝜋𝑘 2𝜋𝑘 𝑁−1
−𝑗 𝑛 −𝑗 𝑛
⟨𝐱[𝑛], 𝜙𝑘 [𝑛]⟩ ∑𝑁−1
𝑛=0 𝐱[𝑛]𝑒 𝑁 ∑𝑁−1
𝑛=0 𝐱[𝑛]𝑒 𝑁 1 2𝜋𝑘
𝑐𝑘 = = 2𝜋𝑘 = = ∑ 𝐱[𝑛]𝑒 −𝑗 𝑁 𝑛
⟨𝜙𝑘 [𝑛], 𝜙𝑘 [𝑛]⟩ 2𝜋𝑘 ∑𝑁−1
𝑛=0 1 𝑁
∑𝑛=0 𝑒 𝑁 𝑒 𝑁 𝑛
𝑁−1 𝑗 𝑛 −𝑗
𝑛=0

Example: 01 Given a Discrete Time Square Wave as shown in figure. Find its DTFS

Solution: We have
𝑁−1
1 2𝜋𝑘 1 2𝜋𝑘
𝑐𝑘 = ∑ 𝐱[𝑛]𝑒 −𝑗 𝑁 𝑛 = ∑ 𝐱[𝑛]𝑒 −𝑗 𝑁 𝑛 ⟹ 𝑐0 = 1
𝑁 𝑁
〈𝑁〉 𝑛=0

Just like continuous time Fourier series, we can take the summation over any interval, so
we have
𝑛=𝑁1
1 2𝜋𝑘
𝑐𝑘 = ∑ 𝑒 −𝑗 𝑁 𝑛
𝑁
𝑛=−𝑁1
Let ℓ = 𝑛 + 𝑁1 (so we can get a geometric series starting at 0)
ℓ=2𝑁1 ℓ=2𝑁1 ℓ=2𝑁1
1 𝑒 𝑗Ω0 𝑘𝑁1 𝑒 𝑗Ω0 𝑘𝑁1 ℓ
𝑐𝑘 = ∑ 𝑒 −𝑗Ω0 𝑘(ℓ−𝑁1 ) = ∑ 𝑒 −𝑗Ω0 𝑘ℓ = ∑ (𝑒 −𝑗Ω0 𝑘 )
𝑁 𝑁 𝑁
ℓ=0 ℓ=0 ℓ=0
𝑒 𝑗Ω0 𝑘𝑁1 1 − 𝑒 −𝑗Ω0 𝑘(1+2𝑁1) 1 𝑒 𝑗Ω0 𝑘(1+2𝑁1 )/2 1 − 𝑒 −𝑗Ω0 𝑘(1+2𝑁1 )
= ( )= ( )
𝑁 1 − 𝑒 −𝑗Ω0 𝑘 𝑁 𝑒 𝑗Ω0 𝑘/2 1 − 𝑒 −𝑗Ω0 𝑘
2𝜋𝑘 1
sin ( 𝑁 (2 + 𝑁1 ))
1 𝑒 𝑗Ω0 𝑘(1+2𝑁1 )/2 − 𝑒 −𝑗Ω0 𝑘(1+2𝑁1 )/2 1
= ( ) =
𝑁 𝑒 𝑗Ω0 𝑘/2 − 𝑒 −𝑗Ω0 𝑘/2 𝑁 𝜋𝑘
sin ( 𝑁 )
= digital sinc
Example: 02 Given a periodic Discrete Time Sine-Wave 𝐱[𝑛] = sin(Ω0 𝑛). Find its DTFS

Solution: It is well known that from the Euler identity


1 1 2𝜋 2𝜋 1 1
𝐱[𝑛] = sin(Ω0 𝑛) = (𝑒 𝑗Ω0 𝑛 − 𝑒 −𝑗Ω0 𝑛 ) = (𝑒 𝑗 𝑁 𝑛 − 𝑒 −𝑗 𝑁 𝑛 ) ⟹ 𝑐−1 = − 𝑐1 =
2𝑗 2𝑗 2𝑗 2𝑗

Example: 03 Find of a given periodic discrete time signal with a period 𝑁


2𝜋 2𝜋 4𝜋 𝜋
𝐱[𝑛] = 1 + sin ( 𝑛) + 3 cos ( 𝑛) + cos ( 𝑛 + )
𝑁 𝑁 𝑁 2
Solution: We have
1 𝑗2𝜋𝑛 2𝜋 3 2𝜋 2𝜋 1 4𝜋 𝜋 4𝜋 𝜋
𝐱[𝑛] = 1 + (𝑒 𝑁 − 𝑒 −𝑗 𝑁 𝑛 ) + (𝑒 𝑗 𝑁 𝑛 + 𝑒 −𝑗 𝑁 𝑛 ) + (𝑒 𝑗( 𝑁 𝑛+ 2 ) + 𝑒 −𝑗( 𝑁 𝑛+ 2 ) )
2𝑗 2 2
3 𝑗 2𝜋 3 𝑗 2𝜋 1 4𝜋 1 4𝜋
= 1 + ( − ) 𝑒 𝑗 𝑁 𝑛 + ( + ) 𝑒 −𝑗 𝑁 𝑛 + 𝑗𝑒 𝑗 𝑁 𝑛 − 𝑗𝑒 −𝑗 𝑁 𝑛
2 2 2 2 2 2
3 𝑗 3 𝑗 1 1
𝑐0 = 1, 𝑐1 = ( − ) , 𝑐−1 = ( + ), 𝑐2 = 𝑗, 𝑐−2 = − 𝑗
2 2 2 2 2 2

Remark: The Fourier coefficients 𝑐𝑘 , are often referred to as the spectral coefficients of 𝐱[𝑛].

Remark: Since the discrete Fourier series is a finite series, in contrast to the continuous-
time case, there are no convergence issues with discrete Fourier series.

II.I Properties of Discrete Fourier Series: In this section we will discuss the basic
properties of the Discrete-Time Fourier Series (only some of the important properties). Let
𝐱[𝑛], 𝐲[𝑛] be two periodic signals with the same period and having Fourier series:
𝑁−1 𝑁−1 𝑁−1 𝑁−1
𝑗𝑘Ω0 𝑛
F−Series 𝐱[𝑛] −𝑗𝑘Ω 𝑛 𝑗𝑘Ω0 𝑛
F−Series 𝐲[𝑛] −𝑗𝑘Ω 𝑛
𝐱[𝑛] = ∑ 𝑐𝑘 𝑒 ↔ 𝑐𝑘 = ∑ 𝑒 0 𝐲[𝑛] = ∑ 𝑑𝑘 𝑒 ↔ 𝑑𝑘 = ∑ 𝑒 0
𝑁 𝑁
𝑘=0 𝑘=0 𝑘=0 𝑘=0

F−Series
a. Linearity and periodicity: 𝐴 𝐱[𝑛] + 𝐵𝐲[𝑛] ↔ 𝐴𝑐𝑘 + 𝐵𝑑𝑘 and 𝑐𝑘+𝑁 = 𝑐𝑘
F−Series
b. Time Shifting: 𝐱[𝑛 − 𝑛0 ] ↔ 𝑒 −𝑗𝑘Ω0 𝑛0 𝑐𝑘
F−Series
c. Frequency Shifting: 𝑒 𝑗ℓΩ0 𝑛 𝐱[𝑛] ↔ 𝑐𝑘−ℓ
F−Series F−Series
d. Time Reversal and Conjugate: 𝐱[−𝑛] ↔ 𝑐−𝑘 and 𝐱 ⋆ [𝑛] ↔ 𝑐𝑘⋆
F−Series
e. Periodic Convolution: ∑𝑁−1
𝑘=0 𝐱[𝑟]𝐱[𝑛 − 𝑟] ↔ 𝑁𝑐𝑘 𝑑𝑘
F−Series F−Series
g. Duality: 𝐱[𝑛] ↔ 𝑐𝑘 ⟹ 𝑁. 𝑐[𝑘] ↔ 𝐱[−𝑘] "very important"
F−Series F−Series
g. Even/Odd Sequences: 𝐱[𝑛] = 𝐱 𝑒 [𝑛] + 𝐱 𝑜 [𝑛] ⟹ 𝐱 𝑒 [𝑛] ↔ Re(𝑐𝑘 ) & 𝐱𝑜 [𝑛] ↔ 𝑗. Im(𝑐𝑘 )
F−Series
h. Multiplication: 𝐱[𝑛]. 𝒚[𝑛] ↔ ∑ℓ=〈𝑁〉 𝑐ℓ 𝑑𝑘−ℓ
F−Series
i. Difference: 𝐱[𝑛] − 𝐱[𝑛 − 1] ↔ (1 − 𝑒 −𝑗𝑘Ω0 )𝑐𝑘
j. Parseval's theorem: ∑𝑁−1 2 𝑁−1 2
𝑘=0 |𝐱[𝑘]| = 𝑁 ∑𝑘=0 𝑐𝑘
F−Series 1 𝑑 F−Series
k. Integration in the Fourier Domain: ∑𝑁−1
𝜂=0 𝐱[𝜂] ↔ 𝑐𝑘 and 𝑑𝑛 𝐱[𝑛] ↔ 𝑗𝑘Ω0 𝑐𝑘
𝑗𝑘Ω0

Example: 04 In this example we will develop two MATLAB codes to implement DTFS
analysis and synthesis equations. The first code given below computes the DTFS
coefficients for the periodic signal 𝐱[𝑘]. The vector “x” holds one period of the signal 𝐱[𝑘] for
𝑛 = 0, … , 𝑁 − 1. The vector “idx” holds the values of the index 𝑘 for which the DTFS
coefficients 𝑐𝑘 are to be computed. The coefficients are returned in the vector “c”

clear all, clc, N=500; N1=20; N2=30; idx=[1:1:100]';


x=[zeros(1,N1) ones(1,N2-N1) zeros(1,N-N2)];
c = zeros(size(idx)); % Create all-zero vector
N = length(x); % Period of the signal
for kk = 1:length(idx),
k = idx(kk); tmp = 0;
for nn = 1:length(x),
n=nn-1; % MATLAB indices start with 1.
tmp = tmp+x(nn)*exp(-j*2*pi/N*k*n);
end;
c(kk) = tmp/N;
end;
k=1:length(idx);
stem(k,real(c)), grid on
stem(k,imag(c)), grid on

The second code implements the DTFS synthesis equation. The vector “c” holds one period
of the DTFS coefficients ck for k=0,...,N−1. The vector “idx” holds the values of the index n
for which the signal samples x[n] are to be computed. The synthesized signal x[n] is
returned in the vector “x”

x = zeros(size(idx)); % Create all-zero vector.


N = length(c); % Period of the coefficient set.
for nn = 1:length(idx),
n = idx(nn); tmp = 0;
for kk = 1:length(c),
k=kk-1; % MATLAB indices start with 1.
tmp = tmp+c(kk)*exp(j*2*pi/N*k*n);
end;
x(nn) = tmp;
end;
III. Discrete-Time Fourier Transform (DTFT): In mathematics, the discrete-time Fourier
transform (DTFT) is a form of Fourier analysis that is applicable to a sequence of values.
We now develop a frequency expansion for non-periodic discrete–time functions using the
same strategy as we did in the continuous–time case. Again, for simplicity we’ll only
develop the expansions for functions 𝐱[𝑛] that are zero for all sufficiently large n. Again, our
conclusions will actually apply to a much broader class of functions.

Let 𝐱[𝑛] be a nonperiodic sequence of finite duration. That is, for some positive integer 𝑁1 ,
that is 𝐱[𝑛] = 0 |𝑛| > 𝑁1 . Such a sequence is shown in Fig. Let 𝐱̃[𝑛] be a periodic sequence
formed by repeating 𝐱[𝑛] with fundamental period 𝑁 as shown in Fig. If we let 𝑁 → ∞, we
have lim𝑁→∞ 𝐱̃[𝑛] = 𝐱[𝑛].

The discrete Fourier series of 𝐱̃[𝑛] is given by


𝑁−1 𝑁1
2𝜋 1
𝐱̃[𝑛] = ∑ 𝑐𝑘 𝑒 𝑗𝑘Ω0 𝑛 with Ω0 = and 𝑐𝑘 = ∑ 𝐱̃[𝑛]𝑒 −𝑗𝑘Ω0 𝑛
𝑁 𝑁
𝑘=0 𝑛=−𝑁1

Since 𝐱̃[𝑛] = 𝐱[𝑛] for |𝑛| > 𝑁1 , and also since 𝐱[𝑛] = 0 outside this interval, so we can write
𝑁1 ∞
1 1
𝑐𝑘 = ∑ 𝐱[𝑛]𝑒 −𝑗𝑘Ω0 𝑛 = ∑ 𝐱[𝑛]𝑒 −𝑗𝑘Ω0 𝑛
𝑁 𝑁
𝑛=−𝑁1 𝑛=−∞
Let us define 𝑿(Ω) as

2𝜋𝑘
𝑿(Ω) = ∑ 𝐱[𝑛]𝑒 −𝑗Ω𝑛 with Ω = 𝑘Ω0 = ⟹ 𝑁𝑐𝑘 = 𝑿(𝑘Ω0 )
𝑁
𝑛=−∞

If we make a substitution into the equation we get

𝑿(𝑘Ω0 ) 𝑗𝑘Ω 𝑛 1
𝐱̃[𝑛] = ∑ 𝑒 0 = ∑ 𝑿(𝑘Ω0 )𝑒 𝑗𝑘Ω0 𝑛 Ω0
𝑁 2𝜋
〈𝑁〉 〈𝑁〉

The term 𝑿(Ω) is periodic with period 2𝜋 and so is 𝑒 𝑗Ω𝑛 . Thus, the product 𝑿(Ω)𝑒 𝑗Ω𝑛 will
also be periodic with period 2𝜋. each term in the last summation represents the area of a
rectangle of height 𝑿(𝑘Ω0 )𝑒 𝑗𝑘Ω0 𝑛 and width Ω0 . As 𝑁 → ∞, Ω0 becomes infinitesimal (Ω0 → 0)
and the summation in 𝐱̃[𝑛] passes to an integral.

Since 𝑿(Ω)𝑒 𝑗Ω𝑛 is periodic with period 2𝜋, the interval of integration can be taken as any
interval of length 2𝜋.

1 1 1
𝐱[𝑛] = { lim ∑ 𝑿(𝑘Ω0 )𝑒 𝑗𝑘Ω0 𝑛 Ω0 } = { lim ∑ 𝑿(𝑘Ω0 )𝑒 𝑗𝑘Ω0 𝑛 Ω0 } = ∫ 𝑿(Ω)𝑒 𝑗Ω𝑛 𝑑Ω
2𝜋 𝑁→∞ 2𝜋 Ω0 →0 2𝜋 2𝜋
〈𝑁〉 〈𝑁〉

The discrete-time Fourier transform (DTFT) of a discrete set of real or complex numbers
𝐱[𝑛], for all integers 𝑛, is given by

𝑿(Ω) = 𝔽(𝐱[𝑛]) = ∑ 𝐱[𝑛]𝑒 −𝑗Ω𝑛 Analysis equation


𝑛=−∞
And its inverse is given by
1
𝐱[𝑛] = 𝔽−1 (𝑿(Ω)) = ∫ 𝑿(Ω)𝑒 𝑗Ω𝑛 𝑑Ω Synthesis equation
2𝜋 2𝜋
𝔽
and we say that 𝐱[𝑛] and 𝑿(Ω) form a Fourier transform pair denoted by 𝐱[𝑛] ↔ 𝑿(Ω)

The discrete-time Fourier transform 𝑿(Ω) of 𝐱[𝑛] is, in general, complex continuous
function and can be expressed as 𝑿(Ω) = |𝑿(Ω)|𝑒 𝑗𝜙(Ω) . As in continuous time, the Fourier
transform 𝑿(Ω) of a non-periodic sequence 𝐱[𝑛] is the frequency-domain specification of
𝐱[𝑛] and is referred to as the spectrum (or Fourier spectrum) of 𝐱[𝑛]. The quantity |𝑿(Ω)| is
called the magnitude spectrum of 𝐱[𝑛], and 𝑿(Ω) is called the phase spectrum of 𝐱[𝑛].
Furthermore, if 𝐱[𝑛] is real, the amplitude spectrum |𝑿(Ω)| is an even function and the
phase spectrum 𝜙(Ω) is an odd function of Ω.

Remark: Just as in the case of continuous time, the sufficient condition for the
convergence of 𝑿(Ω) is that 𝐱[𝑛] is absolutely summable, that is, ∑∞
−∞|𝐱[𝑛]| < ∞.
III.I Connection between the Fourier Transform and the z-Transform: The z-transform
of a signal evaluated on a unit circle is equal to the Fourier transform of that signal.
∞ ∞
−𝑛
𝑿(Ω) = 𝑿(𝑧)| ⟺ 𝑿(Ω) = ∑ 𝐱[𝑛]𝑧 | = ∑ 𝐱[𝑛]𝑒 −𝑗Ω𝑛
𝑧=𝑒 𝑗Ω
𝑛=−∞ 𝑧=𝑒 𝑗Ω 𝑛=−∞

we see that if the ROC of 𝑿(𝑧) contains the unit circle, then the Fourier transform 𝑿(Ω) of
𝐱[𝑛] equals 𝑿(𝑧) evaluated on the unit circle, that is, 𝑿(Ω) = 𝑿(𝑧)| 𝑗Ω
𝑧=𝑒

Note that since the summation in z-transform is denoted by 𝑿(𝑧), then the summation in
DTFT may be denoted as 𝑿(𝑒 𝑗Ω ). Thus, in the
remainder of this text, both 𝑿(Ω) and 𝑿(𝑒 𝑗Ω ) mean
the same thing whenever we connect the Fourier
transform with the z-transform. Because the
Fourier transform is the z-transform with 𝑧 = 𝑒 𝑗Ω ,
it should not be assumed automatically that the
Fourier transform of a sequence 𝐱[𝑛] is z-
transform with z replaced by 𝑒 𝑗Ω . If 𝐱[𝑛] is
absolutely summable, the Fourier transform of
𝐱[𝑛] can be obtained from the z-transform of
𝐱[𝑛] with 𝑧 = 𝑒 𝑗Ω since the ROC of 𝑿(𝑧) will
contain the unit circle; that is, |𝑒 𝑗Ω | = 1. This is not generally true of sequences which are
not absolutely summable.

Example: 05 Consider the unit impulse sequence 𝛿[𝑛] where the z-transform of 𝛿[𝑛] is
∞ ∞
−𝑛
𝑿(𝑧) = ∑ 𝛿[𝑛]𝑧 = 1 = ∑ 𝛿[𝑛]𝑒 −𝑗Ω𝑛 = 𝑿(𝑒 𝑗Ω )
𝑛=−∞ 𝑛=−∞

Thus, the z-transform and the Fourier transform of 𝛿[𝑛] are the same. Note that 𝛿[𝑛] is
absolutely summable and that the ROC of the z-transform of 𝛿[𝑛] contains the unit circle.

Example: 06 Consider the causal exponential sequence 𝐱[𝑛] = 𝑎𝑛 𝑢[𝑛] The z-transform of
𝐱[𝑛] is given by
1
𝑿(𝑧) = |𝑧| > |𝑎|
1 − 𝑎𝑧 −1
Thus, 𝑿(𝑒 𝑗Ω ) exists for |𝑎| < 1 because the ROC of 𝑿(𝑧) then contains the unit circle. That
is,
1
𝑿(𝑒 −𝑗Ω ) = |𝑎| < 1
1 − 𝑎𝑒 −𝑗Ω

Next, by definition the Fourier transform of 𝐱[𝑛] is


∞ ∞ ∞
𝑛 1
𝑿(𝑒 −𝑗Ω ) = ∑ 𝑎𝑛 𝑢[𝑛]𝑒 −𝑗Ω𝑛 = ∑ 𝑎𝑛 𝑒 −𝑗Ω𝑛 = ∑(𝑎𝑒 −𝑗Ω ) = |𝑎𝑒 −𝑗Ω | = |𝑎| < 1
1 − 𝑎𝑒 −𝑗Ω
𝑛=−∞ 𝑛=0 𝑛=0

Thus, we have 𝑿(Ω) = 𝑿(z)| . Note that 𝐱[𝑛] is absolutely summable


𝑧=𝑒 𝑗Ω
Example: 07 Consider the unit step sequence 𝑢[𝑛]. The z-transform of 𝐱[𝑛] is given by

1
𝑿(𝑧) = |𝑧| > |1|
1 − 𝑧 −1
The Fourier transform of 𝑢[𝑛] cannot be obtained from its z-transform because the ROC of
the z-transform of 𝑢[𝑛] does not include the unit circle. Note that the unit step sequence
𝑢[𝑛] is not absolutely summable. The Fourier transform of 𝑢[𝑛] is given by

1
𝑿(Ω) = 𝜋𝛿(Ω) + |Ω| < 𝜋
1 − 𝑒 −𝑗Ω
To prove this let we consider
1 − 𝑒 −𝑗Ω 1 − 𝑒 −𝑗Ω
𝔽(𝛿[𝑛]) = 1 = −𝑗Ω
= −𝑗Ω
+ 𝜋𝛿(Ω)(1 − 𝑒 −𝑗Ω ) = 1 + 𝜋𝛿(Ω)(1 − 𝑒 −𝑗Ω )
1−𝑒 1−𝑒

The second term is always zero because for Ω = 0, 1 − 𝑒 −𝑗Ω = 0 and it's zero on any other
point |Ω| < 𝜋. From the other side if we apply the shift property we get

𝔽(𝛿[𝑛]) = 𝔽(𝑢[𝑛] − 𝑢[𝑛 − 1])


1
= 𝑿(𝑗Ω) − 𝑒 −𝑗Ω 𝑿(𝑗Ω) } ⟹ 𝑿(Ω) = 𝜋𝛿(Ω) + |Ω| < 𝜋
1 − 𝑒 −𝑗Ω
= 1 + 𝜋𝛿(Ω)(1 − 𝑒 −𝑗Ω )

In the next section we will see more detail about the unit step sequence.

III.II Common Discrete-time Fourier Transform Pairs: For every time domain sequence
there is a corresponding frequency domain waveform, and vice versa. For example, a digital
rectangular pulse in the time domain coincides with a sinc function [i.e. ,sin(x)/x] in the
frequency domain. Duality provides that the reverse is also true; a rectangular pulse in the
frequency domain matches a sinc function in the time domain. Waveforms that correspond
to each other in this manner are called Fourier transform pairs. Several common pairs are
presented in this section (Some Fourier transform pairs can be computed quite easily
directly from the definition).

 Unit Dirac Impulse: To compute the Fourier transform of an impulse we apply the
definition of Fourier transform 𝑿(Ω) = 𝔽(𝛿[𝑛]) = ∑∞
𝑛=−∞ 𝛿[𝑛]𝑒
−𝑗Ω𝑛
= 1. More generally
∞ ∞

𝑿(Ω) = 𝔽(𝛿[𝑛 − 𝑛0 ]) = ∑ 𝛿[𝑛 − 𝑛0 ]𝑒 −𝑗Ω𝑛 = ∑ 𝛿[𝑚]𝑒 −𝑗Ω(𝑚+𝑛0 ) = 𝑒 −𝑗Ω𝑛0


𝑛=−∞ 𝑛=−∞

Remark: 𝔽(𝐱[𝑛 − 𝑛0 ]) = 𝑒 −𝑗Ω𝑛0 𝑿(Ω) is very obvious and called the shift property.
 DC Gain Sequence: The term DC stands for direct current, which is a constant current.
To compute DTFT of the sequence {1[𝑛] ∀𝑛} it is preferred to compute the inverse of:

𝑿(Ω) = 2𝜋 ∑ 𝛿[Ω − 2𝜋𝑘]


𝑘=−∞
That is

𝐱[𝑛] = 𝔽−1 (𝑿(Ω)) = 𝔽−1 (2𝜋 ∑ 𝛿(Ω − 2𝜋𝑘))


𝑘=−∞
By using the definition

−1
1 1
𝔽 (𝑿(Ω)) = ∫ 𝑿(Ω)𝑒 𝑗Ω𝑛 𝑑Ω = ∫ (2𝜋 ∑ 𝛿(Ω − 2𝜋𝑘)) 𝑒 𝑗Ω𝑛 𝑑Ω
2𝜋 2𝜋 2𝜋 2𝜋
𝑘=−∞
−𝜋 ∞ +𝜋 ∞
𝑗Ω𝑛
=∫ ( ∑ 𝛿(Ω − 2𝜋𝑘)) 𝑒 𝑑Ω = ∫ ∑ 𝑒 𝑗Ω𝑛 𝛿(Ω − 2𝜋𝑘) 𝑑Ω
−𝜋 𝑘=−∞ −𝜋 𝑘=−∞

We know that 𝑒 𝑗Ω𝑛 𝛿(Ω − 2𝜋𝑘) = 𝑒 𝑗2𝜋𝑘𝑛 𝛿(Ω − 2𝜋𝑘) = 𝛿(Ω − 2𝜋𝑘) so

+𝜋 ∞ +𝜋 ∞ +𝜋
−1 𝑗Ω𝑛
𝐱[𝑛] = 𝔽 (𝑿(Ω)) = ∫ ∑ 𝑒 𝛿(Ω − 2𝜋𝑘) 𝑑Ω = ∫ ∑ 𝛿(Ω − 2𝜋𝑘) 𝑑Ω = ∫ 𝛿(Ω)𝑑Ω = 1 ∀𝑛
−𝜋 𝑘=−∞ −𝜋 𝑘=−∞ −𝜋

Finally we deduce that if we let 𝐱[𝑛] = 1 = ∑∞


𝑘=−∞ 𝛿[𝑛 − 𝑘] then

∞ ∞ ∞
𝔽
𝐱[𝑛] = 1 for all 𝑛 ↔ 𝑿(Ω) = 2𝜋 ∑ 𝛿(Ω − 2𝜋𝑘) or 𝔽 ( ∑ 𝛿[𝑛 − 𝑘]) = 2𝜋 ∑ 𝛿(Ω − 2𝜋𝑘)
𝑘=−∞ 𝑘=−∞ 𝑘=−∞

Since the continuous F.T. of 𝐱(𝑡) = 1 (for all 𝑡) is 𝛿(𝑡), the DTFT of 𝐱[𝑛] = 1 shall be a
impulse train (or impulse comb), and it turns out to be 2𝜋 ∑∞
𝑘=−∞ 𝛿(Ω − 2𝜋𝑘).

Sampling property: the DTFT of a continuous signal 𝐱(𝑡) sampled with period 𝑇 is obtained
by a periodic duplicationof the continuous Fourier transform 𝑿(ω) with a period ω𝑠 = 2𝜋/𝑇
and scaled by 𝑇. This fact will be used in later on in sampling theory.

 Unit Step Sequence: Since the step function is not absolutely summable, so there is no
ordinary method for obtaining the DTFT. But how can we go beyond this obstacle?
𝔽 𝔽
We define 𝑢[𝑛] = 𝑢1 [𝑛] + 𝑢2 [𝑛] with 𝑢1 [𝑛] ↔ 𝑿1 (𝑗Ω), 𝑢2 [𝑛] ↔ 𝑿2 (𝑗Ω) and
1 1
𝑢1 [𝑛] = − ∞ < 𝑛 < +∞, & 𝑢2 [𝑛] = sgn[𝑛]
2 2
Therefore we express the impulse by 𝛿[𝑛] = 𝑢2 [𝑛] − 𝑢2 [𝑛 − 1] and using the fact that

𝔽(𝛿[𝑛]) = 1 and 𝔽(𝑢2 [𝑛] − 𝑢2 [𝑛 − 1]) = 𝑿2 (𝑗Ω) − 𝑒 −𝑗Ω 𝑿2 (𝑗Ω) = (1 − 𝑒 −𝑗Ω )𝑿2 (𝑗Ω)
Hence we get the following DTFT
1
𝑿2 (𝑗Ω) =
1 − 𝑒 −𝑗Ω
𝔽
We have seen that 𝑢1 [𝑛] = 1/2 for all 𝑛 ↔ 𝑿1 (𝑗Ω) = 𝜋 ∑∞𝑘=−∞ 𝛿(Ω − 2𝜋𝑘). Adding these two
results, we have the final result 𝑿(𝑗Ω) = 𝑿1 (𝑗Ω) + 𝑿2 (𝑗Ω) which is

1
𝑿(Ω) = 𝔽(𝑢[𝑛]) = 𝑿1 (𝑗Ω) + 𝑿2 (𝑗Ω) = 𝜋 ∑ 𝛿(Ω − 2𝜋𝑘) + − ∞ < Ω < +∞
1 − 𝑒 −𝑗Ω
𝑘=−∞

But the function 𝑿(Ω) is periodic with a period 2𝜋 therefore,

𝑿(Ω) = 𝔽(𝑢[𝑛]) = 𝜋𝛿(Ω) + (1 − 𝑒 −𝑗Ω ) −1 |Ω| < 𝜋

 Complex Exponential Sequence: Let we see what is the DTFT of 𝐱[𝑛] = 𝑒 𝑗Ω0 𝑛 , to do this
let we go back the dc gain sequence 𝐱1 [𝑛] = 1 and by using the definition of DTFT we get:
∞ ∞ ∞

𝑿1 (Ω) = 2𝜋 ∑ 𝛿(Ω − 2𝜋𝑘) = 𝔽(𝐱1 [𝑛] = 1) = ∑ 𝐱1 [𝑛]𝑒 −𝑗Ω𝑛 = ∑ 𝑒 −𝑗Ω𝑛


𝑘=−∞ 𝑛=−∞ 𝑛=−∞

As a change of variable we replace Ω by Ω − Ω0 to obtain


∞ ∞ ∞
−𝑗(Ω−Ω0 )𝑛
𝑿(Ω) = 𝑿1 (Ω − Ω0 ) = 2𝜋 ∑ 𝛿(Ω − Ω0 − 2𝜋𝑘) = ∑ 𝑒 = ∑ 𝑒 𝑗Ω0 𝑛 𝑒 −𝑗Ω𝑛 = 𝔽(𝑒 𝑗Ω0 𝑛 )
𝑘=−∞ 𝑛=−∞ 𝑛=−∞

𝑿(Ω) = 𝔽(𝑒 𝑗Ω0 𝑛 ) = 2𝜋 ∑ 𝛿(Ω − Ω0 − 2𝜋𝑘)


𝑘=−∞

And over one period we get: 𝑿(Ω) = 𝔽(𝑒 𝑗Ω0 𝑛 ) = 2𝜋𝛿(Ω − Ω0 ) |Ω| < 𝜋

 Sine and Cosine Sequences: Let we see what is the DTFT of 𝐱[𝑛] = cos(Ω0 𝑛)? In fact this
is just a consequence of the previous result
1 𝔽
𝐱[𝑛] = cos(Ω0 𝑛) = (𝑒 𝑗Ω0 𝑛 + 𝑒 −𝑗Ω0 𝑛 ) ↔ 𝑿(Ω) = 𝜋(𝛿(Ω + Ω0 ) + 𝛿(Ω − Ω0 )) |Ω| < 𝜋
2
1 𝔽
𝐱[𝑛] = sin(Ω0 𝑛) = (𝑒 𝑗Ω0 𝑛 − 𝑒 −𝑗Ω0 𝑛 ) ↔ 𝑿(Ω) = 𝜋𝑗(𝛿(Ω + Ω0 ) − 𝛿(Ω − Ω0 )) |Ω| < 𝜋
2𝑗

 The Sequences 𝐱[𝑛] = 𝛿[𝑛 + 𝑛0 ] ± 𝛿[𝑛 − 𝑛0 ]: What is the inverse Fourier transform of an
impulse located at 𝑛0 ? Applying the definition of inverse Fourier transform yields:

𝔽(𝛿[𝑛 + 𝑛0 ]) = ∑ 𝛿[𝑛 + 𝑛0 ]𝑒 −𝑗Ω𝑛 = 𝑒 𝑗Ω𝑛0


𝑛=−∞
Using the result of DTFT of Dirac impulse gathered with the shift property to obtain

𝔽(𝛿[𝑛 + 𝑛0 ] + 𝛿[𝑛 − 𝑛0 ]) = 𝑒 𝑗Ω𝑛0 + 𝑒 −𝑗Ω𝑛0 = 2 cos(Ω𝑛0 )


𝔽(𝛿[𝑛 + 𝑛0 ] − 𝛿[𝑛 − 𝑛0 ]) = 𝑒 𝑗Ω𝑛0 − 𝑒 −𝑗Ω𝑛0 = 2𝑗 sin(Ω𝑛0 )

 Rectangular Pulse Sequences: Consider the sequence defined by

1 |𝑛| ≤ 𝑁
𝐱[𝑛] = {
0 |𝑛| > 𝑁

To compute the Fourier transform of a pulse we apply the definition of Fourier transform:
𝑁

𝑿(Ω) = ∑ 𝑒 −𝑗Ω𝑛
𝑛=−𝑁

To simplify the summation we use the identity

𝑁 𝑟 𝑁+1 − 𝑟 −𝑁
𝑛≠1
∑ 𝑟𝑛 = { 1 − 𝑟𝑁
𝑛=−𝑁 2𝑁 + 1 𝑛=1
1
𝑁 −𝑗Ω(𝑁+1) 𝑗Ω𝑁
sin (Ω (𝑁 + 2))
𝑒 −𝑒
𝑿(Ω) = ∑ 𝑒 −𝑗Ω𝑛 = { Ω ≠ 0} = Ω≠0
𝑒 −𝑗Ω −1 sin(Ω/2)
𝑛=−𝑁 2𝑁 + 1 Ω=0
{ 2𝑁 + 1 Ω=0
Because we have
Ω 1 1
𝑒 −𝑗Ω(𝑁+1)
−𝑒 𝑗Ω𝑁 𝑒 −𝑗 2 (𝑒 −𝑗Ω(𝑁+2) − 𝑒 𝑗Ω(𝑁+2) )
= Ω Ω Ω
𝑒 −𝑗Ω − 1
𝑒 −𝑗 2 (𝑒 −𝑗 2 − 𝑒 𝑗 2 )

1 1 1
sin (Ω (𝑁 + 2)) (𝑁 + 2) cos (Ω (𝑁 + 2))
Since lim = lim × = 2𝑁 + 1
Ω→0 sin(Ω/2) Ω→0 1/2 cos(Ω/2)

1
sin (Ω (𝑁 + 2))
1 |𝑛| ≤ 𝑁 𝔽
𝐱[𝑛] = { ↔
0 |𝑛| > 𝑁 sin(Ω/2)

But what is about the inverse of rectangular pulse in frequency domain?


𝑊
−1 1 0 ≤ |Ω| ≤ 𝑊 1 𝑊 𝑗Ω𝑛 𝑒 𝑗Ω𝑛 sin(W𝑛)
𝐱[𝑛] = 𝔽 (𝑿(Ω) = { )= ∫ 𝑒 𝑑Ω = | =
0 𝑊 < |Ω| ≤ 𝜋 2𝜋 −𝑊 2𝜋𝑗𝑛 𝜋𝑛
−𝑊

sin(W𝑛) 𝔽 1 0 ≤ |Ω| ≤ 𝑊
𝐱[𝑛] = 0<W<𝜋 ↔ 𝑿(Ω) = {
𝜋𝑛 0 𝑊 < |Ω| ≤ 𝜋
III.III DTFT Convergence Issues: Recall the DTFT analysis and synthesis equations

𝑿(Ω) = 𝔽(𝐱[𝑛]) = ∑ 𝐱[𝑛]𝑒 −𝑗Ω𝑛 Analysis equation


𝑛=−∞
1
𝐱[𝑛] = 𝔽−1 (𝑿(Ω)) = ∫ 𝑿(Ω)𝑒 𝑗Ω𝑛 𝑑Ω Synthesis equation
2𝜋 2𝜋

The analysis equation contains an infinite sum. Sufficient conditions to guarantee


convergence are

▪ 𝐱[𝑛] is absolutely summable, i.e. ∑∞


𝑛=−∞|𝐱[𝑛]𝑒
−𝑗Ω𝑛
| = ∑∞
𝑛=−∞|𝐱[𝑛]| < ∞

▪ or 𝐱[𝑛] has finite energy, i.e ∑∞ 2


𝑛=−∞(𝐱[𝑛]) < ∞

There are no convergence issues associated with the synthesis equation since the integral
is over a finite interval. For example, unlike the CTFT case, the Gibbs phenomenon is
absent when ∫2𝜋 𝑿(Ω)𝑒 𝑗Ω𝑛 𝑑Ω is used.

III.IV Properties of the (DTFT) Fourier Transform Basic properties of the Fourier
transform are presented in the following. There are many similarities to and several
differences from the continuous-time case. Many of these properties are also similar to
those of the z-transform when the ROC of 𝑿(𝑧) includes the unit circle.

❶ Periodicity: Although the DTFT of a signal is typically defined only on a 2π interval, it is


nevertheless straightforward to show that the DTFT is actually 2π periodic 𝑿(Ω + 2𝜋) = 𝑿(Ω)
∞ ∞ ∞
−𝑗(Ω+2𝜋)𝑛 −𝑗Ω𝑛 −𝑗2𝜋𝑛
𝑿(Ω + 2𝜋) = 𝔽(𝐱[𝑛]) = ∑ 𝐱[𝑛]𝑒 = ∑ 𝐱[𝑛]𝑒 𝑒⏟ = ∑ 𝐱[𝑛]𝑒 −𝑗Ω𝑛 = 𝑿(Ω)
𝑛=−∞ 𝑛=−∞ 1 𝑛=−∞

❷ Linearity: Since the DTFT is an infinite sum, it should come as no surprise that it is a
linear operator. Nevertheless, it is a helpful property to know. Suppose we have the
𝔽 𝔽
following DTFT pairs: 𝐱1 [𝑛] ↔ 𝑿1 (Ω) & 𝐱2 [𝑛] ↔ 𝑿2 (Ω). Then by the linearity of the
DTFT we have that, for any constants 𝛼1 and 𝛼2 :
𝔽
𝛼1 𝐱1 [𝑛] + 𝛼2 𝐱 2 [𝑛] ↔ 𝛼1 𝑿1 (Ω) + 𝛼2 𝑿2 (Ω)

❸ DTFT Frequencies: Because 𝑿(Ω) is essentially the inner product of a signal 𝐱[𝑛] with
the signal 𝑒 𝑗Ω𝑛 , we can say that 𝑿(Ω) tells us how strongly the signal 𝑒 𝑗Ω𝑛 appears in 𝐱[𝑛].
𝑿(Ω), then, is a measure of the "frequency content" of the signal 𝐱[𝑛]. Consider the plot
below of the DTFT of some signal 𝐱[𝑛]:

This plot shows us that the signal 𝐱[𝑛] has a significant amount of low-frequency content
(frequencies around 𝜔 = 0), and less high-frequency content (frequencies around 𝜔 = ±𝜋 --
remember that the DTFT is 2π periodic).
❹ The DTFT and Time Shifts: If a signal is shifted in time, what effect might this have on
𝔽
its DTFT? Supposing 𝐱[𝑛] and 𝑿(Ω) are a DTFT pair, we have that: 𝐱[𝑛 − 𝑛0 ] ↔ 𝑒 −𝑗Ω𝑛0 𝑿(Ω)
So shifting a signal in time corresponds to a modulation (multiplication by a complex
sinusoid) in frequency. We can use the DTFT formula to prove this relationship, by way of a
change of variables 𝑚 = 𝑛 − 𝑛0 :
∞ ∞ ∞

]𝑒 −𝑗Ω𝑛 −𝑗Ω(𝑚+𝑛0 ) −𝑗Ω𝑛0


∑ 𝐱[𝑛 − 𝑛0 = ∑ 𝐱[𝑚]𝑒 =𝑒 ∑ 𝐱[𝑚]𝑒 −𝑗Ω𝑚 = 𝑒 −𝑗Ω𝑛0 𝑿(Ω)
𝑛=−∞ 𝑚=−∞ 𝑚=−∞

❺ The DTFT and Time Modulation: We saw above how a shift in time corresponds to
modulation in frequency. What do you suppose happens when a signal is modulated in
time? If you guessed that it is shifted in frequency, you're right! If a signal 𝐱[𝑛] has a DTFT
𝔽
of 𝑿(Ω), then we have this DTFT pair: 𝑒 𝑗Ω0 𝑛 𝐱[𝑛] ↔ 𝑿(Ω − Ω0 ) Below is the proof:
∞ ∞

𝔽(𝑒 𝑗Ω0 𝑛 𝐱[𝑛]) = ∑ 𝐱[𝑛]𝑒 𝑗Ω0 𝑛 𝑒 −𝑗Ω𝑛 = ∑ 𝐱[𝑛]𝑒 −𝑗(Ω−Ω0 )𝑛 = 𝑿(Ω − Ω0 )


𝑛=−∞ 𝑛=−∞

❻ The DTFT and Convolution Suppose that the impulse response of an LTI system is
𝒉[𝑛], the input to the system is 𝐱[𝑛], and the output is 𝐲[𝑛]. Because the system is LTI,
these three signals have a special relationship: 𝐲[𝑛] = 𝐱[𝑛] ⋆ 𝒉[𝑛] = ∑∞
𝑘=−∞ 𝐱[𝑘]𝒉[𝑛 − 𝑘]. The
output 𝐲[𝑛] is the convolution of 𝐱[𝑛] with 𝒉[𝑛]. Just as with the other DTFT properties, it
turns out there is also a relationship in the frequency domain. Consider the DTFT of each
of those signals; call them 𝑯(Ω), 𝑿(Ω), and 𝒀(Ω). The convolution of the signals 𝐱[𝑛] and
𝒉[𝑛] in time corresponds to the multiplication of their DTFTs in frequency:
𝔽
𝐲[𝑛] = 𝐱[𝑛] ⋆ 𝒉[𝑛] ↔ 𝒀(Ω) = 𝑿(Ω)𝑯(Ω)

Proof: For the proof, we take the DTFT of 𝐲[𝑛], using a change of variables along the way:
∞ ∞ ∞
−𝑗Ω𝑛
𝒀(Ω) = ∑ 𝐱[𝑛] ⋆ 𝒉[𝑛]𝑒 = ∑ { ∑ 𝐱[𝑘]𝒉[𝑛 − 𝑘]} 𝑒 −𝑗Ω𝑛
𝑛=−∞ 𝑛=−∞ 𝑘=−∞
∞ ∞

= ∑ ∑ 𝐱[𝑘]𝒉[𝑛 − 𝑘] 𝑒 −𝑗Ω𝑘 𝑒 −𝑗Ω(𝑛−𝑘)


𝑘=−∞ 𝑛=−∞
∞ ∞
−𝑗Ω𝑘
= ∑ 𝐱[𝑘]𝑒 ∑ 𝒉[𝑛 − 𝑘] 𝑒 −𝑗Ω(𝑛−𝑘)
𝑘=−∞ 𝑛=−∞
∞ ∞ ∞
−𝑗Ω𝑘 −𝑗Ω𝑚
= ∑ 𝐱[𝑘]𝑒 ∑ 𝒉[𝑚] 𝑒 = ∑ 𝐱[𝑘]𝑒 −𝑗Ω𝑘 𝑯(Ω)
𝑘=−∞ 𝑛=−∞ 𝑘=−∞

= 𝑯(Ω) ∑ 𝐱[𝑘]𝑒 −𝑗Ω𝑘 = 𝑿(Ω)𝑯(Ω)


𝑘=−∞

This relationship is very important. It gives insight, showing us how LTI systems modify the
frequencies of input signals. It is also useful, because it gives us an alternative way of
finding the output of a system. We could take the DTFTs of the input and impulse
response, multiply them together, and then take the inverse DTFT of the result to find the
output. There are some cases where this process might be easier than finding the
convolution sum.
❼ The DTFT and Duality The duality property of a continuous-time Fourier transform is
𝔽
expressed as 𝑿(𝑡) ↔ 2𝜋𝐱(−𝜔). There is no discrete-time counterpart of this property.
However, there is a duality between the discrete-time Fourier transform and the
𝔽
continuous-time Fourier series. Let 𝐱[𝑛] ↔ 𝑿(Ω) = 𝑿(Ω + 2𝜋) = ∑∞𝑛=−∞ 𝐱[𝑛]𝑒
−𝑗Ω𝑛
. Since Ω is
a continuous variable, letting Ω = 𝑡 and 𝑛 = −𝑘 in the 𝑿(Ω) equation we get:

𝑿(𝑡) = ∑ 𝐱[−𝑘]𝑒 𝑗𝑘𝑡


𝑘=−∞

Since 𝑿(𝑡) is periodic with period 𝑇0 = 2𝜋 and the fundamental frequency Ω0 = 1, the last
equation indicates that the Fourier series coefficients of 𝑿(𝑡) will be 𝐱[−𝑘]. This duality
𝔽𝕊
relationship is denoted by 𝑿(𝑡) ↔ 𝑐𝑘 = 𝐱[−𝑘], where 𝔽𝕊 denotes the Fourier series and
𝑐𝑘 are its Fourier coefficients.

Remark: There are other symmetry relationships as well. For example, signals that are
purely imaginary and odd have DTFTs that are purely real and odd. These types of
symmetry are a result of a property of the complex exponentials which build up the DTFTs.
Any signal of the form 𝑒 𝑗Ω𝑛 is conjugate symmetric, meaning that its real part is even and
its imaginary part is odd. Additionally, for conjugate symmetric signals, their magnitude is
even and their phase is odd.

❽ Time Reversal and Conjugation: Time reversal of the signal causes angular frequency
reversal of the transform. This property will be useful when we consider symmetry
𝔽 𝔽
properties of the DTFT. 𝐱[𝑛] ↔ 𝑿(Ω) ⟹ 𝐱[−𝑛] ↔ 𝑿(−Ω).

Conjugation of the signal causes both conjugation and angular frequency reversal of the
transform. This property will also be useful when we consider symmetry properties of the
𝔽 𝔽
DTFT. 𝐱[𝑛] ↔ 𝑿(Ω) ⟹ 𝐱 ⋆ [𝑛] ↔ 𝑿⋆ (−Ω)

❾ Differentiation in Frequency: Since LTI systems can be represented in terms of


differential equations, it is apparent with this property that converting to the frequency
domain may allow us to convert these complicated differential equations to simpler
𝔽 𝔽 𝑑𝑟
equations involving multiplication and addition. 𝐱[𝑛] ↔ 𝑿(Ω) ⟹ 𝑛𝑟 𝐱[𝑛] ↔ 𝑗 𝑟 𝑑Ω𝑟 𝑿(Ω)

Proof: For the proof, we take the DTFT and then we differentiate with respect to Ω
∞ ∞
𝑑 𝑑 −𝑗Ω𝑛 𝔽 𝑑
𝑿(Ω) = ∑ 𝐱[𝑛] 𝑒 = ∑ −𝑗𝑛𝐱[𝑛]𝑒 −𝑗Ω𝑛 ⟹ 𝑛𝐱[𝑛] ↔ 𝑗 𝑿(Ω)
𝑑Ω 𝑑Ω 𝑑Ω
𝑛=−∞ 𝑛=−∞

By induction we can obtain the general formula

❿ Differencing and Accumulation: The sequence 𝐱[𝑛] − 𝐱[𝑛 − 1] is called the first
difference sequence. Equation 𝔽(𝐱[𝑛] − 𝐱[𝑛 − 1]) = (1 − 𝑒 −𝑗Ω )𝑿(Ω) is easily obtained from the
𝔽
linearity and the time-shifting properties. 𝐲[𝑛] = 𝐱[𝑛] − 𝐱[𝑛 − 1] ↔ 𝒀(Ω) = (1 − 𝑒 −𝑗Ω )𝑿(Ω)

Note that accumulation 𝐲[𝑛] = ∑𝑛𝑘=−∞ 𝐱[𝑘] is the discrete-time counterpart of integration.
This formula can be written in terms of convolution of 𝐱[𝑛] with step sequence 𝒖[𝑛]
That is
𝑘=+∞ 𝑛

𝐲[𝑛] = 𝐱[𝑛] ⋆ 𝒖[𝑛] = ∑ 𝐱[𝑘] 𝒖[𝑛 − 𝑘] = ∑ 𝐱[𝑘] ⟺ 𝒀(Ω) = 𝑿(Ω)𝑼(Ω)


𝑘=−∞ 𝑘=−∞

𝑛
1 𝑿(Ω)
𝒀(Ω) = 𝑿(Ω)𝑼(Ω) = 𝑿(Ω) (𝜋𝛿(Ω) + ) ⟺ 𝒀(Ω) = 𝔽 ( ∑ 𝐱[𝑘]) = 𝜋𝑿(0)𝛿(Ω) +
1 − 𝑒 −𝑗Ω 1 − 𝑒 −𝑗Ω
𝑘=−∞

The impulse term on the right-hand side of Eq. reflects the dc or average value that can
result from the accumulation.

⓫ Time Multiplication: Multiplication in the time domain corresponds to convolution in


the frequency domain:
𝔽 1 1
𝐱[𝑛] = 𝐱1 [𝑛]𝐱 2 [𝑛] ↔ 𝑿(Ω) = 𝑿1 (Ω)⨂𝑿2 (Ω) = ∫ 𝑿 (𝜃)𝑿2 (Ω − 𝜃)𝑑𝜃
2𝜋 2𝜋 2𝜋 1

Proof: Let 𝐱[𝑛] = 𝐱1 [𝑛]𝐱 2 [𝑛]. Then by definition



1
𝑿(Ω) = ∑ { ∫ 𝑿 (𝜃)𝑒 𝑗𝜃𝑛 𝑑𝜃} 𝐱 2 [𝑛]𝑒 −𝑗Ω𝑛
2𝜋 2𝜋 1
𝑛=−∞

Interchanging the order of summation and integration, we get



1 1 1
𝑿(Ω) = ∫ 𝑿1 (𝜃) ( ∑ 𝐱 2 [𝑛] 𝑒 −𝑗(Ω−𝜃)𝑛 ) 𝑑𝜃 = ∫ 𝑿1 (𝜃)𝑿2 (Ω − 𝜃)𝑑𝜃 = 𝑿 (Ω)⨂𝑿2 (Ω)
2𝜋 2𝜋 2𝜋 2𝜋 2𝜋 1
𝑛=−∞

⓬ Odd-Even Properties: If 𝐱[𝑛] is real, let 𝐱[𝑛] = 𝐱 𝑒 [𝑛] + 𝐱 𝑜 [𝑛] where 𝐱 𝑒 [𝑛] and 𝐱 𝑜 [𝑛] are the
even and odd components of 𝐱[𝑛], respectively. Let

𝑿(Ω) = 𝔽(𝐱[𝑛]) = 𝑨(Ω) + 𝑗𝑩(Ω) = |𝑿(Ω)|𝑒 𝜙(Ω)

Then 𝑿(−Ω) = 𝑿⋆ (Ω), is the necessary and sufficient condition for x[n] to be real.

𝔽(𝐱 𝑒 [𝑛]) = 𝑨(Ω) and 𝔽(𝐱 𝑜 [𝑛]) = 𝑩(Ω)


As what we have seen before in the case of continuous FT

𝑨(Ω) = 𝑨(−Ω), 𝑩(Ω) = −𝑩(−Ω), |𝑿(Ω)| = |𝑿(−Ω)| and 𝜙(−Ω) = −𝜙(Ω)

⓭ Parseval's Relations: The Parseval or the energy theorem for DTFT states that
+∞ +∞
1 1
∑ 𝐱1 [𝑛]𝐱 2 [𝑛] = ∫ 𝑿 (𝜃)𝑿2 (−𝜃)𝑑𝜃 and ∑ |𝐱[𝑛]|2 = ∫ |𝑿(𝜃)|2 𝑑𝜃
2𝜋 2𝜋 1 2𝜋 2𝜋
𝑛−∞ 𝑛−∞

Proof: To proof this we use the frequency convolution theorem


+∞
1
𝔽(𝐱1[𝑛]𝐱 2 [𝑛]) = ∑ 𝐱1 [𝑛]𝐱 2 [𝑛] 𝑒 −𝑗Ω𝑛 = ∫ 𝑿 (𝜃)𝑿2 (Ω − 𝜃)𝑑𝜃
2𝜋 2𝜋 1
𝑛−∞
If we take Ω = 0 we get
+∞
1
∑ 𝐱1 [𝑛]𝐱 2 [𝑛] = ∫ 𝑿 (𝜃)𝑿2 (−𝜃)𝑑𝜃
2𝜋 2𝜋 1
𝑛−∞
If 𝐱 2 [𝑛] = 𝐱1⋆ [𝑛] then
+∞ +∞
1 1
∑ 𝐱1 [𝑛]𝐱 2 [𝑛] = ∫ 𝑿1 (𝜃)𝑿2 (−𝜃)𝑑𝜃 ⟺ ∑ |𝐱[𝑛]|2 = ∫ |𝑿(𝜃)|2 𝑑𝜃
2𝜋 2𝜋 2𝜋 2𝜋
𝑛−∞ 𝑛−∞

Parseval’stheorem  Conservation of energy in time and frequency domains

Property Signal Fourier transform

𝐱[𝑛] 𝐗(Ω)
𝐱1 [𝑛] 𝐗1 (Ω)
𝐱 2 [𝑛] 𝐗 2 (Ω)
Linearity 𝛼𝐱 2 [𝑛] + 𝛽𝐱 2 [𝑛] 𝛼𝐗1 (Ω) + 𝛽𝐗 2 (Ω)
Time shifting 𝐱[𝑛 − 𝑛0 ] 𝑒 −𝑗Ω𝑛0 𝐗(Ω)
Differencing 𝐱[𝑛] − 𝐱[𝑛 − 1] (1 − 𝑒 −𝑗Ω )𝐗(Ω)
Frequency scaling 𝑒 𝑗Ω0 𝑛 𝐱[𝑛] 𝐗(Ω − Ω0 )
Conjugation 𝐱 ⋆ [𝑛] 𝐗 ⋆ (−Ω)
Time reversal 𝐱[−𝑛] 𝐗(−Ω)
Frequency differentiation 𝑛𝐱[𝑛] 𝑗𝑑𝐗(Ω)/𝑑Ω
1
Integration ∑𝑛𝑚=−∞ 𝐱[𝑚] 𝐗(Ω) + 𝜋𝐗(0)𝛿(Ω)
1−𝑒 −𝑗Ω
Multiplication 2𝜋 𝐱1 [𝑛]𝐱 2 [𝑛] 𝐗1 (Ω) ⊗ 𝐗 2 (Ω)
Convolution 𝐱1 [𝑛] ⋆ 𝐱 2 [𝑛] 𝐗1 (Ω)𝐗 2 (Ω)

Parseual's theorem − − − − − − − − − − − − − − − − − − − − − − − − − − − − − − − − − −
+∞
1
∑ 𝐱1 [𝑛]𝐱 2 [𝑛] = ∫ 𝑿 (𝜃)𝑿2 (−𝜃)𝑑𝜃
2𝜋 2𝜋 1
𝑛−∞

Common Fourier Transform Pairs with |Ω| < 𝜋


𝐱[𝑛] 𝐗(Ω)
1 2𝜋𝛿(Ω)
𝛿[𝑛] 1
−𝑗Ω𝑛0
𝛿[𝑛 − 𝑛0 ] 𝑒
𝑒 𝑗Ω0 𝑛 2𝜋𝛿(Ω − Ω0 )
1
𝑢[𝑛] + 𝜋𝛿(Ω)
1−𝑒 −𝑗Ω
1
−𝑢[−𝑛 − 1] − 𝜋𝛿(Ω)
1−𝑒 −𝑗Ω
cos[Ω0 𝑛]𝑢[𝑛] 𝜋[𝛿(Ω + Ω0 ) + 𝛿(Ω − Ω0 )]
sin[Ω0 𝑛]𝑢[𝑛] 𝜋𝑗[𝛿(Ω + Ω0 ) − 𝛿(Ω − Ω0 )]
1
𝑎𝑛 𝑢[𝑛], |𝑎| < 1
1−𝑎𝑒 −𝑗Ω
1
− 𝑎𝑛 𝑢[−𝑛 − 1], |𝑎| > 1
1−𝑎𝑒 −𝑗Ω
1
(𝑛 + 1)𝑎𝑛 𝑢[𝑛], |𝑎| < 1 2
(1−𝑎𝑒 −𝑗Ω)
∑∞
𝑘−∞ 𝛿[𝑛 − 𝑘𝑁] Ω0 ∑∞
𝑘−∞ 𝛿(Ω − 𝑘Ω0 ) Ω0 = 2𝜋/𝑁
sin(𝑊𝑛) 1 0 ≤ |Ω| ≤ 𝑊
, 0<𝑊≤𝜋 𝑿(Ω) = {
𝜋𝑛 0 𝑊 < |Ω| ≤ 𝜋
1 |Ω| ≤ 𝑁1 sin(Ω[𝑁1 +1/2])
𝐱[𝑛] = {
0 𝑁1 < |Ω| sin(Ω/2)
General Comments on Fourier Transforms:

▪ The Fourier series (FS) is discrete in frequency domain, since it is the discrete set of
exponentials –integer multiples of 𝛺0 that make up the signal. This is because only a finite
number of frequencies are required to construct a periodic signal.

▪ The DTFT is continuous in frequency domain, since exponentials of a continuum of


frequencies are required to reconstruct a non-periodic signal.

▪ DTFT only exists for sequences that are absolutely summable, and is periodic with 2𝜋

▪ The DTFT of the impulse response is the frequency response of the system, and is
important in the filter design. If you want to design a filter that blocks a certain frequency
𝜔𝑐 , then we design the system such that 𝑯(𝜔𝑐 ) = 0; and if we want the system to pass a
certain frequency 𝜔𝑝 pass then we make sure that 𝑯(𝜔𝑝 ) = 1.

▪ The impulse response of an ideal LPF is infinitely long. ⟹ This is an IIR filter. In fact ℎ[𝑛]
is not absolutely summable ⟹ its DTFT cannot be computed ⟹ an ideal ℎ[𝑛] cannot be
realized! (Example is the LPF 𝑯(Ω) = rect(Ω) ⟹ ℎ[𝑛] = sinc[𝑛]). One possible solution is to
truncate ℎ[𝑛], say with a window function, and then take its DTFT to obtain the frequency
response of a realizable FIR filter.

▪ The Fourier transform is really not used in stability analysis. This is because with the
other transforms (Laplace and Z-transform), each can be totally described by its poles and
zeros and the residues at the poles, as the domains are two-dimensional. Poles and zeros
are not applicable to understanding and application of the Fourier transform, because its
domain is one-dimensional.

▪ The Fourier transform is a tool which allows us to represent a signal f(𝑡) as a continuous
sum of exponentials of the form 𝑒 𝑗𝜔𝑡 , whose frequencies are restricted to the imaginary axis
in the complex plane (𝑠=𝑗𝜔). As we saw previously, such a representation is quite valuable
in the analysis and processing of signals. In the area of system analysis, however, the use
of Fourier transform leaves much to be desired. First, the Fourier transform exists only for
a restricted class of signals and, therefore, cannot be used for such inputs as growing
exponentials. Second, the Fourier transform cannot be used easily to analyze unstable or
even marginally stable systems.

Solved Problems:
Exercise 1: Consider the signal
2𝜋𝑛 2𝜋𝑛
𝐱[𝑛] = cos ( ) + sin ( )
3 7
1. Determine the period of this signal
2. Determine the Fourier series representation

Ans: 1. Write 𝐱[𝑛] = 𝐱₁[𝑛] + 𝐱₂[𝑛]; the periods are 𝑁₁ = 3 and 𝑁₂ = 7, so the total period is their least common multiple (LCM), 𝑁 = 3 × 7 = 21. This is the fundamental period of 𝐱[𝑛].
2. The Fourier series representation

𝐱[𝑛] = (1/2){e^{j(2𝜋/3)𝑛} + e^{−j(2𝜋/3)𝑛}} + (1/2j){e^{j(2𝜋/7)𝑛} − e^{−j(2𝜋/7)𝑛}}
     = (1/2)e^{j7(2𝜋/21)𝑛} + (1/2)e^{−j7(2𝜋/21)𝑛} + (1/2j)e^{j3(2𝜋/21)𝑛} − (1/2j)e^{−j3(2𝜋/21)𝑛}

𝑐₋₃ = −1/2j, 𝑐₃ = 1/2j, 𝑐₇ = 𝑐₋₇ = 1/2 and the others are zero.

Exercise 2: Determine the Fourier series of the output signal 𝐲[𝑛] for an LTI system

𝐱[𝑛] = ∑_{k=−∞}^{∞} 𝛿(𝑛 − 4𝑘) ⟶ 𝐡[𝑛] = (1/2)ⁿ ⟶ 𝐲[𝑛]

Ans: We know that if 𝐇(Ω) = ∑_{n=−∞}^{∞} 𝐡[𝑛]e^{−jΩn} exists, then

𝐱[𝑛] ↔ 𝑎ₖ ⟹ 𝐲[𝑛] ↔ 𝑏ₖ = 𝑎ₖ𝐇(Ω)

But does 𝐇(Ω) exist? It does not, since 𝐡[𝑛] is not absolutely summable:

∑_{n=−∞}^{∞} |𝐡[𝑛]| = ∑_{n=−∞}^{∞} (1/2)ⁿ does not converge (the terms grow without bound as 𝑛 → −∞) ⟹ 𝐇(Ω) does not exist

Which means that: 𝐲[𝑛] has no Fourier series representation.

Exercise 3: Determine the Fourier series of the signal

𝐱[𝑛] = sin(2𝜋𝑛/3) cos(𝜋𝑛/2)

Ans: Using sin(𝛼)cos(𝛽) = [sin(𝛼 + 𝛽) + sin(𝛼 − 𝛽)]/2 we obtain

𝐱[𝑛] = sin(2𝜋𝑛/3) cos(𝜋𝑛/2) = (1/2) sin(7 ⋅ 2𝜋𝑛/12) + (1/2) sin(2𝜋𝑛/12)

The two terms have periods (12/7)𝑚 (an integer for 𝑚 = 7, giving 12) and 12, so the total period is the least common multiple 𝑁 = 12. As we have seen before, we can use the Euler identities for sine and cosine to get

𝑐₋₁ = −1/4j, 𝑐₁ = 1/4j, 𝑐₇ = 1/4j, 𝑐₋₇ = −1/4j and the others are zero.

Exercise 4: Determine the Fourier series of the signal

𝐱[𝑛] = 1 + sin(2𝜋𝑛/𝑁) + 3 cos(2𝜋𝑛/𝑁) + cos(4𝜋𝑛/𝑁 + 𝜋/2)
Ans: Here only a short answer

𝑐₀ = 1, 𝑐₁ = (3/2 − j/2), 𝑐₋₁ = (3/2 + j/2), 𝑐₂ = j/2, 𝑐₋₂ = −j/2 and the others are zero.
2 2 2 2 2 2
Exercise 5: Determine the Fourier series of the signals

𝐱₁[𝑛] = cos(𝜋𝑛/4) and 𝐱₂[𝑛] = cos²(𝜋𝑛/8)
Ans: The first signal is periodic with period 𝑁 = 8; using the Euler identity for the cosine we get

𝐱₁[𝑛] = cos(𝜋𝑛/4) = (1/2){e^{j(2𝜋/8)𝑛} + e^{−j(2𝜋/8)𝑛}} ⟹ 𝑐₁ = 1/2, 𝑐₋₁ = 1/2

The second signal is also periodic with period 𝑁 = 8, because 𝐱₂[𝑛] = (1 + 𝐱₁[𝑛])/2:

𝐱₂[𝑛] = cos²(𝜋𝑛/8) = 1/2 + (1/2) cos(𝜋𝑛/4) = (1/2)(1 + 𝐱₁[𝑛])

Therefore 𝑐₀ = 1/2, 𝑐₁ = 1/4, 𝑐₋₁ = 1/4.

Exercise 6: Determine the Fourier series of the output signal 𝐲[𝑛] for an LTI system


𝐱[𝑛] = ∑_{k=−∞}^{∞} 𝛿(𝑛 − 4𝑘) ⟶ 𝐡[𝑛] = (1/2)^{|𝑛|} ⟶ 𝐲[𝑛]

Ans: We know that 𝐲[𝑛] = 𝐱[𝑛] ⋆ 𝐡[𝑛] = ∑_{r=−∞}^{∞} 𝐱[𝑟]𝐡[𝑛 − 𝑟], and in Fourier series representation 𝐱[𝑛] can be written as

𝐱[𝑛] = ∑_{k=−∞}^{∞} 𝛿(𝑛 − 4𝑘) = (1/4) ∑_{⟨k→4⟩} e^{jk(2𝜋/4)𝑛}  (the period is 𝑁 = 4)

Which means that

𝐲[𝑛] = 𝐱[𝑛] ⋆ 𝐡[𝑛] = ∑_{r=−∞}^{∞} 𝐱[𝑛 − 𝑟]𝐡[𝑟] = ∑_{r=−∞}^{∞} ((1/4) ∑_{⟨k→4⟩} e^{jk(2𝜋/4)(𝑛−𝑟)}) 𝐡[𝑟]
     = (1/4) ∑_{⟨k→4⟩} e^{jk(2𝜋/4)𝑛} (∑_{r=−∞}^{∞} 𝐡[𝑟]e^{−jk(2𝜋/4)𝑟}) = (1/4) ∑_{⟨k→4⟩} 𝐇(Ω) e^{jk(2𝜋/4)𝑛}

𝐱[𝑛] ↔ 𝑎ₖ ⟹ 𝐲[𝑛] ↔ 𝑏ₖ = 𝑎ₖ𝐇(Ω) with Ω = 2𝜋𝑘/4
But does 𝐇(Ω) exist? It does, since 𝐡[𝑛] is absolutely summable:

∑_{n=−∞}^{∞} |𝐡[𝑛]| = ∑_{n=−∞}^{∞} (1/2)^{|𝑛|} = ∑_{n=−∞}^{−1} (1/2)^{−𝑛} + ∑_{n=0}^{∞} (1/2)ⁿ = (2 ∑_{n=0}^{∞} (1/2)ⁿ) − 1 = 3 < ∞

Notice that

𝐡[𝑛] = (1/2)^{|𝑛|} = (1/2)ⁿ𝑢[𝑛] + (1/2)^{−𝑛}𝑢[−𝑛 − 1] ⟹ 𝐇(Ω) = 2/(2 − e^{−jΩ}) − 1/(1 − 2e^{−jΩ})

𝐇(Ω) = 2/(2 − e^{−jΩ}) − 1/(1 − 2e^{−jΩ}) = −3e^{−jΩ}/(2 − 5e^{−jΩ} + 2e^{−2jΩ})

𝑏ₖ = 𝑎ₖ𝐇(Ω)|_{Ω=𝜋k/2} = −3e^{−j𝜋k/2}/(8 − 20e^{−j𝜋k/2} + 8e^{−j𝜋k}) = {3/20, 1/12, 3/20, 3/4}, where 𝑘 = 1 : 4
Exercise 7: Given LTI discrete system described by its difference equation
3 1
𝑦[𝑛] − 𝑦[𝑛 − 1] + 𝑦[𝑛 − 2] = 2𝑥[𝑛]
4 8
Find its impulse response using Discrete-time Fourier transform (DTFT)

Ans: We apply the DTFT to both sides:

(1 − (3/4)e^{−jΩ} + (1/8)e^{−2jΩ}) 𝒀(Ω) = 2𝑿(Ω) ⟹ 𝑯(Ω) = 𝒀(Ω)/𝑿(Ω) = 16/(8 − 6e^{−jΩ} + e^{−2jΩ})

By partial fraction expansion we get

𝑯(Ω) = 2/[(1 − (1/2)e^{−jΩ})(1 − (1/4)e^{−jΩ})] = 4/(1 − (1/2)e^{−jΩ}) − 2/(1 − (1/4)e^{−jΩ})

𝐡[𝑛] = (4(1/2)ⁿ − 2(1/4)ⁿ) 𝑢[𝑛] = (4 − 2(1/2)ⁿ)(1/2)ⁿ 𝑢[𝑛]
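A quick numerical cross-check of this result (a sketch, assuming the Signal Processing Toolbox function impz):

n  = (0:19)';
h1 = impz(2, [1 -3/4 1/8], 20);    % impulse response from the difference equation
h2 = 4*(1/2).^n - 2*(1/4).^n;      % the closed form found above
max(abs(h1 - h2))                  % ~ 0 (round-off only)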
2 4 2 4

Exercise 8: Determine the Discrete-time Fourier transform for the signals

▪ 𝐱1 [𝑛] = 𝛿[𝑛 + 1] + 𝛿[𝑛 − 1]


▪ 𝐱 2 [𝑛] = 𝛿[𝑛 + 2] − 𝛿[𝑛 − 2]
Ans:

𝑿₁(Ω) = ∑_{n=−∞}^{+∞} (𝛿[𝑛 + 1] + 𝛿[𝑛 − 1]) e^{−jΩn} = e^{jΩ} + e^{−jΩ} = 2 cos(Ω)

𝑿₂(Ω) = ∑_{n=−∞}^{+∞} (𝛿[𝑛 + 2] − 𝛿[𝑛 − 2]) e^{−jΩn} = e^{2jΩ} − e^{−2jΩ} = 2j sin(2Ω)

Exercise 9: Determine the Discrete-time Fourier transform for the signals

▪ 𝐱₁[𝑛] = (1/2)^{𝑛−1} 𝑢[𝑛 − 1]    ▪ 𝐱₂[𝑛] = (1/2)^{|𝑛−1|}
Ans:

𝑿₁(Ω) = ∑_{n=−∞}^{+∞} (1/2)^{𝑛−1} 𝑢[𝑛 − 1] e^{−jΩn} = e^{−jΩ} ∑_{n=1}^{+∞} (1/2)^{𝑛−1} e^{−jΩ(𝑛−1)} = e^{−jΩ} ∑_{m=0}^{∞} ((1/2)e^{−jΩ})^m = e^{−jΩ}/(1 − (1/2)e^{−jΩ})

𝑿₂(Ω) = ∑_{n=−∞}^{+∞} (1/2)^{|𝑛−1|} e^{−jΩn} = ∑_{n=−∞}^{0} (1/2)^{1−𝑛} e^{−jΩn} + ∑_{n=1}^{+∞} (1/2)^{𝑛−1} e^{−jΩn}
      = ∑_{n=0}^{∞} (1/2)^{1+𝑛} e^{jΩn} + e^{−jΩ} ∑_{m=0}^{∞} ((1/2)e^{−jΩ})^m
      = (1/2)/(1 − (1/2)e^{jΩ}) + e^{−jΩ}/(1 − (1/2)e^{−jΩ}) = 3e^{−jΩ}/(5 − 4 cos(Ω))
Exercise 10: Determine the inverse Discrete-time Fourier transform for the given 𝑿(Ω)

𝑿(Ω) = {2j for 0 ≤ Ω ≤ 𝜋; −2j for −𝜋 < Ω < 0}

Ans: We use the definition:

𝐱[𝑛] = (1/2𝜋) ∫₀^𝜋 2j e^{jΩn} 𝑑Ω − (1/2𝜋) ∫_{−𝜋}^0 2j e^{jΩn} 𝑑Ω = (1/𝜋𝑛)[e^{jΩn}]₀^𝜋 − (1/𝜋𝑛)[e^{jΩn}]_{−𝜋}^0
     = (1/𝑛𝜋)(e^{j𝜋n} + e^{−j𝜋n} − 2) = (1/𝑛𝜋)(e^{j(𝜋/2)𝑛} − e^{−j(𝜋/2)𝑛})² = −4 sin²(𝜋𝑛/2)/(𝑛𝜋)

Exercise 11: Determine a difference equation that describes the given Discrete-LTI system

𝐱[𝑛] = (4/5)ⁿ𝑢[𝑛] ⟶ 𝐡[𝑛] = ? ⟶ 𝐲[𝑛] = 𝑛(4/5)ⁿ𝑢[𝑛]

Ans: Applying the DTFT to the input and the output we get

𝑿(Ω) = 1/(1 − (4/5)e^{−jΩ}),  𝒀(Ω) = j (d/dΩ)𝑿(Ω) = (4/5)e^{−jΩ}/(1 − (4/5)e^{−jΩ})²

𝑯(Ω) = 𝒀(Ω)/𝑿(Ω) = (4/5)e^{−jΩ}/(1 − (4/5)e^{−jΩ}) = 1/(1 − (4/5)e^{−jΩ}) − 1 ⟹ 𝐡[𝑛] = (4/5)ⁿ𝑢[𝑛] − 𝛿[𝑛] = (4/5)ⁿ𝑢[𝑛 − 1]

The difference equation is 5𝑦[𝑛] − 4𝑦[𝑛 − 1] = 4𝑥[𝑛 − 1].

Exercise 12: Determine the inverse Discrete-time Fourier transform for the given 𝒀(Ω)
𝒀(Ω) = 1/(1 − 𝑎e^{−jΩ})²

Ans: Let us define

𝑿(Ω) = 1/(1 − 𝑎e^{−jΩ}) ⟹ j (d/dΩ)𝑿(Ω) = 𝑎e^{−jΩ}/(1 − 𝑎e^{−jΩ})² ⟹ 𝑿(Ω) + j (d/dΩ)𝑿(Ω) = 1/(1 − 𝑎e^{−jΩ})² = 𝒀(Ω)

Now we use the inverse discrete-time Fourier transform:

𝒀(Ω) = 𝑿(Ω) + j (d/dΩ)𝑿(Ω) ⟺ 𝑦[𝑛] = 𝑥[𝑛] + 𝑛𝑥[𝑛] = (𝑛 + 1)𝑥[𝑛]

But it is well known that 𝑥[𝑛] = 𝑎ⁿ𝑢[𝑛], which means that 𝑦[𝑛] = (𝑛 + 1)𝑎ⁿ𝑢[𝑛].

Exercise 13: Determine the inverse Discrete-time Fourier transform for 𝑿(Ω)

▪ 𝑿(Ω) = cos2 (Ω)


Ans:

𝑿(Ω) = cos²(Ω) = 1/2 + (1/4)e^{−2jΩ} + (1/4)e^{2jΩ} ⟹ 𝐱[𝑛] = (1/2)𝛿[𝑛] + (1/4)𝛿[𝑛 − 2] + (1/4)𝛿[𝑛 + 2]
Exercise 14: Consider the signal shown below in the figure, and let 𝑿(Ω) be its DTFT.

▪ Determine 𝑿(0) and 𝑿(𝜋)
▪ Determine ∫_{−𝜋}^{𝜋} 𝑿(Ω)𝑑Ω

[Figure: stem plot of the finite-length sequence 𝐱[𝑛] = {1, 2, 2, 1, 1, −1} on 𝑛 = −2, …, 3]

Ans:

𝑿(Ω) = ∑_{n=−∞}^{+∞} 𝐱[𝑛]e^{−jΩn} ⟹ 𝑿(0) = ∑_{n} 𝐱[𝑛] = 6 and 𝑿(𝜋) = ∑_{n} 𝐱[𝑛]e^{−j𝜋n} = 2

𝐱[𝑛] = (1/2𝜋) ∫_{−𝜋}^{𝜋} 𝑿(Ω)e^{jΩn} 𝑑Ω ⟹ ∫_{−𝜋}^{𝜋} 𝑿(Ω)𝑑Ω = 2𝜋𝐱[0] = 4𝜋

Exercise 15: Consider a causal Discrete-time LTI system with 2𝐲[𝑛] + 𝐲[𝑛 − 1] = 2𝐱[𝑛]

a- Determine 𝑯(Ω)
b- Determine 𝐲[𝑛] for the inputs: 1. 𝐱[𝑛] = 𝛿[𝑛] − (1/2)𝛿[𝑛 − 1]   2. 𝑿(Ω) = 1 + 2e^{−3jΩ}

Ans:

a- (1 + (1/2)e^{−jΩ}) 𝒀(Ω) = 𝑿(Ω) ⟹ 𝑯(Ω) = 1/(1 + (1/2)e^{−jΩ}) ⟹ 𝐡[𝑛] = (−1/2)ⁿ𝑢[𝑛]

b- We start with the first input: 𝐲[𝑛] = 𝐱[𝑛] ⋆ 𝐡[𝑛] = 𝛿[𝑛] ⋆ 𝐡[𝑛] − (1/2)𝛿[𝑛 − 1] ⋆ 𝐡[𝑛]

𝐲[𝑛] = (−1/2)ⁿ𝑢[𝑛] − (1/2)(−1/2)^{𝑛−1}𝑢[𝑛 − 1] = (−1/2)ⁿ{𝑢[𝑛] − 𝑢[𝑛 − 1]} = 𝛿[𝑛]

Now we deal with the second signal: 𝑿(Ω) = 1 + 2e^{−3jΩ} ⟹ 𝐱[𝑛] = 𝛿[𝑛] + 2𝛿[𝑛 − 3]

𝐲[𝑛] = 𝐱[𝑛] ⋆ 𝐡[𝑛] = (−1/2)ⁿ𝑢[𝑛] + 2(−1/2)^{𝑛−3}𝑢[𝑛 − 3]
Exercise 16: Given a Discrete-time LTI system with 𝐡[𝑛] = 𝑛(1/2)𝑛 𝑢[𝑛] and 𝐱[𝑛] = 1 ∀𝑛

❶ Is the system stable or not? ❷ Determine 𝐲[𝑛]

Ans:

❶ ∑_{n=−∞}^{+∞} |𝐡[𝑛]| = ∑_{n=0}^{+∞} 𝑛(1/2)ⁿ = (1/2)/(1 − 1/2)² = 2 < ∞ ⟹ stable system

❷ 𝐲[𝑛] = 𝐱[𝑛] ⋆ 𝐡[𝑛] = ∑_{k=−∞}^{+∞} 𝐱[𝑛 − 𝑘]𝐡[𝑘] = ∑_{k=−∞}^{+∞} 𝐡[𝑘] = 2 ∀𝑛

{𝐱[𝑛] = 1 ∀𝑛} ⟶ 𝐡[𝑛] = 𝑛(1/2)ⁿ𝑢[𝑛] ⟶ {𝐲[𝑛] = 2 ∀𝑛}
2
Exercise 17: The Fourier transform of the input and output of LTI discrete system are
related by
𝒀(Ω) = 2𝑿(Ω) + e^{−jΩ}𝑿(Ω) − d𝑿(Ω)/dΩ

❶ Is the system linear, time invariant? (Justify)
❷ Find the impulse response of this system; is it stable?

Ans:

𝒀(Ω) = 2𝑿(Ω) + e^{−jΩ}𝑿(Ω) − d𝑿(Ω)/dΩ ⟺ 𝐲[𝑛] = 2𝐱[𝑛] + 𝐱[𝑛 − 1] + j𝑛𝐱[𝑛]

❶ The system is linear because it is described by a linear difference equation (DE), but it is not time invariant because the DE contains a time-varying coefficient (the term j𝑛𝐱[𝑛]).

❷ To find the impulse response we replace 𝐱[𝑛] by a unit impulse in the difference equation and see what happens at the output:

𝐲[𝑛] = 2𝐱[𝑛] + 𝐱[𝑛 − 1] + j𝑛𝐱[𝑛] ⟹ 𝐡[𝑛] = 2𝛿[𝑛] + 𝛿[𝑛 − 1] + j𝑛𝛿[𝑛] = 2𝛿[𝑛] + 𝛿[𝑛 − 1] (since 𝑛𝛿[𝑛] = 0)

This impulse response does not characterize the system, because the system is time-varying.

Exercise 18: Determine the inverse Discrete-time Fourier transform for 𝑿(Ω)

𝑿(Ω) = 6e^{−jΩ}/(6 + e^{−jΩ} − e^{−2jΩ})
Ans: Using partial fraction expansion we get

𝑿(Ω) = e^{−jΩ}/[(1 + (1/2)e^{−jΩ})(1 − (1/3)e^{−jΩ})] = (−6/5)/(1 + (1/2)e^{−jΩ}) + (6/5)/(1 − (1/3)e^{−jΩ})

𝐱[𝑛] = (6/5)((1/3)ⁿ − (−1/2)ⁿ) 𝑢[𝑛]

Exercise 19: A system has the frequency response 𝑯(Ω) = −e^{jΩ} + 2e^{−2jΩ} + e^{4jΩ}.

Determine the output of the system 𝐲[𝑛] if 𝑿(Ω) = 3𝑒 𝑗Ω + 1 − 𝑒 −𝑗Ω + 2𝑒 −3𝑗Ω

Ans:
𝒀(Ω) = 𝑯(Ω)𝑿(Ω) = 4𝑒 −5𝑗Ω − 2𝑒 −3𝑗Ω + 6𝑒 −𝑗Ω + 1 + 𝑒 𝑗Ω − 3𝑒 2𝑗Ω − 𝑒 3𝑗Ω + 𝑒 4𝑗Ω + 3𝑒 5𝑗Ω

Taking the inverse Fourier transform we obtain the following sequence:

𝐲[𝑛] = {… 0,3,1, −1, −3,1, 𝟏, 6,0, −2,0,4,0 … }
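Since multiplying polynomials in e^{−jΩ} is the same as convolving their coefficient sequences, this product can be checked in MATLAB with conv (a sketch; each vector lists the samples from the most advanced to the most delayed index):

x = [3 1 -1 0 2];      % x[n] on n = -1..3, read off from X(Omega)
h = [1 0 0 -1 0 0 2];  % h[n] on n = -4..2, read off from H(Omega)
y = conv(x, h)         % y[n] on n = -5..5: [3 1 -1 -3 1 1 6 0 -2 0 4]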



Exercise 20: Given LTI discrete system described by its difference equation

𝑦[𝑛] − 𝑦[𝑛 − 1] − 𝑦[𝑛 − 2] = 𝑥[𝑛 − 1]

Find the impulse response of this system, is the system stable, causal or not?
Ans: Applying the Fourier transform (DTFT) we get

𝑯(Ω) = 𝒀(Ω)/𝑿(Ω) = e^{jΩ}/(e^{2jΩ} − e^{jΩ} − 1) ⟹ 𝑯(Ω)/e^{jΩ} = 1/(e^{2jΩ} − e^{jΩ} − 1) = 𝑘₁/(e^{jΩ} − 𝛼) + 𝑘₂/(e^{jΩ} − 𝛽)

Using the partial fraction expansion we obtain

𝑘₁ = −𝑘₂ = 1/(𝛼 − 𝛽),  𝛼 = (1 + √5)/2,  𝛽 = (1 − √5)/2

𝑯(Ω) = (1/(𝛼 − 𝛽)){e^{jΩ}/(e^{jΩ} − 𝛼) − e^{jΩ}/(e^{jΩ} − 𝛽)} ⟹ 𝐡[𝑛] = ((𝛼ⁿ − 𝛽ⁿ)/(𝛼 − 𝛽)) 𝑢[𝑛]

The system is causal, but unstable because one pole (𝛼) lies outside the unit circle.

Exercise 21: Consider the impulse response shown below in the figure.

❶ Is the system causal, stable? (Justify)
❷ Find the output of this system if 𝐱[𝑛] = 𝑢[𝑛] − 𝑢[𝑛 − 5]

[Figure: stem plot of 𝐡[𝑛]: value 1 at 𝑛 = 0 and value 2 at 𝑛 = ±1]

Ans:

𝐡[𝑛] = {1 for 𝑛 = 0; 2 for 𝑛 = ±1; 0 elsewhere} = 𝛿[𝑛] + 2(𝛿[𝑛 + 1] + 𝛿[𝑛 − 1])

❶ The system is non-causal because 𝐡[𝑛] ≠ 0 for 𝑛 < 0. The system is stable because 𝐡[𝑛] is absolutely summable:

𝑯(Ω) = 2e^{jΩ} + 1 + 2e^{−jΩ} = e^{−jΩ}(2e^{2jΩ} + e^{jΩ} + 2) ⟹ ∑_{−∞}^{∞} |𝐡[𝑛]| = 𝑯(0) = 5

❷ The output of the system is

𝐲[𝑛] = 𝐱[𝑛] ⋆ 𝐡[𝑛] = 𝐱[𝑛] ⋆ (𝛿[𝑛] + 2(𝛿[𝑛 + 1] + 𝛿[𝑛 − 1])) = 𝐱[𝑛] + 2𝐱[𝑛 − 1] + 2𝐱[𝑛 + 1]
     = 𝑢[𝑛] + 2(𝑢[𝑛 + 1] + 𝑢[𝑛 − 1]) − 𝑢[𝑛 − 5] − 2𝑢[𝑛 − 4] − 2𝑢[𝑛 − 6]

Exercise 22: Consider discrete time LTI system with impulse response 𝐡[𝑛]
𝐱[𝑛] ⟶ 𝐡[𝑛] = 𝑛(1/2)ⁿ𝑢[𝑛] ⟶ 𝐲[𝑛]

❶ Is the system stable?
❷ What is the output of the system if 𝐱[𝑛] = 1 ∀𝑛?

You should use the Fourier transform (DTFT) method


Ans:

❶ The system is stable when 𝐡[𝑛] is absolutely summable, that is ∑_{−∞}^{∞} |𝐡[𝑛]| < ∞. We evaluate this quantity using the DTFT:

𝑯(Ω) = ∑_{−∞}^{∞} 𝑛(1/2)ⁿ𝑢[𝑛]e^{−jΩn} = j ∑_{n=0}^{∞} (1/2)ⁿ (d/dΩ)e^{−jΩn} = j (d/dΩ)(∑_{n=0}^{∞} (1/2)ⁿe^{−jΩn})
     = j (d/dΩ)(2/(2 − e^{−jΩ})) = (1/2)e^{−jΩ}/(1 − (1/2)e^{−jΩ})²

Finally we obtain

∑_{−∞}^{∞} |𝐡[𝑛]| = ∑_{−∞}^{∞} 𝑛(1/2)ⁿ𝑢[𝑛] = 𝑯(0) = 2

The impulse response 𝐡[𝑛] is absolutely summable ⟹ the system is stable.

❷ 𝐲[𝑛] = 𝐱[𝑛] ⋆ 𝐡[𝑛] = ∑_{k=−∞}^{+∞} 𝐱[𝑛 − 𝑘]𝐡[𝑘] = ∑_{k=0}^{∞} 𝑘(1/2)ᵏ = 𝑯(0) = 2 ∀𝑛

Exercise 23: ❶ Convolve the following signals shown below in figure (using DTFT)

[Figure: 𝐱[𝑛] = {1, −1, 1, −1, 1} on 𝑛 = −2, …, 2, convolved (⋆) with 𝐡[𝑛] = {−1, 2, 0, 2, −1} on 𝑛 = −2, …, 2]
❷ Let 𝐡[𝑛] be the impulse response of an LTI system. Is the system causal, stable?
❸ Find the output accumulation ∑_{−∞}^{+∞} 𝐲_{new}[𝑛] of this system, if the new input is 𝐱_{new}[𝑛] = 𝐱[𝑛 − 1]

Ans: ❶ Applying the DTFT on both 𝐱[𝑛] and 𝐡[𝑛] we obtain

𝑿(Ω) = 𝑒 2𝑗Ω − 𝑒 𝑗Ω + 1 − 𝑒 −𝑗Ω + 𝑒 −2𝑗Ω and 𝑯(Ω) = −𝑒 2𝑗Ω + 2𝑒 𝑗Ω + 2𝑒 −𝑗Ω − 𝑒 −2𝑗Ω

We know that

𝒀(Ω) = 𝑿(Ω)𝑯(Ω) ⟹ 𝒀(Ω) = −𝑒 4𝑗Ω + 3𝑒 3𝑗Ω − 3𝑒 2𝑗Ω + 5𝑒 𝑗Ω − 6 + 5𝑒 −𝑗Ω − 3𝑒 −2𝑗Ω + 3𝑒 −3𝑗Ω − 𝑒 −4𝑗Ω

𝐲[𝑛] = {…, 0, −1, 3, −3, 5, −𝟔, 5, −3, 3, −1, 0, …}

❷ From the graph we see that 𝐡[𝑛] is absolutely summable ⟹ the system is stable, but is
not causal because of negative arguments in 𝐡[𝑛].

❸ we have seen that 𝑯(Ω) = −𝑒 2𝑗Ω + 2𝑒 𝑗Ω + 2𝑒 −𝑗Ω − 𝑒 −2𝑗Ω and from the other side we know
that the new output is 𝒀𝑛𝑒𝑤 (Ω) = 𝑿𝑛𝑒𝑤 (Ω)𝑯(Ω) = (𝑒 −𝑗Ω 𝑿(Ω)) 𝑯(Ω) because of time invariance

𝐱[𝑛] ⟶ 𝐡[𝑛] ⟶ 𝐲[𝑛] ⟹ 𝐱 𝑛𝑒𝑤 [𝑛] = 𝐱[𝑛 − 1] ⟶ 𝐡[𝑛] ⟶ 𝐲𝑛𝑒𝑤 [𝑛] = 𝐲[𝑛 − 1]
⟹ ∑_{−∞}^{+∞} 𝐲_{new}[𝑛] = ∑_{−∞}^{+∞} 𝐲[𝑛] = −1 + 3 − 3 + 5 − 6 + 5 − 3 + 3 − 1 = 2

Exercise 24: Convolve the following signals


𝐱[𝑛] = 𝑢[𝑛] + 𝑢[−𝑛 − 1] and 𝐡[𝑛] = {(1/2)ⁿ for 𝑛 ≥ 0; 2ⁿ for 𝑛 < 0} = (1/2)ⁿ𝑢[𝑛] + 2ⁿ𝑢[−𝑛 − 1]

Ans: 𝐱[𝑛] = 𝑢[𝑛] + 𝑢[−𝑛 − 1] = 1 ∀𝑛 ⟹ 𝐲[𝑛] = 𝐱[𝑛] ⋆ 𝐡[𝑛] = ∑_{−∞}^{+∞} 𝐡[𝑛]

𝐲[𝑛] = ∑_{n=−∞}^{−1} 2ⁿ + ∑_{n=0}^{+∞} (1/2)ⁿ = −1 + 2(∑_{n=0}^{+∞} (1/2)ⁿ) = 3 ∀𝑛

{𝐱[𝑛] = 1 ∀𝑛} ⟶ 𝐡[𝑛] ⟶ {𝐲[𝑛] = 3 ∀𝑛}

In this exercise we cannot use DTFT because 𝐱[𝑛] is not absolutely summable.

Exercise 25: Given an LTI system described by

𝐡[𝑛] = (1/2)^{|𝑛|} and 𝐱[𝑛] = sin(3𝜋𝑛/4)
Find the Fourier series representation of 𝐲[𝑛] = 𝐱[𝑛] ⋆ 𝐡[𝑛]

Ans:

𝐲[𝑛] = 𝐱[𝑛] ⋆ 𝐡[𝑛] = ∑_{r=−∞}^{+∞} 𝐱[𝑛 − 𝑟]𝐡[𝑟] = ∑_{r=−∞}^{+∞} (∑_{⟨𝑁⟩} 𝑎ₖ e^{jk(3𝜋/4)(𝑛−𝑟)}) 𝐡[𝑟]

Changing the order of summation we obtain

𝐲[𝑛] = ∑_{⟨𝑁⟩} 𝑎ₖ (∑_{r=−∞}^{+∞} 𝐡[𝑟]e^{−jΩ𝑟}) e^{jk(3𝜋/4)𝑛} = ∑_{⟨𝑁⟩} 𝑎ₖ𝑯(Ω)e^{jk(3𝜋/4)𝑛} = ∑_{⟨𝑁⟩} 𝑏ₖ e^{jk(3𝜋/4)𝑛}

𝑏ₖ = 𝑎ₖ𝑯(Ω) with Ω = 3𝜋𝑘/4

Since 𝐡[𝑛] is absolutely summable, 𝑯(Ω) exists, so we can compute it:

𝑯(Ω) = ∑_{−∞}^{+∞} 𝐡[𝑛]e^{−jΩn} = ∑_{n=−∞}^{−1} 2ⁿe^{−jΩn} + ∑_{n=0}^{+∞} (1/2)ⁿe^{−jΩn} = −1 + ∑_{n=0}^{+∞} (1/2)ⁿe^{jΩn} + ∑_{n=0}^{+∞} (1/2)ⁿe^{−jΩn}
     = 1/(1 − (1/2)e^{jΩ}) + 1/(1 − (1/2)e^{−jΩ}) − 1 = 3/(5 − 4 cos(Ω))

Now let us compute the Fourier series coefficients of 𝐱[𝑛]. Since 3𝜋/4 = 2𝜋(3/8), the fundamental period of 𝐱[𝑛] is 𝑁 = 8; writing 𝐱[𝑛] in terms of e^{±j(3𝜋/4)𝑛}:

𝐱[𝑛] = sin(3𝜋𝑛/4) = (1/2j)e^{j(3𝜋/4)𝑛} − (1/2j)e^{−j(3𝜋/4)𝑛} ⟹ 𝑎₁ = 1/2j, 𝑎₋₁ = −1/2j (indexing the harmonics of Ω₀ = 3𝜋/4) and the others are zero

𝑏ₖ = 𝑎ₖ𝑯(3𝜋𝑘/4) ⟹ 𝑏₋₁ = 3j/(10 + 4√2) and 𝑏₁ = −3j/(10 + 4√2)

Exercise 26: Given an LTI system described by 𝐡[𝑛] = (0.5)𝑛 𝑢[𝑛], compute the output of
this system for ❶ 𝐱[𝑛] = (3/4)𝑛 𝑢[𝑛] ❷ 𝐱[𝑛] = (𝑛 + 1)(1/4)𝑛 𝑢[𝑛]

Ans:

❶ 𝒀(Ω) = 𝑯(Ω)𝑿(Ω) = (1 − 0.5e^{−jΩ})⁻¹(1 − 0.75e^{−jΩ})⁻¹ = 3(1 − 0.75e^{−jΩ})⁻¹ − 2(1 − 0.5e^{−jΩ})⁻¹

which means that 𝐲[𝑛] = (3(0.75)ⁿ − 2(0.5)ⁿ) 𝑢[𝑛].

❷ 𝐱[𝑛] = (𝑛 + 1)(1/4)ⁿ𝑢[𝑛] = 𝑛(1/4)ⁿ𝑢[𝑛] + (1/4)ⁿ𝑢[𝑛]

𝑿(Ω) = (1/4)e^{−jΩ}/(1 − (1/4)e^{−jΩ})² + 1/(1 − (1/4)e^{−jΩ}) = 1/(1 − (1/4)e^{−jΩ})²,  𝑯(Ω) = 1/(1 − (1/2)e^{−jΩ})

𝒀(Ω) = 𝑯(Ω)𝑿(Ω) = 1/[(1 − (1/4)e^{−jΩ})²(1 − (1/2)e^{−jΩ})] = 4/(1 − (1/2)e^{−jΩ}) − 2/(1 − (1/4)e^{−jΩ}) − 1/(1 − (1/4)e^{−jΩ})²

𝐲[𝑛] = (4(1/2)ⁿ − 2(1/4)ⁿ − (𝑛 + 1)(1/4)ⁿ) 𝑢[𝑛] = (4(1/2)ⁿ − (𝑛 + 3)(1/4)ⁿ) 𝑢[𝑛]
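A numerical cross-check of part ❷ (a sketch): implement 𝑯(Ω) as the recursion y[n] = 0.5y[n−1] + x[n] with filter and compare with the closed form:

n  = (0:30)';
x  = (n+1).*(1/4).^n;                 % the input of part 2
y  = filter(1, [1 -0.5], x);          % H(Omega) = 1/(1 - 0.5 e^{-j Omega})
yc = 4*(1/2).^n - (n+3).*(1/4).^n;    % closed-form output found above
max(abs(y - yc))                      % ~ 0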
2 4 4 2 4

Exercise 27: Consider the transfer function of LTI system

𝑯(Ω) = (1 − 2e^{−jΩ})/(1 − (1/4)e^{−jΩ})
Find 𝐡[𝑛] and realize the system

Ans:

𝑯(Ω) = (1 − 2e^{−jΩ})/(1 − (1/4)e^{−jΩ}) = 1/(1 − (1/4)e^{−jΩ}) − 2e^{−jΩ}/(1 − (1/4)e^{−jΩ})

𝐡[𝑛] = (1/4)ⁿ𝑢[𝑛] − 2(1/4)^{𝑛−1}𝑢[𝑛 − 1] = 𝛿[𝑛] + ((1/4)ⁿ − 8(1/4)ⁿ) 𝑢[𝑛 − 1] = 𝛿[𝑛] − 7(1/4)ⁿ𝑢[𝑛 − 1]

Exercise 28: Consider the transfer function of LTI system

𝑯(Ω) = (1 − (7/4)e^{−jΩ} − (1/2)e^{−2jΩ})/(1 + (1/4)e^{−jΩ} − (1/8)e^{−2jΩ})

Find 𝐡[𝑛] and realize the system

Ans: Method 1: partial fractions in terms of e^{−jΩ}:

𝑯(Ω) = 4 + (𝑯(Ω) − 4) = 4 + (5/3)/(1 + (1/2)e^{−jΩ}) − (14/3)/(1 − (1/4)e^{−jΩ})

𝐡[𝑛] = 4𝛿[𝑛] + ((5/3)(−1/2)ⁿ − (14/3)(1/4)ⁿ) 𝑢[𝑛]

Method 2: partial fractions in terms of e^{jΩ}:

𝑯(Ω) = (e^{2jΩ} − (7/4)e^{jΩ} − 1/2)/(e^{2jΩ} + (1/4)e^{jΩ} − 1/8) = 1 − ((5/6)/(e^{jΩ} + 1/2) + (7/6)/(e^{jΩ} − 1/4))

From the above results we obtain

𝐡[𝑛] = 𝛿[𝑛] − ((5/6)(−1/2)^{𝑛−1} + (7/6)(1/4)^{𝑛−1}) 𝑢[𝑛 − 1]

The two formulas for 𝐡[𝑛] are equivalent (use MATLAB to check this, as sketched below).
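Here is one way to do the check (a sketch):

n  = 0:20;
h1 = 4*(n==0) + ((5/3)*(-1/2).^n - (14/3)*(1/4).^n);              % method 1
h2 = (n==0) - ((5/6)*(-1/2).^(n-1) + (7/6)*(1/4).^(n-1)).*(n>=1); % method 2
max(abs(h1 - h2))   % ~ 0, so the two expressions agree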
Exercise 29: Determine the signal if its Discrete-time Fourier transform (DTFT) is
𝟏) 𝑿(Ω) = {0 for 0 ≤ |Ω| ≤ 𝑊; 1 for 𝑊 < |Ω| ≤ 𝜋}    𝟐) 𝑿(Ω) = ∑_{k=−∞}^{+∞} (−1)ᵏ𝛿(Ω − 𝑘𝜋/2)

Ans:

𝟏) 𝐱[𝑛] = (1/2𝜋) ∫_{2𝜋} 𝑿(Ω)e^{jΩn} 𝑑Ω = (1/2𝜋) ∫_{−𝜋}^{−𝑊} e^{jΩn} 𝑑Ω + (1/2𝜋) ∫_{𝑊}^{𝜋} e^{jΩn} 𝑑Ω = (sin(𝜋𝑛) − sin(𝑊𝑛))/(𝑛𝜋)
      = sin(𝜋𝑛)/(𝑛𝜋) − sin(𝑊𝑛)/(𝑛𝜋) = 𝛿[𝑛] − sin(𝑊𝑛)/(𝑛𝜋)

𝟐) 𝐱[𝑛] = (1/2𝜋) ∫_{2𝜋} ∑ₖ (−1)ᵏ𝛿(Ω − 𝑘𝜋/2) e^{jΩn} 𝑑Ω = (1/2𝜋) ∑ₖ (−1)ᵏ ∫_{2𝜋} 𝛿(Ω − 𝑘𝜋/2) e^{jΩn} 𝑑Ω

In any interval of length 2𝜋 there are exactly 4 impulses; taking for instance (−3𝜋/4, 5𝜋/4] we get 𝑘 = −1, 0, 1, 2:

𝐱[𝑛] = (1/2𝜋) ∑_{k=−1}^{2} (−1)ᵏ e^{jk(𝜋/2)𝑛} = (1/2𝜋)(1 − 2 cos(𝜋𝑛/2) + cos(𝜋𝑛)) = 1/(2𝜋) − (1/𝜋) cos(𝜋𝑛/2) + (1/2𝜋) cos(𝜋𝑛)

Exercise 30: The input 𝐫[𝑛] and output 𝐱[𝑛] of a LTI system are related by

𝐱[𝑛] = 𝐫[𝑛] − 𝑒 −8𝑎 𝐫[𝑛 − 8] where 0 < 𝑎 < 1

❶ Determine the frequency response 𝑯(Ω). Is the system stable?
❷ It is desired to design a system 𝑯_d(Ω) = 𝒀(Ω)/𝑿(Ω) such that 𝐲[𝑛] = 𝐫[𝑛]. What must this system be? How is it connected? What are the poles of 𝑯_d(Ω)? Is it stable?

Ans: ❶ Determination of 𝑯(Ω) = 𝑿(Ω)/𝑹(Ω):

𝑿(Ω) = (1 − e^{−8𝑎}e^{−8jΩ})𝑹(Ω) = (1 − e^{−8(𝑎+jΩ)})𝑹(Ω) ⟹ 𝑯(Ω) = 𝑿(Ω)/𝑹(Ω) = (e^{8jΩ} − e^{−8𝑎})/e^{8jΩ}

We see that all 8 poles are at the origin ⟹ stable system.

❷ The second system is the inverse of 𝑯(Ω), that is 𝑯_d(Ω) = 1/𝑯(Ω), connected in cascade:

𝐫[𝑛] ⟶ 𝑯(Ω) ⟶ 𝐱[𝑛] ⟶ 𝑯_d(Ω) = 1/𝑯(Ω) ⟶ 𝐲[𝑛]

𝑯_d(Ω) = e^{8jΩ}/(e^{8jΩ} − e^{−8𝑎}) = 1/(1 − e^{−8𝑎}𝑧⁻⁸) with 𝑧 = e^{jΩ}

The eight poles of 𝑯_d(Ω) are located at 𝑧⁸ = e^{−8𝑎} ⟹ |𝑧| = e^{−𝑎}; the poles are distributed along the circle of radius 𝑟 = e^{−𝑎} < 1 ⟹ stable system.
Exercise 31: Given the following system

𝐱[𝑛] ⟶ 𝑯₁(Ω) ⟶ 𝐰[𝑛] ⟶ 𝑯₂(Ω) ⟶ 𝐲[𝑛]

𝑯₁(Ω) = (2 − e^{−jΩ})/(1 + (1/2)e^{−jΩ}) and 𝑯₂(Ω) = 1/(1 − (1/2)e^{−jΩ} + (1/4)e^{−2jΩ})

❶ Determine the difference equation relating input to output
❷ Give a realization of this difference equation

Ans: Notice that

(1 + (1/2)e^{−jΩ})(1 − (1/2)e^{−jΩ} + (1/4)e^{−2jΩ}) = 1 + (1/8)e^{−3jΩ}

𝑯(Ω) = 𝑯₁(Ω)𝑯₂(Ω) = {(2 − e^{−jΩ})/(1 + (1/2)e^{−jΩ})}{1/(1 − (1/2)e^{−jΩ} + (1/4)e^{−2jΩ})} = (2 − e^{−jΩ})/(1 + (1/8)e^{−3jΩ})

We deduce that

𝐲[𝑛] + (1/8)𝐲[𝑛 − 3] = 2𝐱[𝑛] − 𝐱[𝑛 − 1]
Exercise 32: Given the following system described by

𝐱[𝑛] ⟶ 𝑯(Ω) ⟶ 𝐲[𝑛]

where the input and the impulse response are 𝐱[𝑛] = cos(Ω₀𝑛)𝑢[𝑛] and 𝐡[𝑛] = 𝑢[𝑛], find the output 𝐲[𝑛].

Ans: Observe that

(e^{−jΩ₀(𝑛+1)/2} − e^{jΩ₀(𝑛+1)/2})/(e^{−jΩ₀/2} − e^{jΩ₀/2}) = sin(Ω₀(𝑛+1)/2)/sin(Ω₀/2) and (1/2)(e^{jΩ₀𝑛/2} + e^{−jΩ₀𝑛/2}) = cos(Ω₀𝑛/2)

Now we carry out the convolution between the input and the impulse response:

𝐲[𝑛] = ∑_{k=−∞}^{+∞} cos(Ω₀𝑘)𝑢[𝑛 − 𝑘]𝑢[𝑘] = ∑_{k=0}^{𝑛} cos(Ω₀𝑘) = (1/2) ∑_{k=0}^{𝑛} (e^{jΩ₀𝑘} + e^{−jΩ₀𝑘})
     = (1/2)((1 − e^{jΩ₀(𝑛+1)})/(1 − e^{jΩ₀}) + (1 − e^{−jΩ₀(𝑛+1)})/(1 − e^{−jΩ₀}))

Factoring e^{±jΩ₀(𝑛+1)/2} out of the numerators and e^{±jΩ₀/2} out of the denominators and recombining the two terms,

𝐲[𝑛] = (1/2)(e^{jΩ₀𝑛/2} + e^{−jΩ₀𝑛/2}) ⋅ (e^{−jΩ₀(𝑛+1)/2} − e^{jΩ₀(𝑛+1)/2})/(e^{−jΩ₀/2} − e^{jΩ₀/2}) = cos(Ω₀𝑛/2) ⋅ sin(Ω₀(𝑛+1)/2)/sin(Ω₀/2)

Finally

𝐱[𝑛] = cos(Ω₀𝑛)𝑢[𝑛] ⟶ 𝐡[𝑛] = 𝑢[𝑛] ⟶ 𝐲[𝑛] = cos(Ω₀𝑛/2) sin(Ω₀(𝑛+1)/2)/sin(Ω₀/2), 𝑛 ≥ 0
CHAPTER VII:
The Fast Fourier
Transform and Discrete
Time Systems

I. The Discrete Fourier Transform (DFT)


I.I. Matrix form of the DFT
I.II. Relationship of DFT to the DTFT
I.III. Zero padding
II. The Fast Fourier Transform (FFT)
II.I Two-point DFT (N = 2)
II.II Four-point DFT (N = 4)
II.III N-point DFT (N = 2^r)
II.IV Matrix Form of the DFT

Discrete Fourier transform is the most important discrete transform, used to perform
Fourier analysis in many practical applications. Since it deals with a finite amount of
data, it can be implemented in computers by numerical algorithms or even dedicated
hardware. These implementations usually employ efficient fast Fourier transform
(FFT) algorithms; so much so that the terms "FFT" and "DFT" are often used
interchangeably. Prior to its current usage, the "FFT" initialism may have also been
used for the ambiguous term "finite Fourier transform".
The Fast Fourier Transform
and Discrete Time Systems
I. The Discrete Fourier Transform (DFT): The discrete-time Fourier transform (DTFT) may not be practical for analysis because it is a function of the continuous frequency variable, and we cannot use a digital computer to calculate a continuum of functional values. The DFT is a frequency analysis tool for aperiodic, finite-duration discrete-time signals which is practical because it is discrete in frequency.

The discrete-time Fourier transform (DTFT) is the (conventional) Fourier transform of a discrete-time signal. Its output is continuous in frequency and periodic. The discrete Fourier transform (DFT) can be seen as the sampled version (in the frequency domain) of the DTFT output. It is used to calculate the frequency spectrum of a discrete-time signal with a computer, because computers can only handle a finite number of values. Neither the DTFT nor the CTFT can be implemented in digital logic, due to the infinite length of the signal; if you make the signal finite in length and discrete in time, you get the DFT of the signal.

In the DTFT, a discrete, aperiodic time-domain signal is transformed into a continuous, periodic frequency-domain signal. In the DFT, the input is that continuous, periodic frequency-domain signal (the DTFT), and the DFT gives discrete samples of it. Moreover, the DFT is mainly used in computer-based analysis, since computers store data as discrete sequences of finite length; storing frequency coefficients over a continuous domain is not possible in digital computation.

DTFT (Discrete-Time Fourier Transform): if we apply the discrete-time Fourier transform to a sequence, we obtain the DTFT.
1. Used for finite and infinite sequences
2. It is only theoretical
3. Cannot be implemented practically
4. It is periodic and continuous
5. Denoted by 𝑿(Ω) or 𝑿(e^{jΩ})

DFT (Discrete Fourier Transform): if we sample one period of a DTFT at a finite number of frequency points, we get the DFT.
1. Used only for finite sequences
2. It is a practical method
3. It can be used practically (computers)
4. Non-periodic and non-continuous
5. Denoted by 𝑿[𝑘] or 𝑿[e^{jΩ𝑘}]

Let the signal 𝐱[𝑛] be a finite-length sequence of length 𝑁, that is, 𝐱[𝑛] = 0 outside the
range 0 ≤ 𝑛 ≤ 𝑁 − 1. The DFT of 𝐱[𝑛], denoted as 𝑿[𝑘], is defined by
𝑿[𝑘] = DFT(𝐱[𝑛]) = ∑_{n=0}^{N−1} 𝐱[𝑛]𝑊_N^{kn} with 𝑘 = 0, 1, 2, …, 𝑁 − 1

The inverse DFT (IDFT) is given by

𝐱[𝑛] = IDFT(𝑿[𝑘]) = (1/𝑁) ∑_{k=0}^{N−1} 𝑿[𝑘]𝑊_N^{−kn} with 𝑛 = 0, 1, 2, …, 𝑁 − 1

where 𝑊_N is the 𝑁-th root of unity given by 𝑊_N = (e^{−2𝜋j})^{1/N} = e^{−j2𝜋/N}. The DFT pair is denoted by

𝐱[𝑛] ⟷ 𝑿[𝑘] or 𝑿[𝑘] = DFT(𝐱[𝑛]) or 𝐱[𝑛] = IDFT(𝑿[𝑘])


Example: 01 Compute the DFT of the following sequences

a) 𝐱[𝑛] = [1, 0, −1, 0]; b) 𝐱[𝑛] = [𝑗, 0, 𝑗, 1]; c) 𝐱[𝑛] = [1, 1, 1, 1, 1, 1, 1, 1];
d) 𝐱[𝑛] = cos(0.25𝜋𝑛) ; 𝑛 = 0, . . . , 7 e) 𝐱[𝑛] = 0.9𝑛 , 𝑛 = 0, . . . , 7

Solution: a) Firstly, 𝑁 = 4, therefore 𝑊_N = e^{−j2𝜋/4} = −j. Therefore, for 𝑘 = 0, 1, 2, 3:

𝑿[𝑘] = ∑_{n=0}^{3} 𝐱[𝑛](−j)^{kn} = 1 − (−j)^{2k} = 1 − (−1)ᵏ = [0, 2, 0, 2];

b) Similarly, 𝑁 = 4 and 𝑊_N = −j. Therefore, for 𝑘 = 0, 1, 2, 3:

𝑿[𝑘] = ∑_{n=0}^{3} 𝐱[𝑛](−j)^{kn} = j + j(−j)^{2k} + (−j)^{3k} = j + j(−1)ᵏ + (j)ᵏ = [1 + 2j, j, −1 + 2j, −j];

c) 𝑁 = 8 and 𝑊₈ = e^{−j2𝜋/8} = e^{−j𝜋/4}. Applying the geometric sum we obtain

𝑿[𝑘] = ∑_{n=0}^{7} 𝐱[𝑛](𝑊₈)^{kn} = (1 − 𝑊₈^{8k})/(1 − 𝑊₈ᵏ) = 0 when 𝑘 ≠ 0, and 8 when 𝑘 = 0,

since (𝑊_N)^N = (e^{−j2𝜋/N})^N = 1. Therefore 𝑿[𝑘] = [8, 0, 0, 0, 0, 0, 0, 0];

d) Again 𝑁 = 8 and 𝑊₈ = e^{−j𝜋/4}. Therefore

𝑿[𝑘] = (1/2) ∑_{n=0}^{7} e^{j(𝜋/4)𝑛}e^{−j(𝜋/4)𝑛𝑘} + (1/2) ∑_{n=0}^{7} e^{−j(𝜋/4)𝑛}e^{−j(𝜋/4)𝑛𝑘} = (1/2) ∑_{n=0}^{7} e^{−j(𝜋/4)(𝑘−1)𝑛} + (1/2) ∑_{n=0}^{7} e^{−j(𝜋/4)(𝑘+1)𝑛}

Applying the geometric sum we obtain 𝑿[𝑘] = [0, 4, 0, 0, 0, 0, 0, 4];

e) Lastly, 𝑁 = 8 and 𝑊₈ = e^{−j𝜋/4}. Therefore

𝑿[𝑘] = ∑_{n=0}^{7} 0.9ⁿ e^{−j(𝜋/4)𝑛𝑘} = (1 − 0.9⁸)/(1 − 0.9e^{−j(𝜋/4)𝑘}) for 𝑘 = 0, 1, 2, …, 7

Substituting numerically we obtain

𝑿[𝑘] = [5.69, 0.38 − 0.67j, 0.31 − 0.28j, 0.30 − 0.11j, 0.30, 0.30 − 0.11j, 0.31 − 0.28j, 0.38 − 0.67j]
I.I. Matrix form of the DFT: The DFT can be expressed as a matrix operation 𝑿 = 𝑭_N 𝐱 with

𝑿 = [𝑿[0]; 𝑿[1]; ⋮; 𝑿[𝑁 − 1]],  𝐱 = [𝐱[0]; 𝐱[1]; ⋮; 𝐱[𝑁 − 1]]

𝑭_N = [1  1       1        ⋯  1;
       1  𝑊_N     𝑊_N²     ⋯  𝑊_N^{N−1};
       1  𝑊_N²    𝑊_N⁴     ⋯  𝑊_N^{2(N−1)};
       ⋮  ⋮       ⋮        ⋱  ⋮;
       1  𝑊_N^{N−1}  𝑊_N^{2(N−1)}  ⋯  𝑊_N^{(N−1)(N−1)}]
Important features of the DFT are the following:
1. There is a one-to-one correspondence between 𝐱[𝑛] and 𝑿[𝑘].
2. There is an extremely fast algorithm, called the fast Fourier transform for its calculation.
3. The DFT is closely related to the discrete Fourier series and the Fourier transform.
4. The DFT is the appropriate Fourier representation for digital computer realization
because it is discrete and of finite length in both the time and frequency domains.

I.II. Relationship of the DFT to the DTFT: Notice that the DFT can be derived from the DTFT:

𝑿[𝑘] = ∑_{n=0}^{N−1} 𝐱[𝑛]𝑊_N^{kn} = ∑_{n=0}^{N−1} 𝐱[𝑛]e^{−jk(2𝜋/N)𝑛} = 𝑿(Ω)|_{Ω=2𝜋k/N} = 𝑿(2𝜋𝑘/𝑁)

Thus, 𝑿[𝑘] corresponds to 𝑿(Ω) sampled at the uniformly spaced frequencies Ω = 2𝜋𝑘/𝑁 for integer 𝑘. Also, the 𝑿[𝑘] of a finite sequence 𝐱[𝑛] can be interpreted as the coefficients 𝑐ₖ in the discrete Fourier series representation of its periodic extension, multiplied by the period 𝑁. That is, 𝑿[𝑘] = 𝑁𝑐ₖ:

𝑿[𝑘] = DFT(𝐱[𝑛]) = 𝑿(Ω)|_{Ω=2𝜋k/N} and 𝑿[𝑘] = 𝑁𝑐ₖ
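This sampling relationship is easy to verify numerically (a sketch with an arbitrary test sequence):

x  = [1 2 3 0 -1 0 2 1];  N = length(x);
w  = 2*pi*(0:N-1)/N;                 % the N bin frequencies
Xd = exp(-1i*w'*(0:N-1)) * x(:);     % DTFT evaluated at Omega = 2*pi*k/N
max(abs(Xd - fft(x).'))              % ~ 0: the DFT samples the DTFT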

Example: 02 Consider the discrete-time pulse 𝐱[𝑛] = 𝑢[𝑛] − 𝑢[𝑛 − 10] (length 10, last index 𝑁 = 9). The DTFT of this pulse was determined before:

𝑿(Ω) = 𝔽(𝑢[𝑛] − 𝑢[𝑛 − 10]) = e^{−jΩN/2} sin((𝑁 + 1)Ω/2)/sin(Ω/2)

Sampling at Ω = 2𝜋𝑘/10, the 10-point DFT of this pulse is

𝑿[𝑘] = e^{−j(9/2)(2𝜋k/10)} sin(𝜋𝑘)/sin(𝜋𝑘/10) = [10, 0, 0, 0, 0, 0, 0, 0, 0, 0]
I.III. Zero padding: Note that the choice of number of points 𝑁 is not fixed. If 𝐱[𝑛] has
length 𝑁1 < 𝑁, we want to assume that 𝐱[𝑛] has length 𝑁 by simply adding (𝑁 − 𝑁1 ) samples
with a value of 0. This addition of dummy samples is known as zero padding. Then the
resultant 𝐱[𝑛] is often referred to as an N-point sequence, and 𝑿[𝑘] is referred to as an N-
point DFT. By a judicious choice of 𝑁, such as choosing it to be a power of 2, computational
efficiencies can be gained.
Example: 03 (Zero padding the discrete-time pulse) Consider again the length-10 discrete-time pulse used in Example 02. Create a new signal 𝒒[𝑛] by zero-padding it to 20 samples, and compare the 20-point DFT of 𝒒[𝑛] to the DTFT of 𝐱[𝑛].

𝒒[𝑛] = {𝐱[𝑛] for 𝑛 = 0, 1, …, 9; 0 for 𝑛 = 10, …, 19}

The 20-point DFT of 𝒒[𝑛] is

𝑸[𝑘] = ∑_{n=0}^{19} 𝒒[𝑛]e^{−j(2𝜋/20)𝑛𝑘} = (1 − e^{−j𝜋k})/(1 − e^{−j(2𝜋/20)𝑘}), 𝑘 = 0, 1, …, 19
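The effect of zero padding can be seen directly with fft (a sketch): padding only inserts extra interpolation points on the same underlying DTFT.

x   = ones(1,10);             % the length-10 pulse of Examples 02-03
X10 = fft(x);                 % 10-point DFT
X20 = fft(x, 20);             % 20-point DFT of the zero-padded pulse
max(abs(X20(1:2:end) - X10))  % ~ 0: every second sample of X20 is an X10 sample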

Remark: After some exemplifications it will become apparent that the properties of the DFT
are similar to those of DTFS and DTFT with one significant difference: Any shifts in the
time domain or the transform domain are circular shifts rather than linear shifts. Also, any
time reversals used in conjunction with the DFT are circular time reversals rather than
linear ones.
Example: 04 Let 𝑿[𝑘] = [1, 𝑗, −1, −𝑗]; and 𝑯[𝑘] = [0, 1, −1, 1]; be the DFT of the two
sequences 𝐱[𝑛] and 𝐡[𝑛] respectively. Determine the DFT of the following sequences without
computations.

a) DFT(𝐱[𝑛 − 1]), b) DFT(𝐱[𝑛 + 3]), c) DFT(𝐱[𝑛] ⋆ 𝐡[𝑛]), d) DFT((−1)𝑛 𝐱[𝑛]),


e) DFT((𝑗)𝑛 𝐱[𝑛]), f) DFT(𝐱[−𝑛]), g) DFT(𝐱[2 − 𝑛]),

Solution: Notice that if 𝐱[𝑛] = [𝐱[0], 𝐱[1], 𝐱[2], 𝐱[3]], then 𝐱[𝑛 − 1] = [𝐱[3], 𝐱[0], 𝐱[1], 𝐱[2]] = 𝐱[𝑛 + 3]. This basic property is called a circular shift: any shift used with the DFT is circular, as illustrated by the sketch below.
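This circular-shift behaviour is easy to confirm with fft and circshift (a sketch):

x   = [1 2 3 4];  N = numel(x);  k = 0:N-1;
lhs = fft(circshift(x, 1));            % x delayed by one, circularly
rhs = exp(-1i*2*pi*k/N) .* fft(x);     % e^{-j 2 pi k/N} X[k]
max(abs(lhs - rhs))                    % ~ 0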

a) First consider the general case DFT(𝐱[𝑛 − 𝐿]):

DFT(𝐱[𝑛 − 𝐿]) = ∑_{⟨N⟩} 𝐱[𝑛 − 𝐿]𝑊_N^{kn} = ∑_{⟨N⟩} 𝐱[𝑚]𝑊_N^{k(m+L)} = e^{−jk(2𝜋/N)L} ∑_{⟨N⟩} 𝐱[𝑚]𝑊_N^{km} = e^{−jk(2𝜋/N)L} 𝑿[𝑘]

In this case 𝑁 = 4 and 𝐿 = 1 ⟹ DFT(𝐱[𝑛 − 1]) = e^{−jk(2𝜋/4)} 𝑿[𝑘] = (−j)ᵏ𝑿[𝑘], 𝑘 = 0, 1, 2, 3.

This yields DFT(𝐱[𝑛 − 1]) = [1, 1, 1, 1];


b) DFT(𝐱[𝑛 + 3]) = e^{jk(2𝜋/4)3} 𝑿[𝑘] = (j)^{3k}𝑿[𝑘] = (−j)ᵏ𝑿[𝑘], 𝑘 = 0, 1, 2, 3. This yields the same answer as before:

DFT(𝐱[𝑛 + 3]) = [1, 1, 1, 1] = DFT(𝐱[𝑛 − 1])

c) DFT(𝐱[𝑛] ⋆ 𝐡[𝑛]) = DFT(𝐱[𝑛]). DFT(𝐡[𝑛]) = 𝑿[𝑘]𝑯[𝑘] = [0, 𝑗, 1, −𝑗]


d) Like the DTFT we have the property DFT(e^{jk₀(2𝜋/N)𝑛}𝐱[𝑛]) = 𝑿[𝑘 − 𝑘₀]_{mod(N)}, which implies

DFT((−1)ⁿ𝐱[𝑛]) = DFT(e^{j2(2𝜋/4)𝑛}𝐱[𝑛]) = 𝑿[𝑘 − 2]_{mod(4)} = [−1, −j, 1, j]

e) DFT((j)ⁿ𝐱[𝑛]) = DFT(e^{j(2𝜋/4)𝑛}𝐱[𝑛]) = 𝑿[𝑘 − 1]_{mod(4)} = [−j, 1, j, −1]

f) DFT(𝐱[−𝑛]mod(4) ) = 𝑿[−𝑘]mod(4) , Therefore,

DFT(𝐱[−𝑛]) = 𝑿[−𝑘] = [𝑿[0], 𝑿[3], 𝑿[2], 𝑿[1]] = [1, −𝑗, −1 , 𝑗]


2𝜋
g) DFT(𝐱[2 − 𝑛]), let 𝐲[𝑛] = 𝐱[−𝑛] ⟹ 𝐲[𝑛 − 2] = 𝐱[2 − 𝑛] ⟹ DFT(𝐱[2 − 𝑛]) = 𝑒 𝑗2 4 𝑛 𝒀[𝑘]
2𝜋
DFT(𝐱[2 − 𝑛]) = 𝑒 𝑗2 4 𝑛 𝒀[𝑘] = (−1)𝑘 𝑿[−𝑘] = [1, 𝑗, 1 , 𝑗]
Example: 05 Let 𝐱[𝑛], 𝑛 = 0, 1, …, 7 have the DFT 𝑿[𝑘] = [1, 1 − j, 1, 0, 1, 0, 1, 1 + j]. Determine the following DFTs: a) DFT(e^{j(2𝜋/8)𝑛}𝐱[𝑛]), b) DFT(𝐱[𝑛] ⋆ 𝛿[𝑛 − 2]).

Solution: Knowing that DFT(𝛿[𝑛]) = 1


a) 𝒀[𝑘] = DFT(e^{j(2𝜋/8)𝑛}𝐱[𝑛]) = 𝑿[𝑘 − 1]_{mod(8)} = [1 + j, 1, 1 − j, 1, 0, 1, 0, 1]

b) Since DFT(𝛿[𝑛 − 2]) = e^{−j2(2𝜋/8)𝑘} DFT(𝛿[𝑛]) = (−j)ᵏ, we get

𝒀[𝑘] = DFT(𝐱[𝑛] ⋆ 𝛿[𝑛 − 2]) = DFT(𝐱[𝑛]) ⋅ DFT(𝛿[𝑛 − 2]) = (−j)ᵏ𝑿[𝑘] = [1, −1 − j, −1, 0, 1, 0, −1, −1 + j]

Example: 06 Let 𝐱[𝑛] have the DFT 𝑿[𝑘] = [1, 𝑗, 1, −𝑗] determine the following DFTs:

a) 𝒀[𝑘] = DFT((−1)𝑛 𝐱[𝑛]), b) 𝒀[𝑘] = DFT(𝐱[𝑛 + 1]mod(4) ),


c) 𝒀[𝑘] = DFT(𝐱[𝑛] ⋆ 𝛿[𝑛 − 2]), d) 𝒀[𝑘] = DFT(𝐱[−𝑛]),

Solution:

a) 𝒀[𝑘] = DFT((−1)ⁿ𝐱[𝑛]) = DFT(e^{j2(2𝜋/4)𝑛}𝐱[𝑛]) = 𝑿[𝑘 − 2]_{mod(4)} = [1, −j, 1, j]
b) 𝒀[𝑘] = DFT(𝐱[𝑛 + 1]_{mod(4)}) = (j)ᵏ𝑿[𝑘] = [1, −1, −1, −1]
c) 𝒀[𝑘] = DFT(𝐱[𝑛] ⋆ 𝛿[𝑛 − 2]) = 𝑿[𝑘] ⋅ DFT(𝛿[𝑛 − 2]) = 𝑿[𝑘] e^{−j2(2𝜋/4)𝑘} = (−1)ᵏ𝑿[𝑘] = [1, −j, 1, j]
d) 𝒀[𝑘] = DFT(𝐱[−𝑛]) = 𝑿[−𝑘] = [𝑿[0], 𝑿[3], 𝑿[2], 𝑿[1]] = [1, −j, 1, j]

Example: 07 Two finite sequences 𝐡[𝑛] and 𝐱[𝑛] have the following DFT's:

𝑿[𝑘] = DFT(𝐱[𝑛]) = [1, −2, 1, −2], 𝑯[𝑘] = DFT(𝐡[𝑛]) = [1, 𝑗, 1, −𝑗]

Let 𝐲[𝑛] = 𝐡[𝑛]⨂𝐱[𝑛] be the four point circular convolution of the two sequences. Using the
properties of the DFT (do not compute 𝐱[𝑛] and 𝐡[𝑛]),

a) determine DFT(𝐱[𝑛 − 1]mod(4) ) and DFT(𝐡[𝑛 + 2]mod(4) );


b) determine 𝐲[0] and 𝐲[1].

Solution:

a) DFT(𝐱[𝑛 − 1]_{mod(4)}) = e^{−j(2𝜋/4)𝑘}𝑿[𝑘] = (−j)ᵏ𝑿[𝑘] = [1, 2j, −1, −2j]
   DFT(𝐡[𝑛 + 2]_{mod(4)}) = e^{j2(2𝜋/4)𝑘}𝑯[𝑘] = (−1)ᵏ𝑯[𝑘] = [1, −j, 1, j]

b) Recall that

𝐲[𝑛] = IDFT(𝒀[𝑘]) = (1/𝑁) ∑_{k=0}^{N−1} 𝒀[𝑘]e^{j(2𝜋/N)𝑘𝑛} ⟹ 𝐲[𝑛₀] = (1/𝑁) ∑_{k=0}^{N−1} 𝒀[𝑘]e^{j(2𝜋/N)𝑘𝑛₀}

Then 𝐲[0] = (∑_{k=0}^{3} 𝒀[𝑘])/4 = (𝒀[0] + 𝒀[1] + 𝒀[2] + 𝒀[3])/4:

𝐲[0] = (𝑿[0]𝑯[0] + 𝑿[1]𝑯[1] + 𝑿[2]𝑯[2] + 𝑿[3]𝑯[3])/4 = 1/2

𝐲[1] = (∑_{k=0}^{3} 𝒀[𝑘]e^{j(𝜋/2)𝑘})/4 = (𝒀[0] + j𝒀[1] − 𝒀[2] − j𝒀[3])/4
     = (𝑿[0]𝑯[0] + j𝑿[1]𝑯[1] − 𝑿[2]𝑯[2] − j𝑿[3]𝑯[3])/4 = 1

Example: 08 Let 𝐱[𝑛] be a finite sequence with DFT 𝑿[𝑘] = DFT(𝐱[𝑛]) = [0, 1 + 𝑗, 1,1 − 𝑗].
Using the properties of the DFT determine the DFT's of the following:
𝜋
a) 𝐲[𝑛] = 𝑒 𝑗 2 𝑛 𝐱[𝑛]
b) 𝐲[𝑛] = cos(𝑛𝜋/2) 𝐱[𝑛]
c) 𝐲[𝑛] = 𝐱[𝑛 − 1]mod(4)
d) 𝐲[𝑛] = [0, 0, 1,0]⨂𝐱[𝑛] with ⨂ denoting circular convolution

Solution:

a) Since e^{j(𝜋/2)𝑛}𝐱[𝑛] = e^{j(2𝜋/4)𝑛}𝐱[𝑛], we get 𝒀[𝑘] = 𝑿[𝑘 − 1]_{mod(4)} = [1 − j, 0, 1 + j, 1]
b) Here 𝐲[𝑛] = cos(𝑛𝜋/2)𝐱[𝑛] = (e^{j(2𝜋/4)𝑛}𝐱[𝑛] + e^{−j(2𝜋/4)𝑛}𝐱[𝑛])/2, and therefore its DFT is

𝒀[𝑘] = (1/2)𝑿[𝑘 + 1]_{mod(4)} + (1/2)𝑿[𝑘 − 1]_{mod(4)} = [1, 1/2, 1, 1/2]

c) 𝒀[𝑘] = DFT(𝐱[𝑛 − 1]_{mod(4)}) = e^{−j(2𝜋/4)𝑘}𝑿[𝑘] = (−j)ᵏ𝑿[𝑘] = [0, 1 − j, −1, 1 + j]
d) DFT([0, 0, 1, 0] ⨂ 𝐱[𝑛]) = DFT(𝛿[𝑛 − 2] ⨂ 𝐱[𝑛]) = DFT(𝐱[𝑛 − 2]_{mod(4)}) = (−j)^{2k}𝑿[𝑘] = (−1)ᵏ𝑿[𝑘]

𝒀[𝑘] = DFT([0, 0, 1, 0] ⨂ 𝐱[𝑛]) = [0, −1 − j, −1, j − 1]

Example: 09 Let 𝑿[𝑘] = DFT(𝐱[𝑛]) with 𝑛, 𝑘 = 0, . . . , 𝑁 − 1. Determine the relationships


between 𝑿[𝑘] and the following DFT's:

a) 𝒀[𝑘] = DFT(𝐱 ⋆ [𝑛])


b) 𝒀[𝑘] = DFT(𝐱[−𝑛]mod(𝑁) )
c) 𝒀[𝑘] = DFT(Re(𝐱[𝑛]))
d) 𝒀[𝑘] = DFT(Im(𝐱[𝑛]))

Solution

a) 𝒀[𝑘] = DFT(𝐱⋆[𝑛]) = ∑_{⟨N⟩} 𝐱⋆[𝑛]e^{−j(2𝜋/N)𝑘𝑛} = (∑_{⟨N⟩} 𝐱[𝑛]e^{j(2𝜋/N)𝑘𝑛})⋆ = 𝑿⋆[−𝑘]_{mod(N)}
b) 𝒀[𝑘] = DFT(𝐱[−𝑛]_{mod(N)}) = ∑_{⟨N⟩} 𝐱[−𝑛]e^{−j(2𝜋/N)𝑘𝑛} = ∑_{⟨N⟩} 𝐱[𝑚]e^{j(2𝜋/N)𝑘𝑚} = 𝑿[−𝑘]_{mod(N)}
c) 𝒀[𝑘] = DFT(Re(𝐱[𝑛])) = (1/2)DFT(𝐱[𝑛]) + (1/2)DFT(𝐱⋆[𝑛]) = (1/2)(𝑿[𝑘] + 𝑿⋆[−𝑘]_{mod(N)})
d) 𝒀[𝑘] = DFT(Im(𝐱[𝑛])) = (1/2j)DFT(𝐱[𝑛]) − (1/2j)DFT(𝐱⋆[𝑛]) = (1/2j)(𝑿[𝑘] − 𝑿⋆[−𝑘]_{mod(N)})

Example: 10 Let 𝐱[𝑛] = IDFT(𝑿[𝑘]) with 𝑛, 𝑘 = 0, . . . , 𝑁 − 1. Determine the relationships


between 𝐱[𝑛] and the following IDFT's:

a) 𝐲[𝑛] = IDFT(𝐗 ⋆ [𝑘])


b) 𝐲[𝑛] = IDFT(𝐗[−𝑘]mod(𝑁) )
c) 𝐲[𝑛] = IDFT(Re(𝐗[𝑘]))
d) 𝐲[𝑛] = IDFT(Im(𝐗[𝑘]))
Solution

a) 𝐲[𝑛] = IDFT(𝐗⋆[𝑘]) = (1/𝑁) ∑_{k=0}^{N−1} 𝐗⋆[𝑘]e^{j(2𝜋/N)𝑘𝑛} = ((1/𝑁) ∑_{k=0}^{N−1} 𝐗[𝑘]e^{−j(2𝜋/N)𝑘𝑛})⋆ = 𝐱⋆[−𝑛]_{mod(N)}
b) 𝐲[𝑛] = IDFT(𝐗[−𝑘]_{mod(N)}) = (1/𝑁) ∑_{k=0}^{N−1} 𝐗[−𝑘]e^{j(2𝜋/N)𝑘𝑛} = (1/𝑁) ∑_{m=0}^{N−1} 𝐗[𝑚]e^{−j(2𝜋/N)𝑚𝑛} = 𝐱[−𝑛]_{mod(N)}
c) 𝐲[𝑛] = IDFT(Re(𝐗[𝑘])) = (1/2)IDFT(𝐗[𝑘]) + (1/2)IDFT(𝐗⋆[𝑘]) = (1/2)(𝐱[𝑛] + 𝐱⋆[−𝑛]_{mod(N)})
d) 𝐲[𝑛] = IDFT(Im(𝐗[𝑘])) = (1/2j)IDFT(𝐗[𝑘]) − (1/2j)IDFT(𝐗⋆[𝑘]) = (1/2j)(𝐱[𝑛] − 𝐱⋆[−𝑛]_{mod(N)})

Example: 11 Let 𝐱[𝑛] = [2, 3, −1, 4]. Write a MATLAB code that computes the DFT of 𝐱[𝑛] directly from the definition:

clear all, clc,
x = [2 3 -1 4];
N = length(x);
t = 0:N-1;
X = zeros(N,1);
for k = 0:N-1
    for n = 0:N-1
        X(k+1) = X(k+1) + x(n+1)*exp(-1i*2*pi/N*n*k); % X[k] = sum x[n] W_N^{kn}
    end
end

subplot(311); stem(t,x); xlabel('Time (s)'); ylabel('Amplitude');
title('Time domain - Input sequence')
subplot(312); stem(t,abs(X)); xlabel('Frequency'); ylabel('|X(k)|');
title('Frequency domain - Magnitude response')
subplot(313); stem(t,angle(X)); xlabel('Frequency'); ylabel('Phase');
title('Frequency domain - Phase response')
abs(X)    % to check |X(k)|
angle(X)  % to check phase

II. The Fast Fourier Transform (FFT): A fast Fourier transform (FFT) is an efficient
algorithm that computes the discrete Fourier transform (DFT) of a sequence, or its inverse
(IDFT). Fourier analysis converts a signal from its original domain (often time or space) to a
representation in the frequency domain and vice versa. The DFT is obtained by
decomposing a sequence of values into components of different frequencies. This operation
is useful in many fields, but computing it directly from the definition is often too slow to be
practical. An FFT rapidly computes such transformations by factorizing the DFT matrix
into a product of sparse (mostly zero) factors. The basic ideas were popularized in 1965 by
James Cooley and John Tukey. In 1994, Gilbert Strang described the FFT as "the most
important numerical algorithm of our lifetime", and it was included in Top 10 Algorithms of
20th Century by the IEEE magazine Computing in Science & Engineering.

If we take the 2-point DFT and the 4-point DFT and generalize them to 8-point, 16-point, …, 2^r-point DFTs, we get the FFT algorithm.
II.I Two-point DFT (N = 2): In the case of the radix-2 Cooley–Tukey algorithm, the butterfly is simply a size-2 DFT that takes two inputs (𝐱[0], 𝐱[1]) (corresponding outputs of the two sub-transforms) and gives two outputs (𝐗[0], 𝐗[1]) by the formula:

𝑿[𝑘] = ∑_{n=0}^{1} 𝐱[𝑛]𝑊₂^{kn} = 𝑊₂⁰𝐱[0] + 𝑊₂ᵏ𝐱[1] ⟹ {𝑿[0] = 𝐱[0] + 𝐱[1]; 𝑿[1] = 𝐱[0] − 𝐱[1]}

II.II Four-point DFT (N = 4): Here we generalize the derivation of the previous case to a four-point DFT. This size-4 DFT takes four inputs (𝐱[0], 𝐱[1], 𝐱[2], 𝐱[3]) (corresponding outputs of the sub-transforms) and gives four outputs (𝐗[0], 𝐗[1], 𝐗[2], 𝐗[3]) by the formula:

𝑿[0] = 𝐱[0] + 𝐱[1] + 𝐱[2] + 𝐱[3]
𝑿[1] = 𝐱[0] − j𝐱[1] − 𝐱[2] + j𝐱[3]
𝑿[2] = 𝐱[0] − 𝐱[1] + 𝐱[2] − 𝐱[3]
𝑿[3] = 𝐱[0] + j𝐱[1] − 𝐱[2] − j𝐱[3]

⟺

𝑿[0] = (𝐱[0] + 𝐱[2]) + (𝐱[1] + 𝐱[3])
𝑿[1] = (𝐱[0] − 𝐱[2]) − j(𝐱[1] − 𝐱[3])
𝑿[2] = (𝐱[0] + 𝐱[2]) − (𝐱[1] + 𝐱[3])
𝑿[3] = (𝐱[0] − 𝐱[2]) + j(𝐱[1] − 𝐱[3])
Note that the structure is a 4-point decomposition followed by two 2-point FFTs. This method of evaluating 𝑿[𝑘] is known as the decimation-in-frequency fast Fourier transform (FFT) algorithm.

It can also be observed that the diagram of the 4-point DFT may be rearranged so that the 4-point DFT is computed by the generation of two 2-point DFTs, followed by a recombination of terms as shown in the signal flow graph. This second method of evaluating 𝑿[𝑘] is known as the decimation-in-time fast Fourier transform (FFT) algorithm.

Remark: The difference between decimation-in-time (DIT) and decimation-in-frequency (DIF) lies in the order of the samples: in DIT the input is in bit-reversed order and the output in natural order; in DIF the input is in natural order and the output in bit-reversed order.

II.III N-point DFT (N = 2^r): To start the generalization of the algorithm, let us adopt some basic conventions: the sequence 𝐱[𝑛] is referred to as an N-point sequence, and 𝑿[𝑘] as an N-point DFT. By a judicious choice of 𝑁, such as choosing it to be a power of 2, computational efficiencies can be gained. Suppose we separate the Fourier transform into even- and odd-indexed sub-sequences:

𝑿[𝑘] = ∑_{n=0}^{N−1} 𝐱[𝑛]𝑊_N^{kn} = ∑_{even} 𝐱[𝑛]𝑊_N^{kn} + ∑_{odd} 𝐱[𝑛]𝑊_N^{kn} with 𝑊_N = e^{−j2𝜋/N} and 𝑘 = 0, 1, 2, …, 𝑁 − 1

Let us set 𝑛 = 2𝑟 for the even terms and 𝑛 = 2𝑟 + 1 for the odd terms:

𝑿[𝑘] = ∑_{r=0}^{N/2−1} 𝐱[2𝑟]𝑊_N^{2rk} + 𝑊_N^{k} ∑_{r=0}^{N/2−1} 𝐱[2𝑟 + 1]𝑊_N^{2rk}

Notice that 𝑊_N^{2k} = e^{−j(2𝜋/N)2k} = e^{−j(2𝜋/(N/2))k} = 𝑊_{N/2}^{k}. Therefore, we can write

𝑿[𝑘] = ∑_{r=0}^{N/2−1} 𝐱[2𝑟]𝑊_{N/2}^{rk} + 𝑊_N^{k} ∑_{r=0}^{N/2−1} 𝐱[2𝑟 + 1]𝑊_{N/2}^{rk}

If we let 𝐠[𝑟] = 𝐱[2𝑟] and 𝐡[𝑟] = 𝐱[2𝑟 + 1], then

𝑿[𝑘] = ∑_{r=0}^{N/2−1} 𝐠[𝑟]𝑊_{N/2}^{rk} + 𝑊_N^{k} ∑_{r=0}^{N/2−1} 𝐡[𝑟]𝑊_{N/2}^{rk} = 𝑮[𝑘] + 𝑊_N^{k}𝑯[𝑘]

Although the 𝑁/2-point DFTs of 𝐠[𝑟] and 𝐡[𝑟] are sequences of length 𝑁/2, the periodicity of the complex exponentials allows us to write 𝑮[𝑘] = 𝑮[𝑘 + 𝑁/2] and 𝑯[𝑘] = 𝑯[𝑘 + 𝑁/2].

Therefore, 𝑿[𝑘] may be computed from the 𝑁/2-point DFTs 𝑮[𝑘] and 𝑯[𝑘]. Note that because 𝑊_N^{k+N/2} = 𝑊_N^{N/2}𝑊_N^{k} = −𝑊_N^{k}, we have 𝑊_N^{k+N/2}𝑯[𝑘 + 𝑁/2] = −𝑊_N^{k}𝑯[𝑘], so

𝑿[𝑘] = 𝑮[𝑘] + 𝑊_N^{k}𝑯[𝑘] ⟹ 𝑿[𝑘 + 𝑁/2] = 𝑮[𝑘] − 𝑊_N^{k}𝑯[𝑘]
[Figure: flow graph of a K-point DFT using two (K/2)-point DFTs, for K = 8]

Decimation-in-time: implementation of an 8-point DFT.
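One stage of this decimation-in-time recombination can be checked against MATLAB's fft (a sketch):

x = randn(1, 8);  N = numel(x);  k = 0:N/2-1;
G = fft(x(1:2:end));        % N/2-point DFT of the even-indexed samples
H = fft(x(2:2:end));        % N/2-point DFT of the odd-indexed samples
W = exp(-1i*2*pi*k/N);      % twiddle factors W_N^k
X = [G + W.*H, G - W.*H];   % X[k] and X[k + N/2]
max(abs(X - fft(x)))        % ~ 0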

Remark: In the diagram of the 8-point FFT above, note that the inputs aren’t in normal
order: 𝐱[0]; 𝐱[1]; 𝐱[2]; 𝐱[3]; 𝐱[4]; 𝐱[5]; 𝐱[6]; 𝐱[7], they’re in the bizarre order: 𝐱[0]; 𝐱[4]; 𝐱[2];
𝐱[6]; 𝐱[1]; 𝐱[5]; 𝐱[3]; 𝐱[7]. Why this sequence?

In a decimation-in-time radix-2 FFT, the input is in bit-reversed order (hence "decimation-in-time"). That is, if the time-sample index 𝑛 is written as a binary number, the order is that binary number reversed. The bit-reversal process is illustrated for a length N = 8 example below.
Example: 12 Obtain the circular convolution of the two sequences using the FFT method

𝐱[𝑛] = [1 2 3 0] 𝐡[𝑛] = [2 1 0 0]

Solution: Let we define the following even and odd parts

𝐱 𝑒 [0] = 𝐱[0] + 𝐱[2] = 4 𝐡𝑒 [0] = 𝐡[0] + 𝐡[2] = 2


𝐱 𝑒 [1] = 𝐱[0] − 𝐱[2] = −2 𝐡𝑒 [1] = 𝐡[0] − 𝐡[2] = 2
𝐱 𝑜 [0] = 𝐱[1] + 𝐱[3] = 2 𝐡𝑜 [0] = 𝐡[1] + 𝐡[3] = 1
𝐱 𝑜 [1] = 𝐱[1] − 𝐱[3] = 2 𝐡𝑜 [1] = 𝐡[1] − 𝐡[3] = 1

And according to the decimation-in-time we get


𝐗[0] = 𝐱 𝑒 [0] + 𝐱 0 [0] = 6 𝐇[0] = 𝐡𝑒 [0] + 𝐡0 [0] = 3
𝐗[1] = 𝐱 𝑒 [1] − 𝑗𝐱 𝑜 [1] = −2 − 2𝑗 𝐇[1] = 𝐡𝑒 [1] − 𝑗𝐡𝑜 [1] = 2 − 𝑗
𝐗[2] = 𝐱 𝑒 [0] − 𝐱 𝑜 [0] = 2 𝐇[2] = 𝐡𝑒 [0] − 𝐡𝑜 [0] = 1
𝐗[3] = 𝐱 𝑒 [1] + 𝑗𝐱 𝑜 [1] = −2 + 2𝑗 𝐇[3] = 𝐡𝑒 [1] + 𝑗𝐡𝑜 [1] = 2 + 𝑗

𝐗[𝑘] = [6, (−2 − 2𝑗), 2, (−2 + 2𝑗)], 𝐇[𝑘] = [3, (2 − 𝑗), 1, (2 + 𝑗)]

𝐘[𝑘] = 𝐇[𝑘]𝐗[𝑘] = [18, (−6 − 2𝑗), 2, (−6 + 2𝑗)]

We can take the IDFT by using the FFT procedure (only the conjugation of the above)

𝐘𝑒 [0] = 𝐘[0] + 𝐘[2] = 20 𝐲[0] = (𝐘𝑒 [0] + 𝐘0 [0])/4 = 2


𝐘𝑒 [1] = 𝐘[0] − 𝐘[2] = 16 𝐲[1] = (𝐘𝑒 [1] + 𝑗𝐘𝑜 [1])/4 = 5

𝐘𝑜 [0] = 𝐘[1] + 𝐘[3] = −12 𝐲[2] = (𝐘𝑒 [0] − 𝐘𝑜 [0])/4 = 8
𝐘𝑜 [1] = 𝐘[1] − 𝐘[3] = −4𝑗 𝐲[3] = (𝐘𝑒 [1] − 𝑗𝐘𝑜 [1])/4 = 3

The result is 𝐲[𝑛] = 𝐱[𝑛] ⊗ 𝐡[𝑛] = [2, 5, 8, 3] (the 4-point circular convolution).
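The same circular convolution can be obtained with MATLAB's built-in fft/ifft (a sketch):

x = [1 2 3 0];  h = [2 1 0 0];
y = ifft(fft(x) .* fft(h))   % -> [2 5 8 3], as found above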

Example: 13 Determine the DFT of the 8-point discrete time domain sequence

𝐱[𝑛] = [1, 1, −1, −1, −1, 1, 1, −1]

This is a program to implement the Radix-2 decimation in time FFT

clear all, clc, x=[1 1 -1 -1 -1 1 1 -1]; o=x; N=8; a=length(x);


% N must be 2's power and a integer value
if(a<N), x=[x zeros(1,(N-a))]; end;
x=bitrevorder(x); n=log2(N); X=zeros(1,N);
for m=1:1:n
l=2^(m-1); i=1;
while(i<=(N-l))
for k=0:1:(l-1)
X(i)=x(i)+x(i+l)*exp(-j*2*pi*k/N*(2^(n-m)));
X(i+l)=x(i)-x(i+l)*exp(-j*2*pi*k/N*(2^(n-m)));
i=i+1;
if(k==(l-1))
i=i+l;
end;
end;
end;
x=X;
end;
disp(['the output is: ',num2str(X)]); p=abs(X); q=angle(X);
subplot(311);
stem(o); xlabel('time'); ylabel('magnitude');
title('original sequence');
subplot(312);
stem(p); xlabel('frequency'); ylabel('magnitude');
title('magnitude spectrum of dft');
subplot(313);
stem(q); xlabel('frequency'); ylabel('phase');
title('phase spectrum of dft');
This is an alternative program to implement the decimation-in-time FFT:

function X=fftf(data)
% This function computes the radix-2 decimation-in-time FFT
% Example: data=[1 1 1 1 1 1 zeros(1,26)]; X=fftf(data);
N = length(data); X=bitrevorder(data); L=log2(N);
k1=2; k2=N/2; k3=1;
for i1=1:L % iterate over the log2(N) stages
    L1=1;
    for i2=1:k2
        k=1;
        for i3=1:k3
            i=i3+L1-1; j=i+k3;
            % twiddle factor W_N^(k-1) = exp(-2i*pi*(k-1)/N)
            W=complex(cos(2*pi*(k-1)/N),-sin(2*pi*(k-1)/N));
            T=X(j)*W;
            X(j)=X(i)-T; X(i)=X(i)+T;   % butterfly
            k=k+k2;
        end
        L1=L1+k1;
    end
    k1 = k1*2; k2 = k2/2; k3 = k3*2;
end
% The inverse transform can be obtained by conjugation:
% x = conj(fftf(conj(X)))/N;

Remark: The FFT reduces the number of computations needed for a problem of size 𝑁 from
𝒪(𝑁 2 ) to 𝒪(𝑁 log(𝑁)).
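A rough illustration of this gap (a sketch; exact timings vary by machine, and dftmtx assumes the Signal Processing Toolbox):

N = 1024;  x = randn(N, 1);
F = dftmtx(N);                      % the full N-by-N DFT matrix
tic, X1 = F*x;    t_matrix = toc;   % O(N^2) multiplication
tic, X2 = fft(x); t_fft    = toc;   % O(N log N) algorithm
[t_matrix, t_fft, norm(X1 - X2)]    % same result, very different cost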
II.IV Matrix Form of DFT: In applied mathematics, a DFT matrix is an expression of a
discrete Fourier transform (DFT) as a transformation matrix, which can be applied to a
signal through matrix multiplication.

An N-point DFT is expressed as the multiplication 𝑿 = 𝑭_N 𝐱, where 𝐱 is the original input signal, 𝑭_N is the 𝑁 × 𝑁 square DFT matrix, and 𝑿 is the DFT of the signal:

𝑿[𝑘] = DFT(𝐱[𝑛]) = ∑_{n=0}^{N−1} 𝐱[𝑛]𝑊_N^{kn} ⟺

[𝑿[0]; 𝑿[1]; ⋮; 𝑿[𝑁 − 1]] = [1  1       1        ⋯  1;
                              1  𝑊_N     𝑊_N²     ⋯  𝑊_N^{N−1};
                              1  𝑊_N²    𝑊_N⁴     ⋯  𝑊_N^{2(N−1)};
                              ⋮  ⋮       ⋮        ⋱  ⋮;
                              1  𝑊_N^{N−1}  𝑊_N^{2(N−1)}  ⋯  𝑊_N^{(N−1)(N−1)}] [𝐱[0]; 𝐱[1]; ⋮; 𝐱[𝑁 − 1]]

Where 𝑊𝑁 = 𝑒 −2𝑗𝜋/𝑁 is a primitive 𝑁 𝑡ℎ root of unity in which 𝑗 2 = − 1.

Note that 𝑭_N is symmetric, 𝑭_Nᵀ = 𝑭_N. Also, (1/√𝑁)𝑭_N is a unitary matrix, that is:

((1/√𝑁)𝑭_N^H)((1/√𝑁)𝑭_N) = 𝑰 ⟺ 𝑭_N⁻¹ = (1/𝑁)𝑭_N^H

which means that 𝑿 = 𝑭_N 𝐱 ⟺ 𝐱 = (1/𝑁)𝑭_N^H 𝑿.
Example: 14 Here a MATLAB code for the computation of DFT by matrix method (not fast)

clear all, clc, x = [1:6 zeros(1,26)]'; N=32; a=length(x);


for k=0:N-1
for l=0:N-1
w(k+1,l+1)=cos((2*pi*k*l)/N)-i*sin((2*pi*k*l)/N);
end
end
Fk=w; %% Return the DFT matrix
X=Fk*x;
subplot(311);
stem(x); xlabel('time'); ylabel('magnitude');
title('original sequence');
subplot(312);
stem(abs(X)); xlabel('frequency'); ylabel('magnitude');
title('magnitude spectrum of dft');
subplot(313);
stem(angle(X)); xlabel('frequency'); ylabel('phase');
title('phase spectrum of dft');

We can compare the above method with MATLAB's built-in FFT:
clear all, clc, x = [ones(1,6) zeros(1,26)]; n = length(x);
y1 = fft(x);
y2 = x*dftmtx(n);
norm(y1-y2)
subplot(211); stem(abs(y2)), grid on
subplot(212); stem(angle(y2)), grid on
Good Idea: (How can we make this matrix multiplication as fast as possible?) We want to multiply 𝑭_N times 𝐱 as quickly as possible. Normally a matrix times a vector takes 𝑁² separate multiplications: the matrix has 𝑁² entries. You might think it is impossible to do better. (If the matrix has zero entries then multiplications can be skipped. But the Fourier matrix has no zeros!) By using the special pattern 𝑊_N = e^{−2j𝜋/N} for its entries, 𝑭_N can be factored in a way that produces many zeros. This is the FFT.

The key idea is to connect 𝑭_N with the half-size Fourier matrix 𝑭_{N/2}. Assume that 𝑁 is a power of 2 (say 𝑁 = 2¹⁰ = 1024). We will connect 𝑭₁₀₂₄ to 𝑭₅₁₂, or rather to two copies of 𝑭₅₁₂. When 𝑁 = 4, the key is in the relation between the matrices

𝑭₄ = [1 1  1  1;
      1 j  j² j³;
      1 j² j⁴ j⁶;
      1 j³ j⁶ j⁹]  and  [𝑭₂ 𝟎; 𝟎 𝑭₂] = [1 1  0 0;
                                         1 j² 0 0;
                                         0 0  1 1;
                                         0 0  1 j²]

On the left is 𝑭4 , with no zeros. On the right is a matrix that is half zero. The work is cut in
half. But wait, those matrices are not the same. The block matrix with two copies of the
half-size 𝑭 is one piece of the picture but not the only piece. Here is the factorization of 𝑭4
with many zeros:
𝑭₄ = [1 0  1  0;
      0 1  0  j;
      1 0 −1  0;
      0 1  0 −j] [1 1  0 0;
                  1 j² 0 0;
                  0 0  1 1;
                  0 0  1 j²] [1 0 0 0;
                              0 0 1 0;
                              0 1 0 0;
                              0 0 0 1]

The matrix on the right is a permutation. The middle matrix performs separate half-size
transforms on the evens and odds. The matrix at the left combines the two half-size
outputs-in a way that produces the correct full-size output 𝑿 = 𝑭𝑁 𝐱. The same idea applies
when 𝑁 = 1024 and 𝑀 = 512. The Fourier matrix 𝑭1024 is full of powers of 𝑊𝑁 = 𝑒 −2𝑗𝜋/𝑁 . The
first stage of the FFT is the great factorization discovered by Cooley and Tukey (and
foreshadowed in 1805 by Gauss):

𝑭₁₀₂₄ = [𝑰₅₁₂  𝑫₅₁₂; 𝑰₅₁₂  −𝑫₅₁₂] [𝑭₅₁₂ 𝟎; 𝟎 𝑭₅₁₂] [even–odd permutation]

The term 𝑰512 is the identity matrix. 𝑫512 is the diagonal matrix with entries (1, 𝑊𝑁 , … , 𝑊𝑁511 ).
The two copies of 𝑭512 are what we expected. Don't forget that they use the 512𝑡ℎ root of
unity.

If you have read this far, you have probably guessed what comes next. We reduced 𝑭𝑁 to
𝑭𝑁/2 . Keep going to 𝑭𝑁/4 . The matrices 𝑭512 lead to 𝑭256 (in four copies). Then 256 leads to
128. That is recursion. It is a basic principle of many fast algorithms, and here is the
second stage with four copies of 𝑭 = 𝑭256 and 𝑫 = 𝑫256 :

[𝑭₅₁₂ 𝟎; 𝟎 𝑭₅₁₂] = [𝑰 𝑫 𝟎 𝟎; 𝑰 −𝑫 𝟎 𝟎; 𝟎 𝟎 𝑰 𝑫; 𝟎 𝟎 𝑰 −𝑫] diag(𝑭, 𝑭, 𝑭, 𝑭) [pick 0, 4, 8, …; pick 2, 6, 10, …; pick 1, 5, 9, …; pick 3, 7, 11, …]
We will count the individual multiplications, to see how much is saved. Before the FFT was
invented, the count was the usual 𝑁 2 = (1024)2 . This is about a million multiplications. I
am not saying that they take a long time. The cost becomes large when we have many,
many transforms to do-which is typical. Then the saving by the FFT is also large:

The final count for size 𝑁 = 2^ℓ is reduced from 𝑁² to (1/2)𝑁ℓ.

The number 𝑁 = 1024 is 2¹⁰, so ℓ = 10. The original count of (1024)² is reduced to (5)(1024). The saving is a factor of 200. A million is reduced to five thousand. That is why the FFT has revolutionized signal processing.

Example: 15 This example is added only to go deeper and clarify things further:

𝑭₄𝐱 = [1 1    1    1;
       1 𝑊₄   𝑊₄²  𝑊₄³;
       1 𝑊₄²  𝑊₄⁴  𝑊₄⁶;
       1 𝑊₄³  𝑊₄⁶  𝑊₄⁹] [𝐱₀; 𝐱₁; 𝐱₂; 𝐱₃]

Splitting the columns into the even-indexed inputs (𝐱₀, 𝐱₂) and the odd-indexed inputs (𝐱₁, 𝐱₃):

𝑭₄𝐱 = [1 1; 1 𝑊₄²; 1 𝑊₄⁴; 1 𝑊₄⁶][𝐱₀; 𝐱₂] + [1 1; 𝑊₄ 𝑊₄³; 𝑊₄² 𝑊₄⁶; 𝑊₄³ 𝑊₄⁹][𝐱₁; 𝐱₃]
     = [1 1; 1 𝑊₂; 1 1; 1 𝑊₂][𝐱₀; 𝐱₂] + diag(1, 𝑊₄, 𝑊₄², 𝑊₄³)[1 1; 1 𝑊₂; 1 1; 1 𝑊₂][𝐱₁; 𝐱₃]

using 𝑊₄² = 𝑊₂, 𝑊₄⁴ = 1, 𝑊₄⁶ = 𝑊₂ in the columns, and 𝑊₄² = −1, 𝑊₄³ = −𝑊₄ in the twiddle diagonal. Now we define 𝑭₂ = [1 1; 1 𝑊₂] and 𝑫₂ = [1 0; 0 𝑊₄], and we obtain

𝑭₄𝐱 = [𝑭₂𝐱_even + 𝑫₂𝑭₂𝐱_odd; 𝑭₂𝐱_even − 𝑫₂𝑭₂𝐱_odd] = [𝑰₂ 𝑫₂; 𝑰₂ −𝑫₂] [𝑭₂ 𝟎₂; 𝟎₂ 𝑭₂] 𝑷₄𝐱

𝑭₂𝐱_even = [𝑭₁𝐱₀ + 𝑫₁𝑭₁𝐱₂; 𝑭₁𝐱₀ − 𝑫₁𝑭₁𝐱₂] = [𝐱₀ + 𝐱₂; 𝐱₀ − 𝐱₂],  𝑭₂𝐱_odd = [𝑭₁𝐱₁ + 𝑫₁𝑭₁𝐱₃; 𝑭₁𝐱₁ − 𝑫₁𝑭₁𝐱₃] = [𝐱₁ + 𝐱₃; 𝐱₁ − 𝐱₃]

where 𝑭₁ = 1 and 𝑫₁ = 1 by their definitions.

The general formula for the Fourier matrix is

𝑭_N = [𝑰_{N/2}  𝑫_{N/2}; 𝑰_{N/2}  −𝑫_{N/2}] [𝑭_{N/2} 𝟎; 𝟎 𝑭_{N/2}] 𝑷_N

where 𝑷_N is the 𝑁 × 𝑁 permutation matrix that performs the needed even–odd reordering.
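This factorization can be verified numerically for 𝑁 = 4 (a sketch; dftmtx assumes the Signal Processing Toolbox, and uses this chapter's convention 𝑊_N = e^{−2j𝜋/N}, the conjugate of the convention in the passage above):

N  = 4;
F4 = dftmtx(N);  F2 = dftmtx(N/2);
D2 = diag(exp(-2i*pi*(0:N/2-1)/N));   % twiddle factors: diag(1, W_4)
P4 = eye(N);  P4 = P4([1 3 2 4], :);  % even-odd permutation of the input
Z  = zeros(N/2);  I2 = eye(N/2);
norm([I2 D2; I2 -D2] * [F2 Z; Z F2] * P4 - F4)   % ~ 0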
The following code is designed to help learn and understand the fast Fourier algorithm. For this purpose it is kept as short and simple as possible; it is not optimized for speed.

function y = mfft(x)
% Recursive radix-2 decimation-in-time FFT.
% The length of x must be a power of two, and x must be a row vector.
len = length(x);
if len >= 2
    g = mfft(x(1:2:len)); % FFT of the even-indexed samples x[0], x[2], ...
    h = mfft(x(2:2:len)); % FFT of the odd-indexed samples x[1], x[3], ...
    % apply the twiddle factors W_len^k to the odd-sample FFT
    h = exp((0:len/2-1)*(-2i*pi/len)) .* h;
    y = [g+h , g-h];      % recombine: X[k] = G[k]+W^k H[k], X[k+len/2] = G[k]-W^k H[k]
else
    y = x;                % end of recursion: a 1-point DFT is the sample itself
end
CHAPTER VIII:
Practical Implementation
of Linear Systems
I. Introduction
II. Realization of Continuous Systems
II.I The Direct Form (First Canonical Form)
II.II The Transposed Form (Second Canonical Form)
II.III Continuous State Space Representation
III. Realization of Discrete Systems
III.I Basic Structures for IIR Systems
III.II Basic Structures for FIR Systems
III.III Discrete State Space Representation
IV. Solved Problems

We consider the problem of realizing a given transfer function model as a state variable model. The realization process is facilitated by first drawing a simulation diagram for the system. A simulation diagram realizes an ODE model into a block diagram representation using scalar gains, integrators, summing nodes, and feedback loops. Historically, such diagrams were used to simulate dynamic system models on analog computers. Given a transfer function model, its two common realizations are described here.
Practical Implementation
of Linear Systems
I. Introduction: A linear system whose unit-impulse response is zero for 𝑡 < 0 is called
realizable. An example is a practical filter made of physical elements. The time-domain
condition given above translates into the frequency-domain condition for realizability by the
Paley-Wiener theorem. Based on that theorem, the ideal filters are not realizable. But we
can approximate the frequency response of an ideal filter with that of a practical filter with
any desired level of tolerance and accuracy. Moreover, in practice we want the filter to be
stable and buildable from lumped physical elements. These conditions require that we
approximate the frequency response of the ideal filter by that of a system function, which is
a ratio of two polynomials with real coefficients and poles in the LHP.

We know that all the studied systems can be considered filters, but we differentiate
between what was primarily built for this purpose and what did such action as an
accidental result. Generally with that regard and from physical perspectives we are talking
about systems that may or may not be filters. Here we are going to introduce some
systematic procedures for the implementation of those systems. And as it is well known,
the transfer function of such system, is made of lumped linear elements, and is a rational
function (a ratio of two polynomials), and based on this information we will see that its
realizability depends strongly on the degree of the denominator polynomial which is called
the order of the system.

II. Realization of Continuous Systems: We now develop a systematic method for realization (or simulation) of an arbitrary 𝑛-th-order transfer function. Since realization is basically a synthesis problem, there is no unique way of realizing a system; a given transfer function can be realized in many different ways. We present here three different realizations: canonical, cascade, and parallel. A transfer function 𝑯(𝑠) can be realized by using integrators or differentiators along with summers and scalar multipliers.
For practical reasons we avoid the use of differentiators. A differentiator accentuates high-
frequency signals, which, by their nature, have large rates of change (large derivative).
Signals, in practice, are always corrupted by noise, which happens to be a broad-band
signal; that is, the noise contains components of frequencies ranging from low to very high.
In processing desired signals by a differentiator, the high-frequency components of noise
are amplified disproportionately. Such amplified noise may swamp the desired signal. The
integrator, in contrast, tends to suppress a high frequency signal by smoothing it out. In
addition, practical differentiators built with op amp circuits tend to be unstable. For these
reasons we avoid differentiators in practical realizations.

Consider a single input/single output LTIC system with a transfer function

𝑯(𝑠) = 𝒀(𝑠)/𝑿(𝑠) = (𝑏₀𝑠^m + 𝑏₁𝑠^{m−1} + ⋯ + 𝑏_{m−1}𝑠 + 𝑏_m)/(𝑠ⁿ + 𝑎₁𝑠^{n−1} + ⋯ + 𝑎_{n−1}𝑠 + 𝑎_n)

For large 𝑠 (𝑠 → ∞) we obtain 𝑯(𝑠) = 𝑏0 𝑠 𝑚−𝑛 . Therefore, for 𝑚 > 𝑛, the system acts as an
(𝑚 − 𝑛)𝑡ℎ -order differentiator. For this reason, we restrict 𝑚 ≤ 𝑛 for practical systems. With
this restriction, the most general case is 𝑚 = 𝑛 with this transfer function.
II.I The Direct Form (First Canonical Form): The method of obtaining a block diagram
from an s-domain system function will be derived using a third-order system, but its
generalization to higher-order system functions is quite straightforward. Consider a CTLTI
system described by a system function 𝑯(𝑠).

𝑯(𝑠) = (𝑏₀𝑠³ + 𝑏₁𝑠² + 𝑏₂𝑠 + 𝑏₃)/(𝑠³ + 𝑎₁𝑠² + 𝑎₂𝑠 + 𝑎₃) = (𝑏₀ + 𝑏₁/𝑠 + 𝑏₂/𝑠² + 𝑏₃/𝑠³) ⋅ (1/(1 + 𝑎₁/𝑠 + 𝑎₂/𝑠² + 𝑎₃/𝑠³))

𝑿(𝑠) ⟶ [𝑯(𝑠) = (𝑏₀𝑠³ + 𝑏₁𝑠² + 𝑏₂𝑠 + 𝑏₃)/(𝑠³ + 𝑎₁𝑠² + 𝑎₂𝑠 + 𝑎₃)] ⟶ 𝒀(𝑠)
⟺ 𝑿(𝑠) ⟶ [𝑯₁(𝑠) = 1/(1 + 𝑎₁/𝑠 + 𝑎₂/𝑠² + 𝑎₃/𝑠³)] ⟶ 𝑾(𝑠) ⟶ [𝑯₂(𝑠) = 𝑏₀ + 𝑏₁/𝑠 + 𝑏₂/𝑠² + 𝑏₃/𝑠³] ⟶ 𝒀(𝑠)

In this derivation we have taken 𝑿(𝑠) to be the input of the system and 𝑾(𝑠) an intermediate variable:

𝑯(𝑠) = 𝑯₁(𝑠)𝑯₂(𝑠) = (𝑾(𝑠)/𝑿(𝑠))(𝒀(𝑠)/𝑾(𝑠))

This is one of the two canonical realizations (also known as the first canonical form or direct-form realization). Observe that 2𝑛 integrators are required for this realization of an 𝑛-th-order transfer function. In order to avoid the use of a large number of integrators we can construct another form of implementation by a rearrangement of the cascade:

𝑯(𝑠) = 𝑯₂(𝑠)𝑯₁(𝑠) = (𝒀(𝑠)/𝑽(𝑠))(𝑽(𝑠)/𝑿(𝑠))

More importantly, this form requires only 𝑛 integrators, the minimum number for an 𝑛-th-order differential equation. Block diagrams that use the minimum number of energy stores (integrators) for the realization of an 𝑛-th-order differential equation are also called canonical forms.

II.II The Transposed Form (Second Canonical Form): The recursive procedure for calculating the response of a differential equation is extremely useful in implementing causal systems. However, it is important to recognize that, either in algebraic terms or in terms of block diagrams, a differential equation can often be rearranged into other forms leading to implementations that may have particular advantages. Here we present an alternative realization called the second canonical form.

Let us define the variables 𝝃ₖ(𝑠), 𝑘 = 0, 1, …, 𝑁, with 𝝃₀(𝑠) = 0, where

𝝃_{k+1}(𝑠) = (1/𝑠)(𝑏_{N−k}𝑿(𝑠) − 𝑎_{N−k}𝒀(𝑠) + 𝝃ₖ(𝑠))
⟹ 𝒀(𝑠) = 𝑏₀𝑿(𝑠) + 𝝃_N(𝑠)
⟹ 𝒀(𝑠) = 𝑏₀𝑿(𝑠) + ∑_{k=1}^{N} (1/𝑠ᵏ)(𝑏ₖ𝑿(𝑠) − 𝑎ₖ𝒀(𝑠))
⟹ 𝑯(𝑠) = 𝒀(𝑠)/𝑿(𝑠) = (∑_{k=0}^{N} 𝑏ₖ𝑠^{N−k})/(∑_{k=0}^{N} 𝑎ₖ𝑠^{N−k}) with 𝑎₀ = 1

This recursion leads to the schematic diagram shown here.


II.III Continuous State Space Representation: So far we have been describing systems in
terms of equations relating certain output to an input (the input-output relationship). This
type of description is an external description of a system (system viewed from the input and
output terminals). Such a description may be inadequate in some cases, and we need a
systematic way of finding system's internal description. State space analysis of systems
meets this need. In this method, we first select a set of key variables, called the state
variables, in the system. The state variables have the property that every possible signal or
variable in the system at any instant t can be expressed in terms of the state variables and
the input(s) at that instant t. If we know all the state variables as a function of t, we can
determine every possible signal or variable in the system at any instant with a relatively
simple relationship. The system description in this method consists of two parts

▪ Finding the equation(s) relating the state variables to the input(s) (the state equation).
▪ Finding the output variables in terms of the state variables (the output equation).

The analysis procedure, therefore, consists of solving the state equation first, and then
solving the output equation. The state space description is capable of determining every
possible system variable (or output) from the knowledge of the input and the initial state
(conditions) of the system. For this reason it is an internal description of the system.

If we have a system equation in the form of an 𝑛-th-order differential equation, we can convert it into a state equation as follows. Consider the system equation

d^n𝐲/dt^n + 𝑎₁ d^{n−1}𝐲/dt^{n−1} + ⋯ + 𝑎_{n−1} d𝐲/dt + 𝑎_n 𝐲(𝑡) = 𝐮(𝑡)

We can define 𝑛 new variables, 𝐱₁ through 𝐱_n:

𝐱₁(𝑡) = 𝐲(𝑡), 𝐱₂(𝑡) = 𝐲̇(𝑡), …, 𝐱_n(𝑡) = 𝐲^{(n−1)}(𝑡)

so that

𝐱̇₁(𝑡) = 𝐱₂(𝑡)
𝐱̇₂(𝑡) = 𝐱₃(𝑡)
⋮
𝐱̇_{n−1}(𝑡) = 𝐱_n(𝑡)
𝐱̇_n(𝑡) = 𝐲^{(n)}(𝑡) = 𝐮(𝑡) − ∑_{i=1}^{n} 𝑎ᵢ𝐲^{(n−i)}(𝑡) = 𝐮(𝑡) − ∑_{i=1}^{n} 𝑎ᵢ𝐱_{n−i+1}(𝑡)

These 𝑛 simultaneous first-order differential equations are the state equations of the system. The output equation is 𝐲(𝑡) = 𝐱₁(𝑡).

These equations can be written more conveniently in matrix form:

d/dt [𝐱₁(𝑡); 𝐱₂(𝑡); ⋮; 𝐱_n(𝑡)] = [0     1       0    ⋯  0;
                                  0     0       1    ⋯  0;
                                  ⋮     ⋮       ⋱    ⋱  1;
                                  −𝑎_n  −𝑎_{n−1} ⋯  −𝑎₂  −𝑎₁] [𝐱₁(𝑡); 𝐱₂(𝑡); ⋮; 𝐱_n(𝑡)] + [0; 0; ⋮; 1] 𝐮(𝑡)

and 𝐲(𝑡) = (1 0 ⋯ 0)[𝐱₁(𝑡); 𝐱₂(𝑡); ⋮; 𝐱_n(𝑡)]
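These companion-form matrices are what MATLAB's tf2ss produces, up to the ordering of the state variables (a sketch with made-up coefficients; tf2ss assumes the Signal Processing or Control System Toolbox):

num = [2 3];                     % example: H(s) = (2s + 3)/(s^2 + 4s + 5)
den = [1 4 5];
[A, B, C, D] = tf2ss(num, den)   % a state-space realization of H(s)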
The most general state-space representation of a linear system with 𝑚 inputs, 𝑝 outputs
and 𝑛 state variables is written in the following form
(d/dt)𝐱(𝑡) = 𝑨(𝑡)𝐱(𝑡) + 𝑩(𝑡)𝐮(𝑡)   (the state equation)
𝐲(𝑡) = 𝑪(𝑡)𝐱(𝑡) + 𝑫(𝑡)𝐮(𝑡)         (the output equation)
Where
𝐱( . ) is called the "state vector", 𝐱(𝑡) ∈ ℝ𝑛
𝐲( . ) is called the "output vector", 𝐲(𝑡) ∈ ℝ𝑝
𝐮( . ) is called the "input (or control) vector", 𝐮(𝑡) ∈ ℝ𝑚
𝐀( . ) is the "state (or system) matrix", dim(𝑨(. )) = 𝑛 × 𝑛
𝐁( . ) is the "input matrix", dim(𝑩(. )) = 𝑛 × 𝑚
𝐂( . ) is the "output matrix", dim(𝑪(. )) = 𝑝 × 𝑛
𝐃( . ) is the "feedthrough (or feedforward) matrix" dim(𝑫(. )) = 𝑝 × 𝑚

In this general formulation, all matrices are allowed to be time-variant (i.e. their elements
can depend on time); however, in the common LTI case, matrices will be time invariant. The
time variable 𝑡 can be continuous (e.g. 𝑡 ∈ ℝ) or discrete (e.g. 𝑡 ∈ ℤ). In the latter case, the
time variable 𝑘 is usually used instead of 𝑡. The "state space" is the Euclidean space in
which the variables on the axes are the state variables. The state of the system can be
represented as a vector within that space. If the dynamical system is linear, time-invariant,
and finite-dimensional, then the differential and algebraic equations may be written in
matrix form. The state-space method is characterized by significant algebraization of
general system theory, which makes it possible to use vector-matrix structures.

By its nature, the state variable analysis is eminently suited for multiple-input, multiple-
output (MIMO) systems. In addition, the state-space techniques are useful for several other
reasons, including the following:

▪ Time-varying parameter systems and nonlinear systems can be characterized effectively


with state-space descriptions.
▪ State equations lend themselves readily to accurate simulation on analog or digital
computers, and can yield a great deal of information about a system even when they are
not solved explicitly.
▪ For second-order systems (𝑛 = 2), a graphical method called phase-plane analysis can be
used on state equations, whether they are linear or nonlinear.

Remark: It is of great importance to note that the state space representation is not unique. As a simple example, we could simply reorder the variables 𝐱 ↔ 𝐱_new in the example above. This results in a new state space representation (similar matrices can represent the same system).
The above equations (i.e. state and output equations) can be solved in both the time
domain and frequency domain (Laplace transform). The latter requires fewer new concepts
and is therefore easier to deal with than the time-domain solution. For this reason, we shall
first consider the Laplace transform solution.

𝐱̇ (𝑡) = 𝑨𝐱(𝑡) + 𝑩𝐮(𝑡) ⟺ 𝑠𝐗(𝑠) − 𝐱(0) = 𝑨𝐗(𝑠) + 𝑩𝐔(𝑠)


⟺ (𝑠𝑰 − 𝑨)𝐗(𝑠) − 𝐱(0) = 𝑩𝐔(𝑠)
⟺ 𝐗(𝑠) = (𝑠𝑰 − 𝑨)−1 𝐱(0) + (𝑠𝑰 − 𝑨)−1 𝑩𝐔(𝑠)

If we define 𝛟(𝑠) = (𝑠𝑰 − 𝑨)−1 named the transition matrix, then

𝐱(t) = 𝕃^{−1}(𝛟(s)𝐱(0)) + 𝕃^{−1}(𝛟(s)𝑩𝐔(s))
       (zero-input component)   (zero-state component)

This equation gives the desired solution. Observe the two components of the solution. The
first component yields 𝐱(𝑡) when the input 𝐮(𝑡) = 0. Hence the first component is the zero-
input component. In a similar manner, we see that the second component is the zero-state
component.

The output equation is given by 𝐲(𝑡) = 𝑪𝐱(𝑡) + 𝑫𝐮(𝑡) ⟺ 𝐘(𝑠) = 𝑪𝐗(𝑠) + 𝑫𝐔(𝑠)

𝐘(𝑠) = 𝑪(𝑠𝑰 − 𝑨)−1 𝐱(0) + 𝑪(𝑠𝑰 − 𝑨)−1 𝑩𝐔(𝑠) + 𝑫𝐔(𝑠)


= 𝑪(𝑠𝑰 − 𝑨)−1 𝐱(0) + (𝑪(𝑠𝑰 − 𝑨)−1 𝑩 + 𝑫)𝐔(𝑠)

The zero-state response (that is, the response 𝐘(𝑠) when 𝐱(0) = 0), is given by

𝐘(𝑠) = (𝑪(𝑠𝑰 − 𝑨)−1 𝑩 + 𝑫)𝐔(𝑠) ⟹ 𝐘(𝑠)/𝐔(𝑠) = 𝑪(𝑠𝑰 − 𝑨)−1 𝑩 + 𝑫

Remark: The idea behind a transfer function is to figure out what a system does with a
“delta” or impulse input (In other words a very short, very large "spiky" input). After you
know that any other response is just a sum (integral) of a bunch of different impulse
responses (the impulse response is a different way to think of the transfer function)

Now, in order to see how a system reacts to just an impulse input, you need to let the
system stop reacting to any previous inputs. In other words, the system has to be at rest
(have zero state or initial conditions).

Later on we will demonstrate that 𝝓(t) = 𝕃^{−1}(𝛟(s)) = e^{𝑨t}. For now, taking 𝝓(t) = e^{𝑨t} as the time-domain counterpart of the transition matrix 𝛟(s), we have

𝐇(𝑠) = 𝑪(𝑠𝑰 − 𝑨)−1 𝑩 + 𝑫 ⟹ 𝐡(𝑡) = 𝑪𝝓(𝑡)𝑩 + 𝑫𝛿(𝑡) = 𝑪𝑒 𝑨𝑡 𝑩 + 𝑫𝛿(𝑡)

The total response of the system is given by:


𝐘(s) = 𝑪𝛟(s)𝐱(0) + (𝑪𝛟(s)𝑩 + 𝑫)𝐔(s) ⟹ 𝐲(t) = 𝑪e^{𝑨t}𝐱(0) + ∫_{−∞}^{t} {𝑪e^{𝑨(t−τ)}𝑩 + 𝑫δ(t − τ)}𝐮(τ)dτ

The response of the system can be written as 𝐲(t) = 𝐲_h(t) + 𝐲_p(t):

𝐲(t) = 𝑪e^{𝑨t}𝐱(0) + ∫_{−∞}^{t} 𝐡(t − τ)𝐮(τ)dτ = 𝑪e^{𝑨t}𝐱(0) (free response) + 𝐮(t) ⋆ 𝐡(t) (forced response)
% Mass-spring-damper example: simulate the state-space model with lsim
clear, clc, m=10; b=0.1; k=2;
A=[0 -k/m; 1 -b/m]; B=[k/m b/m]';
C=[0 1]; D=0; sys=ss(A,B,C,D);
x0=[1 -1]'; T=20; t=0:0.01:T;
u=ones(length(t),1);               % unit-step input
[y,t,x]=lsim(sys,u,t,x0);          % total response from initial state x0
plot(t,y), grid on, hold on        % output y(t)
plot(t,x(:,1)), plot(t,x(:,2))     % state trajectories x1(t) and x2(t)

A state variable representation of a system is not unique. In fact there are infinitely many
representations. Methods for transforming from one set of state variables to another are
discussed below, followed by an example.

Consider the state space representation of an LTI system

𝐱̇ (𝑡) = 𝑨𝐱(𝑡) + 𝑩𝐮(𝑡)


𝐲(𝑡) = 𝑪𝐱(𝑡) + 𝑫𝐮(𝑡)

We can define a new set of independent variables (i.e., 𝑻 is invertible) 𝐳(𝑡) = 𝑻𝐱(𝑡). Though it
may not be obvious we can use this new set of variables as state variables. Start by solving
for 𝐱: 𝐱(𝑡) = 𝑻−1 𝐳(𝑡).

We can now rewrite the state space model by replacing 𝐱 in the original equations

𝐱̇ (𝑡) = 𝑨𝐱(𝑡) + 𝑩𝐮(𝑡) 𝑻−1 𝐳̇ (𝑡) = 𝑨𝑻−1 𝐳(𝑡) + 𝑩𝐮(𝑡) 𝐳̇ (𝑡) = 𝑻𝑨𝑻−1 𝐳(𝑡) + 𝑻𝑩𝐮(𝑡)
{ ⟺ { ⟺ {
𝐲(𝑡) = 𝑪𝐱(𝑡) + 𝑫𝐮(𝑡) 𝐲(𝑡) = 𝑪𝑻−1 𝐳(𝑡) + 𝑫𝐮(𝑡) 𝐲(𝑡) = 𝑪𝑻−1 𝐳(𝑡) + 𝑫𝐮(𝑡)

We recognize this as a state space representation

𝐱̇ (𝑡) = 𝑨𝐱(𝑡) + 𝑩𝐮(𝑡) 𝐳̇ (𝑡) = 𝑨1 𝐳(𝑡) + 𝑩1 𝐮(𝑡)


{ ⟺ {
𝐲(𝑡) = 𝑪𝐱(𝑡) + 𝑫𝐮(𝑡) 𝐲(𝑡) = 𝑪1 𝐳(𝑡) + 𝑫1 𝐮(𝑡)

With 𝑨1 = 𝑻𝑨𝑻−1 , 𝑩1 = 𝑻𝑩, 𝑪1 = 𝑪𝑻−1 , 𝑫1 = 𝑫
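This equivalence is easy to check numerically. The sketch below uses illustrative matrices and an arbitrarily chosen invertible 𝑻 (not tied to any particular system) and confirms that the original and transformed realizations share the same transfer function:

% Similarity transformation check (illustrative values):
% A1 = T*A*inv(T), B1 = T*B, C1 = C*inv(T), D1 = D.
A = [0 1; -2 -3]; B = [0; 1]; C = [1 0]; D = 0;
T = [1 1; 0 2];                  % any invertible T defines new states z = T*x
A1 = T*A/T; B1 = T*B; C1 = C/T; D1 = D;
tf(ss(A,B,C,D))                  % transfer function of the original realization
tf(ss(A1,B1,C1,D1))              % identical transfer function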

The process of converting transfer function to state space form is not unique. There are
various “realizations” possible. All realizations are “equivalent” (i.e. properties do not
change). However, one representation may have some advantages over others for a
particular task.

𝐱̇(t) = 𝑨𝐱(t) + 𝑩𝐮(t), 𝐲(t) = 𝑪𝐱(t) + 𝑫𝐮(t) ⟹ 𝐇(s) = 𝑪(s𝑰 − 𝑨)^{−1}𝑩 + 𝑫

𝐳̇(t) = 𝑨_1𝐳(t) + 𝑩_1𝐮(t), 𝐲(t) = 𝑪_1𝐳(t) + 𝑫_1𝐮(t) ⟹ 𝐇_1(s) = 𝑪_1(s𝑰 − 𝑨_1)^{−1}𝑩_1 + 𝑫_1
   = 𝑪𝑻^{−1}(s𝑰 − 𝑻𝑨𝑻^{−1})^{−1}𝑻𝑩 + 𝑫
   = 𝑪𝑻^{−1}𝑻(s𝑰 − 𝑨)^{−1}𝑻^{−1}𝑻𝑩 + 𝑫 = 𝐇(s)
Note that although there are many state space representations of a given system, all of
those representations will result in the same transfer function (i.e., the transfer function of
a system is unique; the state space representation is not).

𝐇(𝑠) = 𝑪(𝑠𝑰 − 𝑨)−1 𝑩 + 𝑫

Remember that the 𝑖𝑗 𝑡ℎ element of the transfer function matrix 𝐇(𝑠) represents the transfer
function that relates the output 𝐲𝑖 (𝑡) to the input 𝐮𝑗 (𝑡).

% Transfer functions of a two-input, three-output system with ss2tf
clear, clc
A=[0 1;-2 -3]; B=[1 0;1 1];
C=[1 0;1 1;0 2]; D=[0 0;1 0;0 1];
[num1,den1]=ss2tf(A,B,C,D,1)   % transfer functions from input 1
[num2,den2]=ss2tf(A,B,C,D,2)   % transfer functions from input 2

In case of SISO systems we have 𝐇(𝑠) = 𝑪(𝑠𝑰 − 𝑨)−1 𝑩 + 𝑫 = 𝐇prop (𝑠) + 𝑫

𝐇_prop(s) = 𝑵(s)/𝑫(s) = (∑_{k=0}^{m} b_k s^{m−k}) / (∑_{k=0}^{n} a_k s^{n−k}) = b_0 ∏_{k=1}^{m}(s − z_k) / ∏_{k=1}^{n}(s − p_k)   with a_0 = 1 and m < n

Where 𝑧𝑘 ∈ ℂ are called the system zeros and 𝑝𝑘 ∈ ℂ are called the system poles. Therefore,
the Zeros are defined as the roots of the polynomial of the numerator of a transfer function
and poles are defined as the roots of the denominator of a transfer function.

Poles and Zeros of a transfer function are the frequencies for which the value of the
denominator and numerator of transfer function becomes zero respectively. The values of
the poles and the zeros of a system determine whether the system is stable, and how well
the system performs. Control systems, in the simplest sense, can be designed simply by
assigning specific values to the poles and zeros of the system.

Remark: Identifying the poles and zeros of a transfer function aids in understanding the
behavior of the system. The standardized form of a transfer function is like a template that
helps us to quickly determine the system’s characteristics.

Physically realizable (causal) systems must have at least as many poles as zeros. Transfer functions that satisfy this relationship are called proper; when the number of poles strictly exceeds the number of zeros, they are called strictly proper.

Assume that there are no repeated poles; then by using partial fraction expansion we obtain

𝐇_prop(s) = 𝑵(s)/𝑫(s) = b_0 ∏_{k=1}^{m}(s − z_k) / ∏_{k=1}^{n}(s − p_k) = 𝒉_1/(s − p_1) + 𝒉_2/(s − p_2) + ⋯ + 𝒉_n/(s − p_n)

This implies that

𝐡_prop(t) = 𝑪𝝓(t)𝑩 = 𝑪e^{𝑨t}𝑩 = ∑_{k=1}^{n} 𝒉_k e^{p_k t}

Knowing that p_k = σ_k + jω_k, we can therefore write

𝐡_prop(t) = ∑_{k=1}^{n} 𝒉_k e^{σ_k t} e^{jω_k t}
We can see from this equation that every pole will have an exponential part, and a
sinusoidal part to its response. We can also go about constructing some rules:

▪ if 𝜎𝑘 = 0, the response of the pole is a perfect sinusoid (an oscillator)


▪ if 𝜔𝑘 = 0 , the response of the pole is a perfect exponential.
▪ if 𝜎𝑘 < 0, the exponential part of the response will decay towards zero.
▪ if 𝜎𝑘 > 0, the exponential part of the response will rise towards infinity.

From the last two rules, we can see that all poles of the system must have negative real
parts to be stable. We will discuss stability in later chapters.

% Impulse response matrix h(t) = C*expm(A*t)*B computed sample by sample
clear, clc
A=[0 1;-2 -3]; B=[1 0;1 1]; C=[1 0;1 1;0 2]; D=[0 0;1 0;0 1];
dt=0.01; t=0; T=10;
for k=1:(T/dt)
    h(:,:,k)=C*expm(A*t)*B;       % Markov kernel at time t
    t=t+dt;
end
t=0:dt:T-dt;
hold on, grid on
for i=1:3                          % plot all six entries h_ij(t)
    for j=1:2
        plot(t,squeeze(h(i,j,:))) % squeeze the 1x1xN slice before plotting
    end
end

MATLAB offers several instructions for the calculation of eigenvalues and zeros of a system.
One can calculate the eigenvalues of 𝑨 with the command 𝐞𝐢𝐠. We can also go through the
calculation of the characteristic polynomial of 𝑨 with the command poly, then calculate
the roots of the characteristic polynomial with the command roots. Another way to do this
is to define a sys system object and then calculate its zeros with the zero command and
calculate its eigenvalues with the pole command. It is also possible to calculate the
eigenvalues and the zeros of the system with the command pzmap.

clear, clc, A=[-1 1 1;-1 -1 1;0 0 2]; B=[1 -1 2]'; C=[0 2 1]; D=1;
% --------------- eigenvalues of A
Eig_Val=eig(A)
% --------------- via the characteristic polynomial
Char_Poly=poly(A)
Eig_Val=roots(Char_Poly)
% --------------- via a system object
sys=ss(A,B,C,D)
Eig_Val=pole(sys)
Tzeros=zero(sys)      % renamed to avoid shadowing the built-in zeros()
% --------------- poles and zeros together
[Eig_Val,Tzeros]=pzmap(sys)
What happens if some poles are repeated?

In case of repeated poles we obtain the term 1/(𝑠 − 𝑝𝑘 )2 in the partial fraction expansion,
which is corresponding to 𝑡𝑒 𝑝𝑘𝑡 in the time domain.
lim_{t→∞} t e^{p_k t} = lim_{t→∞} t e^{σ_k t} e^{jω_k t} = { 0 if σ_k < 0 ;  ∞ if σ_k = 0 }

As a result: A system is asymptotically stable if all its poles have negative real parts. A
system is unstable if at least one pole has a positive real part, or if there are any repeated
poles on the imaginary axis. A system is marginally stable if it has one or more distinct
poles on the imaginary axis, and any remaining poles have negative real parts.

Characteristic Roots (Eigenvalues) of a Matrix: The denominator of every element of 𝛟(𝑠)


is ∆(𝑠) = det(𝑠𝑰 − 𝑨) because 𝛟(𝑠) = (𝑠𝑰 − 𝑨)−1, and the inverse of a matrix has its
determinant in the denominator. Since 𝑪, 𝑩, and 𝑫 are matrices with constant elements,
then the denominator of 𝛟(𝑠) will also be the denominator of 𝐇(𝑠). Hence, the denominator
of every element of 𝐇(𝑠) is ∆(𝑠) = |𝑠𝑰 − 𝑨|, except for the possible cancellation of the
common factors. In other words, the poles of all transfer functions of the system are also
the zeros of the polynomial |𝑠𝑰 − 𝑨|. Therefore, the zeros of the polynomial ∆(𝑠) are the
characteristic roots of the system. Hence, the characteristic roots of the system are the roots
of the equation |𝑠𝑰 − 𝑨| = 0. Since ∆(𝑠) is an 𝑛𝑡ℎ -order polynomial in 𝑠 with 𝑛 zeros, we can
write it as
∆(𝑠) = 𝑠 𝑛 + 𝑎1 𝑠 𝑛−1 + ⋯ + 𝑎𝑛−1 𝑠 + 𝑎𝑛
= (𝑠 − 𝑝1 )(𝑠 − 𝑝2 ) … (𝑠 − 𝑝𝑛 ) = 0

This equation is known as the characteristic equation of the matrix 𝑨, and 𝑝1, 𝑝2 ,..., 𝑝𝑛 are
the characteristic roots of 𝑨. The term eigenvalue, from the German Eigenwert meaning "characteristic value", is also commonly used in the literature. Thus, we have shown that the characteristic roots of a system are the eigenvalues (characteristic values) of the matrix 𝑨.

▪ Every pole of the system is an eigenvalue of 𝑨, but not necessarily conversely (because of possible cancellation between poles and zeros).
▪ Poles are invariant under similarity transformation, means that all different state space
representations have the same set of eigenvalues.

𝐇(s) = 𝑪_1(s𝑰 − 𝑨_1)^{−1}𝑩_1 = 𝑪_2𝑻^{−1}(s𝑰 − 𝑻𝑨_2𝑻^{−1})^{−1}𝑻𝑩_2 = 𝑪_2(s𝑰 − 𝑨_2)^{−1}𝑩_2 ⟹ det(s𝑰 − 𝑨_1) = det(s𝑰 − 𝑨_2)

Time-Domain Solution of State Equations: The state equations of a linear system are 𝑛
simultaneous linear differential equations of the first order. The same techniques of solving
scalar linear differential equations can be applied to state equations without any
modification. However, it is more convenient to carry out the solution in the framework of
matrix notation.
𝐱̇ (𝑡) = 𝑨𝐱(𝑡) + 𝑩𝐮(𝑡)

We now show that the solution of this vector differential equation is

𝐱(t) = e^{𝑨t}𝐱(0) + ∫_0^t e^{𝑨(t−τ)}𝑩𝐮(τ)dτ
Before proceeding further, we must define the exponential of the matrix 𝑒 𝑨𝑡 . An exponential
of a matrix is defined by an infinite series identical to that used in defining an exponential
of a scalar. We shall define

e^{𝑨t} = 𝑰 + 𝑨t + 𝑨²t²/2! + 𝑨³t³/3! + ⋯ + 𝑨ⁿtⁿ/n! + ⋯ = ∑_{k=0}^{∞} 𝑨^k t^k / k!

In the set of solved problems we show that the infinite series is absolutely and uniformly
convergent for all values of t. Consequently, it can be differentiated or integrated term by
term. Thus, to find (𝑑/𝑑𝑡)𝑒 𝑨𝑡 , we differentiate the series on the right-hand side of 𝑒 𝑨𝑡 . term
by term:

de^{𝑨t}/dt = 𝑨 + 𝑨²t + 𝑨³t²/2! + 𝑨⁴t³/3! + ⋯ + 𝑨^{n+1}tⁿ/n! + ⋯ = 𝑨e^{𝑨t} = e^{𝑨t}𝑨
Also note that from the definition, it follows that

𝑒 𝟎 = 𝑰 ⟹ (𝑒 𝑨𝑡 )(𝑒 −𝑨𝑡 ) = (𝑒 −𝑨𝑡 )(𝑒 𝑨𝑡 ) = 𝑰

In the set of solved problems, we show that

𝑑 𝑑𝑷 𝑑𝑸
(𝑷𝑸) = 𝑸+𝑷
𝑑𝑡 𝑑𝑡 𝑑𝑡
Using this relationship, we observe that

𝑑 −𝑨𝑡 𝑑𝑒 −𝑨𝑡 𝑑𝐱
(𝑒 𝐱) = 𝐱(𝑡) + 𝑒 −𝑨𝑡
𝑑𝑡 𝑑𝑡 𝑑𝑡
𝑑𝐱
= −𝑒 −𝑨𝑡 𝑨𝐱(𝑡) + 𝑒 −𝑨𝑡
𝑑𝑡
= −𝑒 −𝑨𝑡 𝑨𝐱(𝑡) + 𝑒 −𝑨𝑡 (𝑨𝐱(𝑡) + 𝑩𝐮(𝑡))
= 𝑒 −𝑨𝑡 𝑩𝐮(𝑡)

The integration of both sides of this equation from 0 to 𝑡 yields


e^{−𝑨t}𝐱(t)|_0^t = ∫_0^t e^{−𝑨τ}𝑩𝐮(τ)dτ ⟹ e^{−𝑨t}𝐱(t) − 𝐱(0) = ∫_0^t e^{−𝑨τ}𝑩𝐮(τ)dτ
Hence
𝐱(t) = e^{𝑨t}𝐱(0) + ∫_0^t e^{𝑨(t−τ)}𝑩𝐮(τ)dτ = e^{𝑨t}𝐱(0) + e^{𝑨t} ⋆ 𝑩𝐮(t) = e^{𝑨t} ⋆ (𝐱(0)δ(t) + 𝑩𝐮(t))

This is the desired solution. The first term on the right-hand side represents 𝐱(𝑡) when the
input 𝐮(𝑡) = 0. Hence it is the zero-input component. The second term, by a similar
argument, is seen to be the zero-state component.

This result can easily be generalized to any initial time t_0. It is left as an exercise for the reader to show that the solution of the state equation can be expressed as

𝐱(t) = e^{𝑨(t−t_0)}𝐱(t_0) + ∫_{t_0}^{t} e^{𝑨(t−τ)}𝑩𝐮(τ)dτ
Determining 𝑒 𝑨𝑡 : The exponential 𝑒 𝑨𝑡 required in 𝐱(𝑡) can be computed from the definition
above. Unfortunately, this is an infinite series, and its computation can be quite laborious.
Moreover, we may not be able to recognize the closed-form expression for the answer. There
are several efficient methods of determining 𝑒 𝑨𝑡 in closed form (see BEKHITI algebra books
2020). The Cayley-Hamilton theorem can be used to evaluate functions of a square matrix
𝑨, as shown below. Consider a function f(𝑨) in the form of an infinite power series:

f(λ) = β_0 + β_1λ + β_2λ² + ⋯ + β_{n−1}λ^{n−1} + ⋯ = ∑_{k=0}^{∞} β_k λ^k
Because the eigenvalue 𝜆 satisfies the characteristic equation of the matrix 𝑨, we can write

∆(𝜆) = 0 ⟺ 𝜆𝑛 = −(𝑎1 𝜆𝑛−1 + ⋯ + 𝑎𝑛−1 𝜆 + 𝑎𝑛 )

If we multiply both sides by 𝜆, the left-hand side is 𝜆𝑛+1, and the right-hand side contains
the terms 𝜆𝑛 , 𝜆𝑛−1 , … , 𝜆. If we substitute 𝜆𝑛 in terms of 𝜆𝑛−1 , 𝜆𝑛−2 , … , 𝜆, the highest power on
the right-hand side is reduced to 𝑛 − 1. Continuing in this way, we see that 𝜆𝑛+𝑘 can be
expressed in terms of 𝜆𝑛 , 𝜆𝑛−1 , … , 𝜆 for any 𝑘. Hence, the infinite series of f(𝜆) can always be
expressed in terms as f(𝜆) = 𝛽0 + 𝛽1 𝜆 + 𝛽2 𝜆2 + ⋯ + 𝛽𝑛−1 𝜆𝑛−1 . If we assume that there are 𝑛
distinct eigenvalues 𝜆1 , 𝜆2 , … 𝜆𝑛 , then the finite series of f(𝜆) holds for these n values of 𝜆. The
substitution of these values in f(𝜆) yields 𝑛 simultaneous equations
(β_0; β_1; ⋮; β_{n−1}) = [1 λ_1 λ_1² ⋯ λ_1^{n−1}; 1 λ_2 λ_2² ⋯ λ_2^{n−1}; ⋮; 1 λ_n λ_n² ⋯ λ_n^{n−1}]^{−1} (f(λ_1); f(λ_2); ⋮; f(λ_n))

Since 𝑨 also satisfies the characteristic equation, we may advance a similar argument to
show that if f(𝑨) is a function of a square matrix 𝑨 expressed as an infinite power series in
𝑨, then f(𝑨) = 𝛽0 + 𝛽1 𝑨 + 𝛽2 𝑨2 + ⋯ + 𝛽𝑛−1 𝑨𝑛−1 , in which the coefficients 𝛽𝑖 ′𝑠 are found from
the above matrix equation. If some of the eigenvalues are repeated (multiple roots), the
results are somewhat modified using the generalized Vandermond matrix.

As a special case we have

e^{𝑨t} = ∑_{k=0}^{n−1} β_k𝑨^k   with   (β_0; ⋮; β_{n−1}) = 𝑽^{−1}(e^{λ_1 t}; ⋮; e^{λ_n t})   and   𝑽 = [1 λ_1 λ_1² ⋯ λ_1^{n−1}; 1 λ_2 λ_2² ⋯ λ_2^{n−1}; ⋮; 1 λ_n λ_n² ⋯ λ_n^{n−1}]
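As a minimal numerical sketch of this method (assuming distinct eigenvalues, with an illustrative matrix 𝑨), the Vandermonde system can be solved directly and the result checked against MATLAB's built-in expm:

% e^(At) via Cayley-Hamilton with Vandermonde interpolation (distinct
% eigenvalues assumed); compared against MATLAB's expm.
A = [0 1; -2 -3]; t = 0.7;
lam = eig(A); n = length(lam);
V = zeros(n);
for k = 1:n, V(:,k) = lam.^(k-1); end   % Vandermonde matrix V
beta = V \ exp(lam*t);                  % coefficients beta_0 ... beta_{n-1}
E = zeros(n);
for k = 1:n, E = E + beta(k)*A^(k-1); end
norm(E - expm(A*t))                     % should be ~ machine precision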

III. Realization of Discrete Systems: The input-output relation of a linear time-invariant


discrete-time system can be characterized by an impulse response, a frequency response, a
system function or a linear constant-coefficient difference equation. When the input-output
relation is given, the system can be implemented in different structures. These structures
are different in accuracy, speed, cost, and others. A structural representation using
interconnected basic building blocks is the first step in the hardware or software
implementation of an LTI digital filter. The structural representation provides the key
relations between some pertinent internal variables with the input and output that in turn
provides the key to the implementation. Here, we discuss causal systems only.
A digital filter structure is said to be canonic if the number of delays in the block diagram
representation is equal to the order of the transfer function. Otherwise, it is a non-canonic
structure. Two digital filter structures are defined to be equivalent if they have the same
transfer function. We describe next a number of methods for the generation of equivalent
structures. However, a fairly simple way to generate an equivalent structure from a given
realization is via the transpose operation.

The FIR versus IIR: Before starting the talk about the basic structures of digital system
we should differentiate two important types of filters, named IIR and FIR filters. If the
impulse response of the filter falls to zero after a finite period of time, it is an FIR (Finite
Impulse Response) filter. However, if the impulse response exists indefinitely, it is an IIR
(Infinite Impulse Response) filter.

 The output values of IIR filters are calculated by adding the weighted sum of previous
and current input values to the weighted sum of previous output values. If the input values
are 𝐱[𝑛] and the output values 𝐲[𝑛], the difference equation defines the IIR filter:
𝐲[n] = (1/a_0)(∑_{k=0}^{M} b_k𝐱[n−k] − ∑_{k=1}^{N} a_k𝐲[n−k]) ⟹ 𝐇(z) = (∑_{k=0}^{M} b_k z^{−k}) / (a_0 + ∑_{k=1}^{N} a_k z^{−k})

The number of forward coefficients M and the number of reverse coefficients N are usually equal, and this common value is the filter order. The higher the filter order, the more closely the filter resembles an ideal filter. If we perform long division (a power series expansion) we obtain
𝐇(z) = (∑_{k=0}^{M} b_k z^{−k}) / (a_0 + ∑_{k=1}^{N} a_k z^{−k}) = ∑_{k=0}^{∞} h_k z^{−k} ⟹ 𝐡[n] = ∑_{k=0}^{∞} h_k δ[n−k]
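This long-division view can be checked numerically: filtering a unit impulse returns exactly the power-series coefficients h_k. The coefficients below are illustrative, not taken from any specific design:

% Impulse response of an IIR filter = power-series (long division)
% coefficients of H(z); coefficients below are illustrative.
b = [1 0.5]; a = [1 -0.9];        % H(z) = (1 + 0.5 z^-1)/(1 - 0.9 z^-1)
d = [1 zeros(1,19)];              % unit impulse, 20 samples
h = filter(b, a, d);              % h[n] for n = 0..19
stem(0:19, h), grid on            % decays like 0.9^n: an infinite response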

 FIR filters are also known as non-recursive filters, convolution filters, or moving-average
filters because the output values of an FIR filter are described as a finite convolution:
𝐲[n] = ∑_{k=0}^{M} b_k𝐱[n−k] ⟹ 𝐡[n] = ∑_{k=0}^{M} b_kδ[n−k]   or   𝐇(z) = ∑_{k=0}^{M} b_k z^{−k}

The output values of a FIR filter depend only on the current and past input values.
Because the output values do not depend on past output values, the impulse response
decays to zero in a finite period of time. FIR filters have the following properties:

• FIR filters can achieve a linear phase response and pass a signal without phase distortion.
• They are easier to implement than IIR filters.
• An FIR filter has all of its poles at z = 0 (or, for a non-causal filter, at |z| → ∞).
• FIR filters are always stable, because they have no poles outside the unit disk.

 Comparison: The advantage of IIR filters over FIR filters is that IIR filters usually require
fewer coefficients to execute similar filtering operations, that IIR filters work faster, and
require less memory space. The disadvantage of IIR filters is the nonlinear phase response.
IIR filters are well suited for applications that require no phase information, for example,
for monitoring the signal amplitudes. FIR filters are better suited for applications that
require a linear phase response. The necessary and sufficient condition for IIR filters to be
stable is that all poles are inside the unit circle. IIR filters consist of zeros and poles, and
require less memory than FIR filters, whereas an FIR filter consists of zeros only. IIR filters can be difficult to implement: adjustments of the delays and coefficients can move the poles and zeros and render the filter unstable, whereas FIR filters always remain stable. FIR filters cannot closely reproduce analog filter responses, while IIR filters are designed to do that accurately. The high computational efficiency of IIR filters, with their short delays, often makes them the popular alternative, since FIR filters in digital feedback systems and other delay-sensitive applications can become too long and cause problems.

III.I Basic Structures for IIR Systems: A filter processes the input and obtains the output
through three types of operations: delay, multiplication, and addition, as evidenced from
the difference equation, system function, or convolution sum. These operations are done by
three elements that are either physical or conceptual, or implemented through hardware or
software tools. The elements are adder, gain (multiplier) and delay element.

The basic structures IIR systems include the direct form I, the direct form II, the cascade
form and the parallel form. These structures, as well as other structures for IIR systems,
have feedback loops.

Consider an IIR system with system function


𝐲[n] = ∑_{k=0}^{M} b_k𝐱[n−k] − ∑_{k=1}^{N} a_k𝐲[n−k] ⟹ 𝐇(z) = 𝒀(z)/𝑿(z) = (∑_{k=0}^{M} b_k z^{−k}) / (∑_{k=0}^{N} a_k z^{−k}) = (𝒀(z)/𝑾(z))(𝑾(z)/𝑿(z))

With 𝑾(z) = (∑_{k=0}^{M} b_k z^{−k})𝑿(z) = (∑_{k=0}^{N} a_k z^{−k})𝒀(z) and a_0 = 1

The intermediate variable 𝐖[n] can be written as

𝐖[n] = ∑_{k=0}^{M} b_k𝐱[n−k]   and also   𝐖[n] = ∑_{k=0}^{N} a_k𝐲[n−k]

This last equation can be implemented as shown below

Implementation of IIR by the direct form


Remark: the second schematic diagram above can be obtained from the first one, just by
moving (or swapping) parts before and after the sum. 𝐇(𝑧) = 𝐇1 (𝑧)𝐇2 (𝑧) = 𝐇2 (𝑧)𝐇1 (𝑧)

An alternative structure (realization) of IIR filters is the transposed direct form; it can be obtained by the following nested programming algorithm.

Let us define the variables 𝝃_k(z) (k = 0, 1, …, N) with 𝝃_0(z) = 0, where

𝝃_{k+1}(z) = z^{−1}(b_{N−k}𝑿(z) − a_{N−k}𝒀(z) + 𝝃_k(z)),   k = 0, 1, …, N − 1

⟹ 𝒀(z) = b_0𝑿(z) + 𝝃_N(z)

⟹ 𝒀(z) = b_0𝑿(z) + ∑_{k=1}^{N} z^{−k}(b_k𝑿(z) − a_k𝒀(z))

⟹ 𝑯(z) = 𝒀(z)/𝑿(z) = (∑_{k=0}^{N} b_k z^{−k}) / (∑_{k=0}^{N} a_k z^{−k}),   with a_0 = 1

This recursion leads to the above schematic diagram.

III.II Basic Structures for FIR Systems: Implementation of a filter (as a hardware or by a
software program) requires an interconnected set, or network, of the above elements, which
according to the filter's description produces the output 𝐲[𝑛] to an incoming 𝐱[𝑛]. Such a
network is called filter's structure. Implementation may be achieved through different
structures. Each structure is associated with the memory requirements, computational
complexity, and accuracy limitations (i.e., the effect of finite-word length). Such
considerations play a central role in choosing a structure that is best suited for a situation.
An analysis of resources or design processes that would determine the best structure for a
filter are not addressed in what follows. The aim is limited to presenting several commonly
used structures for FIR filters, and briefly pointing out some obvious choice criteria.

A causal FIR filter of order N is characterized by 𝐇(z) = ∑_{k=0}^{N} 𝒃_k z^{−k}, which is a polynomial in z^{−1}. In the time domain, the input-output relation of the above FIR filter is given by

𝐲[n] = ∑_{k=0}^{N} 𝒃_k𝐱[n−k]

An FIR filter of order 𝑁 is characterized by 𝑁 + 1 coefficients and, in general, require 𝑁 + 1


multipliers and 𝑁 two-input adders. Structures in which the multiplier coefficients are
precisely the coefficients of the transfer function are called direct form structures.
A direct form realization of an FIR filter can be readily developed from the convolution sum
description as indicated below

The recommended (default) structure within the Filter Designer is the Direct Form
Transposed structure, as this offers superior numerical accuracy when using floating point
arithmetic. This can be readily seen by analyzing the difference equations below (used for
implementation), as the undesirable effects of numerical swamping are minimized, since
floating point addition is performed on numbers of similar magnitude.

Let us consider the following nested recursive program:

𝐲[n] = 𝒃_0𝐱[n] + 𝐰_1[n−1]
𝐰_1[n] = 𝒃_1𝐱[n] + 𝐰_2[n−1]
𝐰_2[n] = 𝒃_2𝐱[n] + 𝐰_3[n−1]
⋮
𝐰_{N−1}[n] = 𝒃_{N−1}𝐱[n] + 𝐰_N[n−1]
𝐰_N[n] = 𝒃_N𝐱[n]

Back-substitution shows that 𝐲[n] = ∑_{k=0}^{N} 𝒃_k𝐱[n−k]; a short MATLAB sketch of this recursion is given below.

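A minimal MATLAB sketch of the recursion above (with illustrative coefficients; the guard state 𝐰_{N+1} is held at zero) can be checked against the built-in filter function:

% Transposed-form FIR recursion, verified against filter().
b = [0.25 0.5 0.25]; N = length(b)-1;
x = randn(1,50); y = zeros(1,50);
w = zeros(1,N+1);                     % w(N+1) stays 0 (guard state)
for n = 1:length(x)
    y(n) = b(1)*x(n) + w(1);          % y[n] = b0*x[n] + w1[n-1]
    for k = 1:N
        w(k) = b(k+1)*x(n) + w(k+1);  % wk[n] = bk*x[n] + w_{k+1}[n-1]
    end
end
max(abs(y - filter(b,1,x)))           % ~ 0: matches the direct convolution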

Note: Under infinite precision arithmetic any given realization of a digital filter behaves
identically to any other equivalent structure. However, in practice, due to the finite word-
length limitations, a specific realization behaves totally differently from its other equivalent
realizations. Hence, it is important to choose a structure that has the least quantization
effects when implemented using finite precision arithmetic.

FIR filters do not use feedback circuitry, while IIR filters make use of a feedback loop in order to combine previous outputs with the current input.

FIR filters are mainly used in bandpass and bandstop filtering applications, while low-pass and anti-aliasing filtering applications often call for IIR digital filters.
III.III Discrete State Space Representation: Discrete time systems are either inherently
discrete (e.g. models of bank accounts, national economy growth models, population
growth models, digital words) or they are obtained as a result of sampling (discretization) of
continuous-time systems. In such kinds of systems, inputs, state space variables, and
outputs have the discrete form and the system models can be represented in the form of
transition tables. The mathematical model of a discrete-time system can be written in
terms of a recursive formula by using linear matrix difference equations as

𝐱[𝑛 + 1] = 𝑨𝐱[𝑛] + 𝑩𝐮[𝑛] and 𝐲[𝑛] = 𝑪𝐱[𝑛] + 𝑫𝐮[𝑛]

Similarly to continuous-time linear systems, discrete state space equations can be derived
from difference equations. Consider 𝑛𝑡ℎ - order difference equation which is defined by

𝐲[𝑘 + 𝑛] + 𝑎𝑛−1 𝐲[𝑘 + 𝑛 − 1] + ⋯ + 𝑎1 𝐲[𝑘 + 1] + 𝑎0 𝐲[𝑘] = 𝑏𝑛 𝐮[𝑘 + 𝑛] + ⋯ + 𝑏1 𝐮[𝑘 + 1] + 𝑏0 𝐮[𝑘]

To derive the state space equation we introduce a new intermediate variable 𝑾(𝑧) as

𝒀(z)/𝑼(z) = 𝑯(z) = 𝑵(z)/𝑫(z) = (𝒀(z)/𝑾(z))(𝑾(z)/𝑼(z))   with   𝑾(z)/𝑼(z) = 1/𝑫(z)   and   𝒀(z)/𝑾(z) = 𝑵(z)

In time domain we have

𝑾(𝑧)𝑫(𝑧) = 𝑼(𝑧) ⟺ 𝐰[𝑘 + 𝑛] + 𝑎𝑛−1 𝐰[𝑘 + 𝑛 − 1] + ⋯ + 𝑎1 𝐰[𝑘 + 1] + 𝑎0 𝐰[𝑘] = 𝐮[𝑘]

Now let us define the state variables as

𝐱_1[k] = 𝐰[k] ⟹ 𝐱_1[k+1] = 𝐱_2[k]
𝐱_2[k] = 𝐰[k+1] ⟹ 𝐱_2[k+1] = 𝐱_3[k]
𝐱_3[k] = 𝐰[k+2] ⟹ 𝐱_3[k+1] = 𝐱_4[k]
⋮
𝐱_n[k] = 𝐰[k+n−1] ⟹ 𝐱_n[k+1] = 𝐰[k+n]

And from the above equation 𝑾(z)𝑫(z) = 𝑼(z) we can write

𝐱 𝑛 [𝑘 + 1] = 𝐰[𝑘 + 𝑛] = 𝐮[𝑘] − (𝑎𝑛−1 𝐱 𝑛 [𝑘] + ⋯ + 𝑎1 𝐱 2 [𝑘] + 𝑎0 𝐱1 [𝑘])

Now in matrix form we obtain

[𝐱_1[k+1]; 𝐱_2[k+1]; ⋮; 𝐱_n[k+1]] = [0 1 0 ⋯ 0; 0 0 1 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ 1; −a_0 −a_1 −a_2 ⋯ −a_{n−1}] [𝐱_1[k]; 𝐱_2[k]; ⋮; 𝐱_n[k]] + [0; 0; ⋮; 0; 1] 𝐮[k]

The output equation will be

𝑾(z)𝑵(z) = 𝒀(z) ⟺ 𝐲[k] = b_n𝐰[k+n] + ⋯ + b_1𝐰[k+1] + b_0𝐰[k]
          ⟺ 𝐲[k] = b_n𝐱_n[k+1] + b_{n−1}𝐱_n[k] + ⋯ + b_1𝐱_2[k] + b_0𝐱_1[k]

Substituting for 𝐱_n[k+1], we obtain

𝐲[k] = ((b_0 − a_0b_n), (b_1 − a_1b_n), …, (b_{n−1} − a_{n−1}b_n)) [𝐱_1[k]; ⋮; 𝐱_n[k]] + b_n𝐮[k]
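As a hedged sketch, the companion-form matrices above can be assembled directly from illustrative coefficients a_k and b_k, and the resulting transfer function verified with ss2tf:

% Companion-form (A,B,C,D) from illustrative difference-equation coefficients.
a = [0.02 0.3 0.8];       % a0, a1, a2 (denominator z^3 + a2 z^2 + a1 z + a0)
b = [0.5 0.1 0.2 1.0];    % b0, b1, b2, b3 (numerator, b3 = b_n)
n = 3;
A = [zeros(n-1,1) eye(n-1); -a];     % companion matrix of the text
B = [zeros(n-1,1); 1];
C = b(1:n) - a.*b(n+1);              % (b_k - a_k*b_n), k = 0..n-1
D = b(n+1);
[num, den] = ss2tf(A, B, C, D)       % den should be [1 0.8 0.3 0.02]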
 Solution of the Discrete-Time State Equation: We find the solution of the difference
state equation for the given initial state 𝐱[0] and the input signal 𝐮[𝑘]. From the state
equation 𝐱[𝑛 + 1] = 𝑨𝐱[𝑛] + 𝑩𝐮[𝑛] and for 𝑛 = 1,2,3, …, it follows

𝐱[1] = 𝑨𝐱[0] + 𝑩𝐮[0]


𝐱[2] = 𝑨𝐱[1] + 𝑩𝐮[1] = 𝑨2 𝐱[0] + 𝑨𝑩𝐮[0] + 𝑩𝐮[1]
𝐱[3] = 𝑨𝐱[2] + 𝑩𝐮[2] = 𝑨3 𝐱[0] + 𝑨2 𝑩𝐮[0] + 𝑨𝑩𝐮[1] + 𝑩𝐮[2]

𝐱[n] = 𝑨𝐱[n−1] + 𝑩𝐮[n−1] = 𝑨ⁿ𝐱[0] + ∑_{k=0}^{n−1} 𝑨^{n−k−1}𝑩𝐮[k]

If we define 𝝓[n] = 𝑨ⁿ then

𝐱[n] = 𝝓[n]𝐱[0] + ∑_{k=0}^{n−1} 𝝓[n−k−1]𝑩𝐮[k]

where the first term is the zero-input component and the sum is the zero-state component.

Note that the discrete-time state transition matrix relates the state of an input-free system
at initial time (𝑛 = 0) to the state of the system at any other time 𝑛 > 0, that is

𝐱[𝑛] = 𝝓[𝑛]𝐱[0] = 𝑨𝑛 𝐱[0] for 𝐮[𝑘] = 0

Remark: If the initial value of the state vector is not 𝐱[0] but 𝐱[𝑛0 ], then the solution has to
be modified into
𝐱[n + n_0] = 𝝓[n]𝐱[n_0] + ∑_{k=0}^{n−1} 𝝓[n−k−1]𝑩𝐮[n_0 + k]

It is easy to verify that the discrete-time state transition matrix has the following properties

▪ 𝝓[0] = 𝑨⁰ = 𝑰
▪ 𝝓[n_2 − n_0] = 𝝓[n_2 − n_1]𝝓[n_1 − n_0] = 𝑨^{n_2−n_1}𝑨^{n_1−n_0} = 𝑨^{n_2−n_0}
▪ 𝝓^k[n] = 𝝓[nk] = (𝑨ⁿ)^k = 𝑨^{nk}
▪ 𝝓[n + 1] = 𝑨𝝓[n]

The output of the system at an instant 𝑛 is obtained by substituting 𝐱[𝑛] into the output
equation 𝐲[𝑛] = 𝑪𝐱[𝑛] + 𝑫𝐮[𝑛], producing
𝐲[n] = 𝑪𝝓[n]𝐱[0] + ∑_{k=0}^{n−1} 𝑪𝝓[n−k−1]𝑩𝐮[k] + 𝑫𝐮[n] = 𝑪𝑨ⁿ𝐱[0] + 𝐡[n] ⋆ 𝐮[n] = 𝐲_h[n] + 𝐲_p[n]

Where 𝐲𝒉 [𝑛] = 𝑪𝑨𝑛 𝐱[0] is the homogenous solution (the input free response) and 𝐲𝒑 [𝑛] is
the particular solution (the forced response).

The discrete impulse response is given by (Discrete-time Markov parameters)

𝐡[n] = 𝑪𝑨^{n−1}𝑩 + 𝑫δ[n] = { 𝑫 for n = 0 ;  𝑪𝑨^{n−1}𝑩 for n > 0 }
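A short MATLAB sketch (with an illustrative, stable 𝑨) verifies that these Markov parameters coincide with the impulse response produced by running the state recursion:

% Discrete Markov parameters h[n] = C*A^(n-1)*B (n > 0), h[0] = D,
% checked against the state recursion driven by a unit impulse.
A = [0 1; -0.08 0.6]; B = [0; 1]; C = [1 0]; D = 0;   % illustrative, stable
Nn = 10; h = zeros(1, Nn+1); h(1) = D;
for n = 1:Nn, h(n+1) = C*A^(n-1)*B; end
x = [0; 0]; y = zeros(1, Nn+1); u = [1 zeros(1, Nn)]; % impulse input
for n = 1:Nn+1
    y(n) = C*x + D*u(n);       % output equation
    x = A*x + B*u(n);          % state update
end
max(abs(h - y))                % ~ 0: the two sequences agree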
The discrete-time state transition matrix 𝝓[n] = 𝑨ⁿ can be evaluated efficiently for large values of n by a method based on the Cayley–Hamilton theorem, described earlier. It can also be evaluated by using the Z-transform method, to be derived in the next subsection.
 Solution Using the Z-transform: Applying the Z–transform to the state space equation
of a discrete-time linear system

𝑧𝐗(𝑧) − 𝑧𝐱[0] = 𝑨𝐗(𝑧) + 𝑩𝐔(𝑧) ⟺ (𝑧𝑰 − 𝑨)𝐗(𝑧) = 𝑧𝐱[0] + 𝑩𝐔(𝑧)


⟺ 𝐗(𝑧) = (𝑧𝑰 − 𝑨)−1 𝑧𝐱[0] + (𝑧𝑰 − 𝑨)−1 𝑩𝐔(𝑧)

The inverse Z-transform of the last equation gives 𝐱[𝑛], that

𝐱[𝑛] = ℤ−1 [𝑧(𝑧𝑰 − 𝑨)−1 ]𝐱[0] + ℤ−1 [(𝑧𝑰 − 𝑨)−1 𝑩𝐔(𝑧)]

We conclude that 𝝓[𝑛] = ℤ−1 [𝑧(𝑧𝑰 − 𝑨)−1 ] = 𝑨𝑛 , 𝑛 = 1,2,3, … and 𝝓(𝑧) = 𝑧(𝑧𝑰 − 𝑨)−1

The inverse transform of the second term on the right-hand side is obtained directly by the
application of the discrete-time convolution, which produces
ℤ^{−1}[(z𝑰 − 𝑨)^{−1}𝑩𝐔(z)] = ∑_{k=0}^{n−1} 𝝓[n−k−1]𝑩𝐮[k]

We have the required solution of the discrete-time state space equation as


𝐱[n] = 𝝓[n]𝐱[0] + ∑_{k=0}^{n−1} 𝝓[n−k−1]𝑩𝐮[k]

From 𝐲[n] = 𝑪𝐱[n] + 𝑫𝐮[n] we have

𝐲[n] = 𝑪𝝓[n]𝐱[0] + ∑_{k=0}^{n−1} 𝑪𝝓[n−k−1]𝑩𝐮[k] + 𝑫𝐮[n]

The frequency domain form of the output vector 𝐘(𝑧) is obtained if the Z-transform is
applied to the output equation, and 𝐗(𝑧) is eliminated, leading to

𝐘(𝑧) = 𝑪𝐗(𝑧) + 𝑫𝐔(𝑧) = 𝑪(𝑧𝑰 − 𝑨)−1 𝑧𝐱[0] + {𝑪(𝑧𝑰 − 𝑨)−1 𝑩 + 𝑫}𝐔(𝑧)

From the above expression, with zero initial conditions, the discrete matrix transfer function is 𝐇(z) = 𝑪(z𝑰 − 𝑨)^{−1}𝑩 + 𝑫.

Let us now examine how the poles shape the response and determine the stability of a discrete-time system.
𝐇_prop(z) = 𝑪(z𝑰 − 𝑨)^{−1}𝑩 = 𝑪 (Adj(z𝑰 − 𝑨)/|z𝑰 − 𝑨|) 𝑩 = ∑_{k=1}^{n} 𝑬_k z^{−1}/(1 − λ_k z^{−1}) ⟺ 𝐡_prop[n] = 𝑪𝑨^{n−1}𝑩 = ∑_{k=1}^{n} 𝑬_k(λ_k)^{n−1}

Hence lim_{n→∞} 𝐡_prop[n] = 0 only if |λ_k| < 1 for all k, with λ_k = σ_k + jω_k = r_k e^{jθ_k} ∈ ℂ.

From the last equation, we can see that all poles of a discrete-time system must lie inside the unit disk for the system to be stable.

In case of repeated poles we obtain the term 𝑛(𝜆𝑘 )𝑛−1 in the time domain. Hence, we get

lim_{n→∞} n(λ_k)^{n−1} = lim_{n→∞} n(r_k e^{jθ_k})^{n−1} = lim_{n→∞} n(r_k)^{n−1}e^{jθ_k(n−1)} = { ∞ if r_k = 1 ;  0 if r_k < 1 }
As a result: a discrete-time system is asymptotically stable if all its poles are inside the unit disk. A system is unstable if at least one pole is outside the unit disk, or if there are any repeated poles on the unit circle. A system is marginally stable if it has one or more distinct poles on the unit circle, and any remaining poles are inside the unit disk.
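In MATLAB this test reduces to inspecting the eigenvalue magnitudes of the state matrix (illustrative values below):

% Discrete-time stability check: all |eig(A)| < 1 for asymptotic stability.
A = [0.5 1; 0 -0.8];               % illustrative state matrix
r = abs(eig(A));
if all(r < 1), disp('asymptotically stable'), else, disp('not asymptotically stable'), end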

Solved Problems:

Recall that the Cayley–Hamilton theorem says that any square matrix satisfies its own characteristic equation: Δ(𝑨) = 𝟎, where Δ(λ) = det(λ𝑰 − 𝑨) is the characteristic polynomial. This theorem follows immediately from the fact that the minimal polynomial divides Δ(λ). Hence the nth power of 𝑨, and inductively all higher powers, are expressible as a linear combination of 𝑰, 𝑨, …, 𝑨^{n−1}. Thus any power series in 𝑨 can be reduced to a polynomial in 𝑨 of degree at most n − 1:

e^{𝑨t} = ∑_{k=0}^{∞} t^k𝑨^k/k! = ∑_{k=0}^{n−1} α_k t^k𝑨^k   with α_0 = α_1 = 1

In general the coefficients 𝛼𝑘 are determined by interpolation (Algebra Book by BEKHITI). It


is known that every matrix has what is called a Jordan canonical form, that is, there exists
an invertible 𝑷 such that 𝑷−1 𝑨𝑷 = 𝑫 + 𝑵, where 𝑫 is diagonal, 𝑵 is nilpotent (that is, there
exists a 𝑚 ≥ 0 such that 𝑵𝑚 = 𝟎), and 𝑫 and 𝑵 commute.
e^{𝑨t} = 𝑷e^{𝑫t}(∑_{k=0}^{m−1} 𝑵^k t^k/k!)𝑷^{−1}

Exercise: 01 Prove that the matrix-valued function e^{𝑨t} is a differentiable matrix-valued function of t, and that its derivative is 𝑨e^{𝑨t}.

Solution:

d e^{𝑨t}/dt = lim_{h→0} (1/h)(e^{𝑨(t+h)} − e^{𝑨t}) = lim_{h→0} (1/h)(e^{𝑨h} − 𝑰)e^{𝑨t} = lim_{h→0} (∑_{k=1}^{n−1} α_k h^{k−1}𝑨^k)e^{𝑨t} = 𝑨e^{𝑨t} = e^{𝑨t}𝑨

Exercise: 02 Prove that, if 𝑨(𝑡) ∈ ℝ𝑛×𝑠 and 𝑩(𝑡) ∈ ℝ𝑠×𝑚 are differentiable matrix-valued
functions of 𝑡, then the matrix product 𝑨(𝑡)𝑩(𝑡) is differentiable, and its derivative is

𝑑 𝑑𝑨(𝑡) 𝑑𝑩(𝑡)
(𝑨(𝑡)𝑩(𝑡)) = 𝑩(𝑡) + 𝑨(𝑡)
𝑑𝑡 𝑑𝑡 𝑑𝑡
Solution:

(d/dt)(𝑨(t)𝑩(t))_{ij} = (d/dt)∑_{k=1}^{s} a_{ik}b_{kj} = ∑_{k=1}^{s} (d/dt)(a_{ik}b_{kj}) = ∑_{k=1}^{s} ((da_{ik}/dt)b_{kj} + a_{ik}(db_{kj}/dt))
= ∑_{k=1}^{s} (da_{ik}/dt)b_{kj} + ∑_{k=1}^{s} a_{ik}(db_{kj}/dt) ⟺ (d/dt)(𝑨(t)𝑩(t)) = (d𝑨(t)/dt)𝑩(t) + 𝑨(t)(d𝑩(t)/dt)
Exercise: 03 Prove that, first-order linear differential equation 𝐱̇ (𝑡) = 𝑨𝐱(𝑡), 𝐱 0 = 𝐱(0) has
the unique solution 𝐱(𝑡) = 𝑒 𝑨𝑡 𝐱 0 .

Solution: Let us put 𝐱(t) = e^{𝑨t}𝐱_0, so

𝑑 𝑑
(𝐱(𝑡)) = (𝑒 𝑨𝑡 𝐱 0 ) = 𝑨𝑒 𝑨𝑡 𝐱 0 = 𝑨𝐱(𝑡) with 𝐱(0) = 𝑒 𝑨0 𝐱 0 = 𝐱 0
𝑑𝑡 𝑑𝑡

Therefore, 𝐱(𝑡) = 𝑒 𝑨𝑡 𝐱 0 is a solution of 𝐱̇ (𝑡) = 𝑨𝐱(𝑡), 𝐱 0 = 𝐱(0). Let we prove the uniqueness
of this solution by multiplying 𝐱(𝑡) by 𝑒 −𝑨𝑡 so

𝑑 −𝑨𝑡
(𝑒 𝐱(𝑡)) = −𝑨𝑒 −𝑨𝑡 𝐱(𝑡) + 𝑒 −𝑨𝑡 𝑨𝐱(𝑡) = −𝑨𝑒 −𝑨𝑡 𝐱(𝑡) + 𝑨𝑒 −𝑨𝑡 𝐱(𝑡) = 𝟎 ⟹ 𝑒 −𝑨𝑡 𝐱(𝑡) = 𝑪
𝑑𝑡

Therefore, 𝑒 −𝑨𝑡 𝐱(𝑡) is a constant column vector, say 𝑪, and 𝐱(𝑡) = 𝑒 𝑨𝑡 𝑪. As 𝐱(0) = 𝐱 0 , we
obtain that 𝐱 0 = 𝑒 −𝑨0 𝑪, that is, 𝐱 0 = 𝑪. Consequently, 𝐱(𝑡) = 𝑒 𝑨𝑡 𝐱 0 is the only one solution ■

Exercise: 04 Prove that, if 𝑨, 𝑩 ∈ ℝ𝑛×𝑛 commute (that is 𝑨𝑩 = 𝑩𝑨), then 𝑒 𝑨+𝑩 = 𝑒 𝑨 𝑒 𝑩 .


Solution: From the definition of the matrix exponential we have e^{𝑨+𝑩} = ∑_{k=0}^{∞} (1/k!)(𝑨 + 𝑩)^k, and if the matrices 𝑨, 𝑩 commute then (𝑨 + 𝑩)^k = ∑_{s=0}^{k} (k!/(s!(k−s)!))𝑨^s𝑩^{k−s}, which leads to

e^{𝑨+𝑩} = ∑_{k=0}^{∞} (1/k!)(𝑨 + 𝑩)^k = ∑_{k=0}^{∞} ∑_{s=0}^{k} (1/(s!(k−s)!))𝑨^s𝑩^{k−s} = ∑_{k=0}^{∞} ∑_{s+r=k} (1/s!)(1/r!)𝑨^s𝑩^r
= (∑_{s=0}^{∞} (1/s!)𝑨^s)(∑_{r=0}^{∞} (1/r!)𝑩^r) = (∑_{r=0}^{∞} (1/r!)𝑩^r)(∑_{s=0}^{∞} (1/s!)𝑨^s)

Exercise: 05 Let 𝑎 ∈ ℂ and 𝑛 ∈ ℕ. Prove that if 𝐟(𝑡) is a continuous function defined on


[0, ∞), and if there exists a 𝑠0 such that for all 𝑠 > 𝑠0 ,

𝐅(s) = ∫_0^∞ e^{−st}𝐟(t)dt = 1/(s − a)^n,   then   𝐟(t) = t^{n−1}e^{at}/(n − 1)!   for all t ≥ 0

Solution: The proof is left as an exercise.

Exercise: 06 Prove that, if 𝐟(𝑡) = 𝑒 𝑨𝑡 ∈ ℝ𝑛×𝑛 is matrix-valued function of 𝑡 then its Laplace
transform is given by

𝐅(𝑠) = ∫ 𝑒 −𝑠𝑡 𝐟(𝑡)𝑑𝑡 = (𝑠𝑰 − 𝑨)−1
0
Solution: Notice that
∞ ∞ ∞
−𝑠𝑡 𝑨𝑡 −(𝑠𝑰−𝑨)𝑡
∫ 𝑒 𝑒 𝑑𝑡 = ∫ 𝑒 𝑑𝑡 = ∫ (𝑠𝑰 − 𝑨)−1 (𝑠𝑰 − 𝑨)𝑒 −𝑡(𝑠𝑰−𝑨) 𝑑𝑡
0 0 0
∞ ∞
𝑑 −(𝑠𝑰−𝑨)𝑡
= (𝑠𝑰 − 𝑨)−1 ∫ (𝑠𝑰 − 𝑨)𝑒 −(𝑠𝑰−𝑨)𝑡 𝑑𝑡 = (𝑠𝑰 − 𝑨)−1 (− ∫ 𝑒 𝑑𝑡)
0 0 𝑑𝑡
−1
= (𝑠𝑰 − 𝑨)
Exercise: 07 First we define ‖𝑨‖ = max{|𝑎𝑖𝑗 | , 1 ≤ 𝑖, 𝑗 ≤ 𝑛} thus |𝑎𝑖𝑗 | ≤ ‖𝑨‖ for all 𝑖, 𝑗. This is
one of several possible “norms” on ℝ𝑛×𝑛 . Prove that, if 𝑨, 𝑩 ∈ ℝ𝑛×𝑛 then ‖𝑨𝑩‖ ≤ 𝑛‖𝑨‖‖𝑩‖
and ‖𝑨𝑘 ‖ ≤ 𝑛𝑘−1 ‖𝑨‖𝑘

Solution: We estimate the size of the i, j-entry of 𝑨𝑩:

|(𝑨𝑩)_{ij}| = |∑_{k=1}^{n} a_{ik}b_{kj}| ≤ ∑_{k=1}^{n} |a_{ik}||b_{kj}| ≤ n‖𝑨‖‖𝑩‖

Thus ‖𝑨𝑩‖ ≤ 𝑛‖𝑨‖‖𝑩‖. The second inequality follows from the first by induction. ■

Exercise: 08 Prove that, if 𝑨 ∈ ℝ^{n×n}, then (e^{𝑨t})^⟙ = e^{𝑨^⟙t} and ‖e^{𝑨t}‖ ≤ e^{tn‖𝑨‖} (using the norm defined before).

Solution: ❶ By using the properties of transposition we get

(e^{𝑨t})^⟙ = (∑_{k=0}^{∞} (t^k/k!)𝑨^k)^⟙ = ∑_{k=0}^{∞} (t^k/k!)(𝑨^⟙)^k = e^{𝑨^⟙t}

❷ Using the previous exercise, ‖𝑨^k‖ ≤ n^{k−1}‖𝑨‖^k, hence

‖e^{𝑨t}‖ = ‖∑_{k=0}^{∞} (t^k/k!)𝑨^k‖ ≤ ∑_{k=0}^{∞} (t^k/k!)‖𝑨^k‖ ≤ ∑_{k=0}^{∞} (t^k/k!)n^{k−1}‖𝑨‖^k ≤ ∑_{k=0}^{∞} (t^k/k!)n^k‖𝑨‖^k = e^{tn‖𝑨‖}

Exercise: 09 Prove that, if 𝑨 ∈ ℝ𝑛×𝑛 and 𝐱 ∈ ℝ𝑛 then there exists a constant 𝐶 such that
‖𝑨𝐱‖ ≤ 𝐶‖𝑨‖‖𝐱‖ and show that if 𝜆 is an eigenvalue of 𝑨, then |𝜆| ≤ 𝑛‖𝑨‖

Solution: ❶ First we have

|(𝑨𝐱)_i| = |∑_{j=1}^{n} a_{ij}x_j| ≤ ∑_{j=1}^{n} |a_{ij}||x_j| ≤ ‖𝑨‖∑_{j=1}^{n} |x_j| ≤ n‖𝑨‖‖𝐱‖

and so ‖𝑨𝐱‖² ≤ n(n‖𝑨‖‖𝐱‖)². Thus for all 𝐱 ∈ ℝⁿ, ‖𝑨𝐱‖ ≤ n^{3/2}‖𝑨‖‖𝐱‖.

❷ If 𝜆 is an eigenvalue and 𝐱 is a corresponding eigenvector, then we have 𝑨𝐱 = 𝜆𝐱, and so

‖𝜆𝐱‖ ≤ 𝐶‖𝑨‖‖𝐱‖ ⟺ |𝜆| ≤ 𝐶‖𝑨‖

But, how can we estimate the smallest constant 𝐶?


|λ||x_i| = |(λ𝐱)_i| = |(𝑨𝐱)_i| = |∑_{j=1}^{n} a_{ij}x_j| ≤ ∑_{j=1}^{n} |a_{ij}||x_j| ≤ ‖𝑨‖∑_{j=1}^{n} |x_j|

Adding the n inequalities (for i = 1, …, n), we obtain

|λ|∑_{i=1}^{n} |x_i| ≤ n‖𝑨‖∑_{j=1}^{n} |x_j|

As 𝐱 ≠ 𝟎, we can divide by ∑𝑛𝑖=1|𝑥𝑖 |, and this yields that |𝜆| ≤ 𝑛‖𝑨‖ ■

Exercise: 10 Let 𝑨, 𝑩 ∈ ℝ^{n×n} and 𝐱 ∈ ℝⁿ, and define the vector norm ‖𝐱‖ = √(∑_{i=1}^{n} |x_i|²) and the matrix Frobenius (or Schur) norm ‖𝑨‖ = √(∑_{i,j} |a_{ij}|²). Prove that

❶ ‖𝑨^{−1}‖ ≥ 1/‖𝑨‖   ❷ ‖𝑨^{−1} − 𝑩^{−1}‖ ≤ ‖𝑨^{−1}‖‖𝑩^{−1}‖‖𝑨 − 𝑩‖

❸ ‖𝑨^{−1}‖‖𝑩‖ < 1 ⟹ ‖(𝑨 − 𝑩)^{−1}‖ ≤ 1/(‖𝑨^{−1}‖^{−1} − ‖𝑩‖)

❹ If we define a polynomial 𝒑_k(x) = ∑_{i=0}^{k} a_i x^i (with non-negative coefficients a_i) and the matrix evaluation 𝒑_k(𝑨) = ∑_{i=0}^{k} a_i𝑨^i, then prove that ‖𝒑_k(𝑨)‖ ≤ 𝒑_k(‖𝑨‖).

Solution: ❶ First we have 1 = ‖𝑰‖ = ‖𝑨^{−1}𝑨‖ ≤ ‖𝑨^{−1}‖‖𝑨‖ ⟹ ‖𝑨^{−1}‖ ≥ 1/‖𝑨‖

❷ 𝑨^{−1} − 𝑩^{−1} = 𝑨^{−1}(𝑩 − 𝑨)𝑩^{−1} ⟹ ‖𝑨^{−1} − 𝑩^{−1}‖ = ‖𝑨^{−1}(𝑩 − 𝑨)𝑩^{−1}‖ ≤ ‖𝑨^{−1}‖‖𝑩 − 𝑨‖‖𝑩^{−1}‖

❸ ‖𝑨^{−1}‖‖𝑩‖ < 1 ⟹ 1 − ‖𝑨^{−1}‖‖𝑩‖ > 0 and ‖𝑩‖ < ‖𝑨^{−1}‖^{−1}, so ‖𝑨^{−1}‖^{−1} − ‖𝑩‖ > 0. On the other hand we have

𝑰 = (𝑨 − 𝑩)^{−1}(𝑨 − 𝑩) = (𝑨 − 𝑩)^{−1}𝑨 − (𝑨 − 𝑩)^{−1}𝑩 ⟹ (𝑨 − 𝑩)^{−1}𝑨 = 𝑰 + (𝑨 − 𝑩)^{−1}𝑩

Therefore (𝑨 − 𝑩)^{−1} = 𝑨^{−1} + (𝑨 − 𝑩)^{−1}𝑩𝑨^{−1}, which implies

‖(𝑨 − 𝑩)^{−1}‖ = ‖𝑨^{−1} + (𝑨 − 𝑩)^{−1}𝑩𝑨^{−1}‖ ≤ ‖𝑨^{−1}‖ + ‖(𝑨 − 𝑩)^{−1}‖‖𝑩‖‖𝑨^{−1}‖

Hence ‖(𝑨 − 𝑩)^{−1}‖(1 − ‖𝑩‖‖𝑨^{−1}‖) ≤ ‖𝑨^{−1}‖ ⟹ ‖(𝑨 − 𝑩)^{−1}‖ ≤ 1/(‖𝑨^{−1}‖^{−1} − ‖𝑩‖)

❹ ‖𝒑_k(𝑨)‖ = ‖∑_{i=0}^{k} a_i𝑨^i‖ ≤ ∑_{i=0}^{k} ‖a_i𝑨^i‖ ≤ ∑_{i=0}^{k} a_i‖𝑨‖^i = 𝒑_k(‖𝑨‖) ■


Exercise: 11 ❶ Let 𝑨 ∈ ℝ^{n×n} with ‖𝑨‖ < 1. Prove that if (𝑰 ± 𝑨) is invertible then

1/(1 + ‖𝑨‖) ≤ ‖(𝑰 ± 𝑨)^{−1}‖ ≤ 1/(1 − ‖𝑨‖)

❷ Now if we define the series 𝑺𝑛 = ∑𝑛𝑖=0 𝑨𝑖 prove that lim𝑛→∞ 𝑺𝑛 = (𝑰 − 𝑨)−1

Solution:

❶ If the matrix (𝑰 + 𝑨) is non-singular then 𝑰 = (𝑰 + 𝑨)−1 (𝑰 + 𝑨) and

1 = ‖𝑰‖ ≤ ‖(𝑰 + 𝑨)−1 ‖‖𝑰 + 𝑨‖ ≤ ‖(𝑰 + 𝑨)−1 ‖(‖𝑰‖ + ‖𝑨‖) ≤ ‖(𝑰 + 𝑨)−1 ‖(1 + ‖𝑨‖)

1
⟹ ≤ ‖(𝑰 + 𝑨)−1 ‖ (𝐼)
1 + ‖𝑨‖

Also we have (𝑰 + 𝑨)−1 = 𝑰 − (𝑰 + 𝑨)−1 𝑨 so

‖(𝑰 + 𝑨)−1 ‖ = ‖𝑰 − (𝑰 + 𝑨)−1 𝑨‖ ≤ ‖𝑰‖ + ‖(𝑰 + 𝑨)−1 𝑨‖ ≤ ‖𝑰‖ + ‖(𝑰 + 𝑨)−1 ‖‖𝑨‖

1
⟹ (1 − ‖𝑨‖)‖(𝑰 + 𝑨)−1 ‖ ≤ 1 ⟹ ‖(𝑰 + 𝑨)−1 ‖ ≤ (𝐼𝐼)
1 − ‖𝑨‖

From (𝐼) and (𝐼𝐼) we get


1 1
≤ ‖(𝑰 + 𝑨)−1 ‖ ≤
1 + ‖𝑨‖ 1 − ‖𝑨‖

If we replace 𝑨 by −𝑨 we obtain
1 1
≤ ‖(𝑰 − 𝑨)−1 ‖ ≤
1 + ‖𝑨‖ 1 − ‖𝑨‖

❷ Let us consider (𝑰 − 𝑨)𝑺_n = (𝑰 − 𝑨)(𝑰 + 𝑨 + 𝑨² + 𝑨³ + ⋯ + 𝑨ⁿ) = 𝑰 − 𝑨^{n+1}

𝑺𝑛 = (𝑰 − 𝑨)−1 (𝑰 − 𝑨𝑛+1 ) = (𝑰 − 𝑨)−1 − (𝑰 − 𝑨)−1 𝑨𝑛+1

‖𝑨‖ < 1 ⟹ lim 𝑺𝑛 = lim (𝑰 − 𝑨)−1 − (𝑰 − 𝑨)−1 𝑨𝑛+1 = (𝑰 − 𝑨)−1


𝑛→∞ 𝑛→∞

Exercise: 12 ❶ Given a linear system of equations 𝑨𝐱 = 𝒃, if a small perturbation δ𝒃 is given to 𝒃, it will introduce a corresponding perturbation δ𝐱 in the variable 𝐱. Prove that

‖δ𝐱‖/‖𝐱‖ ≤ cond(𝑨)(‖δ𝒃‖/‖𝒃‖)   with   cond(𝑨) = ‖𝑨‖‖𝑨^{−1}‖

❷ If we keep 𝒃 unchanged and a small increment δ𝑨 is given to 𝑨, then prove that

‖δ𝐱‖/‖𝐱 + δ𝐱‖ ≤ cond(𝑨)(‖δ𝑨‖/‖𝑨‖)   and   ‖δ𝐱‖/‖𝐱‖ ≤ (cond(𝑨)/(1 − cond(𝑨)(‖δ𝑨‖/‖𝑨‖)))(‖δ𝑨‖/‖𝑨‖)

❸ If simultaneous perturbations δ𝒃 and δ𝑨 are given to 𝒃 and 𝑨, then prove that

‖δ𝐱‖/‖𝐱‖ ≤ (cond(𝑨)/(1 − cond(𝑨)(‖δ𝑨‖/‖𝑨‖)))(‖δ𝑨‖/‖𝑨‖ + ‖δ𝒃‖/‖𝒃‖)
Solution:

❶ 𝑨δ𝐱 = δ𝒃 ⟹ δ𝐱 = 𝑨^{−1}δ𝒃 ⟹ ‖δ𝐱‖ ≤ ‖𝑨^{−1}‖‖δ𝒃‖; we also have ‖𝒃‖ ≤ ‖𝑨‖‖𝐱‖, so we get

‖δ𝐱‖/‖𝐱‖ ≤ ‖𝑨^{−1}‖‖𝑨‖(‖δ𝒃‖/‖𝒃‖) ⟺ ‖δ𝐱‖/‖𝐱‖ ≤ cond(𝑨)(‖δ𝒃‖/‖𝒃‖)

❷ (𝑨 + δ𝑨)(𝐱 + δ𝐱) = 𝒃 ⟹ 𝑨δ𝐱 = −δ𝑨(𝐱 + δ𝐱) ⟹ δ𝐱 = −𝑨^{−1}δ𝑨(𝐱 + δ𝐱); taking the norm of both sides gives ‖δ𝐱‖ ≤ ‖𝑨^{−1}‖‖δ𝑨‖‖𝐱 + δ𝐱‖, hence

‖δ𝐱‖/‖𝐱 + δ𝐱‖ ≤ ‖𝑨^{−1}‖‖δ𝑨‖ = ‖𝑨‖‖𝑨^{−1}‖(‖δ𝑨‖/‖𝑨‖) = cond(𝑨)(‖δ𝑨‖/‖𝑨‖)

On the other hand, δ𝐱 = −𝑨^{−1}δ𝑨(𝐱 + δ𝐱) ⟹ ‖δ𝐱‖ ≤ ‖𝑨^{−1}‖‖δ𝑨‖(‖𝐱‖ + ‖δ𝐱‖), so

‖δ𝐱‖(1 − ‖𝑨^{−1}‖‖δ𝑨‖) ≤ ‖𝑨^{−1}‖‖δ𝑨‖‖𝐱‖ ⟹ ‖δ𝐱‖/‖𝐱‖ ≤ ‖𝑨^{−1}‖‖δ𝑨‖/(1 − ‖𝑨^{−1}‖‖δ𝑨‖) = (cond(𝑨)/(1 − cond(𝑨)(‖δ𝑨‖/‖𝑨‖)))(‖δ𝑨‖/‖𝑨‖)

❸ (𝑨 + δ𝑨)(𝐱 + δ𝐱) = 𝒃 + δ𝒃 ⟹ δ𝐱 = 𝑨^{−1}δ𝒃 − 𝑨^{−1}δ𝑨(𝐱 + δ𝐱), so we have

‖δ𝐱‖ ≤ ‖𝑨^{−1}‖‖δ𝒃‖ + ‖𝑨^{−1}‖‖δ𝑨‖(‖𝐱‖ + ‖δ𝐱‖) ⟺ (‖δ𝐱‖/‖𝐱‖)(1 − ‖𝑨^{−1}‖‖δ𝑨‖) ≤ ‖𝑨^{−1}‖(‖δ𝒃‖/‖𝐱‖) + ‖𝑨^{−1}‖‖δ𝑨‖

We know that 1/‖𝐱‖ ≤ ‖𝑨‖/‖𝒃‖, therefore

(‖δ𝐱‖/‖𝐱‖)(1 − ‖𝑨^{−1}‖‖δ𝑨‖) ≤ ‖𝑨^{−1}‖‖𝑨‖(‖δ𝒃‖/‖𝒃‖) + ‖𝑨^{−1}‖‖δ𝑨‖

⟺ ‖δ𝐱‖/‖𝐱‖ ≤ (cond(𝑨)/(1 − cond(𝑨)(‖δ𝑨‖/‖𝑨‖)))(‖δ𝑨‖/‖𝑨‖ + ‖δ𝒃‖/‖𝒃‖) ■

Remark: Some exercises here are included just for completeness of state space-matrix
analysis (in fact they are matrix algebra subject)
CHAPTER IX:
Sampling and
Reconstruction

I. Introduction
II. Sampling of Continuous-Time Signal
    II.I. DTFT of sampled signal
    II.II. Practical issues in sampling
III. Construction of Continuous-Time Signal
    III.I. Zero-Order Hold Interpolation
    III.II. Higher-Order Interpolation Filter
IV. Linear Systems and Sampling
    IV.I. Numerical Integration
    IV.II. Step invariant method
    IV.III. First Order Hold Equivalence
    IV.IV. Zero-pole mapping
    IV.V. Impulse Invariant Method
    IV.VI. Digitalization in State Space
V. Discretization of Closed Loop System
VI. Solved Problems

Digital hardware, including computers, takes actions in discrete steps. So it can deal with discrete-time signals, but it cannot
directly handle the continuous-time signals
that are prevalent in the physical world. This
chapter is about the interface between these
two worlds, one continuous, the other
discrete. A discrete-time signal is constructed
by sampling a continuous-time signal, and a
continuous-time signal is reconstructed by
interpolating a discrete-time signal.
I. Introduction: The term sampling refers to the act of periodically measuring the
amplitude of a continuous time signal and constructing a discrete-time signal with the
measurements. If certain conditions are satisfied, a continuous-time signal can be
completely represented by measurements (samples) taken from it at uniform intervals. This
allows us to store and manipulate continuous-time signals on a digital computer.

Consider, for example, the problem of keeping track of temperature variations in a


classroom. The temperature of the room can be measured at any time instant, and can
therefore be modeled as a continuous-time signal 𝐱(𝑡). Alternatively, we may choose to
check the room temperature once every 10 minutes and construct a table.

If we choose to index the temperature values with integers as shown in the third row of
Table then we could view the result as a discrete time signal 𝐱[𝑛] in the form

𝐱[n] = {22.4, 22.5, 22.8, 21.6, 21.7, 21.7, 21.9, 22.2, …},   starting at n = 0
A discrete-time signal is represented mathematically by an indexed sequence of numbers.
When stored digitally, the signal values are held in memory locations, so they would be
indexed by memory addresses. We denote the values of the discrete-time signal as 𝐱[𝑛],
where 𝑛 is the integer index indicating the order of the values in the sequence. The square
brackets [ ] enclosing the argument 𝑛 provide a notation that distinguishes between the
continuous-time signal 𝐱(𝑡) and a corresponding discrete-time signal 𝐱[𝑛].

Generalizing the temperature example used above, the relationship between the continuous
time signal 𝐱(𝑡) and its discrete-time counterpart 𝐱[𝑛] is 𝐱[𝑛] = 𝐱(𝑡)|𝑡=𝑛𝑇𝑠 = 𝐱(𝑛𝑇𝑠 ) where 𝑇𝑠 is
the sampling interval, that is, the time interval between consecutive samples. It is also
referred to as the sampling period. The reciprocal of the sampling interval is called the
sampling rate or the sampling frequency: 𝑓𝑠 = 1/𝑇𝑠 Hz.
Thus the act of sampling allows us to obtain a discrete-time signal 𝐱[𝑛] from the
continuous time signal 𝐱(𝑡) . While any signal can be sampled with any time interval
between consecutive samples, there are certain questions that need to be addressed before
we can be confident that 𝐱[𝑛] provides an accurate representation of 𝐱(𝑡) .

Can we always reconstruct the original signal from the discrete one? If not, at what sampling rate will the reconstruction be good and accurate? For what sampling period do the discrete-time samples provide enough information about the original signal?

The process of converting from digital back to analog is called reconstruction. If we plot
some function in computer we actually did not plot the true continuous-time waveform.
Instead, we actually plotted values of
the waveform only at isolated (discrete)
points in time and then connected
those points with straight lines.
Mathematicians call this reconstruction
process by interpolation because it may
be represented as time-domain
interpolation formula.

For sampling a continuous-time signal


and its reconstruction, two questions
need to be answered. The first question
is, what minimum sampling rate
guarantees the error free reconstruction
of the signal? As one may suspect, the
minimum rate depends on how fast the
signal may change with time. Fast-
changing signals need faster sampling
rates. This question is answered by the
sampling theorem. The second question
is, how does the reconstruction error
depend on the number of samples
used, and the weight given to each sample? This can be answered by the reconstruction
method. In this chapter we will examine the sampling process and reconstruction method.

II. Sampling of Continuous-Time Signal: Sampling can be viewed as a transformation or


operation that acts on a continuous time signal 𝐱(𝑡) to produce an output, which is a
corresponding discrete-time signal 𝐱[𝑛]. In engineering, it is common to call such a
transformation a system, and to represent it graphically with a block diagram that shows
the input and output signals along with a name that describes the system operation. The
sampling operation is an example of a system whose input is a continuous-time signal and
whose output is a discrete-time signal. The system block diagram of 𝐱(𝑡) → 𝐴/𝐷 → 𝐱[𝑛]
represents the mathematical operation in 𝐱[𝑛] = 𝐱(𝑡)|𝑡=𝑛𝑇𝑠 = 𝐱(𝑛𝑇𝑠 ) and is called an ideal
continuous-to-discrete (C-to-D) converter. Its idealized mathematical form is useful for
analysis, but an actual hardware system for doing sampling, called an analog-to-digital (A-
to-D) converter.
Mathematically, uniform sampling may be modeled as multiplication of 𝐱(t) by a sampling function 𝐬(t), a train of very narrow pulses, ideally unit impulses, spaced every T_s seconds:

𝐬(t) = ∑_{k=−∞}^{∞} δ(t − kT_s)

The sampled function is

𝐲(t) = 𝐱(t)𝐬(t) = 𝐱(t) ∑_{k=−∞}^{∞} δ(t − kT_s) = ∑_{k=−∞}^{∞} 𝐱(kT_s)δ(t − kT_s)

𝐲(t) is a train of impulses spaced every T_s seconds (an impulse-sampled version of 𝐱(t)). The strength of the impulse at t = nT_s is equal to the magnitude of 𝐱(t) at that point. We would like to find conditions and methods for recovering 𝐱(t) from 𝐲(t).

It is important to understand that the impulse-sampled signal 𝐲(𝑡) is still a continuous


time signal. The subject of converting 𝐲(𝑡) to a discrete-time signal will be discussed later.

At this point, we need to pose a critical question: How dense must the impulse train 𝐬(𝑡) be
so that the impulse-sampled signal 𝐲(𝑡) is an accurate and complete representation of the
original signal 𝐱(𝑡)? In other words, what are the restrictions on the sampling interval 𝑇𝑠 or,
equivalently, the sampling rate 𝑓𝑠 ? In order to answer this question, we need to develop
some insight into how the frequency spectrum of the impulse-sampled signal 𝐲(𝑡) relates to
the spectrum of the original signal 𝐱(𝑡).
As discussed in previous chapters, 𝐬(t) = ∑_{k=−∞}^{∞} δ(t − kT_s) can be represented by an exponential Fourier series expansion of the form 𝐬(t) = ∑_{k=−∞}^{∞} c_k e^{jkω_s t}, where ω_s is both the sampling rate in rad/s and the fundamental frequency of the impulse train; it is computed as ω_s = 2πf_s = 2π/T_s. The exponential FS coefficients for 𝐬(t) are found as

c_k = (1/T_s) ∫_{−T_s/2}^{T_s/2} 𝐬(t)e^{−jkω_s t}dt = (1/T_s) ∫_{−T_s/2}^{T_s/2} δ(t)e^{−jkω_s t}dt = 1/T_s   for all k

Substituting these exponential FS coefficients into 𝐬(t), the impulse train becomes

𝐬(t) = (1/T_s) ∑_{k=−∞}^{∞} e^{jkω_s t} ⟹ 𝐲(t) = 𝐱(t)𝐬(t) = (1/T_s) ∑_{k=−∞}^{∞} 𝐱(t)e^{jkω_s t}

In order to determine the frequency spectrum of the impulse sampled signal 𝐲(𝑡) let us take
the Fourier transform of both sides of last equation.
𝔽(𝐲(t)) = 𝔽((1/T_s) ∑_{k=−∞}^{∞} 𝐱(t)e^{jkω_s t}) = (1/T_s) ∑_{k=−∞}^{∞} 𝔽(𝐱(t)e^{jkω_s t})

The linearity property of the Fourier transform was used in obtaining this result. Furthermore, using the frequency-shifting property of the Fourier transform, the term inside the summation becomes 𝔽(𝐱(t)e^{jkω_s t}) = 𝑿(ω − kω_s). The Fourier transform of the impulse-sampled signal 𝐲(t) is therefore related to the Fourier transform of the original signal by

𝒀(ω) = (1/T_s) ∑_{k=−∞}^{∞} 𝔽(𝐱(t)e^{jkω_s t}) = (1/T_s) ∑_{k=−∞}^{∞} 𝑿(ω − kω_s)   (the pulse-modulation formula)
For the impulse-sampled signal to be an accurate and complete representation of the
original signal, 𝐱(𝑡) should be recoverable from 𝐲(𝑡). This in turn requires that the
frequency spectrum 𝑿(𝜔) be recoverable from the frequency spectrum 𝒀(𝜔). In the above
figure the spectrum 𝑿(𝜔) used for the original signal is bandlimited to the frequency range
|𝜔| ≤ 𝜔max . Sampling rate 𝜔𝑠 is chosen such that the repetitions of 𝑿(𝜔) do not overlap with
each other in the construction of 𝒀(𝜔), that is 𝜔𝑠 ≥ 2𝜔max . As a result, the shape of the
original spectrum 𝑿(𝜔) is preserved within the sampled spectrum 𝒀(𝜔). This ensures that
𝐱(𝑡) is recoverable from 𝐲(𝑡).

Remark: When spectral sections overlap, the spectrum is corrupted by the sampling process (the sampling rate ω_s has not been chosen carefully). In other words, if a signal is not band-limited, or if the sampling rate is too low, the spectral replicas will overlap one another, and this condition is called aliasing. Once the spectrum is aliased, the original signal is no longer recoverable from its sampled version.

Theorem: (Nyquist-Shannon Theorem) Let 𝐱(𝑡) be any energy signal which is band-limited
with its highest frequency component less than 𝜔max (that is 𝑿(𝜔) = 0 for all |𝜔| > 𝜔max
with a bandwidth 𝜔max ). When 𝐱(𝑡) is sampled using a sampling frequency 𝜔𝑠 ≥ 2𝜔max ,
then it is possible to reconstruct this signal from its samples 𝐱[𝑛].
The Nyquist-Shannon Theorem is sometimes called the Sampling Theorem. The sampling
rate of 2𝜔max is called Nyquist rate. The sampling frequency selected is denoted as 𝑓𝑠 . We
will denote the Nyquist frequency 𝑓𝑠 /2 as 𝑓𝑁 . For the impulse-sampled signal to form an
accurate representation of the original signal, the sampling rate must be at least twice the
highest frequency in the spectrum of the original signal. This is known as the Nyquist
sampling criterion. It was named after Harry Nyquist (1889-1976) who first introduced the
idea in his work on telegraph transmission. Later it was formally proven by his colleague
Claude Shannon (1916-2001) in his work that formed the foundations of information theory.

When an analog signal is sampled, the most important factor is the selection of the
sampling frequency 𝑓𝑠 . In simple words, Sampling Theorem may be stated as, “Sampling
frequency is appropriate when one can recover the analog signal back from the signal
samples.” If the signal cannot be faithfully recovered, then the sampling frequency needs
correction.

In practice, the condition in 𝑓𝑠 ≥ 2𝑓max is usually met with inequality, and with sufficient
margin between the two terms to allow for the imperfections of practical samplers and
reconstruction systems. In practical implementations of samplers, the sampling rate 𝑓𝑠 is
typically fixed by the constraints of the hardware used. On the other hand, the highest
frequency of the actual signal to be sampled is not always known a priori. One example of
this is the sampling of speech signals where the highest frequency in the signal depends on
the speaker, and may vary. In order to ensure that the Nyquist sampling criterion in
𝑓𝑠 ≥ 2𝑓max is met regardless, the signal is processed through an anti-aliasing filter before it is
sampled, effectively removing all frequencies that are greater than half the sampling rate.
This is illustrated in the next figure
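A small MATLAB sketch, with illustrative frequencies, shows what goes wrong without such a filter: a 7 Hz sinusoid sampled at f_s = 10 Hz < 2·7 Hz produces exactly the same samples as a 3 Hz sinusoid:

% Aliasing demo: fs = 10 Hz < 2*7 Hz, so the 7 Hz sinusoid aliases to 3 Hz.
fs = 10; Ts = 1/fs; n = 0:49;
x7 = cos(2*pi*7*n*Ts);            % samples of the 7 Hz signal
x3 = cos(2*pi*3*n*Ts);            % samples of the 3 Hz alias (|7 - fs| = 3)
max(abs(x7 - x3))                 % ~ 0: the two sample sets are identical
t = 0:0.001:1;
plot(t, cos(2*pi*7*t), t, cos(2*pi*3*t)), hold on
stem(n*Ts, x7), grid on, xlim([0 1])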
II.I. DTFT of sampled signal: The relationship between the Fourier transforms of the
continuous-time signal and its impulse-sampled version is given by the following equation
𝒀(𝜔) = {∑∞𝑘=−∞ 𝑿(𝜔 − 𝑘𝜔𝑠 )}/𝑇𝑠 . As discussed before, the purpose of sampling is to ultimately
create a discrete-time signal 𝐱[𝑛] from a continuous time signal 𝐱(𝑡). The discrete-time
signal can then be converted to a digital signal suitable for storage and manipulation on
digital computers.

Let 𝐱[k] be defined in terms of 𝐱(t) as 𝐱[k] = 𝐱(t)|_{t=kT_s}. The Fourier transform of 𝐱(t) is defined by 𝑿(ω) = ∫_{−∞}^{∞} 𝐱(t)e^{−jωt}dt. Similarly, the DTFT of the discrete-time signal 𝐱[k] is

𝑿(Ω) = ∑_{k=−∞}^{∞} 𝐱[k]e^{−jΩk}

We would like to understand the relationship between the two transforms 𝑿(ω) and 𝑿(Ω). If we define Ω = ωT_s, then

𝒀(ω) = (1/T_s) ∑_{k=−∞}^{∞} 𝑿(ω − kω_s) ⟹ 𝒀(Ω) = (1/T_s) ∑_{k=−∞}^{∞} 𝑿((Ω − 2πk)/T_s)

so the discrete-time spectrum is obtained from the continuous-time one by the substitution ω = Ω/T_s.

II.II. Practical issues in sampling: In previous sections the issue of sampling a


continuous-time signal through multiplication by an impulse train was discussed. A
practical consideration in the design of samplers is that we do not have ideal impulse
trains, and must therefore approximate them with pulse trains. Two important questions
that arise in this context are:

1. What would happen to the spectrum if we used a pulse train instead of an impulse train?
2. How would the use of pulses affect the methods used in recovering the original signal
from its sampled version?

When pulses are used instead of impulses, there are two variations of the sampling
operation that can be used, namely natural sampling and zero-order hold sampling. The
former is easier to generate electronically while the latter lends itself better to digital coding
through techniques known as pulse-code modulation and delta modulation. We will review
each sampling technique briefly.
 Natural sampling: Instead of using the periodic impulse train 𝐬(t) = ∑_{k=−∞}^{∞} δ(t − kT_s), let the multiplying signal 𝐬(t) be defined as a periodic pulse train with a duty cycle of d:

𝐬(t) = ∑_{k=−∞}^{∞} ∏((t − kT_s)/(dT_s))

with

∏(t) = { 1, |t| ≤ 1/2 ;  0, |t| > 1/2 }

The ∏(t) signal represents a unit pulse, that is, a pulse with unit amplitude and unit width centered around the time origin t = 0. The period of the pulse train is T_s, the same as the sampling interval. The width of each pulse is dT_s, as shown in the figure.

Multiplication of the signal 𝐱(t) with 𝐬(t) yields a naturally sampled version of 𝐱(t):

𝐲(t) = 𝐱(t)𝐬(t) = 𝐱(t) ∑_{k=−∞}^{∞} ∏((t − kT_s)/(dT_s))

In order to derive the relationship between the frequency spectra of 𝐱(t) and its naturally sampled version 𝐲(t), we make use of the exponential Fourier series representation of 𝐬(t). The exponential FS coefficients for a pulse train with duty cycle d were found as

c_k = (1/T_s) ∫_{−T_s/2}^{T_s/2} 𝐬(t)e^{−jkω_s t}dt = d(sin(πkd)/(πkd)) = d sinc(kd)

Therefore the EFS representation of 𝐬(t) is 𝐬(t) = d ∑_{k=−∞}^{∞} sinc(kd)e^{jkω_s t}. The fundamental frequency is the same as the sampling rate ω_s = 2π/T_s. Using 𝐲(t) = 𝐱(t)𝐬(t), the naturally sampled signal is

𝐲(t) = 𝐱(t) ∑_{k=−∞}^{∞} d sinc(kd)e^{jkω_s t} ⟹ 𝒀(ω) = ∫_{−∞}^{∞} (𝐱(t) ∑_{k=−∞}^{∞} d sinc(kd)e^{jkω_s t}) e^{−jωt}dt

Interchanging the order of integration and summation and rearranging terms, we obtain

𝒀(ω) = d ∑_{k=−∞}^{∞} sinc(kd) [∫_{−∞}^{∞} 𝐱(t)e^{−j(ω−kω_s)t}dt] = ∑_{k=−∞}^{∞} (sin(πkd)/(πk)) 𝑿(ω − kω_s)

(Pulse amplitude modulator)

When the FET transistor is on, the analog voltage is shorted to ground; when off, the FET is
essentially open, so that the analog signal sample appears at the output.

Op-amp 1 is a noninverting amplifier that isolates the analog input channel from the
switching function.

Op-amp 2 is a high input-impedance voltage follower capable of driving low-impedance


loads (high “fanout”).

The resistor 𝑅 is used to limit the output current of op-amp 1 when the FET is "on" and forms a voltage divider with r_d, the drain-to-source resistance of the FET (which is low, but not zero).
 Flat Top Sampling: In natural sampling the tops of the pulses are not flat, but are
rather shaped by the signal 𝐱(𝑡). This behavior is not always desired, especially when the
sampling operation is to be followed by conversion of each pulse to digital format. An
alternative is to hold the amplitude of each pulse constant, equal to the value of the signal
at the left edge of the pulse. This is referred to as zero-order hold sampling or flat-top
sampling, and is illustrated in Figure.
Also, the flat-top pulse signal 𝐲̅(𝑡) is mathematically equivalent to the convolution of the instantaneous samples with a pulse 𝒉_{𝑧𝑜ℎ}(𝑡):

𝐲̅(𝑡) = 𝒉_{𝑧𝑜ℎ}(𝑡) ⋆ 𝐲(𝑡)
     = 𝒉_{𝑧𝑜ℎ}(𝑡) ⋆ ∑_{𝑘=−∞}^{∞} 𝐱(𝑘𝑇𝑠)𝛿(𝑡 − 𝑘𝑇𝑠)
     = ∑_{𝑘=−∞}^{∞} 𝐱(𝑘𝑇𝑠){𝒉_{𝑧𝑜ℎ}(𝑡) ⋆ 𝛿(𝑡 − 𝑘𝑇𝑠)}
     = ∑_{𝑘=−∞}^{∞} 𝐱(𝑘𝑇𝑠) ∫_{−∞}^{∞} 𝒉_{𝑧𝑜ℎ}(𝑡 − 𝜏)𝛿(𝜏 − 𝑘𝑇𝑠)𝑑𝜏
     = ∑_{𝑘=−∞}^{∞} 𝐱(𝑘𝑇𝑠)𝒉_{𝑧𝑜ℎ}(𝑡 − 𝑘𝑇𝑠)

In the frequency domain we have

𝑯_{𝑍𝑂𝐻}(𝑗𝜔) = 𝒀̅(𝜔)/𝒀(𝜔) = (1 − 𝑒^{−𝑠𝑑𝑇𝑠})/𝑠 |_{𝑠=𝑗𝜔} = 𝑒^{−𝑗𝜔𝑑𝑇𝑠/2} (sin(𝜔𝑑𝑇𝑠/2)/(𝜔/2))

so that

𝒀̅(𝜔) = 𝑯_{𝑍𝑂𝐻}(𝑗𝜔)𝒀(𝜔) = (sin(𝜔𝑑𝑇𝑠/2)/(𝜔𝑇𝑠/2)) 𝑒^{−𝑗𝜔𝑑𝑇𝑠/2} ∑_{𝑘=−∞}^{∞} 𝑿(𝜔 − 𝑘𝜔𝑠)

Remark: In most applications, we try to obey the constraint of the sampling theorem by
sampling at a rate higher than twice the highest frequency 𝑓𝑠 > 2𝑓𝑚𝑎𝑥 in order to avoid the
problems of aliasing. This is called over-sampling. When 𝑓𝑠 < 2𝑓𝑚𝑎𝑥 , the signal is under-
sampled and we say that aliasing has occurred.
III. Reconstruction of Continuous-Time Signals: The sampling theorem suggests that a
process exists for reconstructing a continuous-time signal from its samples. This
reconstruction process would undo the A/D conversion so it is called D/A conversion
𝐱[𝑛] → 𝐷/𝐴 → 𝐱(𝑡). Since the sampling process of the ideal A/D converter is defined by the
mathematical substitution 𝑡 = 𝑛/𝑓𝑠 , we would expect the same relationship to govern the
ideal D/A converter, that is, 𝐱(𝑡) = 𝐱[𝑛]|𝑛=𝑓𝑠 𝑡 = 𝐱[𝑓𝑠 𝑡].

If a discrete time signal has an infinite length, we can terminate the signal at a desired
finite number of terms, by multiplying it by a window function. There are several window
functions such as the rectangular, triangular, Hamming, Hanning, Kaiser, etc. However, we
must choose a suitable window function; otherwise, the sequence will be terminated
abruptly producing the effect of leakage.

III.I. Zero-Order Hold Interpolation: The zero-order hold (ZOH) is a mathematical model
of the practical signal reconstruction done by a conventional digital-to-analog converter
(DAC). That is, it describes the effect of converting a discrete-time signal to a continuous-
time signal by holding each sample value for one sample interval. It has several
applications in electrical communication.

Interpolation is performed using horizontal lines, or polynomials of order zero, between sampling instants. This is referred to as zero-order hold interpolation.

Zero-order hold interpolation can be achieved by processing the impulse sampled signal
𝐲(𝑡) through zero-order hold reconstruction filter, a linear system the impulse response of
which is a rectangle with unit amplitude and a duration of 𝑇𝑠 .
Notice the similarity between 𝒉𝑧𝑜ℎ (𝑡) for zero-order hold interpolation and 𝒉𝑧𝑜ℎ (𝑡) derived in
the discussion of zero-order hold sampling. The two become the same if the duty cycle is
set equal to 𝑑 = 1.
𝒉_{𝑧𝑜ℎ}(𝑡) = ∏((𝑡 − 0.5𝑑𝑇𝑠)/(𝑑𝑇𝑠))  (ZOH sampler)        𝒉_{𝑧𝑜ℎ}(𝑡) = ∏((𝑡 − 0.5𝑇𝑠)/𝑇𝑠)  (ZOH interpolator)

III.II. Higher-Order Interpolation Filter: Given a signal 𝐱(𝑡), the reconstruction of 𝐱(𝑡)
from the sampled waveform 𝐲(𝑡) = 𝐱(𝑡)𝐬(𝑡) can be carried out as follows. First, suppose that
𝐱(𝑡) has bandwidth 𝑊; that is, 𝐗(𝜔) = 0 for |𝜔| > 𝑊

Then if 𝜔𝑠 ≥ 2𝑊 in the expression 𝒀(𝜔) = [∑∞ 𝑘=−∞ 𝑿(𝜔 − 𝑘𝜔𝑠 )]/𝑇𝑠 for 𝒀(𝜔) the replicas of 𝑿(𝜔)
do not overlap in frequency. Thus if the sampled signal 𝐲(𝑡) is applied to an ideal lowpass
filter with the frequency function shown in figure below, the only component of 𝒀(𝜔) that is
passed is 𝑿(𝜔). Hence, the output of the filter is equal to 𝐱(𝑡), which shows that the original
signal 𝐱(𝑡) can be completely and exactly reconstructed from the sampled waveform 𝐲(𝑡).

So, the reconstruction of 𝐱(𝑡) from the sampled signal 𝐲(𝑡) = 𝐱(𝑡)𝐬(𝑡) can be accomplished
by a simple low-pass filtering of the sampled signal. The process is illustrated in Figure
above. The filter in this figure is sometimes called an interpolation filter, since it reproduces
𝐱(𝑡) from the values of 𝐱(𝑡) at the time points 𝑡 = 𝑘𝑇𝑠 .

From figure above it is clear that the frequency response function of the interpolating filter
is given by

𝑯(𝜔) = { 𝑇𝑠 for |𝜔| ≤ 𝑊 ;  0 for |𝜔| > 𝑊 }  ⟷  𝐡(𝑡) = (𝑊𝑇𝑠/𝜋)(sin(𝑊𝑡)/(𝑊𝑡)),  −∞ < 𝑡 < ∞

and the output 𝐱(𝑡) of the interpolating filter (with 𝑊 = 𝜔𝑠/2) is given by:

𝐱(𝑡) = 𝐡(𝑡) ⋆ 𝐲(𝑡) = ∫_{−∞}^{∞} 𝐲(𝜏)𝐡(𝑡 − 𝜏)𝑑𝜏 = ∫_{−∞}^{∞} (∑_{𝑘=−∞}^{∞} 𝐱[𝑘𝑇𝑠]𝛿(𝜏 − 𝑘𝑇𝑠)) 𝐡(𝑡 − 𝜏)𝑑𝜏
     = ∑_{𝑘=−∞}^{∞} ∫_{−∞}^{∞} 𝐱[𝑘𝑇𝑠]𝛿(𝜏 − 𝑘𝑇𝑠)𝐡(𝑡 − 𝜏)𝑑𝜏 = ∑_{𝑘=−∞}^{∞} 𝐱[𝑘𝑇𝑠]𝐡(𝑡 − 𝑘𝑇𝑠) = ∑_{𝑘=−∞}^{∞} {𝐱[𝑘] (sin((𝜔𝑠/2)(𝑡 − 𝑘𝑇𝑠))/((𝜔𝑠/2)(𝑡 − 𝑘𝑇𝑠)))}

Now let us work through some MATLAB exercises to make things clear; later on we will develop the mathematics in the next sections.

MATLAB-Solved Problems:
MATLAB Exercise: 01 (Spectral relations in impulse sampling). Consider the continuous-
time signal 𝐱 𝑎 (𝑡) = 𝑒 −|𝑡| . Its Fourier transform is
𝑿𝑎(𝜔) = 2/(1 + 𝜔²)  or  𝑿𝑎(𝑓) = 2/(1 + 4𝜋²𝑓²)

Compute and graph the spectrum of 𝐱 𝑎 (𝑡). If the signal is impulse-sampled using a
sampling rate of 𝑓𝑠 = 1 𝐻𝑧 to obtain the signal 𝐱 𝑠 (𝑡), compute and graph the spectrum of
the impulse-sampled signal. Afterwards repeat with 𝑓𝑠 = 2 𝐻𝑧.

Solution: The script listed below utilizes an anonymous function to define the transform 𝑿𝑎(𝑓). It then uses 𝑿𝑠(𝑓) = (1/𝑇𝑠) ∑_{𝑘=−∞}^{∞} 𝑿𝑎(𝑓 − 𝑘𝑓𝑠) to compute and graph 𝑿𝑠(𝑓) superimposed with the contributing terms.

clear all, clc, Xa = @(f) 2./(1+4*pi*pi*f.*f); % Original spectrum
f = -3:0.01:3;
fs = 1;    % Sampling rate
Ts = 1/fs; % Sampling interval
% Approximate spectrum of impulse-sampled signal
Xs = zeros(size(Xa(f)));
for k=-5:5,
    Xs = Xs + fs*Xa(f-k*fs);
end;
% Graph the original spectrum
clf;
subplot(2,1,1);
plot(f,Xa(f),'linewidth',1.5); grid;
axis([-3,3,-0.5,2.5]); title('X_{a}(f)');
% Graph spectrum of impulse-sampled signal
subplot(2,1,2);
plot(f,Xs,'linewidth',1.5); grid;
axis([-3,3,-0.5,2.5]); hold on;
for k=-5:5,
    tmp = plot(f,fs*Xa(f-k*fs),'-g','linewidth',1.5);
end;
hold off;
title('X_{s}(f)'); xlabel('f (Hz)');
MATLAB Exercise: 02 (Natural sampling) The two-sided exponential signal 𝐱 𝑎 (𝑡) = 𝑒 −|𝑡| is
sampled using a natural sampler with a sampling rate of 𝑓𝑠 = 4 𝐻𝑧 and a duty cycle of
𝑑 = 0.6. Compute and graph 𝑿𝑠 (𝑓) in the frequency interval −12 ≤ 𝑓 ≤ 12 𝐻𝑧.

Solution: The spectrum given by 𝑿𝑠(𝜔) = ∑_{𝑘=−∞}^{∞} (sin(𝜋𝑘𝑑)/𝜋𝑘)𝑿(𝜔 − 𝑘𝜔𝑠) may be written using 𝑓 instead of 𝜔 as 𝑿𝑠(𝑓) = ∑_{𝑘=−∞}^{∞} (sin(𝜋𝑘𝑑)/𝜋𝑘)𝑿(𝑓 − 𝑘𝑓𝑠). The script to compute and graph 𝑿𝑠(𝑓) is listed below; it is obtained by modifying the previous script. The sinc envelope is also shown.

clear all, clc


Xa = @(f) 2./(1+4* pi*pi*f.*f);
f = [-12:0.01:12];
fs = 4; % Sampling rate.
Ts = 1/fs; % sampling interval.
d = 0.6; % Duty cycle
Xs = zeros (size(Xa(f)));
for k=-5:5,
Xs = Xs+d*sinc(k*d)*Xa(f-k*fs);
end;
plot(f,Xs ,'b-','linewidth',1.5);
grid on;
hold on
plot(f,2*d*sinc(f*d/fs),'r--','linewidth',1.5);
grid on;
axis ([-12 ,12 ,-0.5 ,1.5]);
The spectrum 𝑿𝑠(𝑓) is shown in the figure below.

MATLAB Exercise: 03 (ZOH sampling) Two-sided exponential signal 𝐱 𝑎 (𝑡) = 𝑒 −|𝑡| is sampled
using a zero-order hold sampler with a sampling rate of 𝑓𝑠 = 3 𝐻𝑧 and a duty cycle of
𝑑 = 0.3 . Compute and graph |𝑿𝑠 (𝑓)| in the frequency interval −12 ≤ 𝑓 ≤ 12 Hz.

Solution: The spectrum 𝑿𝑠(𝜔) may be written using 𝑓 instead of 𝜔 as

𝑿𝑠(𝑓) = (sin(𝜋𝑓𝑑𝑇𝑠)/(𝜋𝑓𝑇𝑠)) 𝑒^{−𝑗𝜋𝑓𝑑𝑇𝑠} ∑_{𝑘=−∞}^{∞} 𝑿(𝑓 − 𝑘𝑓𝑠)

The script to compute and graph 𝑿𝑠 (𝑓) is listed below. It is obtained by modifying the script
developed in MATLAB Exercise 1.

clear all, clc


Xa = @(f) 2./(1+4* pi*pi*f.*f); f = [-12:0.01:12];
fs = 3; % Sampling rate
Ts = 1/fs; % Sampling interval
d = 0.3; % Duty cycle
Xs = zeros (size(Xa(f)));
for k=-5:5,
Xs = Xs+fs*Xa(f-k*fs);
end;
Xs = d*Ts*sinc(f*d*Ts).*exp(-j*pi*f*d*Ts).*Xs;
plot(f,abs(Xs),'linewidth',1.5); grid;
axis ([ -12 ,12 , -0.1 ,0.8]);title('|X_s(f)|'); xlabel('f (Hz)');
MATLAB Exercise: 04 (Graphing signals for natural and zero-order hold sampling) In this exercise we will develop and test two functions, natsamp(..) and zohsamp(..), for obtaining graphical representations of signals sampled using natural sampling and zero-order hold sampling, respectively. The function natsamp(..) evaluates a naturally sampled signal at a specified set of time instants.

function xnat = natsamp(x,Ts,d,t)


t1 = (mod(t,Ts)<=d*Ts);
xnat = x(t).*t1;
end

The function zohsamp(..) evaluates and returns a zero-order hold sampled version of the
signal.

function xzoh = zohsamp(x,Ts,d,t)
    t1 = (mod(t,Ts)<=d*Ts);
    xzoh = x(t).*t1;
    flg = 0;
    for i=1:length(t),
        if not(t1(i)),
            flg = 0;
        elseif (t1(i) & (flg==0)),
            flg = 1;
            value = xzoh(i);
        end;
        if (flg == 1),
            xzoh(i) = value;
        end;
    end;
end
For both functions the input arguments are as follows: x: the name of an anonymous function that can be used for evaluating the analog signal 𝐱(𝑡) at any specified time instant; Ts: the sampling interval in seconds; d: the duty cycle, which should satisfy 0 < 𝑑 ≤ 1; t: the vector of time instants at which the sampled signal should be evaluated. For a detailed graph, choose the time increment for the values in vector "𝑡" to be significantly smaller than 𝑇𝑠.

The function natsamp(..) can be tested with the double sided exponential signal using the
following statements:
>> x = @(t) exp (-abs(t));
>> t = [-4:0.001:4];
>> xnat = natsamp(x ,0.2 ,0.5 , t);
>> plot(t,xnat);

The function zohsamp(..) can be tested with the following:

>> xzoh = zohsamp(x ,0.2 ,0.5 , t);


>> plot(t,xzoh);

MATLAB Exercise: 05 (Reconstruction of a right-sided exponential) A right-sided exponential signal is given by 𝐱(𝑡) = 𝑒^{−100𝑡}𝑢(𝑡). The spectrum 𝑿(𝑓) is not bandlimited, and therefore there is no sampling rate that would satisfy the requirements of the Nyquist sampling theorem. As a result, aliasing will be present in the spectrum regardless of the sampling rate used. The aliasing effect is most noticeable for 𝑓𝑠 = 200 Hz, and less so for 𝑓𝑠 = 600 Hz.
In a practical application of sampling, we would have processed the signal 𝐱(𝑡) through an
anti-aliasing filter prior to sampling it. However, in this exercise we will omit the anti-
aliasing filter, and attempt to reconstruct the signal from its sampled version using the
three techniques. The script given below produces a graph of the impulse-sampled signal
and the zero-order hold approximation to the analog signal 𝐱(𝑡).

clear all, clc,


fs = 200; % Sampling rate
Ts = 1/fs; % Sampling interval
% Set index limits "n1" & "n2" to cover time interval from -25ms to +75ms
n1 = -fs/40;
n2 = -3*n1;
n = [n1:n2];
t = n*Ts; % Vector of time instants
xs = exp(-100*t).*(n >=0); % Samples of the signal
clf;
stem(t,xs,'^'); grid;
hold on;
stairs(t,xs,'r-');
hold off;
axis ([-0.030 ,0.080,-0.2 ,1.1]);
title('Reconstruction using zero-order hold');
xlabel ('t (sec)');
ylabel ('Amplitude');
text (0.015,0.7, sprintf('Sampling rate= %.3g Hz',fs));

The sampling rate can be modified by editing line 2 of the code. The graph generated by
this function is shown in figure below, for sampling rates 200 Hz and 400 Hz.

Modifying this script to produce first-order hold interpolation is almost trivial. The modified
script is given below.
clear all, clc,
fs = 200; % Sampling rate
Ts = 1/fs; % Sampling interval
% Set index limits "n1" & "n2" to cover time interval from -25ms to +75ms
n1 = -fs/40;
n2 = -3*n1;
n = [n1:n2];
t = n*Ts; % Vector of time instants
xs = exp (-100* t).*(n >=0); % Samples of the signal
clf;
stem(t,xs,'^'); grid;
hold on;
plot(t,xs,'r-');
hold off;
axis ([-0.030 ,0.080 , -0.2 ,1.1]);
title('Reconstruction using first-order hold');
xlabel ('t (sec)');
ylabel ('Amplitude');
text (0.015 ,0.7 , sprintf ('Sampling rate = %.3g Hz',fs));

The only functional change is in line 13 where we use the function plot(..) instead of the
function stairs(..). The graph generated by this modified script is shown in Figure below for
sampling rates 200 Hz and 400 Hz.

Reconstruction through bandlimited interpolation requires a bit more work. The script for
this purpose is given below. Note that we have added a new section to the previous script to
compute the shifted sinc functions.

𝐲(𝑡) = ∑_{𝑘=−∞}^{∞} 𝐲[𝑘] sinc((𝑡 − 𝑘𝑇𝑠)/𝑇𝑠) = ∑_{𝑘=−∞}^{∞} {𝑇𝑠 (sin(𝜋(𝑡 − 𝑘𝑇𝑠)/𝑇𝑠)/(𝜋(𝑡 − 𝑘𝑇𝑠))) 𝐲[𝑘]} = ∑_{𝑘=−∞}^{∞} {𝐲[𝑘] (sin((𝜔𝑠/2)(𝑡 − 𝑘𝑇𝑠))/((𝜔𝑠/2)(𝑡 − 𝑘𝑇𝑠)))}
clear all, clc,
fs = 200; % Sampling rate
Ts = 1/fs; % Sampling interval
% Set index limits "n1"& "n2" to cover time interval from -25 ms to +75ms
n1 = -fs/40;
n2 = -3*n1;
n = [n1:n2];
t = n*Ts; % Vector of time instants
xs = exp (-100* t).*(n >=0); % Samples of the signal
% Generate the sinc interpolating functions
t2 = [-0.025:0.0001:0.1];
xr = zeros (size(t2));
for n=n1:n2 ,
nn = n-n1+1; % Because MATLAB indices start at 1
xr = xr+xs(nn)*sinc((t2-n*Ts)/Ts);
end;
clf;
stem(t,xs,'^'); grid;
hold on;
plot(t2,xr,'r-');
hold off;
axis ([-0.030 ,0.080 , -0.2 ,1.1]);
title('Reconstruction using bandlimited interpolation');
xlabel ('t (sec)'); ylabel ('Amplitude');
text (0.015 ,0.7 , sprintf ('Sampling rate = %.3g Hz',fs));

The graph generated by this script is shown in Figure below for sampling
rates 200 Hz and 400 Hz.
IV. Linear Systems and Sampling: Models for continuous-time dynamical systems often
arise from the application of physical laws such as conservation of mass, momentum, and
energy. These models typically take the form of linear or nonlinear differential equations,
where the parameters involved can usually be interpreted in terms of physical properties of
the system. In practice, however, these kinds of models are not appropriate to interact with
digital devices. In any situation where digital controllers have to act on a real system, this
action can be applied (or updated) only at some specific time instants. Similarly, if we are
interested in collecting information from signals of a given system, this data can usually
only be recorded (and stored) at specific instants. This constitutes nowadays an
unavoidable paradigm: continuous-time systems interact with actuators and sensors that
are accessible only at discrete-time instants. As a consequence, the sampling process of
continuous-time systems is a key problem both for estimation and control purposes.

In this context, the current section considers sampled-data models for linear systems. The
focus is on describing, in discrete-time, the relationship between the input signals and the
samples of the continuous-time system outputs.

There are several discretization and interpolation methods for converting dynamic system
from continuous time to discrete time and for resampling discrete-time models. Some
methods tend to provide a better frequency-domain match between the original and
converted systems, while others provide a better match in the time domain.

There are a lot of methods to find the discrete equivalent system, among them we refer

▪ Numerical integration (i.e. Forward, Backward, Bilinear methods)


▪ Pole zero matching method
▪ Step invariant method (zero order hold)
▪ First order hold method
▪ Impulse invariant method (impulse modulation)
▪ Least Squares (continuous-to-discrete conversion only)

Remark: The method of equivalence is called emulation


❶ Numerical Integration: The topic of numerical integration of differential equations is quite complex, and only the elementary techniques are presented here:

𝐱̇ ≝ (𝐱[𝑘 + 1] − 𝐱[𝑘])/𝑇      (The Forward rule)
𝐱̇ ≝ (𝐱[𝑘] − 𝐱[𝑘 − 1])/𝑇      (The Backward rule)
(1/2)(𝐱̇(𝑡) + 𝐱̇(𝑡 + 𝑇)) ≝ (𝐱[𝑘 + 1] − 𝐱[𝑘])/𝑇      (The Bilinear rule)

The operation can be carried out directly on the system function (transfer function) if one translates the above equations into the frequency domain:

𝑠𝑿(𝑠) ↔ ((𝑧 − 1)/𝑇)𝑿(𝑧)  ⟺  𝑠 ≅ (𝑧 − 1)/𝑇      (The Forward rule)
𝑠𝑿(𝑠) ↔ ((𝑧 − 1)/𝑇𝑧)𝑿(𝑧)  ⟺  𝑠 ≅ (𝑧 − 1)/𝑇𝑧      (The Backward rule)
((𝑧 + 1)/2)𝑠𝑿(𝑠) ↔ ((𝑧 − 1)/𝑇)𝑿(𝑧)  ⟺  𝑠 ≅ (2/𝑇)((𝑧 − 1)/(𝑧 + 1))      (The Bilinear rule)

Remark: the trapezoidal rule is also called the Tustin's method or bilinear transformation.
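As a quick sanity check of these substitution rules, the sketch below builds the three discrete equivalents of 𝐻(𝑠) = 𝑎/(𝑠 + 𝑎) directly from the mappings and compares the bilinear one against MATLAB's c2d with the 'tustin' option. The values 𝑎 = 3 and 𝑇 = 0.1 s are assumed test values.

a = 3; T = 0.1; z = tf('z', T);     % assumed test values
Hf = a / ((z-1)/T + a);             % forward rule:  s ~ (z-1)/T
Hb = a / ((z-1)/(T*z) + a);         % backward rule: s ~ (z-1)/(Tz)
Ht = a / ((2/T)*(z-1)/(z+1) + a);   % bilinear rule: s ~ (2/T)(z-1)/(z+1)
Hc = c2d(tf(a,[1 a]), T, 'tustin'); % reference bilinear discretization
step(Hf, Hb, Ht, Hc); grid on
legend('forward','backward','bilinear','c2d tustin');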

Now it is interesting to see how the stable region of s-plane can be mapped to the z-plane.

Forward method:

𝐱̇ ≝ (𝐱[𝑘 + 1] − 𝐱[𝑘])/𝑇  or  𝑠 ≅ (𝑧 − 1)/𝑇

Discrete Stable ⟹ Continuous Stable. This means that we can have a continuous stable system whose discrete equivalent is unstable.

Explanation: 𝑧 = 𝑠𝑇 + 1 or 𝑠 = (𝑧 − 1)/𝑇, and let 𝑧 = 𝜎 + 𝑗𝜔; then

continuous stable ⟹ 𝑅𝑒(𝑠) < 0 ⟹ 𝑅𝑒((𝜎 − 1)/𝑇 + 𝑗𝜔/𝑇) < 0 ⟹ 𝜎 < 1

The stable region of the s-plane is therefore mapped onto the half-plane 𝜎 < 1, which extends beyond the unit circle. This means that we can have a continuous stable system but a discrete unstable one; in other words, we can interpret this result by the implication Discrete Stable ⟹ Continuous Stable.


Backward method:

𝐱̇ ≝ (𝐱[𝑘] − 𝐱[𝑘 − 1])/𝑇  or  𝑠 ≅ (𝑧 − 1)/𝑇𝑧

Continuous Stable ⟹ Discrete Stable. This means that we can have a continuous unstable system whose discrete equivalent is stable.

Explanation: 𝑧^{−1} = 1 − 𝑠𝑇 or 𝑠 = (𝑧 − 1)/𝑇𝑧, and let 𝑧 = 𝜎 + 𝑗𝜔; then

continuous stable ⟹ 𝑅𝑒(𝑠) < 0 ⟹ 𝑅𝑒(((𝜎 − 1) + 𝑗𝜔)/(𝑇(𝜎 + 𝑗𝜔))) < 0 ⟹ (𝜎 − 1/2)² + 𝜔² < (1/2)²  (disc equation)

Notice that the obtained disc is located inside the unit circle, so we can interpret this result by the implication Continuous Stable ⟹ Discrete Stable. In other words, we can have a stable discrete system whose continuous counterpart is unstable.

Bilinear method (Tustin's method):

𝑧 = 𝑒^{𝑠𝑇} ⟹ 𝑠 = (1/𝑇) ln(𝑧) ≅ (2/𝑇)((𝑧 − 1)/(𝑧 + 1))

Continuous Stable ⟺ Discrete Stable. This means that continuous stability is equivalent to discrete stability.

Explanation: 𝑠 = 2(𝑧 − 1)/(𝑇(𝑧 + 1)); let 𝑧 = 𝜎 + 𝑗𝜔 and 𝑠 = 𝑎 + 𝑗𝑏; then

continuous stable ⟹ 𝑅𝑒(𝑠) < 0 ⟹ 𝑅𝑒(((𝜎 − 1) + 𝑗𝜔)/((𝜎 + 1) + 𝑗𝜔)) < 0 ⟹ 𝜎² + 𝜔² < 1  (disc equation)

discrete stable ⟹ |𝑧| < 1 ⟹ |(2 + 𝑇𝑠)/(2 − 𝑇𝑠)| < 1 ⟹ 𝑎 < 0 (LHP) ⟹ continuous stable

Notice that the obtained disc is the unit disc, so we can interpret this result by the equivalence Continuous Stable ⟺ Discrete Stable.
Example: Use the previous methods to find 𝐻(𝑧) equivalent to the given 𝐻(𝑠) = 𝑎/(𝑠 + 𝑎).

The Forward rule:   𝐻(𝑧) = 𝑎/((𝑧 − 1)/𝑇 + 𝑎) = 𝑎𝑇/(𝑧 + (𝑎𝑇 − 1))

The Backward rule:  𝐻(𝑧) = 𝑎/((𝑧 − 1)/𝑇𝑧 + 𝑎) = 𝑎𝑇𝑧/((𝑎𝑇 + 1)𝑧 − 1)

The Bilinear rule:  𝐻(𝑧) = 𝑎/((2/𝑇)((𝑧 − 1)/(𝑧 + 1)) + 𝑎) = 𝑎𝑇(𝑧 + 1)/((𝑎𝑇 + 2)𝑧 + (𝑎𝑇 − 2))

clear all, clc, Ts=0.01; t1=0:Ts:4; a=3; u=ones(1,length(t1));
n1 = length(u)-1;
y1(1)=0; y2(1)=0; y3(1)=0; a1=-(a*Ts-2)/(a*Ts+2); a2=(a*Ts)/(a*Ts+2);
for k=1:n1
    y1(k+1) = -(a*Ts-1)*y1(k) + a*Ts*u(k);                % forward rule
    y2(k+1) = 1/(a*Ts+1)*y2(k) + ((a*Ts)/(a*Ts+1))*u(k+1); % backward rule
    y3(k+1) = a1*y3(k) + a2*u(k+1) + a2*u(k);             % bilinear rule
end
plot(t1(1:n1),y1(1:n1),'-','linewidth',3); hold on; grid on
plot(t1(1:n1),y2(1:n1),'-','linewidth',3); hold on; grid on
plot(t1(1:n1),y3(1:n1),'-','linewidth',3); grid on

❷ Step invariant method (Zero Order Hold): First let us find the transfer function of the zero-order hold system.

The transfer function of the ZOH can be obtained by Laplace transformation of the impulse
response. As shown in the figure, the impulse response is a unit pulse of width 𝑇. A pulse
can be represented as a positive step at time zero followed by a negative step at time 𝑇.
Using the Laplace transform of a unit step and the time delay theorem for Laplace
transforms,
𝑯_{𝑍𝑂𝐻}(𝑠) = 𝕃(𝑢(𝑡) − 𝑢(𝑡 − 𝑇)) = (1 − 𝑒^{−𝑠𝑇})/𝑠

Next, we consider the frequency response of the ZOH:

𝑯_{𝑍𝑂𝐻}(𝑗𝜔) = (1 − 𝑒^{−𝑗𝜔𝑇})/(𝑗𝜔) = (𝑒^{−𝑗𝜔𝑇/2}/𝜔)((𝑒^{𝑗𝜔𝑇/2} − 𝑒^{−𝑗𝜔𝑇/2})/𝑗) = 𝑇𝑒^{−𝑗𝜔𝑇/2} (sin(𝜔𝑇/2)/(𝜔𝑇/2))
The basic idea behind the step-invariance method is to choose a step response for the discrete system that matches the step response of the analog system at the sampling instants; this is the reason why we call it step invariant. Let us excite a system by a step signal as forcing function; then its response will be

𝐲(𝑡) = 𝕃^{−1}{𝑯(𝑠)/𝑠}      (continuous system)
𝐲[𝑘] = 𝒵^{−1}{𝑯(𝑧)/(1 − 𝑧^{−1})}      (discrete system)

If we equate the two responses at the sampling instants we get:

𝐲[𝑘] = 𝐲(𝑡)|_{𝑡=𝑘𝑇} = 𝕃^{−1}{𝑯(𝑠)/𝑠}|_{𝑡=𝑘𝑇}  ⟹  𝑯(𝑧) = (1 − 𝑧^{−1}) ⋅ 𝒵{𝕃^{−1}{𝑯(𝑠)/𝑠}|_{𝑡=𝑘𝑇}}

Explanation (1st method): Let us define the hold system by ℎ₀(𝑡) = 𝑢(𝑡) − 𝑢(𝑡 − 𝑇):

𝑯₀(𝑠) = 1/𝑠 − 𝑒^{−𝑠𝑇}/𝑠 = (1 − 𝑒^{−𝑠𝑇})/𝑠

We know that the mapping between the s-plane and the z-plane is 𝑧 = 𝑒^{𝑠𝑇}, so

𝑯(𝑧) = 𝒵{𝕃^{−1}{𝑯₀(𝑠)𝑯(𝑠)}|_{𝑡=𝑘𝑇}} = 𝒵{𝕃^{−1}{(1 − 𝑒^{−𝑠𝑇})𝑯(𝑠)/𝑠}|_{𝑡=𝑘𝑇}} = (1 − 𝑧^{−1}) ⋅ 𝒵{𝕃^{−1}{𝑯(𝑠)/𝑠}|_{𝑡=𝑘𝑇}}

Remark: the advantage of setting the input to a step 𝑢[𝑘] is that a step function is invariant to the ZOH, i.e., 𝑢(𝑡) is also a step over every sampling period (unaffected by the ZOH).

Explanation (2nd method): Let us define the hold system by ℎ₀(𝑡) = 𝑢(𝑡) − 𝑢(𝑡 − 𝑇):

𝑯₀(𝑠) = 𝕃{ℎ₀(𝑡)} = (1 − 𝑒^{−𝑠𝑇})/𝑠

Before commencing the development we put the following setting:

𝑮(𝑠) = 𝑯(𝑠)/𝑠  so  g(𝑡) = 𝕃^{−1}{𝑯(𝑠)/𝑠}  and  𝕃^{−1}{𝑒^{−𝑠𝑇}} = 𝛿(𝑡 − 𝑇)

𝕃^{−1}{𝑯₀(𝑠)𝑯(𝑠)} = 𝕃^{−1}{𝑯(𝑠)/𝑠 − 𝑒^{−𝑠𝑇}𝑯(𝑠)/𝑠} = 𝕃^{−1}{𝑮(𝑠) − 𝑒^{−𝑠𝑇}𝑮(𝑠)} = g(𝑡) − g(𝑡 − 𝑇)

𝑯(𝑧) = 𝒵{𝕃^{−1}{𝑯₀(𝑠)𝑯(𝑠)}|_{𝑡=𝑘𝑇}} = 𝒵{g(𝑡) − g(𝑡 − 𝑇)|_{𝑡=𝑘𝑇}} = (1 − 𝑧^{−1}) 𝒵{g(𝑡)|_{𝑡=𝑘𝑇}} = (1 − 𝑧^{−1}) ⋅ 𝒵{𝕃^{−1}{𝑯(𝑠)/𝑠}|_{𝑡=𝑘𝑇}}
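A minimal numeric illustration of the step-invariance property is sketched below (the values 𝑎 = 3 and 𝑇 = 0.2 s are assumptions for the test): the step response of the ZOH equivalent produced by c2d(...,'zoh') coincides with the analog step response at the sampling instants.

a = 3; T = 0.2; H = tf(a, [1 a]);   % assumed first-order test system
Hd = c2d(H, T, 'zoh');              % step-invariant (ZOH) equivalent
t = 0:T/50:2;
y = step(H, t);                     % analog step response
[yd, td] = step(Hd, 2);             % discrete step response
plot(t, y, 'b-', td, yd, 'ro'); grid on
legend('analog step response', 'ZOH equivalent at t = kT');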
𝑠 𝑡=𝑘𝑇
Example:01 MATLAB Code for the ZOH design
clear all, close all, clc,
t0 = 0; dt = 1e-3; tn = 1;
t = t0:dt:tn; % Time domain
x = @(t)(sin(2*pi*t)); % Original signal
% x = @(t)(4*exp(3*t));
Ts = 3e-2; % Sampling period
zohImpl = ones(1,Ts/dt); % Impulse ZOH ℎ0(𝑡)
nSamples = tn/Ts;
samples = 0:nSamples; % Samples
xSampled = zeros(1,length(t));
xSampled(1:Ts/dt:end)= x(samples*Ts); % Sampled signal
%Convolution with IR response
xZoh1 = conv(zohImpl,xSampled);
xZoh1 = xZoh1(1:length(t));
figure(3);
hold all;
plot(t,x(t),'-r','linewidth',3);
stem(t,xSampled);
plot(t,xZoh1,'-b','linewidth',3);

Example: 02 Alternative MATLAB Code for the ZOH design


clear all, close all, clc,
t0 = 0; dt = 1e-3; tn = 1;
t = t0:dt:tn; % Time domain
x = @(t)(sin(2*pi*t)); % Original signal
% x = @(t)(4*exp(3*t));
Ts = 5e-2;
nSamples = tn/Ts;
samples = 0:nSamples; % Samples
xSampled = x(samples*Ts); % Sampled signal
k=0;
for i=1:length(t)
if k*Ts<= t(i) && t(i)<(k+1)*Ts
k=k+1;
end
xzoh2(i)= xSampled(k);
end
figure(4);
hold all;
plot(t,x(t),'-r','linewidth',3);
plot(t,xzoh2,'-b','linewidth',3);
stem(samples*Ts,xSampled);
fc=100; % cut-off frequency
Hf=tf([1],[1/fc 1]); y=lsim(Hf,xzoh2,t);
plot(t,y,'-g','linewidth',3);
clear all, close all, clc,
t0 = 0; dt = 1e-3; tn = 1; t = t0:dt:tn; % Time domain
x = @(t)(sin(2*pi*t)); Ts = 5e-2; nSamples = tn/Ts;
samples = 0:nSamples; % Samples
xSampled = x(samples*Ts); % Sampled signal
figure(3); hold all;
plot(t,x(t),'-r','linewidth',3);
stem(samples*Ts,xSampled);
k=0;
for i=1:length(t)
if k*Ts<= t(i) && t(i)<(k+1)*Ts
k=k+1;
end
xzoh2(i)= xSampled(k);
end
plot(t,xzoh2,'-b','linewidth',3);

❸ First Order Hold Equivalence: We now consider a first-order hold method, which extrapolates samples by connecting them with straight lines. The impulse response of the FOH is shown in the figure:

𝐡₁(𝑡) = (1/𝑇)[𝐑(𝑡 − 𝑇)𝑢(𝑡 − 𝑇) + 𝐑(𝑡 + 𝑇)𝑢(𝑡 + 𝑇) − 2𝐑(𝑡)𝑢(𝑡)]

where 𝐑(𝑡) is the ramp signal. Now, knowing that the mapping between the s-plane and the z-plane is 𝑧 = 𝑒^{𝑠𝑇}, the transfer function of this FOH is given by:

𝐇₁(𝑠) = (𝑒^{𝑠𝑇} + 𝑒^{−𝑠𝑇} − 2)/(𝑇𝑠²) = (1/𝑇)((𝑒^{𝑠𝑇/2} − 𝑒^{−𝑠𝑇/2})/𝑠)² = (𝑧^{−1}/𝑇)((𝑧 − 1)/𝑠)² = ((𝑧 − 1)²/(𝑇𝑧))(1/𝑠²)

Now let us find the equivalent discrete transfer function:

𝑯(𝑧) = 𝒵{𝕃^{−1}{𝑯₁(𝑠)𝑯(𝑠)}|_{𝑡=𝑘𝑇}} = ((𝑧 − 1)²/(𝑇𝑧)) 𝒵{𝕃^{−1}{𝑯(𝑠)/𝑠²}|_{𝑡=𝑘𝑇}}
This is an acausal system in that the linear interpolation function moves toward the value
of the next sample before such sample is applied to the hypothetical FOH filter.

clear all, close all, clc, s = tf('s');


sys = (2*s+5)/(s^2+4*s+3); step(sys) % step function
hold on
T = 1; % sample time
sysd1 = c2d(sys, T, 'zoh');
sysd2 = c2d(sys, T, 'foh');
step(sysd1); step(sysd2)
grid on; hold off

❹ Zero-pole mapping: A very simple but effective method of obtaining a discrete equivalent of a continuous transfer function is found by extrapolating the relation between the s-domain and the z-domain. We have seen before that if the Laplace transform 𝑭(𝑠) of a continuous-time function 𝐟(𝑡) has a pole 𝑝𝑠, then the z-transform 𝑭(𝑧) of its sampled counterpart 𝐟(𝑘𝑇) has a pole at 𝑝𝑧 = 𝑒^{𝑝𝑠𝑇}, where 𝑇 is the sampling period.

𝕃{𝑒^{𝑝𝑠𝑡}} = 1/(𝑠 − 𝑝𝑠)  ⟺  𝒵{𝑒^{𝑝𝑠𝑡}|_{𝑡=𝑘𝑇}} = 𝒵{(𝑒^{𝑝𝑠𝑇})^𝑘} = 𝑧/(𝑧 − 𝑒^{𝑝𝑠𝑇})

𝑯(𝑠) = ∑_{𝑘=1}^{𝑛} 𝛼𝑘/(𝑠 − 𝑠𝑘)  ⟷  𝑯(𝑧) = ∑_{𝑘=1}^{𝑛} 𝛼𝑘 𝑧/(𝑧 − 𝑒^{𝑠𝑘𝑇})

Notice that 𝑝𝑧 = 𝑒 𝑝𝑠 𝑇 = 𝑒 𝜎𝑇 𝑒 𝑗𝜔𝑑 𝑇 = 𝑒 𝜎𝑇 𝑒 𝑗(𝜔𝑑 𝑇+2𝑘𝜋) . Thus, pole locations are a periodic function
of the damped natural frequency 𝜔𝑑 with period (𝜔𝑠 = 2𝜋/𝑇) . The mapping of distinct s-
domain poles to the same z-domain location is clearly undesirable in situations where a
sampled waveform is used to represent its continuous counterpart. The strip of width
𝜔𝑠 over which no such ambiguity occurs (frequencies in the range [(−𝜔𝑠 /2), 𝜔𝑠 /2] rad/s) is
known as the primary strip.
We know from equation 𝑝𝑧 = 𝑒 𝑝𝑠 𝑇 that discretization maps an s-plane pole at 𝑝𝑠 to a z-plane
pole at 𝑒 𝑝𝑠 𝑇 but that no rule exists for mapping zeros. In pole-zero matching, a discrete
approximation is obtained from an analog filter by mapping both poles and zeros using
𝑝𝑧 = 𝑒 𝑝𝑠 𝑇 . If the analog filter has 𝑛 poles and 𝑚 zeros, then we say that the filter has 𝑛 − 𝑚
zeros at infinity. For 𝑛 − 𝑚 zeros at infinity, we add 𝑛 − 𝑚 or 𝑛 − 𝑚 − 1 digital filter zeros at
unity. If the zeros are not added, it can be shown that the resulting system will include a
time delay. The second choice gives a strictly proper filter where the computation of the
output is easier, since it only requires values of the input at past sampling points. Finally,
we adjust the gain of the digital filter so that it is equal to that of the analog filter at a
critical frequency dependent on the filter.

The idea of pole-zero matching is to use the mapping 𝑝𝑧 = 𝑒 𝑝𝑠 𝑇 in order to determine the
zeros as well.

▪ Continuous finite poles and zeros are mapped at 𝑝𝑧 = 𝑒 𝑝𝑠 𝑇


▪ Infinite zeros 𝑠 = ∞ are mapped at 𝑧 = 1
▪ we adjust the DC-gain
𝑯(𝑠)|_{𝑠=0} = 𝑯(𝑧)|_{𝑧=1}      (low-pass filter)
𝑯(𝑠)|_{𝑠=∞} = 𝑯(𝑧)|_{𝑧=−1}      (high-pass filter)

Example: Find a pole-zero matched digital filter approximation for the analog filters. In
filters 𝑯1 , 𝑯2 & 𝑯4 consider low-pass freq range and in 𝑯3 consider high-pass freq range.

▪ 𝑯₁(𝑠) = 𝑎/(𝑠 + 𝑎) ⟺ 𝑯₁(𝑧) = 𝐾(𝑧 + 1)/(𝑧 − 𝑒^{−𝑎𝑇}), and 𝑯₁(𝑠)|_{𝑠=0} = 𝑯₁(𝑧)|_{𝑧=1} ⟹ 𝐾 = (1 − 𝑒^{−𝑎𝑇})/2

▪ 𝑯₂(𝑠) = (𝑠 + 𝛽)/(𝑠 + 𝛼) ⟺ 𝑯₂(𝑧) = 𝐾(𝑧 − 𝑒^{−𝛽𝑇})/(𝑧 − 𝑒^{−𝛼𝑇}), and 𝑯₂(𝑠)|_{𝑠=0} = 𝑯₂(𝑧)|_{𝑧=1} ⟹ 𝐾 = (𝛽/𝛼)((1 − 𝑒^{−𝛼𝑇})/(1 − 𝑒^{−𝛽𝑇}))

▪ 𝑯₃(𝑠) = 𝑠/(𝑠 + 𝛼) ⟺ 𝑯₃(𝑧) = 𝐾(𝑧 − 1)/(𝑧 − 𝑒^{−𝛼𝑇}), and 𝑯₃(𝑠)|_{𝑠=∞} = 𝑯₃(𝑧)|_{𝑧=−1} ⟹ 𝐾 = (1 + 𝑒^{−𝛼𝑇})/2

▪ 𝑯₄(𝑠) = 𝑠 + 𝛼 ⟺ 𝑯₄(𝑧) = 𝐾(𝑧 − 𝑒^{−𝛼𝑇})/(𝑧 + 1), and 𝑯₄(𝑠)|_{𝑠=0} = 𝑯₄(𝑧)|_{𝑧=1} ⟹ 𝐾 = 2𝛼/(1 − 𝑒^{−𝛼𝑇})
MATLAB Example: MATLAB Code for the Zero-pole mapping
𝜔𝑛2
𝑯(𝑠) = 2
𝑠 + 2𝜉𝜔𝑛 𝑠 + 𝜔𝑛2
clear all, close all, clc,
wn=5;zeta=0.5; % Undamped natural frequency, damping ratio
ga=tf([wn^2],[1,2*zeta*wn,wn^2]) % Analog transfer function
g=c2d(ga,0.1,'matched') % Transformation with a sampling period 0.1

❺ Impulse Invariant Method (Impulse Modulation): In this method, the impulse


response of an analog filter is uniformly sampled to obtain the impulse response of the
digital filter, and hence this method is called the impulse invariance method.

If we are given a continuous time impulse response 𝒉(𝑡), we can consider transforming it to
a discrete system with 𝒉[𝑘] consisting of equally spaced samples so that

𝒉𝑑 [𝑘] = 𝒉(𝑡)|𝑡=𝑘𝑇 = 𝒉[𝑘]

The transformation of 𝒉(𝑡) to 𝒉𝑑[𝑘] can be viewed as impulse modulation:

𝒉^⋆(𝑡) = ∑_{𝑘=0}^{∞} 𝒉(𝑘𝑇)𝛿(𝑡 − 𝑘𝑇)

𝑯(𝑧) = 𝑯^⋆(𝑠) = 𝕃{𝒉^⋆(𝑡)} = ∫_{−∞}^{∞} (∑_{𝑘=0}^{∞} 𝒉(𝑘𝑇)𝛿(𝑡 − 𝑘𝑇)) 𝑒^{−𝑠𝑡} 𝑑𝑡 = ∑_{𝑘=0}^{∞} ∫_{−∞}^{∞} 𝒉(𝑘𝑇)𝛿(𝑡 − 𝑘𝑇)𝑒^{−𝑠𝑡} 𝑑𝑡 = ∑_{𝑘=0}^{∞} 𝒉(𝑘𝑇)𝑒^{−𝑠𝑘𝑇}

If we define the mapping 𝑧 = 𝑒^{𝑠𝑇}, then we obtain 𝑯(𝑧) = ∑_{𝑘=0}^{∞} 𝒉(𝑘𝑇)𝑧^{−𝑘}, which is exactly the z-transform of 𝒉[𝑘]. In terms of a partial fraction expansion we can write

𝑯(𝑠) = ∑_{𝑘=1}^{𝑛} 𝐴𝑘/(𝑠 − 𝑝𝑘)  ⟷  𝑯(𝑧) = ∑_{𝑘=1}^{𝑛} 𝐴𝑘/(1 − 𝑒^{𝑝𝑘𝑇}𝑧^{−1})

This impulse invariant method can be extended for the case when the poles are not simple.

Example: Consider a continuous system with the transfer function:

𝑠+𝑏
𝑯(𝑠) =
(𝑠 + 𝑏)2 + 𝑐 2
Solution The inverse Laplace transform of 𝑯(𝑠) yields 𝒉(𝑡) = 𝑒 −𝑏𝑡 cos(𝑐𝑡) 𝑢(𝑡). Sampling 𝒉(𝑡)
with sampling period 𝑇, we get

𝒉[𝑘] = 𝑒^{−𝑏𝑘𝑇} cos(𝑐𝑘𝑇) 𝑢(𝑘𝑇)  ⟹  𝑯(𝑧) = ∑_{𝑘=0}^{∞} 𝑒^{−𝑏𝑘𝑇} cos(𝑐𝑘𝑇) 𝑧^{−𝑘}

⟹ 𝑯(𝑧) = ∑_{𝑘=0}^{∞} (1/2)𝑒^{−𝑏𝑘𝑇}𝑧^{−𝑘}(𝑒^{𝑗𝑐𝑘𝑇} + 𝑒^{−𝑗𝑐𝑘𝑇})

⟹ 𝑯(𝑧) = (1/2)[1/(1 − 𝑒^{−(𝑏−𝑗𝑐)𝑇}𝑧^{−1}) + 1/(1 − 𝑒^{−(𝑏+𝑗𝑐)𝑇}𝑧^{−1})]

𝑯(𝑧) = (1 − 𝑒^{−𝑏𝑇} cos(𝑐𝑇) 𝑧^{−1})/(1 − 2𝑒^{−𝑏𝑇} cos(𝑐𝑇) 𝑧^{−1} + 𝑒^{−2𝑏𝑇}𝑧^{−2})
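If the Signal Processing Toolbox is available, the example can be cross-checked with the impinvar function; the values 𝑏 = 1, 𝑐 = 2 and 𝑇 = 0.1 s below are assumptions for the test. Note that impinvar scales the impulse response by 1/𝑓𝑠, so the numerator must be rescaled by 𝑓𝑠 = 1/𝑇 before comparing with the closed-form coefficients derived above.

b = 1; c = 2; T = 0.1;              % assumed test values
num = [1 b]; den = [1 2*b b^2+c^2]; % H(s) = (s+b)/((s+b)^2 + c^2)
[bz, az] = impinvar(num, den, 1/T); % impulse-invariant equivalent
bz = bz/T;                          % undo the 1/fs scaling used by impinvar
% Closed-form coefficients from the derivation above:
b_ref = [1, -exp(-b*T)*cos(c*T)];
a_ref = [1, -2*exp(-b*T)*cos(c*T), exp(-2*b*T)];
disp(bz), disp(b_ref)               % should agree up to numerical precision
disp(az), disp(a_ref)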

Note: The impulse invariance method is appropriate only for band-limited filters, i.e., low-
pass and band-pass filters, but not suitable for high-pass or band-stop filters where
additional band limiting is required to avoid aliasing. Thus, there is a need for another
mapping method such as bilinear transformation technique which avoids aliasing.

❻ Digitalization in State Space: Digital systems, expressed previously as difference


equations or z-Transform transfer functions, can also be used with the state-space
representation. All the same techniques for dealing with analog systems can be applied to
digital systems with only minor changes.

If we have a continuous-time state equation: 𝐱̇ (𝑡) = 𝑨𝐱(𝑡) + 𝑩𝒖(𝑡) We can derive the digital
version of this equation that we discussed above. We take the Laplace transform of our
equation: 𝑿(𝑠) = (𝑠𝑰 − 𝑨)−1 𝐱(0) + (𝑠𝑰 − 𝑨)−1 𝑩𝑼(𝑠). Now, taking the inverse Laplace
transform gives us our time-domain system, keeping in mind that the inverse Laplace
transform of the (𝑠𝑰 − 𝑨)−1 term is our state-transition matrix,
𝑡
𝐱(𝑡) = 𝑒 𝑨(𝑡−𝑡0 ) 𝐱(𝑡0 ) + ∫ 𝑒 𝑨(𝑡−𝜏) 𝑩𝒖(𝜏)𝑑𝜏
𝑡0

Now, we apply a zero-order hold on our input, to make the system digital. Notice that we
set our start time 𝑡0 = 𝑘𝑇, because we are only interested in the behavior of our system
during a single sample period: 𝒖(𝑡) = 𝒖(𝑘𝑇) with 𝑘𝑇 ≤ 𝑡 ≤ (𝑘 + 1)𝑇.
𝑡
𝐱(𝑡) = 𝑒 𝑨(𝑡−𝑘𝑇) 𝐱(𝑘𝑇) + ∫ 𝑒 𝑨(𝑡−𝜏) 𝑩𝒖(𝑘𝑇)𝑑𝜏
𝑘𝑇
We were able to remove 𝒖(𝑘𝑇) from the integral because it does not depend on 𝜏.
𝑡
𝐱(𝑡) = 𝑒 𝑨(𝑡−𝑘𝑇) 𝐱(𝑘𝑇) + (∫ 𝑒 𝑨(𝑡−𝜏) 𝑩𝑑𝜏) 𝒖(𝑘𝑇)
𝑘𝑇
Let 𝑡 = (𝑘 + 1)𝑇 then
(𝑘+1)𝑇
𝐱[𝑘 + 1] = 𝑒 𝑨𝑇 𝐱[𝑘] + (∫ 𝑒 𝑨((𝑘+1)𝑇−𝜏) 𝑩𝑑𝜏) 𝒖[𝑘]
𝑘𝑇

Take the change of variable 𝛼 = (𝑘 + 1)𝑇 − 𝜏 we get:


𝑇
𝑨𝑇
𝐱[𝑘 + 1] = 𝑒 𝐱[𝑘] + (∫ 𝑒 𝑨𝛼 𝑑𝛼 ) 𝑩𝒖[𝑘]
0
𝑇
We now define new matrices 𝑨𝑑 and 𝑩𝑑 as follows: 𝑨𝑑 = 𝑒^{𝑨𝑇} and 𝑩𝑑 = (∫₀^𝑇 𝑒^{𝑨𝛼} 𝑑𝛼)𝑩, so that

𝐱[𝑘 + 1] = 𝑨𝑑𝐱[𝑘] + 𝑩𝑑𝒖[𝑘]

If the 𝑨 matrix is nonsingular, the integral can be evaluated in closed form as 𝑩𝑑 = (𝑒^{𝑨𝑇} − 𝑰)𝑨^{−1}𝑩; if in addition 𝑇 is small enough, we may use the first-order approximations 𝑨𝑑 = 𝑒^{𝑨𝑇} ≈ 𝑰 + 𝑨𝑇 and 𝑩𝑑 ≈ 𝑇𝑩.

𝒖(𝑡) → [ZOH] → [ 𝐱̇(𝑡) = 𝑨𝐱(𝑡) + 𝑩𝒖(𝑡),  𝐲(𝑡) = 𝑪𝐱(𝑡) + 𝑫𝒖(𝑡) ] → [A/D] → 𝐲(𝑡)

is equivalent to

𝒖[𝑘] → [ 𝐱[𝑘 + 1] = 𝑨𝑑𝐱[𝑘] + 𝑩𝑑𝒖[𝑘],  𝐲[𝑘] = 𝑪𝑑𝐱[𝑘] + 𝑫𝑑𝒖[𝑘] ] → 𝐲[𝑘]
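The exact ZOH formulas 𝑨𝑑 = 𝑒^{𝑨𝑇} and 𝑩𝑑 = (∫₀^𝑇 𝑒^{𝑨𝛼}𝑑𝛼)𝑩 can be evaluated directly with expm and a numerical integral, and checked against c2d; the matrices and sampling period in the sketch below are assumptions for the test.

A = [0 1; -1 -2]; B = [0; 1]; C = [1 1]; D = 0; T = 0.1;  % assumed system
Ad = expm(A*T);                                           % Ad = e^(A*T)
Bd = integral(@(a) expm(A*a), 0, T, 'ArrayValued', true) * B;
sysd = c2d(ss(A,B,C,D), T, 'zoh');                        % reference discretization
norm(Ad - sysd.A)                                         % should be ~0
norm(Bd - sysd.B)                                         % should be ~0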

In the next table we summarize the most common numerical approximation methods used to convert a continuous-time state space model into a discrete-time one (stated without proof).

Forward method:   𝑨𝑑 = 𝑰 + 𝑨𝑇;  𝑩𝑑 = 𝑩𝑇;  𝑪𝑑 = 𝑪;  𝑫𝑑 = 𝑫.  Stability is not ensured.
Backward method:  𝑨𝑑 = (𝑰 − 𝑨𝑇)^{−1};  𝑩𝑑 = 𝑇(𝑰 − 𝑨𝑇)^{−1}𝑩;  𝑪𝑑 = 𝑪(𝑰 − 𝑨𝑇)^{−1};  𝑫𝑑 = 𝑫 + 𝑇𝑪(𝑰 − 𝑨𝑇)^{−1}𝑩.  This method guarantees stability.
Bilinear method:  𝑨𝑑 = (𝑰 + 𝑨𝑇/2)(𝑰 − 𝑨𝑇/2)^{−1};  𝑩𝑑 = 𝑇(𝑰 − 𝑨𝑇/2)^{−1}𝑩;  𝑪𝑑 = 𝑪(𝑰 − 𝑨𝑇/2)^{−1};  𝑫𝑑 = 𝑫 + 𝑇𝑪(𝑰 − 𝑨𝑇/2)^{−1}𝑩.  Stability is preserved in both directions.
Remarks: ➳ To first order in 𝑇, the zero-order hold in state space reduces to the forward method, since 𝑨𝑑 = 𝑒^{𝑨𝑇} ≈ 𝑰 + 𝑨𝑇 and 𝑩𝑑 ≈ 𝑇𝑩.

➳ The sampling process is just a mapping between the s-plane and z-plane 𝑧 = 𝐟(𝑠)

➳ Strip of the s-plane is mapped into the whole z-plane. There are infinitely many values of
𝑠 for each value of 𝑧. It is possible to partition the s-plane into horizontal strips of width 𝜔𝑠 .

𝑧 = 𝑒^{𝑠𝑇} = 𝑒^{𝜎𝑇}𝑒^{𝑗𝜔𝑇} = 𝑟𝑒^{𝑗𝜃}

Strip of the s-plane: −𝜋/𝑇 ≤ 𝜔 ≤ 𝜋/𝑇, −∞ < 𝜎 < ∞  ⟺  the entire z-plane: −𝜋 ≤ 𝜃 ≤ 𝜋, 0 ≤ 𝑟 < ∞

➳ The 𝑗𝜔-axis of s-plane is mapped into the unit circle in z-plane. 𝑧 = 𝑒 𝑗𝜔𝑇 ⟹ |𝑧| = 1

➳ Eigenvalues of the discretized state space model: the eigenvalues 𝜆𝑐(𝑨) of the continuous-time system are mapped to the discrete-time domain under ZOH as 𝜆𝑑(𝑨𝑑) = 𝑒^{𝜆𝑐(𝑨)𝑇}. Now we put 𝜆𝑐(𝑨) = 𝜎 + 𝑗𝜔, hence 𝜆𝑑(𝑨𝑑) = 𝑒^{𝜎𝑇}𝑒^{𝑗𝜔𝑇} ⟹ if 𝜎 < 0 then |𝜆𝑑| < 1.
➳ The (LHP) left half plane is mapped inside the unit circle, while the (RHP) right half plane
is mapped outside the unit circle. (LHP) ⟹ 𝜎 < 0 ⟹ |𝑧| < 1 and (RHP) ⟹ 𝜎 > 0 ⟹ |𝑧| > 1

V. Discretization of Closed Loop System: We consider closed-loop systems, in which the


output of the plant is fed back to the controller, giving the latter a notion of the effect of its
actions. A simple closed-loop control system is depicted in Figure below. It consists of a
controller and a plant. In digital control systems, the controller operation is implemented
digitally. In the majority of control applications, however, the plant is an analog system.
Digital control then results in hybrid systems, i.e., systems having both continuous-time
and discrete-time parts. Such systems are also referred to as sampled data systems (SDS).
Here the controller consists of three subsystems: a sampler, a digital controller and a so-
called hold device.
The hold device converts a discrete-time signal to a continuous-time signal (D/A). In other
words, it implements a reconstruction operation as defined before. A commonly used hold
device is the zero-order hold (ZOH), which keeps the output constant at the value of the
last input-sample until the next sample becomes available. A drawback of digital control is
that aliasing may occur. The continuous-time input of a controller may be affected by high-frequency noise components, which turn into low-frequency components after sampling.
The controller will base its control strategy upon the disturbed signal and might drive the
plant into an undesired state. In many designs, the sampler is preceded by an anti-alias
filter to avoid the effects of aliasing. The scheme in Figure above shows some resemblance
to the general sampling scheme, certainly if the sampler is preceded by an anti-alias filter.

The pulse transfer function is the equivalent discrete transfer function; it is useful when we deal with interconnected systems in the presence of sampling devices.
Note that the presence of samplers complicates the algebra of block diagrams, since the
existence and expression of any input-output function depend on the number and location
of the samplers.

The pulse transfer function- Block diagrams


Remark: The pulse transfer function 𝑯𝑮(𝑧) is not the same as 𝑯(𝑧)𝑮(𝑧) (i.e., 𝑯𝑮(𝑧) ≠ 𝑯(𝑧)𝑮(𝑧)).

Example: Determine the (PTF) pulse transfer function for the given continuous systems:

𝑯(𝑠) = 1/(𝑠 + 𝑎)  and  𝑮(𝑠) = 1/(𝑠 + 𝑏)
Solution: Here the PTF is obtained by the use of the impulse modulation method

𝑯(𝑧) = 𝒵{𝕃^{−1}{𝑯(𝑠)}|_{𝑡=𝑘𝑇}} = 𝒵{𝑒^{−𝑎𝑡}|_{𝑡=𝑘𝑇}} = 𝒵{(𝑒^{−𝑎𝑇})^𝑘} = ∑_{𝑘=0}^{∞} (𝑒^{−𝑎𝑇})^𝑘 𝑧^{−𝑘} = 1/(1 − 𝑒^{−𝑎𝑇}𝑧^{−1})

𝑮(𝑧) = 𝒵{𝕃^{−1}{𝑮(𝑠)}|_{𝑡=𝑘𝑇}} = 𝒵{𝑒^{−𝑏𝑡}|_{𝑡=𝑘𝑇}} = 𝒵{(𝑒^{−𝑏𝑇})^𝑘} = ∑_{𝑘=0}^{∞} (𝑒^{−𝑏𝑇})^𝑘 𝑧^{−𝑘} = 1/(1 − 𝑒^{−𝑏𝑇}𝑧^{−1})

Therefore the discretized transfer functions are given by:

𝑯(𝑧) = 1/(1 − 𝑒^{−𝑎𝑇}𝑧^{−1}),  𝑮(𝑧) = 1/(1 − 𝑒^{−𝑏𝑇}𝑧^{−1})

𝑯(𝑧)𝑮(𝑧) = (1/(1 − 𝑒^{−𝑎𝑇}𝑧^{−1}))(1/(1 − 𝑒^{−𝑏𝑇}𝑧^{−1}))

But

𝑯𝑮(𝑧) = 𝒵{𝕃^{−1}{(1/(𝑠 + 𝑎))(1/(𝑠 + 𝑏))}|_{𝑡=𝑘𝑇}} = 𝒵{𝕃^{−1}{(1/(𝑏 − 𝑎))(1/(𝑠 + 𝑎) − 1/(𝑠 + 𝑏))}|_{𝑡=𝑘𝑇}}
      = (1/(𝑏 − 𝑎))(1/(1 − 𝑒^{−𝑎𝑇}𝑧^{−1}) − 1/(1 − 𝑒^{−𝑏𝑇}𝑧^{−1}))
      = (1/(𝑏 − 𝑎))[(𝑒^{−𝑎𝑇} − 𝑒^{−𝑏𝑇})𝑧^{−1}/((1 − 𝑒^{−𝑎𝑇}𝑧^{−1})(1 − 𝑒^{−𝑏𝑇}𝑧^{−1}))] ≠ 𝑯(𝑧)𝑮(𝑧)
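The inequality 𝑯𝑮(𝑧) ≠ 𝑯(𝑧)𝑮(𝑧) is easy to see numerically. The sketch below builds both candidates from the formulas above and compares their impulse responses; the values 𝑎 = 1, 𝑏 = 2 and 𝑇 = 0.5 s are assumed test values.

a = 1; b = 2; T = 0.5; z = tf('z', T);   % assumed test values
Hz  = 1/(1 - exp(-a*T)/z);               % H(z) from the formula above
Gz  = 1/(1 - exp(-b*T)/z);               % G(z) from the formula above
HGz = ((exp(-a*T)-exp(-b*T))/(b-a)) * (1/z) / ...
      ((1 - exp(-a*T)/z)*(1 - exp(-b*T)/z));   % HG(z)
impulse(Hz*Gz, HGz); grid on             % the two responses differ
legend('H(z)G(z)', 'HG(z)');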
Example: Determine the pulse transfer function for the given schematic diagram:

Solution:

𝑬(𝑠) = 𝑹(𝑠) − 𝑯(𝑠)𝒀(𝑠)
𝒀(𝑠) = 𝑮(𝑠)𝑬^⋆(𝑠)
⟹ 𝑬(𝑠) = 𝑹(𝑠) − 𝑯(𝑠)𝑮(𝑠)𝑬^⋆(𝑠)

Starring (impulse-sampling) these relations gives

𝑬^⋆(𝑠) = 𝑹^⋆(𝑠) − [𝑯(𝑠)𝑮(𝑠)]^⋆𝑬^⋆(𝑠)
𝒀^⋆(𝑠) = 𝑮^⋆(𝑠)𝑬^⋆(𝑠)
⟹ 𝒀^⋆(𝑠) = 𝑮^⋆(𝑠)𝑹^⋆(𝑠)/(1 + [𝑯(𝑠)𝑮(𝑠)]^⋆)  ⟹  𝒀(𝑧)/𝑹(𝑧) = 𝑮(𝑧)/(1 + 𝑯𝑮(𝑧))

Example: Determine the (PTF) pulse transfer function for the given continuous systems:

10(𝑠 + 2)
𝑯(𝑠) =
𝑠(𝑠 + 1)2

Solution: Here the PTF is obtained by the use of the impulse modulation method:

𝑯(𝑧) = 𝒵{𝕃^{−1}{𝑯(𝑠)}|_{𝑡=𝑘𝑇}} = 𝒵{𝕃^{−1}{𝛼₁/𝑠 + 𝛼₂/(𝑠 + 1)² + 𝛼₃/(𝑠 + 1)}|_{𝑡=𝑘𝑇}}

𝛼₁ = 𝑠𝑯(𝑠)|_{𝑠=0} = 20,  𝛼₂ = (𝑠 + 1)²𝑯(𝑠)|_{𝑠=−1} = −10,  𝛼₃ = (𝑑/𝑑𝑠)((𝑠 + 1)²𝑯(𝑠))|_{𝑠=−1} = −20

Hence

𝑯(𝑧) = 𝒵{𝕃^{−1}{20/𝑠 − 10/(𝑠 + 1)² − 20/(𝑠 + 1)}|_{𝑡=𝑘𝑇}} = 𝒵{(20 − 20𝑒^{−𝑡} − 10𝑡𝑒^{−𝑡})𝑢(𝑡)|_{𝑡=𝑘𝑇}} = 𝒵{(20 − 20𝑒^{−𝑘𝑇} − 10𝑘𝑇𝑒^{−𝑘𝑇})𝑢(𝑘𝑇)}

In this example it is worth recalling that the z-transforms of 𝑘𝑒^{−𝑘𝑇}𝑢(𝑘𝑇) and (𝑘𝑇)²𝑢(𝑘𝑇) are obtained by using 𝒵{𝑘𝐱[𝑘]} = −𝑧(𝑑𝑿(𝑧)/𝑑𝑧):

𝑇𝑘𝑒^{−𝑘𝑇}𝑢(𝑘𝑇) ⟷ 𝑇𝑒^{−𝑇}𝑧^{−1}/(1 − 𝑒^{−𝑇}𝑧^{−1})²  and  (𝑘𝑇)²𝑢(𝑘𝑇) ⟷ 𝑇²𝑧^{−1}(1 + 𝑧^{−1})/(1 − 𝑧^{−1})³

Now we can obtain our PTF:

𝑯(𝑧) = 𝒵{(20 − 20𝑒^{−𝑘𝑇} − 10𝑘𝑇𝑒^{−𝑘𝑇})𝑢(𝑘𝑇)} = 20/(1 − 𝑧^{−1}) − 20/(1 − 𝑒^{−𝑇}𝑧^{−1}) − 10𝑇𝑒^{−𝑇}𝑧^{−1}/(1 − 𝑒^{−𝑇}𝑧^{−1})²

Solved Problems:
Exercise: 01 Let the analog signal be represented as

𝐱(𝑡) = 3 cos(50𝜋𝑡) + 2 sin(300𝜋𝑡) − 4 cos(100𝜋𝑡)

What is the Nyquist rate for this signal? If the signal is sampled with a sampling frequency
of 200 Hz, what will be the DT signal obtained after sampling? What will be the recovered
signal?

Solution: The signal contains three frequencies, namely, 25 Hz, 150 Hz and 50 Hz. The
highest frequency is 150 Hz. The recommended rate of sampling is 300 Hz (2×150 Hz).

If the signal is sampled at a frequency of 200 Hz, substituting 𝑡 = 𝑘𝑇𝑠 = 𝑘/𝑓𝑠, we get

𝐱[𝑘𝑇𝑠] = 3 cos(50𝜋𝑘/200) + 2 sin(300𝜋𝑘/200) − 4 cos(100𝜋𝑘/200)

Now, the discrete-time signal obtained after sampling has only two digital frequencies, namely 𝑓₁ = 1/8 and 𝑓₂ = 1/4:

𝐱[𝑘] = 3 cos(𝜋𝑘/4) + 2 sin(3𝜋𝑘/2) − 4 cos(𝜋𝑘/2) = 3 cos(𝜋𝑘/4) − 2 sin(𝜋𝑘/2) − 4 cos(𝜋𝑘/2)

We see that the 150 Hz signal is aliased as 200-150 Hz, that is, 50 Hz with 180°phase
shift. The recovered signal contains only two frequencies, namely, 25 Hz and 50 Hz,
whereas the original signal had three frequencies 25 Hz, 150 Hz and 50 Hz.
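The following sketch visualizes this aliasing: the samples of the original signal coincide with those of the "recovered" signal, in which the 150 Hz component appears at 50 Hz with reversed sign. It is a minimal plotting example built from the signals of this exercise.

fs = 200; Ts = 1/fs;                  % given sampling rate
t = 0:1e-4:0.08;                      % dense grid for the analog signals
k = 0:round(0.08/Ts);                 % sample indices
x  = @(t) 3*cos(50*pi*t) + 2*sin(300*pi*t) - 4*cos(100*pi*t);   % original
xr = @(t) 3*cos(50*pi*t) - 2*sin(100*pi*t) - 4*cos(100*pi*t);   % recovered
plot(t, x(t), 'b-', t, xr(t), 'g--'); hold on
stem(k*Ts, x(k*Ts), 'r'); grid on; hold off   % samples match both signals
legend('original', 'recovered (aliased)', 'samples'); xlabel('t (s)');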

Exercise: 02 A 1-kHz sinusoidal signal is sampled at 𝑡1 = 0 and 𝑡2 = 250 𝜇s. The sample
values are 𝑥1 = 0 and 𝑥2 = −1, respectively. Find the signal’s amplitude and phase.

Solution: The general form of the analog signal is 𝐱(𝑡) = 𝐴 cos(2000𝜋𝑡 + 𝜃).

At 𝑡₁: 𝐴 cos(𝜃) = 0, and at 𝑡₂: 𝐴 cos(𝜋/2 + 𝜃) = −𝐴 sin(𝜃) = −1. The answer: 𝐴 = 1, 𝜃 = 𝜋/2.

Exercise: 03 A continuous-time sinusoidal signal 𝐱(𝑡) = cos(2𝜋f₀𝑡) with unknown f₀ is sampled at the rate of 1,000 samples per second, resulting in the discrete-time signal 𝐱[𝑘] = cos(4𝜋𝑘/5). Find f₀.

Solution: The sampling period is 𝑇𝑠 = 10^{−3} s.

cos([2/5 + 𝑚]2𝜋𝑘) = 𝐱[𝑘] = 𝐱(𝑡)|_{𝑡=𝑘𝑇𝑠} = cos(2𝜋f₀𝑘10^{−3})  ⟹  2𝜋f₀/1000 = 4𝜋/5 + 2𝜋𝑚  ⟹  f₀ = (10³𝑚 + 400) Hz, with 𝑚 = 0, 1, 2, …


Exercise: 04 A continuous-time periodic signal 𝐱(𝑡) with unknown period is sampled at the rate of 1,000 samples per second, resulting in the discrete-time signal 𝐱[𝑘] = 𝐴 cos(4𝜋𝑘/5). Can you conclude with certainty that 𝐱(𝑡) is a single sinusoid?

Solution: The answer is no. Sampling the sum of two sinusoids with frequencies 400 and 1,400 Hz at the 1-kHz rate produces the same 𝐱[𝑘] = 𝐴 cos(4𝜋𝑘/5).

Exercise: 05 Consider the system shown in the figure. If we have 𝑿₁(𝜔) = 0 for |𝜔| > 100𝜋 s^{−1} and 𝑿₂(𝜔) = 0 for |𝜔| > 200𝜋 s^{−1}, determine the maximum sampling period that can be used to avoid aliasing when sampling the signal 𝐲(𝑡) = 𝐱₁(𝑡)𝐱₂(𝑡).

Solution: In the frequency domain we have 𝒀(𝜔) = 𝑿₁(𝜔) ⋆ 𝑿₂(𝜔)/2𝜋 ⟹ the signal 𝒀(𝜔) has bandwidth 𝜔₁ + 𝜔₂; to avoid aliasing we require 𝜔𝑠 ≥ 2(𝜔₁ + 𝜔₂):

2𝜋/𝑇𝑠 ≥ 2(𝜔₁ + 𝜔₂)  ⟹  𝑇𝑠max = 2𝜋/(2(𝜔₁ + 𝜔₂)) = 2𝜋/(600𝜋) = 1/300 s ≈ 3.333 ms

Exercise: 06 What should be the maximum value of the sampling period to be able to
recover the signal 𝐱(𝑡) = sin(3𝜋𝑡) /𝜋𝑡 from its samples 𝐱(𝑘𝑇𝑠 )?
Solution: In the frequency domain we have 𝑿(𝜔) = 1 for |𝜔| < 3𝜋 and 𝑿(𝜔) = 0 for |𝜔| ≥ 3𝜋. This is a bandlimited signal with cutoff frequency 𝜔𝑚 = 3𝜋 ⟹ 𝜔𝑠 ≥ 2(3𝜋) ⟹ 𝑇𝑠 ≤ 1/3 s; in other words, 𝑇𝑠max = 1/3 s.

Exercise: 07 Consider the signal 𝐱(𝑡) = cos(𝜔₀𝑡) with f₀ = 5 kHz. The signal is sampled at a frequency of f𝑠 = 8 kHz. Can we recover the signal from its samples? And what is the frequency of the recovered signal?

Solution: We cannot recover this signal from its samples because f𝑠 < 2f₀. There exists an alias with frequency f = f𝑠 − f₀ = 3 kHz. The recovered signal is 𝐱(𝑡) = cos((𝜔𝑠 − 𝜔₀)𝑡), where 𝜔𝑠 − 𝜔₀ = 6𝜋 × 10³ rad/s.

Exercise: 08 consider the impulse response of a continuous LTI system 𝐡(𝑡) = cos(𝜔0 𝑡).
Find its corresponding pulse transfer function 𝑯(𝑧) using the impulse invariant method.

Solution: We know that

𝐡(𝑡) = cos(𝜔₀𝑡) ↔ 𝑯(𝑠) = 𝑠/(𝑠² + 𝜔₀²)

𝑯(𝑧) = 𝒵{𝕃^{−1}{𝑯(𝑠)}|_{𝑡=𝑘𝑇𝑠}} = 𝒵{𝐡(𝑡)|_{𝑡=𝑘𝑇𝑠}} = 𝒵{(1/2)(𝑒^{𝑗𝜔₀𝑘𝑇𝑠} + 𝑒^{−𝑗𝜔₀𝑘𝑇𝑠})}

𝑯(𝑧) = (1/2) ∑_{𝑘=0}^{∞} 𝑒^{𝑗𝜔₀𝑘𝑇𝑠}𝑧^{−𝑘} + (1/2) ∑_{𝑘=0}^{∞} 𝑒^{−𝑗𝜔₀𝑘𝑇𝑠}𝑧^{−𝑘} = (1/2)(1/(1 − 𝑒^{𝑗𝜔₀𝑇𝑠}𝑧^{−1}) + 1/(1 − 𝑒^{−𝑗𝜔₀𝑇𝑠}𝑧^{−1}))

     = (1 − cos(𝜔₀𝑇𝑠)𝑧^{−1})/(1 − 2cos(𝜔₀𝑇𝑠)𝑧^{−1} + 𝑧^{−2})

If 𝐡(𝑡) = sin(𝜔₀𝑡) we can follow the same procedure to get

𝑯(𝑧) = sin(𝜔₀𝑇𝑠)𝑧^{−1}/(1 − 2cos(𝜔₀𝑇𝑠)𝑧^{−1} + 𝑧^{−2})

Exercise: 09 Find the ZOH equivalent to the given transfer function

𝑯(𝑠) = 1/(𝑠 − 𝑎)
Solution: We know that

𝑯(𝑧) = (1 − 𝑧^{−1}) 𝒵{𝕃^{−1}{𝑯(𝑠)/𝑠}|_{𝑡=𝑘𝑇𝑠}} = (1 − 𝑧^{−1}) 𝒵{𝕃^{−1}{(1/𝑎)(1/(𝑠 − 𝑎) − 1/𝑠)}|_{𝑡=𝑘𝑇𝑠}}
     = ((1 − 𝑧^{−1})/𝑎) 𝒵{(𝑒^{𝑎𝑡} − 1)𝑢(𝑡)|_{𝑡=𝑘𝑇𝑠}}
     = ((1 − 𝑧^{−1})/𝑎) 𝒵{(𝑒^{𝑎𝑘𝑇𝑠} − 1)𝑢(𝑘𝑇𝑠)}
     = (1/𝑎)((1 − 𝑧^{−1})/(1 − 𝑒^{𝑎𝑇𝑠}𝑧^{−1}) − 1) = (𝑒^{𝑎𝑇𝑠} − 1)𝑧^{−1}/(𝑎(1 − 𝑒^{𝑎𝑇𝑠}𝑧^{−1}))

Remark: 𝑧 = 𝑒 𝑎𝑇𝑠 is a pole of the new equivalent system, so the sampling period is a design
parameter in digital control, and can alter the stability of the system.

Exercise: 10 Determine the ZOH discrete time equivalent of the state space

𝐱̇(𝑡) = [ 0 1 ; −1 0 ]𝐱(𝑡) + [ 0 ; 1 ]𝐮(𝑡),  𝐲(𝑡) = [1 1]𝐱(𝑡)
Solution: We know 𝑨𝑑 = 𝑒^{𝑨𝑇𝑠} and 𝑩𝑑 = ∫₀^{𝑇𝑠} 𝑒^{𝑨𝜏}𝑩𝑑𝜏, but what is 𝑒^{𝑨𝑡}?

𝑒^{𝑨𝑡} = 𝕃^{−1}{(𝑠𝑰 − 𝑨)^{−1}} = 𝕃^{−1}{[ 𝑠  −1 ; 1  𝑠 ]^{−1}} = 𝕃^{−1}{(1/(𝑠² + 1))[ 𝑠  1 ; −1  𝑠 ]} = [ cos(𝑡)  sin(𝑡) ; −sin(𝑡)  cos(𝑡) ]

𝑨𝑑 = 𝑒^{𝑨𝑇𝑠} = [ cos(𝑇𝑠)  sin(𝑇𝑠) ; −sin(𝑇𝑠)  cos(𝑇𝑠) ]

𝑩𝑑 = ∫₀^{𝑇𝑠} 𝑒^{𝑨𝜏}𝑩𝑑𝜏 = ∫₀^{𝑇𝑠} [ cos(𝜏)  sin(𝜏) ; −sin(𝜏)  cos(𝜏) ][ 0 ; 1 ]𝑑𝜏 = ∫₀^{𝑇𝑠} [ sin(𝜏) ; cos(𝜏) ]𝑑𝜏 = [ 1 − cos(𝑇𝑠) ; sin(𝑇𝑠) ]

𝑯(𝑧) = 𝑪𝑑(𝑧𝑰 − 𝑨𝑑)^{−1}𝑩𝑑 = [1 1][ 𝑧 − cos(𝑇𝑠)  −sin(𝑇𝑠) ; sin(𝑇𝑠)  𝑧 − cos(𝑇𝑠) ]^{−1}[ 1 − cos(𝑇𝑠) ; sin(𝑇𝑠) ]
     = ((1 − cos(𝑇𝑠) + sin(𝑇𝑠))𝑧 + (1 − cos(𝑇𝑠) − sin(𝑇𝑠)))/(𝑧² − 2cos(𝑇𝑠)𝑧 + 1)
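The result can be cross-checked with c2d on a state-space model; the sampling period 𝑇𝑠 = 0.5 s in the sketch below is an assumed test value.

Ts = 0.5;                                          % assumed test value
sysd = c2d(ss([0 1; -1 0], [0; 1], [1 1], 0), Ts, 'zoh');
Ad_ref = [cos(Ts) sin(Ts); -sin(Ts) cos(Ts)];      % closed-form Ad
Bd_ref = [1-cos(Ts); sin(Ts)];                     % closed-form Bd
norm(sysd.A - Ad_ref), norm(sysd.B - Bd_ref)       % both should be ~0
tf(sysd)                                           % compare with H(z) above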
Exercise: 11 Continous system described by its state space equations

𝐱̇(𝑡) = [ 0 1 ; −1 −2 ]𝐱(𝑡) + [ 0 ; 1 ]𝐮(𝑡),  𝐲(𝑡) = [1 1]𝐱(𝑡)

Determine the ZOH discrete-time equivalent of this state space, and then determine 𝑯(𝑧).

Solution: We know 𝑨𝑑 = 𝑒^{𝑨𝑇𝑠} and 𝑩𝑑 = ∫₀^{𝑇𝑠} 𝑒^{𝑨𝜏}𝑩𝑑𝜏, but what is 𝑒^{𝑨𝑡}?

𝑒^{𝑨𝑡} = 𝕃^{−1}{(𝑠𝑰 − 𝑨)^{−1}} = 𝕃^{−1}{[ 𝑠  −1 ; 1  𝑠 + 2 ]^{−1}} = 𝕃^{−1}{(1/(𝑠 + 1)²)[ 𝑠 + 2  1 ; −1  𝑠 ]} = [ (1 + 𝑡)𝑒^{−𝑡}  𝑡𝑒^{−𝑡} ; −𝑡𝑒^{−𝑡}  (1 − 𝑡)𝑒^{−𝑡} ]

𝑨𝑑 = 𝑒^{𝑨𝑇𝑠} = [ (1 + 𝑇𝑠)𝑒^{−𝑇𝑠}  𝑇𝑠𝑒^{−𝑇𝑠} ; −𝑇𝑠𝑒^{−𝑇𝑠}  (1 − 𝑇𝑠)𝑒^{−𝑇𝑠} ]

𝑩𝑑 = ∫₀^{𝑇𝑠} 𝑒^{𝑨𝜏}𝑩𝑑𝜏 = ∫₀^{𝑇𝑠} [ 𝜏𝑒^{−𝜏} ; (1 − 𝜏)𝑒^{−𝜏} ]𝑑𝜏 = [ 1 − (1 + 𝑇𝑠)𝑒^{−𝑇𝑠} ; 𝑇𝑠𝑒^{−𝑇𝑠} ]

Let us compute the transfer function 𝑯(𝑧). Instead of forming 𝑪𝑑(𝑧𝑰 − 𝑨𝑑)^{−1}𝑩𝑑 directly, we may use the ZOH equivalence

𝑯(𝑧) = 𝑪𝑑(𝑧𝑰 − 𝑨𝑑)^{−1}𝑩𝑑 = (1 − 𝑧^{−1}) 𝒵{𝕃^{−1}{𝑯(𝑠)/𝑠}|_{𝑡=𝑘𝑇𝑠}}

𝑯(𝑠) = 𝑪(𝑠𝑰 − 𝑨)^{−1}𝑩 = ([1 1]/(𝑠 + 1)²)[ 𝑠 + 2  1 ; −1  𝑠 ][ 0 ; 1 ] = 1/(𝑠 + 1)

Therefore,

𝑯(𝑧) = (1 − 𝑧^{−1}) 𝒵{𝕃^{−1}{1/𝑠 − 1/(𝑠 + 1)}|_{𝑡=𝑘𝑇𝑠}} = (1 − 𝑧^{−1})(1/(1 − 𝑧^{−1}) − 1/(1 − 𝑒^{−𝑇𝑠}𝑧^{−1}))
     = 1 − (1 − 𝑧^{−1})/(1 − 𝑒^{−𝑇𝑠}𝑧^{−1}) = (1 − 𝑒^{−𝑇𝑠})𝑧^{−1}/(1 − 𝑒^{−𝑇𝑠}𝑧^{−1})

Exercise: 12 Consider the transfer function of a continuous LTI system 𝑯(𝑠) = 1/(𝑠² + 1). Find its corresponding pulse transfer function 𝑯(𝑧) using the ZOH method.

Solution: We know that

𝑯(𝑧) = (1 − 𝑧^{−1}) 𝒵{𝕃^{−1}{1/(𝑠(𝑠² + 1))}|_{𝑡=𝑘𝑇𝑠}} = (1 − 𝑧^{−1}) 𝒵{𝕃^{−1}{1/𝑠 − 𝑠/(𝑠² + 1)}|_{𝑡=𝑘𝑇𝑠}}
     = (1 − 𝑧^{−1}) 𝒵{(1 − cos(𝑘𝑇𝑠))𝑢(𝑘𝑇𝑠)} = 1 − (1 − 𝑧^{−1})(1 − cos(𝑇𝑠)𝑧^{−1})/(1 − 2cos(𝑇𝑠)𝑧^{−1} + 𝑧^{−2})

𝑯(𝑧) = (1 − 𝑧^{−1})(cos(𝑇𝑠)𝑧^{−1} − 1)/(1 − 2cos(𝑇𝑠)𝑧^{−1} + 𝑧^{−2}) + 1
1 − 2 cos(𝑇𝑠 ) 𝑧 −1 + 𝑧 −2
Exercise: 13 Find the ZOH equivalent to the given transfer function

𝑯(𝑠) = (𝑠 + 1)/(𝑠² + 1)
Solution: We know that

𝑯(𝑠)/𝑠 = (𝑠 + 1)/(𝑠(𝑠² + 1)) = 1/𝑠 + 1/(𝑠² + 1) − 𝑠/(𝑠² + 1) ↔ (1 + sin(𝑡) − cos(𝑡))𝑢(𝑡)

𝑯(𝑧) = (1 − 𝑧^{−1}) 𝒵{(1 + sin(𝑘𝑇𝑠) − cos(𝑘𝑇𝑠))𝑢(𝑘𝑇𝑠)}
     = (1 − 𝑧^{−1}){1/(1 − 𝑧^{−1}) + (−𝑧² + (sin(𝑇𝑠) + cos(𝑇𝑠))𝑧)/(𝑧² − 2cos(𝑇𝑠)𝑧 + 1)}

Exercise: 14 Find the ZOH equivalent to the given impulse response

𝐡(𝑡) = (𝑡 + 1)𝑒 𝑡 𝑢(𝑡)

What are the poles and zeros of the equivalent discrete time system?

Solution: We know that

𝐡(𝑡) = (𝑡 + 1)𝑒^{𝑡}𝑢(𝑡) ↔ 𝑯(𝑠) = 1/(𝑠 − 1)² + 1/(𝑠 − 1) = 𝑠/(𝑠 − 1)²

Hence, we obtain

𝑯(𝑠)/𝑠 = 1/(𝑠 − 1)² ↔ 𝑡𝑒^{𝑡}𝑢(𝑡)

𝑯(𝑧) = (1 − 𝑧^{−1}) 𝒵{(𝑘𝑇𝑠𝑒^{𝑘𝑇𝑠})𝑢(𝑘𝑇𝑠)} = 𝑇𝑠((𝑧 − 1)/𝑧) ∑_{𝑘=0}^{∞} 𝑘(𝑒^{𝑇𝑠}/𝑧)^𝑘 = 𝑇𝑠𝑒^{𝑇𝑠}(𝑧 − 1)/(𝑧 − 𝑒^{𝑇𝑠})²

We have a double finite pole at 𝑧 = 𝑒^{𝑇𝑠} and a single zero at 𝑧 = 1. Since 𝑒^{𝑇𝑠} > 1 for any 𝑇𝑠 > 0, the pole lies outside the unit circle, so the equivalent discrete-time system is unstable, just like the analog filter. Note also that the analog filter is not bandlimited; it is therefore preferable to use an antialiasing filter in cascade with the sampler to avoid the problem of aliasing.

Exercise: 15 Determine the ZOH equivalent to the given transfer function

𝑯(𝑠) = 1/((𝑠 + 1)(𝑠 + 2))
Solution: We know that

𝑯(𝑧) = (1 − 𝑧^{−1}) 𝒵{𝕃^{−1}{1/(𝑠(𝑠 + 1)(𝑠 + 2))}|_{𝑡=𝑘𝑇𝑠}} = (1 − 𝑧^{−1}) 𝒵{𝕃^{−1}{(1/2)/𝑠 − 1/(𝑠 + 1) + (1/2)/(𝑠 + 2)}|_{𝑡=𝑘𝑇𝑠}}

𝑯(𝑧) = (1 − 𝑧^{−1}){(1/2)/(1 − 𝑧^{−1}) − 1/(1 − 𝑒^{−𝑇𝑠}𝑧^{−1}) + (1/2)/(1 − 𝑒^{−2𝑇𝑠}𝑧^{−1})}
     = 1/2 − (1 − 𝑧^{−1})/(1 − 𝑒^{−𝑇𝑠}𝑧^{−1}) + (1/2)((1 − 𝑧^{−1})/(1 − 𝑒^{−2𝑇𝑠}𝑧^{−1}))
     = (1/2)[(1 − 2𝑒^{−𝑇𝑠} + 𝑒^{−2𝑇𝑠})𝑧^{−1} + (𝑒^{−𝑇𝑠} − 2𝑒^{−2𝑇𝑠} + 𝑒^{−3𝑇𝑠})𝑧^{−2}]/[(1 − 𝑒^{−𝑇𝑠}𝑧^{−1})(1 − 𝑒^{−2𝑇𝑠}𝑧^{−1})]

Exercise: 16 Consider an analog low-pass filter characterized by its differential equation


𝑑𝐲(𝑡)/𝑑𝑡 = 𝑎(𝐱(𝑡) − 𝐲(𝑡))

which corresponds to an impulse response 𝐡(𝑡) = 𝑎𝑒^{−𝑎𝑡}𝑢(𝑡). Prove that this filter can be approximated by the following digital filter: (1 + 𝑎𝑇𝑠)𝐲[𝑘] − 𝐲[𝑘 − 1] = 𝑎𝑇𝑠𝐱[𝑘]

Solution From the given digital filter we notice that there is a backward difference
approximation, so we prefer to use this type of discretization

∫_{𝑡₁}^{𝑡₂} (𝑑𝐲(𝑡)/𝑑𝑡)𝑑𝑡 = ∫_{𝑡₁}^{𝑡₂} 𝑎(𝐱(𝑡) − 𝐲(𝑡))𝑑𝑡  ⟺  ∫_{(𝑘−1)𝑇𝑠}^{𝑘𝑇𝑠} 𝑑𝐲 = ∫_{(𝑘−1)𝑇𝑠}^{𝑘𝑇𝑠} 𝑎(𝐱(𝑡) − 𝐲(𝑡))𝑑𝑡

𝐲[𝑘𝑇𝑠] − 𝐲[(𝑘 − 1)𝑇𝑠] = 𝑎 ∫_{(𝑘−1)𝑇𝑠}^{𝑘𝑇𝑠} (𝐱(𝑡) − 𝐲(𝑡))𝑑𝑡

For a sufficiently small sampling period 𝑇𝑠 we can consider the function 𝐠(𝑡) = 𝐱(𝑡) − 𝐲(𝑡) to be constant over the integration interval, that is 𝐠(𝑡) ≈ 𝐠(𝑘𝑇𝑠); therefore

𝐲[𝑘𝑇𝑠] − 𝐲[(𝑘 − 1)𝑇𝑠] = 𝑎𝐠(𝑘𝑇𝑠) ∫_{(𝑘−1)𝑇𝑠}^{𝑘𝑇𝑠} 𝑑𝑡 = 𝑎𝐠(𝑘𝑇𝑠)(𝑘𝑇𝑠 − (𝑘 − 1)𝑇𝑠) = 𝑎𝑇𝑠(𝐱[𝑘𝑇𝑠] − 𝐲[𝑘𝑇𝑠])

For simplicity we omit the sampling period 𝑇𝑠 from the argument, and we get

(1 + 𝑎𝑇𝑠)𝐲[𝑘] − 𝐲[𝑘 − 1] = 𝑎𝑇𝑠𝐱[𝑘]

Using the z-transform we obtain

𝑯(𝑧) = 𝐘(𝑧)/𝐗(𝑧) = 𝑎𝑇𝑠/(1 + 𝑎𝑇𝑠 − 𝑧^{−1}) = 𝑎/((1 − 𝑧^{−1})/𝑇𝑠 + 𝑎) = 𝑯(𝑠)|_{𝑠=(1−𝑧^{−1})/𝑇𝑠}

This type of mapping between the s-plane and the z-plane is called the backward difference method:

𝑠 = (1 − 𝑧^{−1})/𝑇𝑠 = (1/𝑇𝑠)((𝑧 − 1)/𝑧)
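A short simulation confirms that the backward-difference filter tracks the analog step response 1 − 𝑒^{−𝑎𝑡} of this low-pass filter; the values 𝑎 = 5 and 𝑇𝑠 = 0.01 s are assumed test values.

a = 5; Ts = 0.01; t = 0:Ts:2;            % assumed test values
x = ones(size(t));                       % step input
y = zeros(size(t)); yprev = 0;
for k = 1:length(t)
    y(k) = (yprev + a*Ts*x(k))/(1 + a*Ts);  % (1+aTs)y[k] - y[k-1] = aTs x[k]
    yprev = y(k);
end
plot(t, 1-exp(-a*t), 'b-', t, y, 'r--'); grid on
legend('analog step response', 'backward-difference filter');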

Exercise: 17 Consider an analog low-pass filter characterized by its differential equation


𝑑𝐲(𝑡)/𝑑𝑡 = 𝑎(𝐱(𝑡) − 𝐲(𝑡))

which corresponds to an impulse response 𝐡(𝑡) = 𝑎𝑒^{−𝑎𝑡}𝑢(𝑡). Prove that this filter can be approximated by the following digital filter: 𝐲[𝑘 + 1] = (1 − 𝑎𝑇𝑠)𝐲[𝑘] + 𝑎𝑇𝑠𝐱[𝑘]

Solution: From the given digital filter we notice that there is a forward difference approximation, so we prefer to use this type of discretization:

∫_{𝑡₁}^{𝑡₂} (𝑑𝐲(𝑡)/𝑑𝑡)𝑑𝑡 = ∫_{𝑡₁}^{𝑡₂} 𝑎(𝐱(𝑡) − 𝐲(𝑡))𝑑𝑡  ⟺  ∫_{𝑘𝑇𝑠}^{(𝑘+1)𝑇𝑠} 𝑑𝐲 = ∫_{𝑘𝑇𝑠}^{(𝑘+1)𝑇𝑠} 𝑎(𝐱(𝑡) − 𝐲(𝑡))𝑑𝑡

𝐲[(𝑘 + 1)𝑇𝑠] − 𝐲[𝑘𝑇𝑠] = ∫_{𝑘𝑇𝑠}^{(𝑘+1)𝑇𝑠} 𝑎(𝐱(𝑡) − 𝐲(𝑡))𝑑𝑡

For a sufficiently small sampling period 𝑇𝑠 we can consider the function 𝐠(𝑡) = 𝐱(𝑡) − 𝐲(𝑡) to be constant over the integration interval, that is 𝐠(𝑡) ≈ 𝐠(𝑘𝑇𝑠); therefore

𝐲[(𝑘 + 1)𝑇𝑠] − 𝐲[𝑘𝑇𝑠] = 𝑎𝐠(𝑘𝑇𝑠) ∫_{𝑘𝑇𝑠}^{(𝑘+1)𝑇𝑠} 𝑑𝑡 = 𝑎𝑇𝑠𝐠(𝑘𝑇𝑠) = 𝑎𝑇𝑠(𝐱[𝑘𝑇𝑠] − 𝐲[𝑘𝑇𝑠])

For simplicity we omit the sampling period 𝑇𝑠 from the argument, and we get

𝐲[𝑘 + 1] = (1 − 𝑎𝑇𝑠)𝐲[𝑘] + 𝑎𝑇𝑠𝐱[𝑘]

Using the z-transform we obtain

𝑯(𝑧) = 𝐘(𝑧)/𝐗(𝑧) = 𝑎𝑇𝑠/(𝑧 − 1 + 𝑎𝑇𝑠) = 𝑎/((𝑧 − 1)/𝑇𝑠 + 𝑎) = 𝑯(𝑠)|_{𝑠=(𝑧−1)/𝑇𝑠}

This type of mapping between the s-plane and the z-plane is called the forward difference method:

𝑠 = (1 − 𝑧^{−1})/(𝑧^{−1}𝑇𝑠) = (𝑧 − 1)/𝑇𝑠

Exercise: 18 Consider an analog integrator characterized by transfer function 𝑯𝑨𝑰 (𝑠) = 1/𝑠
and assume that its response to an excitation 𝐱(𝑡) is 𝐲(𝑡), The impulse response of the
integrator is given by {𝒉𝑨𝑰 (𝑡) = 1 for 𝑡 ≥ 0+ & 𝒉𝑨𝑰 (𝑡) = 0 for 𝑡 ≤ 0− } that is 𝒉𝑨𝑰 (𝑡) = 𝑢(𝑡)
and its response at instant 𝑡 to an arbitrary right-sided excitation 𝐱(𝑡), i.e., 𝐱(𝑡) = 0 for t < 0,
is given by the convolution integral
𝐱(𝑡) ⟶ [𝑯𝑨𝑰(𝑠) = 1/𝑠] ⟶ 𝐲(𝑡) = ∫₀^𝑡 𝐱(𝜏)𝒉𝑨𝑰(𝑡 − 𝜏)𝑑𝜏

Prove that this response can be approximated by the following digital filter:

𝐲[𝑛] − 𝐲[𝑛 − 1] = (𝑇/2){𝐱[𝑛] + 𝐱[𝑛 − 1]}
Solution: (Derivation of the bilinear-transformation method) If 0⁺ < 𝑡₁ < 𝑡₂, we can write

𝐲(𝑡₂) − 𝐲(𝑡₁) = ∫₀^{𝑡₂} 𝐱(𝜏)𝒉𝑨𝑰(𝑡₂ − 𝜏)𝑑𝜏 − ∫₀^{𝑡₁} 𝐱(𝜏)𝒉𝑨𝑰(𝑡₁ − 𝜏)𝑑𝜏

For 0⁺ < 𝜏 ≤ 𝑡₁, 𝑡₂ we have 𝒉𝑨𝑰(𝑡₂ − 𝜏) = 𝒉𝑨𝑰(𝑡₁ − 𝜏) = 1, and thus 𝐲(𝑡₂) − 𝐲(𝑡₁) simplifies to

𝐲(𝑡₂) − 𝐲(𝑡₁) = ∫_{𝑡₁}^{𝑡₂} 𝐱(𝜏)𝑑𝜏

As 𝑡₁ → 𝑡₂, from the figure we can use the trapezoidal rule

𝐲(𝑡₂) − 𝐲(𝑡₁) ≅ ((𝑡₂ − 𝑡₁)/2)(𝐱(𝑡₂) + 𝐱(𝑡₁))

and on letting 𝑡₁ = (𝑛 − 1)𝑇 and 𝑡₂ = 𝑛𝑇 the difference equation 2{𝐲[𝑛] − 𝐲[𝑛 − 1]} = 𝑇{𝐱[𝑛] + 𝐱[𝑛 − 1]} can be formed. This equation represents a 'digital integrator' that has approximately the same time-domain response as the analog integrator for any excitation. By applying the z-transform, we obtain

𝑯𝑫𝑰(𝑧) = 𝐘(𝑧)/𝐗(𝑧) = (𝑇/2)((𝑧 + 1)/(𝑧 − 1))

The above equation can be expressed as

𝑯𝑫𝑰(𝑧) = 𝑯𝑨𝑰(𝑠)|_{𝑠=(2/𝑇)((𝑧−1)/(𝑧+1))}

In effect, a digital integrator can be obtained from an analog integrator by simply applying the bilinear transformation. Generally, applying the bilinear transformation to the transfer function of an arbitrary analog filter will yield a digital filter characterized by a discrete-time transfer function. This type of approximation is known in mathematics as the trapezoidal method.

𝑠 = (2/𝑇)((𝑧 − 1)/(𝑧 + 1)) ⟷ 𝑧 = (2 + 𝑇𝑠)/(2 − 𝑇𝑠)

If we define a new variable 𝑤 = 𝑇𝑠/2 we obtain

𝑧 = (1 + 𝑤)/(1 − 𝑤) ⟷ 𝑤 = (𝑧 − 1)/(𝑧 + 1)

Exercise: 19 Given a sampled system described by its impulse response

𝐡[𝑘] = (10𝛿[𝑘] − 10(1/√2)^𝑘 cos(𝑘𝜋/4) + 16(1/√2)^𝑘 sin(𝑘𝜋/4))𝑢[𝑘]

Determine the continuous transfer function 𝑯(𝑠) using the bilinear-transformation method

Solution: First of all we should find the discrete-time transfer function:

𝑯(𝑧) = (3𝑧 + 1)/(𝑧² − 𝑧 + 1/2)  ⟹  𝑯(𝑠) = 𝑯(𝑧)|_{𝑧=(2+𝑇𝑠)/(2−𝑇𝑠)} = (3((2 + 𝑇𝑠)/(2 − 𝑇𝑠)) + 1)/(((2 + 𝑇𝑠)/(2 − 𝑇𝑠))² − ((2 + 𝑇𝑠)/(2 − 𝑇𝑠)) + 1/2) = 4{8 − 2𝑇𝑠 − (𝑇𝑠)²}/(5(𝑇𝑠)² + 4𝑇𝑠 + 4)
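The conversion can be cross-checked with d2c using the 'tustin' method; the sampling period 𝑇 = 0.2 s in the sketch below is an assumed test value.

T = 0.2;                                     % assumed test value
Hz = tf([3 1], [1 -1 0.5], T);               % H(z) = (3z+1)/(z^2 - z + 1/2)
Hs = d2c(Hz, 'tustin')                       % bilinear (Tustin) conversion
Hs_ref = tf(4*[-T^2 -2*T 8], [5*T^2 4*T 4])  % 4{8-2Ts-(Ts)^2}/(5(Ts)^2+4Ts+4)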
Exercise: 20 Given a system setup as shown in figure. Assuming that we require
digitalizing the analog controller

𝒖(𝑡) = 𝐾(𝒆(𝑡) + (1/𝑇𝐼) ∫₀^𝑡 𝒆(𝜉)𝑑𝜉 + 𝑇𝐷(𝑑𝒆(𝑡)/𝑑𝑡))

with 𝒖(𝑡): the output of the controller, 𝒆(𝑡): the input of the controller, 𝐾: proportional constant, 𝑇𝐼: integral constant, and 𝑇𝐷: derivative constant.

For best approximation use the trapezoidal rule for the integration term and backward
difference for the derivative term.

Solution: First of all we convert the analog controller equation to the frequency domain:

𝑼(𝑠) = 𝐾(1 + (1/𝑇𝐼)𝑠^{−1} + 𝑇𝐷𝑠)𝑬(𝑠) ⟺ 𝑯𝑐(𝑠) = 𝐾(1 + (1/𝑇𝐼)𝑠^{−1} + 𝑇𝐷𝑠)

The equivalent digital controller is

𝑯𝑐(𝑧) = 𝐾{1 + (𝑇𝑠/(2𝑇𝐼))((1 + 𝑧^{−1})/(1 − 𝑧^{−1})) + (𝑇𝐷/𝑇𝑠)(1 − 𝑧^{−1})}
      = 𝐾 + (𝐾𝑇𝑠/(2𝑇𝐼))(𝑧^{−1}/(1 − 𝑧^{−1})) − (𝐾𝑇𝑠/(2𝑇𝐼))(1/(1 − 𝑧^{−1})) + (𝐾𝑇𝑠/𝑇𝐼)(1/(1 − 𝑧^{−1})) + (𝐾𝑇𝐷/𝑇𝑠)(1 − 𝑧^{−1})
      = (𝐾 − 𝐾𝑇𝑠/(2𝑇𝐼)) + (𝐾𝑇𝑠/𝑇𝐼)(1/(1 − 𝑧^{−1})) + (𝐾𝑇𝐷/𝑇𝑠)(1 − 𝑧^{−1})

Let us put

𝐾𝑃 = 𝐾 − 𝐾𝑇𝑠/(2𝑇𝐼),  𝐾𝐼 = 𝐾𝑇𝑠/𝑇𝐼,  𝐾𝐷 = 𝐾𝑇𝐷/𝑇𝑠

We obtain

𝑯𝑐(𝑧) = 𝐾𝑃 + 𝐾𝐼(1/(1 − 𝑧^{−1})) + 𝐾𝐷(1 − 𝑧^{−1}) ⟺ 𝒖[𝑘] = 𝐾𝑃𝒆[𝑘] + 𝐾𝐼 ∑_{𝑚=0}^{𝑘} 𝒆[𝑚] + 𝐾𝐷∇𝒆[𝑘]
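A minimal sketch of the resulting discrete PID law in code form is given below; the gains and the error sequence are hypothetical test values, not part of the exercise.

KP = 2; KI = 0.1; KD = 0.5;              % hypothetical gains
e = [1 0.8 0.5 0.3 0.1 0 0];             % hypothetical error sequence
u = zeros(size(e)); s = 0; eprev = 0;
for k = 1:length(e)
    s = s + e(k);                        % running sum: the integral term
    u(k) = KP*e(k) + KI*s + KD*(e(k) - eprev);   % discrete PID update
    eprev = e(k);
end
stem(0:length(u)-1, u); grid on; xlabel('k'); ylabel('u[k]');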
Exercise: 21 Given a hybrid closed loop system as shown in figure, find its pulse transfer
function PTF (i.e. if it is possible)
Solution: Here we give just a short answer for each block diagram:

𝐏𝐓𝐅(𝑧) = 𝒀(𝑧)/𝑹(𝑧) = 𝑮₁(𝑧)𝑮₂(𝑧)/(1 + 𝑮₁(𝑧)𝑮₂(𝑧)𝑯(𝑧)),  𝐏𝐓𝐅(𝑧) = 𝒀(𝑧)/𝑹(𝑧) = 𝑮₁(𝑧)𝑮₂(𝑧)/(1 + 𝑮₁(𝑧)𝑮₂𝑯(𝑧))

Exercise: 22 Given a hybrid closed loop system as shown in figure, find its pulse transfer
function PTF (i.e. if it is possible)

Exercise: 23 Given a hybrid closed loop system as shown in figure, find its pulse transfer
function PTF (i.e. if it is possible)

Exercise: 24 Given a hybrid closed loop system as shown in figure, find its pulse transfer
function PTF (i.e. if it is possible)

Exercise: 25 Given a hybrid closed loop system as shown in figure, find its pulse transfer
function PTF (i.e. if it is possible)
Exercise: 26 Write a MATLAB code to simulate a 2nd order digital filter

clear all, clc, a0=1; a1=3; b0=1/2; b1=-1; x0=0; x1=a1; u0=1; u1=0;
x=[x0,x1]; n=18;
for k = 1:1:n,
    x2 = -b1*x1 - b0*x0 + a1*u1 + a0*u0;  % 2nd-order difference equation
    x=[x,x2];
    x0=x1;
    x1=x2;
    u0=u1;
end
plot(x,'o'); grid

Fundamental remark: (Author's Property) The approximation methods for converting a continuous system to a discrete one can be derived from the mapping 𝑧 = 𝑒^{𝑠𝑇}.

Consider the modulated signal 𝐲(𝑡) = 𝐱(𝑡)𝐬(𝑡) (i.e., impulse-train sampled):

𝐲(𝑡) = 𝐱(𝑡)𝐬(𝑡) = 𝐱(𝑡) ∑_{𝑘=−∞}^{∞} 𝛿(𝑡 − 𝑘𝑇𝑠) = ∑_{𝑘=−∞}^{∞} 𝐱(𝑘𝑇𝑠)𝛿(𝑡 − 𝑘𝑇𝑠)

Take the Laplace transform of this function:

𝕃(𝐲(𝑡)) = ∫_{−∞}^{∞} (∑_{𝑘=−∞}^{∞} 𝐱(𝑘𝑇𝑠)𝛿(𝑡 − 𝑘𝑇𝑠)) 𝑒^{−𝑠𝑡} 𝑑𝑡
        = ∑_{𝑘=−∞}^{∞} 𝐱(𝑘𝑇𝑠) ∫_{−∞}^{∞} 𝛿(𝑡 − 𝑘𝑇𝑠)𝑒^{−𝑠𝑡} 𝑑𝑡
        = ∑_{𝑘=−∞}^{∞} 𝐱(𝑘𝑇𝑠)𝑒^{−𝑠𝑘𝑇𝑠}
        = ∑_{𝑘=−∞}^{∞} 𝐱(𝑘𝑇𝑠)(𝑒^{𝑠𝑇𝑠})^{−𝑘} = ∑_{𝑘=−∞}^{∞} 𝐱[𝑘]𝑧^{−𝑘}  with 𝑧 = 𝑒^{𝑠𝑇𝑠}

The Laplace transform of the sampled signal 𝐲(𝑡) = 𝐱(𝑡)𝐬(𝑡) is thus an equivalent z-transform under the mapping 𝑧 = 𝑒^{𝑠𝑇𝑠}. Various methods can now be obtained from this relationship between the s-plane and the z-plane.

▪ The Forward rule: we apply the Taylor series on 𝑧 = 𝑒^{𝑠𝑇}:

𝑧 = 𝑒^{𝑠𝑇} ⟺ 𝑧 ≈ 1 + 𝑠𝑇 ⟺ 𝑠 = (𝑧 − 1)/𝑇 = (1 − 𝑧^{−1})/(𝑇𝑧^{−1})

▪ The Backward rule: we apply the Taylor series on 𝑧^{−1} = 𝑒^{−𝑠𝑇}:

𝑧 = 𝑒^{𝑠𝑇} ⟺ 𝑧^{−1} = 𝑒^{−𝑠𝑇} ⟺ 𝑧^{−1} ≈ 1 − 𝑠𝑇 ⟺ 𝑠 = (𝑧 − 1)/(𝑇𝑧) = (1 − 𝑧^{−1})/𝑇

▪ The Bilinear (Trapezoidal) rule: we apply the Taylor series on 𝑠𝑇 = ln(𝑧):

𝑧 = 𝑒^{𝑠𝑇} ⟺ 𝑠 = (1/𝑇) ln(𝑧) ≈ (2/𝑇)((𝑧 − 1)/(𝑧 + 1)) ≈ (2/𝑇)((1 − 𝑧^{−1})/(1 + 𝑧^{−1}))
CHAPTER X:
Analog and Digital
Filters in Linear
Systems

I. Introduction
II. Operational Amplifiers
III. Bode Plot: (What Is It?)
III.I. The Gain K
III.II. Integral and Derivative Factors
III.III. First-Order Factors
III.IV. Quadratic Factors
IV. Analog Filters
IV.I. First-Order Low-Pass Filters
IV.II. Practical Integrators
IV.III. First-Order High-Pass Filters
IV.IV. First-Order All-Pass Phase Shifters
IV.V. Lead and Lag Filters
IV.VI. Second-Order Low-Pass Filters
IV.VII. Second-Order High-Pass Filters
IV.VIII. Second-Order Bandpass Filters
IV.IX. Second-Order Notch Filters
IV.X. Second-Order All-Pass Filters
V. Design of Causal Analog Filters
V.I. Butterworth Filters
V.II. Chebyshev Filters
V.III. Inverse Chebyshev Filters
V.IV. Analog filter transform…
VI. Design of Digital Filters
VI.I. Design of IIR filters
VI.II. Design of FIR filters
VI.III. MATLAB Problems
VII. Problems

Filtering can be used to select one or more desirable bands of frequency components, or simply frequencies, and simultaneously reject one or more undesirable bands. For example, one could use lowpass filtering to select a band of preferred low frequencies and reject a band of undesirable high frequencies from the frequencies present in the signal; we use highpass filtering to select a band of preferred high frequencies and reject a band of undesirable low frequencies; we use bandpass filtering to select a band of frequencies and reject low and high frequencies; lastly, we use bandstop filtering to reject a band of frequencies but select low frequencies and high frequencies.
Analog and Digital Filters in
Linear Systems

I. Introduction: In many signal processing applications the need arises to change the
strength, or the relative significance, of various frequency components in a given signal.
Sometimes we may need to eliminate certain frequency components in a signal; at other
times we may need to boost the strength of a range of frequencies over others. This act of
changing the relative amplitudes of frequency components in a signal is referred to as
filtering, and the system that facilitates this is referred to as a filter. In a general sense any
continuous time LTI system can be seen as a filter. In the most general sense, a "filter" is a
device or a system that alters in a prescribed way the input that passes through it. In
essence, a filter converts inputs into outputs in such a fashion that certain desirable
features of the inputs are retained in the outputs while undesirable features are
suppressed. There are many kinds of filters; only a few examples are given here. In
automobiles, the oil filter removes unwanted particles that are suspended in the oil passing
through the filter; the air filter passes air but prevents dirt and dust from reaching the
carburetor. Colored glass may be used as an optical filter to absorb light of certain
wavelengths, thus altering the light that reaches the sensitized film in a camera.

An electrical filter is designed to separate and pass a desired signal from a mixture of
desired and undesired signals. Typical examples of complex electrical filters are
televisions and radios. More specifically, when a television is turned to a particular
channel, say Channel 2, it will pass those signals (audio and visual) transmitted by
Channel 2 and block out all other signals. On a smaller scale, filters are basic electronic
components in many communication systems such as the telephone, television, radio,
radar, and sonar. Electrical filters can also be found in power conversion circuits and
power systems in general. In fact, electrical filters permeate modern technology so much
that it is difficult to think of any moderately complex electronic device that does not employ
a filter in one form or another. Electrical filters may be classified in a number of ways.
Analog filters are used to process analog or continuous-time signals; digital filters are used
to process digital signals (discrete-time signals with quantized magnitude levels). Analog
filters may be classified as lumped or distributed depending on the frequency ranges for
which they are designed. Finally, analog filters may also be classified as passive or active
depending on the type of elements used in their realizations.

II. Operational Amplifiers: Analog filters may be implemented by passive or active
circuits (the exceptions are cases that require amplification to achieve a certain gain). In
this way, filters are classified as passive or active. Passive filters require no power source,
are more robust, and last longer, but they are vulnerable to loading effects and may require
inductors, which can become bulky. Active filters can do without inductors, provide a
better input impedance, and isolate the load from the filter by employing an operational
amplifier (op amp). However, they require a power source and are less robust. Examples of
both types of implementations will be presented.
The name operational amplifier (op amp) was originally given to an amplifier that could be
easily modified by external circuitry to perform mathematical operations (addition, scaling,
integration, etc.) in analog-computer applications. However, with the advent of solid-state
technology, op amps have become highly reliable, miniaturized, temperature-stabilized, and
consistently predictable in performance; they now figure as fundamental building blocks in
basic amplification and signal conditioning, in active filters, function generators, and
switching circuits. Operational amplifiers are said to be active elements because they
require an external power supply and can add power to (amplify) the signal they process.

$Z_i$: the input impedance, very high (in the ideal case $Z_i \to \infty$)
$Z_o$: the output impedance, very small (in the ideal case $Z_o \to 0$)
$v_d$: the differential input voltage, $v_d = v_2 - v_1$, with $i_d = v_d/Z_i \approx 0$
$v_1$: the inverting input; $v_2$: the non-inverting input
$v_o$: the output voltage, with $-(V_{EE} - 2) \le v_o \le (V_{CC} - 2)$ and $v_o = A_d v_d = A_d (v_2 - v_1)$

 The inverting amplifier: has its noninverting input connected to ground or common. A
signal is applied through input resistor 𝑍1 , and negative current feedback is implemented
through feedback resistor 𝑍2 . Output 𝑣𝑜 has polarity opposite that of input 𝑣𝑖 .

By the method of node voltages at the inverting input, the current balance is

$$\begin{cases}\dfrac{v_i - v_1}{Z_1} + \dfrac{v_o - v_1}{Z_2} = i_d = \dfrac{v_d}{Z_i} \approx 0\\[4pt] v_1 = v_2 = 0\end{cases}$$

Which means that

$$\frac{v_o}{v_i} = -\frac{Z_2}{Z_1} \iff v_o = \left(-\frac{Z_2}{Z_1}\right) v_i$$
 The noninverting amplifier: is realized by
grounding 𝑍1 and applying the input signal at the
noninverting op amp terminal. When 𝑣𝑖 is positive,
𝑣𝑜 is positive and current 𝑖1 is positive. Voltage
𝑣1 = 𝑖1 𝑍1 then is applied to the inverting terminal as
negative voltage feedback.

For the noninverting amplifier, assume that the current into the inverting terminal of the
op amp is zero, so that $v_d = 0$ and $v_1 \approx v_2 = v_i$. Then $i_d = 0 \implies v_- = v_+ = v_i$; therefore,

$$i_1 = -\frac{v_i}{Z_1} = i_2 = \frac{v_i - v_o}{Z_2} \implies v_o = \left(1 + \frac{Z_2}{Z_1}\right) v_i$$

Remark: if we want the gain of the op-amp to be less than one we add a divider circuit as
shown below

$$v_o = \left(1 + \frac{Z_2}{Z_1}\right) v_+ \quad\text{and}\quad v_+ = \left(\frac{Z_x}{Z_x + Z_y}\right) v_i \implies v_o = \left(1 + \frac{Z_2}{Z_1}\right)\left(\frac{Z_x}{Z_x + Z_y}\right) v_i$$

 The difference amplifier: based on the


inverting and non-inverting amplifiers we obtain

$$v_o = \left(-\frac{Z_2}{Z_1}\right) v_1 + \left(1 + \frac{Z_2}{Z_1}\right)\left(\frac{Z_x}{Z_x + Z_y}\right) v_2$$

With the appropriate choice of impedances $Z_2 = Z_1$ and $Z_x = Z_y$, we get

$$v_o = v_2 - v_1$$

 Voltage follower: In this circuit (with an


ideal op-amp, drawing zero current and having
an infinite open-loop gain) the output is equal
to the input 𝑣𝑜 = 𝑣𝑠 . The system isolates the
load 𝑍𝑙 from the signal source 𝑣𝑠 , making the
internal resistance of the signal source
ineffective. The system is causal, memoryless,
linear, and time invariant with unity gain.
 Summing Op-Amp: The summing circuit of the below figure has n inputs 𝑣𝑖 , 𝑖 = 1 … 𝑛,
and one output 𝑣𝑜𝑢𝑡 . With the feedback element 𝑍 being a resistor 𝑅𝑓 , the input-output
relationship is

$$v_{out} = -\left(\frac{R_f}{R_1} v_1 + \frac{R_f}{R_2} v_2 + \cdots + \frac{R_f}{R_n} v_n\right)$$

(For the derivation apply KCL at the inverting


terminal of the op-amp.) The circuit is linear, time
invariant, and memoryless. Replacing the feedback
resistor 𝑅𝑓 with a capacitor 𝐶, we obtain a summing
integrator with the input-output relationship

$$v_{out} = -\frac{1}{C}\int_{-\infty}^{t}\left(\frac{v_1}{R_1} + \frac{v_2}{R_2} + \cdots + \frac{v_n}{R_n}\right) dt$$

The circuit is then dynamic.
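As a quick sanity check of the summing relationship above, the following minimal MATLAB
sketch evaluates the formula for two DC inputs; the component values and input voltages
are assumed purely for illustration.

clear all, clc
Rf = 10e3; R1 = 5e3; R2 = 5e3;   % assumed resistor values (ohms)
v1 = 0.2;  v2 = -0.5;            % assumed DC input voltages (volts)
vout = -(Rf/R1*v1 + Rf/R2*v2)    % summing-amplifier formula: expected 0.6 V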

III. Bode Plot: (What Is It?) The Bode plot of the frequency response 𝐅(𝜔) of an LTI system
is the graph of 20 log|𝐅(𝜔)| (magnitude in dB) and ∠𝐅(𝜔) (phase angle) both plotted versus
log 𝜔. The main advantage of using the Bode diagram is that multiplication of magnitudes
can be converted into addition. To see this construction, first consider the system with the
transfer function
$$\mathbf{F}(\omega) = \frac{K\left(1 + j\dfrac{\omega}{z_1}\right)\cdots\left(1 + j\dfrac{\omega}{z_m}\right)}{j\omega\left(1 + j\dfrac{\omega}{p_1}\right)\cdots\left(1 + j\dfrac{\omega}{p_{n-1}}\right)}$$

$$20\log|\mathbf{F}(\omega)| = 20\left\{\log|K| - \log|\omega| + \sum_{k=1}^{m}\log\left|1 + j\frac{\omega}{z_k}\right| - \sum_{k=1}^{n-1}\log\left|1 + j\frac{\omega}{p_k}\right|\right\}$$

The use of logarithmic scale makes it possible to display both the low- and high-frequency
characteristics of the transfer function in one graph. Even though zero frequency cannot be
included in the logarithmic scale (since log 0 = – ∞), it is not a serious problem as one can
go to as low a frequency as is required for the analysis and design of practical control systems.
The principal factors that may be present in a transfer function 𝑭(𝑗𝜔) = ∏𝑛𝑖=1 𝑭𝑖 (𝑗𝜔), in
general, are:

1. Constant gain 𝐾
2. Pure integral and derivative factors (𝑗𝜔)±𝑛
3. First-order factors (1 + 𝑗𝜔𝑇)±1
4. Quadratic factors [1 + 2𝜉(𝑗𝜔/𝜔𝑛 ) + (𝑗𝜔/𝜔𝑛 )2 ]±1

Once the logarithmic plots of these basic factors are known, it is convenient to add
their contributions graphically to get the composite plot of the multiplying factors of
$\mathbf{G}(j\omega)\mathbf{H}(j\omega)$, since products of terms become sums of their logarithms. It will soon be
seen that on the logarithmic scale the actual amplitude and phase of the principal
factors of $\mathbf{G}(j\omega)\mathbf{H}(j\omega)$ may be approximated by straight-line asymptotes, which is an
added advantage of the Bode plot. The errors of this approximation are in most cases
definite and known, and when necessary, corrections can easily be incorporated to get an
accurate plot.

III.I. The Gain K: A number greater than unity has a positive value in decibels, while a
number smaller than unity has a negative value. The log-magnitude curve for a constant
gain K is a horizontal straight line at the magnitude of 20 log K decibels. The phase angle of
the gain K is zero. The effect of varying the gain K in the transfer function is that it raises
or lowers the log-magnitude curve of the transfer function by the corresponding constant
amount, but it has no effect on the phase curve.

$$\mathbf{F}(j\omega) = K, \qquad \mathbf{F}_{dB}(j\omega) = 20\log_{10}(|K|), \qquad \angle\mathbf{F}(j\omega) = \begin{cases}0 & K > 0\\ -180^\circ & K < 0\end{cases}$$
III.II. Integral and Derivative Factors (Pole and Zero at the origin): The logarithmic
magnitude (in decibels) and the phase angle of the integrator 𝐅(𝑗𝜔) = 1/𝑗𝜔 are

𝐅𝐝𝐛 (𝑗𝜔) = 20 log10 |1/𝑗𝜔| = −20 log10 |𝜔| and ∠𝐅(𝑗𝜔) = −90∘

Similarly, the log magnitude and phase of the differentiator 𝐅(𝑗𝜔) = 𝑗𝜔 are

𝐅𝐝𝐛 (𝑗𝜔) = 20 log10 |𝑗𝜔| = 20 log10 |𝜔| and ∠𝐅(𝑗𝜔) = 90∘


III.III. First-Order Factors: The log magnitude of the first-order factor $\mathbf{F}(j\omega) = 1/(1 + j\omega T)$ is

$$\mathbf{F}_{dB}(j\omega) = 20\log_{10}\left|\frac{1}{1 + j\omega T}\right| = -20\log_{10}\sqrt{1 + \omega^2 T^2} = -10\log_{10}(1 + \omega^2 T^2)$$

For low frequencies, such that


𝜔 ≪ 1/𝑇 , the log magnitude may be
approximated by 𝐅𝐝𝐛 (𝑗𝜔) = 0 dB.
Thus, the log-magnitude curve at low
frequencies is the constant 0 dB line.
For high frequencies, such that
𝜔 ≫ 1/𝑇, 𝐅𝐝𝐛 (𝑗𝜔) ≈ −20 log10 (𝜔𝑇) .
This is an approximate expression for
the high-frequency range. At 𝜔 = 1/𝑇,
the log magnitude equals 0 dB; at
𝜔 = 10/𝑇, the log magnitude is 𝐅𝐝𝐛 (𝑗𝜔) ≈ −20 dB. The exact phase angle 𝜙 of the factor
𝐅(𝑗𝜔) = 1/(1 + 𝑗𝜔𝑇) is 𝜙 = − tan−1(𝜔𝑇). At zero frequency, the phase angle is
𝜙 = − tan−1(0) = 0∘ . At the corner frequency, the phase angle is 𝜙 = − tan−1(1) = −45∘ . At
infinity, the phase angle becomes –90°. Since the phase angle is given by an inverse
tangent function, the phase angle is skew symmetric about the inflection point at 𝜙 = −45∘ .

Now consider $\mathbf{F}(j\omega) = 1 + j\omega T$. An advantage of the Bode diagram is that for
reciprocal factors (for example, the factor $1 + j\omega T$) the log-magnitude and the phase-angle
curves need only be changed in sign, since $20\log_{10}|1 + j\omega T| = -20\log_{10}|1/(1 + j\omega T)|$ and
$\angle(1 + j\omega T) = \tan^{-1}(\omega T) = -\angle\left(1/(1 + j\omega T)\right)$.
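A short MATLAB sketch (T = 1 assumed) comparing the exact log magnitude of 1/(1 + jωT)
with its two straight-line asymptotes makes the corner error visible:

clear all, clc, T = 1; w = logspace(-2,2,500);
exact = -10*log10(1 + (w*T).^2);     % exact magnitude in dB
asymp = min(0, -20*log10(w*T));      % 0 dB below 1/T, -20 dB/decade above
semilogx(w,exact,'-',w,asymp,'--'); grid on
xlabel('Frequency in rad/sec'); ylabel('Magnitude (dB)');
legend('exact','asymptotes');        % maximum error: 3 dB at w = 1/T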
III.IV. Quadratic Factors: Linear time invariant systems often possess quadratic factors of
the form 𝐅(𝑗𝜔) = 1/[1 + 2𝜉(𝑗𝜔/𝜔𝑛 ) + (𝑗𝜔/𝜔𝑛 )2 ]. The nature of roots in this expression
depends on the value of 𝜉. For 𝜉 > 1, both the roots will be real and the quadratic factor
can be expressed as a product of two first-order factors with real poles. The quadratic
factor can be expressed as the product of two complex conjugate factors for values of 𝜉
satisfying 0 < 𝜉 < 1. Asymptotic approximations to the frequency response curves will not
be accurate for the quadratic factor for small values of 𝜉. This is because of the fact that
the magnitude as well as the phase of the quadratic factor depend on both the corner
frequency and the damping ratio 𝜉.

The asymptotic frequency-response curve may be obtained as follows: Since


$$\mathbf{F}_{dB}(j\omega) = 20\log_{10}|\mathbf{F}(j\omega)| = -10\log_{10}\left(\left(1 - \frac{\omega^2}{\omega_n^2}\right)^2 + \left(2\xi\frac{\omega}{\omega_n}\right)^2\right)$$
for low frequencies such that 𝜔 ≪ 𝜔𝑛 , the log magnitude becomes 𝐅𝐝𝐛 (𝑗𝜔) = 0 dB. The low-
frequency asymptote is thus a horizontal line at 0 dB. For high frequencies such that
𝜔 ≫ 𝜔𝑛 , the log magnitude becomes 𝐅𝐝𝐛 (𝑗𝜔) ≈ −40 log10 (𝜔/𝜔𝑛 ) dB. The equation for the high-
frequency asymptote is a straight line having the slope −40 dB/decade, since

$$-40\log_{10}\left(\frac{10\omega}{\omega_n}\right) = -40 - 40\log_{10}\left(\frac{\omega}{\omega_n}\right)$$

The high-frequency asymptote intersects the low-frequency one at 𝜔 = 𝜔𝑛 , since at this


frequency 𝐅𝐝𝐛 (𝑗𝜔) = 0 dB . This frequency,
𝜔𝑛 , is the corner frequency for the
quadratic factor considered. The two
asymptotes just derived are independent of
the value of 𝜉. Near the frequency 𝜔 = 𝜔𝑛 , a
resonant peak occurs, as may be expected
from 𝐅(𝑗𝜔). The damping ratio 𝜉 determines
the magnitude of this resonant peak.
Errors obviously exist in the approximation
by straight-line asymptotes. The magnitude
of the error depends on the value of 𝜉. It is
large for small values of 𝜉. The phase angle
of the quadratic factor is
$$\phi = -\tan^{-1}\left[\left(2\xi\frac{\omega}{\omega_n}\right) \bigg/ \left(1 - \frac{\omega^2}{\omega_n^2}\right)\right]$$

The phase angle is a function of both 𝜔 and


𝜉. At 𝜔 = 0, the phase angle equals 0°. At
the corner frequency 𝜔 = 𝜔𝑛 , the phase
angle is –90° regardless of 𝜉, since
$$\phi = -\tan^{-1}\left(\frac{2\xi}{0}\right) = -\tan^{-1}(\infty) = -90^\circ$$
At ω = ∞, the phase angle becomes –180°. The phase-angle curve is skew symmetric about
the inflection point, the point where φ = –90°. There is no simple way to sketch such
phase curves with straight-line asymptotes.
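For reference, the standard closed-form expressions for the location and height of the
resonant peak of this quadratic factor (valid for $\xi < 1/\sqrt{2}$) are:

$$\omega_r = \omega_n\sqrt{1 - 2\xi^2}, \qquad \left|\mathbf{F}(j\omega_r)\right| = \frac{1}{2\xi\sqrt{1 - \xi^2}}$$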
IV. Analog Filters: Ideal systems, such as ideal filters, are non-causal. However, they are
of interest because they set an upper bound for the system response. Practical systems
approximate the ideal response, while being causal (that is physically realizable). There are
five basic types of ideal filters: lowpass, highpass, bandpass, bandstop, and all-pass filter.
Filters are generally described by their system function 𝑯(𝑠) or frequency response 𝑯(𝜔).
The magnitude of the frequency response produces the gain or attenuation in the
amplitude of the signal and its phase produces delay.

Low-Pass filter: A filter whose passband is from frequency 0 to $\omega_p$ and whose
stopband extends from some frequency $\omega_s$ to infinity, where $\omega_p < \omega_s$.
High-Pass filter: A filter whose passband is from some frequency $\omega_p$ to infinity and whose
stopband is from 0 to $\omega_s$, where $\omega_s < \omega_p$.
Bandpass filter: A filter whose passband is from some frequency $\omega_{p1}$ to some other frequency
$\omega_{p2}$ and whose stopbands are from 0 to $\omega_{s1}$ and from $\omega_{s2}$ to $\infty$, where $\omega_{s1} < \omega_{p1} < \omega_{p2} < \omega_{s2}$.
Band-Reject filter: A filter whose passbands are from 0 to $\omega_{p1}$ and from $\omega_{p2}$ to $\infty$ and
whose stopband is from $\omega_{s1}$ to $\omega_{s2}$, where $\omega_{p1} < \omega_{s1} < \omega_{s2} < \omega_{p2}$.
All-Pass filter: A filter whose magnitude is 1 for all frequencies (i.e., whose passband is
from 0 to ∞). This type of filter is used mainly for phase compensation and phase shifting
purposes.

The system function of a practical filter, which is made of lumped linear elements, is a
rational function (a ratio of two polynomials). The degree of the denominator polynomial is
called the order of the filter. Because of the practical importance of first- and second order
filters and their widespread applications, we discuss these two classes of filters in detail
and present examples of circuits to realize them. It is noted that first- and second order
filters are special cases of first- and second-order LTI systems. The properties and
characteristics of these systems (such as impulse and step responses, natural frequencies,
damping ratio, quality factor, overdamping, underdamping, and critical damping) apply to
these filters as well and will be addressed.

IV.I. First-Order Low-Pass Filters: Assuming that the op-amp is ideal, it functions as a
voltage follower, preventing the impedance Z from loading the RC circuit. The ideal op-amp,
therefore, has no effect on the frequency response of the RC circuit, and in this analysis we
need not consider it.
$$\text{KVL} \implies -V_1(s) + V_R(s) + V_C(s) = 0 \implies -V_1(s) + R I_1(s) + \frac{1}{Cs} I_1(s) = 0 \implies I_1(s) = \left(\frac{Cs}{1 + RCs}\right) V_1(s)$$

The voltage across the capacitor terminals is given by

$$V_2(s) = V_C(s) = \frac{1}{Cs} I_1(s) = \left(\frac{1}{1 + RCs}\right) V_1(s) \implies H(s) = \frac{V_2(s)}{V_1(s)} = \frac{1}{1 + RCs}$$

The system function and frequency response of a first-order, low-pass filter are
$$H(s) = \frac{1}{1 + \tau s} \quad\text{with } \tau = RC, \qquad H(\omega) = \frac{1}{1 + j(\omega/\omega_0)} \quad\text{with } \omega_0 = \frac{1}{RC}$$
The input-output differential equation and responses to unit-impulse and unit-step inputs
are given below.
$$\frac{dv_2}{dt} + \frac{1}{RC} v_2 = \frac{1}{RC} v_1 \quad\Longleftrightarrow\quad v_1 \longrightarrow h(t) = \left(\frac{1}{RC}\, e^{-\frac{t}{RC}}\right) u(t) \longrightarrow v_2$$

$$\text{The unit-step response: } g(t) = \int_{-\infty}^{t} h(t)\,dt = \left(1 - e^{-\frac{t}{RC}}\right) u(t)$$

$$H(s) = \frac{V_2(s)}{V_1(s)} = \frac{1}{1 + (s/\omega_0)} \implies |H(j\omega)| = \frac{1}{\sqrt{1 + (\omega/\omega_0)^2}} \implies |H(j\omega_0)| = \frac{1}{\sqrt{2}}$$
For 𝐿𝑅 circuit 𝜏 = (1/𝜔0 ) = 𝐿/𝑅 and for 𝑅𝐶 circuit 𝜏 = (1/𝜔0 ) = 𝑅𝐶

When a periodic waveform passes through a low-pass filter, its Fourier coefficients
undergo different gains at different frequencies. Some harmonics may be attenuated more
strongly than others and thus filtered out.
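A minimal simulation of this effect, assuming RC = 0.05 s, a 5-Hz square wave, and
availability of the Control System and Signal Processing Toolboxes:

clear all, clc
RC = 0.05;                      % assumed time constant (seconds)
t = 0:1e-4:1;
v1 = square(2*pi*5*t);          % 5-Hz square wave input
sys = tf(1,[RC 1]);             % H(s) = 1/(1 + RCs)
v2 = lsim(sys,v1,t);            % output: corners rounded, harmonics attenuated
plot(t,v1,'--',t,v2,'-'); grid on
xlabel('t Sec'); ylabel('Input and output');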
IV.II. Practical Integrators: The system function of an integrator is 𝑯(𝑠) = 𝛼/𝑠 and its
frequency response 𝑯(𝜔) = 𝛼/(𝑗𝜔). Integration may be done by an RC (or RL) circuit and an
op-amp. Here are two implementations

IV.III. First-Order High-Pass Filters: High-pass filters may be analyzed by an approach


similar to that for low-pass filters. The transfer function of a first-order high-pass filter is

$$H(s) = \frac{\tau s}{1 + \tau s} \iff H(\omega) = \frac{j\omega}{\omega_0 + j\omega}, \quad\text{with } \omega_0 = \frac{1}{\tau}\ \left(\omega_0 = \frac{R}{L} \text{ or } \omega_0 = \frac{1}{RC}\right)$$

IV.IV. Practical Differentiators: A high-pass filter exhibits the differentiation property at


low frequencies. However, an ideal differentiator needs to operate perfectly over the whole
frequency range. By an approach similar to that of the ideal integrator, the input–output
relationship is found to be 𝑣2 = −𝑅𝐶 (𝑑𝑣1 /𝑑𝑡)
IV.V. First-Order All-Pass Phase Shifters: The following filter has a constant gain 𝐻0
irrespective of the frequency. Its phase, however, varies with frequency. It is called a phase
shifter.
$$H(s) = H_0\left(\frac{s - \omega_0}{s + \omega_0}\right) \implies H(\omega) = H_0\left(\frac{j(\omega/\omega_0) - 1}{j(\omega/\omega_0) + 1}\right) \implies |H(\omega)| = H_0, \quad \theta = -2\tan^{-1}(\omega/\omega_0)$$

The first-order phase shifter is a stable system with a pole at $-\omega_0$ and a zero at $\omega_0$. The figure
below shows plots of phase versus frequency for $\omega_0 = 1, 2, 5$, and 10. In all cases the magnitude
of the phase shift is 90° at $\omega_0$. Within the neighborhood of $\omega_0$, a small frequency perturbation
$\delta\omega$ produces a phase perturbation approximately proportional to $-\delta\omega/\omega_0$, which
translates into a time-shift perturbation on the order of $1/\omega_0$.
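These phase curves can be regenerated with a few lines of MATLAB ($H_0 = 1$ assumed; the
plotted phase is defined up to a 180° offset):

clear all, clc, w = logspace(-2,3,1000);
for w0 = [1 2 5 10]
    theta = -2*atan(w/w0);              % phase of the first-order all-pass
    semilogx(w,theta*180/pi); hold on
end
grid on; xlabel('Frequency in rad/sec'); ylabel('Phase in degrees');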

The first-order phase shifter may be implemented by a passive circuit (with gain
restriction), or by an active circuit as given in figure below.

Let us find the transfer function of the passive circuit. Applying KVL around the outer loop
gives $V_1(s) = V_c(s) + R I_2(s) = (1 + RCs) V_c(s) \implies V_c(s) = (1 + RCs)^{-1} V_1(s)$; from one inner
loop we have

$$\frac{V_1(s)}{1 + RCs} - V_2(s) - \frac{V_1(s)}{2} = 0 \implies \left(\frac{1}{1 + RCs} - \frac{1}{2}\right) V_1(s) = V_2(s) \implies H(s) = \frac{V_2(s)}{V_1(s)} = 0.5\left(\frac{1 - RCs}{1 + RCs}\right)$$

For the active circuit we use the difference amplifier rule (with 𝑣2 = 𝑣1 )

$$V_2(s) = \left\{\left(-\frac{Z_2}{Z_1}\right) + \left(1 + \frac{Z_2}{Z_1}\right)\left(\frac{Z_x}{Z_x + Z_y}\right)\right\} V_1(s) \implies H(s) = \frac{V_2(s)}{V_1(s)} = \left(\frac{1 - RCs}{1 + RCs}\right)$$
IV.VI. Lead and Lag Filters: Lead and lag compensators are first-order filters with a single
pole and zero, chosen such that a prescribed phase lead or lag is produced in a
sinusoidal input. In this way, when placed in a control loop, they reshape an overall system
function to meet desired characteristics (they are used to improve an undesirable frequency
response in feedback systems and are a fundamental building block in classical control
theory). Electrical lead and lag networks can be made of passive RLC elements, or employ
operational amplifiers. In this section we briefly describe their system functions and
frequency responses. Lead-lag compensators influence disciplines as varied as robotics,
satellite control, automobile diagnostics, LCD displays, and laser frequency stabilization.
They are an important building block in analog control systems, and can also be used in
digital control.

▪ Lead Network A lead network made of passive electrical elements is shown in figure,
along with its pole-zero and Bode plots.

Let us find the transfer function of this passive circuit. Applying the loop equation (KVL) to the circuit gives

$$V_1(s) = \left\{\left(R_1 \parallel \frac{1}{Cs}\right) + R_2\right\} I(s) \iff I(s) = \left\{\frac{1 + R_1 Cs}{R_1 + R_2 + R_1 R_2 Cs}\right\} V_1(s)$$

Also, at the output terminal we have $V_2(s) = R_2 I(s)$, so

$$\frac{V_2(s)}{V_1(s)} = \frac{R_2 + R_1 R_2 Cs}{R_1 + R_2 + R_1 R_2 Cs} = \frac{1 + R_1 Cs}{\dfrac{R_1 + R_2}{R_2} + R_1 Cs} = \frac{\left(\dfrac{R_2}{R_1 + R_2}\right)\left(\dfrac{1}{R_1 C} + s\right)}{\dfrac{1}{R_1 C} + \left(\dfrac{R_2}{R_1 + R_2}\right) s}$$
If we denote 𝛼 = 𝑅2 (𝑅1 + 𝑅2 )−1 and 𝜔1 = (𝑅1 𝐶)−1 we obtain

$$H(s) = \frac{V_2(s)}{V_1(s)} = \alpha\,\frac{\omega_1 + s}{\omega_1 + \alpha s} \quad\text{or}\quad H(j\omega) = \alpha\,\frac{\omega_1 + j\omega}{\omega_1 + \alpha j\omega}$$

The system function has a zero at 𝑧 = −𝜔1 and a pole at 𝑝 = −𝜔1 /𝛼 . The pole of the
system is located to the left of its zero, both being on the negative real axis in the s-plane.
The magnitude is normally expressed in dB.

$$H_{dB}(j\omega) = 20\log_{10}|H(j\omega)| = 20\log_{10}\left|\alpha\,\frac{\omega_1 + j\omega}{\omega_1 + \alpha j\omega}\right| = 20\log_{10}(\alpha) + 20\log_{10}\sqrt{1 + \left(\frac{\omega}{\omega_1}\right)^2} - 20\log_{10}\sqrt{1 + \alpha^2\left(\frac{\omega}{\omega_1}\right)^2}$$

$$\boldsymbol{\phi}(\omega) = \tan^{-1}\left(\frac{\omega}{\omega_1}\right) - \tan^{-1}\left(\alpha\frac{\omega}{\omega_1}\right)$$

There is a frequency 𝜔𝑚 at which the lead compensator provides maximum phase lead. This
frequency is important in the design process.

$$\frac{d}{d\omega}\boldsymbol{\phi}(\omega) = \frac{1/\omega_1}{1 + (\omega/\omega_1)^2} - \frac{\alpha/\omega_1}{1 + (\alpha\omega/\omega_1)^2} = 0 \implies 1 + \left(\frac{\alpha\omega}{\omega_1}\right)^2 = \alpha + \alpha\left(\frac{\omega}{\omega_1}\right)^2$$
$$\implies 1 - \alpha = \alpha\left(\frac{\omega}{\omega_1}\right)^2 (1 - \alpha) \implies \alpha\left(\frac{\omega}{\omega_1}\right)^2 = 1 \implies \omega_m = \frac{\omega_1}{\sqrt{\alpha}}$$

The maximum phase lead occurs at the frequency $\omega_m = \omega_1/\sqrt{\alpha}$; this frequency is the
geometric mean of the two cut-off frequencies of the compensator:

$$\log_{10}(\omega_m) = \log_{10}\left(\frac{\omega_1}{\sqrt{\alpha}}\right) = \frac{1}{2}\left(\log_{10}(\omega_1) + \log_{10}\left(\frac{\omega_1}{\alpha}\right)\right)$$

The maximum phase lead contributed by the lead compensator is

$$\boldsymbol{\phi}_m = \boldsymbol{\phi}(\omega_m) = \tan^{-1}\left(\frac{\omega_m}{\omega_1}\right) - \tan^{-1}\left(\alpha\frac{\omega_m}{\omega_1}\right) = \tan^{-1}\left(\frac{1}{\sqrt{\alpha}}\right) - \tan^{-1}(\sqrt{\alpha}) = \tan^{-1}\left(\frac{1 - \alpha}{2\sqrt{\alpha}}\right)$$
$$\implies \tan(\boldsymbol{\phi}_m) = \frac{1 - \alpha}{2\sqrt{\alpha}} \implies \sin(\boldsymbol{\phi}_m) = \frac{1 - \alpha}{1 + \alpha} \implies \boldsymbol{\phi}_m = \sin^{-1}\left(\frac{1 - \alpha}{1 + \alpha}\right)$$
The lead compensator provides a maximum phase lead of

$$\boldsymbol{\phi}_m = \sin^{-1}\left(\frac{1 - \alpha}{1 + \alpha}\right) \quad\text{at}\quad \omega_m = \frac{\omega_1}{\sqrt{\alpha}}, \quad\text{with}\quad \omega_1 = \frac{1}{R_1 C} \ \text{and}\ \alpha = \frac{R_2}{R_1 + R_2}$$
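A numeric check of these two formulas, with assumed element values, and a cross-check of
the phase through MATLAB's bode (Control System Toolbox):

clear all, clc
R1 = 10e3; R2 = 1e3; C = 1e-6;           % assumed element values
alpha = R2/(R1+R2); w1 = 1/(R1*C);
wm = w1/sqrt(alpha)                      % frequency of maximum phase lead
phim = asin((1-alpha)/(1+alpha))*180/pi  % maximum lead in degrees
sys = tf(alpha*[1 w1],[alpha w1]);       % H(s) = alpha*(s+w1)/(alpha*s+w1)
[~,ph] = bode(sys,wm); squeeze(ph)       % should agree with phim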

One may sketch the magnitude of the frequency response in dB and its phase in degrees on
semilog paper by the following qualitative argument. The low-frequency asymptote of the
magnitude plot is the horizontal line at 20 log α dB. The high-frequency asymptote is the
horizontal line at 0 dB. The magnitude plot is a monotonically increasing curve. Traversing
from low frequencies toward high frequencies it first encounters 𝜔1 (the zero). The zero
pushes the magnitude plot up with a slope of 20 dB per decade. As the frequency increases
it encounters 𝜔1 /𝛼 (the pole). The pole pulls the magnitude plot down with a slope of −20
dB per decade and, therefore, starts neutralizing the upward effect of the zero. The
magnitude plot is eventually stabilized at the 0 dB level. The zero and the pole are the
break frequencies of the magnitude plot. If $\alpha \ll 1$, the pole and the zero are far enough
from each other that each break frequency shows a 3-dB deviation from the asymptotes. At $\omega_1$
(the zero) the magnitude is 3 dB above the low-frequency asymptote and the phase is 45°.
At 𝜔1 /𝛼 (the pole) the magnitude is 3 dB below 0 dB (the high-frequency asymptote) and
the phase is 90° − 45° = 45° . The maximum phase lead in the output occurs at 𝜔𝑚 = 𝜔1 /√𝛼,
which is the geometric mean of the two break frequencies.

▪ Lag Network In many respects characteristics of the lag network are mirror images of
those of the lead network.

Let us find the transfer function of this passive circuit. Applying the loop equation (KVL) to the circuit gives

$$\left((R_1 + R_2) + \frac{1}{Cs}\right) I(s) = V_1(s) \implies I(s) = \left(\frac{Cs}{1 + (R_1 + R_2) Cs}\right) V_1(s)$$

And from the output terminal we have

$$V_2(s) = \left(R_2 + \frac{1}{Cs}\right) I(s) = \left(\frac{1 + R_2 Cs}{1 + (R_1 + R_2) Cs}\right) V_1(s) \implies H(s) = \frac{V_2(s)}{V_1(s)} = \frac{1 + R_2 Cs}{1 + (R_1 + R_2) Cs}$$

This can be written as:


$$H(s) = \frac{1 + R_2 Cs}{1 + (R_1 + R_2) Cs} = \left(\frac{R_2}{R_1 + R_2}\right)\frac{s + \dfrac{1}{R_2 C}}{s + \dfrac{1}{(R_1 + R_2) C}}$$
Now let 𝛼 = 𝑅2 /(𝑅1 + 𝑅2 ), 𝜔2 = 1/[𝐶(𝑅1 + 𝑅2 )] then 𝑇 = (𝜔2 /𝛼) = 1/(𝑅2 𝐶)

The transfer function can be rewritten as

$$H(s) = \frac{\alpha s + \omega_2}{s + \omega_2} = \alpha\left(\frac{s + \omega_2/\alpha}{s + \omega_2}\right) = \alpha\left(\frac{s + T}{s + \alpha T}\right) \quad\text{or}\quad H(j\omega) = \frac{\alpha j\omega + \omega_2}{j\omega + \omega_2} = \frac{1 + \alpha j\omega/\omega_2}{1 + j\omega/\omega_2}$$

The phase angle of this filter is given by:


$$\boldsymbol{\phi}(\omega) = \tan^{-1}\left(\alpha\frac{\omega}{\omega_2}\right) - \tan^{-1}\left(\frac{\omega}{\omega_2}\right)$$

There is a frequency $\omega_m$ at which the lag compensator provides maximum phase lag. This
frequency is important in the design process.

$$\frac{d}{d\omega}\boldsymbol{\phi}(\omega) = \frac{\alpha/\omega_2}{1 + (\alpha\omega/\omega_2)^2} - \frac{1/\omega_2}{1 + (\omega/\omega_2)^2} = 0 \implies 1 + \left(\frac{\alpha\omega}{\omega_2}\right)^2 = \alpha + \alpha\left(\frac{\omega}{\omega_2}\right)^2$$
$$\implies 1 - \alpha = \alpha\left(\frac{\omega}{\omega_2}\right)^2 (1 - \alpha) \implies \alpha\left(\frac{\omega}{\omega_2}\right)^2 = 1 \implies \omega_m = \frac{\omega_2}{\sqrt{\alpha}}$$

The maximum phase lag occurs at the frequency $\omega_m = \omega_2/\sqrt{\alpha}$; this frequency is the
geometric mean of the two cut-off frequencies of the compensator:

$$\log_{10}(\omega_m) = \log_{10}\left(\frac{\omega_2}{\sqrt{\alpha}}\right) = \frac{1}{2}\left(\log_{10}(\omega_2) + \log_{10}\left(\frac{\omega_2}{\alpha}\right)\right)$$

The maximum phase lag contributed by the lag compensator is

$$\boldsymbol{\phi}_m = \boldsymbol{\phi}(\omega_m) = \tan^{-1}\left(\alpha\frac{\omega_m}{\omega_2}\right) - \tan^{-1}\left(\frac{\omega_m}{\omega_2}\right) = \tan^{-1}(\sqrt{\alpha}) - \tan^{-1}\left(\frac{1}{\sqrt{\alpha}}\right) = -\tan^{-1}\left(\frac{1 - \alpha}{2\sqrt{\alpha}}\right)$$
$$\implies \tan(\boldsymbol{\phi}_m) = -\frac{1 - \alpha}{2\sqrt{\alpha}} \implies \sin(\boldsymbol{\phi}_m) = -\frac{1 - \alpha}{1 + \alpha} \implies \boldsymbol{\phi}_m = -\sin^{-1}\left(\frac{1 - \alpha}{1 + \alpha}\right)$$

The lag compensator thus provides a maximum phase lag of

$$\boldsymbol{\phi}_m = -\sin^{-1}\left(\frac{1 - \alpha}{1 + \alpha}\right) \quad\text{at}\quad \omega_m = \frac{\omega_2}{\sqrt{\alpha}}, \quad\text{with}\quad \alpha = \frac{R_2}{R_1 + R_2} \ \text{and}\ \omega_2 = \frac{\alpha}{R_2 C}$$

The DC gain is unity (0 dB) and the high-frequency gain is 𝛼 = 𝑅2 /(𝑅1 + 𝑅2 ). The system has
a pole at 𝑝 = −𝜔2 and a zero at 𝑧 = −𝜔2 /𝛼 to the left of the pole, both being on the
negative real axis in the s-plane. The magnitude plot of the lag network is a
horizontal flip of that of the lead network, and the phase is a vertical flip of that of the lead
network. The phase of $H(j\omega)$ varies from 0 (at low frequencies, ω = 0) to a minimum
(i.e., a maximum phase lag) of $(2\tan^{-1}\sqrt{\alpha} - 90°)$ at $\omega_m = \omega_2/\sqrt{\alpha}$ and returns to zero at
high frequencies. Lag compensators are essentially low-pass filters. Therefore, lag
compensation permits a high gain at low frequencies (which improves the steady-state
performance) and reduces gain in the higher critical range of frequencies so as to improve
the phase margin.
Comments: The PD controller can be approximated by a phase-lead filter as follows:
$K_P + K_D s \approx K_P\left[(1 + \alpha T s)/(1 + T s)\right]$, with $T \ll 1$ and $K_D/K_P = \alpha T$, $\alpha > 1$.
The PI controller can be approximated by a phase-lag filter as follows:
$K_P + K_I/s \approx K_I\left[(1 + (K_P/K_I)s)/(\varepsilon + s)\right] \approx K\left[(1 + \alpha T s)/(1 + T s)\right]$, with $K = K_I/\varepsilon$,
$T = 1/\varepsilon \gg 1$, $K_P/K_I = \alpha T$ and $\alpha < 1$.
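As a quick illustration of the PD-to-lead approximation just stated, the sketch below (all
gains and the small time constant T are assumed values) overlays the two magnitude
responses; they agree up to roughly 1/T, where the lead filter levels off.

clear all, clc
Kp = 2; Kd = 0.5; T = 0.01;              % assumed gains; alpha*T = Kd/Kp
alphaT = Kd/Kp;
w = logspace(-1,4,500);
magPD = abs(Kp + Kd*1j*w);                       % ideal PD controller
magLd = abs(Kp*(1 + alphaT*1j*w)./(1 + T*1j*w)); % lead-filter approximation
loglog(w,magPD,'--',w,magLd,'-'); grid on
xlabel('Frequency in rad/sec'); ylabel('|C(j\omega)|');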

▪ Lead-Lag Network A single lag or lead compensator may not satisfy the design
specifications. For an unstable uncompensated system, lead compensation provides a fast
response but may not provide enough phase margin, whereas lag compensation stabilizes
the system but may not provide enough bandwidth. So we may need multiple compensators
in cascade. Given below is the circuit diagram for the phase lag-lead compensation network.

The general form of the transfer function of the lag-lead filter is given by

$$H(s) = \frac{(s + a_1)(s + b_2)}{(s + b_1)(s + a_2)}, \quad\text{with } a_1 < b_1 \text{ and } a_2 < b_2$$

Let us find the transfer function of this passive circuit. Applying the loop equation (KVL) to the circuit gives

$$\left(\left(R_1 \parallel \frac{1}{C_1 s}\right) + R_2 + \frac{1}{C_2 s}\right) I(s) = V_1(s) \implies I(s) = \left(\frac{1}{Z_1 + Z_2}\right) V_1(s)$$

$$\text{with } Z_1 = \left(R_1 \parallel \frac{1}{C_1 s}\right) = \frac{R_1}{1 + R_1 C_1 s} \quad\text{and}\quad Z_2 = \frac{1 + R_2 C_2 s}{C_2 s}$$

$$V_2(s) = Z_2 I(s) = \left(\frac{Z_2}{Z_1 + Z_2}\right) V_1(s) \implies H(s) = \frac{V_2(s)}{V_1(s)} = \frac{Z_2}{Z_1 + Z_2}$$

This can be written as:


$$H(s) = \frac{\left(s + \dfrac{1}{R_1 C_1}\right)\left(s + \dfrac{1}{R_2 C_2}\right)}{s^2 + \left(\dfrac{1}{R_1 C_1} + \dfrac{1}{R_2 C_1} + \dfrac{1}{R_2 C_2}\right) s + \dfrac{1}{R_1 R_2 C_1 C_2}}$$

This meets the requirements for lag/lead compensation with

$$a_1 = \frac{1}{R_1 C_1}, \quad b_2 = \frac{1}{R_2 C_2}, \quad a_1 b_2 = a_2 b_1 \quad\text{and}\quad a_2 + b_1 = a_1 + b_2 + \frac{1}{R_2 C_1}$$

An alternative formula for the lag/lead filter is obtained when we define

𝜔1 = 𝑎1 , 𝑏1 = 𝛽𝜔1 , 𝑏2 = 𝜔2 , 𝑎2 = 𝜔2 /𝛽 and 𝛽 > 1

Therefore,
$$H(s) = \frac{(s + \omega_1)(s + \omega_2)}{(s + \beta\omega_1)(s + \omega_2/\beta)}$$
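The constraint $a_1 b_2 = a_2 b_1$ above can be verified numerically; the element values in the
sketch below are assumed purely for illustration.

clear all, clc
R1 = 100e3; R2 = 10e3; C1 = 1e-6; C2 = 10e-6;   % assumed element values
a1 = 1/(R1*C1); b2 = 1/(R2*C2);
den = [1, 1/(R1*C1)+1/(R2*C1)+1/(R2*C2), 1/(R1*R2*C1*C2)];
p = roots(den);                                  % poles -b1 and -a2
b1 = max(-p); a2 = min(-p);
[a1*b2, a2*b1]                                   % the two products should match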

Remark: The PID controller can be approximated by a phase lead-lag compensator, and
then the three action corrector is just special case of the phase lead-lag compensator.
Example: (Lead Compensator) Given a second-order system $H(s) = 4/(s^2 + 2s)$, design a lead
network to reshape the system into a new form that has a zero at $s = -4.41$ and 3 poles at
$s = -6.4918$ and $s = -6.9541 \pm 8.0592i$.

The transfer functions of the lead compensator is: 𝑪(𝑠) = 𝐾(𝑠 + 𝛼)/(𝑠 + 𝛽). The closed-loop
transfer functions of the uncompensated and compensated system are given, respectively,

$$H_{closed,1}(s) = \frac{4}{s^2 + 2s + 4} \quad\text{and}\quad H_{closed,2}(s) = \frac{166.8s + 735.588}{s^3 + 20.4s^2 + 203.6s + 735.588}$$
In order to examine the transient-response characteristics of the designed closed-loop
system, we shall obtain the unit-step and unit-ramp response curves of the compensated
and uncompensated system with MATLAB.

clear all, clc, t=0:0.01:6;
num=[0 0 4]; den=[1 2 4];                              % uncompensated closed loop
numc=[0 0 166.8 735.588]; denc=[1 20.4 203.6 735.588]; % compensated closed loop
sys1=tf(num,den); sys2=tf(numc,denc); c1=step(sys1,t); c2=step(sys2,t);
plot(t,c1,'-'); hold on; plot(t,c2,'--'); grid on;
title('Unit-Step Responses'); xlabel('t Sec'); ylabel('Outputs')
text(0.4,1.31,'Compensated System')
text(1.55,0.88,'Uncompensated System')

figure

% ramp response = step response of the system cascaded with 1/s
num1=[0 0 0 4]; den1=[1 2 4 0];
num1c=[0 0 0 166.8 735.588]; den1c=[1 20.4 203.6 735.588 0];
sys1=tf(num1,den1); sys2=tf(num1c,den1c); c1=step(sys1,t); c2=step(sys2,t);
plot(t,c1,'-'); hold on; plot(t,c2,'--') ; grid on;
title('Unit-Ramp Responses'); xlabel('t Sec'); ylabel('Outputs')
text(0.4,1.31,'Compensated System')
text(1.55,0.88,'Uncompensated System')
Example: (Lag Compensator) Given a 3rd-order system $H(s) = 1/(0.5s^3 + 1.5s^2 + s)$, design a
lag network to reshape the system into a new form that has a zero at $s = -0.1$ and 4 poles
at $s = -2.3155$, $s = -0.1228$, and $s = -0.2859 \pm 0.5196i$ (i.e., with tracking).

The transfer functions of the lag compensator is: 𝑪(𝑠) = 𝐾(𝑠 + 𝛼)/(𝑠 + 𝛽). The transfer
functions of the compensator and closed-loop compensated system are given, respectively,

$$C(s) = 0.5\,\frac{s + \frac{1}{10}}{s + \frac{1}{100}} \quad\text{and}\quad H_{closed}(s) = \frac{50s + 5}{50s^4 + 150.5s^3 + 101.5s^2 + 51s + 5}$$

In order to examine the transient-response characteristics of the designed closed-loop


system, we shall obtain the unit-step and unit-ramp response curves of the compensated
and uncompensated system with MATLAB.

clear all, clc, num = [1]; den = [0.5 1.5 1 1];


numc = [50 5]; denc = [50 150.5 101.5 51 5];
t = 0:0.1:40; sys1=tf(num,den); sys2=tf(numc,denc);
c1= step(sys1,t); c2 = step(sys2,t);
plot(t,c1,'.'); hold on ;plot(t,c2,'-'); grid on
title('Unit-Step Responses'); xlabel('t Sec'); ylabel('Outputs');
text(12.7,1.27,'Compensated system')
text(12.2,0.7,'Uncompensated system')
figure
%*****Unit-ramp response*****
num1 = [1]; den1 = [0.5 1.5 1 1 0];
num1c = [50 5]; den1c = [50 150.5 101.5 51 5 0];
t = 0:0.1:20; sys1=tf(num1,den1); sys2=tf(num1c,den1c);
y1 = step(sys1,t); y2 = step(sys2,t);
plot(t,y1,'.'); hold on ;
plot(t,y2,'-'); hold on ;
plot(t,t,'--'); hold on ; grid on
title('Unit-Ramp Responses') ;xlabel('t Sec'); ylabel('Outputs')
text(8.3,3,'Compensated system')
text(8.3,5,'Uncompensated system')
Remarks: Lead compensation essentially yields an appreciable improvement in transient
response and a small change in steady-state accuracy. It may accentuate high-frequency
noise effects. Lag compensation, on the other hand, yields an appreciable improvement in
steady-state accuracy at the expense of increasing the transient-response time. Lag
compensation will suppress the effects of high-frequency noise signals. Lag–lead
compensation combines the characteristics of both lead compensation and lag
compensation. The use of a lead or lag compensator raises the order of the system by 1
(unless cancellation occurs between the zero of the compensator and a pole of the
uncompensated open-loop transfer function). The use of a lag–lead compensator raises the
order of the system by 2 [unless cancellation occurs between zero(s) of the lag–lead
compensator and pole(s) of the uncompensated open-loop transfer function], which means
that the system becomes more complex and it is more difficult to control the transient-
response behavior.

IV.VII. Second-Order Low-Pass Filters: A second-order low-pass filter has two poles and
no zeros at finite frequencies. Its system function and frequency response are

$$H(s) = \frac{1}{s^2 + 2\xi\omega_n s + \omega_n^2} \quad\text{and}\quad H(j\omega) = \frac{1}{(\omega_n^2 - \omega^2) + 2j\xi\omega_n\omega}$$

Generally speaking, the second-order systems are low-pass filters (except when the quality
factor 𝑄 = 1/2𝜉 is high). Here we consider the frequency response behavior of such systems.
Based on the quality factor 𝑄 = 1/(2𝜉) , we recognize three cases.

$Q < 0.5$ or $\xi > 1$: (overdamped second-order system) The filter has two distinct poles on
the negative real axis: $s_{1,2} = -\omega_{1,2} = -\xi\omega_n \pm \omega_n\sqrt{\xi^2 - 1}$. Note that $\omega_1\omega_2 = \omega_n^2$ and $\omega_1 + \omega_2 = 2\xi\omega_n$.
Its frequency response (normalized to unit DC gain) may be written as

$$H(j\omega) = \left[\frac{1}{1 + j(\omega/\omega_1)}\right]\left[\frac{1}{1 + j(\omega/\omega_2)}\right]$$

The filter is equivalent to the cascade of two first-order low-pass filters with 3-dB
frequencies at 𝜔1 and 𝜔2 , discussed previously. The magnitude Bode plot is shown on
figure. The plot has three asymptotic lines, with slopes of 0, −20, and −40 dB/decade, for
low, intermediate, and high frequencies, respectively.
𝑄 = 0.5 or 𝜉 = 1: (critically damped second-order system) The filter has a pole of order 2
on the negative real axis at 𝑠 = −𝜔0 , where 𝜔0 = 𝜔𝑛 . This is a critically damped system.
The frequency response may be written as
$$H(j\omega) = \left[\frac{1}{1 + j(\omega/\omega_0)}\right]^2$$

The filter is equivalent to the cascade of two identical first-order low-pass filters with 3-dB
frequencies at 𝜔0 . The Bode plot is shown on figure. The low frequency asymptote is at 0
dB. The high-frequency asymptote is a line with a slope of −40 dB per decade. The break
frequency is at 𝜔0 . Attenuation at 𝜔0 is 6 dB.

𝑄 > 0.5 or 𝜉 < 1: (underdamped second-order system) The filter has two complex
conjugate poles with negative real parts, 𝑠1,2 = −𝜎 ± 𝑗𝜔𝑑 , where 𝜎 = 𝜉𝜔𝑛 and 𝜔𝑑 = 𝜔𝑛 √1 − 𝜉 2 .
The frequency response may be written as

$$H(j\omega) = \frac{1/\omega_n^2}{\left(1 - \left(\dfrac{\omega}{\omega_n}\right)^2\right) + 2j\xi\left(\dfrac{\omega}{\omega_n}\right)}$$

This is an underdamped system. Its poles and its Bode plot are shown in Figure.
clear all, clc, wn=10;
for ksi=0.1:0.1:2
w=logspace(-2,4,1001); s=i*w;
H=1./(s.^2+2*ksi*wn*s+wn^2);
plot(log10(w),angle(H)); grid on, hold on
end
figure
for ksi=0.01:0.1:4
w=logspace(-2,4,1001); s=i*w;
H=1./(s.^2+2*ksi*wn*s+wn^2);
plot(log10(w),20*log10(abs(H))); grid on, hold on
end

Other commands that can be more convenient or flexible for generating Bode plots are
often used. The Bode plot for the current system function may also be generated by the
following program.

clear all, clc,
num=[1 3]; den=[1 515 7500];
sys=tf(num,den); %tf constructs the transfer (i.e., system) function
bode(sys)
grid on

The following set of commands can provide more flexibility and is used for most Bode plots
in this chapter.

clear all, clc,
w=logspace(-1,4,1001);
num=[1 3]; den=[1 515 7500]; sys=tf(num,den);
[mag,phase]=bode(sys,w);              % bode returns 1x1xN arrays
mag=squeeze(mag); phase=squeeze(phase);
subplot(2,1,1); semilogx(w,20*log10(mag)); grid on
subplot(2,1,2); semilogx(w,phase); grid on

IV.VIII. Second-Order High-Pass Filters: The system function and frequency response of a
second-order high-pass filter may be written as

$$H(s) = \frac{s^2}{s^2 + 2\xi\omega_n s + \omega_n^2} \quad\text{and}\quad H(j\omega) = \frac{-\omega^2}{(\omega_n^2 - \omega^2) + 2j\xi\omega_n\omega} \quad\text{with } Q = \frac{1}{2\xi}$$

The filter has a double zero at $s = 0$ (DC) and two poles in the LHP, so it is a stable system.
It works as the opposite of the low-pass filter. The frequency response has a zero
magnitude at 𝜔 = 0, which grows to 1 at 𝜔 = ∞. The low-frequency magnitude asymptote is
a line with a slope of 40 dB per decade. The high-frequency asymptote is the 0 dB
horizontal line. Between the two limits (especially near 𝜔𝑛 ) the frequency response is
shaped by the location of the poles. As in any second-order system, we recognize the three
cases of the filter being overdamped, critically damped, and underdamped. Here again, the
frequency behavior of the filter may be analyzed in a unified way in terms of 𝜔0 = 𝜔𝑛 and 𝑄.
clear all, clc, wn=10;
for ksi=0.1:0.5:2
w=logspace(-1,4,1001);
num=[1 0 0]; den=[1 2*ksi*wn wn^2]; sys=tf(num,den);
[mag,~]=bode(sys,w); mag=squeeze(mag);   % bode returns a 1x1xN array
semilogx(w,20*log10(mag)); hold on
end

IV.IX. Second-Order Bandpass Filters: We can construct a circuit that acts as a band-
pass filter, passing mainly those frequencies within a certain frequency range. The analysis
of a simple second-order band-pass filter (i.e., a filter with two energy-storage elements)
can be conducted by analogy with the preceding discussions of the low-pass and high-pass
filters. The system function and frequency response of the basic second-order bandpass
filter are

$$H(s) = \frac{2\xi\omega_n s}{s^2 + 2\xi\omega_n s + \omega_n^2} \quad\text{and}\quad H(j\omega) = \frac{2j\xi\omega_n\omega}{(\omega_n^2 - \omega^2) + 2j\xi\omega_n\omega} \quad\text{with } Q = \frac{1}{2\xi}$$

The filter has a zero at s = 0 (DC) and two poles at: 𝑠1,2 = −𝜔𝑛 (1 ± √1 − 4𝑄 2 ) /2𝑄, which,
depending on the value of 𝑄, are either real or complex. The frequency response has zero
magnitude at 𝜔 = 0 and ∞. It attains its peak at 𝜔𝑛 , sometimes called the resonance
frequency, where |𝑯(𝜔𝑛 )| = 1. The half-power bandwidth ∆𝜔 is defined as the frequency
range within which |𝑯(𝜔)/𝑯(𝜔𝑛 )| ≥ √2/2 (i.e., the gain is within 3 dB below its maximum,
and thus called the 3-dB bandwidth).
$$\left|\frac{H(\omega)}{H(\omega_n)}\right| = \frac{\sqrt{2}}{2} \iff \frac{4\xi^2\omega_n^2\omega^2}{(\omega_n^2 - \omega^2)^2 + 4\xi^2\omega_n^2\omega^2} = \frac{1}{2} \iff |\omega_n^2 - \omega^2| = 2\xi\omega_n\omega$$

The lower and upper limits of the half-power frequency band are the solutions of the above equation:

$$\omega_{h,l} = \omega_n\left(\sqrt{\xi^2 + 1} \pm \xi\right) = \omega_n\left(\sqrt{\frac{1}{4Q^2} + 1} \pm \frac{1}{2Q}\right) = \frac{\omega_n}{2Q}\left(\sqrt{1 + 4Q^2} \pm 1\right)$$

$$\Delta\omega = \omega_h - \omega_l = \frac{\omega_n}{Q}$$
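A quick numeric verification of the half-power limits (the values of ωn and Q are assumed):

clear all, clc, wn = 10; Q = 2; xi = 1/(2*Q);
wh = wn*(sqrt(xi^2+1)+xi); wl = wn*(sqrt(xi^2+1)-xi);
H = @(w) (2*xi*wn*1j*w)./((wn^2 - w.^2) + 2*1j*xi*wn*w);
abs(H([wl wh]))                  % both values should be 1/sqrt(2) = 0.7071
wh - wl                          % should equal wn/Q = 5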
In the present analysis we observe parallels with the cases of low-pass and high-pass
filters, and, depending on the value of the quality factor Q (which controls the location of a
filter’s poles and thus its bandwidth), we recognize the three familiar states of overdamped
(Q < 0.5), critically damped (Q = 0.5), and underdamped (Q > 0.5). The filter then becomes
wideband (low Q) or narrowband (high Q). The sharpness of the peak is determined by the
quality factor Q. In what follows we will discuss the shape of the Bode plot for three regions
of Q values.

$Q < 0.5$: The system has two distinct negative real poles $s_{1,2} = -\omega_n\left(1 \pm \sqrt{1 - 4Q^2}\right)/(2Q) = -\omega_{1,2}$.
Note that $\omega_1\omega_2 = \omega_n^2$ and $\omega_1 + \omega_2 = \omega_n/Q$.

$$H(\omega) = \frac{1}{Q}\,\frac{j(\omega/\omega_n)}{[1 + j(\omega/\omega_1)][1 + j(\omega/\omega_2)]}$$

The slopes of the asymptotic lines in the magnitude Bode plot are 20, 0, and −20
dB/decade for low, intermediate, and high frequencies, respectively. The filter is a
wideband bandpass filter. It is equivalent to the cascade of a first-order high-pass filter and
a first-order low-pass filter with separate break points.

$Q = 0.5$: The filter has a double pole at $s = -\omega_n$. The frequency response may be written as

$$H(\omega) = \frac{2j(\omega/\omega_n)}{[1 + j(\omega/\omega_n)]^2}$$

The asymptotic slopes of the plot are 20 dB/decade (low frequencies) and −20 dB/decade
(high frequencies). The filter is bandpass, equivalent to the cascade of a first-order high-
pass filter and a first-order low-pass sharing the same break frequency 𝜔𝑛 .

𝑄 > 0.5: The filter has two complex conjugate poles with negative real parts 𝑠1,2 = −𝜎 ± 𝑗𝜔𝑑 ,
where 𝜎 = 𝜔𝑛 /(2𝑄) and 𝜔𝑑 = 𝜔𝑛 (√4𝑄 2 − 1) /(2𝑄). Note that: 𝜎 2 + 𝜔𝑑2 = 𝜔𝑛2 . The asymptotic
slopes of the Bode plot are 20 dB/decade (low frequency) and −20 dB/decade (high
frequency).

High $Q$: For a bandpass system with high $Q$ (e.g., $Q \ge 10$) the high and low 3-dB frequencies
are approximately symmetric about the center frequency:

$$\omega_{h,l} \approx \omega_n \pm \frac{\omega_n}{2Q}$$
IV.X. Second-Order Notch Filters: The system function and frequency response of a
second-order notch filter may be written as

$$H(s) = \frac{s^2 + \omega_n^2}{s^2 + 2\xi\omega_n s + \omega_n^2}, \qquad H(j\omega) = \frac{\omega_n^2 - \omega^2}{\omega_n^2 - \omega^2 + 2j\xi\omega_n\omega} = \frac{1 - (\omega/\omega_n)^2}{1 - (\omega/\omega_n)^2 + 2j\xi(\omega/\omega_n)}$$

The filter has two zeros at ±𝑗𝜔𝑛 (notch frequency) and two poles in the LHP. It works as the
opposite of the bandpass filter. The frequency response has a zero magnitude at 𝜔𝑛 . The
low- and high-frequency magnitude asymptotes are 0-dB horizontal lines. Between the two
limits, especially near 𝜔𝑛 , the frequency response is shaped by the location of the poles. As
in any second-order system, we recognize the three cases of the filter being overdamped,
critically damped, and underdamped. The sharpness of the dip at the notch frequency is
controlled by Q. Higher Qs produce narrower dips. As in the case of bandpass filters we can
define a 3-dB band for the dip. In this case the notch band identifies frequencies around 𝜔𝑛
within which the attenuation is greater than 3 dB. The notch filter described above is
functionally equivalent to subtracting the output of a bandpass filter from the input signal
traversing through a direct path as in figure.
Example: One application of narrow-band filters is in rejecting interference due to AC line
power. Any undesired 60-Hz signal originating in the AC line power can cause serious
interference in sensitive instruments. In medical instruments such as the
electrocardiograph, a 60-Hz notch filter is provided to reduce the effect of this interference
on cardiac measurements. The figure below depicts a circuit in which the effect of 60-Hz
noise is represented by way of a 60-Hz sinusoidal generator connected in series with a signal
source ($\mathbf{V}_S$) representing the desired signal. In this example we design a 60-Hz narrow-
band (or notch) filter to remove the unwanted 60-Hz noise.

Let $R_S = 50\,\Omega$. To determine the appropriate capacitor and inductor values, we write the
transfer function in terms of the LC (notch) impedance $Z_{LC}$:

$$H(\omega) = \frac{V_L(\omega)}{V_S(\omega)} = \frac{R_L}{R_S + R_L + Z_{LC}} = \frac{R_L}{R_S + R_L + \left(\dfrac{j\omega L}{1 - \omega^2 LC}\right)}$$

Note that when $\omega^2 LC = 1$, the impedance of the parallel LC combination is infinite! The frequency $\omega_n = 1/\sqrt{LC}$
is the resonant frequency of the LC circuit. If this resonant frequency were selected to be
equal to 60 Hz, then the series circuit would show an infinite impedance to 60-Hz currents,
and would therefore block the interference signal, while passing most of the other
frequency components. We thus select values of L and C that result in 𝜔𝑛 = 2𝜋 × 60. Let
𝐿 = 100 mH. Then 𝐶 = 1/(𝜔𝑛2 𝐿) = 70.36 𝜇F

clear all, clc, w=0:0.02:100;
% normalized notch response: wn = 1 rad/s and 2*ksi*wn = 1
magGjw=abs((w.^2-1)./(w.^2-i.*w-1));
semilogx(w,magGjw);
xlabel('Frequency in rad/sec - log scale');
ylabel('Magnitude of Vout/Vin');
title('Magnitude characteristics of the basic RLC band-reject (notch) filter');
grid

IV.XI. Second-Order All-Pass Filters: The second-order all-pass filter has a constant gain
$H_0$ and a phase that varies with frequency. The system function and frequency response are

$$H(s) = \frac{s^2 - 2\xi\omega_n s + \omega_n^2}{s^2 + 2\xi\omega_n s + \omega_n^2} = \frac{(s/\omega_n)^2 - 2\xi(s/\omega_n) + 1}{(s/\omega_n)^2 + 2\xi(s/\omega_n) + 1} = 1 - 2\left(\frac{2\xi\omega_n s}{s^2 + 2\xi\omega_n s + \omega_n^2}\right)$$
The operation of the above all-pass filter may be realized by passing the input signal
through a bandpass filter (with a gain 2) and subtracting the output from the signal, as in
Figure below. The filter has a pair of poles in the LHP and a pair of zeros in the RHP mirror-
imaging the poles with respect to the 𝑗𝜔 axis. Pole-zero location and phase response depend
on 𝑄 and 𝜔𝑛 .

V. Design of Causal Analog Filters: In real-time filtering applications, it is not possible to


utilize ideal filters, since they are non-causal. In such applications, it is necessary to use
causal filters which are non-ideal; that is, the transition from the passband to the
stopband (and vice versa) is gradual. In particular, the magnitude functions of causal
versions of low-pass, high-pass, bandpass, and band-stop filters have gradual transitions
from the passband to the stopband. Examples of magnitude functions for these basic types
of filters are shown in Figure

Remark: An important parameter in the design of CT filters is the cut-off frequency 𝜔𝑐 of


the filter, which is defined as the frequency at which the gain of the filter drops to 0.7071
times its maximum value. Assuming a gain of unity within the pass band, the gain at the
cut-off frequency 𝜔𝑐 is given by 0.7071 or −3 dB on a logarithmic scale. Since the cut-off
frequency lies typically within the transitional band of the filter, therefore 𝜔𝑝 ≤ 𝜔𝑐 ≤ 𝜔𝑠

To be able to build a causal filter from circuit components, it is necessary that the filter
transfer function 𝑯(𝑠) be rational in 𝑠. For ease of implementation, the order of 𝑯(𝑠) (i.e.,
the degree of the denominator) should be as small as possible. However, there is always a
trade-off between the magnitude of the order and desired filter characteristics such as the
amount of attenuation in the stopbands and the width of the transition regions.

From the other side, in order to avoid phase distortion in the output of a causal filter, the
phase function should be linear over the passband of the filter. However, the phase
function of a causal filter with a rational transfer function cannot be exactly linear over the
passband, and thus there will always be some phase distortion. The amount of phase
distortion that can be tolerated is often included in the list of filter specifications in the
design process.
In this section we will focus on designing analog filters to meet a set of user specifications.
It is known that filters with ideal characteristics are not realizable since they are non-
causal. In practical applications of filtering we seek realizable approximations to ideal filter
behavior. The requirements on the frequency spectrum of the filter must be relaxed in
order to obtain realizable filters. In the design of analog filters, specifications for the desired
filter are usually given in terms of a set of tolerance limits for the magnitude response
|𝑯(𝜔)|.

A number of approximation formulas exist in the literature for specifying the squared-
magnitude of a realizable low-pass filter spectrum. Some of the better known formulas will
be mentioned here: Butterworth filters, Chebyshev filters, elliptic filters and others.

Butterworth lowpass filters are characterized by the squared-magnitude function


$$|H(\omega)|^2 = \left[1 + \left(\frac{\omega}{\omega_c}\right)^{2N}\right]^{-1}$$
where the parameters 𝑁 and 𝜔𝑐 are the filter-order and the cutoff frequency respectively.

As an alternative to the Butterworth approximation formula, the Chebyshev type-I


approximation formula for the squared-magnitude function of a lowpass filter is
$$|H(\omega)|^2 = \frac{1}{1 + \varepsilon^2 C_N^2\left(\dfrac{\omega}{\omega_{c1}}\right)}$$

where 𝜀 is a positive constant. The frequency 𝜔𝑐1 is the passband edge frequency. The
function 𝐶𝑁 (𝜈) in the denominator represents the Chebyshev polynomial of order N.

The Chebyshev type-II approximation formula for the squared-magnitude function is similar
to the Chebyshev type-I formula: $|H(\omega)|^2 = \varepsilon^2 C_N^2(\omega_{c2}/\omega)/\left[1 + \varepsilon^2 C_N^2(\omega_{c2}/\omega)\right]$. The
parameter $\omega_{c2}$ is the stopband edge frequency.

Finally, an elliptic lowpass filter is characterized by the squared-magnitude function

$$|H(\omega)|^2 = \frac{1}{1 + \varepsilon^2 \psi_N^2\left(\dfrac{\omega}{\omega_c}\right)}$$

The parameter 𝜀 is a positive constant. The function 𝜓𝑁 (𝜈) is called a Chebyshev rational
function, and is defined in terms of Jacobi elliptic functions.

V.I Butterworth Filters: Consider again the Butterworth squared-magnitude function


given above. At the frequency 𝜔 = 𝜔𝑐 the magnitude is equal to 1/√2. Since this
approximately corresponds to the −3 dB point on a logarithmic scale, the parameter 𝜔𝑐 is
also referred to as the 3-dB cutoff frequency of the Butterworth filter. Furthermore, it can
easily be shown that the first 2𝑁 − 1 derivatives of the magnitude spectrum |𝑯(𝜔)| are equal
to zero at 𝜔 = 0, that is,
$$\frac{d^n}{d\omega^n}\left\{|H(\omega)|\right\}\bigg|_{\omega=0} = 0, \qquad n = 1, \dots, 2N - 1$$

The magnitude characteristic |𝑯(𝜔)| for a Butterworth filter is said to be maximally flat.
Magnitude spectra for Butterworth filters of various orders are shown in Figure.
clear all, clc, wc=2; N=20; w=-10:0.1:10;
magGjw=abs(sqrt(1./(1+(w/wc).^(2*N))));
plot(w,magGjw); grid on
xlabel('Frequency in rad/sec'); ylabel('Magnitude of H(s)');
title('Magnitude of Butterworth filter');

We know that the squared-magnitude function can be expressed as the product of the
system function and its complex conjugate, that is |𝑯(𝜔)|2 = 𝑯(𝜔)𝑯⋆ (𝜔). In addition, since
the impulse response ℎ(𝑡) of the filter is real-valued, its system function exhibits conjugate
symmetry: 𝑯⋆ (𝜔) = 𝑯(−𝜔) so we can write |𝑯(𝜔)|2 = 𝑯(𝜔)𝑯(−𝜔). Using the s-domain
system function 𝑯(𝑠), the problem of designing a Butterworth filter can be stated as
follows: Given the values of the two filter parameters 𝜔𝑐 and 𝑁, find the s-domain system
function 𝑯(𝑠) for which the squared-magnitude of the function 𝑯(𝜔) matches the defined
equation. For a system function 𝑯(𝑠) with real coefficients, it can be shown that

$$|H(\omega)|^2 = H(s)H(-s)\Big|_{s=j\omega} \quad\longleftrightarrow\quad H(s)H(-s) = \left[1 + \left(\frac{-s^2}{\omega_c^2}\right)^N\right]^{-1}$$

If 𝑯(𝑠) has a pole in the left half s-plane at


𝑝1 = 𝜎1 + 𝑗𝜔1 , then 𝑯(−𝑠) has a corresponding
pole at 𝑝̅1 = − 𝜎1 − 𝑗𝜔1. Thus the poles of 𝑯(𝑠)
are mirror images of the poles of 𝑯(−𝑠) with
respect to the origin. In order to extract 𝑯(𝑠)
from the product 𝑯(𝑠)𝑯(−𝑠), we need to find the
poles of this product, and separate the two sets
of poles that belong to 𝑯(𝑠) and
𝑯(−𝑠) respectively. The cases of even and odd
filter-orders N need to be treated separately.

We use the following rotating complex numbers

𝑒 𝑗𝑘𝜋/𝑁 and 𝑒 𝑗(2𝑘+1)𝜋/2𝑁

Poles of $H(s)H(-s)$ for the Butterworth lowpass filter:

$$p_k = \begin{cases}\omega_c\, e^{jk\pi/N}, & k = 0, \dots, 2N - 1, & \text{if } N \text{ is odd}\\ \omega_c\, e^{j(2k+1)\pi/2N}, & k = 0, \dots, 2N - 1, & \text{if } N \text{ is even}\end{cases}$$
Remark: All the poles of the product 𝑯(𝑠)𝑯(−𝑠) are located on a circle with radius equal to
𝜔𝑐 . Furthermore, the poles are equally spaced on the circle, and the angle between any two
adjacent poles is 𝜋/𝑁 radians. Since 𝑯(𝑠) and 𝑯(−𝑠) have only real coefficients, all complex
poles appear in conjugate pairs.

We are interested in obtaining a filter that is both causal and stable. It is therefore
necessary to use the poles in the left half of the s-plane for 𝑯(𝑠). The remaining poles, the
ones in the right half of the s-plane, must be associated with 𝑯(−𝑠). The system function
$H(s)$ for the Butterworth low-pass filter is constructed in the form

$$H(s) = \frac{\omega_c^N}{\prod_k (s - p_k)}, \quad\text{for all } k \text{ that satisfy } \frac{\pi}{2} < \angle p_k < \frac{3\pi}{2} \ \text{(the left-half-plane poles)}$$
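This construction can be carried out in a few lines of MATLAB; the sketch below (N and ωc
are assumed values, N even) builds H(s) from the left-half-plane poles and compares the
result with butter from the Signal Processing Toolbox.

clear all, clc, N = 4; wc = 2;        % assumed order (even) and cutoff
k = 0:2*N-1;
p = wc*exp(1j*(2*k+1)*pi/(2*N));      % poles of H(s)H(-s) for even N
p = p(real(p) < 0);                   % keep the left-half-plane poles
den = real(poly(p)); num = wc^N;      % H(s) = wc^N / prod(s - p_k)
[numB,denB] = butter(N,wc,'s');       % MATLAB's analog Butterworth design
[den; denB]                           % the two rows should agree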

Example: Assume that we require a filter such that the passband attenuation does not exceed
1 dB for frequencies below $0.2\pi$ rad/s, and the stopband attenuation is greater than
15 dB at the frequency $0.3\pi$ rad/s.

Solution: From the magnitude specifications we have

$$\text{at } \omega_1 = 0.2\pi: \ \ 20\log_{10}\left[1 + \left(\frac{\omega_1}{\omega_c}\right)^{2N}\right]^{-1/2} = -1, \qquad \text{at } \omega_2 = 0.3\pi: \ \ 20\log_{10}\left[1 + \left(\frac{\omega_2}{\omega_c}\right)^{2N}\right]^{-1/2} = -15$$

This means that

$$\left(\frac{\omega_1}{\omega_c}\right)^{2N} = 10^{0.1} - 1 \implies 2N\log_{10}\left(\frac{\omega_1}{\omega_c}\right) = \log_{10}(10^{0.1} - 1) = -0.5868$$

$$\left(\frac{\omega_2}{\omega_c}\right)^{2N} = 10^{1.5} - 1 \implies 2N\log_{10}\left(\frac{\omega_2}{\omega_c}\right) = \log_{10}(10^{1.5} - 1) = 1.4860$$

Subtracting the two equations eliminates $\omega_c$:

$$2N\log_{10}\left(\frac{\omega_1}{\omega_2}\right) = -0.5868 - 1.4860 = -2.0728 \implies N = \frac{-2.0728}{2\log_{10}(2/3)} = 5.8858$$

and back-substitution into the first equation then gives $\omega_c \approx 0.70474$.

But $N$ must be an integer, so we round up to the nearest integer value: $N = 6$. After
determining $N$, we compute the corrected $\omega_c$ from $20\log_{10}\left[1 + (\omega_1/\omega_c)^{2N}\right]^{-1/2} = -1$ and
obtain $\omega_c = 0.7032$.

There are three pole pairs

𝑝1,2 = −0.1820 ± 𝑖0.6792, 𝑝3,4 = −0.4972 ± 𝑖0.4972, 𝑝5,6 = −0.6792 ± 𝑖0.1820

$$H(s) = \frac{\omega_c^N}{\prod_k (s - p_k)} = \frac{0.12093}{(s^2 + 0.3640s + 0.4945)(s^2 + 0.9945s + 0.4945)(s^2 + 1.3585s + 0.4945)}$$

clear all, clc, wc=0.7032; N=6; w=0:0.01:3;


magGjw=abs(sqrt(1./(1+(w/wc).^(2*N))));
plot(w,magGjw); grid on
xlabel('Frequency in rad/sec');
ylabel('Magnitude of H(s)');
title('Magnitude of Butterworth filter');
V.II Chebyshev Filters: The Chebyshev Type I filters are based on approximations derived
from the Chebyshev polynomials which constitute a set of orthogonal functions. The
squared-magnitude function for a Chebyshev low-pass filter
is given by
$$|H(\omega)|^2 = \frac{\alpha^2}{1 + \varepsilon^2 C_N^2\left(\dfrac{\omega}{\omega_{c1}}\right)}$$

Parameter 𝜀 is a positive constant, 𝑁 is the order of the filter,


and 𝜔𝑐1 is the passband edge frequency in rad/s. The term
𝐶𝑁 (𝜈) = 𝐶𝑁 (𝜔/𝜔𝑐1 ) represents the Chebyshev polynomial of
order 𝑁. Chebyshev filters of this type are sometimes called
Chebyshev type-I filters.

The Chebyshev polynomial of order 𝑁 is defined as 𝐶𝑁 (𝜈) = cos(𝑁 cos −1 (𝜈)). A better
approach for understanding the definition 𝐶𝑁 (𝜈) = cos(𝑁 cos −1(𝜈)) would be to split it into
two equations as 𝜈 = cos(𝜃) and 𝐶𝑁 (𝜈) = cos(𝑁𝜃)

The Chebyshev polynomials of the first kind are obtained from the recurrence relation

𝐶𝑁+1 (𝜈) = 2𝜈𝐶𝑁 (𝜈) − 𝐶𝑁−1 (𝜈)

Let us prove this identity. We know that $\cos(x + y) + \cos(x - y) = 2\cos(x)\cos(y)$; with the
change of variables $x = \theta$ and $y = N\theta$ we get

$$\cos((N + 1)\theta) + \cos((N - 1)\theta) = 2\cos(\theta)\cos(N\theta) \implies \cos((N + 1)\cos^{-1}\nu) + \cos((N - 1)\cos^{-1}\nu) = 2\nu\cos(N\cos^{-1}\nu)$$

Therefore, from the definition we obtain $C_{N+1}(\nu) = 2\nu C_N(\nu) - C_{N-1}(\nu)$, so we can generate
these polynomials recursively:

$$C_0(\nu) = 1, \qquad C_1(\nu) = \cos(\cos^{-1}\nu) = \nu, \qquad C_2(\nu) = \cos(2\cos^{-1}\nu) = 2\nu^2 - 1, \;\dots$$

(Remember that with $\theta = \cos^{-1}(\nu)$ we have $\cos(2\cos^{-1}\nu) = \cos(2\theta) = 2\cos^2(\theta) - 1 = 2\nu^2 - 1$.)

For example,

$$C_3(\nu) = 2\nu(2\nu^2 - 1) - \nu = 4\nu^3 - 3\nu$$
$$C_4(\nu) = 2\nu(4\nu^3 - 3\nu) - (2\nu^2 - 1) = 8\nu^4 - 8\nu^2 + 1$$

Remarks: It should be noted that the polynomial $C_N^2(\nu) = \cos^2(N\cos^{-1}\nu)$ oscillates between
zero and unity as $\nu$ varies between $-1$ and $1$; but if $\nu$ is greater than one, then
$\cos^{-1}(\nu)$ is imaginary, so $C_N(\nu)$ becomes a hyperbolic (monotonically growing) function.

What happens if $\nu > 1$? The definition of the Chebyshev polynomial becomes

$$C_N(\nu) = \begin{cases}\cos(N\cos^{-1}(\nu)) & \text{if } |\nu| \le 1\\ \cosh(N\cosh^{-1}(\nu)) & \text{if } \nu > 1\end{cases}$$
A few interesting characteristics of Chebyshev polynomials can be readily observed:
▪ All Chebyshev polynomials pass through the point (1, 1), that is, 𝐶𝑁 (1) = 1 for all 𝑁.
▪ For all Chebyshev polynomials, if |𝜈| ≤ 1 then |𝐶𝑁 (𝜈)| ≤ 1.
▪ For |𝜈| > 1 all Chebyshev polynomials grow monotonically without bound.
▪ At 𝜈 = 0 the behavior of Chebyshev polynomials depends on the order 𝑁.
𝐶𝑁 (𝜈) = 0 if 𝑁 is odd, and 𝐶𝑁 (𝜈) = ±1 if 𝑁 is even.
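The recurrence is easy to verify numerically against the closed form $\cos(N\cos^{-1}\nu)$; a
minimal MATLAB sketch for orders up to 4:

clear all, clc, v = linspace(-1,1,201);
C = zeros(5,length(v)); C(1,:) = 1; C(2,:) = v;   % C0 and C1
for N = 2:4
    C(N+1,:) = 2*v.*C(N,:) - C(N-1,:);            % the recurrence relation
end
max(abs(C(5,:) - cos(4*acos(v))))                 % error at machine-precision level
plot(v,C); grid on; xlabel('\nu'); ylabel('C_N(\nu)');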

$$|H(\omega_{c1})|^2 = \frac{\alpha^2}{1 + \varepsilon^2 C_N^2(1)} = \frac{\alpha^2}{1 + \varepsilon^2}$$

$$|H(0)|^2 = \frac{\alpha^2}{1 + \varepsilon^2 C_N^2(0)} = \begin{cases}\alpha^2 & \text{if } N \text{ is odd}\\[4pt] \dfrac{\alpha^2}{1 + \varepsilon^2} & \text{if } N \text{ is even}\end{cases}$$

The relation $|H(\omega_{c1})|^2 = \alpha^2/(1 + \varepsilon^2)$ states, in other words, that the passband is the range
over which the ripple oscillates with constant bounds; this is the range from DC to $\omega_{c1}$. From
this formula we observe that only when $\varepsilon = 1$ is the magnitude at the cutoff frequency equal
to $\alpha/\sqrt{2}$, i.e., the same as in other types of filters. When $0 < \varepsilon < 1$, the attenuation at
$\omega_{c1}$ is less than 3 dB, so the conventional 3-dB cutoff frequency lies above $\omega_{c1}$.

Poles of $H(s)H(-s)$ for the Chebyshev type-I low-pass filter:

$$p_k = j\omega_{c1}\left[\cos(\alpha_k)\cosh(\beta_k) - j\sin(\alpha_k)\sinh(\beta_k)\right], \qquad k = 0, \dots, 2N - 1$$

$$\alpha_k = \frac{(2k + 1)\pi}{2N} \quad\text{and}\quad \beta_k = \frac{\sinh^{-1}(1/\varepsilon)}{N}, \qquad k = 0, \dots, 2N - 1$$

Proof: The poles 𝑝𝑘 of 𝑯(𝑠)𝑯(−𝑠) are the solutions of the equation 1 + 𝜀 2 𝐶𝑁2 (𝑝𝑘 /𝑗𝜔𝑐1 ) = 0
for 𝑘 = 0, . . . ,2𝑁 – 1. Let us define 𝜈𝑘 = 𝑝𝑘 /𝑗𝜔𝑐1 so that 1 + 𝜀 2 𝐶𝑁2 (𝜈𝑘 ) = 0

Using the definition of the Chebyshev polynomial we can rewrite 1 + 𝜀 2 𝐶𝑁2 (𝜈𝑘 ) = 0 as

1 + 𝜀 2 cos 2 (𝑁𝜃𝑘 ) = 0

where 𝜈𝑘 = cos(𝜃𝑘 ). Solving for cos(𝑁𝜃𝑘 ) yields: cos(𝑁𝜃𝑘 ) = ±(𝑗/𝜀)


Let 𝜃𝑘 = 𝛼𝑘 + 𝑗𝛽𝑘 with 𝛼𝑘 and 𝛽𝑘 both as real parameters, then cos(𝑁𝜃𝑘 ) = ±(𝑗/𝜀) becomes
𝑗
cos(𝑁𝛼𝑘 + 𝑗𝑁𝛽𝑘 ) = ±
𝜀
which, using the appropriate trigonometric identity, can be written as
𝑗
cos(𝑁𝛼𝑘 ) cos(𝑗𝑁𝛽𝑘 ) − sin(𝑁𝛼𝑘 ) sin(𝑗𝑁𝛽𝑘 ) = ±
𝜀
Recognizing that cos(𝑗𝑁𝛽𝑘 ) = cosh(𝑁𝛽𝑘 ) and sin(𝑗𝑁𝛽𝑘 ) = 𝑗 sinh(𝑁𝛽𝑘 ) , we obtain
𝑗
cos(𝑁𝛼𝑘 ) cosh(𝑁𝛽𝑘 ) − 𝑗 sin(𝑁𝛼𝑘 ) sinh(𝑁𝛽𝑘 ) = ±
𝜀
Equating real and imaginary parts of both sides of this equation yields

cos(𝑁𝛼𝑘 ) cosh(𝑁𝛽𝑘 ) = 0 and sin(𝑁𝛼𝑘 ) sinh(𝑁𝛽𝑘 ) = ±1/𝜀

To satisfy cos(𝑁𝛼𝑘 ) cosh(𝑁𝛽𝑘 ) = 0 the cosine term must be set equal to zero, leading to

(2𝑘 + 1)𝜋
cos(𝑁𝛼𝑘 ) = 0 ⟹ 𝛼𝑘 = 𝑘 = 0, . . . , 2𝑁 – 1
2𝑁
Using this value of 𝛼𝑘 in equation sin(𝑁𝛼𝑘 ) sinh(𝑁𝛽𝑘 ) = ±1/𝜀 results in
sinh−1 (1/𝜀)
sinh(𝑁𝛽𝑘 ) = ±1 and 𝛽𝑘 =
𝑁
The poles of the product 𝑯(𝑠)𝑯(−𝑠) can now be determined. Using 𝑝𝑘 = 𝑗𝜔𝑐1 𝜈𝑘

𝑝𝑘 = 𝑗𝜔𝑐1 𝜈𝑘
= 𝑗𝜔𝑐1 cos(𝜃𝑘 )
= 𝑗𝜔𝑐1 [cos(𝛼𝑘 ) cosh(𝛽𝑘 ) − 𝑗 sin(𝛼𝑘 ) sinh(𝛽𝑘 )], 𝑘 = 0, . . . , 2𝑁 – 1

It can be shown that those poles are on an elliptical trajectory. The ones in the left half s-
plane are associated with 𝑯(𝑠) in order to obtain a causal and stable filter.
Example: Design a third-order Chebyshev type-I analog low-pass filter with a passband
edge frequency of 𝜔𝑐1 = 1 rad/s and 𝜀 = 0.4. Afterwards compute and graph the magnitude
response of the designed filter.

Solution: The parameter 𝛼𝑘 is found as 𝛼𝑘 = (2𝑘 + 1)𝜋/6 , 𝑘 = 0, . . . , 5 . and the parameter


𝛽𝑘 is obtained as 𝛽𝑘 = sinh−1 (1/0.4) /3 = 0.5491. Poles of 𝑯(𝑠)𝑯(−𝑠) are found through the
use of: 𝑝𝑘 = 𝑗𝜔𝑐1 [cos(𝛼𝑘 ) cosh(𝛽𝑘 ) − 𝑗 sin(𝛼𝑘 ) sinh(𝛽𝑘 )], 𝑘 = 0, . . . , 2𝑁 – 1. Evaluating this
equation for 𝑘 = 0, . . . , 5 yields the following poles:

𝑝0 = 0.2885 + 𝑗 𝑝3 = − 0.2885 − 𝑗
𝑝1 = 0.5771 𝑝4 = − 0.5771
𝑝2 = 0.2885 − 𝑗 𝑝5 = − 0.2885 + 𝑗

The first three poles, namely 𝑝0 , 𝑝1 and 𝑝2 are in the right half s-plane; therefore they
belong to 𝑯(−𝑠). The system function 𝑯(𝑠) should be constructed using the remaining three
poles.
$$H(s) = \frac{0.6250}{(s + 0.2885 + j)(s + 0.5771)(s + 0.2885 - j)} = \frac{0.6250}{s^3 + 1.1542s^2 + 1.4161s + 0.6250}$$

The numerator of $H(s)$ was adjusted to achieve $|H(0)| = 1$. The magnitude and the phase
of $H(s)$ are shown in the figure.

clear all, clc, w=0:0.01:5;
% |H(jw)| computed directly; no square root is taken because this expression
% is the frequency response H(jw) itself, not the squared magnitude
magG=abs((0.6250)./((j*w).^3+1.1542*(j*w).^2+1.4161*(j*w)+0.6250));
plot(w,magG); grid on
xlabel('Frequency in rad/sec');
ylabel('Magnitude of H(s)');
title('Magnitude of Chebyshev type-I analog low-pass filter');
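As a cross-check (assuming the Signal Processing Toolbox is available), MATLAB's
Chebyshev type-I prototype should reproduce this H(s); ε = 0.4 corresponds to a passband
ripple of 10 log10(1 + ε²) dB.

clear all, clc
epsi = 0.4; Rp = 10*log10(1+epsi^2);   % ripple in dB equivalent to eps = 0.4
[z,p,k] = cheb1ap(3,Rp);               % normalized prototype, passband edge 1 rad/s
[num,den] = zp2tf(z,p,k)               % compare with the H(s) obtained above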
In some design problems it is desired to find the lowest-order filter that satisfies a set of
design specifications. The desired behavior of the low-pass filter is typically specified in
terms of the two critical frequencies 𝜔1 and 𝜔2 as well as the dB tolerance values for the
passband and the stopband respectively.

Example: Design a Chebyshev low-pass filter such that $|H(\omega_{c1})| = 0.84$ for $\omega_{c1} = 150$ rad/s
and $|H(\omega_2)| = 0.0316$ for $\omega_2 = 300$ rad/s, given that $|H(\omega)|^2 = \left[1 + \varepsilon^2 C_N^2(\omega/\omega_{c1})\right]^{-1}$.

Solution: At the cut-off frequency we have

$$C_N\left(\frac{\omega_{c1}}{\omega_{c1}}\right) = \cos(N\cos^{-1}(1)) = 1 \implies |H(\omega_{c1})| = \frac{1}{\sqrt{1 + \varepsilon^2}} \implies \varepsilon = \sqrt{\frac{1 - |H(\omega_{c1})|^2}{|H(\omega_{c1})|^2}} \approx 0.646$$

And in the stopband range we have

$$|H(\omega_2)|^2 = \frac{1}{1 + \varepsilon^2 C_N^2(\omega_2/\omega_{c1})} \implies C_N\left(\frac{\omega_2}{\omega_{c1}}\right) = \sqrt{\frac{1 - |H(\omega_2)|^2}{\varepsilon^2 |H(\omega_2)|^2}} = 48.981$$

But $\omega_2 > \omega_{c1} \implies C_N(\omega_2/\omega_{c1}) = \cosh\left(N\cosh^{-1}(\omega_2/\omega_{c1})\right)$, so

$$N = \frac{\cosh^{-1}\left(C_N(\omega_2/\omega_{c1})\right)}{\cosh^{-1}(\omega_2/\omega_{c1})} = 3.48$$
We round up 𝑁 to an integer value so 𝑁 = 4
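The same order follows from cheb1ord (Signal Processing Toolbox) once the magnitude
tolerances are converted to dB:

clear all, clc
wp = 150; ws = 300;
Rp = -20*log10(0.84);                % passband ripple, about 1.51 dB
Rs = -20*log10(0.0316);              % stopband attenuation, about 30 dB
[N,wn] = cheb1ord(wp,ws,Rp,Rs,'s')   % expected result: N = 4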

V.III Inverse Chebyshev Low-Pass Filters: The squared-magnitude function for an inverse
Chebyshev low-pass filter, also referred to as a Chebyshev type-II low-pass filter, is

$$|H(\omega)|^2 = \frac{\varepsilon^2 C_N^2\left(\dfrac{\omega_{c2}}{\omega}\right)}{1 + \varepsilon^2 C_N^2\left(\dfrac{\omega_{c2}}{\omega}\right)}$$

As in the case of Chebyshev type-I filters, 𝜀 is a positive


constant and 𝐶𝑁 (𝜈) represents the Chebyshev polynomial of
order N. The parameter 𝜔𝑐2 is the stopband edge frequency.
The magnitude response of the Chebyshev type-II filter is
smooth in the passband and has equiripple behavior in the
stopband.

The denominator of the squared magnitude function |𝑯(𝜔)|2 is very similar to that of the
type-I Chebyshev filter squared magnitude response given before. Consequently, most of
the results obtained in the previous section through the derivation of the poles of
Chebyshev type-I low-pass filter will be usable. For an inverse Chebyshev filter the poles 𝑝𝑘
of the product 𝑯(𝑠)𝑯(−𝑠) are the solutions of the equation

$$1 + \varepsilon^2 C_N^2\left(\frac{j\omega_{c2}}{p_k}\right) = 0 \qquad\text{for } k = 0, \dots, 2N - 1.$$

Poles of $H(s)H(-s)$ for the Chebyshev type-II low-pass filter:

$$p_k = \frac{j\omega_{c2}}{\cos(\alpha_k)\cosh(\beta_k) - j\sin(\alpha_k)\sinh(\beta_k)}, \qquad k = 0, \dots, 2N - 1$$

$$\alpha_k = \frac{(2k + 1)\pi}{2N} \quad\text{and}\quad \beta_k = \frac{\sinh^{-1}(1/\varepsilon)}{N}, \qquad k = 0, \dots, 2N - 1$$
Zeros of $H(s)$ for the Chebyshev type-II low-pass filter:

$$z_k = \pm\frac{j\omega_{c2}}{\cos\left(\dfrac{(2k + 1)\pi}{2N}\right)}, \quad k = 0, \dots, K \quad\text{with}\quad K = \begin{cases}(N - 1)/2 & \text{if } N \text{ is odd}\\ N/2 & \text{if } N \text{ is even}\end{cases}$$

Example: Design of low-pass filters using MATLAB code (i.e., all four approximation methods).

clear all, clc, w = linspace(-50,50);


wp=5; % corner frequency of the pass band
ws=20; % corner frequency of the stop band,
Rp=1.9382; % Rp=-20*log10(0.8)
Rs=13.9794; % Rs=-20*log10(0.2)
[N,wc]=buttord (wp,ws,Rp,Rs,'s'); % order and cut-off freq
[num,den]=butter (N,wc,'s'); % determine num and denom
Ht = tf(num,den); H= freqs(num,den,w);
plot(w,abs(H),'r','linewidth',1.5); grid on

clear all, clc, w = linspace(-200,200);
wp=50; % corner frequency of the pass band
ws=100; % corner frequency of the stop band,
Rp=1; % Rp=-20*log10(0.8913)
Rs=15; % Rs=-20*log10(0.1778)
[N,wn]=cheb1ord(wp,ws,Rp,Rs,'s'); % order and natural freq
[num,den]=cheby1(N,Rp,wn,'s'); % determine num and denom
Ht = tf(num,den);H= freqs(num,den,w);
plot(w,abs(H),'r','linewidth',1.5); grid on

clear all, clc, w = linspace(-500,500);
wp=50; % corner frequency of the pass band
ws=100; % corner frequency of the stop band,
Rp=1; % Rp=-20*log10(0.8913)
Rs=15; % Rs=-20*log10(0.1778)
[N,wn]=cheb2ord(wp,ws,Rp,Rs,'s'); % order and natural freq
[num,den]=cheby2(N,Rs,wn,'s'); % determine num and denom (use wn returned by cheb2ord)
Ht = tf(num,den); H= freqs(num,den,w);
plot(w,abs(H),'r','linewidth',1.5); grid on

clear all, clc, w = linspace(-500,500);
wp=50; % corner frequency of the pass band
ws=100; % corner frequency of the stop band,
Rp=1; % Rp=-20*log10(0.8913)
Rs=15; % Rs=-20*log10(0.1778)
[N,wn]=ellipord (wp,ws,Rp,Rs,'s'); % order and natural freq
[num,den]=ellip(N,Rp,Rs,wn,'s'); % determine num and denom
Ht = tf(num,den); H= freqs(num,den,w);
plot(w,abs(H),'r','linewidth',1.5); grid on
V.IV Analog filter transformations: In preceding sections we have discussed the design of
analog low-pass filters. Design formulas for Butterworth, Chebyshev and elliptic squared-
magnitude functions were presented only for low-pass filters. If a different filter type such
as a high-pass, bandpass or band-reject filter is needed, it is obtained from a low-pass filter
by means of an analog filter transformation.

Let G(s) be the system function of an analog lowpass filter, and let H(λ) represent the new filter to be obtained from it. For the new filter we use λ as the Laplace transform variable. H(λ) is obtained from G(s) through a transformation such that H(λ) = G(s)|_{s = f(λ)}. The function s = f(λ) is the transformation that converts the lowpass filter into the type of filter desired.

 Low-pass to high-pass: It is desired to obtain the high-pass filter system function H(λ) from the lowpass filter system function G(s). The transformation s = ω₀/λ with ω₀ = ω_L1 ω_H2 can be used for this purpose. The magnitudes of the two filters are identical at their respective passband edge frequencies, that is, |H(ω_H2)| = |G(ω_L1)|. The stopband edges of the two filters are related by ω_L2 ω_H1 = ω_L1 ω_H2, so that we have |H(ω_H1)| = |G(ω_L2)|.
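In MATLAB this substitution can be delegated to the helper lp2hp (assumed available). A minimal sketch using the third-order Chebyshev prototype of the example that follows:

num = 0.6250; den = [1 1.1542 1.4161 0.6250];  % lowpass prototype G(s)
w0 = 3;                             % w0 = wL1*wH2 with wL1 = 1 rad/s, wH2 = 3 rad/s
[numh,denh] = lp2hp(num,den,w0);    % highpass H(lambda) via s = w0/lambda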

Example: Recall that a third-order Chebyshev type-I analog low-pass filter was designed before with a cutoff frequency of ω_c1 = 1 rad/s and ε = 0.4. The system function of the filter was found to be

    H(s) = 0.6250 / (s³ + 1.1542 s² + 1.4161 s + 0.6250)

Convert this filter to a high-pass filter with a critical frequency of 𝜔𝐻2 = 3 rad/s. Afterwards
compute and graph the magnitude response of the designed filter.
Solution: The critical frequency of the low-pass filter is ω_L1 = 1 rad/s; therefore, we will use the transformation s = ω₀/λ with ω₀ = ω_L1 ω_H2 = 3, which leads to the system function
    H(λ) = 0.6250 / [(3/λ)³ + 1.1542 (3/λ)² + 1.4161 (3/λ) + 0.6250]
         = 0.6250 λ³ / (0.6250 λ³ + 4.2482 λ² + 10.3875 λ + 27)

clear all, clc, w = -5:0.01:5;
HH = (0.6250*(j*w).^3)./(0.6250*(j*w).^3 + 4.2482*(j*w).^2 + 10.3875*(j*w) + 27);
magG = abs(HH);                      % |H(jw)| (no square root needed)
plot(w, magG); grid on
xlabel('Frequency in rad/sec'); ylabel('Magnitude of H(s)');
title('Magnitude of Chebyshev type-I analog high-pass filter');

Example: Design a high-pass Butterworth filter with the following specifications:

    stop band (0 ≤ |λ| ≤ 50 rad/s):   20 log₁₀ |H(λ)| ≤ −15 dB;
    pass band (|λ| > 100 rad/s):      −1 dB ≤ 20 log₁₀ |H(λ)| ≤ 0.

Solution: We have ω_H2 = 100 rad/s. Using the transformation λ = ω₀/ω with ω₀ = ω_L1 ω_H2, and assuming a normalized low-pass prototype with passband edge ω_L1 = 1 rad/s (so that ω₀ = 100), the corresponding low-pass filter specifications are

    0 ≤ |λ| ≤ 50 rad/s   ⟺   |ω| = ω₀/|λ| ≥ ω₀/50 = 2 rad/s
    |λ| > 100 rad/s      ⟺   |ω| < ω₀/100 = 1 rad/s

In other words, we can rewrite the specifications as

    stop band (2 ≤ |ω| < ∞ rad/s):   20 log₁₀ |H(ω)| ≤ −15 dB;
    pass band (|ω| < 1 rad/s):       −1 dB ≤ 20 log₁₀ |H(ω)| ≤ 0.
The above specifications are used to design a normalized low-pass Butterworth filter. The transfer function of this low-pass filter is given by

    H(s) = 2.8909 / (s⁴ + 3.407 s³ + 5.8050 s² + 5.7934 s + 2.8909)
To derive the transfer function of the required high-pass filter, we use the transformation
𝑠 = 𝜔0 /𝜆 with 𝜔0 = 100 rad/s. The transfer function of the highpass filter is given by

    H(λ) = 2.8909 / [(100/λ)⁴ + 3.407 (100/λ)³ + 5.8050 (100/λ)² + 5.7934 (100/λ) + 2.8909]
         = λ⁴ / (λ⁴ + 2.004×10² λ³ + 2.008×10⁴ λ² + 1.179×10⁶ λ + 3.459×10⁷)
The MATLAB code for the design of the high-pass filter required in this example using the
Butterworth, Type I Chebyshev, Type II Chebyshev, and elliptic implementations is
included below. In each case, MATLAB automatically designs the high-pass filter. No
explicit transformations are needed.

clear all, clc, w = linspace(-200,200);
% design specifications for high-pass Butterworth filter
wp=100;  % corner frequency of the pass band
ws=50;   % corner frequency of the stop band
Rp=1;    % Rp=-20*log10(0.8913)
Rs=15;   % Rs=-20*log10(0.1778)
[N,wc]=buttord(wp,ws,Rp,Rs,'s');    % order and cut-off freq
[num,den]=butter(N,wc,'high','s');  % determine num and denom
Ht = tf(num,den); H = freqs(num,den,w);
plot(w,abs(H),'r','linewidth',1.5); grid on; figure

% design specifications for high-pass Type I Chebyshev filter
[N,wn] = cheb1ord(wp,ws,Rp,Rs,'s');
[num2,den2] = cheby1(N,Rp,wn,'high','s');
H2 = tf(num2,den2); H = freqs(num2,den2,w);
plot(w,abs(H),'r','linewidth',1.5); grid on; figure

% design specifications for high-pass Type II Chebyshev filter
[N,wn] = cheb2ord(wp,ws,Rp,Rs,'s');
[num3,den3] = cheby2(N,Rs,wn,'high','s');
H3 = tf(num3,den3); H = freqs(num3,den3,w);
plot(w,abs(H),'r','linewidth',1.5); grid on, figure

% design specifications for high-pass Elliptic filter
[N,wn] = ellipord(wp,ws,Rp,Rs,'s');
[num4,den4] = ellip(N,Rp,Rs,wn,'high','s');
H4 = tf(num4,den4); H = freqs(num4,den4,w);
plot(w,abs(H),'r','linewidth',1.5); grid on
 Low-pass to bandpass transformation: Specification diagrams in figure below show a
low-pass filter with magnitude response |𝑮(𝑗𝜔)| and a bandpass filter with magnitude
response |𝑯(𝑗𝜔)|.

It is desired to obtain the bandpass filter system function H(λ) from the low-pass filter G(s). The transformation s = (λ² + ω₀²)/(Bλ), with ω₀ = √(ω_B2 ω_B3) and B = (ω_B3 − ω_B2)/ω_L1, can be used for this purpose. We require |H(ω_B2)| = |G(−ω_L1)| and |H(ω_B3)| = |G(ω_L1)|.

The parameter ω₀ is the geometric mean of the passband edge frequencies of the bandpass filter. The parameter B is the ratio of the bandwidth of the bandpass filter to the bandwidth of the low-pass filter.
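A minimal sketch of the same mapping with the helper lp2bp (assumed available); the normalized third-order Butterworth prototype and the edge frequencies are illustrative assumptions, not the filters designed below:

[num,den] = butter(3,1,'s');           % normalized lowpass prototype, wL1 = 1 rad/s
Wo = sqrt(100*200); Bw = 200-100;      % assumed passband edges wB2 = 100, wB3 = 200 rad/s
[numbp,denbp] = lp2bp(num,den,Wo,Bw);  % bandpass via s = (lambda^2+Wo^2)/(Bw*lambda)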

clear all, clc, w = linspace(-500,500);
wp=[100 200]; ws=[50 380]; Rp=2; Rs=20;
% Butterworth bandpass filter
[N,wc] = buttord(wp,ws,Rp,Rs,'s');
[num1,den1] = butter(N,wc,'s');
H1 = tf(num1,den1); H = freqs(num1,den1,w);
plot(w,abs(H),'r','linewidth',1.5); grid on, figure
% Type I Chebyshev bandpass filter
[N,wn] = cheb1ord(wp,ws,Rp,Rs,'s');
[num2,den2] = cheby1(N,Rp,wn,'s');
H2 = tf(num2,den2); H = freqs(num2,den2,w);
plot(w,abs(H),'r','linewidth',1.5); grid on, figure
% Type II Chebyshev bandpass filter
[N,wn] = cheb2ord(wp,ws,Rp,Rs,'s');
[num3,den3] = cheby2(N,Rs,wn,'s');
H3 = tf(num3,den3); H = freqs(num3,den3,w);
plot(w,abs(H),'r','linewidth',1.5); grid on, figure
% Elliptic bandpass filter
[N,wn] = ellipord(wp,ws,Rp,Rs,'s');
[num4,den4] = ellip(N,Rp,Rs,wn,'s');
H4 = tf(num4,den4); H = freqs(num4,den4,w);
plot(w,abs(H),'r','linewidth',1.5); grid on
The type of filter is specified by the dimensions of the pass-band and stop-band frequency vectors. Since ω_p and ω_s are both vectors, MATLAB knows that either a bandpass or a band-stop filter is being designed. From the range of the values within ω_p and ω_s, MATLAB is also able to differentiate whether a bandpass or a band-stop filter is being specified. In the above example, since the range (50 to 380 rad/s) of frequencies specified within the stop-band frequency vector ω_s encloses the range (100 to 200 rad/s) specified within the pass-band frequency vector ω_p, MATLAB is able to make the final determination that a bandpass filter is being designed. For a band-stop filter, the converse would hold true.

 Low-pass to band-reject transformation: Specification diagrams in the figure below show a lowpass filter with magnitude response |G(ω)| and a band-reject filter with magnitude response |H(ω)|.

In order to obtain the band-reject filter system function H(λ) from the lowpass filter system function G(s), the transformation to be used is of the form s = ω_L1 (ω_S4 − ω_S1) λ/(λ² + ω_S1 ω_S4), that is,

    s = Bλ/(λ² + ω₀²),   with ω₀ = √(ω_S1 ω_S4),   B = (ω_S4 − ω_S1) ω_L1

For a low-pass filter to be converted into a band-reject filter we require

    |H(ω_S4)| = |G(−ω_L1)|   and   |H(ω_S1)| = |G(ω_L1)|
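The corresponding helper is lp2bs (assumed available). A minimal sketch with an assumed normalized prototype and illustrative stopband edges:

[num,den] = butter(3,1,'s');           % normalized lowpass prototype
Wo = sqrt(48*52); Bw = 52-48;          % assumed stopband edges wS1 = 48, wS4 = 52 rad/s
[numbs,denbs] = lp2bs(num,den,Wo,Bw);  % band-reject via s = Bw*lambda/(lambda^2+Wo^2)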

clear all, clc, w = linspace(-500,500);
wp=[100 370]; ws=[150 250]; Rp=2; Rs=20;
% Butterworth band-stop Filter
[N, wn] = buttord(wp,ws,Rp,Rs,'s');
[num1,den1] = butter(N,wn,'stop','s'); H1 = tf(num1,den1);
% Type I Chebyshev band-stop filter
[N, wn] = cheb1ord(wp,ws,Rp,Rs,'s');
[num2,den2] = cheby1(N,Rp,wn,'stop','s'); H2 = tf(num2,den2);
% Type II Chebyshev band-stop filter
[N,wn] = cheb2ord(wp,ws,Rp,Rs,'s');
[num3,den3] = cheby2(N,Rs,wn,'stop','s'); H3 = tf(num3,den3);
% Elliptic band-stop filter
[N,wn] = ellipord(wp,ws,Rp,Rs,'s');
[num4,den4] = ellip(N,Rp,Rs,wn,'stop','s'); H4 = tf(num4,den4);
H= freqs(num4,den4,w); plot(w,abs(H),'r','linewidth',1.5); grid on
VI. Design of Digital Filters: In its most general form, a digital filter is a system that will
receive an input in the form of a discrete-time signal and produce an output again in the form
of a discrete-time signal. There are many types of discrete-time systems that fall under this
category such as digital control systems, encoders, and decoders. What differentiates
digital filters from other digital systems is the nature of the processing involved. As in
analog filters, there is a requirement that the spectrum of the output signal be related to
that of the input by some rule of correspondence.

In this section we will briefly discuss the design of discrete-time filters. Discrete-time filters
are viewed under two broad categories: infinite impulse response (IIR) filters, and finite
impulse response (FIR) filters. The system function of an IIR filter has both poles and zeros.
Consequently the impulse response of the filter is of infinite length. It should be noted,
however, that the impulse response of a stable IIR filter must also be absolutely summable,
and must therefore decay over time. In contrast, the behavior of an FIR filter is controlled
only by the placement of the zeros of its system function. For causal FIR filters all of the
poles are at the origin, and they do not contribute to the magnitude characteristics of the
filter.

For a given set of specifications, an IIR filter is generally more efficient than a comparable
FIR filter. On the other hand, FIR filters are always stable. Additionally, a linear phase
characteristic is possible with FIR filters whereas causal and stable IIR filters cannot have
linear phase. The significance of a linear phase characteristic is that the time delay is
constant for all frequencies. This is desirable in some applications, and requires the use of
an FIR filter.
VI.I Design of IIR filters: The most common method of designing IIR filters is to start with
an appropriate analog filter, and to convert its system function to the system function of a
discrete-time filter by means of some transformation. Designing a discrete-time filter by
this approach involves a three step procedure:

1. The specifications of the desired discrete-time filter are converted to the specifications of
an appropriate analog filter that can be used as a prototype. Let the desired discrete-time
filter be specified through critical frequencies 𝛺1 and 𝛺2 along with tolerance values 𝛥1 and
𝛥2 . Analog prototype filter parameters 𝜔1 and 𝜔2 need to be determined. (If the filter type is
bandpass or band-reject, two additional frequencies need to be specified.)

2. An analog prototype filter that satisfies the design specifications in step 1 is designed. Its
system function 𝑮(𝑠) is constructed.

3. The analog prototype filter is converted to a discrete-time filter by means of a transformation. Specifically, a z-domain system function H(z) is obtained from the analog prototype system function G(s).

Example: (Impulse invariant design of a low-pass filter) Consider the third-order Chebyshev type-I analog low-pass filter designed before:

    G(s) = 0.6250 / (s³ + 1.1542 s² + 1.4161 s + 0.6250)
Convert this filter to a discrete-time filter using the impulse invariance technique with
𝑇 = 0.2 𝑠. Afterwards compute and graph the magnitude response of the discrete-time
filter.

Solution: The system function G(s) can be written in partial fraction form as

    G(s) = (−0.2885 − j0.0833)/(s + 0.2886 − j) + (−0.2885 + j0.0833)/(s + 0.2886 + j) + 0.5771/(s + 0.5771)

The system function of the discrete-time filter H(z) is found by using the impulse invariant method as

    G(s) = Σ_{k=1}^{N} h_k/(s − p_k)   ⟹   H(z) = Σ_{k=1}^{N} T h_k z/(z − e^{p_k T})

    H(z) = (−0.0577 − j0.0167) z/(z − 0.9251 − j0.1875) + (−0.0577 + j0.0167) z/(z − 0.9251 + j0.1875) + 0.1154 z/(z − 0.8910)

The closed form expression for 𝑯(𝑧) is

    H(z) = (0.0023 z² + 0.0021 z)/(z³ − 2.7412 z² + 2.5395 z − 0.7939)
clear all, clc, w=0:0.01:pi; z=exp(j*w);
num=0.0023*(z).^2+0.0021*(z);
den=((z).^3-2.7412*(z).^2+2.5395*(z)-0.7939);
H=(num)./(den);
magG=abs(H); plot(w,magG); grid on
xlabel('Frequency in rad/sec'); ylabel('Magnitude of H(z)');
title('Magnitude of Digital low-pass filter ');

Note that aliasing can be kept at a negligible level with the choice of T, and the analog frequency ω₁ = 1 rad/s corresponds to the discrete-time frequency Ω = 0.2 radians. In general there are some limitations on the choice of T; sometimes the choice is irrelevant, and we often use T = 1 s for simplicity.
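The hand computation can be cross-checked with the toolbox function impinvar (assumed available), which uses the same T-scaled convention h[n] = T g(nT) as the formula above:

num = 0.6250; den = [1 1.1542 1.4161 0.6250];
fs = 1/0.2;                       % T = 0.2 s
[bz,az] = impinvar(num,den,fs);   % expect az ~ [1 -2.7412 2.5395 -0.7939]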

Example: A low-pass IIR filter is to be designed with the following specifications:

𝛺1 = 0.2𝜋 , 𝛺2 = 0.25𝜋 , 𝑅𝑝 = 1 dB, 𝐴𝑠 = 30 dB

Impulse invariance technique is to be used for converting an analog prototype filter to a discrete-time filter. Determine the specifications of the analog prototype if the sampling interval is to be (a) T = 1 s and (b) T = 2 s.
Solution: a. Using 𝑇 = 1 𝑠, the critical frequencies of the analog prototype filter are

𝜔1 = 𝛺1 /𝑇 = 0.2𝜋, 𝜔2 = 𝛺2 /𝑇 = 0.25𝜋

The dB tolerance limits are unchanged: 𝑅𝑝 = 1 dB, 𝐴𝑠 = 30 dB

Let the system function for the analog prototype filter be 𝑮1 (𝜔) yielding an impulse
response 𝐠1 (𝑡). The impulse response of the discrete-time filter is 𝒉1 [𝑛] = 𝐠1 (𝑛𝑇) = 𝐠1 (𝑛)

b. With 𝑇 = 2𝑠, the critical frequencies of the analog prototype filter are

𝜔1 = 𝛺1 /𝑇 = 0.1𝜋, 𝜔2 = 𝛺2 /𝑇 = 0.125𝜋

The dB tolerance limits are unchanged as before.

Let the system function for the analog prototype filter be G₂(ω) so that its impulse response is g₂(t). The impulse response of the discrete-time filter is obtained by sampling g₂(t) every T = 2 seconds, that is, h₂[n] = g₂(nT) = g₂(2n). We have thus obtained two discrete-time filters with the two choices of the sampling interval T. What is the relationship between these two filters? Let us realize that G₁(0.2π) = G₂(0.1π) and G₁(0.25π) = G₂(0.125π). Generalizing these relationships we have G₁(ω) = G₂(ω/2). Based on the scaling property of the Fourier transform this implies that g₁(n) = g₂(2n), and therefore h₁[n] = h₂[n]. The two IIR filters designed are identical and independent of the choice of T. ■
 The bilinear transformation provides a one-to-one mapping from the s-plane to the z-
plane. The mapping equation is given by

    s = (2/T)(z − 1)/(z + 1)   and   ω = (2/T) tan(Ω/2)

Let us prove this statement starting from the mapping z = e^{sT} and using the trapezoidal rule:

    z = e^{sT} = e^{sT/2}/e^{−sT/2} ≈ (2 + Ts)/(2 − Ts)   ⟺   s = (2/T)(z − 1)/(z + 1)

To derive the frequency characteristics of the bilinear transformation, we substitute z = e^{jΩ} and s = jω in the bilinear transformation. The resulting expression is

    jω = (2/T)(e^{jΩ} − 1)/(e^{jΩ} + 1) = (2/T)(e^{jΩ/2} − e^{−jΩ/2})/(e^{jΩ/2} + e^{−jΩ/2}) = j(2/T) sin(Ω/2)/cos(Ω/2) = j(2/T) tan(Ω/2)

so that

    ω = (2/T) tan(Ω/2)

This is called the pre-warping equation.
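A short sketch plotting the pre-warping curve against the linear mapping ω = Ω/T makes the compression near Ω = π visible (T = 1 s assumed):

T = 1; Omega = linspace(0,0.95*pi,200);
plot(Omega,(2/T)*tan(Omega/2),Omega,Omega/T,'--'); grid on
xlabel('\Omega (rad)'); ylabel('\omega (rad/s)');
legend('prewarped (2/T)tan(\Omega/2)','linear \Omega/T','Location','northwest');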

Example: Using bilinear transformation design a Butterworth low-pass filter with the
following specifications: 𝛺1 = 0.2𝜋 , 𝛺2 = 0.36𝜋 , 𝑅𝑝 = 2 dB, 𝐴𝑠 = 20 dB

Solution: We will use 𝑇 = 1𝑠. The critical frequencies of the analog prototype filter are
found using the pre-warping equation applied to 𝛺1 and 𝛺2 :

    ω₁ = 2 tan(0.2π/2) = 0.6498 rad/s,   ω₂ = 2 tan(0.36π/2) = 1.2692 rad/s

Let us find the order of the Butterworth low-pass filter:
    10 log₁₀[1 + (ω₁/ω_c)^{2N}] = R_p   ⟹   (ω₁/ω_c)^{2N} = 10^{R_p/10} − 1
    10 log₁₀[1 + (ω₂/ω_c)^{2N}] = A_s   ⟹   (ω₂/ω_c)^{2N} = 10^{A_s/10} − 1

By division of the last two equations we get

    (ω₁/ω₂)^{2N} = [10^{R_p/10} − 1]/[10^{A_s/10} − 1]   ⟹   2N log₁₀(ω₁/ω₂) = log₁₀([10^{R_p/10} − 1]/[10^{A_s/10} − 1])

    N = log₁₀(√([10^{R_p/10} − 1]/[10^{A_s/10} − 1]))/log₁₀(ω₁/ω₂) = 3.8326

The filter order needs to be chosen as N = 4. The 3-dB cutoff frequency is found by setting the magnitude at the pass-band edge to −R_p dB, by solving

    (ω₁/ω_c)^{2N} = 10^{R_p/10} − 1   ⟺   (0.6498/ω_c)⁸ = 10^{2/10} − 1   ⟹   ω_c = 0.6949 rad/s.

The analog prototype filter can now be designed using Butterworth low-pass filter design
technique described before with the values of 𝑁 and 𝜔𝑐 found. The system function is

    G(s) = 0.2331 / (s⁴ + 1.8157 s³ + 1.6485 s² + 0.8767 s + 0.2331)
The system function for the discrete-time filter is found through bilinear transformation
using the replacement
    s = (2/T)(1 − z⁻¹)/(1 + z⁻¹)

which results in

    H(z) = 10⁻³ (5.9612 z⁴ + 23.845 z³ + 35.767 z² + 23.845 z + 5.9612)/(z⁴ − 2.2659 z³ + 2.1534 z² − 0.9595 z + 0.1674)

clear all, clc, w = linspace(-5,5); z = exp(j*w); fs = 1;
wp=0.6498;  % corner frequency of the pass band
ws=1.2692;  % corner frequency of the stop band
Rp=2; Rs=20;
[N,wc]=buttord(wp,ws,Rp,Rs,'s');   % order and cut-off freq
[nums,dens]=butter(N,wc,'s');      % determine num and denom
Ht = tf(nums,dens); H = freqs(nums,dens,w);
plot(w,abs(H),'r','linewidth',1.5); grid on; figure
[bz,az] = bilinear(nums,dens,fs);  % digital coefficients via bilinear (T = 1/fs)
% closed-form coefficients obtained above:
numz = 5.9612*(z).^4 + 23.845*(z).^3 + 35.767*(z).^2 + 23.845*z + 5.9612;
denz = (z).^4 - 2.2659*(z).^3 + 2.1534*(z).^2 - 0.9595*z + 0.1674;
H = 1e-03*(numz)./(denz); magG = abs(H); plot(w,magG); grid on
xlabel('Frequency in rad/sec'); ylabel('Magnitude of H(z)');
title('Magnitude of Digital low-pass IIR filter');
VI.II Design of FIR filters: A length-N FIR filter is completely characterized by its finite-
length impulse response 𝒉[𝑛] for 𝑛 = 0, . . . , 𝑁 − 1. The system function for such a filter is
computed as
    H(z) = Σ_{k=0}^{N−1} h[k] z^{−k} = (1/z^{N−1}) (h[0] z^{N−1} + h[1] z^{N−2} + ⋯ + h[N−2] z + h[N−1])

which leads us to the following conclusions:

1. A length-N FIR filter has a system function that is of order N − 1.
2. The system function H(z) has N − 1 zeros and as many poles (all of the poles are at the origin z = 0).
3. The placement of zeros of the filter is determined by the sample amplitudes of the impulse response h[n].
4. In contrast, all poles of the length-N FIR filter are at the origin, independent of the impulse response. Therefore, FIR filters are inherently stable.
5. The magnitude response of the filter is determined only by the locations of the zeros, since the poles at the origin do not contribute to the magnitude response. (A numerical illustration is sketched below.)
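The sketch below illustrates conclusions 2-5 for an assumed length-5 impulse response: the zeros come from the coefficients h[n], while all four poles sit at the origin (the z⁴ in the denominator).

h = [3 2 1 2 3];      % assumed filter coefficients
zeros_H = roots(h)    % the N-1 = 4 zeros placed by h[n]
% All poles are at z = 0, so only the zeros shape |H|; zplane(h,1) visualizes this.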

The sample amplitudes of the impulse response 𝒉[𝑛] for 𝑛 = 0, . . . , 𝑁 − 1 are often referred to
as the filter coefficients. In this section we will present two approaches to the problem of
designing FIR filters: one very simplistic approach that is nevertheless instructive, and one
elegant approach that is based on computer optimization. In a general sense, the design
procedure for FIR filters consists of the following steps:

1. Start with the desired frequency response. Select the appropriate length N of the filter.
This choice may be an educated guess, or may rely on empirical formulas.
2. Choose a design method that attempts to minimize, in some way, the difference between
the desired frequency response and the actual frequency response that results.
3. Determine the filter coefficients 𝒉[𝑛] using the design method chosen.
4. Compute the system function and decide if it is satisfactory. If not, repeat the process
with a different value of 𝑁 and/or a different design method.

It was discussed earlier in this chapter that one of the main reasons for preferring FIR
filters over IIR filters is the possibility of a linear phase characteristic. Linear phase is
desirable since it leads to a time-delay characteristic that is constant independent of
frequency. As far as real-time implementations of IIR and FIR filters are concerned, we have
seen that IIR filters are mathematically more efficient and computationally less demanding
of hardware resources compared to FIR filters. If linear phase is not a significant concern in
a particular application, then an IIR filter may be preferred. If linear phase is a
requirement, on the other hand, an FIR filter must be chosen even though its
implementation may be more costly. Therefore, in the discussion of FIR design we will
focus on linear-phase FIR filters.

Example: A length-5 FIR filter has the impulse response

    h[n] = {3, 2, 1, 2, 3},   n = 0, . . . , 4

Show that the phase characteristic of H(Ω) is a linear function of Ω.

Solution: The DTFT of the impulse response is H(Ω) = 3 + 2e^{−jΩ} + e^{−j2Ω} + 2e^{−j3Ω} + 3e^{−j4Ω}. Let us factor out e^{−j2Ω} and write the result as

    H(Ω) = [3e^{j2Ω} + 2e^{jΩ} + 1 + 2e^{−jΩ} + 3e^{−j2Ω}] e^{−j2Ω}

The expression in square brackets contains symmetric exponentials. Using Euler's formula, it becomes

    A(Ω) = 3e^{j2Ω} + 2e^{jΩ} + 1 + 2e^{−jΩ} + 3e^{−j2Ω} = 1 + 4 cos(Ω) + 6 cos(2Ω)

Here A(Ω) is purely real, so H(Ω) = A(Ω)e^{−j2Ω}. The phase response of the filter is

    φ(Ω) = −2Ω

corresponding to a time delay of n_d = 2 samples. The magnitude and the phase of the system function are shown in Figure.

clear all, clc, w=-pi:0.01:pi; z=exp(j*w);
H = (3*(z).^2 + 2*(z) + 1 + 2*(z).^(-1) + 3*(z).^(-2));   % A(Omega)
magG = abs(H); plot(w,magG); grid on; figure
xlabel('Frequency in rad/sec'); ylabel('Magnitude of H(z)');
title('Magnitude of Digital FIR filter');
angG = angle(exp(-j*2*w)); plot(w, angG); grid on;   % wrapped phase -2*Omega

We are now ready to discuss FIR filter design. First the Fourier series design method will be
discussed. The use of window functions to overcome the fundamental problems
encountered will be explored. Afterwards we will briefly discuss the Parks-McClellan
technique.
 Fourier Design of FIR Filters: Generally, FIR filters are designed directly from the
impulse response of an ideal low-pass filter. It can be shown that the impulse response of
an ideal low-pass filter is a sinc function, and therefore that an ideal low-pass filter is non-
causal and IIR. Now, we are going to show that a causal low-pass FIR filter can be obtained
by delaying the ideal impulse response by 𝑀 time units and truncating the impulse
response.

An ideal low-pass discrete-time filter with a cutoff frequency 𝛺𝑐 has the system function

    H_d(Ω) = { 1,  |Ω| < Ω_c
             { 0,  Ω_c < |Ω| < π

The filter has zero phase and zero time delay.

The impulse response of the ideal discrete-time low-pass filter is found by taking the
inverse DTFT of 𝑯𝑑 (𝛺) given by

    h_d[n] = (1/2π) ∫_{−π}^{π} H_d(Ω) e^{jΩn} dΩ = (1/2π) ∫_{−Ω_c}^{Ω_c} e^{jΩn} dΩ = (Ω_c/π) sinc(Ω_c n/π)

The result obtained for h_d[n] is valid for all n. Therefore, h_d[n] is infinitely long, and cannot be the impulse response of an FIR filter. On the other hand, the sample amplitudes of h_d[n] get smaller as the sample index is increased in both directions.

A finite-length impulse response can be obtained by truncating h_d[n] as follows:

    h_T[n] = { h_d[n],  |n| ≤ M
             { 0,       otherwise

The truncated impulse response has 2𝑀 + 1 samples for 𝑛 = −𝑀, . . . , 𝑀. Truncation of the
ideal impulse response causes the spectrum of the filter to deviate from the ideal spectrum.
The system function for the resulting filter is
    H_T(Ω) = Σ_{n=−∞}^{∞} h_T[n] e^{−jΩn} = Σ_{n=−M}^{M} h_d[n] e^{−jΩn}
The truncated impulse response 𝐡𝑇 [𝑛] is still non-causal owing to the fact that the filter has
zero phase. In order to obtain a causal filter, a delay of 𝑀 samples must be incorporated
into the impulse response, resulting in: 𝐡[𝑛] = 𝐡𝑇 [𝑛 − 𝑀] 𝑛 = 0,1 . . . ,2𝑀

Thus, 𝐡[𝑛] corresponds to a causal FIR filter. Using the time shifting property of the DTFT,
the system function 𝑯(𝛺) of the causal filter is related to 𝑯 𝑇 (𝛺) by 𝑯(𝛺) = 𝑯 𝑇 (𝛺)𝑒 −𝑗𝛺𝑀

The addition of the M-sample delay only affects the phase of the system function and not
its magnitude. Since 𝐡[𝑛] is both causal and finite-length, it is the impulse response of a
valid FIR filter.

Remark: It is worth observing that the impulse response 𝐡𝑑 [𝑛] is an even function of 𝑛.
Even symmetry is preserved in 𝐡𝑇 [𝑛]. Finally, when 𝐡[𝑛] is obtained by time shifting 𝐡𝑇 [𝑛]
by 𝑀 samples, the symmetry necessary for linear phase is preserved. Therefore, any filter
designed using this technique will have linear phase.

Example: (FIR Design by Fourier Method) Using the Fourier series method, design a length-
15 FIR low-pass filter to approximate an ideal low-pass filter with 𝛺𝑐 = 0.3𝜋 rad.

Solution: The impulse response of the ideal low-pass filter is


    h_d[n] = (0.3π/π) sinc(0.3πn/π) = 0.3 sinc(0.3n)

Since 𝑁 = 2𝑀 + 1 = 15 we have 𝑀 = 7. The truncated impulse response is

    h_T[n] = { 0.3 sinc(0.3n),  |n| ≤ 7
             { 0,               otherwise

The impulse response of the FIR filter is 𝐡[𝑛] = 𝐡𝑇 [𝑛 − 𝑀] = 0.3 sinc(0.3(𝑛 − 7)); 𝑛 = 0,1 . . . ,14
The magnitude response of the designed filter is shown in the figure produced by the following code.

clear all, clc, n=[0:14]; w=-pi:0.01:pi; k=0; H=0;
for n=0:14
m=n+1; h(m)=0.3*sinc(0.3*(n-7));
end
for n=0:14
m=n+1; H= H + h(m).*exp(-j*w*n);
end
subplot(221); plot(w,abs(H),'r','linewidth',3); grid on
subplot(222); plot(w,angle(H),'r','linewidth',3), grid on,
A by-product of truncating the impulse response is the oscillatory behavior of the frequency
response that is particularly evident around the cutoff frequency 𝛺𝑐 . This effect was
observed as the Gibbs phenomenon in earlier chapters. An alternative way of expressing the
truncation of 𝐡𝑑 [𝑛] is 𝐡𝑇 [𝑛] = 𝐡𝑑 [𝑛]𝐰[𝑛] where 𝐰[𝑛] is a rectangular window function

    w[n] = { 1,  |n| ≤ M
           { 0,  otherwise

Based on the multiplication property of the DTFT, the spectrum of 𝐡𝑇 [𝑛] is the convolution
of the spectra of 𝐡𝑑 [𝑛] and 𝐰[𝑛].
    H_T(Ω) = (1/2π) ∫_{−π}^{π} H_d(λ) W(Ω − λ) dλ

As the spectra in Figure above reveal, the reason behind the oscillatory behavior of 𝑯 𝑇 (𝛺)
especially around 𝛺 = 𝛺𝑐 is the shape of the spectrum 𝑾(𝛺). The high-frequency content of
𝑾(𝛺), on the other hand, is mainly due to the abrupt transition of the rectangular window
from unit amplitude to zero amplitude at its two edges. The solution is to use an alternative
window function, one that smoothly tapers down to zero at its edges, in place of the
rectangular window. The chosen window function must also be an even function of 𝑛 in
order to keep the symmetry of 𝐡𝑇 [𝑛] for linear phase. A large number of window functions
exist in the literature. A few of them are listed below:

■ Triangular (Bartlett) window:

    w[n] = 1 − |n|/M,   |n| ≤ M

■ Hamming window:

    w[n] = 0.54 − 0.46 cos(π(n + M)/M),   |n| ≤ M

■ Hanning window:

    w[n] = 0.5 − 0.5 cos(π(n + M)/M),   |n| ≤ M

■ Blackman window:

    w[n] = 0.42 − 0.5 cos(π(n + M)/M) + 0.08 cos(2π(n + M)/M),   |n| ≤ M
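A minimal sketch that evaluates the four window formulas for M = 7 (an assumed value) and plots them for comparison:

M = 7; n = -M:M;
w_tri   = 1 - abs(n)/M;
w_hamm  = 0.54 - 0.46*cos(pi*(n+M)/M);
w_hann  = 0.5  - 0.5*cos(pi*(n+M)/M);
w_black = 0.42 - 0.5*cos(pi*(n+M)/M) + 0.08*cos(2*pi*(n+M)/M);
plot(n,w_tri,n,w_hamm,n,w_hann,n,w_black); grid on
legend('Bartlett','Hamming','Hanning','Blackman'); xlabel('n');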

The next figure shows the decibel magnitude spectra for rectangular, Hamming and
Blackman windows for comparison.

It is seen that the rectangular window has stronger side lobes than the other two window
functions. Hamming window provides side lobes that are at least 40 dB below the main
lobe, and the side lobes for the Blackman window are at least 60 dB below the main lobe.
The downside to side lobe suppression is the widening of the main lobe.
Example: Redesign the filter of the previous example using Hamming and Blackman
windows.

Solution: Using a window function the truncated impulse response is

𝐡 𝑇 [𝑛] = 0.3 sinc(0.3𝑛)𝐰[𝑛], |𝑛| ≤ 7

and the impulse response of the FIR filter is

𝐡[𝑛] = 𝐡𝑇 [𝑛 − 7] = 0.3 sinc(0.3(𝑛 − 7))𝐰[𝑛 − 7], 𝑛 = 0,1, … ,14

clear all, clc, n=[0:14]; w=-pi:0.01:pi; k=0; H1=0; H2=0; W1=0; W2=0;
for n=0:14
m=n+1;
h(m)=0.3*sinc(0.3*(n-7));
w1(m)=0.54-0.46*cos(pi*n/7);
w2(m)= 0.42 - 0.5*cos(pi*(n)/7) + 0.08*cos(2*pi*(n)/7) ;
end
h1=h.*w1; h2=h.*w2;
for n=0:14
m=n+1;
H1= H1 + h1(m).*exp(-j*w*n); H2= H2 + h2(m).*exp(-j*w*n);
end
subplot(221);
plot(w,abs(H1),'r','linewidth', 1.5); grid on; hold on
plot(w,abs(H2),'b','linewidth', 1.5);
subplot(222);
plot(w,angle(H1),'r','linewidth', 1.5), grid on; hold on
plot(w,angle(H2),'b','linewidth', 1.5)
figure
for n=0:14
m=n+1;
W1= W1 + w1(m).*exp(-j*w*n); W2= W2 + w2(m).*exp(-j*w*n);
end
plot(w, 20*log10(abs(W1)),'r','linewidth', 1.5); grid on, hold on
plot(w, 20*log10(abs(W2)),'b','linewidth', 1.5); grid on, hold on
In the discussion above we have concentrated on the design of low-pass FIR filters. Other
types of filters can also be designed; the only modification that is needed is in determining
the ideal impulse response 𝐡𝑑 [𝑛]. Expressions for the ideal impulse responses of high-pass,
bandpass and band-reject filters can be derived from 𝐡𝑑 [𝑛] = 𝛺𝑐 sinc(𝛺𝑐 𝑛/𝜋)/𝜋 as follows:

Highpass:

    H_d(Ω) = { 1,  Ω_c < |Ω| < π     ⟺   h_d[n] = δ[n] − (Ω_c/π) sinc(Ω_c n/π)
             { 0,  |Ω| < Ω_c

Bandpass:

    H_d(Ω) = { 1,  Ω_c1 < |Ω| < Ω_c2   ⟺   h_d[n] = (Ω_c2/π) sinc(Ω_c2 n/π) − (Ω_c1/π) sinc(Ω_c1 n/π)
             { 0,  otherwise

Band-reject:

    H_d(Ω) = { 0,  Ω_c1 < |Ω| < Ω_c2   ⟺   h_d[n] = δ[n] − (Ω_c2/π) sinc(Ω_c2 n/π) + (Ω_c1/π) sinc(Ω_c1 n/π)
             { 1,  otherwise

Remark: In all of the above filters |𝛺| < 𝜋.
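As a brief illustration, the highpass expression can be used directly in the Fourier series method. A minimal sketch with assumed values Ω_c = 0.5π and length 15 (rectangular window):

M = 7; Oc = 0.5*pi; n = -M:M;
hd = -(Oc/pi)*sinc(Oc*n/pi);   % the -(Oc/pi) sinc(Oc n/pi) term
hd(n == 0) = 1 - Oc/pi;        % add the delta[n] contribution at n = 0
[H,w] = freqz(hd,1,512);       % the M-sample causal shift would only change the phase
plot(w,abs(H)); grid on; xlabel('\Omega (rad)'); ylabel('|H(\Omega)|');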

Example: Using the Fourier series design method with a triangular (Bartlett) window,
design a 24th order FIR bandpass filter with passband edge frequencies 𝛺1 = 0.4𝜋 and
𝛺2 = 0.7𝜋 as shown in Figure.

Solution: The order of the FIR filter is 𝑁 − 1 = 24. The following two statements create a
length-25 Bartlett window and then use it for designing the bandpass filter required.

wn = bartlett (25);
hn = fir1 (24 ,[0.4 ,0.7] , wn)

Some important details need to be highlighted. The function bartlett(. . ) uses the filter
length N (this applies to other window generation functions such as hamming(. . ), hann(. . ),
and blackman(. . ) as well). On the other hand, the design function fir1(. . ) uses the filter
order which is 𝑁 − 1. The second argument to the function fir1(. . ) is a vector of two
normalized edge frequencies which results in a bandpass filter being designed.

clear all, clc, wn = bartlett(25); hn = fir1(24, [0.4 0.7], wn)
Omg = [-256:255]/256*pi; H = fftshift(fft(hn, 512));
Omgd = [-1,-0.7,-0.7,-0.4,-0.4,0.4,0.4,0.7,0.7,1]*pi;
Hd = [0,0,1,1,0,0,1,1,0,0];
plot(Omg, abs(H), Omgd, Hd); grid;
axis([-pi, pi, -0.1, 1.1]);
title('|H(\Omega)|');
xlabel('\Omega (rad)'); ylabel('Magnitude');
MATLAB Problems:

clear all, clc,
% lowpass filter design using Hamming window
wn = 0.4375; % Normalized cut-off frequency
N = 53; % Hamming Window
hhamm = fir1(N-1,wn, 'low',hamming(N));
% Impulse response of the LPF
w = -pi:0.001*pi:pi; % discrete frequencies for response
Hhamm = freqz(hhamm,1,w); % transfer function
plot(w,20*log10(abs(Hhamm))); % magnitude response
axis([-pi pi -120 20]); % set axis
title('FIR filter using Hamming window')
grid on
figure
plot(w,abs(Hhamm)); grid on

clear all, clc,
% lowpass filter design using blackman window
wn = 0.4375; % Normalized cut-off frequency
N = 88; % blackman Window
hblack = fir1(N-1,wn, 'low',blackman(N));
% Impulse response of the LPF
w = -pi:0.001*pi:pi; % discrete frequencies for response
Hblack = freqz(hblack,1,w); % transfer function
plot(w,20*log10(abs(Hblack))); % magnitude response
axis([-pi pi -120 20]); % set axis
title('FIR filter using blackman window')
grid on
figure
plot(w,abs(Hblack)); grid on
clear all, clc,
% highpass filter design using Hamming window
wn = 0.4375; % Normalized cut-off frequency
N = 53; % Hamming Window
hhamm = fir1(N-1,wn, 'high',hamming(N));
% Impulse response of the highpass filter
w = -pi:0.001*pi:pi; % discrete frequencies for response
Hhamm = freqz(hhamm,1,w); % transfer function
plot(w,20*log10(abs(Hhamm))); % magnitude response
axis([-pi pi -120 20]); % set axis
title(' highpass FIR filter using Hamming window')
grid on
figure
plot(w,abs(Hhamm)); grid on

clear all, clc,
% highpass filter design using blackman window
wn = 0.4375; % Normalized cut-off frequency
N = 89; % blackman Window
hblack = fir1(N-1,wn, 'high',blackman(N));
% Impulse response of the highpass filter
w = -pi:0.001*pi:pi; % discrete frequencies for response
Hblack = freqz(hblack,1,w); % transfer function
plot(w,20*log10(abs(Hblack))); % magnitude response
axis([-pi pi -120 20]); % set axis
title(' highpass FIR filter using blackman window')
grid on
figure
plot(w,abs(Hblack)); grid on

clear all, clc,
% bandstop filter design using Hamming window
wn = [0.35 0.65]; % Normalized cut-off frequency
N = 53; % Hamming Window
hhamm = fir1(N-1,wn, 'stop',hamming(N));
% Impulse response of the bandstop filter
w = -pi:0.001*pi:pi; % discrete frequencies for response
Hhamm = freqz(hhamm,1,w); % transfer function
plot(w,20*log10(abs(Hhamm))); % magnitude response
axis([-pi pi -120 20]); % set axis
title(' bandstop FIR filter using Hamming window')
grid on
figure
plot(w,abs(Hhamm)); grid on
clear all, clc,
% bandstop filter design using blackman window
wn = [0.35 0.65]; % Normalized cut-off frequency
N = 89; % blackman Window
hblack = fir1(N-1,wn, 'stop',blackman(N));
% Impulse response of the bandstop filter
w = -pi:0.001*pi:pi; % discrete frequencies for response
Hblack = freqz(hblack,1,w); % transfer function
plot(w,20*log10(abs(Hblack))); grid on
axis([-pi pi -120 20]); title(' bandstop FIR filter using blackman window')
figure
plot(w,abs(Hblack)); grid on

Design a 48th-order FIR bandpass filter with passband 0.35𝜋 ≤ Ω ≤ 0.65𝜋 rad/sample.
Visualize its magnitude and phase responses.

clear all, clc, b = fir1(48,[0.35 0.65]); freqz(b,1,512)

clear all, clc,
% bandpass filter design using blackman window
wn = [0.35 0.65];            % Normalized cut-off frequency
N = 89;                      % blackman Window
hblack = fir1(N-1,wn,'bandpass',blackman(N));
% Impulse response of the bandpass filter
w = -pi:0.001*pi:pi;         % discrete frequencies for response
Hblack = freqz(hblack,1,w);  % transfer function
plot(w,20*log10(abs(Hblack))); grid on
axis([-pi pi -120 20]);      % set axis
title(' bandpass FIR filter using blackman window')
figure
plot(w,abs(Hblack)); grid on
Problems:
Exercise: 01 A nonrecursive bandstop digital filter (FIR) can be designed by applying the
Fourier series method to the idealized frequency response:

    H(e^{jωT}) = { 1  for |ω| ≤ ω_c1
                 { 0  for ω_c1 < |ω| < ω_c2
                 { 1  for ω_c2 ≤ |ω| ≤ π

(a) Obtain an expression for the impulse response of the filter.
(b) Obtain a causal transfer function assuming a filter length N = 11.

Exercise: 02 (a) Design a nonrecursive (FIR) highpass filter in which

    H(e^{jωT}) = { 1  for 2.5 ≤ |ω| ≤ 5.0 rad/s
                 { 0  for |ω| < 2.5 rad/s
Use the rectangular window and assume that ωs = 10 rad/s and N = 11.
(b) Repeat part (a) with N = 21 and N = 31. Compare the three designs.

Exercise: 03 A fourth-order lowpass Butterworth filter is required.


(a) Obtain the normalized transfer function 𝑯𝑁 (𝑠).
(b) Derive expressions for the loss and phase shift.
(c) Calculate the loss and phase shift at ω = 0.5 rad/s.
(d) Obtain a corresponding denormalized transfer function 𝑯𝐷 (𝑠) with a 3-dB cutoff
frequency at 1000 rad/s.

Exercise: 04 A third-order lowpass filter with passband edge 𝜔𝑝 = 1 rad/s and passband
ripple 𝐴𝑝 = 1.0 dB is required. Obtain the poles and multiplier constant of the transfer
function assuming a Chebyshev approximation.

Exercise: 05 An application requires a normalized inverse-Chebyshev lowpass filter that


would satisfy the following specifications:
• Passband edge 𝜔𝑝 : 0.5 rad/s • Maximum passband loss 𝐴𝑝 : 0.5 dB
• Stopband edge 𝜔𝑠 : 1.0 rad/s • Minimum stopband loss 𝐴𝑠 : 30.0 dB

(a) Find the minimum filter order that would satisfy the specifications.
(b) Obtain the required transfer function.

Exercise: 06 A lowpass filter is required that would satisfy the following specifications:
• Passband edge 𝜔𝑝 : 2000 rad/s • Maximum passband loss 𝐴𝑝 : 0.4 dB
• Stopband edge 𝜔𝑠 : 7000 rad/s • Minimum stopband loss 𝐴𝑠 : 45.0 dB
(a) Assuming a Butterworth approximation, find the required order n and the value of the
transformation parameter λ.
(b) Form H(s).

Exercise: 07 Repeat Prob. 06 for the case of a Chebyshev approximation and compare the
design obtained with that obtained in Prob. 06.
Exercise: 08 Repeat Prob. 06 for the case of an inverse-Chebyshev approximation and compare the design obtained with that obtained in Prob. 06.

Exercise: 09 A highpass filter is required that would satisfy the following specifications:
• Passband edge 𝜔𝑝 : 2000 rad/s • Maximum passband loss 𝐴𝑝 : 0.5 dB
• Stopband edge 𝜔𝑠 : 1000 rad/s • Minimum stopband loss 𝐴𝑠 : 40.0 dB
(a) Assuming a Butterworth approximation, find the required order n and the value of the
transformation parameter λ.
(b) Form H(s).

Exercise: 10 Repeat Prob. 09 for the case of a Chebyshev and inverse-Chebyshev


approximations and compare the design obtained with that obtained in Prob. 09.

Exercise: 11 A bandstop filter is required that would satisfy the following specifications:
• Lower passband edge 𝜔𝑝1: 20 rad/s
• Upper passband edge 𝜔𝑝2: 80 rad/s
• Lower stopband edge 𝜔𝑠1 : 48 rad/s
• Upper stopband edge 𝜔𝑠2 : 52 rad/s
• Maximum passband loss 𝐴𝑝 : 1.0 dB
• Minimum stopband loss 𝐴𝑠 : 25.0 dB
(a) Assuming a Butterworth approximation, find the required order n and the value of the
transformation parameters 𝐵 and 𝜔0 .
(b) Form H(s).

Exercise: 12 Repeat Prob. 11 for the case of a Chebyshev and inverse-Chebyshev


approximations and compare the design obtained with that obtained in Prob. 11.

Exercise: 13 By using the invariant impulse-response method, derive a discrete-time


transfer function from the continuous-time transfer function

    H_A(s) = 1/[(s + 1)(s² + s + 1)]
The sampling frequency is 10 rad/s.

Exercise: 14 A continuous-time system is characterized by the transfer function


    H_A(s) = 1/(s³ + 6s² + 11s + 6)
By using the invariant impulse-response method, obtain the transfer function of a
corresponding discrete time system. The sampling frequency is 6π.

Exercise: 15 A continuous-time system has a transfer function

    H(s) = 5/(s + 2.5) + (5s + 13)/(s² + 4s + 25/4)

Using the invariant impulse-response method, obtain the transfer function of a


corresponding discrete time system. The sampling frequency is 20 rad/s.
Exercise: 16 (a) Obtain a digital-filter network by applying the bilinear transformation to
the transfer function
    H(s) = s²/(s² + √2 s + 1)

(b) Evaluate the gain of the filter for 𝜔 = 0 and 𝜔 = 𝜋/𝑇 .

Exercise: 17 Obtain a system function for each of the given analog filter network

Exercise: 18 Obtain a system function for each of the given analog filter network
Exercise: 19 Obtain a system function for each of the given analog filter network

Exercise: 20 Obtain a system function for each of the given active analog filter network
Exercise: 21 Obtain a system function for each of the given active analog filter network

Exercise: 22 Obtain a system function for the given active analog filter network
Exercise: 23 Obtain a system function for the given active analog filter network

Exercise: 24 Obtain a system function for the given passive analog filter network
Review Questions (set N○1)
▪ Define analog signal and give appropriate example.
▪ Define DT signal and illustrate it with a plot.
▪ What is uniform sampling? What is a DT signal?
▪ State the sampling theorem in time domain. Explain the aliasing effect.
▪ What is Nyquist rate of sampling?

▪ How does one decide the sampling frequency for any signal to be sampled?
▪ Can a signal with infinite bandwidth be sampled?
▪ Explain the process of sampling using the concept of a train of pulses.
▪ Explain the occurrence of replicated spectrum when the signal is sampled.
▪ How can one recover the signal from signal samples using a sinc function?

▪ Explain the aliasing effect for a signal which is sampled below the Nyquist rate.
▪ What is a practical way to recover the signal from signal samples?
▪ When does a phase reversal occur for a recovered aliased signal?
▪ What is the need for an anti-aliasing filter?
▪ What is a signal? What are scalar-valued signals and multiple-valued signals?

▪ Why is analog signal called continuous time, continuous amplitude signal?


▪ What is a DT signal?
▪ What is a domain? What is a spatial domain?
▪ Why are naturally occurring signals random signals?
▪ Can any short-lived signal be a power signal?

▪ Can a periodic signal be a power signal?


▪ State different applications where you come across signals.
▪ Define a system. Explain the use of communication system.
▪ Explain how the control system processes the signals.
▪ What is the need for standard signals?

▪ Explain use standard signals for worst-case testing of a system.


▪ List various standard CT signals and their significance.
▪ What is a unit impulse? What practical situation it will represent?
▪ Draw unit step and unit ramp and state the relation between the two.
▪ What is a signum function? Why is it called a sign function?

▪ What is the significance of a sinc function? How is it related to a rectangular pulse?


▪ List various standard DT signals and plot them.
▪ List the physical significance of all standard signals.
▪ Define a periodic signal. Explain the procedure for finding the period of a signal.
▪ Give situation when a periodic CT sinusoid after sampling becomes aperiodic DT sinusoid.
Review Questions (set N○2)
▪ Explain the periodicity condition for a DT sinusoid.
▪ How will you find the period for a combination signal?
▪ Define energy and power signal. Can a signal be neither energy nor a power signal?
▪ State the physical significance of different properties of the signals.
▪ Define different operations on signals such as time scaling and amplitude scaling.

▪ Explain time-shifting operation.


▪ Explain the steps for implementing time scaling followed by time shifting or vice versa.
▪ How will you implement time scaling for a DT signal?
▪ Explain multiplication and addition of two DT signals.
▪ Explain the physical significance of different operations on the signals.

▪ How will you make use of test signals for system testing before it is actually implemented?
▪ Define even and odd property of a signal. Define different combinations of even and odd
signals and state whether they will be even or odd.
▪ What is linearity? Define additivity and homogeneity. Is the transfer curve for a linear
system always linear? Explain physical significance of linearity.
▪ Explain physical significance of shift invariance property.
▪ Explain property of superposition.

▪ When will you say that the system is memoryless? Give one example.
▪ Define causality for a system. Can we design and use a non-causal system?
▪ Is a causal system a requirement for spatial systems?
▪ Explain the meaning of causality for a system of a human being.
▪ What is invertibility? Can we use a non-invertible transform for processing a signal?

▪ Explain the meaning of BIBO stability for a system.


▪ Explain the physical significance of stability.
▪ How will you interpret the system as interconnection of operators?
▪ What is a characteristic equation? What are characteristic mode terms?
▪ What are Eigen values? Why the solutions of the characteristic equations are assumed as
exponential functions?

▪ What is a zero input response? Explain the procedure to find the zero input response for
CT and DT systems.
▪ What is a zero state response? How will you find the response to any input signal?
▪ How will you calculate the impulse response of the system? How will you find the step
response for the system?
▪ Explain the property of memory in terms of impulse response of the CT and DT system.
▪ Explain the property of causality in terms of impulse response of the CT and DT system.
▪ Explain the property of stability in terms of impulse response of the CT and DT system.
Review Questions (set N○3)
▪ How will you classify the signals as periodic or aperiodic? What is FS representation of the
periodic signal?
▪ Can you call the signal as a vector? What are basis functions? Why they should be
orthogonal?
▪ Why are the exponential functions used in place of sine and cosine functions?
▪ Why at all use sinusoidal or exponential functions as basis functions?
▪ State Dirichlet's conditions?

▪ Why is the FS representation preferred for the decomposition of the signal?


▪ Can you convert trigonometric FS to exponential FS and vice versa?
▪ If the signal has some type of symmetry, can you reduce the number of computations by
drawing certain conclusions?
▪ State properties of FS related to time shifting. Explain the physical significance.
▪ State the convolution and modulation property of FS. What are its applications?

▪ State Parseval's Identity and explain its significance.


▪ What is Gibbs phenomenon? What is the cause of this effect? Can you eliminate this
effect? How will you reduce it?
▪ Define FT and IFT for aperiodic CT signals. State the Dirichlet conditions. How will you
define FT for a signal if it is not square integrable?
▪ Define the magnitude and phase response of FT of a signal.
▪ Explain the procedure for magnitude and phase response using hand calculations.

▪ Explain the difference between the phase responses of right-handed and left handed
exponential signals.
▪ Explain the use of Dirac delta function for finding the FT of exponential function, sine
function, cosine function, etc.
▪ Explain the procedure of IFT calculation using partial fraction expansion.
▪ Explain why the DTFT is periodic with a period of 2π.

▪ Explain the procedure for obtaining the DTFT in the form of closed from expression.
▪ Explain how the DT signal is obtained?
▪ Explain the DTFT of the train of impulses.
▪ What is linearity property of FT? What are its applications?
▪ Explain time shifting, time reversal and time scaling property of FT and DTFT along with
the physical significance of each property.

▪ Explain time differentiation and time integration property of FT. Does it have any
significance for DTFT?
▪ Explain frequency shifting and frequency differentiation property of FT and DTFT.
▪ Explain modulation and convolution properties of FT and DTFT with the applications.
▪ What is Parseval's theorem? What is its significance?
▪ How will you use FT to analyze the LTI system? Give a suitable example to explain.
▪ How will you do frequency analysis of the signal? How will you analyze the frequency response of the LTI system?
Review Questions (set N○4)
▪ Define a complex frequency plane and define the Laplace transform (LT) of any signal x(t).
▪ What is the physical significance of LT? State properties of LT.
▪ Can we find the LT of a signal that is not absolutely integrable? Give a suitable example.
▪ Explain the relation between FT and LT. Compare LT and FT.
▪ Prove that the complex exponential 𝑒 𝑠𝑡 is an Eigen function of LTI system.

▪ Define region of convergence for LT. What is the significance of ROC?


▪ State and prove the property of linearity for LT.
▪ How will you use the property of time shifting to find the LT of periodic signals?
▪ Prove time shifting, time scaling and time reversal properties of LT.
▪ State the S domain shift property and state its significance.

▪ Prove the property of frequency shifting.


▪ State the physical significance of the convolution property and prove it.
▪ Why is the frequency convolution property called the modulation property?
▪ What is the meaning of Parseval's theorem?
▪ How will you find the initial and final value of the signal without taking ILT?

▪ Explain the partial fraction expansion method for finding ILT.


▪ How will you find ILT if there are complex conjugate roots?
▪ Can you analyze the system using LT?
▪ Explain the procedure to find the impulse response of the system given its differential
equation.
▪ Explain the use of transfer function of the system to find the impulse response.

▪ Explain the procedure to find the total response of the system. Clearly state the meaning of
natural response and forced response.
▪ How will you find the zero input response and the zero state response?
▪ Can you analyze the stability of the causal and non-causal LTI system given the pole
locations?
▪ How will you find the frequency response of the system from the transfer function?
▪ Explain the use of MATLAB commands to plot the poles and zeros and to plot the impulse
response of the system.

▪ What is the use of transform domain processing of a signal?


▪ What is the relation between Laplace transform and Z transform?
▪ Find the relation between Fourier transform and Z transform. What is the importance of a
unit circle in Z domain?
▪ State and prove properties of ROC. Can it happen that there is no ROC? When does this
happen?
▪ What is a time reversal property of the Z transform? What happens to ROC in this case?
Review Questions (set N○5)
▪ When a sequence is shifted in the time domain, what will happen to Z transform of the
shifted sequence? How is the ROC affected?
▪ State and prove the convolution property of Z transform.
▪ State the property of differentiation in the Z domain. How is it useful for calculation of Z
transform of 𝑛𝑥[𝑛]?
▪ State initial value theorem and prove it.
▪ Use the relation between Z transform and Laplace transform to translate the stability
criteria from Laplace domain to Z domain.
▪ Can we find the stability of a system if we know the impulse response of the system?
▪ Discuss the stability of a system if the system has complex conjugate poles.
▪ Explain the power series method for calculation of inverse Z transform. How can we make
use of ROC to find the power series?
▪ What are different methods for calculation of IZT?
▪ What is a residue? Will the knowledge of pole locations help in calculation of residue?
▪ Which poles are to be used for the calculation of a residue? How can one decide a closed
contour for the calculation of residue?
▪ How can one convert a difference equation into a Z domain transfer function? How can we
solve the difference equation? Explain using a numerical example.
▪ How will you analyze the system to find the frequency response of the system using
MATLAB program? Can you find the impulse response of the system?
Formula Sheet for Signals and Systems

(The original sheet also tabulates standard trigonometric identities, Euler relations, hyperbolic functions, derivatives, and integrals; the signal-processing core is reproduced below.)

Energy and power:

    E = ∫_{−∞}^{∞} |x(t)|² dt,      P = (1/T) ∫_T |x(t)|² dt
    E = Σ_{n=−∞}^{∞} |x[n]|²,       P = lim_{N→∞} [1/(2N + 1)] Σ_{n=−N}^{N} |x[n]|²

Convolution:

    y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ,      y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k]

Selected Fourier transform properties:

    x(at) ↔ (1/|a|) X(jω/a),   x(t − t₀) ↔ e^{−jωt₀} X(jω),   dx(t)/dt ↔ jω X(jω),   −jt x(t) ↔ dX(jω)/dω

Periodic-Continuous (Fourier Series):

    x(t) = Σ_{k=−∞}^{∞} X[k] e^{jkω₀t},      X[k] = (1/T) ∫_T x(t) e^{−jkω₀t} dt

Non-Periodic-Continuous (Fourier Transform):

    x(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω,      X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt

Periodic-Discrete (DTFS):

    x[n] = Σ_{k=0}^{N−1} X[k] e^{jk(2π/N)n},      X[k] = (1/N) Σ_{n=0}^{N−1} x[n] e^{−jk(2π/N)n}

Non-Periodic-Discrete (DTFT):

    x[n] = (1/2π) ∫_{2π} X(e^{jΩ}) e^{jΩn} dΩ,      X(e^{jΩ}) = Σ_{n=−∞}^{∞} x[n] e^{−jΩn}

Prepared by Prof. Dr. Hasan AMCA – Electrical and Electronic Engineering Department - Eastern Mediterranean University – May 2018
Laplace Transform Table

 #    F(s)                                   f(t),  0 ≤ t
 1.   1                                      δ(t)   (unit impulse at t = 0)
 2.   1/s                                    1, or u(t)   (unit step starting at t = 0)
 3.   1/s²                                   t·u(t)   (ramp function)
 4.   1/sⁿ                                   t^{n−1}/(n − 1)!,   n a positive integer
 5.   e^{−as}/s                              u(t − a)   (unit step starting at t = a)
 6.   (1 − e^{−as})/s                        u(t) − u(t − a)   (rectangular pulse)
 7.   1/(s + a)                              e^{−at}   (exponential decay)
 8.   1/(s + a)ⁿ                             t^{n−1} e^{−at}/(n − 1)!,   n a positive integer
 9.   1/[s(s + a)]                           (1/a)(1 − e^{−at})
10.   1/[s(s + a)(s + b)]                    (1/ab)[1 − b e^{−at}/(b − a) + a e^{−bt}/(b − a)]
11.   (s + α)/[s(s + a)(s + b)]              (1/ab)[α − b(α − a) e^{−at}/(b − a) + a(α − b) e^{−bt}/(b − a)]
12.   1/[(s + a)(s + b)]                     (e^{−at} − e^{−bt})/(b − a)
13.   s/[(s + a)(s + b)]                     (a e^{−at} − b e^{−bt})/(a − b)
14.   (s + α)/[(s + a)(s + b)]               [(α − a) e^{−at} − (α − b) e^{−bt}]/(b − a)
15.   1/[(s + a)(s + b)(s + c)]              e^{−at}/[(b − a)(c − a)] + e^{−bt}/[(c − b)(a − b)] + e^{−ct}/[(a − c)(b − c)]
16.   (s + α)/[(s + a)(s + b)(s + c)]        (α − a) e^{−at}/[(b − a)(c − a)] + (α − b) e^{−bt}/[(c − b)(a − b)] + (α − c) e^{−ct}/[(a − c)(b − c)]
17.   ω/(s² + ω²)                            sin ωt
18.   s/(s² + ω²)                            cos ωt
19.   (s + α)/(s² + ω²)                      [√(α² + ω²)/ω] sin(ωt + φ),   φ = atan2(ω, α)
20.   (s sin θ + ω cos θ)/(s² + ω²)          sin(ωt + θ)
21.   1/[s(s² + ω²)]                         (1/ω²)(1 − cos ωt)
22.   (s + α)/[s(s² + ω²)]                   α/ω² − [√(α² + ω²)/ω²] cos(ωt + φ),   φ = atan2(ω, α)
23.   1/[(s + a)(s² + ω²)]                   e^{−at}/(a² + ω²) + sin(ωt − φ)/[ω√(a² + ω²)],   φ = atan2(ω, a)
24.   1/[(s + a)² + b²]                      (1/b) e^{−at} sin(bt)
24a.  1/(s² + 2ζωₙs + ωₙ²)                   [1/(ωₙ√(1 − ζ²))] e^{−ζωₙt} sin(ωₙ√(1 − ζ²) t)
25.   (s + a)/[(s + a)² + b²]                e^{−at} cos(bt)
26.   (s + α)/[(s + a)² + b²]                [√((α − a)² + b²)/b] e^{−at} sin(bt + φ),   φ = atan2(b, α − a)
26a.  (s + α)/(s² + 2ζωₙs + ωₙ²)             [√((α − ζωₙ)² + ωₙ²(1 − ζ²))/(ωₙ√(1 − ζ²))] e^{−ζωₙt} sin(ωₙ√(1 − ζ²) t + φ),
                                             φ = atan2(ωₙ√(1 − ζ²), α − ζωₙ)
27.   1/{s[(s + a)² + b²]}                   1/(a² + b²) + e^{−at} sin(bt − φ)/[b√(a² + b²)],   φ = atan2(b, −a)
27a.  ωₙ²/[s(s² + 2ζωₙs + ωₙ²)]              1 − [1/√(1 − ζ²)] e^{−ζωₙt} sin(ωₙ√(1 − ζ²) t + φ),   φ = cos⁻¹ ζ
28.   (s + α)/{s[(s + a)² + b²]}             α/(a² + b²) + (1/b)√[((α − a)² + b²)/(a² + b²)] e^{−at} sin(bt + φ),
                                             φ = atan2(b, α − a) − atan2(b, −a)
28a.  (s + α)/[s(s² + 2ζωₙs + ωₙ²)]          α/ωₙ² + [√((α − ζωₙ)² + ωₙ²(1 − ζ²))/(ωₙ²√(1 − ζ²))] e^{−ζωₙt} sin(ωₙ√(1 − ζ²) t + φ),
                                             φ = atan2(ωₙ√(1 − ζ²), α − ωₙζ) − atan2(√(1 − ζ²), −ζ)
29.   1/{(s + c)[(s + a)² + b²]}             e^{−ct}/[(c − a)² + b²] + e^{−at} sin(bt − φ)/[b√((c − a)² + b²)],   φ = atan2(b, c − a)
30.   1/{s(s + c)[(s + a)² + b²]}            1/[c(a² + b²)] − e^{−ct}/{c[(c − a)² + b²]} + e^{−at} sin(bt − φ)/[b√(a² + b²)√((c − a)² + b²)],
                                             φ = atan2(b, −a) + atan2(b, c − a)
31.   (s + α)/{s(s + c)[(s + a)² + b²]}      α/[c(a² + b²)] + (c − α) e^{−ct}/{c[(c − a)² + b²]}
                                             + √((α − a)² + b²) e^{−at} sin(bt + φ)/[b√(a² + b²)√((c − a)² + b²)],
                                             φ = atan2(b, α − a) − atan2(b, −a) − atan2(b, c − a)
32.   1/[s²(s + a)]                          (1/a²)(at − 1 + e^{−at})
33.   1/[s(s + a)²]                          (1/a²)(1 − e^{−at} − at e^{−at})
34.   (s + α)/[s(s + a)²]                    (1/a²)[α − α e^{−at} + a(a − α) t e^{−at}]
35.   (s² + α₁s + α₀)/[s(s + a)(s + b)]      α₀/ab + [(a² − α₁a + α₀)/(a(a − b))] e^{−at} − [(b² − α₁b + α₀)/(b(a − b))] e^{−bt}
36.   (s² + α₁s + α₀)/{s[(s + a)² + b²]}     α₀/c² + (1/(bc))√[(a² − b² − α₁a + α₀)² + b²(α₁ − 2a)²] e^{−at} sin(bt + φ),
                                             φ = atan2[b(α₁ − 2a), a² − b² − α₁a + α₀] − atan2(b, −a),   c² = a² + b²
37.   1/{(s² + ω²)[(s + a)² + b²]}           [(1/ω) sin(ωt + φ₁) + (1/b) e^{−at} sin(bt + φ₂)] / √[4a²ω² + (a² + b² − ω²)²],
                                             φ₁ = atan2(−2aω, a² + b² − ω²),   φ₂ = atan2(2ab, a² − b² + ω²)
38.   (s + α)/{(s² + ω²)[(s + a)² + b²]}     (1/ω)√[(α² + ω²)/c] sin(ωt + φ₁) + (1/b)√[((α − a)² + b²)/c] e^{−at} sin(bt + φ₂),
                                             c = (2aω)² + (a² + b² − ω²)²,
                                             φ₁ = atan2(ω, α) − atan2(2aω, a² + b² − ω²),   φ₂ = atan2(b, α − a) + atan2(2ab, a² − b² − ω²)
39.   (s + α)/{s²[(s + a)² + b²]}            (1/c)(αt + 1 − 2αa/c) + √[b² + (α − a)²] e^{−at} sin(bt + φ)/(bc),
                                             c = a² + b²,   φ = 2 atan2(b, a) + atan2(b, α − a)
40.   (s² + α₁s + α₀)/[s²(s + a)(s + b)]     (α₁ + α₀t)/ab − α₀(a + b)/(ab)² − [1/(a − b)](1 − α₁/a + α₀/a²) e^{−at}
                                             − [1/(b − a)](1 − α₁/b + α₀/b²) e^{−bt}
Table of Laplace and Z-transforms

 #    X(s)                      x(t)               x(kT) or x(k)                      X(z)
 1.   –                         –                  δ₀(k) = {1, k = 0; 0, k ≠ 0}       1
 2.   –                         –                  δ₀(n − k) = {1, n = k; 0, n ≠ k}   z^{−k}
 3.   1/s                       1(t)               1(k)                               1/(1 − z^{−1})
 4.   1/(s + a)                 e^{−at}            e^{−akT}                           1/(1 − e^{−aT} z^{−1})
 5.   1/s²                      t                  kT                                 T z^{−1}/(1 − z^{−1})²
 6.   2/s³                      t²                 (kT)²                              T² z^{−1}(1 + z^{−1})/(1 − z^{−1})³
 7.   6/s⁴                      t³                 (kT)³                              T³ z^{−1}(1 + 4z^{−1} + z^{−2})/(1 − z^{−1})⁴
 8.   a/[s(s + a)]              1 − e^{−at}        1 − e^{−akT}                       (1 − e^{−aT}) z^{−1}/[(1 − z^{−1})(1 − e^{−aT} z^{−1})]
 9.   (b − a)/[(s + a)(s + b)]  e^{−at} − e^{−bt}  e^{−akT} − e^{−bkT}                (e^{−aT} − e^{−bT}) z^{−1}/[(1 − e^{−aT} z^{−1})(1 − e^{−bT} z^{−1})]
10.   1/(s + a)²                t e^{−at}          kT e^{−akT}                        T e^{−aT} z^{−1}/(1 − e^{−aT} z^{−1})²
11.   s/(s + a)²                (1 − at) e^{−at}   (1 − akT) e^{−akT}                 [1 − (1 + aT) e^{−aT} z^{−1}]/(1 − e^{−aT} z^{−1})²
12.   2/(s + a)³                t² e^{−at}         (kT)² e^{−akT}                     T² e^{−aT}(1 + e^{−aT} z^{−1}) z^{−1}/(1 − e^{−aT} z^{−1})³
13.   a²/[s²(s + a)]            at − 1 + e^{−at}   akT − 1 + e^{−akT}                 [(aT − 1 + e^{−aT}) + (1 − e^{−aT} − aT e^{−aT}) z^{−1}] z^{−1}/[(1 − z^{−1})²(1 − e^{−aT} z^{−1})]
14.   ω/(s² + ω²)               sin ωt             sin ωkT                            z^{−1} sin ωT/(1 − 2z^{−1} cos ωT + z^{−2})
15.   s/(s² + ω²)               cos ωt             cos ωkT                            (1 − z^{−1} cos ωT)/(1 − 2z^{−1} cos ωT + z^{−2})
16.   ω/[(s + a)² + ω²]         e^{−at} sin ωt     e^{−akT} sin ωkT                   e^{−aT} z^{−1} sin ωT/(1 − 2e^{−aT} z^{−1} cos ωT + e^{−2aT} z^{−2})
17.   (s + a)/[(s + a)² + ω²]   e^{−at} cos ωt     e^{−akT} cos ωkT                   (1 − e^{−aT} z^{−1} cos ωT)/(1 − 2e^{−aT} z^{−1} cos ωT + e^{−2aT} z^{−2})
18.   –                         –                  aᵏ                                 1/(1 − a z^{−1})
19.   –                         –                  a^{k−1}, k = 1, 2, 3, …            z^{−1}/(1 − a z^{−1})
20.   –                         –                  k a^{k−1}                          z^{−1}/(1 − a z^{−1})²
21.   –                         –                  k² a^{k−1}                         z^{−1}(1 + a z^{−1})/(1 − a z^{−1})³
22.   –                         –                  k³ a^{k−1}                         z^{−1}(1 + 4a z^{−1} + a² z^{−2})/(1 − a z^{−1})⁴
23.   –                         –                  k⁴ a^{k−1}                         z^{−1}(1 + 11a z^{−1} + 11a² z^{−2} + a³ z^{−3})/(1 − a z^{−1})⁵
24.   –                         –                  aᵏ cos kπ                          1/(1 + a z^{−1})

x(t) = 0 for t < 0;  x(kT) = x(k) = 0 for k < 0.  Unless otherwise noted, k = 0, 1, 2, 3, …
