
Downloaded 01/23/15 to 128.2.10.23. Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php

Chapter 4

Digital Signal Processing in Simulink

We saw in the previous chapters how to build models of continuous time systems in Simulink.
This chapter provides insight into how to use Simulink to create, analyze, simulate, and code
digital systems and digital filters for various applications.
We begin with a simple example of a discrete system; one discussed in Cleve Moler’s
Numerical Computing with MATLAB [29]. This example, the Fibonacci sequence, is not a
digital filter but is an example of a difference equation. In creating the Simulink model for
this sequence, we illustrate the fact that the independent variable in Simulink does not have
to be time. In this case, the independent variable is the index in the sequence. We set the
“sample time” in the digital block to one, and then we interpret the time steps as the number
index for the element in the sequence.
Digital filters use an analysis technique related to the Laplace transform, called the
z-transform. We introduce the mathematics of this transform along with several methods
for calculating digital filter transfer functions.
A digital signal typically consists of samples from an analog signal at fixed times.
Since it is usually necessary to convert these digital signals back into an analog form so that
they can be used (to hear the audio from a CD or a cell phone, for example), it is natural to
start by asking how to go about doing this. The answer is the “sampling theorem,” which shows
that an analog signal with a band-limited Fourier transform (i.e., F(ω) = 0 for |ω| > ω_M) can
be sampled at times spaced π/ω_M apart or faster, and these samples can then be used to reconstruct
the analog signal. The method for doing this reconstruction is via a filter that allows only
the frequencies below ω_M to pass through (and for this reason, we call the filter a low pass
filter). Therefore, the second section of this chapter shows how to develop low pass filters
and ways to adapt their properties to make them do other useful signal processing functions
such as high pass and band pass. In all cases, we will use Simulink to simulate the filters
and explore their properties.
The last part of this chapter will deal with implementation issues. We will look at how
Simulink allows us to evaluate different implementations of the digital filter. We will also
begin to look at the effect of limited precision arithmetic on the digital filter’s performance.
We will then look at a unique combined analog and digital device called a phase lock
loop. Simulink allows us to build a simulation of this device and do numerical experiments

that demonstrate its properties. In fact, Simulink is unique in its ability to do some very
detailed analyses of a phase-locked loop (see [11]). This is particularly true when it comes
to analyzing the effect of noise on the loop, a topic that we will take up in Chapter 5.

4.1 Difference Equations, Fibonacci Numbers, and z-Transforms
One of the more interesting difference equations is the Fibonacci sequence. Fibonacci, the
sequence he developed, and its rather remarkable properties and history are all described
in detail, along with several MATLAB programs developed to illustrate the sequence, in
Chapter 1 of Cleve Moler’s Numerical Computing with MATLAB [29]; also see [26].
Let us revisit this sequence using Simulink. The Fibonacci sequence is

f_{n+2} = f_{n+1} + f_n,  with f_1 = 1 and f_2 = 2.

As a reminder, the sequence describes the growth in a population of animals that are
constrained to give birth once per generation, where the index is the current generation.
One possible Simulink model to generate this sequence uses the “Discrete” library
from the Simulink browser. To understand how Simulink works, we need to describe how
one would go about writing a program that generates the Fibonacci numbers. In the NCM
library (the programs that accompany the book Numerical Computing with MATLAB), there
is a MATLAB program that computes and saves the entire Fibonacci sequence from 1 to
n. If, instead, we want only to compute the values as the program runs without saving the
entire set of values, the MATLAB code (called Fibonacci and located in the NCS library)
would look like
function f = fibonacci(n)
% FIBONACCI Fibonacci sequence
% f = FIBONACCI(n) sequentially finds the first n
% Fibonacci numbers and displays them on the command line.

f1 = 1;
f2 = 2;
i = 3;
while i <= n
    f = f1 + f2   % no semicolon, so each new value is displayed
    f1 = f2;
    f2 = f;
    i = i + 1;
end

Even though we do not need to save the entire sequence in this example, we still need
to save the current and the previous value in order to calculate the next value of the sequence.
This fact means that this sequence requires two “states.” In the theory of systems, a state
is the minimum information needed to calculate the values of the difference equation. In
the snippet of MATLAB code above, we save the values in f1 and f2. When we build a
Simulink model to simulate a difference equation, the state needs a place to be stored for
use in the solution. Simulink uses the name 1/z to denote this block. To understand where
[Block diagram: Unit Delay and Unit Delay1 (both 1/z) feeding a Sum block, with a Scope
on the output.]

Figure 4.1. Simulating the Fibonacci sequence in Simulink.

this comes from, we need to show the method for solving difference equations using the
discrete version of the Laplace transform. We will do that in a moment, but first let us build
the Simulink model for the Fibonacci sequence.
To create the model, from MATLAB open the Simulink Library Browser as we have
done previously and open a new untitled model window.
Select the “Discrete” library, and then select the 1/z icon (the “Unit Delay” block) and
drag one into the open model window. Right click and drag on the Unit Delay block in
the model window to make a copy of the Unit Delay. Connect the output of the first delay
block to the input of the second. This will send the output of the first delay to the second
delay block. Now we need to create the left-hand side of the Fibonacci equation. To do
this we need to add the outputs of the two Unit Delay blocks. Therefore, open the “Math
Operations” library and drag the summation block into the model window. Then connect
the outputs of each of the Unit Delay blocks to sum the inputs one at a time. The model
should look like Figure 4.1.
In order to start the process, we need the correct initial conditions. To set them, double
click on each Unit Delay block and set the initial conditions to 2 and 1 (from left to right in the
diagram). This will set the initial value of f1 to 1 and the initial value of f2 to 2, as required.
Notice that the 1/z block has a default “sample time” of 1 sec, which is exactly what we want
for simulating the sequence, as we discussed in the introduction above. To view the output,
drag a Scope block (in the Sinks library) into the model and connect it to the last Unit Delay
block. Click the start button on the model to start the simulation of the sequence.
Double click on the Scope block and click on the binoculars icon to see the result.
The simulation makes ten steps and plots the values that the Fibonacci sequence generated
as it runs. The graph should look like Figure 4.2. We created this figure using a built-in
MATLAB routine called “simplot.” This M-file uses a MATLAB structure generated by the
output of the Scope block. The Scope block generates this MATLAB data structure during
the simulation; the plot comes from the MATLAB command simplot(ScopeData). The
plot is Fibonacci Sequence.fig, and it is available in the NCS library from the Figures
directory.
If you want to compute the golden ratio “phi” as was done in NCM [29], the calculation
requires that you divide the value of f2 by f1. This uses the Math Operations library Product
block. In this library, drag the product block into the model window and double click on
it. The dialog that opens allows you to change the operations. In the dialog box that asks
“Number of Inputs” (which is set to 2 by default), type the symbols * and /. This will cause
one of the inputs to be the numerator and the other the denominator in a division (denoted
by × and ÷ in the block). Connect the × sign on the Product block icon to the line after
[Plot: “Fibonacci Sequence (Values for the first 20 Numbers),” sequence value vs. Index
from 0 to 20, with an inset “Fibonacci Sequence (Detail from 0 to 10)” showing the values
for Index 0 to 10.]

Figure 4.2. Fibonacci sequence graph generated by the Simulink model.

[Block diagram: the model of Figure 4.1 with the signals labeled f(n-2), f(n-1), and f(n);
a Product block forms f(n)/f(n-1), and a Display block shows 1.6180339901756.]

Figure 4.3. Simulink model for computing the golden ratio from the Fibonacci sequence.

the first delay block (the “Unit Delay” block in Figure 4.1) and connect the ÷ to the “Unit
Delay1” block. Connect the output of the Product block to a Display block that you can
get from the “Sinks” library. This block displays the numeric value of a signal in Simulink.
Figure 4.3 shows Display with the result of the division after 10 iterations.
To make the simulation run longer, open the Configuration Parameters menu under
the “Simulation” menu at the top of the Fibonacci model window. The dialog that opens
when you do this allows you to change the Stop time. Change it to some large number
(from the default of 10), and run the simulation. You should see the display go to 1.618. To
see the full precision of this number, double click on the Display block and in the Format
pull down, select “long.” This corresponds to the MATLAB long format. The result should
be 1.6180339901756.
The limit of this ratio is the “golden ratio,” phi, which has the value
phi = (1 + √5)/2. (MATLAB returns 1.61803398874989 when calculating this, and as can be
seen after 20 iterations, Simulink has come very close.) The discussions in [29] and [26]
describe phi and its history in detail.
This discrete sequence is only one of many sequences that you might want to find the
solution of in Simulink. In a more practical vein, we often want to process a digital signal
(a process called digital signal processing). Toward this end, digital filters are part of the
Simulink discrete library. They appear in the Digital library as z-transforms. What is this
all about?

4.1.1 The z-Transform


The z-transform, F(z), of an infinite sequence {f_k}, k = 0, 1, . . . , n, . . . , is

F(z) = Σ_{k=0}^∞ f_k z^k.

There are many technical details that need to be invoked to ensure that this series
converges to a finite value, but suffice it to say that because the variable z is complex, we
can always choose |z| small enough that the series converges (even for sequences that diverge). For example, let us compute
the z-transform for the sequence that is 1 for all values of k. (This is called the discrete step
function.) Thus, we need to compute the sum


F(z) = Σ_{k=0}^∞ z^k = 1 + z + z² + z³ + · · · .

If we multiply the value of F(z) by z, the sum on the right side becomes

zF(z) = z + z² + z³ + · · · .

Now, by subtracting the second series from the first, all of the powers of z cancel (all the
way to infinity), and the only term that remains on the right is the 1, so

F(z) − zF(z) = 1

or

F(z) = 1/(1 − z).
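Because the series is geometric, this result is easy to spot-check numerically. The few lines of Python below are an illustration only (the test point z = 0.3 + 0.2j is an arbitrary choice with |z| < 1 so the series converges):

```python
# The geometric series F(z) = sum z^k converges to 1/(1 - z) for |z| < 1.
z = 0.3 + 0.2j                            # arbitrary complex test point, |z| < 1
partial = sum(z**k for k in range(200))   # truncation of the infinite sum
closed = 1 / (1 - z)
assert abs(partial - closed) < 1e-12
```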
As a second example, consider the sequence α^k, k = 0, 1, 2, . . . . This sequence is the
discrete version of the exponential function e^{at}, since the values of this function at the
times kΔt generate the sequence (e^{aΔt})^k = α^k (where α = e^{aΔt}). Following the same
steps as we used above, the z-transform of this sequence is F(z) = Σ_{k=0}^∞ α^k z^k = 1/(1 − αz).

It is a simple matter to work with the definition to create a table of z-transforms. This
table will allow you to solve any linear difference equation. For example, the discrete sine
can be generated using sin(ωkΔt) = (e^{iωkΔt} − e^{−iωkΔt})/(2i) and the above transform of α^k.
You can use the MATLAB connection to Maple to get some z-transforms. Try some
of these:
syms k n w z
simplify(ztrans(2ˆn))

This gives z/(z-2) as the result.


ztrans(sym('f(n+1)'))

This gives z*ztrans(f(n),n,z)-f(0)*z as the result.


ztrans(sin(k*n))

This gives z*sin(k)/(z^2-2*z*cos(k)+1) as the result.


Solutions of linear difference equations using z-transforms are very similar to the
techniques for solving differential equations using Laplace transforms. Just as the derivative
has a Laplace transform that converts the differential equation into an algebraic equation, the
z-transform of fk+1 , k = 0, 1, . . . , n, . . . , converts the difference equation into an algebraic
equation. To see that this is so, assume that the z-transform of fk , k = 0, 1, . . . , n, . . . , is
F (z). Then the z-transform of fk+1 , k = 0, 1, . . . , n, . . . , is


Σ_{k=0}^∞ f_{k+1} z^k = f_1 + f_2 z + f_3 z² + · · ·
                      = z^{-1} F(z) − z^{-1} f_0.
Notice that this is the same answer as we got when using Maple above (with z and 1/z
interchanged, since ztrans defines the transform as a series in powers of 1/z). From this, we
can see why Simulink uses 1/z as the notation for the “Unit Delay.”
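The shift property can be verified with a short numeric check. This Python snippet is illustrative only; it uses the book's convention F(z) = Σ f_k z^k on a short sequence (f_k = 0 beyond the listed values, so the finite sums satisfy the identity exactly):

```python
# Shift property: sum_k f_{k+1} z^k = z^{-1} F(z) - z^{-1} f_0
f = [3.0, 1.0, 4.0, 1.0, 5.0]          # arbitrary short sequence, zero afterward
z = 0.5 + 0.25j
F = sum(fk * z**k for k, fk in enumerate(f))
shifted = sum(f[k + 1] * z**k for k in range(len(f) - 1))
assert abs(shifted - (F - f[0]) / z) < 1e-12
```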

4.1.2 Fibonacci (Again) Using z-Transforms


Let us use the z-transform to solve the Fibonacci difference equation. We use the unit delay
z-transform above twice. The first time gives the z-transform of f_{k+2}, and the second gives
the transform of f_{k+1}. The z-transform of the Fibonacci equation is therefore

z^{-2}F(z) − z^{-2}f_0 − z^{-1}f_1 = z^{-1}F(z) − z^{-1}f_0 + F(z).

Solving for F(z) in this expression gives

(z^{-2} − z^{-1} − 1)F(z) = z^{-1}f_1 + z^{-2}f_0 − z^{-1}f_0;

therefore, we have the final algebraic equation for F(z):

F(z) = (−z^{-1} + z^{-2} + 2z^{-1})/(z^{-2} − z^{-1} − 1) = z^{-1}(z^{-1} + 1)/(z^{-2} − z^{-1} − 1).
Now we can factor the denominator of the function F(z) and write the right-hand side of
the above as a partial fraction expansion. The roots of the denominator polynomial (viewed
as a quadratic in z^{-1}) are φ_1 = (1 + √5)/2 and φ_2 = (1 − √5)/2 (note that φ_2 = 1 − φ_1),
so the partial fraction expansion is

F(z) = A/(z^{-1} − φ_1) + B/(z^{-1} − φ_2).

A and B are determined by using Heaviside’s method, wherein the value of A is obtained
by multiplying the left and right sides of the above expression by z^{-1} − φ_1 and then setting
the value of z^{-1} = φ_1, and similarly for B. The inverse z-transform for each of these
terms comes from the transforms above. There is a lot of algebra involved in this, so let us
just look at the answer (see Example 4.1):

f_n = (1/(2φ_1 − 1)) (φ_1^{n+1} − (1 − φ_1)^{n+1}).
This is the same result demonstrated in NCM [29].
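With φ_1 = (1 + √5)/2, the closed form can be checked against the recurrence directly. The Python sketch below is an illustration only (it is not part of the NCS library):

```python
import math

phi1 = (1 + math.sqrt(5)) / 2

def fib_closed(n):
    # f_n = (phi1^(n+1) - (1 - phi1)^(n+1)) / (2*phi1 - 1); note 2*phi1 - 1 = sqrt(5)
    return (phi1**(n + 1) - (1 - phi1)**(n + 1)) / (2 * phi1 - 1)

# Compare with the recurrence f_{n+2} = f_{n+1} + f_n, f_1 = 1, f_2 = 2
f = {1: 1, 2: 2}
for n in range(1, 19):
    f[n + 2] = f[n + 1] + f[n]
for n in range(1, 21):
    assert round(fib_closed(n)) == f[n]
```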
There is a connection between z-transforms and Laplace transforms that we will


develop later, but first let us look at practical applications of difference equations. Modern
technology such as cell phones, digital audio, digital TV, high-definition TV, and so on,
depends on taking an analog signal, processing it to make it digital, and then doing something
to the signal to make it easier to send and receive. Digital filtering permeates all of this
technology.

4.2 Digital Sequences, Digital Filters, and Signal Processing
The advent of digital technology for both telephone and audio applications has made digital
signal processing one of the most pervasive mathematical techniques in use today. The meth-
ods used are particularly easy to simulate with the digital blocks in Simulink’s “Discrete”
library. To understand what these blocks do to a digital sequence, we need to understand
the mathematics that underlies discrete time signal processing and digital control.

4.2.1 Digital Filters, Using z-Transforms, and Discrete Transfer Functions
To start, we will work with the exponential digital sequence we created above (see Sec-
tion 4.1.1). Assume that we are going to process a digital sequence fk using the sequence
α k . The digital filter is then

y_{k+1} = αy_k + (1 − α)f_k.

In this difference equation, the sequence f_k is the signal to be processed (where f_k is the
value of the signal f(t) at the times kΔt as k, an integer, increases from 0), and the sequence y_k
is the processed result. The simplest way to solve this equation is to use induction. Starting
at k = 0, with the initial condition y_0, we get the value of y_1 as

y_1 = αy_0 + (1 − α)f_0.

Now, with y_1 in hand, we can compute y_2 by setting k = 1 in the difference equation for
the filter. The result is as follows:

y_2 = αy_1 + (1 − α)f_1 = α(αy_0 + (1 − α)f_0) + (1 − α)f_1.

Thus,

y_2 = α²y_0 + α(1 − α)f_0 + (1 − α)f_1.
If we continue iterating the equation like this, a pattern rapidly emerges and can be used to
write the solution to the equation for any k. (Verify this assertion by continuing to do the
iteration.) This solution is

y_k = α^k y_0 + (1 − α) Σ_{j=0}^{k−1} α^{k−1−j} f_j.
Notice that α^k, the sequence we wanted, multiplies both the summation and the initial
condition.
We now use induction to show that this is indeed the solution. Remember that a proof
by induction follows these steps:

• Verify that the assertion is true for k = 0.


• Assume that the assertion is true for k, and show that it is then true for k + 1.

Because of the way that we generated the solution, it is clear that it is true for k = 0.
So next, assume that the solution above (for the index k) is true, and let us show that it is
true for k + 1.
From the difference equation y_{k+1} = αy_k + (1 − α)f_k, we substitute the postulated
solution for y_k to get

y_{k+1} = αy_k + (1 − α)f_k
        = α(α^k y_0 + (1 − α) Σ_{j=0}^{k−1} α^{k−1−j} f_j) + (1 − α)f_k
        = α^{k+1} y_0 + (1 − α) Σ_{j=0}^{k} α^{k−j} f_j.

This is exactly the solution that we postulated with the index at k + 1. Thus, by induction,
this is the solution to the difference equation.
Notice that one way of thinking about discrete equations in Simulink is that it imple-
ments the induction algorithm. It uses the definitions of the discrete process and starts at
k = 0, iterating until it reaches the nth sample.
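This step-by-step iteration is easy to mimic outside Simulink. The Python sketch below is an illustration only (the values of alpha, y0, and the input samples f are arbitrary); it iterates the difference equation and checks the closed-form solution at every step:

```python
alpha = 0.9
y0 = 0.25
f = [0.3, -1.2, 0.8, 0.5, -0.4, 1.1]   # arbitrary input samples

# Iterate y_{k+1} = alpha*y_k + (1 - alpha)*f_k, starting from y_0
y = [y0]
for k in range(len(f)):
    y.append(alpha * y[k] + (1 - alpha) * f[k])

# Closed form: y_k = alpha^k y_0 + (1 - alpha) sum_{j=0}^{k-1} alpha^(k-1-j) f_j
for k in range(len(y)):
    closed = alpha**k * y0 + (1 - alpha) * sum(
        alpha**(k - 1 - j) * f[j] for j in range(k))
    assert abs(y[k] - closed) < 1e-12
```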
If we take the z-transform of the difference equation y_{k+1} = αy_k + (1 − α)f_k, we get

z^{-1}Y(z) − z^{-1}y_0 = αY(z) + (1 − α)F(z).

Solving this for Y(z) gives

Y(z) = (z^{-1}/(z^{-1} − α)) y_0 + ((1 − α)/(z^{-1} − α)) F(z)
     = (1/(1 − αz)) y_0 + ((1 − α)z/(1 − αz)) F(z).
By comparing the z-transform above with the solution, we can conclude that the inverse
z-transform of ((1 − α)z/(1 − αz)) F(z) is the convolution sum (1 − α) Σ_{j=0}^{k−1} α^{k−1−j} f_j.
Note that this says that when a sequence whose z-transform is F(z) is used as an input to a
digital filter whose z-transform is H(z) (called the discrete transfer function, and for this
filter its value is H(z) = (1 − α)z/(1 − αz)), the product H(z)F(z) has an inverse transform
that is the convolution sum. This result, called the convolution theorem, provides the
rationale for the Simulink notation of using the z-transform of the filter inside a block. The
notation implies that the output of the block is the transfer function times the input to the
block, even though the output came from the difference equation and the result is a
convolution sum (when the system is linear). This slight abuse of notation allows for clarity
in following the flow of “signals” in the Simulink model because it maintains the operator
notation even when the block contains a transfer function.

[Block diagram: a Sine Wave source feeding a 1-alpha Gain into a Unit Delay (1/z), with
an alpha Gain in a feedback loop and a To Workspace block writing simout.]

Figure 4.4. Generating the data needed to compute a digital filter transfer function.
One of the major uses of digital filters is to alter the tonal content of a sound. Since
music consists of many tones mixed together in a harmonic way, it is useful to see what a
digital filter does to a single tone. Therefore, we use Simulink to build a model that will
generate the output sequence yk when the input is a single sinusoid at frequency ω. Thus, we
assume that f_k = A sin(ωkΔt), where Δt is the sample time. The process of converting
an analog signal to a digital number is called “sampling.” All digital signal processing uses
some form of sampling device that does this analog to digital conversion.
The discrete elements in the Simulink library handle the sampling process automat-
ically. Try building a Simulink model that filters an analog sine wave signal with the first
order digital filter yk+1 = αyk + (1 − α)fk before you open the model in the NCS library.
To run the model in the NCS library, type Digital_Filter at the MATLAB com-
mand line. Figure 4.4 shows the model.
In this model, the sampling of the sine wave occurs at the input to the Unit Delay block.
The sample time is set in a “Block Parameters” dialog box (opened by double clicking on
the Unit Delay block). In this dialog, the sample time was set to delta_t (an input from
MATLAB that is set when the model opens). This illustrates an important attribute that
Simulink uses. After sampling the signal, all further operations connected to the block that
does the sampling treat the signal as sampled (discrete). Thus, the Gain block operates on
the sampled output from the Unit Delay, and the addition occurs only at the sample times.
The dialog also allows setting the initial condition for the output. (We assume that the initial
condition is zero, the default value in the dialog box.)

4.2.2 Simulink Experiments: Filtering a Sinusoidal Signal and Aliasing
The digital filter model in Section 4.2.1 is set up to run 50 sinusoidal signals (each at a
different frequency) simultaneously, and as it runs, it sends the results of all 50 simulations
into MATLAB in the MATLAB structure “simout.”
The values for the various parameters in the model are in the MATLAB workspace
and have the following values:
>> delta_t

delta_t =

1.0000e-003 %(Sample time of 1 ms or sample frequency of 1 kHz)

>> alpha

alpha =

9.0000e-001

>> omega

omega =

Columns 1 through 25

1.0000e-001 1.2355e-001 1.5264e-001 1.8859e-001 2.3300e-001


2.8786e-001 3.5565e-001 4.3940e-001 5.4287e-001 6.7070e-001
8.2864e-001 1.0238e+000 1.2649e+000 1.5627e+000 1.9307e+000
2.3853e+000 2.9471e+000 3.6410e+000 4.4984e+000 5.5577e+000
6.8665e+000 8.4834e+000 1.0481e+001 1.2949e+001 1.5999e+001

Columns 26 through 50

1.9766e+001 2.4421e+001 3.0171e+001 3.7276e+001 4.6054e+001


5.6899e+001 7.0297e+001 8.6851e+001 1.0730e+002 1.3257e+002
1.6379e+002 2.0236e+002 2.5001e+002 3.0888e+002 3.8162e+002
4.7149e+002 5.8251e+002 7.1969e+002 8.8916e+002 1.0985e+003
1.3572e+003 1.6768e+003 2.0717e+003 2.5595e+003 3.1623e+003

The values for the 50 frequencies, the sample time for the filter, and the value of alpha are
stored as part of the model through a callback set by the Model Properties dialog.
The ability to cause calculations in MATLAB to run when the model opens or when
other Simulink actions occur is a feature of Simulink that you should understand.
After you have opened the model, go to the File menu and select “Model Properties.”
A Model Properties window will open, allowing you to enter and/or view information about
the model. It also allows you to select actions that occur at various events during the model
execution.
There are four tabs across the top of this window, denoted “Main,” “Callbacks,”
“History,” and “Description.” Figure 4.5 shows two of the tabs in the dialog. The window
opens, showing the contents of the Main tab. The Main tab is the top level of the window. It
shows the model creation date and the date we last saved it. It also shows a version number
(every time the model is changed or updated in any way, this number changes) and whether
or not the model has been modified. The second tab in the window is the Callbacks tab. In
this section, the user can specify MATLAB commands to execute whenever the indicated
action occurs. The possible actions are as follows.
Figure 4.5. Adding callbacks to a Simulink model uses Simulink’s model properties
dialog, an option under the Simulink window’s file menu.

• Model preload function: These commands run immediately before the model opens
for the first time. Note that it is here that the values of omega, the sample time
delta_t, and alpha are set. The values for omega are provided by the MATLAB
function “logspace,” which creates a set of equally spaced values of the log (base 10)
of the output (in this case, omega), where the two arguments are the lowest value
(here it is 10−1 ) and the highest value (3 × 103 here). The value set for the sample
time is 0.001 sec (corresponding to 1 kHz).

• Model postload function: These commands run immediately after the model loads
the first time.

• Model initialization function: These commands run when the model creates its initial
conditions before starting.

• Simulation start function: These commands run prior to the actual start (i.e., imme-
diately after the start arrow is clicked).

• Simulation stop function: These commands run when the simulation stops. In this
case, we have two MATLAB commands to first calculate the maximum value of all 50
signals. (Note that the maximum values are over the structure in MATLAB generated
by the “To Workspace” block.)

• Model presave function: These commands run prior to the saving the model.

• Model close function: These commands run prior to closing the model.
The History and Description tabs allow the user to save information about the number
of times the model opens and is changed (the History tab) and for the user to describe the
model.
The StopFcn callback, executed when the simulation stops, is

simmax = max(simout.signals.values);
semilogx(omega,simmax);
xlabel('Frequency "omega" in rad/sec')
ylabel('Amplitude of Output');
grid

The plot produced by this code is a semilog plot that shows what the digital filter does to
the amplitudes of a sinusoid at different frequencies. Notice that all of the low frequencies
(tones up to about 6 radians/sec, or about 1 Hz) are unaffected by the filter, whereas the
frequencies above that are attenuated to the point where a tone at about 3000 radians/sec
(about 500 Hz) is reduced in amplitude by 95%. For this reason, this filter is a “low pass”
filter. This means that low frequencies pass through the filter unchanged, and it attenuates
higher frequencies.

[Plot: Amplitude of Filtered Output vs. Frequency "omega" in rad/sec for the original 50
frequencies (10^-1 to about 3 × 10^3), showing a gain near 1 at low frequencies that falls
off above about 6 rad/sec.]

Let us see what happens if we input frequencies beyond 500 Hz. To see this, change the
values in the omega array by typing at the MATLAB command line omega =
logspace(2,3.8);. The second plot is the result of rerunning the model with these 50
values of omega. Instead of continuing to reduce the amplitude of the input, the filter’s
output amplitude starts to climb back up until, at the frequency 1/Δt (1 kHz), there is no
reduction in the amplitude of the sinusoid.

[Plot: Amplitude of Filtered Output vs. Frequency "omega" in rad/sec for omega =
logspace(2,3.8), showing the amplitude falling and then climbing back toward 1.]

[Figure: “Sampled Sine Wave -- Illustrating Aliasing.” A continuous sinusoid plotted from
0 to 0.02 sec with two sets of samples: samples taken every 0.002 sec look like a sinusoid,
while samples taken only at the times 0.0025 and 0.0125 sec make the sine seem to be a
constant for all times.]

Figure 4.6. Sampling a sine wave at two different rates, illustrating the effect of aliasing.
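The turnaround can be predicted from the difference equation itself. For the input f_k = sin(ωkΔt), the steady-state gain of y_{k+1} = αy_k + (1 − α)f_k is |H| with H = (1 − α)/(e^{iωΔt} − α). The short Python check below is an illustration only, using the model's α = 0.9 and Δt = 1 ms:

```python
import cmath
import math

alpha = 0.9
dt = 1e-3   # 1 ms sample time, as in the model

def gain(omega):
    # Steady-state amplitude ratio of the filter at frequency omega (rad/sec)
    return abs((1 - alpha) / (cmath.exp(1j * omega * dt) - alpha))

assert abs(gain(0.1) - 1.0) < 1e-3                # low frequencies pass unchanged
assert gain(math.pi / dt) < 0.06                  # ~95% reduction near half the sample rate
assert abs(gain(2 * math.pi / dt) - 1.0) < 1e-9   # aliasing: full amplitude again at 1 kHz
```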
Why does the amplitude not continue to decrease? It is because this is a sampled-data
signal. A close examination of what happens when we sample an analog signal to convert
it into a sequence of numeric values (at equally spaced sample times) reveals why.
Look carefully at the effect of sampling a 100 Hz sinusoid as illustrated in Figure 4.6.
When the sample time is 0.002 seconds (the * in the figure), the values track the
sinusoid as it oscillates up and down in amplitude. However, when the sample interval
exactly matches the period of the sinusoid (the black squares), the oscillatory behavior of
the sinusoid is lost. For the precisely sampled values shown, the amplitude after sampling
is always exactly 1. Thus, as far as the digital filter is concerned, the frequency of the input
is 0 (it is not oscillating at all). This is what causes the plot of the amplitude of the filtered
output to turn around starting at half of the sample frequency.
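The two sampling rates in Figure 4.6 are easy to reproduce numerically. The Python sketch below is illustrative only (the book's demonstration is the Simulink figure itself); it samples a 100 Hz sinusoid every 0.002 sec and then every 0.01 sec, one full period:

```python
import math

f_sig = 100.0   # 100 Hz sinusoid, period 0.01 sec
def sine(t):
    return math.sin(2 * math.pi * f_sig * t)

# Sampling every 0.002 sec catches the oscillation...
fast = [sine(k * 0.002) for k in range(10)]
assert max(fast) > 0.9 and min(fast) < -0.9

# ...but sampling every 0.01 sec (exactly one period, starting at 0.0025 sec
# as in the figure) returns the same value every time: a constant.
slow = [sine(0.0025 + k * 0.01) for k in range(10)]
assert all(abs(s - slow[0]) < 1e-9 for s in slow)
```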
We call this effect “aliasing.” It is exactly because of this that the digital standard
for audio CDs is to sample at 44.1 kHz (a sample time of 22.676 microseconds). With
this sample frequency, the audio frequencies up to 22.05 kHz are unaffected by the digital
conversion. Since the human ear does not really hear sounds that are above this frequency,
the aliasing effect is not perceived. There is a caveat, though: since aliasing takes the high
frequencies above 22.05 kHz and reduces them, these frequencies need to be removed (with
an analog filter) before the sampling takes place. The filter that does this is an antialiasing
filter. In Section 4.5.2, we will investigate how one must sample an analog signal to
ensure that aliasing is not a problem, but first we explore some of Simulink’s digital system
tools.

4.2.3 The Simulink Digital Library


Before we leave the subject of digital filters, let us look at some other methods that Simulink
has for digital filtering. You may have noticed as we built the digital models above that
there were other blocks in the Discrete library for digital filters. They are

• Discrete Filter,
• Discrete Transfer Function,
• Discrete Zero-Pole,
• Weighted Moving Average,
• Transfer Function First Order,
• Transfer Function Lead or Lag,
• Transfer Function Real Zero,
• Discrete State Space.

Each of these uses a different, but related, method for simulating the digital filter. Most
of these blocks use the z-transform of the filter to create the simulation of the filter. For
example, the Discrete Filter block uses the z-transform in terms of powers of 1/z. We use
this form of the digital filter because in some definitions of the z-transform the infinite
series is in terms of 1/z, not z. To change the numerator and denominator of the filter
transfer function, use the block parameters dialog box that opens when you double click on
the Discrete Filter block. The filter default is 1/(1 + 0.5z⁻¹). The numerator is 1, and
the denominator is entered using the MATLAB notation [1 0.5], which, as the help at the
top of the dialog box shows, is for ascending powers of 1/z (see Figure 4.7). Notice in the
figure that when you change the numerator and denominator, the icon in the Simulink model
changes to show the new transfer function.
To try this block, let us simulate the Fibonacci sequence with it. Set up a new model
and drag the Discrete Filter block into it. Open the dialog by double clicking and enter the
vector [1 −1 −1] as the Denominator. The vector sets the coefficients of the powers of z⁻¹,
from the lowest to the highest (the MATLAB convention for polynomials). Notice that the icon
changes for the Discrete Filter to display the denominator polynomial. Leave the numerator at 1.
Transfer functions do not have initial conditions, so we need to find a way to specify
that the Fibonacci sequence start with initial values of 1 and 2. To do this we use a block
that causes a signal to have a value at the start of the simulation. The block we need is
the IC block, which is in the library called Signal Attributes. Grab an IC block and drag
it into the model. The IC block has an input, but we do not need to use it. You can leave

Figure 4.7. Changing the numerator and denominator in the Digital Filter block
changes the filter icon.

the input unconnected, but every time you run the model, you will receive the annoying
message

Warning: Input port 1 of ’untitled/IC’ is not connected.

To eliminate this message, there is a connection in the Sources library called "Ground."
All it does is provide a dummy connection for the block, thereby eliminating the message.
The last thing is to connect a Scope to the output. The IC block has a default value of 1,
which is acceptable because starting the Fibonacci sequence with initial values of 0 and 1
will still generate the sequence.
Figure 4.8 shows the model (called Fibonacci2 in the NCS library) and the simulation
results from the Scope block.
Comparing this with the result generated in the earlier version of the Fibonacci model
shows that the results are the same (except for the initial conditions).
The first seven versions of the discrete filter in the list above are all variations on
this block. To understand the subtleties of the differences, spend some time modeling the
Fibonacci sequence using each of these.
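To see concretely what the Discrete Filter block computes with numerator 1 and denominator [1 −1 −1], note that this transfer function corresponds to the difference equation y_k = y_{k−1} + y_{k−2} + u_k. A small Python sketch of that recursion (an illustration of the arithmetic, not the Simulink implementation) shows that driving the filter with a unit impulse generates the Fibonacci numbers:

```python
def discrete_filter(num0, den, u):
    """Simulate y[k] = num0*u[k] - den[1]*y[k-1] - den[2]*y[k-2],
    the difference equation of num0 / (den[0] + den[1] z^-1 + den[2] z^-2)."""
    y = []
    for k, uk in enumerate(u):
        yk = num0 * uk
        if k >= 1:
            yk -= den[1] * y[k - 1]
        if k >= 2:
            yk -= den[2] * y[k - 2]
        y.append(yk)
    return y

# Denominator [1, -1, -1] as entered in the block dialog; impulse input.
impulse = [1] + [0] * 9
print(discrete_filter(1, [1, -1, -1], impulse))
# -> [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```

In the Simulink model the IC block plays the role of this impulse by forcing a nonzero value at the start of the simulation.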

Figure 4.8. Using the Digital Filter block to simulate the Fibonacci sequence.

4.3 Matrix Algebra and Discrete Systems


We looked at state-space models for continuous time linear systems in Chapter 2. There is
an equivalent model for discrete systems.
Let us return to the Fibonacci sequence and use Simulink in a different way to show
some attributes of the sequence. Before we do though, let us look at the state-space version
of the Fibonacci sequence. When we talked about the digital filters in Simulink, we had
a list of eight different ways, the last of which was a state-space model. We did not go
into the details at the time because we had not developed the state-space model. We saw
how to convert a continuous time state-space model into an equivalent discrete model in
Section 2.1.1. We can convert the Fibonacci sequence difference equation into a discrete
state-space model directly. The steps are as follows.
Use the values on the right-hand side of the Fibonacci sequence (f_k and f_{k+1}) as the
components of a vector as follows:

    x_k = [ f_k ; f_{k+1} ].

From the definition of the sequence f_{n+2} = f_{n+1} + f_n, with f_1 = 1 and f_2 = 2, we get the
state-space vector-matrix form from the fact that the first component of x_{k+1} is the second
component of x_k and the second component of x_{k+1} is the left-hand side of the difference
equation. Thus,

    x_{k+1} = [ f_{k+1} ; f_{k+2} ] = [ 0 1 ; 1 1 ] x_k,
    x_0 = [ 1 ; 1 ],
    f_k = [ 1 0 ] x_k.

a) Simulating the Fibonacci Sequence with the Discrete Library State-Space Block
b) Simulating the Fibonacci Sequence Using Simulink's Automatic Vectorization

Figure 4.9. State-space models for discrete time simulations in Simulink.

Notice that the initial conditions are not the values we used previously, but since we start
the iteration in the state-space model at k = 0, we have made the initial value of f0 = 1,
which is consistent with the initial values we used previously (since f2 = f1 + f0 = 2).
We use two methods to create this model. The first model for this state-space de-
scription uses the Discrete State Space block in the Simulink Discrete library. The model
is shown in Figure 4.9(a). (It is called Fib_State_Space1 in the NCS library.)
When this model runs, the Display block shows the values for the sequence. The
model has been set up to do 11 iterations. To see other values, highlight the number 11 in
the window at the right of the start arrow in the model window, and change its value to any
number (but be careful: the sequence grows without bound).
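The recursion the Discrete State-Space block carries out can be checked with a few lines of Python (an illustrative stand-in for the Simulink model, not its actual code): iterating x_{k+1} = [0 1; 1 1] x_k from x_0 = [1; 1] and reading out y = [1 0] x reproduces the value 144 that the Display block shows after 11 iterations:

```python
def iterate_state(x, n):
    """Apply x_{k+1} = A x_k n times, with A = [[0, 1], [1, 1]]."""
    for _ in range(n):
        x = [x[1], x[0] + x[1]]
    return x

x = iterate_state([1, 1], 11)  # x_11 = [f_11, f_12]
output = x[0]                  # y = [1 0] x picks off f_11
print(output)                  # -> 144
```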
To build a Simulink model for this equation that uses Simulink’s vector-matrix capa-
bilities, we use the blocks for addition, multiplication, and gains from the Math Operations
library (as we did when we created the state-space model for continuous time systems).
Figure 4.9(b) shows this model (it is Fib_State_Space2 in the NCS library).
Note that the Constant block from the Sources library provides, as an input, the matrix
on the right side of the state-space equation above. The Multiply and the Gain blocks from
the Math Operations library are used to do the matrix multiply and the calculation of the
output. The dialogs from the two Math Operations blocks are shown in the following figures.

The dialog on the left is for the Matrix Constant block, and as can be seen, its value
has been set to the MATLAB matrix [0 1; 1 1]. The dialog on the right is for the
Multiply block. It specifies that there are two inputs through the "**" entry (two products),
which may be increased to as many values as desired (including the use of "/" to denote
matrix inverse). The multiplication type comes from the pull-down menu next to the
Multiplication: annotation. The menu has only two options: Matrix (*) and Element-wise (.*),
where the notation in parentheses indicates the operation as if it were MATLAB notation.
The initial conditions for the iteration are set using the dialog that opens when you
double click the Unit Delay block. As above, the initial values are [1; 1].
The iteration we are doing creates the Fibonacci sequence in a vector form that will
allow us to show some interesting facts about the sequence. So let us do some exploring.
The matrix [0 1; 1 1] can be thought of as [f_0 f_1; f_1 f_2], since we assume that the initial
value of f_0 is 0 and the values of f_1 and f_2 are both 1. Therefore, after the first iteration
of the difference equation we have

    x_2 = [ f_2 ; f_3 ] = [ 0 1 ; 1 1 ] x_1 = [ 0 1 ; 1 1 ] [ f_0 f_1 ; f_1 f_2 ] x_0
        = [ f_1 f_2 ; f_0 + f_1  f_1 + f_2 ] x_0
        = [ f_1 f_2 ; f_2 f_3 ] x_0.
Notice that after this iteration the matrix multiplying x_0 is in exactly the same form as
when we started, except that the subscripts are all one greater than in the initial matrix.
If the iteration continues, the matrix form is the same (i.e., the value of x_k after k
iterations is [ f_{k−1} f_k ; f_k f_{k+1} ] x_0; in Exercise 4.2 we ask you to use induction to
prove this).
If we take the determinant of this matrix, we see that it is

    det [ f_{k−1} f_k ; f_k f_{k+1} ] = f_{k+1} f_{k−1} − f_k².

Let us use our model to show that this determinant is

    f_{k+1} f_{k−1} − f_k² = ±1.

The Simulink library does not contain an explicit block to compute the determinant of a
matrix, so we will use some of the “User Defined Functions” blocks. There are five different
ways that the user may define a function in Simulink. In this library are blocks that
• create embedded code directly from MATLAB instructions,
• call any MATLAB function (this block does not compile the MATLAB code),
• create a C-code function.
There are also variations on these blocks where a function uses a rather arcane but
useful form and a version of the C-code function that uses MATLAB syntax for those who
refuse to learn C. We will use only two of these blocks: the MATLAB function and the
Embedded MATLAB blocks.
The model that was created is shown below. (It is called Fib_determinant in the
NCS library.) In this model, the MATLAB Function block uses the single function “det”
directly from MATLAB to compute the determinant (which is set using the dialog that opens
when you double click on the block), and the “Embedded MATLAB Function” block has
the following simple code to compute the determinant of the 2 × 2 matrix input u. Since
the result of using the MATLAB function and the Embedded MATLAB function are the
same, you might legitimately wonder why there are two blocks. The reason has to do with
calling MATLAB from Simulink. Stand-alone code does not have access to MATLAB,
so the MATLAB Function block will not work. The Embedded MATLAB block, on the
other hand, creates exportable C code, so when the code compiles it works as a stand-alone
application.
function d = det2(u)
% An embeddable subset of the MATLAB language is supported.
% This function computes the determinant of the 2x2 matrix u.

d=det(u);

The model is in Figure 4.10. The first time this model runs, the embedded code
compiles into a dll file that executes each time the model runs.
When the model runs, 13 iterations result, and the determinant plot appears in the
Scope block (see Figure 4.11). We will use this model to illustrate the computational
aspects of finite precision arithmetic. If the number of iterations is set to 39, the determinant
from the MATLAB function shows a value of 2, and the determinant from the embedded
MATLAB function shows a value of 0. If we continue past 39 iterations, say 70, we still
get zero from the embedded function, but we get 2.18e+013 for the determinant from the
MATLAB function. What is happening here?
The problem is that we are at the limit of the precision of the computations. The
values for the Fibonacci sequence are at 8e+014, so the products of the values in the
equation f_{k+1} f_{k−1} − f_k² are on the order of 10^29, and the difference is therefore less
than the least significant bit in the calculation. The built-in MATLAB function starts to fail
as soon as the terms in this function get to about 10^18, whereas the Embedded MATLAB block
protects the calculation from underflow by making the value 0. This still works only for so
long; eventually even this strategy fails. (To see this, try 80 iterations; at this number of
iterations none of the determinant values work.)
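You can reproduce this loss of precision without Simulink. The Python sketch below (an analogue of the experiment, not the book's model) evaluates f_{k+1} f_{k−1} − f_k² with exact integers, where it is always ±1 (Cassini's identity), and with double-precision floats, where the products eventually exceed the 53-bit significand and the ±1 difference is lost:

```python
def fib(n, cast=int):
    """Return [f_0, ..., f_n] with f_0 = f_1 = 1, stored in the given number type."""
    fs = [cast(1), cast(1)]
    for _ in range(n - 1):
        fs.append(fs[-1] + fs[-2])
    return fs

N = 55
exact = fib(N, int)     # arbitrary-precision integers
approx = fib(N, float)  # IEEE double precision, like the Simulink signals

det_exact = [exact[k + 1] * exact[k - 1] - exact[k] ** 2 for k in range(1, N)]
det_float = [approx[k + 1] * approx[k - 1] - approx[k] ** 2 for k in range(1, N)]

print(set(det_exact))   # -> {1, -1}: the identity holds exactly for every k
print(det_float[10])    # -> 1.0: the products still fit in the 53-bit significand
print(det_float[50])    # no longer +/-1: the products are near 1e21, so the
                        # true difference of +/-1 is far below the last bit kept
```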

Figure 4.10. A numerical experiment in Simulink: Does f_{k+1} f_{k−1} − f_k² = ±1?

Figure 4.11. 13 iterations of the Fibonacci sequence show that f_{k+1} f_{k−1} − f_k² = ±1 (so far).

This exercise is an example of how you could use Simulink to check that some
mathematical result is true. If you wanted to show that fk+1 fk−1 − fk2 = ±1, and you did
not have any idea if it were true or not, you could build the model and try it. It would be
immediately obvious that the values are ±1 for the number of iterations you used.

You then could try to prove the result. The ability to use the pictorial representation
of the premise quickly to create numerical results can almost immediately tell you if the
premise is true. Now that we know f_{k+1} f_{k−1} − f_k² = ±1 is true, Exercise 4.3 asks
that you use induction to prove it. (Use the fact that det [ 0 1 ; 1 1 ] is −1.)

4.4 The Bode Plot for Discrete Time Systems


In Section 4.2.2, we created a frequency response plot, but it used a numerical experiment
on the Simulink model, which is ponderous at best. There is a simpler way, based on the
z-transforms that we explore now.
In order to compute the Bode plot for a discrete system we need to understand the
mapping from the continuous Laplace variable to the discrete z-transform variable. To see
this we need to investigate the Laplace transform of a sampled signal f(t) (i.e., a signal that
exists only at the sample times kΔt) when we shift it in time by Δt. If we assume that the
Laplace transform of f(t) is F(s), then

    ∫₀^∞ e^{−st} f(t + Δt) dt = ∫_{Δt}^∞ e^{−s(τ−Δt)} f(τ) dτ = e^{sΔt} F(s) − e^{sΔt} f(0).

The last step used the fact that the Laplace transform is from t = 0 to infinity, and in the
first line of the equation the integral starts at Δt. Since the function f(t) is discrete, f(0)
is the only value not in the integral when it starts at Δt.

Comparing this to the z-transform derived in Section 4.1.1, we see that z⁻¹ = e^{−sΔt},
or z = e^{sΔt}.
With this information, we would like to find the discrete (z-transform) transfer function
for the discrete time state-space model. We have seen two ways for developing the discrete
state-space model. When we developed the solution for the continuous time state-space
model in Section 2.1.1, we showed that the result of making the system discrete in time was
the model

    x_{k+1} = Φ(Δt) x_k + Γ(Δt) u_k,
    y_k = C x_k + D u_k.

In developing the discrete state-space model of the Fibonacci sequence above, we went
directly from the difference equation to the discrete state-space model. (In this case the
matrices were not determined from the continuous system, and they are not functions of
Δt.) In either case, the form of the equations is the same. Taking the z-transform of this
gives

    zX(z) − z x_0 = Φ(Δt) X(z) + Γ(Δt) U(z).



Thus the discrete transfer function of the system, H(z), is the z-transform with the initial
conditions set to zero, so we have

    H(z) = Z{y_k} / Z{u_k} = C (zI − Φ(Δt))⁻¹ Γ(Δt) + D.

The Bode plot of the discrete system from the state-space form of the model comes from
setting z = e^{iωΔt} in the above derivation. That is, we need to compute

    H(e^{iωΔt}) = C (e^{iωΔt} I − Φ(Δt))⁻¹ Γ(Δt) + D,

which has exactly the same form as the continuous transfer function except iω is replaced
by e^{iωΔt}. (Remember that Δt is the sample time of the discrete process and is therefore
a constant.) The manipulations of the state-space model for both continuous and discrete
systems in MATLAB are the same, so the connection from Simulink to MATLAB for the
Bode plot calculation is identical. We can go back now to the digital filter example in
Section 4.2.2 above and use the Control System Toolbox to calculate its Bode plot.
Open the digital filter model by typing “Digital_Filter1” at the MATLAB command
line; the model that opens is the same as the model we created in Section 4.2.2, but the
sinusoid input has been deleted since this is not needed to create the Bode plot. As we did
above, under the “Tools” menu select “Control Design” and the submenu “Linear Analysis.”
The “Control and Estimation Tools Manager” will open. In the model, right click on the line
coming from the Input block and select the “Input Point” sub menu under the “Linearization
Points” menu item. Similarly, select “Output Point” (under the same menu) for the line
going to the Output block. Selecting these input and output points causes a small I/O icon
to appear on the input and output lines in the model. In the “Control and Estimation Tools
Manager” GUI, select the Bode response plot for the plot linearization results and then click
the Linearize Model button. The LTI Viewer starts, and right click to select the Bode plot
under “Plot Types.” The plot of the amplitude and phase of the discrete filter as created
by the Viewer (Figure 4.12) stops at the frequency 3.1416 radians/sec because this is the
frequency at which the Bode plot for the discrete system starts to turn around and repeat.
(This is called the "half sample frequency," and it is equal to π/Δt.)
Compare this plot with the amplitude plot created by the Simulink model Digital_Filter
in Section 4.2.2. You will see that it is the same. The important difference is that the
computation of the Bode plot using this method is far more accurate (and faster) than using
a large number of sinusoidal inputs as we did there.
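The substitution z = e^{iωΔt} can also be evaluated directly to see why the Bode plot turns around at the half sample frequency. The sketch below uses the Discrete Filter default H(z) = 1/(1 + 0.5z⁻¹) and assumes a sample time of Δt = 1 second (so π/Δt ≈ 3.1416 rad/sec, as in Figure 4.12); the magnitude is symmetric about π/Δt, and the whole response repeats every 2π/Δt:

```python
import cmath
import math

DT = 1.0  # assumed sample time (seconds)

def freq_resp(w):
    """H(z) = 1/(1 + 0.5 z^-1) evaluated at z = exp(i*w*DT)."""
    z = cmath.exp(1j * w * DT)
    return 1.0 / (1.0 + 0.5 / z)

w = 1.3
half = math.pi / DT                                  # half sample frequency
sym = abs(freq_resp(half - w)) - abs(freq_resp(half + w))
rep = abs(freq_resp(w) - freq_resp(w + 2 * math.pi / DT))

print(abs(sym) < 1e-9, rep < 1e-9)  # -> True True (mirror at pi/DT, period 2*pi/DT)
```

The mirror symmetry comes from H(e^{i(π+δ)Δt}) being the complex conjugate of H(e^{i(π−δ)Δt}) for a filter with real coefficients.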

4.5 Digital Filter Design: Sampling Analog Signals, the Sampling Theorem, and Filters
In 1949, C. E. Shannon published a paper in the Proceedings of the Institute of Radio
Engineers (the IRE was an organization that became part of the Institute of Electrical and
Electronics Engineers, or IEEE) called “Communication in the Presence of Noise.” This
landmark paper introduced a wide range of technical ideas that form the backbone of modern
communications. The most interesting of these concepts is the sampling theorem. It gives
conditions under which an analog signal can be 100% accurately reconstructed from its

Figure 4.12. The Bode plot for the discrete filter model developed using the Control
System Toolbox interface with Simulink.

discrete samples. Because it is such an important idea, and because it is so fundamental, it
is very instructive to work through the theorem to understand why, and how, it works.

4.5.1 Sampling and Reconstructing Analog Signals


When we introduced discrete signals above, they were simply a sequence of numbers. In
addition, their z-transform was the sum of the sequence values multiplied by powers of z.
In Section 4.4, we saw that the frequency response results when e^{iωΔt} is substituted
for z (thereby giving H(e^{iωΔt})). When the digital signal is the result of sampling an analog
signal, we need a way of representing this fact. The method must maintain the connection
with the analog process.
Equating a sampled analog signal to a sequence of its values at the sample times loses
the analog nature of the process (and besides is really only one of the mathematical ways
of representing the process). An alternative is to represent the sampled signal as an analog
signal that is a sequence of impulses (analog functions) multiplying the sample values.
Doing this gives an alternate representation of the sampled sequence {s_k} as the analog
signal s*(t):

    s*(t) = Σ_{k=0}^{∞} s_k δ(t − kΔt).

With this definition, we can take the Laplace transform of s*(t) as

    S*(s) = ∫₀^∞ e^{−st} s*(t) dt = ∫₀^∞ e^{−st} ( Σ_{k=0}^{∞} s_k δ(t − kΔt) ) dt.

The integral and the sum commute in the last term above, so

    S*(s) = Σ_{k=0}^{∞} ∫₀^∞ e^{−st} s_k δ(t − kΔt) dt = Σ_{k=0}^{∞} s_k e^{−s kΔt}.

Now to understand the sampling theorem, assume that we have been sampling the signal
for a very long time so that the signal is in steady state. In that case, the Fourier transform
describes the frequency content of the signal. The difference between the Laplace and
Fourier transform is in the assumption that for the Fourier transform the time signal began
at −∞. The steps that created the sampled Laplace transform above are the same, except
that, since the time signal exists for all time, the sum and integral are double sided:
    S*(ω) = ∫_{−∞}^{∞} e^{−iωt} ( Σ_{k=−∞}^{∞} s_k δ(t − kΔt) ) dt = Σ_{k=−∞}^{∞} s_k e^{−iωkΔt}.

Now, the sampling theorem is as follows:


If a continuous time signal s(t) has the property that its Fourier transform is zero for
all frequencies above ω_m or below −ω_m (the Fourier transform of s(t), S(ω) = 0 for
|ω| ≥ ω_m), then we can perfectly reconstruct the signal from its sample values at the
sample times Δt = π/ω_m (or if the signal is sampled faster) using the infinite sum

    s(t) = Σ_{k=−∞}^{∞} s_k · sin(ω_m t − kπ) / (ω_m t − kπ).

This theorem is critical to all applications that use digital processing since it assures
that after the signal processing is complete the signal may be perfectly reconstructed as long
as the sampling was originally done at a rate at least twice as fast as the highest frequency
in the signal. As an aside, Claude Shannon proved this result, and it was published in the
Proceedings of the I.R.E. in 1949 [39].
The proof of this assertion is straightforward. Refer to Figure 4.13 as we proceed
through the steps in the proof.
The first step is to take the function S(ω) and expand it into an infinite series (using
the Fourier series to do so). The result is

    S_expanded(ω) = Σ_{k=−∞}^{∞} S_k e^{−ikπω/ω_m}.

The value of S_k comes from the Fourier expansion

    S_k = (1/(2ω_m)) ∫_{−ω_m}^{ω_m} S(ω) e^{ikπω/ω_m} dω.

a) Signal and its Sampled Values. b) Fourier Transform of the Signal. c) Result of
replicating S(ω) an infinite number of times.

Figure 4.13. Illustrating the steps in the proof of the sampling theorem.

Since the inverse Fourier transform of S(ω) is the signal s(t) = (1/2π) ∫_{−∞}^{∞} S(ω) e^{iωt} dω,
we get that the value of S_k is (π/ω_m) s_k. Therefore, S_expanded(ω) is the same as S*(ω)
(to within the constant factor π/ω_m), as shown in Figure 4.13. That is,

    S_expanded(ω) = (π/ω_m) Σ_{k=−∞}^{∞} s_k e^{−ikπω/ω_m} = (π/ω_m) S*(ω).

The last step in the proof is now simple. In order to recover the signal we simply need to
multiply the Fourier transform of the sampled signal by the function p(ω) defined by

1, −ωm < ω < ωm ,
p(ω) =
0, elsewhere.

Figure 4.13 shows the rectangular function drawn on top of the transform of the sampled
signal. Clearly, the product is the transform of the original analog signal.
The inverse Fourier transform of p(ω) is the "sinc" function given by

    (ω_m/π) · sin(ω_m t − kπ) / (ω_m t − kπ).

The proof of the result follows immediately because the inverse Fourier transform of the
product of S*(ω) and p(ω) is the convolution of this function and the inverse transform of
S*(ω), which is just the sum of impulses defining s*(t) that we started with.
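The reconstruction sum is easy to test numerically. The Python sketch below (an illustration with assumed parameters, not a model from the NCS library) samples a 5 Hz sinusoid at 50 Hz, so ω_m = π/Δt sits well above the signal's frequency, and rebuilds the signal at a time between samples; the only error left comes from truncating the infinite sum:

```python
import math

DT = 0.02   # sample time: 50 Hz sampling, so the band edge is at 25 Hz
F0 = 5.0    # frequency of the band-limited test signal (well below 25 Hz)

def signal(t):
    return math.sin(2 * math.pi * F0 * t)

def sinc(x):
    return 1.0 if abs(x) < 1e-12 else math.sin(x) / x

def reconstruct(t, kmax=500):
    """Truncated version of s(t) = sum_k s_k sin(w_m t - k pi)/(w_m t - k pi),
    with w_m = pi/DT, using the samples k = -kmax..kmax."""
    wm = math.pi / DT
    return sum(signal(k * DT) * sinc(wm * t - k * math.pi)
               for k in range(-kmax, kmax + 1))

t = 0.0123  # a time that falls between sample instants
print(abs(reconstruct(t) - signal(t)))  # small; only truncation error remains
```

Widening the window of samples (larger kmax) drives the residual error down further, as the theorem promises for the infinite sum.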

[Model layout: a Sum of Sines block generates 15 sine waves at frequencies given by freqs
(set by a callback when the model loads); the summed input goes to an A/D Converter
(sampled at 0.1 msec), then to a D/A filter with transfer function 2π·5000/(s + 2π·5000);
Scope blocks display the signals and the D/A error.]

Figure 4.14. Simulation illustrating the sampling theorem. The input signal is 15
sinusoidal signals with frequencies less than 500 Hz.

The function p(ω) eliminates all frequencies in the sampled signal above the frequency
ωm , and we have already seen that such a filter is a low pass filter. This is the main reason
we need to create good low pass analog filters.
For various technical reasons, it is impossible to build a filter that has a frequency
response that is exactly p(ω). This means that we are always searching for a good
approximation. The next section shows that there are numerous ways to come up with an
approximation and introduces the design of analog filters. These filters are prototypes of
digital filters, so this section is important for both the implementation of the sampling
theorem and for digital filter design, but first let us do some simple simulations to
illustrate these concepts.
Because of the sharp discontinuity in the filter function, the ideal low pass filter
represented by the function p(ω) above is not the result of using a finite dimensional system
(i.e., a system represented by a finite order differential equation, having a transfer function
whose magnitude is |H (iω)|). Therefore, it is necessary to figure out how best to create
a good approximation. As you should guess, the approximation must not have the sharp
corner, so the transition from the region where the gain of the filter is 1 to the region where
the gain is 0 must be smooth.
In Figure 4.14, we show a Simulink model that uses the sine block (from the Sources
library) to create 15 continuous time sinusoidal signals. The signals are then summed
together and sampled at 0.1 msec, using a sample and hold operation (the “zero order hold”
block in the Discrete library). Once again, you should try to create this model from scratch
using the Simulink library rather than loading it from the NCS library using the MATLAB
command Sampling_Theorem.
The approximation we use for the low pass filter is the simple first order digital filter
from Section 4.2.2. The frequency for the filter has been set at 5 kHz.
The sinusoidal frequencies in the simulation come from the MATLAB vector freqs,
whose values are: 65, 72.8, 74.7, 82, 89.3, 91, 99.5, 103.2, 125, 180.2, 202.1, 223.3, 230.3,
310.3, and 405.2 Hz. Because the sample frequency of 10 kHz is 20 times the highest
frequency in the signal, the digital to analog (D/A) reconstruction of the signal using the
simple first order filter is not too bad. (The error is about 8%, as can be seen from the D/A
Error Scope.)
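The size of that error is consistent with a direct evaluation of the first order filter's magnitude. A quick Python check (using the model's cutoff of 5 kHz and two representative frequencies; the numbers are for illustration) shows that the signal band passes almost untouched while content near the 10 kHz sample rate is only cut roughly in half, which is one reason the reconstruction error is not smaller:

```python
import math

FC = 5000.0  # filter cutoff (Hz), as set in the model

def mag(f_hz):
    """|H(i*2*pi*f)| for the first order filter H(s) = wc/(s + wc), wc = 2*pi*FC."""
    return 1.0 / math.sqrt(1.0 + (f_hz / FC) ** 2)

print(round(mag(405.2), 4))    # -> 0.9967  (highest signal frequency in freqs)
print(round(mag(10000.0), 4))  # -> 0.4472  (sample frequency: attenuated, not removed)
```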

Table 4.1. Using low pass filters. Simulating various filters with various sample times.

    Sample Time   Filter Type    Filter Freq.   Standard Dev. of Error Observed
    0.001         First Order    1000 Hz        2.093
    0.001         Second Order   1000 Hz        1.902
    0.0001        First Order    1000 Hz        0.872
    0.0001        Second Order   1000 Hz        0.632

This simulation can investigate changes in the sample frequency relative to the fre-
quencies in the signal. As a first numerical experiment, try changing the sampling frequency
to 500 Hz. (Make the sample time in the A/D Converter block 0.002.) Make note of the
magnitude of the error.
Next, change the filter to 1 kHz by changing the parameter values in the Transfer Function
block to 2*pi*1000 in both the numerator and denominator. Note the error for this
simulation. Remember that this filter is not a good approximation to the ideal low pass
filter. (If you go back to Section 4.2.2 and look at the frequency response plot for this
filter, you can see that for this filter the amplitude is reduced 50% at 1000 Hz and is
reduced only 90% at 10 kHz.)
A better filter is one that has a more rapid reduction in amplitude. Higher order filters
will do this. For example, double click the filter block and change the filter parameters
to the values in the figure above. This makes the transfer function for the filter

    H(s) = (2π·1000)² / (s² + 1.414·(2π·1000)·s + (2π·1000)²).

As we will see next, this is an example of a Butterworth filter.
Run the simulation using 1 kHz sampling and record this error. You should see very
little difference between the two filters when the sample time and the filter functions are
set to what the sampling theorem says are the appropriate values. Again, this is because
we are not filtering the signal with the ideal filter required by the theorem. To see that the
filters work well when the sampling is done at a higher rate than the minimum value of the
sampling theorem, change the sample time back to 0.1 msec, as we started with, and rerun
the simulation. Table 4.1 summarizes our results from these simulations.

Figure 4.15. Specification of a unity gain low pass filter requires four pieces of
information.

4.5.2 Analog Prototypes of Digital Filters: The Butterworth Filter


We did some simple analog low pass filter design in the previous section to illustrate the
conversion of a digital to an analog signal. We also need to have analog filters as follows.

• Since the Fourier transform of a signal that we want to sample must have no fre-
quencies above half the sample frequency, a low pass antialiasing filter used prior to
sampling ensures that this is true.

• We can design a digital filter using the analog filter as the starting point and then
  converting the result using some type of transformation.

So, with these reasons as motivation, let us explore some analog filters.

Remember that the transfer function of a filter satisfies H(s)|_{s=iω} = |H(iω)| e^{i∠H(iω)}. Three
"bands" on the amplitude plot specify this filter, as illustrated in Figure 4.15. The first of
these bands is the “pass band,” which represents the region where the signal is not attenuated
(the region whose frequencies are below ωm ). The second region is the “transition band,”
where the amplitude gradually reduces to an acceptable minimum. (Because the frequency
response of a finite dimensional system is the ratio of polynomials, it is impossible for the
amplitude to be exactly zero except at an infinite frequency.) The last region is the “stop
band,” where the filter is below the acceptable value. In the figure, there are three additional
parts to the specification: the acceptable gain change over the pass band, the acceptable
amplitude of the signal in the stop band, and the frequency at the transition band limit.

These parameters are ε₁, ε₂, and ω_m_acceptable, respectively, and they specify bounds on the
transfer function as follows:

In the pass band,

    1 − ε₁ ≤ |H(iω)| ≤ 1 for |ω| ≤ ω_m.

In the stop band,

    |H(iω)| ≤ ε₂ for |ω| ≥ ω_m_acceptable.

The analog low pass filters that are most frequently used are the Butterworth, Chebyshev, and elliptic filters. The first of these, the Butterworth filter, comes from making the filter transfer function as smooth as possible in each of the regions. The filter has as many derivatives of its transfer function as possible equal to 0 at the frequencies 0 and infinity.
Since the magnitude of the transfer function is |H(iω)| = √(H(iω)H(−iω)), it is usual to use the square of the magnitude in specifying the filter transfer function (eliminating the square root). Therefore, the Butterworth filter is

    |H_Butter(iω)|² = 1 / (1 + (ω/ω_m)^(2n)).
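As a quick numerical aside (a Python sketch, not part of the book), the squared magnitude above equals 1/2 at ω = ω_m for every order n, which is the familiar −3 dB point of the Butterworth filter:

```python
import numpy as np

def butter_mag_sq(w, wm, n):
    """Squared magnitude of an nth-order Butterworth low pass filter."""
    return 1.0 / (1.0 + (w / wm) ** (2 * n))

wm = 2 * np.pi * 10000  # design frequency in rad/s (10 kHz)
for n in (1, 2, 8):
    # At the design frequency the gain is always 1/sqrt(2), i.e. -3 dB
    assert np.isclose(butter_mag_sq(wm, wm, n), 0.5)
    # Well below the design frequency the gain is essentially 1
    assert butter_mag_sq(0.01 * wm, wm, n) > 0.99
```

Raising the order n sharpens the transition but leaves the −3 dB point fixed at ω_m.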

This function has the property that its derivatives (up to the (2n − 1)st) are zero at ω = 0 and at ω = ∞. Exercise 4.3 asks that you show that this assertion is true. The Laplace
transform for the filter can be determined by using the fact that

    |H(iω)|² = H(s)H(−s)|_{s=iω}.

Therefore, the filter transfer function is the result of factoring

    H(s)H(−s) = 1 / (1 + (s/(iω_m))^(2n)).

The poles of the transfer function are the roots of the denominator, given by the equation (s/(iω_m))^(2n) = −1; equivalently, the 2n roots of this polynomial are the 2n complex roots of −1, scaled and rotated:

    s_k = (−1)^(1/(2n)) (iω_m).

These poles are equally spaced around a circle with radius ωm . The values with negative real
parts define H (s), and those with positive real parts become H (−s), so the transfer function
for the desired Butterworth filter is stable. It is easy to create this filter in MATLAB. We
have included in the NCS library an M-file called butterworthncs that uses the above to
determine the Butterworth filter poles and gain for any filter order. The code performs the
calculations shown below:
function [denpoles, gain] = butterworthncs(n, freq)
% The Butterworth filter transfer function.
%
% Use the zero-pole-gain block in Simulink with the kth pole given by the
% formula:
%                 (1/(2n))
%   p_k = i*(-1)          * 2*pi*freq,  keeping only poles with real part < 0
%
% Where:
%   freq = the Butterworth filter design frequency in Hz.
%   n    = order of the desired filter
%   i    = sqrt(-1)

denroots = (i*roots([1 zeros(1,2*n-1) 1]))*2*pi*freq; % BW poles = roots
denpoles = denroots(find(real(denroots)<=0));         % of -1 in left plane

% Zero out the imaginary part of the real pole (there is only one real
% pole, and then only when n is odd; because of the limited precision
% in computing roots, its imaginary part will not be exactly zero):
index = find(abs(imag(denpoles))<1e-6);    % imag. part is set to 0
denpoles(index) = real(denpoles(index));   % when it is negligible.

% To ensure the steady state gain is 1, the Butterworth filter gain must
% be the radius of the Butterworth circle (2*pi*freq) raised to the nth
% power:
gain = (2*pi*freq)^n;

To exercise this code, let us design an eighth order Butterworth filter for the problem we investigated above. The poles and gain result from typing the following command in MATLAB:

[BWpoles, BWgain] = butterworthncs(8, 10000)

Now let us filter the same signal we filtered above, but this time we build a Simulink model using these poles and gain. A new model similar to the Sampling_Theorem model above is in Figure 4.16. (This model is Butterworth in the NCS library.)
The only difference in this model is that instead of the transfer function block, the filter comes from the zero-pole-gain block in the Simulink continuous library. The dialog values for this filter are set using the output of the M-file butterworthncs. (The outputs of this M-file are the gain, BWgain, and the poles, BWpoles.) We designed the filter for a sample frequency of 10000 Hz. Since the maximum frequency contained in the signal is only about 500 Hz, this filter should do a good job of reconstructing the sampled signal. It does: the root mean square (rms) error in the reconstruction is only 0.0761, significantly smaller than the rms value for the simple second order filter we used above. Notice that the model contains a delay block (called Transport Delay in the model) that delays the input signal by 1.1 msec to account for the Butterworth filter's phase shift that delays
[Figure: the Butterworth model — 15 summed sine waves (frequencies 65 to 405.2 Hz, unit amplitudes, generated at a sample time of 1e-5 sec, set by a callback when the model loads) are sampled at 0.1 msec, filtered by the zero-pole-gain Butterworth block of order n (BWgain, BWpoles computed by butterworthncs), and compared with the transport-delayed analog input to form the error.]
Figure 4.16. Butterworth filter for D/A conversion using the results of the M-file
“butterworthncs.”

the output signal. By delaying the input before the error is computed, this filter lag is accommodated. (Remember that in signal processing, time lags in the reconstruction of the data are acceptable because there is usually a large spatial separation between the source and the site of the reconstruction. Think of transmission via the internet or via a radio link.)
We do not have to go through these machinations every time we want to design a filter. Simulink makes it easy to add new tools. These tools are blocksets, and the first of them that we will look into is the Signal Processing Blockset, which contains built-in blocks from which any analog (or digital) filter can be created. In the next section, we experiment with analog and digital filter blocks from the Signal Processing Blockset to design Butterworth and other filters.

4.6 The Signal Processing Blockset


Analog filters, digital filters, and many other signal-processing components are available in the Simulink add-on tool called the Signal Processing Blockset. This tool has many unique features that allow analog and digital filters to be designed, modeled, and then coded. (Signals can be analog, digital, or mixed analog-digital; digital signals can have multiple sample times.) Among the features of the tool is the ability to capture a segment of a time series (in a buffer) for subsequent "batch" processing. The tool also permits processing temporal data using "frames," where the computations wait until a frame of data is collected, and then the processor operates on the entire frame in a parallel operation. The tool allows digital signal processing models and components to use computations that have limited precision and only fixed-point calculations. This last feature couples with a coding method that generates C code and HDL code output for special purpose signal processing applications on a chip. (The code comes directly from the Simulink model.) We will not describe how to do this in this chapter, but we will touch on some of the features in Chapter 9.
4.6.1 Fundamentals of the Signal Processing Blockset: Analog Filters
To understand the capabilities and become familiar with the Signal Processing Blockset, open the library. The figure at the right, a snapshot of the Library browser, shows ten different categories of model elements. They are Estimation, Filtering, Math Functions, Quantizers, Signal Management, Signal Operations, Sinks, Sources, Statistics, and Transforms. Explaining the details of many of these blocks would take us beyond the scope of this book, so we will leave out the Estimation library and some of the blocks in the Filtering and Math Functions libraries.
We start our discussion by navigating the Filtering library. Let us use the model we created above, but this time we add filtering blocks from the Signal Processing library.
Open the model butterworth_sp (shown in Figure 4.17), which now includes a block from the Filtering library that automatically designs an analog filter of any type. The list of possible filters goes well beyond the Butterworth that we have been exploring so far. When you open the model, it will contain the Butterworth filter we designed using the butterworthncs M-file along with the Filter block from the Signal Processing Blockset. This block contains the design specifications for the same Butterworth filter.
This model illustrates another powerful feature of Simulink. In the process of making a change to a model (perhaps a change that does nothing but simplify it, as we are doing here), we need to worry that the change might introduce an error. The simple expedient of comparing the two calculations using the summation block (to compute their difference) creates an immediate and unequivocal test for the accuracy of the calculations. The difference should be on the order of the numerical precision. The result, displayed in a Scope block, contains both the Signal Processing Blockset results and the difference between the NCS library design and the Signal Processing Blockset design of the Butterworth filter. The Scope we call "Compare SP block with NCS block" gives the plot shown in Figure 4.17(b). As can be seen in this figure, the difference between the
[Figure: the butterworth_sp model — the sum-of-sines source (15 sine waves at frequencies 65 to 405.2 Hz, set by a callback when the model loads) feeds both the Analog Filter Design block from the Signal Processing Blockset ("butter") and the NCS Butterworth zero-pole-gain filter computed by butterworthncs; a summation block forms the difference between the two filter outputs, and a Transport Delay aligns the analog input for the D/A error.]
a) Simulink Model with Butterworth Filter from the Signal Processing Blockset and the Filter Designed using butterworthncs.

[Plots: the SP Blockset filter output (top) and the difference between the NCS and SP library Butterworth filters (bottom, on the order of 1e-14), over 0 to 0.25 sec.]
b) Reconstruction of an Analog Signal using a Filter Designed with the Signal Processing Blockset.

Figure 4.17. Using the Signal Processing Blockset to design an analog filter.

two implementations of the filter is mostly less than 2 × 10⁻¹⁴, which is well within the numerical tolerances of the calculations.
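The same implementation-comparison trick works anywhere two realizations of one filter exist. As a hedged Python aside (my example, not the book's model), we can compare a transfer-function realization against a second-order-sections realization of the same digital Butterworth filter:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)  # test input

# One 4th-order digital Butterworth low pass, realized two ways
b, a = signal.butter(4, 0.2, output='ba')
sos = signal.butter(4, 0.2, output='sos')

y_ba = signal.lfilter(b, a, x)    # direct transfer-function form
y_sos = signal.sosfilt(sos, x)    # cascaded second order sections

# The two realizations agree to within numerical round-off
assert np.max(np.abs(y_ba - y_sos)) < 1e-8
```

Just as in the Simulink model, the difference signal is an immediate test that a restructured filter still computes the same thing.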
We can now experiment with the entire range of filters that the Signal Processing
Blockset offers. Note that as the filter changes, the icon for the filter shows a plot of the

Table 4.2. Comparison of the error in converting a digital signal to an analog signal using different low pass filters.

    Filter Type     10th Order Time Delay (msec)    A/D Error
    Butterworth     1.10                            0.0761
    Chebyshev I     1.47                            0.4576
    Chebyshev II    0.58                            0.1859
    Elliptic        0.49                            0.3393 to 0.5108
    Bessel          0.132                           0.274

frequency response of the transfer function, along with the name of the filter type. This provides a direct visual cue to the type of filter so that when we review it in the future, we know precisely the original intent (and if for any reason the picture and/or the type do not match the original intention or the specifications, it is readily apparent).
Five different filters are available from the analog filter design block. Try each of these filters in turn and record the error in the reconstruction of the original analog signal. We did this experiment, with the results tabulated in Table 4.2. (Each filter has a different time lag, so modify the delay time in the Transport Lag block to account for this.) The elliptic filter stop band ripple specification ranges over 0.1–2 dB, hence the range of error values in the table.

4.6.2 Creating Digital Filters from Analog Filters

We have seen how one can use the state-space model to create a digital system that has the same response as the analog system when the input to the system is a step. In Section 2.1.1, the discrete time solution of a continuous state variable model was determined to be

    x_{k+1} = Φ(Δt) x_k + Γ(Δt) u_k,

where

    Φ(Δt) = e^{AΔt}   and   Γ(Δt) = ∫₀^{Δt} e^{Aτ} B dτ,
and the MATLAB file c2d_ncs computes these matrices. We can use these methods of forming a discrete time system with a response that is equivalent to the analog system to make a digital filter from the analog filter. In this instance the responses are equivalent in the sense that both the analog and digital filters have the same step response. (In Exercise 4.5 you will show that this is true.) Digital filters are usually made equivalent (using this approach) to the analog Butterworth, Chebyshev, Bessel, or elliptic filters. The other equivalence that one can have is "impulse equivalence," where the impulse responses of the analog and digital filters are the same (but not the step responses).
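SciPy offers the same step-invariant (zero-order-hold) conversion. The sketch below is a Python aside, not from the book (the c2d_ncs file itself is MATLAB); it discretizes a simple first order lag and checks the matrices against the formulas above:

```python
import numpy as np
from scipy.linalg import expm
from scipy.signal import cont2discrete

# First order lag  x' = -2x + 2u  (time constant 0.5 s)
A = np.array([[-2.0]]); B = np.array([[2.0]])
C = np.array([[1.0]]);  D = np.array([[0.0]])
dt = 0.01

Phi, Gamma, _, _, _ = cont2discrete((A, B, C, D), dt, method='zoh')

# Phi is the matrix exponential e^(A*dt) ...
assert np.allclose(Phi, expm(A * dt))
# ... and Gamma is the integral of e^(A*tau)*B over [0, dt], which for
# invertible A reduces to A^-1 (Phi - I) B
assert np.allclose(Gamma, np.linalg.inv(A) @ (Phi - np.eye(1)) @ B)
```

The resulting (Φ, Γ) pair reproduces the analog system's step response exactly at the sample instants.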
A third approach for the creation of an equivalent digital filter uses an approximation of the mapping between the Laplace and z-transform variables. The approximation is the bilinear transformation, given by

    s = (2/Δt) (1 − z⁻¹) / (1 + z⁻¹).

This mapping in the complex plane is one-to-one,
so the inverse mapping is unique and is (noting that this form is also bilinear)

    z = (1 + (Δt/2)s) / (1 − (Δt/2)s).

The denominator in this mapping is of the form 1/(1−x) = 1 + x + x² + ⋯, so the value of z is

    z = (1 + (Δt/2)s)(1 + (Δt/2)s + (Δt/2)²s² + ⋯)
      = 1 + Δt s + (Δt²/2)s² + ⋯.
This approximation is very close to the Taylor series for e^{sΔt}, the actual mapping from the Laplace variable to the z variable. However, because the bilinear mapping is one-to-one, there is no ambiguity when this transformation is used. A powerful feature of this transformation is that there is one frequency at which the phases of the transfer functions for the analog and digital filters are the same. You can select this frequency using a technique called prewarping. Any of these approaches is selectable in the filter design block from the Signal Processing Blockset. In the next section, we look at one of these.
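As a hedged Python illustration (the example filter is mine, not the book's), scipy.signal.bilinear applies this transformation; since s = 0 maps exactly to z = 1, the DC gain survives the conversion unchanged:

```python
import numpy as np
from scipy.signal import bilinear

# Analog first order low pass with cutoff 100 rad/s: H(s) = 100/(s + 100)
b_analog, a_analog = [100.0], [1.0, 100.0]
fs = 1000.0  # sample frequency in Hz (dt = 1/fs)

b_dig, a_dig = bilinear(b_analog, a_analog, fs)

# s = 0 maps exactly to z = 1, so the DC gains must agree
dc_analog = b_analog[0] / a_analog[-1]          # H(0)
dc_digital = np.sum(b_dig) / np.sum(a_dig)      # H(z) at z = 1
assert np.isclose(dc_analog, dc_digital)
```

At higher frequencies the mapping warps the frequency axis, which is exactly what prewarping compensates for at one chosen frequency.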

4.6.3 Digital Signal Processing


One of the consequences of using a linear system described by a differential equation to filter data is the time lag that we encountered with the low pass filters in the previous section. In signal processing, these are "infinite impulse response," or iir, filters. If it is necessary to eliminate this time lag, then the filter transfer function needs to be real at all frequencies. (If it is complex, then the imaginary part of the filter transfer function corresponds to a phase shift that results in the time lag.) For an iir filter, the only way the transfer function can be real is if the poles are symmetric with respect to the imaginary axis (i.e., for any pole with negative real part, there is an equivalent pole with positive real part). Since poles in the right half plane (with positive real parts) are unstable, it is clearly impossible to build an iir filter that processes the signal sequentially and has no phase shift. The next best attribute we can ask of an iir filter is that its phase be linear. This is possible, and many filter designs impose this criterion in the development of the filter specification.
There is an alternative to an iir filter; it is the "finite impulse response," or fir, filter.
The fir filter does not require the solution of a differential equation but instead relies on the convolution of the input with the filter response to create the output. The most important attribute of a fir filter is that it can always be given linear phase. Let us explore this attribute with some Simulink models. The first model we create uses one of the simplest and, for many applications, most useful of all fir filters. This is the moving average filter, which computes the mean of some number of past samples of a signal. The filter is

    y_k = (1/n) Σ_{i=0}^{n−1} u_{k−i}.
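A quick Python sketch of the same filter (an aside; the book's model uses Simulink blocks): averaging n samples of zero-mean white noise shrinks its standard deviation by roughly 1/√n:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
u = rng.standard_normal(50_000)  # zero-mean, unit-variance input

n = 200
b = np.ones(n) / n  # n-point moving average: y_k = (1/n) * sum of u_{k-i}
y = lfilter(b, [1.0], u)

# After the filter fills up, the averaged output is much closer to zero
assert np.std(y[n:]) < 3 / np.sqrt(n)   # ~1/sqrt(200) expected, with margin
assert np.std(u) > 0.9                  # while the input stays at unit level
```

This is the same behavior the Simulink simulation shows: the 200-point average of unit-variance noise hovers near zero.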

[Figure: a) the FIR filter Simulink model — a Band-Limited White Noise source drives a state-space realization (A = diag(ones(1,n-2),1), b = [zeros(n-2,1);1], c = ones(1,n-1)/n, d = 1/n) with a unit delay and a Scope; b) simulation results for the 200 point moving average using the state-space model, which stay near zero.]

Figure 4.18. Moving average FIR filter simulation.

The z-transform of this filter (its transfer function) is

    H(z) = (1/n)(1 + z⁻¹ + ⋯ + z^(−(n−1))) = (z^(n−1) + z^(n−2) + ⋯ + z + 1) / (n z^(n−1)).

Thus, this filter has n − 1 poles at the origin. This means only that you have to wait for n
samples before there is an output.
The process for computing the output yk is to accumulate (in the sense of adding to
the previous values) n1 times the current and past values of the input. The Simulink model
that generates the fir n-point moving average filter (Figure 4.18) is in the NCS library. It is
Moving_Avg_FIR. Exercise 4.6 asks that you verify that this model is indeed the state-space
model of the filter.
The simulation computes the average of the input created by the Band-Limited White Noise block, which we investigate in more detail in the next chapter. The important information about this block is that its output is a sequence of Gaussian random variables with zero mean and unit variance (so the moving average should be near zero). When the model opens, the callback sets n to 200, so the average is over the previous 200 samples. The result is in Figure 4.18(b).
You can change the number of points in the moving average by changing the variable n in MATLAB. Beware, however, that this implementation of the filter is extremely inefficient, so as n gets larger, the time it takes to compute the moving average increases dramatically.
This last point illustrates a very important aspect of filter design: the design approach makes a dramatic impact on the time it takes to perform the computations and on the accuracy of the result. Let us explore this with the Digital Filter design block in the Signal Processing Blockset. The model Moving_Avg_FIR_sp in the NCS library uses the signal processing Digital Filter block to create an fir filter that is identical to the state-space model above. Double click on the "Digital Filter" block in the model, and the dialog shown at the right will appear. This dialog allows you to select the filter type (in the "Main" tab); in this case, we selected FIR (all zeros). Next, we select the filter structure, the feature of the block that allows a more robust implementation. This is not critical in the simulation of the filter (although using a bad implementation, as we did above, can cause a long wait for the simulation to be completed), but it is absolutely critical in implementing a filter in a real-time application. Using the smallest number of multiplies, adds, and storage elements can greatly simplify the filter and make its execution much more rapid. In this implementation, we have used the "Direct Form" of the filter with the numerator coefficients equal to 1/n*ones(1,n), where in the simulation, n = 200.
One last feature of the Filter block is the ability to view the filter transfer function as a frequency plot (and in other views). Clicking the "View Filter Response" button creates the plot shown at the right. The plot opens showing the magnitude of the filter transfer function plotted versus the normalized frequency. The viewer also allows you to look at
• the phase of the filter,
• the amplitude and phase plotted together,
[Figure: a) pole-zero plot of the moving average filter — 199 poles at the origin and zeros spaced around the unit circle; b) the filter properties listing.]

Figure 4.19. Pole-zero plot and filter properties for the moving average filter.

• the phase delay,

• the impulse or the step response,

• the poles and zeros of the filter,

• the coefficients of the filter (both numerator and denominator polynomials),

• the data about the filter’s form,

• issues with the filter implementation, by viewing the filter magnitude response under limited precision implementations.

We select these views using the buttons along the top of the figure (circled in the figure). Two of the more interesting outputs from this tool are the pole-zero plot and the filter properties. Figure 4.19 shows these for the moving average filter. Notice that the filter properties give a count of the number of multiplies, adds, and states, and the number of multiplies and adds per input sample. (In this case the numbers are the same, since the fir filter requires only a multiplication and an addition for each of the zeros.)
We will explore some of the filter structures and their computation counts in Exercise 4.6. For now, we can really appreciate the difference the implementation makes by using the sim command in MATLAB to simulate both of the moving average Simulink models. The MATLAB code for doing this is

tic;sim('Moving_Avg_FIR');t1=toc;
tic;sim('Moving_Avg_FIRsp');t2=toc;
tcomp = [t1 t2]

The results from this code (on my computer) are

tcomp =
18.6036 0.5747
The difference is so dramatic because the first implementation (the state-space model) has a state matrix A that is 199 × 199. At every iteration, this matrix multiplies the previous state. Then there is a vector addition, followed by a vector multiply and an addition (for the feed-through of the input). Use this as a guide for determining the calculation count in Exercise 4.6.
In general, all fir filters have the form of a sum of present and past values of the input multiplied by coefficients that are the desired (finite) impulse response. Thus, the general fir filter and its z-transform are

    y_k = Σ_{i=0}^{n−1} h_i u_{k−i}   and   H(z) = Y(z)/U(z) = Σ_{i=0}^{n−1} h_i z^{−i} = h₀ Π_{i=1}^{n−1} (1 − zero_i z⁻¹).

Notice that this filter has n − 1 zeros (zero_i) that are determined by factoring the polynomial H(z). Because the highest power of z⁻¹ in this transfer function is z^(−(n−1)), the filter has n − 1 poles at the origin (which corresponds to the fact that the filter output is only complete after n samples; there is a lag before the correct output appears). This appears in the response of the moving average filter from the Simulink model above.
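For the n-point moving average, the factoring is explicit (a Python aside, not from the book): the numerator z^(n−1) + ⋯ + 1 equals (z^n − 1)/(z − 1), so the n − 1 zeros are the nth roots of unity with z = 1 removed, all on the unit circle:

```python
import numpy as np

n = 8
b = np.ones(n) / n            # moving average impulse response
zeros = np.roots(b)           # zeros of the transfer function

assert len(zeros) == n - 1
# Every zero lies on the unit circle ...
assert np.allclose(np.abs(zeros), 1.0)
# ... and none of them is z = 1, so the filter passes DC
assert np.all(np.abs(zeros - 1.0) > 1e-6)
```

The zeros at the unit-circle harmonics are what notch out the frequencies the averaging rejects, while the missing zero at z = 1 preserves the mean.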
The discussion above was for a filter that computes the mean of n samples. Because of the study of this type of filter in statistics, all-zero filters are called "moving average" (or MA) filters. The statisticians also developed iir filters that were all poles, and they dubbed these autoregressive (or AR) filters. Last, if a filter has both poles and zeros, it is an ARMA filter. The filter design block allows you to create all three types of filters. The dialog also allows the user to specify where the filter coefficients are set in the model. The option we have used is to set them through the filter mask dialog. (They are computed from the poles and zeros set in the dialog.) You can also select an option where the filter coefficients are an input (they can then be computed in some other part of the Simulink model for use in this block, thereby changing the filter coefficients "on the fly"), or the coefficients can be created in MATLAB using an object called DFILT. Finally, the dialog allows the user to force the filter to use fixed-point arithmetic. This leads us to the discussion in the next section.

4.6.4 Implementing Digital Filters: Structures and Limited Precision

Very few analytic methods allow one to visualize the effects of limited precision arithmetic on a filter. The use of simulation in this case is mandatory. Therefore, let us explore some of the consequences of designing a filter for use in a small, inexpensive computer that, say, has only 16 bits available.

[Plot: the input signal, a sum of 15 sinusoids, over 0 to 2 sec.]

Table 4.3. Specification for the band-pass filter and the actual values achieved in the design.

    Specification         Spec. Values (−3 dB point)    Actual Values
    Sample Freq.          10 kHz                        10 kHz
    Pass Band #1 Edge     100 Hz                        100 Hz
    Stop Band #1 Edge     110 Hz                        109.7705 Hz
    Pass Band #2 Edge     120 Hz                        120.2508 Hz
    Stop Band #2 Edge     130 Hz                        130 Hz
    Stop Band 1 Gain      −60 dB                        −60 dB
    Transition Width      10 Hz                         10 Hz
    Pass Band Ripple      1 dB                          1 dB

[Figure: the sum-of-sines source ("Generate 15 Sine Waves at Frequencies given by freqs," set by a callback when the model loads) feeds the Bandpass Filter Designer block and a floating point ("double") Digital Filter; a scope views the signals.]

Figure 4.20. The Simulink model for the band-pass filter with no computational limitations.

The first question we need to ask is, what is the problem? We have created a very simple example of a digital signal-processing task that will allow us to explore some of the limited precision arithmetic features built into the Signal Processing Blockset. The model, called Precision_testsp1 (or 2) in the NCS library, is a simulation of a device that one might design to find a single tone in a time series that consists of a multitude of tones. We might use it, for example, in a frequency analyzer or in a device to tune a wind instrument during its manufacture. It consists of a band-pass filter with a narrow frequency range (10 Hz in this case) and a sample rate of 10 kHz. The input to the filter is the same sum of sinusoids that we have used previously, namely sine waves at the frequencies 65, 72.8, 74.7, 82, 89.3, 91, 99.5, 103.2, 125, 180.2, 202.1, 223.3, 230.3, 310.3, and 405.2 Hz, shown in the figure above.
The band-pass filter specification is in Table 4.3, and the Simulink model
Precision_testsp1 that we use to test the design is in Figure 4.20. If you open this
model, you will see that it contains the Bandpass Filter block from the “Filter Design
Toolbox” Simulink library. This block converted the specification in the table into filter
coefficients used by the Digital Filter block. The digital filter block allows you to take the
floating point design and convert it to an equivalent fixed point design.
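A comparable band-pass design can be sketched in Python with SciPy (an aside; the book uses the Filter Design Toolbox block, and the exact design algorithm may differ). scipy.signal.iirdesign accepts the same kind of pass/stop edge specification, here with the 110–120 Hz pass band described in the text:

```python
import numpy as np
from scipy import signal

fs = 10_000.0  # sample frequency, Hz
# Pass band 110-120 Hz, stop bands below 100 Hz and above 130 Hz,
# 1 dB pass band ripple, 60 dB stop band attenuation
sos = signal.iirdesign(wp=[110, 120], ws=[100, 130], gpass=1, gstop=60,
                       ftype='ellip', output='sos', fs=fs)

w, h = signal.sosfreqz(sos, worN=4096, fs=fs)
mag = np.abs(h)

# A tone in the middle of the pass band gets through nearly unattenuated...
assert mag[np.argmin(np.abs(w - 115))] > 0.85
# ...while a tone well inside the stop band is suppressed by about 60 dB
assert mag[np.argmin(np.abs(w - 80))] < 1.2e-3
```

Requesting `output='sos'` returns the filter directly as cascaded second order sections, the structure discussed later in this section.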
Figure 4.21 shows the amplitude-versus-frequency plot for the filter designed by the
“Band Pass Filter Design” block. (Open the model in the NCS library and double click
on this block to see the specification and create this plot.) The solid lines in the figure are
the filter amplitude, as designed, and the dotted lines are the specification from the table
[Figure: magnitude response (dB) of the band-pass filter versus frequency (0 to 0.18 kHz), dropping from 0 dB in the pass band to below −60 dB in the stop bands.]

Figure 4.21. The band-pass filter design created by the Signal Processing Blockset
digital filter block.

above. When this filter is used, the digital filter response is almost indistinguishable from the response of an analog filter. We have deliberately designed this filter for a frequency range that is not contained in the 15 sine waves that make up the input. (The pass band is from 110 to 120 Hz; none of the sinusoids fall in this frequency range.)
Thus, the output of the filter should be very small. This is in fact the case, as can be seen in the response plot (Figure 4.22(a)). The amplitude of the output (after the initial transient dies down) is about 0.01 (about 1/1000 the amplitude of the input signal), which is very good for detecting that the tone is not present.
The next part of the design is to place the filter pass band in the area where we know there is a tone. Thus, let us try to pick out the tone at 103.2 Hz by specifying that the pass band corners will be 100 and 110 Hz (so the values of Fstop1, Fpass1, Fpass2, and Fstop2 are 90, 100, 110, and 120, respectively). This is so easy to do in the design that it amounts to a trivial change (in contrast to designing this filter by hand). You should try this, if you have not already. If you have not, the Simulink model Precision_testsp2 in the NCS library has the changes in it. Open this model and double click the Bandpass Filter block to view the changes that were made. (The dialog that opens has the new design for the pass band.) When you run this model, the response should look like the plot shown in Figure 4.22(b). The maximum output is now about 1.5 (compared to the 0.15 above), showing that there is a tone in the 10 Hz frequency range from 100 to 110 Hz.
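The detection idea is easy to reproduce in Python (a hedged sketch with only two test tones, not the book's full 15-sine input): a narrow band-pass filter's output is large only when an in-band tone is present:

```python
import numpy as np
from scipy import signal

fs = 10_000
t = np.arange(0, 2.0, 1 / fs)
# Narrow band-pass over 100-110 Hz (8 poles via cascaded sections)
sos = signal.butter(4, [100, 110], btype='bandpass', fs=fs, output='sos')

def band_rms(x):
    """RMS of the band-pass filter output, skipping the startup transient."""
    y = signal.sosfilt(sos, x)
    return np.sqrt(np.mean(y[fs:] ** 2))

in_band = np.sin(2 * np.pi * 103.2 * t)    # tone inside 100-110 Hz
out_band = np.sin(2 * np.pi * 310.3 * t)   # tone far outside the band

# The in-band tone passes nearly unattenuated; the out-of-band tone is crushed
assert band_rms(in_band) > 10 * band_rms(out_band)
```

Thresholding the output RMS is exactly the tone-detection logic the Simulink model demonstrates.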
The filter structure that we use for both of the band-pass filters is the "Second Order Section" (or Direct Form Type II) structure. When a digital filter is implemented using floating-point operations, the structure of the filter does not usually matter, but when the filter is for a fixed-point computer, as we intend here, the way that the filter is structured can make a huge difference. For this reason, the Signal Processing Blockset includes design methods for a wide variety of filter structures.

[Plots: a) output of the band-pass filter when no signal is in the pass band (amplitude within about ±0.15); b) band-pass filter output when one of the sinusoidal signals is in the pass band (amplitude about ±1.5), both over 0 to 2 sec.]

Figure 4.22. Band-pass filter simulation results (with and without signal in the
pass band).

Why is the implementation method an issue for fixed-point calculations? In Section 4.4, we showed that the transfer function for a discrete linear system comes from the state-space model as

    H(z) = Z{y_k}/Z{u_k} = C(zI − Φ)⁻¹ Γ + D.

The poles of the discrete system are the eigenvalues of Φ(Δt), given by det(zI − Φ(Δt)) = 0, so using the denominator polynomial to create a digital filter will result in coefficients that range from the sum of the eigenvalues (or poles) to their product. All of the eigenvalues (poles) of the z-transform transfer function are less than one in magnitude because the matrix Φ(Δt) is

    Φ(Δt) = e^{AΔt} = T⁻¹ diag(e^{λ₁Δt}, e^{λ₂Δt}, …, e^{λₙΔt}) T,

where T is the matrix that diagonalizes the matrix A (and Φ(Δt)). The denominator of the z-transform is the determinant of zI − Φ(Δt) and is the polynomial Π_{j=1}^{n} (z − e^{λⱼΔt}). (Exercise 4.7 asks that you show that this is true.)
(Exercise 4.7 asks that you show that this is true.)

If this polynomial defines the filter denominator, the precision needed to store the
coefficients would range over many orders of magnitude. Consider, for example, the fourth
order filter with poles at 0.1, 0.2, 0.3, and 0.4; its denominator polynomial is
z^4 − z^3 + 0.35z^2 − 0.05z + 0.0024, so the coefficients span almost three orders of magnitude. Each
coefficient would therefore require at least 10 bits just to get one bit of accuracy. The
simplest way to avoid this is to break the filter up into cascaded second order systems.
Second order sections can also eliminate complex arithmetic: the denominator polynomial of
a second order section built from a complex pole pair is
z^2 − (e^{λi Δt} + e^{λi* Δt})z + e^{(λi + λi*)Δt}, whose coefficients are always real.
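This coefficient-range argument is easy to check numerically. A quick sketch (in Python/NumPy here rather than MATLAB, since it is just polynomial arithmetic; np.poly builds a polynomial from its roots the way MATLAB's poly does):

```python
import numpy as np

# Denominator polynomial built from the poles 0.1, 0.2, 0.3, 0.4.
den = np.poly([0.1, 0.2, 0.3, 0.4])
print(den)          # coefficients 1, -1, 0.35, -0.05, 0.0024

# Largest-to-smallest coefficient ratio: almost three orders of magnitude,
# so roughly 9 extra bits are needed just to cover the range.
span = np.max(np.abs(den)) / np.min(np.abs(den))
print(span)         # about 417

# Cascaded second order sections keep every coefficient of order one.
print(np.poly([0.1, 0.2]))   # 1, -0.3, 0.02
print(np.poly([0.3, 0.4]))   # 1, -0.7, 0.12
```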

The cascaded second order sections that the filter designer created for this model
can be seen by looking under the mask (by right clicking on the Bandpass Filter block
and selecting "Look Under Mask" from the menu that opens). The block that appears is the
"Generated Filter" block, and we open it by double clicking. When this block opens, you
will see the cascaded second order sections (unconnected) that make up the filter (as shown
in the figure below).

[Diagram: the generated filter is a cascade of second order sections, each built from unit
delay (z⁻¹) blocks and gain blocks holding the section scale factors s(1)–s(4) and the
coefficients a(i)(j) and b(i)(j).]

The coefficients are in gain blocks, and we can inspect them by double clicking on
the block. It is very useful to become familiar with navigating around the Signal Processing
Blockset generated models this way.
Now that a filter design exists, we impose the requirement that the calculations must
use a precision of 16 bits. Designing a filter for limited precision arithmetic is simple
when the filter design block creates the design: once the design is complete, the same
block produces the implementable digital filter. Before we do this, we need to think
a little about what we mean by limited precision arithmetic and digital computing without
floating-point calculations.
Whenever a calculation uses floating-point arithmetic, the computer automatically
normalizes the result (using the scaling of the floating-point number) to give the maximum
precision. (Among the many references on the use of floating point, [31] describes how
to use a block floating-point realization of digital filters.) The normalized version of any
variable in the computer is

x = ±(1 + f) · 2^e,

where f is the fraction (or mantissa) and e is the exponent.
The fraction f is always nonnegative and less than one and is a binary number with at most
52 bits. The exponent (in 64-bit IEEE format) is always in the interval −1022 ≤ e ≤ 1023.
Because of the limited size of f, there is a limit to the precision of the number that can
be represented. (In MATLAB, this is captured by the variable eps = 2^−52, the value for
any computer using the IEEE floating-point standard for 64-bit words.) The exponent has a
similar effect, except that its maximum and minimum values determine the smallest and the
largest numbers we can represent. It would pay to read Section 1.7 of NCM [29] to make
sure that you understand the concept of limited precision.
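The ±(1 + f)·2^e decomposition and the meaning of eps can be verified directly. A small sketch (Python; math.frexp returns the mantissa as 0.5 ≤ m < 1, so its exponent is offset by one from the 1 + f convention used above):

```python
import math

x = 6.5
m, p = math.frexp(x)     # x = m * 2**p with 0.5 <= m < 1
f = 2 * m - 1            # rewrite as x = (1 + f) * 2**e
e = p - 1
print(f, e)              # 0.625 2, since 6.5 = 1.625 * 2**2

# eps = 2**-52 is the gap between 1 and the next larger double.
eps = 2.0 ** -52
print(1.0 + eps > 1.0)        # True: this gap is representable
print(1.0 + eps / 2 == 1.0)   # True: a smaller increment is rounded away
```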

Inexpensive computers do not have floating-point capability. (They have "fixed-point"
architectures, where the user places the binary point at a fixed location in the digital word.)
Programming these computers requires that the programmer ensure that the calculation
uses enough bits. When we say enough bits, we are immediately in the realm of speculation
because the number of bits needed for any calculation depends on the range of values that
the data will have. For example, if we are interested in building a digital filter that will act
as the tone control for an audio amplifier, the amplifier electronics immediately before the
analog-to-digital conversion determines the maximum value that the signal will ever have.
The amplifier might limit the signal to a maximum of 1 volt. The result of the conversion
of the analog signal will be a number that ranges from −1 to 1. It is then easy to scale the
fixed-point number so its magnitude is always less than one by simply putting the binary
point after the first bit. (The first bit is then the sign bit, and the remainder of the bits are
available to store the value.) This scaling can be kept for all of the intermediate calculations,
but depending on the calculation (add, subtract, multiply, or divide), this may not be best,
so scaling needs to be reconsidered at every step in the computation.
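For the audio example just described, the scaling is the common Q15 format: a sign bit, the binary point, and 15 fraction bits. A minimal sketch (Python; the helper names here are ours, not from any toolbox):

```python
def to_q15(x):
    """Quantize x in [-1, 1) to a 16-bit word with saturation."""
    w = int(round(x * 2 ** 15))
    return max(-32768, min(32767, w))  # saturate instead of wrapping

def from_q15(w):
    return w / 2 ** 15

# Round trip is accurate to within one quantization step (2**-15).
print(from_q15(to_q15(0.3)))   # about 0.29999

# An input beyond the representable range saturates at the largest word.
print(to_q15(1.5))             # 32767
```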
When the amplitude of the signal being filtered is not known—as, for example, when
filtering a signal that is being received over a radio link—the designer needs to figure out
what the extremes for the signal will be and ensure that the digital word accommodates
them. When the extremes are exceeded, the conversion to a digital word reaches a limit
(called saturation, where the most significant bit of the conversion tries to
alter the sign bit). When the digital word is signed, the computer recognizes this attempt
and flags an error. When using unsigned integers, the bit overflows the register. (The
computer sees a carry bit that has no place to go, resulting in an overflow error.)
To handle the limited precision, the designer needs to figure out what the effect of the
limited precision will be both in terms of the accuracy that is required and in terms of the
artifacts introduced by the quantization. As we will see, quantization is a nonlinear effect
that can introduce many different noises into a signal that may, at a minimum, be distracting
and, in the worst case, cause the filter to do strange things (like oscillate).
Based on the discussion above, it should be clear that fixed-point numbers accommo-
date a much smaller dynamic range than floating-point numbers. The goal in the scaling
is to ensure that this range is as large as possible. The scaling usually used is to have a
scale factor that goes along with the digital representation and an additive constant (or bias)
that determines what value the digital representation has when all of the bits are zero. The
scaling acts like a slope and the additive constant acts like the intercept in the equation of a
straight line, i.e.,

V_representation = S · w + b.

As was the case for the floating-point numbers, the constant S = (1 + f) · 2^e, where the
magnitude of f is less than one and b is the bias. The difference between the floating-point
and fixed-point representations is that the value of e is always the same in the fixed-point
calculations. The programmer must keep track of the slope and the bias; the computer
uses only w during the calculations. Simulink fixed-point tools have many different rules
for altering the scaling of each calculation to ensure the best possible
precision (within constraints). Simulations must use input signals that truly represent the full
range that is expected, and must ensure that the calculations at every step
use values whose ranges are consistent with the data.
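The slope-and-bias representation can be sketched the same way (the voltage range and word size here are illustrative choices of ours; in Simulink the fixed-point tools pick S and b):

```python
# Represent voltages in the range [2.0, 3.0] V in an unsigned 8-bit word.
S = 1.0 / 255   # slope: the value of one least significant bit
b = 2.0         # bias: the value represented when all bits are zero

def encode(v):
    return int(round((v - b) / S))   # the integer w the computer stores

def decode(w):
    return S * w + b                 # V_representation = S*w + b

print(encode(2.0), encode(3.0))      # 0 255: the word spans the range
print(decode(encode(2.5)))           # about 2.502, within half an LSB of 2.5
```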
Figure 4.23. Fixed-point implementation of a band-pass filter using the Signal
Processing Blockset.

With this brief discussion, we can begin to investigate digital signal processing on
a fixed-point machine. Open the model Precision_testsp2 (see Figure 4.23(a)) and
double click the Digital Filter block. The dialog in Figure 4.23(b) will appear. The
information in the Main dialog consists of the data created by the filter design in the "Digital
Band-pass Filter Designer" block in the model. The transfer function type is IIR, and the
filter structure is the biquadratic direct form that we specified.
The dialog has a tab that forces the filter design to use a fixed-point architecture.
Click this tab and look at the options. Since the filter consists of four cascaded second order
sections, the first issue we need to address is how to scale each of the outputs. As a starting
guess we specify that each second order filter will use the full precision of the 16-bit word,
so we put the binary point to the right of the sign bit (i.e., between bit 15 and 16, so that the
fraction length—the length of w in the scaling equation—is 15 bits). The coefficients are
also critical, so they need as much precision as possible. We have specified overall accuracy
of 32 bits, with 15 bits for both the numerator and denominator coefficients in each of the
second order sections. Each multiplication in our computer has 32 bits of precision, so we
specify that the word length is 32 bits, and we again allow the most precision of 31 bits for
the “Fraction Length.” The output will use a 32-bit D/A conversion, so the output is scaled
the same as the accumulator.
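These word-length choices follow a standard fixed-point pattern: the product of two Q15 (16-bit) values is a Q30 number, which fits comfortably in a 32-bit accumulator. A sketch of that bookkeeping (Python, with the arithmetic written out explicitly):

```python
# Two Q15 operands: a filter coefficient and a signal sample.
a = int(round(0.75 * 2 ** 15))     # 24576, representing 0.75
x = int(round(-0.5 * 2 ** 15))     # -16384, representing -0.5

acc = a * x                        # Q15 * Q15 = Q30: the accumulator value
print(acc / 2 ** 30)               # -0.375, the exact product

# To store the result back into a 16-bit Q15 word, shift out 15 bits.
y = acc >> 15
print(y / 2 ** 15)                 # -0.375 (to within one quantization step)
```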
These guesses for the scaling are now tested. The first step in the process is to run the
model (with the guesses) to see what the maximum and minimum values of the various sig-
nals are. To do this, just click the start button. The simulation will run, and the maximum and
minimum values appear in MATLAB in a 45-element cell array called FixPtSimRanges.
The last five values in this cell array show the simulation results for the fixed-point cal-
culations. For example, the 45th entry, obtained by typing FixPtSimRanges{45} at the
MATLAB command line, contains
Path: [1x54 char]
SignalName: 'Section output'
DataType: 'FixPt'
MantBits: 16
FixExp: -15
MinValue: -1.0000
MaxValue: 0.9998

The next step is to allow the “Autoscale” tool to adjust the scaling to give the maximum
amount of precision to the implementation. To do this, select “Fixed Point Settings” under
the Tools menu. This selection opens the dialog in Figure 4.24.
To automatically scale the calculations (and in the process compute all of the scale
factors S associated with each calculation), simply click the “Autoscale Blocks” button at
the bottom of the dialog. The results appear by opening the Digital Filter block and then
looking under the Fixed-Point Tab. The Autoscale will have scaled all of the calculations,
but you should find that the values we selected are good.
Now that the filter is complete, you can experiment with different word sizes and
precisions, and see what the effect is on the output of our filter. However, if you look at
the “View Signals” Scope block, you will notice that even with the design optimized for
precision, the fixed-point and floating-point computations are not the same.

4.6.5 Batch Filtering Operations, Buffers, and Frames


The Signal Processing Blockset can also process signals in a “batch” mode where the signals
are captured and then buffered in a register before the entire sequence is used to calculate a
desired functional. The most frequent use of this technique is in the calculation of the Fourier
transform of a signal. Remember that the Fourier transform is an integral over all time.

Figure 4.24. Running the Simulink model to determine the minima, maxima, and
scaling for all of the fixed-point calculations in the band-pass filter.

Since we can capture only a finite time sample, any calculation for a signal processing application
will at best approximate the transform. In addition, the data will be digital, so the transform
will be done using an approximating summation. The most frequent approximation is the
fast Fourier transform (FFT). It is the subject of Chapter 8 of NCM, so we will assume that
the reader is familiar with its method of calculation.
In order to illustrate the concept of buffers and the FFT in the Signal Processing
Blockset, consider the problem of recreating an analog signal from its samples (the sampling
theorem problem we investigated above). In this case let's not try to find an analog filter
that will do this; instead, let's use the Fourier transform (in the form of the FFT).
The model FFT_reconstruction in the NCS library (Figure 4.25) starts with the
same sum of 15 sinusoids that we have been using in the previous models. The recon-
struction, however, uses the transform of the signal. Let us look at the theory behind the
calculations, and then we will explore the model’s details.
Remember that the sampling theorem told us that to reconstruct the signal from its
samples we need to multiply the Fourier transform of the sampled signal by the function that
is 1 up to the sample frequency and zero elsewhere. When we take the Fourier transform
using the FFT, we get frequencies that are up to half the sample frequency.

Figure 4.25. Illustrating the use of buffers: reconstruction of a sampled signal
using the FFT.

Thus, if the conditions for the sampling theorem (that the signal is band limited) are valid, the FFT is
just S(ω), −ω_M ≤ ω ≤ ω_M, where ω_M is the maximum frequency contained in the signal,
and the values of ω are multiples of 2ω_M/n, where n is the number of samples of the signal that were
transformed. In creating the model in Figure 4.25, the first step was to save n samples of
the input signal for subsequent processing by the FFT. The block in the Signal Processing
Blockset that does this is the Buffer block. It simply stores some number of values of the
input in a vector before passing it on to the next block. When the model opens, three
variables (called nbuffer, npad, and ndesired) appear in the MATLAB workspace. nbuffer is the size of
the buffer, and it is set to 512. (It must always be a power of two for the FFT block.)
The FFT and the inverse FFT use the blocks from the Transforms library in the Signal
Processing Blockset. Thus, outwardly, all we are doing in the model above is taking the
transform of the input and then taking the inverse transform to create an output. The output,
though, cannot be the result of the inverse transform, since this is going to be a vector of
time samples that, presumably, matches the output of the Buffer block. The way to pass the
vector of samples back into Simulink as a set of time samples at the appropriate simulation
times is to use the “Unbuffer” block. (Both the Buffer and Unbuffer are in the Signal
Management library under Buffers.)
To apply the sampling theorem, you need to multiply the transform of the sampled
signal by the pulse function and then take the inverse Fourier transform (which is continuous,
so this results in a continuous time signal). We are using the IFFT block, which takes the
inverse Fast Fourier Transform and therefore outputs a signal only at discrete values of
time. Therefore, in order for the inverse to have more time values (thereby filling in or
interpolating the missing samples), we need to increase the size of the FFT before we take
the inverse. We do this by padding the transform with zeros to the left of −ωM and to the
right of ωM . The subsystem called “Pad Transform with Zeros” in Figure 4.26(a) does this.
To see how, double click on the block to open it.
There is a block called “Zero Pad” in the “Signal Operations” library, and we use it
to do the padding. The block adds zeros at either the front or the rear of a vector. However,
we need to be careful about this. Remember that when the FFT is computed, the highest
frequencies are in the center of the transform vector, and the lowest (zero frequencies) are
at the left and right of the vector (i.e., the transform is stored from 0 to −ωM and then from
4.6. The Signal Processing Blockset 163

Zero Pad the Transform Un-Shift the Transform


Downloaded 01/23/15 to 128.2.10.23. Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php

(at the lowest neg. freq.) so Zero is at the Ends.


[512x1] MATLAB 512 2304 4096 MATLAB 4096
1 1
Function Function 4096
FFT Padded
Shift the Transform Zerod Pad the Transform FFT
4096
so Zero is at the Center. (at the highest pos. freq.)
View Padded
|u| |FFT| padded
Transform as a Matrix
4096
[8x4096] Matrix
4096 Viewer

Store the |FFT|


in rows of a matrix

a) Simulink Subsystem that Pads an FFT with Zeros to


Increase the Number of Sample Points in the Transform

300

250

200

150

100

50

0
0 100 200 300 400 500 600

b) Padding an FFT with Zeros Must Account for the Fact that
the 0-Frequency is Not at the Center of the FFT

Figure 4.26. Using the FFT and the sampling theorem to interpolate a faster
sampled version of a sampled analog signal.

ωM to 0 with a discontinuity at the center of the array). If you run the following MATLAB
code, it will generate the plot in Figure 4.26(b) to illustrate this:

t = 0:0.01:511*0.01;
y = sin(t) + sin(10*t) + sin(100*t);
z = fft(y);
plot(abs(z))

The plot shows the fact that the transform has the zero frequency at the 0th and the
512th points computed. Because of this FFT quirk, padding the FFT with zeros during the
computation in the Simulink model will not add zeros below −ωM and above ωM . (Exercise:
What does it do?)
There is a built-in command in MATLAB that we will use to rotate the FFT. The
function is fftshift, and it converts the FFT so its zero frequency is at the center of the
plot (as it appears using the Fourier transform). To see the effect of using this command,
replace the last plot command with


plot(fftshift(abs(z)))
There is no function in the Signal Processing library that does the equivalent of
fftshift. Therefore, we use the MATLAB Function block from the Simulink library to call the
MATLAB fftshift. The function needs to be invoked twice: once before we do the zero
padding and once after, to put the FFT back into the correct form for the inverse (IFFT) block.
Two additional blocks used in the model come from the Signal Processing Blockset
sinks library. They are the Short Time Spectrum block and the Matrix Viewer block. The
first block is used to view the FFT as it is computed, and the second allows us to view the
padded FFT as it is computed (in a three-dimensional plot). In this plot, time and frequency
are the two axes, and the color is the amplitude of the FFT. (In the book it is in a gray
scale, but in the model that you are running from the NCS library, it is in color.) The model
also displays the length of the vectors and the sample times on the various signal lines,
using different colors for the lines; these attributes come from the
Port/Signal Displays submenu under the Format menu in the model. Now, with this understanding of
the mathematics involved, it should be clear that if we pad the transform with enough
zero elements to make the final padded length 2^n points, the inverse FFT (IFFT)
will result in a time series with 2^n values. Thus, making 2^n > nbuffer, the output will
have more time samples than the input. From the sampling theorem, this output is the result
of multiplying the FFT of the input by the pulse function, and the result is a signal that
perfectly—to within numeric precision, of course—reconstructs the original signal at the
new sample times.
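The whole pad-and-invert recipe can be checked outside Simulink. A Python/NumPy sketch of the same mathematics (numpy.fft.fftshift and ifftshift play the role the MATLAB fftshift plays in the model):

```python
import numpy as np

N, M = 64, 256                     # buffer size and padded FFT size
t = 2 * np.pi * np.arange(N) / N   # one period of the signal, N samples
y = np.sin(3 * t) + 0.5 * np.cos(7 * t)

# Shift so zero frequency is central, pad symmetrically, then unshift.
Y = np.fft.fftshift(np.fft.fft(y))
Ypad = np.zeros(M, dtype=complex)
Ypad[(M - N) // 2:(M + N) // 2] = Y
yi = np.real(np.fft.ifft(np.fft.ifftshift(Ypad))) * (M / N)  # rescale

# yi now holds M samples of the same signal: perfect interpolation.
tf = 2 * np.pi * np.arange(M) / M
err = np.max(np.abs(yi - (np.sin(3 * tf) + 0.5 * np.cos(7 * tf))))
print(err)   # essentially roundoff level
```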
The result of running this model is shown in Figure 4.27 for the reconstruction of
the sampled signal at 8 times the input sample rate (ndesired = 4096, nbuffer = 512, and
npad = 1792).
The results show very good reconstruction of the sampled signal, as we would expect.
Figure 4.28(a) shows the output from the Short-Time Spectrum block in the model, and
Figure 4.28(b) shows the plot created by the Matrix Viewer block. Almost every one of
the 15 frequencies in the input sine wave can be seen in the peaks of the spectrum (which
is really the FFT), and the padding of the FFT to create the new output can be seen in the
Matrix Viewer output.
We have explored about 20% of the capabilities of the Signal Processing Blockset
in this chapter. For example, dramatic improvements in many signal-processing applica-
tions result from the processing of a large sample of data transferred to the computer as
a contiguous block. To some extent, we have seen this in the example above, where we
buffer the signal data before sending it to the FFT. The same approach applies to filters and
many other signal-processing operations. When we do this in Simulink, the operation uses
a vector called a “frame.” Frames make most of the computationally intensive blocks in the
Signal Processing Blockset run faster. There are many examples in the demos that are part
of the signal-processing toolbox that illustrate this, and now that you understand how the
buffer block works, you should be able to work through these examples without trouble.
Furthermore, it is a good idea to look at all of the demos, and also to open each of the blocks in
the blockset to see how it works and, along with the help, determine what each
block needs in terms of data inputs and special considerations for its use.
Figure 4.27. The analog signal (sampled at 1 kHz, top) and the reconstructed
signal (sampled at 8 kHz, bottom).

4.7 The Phase-Locked Loop


An interesting signal-processing device incorporates the basics of signal processing and
feedback control. The device, a phase-locked loop (PLL), is an inherently nonlinear control
system. It is extremely simple to understand, and Simulink provides a perfect way for
simulating the device. In the process of creating the simulation, we will encounter some
new Simulink blocks, we will use some familiar blocks in a new way, and we will encounter
some numerical issues. We begin with how the PLL operates.
Imagine that we want to track a sinusoidal signal. The classic example of this is
the tuner in a radio, television, cell phone, or any device that must lock onto a particular
frequency to operate properly. In early radios, a demodulator followed an oscillator tuned
to the desired frequency. The oscillator operated in an open loop fashion, so if its frequency
drifted (or the signal’s frequency shifted slightly), the radio needed to be manually retuned.
The operation of the demodulator used the fact that the product of the incoming frequency
and the local oscillator created sinusoids at frequencies that were the sum and difference of
the input and oscillator frequencies. Mathematically this comes from

sin(ω1 t + ϕ1) cos(ω2 t + ϕ2) = (1/2)[sin((ω1 + ω2)t + ϕ1 + ϕ2) + sin((ω1 − ω2)t + ϕ1 − ϕ2)].
The output of the demodulator was the result of extracting only the difference frequency
using a circuit tuned to this "intermediate" frequency. In most applications, tuned radio-frequency
amplifiers increased the intermediate signal's amplitude.

Figure 4.28. Results of interpolating a sampled signal using FFTs and the sampling theorem.

Figure 4.29. Phase-locked loop (PLL). A nonlinear feedback control for tracking
the frequency and phase of a sinusoidal signal. (The simulation tracks a sinusoidal input in
the frequency range of 95 to 105 Hz.)

The PLL uses the same concept, except that the difference sinusoid is used to change the
frequency of the oscillator. The feedback uses a device called a voltage controlled oscillator
(abbreviated VCO). The operation of the loop needs three parts:
• the VCO,
• the device that creates the product of the input and the VCO output,
• a filter to remove sin ((ω1 + ω2 )t + ϕ1 + ϕ2 ) before the result of the product passes
to the VCO.
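Before opening the Simulink model, the three-part loop is worth sketching as a few lines of sample-by-sample code. This Python sketch uses a one-pole low pass filter and gains of our own choosing (the NCS model instead uses a third order Butterworth filter, and its gains differ):

```python
import math

fs = 10000.0                    # simulation sample rate, Hz
f_in, f0 = 101.56, 100.0        # input frequency and VCO base frequency
Kv = 20.0                       # assumed VCO gain, Hz per unit input
alpha = 2 * math.pi * 5 / fs    # one-pole low pass filter, cutoff ~5 Hz

v = 0.0        # filtered phase-detector output (the VCO input)
phase = 0.0    # VCO phase, kept unwrapped here so we can measure frequency
history = []
for k in range(int(2 * fs)):    # simulate 2 seconds
    t = k / fs
    err = math.sin(2 * math.pi * f_in * t) * math.cos(phase)  # modulator
    v += alpha * (err - v)      # low pass keeps the difference-frequency term
    phase += 2 * math.pi * (f0 + Kv * v) / fs  # VCO integrates its frequency
    history.append(phase)

# Average VCO frequency over the last half second: the loop should have
# pulled it from the 100 Hz free-running value to the 101.56 Hz input.
f_locked = (history[-1] - history[-5001]) / (2 * math.pi * 0.5)
print(f_locked)
```

Note that, as in the Simulink model, the steady state VCO input v is nonzero because the VCO base frequency is not exactly the input frequency.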
The term “locked” describes when the input frequency and the VCO frequency are
the same. Thus, at lock, since the product has both the sum and difference frequencies, the
difference frequency is zero. Therefore, the filter that we need to remove the sum frequency
is our old friend the low pass filter. We will build the loop simulation in Simulink using a
third order Butterworth filter.
The Simulink model of the phase-locked loop is Phase_Lock_Loop in the NCS library
(Figure 4.29).
A MATLAB callback (as usual) creates the parameters for the simulation. The But-
terworth filter is third order, and its coefficients are in the Transfer function block. The
modulator uses the product block. The VCO is a subsystem that looks like Figure 4.30.
This implementation of the VCO ensures that the generated sinusoid always has the
correct frequency despite the limited numeric precision of the simulation. The big worry is
the effect of roundoff. NCM has a discussion of the effect of calculating a sinusoid using
increments in t that look like t_next = t + Δt. If the value of Δt cannot be represented precisely
in binary, then eventually the iteration implied by the equation will give an incorrect result.
The integrator modulo 1 in the diagram above fixes this problem.
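The roundoff problem being guarded against is easy to reproduce (Python, but this is the same double-precision arithmetic MATLAB uses):

```python
# 0.1 is not exactly representable in binary, so repeatedly adding it
# drifts away from the exact multiples of 0.1.
t, dt = 0.0, 0.1
for _ in range(10):
    t += dt
print(t == 1.0)       # False
print(abs(t - 1.0))   # about 1e-16, and the error grows with more steps

# Keeping the phase modulo one period (as the integrator reset does)
# keeps the accumulated value small, so the error cannot build up into
# a frequency offset over a long simulation.
```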
Let us look at this subsystem (Figure 4.31).
The first thing to note is that the integration is creating the ω2 t part of the argument of
cos(ω2 t + ϕ2) that is the VCO output. The second thing to note is that the integrator resets
whenever the value of its output reaches one.
Figure 4.30. Voltage controlled oscillator subsystem in the PLL model. (The
annotation in the diagram notes that the integrator output is approximately
0 ≤ (f0 + Kvco · Input(t)) t < 2π.)

Figure 4.31. Details of the integrator modulo 1 subsystem in Figure 4.30. (The
annotation in the diagram notes that the reset uses the integrator's state port, so there is
no algebraic loop, and that any error in the integration at the reset is recovered from the
remainder: the integration restarts at its value before the reset minus 1.)

This reset uses two options in the integrator block that we have not used before. The
first is the state port that comes from the top of the integrator. A check box in the integrator
dialog causes the display of this port. You use it when you need the result of the integration
to modify the input to the integrator. If you fed the output back to the input, an algebraic
loop would result that is difficult for Simulink to resolve. The port eliminates this loop.
The second new input to the integrator is the "reset" port. This port is created when
you select rising from the "External reset" pull-down menu in the integrator dialog. Thus,
in the model, the integrator is reset to the initial condition whenever the relational operator
shows that the output is greater than 1 + eps. (eps = 2^−52 is the distance from 1 to the next
larger floating-point number in the IEEE standard; see NCM for a discussion of this
MATLAB variable.)

Figure 4.32. Phase-locked loop simulation results: (a) simulation input and
output; (b) output of the modulo integrator; (c) loop tracking response.

The
integrator steps are not necessarily going to occur at exactly the point where the output is
exactly one. The external initial condition in the integrator is used at the reset to set the
value of the integrator to the remainder (the amount the output exceeds 1) for the next cycle.
The output of the integral is therefore a sawtooth wave that goes from zero to one. We
multiply the output by 2π before using it to calculate the cosine.
Run the PLL Simulink model and observe the outputs in the Scopes. There are three
Scope blocks, two at the top level of the diagram. The first shows the two sinusoids (the
input and the VCO output), and the second shows the feedback signal that drives the VCO.
Look at these and at the Scope block in the VCO subsystem. (Figure 4.32(a) shows the
plots of the input and output of the simulation.) Figure 4.32(b) shows the modulo-arithmetic
integrator output that we described above, and Figure 4.32(c) is the tracking error in the PLL.
Because the free-running frequency of the VCO is not exactly the 101.56 Hz input frequency,
the VCO control input (VCOin) must have a nonzero steady-state value. This is clearly the case.
Furthermore, the PLL got to this steady-state value with a rapid response and minimal
overshoot. The design of the PLL
feedback uses linear control techniques that start with a linear model of the PLL dynamics.
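The nonzero steady-state VCO input can be reproduced with a stripped-down first-order loop outside Simulink. In the plain-Python sketch below, every constant (frequencies, VCO gain, loop gain) is a made-up illustration value, not taken from the book's model; with a purely proportional loop filter, the phase error settles exactly where the control signal cancels the frequency offset.

```python
import math

# Hypothetical loop constants (illustration only, not the book's values).
F_IN, F_VCO = 101.0, 100.0   # input and VCO free-running frequencies (Hz)
K_VCO = 1.0                  # VCO gain (Hz per unit of control input)
GAIN = 5.0                   # proportional loop-filter gain
DT = 1.0e-3                  # integration step (s)

phase_err = 0.0              # input phase minus VCO phase (rad)
v = 0.0                      # control input to the VCO
for _ in range(int(2.0 / DT)):          # simulate 2 seconds
    v = GAIN * math.sin(phase_err)      # phase detector + proportional gain
    dfreq = F_IN - (F_VCO + K_VCO * v)  # remaining frequency offset (Hz)
    phase_err += 2.0 * math.pi * dfreq * DT

# At steady state the VCO input must supply the 1 Hz offset: v -> dF/K = 1.0
print(round(v, 3))   # prints 1.0
```

The loop locks because the 1 Hz offset is within the pull range K_VCO*GAIN = 5 Hz; a larger offset would cause cycle slipping instead of lock.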
Many more interesting and complex mathematical processing applications are possi-
ble using the blockset, including noise removal, more complex filter designs, and the ability
to create a processing algorithm for an audio application and actually hear how it sounds
on the PC. There are tools that allow the export of any signal processing application to
Hardware Description Language form so that the processing ends up on a chip. Tools also
will export designs to Xilinx and Texas Instruments architectures.
We are now ready to talk about stochastic processes and the mathematics of simulation
when the simulated variables are processes generated by a random quantity called “noise.”
This is the subject of the next chapter.

4.8 Further Reading


The Fibonacci sequence and the golden ratio are the subject of a book by Livio [26]. It is
a very readable and interesting review of the many real (and apocryphal) attributes of the
sequence.
In graduate school (at MIT), Claude Shannon worked on an early form of analog
computer called a “differential analyzer.” His work on the sampling theorem links to the
subject of simulation in a very fundamental way. In fact, while he was working on the
differential analyzer he created a way of analyzing relay switching circuits that were part
of these early devices. He published a paper on these results in the Transactions of the
American Institute of Electrical Engineers (AIEE) that won the Alfred Noble AIEE award.
During World War II, Shannon worked at Bell Labs on fire control systems. As the
war ended, Bell Labs published a compilation of the work done on these systems. Shannon,
along with Ralph Blackman and Hendrik Bode, wrote an article on data smoothing that
cast the control problem as one of signal processing. This work used the idea of uncertainty
as a way to model information, and it was a precursor to the discovery of the sampling
theorem.
The Shannon sampling theorem is so fundamental to all of discrete systems that his
papers frequently reappear in print. Two recent examples of this are in the Proceedings of
the IEEE [23], [39]. A paper in the IEEE Communications Society Magazine [27] followed
shortly afterward. The best source for the proof of the theorem is in Papoulis [32]. Papoulis
had a knack for finding elegant proofs, and his proof of the sampling theorem is no exception.
There are hundreds of texts on digital filter design. One that is reasonable is by Leland
Jackson [22].
Reference [11] describes the operation of phase-locked loops. It also shows how they
are analyzed using linear feedback control techniques, and how easy it is to create a loop in
a digital form.
To learn more about the tools, blocks, and techniques available with the Signal Process-
ing Blockset and Toolbox, see The MathWorks Users Manuals and Introduction [43], [44].

Exercises

4.1 Show that the Fibonacci sequence has the solution (completing the step outlined in
Section 4.1.2)
\[
f_n = \frac{1}{2\varphi_1 - 1}\left(\varphi_1^{\,n+1} - (1 - \varphi_1)^{\,n+1}\right).
\]
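(Not a substitute for the algebra, but a quick numerical sanity check of this closed form in plain Python, with φ₁ the golden ratio and the book's indexing f₀ = f₁ = 1:)

```python
import math

phi1 = (1.0 + math.sqrt(5.0)) / 2.0  # golden ratio, the root from Section 4.1.2

def fib_closed(n):
    """Closed-form f_n; note that 2*phi1 - 1 = sqrt(5)."""
    return (phi1 ** (n + 1) - (1.0 - phi1) ** (n + 1)) / (2.0 * phi1 - 1.0)

# Compare with the difference equation f_n = f_{n-1} + f_{n-2}, f_0 = f_1 = 1.
a, b = 1, 1
for n in range(20):
    assert round(fib_closed(n)) == a
    a, b = b, a + b
print("closed form matches the recursion")
```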

4.2 Verify using induction the result from Section 4.3 that
\[
x_k = \begin{pmatrix} f_{k-1} & f_k \\ f_k & f_{k+1} \end{pmatrix} x_0.
\]
Does the fact that you use the iterative equation in the model and get the result you
expect imply that the result is true by induction? That is, if you use a simulation
of a difference equation, iteratively, to produce a consistent result, does this imply that
the result is always true by induction?
4.3 Show, using induction, that the Fibonacci sequence has the property that
\( f_{k+1} f_{k-1} - f_k^2 = \pm 1 \). (Follow the hint in Section 4.3.)
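(Again only a numerical check, not the induction proof; with the indexing f₀ = f₁ = 1 the sign alternates as (−1)^(k+1):)

```python
# Check f_{k+1} f_{k-1} - f_k^2 = +/-1 for the first several k
# (using the book's indexing f_0 = f_1 = 1).
f = [1, 1]
for _ in range(20):
    f.append(f[-1] + f[-2])

for k in range(1, 20):
    d = f[k + 1] * f[k - 1] - f[k] ** 2
    assert d in (1, -1)          # Cassini's identity
    assert d == (-1) ** (k + 1)  # with f_0 = f_1 = 1 the sign is (-1)^(k+1)
print("Cassini identity verified for k = 1..19")
```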
4.4 Verify that the Butterworth filter function
\[
|H_{\text{Butter}}(\omega)|^2 = \frac{1}{1 + \left(\omega/\omega_m\right)^{2n}}
\]
is maximally flat. (All of its derivatives from the first to the (n − 1)st are zero at zero
and infinity.)
4.5 Show that the analog system \( \frac{d}{dt}x(t) = Ax(t) + Bu(t) \) and the discrete system
\( x_{k+1} = \Phi(\Delta t)\, x_k + \Gamma(\Delta t)\, u_k \) have the same step responses when
\[
\Phi(\Delta t) = e^{A \Delta t}, \qquad \Gamma(\Delta t) = \int_0^{\Delta t} e^{A\tau} B \, d\tau.
\]
Create a Simulink model with the state-space model of a second-order system in
continuous time and in discrete time using the above.
Create a discrete-time system using the Laplace transform of the continuous system
with the bilinear mapping
\[
s = \frac{2}{\Delta t} \left( \frac{1 - z^{-1}}{1 + z^{-1}} \right)
\]
to generate the discrete system.
Compare the simulations using inputs that are zero (use the same initial conditions
for each of the versions), a step, and a sinusoid at different frequencies.
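For a scalar system (A = a, B = b) the two formulas reduce to Φ = e^{aΔt} and Γ = b(e^{aΔt} − 1)/a, so the step-response match can be checked numerically. The sketch below is plain Python with made-up values a = −2, b = 3; the continuous step response x(t) = (b/a)(e^{at} − 1) is sampled at t = kΔt and compared with the discrete recursion.

```python
import math

a, b = -2.0, 3.0        # made-up scalar system dx/dt = a*x + b*u
dt = 0.1                # sample period

Phi = math.exp(a * dt)
Gamma = (math.exp(a * dt) - 1.0) * b / a   # integral of e^{a*tau} * b over [0, dt]

# Discrete recursion x_{k+1} = Phi*x_k + Gamma*u_k with a unit step input.
xk = 0.0
for k in range(50):
    # Continuous-time step response sampled at t = k*dt:
    xc = (b / a) * (math.exp(a * k * dt) - 1.0)
    assert abs(xk - xc) < 1e-12     # exact match at the sample instants
    xk = Phi * xk + Gamma * 1.0
print("step responses agree at every sample")
```

The agreement is exact (to rounding), which is the defining property of this zero-order-hold discretization; the bilinear mapping, by contrast, matches only approximately.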
4.6 Show that the FIR filter for an n-sample moving average has the state-space model
\[
x_{k+1} =
\begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
0 & 0 & 0 & \cdots & 0
\end{pmatrix}
x_k +
\begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix} u_k,
\qquad
y_k = \frac{1}{n}\begin{pmatrix} 1 & 1 & \cdots & 1 \end{pmatrix} x_k + \frac{1}{n}\, u_k.
\]
In this state-space model the A matrix is (n − 1) × (n − 1) and the vectors are of
appropriate dimensions. Verify that the Simulink model in the text does indeed use
this equation. Calculate the number of adds and multiplies this version of the moving
average filter requires.


Investigate other non–state-space versions of this digital filter. Use the Simulink
digital filter library. Work out the computation count needed to implement the filters
you select, and compare them with the number of computations needed for the state-
space model. Is this model an efficient way to do this filter? What would be the most
efficient implementation?
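A plain-Python sketch of this shift-register realization can serve as a reference when checking your operation counts: the state vector simply holds the previous n − 1 inputs, A shifts them up, and B appends the newest.

```python
def moving_average_ss(u, n):
    """State-space moving average: x holds the last n-1 inputs (oldest first).
    A shifts the state up, B appends the newest input, C = (1/n)*ones, D = 1/n."""
    x = [0.0] * (n - 1)
    y = []
    for uk in u:
        y.append((sum(x) + uk) / n)   # y_k = (1/n)*1'*x_k + (1/n)*u_k
        x = x[1:] + [uk]              # x_{k+1} = A*x_k + B*u_k (shift + append)
    return y

u = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(moving_average_ss(u, 3))
# With zero initial conditions, each output is the mean of the current and
# previous two inputs, e.g. the last value is (4 + 5 + 6)/3 = 5.0.
```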
4.7 Show that \( \det\bigl(zI - \Phi(\Delta t)\bigr) = \prod_{j=1}^{n} \bigl(z - e^{\lambda_j \Delta t}\bigr) \),
where \( \Phi(\Delta t) = e^{A \Delta t} \) and the \( \lambda_j \) are
the eigenvalues of the matrix A.
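The result can be spot-checked numerically. The sketch below (plain Python, with a made-up 2 × 2 matrix A whose eigenvalues are −1 and −2) forms e^{AΔt} by a truncated Taylor series and confirms that its eigenvalues are e^{λ_j Δt}.

```python
import math

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm2(A, dt, terms=30):
    """Matrix exponential of A*dt for a 2x2 A, by truncated Taylor series."""
    M = [[dt * a for a in row] for row in A]
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = mat_mul(term, M)                     # term <- term * M
        term = [[t / k for t in row] for row in term]  # term = (A*dt)^k / k!
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

# Made-up example: A has eigenvalues -1 and -2.
A = [[0.0, 1.0], [-2.0, -3.0]]
dt = 0.1
Phi = expm2(A, dt)

# Eigenvalues of the 2x2 Phi from its trace and determinant.
tr = Phi[0][0] + Phi[1][1]
det = Phi[0][0] * Phi[1][1] - Phi[0][1] * Phi[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)
z1, z2 = (tr + disc) / 2.0, (tr - disc) / 2.0

print(abs(z1 - math.exp(-1.0 * dt)) < 1e-9 and
      abs(z2 - math.exp(-2.0 * dt)) < 1e-9)   # prints True
```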
