ACS Lab Manual
VII SEMESTER
7EE4-22
ADVANCED CONTROL
SYSTEM LAB
PSO-1 Ability to utilize logical and technical skills to model, simulate and analyse
electrical components and systems.
PSO-2 Empowering to provide socially acceptable technical solutions to real time
electrical engineering problems with the application of modern and appropriate
techniques for sustainable development.
Department of Electrical Engineering
7EE4-22 Advanced Control System Lab
Course Outcomes (COs):
CO1: Represent a system in MATLAB (in the form of a transfer function) considering its zeros, poles,
and gain.
CO2: Analyse the plots of time and frequency responses of SISO and MIMO systems.
CO3: Analyse the response of the RLC circuit. Assess gain and phase margin to examine the effect
of stability margins on closed-loop response characteristics of a control system.
CO4: Analyse the Time Domain response analysis of first and second-order systems
DO’S
• Maintain strict discipline in the Lab.
• Lab-apparatus must be handled properly.
• Before switching on the power supply, get it checked by the faculty.
• Switch off the mobile.
• Be a keen observer while performing the experiment.
DON’TS
• Do not touch or attempt to touch the mains power points directly with bare hands.
• Do not manipulate the experiment results.
• Do not overcrowd the experiment tables.
• Do not tamper the equipment.
• Do not leave the lab without prior permission from the faculty.
INSTRUCTIONS TO THE STUDENTS
GENERAL INSTRUCTIONS
• Maintain separate observation copy for each laboratory and carry it regularly.
• Observations or readings should be taken only in the observation copy.
• Measured readings must be signed by the faculty after the completion of the
experiment.
• Maintain Index column in the observation copy and get the signature of the faculty
before leaving the lab.
EXP. No.: 1  2  3  4  5  6  7  8  9  10  11  12
DATE:
Transfer function of a DC servomotor (armature controlled):
The back-emf is proportional to the motor speed:
>> V_b(t) ∝ ω_m(t)
>> V_b(t) = K_b · dθ_m(t)/dt
>> V_b(s) = K_b · s · θ_m(s)
Also, applying KVL to the armature circuit:
>> e_a(t) = R_a · i_a(t) + L_a · di_a(t)/dt + V_b(t)
The developed torque is proportional to the armature current:
>> T_m(t) ∝ i_a(t)
Or
>> T_m(s) = K_t · I_a(s)
The mechanical load consists of (1) inertia J (motor inertia + inertia of the output load to be connected) and (2) friction or damping D, so that
>> T_m(t) = J · d²θ_m/dt² + D · dθ_m/dt        [since ω = dθ/dt, i.e. ω(s) = s·θ(s)]
Combining the electrical and mechanical equations in the Laplace domain:
>> E_a(s) = [ (R_a + L_a·s)(J·s² + D·s)/K_t + K_b·s ] · θ_m(s)
>> θ_m(s)/E_a(s) = K_t / [ (R_a + L_a·s)(J·s² + D·s) + K_b·K_t·s ]
This gives the open-loop transfer function with E_a(s) as input and θ_m(s) as output. Expanding the denominator:
>> θ_m(s)/E_a(s) = K_t / { s·[ J·L_a·s² + (J·R_a + D·L_a)·s + (D·R_a + K_b·K_t) ] }
Now consider the closed-loop transfer function, which relates the reference position θ_ref(s) to the motor position θ_m(s):
>> θ_m(s)/θ_ref(s) = K·G(s) / (1 + K·G(s))
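As a quick check of the derivation above, the open-loop DC servomotor transfer function θ_m(s)/E_a(s) and its unity-feedback step response can be formed in MATLAB. This is a minimal sketch only; the numerical parameter values below are assumed for illustration (they match the values used later in Practical No. 5).
% Hedged sketch: DC servomotor open-loop transfer function and closed-loop step response
% All parameter values are assumed for illustration only
Ra = 2;      % armature resistance (ohm)
La = 0.4;    % armature inductance (H)
Kt = 0.02;   % torque constant (N-m/A)
Kb = 0.02;   % back-emf constant (V-s/rad)
J  = 0.02;   % total inertia (kg-m^2)
D  = 0.2;    % viscous friction (N-m-s/rad)
s  = tf('s');
G_open = Kt/(s*(J*La*s^2 + (J*Ra + D*La)*s + (D*Ra + Kb*Kt)))
step(feedback(G_open,1))          % position response with unity feedback
title('DC Servomotor Position Step Response (unity feedback)')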
Transfer function of an AC servomotor:
At zero speed the motor develops a stall torque proportional to the control voltage:
>> T_0 = K·V_c,   so that K = T_0/V_c at rated control voltage
The torque-speed characteristic is approximated by a straight line of slope
>> m = −T_0/ω_0
where T_0 is the stall torque at rated control voltage and ω_0 is the no-load speed (the slope is negative, so f − m below is positive). Writing the characteristic in the form y = m·x + c:
>> T_m = m·ω + K·V_c
Equating the developed torque to the load torque (inertia J, viscous friction f):
>> T_m(t) = m·ω(t) + K·V_c(t) = J·dω/dt + f·ω
Or, taking Laplace transforms and rearranging:
>> ω(s)/V_c(s) = K / [ J·s + (f − m) ]
Or, in standard first-order form:
>> ω(s)/V_c(s) = K_m / (τ_m·s + 1)
Here,
>> K_m = K/(f − m)   (motor gain constant)
>> τ_m = J/(f − m)   (motor time constant)
MATLAB CODE:
clc
clear all
close all
% AC Servo Motor
% Define parameters
Kt = 0.01; % Motor torque constant (Nm/A)
Rs = 1; Ls = 0.1; Rr = 0.5; Lr = 0.05; Lm = 0.08; Ke = 0.1; J = 0.01; B = 0.005;
% Assumed simplification: first-order mechanical model w(s)/V(s) = Kt/(J*s + B)
% (the electrical parameters above are listed for reference but not used here)
sys_ac = tf(Kt,[J B])     % gain/time-constant form of the AC servomotor
step(sys_ac)
title('AC Servomotor Speed Response to a Step Input')
OUTPUT:
RESULT:
7EE4-22 ADVANCED CONTROL SYSTEM LAB
Practical No:-1
Objective:-
Determination of transfer functions of DC servomotor and AC servomotor.
AIM: Time domain response of rotary servo and Linear servo (first order and
second order) systems using MATLAB/Simulink.
Theory:
The time response represents how the state of a dynamic system changes in time when subjected
to a particular input. Since the models we have derived consist of differential equations, some
integration must be performed in order to determine the time response of the system. For some
simple systems, a closed-form analytical solution may be available. However, for most systems,
especially nonlinear systems or those subject to complicated inputs, this integration must be
carried out numerically. Fortunately, MATLAB provides many useful resources for calculating
time responses for many types of inputs, as we shall see in the following sections.
The time response of a linear dynamic system consists of the sum of the transient response which
depends on the initial conditions and the steady-state response which depends on the system input.
These correspond to the homogeneous (zero-input or free) response and the particular (forced) response, respectively.
System Order
The order of a dynamic system is the order of the highest derivative in its governing differential
equation. Equivalently, it is the highest power of s in the denominator of its transfer function. The
important properties of first-, second-, and higher-order systems are reviewed in this section.
A first-order system can be written in the standard form G(s) = k_dc/(τ·s + 1), where the DC gain k_dc and the time constant τ completely define the character of the first-order system.
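As an illustration, the sketch below builds a first-order system in MATLAB and plots its step response. The values k_dc = 2 and tau = 0.5 are assumed purely for illustration.
% Hedged sketch: first-order system step response (assumed values)
k_dc = 2;            % DC gain (assumed)
tau  = 0.5;          % time constant in seconds (assumed)
G1 = tf(k_dc,[tau 1])
step(G1), grid on
title('First-Order Step Response')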
DC gain
The DC gain is the ratio of the magnitude of the steady-state step response to the magnitude of
the step input. For stable transfer functions, the Final Value Theorem demonstrates that
the DC gain is the value of the transfer function evaluated at s=0
For a first-order system of the form shown above, the DC gain is simply k_dc.
DC Gain
The DC gain, again, is the ratio of the magnitude of the steady-state step response to the
magnitude of the step input, and for stable systems it is the value of the transfer function evaluated at
s = 0. For the standard second-order form k_dc·ωn²/(s² + 2ζωn·s + ωn²), the DC gain is likewise k_dc.
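The dcgain command returns this value directly. A minimal self-contained check, reusing the assumed first-order example:
% Hedged check: DC gain equals the transfer function evaluated at s = 0
G1 = tf(2,[0.5 1]);       % assumed first-order example
dcgain(G1)                % returns 2, i.e. the assumed k_dc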
Damping Ratio
The damping ratio ζ is a dimensionless quantity characterizing the rate at which an oscillation
in the system's response decays due to effects such as viscous friction or electrical resistance.
In the standard second-order form above, ζ appears in the 2ζωn·s term of the denominator.
Natural Frequency
The natural frequency ωn is the frequency (in rad/s) at which the system would oscillate if there
were no damping.
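The damping ratio and natural frequency of a model can be extracted with the damp command. A minimal sketch, with ζ = 0.5 and ωn = 4 rad/s assumed for illustration:
% Hedged sketch: second-order system and its damping ratio / natural frequency
zeta = 0.5;  wn = 4;                       % assumed values
G2 = tf(wn^2,[1 2*zeta*wn wn^2]);          % standard second-order form, k_dc = 1
[wn_out, zeta_out] = damp(G2)              % natural frequencies and damping ratios
step(G2), grid on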
MATLAB CODE:
clc
clear all
close all
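% Hedged sketch (added for illustration): time-domain response of a
% first-order rotary servo model. The gain K and time constant tau below
% are assumed values, not measured plant data.
K = 1;                 % assumed servo gain
tau = 0.2;             % assumed time constant (s)
G_servo1 = tf(K,[tau 1])
step(G_servo1), grid on
title('First-Order Rotary Servo: Step Response')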
OUTPUT:
Second-Order Rotary Servo System Parameters
clc
clear all
close all
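% Hedged sketch (added for illustration): second-order rotary servo
% position model. The natural frequency wn and damping ratio zeta are
% assumed values for demonstration only.
wn = 10;               % assumed natural frequency (rad/s)
zeta = 0.7;            % assumed damping ratio
G_servo2 = tf(wn^2,[1 2*zeta*wn wn^2])
step(G_servo2), grid on
title('Second-Order Rotary Servo: Step Response')
stepinfo(G_servo2)     % rise time, settling time, overshoot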
MATLAB/Simulink can be used for both speed and position control of a DC motor. Below is a step-by-step guide for simulating DC motor control.
7EE4-22 ADVANCED CONTROL SYSTEM LAB
Practical No:-3
Objective:-Simulate Speed and position control of DC Motor.
1. What is a DC motor?
Theory:
Bode plots are a standard way to analyze the frequency response of a linear system.
num=[8 18 32];
den=[1 6 14 24];
disp('The transfer function is:')
sys = tf(num,den)
output will be
sys =
8 s^2 + 18 s + 32
-----------------------
s^3 + 6 s^2 + 14 s + 24
Continuous-time transfer function.
bode(sys)
grid
Create a transfer function model and plot its frequency response.
H = tf([10,21],[1,1.4,26]);
bode(H)
Calculate the frequency response between 1 and 13 rad/s.
[mag,phase,w] = bode(H,{1,13});
When you call bode with output arguments, the command returns
vectors mag and phase containing the magnitude and phase of the frequency
response. The cell array input {1,13} tells bode to calculate the response at a grid
of frequencies between 1 and 13 rad/s. bode returns the frequency points in the
vector w.
H = tf([1 0.1 7.5],[1 0.12 9 0 0]);
[mag,phase,wout] = bode(H);
Because H is a single-input, single-output (SISO) model, the first two dimensions of mag and phase are both 1. The third dimension is the number of frequencies in wout.
size(mag)
ans = 1×3
1 1 42
length(wout)
ans = 42
Thus, each entry along the third dimension of mag gives the magnitude of the
response at the corresponding frequency in wout.
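As an illustration of working with these arrays, the magnitude can be converted to decibels and plotted against the returned frequency vector. A minimal sketch, assuming mag and wout from the bode call above:
% Hedged sketch: plot the returned magnitude data in dB
magdB = 20*log10(squeeze(mag));    % squeeze removes the singleton dimensions
semilogx(wout, magdB), grid on
xlabel('Frequency (rad/s)'), ylabel('Magnitude (dB)')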
[Gm,Pm,Wcg,Wcp] = margin(mag,phase,w)
Plot Gain and Phase Margins of TF
sys = tf(1,[1 2 1 0])
sys =

          1
  ---------------
  s^3 + 2 s^2 + s
margin(sys)
The gain margin (6.02 dB) and phase margin (21.4 deg), displayed in the title, are
marked with solid vertical lines. The dashed vertical lines indicate the locations
of Wcg, the frequency where the gain margin is measured, and Wcp, the
frequency where the phase margin is measured.
[Gm,Pm,Wcg,Wcp] = margin(sys)
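Note that margin returns the gain margin Gm as a linear ratio rather than in dB; a quick hedged conversion:
Gm_dB = 20*log10(Gm)   % gain margin in dB (about 6.02 dB for this system)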
bode(sys3)
grid
margin(sys3)
[Gm,Pm,Wcg,Wcp] = margin(sys3)
7EE4-22 ADVANCED CONTROL SYSTEM LAB
Practical No:-4
Objective:- Frequency response of small-motion, linearized model of
industrial robot (first and second order) system using MATLAB.
PID Controller
The term PID stands for proportional integral derivative and it is one kind of device used to control
different process variables like pressure, flow, temperature, and speed in industrial applications.
In this controller, a control loop feedback device is used to regulate all the process variables.
This type of control is used to drive a system toward a target position or level. It is used almost everywhere for temperature control and appears in scientific processes, automation, and countless chemical applications. In this controller, closed-loop feedback is used to keep the actual output of a process as close as possible to the target, or setpoint.
PID Controller Block Diagram
A closed-loop system such as a PID controller includes a feedback control system. This system
evaluates the feedback variable against a setpoint to generate an error signal and, based on that
error, alters the system output. This procedure continues until the error reaches zero, or the
value of the feedback variable becomes equal to the setpoint.
This controller gives better results than an ON/OFF type controller. In an ON/OFF controller,
only two conditions are available to manage the system: the output turns ON once the process
value is lower than the setpoint, and turns OFF once the value is higher than the setpoint. The
output is not stable in this kind of controller; it swings continuously around the setpoint. The PID
controller, however, is more stable and accurate than the ON/OFF type controller.
Working of PID Controller
With a low-cost, simple ON-OFF controller, only two control states are possible: fully ON or
fully OFF. It is used for limited control applications where these two states are enough for the
control objective. However, the oscillating nature of this control limits its usage, and hence it is
being replaced by PID controllers.
A PID controller maintains the output such that there is zero error between the process variable
and the setpoint (desired output) through closed-loop operation. PID uses three basic control
behaviours, which are explained below.
P- Controller
A proportional or P-controller gives an output that is proportional to the current error e(t). It
compares the desired value, or setpoint, with the actual value, or feedback process value. The
resulting error is multiplied by a proportional constant to get the output. If the error value is zero,
the controller output is zero.
This controller requires biasing or a manual reset when used alone, because it never reaches the
steady-state condition. It provides stable operation but always maintains a steady-state error. The
speed of the response increases as the proportional constant Kc increases.
I-Controller
Due to the limitation of the P-controller, where there always exists an offset between the process
variable and the setpoint, an I-controller is needed to provide the action necessary to eliminate the
steady-state error. It integrates the error over a period of time until the error value reaches zero,
and it holds the value of the final control device at which the error becomes zero.
Integral control decreases its output when a negative error takes place. It limits the speed of
response and affects the stability of the system. Increasing the integral gain Ki speeds up the
elimination of the steady-state error, but too much integral action degrades stability.
As integral action works on the error, the steady-state error goes on decreasing. For most cases,
the PI controller is used, particularly where a high-speed response is not required.
D-Controller
The I-controller does not have the capability to predict the future behaviour of the error; it only
reacts after the setpoint has changed. The D-controller overcomes this problem by anticipating the
future behaviour of the error. Its output depends on the rate of change of the error with respect to
time, multiplied by the derivative constant. It gives a kick-start to the output, thereby increasing
the system response.
Compared with a PI controller, the response with derivative action is faster and the settling time
of the output is decreased. Derivative action improves the stability of the system by compensating
for the phase lag caused by the I-controller. Increasing the derivative gain increases the speed of
response.
So, finally, by combining these three controllers we can get the desired response from the system.
Types of PID Controller
PID controllers are classified into three types: ON/OFF, proportional, and standard PID
controllers. Which type is used depends on the control system; the user selects the controller that
regulates the process.
ON/OFF Control
An on-off control method is the simplest type of device used for temperature control. The device
output is either ON or OFF, with no intermediate state. This controller turns ON the output only
once the temperature crosses the setpoint. A limit controller is one particular kind of ON/OFF
controller that uses a latching relay. This relay is reset manually and is used to shut down a process
once a certain temperature is reached.
Proportional Control
This kind of controller is designed to remove the cycling associated with ON/OFF control. A
proportional controller reduces the average power supplied to the heater as the temperature
approaches the setpoint.
This slows the heater so that it will not overshoot the setpoint but will still reach it and maintain
a steady temperature.
This proportioning action can be achieved by switching the output ON and OFF for short time
periods. This time-proportioning varies the ratio of ON time to OFF time to control the
temperature.
Standard Type PID Controller
This kind of PID controller merges proportional control with integral and derivative control to
automatically help the unit compensate for changes within the system. The integral and derivative
adjustments are expressed in time-based units; they are also referred to by their reciprocals,
RESET and RATE, respectively. The proportional, integral, and derivative terms must be adjusted,
or tuned, individually to a particular system by trial and error.
The variable e represents the tracking error, the difference between the desired output r and the
actual output y. This error signal e is fed to the PID controller, and the controller computes
both the derivative and the integral of this error signal with respect to time. The control signal u
to the plant is equal to the proportional gain Kp times the magnitude of the error, plus the
integral gain Ki times the integral of the error, plus the derivative gain Kd times the derivative
of the error.
This control signal u is fed to the plant and the new output y is obtained. The new output y
is then fed back and compared to the reference to find the new error signal e. The controller
takes this new error signal and computes an update of the control input. This process continues
while the controller is in effect.
The transfer function of a PID controller is found by taking the Laplace transform of the above
control law:
C(s) = Kp + Ki/s + Kd·s = (Kd·s² + Kp·s + Ki)/s
We can define a PID controller in MATLAB using a transfer function model directly, for
example:
Kp = 1;
Ki = 1;
Kd = 1;
s = tf('s');
C = Kp + Ki/s + Kd*s
C=
s^2 + s + 1
-----------
s
Continuous-time transfer function.
Alternatively, we may use MATLAB's pid object to generate an equivalent continuous-time
controller as follows:
C = pid(Kp,Ki,Kd)
C=
1
Kp + Ki * --- + Kd * s
s
with Kp = 1, Ki = 1, Kd = 1
Continuous-time PID controller in parallel form.
Let's convert the pid object to a transfer function to verify that it yields the same result as above:
tf(C)
ans =
s^2 + s + 1
-----------
s
Continuous-time transfer function.
The Characteristics of the P, I, and D Terms
Increasing the proportional gain Kp has the effect of proportionally increasing the control signal
for the same level of error. The fact that the controller will "push" harder for a given level of error
tends to cause the closed-loop system to react more quickly, but also to overshoot more. Another
effect of increasing Kp is that it tends to reduce, but not eliminate, the steady-state error.
The addition of a derivative term to the controller (Kd) adds the ability of the controller to
"anticipate" error. With simple proportional control, if Kp is fixed, the only way that the control
will increase is if the error increases. With derivative control, the control signal can become large
if the error begins sloping upward, even while the magnitude of the error is still relatively small.
This anticipation tends to add damping to the system, thereby decreasing overshoot. The addition
of a derivative term, however, has no effect on the steady-state error.
The addition of an integral term to the controller (Ki) tends to help reduce steady-state error. If
there is a persistent, steady error, the integrator builds and builds, thereby increasing the control
signal and driving the error down. A drawback of the integral term, however, is that it can make
the system more sluggish (and oscillatory) since when the error signal changes sign, it may take
a while for the integrator to "unwind."
The general effects of each controller parameter (Kp, Ki, Kd) on a closed-loop system are
summarized in the table below. Note that these guidelines hold in many cases, but not all. If you truly
want to know the effect of tuning the individual gains, you will have to do more analysis, or will
have to perform testing on the actual system.
CL RESPONSE | RISE TIME    | OVERSHOOT | SETTLING TIME | S-S ERROR
Kp          | Decrease     | Increase  | Small Change  | Decrease
Ki          | Decrease     | Increase  | Increase      | Decrease
Kd          | Small Change | Decrease  | Decrease      | No Change
Example Problem
The plant considered here is a mass-spring-damper system. The transfer function between the
input force F(s) and the output displacement X(s) is
X(s)/F(s) = 1/(m·s² + b·s + k)
Let
m = 1 kg
b = 10 N s/m
k = 20 N/m
F = 1 N
Substituting these values into the above transfer function gives
X(s)/F(s) = 1/(s² + 10 s + 20)     (6)
The goal of this problem is to show how each of the terms Kp, Ki, and Kd contributes to
obtaining the common goals of:
▪ Fast rise time
▪ Minimal overshoot
▪ Zero steady-state error
Open-Loop Step Response
Let's first view the open-loop step response. Create a new m-file and run the following code:
s = tf('s');
P = 1/(s^2 + 10*s + 20);
step(P)
The DC gain of the plant transfer function is 1/20, so 0.05 is the final value of the output to a unit
step input. This corresponds to a steady-state error of 0.95, which is quite large. Furthermore, the
rise time is about one second, and the settling time is about 1.5 seconds. Let's design a controller
that will reduce the rise time, reduce the settling time, and eliminate the steady-state error.
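These characteristics can be checked numerically. A minimal hedged sketch using dcgain and stepinfo on the plant defined above (the exact numbers depend on stepinfo's rise- and settling-time definitions):
% Hedged check of the open-loop figures quoted above
dcgain(P)          % DC gain, = 1/20 = 0.05
S = stepinfo(P);
S.RiseTime         % rise time, roughly 1 s
S.SettlingTime     % settling time, roughly 1.5 s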
Proportional Control
From the table shown above, we see that the proportional controller (Kp) reduces the rise time,
increases the overshoot, and reduces the steady-state error.
The closed-loop transfer function of our unity-feedback system with a proportional controller is
the following, where X(s) is the output and the reference R(s) is the input:
T(s) = X(s)/R(s) = Kp/(s² + 10 s + 20 + Kp)     (7)
Let the proportional gain (Kp) equal 300 and change the m-file to the following:
Kp = 300;
C = pid(Kp)
T = feedback(C*P,1)
t = 0:0.01:2;
step(T,t)
C=
Kp = 300
P-only controller.
T=
300
----------------
s^2 + 10 s + 320
Proportional-Derivative Control
Now, let's take a look at PD control. From the table shown above, we see that the addition of
derivative control (Kd) tends to reduce both the overshoot and the settling time. The closed-loop
transfer function of the given system with a PD controller is:
T(s) = (Kd·s + Kp)/(s² + (10 + Kd)·s + (20 + Kp))     (8)
Let Kp equal 300 as before and let Kd equal 10. Enter the following commands into an m-file
and run it in the MATLAB command window.
Kp = 300;
Kd = 10;
C = pid(Kp,0,Kd)
T = feedback(C*P,1)
t = 0:0.01:2;
step(T,t)
C=
Kp + Kd * s
with Kp = 300, Kd = 10
T=
10 s + 300
----------------
s^2 + 20 s + 320
Continuous-time transfer function.
This plot shows that the addition of the derivative term reduced both the overshoot and the settling
time, and had a negligible effect on the rise time and the steady-state error.
Proportional-Integral Control
Before proceeding to PID control, let's investigate PI control. From the table, we see that the
addition of integral control (Ki) tends to decrease the rise time, increase both the overshoot and
the settling time, and reduce the steady-state error. For the given system, the closed-loop transfer
function with a PI controller is:
T(s) = (Kp·s + Ki)/(s³ + 10 s² + (20 + Kp)·s + Ki)     (9)
Let's reduce Kp to 30, and let Ki equal 70. Create a new m-file and enter the following
commands.
Kp = 30;
Ki = 70;
C = pid(Kp,Ki)
T = feedback(C*P,1)
t = 0:0.01:2;
step(T,t)
C=
1
Kp + Ki * ---
s
with Kp = 30, Ki = 70
Continuous-time PI controller in parallel form.
T =

         30 s + 70
  ------------------------
  s^3 + 10 s^2 + 50 s + 70
Continuous-time transfer function.
Run this m-file in the MATLAB command window and you should generate the above plot. We
have reduced the proportional gain (Kp) because the integral controller also reduces the rise time
and increases the overshoot, as the proportional controller does (a doubled effect). The above
response shows that the integral controller eliminated the steady-state error in this case.
Proportional-Integral-Derivative Control
Now, let's examine PID control. The closed-loop transfer function of the given system with a PID
controller is:
T(s) = (Kd·s² + Kp·s + Ki)/(s³ + (10 + Kd)·s² + (20 + Kp)·s + Ki)     (10)
After several iterations of tuning, the gains Kp = 350, Ki = 300, and Kd = 50 provided the desired
response. To confirm, enter the following commands in an m-file and run it in the command
window. You should obtain the following step response.
Kp = 350;
Ki = 300;
Kd = 50;
C = pid(Kp,Ki,Kd)
T = feedback(C*P,1);
t = 0:0.01:2;
step(T,t)
C=
1
Kp + Ki * --- + Kd * s
s
Now, we have designed a closed-loop system with no overshoot, fast rise time, and no steady-
state error.
When you are designing a PID controller for a given system, follow the steps shown below to
obtain a desired response.
1. Obtain an open-loop response and determine what needs to be improved
2. Add a proportional control to improve the rise time
3. Add a derivative control to reduce the overshoot
4. Add an integral control to reduce the steady-state error
5. Adjust each of the gains Kp, Ki, and Kd until you obtain a desired overall response. You can
always refer to the table shown in this "PID Tutorial" page to find out which controller
controls which characteristics.
Lastly, please keep in mind that you do not need to implement all three controllers (proportional,
derivative, and integral) into a single system, if not necessary. For example, if a PI controller
meets the given requirements (like the above example), then you don't need to implement a
derivative controller on the system. Keep the controller as simple as possible.
An example of tuning a PI controller on an actual physical system can be found at the
following link. This example also begins to illustrate some challenges of implementing control,
including: control saturation, integrator wind-up, and noise amplification.
MATLAB provides tools for automatically choosing optimal PID gains which makes the trial and
error process described above unnecessary. You can access the tuning algorithm directly
using pidtune or through a nice graphical user interface (GUI) using pidTuner.
The MATLAB automated tuning algorithm chooses PID gains to balance performance (response
time, bandwidth) and robustness (stability margins). By default, the algorithm designs for a 60-
degree phase margin.
Let's explore these automated tools by first generating a proportional controller for the mass-
spring-damper system by entering the command shown below. In the shown syntax, P is the
previously generated plant model, and 'p' specifies that the tuner employ a proportional controller.
pidTuner(P,'p')
The pidTuner GUI window, like that shown below, should appear.
Notice that the step response shown is slower than that of the proportional controller we designed by
hand. Now click on the Show Parameters button on the top right. As expected, the proportional
gain Kp is smaller than the one we employed: Kp = 94.86 < 300.
We can now interactively tune the controller parameters and immediately see the resulting
response in the GUI window. Try dragging the Response Time slider to the right to 0.14 s, as
shown in the figure below. This causes the response to indeed speed up, and we can see that Kp is
now closer to the manually chosen value. We can also see other performance and robustness
parameters for the system. Note that before we adjusted the slider, the target phase margin was
60 degrees. This is the default for pidTuner and generally provides a good balance between
robustness and performance.
Now let's try designing a PID controller for our system. By specifying the previously designed
(baseline) controller, C, as the second parameter, pidTuner will design another PID controller
(instead of P or PI) and will compare the response of the system with the automated controller
to that of the baseline.
pidTuner(P,C)
We see in the output window that the automated controller responds slower and exhibits more
overshoot than the baseline. Now choose the Domain: Frequency option from the toolstrip,
which reveals frequency domain tuning parameters.
Now type in 32 rad/s for Bandwidth and 90 deg for Phase Margin, to generate a controller
similar in performance to the baseline. Keep in mind that a higher closed-loop bandwidth results
in a faster rise time, and a larger phase margin reduces the overshoot and improves the system
stability.
Finally, we note that we can generate the same controller using the command line
tool pidtune instead of the pidTuner GUI employing the following syntax.
opts = pidtuneOptions('CrossoverFrequency',32,'PhaseMargin',90);
[C, info] = pidtune(P, 'pid', opts)
C=
1
Kp + Ki * --- + Kd * s
s
Reference: https://ctms.engin.umich.edu/CTMS/index.php?example=Introduction&section=ControlPID
7EE4-22 ADVANCED CONTROL SYSTEM LAB
Practical No:-5
Objective:- Frequency response of small-motion, linearized model of
industrial robot (first and second order) system using MATLAB.
Theory:
clear;clc;
Ra=2;
La=0.4;
Kt=0.02;
J=0.02;
D=0.2;
Kb=0.02;
K=1;
num=[Kt*K];
den=[J*La J*Ra+D*La D*Ra+Kb*Kt+Kt*K];
TF_closeloop_Control_DC=tf(num,den)
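The objective of this practical calls for the frequency response; a minimal hedged addition that plots the Bode diagram of the transfer function created above:
figure
bode(TF_closeloop_Control_DC)
grid on
title('Frequency Response of the Closed-Loop DC Motor Model')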
7EE4-22 ADVANCED CONTROL SYSTEM LAB
Practical No:-6
Objective:- Design and implement closed loop control of DC Motor using
MATLAB/Simulink and suitable hardware platform.
1. What is Matlab?
8. Define CMRR.
Controller Block Diagram: The block diagram of microcontroller based bridge PWM inverter
is as shown in Figure. The required four digit speed in RPM [Rotation per Minute] is entered
through the keyboard and corresponding to the key pressed, digital equivalent of that RPM is
stored in memory.
The current running speed of the AC motor is sensed through a speed sensor, and the analog output
given by the sensor is converted to digital data using an Analog to Digital Converter [ADC].
The digital data is accepted through the 8051 microcontroller ports and compared with the digital
data equivalent to the required speed. In accordance with the error signal, the width (duty cycle) of
the PWM signal is varied, which in turn controls the AC voltage.
From the generated PWM signal, required two gate signals are generated using external interrupt
to drive the bridge inverter circuit.
Gate signals are boosted to a sufficient voltage level using a gate driver circuit, so that they can
drive the MOSFET switches of the bridge inverter into the ON state. The user can alter the speed at any
instant of time according to his requirements. Many additional features can be added, such as
sensing the room temperature and automatically controlling either the speed of the fan or
the level of air conditioning required. Figure 4 explains the logic flow of the basic operation.
Controller Design
The controller is designed using simple, low-cost components: an 8051 microcontroller, an 8- or 12-bit
Analog to Digital Converter (ADC), a 4x4 keypad, 4 chopper MOSFET switches (IRFZ48),
and a speed/intensity sensor.
The controller design can be explained under 4 sections:
Keypad interface with the 8051 microcontroller.
Keypad Interface
A 4×4 keypad is interfaced with the 8051 microcontroller as shown in Figure 5, through which four
keys are accepted.
After accepting the four keys, they are combined to represent the four-digit required RPM, which
actually represents the external memory address in which the digital equivalent of the speed is stored.
For example, if the keys entered are 1 (01), 2 (02), 3 (03), 4 (04), then they are combined as
1234 (RPM), which represents the external memory address in which the 8-bit digital equivalent of that
speed is stored. The higher byte of the memory address is stored in DPH [data pointer high byte]
and the lower byte in DPL [data pointer low byte]. This method saves time since it does not require
any program execution to convert the entered speed in RPM into its digital equivalent. The other
method is to enter the equivalent digital data of the RPM directly, provided a conversion chart
[external look-up table] is available. This technique saves some memory access time, since
communication with memory is avoided.
ADC Interfacing
Whenever the speed varies from zero to maximum, the speed sensor output varies from zero to five
volts, respectively. An 8-bit ADC with a resolution of 1/2^8 is used to convert the analog voltage to
digital data. A minimum change of 19.5 mV (and the corresponding change in RPM) is required
to change the digital state of the ADC. This limits the accuracy of the application. The logic of
interfacing the ADC is explained in the flowchart given in Figure 7.
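The 19.5 mV figure follows directly from the ADC resolution. A quick hedged check in MATLAB, assuming a 5 V full-scale reference:
Vref = 5;               % assumed full-scale reference voltage (V)
nbits = 8;              % ADC resolution in bits
LSB = Vref/2^nbits      % one LSB = 5/256 = 0.0195 V, i.e. about 19.5 mV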
PWM Generation
The 8051 microcontroller does not have an on-chip PWM generator, so PWM is implemented using the
'A' register and any other register (R0-R7), as shown in Figure 8.
A count (the ON-period time) is loaded onto one of the general purpose registers (GPR), which can
be called the duty-cycle register, and the accumulator ('A') is loaded with zero. Register 'A' is
incremented in steps of one and continuously compared with the duty-cycle register.
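The compare-based PWM idea described above can be visualized with a short MATLAB sketch (this is an illustration of the algorithm only, not 8051 code; the duty-cycle value is assumed):
% Hedged illustration: software PWM by counter / duty-register comparison
duty = 100;                          % assumed value loaded into the duty-cycle register
counter = mod(0:1023,256);           % 8-bit up-counter, four full periods
pwm = double(counter < duty);        % output is HIGH while counter < duty register
stairs(pwm), ylim([-0.2 1.2]), grid on
xlabel('Counter ticks'), ylabel('PWM output')
title('PWM by comparing a free-running counter with a duty-cycle register')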
Result: We studied the digital controller using microcontroller.
Practical No:-7
Objective:- Implementation of digital controller using microcontroller
Theory:
The system in this example consists of an inverted pendulum mounted to a motorized cart. The
inverted pendulum system is an example commonly found in control system textbooks and
research literature. Its popularity derives in part from the fact that it is unstable without control,
that is, the pendulum will simply fall over if the cart isn't moved to balance it. Additionally, the
dynamics of the system are nonlinear. The objective of the control system is to balance the
inverted pendulum by applying a force to the cart that the pendulum is attached to. A real-world
example that relates directly to this inverted pendulum system is the attitude control of a booster
rocket at takeoff.
In this case we will consider a two-dimensional problem where the pendulum is constrained to
move in the vertical plane shown in the figure below. For this system, the control input is the
force F that moves the cart horizontally, and the outputs are the angular position of the pendulum
θ and the horizontal position of the cart x.
For the PID, root locus, and frequency response sections of this problem, we will be interested
only in the control of the pendulum's position. This is because the techniques used in these
sections are best-suited for single-input, single-output (SISO) systems. Therefore, none of the
design criteria deal with the cart's position. We will, however, investigate the controller's effect
on the cart's position after the controller has been designed. For these sections, we will design a
controller to restore the pendulum to a vertically upward position after it has experienced an
impulsive "bump" to the cart. Specifically, the design criteria are that the pendulum return to its
upright position within 5 seconds and that the pendulum never move more than 0.05 radians away
from vertical after being disturbed by an impulse of magnitude 1 N·sec. The pendulum will
initially begin in the vertically upward equilibrium, θ = π.
In summary, the design requirements for the inverted pendulum state-space example are:
Below are the free-body diagrams of the two elements of the inverted pendulum system.
Summing the forces in the free-body diagram of the cart in the horizontal direction, you get the
following equation of motion.
Note that you can also sum the forces in the vertical direction for the cart, but no useful
information would be gained.
Summing the forces in the free-body diagram of the pendulum in the horizontal direction, you get
the following expression for the reaction force N.
If you substitute this equation into the first equation, you get one of the two governing equations
for this system.
To get the second equation of motion for this system, sum the forces perpendicular to the
pendulum. Solving the system along this axis greatly simplifies the mathematics. You should get
the following equation.
To get rid of the P and N terms in the equation above, sum the moments about the centroid of the
pendulum to get the following equation.
Combining these last two expressions, you get the second governing equation.
Since the analysis and control design techniques we will be employing in this example apply only
to linear systems, this set of equations needs to be linearized. Specifically, we will linearize the
equations about the vertically upward equilibrium position, θ = π, and will assume that the system
stays within a small neighborhood of this equilibrium. This assumption should be reasonably
valid since under control we desire that the pendulum not deviate more than 20 degrees from the
vertically upward position. Let φ represent the deviation of the pendulum's position from
equilibrium, that is, θ = π + φ. Again presuming a small deviation (φ) from equilibrium, we can use
the following small-angle approximations of the nonlinear functions in our system equations:
cos θ ≈ −1,   sin θ ≈ −φ,   (dθ/dt)² ≈ 0
After substituting the above approximations into our nonlinear governing equations, we arrive at
the two linearized equations of motion. Note that u has been substituted for the input F.
1. Transfer Function
To obtain the transfer functions of the linearized system equations, we must first take the Laplace
transform of the system equations assuming zero initial conditions. The resulting Laplace
transforms are shown below.
Recall that a transfer function represents the relationship between a single input and a single
output at a time. To find our first transfer function, for the output Φ(s) and an input of U(s), we need
to eliminate X(s) from the above equations: solve the first equation for X(s) and substitute into the second,
where
q = [(M + m)(I + m·l²) − (m·l)²]
From the transfer function above it can be seen that there is both a pole and a zero at the origin.
These can be canceled and the transfer function becomes the following:
Φ(s)/U(s) = ( (m·l/q)·s ) / ( s³ + (b(I + m·l²)/q)·s² − ((M + m)m·g·l/q)·s − (b·m·g·l/q) )
Second, the transfer function with the cart position X(s) as the output can be derived in a similar
manner to arrive at the following:
X(s)/U(s) = ( ((I + m·l²)/q)·s² − (g·m·l/q) ) / ( s⁴ + (b(I + m·l²)/q)·s³ − ((M + m)m·g·l/q)·s² − (b·m·g·l/q)·s )
2. State-Space
The linearized equations of motion from above can also be represented in state-space form if they
are rearranged into a series of first-order differential equations. Since the equations are linear,
they can then be put into the standard matrix form shown below.
The C matrix has 2 rows because both the cart's position and the pendulum's position are part of
the output. Specifically, the cart's position is the first element of the output y and the pendulum's
deviation from its equilibrium position is the second element of y.
MATLAB representation
1. Transfer Function
We can represent the transfer functions derived above for the inverted pendulum system within
MATLAB employing the following commands. Note that you can give names to the outputs (and
inputs) to differentiate between the cart's position and the pendulum's position. Running this code
in the command window produces the output shown below.
M = 0.5;
m = 0.2;
b = 0.1;
I = 0.006;
g = 9.8;
l = 0.3;
q = (M+m)*(I+m*l^2)-(m*l)^2;
s = tf('s');
% cart and pendulum transfer functions (these lines complete the script so
% that sys_tf is defined before it is named and displayed below)
P_cart = (((I+m*l^2)/q)*s^2 - (m*g*l/q))/(s^4 + (b*(I+m*l^2))/q*s^3 - ((M+m)*m*g*l)/q*s^2 - b*m*g*l/q*s);
P_pend = (m*l*s/q)/(s^3 + (b*(I+m*l^2))/q*s^2 - ((M+m)*m*g*l)/q*s - b*m*g*l/q);
sys_tf = [P_cart ; P_pend];
inputs = {'u'};
outputs = {'x'; 'phi'};
set(sys_tf,'InputName',inputs)
set(sys_tf,'OutputName',outputs)
sys_tf
sys_tf =
1.045e-05 s
phi: -----------------------------------------------------
2.3e-06 s^3 + 4.182e-07 s^2 - 7.172e-05 s - 1.025e-05
M = 0.5;
m = 0.2;
b = 0.1;
I = 0.006;
g = 9.8;
l = 0.3;
p = I*(M+m)+M*m*l^2; % denominator term used in the A and B matrices below
A = [0 1 0 0;
0 -(I+m*l^2)*b/p (m^2*g*l^2)/p 0;
0 0 0 1;
0 -(m*l*b)/p m*g*l*(M+m)/p 0];
B=[ 0;
(I+m*l^2)/p;
0;
m*l/p];
C = [1 0 0 0;
0 0 1 0];
D = [0;
0];
states = {'x' 'x_dot' 'phi' 'phi_dot'};
inputs = {'u'};
outputs = {'x'; 'phi'};
sys_ss = ss(A,B,C,D,'statename',states,'inputname',inputs,'outputname',outputs)
sys_ss =
B= u
x 0
x_dot 1.818
phi 0
phi_dot 4.545
C=
x x_dot phi phi_dot
x 1 0 0 0
phi 0 0 1 0
D=
u
x 0
phi 0
The above state-space model can also be converted into transfer function form employing the tf
command as shown below. Conversely, the transfer function model can be converted into state-
space form using the ss command.
sys_tf = tf(sys_ss)
sys_tf =
From input "u" to output...
1.818 s^2 + 1.615e-15 s - 44.55
x: --------------------------------------
s^4 + 0.1818 s^3 - 31.18 s^2 - 4.455 s
4.545 s - 1.277e-16
phi: ----------------------------------
s^3 + 0.1818 s^2 - 31.18 s - 4.455
7EE4-22 ADVANCED CONTROL SYSTEM LAB
Practical No:-8
Objective:- Design and implementation of controller for practical systems
- inverted pendulum system.
Objectives:
a) To restrict the pendulum arm vibration (α) within ±3 degrees
b) To restrict the base angle oscillation (θ) within ±30 degrees.
Prerequisites:
LQR technique, Matlab coding, Arduino coding, State space modelling
Equipment required:
Inverted pendulum setup, Arduino mega, A-B cable, Decoder shield, Power supply, Screw driver, Jumpers,
Wires and Wire stripper.
The equations of motion for a pendulum can be derived using Newton’s laws or Lagrangian
mechanics. For an inverted pendulum on a cart, the equations are typically:
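The exact form depends on the sign conventions used. A commonly quoted form, consistent with the cart-pendulum model of the earlier practicals (cart mass M, pendulum mass m, pendulum inertia I, half-length l, cart friction b, applied force F, and pendulum angle θ measured from the downward vertical so that θ = π is the upright position), is sketched below.
\[(M + m)\ddot{x} + b\dot{x} + m\,l\,\ddot{\theta}\cos\theta - m\,l\,\dot{\theta}^{2}\sin\theta = F\]
\[(I + m\,l^{2})\ddot{\theta} + m\,g\,l\sin\theta = -\,m\,l\,\ddot{x}\cos\theta\]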
Practical No:-9
Objective:- To design and implement control action for maintaining a
pendulum in the upright position (even when subjected to external
disturbances) through LQR technique in an Arduino Mega.
To see how this problem was originally set up and the system equations were derived, consult the Inverted
Pendulum: System Modeling page. For this problem the outputs are the cart's displacement (x, in meters)
and the pendulum angle (φ, in radians), where φ represents the deviation of the pendulum's position from
equilibrium, that is, θ = π + φ.
The design criteria for this system for a 0.2-m step in desired cart position are as follows:
▪ Settling time for x and φ of less than 5 seconds
▪ Rise time for x of less than 0.5 seconds
▪ Pendulum angle φ never more than 20 degrees (0.35 radians) from the vertical
▪ Steady-state error of less than 2% for x and φ
As you may have noticed if you went through some of the other inverted pendulum examples, the
design criteria for this example are different. In the other examples we were attempting to keep
the pendulum vertical in response to an impulsive disturbance force applied to the cart. We did
not attempt to control the cart's position. In this example, we are attempting to keep the pendulum
vertical while controlling the cart's position to move 0.2 meters to the right. A state-space design
approach is well suited to the control of multiple outputs as we have here.
This problem can be solved using full-state feedback. The schematic of this type of control system
is shown below, where K is a matrix of control gains. Note that here we feed back all of the
system's states, rather than using the system's outputs for feedback.
Open-loop poles
In this problem, r represents the step command of the cart's position. The 4 states represent the position and velocity of the cart
and the angle and angular velocity of the pendulum. The output y contains both the position of the cart and the angle of the
pendulum. We want to design a controller so that when a step reference is given to the system, the pendulum should be displaced,
but eventually return to zero (i.e. vertical) and the cart should move to its new commanded position. To view the system's open-
loop response please refer to the Inverted Pendulum: System Analysis page.
The first step in designing a full-state feedback controller is to determine the open-loop poles of the system. Enter the following
lines of code into an m-file. After execution in the MATLAB command window, the output will list the open-loop poles
(eigenvalues of A) as shown below.
M = 0.5;
m = 0.2;
b = 0.1;
I = 0.006;
g = 9.8;
l = 0.3;
p = I*(M+m)+M*m*l^2; % denominator term used in the A and B matrices below
A = [0 1 0 0;
0 -(I+m*l^2)*b/p (m^2*g*l^2)/p 0;
0 0 0 1;
0 -(m*l*b)/p m*g*l*(M+m)/p 0];
B = [ 0;
(I+m*l^2)/p;
0;
m*l/p];
C = [1 0 0 0;
0 0 1 0];
D = [0;
0];
states = {'x' 'x_dot' 'phi' 'phi_dot'};
inputs = {'u'};
outputs = {'x'; 'phi'};
sys_ss = ss(A,B,C,D,'statename',states,'inputname',inputs,'outputname',outputs);
poles = eig(A)
poles =
         0
   -0.1428
   -5.6041
    5.5651
As you can see, there is one right-half plane pole at 5.5651. This should confirm your intuition
that the system is unstable in open loop.
Linear Quadratic Regulation (LQR)
The next step in the design process is to find the vector of state-feedback control
gains K, assuming that we have access to (i.e. can measure) all four of the state variables. This can
be accomplished in a number of ways. If you know the desired closed-loop pole locations, you
can use the MATLAB commands place or acker. Another option is to use the lqr command, which
returns the optimal controller gain assuming a linear plant, quadratic cost function, and reference
equal to zero (consult your textbook for more details).
Before we design our controller, we will first verify that the system is controllable. Satisfaction
of this property means that we can drive the state of the system anywhere we like in finite time
(under the physical constraints of the system). For the system to be completely state controllable,
the controllability matrix must have rank n, where the rank of a matrix is the number of linearly
independent rows (or columns). The controllability matrix of the system takes the form shown
below. The number n corresponds to the number of state variables of the system. Adding
additional terms to the controllability matrix with higher powers of the matrix A will not increase
the rank of the controllability matrix since these additional terms will just be linear combinations
of the earlier terms.
Controllability matrix = [B  AB  A^2*B  ...  A^(n-1)*B]     (3)
Since our controllability matrix is 4x4, the rank of the matrix must be 4. We will use the
MATLAB command ctrb to generate the controllability matrix and the MATLAB
command rank to test the rank of the matrix. Adding the following additional commands to your
m-file and running in the MATLAB command window will produce the following output.
co = ctrb(sys_ss);
controllability = rank(co)
controllability =
     4
Therefore, we have verified that our system is controllable and thus we should be able to design
a controller that achieves the given requirements. Specifically, we will use the linear quadratic
regulation (LQR) method for determining our state-feedback control gain matrix K. The MATLAB
function lqr allows you to choose two parameters, R and Q, which will balance the relative
importance of the control effort (u) and the error (deviation from 0), respectively, in the cost function
that you are trying to optimize. The simplest case is to assume R = 1 and Q = C'C. The cost
function corresponding to this R and Q places equal importance on the control and the state
variables which are outputs (the pendulum's angle and the cart's position). Essentially, the lqr
method allows for the control of both outputs. In this case, it is pretty easy to do. The controller
can be tuned by changing the nonzero elements in the Q matrix to achieve a desirable response.
To observe the structure of Q, enter the following into the MATLAB command window to see
the output given below.
Q = C'*C
Q=
1 0 0 0
0 0 0 0
0 0 1 0
0 0 0 0
The element in the (1,1) position of Q represents the weight on the cart's position and the element
in the (3,3) position represents the weight on the pendulum's angle. The input weighting R will
remain at 1. Ultimately what matters is the relative value of Q and R, not their absolute values.
Now that we know how to interpret the Q matrix, we can experiment to find the Q matrix that
will give us a "good" controller. We will go ahead and find the K matrix and plot the response all
in one step so that changes can be made in the control and seen automatically in the response.
Add the following commands to the end of your m-file and run in the MATLAB command
window to get the following value for K and the response plot shown below.
Q = C'*C;
R = 1;
K = lqr(A,B,Q,R)
Ac = [(A-B*K)];
Bc = [B];
Cc = [C];
Dc = [D];
sys_cl = ss(Ac,Bc,Cc,Dc,'statename',states,'inputname',inputs,'outputname',outputs);
t = 0:0.01:5;
r =0.2*ones(size(t));
[y,t,x]=lsim(sys_cl,r,t);
[AX,H1,H2] = plotyy(t,y(:,1),t,y(:,2),'plot');
set(get(AX(1),'Ylabel'),'String','cart position (m)')
set(get(AX(2),'Ylabel'),'String','pendulum angle (radians)')
title('Step Response with LQR Control')
K =
The controller can be improved by increasing the weights on the cart position and the pendulum angle. Setting Q(1,1) = 5000 and Q(3,3) = 100 and recomputing the gain gives the following:
Q = C'*C;
Q(1,1) = 5000;
Q(3,3) = 100
R = 1;
K = lqr(A,B,Q,R)
Ac = [(A-B*K)];
Bc = [B];
Cc = [C];
Dc = [D];
sys_cl = ss(Ac,Bc,Cc,Dc,'statename',states,'inputname',inputs,'outputname',outputs);
t = 0:0.01:5;
r =0.2*ones(size(t));
[y,t,x]=lsim(sys_cl,r,t);
[AX,H1,H2] = plotyy(t,y(:,1),t,y(:,2),'plot');
set(get(AX(1),'Ylabel'),'String','cart position (m)')
set(get(AX(2),'Ylabel'),'String','pendulum angle (radians)')
title('Step Response with LQR Control')
Q=
5000 0 0 0
0 0 0 0
0 0 100 0
0 0 0 0
K =
With these weights the pendulum and cart respond faster, but the cart does not settle exactly at the commanded 0.2 m because the reference enters ahead of the full-state feedback; a precompensation factor Nbar is therefore used to scale the reference input and remove this steady-state offset. We can find this factor by employing the user-defined function rscale.m as shown below.
The output matrix is modified (Cn, below) to reflect the fact that the reference is a command only on the cart position.
Cn = [1 0 0 0];
sys_ss = ss(A,B,Cn,0);
Nbar = rscale(sys_ss,K)
Nbar =
-70.7107
Note that the function rscale.m is not a standard function in MATLAB. You will have to
download it here and place it in your current directory. More information can be found
here, Extras: rscale.m. Now you can plot the step response by adding the above and following
lines of code to your m-file and re-running at the command line.
sys_cl = ss(Ac,Bc*Nbar,Cc,Dc,'statename',states,'inputname',inputs,'outputname',outputs);
t = 0:0.01:5;
r =0.2*ones(size(t));
[y,t,x]=lsim(sys_cl,r,t);
[AX,H1,H2] = plotyy(t,y(:,1),t,y(:,2),'plot');
set(get(AX(1),'Ylabel'),'String','cart position (m)')
set(get(AX(2),'Ylabel'),'String','pendulum angle (radians)')
title('Step Response with Precompensation and LQR Control')
Now, the steady-state error is within our limits, the rise and settle times are met, and the
pendulum's overshoot is within range of the design criteria.
Note that the precompensator employed above is calculated based on the model of the plant
and further that the precompensator is located outside of the feedback loop. Therefore, if there
are errors in the model (or unknown disturbances) the precompensator will not correct for them
and there will be steady-state error. You may recall that the addition of integral control may also
be used to eliminate steady-state error, even in the presence of model uncertainty and step
disturbances. For an example of how to implement integral control in the state space setting, see
the Motor Position: State-Space Methods example. The tradeoff with using integral control is that
the error must first develop before it can be corrected for, therefore, the system may be slow to
respond. The precompensator on the other hand is able to anticipitate the steady-state offset using
knowledge of the plant model. A useful technique is to combine the precompensator with integral
control to leverage the advantages of each approach.
Observer-based control
The response achieved above is good, but was based on the assumption of full-state feedback,
which is not necessarily valid. To address the situation where not all state variables are measured,
a state estimator must be designed. A schematic of state-feedback control with a full-state
estimator is shown below, without the precompensator Nbar.
Before we design our estimator, we will first verify that our system is observable. The property
of observability determines whether or not, based on the measured outputs of the system, we can
estimate the state of the system. Similar to the process for verifying controllability, a system is
observable if its observability matrix is full rank. The observability matrix is defined as follows:
O = [C; CA; CA^2; ... ; CA^(n-1)]     (4)
We can employ the MATLAB command obsv to contruct the observability matrix and
the rank command to check its rank as shown below.
ob = obsv(sys_ss);
observability = rank(ob)
observability = 4
Since the observability matrix is 8x4 and has rank 4, it is full rank and our system is observable.
The observability matrix in this case is not square since our system has two outputs. Note that if
we could only measure the pendulum angle output, we would not be able to estimate the full state
of the system. This can be verified by the fact that obsv(A,C(2,:)) produces an observability
matrix that is not full rank.
Since we know that we can estimate our system state, we will now describe the process for
designing a state estimator. Based on the above diagram, the dynamics of the state estimate x̂ are
described by the following equation:
d/dt(x̂) = A·x̂ + B·u + L·(y − ŷ),   where ŷ = C·x̂     (5)
The spirit of this equation is similar to that of closed-loop control, in that the last term is a correction
based on feedback. Specifically, the last term corrects the state estimate based on the difference
between the actual output y and the estimated output ŷ. Now let's look at the dynamics of the
error in the state estimate, e = x − x̂:
d/dt(e) = (A·x + B·u) − [A·x̂ + B·u + L·(C·x − C·x̂)]     (6)
Therefore, the state estimate error dynamics are described by
d/dt(e) = (A − L·C)·e     (7)
and the error will approach zero (x̂ will approach x) if the matrix A − LC is stable (has negative
eigenvalues). As is the case for control, the speed of convergence depends on the poles of
the estimator (eigenvalues of A − LC). Since we plan to use the state estimate as the input to our
controller, we would like the state estimate to converge faster than is desired from our overall
closed-loop system. That is, we would like the observer poles to be faster than the controller poles.
A common guideline is to make the estimator poles 4-10 times faster than the slowest controller
pole. Making the estimator poles too fast can be problematic if the measurement is corrupted by
noise or there are errors in the sensor measurement in general.
Based on this logic, we must first find the controller poles. To do this, copy the following code to
the end of your m-file. If you employed the updated Q matrix, you should see the following poles
in the MATLAB command window.
poles = eig(Ac)
poles =
-8.4910 + 7.9283i
-8.4910 - 7.9283i
-4.7592 + 0.8309i
-4.7592 - 0.8309i
The slowest poles have real part equal to -4.7592; therefore, we will place our estimator poles around
-40. Since the closed-loop estimator dynamics are determined by a matrix (A − LC) that has a
similar form to the matrix that determines the dynamics of the state-feedback system (A − BK),
we can use the same commands for finding the estimator gain L as we use for finding the state-
feedback gain K. Specifically, since taking the transpose of A − LC leaves the eigenvalues
unchanged and produces the expression A' − C'L', which exactly matches the form of A − BK, we can use
the acker or place commands. Recalling that the place command cannot place poles of
multiplicity greater than one, we will place the observer poles as follows. Add the following
commands to your m-file to calculate the L matrix and generate the output shown below.
P = [-40 -41 -42 -43];
L = place(A',C',P)'
L=
1.0e+03 *
0.0826 -0.0010
1.6992 -0.0402
-0.0014 0.0832
-0.0762 1.7604
We are using both outputs (the angle of the pendulum and the position of the cart) to design the observer.
Now we will combine our state-feedback controller from before with our state estimator to get the full compensator. The resulting
closed-loop system is described by the following matrix equations.
(8)
(9)
The closed-loop system described above can be implemented in MATLAB by adding the following commands to the end of your
m-file. After running the m-file the step response shown will be generated.
Ace = [(A-B*K) (B*K);
zeros(size(A)) (A-L*C)];
Bce = [B*Nbar;
zeros(size(B))];
Cce = [Cc zeros(size(Cc))];
Dce = [0;0];
states = {'x' 'x_dot' 'phi' 'phi_dot' 'e1' 'e2' 'e3' 'e4'};
inputs = {'r'};
outputs = {'x'; 'phi'};
sys_est_cl = ss(Ace,Bce,Cce,Dce,'statename',states,'inputname',inputs,'outputname',outputs);
t = 0:0.01:5;
r = 0.2*ones(size(t));
[y,t,x]=lsim(sys_est_cl,r,t);
[AX,H1,H2] = plotyy(t,y(:,1),t,y(:,2),'plot');
set(get(AX(1),'Ylabel'),'String','cart position (m)')
set(get(AX(2),'Ylabel'),'String','pendulum angle (radians)')
title('Step Response with Observer-Based State-Feedback Control')
This response is almost identical to the response achieved when it was assumed that we had full access to the state variables. This
is because the observer poles are fast, and because the model we assumed for the observer is identical to the model of the actual
plant (including the same initial conditions). Therefore, all of the design requirements have been met with the minimal control
effort expended. No further iteration is needed.
Result:
This example demonstrates Pendulum & Cart Control System.
7EE4-22 ADVANCED CONTROL SYSTEM LAB
Practical No:-10
Objective:- The fourth order, nonlinear and unstable real-time control
system (Pendulum & Cart Control System).
2. Define the condition for a negatively damped second-order system.
To run the experiment successfully we do not need to change the default settings. Parameter tuning
is only required if the system responds incorrectly. To create the real-time code for the given model, build the
model. Next, click Simulation/Connect to target and then the Start real-time code option to start the
experiment.
Crane mode
The control problem is the following: steer the cart from an initial position on the rail
to a given final position while damping the oscillations of the pendulum which occur due to
the cart motion.
The FIS Editor controller window is opened (Fig. 11.7). From this window you can
go further to view and modify the fuzzy crane controller. We recommend using the
editor. However, you can also view or design a fuzzy controller without the help of the editor;
this alternative technique is presented in this section. Hence you can write:
>> a = readfis('controller');
This stores the controller FIS matrix in the MATLAB workspace under the variable a:
name: 'controller'
type: 'mamdani'
andMethod: 'min'
orMethod: 'max'
defuzzMethod: 'centroid'
impMethod: 'min'
aggMethod: 'max'
input: [1x4 struct]
output: [1x1 struct]
rule: [1x12 struct]
If you write:
>> plotfis(a)
then the crane project is shown (as given in Fig. 11.6). You can see the controller system with
4 inputs, 1 output, and 12 rules. An input variable is incorporated into the project by defining
rules corresponding to that variable; otherwise the variable is left out. As one can notice,
the third input variable, i.e. the pendulum velocity, does not take part in building the fuzzy
rules (see the dashed line in the crane project).
Result: We performed the project on a real-life motion control system.