
Unit 3: Control of Power Electronic Converters [14]

3.1. Control System Basics:
 Control block diagram and control objectives;
 Basic control actions (P, PI, PID, PD); tuning the controllers;
 Small-signal and large-signal models.
3.2. Closed-loop operation of SCR: voltage control operation, generation of firing angle.

3.3. Closed-loop operation of VSC: voltage control operation, current control operation;
decoupled P-Q or vector control schemes, generation of ac duty cycle.

3.4. Closed-loop operation of dc-dc converters: voltage control and current control operation;
generation of duty cycle.

3.5. Computation models and simulation examples.

3.1 Control System Basics

Control block diagram and control objectives

In a basic closed-loop block diagram, the reference (set-point) is compared with the measured
output to form an error signal; the controller converts this error into the actuating input that
drives the converter (plant), and the plant output is measured and fed back to the comparison
point.

Control Objectives:

1. To reduce the error

2. To obtain the desired output
Basic Control Action (P, PI, PID)

This section introduces the basic fundamentals of proportional-integral-derivative (PID) control
theory, and provides a brief overview of control theory and the characteristics of each of the
PID control loops. Several methods for tuning a PID controller are given, along with some
disadvantages and limitations of this type of control.

The PID controller is a feedback mechanism widely used in a variety of applications. The
controller calculates an “error” that is the difference between a measured process variable and
the desired set-point value needed by the application. PID controllers will attempt to minimize
the process error by continually adjusting the inputs. Although this is a powerful tool, the
controllers must be correctly tuned if they are to be effective. Additionally, the limitations of a
PID controller should be recognized in order to ensure that they are not used in applications
that cannot make use of their unique advantages.

A PID controller involves three separate system parameters:

• Proportional (sometimes called the “gain”): determines the reaction to the current error

• Integral: calculates the system reaction based on the sum of recent errors

• Derivative: calculates the rate at which the system error has been changing

The weighted sum of these three values is used to adjust a process by adjusting a control
element, which could be nearly anything within the process. These three summed terms
constitute the controller output, i.e. the manipulated variable that drives the aspect of the
application one is trying to control:

Vm(t) = Pout + Iout + Dout

where

Pout = Kp e(t)                          (proportional term)
Iout = Ki ∫ e(τ) dτ, integrated from 0 to t   (integral term)
Dout = Kd de(t)/dt                      (derivative term)

and e(t) is the error between the set-point and the measured process variable. Thus the PID
algorithm can be rewritten as

Vm(t) = Kp e(t) + Ki ∫ e(τ) dτ + Kd de(t)/dt
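In discrete time this weighted sum can be coded directly. The sketch below is a minimal Python
version; the class name, gains and time step are illustrative choices rather than anything
specified in the text.

```python
# Minimal discrete-time PID routine implementing
# Vm = Kp*e + Ki*integral(e) + Kd*de/dt.  Gains and time step are illustrative.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0       # running integral of the error (Iout memory)
        self.prev_error = 0.0     # previous error, for the derivative estimate

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                   # Iout accumulator
        derivative = (error - self.prev_error) / self.dt   # Dout slope estimate
        self.prev_error = error
        return (self.kp * error                            # Pout
                + self.ki * self.integral                  # Iout
                + self.kd * derivative)                    # Dout

pid = PID(kp=1.0, ki=0.5, kd=0.05, dt=0.01)   # illustrative gain values
print(pid.update(setpoint=1.0, measurement=0.0))
```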


Proportional loop
The purpose of the proportional gain is to create a change to the system’s output that is
directly proportional to the system’s current error value. Stated another way, a gain can be
thought of as an amplifier to the controller, as it only serves to multiply the current error value
by a given gain value. A large gain value will yield a large change in a system’s output for a given
error, and thus gain can be used to amplify the speed with which a controller reacts to a certain
state condition. However, if the gain is too large, the system can become unstable very quickly;
conversely, if the gain value is too small, the controller will have a correspondingly small response
to an error value. This latter condition will result in a less-sensitive controller, which may not
respond correctly to errors or disturbances.

In an ideal state—i.e., free of any disturbances—a purely proportional control system will not
settle at the set-point value, but will retain a steady error that is a function of the proportional
and process gain. However, despite the presence of the steady-state offset, it is common
practice to design control systems wherein the greatest amount of control response is
provided by the controller’s proportional gain. An example of this steady-state error is shown in
Figure 1.

Figure 1: proportional response to step input
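The steady-state offset of a proportional-only loop can be reproduced with a few lines of
simulation. The sketch below assumes a simple first-order plant (gain K, time constant tau) with
illustrative values; the printed offset matches the analytic result setpoint / (1 + Kp*K).

```python
# Proportional-only control of an assumed first-order plant
#   dy/dt = (-y + K*u) / tau
# The printed offset matches the analytic value setpoint / (1 + kp*K).
K, tau, dt, setpoint = 2.0, 1.0, 0.01, 1.0

for kp in (0.5, 2.0, 10.0):
    y = 0.0
    for _ in range(int(20 / dt)):           # simulate 20 s with Euler steps
        u = kp * (setpoint - y)             # P-only control action
        y += dt * (-y + K * u) / tau
    print(f"kp={kp:4.1f}  final y={y:.3f}  steady-state error={setpoint - y:.3f}")
```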

Integral Loop
The value contributed from the integral loop is proportional to both the magnitude and
duration of the error. Summing the recent error values over time (integrating the error) gives
the offset value that should have been previously corrected. This accumulated-error value is
then multiplied by the integral gain (which defines the magnitude of the contribution of the
integral loop) and added to the controller output. When added to the proportional term, the
integral loop accelerates the response of the process towards the set-point value and
eliminates the residual steady-state error of a proportional-only controller. The integral loop is
only responding to the summation of recent errors, however, which will cause the response to
overshoot the set-point value and thus create an error in the opposite direction. Left alone, this
PI controller may eventually settle on the set-point value over time, but there are many
applications—such as stability control systems in aircraft—where rapidly settling upon the set-
point value without oscillation is both desirable and necessary.

Figure 2 shows the effects of adding an integral loop to a proportional controller. Note how
changing the value of the integral gain affects the response of the system. Although a PI
controller will not settle with a steady-state error (as a proportional-only controller will), the
amount of overshoot is directly related to the value of the integral gain. Notice in Figure 2 that
the highest value of integral gain gave the fastest response to the step input (as evidenced by
the steep slope of Ki = 2, relative to the other values), but also produced the most oscillation
and took the longest time to resolve to the set-point value. By contrast, the red line of Ki
= 0.5 has the slowest response time of the three options, but notice that it resolves to the set-
point value with no noticeable overshoot. Which response is “best” for a given application will
of course depend on the application in question, but it is common practice to limit the number
of response oscillations while still maintaining an acceptable response time. This is also done
via the derivative gain, as discussed below.

Figure 2: controller response to step input with proportional and derivative values held
constant
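Continuing the first-order plant assumed in the previous sketch, the snippet below shows the
integral term removing the proportional-only offset, with overshoot growing as Ki increases; all
values are illustrative.

```python
# Adding an integral term to the proportional controller of the previous
# sketch (same assumed first-order plant).  ki = 0 reproduces the P-only
# offset; increasing ki removes the offset but adds overshoot.
K, tau, dt, kp, setpoint = 2.0, 1.0, 0.01, 0.5, 1.0

for ki in (0.0, 0.5, 2.0):
    y = integral = peak = 0.0
    for _ in range(int(30 / dt)):
        e = setpoint - y
        integral += e * dt
        u = kp * e + ki * integral          # PI control action
        y += dt * (-y + K * u) / tau
        peak = max(peak, y)
    print(f"ki={ki:3.1f}  final y={y:.3f}  overshoot={max(peak - setpoint, 0):.3f}")
```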

Derivative loop
With PI control, the system is able to settle to its set-point value through the use of a steady-
state proportional response and the summation of past errors. But how fast have those
previous errors been changing with respect to time? In Figure 2, the rate at which the errors
change is relatively constant—especially with Ki equal to 2. To increase response time and
minimize errors, a term is needed to calculate the rate at which the error term is changing. This
is done through a derivative loop, sometimes called a “rate loop.”

The derivative loop calculates the rate at which the error is changing by calculating the slope of
the error. In essence, this is done by calculating the change in error (rise) over time (run) —the
first derivative of the error function. This value is multiplied by a derivative gain Kd to obtain
the derivative contribution to the system. As with the proportional and integral loops, the
derivative gain can have a great impact on the system’s response (Fig. 3). The derivative loop
controls the rate at which the controller’s response overshoots a given input value—produced
by the proportional and integral loops— and is most noticeable when the process variable is
close to the set-point. However, derivative loops amplify noise and are thus very sensitive to
noise in the error term. For this reason, it is best to use attenuation filters with derivative loops,
lest the presence of noise combined with a high value of derivative gain drive the system to
instability. Note in Figure 3 that the behavior of the derivative term relative to its gain is the
direct opposite of the integral term’s response to an identical gain value.

Figure 3: controller response to step input with proportional and integral values held constant
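To make the damping effect of the derivative term visible, the sketch below applies a PID loop to
an assumed, lightly damped second-order plant (a mass-spring-damper invented for illustration)
and sweeps Kd with Kp and Ki held constant, in the spirit of Figure 3.

```python
# PID control of an assumed, lightly damped second-order plant
#   y'' + 0.5*y' + y = u
# sweeping Kd with Kp and Ki held constant.
dt, setpoint, kp, ki = 0.001, 1.0, 4.0, 2.0

for kd in (0.0, 0.5, 1.5):
    y = v = integral = 0.0
    prev_e, peak = setpoint, 0.0
    for _ in range(int(40 / dt)):
        e = setpoint - y
        integral += e * dt
        derivative = (e - prev_e) / dt
        prev_e = e
        u = kp * e + ki * integral + kd * derivative
        v += dt * (u - 0.5 * v - y)         # plant acceleration
        y += dt * v
        peak = max(peak, y)
    print(f"kd={kd:3.1f}  overshoot={max(peak - setpoint, 0):.3f}")
```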

Tuning of PID controller


Tuning a PID controller involves trading off four response characteristics (a sketch for
measuring them from a recorded step response follows this list):
• Rise time: the amount of time necessary for the system’s initial output to rise past 90% of its
desired value

• Overshoot: the amount by which the initial response exceeds the set-point value

• Resolving time: the amount of time required by the system to converge to the set-point
value.

• Steady-state error: the measured difference between the system output and the set-point
value
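These four quantities can be measured directly from a recorded step response. The helper below
is a sketch: the 90% rise threshold follows the definition above, while the 2% band used for the
resolving (settling) time is an assumed convention.

```python
import numpy as np

# Measure rise time, overshoot, resolving time and steady-state error from a
# recorded step response y(t).  The 2% settling band is an assumed convention.
def step_metrics(t, y, setpoint, band=0.02):
    t, y = np.asarray(t, float), np.asarray(y, float)
    above = np.nonzero(y >= 0.9 * setpoint)[0]
    rise_time = t[above[0]] if above.size else np.inf          # first 90% crossing
    overshoot = max(y.max() - setpoint, 0.0)
    outside = np.nonzero(np.abs(y - setpoint) > band * abs(setpoint))[0]
    resolving_time = t[outside[-1]] if outside.size else t[0]  # last exit from band
    ss_error = setpoint - y[-1]
    return rise_time, overshoot, resolving_time, ss_error

# usage: pass arrays of time stamps and logged outputs, e.g. from a data logger
# print(step_metrics(t, y, setpoint=1.0))
```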

The goal of a PID controller is to take an input value and maintain it at a given set-point over
time. But if the values for the three loops of a PID controller are chosen incorrectly, the system
will become unstable through any one of a number of failure modes. Typically, these involve an
output that diverges—with or without oscillation—and is limited by the physical characteristics
of the control mechanisms, including actuators breaking, sensors and encoders burning out,
etc. The process of tuning a controller involves adjusting its control parameters—proportional
band, integral gain and derivative gain—in response to a given input until the desired response
is attained. This desired response is almost entirely application-driven. For instance, a controller
must not allow any overshoot or oscillation if such things would create a hazardous condition
within the application (and would yield response graphs similar to the red line in Figure 2).
Other applications are inherently non-linear, rendering parameters that are ideal at full-load
and maximum-RPM conditions undesirable when starting from zero-load conditions.
Methods of Tuning a PID controller

There are, generally speaking, three main methods of tuning a PID controller (Table 1).

1. Manual tuning. Manual tuning is best used when a system must remain online during the
tuning process. The four-step process is as follows:
• Set Ki and Kd to zero

• Increase Kp until the loop output begins to oscillate

• Reduce Kp to one-half of this value to obtain a quarter-wave decay

• Increase Ki to adjust the behavior of the offset so that the system will resolve in an acceptable
amount of time (how much resolving time is acceptable will be governed by the process in
question)

Note that increasing the integral gain by too great an amount will cause system instability
(Table 2). The derivative gain should then be adjusted until the system resolves to its set-point
value with acceptable alacrity after experiencing a load disturbance. This is simulated with a
step doublet or “stick rap”—a step input from 0 to one, followed by a step input from one to
0—or with the sinusoidal or ramp input equivalents. Note that a fast PID loop will usually
require a slight overshoot to resolve to the set-point more quickly. But if the system cannot
accept an overshoot, an over-damped system will be required. In these instances the Kp value
will be less than half of the value causing oscillation.

2. Ziegler-Nichols tuning. The Ziegler-Nichols tuning method is a very powerful way to resolve a
system to its set-point value while circumventing a great deal of the mathematical calculations
required to find an initial estimation of the PID values. This is especially useful when the system
is unknown or when creating state matrices for the system is impractical or impossible. As with
manual tuning, with Ziegler-Nichols tuning the integral and derivative gain values are first set to
zero. The proportional gain is then increased from zero until the system reaches an oscillatory
state, as above. This proportional gain value should be marked Ku, or ultimate gain. The
system’s oscillatory period at this gain value should also be marked Tu, or ultimate period.
These two ultimate values are then used to set the proportional, integral and derivative gain
values (Table 3; Ref. 4). This method permits some fluctuation in the controller response as long
as each successive oscillation peak is no more than one-fourth the amplitude of the previous
peak (Ref. 5)—the so-called “quarter-wave decay.” Applications requiring less fluctuation or
a faster resolving time will require further tuning.
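Table 3 is not reproduced in this text; the sketch below codes the widely published closed-loop
(ultimate-gain) Ziegler-Nichols rules, which take only Ku and Tu as inputs.

```python
# Closed-loop (ultimate-gain) Ziegler-Nichols rules expressed as code.
# Gains are returned in parallel (non-interacting) form: (Kp, Ki, Kd).
def zn_ultimate(ku, tu, kind="PID"):
    rules = {
        "P":   (0.50 * ku, 0.0,            0.0),
        "PI":  (0.45 * ku, 0.54 * ku / tu, 0.0),
        "PID": (0.60 * ku, 1.20 * ku / tu, 0.075 * ku * tu),
    }
    return rules[kind]

print(zn_ultimate(ku=2.4, tu=1.8))   # Ku and Tu here are example values
```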

A second Ziegler-Nichols tuning method is used for plant models with step responses
resembling an S-shaped curve (or “reaction curve”), with no overshoot. This is ideally suited for
processes that cannot tolerate overshoot or oscillations. A typical reaction curve is shown in
Figure 4. The delay time L and time constant T are found by drawing a tangent line to the
reaction curve through its inflection point and finding the intersection points with the time axis
and the set point line. Once these intercepts are determined, the values from Table 3 are
recalculated (Table 4; Ref. 6). The parameters in Table 4 will give a system response with an
overshoot of approximately 25%, and the system will resolve to the set-point value within
polynomial time (Ref. 7).
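Table 4 is likewise not reproduced here. The sketch below uses the commonly published open-loop
(reaction-curve) Ziegler-Nichols rules, which compute the gains from the delay time L and time
constant T read off the tangent construction of Figure 4.

```python
# Open-loop (reaction-curve) Ziegler-Nichols rules based on delay time L and
# time constant T.  Returns parallel-form gains (Kp, Ki, Kd).
def zn_reaction_curve(L, T, kind="PID"):
    if kind == "P":
        kp, ti, td = T / L, float("inf"), 0.0
    elif kind == "PI":
        kp, ti, td = 0.9 * T / L, L / 0.3, 0.0
    else:  # PID
        kp, ti, td = 1.2 * T / L, 2.0 * L, 0.5 * L
    ki = 0.0 if ti == float("inf") else kp / ti
    kd = kp * td
    return kp, ki, kd

print(zn_reaction_curve(L=0.5, T=3.0))   # L and T here are example values
```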
3. Software tuning. As it has with most other aspects of life, technology has rendered a great
many control tuning methods irrelevant. A very large number of modern facilities
forego tuning their controllers using the manual calculation methods mentioned previously.
Rather, tuning and optimization software are used to ensure that optimum results are obtained
in short order. Of course, for some systems— such as those with response times measured in
minutes or hours—mathematical tuning is still recommended, as tuning by pure trial-and-error
can literally take hours or days. MATLAB and Simulink are among the most common tools used to
design and tune control systems, and they have found widespread use in a variety of industries.
Other software packages such as PIDeasy, AdvaControl Tuner, IMCTune and others can often
produce optimal responses from either online or offline inputs, and are plug-and-play ready—
often with no need of subsequent controller refinements. Many of the features of PID tuning
software are also designed directly into the hardware of the controller, most often from the
“Big Four” of control vendors—ABB, Honeywell, Foxboro and Yokogawa. Because of the
number of variables involved in software tuning, it is recommended that it be done on a case-
by-case basis.

Limitations of PID Control


Although a PID controller provides an optimum solution to many processes, it is not suitable for
all control problems that may be encountered. This is especially true for processes with ramp-
style changes in set-point values or slow disturbances (Ref. 8). PID controllers can also perform
poorly when the gain values must be greatly reduced in order to prevent a constant
oscillation—or “hunting”—about the set-point value. Furthermore, PID controllers are linear
and so care must be taken when using them with inherently nonlinear systems—i.e., systems
that do not satisfy the superposition principle or systems with an output that is not
proportional to its input, such as air handling and mixing applications.

For nonlinear systems, gain scheduling—in which a family of linear controllers is used and the
active controller (or gain set) is selected according to scheduling variables that indicate the
current operating region of the system—is most often used. Which scheduling variables are used will
depend on the system in question.

For example, a flight control system on an aircraft might use altitude and true airspeed as its
scheduling variables, whereas an air handling application might use mass flow rate and impeller
RPM. Nonlinear systems might be controllable with linear control systems if enough data are
available and a sufficiently high sampling rate is used. Oftentimes, however, the use of gain scheduling may
be more cost-effective. Feed-forward control is found in a number of applications, including
perceptron (Ed.’s note: a binary classifier that maps its input x—a real-valued vector—to an
output value f (x)—a single binary value—across the matrix) and long distance telephony (L-
carrier transmission system of the 1970s).
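A gain schedule can be as simple as a lookup keyed on the scheduling variable. In the sketch
below the scheduling variable (a normalized mass-flow fraction), the region boundaries and the
gain sets are all hypothetical.

```python
# Gain scheduling: PID gain sets switched by a scheduling variable.  The
# variable, region limits and gain sets are invented for illustration.
SCHEDULE = [
    (0.3, (2.0, 0.50, 0.05)),   # low-flow region:  (Kp, Ki, Kd)
    (0.7, (1.2, 0.30, 0.02)),   # mid-flow region
    (1.0, (0.8, 0.20, 0.01)),   # high-flow region
]

def scheduled_gains(flow_fraction):
    """Return the (Kp, Ki, Kd) set for the region containing flow_fraction."""
    for upper_limit, gains in SCHEDULE:
        if flow_fraction <= upper_limit:
            return gains
    return SCHEDULE[-1][1]      # clamp anything above the last region

print(scheduled_gains(0.55))    # -> gains for the mid-flow region
```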

Feed forward control can also be used to improve the performance of a PID controller if certain
qualities about the system are known beforehand and can be fed forward into the PID
controller. This feed-forward value can greatly impact the performance of the controller; best
of all, because feed forward input is not affected by the feedback of the system, the feed-
forward value can never cause the control system to oscillate, thus improving controller
response and overall system stability.
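Structurally, feed-forward just adds a term computed from the set-point (or a measured
disturbance) to the feedback output, so it cannot feed oscillation back into the loop. The sketch
below assumes a known static plant gain; the names and values are illustrative.

```python
# Feed-forward added to a feedback term: the feed-forward path uses only the
# set-point and an assumed steady-state plant gain, so it does not react to
# the feedback loop at all.
PLANT_DC_GAIN = 2.0             # assumed, known plant characteristic

def feedforward(setpoint):
    # invert the known steady-state gain so feedback only corrects residual errors
    return setpoint / PLANT_DC_GAIN

def control_output(setpoint, measurement, kp=1.0):
    feedback = kp * (setpoint - measurement)   # feedback part (P-only for brevity)
    return feedforward(setpoint) + feedback

print(control_output(setpoint=1.0, measurement=0.8))
```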

Because the derivative loop is susceptible to process noise, it is also important to employ low-
pass filters, if needed. However, the use of low-pass filters with derivative control can result in
one filter negating the effect of another. For this reason, proper instrumentation or the use of a
median filter may be a better option for improving both filter efficiency and overall
performance of the controller (Ref. 9). Additionally, the derivative loop can be turned off
completely (Kd = 0), thereby using the PID controller as a PI controller. Note that this may
require retuning the proportional and integral loops using one of the tuning methods discussed
above.
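One common way to attenuate derivative noise is to pass the raw derivative through a first-order
low-pass filter. The sketch below is one such implementation; the filter speed N is an assumed,
typical value rather than anything specified here.

```python
# Derivative term passed through a first-order low-pass filter to suppress
# measurement noise.  The filter "speed" N (rad/s) is an assumed typical value.
class FilteredDerivative:
    def __init__(self, kd, dt, n=10.0):
        self.kd, self.dt, self.n = kd, dt, n
        self.filtered = 0.0       # low-pass filtered derivative
        self.prev_error = 0.0

    def update(self, error):
        raw = (error - self.prev_error) / self.dt     # raw (noisy) derivative
        self.prev_error = error
        alpha = self.n * self.dt / (1.0 + self.n * self.dt)
        self.filtered += alpha * (raw - self.filtered)
        return self.kd * self.filtered

d = FilteredDerivative(kd=0.1, dt=0.01)
print(d.update(0.50), d.update(0.45))   # illustrative error samples
```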

Large‐Signal vs. Small‐Signal Analysis


Large‐Signal analysis:
1. DC analysis: finding the operating point and bias conditions (voltages and currents at
different nodes) of the circuit.
2. Use the current-voltage equations that govern the behavior of the device (e.g.
IC = IS(exp(VBE/VT) − 1) for a BJT, or ID = IS(exp(VD/VT) − 1) for a diode), together with
KVL/KCL, to find the bias points (a numeric sketch follows this list).
3. Large-signal values are used to find the “small-signal” parameters. NOTE: the circuit may
contain non-linear devices (e.g. diodes, BJTs); do not “linearize” these devices here, i.e. do
not think of them as dependent sources or resistors.
4. Voltages applied here are usually several volts to several tens of volts. They are “large”
values.
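As a numeric illustration of steps 1 and 2, the sketch below finds the DC bias point of a series
source-resistor-diode loop by combining KVL with the exponential device law and solving with a
damped Newton iteration; all component values are assumptions.

```python
import math

# DC bias point of a series Vs-R-diode loop.  KVL:  Vs = R*Id + Vd  with
# Id = Is*(exp(Vd/VT) - 1).  Solved by damped Newton iteration on Vd.
VS, R, IS, VT = 5.0, 1e3, 1e-14, 0.025       # assumed component values

vd = 0.6                                     # initial guess for the diode voltage
for _ in range(100):
    i_d = IS * (math.exp(vd / VT) - 1.0)
    residual = VS - R * i_d - vd             # KVL error at this guess
    slope = -R * IS * math.exp(vd / VT) / VT - 1.0
    step = -residual / slope
    vd += max(min(step, 0.1), -0.1)          # damp the step to avoid overflow

i_d = IS * (math.exp(vd / VT) - 1.0)
print(f"Vd = {vd:.3f} V, Id = {i_d * 1e3:.3f} mA")   # the large-signal bias point
```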

Small‐Signal analysis:
1. AC analysis: perturb the circuit around its DC bias condition, i.e. add a small AC source
that slightly increases and decreases the bias point.
2. “Linearize” the device: replace the non-linear devices with linear equivalents. Why can you
treat non‐linear devices as linear when small‐signal values are applied? Because if you
take the instantaneous slope of the IV curve at a particular DC point and zoom into it,
for values very close to this DC point, the IV curve looks quite linear.
3. Construct the small‐signal model using values for the parameters that you found in Step
3 of Large‐Signal Analysis.
4. Use this model to find things like gain, input and output resistances.
Large-signal and small-signal analysis of BJTs
Large-signal analysis is DC analysis. For a BJT biased in the forward-active region, the DC
equations are

IC = IS exp(VBE/VT),   IB = IC / β,   IE = IC + IB

Small signal analysis of BJT

Draw the Ic vs. Vce curves for three different Vbe values and pick one point as the DC operating
point. Show how an incremental change in Vbe (i.e. Vbe becoming Vbe + vbe) changes Ic to Ic + ic,
and how this is modeled as a voltage-dependent current source gm·vbe. Similarly, an incremental
change in Vce produces a small change in Ic, which is modeled as an output resistance. This, in
essence, is how the small-signal model is derived.
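A short numeric sketch of this step: the small-signal parameters are computed from an assumed DC
bias point and then used for a common-emitter gain estimate. The bias current, beta, Early
voltage and collector resistor are illustrative values.

```python
# Small-signal BJT parameters from an assumed DC bias point, then a
# common-emitter voltage-gain estimate.  All numbers are illustrative.
IC, VT, BETA, VA, RC = 1e-3, 0.025, 100.0, 80.0, 5e3

gm   = IC / VT                         # transconductance: ic = gm * vbe
r_pi = BETA / gm                       # base-emitter small-signal resistance
r_o  = VA / IC                         # output resistance (Early effect)
av   = -gm * (r_o * RC) / (r_o + RC)   # common-emitter gain, -gm*(ro || Rc)

print(f"gm = {gm*1e3:.0f} mA/V, r_pi = {r_pi:.0f} ohm, "
      f"r_o = {r_o/1e3:.0f} kohm, Av = {av:.0f}")
```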
References
1. Astrom, K.J. and T.H. Hagglund. “New Tuning Methods for PID Controllers,” Proceedings from
the 3rd European Control Conference, 1995.
2. Ang, K.H., G.C.Y. Chong and Y. Li. “PID Control System Analysis, Design and Technology,” IEEE
Transactions on Control Systems Technology 13 (4), 2005, pp. 559–576.
3. Li, Feng et al. “PIDeasy and Automated Generation of Optimal PID Controllers,” Proceedings
from the 3rd Asia-Pacific Conference of Control and Measurement, Dunhuang, P.R. China, 1998,
pp. 29–33.
4. Co, Tomas B. “Ziegler Nichols Method,” Michigan Technological University Department of
Chemical Engineering Website, URL: http://www.chem.mtu.edu/~tbco/cm416/zn.html (cited
February 3, 2010).
5. Van Doren, Vance J. “Loop Tuning Fundamentals,” Control Engineering Website, URL:
http://www.controleng.com/article/268148-Loop_Tuning_Fundamentals.php (cited February
3, 2010).
6. Zhong, J. “PID Controller Tuning: A Short Tutorial” (class lesson), Purdue University, 2006.
7. Ibid.
8. Sung, S. W. and In-Beum Lee. “Limitations and Countermeasures of PID Controllers,”
Department of Chemical Engineering, Pohang University of Science and Technology, Pohang,
Korea, 1996.
9. Ang, K.H., G.C.Y. Chong and Y. Li. “PID Control System Analysis, Design and Technology,” IEEE
Transactions on Control Systems Technology 13 (4), 2005, URL:
http://eprints.gla.ac.uk/3817/1/IEEE3.pdf [cited 2007].
