

Theme 1

Modeling of Uncertain Systems


________________________________________________________

1.1 Control-System Representations


1.2 Unstructured Uncertainties
1.3 Parametric Uncertainty
1.4 Linear Fractional Transformations
1.5 Structured Uncertainties
________________________________________________________

1.1 Control-System Representations


Robustness is of crucial importance in control-system design because real
engineering systems are vulnerable to external disturbance and measurement
noise and there are always differences between mathematical models used for
design and the actual system. Typically, a control engineer is required to design
a controller that will stabilize a plant, if it is not stable originally, and satisfy
certain performance levels in the presence of disturbance signals, noise
interference, unmodeled plant dynamics and plant-parameter variations.
Though always appreciated, the need for and the importance of robustness in
control-system design have been brought into the limelight particularly during
the last two decades. In classical single-input, single-output control,
robustness is achieved by ensuring good gain and phase margins.
Designing for good stability margins usually also results in good, well-damped
time responses, i.e. good performance. When multivariable design techniques
were first developed in the 1960s, the emphasis was placed on achieving good
performance, and not on robustness. These multivariable techniques were based
on linear quadratic performance criteria and Gaussian disturbances, and proved

to be successful in many aerospace applications where accurate mathematical
models can be obtained, and descriptions for external disturbances/noise based
on white noise are considered appropriate. However, application of such
methods, commonly referred to as the linear quadratic Gaussian (LQG)
methods, to other industrial problems made apparent the poor robustness
properties exhibited by LQG controllers. This led to a substantial research effort
to develop a theory that could explicitly address the robustness issue in
feedback design. The pioneering work in the development of the forthcoming
theory, now known as the $H_\infty$ optimal control theory, was conducted in the
early 1980s. In the $H_\infty$ approach, the designer specifies from the outset a model
of system uncertainty, such as additive perturbation and/or output disturbance,
that is most suited to the problem at hand. A constrained optimization is then
performed to maximize the robust stability of the closed-loop system to the type
of uncertainty chosen, the constraint being the internal stability of the feedback
system. In most cases, it is sufficient to seek a feasible controller such
that the closed-loop system achieves a certain level of robust stability. Performance
objectives can also be included in the optimization cost function. Elegant
solution formulas have been developed and are readily available in software
packages such as MATLAB®.
A control system is an interconnection of components that performs certain
tasks and yields a desired response, i.e. generates a desired signal (the output)
when it is driven by a manipulating signal (the input). A control system is a
causal, dynamic system, i.e. the output depends not only on the present input but
also on inputs at previous times.
In general, there are two categories of control systems: open-loop
systems and closed-loop systems. An open-loop system uses a controller or
control actuator to obtain the desired response. In an open-loop system, the
output has no effect on the input. In contrast, a closed-loop control system uses
sensors to measure the actual output and adjusts the input
in order to achieve the desired output. The measurement of the output is called the
feedback signal, and a closed-loop system is also called a feedback system. It is
known that only feedback configurations are able to achieve robustness of a
control system.
Due to the increasing complexity of physical systems under control and
rising demands on system properties, most industrial control systems are no
longer single-input, single-output (SISO) but multi-input, multi-output
(MIMO) systems with significant interaction (coupling) between their
channels. The number of (state) variables in a system may be very large as
well. These systems are called multivariable systems.
In order to analyze and design a control system, it is advantageous if a
mathematical representation of the system's input–output relationship (a model)
is available. In either open-loop or closed-loop systems, the system dynamics is
usually governed by a set of differential equations.
these differential equations are linear ordinary differential equations. By
introducing appropriate state variables and simple manipulations, a linear, time-
invariant, continuous-time control system can be described by the following
model:
x (t )  A x(t )  B u (t ) ,
y (t )  C x(t )  D u (t ) , (1.1)

where x(t )   n is the state vector, u (t )   m the input (control) vector, and
y (t )   p the output (measurement) vector.
With the assumption of zero initial condition of the state variables and using
Laplace transform, a transfer function matrix corresponding to the system in
(1.1) can be derived as
G ( s )  C ( sI n  A) 1 B  D . (1.2)

It should be noted that the $H_\infty$ optimization approach is a frequency-domain
method, though it utilizes a time-domain description such as (1.1) to exploit
advantages in numerical computation and to deal with multivariable systems.
The system given in (1.1) is assumed to be minimal, i.e. completely
controllable and completely observable, unless stated otherwise.
In the case of discrete-time systems, the model is similarly given by

$$x(k+1) = A\,x(k) + B\,u(k),$$
$$y(k) = C\,x(k) + D\,u(k), \qquad (1.3)$$

with the corresponding transfer function matrix

$$G(z) = C(zI_n - A)^{-1}B + D. \qquad (1.4)$$
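
As a quick illustration, the sketch below (a hedged MATLAB example using the Control System Toolbox functions ss, tf and c2d, with arbitrary example matrices and sampling period) builds a continuous-time model of the form (1.1), obtains its transfer function matrix (1.2), and discretizes it into a model of the form (1.3)–(1.4).

    % Minimal sketch; the numerical values of A, B, C, D and Ts are arbitrary examples.
    A = [0 1; -2 -3];            % state matrix
    B = [0; 1];                  % input matrix
    C = [1 0];                   % output matrix
    D = 0;                       % direct feedthrough
    sysc = ss(A, B, C, D);       % continuous-time state-space model, cf. (1.1)
    Gc   = tf(sysc);             % G(s) = C(sI - A)^(-1)B + D, cf. (1.2)
    Ts   = 0.1;                  % assumed sampling period
    sysd = c2d(sysc, Ts, 'zoh'); % discrete-time model, cf. (1.3)
    Gd   = tf(sysd);             % G(z) = C(zI - A)^(-1)B + D, cf. (1.4)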

An essential issue in control-systems design is stability. An unstable
system is of no practical value. This is because any control system is vulnerable
to disturbances and noise in a real working environment, and the effect of
these signals would adversely corrupt the expected, normal system output of an
unstable system. Feedback control techniques may reduce the influence
of uncertainties and achieve desirable performance. However, an
inadequate feedback controller may lead to an unstable closed-loop system
even though the original open-loop system is stable.
When a dynamic system is described only by its input/output relationship,
such as a transfer function (matrix), the system is stable if it generates bounded
outputs for any bounded input. This is called bounded-input, bounded-output
(BIBO) stability. For a linear, time-invariant system modeled by a
transfer function matrix ($G(s)$ in (1.2)), BIBO stability is guaranteed if and
only if all the poles of $G(s)$ are in the open left-half complex plane, i.e. have
negative real parts.
When a system is governed by a state-space model such as (1.1), a stability
concept called asymptotic stability can be defined. A system is asymptotically
stable if, for an identically zero input, the system state converges to zero
from any initial state. For a linear, time-invariant system described by a model
of the form (1.1), it is asymptotically stable if and only if all the eigenvalues of the
state matrix $A$ are in the open left-half complex plane, i.e. have negative real parts.
In general, asymptotic stability of a system implies that the system is
also BIBO stable, but not vice versa. However, for a system of the form (1.1), if
$(A, B, C, D)$ is a minimal realization, BIBO stability of the system implies
that the system is asymptotically stable.
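
These stability tests are straightforward to check numerically. The following hedged sketch (reusing the arbitrary example matrices above) verifies asymptotic stability from the eigenvalues of A and BIBO stability from the poles of G(s); eig, pole and isstable are standard Control System Toolbox calls.

    sys = ss([0 1; -2 -3], [0; 1], [1 0], 0);  % example (minimal) realization
    lambda = eig(sys.A);                       % eigenvalues of the state matrix A
    asympt_stable = all(real(lambda) < 0);     % asymptotic stability test
    p = pole(tf(sys));                         % poles of G(s)
    bibo_stable = all(real(p) < 0);            % BIBO stability test
    disp(isstable(sys))                        % built-in check
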
The above stability notions are defined for open-loop systems as well as closed-loop
systems. For a closed-loop (interconnected, feedback) system, it is
more interesting and intuitive to look at asymptotic stability from another
point of view; this is called internal stability. An interconnected system
is internally stable if the subsystems of all input–output pairs are asymptotically
stable (or the corresponding transfer function matrices are BIBO stable when
the state-space models are minimal). Internal stability is equivalent to
asymptotic stability in an interconnected, feedback system, but it reveals more
explicitly the relationship between the original, open-loop system and the
controller that influences the stability of the whole system.

Fig. 1.1 An interconnected system of G and K

For the system given in Fig. 1.1, there are two inputs r and d and two
outputs y and u. The transfer functions from the inputs to the outputs,
respectively, are
$$T_{yr} = GK(I + GK)^{-1},$$
$$T_{yd} = G(I + KG)^{-1},$$
$$T_{ur} = K(I + GK)^{-1},$$
$$T_{ud} = -KG(I + KG)^{-1}. \qquad (1.5)$$

Hence, the system is internally stable if and only if all the transfer functions
in (1.5) are BIBO stable, or the transfer function matrix

$$M = \begin{bmatrix} GK(I + GK)^{-1} & G(I + KG)^{-1} \\ K(I + GK)^{-1} & -KG(I + KG)^{-1} \end{bmatrix}$$

from $[r \;\; d]^T$ to $[y \;\; u]^T$ is BIBO stable.


It can be shown that if there is no unstable pole/zero cancellation between G
and K, then any one of the four transfer functions being BIBO stable would be
enough to guarantee that the whole system is internally stable.
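
A possible numerical check of internal stability along these lines is sketched below; the plant and controller are arbitrary SISO examples (the controller is assumed to be stabilizing), and the four transfer functions of (1.5) are formed with the feedback command.

    s = tf('s');
    G = 1/(s - 1);                  % example unstable plant
    K = 10*(s + 2)/(s + 10);        % example controller (assumed stabilizing)
    Tyr = feedback(G*K, 1);         % GK(I + GK)^(-1)
    Tyd = feedback(G, K);           % G(I + KG)^(-1)
    Tur = feedback(K, G);           % K(I + GK)^(-1)
    Tud = -feedback(K*G, 1);        % -KG(I + KG)^(-1)
    internally_stable = isstable(Tyr) && isstable(Tyd) && ...
                        isstable(Tur) && isstable(Tud);
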
A control system interacts with its environment through command signals,
disturbance signals and noise signals, etc. Tracking error signals and actuator
driving signals are also important in control systems design. For the purpose of
analysis and design, appropriate measures, the norms, must be defined for
describing the “size” of these signals.
Let the linear space $X$ be $F^m$, where $F = \mathbb{R}$ for the field of real numbers,
or $F = \mathbb{C}$ for the field of complex numbers. For $x = [x_1, x_2, \ldots, x_m]^T \in X$, the p-norm of
the vector $x$ is defined by

1-norm: $\ \|x\|_1 = \sum_{i=1}^{m} |x_i|$, for $p = 1$,

p-norm: $\ \|x\|_p = \left( \sum_{i=1}^{m} |x_i|^p \right)^{1/p}$, for $1 < p < \infty$,

∞-norm: $\ \|x\|_\infty = \max_{1 \le i \le m} |x_i|$, for $p = \infty$.

When $p = 2$, $\|x\|_2$ is the familiar Euclidean norm.


When $X$ is a linear space of continuous or piecewise continuous, scalar-valued
time signals $x(t)$, $t \in (-\infty, \infty)$, the p-norm of a signal $x(t)$ is defined by

1-norm: $\ \|x\|_1 = \int_{-\infty}^{\infty} |x(t)|\, dt$, for $p = 1$,

p-norm: $\ \|x\|_p = \left( \int_{-\infty}^{\infty} |x(t)|^p\, dt \right)^{1/p}$, for $1 < p < \infty$,

∞-norm: $\ \|x\|_\infty = \sup_{t} |x(t)|$, for $p = \infty$.

The normed spaces, consisting of signals with finite norm as defined
correspondingly, are called $L_1(-\infty, \infty)$, $L_p(-\infty, \infty)$, and $L_\infty(-\infty, \infty)$, respectively. From a
signal point of view, the 1-norm $\|x\|_1$ of the signal $x(t)$ is the integral of its
absolute value. The square of the 2-norm, $\|x\|_2^2$, is often called the energy of the
signal $x(t)$, since that is what it is when $x(t)$ is the current through a 1 Ω
resistor. The ∞-norm $\|x\|_\infty$ is the amplitude or peak value of the signal, and the
signal is bounded in magnitude if $x(t) \in L_\infty(-\infty, \infty)$.
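
To make these definitions concrete, the short sketch below (arbitrary example data) evaluates vector p-norms with MATLAB's norm function and approximates the 1-norm, the energy and the ∞-norm of a sampled signal numerically with trapz.

    x = [3; -4; 12];              % example vector
    n1   = norm(x, 1);            % 1-norm  = 19
    n2   = norm(x, 2);            % 2-norm  = 13 (Euclidean norm)
    ninf = norm(x, inf);          % inf-norm = 12

    t  = 0:1e-3:10;               % time grid for a sampled signal
    xt = exp(-t).*sin(5*t);       % example signal x(t)
    sig1   = trapz(t, abs(xt));   % approximate 1-norm (integral of |x(t)|)
    energy = trapz(t, xt.^2);     % approximate squared 2-norm ("energy")
    siginf = max(abs(xt));        % approximate inf-norm (peak value)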


System norms are actually the input–output gains of the system.
Suppose that $G$ is a linear and bounded system that maps the input signal
$u(t)$ into the output signal $y(t)$, where $u \in (U, \|\cdot\|_U)$ and $y \in (Y, \|\cdot\|_Y)$. $U$ and $Y$
are the signal spaces, endowed with the norms $\|\cdot\|_U$ and $\|\cdot\|_Y$, respectively.
Then the norm, i.e. the maximum system gain, of $G$ is defined as

$$\|G\| = \sup_{u \neq 0} \frac{\|Gu\|_Y}{\|u\|_U}, \qquad (1.6)$$

or
$$\|G\| = \sup_{\|u\|_U = 1} \|Gu\|_Y = \sup_{\|u\|_U \le 1} \|Gu\|_Y.$$

Obviously, we have
$$\|Gu\|_Y \le \|G\| \cdot \|u\|_U.$$

If $G_1$ and $G_2$ are two linear, bounded and compatible systems, then
$$\|G_1 G_2\| \le \|G_1\| \cdot \|G_2\|.$$

$\|G\|$ is called the induced norm of $G$ with regard to the signal norms $\|\cdot\|_U$
and $\|\cdot\|_Y$.
We are particularly interested in the so-called ∞-norm of a system. For a
linear, time-invariant, stable system $G : L_2^m(-\infty, \infty) \to L_2^p(-\infty, \infty)$, the ∞-norm, or the
induced 2-norm, of $G$ is given by

$$\|G\|_\infty = \sup_{\omega} \|G(j\omega)\|_2, \qquad (1.7)$$

where $\|G(j\omega)\|_2$ is the spectral norm of the $p \times m$ matrix $G(j\omega)$ and $G(s)$ is
the transfer function matrix of $G$. Hence, the ∞-norm of a system describes the
maximum energy gain of the system and is determined by the peak value of the
largest singular value of the frequency-response matrix over the whole
frequency axis. This norm is called the $H_\infty$-norm, since we denote by $H_\infty$ the
linear space of all stable linear systems.
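
In MATLAB the ∞-norm of a stable LTI model can be computed directly; the hedged sketch below uses an arbitrary 2-by-2 example, norm(G, inf) from the Control System Toolbox (hinfnorm from the Robust Control Toolbox is an alternative), and a sigma plot whose peak is the norm.

    s = tf('s');
    G = [1/(s+1), 2/(s+3); 0, 5/(s^2 + 2*s + 5)];  % example stable 2x2 system
    ninf = norm(G, inf);     % H-infinity norm: peak of the largest singular value
    % ninf = hinfnorm(G);    % equivalent, if the Robust Control Toolbox is available
    sigma(G), grid on        % singular-value plot over frequency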

1.2 Unstructured Uncertainties


It is well understood that uncertainties are unavoidable in a real control system.
The uncertainty can be classified into two categories: disturbance signals and
dynamic perturbations. The former includes input and output disturbance (such
as a gust on an aircraft), sensor noise and actuator noise, etc. The latter
represents the discrepancy between the mathematical model and the actual
dynamics of the system in operation. A mathematical model of any real system
is always just an approximation of the true physical reality of the system
dynamics. Typical sources of the discrepancy include unmodeled (usually high-frequency)
dynamics, neglected nonlinearities in the modeling, the use of
deliberately reduced-order models, and system-parameter variations due to
environmental changes and wear-and-tear factors. These modeling errors may
adversely affect the stability and performance of a control system. In the
following sections, we will discuss how dynamic perturbations are usually described so
that they can be accounted for in system robustness analysis and design.
Many dynamic perturbations that may occur in different parts of a system
can, however, be lumped into one single perturbation block $\Delta$, for instance,
some unmodeled, high-frequency dynamics. This uncertainty representation is
referred to as "unstructured" uncertainty. In the case of linear, time-invariant
systems, the block $\Delta$ may be represented by an unknown transfer function
matrix.
The unstructured dynamics uncertainty in a control system can be described
in different ways, as listed in the following, where $G_p(s)$ denotes the
actual, perturbed system dynamics and $G_o(s)$ a nominal model description of
the physical system.
1. Additive perturbation (see Fig. 1.2):
$$G_p(s) = G_o(s) + \Delta(s). \qquad (1.8)$$

2. Inverse additive perturbation (see Fig. 1.3):
$$\left(G_p(s)\right)^{-1} = \left(G_o(s)\right)^{-1} + \Delta(s). \qquad (1.9)$$

3. Input multiplicative perturbation (see Fig. 1.4):
$$G_p(s) = G_o(s)\,[I + \Delta(s)]. \qquad (1.10)$$

4. Output multiplicative perturbation (see Fig. 1.5):
$$G_p(s) = [I + \Delta(s)]\,G_o(s). \qquad (1.11)$$

5. Inverse input multiplicative perturbation (see Fig. 1.6):
$$\left(G_p(s)\right)^{-1} = [I + \Delta(s)]\,\left(G_o(s)\right)^{-1}. \qquad (1.12)$$

6. Inverse output multiplicative perturbation (see Fig. 1.7):
$$\left(G_p(s)\right)^{-1} = \left(G_o(s)\right)^{-1}\,[I + \Delta(s)]. \qquad (1.13)$$

Fig. 1.2 Additive perturbation configuration

Fig. 1.3 Inverse additive perturbation configuration

Fig. 1.4 Input multiplicative perturbation configuration

Fig. 1.5 Output multiplicative perturbation configuration

Fig. 1.6 Inverse input multiplicative perturbation configuration

Fig. 1.7 Inverse output multiplicative perturbation configuration

The additive uncertainty representations give an account of the absolute error
between the actual dynamics and the nominal model, while the multiplicative
representations show relative errors.
It should be noted that a successful robust control-system design depends,
to a certain extent, on an appropriate description of the perturbation
considered, though theoretically most representations are interchangeable.
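
In the Robust Control Toolbox, such unstructured perturbations are commonly represented with ultidyn blocks. The hedged sketch below (arbitrary nominal plant and an assumed scalar weight W) forms additive and input-multiplicative uncertain models corresponding to (1.8) and (1.10).

    s  = tf('s');
    Go = 1/(s^2 + 2*s + 1);            % example nominal plant
    W  = 0.1*(s + 10)/(s + 100);       % assumed frequency weight on the uncertainty
    Delta = ultidyn('Delta', [1 1]);   % unknown stable LTI block with ||Delta||_inf <= 1
    Gp_add = Go + W*Delta;             % additive form, cf. (1.8)
    Gp_mul = Go*(1 + W*Delta);         % input multiplicative form, cf. (1.10)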

Example 1.1 The dynamics of many control systems may include a “slow” part
and a “fast” part, for instance in a dc motor. The actual dynamics of a scalar
plant may be
$$G_p(s) = g_{\mathrm{gain}}\, G_{\mathrm{slow}}(s)\, G_{\mathrm{fast}}(s),$$

where $g_{\mathrm{gain}}$ is constant, and

$$G_{\mathrm{slow}}(s) = \frac{1}{1 + sT}, \qquad G_{\mathrm{fast}}(s) = \frac{1}{1 + \epsilon sT}, \qquad \epsilon \ll 1.$$

In the design, it may be reasonable to concentrate on the slow-response part
while treating the fast-response dynamics as a perturbation. Let $\Delta_a$ and $\Delta_m$
denote the additive and multiplicative perturbations, respectively. It can easily
be worked out that

$$\Delta_a(s) = G_p - g_{\mathrm{gain}} G_{\mathrm{slow}} = g_{\mathrm{gain}} G_{\mathrm{slow}} (G_{\mathrm{fast}} - 1) = g_{\mathrm{gain}} \frac{-\epsilon sT}{(1 + sT)(1 + \epsilon sT)},$$

$$\Delta_m(s) = \frac{G_p - g_{\mathrm{gain}} G_{\mathrm{slow}}}{g_{\mathrm{gain}} G_{\mathrm{slow}}} = G_{\mathrm{fast}} - 1 = \frac{-\epsilon sT}{1 + \epsilon sT}.$$

Fig. 1.8 Absolute and relative errors in Example 1.1

The magnitude Bode plots of $\Delta_a$ and $\Delta_m$ can be seen in Fig. 1.8, where
$g_{\mathrm{gain}}$ is assumed to be 1. The difference between the two perturbation
representations is obvious: though the magnitude of the absolute error may be
small, the relative error can be large in the high-frequency range in comparison
with that of the nominal plant.
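
A plot in the spirit of Fig. 1.8 can be reproduced in a few lines; the sketch below uses assumed example values T = 1 and ε = 0.05, with g_gain = 1 as in the figure.

    T = 1; eps_ = 0.05;          % assumed example values (g_gain = 1)
    s = tf('s');
    Gslow = 1/(1 + s*T);
    Gfast = 1/(1 + eps_*s*T);
    Da = Gslow*Gfast - Gslow;    % additive error Delta_a = Gp - Gslow
    Dm = Gfast - 1;              % multiplicative (relative) error Delta_m
    bodemag(Da, Dm), grid on
    legend('\Delta_a (absolute)', '\Delta_m (relative)')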

1.3 Parametric Uncertainty


The unstructured uncertainty representations discussed in Sect. 1.2 are useful in
describing unmodeled or neglected system dynamics. These complex
uncertainties usually occur in the high-frequency range and may include
unmodeled lags (time delay), parasitic coupling, hysteresis, and other
nonlinearities. However, dynamic perturbations in many industrial control
systems may also be caused by inaccurate descriptions of component
characteristics, wear-and-tear effects on plant components, or shifting of
operating points, etc. Such perturbations may be represented by variations of
certain system parameters over some possible value ranges (complex or real).
They affect the low-frequency performance and are called "parametric
uncertainties".

Example 1.2 A mass–spring–damper system can be described by the following
second-order ordinary differential equation:

$$m \frac{d^2 x(t)}{dt^2} + c \frac{d x(t)}{dt} + k\, x(t) = f(t),$$

where $m$ is the mass, $c$ the damping constant, $k$ the spring stiffness, $x(t)$ the
displacement and $f(t)$ the external force.
For imprecisely known parameter values, the dynamic behavior of such a
system is actually described by

$$(m_o + \delta_m) \frac{d^2 x(t)}{dt^2} + (c_o + \delta_c) \frac{d x(t)}{dt} + (k_o + \delta_k)\, x(t) = f(t), \qquad (1.14)$$

where $m_o$, $c_o$, and $k_o$ denote the nominal parameter values and $\delta_m$, $\delta_c$, and $\delta_k$
possible variations over certain ranges.
By defining the state variables $x_1$ and $x_2$ as the displacement and
its first-order derivative (velocity), the second-order differential equation (1.14)
may be rewritten in a standard state-space form

$$\dot{x}_1 = x_2,$$
$$\dot{x}_2 = -\frac{1}{m_o + \delta_m}\left[(k_o + \delta_k)\, x_1 + (c_o + \delta_c)\, x_2 - f\right],$$
$$y = x_1.$$

Further, the system can be represented by an analog block diagram as in
Fig. 1.9.
Notice that the uncertain gain
$$\frac{1}{m_o + \delta_m} = \frac{1}{m_o}\left(1 + \frac{\delta_m}{m_o}\right)^{-1}$$
can be rearranged as a feedback connection in terms of $\frac{1}{m_o}$ and $\delta_m$. Figure 1.9 can thus be
redrawn as in Fig. 1.10, by pulling out all the uncertain variations.
Fig. 1.9 Analog block diagram of Example 1.2

Fig. 1.10 Structured uncertainties block diagram of Example 1.2

Let $z_1$, $z_2$, and $z_3$ be $\dot{x}_2$, $x_2$, and $x_1$, respectively, considered as another,
fictitious output vector, and let $d_1$, $d_2$, and $d_3$ be the signals coming out of the
perturbation blocks $\delta_m$, $\delta_c$, and $\delta_k$, as shown in Fig. 1.10.
The perturbed system can then be arranged in the following state-space model
and represented as in Fig. 1.11:

$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\dfrac{k_o}{m_o} & -\dfrac{c_o}{m_o} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 & 0 & 0 \\ -\dfrac{1}{m_o} & -\dfrac{1}{m_o} & -\dfrac{1}{m_o} \end{bmatrix} \begin{bmatrix} d_1 \\ d_2 \\ d_3 \end{bmatrix} + \begin{bmatrix} 0 \\ \dfrac{1}{m_o} \end{bmatrix} f,$$

$$\begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} = \begin{bmatrix} -\dfrac{k_o}{m_o} & -\dfrac{c_o}{m_o} \\ 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} -\dfrac{1}{m_o} & -\dfrac{1}{m_o} & -\dfrac{1}{m_o} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} d_1 \\ d_2 \\ d_3 \end{bmatrix} + \begin{bmatrix} \dfrac{1}{m_o} \\ 0 \\ 0 \end{bmatrix} f, \qquad (1.15)$$

$$y = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.$$

Fig. 1.11 Standard configuration of Example 1.2

The state-space model (1.15) describes the augmented, interconnected
system $M$ of Fig. 1.11. The perturbation block $\Delta$ in Fig. 1.11 corresponds to the
parameter variations and is called "parametric uncertainty". The uncertain block
$\Delta$ is not a full matrix but a diagonal one, $\Delta = \mathrm{diag}(\delta_m, \delta_c, \delta_k)$. It has a certain
structure, hence the terminology "structured uncertainty". More general cases will be discussed
in Sect. 1.5.
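
With the Robust Control Toolbox, the same parametric model can be built directly from uncertain real parameters; the hedged sketch below uses arbitrary nominal values with ±10 % ranges, and lftdata then extracts an interconnection M and a normalized block Δ analogous to Fig. 1.11.

    m = ureal('m', 3, 'Percentage', 10);   % assumed nominal mass with +/-10 % variation
    c = ureal('c', 1, 'Percentage', 10);   % assumed nominal damping
    k = ureal('k', 2, 'Percentage', 10);   % assumed nominal stiffness
    A = [0 1; -k/m -c/m];
    B = [0; 1/m];
    C = [1 0];
    D = 0;
    usys = ss(A, B, C, D);                 % uncertain state-space (uss) model
    [M, Delta] = lftdata(usys);            % M-Delta decomposition, cf. Fig. 1.11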

1.4 Linear Fractional Transformations
The block diagram in Fig. 1.11 can be generalized into a standard
configuration that represents how the uncertainty affects the input/output
relationship of the control system under study. This kind of representation first
appeared in circuit analysis back in the 1950s. It was later adopted in
robust control studies for uncertainty modeling. The general framework is
depicted in Fig. 1.12.

Fig. 1.12 Standard M–Δ configuration

The interconnection transfer function matrix M in Fig. 1.12 is partitioned as

$$M = \begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix},$$

where the dimensions of $M_{11}$ conform with those of $\Delta$.
By routine manipulations, it can be derived that

$$z = \left[ M_{22} + M_{21}\Delta (I - M_{11}\Delta)^{-1} M_{12} \right] w$$

if $(I - M_{11}\Delta)$ is invertible. When the inverse exists, we may define

$$F(M, \Delta) = M_{22} + M_{21}\Delta (I - M_{11}\Delta)^{-1} M_{12}.$$

$F(M, \Delta)$ is called a linear fractional transformation (LFT) of $M$ and $\Delta$.
Because the "upper" loop of $M$ is closed by the block $\Delta$, this kind of linear
fractional transformation is also called an upper linear fractional
transformation (ULFT), and is denoted with a subscript u, i.e. $F_u(M, \Delta)$, to show
the way of connection.
Similarly, there is also the lower linear fractional transformation (LLFT), which
is usually used to indicate the incorporation of a controller K into a system.
Such a lower LFT can be depicted as in Fig. 1.13 and is defined by

$$F_l(M, K) = M_{11} + M_{12} K (I - M_{22} K)^{-1} M_{21}.$$

Fig. 1.13 Lower LFT configuration

With the introduction of linear fractional transformations, the unstructured
uncertainty representations discussed in Sect. 1.2 may be uniformly described
by Fig. 1.12, with appropriately defined interconnection matrices M as listed
below.

1. Additive perturbation:
$$M = \begin{bmatrix} 0 & I \\ I & G_o \end{bmatrix}. \qquad (1.16)$$

2. Inverse additive perturbation:
$$M = \begin{bmatrix} -G_o & G_o \\ -G_o & G_o \end{bmatrix}. \qquad (1.17)$$

3. Input multiplicative perturbation:
$$M = \begin{bmatrix} 0 & I \\ G_o & G_o \end{bmatrix}. \qquad (1.18)$$

4. Output multiplicative perturbation:
$$M = \begin{bmatrix} 0 & G_o \\ I & G_o \end{bmatrix}. \qquad (1.19)$$

5. Inverse input multiplicative perturbation:
$$M = \begin{bmatrix} -I & I \\ -G_o & G_o \end{bmatrix}. \qquad (1.20)$$

6. Inverse output multiplicative perturbation:
$$M = \begin{bmatrix} -I & G_o \\ -I & G_o \end{bmatrix}. \qquad (1.21)$$

In the above, it is assumed that $(I - M_{11}\Delta)$ is invertible. The perturbed
system is thus

$$G_p(s) = F_u(M, \Delta).$$

The block $\Delta$ in (1.16)–(1.21) is supposed to be a "full" matrix, i.e. it has no
specific structure.
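
These formulas map directly onto the lft function. The hedged sketch below builds the interconnection (1.16) for an arbitrary scalar example and checks numerically that the upper LFT reproduces the additive form (1.8) for a sample perturbation.

    s  = tf('s');
    Go = 1/(s + 1);                 % example nominal plant
    M  = [0, 1; 1, Go];             % interconnection matrix of (1.16), scalar case
    Delta = 0.2/(s + 5);            % a sample fixed perturbation
    Gp1 = lft(Delta, M);            % upper LFT Fu(M, Delta)
    Gp2 = Go + Delta;               % additive form, cf. (1.8)
    err = norm(Gp1 - Gp2, inf);     % should be numerically zero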

1.5 Structured Uncertainties


In many robust design problems, it is more likely that the uncertainty scenario
is a mixed case of those described in Sects. 1.2 and 1.3. The uncertainties under
consideration would include unstructured uncertainties, such as unmodeled
dynamics, as well as parameter variations. All these uncertain parts can still be
pulled out of the dynamics, and the whole system can be rearranged into a
standard configuration of an (upper) linear fractional transformation $F(M, \Delta)$.
The uncertain block $\Delta$ would then have the following general form:

$$\Delta = \mathrm{diag}\left( \delta_1 I_{r_1}, \ldots, \delta_s I_{r_s}, \Delta_1, \ldots, \Delta_f \right), \qquad \delta_i \in \mathbb{C},\ \ \Delta_j \in \mathbb{C}^{m_j \times m_j}, \qquad (1.22)$$

where

$$\sum_{i=1}^{s} r_i + \sum_{j=1}^{f} m_j = n,$$

with $n$ the dimension of the block $\Delta$. We may define the set of such $\Delta$ as $\boldsymbol{\Delta}$.
The total block $\Delta$ thus has two types of uncertain blocks: $s$ repeated scalar
blocks and $f$ full blocks. The parameters $\delta_i$ of the repeated scalar blocks can be
restricted to real numbers only, if further information about the uncertainties is available.
However, in the case of real parameters, the analysis and design would be even
harder. The full blocks in (1.22) need not be square, but restricting them as
such makes the notation much simpler.
When a perturbed system is described by an LFT with the uncertain block
of (1.22), the $\Delta$ considered has a certain structure. It is thus called "structured
uncertainty". Clearly, using a lumped, full block to model the uncertainty in
such cases, for instance in Example 1.2, would lead to a pessimistic analysis of
the system behavior and produce conservative designs.
The corresponding functions of the Robust Control Toolbox® facilitate
the process of building different uncertainty models (see Lab 1), as sketched below.
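
As an illustration of (1.22), the hedged sketch below (arbitrary example data) assembles a structured block with a repeated real scalar, a complex scalar and a full 2-by-2 dynamic block, and closes it around a random example interconnection with lft.

    delta1 = ureal('delta1', 0);                      % real scalar parameter
    delta2 = ucomplex('delta2', 0);                   % complex scalar parameter
    Delta1 = ultidyn('Delta1', [2 2]);                % full 2x2 unstructured block
    Delta  = blkdiag(delta1*eye(2), delta2, Delta1);  % diag(delta1*I2, delta2, Delta1), cf. (1.22)
    M  = rss(6, 6, 6);                                % random 6x6 example interconnection
    Gp = lft(Delta, M);                               % uncertain system Fu(M, Delta)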

Text:
Gu, D.W., Petkov, P.H., Konstantinov, M.M. Robust Control Design with
MATLAB®, Second Edition, Springer, London, 2013.
