
OPTIMAL CONTROL APPLICATIONS AND METHODS

Optim. Control Appl. Meth., 20, 1-20 (1999)

ROBUST INVENTORY-PRODUCTION CONTROL PROBLEM WITH STOCHASTIC DEMAND

E. K. BOUKAS*†, P. SHI AND A. ANDIJANI


Mechanical Engineering Department, École Polytechnique de Montréal, P.O. Box 6079, Station "Centre-ville", Montréal, Québec, Canada H3C 3A7
Centre for Industrial and Applicable Mathematics, School of Mathematics, The University of South Australia, The Levels, SA 5095, Australia
Systems Engineering Department, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia

SUMMARY

This paper deals with the inventory-production control problem in which the produced items are assumed to deteriorate at a rate that depends on the stochastic demand rate. The inventory-production control problem is formulated as a jump linear quadratic control problem. The optimal policy that solves the optimal control problem is obtained in terms of a set of coupled Riccati equations. The guaranteed cost control problem is also investigated. Copyright © 1999 John Wiley & Sons, Ltd.

KEY WORDS: production systems; inventory control; stochastic processes; Markov processes; jump linear quadratic control; guaranteed cost control problem

1. INTRODUCTION
Most products, once produced and stocked, deteriorate, and their value decreases with time. For instance, electronic products may become obsolete as technology changes (personal computers are a good example); fashion tends to depreciate the value of clothing over time; and batteries die out with time. The effect of time is even more critical for perishable goods such as foodstuffs and cigarettes.
In this paper, we study the problem of continuous-time inventory-production control with a deterioration rate for the items, a subject that has been extensively investigated over the last ten years. In most of the models reported in the literature, it has been assumed that the deterioration rate of the items is constant and that the demand rate is also constant and deterministic. In reality, however, it is very hard to verify the assumption that the demand rate is deterministic and known. Here, we will consider that the demand rate is described by a continuous-time stochastic

* Correspondence to: Professor El-Kebir Boukas, Mechanical Engineering Department, École Polytechnique de Montréal, C.P. 6079, succ. Centre-ville, Montréal, Québec, Canada H3C 3A7.
† This work was done during the authors' visit to the Systems Engineering Department, KFUPM, Saudi Arabia.

Contract/grant sponsor: Natural Sciences and Engineering Research Council of Canada


Contract/grant number: OGP0036444
Contract/grant sponsor: Australian Research Council
Contract/grant number: AU9532206
Contract/grant sponsor: KFUPM Council

CCC 0143-2087/99/010001-20$17.50 Received 3 February 1998


Copyright © 1999 by John Wiley & Sons, Ltd. Revised 21 August 1998

Markov process with finite state space and that the deterioration rate depends on the demand rate. In fact, for a manufacturing system producing perishable products, if the demand rate is less than a given production rate, the stock level will increase, and it is clear that if the items remain stocked for a long time they will deteriorate. The deterioration effect will be reduced if the difference between the production and the demand is small but positive. The higher the demand rate is, the lower the deterioration rate will be.
The goal of this paper is to determine the optimal production policy that reduces the cost of the facility under consideration. Under some reasonable assumptions, we give the closed-form solution of the optimal policy of the optimization problem in terms of a set of coupled Riccati equations. An uncertain model of the inventory control problem is also proposed, in which the parameter uncertainties are assumed to be norm-bounded. The guaranteed cost inventory control problem is also investigated.

2. PROBLEM STATEMENT
Let us assume that we have a production system that produces one item. Let the production rate of this facility be denoted by $u(t) \in \mathbb{R}_+$. Let $\nu(t)$ be a continuous-time Markov process with finite state space $S = \{1, 2, \ldots, n_s\}$. Let the demand rate of the produced item be stochastic and denote it by $d(\nu(t)) \in D \subset \mathbb{R}_+$. For each value of the process $\nu(t)$ at time $t$, $t \in [t_n, t_{n+1})$, where $t_n$ is the instant of the $n$th jump of the process $\nu(t)$, the demand rate takes a constant value, denoted by $d(\nu(t))$. Let us assume that the deterioration rate depends on the demand rate $d(\nu(t))$, i.e. it is of the form $c(d(\nu(t)))$. For instance, let $d_1$ and $d_2$ be two levels of the demand rate with $d_1 > d_2$; then $c(d_1) < c(d_2)$. The evolution of the stochastic process $\{\nu(t), t \ge 0\}$ that determines the demand rate is then described by the following probability transitions:

$$P[\nu(t+h) = \beta \mid \nu(t) = \alpha] = \begin{cases} q_{\alpha\beta} h + o(h) & \text{if } \alpha \ne \beta \\ 1 + q_{\alpha\alpha} h + o(h) & \text{otherwise} \end{cases} \qquad (1)$$

with $q_{\alpha\beta} \ge 0$ for all $\alpha \ne \beta$, $q_{\alpha\alpha} = -\sum_{\beta \ne \alpha} q_{\alpha\beta}$ for all $\alpha \in S$, and $\lim_{h \to 0} o(h)/h = 0$.
The differential equation that describes the evolution of the inventory of our facility is therefore given by

$$\dot{x}(t) = \begin{cases} -c(d(\nu(t))) x(t) + u(t) - d(\nu(t)) & \text{if } x(t) \ge 0 \\ u(t) - d(\nu(t)) & \text{if } x(t) < 0 \end{cases}$$

where $c(d(\alpha))$ is constant when $\nu(t) = \alpha$, and $x(0) = x_0$ and $\nu(0) = \alpha$ are respectively the initial values of the state and the mode at time $t = 0$.

This last differential equation can be rewritten as

$$\dot{x}(t) = -c(d(\nu(t))) I_{\{x(t) \ge 0\}} x(t) + u(t) - d(\nu(t)), \quad x(0) = x_0, \ \nu(0) = \alpha \qquad (2)$$

where $I_{\{P(x)\}}$ is the index function defined as

$$I_{\{P(x)\}} = \begin{cases} 1 & \text{if } P(x) \text{ is true} \\ 0 & \text{otherwise} \end{cases}$$
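As a concrete illustration, the jump process (1) and the hybrid dynamics (2) can be simulated by Euler discretization. This is only a sketch: the two-mode generator, demand levels, deterioration rates and the feedback rule below are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Illustrative sketch: simulate the demand process (1) and the hybrid
# inventory dynamics (2) by Euler discretization.  Q, d, c_of_d and the
# policy passed as u_of are all assumed for the example.
rng = np.random.default_rng(0)

Q = np.array([[-0.5, 0.5],        # q_ab >= 0 for a != b, rows sum to zero
              [0.8, -0.8]])
d = np.array([1.0, 2.0])          # demand rate d(a) in each mode
c_of_d = np.array([0.3, 0.1])     # deterioration c(d(a)): higher demand, lower rate

def simulate(x0, mode0, u_of, T=10.0, dt=1e-3):
    """Euler scheme for xdot = -c(d(a)) 1{x>=0} x + u - d(a)."""
    x, a = x0, mode0
    for _ in range(int(T / dt)):
        if rng.random() < -Q[a, a] * dt:   # jump with rate -q_aa
            a = 1 - a                      # two modes: switch to the other one
        u = u_of(x, a)
        decay = c_of_d[a] * x if x >= 0 else 0.0
        x += dt * (-decay + u - d[a])
    return x

# hypothetical hedging policy: track demand minus a stock correction
x_final = simulate(x0=5.0, mode0=0, u_of=lambda x, a: max(0.0, d[a] - 0.5 * x))
```

Note how the deterioration term is switched off whenever the stock is negative (backlog), exactly as the indicator in (2) prescribes.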

Remark 2.1
Our assumption that the demand $d(t)$ is a function of the continuous-time Markov process $\nu(t)$ is a reasonable one. In fact, if we are producing perishable goods and the demand rate is low, we will certainly lose a certain number of items, namely those that remain in stock and therefore deteriorate with time.

Remark 2.2
Note that system (1)-(2) is a hybrid system in which one state, $x(t)$, takes values continuously, and another 'state', $\nu(t)$, takes values discretely. This kind of system can be used to represent many important physical systems subject to random failures and structural changes, such as electric power systems, control systems of a solar thermal central receiver, communication systems, aircraft flight control, and manufacturing systems.

Our objective in this paper is to seek an admissible feedback control law that minimizes the following cost function:

$$J_u(x_0, \alpha, 0) = E\left[ \int_0^T [c_1 x^2(t) + c_2 u^2(t)] \, dt \ \Big| \ x(0) = x_0, \ \nu(0) = \alpha \right] \qquad (3)$$

where $c_1$ ($c_1 \ne 0$) is the inventory holding cost when $x(t)$ is positive or the shortage cost when $x(t)$ is negative, $c_2$ ($c_2 \ne 0$) is the production unit cost, and $E$ stands for the mathematical expectation operator.

Remark 2.3
This formulation can be extended to the more general case of a facility producing multiple part types. The extension is straightforward, but for simplicity we will focus on the case of one machine producing one part type. The general case will be treated in Section 4.
In the following section, we deal with two cases in turn. In the first part, we consider the case in which the demand rate is stochastic but the deterioration rate is constant and independent of the demand rate. In the second part, we extend the problem of the first part by considering a stochastic demand rate and a deterioration rate that depends on the demand rate. In each case, we determine the control policy that minimizes the cost function given by equation (3).

3. INVENTORY-PRODUCTION CONTROL PROBLEM


Let us assume that the deterioration rate in equation (2) is a constant $c$ that does not depend on the demand rate $d(\nu(t))$. The demand rate $d(\nu(t))$ is the same as in Section 2. In this case, the dynamics becomes

$$\dot{x}(t) = -c I_{\{x(t) \ge 0\}} x(t) + u(t) - d(\nu(t)), \quad x(0) = x_0, \ \nu(0) = \alpha \qquad (4)$$

where the variables keep the same definitions as in Section 2 except that $c$ is constant.
Let us also consider the cost function given by equation (3). This optimization problem falls within the framework of the optimization of the class of systems with Markovian jumps, which has been studied extensively; using this formalism, a number of results have been obtained.
This class of systems has also been been used to model different types of systems, such as manufacturing systems, power systems, etc. The book of Mariton gives some examples of applications of this class of systems; the reader is also referred to the references of the previous paragraph for other works dealing with applications.


Definition 3.1
A control $u(\cdot) = \{u(t) : t \ge 0\}$ is said to be admissible if: (i) $u(\cdot)$ is adapted to the $\sigma$-algebra generated by the random process $\nu(\cdot)$, denoted $\sigma\{\nu(s) : 0 \le s \le t\}$, and (ii) $u(t) \in \mathbb{R}_+$ for all $t \ge 0$.
Let $\mathcal{U}$ denote the set of all admissible controls of our control problem.

Definition 3.2
A measurable function $u(x(t), \nu(t))$ is an admissible feedback control if (i) for any given initial values $x_0$ and $\alpha$ of the continuous state and the mode, the following equation has a unique solution $x(\cdot)$:

$$\dot{x}(t) = -c(d(\nu(t))) I_{\{x(t) \ge 0\}} x(t) + u(x(t), \nu(t)) - d(\nu(t)), \quad x(0) = x_0 \qquad (5)$$

and (ii) $u(\cdot) = u(x(\cdot), \nu(\cdot)) \in \mathcal{U}$.
To solve the optimization problem of this section, let us drop the constraint on $u(t)$, consider the more general case, and then, based on its solution, recover the one we are looking for. Let us consider that the system dynamics is described by the following non-linear differential equation:

$$\dot{x}(t) = f(x(t), u(t), \nu(t)), \quad x(0) = x_0, \ \nu(0) = \alpha \qquad (6)$$

where the variables above keep the same definitions as in the previous section. Let the cost function be defined by

$$J_u(x_0, \alpha, 0) = E\left[ \int_0^T g(x(t), u(t), \nu(t)) \, dt \ \Big| \ x(0) = x_0, \ \nu(0) = \alpha \right] \qquad (7)$$

Let us assume that the functions $f(x(t), u(t), \nu(t))$ and $g(x(t), u(t), \nu(t))$ satisfy all the required assumptions. Let the value function $v(x(t), \nu(t), t)$ be defined by

$$v(x(t), \nu(t), t) = \min_{u \in \mathcal{U}} J_u(x(t), \nu(t), t) \qquad (8)$$

Let us also assume that the value function is continuously differentiable with respect to its arguments.
Using the dynamic programming principle and the expression of the value function, i.e.

$$v(x(t), \nu(t), t) = \min_{u \in \mathcal{U}} E\left[ \int_t^T g(x(s), u(s), \nu(s)) \, ds \ \Big| \ x(t), \nu(t) \right] \qquad (9)$$

we can show that the Hamilton-Jacobi-Bellman (HJB) equation is given by

$$\min_{u} \left[ (\mathcal{A}^u v)(x(t), \nu(t), t) + g(x(t), u(t), \nu(t)) \right] = 0 \qquad (10)$$

where $(\mathcal{A}^u v)(x(t), \nu(t), t)$ is defined as

$$(\mathcal{A}^u v)(x(t), \nu(t), t) = \frac{\partial v}{\partial t}(x(t), \nu(t), t) + f^{\mathrm{T}}(x(t), u(t), \nu(t)) \frac{\partial v}{\partial x}(x(t), \nu(t), t) + \sum_{\beta \in S} q_{\nu(t)\beta}\, v(x(t), \beta, t) \qquad (11)$$

Let us now apply this result to our case, where

$$f(x(t), u(t), \nu(t)) = -c I_{\{x(t) \ge 0\}} x(t) + u(t) - d(\nu(t))$$

$$g(x(t), u(t), \nu(t)) = c_1 x^2(t) + c_2 u^2(t)$$

Before establishing the optimality condition for our optimization problem, let us first note that the value function satisfies the required conditions. Since the dynamics is nonlinear and the instantaneous cost is quadratic in $x$ and $u$, it can be proven that the value function is convex and possesses derivatives with respect to $x$ for each $\alpha$ in $S$.
Using our expressions for the functions $f(\cdot)$ and $g(\cdot)$ and the HJB equation (10), one has

$$\frac{\partial v}{\partial t}(x(t), \nu(t), t) + \min_{u} \left\{ \left[ -c I_{\{x(t) \ge 0\}} x(t) + u(t) - d(\nu(t)) \right]^{\mathrm{T}} \frac{\partial v}{\partial x}(x(t), \nu(t), t) + \sum_{\beta \in S} q_{\nu(t)\beta}\, v(x(t), \beta, t) + c_1 x^2(t) + c_2 u^2(t) \right\} = 0 \qquad (12)$$

Since the control is unconstrained, the optimal control law is given by

$$u^*(t) = -\frac{1}{2c_2} \frac{\partial v}{\partial x}(x(t), \nu(t), t) \qquad (13)$$

Hence, substituting this control $u^*(t)$ into the HJB equation, we obtain the following first-order partial differential equation for the value function:

$$\frac{\partial v}{\partial t}(x(t), \nu(t), t) - \frac{1}{4c_2} \left[ \frac{\partial v}{\partial x}(x(t), \nu(t), t) \right]^2 + \left[ -c I_{\{x(t) \ge 0\}} x(t) - d(\nu(t)) \right] \frac{\partial v}{\partial x}(x(t), \nu(t), t) + \sum_{\beta \in S} q_{\nu(t)\beta}\, v(x(t), \beta, t) + c_1 x^2(t) = 0 \qquad (14)$$

To solve this partial differential equation, we try the following expression for the value function $v(x(t), \nu(t), t)$:

$$v(x(t), \nu(t), t) = p(\nu(t), t) x^2(t) + q(\nu(t), t) x(t) + r(\nu(t), t) \qquad (15)$$

where $p(\nu(t), t)$, $q(\nu(t), t)$ and $r(\nu(t), t)$ are time-varying real functions of $\nu(t)$ and $t$ (with $p$ positive) that remain to be determined. Based on this expression, after computing $(\partial v/\partial t)(x(t), \nu(t), t)$ and $(\partial v/\partial x)(x(t), \nu(t), t)$, substituting them into (14), and using the fact that the resulting identity must hold for all $x(t)$ in $\mathbb{R}$, one obtains

$$\begin{cases} \dot{p}(\alpha, t) - 2c I_{\{x(t) \ge 0\}}\, p(\alpha, t) - \dfrac{1}{c_2} p^2(\alpha, t) + \sum_{\beta \in S} q_{\alpha\beta}\, p(\beta, t) + c_1 = 0, & p(\alpha, T) = 0 \\[4pt] \dot{q}(\alpha, t) - c I_{\{x(t) \ge 0\}}\, q(\alpha, t) - \dfrac{1}{c_2} p(\alpha, t) q(\alpha, t) + \sum_{\beta \in S} q_{\alpha\beta}\, q(\beta, t) - 2 d(\alpha) p(\alpha, t) = 0, & q(\alpha, T) = 0 \\[4pt] \dot{r}(\alpha, t) - \dfrac{1}{4c_2} q^2(\alpha, t) - d(\alpha) q(\alpha, t) + \sum_{\beta \in S} q_{\alpha\beta}\, r(\beta, t) = 0, & r(\alpha, T) = 0 \end{cases} \qquad (16)$$

for all $\alpha \in S$.


Therefore, if there exists a set of real functions $p(\alpha, t) > 0$ and $q(\alpha, t)$ satisfying the coupled Riccati equations (16), then (14) has a set of solutions $v(x(t), \nu(t), t)$. Consequently, there exists an optimal control law of the form (13) that solves our stochastic optimization problem (3). To summarize the above, we have the following theorem.

Theorem 3.1
If the set of coupled Riccati equations (16) has real solutions $p(\alpha, t) > 0$ and $q(\alpha, t)$ for all $\alpha \in S$, then the linear quadratic control problem (1)-(3) has an optimal solution given by the following expression:

$$u(t) = -\frac{1}{2c_2} \left[ 2 p(\alpha, t) x(t) + q(\alpha, t) \right], \quad \alpha \in S \qquad (17)$$

Remark 3.1
It should be noted that many computational algorithms for solving coupled Riccati equations are available. Notice also that equation (16) depends on the sign of $x(t)$. Therefore, we need to solve this set of equations both when $x(t) \ge 0$, which gives $c(d(\nu(t))) \ne 0$, and when $x(t) < 0$, which corresponds to $c(d(\nu(t))) = 0$.
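To make the computational remark concrete, the coupled system (16) can be integrated backward in time from the terminal conditions $p(\alpha, T) = q(\alpha, T) = 0$. The sketch below handles the $x \ge 0$ branch (indicator equal to 1) for a two-mode example; all numerical data (cost weights, deterioration rate, demands, generator) are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed two-mode data for illustration only.
c1, c2 = 1.0, 0.5          # state and control cost weights
gamma = 0.2                # constant deterioration rate c of Section 3
T = 5.0                    # horizon
d = np.array([1.0, 2.0])   # demand d(a) per mode
Q = np.array([[-0.5, 0.5],
              [0.8, -0.8]])

def riccati_rhs(t, y):
    # y stacks p(., t) and q(., t) over the two modes; x >= 0 branch of (16)
    p, q = y[:2], y[2:]
    pdot = p**2 / c2 + 2 * gamma * p - Q @ p - c1
    qdot = p * q / c2 + gamma * q - Q @ q + 2 * d * p
    return np.concatenate([pdot, qdot])

# terminal conditions p(a, T) = q(a, T) = 0; integrate backward from T to 0
sol = solve_ivp(riccati_rhs, (T, 0.0), np.zeros(4), rtol=1e-8, atol=1e-10)
p0, q0 = sol.y[:2, -1], sol.y[2:, -1]   # values at t = 0
```

The resulting $p(\alpha, 0) > 0$ and $q(\alpha, 0)$ can then be substituted into the feedback law (17).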
Let us now return to the optimization problem of Section 2. This optimization problem, like the one we dealt with at the beginning of this section, falls within the general framework of the class of systems with Markovian jumps. To get the corresponding optimality conditions, let us now apply this result to our case, where

$$f(x(t), u(t), \nu(t)) = -c(d(\nu(t))) I_{\{x(t) \ge 0\}} x(t) + u(t) - d(\nu(t)) \qquad (18)$$

$$g(x(t), u(t), \nu(t)) = c_1 x^2(t) + c_2 u^2(t) \qquad (19)$$

Using these expressions for the functions $f(\cdot)$ and $g(\cdot)$ and the previous optimality condition (10), one has

$$\frac{\partial v}{\partial t}(x(t), \nu(t), t) + \min_{u} \left\{ \left[ -c(d(\nu(t))) I_{\{x(t) \ge 0\}} x(t) + u(t) - d(\nu(t)) \right]^{\mathrm{T}} \frac{\partial v}{\partial x}(x(t), \nu(t), t) + \sum_{\beta \in S} q_{\nu(t)\beta}\, v(x(t), \beta, t) + c_1 x^2(t) + c_2 u^2(t) \right\} = 0 \qquad (20)$$

Since the control is unconstrained, the optimal control law is given by

$$u^*(t) = -\frac{1}{2c_2} \frac{\partial v}{\partial x}(x(t), \nu(t), t) \qquad (21)$$

Hence, substituting this control $u^*(t)$ into the HJB equation, we obtain the following first-order partial differential equation for the value function:

$$\frac{\partial v}{\partial t}(x(t), \nu(t), t) - \frac{1}{4c_2} \left[ \frac{\partial v}{\partial x}(x(t), \nu(t), t) \right]^2 + \left[ -c(d(\nu(t))) I_{\{x(t) \ge 0\}} x(t) - d(\nu(t)) \right] \frac{\partial v}{\partial x}(x(t), \nu(t), t) + \sum_{\beta \in S} q_{\nu(t)\beta}\, v(x(t), \beta, t) + c_1 x^2(t) = 0 \qquad (22)$$


To solve this partial differential equation, we try the same expression (15) for the value function $v(x(t), \nu(t), t)$ as in the previous case and follow the same steps. Then, based on equation (22), one obtains

$$\begin{cases} \dot{p}(\alpha, t) - 2c(d(\alpha)) I_{\{x(t) \ge 0\}}\, p(\alpha, t) - \dfrac{1}{c_2} p^2(\alpha, t) + \sum_{\beta \in S} q_{\alpha\beta}\, p(\beta, t) + c_1 = 0, & p(\alpha, T) = 0 \\[4pt] \dot{q}(\alpha, t) - c(d(\alpha)) I_{\{x(t) \ge 0\}}\, q(\alpha, t) - \dfrac{1}{c_2} p(\alpha, t) q(\alpha, t) + \sum_{\beta \in S} q_{\alpha\beta}\, q(\beta, t) - 2 d(\alpha) p(\alpha, t) = 0, & q(\alpha, T) = 0 \\[4pt] \dot{r}(\alpha, t) - \dfrac{1}{4c_2} q^2(\alpha, t) - d(\alpha) q(\alpha, t) + \sum_{\beta \in S} q_{\alpha\beta}\, r(\beta, t) = 0, & r(\alpha, T) = 0 \end{cases} \qquad (23)$$

for all $\alpha \in S$.
?@

Therefore, if there exists a set of real functions $p(\alpha, t) > 0$ and $q(\alpha, t)$, $\alpha \in S$, satisfying the coupled Riccati equations (23), then (22) has a set of solutions $v(x(t), \nu(t), t)$. Consequently, there exists an optimal control law of the form (21) that solves our stochastic optimization problem (3). To summarize the above, we have the following theorem.

Theorem 3.2
If the set of coupled Riccati equations (23) has real solutions $p(\alpha, t) > 0$ and $q(\alpha, t)$ for all $\alpha \in S$, then the linear quadratic control problem (1)-(3) has an optimal solution given by the following expression:

$$u(t) = -\frac{1}{2c_2} \left[ 2 p(\alpha, t) x(t) + q(\alpha, t) \right], \quad \alpha \in S \qquad (24)$$

Remark 3.2
Since in practice the production rate is always non-negative, the optimal production rate $u^*(t)$ is given by

$$u^*(t) = \max\left\{ 0,\; -\frac{1}{2c_2} \frac{\partial v}{\partial x}(x(t), \nu(t), t) \right\}$$
Let us now deal with the infinite-horizon optimization problem by letting $T$ go to infinity. In this case we need more assumptions to guarantee the existence of the optimal solution. These assumptions can be reduced to stochastic stabilizability and stochastic detectability. Before introducing the definitions of stochastic stability and stochastic stabilizability, let us put our optimization problem in a more general formulation with multiple states, multiple modes and multiple production rates. This formulation is given by the following system of differential equations:

$$\dot{x}(t) = A(\nu(t)) x(t) + B(\nu(t)) [u(t) - d(\nu(t))], \quad x(0) = x_0, \ \nu(0) = \alpha \qquad (25)$$

where $x(t) \in \mathbb{R}^n$ is the state, $u(t) \in \mathbb{R}^n_+$ is the control input, and $d(\nu(t)) \in \mathbb{R}^n_+$ is the demand vector, a function of $\nu(t)$, which is the same as in Section 2. $A(\nu(t))$ and $B(\nu(t))$ are matrix functions of $\nu(t)$ with the form

 
$$A(\nu(t)) = \begin{cases} \begin{bmatrix} -c_1(\nu(t)) & 0 & \cdots & 0 \\ 0 & -c_2(\nu(t)) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & -c_n(\nu(t)) \end{bmatrix} & \text{if } x \ge 0 \\ 0 & \text{otherwise} \end{cases}$$

$$B(\nu(t)) = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} = I_n$$

where $I_n$ is the identity matrix of dimension $n$.

Remark 3.3
The presence of $I_{\{x(t) \ge 0\}}$ in the expression of the matrix $A$ means that we will consider the two cases $x(t) \ge 0$ and $x(t) < 0$ in the resolution.
Associated with system (25), we consider the following cost function:

$$J_u(x_0, \alpha, 0) = E\left[ \int_0^\infty [x^{\mathrm{T}}(t) R_1(\alpha) x(t) + u^{\mathrm{T}}(t) R_2(\alpha) u(t)] \, dt \ \Big| \ x(0) = x_0, \ \nu(0) = \alpha \right] \qquad (26)$$

where $R_1(\alpha)$ and $R_2(\alpha)$, $\alpha \in S$, are symmetric and positive-definite matrices; they are diagonal.
Let $x(t, x_0, \nu(0))$ denote the trajectory of the state $x(t)$ (the stock level at time $t$) from the initial condition $(x_0, \nu(0))$. We introduce the following stochastic stability and stochastic stabilizability concepts for continuous-time jump linear systems.

Definition 3.3
For system (25) with $u(t) \equiv 0$ and $d(\nu(t)) \equiv 0$ for all $\nu(t) \in S$, the equilibrium point 0 is stochastically stable if, for every initial condition $(x_0, \nu_0)$,

$$\int_0^{+\infty} E\{ \| x(t, x_0, \nu_0) \|^2 \} \, dt < +\infty$$
Definition 3.4
We say the system (25) is stochastically stabilizable if, for every initial condition $(x_0, \nu(0))$, there exists a linear feedback control law $u(t) = L(\nu(t)) x(t) + d(\nu(t))$, $\nu(t) = \alpha$, such that the closed-loop system

$$\dot{x}(t) = [A(\nu(t)) + L(\nu(t))] x(t)$$

is stochastically stable.


If we assume that the system is stochastically stabilizable and stochastically detectable, then we can prove that the optimal control law exists and is given by solving a set of coupled Riccati equations similar to the ones given above for the case of a single item.

4. GUARANTEED COST CONTROL


In order to control the behaviour of a system, we should capture the system's salient features in a mathematical model. In practice, it is almost always impossible to get an exact mathematical model of a dynamical system due to the complexity of the system, the difficulty of measuring various parameters, environmental noise, uncertain and/or time-varying parameters, etc. Indeed, the model of the system to be controlled almost always contains some type of uncertainty. There are two main categories of uncertainty. The first arises from neglected high-frequency dynamics, such as actuator and sensor dynamics, or structural modes. The second category results from unknown and/or time-varying real parameters of the system and can be thought of as low-frequency modelling errors. The control design should take into account the uncertainty inherent in a mathematical model of the system in order to maintain stability and performance specifications in the presence of these uncertainties.
In the past decades, the design of control systems that can handle model uncertainties has been one of the most challenging problems and has received considerable attention from control engineers and scientists. A number of approaches to the problem of robust control design for uncertain dynamical systems have been proposed, for example, robust stabilization, sensitivity minimization, H∞ control, and loop transfer recovery.

There are two major issues in robust controller design. The first is concerned with the robust stability of the uncertain closed-loop system, and the other is robust performance. The latter is more important since, when controlling a system that depends on uncertain parameters, it is always desirable to design a control system which is not only stable but also guarantees an adequate level of performance. This issue has been addressed both for the continuous-time case and for the discrete-time case. On the other hand, stochastic linear uncertain systems have also been studied recently, in particular linear uncertain systems with Markovian jump parameters. The problems of designing state feedback controllers for uncertain Markovian jumping systems to achieve both stochastic stability and a prescribed H∞ performance have been investigated for the continuous-time and discrete-time cases. Also, the singularly perturbed robust H∞ control problem for continuous-time systems with Markovian jumping parameters has been considered, and it has been shown that the constructed composite controller is robust with respect to the small perturbation parameter in a certain interval. However, to the authors' knowledge, the problem of guaranteed cost control for linear Markovian jumping systems with parameter uncertainties has still not been fully investigated.
In this section, we study the guaranteed cost control problem for system (2) with multiple items and norm-bounded uncertainty. We design a controller such that the closed-loop system of the underlying uncertain system with this controller is stochastically stable and the cost function is guaranteed to lie within a certain bound.
The class of uncertain systems under consideration is described by the state equation

$$\dot{x}(t) = [A(\nu(t)) + \Delta A(\nu(t))] x(t) + [B(\nu(t)) + \Delta B(\nu(t))] (u(t) - d(\nu(t))), \quad x(0) = x_0, \ \nu(0) = \alpha \qquad (27)$$

where $x(t) \in \mathbb{R}^n$ is the state, $u(t) \in \mathbb{R}^n$ is the control input, and $d(\nu(t)) \in \mathbb{R}^n$ is another input, a function of $\nu(t)$, which is the same as in Section 2. $A(\nu(t))$ and $B(\nu(t))$ are matrix functions of $\nu(t)$ with the same forms as in the previous section; $\Delta A(\nu(t))$ and $\Delta B(\nu(t))$ are unknown matrices which represent time-varying parameter uncertainties and are assumed to belong to certain bounded compact sets:

$$\Delta A(\nu(t)) = \begin{cases} \begin{bmatrix} \Delta c_1(\nu(t)) & 0 & \cdots & 0 \\ 0 & \Delta c_2(\nu(t)) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \Delta c_n(\nu(t)) \end{bmatrix} & \text{if } x \ge 0 \\ 0 & \text{otherwise} \end{cases}$$

$$\Delta B(\nu(t)) = \begin{bmatrix} \Delta b_1(\nu(t)) & 0 & \cdots & 0 \\ 0 & \Delta b_2(\nu(t)) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \Delta b_n(\nu(t)) \end{bmatrix}$$
The admissible parameter uncertainties are assumed to be of the form

$$\Delta A(\nu(t)) = H(\nu(t)) F(\nu(t), t) E_1(\nu(t)), \quad \Delta B(\nu(t)) = H(\nu(t)) F(\nu(t), t) E_2(\nu(t)) \qquad (28)$$

where $H(\nu(t))$, $E_1(\nu(t))$ and $E_2(\nu(t))$, for any $\nu(t) = \alpha$, $\alpha \in S$, are known constant matrices of appropriate dimensions, and $F(\nu(t), t)$ is an unknown time-varying matrix satisfying

$$F^{\mathrm{T}}(\nu(t), t) F(\nu(t), t) \le I, \quad \nu(t) \in S \qquad (29)$$

Remark 4.1
It should be noted that the parameter uncertainty structure in (28)-(29) has been widely used in problems of robust control and robust filtering of uncertain systems (see, for example, References [37, 28] and the references therein), and many practical systems possess parameter uncertainties which can be either exactly modelled or overbounded by (28). Furthermore, the matrix $F(\nu(t), t)$, for all $\nu(t) \in S$, containing the uncertain parameters in the state and input matrices, is allowed to be state dependent as long as (29) is satisfied along all possible state trajectories. The matrices $H(\nu(t))$, $E_1(\nu(t))$ and $E_2(\nu(t))$ specify how the uncertain parameters in $F(\nu(t), t)$ affect the nominal matrices of the system (27). Observe that the unit overbound for $F(\nu(t), t)$ does not cause any loss of generality. Indeed, $F(\nu(t), t)$ can always be normalized, in the sense of (29), by appropriately choosing the matrices $H(\nu(t))$, $E_1(\nu(t))$ and $E_2(\nu(t))$.
Remark 4.2
It should be noted that the practical problem we are dealing with has no uncertainty on the matrix $B(\nu(t))$, which is equal to the identity matrix; therefore $\Delta B(\nu(t)) = 0$ for all $t$. For generality of our results, however, we will consider the case in which it is not equal to 0. The uncertainty on the matrix $A$ results from the fact that it is very hard to know the deterioration rate exactly.
Associated with system (27), we consider the following cost function:

$$J_u(x_0, \alpha, 0) = E\left[ \int_0^\infty [x^{\mathrm{T}}(t) R_1(\alpha) x(t) + u^{\mathrm{T}}(t) R_2(\alpha) u(t)] \, dt \ \Big| \ x(0) = x_0, \ \nu(0) = \alpha \right] \qquad (30)$$

where $R_1(\alpha)$ and $R_2(\alpha)$, $\alpha \in S$, are symmetric and positive-definite matrices. We also introduce the following quadratic stability concept for the uncertain jumping system (27).

Definition 4.1
The system (27) (with $u(t) \equiv 0$ and $d(\nu(t)) \equiv 0$) is said to be stochastically quadratically stable if and only if there exists a set of symmetric matrices $\{P(\alpha) > 0, \alpha \in S\}$ satisfying the following coupled matrix inequalities:

$$[A(\alpha) + H(\alpha) F(\alpha, t) E_1(\alpha)]^{\mathrm{T}} P(\alpha) + P(\alpha) [A(\alpha) + H(\alpha) F(\alpha, t) E_1(\alpha)] + \sum_{\beta \in S} q_{\alpha\beta} P(\beta) < 0$$

for all admissible uncertainties $F(\alpha, t)$ in (28)-(29).

Proposition 4.1
If system (27) (with $u(t) \equiv 0$ and $d(\nu(t)) \equiv 0$) is stochastically quadratically stable, then it is stochastically stable.

Lemma 4.1
Given matrices $Q$, $H$, $E$ and $R$ of appropriate dimensions, with $Q$ and $R$ symmetric and $R > 0$, then

$$Q + H F E + E^{\mathrm{T}} F^{\mathrm{T}} H^{\mathrm{T}} < 0$$

for all $F$ satisfying $F^{\mathrm{T}} F \le R$, if and only if there exists some $\varepsilon > 0$ such that

$$Q + \varepsilon H H^{\mathrm{T}} + \varepsilon^{-1} E^{\mathrm{T}} R E < 0$$

Lemma 4.2
Given matrices $H$, $E$ and $R$ of appropriate dimensions with $R > 0$ symmetric, there exists some $\varepsilon > 0$ such that

$$H F E + E^{\mathrm{T}} F^{\mathrm{T}} H^{\mathrm{T}} \le \varepsilon H H^{\mathrm{T}} + \varepsilon^{-1} E^{\mathrm{T}} R E$$

for all $F$ satisfying $F^{\mathrm{T}} F \le R$.
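Lemma 4.2 can be sanity-checked numerically: for any $H$, $E$, any $F$ with $F^{\mathrm{T}} F \le R$ and any $\varepsilon > 0$, the gap between the two sides should be positive semidefinite. The matrix sizes, the choice $R = I$ and the normalization below are arbitrary choices for the check, not data from the paper.

```python
import numpy as np

# Numerical check of Lemma 4.2 on random data (R = I for simplicity).
rng = np.random.default_rng(1)
n = 3
H = rng.standard_normal((n, n))
E = rng.standard_normal((n, n))
R = np.eye(n)
F = rng.standard_normal((n, n))
F /= np.linalg.norm(F, 2)          # spectral norm 1, so F^T F <= I = R

for eps in (0.1, 1.0, 10.0):
    lhs = H @ F @ E + E.T @ F.T @ H.T
    rhs = eps * H @ H.T + (1.0 / eps) * E.T @ R @ E
    # Lemma 4.2: rhs - lhs must be positive semidefinite
    assert np.linalg.eigvalsh(rhs - lhs).min() >= -1e-9
```

The bound follows from completing the square: the gap equals $G^{\mathrm{T}} G + \varepsilon^{-1} E^{\mathrm{T}}(R - F^{\mathrm{T}} F) E$ with $G = \sqrt{\varepsilon}\, H^{\mathrm{T}} - \varepsilon^{-1/2} F E$, which is non-negative whenever $F^{\mathrm{T}} F \le R$.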

The following result can be derived immediately by using Lemmas 4.1 and 4.2.

Theorem 4.1
The system (27) (with $u(t) \equiv 0$ and $d(\nu(t)) \equiv 0$) is stochastically quadratically stable if and only if there exist a sequence $\{\varepsilon_\alpha > 0, \alpha \in S\}$ and a set of symmetric and positive-definite matrices $\{P(\alpha), \alpha \in S\}$ such that the following inequality

$$A^{\mathrm{T}}(\alpha) P(\alpha) + P(\alpha) A(\alpha) + \sum_{\beta \in S} q_{\alpha\beta} P(\beta) + \varepsilon_\alpha P(\alpha) H(\alpha) H^{\mathrm{T}}(\alpha) P(\alpha) + \varepsilon_\alpha^{-1} E_1^{\mathrm{T}}(\alpha) E_1(\alpha) < 0$$

holds for all $\alpha \in S$.
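For a scalar ($n = 1$) instance, the test of Theorem 4.1 is easy to evaluate directly. The two-mode data below (stable drifts $A(\alpha)$, uncertainty scalars $H$ and $E_1$, candidate $P(\alpha)$ and $\varepsilon_\alpha$) are assumptions chosen only to illustrate the check:

```python
import numpy as np

# Scalar check of the coupled inequality of Theorem 4.1 (assumed data).
Q = np.array([[-0.5, 0.5],
              [0.8, -0.8]])          # Markov generator
A = np.array([-2.0, -3.0])           # stable nominal drift A(a) per mode
H, E1 = 0.1, 0.1                     # uncertainty structure of (28)
P = np.array([1.0, 1.0])             # candidate "matrices" P(a) > 0
eps = np.array([1.0, 1.0])

for a in range(2):
    lhs = (2.0 * A[a] * P[a] + Q[a] @ P
           + eps[a] * P[a] * H * H * P[a]
           + E1 * E1 / eps[a])
    assert lhs < 0.0   # the inequality of Theorem 4.1 holds for this data
```

Note the coupling term $\sum_\beta q_{\alpha\beta} P(\beta)$ vanishes here only because the candidate $P$ is the same in both modes; in general it links the modes' inequalities together.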


Definition 4.2
A control law $u(t) = L(\alpha) x(t) + d(\alpha)$, $\alpha \in S$, is said to define a stochastic quadratic guaranteed cost control with associated cost matrices $P(\alpha) > 0$, $\alpha \in S$, for the system (27) and cost function (30) if

$$x^{\mathrm{T}} [R_1(\alpha) + L^{\mathrm{T}}(\alpha) R_2(\alpha) L(\alpha)] x + 2 x^{\mathrm{T}} P(\alpha) [A(\alpha) + H(\alpha) F(\alpha, t)(E_1(\alpha) + E_2(\alpha) L(\alpha)) + B(\alpha) L(\alpha)] x + x^{\mathrm{T}} \left[ \sum_{\beta \in S} q_{\alpha\beta} P(\beta) \right] x < 0, \quad \alpha \in S \qquad (31)$$

for all non-zero $x \in \mathbb{R}^n$ and all matrices $F(\nu(t), t)$ satisfying (29).
Now we are ready to present our main results in this section.

Theorem 4.2
Consider the system (27) with cost function (30), and suppose the control law $u(t) = L(\alpha) x(t) + d(\alpha)$, $\alpha \in S$, is a stochastic quadratic guaranteed cost control with cost matrices $P(\alpha) > 0$, $\alpha \in S$. Then the closed-loop uncertain system

$$\dot{x}(t) = [A(\alpha) + H(\alpha) F(\alpha, t)(E_1(\alpha) + E_2(\alpha) L(\alpha)) + B(\alpha) L(\alpha)] x(t) \qquad (32)$$

is stochastically quadratically stable for all $F(\alpha, t)$ satisfying $F^{\mathrm{T}}(\alpha, t) F(\alpha, t) \le I$ and for all $\alpha \in S$. Furthermore, the corresponding value of the cost function (30) satisfies the bound

$$J_u(x_0, \alpha, 0) \le E\{ x_0^{\mathrm{T}} P(\alpha) x_0 \}, \quad \forall\, F^{\mathrm{T}}(\alpha, t) F(\alpha, t) \le I$$

Conversely, if there exists a control law $u(t) = L(\alpha) x(t) + d(\alpha)$, $\alpha \in S$, such that the resulting closed-loop system (32) is stochastically quadratically stable, then this control law is a stochastic quadratic guaranteed cost control with some cost matrices $\tilde{P}(\alpha) > 0$, $\alpha \in S$.
Proof. It is similar to that for a result without Markovian jumps. First, if the control law $u(t) = L(\alpha) x(t) + d(\alpha)$, $\nu(t) = \alpha \in S$, is a stochastic quadratic guaranteed cost control with cost matrices $P(\alpha) > 0$, $\alpha \in S$, then by Definition 4.2 and the fact that $R_1(\alpha) > 0$ and $R_2(\alpha) > 0$, $\alpha \in S$, one has

$$x^{\mathrm{T}} \left\{ [A(\alpha) + H(\alpha) F(\alpha, t)(E_1(\alpha) + E_2(\alpha) L(\alpha)) + B(\alpha) L(\alpha)]^{\mathrm{T}} P(\alpha) + P(\alpha) [A(\alpha) + H(\alpha) F(\alpha, t)(E_1(\alpha) + E_2(\alpha) L(\alpha)) + B(\alpha) L(\alpha)] + \sum_{\beta \in S} q_{\alpha\beta} P(\beta) \right\} x < 0$$

for all non-zero $x \in \mathbb{R}^n$ and all matrices $F(\nu(t), t)$ satisfying (29). Hence, the closed-loop system (32) is stochastically quadratically stable.
Let us define a Lyapunov candidate $V(x, \alpha)$ by

$$V(x(t), \alpha) = x^{\mathrm{T}}(t) P(\alpha) x(t), \quad \alpha \in S$$

It can easily be shown that the weak infinitesimal operator $\mathcal{A} V(x, \alpha)$ of the Markov process $\{(\nu(t), x(t)), t \ge 0\}$ is given by

$$\mathcal{A} V(x, \alpha) = x^{\mathrm{T}} [A(\alpha) + H(\alpha) F(\alpha, t)(E_1(\alpha) + E_2(\alpha) L(\alpha)) + B(\alpha) L(\alpha)]^{\mathrm{T}} P(\alpha)\, x + x^{\mathrm{T}} P(\alpha) [A(\alpha) + H(\alpha) F(\alpha, t)(E_1(\alpha) + E_2(\alpha) L(\alpha)) + B(\alpha) L(\alpha)]\, x + x^{\mathrm{T}} \left[ \sum_{\beta \in S} q_{\alpha\beta} P(\beta) \right] x$$

By applying the Dynkin formula to the above, one obtains

$$E(V(x(t), \nu(t))) - V(x_0, \alpha) = E \int_0^t \mathcal{A} V(x(\tau), \nu(\tau)) \, d\tau < -E \int_0^t [x^{\mathrm{T}}(\tau) R_1(\nu(\tau)) x(\tau) + u^{\mathrm{T}}(\tau) R_2(\nu(\tau)) u(\tau)] \, d\tau \qquad (33)$$

for all $\alpha \in S$ and $F^{\mathrm{T}}(\alpha, t) F(\alpha, t) \le I$.
Recalling that the closed-loop system (32) is stochastically quadratically stable, that is, $\lim_{t \to \infty} E(V(x(t), \nu(t))) = 0$, we have from (33)

$$J_u(x_0, \alpha, 0) < E(x_0^{\mathrm{T}} P(\alpha) x_0)$$

which is the desired result.
Second, if there exists a control law $u(t) = L(\alpha) x(t) + d(\alpha)$, $\alpha \in S$, such that the resulting closed-loop system (32) is stochastically quadratically stable, then by Definition 4.1 there exist a set of symmetric and positive-definite matrices $\{P(\alpha), \alpha \in S\}$ and a sequence $\{\varepsilon_\alpha > 0, \alpha \in S\}$ such that

$$\varepsilon_\alpha x^{\mathrm{T}} [R_1(\alpha) + L^{\mathrm{T}}(\alpha) R_2(\alpha) L(\alpha)] x + 2 x^{\mathrm{T}} P(\alpha) [A(\alpha) + H(\alpha) F(\alpha, t)(E_1(\alpha) + E_2(\alpha) L(\alpha)) + B(\alpha) L(\alpha)] x + x^{\mathrm{T}} \left[ \sum_{\beta \in S} q_{\alpha\beta} P(\beta) \right] x < 0, \quad \alpha \in S$$

for all non-zero $x \in \mathbb{R}^n$ and all $F(\alpha, t)$ satisfying (29). Thus, this control law is a stochastic quadratic guaranteed cost control with cost matrices $\tilde{P}(\alpha) = P(\alpha)/\varepsilon_\alpha$, $\alpha \in S$, which completes the proof.
Remark 4.3
The connection between stochastic quadratic stability and stochastic quadratic guaranteed cost control has been established in Theorem 4.2. Note that the bound obtained in Theorem 4.2 depends upon the initial condition $x_0$. Without loss of generality, we may assume that $x_0$ is a zero-mean random variable satisfying $E\{x_0 x_0^T\}=I$. Then the cost bound in Theorem 4.2 becomes

$$J_u(x_0,\alpha,0)\le \sum_{\alpha\in S}\operatorname{tr}[P(\alpha)]$$

where $\operatorname{tr}(P)$ denotes the trace of $P$.
Our last result in this section concerns the design of a stochastic quadratic guaranteed cost controller for the uncertain system (27).

Theorem 4.3
Suppose there exists a sequence $\{\epsilon_\alpha>0,\ \alpha\in S\}$ such that the set of coupled Riccati equations

$$
\begin{aligned}
&[A(\alpha)-B(\alpha)(\epsilon_\alpha R_2(\alpha)+E_2^T(\alpha)E_2(\alpha))^{-1}E_2^T(\alpha)E_1(\alpha)]^T P(\alpha)\\
&\quad+P(\alpha)[A(\alpha)-B(\alpha)(\epsilon_\alpha R_2(\alpha)+E_2^T(\alpha)E_2(\alpha))^{-1}E_2^T(\alpha)E_1(\alpha)]+\sum_{\beta\in S}q_{\alpha\beta}P(\beta)\\
&\quad+\epsilon_\alpha P(\alpha)H(\alpha)H^T(\alpha)P(\alpha)-\epsilon_\alpha P(\alpha)B(\alpha)(\epsilon_\alpha R_2(\alpha)+E_2^T(\alpha)E_2(\alpha))^{-1}B^T(\alpha)P(\alpha)\\
&\quad+\frac{1}{\epsilon_\alpha}E_1^T(\alpha)[I-E_2(\alpha)(\epsilon_\alpha R_2(\alpha)+E_2^T(\alpha)E_2(\alpha))^{-1}E_2^T(\alpha)]E_1(\alpha)+R_1(\alpha)=0\qquad (34)
\end{aligned}
$$

has a set of symmetric and positive-definite solutions $\{P(\alpha),\ \alpha\in S\}$. Then, given any sequence $\{\delta_\alpha>0,\ \alpha\in S\}$, there exists a set of symmetric and positive-definite matrices $\{\bar P(\alpha),\ \alpha\in S\}$ such that $P(\alpha)<\bar P(\alpha)<P(\alpha)+\delta_\alpha I$, $\alpha\in S$, and the control law

$$u(t)=-(\epsilon_\alpha R_2(\alpha)+E_2^T(\alpha)E_2(\alpha))^{-1}(\epsilon_\alpha B^T(\alpha)P(\alpha)+E_2^T(\alpha)E_1(\alpha))x(t)+d(\alpha)\qquad (35)$$

is a stochastic quadratic guaranteed cost control for the system (27) with cost matrix $\bar P(\alpha)$, $\alpha\in S$.

Proof. Let $L(\alpha)$, $\bar A(\alpha)$ and $\bar R_2(\alpha)$ be defined as

$$L(\alpha)=-(\epsilon_\alpha R_2(\alpha)+E_2^T(\alpha)E_2(\alpha))^{-1}(\epsilon_\alpha B^T(\alpha)P(\alpha)+E_2^T(\alpha)E_1(\alpha))$$
$$\bar A(\alpha)=A(\alpha)+B(\alpha)L(\alpha),\qquad \bar R_2(\alpha)=\epsilon_\alpha R_2(\alpha)+E_2^T(\alpha)E_2(\alpha).$$
Then, (34) can be rewritten as

$$
\bar A^T(\alpha)P(\alpha)+P(\alpha)\bar A(\alpha)+\sum_{\beta\in S}q_{\alpha\beta}P(\beta)
+\epsilon_\alpha P(\alpha)B(\alpha)\bar R_2^{-1}(\alpha)B^T(\alpha)P(\alpha)
+\epsilon_\alpha P(\alpha)H(\alpha)H^T(\alpha)P(\alpha)
+\frac{1}{\epsilon_\alpha}E_1^T(\alpha)[I-E_2(\alpha)\bar R_2^{-1}(\alpha)E_2^T(\alpha)]E_1(\alpha)+R_1(\alpha)=0,\qquad \alpha\in S\qquad (36)
$$

Define $\bar E(\alpha)=E_1(\alpha)+E_2(\alpha)L(\alpha)$ and $\bar R(\alpha)=R_1(\alpha)+L^T(\alpha)R_2(\alpha)L(\alpha)$. By standard matrix manipulation, it can be verified from (36) that

$$
\bar A^T(\alpha)P(\alpha)+P(\alpha)\bar A(\alpha)+\sum_{\beta\in S}q_{\alpha\beta}P(\beta)
+\epsilon_\alpha P(\alpha)H(\alpha)H^T(\alpha)P(\alpha)
+\frac{1}{\epsilon_\alpha}\bar E^T(\alpha)\bar E(\alpha)+\bar R(\alpha)=0,\qquad \alpha\in S\qquad (37)
$$
Now let $\hat P(\alpha)=\epsilon_\alpha P(\alpha)$. It follows from (37) that $\hat P(\alpha)$, $\alpha\in S$, satisfies

$$
\bar A^T(\alpha)\hat P(\alpha)+\hat P(\alpha)\bar A(\alpha)+\sum_{\beta\in S}q_{\alpha\beta}\hat P(\beta)
+\hat P(\alpha)H(\alpha)H^T(\alpha)\hat P(\alpha)+\bar E^T(\alpha)\bar E(\alpha)+\epsilon_\alpha\bar R(\alpha)=0,\qquad \alpha\in S\qquad (38)
$$

Hence, for any given $\epsilon_\alpha^0\in(0,\epsilon_\alpha)$, one has from (38)

$$
\bar A^T(\alpha)\hat P(\alpha)+\hat P(\alpha)\bar A(\alpha)+\sum_{\beta\in S}q_{\alpha\beta}\hat P(\beta)
+\hat P(\alpha)H(\alpha)H^T(\alpha)\hat P(\alpha)+\bar E^T(\alpha)\bar E(\alpha)+\epsilon_\alpha^0\bar R(\alpha)<0,\qquad \alpha\in S\qquad (39)
$$

Given now any sequence $\{\delta_\alpha>0,\ \alpha\in S\}$, we may choose the constant $\epsilon_\alpha^0\in(0,\epsilon_\alpha)$ such that

$$P(\alpha)<\bar P(\alpha):=\frac{\epsilon_\alpha}{\epsilon_\alpha^0}P(\alpha)=\frac{1}{\epsilon_\alpha^0}\hat P(\alpha)<P(\alpha)+\delta_\alpha I$$


It is obvious from (39) that $\bar P(\alpha)$, $\alpha\in S$, satisfies the inequality

$$
\bar A^T(\alpha)\bar P(\alpha)+\bar P(\alpha)\bar A(\alpha)+\sum_{\beta\in S}q_{\alpha\beta}\bar P(\beta)
+\epsilon_\alpha^0\bar P(\alpha)H(\alpha)H^T(\alpha)\bar P(\alpha)
+\frac{1}{\epsilon_\alpha^0}\bar E^T(\alpha)\bar E(\alpha)+\bar R(\alpha)<0,\qquad \alpha\in S\qquad (40)
$$

Finally, by using Lemma 4.2, we have from (40)

$$
\begin{aligned}
&[\bar A(\alpha)+H(\alpha)F(\alpha,t)\bar E(\alpha)]^T\bar P(\alpha)+\bar P(\alpha)[\bar A(\alpha)+H(\alpha)F(\alpha,t)\bar E(\alpha)]+\sum_{\beta\in S}q_{\alpha\beta}\bar P(\beta)+\bar R(\alpha)\\
&\quad\le \bar A^T(\alpha)\bar P(\alpha)+\bar P(\alpha)\bar A(\alpha)+\sum_{\beta\in S}q_{\alpha\beta}\bar P(\beta)+\epsilon_\alpha^0\bar P(\alpha)H(\alpha)H^T(\alpha)\bar P(\alpha)
+\frac{1}{\epsilon_\alpha^0}\bar E^T(\alpha)\bar E(\alpha)+\bar R(\alpha)<0,\qquad \alpha\in S
\end{aligned}
$$

which is the required result, and the proof ends.

Remark 4.4
It has been shown in Theorem 4.3 that the problem of robust stochastic quadratic guaranteed cost control for the uncertain system (27) can be solved if the coupled algebraic Riccati equations (34) have solutions $\{P(\alpha)>0,\ \alpha\in S\}$. This result covers, as the special case $S=\{1\}$, the corresponding single-mode result. Note that the selection of the scaling parameters $\{\epsilon_\alpha>0,\ \alpha\in S\}$ is important for the existence of appropriate solutions to (34); this is characterized in the following lemma.

Lemma 4.3
Suppose that (34) admits a set of symmetric and positive-definite solutions $\{P(\alpha),\ \alpha\in S\}$ for $\epsilon_\alpha=\epsilon>0$, $\alpha\in S$. Then, for any given $\epsilon_\alpha\in(0,\epsilon)$, $\alpha\in S$, (34) has a set of symmetric and positive-definite solutions as well.
Remark 4.5
By Theorem 4.3 and Lemma 4.3, we may optimize the guaranteed cost for the uncertain system (27) with cost function (30). Indeed, if for some scaling parameter sequence $\{\epsilon_\alpha>0,\ \alpha\in S\}$ there exists a set of matrices $\{P(\alpha)>0,\ \alpha\in S\}$ satisfying (34), then we can always find constants $\bar\epsilon_\alpha>0$, $\alpha\in S$, where $\bar\epsilon_\alpha$ is the supremum of the admissible $\epsilon_\alpha$, such that for any $\epsilon_\alpha\in(0,\bar\epsilon_\alpha)$, $\alpha\in S$, equation (34) admits a set of symmetric and positive-definite solutions. Therefore, the optimal bound for (30) can be obtained as

$$\min_{\epsilon_\alpha}\ \sum_{\alpha\in S}\operatorname{tr}[P(\alpha)]$$

by iterating $\epsilon_\alpha$ over $(0,\bar\epsilon_\alpha)$, $\alpha\in S$. In summary, the minimum robust quadratic guaranteed cost of system (27) with (30) can be obtained by solving the following optimization problem:

$$
\text{minimize}\quad \sum_{\alpha\in S}\operatorname{tr}[P(\alpha)]
\qquad
\text{subject to}\quad P(\alpha)>0,\ \epsilon_\alpha>0,\ \Phi_\alpha=0,\ \alpha\in S
$$

where $\Phi_\alpha$ stands for the left-hand side of (34).
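The sweep over the scaling parameters can be illustrated numerically. The sketch below treats a scalar two-mode instance of the reduced equations of Remark 4.6 (with $B(\alpha)\equiv 1$, $E_2(\alpha)\equiv 0$): for each candidate $\epsilon$, the coupled scalar Riccati equations are solved by a fixed-point iteration (each pass solves the quadratic in $p_\alpha$ with the coupling terms frozen), and the $\epsilon$ minimizing $\sum_\alpha \operatorname{tr}P(\alpha)$ is retained. All numerical data (a, h, e1, R1, R2 and the generator q) are illustrative placeholders, not the paper's values:

```python
import numpy as np

def solve_coupled_scalar(eps, a, h, e1, r1, r2, q, iters=500, tol=1e-12):
    """Fixed-point solve of the scalar coupled Riccati equations
       2*a_i*p_i + sum_j q_ij*p_j + eps*h_i^2*p_i^2 - p_i^2/r2_i
       + e1_i^2/eps + r1_i = 0,  one equation per mode i."""
    n = len(a)
    p = np.zeros(n)
    for _ in range(iters):
        p_new = np.empty(n)
        for i in range(n):
            c2 = eps * h[i] ** 2 - 1.0 / r2[i]
            if c2 >= 0:          # quadratic term not negative: eps infeasible
                return None
            c1 = 2.0 * a[i] + q[i, i]
            c0 = sum(q[i, j] * p[j] for j in range(n) if j != i) \
                 + e1[i] ** 2 / eps + r1[i]
            # positive root of c2*p^2 + c1*p + c0 = 0 (c2 < 0, c0 > 0)
            p_new[i] = (-c1 - np.sqrt(c1 ** 2 - 4.0 * c2 * c0)) / (2.0 * c2)
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return None                  # no convergence within the iteration budget

# illustrative two-mode data (placeholders)
a  = np.array([-1.0, -0.8]);  h  = np.array([0.1, 0.1])
e1 = np.array([0.1, 0.2]);    r1 = np.array([1.0, 1.0]);  r2 = np.array([1.0, 1.0])
q  = np.array([[-0.1, 0.1], [0.5, -0.5]])

# sweep eps and keep the feasible value minimizing the trace bound
best_eps, best_p = None, None
for eps in np.linspace(0.01, 5.0, 200):
    p = solve_coupled_scalar(eps, a, h, e1, r1, r2, q)
    if p is not None and (best_p is None or p.sum() < best_p.sum()):
        best_eps, best_p = eps, p
```

A common single `eps` is used across modes here purely for simplicity; the remark's formulation allows one scaling parameter per mode.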


Remark 4.6
In connection with the system (2) under consideration in Section 2, we may consider only the special structure of (27) with $B(\alpha)\equiv I$ and $E_2(\alpha)\equiv 0$, $\alpha\in S$; that is, the parameter uncertainty enters only through the matrix $A(\alpha)$, i.e. through $c(d(\alpha))$, $\alpha\in S$. In this situation, the coupled Riccati equations (34) and the controller (35) reduce to

$$
A^T(\alpha)P(\alpha)+P(\alpha)A(\alpha)+\sum_{\beta\in S}q_{\alpha\beta}P(\beta)
+\epsilon_\alpha P(\alpha)H(\alpha)H^T(\alpha)P(\alpha)
-P(\alpha)B(\alpha)R_2^{-1}(\alpha)B^T(\alpha)P(\alpha)
+\frac{1}{\epsilon_\alpha}E_1^T(\alpha)E_1(\alpha)+R_1(\alpha)=0
$$

and

$$u(t)=\max\big(-R_2^{-1}(\alpha)P(\alpha)x(t)+d(\alpha)\big).$$


5. SIMULATION RESULTS
To show the usefulness of our model, let us consider a production system consisting of one failure-prone machine producing one item. Let the Markov process $l_1(t)$ (describing the state of the machine at time $t$) have two modes $\{1,2\}$ ($l_1(t)=1$ means that the machine is up and can produce parts; $l_1(t)=2$ means that the machine is down and no part can be produced by this machine before the failure is fixed), and let its dynamics be described by the following transition matrix:

$$\Lambda_1=\begin{pmatrix}-0.01 & 0.01\\ 0.1 & -0.1\end{pmatrix}$$

Let $l_2(t)$ be the Markov process that affects the demand rate. Let us assume that this process has two modes, i.e. $\{1,2\}$, and let its dynamics be described by the following transition matrix:

$$\Lambda_2=\begin{pmatrix}-0.1 & 0.1\\ 0.5 & -0.5\end{pmatrix}$$

Let the demand rate $d(l_1(t),l_2(t))$ be defined by the following constants:

$$d(l(t))=\begin{cases}1.0 & \text{if } l_2(t)=1\\ 1.5 & \text{if } l_2(t)=2\end{cases}$$
Since the process $l_1(t)$ has two states and the process $l_2(t)$ also has two states, the combined Markov process $l(t)=(l_1(t),l_2(t))$ has four modes, which we index by $S=\{1,2,3,4\}$ with $1:=(1,1)$, $2:=(2,1)$, $3:=(1,2)$ and $4:=(2,2)$. Its dynamics are described by the following transition matrix (each diagonal entry is the negative of the corresponding row's off-diagonal sum):

$$\Lambda=\begin{pmatrix}
-0.11 & 0.01 & 0.1 & 0\\
0.1 & -0.2 & 0 & 0.1\\
0.5 & 0 & -0.51 & 0.01\\
0 & 0.5 & 0.1 & -0.6
\end{pmatrix}$$
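Because $l_1(t)$ and $l_2(t)$ jump independently, the combined generator is the Kronecker sum of $\Lambda_1$ and $\Lambda_2$ under the ordering above (with $l_1$ as the fast index). A quick NumPy check, as a sketch:

```python
import numpy as np

L1 = np.array([[-0.01, 0.01], [0.1, -0.1]])   # machine process l1(t)
L2 = np.array([[-0.1, 0.1], [0.5, -0.5]])     # demand process l2(t)

# Kronecker sum: l2 indexes the 2x2 blocks, l1 indexes within each block
L = np.kron(L2, np.eye(2)) + np.kron(np.eye(2), L1)

# every row of a generator matrix must sum to zero
assert np.allclose(L.sum(axis=1), 0.0)
```

This construction also confirms the top-left diagonal entry $-0.11$ (the negated sum of the failure rate 0.01 and the demand-switch rate 0.1).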


Let the deteriorating rate $c(d(l(t)))$ be defined as follows:

$$c(d(l(t)))=\begin{cases}0.01 & \text{if } l_2(t)=1\\ 0.05 & \text{if } l_2(t)=2\end{cases}$$
We will assume that the desired values for the production planning $x_p(t)$, the inventory planning $x_I(t)$ and the demand $x_d(t)$ are all constant and are, respectively, $x_p(t)=1.0$, $x_I(t)=1.0$ and $x_d(t)=1.0$. The corresponding differential equations are:

$$\dot x_p(t)=0,\qquad \dot x_I(t)=0,\qquad \dot x_d(t)=0$$

To put this new control problem in the standard jump linear quadratic form, we define the error variables $e_I(t)=x(t)-x_I(t)$ and $e_u(t)=u(t)-u_d(t)$. Then we use the extended dynamics of the system.
Let us also control the changes in the production rate. This can be done by extending our dynamics with the following differential equation:

$$\dot u(t)=v(t)$$

where $v(t)$ is a new control variable.
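Appending the integrator $\dot u(t)=v(t)$ to a linear system $\dot x(t)=Ax(t)+Bu(t)$ yields an augmented system in $z=[x;\,u]$ with $v$ as the new input; penalizing $v$ then penalizes changes in the production rate. A generic sketch of the augmentation (the A and B below are placeholder matrices, not the paper's model):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # placeholder plant matrix
B = np.array([[0.0], [1.0]])               # placeholder input matrix
n, m = B.shape

# z = [x; u]:  z' = [[A, B], [0, 0]] z + [[0], [I]] v
A_aug = np.block([[A, B],
                  [np.zeros((m, n)), np.zeros((m, m))]])
B_aug = np.vstack([np.zeros((n, m)), np.eye(m)])
```

The same augmentation applies mode by mode in the jump-linear setting, with $A$ and $B$ replaced by $A(\alpha)$ and $B(\alpha)$.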
The cost function in this case is given by

$$J_v(x_0,l_0,0)=E\left\{\int_0^\infty \big[c_1 e_I^2(t)+c_2 e_u^2(t)+c_3 v^2(t)\big]\,dt\ \Big|\ x_0,l_0\right\}\qquad (41)$$

Assume that the parameters $c_1$, $c_2$ and $c_3$ are given, respectively, by $c_1=1.0$, $c_2=2.0$ and $c_3=1.5$.
After solving the corresponding set of coupled Riccati equations, the tracking control law has the following gains:

$$K(l(t))=\begin{cases}
[0.802\ \ 3.893\ \ 3.868\ \ -0.056\ \ -3.953] & \text{when } l(t)=1\\
[0.0\ \ 0.0\ \ 0.0\ \ 0.0\ \ 0.0] & \text{when } l(t)=2\\
[0.750\ \ 3.867\ \ 3.826\ \ -0.094\ \ -4.104] & \text{when } l(t)=3\\
[0.0\ \ 0.0\ \ 0.0\ \ 0.0\ \ 0.0] & \text{when } l(t)=4
\end{cases}$$
A program to simulate our production system under the state feedback control law $v(x(t),l(t))=-K(l(t))x(t)$ has been developed using Matlab. The simulation results shown in Figure 1 confirm that we can track the production planning by acting on the production rate when the machine is subject to random breakdowns and the produced items deteriorate at a rate that depends on the stochastic demand rate. In this figure, we represent the behaviour of the errors $e_I(t)=y_1(t)$, $e_u(t)=y_2(t)$ and the mode $l(t)$ as functions of time $t$. The initial condition for the simulation is given by

$$x(0)=[5\ \ 0\ \ 1\ \ 1\ \ 1]\qquad (42)$$


Figure 1. Behaviour of the errors $y_1(t)$ and $y_2(t)$ vs. time
Our control variable $v(t)$ shows how to act on the production rate $u(t)$ in order to satisfy our objective, which consists of tracking the demand and the production planning.
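The Matlab simulation itself is not reproduced here, but the jump process $l(t)$ that drives it can be sampled directly from the combined generator $\Lambda$: hold in the current mode for an exponential time with rate $-\Lambda_{ii}$, then jump according to the normalized off-diagonal row. A minimal NumPy sketch (0-indexed modes, so indices 1 and 3 are the machine-down modes):

```python
import numpy as np

def simulate_ctmc(Q, i0, t_end, rng):
    """Sample one path of a continuous-time Markov chain with generator Q."""
    t, i = 0.0, i0
    times, states = [0.0], [i0]
    while t < t_end:
        rate = -Q[i, i]
        if rate <= 0:                      # absorbing mode: stop
            break
        t += rng.exponential(1.0 / rate)   # exponential holding time
        probs = Q[i].copy()
        probs[i] = 0.0
        probs /= rate                      # embedded jump distribution
        i = rng.choice(len(Q), p=probs)
        times.append(min(t, t_end))
        states.append(i)
    return np.array(times), np.array(states)

Q = np.array([[-0.11, 0.01, 0.10, 0.00],
              [ 0.10, -0.20, 0.00, 0.10],
              [ 0.50, 0.00, -0.51, 0.01],
              [ 0.00, 0.50, 0.10, -0.60]])

rng = np.random.default_rng(0)
times, states = simulate_ctmc(Q, i0=0, t_end=1000.0, rng=rng)

# fraction of time with the machine down (0-indexed modes 1 and 3, i.e. l1 = 2)
down = np.isin(states[:-1], [1, 3])
frac_down = np.sum(down * np.diff(times)) / times[-1]
```

Feeding such a sampled mode path into an ODE integrator for the closed-loop dynamics, switching the gain $K(l(t))$ at each jump, reproduces the kind of trajectories shown in Figure 1.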

6. CONCLUSION
In this paper, we dealt with the inventory-production control problem in which the random process representing the demand is modelled as a Markov chain with constant jump rates. The optimal policy minimizing the cost of the facilities has been determined. It has been shown that this problem can be solved if a set of coupled differential Riccati equations admits symmetric and positive-definite solutions. The guaranteed cost inventory-production control problem has also been investigated for the case where the deteriorating rates of the items are uncertain.

REFERENCES
1. Andijani, A. and M. Al-Dajani, 'Analysis of deteriorating inventory/production system using a linear quadratic regulator', European J. Oper. Res., 106(1), 82-89 (1987).
2. Haiping, B. and U. H. Wang, 'An economic ordering policy model for deteriorating items with time proportional demand', J. Oper. Res. Soc., 46(1), 21-27 (1990).
3. Hamid, B., 'Replenishment schedule for deteriorating items with time proportional demand', J. Oper. Res. Soc., 40(1), 75-81 (1989).
4. Heng, A., J. Labban and R. Linn, 'An order-level lot-size inventory model for deteriorating items with partial back-ordering', Comput. Ind. Engng, 20(2), 187-197 (1991).
5. Wee, H., 'Economic production lot size model for deteriorating items with partial back-ordering', Comput. Ind. Engng, 23(4), 449-458 (1993).
6. Willsky, A. S., 'A survey of design methods for failure detection in dynamic systems', Automatica, 12(5), 601-611 (1976).
7. Sworder, D. D. and R. O. Rogers, 'An LQ-solution to a control problem associated with a solar thermal central receiver', IEEE Trans. Automat. Control, 28(8), 971-978 (1983).
8. Athans, M., 'Command and control (C2) theory: a challenge to control science', IEEE Trans. Automat. Control, 32(2), 286-293 (1987).
9. Moerder, D. D., N. Halyo, J. R. Broussard and A. K. Caglayan, 'Application of precomputed control laws in a reconfigurable aircraft flight control system', J. Guidance, Control Dyn., 12(3), 325-333 (1989).
10. Boukas, E. K. and A. Haurie, 'Manufacturing flow control and preventive maintenance: a stochastic control approach', IEEE Trans. Automat. Control, 35(7), 1024-1031 (1990).
11. Boukas, E. K., Q. Zhang and G. Yin, 'Robust production and maintenance planning in stochastic manufacturing systems', IEEE Trans. Automat. Control, 40(6), 1098-1102 (1995).
12. Boukas, E. K. and H. Yang, 'Optimal control of manufacturing flow and preventive maintenance', IEEE Trans. Automat. Control, 41(6), 881-885 (1996).
13. Krasovskii, N. N. and E. A. Lidskii, 'Analysis design of controller in systems with random attributes, Part 1', Automatic Remote Control, 22, 1021-1025 (1961).
14. Boukas, E. K., 'Robust stability of linear piecewise deterministic systems under matching conditions', Control Theory Adv. Technol., 10(4), 1541-1549 (1995).
15. Rishel, R., 'Control of systems with jump Markov disturbances', IEEE Trans. Automat. Control, 24(4), 241-244 (1975).
16. Sworder, D. D., 'Feedback control of a class of linear systems with jump parameters', IEEE Trans. Automat. Control, 14(1), 9-14 (1969).
17. Mariton, M., Jump Linear Systems in Automatic Control, Marcel Dekker, New York, 1990.
18. Ji, Y. and H. J. Chizeck, 'Controllability, stabilizability, and continuous-time Markovian jump linear quadratic control', IEEE Trans. Automat. Control, 35(7), 777-788 (1990).
19. Costa, O. L. V. and M. D. Fragoso, 'Stability results for discrete-time linear systems with Markovian jumping parameters', J. Math. Anal. Appl., 179(2), 154-178 (1993).
20. Dufour, F. and P. Bertrand, 'The filtering problem for continuous-time linear systems with Markovian switching coefficients', System Control Lett., 23(5), 453-461 (1994).
21. Boukas, E. K., 'Control of systems with controlled jump Markov disturbances', Control Theory Adv. Technol., 9(2), 577-595 (1993).
22. Sethi, S. P. and Q. Zhang, Hierarchical Decision Making in Stochastic Manufacturing Systems, Birkhauser, Boston, 1994.
23. Abou-Kandil, H., G. Freiling and G. Jank, 'Solution and asymptotic behavior of coupled Riccati equations in jump linear systems', IEEE Trans. Automat. Control, 39(8), 1631-1636 (1994).
24. Salama, A. I. A. and V. Gourishankar, 'A computational algorithm for solving a system of coupled algebraic matrix Riccati equations', IEEE Trans. Comput., 23(1), 100-102 (1974).
25. Khargonekar, P. P., I. R. Petersen and K. Zhou, 'Robust stabilization of uncertain linear systems: quadratic stabilizability and H-infinity control theory', IEEE Trans. Automat. Control, 35(3), 356-361 (1990).
26. Zames, G., 'Feedback and optimal sensitivity: model reference transformation, multiplicative seminorms and approximate inverse', IEEE Trans. Automat. Control, 26(2), 301-320 (1981).
27. Petersen, I. R., 'A stabilization algorithm for a class of uncertain linear systems', System Control Lett., 8(4), 351-357 (1987).
28. Shi, P., 'Filtering on sampled-data systems with parametric uncertainty', IEEE Trans. Automat. Control, 43(12), 1022-1027 (1998).
29. Xie, L., P. Shi and C. E. de Souza, 'On designing controllers for a class of uncertain sampled-data nonlinear systems', IEE Proc.-Control Theory Appl., 140(2), 119-126 (1993).
30. Chang, S. S. L. and T. K. C. Peng, 'Adaptive guaranteed cost control of systems with uncertain parameters', IEEE Trans. Automat. Control, 17(4), 474-483 (1972).
31. Bernstein, D. S. and W. M. Haddad, 'Robust stability and performance via fixed-order dynamic compensation with guaranteed cost bounds', Math. Control Signals Systems, 3(6), 139-163 (1990).
32. Petersen, I. R., 'Guaranteed cost LQG control of uncertain linear systems', IEE Proc. D, 142(2), 95-102 (1995).
33. Geromel, J. C., P. L. D. Peres and S. R. Souza, 'H2 guaranteed cost control for uncertain discrete-time linear systems', Int. J. Control, 57(4), 853-864 (1993).
34. Xie, L. and Y. C. Soh, 'Guaranteed cost control of uncertain discrete-time systems', Control Theory Adv. Technol., 10(4), 1235-1251 (1995).
35. Benjelloun, K., E. K. Boukas and P. Shi, 'Robust stabilizability of uncertain linear systems with Markovian jumping parameters', Proc. 1997 American Control Conf., Albuquerque, NM, 1997, pp. 2120-2124.
36. Boukas, E. K. and P. Shi, 'H-infinity control for discrete-time linear systems with Markovian jumping parameters', Proc. 36th IEEE Conf. on Decision & Control, San Diego, CA, Vol. 4, 1997, pp. 4134-4139.
37. Shi, P. and E. K. Boukas, 'H-infinity control for Markovian jumping linear systems with parametric uncertainty', J. Optim. Theory Appl., 95(2), xx-xx (1997).
38. Dragan, V., P. Shi and E. K. Boukas, 'H-infinity control for singularly perturbed systems in the presence of Markovian jumping parameters', Proc. 36th IEEE Conf. on Decision & Control, San Diego, CA, Vol. 3, 1997, pp. 2163-2168.
39. Feng, X., K. A. Loparo, Y. Ji and H. J. Chizeck, 'Stochastic stability properties of jump linear systems', IEEE Trans. Automat. Control, 37(1), 38-53 (1992).
40. Costa, O. L. V. and E. K. Boukas, 'Necessary and sufficient condition for robust stability and stabilizability of continuous-time linear systems with Markovian jumps', preprint, 1997.
41. Xie, L., 'Output feedback H-infinity control of systems with parameter uncertainty', Int. J. Control, 63(4), 741-750 (1996).
42. Petersen, I. R. and D. C. McFarlane, 'Optimal guaranteed cost control and filtering for uncertain linear systems', IEEE Trans. Automat. Control, 39(9), 1971-1977 (1994).
43. Xie, L. and Y. C. Soh, 'Robust Kalman filtering for uncertain systems', System Control Lett., 22(2), 123-129 (1994).
44. Boukas, E. K. and F. AlSunni, 'Optimal tracker for unreliable manufacturing systems', submitted.
45. Boukas, E. K., 'Numerical methods for HJB equations of optimization problems for piecewise deterministic systems', Optimal Control Appl. Meth., 16, 41-58 (1995).
