
Systems & Control Letters 60 (2011) 450–459


Linear estimation for random delay systems


Huanshui Zhang a,*, Gang Feng b, Chunyan Han a,c

a School of Control Science and Engineering, Shandong University, Jinan 250061, Shandong, PR China
b Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong, Hong Kong
c School of Control Science and Engineering, University of Jinan, Jinan 250022, Shandong, PR China

Article info

Article history: Received 2 June 2010; received in revised form 27 November 2010; accepted 25 March 2011; available online 10 May 2011.

Keywords: Linear estimation; Random delay; Reorganized innovation analysis; Riccati equation; Convergence; Stability

Abstract

This paper is concerned with linear estimation problems for discrete-time systems with randomly delayed observations. When the random delay is known online, i.e., time-stamped, the randomly delayed system is first reconstructed as an equivalent delay-free one by using the measurement reorganization technique, and an optimal linear filter is then presented based on Kalman filtering. However, the optimal filter is time-varying and stochastic, and in general it does not converge to a steady state. An alternative suboptimal filter with deterministic gains is therefore developed under a new criterion. The performance of this estimator, in terms of its error covariance, is characterized, and its mean square stability is established. Finally, a numerical example is presented to illustrate the efficiency of the proposed estimators.

© 2011 Elsevier B.V. All rights reserved.

1. Introduction

As a result of the rapid development of communication networks, control and state estimation over networks have attracted great attention during the past few years (see, e.g., [1]). Feedback control systems in which the control loops are closed through a real-time network are called networked control systems (NCSs) (see, e.g., [2-6]). In NCSs, data typically travel through communication networks from sensors to controllers and from controllers to actuators. As a direct consequence of the finite bandwidth for data transmission over networks, time delay, either from sensors to controllers (sensor delay) or from controllers to actuators (controller delay), or both, is inevitable in networked systems where a common medium is used for data transfer. This time delay, whether constant, time-varying, or random, can degrade the performance of a control system if it is not given due consideration, and in many instances it can even destabilize the system. Hence, time delay is one of the challenging problems faced by control practitioners in NCSs.
* This work is supported by the Taishan Scholar Construction Engineering by Shandong Government, the National Natural Science Foundation for Distinguished Young Scholars of China (No. 60825304), and the Major State Basic Research Development Program of China (973 Program) (No. 2009CB320600). Corresponding author: H. Zhang, e-mail: hszhang@sdu.edu.cn. doi:10.1016/j.sysconle.2011.03.009.

The filtering problem for networked control systems with sensor delays, especially random sensor delays, has received much attention during the past few years (see, e.g., [7,8]). On
the filtering problems with intermittent observations, the initial work can be traced back to Nahi [9] and Hadidi [10]. Recently, this problem has been studied in [11,12], where the statistical convergence properties of the estimation error covariance were studied and the existence of a critical value for the arrival rate of observations was shown. For the situation where a one-step sensor delay is described by a binary white noise sequence, a reduced-order linear unbiased estimator was designed via state augmentation in [13]. When the random delay is characterized by a set of Bernoulli variables, unscented filtering algorithms [14], linear and quadratic least-squares estimation methods [15], optimal H2 and Kalman filtering [16,17], and H∞ filtering [18,19] have been developed. The rationale for modeling the random delay as a Bernoulli variable sequence has been justified in those papers. On the other hand, modeling the random delay as a finite-state Markov chain is also reasonable; relevant estimation results for this type of modeling can be found in [20,21] and the references therein. It can be noted that the results mentioned above mainly focus on systems with only one- or two-step delays. To the best of our knowledge, there exist few estimation results for systems with multiple random delays [22]. Furthermore, most existing results employ the state augmentation method to deal with random delays. In fact, the measurement reorganization method developed in our previous work [23] is also an effective tool for dealing with random delays without state augmentation.
In this paper, we investigate optimal and suboptimal linear estimators for discrete-time systems with random observation delays, where the delay may be larger than one sampling period and is known online via time-stamped data. The key technique for dealing with the random delay is the measurement reorganization method, by which the randomly delayed system is transformed into a delay-free one. An optimal linear filter is presented with time-varying, stochastic gains, and the solution to this estimator is also discussed. Furthermore, an alternative suboptimal estimator with deterministic gains is developed under a new performance index. Convergence and mean square stability of this estimator are established. It is shown that the stability of this estimator does not depend on the observation packet delay but only on the overall observation packet loss probability. Note that both filters (optimal and suboptimal) have the same dimension as the original system.

The remainder of this paper is organized as follows. In Section 2, we state the problem formulation. In Section 3, the reorganized observations are defined and an optimal filter is designed via the projection formula. In Section 4, an alternative suboptimal linear filter is developed under a new criterion, and its convergence and stability are discussed. Finally, a numerical example is given in Section 5, followed by some conclusions in Section 6.
Notations: Throughout this paper, $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space and $\mathbb{R}^{m \times n}$ the set of all $m \times n$ real matrices. For a real symmetric matrix $Q$, $Q > 0$ ($\ge 0$) means that $Q$ is positive definite (positive semi-definite), and $A > (\ge)\ B$ means $A - B > (\ge)\ 0$. $A'$ stands for the transpose of the matrix $A$. $Q^{-1}$ and $Q^{1/2}$ denote the inverse and a square root of the positive-definite matrix $Q$. $\mathrm{diag}\{\cdots\}$ denotes a block-diagonal matrix, $\mathcal{P}$ indicates the occurrence probability of an event, and $E\{\cdot\}$ denotes the mathematical expectation operator. As usual, we define $\mathcal{H} = L^2(\Omega, \mathcal{F}, \mathcal{P})$ as the Hilbert space of all square-summable random variables on the probability space $(\Omega, \mathcal{F}, \mathcal{P})$, equipped with the inner product $\langle x, y \rangle = E\{xy'\}$ and the induced norm $\|x\|^2 = \langle x, x \rangle$, where $\|\cdot\|$ represents the standard Euclidean norm. $\mathcal{L}\{y_1, \ldots, y_n\}$ denotes the linear space spanned by $y_1, \ldots, y_n$, $\mathrm{Proj}\{\cdot\}$ denotes the projection operator, and $\delta_{ts}$ is the Kronecker delta function. In addition, for a real-valued function $\varphi$ with domain $\mathcal{X}$, $\arg\min_{x \in \mathcal{X}} \varphi(x) = \{x \in \mathcal{X} : \varphi(x) = \min_{y \in \mathcal{X}} \varphi(y)\}$.

2. Problem formulation

Consider the following discrete-time system with randomly delayed measurements:

x(t+1) = \Phi x(t) + G w(t), with initial state x(0),   (2.1)
y(t) = H x(t - r(t)) + v(t - r(t)),   (2.2)

where $x(t) \in \mathbb{R}^n$ is the state, $w(t) \in \mathbb{R}^r$ is the input noise, $y(t) \in \mathbb{R}^p$ is the measurement, and $v(t) \in \mathbb{R}^p$ is the measurement noise. $r(t)$ is the random delay, whose probability distribution is known a priori. The following assumptions are made on the system (2.1)-(2.2).

Assumption 2.1. The initial state $x(0)$, $w(t)$ and $v(t)$ are zero-mean white noises with covariance matrices

E[x(0)x'(0)] = P_0,  E[w(t)w'(s)] = Q \delta_{ts},  E[v(t)v'(s)] = R \delta_{ts},

respectively, and $x(0)$, $w(t)$ and $v(t)$ are mutually independent.

Assumption 2.2. Measurements in (2.2) are time-stamped and transmitted through a digital communication network. The random delay $r(t)$ is bounded, $0 \le r(t) \le r$, where $r$ is given as the length of the memory buffer, and its probability distribution is $\mathcal{P}(r(t) = i) = \lambda_i$, $i = 0, \ldots, r$. Measurements that reach the receiver with a delay larger than $r$ are regarded as completely lost; thus the property $0 \le \sum_{i=0}^{r} \lambda_i \le 1$ holds. Also, $r(t)$ is independent of $x(0)$, $w(t)$ and $v(t)$.

Under Assumption 2.2, the possible received observations at time $t$, when $t \ge r$, are

y(t) = [y_0'(t) \cdots y_r'(t)]',   (2.3)

where

y_i(t) = \gamma_{i,t} H x(t-i) + \gamma_{i,t} v(t-i),  i = 0, \ldots, r,   (2.4)

with $\gamma_{i,t}$ a binary random variable indicating the arrival of the observation packet for the state $x(t-i)$ at time $t$, that is,

\gamma_{i,t} = 1 if the observation of the state x(t-i) is received at time t, and \gamma_{i,t} = 0 otherwise.   (2.5)

Obviously, $\mathcal{P}(\gamma_{i,t} = 1) = \lambda_i$, $i = 0, \ldots, r$. As is well known, in real-time control systems the state $x(t)$ can be observed at most once, so the $\gamma_{i,t}$, $i = 0, \ldots, r$, must satisfy the following property:

\gamma_{i,t+i}\,\gamma_{j,t+j} = 0,  i \ne j.   (2.6)

When $t < r$, the observation (2.3) is written as

y(t) = [y_0'(t) \cdots y_t'(t) \ 0 \cdots 0]',   (2.7)

where we set $y_i(t) \equiv 0$ for $t < i \le r$. The estimation problems considered in this paper can then be stated as follows.

Problem 1 (Optimal Linear Estimation). Given $\{y(s)\}_{s=0}^{t}$ and $\{\gamma_{i,s} : i = 0, \ldots, r;\ s = 0, \ldots, t\}$, find a linear minimum mean square error (LMMSE) estimate $\hat{x}(t \mid t)$ of $x(t)$.

Problem 2 (Suboptimal Linear Estimation). Given $\{y(s)\}_{s=0}^{t}$, find a suboptimal linear estimate $\hat{x}_e(t \mid t)$ of $x(t)$.

3. Optimal linear estimator

In this section, an analytical solution to the LMMSE estimation problem is presented by reorganizing the observation sequences and applying the projection formula in the Hilbert space.

3.1. Measurement reorganization

In view of the definition of $y(t)$, we define the new observations

\bar{y}_r(s) \triangleq [y_0'(s) \ y_1'(s+1) \ \cdots \ y_r'(s+r)]',  0 \le s \le t-r,   (3.1)
\bar{y}_{t-s}(s) \triangleq [y_0'(s) \ y_1'(s+1) \ \cdots \ y_{t-s}'(t)]',  t-r < s \le t.   (3.2)

Then $\bar{y}_{\cdot}(\cdot)$ is a delay-free observation which satisfies

\bar{y}_r(s) = H_r(s)x(s) + v_r(s),  0 \le s \le t-r,   (3.3)
\bar{y}_{t-s}(s) = H_{t-s}(s)x(s) + v_{t-s}(s),  t-r < s \le t,   (3.4)

where

H_r(s) = \mathrm{col}\{\gamma_{0,s}H, \ldots, \gamma_{r,s+r}H\},  H_{t-s}(s) = \mathrm{col}\{\gamma_{0,s}H, \ldots, \gamma_{t-s,t}H\},   (3.5)
v_r(s) = \mathrm{col}\{\gamma_{0,s}v(s), \ldots, \gamma_{r,s+r}v(s)\},  v_{t-s}(s) = \mathrm{col}\{\gamma_{0,s}v(s), \ldots, \gamma_{t-s,t}v(s)\},   (3.6)

with $v_r(s)$ and $v_{t-s}(s)$ zero-mean white noises with respective covariance matrices

R_r(s) = \mathrm{diag}\{\gamma_{0,s}R, \ldots, \gamma_{r,s+r}R\},   (3.7)
R_{t-s}(s) = \mathrm{diag}\{\gamma_{0,s}R, \ldots, \gamma_{t-s,t}R\}.   (3.8)

Then it is easy to know [23] that

\{\{\bar{y}_r(s)\}_{s=0}^{t-r};\ \{\bar{y}_{t-s}(s)\}_{s=t-r+1}^{t}\}   (3.9)

contains the same information as $\{y(s)\}_{s=0}^{t}$. Thus Problem 1 can be restated as: given the observations $\{\{\bar{y}_r(s)\}_{s=0}^{t-r};\ \{\bar{y}_{t-s}(s)\}_{s=t-r+1}^{t}\}$ and the information $\{\gamma_{i,s} : i = 0, \ldots, r;\ s = 0, \ldots, t\}$, find an LMMSE estimate $\hat{x}(t \mid t)$ of $x(t)$.
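The bookkeeping behind (3.1)-(3.8) is easy to mechanize. The following NumPy sketch (our own illustration with hypothetical helper names, not code from the paper) assembles the stacked observation, observation matrix and noise covariance of (3.1), (3.5) and (3.7) from buffered packets and arrival indicators:

```python
import numpy as np

def reorganize(s, L, H, R, gamma, y_blocks):
    """Assemble the reorganized observation at time s (a sketch).

    L is the number of stacked blocks minus one: L = r for s <= t - r,
    and L = t - s for t - r < s <= t.  gamma[i, k] is the indicator
    gamma_{i,k} of (2.5), and y_blocks[i][k] is the packet y_i(k) of
    (2.4) (a zero vector of length p when nothing arrives).
    """
    # bar_y_L(s) = col{y_0(s), y_1(s+1), ..., y_L(s+L)}        (3.1)/(3.2)
    y_bar = np.concatenate([y_blocks[i][s + i] for i in range(L + 1)])
    # H_L(s) = col{gamma_{i,s+i} H}                            (3.5)
    H_bar = np.vstack([gamma[i, s + i] * H for i in range(L + 1)])
    # R_L(s) = diag{gamma_{i,s+i} R}                           (3.7)/(3.8)
    R_bar = np.kron(np.diag([gamma[i, s + i] for i in range(L + 1)]), R)
    return y_bar, H_bar, R_bar
```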


As in the Kalman filter, we first define the innovation sequences associated with the reorganized observations (3.1) and (3.2).

Definition 3.1. Consider the new observations (3.1) and (3.2). For $0 \le s \le t-r$, define

\varepsilon_r(s) \triangleq \bar{y}_r(s) - \hat{\bar{y}}_r(s),   (3.10)

where $\hat{\bar{y}}_r(s)$ is the LMMSE estimate of $\bar{y}_r(s)$ given the observations

\{\bar{y}_r(0), \ldots, \bar{y}_r(s-1)\}.   (3.11)

For $s > t-r$, define

\varepsilon_{t-s}(s) \triangleq \bar{y}_{t-s}(s) - \hat{\bar{y}}_{t-s}(s),   (3.12)

where $\hat{\bar{y}}_{t-s}(s)$ is the LMMSE estimate of $\bar{y}_{t-s}(s)$ given the observations

\{\bar{y}_r(0), \ldots, \bar{y}_r(t-r);\ \bar{y}_{r-1}(t-r+1), \ldots, \bar{y}_{t-s+1}(s-1)\}.   (3.13)

The sequence $\{\varepsilon_i(s)\}$ defined above is indeed a white noise with zero mean and covariance $Q_{\varepsilon_i}(s)$, and it spans the same linear space as (3.9) (see [23]). It is usually termed the reorganized innovation. The optimal estimates based on the new innovation sequences can then be defined as follows.

Definition 3.2. Consider the given time instant $t$.

For $0 \le s \le t-r$, the estimate $\hat{x}(s, r)$ is defined by the recursion

\hat{x}(s+1, r) = \Phi\hat{x}(s, r) + K_r(s)[\bar{y}_r(s) - H_r(s)\hat{x}(s, r)],   (3.14)

where $K_r(s)$ is to be determined such that

E_{w,v}\|x(s+1) - \hat{x}(s+1, r)\|^2   (3.15)

is minimized. Further define

P_r(s) \triangleq E_{w,v}[\tilde{x}(s, r)\tilde{x}'(s, r)],   (3.16)

where

\tilde{x}(s, r) = x(s) - \hat{x}(s, r).   (3.17)

For $s > t-r$, the estimate $\hat{x}(s, t-s)$ is defined by

\hat{x}(s+1, t-s) = \Phi\hat{x}(s, t-s+1) + K_{t-s}(s)[\bar{y}_{t-s}(s) - H_{t-s}(s)\hat{x}(s, t-s+1)],   (3.18)

where $K_{t-s}(s)$ is to be determined such that

E_{w,v}\|x(s+1) - \hat{x}(s+1, t-s)\|^2   (3.19)

is minimized. Further define

P_{t-s+1}(s) \triangleq E_{w,v}[\tilde{x}(s, t-s+1)\tilde{x}'(s, t-s+1)],   (3.20)

where

\tilde{x}(s, t-s+1) = x(s) - \hat{x}(s, t-s+1).   (3.21)

Remark 3.1. From the above definitions, it can be observed [24] that the estimate $\hat{x}(s, r)$ is indeed the projection of $x(s)$ onto the linear space

\mathcal{L}\{\bar{y}_r(0), \ldots, \bar{y}_r(s-1)\} = \mathcal{L}\{\varepsilon_r(0), \ldots, \varepsilon_r(s-1)\},   (3.22)

and $\hat{x}(s, t-s+1)$ is the projection of $x(s)$ onto the linear space

\mathcal{L}\{\bar{y}_r(0), \ldots, \bar{y}_r(t-r);\ \bar{y}_{r-1}(t-r+1), \ldots, \bar{y}_{t-s+1}(s-1)\}
= \mathcal{L}\{\varepsilon_r(0), \ldots, \varepsilon_r(t-r);\ \varepsilon_{r-1}(t-r+1), \ldots, \varepsilon_{t-s+1}(s-1)\}.   (3.23)

Furthermore, the innovation sequences can be rewritten as

\varepsilon_r(s) = \bar{y}_r(s) - H_r(s)\hat{x}(s, r),  0 \le s \le t-r,   (3.24)
\varepsilon_{t-s}(s) = \bar{y}_{t-s}(s) - H_{t-s}(s)\hat{x}(s, t-s+1),  s > t-r.   (3.25)

3.2. Design of the optimal estimator

In this subsection we give the solution to the LMMSE estimation problem by applying the reorganized innovation sequences defined in the last subsection.

Theorem 3.1. Consider the system (2.1)-(2.2). The optimal estimate $\hat{x}(t \mid t)$ is given by

\hat{x}(t \mid t) = [I_n - K_0(t)H_0(t)]\hat{x}(t, 1) + K_0(t)\bar{y}_0(t),   (3.26)

where $K_0(t)$ is a solution to the equation

K_0(t)Q_{\varepsilon_0}(t) = P_1(t)H_0'(t),   (3.27)

with

Q_{\varepsilon_0}(t) = H_0(t)P_1(t)H_0'(t) + R_0(t),

and the estimate $\hat{x}(t, 1)$ is computed by the following iteration.

Step 1: Calculate $\hat{x}(s+1, r)$, $s = 0, \ldots, t-r$, by the Kalman recursion

\hat{x}(s+1, r) = \Phi_r(s)\hat{x}(s, r) + K_r(s)\bar{y}_r(s);  \hat{x}(0, r) = 0,   (3.28)

where

\Phi_r(s) = \Phi - K_r(s)H_r(s),   (3.29)
K_r(s)Q_{\varepsilon_r}(s) = \Phi P_r(s)H_r'(s),   (3.30)

with

Q_{\varepsilon_r}(s) = H_r(s)P_r(s)H_r'(s) + R_r(s),   (3.31)
P_r(s+1) = \Phi P_r(s)\Phi' - K_r(s)Q_{\varepsilon_r}(s)K_r'(s) + GQG';  P_r(0) = P_0.   (3.32)

Step 2: Calculate $\hat{x}(s+1, t-s)$ for $s = t-r+1, \ldots, t-1$ by the recursion

\hat{x}(s+1, t-s) = \Phi_{t-s}(s)\hat{x}(s, t-s+1) + K_{t-s}(s)\bar{y}_{t-s}(s),   (3.33)

where

\Phi_{t-s}(s) = \Phi - K_{t-s}(s)H_{t-s}(s),   (3.34)
K_{t-s}(s)Q_{\varepsilon_{t-s}}(s) = \Phi P_{t-s+1}(s)H_{t-s}'(s),   (3.35)

with

Q_{\varepsilon_{t-s}}(s) = H_{t-s}(s)P_{t-s+1}(s)H_{t-s}'(s) + R_{t-s}(s),   (3.36)
P_{t-s}(s+1) = \Phi P_{t-s+1}(s)\Phi' - K_{t-s}(s)Q_{\varepsilon_{t-s}}(s)K_{t-s}'(s) + GQG'.   (3.37)

Step 3: Set $s = t-1$ in (3.33); then $\hat{x}(t, 1)$ is obtained directly.

Proof. Note that $\hat{x}(t \mid t)$ is the projection of the state $x(t)$ onto the linear space spanned by $\{\{\varepsilon_r(s)\}_{s=0}^{t-r};\ \{\varepsilon_{t-s}(s)\}_{s=t-r+1}^{t}\}$. Since $\varepsilon$ is a white noise, the estimate $\hat{x}(t \mid t)$ is calculated by using the projection formula as

\hat{x}(t \mid t) = \mathrm{Proj}\{x(t) \mid \varepsilon_r(0), \ldots, \varepsilon_r(t-r);\ \varepsilon_{r-1}(t-r+1), \ldots, \varepsilon_1(t-1)\} + \mathrm{Proj}\{x(t) \mid \varepsilon_0(t)\}
= \hat{x}(t, 1) + \mathrm{Proj}\{x(t) \mid \varepsilon_0(t)\}.   (3.38)

Now we calculate $\mathrm{Proj}\{x(t) \mid \varepsilon_0(t)\}$. Since it is a linear combination of the innovation $\varepsilon_0(t)$, set

\mathrm{Proj}\{x(t) \mid \varepsilon_0(t)\} = K_0(t)\varepsilon_0(t),   (3.39)

where $K_0(t)$ is to be determined.

Case 1: When the covariance matrix $Q_{\varepsilon_0}(t)$ of the innovation $\varepsilon_0(t)$ is invertible, $\mathrm{Proj}\{x(t) \mid \varepsilon_0(t)\}$ is given by

\mathrm{Proj}\{x(t) \mid \varepsilon_0(t)\} = P_1(t)H_0'(t)Q_{\varepsilon_0}^{-1}(t)\varepsilon_0(t),   (3.40)

where $P_1(t)$ is as in (3.20). From (3.39) and (3.40), we obtain

K_0(t) = P_1(t)H_0'(t)Q_{\varepsilon_0}^{-1}(t),   (3.41)

which satisfies (3.27).

Case 2: When the covariance matrix of the innovation $\varepsilon_0(t)$ is singular, the matrix $K_0(t)$ is chosen to minimize $\|K_0(t)\varepsilon_0(t) - x(t)\|^2$, which yields

K_0(t)Q_{\varepsilon_0}(t) = P_1(t)H_0'(t),   (3.42)

which is (3.27). Note that $\varepsilon_0(t) = \bar{y}_0(t) - \hat{\bar{y}}_0(t) = \bar{y}_0(t) - H_0(t)\hat{x}(t, 1)$; then the covariance matrix of $\varepsilon_0(t)$ is given by

Q_{\varepsilon_0}(t) = H_0(t)P_1(t)H_0'(t) + R_0(t),   (3.43)

and (3.26) follows directly from (3.38).

For the case of $0 \le s \le t-r$, the optimal estimate $\hat{x}(s+1, r)$ is given as

\hat{x}(s+1, r) = \mathrm{Proj}\{x(s+1) \mid \varepsilon_r(0), \ldots, \varepsilon_r(s)\}
= \Phi\hat{x}(s, r) + \mathrm{Proj}\{x(s+1) \mid \varepsilon_r(s)\}
= \Phi\hat{x}(s, r) + K_r(s)\varepsilon_r(s).   (3.44)

With a similar discussion to that of (3.38)-(3.43), (3.28)-(3.31) are obtained immediately. Now we derive the expression for $P_r(s+1)$. In view of (2.1) and (3.28), we obtain

\tilde{x}(s+1, r) = x(s+1) - \hat{x}(s+1, r) = \Phi\tilde{x}(s, r) + Gw(s) - K_r(s)\varepsilon_r(s).   (3.45)

Since $\tilde{x}(s, r)$ is independent of $w(s)$, it follows from (3.45) that

P_r(s+1) + K_r(s)Q_{\varepsilon_r}(s)K_r'(s) = \Phi P_r(s)\Phi' + GQG',   (3.46)

which is (3.32).

Finally, following a similar procedure as in the determination of $K_r(s)$ and $P_r(s)$, we obtain (3.33)-(3.37). This completes the derivation of the solution to Problem 1. □

Remark 3.2. It should be pointed out that the Riccati equation (3.32) is not in standard form, as the innovation covariance matrix $Q_{\varepsilon_r}(s)$ may be singular. In order to solve the Riccati equation (3.32) for the filter design, we should first solve the linear equation (3.30), which may be unsolvable, or solvable but with an infinite number of solutions. Fortunately, it can be shown that Eq. (3.30) is always solvable and that any solution, if there exists more than one, yields the same result for the Riccati equation (3.32). A more detailed discussion can be found in the Appendix.

Remark 3.3. Compared to the existing work in [22], the contributions of our work can be described as follows: a new model for the system with finite multiple random observation delays is constructed, and another efficient method, the reorganized innovation analysis method, is employed for the design of the optimal estimator with random observation delays.

Remark 3.4. The LMMSE estimator described above is optimal for any observation packet delay generating process. However, from an engineering perspective it is also desirable to further characterize the performance of the estimator. In this scenario a natural performance metric is the expected error covariance, i.e., $E[P_{t+1|t}]$, where the expectation is taken with respect to the arrival binary random variables $\{\gamma_{i,s} : i = 0, \ldots, r;\ s = 0, \ldots, t\}$. Unfortunately, it is not clear whether this quantity converges to a steady state. Thus, instead of trying to obtain an upper bound on the expected error covariance of the time-varying LMMSE estimator, we will in the next section focus on a suboptimal filter with deterministic gains. That is, the filter gains will not contain the observation packet arrival binary random variables $\gamma_{\cdot,\cdot}$; instead, the gains will be determined by the probabilities of these random variables.
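Before turning to the suboptimal filter, we note that Theorem 3.1 is directly implementable. The sketch below (our own NumPy rendering, not the authors' code) performs one step of the recursions (3.28)-(3.32) and (3.33)-(3.37). Following Remark 3.2, the gain equation (3.30)/(3.35) is solved by least squares, which returns a valid solution even when the innovation covariance is singular; by the Appendix, any such solution yields the same covariance update.

```python
import numpy as np

def gain(Phi, P, H_bar, R_bar):
    """One solution K of K Q_eps = Phi P H_bar' (cf. (3.30)/(3.35)), with
    Q_eps = H_bar P H_bar' + R_bar of (3.31)/(3.36) possibly singular."""
    Q_eps = H_bar @ P @ H_bar.T + R_bar
    rhs = Phi @ P @ H_bar.T
    # Least squares on the transposed system: any solution yields the
    # same Riccati update (see the Appendix).
    K = np.linalg.lstsq(Q_eps.T, rhs.T, rcond=None)[0].T
    return K, Q_eps

def optimal_step(Phi, G, Q, H_bar, R_bar, xhat, P, y_bar):
    """One step of (3.28)-(3.32) (Step 1) or (3.33)-(3.37) (Step 2)."""
    K, Q_eps = gain(Phi, P, H_bar, R_bar)
    xhat_next = (Phi - K @ H_bar) @ xhat + K @ y_bar            # (3.28)/(3.33)
    P_next = Phi @ P @ Phi.T - K @ Q_eps @ K.T + G @ Q @ G.T    # (3.32)/(3.37)
    return xhat_next, P_next
```

The filtered update (3.26)-(3.27) follows from the same gain computation with $\Phi$ replaced by the identity.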
4. Suboptimal linear estimator

In this section, we propose a new suboptimal estimator with deterministic gains by minimizing the mean square estimation error, where the statistics of the observation packet arrival random variables are used. We then prove the convergence and mean square stability of the new estimator under standard assumptions.

4.1. Design of the suboptimal estimator

Similar to the discussion in Section 3, we first employ the reorganized observation method to transform the randomly delayed system into one with multiplicative noises but without delays. The resulting system has the same description as (3.1)-(3.6), while the covariance matrices of $v_r(s)$ and $v_{t-s}(s)$ are now described as

\bar{R}_r = \mathrm{diag}\{\lambda_0 R, \ldots, \lambda_r R\},  0 \le s \le t-r,   (4.1)
\bar{R}_{t-s} = \mathrm{diag}\{\lambda_0 R, \ldots, \lambda_{t-s} R\},  t-r < s \le t.   (4.2)

Also, it can be shown [23] that

\{\{\bar{y}_r(s)\}_{s=0}^{t-r};\ \{\bar{y}_{t-s}(s)\}_{s=t-r+1}^{t}\}   (4.3)

spans the same linear space as $\{y(s)\}_{s=0}^{t}$.
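As a quick sanity check on (4.1), one can simulate the arrival process of Section 2 and compare the sample covariance of the stacked noise $v_r(s)$ with $\bar{R}_r$. In the sketch below (ours; NumPy only), the delays $r(t)$ are drawn i.i.d. with $\mathcal{P}(r(t) = i) = \lambda_i$, and at most one indicator fires per state, so the exclusivity property (2.6) holds by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([0.8, 0.1, 0.1])    # example arrival probabilities, sum <= 1
r, T = len(lam) - 1, 200_000
p_loss = 1.0 - lam.sum()           # delay > r means the packet is lost

# gamma[i, t] = 1 iff the packet for state x(t - i) arrives at time t  (2.5)
gamma = np.zeros((r + 1, T + r), dtype=int)
for t in range(T):
    d = rng.choice(r + 2, p=np.append(lam, p_loss))  # d = r + 1 encodes loss
    if d <= r:
        gamma[d, t + d] = 1        # at most one packet per state: (2.6)

# Monte Carlo check of (4.1) for scalar R = 1: the stacked noise
# v_r(s) = col{gamma_{i,s+i} v(s)} should have covariance diag{lambda_i}.
v = rng.standard_normal(T)
V = np.stack([gamma[i, np.arange(T - r) + i] * v[:T - r] for i in range(r + 1)])
print(np.round(V @ V.T / (T - r), 3))   # approximately diag(0.8, 0.1, 0.1)
```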

Definition 4.1. Given (2.1), (3.3) and (3.4), the linear suboptimal estimate $\hat{x}_e(t \mid t)$ of $x(t)$ with deterministic gains is defined as

\hat{x}_e(t \mid t) \triangleq \hat{x}_e(t, 1) + K_0(t)[\bar{y}_0(t) - H_0(t)\hat{x}_e(t, 1)],   (4.4)

where $K_0(t)$ is to be determined such that

E_{\gamma,w,v}\|x(t) - \hat{x}_e(t \mid t)\|^2   (4.5)

is minimized, and $\hat{x}_e(t, 1)$ is defined as follows.

For $0 \le s \le t-r$, define

\hat{x}_e(s+1, r) \triangleq \Phi\hat{x}_e(s, r) + K_r(s)[\bar{y}_r(s) - H_r(s)\hat{x}_e(s, r)],   (4.6)

with $K_r(s)$ to be determined such that

E_{\gamma,w,v}\|x(s+1) - \hat{x}_e(s+1, r)\|^2   (4.7)

is minimized.

For $t-r < s \le t$, define

\hat{x}_e(s+1, t-s) \triangleq \Phi\hat{x}_e(s, t-s+1) + K_{t-s}(s)[\bar{y}_{t-s}(s) - H_{t-s}(s)\hat{x}_e(s, t-s+1)],   (4.8)

with $K_{t-s}(s)$ to be determined such that

E_{\gamma,w,v}\|x(s+1) - \hat{x}_e(s+1, t-s)\|^2   (4.9)

is minimized.

So Problem 2 is to find an estimator as defined in Definition 4.1. Furthermore, we define the covariance matrices of the estimation errors

P_r(s+1) \triangleq E_{\gamma,w,v}[\tilde{x}_e(s+1, r)\tilde{x}_e'(s+1, r)],   (4.10)
P_{t-s}(s+1) \triangleq E_{\gamma,w,v}[\tilde{x}_e(s+1, t-s)\tilde{x}_e'(s+1, t-s)],   (4.11)

where

\tilde{x}_e(s+1, r) = x(s+1) - \hat{x}_e(s+1, r),  \tilde{x}_e(s+1, t-s) = x(s+1) - \hat{x}_e(s+1, t-s).   (4.12)

Remark 4.1. Note from the criteria (3.15) (or (3.19)) and (4.7) (or (4.9)) that the estimate $\hat{x}_e(s, r)$ or $\hat{x}_e(s, t-s+1)$ defined in Definition 4.1 is different from the one defined in Definition 3.2: the expectation in (4.7) (or (4.9)) is taken over $w$, $v$ and $\{\gamma_{\cdot,\cdot}\}$ simultaneously.

Then one has the following result on the suboptimal estimation.

Theorem 4.1. The suboptimal estimate $\hat{x}_e(t \mid t)$ with deterministic gains is given by

\hat{x}_e(t \mid t) = [I - K_0(t)H_0(t)]\hat{x}_e(t, 1) + K_0(t)\bar{y}_0(t),   (4.13)

where

K_0(t) = P_1(t)\bar{H}_0'[\lambda_0 HP_1(t)H' + \lambda_0 R]^{-1},   (4.14)
\bar{H}_0 = E[H_0(t)] = \lambda_0 H,   (4.15)

and $\hat{x}_e(t, 1)$ is derived by the following iterations.

For $0 \le s \le t-r$, the estimate $\hat{x}_e(s+1, r)$ of Definition 4.1 is calculated as

\hat{x}_e(s+1, r) = [\Phi - K_r(s)H_r(s)]\hat{x}_e(s, r) + K_r(s)\bar{y}_r(s),  \hat{x}_e(0, r) = 0,   (4.16)

where

K_r(s) = [\Phi P_r(s)\bar{H}_r'][\mathrm{diag}\{\lambda_0 HP_r(s)H', \ldots, \lambda_r HP_r(s)H'\} + \bar{R}_r]^{-1},   (4.17)
P_r(s+1) = \Phi P_r(s)\Phi' - \sum_{i=0}^{r}\lambda_i \Phi P_r(s)H'[HP_r(s)H' + R]^{-1}HP_r(s)\Phi' + GQG',  P_r(0) = P_0,   (4.18)
\bar{H}_r = E[H_r(s)] = \mathrm{col}\{\lambda_0 H, \ldots, \lambda_r H\}.   (4.19)

For $t-r < s \le t$, the estimate $\hat{x}_e(s+1, t-s)$ of Definition 4.1 is calculated as

\hat{x}_e(s+1, t-s) = [\Phi - K_{t-s}(s)H_{t-s}(s)]\hat{x}_e(s, t-s+1) + K_{t-s}(s)\bar{y}_{t-s}(s),   (4.20)

initialized with $\hat{x}_e(t-r+1, r)$ computed from (4.16), where

K_{t-s}(s) = [\Phi P_{t-s+1}(s)\bar{H}_{t-s}'][\mathrm{diag}\{\lambda_0 HP_{t-s+1}(s)H', \ldots, \lambda_{t-s}HP_{t-s+1}(s)H'\} + \bar{R}_{t-s}]^{-1},   (4.21)
P_{t-s}(s+1) = \Phi P_{t-s+1}(s)\Phi' - \sum_{i=0}^{t-s}\lambda_i \Phi P_{t-s+1}(s)H'[HP_{t-s+1}(s)H' + R]^{-1}HP_{t-s+1}(s)\Phi' + GQG',   (4.22)
\bar{H}_{t-s} = E[H_{t-s}(s)] = \mathrm{col}\{\lambda_0 H, \ldots, \lambda_{t-s}H\},   (4.23)

with $P_r(t-r+1)$ given by (4.18).

Proof. Based on the definitions (4.6) and (4.8), we introduce the notations

e_r(s) = \bar{y}_r(s) - H_r(s)\hat{x}_e(s, r),  s \le t-r,   (4.24)
e_{t-s}(s) = \bar{y}_{t-s}(s) - H_{t-s}(s)\hat{x}_e(s, t-s+1),  s > t-r.   (4.25)

First, for $s = t$, we have from (4.4) that

E_{\gamma,w,v}\|x(t) - \hat{x}_e(t \mid t)\|^2
= P_1(t) + K_0(t)M(t)K_0'(t) - \langle\tilde{x}_e(t,1), e_0(t)\rangle K_0'(t) - K_0(t)\langle e_0(t), \tilde{x}_e(t,1)\rangle
= P_1(t) - K_0^*(t)M(t)[K_0^*(t)]' + [K_0(t) - K_0^*(t)]M(t)[K_0(t) - K_0^*(t)]',   (4.26)

where $M(t) = \langle e_0(t), e_0(t)\rangle$ and

K_0^*(t) = \langle\tilde{x}_e(t,1), e_0(t)\rangle M^{-1}(t).   (4.27)

It is clear that $E_{\gamma,w,v}\|x(t) - \hat{x}_e(t \mid t)\|^2$ is minimized if we choose $K_0(t) = K_0^*(t)$. Now we calculate $K_0^*(t)$. In view of (4.25), we obtain

\langle\tilde{x}_e(t,1), e_0(t)\rangle = \langle\tilde{x}_e(t,1), H_0(t)\tilde{x}_e(t,1) + v_0(t)\rangle = P_1(t)\bar{H}_0',   (4.28)
M(t) = \langle H_0(t)\tilde{x}_e(t,1) + v_0(t), H_0(t)\tilde{x}_e(t,1) + v_0(t)\rangle = \lambda_0 HP_1(t)H' + \lambda_0 R.   (4.29)

Substituting (4.28) and (4.29) into (4.27), we obtain the expression (4.14).

Next, for $s \le t-r$, it follows from (4.6) and (2.1) that

\tilde{x}_e(s+1, r) = [\Phi - K_r(s)H_r(s)]\tilde{x}_e(s, r) + Gw(s) - K_r(s)v_r(s).   (4.30)

It follows from (4.30) that

P_r(s+1) \triangleq E_{\gamma,w,v}\|x(s+1) - \hat{x}_e(s+1, r)\|^2
= \Phi P_r(s)\Phi' - \Phi P_r(s)\bar{H}_r'K_r'(s) - K_r(s)\bar{H}_rP_r(s)\Phi' + GQG' + K_r(s)[\mathrm{diag}\{\lambda_0 HP_r(s)H', \ldots, \lambda_r HP_r(s)H'\} + \bar{R}_r]K_r'(s)
= [K_r(s) - K_r^*(s)][\mathrm{diag}\{\lambda_0 HP_r(s)H', \ldots, \lambda_r HP_r(s)H'\} + \bar{R}_r][K_r(s) - K_r^*(s)]'
  - \sum_{i=0}^{r}\lambda_i\Phi P_r(s)H'(HP_r(s)H' + R)^{-1}HP_r(s)\Phi' + \Phi P_r(s)\Phi' + GQG',   (4.31)

where

K_r^*(s) = [\Phi P_r(s)\bar{H}_r'][\mathrm{diag}\{\lambda_0 HP_r(s)H', \ldots, \lambda_r HP_r(s)H'\} + \bar{R}_r]^{-1}.   (4.32)

It is obvious that $E_{\gamma,w,v}\|\tilde{x}_e(s+1, r)\|^2$ is minimized if we choose $K_r(s) = K_r^*(s)$, with $P_r(s)$ then satisfying

P_r(s+1) = \Phi P_r(s)\Phi' - \sum_{i=0}^{r}\lambda_i\Phi P_r(s)H'(HP_r(s)H' + R)^{-1}HP_r(s)\Phi' + GQG'.   (4.33)

Finally, following a similar procedure as in the determination of $K_r(s)$ and $P_r(s)$, we can obtain (4.21)-(4.22). This completes the proof of Theorem 4.1. □
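A minimal NumPy sketch of the deterministic-gain recursion (4.16)-(4.18) follows (our own illustration; it assumes every $\lambda_i > 0$ so that the block-diagonal matrix in (4.17) is invertible). Only the gains are computed offline from the probabilities; the per-step matrix $H_r(s)$ in (4.16) still uses the realized arrival indicators:

```python
import numpy as np

def subopt_gain(Phi, P, H, R, lam):
    """Deterministic gain (4.17): K = [Phi P Hbar'][diag{lam_i H P H'} + Rbar]^{-1},
    with Hbar = col{lam_i H} of (4.19) and Rbar = diag{lam_i R} of (4.1).
    Assumes all lam_i > 0 (drop zero-probability blocks otherwise)."""
    Hbar = np.vstack([li * H for li in lam])
    S = np.kron(np.diag(lam), H @ P @ H.T + R)  # = diag{lam_i H P H'} + Rbar
    return Phi @ P @ Hbar.T @ np.linalg.inv(S)

def subopt_step(Phi, G, Q, H, R, lam, xe, P, H_s, y_bar):
    """One step of (4.16) and (4.18); H_s is the realized H_r(s) of (3.5)."""
    K = subopt_gain(Phi, P, H, R, lam)
    xe_next = (Phi - K @ H_s) @ xe + K @ y_bar                             # (4.16)
    F = Phi @ P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    P_next = Phi @ P @ Phi.T - sum(lam) * F @ H @ P @ Phi.T + G @ Q @ G.T  # (4.18)
    return xe_next, P_next
```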

Remark 4.2. The advantage of the suboptimal filter is that it leads to a deterministic time-varying filter which is easy to implement, and all of its calculations (the gain matrices) can be done offline. That is to say, all the heavy calculations can be made before we start collecting the data, since the gain matrices of the filter are not data-dependent. Also, the deterministic gains allow us to analyze the convergence and mean square stability of the filter. Moreover, it can be shown that stochastic LQ control is dual to the proposed suboptimal estimator for multiplicative noise systems; thus, the estimator can be applied to solve control problems with random input delays and packet dropping via this duality.

4.2. Convergence of the suboptimal estimator

We first make the following assumptions.

Assumption 4.1. $(\Phi, H)$ is stabilizable.

Assumption 4.2. $(\Phi, Q^{1/2}G')$ is observable.

In the following, our aim is to show that, for an arbitrary but fixed nonnegative symmetric initial condition $P_r(0)$, the matrices $P_r(t-r), P_{r-1}(t-r+1), \ldots, P_0(t)$ are convergent as $t \to \infty$. At first, we present some useful lemmas without proof, since similar derivations can be found in [11].

Lemma 4.1. Consider the operators

g_\lambda(X) = \Phi X\Phi' + GQG' - \lambda\Phi XH'(HXH' + R)^{-1}HX\Phi',   (4.34)
\phi_\lambda(K, X) = [\Phi - \lambda KH]X[\Phi - \lambda KH]' + \lambda(1-\lambda)KHXH'K' + GQG' + \lambda KRK',   (4.35)

where $0 \le \lambda \le 1$. Assume $X \in \mathcal{S} = \{S \in \mathbb{R}^{n \times n} \mid S \ge 0\}$, $R > 0$, $Q \ge 0$, and $(\Phi, GQ^{1/2})$ is stabilizable. Then the following facts are true:

(i) With $K_X = \Phi XH'(HXH' + R)^{-1}$, $g_\lambda(X) = \phi_\lambda(K_X, X)$.
(ii) $g_\lambda(X) = \min_K \phi_\lambda(K, X) \le \phi_\lambda(F, X)$, for all $F$.
(iii) If $X \le Y$, then $g_\lambda(X) \le g_\lambda(Y)$.

Lemma 4.2. Given $0 \le \lambda \le 1$ and $K$, define the linear operator

\mathcal{L}(X) = [\Phi - \lambda KH]X[\Phi - \lambda KH]' + \lambda(1-\lambda)(KH)X(KH)'.

Suppose there exists $\bar{X} > 0$ such that $\bar{X} > \mathcal{L}(\bar{X})$. Then:

(i) For all $W \ge 0$, $\lim_{t\to\infty}\mathcal{L}^t(W) = 0$.
(ii) Let $V > 0$ and consider the linear system $X_{t+1} = \mathcal{L}(X_t) + V$ initialized at $X_0$. Then the sequence $\{X_t\}$ is bounded.

Lemma 4.3. Consider the operator $g_\lambda(X)$ defined in (4.34). Suppose there exists a positive-definite matrix $\bar{P}$ such that $\bar{P} > g_\lambda(\bar{P})$. Then for any $P_0 \ge 0$, the sequence $P(t) = g_\lambda^t(P_0)$ is bounded; i.e., there exists $M_{P_0} \ge 0$, dependent on $P_0$, such that $P(t) \le M_{P_0}$ for all $t$.

With these lemmas, one can obtain the following results on the convergence properties of the Riccati difference equations in Theorem 4.1.

Theorem 4.2. Consider the suboptimal estimators (4.13)-(4.22). Suppose there exists a positive-definite matrix $\bar{P}$ such that

\bar{P} > g_{\lambda_0,\ldots,\lambda_r}(\bar{P}),   (4.36)

where

g_{\lambda_0,\ldots,\lambda_r}(P) = \Phi P\Phi' - \sum_{i=0}^{r}\lambda_i\Phi PH'(HPH' + R)^{-1}HP\Phi' + GQG'.

Then for any initial condition $P_r(0) \ge 0$, the Riccati difference equations for $P_r(t-r), P_{r-1}(t-r+1), \ldots, P_0(t)$ converge, as $t \to \infty$, to the unique solutions of the set of algebraic equations

P_r = \Phi P_r\Phi' - \sum_{i=0}^{r}\lambda_i\Phi P_rH'(HP_rH' + R)^{-1}HP_r\Phi' + GQG',   (4.37)
P_{r-1} = \Phi P_r\Phi' - \sum_{i=0}^{r-1}\lambda_i\Phi P_rH'(HP_rH' + R)^{-1}HP_r\Phi' + GQG',   (4.38)
...
P_0 = \Phi P_1\Phi' - \lambda_0\Phi P_1H'(HP_1H' + R)^{-1}HP_1\Phi' + GQG'.   (4.39)

And the corresponding estimators $\hat{x}_e(t-r+1, r), \hat{x}_e(t-r+2, r-1), \ldots, \hat{x}_e(t \mid t)$ converge to a set of constant-gain estimators,

\hat{x}_e(t-r+1, r) = [\Phi - K_rH_r(t-r)]\hat{x}_e(t-r, r) + K_r\bar{y}_r(t-r),   (4.40)
...
\hat{x}_e(t, 1) = [\Phi - K_1H_1(t-1)]\hat{x}_e(t-1, 2) + K_1\bar{y}_1(t-1),   (4.41)
\hat{x}_e(t \mid t) = [I - K_0H_0(t)]\hat{x}_e(t, 1) + K_0\bar{y}_0(t),   (4.42)

where

K_r = \Phi P_r\bar{H}_r'[\mathrm{diag}\{\lambda_0 HP_rH', \ldots, \lambda_r HP_rH'\} + \bar{R}_r]^{-1},
...
K_1 = \Phi P_2\bar{H}_1'[\mathrm{diag}\{\lambda_0 HP_2H', \lambda_1 HP_2H'\} + \bar{R}_1]^{-1},
K_0 = P_1\bar{H}_0'[\lambda_0 HP_1H' + \lambda_0 R]^{-1}.

Proof. To derive the claimed results, we just need to analyze the convergence of the Riccati difference equations (4.18) and (4.22). The whole derivation is divided into three stages.

(i) Convergence of the Riccati equation (4.18). First, denote

g_{\lambda_0,\ldots,\lambda_r}(X) = \Phi X\Phi' + GQG' - \sum_{i=0}^{r}\lambda_i\Phi XH'(HXH' + R)^{-1}HX\Phi';   (4.43)

then $P_r(s+1) = g_{\lambda_0,\ldots,\lambda_r}(P_r(s))$, i.e., $P_r(s) = g^{s}_{\lambda_0,\ldots,\lambda_r}(P_r(0))$. Recalling Assumption 2.2, we have $0 \le \sum_{i=0}^{r}\lambda_i \le 1$; thus (4.43) satisfies the conditions of Lemma 4.1 with $\lambda = \sum_{i=0}^{r}\lambda_i$. Consider the Riccati equation (4.18) initialized at $\hat{Q}_r(0) = 0$ and let $\hat{Q}_r(s) = g^{s}_{\lambda_0,\ldots,\lambda_r}(0)$; then $0 = \hat{Q}_r(0) \le \hat{Q}_r(1)$. It follows from Lemma 4.1(iii) that

\hat{Q}_r(1) = g_{\lambda_0,\ldots,\lambda_r}(\hat{Q}_r(0)) \le g_{\lambda_0,\ldots,\lambda_r}(\hat{Q}_r(1)) = \hat{Q}_r(2).

A simple inductive argument establishes that

0 = \hat{Q}_r(0) \le \hat{Q}_r(1) \le \hat{Q}_r(2) \le \cdots \le M_0,

where we have used Lemma 4.3 to bound the sequence $\{\hat{Q}_r(s)\}$. We thus have a monotone nondecreasing sequence of matrices bounded above, so the limit $P_r = \lim_{s\to\infty}\hat{Q}_r(s)$ exists and is a fixed point of the Riccati iteration:

P_r = \Phi P_r\Phi' + GQG' - \sum_{i=0}^{r}\lambda_i\Phi P_rH'(HP_rH' + R)^{-1}HP_r\Phi'.   (4.44)

Next, we show that the Riccati iteration initialized at $\hat{R}_r(0) \ge P_r$ also converges, and to the same limit $P_r$. Define the matrix $K_{P_r} = \Phi P_rH'(HP_rH' + R)^{-1}$ and consider the linear operator

\hat{\mathcal{L}}(X) = \Big[\Phi - \Big(\sum_{i=0}^{r}\lambda_i\Big)K_{P_r}H\Big]X\Big[\Phi - \Big(\sum_{i=0}^{r}\lambda_i\Big)K_{P_r}H\Big]' + \Big(\sum_{i=0}^{r}\lambda_i\Big)\Big(1 - \sum_{i=0}^{r}\lambda_i\Big)K_{P_r}HXH'K_{P_r}'.

Observe that

P_r = g_{\lambda_0,\ldots,\lambda_r}(P_r) = \hat{\mathcal{L}}(P_r) + GQG' + \Big(\sum_{i=0}^{r}\lambda_i\Big)K_{P_r}RK_{P_r}' > \hat{\mathcal{L}}(P_r).

Thus $\hat{\mathcal{L}}$ satisfies the condition of Lemma 4.2; consequently, for all $X \ge 0$, $\lim_{s\to\infty}\hat{\mathcal{L}}^s(X) = 0$. Now suppose $\hat{R}_r(0) \ge P_r$. Then

\hat{R}_r(1) = g_{\lambda_0,\ldots,\lambda_r}(\hat{R}_r(0)) \ge g_{\lambda_0,\ldots,\lambda_r}(P_r) = P_r.   (4.45)

A simple inductive argument establishes that $\hat{R}_r(s) \ge P_r$ for all $s$. Note that

0 \le (\hat{R}_r(s+1) - P_r) = g_{\lambda_0,\ldots,\lambda_r}(\hat{R}_r(s)) - g_{\lambda_0,\ldots,\lambda_r}(P_r)
= \phi(K_{\hat{R}_r(s)}, \hat{R}_r(s)) - \phi(K_{P_r}, P_r)
\le \phi(K_{P_r}, \hat{R}_r(s)) - \phi(K_{P_r}, P_r)
= \hat{\mathcal{L}}(\hat{R}_r(s) - P_r) \le \hat{\mathcal{L}}^2(\hat{R}_r(s-1) - P_r) \le \cdots \le \hat{\mathcal{L}}^{s+1}(\hat{R}_r(0) - P_r).

Then, in light of Lemma 4.2 and taking limits on both sides of the above inequalities, one obtains

0 \le \lim_{s\to\infty}(\hat{R}_r(s+1) - P_r) \le 0,

so the Riccati iteration initialized at $\hat{R}_r(0) \ge P_r$ converges to the same limit $P_r$.

Now we establish that the Riccati iteration converges to $P_r$ for every initial condition $P_r(0) \ge 0$. Define $\hat{Q}_r(0) = 0$ and $\hat{R}_r(0) = P_r(0) + P_r$, and consider the three Riccati iterations initialized at $\hat{Q}_r(0)$, $P_r(0)$ and $\hat{R}_r(0)$. Note that

\hat{Q}_r(0) \le P_r(0) \le \hat{R}_r(0).

It then follows from Lemma 4.1(iii) that

\hat{Q}_r(s) \le P_r(s) \le \hat{R}_r(s),  for all s.

We have already established that the iterations $\hat{Q}_r(s)$ and $\hat{R}_r(s)$ converge to $P_r$. As a result,

P_r = \lim_{s\to\infty}\hat{Q}_r(s) \le \lim_{s\to\infty}P_r(s) \le \lim_{s\to\infty}\hat{R}_r(s) = P_r,   (4.46)

proving (4.37).

(ii) Convergence of the expressions in (4.22), that is,

P_r(t-r) = \Phi P_r(t-r-1)\Phi' - \sum_{i=0}^{r}\lambda_i\Phi P_r(t-r-1)H'(HP_r(t-r-1)H' + R)^{-1}HP_r(t-r-1)\Phi' + GQG',
P_{r-1}(t-r+1) = \Phi P_r(t-r)\Phi' - \sum_{i=0}^{r-1}\lambda_i\Phi P_r(t-r)H'(HP_r(t-r)H' + R)^{-1}HP_r(t-r)\Phi' + GQG',   (4.47)
...
P_0(t) = \Phi P_1(t-1)\Phi' - \lambda_0\Phi P_1(t-1)H'(HP_1(t-1)H' + R)^{-1}HP_1(t-1)\Phi' + GQG'.   (4.48)

In the above, we have proven the convergence of $P_r(t-r)$ in (4.46). Based on this fact and the expression (4.47), we get that $P_{r-1}(t-r+1)$ converges as well. Similar reasoning shows that $P_{r-2}(t-r+2), \ldots, P_0(t)$ are all convergent as $t \to \infty$. Set $P_{r-1} = \lim_{t\to\infty}P_{r-1}(t-r+1), \ldots, P_0 = \lim_{t\to\infty}P_0(t)$, which obviously satisfy the algebraic equations (4.38)-(4.39).

(iii) Uniqueness of the limits. To this end, we just need to establish that (4.37) has a unique positive semi-definite solution, since a unique solution to (4.37) determines a unique set of solutions to (4.38)-(4.39). Consider a solution $\bar{P}_r = g_{\lambda_0,\ldots,\lambda_r}(\bar{P}_r)$ and the Riccati iteration (4.18) initialized at $P_r(0) = \bar{P}_r$. This yields the constant sequence $\bar{P}_r, \bar{P}_r, \ldots$. However, we have shown that (4.18) converges to $P_r$ for any initial condition; thus $\bar{P}_r = P_r$. The uniqueness of the solutions is proved. □

Corollary 4.1. Assume that $(\Phi, H)$ is stabilizable and $(\Phi, Q^{1/2}G')$ is observable. Then for any $\sum_{i=0}^{r}\lambda_i \ge \bar{\lambda}$, the matrices $P_r(t-r), P_{r-1}(t-r+1), \ldots, P_0(t)$ converge to a unique set of constant matrices as $t \to \infty$, where $\bar{\lambda}$ is determined by the optimization problem

\bar{\lambda} = \arg\min_\lambda \{\lambda : \exists F,\ 0 < Y \le I \text{ such that } \Psi_\lambda(F, Y) > 0\},   (4.49)

where (reconstructed here, by Schur complement, from the equivalent condition (4.52) below)

\Psi_\lambda(F, Y) =
[ Y                        Y\Phi - \lambda FH        \sqrt{\lambda(1-\lambda)}\,FH   YGQ^{1/2}   \sqrt{\lambda}\,FR^{1/2} ]
[ (Y\Phi - \lambda FH)'    Y                         0                               0           0                        ]
[ \sqrt{\lambda(1-\lambda)}\,(FH)'  0                Y                               0           0                        ]
[ Q^{1/2}G'Y               0                         0                               I           0                        ]
[ \sqrt{\lambda}\,R^{1/2}F'  0                       0                               0           I                        ].   (4.50)

Moreover, the suboptimal estimators become steady-state ones.

Proof. First, we prove that the following statements are equivalent:

(i) there exists $X > 0$ such that $X > g_\lambda(X)$;
(ii) there exist $K$ and $X > 0$ such that $X > \phi_\lambda(K, X)$;
(iii) there exist $F$ and $0 < Y \le I$ such that $\Psi_\lambda(F, Y) > 0$.

Let $K = \Phi XH'(HXH' + R)^{-1}$; then the equivalence between (i) and (ii) follows immediately from Lemma 4.1(i)-(ii). Further, by using Schur complements, we can prove that $\Psi_\lambda(F, Y) > 0$ if and only if

Y > 0,   (4.51)
Y - [Y\Phi - \lambda FH]Y^{-1}[Y\Phi - \lambda FH]' - \lambda(1-\lambda)FHY^{-1}H'F' - YGQG'Y - \lambda FRF' > 0.   (4.52)

Setting $Y = X^{-1}$, $F = X^{-1}K$ and multiplying (4.52) by $X$ on both sides yields

X > 0,   (4.53)
X - [\Phi - \lambda KH]X[\Phi - \lambda KH]' - \lambda(1-\lambda)KHXH'K' - GQG' - \lambda KRK' > 0.   (4.54)

Thus condition (ii) is satisfied if and only if (iii) is satisfied, so (i), (ii) and (iii) are mutually equivalent.

Next, we show the convergence of the Riccati equations (4.18) and (4.22) for $P_r(t-r), \ldots, P_0(t)$. When $\sum_{i=0}^{r}\lambda_i = 1$, the Riccati equation (4.18) reduces to the standard Riccati equation

P(s+1) = \Phi P(s)\Phi' - \Phi P(s)H'(HP(s)H' + R)^{-1}HP(s)\Phi' + GQG',  s \ge 0.

Under the conditions that $(\Phi, H)$ is stabilizable and $(\Phi, Q^{1/2}G')$ is observable, we know that $P(s)$ converges to a fixed point [24]; obviously, the iterations in (4.22) then converge as well. If $\sum_{i=0}^{r}\lambda_i = \bar{\lambda}$, we know from (4.49)-(4.50) that there exist $F$ and $0 < Y \le I$ such that $\Psi_{\bar{\lambda}}(F, Y) > 0$. Then, based on the equivalence between (i) and (iii), there exists $X > 0$ such that $X > g_{\lambda_0,\ldots,\lambda_r}(X)$. In view of Theorem 4.2, for any initial condition $P_r(0) \ge 0$ the Riccati iterations (4.18) and (4.22) converge to finite limits, and the corresponding estimators converge to steady-state ones. Now consider the case $\bar{\lambda} \le \sum_{i=0}^{r}\lambda_i \le 1$. Recalling the operator $g_\lambda(X)$ in (4.34), we can show that $g_\lambda(X)$ is monotonically decreasing in $\lambda$. Since there exist $F$ and $0 < Y \le I$ such that $\Psi_{\bar{\lambda}}(F, Y) > 0$, there must exist $X > 0$ such that $X > g_{\bar{\lambda}}(X) \ge g_{\lambda_0,\ldots,\lambda_r}(X)$. Again from Theorem 4.2, we get that $P_r(s), P_{r-1}(s), \ldots, P_0(s)$ are all convergent, and the existence of the limits implies that the suboptimal estimators converge to a set of steady-state estimators. The proof is thus completed. □

Remark 4.3. Note that (4.49) with (4.50) is a quasi-convex optimization problem in the variables $(\lambda, F, Y)$, and the solution can be obtained by iterating LMI feasibility problems and using bisection on the variable $\lambda$, as shown in [25].
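Remark 4.3 computes $\bar{\lambda}$ by LMI feasibility plus bisection. A lightweight numerical surrogate (our simplification, not the LMI procedure of [25]) is to bisect on the total arrival probability and test feasibility by iterating the operator $g_\lambda$ of (4.34) from $X = 0$: by Lemma 4.1(iii) this iteration is monotone, so it either approaches the fixed point of (4.37) or grows without bound.

```python
import numpy as np

def g(X, lam_sum, Phi, G, Q, H, R):
    """Modified Riccati operator g_lambda of (4.34) with lambda = lam_sum."""
    S = H @ X @ H.T + R
    return (Phi @ X @ Phi.T + G @ Q @ G.T
            - lam_sum * Phi @ X @ H.T @ np.linalg.solve(S, H @ X @ Phi.T))

def bounded_fixed_point(lam_sum, Phi, G, Q, H, R, iters=5000, cap=1e9):
    """Iterate g_lambda from X = 0 (monotone by Lemma 4.1(iii)); report
    whether the iteration stays bounded, and return the last iterate."""
    X = np.zeros_like(Phi)
    for _ in range(iters):
        X = g(X, lam_sum, Phi, G, Q, H, R)
        if not np.all(np.isfinite(X)) or np.trace(X) > cap:
            return False, X
    return True, X

def critical_lambda(Phi, G, Q, H, R, tol=1e-3):
    """Bisection for the critical total arrival probability (cf. Remark 4.3);
    a finite-iteration heuristic, not a certificate."""
    lo, hi = 0.0, 1.0   # divergence below the threshold, convergence above
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        ok, _ = bounded_fixed_point(mid, Phi, G, Q, H, R)
        if ok:
            hi = mid
        else:
            lo = mid
    return hi
```

When the iteration converges, the returned iterate approximates the limit $P_r$ of (4.37), from which the steady gains $K_r, \ldots, K_0$ of Theorem 4.2 follow.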

4.3. Stability of the suboptimal estimator

In this subsection, we show the mean square stability of the constant-gain estimators (4.40)-(4.42). The following lemma will be used subsequently.

Lemma 4.4 ([26]). The system

x(t+1) = \Big(A + \sum_{i=1}^{m}A_i\xi_i(t)\Big)x(t),

where $x(t) \in \mathbb{R}^n$ and $\xi_i(t)$, $i = 1, \ldots, m$, are zero-mean random processes with $E[\xi_i(t)\xi_j(s)] = \delta_{ij}\delta_{ts}$, is mean square stable if and only if there exists a matrix $\bar{Q} = \bar{Q}' \in \mathbb{R}^{n \times n}$ with $\bar{Q} > 0$ such that

A'\bar{Q}A - \bar{Q} + \sum_{i=1}^{m}A_i'\bar{Q}A_i < 0.   (4.55)

Theorem 4.3. Under Assumptions 4.1 and 4.2, if the Riccati equations (4.18) and (4.22) converge, then the corresponding constant-gain estimators (4.40)-(4.42) are mean square stable.

Proof. It is noted that if the filter $\hat{x}_e(t-r+1, r)$ is stable, the subsequent finite iterations $\hat{x}_e(t-r+2, r-1), \ldots, \hat{x}_e(t, 1)$ are stable as well. Thus, to show the stability of (4.40)-(4.42), one just needs to show the stability of (4.40). Note that (4.40) can be rewritten as

\hat{x}_e(t-r+1, r) = \Big[\Phi - \sum_{i=0}^{r}\lambda_iF_{P_r}H\Big]\hat{x}_e(t-r, r) - \sum_{i=0}^{r}(\lambda_i - \gamma_{i,t-r+i})F_{P_r}H\hat{x}_e(t-r, r) + K_r\bar{y}_r(t-r),   (4.56)

where $F_{P_r} = \Phi P_rH'(HP_rH' + R)^{-1}$. It can be seen that $E[\sum_{i=0}^{r}(\lambda_i - \gamma_{i,t-r+i})] = 0$ and $\mathrm{cov}[\sum_{i=0}^{r}(\lambda_i - \gamma_{i,t-r+i})] = \sum_{i=0}^{r}\lambda_i(1 - \sum_{i=0}^{r}\lambda_i)$.

Based on the theorem's hypotheses, we know that $P_r(s)$ converges to a fixed point $P_r$, and the corresponding algebraic Riccati equation can be rewritten as

P_r = \Big[\Phi - \sum_{i=0}^{r}\lambda_iF_{P_r}H\Big]P_r\Big[\Phi - \sum_{i=0}^{r}\lambda_iF_{P_r}H\Big]' + \sum_{i=0}^{r}\lambda_i\Big(1 - \sum_{i=0}^{r}\lambda_i\Big)F_{P_r}HP_rH'F_{P_r}' + GQG' + \sum_{i=0}^{r}\lambda_iF_{P_r}RF_{P_r}'.

As shown in Theorem 4.2,

P_r > \Big[\Phi - \sum_{i=0}^{r}\lambda_iF_{P_r}H\Big]P_r\Big[\Phi - \sum_{i=0}^{r}\lambda_iF_{P_r}H\Big]' + \sum_{i=0}^{r}\lambda_i\Big(1 - \sum_{i=0}^{r}\lambda_i\Big)F_{P_r}HP_rH'F_{P_r}',

and $P_r > 0$ since $(\Phi, Q^{1/2}G')$ is observable. Then the conditions of Lemma 4.4 are satisfied, and thus (4.40) is mean square stable. The proof is thus completed. □
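The test (4.55) can also be run numerically via the standard vectorization argument: with a single zero-mean noise channel of variance $\sigma^2$, the homogeneous part of (4.56) is mean square stable iff $\rho(A \otimes A + \sigma^2 A_1 \otimes A_1) < 1$, which is equivalent to the existence of $\bar{Q} > 0$ in (4.55). A sketch (ours) for the steady filter of (4.56):

```python
import numpy as np

def ms_stable(A, A1, var):
    """Mean square stability of x(t+1) = (A + xi(t) A1) x(t), E[xi] = 0,
    E[xi^2] = var: spectral-radius form of the Lyapunov test (4.55)."""
    M = np.kron(A, A) + var * np.kron(A1, A1)
    return np.max(np.abs(np.linalg.eigvals(M))) < 1.0

def filter_ms_stable(Phi, H, R, lam, P):
    """Apply the test to (4.56): A = Phi - (sum lam) F H, A1 = F H, with
    F = Phi P H'(H P H' + R)^{-1} at the fixed point P of (4.37) and
    var = (sum lam)(1 - sum lam), the variance of the arrival sum."""
    p_arr = float(np.sum(lam))
    F = Phi @ P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    A = Phi - p_arr * F @ H
    return ms_stable(A, F @ H, p_arr * (1.0 - p_arr))
```

Here P can be taken as the bounded iterate returned by the fixed-point sketch after Remark 4.3.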

5. An illustrative example

In this section, we present a simple numerical example to illustrate the developed theoretical results. Consider a dynamic system described by (2.1) and (2.2) with the parameters

\Phi = [1.02  0.05; 0  0.9],  G = [1; 0.5],  H = [2  1],

where $w(t)$ and $v(t)$ are white noises with zero means and covariances $Q = 1$ and $R = 1$, respectively. The initial value $x_0$ and its covariance matrix are set to

x_0 = [1; 1],  P_0 = [1  0; 0  1].

In this example, it is assumed that $r(t) \in \{0, 1, 2\}$. For the optimal filter, the realized path of $r(t)$ is known online to the estimator and can be described by the indicators $\gamma_{i,t}$ $(i = 0, 1, 2)$ with the restriction $\gamma_{i,t+i}\gamma_{j,t+j} = 0$ if $i \ne j$. One path of $r(t)$ is given in Fig. 1, based on which the optimal filter is obtained directly by using the scheme proposed in Theorem 3.1. The true state values and the optimal estimates of the state components $x_1(t)$ and $x_2(t)$ are shown in Figs. 2 and 3, respectively.

For the suboptimal filter design, it is assumed that $r(t)$ has the probabilities

P{r(t) = 0} = 0.8,  P{r(t) = 1} = 0.1,  P{r(t) = 2} = 0.1,

and again $\gamma_{i,t+i}\gamma_{j,t+j} = 0$ if $i \ne j$. Then, from Theorem 4.1, the suboptimal filter is obtained. Fig. 4 plots the real state $x_1(t)$ and its suboptimal estimate, while Fig. 5 plots the real state $x_2(t)$ and its estimate. Finally, the sums of the optimal and suboptimal estimation error covariances of $x_1(t)$ and $x_2(t)$ are given in Fig. 6. It can be seen from the simulation results that the obtained linear estimators for systems with random delays track well, and the estimation scheme proposed in this paper produces good performance.

Fig. 1. The trajectories of $\gamma_{i,t}$ $(i = 0, 1, 2)$, which give one path of $r(t)$.
Fig. 2. The first state component $x_1(t)$ and its optimal estimate $\hat{x}_1(t \mid t)$ based on the path of $r(t)$ shown in Fig. 1.
Fig. 3. The second state component $x_2(t)$ and its optimal estimate $\hat{x}_2(t \mid t)$ based on the path of $r(t)$ shown in Fig. 1.
Fig. 4. The first state component $x_1(t)$ and its suboptimal estimate based on the given probabilities of $r(t)$: $P\{r(t) = 0\} = 0.8$, $P\{r(t) = 1\} = 0.1$ and $P\{r(t) = 2\} = 0.1$.
Fig. 5. The second state component $x_2(t)$ and its suboptimal estimate based on the given probabilities of $r(t)$: $P\{r(t) = 0\} = 0.8$, $P\{r(t) = 1\} = 0.1$ and $P\{r(t) = 2\} = 0.1$.
Fig. 6. The sum of the optimal and suboptimal estimation error covariances.

6. Conclusion

In this paper, we have studied linear estimation for randomly delayed systems with time-stamped measurements. An optimal linear filter with time-varying, stochastic gains has been presented via the measurement reorganization method and the projection formula. Also, a suboptimal estimator with deterministic gains has been proposed under a new performance index. Its convergence and stability have been proved, and it has been shown that the stability of this estimator does not depend on the observation packet delay but only on the overall observation packet loss probability.
Appendix. Discussions on the solutions to (3.32) and (3.37)
In this Appendix, we discuss the solution to the singular Riccati equation (3.32). We only consider the case of $s \le t-r$; the other case of $s > t-r$ can be addressed similarly. Observe that

K_r(s)Q_{\varepsilon_r}(s) = \Phi P_r(s)H_r'(s),   (A.1)
Q_{\varepsilon_r}(s) = H_r(s)P_r(s)H_r'(s) + R_r(s),   (A.2)
P_r(s+1) = \Phi P_r(s)\Phi' - K_r(s)Q_{\varepsilon_r}(s)K_r'(s) + GQG',   (A.3)

where

H_r(s) = \mathrm{col}\{\gamma_{0,s}H, \ldots, \gamma_{r,s+r}H\},   (A.4)
v_r(s) = \mathrm{col}\{\gamma_{0,s}v(s), \ldots, \gamma_{r,s+r}v(s)\},   (A.5)

with $v_r(s)$ a white noise of zero mean and covariance matrix $R_r(s) = \mathrm{diag}\{\gamma_{0,s}R, \ldots, \gamma_{r,s+r}R\}$.

When $\gamma_{i,i+s} = 0$ for all $i = 0, \ldots, r$, it follows from (A.2) and (A.4) that $Q_{\varepsilon_r}(s) = 0$, so any $K_r(s)$ solves (A.1), and the Riccati equation (A.3) becomes

P_r(s+1) = \Phi P_r(s)\Phi' + GQG'.   (A.6)

Thus the solution to the Riccati equation exists and is unique. Note from the property (2.6) that the remaining case is that there exists exactly one $\ell \in \{0, \ldots, r\}$ such that $\gamma_{\ell,\ell+s} = 1$ while the others are zero. In this case, it is easily seen that there exists an elementary (permutation) matrix $A_r(s)$ with $A_r'(s) = A_r^{-1}(s)$ such that

A_r(s)H_r(s) = [H; 0],   (A.7)
A_r(s)R_r(s)A_r'(s) = [R  0; 0  0].   (A.8)

Thus, by using (A.7)-(A.8), it follows from (3.31) that

A_r(s)Q_{\varepsilon_r}(s)A_r'(s) = [HP_r(s)H' + R  0; 0  0].   (A.9)

On the other hand, noting that $A_r'(s)A_r(s) = I$, (3.30) can be equivalently written as

K_r(s)A_r'(s)A_r(s)Q_{\varepsilon_r}(s)A_r'(s) = \Phi P_r(s)H_r'(s)A_r'(s).   (A.10)

Let

K_r(s)A_r'(s) = [\bar{K}_r(s)  \tilde{K}_r(s)].   (A.11)

Incorporating the above equation into (A.10) and utilizing (A.7) and (A.8) yields

[\bar{K}_r(s)(HP_r(s)H' + R)  0] = [\Phi P_r(s)H'  0].   (A.12)

Since the matrix $HP_r(s)H' + R$ is invertible, $\bar{K}_r(s)$ is given uniquely as

\bar{K}_r(s) = \Phi P_r(s)H'(HP_r(s)H' + R)^{-1}.   (A.13)

Therefore, $K_r(s)$ is given as

K_r(s) = [\bar{K}_r(s)  \tilde{K}_r(s)]A_r(s),   (A.14)

where $\tilde{K}_r(s)$ can take any value.

We have shown in the above that the linear equation (3.30) always has a solution. Next, we show that for any solution to (3.30), the Riccati equation (3.32) remains the same. In fact, substituting (A.14) into (3.32) yields

P_r(s+1) = \Phi P_r(s)\Phi' - \Phi P_r(s)H'[HP_r(s)H' + R]^{-1}HP_r(s)\Phi' + GQG',   (A.15)

which does not depend on the choice of the matrix $\tilde{K}_r(s)$.

Based on the above discussion, we conclude that Eq. (3.30) is solvable and that any solution, if there exists more than one, yields the same result for the Riccati equation (3.32).
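Numerically, the construction (A.7)-(A.14) is reproduced by a pseudo-inverse: since $Q_{\varepsilon_r}(s)$ is a permutation of a block-diagonal matrix with at most one invertible block, the Moore-Penrose inverse selects exactly the solution with $\tilde{K}_r(s) = 0$ in (A.14), and by (A.15) the covariance update is unaffected by this choice. A one-function sketch (ours):

```python
import numpy as np

def gain_singular(Phi, P, H_bar, R_bar):
    """One solution of K Q_eps = Phi P H_bar' when Q_eps is singular.
    The pseudo-inverse yields the solution with K_tilde = 0 in (A.14);
    by (A.15) the Riccati update does not depend on this choice."""
    Q_eps = H_bar @ P @ H_bar.T + R_bar
    return Phi @ P @ H_bar.T @ np.linalg.pinv(Q_eps)
```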

References

[1] J.P. Hespanha, P. Naghshtabrizi, Y. Xu, A survey of recent results in networked control systems, Proc. IEEE 95 (1) (2007) 138–162.
[2] J. Nilsson, B. Bernhardsson, B. Wittenmark, Stochastic analysis and control of real-time systems with random time delays, Automatica 34 (1) (1998) 57–64.
[3] L. Zhang, Y. Shi, T. Chen, A new method for stabilization of networked control systems with random delays, IEEE Trans. Automat. Control 50 (8) (2005) 1177–1181.
[4] W. Zhang, L. Yu, Modeling and control of networked control systems with both network-induced delay and packet-dropout, Automatica 44 (2008) 3206–3210.
[5] M. Liu, D.W.C. Ho, Y. Niu, Stabilization of Markovian jump linear system over networks with random communication delay, Automatica 45 (2009) 416–421.
[6] C. Lin, Z. Wang, F. Yang, Observer-based networked control for continuous-time systems with random sensor delays, Automatica 45 (2009) 578–584.
[7] C. Su, C. Lu, Interconnected network state estimation using randomly delayed measurements, IEEE Trans. Power Syst. 16 (4) (2001) 870–878.
[8] Z. Wang, D.W.C. Ho, X. Liu, Robust filtering under randomly varying sensor delay measurements, IEEE Trans. Circuits Syst. II Express Briefs 51 (2004) 320–326.
[9] N.E. Nahi, Optimal recursive estimation with uncertain observation, IEEE Trans. Inf. Theory 15 (4) (1969) 457–462.
[10] M.T. Hadidi, C.S. Schwartz, Linear recursive state estimators under uncertain observations, IEEE Trans. Automat. Control 24 (6) (1979) 944–948.
[11] B. Sinopoli, L. Schenato, M. Franceschetti, K. Poolla, M.I. Jordan, S.S. Sastry, Kalman filtering with intermittent observations, IEEE Trans. Automat. Control 49 (9) (2004) 1453–1463.
[12] K. Plarre, F. Bullo, On Kalman filtering for detectable systems with intermittent observations, IEEE Trans. Automat. Control 54 (2) (2009) 386–390.
[13] E. Yaz, A. Ray, Linear unbiased state estimation under randomly varying bounded sensor delay, Appl. Math. Lett. 11 (1998) 27–32.
[14] A. Hermoso-Carazo, J. Linares-Pérez, Extended and unscented filtering algorithms using one-step randomly delayed observations, Appl. Math. Comput. 190 (2007) 1375–1393.
[15] S. Nakamori, R. Caballero-Águila, A. Hermoso-Carazo, J. Linares-Pérez, Recursive estimators of signals from measurements with stochastic delays using covariance information, Appl. Math. Comput. 162 (2005) 65–79.
[16] M. Sahebsara, T. Chen, S.L. Shah, Optimal H2 filtering with random sensor delay, multiple packet dropout and uncertain observations, Internat. J. Control 80 (2) (2007) 292–301.
[17] M. Moayedi, Y.C. Soh, Y.K. Foo, Optimal Kalman filtering with random sensor delays, packet dropouts and missing measurements, in: Proceedings of the American Control Conference, USA, June 2009, pp. 3405–3410.
[18] S. Zhou, G. Feng, H∞ filtering for discrete-time systems with randomly varying sensor delays, Automatica 44 (2008) 1918–1922.
[19] B. Shen, Z. Wang, H. Shu, G. Wei, H∞ filtering for nonlinear discrete-time stochastic systems with randomly varying sensor delays, Automatica 45 (4) (2009) 1032–1037.
[20] J. Evans, V. Krishnamurthy, Hidden Markov model state estimation with randomly delayed observations, IEEE Trans. Signal Process. 47 (8) (1999) 2157–2166.
[21] H. Song, L. Yu, W. Zhang, H∞ filtering of network-based systems with random delay, Signal Process. 89 (4) (2009) 615–622.
[22] L. Schenato, Optimal estimation in networked control systems subject to random delay and packet drop, IEEE Trans. Automat. Control 53 (5) (2008) 1311–1316.
[23] H. Zhang, L. Xie, D. Zhang, Y.C. Soh, A reorganized innovation approach to linear estimation, IEEE Trans. Automat. Control 49 (10) (2004) 1810–1814.
[24] B.D.O. Anderson, J.B. Moore, Optimal Filtering, Prentice-Hall, Englewood Cliffs, NJ, 1979.
[25] S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, PA, 1997.
[26] F. Wang, V. Balakrishnan, Robust Kalman filters for linear time-varying systems with stochastic parametric uncertainties, IEEE Trans. Signal Process. 50 (4) (2002) 803–813.
