
CONTENT

01. Random variable and random process. Classes of random processes
    (Overview).
02. Convergence with probability one and in probability. Other types of
    convergence. Ergodic theorem.
03. Orthogonal projection. Conditional expectation in the wide sense.
04. Wiener filter.
05. Kalman filter.
06. Kalman filter implementation for linear algebraic equations.
    Karhunen-Loève decomposition.
07. Independence. The conditional expectation.
08. Nonlinear filtering.
09. Gaussian random sequences.
10. Wiener process. Gaussian white noise.
11. Poisson process. Poisson white noise. Telegraphic signal.
12. Stochastic Itô integral.

1. Random variable and random process. Classes of random processes (Overview)

1.1. Random variable. Denote by Ω = {ω} a sample space. A function¹ ξ(ω), ω ∈ Ω,

    ξ : Ω → R,

is called a random variable. Typically, a description of the random variable ξ(ω) is given in terms of its distribution function

    F(x) = P(ω : ξ(ω) ≤ x),  x ∈ R,

where P, called a probability measure, is a function transforming subsets of Ω to the interval [0, 1]:

    P : Ω → [0, 1],

and is such that P(Ω) = 1, P(∅) = 0.
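The distribution function F(x) can be recovered from samples of ξ(ω) by counting. A minimal numerical sketch, assuming NumPy and taking a standard normal ξ as the illustrative example (the sample size is an arbitrary choice):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Sample n realizations of a standard normal random variable xi(omega).
n = 100_000
xi = rng.standard_normal(n)

# Empirical distribution function: F_n(x) = (1/n) #{i : xi_i <= x}.
def empirical_F(x, sample):
    return float(np.mean(sample <= x))

# Exact distribution function of the standard normal, for comparison.
def normal_F(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

for x in (-1.0, 0.0, 1.0):
    print(x, empirical_F(x, xi), normal_F(x))
```

By the law of large numbers the empirical values approach F(x) as n grows.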

1.2. Random vector. A vector

    ξ(ω) = (ξ1(ω), . . . , ξn(ω))

with random variables as its entries is called a random vector. The random vector is likewise characterized by its distribution function:

    F(x1, . . . , xn) = P(ω : ξ1(ω) ≤ x1, . . . , ξn(ω) ≤ xn).

¹ More exactly, a measurable function w.r.t. some σ-algebra.


1.3. Random processes. A random vector

    ξ(ω) = (ξ1(ω), . . . , ξn(ω), . . .)

might be interpreted as a random sequence (process) if k = 1, . . . , n, . . . are understood as time moments. Formally, in the case of a random sequence, the distribution function of countably many arguments is considered:

    F(x1, . . . , xn, . . .) = P(ω : ξ1(ω) ≤ x1, . . . , ξn(ω) ≤ xn, . . .).

For the continuous time case, a family of random variables ξt(ω) parametrized by t ≥ 0 or −∞ < t < ∞ is called a continuous time random process. For fixed t0, ξt0(ω) is a random variable, while for fixed ω0, the function ξt(ω0) of the argument t is called a trajectory (or a path) of the random process ξt(ω). This trajectory might be a continuous or a discontinuous function. If all paths of a random process are continuous, we say shortly "continuous process". In the study of continuous time processes, so-called finite dimensional distributions are introduced:

    Ft1,...,tn(x1, . . . , xn) = P(ω : ξt1(ω) ≤ x1, . . . , ξtn(ω) ≤ xn)

for every n ≥ 1 and t1, . . . , tn.
Remark 1.1. For the sake of notational simplicity, henceforth the index "ω" will be omitted.
The Brownian process (Bt)t≥0 is the typical example of a random process with continuous paths. Its mathematical description was given by N. Wiener, and in the sequel the notation (Wt)t≥0, "Wiener process", will be used as well. A short description of the Wiener process is the following:
1. W0 = 0 and the paths are continuous functions;
2. the expectation EWt ≡ 0;
3. the correlation function EWtWs = min(t, s);
4. all finite dimensional distributions are Gaussian.
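Properties 2 and 3 can be checked by Monte Carlo simulation of Gaussian increments. A minimal sketch, assuming NumPy; the grid size and number of paths are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate M independent Wiener paths on [0, 1]: increments over steps
# of length dt are independent N(0, dt) random variables, and W_0 = 0.
M, N = 5_000, 200
dt = 1.0 / N
inc = rng.normal(0.0, np.sqrt(dt), size=(M, N))
W = np.concatenate([np.zeros((M, 1)), np.cumsum(inc, axis=1)], axis=1)

t_idx, s_idx = 160, 60            # grid points t = 0.8 and s = 0.3
mean_Wt = W[:, t_idx].mean()      # Monte Carlo estimate of EW_t = 0
corr_ts = (W[:, t_idx] * W[:, s_idx]).mean()  # estimate of min(t, s)
print(mean_Wt, corr_ts)           # ~ 0 and ~ 0.3
```

Averaging over more paths tightens both estimates at the usual 1/√M rate.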
The Poisson process (πt)t≥0 is the typical example of a random process with discontinuous paths. Its short description is the following:
1. π0 = 0 and the paths are piecewise constant functions having jumps of unit size;
2. the expectation Eπt ≡ t;
3. for non-overlapping time intervals, the increments of (πt)t≥0 are independent random variables having the Poisson distribution.
Random processes, considered as mathematical objects, are studied under different frameworks, which give rise to different mathematical theories for their study. We mention here the Linear Theory, in which only linear transformations of the random process paths are available. In the context of that theory, the expectation and the correlation function are the main tools for analysis. An important class of random processes successfully served by the linear theory is the class of
Random processes in the wide sense. Any process Xn from this class has expectation EXn ≡ const. (= 0 for notational convenience) and correlation function K(n − m) = EXnXm of two arguments (n, m), being in reality a function of their difference n − m. Under

    Σ_n |K(n)| < ∞,

the Fourier transform defines the so-called spectral density (here ı = √−1)

    f(λ) = (1/2π) Σ_n e^{−ıλn} K(n),  λ ∈ [−π, π],

the main tool for Wiener filter theory.


An application of the linear theory is natural for
Gaussian random processes, since all their finite dimensional distributions are completely characterized by the expectation and correlation functions; under a linear transformation only these functions are replaced by new ones, while the Gaussian structure of the distribution is preserved.

A class of random processes beyond the range of the linear theory, but still within the framework of the stationarity notion, is the class of
Stationary random processes in the strict sense. Random processes from that class are not assumed to possess a second moment (so they are not necessarily stationary processes in the wide sense). The finite dimensional distributions of processes from that class possess the following characterization: every finite dimensional distribution is preserved under any shift of the time parameter:

    P(Xn1 ≤ x1, . . . , Xnk ≤ xk) ≡ P(Xn1+m ≤ x1, . . . , Xnk+m ≤ xk).

Processes with independent and identically distributed values form a simple subclass:

    P(Xn1 ≤ x1, . . . , Xnk ≤ xk) ≡ Π_{j=1}^{k} P(Xn1 ≤ xj).

If EXn ≡ 0 and EXn² ≡ 1, we obtain a further subclass of
White noises. Every white noise is also a stationary process in the wide sense.
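As a numerical illustration, an i.i.d. sequence of centered, unit-variance signs is a white noise: its sample correlation function is 1 at lag zero and vanishes elsewhere. A sketch assuming NumPy (the sign sequence is my illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(3)

# i.i.d. signs +1/-1 with probability 1/2: EX_n = 0, EX_n^2 = 1.
N = 200_000
X = rng.choice([-1.0, 1.0], size=N)

def K_hat(n):
    # Sample estimate of the correlation function K(n) = EX_m X_{m+n}.
    return float(np.mean(X[: N - n] * X[n:])) if n else float(np.mean(X * X))

print([round(K_hat(n), 3) for n in range(4)])   # ~ [1.0, 0.0, 0.0, 0.0]
```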

Beyond the linear theory there are various applications, for instance Nonlinear Filtering, where a crucial role is played by the class of
Markov processes. In contrast to processes with independent values, for which the joint distribution function P(X1 ≤ x1, . . . , Xk ≤ xk) splits into the product of marginal distributions Π_{j=1}^{k} P(Xj ≤ xj), in the Markov case

    P(X1 ≤ x1, . . . , Xk ≤ xk)
      = ∫_R · · · ∫_R P(Xk ≤ xk | Xk−1 = xk−1)
          × dP(Xk−1 ≤ xk−1 | Xk−2 = xk−2)
          × · · ·
          × dP(X2 ≤ x2 | X1 = x1) dP(X1 ≤ x1).
That formula acquires a simpler structure if the distribution function possesses a density

    ∂^k P(X1 ≤ x1, . . . , Xk ≤ xk) / ∂x1 . . . ∂xk = p(x1, x2, . . . , xk),

which in terms of conditional densities is presented as:

    p(x1, x2, . . . , xk) = p1(x1) Π_{i=2}^{k} p(xi | xi−1).
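This factorization can be verified numerically on a Gaussian Markov chain. A sketch assuming NumPy; the AR(1)-type chain with parameter ρ and stationary N(0, 1) marginals is my illustrative choice, for which the joint density is also available directly as a multivariate normal:

```python
import numpy as np

rho, k = 0.6, 4
x = np.array([0.3, -0.5, 1.1, 0.2])   # an arbitrary evaluation point

def normal_pdf(z, mean, var):
    return np.exp(-(z - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Markov factorization: p1 is the stationary N(0, 1) marginal and the
# transition density p(x_i | x_{i-1}) is N(rho * x_{i-1}, 1 - rho^2).
prod = normal_pdf(x[0], 0.0, 1.0)
for i in range(1, k):
    prod *= normal_pdf(x[i], rho * x[i - 1], 1.0 - rho ** 2)

# The same joint density computed directly: this chain is Gaussian with
# zero mean and covariance Sigma_{ij} = rho^{|i - j|}.
idx = np.arange(k)
Sigma = rho ** np.abs(np.subtract.outer(idx, idx))
quad = x @ np.linalg.solve(Sigma, x)
joint = np.exp(-quad / 2) / np.sqrt((2 * np.pi) ** k * np.linalg.det(Sigma))

print(prod, joint)   # the two values coincide
```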

In a modern part of the theory of random processes, named Stochastic Calculus, the main class of random processes is the class of
Semimartingales.

Appendix A. Random processes in the wide sense

Random processes in the wide sense are extensively studied in many undergraduate courses. So, here we recall only the main properties of the correlation function and the spectral density and give some examples.

A.1. Definitions. Set time values n = . . . , −1, 0, 1, . . . A random process (Xn) with
1. EXn² ≡ EX0² < ∞ (EX0² > 0 is assumed);
2. EXn ≡ 0;
3. K(n − m) = EXnXm
is said to be stationary in the wide sense. Here
- K(0) is the variance;
- K(n) is the correlation function;
- ρ(n) = K(n)/K(0) is the correlation coefficient.
We emphasize a few properties of the correlation function K.
1. K is a nonnegative definite function: for any numbers a1, . . . , am and ℓ1, . . . , ℓm

       Σ_{i,j=1}^{m} ai aj K(ℓi − ℓj) ≥ 0.                            (1.1)

2. K(−n) = K(n) and |K(n)| ≤ K(0).
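Property (1.1) is easy to check numerically for a concrete correlation function. A sketch assuming NumPy; K(n) = ρ^{|n|} is my illustrative choice:

```python
import numpy as np

# Toeplitz matrix [K(l_i - l_j)] for K(n) = rho^{|n|} and lags 0..m-1.
rho, m = 0.7, 8
lags = np.arange(m)
K = rho ** np.abs(np.subtract.outer(lags, lags))

# Nonnegative definiteness (1.1): all eigenvalues of the matrix are
# >= 0, hence the quadratic form sum_{i,j} a_i a_j K(l_i - l_j) is >= 0
# for any choice of the numbers a_1, ..., a_m.
eig_min = np.linalg.eigvalsh(K).min()
a = np.random.default_rng(4).standard_normal(m)
quad = a @ K @ a
print(eig_min, quad)   # both nonnegative

# The symmetry K(-n) = K(n) and the bound |K(n)| <= K(0) = 1 also hold,
# since the matrix above is symmetric with unit diagonal.
```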


If

    Σ_{n=−∞}^{∞} |K(n)| < ∞,                                         (1.2)

the Fourier transform of K

    f(λ) = (1/2π) Σ_{ℓ=−∞}^{∞} e^{−ıλℓ} K(ℓ),  λ ∈ [−π, π],           (1.3)

is well defined and determines the spectral density function. The spectral density f(λ) possesses the following properties:
1. the spectral density is a real (not complex) nonnegative function with

       ∫_{−π}^{π} f(λ) dλ < ∞;

2. ∫_{−π}^{π} e^{ıλn} f(λ) dλ = K(n).
An increasing function F(λ) with dF(λ) = f(λ)dλ is called the spectral measure. If (1.2) is not valid, the existence of the spectral density is problematic. Nevertheless, the spectral measure F(λ) always exists, and any correlation function K obeys a spectral decomposition with an appropriate F (Herglotz theorem):

    K(n) = ∫_{−π}^{π} e^{ıλn} dF(λ).                                  (1.4)
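The pair (1.3)-(1.4) can be checked numerically on a case where both sides are known in closed form. A sketch assuming NumPy, for K(n) = ρ^{|n|}, whose spectral density is the Poisson kernel f(λ) = (1 − ρ²)/(2π(1 − 2ρ cos λ + ρ²)):

```python
import numpy as np

rho = 0.5
lam = np.linspace(-np.pi, np.pi, 20001)

# Closed-form spectral density of K(n) = rho^{|n|} (Poisson kernel).
f = (1 - rho ** 2) / (1 - 2 * rho * np.cos(lam) + rho ** 2) / (2 * np.pi)

# Inversion formula: K(n) = int_{-pi}^{pi} e^{i lam n} f(lam) dlam.
# The integrand is even in lam, so the imaginary part vanishes; the
# real part is integrated with a Riemann sum over the periodic grid.
dlam = lam[1] - lam[0]
K = [float(np.sum(np.cos(n * lam[:-1]) * f[:-1]) * dlam) for n in range(4)]
print(K)   # ~ [1.0, 0.5, 0.25, 0.125], i.e. rho^n
```

For a smooth periodic integrand this equidistant sum is extremely accurate, so the recovered values match ρ^n almost exactly.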

A.2. Examples.
1. White noise. f(λ) = 1/(2π).
2. Moving average.

       Xn = Σ_{k=−∞}^{∞} ak εn−k                                      (1.5)

is a linear transformation of the white noise (εk), where the ak are numbers such that Σ_{k=−∞}^{∞} |ak|² < ∞.
3. One-sided moving average. Xn = Σ_{k=0}^{∞} ak εn−k.
4. Moving average of order p > 0. Xn = Σ_{k=0}^{p} ak εn−k.
5. Autoregression.

       Xn + b1 Xn−1 + . . . + bq Xn−q = εn,                            (1.6)

where the parameters b1, . . . , bq are such that the roots of the polynomial

       Q(z) = 1 + b1 z⁻¹ + . . . + bq z⁻q

lie within the unit circle. The spectral density is

       f(λ) = (1/2π) · 1/|Q(e^{ıλ})|².
6. Autoregression and moving average.

       Xn + b1 Xn−1 + . . . + bq Xn−q = a0 εn + a1 εn−1 + . . . + ap εn−p,   (1.7)

where the roots of the polynomial Q(z) = 1 + b1 z⁻¹ + . . . + bq z⁻q lie within the unit circle. The spectral density is defined in terms of the polynomials Q(z) and P(z) = a0 + a1 z⁻¹ + . . . + ap z⁻p as:

       f(λ) = (1/2π) |P(e^{ıλ})|² / |Q(e^{ıλ})|².                      (1.8)

The spectral density of the type given in (1.8) is called a rational spectral density.
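The rational density (1.8) can be sanity-checked against the known stationary variance via property 2 of the spectral density at n = 0. A sketch assuming NumPy; the AR(1) case q = 1, b1 = −0.5, P(z) = 1 with unit-variance white noise is my illustrative choice, for which K(0) = 1/(1 − b1²):

```python
import numpy as np

# Rational spectral density (1.8) for X_n - 0.5 X_{n-1} = eps_n,
# i.e. q = 1, b1 = -0.5 and P(z) = 1, driven by unit-variance noise.
b1 = -0.5
lam = np.linspace(-np.pi, np.pi, 20001)
Q2 = np.abs(1.0 + b1 * np.exp(-1j * lam)) ** 2
f = 1.0 / (2.0 * np.pi * Q2)

# Property 2 at n = 0: the integral of f over [-pi, pi] equals K(0),
# the stationary variance 1/(1 - b1^2) of this AR(1) process.
dlam = lam[1] - lam[0]
K0 = float(np.sum(f[:-1]) * dlam)
print(K0, 1.0 / (1.0 - b1 ** 2))   # both ~ 1.3333
```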
Remark A.1. A linear transformation of a stationary sequence does not necessarily preserve stationarity.
