PTSP PPT
RANDOM PROCESS
Probability Introduced Through Sets and Relative Frequency
• Experiment: a random experiment is an action or process that leads to one of several possible outcomes.
Experiment: Course grades
Outcomes: F, D, C, B, A, A+
Sample Space
• List: “Called the Sample Space”
• Outcomes: “Called the Simple Events”
This list must be exhaustive, i.e. ALL possible
outcomes included.
• Die roll: {1, 2, 3, 4, 5} is not exhaustive; the sample space is {1, 2, 3, 4, 5, 6}.
P(B|A) and P(A|B)
• The probabilities P(A) and P(Aᶜ) are called prior probabilities because they are determined prior to the decision about taking the preparatory course.
• The conditional probability P(A | B) is called a
posterior probability (or revised probability),
because the prior probability is revised after
the decision about taking the preparatory
course.
Total probability theorem
• Take events A_i for i = 1 to k to be:
– Mutually exclusive: A_i ∩ A_j = ∅ for all i ≠ j
– Exhaustive: A_1 ∪ A_2 ∪ … ∪ A_k = S
• Then, for any event B:
P(B) = Σ_{i=1}^{k} P(A_i) P(B | A_i)
• Bayes' rule:
P(A_j | B) = P(A_j) P(B | A_j) / P(B) = P(A_j) P(B | A_j) / Σ_{i=1}^{k} P(A_i) P(B | A_i)
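A minimal numerical sketch of both results; the priors P(A_i) and the conditionals P(B | A_i) below are illustrative assumptions, not values from the slides:

```python
# Total probability theorem and Bayes' rule with three mutually
# exclusive, exhaustive events A_1, A_2, A_3.
p_A = [0.5, 0.3, 0.2]          # P(A_i): assumed priors, sum to 1
p_B_given_A = [0.1, 0.4, 0.8]  # P(B | A_i): assumed conditionals

# Total probability: P(B) = sum_i P(A_i) P(B | A_i).
p_B = sum(pa * pb for pa, pb in zip(p_A, p_B_given_A))
print(f"P(B) = {p_B:.2f}")     # 0.05 + 0.12 + 0.16 = 0.33

# Bayes' rule: P(A_j | B) = P(A_j) P(B | A_j) / P(B).
posteriors = [pa * pb / p_B for pa, pb in zip(p_A, p_B_given_A)]
print([round(p, 3) for p in posteriors])   # posteriors sum to 1
```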
Independence
• Do A and B depend on one another?
– Yes! B is more likely to be true if A occurred.
– A is more likely to be true if B occurred.
• If independent:
P(A ∩ B) = P(A) P(B)
P(A | B) = P(A)
P(B | A) = P(B)
• If dependent:
P(A ∩ B) ≠ P(A) P(B)
P(A ∩ B) = P(A | B) P(B) = P(B | A) P(A)
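A tiny sketch of the independent case, using assumed fair-coin probabilities:

```python
# Two tosses of a fair coin: A = first toss heads, B = second toss heads.
# All three probabilities are illustrative assumptions.
p_A = 0.5
p_B = 0.5
p_AB = 0.25        # P(A and B) for independent tosses

# Independent  <=>  P(A and B) == P(A) P(B).
print("independent:", abs(p_AB - p_A * p_B) < 1e-12)   # True

# The conditional probabilities collapse to the marginals:
print("P(A|B) =", p_AB / p_B)   # 0.5 == P(A)
print("P(B|A) =", p_AB / p_A)   # 0.5 == P(B)
```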
Random variable
• Random variable
– Assigns a numerical value to each outcome of a particular experiment
[Figure: each outcome in the sample space S is mapped to a point on the real line, −3 … 3]
• Example 1: Machine Breakdowns
– Sample space: S = {electrical, mechanical, misuse}
– Each of these failures may be associated with a repair cost
– State space: {50, 200, 350}
– Cost is a random variable taking the values 50, 200, and 350
• Probability Mass Function (p.m.f.)
– A set of probability values p_i assigned to each of the values x_i taken by the discrete random variable X
– 0 ≤ p_i ≤ 1 and Σ_i p_i = 1
– Probability: P(X = x_i) = p_i
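A minimal sketch of a p.m.f. in Python, reusing the machine-breakdown costs above; the probability values are illustrative assumptions:

```python
# Repair cost X takes the values 50, 200, 350; probabilities assumed.
pmf = {50: 0.3, 200: 0.2, 350: 0.5}

# Property 1: each p_i lies in [0, 1].
assert all(0 <= p <= 1 for p in pmf.values())
# Property 2: the probabilities sum to 1.
assert abs(sum(pmf.values()) - 1.0) < 1e-12

print("P(X = 200) =", pmf[200])
print("P(X >= 200) =", sum(p for x, p in pmf.items() if x >= 200))
```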
Continuous and Discrete Random Variables
• Discrete random variables have a countable number
of outcomes
– Examples: Dead/alive, treatment/placebo, dice, counts,
etc.
• Continuous random variables have an infinite
continuum of possible values.
– Examples: blood pressure, weight, the speed of a car, the
real numbers from 1 to 6.
• Distribution function: F(t) = P(X ≤ t)
Probability Density Function (pdf)
• X : continuous r.v.; then
F(t) = ∫_{−∞}^{t} f(x) dx
• pdf properties:
1. f(x) ≥ 0,
2. ∫_{−∞}^{∞} f(x) dx = 1.
Binomial
• Suppose that the probability of success is p
• Examples
– Toss of a coin (S = head): p = 0.5 q = 0.5
– Roll of a die (S = 1): p = 0.1667 q = 0.8333
– Fertility of a chicken egg (S = fertile): p = 0.8 q = 0.2
Binomial
• Imagine that a trial is repeated n times
• Examples
– A coin is tossed 5 times
– A die is rolled 25 times
– 50 chicken eggs are examined
• Assume p remains constant from trial to trial and that the trials are
statistically independent of each other
• Example
– What is the probability of obtaining 2 heads from a coin that
was tossed 5 times?
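Using the binomial p.m.f. P(X = k) = C(n, k) p^k q^(n−k), the answer is P(X = 2) = C(5, 2)(0.5)²(0.5)³ = 10 × (1/32) = 0.3125. A one-line check in Python:

```python
from math import comb

# P(2 heads in 5 tosses of a fair coin), by the binomial p.m.f.
n, k, p = 5, 2, 0.5
prob = comb(n, k) * p**k * (1 - p)**(n - k)
print(prob)   # 10 * 0.5**5 = 0.3125
```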
Poisson
• The Poisson distribution is applied where random events in space or time are expected to occur. Its p.m.f. is
P(x) = e^(−µ) µ^x / x!,  x = 0, 1, 2, …
[Figure: random events occurring in time, plotted over the interval 0 to 2.2]
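A minimal sketch of the Poisson p.m.f. in Python; the rate µ = 2 events per interval is an illustrative assumption:

```python
from math import exp, factorial

def poisson_pmf(x: int, mu: float) -> float:
    # P(x) = e^(-mu) mu^x / x!
    return exp(-mu) * mu**x / factorial(x)

mu = 2.0   # assumed rate
for x in range(5):
    print(x, round(poisson_pmf(x, mu), 4))

# The probabilities over all x >= 0 sum to 1.
print(sum(poisson_pmf(x, mu) for x in range(100)))   # ~1.0
```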
Uniform distribution
0 x<
{
, a,
F(x) xa
, a <x<
= b
a b
1 x>
, b.
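A direct transcription of this cdf in Python; the endpoints a = 1 and b = 3 are illustrative assumptions:

```python
def uniform_cdf(x: float, a: float = 1.0, b: float = 3.0) -> float:
    # Piecewise cdf of a uniform random variable on [a, b].
    if x < a:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    return 1.0

print(uniform_cdf(0.5))   # 0.0  (x < a)
print(uniform_cdf(2.0))   # 0.5  (midpoint of [1, 3])
print(uniform_cdf(4.0))   # 1.0  (x > b)
```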
Gaussian (Normal) Distribution
• Bell shaped pdf – intuitively pleasing!
• Central Limit Theorem: the mean of a large number n of mutually independent r.v.'s (having arbitrary distributions) approaches a Normal distribution as n → ∞.
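A quick empirical check of the Central Limit Theorem, using uniform(0, 1) variables as an arbitrary, non-normal starting distribution (sample sizes are illustrative):

```python
import numpy as np

# Means of n independent uniform(0,1) r.v.'s concentrate around 0.5
# with a roughly normal spread of 1/sqrt(12 n).
rng = np.random.default_rng(0)
n, trials = 1000, 5000
means = rng.uniform(0, 1, size=(trials, n)).mean(axis=1)

print("sample mean of means:", means.mean())   # ~0.5
print("sample std of means:", means.std())     # ~0.0091
print("theory:", 1 / np.sqrt(12 * n))          # 1/sqrt(12n)
```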
• For a discrete random variable: E(X) = Σ_i p_i x_i = Σ_i x_i P(x_i)
• Expectation of a continuous random variable with p.d.f. f(x):
E[X] = ∫_{−∞}^{∞} x f(x) dx
• Symmetry: if f_X(a + x) = f_X(a − x) for all x, then E[X] = a.
• A function of a r.v. is itself a r.v.: if X is a r.v., then so is Y = g(X).
Ex: Y = g(X) = X², with P(X = 0) = P(X = 1) = P(X = −1) = 1/3.
Then P(Y = 0) = P(X = 0) = 1/3 and P(Y = 1) = P(X = 1) + P(X = −1) = 1/3 + 1/3 = 2/3.
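A sketch of this Y = X² example in Python, deriving the induced p.m.f. of Y and checking that E[Y] is the same whichever p.m.f. is used:

```python
from fractions import Fraction

# X takes -1, 0, 1, each with probability 1/3 (the example above).
pmf_X = {-1: Fraction(1, 3), 0: Fraction(1, 3), 1: Fraction(1, 3)}

# Induced p.m.f. of Y = X^2.
pmf_Y: dict = {}
for x, p in pmf_X.items():
    y = x * x
    pmf_Y[y] = pmf_Y.get(y, Fraction(0)) + p
print(pmf_Y)   # P(Y=1) = 2/3, P(Y=0) = 1/3

# E[Y] agrees either way: sum_y y P(y) == sum_x g(x) P(x).
print(sum(y * p for y, p in pmf_Y.items()))        # 2/3
print(sum(x * x * p for x, p in pmf_X.items()))    # 2/3
```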
Expectation
• Conditional expectation given an event B:
E[X | B] = ∫_{−∞}^{∞} x f_X(x | B) dx   (continuous r.v.)
E[X | B] = Σ_{i=1}^{N} x_i P(x_i | B)   (discrete r.v.)
Ex: B = {X ≤ b}. The conditional pdf is
f_X(x | X ≤ b) = f_X(x) / ∫_{−∞}^{b} f_X(x) dx   for x ≤ b,  and 0 for x > b,
so that
E[X | X ≤ b] = ∫_{−∞}^{b} x f_X(x | X ≤ b) dx.
Moments
n-th moment of a r.v. X:
m_n = E[X^n] = ∫_{−∞}^{∞} x^n f_X(x) dx   (continuous r.v.)
m_n = E[X^n] = Σ_{i=1}^{N} x_i^n P(x_i)   (discrete r.v.)
m_0 = 1 and m_1 = E[X], the mean of X.
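A minimal sketch of the discrete moment formula; the p.m.f. below is an illustrative assumption:

```python
import numpy as np

# n-th moments m_n = E[X^n] for a discrete r.v. with assumed p.m.f.
x = np.array([0.0, 1.0, 2.0])
p = np.array([0.2, 0.5, 0.3])

def moment(n: int) -> float:
    return float(np.sum(x**n * p))

print("m0 =", moment(0))   # 1.0  (probabilities sum to one)
print("m1 =", moment(1))   # 1.1  (the mean)
print("m2 =", moment(2))   # 1.7  (second moment)
```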
MULTIPLE RANDOM VARIABLES and OPERATIONS:
MULTIPLE RANDOM VARIABLES :
Vector Random Variables
A vector random variable X is a function that assigns a
vector of real numbers to each outcome ζ in S, the sample
space of the random experiment
The joint p.m.f. of X and Y is
p_X,Y(x_j, y_k) = P[X = x_j, Y = y_k],  j = 1, 2, …  k = 1, 2, …   (4.4)
The probability of any event A is the sum of the pmf over the outcomes in A:
P[X, Y in A] = Σ Σ_{(x_j, y_k) in A} p_X,Y(x_j, y_k).   (4.5)
Σ_{j=1}^{∞} Σ_{k=1}^{∞} p_X,Y(x_j, y_k) = 1.   (4.6)
The marginal probability mass functions:
p_X(x_j) = P[X = x_j]
         = P[X = x_j, Y = anything]
         = Σ_{k=1}^{∞} p_X,Y(x_j, y_k),   (4.7a)
p_Y(y_k) = P[Y = y_k]
         = Σ_{j=1}^{∞} p_X,Y(x_j, y_k).   (4.7b)
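A minimal sketch of (4.6)-(4.7b) in Python; the 2 × 3 joint p.m.f. table is an illustrative assumption:

```python
import numpy as np

# Joint p.m.f. of X (rows: x_1, x_2) and Y (cols: y_1, y_2, y_3).
p_XY = np.array([[0.10, 0.20, 0.10],
                 [0.30, 0.20, 0.10]])

assert abs(p_XY.sum() - 1.0) < 1e-12   # (4.6): entries sum to one

p_X = p_XY.sum(axis=1)   # (4.7a): sum over y for each x -> [0.4, 0.6]
p_Y = p_XY.sum(axis=0)   # (4.7b): sum over x for each y -> [0.4, 0.4, 0.2]
print(p_X, p_Y)
```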
The Joint cdf of X and Y
The joint cumulative distribution function of X and Y is defined as the probability of the product-form event {X ≤ x1} ∩ {Y ≤ y1}:
F_X,Y(x1, y1) = P[X ≤ x1, Y ≤ y1].   (4.8)
The joint cdf is nondecreasing in the “northeast” direction:
(i) F_X,Y(x1, y1) ≤ F_X,Y(x2, y2) if x1 ≤ x2 and y1 ≤ y2,
and
lim_{y→b} F_X,Y(x, y) = F_X,Y(x, b).
The Joint pdf of Two Jointly Continuous Random
Variables
We say that the random variables X and Y are jointly
continuous if the probabilities of events involving (X, Y) can
be expressed as an integral of a pdf. There is a
nonnegative function fX,Y(x,y), called the joint probability
density function, that is defined on the real plane such that
for every event A, a subset of the plane,
P[X, Y in A] = ∫∫_A f_X,Y(x', y') dx' dy',   (4.9)
as shown in Fig. 4.7. When A is the entire plane, the integral must equal one:
1 = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f_X,Y(x', y') dx' dy'.   (4.10)
The joint cdf can be obtained in terms of the joint pdf of jointly continuous random variables by integrating over the semi-infinite rectangle defined by (x, y):
F_X,Y(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} f_X,Y(x', y') dy' dx'.
The marginal pdf’s f_X(x) and f_Y(y) are obtained by taking the derivative of the corresponding marginal cdf’s, F_X(x) = F_X,Y(x, ∞) and F_Y(y) = F_X,Y(∞, y):
f_X(x) = d/dx ∫_{−∞}^{x} [ ∫_{−∞}^{∞} f_X,Y(x', y') dy' ] dx'
       = ∫_{−∞}^{∞} f_X,Y(x, y') dy',   (4.15a)
f_Y(y) = ∫_{−∞}^{∞} f_X,Y(x', y) dx'.   (4.15b)
INDEPENDENCE OF TWO RANDOM
VARIABLES
X and Y are independent if and only if
p_X,Y(x_j, y_k) = P[X = x_j] P[Y = y_k]   for all x_j and y_k.
Conditional Probability
In Section 2.4, we know
P[Y in A | X = x] = P[Y in A, X = x] / P[X = x].   (4.22)
If X is discrete, then Eq. (4.22) can be used to obtain the conditional cdf of Y given X = x_k:
F_Y(y | x_k) = P[Y ≤ y, X = x_k] / P[X = x_k],   for P[X = x_k] > 0.   (4.23)
The conditional pdf of Y given X = x_k, if the derivative exists, is given by
f_Y(y | x_k) = d/dy F_Y(y | x_k).   (4.24)
MULTIPLE RANDOM VARIABLES
Joint Distributions
The joint cumulative distribution function of X1, X2, …, Xn is defined as the probability of an n-dimensional semi-infinite rectangle associated with the point (x1, …, xn):
F_{X1,…,Xn}(x1, …, xn) = P[X1 ≤ x1, X2 ≤ x2, …, Xn ≤ xn].
[Figure: an ensemble of sample waveforms X(t, ζ_1), X(t, ζ_2), …, X(t, ζ_n) plotted against time t]
The collection of such waveforms forms a stochastic process. The set of {ζ_k} and the time index t can be continuous or discrete.
f_X(x1, x2, t1, t2) = ∂²F_X(x1, x2, t1, t2) / (∂x1 ∂x2)
represents the second-order density function of the process X(t). Similarly, f_X(x1, x2, …, xn, t1, t2, …, tn) represents the nth-order density function of the process X(t). Complete specification of the stochastic process X(t) requires knowledge of f_X(x1, x2, …, xn, t1, t2, …, tn) for all t_i, i = 1, 2, …, n and for all n (an almost impossible task in reality).
1.3 Stationary Process
Stationary Process :
The statistical characterization of a process is
independent of the time at which observation of the
process is initiated.
Nonstationary Process:
Not a stationary process (unstable phenomenon )
Consider X(t) which is initiated at t = −∞, and let X(t1), X(t2), …, X(tk) denote the RVs obtained at t1, t2, …, tk. For the RP to be stationary in the strict sense (strictly stationary), the joint distribution function must satisfy
F_{X(t1+τ),…,X(tk+τ)}(x1, …, xk) = F_{X(t1),…,X(tk)}(x1, …, xk)   (1.3)
for all time shifts τ, all k, and all possible choices of t1, t2, …, tk.
1.4 Mean, Correlation, and Covariance Function
Let X(t) be a strictly stationary RP.
The mean of X(t) is
µ_X(t) = E[X(t)] = ∫_{−∞}^{∞} x f_{X(t)}(x) dx = µ_X   for all t,   (1.6)
where f_{X(t)}(x) is the first-order pdf.
The autocorrelation function of X(t) is
R_X(t1, t2) = E[X(t1) X(t2)]
            = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2 f_{X(t1),X(t2)}(x1, x2) dx1 dx2
            = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2 f_{X(0),X(t2−t1)}(x1, x2) dx1 dx2
            = R_X(t2 − t1).   (1.7)
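A sketch estimating R_X(τ) from a single sample path, assuming the process is also ergodic so that a time average stands in for the ensemble average E[X(t) X(t + τ)]; white Gaussian noise is an illustrative choice of w.s.s. process:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
x = rng.standard_normal(N)   # one sample path of white Gaussian noise

def autocorr(x: np.ndarray, tau: int) -> float:
    # Time-average estimate of R_X(tau) = E[X(t) X(t + tau)].
    return float(np.mean(x[: len(x) - tau] * x[tau:]))

for tau in range(4):
    print(tau, round(autocorr(x, tau), 3))
# For white noise: R_X(0) ~ 1 and R_X(tau) ~ 0 for tau != 0.
```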
The autocovariance function of X(t) is
C_X(t1, t2) = E[(X(t1) − µ_X)(X(t2) − µ_X)] = R_X(t2 − t1) − µ_X².
Cross-correlation Function
R_XY(t, u) = E[X(t) Y(u)]   (1.19)
and
R_YX(t, u) = E[Y(t) X(u)].   (1.20)
Note that R_XY(t, u) and R_YX(t, u) are not even functions in general.
The correlation matrix is
R(t, u) = [ R_X(t, u)    R_XY(t, u)
            R_YX(t, u)   R_Y(t, u) ].
If X(t) and Y(t) are jointly stationary,
R(τ) = [ R_X(τ)    R_XY(τ)
         R_YX(τ)   R_Y(τ) ],   (1.21)
where τ = t − u.
Proof of R_XY(τ) = R_YX(−τ):
R_XY(τ) = E[X(t) Y(t − τ)].
Let t − τ = µ; then
R_XY(τ) = E[X(µ + τ) Y(µ)]
        = E[Y(µ) X(µ + τ)]
        = R_YX(−τ).   (1.22)
Power Spectrum
For a deterministic signal x(t), the spectrum is well defined: if X(ω) represents its Fourier transform, i.e., if
X(ω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt,   (18-1)
then |X(ω)|² represents its energy spectrum. This follows from Parseval's theorem, since the signal energy is given by
∫_{−∞}^{∞} x²(t) dt = (1/2π) ∫_{−∞}^{∞} |X(ω)|² dω = E.   (18-2)
Thus |X(ω)|² Δω represents the signal energy in the band (ω, ω + Δω) (see Fig. 18.1).
[Fig. 18.1: a signal x(t) and its energy spectrum |X(ω)|², with the energy in the band (ω, ω + Δω) shaded]
However, for stochastic processes, a direct application of (18-1) generates a sequence of random variables for every ω. Moreover, for a stochastic process, E{|X(t)|²} represents the ensemble average power (instantaneous energy) at the instant t.
The quantity
(1/2T) ∫_{−T}^{T} ∫_{−T}^{T} R_XX(t1, t2) e^{−jω(t1 − t2)} dt1 dt2   (18-5)
represents the power distribution of X(t) based on (−T, T). For wide-sense stationary (w.s.s.) processes, it is possible to further simplify (18-5). Thus, if X(t) is assumed to be w.s.s., then
R_XX(t1, t2) = R_XX(t1 − t2)
and (18-5) simplifies accordingly.
E[Y²(t)] = ∫∫∫ H(f) exp(j2πf τ1) h(τ2) R_X(τ2 − τ1) df dτ1 dτ2   (1.35)
         = ∫ df H(f) ∫ dτ2 h(τ2) ∫ R_X(τ2 − τ1) exp(j2πf τ1) dτ1
         = ∫ df H(f) ∫ dτ2 h(τ2) exp(j2πf τ2) ∫ R_X(τ) exp(−j2πf τ) dτ   (1.36)
The integral over τ2 equals H*(f), the complex conjugate response of the filter, so
E[Y²(t)] = ∫_{−∞}^{∞} df |H(f)|² ∫_{−∞}^{∞} R_X(τ) exp(−j2πf τ) dτ.   (1.37)
|H(f)| : the magnitude response.
Define the Power Spectral Density (the Fourier transform of R_X(τ)):
S_X(f) = ∫_{−∞}^{∞} R_X(τ) exp(−j2πf τ) dτ   (1.38)
E[Y²(t)] = ∫_{−∞}^{∞} |H(f)|² S_X(f) df.   (1.39)
Recall E[Y²(t)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(τ1) h(τ2) R_X(τ2 − τ1) dτ1 dτ2.   (1.33)
Let |H(f)| be the magnitude response of an ideal narrowband filter:
|H(f)| = 1,  |f ± f_c| ≤ (1/2) Δf
|H(f)| = 0,  |f ± f_c| > (1/2) Δf   (1.40)
Δf : filter bandwidth.
If Δf ≪ f_c and S_X(f) is continuous,
E[Y²(t)] ≈ (2Δf) S_X(f_c),  with S_X(f_c) in W/Hz.
Properties of The PSD
S_X(f) = ∫_{−∞}^{∞} R_X(τ) exp(−j2πf τ) dτ   (1.42)
R_X(τ) = ∫_{−∞}^{∞} S_X(f) exp(j2πf τ) df   (1.43)
Einstein-Wiener-Khintchine relations:
S_X(f) ⇌ R_X(τ)
a. S_X(0) = ∫_{−∞}^{∞} R_X(τ) dτ   (1.44)
b. E[X²(t)] = ∫_{−∞}^{∞} S_X(f) df   (1.45)
c. If X(t) is stationary,
E[Y²(t)] = (2Δf) S_X(f_c) ≥ 0, so S_X(f) ≥ 0 for all f.   (1.46)
d. S_X(−f) = ∫ R_X(τ) exp(j2πf τ) dτ = ∫ R_X(−u) exp(−j2πf u) du  (u = −τ)
           = S_X(f), since R_X(τ) is even.   (1.47)
e. The PSD can be associated with a pdf:
p_X(f) = S_X(f) / ∫_{−∞}^{∞} S_X(f) df.   (1.48)
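A quick numerical check of properties (b) and (d), using a periodogram as a discrete stand-in for S_X(f) and white Gaussian noise as an illustrative w.s.s. input (all parameters are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4096
x = rng.standard_normal(N)

S_X = np.abs(np.fft.fft(x))**2 / N   # periodogram estimate of the PSD

# Property (b), Eq. (1.45): total power equals the area under S_X(f).
print(np.mean(x**2))   # E[X^2(t)] estimated from the sample path
print(S_X.mean())      # discrete analogue of the integral of S_X(f)

# Property (d), Eq. (1.47): S_X(-f) = S_X(f) for a real process.
print(np.allclose(S_X[1:], S_X[1:][::-1]))   # True
```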
Cross-Spectral Densities
S_XY(f) = ∫_{−∞}^{∞} R_XY(τ) exp(−j2πf τ) dτ   (1.68)
S_YX(f) = ∫_{−∞}^{∞} R_YX(τ) exp(−j2πf τ) dτ   (1.69)
f_X(x1, x2, t1, t2) = f_X(x1, x2, t1 − t2),   (14-18)
i.e., the second-order density function of a strict-sense stationary process depends only on the difference of the time indices, τ = t1 − t2. In that case the autocorrelation function is given by
R_XX(t1, t2) = E{X(t1) X*(t2)}
             = ∫∫ x1 x2* f_X(x1, x2, τ = t1 − t2) dx1 dx2
             = R_XX(t1 − t2) ≜ R_XX(τ) = R*_XX(−τ).   (14-19)
For jointly Gaussian random variables X_i = X(t_i), the joint characteristic function has the form
φ_X(ω1, ω2, …, ωn) = exp( j Σ_k µ(t_k) ω_k − (1/2) Σ_{l,k} C_XX(t_l, t_k) ω_l ω_k ),   (14-22)
where C_XX(t_l, t_k) is as defined in (14-9). If X(t) is wide-sense stationary, then using (14-20)-(14-21) in (14-22) we get
φ_X(ω1, ω2, …, ωn) = exp( j µ Σ_i ω_i − (1/2) Σ_i Σ_k C_XX(t_i − t_k) ω_i ω_k ),   (14-23)
and hence if the set of time indices is shifted by a constant c to generate a new set of jointly Gaussian random variables X1' = X(t1 + c), X2' = X(t2 + c), …, Xn' = X(tn + c), then their joint characteristic function is identical to (14-23). Thus the sets of random variables {X_i} and {X_i'} have the same joint probability distribution for all n and all c, establishing the strict-sense stationarity of Gaussian processes from their wide-sense stationarity. To summarize, if X(t) is a Gaussian process, then
wide-sense stationarity (w.s.s.) ⇒ strict-sense stationarity (s.s.s.).
Notice that since the joint p.d.f. of Gaussian random variables depends only on their second-order statistics, which is also the basis for wide-sense stationarity, the equivalence above follows.