
PROBABILITY THEORY & RANDOM PROCESS

Probability introduced through sets and relative frequency
• Experiment: a random experiment is an action or process that leads to one of several possible outcomes.

  Experiment       Outcomes
  Flip a coin      Heads, Tails
  Exam marks       Numbers: 0, 1, 2, ..., 100
  Assembly time    t > 0 seconds
  Course grades    F, D, C, B, A, A+
Sample Space
• The list of all possible outcomes is called the sample space; the individual outcomes are called the simple events.
• The list must be exhaustive, i.e. ALL possible outcomes are included:
  – Die roll {1,2,3,4,5} is not exhaustive; die roll {1,2,3,4,5,6} is.
• The list must be mutually exclusive, i.e. no two outcomes can occur at the same time:
  – Die roll {odd number, even number} is mutually exclusive; die roll {number less than 4, even number} is not (the outcome 2 belongs to both).
Sample Space
• A list of exhaustive [don’t leave anything out] and
mutually exclusive outcomes [impossible for 2
different events to occur in the same experiment]
is called a sample space and is denoted by S.

• The outcomes are denoted by O1, O2, …, Ok.

• Using notation from set theory, we can represent the sample space and its outcomes as:
  S = {O1, O2, …, Ok}

• Given a sample space S = {O1, O2, …, Ok}, the probabilities assigned to the outcomes must satisfy these requirements:
  (1) The probability of any outcome is between 0 and 1,
      i.e. 0 ≤ P(Oi) ≤ 1 for each i, and
  (2) The sum of the probabilities of all the outcomes equals 1,
      i.e. P(O1) + P(O2) + … + P(Ok) = 1.
Relative Frequency
Consider a random experiment with sample space S. We assign a non-negative number, called the probability, to each event in the sample space.
Let A be a particular event in S; then "the probability of event A" is denoted by P(A).
Suppose that the random experiment is repeated n times and the event A occurs nA of those times. Then the probability of event A is defined as the relative frequency:
• Relative Frequency Definition: the probability of an event A is defined as
  P(A) = lim_{n→∞} nA / n
Axioms of Probability
For any event A, we assign a number P(A), called the probability of the event A. This number satisfies the following three conditions, which act as the axioms of probability:
(i) P(A) ≥ 0 (probability is a non-negative number)
(ii) P(S) = 1 (the probability of the whole sample space is unity)
(iii) If A ∩ B = ∅, then P(A ∪ B) = P(A) + P(B).
(Note that (iii) states that if A and B are mutually exclusive (M.E.) events, the probability of their union is the sum of their probabilities.)
Event
• The probability of an event is the sum of the probabilities of the simple events that constitute the event.
• E.g. (assuming a fair die) S = {1, 2, 3, 4, 5, 6} and
  P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = 1/6
• Then:
  P(EVEN) = P(2) + P(4) + P(6) = 1/6 + 1/6 + 1/6 = 3/6 = 1/2
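The same bookkeeping can be written out directly. The short sketch below (illustrative, not from the slides) stores the fair-die p.m.f. and sums the simple-event probabilities that make up the event EVEN.

```python
from fractions import Fraction

# Minimal sketch: P(event) as the sum of the probabilities of its simple events.
S = {1, 2, 3, 4, 5, 6}
P = {outcome: Fraction(1, 6) for outcome in S}   # fair die

even = {o for o in S if o % 2 == 0}
print(sum(P[o] for o in even))   # 1/2
```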
Conditional Probability
• Conditional probability is used to determine how
two events are related; that is, we can determine
the probability of one event given the
occurrence of another related event.
• Experiment: randomly select one student in the class.
  – P(randomly selected student is male) = ?
  – P(randomly selected student is male | student is in the 3rd row) = ?
• Conditional probabilities are written as P(A | B), read as "the probability of A given B", and calculated as
  P(A | B) = P(A and B) / P(B).
• Equivalently, P(A and B) = P(A)·P(B | A) = P(B)·P(A | B); both forms are true.
• Keep this in mind!
Bayes’ Law
• Bayes’ Law is named for Thomas Bayes, an
eighteenth century mathematician.

• In its most basic form, if we know P(B | A), we can apply Bayes' Law to determine P(A | B):

  P(A | B) = P(B | A) · P(A) / P(B)
• The probabilities P(A) and P(AC) are called
prior probabilities because they are
determined prior to the decision about taking
the preparatory course.
• The conditional probability P(A | B) is called a
posterior probability (or revised probability),
because the prior probability is revised after
the decision about taking the preparatory
course.
Total probability theorem
• Take events Ai for i = 1 to k to be:
  – Mutually exclusive: Ai ∩ Aj = ∅ for all i ≠ j
  – Exhaustive: A1 ∪ A2 ∪ … ∪ Ak = S
• For any event B on S,
  P(B) = P(B | A1) P(A1) + … + P(B | Ak) P(Ak) = Σ_{i=1}^{k} P(B | Ai) P(Ai)

Bayes theorem
• Bayes' theorem follows:
  P(Aj | B) = P(B | Aj) P(Aj) / P(B) = P(B | Aj) P(Aj) / Σ_{i=1}^{k} P(B | Ai) P(Ai)
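A minimal numeric sketch of both results is shown below. The partition {A1, A2, A3} and the conditional probabilities are made-up illustrative values, not taken from the slides.

```python
from fractions import Fraction

# Sketch: total probability theorem and Bayes' theorem on a 3-event partition.
# The prior and conditional probabilities below are assumed for illustration.
P_A = {"A1": Fraction(1, 2), "A2": Fraction(3, 10), "A3": Fraction(1, 5)}
P_B_given_A = {"A1": Fraction(1, 10), "A2": Fraction(1, 2), "A3": Fraction(9, 10)}

# Total probability: P(B) = sum_i P(B | Ai) P(Ai)
P_B = sum(P_B_given_A[a] * P_A[a] for a in P_A)

# Bayes: P(Aj | B) = P(B | Aj) P(Aj) / P(B)
posterior = {a: P_B_given_A[a] * P_A[a] / P_B for a in P_A}
print(P_B)          # 19/50
print(posterior)    # posteriors sum to 1
```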
Independence
• Do A and B depend on one another?
  – If they do, B is more likely to occur when A has occurred, and A is more likely to occur when B has occurred.
• If independent:
  P(A ∩ B) = P(A) P(B),  P(A | B) = P(A),  P(B | A) = P(B)
• If dependent:
  P(A ∩ B) = P(B | A) P(A) = P(A | B) P(B), which in general differs from P(A) P(B)
Random variable
• Random variable
  – A rule that assigns a numerical value to each outcome of a particular experiment (a mapping from the sample space S onto the real line).
• Example 1: Machine Breakdowns
  – Sample space: S = {electrical, mechanical, misuse}
  – Each of these failures may be associated with a repair cost
  – State space: {50, 200, 350}
  – Cost is a random variable taking the values 50, 200, and 350
• Probability Mass Function (p.m.f.)
  – The set of probability values pi assigned to each of the values xi taken by the discrete random variable:
    P(X = xi) = pi, with 0 ≤ pi ≤ 1 and Σi pi = 1
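A p.m.f. is just a table of values and probabilities. The sketch below uses the repair-cost state space {50, 200, 350} from the example; the probabilities themselves are assumed for illustration, since the slide does not give them.

```python
from fractions import Fraction

# Sketch of a p.m.f. for the machine-breakdown repair cost.
# State space {50, 200, 350} is from the example; the probabilities are assumed.
pmf = {50: Fraction(3, 10), 200: Fraction(1, 2), 350: Fraction(1, 5)}

assert all(0 <= p <= 1 for p in pmf.values())   # 0 <= p_i <= 1
assert sum(pmf.values()) == 1                   # sum_i p_i = 1
print(pmf[200])                                 # P(cost = 200) = 1/2
```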
Continuous and Discrete random
variables
• Discrete random variables have a countable number
of outcomes
– Examples: Dead/alive, treatment/placebo, dice, counts,
etc.
• Continuous random variables have an infinite
continuum of possible values.
– Examples: blood pressure, weight, the speed of a car, the
real numbers from 1 to 6.
• Distribution function: FX(x) = P(X ≤ x).
  – If FX(x) is a continuous function of x, then X is a continuous random variable.
  – If FX(x) is discrete in x (a staircase of jumps), X is a discrete random variable.
  – If FX(x) is piecewise continuous (jumps plus continuous segments), X is a mixed random variable.
  – Properties: FX(−∞) = 0, FX(∞) = 1, and FX(x) is non-decreasing in x.
Probability Density Function (pdf)
• X: continuous r.v. with pdf f(x); then
  F(t) = ∫_{−∞}^{t} f(x) dx
• pdf properties:
  1. f(x) ≥ 0
  2. ∫_{−∞}^{∞} f(x) dx = 1
Binomial
• Suppose that the probability of success is p

• What is the probability of failure?


q=1–p

• Examples
– Toss of a coin (S = head): p = 0.5, so q = 0.5
– Roll of a die (S = 1): p = 0.1667, so q = 0.8333
– Fertility of a chicken egg (S = fertile): p = 0.8, so q = 0.2
Binomial (continued)
• Imagine that a trial is repeated n times

• Examples
  – A coin is tossed 5 times
  – A die is rolled 25 times
  – 50 chicken eggs are examined

• Assume p remains constant from trial to trial and that the trials are statistically independent of each other
• Example
  – What is the probability of obtaining 2 heads from a coin that was tossed 5 times?
    Any one specific sequence with 2 heads has probability P(HHTTT) = (1/2)^5 = 1/32; there are C(5,2) = 10 such sequences, so P(2 heads) = 10/32.
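The general binomial p.m.f. P(k) = C(n,k) p^k q^(n−k) can be evaluated directly; the sketch below (illustrative, not from the slides) reproduces the 2-heads-in-5-tosses answer.

```python
from math import comb

# Sketch: binomial pmf P(k successes in n trials) = C(n, k) * p**k * (1-p)**(n-k).
def binomial_pmf(k: int, n: int, p: float) -> float:
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binomial_pmf(2, 5, 0.5))   # 10/32 = 0.3125
```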

Poisson
• When there is a large number of trials but a small probability of success, the binomial calculation becomes impractical
  – Example: number of deaths from horse kicks in the Army in different years

• The mean number of successes from n trials is µ = np
  – Example: 64 deaths in 20 years from thousands of soldiers

• If we substitute µ/n for p, and let n tend to infinity, the binomial distribution becomes the Poisson distribution:

  P(x) = e^(−µ) µ^x / x!
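This limit can be checked numerically. The sketch below (illustrative values, not from the slides) compares the binomial p.m.f. for large n and small p with the Poisson p.m.f. using µ = np.

```python
from math import comb, exp, factorial

# Sketch: Poisson approximation to the binomial for large n, small p.
def binomial_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, mu):
    return exp(-mu) * mu**k / factorial(k)

n, p = 10_000, 0.0003          # mu = n * p = 3
for k in range(6):
    print(k, binomial_pmf(k, n, p), poisson_pmf(k, n * p))   # nearly equal
```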
Poisson (continued)
• The Poisson distribution is applied where random events in space or time are expected to occur.

• Deviation from the Poisson distribution may indicate some degree of non-randomness in the events under study.

• Investigation of the cause may be of interest.

Exponential Distribution
Uniform
All (pseudo) random generators generate random deviates of
U(0,1) distribution; that is, if you generate a large number of
random variables and plot their empirical distribution
function, it will approach this distribution in the limit.
U(a,b): pdf constant over the (a,b) interval, and the CDF is the ramp function.
[Figure: pdf (constant at 1 on (0,1)) and cdf (ramp from 0 to 1) of the U(0,1) distribution plotted against time.]
Uniform distribution

         { 0,                x < a
  F(x) = { (x − a)/(b − a),  a < x < b
         { 1,                x > b
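The ramp form of F(x) is easy to verify against simulated deviates. A minimal sketch, with a, b, and the test point chosen purely for illustration:

```python
import random

# Sketch: U(a, b) cdf from the text, checked against the empirical
# distribution of pseudo-random deviates.
def uniform_cdf(x, a, b):
    if x < a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

a, b, n = 0.0, 2.0, 100_000
rng = random.Random(1)
samples = [rng.uniform(a, b) for _ in range(n)]
x = 0.5
empirical = sum(s <= x for s in samples) / n
print(empirical, uniform_cdf(x, a, b))   # both close to 0.25
```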
Gaussian (Normal) Distribution
• Bell-shaped pdf – intuitively pleasing!
• Central Limit Theorem: the mean of a large number of mutually independent r.v.'s (having arbitrary distributions) starts following the Normal distribution as n → ∞.

• μ: mean, σ: std. deviation, σ²: variance; written N(μ, σ²).
• μ and σ completely describe the statistics. This is significant in statistical estimation / signal processing / communication theory, etc.
• N(0,1) is called the normalized Gaussian.
• N(0,1) is symmetric, i.e.
  – f(x) = f(−x)
  – F(−z) = 1 − F(z).
• The failure rate h(t) follows IFR (increasing failure rate) behavior.
  – Hence, N(μ, σ²) is suitable for modeling long-term wear or aging-related failure phenomena.
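A quick simulation illustrates the Central Limit Theorem statement above. The sketch below (sample sizes chosen only for illustration) averages independent U(0,1) variables and checks that the resulting means cluster around 0.5 with spread close to σ/√n = √((1/12)/n).

```python
import random
import statistics

# Sketch of the CLT: means of n independent U(0,1) r.v.'s (a non-Gaussian
# distribution) behave approximately as N(0.5, (1/12)/n).
rng = random.Random(2)
n, trials = 30, 20_000
means = [statistics.mean(rng.random() for _ in range(n)) for _ in range(trials)]

print(statistics.mean(means))    # close to 0.5
print(statistics.stdev(means))   # close to sqrt((1/12)/30) ≈ 0.0527
```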
Exponential Distribution
Conditional Distributions
• Example: the conditional distribution of Y given X = 1.
• While marginal distributions are obtained
from the bivariate by summing, conditional
distributions are obtained by “making a cut”
through the bivariate distribution
The Expectation of a Random
Variable
Expectation of a discrete random variable with p.m.f. P(X = xi) = pi:
  E(X) = Σi pi xi

Expectation of a continuous random variable with p.d.f. f(x):
  E(X) = ∫_{state space} x f(x) dx

expectation of X = mean of X = average of X
  continuous r.v.:  E[X] = X̄ = ∫ x fX(x) dx
  discrete r.v.:    E[X] = X̄ = Σi xi P(xi)

If fX(a + x) = fX(a − x) for all x (pdf symmetric about a), then E[X] = a.

A function of a random variable is itself a random variable: X r.v. ⇒ Y = g(X) r.v.
Ex: Y = g(X) = X².
  If P(X = −1) = P(X = 0) = P(X = 1) = 1/3, then P(Y = 0) = 1/3 and P(Y = 1) = 2/3.
Expectation
• Expectation of a function of a r.v. X:
  continuous r.v.:  E[g(X)] = ∫ g(x) fX(x) dx
  discrete r.v.:    E[g(X)] = Σi g(xi) P(xi)

• Conditional expectation of a r.v. X:
  continuous r.v.:  E[X | B] = ∫ x fX(x | B) dx
  discrete r.v.:    E[X | B] = Σi xi P(xi | B)

Ex: B = {X ≤ b}
  fX(x | X ≤ b) = fX(x) / ∫_{−∞}^{b} fX(x) dx  for x ≤ b,  and 0 for x > b

  E[X | X ≤ b] = ∫_{−∞}^{b} x fX(x) dx / ∫_{−∞}^{b} fX(x) dx
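As a check, the discrete formulas above can be evaluated for the earlier example X ∈ {−1, 0, 1} with equal probabilities and g(X) = X² (a minimal sketch):

```python
from fractions import Fraction

# Sketch: E[X] and E[g(X)] for the discrete example in the text,
# X taking values -1, 0, 1 with probability 1/3 each and Y = g(X) = X^2.
pmf = {-1: Fraction(1, 3), 0: Fraction(1, 3), 1: Fraction(1, 3)}

E_X = sum(x * p for x, p in pmf.items())          # 0
E_Y = sum((x**2) * p for x, p in pmf.items())     # 2/3
print(E_X, E_Y)
```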
Moments
n-th moment of a r.v. X:
  continuous r.v.:  mn = E[Xⁿ] = ∫_{−∞}^{∞} xⁿ fX(x) dx
  discrete r.v.:    mn = E[Xⁿ] = Σ_{i=1}^{N} xiⁿ P(xi)

In particular, m0 = 1 and m1 = X̄ (the mean).
MULTIPLE RANDOM VARIABLES and OPERATIONS:
MULTIPLE RANDOM VARIABLES :
Vector Random Variables
A vector random variable X is a function that assigns a
vector of real numbers to each outcome ζ in S, the sample
space of the random experiment

Events and Probabilities


EXAMPLE 4.4

Consider the two-dimensional random variable X = (X, Y). Find the region of the plane corresponding to the events
  A = {X + Y ≤ 10},
  B = {min(X, Y) ≤ 5}, and
  C = {X² + Y² ≤ 100}.

The regions corresponding to events A and C are straightforward to find and are shown in Fig. 4.1.
Independence
The one-dimensional random variables X and Y are "independent" if, for A1 any event that involves X only and A2 any event that involves Y only,

  P[X in A1, Y in A2] = P[X in A1] P[Y in A2].

In the general case of n random variables, we say that the random variables X1, X2, …, Xn are independent if

  P[X1 in A1, …, Xn in An] = P[X1 in A1] ⋯ P[Xn in An],                  (4.3)

where Ak is an event that involves Xk only.
Pairs of Discrete Random Variables
Let the vector random variable X = (X, Y) assume values from some countable set S = {(xj, yk), j = 1, 2, …, k = 1, 2, …}. The joint probability mass function of X specifies the probabilities of the product-form event {X = xj} ∩ {Y = yk}:

  pX,Y(xj, yk) = P[{X = xj} ∩ {Y = yk}]
               = P[X = xj, Y = yk],   j = 1, 2, …  k = 1, 2, …            (4.4)

The probability of any event A is the sum of the pmf over the outcomes in A:

  P[X in A] = Σ_{(xj, yk) in A} pX,Y(xj, yk)                              (4.5)

  Σ_{j=1}^{∞} Σ_{k=1}^{∞} pX,Y(xj, yk) = 1                                (4.6)

The marginal probability mass functions:

  pX(xj) = P[X = xj] = P[X = xj, Y = anything] = Σ_{k=1}^{∞} pX,Y(xj, yk)  (4.7a)

  pY(yk) = P[Y = yk] = Σ_{j=1}^{∞} pX,Y(xj, yk)                            (4.7b)
The Joint cdf of X and Y
The joint cumulative distribution function of X and Y is defined as the probability of the product-form event {X ≤ x1} ∩ {Y ≤ y1}:

  FX,Y(x1, y1) = P[X ≤ x1, Y ≤ y1]                                        (4.8)

The joint cdf is nondecreasing in the "northeast" direction:
(i) FX,Y(x1, y1) ≤ FX,Y(x2, y2) if x1 ≤ x2 and y1 ≤ y2.

It is impossible for either X or Y to assume a value less than −∞, therefore
(ii) FX,Y(−∞, y1) = FX,Y(x2, −∞) = 0.

It is certain that X and Y will assume values less than infinity, therefore
(iii) FX,Y(∞, ∞) = 1.

If we let one of the variables approach infinity while keeping the other fixed, we obtain the marginal cumulative distribution functions
(iv) FX(x) = FX,Y(x, ∞) = P[X ≤ x, Y < ∞] = P[X ≤ x]
and
     FY(y) = FX,Y(∞, y) = P[Y ≤ y].

Recall that the cdf for a single random variable is continuous from the right. It can be shown that the joint cdf is continuous from the "north" and from the "east":
(v) lim_{x→a⁺} FX,Y(x, y) = FX,Y(a, y)
and
    lim_{y→b⁺} FX,Y(x, y) = FX,Y(x, b)
The Joint pdf of Two Jointly Continuous Random Variables
We say that the random variables X and Y are jointly continuous if the probabilities of events involving (X, Y) can be expressed as an integral of a pdf. There is a nonnegative function fX,Y(x, y), called the joint probability density function, that is defined on the real plane such that for every event A, a subset of the plane,

  P[X in A] = ∫∫_A fX,Y(x', y') dx' dy',                                   (4.9)

as shown in Fig. 4.7. When A is the entire plane, the integral must equal one:

  1 = ∫_{−∞}^{∞} ∫_{−∞}^{∞} fX,Y(x', y') dx' dy'.                          (4.10)

The joint cdf can be obtained in terms of the joint pdf of jointly continuous random variables by integrating over the semi-infinite rectangle defined by the point (x, y).

The marginal pdf's fX(x) and fY(y) are obtained by taking the derivative of the corresponding marginal cdf's, FX(x) = FX,Y(x, ∞) and FY(y) = FX,Y(∞, y):

  fX(x) = d/dx ∫_{−∞}^{x} [ ∫_{−∞}^{∞} fX,Y(x', y') dy' ] dx' = ∫_{−∞}^{∞} fX,Y(x, y') dy'    (4.15a)

  fY(y) = ∫_{−∞}^{∞} fX,Y(x', y) dx'.                                       (4.15b)
INDEPENDENCE OF TWO RANDOM VARIABLES

X and Y are independent random variables if any event A1 defined in terms of X is independent of any event A2 defined in terms of Y:

  P[X in A1, Y in A2] = P[X in A1] P[Y in A2].                              (4.17)

Suppose that X and Y are a pair of discrete random variables. If we let A1 = {X = xj} and A2 = {Y = yk}, then the independence of X and Y implies that

  pX,Y(xj, yk) = P[X = xj, Y = yk]
               = P[X = xj] P[Y = yk]
               = pX(xj) pY(yk)   for all xj and yk.                          (4.18)
4.4 CONDITIONAL PROBABILITY AND CONDITIONAL EXPECTATION

Conditional Probability
From Section 2.4, we know

  P[Y in A | X = x] = P[Y in A, X = x] / P[X = x].                           (4.22)

If X is discrete, then Eq. (4.22) can be used to obtain the conditional cdf of Y given X = xk:

  FY(y | xk) = P[Y ≤ y, X = xk] / P[X = xk],   for P[X = xk] > 0.             (4.23)

The conditional pdf of Y given X = xk, if the derivative exists, is given by

  fY(y | xk) = d/dy FY(y | xk).                                               (4.24)
MULTIPLE RANDOM VARIABLES

Joint Distributions
The joint cumulative distribution function of X1, X2, …, Xn is defined as the probability of an n-dimensional semi-infinite rectangle associated with the point (x1, …, xn):

  FX1,X2,…,Xn(x1, x2, …, xn) = P[X1 ≤ x1, X2 ≤ x2, …, Xn ≤ xn].               (4.38)

The joint cdf is defined for discrete, continuous, and random variables of mixed type.
Stochastic Processes
Let ξ denote the random outcome of an experiment. To every such outcome suppose a waveform X(t, ξ) is assigned. The collection of such waveforms {X(t, ξ1), X(t, ξ2), …, X(t, ξk), …, X(t, ξn)} forms a stochastic process. The set of {ξk} and the time index t can be continuous or discrete (countably infinite or finite) as well.

[Fig. 14.1: an ensemble of waveforms X(t, ξ1), X(t, ξ2), …, X(t, ξn) plotted against t, with sample instants t1 and t2 marked.]

For fixed ξi ∈ S (the set of all experimental outcomes), X(t, ξ) is a specific time function. For fixed t, X1 = X(t1, ξi) is a random variable. The ensemble of all such realizations X(t, ξ) over time represents the stochastic process X(t) (see Fig. 14.1). For example,

  X(t) = a cos(ω0 t + φ).
If X(t) is a stochastic process, then for fixed t, X(t) represents a random variable. Its distribution function is given by

  FX(x, t) = P{X(t) ≤ x}.

Notice that FX(x, t) depends on t, since for a different t we obtain a different random variable. Further,

  fX(x, t) = ∂FX(x, t) / ∂x

represents the first-order probability density function of the process X(t).

For t = t1 and t = t2, X(t) represents two different random variables X1 = X(t1) and X2 = X(t2) respectively. Their joint distribution is given by

  FX(x1, x2, t1, t2) = P{X(t1) ≤ x1, X(t2) ≤ x2}

and

  fX(x1, x2, t1, t2) = ∂²FX(x1, x2, t1, t2) / (∂x1 ∂x2)

represents the second-order density function of the process X(t). Similarly fX(x1, x2, …, xn, t1, t2, …, tn) represents the nth-order density function of the process X(t). Complete specification of the stochastic process X(t) requires the knowledge of fX(x1, x2, …, xn, t1, t2, …, tn) for all ti, i = 1, 2, …, n and for all n (an almost impossible task in reality).
1.3 Stationary Process
Stationary Process :
The statistical characterization of a process is
independent of the time at which observation of the
process is initiated.
Nonstationary Process:
Not a stationary process (unstable phenomenon )
Consider X(t) which is initiated at t = −∞;
X(t1), X(t2), …, X(tk) denote the RVs obtained at t1, t2, …, tk.
For the RP to be stationary in the strict sense (strictly stationary), the joint distribution function must satisfy

  FX(t1+τ),…,X(tk+τ)(x1, …, xk) = FX(t1),…,X(tk)(x1, …, xk)                   (1.3)

for all time shifts τ, all k, and all possible choices of t1, t2, …, tk.
1.4 Mean, Correlation, and Covariance Functions
Let X(t) be a strictly stationary RP.
The mean of X(t) is

  μX(t) = E[X(t)] = ∫_{−∞}^{∞} x fX(t)(x) dx = μX   for all t,                (1.6)

where fX(t)(x) is the first-order pdf.
The autocorrelation function of X(t) is

  RX(t1, t2) = E[X(t1) X(t2)]
             = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2 fX(t1),X(t2)(x1, x2) dx1 dx2
             = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2 fX(0),X(t2−t1)(x1, x2) dx1 dx2
             = RX(t2 − t1).                                                   (1.7)

The autocovariance function is

  CX(t1, t2) = E[(X(t1) − μX)(X(t2) − μX)] = RX(t2 − t1) − μX²,                (1.10)

which is a function of the time difference (t2 − t1).
We can determine CX(t1, t2) if μX and RX(t2 − t1) are known.
Note that:
1. μX and RX(t2 − t1) only provide a partial description of the process.
2. If μX(t) = μX and RX(t1, t2) = RX(t2 − t1), then X(t) is wide-sense stationary (weakly stationary).
3. The class of strictly stationary processes with finite second-order moments is a subclass of the class of all wide-sense stationary processes.
Properties of the autocorrelation function
For convenience of notation, we redefine

  RX(τ) = E[X(t + τ) X(t)],   for all t.                                      (1.11)

1. The mean-square value:
   RX(0) = E[X²(t)]   (τ = 0)                                                 (1.12)
2. RX(τ) = RX(−τ)                                                             (1.13)
3. |RX(τ)| ≤ RX(0)                                                            (1.14)
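These properties are easy to observe on the random-phase sinusoid used earlier. A minimal sketch, assuming X(t) = a cos(ω0 t + φ) with φ uniform on (0, 2π) (the slides do not state the phase distribution); under that assumption RX(τ) = (a²/2) cos(ω0 τ), which does not depend on t, is even in τ, and is maximal at τ = 0.

```python
import numpy as np

# Sketch: ensemble estimate of R_X(tau) = E[X(t + tau) X(t)] for the
# random-phase sinusoid X(t) = a*cos(w0*t + phi), phi ~ U(0, 2*pi) (assumed).
rng = np.random.default_rng(0)
a, w0, t, tau = 2.0, 3.0, 1.5, 0.4
phi = rng.uniform(0.0, 2.0 * np.pi, size=200_000)   # one phase per realization

X_t = a * np.cos(w0 * t + phi)
X_t_tau = a * np.cos(w0 * (t + tau) + phi)

print(np.mean(X_t_tau * X_t))                 # ensemble-average estimate
print(0.5 * a**2 * np.cos(w0 * tau))          # theoretical R_X(tau)
```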
Cross-correlation Function

  RXY(t, u) = E[X(t) Y(u)]                                                    (1.19)
and
  RYX(t, u) = E[Y(t) X(u)].                                                   (1.20)

Note that in general RXY(t, u) and RYX(t, u) are not even functions.
The correlation matrix is

  R(t, u) = [ RX(t, u)   RXY(t, u) ]
            [ RYX(t, u)  RY(t, u)  ]

If X(t) and Y(t) are jointly stationary,

  R(τ) = [ RX(τ)   RXY(τ) ]                                                   (1.21)
         [ RYX(τ)  RY(τ)  ]

where τ = t − u.

Proof of RXY(τ) = RYX(−τ):
  RXY(τ) = E[X(t) Y(t − τ)].
Let t − τ = μ; then
  RXY(τ) = E[X(μ + τ) Y(μ)]
         = E[Y(μ) X(μ + τ)]
         = E[Y(t) X(t − (−τ))]
         = RYX(−τ).                                                           (1.22)
Power Spectrum
For a deterministic signal x(t), the spectrum is well defined: if X(ω) represents its Fourier transform, i.e., if

  X(ω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt,                                          (18-1)

then |X(ω)|² represents its energy spectrum. This follows from Parseval's theorem since the signal energy is given by

  ∫_{−∞}^{∞} x²(t) dt = (1/2π) ∫_{−∞}^{∞} |X(ω)|² dω = E.                      (18-2)

Thus |X(ω)|² Δω represents the signal energy in the band (ω, ω + Δω) (see Fig. 18.1).

[Fig. 18.1: a signal X(t) and its energy spectrum |X(ω)|², with the energy in the band (ω, ω + Δω) shaded.]
However, for stochastic processes a direct application of (18-1) generates a sequence of random variables for every ω. Moreover, for a stochastic process, E{|X(t)|²} represents the ensemble average power (instantaneous energy) at the instant t.

To obtain the spectral distribution of power versus frequency for stochastic processes, it is best to avoid infinite intervals to begin with, and start with a finite interval (–T, T) in (18-1). Formally, the partial Fourier transform of a process X(t) based on (–T, T) is given by

  XT(ω) = ∫_{−T}^{T} X(t) e^{−jωt} dt,                                          (18-3)

so that

  |XT(ω)|² / (2T)                                                               (18-4)

represents the power distribution associated with that realization based on (–T, T). Notice that (18-4) represents a random variable for every ω, and its ensemble average gives the average power distribution based on (–T, T). Thus

  PT(ω) = E{ |XT(ω)|² / (2T) }
        = (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} E{X(t1) X*(t2)} e^{−jω(t1−t2)} dt1 dt2
        = (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} RXX(t1, t2) e^{−jω(t1−t2)} dt1 dt2        (18-5)

represents the power distribution of X(t) based on (–T, T).
For wide-sense stationary (w.s.s.) processes it is possible to further simplify (18-5): if X(t) is assumed to be w.s.s., then RXX(t1, t2) = RXX(t1 − t2) and (18-5) simplifies to

  PT(ω) = (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} RXX(t1 − t2) e^{−jω(t1−t2)} dt1 dt2.

Letting τ = t1 − t2 and proceeding as in (14-24), we get

  PT(ω) = (1/2T) ∫_{−2T}^{2T} RXX(τ) e^{−jωτ} (2T − |τ|) dτ
        = ∫_{−2T}^{2T} RXX(τ) e^{−jωτ} (1 − |τ|/(2T)) dτ ≥ 0                     (18-6)

to be the power distribution of the w.s.s. process X(t) based on (–T, T). Finally, letting T → ∞ in (18-6), we obtain

  SXX(ω) = lim_{T→∞} PT(ω) = ∫_{−∞}^{∞} RXX(τ) e^{−jωτ} dτ ≥ 0                   (18-7)

to be the power spectral density of the w.s.s. process X(t).
Notice that

  RXX(τ)  ←FT→  SXX(ω) ≥ 0,                                                      (18-8)

i.e., the autocorrelation function and the power spectrum of a w.s.s. process form a Fourier transform pair, a relation known as the Wiener-Khinchin Theorem. From (18-8), the inverse formula gives

  RXX(τ) = (1/2π) ∫_{−∞}^{∞} SXX(ω) e^{jωτ} dω,                                    (18-9)

and in particular for τ = 0 we get

  (1/2π) ∫_{−∞}^{∞} SXX(ω) dω = RXX(0) = E{|X(t)|²} = P, the total power.          (18-10)

From (18-10), the area under SXX(ω) represents the total power of the process X(t), and hence SXX(ω) truly represents the power spectrum (Fig. 18.2).

If X(t) is a real w.s.s. process, then RXX(τ) = RXX(−τ), so that

  SXX(ω) = ∫_{−∞}^{∞} RXX(τ) e^{−jωτ} dτ
         = ∫_{−∞}^{∞} RXX(τ) cos(ωτ) dτ
         = 2 ∫_{0}^{∞} RXX(τ) cos(ωτ) dτ ≥ 0,                                      (18-13)

so that the power spectrum is an even function (in addition to being real and nonnegative).
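The Fourier-pair relation can be checked numerically in discrete time. The sketch below is illustrative (the MA(1) process and its parameters are assumptions, not from the slides): for X[n] = W[n] + 0.5·W[n−1] with unit-variance white W, RXX[0] = 1.25, RXX[±1] = 0.5, so SXX(ω) = 1.25 + cos ω, and the ensemble-averaged periodogram |XT(ω)|²/N should approach it.

```python
import numpy as np

# Sketch of the Wiener-Khinchin relation in discrete time for the w.s.s.
# moving-average process X[n] = W[n] + 0.5*W[n-1] (W white, unit variance):
# R_XX[0] = 1.25, R_XX[1] = 0.5, so S_XX(w) = 1.25 + cos(w).
rng = np.random.default_rng(1)
n_real, n_samp = 400, 1024
w = rng.standard_normal((n_real, n_samp + 1))
x = w[:, 1:] + 0.5 * w[:, :-1]

periodogram = np.abs(np.fft.rfft(x, axis=1)) ** 2 / n_samp
S_est = periodogram.mean(axis=0)                  # ensemble-averaged periodogram

omega = 2 * np.pi * np.arange(S_est.size) / n_samp
S_theory = 1.25 + np.cos(omega)
print(np.max(np.abs(S_est - S_theory)))           # deviation shrinks as n_real grows
```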
1.7 Power Spectral Density (PSD)
Consider the Fourier transform pair of g(t):

  G(f) = ∫_{−∞}^{∞} g(t) exp(−j2πft) dt
  g(t) = ∫_{−∞}^{∞} G(f) exp(j2πft) df

Let H(f) denote the frequency response of a filter with impulse response h(τ), so that

  h(τ1) = ∫_{−∞}^{∞} H(f) exp(j2πfτ1) df.                                          (1.34)

For the filter output Y(t),

  E[Y²(t)] = ∫∫ [ ∫_{−∞}^{∞} H(f) exp(j2πfτ1) df ] h(τ2) RX(τ2 − τ1) dτ1 dτ2
           = ∫ df H(f) ∫ dτ2 h(τ2) ∫ RX(τ2 − τ1) exp(j2πfτ1) dτ1                    (1.35)
           = ∫ df H(f) ∫ dτ2 h(τ2) exp(j2πfτ2) ∫ RX(τ) exp(−j2πfτ) dτ,              (1.36)

with τ = τ2 − τ1. The middle factor ∫ h(τ2) exp(j2πfτ2) dτ2 = H*(f) (the complex conjugate response of the filter), so

  E[Y²(t)] = ∫_{−∞}^{∞} df |H(f)|² ∫_{−∞}^{∞} RX(τ) exp(−j2πfτ) dτ,                 (1.37)

where |H(f)| is the magnitude response.

Define the Power Spectral Density (the Fourier transform of RX(τ)):

  SX(f) = ∫_{−∞}^{∞} RX(τ) exp(−j2πfτ) dτ                                           (1.38)

  E[Y²(t)] = ∫_{−∞}^{∞} |H(f)|² SX(f) df                                            (1.39)

Recall  E[Y²(t)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(τ1) h(τ2) RX(τ2 − τ1) dτ1 dτ2.            (1.33)

Let |H(f)| be the magnitude response of an ideal narrowband filter:

  |H(f)| = 1,  |f − fc| ≤ Δf/2
           0,  |f − fc| > Δf/2                                                      (1.40)

Δf: filter bandwidth.
If Δf ≪ fc and SX(f) is continuous,

  E[Y²(t)] ≈ (2Δf) SX(fc),   with SX(f) measured in W/Hz.
Properties of the PSD

  SX(f) = ∫_{−∞}^{∞} RX(τ) exp(−j2πfτ) dτ                                           (1.42)
  RX(τ) = ∫_{−∞}^{∞} SX(f) exp(j2πfτ) df                                            (1.43)

Einstein-Wiener-Khintchine relations:  SX(f) ⇌ RX(τ)
SX(f) is more useful than RX(τ)!

a. SX(0) = ∫_{−∞}^{∞} RX(τ) dτ                                                      (1.44)

b. E[X²(t)] = ∫_{−∞}^{∞} SX(f) df                                                   (1.45)

c. If X(t) is stationary,
     E[Y²(t)] = (2Δf) SX(fc) ≥ 0
   ⇒ SX(f) ≥ 0 for all f                                                            (1.46)

d. SX(−f) = ∫_{−∞}^{∞} RX(τ) exp(j2πfτ) dτ
          = ∫_{−∞}^{∞} RX(u) exp(−j2πfu) du,   u = −τ
          = SX(f)                                                                   (1.47)

e. The PSD can be associated with a pdf:
     pX(f) = SX(f) / ∫_{−∞}^{∞} SX(f) df                                            (1.48)
Cross-Spectral Densities

  SXY(f) = ∫_{−∞}^{∞} RXY(τ) exp(−j2πfτ) dτ                                         (1.68)
  SYX(f) = ∫_{−∞}^{∞} RYX(τ) exp(−j2πfτ) dτ                                         (1.69)

SXY(f) and SYX(f) may not be real.

  RXY(τ) = ∫_{−∞}^{∞} SXY(f) exp(j2πfτ) df
  RYX(τ) = ∫_{−∞}^{∞} SYX(f) exp(j2πfτ) df

Since RXY(τ) = RYX(−τ)                                                              (1.22)

  SXY(f) = SYX(−f) = S*YX(f)                                                        (1.72)
For a first-order strict-sense stationary process, from (14-14) we have

  fX(x, t) = fX(x, t + c)                                                           (14-15)

for any c. In particular, c = −t gives

  fX(x, t) = fX(x),                                                                 (14-16)

i.e., the first-order density of X(t) is independent of t. In that case

  E[X(t)] = ∫_{−∞}^{∞} x f(x) dx = μ, a constant.                                    (14-17)

Similarly, for a second-order strict-sense stationary process we have from (14-14)

  fX(x1, x2, t1, t2) = fX(x1, x2, t1 + c, t2 + c)

for any c. For c = −t2 we get

  fX(x1, x2, t1, t2) = fX(x1, x2, t1 − t2),                                          (14-18)

i.e., the second-order density function of a strict-sense stationary process depends only on the difference of the time indices t1 − t2 = τ.
In that case the autocorrelation function is given by

  RXX(t1, t2) = E{X(t1) X*(t2)}
              = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2* fX(x1, x2, τ = t1 − t2) dx1 dx2
              = RXX(t1 − t2) = RXX(τ) = R*XX(−τ),                                    (14-19)

i.e., the autocorrelation function of a second-order strict-sense stationary process depends only on the difference of the time indices τ = t1 − t2.
Notice that (14-17) and (14-19) are consequences of the stochastic process being first- and second-order strict-sense stationary. On the other hand, the basic conditions for first- and second-order stationarity – Eqs. (14-16) and (14-18) – are usually difficult to verify. In that case, we often resort to a looser definition of stationarity, known as Wide-Sense Stationarity (W.S.S.), by making use of (14-17) and (14-19) as the necessary conditions. Thus, a process X(t) is said to be Wide-Sense Stationary if

(i)  E{X(t)} = μ                                                                    (14-20)
and
(ii) E{X(t1) X*(t2)} = RXX(t1 − t2),                                                 (14-21)

i.e., for wide-sense stationary processes, the mean is a constant and the autocorrelation function depends only on the difference between the time indices. Notice that (14-20)-(14-21) do not say anything about the nature of the probability density functions, and instead deal with the average behavior of the process. Since (14-20)-(14-21) follow from (14-16) and (14-18), strict-sense stationarity always implies wide-sense stationarity. However, the converse is not true in general, the only exception being the Gaussian process.
X 1  X ( t 1process.
), X 2  X ( t2 ),
X n  X ( t n ) are jointly Gaussian
variables
 , for any
This follows, since if X(t)1 is2 ,t , t whose joint
a tGaussian
n process, then by
definition
characteristic function random
is given by
n n
j   ( tk )  k    C ( ti , t k )  i  k / 2
(14-
XX

 (1 , 2 , ,n )  k 1 l ,k

22)
X

i ek
where C (t ,t ) is as defined on (14-9). If X(t) is wide-
XX
stationary, then using (14-20)-(14-21) in (14-22)
sense
we get n n n
j    2  C
k
1
XX
(ti t k) 
i k

 X (1 ,2 , , n )  k11 11 k

(14-
and hence e if the set of time indices are shifted by a 23)
generate cato
constant new set of jointly Gaussian random X  X
1 1
X 2  X 2(t , Xn  X (t
variables then their joint  c),
(t
 n  c), is identical
function  c)
to (14-23). characteristic
Thus the set of random
variables
and {X i }i have the same joint probability distribution for all n
n
i i
all c, establishing
1 the strict sense stationarity of Gaussian 1
processes from nits wide-sense stationarity.
and To summarize
{X } if X(t) is a Gaussian process, then
wide-sense stationarity (w.s.s)  strict-sense
stationarity (s.s.s).
Notice that since the joint p.d.f of Gaussian random variables
depends only on their second order statistics, which is also
the basis
