Department of Electrical Engineering & Computer Science
6.041/6.431: Probabilistic Systems Analysis (Fall 2009)
Final Solutions
December 15, 2009
Problem 2. (20 points)

(a) (5 points) We're given that the joint PDF is constant in the shaded region, and since the PDF must integrate to 1, the constant must equal 1 over the area of the region. Thus,

$$c = \frac{1}{1/2} = 2.$$
(b) (5 points) The marginal PDFs of X and Y are found by integrating the joint PDF over all possible values of y and x, respectively. To find the marginal PDF of X, we take a particular value x and integrate over all possible y values in the vertical slice at X = x. Since the joint PDF is constant, this integral simplifies to multiplying the joint PDF by the width of the slice. Because the width of the slice is always 1/2 for any $x \in [0, 1]$, the marginal PDF of X is uniform over that interval:

$$f_X(x) = \begin{cases} 1, & 0 \le x \le 1, \\ 0, & \text{otherwise.} \end{cases}$$

Since the joint PDF is symmetric, the marginal PDF of Y is also uniform:

$$f_Y(y) = \begin{cases} 1, & 0 \le y \le 1, \\ 0, & \text{otherwise.} \end{cases}$$

(c) (5 points) To find the conditional expectation and variance, we first need to determine the conditional distribution given Y = 1/4. At Y = 1/4, we take a horizontal slice of a uniform joint PDF, which gives a uniform distribution over the interval $x \in [1/4, 3/4]$. Thus, we have

$$E[X \mid Y = 1/4] = \frac{1}{2}, \qquad \operatorname{var}(X \mid Y = 1/4) = \frac{(1/2)^2}{12} = \frac{1}{48}.$$
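As a quick numerical sanity check of part (c) (not part of the original solutions), sampling from the conditional distribution, which is uniform on [1/4, 3/4], reproduces the mean and variance above:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0.25, 0.75, size=1_000_000)  # X | Y = 1/4 ~ Uniform[1/4, 3/4]
    print(x.mean(), x.var())  # ~0.5 and ~0.020833 (= 1/48)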
(d) (5 points) At Y = 3/4, we have a horizontal slice of the joint PDF, which is nonzero when $x \in [0, 1/4] \cup [3/4, 1]$. Since the joint PDF is uniform, the slice is also uniform, but only over the range of x where the joint PDF is nonzero (i.e., where (x, y) lies in the shaded region). Thus, the conditional PDF of X is

$$f_{X|Y}(x \mid 3/4) = \begin{cases} 2, & x \in [0, 1/4] \cup [3/4, 1], \\ 0, & \text{otherwise.} \end{cases}$$
$$\frac{1}{3} \cdot \frac{1}{6} + \frac{1}{3} \cdot 1 = \frac{7}{18}.$$
$$r_{11}(n+1) = \sum_{j=1}^{4} p_{1j}\, r_{j1}(n).$$

Since states 3 and 4 are absorbing states, this expression simplifies to

$$r_{11}(n+1) = \frac{1}{4}\, r_{11}(n) + \frac{1}{4}\, r_{21}(n).$$

Alternatively,

$$r_{11}(n+1) = \sum_{k=1}^{4} r_{1k}(n)\, p_{k1} = \frac{1}{4}\, r_{11}(n) + \frac{1}{3}\, r_{12}(n).$$
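The recursion can be cross-checked against the n-step transition probabilities. A minimal sketch (not part of the original solutions): the transition matrix below is read off the equations in this solution, with the row for state 2 inferred from the absorption equations in part (e), so its entries are an assumption of this sketch.

    import numpy as np

    P = np.array([[1/4, 1/4, 1/3, 1/6],   # state 1 (from the equations above)
                  [1/3, 0.0, 1/3, 1/3],   # state 2 (inferred from part (e); assumed)
                  [0.0, 0.0, 1.0, 0.0],   # state 3, absorbing
                  [0.0, 0.0, 0.0, 1.0]])  # state 4, absorbing

    r11, r21 = 1.0, 0.0                   # r11(0) = 1, r21(0) = 0
    for n in range(1, 6):
        # simplified recursions: the absorbing states contribute nothing,
        # and r21(n+1) = p21 * r11(n) = (1/3) r11(n)
        r11, r21 = 0.25 * r11 + 0.25 * r21, (1/3) * r11
        exact = np.linalg.matrix_power(P, n)[0, 0]
        print(n, r11, exact)              # the two columns should agree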
(d) (5 points) The steady-state probabilities do not exist, since there is more than one recurrent class. The long-term state probabilities would depend on the initial state.

(e) (5 points) To find the probability of being absorbed by state 4, we set up the absorption-probability equations. Note that $a_4 = 1$ and $a_3 = 0$:

$$a_1 = \frac{1}{4}a_1 + \frac{1}{4}a_2 + \frac{1}{3}a_3 + \frac{1}{6}a_4 = \frac{1}{4}a_1 + \frac{1}{4}a_2 + \frac{1}{6},$$

$$a_2 = \frac{1}{3}a_1 + \frac{1}{3}a_3 + \frac{1}{3}a_4 = \frac{1}{3}a_1 + \frac{1}{3}.$$

Solving these equations yields $a_1 = \frac{3}{8}$.
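The 2-by-2 system above can also be solved mechanically. A minimal sketch (not part of the original solutions), rewriting the equations as $(I - A)\mathbf{a} = \mathbf{b}$:

    import numpy as np

    # a1 = (1/4) a1 + (1/4) a2 + 1/6
    # a2 = (1/3) a1 + 1/3
    A = np.array([[1/4, 1/4],
                  [1/3, 0.0]])
    b = np.array([1/6, 1/3])
    a1, a2 = np.linalg.solve(np.eye(2) - A, b)
    print(a1, a2)   # 0.375 (= 3/8) and 0.45833... (= 11/24)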
which is a sum of a random number of i.i.d. random variables. Thus, we can use the law of iterated expectations to find

$$E[C] = E\big[E[C \mid L]\big] = E\big[L\,E[C_i]\big] = E[L]\,E[C_i] = 68\left(1 \cdot \frac{1}{3} + 2 \cdot \frac{2}{3}\right) = 68 \cdot \frac{5}{3} = \frac{340}{3}.$$

(c) (5 points) Let X be the number of laps (out of 72) after which Al drank 2 cups of water. Then, in order for him to drink at least 130 cups, we must have

$$1 \cdot (72 - X) + 2X \ge 130,$$

which implies that we need $X \ge 58$.
Now, let $X_i$ be i.i.d. Bernoulli random variables that equal 1 if Al drank 2 cups of water following his $i$th lap and 0 if he drank 1 cup. Then $X = X_1 + X_2 + \cdots + X_{72}$. Evidently, X is a binomial random variable with $n = 72$ and $p = 2/3$, and the probability we are looking for is

$$P(X \ge 58) = \sum_{k=58}^{72} \binom{72}{k} \left(\frac{2}{3}\right)^k \left(\frac{1}{3}\right)^{72-k}.$$
This expression is difficult to calculate, but since we're dealing with the sum of a relatively large number of i.i.d. random variables, we can invoke the Central Limit Theorem to approximate this probability using a normal distribution. In particular, we can approximate X as a normal random variable with mean $np = 48$ and variance $np(1-p) = 16$.
$$p_N(n) = \int f_X(x)\, p_{N|X}(n \mid x)\, dx = \frac{\lambda^2}{n!} \int_{x=0}^{\infty} x^{n+1} e^{-(1+\lambda)x}\, dx = \frac{\lambda^2}{n!} \cdot \frac{(n+1)!}{(1+\lambda)^{n+2}} = \begin{cases} \dfrac{\lambda^2 (n+1)}{(1+\lambda)^{n+2}}, & n = 0, 1, 2, \ldots \\ 0, & \text{otherwise.} \end{cases}$$

(c) (5 points) The equation for $\hat{X}_{\text{lin}}(N)$, the linear least-squares estimator of X based on an observation of N, is

$$\hat{X}_{\text{lin}}(N) = E[X] + \frac{\operatorname{cov}(X, N)}{\operatorname{var}(N)}\,(N - E[N]).$$
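A quick numeric check of the integral above (not part of the original solutions): $\lambda = 2$ is an illustrative value, and the Erlang prior $f_X(x) = \lambda^2 x e^{-\lambda x}$ and Poisson likelihood $p_{N|X}(n \mid x) = x^n e^{-x}/n!$ are the densities implied by the integrand, since the problem statement is not reproduced in this excerpt.

    import numpy as np
    from math import factorial
    from scipy.integrate import quad

    lam = 2.0   # illustrative value; the solution keeps lambda symbolic

    def integrand(x, n):
        f_X = lam**2 * x * np.exp(-lam * x)             # Erlang prior (assumed)
        p_N_given_X = x**n * np.exp(-x) / factorial(n)  # Poisson(x) likelihood (assumed)
        return f_X * p_N_given_X

    for n in range(4):
        numeric, _ = quad(integrand, 0, np.inf, args=(n,))
        closed = lam**2 * (n + 1) / (1 + lam)**(n + 2)
        print(n, numeric, closed)   # the two columns should agree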
(d) (5 points) The expression for $\hat{X}_{\text{MAP}}(N)$, the MAP estimator of X based on an observation of N, is

$$\hat{X}_{\text{MAP}}(N) = \arg\max_{x} f_{X|N}(x \mid n) = \arg\max_{x} \frac{f_X(x)\, p_{N|X}(n \mid x)}{p_N(n)} = \arg\max_{x} f_X(x)\, p_{N|X}(n \mid x) = \arg\max_{x}\ x^{n+1} e^{-(1+\lambda)x},$$

where the third equality holds since $p_N(n)$ has no dependency on x, and the last equality holds by removing all quantities that have no dependency on x. The max can be found by differentiation, and the result is

$$\hat{X}_{\text{MAP}}(N) = \frac{1 + N}{1 + \lambda}.$$

This is the only local extremum in the range $x \in [0, \infty)$. Moreover, $f_{X|N}(x \mid n)$ equals 0 at $x = 0$, goes to 0 as $x \to \infty$, and $f_{X|N}(x \mid n) > 0$ otherwise. We can therefore conclude that $\hat{X}_{\text{MAP}}(N)$ is indeed a maximum.

(e) (5 points) To minimize the probability of error, we choose the hypothesis that has the larger posterior probability.
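A numeric cross-check of the MAP formula in part (d) (not part of the original solutions; $\lambda = 2$ is an assumed illustrative value):

    import numpy as np

    lam = 2.0                                 # assumed value; lambda is symbolic above
    x = np.linspace(1e-6, 20, 2_000_000)      # fine grid over (0, 20]
    for n in range(5):
        posterior = x**(n + 1) * np.exp(-(1 + lam) * x)  # unnormalized f_{X|N}(x|n)
        print(n, x[np.argmax(posterior)], (1 + n) / (1 + lam))  # should agree closely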
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.