
Faculty of Actuaries Institute of Actuaries

EXAMINATIONS

September 2004

Subject 103 Stochastic Modelling

EXAMINERS' REPORT


The examiners were pleased to note that the overall quality of answers on this final sitting of Subject 103 was high and that many of the candidates demonstrated a good knowledge of the principles and practice of stochastic modelling. As always, credit was awarded for comments which showed that candidates had an understanding of the topic covered in a question, even if the calculations gave the wrong answer due to some mathematical error. Question 7 was particularly well answered, with Questions 2 and 4 not far behind. Questions 6 and 10 had the lowest proportion of good answers; it is possible that time pressure played a role in the case of Question 10.

1 (i) Let $n_{ij}$ denote the number of direct transitions from state $i$ to state $j$, with $n_{i+}$ the total number of transitions out of state $i$. Then $\hat{p}_{ij} = n_{ij} / n_{i+}$.
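As an illustration of this estimator (an editorial sketch, not part of the original solution), the following Python snippet counts direct transitions along an observed path; the state labels and the sample path are hypothetical.

    import numpy as np

    def estimate_transition_matrix(path, states):
        """Estimate p_ij = n_ij / n_i+ from an observed discrete-time path."""
        idx = {s: k for k, s in enumerate(states)}
        n = np.zeros((len(states), len(states)))
        for a, b in zip(path[:-1], path[1:]):      # count direct transitions i -> j
            n[idx[a], idx[b]] += 1
        row_totals = n.sum(axis=1, keepdims=True)  # n_i+ = total transitions out of state i
        return np.divide(n, row_totals, out=np.zeros_like(n), where=row_totals > 0)

    # Hypothetical observed path of a 3-state chain
    path = list("AABACCABBACABBC")
    print(estimate_transition_matrix(path, states=["A", "B", "C"]))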

(ii) Model fitting aims to find the best-fitting model in a given class. But it is
conceivable that even the best-fitting model in the class does not fit very well.
Model validation is a set of procedures to test the adequacy of the fit.

(iii) Sensitivity analysis is part of model validation. The purpose is to determine whether the behaviour of the fitted model would be substantially different if the parameter values were slightly different from the estimates already obtained.

The technique involves simulating the fitted process a large number of times,
using several simulations for each of a number of slightly different parameter
values, then examining the output of the simulation to attempt to identify
systematic differences.

It is important that the same sequence of random numbers be used in each of the sets of simulations to ensure comparability.
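A minimal Python sketch of this common-random-numbers idea (an editorial illustration, not from the report): the same seeds, and hence the same innovation sequences, are reused for each slightly perturbed parameter value, so that differences in the output are attributable to the parameters rather than to sampling noise. The AR(1) example process and the parameter values are assumed purely for demonstration.

    import numpy as np

    def simulate_ar1(alpha, n_steps, seed):
        """Simulate X_t = alpha * X_{t-1} + e_t using a fixed random seed."""
        rng = np.random.default_rng(seed)       # same seed => same innovations e_t
        e = rng.standard_normal(n_steps)
        x = np.zeros(n_steps)
        for t in range(1, n_steps):
            x[t] = alpha * x[t - 1] + e[t]
        return x

    # Sensitivity run: perturb the fitted parameter slightly, reusing the same seeds
    for alpha in (0.58, 0.60, 0.62):
        outputs = [simulate_ar1(alpha, 500, seed).std() for seed in range(20)]
        print(alpha, np.mean(outputs))          # differences now reflect alpha, not the random numbers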

Many candidates failed to mention the importance of using the same sequence
of pseudo-random numbers. Apart from that, most answers showed good
knowledge of the principles of modelling.

2 (i) It is inadequate because someone who has never suffered from disease A or B
is not in the same position as someone who has suffered from one or both in
the past but is currently healthy.

The state space should be extended by splitting state 0 into 3: 0: "Has never suffered from A or B", A: "Has suffered from A but is now healthy", AB: "Has suffered from both A and B but is now healthy".

(ii) [Transition diagram: the healthy states 0, A and AB together with the sick states 1 and 2, with arrows indicating the possible transitions between them.]

(iii) Only a time-inhomogeneous model can properly reflect the dependence of both the recovery rate for A and the infection rate for B on the age of the person.

(iv) If, in a population taken as a whole, the number of people in each age group is roughly constant over time, then the age-dependent transition rates of the individuals who make up the population can be averaged out to give a time-homogeneous model which works perfectly adequately, given that national medical services are generally only concerned with total numbers falling ill.

This question was answered well in general. Where candidates lost marks it was often due to mis-specifying the additional states in part (i). Splitting state A into "Has recovered from Disease A and is aged below 18" and "Has recovered from Disease A and is aged 18 or more" is reasonable when modelling an entire population, but does not lead to a time-homogeneous Markov model when applied to a single individual, since one's 18th birthday does not occur at a random time. However, answers along these lines with good explanations were given full marks.

3 (i) Set $X_i$ as follows:

$$X_i = \begin{cases} A & \text{if } 0 \le U_i < 1/5 \\ B & \text{if } 1/5 \le U_i < 9/20 \\ C & \text{if } 9/20 \le U_i \le 1 \end{cases}$$

(ii) Use the inverse transformation method.

The distribution function is
$$F(x) = \int_4^x \frac{10 - t}{18}\,dt = 1 - \frac{(10 - x)^2}{36} \quad \text{for } 4 \le x \le 10.$$

Solving the equation $F(x) = u$ gives $F^{-1}(u) = 10 - 6\sqrt{1 - u}$.

So we can set $X_i = 10 - 6\sqrt{1 - U_i}$, or alternatively we could use $X_i = 10 - 6\sqrt{U_i}$.
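A brief Python sketch (an editorial illustration, not part of the solution) of this inverse-transform sampler; the comparison with the theoretical mean of 6 for the density $f(x) = (10 - x)/18$ on $[4, 10]$ is an added sanity check.

    import numpy as np

    rng = np.random.default_rng(2004)
    u = rng.random(100_000)

    x = 10 - 6 * np.sqrt(1 - u)        # F^{-1}(u) = 10 - 6*sqrt(1 - u)
    x_alt = 10 - 6 * np.sqrt(u)        # equivalent, since 1 - U is also U(0, 1)

    # The density f(x) = (10 - x)/18 on [4, 10] has mean 6
    print(x.mean(), x_alt.mean())      # both should be close to 6
    print(x.min(), x.max())            # samples lie in [4, 10]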

(iii) Use the acceptance-rejection method:

Let $V_1 = \pi U_1$, so that $V_1$ is uniformly distributed on $[0, \pi]$ and has density function $g(x) = 1/\pi$ over that range.

We define
$$C = \sup_{0 \le x \le \pi} \frac{f(x)}{g(x)} = \sup_{0 \le x \le \pi} 2\sin^2 x = 2.$$

If $U_2 < \sin^2 V_1$, let $X_1 = V_1$; otherwise reject this value and select a new pair $U_1, U_2$. Repeat for the other $X_i$.
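An illustrative Python sketch of this acceptance-rejection scheme (not part of the original solution), taking $f(x) = (2/\pi)\sin^2 x$ on $[0, \pi]$ as implied by the working above.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_sin2(n):
        """Acceptance-rejection for f(x) = (2/pi) * sin(x)**2 on [0, pi],
        using the uniform proposal g(x) = 1/pi and C = 2."""
        out = []
        while len(out) < n:
            u1, u2 = rng.random(2)
            v1 = np.pi * u1                 # proposal V1 ~ U(0, pi)
            if u2 < np.sin(v1) ** 2:        # accept with probability f(v1)/(C*g(v1)) = sin^2(v1)
                out.append(v1)
        return np.array(out)

    xs = sample_sin2(50_000)
    print(xs.mean())   # should be close to pi/2 by symmetry of f about pi/2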

Answers to parts (i) and (ii) were generally good. For part (iii) many
candidates only described the general theory without specifying g(x) or
calculating the constant C.

4 (i) A standard Brownian motion $\{B_t\}$ is defined by the following properties:

$B_0 = 0$ and $B_t$ has independent increments: $B_t - B_s$ is independent of $B_u$ for $0 \le u \le s$ and $s < t$.
$B_t$ has stationary, Gaussian increments: $B_t - B_s \sim N(0, t - s)$.
$B_t$ has continuous sample paths, i.e. $t \mapsto B_t$ is continuous.

(ii) Using Itô's Lemma gives
$$d(\log S_t) = \frac{1}{S_t}\,dS_t - \frac{1}{2S_t^2}(dS_t)^2 = \left(\mu - \frac{\sigma^2}{2}\right)dt + \sigma\,dB_t$$

This implies that
$$\log S_t = \log S_0 + \left(\mu - \frac{\sigma^2}{2}\right)t + \sigma B_t$$

or, finally,
$$S_t = S_0\,e^{(\mu - \sigma^2/2)t + \sigma B_t}$$
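As an illustrative check (an editorial addition), a short Python sketch that simulates $S_t$ from this closed form, assuming the underlying model $dS_t = S_t(\mu\,dt + \sigma\,dB_t)$ and the parameter values used in part (iii).

    import numpy as np

    rng = np.random.default_rng(1)
    mu, sigma, s0, t, n_paths = 0.2, 0.15, 96.0, 0.25, 200_000

    b_t = np.sqrt(t) * rng.standard_normal(n_paths)              # B_t ~ N(0, t)
    s_t = s0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * b_t)   # exact solution

    print(s_t.mean(), s0 * np.exp(mu * t))              # E[S_t] = S_0 * exp(mu * t)
    print(np.log(s_t / s0).std(), sigma * np.sqrt(t))   # sd of the log-return is sigma*sqrt(t)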

(iii) We have
$$P(S_t > x \mid S_0 = y) = P\left(\sigma B_t > \log\frac{x}{y} - \left(\mu - \frac{\sigma^2}{2}\right)t\right)
= P\left(B_t > \frac{1}{\sigma}\left[\log\frac{x}{y} - \left(\mu - \frac{\sigma^2}{2}\right)t\right]\right)
= 1 - \Phi\left(\frac{\log\frac{x}{y} - \left(\mu - \frac{\sigma^2}{2}\right)t}{\sigma\sqrt{t}}\right)$$

Substituting the values $x = 120$, $y = 96$, $\mu = 0.2$, $\sigma = 0.15$ and $t = 0.25$ years above, we find that the answer is
$$1 - \Phi(2.3461) = 1 - 0.9905 = 0.0095.$$
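A quick numerical confirmation of this figure (an editorial addition) using the normal distribution function.

    from math import log, sqrt
    from scipy.stats import norm

    x, y, mu, sigma, t = 120.0, 96.0, 0.2, 0.15, 0.25
    z = (log(x / y) - (mu - 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    print(z)                 # approximately 2.346
    print(1 - norm.cdf(z))   # approximately 0.0095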

The calculations in parts (ii) and (iii) were well done. The definition in part
(i) caused some problems: it is necessary to mention the stationary,
independent increments property; then either of the two remaining properties
(continuous sample paths, normally distributed increments) implies the other.

5 (i) The Markov property is clear: the chain jumps either up or down by 1, with probabilities depending only on the current state, not the past history.

$P(X_{n+1} = i + 1 \mid X_n = i)$ is the probability that the $(n+1)$th ball selected is red, which is just $1/N$ times the number of red balls at time $n$, i.e. $(N - i)/N$.

(ii) From any state $i$ it is possible to reach any other state $j$ in just $|j - i|$ steps, either all upwards or all downwards. This means that the chain is irreducible.

Every transition takes the chain from an even state to an odd one or vice versa, which implies that the period must be an even number.

On the other hand, starting from state 0 it is possible to return to 0 in two steps. Therefore state 0 has period 2 and, by irreducibility, all states have period 2.

(iii) To find the stationary distribution we can use the relationship suggested by the detailed balance equations:
$$\pi_i\,\frac{N - i}{N} = \pi_{i+1}\,\frac{i + 1}{N} \quad \text{for } i = 0, 1, 2, \ldots, N - 1.$$

Thus we get the recursive relationship
$$\pi_{i+1} = \frac{N - i}{i + 1}\,\pi_i \quad \text{for } i = 0, 1, 2, \ldots, N - 1.$$

Starting with $i = 0$ and working forwards we get
$$\pi_1 = \frac{N}{1}\pi_0, \qquad \pi_2 = \frac{N - 1}{2}\pi_1 = \frac{N(N - 1)}{(2)(1)}\pi_0, \qquad \pi_3 = \frac{N - 2}{3}\pi_2 = \frac{N(N - 1)(N - 2)}{(3)(2)(1)}\pi_0$$

and in general
$$\pi_i = \frac{N(N - 1)(N - 2)\cdots(N - i + 1)}{i(i - 1)(i - 2)\cdots(2)(1)}\,\pi_0 = \frac{N!}{(N - i)!\,i!}\,\pi_0 = \binom{N}{i}\pi_0.$$

Alternatively, write down the transition matrix $P$ and use the equation $\pi^T P = \pi^T$ to obtain
$$\frac{1}{N}\pi_1 = \pi_0 \implies \pi_1 = N\pi_0$$
$$\pi_0 + \frac{2}{N}\pi_2 = \pi_1 \implies \pi_2 = \frac{N(N - 1)}{2}\pi_0$$
$$\frac{N - 1}{N}\pi_1 + \frac{3}{N}\pi_3 = \pi_2 \implies \pi_3 = \frac{N(N - 1)(N - 2)}{3!}\pi_0, \text{ etc.}$$

To find $\pi_0$, we use the fact that $\pi_0 + \pi_1 + \pi_2 + \cdots + \pi_N = 1$,

i.e. $\pi_0\left[1 + \sum_{i=1}^{N}\frac{N!}{(N - i)!\,i!}\right] = \pi_0\sum_{i=0}^{N}\binom{N}{i} = \pi_0\,2^N = 1$, so $\pi_0 = \dfrac{1}{2^N}$.

Therefore the stationary distribution $\pi = (\pi_0, \pi_1, \pi_2, \ldots, \pi_N)$ is given by
$$\pi_i = \binom{N}{i}\frac{1}{2^N} \quad \text{for } i = 0, 1, 2, \ldots, N.$$
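As an illustrative check (not part of the solution), the following Python sketch verifies for a small urn, say $N = 6$, that this Binomial$(N, 1/2)$ distribution satisfies both $\pi^T P = \pi^T$ and the detailed balance equations.

    import numpy as np
    from math import comb

    N = 6                                                # small illustrative choice
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        if i < N:
            P[i, i + 1] = (N - i) / N                    # one of the N - i red balls is drawn
        if i > 0:
            P[i, i - 1] = i / N

    pi = np.array([comb(N, i) / 2**N for i in range(N + 1)])   # claimed stationary distribution
    print(np.allclose(pi @ P, pi))                              # pi^T P = pi^T
    print(np.allclose(pi[:-1] * P[np.arange(N), np.arange(1, N + 1)],
                      pi[1:] * P[np.arange(1, N + 1), np.arange(N)]))  # detailed balance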

(iv) $P(X_n = j \mid X_0 = 0)$ does not converge, being alternately zero and non-zero, since $X$ is periodic.

The derivation of the stationary distribution in part (iii) caused difficulties with many candidates, but otherwise candidates showed a good understanding of discrete-time Markov chains.


6 (i) (a) $\dfrac{dm_1}{dt} = \dfrac{d}{dt}E_x[X_t] = \dfrac{1}{dt}E_x[dX_t] = E_x[\alpha(b - X_t)] = \alpha(b - m_1)$, where $E_x$ denotes conditional expectation given $X_0 = x$. This derivation uses the fact that the increments of Brownian motion have expectation equal to zero.

(b) $\dfrac{d}{dt}\left[e^{\alpha t} m_1(t)\right] = \alpha b\,e^{\alpha t}$, implying that
$$m_1(t) = e^{-\alpha t}\left[x + b(e^{\alpha t} - 1)\right] = b + (x - b)e^{-\alpha t}.$$

(ii) (a) $dY_t = 2X_t\,dX_t + (dX_t)^2 = 2X_t\left[\alpha(b - X_t)\,dt + \sigma\sqrt{X_t}\,dB_t\right] + \sigma^2 X_t\,dt = \left[2\alpha b + \sigma^2\right]X_t\,dt - 2\alpha X_t^2\,dt + 2\sigma X_t^{3/2}\,dB_t$

(b) $\dfrac{d}{dt}m_2(t) = \left[2\alpha b + \sigma^2\right]m_1(t) - 2\alpha\,m_2(t)$. Again we have used the fact that Brownian increments have mean zero.

(c) We do not need to solve the equation, but just to note that since $dm_2/dt$ tends to 0, this implies that $2\alpha\lim_{t\to\infty} m_2(t) = \left[2\alpha b + \sigma^2\right]\lim_{t\to\infty} m_1(t) = 2\alpha b^2 + b\sigma^2$. Therefore
$$\lim_{t\to\infty} E[X_t^2 \mid X_0 = x] = b^2 + \frac{b\sigma^2}{2\alpha},$$
from which we deduce that
$$\lim_{t\to\infty}\operatorname{Var}[X_t \mid X_0 = x] = \frac{b\sigma^2}{2\alpha}.$$
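As an editorial illustration, a crude Euler-Maruyama simulation of the square-root diffusion $dX_t = \alpha(b - X_t)\,dt + \sigma\sqrt{X_t}\,dB_t$ implied by the working above; the parameter values are assumed purely for demonstration, and the simulated long-run mean and variance are compared with $b$ and $b\sigma^2/(2\alpha)$.

    import numpy as np

    rng = np.random.default_rng(7)
    alpha, b, sigma, x0 = 0.8, 2.0, 0.5, 1.0     # assumed illustrative parameter values
    dt, n_steps, n_paths = 0.01, 5_000, 2_000    # horizon of 50 time units

    x = np.full(n_paths, x0)
    for _ in range(n_steps):                     # Euler-Maruyama step for the square-root diffusion
        dB = np.sqrt(dt) * rng.standard_normal(n_paths)
        x = x + alpha * (b - x) * dt + sigma * np.sqrt(np.maximum(x, 0.0)) * dB
        x = np.maximum(x, 0.0)                   # keep the discretised paths non-negative

    print(x.mean(), b)                           # limiting mean b
    print(x.var(), b * sigma**2 / (2 * alpha))   # limiting variance b*sigma^2/(2*alpha)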

This question was relatively poorly answered, although much of it is based on the standard theory of Ordinary Differential Equations.

7 (i) Taking covariances with $X_{t-k}$ for $k \ge 1$ in (1) gives
$$\operatorname{Cov}(X_t, X_{t-k}) = \alpha_1\operatorname{Cov}(X_{t-1}, X_{t-k}) + \alpha_2\operatorname{Cov}(X_{t-2}, X_{t-k}) + \cdots + \alpha_p\operatorname{Cov}(X_{t-p}, X_{t-k}),$$
which gives the Yule-Walker equations since, by definition, $\gamma_k = \operatorname{Cov}(X_t, X_{t-k})$ for $0 \le k \le p$.

For $k = 0$, there is an extra term $\sigma^2$ which accounts for $\operatorname{Cov}(X_t, e_t)$.

(ii) A diagnostic test is based on the partial ACF and uses the fact that, for an AR(2) process, the partial autocorrelations $\phi_k$ are zero for $k > 2$.

The values of $\phi_k$ are estimated by the sample partial ACF, $\hat\phi_k$, and for $k > 2$ the asymptotic variance of $\hat\phi_k$ is $1/n$. Using a normal approximation, values of the sample partial ACF outside the range $\pm 2/\sqrt{n}$ indicate that the AR(2) model may be inadequate.

(iii) (a) The process can be written in terms of the backward shift operator as $(1 - 0.6B - 0.3B^2)X_t = e_t$.

Hence the characteristic polynomial is $1 - 0.6z - 0.3z^2$, with roots
$$z = \frac{-0.6 \pm \sqrt{(0.6)^2 + 1.2}}{0.6},$$
i.e. the roots are $-1 \pm \sqrt{1.56}/0.6$, approximately $1.08$ and $-3.08$.

Since both roots lie outside the unit circle, the process can be stationary.

(b) $X_t$ is not Markov, since the conditional distribution of $X_{k+1}$ given the history up to time $k$ depends on $X_{k-1}$ as well as on $X_k$.

(c) The Yule-Walker equations in this case yield
$$\gamma_0 = 0.6\gamma_1 + 0.3\gamma_2 + 1 \qquad (3)$$
$$\gamma_1 = 0.6\gamma_0 + 0.3\gamma_1 \qquad (4)$$
$$\gamma_2 = 0.6\gamma_1 + 0.3\gamma_0 \qquad (5)$$

From (4) we have
$$0.7\gamma_1 = 0.6\gamma_0 \implies \gamma_1 = \frac{6\gamma_0}{7} \qquad (6)$$

and substituting into (5) we get
$$\gamma_2 = \frac{36}{70}\gamma_0 + \frac{3}{10}\gamma_0 = \frac{57}{70}\gamma_0 \qquad (7)$$

Inserting the last two equations into (3) we obtain
$$\gamma_0 = \frac{36}{70}\gamma_0 + \frac{171}{700}\gamma_0 + 1$$

which gives
$$\gamma_0\left(1 - \frac{36}{70} - \frac{171}{700}\right) = 1 \implies \gamma_0 = \frac{700}{169}$$

Then (6) and (7) yield respectively
$$\gamma_1 = \frac{6\gamma_0}{7} = \frac{600}{169}, \qquad \gamma_2 = \frac{570}{169}.$$
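An illustrative numerical solution of the linear system (3)-(5) (an editorial addition), confirming these values.

    import numpy as np

    # Yule-Walker system for X_t = 0.6 X_{t-1} + 0.3 X_{t-2} + e_t with Var(e_t) = 1:
    #   gamma_0 - 0.6 gamma_1 - 0.3 gamma_2 = 1
    #  -0.6 gamma_0 + 0.7 gamma_1          = 0
    #  -0.3 gamma_0 - 0.6 gamma_1 + gamma_2 = 0
    A = np.array([[ 1.0, -0.6, -0.3],
                  [-0.6,  0.7,  0.0],
                  [-0.3, -0.6,  1.0]])
    rhs = np.array([1.0, 0.0, 0.0])

    gamma = np.linalg.solve(A, rhs)
    print(gamma)                             # approx [4.1420, 3.5503, 3.3728]
    print(700 / 169, 600 / 169, 570 / 169)   # the exact fractions quoted above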

The examiners were pleased to note the high quality of answers to this
question. It appears that the theoretical principles of Time Series analysis are
well understood.

8 (i) The equation is
$$X_t - \mu = \alpha(X_{t-1} - \mu) + e_t + \beta e_{t-1}$$

The parameters are $\alpha$ (the autoregressive parameter), $\beta$ (the moving average parameter), the mean level $\mu$ and the innovation standard deviation $\sigma$.

(ii) A time series process is (weakly) stationary if the mean of the process, $m_t = E(X_t)$, does not vary with time and the covariance of the process, $\operatorname{Cov}(X_t, X_s)$, depends only on the time difference $t - s$ and not on the particular values of $t$ and $s$.

For the model in (1) to be stationary, $|\alpha| < 1$ is needed.

(iii) For the method of moments, we calculate the theoretical ACF values $\rho_1, \rho_2$ in terms of the parameters $\alpha, \beta$. Then we find the sample ACF, say $r_1, r_2$, from the data. Subsequently we obtain estimates for $\alpha, \beta$ by equating $\rho_1$ with $r_1$ and $\rho_2$ with $r_2$.

The value of $\sigma^2$ is estimated using the calculated value of $\gamma_0$ and the sample variance, whereas an estimate for $\mu$ is the sample mean $\bar{x}$.

(iv) (a) Using the given values we obtain the forecasts
$$\hat{x}_{25}(1) = 9.12 + 0.71(14.82) + 0.17(-1.98) = 19.306$$
and
$$\hat{x}_{25}(2) = 9.12 + 0.71(19.306) = 22.827$$

(b) For exponential smoothing the equation is
$$\hat{x}_{25}(1) = \hat{x}_{24}(1) + \alpha\left(x_{25} - \hat{x}_{24}(1)\right) = 12.97 + 0.3(14.82 - 12.97) = 13.525$$
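A quick numerical check of these forecasts (an editorial addition); the figures are those quoted in the calculations above.

    mu_c, alpha, beta = 9.12, 0.71, 0.17      # constants as used in the calculation above
    x_25, e_hat_25 = 14.82, -1.98             # last observation and estimated innovation

    x_hat_1 = mu_c + alpha * x_25 + beta * e_hat_25   # one-step forecast
    x_hat_2 = mu_c + alpha * x_hat_1                  # two-step forecast (future innovations set to 0)
    print(x_hat_1, x_hat_2)                           # 19.306 and 22.827

    # Exponential smoothing with smoothing parameter 0.3
    prev_forecast = 12.97
    print(prev_forecast + 0.3 * (x_25 - prev_forecast))   # 13.525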


(v) Exponential smoothing is simple to apply and does not suffer from problems
of over-fitting. If the data appear fairly stationary but are not especially well
fitted by any of the Box-Jenkins methods, exponential smoothing is likely to
produce more reliable results. More advanced versions of exponential
smoothing can cope with varying trends and multiplicative variation.

Many candidates omitted to mention $\sigma$ as a parameter in part (i). Marks for this question were not quite as good as for Q7, indicating that the practical aspects of Time Series analysis are less well understood than the theoretical ones.

9 (i) The Markov model implies that holding times are exponentially distributed.

(ii) The generator matrix is as follows (in minutes then, equivalently, in hours):

In minutes:

           W       A       M        S       H
    W  -1/15    1/15       0        0       0
    A      0   -1/20   3/400    1/400    1/25
    M      0       0   -1/30        0    1/30
    S      0       0       0   -1/180   1/180
    H      0       0       0        0       0

In hours:

           W     A      M      S     H
    W     -4     4      0      0     0
    A      0    -3   0.45   0.15   2.4
    M      0     0     -2      0     2
    S      0     0      0   -1/3   1/3
    H      0     0      0      0     0

(a) The equations are as follows (first if $t$ is in minutes, then in hours):
$$\frac{d}{dt}p_{WM}(t) = -\frac{1}{30}p_{WM}(t) + \frac{3}{400}p_{WA}(t)$$
$$\frac{d}{dt}p_{WA}(t) = -\frac{1}{20}p_{WA}(t) + \frac{1}{15}p_{WW}(t)$$

$$\frac{d}{dt}p_{WM}(t) = -2\,p_{WM}(t) + 0.45\,p_{WA}(t)$$
$$\frac{d}{dt}p_{WA}(t) = -3\,p_{WA}(t) + 4\,p_{WW}(t)$$


(b) First note that $p_{WW}(t) = e^{-t/15}$. Then try inserting the given formula $p_{WA}(t) = 4\left(e^{-t/20} - e^{-t/15}\right)$ in the second equation above (working in minutes):
$$\text{LHS} = \frac{d}{dt}p_{WA}(t) = -\frac{1}{5}e^{-t/20} + \frac{4}{15}e^{-t/15}$$

and

$$\text{RHS} = -\frac{1}{20}p_{WA}(t) + \frac{1}{15}p_{WW}(t) = -\frac{1}{5}e^{-t/20} + \frac{1}{5}e^{-t/15} + \frac{1}{15}e^{-t/15},$$

which is equal to the LHS, as required.

We should also check that the formula gives $p_{WA}(0) = 0$, which it does.

(c) $\dfrac{d}{dt}\left[e^{t/30}p_{WM}(t)\right] = \dfrac{3}{400}\,e^{t/30}\cdot 4\left(e^{-t/20} - e^{-t/15}\right)$, with $p_{WM}(0) = 0$, implies that
$$e^{t/30}p_{WM}(t) = 0.9 - 1.8e^{-t/60} + 0.9e^{-t/30},$$
which simplifies to
$$p_{WM}(t) = 0.9e^{-t/30} - 1.8e^{-t/20} + 0.9e^{-t/15}.$$
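An illustrative numerical check (an editorial addition) comparing these closed forms with the matrix exponential of the generator in minutes, since $P(t) = e^{At}$ for a time-homogeneous Markov jump process.

    import numpy as np
    from scipy.linalg import expm

    # Generator in minutes, states ordered W, A, M, S, H
    A = np.array([
        [-1/15, 1/15,     0,      0,     0],
        [    0, -1/20, 3/400, 1/400,  1/25],
        [    0,     0, -1/30,     0,  1/30],
        [    0,     0,     0, -1/180, 1/180],
        [    0,     0,     0,      0,     0],
    ])

    for t in (10.0, 30.0, 60.0):
        P_t = expm(A * t)                                        # P(t) = exp(At)
        p_wa = 4 * (np.exp(-t/20) - np.exp(-t/15))               # closed form for p_WA(t)
        p_wm = 0.9*np.exp(-t/30) - 1.8*np.exp(-t/20) + 0.9*np.exp(-t/15)
        print(np.isclose(P_t[0, 1], p_wa), np.isclose(P_t[0, 2], p_wm))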

(iii) (a) The expected length of time spent in state W is 15 mins, after which there is a transition to state A with probability 1, so that $T_W = 15 + T_A$.

(b) The other equations are:

TA = 20 + 0.15 TM + 0.05 TS

TM = 30

TS = 180

(c) Solving these equations gives $T_A = 20 + 4.5 + 9 = 33.5$ and $T_W = 48.5$ mins.

A number of students suggested that $p_{WW}(t) = 1 - p_{WA}(t)$, which works in the two-state case. In this example, however, it is only possible to state that $p_{WW}(t) + p_{WA}(t) + p_{WM}(t) + p_{WS}(t) + p_{WH}(t) = 1$, which is not the same.

It was disappointing that not many candidates made substantial progress with solving the differential equations, but the last part of the question was in general well done.

10 (i) The Wiener process can be defined as $W_t = \mu t + \sigma B_t$. In this case $\sigma = 1$.

(ii) We need to show that
$$E[S_t \mid \mathcal{F}_s] = S_s, \qquad 0 \le s \le t,$$
as well as proving that $E[|S_t|] < \infty$.

But $W(t) \sim N(\mu t, t)$, so that $M_t(x) = E\left[e^{xW(t)}\right] = e^{\mu t x + t x^2/2}$.

We have
$$E[S_t \mid \mathcal{F}_s] = E\left[e^{-2\mu W(t)} \mid \mathcal{F}_s\right]
= E\left[e^{-2\mu(W(t) - W(s))}\,e^{-2\mu W(s)} \mid \mathcal{F}_s\right]
= e^{-2\mu W(s)}\,E\left[e^{-2\mu(W(t) - W(s))}\right] \qquad (1)$$
using stationarity and independence of the increments.

From the definition of a Wiener process with drift above, we have
$$W(t) - W(s) \sim N\left(\mu(t - s),\, t - s\right) \qquad (2)$$

so
$$E\left[e^{-2\mu(W(t) - W(s))}\right] = M_{t-s}(-2\mu)$$

where $M_{t-s}$ is the moment generating function of the normal distribution in (2). But for this distribution we know that $M_{t-s}(x) = e^{\mu(t-s)x + (t-s)x^2/2}$.

Therefore
$$M_{t-s}(-2\mu) = e^{-2\mu^2(t-s) + (t-s)(2\mu)^2/2} = 1,$$

and now (1) shows that $E[S_t \mid \mathcal{F}_s] = S_s$ as required.

Finally, check the expectation: $S_t \ge 0$, so
$$E[|S_t|] = E[S_t] = E\left[e^{-2\mu W(t)}\right] = M_t(-2\mu) = e^{t(-2\mu)\mu + t(-2\mu)^2/2} = 1 < \infty.$$

(iii) (a) $P[S(t) \le a] = P\left[e^{-2\mu(B(t) + \mu t)} \le a\right] = P\left[-2\mu B(t) \le b + 2\mu^2 t\right]$, where $b = \log a$. If $\mu > 0$, this becomes
$$P\left[B(t) \ge -\frac{b}{2\mu} - \mu t\right] = 1 - \Phi\left(-\frac{b}{2\mu\sqrt{t}} - \mu\sqrt{t}\right),$$

whereas in the case $\mu < 0$ we have
$$P\left[B(t) \le -\frac{b}{2\mu} - \mu t\right] = \Phi\left(-\frac{b}{2\mu\sqrt{t}} - \mu\sqrt{t}\right),$$

and in both cases the RHS tends to 1 as $t \to \infty$.

(b) $P[T_a > t] \le P[S(t) > a] \to 0$ as $t \to \infty$.

By definition, $S(T_a) = a$. Therefore $E[S(T_a)] = a$.

(c) This is not a contradiction because the conditions of the optional stopping theorem are not satisfied. Neither $S(t)$ nor $T_a$ is bounded above, even though $S(t)$ is a convergent martingale.

(iv) (a) In this case $T_a$ is only finite if $S(t)$ hits $a$, which is not certain. However, as above, it is certain that $S(t) \to 0$ almost surely.

Therefore
$$S(T_a) = \begin{cases} a & \text{if } T_a < \infty \\ 0 & \text{if } T_a = \infty \end{cases}$$

It follows that $E[S(T_a)] = aP[T_a < \infty]$.

(b) Now the optional stopping theorem applies, since $S(t \wedge T_a)$ is bounded below by 0 and above by $a$.

We may deduce that
$$1 = S(0) = aP[T_a < \infty], \quad \text{i.e. that } P[T_a < \infty] = \frac{1}{a}.$$
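As an editorial illustration, a crude Monte Carlo check of this probability: the drifted Brownian motion is simulated on a fine grid over a long but finite horizon (so the estimate slightly understates the true hitting probability), with an assumed drift $\mu = 0.5$ and barrier $a = 2$, for which $1/a = 0.5$.

    import numpy as np

    rng = np.random.default_rng(3)
    mu, a = 0.5, 2.0                       # assumed drift; barrier a > 1 so hitting is uncertain
    dt, horizon, n_paths = 0.01, 50.0, 10_000
    n_steps = int(horizon / dt)

    # S(t) = exp(-2*mu*W(t)) hits level a exactly when W(t) = mu*t + B(t) falls to -log(a)/(2*mu)
    barrier = -np.log(a) / (2 * mu)

    w = np.zeros(n_paths)
    hit = np.zeros(n_paths, dtype=bool)
    for _ in range(n_steps):
        w += mu * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
        hit |= (w <= barrier)

    print(hit.mean(), 1 / a)               # crude estimate of P[T_a < infinity] versus 1/a = 0.5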
In part (ii) a large number of candidates did not even attempt to prove that $E[|S_t|] < \infty$: this condition is a requirement for $S$ to be a martingale and should not be omitted. However, most candidates had a good idea of how to prove that $S$ satisfied the conditional expectation condition.

Parts (iii) and (iv) attracted at best sketchy answers. The examiners were unsure whether this was due to pressure of time or to lack of familiarity with applications of the optional stopping theorem.

END OF EXAMINERS' REPORT

