UNIT 14 TIME SERIES MODELLING TECHNIQUES*
14.1 INTRODUCTION
In the previous units (Units 12 and 13), you have learnt about stationary and
nonstationary time series and the concepts of autocorrelation and partial
autocorrelation in time series. When the data are autocorrelated, most of
the standard modelling methods may become misleading or sometimes even
useless because they are based on the assumption of independent
observations. Therefore, we need to consider alternative methods that take
the autocorrelation in the data into account. Such models are known
as time series models. In this unit, you will study time series models such as
autoregressive (AR), moving average (MA), autoregressive moving average
(ARMA), autoregressive integrated moving average (ARIMA), etc.
We begin with a simple introduction to the necessity of time series models,
instead of ordinary regression models, in Sec. 14.2. In Secs. 14.3 and 14.4, we
discuss the autoregressive and moving average models with their types and
properties, respectively. In Sec. 14.5, the autoregressive moving average models are
* Dr. Prabhat Kumar Sangal, School of Sciences, IGNOU, New Delhi
Block 3 Time Series Analysis
explained. The AR, MA and ARMA models are used for stationary time series.
If a time series is nonstationary, we use the autoregressive integrated moving
average (ARIMA) model, which you will learn about in Sec. 14.6. When you deal
with real time series data, the first question that may arise in your mind is how
to know which time series model is most suitable for a particular data set.
For that, we discuss time series model selection in Sec. 14.7.
The coefficients β0 and β1 denote the intercept and the slope of the
regression line, respectively. The intercept β0 represents the predicted value
of Y when X = 0 and the slope β1 represents the average predicted change
in Y resulting from a one-unit change in X. Also, each observation Y
consists of the systematic or explained part of the model, β0 + β1X, and a
random error ε. The term "error" in this context refers to a departure from the
underlying straight-line model rather than a mistake, and it includes everything
affecting Y other than the predictor variable X. The error term has the following
assumptions:
• The mean of the error term should be zero, i.e., E[ε] = 0.
• The error term should have constant variance, i.e., Var[ε] = σ² = constant.
The model expresses the present value as a linear combination of a constant
δ (read as delta), the previous value of the variable y_{t−1} and the error term
ε_t (read as epsilon). The magnitude of the impact of the previous value on the
present value is quantified by a coefficient denoted φ1 (read as phi). The error
term is called white noise, and it is normally distributed with mean zero and
constant variance (σ²).
Taking the variance of both sides of the model, we get

Var[y_t] = Var[δ] + φ1² Var[y_{t−1}] + Var[ε_t]

Since the time series is stationary, Var[y_t] = Var[y_{t−1}]. Also, Var[δ] = 0
because δ is a constant and Var[ε_t] = σ². Therefore,

Var[y_t] = φ1² Var[y_t] + σ²

Var[y_t] = σ²/(1 − φ1²) ≥ 0 when φ1² < 1
Autocovariance and Autocorrelation Functions

The autocovariance at lag k is γ_k = Cov[y_t, y_{t+k}]. Substituting the model
for y_{t+k} and noting that the error at time t + k is uncorrelated with the
earlier value y_t, i.e.,

Cov[ε_{t+k}, y_t] = 0 for k ≥ 1

Therefore,

γ_k = φ1 γ_{k−1}

Hence,

γ1 = φ1 γ0 = φ1 σ²/(1 − φ1²)      [since γ0 = Var[y_t] = σ²/(1 − φ1²)]

γ2 = φ1 γ1 = φ1² σ²/(1 − φ1²)

and, in general,

γ_k = φ1 γ_{k−1} = φ1^k σ²/(1 − φ1²)

The autocorrelation function (ACF) for an AR(1) model is as follows:

ρ_k = γ_k/γ0 = φ1^k for k = 0, 1, 2, ...
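As a quick illustrative check (not part of the original unit), the Python sketch below simulates an AR(1) process with δ = 10, φ1 = 0.2 (the values used in the worked example later in this section) and compares the sample ACF with the theoretical ρ_k = φ1^k:

```python
import numpy as np

# Illustrative sketch: simulate y_t = delta + phi1*y_{t-1} + eps_t
# and compare the sample ACF with the theoretical rho_k = phi1**k.
rng = np.random.default_rng(42)
delta, phi1, sigma = 10.0, 0.2, 1.0
n = 20000

y = np.empty(n)
y[0] = delta / (1 - phi1)              # start at the process mean
for t in range(1, n):
    y[t] = delta + phi1 * y[t - 1] + rng.normal(0.0, sigma)

def sample_acf(x, k):
    """Sample autocorrelation of x at lag k."""
    if k == 0:
        return 1.0
    d = x - x.mean()
    return float((d[:-k] * d[k:]).sum() / (d * d).sum())

for k in range(4):
    print(k, round(sample_acf(y, k), 3), round(phi1 ** k, 3))
```

With a long simulated series the sample ACF at lags 1, 2, 3 settles close to 0.2, 0.04 and 0.008, mirroring the geometric decay derived above.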
We now study the properties of the AR(2) model as we have studied for AR(1).
Mean and Variance
We can find the mean and variance of an AR(2) model as we have obtained
for AR(1). Here, we just write these as follows:
Mean = δ/(1 − φ1 − φ2)
Similarly,
Var[y_t] = σ²/(1 − φ1² − φ2²) ≥ 0 when φ1² + φ2² < 1
Autocovariance and Autocorrelation Functions
We can find the autocovariance of an AR(2) model as we have obtained for the
AR(1) model. Here, we just write these as follows:

γ_k = Cov[y_t, y_{t+k}] = φ1 γ_{k−1} + φ2 γ_{k−2} + { σ² if k = 0; 0 if k > 0 }

Therefore,

γ0 = φ1 γ1 + φ2 γ2 + σ²

and

γ_k = φ1 γ_{k−1} + φ2 γ_{k−2}; k = 1, 2, ...

Dividing by γ0 gives the autocorrelation function, e.g.,

ρ1 = φ1/(1 − φ2),  ρ2 = φ1 ρ1 + φ2,  ρ3 = φ1 ρ2 + φ2 ρ1

The AR(2) model is stationary only when its parameters satisfy all three of the
following conditions:

• φ1 + φ2 < 1
• φ2 − φ1 < 1
• |φ2| < 1
We now study the properties of the AR(p) model as we have studied for AR(1)
and AR(2) models.
Mean and Variance
We can find the mean and variance of an AR(p) model as we have obtained
for AR(1). Here, we just write these as follows:
Mean = δ/(1 − φ1 − φ2 − ... − φp)

and

Var[y_t] = σ²/(1 − φ1² − φ2² − ... − φp²) ≥ 0 when φ1² + φ2² + ... + φp² < 1
Autocovariance and Autocorrelation Functions
We can find the autocovariance of an AR(p) model as we have obtained for the
AR(1) model. Here, we just write these as follows:

γ_k = Cov[y_t, y_{t+k}] = φ1 γ_{k−1} + φ2 γ_{k−2} + ... + φp γ_{k−p} + { σ² if k = 0; 0 if k > 0 }

Therefore,

γ0 = φ1 γ1 + φ2 γ2 + ... + φp γp + σ²

and

γ_k = φ1 γ_{k−1} + φ2 γ_{k−2} + ... + φp γ_{k−p}; k = 1, 2, ...
The autocorrelation function for an AR(p) model can be obtained by dividing γ_k
by γ0, which gives ρ_k = φ1 ρ_{k−1} + φ2 ρ_{k−2} + ... + φp ρ_{k−p} for k = 1, 2, ...
Example: Consider the AR(1) time series model

y_t = 10 + 0.2 y_{t−1} + ε_t

where ε_t ~ N[0, 1], so that δ = 10, φ1 = 0.2 and σ² = 1.

Mean = δ/(1 − φ1) = 10/0.8 = 12.5

γ0 = Var[y_t] = σ²/(1 − φ1²) = 1/(1 − 0.04) = 1.04

γ1 = φ1 γ0 = 0.2 × 1.04 = 0.21

γ2 = φ1 γ1 = 0.2 × 0.21 = 0.04

γ3 = φ1 γ2 = 0.2 × 0.04 = 0.008

We can forecast the next value after y100 using the prediction model

ŷ_t = 10 + 0.2 y_{t−1}

Therefore, if the current observation is y100 = 7.5, then the forecast of the
next observation, ŷ101 = 10 + 0.2 × 7.5 = 11.5, will be below the mean (12.5).
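The numbers in this example can be verified with a few lines of arithmetic; the short Python sketch below recomputes the mean, the first autocovariances and the one-step forecast from δ = 10, φ1 = 0.2 and σ² = 1:

```python
# Recompute the AR(1) example values: delta = 10, phi1 = 0.2, sigma^2 = 1.
delta, phi1, sigma2 = 10.0, 0.2, 1.0

mean = delta / (1 - phi1)            # 10 / 0.8 = 12.5
gamma0 = sigma2 / (1 - phi1 ** 2)    # 1 / 0.96, about 1.04
gamma1 = phi1 * gamma0               # about 0.21
gamma2 = phi1 * gamma1               # about 0.04

y100 = 7.5
forecast = delta + phi1 * y100       # 10 + 0.2 * 7.5 = 11.5
print(mean, round(gamma1, 2), forecast)   # 12.5 0.21 11.5
```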
You may like to try the following Self Assessment Question before studying
further.
SAQ 1
Consider the time series model

y_t = 5 + 0.8 y_{t−1} − 0.5 y_{t−2} + ε_t

where ε_t ~ N[0, 2]. Check whether the model is stationary; if so, find its
variance and autocorrelations ρ1, ρ2 and ρ3, and write the prediction model
for the next observation.
Moving average (MA) models are the models in which the value of a variable in
the current period is regressed against the residuals (error terms) of the
previous periods. In other words, a moving average model states that the
current value is linearly dependent on the past error terms. A moving average
model is thus similar to an autoregressive model, except that instead of being
a linear combination of past time series values, it is a linear combination of
past error/residual/white noise terms.
Concept of Invertibility
An autoregressive model can be used on time series data if and only if the
time series is stationary. The moving average models are always stationary,
but some restrictions are also imposed on the parameters of the moving
average model, as in the case of the autoregressive model.
By the definition of the moving average model, the value of a variable in the
current period is regressed against the residuals of the previous period. If
y_t is the value of a variable at time t and ε_{t−1} is the residual at time
t − 1, then we can express the moving average model of first order as

y_t = μ + ε_t + θ1 ε_{t−1}

We can write the above expression as

ε_t = y_t − μ − θ1 ε_{t−1}

Substituting ε_{t−1} = y_{t−1} − μ − θ1 ε_{t−2} gives

ε_t = y_t − μ − θ1 (y_{t−1} − μ − θ1 ε_{t−2}) = μ(θ1 − 1) + y_t − θ1 y_{t−1} + θ1² ε_{t−2}

Continuing the substitution indefinitely, we can write

ε_t = c + Σ_{i=0}^{∞} (−θ1)^i y_{t−i}

where c is a constant.

Note: The MA model is defined in many textbooks and computer software with a
minus sign before the terms. Although this switches the algebraic signs of
estimated coefficient values and (unsquared) theta terms in formulas for ACFs
and variances, it has no effect on the model's overall theoretical features.
In order to accurately construct the estimated model, you must examine your
software to make sure whether negative or positive signs are used. R software
uses positive signs in its underlying model, as we take here.

It means that we can convert/invert the past residuals/errors/noises into past
observations. In other words, we can convert/invert a moving average model
into an autoregressive model. This property is called invertibility. This
notion is very important if one wants to forecast future values of the
dependent variable; otherwise, the forecasting task becomes impossible (the
residuals in the past cannot be estimated, as they cannot be observed). When
the model is not invertible, the innovations can still be represented in terms
of observations of the future, but this is not helpful at all for forecasting
purposes.

Any autoregressive process is necessarily invertible, but a stationarity
condition must be imposed to ensure the uniqueness of the model for a
particular autocorrelation structure. A moving average process, on the other
hand, is always stationary, but an invertibility condition must be imposed for
there to be a unique model for a particular autocorrelation structure.
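Invertibility has a concrete computational meaning: for an MA(1) with |θ1| < 1 we can recover the unobserved errors recursively from the observations alone via ε_t = y_t − μ − θ1 ε_{t−1}. The Python sketch below (parameters μ = 2, θ1 = 0.8 are assumed for illustration) starts the recursion from ε̂_0 = 0 and shows that the effect of this wrong starting value dies out geometrically:

```python
import numpy as np

# Sketch: recover MA(1) errors from observations, which works
# precisely because |theta1| < 1 (invertibility).
rng = np.random.default_rng(0)
mu, theta1, n = 2.0, 0.8, 500

eps = rng.normal(0.0, 1.0, n)
y = mu + eps.copy()
y[1:] += theta1 * eps[:-1]           # y_t = mu + eps_t + theta1*eps_{t-1}

eps_hat = np.zeros(n)                # eps_hat[0] = 0: an arbitrary start
for t in range(1, n):
    eps_hat[t] = y[t] - mu - theta1 * eps_hat[t - 1]

# The recovery error decays like theta1**t, so late errors match closely.
print(float(np.max(np.abs(eps_hat[100:] - eps[100:]))) < 1e-8)   # True
```

If |θ1| were greater than 1, the same recursion would amplify the starting error instead of damping it, which is why the invertibility condition matters for forecasting.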
We now discuss different types of moving average models on the basis of the
correlation with the residuals of the previous periods in the following sub-
sections.
The moving average model in which the value of a variable in the current
period is regressed against its previous residual is called the first-order
moving average, MA(1), model. For example, if today's price of a share depends
on what happened in the other factors on the previous day, except for the
price of the share on the previous day, then we use the first-order moving
average model.
If y_t is the value of a variable at time t and ε_{t−1} is the residual at
time t − 1, then the moving average model of first order is given as follows:

y_t = μ + ε_t + θ1 ε_{t−1}

Taking expectations on both sides,

E[y_t] = E[μ] + E[ε_t] + θ1 E[ε_{t−1}] = μ + 0 + θ1 × 0

since E[ε_t] = E[ε_{t−1}] = 0 because ε_t ~ N(0, σ²), and E[μ] = μ because μ
is a constant. Therefore,

Mean of MA(1) = μ
Similarly,

Var[y_t] = Var[μ] + θ1² Var[ε_{t−1}] + Var[ε_t]

Since Var[μ] = 0 and Var[ε_t] = Var[ε_{t−1}] = σ² because ε_t ~ N(0, σ²),
therefore,

Var[y_t] = θ1² σ² + σ² = (1 + θ1²) σ² ≥ 0

The autocovariance at lag 1 is

γ1 = θ1 σ²
and

γ_k = 0; k > 1

The autocorrelation function is therefore

ρ1 = γ1/γ0 = θ1/(1 + θ1²), and ρ_k = 0; k > 1

It indicates that the autocorrelation function of the MA(1) model becomes zero
(cuts off) after lag 1.
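This cutoff behaviour is easy to see numerically. The Python sketch below (illustrative, with an assumed θ1 = 0.8 and μ = 0) simulates an MA(1) series and checks that the sample ACF is close to θ1/(1 + θ1²) at lag 1 and near zero beyond:

```python
import numpy as np

# Sketch: the sample ACF of a simulated MA(1) process cuts off after lag 1.
rng = np.random.default_rng(1)
theta1, n = 0.8, 20000

eps = rng.normal(0.0, 1.0, n + 1)
y = eps[1:] + theta1 * eps[:-1]      # mu = 0 for simplicity

def sample_acf(x, k):
    d = x - x.mean()
    return float((d[:-k] * d[k:]).sum() / (d * d).sum())

rho1_theory = theta1 / (1 + theta1 ** 2)     # 0.8 / 1.64, about 0.488
print(round(sample_acf(y, 1), 2), round(sample_acf(y, 2), 2),
      round(sample_acf(y, 3), 2))
```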
Conditions for Invertibility

The moving average models are always stationary. However, some restrictions
are imposed on the parameters of the moving average model; otherwise, the
infinite autoregressive representation of the model does not converge.
Therefore, the following constraint on the value of the parameter is required
for the invertibility of the MA(1) model:

|θ1| < 1 ⇒ −1 < θ1 < 1
The moving average model in which the value of a variable in the current
period is regressed against the residuals of the previous two periods is
called the second-order moving average, MA(2), model:

y_t = μ + ε_t + θ1 ε_{t−1} + θ2 ε_{t−2}

As for the MA(1) model, the coefficients θ1 and θ2 represent the magnitude of
the impact of the past residuals on the present value.
Let us study the properties of the MA(2) model.

Mean and Variance

Mean of MA(2) = μ

Var[y_t] = Var[μ] + θ1² Var[ε_{t−1}] + θ2² Var[ε_{t−2}] + Var[ε_t]

Since Var[μ] = 0 because μ is a constant, and
Var[ε_t] = Var[ε_{t−1}] = Var[ε_{t−2}] = σ² because ε_t ~ N(0, σ²), therefore,

Var[y_t] = θ1² σ² + θ2² σ² + σ² = (1 + θ1² + θ2²) σ² ≥ 0

Autocovariance and Autocorrelation Functions

γ0 = Var[y_t] = (1 + θ1² + θ2²) σ²

γ1 = (θ1 + θ1 θ2) σ²

γ2 = θ2 σ²

γ_k = 0; k > 2

Therefore,

ρ1 = (θ1 + θ1 θ2)/(1 + θ1² + θ2²)

ρ2 = θ2/(1 + θ1² + θ2²)

ρ_k = 0; k > 2
For the invertibility of the MA(2) model, the parameters must satisfy all
three of the following conditions:

• θ1 + θ2 < 1
• θ2 − θ1 < 1
• |θ2| < 1
After understanding the first and second-order moving average models, you may
be interested to know the general form of the moving average model. Let us
discuss it now.
A moving average model states that the current value is linearly dependent on
the current and past error terms.
If y t is the value of a variable at time t and ε t , ε t −1 , ε t − 2 ,...,ε t − q are the residuals
at time t,t − 1,t − 2,...,t − q , respectively, then the moving average model of the
qth order is expressed as the present value ( y t ) as a linear combination of the
mean of the series (μ) , the present error term ( ε t ) , and past error terms ( ε t −1 ,
ε t − 2 ,...,ε t − q ). Mathematically, we express a general moving average model as
follows:

y_t = μ + ε_t + θ1 ε_{t−1} + θ2 ε_{t−2} + ... + θq ε_{t−q}
where θ1,θ2 ,...,θq represent the magnitude of the impact of past errors on the
present value.
After understanding the form of the general moving average model, we now
study the properties of the model.
Mean and Variance
The mean and variance of the MA(q) model are given as follows:

Mean = μ

Var[y_t] = (1 + θ1² + θ2² + ... + θq²) σ² ≥ 0

Autocovariance and Autocorrelation Functions

γ0 = (1 + θ1² + θ2² + ... + θq²) σ²

γ_k = (θ_k + θ1 θ_{k+1} + ... + θ_{q−k} θ_q) σ²; k = 1, 2, ..., q

γ_k = 0; k > q
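The MA(q) autocovariance formula above translates directly into code. The Python sketch below implements it (with θ0 = 1 by convention; the function name is my own) and reproduces the MA(1) values γ0 = (1 + θ1²)σ² and γ1 = θ1σ² as a check:

```python
# Sketch: MA(q) autocovariances, gamma_k = sigma2 * sum_i theta_i*theta_{i+k}
# over i = 0..q-k, with theta_0 = 1, and gamma_k = 0 for k > q.
def ma_autocov(thetas, sigma2, k):
    coef = [1.0] + list(thetas)          # theta_0 = 1 by convention
    q = len(thetas)
    if k > q:
        return 0.0
    return sigma2 * sum(coef[i] * coef[i + k] for i in range(q - k + 1))

# MA(1) with theta1 = 0.8, sigma^2 = 1:
print(round(ma_autocov([0.8], 1.0, 0), 2),   # 1.64
      round(ma_autocov([0.8], 1.0, 1), 2),   # 0.8
      round(ma_autocov([0.8], 1.0, 2), 2))   # 0.0
```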
Example: Consider the time series model

y_t = 2 + ε_t + 0.8 ε_{t−1}

where ε_t ~ N[0, 1]

(i) Identify the model.
(ii) Check whether the model is invertible.
(iii) What are the mean and variance of the time series?
(iv) Obtain its autocovariance and autocorrelation functions.
(v) If the residual at t = 100 is 2.3, forecast the next observation.

Solution: Since the variable in the current period is regressed against its
previous residual, it is the first-order moving average model. To check
whether it is invertible, first of all, we find the parameters of the time
series model by comparing it with its standard form, that is,

y_t = μ + ε_t + θ1 ε_{t−1}

We obtain

μ = 2, θ1 = 0.8

The invertibility constraint for MA(1) is −1 < θ1 < 1. Since θ1 lies between
−1 and 1, the time series model MA(1) is invertible.

Mean = μ = 2

Var[y_t] = (1 + θ1²) σ² = (1 + 0.64) × 1 = 1.64

γ0 = Var(y_t) = 1.64

γ1 = θ1 σ² = 0.8 × 1 = 0.8

γ_k = 0; k > 1

ρ1 = γ1/γ0 = 0.8/1.64 = 0.49, and ρ_k = 0; k > 1

We can forecast the next value after y100 using the prediction model

ŷ_t = 2 + 0.8 ε_{t−1}

Therefore, if the residual at t = 100 is 2.3, then the forecast
ŷ101 = 2 + 0.8 × 2.3 = 3.84 will be above the mean (2).
You may try the following Self Assessment Question before studying further.
SAQ 2
Consider the time series model
y_t = 42 + ε_t + 0.7 ε_{t−1} − 0.2 ε_{t−2}

where ε_t ~ N[0, 2]
(v) Suppose the residual errors for time periods 20 and 21 are 0.23 and 0.54,
respectively; forecast the next observation.
ARMA(1,1) Models
ARMA(1,1) models are the models in which the value of a variable in the
current period is related to its own value in the previous period as well as
values of the residual in the previous period. It is a mixture of AR(1) and
MA(1).
If y t and y t −1 are the values of a variable at time t and t –1, respectively and if
ε t and ε t −1 are the residuals at time t and t –1, respectively then the
ARMA (1,1) model is expressed as follows:
y t = δ + φ1y t −1 + θ1ε t −1 + ε t
Var[y_t] = (1 + 2φ1θ1 + θ1²) σ²/(1 − φ1²) ≥ 0 when φ1² < 1

γ0 = Var[y_t] = (1 + 2φ1θ1 + θ1²) σ²/(1 − φ1²)

γ1 = (φ1 + θ1)(1 + φ1θ1) σ²/(1 − φ1²)

γ_k = φ1 γ_{k−1} for k = 2, 3, ...
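These formulas give the whole theoretical ACF of an ARMA(1,1) model in a few lines. The Python sketch below (the helper name is my own, for illustration) evaluates them for φ1 = −0.5, θ1 = 0.7, the parameter values worked in the example later in this section:

```python
# Sketch: theoretical ACF of ARMA(1,1),
# y_t = delta + phi1*y_{t-1} + theta1*eps_{t-1} + eps_t.
def arma11_acf(phi1, theta1, kmax):
    rho = [1.0]
    rho1 = (phi1 + theta1) * (1 + phi1 * theta1) / (1 + 2 * phi1 * theta1 + theta1 ** 2)
    rho.append(rho1)
    for _ in range(2, kmax + 1):
        rho.append(phi1 * rho[-1])   # rho_k = phi1 * rho_{k-1} for k >= 2
    return rho

print([round(r, 3) for r in arma11_acf(-0.5, 0.7, 3)])
# [1.0, 0.165, -0.082, 0.041]
```

After lag 1, the ACF decays geometrically at rate φ1, just like a pure AR(1).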
The ARMA(1, 1) model is stationary when −1 < φ1 < 1 and invertible when
−1 < θ1 < 1.
Example: Consider the time series model

y_t = 20 − 0.5 y_{t−1} + 0.7 ε_{t−1} + ε_t

Check whether it is stationary and invertible, and obtain its autocorrelation
function.

Solution: Since the variable in the current period is regressed against its
previous value as well as the previous residual, it is the ARMA model of order
(1, 1). To check whether it is stationary and invertible, first of all, we
find the parameters of the time series model by comparing it with its standard
form, that is,

y_t = δ + φ1 y_{t−1} + θ1 ε_{t−1} + ε_t

We obtain

δ = 20, φ1 = −0.5, θ1 = 0.7

The stationarity constraint for ARMA(1, 1) is −1 < φ1 < 1. Since φ1 lies
between −1 and 1, the time series model ARMA(1, 1) is stationary.
Similarly, the invertibility constraint for ARMA(1, 1) is −1 < θ1 < 1. Since
θ1 lies between −1 and 1, the time series model ARMA(1, 1) is invertible.
We now calculate the autocorrelation function of ARMA(1, 1) as

ρ_k = γ_k/γ0 = (φ1 + θ1)(1 + φ1θ1)/(1 + 2φ1θ1 + θ1²) for k = 1, and
ρ_k = φ1 ρ_{k−1} for k = 2, 3, ...

Therefore,

ρ1 = (−0.5 + 0.7)(1 − 0.5 × 0.7)/(1 + 2 × (−0.5) × 0.7 + (0.7)²) = 0.13/0.79 = 0.165

ρ2 = φ1 ρ1 = −0.5 × 0.165 = −0.082

ρ3 = φ1 ρ2 = −0.5 × (−0.082) = 0.041
SAQ 3
Consider the ARMA time series model
y_t = 27 + 0.8 y_{t−1} + 0.3 ε_{t−1} + ε_t

Check whether the model is stationary and invertible, and obtain ρ1, ρ2 and ρ3.
The general ARIMA(p, d, q) model fits an ARMA(p, q) model to the series
obtained after differencing the original series d times:

y′_t = δ + φ1 y′_{t−1} + ... + φp y′_{t−p} + θ1 ε_{t−1} + ... + θq ε_{t−q} + ε_t

where y′_t is the differenced series, which may have been differenced more
than once, and p and q are the orders of the autoregressive and moving average
parts.

For the first difference (d = 1), we can write the ARIMA model as

y_t − y_{t−1} = δ + φ1 (y_{t−1} − y_{t−2}) + φ2 (y_{t−2} − y_{t−3}) + ... + φp (y_{t−p} − y_{t−p−1})
                + θ1 ε_{t−1} + θ2 ε_{t−2} + ... + θq ε_{t−q} + ε_t
On the basis of different values of p, q and d, the ARIMA model has different
types which are discussed as follows:
14.6.1 Various Forms of ARIMA Models
The ARIMA model has various forms for different values of the parameters
p, d and q of the model. We discuss some standard forms as follows:
14.7 TIME SERIES MODEL SELECTION

The selection of a suitable time series model for given data proceeds in the
following steps:

Step 1: Since there are two types of models, used for stationary and
nonstationary time series, first of all we plot the time series data and check
whether the time series is stationary or nonstationary, as you have learned in
Unit 13.

Step 2: If the time series is stationary, we have to decide which model out of
AR, MA and ARMA is suitable for our time series data. To distinguish among
them, we calculate the autocorrelation function (ACF) and the partial
autocorrelation function (PACF) as discussed in Unit 13. After that, we plot
the ACF and PACF versus the lag, that is, the correlogram as discussed in
Unit 13, and try to identify the pattern of both. The ACF plot is most useful
for identifying the AR model and the PACF plot for the order of the AR model,
whereas the PACF plot is most useful for identifying the MA model and the ACF
plot for the order of the MA model. We now try to distinguish between the AR,
MA and ARMA models as follows:

Note: It is important that the choice of order makes sense. For example,
suppose you have had blood pressure readings for every day over the past two
years. You may find that an AR(1) or AR(2) model is appropriate for modelling
blood pressure. However, the PACF may indicate a large partial autocorrelation
value at a lag of 17, but such a large order for an autoregressive model
likely does not make much sense.
Case I (AR model): In the plot of ACF versus the lag (correlogram), if you see
a gradual decline or exponential decay, this indicates that the values of the
time series are serially correlated and the series can be modelled through an
AR model. For determining the order of an AR model, we use the plot of PACF
versus the lag. If the PACF cuts off, which means the PACF is almost zero at
lag p + 1, then it indicates an AR model of order p. We can also calculate the
PACF by increasing the order one by one, and as soon as it lies within the
range of ±2/√n (where n is the size of the time series) we should stop and
take the last significant PACF as the order of the AR model (see SAQ 4).
Case II (MA model): In the plot of PACF versus the lag, if you see a gradual
decline or exponential decay, this indicates that the series can be modelled
through an MA model, and if the ACF cuts off, which means the ACF is almost
zero at lag q + 1, then it indicates an MA model of order q.
Case III (ARMA model): If the autocorrelation function (ACF) as well as the
partial autocorrelation function (PACF) plots show a gradual decline
(exponential decay) or a damped sinusoid pattern, this indicates that the
series can be modelled through an ARMA model, but it makes the identification
of the order of the ARMA(p, q) model relatively more difficult. For that, the
extended ACF, generalised sample PACF, etc. are used, which are beyond the
scope of this course. For more detail, you can consult Time Series Analysis:
Forecasting and Control, 4th Edition, by Box, Jenkins and Reinsel.
Step 3: If the time series is nonstationary, we obtain the first, second, etc.
differences of the time series as discussed in Unit 13 until it becomes
stationary, ensure that the trend and seasonal components are removed, and
find d. Suppose the series becomes stationary after the second difference;
then d is 2. Generally, one or two stages of differencing are sufficient. The
differenced series will be shorter than the source series (as you observed in
Unit 13). An ARMA model is then fitted to the resulting time series. Since
ARIMA models have three parameters, there are many variations to the possible
models that could be fitted. We should choose an ARIMA model that is as simple
as possible, i.e., contains as few terms as possible (small values of p and
q). For more detail, you can consult Time Series Analysis: Forecasting and
Control, 4th Edition, by Box, Jenkins and Reinsel.
Step 4: After identifying the model, we estimate the parameters of the model
using the method of moments, maximum likelihood estimation, least squares,
etc. The method of moments is the simplest of these: we equate the sample
autocorrelation functions to the corresponding population autocorrelation
functions, which are functions of the parameters of the model, and solve these
equations for the parameters. However, it is not a very efficient method of
estimation. For moving average processes, the maximum likelihood method is
usually used, which gives more efficient estimates when n is large. We shall
not discuss this any more here; if you are interested, you may refer to Time
Series Analysis: Forecasting and Control, 4th Edition, by Box, Jenkins and
Reinsel.
Step 5: After fitting the best model, we perform a diagnostic check on the
residuals to examine whether the fitted model is adequate. It helps us ensure
that no more information is left for extraction and check the goodness of fit.
For the residual analysis, we plot the ACF and PACF of the residuals and check
whether there is a pattern. For an adequate model, there should be no
structure in the ACF and PACF of the residuals, and they should not differ
significantly from zero for all lags greater than one. For the goodness of
fit, we use Akaike's information criterion (AIC) and the Bayesian information
criterion (BIC). We have not discussed all the above aspects in detail here,
but the interested reader should consult Time Series Analysis: Forecasting and
Control, 4th Edition, by Box, Jenkins and Reinsel.
After understanding the procedure of selection of a time series model, let us
take an example.
Example 4: The temperature (in °C) in a particular area on different days
collected by the meteorological department is as given below:

Day:          1   2   3   4   5   6   7   8   9  10  11  12  13  14  15
Temperature: 27  29  31  27  28  30  32  29  28  30  30  26  30  31  27
[Figure: time series plot of temperature (Y-axis) versus day (X-axis)]
We now estimate the parameters (δ and φ1) of the model using the method of
moments. In this method, we equate the sample autocorrelation function to the
population autocorrelation function, which is a function of the parameters of
the model, and solve as below. For the AR(1) model, ρ1 = φ1, so setting
r1 = ρ1 gives

φ̂1 = r1 = 0.835

For estimating the parameter δ, first we find the mean of the given data and
then use the relationship Mean = δ/(1 − φ1):

Mean = (1/15) Σ y_i = 435/15 = 29

Therefore,

29 = δ/(1 − 0.835) ⇒ δ = 29 × 0.165 = 4.785

Therefore, the suitable model for the temperature data is

y_t = 4.785 + 0.835 y_{t−1} + ε_t
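The method-of-moments computation above can be mirrored in a few lines of Python (an illustrative sketch using the values from Example 4):

```python
# Sketch: method-of-moments estimates for the AR(1) fit of Example 4,
# using mean = 29 and the lag-1 sample autocorrelation r1 = 0.835.
mean, r1 = 29.0, 0.835

phi1_hat = r1                        # for AR(1), rho_1 = phi1
delta_hat = mean * (1 - phi1_hat)    # from mean = delta / (1 - phi1)
print(phi1_hat, round(delta_hat, 3))   # 0.835 4.785
```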
Before going to the next section, you may like to do some exercises yourself.
Let us try the following Self Assessment Question.
SAQ 4
A researcher wants to develop an autoregressive model for the data on
COVID-19 patients in a particular city. For that, he collected the data 100 days
and calculated the autocorrelation function which are given as follow:
r1 = 0.73, r2 = 0.39, r3 = 0.07
14.8 SUMMARY
In this unit, we have discussed
14.10 SOLUTIONS/ANSWERS
Self Assessment Questions (SAQs)
1. Since the variable in the current period is regressed against its previous
and previous-to-previous values, it is the second-order autoregressive model.
For checking the stationarity of the time series, first of all, we find the
parameters of the time series model by comparing it with its standard form,
that is, y_t = δ + φ1 y_{t−1} + φ2 y_{t−2} + ε_t. We obtain

δ = 5, φ1 = 0.8, φ2 = −0.5

We now check the stationarity conditions:

• φ1 + φ2 = 0.8 − 0.5 = 0.3 < 1
• φ2 − φ1 = −0.5 − 0.8 = −1.3 < 1
• |φ2| = 0.5 < 1

Since all three conditions for stationarity are satisfied, the time series is
stationary.

Var[y_t] = σ²/(1 − φ1² − φ2²) = 2/(1 − 0.64 − 0.25) = 2/0.11 = 18.18

ρ1 = φ1/(1 − φ2) = 0.8/1.5 = 0.53

ρ2 = φ1 ρ1 + φ2 = 0.8 × 0.53 − 0.5 = −0.076

ρ3 = φ1 ρ2 + φ2 ρ1 = 0.8 × (−0.076) − 0.5 × 0.53 = −0.061 − 0.265 = −0.326

We can forecast the next value y52 using the prediction model

ŷ_t = 5 + 0.8 y_{t−1} − 0.5 y_{t−2}

ŷ52 = 5 + 0.8 y51 − 0.5 y50
2. Since the variable in the current period is regressed against its previous
and previous-to-previous residuals, it is a second-order moving average model.
For checking the invertibility of the MA(2) model, first of all, we find the
parameters of the time series model by comparing it with its standard form,
that is, y_t = μ + ε_t + θ1 ε_{t−1} + θ2 ε_{t−2}. We have

μ = 42, θ1 = 0.7, θ2 = −0.2

We now check the invertibility conditions:

• θ1 + θ2 = 0.7 − 0.2 = 0.5 < 1
• θ2 − θ1 = −0.2 − 0.7 = −0.9 < 1
• θ2 = −0.2, so it lies between −1 and 1.

Since all three conditions for the invertibility of the MA(2) model are
satisfied, the time series is invertible.

We now calculate the mean and variance of the series as

Mean = 42

Var[y_t] = (1 + θ1² + θ2²) σ² = (1 + 0.49 + 0.04) × 2 = 1.53 × 2 = 3.06

ρ_k = 0; k > 2

We can forecast the next observation y22 using the prediction model

ŷ_t = 42 + 0.7 ε_{t−1} − 0.2 ε_{t−2}

ŷ22 = 42 + 0.7 ε21 − 0.2 ε20 = 42 + 0.7 × 0.54 − 0.2 × 0.23 = 42.332
3. Since the variable in the current period is regressed against its previous
value as well as the previous residual, it is the ARMA model of order (1, 1).
Comparing it with the standard form y_t = δ + φ1 y_{t−1} + θ1 ε_{t−1} + ε_t,
we obtain

δ = 27, φ1 = 0.8, θ1 = 0.3

Since −1 < φ1 = 0.8 < 1, the model is stationary, and since
−1 < θ1 = 0.3 < 1, it is invertible.

ρ1 = (φ1 + θ1)(1 + φ1θ1)/(1 + 2φ1θ1 + θ1²)
   = (0.8 + 0.3)(1 + 0.8 × 0.3)/(1 + 2 × 0.8 × 0.3 + (0.3)²) = 1.364/1.57 = 0.869

ρ2 = φ1 ρ1 = 0.8 × 0.869 = 0.695

ρ3 = φ1 ρ2 = 0.8 × 0.695 = 0.556
4. As we know from Unit 13, the first-order partial autocorrelation equals the
first-order autocorrelation, that is,

φ̂11 = r1 = 0.73

Since the first-order PACF lies outside the range ±2/√n = ±2/√100 = ±0.2, we
calculate the second-order PACF as

φ̂22 = (r2 − r1²)/(1 − r1²) = (0.39 − (0.73)²)/(1 − (0.73)²) = −0.143/0.467 = −0.31

Since φ̂22 = −0.31 also lies outside the range ±0.2, while the third-order
PACF works out to φ̂33 = −0.19, which lies within the range ±0.2, we stop and
take the last significant PACF as the order; hence an AR(2) model will be
suitable for this time series.
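The PACF values in this solution can be checked directly (an illustrative Python sketch using the closed form for the second-order PACF):

```python
import math

# Sketch: check SAQ 4's PACF values from r1 = 0.73, r2 = 0.39, n = 100.
r1, r2, n = 0.73, 0.39, 100

phi11 = r1
phi22 = (r2 - r1 ** 2) / (1 - r1 ** 2)
band = 2 / math.sqrt(n)
print(round(phi11, 2), round(phi22, 2), band)   # 0.73 -0.31 0.2
```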