Bootstrapping Time Series Models
Estimate SE
1. Using the asymptotic distribution of 𝜽̂

Coverage Error – the difference between the actual coverage and the nominal coverage
2. General guidelines for using the bootstrap approach
Methods to Construct Confidence Intervals
1. Use the asymptotic distribution of 𝜽̂:  𝜽̂ ± z_α · SE(𝜽̂),  coverage error = O(n^(−1/2))
2. Use the bootstrap distribution of 𝜽̂*
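As a sketch, the two constructions for the simple case of a sample mean (NumPy-based; the data, seed, and number of resamples B are illustrative, not from the slides):

```python
# Sketch: normal-approximation CI vs. bootstrap percentile CI for the mean.
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=50)    # a skewed sample, n = 50
theta_hat = x.mean()

# 1. Asymptotic interval: theta_hat +/- z * SE(theta_hat)
z = 1.96                                   # ~97.5% normal quantile
se = x.std(ddof=1) / np.sqrt(len(x))
ci_normal = (theta_hat - z * se, theta_hat + z * se)

# 2. Percentile interval from the bootstrap distribution of theta*
B = 2000
theta_star = np.array([rng.choice(x, size=len(x), replace=True).mean()
                       for _ in range(B)])
ci_boot = (np.percentile(theta_star, 2.5), np.percentile(theta_star, 97.5))

print(ci_normal)
print(ci_boot)
```

The bootstrap interval needs no normality assumption, at the price of the B resampling passes.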
𝑯𝟎 : 𝜽 = 𝜽𝟎
𝑯𝟏 : 𝜽 ≠ 𝜽𝟎
Reminder:
𝜽𝟎 – the parameter’s value under the null hypothesis
𝜽̂ – an estimate of the parameter’s value from the original sample
𝜽̂* – an estimate of the parameter’s value from the BS resamples
Hypothesis Testing
• Two important issues concerning hypothesis testing using bootstrap methods relate to the questions:
a) What test statistic to bootstrap?
b) How to generate the bootstrap samples?
Use the properly studentized statistic:
(𝜽̂* − 𝜽̂) / 𝝈̂*
BUT NOT:
(𝜽̂* − 𝜽̂) / 𝝈̂
𝝈̂* – the estimate of 𝝈̂ computed from the BS sample
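A minimal sketch of this guideline for the sample mean, assuming NumPy (data and seed are illustrative): `t_right` studentizes each resample with its own 𝝈̂*, while `t_wrong` reuses the original-sample 𝝈̂.

```python
# Sketch: properly studentized bootstrap statistic vs. a fixed denominator.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=2.0, size=40)
n = len(x)
theta_hat = x.mean()
se_hat = x.std(ddof=1) / np.sqrt(n)        # sigma-hat from the original sample

B = 1000
t_right, t_wrong = [], []
for _ in range(B):
    xs = rng.choice(x, size=n, replace=True)            # BS resample
    theta_star = xs.mean()
    se_star = xs.std(ddof=1) / np.sqrt(n)               # sigma-hat* from the resample
    t_right.append((theta_star - theta_hat) / se_star)  # properly studentized
    t_wrong.append((theta_star - theta_hat) / se_hat)   # NOT this: fixed denominator
```

Only `t_right` tracks the sampling variability of the scale estimate, which is what gives the bootstrap-t its higher-order accuracy.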
𝒚 = 𝜷𝒙 + 𝜺,  𝜺 ~ iid(𝟎, 𝝈²)
𝜷̂, 𝝈̂ – the OLS estimators of 𝜷 and 𝝈
𝜺̂ – the OLS residuals
𝜺* – the BS residuals, obtained by resampling 𝜺̂
Methods of BS Samples Generation
Consider two sampling schemes for the generation of the bootstrap samples:
𝑺𝟏 : 𝒚* = 𝜷̂𝒙 + 𝜺*
𝑺𝟐 : 𝒚* = 𝜷𝟎𝒙 + 𝜺*
Both use 𝜺* resampled from 𝜺̂, and two test statistics:
𝑻𝟏 : 𝑻(𝜷̂) = (𝜷̂* − 𝜷̂) / 𝝈̂*
𝑻𝟐 : 𝑻(𝜷𝟎) = (𝜷̂* − 𝜷𝟎) / 𝝈̂*
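The two schemes can be sketched for a no-intercept regression (all values illustrative; `beta0` plays the role of 𝜷𝟎, the hypothesized null value):

```python
# Sketch: generating bootstrap samples under scheme S1 (fitted model)
# and scheme S2 (null value imposed) for y = beta*x + eps.
import numpy as np

rng = np.random.default_rng(2)
n = 60
x = rng.uniform(1.0, 3.0, size=n)
beta_true, beta0 = 1.2, 1.0
y = beta_true * x + rng.normal(0.0, 0.5, size=n)

# OLS through the origin: beta_hat = sum(x*y)/sum(x^2); residuals eps_hat
beta_hat = (x @ y) / (x @ x)
eps_hat = y - beta_hat * x
eps_c = eps_hat - eps_hat.mean()           # center residuals before resampling

eps_star = rng.choice(eps_c, size=n, replace=True)   # BS residuals
y_star_S1 = beta_hat * x + eps_star        # S1: generate under the fitted model
y_star_S2 = beta0 * x + eps_star           # S2: generate with beta0 imposed
```

From such samples one recomputes 𝜷̂* and 𝝈̂* to form 𝑻𝟏 or 𝑻𝟐 on each bootstrap pass.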
Hall & Wilson – recommend combining scheme 𝑺𝟏 with statistic 𝑻𝟏: resample from the fitted model and center the statistic at 𝜷̂.
Giersbergen & Kiviet – based on a Monte-Carlo study of an AR(1) model, recommend combining scheme 𝑺𝟐 with statistic 𝑻𝟐: generate the BS samples with 𝜷𝟎 imposed.
ARMA Models
ARMA – AutoRegressive + Moving Average
Consider the stationary AR(p) model
𝑦𝑡 = Σᵢ₌₁ᵖ 𝑎𝑖 𝑦𝑡−𝑖 + 𝑒𝑡 ,  𝑒𝑡 ~ iid(0, 𝜎²)
Given data on n + p observations (𝑦₁₋ₚ , … , 𝑦₀ , 𝑦₁ , … , 𝑦ₙ):
1 – Fit the model by least squares: we get (𝑎̂₁, 𝑎̂₂, …, 𝑎̂ₚ) and the least squares residuals 𝑒̂𝑡
2 – Define the centered and scaled residuals 𝑒̃𝑡 = (𝑒̂𝑡 − n⁻¹ Σ𝑡₌₁ⁿ 𝑒̂𝑡) · (n/(n−p))^(1/2)
(Bickel & Freedman – residuals tend to be smaller than the true errors)
3 – Resample 𝑒̃𝑡 with replacement to get the BS residuals 𝒆𝑡*
3. Structured Time Series Models: The Recursive BS
4 – Construct the BS sample recursively: 𝑦𝑡* = Σᵢ₌₁ᵖ 𝑎̂𝑖 𝑦𝑡−𝑖* + 𝑒𝑡*
Bose – the LS estimates 𝑎̂𝑖 can be bootstrapped with accuracy o(n^(−1/2)) (little-o), improving on the normal approximation error of O(n^(−1/2)) (big-O)
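Steps 1–4 can be sketched for an AR(1) fit (all names and values are illustrative; the scaling follows the (n/(n−p))^(1/2) correction above):

```python
# Sketch: recursive residual bootstrap for an AR(1) series.
import numpy as np

rng = np.random.default_rng(3)
n, p, a_true = 200, 1, 0.6

# Simulate an AR(1) series, discarding a burn-in period
e = rng.normal(size=n + 50)
y = np.zeros(n + 50)
for t in range(1, n + 50):
    y[t] = a_true * y[t - 1] + e[t]
y = y[50:]

# 1 - least squares fit of y_t on y_{t-1}
Y, X = y[1:], y[:-1]
a_hat = (X @ Y) / (X @ X)
e_hat = Y - a_hat * X

# 2 - center and scale the residuals (they tend to be too small)
e_tilde = (e_hat - e_hat.mean()) * np.sqrt(len(e_hat) / (len(e_hat) - p))

# 3 - resample with replacement;  4 - rebuild the series recursively
e_star = rng.choice(e_tilde, size=n, replace=True)
y_star = np.zeros(n)
y_star[0] = y[0]                   # starting value taken from the data
for t in range(1, n):
    y_star[t] = a_hat * y_star[t - 1] + e_star[t]
```

Refitting the model on each `y_star` yields the bootstrap distribution of 𝑎̂.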
Explosive
𝑦𝑡 = 𝛽𝑦𝑡−1 + 𝑢𝑡
|𝛽| < 1 → 𝑦𝑡 is stationary
|𝛽| = 1 → 𝑦𝑡 is unstable
|𝛽| > 1 → 𝑦𝑡 is explosive
𝑦𝑡 = 𝛼 + 𝛽𝑦𝑡−1 + 𝑢𝑡 ,  𝑢𝑡 ~ iid(0, 𝜎²)
Since the distribution of the OLS estimator 𝛽̂ of 𝛽 is invariant to 𝛼 and 𝜎², we may set 𝛼 = 0 and 𝜎² = 1 without loss of generality.
The recursive BS then uses 𝑦𝑡* = Σᵢ₌₁ᵖ 𝑎̂𝑖 𝑦𝑡−𝑖* + 𝑒𝑡* with p = 1: 𝑦₀* is given, 𝑦𝑡* (t = 1, …, n) are recursively BS-calculated, and under the null the coefficient (𝑎₁, …, 𝑎ₚ) is replaced by 𝛽𝟎.
Rayner (1990) – the use of the student-t approximation is not satisfactory, particularly for high values of 𝛽; the bootstrap-t performs very well in samples of sizes 5–10, even when mixtures of normal distributions are used for the errors.
4. General Error Structures – The Moving Block Bootstrap (MBB)
Carlstein (1986) – first discussed the idea of bootstrapping blocks of observations rather than the individual observations; his blocks are non-overlapping.
In the Carlstein procedure there are n/l = b blocks; in the Künsch procedure there are n − l + 1 (moving, overlapping) blocks.
Example (n = 6, l = 3): for the series 3 6 7 2 1 5 the moving blocks are (3 6 7), (6 7 2), (7 2 1), (2 1 5).
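The two block constructions on the slide's toy series:

```python
# Moving (Kunsch) vs. non-overlapping (Carlstein) blocks, block length l = 3.
x = [3, 6, 7, 2, 1, 5]
l = 3

moving = [x[i:i + l] for i in range(len(x) - l + 1)]    # n - l + 1 = 4 blocks
nonoverlap = [x[i:i + l] for i in range(0, len(x), l)]  # n / l = 2 blocks

print(moving)      # [[3, 6, 7], [6, 7, 2], [7, 2, 1], [2, 1, 5]]
print(nonoverlap)  # [[3, 6, 7], [2, 1, 5]]
```

A block bootstrap sample is then assembled by drawing blocks with replacement and concatenating them.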
1. Variance: Künsch's moving (overlapping) blocks give an estimator with smaller variance than Carlstein's non-overlapping blocks.
2. The mean 𝑥̄𝑛* of the moving block bootstrap is biased in the sense that E*(𝑥̄𝑛*) ≠ 𝑥̄𝑛.
As the block size increases, the bias decreases but the variance increases.
The optimal block size 𝑙* for the AR(1) model 𝑥𝑡 = 𝜑𝑥𝑡−1 + 𝜀𝑡 is:
𝒍* = (𝟐𝝋/(𝟏 − 𝝋²))^(𝟐/𝟑) · 𝒏^(𝟏/𝟑)
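A quick evaluation of this formula (the 𝜑 and n values below are illustrative):

```python
# Optimal block size l* = (2*phi/(1 - phi**2))**(2/3) * n**(1/3) for AR(1).
def l_star(phi: float, n: int) -> float:
    return (2 * phi / (1 - phi ** 2)) ** (2 / 3) * n ** (1 / 3)

for phi in (0.3, 0.6, 0.9):
    print(phi, round(l_star(phi, n=1000), 2))
```

Stronger dependence (larger 𝜑) calls for longer blocks, since each block must capture more of the serial correlation.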
𝜌 = (𝛾(0) + 2 Σⱼ₌₁^∞ 𝛾(j)) / (Σⱼ₌₁^∞ j·𝛾(j)),  where 𝛾(j) is the covariance of 𝑥𝑡 at lag j
For the AR(1) model: 𝜌 = (1 − 𝜑²)/𝜑
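The AR(1) closed form can be checked numerically against the definition of 𝜌, using 𝛾(j) = 𝜑^j · 𝛾(0) (the 𝜑 value and truncation point are illustrative):

```python
# Check: for AR(1), (gamma(0) + 2*sum_j gamma(j)) / (sum_j j*gamma(j))
# reduces to (1 - phi**2) / phi.  gamma(0) cancels in the ratio.
phi = 0.7
g0 = 1.0
num = g0 + 2 * sum(phi ** j * g0 for j in range(1, 200))
den = sum(j * phi ** j * g0 for j in range(1, 200))
rho_series = num / den
rho_closed = (1 - phi ** 2) / phi

print(rho_series, rho_closed)
```

The truncation at j = 200 is harmless here since 𝜑^j decays geometrically.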