Chapter 6. Multivariate Time Series Analysis

6.0.1. Introduction

Multivariate analysis investigates dependence and interactions among a set of variables in multivariate processes. One of the most powerful methods for analyzing multivariate time series is the vector autoregression (VAR) model, a natural extension of the univariate autoregressive model to the multivariate case.

In this chapter we cover concepts of VAR modelling, non-stationary multivariate time series and cointegration.

More detailed discussion can be found in Hamilton (1994), Harris (1995), Enders (2004), Tsay (2002), and Zivot and Wang (2006).

6.1. Vector Autoregression Model

Let $Y_t = (Y_{1t}, Y_{2t}, \ldots, Y_{kt})'$ denote a $k \times 1$ vector of time series variables. The basic vector autoregressive model of order $p$, VAR($p$), is

$$Y_t = c + \Pi_1 Y_{t-1} + \Pi_2 Y_{t-2} + \cdots + \Pi_p Y_{t-p} + u_t, \qquad (6.1.1)$$

where $\Pi_i$ are $k \times k$ matrices of coefficients, $c$ is a $k \times 1$ vector of constants and $u_t$ is a $k \times 1$ unobservable zero-mean white noise vector process with covariance matrix $\Sigma$.

If we consider the special case of a two-dimensional vector $Y_t$, the VAR consists of two equations (also called a bivariate VAR):

$$Y_{1t} = c_1 + \pi^{1}_{11} Y_{1,t-1} + \pi^{1}_{12} Y_{2,t-1} + \cdots + \pi^{p}_{11} Y_{1,t-p} + \pi^{p}_{12} Y_{2,t-p} + u_{1t},$$
$$Y_{2t} = c_2 + \pi^{1}_{21} Y_{1,t-1} + \pi^{1}_{22} Y_{2,t-1} + \cdots + \pi^{p}_{21} Y_{1,t-p} + \pi^{p}_{22} Y_{2,t-p} + u_{2t},$$

with $\mathrm{cov}(u_{1t}, u_{2s}) = \sigma_{12}$ for $t = s$ and zero otherwise.
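As a concrete illustration, the following sketch simulates such a bivariate VAR with one lag; the constant `c`, the coefficient matrix `Pi1` and the error covariance `Sigma` are arbitrary illustrative values (chosen to be stationary), not parameters taken from the text.

```python
import numpy as np
import pandas as pd

# Simulate a bivariate VAR(1): Y_t = c + Pi1 @ Y_{t-1} + u_t
# (illustrative parameter values only)
rng = np.random.default_rng(42)
c = np.array([0.5, 1.0])                      # k x 1 vector of constants
Pi1 = np.array([[0.5, 0.2],
                [0.1, 0.4]])                  # k x k coefficient matrix
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])                # error covariance matrix

T = 500
u = rng.multivariate_normal(np.zeros(2), Sigma, size=T)
Y = np.zeros((T, 2))
for t in range(1, T):
    Y[t] = c + Pi1 @ Y[t - 1] + u[t]

data = pd.DataFrame(Y, columns=["Y1", "Y2"])  # reused in later sketches
```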

As in the univariate case with AR processes, we can use the lag operator to represent the VAR($p$):

$$\Pi(L) Y_t = c + u_t, \qquad (6.1.2)$$

where $\Pi(L) = I_k - \Pi_1 L - \cdots - \Pi_p L^p$.

If we impose stationarity on $Y_t$ in (6.1.2), the unconditional expected value is given by

$$\mu = E[Y_t] = (I_k - \Pi_1 - \cdots - \Pi_p)^{-1} c.$$
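For the illustrative VAR(1) simulated above, the unconditional mean follows directly from this formula; a minimal sketch:

```python
import numpy as np

# mu = (I_k - Pi_1 - ... - Pi_p)^{-1} c, here for the illustrative VAR(1)
c = np.array([0.5, 1.0])
Pi1 = np.array([[0.5, 0.2],
                [0.1, 0.4]])
mu = np.linalg.solve(np.eye(2) - Pi1, c)
print(mu)   # approximately [1.79, 1.96]
```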

Very often other deterministic terms (such as a linear trend or seasonal dummies) or stochastic exogenous variables are included in the VAR specification. A more general form of the VAR($p$) model is

$$Y_t = c + \Pi_1 Y_{t-1} + \cdots + \Pi_p Y_{t-p} + \Gamma X_t + u_t,$$

where $X_t$ represents an $m \times 1$ vector of exogenous or deterministic variables and $\Gamma$ is a matrix of parameters.

6.1.1. Estimation of VARs and Inference on Coefficients

Since the VAR($p$) may be written as a system of equations with the same set of explanatory variables in each equation, its coefficients can be efficiently and consistently estimated by applying OLS to each equation separately (see Hamilton (1994)). Under standard assumptions on the behaviour of stationary and ergodic VAR models (see Hamilton (1994)), the estimators of the coefficients are asymptotically normally distributed.

Each element of the estimated coefficient matrices $\hat{\Pi}_i$ is asymptotically normally distributed, so asymptotically valid t-tests on individual coefficients may be constructed in the usual way (see Chapter 2). More general linear hypotheses can also be tested using the Wald statistic.
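In practice the equation-by-equation OLS estimates and the associated t-statistics are available in standard software; a minimal sketch using the statsmodels package and the simulated DataFrame `data` from the earlier sketch (any DataFrame of stationary series, one column per variable, would do):

```python
from statsmodels.tsa.api import VAR

# Equation-by-equation OLS estimation of a VAR(2) on the simulated `data`
results = VAR(data).fit(2)

print(results.summary())   # coefficients, standard errors, t-statistics
print(results.tvalues)     # t-statistics for individual coefficients
print(results.pvalues)     # corresponding p-values
```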

Lag Length Selection A reasonable strategy for determining the lag length of the VAR model is to fit VAR($p$) models with different orders $p = 0, \ldots, p_{\max}$ and choose the value of $p$ which minimizes some model selection criterion. Model selection criteria for the VAR($p$) are typically based on the Akaike (AIC), Schwarz-Bayesian (BIC) and Hannan-Quinn (HQ) information criteria:

$$\mathrm{AIC}(p) = \ln\left|\tilde{\Sigma}(p)\right| + \frac{2}{T} p k^2,$$
$$\mathrm{BIC}(p) = \ln\left|\tilde{\Sigma}(p)\right| + \frac{\ln T}{T} p k^2,$$
$$\mathrm{HQ}(p) = \ln\left|\tilde{\Sigma}(p)\right| + \frac{2 \ln \ln T}{T} p k^2,$$

where $\tilde{\Sigma}(p) = T^{-1} \sum_{t=1}^{T} \hat{u}_t \hat{u}_t'$ is the residual covariance matrix from a fitted VAR($p$).
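A sketch of this strategy with statsmodels, again using the DataFrame `data` from above; the maximum lag order of 8 is an arbitrary choice:

```python
from statsmodels.tsa.api import VAR

model = VAR(data)

# Fit VAR(p) for p = 0, ..., 8 and compare information criteria
order_selection = model.select_order(maxlags=8)
print(order_selection.summary())        # AIC, BIC, HQ (and FPE) for each order
print(order_selection.selected_orders)  # order minimizing each criterion

# Alternatively, let fit() choose the order by a given criterion, e.g. BIC
results_bic = model.fit(maxlags=8, ic="bic")
```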

Forecasting We can use a VAR model to forecast time series in a similar way to forecasting from a univariate AR model.

The one-period-ahead forecast based on information available at time $T$ is

$$Y_{T+1|T} = c + \Pi_1 Y_T + \Pi_2 Y_{T-1} + \cdots + \Pi_p Y_{T-p+1},$$

while the $h$-step forecast is

$$Y_{T+h|T} = c + \Pi_1 Y_{T+h-1|T} + \Pi_2 Y_{T+h-2|T} + \cdots + \Pi_p Y_{T+h-p|T},$$

where $Y_{T+j|T} = Y_{T+j}$ for $j \le 0$. The $h$-step forecast errors may be expressed as

$$Y_{T+h} - Y_{T+h|T} = \sum_{s=0}^{h-1} \Psi_s u_{T+h-s},$$

where the matrices $\Psi_s$ are determined by recursive substitution

$$\Psi_s = \sum_{j=1}^{s} \Psi_{s-j} \Pi_j,$$

with $\Psi_0 = I_k$ and $\Pi_j = 0$ for $j > p$. The forecasts are unbiased since all of the forecast errors have expectation zero, and the MSE matrix for $Y_{T+h|T}$ is

$$\Sigma(h) = \mathrm{MSE}\left(Y_{T+h} - Y_{T+h|T}\right) = \sum_{s=0}^{h-1} \Psi_s \Sigma \Psi_s'.$$
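The recursion for the $\Psi_s$ matrices and the resulting MSE matrix are easy to compute directly; a sketch in numpy for the illustrative VAR(1) used earlier (for $p = 1$ the recursion reduces to $\Psi_s = \Pi_1^s$):

```python
import numpy as np

# Recursive computation of the Psi_s matrices and the h-step forecast MSE
# Sigma(h) = sum_{s=0}^{h-1} Psi_s Sigma Psi_s'.
# Illustrative VAR(1) parameters (arbitrary values, as above).
Pi = [np.array([[0.5, 0.2],
                [0.1, 0.4]])]                  # list of Pi_1, ..., Pi_p
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
k, p, h = 2, len(Pi), 4

# Psi_0 = I_k, Psi_s = sum_{j=1}^{s} Psi_{s-j} Pi_j (with Pi_j = 0 for j > p)
Psi = [np.eye(k)]
for s in range(1, h):
    Psi_s = sum(Psi[s - j] @ Pi[j - 1] for j in range(1, min(s, p) + 1))
    Psi.append(Psi_s)

mse_h = sum(P @ Sigma @ P.T for P in Psi)      # MSE matrix of the h-step forecast
print(mse_h)
```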

The $h$-step forecast in the case of estimated parameters is

$$\hat{Y}_{T+h|T} = \hat{c} + \hat{\Pi}_1 \hat{Y}_{T+h-1|T} + \cdots + \hat{\Pi}_p \hat{Y}_{T+h-p|T},$$

where $\hat{\Pi}_j$ are the estimated matrices of parameters. The $h$-step forecast error is now

$$Y_{T+h} - \hat{Y}_{T+h|T} = \sum_{s=0}^{h-1} \Psi_s u_{T+h-s} + \left(Y_{T+h|T} - \hat{Y}_{T+h|T}\right).$$

The estimate of the MSE matrix of the $h$-step forecast is then

$$\hat{\Sigma}(h) = \sum_{s=0}^{h-1} \hat{\Psi}_s \hat{\Sigma} \hat{\Psi}_s',$$

where $\hat{\Psi}_s$ are obtained from the recursion above with $\Pi_j$ replaced by $\hat{\Pi}_j$.
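With estimated parameters, point forecasts, interval forecasts and estimated MSE matrices can be obtained from the fitted statsmodels VAR of the estimation sketch above; a minimal sketch continuing from that example:

```python
# `results` is the fitted VAR from the estimation sketch, `data` the DataFrame
p = results.k_ar          # estimated lag order
h = 4                     # forecast horizon

# h-step point forecasts built from the last p observations
forecasts = results.forecast(data.values[-p:], steps=h)

# point forecasts with lower/upper 95% interval bounds
point, lower, upper = results.forecast_interval(data.values[-p:], steps=h, alpha=0.05)

# estimated forecast MSE matrices for horizons 1, ..., h
mse_matrices = results.mse(h)
```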

6.1.2. Granger Causality

One of the main uses of VAR models is forecasting. The structure of the VAR model provides information about the ability of a variable, or a group of variables, to forecast other variables. The following intuitive notion of forecasting ability is due to Granger (1969). If a variable, or group of variables, $Y_1$ is found to be helpful for predicting another variable, or group of variables, $Y_2$, then $Y_1$ is said to Granger-cause $Y_2$; otherwise it is said to fail to Granger-cause $Y_2$. Formally, $Y_1$ fails to Granger-cause $Y_2$ if for all $s > 0$ the MSE of a forecast of $Y_{2,t+s}$ based on $(Y_{2t}, Y_{2,t-1}, \ldots)$ is the same as the MSE of a forecast of $Y_{2,t+s}$ based on both $(Y_{2t}, Y_{2,t-1}, \ldots)$ and $(Y_{1t}, Y_{1,t-1}, \ldots)$. Note that the notion of Granger causality only implies forecasting ability.

In a bivariate VAR($p$) model for $Y_t = (Y_{1t}, Y_{2t})'$, $Y_2$ fails to Granger-cause $Y_1$ if all of the $p$ VAR coefficient matrices $\Pi_1, \ldots, \Pi_p$ are lower triangular. That is, all of the coefficients on lagged values of $Y_2$ are zero in the equation for $Y_1$. The $p$ linear coefficient restrictions implied by Granger non-causality may be tested using the Wald statistic. Notice that if $Y_2$ fails to Granger-cause $Y_1$ and $Y_1$ fails to Granger-cause $Y_2$, then the VAR coefficient matrices $\Pi_1, \ldots, \Pi_p$ are diagonal.
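The Wald test of these zero restrictions is implemented in statsmodels; a sketch, again using the simulated bivariate `data` (the column names "Y1" and "Y2" are just the placeholders introduced above):

```python
from statsmodels.tsa.api import VAR

results = VAR(data).fit(2)

# H0: Y2 does not Granger-cause Y1, i.e. all coefficients on lagged Y2
# in the Y1 equation are zero
test = results.test_causality(caused="Y1", causing=["Y2"], kind="wald")
print(test.summary())
print(test.pvalue)
```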

 