A time series is a sequence of numerical data in which each observation is measured at a particular instant of time. The frequency of observation can be, for example, annual, quarterly, monthly, or daily. The main goal of time series analysis is to study the dynamics of the data.

In this chapter we introduce basic time series models for estimation and forecasting of financial data. Further details about the theory of time series analysis can be found in Hamilton (1994), Greene (2000), Enders (2004), Tsay (2002), and others.

3.2. Stationarity and Autocorrelations

3.2.1. Stationarity

A time series Y_t is said to be strictly stationary if for all integers i, j and all positive integers k the multivariate distribution function of (Y_i, Y_{i+1}, ..., Y_{i+k-1}) is identical to that of (Y_j, Y_{j+1}, ..., Y_{j+k-1}). In practice we are very often interested in the consequences of this assumption for the moments of the distribution. If Y_i and Y_j have identical distributions, then their means are identical, so E[Y_t] does not depend on time and is equal to some constant μ. Also, because the pairs (Y_i, Y_{i+s}) and (Y_j, Y_{j+s}) have identical bivariate distributions, it follows that the autocovariances

γ_s = cov(Y_t, Y_{t+s}) = E[(Y_t - μ)(Y_{t+s} - μ)]

depend only on the time lag s. This also implies that Y_t has constant variance γ_0 = σ².

A stochastic process whose first and second order moments (means, variances, and covariances) do not change with time is said to be second order stationary. More precisely, a time series Y_t is called stationary if the following conditions are satisfied:

E[Y_t] = μ,   var(Y_t) = γ_0,   cov(Y_t, Y_{t-k}) = γ_k   for all t and all lags k.

Here μ, γ_0, and γ_k are finite-valued numbers that do not depend on time t.

3.2.2. Autocorrelation

The autocorrelations of a stationary process are defined by ρ_s = γ_s / γ_0. These correlations describe the short-run dynamic relations within the time series, in contrast with the trend, which corresponds to the long-run behaviour of the time series.

The simplest possible autocorrelations occur when a stationary process consists of uncorrelated random variables. In this case ρ_0 = 1 and ρ_s = 0 for all s > 0. Such a time series is called white noise.
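To illustrate, the sample autocorrelations of simulated white noise can be checked directly. The sketch below is in Python rather than the EViews used later in the chapter, and all names are my own; it estimates ρ_s = γ_s/γ_0 for a Gaussian white noise series:

```python
import numpy as np

def sample_autocorr(y, max_lag):
    """Sample autocorrelations rho_s = gamma_s / gamma_0, s = 0..max_lag."""
    y = np.asarray(y, dtype=float)
    dev = y - y.mean()
    gamma0 = np.mean(dev * dev)                     # sample variance, gamma_0
    return np.array([np.mean(dev[s:] * dev[:len(y) - s]) / gamma0
                     for s in range(max_lag + 1)])

rng = np.random.default_rng(0)
noise = rng.standard_normal(10_000)                 # Gaussian white noise
rho = sample_autocorr(noise, 5)
# rho[0] is 1 by construction; rho[1], ..., rho[5] are close to zero
```

The higher-lag estimates are not exactly zero in a finite sample; their sampling standard deviation is roughly 1/sqrt(n), here about 0.01.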

It is important when modeling financial returns to appreciate that if Y_t is white noise, then Y_t and Y_{t+s} are not necessarily independent for s > 0.

The partial autocorrelation φ_s at lag s measures the correlation of Y_t values that are s periods apart after removing the correlation from the intervening lags. It equals the regression coefficient on Y_{t-s} when Y_t is regressed on a constant and Y_{t-1}, ..., Y_{t-s}.
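This regression characterization can be illustrated with a short sketch (Python rather than the chapter's EViews; the function and variable names are assumptions). For an AR(1) process the partial autocorrelation matches the autoregressive coefficient at lag 1 and is zero beyond:

```python
import numpy as np

def pacf_at_lag(y, s):
    """Partial autocorrelation at lag s: the coefficient on y[t-s] when
    y[t] is regressed on a constant and y[t-1], ..., y[t-s]."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # design matrix: a constant plus the s lagged columns
    X = np.column_stack([np.ones(n - s)]
                        + [y[s - k:n - k] for k in range(1, s + 1)])
    beta, *_ = np.linalg.lstsq(X, y[s:], rcond=None)
    return beta[-1]                      # coefficient on the most distant lag

rng = np.random.default_rng(1)
e = rng.standard_normal(5_000)
y = np.zeros(5_000)
for t in range(1, 5_000):
    y[t] = 0.6 * y[t - 1] + e[t]         # AR(1) with coefficient 0.6
# pacf_at_lag(y, 1) is close to 0.6, pacf_at_lag(y, 2) is close to 0
```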

Time series prediction

To describe the correlations, we imagine that our observed time series comes from a stationary process that existed before we started observing it. We denote the past of the stationary process Y_t by Y_{t-1} = {Y_{t-1}, Y_{t-2}, ...}, where the dots mean that there is no clear-cut beginning of this past. We also call it the information set available at time point t - 1. The least squares predictor of Y_t based on the past Y_{t-1} is the function f(Y_{t-1}) that minimizes E[(Y_t - f(Y_{t-1}))²]. This predictor is given by the conditional mean f(Y_{t-1}) = E[Y_t | Y_{t-1}], with corresponding (one-step-ahead) prediction errors e_t = Y_t - f(Y_{t-1}) = Y_t - E[Y_t | Y_{t-1}].

The process e_t is also called the innovation process, as it corresponds to the unpredictable movements in Y_t. If the observations are jointly normally distributed, then the conditional mean is a linear function of the past observations

E[Y_t | Y_{t-1}] = α + φ_1 Y_{t-1} + φ_2 Y_{t-2} + ...

Here α models the mean E[Y_t] = μ of the series. Taking expectations in the above equation gives μ = α + (Σ_k φ_k) μ, so that μ = α / (1 - Σ_k φ_k). As the process is assumed to be stationary, the coefficients φ_k do not depend on time, and the innovation process e_t is also stationary. It has the following properties:

E[e_t] = 0,   E[e_t²] = σ²,   E[e_t e_s] = 0 for all t ≠ s.

Here the variance σ² is constant over time.

3.2.3. Example: Variance Ratio Test

Very often the predictability of stock returns is linked to the presence of autocorrelation in the returns series. If stock returns form an iid process, then the variances of holding period returns should increase in proportion to the length of the holding period. If the expected log return is constant, then under the rational expectations hypothesis stock prices follow a random walk

p_t = μ + p_{t-1} + ε_t,

where p_t is the log price and ε_t is iid with mean zero and variance σ².

The variance of the returns forecasts is

var(p_{t+h} - p_t) = var(hμ + ε_{t+1} + ... + ε_{t+h}) = h σ²

due to the independence. Alternatively, if log returns r_t = p_t - p_{t-1} are iid, then

var(r_t(h)) = var(r_t + r_{t-1} + ... + r_{t-h+1}) = h var(r_t),

where r_t(h) = p_t - p_{t-h} denotes the h-period return. The variance-ratio statistic is defined as

VR_h = var(r_t(h)) / (h var(r_t)),

which should be unity if returns are iid and less than unity under mean reversion. The variance ratio test is set up as H0: VR_h = 1, and under the null

Z_h = (VR_h - 1) / sqrt(2(2h - 1)(h - 1) / (3hT))

is asymptotically standard normal, where T is the number of one-period returns.
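The computation of VR_h and Z_h can be sketched in Python (a hedged illustration, not the chapter's EViews program; the function and variable names are assumptions). Under the random-walk null the ratio should be close to one:

```python
import math
import numpy as np

def variance_ratio(log_prices, h):
    """Variance ratio VR_h from overlapping h-period returns and the
    asymptotic standard normal statistic Z_h under the iid null."""
    p = np.asarray(log_prices, dtype=float)
    r1 = np.diff(p)                       # one-period log returns
    rh = p[h:] - p[:-h]                   # overlapping h-period log returns
    T = len(r1)
    vr = np.var(rh, ddof=1) / (h * np.var(r1, ddof=1))
    se = math.sqrt(2.0 * (2 * h - 1) * (h - 1) / (3.0 * h * T))
    return vr, (vr - 1.0) / se

rng = np.random.default_rng(2)
prices = np.cumsum(0.01 + rng.standard_normal(10_000))   # random walk with drift
vr, z = variance_ratio(prices, 4)
# under the iid null, vr is near 1 and z behaves like a standard normal draw
```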

See Cuthbertson and Nitzsche (2004) for more details about the test. Let us consider as an example how to program the variance ratio test in EViews.

This test uses overlapping h-period returns. As an input to the program, the workfile should contain a series of log prices p used to test for predictability. We start the program in the usual way.

The variable !h denotes the horizon of the returns forecast. Next, we create one-period and h-period returns.

In order to build the variance ratio statistic we need the actual number of observations (returns), and the mean and variance of the returns series.

We can now compute the variance ratio statistic.

We need a p-value in order to test the hypothesis. The two-sided significance level (p-value) can be calculated as follows.
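Outside EViews, the same two-sided p-value, p = 2(1 - Φ(|Z_h|)), can be computed from the complementary error function; a minimal Python sketch (the function name is my own):

```python
import math

def two_sided_p(z):
    """Two-sided p-value 2 * (1 - Phi(|z|)) for a standard normal statistic,
    using 1 - Phi(x) = 0.5 * erfc(x / sqrt(2))."""
    return math.erfc(abs(z) / math.sqrt(2.0))

# two_sided_p(1.96) is approximately 0.05
```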

Finally, we create a table to report the results. We declare a new table object VRTEST with 2 rows and 5 columns, set the width of each column, and write the contents of each cell.

table(2,5) VRTEST
setcolwidth(VRTEST,1,15)
setcolwidth(VRTEST,2,15)
setcolwidth(VRTEST,3,10)
setcolwidth(VRTEST,4,10)
setcolwidth(VRTEST,5,13)
setcell(VRTEST,1,1,"Nr of obs")
setcell(VRTEST,1,2,"Horizon h")
setcell(VRTEST,1,3,"VRh")
setcell(VRTEST,1,4,"test stat Zh")
setcell(VRTEST,1,5,"p-value")
setcell(VRTEST,2,1,T,0)
setcell(VRTEST,2,2,!h,0)
setcell(VRTEST,2,3,VRh,4)
setcell(VRTEST,2,4,Zh,4)
setcell(VRTEST,2,5,Zh_level,5)
delete r mu rh T var1 varh Zh Zh_level
next
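For readers working outside EViews, the whole routine can be mirrored end to end. The sketch below is Python, not the book's code; the variable names parallel the program's objects (r, rh, T, var1, varh, VRh, Zh, Zh_level) and it prints the same five columns as the VRTEST table:

```python
import math
import numpy as np

def vr_test_report(p, h):
    """Mirror of the variance ratio routine: overlapping h-period variance
    ratio, Z statistic, and two-sided p-value, printed as one results row."""
    p = np.asarray(p, dtype=float)
    r = np.diff(p)                        # one-period returns
    rh = p[h:] - p[:-h]                   # overlapping h-period returns
    T = len(r)
    var1 = np.var(r, ddof=1)
    varh = np.var(rh, ddof=1)
    VRh = varh / (h * var1)
    Zh = (VRh - 1.0) / math.sqrt(2.0 * (2 * h - 1) * (h - 1) / (3.0 * h * T))
    Zh_level = math.erfc(abs(Zh) / math.sqrt(2.0))   # two-sided p-value
    print(f"{'Nr of obs':>10} {'Horizon h':>10} {'VRh':>8} "
          f"{'test stat Zh':>13} {'p-value':>9}")
    print(f"{T:>10d} {h:>10d} {VRh:>8.4f} {Zh:>13.4f} {Zh_level:>9.5f}")
    return T, VRh, Zh, Zh_level

rng = np.random.default_rng(3)
log_p = np.cumsum(rng.standard_normal(5_000))        # simulated random-walk log prices
vr_test_report(log_p, 4)
```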
