Using Historical Underlying Values

In the previous section I discussed how, due to the high liquidity of actively traded options, historical implied volatilities are readily available for analysis. In such an instance, one could analyze historical implied volatility values to help decide how cheap the current implied volatility is relative to implied volatilities experienced in the past. While it is important to understand this type of relative-value analysis, the data (even if readily available) does not lend itself to estimating the volatility that the option owner is going to realize during the life of the option.

The realized volatility that the option holder is going to experience during the life of the option can actually be estimated using historical data on the underlying asset. More precisely, to do this, one has to first recall that the underlying assumption regarding the movements in the stock price is that future prices are lognormally distributed; that is, $\ln S_T$ is normally distributed with a mean of $\ln S_t + \left(\mu - \frac{\sigma^2}{2}\right)(T - t)$ and a variance of $\sigma^2 (T - t)$ or, equivalently, $\ln \frac{S_T}{S_t}$ is normally distributed with a mean of $\left(\mu - \frac{\sigma^2}{2}\right)(T - t)$ and a variance of $\sigma^2 (T - t)$.
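
Spelled out for a single time step of length $\Delta t$ (one day, measured in years), the same assumption reads

```latex
\ln\!\left(\frac{S_{t+\Delta t}}{S_t}\right)
  \sim N\!\left(\left(\mu - \frac{\sigma^2}{2}\right)\Delta t,\; \sigma^2 \Delta t\right),
```

which is the form the next paragraph works with.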

Since equation (3.5) is equivalent to saying that continuously compounded daily returns are normally distributed with a mean of zero[1] (i.e., $\left(\mu - \frac{\sigma^2}{2}\right)\Delta t \approx 0$) and a variance of $\sigma^2 \Delta t$, it readily follows that the volatility of the underlying stock, $\sigma$, can be obtained by computing the standard deviation of the annualized continuously compounded (i.e., natural logarithm) returns. Succinctly put, this standard deviation can be computed using the expression $\hat{\sigma} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} u_i^2}$, where $u_i$ represents the ith annualized continuously compounded daily return.[2] This result is obtained using a statistical method called the method of moments. Another method of parameter estimation, called maximum likelihood estimation, produces a similar result for the average volatility, except that the denominator used in the computation of the standard deviation is n − 1.[3]
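
To make the mechanics concrete, here is a minimal Python sketch with made-up prices (none of these numbers come from Table 6.11); following the expression above, each "annualized" return is taken to be the log return divided by the square root of the year fraction between observations, so that its standard deviation is the annualized volatility:

```python
import math

# Minimal sketch: hypothetical daily closes and the calendar days between them.
prices = [1850.0, 1842.5, 1858.1, 1861.4, 1855.9, 1867.3]
day_gaps = [1, 1, 3, 1, 1]            # a weekend shows up as a 3-day gap
DAYS_PER_YEAR = 365.0                 # day-count convention (could be 250 or 360)

# Annualized continuously compounded returns: u_i = ln(S_i / S_{i-1}) / sqrt(dt_i),
# so that each u_i has standard deviation sigma (the annualized volatility).
u = []
for i in range(1, len(prices)):
    dt = day_gaps[i - 1] / DAYS_PER_YEAR
    u.append(math.log(prices[i] / prices[i - 1]) / math.sqrt(dt))

n = len(u)
vol_mom = math.sqrt(sum(x * x for x in u) / n)          # zero mean, divide by n
vol_alt = math.sqrt(sum(x * x for x in u) / (n - 1))    # n - 1 denominator variant

print(f"method-of-moments volatility: {vol_mom:.4f}")
print(f"n - 1 denominator variant:    {vol_alt:.4f}")
```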

Figure 6.10 shows the historical path of the S&P 500 index.

Table 6.11 shows the sample calculations related to the historical volatility.


FIGURE 6.10 Historical S&P 500 Index Values

As can be seen in Table 6.11, to arrive at the annualized volatility of 8.4 percent, I had to make a few assumptions, such as:

Number of Days Used to Convert Days into Years In arriving at the 8.4 percent annualized volatility number, I used 365 days (as a day-count convention) to calculate the time (in years) between each pair of data points. Practitioners sometimes also use 250 or 360 days. While using 360 days as the day-count convention only slightly changes the result, using 250 days would change the volatility by about 2 percent.
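
One simple way to see the day-count sensitivity, assuming the only change is the constant used to turn day gaps into year fractions (which is not necessarily how the Table 6.11 figures were recomputed), is that the annualized volatility then scales with the square root of the day count; a small Python sketch starting from the 8.4 percent figure:

```python
import math

# If only the days-per-year constant changes (calendar-day gaps untouched),
# each annualized return u_i, and hence sigma, scales with sqrt(days_per_year).
vol_365 = 0.084                       # volatility computed under a 365-day count
for days in (360.0, 250.0):
    rescaled = vol_365 * math.sqrt(days / 365.0)
    print(f"{days:.0f}-day convention: {rescaled:.4f}")
```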

TABLE 6.11 Computation of Daily Historical Volatility

Choice of Return Period In the example discussed, I used daily returns to arrive at the volatility. I could have just as well used a weekly return to arrive at a volatility of 10.9 percent or a monthly return to arrive at a volatility of 10.7 percent. The question of what return period one should use in practice is a tricky one, as it is usually a function of how frequently the hedger wants to rebalance the portfolio or positions.
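
As a hedged sketch of the mechanics (the price path below is simulated, so the resulting numbers will not match the 10.9 percent and 10.7 percent quoted above), the same zero-mean estimator can be applied to the path sampled weekly or monthly instead of daily:

```python
import math
import random

def annualized_vol(prices, years_per_step):
    """Zero-mean volatility estimate from prices sampled at a fixed step length."""
    u = [math.log(prices[i] / prices[i - 1]) / math.sqrt(years_per_step)
         for i in range(1, len(prices))]
    return math.sqrt(sum(x * x for x in u) / len(u))

# Simulate a year of hypothetical daily closes (250 trading days) purely for
# illustration; in practice this list would hold the observed index values.
random.seed(0)
daily = [1850.0]
for _ in range(250):
    daily.append(daily[-1] * math.exp(random.gauss(0.0, 0.10 / math.sqrt(250))))

weekly = daily[::5]      # every 5th trading day, roughly weekly closes
monthly = daily[::21]    # every 21st trading day, roughly monthly closes

print("daily  :", round(annualized_vol(daily, 1 / 250), 4))
print("weekly :", round(annualized_vol(weekly, 5 / 250), 4))
print("monthly:", round(annualized_vol(monthly, 21 / 250), 4))
```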

Amount of Data Used In the computation carried out, I used the entire data set to compute the returns. In practice, despite the fact that one may have access to a large historical database, it may not be prudent to use the entire data set: going back too far into history may not be reflective of the current financial-market/economic environment, while using too little data may not give a good estimate of what the reality could be. Herein lies the balance, and the art, of deciding how much data to use and which part of the historical period should be considered. Another related question is how (if at all) the life of the option should affect how much data is used.

Weighting the Observations Throughout the example, by using the formula $\hat{\sigma} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} u_i^2}$, I implicitly assumed that the observations used to compute the standard deviation were equally weighted (i.e., each had a weight of $\frac{1}{n}$). In practice, it is common to assume that the more recent historical observations should be weighted more heavily than the less recent ones. To do this, practitioners assume that the standard deviation is given by the formula $\sigma = \sqrt{\sum_{i=1}^{n} w_i u_i^2}$, where $w_i$ is the weight attached to $u_i$ (the return i days ago), $\sum_{i=1}^{n} w_i = 1$, and $w_i > w_j$ (for $i < j$).[4] A further extension to this weighting scheme is to assume that there is a long-run average variance with its own appropriate weight. As a consequence, the expression for the volatility would take the form

$$\sigma = \sqrt{w_0 V + \sum_{i=1}^{n} w_i u_i^2}, \qquad w_0 + \sum_{i=1}^{n} w_i = 1,$$

where V and $w_0$ represent the long-run variance and its respective weight.[5]
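
A minimal Python sketch of these weighted estimates, assuming (purely for illustration) weights that decline geometrically with a decay factor λ = 0.94, a hypothetical set of annualized returns, and a made-up long-run variance and weight:

```python
import math

# Hypothetical annualized continuously compounded returns, most recent first.
u = [0.12, -0.08, 0.05, 0.15, -0.11, 0.07]
lam = 0.94                            # illustrative decay factor

# Declining weights w_1 > w_2 > ... > w_n that sum to 1 (EWMA-like for large n).
raw = [lam ** i for i in range(len(u))]
w = [r / sum(raw) for r in raw]

var_weighted = sum(wi * ui ** 2 for wi, ui in zip(w, u))

# Adding a long-run variance V with weight w0; remaining weights are rescaled
# by (1 - w0) so that all weights still sum to 1.
V, w0 = 0.10 ** 2, 0.10
var_long_run = w0 * V + sum((1 - w0) * wi * ui ** 2 for wi, ui in zip(w, u))

print("weighted volatility        :", round(math.sqrt(var_weighted), 4))
print("with long-run variance term:", round(math.sqrt(var_long_run), 4))
```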

It is this thinking that led to the development of the different schemes for weighting historical returns.


FIGURE 6.11 Rolling Historical Volatilities

Rolling History One criterion in choosing an appropriate estimate for realized volatility is to ensure that the estimate has some noise: too little noise typically implies too much data, while too much noise implies too little data. To understand what the right amount of noise should be, one needs to study how the volatility rolls through time and decide how much noise one is prepared to accept. To do this, one calculates the volatility for a given data set (as discussed earlier) and then recalculates it for a new data set that adds a new data point and drops the oldest one. This is then repeated over the entire data set. In the context of our example, Figure 6.11 illustrates the rolling volatility obtained by adding older data to the original data set.
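
A minimal sketch of this rolling recalculation, assuming a fixed window of 60 hypothetical annualized daily returns (it does not reproduce Figure 6.11):

```python
import math
import random

def window_vol(returns):
    """Zero-mean volatility estimate over one window of annualized returns."""
    return math.sqrt(sum(r * r for r in returns) / len(returns))

# Hypothetical annualized continuously compounded daily returns.
random.seed(1)
returns = [random.gauss(0.0, 0.10) for _ in range(500)]

WINDOW = 60
# Each step adds the newest return and drops the oldest one.
rolling = [window_vol(returns[i - WINDOW:i])
           for i in range(WINDOW, len(returns) + 1)]

# Plotting `rolling` against time gives a chart in the spirit of Figure 6.11.
print("first estimate:", round(rolling[0], 4))
print("last estimate :", round(rolling[-1], 4))
```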

As can be seen from Figure 6.11, the rolling volatility (or volatility of the rolling data) is slowly trending downwards. The consequence of this exhibit is the need to look at a longer historical period to better understand how much up-and-down fluctuation this volatility measure has been subject to. Should this be all the historical information available (or should further historical information support the same trend as above), it clearly means that the number of data points used to compute the volatility needs to be reduced so as to introduce more noise into the rolling volatility. In running the above analysis, it is imperative to do a reality check to see if the results make intuitive sense, as it is very easy to get caught up in theoretical results that may not make any sense in practice.

  • [1] This assumption has very little impact on the volatility estimation since, over a small time interval $\Delta t$, the quantity $\left(\mu - \frac{\sigma^2}{2}\right)\Delta t$ would be very small.
  • [2] The formula to calculate the standard deviation (as typically seen in standard statistics textbooks) is of the form $\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(u_i - \bar{u}\right)^2}$. Since the mean of this population is assumed to be 0, $\bar{u} = 0$. From this, the result for the standard deviation readily follows.
  • [3] The formula for $\sigma$ obtained using a maximum likelihood estimation method takes the form $\hat{\sigma} = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} u_i^2}$.
  • [4] In particular, by setting $w_{i+1} = \lambda w_i$ (where $0 < \lambda < 1$) and assuming n is large enough, one can arrive at the exponentially weighted moving average (EWMA) model. See Hull (2012).
  • [5] Known as the ARCH(n) model, this was first introduced by Engle in 1982.
 