

In earlier sections, I discussed the use of calibration to estimate model parameters from market information. As mentioned, it is not uncommon for practitioners to resort to the use of statistical methods to estimate these model parameters. Reasons why a practitioner would want to go through this process include:

■ Estimating the parameters from historical information helps the practitioner better understand the estimated parameter values and then decide whether the currently traded prices (i.e., the implied model parameters) are cheap relative to where they have been historically.

■ Calibration of model parameters makes good sense only if the prices used for calibration come from a liquidly traded market. If these prices change only sporadically (or if trading volume is very thin), then calibrating to them would be pointless, as it would not give a true representation of market sentiment about the true value of the derivative. In such an instance, supplementing calibration with estimates of these parameters obtained from historical information provides more insight into what is going on in the marketplace.

■ Derivative prices are not always going to be readily available. In such an event, estimating the value of an embedded option, or the value one would assign to a customized derivative that has no active market, is feasible only if one is able to estimate these parameters historically.

Given the above backdrop, it would be of interest to look at two methods one can use to calculate historical parameter values.

Using Historical Implied Volatilities

In the earlier sections of this chapter I discussed the extraction of zero risk-free/dividend rates and spot-volatility rates from liquidly traded financial instruments. In practice, for liquidly traded options (e.g., options on the S&P index) it is quite common to find historical implied values that can be analyzed to better understand the behavior of current implied values during a defined period (e.g., trading hours in a day). As a consequence, one can better estimate the relative cheapness of the implied value of a currently traded option vis-a-vis realized values, or vis-a-vis where the implied values of similar instruments have traded over, say, the last six months.

To understand this better, I will use historical implied volatility values to illustrate how such an analysis can be done.

Figure 6.8 shows the historical daily closing implied volatility values of a one-year at-the-money (ATM) option and how they fluctuate over time. To better understand the behavior of this implied volatility and make practical trading decisions, one can statistically analyze the data underlying Figure 6.8. One simple analysis is to compute the cumulative probabilities associated with the observations, shown in Figure 6.9.

From Figure 6.9, it can be seen that:

■ 100 percent of the observations lie in the interval [10.18%, 23.21%].

■ 95 percent of the observations lie in the interval [10.95%, 20.34%].

■ 90 percent of the observations lie in the interval [11.33%, 19.95%].

■ The average value of the 273 observations is 14.98 percent.

FIGURE 6.8 Historical One-Year ATM Option Volatilities

FIGURE 6.9 Cumulative Probabilities of One-Year ATM Implied Volatilities
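The summary statistics above (the intervals containing 100, 95, and 90 percent of the observations, and the average) can be computed directly from a series of historical implied volatilities. A minimal sketch follows; the simulated series stands in for the 273 actual daily observations, which in practice would be read from market data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for 273 daily closing 1-year ATM implied vols
# (in percent); real data would be loaded from a market-data source.
implied_vols = rng.normal(loc=14.98, scale=2.0, size=273)

# Average of the observations.
mean_vol = implied_vols.mean()

# Central intervals, taken symmetrically in probability from the
# empirical distribution of the observations.
full_range = (implied_vols.min(), implied_vols.max())   # 100% of obs.
p95 = np.percentile(implied_vols, [2.5, 97.5])          # 95% of obs.
p90 = np.percentile(implied_vols, [5.0, 95.0])          # 90% of obs.
```

The 95 percent interval, for instance, is obtained by cutting 2.5 percent of the observations from each tail, which is why it is nested inside the full range and contains the 90 percent interval.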

Depending on what the current implied volatility level is, a trader can ask: How cheap or expensive is the current implied volatility relative to where it has traded historically? Using this type of analysis, the trader can monitor daily charts of a similar constant-maturity option (taking time decay into consideration) to decide whether the trade should be unwound.
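One simple way to quantify this relative cheapness is the empirical percentile rank of the current implied volatility within its own history. The sketch below uses hypothetical numbers (the function name and sample values are illustrative, not from the text).

```python
import numpy as np

def percentile_rank(history, current):
    """Fraction of historical observations at or below the current value."""
    history = np.asarray(history, dtype=float)
    return (history <= current).mean()

# Hypothetical daily closing 1-year ATM implied vols (in percent).
history = [12.1, 13.5, 14.9, 15.2, 16.8, 18.0, 19.9, 11.3, 14.0, 15.5]

# A high rank suggests the current implied vol is rich versus its
# history; a low rank suggests it is cheap.
print(percentile_rank(history, 18.5))  # -> 0.9
```

A trader might, for example, treat a rank above 0.9 as a signal that implied volatility is expensive relative to the chosen lookback window.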

While the above is a simple example of the kind of analysis used to help assess the relative cheapness of an actively trading one-year ATM implied volatility, one can deploy more rigorous statistical approaches (e.g., generalized autoregressive conditional heteroskedasticity (GARCH) or exponential GARCH (EGARCH) models) to understand the impact of seasonality, trading volumes, earnings, and so on, on the implied volatility of the option and thereby fine-tune the decisions.
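To make the GARCH reference concrete, the conditional-variance recursion of a GARCH(1,1) model can be sketched as below. The parameter values are assumed for illustration; in practice they would be fitted by maximizing the Gaussian likelihood over (omega, alpha, beta).

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance path of a GARCH(1,1) model:
        sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]
    Parameters are assumed here, not fitted."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()  # a common choice for the starting variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2
```

With alpha + beta < 1 the process is stationary and the long-run variance is omega / (1 - alpha - beta); EGARCH replaces this recursion with one on log-variance so that negative and positive shocks can have asymmetric effects.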
