Frequently Asked Questions in Quantitative Finance
1986 Ho and Lee

One of the problems with the Vasicek framework for interest-rate derivative products was that it didn't give very good prices for bonds, the simplest of fixed-income products. If the model couldn't even get bond prices right, how could it hope to correctly value bond options? Thomas Ho and Sang-Bin Lee found a way around this, introducing the idea of yield-curve fitting or calibration. See Ho & Lee (1986).
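In continuous time, and in modern notation rather than Ho and Lee's original discrete-time setup, the idea can be sketched as follows: take a short-rate model with a time-dependent drift and choose that drift so today's bond prices are reproduced exactly. With f(0, t) denoting the initial forward curve,

```latex
\mathrm{d}r = \theta(t)\,\mathrm{d}t + \sigma\,\mathrm{d}W,
\qquad
\theta(t) = \frac{\partial f(0,t)}{\partial t} + \sigma^2 t .
```

Every vanilla bond is then priced correctly by construction, and the model is free to value options consistently on top of the fitted curve.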

1992 Heath, Jarrow and Morton

Although Ho and Lee showed how to match theoretical and market prices for simple bonds, the methodology was rather cumbersome and not easily generalized. David Heath, Robert Jarrow and Andrew Morton (HJM) took a different approach. Instead of modelling just a short rate and deducing the whole yield curve, they modelled the random evolution of the whole yield curve. The initial yield curve, and hence the value of simple interest rate instruments, was an input to the model. The model cannot easily be expressed in differential equation terms and so relies on either Monte Carlo simulation or tree building. The work was well known via a working paper, but was finally published, and therefore made respectable, in Heath, Jarrow & Morton (1992).
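As a concrete illustration, here is a minimal Monte Carlo sketch of a one-factor HJM model with constant volatility (toy parameters chosen for illustration; with constant volatility the model is just the continuous-time Ho-Lee model). Under the risk-neutral measure the no-arbitrage drift of each forward rate f(t, T) is σ²(T − t), and the check at the end confirms the calibration property described above: the simulated bond price matches the input curve.

```python
import math
import random

random.seed(42)

sigma = 0.01          # forward-rate volatility (assumed constant)
f0 = 0.05             # flat initial forward curve (a toy assumption)
T, steps = 1.0, 20
dt = T / steps
paths = 4000

pv_sum = 0.0
for _ in range(paths):
    fwd = [f0] * (steps + 1)   # forward curve f(t, T_j) on the grid
    integral_r = 0.0
    for i in range(steps):
        t = i * dt
        integral_r += fwd[i] * dt          # short rate = forward for 'now'
        z = random.gauss(0.0, 1.0)
        # evolve every forward still alive: HJM drift + one common shock
        for j in range(i + 1, steps + 1):
            Tj = j * dt
            fwd[j] += sigma * sigma * (Tj - t) * dt \
                      + sigma * math.sqrt(dt) * z
    pv_sum += math.exp(-integral_r)

mc_bond = pv_sum / paths
print(mc_bond, math.exp(-f0 * T))  # Monte Carlo price vs input curve price
```

With many factors, or volatilities depending on the curve itself, exactly the same loop structure applies, which is why the framework lends itself to simulation rather than to differential equations.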

1990s Cheyette, Barrett, Moore and Wilmott

When there are many underlyings, all following lognormal random walks, you can write down the value of any European non-path-dependent option as a multiple integral, one dimension for each asset. Valuing such options then becomes equivalent to calculating an integral. The usual methods for quadrature are very inefficient in high dimensions, but simulations can prove quite effective. Monte Carlo evaluation of integrals is based on the idea that an integral is just an average multiplied by a 'volume.' And since one way of estimating an average is by picking numbers at random we can value a multiple integral by picking integrand values at random and summing. With N function evaluations, taking a time of O(N), you can expect an accuracy of O(1/N^(1/2)), independent of the number of dimensions. As mentioned above, breakthroughs in the 1960s on low-discrepancy sequences showed how clever, non-random, distributions could be used for an accuracy of O(1/N), to leading order. (There is a weak dependence on the dimension.) In the early 1990s several groups of people were simultaneously working on valuation of multi-asset options. Their work was less of a breakthrough than a transfer of technology.
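The idea can be sketched in a few lines (with a toy integrand rather than an option payoff): estimate a five-dimensional integral first with pseudo-random points, then with a Halton low-discrepancy sequence built from radical inverses.

```python
import math
import random

def van_der_corput(n, base):
    # radical inverse of n in the given base: the digits of n mirrored
    # about the radix point, giving a very evenly spread sequence in [0,1)
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def halton(i, bases):
    # i-th point of the Halton low-discrepancy sequence,
    # one prime base per dimension
    return [van_der_corput(i, b) for b in bases]

# Toy integrand: f(x) = x1*...*x5 over the unit 5-cube, exact value (1/2)^5
dims, N = 5, 4096
bases = [2, 3, 5, 7, 11]

random.seed(0)
mc = sum(math.prod(random.random() for _ in range(dims))
         for _ in range(N)) / N
qmc = sum(math.prod(halton(i, bases)) for i in range(1, N + 1)) / N
print(mc, qmc)  # both close to the exact value 0.03125
```

The pseudo-random estimate improves like O(1/N^(1/2)); the Halton estimate, for a smooth integrand in modest dimension, improves at close to O(1/N).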

They used ideas from the field of number theory and applied them to finance. Nowadays, these low-discrepancy sequences are commonly used for option valuation whenever random numbers are needed. A few years after these researchers made their work public, a completely unrelated group at Columbia University successfully patented the work. See Oren Cheyette (1990) and John Barrett, Gerald Moore & Paul Wilmott (1992).

1994 Dupire, Rubinstein, Derman and Kani

Another discovery was made independently and simultaneously by three groups of researchers in the subject of option pricing with deterministic volatility. One of the perceived problems with classical option pricing is that the assumption of constant volatility is inconsistent with market prices of exchange-traded instruments. A model is needed that can correctly price vanilla contracts, and then price exotic contracts consistently. The new methodology, which quickly became standard market practice, was to find the volatility as a function of underlying and time that, when put into the Black-Scholes equation and solved, usually numerically, gave option prices matching those in the market. This is what is known as an inverse problem: use the 'answer' to find the coefficients in the governing equation. On the plus side, this is not too difficult to do in theory. On the minus side, the practice is much harder, the sought volatility function depending very sensitively on the initial data.

From a scientific point of view there is much to be said against the methodology. The resulting volatility structure never matches actual volatility, and even if exotics are priced consistently it is not clear how best to hedge exotics with vanillas in order to minimize any model error. Such concerns seem to carry little weight, since the method is so ubiquitous. As so often happens in finance, once a technique becomes popular it is hard to go against the majority. There is job safety in numbers. See Emanuel Derman & Iraj Kani (1994), Bruno Dupire (1994) and Mark Rubinstein (1994).
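In the simplest setting, zero interest rates and dividends, the calibrated volatility can even be written down explicitly; Dupire's formula gives the local volatility in terms of the market prices C(K, T) of vanilla calls across strikes and expiries:

```latex
\sigma_{\mathrm{loc}}^2(K,T)
= \frac{\partial C/\partial T}{\tfrac{1}{2}\,K^2\,\partial^2 C/\partial K^2}.
```

The second derivative in the denominator is exactly what makes the inverse problem so delicate in practice: it must be estimated from discretely quoted, noisy prices, and small errors in the data are amplified enormously in the fitted volatility surface.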
