In any kind of simulation, the variability (or noise) associated with each simulated outcome raises an issue about the stability of the results and, as a consequence, about the time required to arrive at a stable result. From a computational standpoint, one is interested in algorithms and methods that converge quickly; unfortunately, such convergence techniques tend to be very problem specific. In this section, I answer the third question posed at the beginning of this chapter: How best to reduce the variance associated with simulated outcomes?

In practice, there are two ways of obtaining convergence in any simulation-based problem. The first relates to how best to sample the uniform numbers, so that the sampled numbers are well spaced and adequately capture the distribution from which these variables are generated. This method was discussed in the first section of this chapter, where I covered random, stratified, and Latin hypercube sampling methods. In addition, the likes of low-discrepancy sequences, importance-sampling methods, and so on can also be used to reduce sampling errors. Unfortunately, no single method works best for all problems, so a practitioner needs to understand how effective each method can be for the problem at hand. To do this, one applies a variety of tests of randomness^{[1]} to compare which of these techniques produces better uniform numbers faster.
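To make the first idea concrete, here is a minimal sketch (not from the text) comparing plain random sampling against stratified sampling when estimating E[U^2] for a uniform U on [0, 1], whose true value is 1/3. The stratified scheme places exactly one draw inside each of n equal subintervals, which spaces the samples evenly and typically tightens the estimate considerably. All function names here are illustrative, not from the chapter.

```python
import random

def plain_estimate(n, rng):
    # Ordinary Monte Carlo: n independent uniform draws
    return sum(rng.random() ** 2 for _ in range(n)) / n

def stratified_estimate(n, rng):
    # Stratified sampling: one uniform draw inside each of n equal strata,
    # so stratum i contributes a point in [i/n, (i+1)/n)
    return sum(((i + rng.random()) / n) ** 2 for i in range(n)) / n

def sample_variance(estimator, trials, n, seed=0):
    # Empirical variance of the estimator across repeated independent runs
    rng = random.Random(seed)
    xs = [estimator(n, rng) for _ in range(trials)]
    m = sum(xs) / trials
    return sum((x - m) ** 2 for x in xs) / (trials - 1)

v_plain = sample_variance(plain_estimate, trials=500, n=100)
v_strat = sample_variance(stratified_estimate, trials=500, n=100)
print(v_plain, v_strat)  # stratification gives a markedly smaller variance
```

Repeating each estimator many times and comparing the spread of the resulting estimates is exactly the kind of empirical comparison a practitioner would run before committing to a sampling scheme for a given problem.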

Fortunately, in addition to the above, one can do other things to speed up convergence. This second method focuses on how best to use the simulated variables, leveraging them to generate more variables with very little effort, which in turn reduces the variance of the simulated outcomes. Given this backdrop, I will now discuss two of the most commonly used approaches.
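The text does not name the two approaches at this point, so as an illustrative assumption, here is a sketch of one classic technique of this kind, antithetic variates: each simulated uniform U is reused to produce a second, negatively correlated draw 1 - U at essentially no cost. The example estimates E[e^U] (true value e - 1) and compares the spread against plain Monte Carlo using the same total number of function evaluations. All names are illustrative.

```python
import math
import random

def antithetic_estimate(n_pairs, rng):
    # Each uniform U is paired with its antithetic counterpart 1 - U;
    # averaging the pair exploits their negative correlation
    total = 0.0
    for _ in range(n_pairs):
        u = rng.random()
        total += (math.exp(u) + math.exp(1.0 - u)) / 2.0
    return total / n_pairs

def plain_estimate(n_pairs, rng):
    # Ordinary Monte Carlo with the same budget of 2 * n_pairs evaluations
    m = 2 * n_pairs
    return sum(math.exp(rng.random()) for _ in range(m)) / m

def sample_variance(estimator, trials, n_pairs, seed=0):
    # Empirical variance of the estimator across repeated independent runs
    rng = random.Random(seed)
    xs = [estimator(n_pairs, rng) for _ in range(trials)]
    mean = sum(xs) / trials
    return sum((x - mean) ** 2 for x in xs) / (trials - 1)

v_anti = sample_variance(antithetic_estimate, trials=500, n_pairs=100)
v_plain = sample_variance(plain_estimate, trials=500, n_pairs=100)
print(v_plain, v_anti)  # the antithetic estimator is far less variable
```

The saving comes for free: the second draw 1 - U requires no new random number, yet because e^U and e^(1-U) move in opposite directions, their average fluctuates much less than two independent draws would.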

[1] Several published articles describe how to create and implement randomness tests, including the Diehard tests, that help assess the randomness of the generated variables. See Marsaglia and Zaman (1995).
