In mathematics, more specifically in the theory of Monte Carlo methods, variance reduction is a procedure used to increase the precision of the estimates that can be obtained for a given simulation or computational effort.[1] Every output random variable from the simulation is associated with a variance which limits the precision of the simulation results. In order to make a simulation statistically efficient, i.e., to obtain a greater precision and smaller confidence intervals for the output random variable of interest, variance reduction techniques can be used. The main ones are common random numbers, antithetic variates, control variates, importance sampling, stratified sampling, moment matching, conditional Monte Carlo and quasi-random numbers. For simulation with black-box models, subset simulation and line sampling can also be used. Under these headings are a variety of specialized techniques; for example, particle transport simulations make extensive use of "weight windows" and "splitting/Russian roulette" techniques, which are a form of importance sampling.

Crude Monte Carlo simulation

Suppose one wants to compute \( {\displaystyle z:=E(Z)} \), where the random variable Z is defined on the probability space \( (\Omega ,{\mathcal {F}},P) \). Monte Carlo does this by sampling i.i.d. copies \( {\displaystyle Z_{1},\ldots ,Z_{n}} \) of Z and then estimating z via the sample-mean estimator

\( {\displaystyle {\overline {z}}={\frac {1}{n}}\sum _{i=1}^{n}Z_{i}.} \)

Under further mild conditions such as \( {\displaystyle \operatorname {var} (Z)<\infty } \), a central limit theorem applies, so that for large \( n \) the distribution of \( {\displaystyle {\overline {z}}} \) converges to a normal distribution with mean z and standard deviation \( {\displaystyle \sigma /{\sqrt {n}}} \), where \( \sigma \) is the standard deviation of Z. Because this standard deviation shrinks towards 0 only at the rate \( 1/{\sqrt {n}} \), halving it requires quadrupling the number of simulations n. Variance reduction methods are therefore often useful for obtaining more precise estimates of \( {\displaystyle z} \) without needing very large numbers of simulations.
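The crude estimator and its CLT-based error bar can be sketched as follows. This is a minimal illustration, not from the article: the integrand exp(U) with U uniform on (0,1) is a hypothetical choice, picked because the true value E(Z) = e − 1 is known and easy to check against.

```python
import math
import random
import statistics

def crude_mc(n: int, seed: int = 0) -> tuple[float, float]:
    """Crude Monte Carlo estimate of z = E(Z) for Z = exp(U), U ~ Uniform(0,1).

    Returns the sample mean and its estimated standard error sigma/sqrt(n).
    The true value is e - 1, approximately 1.71828.
    """
    rng = random.Random(seed)
    samples = [math.exp(rng.random()) for _ in range(n)]
    mean = statistics.fmean(samples)
    # Sample standard deviation divided by sqrt(n) estimates sigma/sqrt(n).
    stderr = statistics.stdev(samples) / math.sqrt(n)
    return mean, stderr

mean, stderr = crude_mc(100_000)
print(f"estimate {mean:.4f} +/- {1.96 * stderr:.4f} (approx. 95% CI)")
```

Running the sketch with four times as many samples only halves the reported standard error, which is the slow \( 1/{\sqrt {n}} \) convergence the text describes.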

Common Random Numbers (CRN)

Common random numbers (CRN) is a popular and useful variance reduction technique which applies when we are comparing two or more alternative configurations (of a system) instead of investigating a single configuration. CRN has also been called correlated sampling, matched streams or matched pairs.

CRN requires synchronization of the random number streams, which ensures that in addition to using the same random numbers to simulate all configurations, a specific random number used for a specific purpose in one configuration is used for exactly the same purpose in all other configurations. For example, in queueing theory, if we are comparing two different configurations of tellers in a bank, we would want the (random) time of arrival of the N-th customer to be generated using the same draw from a random number stream for both configurations.

Underlying principle of the CRN technique

Suppose \( X_{{1j}} \) and \( X_{{2j}} \) are the observations from the first and second configurations on the j-th independent replication.

We want to estimate

\( \xi =E(X_{{1j}})-E(X_{{2j}})=\mu _{1}-\mu _{2}.\, \)

If we perform n replications of each configuration and let

\( Z_{j}=X_{{1j}}-X_{{2j}}\quad {\mbox{for }}j=1,2,\ldots ,n, \)

then \( E(Z_{j})=\xi \) and \( Z(n)={\frac {\sum _{{j=1,\ldots ,n}}Z_{j}}{n}} \) is an unbiased estimator of \( \xi . \)

Since the \( Z_{j} \) are independent and identically distributed random variables,

\( {\displaystyle \operatorname {Var} [Z(n)]={\frac {\operatorname {Var} (Z_{j})}{n}}={\frac {\operatorname {Var} [X_{1j}]+\operatorname {Var} [X_{2j}]-2\operatorname {Cov} [X_{1j},X_{2j}]}{n}}.} \)

In the case of independent sampling, i.e., when no common random numbers are used, Cov(X1j, X2j) = 0. But if we succeed in inducing a positive correlation between X1 and X2 such that Cov(X1j, X2j) > 0, it can be seen from the equation above that the variance is reduced.

It can also be observed that if the CRN induces a negative correlation, i.e., Cov(X1j, X2j) < 0, the technique can actually backfire, in which case the variance is increased rather than decreased as intended.[2]
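The variance identity above can be checked empirically. The sketch below is a hypothetical example, not from the article: the two "configurations" are simple affine functions of a uniform driving draw, chosen so that both responses are increasing in the same input, which induces the positive covariance CRN relies on. With CRN both configurations consume the same draw u on replication j; without it, each uses its own independent stream.

```python
import random
import statistics

def differences(n: int, use_crn: bool, seed: int = 0) -> list[float]:
    """Return the replication differences Z_j = X_1j - X_2j.

    Both hypothetical configurations are driven by a uniform draw;
    with CRN the same draw feeds both, otherwise each configuration
    has its own independent random number stream.
    """
    rng1 = random.Random(seed)
    rng2 = random.Random(seed + 1)  # separate stream for independent sampling
    z = []
    for _ in range(n):
        u1 = rng1.random()
        u2 = u1 if use_crn else rng2.random()
        x1 = 2.0 * u1 + 1.0   # config 1: hypothetical response, increasing in u
        x2 = 1.5 * u2 + 1.2   # config 2: hypothetical response, increasing in u
        z.append(x1 - x2)
    return z

n = 50_000
var_crn = statistics.variance(differences(n, use_crn=True))
var_ind = statistics.variance(differences(n, use_crn=False))
print(f"Var(Z_j) with CRN: {var_crn:.4f}, independent: {var_ind:.4f}")
```

For this toy model the identity can be verified by hand: Var(X1) = 4/12, Var(X2) = 2.25/12, and under CRN Cov(X1, X2) = 3/12, giving Var(Z_j) = 1/48 with CRN versus 25/48 with independent streams, a 25-fold reduction.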

See also

Explained variance

References

Botev, Z.; Ridder, A. (2017). "Variance Reduction". Wiley StatsRef: Statistics Reference Online: 1–6. doi:10.1002/9781118445112.stat07975. ISBN 9781118445112.

Hamrick, Jeff. "The Method of Common Random Numbers: An Example". Wolfram Demonstrations Project. Retrieved 29 March 2016.

Hammersley, J. M.; Handscomb, D. C. (1964). Monte Carlo Methods. London: Methuen. ISBN 0-416-52340-4.

Kahn, H.; Marshall, A. W. (1953). "Methods of Reducing Sample Size in Monte Carlo Computations". Journal of the Operations Research Society of America. 1 (5): 263–271. doi:10.1287/opre.1.5.263.

MCNP — A General Monte Carlo N-Particle Transport Code, Version 5 Los Alamos Report LA-UR-03-1987
