
8.5: Issues of Cost and Accuracy

    Franz S. Hover & Michael S. Triantafyllou (Massachusetts Institute of Technology, via MIT OpenCourseWare)

    The balance of cost versus error for the methods above is summarized as follows (a short numerical check of these rates appears after the list):

    • The Monte Carlo estimator has a variance that decreases as \(1/N\), where \(N\) is the number of evaluations; this rate does not depend on the random dimension \(d\) or on the form of the function.
    • The trapezoidal rule in one dimension has an error that decreases with \(n^{-2}\).
    • Gauss-Hermite quadrature in one dimension integrates exactly all polynomials \(g(x)\) of order \(2n-1\) or less, and makes the best possible polynomial fit when \(g(x)\) is of higher order. In the latter case the error goes as \(n^{-r}\), where \(r\) is the "smoothness" of the function, that is, the number of derivatives that can be taken everywhere in the domain.
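
    The three rates above can be checked numerically. Below is a minimal Python sketch (an illustration, not part of the original text): it estimates the Monte Carlo estimator's standard deviation for a simple test function, tabulates the trapezoidal-rule error for a smooth one-dimensional integrand, and verifies the Gauss-Hermite exactness property using NumPy's hermgauss routine, which uses the weight \(e^{-x^2}\). The test functions and sample sizes are arbitrary choices made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Monte Carlo: estimator standard deviation ~ 1/sqrt(N) -------------------
# Illustrative test problem: E[g(X)] with X ~ N(0,1) and g(x) = x^2 (true value 1).
for N in (100, 400, 1600):
    estimates = [np.mean(rng.standard_normal(N) ** 2) for _ in range(2000)]
    print(f"MC    N={N:5d}   estimator std = {np.std(estimates):.4f}")
# Each 4x increase in N should roughly halve the standard deviation.

# --- Trapezoidal rule: error ~ n^{-2} ----------------------------------------
# Integrate cos(x) on [0, pi/2]; the exact value is 1.
for n in (8, 16, 32, 64):
    x = np.linspace(0.0, np.pi / 2, n + 1)
    y = np.cos(x)
    h = (np.pi / 2) / n
    trap = h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])
    print(f"trap  n={n:3d}   error = {abs(trap - 1.0):.2e}")
# Doubling n should cut the error by about a factor of four.

# --- Gauss-Hermite: exact for polynomials of order <= 2n-1 -------------------
# numpy's hermgauss uses the weight exp(-x^2); the exact value of
# the integral of x^4 exp(-x^2) over the real line is 3*sqrt(pi)/4.
nodes, weights = np.polynomial.hermite.hermgauss(3)      # n = 3 points
approx = np.sum(weights * nodes ** 4)                    # degree 4 <= 2n-1 = 5
print(f"Gauss-Hermite n=3, g(x)=x^4: {approx:.12f} vs exact {3 * np.sqrt(np.pi) / 4:.12f}")
```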

    Some simple but important scaling laws show that Monte Carlo will ultimately outperform any quadrature rule in high enough dimensions. Recall that the MC estimator \(G\) has an error (a standard deviation of the estimate) that scales as \[\epsilon \, = \, O (1 / \sqrt{N}).\]

    Now, in contrast, the error of a quadrature rule in one dimension scales as \[ \epsilon \, = \, O (n^{-k}), \] where \(k\) depends on both the method used and the function itself: \(k = 2\) for the trapezoidal rule described above, and \(k = r\) (the smoothness) for Gaussian quadrature. In the multi-dimensional quadrature case the error formula stays the same, but the total number of evaluations \(N\) now includes all the grid points, i.e., \(N = n^d\) (assuming \(n_1 = n_2 = \cdots = n_d = n\)). Hence, we have

    \[ \epsilon \, = \, O ((N^{1/d})^{-k}) \, = \, O (N^{-k/d}). \]
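
    To make the \(N^{-k/d}\) scaling concrete, the sketch below (an illustration, not part of the original text) spends a roughly fixed budget of about 4096 function evaluations on a tensor-product trapezoidal rule and on plain Monte Carlo in \(d = 1, \dots, 4\) dimensions, using the test integrand \(\prod_i \cos(x_i)\) on the unit cube, whose exact integral is \(\sin(1)^d\). The budget and integrand are arbitrary choices, and the Monte Carlo error shown is a single random realization that fluctuates from run to run.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

def trap_tensor(f, d, n):
    """Tensor-product trapezoidal rule on [0,1]^d, n+1 points per axis ((n+1)^d evaluations)."""
    x = np.linspace(0.0, 1.0, n + 1)
    w = np.full(n + 1, 1.0 / n)
    w[0] = w[-1] = 0.5 / n                           # 1-D trapezoid weights
    total = 0.0
    for idx in itertools.product(range(n + 1), repeat=d):
        total += np.prod([w[i] for i in idx]) * f(np.array([x[i] for i in idx]))
    return total

f = lambda x: np.prod(np.cos(x))                     # exact integral over [0,1]^d is sin(1)^d
budget = 4096                                        # roughly equal total evaluations per method
for d in (1, 2, 3, 4):
    n = max(2, round(budget ** (1.0 / d)) - 1)       # points per axis so that (n+1)^d ~ budget
    exact = np.sin(1.0) ** d
    quad_err = abs(trap_tensor(f, d, n) - exact)
    mc_err = abs(np.mean([f(s) for s in rng.random((budget, d))]) - exact)
    print(f"d={d}   n per axis={n:4d}   trapezoid error={quad_err:.1e}   MC error={mc_err:.1e}")
```

    In a typical run the trapezoidal error grows rapidly with \(d\) at fixed budget, and by \(d = 4\) it is of the same order as the Monte Carlo error, consistent with the argument above.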

    Error in the quadrature rules is affected dramatically by \(d\): even in two dimensions, the \(N^{-2}\) convergence of the trapezoidal rule (in terms of the total number of evaluations \(N\)) is degraded to \(N^{-1}\)! In four dimensions, the error rate of \(1 / \sqrt{N}\) is the same as for Monte Carlo. Gaussian quadrature may do better for some functions, but there are plenty of them for which \(r = 2\); for example, the innocuous \(g(x) = x^2 |x|\)!
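
    The contrast between a smooth integrand and one with only \(r = 2\) derivatives can be seen directly with Gauss-Hermite quadrature. The sketch below (illustrative, and using NumPy's hermgauss convention with weight \(e^{-x^2}\) rather than a Gaussian probability density) integrates the smooth \(\cos(x)\), whose exact weighted integral is \(\sqrt{\pi}\, e^{-1/4}\), and the non-smooth \(x^2 |x|\), whose exact weighted integral is 1. The first error should collapse quickly with \(n\); the second shrinks only slowly.

```python
import numpy as np

# Exact values under the Gauss-Hermite weight exp(-x^2) used by numpy:
#   integral of cos(x)  * exp(-x^2) dx over the real line = sqrt(pi) * exp(-1/4)
#   integral of x^2|x|  * exp(-x^2) dx over the real line = 1
exact_smooth = np.sqrt(np.pi) * np.exp(-0.25)
exact_rough = 1.0

for n in (4, 8, 16, 32):
    x, w = np.polynomial.hermite.hermgauss(n)
    err_smooth = abs(np.sum(w * np.cos(x)) - exact_smooth)
    err_rough = abs(np.sum(w * x ** 2 * np.abs(x)) - exact_rough)
    print(f"n={n:2d}   cos(x) error = {err_smooth:.1e}   x^2|x| error = {err_rough:.1e}")
```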

    Another major factor to consider is that the quadrature rules are inherently trying to make a polynomial approximation of the function, whereas the Monte Carlo technique has no such intention. Discontinuous and otherwise non-smooth functions in particular can cause serious problems for the quadratures, which must have many points in order to cover the sharp spots accurately.
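
    As a simple illustration of this point (the step location and grid sizes are arbitrary choices, not from the text), the sketch below applies the trapezoidal rule to a discontinuous integrand. The cell that straddles the jump dominates the error, which then shrinks only like \(1/n\) instead of the \(1/n^2\) rate seen for smooth integrands.

```python
import numpy as np

# Hypothetical discontinuous integrand: a unit step at x = 1/3 on [0, 1].
# The function is 0 on [0, 1/3) and 1 on [1/3, 1], so the exact integral is 2/3.
step = lambda x: (x >= 1.0 / 3.0).astype(float)
exact = 2.0 / 3.0

for n in (16, 64, 256, 1024):
    x = np.linspace(0.0, 1.0, n + 1)
    y = step(x)
    h = 1.0 / n
    trap = h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])
    print(f"n={n:5d}   trapezoid error = {abs(trap - exact):.2e}")
# The error is controlled by the single cell containing the jump and decays only like 1/n.
```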

    Summarizing, we recommend that you keep both major classes of integration tools handy: grid-less and grid-based. For lower dimensions and smoother functions, Gaussian quadratures can provide exceptional results, whereas the Monte Carlo workhorse always comes out on top for high-dimensional, difficult functions. The differences are trivial when evaluations of \(g\) are very cheap, but become very compelling when each evaluation takes a lot of computer time, or involves an actual experiment.

    There are substantial extensions to multi-dimensional quadrature rules and to Monte Carlo methods, some of which are available in the following references:

    M.H. Kalos and P.A. Whitlock, 1986, Monte Carlo Methods, Volume 1: Basics, New York: Wiley.

    W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery, 1992, Numerical Recipes in C, Cambridge, UK: Cambridge University Press.

