
8.4: Grid-Based Techniques

    • Franz S. Hover & Michael S. Triantafyllou
    • Massachusetts Institute of Technology via MIT OpenCourseWare

    As noted above, moment calculations on the output variable are essentially an integral over the domain of the random variables. Given this fact, an obvious approach is simply to focus on a high-quality integration routine that uses some fixed points \(x_i\); in the Monte Carlo method, these were chosen at random. In one dimension, we have the standard trapezoid rule:

    \begin{align} \int\limits_{a}^{b} g(x) \, dx \, &\approx \, \sum_{i=1}^{n} w_i g(x_i), \textrm { with} \\[4pt] w_1 \, &= \, (b-a) / 2(n-1) \nonumber \\[4pt] w_n \, &= \, (b-a) / 2(n-1) \nonumber \\[4pt] w_2, \, \cdots, \, w_{n-1} \, &= \, (b-a) / (n-1) \nonumber \\[4pt] x_i \, &= \, a + (i-1)(b-a) / (n-1). \nonumber \end{align}

    Here the \(w_i\) are simply weights, and \(g\) is to be evaluated at the different abscissas \(x_i\). This rule has error of order \(1 / n^2\), meaning that doubling the number of points \(n\) gives a fourfold improvement in the error. To integrate in two dimensions we take the tensor product of two single-dimension calculations:
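    As a minimal sketch (assuming NumPy; `trapezoid_rule` is an illustrative name, not from the text), the weights and abscissas above translate directly into code:

```python
import numpy as np

def trapezoid_rule(g, a, b, n):
    """Approximate the integral of g on [a, b] using the n-point
    trapezoid rule, with the weights w_i and abscissas x_i defined above."""
    x = np.linspace(a, b, n)                  # x_i = a + (i-1)(b-a)/(n-1)
    w = np.full(n, (b - a) / (n - 1))         # interior weights
    w[0] = w[-1] = (b - a) / (2 * (n - 1))    # endpoint weights are halved
    return np.sum(w * g(x))

# Example: integrate g(x) = x**3 on [0, 2]; the exact value is 4.
approx = trapezoid_rule(lambda x: x**3, 0.0, 2.0, 201)
```

    Halving the grid spacing should cut the error by roughly a factor of four, consistent with the \(1/n^2\) behavior noted above.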

    \[ \int\limits_{a}^{b} \int\limits_{c}^{d} g(x, y) \, dy \, dx \, \approx \, \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} w_i w_j g(x_i, y_j). \]

    Here each abscissa is a pair \((x_i, y_j)\), taken according to the grids in the two directions. In many applications, the trapezoid rule in multiple dimensions works quite well and can give a good understanding of the various moments of the given system. Extensions such as Romberg integration, with the tensor product, can be applied to improve results.
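    A sketch of the tensor-product rule (assuming NumPy; `trapezoid_weights` is an illustrative helper, not from the text):

```python
import numpy as np

def trapezoid_weights(a, b, n):
    """1-D trapezoid-rule weights and abscissas on [a, b]."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] = w[-1] = (b - a) / (2 * (n - 1))
    return w, x

# Tensor product of two 1-D rules: integrate g(x, y) = x * y**2
# over [0, 1] x [0, 2]; the exact value is (1/2)(8/3) = 4/3.
wx, x = trapezoid_weights(0.0, 1.0, 81)
wy, y = trapezoid_weights(0.0, 2.0, 81)
X, Y = np.meshgrid(x, y, indexing="ij")
approx2d = np.sum(np.outer(wx, wy) * X * Y**2)
```

    The weight of each grid point \((x_i, y_j)\) is simply the product \(w_i w_j\) of the two one-dimensional weights.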

    A particularly powerful technique involving orthogonal polynomials is also available; it gives truly remarkable accuracy for smooth functions. For our development here, we will focus on normal distributions of random variables. These pertain to a particular set of orthogonal polynomials known as the Hermite (pronounced "hermit") polynomials. These are:

    \begin{align*} h_0 (x) \, &= \, 1 \\[4pt] h_1 (x) \, &= \, x \\[4pt] h_2 (x) \, &= \, x^2 - 1 \\[4pt] &\cdots \nonumber \\[4pt] h_{n+1} (x) \, &= x h_n(x) - n h_{n-1}(x) \textrm{ (recurrence relation).} \end{align*}
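    The recurrence relation is easy to implement; a sketch (assuming NumPy, with `hermite` as an illustrative name):

```python
import numpy as np

def hermite(n, x):
    """Evaluate the Hermite polynomial h_n at x via the recurrence
    h_{n+1}(x) = x h_n(x) - n h_{n-1}(x), starting from h_0 = 1, h_1 = x."""
    h_prev, h = np.ones_like(x), np.asarray(x, dtype=float)
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

# The recurrence reproduces the closed forms, e.g. h_3(x) = x^3 - 3x.
x = np.linspace(-3.0, 3.0, 7)
h3 = hermite(3, x)
```

    Each pass of the loop advances the pair \((h_{k-1}, h_k)\) to \((h_k, h_{k+1})\), so only two polynomials need to be kept at a time.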

    The defining feature of this set of polynomials is that

    \[ \int\limits_{- \infty}^{\infty} \dfrac{1}{\sqrt{2 \pi}} e^{-x^2 / 2} h_i (x) h_j (x) \, dx = \begin{cases} 0, & \text{if } i \neq j \\[4pt] i!, & \text{if } i = j. \end{cases} \]

    Note that we have chosen a scaling so that the inner product of \(h_0\) with itself comes out to be exactly one (for the higher polynomials it is \(i!\)) - some textbook definitions of the Hermite polynomials will not match this. Note also that the inner product is taken with respect to the Gaussian exponential weighting function, which we equate below with the Gaussian pdf. Now the magic comes from the following\(^1\): a \((2n - 1)\)'th order polynomial \(g(x)\) can be written as
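    These inner products are easy to verify numerically; here is a quick sketch (assuming NumPy, with `inner` as an illustrative helper) that approximates them with a dense trapezoid rule on a truncated domain:

```python
import numpy as np

# Sanity check of the Hermite inner products, using a dense trapezoid rule
# on [-8, 8] (the Gaussian weight is negligible beyond this range).
x = np.linspace(-8.0, 8.0, 4001)
phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)     # Gaussian weighting function
h = {0: np.ones_like(x), 1: x, 2: x**2 - 1, 3: x**3 - 3 * x}

def inner(i, j):
    """Trapezoid-rule approximation of the weighted inner product <h_i, h_j>."""
    f = phi * h[i] * h[j]
    dx = x[1] - x[0]
    return dx * (f.sum() - 0.5 * (f[0] + f[-1]))
```

    The cross terms vanish, while the self inner products come out to \(0! = 1\), \(1! = 1\), \(2! = 2\), \(3! = 6\) for these (unnormalized) polynomials.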

    \[ g(x) \, = \, h_n (x) [a_{n-1} h_{n-1}(x) + a_{n-2} h_{n-2}(x) + \cdots + a_0 h_0(x)] + b_{n-1} h_{n-1}(x) + \cdots + b_0 h_0(x). \]

    This formula, with the \(2n\) coefficients \(a\) and \(b\), covers the \((2n-1)\)'th order term with \(a_{n-1} h_n(x) h_{n-1}(x)\), and the zero'th order term with \(b_0 h_0(x)\). It can be shown that all the products are linearly independent, so that indeed all polynomials up to order \(2n - 1\) are accounted for.

    Returning to the integration problem, for the determination of moments, recall the definition that

    \[ E(g(x)) \, = \int\limits_{x \in X} p(x) g(x) \, dx, \]

    where \(p(x)\) is the probability density function of the random variable \(x\). Employing specifically the Gaussian pdf for \(p(x)\), together with the Hermite polynomials so as to exploit orthogonality, every term in the expansion of \(g(x)\) above that contains an \(h_i\) with \(i \geq 1\) integrates to zero (each such term is orthogonal to \(h_0 = 1\)), and we can integrate our \(g(x)\) as follows:

    \[ \int\limits_{- \infty}^{\infty} p(x) g(x) \, dx \, = \, b_0 \int\limits_{- \infty}^{\infty} p(x) h_0(x) \, dx \, = \, b_0. \]

    Thus, we have only to find \(b_0\)! To do this, we cleverly select the abscissas to be the zeros (or roots) of \(h_n(x)\) - let’s call them \(x_1, \cdots, x_n\). We obtain a linear system:

    \begin{align} \begin{Bmatrix} g(x_1) \\[4pt] \vdots \\[4pt] g(x_n) \end{Bmatrix} \, &= \, \begin{bmatrix} h_{n-1}(x_1) & \cdots & h_0(x_1) \\[4pt] \vdots & & \vdots \\[4pt] h_{n-1}(x_n) & \cdots & h_0(x_n) \end{bmatrix} \begin{Bmatrix} b_{n-1} \\[4pt] \vdots \\[4pt] b_0 \end{Bmatrix}, \quad \text{or} \\[4pt] \quad \nonumber \\[4pt] \vec{g} \, &= \, H \vec{b}. \end{align}

    Notice that the \(a\) coefficients do not appear here: because the \(x_i\) are taken at the roots of \(h_n(x)\), the entire term multiplying \(h_n(x)\) vanishes. The linear system can be solved easily as \(\vec{b} = H^{-1} \vec{g}\), and we have a special interest in the last row, which of course has the solution for \(b_0\). Indeed, the bottom row of \(H^{-1}\) is the set of weights \(w_i\), in complete analogy with the weights we defined above for the trapezoid rule:

    \[ b_0 \, = \int\limits_{-\infty}^{\infty} p(x) g(x) \, dx \, = \, \sum_{i=1}^{n} w_i g(x_i). \]
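    Putting the pieces together, a sketch of the whole procedure (assuming NumPy; the helper names are illustrative) computes the roots of \(h_n\), forms \(H\), and reads the weights off the bottom row of \(H^{-1}\):

```python
import numpy as np

def hermite(n, x):
    """Evaluate h_n at x via the recurrence h_{n+1} = x h_n - n h_{n-1}."""
    h_prev, h = np.ones_like(x), np.asarray(x, dtype=float)
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

def gauss_hermite_weights(n):
    """Abscissas x_i (roots of h_n) and weights w_i (bottom row of H^{-1})."""
    # NumPy's "hermite_e" module uses the same (probabilists') convention,
    # so its root finder gives the zeros of h_n directly.
    x = np.polynomial.hermite_e.hermeroots([0.0] * n + [1.0])
    # H[i, j] = h_{n-1-j}(x_i), matching the column order [h_{n-1} ... h_0].
    H = np.column_stack([hermite(n - 1 - j, x) for j in range(n)])
    w = np.linalg.inv(H)[-1, :]    # bottom row extracts b_0
    return w, x

# Example: with n = 3 nodes, compute the fourth moment of x ~ N(0, 1).
w, x = gauss_hermite_weights(3)
fourth_moment = np.sum(w * x**4)
```

    With \(n = 3\) nodes the rule integrates polynomials up to order \(2n - 1 = 5\) exactly against the Gaussian pdf, so it reproduces \(E(x^2) = 1\) and \(E(x^4) = 3\) to machine precision - far beyond what three trapezoid points could do.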


    \(^1\): Parts of this derivation follow J.R. Hockenberry and B.C. Lesieutre (2004), IEEE Transactions on Power Systems, 19:1483-1491.


    This page titled 8.4: Grid-Based Techniques is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Franz S. Hover & Michael S. Triantafyllou (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.