
9.4: Constraints


The entropy has its maximum value when all probabilities are equal (we assume the number of possible states is finite), and the resulting value for entropy is the logarithm of the number of states, with a possible scale factor like \(k_B\). If we have no additional information about the system, then such a result seems reasonable. However, if we have additional information in the form of constraints, then the assumption of equal probabilities would probably not be consistent with those constraints. Our objective is to find the probability distribution that has the greatest uncertainty, and hence is as unbiased as possible.

For simplicity we consider only one such constraint here. We assume that we know the expected value of some quantity (the Principle of Maximum Entropy can handle multiple constraints, but the mathematical procedures and formulas are more complicated). The quantity in question is one for which each of the states of the system has its own amount, and the expected value is found by averaging the values corresponding to each of the states, taking into account the probabilities of those states. Thus if there is a quantity \(G\) for which each of the states has a value \(g(A_i)\), then we want to consider only those probability distributions for which the expected value is a known value \(\widetilde{G}\):

    \(\widetilde{G} = \displaystyle \sum_{i} p(A_i)g(A_i) \tag{9.4}\)

    Of course this constraint cannot be achieved if \(\widetilde{G}\) is less than the smallest \(g(A_i)\) or greater than the largest \(g(A_i)\).
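Equation 9.4 and the feasibility condition above can be sketched in a few lines of code. This is an illustrative sketch, not part of the text; the function names are my own:

```python
def expected_value(p, g):
    """Equation 9.4: G~ = sum_i p(A_i) * g(A_i), for aligned lists of
    probabilities p and per-state values g."""
    return sum(pi * gi for pi, gi in zip(p, g))

def is_feasible(G_target, g):
    """The constraint can be met only if G~ lies between the smallest
    and largest g(A_i), since G~ is a weighted average of the g(A_i)."""
    return min(g) <= G_target <= max(g)
```

For example, with two states valued 1.0 and 3.0, any target between 1.0 and 3.0 is feasible, and equal probabilities give an expected value of 2.0.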

    Example

    For our Berger’s Burgers example, suppose we are told that the average price of a meal is $2.50, and we want to estimate the separate probabilities of the various meals without making any other assumptions. Then our constraint would be

    \($2.50 = $1.00p(B) + $2.00p(C) + $3.00p(F) + $8.00p(T) \tag{9.5}\)
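A quick numerical check of Equation 9.5 can clarify what the constraint does and does not determine. The candidate distributions below are hypothetical, chosen only for illustration; the text does not single out any particular distribution here:

```python
# Meal prices from Equation 9.5 (Burger, Chicken, Fish, Tofu), in dollars.
prices = {"B": 1.00, "C": 2.00, "F": 3.00, "T": 8.00}

def satisfies_constraint(p, target=2.50, tol=1e-9):
    """True if the distribution p gives an average meal price equal to
    the target, i.e. satisfies Equation 9.5."""
    avg = sum(p[meal] * prices[meal] for meal in prices)
    return abs(avg - target) < tol
```

Note that many distributions satisfy the constraint (for instance, \(p(B)=0.30\), \(p(C)=0.40\), \(p(F)=0.20\), \(p(T)=0.10\) gives exactly $2.50), while the uniform distribution gives $3.50 and does not; the Principle of Maximum Entropy is what selects among the distributions that do.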

For our magnetic-dipole example, assume the energies for states \(U\) and \(D\) are denoted \(e(i)\), where \(i\) is either \(U\) or \(D\), and assume the expected value of the energy is known to be some value \(\widetilde{E}\). All these energies are expressed in joules. Then

    \(\widetilde{E} = e(U)p(U) + e(D)p(D) \tag{9.6}\)

    The energies \(e(U)\) and \(e(D)\) depend on the externally applied magnetic field \(H\). This parameter, which will be carried through the derivation, will end up playing an important role. If the formulas for the \(e(i)\) from Table 9.2 are used here,

    \(\widetilde{E} = m_dH[p(D) − p(U)] \tag{9.7}\)
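The step from Equation 9.6 to Equation 9.7 is a direct substitution. It is consistent with Table 9.2 assigning \(e(U) = -m_d H\) and \(e(D) = +m_d H\) (the aligned dipole having the lower energy); under that assumption the substitution reads:

```latex
\widetilde{E} = e(U)\,p(U) + e(D)\,p(D)
             = (-m_d H)\,p(U) + (m_d H)\,p(D)
             = m_d H\,[\,p(D) - p(U)\,]
```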


    This page titled 9.4: Constraints is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Paul Penfield, Jr. (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.