
6.4: Uniformization


    Up until now, we have discussed Markov processes under the assumption that \(q_{ii} = 0\) (\(i.e.\), no transitions from a state into itself are allowed). We now consider what happens if this restriction is removed. Suppose we start with some Markov process defined by a set of transition rates \(q_{ij}\) with \(q_{ii} = 0\), and we modify this process by some arbitrary choice of \(q_{ii}\geq 0\) for each state \(i\). This modification changes the embedded Markov chain, since \(\nu_i\) is increased from \(\sum_{k \neq i}q_{ik}\) to \(\sum_{k\neq i} q_{ik} + q_{ii}\). From (6.1.5), \(P_{ij}\) is changed to \(q_{ij}/\nu_i\) for the new value of \(\nu_i\) for each \(i, j\). Thus the steady-state probabilities \(\pi_i\) for the embedded chain are changed. The Markov process {\(X(t); t\geq 0\)} is not changed, since a transition from \(i\) into itself does not change \(X(t)\) and does not change the distribution of the time until the next transition to a different state. The steady-state probabilities for the process still satisfy

    \[ p_j\nu_j = \sum_k p_kq_{kj} \quad ; \quad \sum_i p_i =1 \nonumber \]

    The addition of the new term \(q_{jj}\) increases \(\nu_j\) by \(q_{jj}\), thus increasing the left hand side by \(p_j\, q_{jj}\). The right hand side is similarly increased by \(p_j\, q_{jj}\), so that the solution is unchanged (as we already determined it must be).
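This invariance is easy to check numerically. The sketch below is illustrative only: the 3-state rate matrix `Q` and the self-transition rates added to it are hypothetical choices, not from the text. It solves the balance equations before and after adding self-transitions and confirms the solution is unchanged:

```python
import numpy as np

# Hypothetical 3-state rate matrix with q_ii = 0 (illustrative numbers only).
Q = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 2.0, 0.0]])

def steady_state(Q):
    """Solve p_j nu_j = sum_k p_k q_kj together with sum_i p_i = 1."""
    nu = Q.sum(axis=1)                    # nu_i = sum_k q_ik (q_ii included if present)
    A = np.vstack([Q.T - np.diag(nu),     # balance equations (Q^T - diag(nu)) p = 0
                   np.ones(len(nu))])     # normalization row
    b = np.zeros(len(nu) + 1)
    b[-1] = 1.0
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

p0 = steady_state(Q)                          # original process
p1 = steady_state(Q + np.diag([0.5, 1.0, 2.0]))  # arbitrary q_jj added
assert np.allclose(p0, p1)                    # steady-state probabilities unchanged
```

The added term \(p_j q_{jj}\) appears on both sides of the balance equation, which is why the two solves agree.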

    A particularly convenient way to add self-transitions is to add them in such a way as to make the transition rate \(\nu_j\) the same for all states. Assuming that the transition rates {\(\nu_i;\, i\geq 0\)} are bounded, we define \(\nu^*\) as \(\sup_j \,\nu_j\) for the original transition rates. Then we set \(q_{jj} = \nu^* -\sum_{k\neq j} q_{jk}\) for each \(j\). With this addition of self-transitions, all transition rates become \(\nu^*\). From (6.2.15), we see that the new steady-state probabilities, \(\pi_i^*\), in the embedded Markov chain become equal to the steady-state process probabilities, \(p_i\). Naturally, we could also choose any \(\nu\) greater than \(\nu^*\) and increase each \(q_{jj}\) to make all transition rates equal to that value of \(\nu\). When the transition rates are changed in this way, the resulting embedded chain is called a uniformized chain and the Markov process is called the uniformized process. The uniformized process is the same as the original process, except that quantities like the number of transitions over some interval are different because of the self-transitions.
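The construction above, and the claim that \(\pi_i^* = p_i\) under uniform rates, can be sketched in a few lines of NumPy. The rate matrix `Q` here is a hypothetical example chosen for illustration:

```python
import numpy as np

# Hypothetical rate matrix with q_ii = 0 (illustrative numbers only).
Q = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 2.0, 0.0]])

nu = Q.sum(axis=1)        # nu_i = sum_{k != i} q_ik
nu_star = nu.max()        # nu* = sup_j nu_j

# Uniformized embedded chain: P*_ij = q_ij / nu* for i != j, and
# P*_ii = q_ii / nu* = 1 - nu_i / nu*, so every row sums to 1.
P_star = Q / nu_star + np.diag(1.0 - nu / nu_star)

# Stationary distribution pi* of the embedded chain, by power iteration.
pi = np.full(len(Q), 1.0 / len(Q))
for _ in range(1000):
    pi = pi @ P_star

# Steady-state process probabilities p from p_j nu_j = sum_k p_k q_kj.
A = np.vstack([Q.T - np.diag(nu), np.ones(len(Q))])
b = np.zeros(len(Q) + 1)
b[-1] = 1.0
p, *_ = np.linalg.lstsq(A, b, rcond=None)

# With all rates equal to nu*, the embedded chain's pi* coincides with p.
assert np.allclose(pi, p, atol=1e-8)
```

The self-loops added by uniformization also make the embedded chain aperiodic whenever some \(\nu_i < \nu^*\), which is what lets the power iteration converge here.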

    Assuming that all transition rates are made equal to \(\nu^*\), the new transition probabilities in the embedded chain become \(P_{ij}^* = q_{ij}/\nu^*\). Let \(N(t)\) be the total number of transitions that occur from 0 to \(t\) in the uniformized process. Since the rate of transitions is the same from all states and the inter-transition intervals are independent and identically exponentially distributed, \(N(t)\) is a Poisson counting process of rate \(\nu^*\). Also, \(N(t)\) is independent of the sequence of transitions in the embedded uniformized Markov chain. Thus, given that \(N(t) = n\), the probability that \(X(t) = j\) given that \(X(0) = i\) is just the probability that the embedded chain goes from \(i\) to \(j\) in \(n\) steps, \(i.e.\), \(P_{ij}^{*n}\). This gives us another formula for calculating \(P_{ij}(t)\) (\(i.e.\), the probability that \(X(t) = j\) given that \(X(0) = i\)):

    \[ P_{ij}(t) = \sum^{\infty}_{n=0} P^{*n}_{ij} \dfrac{e^{-\nu^*t}(\nu^*t)^n}{n!} \nonumber \]
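This series is the basis of a standard numerical method for transient probabilities: accumulate Poisson-weighted powers of \(P^*\) until the weights sum to nearly 1. The sketch below assumes a hypothetical rate matrix and a truncation tolerance of our own choosing; the Chapman–Kolmogorov identity \(P(s+t) = P(s)P(t)\) serves as a sanity check:

```python
import numpy as np

def transient_probs(Q, t, tol=1e-12):
    """P_ij(t) by uniformization: sum_n P*^n e^{-nu* t} (nu* t)^n / n!.

    Q is a rate matrix with q_ii = 0; the series is truncated once the
    accumulated Poisson weights exceed 1 - tol.
    """
    nu = Q.sum(axis=1)
    nu_star = nu.max()
    P_star = Q / nu_star + np.diag(1.0 - nu / nu_star)

    weight = np.exp(-nu_star * t)   # Poisson probability for n = 0
    power = np.eye(len(Q))          # P*^0
    total = weight * power
    acc, n = weight, 0
    while acc < 1.0 - tol:
        n += 1
        power = power @ P_star                 # P*^n
        weight *= nu_star * t / n              # e^{-nu* t}(nu* t)^n / n!
        total += weight * power
        acc += weight
    return total

# Illustrative check with a hypothetical rate matrix.
Q = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 2.0, 0.0]])
P1 = transient_probs(Q, 0.3)
assert np.allclose(P1.sum(axis=1), 1.0)              # rows are distributions
assert np.allclose(P1 @ P1, transient_probs(Q, 0.6))  # P(0.3) P(0.3) = P(0.6)
```

Because every term in the series is nonnegative, the truncation error is bounded by the neglected Poisson tail, which is what makes this scheme numerically well behaved compared with a raw Taylor expansion of the matrix exponential.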

    Another situation where the uniformized process is useful is in extending Markov decision theory to Markov processes, but we do not pursue this topic here.


    This page titled 6.4: Uniformization is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Robert Gallager (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.