
6.9: Summary


    We have seen that Markov processes with countable state spaces are remarkably similar to Markov chains with countable state spaces, and throughout the chapter, we frequently made use of both the embedded chain corresponding to the process and the sampled-time approximation to the process.
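
    To make these two constructions concrete, here is a minimal numerical sketch (the rate matrix is hypothetical, not an example from the chapter): the embedded chain uses transition probabilities \(P_{ij} = q_{ij}/\nu_i\), and the sampled-time approximation with increment \(\delta\) uses transition probabilities \(\delta q_{ij}\) with self-transition probability \(1-\delta\nu_i\).

```python
# Minimal sketch (example rates are hypothetical): embedded chain and sampled-time
# approximation derived from a transition-rate matrix Q for a 3-state process.
import numpy as np

# q_ij for i != j are the transition rates; diagonal entries are -nu_i so rows sum to 0.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])

nu = -np.diag(Q)                          # nu_i: total rate of leaving state i

# Embedded chain: P_ij = q_ij / nu_i for j != i, and P_ii = 0.
P_embedded = Q / nu[:, None]
np.fill_diagonal(P_embedded, 0.0)

# Sampled-time approximation with increment delta (needs delta <= 1 / max_i nu_i):
delta = 0.1
W = np.eye(3) + delta * Q                 # W_ij = delta*q_ij,  W_ii = 1 - delta*nu_i

print(P_embedded, W, sep="\n")
```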

    For irreducible processes, the steady-state equations, (6.23) and \(\sum_i p_i\nu_i = 1\), were found to specify the steady-state probabilities, \(p_i\), which have significance both as time-averages and as limiting probabilities. If the transition rates \(\nu_i\) are bounded, then the sampled-time approximation exists and has the same steady-state probabilities as the Markov process itself. If the transition rates \(\nu_i\) are unbounded but \(\sum_i p_i\nu_i < \infty\), then the embedded chain is positive recurrent and has steady-state probabilities, but the sampled-time approximation does not exist. We assumed throughout the remainder of the chapter that \(\sum_i p_i\nu_i < \infty\). This ruled out irregular processes in which there is no meaningful steady state, and also some peculiar processes such as the one in Exercise 6.9, where the embedded chain is null recurrent.
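
    As a numerical illustration (a sketch with a made-up rate matrix, not a computation from the text), one standard equivalent form of the steady-state conditions is \(pQ = 0\) together with \(\sum_i p_i = 1\); the embedded-chain steady-state probabilities then follow by weighting with the rates, \(\pi_j \propto p_j\nu_j\).

```python
# Sketch (hypothetical 3-state example): solving the steady-state equations numerically.
# p Q = 0 with sum(p) = 1 gives the time-average probabilities p_i of the process;
# pi_i proportional to p_i * nu_i gives the embedded-chain steady state.
import numpy as np

Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])
nu = -np.diag(Q)

# Stack Q^T with a row of ones to impose the normalization, then solve by least squares.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
p, *_ = np.linalg.lstsq(A, b, rcond=None)

pi = p * nu / np.sum(p * nu)     # embedded-chain steady-state probabilities
print(p, pi)
```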

    Section 6.3 developed the Kolmogorov backward and forward differential equations for the transient probabilities \(P_{ij}(t)\) of being in state \(j\) at time \(t\) given state \(i\) at time 0. We showed that for finite-state processes, these equations can be solved by finding the eigenvalues and eigenvectors of the transition rate matrix \(Q\). There are close analogies between this analysis and the algebraic treatment of finite-state Markov chains in Chapter 3, and Exercise 6.7 showed how the transients of the process are related to the transients of the sampled-time approximation.
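
    For a finite-state example, the eigenvalue approach amounts to diagonalizing \(Q\) and forming \(P(t) = U e^{\Lambda t} U^{-1}\). The sketch below (with an arbitrary rate matrix, and assuming \(Q\) is diagonalizable) is my own illustration; it also checks that each row of \(P(t)\) sums to 1.

```python
# Sketch (hypothetical Q, assumed diagonalizable): solving the forward equation
# dP(t)/dt = P(t) Q by eigendecomposition, giving P(t) = U exp(Lambda t) U^{-1}.
import numpy as np

Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])

t = 0.5
evals, U = np.linalg.eig(Q)                     # columns of U are right eigenvectors of Q
P_t = (U @ np.diag(np.exp(evals * t)) @ np.linalg.inv(U)).real

print(P_t)               # P_ij(t): probability of state j at time t given state i at time 0
print(P_t.sum(axis=1))   # each row sums to 1
```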

    For irreducible processes with bounded transition rates, uniformization was introduced as a way to simplify the structure of the process. The addition of self-transitions does not change the process itself, but can be used to make the transition rates \(\nu_i\) the same for all states. This changes the embedded Markov chain, and the steady-state probabilities for the embedded chain become the same as those for the process. The epochs at which transitions occur then form a Poisson process which is independent of the set of states entered. This yields a separation between the transition epochs and the sequence of states.
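
    A small sketch of the idea (rates again hypothetical): pick any \(\nu^* \ge \max_i \nu_i\), form the uniformized chain \(P^* = I + Q/\nu^*\), which has self-transitions, and recover the transient probabilities by averaging powers of \(P^*\) against a Poisson distribution with mean \(\nu^* t\).

```python
# Sketch (hypothetical rates): uniformization with a common transition rate nu_star.
# Transitions of the uniformized process occur at the epochs of a Poisson process of
# rate nu_star, independent of the state sequence, and P(t) = sum_n Poisson(n) * P_unif^n.
import numpy as np

Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])

nu_star = np.max(-np.diag(Q))            # any nu_star >= max_i nu_i will do
P_unif = np.eye(3) + Q / nu_star         # uniformized chain; diagonal entries are self-transitions

t = 0.5
P_t = np.zeros((3, 3))
term = np.eye(3)                         # P_unif ** n, starting at n = 0
weight = np.exp(-nu_star * t)            # Poisson pmf at n = 0
for n in range(100):                     # truncate the series; 100 terms is ample here
    P_t += weight * term
    term = term @ P_unif
    weight *= nu_star * t / (n + 1)

print(P_t)                               # matches exp(Q t) up to truncation error
```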

    The next two sections analyzed birth-death processes and reversibility. The results about birth-death Markov chains and reversibility for Markov chains carried over almost without change to Markov processes. These results are central in queueing theory, and Burke’s theorem allowed us to look at simple queueing networks with no feedback and to understand how feedback complicates the problem.
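
    The key birth-death relation, restated here as a reminder rather than quoted from the chapter, is the detailed-balance condition between adjacent states; for the M/M/1 queue (arrival rate \(\lambda\), service rate \(\mu\), \(\rho = \lambda/\mu < 1\)) it yields the geometric steady state used in the next paragraph:

\[ p_i\,\lambda_i = p_{i+1}\,\mu_{i+1} \quad\Longrightarrow\quad p_n = (1-\rho)\,\rho^n \ \text{ for the M/M/1 queue.} \]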

    Jackson networks were next discussed. These are important in their own right and also provide a good example of how one can solve complex queueing problems by studying the reverse-time process and making educated guesses about the steady-state behavior. The somewhat startling result here is that, in steady state and at a fixed time, the number of customers at each node is independent of the number at every other node and has the same distribution as for an M/M/1 queue. Also, the exogenous departures from the network are Poisson and independent from node to node. We emphasized that the number of customers at one node at one time is often dependent on the number at other nodes at other times; the independence holds only when all nodes are viewed at the same time.
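
    Here is a sketch of how such a network can be evaluated numerically (the arrival rates, routing matrix, and service rates below are made up for illustration): solve the traffic equations \(\lambda_j = r_j + \sum_i \lambda_i R_{ij}\) for the total arrival rate at each node; the steady-state joint distribution then factors into M/M/1-type marginals \((1-\rho_j)\rho_j^{n_j}\) with \(\rho_j = \lambda_j/\mu_j\).

```python
# Sketch (all numbers hypothetical): solving the traffic equations of an open Jackson
# network, lambda = r + R^T lambda, and forming the product-form marginals (1 - rho_j) rho_j^n.
import numpy as np

r = np.array([1.0, 0.5, 0.0])          # exogenous arrival rates at the three nodes
R = np.array([[0.0, 0.6, 0.2],         # R[i, j]: routing probability node i -> node j
              [0.0, 0.0, 0.5],         # leftover probability is departure from the network
              [0.3, 0.0, 0.0]])
mu = np.array([4.0, 3.0, 2.0])         # service rates at each node

lam = np.linalg.solve(np.eye(3) - R.T, r)   # total arrival rate at each node
rho = lam / mu                              # utilizations; all must be < 1 for stability

# Steady-state probability that node j holds n_j customers, independently across nodes:
def joint_prob(n):
    return np.prod((1 - rho) * rho ** np.array(n))

print(lam, rho, joint_prob([2, 1, 0]))
```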

    Finally, semi-Markov processes were introduced. Renewal theory again provided the key to analyzing these systems. Theorem 6.8.1 showed how to find the steady-state probabilities of these processes, and it was shown that these probabilities could be interpreted both as time-averages and, in the case of non-arithmetic transition times, as limiting probabilities in time.
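
    In symbols (my own restatement of the time-average result, writing \(\pi_j\) for the embedded-chain steady-state probabilities and \(\overline{U}(j)\) for the expected holding time in state \(j\)):

\[ p_j = \frac{\pi_j\,\overline{U}(j)}{\sum_k \pi_k\,\overline{U}(k)}. \]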

    For further reading on Markov processes, see [13], [16], [22], and [8].

