
5.8: Exercises


    Exercise 5.1

    Let \(\left\{P_{i j} ; i, j \geq 0\right\}\) be the set of transition probabilities for a countable-state Markov chain. For each \(i\), \(j\), let \(\mathrm{F}_{i j}(n)\) be the probability that state \(j\) occurs sometime between time 1 and \(n\) inclusive, given \(X_{0}=i\). For some given \(j\), assume that \(\left\{x_{k} ; k \geq 0\right\}\) is a set of nonnegative numbers satisfying \(x_{i}=P_{i j}+\sum_{k \neq j} P_{i k} x_{k}\). Show that \(x_{i} \geq \mathrm{F}_{i j}(n)\) for all \(n\) and \(i\), and hence that \(x_{i} \geq \mathrm{F}_{i j}(\infty)\) for all \(i\). Hint: use induction.
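
    The induction in the hint can also be checked numerically: iterate \(\mathrm{F}_{i j}(n)=P_{i j}+\sum_{k \neq j} P_{i k} \mathrm{~F}_{k j}(n-1)\) and observe that the iterates increase toward a limit dominated by any nonnegative solution \(x\). The sketch below uses a small hypothetical 3-state chain (the matrix is illustrative, not from the text), for which \(x_{i}=1\) is such a solution.

    ```python
    import numpy as np

    # A small hypothetical 3-state chain (illustrative only), target state j = 0.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.5, 0.3],
                  [0.3, 0.3, 0.4]])
    j = 0

    # Iterate F_ij(n) = P_ij + sum_{k != j} P_ik F_kj(n-1), with F_ij(0) = 0.
    F = np.zeros(3)
    for n in range(100):
        F_prev = F.copy()
        F = P[:, j] + sum(P[:, k] * F_prev[k] for k in range(3) if k != j)
        # first-passage probabilities are nondecreasing in n
        assert np.all(F >= F_prev - 1e-12)

    # Here x_i = 1 satisfies x_i = P_ij + sum_{k != j} P_ik x_k, and indeed F <= 1.
    ```

    For this irreducible finite chain the iterates converge to \(\mathrm{F}_{i j}(\infty)=1\), consistent with \(x_{i} \geq \mathrm{F}_{i j}(\infty)\).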

    Exercise 5.2

    1. For the Markov chain in Figure 5.2, show that, for \(p \geq 1 / 2\), \(\mathrm{F}_{00}(\infty)=2(1-p)\) and show that \(\mathrm{F}_{i 0}(\infty)=[(1-p) / p]^{i}\) for \(i \geq 1\). Hint: first show that this solution satisfies \ref{5.9} and then show that \ref{5.9} has no smaller solution (see Exercise 5.1). Note that you have shown that the chain is transient for \(p>1 / 2\) and that it is recurrent for \(p=1 / 2\).
    2. Under the same conditions as part a), show that \(\mathrm{F}_{i j}(\infty)\) equals \(2(1-p)\) for \(j=i\), equals \([(1-p) / p]^{i-j}\) for \(i>j\), and equals 1 for \(i<j\).
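
    The value \(\mathrm{F}_{00}(\infty)=2(1-p)\) in part 1 can be sanity-checked by simulation (not a substitute for the proof). The sketch below assumes Figure 5.2 is the birth-death chain with up-probability \(p\), down-probability \(1-p\), and a self-loop of probability \(1-p\) at state 0; the choice \(p=0.7\) and the escape threshold are hypothetical.

    ```python
    import random

    random.seed(1)
    p = 0.7                      # up-step probability; p > 1/2, so the chain is transient
    trials = 20000
    returns = 0
    for _ in range(trials):
        # first step from state 0: the self-loop (prob 1-p) is an immediate return
        if random.random() < 1 - p:
            returns += 1
            continue
        s = 1
        while 0 < s < 500:       # reaching 500 is treated as escape to infinity
            s += 1 if random.random() < p else -1
        if s == 0:
            returns += 1

    est = returns / trials       # estimate of F_00(infinity); theory predicts 2(1-p) = 0.6
    ```

    The probability that a trajectory destined to return first climbs past 500 is of order \([(1-p)/p]^{500}\), so the truncation bias is negligible next to the Monte Carlo error.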

    Exercise 5.3

    1. Show that the \(n\)th order transition probabilities, starting in state 0, for the Markov chain in Figure 5.2 satisfy

      \(P_{0 j}^{n}=p P_{0, j-1}^{n-1}+q P_{0, j+1}^{n-1} \quad j \neq 0 ; \quad P_{00}^{n}=q P_{00}^{n-1}+q P_{01}^{n-1}\)

      Hint: Use the Chapman-Kolmogorov equality, (3.8).

    2. For \(p=1 / 2\), use this equation to calculate \(P_{0 j}^{n}\) iteratively for \(n=1,2,3,4\). Verify \ref{5.3} for these values and then use induction to verify \ref{5.3} in general. Note: this becomes an absolute mess for \(p \neq 1 / 2\), so don’t attempt this in general.
    3. As a more interesting approach, which brings out the relationship of Figures 5.2 and 5.1, note that \ref{5.3}, with \(j+n\) even, is the probability that \(S_{n}=j\) for the chain in Figure 5.1 and \ref{5.3} with \(j+n\) odd is the probability that \(S_{n}=-j-1\) for the chain in Figure 5.1. By viewing each transition over the self-loop at state 0 as a sign reversal for the chain in Figure 5.1, explain why this surprising result is true. (Again, this doesn’t work for \(p \neq 1 / 2\), since the sign reversals also reverse the +1, -1 transitions.)
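
    The iterative calculation in part 2 is mechanical and easy to cross-check against a brute-force Chapman-Kolmogorov computation. The sketch below (assuming the same chain of Figure 5.2, truncated at a state far beyond reach in 20 steps) runs the recursion and compares it with a direct matrix power.

    ```python
    import numpy as np

    p = 0.5
    q = 1 - p
    N = 200                                  # truncation level, unreachable in 20 steps from 0
    # transition matrix of the chain in Figure 5.2, truncated at state N
    P = np.zeros((N + 1, N + 1))
    P[0, 0], P[0, 1] = q, p
    for i in range(1, N):
        P[i, i - 1], P[i, i + 1] = q, p
    P[N, N - 1], P[N, N] = q, p              # boundary detail, irrelevant here

    # iterate P^n_{0j} = p P^{n-1}_{0,j-1} + q P^{n-1}_{0,j+1} (j != 0),
    #         P^n_{00} = q P^{n-1}_{00} + q P^{n-1}_{01}
    row = np.zeros(N + 1)
    row[0] = 1.0
    for n in range(20):
        new = np.zeros(N + 1)
        new[0] = q * row[0] + q * row[1]
        for j in range(1, N):
            new[j] = p * row[j - 1] + q * row[j + 1]
        row = new

    direct = np.linalg.matrix_power(P, 20)[0]   # Chapman-Kolmogorov by brute force
    ```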

    Exercise 5.4

    Let \(j\) be a transient state in a Markov chain and let \(j\) be accessible from \(i\). Show that \(i\) is transient also. Interpret this as a form of Murphy’s law (if something bad can happen, it will, where the bad thing is the lack of an eventual return). Note: give a direct demonstration rather than using Lemma 5.1.3.

    Exercise 5.5

    Consider an irreducible positive-recurrent Markov chain. Consider the renewal process \(\left\{N_{j j}(t) ; t \geq 0\right\}\) where, given \(X_{0}=j, N_{j j}(t)\) is the number of times that state \(j\) is visited from time 1 to \(t\). For each \(i \geq 0\), consider a renewal-reward function \(R_{i}(t)\) equal to 1 whenever the chain is in state \(i\) and equal to 0 otherwise. Let \(\pi_{i}\) be the time-average reward.

    1. Show that \(\pi_{i}=1 / \overline{T}_{i i}\) for each \(i\) with probability 1.
    2. Show that \(\sum_{i} \pi_{i}=1\). Hint: consider \(\sum_{i \leq M} \pi_{i}\) for any integer \(M\).
    3. Consider a renewal-reward function \(R_{i j}(t)\) that is 1 whenever the chain is in state \(i\) and the next state is state \(j\), and 0 otherwise. Show that the time-average reward is equal to \(\pi_{i} P_{i j}\) with probability 1. Show that \(\pi_{k}=\sum_{i} \pi_{i} P_{i k}\) for all \(k\).

    Exercise 5.6

    Let \(\left\{X_{n} ; n \geq 0\right\}\) be a branching process with \(X_{0}=1\). Let \(\overline{Y}, \sigma^{2}\) be the mean and variance of the number of offspring of an individual.

    1. Argue that \(\lim _{n \rightarrow \infty} X_{n}\) exists with probability 1 and either has the value 0 (with probability \(\mathrm{F}_{10}(\infty)\)) or the value \(\infty\) (with probability \(1-\mathrm{F}_{10}(\infty)\)).
    2. Show that \(\operatorname{VAR}\left[X_{n}\right]=\sigma^{2} \overline{Y}^{n-1}\left(\overline{Y}^{n}-1\right) /(\overline{Y}-1)\) for \(\overline{Y} \neq 1\) and \(\operatorname{VAR}\left[X_{n}\right]=n \sigma^{2}\) for \(\overline{Y}=1\).
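
    A useful deterministic check on part 2: conditioning on \(X_{n-1}\) gives the standard recursion \(\operatorname{VAR}\left[X_{n}\right]=\sigma^{2} \overline{Y}^{n-1}+\overline{Y}^{2} \operatorname{VAR}\left[X_{n-1}\right]\) with \(\operatorname{VAR}\left[X_{0}\right]=0\), and the claimed closed forms should reproduce it term by term. The sketch below verifies this numerically for hypothetical values of \(\overline{Y}\) and \(\sigma^{2}\).

    ```python
    def var_Xn(n, Ybar, sigma2):
        # closed forms from the exercise (Ybar != 1 and Ybar == 1 cases)
        if Ybar == 1:
            return n * sigma2
        return sigma2 * Ybar ** (n - 1) * (Ybar ** n - 1) / (Ybar - 1)

    # check the recursion VAR[X_n] = sigma^2 Ybar^{n-1} + Ybar^2 VAR[X_{n-1}]
    for Ybar in (0.5, 1.0, 2.0):       # hypothetical offspring means
        sigma2 = 0.7                   # hypothetical offspring variance
        v = 0.0                        # VAR[X_0] = 0 since X_0 = 1
        for n in range(1, 15):
            v = sigma2 * Ybar ** (n - 1) + Ybar ** 2 * v
            assert abs(v - var_Xn(n, Ybar, sigma2)) < 1e-9 * max(1.0, v)
    ```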

    Exercise 5.7

    There are \(n\) states and for each pair of states \(i\) and \(j\), a positive number \(d_{i j}=d_{j i}\) is given. A particle moves from state to state in the following manner: Given that the particle is in any state \(i\), it will next move to any \(j \neq i\) with probability \(P_{i j}\) given by

    \(P_{i j}=\frac{d_{i j}}{\sum_{k \neq i} d_{i k}}\)

    Assume that \(P_{i i}=0\) for all \(i\). Show that the sequence of positions is a reversible Markov chain and find the limiting probabilities.
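
    Reversibility here can be observed numerically before proving it: build \(P\) from a random symmetric weight matrix, find the limiting probabilities by power iteration (no closed form assumed), and check detailed balance \(\pi_{i} P_{i j}=\pi_{j} P_{j i}\). The weights below are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5
    D = rng.uniform(0.1, 1.0, size=(n, n))
    D = (D + D.T) / 2                  # symmetric positive weights d_ij = d_ji
    np.fill_diagonal(D, 0.0)           # enforce P_ii = 0

    P = D / D.sum(axis=1, keepdims=True)   # P_ij = d_ij / sum_{k != i} d_ik

    # limiting probabilities by power iteration
    pi = np.full(n, 1.0 / n)
    for _ in range(2000):
        pi = pi @ P
    pi /= pi.sum()

    # detailed balance pi_i P_ij = pi_j P_ji holds, i.e., the chain is reversible
    flows = pi[:, None] * P
    ```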

    Exercise 5.8

    Consider a reversible Markov chain with transition probabilities \(P_{i j}\) and limiting probabilities \(\pi_i\). Also consider the same chain truncated to the states 0, 1, . . . , \(M\). That is, the transition probabilities \(\left\{P_{i j}^{\prime}\right\}\) of the truncated chain are

    \(P_{i j}^{\prime}=\left\{\begin{array}{lll}
    \frac{P_{i j}}{\sum_{k=0}^{M} P_{i k}} & ; & 0 \leq i, j \leq M \\
    0 & ; & \text{elsewhere}
    \end{array}\right.\)

    Show that the truncated chain is also reversible and has limiting probabilities given by

    \(\overline{\pi}_{i}=\frac{\pi_{i} \sum_{j=0}^{M} P_{i j}}{\sum_{k=0}^{M} \pi_{k} \sum_{m=0}^{M} P_{k m}}\)
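
    The claimed \(\overline{\pi}_{i}\) can be tested numerically on any convenient reversible chain. The sketch below uses a hypothetical birth-death chain (birth probability 0.3, death probability 0.4), truncates it at \(M=5\), and checks that the formula is stationary for the truncated chain and satisfies detailed balance.

    ```python
    import numpy as np

    # reversible birth-death chain on {0, ..., 9} (hypothetical numbers)
    K, M = 10, 5
    lam, mu = 0.3, 0.4
    P = np.zeros((K, K))
    for i in range(K):
        if i + 1 < K:
            P[i, i + 1] = lam
        if i - 1 >= 0:
            P[i, i - 1] = mu
        P[i, i] = 1 - P[i].sum()       # self-loop takes up the remainder

    # stationary distribution of a birth-death chain: pi_i proportional to (lam/mu)^i
    pi = (lam / mu) ** np.arange(K)
    pi /= pi.sum()
    assert np.allclose(pi @ P, pi)

    # truncated chain on {0, ..., M}
    Pt = P[:M + 1, :M + 1] / P[:M + 1, :M + 1].sum(axis=1, keepdims=True)

    # the formula from the exercise
    w = pi[:M + 1] * P[:M + 1, :M + 1].sum(axis=1)
    pibar = w / w.sum()
    ```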

    Exercise 5.9

    A Markov chain (with states \(\{0,1,2, \ldots, J-1\}\) where \(J\) is either finite or infinite) has transition probabilities \(\left\{P_{i j} ; i, j \geq 0\right\}\). Assume that \(P_{0 j}>0\) for all \(j>0\) and \(P_{j 0}>0\) for all \(j>0\). Also assume that for all \(i, j, k\), we have \(P_{i j} P_{j k} P_{k i}=P_{i k} P_{k j} P_{j i}\).

    1. Assuming also that all states are positive recurrent, show that the chain is reversible and find the steady-state probabilities \(\left\{\pi_{i}\right\}\) in simplest form.
    2. Find a condition on \(\left\{P_{0 j} ; j \geq 0\right\}\) and \(\left\{P_{j 0} ; j \geq 0\right\}\) that is sufficient to ensure that all states are positive recurrent.

    Exercise 5.10

    1. Use the birth and death model described in Figure 5.4 to find the steady-state probability mass function for the number of customers in the system (queue plus service facility) for the following queues:
      1. M/M/1 with arrival probability \(\lambda \delta\), service completion probability \(\mu \delta\).
      2. M/M/m with arrival probability \(\lambda \delta\), service completion probability \(i \mu \delta\) for \(i\) servers busy, \(1 \leq i \leq m\).
      3. M/M/\(\infty\) with arrival probability \(\lambda \delta\), service completion probability \(i \mu \delta\) for \(i\) servers busy. Assume that \(\delta\) is small enough that \(i \mu \delta<1\) for all \(i\) of interest.

    2. For each of the queues above give necessary conditions (if any) for the states in the chain to be i) transient, ii) null recurrent, iii) positive recurrent.
    3. Assuming that the system is positive recurrent, find for each of the queues:

    \(L=\) (steady-state) mean number of customers in the system.

    \(L^{q}=\) (steady-state) mean number of customers in the queue.

    \(W=\) (steady-state) mean waiting time in the system.

    \(W^{q}=\) (steady-state) mean waiting time in the queue.
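
    As a numerical cross-check for the M/M/1 case (using the standard results \(\pi_{i}=(1-\rho) \rho^{i}\) with \(\rho=\lambda / \mu\), \(L=\rho /(1-\rho)\), and Little's law \(W=L / \lambda\), stated here only for verification, not as the requested derivation), one can confirm that the geometric distribution is stationary for the sampled-time chain and recompute \(L\) directly. The parameter values are hypothetical.

    ```python
    import numpy as np

    # sampled-time M/M/1: up prob lam*delta, down prob mu*delta (hypothetical values)
    lam, mu, delta = 1.0, 2.0, 0.01
    a, b = lam * delta, mu * delta
    K = 400                                  # truncation level; tail mass is negligible
    P = np.zeros((K, K))
    for i in range(K):
        if i + 1 < K:
            P[i, i + 1] = a
        if i - 1 >= 0:
            P[i, i - 1] = b
        P[i, i] = 1 - P[i].sum()

    rho = lam / mu
    pi = (1 - rho) * rho ** np.arange(K)     # geometric, as for the continuous-time M/M/1
    assert np.allclose(pi @ P, pi, atol=1e-12)

    L = (pi * np.arange(K)).sum()            # mean number in system
    W = L / lam                              # Little's law: W = L / lambda
    ```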

    Exercise 5.11

    1. Given that an arrival occurs in the interval \((n \delta,(n+1) \delta)\) for the sampled-time M/M/1 model in Figure 5.5, find the conditional PMF of the state of the system at time \(n \delta\) (assume \(n\) arbitrarily large and assume positive recurrence).
    2. For the same model, again in steady state but not conditioned on an arrival in \((n \delta,(n+1)\delta)\), find the probability \(Q(i, j)(i \geq j>0)\) that the system is in state \(i\) at \(n \delta\) and that \(i-j\) departures occur before the next arrival.
    3. Find the expected number of customers seen in the system by the first arrival after time \(n \delta\). (Note: the purpose of this exercise is to make you cautious about the meaning of “the state seen by a random arrival”).

    Exercise 5.12

    Find the backward transition probabilities for the Markov chain model of age in figure 2. Draw the graph for the backward Markov chain, and interpret it as a model for residual life.

    Exercise 5.13

    Consider the sampled-time approximation to the M/M/1 queue in Figure 5.5.

    1. Give the steady-state probabilities for this chain (no explanations or calculations required; just the answer).

    In parts b) to g) do not use reversibility and do not use Burke’s theorem. Let \(X_{n}\) be the state of the system at time \(n \delta\) and let \(D_{n}\) be a random variable taking on the value 1 if a departure occurs between \(n \delta\) and \((n+1) \delta\), and the value 0 if no departure occurs. Assume that the system is in steady state at time \(n \delta\).

    2. Find \(\operatorname{Pr}\left\{X_{n}=i, D_{n}=j\right\}\) for \(i \geq 0, j=0,1\)
    3. Find \(\operatorname{Pr}\left\{D_{n}=1\right\}\)
    4. Find \(\operatorname{Pr}\left\{X_{n}=i \mid D_{n}=1\right\}\) for \(i \geq 0\)
    5. Find \(\operatorname{Pr}\left\{X_{n+1}=i \mid D_{n}=1\right\}\) and show that \(X_{n+1}\) is statistically independent of \(D_{n}\). Hint: Use part d); also show that \(\operatorname{Pr}\left\{X_{n+1}=i\right\}=\operatorname{Pr}\left\{X_{n+1}=i \mid D_{n}=1\right\}\) for all \(i \geq 0\) is sufficient to show independence.
    6. Find \(\operatorname{Pr}\left\{X_{n+1}=i, D_{n+1}=j \mid D_{n}\right\}\) and show that the pair of variables \(\left(X_{n+1}, D_{n+1}\right)\) is statistically independent of \(D_{n}\).
    7. For each \(k>1\), find \(\operatorname{Pr}\left\{X_{n+k}=i, D_{n+k}=j \mid D_{n+k-1}, D_{n+k-2}, \ldots, D_{n}\right\}\) and show that the pair \(\left(X_{n+k}, D_{n+k}\right)\) is statistically independent of \(\left(D_{n+k-1}, D_{n+k-2}, \ldots, D_{n}\right)\). Hint: use induction on \(k\); as a substep, find \(\operatorname{Pr}\left\{X_{n+k}=i \mid D_{n+k-1}=1, D_{n+k-2}, \ldots, D_{n}\right\}\) and show that \(X_{n+k}\) is independent of \(D_{n+k-1}, D_{n+k-2}, \ldots, D_{n}\).
    8. What do your results mean relative to Burke’s theorem?
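
    The independence of \(X_{n+1}\) and \(D_{n}\) can be observed empirically by brute-force simulation (no reversibility used), comparing the conditional and unconditional frequencies of a particular state. The sketch below uses hypothetical values \(\lambda \delta=0.05\), \(\mu \delta=0.1\) (so \(\rho=1/2\)) and compares \(\operatorname{Pr}\left\{X_{n+1}=0 \mid D_{n}=1\right\}\) with \(\operatorname{Pr}\left\{X_{n+1}=0\right\}\).

    ```python
    import random

    # sampled-time M/M/1 with hypothetical lambda*delta = 0.05, mu*delta = 0.1
    random.seed(3)
    a, b = 0.05, 0.1
    s = 0
    burn, steps = 10_000, 1_000_000
    dep_events = 0        # slots with D_n = 1
    dep_then_zero = 0     # slots with D_n = 1 and X_{n+1} = 0
    zero_states = 0       # slots with X_{n+1} = 0, unconditioned
    for t in range(burn + steps):
        r = random.random()
        if s == 0:
            s_next, dep = (1, 0) if r < a else (0, 0)
        elif r < a:
            s_next, dep = s + 1, 0
        elif r < a + b:
            s_next, dep = s - 1, 1
        else:
            s_next, dep = s, 0
        if t >= burn:
            dep_events += dep
            dep_then_zero += dep * (s_next == 0)
            zero_states += (s_next == 0)
        s = s_next

    p_cond = dep_then_zero / dep_events   # estimates Pr{X_{n+1} = 0 | D_n = 1}
    p_marg = zero_states / steps          # estimates Pr{X_{n+1} = 0} = 1 - rho
    ```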

    Exercise 5.14

    Let \(\left\{X_{n}, n \geq 1\right\}\) denote an irreducible recurrent Markov chain having a countable state space. Now consider a new stochastic process \(\left\{Y_{n}, n \geq 0\right\}\) that accepts only those values of the Markov chain that are between 0 and some integer \(m\). For instance, if \(m=3\) and \(X_{1}=1, X_{2}=3, X_{3}=5, X_{4}=6, X_{5}=2\), then \(Y_{1}=1, Y_{2}=3, Y_{3}=2\).

    1. Is \(\left\{Y_{n}, n \geq 0\right\}\) a Markov chain? Explain briefly.
    2. Let \(p_{j}\) denote the proportion of time that \(\left\{X_{n}, n \geq 1\right\}\) is in state \(j\). If \(p_{j}>0\) for all \(j\), what proportion of time is \(\left\{Y_{n}, n \geq 0\right\}\) in each of the states 0, 1, . . . , \(m\)?
    3. Suppose \(\left\{X_{n}\right\}\) is null-recurrent and let \(p_{i}(m), i=0,1, \ldots, m\), denote the long-run proportions for \(\left\{Y_{n}, n \geq 0\right\}\). Show that \(p_{j}(m)=p_{i}(m) E[\text{time the } X \text{ process spends in state } j \text{ between returns to state } i]\) for \(j \neq i\).

    Exercise 5.15

    Verify that \ref{5.53} is satisfied by the hypothesized solution to \(p\) in (5.57). Also show that the equations involving the idle state \(\phi\) are satisfied.

    Exercise 5.16

    Replace the state \(\mathbf{m}=\left(m, z_{1}, \ldots, z_{m}\right)\) in Section 5.6 with an expanded state \(\mathbf{m}=\left(m, z_{1}, w_{1}, z_{2}, w_{2}, \ldots, z_{m}, w_{m}\right)\) where \(m\) and \(\left\{z_{i} ; 1 \leq i \leq m\right\}\) are as before and \(w_{1}, w_{2}, \ldots, w_{m}\) are the original service requirements of the \(m\) customers.

    1. Hypothesizing the same backward round-robin system as in Section 5.6, find the backward transition probabilities and give the equations corresponding to (5.51)-(5.54) for the expanded state description.
    2. Solve the resulting equations to show that

      \(\pi_{\mathbf{m}}=\pi_{\phi}\left(\frac{\lambda \delta}{1-\lambda \delta}\right)^{m} \prod_{j=1}^{m} f\left(w_{j}\right)\)

    3. Show that the probability that there are \(m\) customers in the system, and that those customers have original service requirements given by \(w_{1}, \ldots, w_{m}\), is

      \(\operatorname{Pr}\left\{m, w_{1}, \ldots, w_{m}\right\}=\pi_{\phi}\left(\frac{\lambda \delta}{1-\lambda \delta}\right)^{m} \prod_{j=1}^{m}\left(w_{j}-1\right) f\left(w_{j}\right)\)

    4. Given that a customer has original service requirement \(w\), find the expected time that customer spends in the system.

    5.8: Exercises is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
