
4.9: Summary


Sections 4.1 to 4.7 developed the central results about renewal processes that frequently appear in subsequent chapters. The chapter starts with the strong law for renewal processes, showing that the time-average rate of renewals, \(N(t) / t\), approaches \(1 / \overline{X}\) with probability 1 as \(t \rightarrow \infty\). This, combined with the strong law of large numbers, is the basis for most subsequent results about time-averages. Section 4.4 adds a reward function \(R(t)\) to the underlying renewal process. These reward functions are defined to depend only on the inter-renewal interval containing \(t\), and are used to study many surprising aspects of renewal processes such as residual life, age, and duration. For all sample paths of a renewal process (except a subset of probability 0), the time-average reward for a given \(R(t)\) is a constant, and that constant is the expected aggregate reward over an inter-renewal interval divided by the expected length of an inter-renewal interval.
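These two limits are easy to see numerically. The following sketch (not from the text; the choice of exponential inter-renewal intervals with \(\overline{X}=2\) is purely illustrative) simulates one long sample path, checks that \(N(t)/t\) is close to \(1/\overline{X}\), and computes the time-average residual life, which should be close to \(\mathrm{E}[X^2]/(2\overline{X})\), i.e., 2.0 for this distribution.

```python
import random

def renewal_epochs(rng, horizon):
    # Renewal epochs S_1, S_2, ... up to the horizon, with exponential
    # inter-renewal intervals of mean X-bar = 2 (an illustrative choice).
    epochs, t = [], 0.0
    while True:
        t += rng.expovariate(0.5)   # exponential with mean 2
        if t >= horizon:
            return epochs
        epochs.append(t)

rng = random.Random(0)
horizon = 200_000.0
epochs = renewal_epochs(rng, horizon)

# Strong law for renewal processes: N(t)/t -> 1/X-bar = 0.5.
rate = len(epochs) / horizon

# Renewal-reward with R(t) = residual life: the aggregate reward over an
# interval of length X is X^2/2, so the time average approaches
# E[X^2]/(2 X-bar) = 2.0 for this exponential choice.
intervals = [b - a for a, b in zip([0.0] + epochs[:-1], epochs)]
avg_residual = sum(x * x / 2 for x in intervals) / horizon
```

On a horizon this long, both estimates settle within a percent or two of their limits; shortening the horizon makes the sample-path fluctuations visible.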

The next topic, in Section 4.5, is that of stopping trials. These have obvious applications to situations where an experiment or game is played until some desired (or undesired) outcome (based on the results up to and including the given trial) occurs. This is a basic and important topic in its own right, but is also needed to understand both how the expected renewal rate \(\mathrm{E}[N(t)] / t\) varies with time \(t\) and how renewal theory can be applied to queueing situations. Finally, we found that stopping rules were helpful in understanding G/G/1 queues, especially Little’s theorem, and in deriving and understanding the Pollaczek-Khinchin expression for the expected delay in an M/G/1 queue.
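The workhorse result for stopping trials is Wald's equality, \(\mathrm{E}[S_J] = \overline{X}\,\mathrm{E}[J]\), for a stopping trial \(J\) on an IID sequence. The following sketch (my own illustration, not from the text) uses the stopping rule "stop at the first trial whose partial sum crosses a threshold" and checks the equality by simulation; the exponential distribution and threshold value are arbitrary choices.

```python
import random

def stopping_trial(rng, mean_x, threshold):
    # Stop at the first trial n for which S_n = X_1 + ... + X_n >= threshold.
    # The decision to stop at n depends only on X_1, ..., X_n, so J is a
    # legitimate stopping trial.
    s, n = 0.0, 0
    while s < threshold:
        s += rng.expovariate(1.0 / mean_x)
        n += 1
    return s, n

rng = random.Random(1)
mean_x, threshold, trials = 1.5, 10.0, 20_000
sums, counts = zip(*(stopping_trial(rng, mean_x, threshold) for _ in range(trials)))
lhs = sum(sums) / trials                  # estimate of E[S_J]
rhs = mean_x * (sum(counts) / trials)     # X-bar * estimate of E[J]
```

Wald's equality is exact, so the two estimates differ only by simulation noise; note that \(\mathrm{E}[S_J]\) exceeds the threshold because of the overshoot on the final trial.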

This is followed, in Section 4.6, by an analysis of how \(\mathrm{E}[N(t)] / t\) varies with \(t\). This starts by using Laplace transforms to get a complete solution of the ensemble-average, \(\mathrm{E}[N(t)] / t\), as a function of \(t\), when the distribution of the inter-renewal interval has a rational Laplace transform. For the general case (where the Laplace transform is irrational or non-existent), the elementary renewal theorem shows that \(\lim _{t \rightarrow \infty} \mathrm{E}[N(t)] / t=1 / \overline{X}\). It is not surprising that the time-average (WP1) and the limiting ensemble-average are the same, nor that the ensemble-average has a limit. These results are so fundamental to other results in probability, however, that they deserve to be understood.
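The elementary renewal theorem can be seen directly by Monte Carlo: estimate \(\mathrm{E}[N(t)]\) over many independent sample paths and watch \(\mathrm{E}[N(t)]/t\) approach \(1/\overline{X}\). The sketch below (not from the text) uses inter-renewal intervals uniform on \((0,4)\), a deliberately non-exponential choice so that the convergence is not trivial; for this distribution \(\overline{X}=2\), so the limit is \(0.5\).

```python
import random

def count_renewals(rng, t):
    # N(t): number of renewal epochs in (0, t], with inter-renewal
    # intervals uniform on (0, 4), so X-bar = 2.
    s, n = 0.0, 0
    while True:
        s += rng.uniform(0.0, 4.0)
        if s > t:
            return n
        n += 1

rng = random.Random(2)
runs = 4000
estimates = {}
for t in (5.0, 50.0, 500.0):
    total = sum(count_renewals(rng, t) for _ in range(runs))
    estimates[t] = total / (runs * t)   # Monte Carlo estimate of E[N(t)]/t
```

At \(t=5\) the estimate sits visibly below \(0.5\) (the process has "wasted" part of the first interval), while at \(t=500\) it agrees with \(1/\overline{X}\) to three decimal places.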

Another fundamental result in Section 4.6 is Blackwell’s renewal theorem, showing that the distribution of renewal epochs reaches a steady state as \(t \rightarrow \infty\). The form of that steady state depends on whether the inter-renewal distribution is arithmetic (see (4.59)) or nonarithmetic (see (4.58)).
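For the nonarithmetic case, Blackwell's theorem says that the expected number of renewals in an interval \((t, t+\delta]\) approaches \(\delta/\overline{X}\) as \(t \rightarrow \infty\). This is easy to check by simulation; the sketch below (my own illustration, with intervals uniform on \((0,4)\), hence nonarithmetic and \(\overline{X}=2\)) counts renewals in a unit window placed far from the origin.

```python
import random

def renewals_in_window(rng, t0, delta):
    # Count renewal epochs falling in (t0, t0 + delta], with
    # inter-renewal intervals uniform on (0, 4).
    s, n = 0.0, 0
    while s <= t0 + delta:
        s += rng.uniform(0.0, 4.0)
        if t0 < s <= t0 + delta:
            n += 1
    return n

rng = random.Random(3)
t0, delta, runs = 100.0, 1.0, 20_000
avg = sum(renewals_in_window(rng, t0, delta) for _ in range(runs)) / runs
# Blackwell: avg should be close to delta / X-bar = 1.0 / 2 = 0.5
```

By \(t_0 = 100\) the initial placement of the process has been forgotten, and the window sees renewals at the steady-state rate \(1/\overline{X}\) regardless of where the window is placed.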

Section 4.7 ties together the results on rewards in Section 4.4 with those on ensemble averages in Section 4.6. Under some very minor restrictions imposed by the key renewal theorem, we found that, for non-arithmetic inter-renewal distributions, \(\lim _{t \rightarrow \infty} \mathrm{E}[R(t)]\) is the same as the time-average value of reward.
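As a concrete check of this agreement (my own example, not from the text), take \(R(t)\) to be the residual life with inter-renewal intervals uniform on \((0,4)\). Then

\[ \overline{X} = 2, \qquad \mathrm{E}\left[X^{2}\right] = \int_{0}^{4} \frac{x^{2}}{4}\, dx = \frac{16}{3}, \qquad \lim _{t \rightarrow \infty} \mathrm{E}[R(t)] = \frac{\mathrm{E}\left[X^{2}\right]}{2 \overline{X}} = \frac{4}{3}, \]

which is the same value as the time-average residual life computed over a single sample path.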

Finally, all the results above were shown to apply to delayed renewal processes.

For further reading on renewal processes, see Feller [8], Ross [16], or Wolff [22]. Feller still appears to be the best source for deep understanding of renewal processes, but Ross and Wolff are somewhat more accessible.


    4.9: Summary is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
