
4.6: Expected Number of Renewals


    The purpose of this section is to evaluate \(\mathrm{E}[N(t)]\), denoted \(m(t)\), as a function of \(t>0\) for arbitrary renewal processes. We first find an exact expression, in the form of an integral equation, for \(m(t)\). This can be easily solved by Laplace transform methods in special cases. For the general case, however, \(m(t)\) becomes increasingly messy for large \(t\), so we then find the asymptotic behavior of \(m(t)\). Since \(N(t) / t\) approaches \(1 / \overline{X}\) with probability 1, we might expect \(m(t)\) to grow with a derivative \(m^{\prime}(t)\) that asymptotically approaches \(1 / \overline{X}\). This is not true in general. Two somewhat weaker results, however, are true. The first, called the elementary renewal theorem (Theorem 4.6.1), states that \(\lim _{t \rightarrow \infty} m(t) / t=1 / \overline{X}\). The second result, called Blackwell’s theorem (Theorem 4.6.2), states that, subject to some limitations on \(\delta>0\), \(\lim _{t \rightarrow \infty}[m(t+\delta)-m(t)]=\delta / \overline{X}\). This says essentially that the expected renewal rate approaches steady state as \(t \rightarrow \infty\). We will find a number of applications of Blackwell’s theorem throughout the remainder of the text.

The exact calculation of \(m(t)\) makes use of the fact that the expectation of a nonnegative integer-valued random variable is the sum of its complementary distribution function,

    \(m(t)=\mathrm{E}[N(t)]=\sum_{n=1}^{\infty} \operatorname{Pr}\{N(t) \geq n\}.\)

    Since the event \(\{N(t) \geq n\}\) is the same as \(\left\{S_{n} \leq t\right\}\), \(m(t)\) is expressed in terms of the distribution functions of \(S_{n}\), \(n \geq 1\), as follows.

    \[m(t)=\sum_{n=1}^{\infty} \operatorname{Pr}\left\{S_{n} \leq t\right\}.\label{4.51} \]

    Although this expression looks fairly simple, it becomes increasingly complex with increasing \(t\). As \(t\) increases, there is an increasing set of values of \(n\) for which \(\operatorname{Pr}\left\{S_{n} \leq t\right\}\) is significant, and \(\operatorname{Pr}\left\{S_{n} \leq t\right\}\) itself is not that easy to calculate if the interarrival distribution \(\mathrm{F}_{X}(x)\) is complicated. The main utility of \ref{4.51} comes from the fact that it leads to an integral equation for \(m(t)\). Since \(S_{n}=S_{n-1}+X_{n}\) for each \(n \geq 1\) (interpreting \(S_{0}\) as 0), and since \(X_{n}\) and \(S_{n-1}\) are independent, we can use the convolution equation \ref{1.11} to get

    \(\operatorname{Pr}\left\{S_{n} \leq t\right\}=\int_{x=0}^{t} \operatorname{Pr}\left\{S_{n-1} \leq t-x\right\} d \mathrm{~F}_{X}(x) \quad \text { for } n \geq 2.\)

    Substituting this in \ref{4.51} for \(n \geq 2\) and using the fact that \(\operatorname{Pr}\left\{S_{1} \leq t\right\}=\mathrm{F}_{X}(t)\), we can interchange the order of integration and summation to get

    \[\begin{aligned}
    m(t) &=\mathrm{F}_{X}(t)+\int_{x=0}^{t} \sum_{n=2}^{\infty} \operatorname{Pr}\left\{S_{n-1} \leq t-x\right\} d \mathrm{~F}_{X}(x) \\
    &=\mathrm{F}_{X}(t)+\int_{x=0}^{t} \sum_{n=1}^{\infty} \operatorname{Pr}\left\{S_{n} \leq t-x\right\} d \mathrm{~F}_{X}(x) \\
    &=\mathrm{F}_{X}(t)+\int_{x=0}^{t} m(t-x) d \mathrm{~F}_{X}(x) ; \quad t \geq 0
    \end{aligned}\label{4.52} \]

    An alternative derivation is given in Exercise 4.22. This integral equation is called the renewal equation. The following alternative form is achieved by integration by parts.17

    \[m(t)=\mathrm{F}_{X}(t)+\int_{\tau=0}^{t} F_{X}(t-\tau) d m(\tau) ; \quad t \geq 0\label{4.53} \]
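The renewal equation \ref{4.52} also lends itself to direct numerical solution. When \(X\) has a density, discretizing the convolution integral gives an explicit forward recursion, since \(m(t-x)\) for \(x>0\) involves only earlier grid points. A minimal sketch (the helper name and step size are illustrative), checked against the exponential case where \(m(t)=t\) exactly:

```python
import math

def solve_renewal(F, f, T, h=1e-3):
    """Solve m(t) = F(t) + int_0^t m(t-x) f(x) dx by a right-endpoint
    Riemann sum on a grid of step h; returns the approximation to m(T).
    Using right endpoints (x = h, 2h, ...) keeps the recursion explicit:
    m[i] depends only on m[0..i-1]."""
    n = int(round(T / h))
    m = [0.0] * (n + 1)            # m[0] = F(0) = 0 when X has a density
    for i in range(1, n + 1):
        conv = sum(m[i - j] * f(j * h) for j in range(1, i + 1))
        m[i] = F(i * h) + h * conv
    return m[n]

# Exponential inter-renewals with rate 1: the exact answer is m(t) = t.
approx = solve_renewal(lambda t: 1 - math.exp(-t), lambda x: math.exp(-x), 2.0)
print(approx)  # ≈ 2.0, with O(h) discretization error
```

The same recursion works for any inter-renewal density, including ones whose Laplace transform is not rational.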

    Laplace transform approach

    If we assume that \(X \geq 0\) has a density \(\mathrm{f}_{X}(x)\), and that this density has a Laplace transform18 \(L_{X}(s)=\int_{0}^{\infty} \mathrm{f}_{X}(x) e^{-s x} d x\), then we can take the Laplace transform of both sides of (4.52). Note that the final term in \ref{4.52} is the convolution of \(m\) with \(\mathrm{f}_{X}\), so that the Laplace transform of \(m(t)\) satisfies

    \(L_{m}(s)=\frac{L_{X}(s)}{s}+L_{m}(s) L_{X}(s).\)

    Solving for \(L_{m}(s)\),

    \[L_{m}(s)=\frac{L_{X}(s)}{s\left[1-L_{X}(s)\right]}.\label{4.54} \]

    Example 4.6.1

    As a simple example of how this can be used to calculate \(m(t)\), suppose \(\mathrm{f}_{X}(x)=(1 / 2) e^{-x}+e^{-2 x}\) for \(x \geq 0\). The Laplace transform is given by

    \(L_{X}(s)=\frac{1}{2(s+1)}+\frac{1}{s+2}=\frac{(3 / 2) s+2}{(s+1)(s+2)}.\)

    Substituting this into \ref{4.54} yields

    \(L_{m}(s)=\frac{(3 / 2) s+2}{s^{2}(s+3 / 2)}=\frac{4}{3 s^{2}}+\frac{1}{9 s}-\frac{1}{9(s+3 / 2)}.\)

    We can solve for \(m(t)\), \(t \geq 0\), by taking the inverse Laplace transform,

    \(m(t)=\frac{4 t}{3}+\frac{1-\exp [-(3 / 2) t]}{9}.\)
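This closed form is easy to check by simulation. A sketch (the helper names, seed, and trial count are illustrative): the density \(\mathrm{f}_X(x)=(1/2)e^{-x}+e^{-2x}\) is an equal mixture of an Exp(rate 1) and an Exp(rate 2) density, so sampling \(X\) is a fair coin flip followed by an exponential draw, and the empirical mean of \(N(t)\) should match \(m(t)\).

```python
import math, random

random.seed(1)

def sample_X():
    """Draw from f_X(x) = (1/2)e^{-x} + e^{-2x}: an equal mixture
    of an Exp(rate 1) and an Exp(rate 2) random variable."""
    return random.expovariate(1.0 if random.random() < 0.5 else 2.0)

def estimate_m(t, trials=200_000):
    """Monte-Carlo estimate of m(t) = E[N(t)]: count arrivals up to t."""
    total = 0
    for _ in range(trials):
        s, n = 0.0, 0
        while True:
            s += sample_X()
            if s > t:
                break
            n += 1
        total += n
    return total / trials

t = 2.0
est = estimate_m(t)
closed_form = 4 * t / 3 + (1 - math.exp(-1.5 * t)) / 9
print(est, closed_form)  # both ≈ 2.77
```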

    The procedure in this example can be used for any inter-renewal density \(\mathrm{f}_{X}(x)\) for which the Laplace transform is a rational function, i.e., a ratio of polynomials. In such cases, \(L_{m}(s)\) will also be a rational function. The Heaviside inversion formula (i.e., factoring the denominator and expressing \(L_{m}(s)\) as a sum of individual poles as done above) can then be used to calculate \(m(t)\). In the example above, there was a second order pole at \(s=0\) leading to the linear term \(4 t / 3\) in \(m(t)\), there was a first order pole at \(s=0\) leading to the constant 1/9, and there was a pole at \(s=-3 / 2\) leading to the exponentially decaying term.

    We now show that a second order pole at \(s=0\) always occurs when \(L_{X}(s)\) is a rational function. To see this, note that \(L_{X}(0)\) is just the integral of \(f_{X}(x)\), which is 1; thus \(1-L_{X}(s)\) has a zero at \(s=0\) and \(L_{m}(s)\) has a second order pole at \(s=0\). To evaluate the residue for this second order pole, we recall that the first and second derivatives of \(L_{X}(s)\) at \(s=0\) are \(-\mathrm{E}[X]\) and \(\mathrm{E}\left[X^{2}\right]\) respectively. Expanding \(L_{X}(s)\) in a power series around \(s=0\) then yields \(L_{X}(s)=1-s \mathrm{E}[X]+\left(s^{2} / 2\right) \mathrm{E}\left[X^{2}\right]\) plus terms of order \(s^{3}\) or higher. This gives us

    \[L_{m}(s)=\frac{1-s \overline{X}+\left(s^{2} / 2\right) \mathrm{E}\left[X^{2}\right]+\cdots}{s^{2}\left[\overline{X}-(s / 2) \mathrm{E}\left[X^{2}\right]+\cdots\right]}=\frac{1}{s^{2} \overline{X}}+\frac{1}{s}\left(\frac{\mathrm{E}\left[X^{2}\right]}{2 \overline{X}^{2}}-1\right)+\cdots.\label{4.55} \]

    The remaining terms are the other poles of \(L_{m}(s)\) with their residues. For values of \(s\) with \(\Re(s) \geq 0\), we have \(\left|L_{X}(s)\right|=\left|\int \mathrm{f}_{X}(x) e^{-s x} d x\right| \leq \int \mathrm{f}_{X}(x)\left|e^{-s x}\right| d x \leq \int \mathrm{f}_{X}(x) d x=1\) with strict inequality except for \(s=0\). Thus \(L_{X}(s)\) cannot have any poles on the imaginary axis or the right half plane, and \(1-L_{X}(s)\) cannot have any zeros there other than the one at \(s=0\). It follows that all the remaining poles of \(L_{m}(s)\) are strictly in the left half plane. This means that the inverse transforms for all these remaining poles die out as \(t \rightarrow \infty\). Thus the inverse Laplace transform of \(L_{m}(s)\) is

    \[\begin{aligned}
    m(t) &=\frac{t}{\overline{X}}+\frac{\mathrm{E}\left[X^{2}\right]}{2 \overline{X}^{2}}-1+\epsilon(t) \\
    &=\frac{t}{\overline{X}}+\frac{\sigma^{2}}{2 \overline{X}^{2}}-\frac{1}{2}+\epsilon(t) \quad \text { for } t \geq 0,
    \end{aligned}\label{4.56} \]

    where \(\lim _{t \rightarrow \infty} \epsilon(t)=0\).

    We have derived \ref{4.56} only for the special case in which \(\mathrm{f}_{X}(x)\) has a rational Laplace transform. For this case, \ref{4.56} implies both the elementary renewal theorem \(\left(\lim _{t \rightarrow \infty} m(t) / t=1/\overline{X}\right)\) and also Blackwell’s theorem \(\left(\lim _{t \rightarrow \infty}[m(t+\delta)-m(t)]=\delta / \overline{X}\right)\). We will interpret the meaning of the constant term \(\sigma^{2} /\left(2 \overline{X}^{2}\right)-1 / 2\) in Section 4.8.
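For the density of Example 4.6.1, one can check that the slope and constant in \ref{4.56} reproduce the partial-fraction answer. A small sketch of the arithmetic (exact rational arithmetic via the standard library; variable names are illustrative):

```python
from fractions import Fraction as F

# Moments of f_X(x) = (1/2)e^{-x} + e^{-2x}, an equal mixture of
# Exp(1) and Exp(2); a mixture's moments are the mixture of the
# component moments, with E[X] = 1/rate and E[X^2] = 2/rate^2.
EX  = F(1, 2) * 1 + F(1, 2) * F(1, 2)    # mean: 3/4
EX2 = F(1, 2) * 2 + F(1, 2) * F(1, 2)    # second moment: 5/4

slope    = 1 / EX                        # coefficient of t in (4.56)
constant = EX2 / (2 * EX**2) - 1         # constant term in (4.56)
print(slope, constant)  # 4/3 1/9
```

These agree with the linear term \(4t/3\) and the constant \(1/9\) found by inverting \(L_m(s)\) in Example 4.6.1.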

    Elementary renewal theorem

    Theorem 4.6.1 (The elementary renewal theorem)

    Let \(\{N(t) ; t>0\}\) be a renewal counting process with mean inter-renewal interval \(\overline{X}\). Then \(\lim _{t \rightarrow \infty} \mathrm{E}[N(t)] / t=1 / \overline{X}\).

    Discussion: We have already seen that \(m(t)=\mathrm{E}[N(t)]\) is finite for all \(t>0\) (see Exercise 4.2). The theorem is proven by establishing a lower and upper bound to \(m(t) / t\) and showing that each approaches \(1 / \mathrm{E}[X]\) as \(t \rightarrow \infty\). The key element for each bound is (4.35), repeated below, which comes from the Wald equality.

    \[m(t)=\frac{\mathrm{E}\left[S_{N(t)+1}\right]}{\overline{X}}-1.\label{4.57} \]

    Proof

    The lower bound to \(m(t) / t\) comes by recognizing that \(S_{N(t)+1}\) is the epoch of the first arrival after \(t\). Thus \(\mathrm{E}\left[S_{N(t)+1}\right]>t\). Substituting this into (4.57),

    \(\frac{m(t)}{t}>\frac{1}{\mathrm{E}[X]}-\frac{1}{t}.\)

    Clearly this lower bound approaches \(1 / \mathrm{E}[X]\) as \(t \rightarrow \infty\). The upper bound, which is more difficult19 and might be omitted on a first reading, is established by first truncating the inter-renewal intervals and then applying \ref{4.57} to the truncated process.

    For an arbitrary constant \(b>0\), let \(\breve{X}_{i}=\min \left(b, X_{i}\right)\). Since these truncated random variables are IID, they form a related renewal counting process \(\{\breve{N}(t) ; t>0\}\) with \(\breve{m}(t)=\mathrm{E}[\breve{N}(t)]\) and \(\breve{S}_{n}=\breve{X}_{1}+\cdots+\breve{X}_{n}\). Since \(\breve{X}_{i} \leq X_{i}\) for all \(i\), we see that \(\breve{S}_{n} \leq S_{n}\) for all \(n\). Since \(\left\{S_{n} \leq t\right\}=\{N(t) \geq n\}\), it follows that \(\breve{N}(t) \geq N(t)\) and thus \(\breve{m}(t) \geq m(t)\). Finally, in the truncated process, \(\breve{S}_{\breve{N}(t)+1} \leq t+b\) and thus \(\mathrm{E}\left[\breve{S}_{\breve{N}(t)+1}\right] \leq t+b\). Thus, applying \ref{4.57} to the truncated process,

    \(\frac{m(t)}{t} \leq \frac{\breve{m}(t)}{t}=\frac{\mathrm{E}\left[\breve{S}_{\breve{N}(t)+1}\right]}{t \mathrm{E}[\breve{X}]}-\frac{1}{t} \leq \frac{t+b}{t \mathrm{E}[\breve{X}]}.\)

    Next, choose \(b=\sqrt{t}\). Then

    \(\frac{m(t)}{t} \leq \frac{1}{\mathrm{E}[\breve{X}]}+\frac{1}{\sqrt{t} \mathrm{E}[\breve{X}]}\).

    Note finally that \(\mathrm{E}[\breve{X}]=\int_{0}^{b}\left[1-\mathrm{F}_{X}(x)\right] dx\). Since \(b=\sqrt{t}\), we have \(\lim _{t \rightarrow \infty} \mathrm{E}[\breve{X}]=\mathrm{E}[X]\), completing the proof.
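The identity \(\mathrm{E}[\breve{X}]=\int_{0}^{b}[1-\mathrm{F}_{X}(x)]\,dx\) used in the last step is easy to verify numerically. A sketch (seed and sample size are illustrative) for \(X \sim\) Exp(1), where the integral evaluates to \(1-e^{-b}\):

```python
import math, random

random.seed(4)

b = 2.0
# E[min(b, X)] should equal the integral of the complementary CDF
# from 0 to b; for X ~ Exp(1) that integral is 1 - e^{-b}.
trials = 200_000
mc = sum(min(b, random.expovariate(1.0)) for _ in range(trials)) / trials
print(mc, 1 - math.exp(-b))  # both ≈ 0.865
```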

    Note that neither this theorem nor its proof has assumed finite variance. It can also be seen that the theorem holds when \(\mathrm{E}[X]\) is infinite, since \(\lim _{t \rightarrow \infty} \mathrm{E}[\breve{X}]=\infty\) in this case.

    Recall that \(N[t, \omega] / t\) is the average number of renewals from 0 to \(t\) for a sample function \(\omega\), and \(m(t) / t\) is the average of this over \(\omega\). Combining with Theorem 4.3.1, we see that the limiting time- and ensemble-average renewal rate equals the time-average renewal rate for each sample function, except for a set of probability 0.

    Another interesting question is to determine the expected renewal rate in the limit of large \(t\) without averaging from 0 to \(t\). That is, are there some values of \(t\) at which renewals are more likely than others for large \(t\)? If the inter-renewal intervals take only integer values (i.e., each inter-renewal interval must last for an integer number of time units), then each renewal epoch \(S_{n}\) must also be an integer. This means that \(N(t)\) can increase only at integer times and the expected rate of renewals is zero at all non-integer times.

    An obvious generalization of integer-valued inter-renewal intervals is that of inter-renewals that occur only at integer multiples of some real number \(d>0\). Such a distribution is called an arithmetic distribution. The span \(\lambda\) of an arithmetic distribution is the largest such \(d\). Thus, for example, if \(X\) takes on only the values 0, 2, and 6, its distribution is arithmetic with span \(\lambda=2\). Similarly, if \(X\) takes on only the values 1/3 and 1/5, then the span is \(\lambda=1 / 15\). The remarkable thing, for our purposes, is that any inter-renewal distribution that is not arithmetic leads to a uniform expected rate of renewals in the limit of large \(t\). This result is contained in Blackwell’s renewal theorem, which we state without proof.20 Recall, however, that for the special case of an inter-renewal density with a rational Laplace transform, Blackwell’s renewal theorem is a simple consequence of \ref{4.56}.

    Theorem 4.6.2 (Blackwell).

    If a renewal process has an inter-renewal distribution that is non-arithmetic, then for each \(\delta>0\),

    \[\lim _{t \rightarrow \infty}[m(t+\delta)-m(t)]=\frac{\delta}{\mathrm{E}[X]}.\label{4.58} \]

    If the inter-renewal distribution is arithmetic with span \(\lambda\), then

    \[\lim _{t \rightarrow \infty}[m(t+\lambda)-m(t)]=\frac{\lambda}{\mathrm{E}[X]}.\label{4.59} \]

    Eq. \ref{4.58} says that for non-arithmetic distributions, the expected number of arrivals in the interval \((t, t+\delta]\) is equal to \(\delta / \mathrm{E}[X]\) in the limit \(t \rightarrow \infty\). Since the theorem is true for arbitrarily small \(\delta\), the theorem almost seems to be saying that \(m(t)\) has a derivative for large \(t\), but this is not true. One can see the reason by looking at an example where \(X\) can take on only the values 1 and \(\pi\). Then no matter how large \(t\) is, \(N(t)\) can only increase at discrete points of time of the form \(k+j \pi\) where \(k\) and \(j\) are nonnegative integers. Thus \(d m(t) / d t\) is either 0 or \(\infty\) for all \(t\). As \(t\) gets larger, the jumps in \(m(t)\) become both smaller in magnitude and more closely spaced from one to the next. Thus \([m(t+\delta)-m(t)] / \delta\) can approach \(1 / \mathrm{E}[X]\) as \(t \rightarrow \infty\) for any fixed \(\delta\) (as the theorem says), but as \(\delta\) gets smaller, the convergence in \(t\) gets slower. For the above example (and for all discrete non-arithmetic distributions), \([m(t+\delta)-m(t)] / \delta\) does not approach21 \(1 / \mathrm{E}[X]\) for any \(t\) as \(\delta \rightarrow 0\).
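Blackwell’s theorem for the \(\{1, \pi\}\) example above can be illustrated by simulation. A sketch (seed, choice of \(t\), and trial count are illustrative): \(\mathrm{E}[X]=(1+\pi)/2\), so for \(\delta=1\) and large \(t\) the expected number of renewals in \((t, t+\delta]\) should be near \(\delta/\mathrm{E}[X] = 2/(1+\pi) \approx 0.483\), even though renewals occur only at the discrete points \(k+j\pi\).

```python
import math, random

random.seed(2)

def expected_renewals(t, delta, trials=50_000):
    """Estimate m(t+delta) - m(t), the expected number of renewals
    in (t, t+delta], when X equals 1 or pi with probability 1/2 each."""
    count = 0
    for _ in range(trials):
        s = 0.0
        while s <= t + delta:
            s += 1.0 if random.random() < 0.5 else math.pi
            if t < s <= t + delta:
                count += 1
    return count / trials

t, delta = 200.0, 1.0
est = expected_renewals(t, delta)
print(est, delta / ((1 + math.pi) / 2))  # second value ≈ 0.483
```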

    For an arithmetic renewal process with span \(\lambda\), the asymptotic behavior of \(m(t)\) as \(t \rightarrow \infty\) is much simpler. Renewals can only occur at multiples of \(\lambda\), and since simultaneous renewals are not allowed, either 0 or 1 renewal occurs at each time \(k \lambda\). Thus for any \(k\), we have

    \[\operatorname{Pr}\{\text { Renewal at } \lambda k\}=m(\lambda k)-m(\lambda(k-1)), \label{4.60} \]

    where, by convention, we take \(m(0)=0\). Thus \ref{4.59} can be restated as

    \[\lim _{k \rightarrow \infty} \operatorname{Pr}\{\text { Renewal at } k \lambda\}=\frac{\lambda}{\overline{X}}.\label{4.61} \]
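A sketch illustrating \ref{4.61} (the helper name, seed, and evaluation time are illustrative): with \(X\) equal to 2 or 6 with probability 1/2 each, the span is \(\lambda=2\) and \(\overline{X}=4\), so the probability of a renewal at a large multiple of 2 should approach \(2/4 = 1/2\).

```python
import random

random.seed(3)

def prob_renewal_at(time, trials=100_000):
    """Estimate Pr{renewal at `time`} when X is 2 or 6 with
    probability 1/2 each (arithmetic with span 2 and mean 4)."""
    hits = 0
    for _ in range(trials):
        s = 0
        while s < time:
            s += 2 if random.random() < 0.5 else 6
        hits += (s == time)      # renewal lands exactly on `time`?
    return hits / trials

# Eq. (4.61) predicts the limit span / mean = 2/4 = 1/2.
p = prob_renewal_at(100)
print(p)  # ≈ 0.5
```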

    The limiting behavior of \(m(t)\) is discussed further in the next section.


    17A mathematical subtlety with the Stieltjes integrals \ref{4.52} and \ref{4.53} will be discussed in Section 4.7.3.

    18Note that \(L_{X}(s)=\mathrm{E}\left[e^{-s X}\right]=g_{X}(-s)\) where \(g\) is the MGF of \(X\). Thus the argument here could be carried out using the MGF. We use the Laplace transform since the mechanics here are so familiar to most engineering students.

    19The difficulty here, and the reason for using a truncation argument, comes from the fact that the residual life, \(S_{N(t)+1}-t\) at \(t\) might be arbitrarily large. We saw in Section 4.4 that the time-average residual life is infinite if \(\mathrm{E}\left[X^{2}\right]\) is infinite. Figure 4.6 also illustrates why residual life can be so large.

    20See Theorem 1 of Section 11.1 of [8] for a proof.


    This page titled 4.6: Expected Number of Renewals is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Robert Gallager (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.