
7.2: The Queueing Delay in a G/G/1 Queue


Before analyzing random walks in general, we introduce two important problem areas that are often best viewed in terms of random walks. In this section, the queueing delay in a G/G/1 queue is represented as a threshold-crossing problem in a random walk. In the next section, the error probability in a standard type of detection problem is represented as a random walk problem. This detection problem will then be generalized to a sequential detection problem based on threshold crossings in a random walk.

Consider a G/G/1 queue with first-come-first-serve (FCFS) service. We shall associate the probability that a customer must wait more than some given time \(\alpha \) in the queue with the probability that a certain random walk crosses a threshold at \(\alpha \). Let \( X_1, X_2, . . .\) be the interarrival times of a G/G/1 queueing system; thus these variables are IID with an arbitrary distribution function \(F_X(x) = \text{Pr}\{ X_i \leq x \}\). Assume that arrival 0 enters an empty system at time 0, and thus \( S_n = X_1 +X_2 + ... +X_n\) is the epoch of the \(n^{th}\) arrival after time 0. Let \(Y_0, Y_1, . . . ,\) be the service times of the successive customers. These are independent of {\( X_i; i ≥ 1 \)} and are IID with some given distribution function \( F_Y (y)\). Figure 7.2 shows the arrivals and departures for an illustrative sample path of the process and illustrates the queueing delay for each arrival.


    Figure 7.2: Sample path of arrivals and departures from a G/G/1 queue. Customer 0 arrives at time 0 and enters service immediately. Customer 1 arrives at time \( s_1 = x_1 \). For the case shown above, customer 0 has not yet departed, \( i.e., \, x_1 < y_0\), so customer 1’s time in queue is \( w_1 = y_0 − x_1\). As illustrated, customer 1’s system time (queueing time plus service time) is \( w_1 + y_1 \).

    Customer 2 arrives at \( s_2 = x_1 + x_2\). For the case shown above, this is before customer 1 departs at \( y_0 + y_1\). Thus, customer 2’s wait in queue is \(w_2 = y_0 + y_1 − x_1 − x_2\). As illustrated above, \(x_2 +w_2\) is also equal to customer 1’s system time, so \(w_2 = w_1 +y_1 −x_2\). Customer 3 arrives when the system is empty, so it enters service immediately with no wait in queue, \(i.e., \, w_3 = 0\).
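For concreteness, take some illustrative numbers (chosen here only as an example; they are not the values in the figure): say \( x_1 = 1, \, x_2 = 1.5, \, y_0 = 2, \, y_1 = 2.5\). Then \( x_1 < y_0\) and \( x_1 + x_2 < y_0 + y_1\), as in the case shown, and

\[ w_1 = y_0 - x_1 = 1, \qquad w_2 = y_0 + y_1 - x_1 - x_2 = 2 = w_1 + y_1 - x_2 . \nonumber \]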

     

Let \( W_n\) be the queueing delay for the \( n^{th}\) customer, \( n ≥ 1\). The system time for customer \( n\) is then defined as the queueing delay \(W_n\) plus the service time \( Y_n\). As illustrated in Figure 7.2, customer \( n ≥ 1\) arrives \( X_n \) time units after the beginning of customer \(n − 1\)’s system time. If \( X_n < W_{n−1} + Y_{n−1}, \, i.e.\), if customer \( n\) arrives before the end of customer \( n − 1\)’s system time, then customer \( n\) must wait in the queue until customer \( n-1\) finishes service (in the figure, for example, customer 2 arrives while customer 1 is still in the system). Thus

    \[ W_n = W_{n-1} +Y_{n-1} - X_n \qquad \text{if } X_n \leq W_{n-1} + Y_{n-1} . \nonumber \]

    On the other hand, if \( X_n > W_{n−1} + Y_{n−1} \), then customer \(n−1\) (and all earlier customers) have departed when \( n\) arrives. Thus \( n\) starts service immediately and \( W_n = 0\). This is the case for customer 3 in the figure. These two cases can be combined in the single equation

\[ W_n = \text{max} [W_{n-1} + Y_{n-1} - X_n , 0]; \qquad \text{for } n \geq 1; \qquad W_0 = 0 \tag{7.4} \]
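As a concrete illustration (a minimal simulation sketch, not part of the original text), this recursion can be applied directly to randomly drawn interarrival and service times. The Python snippet below uses exponential distributions purely as an assumed special case of a G/G/1 queue; the function name `lindley_delays` and all numerical parameters are illustrative choices.

```python
import random

def lindley_delays(n, interarrival, service, seed=0):
    """Queueing delays W_1, ..., W_n from W_n = max(W_{n-1} + Y_{n-1} - X_n, 0), W_0 = 0."""
    rng = random.Random(seed)
    w = 0.0                    # W_0 = 0: customer 0 finds the system empty
    y_prev = service(rng)      # Y_0, service time of customer 0
    delays = []
    for _ in range(n):
        x = interarrival(rng)  # X_n, time since the previous arrival
        w = max(w + y_prev - x, 0.0)
        delays.append(w)
        y_prev = service(rng)  # Y_n, used at the next step
    return delays

# Assumed example: interarrival rate 0.8, service rate 1.0 (an M/M/1 special case).
delays = lindley_delays(
    10_000,
    interarrival=lambda rng: rng.expovariate(0.8),
    service=lambda rng: rng.expovariate(1.0),
)
print("fraction of customers with delay >= 2:", sum(d >= 2 for d in delays) / len(delays))
```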

    Since \( Y_{n−1}\) and \( X_n\) are coupled together in this equation for each \(n\), it is convenient to define \( U_n = Y_{n−1} − X_n\). Note that {\( U_n; n ≥ 1\)} is a sequence of IID random variables. From (7.4), \( W_n = \text{max}[W_{n−1} + U_n, 0]\), and iterating on this equation,

\[ \begin{aligned} W_n \quad &= \quad \text{max}[\text{max} [ W_{n-2} + U_{n-1}, \, 0] + U_n, \, 0] \\ &= \quad \text{max} [ ( W_{n-2} + U_{n-1} + U_n), \, U_n , \, 0 ] \\ &= \quad \text{max} [ ( W_{n-3} + U_{n-2}+ U_{n-1}+U_n), \, (U_{n-1} + U_n), \, U_n, \, 0] \\ &= \quad ... \quad ... \\ &= \quad \text{max} [(U_1 + U_2 + ... + U_n), \quad (U_2+U_3+...+U_n), ... , (U_{n-1}+U_n), \, U_n , \, 0 ]. \end{aligned} \tag{7.5} \]

It is not necessary for the theorem below, but we can understand this maximization better by realizing that if the maximization is achieved at \( U_i + U_{i+1} + ...+ U_n\), then a busy period must start with the arrival of customer \( i − 1\) and continue at least through the service of customer \(n\). To see this intuitively, note that the analysis above starts with the arrival of customer 0 to an empty system at time 0, but the choice of time 0 and customer number 0 has nothing to do with the analysis, and thus the analysis is valid for any arrival to an empty system. Choosing the largest customer number before \( n\) that starts a busy period must then give the correct queueing delay, and thus maximizes (7.5). Exercise 7.2 provides further insight into this maximization.

Define \( Z_1^n = U_n\), define \( Z_2^n = U_n + U_{n−1}\), and in general, for \( i ≤ n\), define \( Z^n_i = U_n +U_{n−1} + ... + U_{n−i+1} \). Thus \( Z_n^n = U_n + ... + U_1\). With these definitions, (7.5) becomes

    \[ W_n = \text{max} [ 0,Z_1^n,Z_2^n, ... ,Z_n^n] . \nonumber \]
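The equivalence between the recursion in (7.4) and this maximization can be checked numerically. The following sketch (not from the original text; the distribution of the \(U_i\) is an arbitrary assumed choice) computes \(W_n\) both ways for a single sample path and confirms that they agree.

```python
import random

rng = random.Random(1)
n = 50
# U_i = Y_{i-1} - X_i; here Y and X are drawn from assumed (uniform, exponential) distributions.
u = [rng.uniform(0.0, 2.0) - rng.expovariate(1.0) for _ in range(n)]

# Recursion (7.4): W_k = max(W_{k-1} + U_k, 0), with W_0 = 0.
w, w_recursion = 0.0, []
for u_k in u:
    w = max(w + u_k, 0.0)
    w_recursion.append(w)

# Maximization above: W_k = max(0, Z_1^k, ..., Z_k^k) with Z_i^k = U_k + ... + U_{k-i+1}.
w_backward = []
for k in range(1, n + 1):
    z, best = 0.0, 0.0
    for u_j in reversed(u[:k]):   # partial sums of the walk going backward from U_k
        z += u_j
        best = max(best, z)
    w_backward.append(best)

assert all(abs(a - b) < 1e-9 for a, b in zip(w_recursion, w_backward))
```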

    Note that the terms in {\( Z^n_i; 1 ≤ i ≤ n\) } are the first \( n\) terms of a random walk, but it is not the random walk based on \( U_1, U_2, . . . ,\) but rather the random walk going backward, starting with \( U_n\). Note also that \( W_{n+1}\), for example, is the maximum of a different set of variables, \(i.e.\), it is the walk going backward from \( U_{n+1}\). Fortunately, this doesn’t matter for the analysis since the ordered variables (\( U_n, U_{n−1} . . . , U_1 )\) are statistically identical to (\( U_1, . . . , U_n)\). The probability that the wait is greater than or equal to a given value \( \alpha \) is

    \[ \text{Pr} \{ W_n \geq \alpha \} = \text{Pr} \{ \text{max} [0,Z^n_1,Z^n_2,...,Z^n_n ] \geq \alpha \} . \nonumber \]

This says that, for the \( n^{th}\) customer, Pr{\( W_n ≥ \alpha \)} is equal to the probability that the random walk {\( Z^n_i; 1 ≤ i ≤ n\)} crosses a threshold at \(\alpha\) by the \(n^{th}\) trial. Because of the initialization used in the analysis, we see that \(W_n\) is the queueing delay of the \(n^{th}\) arrival after the beginning of a busy period (although this \(n^{th}\) arrival might belong to a later busy period than that initial busy period).

As noted above, (\( U_n, U_{n−1}, . . . , U_1\)) is statistically identical to (\( U_1, . . . , U_n\)). This means that Pr{\( W_n ≥ \alpha \)} is the same as the probability that the first \(n\) terms of a random walk based on {\( U_i; i ≥ 1\)} cross a threshold at \( \alpha \). Since the first \(n + 1\) terms of this random walk provide one more opportunity to cross \( \alpha \) than the first \(n\) terms, we see that

    \[ ... \leq \text{Pr} \{ W_n \geq \alpha \} \leq \text{Pr} \{ W_{n+1} \geq \alpha \} \leq ... \leq 1 . \nonumber \]

Since this sequence of probabilities is non-decreasing, it must have a limit as \(n\rightarrow \infty \), and this limit is denoted Pr{\(W ≥ \alpha \)}. Mathematically,\(^1\) this limit is the probability that a random walk based on {\( U_i; i ≥ 1\)} ever crosses a threshold at \( \alpha \). Physically, this limit is the probability that the queueing delay is at least \( \alpha \) for any given very large-numbered customer (\( i.e. \), for customer \(n\) when the influence of a busy period starting \( n\) customers earlier has died out). These results are summarized in the following theorem.

    Theorem 7.2.1

Let {\(X_i; i ≥ 1\)} be the IID interarrival intervals of a G/G/1 queue, let {\(Y_i; i ≥ 0\)} be the IID service times, and assume that the system is empty at time 0 when customer 0 arrives. Let \(W_n\) be the queueing delay for the \(n^{th}\) customer. Let \(U_n = Y_{n−1} −X_n\) for \(n ≥ 1\) and let \(Z_i^n = U_n + U_{n−1} + ... + U_{n−i+1} \) for \(1 ≤ i ≤ n\). Then for every \( \alpha > 0\) and \( n ≥ 1, \, W_n = \text{max}[0, Z_1^n, Z_2^n, . . . , Z_n^n]\). Also, Pr{\(W_n ≥ \alpha \)} is the probability that the random walk based on {\( U_i; i ≥ 1\)} crosses a threshold at \(\alpha\) on or before the \( n^{th}\) trial. Finally, Pr{\(W ≥ \alpha \} = \text{lim}_{n\rightarrow \infty}\) Pr{\(W_n ≥ \alpha\)} is equal to the probability that the random walk based on {\( U_i; i ≥ 1 \)} ever crosses a threshold at \( \alpha\).
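The last statement of the theorem suggests a direct way to estimate Pr{\(W ≥ \alpha\)} by simulation: run the forward random walk and check whether it ever reaches \(\alpha\). The sketch below (an assumed illustration, not part of the original text) truncates the walk at a long but finite horizon, so it slightly underestimates the crossing probability; the distributions, parameters, and the function name `prob_delay_at_least` are arbitrary illustrative choices.

```python
import random

def prob_delay_at_least(alpha, trials=10_000, horizon=1_000, seed=2):
    """Monte Carlo estimate of Pr{W >= alpha}: the probability that the walk
    S_k = U_1 + ... + U_k ever reaches alpha (truncated at `horizon` steps)."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(trials):
        s = 0.0
        for _ in range(horizon):
            # U_k = Y_{k-1} - X_k with assumed exponential service (rate 1.0)
            # and interarrival (rate 0.8) times, so E[U_k] < 0 and the walk drifts down.
            s += rng.expovariate(1.0) - rng.expovariate(0.8)
            if s >= alpha:
                crossings += 1
                break
    return crossings / trials

print(prob_delay_at_least(alpha=2.0))
```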

    Note that the theorem specifies the distribution function of \(W_n\) for each \(n\), but says nothing about the joint distribution of successive queueing delays. These are not the same as the distribution of successive terms in a random walk because of the reversal of terms above.

    We shall find a relatively simple upper bound and approximation to the probability that a random walk crosses a positive threshold in Section 7.4. From Theorem 7.2.1, this can be applied to the distribution of queueing delay for the G/G/1 queue (and thus also to the M/G/1 and M/M/1 queues).

    __________________________________

1. More precisely, the sequence of queueing delays \(W_1, W_2, . . . \) converges in distribution to \(W , \, i.e., \, \text{lim}_n F_{W_n} (w) = F_{W} (w)\) for each \(w\). We refer to \( W\) as the queueing delay in steady state.

    This page titled 7.2: The Queueing Delay in a G/G/1 Queue is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Robert Gallager (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform.