4.1: Time Averages
From the essential aspects of probability we now move into the time domain, considering random signals. For this, assign to each random event \(A_i\) a complete signal, instead of a single scalar: \(A_i \rightarrow x_i(t)\). The set of all the functions that are available (or the menu) is called the ensemble of the random process. An example case is to roll a die, generating \(i = [1, 2, 3, 4, 5, 6]\), and suppose \(x_i(t) = t^i\).
In the general case, there could be infinitely many members in the ensemble, and these functions could involve other variables, for example \(x_i(t, \, y, \, z)\), where \(y\) and \(z\) are variables not related to the random event \(A_i\). Any particular \(x_i(t)\) can be considered a regular, deterministic function once the event is known. In contrast, \(x(t_o)\), taken at a specific time but without specifying which event has occurred, is a random variable.
The theory of random processes is built on two kinds of probability calculations: those taken across time and those taken across the ensemble. For time averages to be taken, we have to consider a specific function, indexed by \(i\):
\begin{align} m(x_i(t)) \, &= \, \lim_{T \to \infty} \dfrac{1}{T} \int\limits_{0}^{T} x_i(t) \, dt \quad \textrm{(mean)} \\[4pt] V^t (x_i(t)) \, &= \, \lim_{T \to \infty} \dfrac{1}{T} \int\limits_{0}^{T} [x_i(t) - m(x_i(t))]^2 \, dt \quad \textrm{(variance on time)} \\[4pt] R_i^t (\tau) \, &= \, \lim_{T \to \infty} \dfrac{1}{T} \int\limits_{0}^{T} [x_i(t) - m(x_i(t))] [x_i(t + \tau) - m(x_i(t))] \, dt \quad \textrm{(autocorrelation).} \end{align}
The mean and variance have new symbols, but are calculated in a way that is consistent with our prior definitions. The autocorrelation is new and plays a central role in the definition of a spectrum. Notice that it is an inner product of the function’s deviation from its mean with a delayed version of the same, such that \(R^t(0) = V^t\).
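In practice these limits are approximated over a finite record. A minimal sketch, assuming uniformly sampled data and using an illustrative helper name `time_stats` (not from the text):

```python
import numpy as np

def time_stats(x, max_lag):
    """Finite-T estimates of the time mean, variance, and autocorrelation
    of one sampled realization x[n] (uniform sampling assumed)."""
    m = x.mean()                        # time-average mean
    d = x - m                           # deviation from the mean
    V = np.mean(d ** 2)                 # time variance; equals R(0)
    N = len(x)
    # autocorrelation estimate at integer sample lags 0..max_lag
    R = np.array([np.mean(d[: N - k] * d[k:]) for k in range(max_lag + 1)])
    return m, V, R
```

For a cosine realization sampled over many periods, the estimates approach the limiting values: the mean goes to zero, and the variance to \(a^2/2\), as derived below.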
Consider the roll of a die, and the generation of functions \(x_i(t) = a \cos (i \omega_o t)\). We have
\begin{align*} m(x_i(t)) \, &= \, \lim_{T \to \infty} \dfrac{1}{T} \int\limits_{0}^{T} a \cos (i \omega_o t) \, dt \, = \, 0 \\[4pt] V^t(x_i(t)) \, &= \, \lim_{T \to \infty} \dfrac{1}{T} \int\limits_{0}^{T} a^2 \cos ^2 (i \omega_o t) \, dt \, = \, \dfrac{a^2}{2} \\[4pt] R^t_i (\tau) \, &= \, \lim_{T \to \infty} \dfrac{1}{T} \int\limits_{0}^{T} a^2 \cos (i \omega_o t) \cos (i \omega_o (t + \tau)) \, dt \, = \, \dfrac{a^2}{2} \cos (i \omega_o \tau). \end{align*}
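The third result follows from the product-to-sum identity; one intermediate step makes this explicit:
\begin{align*} \cos (i \omega_o t) \cos (i \omega_o (t + \tau)) \, = \, \tfrac{1}{2} \left[ \cos (i \omega_o \tau) + \cos (2 i \omega_o t + i \omega_o \tau) \right]. \end{align*}
The second term oscillates at frequency \(2 i \omega_o\) and time-averages to zero, leaving \(R^t_i (\tau) = \dfrac{a^2}{2} \cos (i \omega_o \tau)\).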
In this case, the autocorrelation depends explicitly on the event index \(i\), and has peaks of \(a^2/2\) at \(i \omega_o \tau = 2 \pi k\), where \(k\) is an integer. These values of \(\tau\) are separated by exactly the period of the \(i\)’th harmonic in the ensemble. When the functions line up, we get a positive \(R^t\); when they are out of phase, we get a negative \(R^t\).