6.5: Exponential Averages and Recursive Filters
Suppose we try to extend our method for computing finite moving averages to infinite moving averages of the form
\[\begin{align}
x_{n} &= \sum_{k=0}^{\infty} w_{k} u_{n-k} \nonumber\\
&= w_{0} u_{n}+w_{1} u_{n-1}+\cdots+w_{1000} u_{n-1000}+\cdots
\end{align} \nonumber \]
In general, this moving average would require infinite memory for the weighting coefficients \(w_{0}, w_{1}, \ldots\) and for the inputs \(u_{n}, u_{n-1}, \ldots\). Furthermore, the hardware for computing the products \(w_{k} u_{n-k}\) would have to be infinitely fast to compute the infinite moving average in finite time. All of this is clearly fanciful and implausible (not to mention impossible). But what if the weights take the exponential form
\[w_{k}= \begin{cases}0, & k<0 \\ w_{0} a^{k}, & k \geq 0 ?\end{cases} \nonumber \]
Does any simplification result? There is hope, because for \(k \geq 1\) we have \(w_{k}=w_{0} a^{k}=a\left(w_{0} a^{k-1}\right)=a w_{k-1}\); that is, the weighting sequence obeys the recursion
\[w_{k}= \begin{cases}0, & k<0 \\ w_{0}, & k=0 \\ a w_{k-1}, & k \geq 1 .\end{cases} \nonumber \]
This recursion may be rewritten as follows, for \(k \geq 1\):
\[w_{k}-a w_{k-1}=0, \quad k \geq 1 . \nonumber \]
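As a quick numerical sanity check, the exponential weights do satisfy this recursion. This is a minimal sketch with illustrative values \(w_0 = 0.25\) and \(a = 0.75\) (not values taken from the text):

```python
# Check that w_k = w0 * a**k satisfies w_k - a*w_{k-1} = 0 for k >= 1.
# w0 = 0.25 and a = 0.75 are illustrative choices.
w0, a = 0.25, 0.75
w = [w0 * a**k for k in range(50)]

for k in range(1, 50):
    assert abs(w[k] - a * w[k - 1]) < 1e-15
```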
Let's now manipulate the infinite moving average and use the recursion for the weights to see what happens. Follow each step: the third line uses the recursion \(w_{k}=a w_{k-1}\), and the fourth substitutes the index \(m=k-1\):
\[\begin{align}
x_{n} &=\sum_{k=0}^{\infty} w_{k} u_{n-k} \nonumber \\
&=\sum_{k=1}^{\infty} w_{k} u_{n-k}+w_{0} u_{n} \nonumber\\
&=\sum_{k=1}^{\infty} a w_{k-1} u_{n-k}+w_{0} u_{n} \nonumber\\
&=a \sum_{m=0}^{\infty} w_{m} u_{n-1-m}+w_{0} u_{n} \nonumber\\
&=a x_{n-1}+w_{0} u_{n} .
\end{align} \nonumber \]
This result is fundamentally important because it says that the output of the infinite exponential moving average may be computed by scaling the previous output \(x_{n-1}\) by the constant \(a\), scaling the new input \(u_n\) by \(w_0\), and adding. Only three memory locations must be allocated: one for \(w_0\), one for \(a\), and one for \(x_{n-1}\). Only two multiplies must be implemented: one for \(ax_{n-1}\) and one for \(w_0u_n\). A diagram of the recursion is given in Figure 1. In this recursion, the old value of the exponential moving average, \(x_{n-1}\), is scaled by \(a\) and added to \(w_0u_n\) to produce the new exponential moving average \(x_n\). This new value is stored in memory, where it becomes \(x_{n-1}\) in the next step of the recursion, and so on.
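The two-multiply recursion can be sketched in a few lines of Python and cross-checked against the direct sum \(\sum_{k} w_{0} a^{k} u_{n-k}\). The parameter values \(a = 0.9\), \(w_0 = 0.1\) and the input sequence are illustrative choices, not values from the text:

```python
# Recursive exponential moving average: x_n = a*x_{n-1} + w0*u_n.
# Only one state variable (x_{n-1}) and two multiplies per step.

def exp_average(u, a, w0):
    """Run the recursion over an input sequence, starting from rest."""
    x_prev = 0.0              # the single stored value x_{n-1}
    out = []
    for u_n in u:
        x_prev = a * x_prev + w0 * u_n
        out.append(x_prev)
    return out

def direct_average(u, a, w0, n):
    """Direct moving-average sum; u_m = 0 for m < 0, so the sum is finite."""
    return sum(w0 * a**k * u[n - k] for k in range(n + 1))

a, w0 = 0.9, 0.1              # illustrative filter parameters
u = [1.0] * 20                # illustrative step-like input
rec = exp_average(u, a, w0)

# The recursion reproduces the direct sum exactly at every step.
for n in range(20):
    assert abs(rec[n] - direct_average(u, a, w0, n)) < 1e-12
```

Because the input is zero before \(n = 0\), the "infinite" sum has only finitely many nonzero terms here, so the two computations agree to machine precision.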
Try to extend the recursion of the previous paragraphs to the weighted average
\(x_{n}=\sum_{k=0}^{N-1} a^{k} u_{n-k} .\)
What goes wrong?
Compute the output of the exponential moving average \(x_{n}=a x_{n-1}+w_{0} u_{n}\) when the input is
\(u_{n}= \begin{cases}0, & n<0 \\ u, & n \geq 0\end{cases}\)
Plot your result versus \(n\).
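As a numerical companion to this exercise (not a substitute for the analytical computation), the recursion can be simulated for such a step input. The values \(a = 0.5\), \(w_0 = 0.5\), \(u = 1\) are illustrative choices:

```python
# Simulate x_n = a*x_{n-1} + w0*u_n for a step input u_n = u, n >= 0.
# a, w0, and u are illustrative values.
a, w0, u = 0.5, 0.5, 1.0
x = 0.0                       # x_{-1} = 0: the average starts at rest
samples = []
for n in range(10):
    x = a * x + w0 * u        # one step of the recursion
    samples.append(x)
# The samples rise geometrically toward the fixed point x = w0*u/(1 - a),
# obtained by setting x_n = x_{n-1} = x in the recursion.
```

Plotting `samples` versus \(n\) shows the geometric approach to steady state that your analytical result should predict.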
Compute \(w_0\) in the exponential weighting sequence
\(w_{n}= \begin{cases}0, & n<0 \\ a^{n} w_{0}, & n \geq 0\end{cases}\)
to make the weighting sequence a valid window. (This is a special case of Exercise 3 from Filtering: Moving Averages.) Assume \(-1<a<1\).