# 6.6: Convergence of Fourier Series

## Introduction

Before looking at this module, hopefully you have become fully convinced of the fact that **any** periodic function, \(f(t)\), can be represented as a sum of complex sinusoids (Section 1.4). If you are not, then try looking back at eigen-stuff in a nutshell (Section 14.4) or eigenfunctions of LTI systems (Section 14.5). We have shown that we can represent a signal as the sum of exponentials through the Fourier Series equations below:

\[f(t)=\sum_{n} c_{n} e^{j \omega_{0} n t} \label{6.35}\]

\[c_{n}=\frac{1}{T} \int_{0}^{T} f(t) e^{-\left(j \omega_{0} n t\right)} d t \label{6.36}\]
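As a minimal numerical sketch of Equation \ref{6.36}, the coefficients \(c_n\) can be approximated by sampling one period and averaging; the square wave, the period \(T = 1\), and the sample count below are assumptions made purely for illustration:

```python
import numpy as np

T = 1.0                        # period (an assumed value for this sketch)
w0 = 2 * np.pi / T             # fundamental frequency
t = np.linspace(0, T, 100000, endpoint=False)
f = np.where(t < T / 2, 1.0, -1.0)      # square wave over one period

def coeff(n):
    # Rectangle-rule approximation of c_n = (1/T) * integral of f(t) e^{-j w0 n t} dt
    return np.mean(f * np.exp(-1j * w0 * n * t))

# For this square wave the exact coefficients are c_n = 2/(j*pi*n) for odd n
# and 0 for even n, so |c_1| should come out near 2/pi.
for n in [1, 2, 3]:
    print(n, abs(coeff(n)))
```

The rectangle rule is a natural choice here because the integrand is periodic, which makes the approximation very accurate for modest sample counts.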

Joseph Fourier insisted that these equations were true, but he could not prove it. Lagrange publicly ridiculed Fourier and claimed that only continuous functions can be represented by Equation \ref{6.35} (indeed, he proved that Equation \ref{6.35} holds for continuous-time functions). However, we now know that the real truth lies between the positions of Fourier and Lagrange.

## Understanding the Truth

Formulating our question mathematically, let

\[f_{N}^{\prime}(t)=\sum_{n=-N}^{N} c_{n} e^{j \omega_{0} n t} \nonumber \]

where \(c_n\) equals the Fourier coefficients of \(f(t)\) (see Equation \ref{6.36}).

\(f_{N}^{\prime}(t)\) is a "partial reconstruction" of \(f(t)\) using the first \(2N+1\) Fourier coefficients. \(f_{N}^{\prime}(t)\) **approximates** \(f(t)\), with the approximation getting better and better as \(N\) gets large. Therefore, we can think of the set \(\left\{f_{N}^{\prime}(t) \mid N=0,1, \ldots\right\}\) as a **sequence of functions**, each one approximating \(f(t)\) better than the one before.
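A short numerical sketch of this sequence of partial reconstructions (the square wave and the sample grid are assumptions chosen for illustration):

```python
import numpy as np

T = 1.0
w0 = 2 * np.pi / T
t = np.linspace(0, T, 4096, endpoint=False)
f = np.where(t < T / 2, 1.0, -1.0)        # square wave, an assumed example

def c(n):
    # Fourier coefficient c_n, approximated by averaging over one period
    return np.mean(f * np.exp(-1j * w0 * n * t))

def f_N(N):
    # Partial reconstruction from the first 2N+1 coefficients
    return sum(c(n) * np.exp(1j * w0 * n * t) for n in range(-N, N + 1)).real

# The mean-squared error shrinks as N grows: each member of the sequence
# approximates f(t) better than the one before.
errors = [np.mean((f - f_N(N)) ** 2) for N in (1, 5, 25)]
print(errors)
```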

The question is, does this sequence converge to \(f(t)\)? Does \( f_{N}^{\prime}(t) \rightarrow f(t)\) as \(N \rightarrow \infty\)? We will try to answer this question by thinking about convergence in two different ways:

- Looking at the **energy** of the error signal: \(e_{N}(t)=f(t)-f_{N}^{\prime}(t)\)
- Looking at \(\displaystyle{\lim_{N \to \infty}} f_{N}^{\prime}(t)\) at **each point** and comparing to \(f(t)\).

### Approach #1

Let \(e_N(t)\) be the difference (i.e., the error) between the signal \(f(t)\) and its partial reconstruction \(f_{N}^{\prime}(t)\):

\[e_{N}(t)=f(t)-f_{N}^{\prime}(t) \]

If \(f(t) \in L^{2}([0, T])\) (finite energy), then the energy of \(e_{N}(t)\) converges to zero as \(N \rightarrow \infty\):

\[\int_{0}^{T}\left(\left|e_{N}(t)\right|\right)^{2} \mathrm{d} t=\int_{0}^{T}\left(f(t)-f_{N}^{\prime}(t)\right)^{2} \mathrm{d} t \rightarrow 0\]

We can prove this equation using Parseval's relation:

\[\lim_{N \rightarrow \infty} \int_{0}^{T}\left|f(t)-f_{N}^{\prime}(t)\right|^{2} \mathrm{d} t=\lim_{N \rightarrow \infty} \sum_{n=-\infty}^{\infty}\left|\mathscr{F}_{n}(f(t))-\mathscr{F}_{n}\left(f_{N}^{\prime}(t)\right)\right|^{2}=\lim_{N \rightarrow \infty} \sum_{|n|>N}\left|c_{n}\right|^{2}=0 \nonumber\]

where the last expression before zero is the tail sum of the Fourier Series, which approaches zero because \(f(t) \in L^{2}([0, T])\). Since physical systems respond to energy, the Fourier Series provides an adequate representation for any \(f(t) \in L^{2}([0, T])\), that is, any signal with finite energy over one period.
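The Parseval argument can be checked numerically. In the sketch below, the square wave and its known coefficients \(c_n = 2/(j \pi n)\) for odd \(n\) are an assumed example, and the infinite tail sum is truncated at a large cutoff:

```python
import numpy as np

T = 1.0
w0 = 2 * np.pi / T
t = np.linspace(0, T, 8192, endpoint=False)
f = np.where(t < T / 2, 1.0, -1.0)      # square wave, an assumed example

# Exact coefficients for this square wave: c_n = 2/(j*pi*n) for odd n, else 0
def c(n):
    return 2 / (1j * np.pi * n) if n % 2 else 0.0

N = 9
f_N = sum(c(n) * np.exp(1j * w0 * n * t) for n in range(-N, N + 1)).real

# Left side: energy of the error over one period (T = 1 here, so the
# normalization factor drops out of Parseval's relation)
lhs = np.mean((f - f_N) ** 2) * T

# Right side: the tail sum of |c_n|^2 over |n| > N, truncated far out
rhs = sum(abs(c(n)) ** 2 for n in range(-100001, 100002) if abs(n) > N)
print(lhs, rhs)
```

The two sides agree to within the discretization and truncation error, and both shrink toward zero as \(N\) grows.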

### Approach #2

The fact that \(e_{N} \rightarrow 0\) in energy says nothing about \(f(t)\) and \(\displaystyle{\lim_{N \rightarrow \infty}} f_{N}^{\prime}(t)\) being **equal** at a given point. Take, for example, two functions \(f(t)\) and \(g(t)\) that are equal everywhere except at a single point. At that point \(f(t) \neq g(t)\), but

\[\int_{0}^{T}(|f(t)-g(t)|)^{2} \mathrm{d} t=0 \nonumber\]

From this we can see the following relationships:

- energy convergence does **not** imply pointwise convergence
- pointwise convergence **does** imply convergence in \(L^2([0,T])\)

However, the converse of the second statement does not hold true.

It turns out that if \(f(t)\) has a **discontinuity** at \(t_0\), then in general

\[f\left(t_{0}\right) \neq \lim_{N \rightarrow \infty} f_{N}^{\prime}\left(t_{0}\right) \nonumber \]

But as long as \(f(t)\) meets some other fairly mild conditions, then

\[f\left(t^{\prime}\right)=\lim_{N \rightarrow \infty} f_{N}^{\prime}\left(t^{\prime}\right) \nonumber\]

if \(f(t)\) is **continuous** at \(t=t^{\prime}\).

These conditions are known as the **Dirichlet Conditions**.
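A small numerical sketch of this pointwise behavior, using a square wave as an assumed example: at a point of continuity the partial sums approach \(f(t)\), while at the jump they land on the average of the one-sided limits.

```python
import numpy as np

T = 1.0
w0 = 2 * np.pi / T

# Exact square-wave coefficients (assumed example): c_n = 2/(j*pi*n) for odd n
def c(n):
    return 2 / (1j * np.pi * n) if n % 2 else 0.0

def f_N(t0, N):
    # Value of the partial reconstruction at a single point t0
    return sum(c(n) * np.exp(1j * w0 * n * t0) for n in range(-N, N + 1)).real

# At t = T/4 the square wave is continuous, and f_N(T/4) -> f(T/4) = 1.
# At the jump t = T/2, the partial sums converge to the average of the
# one-sided limits, (1 + (-1))/2 = 0.
for N in [11, 101, 1001]:
    print(N, f_N(T / 4, N), f_N(T / 2, N))
```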

## Dirichlet Conditions

Named after the German mathematician Peter Dirichlet, the **Dirichlet conditions** are sufficient conditions that guarantee the **existence** and **energy convergence** of the Fourier Series.

### The Weak Dirichlet Condition for the Fourier Series

For the Fourier Series to exist, the Fourier coefficients must be finite. The **Weak Dirichlet Condition** guarantees this. It essentially says that the integral of the absolute value of the signal must be finite.

Theorem \(\PageIndex{1}\): Weak Dirichlet Condition for the Fourier Series

The coefficients of the Fourier Series are finite if

\[\int_{0}^{T}|f(t)| d t<\infty\]

**Proof**

This can be shown from the magnitude of the Fourier Series coefficients:

\[\left|c_{n}\right|=\left|\frac{1}{T} \int_{0}^{T} f(t) e^{-\left(j \omega_{0} n t\right)} d t\right| \leq \frac{1}{T} \int_{0}^{T}|f(t)|\left|e^{-\left(j \omega_{0} n t\right)}\right| d t \]

Remembering our complex exponentials (Section 1.8), we know that in the above equation \(\left|e^{-\left(j \omega_{0} n t\right)}\right|=1\), which gives us:

\[\begin{align}
\left|c_{n}\right| &\leq \frac{1}{T} \int_{0}^{T}|f(t)| d t<\infty \\
&\Rightarrow\left(\left|c_{n}\right|<\infty\right)
\end{align}\]
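The bound in the proof can be checked numerically; the square wave below is an assumed example:

```python
import numpy as np

T = 1.0
w0 = 2 * np.pi / T
t = np.linspace(0, T, 4096, endpoint=False)
f = np.where(t < T / 2, 1.0, -1.0)        # square wave, an assumed example

# Rectangle-rule approximation of the bound (1/T) * integral of |f(t)| dt
bound = np.mean(np.abs(f))                # equals 1.0 for this signal

# Magnitudes of the Fourier coefficients |c_n| for a range of n
mags = [abs(np.mean(f * np.exp(-1j * w0 * n * t))) for n in range(-20, 21)]
print(max(mags), bound)
```

Every \(|c_n|\) sits at or below the bound, as the proof requires.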

Note

If we have the function:

\[\forall t, 0<t \leq T:\left(f(t)=\frac{1}{t}\right)\]

then you should note that this function **fails** the above condition because:

\[\int_{0}^{T}\left|\frac{1}{t}\right| \mathrm{d} t=\infty\]
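This divergence follows directly from the antiderivative of \(1/t\): for any \(\epsilon>0\),

\[\int_{\epsilon}^{T}\left|\frac{1}{t}\right| \mathrm{d} t=\ln T-\ln \epsilon \rightarrow \infty \text { as } \epsilon \rightarrow 0^{+} \nonumber\]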

### The Strong Dirichlet Conditions for the Fourier Series

For the Fourier Series to exist, the following two conditions must be satisfied (along with the Weak Dirichlet Condition):

- In one period, \(f(t)\) has only a finite number of minima and maxima.
- In one period, \(f(t)\) has only a finite number of discontinuities and each one is finite.

These are what we refer to as the **Strong Dirichlet Conditions**. In theory we can think of signals that violate these conditions, \(\sin(\log t)\) for instance. However, it is not possible to create a signal that violates these conditions in a lab. Therefore, any real-world signal will have a Fourier representation.
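To see why \(\sin(\log t)\) fails the first condition: its extrema occur where \(\cos(\log t)=0\), that is, at \(t=e^{\pi / 2+m \pi}\) for integers \(m \leq -1\), so the number of extrema in \((\epsilon, 1]\) grows without bound as \(\epsilon \rightarrow 0\). The counting function below is a sketch of this argument, not something from the text:

```python
import math

def count_extrema(eps):
    # Extrema of sin(log t) in (eps, 1] sit at t = exp(pi/2 + m*pi); count
    # the integers m <= -1 with exp(pi/2 + m*pi) > eps, i.e. with
    # m > (log(eps) - pi/2)/pi (eps assumed generic, not a boundary value).
    m_min = (math.log(eps) - math.pi / 2) / math.pi
    first = math.ceil(m_min)          # smallest integer above m_min
    return max(0, -1 - first + 1)

# The count grows without bound as eps -> 0: infinitely many extrema in (0, 1]
for eps in [1e-1, 1e-4, 1e-8]:
    print(eps, count_extrema(eps))
```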

Example \(\PageIndex{1}\)

Let us assume we have the following function and equality:

\[f^{\prime}(t)=\lim_{N \rightarrow \infty} f_{N}^{\prime}(t) \]

If \(f(t)\) meets all three conditions of the Strong Dirichlet Conditions, then

\[f(\tau)=f^{\prime}(\tau) \nonumber\]

at every \(\tau\) at which \(f(t)\) is continuous. And where \(f(t)\) is discontinuous, \(f^{\prime}(t)\) is the **average** of the values on the right and left.
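In symbols, if \(f(t)\) jumps at \(t_0\), the reconstructed value is the midpoint of the one-sided limits:

\[f^{\prime}\left(t_{0}\right)=\frac{f\left(t_{0}^{-}\right)+f\left(t_{0}^{+}\right)}{2} \nonumber\]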

Note

The functions that fail the strong Dirichlet conditions are pretty pathological; as engineers, we are not too interested in them.