
13.1: Comments

This section comes from a note describing results on efficient algorithms to calculate the discrete Fourier transform (DFT) that were collected over many years. Perhaps the most interesting is the discovery that the Cooley-Tukey FFT was described by Gauss in 1805. That gives some indication of the age of research on the topic, and the fact that a 1995 compiled bibliography on efficient algorithms contains over 3400 entries indicates its volume. Three IEEE Press reprint books contain papers on the FFT. An excellent general-purpose FFT program has been described in the literature; it is used in Matlab and is available over the internet.

In addition to this book, there are several others that give a good modern theoretical background for the FFT, one book that gives the basic theory plus both FORTRAN and TMS 320 assembly language programs, and other books that contain chapters on advanced FFT topics. There is also a good, up-to-date, on-line reference with both theory and programming techniques. The history of the FFT and excellent survey articles can also be found. The foundation of much of the modern work on efficient algorithms was laid by S. Winograd.

Efficient FFT algorithms for lengths \(N=2^M\) were described by Gauss and rediscovered in modern times by Cooley and Tukey. These have been highly developed, and good examples of FORTRAN programs can be found. Several newer algorithms have been published that require the smallest known amount of total arithmetic. Of these, the split-radix FFT seems to have the best structure for programming, and an efficient program has been written to implement it. A mixture of decimation-in-time and decimation-in-frequency with very good efficiency has also been given, as has an algorithm called the Sine-Cosine FT. More recently, a modification to the split-radix algorithm has been described that has a slightly better total arithmetic count. Theoretical bounds on the number of multiplications required for the FFT, based on Winograd's theories, have been established, and schemes for calculating an in-place, in-order radix-2 FFT, along with various forms of unscramblers, can be found. A discussion of the relation among the computer architecture, the algorithm, and the compiler is also available. A modification allows lengths of \(N=q2^m\) for odd \(q\).
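
For readers who want to see the basic idea in code, the following is a minimal sketch of the recursive radix-2 decimation-in-time (Cooley-Tukey) FFT. It is written in Python with NumPy purely for illustration (the programs referred to above are in FORTRAN), and the function name is ours; practical library routines are iterative, work in place, and precompute the twiddle factors.

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT for len(x) a power of two."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == 1:
        return x
    even = fft_radix2(x[0::2])   # DFT of the even-indexed samples
    odd = fft_radix2(x[1::2])    # DFT of the odd-indexed samples
    w = np.exp(-2j * np.pi * np.arange(n // 2) / n)   # twiddle factors
    return np.concatenate([even + w * odd, even - w * odd])

x = np.random.randn(16)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```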

The “other” FFT is the prime factor algorithm (PFA), which uses an index map originally developed by Thomas and by Good. The theory of the PFA was derived and further developed, and an efficient in-order, in-place program has been written. A method has been developed that uses dynamic programming to design optimal FFT programs that minimize the number of additions and data transfers as well as multiplications. This new approach designs custom algorithms for a particular computer architecture. An efficient and practical development of Winograd's ideas has given a design method that does not require the rather difficult Chinese remainder theorem for short prime-length FFTs. These ideas have been used to design modules of length 11, 13, 17, 19, and 25. Other methods for designing short DFTs can be found. A program that implements the nested Winograd Fourier transform algorithm (WFTA) is also available, but it has not proven as fast or as versatile as the PFA. An interesting use of the PFA has been announced in connection with searching for large prime numbers.
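
The essential feature of the PFA is the Good-Thomas index map, which converts a length-\(N=N_1N_2\) DFT with relatively prime factors into a true two-dimensional DFT with no twiddle factors between the stages. The sketch below illustrates that map in Python; the names and structure are ours, and NumPy's FFT stands in for the short Winograd-style modules a real PFA would use.

```python
import numpy as np
from math import gcd

def pfa_dft(x, n1, n2):
    """DFT of length n = n1*n2 (gcd(n1, n2) == 1) via the Good-Thomas
    index map: the 1-D DFT becomes a 2-D DFT with no twiddle factors."""
    assert gcd(n1, n2) == 1
    n = n1 * n2
    x = np.asarray(x, dtype=complex)

    # input ("Ruritanian") map: n = (n2*i1 + n1*i2) mod N
    t = np.empty((n1, n2), dtype=complex)
    for i1 in range(n1):
        for i2 in range(n2):
            t[i1, i2] = x[(n2 * i1 + n1 * i2) % n]

    # short DFTs along each dimension (a real PFA uses Winograd modules)
    t = np.fft.fft(t, axis=0)          # n2 DFTs of length n1
    t = np.fft.fft(t, axis=1)          # n1 DFTs of length n2

    # output (Chinese remainder theorem) map: k = (n2*p1*k1 + n1*p2*k2) mod N
    p1 = pow(n2, -1, n1)               # n2^{-1} mod n1
    p2 = pow(n1, -1, n2)               # n1^{-1} mod n2
    X = np.empty(n, dtype=complex)
    for k1 in range(n1):
        for k2 in range(n2):
            X[(n2 * p1 * k1 + n1 * p2 * k2) % n] = t[k1, k2]
    return X

x = np.random.randn(15)                # 15 = 3 * 5, relatively prime factors
assert np.allclose(pfa_dft(x, 3, 5), np.fft.fft(x))
```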

These efficient algorithms can be used not only on DFTs but also on other transforms with a similar structure. They have been applied to the discrete Hartley transform and the discrete cosine transform.

The fast Hartley transform has been proposed as a superior method for real-data analysis, but that has been shown not to be the case. A well-designed real-data FFT is always as good as or better than a well-designed Hartley transform. The Bruun algorithm also looks promising for real-data applications, as does the Rader-Brenner algorithm.

Apart from the general-length algorithms, for lengths that are not highly composite or prime, the chirp z-transform is a good candidate for longer lengths, and an efficient order-\(N^2\) algorithm called the QFT is a good candidate for shorter ones. A method that automatically generates near-optimal prime-length Winograd-based programs is available. It gives the same efficiency for shorter lengths (i.e., \(N\leq 19\)) and new, well-structured algorithms for much longer lengths. Special methods are available for very long lengths. A very interesting general-length FFT system called the FFTW has been developed by Frigo and Johnson at MIT. It uses a library of efficient “codelets” that are composed for a very efficient calculation of the DFT on a wide variety of computers. For most lengths and on most computers, this is the fastest FFT today. Surprisingly, it uses a recursive program structure. The FFTW won the 1999 Wilkinson Prize for Numerical Software.
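
To illustrate how the chirp z-transform handles an arbitrary (for example, prime) length \(N\), the following Python sketch uses Bluestein's identity \(nk = \tfrac{1}{2}\left(n^2 + k^2 - (k-n)^2\right)\) to turn the DFT into a convolution that is then computed with power-of-two FFTs. The code is illustrative only and does not reproduce any program cited above.

```python
import numpy as np

def chirp_dft(x):
    """Length-N DFT via the Bluestein (chirp-z) method: the DFT is rewritten
    as a convolution with a chirp, computed here with zero-padded FFTs."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    k = np.arange(n)
    w = np.exp(-1j * np.pi * k * k / n)        # chirp exp(-j*pi*k^2/N)
    a = x * w                                   # pre-multiplied input
    m = 1 << (2 * n - 1).bit_length()           # FFT length >= 2N-1, power of two
    b = np.zeros(m, dtype=complex)
    b[:n] = np.conj(w)                          # conjugate chirp, indices 0..N-1
    b[m - (n - 1):] = np.conj(w[1:])[::-1]      # ...wrapped to negative indices
    c = np.fft.ifft(np.fft.fft(a, m) * np.fft.fft(b))[:n]
    return w * c                                # post-multiply by the chirp

x = np.random.randn(17)                         # works for prime lengths too
assert np.allclose(chirp_dft(x), np.fft.fft(x))
```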

The use of the FFT to calculate discrete convolution was one of its earliest applications. Although the more direct rectangular transform would seem to be more efficient, use of the FFT or PFA is still probably the fastest method on a general-purpose computer or DSP chip. On special-purpose hardware or special architectures, the use of distributed arithmetic or number-theoretic transforms may be even faster. Special algorithms have also been developed for use with the short-time Fourier transform, for the calculation of only a few DFT values, and for recursive implementation. Formulations of the DFT in terms of tensor or Kronecker products look promising for developing algorithms for parallel and vector computer architectures.
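
As a concrete reminder of that earliest use, fast convolution amounts to zero-padding both sequences, multiplying their DFTs, and inverse transforming. A minimal Python sketch (illustrative only, with names of our choosing) follows.

```python
import numpy as np

def fft_convolve(x, h):
    """Linear convolution of x and h via zero-padded FFTs (fast convolution)."""
    n = len(x) + len(h) - 1                  # length of the linear convolution
    m = 1 << (n - 1).bit_length()            # round up to a power of two
    y = np.fft.ifft(np.fft.fft(x, m) * np.fft.fft(h, m))[:n]
    return y.real                            # inputs assumed real here

x, h = np.random.randn(100), np.random.randn(32)
assert np.allclose(fft_convolve(x, h), np.convolve(x, h))
```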

Various approaches to calculating approximate DFTs have been based on CORDIC methods, short word lengths, or some form of pruning. A newer method that uses the characteristics of the signals being transformed combines the discrete wavelet transform (DWT) with the DFT to give an approximate FFT with \(O(N)\) multiplications for certain signal classes. A similar approach has been developed using filter banks.

The study of efficient algorithms not only has a long history and a large bibliography; it is still an exciting research field where new results are used in practical applications.

Contributor

  • C. Sidney Burrus