The publication by Cooley and Tukey in 1965 of an efficient algorithm for the calculation of the DFT was a major turning point in the development of digital signal processing. During the five or so years that followed, various extensions and modifications were made to the original algorithm. By the early 1970s the practical programs were basically in the form used today. The standard development shows how the DFT of a length-N sequence can be simply calculated from the two length-N/2 DFT's of the even-indexed terms and the odd-indexed terms. This is then applied to the two half-length DFT's to give four quarter-length DFT's, and repeated until N scalars are left which are the DFT values. Because the even- and odd-indexed terms are taken alternately, the two forms of the resulting programs are called decimation-in-time and decimation-in-frequency. For a length of $$2^M$$, the dividing process is repeated $$M=\log _{2}N$$ times and requires N multiplications each time. This gives the famous formula for the computational complexity of the FFT of $$N\log _{2}N$$ which was derived in Multidimensional Index Mapping.
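The recursive even/odd splitting described above can be sketched as follows; this is a minimal illustration of the radix-2 decimation-in-time idea, not an optimized production FFT, and it assumes the length N is a power of two:

```python
import cmath

def fft(x):
    """Recursive radix-2 decimation-in-time FFT (illustrative sketch).

    The length-N DFT is formed from the two length-N/2 DFT's of the
    even-indexed and odd-indexed terms, and the split is applied
    recursively until length-1 DFT's (scalars) remain.
    """
    N = len(x)
    if N == 1:
        return list(x)          # a length-1 DFT is the sample itself
    even = fft(x[0::2])         # DFT of even-indexed terms
    odd = fft(x[1::2])          # DFT of odd-indexed terms
    out = [0] * N
    for k in range(N // 2):
        # Combine the half-length DFT's using the twiddle factor W_N^k
        t = cmath.exp(-2j * cmath.pi * k / N) * odd[k]
        out[k] = even[k] + t
        out[k + N // 2] = even[k] - t
    return out
```

Each of the $$\log_2 N$$ levels of recursion touches all N samples, which is the source of the $$N\log_2 N$$ operation count discussed above.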