In a previous article I briefly touched on FFTs. I mentioned that they are a faster way to perform polynomial interpolation. But FFTs are used for more than just polynomials. Specifically, they are used for computing DFTs. A DFT is a system of related linear equations whose basis lies in the roots of unity. DFTs have further applications in signal processing.

In order to motivate the need for the FFT, let's examine a degree-12 polynomial whose basis lies in the reals, evaluated at the points $x_0, x_1, \dots, x_{12}$:

$$P(x_j) = a_0 + a_1 x_j + a_2 x_j^2 + \dots + a_{12} x_j^{12}$$

It's clear from writing out this expression that the complexity of calculating each evaluation $P(x_j)$ for $j = 0, \dots, 12$ is $O(n)$. Another way to look at this system of equations is through the lens of matrix multiplication:

$$\begin{pmatrix} P(x_0) \\ P(x_1) \\ \vdots \\ P(x_{12}) \end{pmatrix} = \begin{pmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^{12} \\ 1 & x_1 & x_1^2 & \cdots & x_1^{12} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_{12} & x_{12}^2 & \cdots & x_{12}^{12} \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_{12} \end{pmatrix}$$

This matrix is also known as the *Vandermonde matrix*. The Vandermonde matrix is invertible, since it is a square matrix built from distinct evaluation points. The equation can therefore be rearranged, allowing us to derive the coefficients $a_i$ of the polynomial given the evaluations $P(x_j)$. This is an algebraic form of polynomial interpolation; other forms include Lagrange polynomials, Newton's polynomials, etc. Unfortunately, performing either operation, whether polynomial evaluation or interpolation, leads to a complexity of $O(n^2)$.
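To make this concrete, here's a minimal sketch in pure Python (the helper names are my own, not from any library) of evaluation via Vandermonde rows and interpolation by solving the Vandermonde system with Gauss-Jordan elimination:

```python
def vandermonde_row(x, n):
    # Row [1, x, x^2, ..., x^(n-1)] of the Vandermonde matrix.
    return [x ** j for j in range(n)]

def evaluate(coeffs, points):
    # Evaluation: each point costs O(n), so n points cost O(n^2).
    n = len(coeffs)
    return [sum(c * p for c, p in zip(coeffs, vandermonde_row(x, n)))
            for x in points]

def interpolate(points, values):
    # Interpolation: solve V a = y by Gauss-Jordan elimination (O(n^3) here;
    # Lagrange or Newton forms bring it down to O(n^2)).
    n = len(points)
    V = [vandermonde_row(x, n) + [y] for x, y in zip(points, values)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(V[r][col]))
        V[col], V[pivot] = V[pivot], V[col]
        for row in range(n):
            if row != col:
                f = V[row][col] / V[col][col]
                V[row] = [a - f * b for a, b in zip(V[row], V[col])]
    return [V[i][n] / V[i][i] for i in range(n)]
```

Round-tripping a coefficient vector through `evaluate` and `interpolate` at distinct points recovers the original coefficients, which is exactly the invertibility of the Vandermonde matrix.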

# Discrete Fourier Transform

The Fast Fourier Transform provides a way to transform the complexity of computing a DFT of size $n$, for which the naive approach takes $O(n^2)$, into something quasilinear. In order for this to happen, we need to perform a *change of basis*, i.e. rewrite the coordinates of our polynomial to be elements of a cyclic group, such that they are indexed by successive powers of the generator of this cyclic group. Thus we get the DFT

$$X_k = \sum_{j=0}^{n-1} a_j \omega^{jk}, \qquad k = 0, \dots, n-1$$

where $\omega$ is a generator of the cyclic group, such that $\omega^n = 1$. Let's look at this in matrix form for $n = 4$.

$$\begin{pmatrix} X_0 \\ X_1 \\ X_2 \\ X_3 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & \omega & \omega^2 & \omega^3 \\ 1 & \omega^2 & \omega^4 & \omega^6 \\ 1 & \omega^3 & \omega^6 & \omega^9 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{pmatrix}$$

Simplifying, since $\omega^4 = 1$:

$$\begin{pmatrix} X_0 \\ X_1 \\ X_2 \\ X_3 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & \omega & \omega^2 & \omega^3 \\ 1 & \omega^2 & 1 & \omega^2 \\ 1 & \omega^3 & \omega^2 & \omega \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{pmatrix}$$
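A direct implementation of this matrix-vector product, a minimal sketch assuming the usual complex generator $\omega = e^{-2\pi i/n}$ (any generator of the cyclic group works):

```python
import cmath

def dft_naive(a):
    # X_k = sum_j a_j * w^(j*k) with w = e^(-2*pi*i/n):
    # n output sums of n terms each, hence O(n^2).
    n = len(a)
    w = cmath.exp(-2j * cmath.pi / n)
    return [sum(a[j] * w ** (j * k) for j in range(n)) for k in range(n)]
```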

You'll notice that a lot of terms seem to be repeated, seemingly at random; this *periodicity* is what the Fast Fourier Transform takes advantage of.

# Cooley-Tukey

There are different FFT algorithms that can be applied depending on the structure of the cyclic group, but we'll focus our attention on the most popular one, the Cooley-Tukey FFT. In their 1965 paper, they propose an algorithm for machines to compute the DFT much faster than ever before (although some would point out that Gauss did it first). Their paper exploits the structure of the cyclic group by changing the indexing of the DFT, which gives rise to the faster algorithm. Suppose that the size of our DFT is composite, such that $n = n_1 n_2$.

Then let

$$j = n_2 j_1 + j_2, \qquad k = n_1 k_2 + k_1$$

where $j_1, k_1 \in \{0, \dots, n_1 - 1\}$ and $j_2, k_2 \in \{0, \dots, n_2 - 1\}$.

Substituting the terms of $j$ & $k$ in the DFT, we get

$$X_{n_1 k_2 + k_1} = \sum_{j_2=0}^{n_2-1} \sum_{j_1=0}^{n_1-1} a_{n_2 j_1 + j_2}\, \omega^{(n_2 j_1 + j_2)(n_1 k_2 + k_1)}$$

Since $\omega^{n_1 n_2} = \omega^n = 1$, the cross term $\omega^{n_1 n_2 j_1 k_2}$ is constant, and we can rewrite the exponential in terms of only its remaining variables:

$$\omega^{(n_2 j_1 + j_2)(n_1 k_2 + k_1)} = \omega^{n_2 j_1 k_1}\, \omega^{n_1 j_2 k_2}\, \omega^{j_2 k_1}$$

Finally we obtain the equation

$$X_{n_1 k_2 + k_1} = \sum_{j_2=0}^{n_2-1} \left[ \omega^{j_2 k_1} \sum_{j_1=0}^{n_1-1} a_{n_2 j_1 + j_2}\, \omega^{n_2 j_1 k_1} \right] \omega^{n_1 j_2 k_2}$$

We can observe that the inner sum over $j_1$ only depends on $k_1$ and $j_2$; we can therefore write:

$$A_{k_1, j_2} = \sum_{j_1=0}^{n_1-1} a_{n_2 j_1 + j_2}\, \omega^{n_2 j_1 k_1}$$

which is a DFT of size $n_1$ for each fixed $j_2$, since $\omega^{n_2}$ is an $n_1$-th root of unity.

So that the DFT can then be written as:

$$X_{n_1 k_2 + k_1} = \sum_{j_2=0}^{n_2-1} \left( \omega^{j_2 k_1} A_{k_1, j_2} \right) \omega^{n_1 j_2 k_2}$$
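One decomposition step can be sketched directly from this formula (a hypothetical `cooley_tukey_step`, again assuming $\omega = e^{-2\pi i/n}$):

```python
import cmath

def cooley_tukey_step(a, n1, n2):
    # One Cooley-Tukey step for n = n1 * n2:
    #   X[n1*k2 + k1] = sum_{j2} (w^(j2*k1) * A[k1][j2]) * w^(n1*j2*k2)
    # where A[k1][j2] = sum_{j1} a[n2*j1 + j2] * w^(n2*j1*k1) are the
    # inner size-n1 DFTs (w^n2 is an n1-th root of unity).
    n = n1 * n2
    w = cmath.exp(-2j * cmath.pi / n)
    A = [[sum(a[n2 * j1 + j2] * w ** (n2 * j1 * k1) for j1 in range(n1))
          for j2 in range(n2)] for k1 in range(n1)]
    X = [0j] * n
    for k1 in range(n1):
        for k2 in range(n2):
            X[n1 * k2 + k1] = sum(
                w ** (j2 * k1) * A[k1][j2] * w ** (n1 * j2 * k2)
                for j2 in range(n2))
    return X
```

For any factorization $n = n_1 n_2$ this produces the same output as the naive DFT, just grouped into smaller DFTs.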

Let's work out an example DFT of size $n = 4$, with $n_1 = 2$ & $n_2 = 2$, using the Cooley-Tukey FFT:

$$A_{k_1, j_2} = \sum_{j_1=0}^{1} a_{2 j_1 + j_2}\, \omega^{2 j_1 k_1}$$

Simplifying:

$$A_{0,0} = a_0 + a_2 \qquad A_{0,1} = a_1 + a_3 \qquad A_{1,0} = a_0 + \omega^2 a_2 \qquad A_{1,1} = a_1 + \omega^2 a_3$$

The keen eye will recognise that since $A$ & $X$ now have two indices, they are basically matrices, and the above equations can be seen through the lens of matrix multiplication:

$$\begin{pmatrix} A_{0,0} & A_{0,1} \\ A_{1,0} & A_{1,1} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & \omega^2 \end{pmatrix} \begin{pmatrix} a_0 & a_1 \\ a_2 & a_3 \end{pmatrix}$$

Next we write out the terms for $X$:

$$X_{2 k_2 + k_1} = \sum_{j_2=0}^{1} \left( \omega^{j_2 k_1} A_{k_1, j_2} \right) \omega^{2 j_2 k_2}$$

Simplifying, we get

$$X_0 = A_{0,0} + A_{0,1} \qquad X_1 = A_{1,0} + \omega A_{1,1} \qquad X_2 = A_{0,0} + \omega^2 A_{0,1} \qquad X_3 = A_{1,0} + \omega^3 A_{1,1}$$

Visualising the above equations in matrix form is a bit tricky, because we seem to be multiplying three matrices here. But upon closer inspection, two of them combine through an element-wise product, the so-called *Hadamard product*, given by:

$$B_{k_1, j_2} = \omega^{j_2 k_1} A_{k_1, j_2}, \qquad B = \begin{pmatrix} 1 & 1 \\ 1 & \omega \end{pmatrix} \circ \begin{pmatrix} A_{0,0} & A_{0,1} \\ A_{1,0} & A_{1,1} \end{pmatrix}$$

Revisiting the equation for $X$:

The matrix of terms $\omega^{j_2 k_1}$ is known as the *twiddle factors*.

Now it's finally time to multiply by our outer DFT matrix, but the equation for $X$ is a bit weird: we seem to be multiplying in row order instead of the conventional column order. This can be seen as multiplying by a transposed matrix, given as:

$$B^{T} = \begin{pmatrix} B_{0,0} & B_{1,0} \\ B_{0,1} & B_{1,1} \end{pmatrix}$$

Therefore we take the transpose of the result of this element-wise multiplication so that the multiplication is in the correct column-order.

Finally we multiply the transpose by the outer DFT matrix:

$$\begin{pmatrix} X_0 & X_1 \\ X_2 & X_3 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & \omega^2 \end{pmatrix} B^{T}$$
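The four matrix steps, inner DFTs, twiddle Hadamard product, transpose, and outer DFTs, can be sketched for this $n = 4$ example (with $\omega = e^{-2\pi i/4}$; the function name is my own):

```python
import cmath

def four_point_fft(a):
    w = cmath.exp(-2j * cmath.pi / 4)  # primitive 4th root of unity
    # Step 1: inner size-2 DFTs, one per column j2: A[k1][j2].
    A = [[sum(a[2 * j1 + j2] * w ** (2 * j1 * k1) for j1 in range(2))
          for j2 in range(2)] for k1 in range(2)]
    # Step 2: Hadamard (element-wise) product with the twiddles w^(j2*k1).
    B = [[w ** (j2 * k1) * A[k1][j2] for j2 in range(2)] for k1 in range(2)]
    # Step 3: transpose so the outer multiplication is in column order.
    Bt = [[B[k1][j2] for k1 in range(2)] for j2 in range(2)]
    # Step 4: outer size-2 DFTs, then flatten as X[2*k2 + k1].
    C = [[sum(w ** (2 * j2 * k2) * Bt[j2][k1] for j2 in range(2))
          for k1 in range(2)] for k2 in range(2)]
    return [C[k2][k1] for k2 in range(2) for k1 in range(2)]
```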

We can observe that computing the $n_2$ inner DFTs of size $n_1$ requires $n_2 \cdot n_1^2$ operations, while computing the $n_1$ outer DFTs of size $n_2$ requires $n_1 \cdot n_2^2$ operations. Both have a total of $n = n_1 n_2$ elements, hence the complexity for this algorithm is:

$$O(n_1 n_2 (n_1 + n_2)) = O(n (n_1 + n_2))$$

## Recursion

Let's observe one of the inner DFTs from the decomposition above.

It's clear that this is the exact same problem we started with; recall the matrix form of the original DFT. Hence the Cooley-Tukey FFT can be used to decompose this DFT further. Unfortunately, in this example we end up with the same complexity, since $2 + 2 = 2 \times 2$. But $4$ is the only case where the sum of the factors equals their product. In other cases, recursive application of the Cooley-Tukey FFT provides significant efficiency gains when $n$ is highly composite:

$$n = n_1 n_2 \cdots n_k$$

Then the complexity of the algorithm becomes:

$$O(n (n_1 + n_2 + \dots + n_k))$$
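The recursion can be sketched by always splitting off the smallest prime factor and recursing into both the inner and outer sub-DFTs (hypothetical function names; prime sizes fall back to the naive DFT):

```python
import cmath

def smallest_factor(n):
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n

def fft(a):
    # Recursive Cooley-Tukey: split n = n1 * n2, recurse on every sub-DFT.
    n = len(a)
    w = cmath.exp(-2j * cmath.pi / n)
    n1 = smallest_factor(n)
    if n1 == n:  # size 1 or prime: naive O(n^2) DFT as the base case
        return [sum(a[j] * w ** (j * k) for j in range(n)) for k in range(n)]
    n2 = n // n1
    # Inner size-n1 DFTs over each residue class j2 (inner[j2][k1] = A[k1][j2]).
    inner = [fft([a[n2 * j1 + j2] for j1 in range(n1)]) for j2 in range(n2)]
    X = [0j] * n
    for k1 in range(n1):
        # Apply the twiddles, then take the size-n2 outer DFT recursively.
        out = fft([w ** (j2 * k1) * inner[j2][k1] for j2 in range(n2)])
        for k2 in range(n2):
            X[n1 * k2 + k1] = out[k2]
    return X
```

When $n = 2^k$ every split is a factor of 2, which is the radix-2 special case discussed below.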

Some visual examples of the decomposition

The Cooley-Tukey FFT can also be combined with other FFT algorithms, like the Good-Thomas FFT for factors that are *coprime*; this method completely eliminates the need for twiddle factors. Rader's FFT can also be applied to DFTs of prime length, since they have no factors and can't be decomposed further by the Cooley-Tukey algorithm.

## Radix-2 FFT

What is popularly known as the *radix-2 FFT* is actually another form of the Cooley-Tukey FFT. Since the Cooley-Tukey FFT can be applied recursively, in the case when $n = 2^k$ you can continue to apply the FFT until you reach a base case of size-2 DFTs, which you can evaluate and start to combine. This transformation yields the fastest time to compute the DFT.

## Inverse FFT

The inverse FFT is the same as the FFT, except it's performed with inverse powers of the generator and finally multiplied by the inverse of the group order. This is given as:

$$a_j = \frac{1}{n} \sum_{k=0}^{n-1} X_k\, \omega^{-jk}$$

(Proof is left as an exercise for the reader)
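As a sketch of the statement (in naive $O(n^2)$ form for clarity; the same Cooley-Tukey machinery applies with $\omega^{-1}$ in place of $\omega$):

```python
import cmath

def dft(a, sign=-1):
    # sign=-1: forward transform; sign=+1: inverse powers of the generator.
    n = len(a)
    w = cmath.exp(sign * 2j * cmath.pi / n)
    return [sum(a[j] * w ** (j * k) for j in range(n)) for k in range(n)]

def inverse_dft(X):
    # Inverse powers of the generator, then scale by 1/n (the group order).
    n = len(X)
    return [x / n for x in dft(X, sign=+1)]
```

Round-tripping through `dft` and `inverse_dft` recovers the original coefficients up to floating-point error.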