The faster-than-fast Fourier transform (web.mit.edu)
220 points by pmikal on Jan 18, 2012 | hide | past | favorite | 31 comments



Today's pedantic minute is sponsored by Carl Friedrich Gauss.

The FFT algorithm was rediscovered in the 1960s. Gauss discovered it in the early 19th century while studying the phases of the moon, before Fourier described his transform. He published it in Latin, and the text was lost until 1984.


Not to detract from Gauss' remarkable discovery, but there is no relation between Gauss' FFT algorithm and the continuous Fourier transform that Fourier discovered. It is also worth mentioning that credit for the discrete Fourier transform goes to Vandermonde and pre-dates Gauss' FFT algorithm by about 30 years.


I didn't know about Vandermonde; it's always nice to learn something.

I'm no mathematician, and my understanding of the subject is limited. How is the discrete Fourier transform unrelated to its continuous counterpart? Their definitions are extremely similar...

From a semantic point of view, it is nice that all these transforms bear the same name, but historically it is a bit unfair to attribute Fourier analysis to Fourier, who didn't discover it (even though he greatly developed the subject).

I know there's an eponymous law that describes this phenomenon, and it is of course not named after its inventor, but I can't remember its name.


The discrete and continuous transforms are closely related. DFTs and their properties were known before Gauss and his 1805 paper is about computing them quickly. I wanted to emphasize the difference between the problem of finding ways to quickly compute DFTs and Fourier's discovery of the continuous transform.


Now I get it, thanks.



I wonder if HN clipped the quote in that url. Let's try this: http://en.wikipedia.org/wiki/Stigler%27s_law_of_eponymy

Yep--for some reason that worked.


Thanks :-)


You can find descriptions of the FFT algorithm's history in [1, 2]. Both are, unfortunately, freely available only to subscribers, sorry.

[1] http://www.springerlink.com/content/j30x8k122v828w87

[2] http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1162...


Or here for no fee.

http://www.cis.rit.edu/class/simg716/Gauss_History_FFT.pdf

As a Gauss fanboy, let me just say wow. Especially note that from 1828 to 1965 various versions of the FFT were presented for special cases, but Gauss had the complete version in 1805.


Gauss again!


Paper: http://arxiv.org/pdf/1201.2501v1

It starts by dividing the spectrum into frequency bins, and assumes that each bin usually contains zero or one non-zero coefficient. It wasn't clear to me what happens when a large number of non-zero coefficients are clustered together in one bin. Can O(k log n) still be achieved when most frequencies are clustered?
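This isn't the paper's actual implementation, but the aliasing trick behind the binning can be sketched in a few lines of NumPy (the sizes and the test tone are made up): subsampling in time folds the n-point spectrum so that frequency f lands in bin f mod B. When several large frequencies share a bin they collide, which is what the paper's random spectrum permutations are there to resolve.

```python
import numpy as np

n, B = 1024, 16                  # hypothetical signal length and bin count
f0 = 200                         # a single active frequency (made up)
x = np.exp(2j * np.pi * f0 * np.arange(n) / n)

# Subsampling by n/B in time aliases the spectrum: after a B-point FFT,
# frequency f of the original signal lands in bin f mod B.
y = x[:: n // B]                 # B time samples
bins = np.fft.fft(y)
hot = int(np.argmax(np.abs(bins)))
print(hot)                       # 200 % 16 == 8
```

With one tone, exactly one bin fires; the hard case the parent asks about is when many tones alias into the same bin.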


Interesting. I've a friend in Australia who's been doing some work analysing rainfall using arrays of wire across long distances (details somewhere here: http://wiredlab.org/), and during a recent conversation he pointed me at this paper: http://www.icita.org/icita2004/abstracts/128-5.htm which seems to use another method (involving primes) for calculating Fourier transforms, and claims a 95% speed-up.

Sadly I haven't had a chance to look into it in any great detail yet, although it's "in the queue" of things to poke at. I thought it worth pointing out in case anyone else might find it useful.


Sparse transforms like this are very interesting, and these seem like some very good performance results. It's worth noting, though, that FFTW (http://www.fftw.org/) is still much faster on the sizes of transforms that many (most?) applications do - less than 2^15 or so entries. Also, as the number of non-zero frequencies climbs, sparse algorithms very quickly get overtaken by good implementations of more standard FFT algorithms.

Still, very cool stuff.


They actually point out a specific comparison to FFTW in the paper: their "preliminary implementation" is faster than FFTW for n=2^22 and k<=2^17, and they point out that this performance is a distinct advantage over previous algorithms, which were faster for only k<=2000 or so. In other words, it seems that this algorithm is faster than FFTW for a much larger number of non-zero frequencies than was previously possible.

I don't know how to estimate what values of n would be relevant for today's applications, but n=2^22≈4M doesn't seem like such a big number for images or video (if signal length roughly corresponds to number of pixels). Any thoughts?


> I don't know how to estimate what values of n would be relevant for today's applications, but n=2^22≈4M doesn't seem like such a big number for images or video (if signal length roughly corresponds to number of pixels). Any thoughts?

Transform codecs for video typically don't transform an entire frame at a time, but rather blocks of the frame. This is for three reasons. The first is efficiency: most of these codecs date back to a time when huge transforms (FFT, DCT, etc.) were very expensive. Possibly the most important is that local artifacts are less noticeable than global artifacts in video, and using blocks is a nice way to bound them. Finally, the point of using a transform is to be able to extract some 'structure' from the data that can be used to encode it more efficiently. Small blocks make this easier to do in a perceptually acceptable or transparent way.

There are absolutely applications for large transforms, but video encoding typically isn't one. Incidentally, many modern approaches are based on other transforms, like the discrete wavelet transform, or approximations to transforms like the DCT.


If FFTW were invented today, it would stand for Fourier For The Win


As usual, the article misrepresents video compression. Almost all useful coding tools are spatial, not frequency-based; the only famous one (the DCT) was replaced with a rough approximation in H.264 and it works just as well.

This is because lots of realistic images don't really have meaningful frequency-based content. (Imagine sampling every 8 or 16 pixels - would those values be related to each other in any way?)


Don't think of them in terms of the full function. You can approximate a straight line by a small part of a long sine wave. http://www.wolframalpha.com/input/?i=Sin%28x%29++from+.01+to...

But, for a real world example look at jpeg.

> Next, each 8×8 block of each component (Y, Cb, Cr) is converted to a frequency-___domain representation, using a normalized, two-dimensional type-II discrete cosine transform (DCT).

http://en.wikipedia.org/wiki/JPEG
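The per-block transform is easy to reproduce if you're curious. A sketch in NumPy (the gradient block is toy data, not a real image) shows why it helps: a smooth 8×8 block concentrates its energy in the low-frequency (top-left) corner, which is what makes the later quantization step so effective.

```python
import numpy as np

N = 8
# Orthonormal type-II DCT matrix: row k is the k-th cosine basis vector.
m = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * m[None, :] + 1) * m[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

block = np.linspace(0, 255, N * N).reshape(N, N)  # toy data: a smooth gradient
coeffs = C @ block @ C.T                          # 2-D DCT: rows, then columns

# A smooth block concentrates its energy in the low-frequency corner.
energy = np.abs(coeffs)
print(energy[0, 0] > energy[4:, 4:].max())        # True
```

Since C is orthonormal, `C.T @ coeffs @ C` recovers the original block exactly, which is why the lossy part of JPEG is the quantization, not the transform itself.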


I guess this is the project page, also contains the papers and some benchmarks:

http://groups.csail.mit.edu/netmit/sFFT


For the plain ol' FFT, I've found the "Fastest Fourier Transform in the West": http://www.fftw.org/ to be an excellent library, which implements a whole bunch of different solvers and picks the appropriate ones to use for your data.

Hopefully this new algorithm can be added to their toolbox in the future.


In the paper, they state that they compared their implementation to FFTW and beat it handily.


There are lots of things you can do with practical algorithms when you start taking a data-first approach and fit the optimizations to the probability distribution of the data, i.e. the fastest path for the most probable or important input and vice versa. Many approximations are based on the idea of ignoring input data that is improbable or has little effect on the outcome, and can quite safely be pruned out early.
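As a toy illustration of that pruning idea (signal size, sparsity, and noise level are all invented here): if a signal is approximately k-sparse in frequency, keeping only the k largest Fourier coefficients and discarding the improbable rest loses almost nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1024, 5                                # made-up sizes
# Build a signal that is (approximately) k-sparse in frequency.
freqs = rng.choice(n, size=k, replace=False)
spectrum = np.zeros(n, dtype=complex)
spectrum[freqs] = 10 * rng.normal(size=k)
x = np.fft.ifft(spectrum) + 1e-4 * rng.normal(size=n)

# Prune: keep only the k largest coefficients, drop the small/improbable rest.
X = np.fft.fft(x)
keep = np.argsort(np.abs(X))[-k:]
X_sparse = np.zeros_like(X)
X_sparse[keep] = X[keep]
x_hat = np.fft.ifft(X_sparse)

rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(rel_err)                                # small: almost nothing was lost
```

The sparse FFT's contribution is getting to (roughly) this k-term answer without ever computing the full n-point transform.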


Or as they say in computer architecture 101, optimizing for the common case :)



Is the algorithm patented?


Mathematical methods (as such!) are a specific exclusion in Europe (EPC 52(2) &(3)).

Of course, that "as such" gives you somewhere to hang your patent application, if you can afford the patent lawyers' fees to protect the monopoly.


Mathematical methods are excluded in the U.S. too by case law (Benson, Flook, Diehr). But "law" in terms of what's on the books and "law" in terms of what happens when you go to court are two different things.


The talk at SODA is in 1.5 hours, I'm going to attend it.

Any other HNers at SODA?


How was the talk?


This is very interesting, thank you. I'll try to implement it this weekend.



