Faster-Than-Fast Fourier Transform

First time accepted submitter CanEHdian writes "MIT News reports on research resulting in a faster-than-fast Fourier transform algorithm. 'At the Association for Computing Machinery's Symposium on Discrete Algorithms (SODA) this week, a group of MIT researchers will present a new algorithm that, in a large range of practically important cases, improves on the fast Fourier transform. Under some circumstances, the improvement can be dramatic — a tenfold increase in speed. The new algorithm could be particularly useful for image compression, enabling, say, smartphones to wirelessly transmit large video files without draining their batteries or consuming their monthly bandwidth allotments.'"
  • In before _____.

    Key phrase:

    "smartphones to wirelessly transmit large video files without ... consuming their monthly bandwidth allotments."

    Everyone see the connection to the Copyright Mayhem this week?

    Bueller?

    • Another use that I can see for this technology is emergency and impromptu news broadcasting, especially when faced with dictatorial authorities (like Iran, China, or Syria) that will do anything to stop unfavorable news from getting out to the world.

  • by TaoPhoenix ( 980487 ) <TaoPhoenix@yahoo.com> on Friday January 20, 2012 @04:56AM (#38759112) Journal

    Doesn't some part of Internet Security (of ____, I'll leave that to my betters to explain) rely on Fourier properties?

    So if we can bust those "Faster than Fast", what next?

    • Re:Security (Score:5, Informative)

      by SuricouRaven ( 1897204 ) on Friday January 20, 2012 @05:03AM (#38759168)
      Nope. FFTs are used in many fields, but cryptography is not one of them. Not since the old analog radio scramblers of decades past.
      • Re:Security (Score:5, Informative)

        by gnasher719 ( 869701 ) on Friday January 20, 2012 @05:55AM (#38759394)

        Nope. FFTs are used in many fields, but cryptography is not one of them. Not since the old analog radio scramblers of decades past.

        Informative and wrong. When you calculate powers of 4096-bit numbers, the fastest way to calculate the 4096-bit products is convolution via FFT.

        • Re:Security (Score:4, Informative)

          by Anonymous Coward on Friday January 20, 2012 @06:25AM (#38759504)

          Nope. FFTs are used in many fields, but cryptography is not one of them. Not since the old analog radio scramblers of decades past.

          Informative and wrong. When you calculate powers of 4096-bit numbers, the fastest way to calculate the 4096-bit products is convolution via FFT.

          These matrices are not sparse, though, and thus the algorithms under discussion are not suitable, or any faster than FFT.

        • I was not aware that the mathematics of multiplication were equivalent to the FFT. Interesting. I'm not sure how to multiply via convolution though, other than the obvious trivial cheat of a 1x1 matrix.
          • Re: (Score:3, Informative)

            by Damnshock ( 1293558 )

            Multiplication in the time domain = convolution in the transformed domain (and vice versa).

            Sometimes it's easier to go to the transformed domain, convolve, and then go back to the time domain :)
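
            A quick numerical check of that statement (a numpy sketch of my own, not from the parent post): convolving two sequences directly gives the same result as multiplying their FFTs pointwise and transforming back.

            ```python
            # Convolution theorem demo: time-domain convolution equals
            # pointwise multiplication in the frequency domain.
            import numpy as np

            rng = np.random.default_rng(0)
            a = rng.standard_normal(32)
            b = rng.standard_normal(32)

            direct = np.convolve(a, b)   # time-domain convolution, length 63
            n = 64                       # zero-pad so circular conv == linear conv
            via_fft = np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(b, n)).real[:len(direct)]
            assert np.allclose(direct, via_fft)
            ```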

          • Re:Security (Score:5, Insightful)

            by Doc Ruby ( 173196 ) on Friday January 20, 2012 @08:18AM (#38760052) Homepage Journal

            One of the compelling mathematical insights of Fourier's mathematics is that convolution in one domain is multiplication in the complementary domain (time vs. frequency). Along with the insight that multiplication in radix space is addition in logarithm space, these completely transformed (pun intended) the human understanding of space and the numbers with which we describe it.

          • Re:Security (Score:4, Informative)

            by atrizzah ( 532135 ) on Friday January 20, 2012 @09:16AM (#38760610)

            For numbers that are small enough to fit in your processor's registers, multiplication is a constant-time process. But when dealing with numbers much larger than your processor's word length (more than 4 bytes), you have to resort to the long-multiplication algorithm. As it turns out, multiplying polynomials is exactly the same as convolution, which is O(n^2), and long multiplication is a special case of polynomial multiplication where the variable is fixed:

            525 * 343 = (5x^2 + 2x + 5)(3x^2 + 4x + 3), evaluated at x = 10

            So for two large numbers, rather than going through the long process of convolving the digits (long-multiplying), you do the following process (sketched in code after the list):

            1. take the FFT of the digit sequence of each number, O(n*log(n))
            2. multiply point by point, O(n)
            3. take the IFFT of that result, O(n*log(n))
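
            A toy sketch of those three steps (my own illustration, not the poster's code; real bignum libraries use integer number-theoretic transforms, since floating-point rounding limits this approach to modest sizes):

            ```python
            # Multiply integers by convolving their digit sequences via the FFT,
            # following steps 1-3 above, then propagating the carries.
            import numpy as np

            def fft_multiply(a, b):
                xs = [int(d) for d in str(a)][::-1]        # least-significant digit first
                ys = [int(d) for d in str(b)][::-1]
                n = 1
                while n < len(xs) + len(ys):               # pad so circular wrap-around
                    n *= 2                                 # cannot corrupt the product
                fa = np.fft.fft(xs, n)                     # step 1: FFT each digit sequence
                fb = np.fft.fft(ys, n)
                conv = np.rint(np.fft.ifft(fa * fb).real)  # steps 2-3: multiply, then IFFT
                result, carry = 0, 0
                for i, c in enumerate(conv):               # resolve digits that exceed 9
                    carry += int(c)
                    result += (carry % 10) * 10 ** i
                    carry //= 10
                return result

            assert fft_multiply(525, 343) == 525 * 343
            ```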
            • I might also add that unless your numbers have a whole bunch of zeros, they aren't sparse, so the algorithm in the paper above doesn't help you. The algorithm in the paper basically says the more zeros you have in your inputs to your FFT, the faster you can do the FFT. This is mostly useful in data compression, where you intentionally throw out coefficients to save space. Although, most compression algorithms use variations of the discrete cosine transform (DCT) instead of the FFT. The DCT is very simil
        • Re:Security (Score:5, Informative)

          by misof ( 617420 ) on Friday January 20, 2012 @07:39AM (#38759868)

          Yes, FFT may be used in cryptography. But this is unrelated, as the first post in this thread talks about security. FFT has no connection to the security of cryptosystems.

          As far as I'm aware, the security of *absolutely no* cryptosystem used in practice depends in any way on the FFT.

          Yes, FFT gives us a way to multiply big integers quickly. But all cryptosystems that use big integers already *do* assume that everyone *can* multiply big integers quickly. Even if there was a ten-times speedup, this would mean absolutely no danger to their security.

          (And one final nitpick: FFT is not the fastest way to multiply 4096-bit integers; those are still considered fairly short, and you would probably get better performance from some simpler-but-asymptotically-slower algorithm.)

  • by tempmpi ( 233132 ) on Friday January 20, 2012 @04:56AM (#38759120)

    It doesn't improve the regular FFT, only sparse variants. Image and audio compression nearly always use the regular non-sparse FFT.

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      The paper claims the opposite: "Images and audio data are equally sparse. This sparsity provides the rationale underlying compression schemes such as MPEG and JPEG."

    • by MojoRilla ( 591502 ) on Friday January 20, 2012 @05:36AM (#38759306)
      This isn't right. The proposed algorithm is better on sparser signals, but is a regular FFT. The point is that many things we want to compress are sparse, or at least have pieces that are sparse. In JPEG, for example, images are broken into 8x8 pixel blocks and then DCT'ed (which is similar to FFT). Many of those blocks might be sparse, and benefit from this new approach, even if the most complex blocks aren't and don't benefit.
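
      To make that concrete, here is a small scipy illustration (my example, not from the paper): the DCT of a smooth 8x8 block concentrates its energy in a handful of coefficients, i.e. the block is sparse in the transform domain.

      ```python
      # DCT of a smooth 8x8 block: almost all coefficients are negligible.
      import numpy as np
      from scipy.fft import dctn

      block = np.fromfunction(lambda y, x: 100 + 2 * x + 3 * y, (8, 8))  # smooth gradient
      coeffs = dctn(block, norm='ortho')
      kept = np.abs(coeffs) > 0.01 * np.abs(coeffs).max()
      print(f"significant coefficients: {kept.sum()} of 64")  # only a handful
      ```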
      • Yeah, but a big-O improvement may do you no good if you only need n=8...
        • by dkf ( 304284 )

          Yeah, but a big-O improvement may do you no good if you only need n=8...

          Depends on what sort of complexity it is. When the complexity function is larger than polynomial, even n=8 can be worrying. (I remember working with algorithms that were O(EXP^2(n)) at one point, many years ago; they had combinatorial searches over a system where the test function was itself an NP-Hard problem. Nasty, and we couldn't use the structure of the problem to restrict anything too much either. Suffice to say that at the time we were having problems finding supercomputers large enough to run our co

    • by Anonymous Coward on Friday January 20, 2012 @05:39AM (#38759316)

      I'm only awake due to a bathroom break, so I may not be the most clear-minded at the moment, but subtracting coefficients from the bins seems like a pretty obvious permutation of algorithmic techniques to try, and having 2/3 probability of a correct FT analysis doesn't seem like an inspiring trade-off (skimmed... perhaps they explain ways to improve this?) Is this anything more than a highly specialized, not particularly practical version of the algorithm? I'm used to working on algorithms which are much more general-case (e.g., new solutions for SCALAPACK problems).

      • Whoa (Score:5, Funny)

        by HappyClown ( 668699 ) on Friday January 20, 2012 @05:57AM (#38759406)
        Let me get this straight - you're saying you woke up in the middle of the night intending to take a dump, and somehow ended up posting about complex mathematical algorithms on the Internet instead? Respect.
        • Let me get this straight - you're saying you woke up in the middle of the night intending to take a dump, and somehow ended up posting about complex mathematical algorithms on the Internet instead? Respect.

          If the dump takes a long time (which it sometimes does), what else is one supposed to do in the meantime, sitting on the toilet bowl with a laptop in one's lap?

    • Saying that compression uses the regular "non-sparse" algorithm is rather meaningless; they use what is available, and I don't believe there was a sparse-optimized algo until now.

  • by sethstorm ( 512897 ) on Friday January 20, 2012 @05:00AM (#38759146) Homepage

    So the cat gets transformed even faster [xkcd.com].

    (apologies to XKCD)

  • by sound+vision ( 884283 ) on Friday January 20, 2012 @05:02AM (#38759156) Journal
    Lossy compression formats such as MP3 and AAC, and according to TFS video compression as well, are all fundamentally based on the FFT (fast Fourier transform). Depending on how well this new specific method works, we could see decreases in the time it takes to encode all this stuff. Which can be quite a long time... have you ever tried to do a 2-pass MPEG-4 conversion on a feature-length digital video? Video encoding doesn't scale well to multiple cores either, so nowadays, when performance increases come mostly from adding more cores, video encoding hasn't been getting faster at the same pace it used to. I'm thinking this new FFT algorithm could make a big difference in encoding speeds.

    Additionally, I know lots of audio (and I'm assuming video) effects, DSPs of the kind you find in Adobe Audition, Audacity, et al., rely on FFTs. Computing such effects could get a lot faster. With increases in both the speed of adding effects, and compression afterwards, we might be seeing a significant speedup in the overall digital audio/video-editing process.
    • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Friday January 20, 2012 @06:26AM (#38759506) Homepage Journal

      Video encoding doesn't scale well to multiple cores either,

      Welp, that is patently false. Video encoding is embarrassingly parallel in any case where the video has keyframes and is sufficiently long to hand a roughly equal number of chunks of video (keyframe to keyframe) to each core.

      • Absolutely. I have four dual-core CPUs in my machine. When I encode video, all four are clocking at about 90-95%. Encoding an hour of video takes in the neighborhood of 10-15 minutes per pass depending on the movie, and my hardware is even a few years old; plus I'm typically decoding at the same time from a different source (e.g., a DVD).

    • by alexhs ( 877055 )

      I'm thinking this new FFT algorithm could make a big difference in encoding speeds.

      I'm not so sure. It has the potential to make a big difference in decoding speed:
      I would think that the input signal is not that sparse, but rather that it has plenty of small components. The goal of the compression is to remove the smallest components as unnoticeable noise. Only after that step do you get a sparse signal. So what you can actually improve significantly is the inverse FFT (which uses basically the same algorithm as the FFT), used for decoding.
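
      A rough numpy sketch of that point (my own illustration): the raw spectrum has energy in essentially every bin, but after the encoder's thresholding step only a few bins survive, so it is the decoder's inverse transform that sees a sparse input.

      ```python
      # Before thresholding, nearly every FFT bin is nonzero; after the
      # lossy thresholding step, only a few bins survive.
      import numpy as np

      rng = np.random.default_rng(1)
      t = np.arange(1024)
      signal = np.sin(0.05 * t) + 0.5 * np.sin(0.21 * t) + 0.05 * rng.standard_normal(1024)

      spectrum = np.fft.fft(signal)
      before = np.count_nonzero(np.abs(spectrum) > 1e-9)               # essentially all bins
      spectrum[np.abs(spectrum) < 0.05 * np.abs(spectrum).max()] = 0   # encoder drops small bins
      after = np.count_nonzero(spectrum)                               # a small fraction remain
      print(before, after)
      decoded = np.fft.ifft(spectrum).real   # the decoder's IFFT gets sparse input
      ```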

    • Re: (Score:3, Informative)

      by Anonymous Coward

      I have implemented (baseline) h264 on GPU, and I tuned it for at least 6 months. The FFT (actually a fast-DCT variant) only took 20% of the runtime; because it was done on 4x4 blocks, it was the most parallel part of the codec and the easiest to implement. The lossless compression and the motion estimation are far more complicated, especially motion estimation, which is very important for good bitrate/quality in a video codec.
      So this wouldn't really speed up h264-like video codecs (only a little bit).

    • Maybe on audio, but on video the FFT itself is only a tiny part of the encoding time. The really slow part is finding motion vectors, which is a completely separate step, and usually the slowest one.

      Video encoding actually should scale fantastically to multiple cores: it's almost linear, in theory. It's just that very few encoders support it, due to the complexity of multithreading what is already a very complicated program.
    • Lossy compression formats such as MP3 and AAC, and according to TFS video compression as well, are all fundamentally based on the FFT (fast Fourier transform).

      Actually, most (if not all?) of these lossy compression schemes are based on the discrete cosine transform, not the discrete Fourier transform. That is definitely true for MP3 and AAC, and also true for JPEG, which most video encoders probably use for key frames.

  • by Viol8 ( 599362 ) on Friday January 20, 2012 @05:14AM (#38759218) Homepage

    I'm sure I'm not the only person who completely understands what an FFT does but simply can't get his head around the actual implementation details. I simply don't get how a signal of many mixed frequencies, which essentially means the amplitude is moving all over the place, can travel through some equations and a neat list of the fundamental and harmonics can pop out. It's mathematical magic to me. Wish I could "get" it. But I guess I'll just have to settle for using it.

    • by lachlan76 ( 770870 ) on Friday January 20, 2012 @05:34AM (#38759292)

      In the DFT case the signal is merely a finite number of samples, so you can forget about the time-dependence and look at it as a long vector. If you sit down and work out the maths, you find that the vectors corresponding to the constant function and the various sinusoids are linearly independent.

      All you're really doing then is solving a system of linear equations; they don't just have to be sinusoids, you could find the coefficients for any linearly independent set of functions. You could just as easily replace one of the sinusoids with an exponential function and still get coefficients out such that you could do your weighted sum of vectors and get back your original function.
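
      In code, the linear-system view looks like this (a numpy sketch under the usual DFT conventions): stack the sampled sinusoids as columns of a matrix, solve for the weights, and they match the FFT output up to the conventional 1/N scaling.

      ```python
      # The DFT seen as solving a system of linear equations in a sinusoid basis.
      import numpy as np

      N = 8
      n = np.arange(N)
      basis = np.exp(2j * np.pi * np.outer(n, n) / N)   # column k: the k-th sampled sinusoid

      x = np.random.default_rng(2).standard_normal(N)
      weights = np.linalg.solve(basis, x)               # solve basis @ weights == x
      assert np.allclose(weights, np.fft.fft(x) / N)    # matches the FFT up to 1/N
      ```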

      • by Viol8 ( 599362 )

        I'm afraid you lost me at "long vector"

        "sit down and work out the maths"

        Ah, if only I could. But I'd settle for understanding it first! :o)

        • by lachlan76 ( 770870 ) on Friday January 20, 2012 @06:04AM (#38759426)

          "Sit down and work out the maths" is really just code for "here's one I prepared earlier". If you're keen on this sort of thing, read a bit about solving systems of linear equations, and you will hopefully be able to look at the problem, exclaim "this is trivial!" and live forever happy in the knowledge that it is indeed theoretically soluble (as I tried to describe above) without having to concern ones self with the computational subtleties that suck all of the fun out of it.

          MIT OCW have a set of videos on linear algebra if your curiosity is sufficient to justify chewing up some time on it. Probably some of the most useful partially-post-high-school maths that one can learn. Here [ups.edu] is a free textbook on it.

          • It's beautiful, isn't it.

            I read the linked articles and it dawned on me. So very beautiful.

            I'm glad there are others. :))

      • Rereading your comment, I think I may have slightly misunderstood what you were saying. If you have produced a digital signal by sampling an analogue one (i.e. if you look at the values of a signal at various points) over a finite interval, the DFT (or the FFT, its most popular algorithm) will not tell you what the fundamental frequency is.

        You can get an approximation by making assumptions about the nature of the signals that you sample, but this is not the best way to do it, and those assumptions are normally th

    • I only got the FFT by working forward from sine waves of different frequencies to triangle, sawtooth, square, and other waves. After you do this, you realize it must be possible to go the other way.

      A very coarse FFT can be calculated by multiplying the waveform by two square waves that are 90 degrees out of phase and summing each sequence. If there is no component corresponding to the square wave frequency, the two totals will be near zero.
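
      A sketch of that coarse detector (my reconstruction of the idea in numpy, not the poster's code): correlate the input with two square waves 90 degrees apart and combine the two totals.

      ```python
      # Coarse single-frequency detector using quadrature square waves.
      import numpy as np

      def coarse_bin(signal, period):
          t = np.arange(len(signal))
          in_phase = np.sign(np.sin(2 * np.pi * t / period))
          quadrature = np.sign(np.cos(2 * np.pi * t / period))  # 90 degrees out of phase
          return np.hypot(signal @ in_phase, signal @ quadrature)

      t = np.arange(1024)
      tone = np.sin(2 * np.pi * t / 64)
      print(coarse_bin(tone, 64))    # large: a component at this frequency
      print(coarse_bin(tone, 100))   # comparatively small: no matching component
      ```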

    • Well, that is not what the FFT does. You have to look at the plain old FT (Fourier transform) first, not the FFT (fast Fourier transform), and better yet at the CFT (continuous Fourier transform) first.

      CFT transforms a continuous signal into continuous frequencies, and it does that by integrating the input signal multiplied by a complex sine wave. The mathematics for that is reasonably understandable, doing the calculations is horribly difficult.
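
      For reference, the integral described above is the standard continuous Fourier transform (my addition; this is the usual textbook definition):

      ```latex
      X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-2\pi i f t}\, dt
      ```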

      FT does an approximation of CFT, by first changing the input signal to discrete
    • I simply don't get how a signal of many mixed frequencies which essentially means the amplitude is moving all over the place can travel through some equations and a neat list of the fundamental and harmonics can pop out.

      But it's easiest to start with the O(n^2) DFT, not the FFT.

      As always, the only way to truly understand it is to work out the maths. But, if it helps, let's try a simple one.

      Imagine you have some vectors in 4D called a, b, c, d, which are respectively [1 0 0 0], [0 1 0 0], [0 0 1 0] and [0 0 0 1].

      • by Splab ( 574204 )

        Why is it that math books and people into math use words like "obvious" or "you can very easily"? It really, really destroys your reader's enthusiasm for following what you are doing.

        Yes, it might very well be obvious and trivial to you, but it sure as hell makes me feel dumb when I don't get the obvious on the first pass.

        Other than that, nice intro to FT.

        • The distinction I suppose is between conclusions that one can work out by inspection or immediate application of a theorem, rather than by taking a detour through intermediate steps. It's probably not quite so cut-and-dried, but that's how I tend to see things. Such turns of phrase are just about part of the jargon.

          It almost functions as a punctuation mark to reassure the paranoid (which sometimes is practically everyone) that "no, this next bit really is what it looks like".

      • ... how do you get from a load of, say, 16-bit samples to vectors? If I have sine wave sample data in an array such as the following:

        0,5,10,5,0,-5,-10,-5,0

        how does that then become a sequence of vectors to calculate the FT?

        • 0,5,10,5,0,-5,-10,-5,0

          You can consider them as vectors, but normally they are considered complex numbers.

          0 + 0i, 5 + 0i, 10 + 0i, 5 + 0i, ....

          .. how do you get from a load of, say, 16-bit samples to vectors? If I have sine wave sample data in an array such as the following:

          It's a single vector, not vectors (plural): that's the important distinction. All the samples form a single vector.

          In other words, if you have 1024 samples, then put them into a 1024 dimensional vector.

          Does that clear it up?

    • by Delgul ( 515042 )

      It is actually quite simple provided you have at least some basic math skills. Don't try to wrap your head around the math involved just yet. Just do this (a numpy sketch follows the list):

      1) Have a look at http://en.wikipedia.org/wiki/Fourier_transform [wikipedia.org] and only look at Definition and Introduction.
      2) Get some tool like Matlab or Octave (the latter is OSS)
      3) Generate a pure sine wave signal and put that through the formulas you found in 1). You should get a single spike in your results
      4) Now add a second sine wave with a different freque
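
      A numpy version of the experiment above (my sketch; the Matlab/Octave code is analogous), assuming 1 kHz sampling and one second of samples:

      ```python
      # Steps 3 and 4: a pure sine gives one spike; a second sine adds another.
      import numpy as np

      fs = 1000
      t = np.arange(fs) / fs
      x = np.sin(2 * np.pi * 50 * t)            # step 3: a pure 50 Hz sine
      print(np.abs(np.fft.rfft(x)).argmax())    # 50 -- a single spike

      x += np.sin(2 * np.pi * 120 * t)          # step 4: add a 120 Hz sine
      print(np.flatnonzero(np.abs(np.fft.rfft(x)) > 1))   # [ 50 120] -- two spikes
      ```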

    • Rather than understanding the FFT (an O(n log n) algorithm for computing the DFT, which is normally an O(n^2) operation), you should first understand how the basic DFT equation works, which is independent for each of the frequencies. It just takes each of the n elements in your discrete signal, multiplies it by a (complex) sinusoidal function of that frequency, and sums them. If the data correlates well with the sine wave, the magnitudes of these products will be larger and of a consistent sign (+ for dir
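
      Written out directly, that description becomes the O(n^2) DFT (a numpy sketch of mine):

      ```python
      # Each DFT bin correlates the signal with a complex sinusoid and sums.
      import numpy as np

      def naive_dft(x):
          n = len(x)
          return np.array([
              sum(x[j] * np.exp(-2j * np.pi * k * j / n) for j in range(n))
              for k in range(n)
          ])

      x = np.random.default_rng(3).standard_normal(16)
      assert np.allclose(naive_dft(x), np.fft.fft(x))   # agrees with the FFT
      ```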

    • Most of the time, when people talk about taking the FFT of a signal, they really mean the STFT, which breaks the signal into small blocks and then takes the FFT of each block independently, the result being known as the analysis frames of the signal. Either way, it fundamentally comes down to this: you have a discrete (sampled) signal of finite length N (N could be really large for the full FFT, or something like 1024 for an STFT), and the amount of information (bandwidth) of that signal is fundamentally li
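
      A bare-bones version of that framing step (my sketch; real analysis adds windowing and overlapping frames):

      ```python
      # Split the signal into fixed-size frames and FFT each one independently.
      import numpy as np

      def stft_frames(signal, frame_len=1024):
          n_frames = len(signal) // frame_len
          frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
          return np.fft.rfft(frames, axis=1)    # one spectrum per analysis frame
      ```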

  • by DarkHelmet ( 120004 ) <.mark. .at. .seventhcycle.net.> on Friday January 20, 2012 @06:17AM (#38759472) Homepage

    From what I know, the Great Internet Mersenne Prime Search [mersenne.org] (GIMPS) uses a Fast Fourier Transform to quickly find the square of a number. This is a required part of the Lucas-Lehmer test (the test that determines if the number is prime).

    If this form of FFT can do fast squaring, it will reduce the amount of time taken to find new, large primes.

    This is a potentially exciting development in this field.

    • If this form of FFT can do fast squaring, it will reduce the amount of time taken to find new, large primes.

      It wouldn't, because it only works when your data contains many zeroes. The products in GIMPS are of pseudo-random data, so I would expect half the bits to be non-zero, and almost none of the numbers involved to be zero.

  • No matter how good you think the latest and greatest algorithm is, there is usually someone who can come up with a way to improve on it.

    For example, I DO know how to structure the probes done by MSS Code Factory such that you could stuff the rule base in Oracle and use some very Oracle-specific syntax to do the probes in one query, taking the first record from a sorted result set as your "answer" to the probe. I've written similar queries for other databases, but they're constrained by how many levels d

  • As a scientist in the field of Fourier transform spectroscopy I find this perspective quite interesting, even though it is not that different from other techniques we have tried to apply in the field to improve performance. Needless to say, we are quite dependent on FFT performance in this field, even more so since the deployment of imaging FTS systems. Any improvement in FFT performance is noteworthy.

    Let's see if it's one of those "fields" where a drastic performance improvement can be met! Regardless, it's quite an

  • Has research established a theoretical minimum number of joules required to transmit one bit of info, at the limits of computation?

  • I started reading the paper, and get the general idea -- but haven't yet checked out the prereqs, pseudorandom spectrum permutations and flat filtering windows. Is there sample code of this algorithm available anywhere?
