## Faster-Than-Fast Fourier Transform

Posted by samzenpus

from the greased-lightning dept.

First time accepted submitter CanEHdian writes

*"MIT news reports on research done resulting in a Faster-than-fast Fourier Transform algorithm. 'At the Association for Computing Machinery's Symposium on Discrete Algorithms (SODA) this week, a group of MIT researchers will present a new algorithm that, in a large range of practically important cases, improves on the fast Fourier transform. Under some circumstances, the improvement can be dramatic — a tenfold increase in speed. The new algorithm could be particularly useful for image compression, enabling, say, smartphones to wirelessly transmit large video files without draining their batteries or consuming their monthly bandwidth allotments.'"*
## transmit large video files (Score:2, Insightful)

In before _____.

Key phrase:

"smartphones to wirelessly transmit large video files without ... consuming their monthly bandwidth allotments."

Everyone see the connection to the Copyright Mayhem this week?

Bueller?

## Another use -- Emergency News Broadcasting (Score:3)

Another use that I can see for this technology is emergency and impromptu news broadcasting, especially when faced with dictatorial authorities (like Iran, China, or Syria) that will do anything to stop unfavorable news from getting out to the world.

## Security (Score:3)

Doesn't some part of Internet security (of ____; I'll leave that to my betters to explain) rely on Fourier properties?

So if we can bust those "Faster than Fast", what next?

## Re:Security (Score:5, Informative)

Nope. FFTs are used in many fields, but cryptography is not one of them. Not since the old analog radio scramblers of decades past.

Informative and wrong. When you calculate powers of 4096 bit numbers, the fastest way to calculate the 4096 bit products is convolution via FFT.

## Re:Security (Score:4, Informative)

Nope. FFTs are used in many fields, but cryptography is not one of them. Not since the old analog radio scramblers of decades past.

Informative and wrong. When you calculate powers of 4096 bit numbers, the fastest way to calculate the 4096 bit products is convolution via FFT.

These matrices are not sparse, though, so the algorithms under discussion are not suitable, nor any faster than the ordinary FFT.

## Re: (Score:2)

## Re: (Score:3, Informative)

Multiplication in the time domain = convolution in the transformed domain.

Sometimes it's easier to go to the transformed domain, convolve, and then go back to the time domain :)
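That identity can be sketched in a few lines of Python (a minimal illustration, not from the article; the naive O(n^2) DFT and the function names are mine):

```python
import cmath

def dft(x, invert=False):
    # Naive O(n^2) DFT; slow, but enough to demonstrate the convolution theorem.
    n = len(x)
    sign = 1 if invert else -1
    out = [sum(x[t] * cmath.exp(sign * 2j * cmath.pi * k * t / n)
               for t in range(n)) for k in range(n)]
    return [v / n for v in out] if invert else out

def circular_convolve(a, b):
    # Transform both inputs, multiply pointwise, transform back.
    fa, fb = dft(a), dft(b)
    return [round(c.real, 6)
            for c in dft([u * v for u, v in zip(fa, fb)], invert=True)]
```

Convolving with a delta pulse reproduces the input, and a shifted delta rotates it, exactly as circular convolution should.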

## Re:Security (Score:5, Insightful)

One of the compelling mathematical insights of Fourier's mathematics is that convolution in one domain is multiplication in the complementary domain (time vs. frequency). Along with the insight that multiplication in radix space is addition in logarithm space, these completely transformed (pun intended) the human understanding of space and the numbers with which we describe it.

## Re: (Score:2)

## Re:Security (Score:4, Informative)

For numbers that are small enough to fit in your processor's registers, multiplication is a constant-time process. But when dealing with numbers much larger than your processor's word length, you have to resort to the long-multiplication algorithm. As it turns out, multiplying polynomials is exactly the same as convolution, and is O(n^2); long multiplication is a special case of polynomial multiplication where the variable is fixed:

525 * 343 = (5x^2 + 2x + 5)(3x^2 + 4x + 3), for x = 10

So for two large numbers, rather than going through the long process of convolving the digits (long-multiplying), you do the following process:
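The rest of that comment was cut off, but the standard recipe looks roughly like this (a hedged Python sketch with illustrative names, not the poster's actual steps): treat the digits as polynomial coefficients, FFT both, multiply pointwise, inverse-FFT, and evaluate at x = 10.

```python
import cmath

def fft(a, invert=False):
    # Recursive radix-2 Cooley-Tukey; len(a) must be a power of two.
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def multiply(x, y):
    # Multiply two non-negative integers via FFT convolution of their digits.
    a = [int(d) for d in str(x)][::-1]   # least-significant digit first
    b = [int(d) for d in str(y)][::-1]
    n = 1
    while n < len(a) + len(b):           # pad so the convolution is linear
        n *= 2
    fa = fft([complex(v) for v in a + [0] * (n - len(a))])
    fb = fft([complex(v) for v in b + [0] * (n - len(b))])
    prod = fft([u * v for u, v in zip(fa, fb)], invert=True)
    coeffs = [round(c.real / n) for c in prod]  # inverse FFT needs 1/n scaling
    # Evaluate the product polynomial at x = 10; integer arithmetic absorbs carries.
    return sum(c * 10 ** i for i, c in enumerate(coeffs))
```

For numbers of this size the FFT is of course overkill (as a later reply notes); the point is only to show the convolution-based mechanism.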

## Re: (Score:3)

## Re:Security (Score:5, Informative)

Yes, FFT may be used in cryptography. But this is unrelated, as the first post in this thread talks about security. FFT has no connection to the security of cryptosystems.

As far as I'm aware, the security of *absolutely no* cryptosystem used in practice depends in any way on the FFT.

Yes, FFT gives us a way to multiply big integers quickly. But all cryptosystems that use big integers already *do* assume that everyone *can* multiply big integers quickly. Even if there was a ten-times speedup, this would mean absolutely no danger to their security.

(And one final nitpick: FFT is not the fastest way to multiply 4096-bit integers, those are still considered fairly short and you would probably gain a better performance out of some simpler-but-asymptotically-slower algorithm.)

## Doesn't sound THAT useful (Score:5, Interesting)

It doesn't improve the regular FFT but only sparse variants. Image and audio compression nearly always use the regular, non-sparse FFT.

## Re: (Score:3, Interesting)

The paper claims the opposite: "Images and audio data are equally sparse. This sparsity provides the rationale underlying compression schemes such as MPEG and JPEG."

## Re:Doesn't sound THAT useful (Score:5, Informative)

The new algorithm performs better on sparser signals, but it is a regular FFT. The point is that many things we want to compress are sparse, or at least have pieces that are sparse. In JPEG, for example, images are broken into 8x8 pixel blocks and then DCT'ed (the DCT is closely related to the FFT). Many of those blocks might be sparse, and benefit from this new approach, even if the most complex blocks aren't and don't benefit.

## Re: (Score:2)

## Re: (Score:2)

Yeah, but a big-O improvement may do you no good if you only need n=8...

Depends on what sort of complexity it is. When the complexity function is larger than polynomial, even n=8 can be worrying. (I remember working with algorithms that were O(EXP^2(n)) at one point, many years ago; they had combinatorial searches over a system where the test function was itself an NP-Hard problem. Nasty, and we couldn't use the structure of the problem to restrict anything too much either. Suffice to say that at the time we were having problems finding supercomputers large enough to run our co

## Re: (Score:3)

## Re:Doesn't sound THAT useful (Score:5, Interesting)

I'm only awake due to a bathroom break, so I may not be the most clear-minded at the moment, but subtracting coefficients from the bins seems like a pretty obvious permutation of algorithmic techniques to try, and having 2/3 probability of a correct FT analysis doesn't seem like an inspiring trade-off (skimmed... perhaps they explain ways to improve this?) Is this anything more than a highly specialized, not particularly practical version of the algorithm? I'm used to working on algorithms which are much more general-case (e.g., new solutions for SCALAPACK problems).

## Whoa (Score:5, Funny)

## Re: (Score:3)

Let me get this straight - you're saying you woke up in the middle of the night intending to take a dump, and somehow ended up posting about complex mathematical algorithms on the Internet instead? Respect.

If the dump takes a long time (which it sometimes does), what else is one supposed to do in the meantime, sitting on the toilet bowl with a laptop in one's lap?

## It's not the algo that's sparse, it's the signal (Score:2)

Saying that compression uses the regular "non-sparse" algorithm is rather meaningless; they use what is available, and I don't believe there was a sparse-optimized algo until now.

## Re: (Score:2)

In OFDMA, only some carriers may be used if the network is lightly loaded, but this can change in real time and you may not know in advance whether the signal is sparse (in LTE, for example, a device only gets its own all

## Wonder how it'd "work" on cats. (Score:5, Funny)

So the cat gets transformed even faster [xkcd.com].

(apologies to XKCD)

## Re:Wonder how it'd "work" on cats. (Score:4, Funny)

So the cat gets transformed even faster [xkcd.com].

(apologies to XKCD)

Be glad it's not the furrier transform of your cat :)

## Potentially huge digital A/V benefits (Score:4, Informative)

Additionally, I know lots of audio (and I'm assuming video) effects, DSPs of the kind you find in Adobe Audition, Audacity, et al., rely on FFTs. Computing such effects could get a lot faster. With increases in both the speed of adding effects and compression afterwards, we might be seeing a significant speedup in the overall digital audio/video-editing process.

## Re:Potentially huge digital A/V benefits (Score:5, Insightful)

Video encoding doesn't scale well to multiple cores either,

Welp, that is patently false. Video encoding is embarrassingly parallel in any case where the video has keyframes and is sufficiently long to hand a roughly equal number of chunks of video (keyframe to keyframe) to each core.

## Re: (Score:3)

Absolutely. I have 4 dual-core CPUs in my machine. When I encode video, all four are clocking at about 90-95%. Encoding an hour of video takes in the neighborhood of 10-15 minutes per pass depending on the movie, and my hardware is even a few years old; plus I'm typically decoding at the same time from a different source (e.g., DVD).

## Re: (Score:3)

I'm thinking this new FFT algorithm could make a big difference in encoding speeds.

I'm not so sure. It has the potential to make a big difference in decoding speed. I would think that the input signal is not that sparse, but rather has plenty of small components. The goal of the compression is to remove the smallest components as unnoticeable noise. Only after that step do you get a sparse signal. So what you can actually improve significantly is technically the inverse FFT (which uses basically the same algorithm as the forward FFT), used for decoding.

## Re: (Score:3, Informative)

I have implemented (baseline) h264 on GPU, and I tuned it for at least 6 months. The FFT (actually a fast-DCT variant) only took 20% of the runtime; because it was done on 4x4 blocks, it was the most parallel part of the codec and the easiest to implement. By far, the lossless compression and the motion estimation are more complicated. Especially motion estimation, which is very important for good bitrate/quality in a video codec.

So this wouldn't really speed up h264-like video codecs (only a little bit).

## Re: (Score:2)

Video encoding actually should scale fantastically to multiple cores; it's almost linear, in theory. It's just that very few encoders support it, due to the complexity of multithreading what is already a very complicated program.

## Re: (Score:2)

Lossy compression such as MP3, AAC, and according to TFS also video compression, are all fundamentally based on FFT (fast fourier transforms).

Actually, most (if not all?) of these lossy compression schemes are based on the discrete cosine transform, not the discrete Fourier transform. That is definitely true for MP3 and AAC, and also true for JPEG, which most video encoders probably use for key frames.

## the MAFIAA obviously (Score:2)

They will try to keep this under wraps, as anything that speeds up video encoding/decoding will be classed as anti-Hollywood and a tool for piracy.

Stand by for a few mysterious disappearances of any public copies of the algorithm.

## Re: (Score:3)

Please, enlighten me, how do public copies of algorithms "disappear"? Counterpoint: deCSS, the release of (master) keys for AACS and HDCP.

Anyway, we can just publish it as a poem [cmu.edu] and it's protected under free speech:

Now help me, Muse, for

I wish to tell a piece of

controversial math.

## Re: (Score:2)

And yes, I have done 8-bit integer FFTs. Loads of them!

## Re: (Score:2)

Correct. FFT is used to compute the DFT. What was your point again?

FFT is probably not as efficient as a naive method, on general purpose hardware, in the case of small block sizes like what JPEG is based on (16 samples.)

You simply aren't going to get an order-of-magnitude step improvement with FFT when N is small, but you have to pay the various overheads (non-linear memory access and the calculation of those pointers) anyway. This is similar to how optimized bubblesort beats the crap out of quicksort for very small lists, as quicksort has a lot of overhead (recursive

## Wish I could understand the details of FFTs (Score:5, Interesting)

I'm sure I'm not the only person who completely understands what an FFT does but simply can't get his head around the actual implementation details. I simply don't get how a signal of many mixed frequencies, which essentially means the amplitude is moving all over the place, can travel through some equations and have a neat list of the fundamental and harmonics pop out. It's mathematical magic to me. Wish I could "get" it. But I guess I'll just have to settle for using it.

## Re:Wish I could understand the details of FFTs (Score:5, Informative)

In the DFT case the signal is merely a finite number of samples, so you can forget about the time-dependence and look at it as a long vector. If you sit down and work out the maths, you find that the vectors corresponding to the constant function and the various sinusoids are linearly independent.

All you're really doing then is solving a system of linear equations---they don't just have to be sinusoids, you could find the coefficients for any linearly independent set of functions. You could just as easily replace one of the sinusoids with an exponential function and still get coefficients out such that you could do your weighted sum of vectors and get back your original function.
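A tiny sketch of that vector view (my own Python illustration; the signal and bin choices are arbitrary): build a signal as a known weighted sum of sinusoid "vectors", then recover the weights by taking inner products with the sampled complex sinusoids.

```python
import cmath
import math

N = 8
# A known mix: DC level 2 plus a cosine of amplitude 3 at bin 1.
signal = [2 + 3 * math.cos(2 * math.pi * t / N) for t in range(N)]

# Each basis "vector" is a sampled complex sinusoid; because the basis is
# orthogonal, solving the linear system reduces to inner products.
coeffs = [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / N)
              for t in range(N)) / N for k in range(N)]
```

The DC term comes out as 2, and the amplitude-3 cosine splits evenly between bins 1 and N-1 (1.5 each), as the complex-exponential decomposition predicts.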

## Re: (Score:2)

I'm afraid you lost me at "long vector"

"sit down and work out the maths"

Ah, if only I could. But I'd settle for understanding it first! :o)

## Re:Wish I could understand the details of FFTs (Score:5, Informative)

"Sit down and work out the maths" is really just code for "here's one I prepared earlier". If you're keen on this sort of thing, read a bit about solving systems of linear equations, and you will hopefully be able to look at the problem, exclaim "this is trivial!" and live forever happy in the knowledge that it is indeed theoretically soluble (as I tried to describe above), without having to concern oneself with the computational subtleties that suck all of the fun out of it.

MIT OCW have a set of videos on linear algebra if your curiosity is sufficient to justify chewing up some time on it. Probably some of the most useful partially-post-high-school maths that one can learn. Here [ups.edu] is a free textbook on it.

## Re: (Score:2)

It's beautiful, isn't it.

I read the linked articles and it dawned on me. So very beautiful.

I'm glad there are others. :))

## Re: (Score:2)

Rereading your comment, I think I may have slightly misunderstood what you were saying. If you have produced a digital signal by sampling an analogue one (i.e. by looking at the values of a signal at various points) over a finite interval, the DFT (or the FFT, as its most popular algorithm) will not tell you what the fundamental frequency is.

You can get an approximation by making assumptions about the nature of the signals that you sample, but this is not the best way to do it, and those assumptions are normally th

## Re: (Score:2)

## Re: (Score:2)

I only got the FFT by working forward from different-frequency sine waves to triangle, sawtooth, square, and other waves. After you do this, you realize it must be possible to go the other way.

A very coarse FFT can be calculated by multiplying the waveform by two square waves that are 90 degrees out of phase and summing each sequence. If there is no component corresponding to the square wave frequency, the two totals will be near zero.
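That square-wave trick can be sketched as follows (an assumed implementation in Python; `coarse_bin` is my name for it): correlate the waveform with two square waves of the chosen period, 90 degrees out of phase, sum each sequence, and take the magnitude of the pair.

```python
import math

def coarse_bin(signal, period):
    # Correlate with two square waves of the given period, 90 degrees apart;
    # the magnitude of the two sums approximates one frequency bin.
    i_sum = q_sum = 0.0
    for t, x in enumerate(signal):
        phase = (t % period) / period
        sq_i = 1.0 if phase < 0.5 else -1.0           # in-phase square wave
        sq_q = 1.0 if 0.25 <= phase < 0.75 else -1.0  # 90 degrees shifted
        i_sum += x * sq_i
        q_sum += x * sq_q
    return math.hypot(i_sum, q_sum)

# A sine of period 16 over 256 samples.
tone = [math.sin(2 * math.pi * t / 16) for t in range(256)]
```

`coarse_bin(tone, 16)` comes out large, while `coarse_bin(tone, 8)` is essentially zero, matching the comment: no component at the square-wave frequency means both totals are near zero.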

## Re: (Score:2)

The CFT transforms a continuous signal into continuous frequencies, and it does that by integrating the input signal multiplied by a complex sine wave. The mathematics for that is reasonably understandable; doing the calculations is horribly difficult.

FT does an approximation of CFT, by first changing the input signal to discrete

## Re:Wish I could understand the details of FFTs (Score:5, Insightful)

"You might think you understand the mathematics behind a model but until you can sit down and code it up on a computer, you can't say you really get it."

This was mentioned within the context of building a model for gravitational-wave signatures for binary black hole systems, but I've found it applies to everything. You can't claim to understand a piece of mathematics properly until you can break it down into steps that a computer can process.

If you're interested in applying this to Fourier transforms there are a few really good books that explain the procedure in detail. I'd start with this [amazon.com], which focuses mainly on wavelets but also includes a really good popular account of the importance of Fourier transforms and FFT in particular. Then I'd go and get myself a copy of this [amazon.com], which explains FFTs in much more detail. It's even got code for the FFTs in BASIC!

## Re: (Score:3)

I simply don't get how a signal of many mixed frequencies which essentially means the amplitude is moving all over the place can travel through some equations and a neat list of the fundamental and harmonics can pop out.

But it's easiest to start with the O(n^2) DFT, not the FFT.

As always, the only way to truly understand it is to work out the maths. But, if it helps, let's try a simple one.

Imagine you have some vectors in 4D called a, b, c, d which are respectively [1 0 0 0], [0 1 0 0], [0 0 1 0] and [0 0 0 1].

## Re: (Score:2)

Why is it that math books and people into math use words like "obvious" or "you can very easily"? It really, really destroys your reader's enthusiasm for reading what you are doing.

Yes, it might very well be obvious and trivial to you, but it sure as hell makes me feel dumb when I don't get the obvious on the first pass.

Other than that, nice intro to FT.

## Re: (Score:2)

The distinction I suppose is between conclusions that one can work out by inspection or immediate application of theorem, rather than by taking a detour through intermediate steps. It's probably not quite so cut-and-dry, but that's how I tend to see things. Such turns of phrase are just about part of the jargon.

It almost functions as a punctuation mark to reassure the paranoid (which sometimes is practically everyone) that "no, this next bit really is what it looks like".

## Ok , I guess I'm mathematically a bit thick but... (Score:2)

... how do you get from a load of, say, 16-bit samples into vectors? If I have sine wave sample data in an array such as the following:

0,5,10,5,0,-5,-10,-5,0

how does that then become a sequence of vectors to calculate the FT?

## Re: (Score:2)

0,5,10,5,0,-5,-10,-5,0

You can consider them as vectors, but normally they are considered complex numbers.

....

0 + 0i, 5 + 0i, 10 + 0i, 5 + 0i,

## Re: (Score:3)

.. how do you get from a load of say 16 bit samples into vectors? If I have sin wave sample data in an array such as the following:

It's a single vector, not vectors (plural): that's the important distinction. All the samples form a single vector.

In other words, if you have 1024 samples, then put them into a 1024 dimensional vector.

Does that clear it up?

## Re: (Score:2)

It is actually quite simple provided you have at least some basic math skills. Don't try to wrap your head around the math involved just yet. Just do this:

1) Have a look at http://en.wikipedia.org/wiki/Fourier_transform [wikipedia.org] and only look at Definition and Introduction.

2) Get some tool like Matlab or Octave (the last one is OSS)

3) Generate a pure sine wave signal and put that through the formulas you found in 1). You should get a single spike in your results.

4) Now add a second sine wave with a different freque
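Steps 3 and 4 can also be tried without Matlab or Octave; here is a rough Python equivalent (my own sketch, with arbitrary frequency choices): a pure sine gives one spike pair in the magnitude spectrum, and adding a second sine adds a second pair.

```python
import cmath
import math

def spectrum(signal):
    # Naive DFT magnitudes, normalised by the window length.
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) / n for k in range(n)]

N = 64
one_tone = [math.sin(2 * math.pi * 5 * t / N) for t in range(N)]   # 5 cycles
two_tones = [s + math.sin(2 * math.pi * 12 * t / N)                # add 12 cycles
             for t, s in enumerate(one_tone)]

peaks_one = [k for k, m in enumerate(spectrum(one_tone)) if m > 0.1]
peaks_two = [k for k, m in enumerate(spectrum(two_tones)) if m > 0.1]
```

Each real sine shows up as a pair of bins (k and N-k), so the single tone peaks at bins 5 and 59, and the mixture at 5, 12, 52, and 59.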

## Re: (Score:3)

Rather than understanding the FFT (an O(n log n) algorithm for computing the DFT, which is normally an O(n^2) operation), you should first understand how the basic DFT equation works, which is independent for each of the frequencies. It just takes each of the n elements in your discrete signal, multiplies it by a (complex) sinusoidal function of that frequency, and sums them. If the data is correlating well with the sine wave, the magnitudes of these products will be larger and of a consistent sign (+ for dir

## Re: (Score:3)

Most of the time, when people talk about taking the FFT of a signal, they really mean the STFT, which breaks the signal into small blocks and then takes the FFT of each block independently, the result being known as the analysis frames of the signal. Either way, it fundamentally comes down to this: you have a discrete (sampled) signal of finite length N (N could be really large for the full FFT, or something like 1024 for an STFT), and the amount of information (bandwidth) of that signal is fundamentally li

## Re: (Score:2)

Ok, I think I understand what you're on about, but if amplitudes are additive, wouldn't they cancel out over the course of N whole waveform samples, because you'd be adding the positive and negative values together? Or do you just sample N+1/2 waveforms?

## Faster Mersenne Prime Calculations? (Score:4, Interesting)

From what I know, the Great Internet Mersenne Prime Search [mersenne.org] (GIMPS) uses a Fast Fourier Transform to quickly find the square of a number. This is a required part of the Lucas-Lehmer test (the test that determines if the number is prime).

If this form of FFT can do fast squaring, it will reduce the amount of time taken to find new, large primes.

This is a potentially exciting development in this field.
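For reference, the Lucas-Lehmer test itself is tiny; it's the repeated squaring of huge numbers inside the loop that GIMPS accelerates with FFT multiplication (a minimal Python sketch, only practical for small exponents):

```python
def lucas_lehmer(p):
    # Mersenne number M_p = 2^p - 1 is prime iff s_{p-2} == 0 (p an odd prime).
    # The s * s squaring below is the step GIMPS performs via FFTs.
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0
```

For example, p = 7 gives the Mersenne prime 127, while p = 11 gives 2047 = 23 x 89, which the test correctly rejects.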

## Re: (Score:2)

If this form of FFT can do fast squaring, it will reduce the amount of time taken to find new, large primes.

It wouldn't, because it only works when your data contains many zeroes. The products in GIMPS are of pseudo-random data, so I would expect half the bits to be non-zero, and almost none of the numbers involved to be zero.

## Nice. More tuning! :) (Score:2)

No matter how good you think the latest and greatest algorithm is, there is usually someone who can come up with a way to improve on it.

For example, I DO know how to structure the probes done by MSS Code Factory such that you could stuff the rule base in Oracle and use some very Oracle-specific syntax to do the probes in one query, taking the first record from a sorted result set as your "answer" to the probe. I've written similar queries for other databases, but they're constrained by how many levels d

## Re: (Score:2)

I've no idea. I've been using JDBC for the past 15 years, or ODBC for .Net, and it's been 3-4 years since I last touched Oracle. The tuning tricks I was taught 20 years ago still seem to work though, and for most databases, not just Oracle.

I have 30+ years of RDBMS experience, having cut my teeth on the very earliest releases of Oracle, Sybase, and Ingres back when I was working for NorTel in around '89. I was on a project tasked to write real life applications with each of them so the company could c

## FTS applications... (Score:2)

As a scientist in the field of Fourier Transform Spectroscopy, I find this perspective quite interesting, even though it is not that different from other techniques we have tried to apply in the field to improve performance. Needless to say, we are quite dependent on FFT performance in this field, even more so since the deployment of imaging FTS systems. Any improvement in FFT performance is noteworthy.

Let's see if it's one of those "fields" where drastic performance improvement can be met! Regardless, it's quite an

## Theoretical Minimum Joules Per Bit? (Score:2)

Has research established a theoretical minimum amount of joules required to transmit one bit of info, at the limits of computation?

## Sample code, or it didn't happen (Score:2)

I started reading the paper, and get the general idea -- but haven't yet checked out the prereqs, pseudorandom spectrum permutations and flat filtering windows. Is there sample code of this algorithm available anywhere?

## Re:Can it be done effectivly without an FPU? (Score:5, Insightful)

DSP with Arduino

What is wrong with this picture?

Processors used in Arduino (Atmel 8-bit AVR series) are minimal, general-purpose microcontrollers, a replacement for earlier PIC microcontrollers. Using them for DSP is only slightly less stupid than building DSP boards entirely out of individual discrete transistors. There is A WHOLE FUCKING CATEGORY OF ICs dedicated to DSP, plenty of microcontroller-style yet high-performance SoCs suitable for DSP, and FPGAs with DSP-specific blocks.

But noooo, ignorant people such as yourself would rather recommend the most un-DSP-ish device in the world just because it comes bundled with a Java-based IDE that runs on your beloved wiiiiiiiiindows and has the ugliest editor ever written since xedit.

## Re:Can it be done effectivly without an FPU? (Score:5, Interesting)

## Re:Can it be done effectivly without an FPU? (Score:4, Interesting)

The spectrum analyser. One of the most common uses is staring you right in the face in almost every mp3 player.

The interesting thing is that the greater the time window over which the FFT operates, the finer the frequency detail you can observe within that particular window, at the expense of how quickly the graph or bars change over time (in simplistic terms). I wonder how this new algo will change the frequency-detail/transient-time-detail trade-off. Do we see more detail in both domains? Less? The same?

## Re:Can it be done effectivly without an FPU? (Score:4, Informative)

The interesting thing is that the greater the time window over which the FFT operates, the finer the frequency detail you can observe within that particular window, at the expense of how quickly the graph or bars change over time (in simplistic terms). I wonder how this new algo will change the frequency-detail/transient-time-detail trade-off. Do we see more detail in both domains? Less? The same?

This is a property of the discrete Fourier transform itself. For a given window size there are only a discrete number of frequencies that you can calculate.

The larger your window, the more frequencies you can see (at the expense of a longer sampling time).

FFT algorithms simply try to compute the DFT quickly; this advance is significant for practical applications but doesn't change the underlying mathematics.

## Re:Can it be done effectivly without an FPU? (Score:5, Interesting)

## Re:Can it be done effectivly without an FPU? (Score:4, Interesting)

Only partially true. What many people don't know is that Heisenberg's derivation was slightly wrong because while it modeled the thing being measured with quantum mechanics, it incorrectly modeled the measuring device classically.

A rigorous analysis using QM for both was done by Ozawa in 2003 (see http://arxiv.org/abs/1201.1833 [arxiv.org], which shows the correct uncertainty equations on p. 2).

From http://www.sciencedaily.com/releases/2012/01/120116095529.htm [sciencedaily.com] "In order to describe the fundamental uncertainty and the additional disturbance due to the measuring process, both particle and measurement device have to be treated in the framework of quantum theory,...But the product of error and disturbance can be made arbitrarily small -- even smaller than Heisenberg's original formulation"

## Re: (Score:3)

He's not saying that the time-frequency resolution tradeoff is due to quantum physics effects. He's saying that the two uncertainty principles are related. They are. We're talking about pure theory here, with no engineering difficulties. And yes, the fundamental math is information theory and Nyquist (which is ALSO very similar to the fundamental math of quantum physics). The equations for the two uncertainty principles have exactly the same form.

An ideal, theoretical, continuous signal that is time-li

## Re: (Score:3)

Of course it's a problem. My point was that for all practical issues, engineering difficulties are what imposes the limits before you have to worry about quantum effects.

Yes, it's a property of the DFT. Or, more generally, information uncertainty is a property of all discrete systems which sample at fixed intervals; that's what Nyquist is all about. If you didn't sample it, you can't know it: if there's data outside of your sampled signal's bandwidth, it can alias down onto your desired data and cause all k

## Re: (Score:3)

## Re: (Score:3)

The interesting thing is that the greater the time window over which the FFT operates, the finer the frequency detail you can observe within that particular window, at the expense of how quickly the graph or bars change over time (in simplistic terms). I wonder how this new algo will change the frequency-detail/transient-time-detail trade-off. Do we see more detail in both domains? Less? The same?

That's a consequence of a fundamental mathematical relationship between the frequency and time domains and has nothing to do with the algorithm itself. A signal of length L can be exactly represented by a Fourier series of frequencies 0, 1/L, 2/L, 3/L... etc., with nothing in between, so there's no finer detail to see.

## Re: (Score:2, Informative)

Exactly represented, yes, but not practically so.

Often the signal you are looking for doesn't have a frequency that fits well with the FFT frequency distribution. (The sampling time is not evenly divisible by the signal period.)

This causes a pure sine signal at the wrong frequency to be visible as a spread of frequencies, where it is almost impossible to see adjacent signals with a lower amplitude.

Being able to calculate DFT with a method that doesn't have the same requirements the FFT algorithm have m
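The leakage effect described above is easy to demonstrate (my own Python sketch; the 1.0 magnitude threshold is arbitrary): a sine that completes a whole number of cycles per window lands in one bin pair, while one that doesn't smears across many bins.

```python
import cmath
import math

def dft_mag(x):
    # Naive DFT magnitudes (unnormalised).
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

N = 64
aligned = [math.sin(2 * math.pi * 8 * t / N) for t in range(N)]    # 8 cycles: on-bin
off_bin = [math.sin(2 * math.pi * 8.5 * t / N) for t in range(N)]  # 8.5 cycles: leaks

strong_aligned = sum(1 for m in dft_mag(aligned) if m > 1.0)
strong_off_bin = sum(1 for m in dft_mag(off_bin) if m > 1.0)
```

The on-bin tone excites exactly two bins (k and N-k); the half-bin-offset tone spreads noticeable energy over many more, which is what buries nearby low-amplitude signals.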

## Re:Can it be done effectivly without an FPU? (Score:4, Funny)

Same problem as with "Enhance!" in the movies.

The "Fast and Fouriers"?

## Re:Can it be done effectivly without an FPU? (Score:5, Interesting)

Well, if for some reason someone is already using Arduino boards to do a bunch of stuff it might make sense if it was just fast enough. But yeah, if you have a choice it doesn't make much sense to pick an Arduino for this task.

So far the only "usefulness" I've seen in the Arduino has been for that retro kick, being artificially limited to the kind of environment I haven't had to work in for a long time, but if I wanted to build something useful I'd probably at least want a 16-bit processor these days, the difference in price is negligible for any hobby project (hell, a 32-bit processor probably wouldn't be a major budget concern considering what you're likely to spend on the rest of your hardware when building something)...

## Re: (Score:3, Interesting)

A year ago, there was still easily an order of magnitude difference in price going from AVR to 32-bit microcontroller land in base dev hardware alone. You say that for one hobby item the price difference (say $10 ---> $100) is negligible, because they are both smallish numbers relative to income... 1) You need to learn multiplication 2) A lot of hobbyists have the desire to do their own version of "ubiquitous" computing. Your "negligible" price difference theory breaks down in all the cases I'm familiar w

## Re: (Score:2)

## Re: (Score:3)

Actually a very large number of projects you see out there in the hobby arena are the opposite. It's a case of people throwing 8-bit microcontrollers at simple problems that can more easily / better be solved with either some discrete parts, logic ICs, or even analogue circuits.

Currently I use 8bit AVRs (not Arduinos) for most of my projects, but when I look around I find I have the opposite opinion to you. I have a digital radio complete with 128x64 pixel screen and a menu system that goes 3-4 levels deep

## Re: (Score:3)

I have a digital radio complete with 128x64 pixel screen and a menu system that goes 3-4 levels deep that all runs on an 8bit microcontroller.

More than that: Every NES game you ever played runs on an 8-bit microcontroller.

## Re:Can it be done effectivly without an FPU? (Score:5, Informative)

I think the parent is getting at a problem I see at my job fairly frequently: professional firmware and DSP developers who have no knowledge of the effects or consequences of their code on the hardware itself, ultimately re-implementing something that used to run fine in 1/10th the RAM and 1/40th the processor cycles (or less) on a brand-new dual-core DSP, and still unable to figure out why they aren't meeting performance requirements. I've seen these guys demand a bigger, faster, more power-hungry CPU to run their code when a previous version with literally a fraction of the capability ran the same algorithm.

True story: I'm an FPGA dev, and I was helping the software guys debug why the DSP was completely missing messages and seemed to be dropping data. It turns out that doing a thousand lines of heavy DSP INSIDE the interrupt handler will hang up the CPU and cause it to miss interrupts. The worst part was the blank stares we got when we told them this and had to explain why that was a bad thing.

It seems the idea of actually writing good, fast, and power efficient code is a dying art because it's perceived as 'not necessary'. In some instances this thinking is correct (e.g. desktop computers), but in the embedded space where you're trying to keep the cost of final products down and minimize power usage, grabbing the fastest DSP part that TI makes is probably not always the best solution... but it's what you get when you think it's automatically OK/good to 'do everything in software'. There are costs and penalties to account for.

For pure hobbyists, none of this really applies, but I suspect that a good many of the people that do this stuff as a hobby are also at least tangentially involved in it in some way at their jobs, so it serves to bring up the considerations.

## Re: (Score:3)

It seems the idea of actually writing good, fast, and power efficient code is a dying art

Boy, give this guy a gold star. Exactly for the reason he states (most people today learn programming by programming for desktop computers), finding a good embedded software person for battery-powered, portable applications is becoming a unicorn search.

Every time I hear someone say that they're burned out on software and can't stand the thought of writing another line of server or game or PC code, I always suggest a change of venue from desktop to embedded software -- a change that, once (s)he adapts to the constraints, can be a refreshing one.

## Re: (Score:3)

The problem is that experiences differ with different components. Some microcontrollers have configurable interrupt priority, where a running ISR can be interrupted yet again by a different ISR. I got bitten by this when I switched to AVR. I had a timing loop running and assumed that it would get priority over a simple key press, as in one of my previous designs (yes, I'm a lazy coder, but it worked for me previously :-P). Anyway, the first time I hit the keypress the timing loop stopped dead and held the output in whatever state it was in.

## Re: (Score:3)

## Re:Can it be done effectively without an FPU? (Score:5, Insightful)

Slashdot could really use a +1 Insightful but Unnecessarily Dickish mod.

## Re:Can it be done effectively without an FPU? (Score:4, Funny)

What is wrong with this picture?

Moderation categories already encompass both categories mentioned, and are an improvement on a simple "like" or "dislike". Adding multiple combinations of moderation (e.g. Insightful + Flamebait or Informative + Troll or any of the other 90 combinations you would needlessly add to the slashdot moderation system) is only slightly less stupid than allowing the herd of cats that is the slashdot moderator community to just create arbitrary moderation categories out of thin air any time they feel like it. There ARE TWO WHOLE FUCKING CATEGORIES OF MODERATION already dedicated to "Insightful but Unnecessarily Dickish", namely Insightful and Flamebait. The general idea is that on average the +1 from the Insightful and the -1 from the Flamebait cancel each other out, and people checking the moderations on their comments over time will come to realize they should be less of a dick but retain their level of insight.

But noooo, ignorant people such as yourself would rather recommend Slashdot implement the most arbitrary and poorly architected moderation system in the world just so you can get the +5 insiiiiiiiiightful. (And even if you would argue that Dickish != Flamebait, you are proposing a new 11th moderation category for "Insightful but Unnecessarily Dickish" worth a +1, which would just encourage the sizable misanthropic subset of Slashdot users to actually TRY to get their posts moderated into your wondrous new category, thinking it an improvement over plain old Insightful.)

## Re:Can it be done effectively without an FPU? (Score:5, Funny)

Slashdot could REALLY use a +1 Insightful but Unnecessarily Dickish mod.

## Re: (Score:2)

## Re:Can it be done effectively without an FPU? (Score:4, Insightful)

While I too am sick of seeing dinky little Arduino projects, there are plenty of good reasons to be doing FFT on low power micros.

For example, the company I work for designs battery-powered wireless sensors: they have to be compact, consume minimal power, and have minimal components for manufacture. We've got a single ATMega processor that sits between a sensor and a radio, doing, among other things, FFT and other 'DSP' functions to compress the data before we spend our precious milliamps transmitting it.

It really is the cheapest, lowest-power way to get the job done; sometimes a hardware DSP is overkill.

## Re: (Score:2)

Do you have math showing that the compression consumes less power than just transmitting the raw data? And that waiting for the compression adds less latency than waiting for the uncompressed data to finish arriving?

## Re:Can it be done effectively without an FPU? (Score:4, Interesting)

Do you have math showing the compression consumes less power than just transmitting the raw data?

Lots of it; the power budget drives the design of the entire sensor.

The micro-controller uses a lot less energy than the packet radio, so it makes sense to process the raw sensor data on-board and transmit information such as Fourier coefficients, integrals, and other statistics. Every packet saved is another fraction of a second the radio can be powered down.

It's a fairly common technique with sensor networks to process raw signals at the point of collection and transmit only the useful information; it makes the most of your bandwidth and, in this case, energy.

## Re: (Score:2)

I don't think that Zigbee uses compression by default.

Why do you think it doesn't, since Zigbee's main design goal is power efficiency?

## Re: (Score:2)

Except when your device is already built on an Arduino and then needs to do something new that is DSP. Then you need DSP on the Arduino. Even if it's "stupid", it might be necessary. Most application development must use the hardware already present in its installed base, or die trying.

DSP (or rather NSP, "Native Signal Processing", which is what non-DSP CPUs had to do when the installed base had no DSP, only a CPU) isn't impossible on microcontrollers, just somewhat impractical on many of them.

## Re: (Score:3)

DSP is a *process*, not a *processor*.

You can run the same mathematical processes on a TMS320, DSP56K, FPGA, desktop PC, GPGPU video card, embedded ARM, PIC, AVR, even a 4-bit micro if you feel the need. Sure, a purpose-built DSP might be a bit faster at it than a microcontroller, but if the processor you choose is fast enough to run your DSP-like algorithm at the speed you need, what's the damn problem?

Case in point: there's really nothing better suited to decoding MP3/AAC audio than a DSP chip; a proper DSP is built for exactly that kind of workload.

## Re:Can it be done effectively without an FPU? (Score:4, Informative)

You can run the same mathematical processes on a TMS320, DSP56K, FPGA, desktop PC, GPGPU video card, embedded ARM, PIC, AVR, even a 4-bit micro if you feel the need.

You can just as well run it on vacuum tubes or relays, but there are no real-world applications for that, and the same applies to 8-bit microcontrollers.

An AVR has no trouble doing DTMF decoding on values captured from the ADC,

DTMF is specifically designed to be easy to decode even under the worst possible decoder limitations. Claiming that it makes a microcontroller capable of DSP would be like saying that a lightbulb can be used to calculate exponents because it produces a close approximation of a black-body radiation spectrum.

or running a few DDS channels to generate sine waves and throw them out a PWM channel, despite those being classical DSP applications.

In name only. There is no actual processing involved.

- Signed, someone who does DSP for a living.

So do I; however, I don't count trivial operations on tables of precomputed values as "DSP".

## Re: (Score:3)

## Re:Can it be done effectively without an FPU? (Score:4, Interesting)

## Re: (Score:3)

## Re: (Score:2)

If you mix them they taste terrible. (SOPA = soup in Spanish)