Software Math Programming Technology

Progress In Algorithms Beats Moore's Law 166

Relic of the Future writes "Seen on the blog 'Algorithmic Game Theory,' a report to Congress and the President about past and future advances in information technology notes that, while improvements in hardware accounted for an approximately 1,000-fold increase in calculation speed over a 15-year time span, improvements in algorithms accounted for a more than 43,000-fold increase."
  • by sourcerror ( 1718066 ) on Friday December 24, 2010 @07:49PM (#34663070)

    more bloated GUI libraries (bling), and comfier runtimes (different VMs).

    • by betterunixthanunix ( 980855 ) on Friday December 24, 2010 @08:20PM (#34663210)
      Well, the real question is, are users better off as a result? Personally, I found that most of the computationally intensive graphics were not actually helping me get my work done any faster, switched to a more minimal window manager and did all that I could to reduce the amount of CPU time spent on rendering the GUI (disabling special effects wherever I saw them), and discovered that, in fact, I was correct: none of those effects were actually helping me get my work done.

      On the flip side, people tend to be turned off when they see what my screen looks like. It has gotten to the point where I do not mention that this is "Linux," because I do not want new users to get scared. In the end, looking pretty winds up being more important than how efficiently you use computational resources.
      • At work I run fvwm with the mouse configured for left hand use with xmodmap and Button3 on the title bar set to close the window. Right handed users inevitably grab the mouse with their right hand and try to move a window with their Button1.

        • oo - please post your ~/.fvwmrc online somewhere! *excited*. i run fvwm too, i run fvwm too!

        • by cskrat ( 921721 )

          I get a similar effect by using a Dvorak keyboard layout at work (keycaps still in qwerty). Great fun when a coworker tries to enter a password on my keyboard.

        • Why bother with window titles at all? You can disable all window decorations, and reprogram the mouse buttons to act on the *whole* window client area. Much easier to move/resize/destroy that way. All you have to do is use the extra mouse buttons that come with practically all mice, and that nobody ever uses for anything anyway.

          On my fvwm, I've set up the 8/9 buttons as follows: doubleclick 8 -> Hide, clickdrag 8 -> Resize, doubleclick 9 -> Kill, clickdrag 9 -> Move. It seemed a bit weird the

          • I suppress titles on some window classes. NEdit has tabs and an information field to tell me which file has focus, so the title bar is redundant there. I might give your approach a go. It sounds interesting.

            • I also like to have the window titles displayed in the background menu, eg. something like

              DestroyFunc invokeRootMenu
              AddToFunc invokeRootMenu
              + I DestroyMenu recreate RootMenu
              + I AddToMenu RootMenu "r00tMenu [Desk $[desk.n]]" Title
              + I AddToMenu RootMenu "New" Function NewTerm
              + I All (Iconic,CurrentScreen) AddToMenu RootMenu ($[w.name])\ [$[w.desk]] WindowId $[w.id] SelectWindow
              + I All (!Iconic,CurrentScreen) AddToMenu RootMenu $[w.name]\ [$[w.desk]] WindowId $[w.id] SelectWindow
              + I AddToMenu RootM

              • I also like to have the window titles displayed in the background menu, eg. something like

                DestroyFunc invokeRootMenu
                AddToFunc invokeRootMenu
                + I DestroyMenu recreate RootMenu
                + I AddToMenu RootMenu "r00tMenu [Desk $[desk.n]]" Title
                + I AddToMenu RootMenu "New" Function NewTerm
                + I All (Iconic,CurrentScreen) AddToMenu RootMenu ($[w.name])\ [$[w.desk]] WindowId $[w.id] SelectWindow
                + I All (!Iconic,CurrentScreen) AddToMenu RootMenu $[w.name]\ [$[w.desk]] WindowId $[w.id] SelectWindow
                + I AddToMenu RootMenu "" Nop
                + I AddToMenu RootMenu "Restart" Restart fvwm2
                + I AddToMenu RootMenu "Quit" Quit
                + I Popup RootMenu

                Mouse 1 R A WindowList

                ...would appear to do the same thing apart from being able to add the extra functions to it.

                • Oh yes, that's true. The built-in windowlist is pretty good, I just wanted to have a "New" terminal menu item when I wrote this a few years ago (the idea was to emulate the behaviour of the wm2 window manager which I liked). I should read the fvwm docs again one of these days, there's probably lots of new things to try :)
          • More screen space was one of the most important to me. You don't have to do as much moving and resizing ;). I was amazed what a difference it made moving from a 15" monitor at 1024x768 to a 19" monitor at 1600x1200. And now 24" wide screen LCDs and dual monitor setups are common. Keep the PDF viewer and browser with reference material and search results on one monitor, and have an integrated development environment on the other. I'm looking forward to 6'x3' tabletop size monitors.

        • I've always thought that people who insisted on left-handed mouse configs were pansies, and in fact, every one I've heard complain about it turned out to suffer chronic Munchhausen's syndrome, with many other symptoms.

          I am left-handed. The QWERTY layout is one of the few tools that is specifically biased in southpaws' favor. The keyboard is by most measures a more complex object than the mouse, the latter of which can be flailed about until it works.

          It's far easier to learn to use a mouse using your right h

          • The left-handed configuration is more balanced. Backspace, delete and enter are all on the right side of the keyboard. I can select an object with my left hand and activate it with the enter key using my right hand.

          • The biggest flaw I see with your logic is that the mouse is more of a precision device, so it makes sense to use it with your stronger hand.

            I am left handed but I use the mouse right handed. Not because it's "better" but because it's a standard setup and there are so many right handed mice I don't want to limit myself.

            Also, when I'm using the mouse and keyboard together I'm rarely typing. Usually I'm hitting backspace/delete, enter, etc. Not that it matters that I have to reach my left hand across the keybo

          • I'm ambimoustrous. The mouse is set for right-handed use, but I prefer to use it left-handed, except when I want to use it very fast (games). Generally, I'm right-handed. I've injured each upper arm once by having the mouse much too far off to the side.
        • by JamesP ( 688957 )

          let me guess, you use emacs...

        • by jon3k ( 691256 )
          I'm a lefty at work too (and right-handed at home). Took a few weeks to get really comfortable using it, but I hope that over a career in IT, splitting time between both hands will save my wrists.
      • Well, the real question is, are users better off as a result? Personally, I found that most of the computationally intensive graphics were not actually helping me get my work done any faster, switched to a more minimal window manager and did all that I could to reduce the amount of CPU time spent on rendering the GUI (disabling special effects wherever I saw them), and discovered that, in fact, I was correct: none of those effects were actually helping me get my work done.

        On the flip side, people tend to be turned off when they see what my screen looks like. It has gotten to the point where I do not mention that this is "Linux," because I do not want new users to get scared. In the end, looking pretty winds up being more important than how efficiently you use computational resources.

        I'm not going to argue that on a Linux desktop anyway, the special effects are mostly just that, special effects, but you might get yourself into a situation like in a car; ripping the sound dampening material out makes it more efficient, but is not worth it 99.9% of the time. Animation is used in interfaces on a couple OSs that will go nameless to provide a smooth visual transition from one state to another. It isn't a new concept, it's just easier to do with today's hardware. I think the problem is ani

        • I'm not going to argue that on a Linux desktop anyway, the special effects are mostly just that, special effects, but you might get yourself into a situation like in a car; ripping the sound dampening material out makes it more efficient, but is not worth it 99.9% of the time.

          Except that I never claimed that disabling the effects made me more efficient, but rather that it did not make me any less efficient. The only effect that is remotely useful is the smooth transition between workspaces, mainly because it helps me subconsciously keep track of "where" I am without using a paging window; even then, only a couple of frames of animation are really necessary. Beyond that? Translucent windows? Rotating cubes? Wobbly windows?

          Maybe you are right -- maybe the cool looking effe

          • by jon3k ( 691256 )
            Rotating cube is nice because it helps me visualize what desktop I'm on, like you pointed out. Translucent windows are helpful because they allow you to work from a document behind (or slightly overlapping) a terminal without spending a lot of time managing windows (resizing, moving). Wobbly windows are just annoying.
      • When I studied computer science back in 1989 or so, my professor made a joke. He said that computing power would probably be used only to "draw an animated trashcan" and so we shouldn't be too wildly optimistic about increasing processing power. Of course, we all had a big laugh.

        20 years later, the joke doesn't seem so funny anymore.

    • by samael ( 12612 ) *

      I can't talk about runtimes, but my window manager is currently taking up less than 3% of my CPU. If yours is sucking up dramatically more than that then I'm somewhat surprised...

      • by jhol13 ( 1087781 )

        It is not CPU time, it is wall clock time.
        BTW, this is one reason why web applications currently suck: they are extremely slow at best, and at worst make you wait for seconds or even longer.

        • by samael ( 12612 ) *

          Wall clock time? My PC never seems to be waiting on something being drawn, so I'm not sure why graphical fluff would be a problem.

        • by jon3k ( 691256 )
          Depends on the web app. If you're refreshing entire pages against a slow web server I'd agree. Well designed web apps using DHTML (Javascript, CSS, et al) that reload only portions of the page (using things like AJAX and Comet) can make for great applications. GMail is a pretty good example.
  • Algorithmists have Moore's Law envy.

    • by JamesP ( 688957 )

      Well, they worked hard, studied a lot, improved the algorithms, lots of testing, implementation care, etc

      Then some doofus comes and does "select * from table;"

  • Even in the 1980s it was recognized in avionics that algorithm improvement accounted for several orders of magnitude of speed in computing -- outperforming hardware advances of the day. We highlighted that article in Avionics Weekly -- proudly displaying it outside our cubicles.
  • 1000 fold (Score:5, Interesting)

    by MichaelSmith ( 789609 ) on Friday December 24, 2010 @07:56PM (#34663104) Homepage Journal

    That's an interesting figure, because when the Alpha came out DEC was predicting a 1,000-fold improvement in speed over about the same period. They split the improvement into factors of ten: clock speed, parallelism in the CPU and multiple cores were each expected to deliver a factor-of-ten improvement.

    Now while our algorithms might be getting better our programmers definitely are not. A lot of those improved algorithms must be in our APIs. Like the way it's easy to use a Hashtable in Java now, where in the past you might have just searched a linear array.
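
    The difference is easy to demonstrate. A rough sketch in Python, with a dict standing in for Java's Hashtable (the sizes here are arbitrary):

    import time

    items = list(range(1_000_000))
    as_list = items                       # linear search: O(n) per lookup
    as_dict = dict.fromkeys(items)        # hash table: O(1) average per lookup

    t0 = time.perf_counter()
    999_999 in as_list                    # scans nearly the whole list
    t1 = time.perf_counter()
    999_999 in as_dict                    # a single hash probe
    t2 = time.perf_counter()
    print(f"list: {t1 - t0:.6f}s   dict: {t2 - t1:.9f}s")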

    • Re:1000 fold (Score:5, Interesting)

      by jc42 ( 318812 ) on Friday December 24, 2010 @10:26PM (#34663738) Homepage Journal

      Now while our algorithms might be getting better our programmers definitely are not.

      Sometimes this is intentional. One of the more fun software competitions I've run across is to write the slowest sorting algorithm. Trivial programming tricks such as do-nothing loops that just run a counter to use time disqualify you, obviously. The judging is solely on the algorithm, by comparing the O(N) functions. One of the winners in the early years was a sort that generated a random permutation of the data, and tested it for sortedness, repeating if it wasn't sorted. And it turns out that there are several progressively worse variants of this, depending on how ridiculous your random-permutation algorithm is.

      I should go find this contest again, and see what sorting atrocities they've come up with in the past couple years ...

      (I did once read a suggestion that such contests not be widely publicised, out of fear that managers will find them and declare the winning algorithms the new company standards. ;-)
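
      For reference, the random-permutation sort described above (usually known as bogosort) is only a few lines; a minimal Python sketch:

      import random

      def is_sorted(xs):
          return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

      def bogosort(xs):
          # Expected work is O(n * n!): each shuffle costs O(n), and for distinct
          # keys a random permutation comes out sorted with probability 1/n!.
          while not is_sorted(xs):
              random.shuffle(xs)
          return xs

      print(bogosort([3, 1, 2]))   # keep the input tiny

      On anything much past a dozen elements you will be waiting a very long time.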

      • by Kjella ( 173770 )

        The judging is solely on the algorithm, by comparing the O(N) functions.

        Hehe, I once managed to put an expensive function inside the inner loop, making a mostly O(N) function more like O(N^2)... that gets very noticeable when you've got 100k file hashes you're trying to match. It would have made a pretty good run for stupidest algorithm. But without a benchmark, and with the results coming out correct, it wasn't obvious what I had done wrong, except that it ran very, very slowly.

      • You consider n! slow? Really? Try the original bogosort algorithm as proposed by Jon Doyle when he was a researcher at CMU.
        1) Given a list of integers to sort, replace each integer in the list with the integer which represents the Ackermann function of that integer
        2) Sort the list using any other sorting algorithm you would care to use
        3) Take the inverse Ackermann function of each number in the list.

        Where v is the list of input values, this algorithm runs in time O(n*A(max(v)-1) (which is a worst case runni

    • Re:1000 fold (Score:4, Interesting)

      by sco08y ( 615665 ) on Friday December 24, 2010 @11:47PM (#34663990)

      That's an interesting figure, because when the Alpha came out DEC was predicting a 1,000-fold improvement in speed over about the same period. They split the improvement into factors of ten: clock speed, parallelism in the CPU and multiple cores were each expected to deliver a factor-of-ten improvement.

      Now while our algorithms might be getting better our programmers definitely are not. A lot of those improved algorithms must be in our APIs. Like the way it's easy to use a Hashtable in Java now, where in the past you might have just searched a linear array.

      There are more programmers because programming is becoming more accessible, so naturally more people who suck at programming are doing it.

      There's another problem: while gains from Moore's law have historically been realized by a recompile or less, most new algorithms require actually changing the code.

      • There's another problem: while gains from Moore's law have historically been realized by a recompile or less, most new algorithms require actually changing the code.

        I think you're forgetting about actually changing the hardware, in the former case.

    • by billstewart ( 78916 ) on Saturday December 25, 2010 @12:40AM (#34664182) Journal

      One thing that's happened to improve algorithms, besides having people study Computer Science and take advantage of the work others have done in academia and open source over the past few decades, is that computers are enough bigger and faster that you can solve problems that weren't feasible in the past: you didn't have enough RAM, or disks were too small and you needed to use tape, or CPUs weren't fast enough to bother with some techniques. Having those tools gives you more choices of algorithms than I had when I was an undergrad in the 70s, or than my professors had when they were learning CS in the 60s and 50s. 640KB wasn't enough for everybody, and I remember a Star Wars funded research project at Princeton in the late 80s that had a VAX with 128MB of RAM crammed into it so they could study what you could do if you really always had enough RAM. (They didn't think 128MB was really quite enough, but it was a good start and they could get it funded, and it was still extravagantly large - they even had a 450MB disk drive just for swap :-)

  • by 140Mandak262Jamuna ( 970587 ) on Friday December 24, 2010 @07:58PM (#34663118) Journal
    What the Good Lord Algorithmic Efficiency Improvements giveth, the Evil Lord Real-Time-Virus-Scanner taketh away.
    • by sco08y ( 615665 )

      What the Good Lord Algorithmic Efficiency Improvements giveth, the Evil Lord Real-Time-Virus-Scanner taketh away.

      Heretic! Thine computer is a vessel of purity whose sole purpose for existence is to be continually purged of impurity. Thou shalt be grateful for the pittance of cycles that our great Real-Tyme Virus Scanner does leave ye to work with your ill begotten "productivity applications."

  • by F.Ultra ( 1673484 ) on Friday December 24, 2010 @07:58PM (#34663120)
    So how did they magically increase the number of transistors using software? Oh wait, someone didn't know what Moore's law was...
    • by Paco103 ( 758133 )

      Virtual Machines?

  • Not so much (Score:5, Insightful)

    by kasperd ( 592156 ) on Friday December 24, 2010 @07:58PM (#34663122) Homepage Journal
    When hardware gets faster it applies more or less uniformly to all algorithms you run on that hardware. When an algorithm is improved it applies to just one specific problem. For some problems we already knew close-to-optimal algorithms 15 years ago; in those cases there was no way algorithmic improvements could have achieved such a speedup. And in fact the article mentions just one specific problem where algorithmic improvements made it about 43,000 times faster on one specific input. In other words, this number is not nearly as general as the summary makes it sound. There are algorithms that were not sped up nearly as much, and for any algorithms that were bad enough to begin with, there could have been an even larger speedup - without anybody making a fuss about it.
    • Re:Not so much (Score:5, Informative)

      by QRDeNameland ( 873957 ) on Friday December 24, 2010 @08:23PM (#34663222)

      Mod parent up. If this claim were true in any generic sense, we'd see newer software generally running ever faster on older hardware, and it wouldn't seem that every generation of hardware improvement goes mostly towards visual eye-candy and anti-virus.

      The problems the author cites, notably speech recognition, are cases that are notorious for *not* scaling to CPU power, that is, throwing brute force cycles at the problem only yields marginal benefits. While processing power doubles every 18 months, speech recognition only gets slightly better despite both better hardware and algorithmic advances.

      • While processing power doubles every 18 months
        That may have been the case at one point but it hasn't been true recently.

        I bought the 13 inch macbook I'm typing this on over 3 years ago. It has a 2.16GHz C2D. When I look at laptops now I still see dual-core chips at 2.x GHz. Occasionally a quad core is seen, but the clock speed (with all four cores running) is only around that of my macbook (slightly over if you count turbo and buy the extreme edition).

        Over in desktop land a similar thing is true, by mid 2007

        • Re:Not so much (Score:4, Insightful)

          by SLi ( 132609 ) on Friday December 24, 2010 @09:11PM (#34663462)

          The grandparent didn't say "clock speed doubles". It said "processing power doubles". In the last few years those things have been very distinct, in some cases processing power increasing while clock speeds have actually become lower.

          • In the last few years those things have been very distinct
            P4 to Core was a huge increase in performance per clock, but that was some time ago and afaict increases since then have been more evolutionary than revolutionary.

            I stand by my statement that the improvement in the last 3 years is closer to 2x than 4x, at least for CPUs that are available at anything like a sane price. The hex cores push that to 3x, but only if you have a heavily multi-threaded workload and nearly $1000 to drop on the CPU alone.

            See fo

            • by arth1 ( 260657 )

              P4 to Core was a huge increase in performance per clock, but that was some time ago and afaict increases since then have been more evolutionary than revolutionary.

              I stand by my statement that the improvement in the last 3 years is closer to 2x than 4x, at least for CPUs that are available at anything like a sane price. The hex cores push that to 3x, but only if you have a heavily multi-threaded workload and nearly $1000 to drop on the CPU alone.

              See for example an AnandTech Bench comparison of a Q6600 to an i7-950: http://www.anandtech.com/bench/Product/53?vs=100 [anandtech.com]

              But again you're talking speed. That's what this benchmark measures. Moore's law, on the other hand, talks about number of transistors.

              The improvements that have come with the Westmere architecture are not to be scoffed at (unless all you look at are old code benchmarks[*]). Some of them include
              - 32 nm processing
              - six cores
              - L3 cache shared by all cores
              - huge page support
              - graphics processor inside the CPU packaging
              - lowered power usage
              - SSE 4.2 and AES-NI instructions
              - real mode virtualization
              - Obsoleti

        • by drsmithy ( 35869 )

          Over in desktop land a similar thing is true: by mid 2007 we had 2.67 GHz C2Qs. Now we have 3.x GHz i7 quads and hexes.
          There has been improvement in the last 3 years but afaict it's closer to 2x than 4x.

          A Core i series CPU is substantially faster at the same clock speed than a Core 2 series CPU.

        • While processing power doubles every 18 months

          That may have been the case at one point but it hasn't been true recently.

          I suspect that this is the common misunderstanding of Moore's Law. Although the law itself refers to the number of transistors, until recently it *was* fair to say that those increases were accompanied by broadly comparable increases in computational speed, and most people assumed that Moore's Law referred to the latter, or at least implied it.

          Although Moore's Law itself still hasn't been broken, the change in focus to multi-core processors (which I assume we both know *isn't* the same as a 2 or 4 times

          • by hitmark ( 640295 )

            Indeed. Said "law" can either increase the number of transistors per core, or drop the cost per core as one manages to fit more cores per wafer and so increases production yields.

      • by Sparr0 ( 451780 )

        It is true in MANY senses. Take a game like Cube, which uses a very novel and efficient approach to representing and rendering a 3D world for a first-person shooter. Send that code back in time to the days of Wolf3D and it would blow them away on their same old-to-us hardware.

        • I think you misinterpret what I mean by "true in any generic sense". I'm talking about the idea of algorithmic performance outpacing hardware performance applying across the board, as the article and summary seem to imply.

          I'm entirely aware that new innovative algorithms can make great improvements in specific cases. However, in the vast majority of technology domains, the performance gains of more efficient software rarely outpace the performance gains of hardware over time (in many cases the opposite,

          • by Sparr0 ( 451780 )

            Do you have any counterexamples? That is, for what problems was the optimal algorithmic solution known 20 years ago?

            • Again, you misinterpret me. I never said more optimal algorithmic solutions are not being developed in any domain, just that the rate of performance gain from such optimizations are generally not as rapid as developments in hardware.

              For example, database technology. As mature a tech as that is, algorithmic improvements are being made every day, I'm sure. But you can't tell me that the improvement you get in database performance over the past 10 years is more due to better software than better hardware,

      • by Surt ( 22457 )

        Newer software is adding functionality faster than the algorithmic advances are making it faster.

    • Re:Not so much (Score:4, Insightful)

      by betterunixthanunix ( 980855 ) on Friday December 24, 2010 @08:34PM (#34663268)

      And in fact the article mentions just one specific problem where algorithmic improvements made it about 43,000 times faster on one specific input.

      I think a more general issue is citing a constant as the speedup for an algorithm. Improvements in algorithms often refer to improvements in the asymptotic efficiency, rather than constant factors. If I asked you how much of an improvement a switch from selection sort to merge sort would be, you would not tell me "it is 1000 times faster" because that would be wrong (except for a particular input size).
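
      To make that concrete, here is a quick sketch (a pure-Python selection sort against the built-in O(n log n) sort, so the exact numbers are only illustrative; the point is that the ratio keeps growing with the input size):

      import random
      import time

      def selection_sort(xs):
          xs = list(xs)
          for i in range(len(xs)):
              j = min(range(i, len(xs)), key=xs.__getitem__)   # O(n) scan per element
              xs[i], xs[j] = xs[j], xs[i]
          return xs

      for n in (1_000, 2_000, 4_000, 8_000):
          data = [random.random() for _ in range(n)]
          t0 = time.perf_counter()
          selection_sort(data)
          t1 = time.perf_counter()
          sorted(data)
          t2 = time.perf_counter()
          print(f"n={n}: selection sort is ~{(t1 - t0) / (t2 - t1):.0f}x slower than the built-in sort")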

    • If you can make an algorithm 43,000 times faster, that means the first version was utter garbage to begin with. By this standard, all a person needs to do is write an algorithm that is one million times less efficient than the hardware would permit. Then over 18 months, make the software progressively faster until it gets one million times faster and claim that you performed a miracle. I think this is the Montgomery "Scotty" Scott school of computer programming where you under-promise a thousand fold but
      • I do not really care about boot time (after all, the phone does not turn on instantly either); however, software and operating systems did get slower over time with not much improvement to justify the slowness.

        For example, Windows Vista and 7 require much better hardware than Windows XP, but apart from the changes in GUI it's not much different. So, where do the clock cycles and memory go? Windows 2003 (fresh install) runs quite fast inside a VM on one of my PCs (3x Xeon 700MHz, 3GB RAM) with one CPU and 256-38

        • When the phone reboots a few times a day, boot time matters. Yes of course, the phone (android) shouldn't crash but it does.

          OSes on the other hand do need to be booted more often, especially laptops, if you don't want to lose charge in standby or you care about crypto, which only works if you power off. In standby mode, people can freeze the memory and transplant it to another computer.

          The other problem is that OSes that boot slow also tend to run slow. The user interface slowness is something developers and
    • When an algorithm is improved it applies to just one specific problem. For some problems we already knew close to optimal algorithms 15 years ago, in those cases there was no way algorithmic improvements could have achieved such a speedup.

      I get what you're saying, but still agree with the article. There's one trivially simple algorithmic adjustment that's far more common now than 15 years ago: memoizing functions. I told a story [slashdot.org] a while back about changing a coworker's code from running for 4 hours to 8 seconds. The crux of the change was using a Python dict to store the output of a function and re-using it whenever possible, and the 99+% cache hit rate had a magical effect on runtime. It also took half a GB of RAM.

      That wouldn't have been a
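
      For reference, a minimal sketch of that kind of dict-based caching (expensive_step is a hypothetical stand-in for the real four-hour function):

      _cache = {}

      def expensive_step(x):
          return sum(i * i for i in range(x))    # stand-in for something genuinely slow

      def memoized(x):
          if x not in _cache:                    # in the story, 99+% of calls skip this branch
              _cache[x] = expensive_step(x)      # compute once per distinct input
          return _cache[x]

      print(memoized(100_000), memoized(100_000))   # second call is just a dict lookup

      In current Python the same trick is a one-liner: wrap the function in functools.lru_cache(maxsize=None).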

  • 45 years later (Score:4, Insightful)

    by zeroRenegade ( 1475839 ) on Friday December 24, 2010 @08:04PM (#34663154)
    Moore's law is based more on economics than technology. He was just a hands-on executive who had a clear vision of what the future would bring. Anybody who has actually read his original paper would realize this. The fact that his theory has held up this long is incredible.
  • Moore's Law is that the number of transistors on a single integrated circuit would double - it has nothing to do with speedup.
  • Anonymous Coward (Score:2, Informative)

    by Anonymous Coward

    Linear Programming is a reasonably obscure optimisation method used in operational research, where you describe a complex management problem in terms of a series of linear inequalities, then use an algorithm to search the space of feasible solutions for the optimal one. The algorithmic improvements here mean that we have found ways of cutting down the search space faster, while still arriving at the best solution. For one particular problem with just the right characteristics it's very possible that an i
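
    For the curious, a toy problem in that style, sketched with scipy's linprog (the coefficients are made up; linprog minimises, so the objective is negated):

    from scipy.optimize import linprog

    # Maximise 3x + 5y subject to a few linear resource limits.
    c = [-3, -5]                   # negated, because linprog minimises
    A_ub = [[1, 0],                #  x      <= 4
            [0, 2],                #      2y <= 12
            [3, 2]]                # 3x + 2y <= 18
    b_ub = [4, 12, 18]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)         # optimal plan and its objective value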

  • I'm not sure they know what Moore's law is.

    Oh, a submission from tim? of course it's wrong.

    nothing new here

  • by jthill ( 303417 ) on Friday December 24, 2010 @08:17PM (#34663196)

    Progress in Algorithms Beats Moore’s Law

    Everyone knows Moore’s Law – a prediction made in 1965 by Intel co-founder Gordon Moore that the density of transistors in integrated circuits would continue to double every 1 to 2 years.

    Fewer people appreciate the extraordinary innovation that is needed to translate increased transistor density into improved system performance. This effort requires new approaches to integrated circuit design, and new supporting design tools, that allow the design of integrated circuits with hundreds of millions or even billions of transistors, compared to the tens of thousands that were the norm 30 years ago. It requires new processor architectures that take advantage of these transistors, and new system architectures that take advantage of these processors. It requires new approaches for the system software, programming languages, and applications that run on top of this hardware. All of this is the work of computer scientists and computer engineers.

    Even more remarkable – and even less widely understood – is that in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed.

    The algorithms that we use today for speech recognition, for natural language translation, for chess playing, for logistics planning, have evolved remarkably in the past decade. It’s difficult to quantify the improvement, though, because it is as much in the realm of quality as of execution time. In the field of numerical algorithms, however, the improvement can be quantified. Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later – in 2003 – this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.

    The design and analysis of algorithms, and the study of the inherent computational complexity of problems, are fundamental subfields of computer science.

    • by pem ( 1013437 )
      What about memory size?

      Without knowing about the "before" and "after" algorithm, it's hard to know if the "after" algorithm couldn't have been run earlier because there wasn't enough memory.

      I've written a lot of stuff that runs a hell of a lot faster lately, but I use a lot more memory than I did 20 years ago.

      Any realistic scenario will chalk up around a factor of at least 8000 to memory improvements, leaving, oh, maybe a factor of 5 to pure algorithmic improvements.

    • A similar thing has occurred over the last 30 years in the physics sub-field of quantum Monte Carlo. I forget what the numbers are, but it's something similar with the algorithmic improvements being like an order of magnitude greater than the CPU improvements.
  • by Jay Maynard ( 54798 ) on Friday December 24, 2010 @08:26PM (#34663238) Homepage

    Looks like both Moore's Law and the improvement in algorithms together don't beat out Gates's Law: Software bloat doubles every 18 months.

    • I've always wondered how they get away with advertising each version of Windows as "the fastest version EVER!"... while steadily increasing the minimum hardware requirements on the box.

      You mean it runs faster on a faster computer? Gosh, bowl me over.

  • Remember when you were a youngster and you had to learn about O() ?

    Well, over the years you lose sight of the importance of that.

    Not too long ago, just for fun, I implemented a DFT naively, O(n^2), and as an FFT, O(n log n).

    Try that and run it for 65536 points and see what happens, even on a fast machine...

    Consider for example the humble matrix solver. Iterative solvers (only applicable for certain problem spaces) can get you n lg n sorts of behavior, as compared to n^3 for generalized Gaussian elimination.
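
    Roughly what that DFT experiment looks like in Python, as a sketch: a naive O(n^2) DFT built from an explicit twiddle matrix against numpy's FFT, at 4096 points rather than 65536 so the matrix fits in memory:

    import time
    import numpy as np

    def naive_dft(x):
        n = len(x)
        k = np.arange(n)
        w = np.exp(-2j * np.pi * np.outer(k, k) / n)   # n x n twiddle matrix: O(n^2)
        return w @ x

    x = np.random.rand(4096)                           # 65536 points would need a ~68 GB matrix here

    t0 = time.perf_counter()
    naive_dft(x)
    t1 = time.perf_counter()
    np.fft.fft(x)
    t2 = time.perf_counter()
    print(f"naive DFT: {t1 - t0:.3f}s   FFT: {t2 - t1:.6f}s")

    Even at this modest size the gap is several orders of magnitude; at 65536 points the naive version simply isn't practical.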

  • by Khyber ( 864651 ) <techkitsune@gmail.com> on Friday December 24, 2010 @08:57PM (#34663380) Homepage Journal

    OPTIMIZE YOUR SHIT.

    Because tons of hardware is useless if you can't push it to its limits with efficient code and algorithms.

    Game and OS designers, I'm looking right at you.

  • by lkcl ( 517947 ) <lkcl@lkcl.net> on Friday December 24, 2010 @09:01PM (#34663410) Homepage

    i read a sci-fi story once, i forget who it was by, where some space travellers found themselves in a situation where their computer failed. so they taught themselves maths, plotted the stars, learned to do trigonometry in their heads, and over the course of many months, the 30 or so travellers managed to work in parallel to calculate an approximate but accurate enough path to get themselves within range of rescuers.

    then, there is another sci-fi story, called "The End of Eternity", which dates back to when the title "Computer" was utilised in exactly the same way as "Professor" is utilised, now. the hero of the story was a human named "Computer Harkan". Computer. "one who computes".

    the point of mentioning these two things is very very simple: prior to modern "computers", parallel "Computing" algorithms were commonplace, well known, and heavily relied on. Along came "computers" and - you know what's coming next - exceedingly rapidly, all that incredibly valuable "parallel" expertise was lost, in favour of single-step, single-process algorithms. i believe we're paying for that loss of expertise, somewhat.

  • I don't understand why a 43,000-fold algorithm improvement is impressive. Who needs 43,000-fold when anyone can easily make a 1,000,000-fold improvement!

    Proof:

    Let N=1000
    O(N^2) = 1,000,000
    O(N) = 1000
    O(log N) = 3
    O(1) = 1

    O(N^2)/O(1) = 1,000,000 / 1 = 1,000,000

    Wait till we make improvements in O(N!) problems like traveling salesman and see how huge an improvement that's gonna be!

  • While processing power has increased over the years, the amount of memory that can be built in to a given amount of space has grown at a similar rate. So while it's easy to account for pure processing performance increases, what about algorithms that improved due to the space-time tradeoff [wikipedia.org], which would allow more complex algorithms that use more memory in return for requiring less processing time?

    It seems disingenuous to say that algorithm improvements beat Moore's Law if they're really just benefiting from

  • And yet ... (Score:4, Funny)

    by PPH ( 736903 ) on Saturday December 25, 2010 @12:26AM (#34664116)

    ... Murphy still holds the lead.

  • Ah, I remember looking at the sorting algorithm in a low level graphics library. Tightly hand coded, packed to keep every pipeline stage filled and make optimal use of all the parallel units in a VLIW graphics processor, it was probably the most efficient bubble sort I'd ever seen.

    I coded up a quicksort to about the same degree of tightness in a couple days, and, golly gee, a whole bunch of code suddenly got faster. Some MUCH faster... That was Lesson 1 for me. Optimize algorithms before optimizing impl

  • Given the conflict of interest in the report evidenced by:

    Report to the President and Congress: Designing a Digital Future: Federally Funded R&D in Networking and IT

    It's amazing they didn't report a billion-fold increase in algorithms resulting from government funded R&D.

  • From TFA:

    The algorithms that we use today for speech recognition, for natural language translation, for chess playing, for logistics planning, have evolved remarkably in the past decade.

    I for one have yet to see any major improvements in speech recognition in the last ten years. It seems that it plateaued around 1995-2000 and hasn't had any decent improvements since.

    Meanwhile, speech recognition has seen an increase in deployments in things such as:

    • voice-driven menu systems... "I'm sorry, did you say 'put me through to an terminator'?", and
    • speech-to-text voicemail... Telstra in Australia is a classic for this, with voicemail SMS's being nearly incomprehensible in conjunction with c

"The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts." -- Bertrand Russell

Working...