The Gradual Public Awareness of the Might of Algorithms

Soylent Mauve writes "The trend toward data- and algorithm-driven tuning of business operations has gotten a lot of attention recently — check out the recent articles in the New York Times and the Economist. It looks like computer scientists, especially those with machine learning training, are getting their day in the sun. From the NYT piece: 'It was the Internet that stripped the word of its innocence. Algorithms, as closely guarded as state secrets, buy and sell stocks and mortgage-backed securities, sometimes with a dispassionate zeal that crashes markets. Algorithms promise to find the news that fits you, and even your perfect mate. You can't visit Amazon without being confronted with a list of books and other products that the Great Algoritmi recommends. Its intuitions, of course, are just calculations -- given enough time they could be carried out with stones. But when so much data is processed so rapidly, the effect is oracular and almost opaque.'"
This discussion has been archived. No new comments can be posted.

  • Slightly O.T. (Score:5, Informative)

    by gardyloo ( 512791 ) on Sunday September 23, 2007 @01:34PM (#20720389)
    I just (a few minutes ago) found this free PDF book about algorithms (written for the undergrad-level student). It's pretty good: http://beust.com/algorithms.pdf [beust.com]
  • by 12357bd ( 686909 ) on Sunday September 23, 2007 @02:25PM (#20720787)
    maybe not as beautiful as 'classic' ones, but algorithms indeed. Something like shapes, you know: 'classic' algorithms (e.g. sort) are somewhat like circles (simple formulas), but real objects (e.g. leaves) are extremely complex formulas, only approximated by fractals and with a lot of 'heuristics' in them.
  • by Stochastism ( 1040102 ) on Sunday September 23, 2007 @02:46PM (#20720969) Journal
    Did you mean SVM? I think the quadratic programming optimizer used for SVM training would count as a black-box, even to most of the SVM crowd ;) And don't get me started on Gaussian Processes.

    Machine learning is supposed to *look* like magic. It's supposed to behave like a black box with just one or two knobs on it. When -- and this is unfortunately almost always -- it doesn't, then it's not the machine learning doing the work, it's the programmer. In this case I can forgive Joe Wannabe for tearing his hair out over the complexity. The problem with machine learning is that the "no free lunch" theorem says there is essentially no one-size-fits-all black box. The programmer must have some understanding of why they are using that particular black box.
  • by Chmcginn ( 201645 ) * on Sunday September 23, 2007 @03:32PM (#20721261) Journal

    I buy the new Terry Pratchett book and I'm bombarded with EVERY book by him or co-authored by him or licensed by him or whatever. I don't want derivatives.

    My favorite is getting Amazon recommendations for books I've already bought... through Amazon.

    I often find myself saying "Ah, yes, I just bought the hardcover version of that book last year, now I should go out and get the paperback, the second edition with a few minor spelling corrections, etc, etc."

    Or something.

  • by rjh ( 40933 ) <rjh@sixdemonbag.org> on Sunday September 23, 2007 @09:03PM (#20723443)

    I'm a graduate student in CS right now. One of the things I'm researching is stochastic approximation heuristics. Without any argument, these are algorithms. They have to be algorithms, or else the Church-Turing Thesis doesn't apply and we wouldn't be able to have computers do them at all.

    An algorithm is, broadly speaking, a terminating sequence of deterministic steps that effectively derives outputs from provided inputs. But don't take my word for it -- after all, I'm just a random guy on Slashdot. Maybe Cormen, Leiserson, Rivest and Stein's Introduction to Algorithms should be believed:

    Informally, an algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output.

    Don Knuth has an equivalent definition of algorithm in The Art of Computer Programming. He makes explicit a couple of details which are implicit in the CLRS definition, but other than that they're interchangeable. Knuth talks about the effectiveness of algorithms, in that an algorithm must uphold the promises the programmer makes about it.
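    As a concrete instance of those definitions, here's Euclid's gcd (the example Knuth himself opens TAOCP with) as a minimal Python sketch: a well-defined, terminating procedure that takes values as input and produces a value as output.

    ```python
    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm: every step is well-defined, the
        second argument strictly decreases so the loop terminates,
        and two integer inputs map to one integer output."""
        while b:
            a, b = b, a % b
        return a

    assert gcd(48, 18) == 6
    assert gcd(7, 0) == 7
    ```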

    So now that we've got a decent definition of "algorithm", one that's approved by five of the brightest lights in computer science, let's look at simulated annealing. This is a stochastic (random) heuristic approximation process. You say it's not an algorithm, because sometimes it'll give barkingly wrong answers. I say it is. So let's look at our definition of algorithm, and see whether it is or not.

    It's well-defined, in that every step of the process has mathematical clarity and precision. It's deterministic, in that if I feed it the exact same inputs (including initializing the pseudorandom number generator to the same seed value), I get the exact same outputs. It will always terminate, thanks to a counter that limits the annealing process to a couple of million operations. And finally, it is effective, in that it upholds the promises I, the programmer, make about the outputs.

    According to your reasoning, it fails on the effectiveness criterion. It's not an algorithm because it doesn't solve NP-complete problems, it merely approximates solutions to them. But that's a straw man: I never claimed it solved NP-complete problems, so the effectiveness of the algorithm is not determined by whether it does.
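    The seeded, iteration-capped annealing loop described above can be sketched as follows (a hypothetical one-dimensional objective and made-up cooling parameters, purely for illustration): fixing the PRNG seed makes every run bit-for-bit repeatable, and the hard step counter guarantees termination.

    ```python
    import math
    import random

    def simulated_annealing(f, x0, seed=0, max_steps=100_000,
                            t0=1.0, cooling=0.9995, step=0.5):
        """Minimize f from x0. Same seed -> same outputs
        (deterministic); max_steps caps the run (terminating)."""
        rng = random.Random(seed)      # dedicated, seeded PRNG
        x, fx = x0, f(x0)
        best, fbest = x, fx
        t = t0
        for _ in range(max_steps):     # hard cap: always terminates
            cand = x + rng.uniform(-step, step)
            fc = f(cand)
            # Metropolis criterion: always accept improvements,
            # accept worsenings with probability exp(-delta / t)
            if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = x, fx
            t *= cooling               # geometric cooling schedule
        return best, fbest

    # Determinism: identical seeds give identical results.
    x1, f1 = simulated_annealing(lambda x: (x - 3) ** 2, x0=10.0, seed=42)
    x2, f2 = simulated_annealing(lambda x: (x - 3) ** 2, x0=10.0, seed=42)
    assert (x1, f1) == (x2, f2)
    ```

    Note the output is only a (usually good) approximation of the minimizer, which is exactly the parent's point: approximate does not mean non-algorithmic.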
