Could Wikipedia Become a Supercomputer?

An anonymous reader writes "Large websites represent an enormous resource of untapped computational power. This short post explains how a large website like Wikipedia could give a tremendous contribution to science, by harnessing the computational power of its readers' CPUs and help solve difficult computational problems." It's an interesting thought experiment, at least — if such a system were practical to implement, what kind of problems would you want it chugging away at?
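For concreteness, here is a minimal sketch of the mechanism the post imagines: the page fetches a small work unit, hands it to a Web Worker so the article stays readable while the CPU crunches, and posts the answer back. Nothing like this exists on Wikipedia; the /api/work-unit and /api/result endpoints, the WorkUnit shape, and the toy prime search are all hypothetical stand-ins.

// worker.ts: hypothetical compute kernel; a prime search stands in for real scientific work.
interface WorkUnit { id: string; start: number; end: number; }
interface WorkResult { id: string; primes: number[]; }

onmessage = (e: MessageEvent<WorkUnit>) => {
  const { id, start, end } = e.data;
  const primes: number[] = [];
  for (let n = Math.max(start, 2); n < end; n++) {
    let isPrime = true;
    for (let d = 2; d * d <= n; d++) {
      if (n % d === 0) { isPrime = false; break; }
    }
    if (isPrime) primes.push(n);
  }
  postMessage({ id, primes } as WorkResult);
};

// page.ts: fetch a unit, compute off the main thread, report the result.
async function runOneUnit(): Promise<void> {
  const unit: WorkUnit = await (await fetch("/api/work-unit")).json();  // hypothetical endpoint
  const worker = new Worker("worker.js");
  worker.onmessage = async (e: MessageEvent<WorkResult>) => {
    await fetch("/api/result", {                                        // hypothetical endpoint
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(e.data),
    });
    worker.terminate();
  };
  worker.postMessage(unit);
}

Whether any of that would be worth doing is exactly what the comments below argue about.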
This discussion has been archived. No new comments can be posted.

  • No. It couldn't. (Score:3, Insightful)

    by Anonymous Coward on Saturday June 25, 2011 @03:45PM (#36570484)

    Wikipedia is a clusterfuck of little tiny fiefdoms. And you expect them to solve actual problems? hahahahaha.

  • by ScrewMaster ( 602015 ) on Saturday June 25, 2011 @03:47PM (#36570514)
    "Clusterfuck of little tiny fiefdoms." That has to be the best description of wikipedia that I've ever heard.
  • coins (Score:0, Insightful)

    by Anonymous Coward on Saturday June 25, 2011 @03:48PM (#36570520)

    Easy: Wikipedia will use users' computational power to mine bitcoins. In this way they won't need any donations. Just wait.

  • Do not like it (Score:5, Insightful)

    by drolli ( 522659 ) on Saturday June 25, 2011 @03:51PM (#36570548) Journal

    If I want to contribute computing power somewhere for free, there are already ways to do it.

    If Wikipedia needs money, I can donate something or pay something.

    But *please*: I use Wikipedia often, maybe primarily, on my tablet. I don't think that abusing an ARM processor running on battery power over an unstable and slow internet connection will help a lot.

  • Ummm... (Score:5, Insightful)

    by fuzzyfuzzyfungus ( 1223518 ) on Saturday June 25, 2011 @03:57PM (#36570590) Journal
    Let me tell you a little story:

    Once upon a time, shortly after an asteroid impact wiped out the vacuum tubes, but before Steve Jobs invented aluminum, we had computers that plugged into the wall, with CPUs that ran all the time at pretty much the same power level. Even when idle. Back in those days, had most people's schedulers not kind of sucked, there might actually have been some "free" CPU time floating about.

    Now, back to the present: on average, today's computer has a pretty substantial delta between power at full load and power at idle. This is almost 100% certain to be the case if the computer is a laptop or embedded device of some kind (which is also where the difference in battery life will come to the user's notice most quickly). CPU load gets converted into heat, power draw, and fan noise within moments of being imposed.

    Now, it still might be the case that wikipedia readers are feeling altruistic; but, if so, javascript is an unbelievably inefficient mechanism for attacking the sort of problems where you would want a large distributed computing system. A java plugin would be much better, an application better still, at which point you are right back to today, where we have a number of voluntary distributed computing projects.

    If they wished to enforce, rather than persuade, they'd run into the unpleasant set of problems with people blocking/throttling/lying about the results of/etc. the computations being farmed out. Given Wikipedia's popularity, plugins for doing so in all major browsers would be available within about 15 minutes. Even without them, most modern browsers pop up some sort of "a script on this page is using more CPU time than humanity possessed when you were born to twiddle the DOM to no apparent effect, would you like to give it the fate it deserves?" message if JS starts eating enough time to hurt responsiveness. (A sketch of the burst-and-yield style a page would need just to dodge that dialog follows this comment.)

    In summary: Terrible Plan.
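The responsiveness problem described above is concrete: to run at all without tripping the slow-script warning, in-page compute would have to be cut into short bursts that yield back to the browser. A minimal sketch of that pattern; the 10 ms burst budget is an arbitrary assumption, and the Math.sqrt loop is a toy stand-in for whatever the work unit would actually be.

// Break the work into short bursts and yield between them, so the browser's
// slow-script warning never fires and the page stays responsive.
function computeInBursts(totalIterations: number, onDone: (sum: number) => void): void {
  const BURST_MS = 10;          // assumed per-burst budget, well under the "jank" threshold
  let i = 0;
  let sum = 0;
  const burst = (): void => {
    const start = performance.now();
    while (i < totalIterations && performance.now() - start < BURST_MS) {
      sum += Math.sqrt(i++);    // stand-in workload
    }
    if (i < totalIterations) {
      setTimeout(burst, 0);     // yield to the event loop, then continue
    } else {
      onDone(sum);
    }
  };
  burst();
}

computeInBursts(50_000_000, (sum) => console.log("finished:", sum));

Even written this politely, the cycles still turn into heat, fan noise, and battery drain, which is the comment's larger point.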
  • Re:Ummm... (Score:2, Insightful)

    by swillden ( 191260 ) <shawn-ds@willden.org> on Saturday June 25, 2011 @06:54PM (#36571966) Journal

    javascript is an unbelievably inefficient mechanism for attacking the sort of problems where you would want a large distributed computing system

    Not necessarily. This is true of most of the Javascript engines around, because they're pure interpreters of a language not designed to be particularly efficient, but Javascript can be compiled to machine code before execution. This is what Google's V8, the Javascript engine in Chrome, does. With JIT-compiled Javascript you'll get efficiency comparable to JIT-compiled Java, which is pretty competitive with compiled C. (The kind of tight numeric loop that applies to is sketched after this comment.)

    The rest of your post is dead on, though. There really aren't any spare cycles today. Even desktop machines and servers dynamically adjust clock rate on demand, and automatically drop into various power-saving states to save even more power when the cycles aren't needed. So it would be rude to exploit users' CPUs without their permission, and in the case of battery-powered devices it could be much worse than just rude.
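For what it's worth, the kind of code the JIT claim applies to looks like the sketch below: a tight numeric loop over a typed array, which engines such as V8 compile to machine code. The dot-product kernel and the timing harness are illustrative only; no comparison against C is actually measured here.

// A tight numeric kernel over typed arrays: the kind of loop a JIT compiles well.
function dotProduct(a: Float64Array, b: Float64Array): number {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    sum += a[i] * b[i];
  }
  return sum;
}

const n = 1_000_000;
const a = new Float64Array(n);
const b = new Float64Array(n);
for (let i = 0; i < n; i++) {
  a[i] = i;
  b[i] = 1 / (i + 1);
}

const t0 = performance.now();
const result = dotProduct(a, b);
console.log(`dot product = ${result}, took ${(performance.now() - t0).toFixed(2)} ms`);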

  • by AlienIntelligence ( 1184493 ) on Saturday June 25, 2011 @08:34PM (#36572640)

    Well, one person started to, then kinda went on a weird other-topic rant.

    The biggest issue, the one that makes this entire idea sound pretty worthless for the majority of Wikipedia users (I presume; I don't know of a source that would vet or refute that), is this: what good is one or two minutes of computing time?

    Even the longest articles I might read on there are barely 5 minutes for me. I am a quick reader, though.

    Do many users 'stay' on the site for extended periods of time? I honestly have never researched anything for any long stay. If I need to do serious research, no offense, Wikipedia, but you are not going to be the source.

    I guess you could break the work down, or only schedule work that can be broken into 1-minute chunks; you could dole out work units based on the length of the article, with a maximum of maybe 3-5 minutes (the average attention span), so when someone gives up on "all the words" the work isn't lost. (A rough sketch of that sizing idea follows this comment.)

    Then you get into: how long is the download time of the chunks? Will that be affected throughout the day, as server latency scales up and down, or as localized traffic scales up and down? That eats into compute time, since you have to send the work unit back, which may be an order of magnitude more data.

    Next point... why not just create the "Wikipedia Distributed Computer Project" and have frequent (or whoever) users download a client and run it... because then it would be just like all the others, and then you see why the answer to this is...

    1) Yes, Wikipedia could become a supercomputer. [Even though it wouldn't be Wikipedia in the sense that it was THEIR computers.]

    2) So, that makes it in a way... NO, they can't become a supercomputer, because of feasibility etc., but they can be a hub for a distributed network, which really isn't a supercomputer.

    -AI
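A rough, purely illustrative sketch of that sizing idea: estimate time-on-page from the article's word count, clamp it to the 1-5 minute window suggested above, and cut it into one-minute units. The 200-words-per-minute reading speed and the clamp values are assumptions, not Wikipedia data.

// Size the work to the time a reader will plausibly stay on the article.
interface ChunkPlan {
  budgetSeconds: number;   // total compute time we expect to get from this reader
  chunks: number;          // number of one-minute work units to hand out
}

function planWorkUnits(articleWordCount: number, secondsPerChunk = 60): ChunkPlan {
  const WORDS_PER_MINUTE = 200;                                            // assumed reading speed
  const estimatedStaySeconds = (articleWordCount / WORDS_PER_MINUTE) * 60;
  const budgetSeconds = Math.min(Math.max(estimatedStaySeconds, 60), 300); // clamp to 1-5 minutes
  return { budgetSeconds, chunks: Math.floor(budgetSeconds / secondsPerChunk) };
}

console.log(planWorkUnits(400));   // ~2 minute read -> 2 one-minute chunks
console.log(planWorkUnits(3000));  // ~15 minute read -> capped at 5 chunks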

  • by rdnetto ( 955205 ) on Saturday June 25, 2011 @11:40PM (#36573648)

    Please, for the love of all that is sane, do not press enter just because you've reached the edge of the textbox. Some of us actually have desktop sized screens, and reading a column of text that only occupies 1/4 of it is excruciatingly painful.
