
Could Wikipedia Become a Supercomputer?

An anonymous reader writes "Large websites represent an enormous resource of untapped computational power. This short post explains how a large website like Wikipedia could make a tremendous contribution to science by harnessing the computational power of its readers' CPUs to help solve difficult computational problems." It's an interesting thought experiment, at least — if such a system were practical to implement, what kind of problems would you want it chugging away at?
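The idea in the summary — each visitor's browser quietly claims a small piece of a big problem and reports the answer back — can be sketched in a few lines. This is a hypothetical illustration, not anything the post specifies: the "server" work queue is simulated with local functions here, where a real deployment would use fetch() against a coordination server and run the computation in a Web Worker so the page stays responsive.

```javascript
// Hypothetical sketch of browser-based volunteer computing.
// The example problem: count primes in a range, split into work units.

// Simulated server-side state (a real system would keep this on a server).
const queue = [];
for (let lo = 2; lo < 2000; lo += 500) queue.push({ lo, hi: lo + 500 });
const results = [];

function claimWorkUnit() { return queue.shift() ?? null; }  // stand-in for an HTTP "claim" call
function postResult(count) { results.push(count); }          // stand-in for an HTTP "submit" call

// Trial division is enough for a toy work unit.
function isPrime(n) {
  if (n < 2) return false;
  for (let d = 2; d * d <= n; d++) if (n % d === 0) return false;
  return true;
}

// What each visitor's browser would run in the background.
function visitorCompute() {
  const unit = claimWorkUnit();
  if (!unit) return false;                 // no work left
  let count = 0;
  for (let n = unit.lo; n < unit.hi; n++) if (isPrime(n)) count++;
  postResult(count);
  return true;
}

// Simulate visitors draining the queue, then aggregate.
while (visitorCompute()) {}
const totalPrimes = results.reduce((a, b) => a + b, 0);
console.log(`primes in [2, 2002): ${totalPrimes}`);
```

The key property is that work units are independent, so visitors can come and go at any time; a real system would also need redundancy and result verification, since anonymous clients can return garbage.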
  • boinc (Score:5, Informative)

    by mrflash818 ( 226638 ) on Saturday June 25, 2011 @03:40PM (#36570446) Homepage Journal

    There are already existing infrastructure and projects through which people can donate their systems' computational power: []

  • Re:I don't buy it... (Score:4, Informative)

    by the gnat ( 153162 ) on Saturday June 25, 2011 @06:07PM (#36571594)

    I'll be interested to see if any /.ers can propose genuinely significant problems that would be solvable by a 100-fold or even 1000-fold increase in processing power.

    I guess it depends on how you define "significant." My guess is that there are a lot of areas of science that could benefit from massive computing resources, not because it would magically solve problems, but because it would enable researchers to explore more hypotheses and be more efficient while doing so. The reason they're not using existing resources like DOE supercomputers is that many of these applications are (not unreasonably) perceived as wasteful and inefficient, but if petaflop-class distributed systems became widely accessible, this argument would vanish.

    I personally find some of the hype about Folding@Home to be overblown (it's not going to cure cancer or replace crystallography, folks), but it's actually an excellent example of the kind of problem that's ill-suited to traditional HPC but a perfect fit for distributed systems. The molecular dynamics simulations that it runs are not hugely time-consuming on their own, but there is a huge sampling problem: only a tiny fraction of the simulations have the desired result. So they run tens or hundreds of thousands of simulations on their network, and get the answer they want. There are other examples like this, also in protein structure; it turns out that you can solve some X-ray crystal structures by brute-force computing instead of the often laborious experimental methods involving heavy atoms. This isn't common practice because it requires thousands of processors to happen in a reasonable amount of time - and it still may not work. But if every biology department had a petaflop cluster available, it would be much more popular.
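    The sampling pattern described above — many cheap, independent simulations, of which only a small fraction hit the rare event of interest — can be illustrated with a toy stand-in for molecular dynamics. This is purely an assumed example (a 1-D random walk, not real MD): each "volunteer" runs a seeded batch of short trajectories and reports only its success count, which is why the workload parallelizes so cleanly.

    ```javascript
    // Toy illustration of the rare-event sampling problem.
    // One short "simulation": a 1-D random walk succeeds if it
    // drifts up to +30 within 100 steps (a rare event).
    function runTrajectory(rng) {
      let x = 0;
      for (let step = 0; step < 100; step++) {
        x += rng() < 0.5 ? -1 : 1;
        if (x >= 30) return true;  // rare event reached
      }
      return false;
    }

    // Small deterministic PRNG (mulberry32) so runs are reproducible.
    function mulberry32(a) {
      return function () {
        let t = (a += 0x6D2B79F5);
        t = Math.imul(t ^ (t >>> 15), t | 1);
        t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
        return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
      };
    }

    // Each "volunteer" runs an independent batch with its own seed
    // and returns only its success count.
    function runBatch(seed, trials) {
      const rng = mulberry32(seed);
      let successes = 0;
      for (let i = 0; i < trials; i++) if (runTrajectory(rng)) successes++;
      return successes;
    }

    // Four volunteers, 5000 trajectories each; aggregation is a sum.
    const batches = [1, 2, 3, 4].map((seed) => runBatch(seed, 5000));
    const total = batches.reduce((a, b) => a + b, 0);
    const rate = total / 20000;
    console.log(`successes: ${total} of 20000 (rate ${rate})`);
    ```

    Because trajectories never communicate, adding volunteers scales the sampling linearly — exactly the regime where a distributed network beats a tightly coupled supercomputer.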

    More generally, if we suddenly gained a 100- or 1000-fold increase in processing power, habits would change. My lab recently bought several 48-core systems (which are insanely cheap), and we're starting to do things with them that we would have considered extravagant before. Nothing world-changing, and nothing that would have been outright impossible on older systems, but the boost in efficiency is very noticeable - time that would have been spent waiting for the computers to finish crunching numbers is spent analyzing results and generating new datasets instead.
