Supercomputing Science Technology

Einstein@Home Set To Break Petaflops Barrier

hazeii writes "Einstein@home, the distributed computing project searching for the gravitational waves predicted to exist by Albert Einstein, looks set to breach the 1 petaflops barrier around midnight UTC tonight. To put that in context: if it were in the Top500 supercomputers list, it would come in at number 24. I'm sure there are plenty of Slashdot readers who can contribute enough CPU and GPU cycles to push them well over 1,000 teraflops — and maybe even discover a pulsar in the process." From their forums: "At 14:45 we had 989.2 TFLOPS with an increase of 1.3 TFLOPS/h. In principle that's enough to reach 1001.1 TFLOPS at midnight (UTC), but very often, like yesterday, between 22:45 and 22:50 there occurs a drop of about 5 TFLOPS. So we will very likely hit 1 PFLOPS early tomorrow morning."
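As a sanity check, the forum post's extrapolation is easy to reproduce; a minimal sketch (awk chosen arbitrarily; 14:45 UTC to midnight is 9.25 hours):

    awk 'BEGIN { printf "%.1f TFLOPS\n", 989.2 + 9.25 * 1.3 }'
    # prints 1001.2 TFLOPS, a hair above the quoted 1001.1 (rounding in their 1.3 TFLOPS/h rate)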
  • E@H success (Score:1, Informative)

    by Anonymous Coward

    Although Seti@Home is probably the best-known project (or used to be), E@H is probably the most successful one from a pure science perspective. They have actually managed to discover new pulsars that nobody had seen before, and unlike some slightly shady DC projects (some of them actually for-profit), their data is accessible. Good job, E@H team!

    • by Macrat ( 638047 )

      Although Seti@Home is probably the best-known project (or used to be), E@H is probably the most successful one from a pure science perspective.

      Unlike the pure science of Folding@home?

  • by Anonymous Coward

    You have to run the Linpack benchmark and report that.

    • by kasperd ( 592156 ) on Wednesday January 02, 2013 @01:50PM (#42452655) Homepage Journal

      You have to run the Linpack benchmark and report that.

      And I guess no distributed computing platform is ever going to score in the top 500 according to that benchmark. The communication performance between nodes is very important to most parallel algorithms, and any decent benchmark would take that into account. A real supercomputer has much faster communication between nodes than what you can achieve across the Internet; both throughput and latency matter. There are some specific problems which can be split into parts that nodes can compute independently without communicating, but most supercomputers are used for tasks that do not fall into that class.

      At some point I heard the rule of thumb that when you are building a supercomputer, you divide your funds into three equal parts. One of those was to be spent on the interconnect. I don't recall what the other two were supposed to be spent on.

      • Re: (Score:3, Funny)

        by Anonymous Coward

        At some point I heard the rule of thumb that when you are building a supercomputer, you divide your funds into three equal parts. One of those was to be spent on the interconnect. I don't recall what the other two were supposed to be spent on.

        Hookers and blow.

      • I agree. That's what makes all the difference between such distributed systems or simple clusters and real massively parallel supercomputers, with a single kernel instance running on thousands of CPU cores accessing the same distributed shared memory. Opportunistic distributed batch jobs are nothing like that; the two don't compare.
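      A rough illustration of the interconnect point above (a back-of-envelope sketch; the per-node compute rate and the bytes-per-FLOP ratio are invented, order-of-magnitude assumptions, not measurements):

        # If each node computes at 100 GFLOPS and the algorithm needs 1 byte of
        # traffic per 1000 FLOPs, every node must sustain 100 MB/s -- easy for
        # InfiniBand (GB/s-class), hopeless for a typical ~1 MB/s home uplink.
        awk 'BEGIN { flops = 100e9; bytes_per_flop = 1 / 1000;
                     printf "%.0f MB/s per node\n", flops * bytes_per_flop / 1e6 }'

      Embarrassingly parallel workloads like Einstein@home need essentially zero bytes of communication per FLOP, which is why they thrive over the Internet while Linpack-style runs don't.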

  • folding@home (Score:2, Insightful)

    by etash ( 1907284 )
    genuine question:

    wouldn't it be wise for practical* reasons for people to offer more power to folding@home instead of einstein@home?

    * = has a better chance of helping humanity (curing diseases, etc.)
    • Re: (Score:2, Interesting)

      by hawguy ( 1600213 )

      genuine question:

      wouldn't it be wise for practical* reasons for people to offer more power to folding@home instead of einstein@home?

      * = has a better chance of helping humanity (curing diseases, etc.)

      Or, to put it another way: why waste resources studying astronomy when there are so many sick people in the world and it would be better for humanity to put those resources into curing disease?

      • by Anonymous Coward

        Here, I'll tee it up for you.

                We're all going to die. Why bother doing anything at all?

        Pretty easy statement to refute; same refutation works for all the other flavors as well.

        But to address the OP, they are my flops, and I can do with them as I please; just as you can do whatever you like with yours.

      • by etash ( 1907284 )
        IMHO you are comparing apples to oranges. Giving food to poor people in Africa does not solve any problem; the problem would be solved if they had a strong economy, education, etc. The equivalent of this in the field of medicine, with the added benefit that it SOLVES (instead of postponing) a particular set of problems for all humanity (instead of just Africa), is FINDING a permanent cure for diseases.
      • Or, to put it another way: why waste resources studying astronomy when there are so many sick people in the world and it would be better for humanity to put those resources into curing disease?

        Protein folding simulation is such a large and basic need globally that there ought to be enough large-scale interest to make development of specialized ASICs for these problems cost-effective and exceedingly useful for all who need to run these simulations. A quick check of Google shows such chips do in fact exist, with unbelievable performance figures which kick the snot out of countless tens of thousands of CPUs/GPUs. There is no shortage of funding for medical research, so it begs the question: why waste CPU/GPU resources on folding simulations?

        I still do seti and milkyway at home because there are no resources allocated for seti, and milkyway at home is interesting to me personally.

        • Why not use a bacterial or yeast culture and let them fold actual proteins?
        • Protein folding simulation is such a large and basic need globally that there ought to be enough large-scale interest to make development of specialized ASICs for these problems cost-effective and exceedingly useful for all who need to run these simulations. A quick check of Google shows such chips do in fact exist, with unbelievable performance figures which kick the snot out of countless tens of thousands of CPUs/GPUs. There is no shortage of funding for medical research, so it begs the question: why waste CPU/GPU resources on folding simulations?

          I still do seti and milkyway at home because there are no resources allocated for seti, and milkyway at home is interesting to me personally.

          First of all, protein folding is not the only thing they do; the Folding@HOME infrastructure is used by many groups for a variety of bio-molecular studies [stanford.edu].

          Secondly, custom ASIC-based machines like Anton [wikipedia.org] and MDGRAPE [wikipedia.org] (which are AFAIK the only such machines around these days) consist of much more than a custom chip: they use specialized interconnects, memory, software, etc., and cost a lot. The MDGRAPE-4, the upcoming version of the Riken-developed custom molecular simulation machine, costs $10M + $4M (development + [www.cscs.ch]

    • by Anonymous Coward

      Sure. But wouldn't it be wise, for practical reasons, for you to go out and help at a local soup kitchen instead of posting on Slashdot?

      People are going to do what they want to do. Of the people who share CPU/GPU cycles, more are interested in Einstein.

    • Re:folding@home (Score:5, Interesting)

      by gQuigs ( 913879 ) on Wednesday January 02, 2013 @01:03PM (#42452109) Homepage

      Discovery is not usually a straight line.

      I donate to SETI@Home, Einstein@Home, LHC@Home, and a bunch of projects at WorldCommunityGrid. BOINC and GridRepublic make this easy. I believe Folding@Home is a separate standalone project, so it's all or nothing. In addition, there are a LOT of protein folding projects. I'd really like to see them work together - or explain why they are different.

      • Re:folding@home (Score:4, Interesting)

        by Enderandrew ( 866215 ) <enderandrew@gmSTRAWail.com minus berry> on Wednesday January 02, 2013 @01:24PM (#42452365) Homepage Journal

        Someone who only knows physics might not be able to help medical research, so scientific resources aren't entirely fungible. But CPU cycles are. So contributing to one particular distributed computing project does carry an opportunity cost of not supporting another.

        Going off on a tangent here: while I echo your sentiment that people should be free to support whatever distributed computing project they want, I'm not sure people realize that SETI has basically already failed. They've covered their entire spectrum numerous times, and have been listening for decades without finding anything. The entire project operates on the assumption that interstellar communication from another intelligent life form would occur over radio waves.

        Requisite XKCD:

        http://xkcd.com/638/ [xkcd.com]

        If someone is contributing cycles to it and not to protein folding, then valuable medical research (which has been proven worthwhile) might be suffering, literally out of ignorance. That is worth pointing out.

        • Going off on a tangent here: while I echo your sentiment that people should be free to support whatever distributed computing project they want, I'm not sure people realize that SETI has basically already failed. They've covered their entire spectrum numerous times, and have been listening for decades without finding anything. The entire project operates on the assumption that interstellar communication from another intelligent life form would occur over radio waves.

          well, the seti@home project may be in disarray, but it's a bit early to say that seti (search for extra-terrestrial intelligence) in general has failed, isn't it? a few decades of silence from potential civilizations that may be thousands, millions or even billions of light years away can hardly be construed as strong evidence.

          • The concept of searching for extra-terrestrial life hasn't failed, but their project of just scanning radio waves basically has. If another civilization used radio for interstellar broadcasts, we'd see steady, regular broadcasts. When we blanket a spectrum from a physical direction and don't see anything, it suggests no one is broadcasting radio waves.

            There may be technologically advanced life forms out there broadcasting by other means, but repeatedly checking radio waves probably won't offer any real benefit.

            • i don't think this is true at all. scanning radio waves seems just as viable a means as any other to me. my point is that we need to wait for far more than a few decades of silence before the statement "seti is a failure" should even enter our thinking. there may be a civilization making radios identical to ours right now, and maybe they have been for as long as we have. but if they're 1,000 light years away (not very far in interstellar terms), decades of silence is the expected result.
        • Re:folding@home (Score:4, Informative)

          by Aqualung812 ( 959532 ) on Wednesday January 02, 2013 @04:27PM (#42454357)

          I'm not sure people realize that SETI has basically already failed. They've covered their entire spectrum numerous times

          The entire spectrum? We've only looked at one frequency range on 20% of the sky:

          SETI@home is basically a 21-cm survey. If we haven't guessed right about the alien broadcasters' choice of hailing frequency, the project is barking up the wrong tree in a forest of thousands of trees. Secondly, there has been little real-time followup of interesting signals. Lack of immediate, dedicated followup means that many scans are needed of each sky position in order to deal with the problem of interstellar scintillation if nothing else.

          With its first, single-feed receiver, SETI@home logged at least three scans of more than 67 percent of the sky observable from Arecibo, amounting to about 20 percent of the entire celestial sphere. Of this area, a large portion was swept six or more times. Werthimer says that a reasonable goal, given issues such as interstellar scintillation, is nine sweeps of most points on Arecibo's visible sky.

          Quoted from http://www.skyandtelescope.com/resources/seti/3304561.html?page=5&c=y [skyandtelescope.com]

          Also, when there is no work to be done, your computer can look at other things.

          I donate my time to several medical studies that will likely find some results that will help all people. I also donate some time to climate research that has less of a chance of helping EVERYONE. I also donate some time to SETI which has a very, very small chance of changing the world.

          It is called hedging your bets. I spend some CPU on things with low risk and low reward, and others on things with high risk and high reward.

        • by hazeii ( 5702 )

          The XKCD reminds me of a story that, back in the 18th century, people wanting to know if the Moon was inhabited used their telescopes to look for signal fires.

      • I believe Folding@Home is a separate standalone project, so it's all or nothing. In addition, there are a LOT of protein folding projects. I'd really like to see them work together - or explain why they are different.

        Not only are there a lot of projects like this, most of them - whatever their intrinsic scientific merit - have very little direct application to fighting disease. Sure, the people directing the projects like to claim that they're medically relevant, but this is largely because the NIH is the major source of funding. It's also really difficult to explain the motivations for such projects to a general audience without resorting to gross oversimplifications. (This isn't a criticism of protein folding specifically; it's all biomedical basic research that has these problems.) My guess is that it will take decades for most of the insights gleaned from these studies to filter down to a clinical setting.

        The project that is arguably more relevant to disease is Rosetta@Home, but that's because of the protein design aspect, not the structure prediction. (In fact, Rosetta doesn't even do "protein folding" in the sense that Folding@Home does - it is a predictive tool, not a simulation engine like Folding@Home.)

        • I believe Folding@Home is a separate standalone project, so it's all or nothing. In addition, there are a LOT of protein folding projects. I'd really like to see them work together - or explain why they are different.

          Not only are there a lot of projects like this, most of them - whatever their intrinsic scientific merit - have very little direct application to fighting disease. Sure, the people directing the projects like to claim that they're medically relevant, but this is largely because the NIH is the major source of funding. It's also really difficult to explain the motivations for such projects to a general audience without resorting to gross oversimplifications. (This isn't a criticism of protein folding specifically; it's all biomedical basic research that has these problems.) My guess is that it will take decades for most of the insights gleaned from these studies to filter down to a clinical setting.

          The project that is arguably more relevant to disease is Rosetta@Home, but that's because of the protein design aspect, not the structure prediction. (In fact, Rosetta doesn't even do "protein folding" in the sense that Folding@Home does - it is a predictive tool, not a simulation engine like Folding@Home.)

          Someone please mod this up; as a researcher in the same field as F@H, I can attest this is all quite correct.

          First, I should preface this by saying I've interacted with several of the F@H folks professionally and they do excellent work. And the NIH is under no pretenses when it funds this work that cures will magically pop out tomorrow - it thinks of it as a seed investment for a decade or two in the future. In terms of tax dollars spent, it's a good investment considering many biomedical labs spend mo

      • Re:folding@home (Score:4, Interesting)

        by hAckz0r ( 989977 ) on Wednesday January 02, 2013 @02:36PM (#42453161)
        Different? Ok, "Go Fight Against Malaria" and "Say No To Schistosoma" are both trying to cure the #1 and #2 parasitic diseases worldwide.

        Malaria is known to be in the US and has several medications to treat it. The CDC will tell you that Schistosoma does not even exist in the US, but I acquired it at the age of 10, and it wasn't until I purchased my own lab equipment around the age of 50 that I finally got an answer to all my bizarre health problems. Statistically I should be dead, several times over. Over 200,000 people die from it every year, and I am clearly one of the lucky ones.

        There is currently only one drug (praziquantel) to "cure" (with 60% efficacy) Schistosoma, and it is quickly losing its effectiveness. There is no other substitute. None. After visiting many pharmacies in my area, it took me three days to locate the drug in the USA and tell the pharmacy where they could get it for me. Yes, it's that bad. Funny thing is, I can buy it off the shelf for my dog, with a prescription, but I couldn't buy it anywhere for human consumption. Clearly we need more options, and SNTS protein folding analysis will help with that goal.

        If you have a few extra CPU cycles to spare, please sign up for one of these two worthy causes!

        More info on Schistosomiasis
        https://en.wikipedia.org/wiki/Schistosomiasis [wikipedia.org]
        https://en.wikipedia.org/wiki/Praziquantel [wikipedia.org]

        • by mspohr ( 589790 )

          Drugs for dogs are just as "pure" as those for humans.
          On the Internet, no one knows you're a dog.

    • I have read that the latest supercomputer can do a gigaflop per watt of computing. I do not think that any home computer will come close to that figure. Now if someone would calculate the total cost of doing this 1,000 TFLOPS, I would think it would be very close to the cost of purchasing and running a supercomputer. 1,000 TFLOPS is around 5% of the fastest supercomputer today, so one would assume that the supercomputer could do a lot more than just this program. Before 2020, 1,000 TFLOPS will b
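      Taking the parent's one-gigaflop-per-watt figure at face value, the running cost is simple arithmetic (the $0.10/kWh electricity price is an assumption for illustration):

        # 1 PFLOPS at an assumed 1 GFLOPS/W draws 1 MW continuously.
        awk 'BEGIN { kw = (1e15 / 1e9) / 1e3;
                     printf "%.0f kW -> $%.0f/hour, about $%.2fM/year\n",
                            kw, kw * 0.10, kw * 0.10 * 8766 / 1e6 }'
        # prints: 1000 kW -> $100/hour, about $0.88M/year

      Volunteer hardware is surely less efficient than that, but the power bill is spread across thousands of donors rather than one budget line, which is a large part of why the model works.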
    • I was thinking, why not Bitcoins? At least there's a probability of getting money for those spare cycles.

  • by Anonymous Coward

    The Bitcoin network's combined processing power is 287.78 petaflops, roughly 300 times larger.

    • by euyis ( 1521257 )
      Isn't Bitcoin's FLOPS number just an estimate, and a grossly inaccurate one, based on the wrong assumptions that there's a single formula for estimating FLOPS from integer performance alone, and that this formula is applicable to all platforms?
      • by Anonymous Coward

        "Isn't Bitcoin's FLOPS number just an estimate, and a grossly inaccurate one based on the wrong assumptions that there's a single formula for estimating FLOPS with just integer performance, and the formula's applicable to all platforms?"

        All true.

        But even if 287 Petaflops is grossly inaccurate, it's *still* gonna be more than 1 Petaflop. I'm almost sure about that...
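        For what it's worth, the widely quoted figure appears to come from multiplying the network hash rate by a fixed FLOPs-per-hash conversion factor (commonly cited as about 12,700), even though SHA-256 mining does no floating-point work at all. A sketch with an assumed early-2013 hash rate:

          # Assumed: ~23 Thash/s network rate, 12,700 "FLOPs" per hash
          awk 'BEGIN { printf "%.0f PFLOPS\n", 23e12 * 12700 / 1e15 }'
          # prints 292 PFLOPS, the same ballpark as the 287.78 quoted above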

  • by gatzke ( 2977 ) on Wednesday January 02, 2013 @01:35PM (#42452481) Homepage Journal

    The reason I found Slashdot back in the '90s was the team performance on the distributed.net tasks. So they do turn those cycles into something useful!

  • I'm sorry, but that's 1024 Teraflops to me.
    You're still off by ~20 hours.

  • So, I just bought a new 4/8-core i7 Mac. I told Folding@home to use 50% of my cores. It persisted in using 100% of my cores, despite what I told it to do, until I uninstalled it. Is there a distributed project whose client will honor my request to donate only half of my resources? Bonus points for one which lets me say which hours of which days it can run. If none of them can, I'll let ElectricSheep provide the eye candy, I really don't care. But I'd rather help out a cause that behaves as I specify on my hardware.
    • Re: (Score:2, Informative)

      by Anonymous Coward

      I think Folding@home uses its own specialized client. I've never used it, so I can't help you there. Most of the other distributed (grid) projects out there use the BOINC [berkeley.edu] client. BOINC lets you schedule processor time for when you want it to run, can stop distributed work once CPU usage reaches a certain (user-definable) level, and offers all sorts of other options. I don't think Folding@home allows the BOINC client to connect, however.

      I think what is happening (in your case) is the folding
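      For reference, the client-side settings described above live in global_prefs_override.xml in the BOINC data directory; the BOINC Manager's local preferences write the same file. A minimal sketch (the values, and the Linux default path /var/lib/boinc-client/, are just examples):

        <global_preferences>
           <max_ncpus_pct>50</max_ncpus_pct>      <!-- use at most half the cores -->
           <cpu_usage_limit>80</cpu_usage_limit>  <!-- throttle each core to ~80% -->
           <start_hour>22</start_hour>            <!-- only crunch between 22:00... -->
           <end_hour>7</end_hour>                 <!-- ...and 07:00 the next day -->
        </global_preferences>

      After saving the file, boinccmd --read_global_prefs_override makes the running client pick it up without a restart.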

    • by tlhIngan ( 30335 )

      So, I just bought a new 4/8-core i7 Mac. I told Folding@home to use 50% of my cores. It persisted in using 100% of my cores, despite what I told it to do, until I uninstalled it. Is there a distributed project whose client will honor my request to donate only half of my resources? Bonus points for one which lets me say which hours of which days it can run. If none of them can, I'll let ElectricSheep provide the eye candy, I really don't care. But I'd rather help out a cause that behaves as I specify on my har

    • by Anonymous Coward

      Whilst perhaps not as efficient, I find that running resource-hungry applications like these inside a virtual machine, configured with only the number of cores you want to give them, is a good way to throttle the resources they can consume.
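      On Linux, an alternative without VM overhead is to confine the process to specific cores with taskset (a sketch; the client binary name and the PID are hypothetical):

        taskset -c 0,1 ./crunching_client   # start it pinned to cores 0 and 1
        taskset -pc 0,1 1234                # or pin an already-running PID 1234

      The client may still try to use 100% of what it sees, but it only ever gets the cores you granted.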

  • by Anonymous Coward

    I would like to contribute my spare CPU cycles, but without causing my CPU to speed up (in this case, with Intel's SpeedStep) from the lowest setting at 800 MHz. Otherwise my laptop gets hot and loud. How can I do that?

    • by Anonymous Coward

      # As root: the ondemand governor will then ignore nice'd load (BOINC clients run niced), so crunching won't raise the clock. On some kernels the tunable lives at /sys/devices/system/cpu/cpufreq/ondemand/ instead.
      for i in /sys/devices/system/cpu/cpu[0-9]*; do echo 1 > "$i/cpufreq/ondemand/ignore_nice_load"; done

  • by Lost Race ( 681080 ) on Wednesday January 02, 2013 @03:51PM (#42453931)

    1 PFLOPS is an arbitrary threshold or milestone. It's not a barrier, because nothing special happens at that point. The speed of light is a barrier. Even the speed of sound is a barrier. 10^n somethings-per-whatever is rarely, if ever, a barrier for any positive integer n.

  • If it were at position 24 in the Top500, it would likely be 3x as power-efficient as all these individual computers. These sorts of initiatives are impressively inefficient (but very effective), which is why the 'cloud' model won the battle over the 'grid' model. It only works because computing power is donated, not paid for. On the other hand, the equivalent supercomputer would likely cost 3-8x the aggregate (relative to the sum of the costs of all these computers), because it is custom-made.

  • I added two nVidia GTX 260 cards and one nVidia GT 240 card to Einstein@Home, and voila, this morning's stats show:

    Page last updated 3 Jan 2013 8:50:02 UTC
    Floating point speed (from recent average credit of all users): 1000.2 TFLOPS

    For a BOINC novice it can be quite daunting to figure out how to make it use all GPUs and not accept any CPU-only work units. Editing XML files in some data directory isn't exactly user-friendly.
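    For anyone else fighting the same XML: the client option that schedules work on every GPU (rather than only the "best" one detected) goes in cc_config.xml in the BOINC data directory. A sketch, with the Linux default path assumed:

      <cc_config>
        <options>
          <use_all_gpus>1</use_all_gpus>   <!-- run jobs on all detected GPUs -->
        </options>
      </cc_config>

    Save it as e.g. /var/lib/boinc-client/cc_config.xml and run boinccmd --read_cc_config to reload it. Declining CPU-only work units, though, is a per-project choice made on the project's web preferences pages (see the reply below).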

    • by hazeii ( 5702 )

      2x GTX 260 is in theory about 1.5 TFLOPS, so that's welcome fuel on the fire :)

      You can also configure common settings using the BOINC preferences [uwm.edu] and Einstein@home preferences [uwm.edu] pages. It seems common to use "location" to set up different preferences for different hosts; e.g. I use the "home" setting for machines which are only good for GPU work, "work" for CPU-only systems, and the "default" setting for both CPU/GPU (plus the "school" setting for experimentation).

      Also, AIUI the latest client will use all your GPUs as lo
