Science

The Truth About SETI@Home

zealot writes "According to this article, the SETI@Home project is not using the most optimized clients available "just to brake the unit turnaround" so that they can continue to receive various contributions. The authors are also demanding access to the client source (and asking to GPL it if possible), so the greatest performance may be obtained." It's an interesting point: they didn't figure on getting the response they did, and will sooner rather than later run out of blocks to be crunched. Yep: what happens if you hold a war and /everyone/ comes? Or a distributed program, I guess.
  • by howardjp ( 5458 )
    I would rather see that they not GPL the client. It would be far too valuable under a BSD license or an X Consortium license. If the software were placed under the GPL the code becomes useless for general use.
  • I would think it would be good for project morale to say, "Look how good we're doing! We can process the data blocks faster than we receive them!" Seems to me that the purpose of this project is to get something done, to make some progress...Not to keep everybody busy for as long as possible.
  • by Psiren ( 6145 )
    Anyone else find that final sentence difficult to understand?

    Da grammmur on dis syte is atroshus. ;)
  • All of the evidence points to exactly the problem that the author describes. Seti@home needs to write a second client to handle the division of work and "graduate" some of the reliable, high performing volunteers into producing the units of work for others.

    Their fundamental problem is that they only distributed the analysis portion. Now that the overall load has become unbalanced, they need to distribute one more piece of the workload.
  • Opening up the source code to the world will increase the block counts in two ways:

    1) Optimisation

    2) Hacked clients returning blocks unchecked, as previously seen with distributed.net

    By using a hacked client and _lots_ of different e-mail addresses to report completed blocks, any hacked-client antics could go undetected....
  • I don't see your point. I use lots of GPL'ed software for what I consider general use. How does anybody lose the use of the software by it being put under the GPL license?
  • by razorwire ( 35010 ) on Friday July 30, 1999 @04:30AM (#1774821)
    Am I missing something? Where is the proof behind these assertions? It seems that they're extrapolating S@H's reluctance to hand over the source code on demand into a conspiracy theory.

    I'm not saying that this is unbelievable, just that it would be nice to have some evidence to back these claims up... or else state them as conjecture, not fact.
    --

  • by Zorgoth ( 68241 ) on Friday July 30, 1999 @04:31AM (#1774822)
    Let's not get too wound up because SETI@HOME is getting overwhelmed. It means that the idea is successful. Why do people participate in these kinds of distributed processing projects? With the exception of those who want to show off their machines, most of us do it because we feel that instead of our computer downloading warez or porn all night, it can do something useful.

    If SETI@HOME is having some troubles, helpful advice, not scathing criticism, is what is needed.

    I would much rather see more of this kind of thing, even if it is occasionally bungled, than see other groups scared off by how hostile the online community is.

    Maybe I'll find ET...
  • by Anonymous Coward
    The article in 3DNOW is way off base... If you check the seti@home home page (http://www.setiathome.ssl.berkeley.edu) they explain why the code is not released to the public. Additionally, the code improvements they discuss are limited to Intel chips, and I, for one, am not running the bulk of my setiathome clients on Intel chips (too slow...), although I realize that a good portion of the clients being run are Windoze clients.
  • I'm sure everyone is already aware of this, but there are 2 clients for Windows - one GUI based, one console based. The console-based one goes about 2 or 3 times faster than the GUI-based one, so anyone interested in drastically increasing their block processing speed can do so immediately by switching. Also, I've seen some really odd stuff over the last month in the seti@home stats... first MS is substantially behind Team Slashdot, then they suddenly surge ahead, then their numbers *decrease*, then they surge ahead again, then they disappear from the stats page entirely [!], then they return with low numbers, then (today) they're 20K blocks ahead of Team Slashdot - what in the world is going on here?
  • by Anonymous Coward
    I've said it before and I'll say it again: Say anything about the GPL or Linux other than "GPL r00lz" or "Linux is k-RAD" and you are labeled a troll and moderated down. Slashdot moderators make me sick.
  • We're distributing encryption cracking and alien detection. What else can we distribute? We need something that can be broken down into small parts, worked on in parallel, and isn't overly time-sensitive.

    We could get some writers, artists, and 3d animators together and make 'the great Net movie', and render it on the largest rendering farm ever.

    Maybe some sort of distributed neural network? Dunno.

    Ideas?
  • by noeld ( 43600 ) on Friday July 30, 1999 @04:39AM (#1774827) Homepage
    There will be a Yahoo! Chat Event devoted to SETI@home from 5 to 7 PM (PST) this Friday 7/30/1999. Go here [yahoo.com] for details.

    From the Yahoo page:

    Dr. David P. Anderson, Project Director and Dr. Dan Werthimer, Chief Scientist, of the SETI@Home project discuss their perspectives regarding this exciting new approach to computing.

    Looks like anyone interested can find out the real scoop from the horse's mouth.

    The article seemed to be flame bait to me. They never quoted Seti@home as saying anything beyond declining to detail the performance-critical routine in the seti@home software. The way I read it, seti@home simply did not want to give up their source. The article said:

    SETI is not interested in receiving a faster client software

    Is this what they said, or more likely an interpretation of what they said?

    Let's check the facts before slamming Seti@Home.

    Check out the Lance Armstrong Foundation [laf.org]

  • by qmrf ( 52837 )
    I beg your pardon? This sounds like an incitement to holy war...And a fairly poorly written one at that.

    It would be far too valuable under a BSD...

    Huh? How can something be *too* valuable? That's silly; the more value we can pump into things like this, the better. (Assuming they don't have value in and of themselves, which is a rather faulty assumption.) Plus, saying that it would be "too valuable" seems to undermine your statement that it shouldn't be GPL. If it shouldn't be GPL and is too valuable for BSD, then what *should* it be? You seem to agree with the idea that it should be free [speech], and are only quibbling over the license. Which would you pick?

    under the GPL the code becomes useless

    Absolutely baseless FUD. It may be true (though I feel very strongly that it's not), but you need to back your statement up in order to give it any merit whatsoever. If you're going to troll for license fanatics, at least do it convincingly.

  • no, trouble understanding gramer. problems speling neglagebel. pronouns (personal) unessary... geeks need not to write.

    damn it people, isn't the philosophy behind coding very similar to sentence diagramming? shouldn't we be the people who speak correctly? sorry for ranting, but it irritates me to see a group of people so incapable of expressing themselves. not that i'm trashing hemos here... it just comes to a head with him because he posts.

    if we're serious about this "revolution" thing, we need to give people a reason to take us seriously. communication is essential... yes, even with the outside world.

    i don't really know where this came from, and i beg the moderators to regulate the hell out of it for being off topic.

  • It sounds like academia is just as fraught with Dilbertisms and idiotic management as industry. With the type of response SETI@Home has received from the world, it seems that their problems could be solved with a simple call for help to the community. But it appears that they have not learned anything from the community-development success that open source efforts have had, and are ego-inflated by their monopoly on SETI data and insist on being control freaks. The sad thing is that their attitude will eventually affect their ability to get funding, as skeptics can easily point out that when they were donated millions (anyone care to really estimate?) in free computational power, they wasted most of it.

    I'm by no means a /leading/ contributor to the effort, but I am in the top 0.3% overall, and in the top 30 in "team slashdot", having spent a decent amount of time getting the software running autonomously on a handful of SPARCs. But I think now I'm going to look more closely at distributed.net, despite the fact that SETI engages the imagination (and fantasy) much more. Anything we can do to improve this process?
  • I chose AMD because I fancied not tossing my motherboard and case. I run on a 450 MHz AMD K6-2, with the 3DNow! extensions (of course). I can tell you from experience that the FFT routine that Seti@Home uses SUCKS on an AMD. It takes forever to do a single block of data, and I calculated it would eventually take something like 70 hours to complete the block. I was disappointed. Granted, floating-point on AMDs isn't great, but it isn't THAT bad; part of AMD's purpose in including 3DNow! was to handle floating-point better. The fact that they are unwilling to optimize the code just plain makes me angry. It puts those with alternative processors at a disadvantage, when the users themselves are willing and able to fix it. That is the reason I love RC5: it runs well no matter what processor you use, it's optimized for almost every one, and if you want to optimize it further, distributed.net allows you to. Just because Seti@Home claims they will run out of blocks of data does not matter. If they do, perhaps other programs will see the need to contribute.. perhaps the ACTUAL SETI program.... (this is not actually run by SETI, but by an affiliated group). We could all get a lot more done if we simply all did our best to get what 'needs' to be done done, and worry about the next source for our work when we get there (a la RC5).
  • by Anonymous Coward
    Hmmm...well, if the government wants to monitor all internet traffic, perhaps they could distribute that. NSA@home :)
  • I got email from Linus himself recommending them!

    (Joke! :))

  • Maybe GPFs taking down MS's computers? Hmm, I wonder what the cause of that could be...

  • You're a little off-base here.

    1) Consider the community. Almost everyone here loves the GPL. If you're going to bash it, you need to provide evidence... not just say "GPL dr00lz" (a paraphrasing of the comment at the top of this thread.)

    2) The comment made no sense. It should have gotten moderated down even if you replaced the negatives in it with positives. It doesn't say anything worthwhile. Instead of saying "foo should not be foobar," say "foo is not foobar because foobar promotes pig-raping and nuclear holocaust."

    But other than that, I think that was a very intelligent and worthwhile bashing of the /. moderators. Thanks for making my day 6 inches brighter.

  • They're worried about running out of blocks to process? Isn't that the idea?

    The problem is that the data they want to process doesn't come in blocks, it comes in one big chunk (or several), and they can't break it up fast enough to keep up with the blocks being finished.
    ---
  • downloading warez and porn takes up almost no CPU time, and the SETI@home chunks don't take up that much bandwidth ;>

    Doesn't seem mutually exclusive to me :P
  • 1.5 hours per work unit!!!!!
  • The problem is that the work units are created from the Arecibo Radio Observatory, and not many people have their own radio observatory...
    /El Niño
  • I've been receiving ancient Seti@home units for weeks. They keep sending out the same units over and over, from January 1999 through March 1999. When you work on them, you notice that it just keeps looping: you hit a February unit, then a March unit, then a January unit, etc.

    Maybe they've set it up to analyse the units in that strange order, but I doubt it seeing as they have enough computing power to encode several thousand years of MP3's per day... using BLADEENC! :)

    What the Seti@home project needs is a way to better manage the data that they're being sent. It would also be nice if they'd optimize their client so we could run it for the same amount of time, but have it take up less CPU. It really is intrusive when it's not running in screensaver-only mode under windows, and without -nice 19 in Unix.
  • They can't distribute the splitting project because the data blocks are HUGE. I'm guessing that the data comes in large chunks, maybe just one large chunk, even. If they could distribute that, they wouldn't need to, because the blocks would be small enough to send on their own.
    ---

  • > They make their dedicated volunteers use massive amounts of electricity to run their machines at full throttle while this goes on - it is not unjustified to say that SETI@Home may make a measurable impact on the CO2 buildup in the atmosphere that way. Charming.
    > What should you be thinking of this?
    > Defrauded? No. You haven't been promised anything.
    > But it should be hurting your idealism.

    You sound like your idealism is hurt, mate.

    What's the big deal? You want to get your team into the Top Ten of this competition thing or something?

    D.

  • by Kook9 ( 69585 )
    >I've said it before and I'll say it again: Say
    >anything about the GPL or Linux other than "GPL
    >r00lz" or "Linux is k-RAD" and you are labeled a
    >troll and moderated down. Slashdot
    >moderators make me sick.

    Trashing the GPL or Linux on a forum like this is the definition of a troll. There's a big difference between saying "GPL is not my preferred license" and "GPL is worthless."

    Kook9 out.
  • There is.
    Go to Display Properties, Screen Saver, hit the Settings button, check the "go to blank" box, and put 0 minutes in for the time. It will show no pretty pictures, and give your speed a shot in the arm.
  • So, they use a check. Sort of like the hash or MD5 suggested earlier in the distributed.net thread. Since they aren't worried about performance anyway, a little slowdown won't hurt. (A toy sketch of the idea follows below.)
    ---
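    Concretely, the check could be as simple as comparing digests of redundant returns. A toy Python sketch (the unit id, payloads, and field layout are all made up, not the real protocol):

        import hashlib

        def result_digest(unit_id, payload):
            # MD5 over the unit id plus the returned result payload
            return hashlib.md5(unit_id.encode() + payload).hexdigest()

        # two redundant returns for the same (hypothetical) work unit
        a = result_digest("wu-0042", b"...result bytes from client A...")
        b = result_digest("wu-0042", b"...result bytes from client B...")
        print("accept" if a == b else "re-issue to a third client")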
  • I would like to know how to get the console-based one rather than the GUI, cuz the GUI sucks on my comp! (See my other post further down.)
  • High Energy Physics experiments use vast quantities of computer time to analyze results from Terabytes (now; soon Petabytes) of data, and even vaster quantities in Monte Carlo programs to generate events to understand detector biases and data backgrounds. In both cases, the data is discrete events. Fermilab was one of the pioneers of production farm computing - a loosely coupled multiprocessor system. The canonical farm consists of a horde of low-cost systems (workstations or PCs running Linux) that are the worker nodes, plus a small number of systems which act as overall managers of the process and which parcel out the data to worker nodes. In the past the I/O nodes read the input tapes, gave a single event to each worker node, collected the results and wrote the output tape(s). Modern implementations have the worker nodes acting on a file containing one to a few thousand events.

    I've wondered if the Monte Carlo processing (in particular) would be amenable to SETI@home-like distributed processing. One possible problem is that the analysis and Monte Carlo generation programs are often quite large and have largish memory requirements.
  • Great idea!! I would certainly like to donate cycles to something more inspiring than cracking encryption keys. The Great Net Movie (TGNM) would get lots of good publicity, and with the kind of computing power available through this donation process, unparalleled, er paralleled, images could be had.

    I *do* think that something like this should be done under a larger organizing structure such as distributed.net. I only wish that SETI@Home had worked with the distributed.net people so that everyone would benefit from the improvements to the infrastructure and organization.
  • Never attribute to malice that which can be adequately explained by stupidity.

    Look, this code is unoptimized. We all knew that. No news there. Now, they say they're not currently interested in optimizing it, probably for a couple of reasons. I mean they're having problems keeping up with the current load as is. What's wrong with them wanting to sort that out before they add in the problem of different code bases for various chipsets?

    Look, these guys aren't experts in distributed technologies like the guys over at d.net.. Hell, no one's as expert as the guys over at d.net. You can't run d.net for as long as it's been going on and not be experts.

    So the seti@home guys can't split the bits up fast enough to avoid duplicates? Well hell, there's hundreds of corporate sponsors falling all over each other to donate stuff, right? Use that!

    Frankly, I thought seti@home should've teamed up with d.net from the beginning, instead of trying to set up an entirely new system. Why not use an existing infrastructure? Quite a hell of a lot of people (myself included) killed the rc5 client in order to run the seti@home client. I don't want to go back to rc5, but I don't want to duplicate work either.

    Seti@home should just hand the codebase over to the d.net guys and see what they can make of it. If the setup is anything like d.net's, then they surely can help with getting more bits split, as it were. So you got too many people to help? Well, that's what you get for not doing your homework, no cookie for you. Now admit defeat, and get some FREE help.

  • I have trouble with any whiney article that has crap like this in it:

    "You've bit your teeth and let your box run all night because it took so long to get a unit out, right? But it is for a good cause, so you took the noise, the extra warmth in a summer night and the higher electricity bill. Well, surprise, if you had the proper client software you could have switched the damn thing off at night!"


    Oh damn, the Seti@HOME people are "making" me run my computer all night (at "full throttle", no less! ;-), depriving me of my daily shutdown and boot-up process that every Unix admin just can't wait for!


    My opinion of his opinion: he should "get over it". The only thing driving that dude is competition with others, not the altruistic donation of *spare* computing power towards (an arguably) good cause.


    If I ever write an article like that, remind me to switch to decaffeinated Coke.


    -adam a

  • by db ( 3944 )
    *removes Seti@Home off of the three linux boxes, the one OpenBSD box, and the UltraSPARC he had it running on, installs RC5*

    --
    Dave Brooks (db@amorphous.org)
    http://www.amorphous.org
  • But when my computer is MY work, my hobby, I don't like it when a program doesn't use it and makes it look like crap....
  • I've got a Sun Enterprise 250 that will just about match that.

    --
    Dave Brooks (db@amorphous.org)
    http://www.amorphous.org
  • Thanks for that tip.

    But, I could only find a cli-client for NT,
    not for win'95. Or is there one?

    --Sander.
  • Hey! Get all your freakin' monster machines out of the project. It's not a competition.
    I have a crappy little Cyrix P200 running in screen saver mode a few hours a day. I'm still working on my first block - which all of you power hogs have undoubtedly reworked 3 or 4 times by now.
    Give us little guys who are actually interested in the SETI project a chance to really participate.

  • Is there anything out there in the Open Source realm that does this kind of distributed processing stuff? I know that each project is different and needs customized pieces, but a general program would work. Something like the following:
    Server - program to keep track of all stats, send blocks, receive answers, do redundancy checks, with a blank area for your own backend stuff, and maybe some HTML reports (cuz everyone wants to see this on the web :).
    Client - basic framework for how to receive problems and send answers, with a blank area in the code where people can add their own stuff as a module. (A toy sketch of this follows below.)
    Licensing might be a problem, but I'd suggest that you leave the individual code as a separate program; that way the customized part wouldn't need to be under the GPL if the general part was GPL'd and a company wanted to use it. Maybe the LGPL?
    I don't know enough about all this stuff, but everybody seems to like playing with the distributed projects, so it would make sense if there was a free version. I know there are arguments against opening the code (easier to fake answers), but couldn't redundant checks help with that?
    -cpd
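    Roughly what the general server half might look like, as a toy Python sketch (all names are hypothetical, and crunch() is the "blank area" a real project would replace):

        import hashlib
        from queue import Queue

        def crunch(block):
            # the "blank area": each project plugs its own analysis in here
            return hashlib.md5(block).hexdigest()

        class Coordinator:
            # toy server: hands out blocks, collects answers, flags disagreements
            def __init__(self, blocks, redundancy=2):
                self.todo = Queue()
                for b in blocks:
                    for _ in range(redundancy):  # every block goes out twice
                        self.todo.put(b)
                self.answers = {}                # block -> answers reported so far

            def get_block(self):
                return None if self.todo.empty() else self.todo.get()

            def report(self, block, answer):
                self.answers.setdefault(block, []).append(answer)

            def suspects(self):
                # blocks whose redundant answers disagree need a re-check
                return [b for b, a in self.answers.items() if len(set(a)) > 1]

        coord = Coordinator([b"block-1", b"block-2"])
        while True:                              # stand-in for many remote clients
            block = coord.get_block()
            if block is None:
                break
            coord.report(block, crunch(block))
        print("blocks needing a third opinion:", coord.suspects())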
  • I believe you are right about them needing to increase their block output and optimize the client. However, conscripting the clients to create work units would be very inefficient, since they receive 30-gigabyte tape backups from Arecibo every day (I remember this bit of info from their website somewhere). It would make more sense to do it on-site. However, I can't understand why writing out 30 gigs of data in one day is so hard! Can't you restore a 30-gig backup in a couple of hours?

    I think their problem lies with their software designers, who, in my opinion, aren't all that clever. Since the SETI project is a volunteer effort, and they don't want to release the source code, I think their best solution is to conscript some volunteer open-source programmers to help with it. God knows they need help. :) They would still be able to keep the source code private, and end up with a better, more efficient client, and could probably get input on their server application as well.

    I'm just afraid that people may lose interest in the project once they realize that they've been checking the same work units for weeks, and that they've been wasting CPU time checking them as well. :)
  • is anyone else getting tired of /hemos'/ way of emphasizing /stuff/?
  • I run 8 of them under Linux on an AMD 300 with 178 MB RAM at -nice 20, and it takes about a week to process all the units. This is good if you have a dial-up line: you only have to connect on Saturday to upload the results and download some more work units.
  • Make that, oh, around 20 million entire CDs per day. One CD takes me 15 minutes to rip and encode. They've got 290 kiloclients or so. Work it out. My estimate assumes they aren't working for the entire day. Anyone know how many unique CDs there are total in the world?
    ---
  • Am I the only one sick of seeing this argument used against SETI@Home? This is totally STUPID. My computers would run the same hours with or without SETI@Home on them. I also don't think that I'm pulling *that* much electricity to make a large impact on the atmosphere - and my area uses hydroelectric power anywayz.

    I think it's funny though when people hear this argument and then use it to advocate d.net.
  • Err, distributed.net is not open. The fact that the distributed.net client was hacked demonstrates, once again, that security through obscurity does not work. In other words, trying to make it more difficult to hack the client by only publishing the object code rather than the source code as well is a much worse solution than figuring out some way to guarantee that bad blocks cannot be submitted, regardless of the functionality of the client.
  • BTW, unless this is normal, it seems the client may have already been hacked:

    62) polle 3975 1193 hr 28 min 43.0 sec 0 hr 18 min 00.9 sec

    straight off the top 100 lists just now..

    Notice, 18 MINUTES to complete a work unit.. Umm.. no.

  • I always thought it would be cool if someone was doing an animated movie like "a bug's life" on a shoestring budget using the net as their render farm. That would be way cooler than encryption cracking or SETI. Of course, the rendering is the easy part, coming up with a decent script, doing all the 3d modelling (coming up with all the frames that need to be rendered), and then there's the audio...It would be tough, but man it would sure be cool...
  • I like to think of distributed.net in the same context as the NASA Deep Space 1 mission that just flew by the asteroid Braille.

    -DN (distributed.net) has had its share of glitches and trouble, but now it represents a technology that we can apply to other problems.
    -DS1 has had a few glitches of its own. The ion engine wouldn't start properly, and then it mysteriously started working after a while. Of course, we figured out what was wrong, and that seems to be a normal characteristic of brand-new ion engines. Overall, the DS1 ion engine has operated for 1800 hours, vindicating the original concept.

    -DN tackles a current political issue, but the problem is technically boring. Cryptology is hot in the news today, but we all know the outcome of the DN problem: we'll find the key. But along the way we will learn a great deal about how to build vastly distributed programs running on donated computer time.
    -DS1 also tackled a current political issue, but it was essentially boring. DS1 flew by the asteroid Braille. Asteroids are in the news, but DS1 was so far away that the asteroid occupied only 4 pixels in the CCD. It also appears that there was a problem with the tracking system, so there might not be any better photos. Boring! But we're learning a tremendous amount about how to build spacecraft that can automatically perform their own navigation.


    SETI at Home's biggest mistake is that they re-invented the wheel and made all the mistakes they would have avoided if they'd had some help. They have the right problem. We're all interested in finding alien life. But they could have learned something from the distributed.net people.

  • Obviously, to you, it is a competition, since otherwise you would not want "a chance to really participate". You would just want the project to be completed as fast as possible.
    ---
  • Come on guys, now that we know Seti@home is just a bunch of no-good, no-good-doers, let's all stop wasting CPU time on aliens that we aren't going to find (come on, they are out there, but too far) and break some encryption! That is something we can use right now!

    Don't get mad about what I called seti@home. I run both (on different computers), although I am thinking of changing both to RC5.
  • by davie ( 191 ) on Friday July 30, 1999 @05:17AM (#1774876) Journal

    Look at the awful job most of the search engines are doing keeping up with the web. Why not a distributed spidering project? Hand out a base of URLs to spider, then let remotes spider from there. As the ever pessimistic Rob has already pointed out to me, the load on the host end would be huge, but I still think if it were done right, the whole net could be cataloged in a few months, then kept updated.

    It seems like a distributed spidering project with a search engine front end like Google on different hardware/net could make a search engine useful again. There are probably a few interesting things that sites could do to streamline the workload--I haven't thought of them, but my spidie senses are tingling.

  • by Jburkholder ( 28127 ) on Friday July 30, 1999 @05:17AM (#1774877)
    Well, good - it's not just me that had this reaction. I mean really, this was all speculation. It could very well be that seti is concentrating more on how to cope with the 10x volunteers they got (if they expected 100,000 and got a million) than on optimizing the clients. But they didn't say so, directly or otherwise. Where's the proof?

    Besides, what real purpose does it serve to spend any time doing 3dnow optimization of the seti clients when there are more volunteers than they can handle now anyway? I didn't get the point of this article (assuming the premise is true) as to how this would help anyone but the 3dnow bunch. Sure, they have a worthy cause, I would love to have 3dnow in more applications for my AMD, but I don't get how this does that, or how it helps seti.

    To be honest, it makes 3dnow.org look a lot less credible than it attempts to make seti look (IMHO).
  • > I run on a 450 MHz AMD-K6-2,
    > with the 3d-now! extensions (of course). I can
    > tell you from experience that the FFT that
    > Seti@Home uses routine on an AMD SUCKS. It takes
    > forever to do a single block of data, and I
    > calculated it would eventually take something
    > like 70 hours to complete the block. I was
    > disappointed.

    Wow, 70 hours? I'm running SETI@Home on Linux with an AMD K6-2 450, and according to my handy tk-SETI@Home [cuug.ab.ca] client, I am averaging 14 hours 55 minutes. Granted, that's not great, but it's not 70 hours, either. 'Course, it may have something to do with my Super7 100 MHz bus and 128 MB RAM, though I am pretty much in the dark as to how the client uses resources... Anyway, I just thought I'd throw in my .02 regarding performance.

    -Chris
  • In light of how fast this Internet thing makes technological innovation, maybe we're looking at a paradigm shift coming.

    Perhaps, instead of trying to distribute the cracking of some cypher, or finding ET, people need to sign up for "distributed computing" in general. What I mean is, people sign up to use some client software to do any project that needs more than one or two computers to solve. Even if a project only needs one CPU-year, if they provide the client-side software to do the number-crunching, it gets downloaded and executed.

    Of course, the software would need to be small, and there would probably have to be some semi-centralized agent for everyone to get it from, and there would have to be a "validation" process to make sure someone wasn't just trying to find all the combinations of the letters in the alphabet that make cool names for their EverQuest character.

    This promises to get big. I mean, they had a measurable backlog on their calculations at SETI, but it's dwindling quickly. This shouldn't make us upset; we should be glad that we have proven this can work. Not only that, but we've proven there are more people than they expected would be willing to participate.

    What if they figured out a way to distribute the calcs on Pi or the solution to the human genome? We could probably find a cure for cancer in a month if they could figure out how to distribute the work. If it can be calculated, we could probably cut the calculation time down to virtually nothing. Not only that, but we've proven we can by laying the smack down on this whole ET-search.

    So, what we need is an agency (/.@home?) to organize and distribute the plethora of projects out there that have one year project times or greater. Of course, one of the distributed projects could be the assignment and distribution of the projects. Maybe give that job to people who have proven dedicated to "the cause," a time-served promotion schedule, of sorts, like in a business.

    Or something.

    -Ristoril

  • Check out COSM, which is supposed to do exactly the sort of thing you describe.
  • One of the major problems they've had from the beginning is that Arecibo doesn't have any sort of reliably fast connection. They actually snail-mail the data on DAT tape from the observatory to Berkeley.
  • by coreman ( 8656 ) on Friday July 30, 1999 @05:24AM (#1774883) Homepage
    Seems to me the simple answer is to process all blocks twice and compare the results. This would then solve/detect the problem where a hacked client was sending back "untrue" results. Anything that comes back with "different" results gets sent to a third "validated" respondent, and the differing one of the initial pair gets demoted from validated. This also solves the workload bottleneck for the time being. (A toy sketch of the bookkeeping follows below.)
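    A toy Python sketch of that bookkeeping (the client names and answers are hypothetical):

        # every block goes to two clients; disagreements go to a third,
        # "validated" respondent, and whichever of the pair differed from
        # the referee loses its validated status
        validated = {"alice": True, "bob": True, "carol": True}

        def settle(first, second, referee):
            # first/second/referee are (client, answer) pairs for one block
            (c1, a1), (c2, a2) = first, second
            if a1 == a2:
                return a1                        # agreement: accept the answer
            (c3, a3) = referee                   # third, validated respondent
            loser = c1 if a1 != a3 else c2
            validated[loser] = False             # demote whoever disagreed
            return a3

        answer = settle(("alice", "spike"), ("bob", "noise"), ("carol", "spike"))
        print(answer, validated)                 # 'spike'; bob gets demoted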
  • That is a hot idea. I would like to participate in BOTH efforts, but I'm not going to run two screensavers or whatever. I think a client that would take an arbitrary task and work on it for a while, then move to another, would be really neat. One time you'd be finding aliens, the other you'd be factoring.

    I actually have a lot of sympathy for Seti@Home, and I feel that people should refrain from bashing them. Designing large distributed systems is HARD, and that it works reasonably well at all is testament to their efforts. Not to mention that these people are interested in the science, NOT the computer aspect of this project. Aside from the distributed.net effort, there's not a whole lot of experience out there in these internet-wide computational systems.
  • I think he's using the Windows client, as he hinted above.
  • by ethereal ( 13958 ) on Friday July 30, 1999 @05:25AM (#1774886) Journal

    Well, if they've got so many more volunteers than are strictly necessary, why not hand out blocks multiple times and check that all the clients give the same results? If you detect any differences, run that block on a trusted machine at SETI@Home HQ and ban the clients that returned the bogus blocks.

    Granted, you aren't going to be able to detect hacked clients returning unchecked blocks very easily this way, because you won't have too many positive blocks to compare the results with. But you could seed the raw data with some known positive blocks to catch clients that are returning incorrect (unchecked) negative results. And if a hacked client is sophisticated enough to return a positive result for a positive block and a negative result for all other blocks, isn't that the same behavior as an unmodified client?

    Yes, this extra redundancy would slow the project down, but it sounds like there is more than enough computing power available. If SETI@Home explained that redundant processing was necessary to ensure valid results, I'm sure most users wouldn't have a problem with it. If you're interested in SETI@Home in the first place, you already know that good science and/or good data analysis isn't done overnight and requires a lot of procedural safeguards to get the right results. (A toy sketch of the seeding idea follows below.)
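    A toy Python sketch of the seeding idea (the block ids and reports are made up):

        # mix known-positive blocks into the stream and ban any client
        # that returns a negative for one of them
        known_positive = set(["seed-17", "seed-23"])  # blocks with planted signals
        banned = set()

        def check_report(client, block_id, found_signal):
            if block_id in known_positive and not found_signal:
                banned.add(client)           # an honest client can't miss these

        check_report("mallory", "seed-17", False)
        check_report("alice", "seed-23", True)
        print(banned)                        # {'mallory'}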

  • by Anonymous Coward
    For other distributed-calculation groups, check out www.mersenne.com [mersenne.com]. I especially recommend GIMPS [mersenne.org]. If the prospect of finding the largest prime in the world gives you a woody -- a two-million-digit prime, the largest in the world, was very recently discovered, and netted its discoverer $50,000 for being the first with one-million-plus digits -- well, then, check it out.

  • The 3DNow! advocacy guy's got it all wrong. It is a conspiracy from the NSA, FBI and CIA to make people stop donating their spare CPU time to Distributed.net.

    - Da Lawn
  • <JOKE>

    (there are only 4 "letters" in DNA)

    <Counts./>
    <Counts again./>

    Wait a second!

    <Counts one more time./>

    it sure seems to me that there are only three letters in DNA!

    </JOKE>

  • Why is this article called "The Truth About SETI@HOME"? It's no more the truth than any of the usual Slashdot postings.

    (Same for this flood of articles from osOpinion. Anybody, knowledgeable or not, can get his stuff published there, and all of Slashdot, LinuxToday and LWN will start a big fuss about it. It's just a waste of time.)
  • The URL for the command-line-based NT version is:

    http://setiathome.ssl.berkeley.edu/software/setiathome-1.3.alpha-winnt-cmdline.exe

    They don't seem to have a win9X version, but I'd speculate that the odds are pretty good that the nt version will run on 9X.
  • Solving the "hacked client" problem when giving away client source would be easy: the server would need to keep track of who processed what data, send some packets (maybe 5% or 2%) out to different people, and compare the results. I am sure the extra performance of tuned clients would outweigh the checking overhead. Once someone is found cheating, they are out. Comparing would work for the seti test. With the RC5 contest or finding primes it would not be that easy: the result of the 2% duplicated packets would in most cases be "not a hit", and it would be unlikely to catch someone who cheats that way. One way to solve that problem would be to demand that clients send back a hash value of some intermediate results, which makes sure the client has completed most of the processing steps, and that value can be double-checked. (A toy sketch follows below.)

    mond.
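    A toy Python sketch of that intermediate-hash idea (the "processing steps" here are fake stand-ins for the real analysis):

        import hashlib

        def crunch_with_proof(block, steps=4):
            # fold a digest of every intermediate state into a running "proof"
            # that the work was actually carried out step by step
            state, proof = block, hashlib.md5()
            for i in range(steps):
                state = hashlib.md5(state + bytes([i])).digest()  # fake work step
                proof.update(state)
            return state, proof.hexdigest()

        # the server spot-checks a small sample by redoing the block itself
        result, proof = crunch_with_proof(b"raw data for wu-0042")
        _, expected = crunch_with_proof(b"raw data for wu-0042")
        print("proof ok" if proof == expected else "caught a cheater")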
  • Duh! Everyone knows it's 42.

    :-)

  • I think I agree with you on this, ethereal. I thought about that before, and it makes sense that having two people check one unit, then having a third or fourth person check the unit if they don't match is a good idea. Their server just has to be a lot smarter, and I think their main problem is that their software designers are either overburdened, or don't know what they're doing. :)

    The combination of redundancy like this, as well as faster, optimized clients (which already have been written) are important steps to making the SETI@home project successful. They seem to lack any sort of ability to deviate from their original plan to scan the sky in 2 years, and check all the units in 6 years.

    Alas.
  • Have you actually done a block of data on your AMD? Are you saying it took you 70 hours? If so, I don't know what kind of overclocked processor someone sold you, because I am running a 350 AMD K6-2 3DNow like yours, just 100 MHz slower, and it takes me roughly 26 hours per block. If you are getting 70s, I'm wondering what else you have running at the same time.
  • A quote taken from the Seti@home page:
    http://setiathome.ssl.berkeley.edu/faq.html#q1.9

    "We decided not to make source code available for security reasons and for science reasons as well. We have to have everyone do the exact same analysis, or we can't have any control over our research and be confident in our results.We were also worried that there may be a few people that want to deliberately try to screw up our database and server."

    As I read the above, the reason they are not distributing source is to keep data taint to a minimum. For a scientific analysis you would want as few unknown variables as possible. Keeping a contiguous software base from start to end of the project (at least the core computation part) assures that the data is processed via the exact same instructions. By allowing anyone to construct a more efficient client, you're (potentially) adding small changes into the math. The amount of calculation over that minor 320~k block of data is staggering. Similar to the "butterfly" example in chaos theory, a minor warble in a single math calculation may have no noticeable effect a dozen times.. but a million? (A tiny illustration follows at the end of this comment.)

    Yes, I may be blowing this out of proportion. Yes, I know that there does not need to be a contiguous analysis scheme from start to end in the case of seti@home. Yes, I feel that they should consider (as in the 3dnow article) NDAs for faster clients. But I feel that the above quoted statement says things clearly enough for me.

    Yes. They should not trust a valuable data block to a single computer. I'm assuming that there is a very robust system of checks and balances installed to check and re-check, and re-check every number crunched by passing the same block to multiple machines.

    As for the second half of that statement, it's to prevent people who -don't- care about the project from getting their jollies by raising their stats. Look at all the todger waving that goes along with the rankings... "_MY_ team is #1..." Yes, it is still possible to fake packets, as has been evidenced, but with open source clients it would be trivial.

    Yes. I believe in Open Source. However, I know there is a time and place for it, and scientific analysis, to me, is not one of them. RC5? Sure.. crack keys by brute force.. no analysis needed there. But SETI@home? This is science... This is for a more worthy cause than bragging to your mates about how your machine crunched more than theirs and you're cooler than them...

    Seti@home definitely needs to ramp up their block construction process if what is in the 3dnow article is true. File "Running out of blocks to process" under 'Problems we -want- to have'. How many scientists would -kill- to have too much processing power available?

    Hell.. make a "variable" scientific analysis engine for distributed computing, give the science labs of the planet access to all the crunching they could ever dream of..

    As always, my own opinion.
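    On the "warble" point, here is the tiny illustration promised above: floating-point addition is not associative, so an optimized client that merely reorders operations can already drift from the reference result. A two-line Python demonstration:

        # the same three numbers, summed in two different orders
        a, b, c = 1e16, -1e16, 1.0
        print((a + b) + c)   # 1.0
        print(a + (b + c))   # 0.0 -- the 1.0 is lost in rounding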
  • I don't like this guy's attitude at all. He's angry because SETI@home has limited resources? They spend a fortune to use that radio telescope and distribute the work units, but that's not good enough for dude, he wants more. He is angry because he's "sacrificing" his idle CPU cycles. It's almost as if he feels it's his RIGHT to help out, when in fact it's a privilege. They have every right to make the client as slow as they want. They have absolutely no obligation to open source it or improve the speed. It's their toy, and they are only letting us play with it.

    I'm happy for SETI@home. I think it's great that they've got 8 million volunteers and have managed to (almost) process all the data they've acquired. I hope they aren't completely overwhelmed, though. It must cost a lot to serve up all those work units. Something like 40 gigs a day, I think it is. Thankfully they have all those corporate sponsors.

    I think it would be interesting if they open sourced it, but I understand why they haven't.
  • The author claims that if he had faster code, he would shut down his computer at night.

    I don't think that's gonna happen. People will still run their seti@home software at night. Having faster code will only enable them to do even more work units.

    Also, the number of people who have 3DNow or a P3 is a lot smaller than the number of people who have a P2, Pentium, PowerPC, Alpha, Sparc, ...

    Optimizing for 3DNow will only give a small number of people faster performance.

    I'm in favor of opening the source code for seti@home. They should not fear having code out there that produces false results, because they could easily recompute a work unit with their own version to see if it's true. And if too many bad results came from one person, they could easily cancel his account.

    Opening the source would enable other people to correct bugs (the Windows version is really not stable enough to run 24hrs/day) and optimize the code (again, the Windows version is more than 2 times slower than the Linux version - OK, the graphics take more time to compute, but it should not be more than 2 times slower - I've tested the 2 versions on my laptop).

    Opening the source would also enable other scientists (with computer programming knowledge) to check the source to make sure that the computations are done right.

    It's kind of controversial for scientists to release closed things. Science is about peer review; all their work should be peer-reviewable, including the source code of their programs.

    Note that they should not only open the source of the client but of all the other parts of the system. This would enable other people to adapt the application to work with other projects, and enable amateur radio-astronomers to analyse their own observations with well-known algorithms, and maybe to set up their own distributed systems to analyse the data they collect.
  • Okay, I don't want to hear any more about SETI@Home not releasing the source code. Here is the real deal on all of this. Do any of you wonder why SGI is blowing the doors off of everybody else? Do you wonder why they crank blocks in under 3 hours? It's not because they have superior hardware, it's because they have the all-powerful SETI@Home client source code. They've gone in and tweaked it so that the FFT (fast Fourier transform) routines use the optimized math libraries (they didn't previously). Bam, instant boost for SGI. Sun has done the same thing, you just can't notice since their average block rate hasn't dropped down yet (they've only been using the optimized client for about 2 weeks now). Both the SGI and the Sun clients had to pass certain SETI@Home tests to be allowed by SETI@Home (they had to crank a block and return results within a certain amount of precision and accuracy).

    Both Sun and SGI have offered the optimized clients to SETI@Home to distribute on their home page, but SETI@Home doesn't want them (they don't want forks in the code base or something like that).
  • Not really:
    I run clients on 2 Linux machines (4 years old, 64 MB, CPU changed from a P133 to an IDT C6-200) and they take about 40h each.
    A new box still running Win98 with 64 MB and a Celeron 466 takes about 65h per block with the "graphic" client.
  • Spend 10 minutes with the client (cli or gui) in a debugger, and you'll see plenty of places to improve performance.

    I don't necessarily agree with the whole article, but the client could be much faster.
  • That's funny. They are, indeed, blatantly wasting their computing power. It would be unreasonable for anyone to actually send out CD data to encode, but think about the idea: in a single day, we could've encoded a great portion, if not all, of the CDs ever produced.

    Someone made a good comment when they said that the main problem with SETI@home is their unwillingness to call on the community, just like the legendary PHB's of every major corporation. They manage out of fear, rather than by communicating with the people.
  • One other thing that may speed up performance for some: I have noticed that if you set the screensaver to go to blank screen, you get a higher block rate. A Pentium III 450 took around 20 hours to do a block with the saver always on, and only 9 with it set to go to blank screen. (This may also have something to do with the different versions; I have not checked lately.)
    josh
  • After complaining to the SETI folks that the Windoze client software would run so much faster if they used 3DNow! optimizations (you did see this was a 3DNow! advocacy site, didn't you?), they say:

    What should SETI@Home do about it? Limit the submission of units to one machine per email address. Disallow the transfer of units between accounts. Retire the Top 20 teams and accounts of all categories every 2 months, that will make it less appealing to push so hard to the top.

    But, above all, let people have the fastest client software possible, that is the least they deserve if you want them to support the project. Put the code into open source and let the best coders of the planet tackle it!

    Cynical translation: "Kick off all those darned RISC users with their really fast floating point units and their farms of multiprocessor servers! Let us recode your client so our favorite processor looks better!"

    Now I remember why we're looking for intelligence Out There....

  • They don't have to crunch all the work units themselves; they just have to crunch the *interesting* work units, those that return results that could be interpreted as alien radio transmissions. And they still do that right now; look at the FAQ and you'll see it's in their protocol.

    Even if they do not open the source, some people WILL decompile it, reverse engineer the communication protocol and will produce wrong-behaving clients!
  • n optimizations != n different source trees

    He was especially talking about optimizing the FFT routine using special instructions from the 3DNow instruction set. Since this function plays a big role in overall computing time, this would have sped up the whole process noticeably.

    You can maintain portability and `speed hacks' for special platforms in one tree. Look at PostgreSQL, GNU libc, strace, MySQL. They all contain assembler code for improving speed on certain setups while supporting various platforms. (A toy sketch of the same single-tree idea follows below.)
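    The cited projects do this in C with #ifdef'd assembler; the same single-tree idea can be sketched as runtime dispatch in toy Python (the probe and both fft variants are hypothetical stand-ins, not the real client's code):

        def fft_portable(data):
            return data                      # stand-in for the generic routine

        def fft_3dnow(data):
            return data                      # would wrap the hand-tuned routine

        def has_3dnow():
            try:
                return "3dnow" in open("/proc/cpuinfo").read()  # Linux-only probe
            except OSError:
                return False

        # one source tree, two speed paths: pick the best the machine supports
        fft = fft_3dnow if has_3dnow() else fft_portable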
  • I want to ditch the GUI client and start using the CLI one, but I need a graceful transition. I mean, what does the system do if it sends out a block and it never gets the work back? I guess I could leave a little hole in the sky! Thanks,
  • How did "as fast as possible" ever get to be such a good thing.
    I figure my machine might crank out 3 or 4 chunks during the life of the project. It would just be nice to think that maybe they weren't just duplicates of someone else's work.
    Use the fast 3d machines for Quake or raytracing like they were intended.

  • The NT client runs on recent 9x installations.
    It runs fine on my 98 machines, but it did have a .dll dependency failure with an old 95 box.
  • ...the fact that when they first started having problems, they blamed it on the Linux community, saying that we modified the clients to report more packets completed than actually were, for the purpose of getting higher rankings. As a community, we did not do this. Perhaps a few Linux users did, but they blamed the Linux community as a WHOLE. Till I get an apology to the Linux community from them, no SETI@Home packets will ever pass through my computer.
  • That is the link to the alpha binary. Most people probably want the x86 binary, which is http://setiathome.ssl.berkeley.edu/software/setiathome-1.3.i386-winnt-cmdline.exe [berkeley.edu].
  • by drwiii ( 434 ) on Friday July 30, 1999 @06:38AM (#1774930) Homepage
    Wouldn't it be interesting if SETI finally receives an alien transmission, and it's RC5-encrypted? (:

    It's just like Linux vs. BSD.. Each side has something they excel at, and something that they lag behind at. Just use whichever one makes you happy.

  • Phew, I'm not used to waddling around in asbestos underwear anymore.. anyway, I made an update to the disputed editorial at the former location [fullon3d.com] that replies to some of the comments made here and in other forums; check it out if you really care.

    Armin Lenz
  • Currently being worked on, by at least two groups. My own (The Free Film Project), and The Internet Movie Project.

    The Free Film Project can be found off the GNU [gnu.org] website. The Internet Movie Project is over here [imp.org].

  • If you want, you can join in with one or both of the groups I know that are trying to do just this sort of thing. I posted the URLs in the reply just prior to this, and don't want to start getting redundant, here.
  • by davie ( 191 ) on Friday July 30, 1999 @07:29AM (#1774957) Journal

    This is a situation where a hierarchical workload distribution would probably work. Unlike the SETI project with its huge, monolithic data chunks, a spidering project would be dealing with small (comparatively speaking) chunks. There could be several levels of capability depending on host speed, storage, bandwidth, etc. A company with Suns, a few Gig to spare, and a T3 could handle more volume and complexity--maybe spending their cycles figuring out relevance by context and links, etc., rather than spidering. A lot of the grunt work could be done before the cataloged data are returned to the destination hosts. This would also be a little nicer to bandwidth, I guess.

  • Dang.. that is a problem I had not thought of. But I think it is not that much of a problem:

    Once someone claims a prize for finding the RC5 key or the next prime or so, one would look in the logfiles and see who has processed that part of the keyspace.. and that person would be in trouble then.. especially when related somehow to the person claiming the prize..

    but of course in the seti case that guy could already be hitchhiking through the galaxy and we would not know about it ;-)
  • I guess I don't completely understand this situation, but let me try to pick my words correctly: the Fullon3d web site is labeling Seti@Home an enemy, they want optimized software, and they keep saying Seti@Home is trying to stall. Ok...I still don't get it.
    1) I guess we could start by saying the Seti@Home Project is not a video game; it is a distributed computing project connecting hundreds of thousands of computers actively working on the same task. All of this optimization stuff doesn't mean a thing for this project and its goals. Fullon3d is also reporting that the Seti@Home Project has a lot of competition, which is entirely true. And, as a way to curb this growing *obsession* with work units, they propose the following:
    -----
    "Limit the submission of units to one machine per email address.
    Disallow the transfer of units between accounts.
    Retire the Top 20 teams and accounts of all categories every 2 months, that will make it less appealing to push so hard to the top."
    -----
    Of course, they want to stigmatize competition, and yet they want to optimize the client to make processing units faster!! Seems a bit contradictory, doesn't it? Also, I'm fine running the client software on my P166 with 88 MB of memory. IMHO, the project is running fine. What need would making the client run *a little* faster fill?
    *
    2) Everyone has already seen the negative aspects of opening the source, so I don't have to refresh your minds. But, let's think about this situation with some common sense; there is no need to label Seti the enemy.
    *
    3) Oh yeah one more thing: fullon3d was comparing the Seti@Home project to this:
    -----
    "How would you feel building a road to the hungry people of the world with your bare hands while the initiator of the project doesn't want you to have shovels because they didn't think to buy trucks and groceries yet?"
    -----
    What? Those are two unrelated situations that deal with different variables and have no bearing on each other! The fact is, if you don't want to help Seti, don't download the program.

    Rajiv Varma
  • I didn't think of that possibility. However, if you use sufficient redundancy, then not all of the people processing that block will be running a hacked client. As long as there is one legitimate client which records a positive result back at d.net, then you can demonstrate that anybody else who said that block was a negative is running a broken (hacked) client. It would be necessary to send the redundant blocks to different clients simultaneously in order to make sure that any hacked clients are discovered before the user of the hacked client has an opportunity to cash in on their ill-gotten gains.

    You could also seed the work units with positives and check to see which clients record those as negative results, and then ban those clients before they cause you to miss a real positive result. Seeding the work units like this means you have to be able to generate more than one positive result, which would be doable in SETI because a number of different results could be the pattern we're looking for. In other words, there are many possible positive results. Seeded positives might be impossible to use in key cracking because there should only be one key to the puzzle and if we already knew the key, we wouldn't need a contest to find it. The only solution that I can see for d.net would be for the client to not return a positive/negative result but just return the processed block for final interpretation as positive or negative back at the d.net server. Unfortunately this moves back into the "security through obscurity" model.

  • Well, that's a good point and it would carry more weight (for me) if you ran out of valid work units and your machine powered down. But it looks like seti may be just sending old wu's again to keep people running it, which I agree is a waste.

    So, I don't disagree that efficiency is a good thing and that un-optimized clients are a bad thing. I didn't mean to imply otherwise (and I don't think I did).

    I guess I'm just thick because I still don't see how that article hopes to accomplish that. If a 3dnow optimized client appears tomorrow, everyone would still leave their machines on the same amount of time, no? If seti ran out of work units they would likely send out January again. What did I miss?
  • Ok... I *JUST* got out my ammeter, and my computer takes in .5A. At least in America, we have 120 volts, so that's 60 watts (if I am thinking correctly here, P=VI, right?). Now, in most parts of the country, energy is about 8-10 cents a kilowatt-hour. According to my calculations (correct me if I'm wrong here), that's like 6 cents a night? (A quick check follows below.) I'd also like to point out that if a computer puts out any NOTICEABLE heat or noise, you probably need a new computer, preferably not an Intel *hint*
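    Checking that arithmetic in a few lines of Python (assuming a 10-hour night and 10 cents/kWh):

        volts, amps = 120, 0.5
        watts = volts * amps                     # P = V * I = 60 W
        hours, dollars_per_kwh = 10, 0.10
        cost = watts / 1000.0 * hours * dollars_per_kwh
        print(watts, "W, about", round(cost * 100), "cents a night")  # ~6 cents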
  • So gcc, the gimp, gnome, and emacs are all useless for real work? Aside from the fact that there's gnu.misc.discuss for this, what is your definition of real work?

  • We can test the "Infinite Number of Monkeys" theory. Various computers would generate random ASCII characters in their spare CPU cycles, and then the resulting sequences would be compared to the works of Shakespeare.


    So there wouldn't quite be an infinite number of computers involved... but computers are faster than monkeys anyway.
    --
