Posted by emmett from the et-phone-home-two-point-oh dept.
Rafael writes "The SETI@home project, aimed at using a computer screensaver to search for alien signals, seemed almost as crazy as the search itself. However, it became a phenomenon and an ever-increasing Internet addiction. Now SETI@home is being improved. Check out the history at the BBC's Web page."
Even if there are 100,000,000 people using the client, what good does that do if there aren't enough telescopes (all types)? (BTW I am not trying to be a troll here)
All I can say is, it's about time. People have been whining about the duplication of effort on the SETI@Home project for a while.
This abundance of processor time is actually a pretty good argument for a good, open, generic distributed-project framework. (Unfortunately, most of them don't look as *cool* as SETI. :) --- pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
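A toy sketch of what the client core of such a framework might look like - fetch a unit, run the project's math, send it back. Every name here is invented, and the stubs fake a server so the thing compiles and runs standalone:

    /* Generic distributed-computing client loop: the framework owns this
       file; a real project would swap in real fetch/process/submit code. */
    #include <stdio.h>

    struct workunit { long id; double data[8]; };

    /* Stub: pretend to download a work unit; returns 0 when the "server"
       runs dry. */
    static int fetch_work(struct workunit *wu) {
        static long next_id = 1;
        if (next_id > 3) return 0;
        wu->id = next_id++;
        for (int i = 0; i < 8; i++) wu->data[i] = (double)(i * wu->id);
        return 1;
    }

    /* Stub: the project-specific science goes here. */
    static double process(const struct workunit *wu) {
        double sum = 0.0;
        for (int i = 0; i < 8; i++) sum += wu->data[i];
        return sum;
    }

    /* Stub: pretend to upload the result. */
    static void submit(long id, double result) {
        printf("workunit %ld -> %g\n", id, result);
    }

    int main(void) {
        struct workunit wu;
        while (fetch_work(&wu))
            submit(wu.id, process(&wu));
        return 0;
    }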
I'm not talking about geeks running this on their private computers. But you've got sysadmins everywhere who spend a significant portion of their time looking for ways to run that stupid thing on even more of the computers in their networks. "But it only uses idle time, no harm is done" - ha ha. Yes, it was kind of funny when it started, but anyone running that stuff on anything other than their own computers isn't worth his job.
The problem is not the telescopes. There is so much data flowing into the radio telescopes that the problem is getting enough CPUs together to process it all.
There are enough telescopes; they are dishes at observatories all over the world. Plus, each dish is continually downloading data, so processor cycles are continually needed to interpret it.
It's about time they got a new version. Of course, no discussion of SETI@Home is complete without mentioning: "WHY WON'T THEY OPEN SOURCE IT?!?" There, now I have initiated the debate. One thing that they could do is "open source" the data, that is, let people attack the data with clients that they have designed, in their own open-sourcey style. That would be unfair competition, I suppose. Still, security concerns are a lame excuse, and they should really open it up.
SETI@Home is a true example of half-hearted socialism at work. These people want some of the benefits of socialism without the true benefits of sharing. They want community help, but not true community help. We need to email them all our socialist pamphlets to help them learn the truth. Please help us.
They talk about adding new code to look for pulsating signals. This is a very interesting idea, but unfortunately I am afraid that a lot of people will not switch from v1.6 to v2.0, because this additional check will increase the amount of time that it takes to process a block. A lot of the people I know here at Cornell are doing this as much for the recognition of processing X number of blocks a day as they are for the actual science of the project.
Nearly every person that I know ran the SETI client a lot when it first came out. It was fun competition to see who could spread their user to the most machines, change other people's userids remotely, or just claim a trophy of computing muscle. It didn't take too long before people started trying to crack it.
It was pretty fun during the attempt, but when it succeeded, the whole thing became pretty meaningless. The user Ragnar - who I happen to know personally - is a prime example (likely the best). If you check his stats, he should be in 10th place, or maybe even higher. I haven't checked in a while. The method that he uses is quite ingenious, yet it still horribly unbalances the whole thing. (I've abandoned my saintalex user, who ended up getting picked up by ragnar, so I'm not being a hypocrite here. :)
More SETI information, eh? I'll admit, I've seen some REALLY strange/bizarre stuff come out of Hoe, but I never realized the damn thing was written by extraterrestrials.
yep... once again the seti project misses the target. am i the only one who sees the problem here, my brothers? can this project ever "succeed" being run by people this clueless?
let's say you lived out in the woods in the deep south. you wake up one morning, nice and toasty warm in your red long-underwear. you decide you'd better go hunting today, if you're going to have dinner tonight. hmmmmm. rabbit sounds dang good!
what do you do? go grab your rifle and just tromple through the woods hoping to scare up some scraggly rabbit? of course not... you grab your rifle and your bloodhound and go tromple through the woods knowing your bloodhound will sniff out a nice fat meal!
"good god, open source man has finally lost his mind!" i can hear you gasp. well, when have i ever let you down?!
what the seti project needs is a bloodhound. it's ludicrous to just shove a bunch of software on some brain-dead intel driven computer and expect to sort out any kind of an intelligent signal. the seti project is trompling through the woods, without a keen nose to sniff out those crispy alien critters!
this is why i recommend that the seti project release a version of seti at home that runs on... get this... the aibo! yes, my brothers, our wacky friend, the aibo has come of age and is ready to mature into the sophisticated extraterrestrial tracking device it has always been destined to become.
the aibo - he's not just for petrification any more!
A lot of the people I know here at Cornell are doing this as much for the recognition of processing X number of blocks a day as they are for the actual science of the project.
This won't be the first time an upgrade has caused an increase in time. If they stay true to form (and they may not), at some point they will just start discarding results from older clients.
Okay, if they open source it, I'm wondering what improvements can be made? There are the obvious ones of improving how it looks and the graphics. But the math being run by this program is fairly advanced, and I'm sure they don't want anyone being able to compile a client which screws this part of it up.
Also, what would people do to ensure the data files being sent back are valid? And is there any way to do this if the entire client is opened?
It is really a shame when people get so wrapped up in being "stats whores" that they would think this way.
I understand the desire to be noticed and visible, high on the stats, but at some point it just goes overboard. People begin installing clients on computers that they do not own or operate, people begin hacking clients to return false positives to gain higher rankings... it's really pathetic.
Everyone has their own reasons for participating in a distributed computing project. To me, being concerned at all about the stats only proves that they are narrow-minded and short-sighted.
I just talked to ragnar... apparently the seti folks caught on and deleted his accounts. Unless this new version is able to block his efforts, he could easily start back up again. You can rest well, however, as he said that he was probably not going to.
Hopefully this new client'll return the seti project to what it was in the early days....
However... The other day, I was in a computer lab at the college I attend, and in the (Ctrl-Alt-Del) task manager in NT, I noticed one of the processes was RC-5 and another was seti@home. I can't kill the processes, not in five minutes anyway, without admin access and on a guest account... meaning someone with admin access put it on, it's using 90% of my processor cycles, and I don't want it on my computer lab station. I move to a different station. It's there too. Well. This is sure legal. Some bright kid with SysAdmin access put the programs on there. There are like 400 computers in this room.
SETI is a great idea. Unfortunately, things like it and RC-5 always attract a following that they don't need and that no one wants. If you want to make your processor run hot, go ahead. Don't put it near me.
"What do you mean, I can't initialize things in an assert?" ~zero
I have 20 boxes on a test network that are used mostly as terminals to log in to the production system. seti@home is a FAR better use of the machines than just having them sit there as idle workstations.
There is both a Linux and Slashdot group on Seti@Home. If you want to help out just go to the Seti@Home website [berkeley.edu] and download the client. After installing the client you can just join the Slashdot or Linux group by going to the groups link on the main page. Search for the group you want. Then just join the group.
The Slashdot group currently has 424 members and has computed 47953 packets.
The Linux group currently has 344 members and has computed 146283 packets. (there are a few *really* good systems in the Linux group that process the bulk of those packets)
An intelligent signal was detected by a Windows user in Memphis last month but was kept quiet by SETI. They knew it was a message from an alien race but they could not decode what it meant. An encryption specialist who knew someone at distributed.net handed the message over to them and unbeknownst to users, the message was inserted into the key-cracking server. Early this morning the message was cracked and experts were amazed at the significance and wisdom the contacting race had bestowed upon us. The NWO wanted to keep this to themselves and control the rest of the world however with the release of Kevin Mitnick this morning this was impossible. Kevin broke into Top Security systems at NASA and stole this incredible piece of knowledge and has now given it to the world. Without further hesitation, the message:
It suggests that AI research has been "slowing down." Hello? What about that big chess machine IBM built? What about all those bots people wrote for Quake2? What about Half-Life? Now, admittedly, most of this is entertainment-related, but it's still AI research, and it's getting pretty darn advanced. Q3, if you listen to Carmack, has even better bots. I mean, this guy has a cool idea, but so far it looks like there's no client to download, his mission statement is incredibly vague, and there are really no details about how they're going to use distributed computing to enhance the project. Past that, his claim that "AI has been slowing down" seems totally wrong. Call me skeptical.
http://setiathome.ssl.berkeley.edu/unix.html [berkeley.edu] mentions "The xsetiathome X11 client released with the 2.0 versions of the client has known problems. Please note the README.xsetiathome file in the .tar delivery. Not all platforms necessarily have an xsetiathome client. Please do not email us about bugs in the xsetiathome client."
People should give their spare CPU cycles to useful distributed computing efforts like www.dcypher.net [dcypher.net] or www.distributed.net. If you want to actually contribute to something useful you should go to the links above instead of wasting your time with SETI@Home which sends out duplicate work because it already has too many people helping:P
From http://setiathome.berkeley.edu/unix.html (note the last line):
Version 2.0 Upgrade procedure:
1. Stop your currently running v1.x client
2. Put the new v2.0 binary in place of your v1.x binary
3. Start the new client
The v2.0 client will restart processing of your current work-unit from the beginning.
Please discard your v1.x binary. It will soon be unable to obtain new work units.
Under Windows, running as a screen saver doubles the time it takes to process a block. On a PII 350 running Windows NT, turning all the graphics off took me from 25 hours a block to 14 hours a block.
I'd imagine that X would be worse, if anything, given the way Windows trades stability for graphics speed.
If your interest is throughput (either as a score whore, or because you want to donate as much as you can) I'd stay away from any X client.
"its using 90% of my processor cycles, and i don't want it on my computer lab station."
I dunno, I run seti@home (not RC5) constantly on my k6/2-400, 96mb, etc. running *gasp* win98, and I honestly don't see enough of a performance hit in anything to justify complaining.
Then again, I suppose that depends on whether or not one agrees that win98 delivers "performance" in the first place }:D
Even if there are 100,000,000 people using the client, what good does that do if there aren't enough telescopes
The setiathome FAQ covers most of it, but the short story is they are getting data in 2.5MHz wide chunks. To keep a work unit small enough so that a reasonably powered machine will do the job, they filter the data so that a workunit comprises 107 seconds of data, in a band a bit over 9kHz wide.
So, for every hour they are running at Arecibo, they can get over 8600 work units. That helps a bit, too.
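(To see roughly where that number comes from, assuming sub-bands of about 9.77 kHz: 2.5 MHz / 9.77 kHz = 256 sub-bands, and 3600 s / 107 s is about 33.6 chunks per sub-band per hour, so 256 x 33.6 works out to roughly 8600 work units per hour.)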
One other thing I've seen (I've been on since they went with the release) is that a lot of people sign up, process a few units and lose interest.
For what it's worth, I've processed a bit over 335 units. This is on a single box, because I couldn't get through the proxies at work. Even so, I'm running in the 97th percentile.
Hint: With a Windoze box, the screen saver chews up a lot of resources. On Win98, I let it run in the background without (usually) writing to the screen. This speeds up processing about 3X. (It takes about 11 to 12 hours on a 350MHz PII in background, 39 hours on the screen. I never finished a workunit on my 486 box....)
SETI@home is actually a fantastic program for "burning in" new machines that aren't ready for prime time yet. In a large server farm composed of several dozen machines, the laws of probability dictate that some piece of hardware is faulty. SETI@home is a great way to let new servers run with a full processor load for a while to see if anything goes wrong before loading the boxes with whatever production code will eventually run on them.
I do agree, though, that setting this up on corporate machines is an amazing waste of time. Then again, so is posting comments on Slashdot from work, but that doesn't seem to stop very many people.
If SETI were to be opened, multiple problems would be encountered. First off, there's immediate compatibility issues. With hundreds of variations of the client floating around, it's more than likely that many would be incompatible with the SETI servers. Secondly, cheating (getting counted for undeserved blocks) would experience a boom. Previously, I was only aware of a few people who were actually able to cheat, but if the source was opened, people could forge blocks to their little hearts' content (for more on cheating, see my previous post).
It is unlikely that these two points would encompass the totality of the issues, should your idea be implemented. Open sourcing is *not* for everything.
Speaking from experience, it gets really old after a while. Never got the current version to make it past the proxies at work, but if 2.0 really does work, I might set up a couple of work boxen to crunch. Preferably, without the gui.
OTOH, if Nitrozac@home ran under X, I'd be there in a heartbeat. :-)
How odd... I have the exact same box, but running NT. It takes me 14 hours to run a block, but the odd thing is, with graphics turned on, it took only 25...
We also must have started around the same time, because I've done 342 units as of today.
The numbers are interesting. At three hundred or so blocks, we're ahead of almost 98% of the people. As far as I can tell (and it is only an estimate; look at the charts), 5% of the people are responsible for something like 99% of the blocks processed. (My poor little Pentium, which runs under a different account, has only managed 56 blocks, and yet it is in the 86th percentile.)
This is not a trivial question. Firstly, SETI@Home uses the Arecibo radio telescope. This is a very nice dish for radio astronomy, but useless for SETI work. It's far too small. The smallest useful dish or array will be the hectare array, being built by the SETI Institute. Arecibo will only be able to detect signals from nearby stars that are: (a) at LEAST as strong as our most powerful RADAR, (b) SUSTAINED for a substantial period of time, (c) containing information on a carrier wave, (d) orbiting a planet or star, with no compensation for motion.
In short, leakage (the most likely sort of signal to be found) will be invisible, actual RADAR-type devices will be screened out (too short a duration and no information content), and any civilisation advanced enough to WANT to locate other civilisations by sending deliberate signals is likely to be filtered out as local interference through a lack of Doppler shift.
Methinks that SETI@Home is ingenious, but is using the wrong telescope. And it'll be finished before the RIGHT telescope has been built & put on-line.
As for "Open Sourcing" SETI@Home, it was, to start off with. The original UNIX client was GPLed. Hardly anyone bothered to do anything with it, and so they closed the source & shoved it over to a commercial house. Don't blame them - look to yourself first.
Having said that, SETI@Home's attitude has been somewhat atrocious. They've been going on about security, when that was never the cause of them going non-Open Source. Progress was. And part of that is their fault. They refused to set up a CVS repository, did VERY slow (and low-quality) releases, and basically impeded themselves at every turn. They should have done a damn sight better than that. Yes, there were only a few people there, which is EXACTLY WHY they needed to use CVS, rather than relying on manually testing every e-mailed patch and rolling a fresh tarball by hand every few weeks or months.
Honestly, if SETI@Home has shown anything, it's shown that we should be less worried about intelligence "out there" and rather more worried by the lack of it down here.
I agree - I have run it on mine, and it doesn't suck enough power to really bunch my shorts up, but the computer lab is running 166 workstations with 16 megs of RAM. And they have 29 (you read that right, 29) processes running in NT, counting RC-5 and seti. I remember on the NT server I set up, I had it down to 8 processes; this is a workstation. Just throwing more of my 2 cents worth in. ~zero
These programs use *spare* cycles. When you start using the computer it moves out of the way and gives the majority of the CPU % back to your apps.
More accurately, on most OSes, the background client program for d.net or seti signals to the OS that it is a low-priority process, so there is no performance hit.
So what's the big deal? If you had performance problems on those computers, it probably wasn't the d.net clients.
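For the curious, this is roughly all it takes for a Unix client to "move out of the way" - a minimal sketch using the standard setpriority() call (an illustration of the technique, not the actual d.net or seti code):

    /* Mark ourselves lowest-priority, then burn cycles: any normal-priority
       process that wants the CPU will get it ahead of us. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        if (setpriority(PRIO_PROCESS, 0, 19) != 0)   /* 19 = nicest */
            perror("setpriority");
        printf("now running at nice %d\n", getpriority(PRIO_PROCESS, 0));
        for (;;)
            ;   /* spin: eats an idle CPU, yields to any busy one */
    }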
Remember all the hoopla over the rumor that they deliberately slowed down the software to let more people feel the warm fuzzies of running a client?
As they say, they have more processing time than they need, so find ways to use it!
A few people may bitch and moan about lower stats, but I say to hell with the people that make stats their first priority. They are the kind who are likely to cheat or run the client on an error-prone over-overclocked (as opposed to overclocking a severely underrated chip) machine, making the extra security measures that slow everything down necessary.
There's a much stronger feeling of a "team effort" with distributed.net, and how can you help loving those cool little cow icons?
Seriously, two weeks ago I reinstalled the NT command line version of Seti (an ANCIENT client, but there hasn't been an update) on one of my machines and let it run all night... wouldn't you know it, it hung on sending and receiving the data, and the stats I had (I identified myself with the same email address I had previously used) never showed up. So I deleted it and went back to CSC.
Let's keep cracking on RC5! I can't wait for the OGR contest [distributed.net] to start.
I run it constantly under NT. It seems to have no effect on anything else I run, which includes CPU intensive stuff like DevStudio.
The only thing it changes is the task manager performance display, which always shows a processor use of 100%. But given the amount of time I spend compiling, believe me, if it took even 1% of the CPU time I used, it'd be gone.
They can use the d.net approach: open-source the compute portion and then link it with the closed-source, binary-only networking code. Put in some redundancy to make sure clients are doing all the checks; for example, ask for various intermediate values calculated during the FFTs. The math done by the program is NOT advanced: any undergraduate compsci student should have enough math background to understand it. The implementation of the FFT in SETI@home looks to be rather poorly done, though of course I can't tell for sure because I don't have a copy of the source. Nonetheless, other publicly available FFT implementations, such as FFTW and Dan Bernstein's FFT library (in generic C/Fortran code, not to mention machine-optimized vendor scientific computing libraries), tend to perform significantly better on equivalent-sized transform blocks. The people running SETI@home are top-notch, so I don't think their poor coding is accidental... they simply see no need for clients to process data faster.
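To give a feel for what "just use a good library" means, here's a sketch of one transform with FFTW (this assumes FFTW 3's fftw_plan_dft_r2c_1d interface; the transform size and the fake input are mine, and obviously not S@H's actual pipeline):

    /* Sketch: one real-to-complex transform with FFTW (assumes FFTW 3;
       compile with -lfftw3 -lm). N and the fake input are made up. */
    #include <fftw3.h>
    #include <math.h>
    #include <stdio.h>

    #define N 1024

    int main(void) {
        double *in = fftw_malloc(sizeof(double) * N);
        fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * (N / 2 + 1));
        fftw_plan plan = fftw_plan_dft_r2c_1d(N, in, out, FFTW_ESTIMATE);

        for (int i = 0; i < N; i++)          /* fake input: a pure tone */
            in[i] = sin(2 * M_PI * 10 * i / N);

        fftw_execute(plan);                  /* the expensive part */

        for (int k = 0; k <= N / 2; k++) {   /* look for a strong bin */
            double power = out[k][0] * out[k][0] + out[k][1] * out[k][1];
            if (power > 1.0)
                printf("bin %d: power %.1f\n", k, power);
        }

        fftw_destroy_plan(plan);
        fftw_free(in);
        fftw_free(out);
        return 0;
    }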
These things aren't real AI (well, not sure about the chess machine), but for the quake bots, there is nothing intelligent about it. They are programmed to respond to the input that is given to them. Just because it may be complex doesn't mean it's AI. I'm not too sure about this project either, though. There doesn't seem to be any real open-source AI stuff going on besides this, none that I have heard of at least. Could be cool if it gets support...
I had trouble with the NT command line version as well, but found that if you run the graphics version, but turn off the screensaver mode and minimize it, it runs pretty much at the same speed.
The point was *not* that it decreased the performance, so much as that someone put it on public university use computers. To use a computer that is not yours (and I seriously doubt that the University asked for it to be run on 400 of their computers) in order to accomplish a task that is meant to be run in *your* spare cycles seems wrong to me. There was not much performance decrease. The issue is that someone put it there without asking a higher-up admin, and it is against the "Acceptable Computer Use Policy". And it bugs me. Call me weird or paranoid. Plus, running their processors at 99% 24/7 is going to burn the already taxed processors. I think most of the computers are old, but at least a few are new. Overheating the processor doesn't seem like the best of ideas. And to be even more anal, your tax dollars plus my tuition pay for these computers. ~zero
It's kinda scary: in the articles it said they had TOO much computer power, but I've only done 37 blocks on this account and it's in the 82nd percentile. Where did everyone else go? Did they stop using it? Imagine how much data they would be crunching if everyone that signed up used it vigilantly!
When I first heard about SETI@Home last summer I was thrilled; ever since I was a child I had been fascinated by outer space and the possibilities of life beyond our planet. I installed the software and even watched it process as I was going to sleep at night. I kept diligent records of my own peaks and high Gaussians and even had a map where I pinpointed where in the sky my data had come from. (And of course I studied the stats on the SETI homepage trying desperately to surpass everyone else on my domain in units completed. I don't have a fast processor; I never did.)
Well, I don't do that anymore. I took it off my screensaver after a while and then when I went to DSL and had it automatically connect, most of the time I forgot it was there. But still, I'm glad to have it and glad to have the opportunity to participate; in a small way it's like fulfilling a dream from my childhood. I thank all those involved for that.
As for the new version coming out, I'm crossing my fingers that it'll fix the connection problems I've been experiencing ever since we networked our computers and I no longer have a direct connection to the Internet. Even though I don't look at it much anymore, I think it's a worthwhile project--both for the search for extra-terrestrial intelligence and for distributed computing.
Well I'm seeing a lot of "what happens when they run out of data" posts deep down, so let me contribute my $0.02 worth of radio astronomy:
Much of the current data was picked up during the Arecibo upgrade, when the telescope was essentially out of commission and staring up at the sky. Since the earth rotates, the sky over the telescope changes, so just grabbing all the signal ("drift scans") still provided useful data.
Arecibo is huge [naic.edu] - 305m in diameter, almost exactly a kilometer around (makes a good jogging track!): that's too large to steer. So it was designed as a spherical dish section, not parabolic like a satellite dish: a parabola sees perfectly in one pointing direction, but a spherical dish can see fuzzily in any direction.
During the upgrade, the old line feed was augmented by a Gregorian reflector, which allows perfect focus from a spherical dish. But the line feed still exists, as you can see from the pictures [naic.edu]. So now, while the Gregorian takes astronomy data, the old line feed can continue looking off in some other (random) part of the sky and take useful search data! And if the piggyback project is still running, the SETI people also get first dibs on all our data to search for their signals.
With additional tests for pulsations, one wonders how many of our pulsars the SETI people will rediscover. Useful check on their processing quality, I'm sure...
Arecibo's neat - consider visiting if you ever get a chance. Takes your breath away to realize the size of it all!
With hundreds of variations of the client floating around, it's more than likely that many would be incompatible with the SETI servers.
So the SETI servers don't send them blocks, process their blocks, or record their stats. Problem solved - if you want to be a part of the project, you have to use a compatible client.
Secondly, cheating (getting counted for undeserved blocks) would experience a boom.
There are two ways that people could cheat: returning false negative results without actually checking the results, and returning false positive results when there really isn't a positive.
False negatives can be easily caught by issuing the same blocks to other clients. Compare the results; if they disagree, run them again at SETI HQ and ban the cheating clients from participating. The article already discusses how they are sending the same blocks to two clients at once; they just need to up that level of redundancy a little bit to solve this problem. You couldn't normally trust even a non-hacked client to provide the correct results 100% of the time anyway, because that machine might have bad RAM, an overclocked processor, no cooling, the case off, and an RF transmitter in the next room. Some level of redundancy will always be necessary for this sort of project, and can also be used to catch cheaters.
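Here's a toy sketch of what that server-side comparison could look like - the record layout, names, and use of a digest are all invented for illustration:

    /* Toy sketch of redundancy checking: two clients crunch the same work
       unit and the server only credits results that agree. */
    #include <stdio.h>
    #include <string.h>

    struct result {
        long workunit_id;
        char client[32];
        unsigned char digest[16];     /* hash of the reported signals */
    };

    /* 1 if two independently computed results agree */
    static int results_agree(const struct result *a, const struct result *b) {
        return a->workunit_id == b->workunit_id &&
               memcmp(a->digest, b->digest, sizeof a->digest) == 0;
    }

    static void check_pair(const struct result *a, const struct result *b) {
        if (results_agree(a, b)) {
            printf("workunit %ld: ok, crediting %s and %s\n",
                   a->workunit_id, a->client, b->client);
            return;
        }
        /* Disagreement: recompute locally, then ban whoever was wrong. */
        printf("workunit %ld: mismatch between %s and %s, recomputing\n",
               a->workunit_id, a->client, b->client);
    }

    int main(void) {
        struct result a = { 42, "alice", "0123456789abcdef" };
        struct result b = { 42, "bob",   "0123456789abcdef" };
        check_pair(&a, &b);     /* agree: both get credit */
        b.digest[0] ^= 1;
        check_pair(&a, &b);     /* disagree: flagged for recomputation */
        return 0;
    }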
False positive results are even easier to catch. Don't you think that SETI HQ will check any positive results themselves before going public? They aren't going to call the NY Times on the strength of hacked.linux.box returning a positive on its first data block, let me tell you. Just ban clients that return false positives, and get on with the thing.
When discussing open-sourcing distributed.net's key cracking, where there's a prize attached, it has been pointed out that a hacked client could be used to return a false negative but inform the user, so that they can claim the prize before d.net can. But for SETI@Home, there isn't any danger of that. Who is going to believe J. Random Hacker's claims of detecting ET on his bedroom PC? Even if someone did this, there's only one place the raw data could have been coming from, because J. Random Hacker certainly doesn't have a high-powered radio telescope in the back yard generating all that data.
In short, I have yet to hear a good explanation of why the benefits of open-sourcing the client wouldn't exceed the problems (minimal, see above) of doing so.
Sounds like they should make all contributions Anonymous by default. No logins, passwords or teams. They'd lose a few folks, but gain a steadier and more dependable crowd who weren't just in it to get a happy hard-on seeing their name on a list of 'top contributors'.
Never happen, but that's okay. Just keep them grits a-comin', and stay (or get) ripped [rippedin2000.com] in 2000.
I too have been with distributed.net for quite a while (Since RC5-56 in spring of 1997). But I also participate in a number of other current distributed computing projects.
Why do this? Well, first, my ultimate goal is not stats-oriented. I don't really care where I am in the stats, where my team is in the stats, etc. What I enjoy most is that I contribute to a number of projects, albeit a little to each.
For example: the Seti@home client runs on a Pentium 166 (overclocked to 200) that is sitting in the living room of my apt. It makes my roommates happy to see eye-candy on the monitor out there, and it keeps the otherwise idle machine (it's a proxy/firewall for our cable modem connection) doing something. No, it doesn't zip through data blocks (about 1 every 5 days or so), but it is doing _something_.
So, what's my point? (I'm asking myself that same question.) Basically, it's that people participate in distributed computing projects for various reasons. If Seti@home has brought 1.6 million people into the distributed computing fold, then more power to them! (I do contest their figure of 1.6 million people participating, but that's for another discussion.) Even if the client is not 100% efficient (and what software package is 100% efficient?), they are still contributing to the overall education of people in matters of distributed computing, in particular internet-based projects.
Finally, it should be noted that distributed.net re-did a large number of CSC data blocks (~25%) as a security measure to reduce/eliminate false-positives from making their way into the stats.
As far as I can tell, tkseti doesn't post the data with the 2.0 linux client. Nor will wmseti work. Also, there is no xseticlient for linux that I could find; it is bundled with some of the other *nix clients, however.
I participate in SETI@home because I get off on having more computing power than my friends and relatives, and running a command at "nice -19". :)
There's no need to look in the skies for ET, because there are plenty of aliens here on earth -- I'm certain my ex-boss was one, and my friend Pete's former roommate, who had superhuman abilities (collegiate track champion) would neither confirm nor deny his terrestriality. NO, I don't think it's worthwhile -- it's a big waste of time.
While dnetc used 1.0% of memory, seti used 20.5% of memory. Also, as seti doesn't provide optimisations for modern processors (and the fact that they're unlikely to find anything anyway), I just think seti is a waste of spare cycles. OK, RC5's only slightly better, but at least it'll come to an end eventually.
You forgot to mention one of the other major reasons SETI@Home is useless... As with most of the other distributed computing projects out there, people seem interested solely in being the #1 data processor, rather than the actual science being done. Time and again, this has caused people to cheat and contaminate the results.
I have been using this small "xlock" solution for seti since I started using it (version 1.3), so that it only starts when I actually lock my X screen (going home, lunch, meeting...).
On my nice "old" afterstep 1.0, pressing "window+L" (the "window" key is bound to "3" on my X) runs the script: it starts seti, then my xlock, and then kills seti (and its lock) once I unlock my screen.
The nice part is that what happens while the screen is unlocked is not a real issue (I do have cronjobs, but they are not that consuming).
I do recognize that this method is a little overkill (I should just "stop" it and "continue" it later), but it works for me this way, so I thought I'd share it.
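(For anyone who'd rather have a tiny wrapper binary than a script, here's the same flow reconstructed in C - my sketch of the idea described above, not the actual script, assuming "setiathome" and "xlock" are on the PATH:)

    /* Run seti only while the screen is locked: start the client, start
       xlock, wait for the unlock, then kill the client again. */
    #include <signal.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static pid_t spawn(const char *prog) {
        pid_t pid = fork();
        if (pid == 0) {
            execlp(prog, prog, (char *)0);
            _exit(127);                    /* exec failed */
        }
        return pid;
    }

    int main(void) {
        pid_t seti = spawn("setiathome");
        pid_t lock = spawn("xlock");
        waitpid(lock, 0, 0);               /* blocks until screen unlocks */
        kill(seti, SIGTERM);               /* stop crunching; we're back */
        waitpid(seti, 0, 0);
        return 0;
    }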
Yeah, there's an xsetiathome client that will be bundled with the various unix versions. It's not a screensaver, though - just an X window that opens up with similar graphics as the Mac/Windows client. Maybe someday it'll act as a screensaver as well. - Matt - SETI@home
That's absolutely not true. distributed.net and other cracking projects are a waste of CPU cycles 'cause any idiot knows that you can eventually crack anything.
The "-nolock" consider that I will only run seti when I "x"lock my system, with no other running version (for the current one is killed when I un"x"lock it).
Note that/local/software is just my method for installing new softwares that I compile/install without messing with the system install (I then have a script to create links to/local/bin,/local/include, etc... so that when I want to upgrade the software, I just have to run the perl script to delete the created links and create new ones;) )
v1.0.6 for Mac SUCKS. A low-level debugger shows they implement their own drawing (rectangle, line,...) code for the screensaver part, and that uses a significant amount of processor time. Not to mention being ugly. What's more, the processor optimizations (when it is doing math) are dismal.
I dunno if they're planning to come out w/ a v2.0 GUI for Win/Mac, but until they clean their code or OSS it, I'm sticking w/ dnetc. All text, Altivec optimized, 3.8MKeys/sec. They know where the computing power is:v).
Running a processor at 100% capacity 24/7 is not going to 'burn' the processor, that is what they are designed to do. Those processors will be consigned to the rubbish bin well before they stop working.
You could perhaps argue on power consumption, but as long as the clients don't keep disks from spinning down and monitors from going to sleep, there's not much of a case there.
The university isn't getting any less of a return on investment on the processor because it is being used more and in fact they may even get something out of it, even if it's just publicity.
In fact I'd suggest that a large group of computers that otherwise spend a fair amount of time idling but need to be on for easy access are perfect for adding to a distributed computing effort such as d.net or seti@home.
Of course, no discussion of SETI@Home is complete without mentioning: "WHY WON'T THEY OPEN SOURCE IT?!?"
This may have been discussed previously, but I'll bite anyway. The problem with open-sourcing the SETI signal-processing code is the fact that this is a running experiment. With open-source operating systems, web hosts, games, etc., you will be running these on your own computers and/or on your own networks. The SETI example is completely different. With SETI, you are contributing to their scientific experiment. Therefore, they want to know precisely how every resulting piece of data was processed. I cannot stress that enough. As a scientist, I can assure you that if you cannot verify the integrity of certain data, then you may as well throw that data out (or at the least re-run the experiment; as we learned with the cold fusion people, DON'T PUBLISH if you're not certain of the validity of the data).
The problem with open-sourcing the code is that somebody on the client side could very well have changed the processing code, and introduced any number of possible errors. It could be something seemingly harmless, like using single-precision floats or linear interpolation, in some places instead of whatever methods were originally employed there, for a processing-speed gain. However, there is no way that the SETI folks can tell if the data they are receiving has been processed correctly or not. And one thing SETI cannot do is accept compromised data.
I do believe it is in the public interest to let others know the code's processing and evaluating techniques, though. What could be done is to run an open-source-style development of a separate client system, and then release some sort of binary-only module for the actual runtime client (perhaps they'd not reveal the data-authentication routines, for instance). You may disagree completely with this, but please understand that, from the point of view of a science experiment, they MUST in no way whatsoever receive data processed by a different method than they originally intended.
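To make that split concrete, the published interface to such a binary-only module might look something like this. Every name here is invented; nothing of the sort actually exists:

    /* Hypothetical split: an open client linked against a closed,
       binary-only networking/authentication library. Only this header
       would be published. All names are invented for illustration. */
    #ifndef SAH_NET_H
    #define SAH_NET_H

    #include <stddef.h>

    /* Fetch one work unit into buf; returns bytes received, or -1. */
    long sah_fetch_workunit(void *buf, size_t bufsize);

    /* Sign and upload a finished result. The signing key lives inside
       the closed library, so a forged result fails server-side checks. */
    int sah_submit_result(const void *result, size_t len);

    #endif /* SAH_NET_H */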
As for "Open Sourcing" SETI@Home, it was, to start off with. The original UNIX client was GPLed. Hardly anyone bothered to do anything with it, and so they closed the source & shoved it over to a commercial house.
Can you provide a reference for this? Or better yet, a link to the GPL'd code?
I offered to help them during the call for developers in April/June 1998. However, I changed my mind when they refused to consider putting the code under any sort of Open Source license. I'd be very disappointed if it turned out they had done a GPL'd release, since I was quite interested in participating otherwise. The license was my only objection: in fact, I remember arguing that the non-free license would hurt their ability to attract developers!
The last version of the code I have is version 1.4a from 1998 June 22, and it contains no license of any sort.
I had been running Seti@home on my 366/celery linux box for quite a while. However, with 64MB of ram, running seti, netscape, staroffice, xmms, emacs, gcc, gnome, or whatever else I happen to be running I end up having to swap a whole lot. I don't know how seti@home's algorithm works, but I would still be running it now if it wasn't such a memory hog. Anybody know of worthwhile distributed computing projects that use less memory?
Maybe if you explain your position, rather than just stating it, you might not sound like such a clueless blowhard. Then again, maybe you'll sound like more of one:)
The reason not to opensource SETI@home is a scientific one. In order to ensure that the experiment goes smoothly and in order to ensure accurate and consistent reporting of results, the scientists must maintain control over the methods of the experiment.
The program code is an essential part of the method of the experiment. Different methods (which would erupt if it was opensourced) would ruin the validity of the results, since consistency was not maintained. Thus, the scientists are protecting the credibility and validity of their results while also minimizing uncertainty.
This is a single experiment that uses the help of a lot of people. It isn't a lot of experiments. A single experiment must maintain consistency throughout. Tracking and documenting multiple versions would be a larger endeavor than the scientists have time for and it would introduce an unacceptable level of uncertainty in the results.
> There are two ways that people could cheat: returning false negative results without actually checking the results, and returning false positive results when there really isn't a positive.
When talking about cheating (and generally about abusing the S@H software), people generally talk about delivering work units without actually processing them. This has been done in the past (with the closed-source client), and was only detected because the lowlifes doing this forgot to forge reasonable processing times and instead sent them in with a 10-minute counter. If a more intelligent idiot decided to boost his fame and sabotage the client by sending in bogus data, this would really sabotage S@H, and since the S@H authors are members of the holy church of 'security through obscurity', they're not gonna change their open-sourcing policy anytime soon :-(
Just a quick plug for my SETI Club [geocities.com] - for somewhere to keep talking about SETI & SETI@home after this story has fallen off the /. front page. Currently just over 250 members. Enjoy.
Well, considering a lot of Wintel users have it running in screensaver mode (at least just about all of the people I've seen running it on Windows), and considering that SETI *almost* has too many people crunching data compared to what's available, the GUI option really isn't a big deal for most.
Personally, and this is not trolling for flames, I think the SETI project is a waste of time in its current state.
The data you process may be processed multiple times by others, thus rendering your time spent worthless. I have heard reports that they in fact lost a portion of the analyzed data and resubmitted it to clients, as well.
Finally, by their own technical paper [berkeley.edu], they claim:
The sky survey covers a 2.5 MHz bandwidth centered at 1420 MHz... The survey covers 28% of the sky (declinations ranging from +1 to +35 degrees) with a sensitivity of 3E-25 W/m2.
A 2.5 MHz frequency spread, covering 28% of the sky.
It just seems like a waste of time to me... others may (and obviously do) see it differently, but to me, if they are only searching 1/4 of the sky, and even then only scanning a 2.5 MHz frequency range, that leaves a LOT out. I ran a client myself, but when I started reading more about the project and what it entailed, I decided that I could put my proc time to better uses.
Then again, SETI obviously has the use of expensive hardware, dedicated scientists, and hundreds of thousands of interested people willing to run clients to analyze the data, so obviously it has support. I don't think the entire SETI program is a waste of money; I just don't see the benefits of scanning such a small portion of the sky/freq range.
Again, not trolling here, this is an objective opinion.
Please. Don't. This argument is on EVERY article that mentions either Distributed.net or Seti@Home. Give it up people, some will run d.net, others will run seti; lay off the evangelism.
Keeping SETI@home closed source only solves half the problem...
You're completely forgetting about all the cheaters who could just travel to some planet N light-years away, then travel back in time (N + Blargh + Glorp) years, where Blargh = the time it took said cheater to reach said planet, and Glorp is the time it would take the cheater to build an ultra-high powered radio transmitter and aim it to the exact point that the Arecibo telescope will be in N years, plus up to 24 hours of "padding time" to ensure that the telescope will be facing the right direction at the moment the signal arrives.
Then, said cheater simply pushes the Big Red Button labelled "Transmit Bogus Alien Mating Calls To Fools On Earth", sits back and waits.
Of course, he'll still have to rush back to Earth in time to compromise the SETI@Home network's security to ensure that the data packets containing the bogus interstellar message get routed to his computer alone!
Then, he sits and waits.... soon, the data packets are processed, and sent back to SETI... yes, my pretties... soon I^H...he... will receive a pat on the back and perhaps a small thank-you check for my^H^H...his..."contribution" to science...
Umm... the preceding situation was, ahem, completely hypothetical and I^Hthe hypothetical "cheater" would never stoop so low just to trick SETI@Home into thinking that I, Hypergeek^H^H^H^H^H^H^H^H^H^H^H^Hhe had just found extra-terrestrial intelligence.
Or maybe this hypothetical person-whom-I-don't-know would stoop so low... I mean, I have no idea-- it's all speculation, right?
[Mumble... that SETI check had better be enough to cover the costs of the interstellar space ship, the time machine, and the ultra-high-powered radio transmitter... otherwise Dad'll kill me when he gets his MasterCard bill!]
Umm... SETI@Home does offer a monetary reward for finding extraterrestrial life, right?
Uh oh. I'm dead. Unless, that is, anybody could figure out a way I could either:
Flee to another solar system really fast
Go back in time and stop myself from... nah... go forward in time to the end of the month and hide my dad's credit statement
Send a distress call to extra-terrestrials so they'd come and rescue me. Oh, scratch that one... the signal would never reach them in time, and, besides, I only built a "Transmit TO Earth" high-powered radio transmitter.
Or, if nobody can figure out a way to do those things... perhaps you could help me find a good recycling plant that pays a decent rate for scrap metal? I figure I can make back 1/10000000000th of what I owe... damn... it's too bad nobody would possibly be interested in buying a used interstellar spaceship, time machine, and ultra-high-powered radio transmitter... at least, not anybody on Slashdot... [Sigh...]
Well, I'm dead Real Soon Now. There's only one way to prevent this tragedy from claiming the lives of other, innocent victims: OPEN SOURCE SETI@HOME!!
The examples you give of AI research are pretty narrow. I don't know much about game bots, but personally I'm not sure if they qualify as "artificial intelligence" in anything but the loosest sense. Expert systems like chess-playing computers represent only a very small subset of all the true research being done in AI.
The kind of project that the Great AIP site seems to be proposing is an entirely different idea. From what I can tell, they are proposing to create an online environment that allows AI's to interact and evolve and eventually "reach intelligence". We're talking about an artificial intelligence that should be able to interact, experience, develop, learn and create... not a big computer that can calculate and evaluate insanely huge numbers of chess moves.
The project itself concerns me a little, because I'm not sure that its creators have fully investigated and appreciated the magnitude of the task they are undertaking. The history of AI research shows a recurring pattern: periods of extreme optimism and lofty aspirations, followed by periods of cynicism as people's ideas fail to pan out as expected. Over and over we are confronted with the fact that this thing we call "intelligence" (which I notice the Great AIP still hasn't even *defined*) is far more complex and elusive than we previously suspected.
Not that I'm trying to say that AI research is pointless or that an open source AI project has no merit; I simply hope that people involved will do some serious research into previous attempts at finding "intelligence". Otherwise, I worry that they will simply repeat past mistakes.
For more information on the huge range of ideas and research falling under the category of AI, check out the collection of links at http://ai.about.com/compute/ai/ [about.com]. For a good beginner's overview (and a chance to shamelessly plug a friend's site) I recommend this AI Tutorial Review [utoronto.ca].
Yeah, I let it all slide for a while. Then I got access to a raft of Solaris machines at work. Now I have a total of 2.8 GHz and 2.2 GB chugging away on seti. It sure is a lot more fun when you can see your totals jump each day. :-)
I was a bit worried about taking away from other processes, but the client runs at priority 00, so no worries.
hey, wouldn't it be cool to distribute it even further? As pointless and stupid as it is... how about loading this to run on the Casio mp3 watch? [slashdot.org] In my opinion, it would be way cooler to have a little gadget processing seti data on my wrist- rather than using it to play crappy music. Or how about the video game systems? Dreamcast, hellzyeah! Only play that thing a couple hours a day, gobs of free time to sit there and process.
Whatever... My ideas, like my sigs, probably aren't even original.
If the LGM are REALLY flying around in high-tech UFOs... doncha think they'd CONTACT US if they really wanted to speak to us??? Come off it! Get a grip! Seti is such a waste of time... they'll contact us when they're ready! In the meantime - work on RC5-64!
If they *do* opensource it, how will they sort out "false" clients from "real" clients? How can they be sure that all different versions that will appear treat the data with the exact same algorithm? Remember that they beta tested the current incarnation of the client for over a year (and during that time, you *could* download the source) before releasing it. If someone starts sending incorrect data back to them, they will have to start over again or simply stop the whole project. So the security concerns they are talking about are not "we're afraid to be hacked". It's "We're afraid that someone will make a really fast client that everyone will use but which returns unreliable (not proven) data."
(Actually, there are hacked clients which are faster but return unreliable data already...)
Why, oh why, did they release the first Linux version compiled for 686, not 386? This severely limits the number of users that can initially jump onto 2.0, IMO...
I know I'm posting this fairly late in the thread life, but I REALLY hope some of you will find this interesting.
Last night I attended a lecture/discussion by Dr. Kent Cullers, the Signal Detection Group Leader, one of the project managers in the SETI Phoenix Project, and one of the board members of the SETI@Home program (ever see Contact? The blind radio astronomer was modeled after him). After the lecture, I asked Dr. Cullers about this very topic. I told him flat out that I could not understand why a project with as lofty goals as SETI would willingly give a cold shoulder to thousands of potential programmers willing to donate their time towards developing more efficient clients and (potentially) search algorithms. I also told him flat out (in front of 100+ people) that while I've always supported the goals of SETI, their cold shoulder to the OSS community made no sense when you consider their claims of supporting scientific cooperation. His reply was refreshing, honest, and to the point.
The first thing he told me was that, despite public claims to the contrary, security is not SETI@Home's biggest concern. The REAL problem right now is that there is apparently no efficient system in place for transporting large blocks of information from the receiver setup to the SETI@Home hub at Berkeley. I'm not sure how many of you realize this, but SETI@Home is an affiliate of SETI, not a directly controlled part of the organization. He told us that SETI itself has a very large quantity of unprocessed data to sort through, but without the proper infrastructure to get it to the S@H users, there's no easy way to handle them. This IS being worked on though, so don't lose faith yet.
The second thing that he pointed out was that there are people inside of SETI, including HIM (remember, he's on the SETI@Home board), that want to see the client opened up. In fact, he pointed out that this very topic is already on his agenda to be brought up at the next board meeting. You know why he's on our side? Because, in addition to his many other amazing talents, he's a programmer! (yeah, a blind computer programmer...this man has just earned my respect for life). He described the hoops HE has to jump through to get access to the code, and he's a friggin board member! While he couldn't give me any promises, he was MUCH more receptive to the idea than any of the other SETI guys I've seen quoted on Slashdot.
In the meantime, he gave me this suggestion: apparently a small part of the problem is that Dr. David Anderson, the head of the SETI@Home project, isn't entirely convinced that OSS developers could really bring about any significant improvements. Dr. Cullers stated that the best way he could think of to change Dr. Anderson's mind would be to email him suggestions as to possible ways to improve the performance and reliability of the client. According to Dr. Cullers, getting him to open it is simply a matter of impressing on him the fact that we can help, and that we won't just be getting in the way. I realize that some people may have a problem sending coding suggestions to a closed-source project, but according to Dr. Cullers it would definitely help further the cause.
And on one final note, I'd like to tell you all something. I walked away from that lecture with an entirely different take on SETI. They're not a tired old organization desperate for government funding anymore; they've got some cool new projects already underway and in the works for the near future (SETI Optical, new arrays for Phoenix, etc.). Dr. Cullers made a very realistic projection (explaining the math and technology) that Phoenix will have our entire galaxy scanned in less than 50 years (they haven't even really touched it yet). Please remember, SETI has little direct control over SETI@Home, so don't let your poor opinions of the S@H project ruin your views of the entire program. SETI is worthy of our support.
And, if you get the chance to talk to any more S@H board members, here are a few things to bring up:
Multicasting - Whilst a lot of clients won't be on multicast networks, a lot will. This may be useful in reducing bandwidth consumption.
Proxy networks - You don't -actually need- to have data transmitted from a central point, if you can get a network of caching proxies set up. The main server can connect to all the proxies, simultaneously, via a multicast connection, and then the clients can fetch data from their local proxy.
Compress the data. Raw radio data should compress very nicely with delta shift encoding, or some variant thereof (see the sketch after this list). That way, there's less surplus information to send.
Read The Manual, especially the one marked "RSVP", describing resource reservation on the Internet.
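On the compression point, here's a toy sketch of plain delta encoding (my illustration of the general idea, not whatever exact "delta shift encoding" variant is meant above): each sample becomes its difference from the previous sample, and slowly varying data turns into long runs of small values that a generic compressor handles very well.

    /* Toy delta encoder/decoder for sampled data. */
    #include <stdio.h>

    static void delta_encode(short *buf, int n) {
        short prev = 0;
        for (int i = 0; i < n; i++) {
            short cur = buf[i];
            buf[i] = (short)(cur - prev);   /* difference from previous sample */
            prev = cur;
        }
    }

    static void delta_decode(short *buf, int n) {
        short prev = 0;
        for (int i = 0; i < n; i++) {
            prev = (short)(prev + buf[i]);  /* running sum restores the data */
            buf[i] = prev;
        }
    }

    int main(void) {
        short samples[] = { 100, 102, 103, 101, 99, 98, 100 };
        int n = (int)(sizeof samples / sizeof samples[0]);
        delta_encode(samples, n);
        for (int i = 0; i < n; i++)
            printf("%d ", samples[i]);      /* 100 2 1 -2 -2 -1 2 */
        printf("\n");
        delta_decode(samples, n);           /* round-trips to the original */
        return 0;
    }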
Well, I doubt I'll be speaking with any of them in the near future, this was a public forum after all :) He did give me his email address though, and invited me to send him a message if anything else came up, so I will definitely pass those on to him (a proxy network sounds so obvious... why didn't I think of bringing that up? ;)
Don't put too much faith in this development though...he's still just one voice in SETI, and his primary job is figuring out new ways of removing radio noise without degrading the original carrier signal. He doesn't have the power to open up S@H by himself, but the fact that we've got someone receptive to the idea on the inside, who also happens to be a skilled programmer, means that there's hope.
...the purpose of this project, and most of all are not scientists. This isn't about credit; it's about opening up humanity to new possibilities and reaching out into the unknown.
"So the SETI servers don't send them blocks, process their blocks, or record their stats. Problem solved - if you want to be a part of the project, you have to use a compatible client."
And how, pray tell, do you enforce a compatible client? If the client is trusted, there is no way to ensure it is valid. You cannot trust a liar to tell you he is a liar. This whole discussion has been had before on the QuakeForge list, where the security architecture of an open-source Quake was discussed. So if we cannot enforce a compatible client, then the few people at the project have to waste all their time chasing down and removing the accounts of bad people, who will just make more accounts. Sure, the closed-source version is not a great solution, but I don't expect these guys to make more headaches for themselves for the heck of it.
"False negatives can be easily caught by issuing the same blocks to other clients."
Even if you only send out one duplicate, that already doubles the workload. I guess you could randomly sample, but then your trade-off is that you will miss some.
"False positive results are even easier to catch."
Being able to catch false positives won't be of much use if your whole network is down because it was flooded with false positives.
Open sourcing opens a big can of worms. It /forces/ you to create /correct/ and clever solutions. That is a benefit of open source. It is also a detriment, because these guys are scientists and want to work on the project, not hire a whole programming squad to make clever solutions to new problems they have generated, which are being handled, if not /correctly/, at least "good enough" already by the closed-source client.
Open sourcing would be a good thing...but it has to be handled very slowly and rationally and carefully.
I don't think that all of the issues you raise will be as much of a problem as you say. Why would someone invest time into using a hacked client that produces incorrect results? The most likely reason (going by the history of this sort of project) is to move up in the ratings for the project. But to do this, you need to build up a history of past efforts, which will be destroyed every time your hacked client gets caught and you get banned from the project. As long as the project organizers can detect hacked clients, the thrill of using them will quickly wear off when people realize that there's no payoff for doing so. As for detection, see below.
"And how, pray tell, do you enforce a compatible client?"
OK, so I was a little glib about that. Let me explain my reasoning. There are really two kinds of compatibility that we care about: does the client's communication with the server follow the established protocol (data formats, ports, CRCs, etc.), and are the client's results compatible with the computations which the project intends to be running. If the client doesn't use the right protocol to talk to the project server(s), then you can ban it right there and move on. It's easy to tell if a client isn't sending data in the right sized chunks, etc. You can ban clients automatically when this happens, with no overhead in manpower (other than setting up the initial system). If I hacked the closed SETI client right now to talk to their servers on the wrong port, you can bet they would drop me like a hot potato - it would be obvious that I'm incompatible. By definition, networking protocol incompatibilities are detectable, because if you can't tell that the client isn't following the protocol, then by definition it is compatible with the protocol. If, on the other hand, the client's calculations are incompatible, then see below and my previous post.
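To make that concrete, here's a toy sketch (Python) of the kind of automatic server-side screen I mean; the chunk size and checksum scheme are invented for illustration, not the real S@H protocol:
----- protocheck.py
import zlib

RESULT_SIZE = 65536          # hypothetical fixed result-block size

def protocol_compatible(payload, claimed_crc):
    # Wrong-sized or corrupt submissions are rejected (and the
    # sender can be banned) with no human in the loop.
    if len(payload) != RESULT_SIZE:
        return False
    return (zlib.crc32(payload) & 0xffffffff) == claimed_crc

good = bytes(RESULT_SIZE)
assert protocol_compatible(good, zlib.crc32(good) & 0xffffffff)
assert not protocol_compatible(b"way too short", 0)
------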
"If the client is trusted, there is no way to ensure it is valid."
Exactly correct, which is why no clients can be trusted. Closed source doesn't prevent hacked clients, it just ensures that hacked clients are only created by the more motivated. As I explain below and in my previous post, even a non-hacked client can't be trusted 100%. Distributed computing will need to be run on a basis of clients proving that they are trustworthy, rather than the server assuming they are innocent until proven guilty.
"Even if you only send out one duplicate, that already doubles the workload. I guess you could randomly sample, but then your trade-off is that you will miss some."
It's true that there will have to be duplication of effort, but as I explained before, even with perfect closed-source clients and perfect good will on the part of their users, you would still want some level of redundancy. Somebody's processor could be overclocked, they could have bad memory, a random cosmic ray could strike because they have the box open, etc. If you really want to be sure of a scientific calculation like this, you have to run it multiple times (preferably with different but equivalent algorithms) and compare results. Redundancy slows things down, but did you really expect these projects to finish up next week? Remember, compared to the computing resources which SETI has available in-house, they're still getting a tremendous performance boost at very low cost.
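A sketch of that redundancy scheme (Python; the quorum size and result format are my assumptions, not anything S@H actually does):
----- quorum.py
# Issue each work unit to several independent clients and accept
# a result only when enough of them agree; disagreements get the
# unit reissued, which also catches honest hardware errors.

def verified(results, quorum=2):
    tally = {}
    for r in results:
        tally[r] = tally.get(r, 0) + 1
    best = max(tally, key=tally.get)
    return best if tally[best] >= quorum else None

# two honest clients outvote one flaky, overclocked box:
assert verified(["peak 1.42GHz", "peak 1.42GHz", "garbage"]) == "peak 1.42GHz"
# no agreement yet, so reissue the unit:
assert verified(["a", "b"]) is None
------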
"Being able to catch false positives won't be of much use if your whole network is down because it was flooded with false positives."
Once attackers realize that there's no glory (ratings advantage) in doing so, they'll quit with this attack. A huge mass of false positives can be solved by temporarily banning the source IP address or address blocks, and contacting the source's upstream provider just as you would do if you were ping flooded or attacked by any other DOS.
"Open sourcing opens a big can of worms. It *forces* you to create *correct* and clever solutions."
Agreed - which is why open source is a good idea in the first place for this client. For all we know, there could be a computational error in the client. Sure, this is unlikely; the SETI folks really know their stuff, but it's happened before with software released by very professional developers, and it can only be caught by a source code audit. In the worst case, SETI could use crypto in their networking layer and release that as a binary-only library, while opening up the computational parts of the client. This would allow people to experiment with the algorithms involved and the screen-saver part of the code.
"Open sourcing would be a good thing...but it has to be handled very slowly and rationally and carefully."
Agreed. I'm not advocating any overnight changes in SETI@Home. It would take some work to run a secure distributed computation with open sourced clients; possibly this would outweigh the expected advantage of the open source. You really won't know the benefits of open source until you try it, just like any other software project, so it's tough to make a case when we know all the possible problems but can point to few known advantages. I just feel that there are definite advantages for them to move in the direction of open source, and the difficulties of doing so are not so great as some would say, or at least are not insurmountable.
Are they ever going to release (Score:1)
Artificial Intelligence (Score:3)
It is open source, GPL, and for Linux
Mars Probe Signal Decoding option (Score:2)
They made this to work on a 386...cool... (Score:1)
This is interesting: "For example, our Windows version uses only standard Intel 386 instructions, not MMX, or other proprietary variations." 386!
How does this work? (Score:1)
Alright! (Score:1)
This abundance of processor time is actually a pretty good argument for a good, open, generic distributed project framework. (unfortunately, most of them don't look as *cool* as SETI.
---
pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
Giant waste of time and effort (Score:1)
Re:How does this work? (Score:1)
Mike van Lammeren
Re:How does this work? (Score:1)
Finally! (Score:2)
The Truth (Score:2)
New Code = longer time? (Score:3)
Re:Artificial Intelligence (Score:2)
alternate motives (Score:3)
It was pretty fun during the attempt, but when it succeeded, the whole thing became pretty meaningless. The user Ragnar - who I happen to know personally - is a prime example (likely the best). If you check his stats, he should be in 10th place, or maybe even higher. I haven't checked in a while. The method that he uses is quite ingenious, yet still horribly unbalances the whole thing. (I've abandoned my saintalex user, who ended up getting picked up by ragnar, so I'm not being a hypocrite here :)
Guess it's back to RC5.
SaintAlex
Observe, reason, and experiment.
Re:More SETI information available (Score:1)
It all makes so much more sense now...
OPEN SOURCE SETI (Score:4)
let's say you lived out in the woods in the deep south. you wake up one morning, nice and toasty warm in your red long-underwear. you decide you'd better go hunting today, if you're going to have dinner tonight. hmmmmm. rabbit sounds dang good!
what do you do? go grab your rifle and just tromple through the woods hoping to scare up some scraggly rabbit? of course not... you grab your rifle and your bloodhound and go tromple through the woods knowing your bloodhound will sniff out a nice fat meal!
"good god, open source man has finally lost his mind!" i can hear you gasp. well, when have i ever let you down?!
what the seti project needs is a bloodhound. it's ludicrous to just shove a bunch of software on some brain-dead intel driven computer and expect to sort out any kind of an intelligent signal. the seti project is trompling through the woods, without a keen nose to sniff out those crispy alien critters!
this is why i recommend that the seti project release a version of seti at home that runs on... get this... the aibo! yes, my brothers, our whacky friend, the aibo has come of age and is ready to mature into the sophisticated extraterrestrial tracking device it has always been destined to become.
the aibo - he's not just for petrification any more!
thank you.
Octane7: What's that? (OT) (Score:1)
Re:New Code = longer time? (Score:1)
This won't be the first time an upgrade has caused an increase in time. If they stay true to form (and they may not), at some point they will just start discarding results from older clients.
Re:Finally! (Score:1)
Also, what would people do to ensure that the data files being sent back are valid? And is there any way to do this if the entire client is opened?
Re:New Code = longer time? (Score:2)
I understand the desire to be noticed and visible, high in the stats, but at some point it just goes overboard. People begin installing clients on computers that they do not own or operate, people begin hacking clients to return false positives to gain higher rankings... it's really pathetic.
Everyone has their own reasons for participating in a distributed computing project. To me, being concerned at all about your stats only proves that you are narrow-minded and short-sighted.
Enough ranting for one day.
Re:Are they ever going to release (Score:1)
i have it running on 20 of my headless boxes (don't tell my boss!!), so an X GUI would be pointless.
Re:alternate motives (update) (Score:2)
Hopefully this new client'll return the seti project to what it was in the early days....
SaintAlex
Observe, reason, and experiment.
The idea behind seti is great. (Score:3)
However... The other day, I was in a computer lab at the college I attend, and in the (ctrl-alt-del) task manager in NT, I noticed one of the processes was RC5 and another was seti@home. I can't kill the processes, not in five minutes anyway, without admin access and on a guest account... meaning someone with admin access put them on, they're using 90% of my processor cycles, and I don't want them on my computer lab station.
I move to a different station.
It's there too. Well. This is sure legal.
Some bright kid with SysAdmin access put the programs on there. There are like 400 computers in this room.
"What do you mean, I can't initialize things in an assert?"
~zero
Re:Giant waste of time and effort (Score:1)
so there.
Linux and Slashdot on Seti@Home (Score:4)
There is both a Linux and Slashdot group on Seti@Home. If you want to help out just go to the Seti@Home website [berkeley.edu] and download the client. After installing the client you can just join the Slashdot or Linux group by going to the groups link on the main page. Search for the group you want. Then just join the group.
The Slashdot group currently has 424 members and has computed 47953 packets.
The Linux group currently has 344 members and has computed 146283 packets. (there are a few *really* good systems in the Linux group that process the bulk of those packets)
Breaking News! (Score:5)
FIRST POST!!!!
Re:Artificial Intelligence (Score:1)
Re:Are they ever going to release (Score:1)
-beme
SETI@Home useless.... (Score:2)
Re:New Code = longer time? (Score:2)
Version 2.0 Upgrade procedure:
1. Stop your currently running v1.x client.
2. Put the new v2.0 binary in place of your v1.x binary.
3. Start the new client.
The v2.0 client will restart processing of your current work unit from the beginning.
Please discard your v1.x binary; it will soon be unable to obtain new work units.
So, bye bye to those who won't upgrade...
---
So where is it? (Score:1)
Re:Are they ever going to release (Score:2)
Check out http://setiathome.berkeley.edu/README.xsetiathome
---
Re:Are they ever going to release (Score:2)
I'd imagine that X would be worse, if anything, given the way Windows trades stability for graphics speed.
If your interest is throughput (either as a score whore, or because you want to donate as much as you can) I'd stay away from any X client.
Re:The idea behind seti is great. (Score:2)
I dunno, I run seti@home (not RC5) constantly on my k6/2-400, 96mb, etc running *gasp* win98, and I honestly don't see enough of a performance hit in anything to justify complaining.
Then again, I suppose that depends on whether or not one agrees that win98 delivers "performance" in the first place }:D
Re:How does this work? (Score:2)
The setiathome FAQ covers most of it, but the short story is they are getting data in 2.5MHz wide chunks. To keep a work unit small enough so that a reasonably powered machine will do the job, they filter the data so that a workunit comprises 107 seconds of data, in a band a bit over 9kHz wide.
So, for every hour they are running at Arecibo, they can get over 8600 work units. That helps a bit, too.
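Back-of-the-envelope (Python), assuming the sub-bands simply tile the 2.5MHz and ignoring any overlap between consecutive units:
----- workunits.py
bandwidth_hz = 2.5e6      # full recorded band
band_hz      = 9766.0     # "a bit over 9kHz" per work unit (assumed)
unit_seconds = 107.0      # of data per work unit

bands  = bandwidth_hz / band_hz    # ~256 sub-bands
slices = 3600.0 / unit_seconds     # ~33.6 time slices per hour
print(int(bands * slices))         # ~8600 work units per hour of telescope time
------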
One other thing I've seen (I've been on since they went with the release) is that a lot of people sign up, process a few units and lose interest.
For what it's worth, I've processed a bit over 335 units. This is on a single box, because I couldn't get through the proxies at work. Even so, I'm running in the 97th percentile.
Hint: With a Windoze box, the screen saver chews up a lot of resources. On Win98, I let it run in the background without (usually) writing to the screen. This speeds up processing about 3X. (It takes about 11 to 12 hours on a 350MHz PII in background, 39 hours on the screen. I never finished a workunit on my 486 box....)
Re:Giant waste of time and effort (Score:2)
SETI@home is actually a fantastic program for "burning in" new machines that aren't ready for prime time yet. In a large server farm composed of several dozen machines, the laws of probability dictate that some piece of hardware will be faulty. SETI@home is a great way to let new servers run with a full processor load for a while to see if anything goes wrong before loading the boxes with whatever production code will eventually run on them.
I do agree, though, that setting this up on corporate machines is an amazing waste of time. Then again, so is posting comments on Slashdot from work, but that doesn't seem to stop very many people.
OPEN SOURCE[ing] SETI (is a very, very, bad idea) (Score:1)
It is unlikely that these 2 points would encompass the totality of the issues, should your idea be implemented.
Open Sourcing is *not* for everything.
SaintAlex
Observe, reason, and experiment.
Re:Breaking News! (Score:4)
Re:Are they ever going to release (Score:1)
Speaking from experience, it gets really old after a while. Never got the current version to make it past the proxies at work, but if 2.0 really does work, I might set up a couple of work boxen to crunch. Preferably, without the gui.
OTOH, if Nitrozac@home ran under X, I'd be there in a heartbeat.
Re:How does this work? (Score:1)
We also must have started around the same time, because I've done 342 units as of today.
The numbers are interesting. At three hundred or so blocks, we're ahead of almost 98% of the people. As far as I can tell (and it is only an estimate; look at the charts), 5% of the people are responsible for something like 99% of the blocks processed. (My poor little Pentium, which runs under a different account, has only managed 56 blocks, and yet it is in the 86th percentile.)
Is SETI@Home worthwhile? (Score:5)
In short, leakage (the most likely sort of signal to be found) will be invisible, actual RADAR-type devices will be screened out (too short a duration and no information content), and any civilisation advanced enough to WANT to locate other civilisations by sending deliberate signals is likely to be filtered out as local interference, through a lack of Doppler shift.
Methinks that SETI@Home is ingenious, but is using the wrong telescope. And it'll be finished before the RIGHT telescope has been built & put on-line.
As for "Open Sourcing" SETI@Home, it was, to start off with. The original UNIX client was GPLed. Hardly anyone bothered to do anything with it, and so they closed the source & shoved it over to a commercial house. Don't blame them - look to yourself first.
Having said that, SETI@Home's attitude has been somewhat atrocious. They've been going on about security, when that was never the cause of them going non-Open Source. Progress was. And part of that is their fault. They refused to set up a CVS repository, did VERY slow (and low-quality) releases, and basically impeded themselves at every turn. They should have done a damn sight better than that. Yes, there were only a few people there, which is EXACTLY WHY they needed to use CVS, rather than relying on manually testing every e-mailed patch and rolling a fresh tarball by hand every few weeks or months.
Honestly, if SETI@Home has shown anything, it's shown that we should be less worried about intelligence "out there" and rather more worried by the lack of it down here.
Re:The idea behind seti is great. (Score:1)
just throwing more of my 2 cents worth in.
~zero
Re:The idea behind seti is great. (Score:2)
More accurately, on most OSs, the background client program for d.net or seti signals to the OS that it is a low-priority process, so there is no performance hit.
So what's the big deal? If you had performance problems on those computers, it probably wasn't the d.net clients.
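For the curious, here's roughly how a background client demotes itself on a POSIX box (a Python sketch; the real clients do the equivalent through native APIs):
----- lowpri.py
import os

os.nice(19)   # drop to the politest scheduling priority available

# Stand-in for the real number crunching: the OS now only hands
# this loop CPU time that no normal-priority process wants.
total = 0
for i in range(10 ** 7):
    total += i
print(total)
------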
slower, but more thorough is a wise decision (Score:2)
As they say, they have more processing time than they need, so find ways to use it!
A few people may bitch and moan about lower stats, but I say to hell with the people who make stats their first priority. They are the kind who are likely to cheat, or to run the client on an error-prone, excessively overclocked machine (as opposed to overclocking a severely underrated chip), making the extra security measures that slow everything down necessary.
I know I'm biased... (Score:3)
The lack of timely upgrades for the client a la distributed.net [distributed.net]
The lack of fresh content updates on the website a la distributed.net [distributed.net]
Network outages
"Recycling" of data - just a horrible waste of time
I'm happy to say I was with CSC since day one [distributed.net] and I'm still happily cracking away [distributed.net] on RC5.
There's a much stronger feeling of a "team effort" with distributed.net, and how can you help loving those cool little cow icons?
Seriously, two weeks ago I reinstalled the NT command-line version of Seti (an ANCIENT client, but there hasn't been an update) on one of my machines and let it run all night... wouldn't you know it, it hung on sending and receiving the data, and the stats I had (I identified myself with the same Email address I had previously used) never showed up. So I deleted it and went back to CSC.
Let's keep cracking on RC5! I can't wait for the OGR contest [distributed.net] to start.
Re:The idea behind seti is great. (Score:1)
The only thing it changes is the task manager performance display, which always shows a processor use of 100%. But given the amount of time I spend compiling, believe me, if it took even 1% of the CPU time I used, it'd be gone.
Re:Finally! (Score:2)
Re:Artificial Intelligence (Score:2)
Re:I know I'm biased... (Score:1)
The point of the post was.... (Score:1)
Plus, running their processors at 99% 24/7 is going to burn out the already-taxed processors. I think most of the computers are old, but at least a few are new. Overheating the processor doesn't seem like the best of ideas.
And to be even more anal, your tax dollars plus my tuition pay for these computers.
~zero
Re:Are they ever going to release (Score:1)
High Percentile for low block numbers (Score:1)
A Personal SETI Story (Score:3)
Well, I don't do that anymore. I took it off my screensaver after a while and then when I went to DSL and had it automatically connect, most of the time I forgot it was there. But still, I'm glad to have it and glad to have the opportunity to participate; in a small way it's like fulfilling a dream from my childhood. I thank all those involved for that.
As for the new version coming out, I'm crossing my fingers that it'll fix the connection problems I've been experiencing ever since we networked our computers and I no longer have a direct connection to the Internet. Even though I don't look at it much anymore, I think it's a worthwhile project--both for the search for extra-terrestrial intelligence and for distributed computing.
Gothea
427 Members (Score:2)
: )
Drift scans at Arecibo (Score:5)
Much of the current data was picked up during the Arecibo upgrade, when the telescope was essentially out of commission and staring up at the sky. Since the earth rotates, the sky over the telescope changes, so just grabbing all the signal ("drift scans") still provided useful data.
Arecibo is huge [naic.edu] - 305m in diameter, almost exactly a kilometer around (makes a good jogging track!): that's too large to steer. So it was designed as a spherical dish section, not parabolic like a satellite dish: a parabola sees perfectly in one pointing direction, but a spherical dish can see fuzzily in any direction.
During the upgrade, the old line feed was augmented by a Gregorian reflector, which allows perfect focus from a spherical dish. But the line feed still exists, as you can see from the pictures [naic.edu]. So now, while the Gregorian takes astronomy data, the old line feed can keep looking off at some other (random) part of the sky and take useful search data! And if the piggyback project is still running, the SETI people also get first dibs on all our data to search for their signals.
With additional tests for pulsations, one wonders how many of our pulsars the SETI people will rediscover. Useful check on their processing quality, I'm sure...
Arecibo's neat - consider visiting if you ever get a chance. Takes your breath away to realize the size of it all!
Re:OPEN SOURCE[ing] SETI (is a very, very, bad ide (Score:4)
I've disagreed with this before, so here goes:
So the SETI servers don't send them blocks, process their blocks, or record their stats. Problem solved - if you want to be a part of the project, you have to use a compatible client.
There are two ways that people could cheat: returning false negative results without actually checking the results, and returning false positive results when there really isn't a positive.
When discussing open-sourcing distributed.net's key cracking, where there's a prize attached, it has been pointed out that a hacked client could be used to return a false negative but inform the user, so that they can claim the prize before d.net can. But for SETI@Home, there isn't any danger of that. Who is going to believe J. Random Hacker's claims of detecting an alien signal on his bedroom PC? Even if someone did this, there's only one place the raw data could have come from, because J. Random Hacker certainly doesn't have a high-powered radio telescope in the back yard generating all that data.
In short, I have yet to hear a good explanation of why the benefits of open-sourcing the client wouldn't exceed the problems (minimal, see above) of doing so.
Re:The idea behind seti is great. (Score:2)
Never happen, but that's okay. Just keep them grits a-comin', and stay (or get) ripped [rippedin2000.com] in 2000.
It was: Be sure ... (Score:1)
A crummy commercial!
Re:Linux and Slashdot on Seti@Home (Score:1)
Ars Technica [arstechnica.com] is cooler than /. [slashdot.org] anyway. They're interested in technology, not politics.
--
Re:I know I'm biased... (Score:2)
PiHex [cecm.sfu.ca]
Mersenne Primes [mersenne.org]
Seti@home [berkeley.edu]
Why do this? Well, first, my ultimate goal is not stats-oriented. I don't really care where I am in the stats, where my team is in the stats, etc. What I enjoy most is that I contribute to a number of projects, albeit a little to each.
For example: the Seti@home client runs on a Pentium 166 (overclocked to 200) that is sitting in the living room of my apt. It makes my roommates happy to see eye-candy on the monitor out there, and it keeps the otherwise idle machine (it's a proxy/firewall for our cable modem connection) doing something. No, it doesn't zip through data blocks (about 1 every 5 days or so), but it is doing _something_.
So, what's my point? (I'm asking myself that same question.) Basically, it's that people participate in distributed computing projects for various reasons. If Seti@home has brought 1.6 million people into the distributed computing fold, then more power to them! (I do contest their figure of 1.6 million people participating, but that's for another discussion.) Even if the client is not 100% efficient (and what software package is 100% efficient?), they are still contributing to the overall education of people in matters of distributed computing, in particular internet-based projects.
Finally, it should be noted that distributed.net re-did a large number of CSC data blocks (~25%) as a security measure to reduce/eliminate false-positives from making their way into the stats.
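Partial re-checking buys more than it might sound like, too. A quick calculation (Python), assuming each block is independently picked for re-verification, which may not be exactly how d.net chose them:
----- recheck.py
p = 0.25    # fraction of blocks re-done (the CSC figure above)

# Probability that ALL of a cheater's n bogus blocks escape the
# random re-check: (1 - p) ** n, which dies off fast with n.
for n in (1, 5, 10, 20):
    print(n, (1 - p) ** n)
# n=1 -> 0.75, n=5 -> ~0.24, n=10 -> ~0.056, n=20 -> ~0.0032
------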
Re:Are they ever going to release (Score:1)
Re:Is SETI@Home worthwhile? (Score:2)
There's no need to look in the skies for ET, because there are plenty of aliens here on earth -- I'm certain my ex-boss was one, and my friend Pete's former roommate, who had superhuman abilities (collegiate track champion) would neither confirm nor deny his terrestriality. NO, I don't think it's worthwhile -- it's a big waste of time.
--
SETI's memory consumption (Score:2)
PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME COMMAND
767 n343145 20 1 12992 12M 236 R N 0 50.6 20.5 6530m setiathome
5474 n8201079 16 0 640 640 516 R 0 48.3 1.0 1:11 dnetc
While dnetc used 1.0% of memory, seti used 20.5%. Also, as seti doesn't provide optimisations for modern processors (and the fact that they're unlikely to find anything anyway), I just think seti is a waste of spare cycles. OK, RC5's only slightly better, but at least it'll come to an end eventually.
E.T. tastes like rabbit. (Score:1)
Re:Is SETI@Home worthwhile? (Score:1)
a small xlock solution ... (Score:2)
----- setilock
#!/bin/csh -f
# assumes the setiathome binary lives in $HOME
cd
# background it, discarding output, so xlock can take over
./setiathome -nice 19 >& /dev/null &
xlock -mode blank
killall setiathome
rm -f lock.txt
------
So on my nice "old" afterstep 1.0, pressing "window+L" (the "window" key is bound to "3" on my X) will actually run the script: start seti, then my xlock, and then kill seti (and its lock file) once I unlock my screen.
The -nice part is just so that unlocking is not a real issue (I do have cronjobs, but they are not that consuming).
I do recognize that this method is a little overkill (I should just "stop" it and "continue" later), but it works for me this way, so I thought I'd share it.
Hope that helps.
Isn't there *something* worthwhile? (Score:2)
Indeed. And I might add, I am not convinced that they even know what they're looking for when analyzing the data.
I think this project is a complete waste of time and resources. Even more of a waste than RC5/64, and that's saying something.
Isn't there some project out there that actually has some scientific usefulness?
---
Re:Are they ever going to release (Score:1)
Re:SETI@Home *IS NOT* useless.... (Score:1)
distributed.net and other cracking projects are a waste of CPU cycles, because any idiot knows that you can eventually crack anything.
but what can be a greater reward than finding ET?
Re:a small xlock solution ... (Score:1)
------ setilock
#!/bin/csh -f
cd
# background it so xlock can run; pid.sah holds the client's pid
./setiathome -nice 19 -nolock >& /dev/null &
xlock -mode blank
kill `cat pid.sah`
------
The "-nolock" consider that I will only run seti when I "x"lock my system, with no other running version (for the current one is killed when I un"x"lock it).
Note that
Hope that helps
Re:Breaking News! (Score:1)
But only for *nix. (Score:1)
What's more, the processor optimizations (when it is doing math) are dismal.
I dunno if they're planning to come out w/ a v2.0 GUI for Win/Mac, but until they clean up their code or OSS it, I'm sticking w/ dnetc. All text, AltiVec optimized, 3.8MKeys/sec. They know where the computing power is.
Re:The point of the post was.... (Score:2)
Running a processor at 100% capacity 24/7 is not going to 'burn' the processor; that is what they are designed to do. Those processors will be consigned to the rubbish bin well before they stop working.
You could perhaps argue about power consumption, but as long as the clients don't prevent disks from spinning down and monitors from going to sleep, there's not much of a case there.
The university isn't getting any less of a return on investment on the processor because it is being used more and in fact they may even get something out of it, even if it's just publicity.
In fact I'd suggest that a large group of computers that otherwise spend a fair amount of time idling but need to be on for easy access are perfect for adding to a distributed computing effort such as d.net or seti@home.
Open-Sourced SETI (Score:3)
This may have been discussed previously, but I'll bite anyway. The problem with open-sourcing the SETI signal-processing code is the fact that this is a running experiment. With open-source operating systems, web hosts, games, etc., you will be running these on your own computers and/or your own networks. The SETI case is completely different: with SETI, you are contributing to their scientific experiment. Therefore, they want to know precisely how every resulting piece of data was processed. I cannot stress that enough. As a scientist, I can assure you that if you cannot verify the integrity of certain data, then you may as well throw that data out (or re-run the experiment at the least; as we learned from the cold fusion people, DON'T PUBLISH if you're not certain of the validity of the data).
The problem with open-sourcing the code is that somebody on the client side could very well have changed the processing code and introduced any number of possible errors. It could be something seemingly harmless, like using single-precision floats or linear interpolation in some places instead of whatever methods were originally employed, for a processing-speed gain. However, there is no way that the SETI folks can tell whether the data they are receiving has been processed correctly or not. And one thing SETI cannot do is accept compromised data.
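To see how easily that sort of "harmless" change bites, here's a contrived Python demonstration of single- vs. double-precision drift (nothing to do with the actual S@H math):
----- drift.py
import struct

def f32(x):
    # round a Python float (IEEE double) down to single precision
    return struct.unpack('f', struct.pack('f', x))[0]

acc64 = 0.0
acc32 = 0.0
for i in range(1, 1000001):
    acc64 += 1.0 / i
    acc32 = f32(acc32 + f32(1.0 / i))

# Same "algorithm", two precisions: the totals visibly disagree,
# and a server comparing results bit-for-bit would have to reject
# the single-precision run as incompatible.
print(acc64, acc32, acc64 - acc32)
------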
I do believe it is in the public interest to let others know the code processing and evaluating techniques, though. What could be done is run an open-source type development of a separate client system, and then release some sort of binary-only module for the actual runtime client (perhaps they'll not reveal the data-authentication routines, for instance). You may disagree completely with this, but please understand that, from the point of view of a science experiment, they MUST in no way whatsoever receive data processed by a different method than they originally intended.
original client GPL? (Score:1)
Can you provide a reference for this? Or better yet, a link to the GPL'd code?
I offered to help them during the call for developers in April/June 1998. However, I changed my mind when they refused to consider putting the code under any sort of Open Source license. I'd be very disappointed if they had done a GPL'd release, since I was quite interested in participating otherwise. The licence was my only objection: in fact, I remember arguing that the non-free license would hurt their ability to attract developers!
The last version of the code I have is version 1.4a from 1998 June 22, and it contains no license of any sort.
Distributed computing with less memory (Score:1)
I don't know how seti@home's algorithm works, but I would still be running it now if it wasn't such a memory hog.
Anybody know of worthwhile distributed computing projects that use less memory?
Yeah? (Score:1)
Re:Are they ever going to release (Score:1)
Re:Finally! (Score:2)
The program code is an essential part of the method of the experiment. Different methods (which would proliferate if it were open-sourced) would ruin the validity of the results, since consistency was not maintained. Thus, the scientists are protecting the credibility and validity of their results while also minimizing uncertainty.
This is a single experiment that uses the help of a lot of people. It isn't a lot of experiments. A single experiment must maintain consistency throughout. Tracking and documenting multiple versions would be a larger endeavor than the scientists have time for and it would introduce an unacceptable level of uncertainty in the results.
SETI "cheating" (Score:1)
When talking about cheating (and generally about abusing the S@H software), people usually mean delivering work units without actually processing them.
This has been done in the past (with the closed source client), and was only detected because the lowlifes doing this forgot to forge reasonable processing times and instead sent them in with a 10 minute counter.
If a more intelligent idiot decided to boost his fame by hacking the client to send in bogus data, it would really sabotage S@H; and since the S@H authors are members of the holy church of 'security through obscurity', they're not gonna change their open-sourcing policy anytime soon.
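Even so, a trivial server-side sanity check would have caught the 10-minute forgeries automatically; something like this (Python, with an invented threshold):
----- sanity.py
# Flag any work unit that "finished" faster than the fastest
# machine ever plausibly could. The threshold is an assumption.
MIN_PLAUSIBLE_HOURS = 2.0

def suspicious(reported_cpu_hours):
    return reported_cpu_hours < MIN_PLAUSIBLE_HOURS

assert suspicious(10.0 / 60.0)    # the 10-minute forgeries
assert not suspicious(11.5)       # e.g. a 350MHz PII in background
------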
Want to talk about it? (Score:2)
Re:Artificial Intelligence (Score:1)
Re:Are they ever going to release (Score:1)
Personally, and this is not trolling for flames, I think the SETI project is a waste of time in its current state.
The data you process may be processed multiple times by others, thus rendering your time spent worthless. I have heard reports that they in fact lost a portion of the analyzed data and resubmitted it to clients, as well.
Finally, by their own technical paper [berkeley.edu], they claim:
The sky survey covers a 2.5 MHz bandwidth centered at 1420 MHz
2.5 MHz frequency spread, covering 28% of the sky.
It just seems like a waste of time to me... others may (and obviously do) see it differently, but to me, if they are only searching 1/4 of the sky, and even then only scanning a 2.5 MHz frequency range, that leaves a LOT out. I ran a client myself, but when I started reading more about the project and what it entailed, I decided that I could put my proc time to better uses.
Then again, SETI obviously has the use of expensive hardware, dedicated scientists, and hundreds of thousands of interested people willing to run clients to analyze the data, so obviously it has support. I don't think the entire SETI program is a waste of money, I just don't see the benefits of scanning such a small portion of the sky/freq range.
Again, not trolling here, this is an objective opinion.
Re:How useful is distributed.net compared to THIS? (Score:1)
Re:Isn't there *something* worthwhile? (Score:1)
Bah! That only solves HALF the problem! (Score:2)
You're completely forgetting about all the cheaters who could just travel to some planet N light-years away, then travel back in time (N + Blargh + Glorp) years, where Blargh = the time it took said cheater to reach said planet, and Glorp is the time it would take the cheater to build an ultra-high powered radio transmitter and aim it to the exact point that the Arecibo telescope will be in N years, plus up to 24 hours of "padding time" to ensure that the telescope will be facing the right direction at the moment the signal arrives.
Then, said cheater simply pushes the Big Red Button labelled "Transmit Bogus Alien Mating Calls To Fools On Earth", sits back and waits.
Of course, he'll still have to rush back to Earth in time to compromise the SETI@Home network's security to ensure that the data packets containing the bogus interstellar message get routed to his computer alone!
Then, he sits and waits.... soon, the data packets are processed, and sent back to SETI... yes, my pretties... soon I^H...he... will receive a pat on the back and perhaps a small thank-you check for my^H^H...his..."contribution" to science...
Umm... the preceding situation was, ahem, completely hypothetical and I^Hthe hypothetical "cheater" would never stoop so low just to trick SETI@Home into thinking that I, Hypergeek^H^H^H^H^H^H^H^H^H^H^H^Hhe had just found extra-terrestrial intelligence.
Or maybe this hypothetical person-whom-I-don't-know would stoop so low... I mean, I have no idea-- it's all speculation, right?
[Mumble... that SETI check had better be enough to cover the costs of the interstellar space ship, the time machine, and the ultra-high-powered radio transmitter... otherwise Dad'll kill me when he gets his MasterCard bill!]
Umm... SETI@Home does offer a monetary reward for finding extraterrestrial life, right?
Uh oh. I'm dead. Unless, that is, anybody could figure out a way I could either:
Well, I'm dead Real Soon Now. There's only one way to prevent this tragedy from claiming the lives of other, innocent victims: OPEN SOURCE SETI@HOME!!
-Hypr Geeque
Re:Artificial Intelligence (Score:1)
The kind of project that the Great AIP site seems to be proposing is an entirely different idea. From what I can tell, they are proposing to create an online environment that allows AI's to interact and evolve and eventually "reach intelligence". We're talking about an artificial intelligence that should be able to interact, experience, develop, learn and create... not a big computer that can calculate and evaluate insanely huge numbers of chess moves.
The project itself concerns me a little, because I'm not sure that its creators have fully investigated and appreciated the magnitude of the task they are undertaking. The history of AI research shows a recurring pattern: periods of extreme optimism and lofty aspirations, followed by periods of cynicism as people's ideas fail to pan out as expected. Over and over we are confronted with the fact that this thing we call "intelligence" (which I notice the Great AIP still hasn't even *defined*) is far more complex and elusive than we previously suspected.
Not that I'm trying to say that AI research is pointless or that an open source AI project has no merit; I simply hope that people involved will do some serious research into previous attempts at finding "intelligence". Otherwise, I worry that they will simply repeat past mistakes.
For more information on the huge range of ideas and research falling under the category of AI, check out the collection of links at http://ai.about.com/compute/ai/ [about.com]. For a good beginner's overview (and a chance to shamelessly plug a friend's site) I recommend this AI Tutorial Review [utoronto.ca].
Re:A Personal SETI Story (Score:1)
I was a bit worried about taking away from other processes, but the client runs at priority 00, so no worries.
Other platforms for: Seti@home @ play @whatever! (Score:1)
Whatever... My ideas, like my sigs, probably aren't even original.
Ah... fuck seti! (Score:1)
Re:Finally! (Score:1)
Remember that they beta tested the current incarnation of the client for over a year (and during that time, you *could* download the source) before releasing it.
If someone starts sending incorrect data back to them, they will have to start over again or simply stop the whole project.
So the security concerns they are talking about are not "we're afraid to be hacked".
It's "We're afraid that someone will make a really fast client that everyone will use but which returns unreliable (not proven) data."
(Actually, there are hacked clients which are faster but return unreliable data already...)
why did they build 686-version first? (Score:1)
You may find this interesting (Score:3)
Last night I attended a lecture/discussion by Dr. Kent Cullers, the Signal Detection Group Leader, one of the project managers in the SETI Phoenix Project, and one of the board members of the SETI@Home program (ever see Contact? The blind radio astronomer was modeled after him). After the lecture, I asked Dr. Cullers about this very topic. I told him flat out that I could not understand why a project with as lofty goals as SETI would willingly give a cold shoulder to thousands of potential programmers willing to donate their time towards developing more efficient clients and (potentially) search algorithms. I also told him flat out (in front of 100+ people) that while I've always supported the goals of SETI, their cold shoulder to the OSS community made no sense when you consider their claims of supporting scientific cooperation. His reply was refreshing, honest, and to the point.
The first thing he told me was that, despite public claims to the contrary, security is not SETI@Home's biggest concern. The REAL problem right now is that there is apparently no efficient system in place for transporting large blocks of information from the receiver setup to the SETI@Home hub at Berkeley. I'm not sure how many of you realize this, but SETI@Home is an affiliate of SETI, not a directly controlled part of the organization. He told us that SETI itself has a very large quantity of unprocessed data to sort through, but without the proper infrastructure to get it to the S@H users, there's no easy way to handle them. This IS being worked on though, so don't lose faith yet.
The second thing that he pointed out was that there are people inside of SETI, including HIM (remember, he's on the SETI@Home board), that want to see the client opened up. In fact, he pointed out that this very topic is already on his agenda to be brought up at the next board meeting. You know why he's on our side? Because, in addition to his many other amazing talents, he's a programmer! (yeah, a blind computer programmer...this man has just earned my respect for life). He described the hoops HE has to jump through to get access to the code, and he's a friggin board member! While he couldn't give me any promises, he was MUCH more receptive to the idea than any of the other SETI guys I've seen quoted on Slashdot.
In the meantime, he gave me this suggestion: apparently a small part of the problem is that Dr. David Anderson, the head of the SETI@Home project, isn't entirely convinced that OSS developers could really bring about any significant improvements. Dr. Cullers stated that the best way he could think of to change Dr. Anderson's mind would be to email him suggestions as to possible ways to improve the performance and reliability of the client. According to Dr. Cullers, getting him to open it is simply a matter of impressing on him the fact that we can help, and that we won't just be getting in the way. I realize that some people may have a problem sending coding suggestions to a closed-source project, but according to Dr. Cullers it would definitely help further the cause.
And on one final note, I'd like to tell you all something. I walked away from that lecture with an entirely different take on SETI. They're not a tired old organization desperate for government funding anymore; they've got some cool new projects already underway and in the works for the near future (SETI Optical, new arrays for Phoenix, etc.). Dr. Cullers made a very realistic projection (explaining the math and technology) that Phoenix will have our entire galaxy scanned in less than 50 years (they haven't even really touched it yet). Please remember, SETI has little direct control over SETI@Home, so don't let your poor opinions of the S@H project ruin your views of the entire program. SETI is worthy of our support.
Already does (Score:1)
Re:You may find this interesting (Score:2)
And, if you get the chance to talk to any more S@H board members, here are a few things to bring up:
Re:You may find this interesting (Score:1)
Then they are worthless, they do not understand (Score:1)
Re:Are they ever going to release (Score:2)