Rafael writes
"The SETI@home project, aimed at using a computer screensaver to search for alien signals, seemed almost as crazy as searching at all. However, it became a phenomenon and an ever-increasing Internet addiction. Now SETI@home is being improved. Check the history at the BBC's Web page."
Are they ever going to release (Score:1)
Artificial Intelligence (Score:3)
It is open source, GPL, and for Linux
Mars Probe Signal Decoding option (Score:2)
They made this to work on a 386...cool... (Score:1)
This is interesting: "For example, our Windows version uses only standard Intel 386 instructions, not MMX, or other proprietary variations." 386!
How does this work? (Score:1)
Alright! (Score:1)
This abundance of processor time is actually a pretty good argument for a good, open, generic distributed project framework. (unfortunately, most of them don't look as *cool* as SETI.)
---
pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
Giant waste of time and effort (Score:1)
Re:How does this work? (Score:1)
Mike van Lammeren
Re:How does this work? (Score:1)
Finally! (Score:2)
The Truth (Score:2)
New Code = longer time? (Score:3)
Re:Artificial Intelligence (Score:2)
alternate motives (Score:3)
It was pretty fun during the attempt, but when it succeeded, the whole thing became pretty meaningless. The user Ragnar - who I happen to know personally - is a prime example (likely the best). If you check his stats, he should be in 10th place, or maybe even higher. I haven't checked in a while. The method that he uses is quite ingenious, yet still horribly unbalances the whole thing. (I've abandoned my saintalex user, who ended up getting picked up by ragnar, so I'm not being a hypocrite here.)
Guess it's back to RC5.
SaintAlex
Observe, reason, and experiment.
Re:More SETI information available (Score:1)
It all makes so much more sense now...
OPEN SOURCE SETI (Score:4)
let's say you lived out in the woods in the deep south. you wake up one morning, nice and toasty warm in your red long-underwear. you decide you'd better go hunting today, if you're going to have dinner tonight. hmmmmm. rabbit sounds dang good!
what do you do? go grab your rifle and just tromple through the woods hoping to scare up some scraggly rabbit? of course not... you grab your rifle and your bloodhound and go tromple through the woods knowing your bloodhound will sniff out a nice fat meal!
"good god, open source man has finally lost his mind!" i can hear you gasp. well, when have i ever let you down?!
what the seti project needs is a bloodhound. it's ludicrous to just shove a bunch of software on some brain-dead intel driven computer and expect to sort out any kind of an intelligent signal. the seti project is trompling through the woods, without a keen nose to sniff out those crispy alien critters!
this is why i recommend that the seti project release a version of seti at home that runs on... get this... the aibo! yes, my brothers, our whacky friend, the aibo has come of age and is ready to mature into the sophisticated extraterrestrial tracking device it has always been destined to become.
the aibo - he's not just for petrification any more!
thank you.
Octane7: What's that? (OT) (Score:1)
Re:New Code = longer time? (Score:1)
This won't be the first time an upgrade has caused an increase in time. If they stay true to form (and they may not), at some point they will just start discarding results from older clients.
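Server-side, "discarding" can be as simple as a version gate. A hypothetical sketch (the cutoff tuple and function name are made up for illustration; nothing here is actual SETI@home server code):

```python
MIN_SUPPORTED = (2, 0)   # assumed cutoff, per the v2.0 upgrade notes

def accept_result(client_version, payload):
    """Drop results from clients older than the minimum supported version;
    the server would then quietly reissue the work unit elsewhere."""
    if tuple(client_version) < MIN_SUPPORTED:
        return None          # result discarded
    return payload           # result accepted

print(accept_result((1, 6), "work-unit result"))  # → None (discarded)
print(accept_result((2, 0), "work-unit result"))  # → "work-unit result"
```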
Re:Finally! (Score:1)
Also, what would people do to ensure that the data files being sent back are valid? And is there any way to do this if the entire client is opened up?
Re:New Code = longer time? (Score:2)
I understand the desire to be noticed and visible, high in the stats, but at some point it just goes overboard. People begin installing clients on computers that they do not own or operate; people begin hacking clients to return false positives to gain higher rankings... it's really pathetic.
Everyone has their own reasons for participating in a distributed computing project. To me, people being concerned at all about their stats only proves that they are narrow-minded and short-sighted.
Enough ranting for one day.
Re:Are they ever going to release (Score:1)
i have it running on 20 of my headless boxes (don't tell my boss!!), so an X GUI would be pointless.
Re:alternate motives (update) (Score:2)
Hopefully this new client'll return the seti project to what it was in the early days....
SaintAlex
Observe, reason, and experiment.
The idea behind seti is great. (Score:3)
However... The other day, I was in a computer lab at the college I attend, and in the (ctrl-alt-del) task manager in NT, I noticed one of the processes was RC-5 and another was seti@home. I can't kill the processes, not in five minutes anyway, without admin access and on a guest account... meaning someone with admin access put them on, they're using 90% of my processor cycles, and I don't want them on my computer lab station.
I move to a different station.
It's there too. Well. This is sure legal.
Some bright kid with SysAdmin access put the programs on there. There are like 400 Computers in this room.
"What do you mean, I can't initialize things in an assert?"
~zero
Re:Giant waste of time and effort (Score:1)
so there.
Linux and Slashdot on Seti@Home (Score:4)
There is both a Linux and Slashdot group on Seti@Home. If you want to help out just go to the Seti@Home website [berkeley.edu] and download the client. After installing the client you can just join the Slashdot or Linux group by going to the groups link on the main page. Search for the group you want. Then just join the group.
The Slashdot group currently has 424 members and has computed 47953 packets.
The Linux group currently has 344 members and has computed 146283 packets. (there are a few *really* good systems in the Linux group that process the bulk of those packets)
Breaking News! (Score:5)
FIRST POST!!!!
Re:Artificial Intelligence (Score:1)
Re:Are they ever going to release (Score:1)
-beme
SETI@Home useless.... (Score:2)
Re:New Code = longer time? (Score:2)
Version 2.0 Upgrade procedure:
1. Stop your currently running v1.x client
2. Put the new v2.0 binary in place of your v1.x binary
3. Start the new client
The v2.0 client will restart processing of your current work-unit from the beginning.
Please discard your v1.x binary. It will soon be unable to obtain new work units.
So, bye bye to those who won't upgrade...
---
So where is it? (Score:1)
Re:Are they ever going to release (Score:2)
Check out http://setiathome.berkeley.edu/README.xsetiathome
---
Re:Are they ever going to release (Score:2)
I'd imagine that X would be worse, if anything, given the way Windows trades stability for graphics speed.
If your interest is throughput (either as a score whore, or because you want to donate as much as you can) I'd stay away from any X client.
Re:The idea behind seti is great. (Score:2)
I dunno, I run seti@home (not RC5) constantly on my K6-2/400, 96MB, etc. running *gasp* Win98, and I honestly don't see enough of a performance hit in anything to justify complaining.
Then again, I suppose that depends on whether or not one agrees that win98 delivers "performance" in the first place }:D
Re:How does this work? (Score:2)
The setiathome FAQ covers most of it, but the short story is they are getting data in 2.5MHz wide chunks. To keep a work unit small enough so that a reasonably powered machine will do the job, they filter the data so that a workunit comprises 107 seconds of data, in a band a bit over 9kHz wide.
So, for every hour they are running at Arecibo, they can get over 8600 work units. That helps a bit, too.
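The arithmetic checks out; here is a quick sketch (the exact sub-band split into 256 bands is an assumption, chosen because it matches the "a bit over 9kHz" and "over 8600" figures quoted above):

```python
# Rough estimate of SETI@home work units generated per hour of telescope time.
# From the post: 2.5 MHz of recorded bandwidth, chopped into sub-bands a bit
# over 9 kHz wide (2.5 MHz / 256 = ~9.77 kHz is assumed here), with 107
# seconds of data per work unit.
recorded_bandwidth_hz = 2.5e6
sub_band_hz = recorded_bandwidth_hz / 256    # assumed split into 256 bands
seconds_per_unit = 107

bands = recorded_bandwidth_hz / sub_band_hz       # 256 frequency sub-bands
units_per_band_per_hour = 3600 / seconds_per_unit # ~33.6 time slices per hour
work_units_per_hour = bands * units_per_band_per_hour

print(round(work_units_per_hour))  # → 8613, i.e. "over 8600"
```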
One other thing I've seen (I've been on since they went with the release) is that a lot of people sign up, process a few units and lose interest.
For what it's worth, I've processed a bit over 335 units. This is on a single box, because I couldn't get through the proxies at work. Even so, I'm running in the 97th percentile.
Hint: With a Windoze box, the screen saver chews up a lot of resources. On Win98, I let it run in the background without (usually) writing to the screen. This speeds up processing about 3X. (It takes about 11 to 12 hours on a 350MHz PII in background, 39 hours on the screen. I never finished a workunit on my 486 box....)
Re:Giant waste of time and effort (Score:2)
SETI@home is actually a fantastic program for "burning in" new machines that aren't ready for prime time yet. In a large server farm composed of several dozen machines, the laws of probability dictate that some piece of hardware is faulty. SETI@home is a great way to let new servers run with a full processor load for a while to see if anything goes wrong before loading the boxes with whatever production code will eventually run on them.
I do agree, though, that setting this up on corporate machines is an amazing waste of time. Then again, so is posting comments on Slashdot from work, but that doesn't seem to stop very many people.
OPEN SOURCE[ing] SETI (is a very, very, bad idea) (Score:1)
It is unlikely that these 2 points would encompass the totality of the issues, should your idea be implemented.
Open Sourcing is *not* for everything.
SaintAlex
Observe, reason, and experiment.
Re:Breaking News! (Score:4)
Re:Are they ever going to release (Score:1)
Speaking from experience, it gets really old after a while. Never got the current version to make it past the proxies at work, but if 2.0 really does work, I might set up a couple of work boxen to crunch. Preferably, without the gui.
OTOH, if Nitrozac@home ran under X, I'd be there in a heartbeat.
Re:How does this work? (Score:1)
We also must have started around the same time, because I've done 342 units as of today.
The numbers are interesting. At three hundred or so blocks, we're ahead of almost 98% of the people. As far as I can tell (and it is only an estimate; look at the charts), 5% of the people are responsible for something like 99% of the blocks processed. (My poor little Pentium, which runs under a different account, has only managed 56 blocks, and yet it is in the 86th percentile.)
Is SETI@Home worthwhile? (Score:5)
In short, leakage (the most likely sort of signal to be found) will be invisible, actual RADAR-type devices will be screened out (too short a duration and no information content), and any civilisation advanced enough to WANT to locate other civilisations by sending deliberate signals is likely to be screened out as local interference through a lack of Doppler shift.
Methinks that SETI@Home is ingenious, but is using the wrong telescope. And it'll be finished before the RIGHT telescope has been built & put on-line.
As for "Open Sourcing" SETI@Home, it was, to start off with. The original UNIX client was GPLed. Hardly anyone bothered to do anything with it, and so they closed the source & shoved it over to a commercial house. Don't blame them - look to yourself first.
Having said that, SETI@Home's attitude has been somewhat atrocious. They've been going on about security, when that was never the cause of them going non-Open Source. Progress was. And part of that is their fault. They refused to set up a CVS repository, did VERY slow (and low-quality) releases, and basically impeded themselves at every turn. They should have done a damn sight better than that. Yes, there were only a few people there, which is EXACTLY WHY they needed to use CVS, rather than relying on manually testing every e-mailed patch, and rolling a fresh tarball by hand every few weeks or months.
Honestly, if SETI@Home has shown anything, it's shown that we should be less worried about intelligence "out there" and rather more worried by the lack of it down here.
Re:The idea behind seti is great. (Score:1)
just throwing more of my 2 cents worth in.
~zero
Re:The idea behind seti is great. (Score:2)
More accurately, on most OSs, the background client program for d.net or seti signals to the OS that it is a low-priority process, so there is no performance hit.
So what's the big deal? If you had performance problems on those computers, it probably wasn't the d.net clients.
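The "low-priority" trick is just ordinary OS scheduling. A minimal sketch on a Unix-like system (the value 19 mirrors the `-nice 19` flag the seti client takes; everything else is generic illustration, not any client's actual code):

```python
import os

# A distributed-computing client can ask the scheduler to give it only the
# CPU time that nothing else wants by raising its own niceness to the max.
current = os.nice(0)            # an increment of 0 just reads the niceness
lowest = os.nice(19 - current)  # raise niceness to 19, the lowest priority
print(lowest)  # → 19
```

Note that an unprivileged process can only raise its niceness, never lower it, which is why a backgrounded client can't later promote itself.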
slower, but more thorough is a wise decision (Score:2)
As they say, they have more processing time than they need, so find ways to use it!
A few people may bitch and moan about lower stats, but I say to hell with the people who make stats their first priority. They are the kind who are likely to cheat, or to run the client on an error-prone, overly overclocked machine (as opposed to overclocking a severely underrated chip), making the extra security measures that slow everything down necessary.
I know I'm biased... (Score:3)
The lack of timely upgrades for the client a la distributed.net [distributed.net]
The lack of updates of fresh content on the website a la distributed.net [distributed.net]
Network outages
"Recycling" of data - just a horrible waste of time
I'm happy to say I was with CSC since day one [distributed.net] and I'm still happily cracking away [distributed.net] on RC5.
There's a much stronger feeling of a "team effort" with distributed.net, and how can you help loving those cool little cow icons?
Seriously, two weeks ago I reinstalled the NT command line version of Seti (an ANCIENT client, but there hasn't been an update) on one of my machines and let it run all night... wouldn't you know it, it hung on sending and receiving the data, and the stats I had (I identified myself with the same Email address I had previously used) never showed up. So I deleted it and went back to CSC.
Let's keep cracking on RC5! I can't wait for the OGR contest [distributed.net] to start.
Re:The idea behind seti is great. (Score:1)
The only thing it changes is the task manager performance display, which always shows a processor use of 100%. But given the amount of time I spend compiling, believe me, if it took even 1% of the CPU time I used, it'd be gone.
Re:Finally! (Score:2)
Re:Artificial Intelligence (Score:2)
Re:I know I'm biased... (Score:1)
The point of the post was.... (Score:1)
Plus, running their processors at 99% 24/7 is going to burn out the already taxed processors. I think most of the computers are old, but at least a few are new. Overheating the processor doesn't seem like the best of ideas.
And to be even more anal, your tax dollars plus my tuition pay for these computers.
~zero
Re:Are they ever going to release (Score:1)
High Percentile for low block numbers (Score:1)
A Personal SETI Story (Score:3)
Well, I don't do that anymore. I took it off my screensaver after a while and then when I went to DSL and had it automatically connect, most of the time I forgot it was there. But still, I'm glad to have it and glad to have the opportunity to participate; in a small way it's like fulfilling a dream from my childhood. I thank all those involved for that.
As for the new version coming out, I'm crossing my fingers that it'll fix the connection problems I've been experiencing ever since we networked our computers and I no longer have a direct connection to the Internet. Even though I don't look at it much anymore, I think it's a worthwhile project--both for the search for extra-terrestrial intelligence and for distributed computing.
Gothea
427 Members (Score:2)
: )
Drift scans at Arecibo (Score:5)
Much of the current data was picked up during the Arecibo upgrade, when the telescope was essentially out of commission and staring up at the sky. Since the earth rotates, the sky over the telescope changes, so just grabbing all the signal ("drift scans") still provided useful data.
Arecibo is huge [naic.edu] - 305m in diameter, almost exactly a kilometer around (makes a good jogging track!): that's too large to steer. So it was designed as a spherical dish section, not parabolic like a satellite dish: a parabola sees perfectly in one pointing direction, but a spherical dish can see fuzzily in any direction.
During the upgrade, the old line feed was augmented by a Gregorian reflector, which allows perfect focus from a spherical dish. But the line feed still exists, as you can see from the pictures [naic.edu]. So now, while the Gregorian takes astronomy data, the old line feed can continue looking off in some other (random) part of the sky and take useful search data! And if the piggyback project is still running, the SETI people also get first dibs on all our data to search for their signals.
With additional tests for pulsations, one wonders how many of our pulsars the SETI people will rediscover. Useful check on their processing quality, I'm sure...
Arecibo's neat - consider visiting if you ever get a chance. Takes your breath away to realize the size of it all!
Re:OPEN SOURCE[ing] SETI (is a very, very, bad ide (Score:4)
I've disagreed with this before, so here goes:
So the SETI servers don't send them blocks, process their blocks, or record their stats. Problem solved - if you want to be a part of the project, you have to use a compatible client.
There are two ways that people could cheat: returning false negative results without actually checking the results, and returning false positive results when there really isn't a positive.
When discussing open-sourcing distributed.net's key cracking, where there's a prize attached, it has been pointed out that a hacked client could be used to return a false negative but inform the user so that they can claim the prize before d.net can. But for SETI@Home, there isn't any danger of that. Who is going to believe J. Random Hacker's claims of detecting SETI on his bedroom PC? Even if someone did this, there's only one place that the raw data could have been coming from, because J. Random Hacker certainly doesn't have a high-powered radio telescope in the back yard generating all that data.
In short, I have yet to hear a good explanation of why the benefits of open-sourcing the client wouldn't exceed the problems (minimal, see above) of doing so.
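One standard defence against both kinds of cheating is redundancy: hand the same work unit to several independent clients and only accept a majority answer. The sketch below illustrates the general idea (the client functions are hypothetical stand-ins, not SETI@Home's actual server logic):

```python
import random
from collections import Counter

def verify_by_redundancy(work_unit, clients, copies=3):
    """Issue one work unit to several randomly chosen clients and keep
    only a majority result; a lone hacked client gets outvoted."""
    results = [client(work_unit) for client in random.sample(clients, copies)]
    value, votes = Counter(results).most_common(1)[0]
    return value if votes > copies // 2 else None  # no majority: reissue

# Hypothetical stand-ins for real clients:
honest = lambda unit: unit * 2    # pretend "processing" is doubling
cheater = lambda unit: -1         # always returns bogus data
print(verify_by_redundancy(21, [honest, honest, cheater]))  # → 42
```

The cost is processing each unit several times, which is exactly the sort of trade-off a project with surplus CPU time can afford.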
Re:The idea behind seti is great. (Score:2)
Never happen, but that's okay. Just keep them grits a-comin', and stay (or get) ripped [rippedin2000.com] in 2000.
It was: Be sure ... (Score:1)
A crummy commercial!
Re:Linux and Slashdot on Seti@Home (Score:1)
Ars Technica [arstechnica.com] is cooler than /. [slashdot.org] anyway. They're interested in technology, not politics.
--
Re:I know I'm biased... (Score:2)
PiHex [cecm.sfu.ca]
Mersenne Primes [mersenne.org]
Seti@home [berkeley.edu]
Why do this? Well, first, my ultimate goal is not stats-oriented. I don't really care where I am in the stats, where my team is in the stats, etc. What I enjoy most is that I contribute to a number of projects, albeit a little to each.
For example: the Seti@home client runs on a Pentium 166 (overclocked to 200) that is sitting in the living room of my apt. It makes my roommates happy to see eye-candy on the monitor out there, and it keeps the otherwise idle machine (it's a proxy/firewall for our cable modem connection) doing something. No, it doesn't zip through data blocks (about 1 every 5 days or so), but it is doing _something_.
So, what's my point? (I'm asking myself that same question.) Basically, it's that people participate in distributed computing projects for various reasons. If Seti@home has brought 1.6 million people into the distributed computing fold, then more power to them! (I do contest their figure of 1.6 million people participating, but that's for another discussion.) Even if the client is not 100% efficient (and what software package is 100% efficient?), they are still contributing to the overall education of people in matters of distributed computing, in particular internet-based projects.
Finally, it should be noted that distributed.net re-did a large number of CSC data blocks (~25%) as a security measure to reduce/eliminate false-positives from making their way into the stats.
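Spot-checking of that sort is cheap to sketch. The ~25% figure comes from the post above; the selection logic here is purely illustrative, not distributed.net's actual mechanism:

```python
import random

def blocks_to_recheck(completed, fraction=0.25, seed=None):
    """Pick a random fraction of completed blocks to silently reissue to
    other clients; a mismatched result flags the original submitter."""
    rng = random.Random(seed)           # seedable for reproducible audits
    count = round(len(completed) * fraction)
    return rng.sample(completed, count)

recheck = blocks_to_recheck(list(range(1000)), fraction=0.25, seed=7)
print(len(recheck))  # → 250
```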
Re:Are they ever going to release (Score:1)
Re:Is SETI@Home worthwhile? (Score:2)
There's no need to look in the skies for ET, because there are plenty of aliens here on earth -- I'm certain my ex-boss was one, and my friend Pete's former roommate, who had superhuman abilities (collegiate track champion) would neither confirm nor deny his terrestriality. NO, I don't think it's worthwhile -- it's a big waste of time.
--
SETI's memory consumption (Score:2)
PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME COMMAND
767 n343145 20 1 12992 12M 236 R N 0 50.6 20.5 6530m setiathome
5474 n8201079 16 0 640 640 516 R 0 48.3 1.0 1:11 dnetc
While dnetc used 1.0% of memory, seti used 20.5%. Also, since seti doesn't provide optimisations for modern processors (and given that they're unlikely to find anything anyway), I just think seti is a waste of spare cycles. OK, RC5's only slightly better, but at least it'll come to an end eventually.
E.T. tastes like rabbit. (Score:1)
Re:Is SETI@Home worthwhile? (Score:1)
a small xlock solution ... (Score:2)
----- setilock
#!/bin/sh
cd ~/setiathome                           # adjust to wherever setiathome lives
./setiathome -nice 19 > /dev/null 2>&1 &  # run seti in the background
xlock -mode blank                         # blocks until the screen is unlocked
killall setiathome                        # stop seti once unlocked
rm -f lock.txt                            # remove seti's lock file
------
So on my nice "old" afterstep 1.0, pressing "window+L" (the "window" key is bound to "3" on my X) will actually run the script: start seti, then my xlock, and then kill seti (and its lock) once I unlock my screen.
The -nice part is just so that unlocking is not a real issue (I do have cronjobs, but they are not that consuming).
I do recognize that this method is a little overkill (I should just "stop" it and "continue" later), but it works for me this way, so I thought I'd share it.
Hope that helps.
Isn't there *something* worthwhile? (Score:2)
Indeed. And I might add, I am not convinced that they even know what they're looking for when analyzing the data.
I think this project is a complete waste of time and resources. Even more of a waste than RC5/64, and that's saying something.
Isn't there some project out there that actually has some scientific usefulness?
---
Re:Are they ever going to release (Score:1)
Re:SETI@Home *IS NOT* useless.... (Score:1)
distributed.net and other cracking projects are a waste of CPU cycles because any idiot knows that you can eventually crack anything.
But what can be a greater reward than finding ET?
Re:a small xlock solution ... (Score:1)
------ setilock
#!/bin/sh
cd ~/setiathome                                   # adjust to wherever setiathome lives
./setiathome -nice 19 -nolock > /dev/null 2>&1 &  # run seti in the background
xlock -mode blank
kill `cat pid.sah`                                # stop the client recorded in its pid file
------
The "-nolock" assumes that I will only run seti while I "x"lock my system, with no other instance running (for the current one is killed when I un"x"lock it).
Note that
Hope that helps
Re:Breaking News! (Score:1)
But only for *nix. (Score:1)
What's more, the processor optimizations (when it is doing math) are dismal.
I dunno if they're planning to come out w/ a v2.0 GUI for Win/Mac, but until they clean their code or OSS it, I'm sticking w/ dnetc. All text, Altivec optimized, 3.8MKeys/sec. They know where the computing power is
Re:The point of the post was.... (Score:2)
Running a processor at 100% capacity 24/7 is not going to 'burn' the processor; that is what they are designed to do. Those processors will be consigned to the rubbish bin well before they stop working.
You could perhaps argue on power consumption, but as long as the clients don't prevent disks spinning down and monitors going to sleep, there's not much of a case there.
The university isn't getting any less of a return on investment on the processor because it is being used more and in fact they may even get something out of it, even if it's just publicity.
In fact I'd suggest that a large group of computers that otherwise spend a fair amount of time idling but need to be on for easy access are perfect for adding to a distributed computing effort such as d.net or seti@home.
Open-Sourced SETI (Score:3)
This may have been discussed previously, but I'll bite anyway. The problem with open-sourcing the SETI signal-processing code is the fact that this is a running experiment. With open-source operating systems, web-hosts, games, etc, you will be running these on your own computers and/or on your own networks. The SETI example is completely different. With SETI, you are contributing to their scientific experiment. Therefore, they want to know precisely how every resulting piece of data was processed. I cannot stress that enough. As a scientist, I can assure you that if you cannot verify the integrity of certain data, then you may as well throw that data out (or re-run the experiment at the least. As learned with the cold fusion people, DON'T PUBLISH if you're not certain of the validity of the data).
The problem with open-sourcing the code is that somebody on the client side could very well have changed the processing code, and introduced any number of possible errors. It could be something seemingly harmless, like using single-precision floats or linear interpolation, in some places instead of whatever methods were originally employed there, for a processing-speed gain. However, there is no way that the SETI folks can tell if the data they are receiving has been processed correctly or not. And one thing SETI cannot do is accept compromised data.
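The single-precision point is easy to demonstrate. This sketch (pure illustration, nothing to do with SETI's actual signal-processing code) accumulates the same sum with a double-precision accumulator and with one that is rounded to IEEE single precision at every step:

```python
import struct

def as_float32(x):
    """Round a Python float (double precision) to the nearest IEEE single
    by packing and unpacking it as a 4-byte 'f' value."""
    return struct.unpack('f', struct.pack('f', x))[0]

total64 = 0.0   # standard double-precision accumulator
total32 = 0.0   # accumulator forced to single precision after each add
for _ in range(100_000):
    total64 += 0.1
    total32 = as_float32(total32 + 0.1)

print(total64)                 # very close to 10000
print(total32)                 # visibly off after 100,000 roundings
print(abs(total64 - total32))  # the accumulated drift
```

A "harmless" speed optimization like this would quietly change every result a modified client returns.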
I do believe it is in the public interest to let others know the code processing and evaluating techniques, though. What could be done is run an open-source type development of a separate client system, and then release some sort of binary-only module for the actual runtime client (perhaps they'll not reveal the data-authentication routines, for instance). You may disagree completely with this, but please understand that from the point of view of a science experiment, they MUST in no way whatsoever receive data processed in a different method than they originally intended .
original client GPL? (Score:1)
Can you provide a reference for this? Or better yet, a link to the GPL'd code?
I offered to help them during the call for developers in April/June 1998. However, I changed my mind when they refused to consider putting the code under any sort of Open Source license. I'd be very disappointed if they had done a GPL'd release, since I was quite interested in participating otherwise. The licence was my only objection: in fact, I remember arguing that the non-free license would hurt their ability to attract developers!
The last version of the code I have is version 1.4a from 1998 June 22, and it contains no license of any sort.
Distributed computing with less memory (Score:1)
I don't know how seti@home's algorithm works, but I would still be running it now if it wasn't such a memory hog.
Anybody know of worthwhile distributed computing projects that use less memory?
Yeah? (Score:1)
Re:Are they ever going to release (Score:1)
Re:Finally! (Score:2)
The program code is an essential part of the method of the experiment. Different methods (which would erupt if it was opensourced) would ruin the validity of the results, since consistency was not maintained. Thus, the scientists are protecting the credibility and validity of their results while also minimizing uncertainty.
This is a single experiment that uses the help of a lot of people. It isn't a lot of experiments. A single experiment must maintain consistency throughout. Tracking and documenting multiple versions would be a larger endeavor than the scientists have time for and it would introduce an unacceptable level of uncertainty in the results.
SETI "cheating" (Score:1)
When talking about cheating (and generally about abusing the S@H software), people usually mean delivering work units without actually processing them.
This has been done in the past (with the closed source client), and was only detected because the lowlifes doing this forgot to forge reasonable processing times and instead sent them in with a 10 minute counter.
If a more intelligent idiot decided to boost his fame and sabotage the client by sending in bogus data, this would really sabotage S@H; and since the S@H authors are members of the holy church of 'security through obscurity', they're not gonna change their open-sourcing policy anytime soon.
Want to talk about it? (Score:2)
Re:Artificial Intelligence (Score:1)
Re:Are they ever going to release (Score:1)
Personally, and this is not trolling for flames, I think the SETI project is a waste of time in its current state.
The data you process may be processed multiple times by others, thus rendering your time spent worthless. I have heard reports that they in fact lost a portion of the analyzed data and resubmitted it to clients, as well.
Finally, by their own technical paper [berkeley.edu], they claim:
The sky survey covers a 2.5 MHz bandwidth centered at 1420 MHz
A 2.5 MHz frequency spread, covering 28% of the sky.
It just seems like a waste of time to me. Others may (and obviously do) see it differently, but to me, if they are only searching about a quarter of the sky, and even then only scanning a 2.5 MHz frequency range, that leaves a LOT out. I ran a client myself, but when I started reading more about the project and what it entailed, I decided that I could put my proc time to better uses.
Then again, SETI obviously has the use of expensive hardware, dedicated scientists, and hundreds of thousands of interested people willing to run clients to analyze the data, so obviously it has support. I don't think the entire SETI program is a waste of money; I just don't see the benefit of scanning such a small portion of the sky/frequency range.
Again, not trolling here, this is an objective opinion.
Re:How useful is distributed.net compared to THIS? (Score:1)
Re:Isn't there *something* worthwhile? (Score:1)
Bah! That only solves HALF the problem! (Score:2)
You're completely forgetting about all the cheaters who could just travel to some planet N light-years away, then travel back in time (N + Blargh + Glorp) years, where Blargh = the time it took said cheater to reach said planet, and Glorp is the time it would take the cheater to build an ultra-high powered radio transmitter and aim it to the exact point that the Arecibo telescope will be in N years, plus up to 24 hours of "padding time" to ensure that the telescope will be facing the right direction at the moment the signal arrives.
Then, said cheater simply pushes the Big Red Button labelled "Transmit Bogus Alien Mating Calls To Fools On Earth", sits back and waits.
Of course, he'll still have to rush back to Earth in time to compromise the SETI@Home network's security to ensure that the data packets containing the bogus interstellar message get routed to his computer alone!
Then, he sits and waits.... soon, the data packets are processed, and sent back to SETI... yes, my pretties... soon I^H...he... will receive a pat on the back and perhaps a small thank-you check for my^H^H...his..."contribution" to science...
Umm... the preceding situation was, ahem, completely hypothetical and I^Hthe hypothetical "cheater" would never stoop so low just to trick SETI@Home into thinking that I, Hypergeek^H^H^H^H^H^H^H^H^H^H^H^Hhe had just found extra-terrestrial intelligence.
Or maybe this hypothetical person-whom-I-don't-know would stoop so low... I mean, I have no idea-- it's all speculation, right?
[Mumble... that SETI check had better be enough to cover the costs of the interstellar space ship, the time machine, and the ultra-high-powered radio transmitter... otherwise Dad'll kill me when he gets his MasterCard bill!]
Umm... SETI@Home does offer a monetary reward for finding extraterrestrial life, right?
Uh oh. I'm dead. Unless, that is, anybody could figure out a way I could either:
Well, I'm dead Real Soon Now. There's only one way to prevent this tragedy from claiming the lives of other, innocent victims: OPEN SOURCE SETI@HOME!!
-Hypr Geeque
Re:Artificial Intelligence (Score:1)
The kind of project that the Great AIP site seems to be proposing is an entirely different idea. From what I can tell, they are proposing to create an online environment that allows AI's to interact and evolve and eventually "reach intelligence". We're talking about an artificial intelligence that should be able to interact, experience, develop, learn and create... not a big computer that can calculate and evaluate insanely huge numbers of chess moves.
The project itself concerns me a little, because I'm not sure that its creators have fully investigated and appreciated the magnitude of the task they are undertaking. The history of AI research shows a recurring pattern: periods of extreme optimism and lofty aspirations, followed by periods of cynicism as people's ideas fail to pan out as expected. Over and over we are confronted with the fact that this thing we call "intelligence" (which I notice the Great AIP still hasn't even *defined*) is far more complex and elusive than we previously suspected.
Not that I'm trying to say that AI research is pointless or that an open source AI project has no merit; I simply hope that people involved will do some serious research into previous attempts at finding "intelligence". Otherwise, I worry that they will simply repeat past mistakes.
For more information on the huge range of ideas and research falling under the category of AI, check out the collection of links at http://ai.about.com/compute/ai/ [about.com]. For a good beginner's overview (and a chance to shamelessly plug a friend's site) I recommend this AI Tutorial Review [utoronto.ca].
Re:A Personal SETI Story (Score:1)
I was a bit worried about taking away from other processes, but the client runs at priority 00, so no worries.
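The "runs at priority 00" point can be approximated on any Unix system: a compute-heavy client can demote itself to the lowest conventional scheduling priority before starting work. A minimal sketch (the `run_at_idle_priority` helper is illustrative, not part of the actual S@H client):

```python
import os

def run_at_idle_priority(work):
    """Lower this process's scheduling priority before doing heavy work,
    so interactive processes are not starved (Unix only)."""
    try:
        os.nice(19)  # raise niceness to the maximum, i.e. lowest priority
    except (AttributeError, PermissionError):
        pass  # os.nice is unavailable on some platforms; proceed anyway
    return work()

result = run_at_idle_priority(lambda: sum(i * i for i in range(100000)))
```

The work still finishes; it just yields the CPU to anything more urgent.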
Other platforms for: Seti@home @ play @whatever! (Score:1)
Whatever... My ideas, like my sigs, probably aren't even original.
Ah... fuck seti! (Score:1)
Re:Finally! (Score:1)
Remember that they beta tested the current incarnation of the client for over a year (and during that time, you *could* download the source) before releasing it.
If someone starts sending incorrect data back to them, they will have to start over again or simply stop the whole project.
So the security concerns they are talking about are not "we're afraid to be hacked".
It's "We're afraid that someone will make a really fast client that everyone will use but which returns unreliable (not proven) data."
(Actually, there are already hacked clients which are faster but return unreliable data...)
why did they build 686-version first? (Score:1)
You may find this interesting (Score:3)
Last night I attended a lecture/discussion by Dr. Kent Cullers, the Signal Detection Group Leader, one of the project managers in the SETI Phoenix Project, and one of the board members of the SETI@Home program (ever see Contact? The blind radio astronomer was modeled after him). After the lecture, I asked Dr. Cullers about this very topic. I told him flat out that I could not understand why a project with as lofty goals as SETI would willingly give a cold shoulder to thousands of potential programmers willing to donate their time towards developing more efficient clients and (potentially) search algorithms. I also told him flat out (in front of 100+ people) that while I've always supported the goals of SETI, their cold shoulder to the OSS community made no sense when you consider their claims of supporting scientific cooperation. His reply was refreshing, honest, and to the point.
The first thing he told me was that, despite public claims to the contrary, security is not SETI@Home's biggest concern. The REAL problem right now is that there is apparently no efficient system in place for transporting large blocks of information from the receiver setup to the SETI@Home hub at Berkeley. I'm not sure how many of you realize this, but SETI@Home is an affiliate of SETI, not a directly controlled part of the organization. He told us that SETI itself has a very large quantity of unprocessed data to sort through, but without the proper infrastructure to get it to the S@H users, there's no easy way to handle them. This IS being worked on though, so don't lose faith yet.
The second thing that he pointed out was that there are people inside of SETI, including HIM (remember, he's on the SETI@Home board), that want to see the client opened up. In fact, he pointed out that this very topic is already on his agenda to be brought up at the next board meeting. You know why he's on our side? Because, in addition to his many other amazing talents, he's a programmer! (yeah, a blind computer programmer...this man has just earned my respect for life). He described the hoops HE has to jump through to get access to the code, and he's a friggin board member! While he couldn't give me any promises, he was MUCH more receptive to the idea than any of the other SETI guys I've seen quoted on Slashdot.
In the meantime, he gave me this suggestion: Apparently a small part of the problem is that Dr. David Anderson, the head of the SETI@Home project, isn't entirely convinced that OSS developers could really bring about any significant improvements. Dr. Cullers stated that the best way he could think of to change Dr. Anderson's mind would be to email him suggestions as to possible ways to improve the performance and reliability of the client. According to Dr. Cullers, getting him to open it is simply a matter of impressing on him the fact that we can help, and that we won't just be getting in the way. I realize that some people may have a problem sending coding suggestions to a closed-source project, but according to Dr. Cullers it would definitely help further the cause.
And on one final note, I'd like to tell you all something. I walked away from that lecture with an entirely different take on SETI. They're not a tired old organization desperate for government funding anymore; they've got some cool new projects already underway and in the works for the near future (SETI Optical, new arrays for Phoenix, etc.) Dr. Cullers made a very realistic projection (explaining the math and technology) that Phoenix will have our entire galaxy scanned in less than 50 years (they haven't even really touched it yet). Please remember, SETI has little direct control over SETI@Home, so don't let your poor opinions of the S@H project ruin your views of the entire program. SETI is worthy of our support.
Already does (Score:1)
Re:You may find this interesting (Score:2)
And, if you get the chance to talk to any more S@H board members, here are a few things to bring up:
Re:You may find this interesting (Score:1)
Don't put too much faith in this development though...he's still just one voice in SETI, and his primary job is figuring out new ways of removing radio noise without degrading the original carrier signal. He doesn't have the power to open up S@H by himself, but the fact that we've got someone receptive to the idea on the inside, who also happens to be a skilled programmer, means that there's hope.
Then they are worthless, they do not understand (Score:1)
Re:OPEN SOURCE[ing] SETI (is a very, very, bad ide (Score:2)
And how, pray tell, do you enforce a compatible client? If the client is trusted, there is no way to ensure it is valid. You cannot trust a liar to tell you he is a liar. This whole discussion has been had before on the QuakeForge list, where the security architecture of an open source Quake was discussed. So if we cannot enforce a compatible client, then the few people at the project have to waste all their time chasing down and removing the accounts of bad actors, who will just make more accounts. Sure, the closed-source version is not a great solution, but I don't expect these guys to make more headaches for themselves for the heck of it.
"False negatives can be easily caught by issuing the same blocks to other clients."
Even if you only send out one duplicate, that already doubles the workload. I guess you could randomly sample, but then your trade-off is that you will miss some.
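The random-sampling trade-off described here can be sketched roughly. All names are illustrative; this is not the actual SETI@Home server logic, just the idea of re-issuing a sampled fraction of blocks to a second client and accepting a result only when both agree:

```python
import random

def verify_by_duplication(block, clients, sample_rate=0.5):
    """Process `block` with the first client; for a random sample of
    blocks, also process it with a second client and require agreement.
    `clients` is a list of callables that each compute a block's result."""
    primary = clients[0](block)
    if len(clients) > 1 and random.random() < sample_rate:
        duplicate = clients[1](block)
        if duplicate != primary:
            return None  # disagreement: discard and reissue elsewhere
    return primary
```

At `sample_rate=1.0` every block is doubled (the full 2x workload); lower rates cost less but let some bad results slip through unchecked, which is exactly the trade-off above.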
"False positive results are even easier to catch."
Being able to catch false positives won't be of much use if your whole network is down because it was flooded with false positives.
Open sourcing opens a big can of worms.
Open sourcing would be a good thing...but it has to be handled very slowly and rationally and carefully.
Jazilla.org - the Java Mozilla [sourceforge.net]
Re:OPEN SOURCE[ing] SETI (is a very, very, bad ide (Score:2)
I don't think that all of the issues you raise will be as much of a problem as you say. Why would someone invest time into using a hacked client that produces incorrect results? The most likely reason (going by the history of this sort of project) is to move up in the ratings for the project. But to do this, you need to build up a history of past efforts, which will be destroyed every time your hacked client gets caught and you get banned from the project. As long as the project organizers can detect hacked clients, the thrill of using them will quickly wear off when people realize that there's no payoff for doing so. As for detection, see below.
OK, so I was a little glib about that. Let me explain my reasoning. There are really two kinds of compatibility that we care about: does the client's communication with the server follow the established protocol (data formats, ports, CRCs, etc.), and are the client's results compatible with the computations which the project intends to be running. If the client doesn't use the right protocol to talk to the project server(s), then you can ban it right there and move on. It's easy to tell if a client isn't sending data in the right sized chunks, etc. You can ban clients automatically when this happens, with no overhead in manpower (other than setting up the initial system). If I hacked the closed SETI client right now to talk to their servers on the wrong port, you can bet they would drop me like a hot potato - it would be obvious that I'm incompatible. By definition, networking protocol incompatibilities are detectable, because if you can't tell that the client isn't following the protocol, then by definition it is compatible with the protocol. If, on the other hand, the client's calculations are incompatible, then see below and my previous post.
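The protocol-level checks described above (chunk sizes, CRCs) are mechanical enough to automate with no human in the loop. A hedged sketch, with a made-up fixed chunk size standing in for whatever the real protocol specifies:

```python
import zlib

CHUNK_SIZE = 1024  # hypothetical fixed result-payload size for this protocol

def validate_submission(payload: bytes, claimed_crc: int) -> bool:
    """Server-side sanity check: a client that sends the wrong chunk size
    or a bad CRC is provably not speaking the protocol and can be rejected
    automatically."""
    if len(payload) != CHUNK_SIZE:
        return False
    return zlib.crc32(payload) == claimed_crc
```

As the comment argues, a client that passes every such check is, by definition, protocol-compatible; whether its *computation* is honest is a separate question.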
Exactly correct, which is why no clients can be trusted. Closed source doesn't prevent hacked clients, it just ensures that hacked clients are only created by the more motivated. As I explain below and in my previous post, even a non-hacked client can't be trusted 100%. Distributed computing will need to be run on a basis of clients proving that they are trustworthy, rather than the server assuming they are innocent until proven guilty.
It's true that there will have to be duplication of error, but as I explained before, even with perfect closed-source clients and perfect good will on the part of their users, you would still want some level of redundancy. Somebody's processor could be overclocked, they could have bad memory, a random cosmic ray could strike because they have the box open, etc. If you really want to be sure of a scientific calculation like this, you have to run it multiple times (preferably with different but equivalent algorithms) and compare results. Redundancy slows things down, but did you really expect these projects to finish up next week? Remember, compared to the computing resources which SETI has available in-house, they're still getting a tremendous performance boost at very low cost.
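The "run it multiple times and compare" policy amounts to a quorum rule: a result is accepted only when enough independent clients agree on it. A minimal sketch (the quorum threshold is illustrative):

```python
from collections import Counter

def accept_by_quorum(results, quorum=2):
    """Accept a work unit's result only if at least `quorum` independent
    clients returned the same value; otherwise signal that the block
    should be reissued."""
    if not results:
        return None
    value, count = Counter(results).most_common(1)[0]
    return value if count >= quorum else None
```

This guards not only against hacked clients but also against the overclocked-CPU and bad-RAM failures mentioned above, since a single flaky machine is simply outvoted.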
Once attackers realize that there's no glory (ratings advantage) in doing so, they'll quit with this attack. A huge mass of false positives can be handled by temporarily banning the source IP address or address block, and contacting the source's upstream provider just as you would if you were ping flooded or hit by any other DoS attack.
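The temporary-ban response can be sketched as a per-address rejection counter. The thresholds here are illustrative, not SETI@Home's actual policy:

```python
import time

class BanList:
    """Track rejected submissions per source address and temporarily ban
    any address that floods the server with bad results."""

    def __init__(self, max_bad=3, ban_seconds=3600):
        self.max_bad = max_bad
        self.ban_seconds = ban_seconds
        self.bad_counts = {}    # addr -> rejected-submission count
        self.banned_until = {}  # addr -> timestamp when the ban expires

    def record_rejection(self, addr, now=None):
        now = time.time() if now is None else now
        self.bad_counts[addr] = self.bad_counts.get(addr, 0) + 1
        if self.bad_counts[addr] >= self.max_bad:
            self.banned_until[addr] = now + self.ban_seconds

    def is_banned(self, addr, now=None):
        now = time.time() if now is None else now
        return self.banned_until.get(addr, 0) > now
```

Bans expire on their own, so a legitimate user behind a shared address block isn't locked out forever.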
Agreed - which is why open source is a good idea in the first place for this client. For all we know, there could be a computational error in the client. Sure, this is unlikely (the SETI folks really know their stuff), but it's happened before with software released by very professional developers, and it can only be caught by a source code audit. In the worst case, SETI could use crypto in their networking layer and release that as a binary-only library, while opening up the computational parts of the client. This would allow people to experiment with the algorithms involved and the screen saver part of the code.
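The binary-only networking layer idea amounts to having a closed component hold a secret and sign submissions, so the server can check that results came through the official transport. A hedged sketch using an HMAC (the key and function names are made up; note this only raises the bar, since a determined attacker could still extract the key from the binary, as the trust argument above points out):

```python
import hashlib
import hmac

# Hypothetical split: the open-sourced client computes results, while a
# binary-only networking layer holds SECRET_KEY and signs each submission.
SECRET_KEY = b"example-key-not-real"

def sign_result(payload: bytes) -> str:
    """Performed inside the closed networking library."""
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def server_accepts(payload: bytes, signature: str) -> bool:
    """Server-side check that the submission was signed with the key."""
    return hmac.compare_digest(sign_result(payload), signature)
```

The computational code stays fully open and auditable; only the thin signing layer remains a black box.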
Agreed. I'm not advocating any overnight changes in SETI@Home. It would take some work to run a secure distributed computation with open sourced clients; possibly this would outweigh the expected advantage of the open source. You really won't know the benefits of open source until you try it, just like any other software project, so it's tough to make a case when we know all the possible problems but can point to few known advantages. I just feel that there are definite advantages for them to move in the direction of open source, and the difficulties of doing so are not so great as some would say, or at least are not insurmountable.
Re:Are they ever going to release (Score:2)