Science

SETI@Home Says Client 'Upgrades' Are a Bad Idea

bgp4 writes "New Scientist has an article on how 'upgrades' to SETI@Home clients are causing some trouble. Even though the upgrades speed the client up, SETI reps don't want people using them because they may introduce bad data. If SETI@home just open-sourced [the SETI@home client], they'd have better PR and a better client." Amen! SETI@home, are you listening?
This discussion has been archived. No new comments can be posted.

  • The article seems to think that people modified the client in order to help out the SETI movement. I think it's pretty apparent that the only reason to do this is to be able to bust through data faster, increasing your score. duh! :)
  • The Truth about SETI@Home [slashdot.org]

    This is a scientific experiment. It's not a race, it's not a demonstration of the power of computers or computer communities.

  • by Ageless ( 10680 ) on Thursday November 18, 1999 @05:04AM (#1522059) Homepage
    C'mon, people, think about this before you flame. Someone submitting invalid data can ruin this entire project for everyone. Open source software doesn't get tested until it's too late, and by the time a calculation bug is found in the code, thousands and thousands of entries could be invalid. At least think about the project some before you randomly spew out "Free The Source!!"
  • by rde ( 17364 ) on Thursday November 18, 1999 @05:05AM (#1522061)
    Yeah, I run seti@home. Yeah, I'd like it to be open source. But it's not.
    If you agree to help out, you do so on seti@home's terms. They say they want you to use this software, so you use this software.
    This is not about having the most units completed, or about being the one to find the signal, or about improving the software. It's about helping the project, and contributing to the body of scientific knowledge. If you want to help, use the sanctioned software. If you use anything else, then you're hindering. Go crack cyphers or calculate weather patterns or something.
  • Listen, I'm an Open Source advocate in my own time, but I get fed up with people picking at Seti@Home for not being Open Source. What's the big deal? Yes, it would be a better world, maybe, if Seti@Home went Open Source. Yes, it makes sense for a project like Seti@Home to aim for collaborative construction.

    But give me a break. Stop antagonising Seti@Home because they're doing it in a traditional way. Who cares? They've come up with a great concept to advance their research, and a lot of people bought into it. It's not like they're making money out of it. It's not like the publicity they get fills the bank account of a rich CEO.

    So stop picking at Seti@Home all the time until they become Open Source. Change is gradual, and we have to count our blessings and support Open Source initiatives when we see them, not declare a Jihad against everything non-OSS. Yes, most people including me would like to take a peek at the code. But let it be, for just a moment... Seti@Home remains a great project, whether it's Open Source or not.

    You know, there *is* great software that doesn't need to be Open Source... And I think it's not hurt that much because of it. I don't think Seti@Home would get a whole lotta good from Open Sourcing their code, because everyone could figure out how to send false data claiming they not only got an E.T. signal, but that it was singing Singin' in the Rain in slow-motion.

    /me calms down.

    "The wages of sin is death but so is the salary of virtue, and at least the evil get to go home early on Fridays."

  • by Jerf ( 17166 ) on Thursday November 18, 1999 @05:09AM (#1522064) Journal
    SETI is listening, and your arguments are rejected.

    This may sound odd to people participating in distributed.net, but SETI is not about processing data as quickly as possible. It's science. In science, you want to hold as many of the variables constant as possible, so that you can be sure they didn't create a false result (false positive OR false negative). All else being equal, speed is nice, but it is not the goal.

    Open source is not the answer for everything. Sure, if it were open source, some good patches might come out, but how many people would download the code, apply the patches that speed it up, and never have a clue that they just fatally broke the FFT result testing algorithm? Or, for that matter, the FFT algorithm itself? Or would simply use it to easily learn how to send result blocks without processing them?

    The fact of the matter is, even if you can improve the code, you cannot improve the code. (That's not a typo.) If you can improve the code, instead of helping SETI by processing keys faster, you bring yourself out of alignment with everybody else, create potential bugs in the experiment, and render all of your results suspect. SETI is science... distributed.net is engineering. There is a big difference, and science does things the way it does it for a reason. SETI needs the results to be as solid as possible. (If one of the hacked clients detects a signal, rest assured that even if SETI doesn't subject it to extra scrutiny as a result, some other scientist will.)

    SETI can't stop people from modifying the executable on their own systems, but I think the people calling for SETI to make it even easier for people to modify the system (not just your code, SETI is part of a system and subject to the interactions thereof) have a fundamental misunderstanding of what SETI is about.
  • If people would read the FAQ on SETI@HOME, it tells why they are NOT open sourcing it. If the client is open source, then the data that is sent back to the servers can be faked, and even the possibility of faked packets calls the whole study into question, since anyone could send back a bad packet to get points (and it would probably happen) and they would have no way of knowing.

    For another thing, there is no advantage to open sourcing the client, since last I heard they had more people than they need to process the data, and the bottleneck is them receiving data from the telescope, not getting data back from the users.

  • But locking up the source won't stop bugs from creeping in. The argument is over which model gets the bugs fixed faster; my guess is open source, if only because Seti's audience is hugely skewed towards geeks. You'd have plenty of competent programmers reviewing the code, and they could set up a system where the 'unstable' versions of the Seti client only handle old data wherein the results are known (which is what they probably have to do for testing purposes anyway).

    The only two arguments I can see for closed source are

    1) They plan to get a patent for the code at some point, or otherwise make money off it.

    or

    2) They are worried that someone will maliciously rewrite their client to return junk data. But if someone wanted to do that, they could probably do it without the Seti source.

    Dana
  • If you open source the seti@home client, you're going to have people hacking the code to boost their numbers, introduce fake "sightings" and generally muck up the data.

    In a scientific experiment like this you need more control -- open source is great, but it's not right for all projects.

    These people have enough work to do with few resources; they don't need to have to try to keep out all the little hax0rs who want to hack with the code and look cool.
  • But, it *is* about finding the signal. It may not be about *you* finding the signal, but the *whole point* is to track down a signal.

    To track down the signal, units must be processed. If we process signals faster it'll either let them eventually process more signals (larger spectrum, etc.) or use fewer computers. Either is a worthy goal. One helps science; the other simply saves power and lets people run other distributed projects.

    SETI@Home is hindering their own project by insisting on slow clients (they refused AMD's offer of help to speed the client up: the same work, done as carefully as before, in less time), and keeping the client slow just to hold onto a large number of users is a bad PR move. You could help a project that wants to process data, or you could help SETI, who wants a large number of users who incidentally process some data.
  • by Entrope ( 68843 ) on Thursday November 18, 1999 @05:12AM (#1522069) Homepage
    The insistence with which some people clamor for open sourcing everything really annoys me (and a lot of other people). There are very good reasons not everything is open sourced, and sometimes they're not even due to stupid licensing restrictions imposed by third-party code.

    For something like SETI@home (or distributed.net or whatever else you like), there's a very good reason to keep the clients binary-only. Namely, there is no oracle for verifying that a block of search space was actually searched by the client that claims to have searched it. Abuse of this was seen by the DES challenge and distributed.net before; open-sourcing SETI@home would lead to even worse abuses. Unethical people would modify the code to claim they had searched oodles of key blocks, ruining the results of the search -- and only so they could show off how "studly" their computer system is.

    Of course, maybe this concept is too hard for bgp4 to grasp. But for goodness' sake, it's in the SETI@home FAQ [berkeley.edu]. Whining about their policies on Slashdot isn't likely to change their minds.

    (Beyond the malicious introduction of false reports, it's very easy to "optimize" something like this and introduce numerical or algorithmic errors. Unless you are familiar with advanced theories of signal processing -- the sort of thing you'd find in graduate classes at a good university -- you would be in well over your head in looking at how the algorithms work. And there are enough bright grad students working on the average project to know how to optimize for all sorts of cases without the help of a bunch of open source zealots who think that the GPL is some magic potion that can be applied to anything to make it better.)
  • If this were a typical user program, then I'd say open source it. The fact is, this is a scientific experiment. The validity of the data collected is all that matters. If you open source the code, then you will have a number of people who screwed up their version of the client, because they didn't know how to do it right or because they just want higher stats and don't care about the data.

    The benefit of an open source client is lost in a case such as this.
  • by pslam ( 97660 )
    The only way they can be absolutely certain that clients aren't doing bad calculations is if they know that every client that goes out is from their own thoroughly checked source code. Open sourcing the client and letting anyone submit data with unchecked clients would defeat the entire point of the project. Seeing as a message from space is rare enough as it is, I wouldn't risk a buggy client destroying data that might never come again.
  • It's a good idea, but I'm increasingly skeptical. The clients WERE originally Open Source. They were actually GPL! But, through total mismanagement on the part of the people running the project, patches were never included and response times were very poor.

    No CVS was ever set up (though it was repeatedly promised, those times any response could be elicited at all!), and the frustration of those volunteer developers taking part was very evident on the mailing lists.

    There were a grand total of SIX snapshots ever made. SIX! At the very least they should have been daily!

    I'm not the least bit surprised that the frustration with a slothful and (frankly) incompetent development team has burst out with "home improvements". That was only a matter of time.

    As for SETI@Home's "reason" for not releasing the source - the need for a secure, validated connection - well, I think that's been blown out of the water. If their connection were so secure and validated, rigged clients wouldn't be possible. The fact that SETI@Home are complaining at all shows that they're no better equipped to secure a connection than the movie industry with its DVD encryption.

    Let's all cut the pretense, and stop this nonsense. I want the very best SETI@Home clients that modern computer skills can create, looking for real data, using real analysis, not cheap junk that goes faster because half the tests are skipped. (Notice the Gaussian stats vanished with the first official "faster client"?)

    Now, SETI@Home obviously also wants genuine security. So put your source where your mouth is, SETI folks, and RELEASE THE CODE! Your attempts have failed, miserably, so let those who know what they're doing take over!

  • What are you talking about? The SETI programmers already screwed up. Don't you remember that soon after their initial release to the public, they announced that there was a bug in the program that was causing every computer to process the SAME packet of data? Therefore, your conclusion that open source programs are somehow inferior products is flawed. If there were hundreds of open source programmers looking at the code, this bug would probably have been fixed before it was released. That IS reality.
  • SETI@Home is hindering their own project
    There's no argument that it could be handled better. But my point remains: if you're not happy with what they're doing, don't help. There are plenty of other distributed projects out there that are worthy of your cycles.
    I want to help seti@home. I realise there are faster clients that are probably as accurate, but seti@home don't want me to use them. So I won't.
  • While you will never get an argument from me that open source software is generally better, there are some things for which Open Source is not appropriate. Even Eric Raymond would agree; after all, one of his presentations convinced me of this fact.

    SETI@home is such software. It isn't some kind of disk system that just needs to run more quickly. It is a scientific endeavor. Unless it can be assured that the new algorithm being used in the patch produces EXACTLY the same results given the same input for ANY possible input, use of such altered packages could invalidate the experiment.

    While it is nice to have such things open, in this case it just doesn't seem prudent, given that the software is performing complex computations on the data. It is possible that some of the shortcuts used in the patch eliminate some calculation which almost never seems to produce any results, but could given certain inputs.

    In a case such as SETI@home, such things would be a serious problem.

    There is another reason the SETI@home team wants to keep things a little secret. There are groups who (for whatever reason) may want to either mask a possible signal, or create a false one. There are fanatics on both sides. If you read the FAQ on the home page, it states as much, and that steps have been taken to assure that such things are not possible. This may require some secrecy.
  • How about this: someone who tries to "improve" the algorithms gets them wrong but it finishes blocks faster and has a cool display that syncs with MP3s. The client becomes popular and SETI@Home is left with gigs of data which needs to be re-processed. Worse, the bad data may not be distinguishable from the good data so it's all ruined.
  • So, if they released their source, bugs would magically appear?

    Open source doesn't cause bugs. Fast development cycles cause bugs. End of story.

    If they open sourced the project they'd speed it up enough (proof: AMD and other organizations have offered both general speedups and processor-specific speedups of over 100%) to add redundancy, both to check the method by processing some packets with an alternate algorithm, and to guard against false packet returns.

    They aren't obligated to open-source the project, but I for one quit running SETI@Home when I heard about them refusing optimization help because they didn't want to process the packets too quickly. I don't care what their reasons are, I simply refuse to run something that's woefully unoptimized, especially when they were presented with alternatives. Now I'm running other distributed projects with my spare cycles. And I suspect I'm not the only convert.
  • by Anonymous Coward
    Typical "official" Slashdot sentiment: ~(Open Source) == Bad. There are quite a few legitimate reasons to not open-source everything, most of which will likely be entered as comments while I'm typing this, so I'll spare the discussion but for one: security.

    If people went through the considerable trouble of reverse-engineering and patching the client for speedups, wouldn't it make sense that they'd do even worse things to open code? And since most of these would-be hackers don't have any knowledge of astrophysics or the algorithms involved, they'd more than likely screw something up.

    Do you just randomly comment out offending lines of code when compiling unfamiliar software, or do you try to fix the bug?

    If you have the necessary knowledge to do something like this properly: then volunteer! And to Slashdot: get your head out of the sand! (Oh, and Hemos is a hamster.)
  • If people would read the FAQ on SETI@HOME, it tells why they are NOT open sourcing it. If the client is open source, then the data that is sent back to the servers can be faked, and even the possibility of faked packets calls the whole study into question, since anyone could send back a bad packet to get points (and it would probably happen) and they would have no way of knowing.

    You're missing the point. People are faking data NOW. It's done already. The client has been hacked.

    Open source is not a lack of security, it's a big plus. If they want a way to make it much more difficult to fake packets back, why don't they talk to d.net? I know d.net had to deal with that problem in the past. So come on already.

    I agree that you mainly want to be sure that the data is valid, but that does not preclude optimizing the client for speedups. They gave the source to Intel to optimize for P2 and P3's already. They won't give it to AMD (or haven't yet). They're deliberately withholding it from people who can make the improvements, and do it securely.

    All it's really doing is FFT's anyway, which really lends itself well to optimization on various CPUs. Just a load of floating point, although 3dNow! instructions probably won't help much, since those are single precision only, I think. You'd want double for something where you have that many significant bits.
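
    Just to make that precision point concrete, here's a two-line illustration in Python with NumPy (a stand-in only; I have no idea what the real client's code looks like):

        import numpy as np

        # float64 carries ~16 significant digits, float32 only ~7, so a
        # small term can vanish entirely in single precision:
        print(np.float64(1.0) + np.float64(1e-10))   # 1.0000000001
        print(np.float32(1.0) + np.float32(1e-10))   # 1.0 -- the 1e-10 is lost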


    ---
  • by athmanb ( 100367 ) on Thursday November 18, 1999 @05:19AM (#1522080)
    What we see here is just one more guy who simply cannot believe that open-sourced software is more secure than proprietary software (just like the old argument of some people - mainly the less mathematically knowledgeable suits - who can't believe in the security of OS encryption software; after all, the algorithm is readable by everyone and therefore "insecure").

    In reality, the original patch from Olli was a good chance to extend the bazaar model not only to processing power but also to client development, since it was a simple wrapper which didn't do anything special itself, but enabled other programmers to write their own Fat Fourier Transformation algorithms for special chips.

    If S@h had allowed patching the software (or had even made the client OS), we could now have dozens of different clients tuned for every imaginable Streaming Extension, DSP or multiprocessor environment. And the SETI@home project could have also been a starter for future similar projects, which could have used all the power of the specialized hardware out there.

    Now for the defense of the project heads: there have been multiple malicious attempts in the past by individuals who tried to get their rating up by cracking the protocol and sending bogus data to the server. This perhaps explains why they're this paranoid.
  • >To track down the signal, units must
    >be processed. If we process signals
    >faster it'll either let them eventually
    >process more signals

    They are already processing the signals at nearly twice the speed they can collect them.

    By speeding up the client a small percent, you hope for what? Processing at 3 times the speed it's collected? The client is good enough, and nothing is gained by spending countless hours on a small speed increase; the time could be better spent somewhere else.

    (This is like spending months of your spare time speeding up lilo or some other command. You might learn something doing it, but it is just a waste of time.)
  • Is that they're going to patent intelligent alien life when they find it! This way they'll hold an exclusive patent on all intelligence in the galaxy and anyone who tries to think will be charged a license fee.

    You know, I think it may be time to turn off the RC5 clients and let power savings mode kick in for the first time in months. It all seems so... silly somehow. :-)

    Science that isn't open source? What's next?
    -- I'm omnipotent, I just don't care.
  • Yeah. I'm not following bgp4's rationale.

    1) People tinkering with Seti@Home's code may be screwing it up.

    2) Therefore, we should let more people tinker with it.

    Huh?

    There are, no doubt, at least a few situations where open-source is not The Answer. Maybe this is one of them, eh?

  • I wanted to take a look to see exactly what the heck these patches do, but I'll be damned if I can find any. I've tried the best of search engines (several), and I can't find them anywhere. First time I've found myself unable to find something like this.

    Any tips, anyone?

    ---
  • Unfortunately most people participating in it at the moment think differently. And the client for some platforms really stinks...

    For example, they have not done anything to optimize the non-DGUX Alpha clients, despite the release of CPML and DEC ccc.

    1. Seti can hardly be convinced to open-source the client until they have means of verifying the blocks they receive.

    2. They hardly have the resources to develop a good verification scheme, and they are afraid to openly bring in people from the outside to do this, because they could jeopardize the entire experiment.

    Overall, unless someone sponsors a closed variant of No. 2, barring SETI from using the money on anything else, we shall not see any open source seti clients, and the performance on some really nice number-crunching machines will continue to suck...
  • OH PLEASE. Of all the articles I have ever read that were trying to bring up a good point, that article is way down on my list. The author is basically pissed because they gave the client to Intel to optimise for SSE and not to him to optimise for 3DNow! I don't blame them. Intel would have agreed not to let the source out, and didn't; SETI would have more reason to trust them, since Intel is a major company that can enforce an agreement like that and would never want to let the source leak out and get all the bad press. That is not the same as some small-time group that wants the source just given to them, and at most can give their promise that they won't send it out.

    The author also doesn't want to just stick to the points. He has many, many statements that are just pushing the point, complaining about lost CPU cycles and their effect on the environment and such.

    "They make their dedicated volunteers use massive amount of
    electricity to run their machines at full throttle while this goes on - it
    is not unjustified to say that SETI@Home may make a measurable
    impact on the CO2 buildup in the atmosphere that way. Charming."

    All in all, this is an article by a guy who didn't get the source (and shouldn't have), and who flamed SETI without so much as reading the FAQ, where they answer most of his arguments.

  • I'm seeing comments saying that SETI@home wants accurate results, not speed. Someone actually posted that "improving the code won't improve the code." That if you try to make it faster, you are out of line with the SETI@home's goals.

    How can I say this? Well let's try this: what the HELL are you talking about?

    Please, let's *think* here. Why are the goals of accuracy and speed mutually exclusive? Perhaps there are some methods that would reduce accuracy for speed. Fine. Let SETI@Home set their tolerance levels (i.e. how much significance to use in calculations, precision, etc.). Then let people who are very, very good at making C programs run really fast, and at making programs run reliably and accurately, take a crack at it.

    Come on, people. There is no reason that SETI@Home could not be improved by an open source approach. I agree that accuracy should not be sacrificed. But pardon me if I think the resources of the entire programming world just might be able to produce a faster and more reliable/accurate client. It's just a matter of raw resources. More programmers, more peer review == better, more reliable code. And it just might happen that it comes out faster in the process.

    ranting ends ...

  • The reason is, as someone has noted, that I can FORGE the packets so that they will look like normally processed data, while they are actually bogus. And I already know some people (by their nicknames) who have tried to do this and get a high rank. I have seen people processing thousands of packets in an hour, while, with the official client, you can process a packet in 9 - 10 hours on a P III 400 MHz (running NT or Linux, no big difference).

  • by Otto ( 17870 ) on Thursday November 18, 1999 @05:31AM (#1522089) Homepage Journal
    SETI is science... distributed.net is engineering. There is a big difference, and science does things the way it does it for a reason. SETI needs the results to be as solid as possible.

    The main thing with the SETI client _should_ be not to make sure it finds a signal, but to make sure it doesn't miss one. I agree with this part of it.

    However, this is data analysis. Pure and simple. Run an algorithm on a lot of signals. Easy. No one questions the algorithm. What is questioned is the implementation. If I can take an algorithm and optimize it to run faster on a particular processor, then I must still get the same results. Otherwise, the algorithm is no longer the same.

    And it's not a question of some hacked program finding a signal. ANY signal found will be subjected to the most intense scrutiny even before it's announced. Then it'll be scrutinized again afterward. And other signals from that sky area will be looked at, at various times going back years. No, a false signal will be eliminated pretty quickly. The thing to watch for is a false negative.

    I don't think open source is the cure-all answer here, but I think the people running seti@home have not given thought to the fact that people running the client are the kind of people who really, really know computers. They know algorithms. They know the internet. They are smarter than the average bear. And they don't like people telling them you cannot know this, or you cannot know that. The SETI people need to explain their position better, or count on a lot of people leaving the project.

    Just my $0.02

    ---
  • but enabled other programmers to write their own Fat
    Fourier Transformation algorithms for special chips.

    I believe you meant Fast, of course, but this little typo is really cute :o)

    And who knows, maybe the good ol' Frenchman was actually fat and didn't mind it a bit!

    To stay on topic, here is a link to clues about Fourier Transforms. [utk.edu]

  • The argument that open source is more secure is based on the fact that many people will see problems by looking at the source, and will fix bugs that they encounter.

    In the case of the SETI@Home project, the security is needed by the project, who don't get to see the modified versions. So it doesn't apply.

    Of course, if people distribute their modified versions, other people will find the bugs. But how will the already-submitted results be retracted? And what about modified versions that people keep to themselves?

    I suspect that future projects of this kind are going to have to include more dummy data to verify that incorrect algorithms are not being used.
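
    A minimal sketch of what such a dummy-data check could look like, in Python (every name and number here is invented, just to show the shape of the idea):

        import random

        # Work units the server has already solved offline, and their answers.
        KNOWN_ANSWERS = {"wu-0042": "a3f1c9", "wu-0117": "77d0be"}

        def pick_work_unit(fresh_units):
            # Roughly one assignment in ten is a known-answer unit.
            if random.random() < 0.1:
                return random.choice(list(KNOWN_ANSWERS))
            return fresh_units.pop()

        def check_result(client_id, unit_id, reported, suspects):
            # A client that fails a known-answer test taints all its results.
            if unit_id in KNOWN_ANSWERS and reported != KNOWN_ANSWERS[unit_id]:
                suspects.add(client_id)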

  • Is everyone out there spouting off without reading the article, or am I the only one who can't get the link to work? I can't even get a ping response from www.newscientist.com...

    Eric
  • by Otto ( 17870 ) on Thursday November 18, 1999 @05:36AM (#1522094) Homepage Journal
    For something like SETI@home (or distributed.net or whatever else you like), there's a very good reason to keep the clients binary-only. Namely, there is no oracle for verifying that a block of search space was actually searched by the client that claims to have searched it. Abuse of this was seen by the DES challenge and distributed.net before; open-sourcing SETI@home would lead to even worse abuses.

    You're right. Distributed.net had that exact problem. They found a way to fix it. The client is STILL open-source. See? Simple, huh?

    Check out http://www.distributed.net/source/ [distributed.net]...

    It's not entirely open source; they left out the part you would need to report results back. But that's the part we really don't care about. Everyone wants the algorithms optimized. I just think the SETI@home people should get a clue.

    ---
  • So, if they released their source, bugs would magically appear?

    No, but some dishonest programmer may try to either 1) hose the program because they think Seti is stupid or 2) try to fake data so s/he becomes "the person who discovered ET."

    Let's not get all ideological here, folks; instead let's try to think of some way to identify suspect blocks and cope with them.

    Then we can go to the Seti@Home folks with an entire plan, not just chanting the open source mantra.
  • SETI@Home is partially worried about people submitting unchecked data packets to boost their score, or not submitting a signal if found, simply to mess with the project.

    But, they're also concerned with the project working too well. If their weekly allotment of data was processed in a day, users would either sit idle for the rest of the week, or recheck data. Either is likely to make people look for another project.

    We need to have a general distributed processing client with modular cores. That way people can check for alien signals when the new SETI@Home client comes out, then evaluate various chess opening moves, or try code keys, or any other project that needs users.

    The basic shell of a distributed client is fairly simple, and in most cases is very similar. The system sends you data, tells you what to do with the data, and accepts processed data for return.

    The same client could send out SETI@Home packets and DES keyspaces depending on which project had a more immediate need for help, and what the client computer was capable of.

    Cracking code keys is a lightweight process. It takes very little RAM, and can be suspended instantly. Running FFTs on radio signals (SETI@Home) is a heavyweight process. It requires a lot of RAM, and isn't generally appropriate to run in the background while using the computer.

    A multi-core client could deal with this beautifully. When you're using the computer, a lightweight process is selected based on your preferences and need, and runs in spare cycles, not being noticed. When the screensaver kicks in indicating an idle computer, the lightweight process is swapped out for a heavyweight process like SETI@Home or chess analysis, to be run when a performance deficit on the client won't be noticed.

    The user could download processing cores for the projects they'd like to participate in, and list the jobs in order of preference. For instance, SETI@Home then chess analysis for the screen-saved modules, and Bovine code cracking for the spare-cycle modules.

    Then the system would crack codes between keypresses, and switch to SETI@Home when there are data blocks available, but switch seamlessly to the less time-critical chess analysis when the SETI@Home blocks run out.

    The system has the benefit that not every author of a distributed project would have to write a whole system, plus servers, and bug test it. You'd simply write a processing module and a server module to properly serve that data, and compile the client module for as many systems as possible, or distribute it as compilable code. Thus allowing SETI@Home to run on an SGI machine for instance, without the authors having to write a communication system that would run on that computer.

    Mathematical tasks, like cracking keys, or recursively iterating chess moves, or FFT calculations are fairly easy to implement in a small program, and are fairly portable, usually not relying on the OS for much.

    Currently, there are many ideas for distributed projects which never get explored, because they don't have anybody capable of coding a client as stable and easy as Bovine's, or of finding a large enough audience to make it worthwhile. Having one shell with multiple cores would allow anyone who can express their problem in code to reach a large audience of people ready to work on their distributed project.
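
    To make the shell-and-cores split concrete, here's a toy sketch in Python (all of the class and function names are my own invention, not any real client's API):

        class Core:
            """One processing module: crunches one unit of its project's data."""
            heavyweight = False
            def process(self, data):
                raise NotImplementedError

        class SetiCore(Core):
            heavyweight = True               # big FFTs, lots of RAM: idle time only
            def process(self, data):
                return "fft(%s)" % data

        class KeyCrackCore(Core):
            heavyweight = False              # tiny state, can be suspended instantly
            def process(self, data):
                return "tested(%s)" % data

        def choose_core(cores, machine_is_idle):
            # Screensaver on -> run a heavyweight core; otherwise a light one.
            for core in cores:
                if core.heavyweight == machine_is_idle:
                    return core
            return cores[0]                  # fall back to whatever is installed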
  • Some people complain that open sourcing S@h would lead to more falsification of results. However, this is already occurring, and S@h will eventually be forced to do more and more redundancy checks anyway (i.e. sending the same packet to multiple people). Another interesting idea is that S@h will run out of packets to process sooner than expected, and for this reason does not WANT to speed up the project. I think that overall, open sourcing the project and then doing multiple redundancy checks will result in both faster and more accurate results.
  • Okay... so the only way they can be absolutely sure the client is working correctly is not to open source it. But it's not as if they're trusting the client completely; they will check the data that reports a signal (time and time again, I suspect).

    But what if they open source it? People will know how to fake the return data. You know someone will do it. So do they check the occasional block of data, or the (most likely) much more frequent suspect blocks?

    It's no choice to me.
  • This assumes too much of too many programmers.

    Just because one is a good programmer, that does not immediately imply that one is good with signal processing, or algorithm design.

    The worry is less about malicious data than about well-meaning people who change the source without realizing that they have modified the fundamental algorithm in some manner.

    I remember doing a school lab which required me to write code solving some partial differential equations. Unfortunately, the correct algorithm was not stable, and the stable algorithm introduced errors. Getting the stable algorithm to produce "correct enough" results required a great deal of tweaking, and slowing down of the code.

    None of this is as straightforward as I have made it out to be. So let's just let the SETI@Home guys do the correct, rigorous thing. Sure, they are not perfect. But in that case, THEY get the blame, NOT YOU! :)


    1. If you can't tell the difference between moderation and default levels, I wouldn't trust you to be competent to meta-moderate The Onion, let alone Slashdot. Moderation requires intelligence.
    2. Metamoderation is NOT a way to get your voice heard, according to the metamoderation guidelines. If you aren't willing to even read the rules, I hope CmdrTaco bans you from moderation & metamoderation for good. Those weren't written because the admins here enjoy scrawling long documents.
    3. Opening the source won't disrupt anything. It is a matter of placing a tarball in a directory. That's IT. Total. Nothing more. If they want to ensure experimental clients don't disrupt any work, simply take out the encoding routines. The standard server'll reject anything those clients send. Only an idiot would imagine that a 30 second task would "disrupt the project for weeks".
    4. Technically, opening the source is not only beneficial, but also a legal requirement. Any code from the original client that is present is GPLed, and the GPL -must- be honored.
    5. If your only contribution to this thread is to be puerile and abusive, go away and play on some other board. And take your "First Post" friends with you.
  • Just look at what happened with distributed.net's RC5 client. People would modify the source so that keys weren't actually being tested, thereby increasing their keyrate. I completely agree with the SETI group's choice. Open sourcing clearly would not solve the problems with the SETI@Home.
  • Alright, I can kind of agree with some of the reasons why open source might not be the best idea for this project, but I don't think one should be so hasty as to reject the possibility so quickly. We have all seen the benefits of open source for Linux. Why are so many people so quick to say that even though it worked for Linux, it can't work here? The very fact that so many people here at slashdot have an opinion on the matter shows that there are many potential developers out there who are already helping SETI by discussing the problem.

    One of the complaints against releasing the source was that false data packets could be sent. By this, I assume you mean a 'false positive', since what would be the point of doctoring the code to produce the same negative result that most other people are already getting? So some hacker modifies his code to detect an alien signal. Well, considering that this should be a rare event, all the SETI people have to do is recheck the data themselves using trusted methods. Since the number of false positives is small, this would not tax their resources much. So then what motivation would a hacker have for doing this, knowing that he will be shown to be a fraud in the end? I think that most people who would spend their time working on the code would actually be more inclined to find a real signal than to waste their time with such childish behavior. Yes, there will always be exceptions, but don't tell me that they can't already reverse engineer the code to do the same thing now.

    Second, why not introduce some redundancy into their method? If it is true that their speed is limited by some other factor, such as the rate at which they are receiving data from their telescope, then why not send out identical packets to more than one computer? That way, even if one person does decide to mess with the data, the chance that both computers (or three or four) would screw up the data in exactly the same way would be very small.

    Some people have objected to open sourcing the code because that "is not how science works", but I disagree completely, and being a NASA scientist myself I think I have room to talk. Science, in fact, works by redundancy. The fact that more than one group of people can analyze the same set of data and come up with the same results is one of the cornerstones of the scientific method. Has anyone ever heard of cold fusion? Reproducible?

    So my point is that already in this discussion, several people have come up with some good ideas for improving their existing code. Open source is already working and we haven't even got a hold of the source yet!
  • I don't think you really understand how to optimize code.

    You optimize the sections of the code that take the most processor time, and the programs you use the most. If lilo completes in milliseconds, don't optimize it. If your whole office app is a pig, optimize the parts that the user spends the most time in. If your shell is fairly quick, yet you spend eight hours a day working in it, it might be worth optimizing those input routines even if it's only a 10% overall speed gain.

    With SETI@Home, sure, they run out of data before they run out of time. But, they don't have to perfectly match. By your logic we should slow down the client so as not to waste any cycles on unimportant non-SETI@Home processing.

    I'd prefer to optimize SETI@Home and either require fewer users, or process the week's signals faster and be able to run other distributed projects.

    Because SETI@Home *is* a valid target for optimization. It sucks an incredible number of cycles. And now that we're used to running distributed projects, those aren't useless cycles anymore. You can see SETI@Home as competing directly with other projects for computer time.

    It's much the same as a slow subroutine making the whole application slow. But in this case the subroutine is SETI@Home and it's making the application of providing computing resources to as many distributed projects as possible run poorly.
  • Not releasing the source won't make a difference. People who badly want to send fake results will be able to do it; all it takes is reverse engineering the communication protocols. TO ALL THE PEOPLE WHO DON'T UNDERSTAND THIS: GET A FSCKIN' CLUE!!!

    The only proven way to provide any kind of security to the project is to provide some kind of check on the accuracy of the data sent. That involves checking every client's data once in a while; obviously the more often, the better. Of course, that will take away some raw processing power.

    A way to do it: get every client to register, assign a certified key back to the client, with which it signs its responses, and send 1/10th (for example) of the results back to some OTHER client; check that the results match, and if not, investigate and shoot them down, possibly pursuing legal action. Also add some checks WRT IP addresses, to verify that the same key does not originate from too many different IPs, etc...

    That can't be perfect, but it's almost impossible to defeat, at least compared to the current scheme. (Unless I missed something).
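
    Here's a sketch of the signing half of that scheme in Python (names invented; note the secret lives on the client too, so signing only ties results to a registered identity -- it's the 1/10th cross-check that actually catches bad data):

        import hashlib, hmac, os

        def register_client():
            # Per-client secret, issued at registration and kept server-side too.
            return os.urandom(16)

        def sign_result(secret, unit_id, result):
            msg = ("%s:%s" % (unit_id, result)).encode()
            return hmac.new(secret, msg, hashlib.sha256).hexdigest()

        def verify_result(secret, unit_id, result, tag):
            # Server recomputes the tag and compares in constant time.
            return hmac.compare_digest(tag, sign_result(secret, unit_id, result))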

  • That's not a way to handle it. If everyone who has a problem with the specific implementation goes away you end up with SETI@Home being critically underpowered. And I like what SETI@Home is trying to do. I just wish they didn't make it an either-or choice of helping them or helping other projects. A well written client would allow people to participate in more computing projects with the same number of spare cycles.
  • The third reason is that it is a research project, not a software development project. If you're doing research and you don't have complete control over the environment, you're just producing junk. These guys are doing exactly what they should be doing.
  • The Seti people do not want anyone using new clients for one reason: their servers cannot record and split the data into packets quickly enough. Right now they are receiving more completed packets than they know how to process. It seems that their little idea caught on a little more than they expected, and they just can't keep up.
  • Seems to me that the Seti@Home people are caught between a rock and a hard place. In order to attract lots of cycles, they've made a contest out of it. That's what posting team rankings on number of results returned amounts to, after all. And that inflames the folks who know they could rise in the rankings if only they could optimize their clients.

    On the other hand, they prize uniformity of results over speed. Hmmmm.

    Well, if they already have too many volunteers, the answer is simple: dump the contest and make it a lottery with one winner. No more rankings, no more teams. Just tap the guy who finds the aliens.

    All the contesters will drop out and go back to cracking RC5. The hard core will remain, consisting of the ones who agree that uniformity is more important than speed, together with the ones who seriously want to mess with the project. There'll still be the need for security but at least most of the results will be consistent.
  • First of all, where exactly do you find this "official" slashdot sentiment? I looked everywhere and I couldn't find it. I'm sorry, but the very fact that you posted this comment shows that there is no 'official' slashdot sentiment, since you are a slashdot reader and you have a different opinion.

    You mention that people have already reverse engineered the code. Well then, this already shows that even if they don't release the code, the security is bad. Open source could actually strengthen the security, as I mention below. You conclude that releasing the code would only decrease the security, but how do you know that? What if for every one hacker that is out to screw up the code, there are ten programmers who can improve it? It's not obvious to me that the outcome would be lessened security.

    I suggest that some method of redundancy be introduced so that more than one computer be used to analyze the same packet of data. This way, even if one guy does fsck the data up, there are a number of backup computers to double check it. This is one of the fundamental principles of science, redundancy and reproducibility.

  • Your original post was moderated as "Insightful." I wouldn't listen to the opinion of someone who is too dim to tell the difference between a 2 and a 3.
  • They should be checking a random sampling of returned blocks to make sure the processing is correct.

    This could catch both malicious forgery and accidents - perhaps there's a bug in the code such that a 450MHz K6-2 returns false answers... How would they know?

    And the idea of an open client means, at least to me, that we'd submit changes to SETI@Home, and they'd release the next approved client.
  • Funny you should ask... :)

    My degree was a BSc in Mathematics and Computing, with honors. (English degrees come from certified Universities only, not American Football stadia.)

    Going back to A-Levels, I studied physics (with one of my special fields being astronomy). Prior to that, I'd built my own radio telescope, and even had the opportunity to control the main dish at Jodrell Bank.

    Back to more recent times, I've been playing with AIPS and AIPS++, and various other astronomical software. It's fun stuff, though astronomers don't necessarily make for the best coders.

    Scientific experiments require controlled conditions, that is true. However, it needs to be borne in mind that the experiment is in SETI observations, NOT in distributed computing. The software is therefore not a variable in consideration. (Nor could it be. You could never devise a control.) Releasing the source would therefore not invalidate the experiment.

  • It doesn't show the time of the moderation. You presume that it was marked "insightful" before my post, but as my post makes it crystal clear that the post was still at a 2 at the time I replied, your presumption is very clearly false.

    Anyone who makes a presumption for the sole purpose of provoking a flame-fest deserves to get burned in the process. Don't play with matches.

  • by WNight ( 23683 ) on Thursday November 18, 1999 @06:30AM (#1522127) Homepage
    Ok, I'm done. Don't know what took you so long.

    Process the data packets, hash the results with MD5 or SHA, then return the hash along with whatever other results are needed.

    SETI@Home picks some random number of packets and sends them out to a second user, checking to see if both users return the same hash value.

    And this doesn't just stop malicious forgery, but it also stops bugs which may cause incorrect packets to be returned some of the time.

    This does require keeping a list of the packets a user processes, but this shouldn't be that hard, especially because they start fresh every week (month?) with the new data.
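
    A rough sketch of that spot-check in Python (names invented, SHA-256 standing in for MD5/SHA):

        import hashlib

        pending = {}   # unit_id -> hash reported by the first of the two users

        def result_hash(result_bytes):
            return hashlib.sha256(result_bytes).hexdigest()

        def on_result(unit_id, result_bytes, double_checked):
            h = result_hash(result_bytes)
            if not double_checked:
                return True                    # single-copy unit, accept as-is
            if unit_id not in pending:
                pending[unit_id] = h           # hold until the second copy arrives
                return True
            return pending.pop(unit_id) == h   # mismatch => one client is wrong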


    Where this gets really hard is with Bovine, and the code breaking.

    The message is known to the users, so the cracker (bad guy) knows what result to watch for. And he knows what result to return (yes, or no). Easy to forge.

    Bovine is also more likely to get a user hiding hits than faking misses. For misses, they simply use a hash of the first bytes of the 'plaintext' after each decryption and compare these. For a hit, they need to hope that the packet with the hit is one sent out for independent verification.

    What they need is a cryptographic way of hiding the true results from the user.

    As in, if you have C, a cyphertext, and P, a plaintext, and want to find K, the key that turns C into P, is there some transformation you can make to C and P that allows the same key to function, but masks the cypher and plaintext?

    This isn't as impossible as it sounds... Imagine a rotation cypher with a numeric key.

    Let's imagine the key is 3.

    Plaintext 'CAT' becomes 'FDW'

    Add the transformation -1 to both

    'CAT' becomes 'BZS', 'FDW' becomes 'ECV'

    Now, imagine the Bovine group wanting to test their users. They think people will hide a success, making the project continue.

    If the user knows that the random-looking cypher text decrypts into 'CAT' they simply watch for 'CAT' to be decrypted and they've found the key.

    So, Bovine sends T(C1) and T(P1), the transformed (through T) cyphertext (C1) and plaintext (P1). The user knows that T(P1) is the desired result, 'BZS' in this case. But they don't know if that plaintext is the contest winner, or a loyalty check. So they decrypt T(C1) and, lo and behold, it matches T(P1) ('ECV' decrypts back to 'BZS') and the software says 'We did it!'.

    If they report this set of keys as a failure, Bovine *could* have planted this key, and shut them down when it isn't reported. Or, it could be the real key. They'd never know.

    So they're compelled to be correct, because if their answers don't match the ones Bovine expects, Bovine doesn't trust their other answers and their stats are thrown out.

    So, we need to find the transformation (T) such that the same key K still takes T(C) to T(P), for all cyphertexts and all plaintexts.

    There may not be such a transformation that does this, but if there is, this is the ultimate answer for Bovine, and other projects like this.
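
    For the rotation cypher, at least, the transformation really does exist. A toy check in Python (an illustration only -- a real cypher like RC5 does not commute with letter shifts this way):

        def shift(text, k):
            # Rotation cypher over A-Z; negative k shifts backwards.
            return "".join(chr((ord(c) - 65 + k) % 26 + 65) for c in text)

        P, K = "CAT", 3
        C = shift(P, K)                     # 'FDW'
        T = lambda s: shift(s, -1)          # the masking transformation
        assert shift(T(P), K) == T(C)       # the same key links the masked pair
        print(T(P), T(C))                   # BZS ECV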
  • In science, you want to hold as many of the variables constant as possible, so that you can be sure they didn't create a false result

    Thank you, I couldn't have said it better myself. I don't think most people realize what a disaster it would be to have everyone mucking with the seti@home client.

    To everyone who thinks the client should be open sourced, I'll be blunt: Open sourcing the client would do no less than destroy the seti@home project. I'm not joking. The scientific validity would be gone. When you add those patches? You are hurting seti@home.

    --GnrcMan--
  • Look, S@H is trying to deal with the demand of more people than they expected using the clients. Mr_Ust knows what he's talking about in the previous post but it must be said again...

    ...S@H cannot keep up with demand for packets...

    They are only making about 1/3 as many as are sent out. In addition, they only have a limited amount (2 years?) of data-gathering time allocated to them. If we go any faster, we will just get more repetition in the packets sent. We can't find ET any faster by speeding up the clients. The only way to go faster is to collect more data from the telescope. Unfortunately, the S@H site tells us why this is not possible. Maybe we should all read the info available from S@H before going on and on about the "inefficiency" of the clients.

    Creepers, they are asking for our help with their project and we make it sound like they should be flogged for not being "perfect." If you don't like it, don't participate in it.

    --

  • by bgarcia ( 33222 ) on Thursday November 18, 1999 @06:42AM (#1522137) Homepage Journal
    ...but SETI is not about processing data as quickly as possible...
    Absolutely, positively false.

    If that were the case, then SETI@home would simply do all the computations on their own machines, and not ask for help from thousands of systems distributed all over the world.

    but how many people would download the code, apply the patches that speed it up, and never have a clue that they just fatally broke the FFT result testing algorithm?
    This isn't rocket science. You have a central repository that takes in good patches. Clueless people know to just download good code from the repository. You have regression tests to test these patches.
    If you can improve the code, instead of helping SETI by processing keys faster, you bring yourself out of alignment with everybody else...
    What in the world do you mean by "alignment"? And why is it a bad thing?

    Code that does the same thing, only faster, is known as better code.

    What SETI@home needs to do is add some security and checking to their system. Double-check results every now and then. If a particular client is found to have given a bad result, then remove all results obtained from that client.

    Telling people to not upgrade their clients just isn't good enough. It's quite easy for someone to maliciously hack a client to produce bad output. You need a system that can protect against this anyway. And once you have this system in place, then you'll also have protection against buggy clients. Then there should be no reason not to open-source the damn thing.

  • While I agree with you, I think you, and a lot of the other "Open Source SETI now" zealots, have missed the point. This has more to do with scientific method and the ability to convince other scientists and observers than it does with technology, algorithms, open-source and computers. Seti@home is an experiment, not a software package... it just happens to use software to do the experiment.

    If a signal is found, you're right, it will be analyzed to death before it is announced. False ones will be rejected. But with any experiment, you must be sure that ALL conditions are the same every time it is repeated, to be sure the results are as accurate as possible. If the source is opened, they have just lost that control... they cannot be sure (100%) that the analysis is repeated under the same conditions. As we have already seen, some people have tried to boost their scores by sending the same packets over and over or by sending false packets. While these will get filtered out, the project can still say their results are trustworthy, because all analysis is the same - same implementation, same algorithm. Imagine if it is opened up and a signal is found. It will be processed, double checked and checked again. But when they announce it, there will be a large number of people (and scientists) who, for whatever reason, will refuse to believe it. They will then point to the lack of control over the source as a place where the data was either "messed up to give a false positive" or deliberately faked. As they say in court, they will have a "reasonable doubt". This is too important to have that kind of possible outcome.

    Put it in another context: a group is experimenting with an AIDS (or ebola or something like that) vaccine and testing it in petri dishes (this is an example, so excuse me if this doesn't really happen). They set up a bunch of dishes they had control over (washed and prepared themselves). They get some help from people outside the experiment who can make the dishes clearer or easier to handle or whatever. They check and double check and find that in a lot of the dishes the virus is killed quickly and in some it is not. How do you interpret these results? You don't - the experiment is invalid, because you cannot safely say which petri dishes are which, and you don't know whether the results are due to your drug or due to some bleach left in the dish by those who prepared it (by accident or on purpose). You can only be sure about your own dishes, but you can't tell which ones are yours. The solution is to use only your own dishes.

    I'm sorry for being long-winded, but I'm trying to stress a point. So open source can make the client more efficient and faster; who cares? Being sure of the results is what matters.

    The sad part is these patches may have already made the experiment invalid for these same reasons... considering the size and reach of the net, who knows how many "enhanced" clients are out there? It is possible, if a signal is found (or missed, in which case we may not realize it), that all the redundancy checks were done on clients with the same or similar patches with the same calculation flaw, all because the project doesn't have complete control... and because some 19 year old wanted to be #1 on the SETI@home list.

    Hey, if you don't like it, don't use it. It's their experiment; we must participate on their terms.
  • by drox ( 18559 ) on Thursday November 18, 1999 @06:57AM (#1522149)
    Why are the goals of accuracy and speed mutually exclusive?

    Well, here's a demonstration you can try at home. Find a recipe for, say, chocolate chip cookies. You want more speed? Double the heat setting on your oven, and cut the baking time by half. Watch what happens. Your output is no longer accurate, even though the input (yer ingredients, order in which you combined them, etc.) is the same as what was called for in the original "source".

    Now open-sourcing recipes is a fine idea. Go ahead and experiment in the kitchen, and if you can come up with a faster way to make cookies that taste as good as the slow-cooking ones, more power to ya. But don't expect Betty Crocker to print your recipe in her next cookbook until she gets to test it out herself.

    The folks at Seti@home might be better served if they open-sourced their code. It seems like a good way to improve it. But one programmer's improvement is another programmer's bug. And if someone's "improved" Seti@home code is fast but sloppy, and gives unacceptable results, the folks at Seti (and all of us who care about the project) lose out big-time.

    It would be nice if the code were available to be tampered with, fine-tuned, and "improved". It would also be nice if only "real" improvements - not quick'n'dirty shortcuts - were used in crunching the data. But how to tell? We don't live in a perfect world. Open the source and big improvements - as well as tiny-but-devastating bugs - may follow.

    There is supposed to be one accepted program for crunching Seti's data. Arrange it so several versions are running, and you introduce more variables into the experiment. Not good.
  • >SETI can't stop people from modifying the executable on their own systems,
    >but I think the people calling for SETI to make it even easier for people to modify the system

    How can closing the source prevent me from spoofing a valid signal with a client I write and distribute myself? It can't. In short, you can't ensure that you are getting valid results back simply by saying no one can know HOW your client works, because you can't control the source of the packets. I can write a bogus client that mimics your interface.

    There is no inconsistency in having the client open and still ensuring that only valid clients are used - these are separate issues entirely.

    According to the statement I quoted above, SETI's experiment is already blown if they can't stop people from modifying the executables, so why do they continue? Because they (I hope) have checks to see that the data is valid.

    Point is, no one has to hack the closed client; they can write their own. The only way to secure the client is to find other ways to validate the data. Validity thus has nothing to do with openness, so open it and you gain the insight of the best programmers in the world on performance, and probably on security too.

    This isn't about speed, or accuracy in the scientific processes that SETI is using; if it were, then no scientific endeavor ANYWHERE could use open source software. Guess all those universities will have to get rid of Linux :)

    I don't think anyone is advocating that SETI release control of the project, just suggesting that we leave the astronomy to the astronomers, and the programming to the programmers.

    >If you can improve the code, instead of helping SETI by processing keys faster,
    >you bring yourself out of alignment with everybody else, create potential bugs in
    >the experiment,

    No one is saying that giving any goofball the ability to screw with your algorithms will help you, but they ARE suggesting that you could get a lot of help with the valid client that YOU distribute/endorse. If you choose to abdicate that responsibility, well, then you are no better than the suits who think that Open Source is a panacea that means they don't have to work anymore either.

    Security through obscurity, you'd think the scientific community would know better.
  • Ok, I understand what you're saying now. Yes, I agree entirely that the frequency of such knowledge isn't going to be particularly high. I also agree entirely that there will be a tendency for certain coders to take short-cuts to boost apparent ratings.

    IMHO, the first could be dealt with by having a core team of coder volunteers who were definitely skilled in astronomy, maths, physics and computing who could isolate segments of the code, such as the maths routines, etc.

    If these -segments- were released, the only people likely to be interested in working on them would be those who would find them useful. There would be no "glorious high scores" to aim for.

    It would also solve the problem of multiple versions. Not only would a rival version need to re-assemble the fragments, they'd need to fill in the gaps for any unreleased code, plus the code for validating the client to the server. IMHO, this can be made sufficiently difficult to ensure only "blessed" clients exist, but still benefit from the combined eyes of the Open Source community.

    How does this sound?

  • Wow, ok, I followed about half of that. I'm not a crypto guy at all (my idea of an interesting book on crypto is Foucault's Pendulum). Could someone else comment on this???
  • If there's one thing S@H is not, it's science.

    How so? Please explain. Seriously.


    ...phil

  • When we compare Seti to some other project, such as RC5, we can see that maybe there's such a thing as too much optimization.

    RC5 has a known number of keys to check, and that number is truly enormous compared to the number of keys per day that can be done. Thus, there is every incentive to optimize the hell out of the client.

    By comparison, Seti@Home is already processing data faster than it can be created. Optimizing the client won't make the analysis go any faster - maybe they would be able to check each work unit three times instead of two. But where's the incentive to optimize?

    S@H is a victim of its own success. What will probably happen is that the number of users will start to drop off as people become bored. This isn't necessarily bad.


    ...phil

  • by hanway ( 28844 ) on Thursday November 18, 1999 @08:11AM (#1522181) Homepage
    There are several intertwined issues here, so it's not easy to boil everything down to a simple "it should[n't] be Open Source" answer:

    • People are conditioned to want their computers to run faster. The amount of time and effort some people spend to overclock and benchmark their computers is often far out of proportion to the actual benefit they get from their computer's speed. It's not surprising that people treat their SETI@home processing speed as a benchmark.
    • The fact that SETI@home puts up statistics that have turned this experiment into a competition to complete the most work units reinforces that behavior.
    • At least one company (perhaps SGI, but I can't remember for sure) has mentioned their SETI@home crunching speed in some marketing literature, again emphasizing speed over quality of results.
    • As several in this discussion have pointed out, making the clients faster won't help the project because the bottleneck is that SETI@home can't prepare the units fast enough. However...
    • If the client software were improved, clients could potentially do more sophisticated processing in roughly the same time, improving the science. However...
    • This could make the clients seem even slower than they already are, which wouldn't sit well with the kiddies who are more interested in their rank or how fast it makes their box seem than the science involved.
    So what lessons could be learned if this or a similar experiment were to be done again?
    • Deemphasize the ranking of work units completed. Perhaps the concept of a fixed work unit could be dropped altogether (i.e., make the "size" of a work unit arbitrary so that counts couldn't be compared). This would help keep the client from being used more as a benchmark than for its true purpose.
    • Plan for hacked clients and spoofed results by sending out enough test work units and by cross-checking results with multiple clients enough to have confidence in the results backed up by statistics.
    • With enough cross-checking, you might as well Open Source the client.
    I would be interested to hear if there is a (theoretically) foolproof way to use distributed clients to produce results with confidence if you accept that some clients will be spoofed.
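
    (As a rough illustration of why cross-checking helps - all numbers here are invented, and this is nothing like the real SETI@home protocol - suppose a fraction p of clients silently return bogus results, every unit is sent to k independent clients, and a result is accepted only when all k copies agree. Even in the worst case, where bad clients all return the same wrong answer, the chance a bogus result is accepted is at most p to the k:)

        # Hypothetical back-of-the-envelope sketch; p and k are made up.
        def undetected_bogus_rate(p, k):
            """Worst-case chance that all k replicas of a unit land on bad
            clients that return the same wrong answer."""
            return p ** k

        for k in (1, 2, 3):
            print(f"k={k}: {undetected_bogus_rate(0.05, k):.6f}")
        # k=1: 0.050000
        # k=2: 0.002500
        # k=3: 0.000125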
  • Point well taken.

    Of course, what does this have to do with Open Source? If you don't like the way they do it, create your own version. I may not like Word, but I'm not demanding MS open source it; I'll use KOffice, StarOffice et al or create my own. Let them identify the slice their way first, then check it out...don't mess with it midway through the experiment. Your proposal is to do the experiment a different way. Do it, but changing the experiment half-way through makes no sense.

    And I say once again:
    "Hey, if you don't like it, don't use it. It's their experiment we must participate on thier terms."

  • The SETI people, like the distributed.net people, want to make it hard to report that you have searched a particular block without really doing so. That is why they haven't released the source.

    Unlike the distributed.net people, the client is running on real data of which there is a limited supply. They don't want it to be any faster, because they'd start to run out of data blocks. They don't care how many points your team gets - they want to do science, and from that perspective they want everyone using the same algorithms - they don't want them "optimised" by individuals *at all*. The risk of false negatives or bugs introduced by stupid programming is too great.

    Possibly you should work out what people are trying to achieve before being rude about their cluefulness.
  • Uhm...then don't run it. Your choice. Me? My machine behaves just fine, and I don't care how fast my work units get processed. I'm doing this to help out a program which lost all its funding in 1993 because some senator couldn't get a radio telescope in his constituency (or some other equally stupid reason). This is for the science. If your moral code doesn't allow you to run it, don't.
  • I have a graduate degree in mathematics, a background in numerical analysis, and I've worked as a programmer and scientist for years in a quantitative science department. Let me pose a random scientific-programming example so that non-scientists can have something to chew on.

    Suppose you want to maximize a function of n variables, where the function is too difficult to differentiate (usually much too difficult). You have transformed these variables to be as nice as possible. Basically, to find the maximum, you will begin a search at some predefined point in n-space (or k predefined points, doing k separate runs). You iteratively attempt to increase the function's value at these points. In each iteration, you might take small perturbations in each direction to try to estimate a gradient. After determining the local topography, you then move your point in that direction, taking a small step, a large step, or maybe a random jump, depending on previous movements or the state of your algorithm. In the end, you will arrive at a local maximum, an inflection, or a boundary. You hope that this is the true maximum for your domain, but the best you can hope for is a heuristic solution.

    This sort of problem, depending on the function topography, tends to be very...fragile. If you modify your methodology, changing the step sizes or the state criteria, for instance, you will probably change the final local maximum, and then your results will be unreproducible, and much of your statistical analysis goes out the window. Worse, you don't even have to touch the methodology. Change the order of seemingly commutative arithmetic operations and your result differs slightly due to different roundoff errors, which (in 64 bits, but probably in 80 bits, too) all too often leads to a different local maximum, even for problems that aren't particularly ill-conditioned.
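
    To make the roundoff point concrete, here's a toy illustration (in Python, nothing to do with the actual SETI code): floating-point addition isn't associative, so a client that merely reorders a long reduction can drift in the last bits - and in an ill-conditioned search, that drift can mean a different local maximum.

        # Toy demo: summing the same numbers in a different order gives a
        # (slightly) different floating-point answer.
        import random

        random.seed(42)
        xs = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

        forward = sum(xs)
        backward = sum(reversed(xs))

        print(forward == backward)      # very likely False
        print(abs(forward - backward))  # tiny, but nonzero - and that's enough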

    While many tweakers will be careful to leave mathematical algorithms intact, many others just won't think about this sort of thing, and will poison any data pools their results are aggregated into.

    This said, let me say that there are a _lot_ of scientists who are truly bad programmers, and all too often these people think they are hot stuff. They write brittle, monolithic, error-ridden pieces of garbage that are just painful to apprehend, with plenty of hidden bugs just sitting there. Ugh. An acid test to weed these people out might be, "What do you think of _Numerical Recipes in C_?" If you come from a scientific background and want a good programming job, I think you have to prove yourself, since industry is well-aware of the Numerical Recipes scientist archetype.

    Chances are, the SETI code is at least somewhat ugly, and could really be improved for future releases by open source. NO modifications should be made to an ongoing experiment, but that doesn't rule out an open source next version, with the active community serving as alpha/beta testers. There is a lot of interest and a lot of talent out here. All it takes is some openness and some good management on SETI's part, right?
  • No they don't. I'm still using the first client on my Win 98 machine and have 29 units complete and counting...they're being processed this very minute.

  • If you object to their policies so much, then don't run their client.

    There are a million or so screen savers out there.

    These people know what they are doing from a scientific standpoint; until we can approach them with a _ROCK_SOLID_ plan for how open sourcing would work, and a plan to prevent any contamination of data, they _aren't_ going to listen to us. And they _shouldn't_!

    In other words, approach them with a working transmission protocol that immediately notifies the server if the code has been tampered with, or makes trapping such occurrences trivial. Someone above posted a method that I think might be a good start. However, I doubt they would accept a random sampling of verifications; ideally every packet should be verified somehow.

    This isn't a democracy, folks; this is an attempt at a scientific study of one of the most important questions in the world - one with implications for religion, philosophy, biology, physics, astronomy and almost every other field of human endeavor.

    If they choose to be a little conservative with _their_ code, it's _their_ professional ass on the line. OTOH, if our screensaver or distributed client runs 10% faster, gee, isn't that nice...

    For them it's being out of a job and a laughing stock; for us it's "ooohhh, look at the purty colors."
    Sheesh, cut them a break.
  • by theonetruekeebler ( 60888 ) on Thursday November 18, 1999 @08:50AM (#1522201) Homepage Journal
    1. Seti can hardly be convinced to open-source the client until they have means of verifying the blocks they receive.
    But they can sure as anything verify the blocks they've sent, and if they get a report of a hit, they can verify it themselves using a reference implementation, or quietly submit it to someone else and see if they get the same results. I'm thinking that if they make a point of being the One True Source of Source code, and reviewing submitted patches and integrating them themselves, much potential trouble will be eliminated.

    Open-sourcing has the further advantage of becoming very, very portable, almost for free. I'm sure there are a few bored mainframers out there with a few underutilized MVS boxen that could contribute to the Cause.

    <troll>But all of us know the real reason they won't publish the source--SETI@Home isn't sending us space noise at all; they're sending us heavily-encrypted high-level diplomatic radio transmissions from all over the world, letting us do the CIA / NSA / NRO's dirty work for them. Until we see the source, they can't prove it's false; therefore we must assume it's true until we see the source.</troll>.

    Has anybody seen the latest DNRC office prank? Someone's co-worker was running SETI@Home on their office PC, so the DNRCie hacked him a screen saver that beeped and kept flashing the words SETI@Home: Possible extraterrestrial transmission found. Confidence 99.9893% over and over again. I can think of a half dozen people I'd like to do this to, only I'd make damned sure that on the next user input event, they'd get a dialogue saying "Please stand by: writing results to disk. DO NOT INTERRUPT THIS PROCESS" followed two seconds later by a dialogue box saying "Fatal exception error", because there's no point in being just a little bit cruel.

    --

  • "and if they get a report of a hit, they can verify it themselves using a reference implementation, or quietly submit it to someone else and see if they get the same results. "

    That's not the problem. They're not worried that people will submit false positives because, as you said, those are easy to check. They're much more worried about false negatives. Imagine this: I'm a freaked-out advocate of Some Machine or OS. I'm sure my souped-up computer will blow everyone out of the water! So I'll sign up for SETI@Home to prove it to the world! Yea! The only problem is that my machine sucks. It performs below average. I'm still convinced that it's the best (because I'm a freaked-out advocate), but I can't spare the cash to actually make it perform. So how do I convince the rest of the world how cool my system really is? I fake it! I modify my client so every half hour it requests a new packet from SETI@home and sends it straight to the great hard drive in the sky. Then half an hour later it tells the SETI program "Nope, no aliens here" and requests the next packet.
    The point is, a large number of the people using SETI@home really don't care about the science. They're doing it as a sort of 'My Box is Better than Yours' contest. And at least some of them would fake it if they could. Meanwhile the aliens are saying 'hello' and we're missing it.
    So what they've got to do is come up with a way to ensure the packets are actually being checked that doesn't rely on trusting the client. I don't see why they don't send the occasional fake positive to suspect clients.
  • by JohnnyCannuk ( 19863 ) on Thursday November 18, 1999 @09:23AM (#1522214)
    BTW, for all the "why not Open Source" whiners out there: 18 months ago, this project WAS open sourced. You could volunteer, talk directly to the project leaders and hack the code all you wanted. This was during the DEVELOPMENT phase. When development was finished, the client was closed. Where were the whiners when you could get the source? I got 2 different versions of it to play with (though I don't have them now, since I've nuked my hd a few times since then). The client is NOT a commercial product. Open source is great for commercial products because it continuously improves the quality of the code and design. A commercial product needs this improvement to stay competitive. The SETI@HOME client does not need this. It is doing what it was designed to do for this experiment just fine. There is no "competing product" to catch up to or surpass. In 2 years, when the experiment is over, the SETI@HOME client will be too. Therefore, any of the benefits of OSS development have already been utilised by SETI@HOME. It doesn't need to be tweaked or sped up or made more efficient. It simply needs to be used by 'volunteers' the way the project leaders have designed it. If you can't abide by their request, don't volunteer. If you don't like the way it works, don't use it.

    If SETI@HOME didn't have their ranking scheme, would we be even having this discussion? I think that was their only mistake....

    They already used the Open Source model...

  • by copito ( 1846 ) on Thursday November 18, 1999 @09:46AM (#1522219)
    Open source works best when the developers are "scratching their own itch" or at least eating their own dogfood. Linux supports lots of odd hardware because the users need to use lots of odd hardware. OpenBSD is fairly secure because the users need it to be secure.

    In the case of SETI, there is a mismatch between the potential developers and users that is unique to a distributed system. The client user/developer wants a client that is faster, potentially at the expense of accuracy. The official SETI developers are not primarily users; instead they want to have great confidence in the client, potentially at the expense of speed.

    This mismatch turns many of the benefits of open source on their head, from the perspective of the SETI developers, since their goals differ from those of the potential developer community. Note that the client user/developer would achieve his goals quite well if the client were open sourced, as per dogma.

    Perhaps we need to tweak the dogma a bit to account for this bizarre bazaar. Instead of "open source is always better", try this on for size:
    Open source always makes what I am running on my computer better for my needs, assuming I am a developer or there are developers whose goals coincide with mine.

    It does not necessarily make me happier with what you are running on your computer. If you are running a server, I am happy that the server is open source, because that means it has open interfaces. I'd be equally happy if you just opened your interfaces. If you are running my code for me in a distributed type of network, I want my goals to be preeminent in the code design. Open source only guarantees this in the rare case that our goals completely coincide.

    For open source to work reliably in a distributed network of users with varying goals, there has to be a central authority with approved clients, for whom the cost of approving clients is less than the benefit of the potential innovation. In addition there must be a way to strongly authenticate that those clients have not been modified. Incidentally, an authentication method is what is really needed in the closed source case as well. I can't think of a foolproof method for doing this; the best bet is probably just random result comparison and some speed heuristics.

    So the answer is that SETI could open source their client, and the people running it would benefit. But it might not be a benefit to them since the benefit of added speed (the most likely outcome) is not currently needed, and the potential for good faith bogus clients increases greatly. Bad faith bogus clients would likely stay at the same frequency.

    --
  • The current policies of SETI@Home are clearly wrongheaded; security-through-binary-obscurity is not the right way to protect the integrity of their project, and they're falling into the same close-minded trap the DVD Forum did. Obviously, they should open source the client (and the server!), but maintain an "official" binary distribution with an embedded signing key; their servers would only accept input from blessed binaries. (Yeah, it's possible to extract the key [though this can be made difficult]; you'd want to send a different key with each download, so you could easily revoke one. The point is that this policy would allow well-meaning contributors to play with the client source and submit optimizations for review.)
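
    A minimal sketch of that embedded-key idea (every name here is invented; this is not the actual SETI@home protocol): each blessed binary carries a per-download secret, every result is submitted with an HMAC tag over the payload, and the server can revoke any individual key that leaks.

        # Hypothetical sketch of signed result submission.
        import hmac, hashlib

        EMBEDDED_KEY = b"per-download-secret"  # baked into this binary at download time

        def sign_result(payload: bytes) -> str:
            return hmac.new(EMBEDDED_KEY, payload, hashlib.sha256).hexdigest()

        def server_accepts(payload: bytes, tag: str, key: bytes) -> bool:
            expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
            return hmac.compare_digest(expected, tag)

        result = b"work_unit=1234;peaks=0;gaussians=0"
        print(server_accepts(result, sign_result(result), EMBEDDED_KEY))  # True
        print(server_accepts(result, "forged-tag", EMBEDDED_KEY))         # False

    As the parenthetical above concedes, the key can still be extracted from the binary, so this raises the bar rather than closing the hole - which is exactly why per-download keys and revocation matter.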

    Furthermore, they'd get people making not only performance improvements, but bug fixes. Do they really think that by working alone they have better QC than if everyone could review the code?

    That's not my point, though.

    Normally, when the community at large is faced with someone taking a clearly broken licensing approach, we can react by making our own version free of encumbrance. Here, however, we're tied to SETI's data source -- few of us own large radio telescopes.

    Is it possible to gain access to SETI's raw data feed? (Of course it is; they send data blocks to everyone for processing!) Is it possible to come up with our own code for analyzing that data and processing it elsewhere -- with or without the cooperation of the S@H administration? I don't think they'd be happy; but could they stop it? (Surely not without causing even more ill-will.)

    ...

    Tangent: What if the ETs use ultra-wideband (UWB) techniques (see http://www.uwb.org/, http://www.time-domain.com/)? Won't FFT-based analysis totally miss such signals?
  • Unlike the distributed.net people, the client is running on real data of which there is a limited supply. They don't want it to be any faster, because they'd start to run out of data blocks.

    Now THIS brings up the possibility of issuing the same data blocks to multiple clients & cross-checking the results to improve the confidence of the results!

  • There are some side effects to open-sourcing the SETI client. As I understand things, the SETI client periodically transmits data to an outside machine. If SETI were open-sourced, it would be easier for rogue programmers to alter the source to send files (or whatever) back to themselves. Sure, we would all have the source - but how many of us actually look at the source? Furthermore, a less than scrupulous programmer could publish source that didn't match the compiled binary.

    I am somewhat concerned that the SETI project could be a front for the NSA and/or Echelon. How can the NSA keep up with analyzing all those voice lines they're monitoring? Why don't they just write a cute little screensaver that analyzes voice lines and transmits the results back to home base? Hmmm. If that's the case, they really don't have anything to gain from open-sourcing the code. It is this that prevents me from running SETI on my home machine...
  • SETI has always had trouble creating enough work units to keep up with user demand. The project is more popular than anyone had ever dreamed.

    Look at the date of the data you're processing. The date of a work unit I just downloaded right now is July 20. Analyzing this data isn't supposed to be fast. It's supposed to be accurate. It's better to do something slowly and do it correctly than to hurry through it and find out you did it wrong. I'm not saying that SETI couldn't make their client faster, but the idea of it going too fast is ludicrous. They're still sending 4-month old work units. IIRC, when I first started SETI@Home, I was also getting 4-month old work units.

    SETI just can't generate that much data to munch. If they did speed up all the clients and they ran out of blocks, users would be very upset. Solution... SETI would start sending out the same blocks to be rechecked while waiting for new blocks to be created. And users would be upset because they were doing the same blocks twice.

    This is science, not a game. Scientific data needs to be checked and rechecked many times. The problem (from the article) isn't because of SETI. The problem is because of people who believe that distributed computing is a race instead of a way for organizations to effectively speed up processing of data. If results aren't accurate, the entire process is useless.
  • You want more speed? Double the heat setting on your oven, and cut the baking time by half. Watch what happens.

    That's a great example, but it is also a straw man. That example works for cookies, but not necessarily for software. I agree that there might be a way to speed up the code at the cost of accuracy. But that is not what I am advocating. I'm advocating better written software - which will most likely be at least one of the following: more stable, more robust, faster. Better written, by definition, has to be at least one of those. And we already have a requirement for the project: no loss of accuracy. Fine. We work towards that goal. That's what software development and design review are all about. Open source just taps the resources of thousands for those tasks.

    And just because they open source it doesn't mean they lose control. As these posts evidence, many who do SETI are doing it for the right reasons. So if SETI releases the code FreeBSD style - i.e., where they retain control of the "official" tree - then they gain all the benefits of open-sourcing along with maintaining specific control of their research. The authentication problem could easily be solved by some public key system controlled by SETI. And before we start arguing about that, let's remember that they have already LOST that battle. So at this point, an open approach to improving the client would only help them.

    Again, cookies are one thing. But software is another. I am sure you have worked on projects where someone wrote something accurate, but slow. Perhaps their memory management wasn't right, or they were using a slow method to get something done. The code worked. The code was accurate. But it could be improved and remain accurate just by having someone more experienced look at it.

    And that is what I am advocating. Remember, that's the whole open source "thing". I think you're confusing GPL, forking, and crabgrass development techniques with a time honored scientific approach: peer review.

    When it gets down to it, that is EXACTLY what open source is all about.

  • I understand the merits of keeping the source to seti@home closed. It would be quite awful if the project were invalidated by bogus results. However, I have a real problem in that the project maintainers seem to want their clients to run slowly. That's right: they seem to want to run inefficient code on your computer.

    My chief example of this comes from some work that Compaq has done on the client. They asked for the source to Seti@Home so they could get it to run on Alphas. They sent their version back to the maintainers, and that is the Alpha client that is being used now.

    However, later they went back and tuned the client to run more efficiently. They linked the seti@home code against their Compaq Extended Math Library, which contains a highly optimized FFT routine. The results are stunning. If you check out Compaq's stats, they have 21164 Alphas completing data blocks in under 3 hours. On my 21164 machine the best I've ever seen is 6. The speedup on the 21264 processor is even more dramatic. Check out the top users on the Compaq team; they're flying through the data in under an hour per block!

    Seti@Home received that patch many months ago. (I found out about it from a Compaq engineer on the now dwindling AlphaNT mailing list this summer.) However, the package maintainers have done nothing. This leads me to conclude that they either are technically incompetent and lack the skills to test this enhanced client (which I doubt, because they are letting Compaq use it) or simply want the clients to run slower.

    So while I agree that open sourcing the client would be a very bad idea, I fail to see the logic behind refusing to distribute perfectly valid patches that could INCREASE the amount of science that seti@home does.
  • by heroine ( 1220 ) on Thursday November 18, 1999 @11:09AM (#1522234) Homepage
    Having worked with the gcc compiler for many years, I can say that math fluctuations are incredibly easy to introduce through optimization. Floating point operations, though rarely used in CS education, are rampant in the Seti code. The problem isn't in producing bad data but in the differences in output that most floating point optimizations produce. Floating point operations are much more prone to fluctuations under optimization. The optimized Seti clients are guaranteed to produce different results than the original clients.

    Since the data they collect falls short of what their clients can process by orders of magnitude, there's little use for optimized Seti clients outside of learning how the Seti client works.
  • What if Seti used some sort of checking to verify data? Example:

    Person A processes the data and checks the processed data into the server.

    The data person A had is assigned to a (random) person B. If person B verifies what person A found, the data is accepted to be true.

    If the data person B processed is different than the conclusion person A came to, the data is sent to person C and D for processing. C and D should verify what either A OR B got.

    Of course, this could cause problems if abused, but it would allow for an open source client, in that nobody could REALLY forge data, because the data would be processed more than once. Of course, one may argue that this considerably slows down the effort... but it may be the best way to allow an open client while still keeping the results trustworthy.

    But then again .. what do i know, and why are you reading this?
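
    (A rough sketch of the escalation scheme above, in Python - the names and the `process` interface are invented for illustration, and real scheduling would of course happen server-side:)

        import random

        def verify(unit, clients):
            """Issue a unit to A and a random B; on disagreement, escalate to
            C and D and accept only an answer three of the four agree on.
            Results are assumed hashable/comparable."""
            a, b = random.sample(clients, 2)
            ra, rb = a.process(unit), b.process(unit)
            if ra == rb:
                return ra                      # A and B agree: accept
            c, d = random.sample([x for x in clients if x not in (a, b)], 2)
            tallies = {}
            for r in (ra, rb, c.process(unit), d.process(unit)):
                tallies[r] = tallies.get(r, 0) + 1
            winner, votes = max(tallies.items(), key=lambda kv: kv[1])
            return winner if votes >= 3 else None  # still ambiguous: re-issue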

  • I think in fact they already do this as a check for hacked clients - or maybe it's distributed.net that does it.

    Nonetheless it does raise an interesting option, which would be to open source the client and make sure that, say, every tenth result from each user's client was duplicated to a client built from a differently patched source tree. This would act as some kind of assurance, I suppose. However, it might be hard to tell which source tree a client was built from in a way not open to hacking.
  • Faster client software doesn't gain them anything. It just helps people running the client app inflate their egos by boosting their block counts.
    It also steals more CPU cycles from a volunteer's machine than is necessary. His machine could be used for other things besides SETI@home, you know.

    This "fast enough" argument is ridiculous. There's always a good reason to make it faster.

  • Send the client a data packet with known characteristics every now and then and check the results.

    In fact, they do that. If you look at the 'signals' graph on the web site, you will see that they inject test signals at known frequencies into the outbound data and look for them on the returned results.


    ...phil
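
    (Sketch of that spot-check idea, with invented names and parameters - splice a tone of known frequency into an outbound unit, then insist the returned report contains a detection near it:)

        import numpy as np

        def make_test_unit(n=8192, rate_hz=9765.625, tone_hz=1234.0, amp=5.0, seed=1):
            """Noise plus a planted sinusoid at a frequency only the server knows."""
            rng = np.random.default_rng(seed)
            t = np.arange(n) / rate_hz
            return rng.standard_normal(n) + amp * np.sin(2 * np.pi * tone_hz * t)

        def passes_spot_check(reported_peaks_hz, tone_hz=1234.0, tol_hz=2.0):
            """A returned result must report a peak near the planted tone."""
            return any(abs(f - tone_hz) < tol_hz for f in reported_peaks_hz)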

  • But from the _user's_ viewpoint, it's taking him hours to do one data block, it should be faster! Who cares if he has to wait a few days to get the next one so long as the one he's got finishes quickly? Once it's done, he can go do other things with his processor.

    Personally, I don't believe most people are running it that way. This one doesn't work like RC5 or the Mersenne Primes search, where it can sit quietly in the background. Since the S@H client takes over so much of the machine, I believe most people only run it when they aren't using their computers. Thus, it's not a question of "doing other things with the computer". When the user is doing the other things, the client is basically shut down.

    It's exactly the "from the _user's_ viewpoint, it's taking him hours to do one data block, it should be faster!" viewpoint that leads to the race mentality and ultimately to cheating to get higher in the rankings. The important thing is the science, not the race. I've got several machines running the S@H client, and I quit checking my rankings a long time ago.


    ...phil

  • And you're absolutely sure that when you swap in this new FFT subroutine, that you will get the exact same results, down to the least significant bit, every time? Because if you're not sure, then you've effectively changed something in the project, thereby throwing the entire thing into a realm of uncertainty that science does not handle well. It could be enough to invalidate the entire project.


    ...phil
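
    (The "same results down to the least significant bit" question is at least mechanically checkable. A sketch - `reference_fft` and `candidate_fft` stand in for whatever two implementations are being compared, and numpy is assumed:)

        import numpy as np

        def bit_exact(reference_fft, candidate_fft, trials=1000, n=1024, seed=0):
            """Demand bitwise-identical output over many random inputs."""
            rng = np.random.default_rng(seed)
            for _ in range(trials):
                x = rng.standard_normal(n)
                a, b = reference_fft(x), candidate_fft(x)
                # Compare raw bytes so -0.0 vs 0.0 and NaN payloads count too.
                if a.tobytes() != b.tobytes():
                    return False
            return True

        print(bit_exact(np.fft.fft, np.fft.fft))  # trivially True; a tuned FFT may not be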
  • Having worked with the gcc compiler for many years I can say that math fluctuations are incredibly easy to introduce through optimization.

    Amen. Somebody should moderate this up. The pro-optimization faction here on /. seems to be a bunch of semi-competent high-school-grade programmers who have no idea how incredibly difficult numerical computing is.

    Free clue of the day: it's hard to write a program that produces good floating point results for any nontrivial computation. Seemingly trivial optimizations can result in accumulated errors and numerical instability that completely fsck up the entire calculation. Nothing could be worse for the SETI project than a legion of arrogant C hackers submitting blocks processed with patched, "optimized" clients that have different numerical behavior than the original.

    ~k.lee
  • "it's science."

    no, really?!

    thanks for the explanation, but i think people are aware of that fact. the question is, how much science can you do?

    Ironic that not two posts above, we have someone stating "that if there is one thing S@H is *not*, it's science".

  • False positives are not the problem. Negatives are. "You can just check the positives to see that it's not a hacked client".

    What about the person who hacks the client without realising he's broken something, stopping potentially valid results from being sent back? Do we then check all negatives too? If so, why bother?

    Yes, a few people who use S@H are "smarter than usual" - the kind of backpatting people occasionally like to give themselves. I have a reasonably high IQ. I *don't* think I'm qualified to fiddle with algorithms that people with PhDs in astrophysics have spent *years* on. "Thousands of eyes" don't help here.

  • I couldn't help but laugh at some of these points:

    By means of integrating authentication protocols into the client and the server, they could accept results only from official clients.

    This would solve the problem of bad results.

    No it wouldn't. Because I'd have the source and I'd know how to 'fake' an official client.

    Also, opening the source will ensure peer review of the way THEY implemented the algorithm. It seems that we must trust them for the correctness of the implementation.

    It has been reviewed. By countless PhD'ed astrophysicists over YEARS. SETI isn't new. It's had BILLIONS of dollars of funding and some of the greatest minds in their field. I'm sorry. If it came to it, I'd trust their implementation.

    Science is about peer review, opening the source code of the client AND the server will enable peer review. Peer review should not be only for science data, but also for the tools scientists use.

    The data has been peer reviewed, and they didn't develop a new algorithm.

    IMHO, the seti@home people DO NOT listen. They have hundreds of thousands of active users and they do not listen to or even respect them.

    How many of these hundreds of thousands of users are *QUALIFIED* to review the product? See above point.

  • No. It would have been released, caused a problem. And fixed reasonably quickly. Hopefully.

    An argument for open source? Yes. But I think the arguments against far outweigh.

  • No, a negative result means that we have x% confidence that a civilization broadcasting in these particular frequencies does not exist in a well-defined volume and time period in space.

    It is similar to the task of identifying the top quark. Before the top quark was observed, there were a large number of negative experiments which established that its mass must be greater than the masses those experiments could reach.

    These kinds of experiments are invaluable, since they tell you where not to look in the future. They can also help disprove a theory that predicts a greater frequency of observations than those actually observed.
    --
  • I don't understand why you would argue against faster clients when the problem is already infinite in scope.

    Because the problem, as defined by the Seti@Home team, is not infinite in scope. In fact, it's very finite. They are only generating about 30 gig of raw data per day, and the analysis capacity in place is way more than sufficient to analyze all that data.

    Now, you might say "expand the project". That's a good idea, but it takes money and people that are not available. The S@H guys are running this thing on a relative shoestring. If you are going to tackle an infinite project, you have to throw infinite resources at it, and that's simply not available. They are doing what they can with what they have.


    ...phil

  • The obligatory "this isn't news" reply:

    SETI said the same thing about releasing the source to the 3Dnow team and they certainly aren't going to release it to the public now.

    Personally, I think it's a major waste of computing power when other projects like GIMPS have 30,000 members while SETI is pushing half a MILLION.

    The Spielbergesque lure of meeting ET keeps morons happy with their number crunching competitions while real science like GIMPS is mostly ignored.

    Now the number crunching competition has gotten to the point where people with a lot of time to kill have written patches to have the fastest team on the net.

    It's the geek equivalent of drag racing. SETI open source advocates should admit they're in a bloated project, swallow their BS techie pride and crunch slowly away for science, or go do something else to show off.

  • It also steals more CPU cycles from a volunteer's machine than is necessary. His machine could be used for other things besides SETI@home, you know.

    Which is why the client runs at low priority - so the cycles are available for other work as necessary. If the volunteer needs all his cycles for something else, he always has the option of shutting the S@H client off.


    ...phil

  • Not true. There have been several groups making requests to the SETI@Home folks to add processor optimizations. The 3DNOW camp of AMD zealots even rewrote the transformation routine used by S@H and submitted it for inclusion, and the S@H dolts refused the help.

    There are some who would cheat, and there are others who want to show how 'their' processor excels. You can't drop someone in either group without reviewing their code.

    -ck
  • But with any experiment, you must be sure that ALL conditions are the same everytime it is repeated, to be sure the results are as accurate as possible.

    "Oh, the reason you couldn't reproduce my experiment is that your conditions weren't the same. Jupiter was in Taurus when you did it."

    Science isn't about making ALL conditions the same. Science is about deciding what conditions are important, and making them the same. (In that regard, your comments are very scientific: you've decided what's unimportant and actually completely forgotten about it.) So one of you is arguing "implementation is unimportant, algorithm is what matters", and the other one is arguing "who writes the implementation really is important, because there's a finite chance they'll screw it up, by mistake or on purpose". Guess what: you're both right.

    So find a solution that solves both problems. Here's a hint: it's going to involve social engineering. The scientists and the hackers are going to have to actually listen to each other if they want to reach a good accommodation. It's not just a matter of the hackers shutting up and doing what they're told.
