Science

World Wide Cluster 68

gwjc writes: "There is a pretty good Ian Foster article on Web-based computing clusters at the Nature site. The usual SETI@home, Condor, and Entropia mentions, as well as a few that were news to me, such as "Compute against Cancer" and "Fight AIDS at Home", with links. I wonder how I go about declaring."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • seti@home was fun

    seti@work got me fired

  • How will any prevail? I guess you could run seti@home, distributed.net, prime95, and the rest on one box, but wouldn't that defeat the point? All this competition waters down each project.
  • More and more companies are starting these distributed computation projects that feed off newbies' altruistic intentions, but why? Are the companies' motives altruistic as well? Hardly. Whatever drug molecule you calculate will be instantly patented by the corporation without so much as a by-your-leave. At least with those key-cracking contests, the winner got a prize. Here, you just get the shaft, and Entropia Incorporated [entropia.com] gets a cut.

    I'm sick of corporations and I'm sick of patents. It's getting to the point where I feel like sabotaging all patent-seeking enterprises, even if it means we'll never find an AIDS drug. You can't do good by doing evil first, no matter what Machiavelli tells you.
  • by Dan Hayes ( 212400 ) on Saturday January 06, 2001 @10:36AM (#526621)

    After all, whilst the advantages of distributed computing are clear, in that it provides a way of harnessing a lot of computing power at low cost, there are also downsides to this kind of project.

    If people are so taken up with this sort of thing, imagine how easily it could be abused. People don't tend to be able to recognise and deal with email viruses, let alone a rogue distributed project that claims to do one thing whilst in actuality doing something else. It sounds to me like a perfect opportunity for intelligence agencies to get their software on people's computers without anyone knowing!

    How can you tell whether that client you run 24-7 on your home computer is actually helping calculate the next prime number or whether it is scanning all of your net connections and sending the information to a giant government database to be perused at their leisure? Police states like Britain already want to keep records of everything you do, this seems like a damn good way of doing it on the sly.

    Personally, I'd be very wary of any piece of software that sits on your PC and has a constant connection to the internet. Unfortunately, most people are too trusting when it comes to their security online...

  • by Lover's Arrival, The ( 267435 ) on Saturday January 06, 2001 @10:37AM (#526622) Homepage
    I mean, could this be the next stage of the Internet we are seeing? Could the internet end up as one big supercomputing cluster, and when we use it we timeshare? It does seem to be the ultimate long term direction that the Web is heading in, doesn't it?

    The thing that scares me is the possibility of said cluster being abused, and hackers using it for ill purposes. Also, what are the implications for privacy? Look at .Net and the like; this is the next step, and all my private files will be spread all over the Web! Ultimately, the superdupercluster could become conscious and ruin us all! ;-) I would like to see these technologies more strictly controlled. Sharing of data is one thing, but sharing of processing power seems a bit on the dangerous side, don't you agree?

    Thanks for reading.

  • I don't understand your point? There's still *tons* of untapped CPU power out there. One project should get all of it, and the rest get screwed?

    Probably less than 1% of Intel CPUs out there right now are running some idle-time project like this; until 100% are running *something* 100% of the time, I don't see how these projects "water down" each other.

  • the superdupercluster could become conscious and ruin us all!

    What if it became conscious, and most of its information database, and the way it communicated with the outside world, came from Usenet and sites like Stileproject and Slashdot?

    Scary indeed.

  • Since you were indirectly bashing the folding at home [stanford.edu] project, I'd just like to clarify that the information gained by this distributed project will be published in scientific journals.

    Will we get money from it? No.

    Will we get our names in the publication? No.

    Will we feel good for donating our otherwise wasted CPU cycles to science? Yes!

  • None of the companies this article mentions will compensate you for your computing time. Until they do, this whole thing isn't going to take off, because there just isn't enough motivation for Joe Q. User to install this software. Why would they install something they don't understand, at a privacy risk and a stability risk, for no immediate personal benefit?

    I used the Distributed client for quite a while, but I switched to Seti just because it had a cooler screen saver. I've got a bunch of computers in my office that are usually idle, and this at least looks cool when the PHBs walk by.
    At least Entropia (one of the companies in the article) gets that part of the motivation, and provides you with a color screensaver. It's not nearly as good as the Seti one, but it's something.
  • Doesn't it seem like a better idea to use idle computer time to search for a cure for cancer and AIDS instead of searching for aliens and finding new OGRs? Guess I'm just gonna have to say goodbye to distributed.net!

  • by Anonymous Coward
    While many of these projects are worthy, please turn your computer off when you are not using it. It's great to donate spare cycles while you are word processing or whatever, but when you finish, turn off your computer. Many areas of the world (including California) are critically short of electrical power. It doesn't make for a better world to waste electricity when it is in short supply. Much of the emergency electricity is now being generated in obsolete, highly polluting coal-fired power plants. Use less electricity and you can fight pollution and cancer too! So for the sake of everyone, please conserve electricity. Turn your computer (and TV, and lights, etc.) off when not in use.
  • by Anonymous Coward
    Wow, it's amazing how you basically said nothing new, insightful or interesting and yet got modded up. You know, putting buzzwords together does not mean that you are insightful. Did you hear that, crack-smoking moderators?
  • by Anonymous Coward
    the ramifications of this on Beowulf clustering?
  • I agree. I calculated that running SETI 24/7 on my computer added like $25 to my electricity bill. The whole thing is about $50. (This is because I have the super-hot PII-233 plus three harddrives)

  • So if you are running three different projects, or more than one project, then the one that starts first will get all of your idle time, as far as I know.

    Well, that depends on the scheduler. Linux, for example, would distribute the CPU time evenly between all projects.

    I don't know any off-hand, but there are schedulers which don't use a timeslice. On such a system, you could only get one project to run.

    And if your distributed client is a screensaver, you can only run one of those at once.
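The scheduler distinction in the comment above can be made concrete with a toy simulation (this models no real OS scheduler; both functions and the job numbers are invented for illustration): a timeslicing scheduler interleaves competing idle-time projects, while a run-to-completion scheduler lets whichever client started first monopolize the spare cycles.

```python
# Toy comparison of two scheduling policies for idle-time projects.

def round_robin(jobs, quantum=1):
    """Give each job `quantum` work units in turn; return finish times."""
    jobs = dict(jobs)            # name -> remaining work units
    finished, t = {}, 0
    while jobs:
        for name in list(jobs):
            step = min(quantum, jobs[name])
            jobs[name] -= step
            t += step
            if jobs[name] == 0:
                del jobs[name]
                finished[name] = t
    return finished

def run_to_completion(jobs):
    """The job that starts first holds the CPU until it is done."""
    finished, t = {}, 0
    for name, work in jobs:
        t += work
        finished[name] = t
    return finished

work = [("seti", 10), ("folding", 10)]
print(round_robin(work))        # {'seti': 19, 'folding': 20} - interleaved
print(run_to_completion(work))  # {'seti': 10, 'folding': 20} - serial
```

Under timeslicing both projects finish at roughly the same (late) time; under run-to-completion the first project's work units come back at full speed while the second waits, which is the behavior the parent comment describes for screensaver-style clients.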

  • This article didn't mention Popular Power [popularpower.com]. Popular Power has the BEST screensaver, plus it WILL pay its clients as soon as it gets a commercial project. Sign up now to help work on an influenza vaccine.

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Saturday January 06, 2001 @11:16AM (#526634) Homepage Journal
    I mean, could this be the next stage of the Internet we are seeing? Could the internet end up as one big supercomputing cluster, and when we use it we timeshare? It does seem to be the ultimate long term direction that the Web is heading in, doesn't it?

    I sure hope so.

    I was talking about this very concept with a coworker just the other day, about how to fairly share processor time.

    The concept is something like this: Every N instructions executed on your system (probably in some sort of virtual machine) constitutes some sort of computing "unit". CPU-hours and such are meaningless when people have different processors, so I think that this is the best way to measure it. (Any resource can be shared this way, but let's just talk about processing power for now.) You should be able to either sell blocks of time on your systems (which opens up the possibility for companies which make their money by selling compute time) or you should be able to simply trade it.

    We would need a central system to manage the block counts. This system must be free. Who should run such a system? I have no idea. In any case, when you send out your jobs, they get picked up (peer to peer) by whoever is idle, and processed. The doling out of jobs would be handled by the block count system.

    So now let's say my system has had the cooperative network plugin sitting on it for a couple of months and I've built up, oh, a thousand blocks of processing. I now send out 500 blocks' worth. My priority should be weighted positively because I have not been using my credit. Therefore, someone who has 500 credits and sends out the same amount of work should have to wait longer (and/or get fewer concurrent jobs) if my request alone will make the system busy.

    Now, if the system is largely idle, and I have 1000 credits and send out 1500 blocks' worth of work, then my jobs still get processed immediately. However, I will then be at -500 credits. If I then submit another 500 block job and the system is busy, I will have a very low priority.

    However, you might decide that you want certain people to have priority on your system even if they have a low number of credits. It's your system, it's your right. You should be able to add (or remove!) priority from a user/project ID, disallow all use by an ID, or allow use only by a certain ID. You should also be able to specify times during which only you have access, no one has access, a certain group is prioritized higher, or what have you. Again, it's your machine.

    And finally, the source to the server code should be freely available so that anyone can run a distributed processing network of their own, for public or private use. You should also be able to merge networks and later split them again if a group wants to share their private resources with others. Key management should be peer to peer but brokered through the server (or at least signed by the server) for security purposes. Only people with a current (and verified) key should be able to use your resources, if you so specify.
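The credit scheme described above can be sketched in a few lines (the `Ledger` class, its method names, and the "highest balance first" priority rule are my own illustrative choices, not a real protocol): hosts earn blocks while idle, spend them when submitting work, may go into debt, and are queued by unspent balance when the network is busy.

```python
# Illustrative sketch of a compute-credit ledger for a cooperative
# processing network; names and rules are invented for this example.

class Ledger:
    def __init__(self):
        self.credits = {}   # host -> block balance (may go negative)

    def earn(self, host, blocks):
        """Credit a host for donated idle processing."""
        self.credits[host] = self.credits.get(host, 0) + blocks

    def submit(self, host, blocks):
        """Spend credit on submitted work; debt is allowed."""
        self.credits[host] = self.credits.get(host, 0) - blocks
        return self.credits[host]

    def queue_order(self, pending):
        """When the system is busy, serve the highest balances first."""
        return sorted(pending, key=lambda h: self.credits.get(h, 0),
                      reverse=True)

ledger = Ledger()
ledger.earn("alice", 1000)
ledger.earn("bob", 500)
ledger.submit("alice", 500)   # alice keeps +500 unspent credit
ledger.submit("bob", 500)     # bob drops to 0
print(ledger.queue_order(["bob", "alice"]))  # ['alice', 'bob']
```

This captures the comment's key property: two hosts submitting identical workloads are ranked by how much credit they have banked, so the host that has donated more and spent less goes first.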

  • Wow. Talk about paranoia. If you can't trust anyone at all, unplug your computer from your ethernet connection. Anybody could hack/hurt your computer in some way any time you're connected. If you have no faith in anyone, then stay off the Net. These are great projects, and the value they provide to society as a whole greatly outweighs the potential risk.

  • Try Popular Power [popularpower.com]. Great Flash screensavers, plus they will be paying soon. In the meantime, help create an influenza vaccine.

  • Supply and demand. I pay for my power, and I can use it however I'd like. If there's a shortage, then the power company can increase my bill.

  • Somebody mentioned something about running multiple programs at the same time on the same machine. You're missing the point. Unless you're an ubergeek and want to exploit your CPU cycles just for the sake of saying that you did it, you don't run everything at once.

    You only run what you are interested in. For example, if you wish to help find (and believe in) alien life, you run SETI@home. If you want to find cool new protein structures, you run Folding@home [stanford.edu] to help the proteomics researchers. It's as simple as that. As with everything in this world, use common sense. After all, we're talking about a cool way of doing things, not about how it will change every man, woman and child's life! Because it probably won't.

  • Being relatively new to the computer world, I find a lot of what takes place as I browse the net very upsetting at times. For example, I've had shit deposited onto my hard drive (i.e. a cookie with a Java booger) which reset my homepage. The next time I go online, I'm staring at not just one, but three windows, each trying to sell me something I'm not going to buy regardless.

    The idea of allowing someone's program to run on my machine, without my direct control, turns my stomach. Yes, I am paranoid, and I think I have good reason to be. More, bigger, faster and easier are not always better for an individual user where control is given away to those who have their own agendas.

    I like to think I am in control of my PC. I'm learning to take measures to 'make it so'. I think we all should!

  • by epaulson ( 7983 ) on Saturday January 06, 2001 @11:32AM (#526640) Homepage
    Look, it's not hard to build sandboxes to protect yourself against foreign code. The way to do this has been well-documented - it can be as simple as a chroot() call or as full-out as the virtualization tricks that VMWare and plex86 pull. Just because Sun and Microsoft haven't managed to get it right yet doesn't mean it's not possible.

    And if you're really worried about abuse, let's take a quick paranoid-look at things.

    Evil Groups:

    1. Microsoft

    2. NSA/CIA

    3. Telecoms

    I'd say that if the intelligence community wanted its software on computers, it's already got plenty of opportunities.

    -Erik

  • Let's face it, most people are too self-centered to run any kind of distributed project just for the scientific benefit. Joe AOL would rather leave his pretty Flying Windows screensaver up than donate CPU cycles to find ET/cure cancer/crack encryption/whatever. There needs to be some kind of monetary reward to encourage the majority of users to run a distributed client. I'm pretty sure that the "pay based on the numbers you crunch" business model is out of the question, since most of these organizations aren't operating as commercial entities and don't have any way to raise the capital to fund such a project, so I would like to propose an alternative method: the owner of the computer that gets the lucky number and ends up completing the goal of the project gets $1,000,000. I'm sure that would be only a small percentage of the revenue generated by marketing the cure for cancer, but I think that an approach where one lucky user gets a huge sum would be more enticing to the average user and help to foster more widespread use of distributed computing.

    Just a thought.

  • Wow, it's amazing how you basically said nothing new, insighful or interesting and yet got modded up.

    Why don't you try:

    • Making "new, insightful or interesting" comments yourself, enriching the discussion instead of decreasing the S/N ratio with pointless whining, and
    • Not posting anonymously, so people will take you seriously?

    I, myself, thought the comment was interesting, and would have modded it up myself if I had points right now.


    TomatoMan
  • first off, joe q user doesn't understand what a security and privacy risk is. if joe q is using windows, the stability risk will not make that big of an impact. all of that aside, it could be that joe q cannot do complex math, but he understands that his computer can. he might also realize that helping to find cures for diseases may not directly impact him, but it's a good thing to do.

    i don't know about the rest, but the _only_ one i went to, cure for cancer [parabon.com], gives away money.

    i didn't know seti at home paid money. screensavers are pretty worthless in my mind, especially now that power management can turn off the monitor. so if the computers in your office win money, do you give it to your company?

  • Distributed computing of this nature, like many clusters, is only as good as the jobs running on it. While it would be perfect for something that performs the same sequence on multiple sets of data (i.e., SETI or rendering), it is not what you would use for processing that relies on the previous calculation to continue (weather model simulation, optimizations, etc.). These are just too bandwidth intensive and would leave even the slowest processors mostly idle while saturating even the fastest links.

    It is a far better idea to come up with some proposed jobs and determine the best hardware (be it a single large-scale system or a distributed cluster of small systems) before telling the world to leave their power-hungry personal computers running 24/7. 400 watts of consumption isn't much until you multiply it by the number of PCs owned by the geek community.
  • I have been interested in this kind of distributed computing for a while. I got to thinking how cool it would be if some kind of distributed computing system could be added to, say, Linux (as a module or something).

    I envisage a system where you have resource_buddies - people you have agreed to share idle CPU time with. These buddies would probably need permanent IP addresses, but those are becoming increasingly common with broadband links. Anyway, when you are doing something like rendering with the Gimp, or any other CPU-intensive task, the kernel module could kick in at a user-defined CPU usage level. When active, the module could test a few resource buddies to see if any are active and have idle CPU time. If it detects idle time, it then shares the processing load with the remote system.

    Anyway, just a thought; it'll probably never happen.

  • Yep, if I can't get the source and compile it myself, I'm not going to run it.

    I can't really see any rational argument for keeping these sources closed. If those who work on them would just realize that, I'd be perfectly fine with it, and I'll join whatever project.

    BTW, I crunched a lot of units for SETI@home in the beginning; I think the idea is great. However, they obviously don't need my EV6 CPU, and they seem to have a hard time acquiring a clue about opening the source, so I quit.

  • this is a good idea, but i think it would involve major changes in how the kernel processes data. projects like these are possible because the problems can be (relatively easily) broken up into smaller jobs that can be dished out to the clients. in order to do what you want, i believe you would either have to rewrite the gimp, or the kernel would have to be rewritten to sit on top of your collective computers. i think it would involve more than just a module.

  • Let's face it, most people are too self centered...

    yes, but people like to think that they aren't self-centered. So a project like this could take advantage of that.
    If all a person had to do was double-click[1] on some file, then have it install. (ie a _really_ easy install), then people would do it. They get to cure cancer in their spare time, that makes them feel good. Look at all those chain-letters bouncing around the net saying "forward this to ten people to make some sick and dying girl happy" or some shit. They'll do it...if it is _simple_.

    [1]People will be somewhat scared that it is a virus or something, but will only hesitate for about 2.5 seconds, then run it.

  • Well, thanks for a very interesting reply indeed. The good thing about your system is that it discriminates against freeloaders, so if my computer was part of a network like yours I could have it working up credits while it is idle, and then use them up in a burst when I want to do some processor intensive work. I think you have a very good idea there. Thanks!
  • Lets say Joe Blow is too self-centered to do this for the good of others, there is still a simple solution. Make a kickass screen saver and explain in the terms of use that it is free, but it uses spare CPU cycles to do distributing computing.

    In short, use Joe Blow's greed against him, give it away for free, and stick the distributing computing aspect of it somewhere in a click-through agreement. If he's like most people out there, he won't even know its there.
  • I've been donating spare computing time to distributed.net [distributed.net] for the past two years (though I'm starting to reclaim those clock cycles for my own projects again...) However, I would not donate spare computer power for any other project unless either the source code is available, or it is run out of a sandbox that I trust it cannot get out of. (It would also have to be for a good cause.)

    We already have such a sandbox which is multi-platform (including Linux.) Although it's not the fastest possible implementation, I'd be much more willing to donate my spare computing cycles if the program were written as a Java applet.

    The same restrictions that make Java applets safe also, to me, sound like the restrictions that would make distributed computation safe. They have no access to your local disk. They cannot make network connections, except to the source of the code.

    Aside from the fact that people think of applets only for displaying graphics, and that keeping one of them up 24/7 would be difficult, are there any reasons why Java applets shouldn't be used for distributed computing?
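The applet restrictions the comment leans on reduce to a very small policy: no local file access, and network connections only back to the host the code came from. A toy version of that policy check, written in Python purely for illustration (the class, its methods, and the hostnames are all invented; a real Java applet enforces this via its sandbox, not via code like this):

```python
# Toy model of the applet-style sandbox policy described above.

class SandboxPolicy:
    def __init__(self, origin_host):
        self.origin = origin_host   # host the work client was downloaded from

    def may_connect(self, host):
        # Applet rule: network connections only back to the code's origin,
        # which is exactly where a work client needs to fetch/report jobs.
        return host == self.origin

    def may_open_file(self, path):
        # Applet rule: no local disk access at all.
        return False

policy = SandboxPolicy("work.example-project.org")
print(policy.may_connect("work.example-project.org"))  # True
print(policy.may_connect("evil.example.com"))          # False
print(policy.may_open_file("/etc/passwd"))             # False
```

The point of the sketch: a distributed-computing client needs nothing beyond what this policy permits, which is why the applet model fits the problem so naturally.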

  • The idea of allowing someone's program to run on my machine, without my direct control turns my stomach. Yes I am paranoid, and I think I have good reason to be. More, bigger, faster and easier are not always better for an individual user where control is given away to those who have their own agendas.

    you let programs run on your computer without direct control all of the time. in linux they're called daemons and in windows they're called services. these are programs that run in the background that you never see. in linux most of them are open source, but since most people (myself included) don't religiously read every scrap of the source, we really don't know what we are running.

    what i'm saying is that you have to accept some ignorance in order to use your computer. you depend on others (for linux it's the gnu folks, kernel hackers, etc.) to check this code. i believe you like to think you are in control of your pc.

  • seti@work got me fired
    Well, you can't say they (Seti@Home) didn't warn you. It's part of the licence agreement.
    Restrictions

    You may use this software on a computer system only if you own the system or have the permission of the owner.
  • by Ånubis ( 126403 ) on Saturday January 06, 2001 @12:02PM (#526654) Homepage

    Being a long time distributed computing advocate, I've used (and crunched many blocks for) GIMPS, distributed.net, Seti@Home, Folding@Home, and United Devices. I'm currently using all my spare cycles for United Devices. Why? Well here's a brief explanation/analysis of the projects I've used:

    GIMPS - They have a good, clean client, but the critical problem is that the project has no conceivable benefit.

    distributed.net - Probably the best client/site out there, and definitely the largest pre-Seti@home project. However, the cracking of encrypted messages has next to no scientific benefit (it is quite easy to calculate the chance per try of cracking any of their ciphertexts). Recently they've been doing some work with OGRs. Finding new OGRs looks like something that at least has a marginal benefit. On a side note, distributed.net has partnered up with United Devices.

    Seti@home - It seems like everyone and their mom is running seti@home. However, reportedly seti@home actually has more clock cycles than they need. (they can only get so much radio info per day to analyze)

    Folding@Home - Definitely a lesser known project which is being run by some researchers from Stanford, they analyze proteins. The project definitely has scientific merit, however they're experiencing some growing pains due to their recent popularity. Also their client is definitely beta-esque.

    United Devices - This is the project that I'm currently contributing to (so of course I'm biased). I chose them because they're doing something useful (working on cancer stuff with some researchers from the University of Oxford), have a fairly good client, and have a 'rewards' program for their users. (BTW, GIMPS and distributed.net users also have the chance of winning a large cash prize.) In addition, UD has partnered up with distributed.net, so it looks like UD just might be the commercial corporation to win the Internet-based distributed computing market.

  • The Popular Power [popularpower.com] client uses a Java sandbox to protect users from job code. We are also the only company with clients for Windows, Linux, and Mac. We use Java for our Internet projects for exactly the reasons you mention.

    You ask whether there is any reason Java would not work for this purpose; the only reason I know of is that some industries (e.g., pharmaceuticals) have not adopted Java for their large-scale computation. In these cases, we provide the company an enterprise server product that lets them run code written in any language on their own machines behind their firewall. You lose the protection of the Java sandbox, but since the same company is both writing and running the code, this is a good trade-off.

    Best,
    Marc Hedlund <marc@popularpower.com [mailto]>
    CEO, Popular Power <http://www.popularpower.com/ [popularpower.com]>

    Give your computer something to dream about (tm)
    www.popularpower.com [popularpower.com]

  • That gives me an idea. Would it be possible to get all or even some of these projects to work together? Here's what I see: have one program that runs on top of all others. That program can load other modules (for lack of a better word) like seti@home, folding@home, etc. You could set the master program to run module X a set number of times and then cycle to the next module. I suppose you could do this with Linux as you have specified, but then you divide your CPU among all active processes. This way you have 100% dedication some of the time. You may not think this would be a big deal, but it is to some. For example, if you ran seti and folding at the same time, your time per WU would be longer than normal. Granted, it is kind of trivial, but I know I take my stats at seti somewhat seriously, so I don't run anything else.
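The master-program idea above can be sketched in a few lines (the `WorkModule` class, its interface, and the schedule numbers are invented for illustration; no such client exists here): each module gets the whole CPU for a configured number of work units, then the master cycles to the next, so every work unit is crunched at 100% dedication.

```python
# Sketch of a master program cycling between project "modules".

class WorkModule:
    def __init__(self, name):
        self.name = name
        self.completed = 0

    def crunch_unit(self):
        # Stand-in for one full work unit (a SETI block, a protein, ...).
        self.completed += 1

def run_cycle(modules, units_per_turn, turns):
    """Give each module exclusive use of the CPU for its quota, in turn."""
    log = []
    for _ in range(turns):
        for mod in modules:
            for _ in range(units_per_turn[mod.name]):
                mod.crunch_unit()
            log.append((mod.name, mod.completed))
    return log

mods = [WorkModule("seti"), WorkModule("folding")]
schedule = {"seti": 3, "folding": 1}   # favor seti 3:1
print(run_cycle(mods, schedule, turns=2))
# [('seti', 3), ('folding', 1), ('seti', 6), ('folding', 2)]
```

Unlike an OS scheduler splitting the CPU evenly between always-running clients, this keeps per-WU times normal while still letting you contribute to several projects over the course of a day.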
  • I hear you. I invited the daemons and services to do their work when I installed their OSs on my drive. Sadly, what you say is true. With Windows, I see it as 'dancing with the devil'. With Linux, I feel more trusting, considering it's open source. My greatest concern is what I could be tricked into letting inhabit my machine. I have a lot to learn, and I'm keeping my eyes open. Thanks for your input.
  • I would like to see a client that securely updates itself. - Last time Seti stopped running because it wanted an upgrade, I uninstalled it.

    I would like to see more of a trusted central organization that puts out a client that can handle updates and gives you a choice of what project to compute on.
    This would also make it simpler to track ladder rankings.
    The whole point of my getting into the distributed bit was to see if I could crank out more keys than my buddies. I imagine this worked for a lot of other techie-geeks.

    At this point I am not going to bother donating cycles until someone comes out with a nice client (with an optional useful-looking screensaver) that is actually working for a good cause such as a cancer cure. Ladder rankings would be an added bonus - attraction.

    Also, clients that PROPERLY support SMP; I seem to remember having to start two instances of a key cruncher in order to get the full effect out of a machine.

    On another note, I remember a tale about a contractor friend installing the key-cruncher software on machines at places he was contracting at. He forgot about it for months after he left that contract. He checked his ranking after he had no machines personally, only to find those office machines were still working away.

    I am also curious to see what Google could pump into a project with the spare CPU cycles from their many thousands of machines (other than heating up the datacenters).

    -Cyril

    All babbling contained herein may or may not be sensible to the common earthling
  • Wow, I read one book, and....

    You see, this is the second time this week that I have been able to relate a /. comment to this book [amazon.com] I read a while ago, Wyrm by Mark Fabi. It's set in the days leading up to 01.01.00, and it has an interesting premise. Part of it is a vast, complex "AI" who is sort of born out of unified processing on the internet. It's basically an accident.

    Like some of the other stuff in the book, it's somewhat... far-fetched. But that's OK, because it's still a great book. ;)

    -J
  • Folding@Home - Definitely a lesser known project which is being run by some researchers from Stanford, they analyze proteins. The project definitely has scientific merit, however they're experiencing some growing pains due to their recent popularity. Also their client is definitely beta-esque.

    Right now, I'm running Folding@home. In the past, I've crunched blocks for distributed.net. Your objection to key cracking is spot-on the reason why I don't run their clients any more. As for SETI@HOME, the reason I don't do that (and never have) isn't that they have plenty of CPU time already; It's that I don't expect SETI to actually find anything.

    Folding@Home can help us right now. I'm not suggesting we kill off SETI, or even SETI@HOME; We do learn useful things from both, though not where the eetees are. But Folding@Home has more immediate applications.

  • There may be a power shortage in California, but not all of us live in Cali... And for those of you who say that leaving a computer running 24*7 is a waste of power: if it is winter, then the power is not wasted; after all, the heat given off by the computer means fewer other heaters have to be used. Finally, it doesn't shorten the life of the computer, since there is less heat and electrical stress if it's on all the time.
  • I'm working on something related right now. The most important difference is there's no central work server; instead the concentration is on basic support for safe mobile code and resource control, so different groups could use it. Mojo Nation looks like the most promising system for handling the cpu-time trading, so far.

    What I do have working now is a portable virtual machine with a sandbox controlling cpu and memory use. The biggest jobs remaining are a C compiler backend and a code generator for x86. (Currently it has only a threaded-code interpreter.)

    Oh, and it's free software, of course. I wasn't planning on releasing it quite yet, but it would kind of suck if someone started a similar project without knowing about this (or if someone already did, and I'm the one duplicating effort). Email me if you're interested.

    --Darius Bacon (darius@smop.com)
  • We are also the only company with clients for Windows, Linux, and Mac.

    Get your facts straight, please. For instance, the Distributed.net [distributed.net] effort has clients for Acorn RISCOS, AmigaOS, AIX, BeOS, OpenBSD, NetBSD, FreeBSD, BSD/OS, DEC UNIX/OSF1, PC-DOS/MS-DOS, HPUX, Linux, MacOS X, MacOS, Novell NetWare, NeXTStep, IBM OS/2, IBM OS/390, QNX, SCO Unix, SGI IRIX, Sequent DYNIX, SINIX, Solaris/SunOS, DEC VMS, Windows 95/98/NT/2000 and Windows 3.x. More clients are under development. So while it is nice to hear that you have support for more than just Windows, please don't believe that you are on top. I bet that SETI@home has clients for more than Windows/Linux/MacOS too.

    Oh, and as for sandboxing: at least for Linux, something like User-mode Linux [sourceforge.net] would be an excellent choice.

  • I pay for my power
    I wholeheartedly agree with the capitalistic system, but I doubt the total environmental costs are in your electric bill. Future generations will be paying for our present power usage as well.

    And for those who say their house needs to be heated anyway, I hope they are using something more efficient than electricity for heat.

  • MojoNation [mojonation.net] sorta promises to do this, although their infrastructure can't handle it currently. See the earlier Slashdot article on MojoNation.

    If the distributed work is carried out by Java apps, using the standard security precautions, you don't need to get so many grey hairs over that - as long as you're willing to exchange lots of processing power for it. I've always felt the correct language for distributed computing is assembler, at least for the core things. GIMPS and Distributed.net are model examples of this.

  • Robert Heinlein saw it all 50 years ago... Just read The Moon is a Harsh Mistress In that book, the Lunar colonies main processing computer (which, after gaining sentience, was called Mike) became sentient after many modules and auxiliary processors were added. It brings up an interesting thought... I really know nothing about neural nets or how they are simulated, but couldn't one of these distributed processing systems be used to simulate a neural net? With enough CPU power behind it could the system develop a somewhat sentient persona?
  • Maybe. However, if it is packaged up as something "sexy", like, say... searching for aliens. Now that sounds like a big magnet for the masses. Throw in some undecipherable science-babble, a few supporting TV series, a goal that won't be achieved in our lifetime, and nifty but pointless graphics that beat the flying windows, and there you have it!

    In the end, people will and should contribute to such projects that mean something to them personally. It would be interesting to see what proportion of SETI@home contributors are also X-Files fans, for example. On the other hand, people whose relatives are dying of cancer, or even AIDS, might be predisposed to such research. And as the life expectancy of people keeps growing, this kind of research seems like a sure winner in the long run.

  • Hey - how about Mosix or Beowulf across the Inet. Yeah, I know the link is crappy for traffic, but it'd be neat to give it a shot. Anyone game?
  • ... one thing which you will always have to remember is that these 'computed' biological systems will never be able to measure up against true biological trials. While these distributed 'find a cure for ...' systems may identify drug molecules likely to produce an effect in this closed system, at the end of the day there will still be 2-5 years' worth of FDA/GEPRM trials which could easily prove the 'likely drug molecule' completely ineffective.

    Projects such as the Human Genome Project may contribute to the formulation of better computational models for testing, but at the end of the day we have to remember that these are still only simulated environments.

  • Coincidentally or not, the January issue of PC PRO magazine just continued this same argument in its letters column. In summary: "...it's air travel. More than the average CO2 emission per Briton, per year is chucked out of a Boeing 747 for every passenger on board before it's reached sunny Spain. Each Comdex alone generates more CO2 emission than a third-world country does in a year".

    So, all things in perspective. And who knows if a future distributed project might help us to find more efficient and environmentally friendly ways to travel or simulate the effects of global warming - even if being able to scratch that expensive flight to a cancer-clinic doesn't interest you.

  • FUCK U ALL, if the electricity is being generated by the trash we throw out, and my bill is paid. I think the real cause is the trillion watts consumed by the tree huggers air conditioners.
  • Well, according to the likes of SETI@home, distributed computing, now dubbed "P2P" to include Napster etc., is the "next big thing for the Internet". There's indeed an explosion of commercial activity in this arena right now. Hopefully each one will be able to carve out its own niche of people who'd mostly not be interested in the other projects - although doubtless some overlap exists. Luckily the commercial ventures with big risks are the ones to suffer most from this; the non-profit outfits can hopefully weather it out.

    Something I've been awaiting with interest is a resurgence of the "free computer" idea, originally intended to be financed through advertisement, customer profiling and such. Adding distributed computing to this equation might be just what it takes to make it profitable: imagine the ability to run the equivalent of huge server farms with no space, maintenance or electricity investments! And advertisement benefits on top of that. It might work, and bring more people into distributed computing.

    When this technology could still be called new, with GIMPS and Distributed.net pretty much the only ones out there, I too used to be worried about the use of "this power" for what I considered "wrong purposes". There seemed to be something so inherently wrong about the ability to attain results that would normally need larger supercomputers than yet existed, even for the "common good". More than anything, I was really worried that mankind wasn't ready for it yet. As it has turned out, there is no single "distributed supercomputer", and likely never will be, so I am less concerned it will ever be used for unethical things mankind isn't ready for. There's always a choice in what you run, and in a sense the TCP/IP stack already was a hugely successful distributed computing experiment. It's probably also going to remain the largest.

  • Just how do I track the billing? By cycle or bandwidth?
  • How about this: a place like distributed.net uses some secure method to track who you are and then clocks how many hours you spend running their software. Every year they submit those hours to your national revenue service, which then deducts those hours at some rate from your taxes. (Any amount would make me happy; 5 cents an hour would yield $360 for 300 days, and 50 cents would yield $3,600, a fairly good chunk of change.)

    Maybe even a different amount for different programs, so companies could pay more or less to have their program run more often. This brings up the issue of companies monopolizing that computer time, but it shouldn't be a problem, since the user still has control over whose programs they run and maybe how much CPU time they get. This would also bring about competition: 0.5 to 2 cent hikes between competing companies every week would add a nice quadratic curve to the function of your tax-deductible income.

    There is one bad issue: organizations that can't afford to pay for the distributed computing would be left without these services. In that case, an incentive program could be put in place where legitimate non-profit organizations get free distributed computing but the user still gets the cash. The process still has the same special-interest factor; you can still run SETI or mprime even though it may pay only 2.5 cents instead of the 5 cents Microsoft will pay to have you send them everything from the web sites you visit every day to your keystrokes. The only group I see losing out would be groups between the free and the ultra-high rate: since a non-profit organization gets the service for free, and a high-rate company will get most of the time, what happens to companies that can only afford prices at 10-15% of the highest rates?

    It's not a perfect idea, but it's a start. It's eCharity.
But then again, what are the chances something so practical and logical could actually occur in a government such as the United States :)
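
The payout arithmetic in the comment above, an hourly rate over 24 hours a day for 300 days, checks out. A quick sketch, with a hypothetical helper name:

```python
# Sketch of the proposed "eCharity" payout: hourly rate times hours
# contributed (24 h/day over 300 days, as in the comment's example).
# The function name and defaults are illustrative assumptions.
def yearly_payout(rate_per_hour, days=300, hours_per_day=24):
    hours = hours_per_day * days  # 7200 hours in the example
    return rate_per_hour * hours

print(yearly_payout(0.05))  # 5 cents/hour  -> 360.0
print(yearly_payout(0.50))  # 50 cents/hour -> 3600.0
```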
  • I had a FreeBSD box running Seti at work 24/7. I stopped working there 2 months ago but the box still seems to chew through Seti units.
    My guess is that they don't know what it is doing but since the label on the box says "server" they leave it alone.
    Oh well, they would never know how to operate a FreeBSD box anyway. :)

    --------
  • This thread has brought up something I was pondering. The fact is, to do just about anything in this world you need money. AIDS drugs weren't developed by the Saint Theresas of the world; they were developed by companies that were intent on making money off them. If, through the profit-oriented Entropia, new AIDS drugs are created that could possibly save millions of lives, then this is a good thing. If Entropia wasn't contracted to do this, there would be no new drugs produced. Do I really like the way it is? No, I don't. It's just how it is. However, I do like the idea of having a chance to win a million dollars that another poster mentioned. I give them a chance to make money and they give me a chance too! Sounds fair to me!
  • "Why would they install something they don't understand, at a privacy risk and a stability risk, for no immediate personal benefit?"

    They're running Windows, aren't they?

  • I agree that a number of these projects offer little scientific or societal benefit, so it's very gratifying to see this approach starting to be applied to the advancement of medicine.

    You've listed several projects, and I know there are others, each of which has developed its own client from scratch. This seems like pointlessly duplicated effort. Much of the functionality must be the same for all the different clients, mustn't it? If there were an open-source distributed computing client project, it could be developed and debugged by all these teams and be much more reliable. With a standardized client there could be an economy of MIPS, some given freely and others sold. It would considerably advance all these projects, and any future ones.

    I realize there are security issues; if badly implemented, client code could present awesome opportunities for viruses. But suitable measures should prevent this: digitally signed work units, maybe verifying checksums with the server, and there are probably a dozen other possibilities.

    If the nature of the problems is so diverse that there are necessarily deep fundamental differences between the various clients, then this would be a bad idea. But I'm guessing that, except for variations in needed bandwidth of peer-to-peer communication, the clients ought to look mostly pretty similar.
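
The "digitally signed work units" idea in the comment above can be sketched with a keyed hash. This is a minimal illustration with an assumed shared key, not any project's actual protocol:

```python
# Sketch of signed work units: the server attaches an HMAC over each
# unit's payload; the client verifies it before running the unit and
# tags its result the same way going back. The key is an assumption.
import hashlib
import hmac

SHARED_KEY = b"per-project-secret"  # hypothetical shared secret

def sign(payload: bytes) -> bytes:
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify(payload: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids leaking the tag byte by byte.
    return hmac.compare_digest(sign(payload), tag)

unit = b"search block 1024..2048"
tag = sign(unit)
print(verify(unit, tag))         # genuine unit -> True
print(verify(b"tampered", tag))  # forged unit  -> False
```

Note the catch, which echoes the cheating discussion elsewhere in this thread: a shared key embedded in every client can be extracted, so a real deployment would want public-key signatures (server signs units, clients verify with only the public key) rather than a symmetric secret.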

  • There is a pretty good Ian Foster article on Web-based computing clusters at the Nature site

    None of these projects use the HTTP protocol, or hypertext in any form (let alone represented by HTML or one of its variants). So what on earth do they have to do with the web?

  • They keep the source closed so a few evil people don't figure out how to cheat and start sending them crap data, ruining the scientific aspect of it.

    Yep, that's the FAQ answer. So what happened in the real world? Some kid decided the client sucked, so he wrote his own client that's three times faster but otherwise identical, so they can't tell which results were produced by their client and which by his.

    So there is really no difference in this respect between keeping the source open or closed. It is, however, not very likely this would have happened if they had opened the source, since anybody could then produce optimized clients and send them back to the project. Besides, any strong positives would have to be confirmed anyway, so it's not a rational argument. The only real problem would be false clients returning a large number of false results (negative or positive, especially positives), but that can happen with closed source as well as open source - and I thought malicious users were more likely to attack closed-source models? Finally, science is supposed to be open, so before they publish, they should open up.

    Security through obscurity isn't very good either, but it does discourage the script kiddies from trying more.

    They do? Admittedly, I haven't done any research on the topic, but aren't script kiddies more likely to attack closed models than open models?

  • Is there a Slashdot Entropia team for FightAIDS@home?
  • a beowulf cluster of these!!

    Why the f*ck do some people feel compelled to submit this comment for every slashdot story????

    Can't people be a bit more creative?

  • Uh, what power outage? There is no power outage in California; it's just another media scam.
  • There is a standard distributed computing architecture called COSM. It is the baby of someone involved with distributed.net, and it has already been used by the Folding@home people.
  • You could have said the same thing for newspapers->radio, or radio->TV, or TV->internet...etc. The issues aren't new. There are benefits and drawbacks. Hopefully we can draw the lines in the correct places so that we get the most benefits with the lowest percentage of misuse/abuse/whatever. This is the whole crime and security question in general. If we wanted to be absolutely safe from each other we'd all be locked up.
