Hosting Problems For distributed.net
Yoda2 writes "I've always found the distributed.net client to be a scientific, practical use for my spare CPU cycles. Unfortunately, it looks like they lost their hosting and need some help. The complete story is available on their main page but I've included a snippet with their needs below:
'Our typical bandwidth usage is 3Mb/s, and reliable uptime is of course essential.
Please e-mail dbaker@distributed.net if you think you may be able to help us in this area.'
As they are already having hosting problems, I hate to /. them, but their site is copyrighted so I didn't copy the entire story.
Please help if you can." Before there was SETI@Home, Distributed.net was around - hopefully you can still join the team.
Suggestion (Score:2, Informative)
Jouster
Re:Suggestion (Score:5, Informative)
You can't put your own server software on sourceforge's servers, at least not to my knowledge, so all sourceforge would be good for is hosting the client downloads...which it might actually already do. Hope that answers your question.
Hargun
Re:Suggestion (Score:4, Informative)
FWIW, our clients actually can speak a pure HTTP protocol for requesting data, allowing a simple /cgi-bin/rc5.cgi script to handle direct serving, but the default communications mode is a more compact raw binary mode.
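For flavor, here's a rough sketch of what a fetch over that HTTP fallback might look like. Only the /cgi-bin/rc5.cgi path comes from the comment above; the hostname, query parameters, and response handling are pure guesses for illustration, not d.net's documented interface:

    # Hypothetical sketch only: requesting a key block over plain HTTP.
    # The endpoint path is from the comment above; the parameters and
    # response format are invented for illustration.
    import urllib.request

    def fetch_block(server, user):
        url = f"http://{server}/cgi-bin/rc5.cgi?op=fetch&user={user}"
        with urllib.request.urlopen(url) as resp:
            return resp.read()  # raw block description, format server-defined

    block = fetch_block("fullserver.example.net", "someone@example.com")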
Gambling regulations (Score:2)
Since it's a "contest" with cash prizes, why not charge people to enter?
d.net can't do that because under the regulations of many U.S. states, that would be considered "gambling."
Re:Suggestion (Score:1)
I could be wrong though and I'm sure someone will point that out if so. Perhaps you could have some parts closed even on sourceforge.
Soccer manager: Hattrick [hattrick.org]
Distributed hosting? (Score:5, Interesting)
Re:Distributed hosting? (Score:2, Insightful)
Pretty much the same concept as any clustered computing: the pipes are always important, and no, you can't 'distribute' the connections.
Re:Distributed hosting? (Score:2)
Machine A takes a large block of keyspace off of the main server. Machine B takes a subportion of the keyspace from Machine A. Machine C takes a subportion of keyspace from Machine B, ad nauseam. When a machine completes the checking of its key blocks, it reports back to the machine it acquired them from for consolidation. When the main server hears back from Machine A, it is a tiny packet saying that all keys in this entire range have been checked and returned negative. One small packet instead of hundreds from each of the individual machines that actually processed them. This is only one simple configuration, for example purposes.

You're still gonna need a host, but the bandwidth required will be nothing.
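A toy sketch of that tree-shaped hand-off (fanout, block sizes, and function names all invented; the real protocol is nothing this simple):

    # Toy model of hierarchical keyspace distribution: each node takes a
    # range and delegates slices of it to children. When every child
    # reports back, the node sends one consolidated "all checked, no hit"
    # message upstream instead of many small ones.

    def check_range(start, end):
        """Stand-in for actually brute-forcing keys in [start, end)."""
        return None  # pretend no key in this range was the winner

    def process(start, end, fanout=4, leaf_size=1 << 20):
        if end - start <= leaf_size:
            return check_range(start, end)
        step = (end - start) // fanout
        for i in range(fanout):
            lo = start + i * step
            hi = end if i == fanout - 1 else lo + step
            hit = process(lo, hi, fanout, leaf_size)
            if hit is not None:
                return hit           # a child found the key
        return None                  # one tiny "all negative" result upstream

    result = process(0, 1 << 32)     # whole (toy-sized) keyspace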
Re:Distributed hosting? (Score:2, Informative)
Re:Distributed hosting? (Score:2)
Re:Distributed hosting? (Score:1)
Each mirror site's code would need to have its own verification scheme to validate someone's completed blocks (which I think they have now). And it would have to be tamper-proof and/or use trusted mirrors, since faking a "done with block, this one wasn't it" report for the block containing the right answer (stat whores) would be a true setback for the project.
Since the full block information wouldn't ever be compiled together on a central server, we might have to give up some of the details from stats. Not centralizing the block-by-block count of every person's activity for stat computation would probably cut back their bandwidth considerably.
Multiple problems (Score:5, Insightful)
Next, the integrity of the project gets called into question the moment you begin allowing clients to check processed blocks. The number of false positives could easily shoot through the roof. Also, a computer with bad memory or simply running a faulty OS (such as Win9x/ME) could overlook a true positive, thereby virtually obliterating the project (i.e. "we're at 100% completion with no result; guess we start over?").
As stated above, stats would be impossible to do in this manner, and the same applies to key distribution. One could argue that the total keys be distributed among thousands of nodes and handed out from there, but you create more problems than you solve. You still need a centralized management location to keep track of keys that have or have not been tested. Imagine a node going offline permanently or simply losing the keys it was handed. Suddenly, a large block of keys is missing. As it stands now, the keymaster simply re-issues the keys to someone else after a couple of weeks of no response from the client it sent the original blocks to. Under a distributed format, the keymaster would have to keep track of which keys went to which key distributor, which of those came back, which of those need to be redistributed, where they... (you get the message.)
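That timeout-and-reissue bookkeeping is simple enough to model; here's a minimal sketch under the description above (block ids, pool size, and the two-week deadline are illustrative, not d.net's actual values):

    # Minimal sketch of timeout-based block reissue (all names invented).
    import time

    REISSUE_AFTER = 14 * 24 * 3600   # "a couple of weeks", in seconds

    pending = {}                     # block_id -> (client, time_issued)
    unassigned = set(range(1000))    # toy pool of block ids

    def issue(client):
        block = unassigned.pop()
        pending[block] = (client, time.time())
        return block

    def report_done(block):
        pending.pop(block, None)     # result received; block retired

    def reissue_stale():
        now = time.time()
        for block, (client, issued) in list(pending.items()):
            if now - issued > REISSUE_AFTER:
                del pending[block]
                unassigned.add(block)  # hand it to someone else later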
Next you run into another problem of integrity. What's to stop each distributed keymaster from claiming its own client is the one that completed all blocks submitted to it? Consider this example: the central keymaster sends out 200,000 blocks of keys to keymaster node 101. Keymaster node 101 distributes these keys to a bunch of clients, which process the blocks and then send them back to keymaster node 101. Keymaster node 101, which has been modded slightly, then modifies each data block, changing the user id to that of the keymaster's owner, thereby making it appear that any block coming back from keymaster 101 was processed by keymaster 101. It might be easy to spot, but then how do you find out who to give credit to?
The webpage doesn't attract the majority of the bandwidth; the projects do. Distributing the projects would be disastrous, as many have already tried taking advantage of the current system to increase their block yields through modded clients. Luckily, this is easy to spot for now. Under a distributed system, it would be next to impossible. All this, and I've yet to mention that the code would have to be completely re-written to work alongside a custom P2P application, which would add months of development to a project that probably only has weeks or months left in it.
In short, someone host the damn thing, k?
Re:Multiple problems (Score:1)
Of course there are protocols for distributed computing with which you need not hope to Christ, but can be quite confident, that your results are correct. Good ones can compute formulae and withstand just under one third of the participants being active traitors. On the other hand, their bit complexity is not exactly going to lower your total amount of communication either...
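For reference, the classic bound behind that "just under one third" figure: Byzantine agreement among n parties tolerates at most f active traitors where n >= 3f + 1, i.e. f = floor((n - 1) / 3). A two-line check:

    # Classic Byzantine fault-tolerance bound: n >= 3f + 1.
    def max_traitors(n):
        return (n - 1) // 3

    assert max_traitors(4) == 1      # 4 parties survive 1 traitor
    assert max_traitors(100) == 33   # just under one third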
Re:Distributed hosting? (Score:1)
Re:Distributed hosting? (Score:3, Informative)
Each of the servers listed is in a different DNS rotation, grouped roughly into geographically named groups (which try to take general network topology/connectivity into account). The servers listed there (known as "fullservers") handle all of the data communication needs requested by clients, and the fullservers in turn keep in contact with the "keymaster". The keymaster is the server responsible for coordinating unique work among all of the fullservers and assigning out large regions of keyspace to the fullservers (which in turn split up the regions and redistribute them to clients).
The hardware that we had hosted at Insync/TexasNet was actually 3 machines which together served several roles: our keymaster, one of our dns secondaries, our irc network hub, one of our three web content mirrors, and our ftp software distribution mirror (for actual client downloads).
It's unfortunate that the change in management at Insync/TexasNet caused them to re-evaluate all of the free-loading machines that were receiving donated services (there were apparently several others besides us) and cut off anyone who wasn't paying. Regardless, it's a tough economy, and companies that want to survive have to look at where their costs are going and do their best to cut spending.
Re:Distributed viruses? (Score:3, Informative)
"I hate to /. them but..."? (Score:5, Funny)
A slow, painful, prolonged,
Re:"I hate to /. them but..."? (Score:2, Insightful)
Re:"I hate to /. them but..."? (Score:2)
Stopping three quarters of the way (Score:3, Informative)
Re:Stopping three quarters of the way (Score:2, Funny)
Re:Stopping three quarters of the way (Score:1)
Re:Stopping three quarters of the way (Score:2)
> due to a software bug, the correct key was already found but no one realized it?
That could easily be interpreted to mean that you found the key halfway through, put it in your pocket, and kept searching. Discovering a bug and checking your pocket is much better than discovering a bug and having to start from scratch.
Who cares? (Score:3, Interesting)
We all know that eventually the key is going to be found, and some stupid message will be deciphered ("Congratulations on solving the 64-bit challenge. blablabla").
Why waste trillions of CPU cycles and thousands of dollars in bandwidth to find out something that we already know is true?
OMG, hang on!!! (Score:1, Funny)
Good Lord, what shall I do?
Distributed Hosting (Score:1)
Seriously, is the current state of P2P networking up to something as mundane as serving common HTML pages?
Location (Score:1)
Re:Location (Score:2)
Dnet, is it useful ? (Score:1, Flamebait)
Re:Dnet, is it useful ? (Score:1)
Dnet has already confirmed the optimal 24-mark Golomb ruler and is working on discovering the 25-mark one. This information is IMMEDIATELY useful to people in many fields of science. I'd point you to their OGR page, but for the fear of
Re:Dnet, is it useful ? (Score:3, Insightful)
It does that with a ton of people until it finds the right key. It will eventually crack every crypto they throw at it, because it's only a matter of time.
Seti@home is searching for something that they don't even know is out there, and can you imagine the impact if they do find PROOF that there's life somewhere else? That's far more important than stupid crypto keys and such.
The UD cancer treatment, while iffy because it's probably set up to benefit a company, still has a HUGE impact on EVERYONE'S life. I don't know anybody who hasn't had cancer or had a family member with cancer, and to find a cure!
Re:Dnet, is it useful ? (Score:2, Insightful)
Cancer research? I've yet to see a viable distributed project for cancer research. By that, I mean an organized effort with real data, a complete and concise goal, and a clean method for reaching that goal. Distributed raytracing? More pretty pictures on the computer screen.
You want to draw pretty pictures; I want to brute force an encrypted message to prove current laws regarding encryption are draconian and need to be changed immediately. Gee, I can't imagine why anyone would think dnet is more useful than raytracing....
Re:Dnet, is it useful ? (Score:1)
I don't run any of the cancer dist. projects so someone else can answer that better than me.
What you are doing is hardly worth the effort. It would be like someone memorizing an entire DVD in binary and then repeating it to fight the absurdities of the DMCA. Everyone knows how large the keyspace is, and no one is surprised that it is taking them this long to find the key.
Seti@home has potential, but I agree it needs to look for multi-band signals at the least, as it currently looks exclusively in the hydrogen emission band. It would be like people in France, not having boats, looking across the ocean at Britain expecting to see a fire that reaches a certain height above the ground; it is a very limited idea of what we should be looking for.
Re:Dnet, is it useful ? (Score:4, Informative)
http://members.ud.com/home.htm
This is real research, worked on by United Devices, helped by the University of Oxford, Intel and the National Foundation for Cancer Research.
It meets all your criteria- this is from their site:
"The research centers on proteins that have been determined to be a possible target for cancer therapy. Through a process called "virtual screening", special analysis software will identify molecules that interact with these proteins, and will determine which of the molecular candidates has a high likelihood of being developed into a drug. The process is similar to finding the right key to open a special lock--by looking at millions upon millions of molecular keys."
graspee
Re:Dnet, is it useful ? (Score:4, Informative)
Re:Dnet, is it useful ? (Score:2, Insightful)
> candidates has a high likelihood of being developed into a drug
Which will then be sold back to you at prices where dying from cancer is probably the better choice. Profits, amazingly, do not get donated to the Free Software Foundation but to lawyers fighting the demand for affordable generic drugs. The drug-empire CEOs meanwhile sip martinis floating on their yachts just a couple of miles off the coast of the Kaposi Belt...
Copy + Paste (Score:1, Informative)
URGENT: We have recently learned that our long-standing arrangement with Texas.Net (formerly Insync) would end at noon, Friday, March 22. Through an agreement with Insync, we were hosted at no charge for many years. Though we have tried to make other arrangements with them or to continue our current service until we can make other arrangements, in the end we had no choice but to move.
Several of the Austin cows made a road trip Friday morning to retrieve our equipment from their colocation facility.
We have no reason to complain about Texas.Net or their current decision. As a business, they chose to donate to us for a long time, and have now decided that they must stop. In dbaker's words in a letter to Texas.Net: "Our experience with Insync has been excellent; I've never been happier with an Internet provider. I've recommended them (and indirectly, Texas.Net) to everyone and even this [situation] won't change that."
Though United Devices has kindly offered to colocate our primary servers for a short time at no expense, we find ourselves in the market for a new ISP. If any of our participants work for a major ISP in Texas (preferably within a few hours of Austin, but we're not picky), and would be willing to donate colocation space and connectivity, we would eagerly like to speak with you. Our typical bandwidth usage is 3Mb/s, and reliable uptime is of course essential.
Please e-mail dbaker@distributed.net if you think you may be able to help us in this area.
Practical? (Score:1)
Correct me if I'm wrong, but isn't this the outfit concerned with breaking low-grade crypto? How's that going to improve my daily life? I'd much sooner donate my CPU cycles to the evil international pharmaceutical corps, which do benefit cancer study. If you get a rash from commercial ventures, there's Folding@home. It's more like basic research, so it won't produce any miracle cures, but it might eventually lead to research that could.
But breaking crypto? Why?
Re:Practical? (Score:1)
If you consider 56/64/128-bit RC5 low-grade, yes.
How's that going to improve my daily life?
I have no idea what your daily life is like, but if it involves encrypting things you'd prefer stayed private, it should eventually help you in that aspect. Not to mention your boost of confidence as you follow your daily stats and see yourself advancing past others every day.
But breaking crypto? Why?
Because if a few thousand unspecialized computers can brute force the best encryption allowed by law with minimal optimization and research, then we have some good reasons to push for the law to be changed. Personally, I don't like the idea of the best encryption available to me being useful for all of 3 seconds while it's being broken. I don't usually have anything worth decrypting, but I like to think that when I do, it'll be worth my time to encrypt it.
Re:Practical? (Score:2)
There's absolutely no evidence to suggest that the government will ever change crypto laws based on what happens at distributed.net.
It's not like those in government who are responsible for consulting on these matters (NSA, etc.) aren't aware of the issues at play with current export-level encryption -- if you think that they are somehow unaware of these issues and dnet is required to bring them to light, please pass the crack pipe.
Re:Practical? (Score:1)
And 42 bits is clearly too weak. Today, when 128 bits is common and allowed to be used by almost the entire world, it's not an issue anymore.
Re:Practical? (Score:2)
That might have been true when d.net was working on DES, but things have changed.
I think a more accurate wording would be
Because if over ten thousand computers, working for three years, can't brute force 64-bit encryption when 256-bit encryption is readily available [rijndael.com], then we have very little reason to push for the law to be changed.
Re:Practical? (Score:2, Insightful)
distributed.net (Score:1)
It would be a shame to see them disappear. They have a lot of cumulative computing power, and it ought to be put to real use.
Ah, the days of installing the res/rc5-42 clients on lots of 386 and 486 machines and actually having them do some real computing....
Re:distributed.net (Score:1)
Optimization of Network Usage (Score:1)
I know that I, for one, have boxen and bandwidth to pull off 3 Mb/s of CPU-intensive network traffic 24/7, but I'm not about to devote my precious resources to something that I don't understand, especially when I haven't even had the chance to ascertain that a solution that utilized my donated resources was, in fact, the best one.
Jouster
Re:Optimization of Network Usage (Score:1)
Each 'message' to the keyserver is more like 'ipaddress, date, username, keyrange, size of key range, client version'.
They do work in ranges, and dnet has been working to make those ranges larger but not too large (larger == lower bandwidth, but more time must be spent cracking each range). If it takes too long to crack a range, that range risks being recycled before a user submits it. It's a very dynamic system that they've been refining for many years now, and it seems to be doing well. Maybe they could tweak some more for bandwidth, but that would be a question better asked of the fine dnetc staff.
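A naive model of such a report record, with field names taken straight from the comment above (the real wire format is the compact binary protocol mentioned elsewhere in this thread, not text like this):

    # Toy model of a client's work report. Field list follows the parent
    # comment; the actual d.net encoding is binary and far more compact.
    from dataclasses import dataclass

    @dataclass
    class WorkReport:
        ipaddress: str
        date: str            # e.g. "2002-03-24"
        username: str
        keyrange_start: int  # first key in the assigned range
        keyrange_size: int   # number of keys checked
        client_version: str

    r = WorkReport("10.0.0.1", "2002-03-24", "user@example.com",
                   0x0123456789ABCDEF, 1 << 28, "v2.8010")
    line = ",".join(str(v) for v in vars(r).values())  # a few dozen bytes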
Re:Optimization of Network Usage (Score:2)
Sweet! What sort of connection is that? The cable modem provider in my area offers very limited "business" symmetric connections up to 5 Mbps, but they charge dearly for it. A lot cheaper than a fractional T3, though.
Re:Optimization of Network Usage (Score:2, Informative)
All of the chatty, multi-step network communication overhead of dealing directly with the clients is handled at the fullserver level, including windowed-history-based coalescing of result submissions.
Aliens, crypto or cancer - what's your choice? (Score:3, Interesting)
For some time the only one around was seti@home, which analyzes noise from space, I think, in search of alien lifeforms. Then there's distributed.net doing crypto and math stuff (correct me if I'm wrong). And then there are people like Intel running medical research in areas like cancer and Alzheimer's.
I don't know about you, but to me medical research feels somewhat more beneficial to humanity than the search for aliens. Don't get me wrong, I'm not saying that the work done by seti and distributed isn't important or shouldn't be done, just that there's other research that might be more worthwhile to support.
That's just my opinion, but if you feel the same way, check out this site [intel.com].
Re:Aliens, crypto or cancer - what's your choice? (Score:4, Informative)
d.net was around a long time before SETI@home - I've personally been running the client since 1997. SETI@home launched on May 13, 1999 (though they were fundraising and doing development for a couple of years before that).
I'm personally strongly interested in cryptography for various reasons, so d.net gets my processor time. I seem to recall various people have concerns about how exactly the cancer project will use the eventual data it collects - i.e. whether the products produced as a result of the project will be commercially exploited - they don't want companies just using this large distributed network to make a fast buck.
Re:Aliens, crypto or cancer - what's your choice? (Score:1)
Thanks for pointing out the errors in fact, but still the cancer research appeals more to me personally even though I share the general concerns about the use of the results.
Re:Aliens, crypto or cancer - what's your choice? (Score:4, Insightful)
If d.net did something interesting, like attempt to find an improved factoring algorithm, or to find a way to perform interesting analysis on ciphertext, then it would be useful. Right now though, it's a 100% useless application.
Think for a moment about what d.net truly does, and tell me with a straight face that it's interesting to either a cryptologist or a cryptanalyst.
If you want to help somebody with your spare cycles, you can help cure diseases [intel.com] or if you're so inclined, you can perform FFTs on random noise. [berkeley.edu] Don't try to tell me that d.net helps anything though; you're kidding yourself if you think so.
distributed.net was useful in the past (Score:5, Interesting)
And they succeeded in this.
What I don't understand, however, is why they kept doing their cryptography projects afterwards. Proving that RC5-64 is breakable when you can buy 256-bit encryption freely is indeed just a stupid waste of CPU cycles and bandwidth.
I'd like to see them discontinue RC5-64, and concentrate their work on OGR and maybe on other, new projects.
Re:Aliens, crypto or cancer - what's your choice? (Score:2)
Not completely useless... it gives you an idea of how long it takes to count to a million, by ones, on general-purpose, widely available hardware.
For example, they showed that RC5-56 was not terribly secure, since it "only" took 250 days; similarly for DES in 22 hours (and I think they did RC5-48 before RC5-56). However, I think the time they've been taking on RC5-64 (over four years now, and nearly another year to go to exhaust the keyspace) shows that that key length is still fairly secure against "casual" attackers.
In conclusion, I think that d.net does help something -- it tells people "56 bits bad, 64 bits still decent". IMO.
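The back-of-the-envelope scaling behind those numbers: each extra key bit doubles the keyspace, so at the throughput that finished RC5-56 in ~250 days, RC5-64 would take about 2^8 = 256 times as long. The network has grown enormously since, which is why the real figure is years rather than centuries:

    # Naive brute-force scaling: time grows as 2^(extra bits).
    days_rc5_56 = 250                 # d.net's RC5-56 run took ~250 days
    factor = 2 ** (64 - 56)           # 256x more keys in RC5-64
    days_rc5_64 = days_rc5_56 * factor
    print(days_rc5_64 / 365.25)       # ~175 years at 1997-era throughput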
Re:Aliens, crypto or cancer - what's your choice? (Score:2)
Depends on your definition of casual, but in any case, determining how long a brute force attack might take is still useful. Many security experts use 20 years as the benchmark for how long something should be safe from an attack. In the extreme, this means that if we are able to complete the RSA challenge before 2016 or so (I don't remember exactly when they offered the challenges), then RC5-64 isn't secure.
Admittedly, that's a very extreme view, but given the progress that a group of volunteers has been able to make against RC5-64 I hope it shows that nothing that needs long-term protection should be encrypted with RC5-64 (imagine how long it would take to brute force RC5-64 in 2010, for example).
Re:Aliens, crypto or cancer - what's your choice? (Score:2)
Re:Aliens, crypto or cancer - what's your choice? (Score:2)
Sorry, my CPU time is taken. (Score:3, Funny)
Re:Aliens, crypto or cancer - what's your choice? (Score:1)
As for me... well, I like the idea of only my intended recipient reading my emails - if the gov't wants the keys they can have them, but joe bloggs sitting downstream of my mail packet... can sod off.
Re:Aliens, crypto or cancer - what's your choice? (Score:5, Interesting)
If the distributed cancer network weren't there, and if it's really performing a genuinely useful job for the companies, you can be sure they'd be investing the $x million required to just buy a supercomputer or three to do it for them. So the only difference I see the cancer project making is that it's saving huge pharmaceutical firms a few million dollars. The world's cryptographers, most of whom are academics (ignoring the NSA-employed ones for a minute), wouldn't have millions of dollars to throw around if d.net weren't there - and neither would the mathematicians interested in the results of d.net's other project, Optimal Golomb Rulers. As a result, I see d.net as making more of a difference than the cancer stuff.
All that said, those are my reasons for running d.net - you've got your own reasons, and it's your own choice.
Re:Aliens, crypto or cancer - what's your choice? (Score:2)
Any treatments developed by this will be priced comparably with other treatments of a given effectiveness. Reducing the development costs only allows them to increase their profit margins, in effect subsidizing other drug development efforts with higher costs and less competitive pricing opportunities. Or, more cynically, increasing the compensation available to high-level executives, which will likely happen anyway.
Capitalists almost NEVER use reduced production costs as an incentive to lower prices without serious competitive pressure, and even then the greater temptation is to act like an illegal cartel and keep prices higher. They almost always price goods and services comparably to like goods in the market and enjoy the higher profit margins.
You can argue that this is good for business, or you can argue it's good for consumers in the long run; I think there are points to be made on both ends. But it still leaves a nagging question about who dies and who doesn't because some marketing guy needs a bigger boat.
Re:Aliens, crypto or cancer - what's your choice? (Score:1)
Amnesty International, amongst others, makes use of strong cryptography to be able to work in some of the less liberal countries - without strong crypto they'd be fubar'd.
Your reaction is typical of the brainwashing the various supposedly open gov'ts have purveyed over the last few years...
Re:Aliens, crypto or cancer - what's your choice? (Score:2)
I still like seeing my clients from my first job 4 years ago still submitting packets; it gives me a nice feeling.
I'm sure the aliens will cure all our diseases.... (Score:2)
Re:Aliens, crypto or cancer - what's your choice? (Score:2, Interesting)
Distributed Medical Research... (Score:2)
However, it does mention that finding drugs to combat various diseases is a first priority. So I assume that a particular pharmaceutical company would benefit from this, as would the small percentage of people with cancer who also have private health insurance.
I would want my CPU time going into open-source medicine, and not someone else's patent that will be abused to make the most money possible.
I'm not saying that this is the case with Intel's distributed cancer-curing client, but it kind of looks like that, given the lack of details about beneficiaries.
Anyone know for sure?
I might email them...
Windows on Intel - what's your choice? (Score:2)
That leaves an awful lot of non-Intel boxes, and even non-Windows Intel boxes, with spare cycles that can't participate. Until they have the option to do so, I anticipate a lot of cycles going to 'less worthy' causes...
that's a lot of bandwidth (Score:4, Interesting)
Re:that's a lot of bandwidth (Score:1)
Re:that's a lot of bandwidth (Score:1)
A continuous three Megabits per second works out to somewhere just under a Terabyte a month. Not going to be cheap.
It's going to run about $600/month plus server costs. Bandwidth prices have been dropping rapidly over the past year.
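The arithmetic is easy to verify:

    # 3 megabits/second sustained, converted to monthly transfer.
    bits_per_sec = 3_000_000
    seconds_per_month = 60 * 60 * 24 * 30
    bytes_per_month = bits_per_sec * seconds_per_month / 8
    print(bytes_per_month / 1e12)     # ~0.97 TB: "just under a terabyte"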
Don't laugh! (Score:1)
more than 160000 PII 266MHz computers
This is a present? For me?
Cool, but they should have presented me with 10,000 Athlon 2000+s plus 10,000 GeForce4s and 10,000 Game Theater XPs instead. I mean, does anybody know why they use the PII 266 MHz as a reference?
Re:Don't laugh! (Score:2, Insightful)
Because they started RC5-64 over four years ago and probably didn't change their frame of reference since then, only the multiplier.
Sort of like how some PC magazines do benchmarks of things such as hard drives with old systems, to ensure that you can compare last week's results with some numbers published two years ago, in a semi-meaningful way since the only thing changed is the different hard drive.
Re:Don't laugh! (Score:2)
Big number in the front... ooohh look, shiny things.
Think of all the cpu cycles wasted!!! (Score:1, Insightful)
Pointless project by now (Score:2, Interesting)
Originally, the point they wanted to make was that 64-bit RC5 was not strong enough to protect privacy.
They started, what, 4-5 years ago? About 30,000 computers running for 4 years can't break 64-bit encryption. Geez, I'd say that, if anything, the conclusion would be that 64 bits is plenty for shopping etc., unless you've got some really _big_ secrets. Certainly plenty for day-to-day mail. More or less the opposite of what they wanted to prove.
Nowadays they've added the OGR stuff to appear at least a bit more useful, but in reality the applications of those results are very limited.
Really, the right thing to do is not to waste power on such pointless projects.
--
GCP (Moderation suggestion: -1 Disagree)
Re:Pointless project by now (Score:1)
I would be very surprised if the future of the 'net didn't focus on sharing computational power, distributed computing, and storage en masse. As such, the distributed.net effort is a great place to start learning from.
Of course, there are still hundreds of other issues not covered by dnet. The reality of the 'net is that it is STILL in its infancy. It has a lot of growing up to do, and a lot of issues still need to be solved that are currently buried deep in its complexities. Such as search engines - as the web grows, these become increasingly flaky.
What it costs. (Score:1)
So, they have been getting $3000 per month or more of free bandwidth and rack space.
IMHO, if their work is really important, they should be able to raise $36K per year from the crypto community.
Don't link to the original... (Score:1)
Of course, don't bother trying if Google hasn't had time to cache the site yet...
keyservers? (Score:1)
Re:keyservers? (Score:3, Informative)
However, this doesn't reduce the bandwidth at the keymaster any further, since this sort of splitting is already also being done at a larger scale between the keymaster and fullservers (and the bandwidth issue is with the keymaster, not the fullservers).
Folding @ Home on Linux (and URL) (Score:1)
I prefer to use my spare cycles for Medical research.
http://folding.stanford.edu [stanford.edu]
Folding @ Home on Mac OS X (Score:1)
(same URL)
Folding at Home [stanford.edu]
Issues Resolved? (Score:2)
I just saw this statement at the bottom of their front page:
distributed.net and United Devices join forces: "distributed.net and United Devices have announced a partnership which will combine the skills and experience of distributed.net with the commercial backing of United Devices. Several distributed.net volunteers are leaving their old day jobs and joining United Devices full time. United Devices will be providing distributed.net with new hardware and hosting services, as well as sponsoring a donation program that will help support distributed.net's charitable activities."
I guess they are okay for the time being?
Re:Issues Resolved? (Score:5, Informative)
Note that several of the distributed.net volunteer staff (including myself) do indeed work for United Devices during the day, and that our employment there began a while ago (more than 15 months ago), so that partnership announcement is not really related.
Someone please explain... (Score:3, Insightful)
Their site is popular enough that it would seem to be a good time to experiment with moving the HTTP stuff to Freenet, since it's only updated once per day. The people willing to download the dnet client would seem to be among the most willing to download the Freenet client. Freenet is designed so that the slashdot effect actually increases the reliability and speed of access for commonly requested data. Distributed.net would seem to have reached a critical mass of readership sufficient for reasonable reliability of its Freenet page. You could have the client get your team and individual scores sent to it as part of the block submission confirmation.
It would seem to me that they could arbitrarily reduce their bandwidth requirements by increasing the minimum size of the keyspace portions they're handing out. It would seem that their per-work-unit traffic would be (or could be made) the same regardless of the size of the work units. Bigger work units are really only a problem for clients that are turned off and on regularly. The client still only needs to keep track of its current state (the current key, in the case of RC5), the final state of the work unit (the last key to check, for RC5), and the running checksum for the work unit. None of these grow in memory requirements as you increase the work unit size. 99% of the people don't know the work unit size anyway, so changing it won't cause many people to complain, particularly if it's necessary to keep dnet hosted.
Unless I'm mistaken, the server really only needs to send the client a brief prefix identifying the message as a work unit, followed by "start" and "stop" points for the computation. For RC5, this would mean a 64-bit starting key and a 64-bit ending key. I haven't sat down and worked out the canonicalization scheme for Golomb rulers, but it seems that they are countable (in the combinatorics sense, not the kindergarten sense) and could be represented fairly compactly. The current minimum ruler length need not be sent, since you'd probably always want the client to send back the minimum ruler length in its work unit anyway. The client would need to send back a work unit identifier (this could be left out, but it's not strictly safe) and an MD5 sum of all of the computational results, or some other way to compare results between duplicated work units. (A certain percentage of the work units are actually sent to multiple clients in order to check that everyone is playing fairly.)
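A sketch of how compact such messages could be; the field layout and sizes here follow this comment's own proposal, not d.net's actual protocol:

    # Toy encoding of the proposed messages (not d.net's real format).
    import hashlib
    import struct

    def encode_work_unit(start_key, end_key):
        # 1-byte type tag + two 64-bit keys = 17 bytes server -> client
        return struct.pack(">BQQ", 0x01, start_key, end_key)

    def encode_result(unit_id, partial_results):
        # unit id + MD5 over all computational results, so duplicate
        # assignments of the same unit can be checked against each other
        digest = hashlib.md5(partial_results).digest()
        return struct.pack(">Q", unit_id) + digest  # 24 bytes client -> server

    msg = encode_work_unit(0, (1 << 28) - 1)
    reply = encode_result(42, b"residues-from-checked-keys")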
Plug: My guide to distributed computing... (Score:2)
64 bits is enough (Score:2)
We can help. (Score:2)
Please feel free to email sales@tiernetworking.com [mailto] for more information.
Try your webhosting provider (Score:2)
JOhn
Re:Quote of the Day [QOTD] (Score:1)
I told that girl [coderz.net] not to rip off fragile experimental projects.
After all, she's a lady, and she should understand that it's a cowardly action to exploit big security holes... or not. Abusing distributed.net just because big security holes exist there is wrong.
We cannot rely on crackers' compassion; distributed.net must find a way to bring client security up to a better level.
Re:Free Cycles? (Score:1)
Little processing power is required to handle the I/O between machines, data compression, and validation, compared to the CPU cycles that will be spent on "internal" processing. This is true for both boxed clusters and distributed.net.
Cluster management doesn't make much of an impact, so the CPUs can do all stages of processing "in-house"; but due to its different architecture, distributed.net needs power to handle "160000 PII 266 MHz".
Re:Reality (Score:1, Insightful)
That would lead to a lot of blocks being processed more than once before the entire keyspace was exhausted, increasing the time required.
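That overhead is quantifiable: if clients just picked blocks uniformly at random with no coordination, covering all N blocks is the classic coupon-collector problem and costs about N ln N draws; for a billion blocks that's roughly a 20x blow-up over coordinated handout:

    # Expected draws to cover all N blocks when chosen at random
    # (coupon collector): N * H_N ~= N * ln N for large N.
    import math

    def expected_draws(n):
        return n * math.log(n)        # good approximation for large n

    n = 10 ** 9
    print(expected_draws(n) / n)      # ~20.7x the work of coordinated handout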
Re:Reality (Score:1)
A real experiment would not hand out the same block to only one person. Heck, check out UD; they claim to send the same block to at least five people. And I agree it's a good idea. The chance that 5 people are all cheats is less than the probability that a single one is.
Besides, there are teams of thousands on dnet. Why don't they organize their own cache? We already know quite a bit of the keyspace is invalid.
My point is there is really no need for dnet other than stats and glory. My single computer stands about as much chance of finding the key inside dnet as outside it.
Tom
Re:Reality (Score:1)
'cept that you probably wouldn't write a search client. I know that you seem to be saying that they're nothing and you're something, but *they're actually doing something*.
2 Mbps DSL (Score:2)
Re:Solution: make money. (Score:2, Insightful)
Getting donations of bandwidth and hosting is harder because those are ongoing commitments (including potential staff-support, and physical colo access, etc).
Direct money donations are also somewhat hard to get. Fortunately distributed.net is a 501(c)(3) organization, which means anyone can donate and receive an income tax writeoff (see articles of incorporation [distributed.net]). Tax day is coming up soon, folks! :)
Re:Solution: make money. (Score:2)
Re: GOOGLE! (Score:1)
Hmmz... I really hope I can disable that option in the Google toolbar when it's fully operational. I prefer distributed.net over whatever project Google is going to choose.
Re:So 3Mb/s huh? (Score:4, Informative)