"Evolution of the Internet" Powers Massive LHC Grid 93
jbrodkin brings us a story about the development of the computer network supporting CERN's Large Hadron Collider, which will begin smashing particles into one another later this year. We've discussed some of the impressive capabilities of this network in the past.
"Data will be gathered from the European Organization for Nuclear Research (CERN), which hosts the collider in France and Switzerland, and distributed to thousands of scientists throughout the world. One writer described the grid as a 'parallel Internet.' Ruth Pordes, executive director of the Open Science Grid, which oversees the US infrastructure for the LHC network, describes it as an 'evolution of the Internet.' New fiber-optic cables with special protocols will be used to move data from CERN to 11 Tier-1 sites around the globe, which in turn use standard Internet technologies to transfer the data to more than 150 Tier-2 centers. Worldwide, the LHC computing grid will be comprised of about 20,000 servers, primarily running the Linux operating system. Scientists at Tier-2 sites can access these servers remotely when running complex experiments based on LHC data, Pordes says. If scientists need a million CPU hours to run an experiment overnight, the distributed nature of the grid allows them to access that computing power from any part of the worldwide network"
I'm impressed (Score:1)
Re: (Score:1, Funny)
Re: (Score:1, Offtopic)
Re: (Score:3, Informative)
Re:Waste of good fiber. (Score:5, Insightful)
There's really no reason to use redirects or tinyurl on
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Security? (Score:1)
Re: (Score:1, Informative)
Bitch... (Score:2, Insightful)
Re: (Score:1)
Re: (Score:2)
They save all the sensor data for events that pass the first relevancy triggers, but the vast majority is discarded.
Re:Bitch... (Score:4, Funny)
Re: (Score:2)
Shame on the editors, though.
outsource (Score:1)
Re: (Score:2)
This is already after the "realtime" stage, which is done directly in hardware and only keeps the 0.001% or so of events that are deemed worthwhile to analyse.
Otherwise, they would need exabit connections...
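To put rough numbers on that, a quick illustrative calculation; the bunch-crossing rate and event size below are assumed round figures, and only the ~0.001% acceptance comes from the post above.

    # Rough scale of the hardware-trigger step. Collision rate and event size
    # are illustrative round numbers, not official figures; the 0.001%
    # acceptance is the figure quoted above.
    collisions_per_sec = 40e6   # assumed ~40 MHz bunch-crossing rate
    bytes_per_event = 1e6       # assumed ~1 MB of raw detector data per event
    acceptance = 1e-5           # ~0.001% of events kept by the trigger

    raw_rate = collisions_per_sec * bytes_per_event   # bytes/sec before the trigger
    kept_rate = raw_rate * acceptance                 # bytes/sec actually shipped out

    print(f"raw:  {raw_rate / 1e12:.0f} TB/s")   # ~40 TB/s
    print(f"kept: {kept_rate / 1e6:.0f} MB/s")   # ~400 MB/s

Even with generous assumptions, the raw stream is hopeless to ship anywhere; only the triggered slice is small enough to put on a network.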
Some Realtime (Score:5, Interesting)
For the Tier 1 sites a significant fraction of the data is raw 'sensor' (we call it detector) data. This allows the reconstruction program (which converts the data into physics objects like electrons, muons, jets, etc.) to be rerun on the data once bugs in the initial reconstruction program have been fixed.
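A toy sketch (nothing to do with the real CERN software) of why keeping the raw data around matters: the reconstruction pass can simply be rerun over the same events after a bug fix.

    # Toy illustration only: raw events are kept immutable, so a fixed
    # reconstruction can be rerun over them at any time.
    raw_events = [
        {"hits": [1.2, 3.4, 5.6]},   # stand-ins for raw detector readout
        {"hits": [2.1, 0.3]},
    ]

    def reconstruct_v1(event):
        # buggy first pass: accidentally drops the last hit
        return {"objects": event["hits"][:-1]}

    def reconstruct_v2(event):
        # fixed pass: uses every hit
        return {"objects": list(event["hits"])}

    first_pass  = [reconstruct_v1(e) for e in raw_events]   # initial physics objects
    second_pass = [reconstruct_v2(e) for e in raw_events]   # rerun on the same raw data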
And this, ladies and gentlemen... (Score:1)
Re:All that and we still have no anti-gravity (Score:5, Funny)
Re: (Score:1)
Then the less strong will be the new weak. Then you will want THAT increased!
When will you be happy?
Re: (Score:3, Interesting)
So, what have you done today to help make science fiction closer to reality?
I worked on the board layout for my rocket test stand data acquisition system. Sure, it's far removed from a trip to Mars, but you have to start somewhere. I'll bet you can't even say that much.
If you're unwilling to put forth any effort, quit bitching at those who are.
Re: (Score:3, Interesting)
I work in my spare time on an open source project called factdiv. The idea is to use FACTOR as a problem to learn how to attack complexity itself. Complexity problems underlie all the great open questions in science, and so if you can solve those, you sorta solve them all.
So far, results haven't been all that great, but someone will get there. If we do, then we can have computers answer questions, like, how to take 10,000,000 part
Re: (Score:2)
Re: (Score:2)
The whole post was a joke, and nobody got it.
Re: (Score:1)
Re: (Score:2)
That's going to be difficult because of all the complications with the folding nacelles and power couplings...
How do you know this? (Score:5, Insightful)
How do you know this? One possibility is that there are more than 3 space dimensions. If this is the case AND the LHC has enough energy to access them, we could well end up being able to study quantum gravity at the LHC. This might not give us flying cars, but in order to utilize something it is first necessary to understand it.
Basically, physics is a total failure, and that's why there are no flying cars or nuclear fusion...
It depends on what you think the goals of physics are. As a physicist myself I would define them as "to understand how the Universe works". While we still have a long way to go, physics has by no means been a failure in that regard. We understand far more about how the Universe works than we did 50 or 100 years ago. Whether or not we can produce flying cars or fusion reactors depends on HOW the Universe works. To say that physics is a failure because these things are extremely hard to produce would be like saying that Columbus' expedition was a total failure because he didn't get to India. You cannot complain physics is a failure just because the Universe does not work the way that YOU want it to - we study the laws of physics, we don't get to make them... although it would be interesting if we could!
Re:How do you know this? (Score:4, Funny)
(Yes, I'm an engineer. And, I admit, I'm slacking.)
Re: (Score:2)
(Yes, I'm an engineer. And, I admit, I'm slacking.)
Re: (Score:2)
Unfortunately suing car manufacturers for failure to produce a flying car is not an attempt that is likely to succeed.
Re: (Score:2)
Lawyers have made the concept of a flying car all but impossible because of liability concerns. As is, they are the cause of aviation costing *double* what it should.
The question is... (Score:4, Funny)
Re: (Score:1)
Or wait.. that just means it's not "Vista Premium" capable...
*dreams of the profits that selling 20K Vista licenses would bring in*
..muahahaha...hahahah......HAHAHAHAHAHAHA*snort*HAHAHAHAHAAHAAA!
They should be expecting a letter any day now... (Score:1)
This is exactly the sort of asshattery I would expect from an organization headed by Ralph Yarro and Darl McBride.
Lucky they're not using Windows Server (Score:2)
15 Petabytes (Score:2, Informative)
The collisions will produce much more data, but "only" 15 PB of that will be permanently stored. That's a stack of CDs 20km high. Every. Year.
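A quick check of the stack-height claim, assuming ordinary 700 MB discs that are 1.2 mm thick (both round numbers): it lands in the same ballpark as the 20 km quoted above.

    # Sanity check of the "stack of CDs" figure.
    data_per_year = 15e15      # 15 PB, from the post above
    cd_capacity = 700e6        # assumed ~700 MB per CD
    cd_thickness = 1.2e-3      # assumed ~1.2 mm per disc

    cds = data_per_year / cd_capacity
    height_km = cds * cd_thickness / 1000
    print(f"{cds / 1e6:.0f} million CDs, a stack ~{height_km:.0f} km high")   # ~21 million CDs, ~26 km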
Re: (Score:1)
Re: (Score:1)
OK, let's put it into Libraries of Congresses.
The James Madison building alone has about 424k m^3 of assignable space (assuming a height of 10 feet of assignable space). The stack of CDs takes up 288 m^3, assuming 12x12 cm packaging. So, assuming the other two library buildings burnt down, that would be roughly 1/1500 of a Library of Congress.
Bah.
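Redoing that comparison with the same assumptions:

    # Volume comparison using the parent's own numbers.
    stack_height_m = 20_000        # the 20 km stack of CDs from the earlier post
    cd_footprint_m2 = 0.12 * 0.12  # 12 cm x 12 cm packaging, as assumed above
    madison_volume_m3 = 424_000    # assignable space quoted for the James Madison building

    stack_volume_m3 = stack_height_m * cd_footprint_m2     # = 288 m^3
    fraction = stack_volume_m3 / madison_volume_m3
    print(f"{stack_volume_m3:.0f} m^3, about 1/{1 / fraction:.0f} of the building")   # ~1/1472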
Re: (Score:2)
"fiber-optic cables with special protocols" (Score:2)
Re: (Score:3, Interesting)
I suspect the "special protocols" they are referring to are about the data transfer protocols (GridFTP for data movement), not some wonky Layer-1 protocol. However, these folks, like I2, have been investing in dynamic-circuit equipment, meaning that sites could eventually get dedicated bandwidth between any two installations.
Re: (Score:1)
Re: (Score:3, Informative)
"Parallel Internet"? Pfft. (Score:1)
(Unless it's like the parallel Goatee Universe in ST:TOS. In which case all the women will be dressed opaquely from head to toe? Or they will all have beards?)
Re: (Score:1)
Re: (Score:3, Funny)
Re: (Score:2)
Re: (Score:3, Informative)
A million hours overnight? (Score:1)
Will Pixar be able to render their movies overnight now?
Re: (Score:3, Insightful)
Re: (Score:2)
Yeah... (Score:3, Funny)
Re: (Score:1)
But does it run... (Score:5, Interesting)
Then you've got a bunch of scientists who are fundamentally geeks
And it's all being set up in Europe, which isn't as firmly under the grip of MS
As a bonus:
They need the ability to look back and explain all their analysis, which means they have to see the source.
It costs a hell of a lot to get the data, so they don't want to lose any data anywhere.
They have a lot of results to analyse, so they don't want to be waiting for a server to come back online.
Could they have gone with BSD? Probably, but most science tools are developed for Linux.
And of course the most important question... (Score:1)
Re: (Score:2)
That was fast. (Score:1)
Intelligent Design of the Internet? (Score:4, Funny)
Re: (Score:1)
You can help too (Score:5, Informative)
Warning (though probably not an issue for this crowd): joining OSG (http://www.opensciencegrid.org/) is a bit more complicated than loading up BOINC or folding@home. It requires a stack of middleware that is distributed as part of OSG's software. Most of the sites, I believe, use Condor (http://www.cs.wisc.edu/condor/). If you would like to get Condor up and running quickly, the best way is to use ROCKS (http://www.rocksclusters.org/wordpress/) with the Rocks Condor "Roll" (a Roll is Rocks jargon for an add-on software package). Then, after getting your Condor flock up and running, you can load the Open Science Grid stuff on it.
I'm currently running a small cluster of PCs that were destined to be excessed (P4s, 3 or 4 years old) and have seen jobs come in and process on my computers! And, to boot, you can configure BOINC to act as a backfill mechanism, so that when the systems are not running jobs from OSG they can run BOINC and whatever project you've joined through it.
BTW...all of the software mentioned is funded under grants from the National Science Foundation - primarily via the Office of CyberInfrastructure but some through other Directorates within NSF.
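For a sense of what feeding work into such a Condor pool looks like, here is a minimal, hypothetical sketch (file names and the executable are placeholders): write a submit description and hand it to condor_submit.

    # Minimal, hypothetical Condor job submission.
    import subprocess

    submit_description = """\
    universe   = vanilla
    executable = analyze.sh
    output     = analyze.out
    error      = analyze.err
    log        = analyze.log
    queue
    """

    with open("analyze.sub", "w") as f:
        f.write(submit_description)

    subprocess.run(["condor_submit", "analyze.sub"], check=True)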
Re:You can help too (Score:4, Informative)
It's also not for the faint of heart. While the OSG software installation process has gotten much better over the last couple of years, it still takes several hours for an experienced admin to get a new site up and running, and that's assuming you already have your cluster and batch system (such as Condor or PBS) already configured correctly. If you are new to the OSG, then it is likely to take a week or more before your site is ready for outside use.
Our organization has found that it takes at least one full time admin to manage a medium-sized OSG cluster (~100 PCs), though you can probably get away with less effort for a smaller cluster.
This isn't meant to be criticism against the OSG; I think they've done great work in building up a grid infrastructure in the US. I just want to emphasize that supporting a OSG cluster is a non-trivial effort.
Re: (Score:2)
'active' is a bit of an understatement. You need to be willing to provide long term support for the resources that you volunteer to the OSG, including frequent upgrades of the OSG middleware. A resource that joins the OSG for 3 months and then leaves is not going to provide much benefit to the larger OSG community.
It's also not for the faint of heart. While the OSG software installation process has gotten much better over the last couple of years, it still takes several hours for an experienced admin to get a new site up and running, and that's assuming you already have your cluster and batch system (such as Condor or PBS) already configured correctly. If you are new to the OSG, then it is likely to take a week or more before your site is ready for outside use.
Our organization has found that it takes at least one full time admin to manage a medium-sized OSG cluster (~100 PCs), though you can probably get away with less effort for a smaller cluster.
This isn't meant to be criticism against the OSG; I think they've done great work in building up a grid infrastructure in the US. I just want to emphasize that supporting a OSG cluster is a non-trivial effort.
ABSOLUTELY.
You could not have said it better. Much better than I did. Of course, you don't necessarily have to run a BIG cluster. Even one with 10 or 20 processors can be of use to people.
I bet you a dollar it doesn't... (Score:1, Funny)
correction for TFA (Score:1)
Wrong. Caltech oversees the infrastructure for the US LHC network. The OSG provides the middleware and grid operations center for the computing and storage resources in the US that are part of the LHC experiments. The OSG does not manage or oversee communications networks.
Tech/$/second > Science/$/second (Score:5, Insightful)
Cooling (Score:1)
nuke scientists doing this!! (Score:1)
who gave them their degrees?