
"Evolution of the Internet" Powers Massive LHC Grid 93

jbrodkin brings us a story about the development of the computer network supporting CERN's Large Hadron Collider, which will begin smashing particles into one another later this year. We've discussed some of the impressive capabilities of this network in the past. "Data will be gathered from the European Organization for Nuclear Research (CERN), which hosts the collider in France and Switzerland, and distributed to thousands of scientists throughout the world. One writer described the grid as a 'parallel Internet.' Ruth Pordes, executive director of the Open Science Grid, which oversees the US infrastructure for the LHC network, describes it as an 'evolution of the Internet.' New fiber-optic cables with special protocols will be used to move data from CERN to 11 Tier-1 sites around the globe, which in turn use standard Internet technologies to transfer the data to more than 150 Tier-2 centers. Worldwide, the LHC computing grid will comprise about 20,000 servers, primarily running the Linux operating system. Scientists at Tier-2 sites can access these servers remotely when running complex experiments based on LHC data, Pordes says. If scientists need a million CPU hours to run an experiment overnight, the distributed nature of the grid allows them to access that computing power from any part of the worldwide network."
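The "million CPU hours overnight" figure implies a concrete degree of parallelism; a quick back-of-the-envelope check (the 10-hour night is an assumption):

```python
# How many cores does "a million CPU hours overnight" imply?
cpu_hours_needed = 1_000_000
overnight_hours = 10  # assumed length of "overnight"

cores_needed = cpu_hours_needed / overnight_hours
print(f"~{cores_needed:,.0f} cores running in parallel")  # ~100,000 cores
```

That is far more than any single Tier-2 site owns, which is exactly why the jobs fan out across the whole grid.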

Comments Filter:
  • by Yvan256 ( 722131 ) on Wednesday April 23, 2008 @01:24PM (#23173714) Homepage Journal
    warning: this is a "*.notlong.com" link... DO NOT CLICK.
  • Re:Security? (Score:1, Informative)

    by Anonymous Coward on Wednesday April 23, 2008 @01:43PM (#23173912)
There is a lot of quite fancy security involved. All users need an X.509 certificate to submit jobs.
  • 15 Petabytes (Score:2, Informative)

    by Anonymous Coward on Wednesday April 23, 2008 @01:45PM (#23173940)
    "The LHC collisions will produce 10 to 15 petabytes of data a year"

    The collisions will produce much more data, but "only" 15 PB of that will be permanently stored. That's a stack of CDs 20km high. Every. Year.
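The arithmetic behind that stack is easy to check (assuming ~700 MB per CD and ~1.2 mm per disc):

```python
# Sanity-check the "stack of CDs 20 km high" claim for 15 PB/year.
PB = 10**15                  # petabyte, decimal bytes
cd_capacity = 700 * 10**6    # ~700 MB per CD
cd_thickness = 1.2e-3        # ~1.2 mm per disc, in metres

data_per_year = 15 * PB
n_cds = data_per_year / cd_capacity
stack_height_km = n_cds * cd_thickness / 1000
print(f"{n_cds:,.0f} CDs, stack ~{stack_height_km:.0f} km high")
```

With these assumptions it comes out nearer 26 km, so "20 km high" is if anything an understatement.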
  • You can help too (Score:5, Informative)

    by Danathar ( 267989 ) on Wednesday April 23, 2008 @02:29PM (#23174432) Journal
What a lot of people don't know is that if you want to join a cluster to the Open Science Grid and you are a legitimate organization, more than likely they will let you join. Just be sure you understand your responsibilities, since it's an active commitment. If you are a school or a computer user group/club, go to the Open Science Grid website and start reading up.

Warning (though probably not needed for this crowd): joining OSG (http://www.opensciencegrid.org/) is a bit more complicated than loading up BOINC or folding@home. It requires a stack of middleware that is distributed as part of OSG's software. Most of the sites, I believe, use Condor (http://www.cs.wisc.edu/condor/). If you would like to get Condor up and running quickly, the best way is ROCKS (http://www.rocksclusters.org/wordpress/) with the Rocks Condor "Roll" (Rocks jargon for a pre-packaged Condor cluster add-on). Then, after getting your Condor flock up and running, you can load the Open Science Grid stack on top of it.
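Once the flock is up, work goes into the pool via a submit description file; a minimal sketch (the executable and file names here are purely illustrative):

```
# sleep.sub - minimal Condor submit description (illustrative)
universe   = vanilla
executable = /bin/sleep
arguments  = 60
log        = sleep.log
output     = sleep.out
error      = sleep.err
queue 1
```

You hand it to the pool with `condor_submit sleep.sub` and watch it with `condor_q`.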

I'm currently running a small cluster of PCs that were destined to be excessed (P4s, 3 or 4 years old) and have seen jobs come in and run on my machines! And, to boot, you can configure BOINC as a backfill mechanism, so that when the systems are not running jobs from OSG they run BOINC and whatever project you've joined through it.
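The backfill hookup is just a few lines of Condor configuration; a sketch from memory of the Condor manual's backfill section (knob names and paths should be verified against your version's documentation):

```
# condor_config.local - run BOINC when the node is otherwise idle
ENABLE_BACKFILL  = TRUE
BACKFILL_SYSTEM  = BOINC
BOINC_Executable = /usr/local/bin/boinc_client
BOINC_InitialDir = /var/lib/boinc
BOINC_Owner      = boinc
```

Condor kicks the BOINC client off the node as soon as a real grid job arrives.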

    BTW...all of the software mentioned is funded under grants from the National Science Foundation - primarily via the Office of CyberInfrastructure but some through other Directorates within NSF.
  • by vondo ( 303621 ) on Wednesday April 23, 2008 @02:35PM (#23174530)
It has nothing to do with ISPs. The Tier-1 sites are the largest sites around the world, with thousands of CPUs and petabytes of storage to handle the influx of data. Typically there is no more than one Tier-1 per country per experiment. Tier-2s in this nomenclature are generally university sites that have O(100) CPUs and O(100) TB of disk.
  • by NatasRevol ( 731260 ) on Wednesday April 23, 2008 @03:09PM (#23174872) Journal
    It wasn't very black...
  • Re:You can help too (Score:4, Informative)

    by wart ( 89140 ) on Wednesday April 23, 2008 @03:35PM (#23175142) Homepage
'active' is a bit of an understatement. You need to be willing to provide long-term support for the resources that you volunteer to the OSG, including frequent upgrades of the OSG middleware. A resource that joins the OSG for 3 months and then leaves is not going to provide much benefit to the larger OSG community.

It's also not for the faint of heart. While the OSG software installation process has gotten much better over the last couple of years, it still takes several hours for an experienced admin to get a new site up and running, and that's assuming you already have your cluster and batch system (such as Condor or PBS) configured correctly. If you are new to the OSG, then it is likely to take a week or more before your site is ready for outside use.

    Our organization has found that it takes at least one full time admin to manage a medium-sized OSG cluster (~100 PCs), though you can probably get away with less effort for a smaller cluster.

This isn't meant as criticism of the OSG; I think they've done great work in building up a grid infrastructure in the US. I just want to emphasize that supporting an OSG cluster is a non-trivial effort.

"I've seen it. It's rubbish." -- Marvin the Paranoid Android