Supercomputing Science

CERN Launches Huge LHC Computing Grid

RaaVi writes "Yesterday CERN launched the largest computing grid in the world, which is destined to analyze the data coming from the world's biggest particle accelerator, the Large Hadron Collider. The computing grid consists of more than 140 computer centers from around the world working together to handle the expected 10-15 petabytes of data the LHC will generate each year." The Worldwide LHC Computing Grid will initially handle data for up to 7,000 scientists around the world. Though the LHC itself is down for some lengthy repairs, an event called GridFest was held yesterday to commemorate the occasion. The LCG will run alongside the LHC@Home volunteer project.
  • Re:Why so much? (Score:5, Informative)

    by imsabbel ( 611519 ) on Saturday October 04, 2008 @12:04PM (#25256235)

    Do you mean those particle trails in the "impact images"?
    Just think about the resolution of those trails.
    Add the 3rd dimension.
    And then consider that to build each trail, they need the data from ALL sensors in the volume to pick out what belongs to the trail.

    And then think about this happening 10 million times per second...

    They filter out all but a couple of thousand of them, but that still amounts to a lot of data.

    And the Higgs boson just doesn't appear in one single image. It might show up in certain types of cascades, or as anomalies in other processes, that only become obvious once a huge statistical base is evaluated.
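
    A rough back-of-the-envelope sketch of that filtering step, in Python. The raw event size here is an assumed round number for illustration, not a figure from the comment or from CERN:

        # Trigger reduction: keep a couple of thousand events out of ~10 million per second.
        collisions_per_second = 10_000_000    # "10 million times per second"
        events_kept_per_second = 2_000        # "all but a couple of thousand"
        assumed_event_size_mb = 1.0           # illustrative assumption only

        reduction_factor = collisions_per_second / events_kept_per_second
        unfiltered_tb_per_s = collisions_per_second * assumed_event_size_mb / 1e6
        kept_mb_per_s = events_kept_per_second * assumed_event_size_mb

        print(f"trigger keeps roughly 1 event in {reduction_factor:,.0f}")
        print(f"unfiltered: ~{unfiltered_tb_per_s:.0f} TB/s, kept: ~{kept_mb_per_s:,.0f} MB/s")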

  • by timboe ( 932655 ) on Saturday October 04, 2008 @01:51PM (#25256869)

    I don't know why they need such a big grid; according to The Inquirer they only create about 15 Gigs of data each year.

    No, 15 million gigs - now you see! And yes, a full detector read-out consists of every non-zero channel in the entire detector, which comes to about 3 MB per event, and we read out ~200 events/sec. And there are 4 main detectors, each doing this. That's not even mentioning the processor power needed to run statistical analyses on these data sets!
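
    A quick sanity check on those numbers in Python. The "seconds of beam per year" value is a rule-of-thumb assumption (~10^7 s of effective running), not something stated in the comment:

        # Per-detector read-out volume from the figures quoted above.
        event_size_mb = 3              # ~3 MB per full detector read-out
        events_per_second = 200        # ~200 events/sec kept by the trigger
        beam_seconds_per_year = 1e7    # assumed effective running time per year

        mb_per_second = event_size_mb * events_per_second
        pb_per_year = mb_per_second * beam_seconds_per_year / 1e9  # 1 PB = 1e9 MB

        print(f"one detector: ~{mb_per_second} MB/s, ~{pb_per_year:.0f} PB/year")
        # Four detectors, each reading out at its own (not identical) rate, land in the
        # same order of magnitude as the 10-15 PB/year quoted in the story.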

  • Re:Why so much? (Score:5, Informative)

    by lysergic.acid ( 845423 ) on Saturday October 04, 2008 @02:06PM (#25256953) Homepage

    consider that:

    • the LHC contains 150 million sensors collecting 700 MB per second.
    • the experiment accelerates beams composed of multiple "bunches" of 1.1 x 10^11 protons each.
    • each of the aforementioned beams contains 2808 "bunches."
    • when the beams converge they cause 600 million collisions per second.
    • each collision between two protons produces many smaller subatomic particles.

    Scientists track the paths of the resulting subatomic particles not only to find detectable post-collision phenomena, but also to see what is missing from those impact images (what their sensors cannot pick up). That is what lets them infer strange and interesting new particles that science has yet to discover. But in order to detect what is missing, they have to record everything that is there, and that means tracking perhaps tens of billions of particles and their trajectories in three dimensions, at very high resolution and very high sampling rates.
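
    Plugging the figures from that list into a quick Python sketch (no inputs beyond the numbers quoted above):

        # Beam contents and recorded data rate, from the bullet points above.
        protons_per_bunch = 1.1e11
        bunches_per_beam = 2808
        recorded_mb_per_s = 700        # ~700 MB/s from the detectors
        collisions_per_s = 600e6       # ~600 million collisions per second

        protons_per_beam = protons_per_bunch * bunches_per_beam
        recorded_tb_per_day = recorded_mb_per_s * 86_400 / 1e6

        print(f"protons circulating per beam: ~{protons_per_beam:.2e}")
        print(f"recorded at 700 MB/s: ~{recorded_tb_per_day:.0f} TB/day")
        print(f"from ~{collisions_per_s:.0e} collisions/s to sift through")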

  • by Gromius ( 677157 ) on Sunday October 05, 2008 @05:23AM (#25262311)
    Traditionally, particle physics doesn't use the data to "generate" theories as such. We use the data to measure various properties (W mass, Z->ll mass spectrum, lepton pT spectra), looking for discrepancies with the theory predictions. Then we (hopefully) go, oops, this doesn't agree with the theory, we'd better come up with another explanation. Recently it's been: ah, SM predictions confirmed *again*.
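
    A minimal sketch of what "looking for discrepancies with the theory predictions" can look like in practice (Python; the binned event counts are invented purely for illustration):

        import math

        # Toy binned measurement vs. Standard Model expectation (made-up numbers).
        observed  = [102,  98, 110,  95, 140]   # events counted per bin
        predicted = [100, 100, 100, 100, 100]   # SM prediction per bin

        # Per-bin "pull" and an overall chi-square; large values flag a possible discrepancy.
        pulls = [(o - p) / math.sqrt(p) for o, p in zip(observed, predicted)]
        chi2 = sum(pull ** 2 for pull in pulls)

        for i, pull in enumerate(pulls):
            print(f"bin {i}: pull = {pull:+.2f} sigma")
        print(f"chi^2 = {chi2:.1f} over {len(observed)} bins")
        # A chi^2 well above the number of bins (driven here by the last bin) is what
        # prompts a closer look -- and, as the parent says, it usually turns out to be
        # the SM again.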

    I can only really speak for CMS (one of the two big general-purpose experiments), but every experiment does similar things. Basically, the data is split into smaller datasets based on what we decided was interesting in the event (basically, which trigger fired). So we split it into events with electrons, muons, photons, jets (yes, events will have multiple of the above, but don't worry about how we deal with that). Then each physicist looking for a specific signature (i.e. a top quark, or in my case a high-mass e+e- pair) runs their custom homebrew statistical analysis (which uses common tools) to pick out the events they are interested in. There are also physicists who run custom-designed programs to pick out *any* discrepancy from theory predictions, but as these are more general, they aren't as sensitive as a dedicated analysis on a single channel.
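
    As a rough illustration of that "split by trigger, then run a dedicated selection" workflow (Python; the event fields, values, and threshold are made up for illustration, and real analyses run on the experiments' own frameworks rather than on dicts):

        from collections import defaultdict

        # Toy events: every field name and value here is invented for illustration.
        events = [
            {"id": 1, "trigger": "electron", "m_ee_gev": 91.2},
            {"id": 2, "trigger": "muon",     "m_ee_gev": None},
            {"id": 3, "trigger": "electron", "m_ee_gev": 612.0},
            {"id": 4, "trigger": "jet",      "m_ee_gev": None},
        ]

        # Step 1: split the data into streams based on which trigger fired.
        streams = defaultdict(list)
        for event in events:
            streams[event["trigger"]].append(event)

        # Step 2: a dedicated analysis picks its events out of one stream,
        # e.g. a search for a high-mass e+e- pair (threshold chosen arbitrarily).
        high_mass_ee = [e for e in streams["electron"]
                        if e["m_ee_gev"] is not None and e["m_ee_gev"] > 500.0]

        print(f"electron stream: {len(streams['electron'])} events")
        print(f"high-mass e+e- candidates: {[e['id'] for e in high_mass_ee]}")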
