Space Supercomputing

Simulating Galaxies With Supercomputers

An anonymous reader writes "Over in the UK, Durham University is tasking its supercomputing cluster with nothing less than recreating how galaxies are born and evolve over the course of billions of years. Even with 800 AMD processor cores at its disposal, the university is still hitting the limits of what is possible."
  • Waste of Time (Score:1, Informative)

    by MrTripps ( 1306469 ) on Tuesday September 14, 2010 @02:27PM (#33578040)
    Let me save those guys some time: 42
  • by rubycodez ( 864176 ) on Tuesday September 14, 2010 @02:30PM (#33578090)

    Mostly Opteron 175s (528 of them at 2.2 GHz with 1056 GB RAM total) and 285s (256 of them at 2.6 GHz with 512 GB RAM total), so about 2 GB RAM per core.

    They run Solaris 10 u3.

    http://icc.dur.ac.uk/icc.php?content=Computing/Cosma [dur.ac.uk]

  • Re:Waste of Time (Score:3, Informative)

    by tom17 ( 659054 ) on Tuesday September 14, 2010 @03:03PM (#33578636) Homepage
    "what do you get if you multiply six by nine"
  • by Anonymous Coward on Tuesday September 14, 2010 @03:17PM (#33578826)

    The GRAPE-5 does N-body simulations using specialized hardware that evaluates gravitational interactions faster than a general-purpose CPU: http://en.wikipedia.org/wiki/Gravity_Pipe [wikipedia.org]
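    (For the curious: what a GRAPE pipeline hardwires is essentially the per-pair force kernel. A minimal sketch in plain C with G set to 1; eps is an assumed Plummer softening length, and the numbers in main are made up for illustration, not anything GRAPE-specific:)

        #include <stdio.h>
        #include <math.h>

        /* Acceleration on a particle at xi[] due to mass mj at xj[],
           Plummer-softened so close encounters don't blow up (G = 1 units). */
        static void pair_accel(const double xi[3], const double xj[3],
                               double mj, double eps, double a[3])
        {
            double d[3], r2 = eps * eps;
            for (int k = 0; k < 3; k++) { d[k] = xj[k] - xi[k]; r2 += d[k] * d[k]; }
            double inv_r3 = 1.0 / (r2 * sqrt(r2));
            for (int k = 0; k < 3; k++) a[k] += mj * inv_r3 * d[k];
        }

        int main(void)
        {
            double xi[3] = {0, 0, 0}, xj[3] = {1, 0, 0}, a[3] = {0, 0, 0};
            pair_accel(xi, xj, 1.0, 0.01, a);
            printf("a = (%g, %g, %g)\n", a[0], a[1], a[2]);
            return 0;
        }

    (A GRAPE board evaluates many of these pairs per clock in dedicated pipelines, which is where the speedup over a general-purpose CPU comes from.)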

  • Re:Well... (Score:3, Informative)

    by Bootsy Collins ( 549938 ) on Tuesday September 14, 2010 @03:28PM (#33579026)
    That's correct. In a simulation like this, the most important physical effect to model is gravity, and gravity has no finite range: at each timestep, all the mass in the simulation interacts with all the other mass in the simulation. The people who write these codes use a variety of numerical tricks to keep the computation time from scaling as N^2, where N is the number of particles. But even with those tricks, to calculate the force on an individual particle you still have to account for the stuff outside your local volume. These are the problems you have to solve when you parallelize such a code: distributing it in an @home fashion would require so much inter-participant communication that, at this point, it wouldn't really be practical.
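    (A concrete illustration of those "numerical tricks": Barnes-Hut-style tree codes replace a distant clump of particles with a single point at the clump's center of mass whenever the clump subtends a small enough angle, cutting the cost from O(N^2) toward O(N log N). A minimal sketch in C; the node layout, THETA value, and the toy two-particle tree are illustrative assumptions, not any particular production code:)

        #include <stdio.h>
        #include <math.h>

        #define THETA 0.5   /* opening angle: smaller = more accurate, slower */

        typedef struct Node {
            double com[3];          /* center of mass of everything below this node */
            double mass;            /* total mass below this node */
            double size;            /* side length of the node's cube */
            struct Node *child[8];  /* subnodes; unused entries are NULL */
            int nchild;             /* 0 marks a leaf */
        } Node;

        /* Accumulate the acceleration at pos[] due to the tree under n (G = 1 units). */
        static void accel(const Node *n, const double pos[3], double a[3])
        {
            double d[3], r2 = 1e-10;  /* tiny softening avoids division by zero */
            for (int k = 0; k < 3; k++) { d[k] = n->com[k] - pos[k]; r2 += d[k] * d[k]; }
            double r = sqrt(r2);

            /* Leaf, or distant and compact (size/r < THETA): treat the whole
               node as one point mass at its center of mass. */
            if (n->nchild == 0 || n->size / r < THETA) {
                double f = n->mass / (r2 * r);
                for (int k = 0; k < 3; k++) a[k] += f * d[k];
                return;
            }
            /* Otherwise "open" the node and recurse into its children. */
            for (int i = 0; i < n->nchild; i++)
                accel(n->child[i], pos, a);
        }

        int main(void)
        {
            /* Two distant point masses grouped under one root node. */
            Node leaf1 = {{10.0, 0.0, 0.0}, 1.0, 0.0, {0}, 0};
            Node leaf2 = {{12.0, 0.0, 0.0}, 1.0, 0.0, {0}, 0};
            Node root  = {{11.0, 0.0, 0.0}, 2.0, 2.0, {&leaf1, &leaf2}, 2};

            double pos[3] = {0.0, 0.0, 0.0}, a[3] = {0.0, 0.0, 0.0};
            accel(&root, pos, a);  /* size/r = 2/11 < THETA: one interaction, not two */
            printf("a = (%g, %g, %g)\n", a[0], a[1], a[2]);
            return 0;
        }

    (The same approximation is also what makes distribution hard: which nodes you must "open" depends on where you are, so no processor can get by with only its local patch.)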
  • by Anonymous Coward on Tuesday September 14, 2010 @04:38PM (#33579970)

    Details are here:
    http://icc.dur.ac.uk/icc.php?content=Computing/Cosma

    The Cosmology Machine (COSMA) was first switched on in July 2001. Of the original system, only 2 TByte of data space is still online. 64 SunBlade 1000s with a total of 64 GByte of RAM were donated by the ICC in February 2006 to the Kigali Institute for Science and Technology in Kigali, the capital of Rwanda. Sun Microsystems paid for their transport by air.

    In February 2004, QUINTOR was installed. QUINTOR consists of 256 SunFire V210s with a total of 512 UltraSPARC IIIi 1 GHz processors and 576 GByte of RAM. The systems have recently been upgraded to Solaris 10 u3, the Studio 12 compilers are the default, and Sun HPC ClusterTools 7 (CT7) is used for parallel MPI applications. CT7 is Sun's packaging of Open MPI 1.2.x; a minimal MPI sketch appears at the end of this comment.
    In April 2006, OCTOR was installed. OCTOR consists of:

      1. Cordelia, a 264-node X2100 cluster with a total of:
         * 528 2.2 GHz AMD Opteron cores (AMD Opteron 175)
         * 1056 GByte of RAM
         * dual gigabit interconnect over Nortel 5510 switches
      2. Miranda, a 64-node X4100 cluster with a total of:
         * 256 2.6 GHz AMD Opteron cores (AMD Opteron 285)
         * 512 GByte of RAM
         * gigabit connections for system services
         * Myrinet 2000 F cards for HPC communications
      3. Oberon, a V40z with:
         * 8 2.2 GHz AMD Opteron cores (AMD Opteron 875)
         * 32 GByte of RAM
      Oberon is the head node of OCTOR.

    The systems are running Solaris 10 x86/64; the Studio 12 compilers are the default, but Studio 11 and Studio 10 are still available. Sun HPC CT7 is used for parallel MPI applications.

    In September 2007, we received Rosalind, a V890 with 8 1.5 GHz UltraSPARC IV+ cores and 64 GByte of RAM. This was a donation by Sun Microsystems, which is gratefully acknowledged. Rosalind is running Solaris 10 u4, the Studio 12 compilers, and Sun HPC CT7. This system's role is to provide access to a single large memory and to packages such as IDL.
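    (The MPI sketch promised above: the communication pattern a gravity code cannot avoid, since every rank's particles feel every other rank's mass, so each step ends in a global exchange of positions. Plain MPI in C, not the actual COSMA codes; NLOCAL and the flat position layout are illustrative assumptions:)

        #include <mpi.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define NLOCAL 1000  /* particles owned by each rank (illustrative) */

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int rank, nprocs;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

            /* This rank's particle positions (x, y, z per particle). */
            static double local[NLOCAL][3];

            /* Gravity is long-range: forces on local particles depend on ALL
               positions, so gather everyone's particles onto every rank. */
            double *all = malloc(sizeof(double) * 3 * NLOCAL * nprocs);
            MPI_Allgather(local, 3 * NLOCAL, MPI_DOUBLE,
                          all,   3 * NLOCAL, MPI_DOUBLE, MPI_COMM_WORLD);

            /* ... compute forces on `local` using `all`, then integrate ... */

            if (rank == 0)
                printf("gathered %d positions across %d ranks\n",
                       NLOCAL * nprocs, nprocs);
            free(all);
            MPI_Finalize();
            return 0;
        }

    (Production codes use domain decomposition plus tree or mesh methods rather than this brute-force all-gather, but some global communication per step always remains, which is the point made above about @home-style distribution.)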
