
Simulated Universe (332 comments)

anonymous lion writes "A story in Guardian Unlimited reports on the Millennium Simulation, calling it 'the biggest exercise of its kind'. It took 25 million megabytes of memory to evolve our universe's initial conditions forward under the known laws of physics and produce this simulated universe." From the article: "The simulated universe represents a cube of creation with sides that measure 2bn light years. It is home to 20m galaxies, large and small. It has been designed to answer questions about the past, but it offers the tantalising opportunity to fast-forward in time to the slow death of the galaxies, billions of years from now."
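
For a sense of what evolving "initial conditions plus the known laws of physics" forward in time looks like in code, here is a minimal toy N-body sketch in Python. It is purely illustrative: the actual Millennium run used the tree-based GADGET-2 code on roughly ten billion dark matter particles, and every value below (particle count, softening length, time step) is an arbitrary choice for this toy, not a figure from the project.

    import numpy as np

    G = 1.0           # gravitational constant in arbitrary simulation units
    N = 500           # toy particle count (the real run used ~10^10)
    SOFTENING = 0.05  # Plummer softening to avoid singular close encounters

    rng = np.random.default_rng(0)
    pos = rng.uniform(-1.0, 1.0, size=(N, 3))   # stand-in "initial conditions"
    vel = np.zeros((N, 3))
    mass = np.full(N, 1.0 / N)

    def accelerations(pos, mass):
        """Direct-summation Newtonian gravity with Plummer softening."""
        diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]   # r_j - r_i
        inv_r3 = ((diff ** 2).sum(axis=-1) + SOFTENING ** 2) ** -1.5
        np.fill_diagonal(inv_r3, 0.0)                           # no self-force
        return G * (diff * (inv_r3 * mass)[:, :, np.newaxis]).sum(axis=1)

    # Leapfrog (kick-drift-kick), the standard integrator for N-body work.
    dt, steps = 0.01, 200
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc

    print("final particle position spread:", pos.std(axis=0))

At the Millennium scale the O(N^2) direct sum above is hopeless; hierarchical tree and mesh methods bring the cost down to roughly O(N log N), which is what made a ten-billion-particle run feasible.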
This discussion has been archived. No new comments can be posted.

  • by Uppity Nigger ( 889082 ) on Friday June 03, 2005 @08:26PM (#12719525)
    I think my PC can handle it.
  • by daveschroeder ( 516195 ) * on Friday June 03, 2005 @08:26PM (#12719526)
    The University of Wisconsin [wisc.edu] has deployed 200 TB of storage for support of similar types of experiments as part of the Grid Laboratory of Wisconsin [wisc.edu].

    Brief article, with pictures:

    University of Wisconsin deploys nearly 200TB of Xserve RAID storage [alienraid.org] (Google cache [google.com])

    The storage is used for, among other things, particle physics simulations in support of research projects at sites such as the Large Hadron Collider [web.cern.ch] at CERN [cern.ch]. More information on GLOW and its initiatives can be found here [wisc.edu].

    Text of the above article:

    The University of Wisconsin - Madison has deployed 35 5.6TB Xserve RAID storage arrays in a single research installation as part of an ongoing scientific computing initiative.

    The Grid Laboratory of Wisconsin (GLOW), a partnership between several research departments at the University of Wisconsin, has installed almost 200TB, or 200,000GB, of Xserve RAID arrays. As a comparison, 200TB of storage is enough to hold 2.75 years of high definition video, 25,000 full length DVD movies, 323,000 CDs, 20 printed collections of the Library of Congress, or over 1000 Wikipedias.

    The GLOW storage installation is physically split between the departments of Computer Sciences and High Energy Physics. Each Xserve RAID is attached to a dedicated Linux node running Fedora Core 3 via an Apple Fibre Channel PCI-X Card and is either directly accessed via various mechanisms, such as over the network via gigabit ethernet, or aggregated using tools such as dCache.

    The storage is primarily used to act as a holding area for large amounts of data from experiments such as the Compact Muon Solenoid (CMS) and ATLAS experiments at the Large Hadron Collider at CERN.

    Aside from the GLOW initiative, the university also has Xserve RAID storage systems in use in other areas as well.


    Full disclosure: I am the administrator of alienraid.org and am affiliated with the University of Wisconsin.
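
    A quick, rough check of the storage comparisons quoted above; the per-item sizes below are back-of-the-envelope assumptions consistent with the quoted figures, not numbers from the article:

        TOTAL_GB = 200 * 1000                 # 200 TB in decimal gigabytes

        dvd_gb = 8.0                          # dual-layer "full length DVD movie"
        cd_gb = 0.62                          # ~650 MB audio CD
        hd_gb_per_hour = 8.3                  # ~18 Mbit/s HD stream
        print("DVD movies: ", round(TOTAL_GB / dvd_gb))                    # ~25,000
        print("CDs:        ", round(TOTAL_GB / cd_gb))                     # ~323,000
        print("Years of HD:", round(TOTAL_GB / hd_gb_per_hour / 8760, 2))  # ~2.75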
  • by Seumas ( 6865 ) on Friday June 03, 2005 @08:28PM (#12719541)
    I'm pretty sure that they're talking about RAM. And yes, 23 terabytes of RAM is a ton.
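
    For reference, the submission's "25 million megabytes" and the "23 terabytes" here are the same quantity up to decimal-versus-binary prefixes; a minimal check:

        MB = 25_000_000                      # "25 million megabytes"
        print(MB * 10**6 / 10**12)           # 25.0  decimal terabytes (TB)
        print(MB * 10**6 / 2**40)            # ~22.7 binary terabytes (TiB)
        print(MB * 2**20 / 2**40)            # ~23.8 TiB if those were binary megabytes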
  • Re:I thought (Score:5, Informative)

    by Anonymous Coward on Friday June 03, 2005 @08:34PM (#12719594)

    They have stuff like stars that are older than some estimates of the universe's age

    No, they don't. This has happened a few times in the past, e.g., when they didn't know about the different populations of stars, but currently there isn't an age problem.

    and missing matter in the form of dark matter that they can't account for

    We don't know what dark matter is, but we know enough about its gravitational properties -- that's why it was postulated to exist, after all -- to simulate its effects on these scales.

    How are they supposed to simulate the universe, if the model they have is so badly flawed.

    The models we have are not as badly flawed as you think they are. But even if they are flawed, that's the point of the simulation: to test the validity of the model. If the simulation's results don't agree with observations, then that tells us about where the model fails.
  • by andyh1978 ( 173377 ) on Friday June 03, 2005 @08:47PM (#12719704) Homepage
    I figured they meant 25 TB of RAM. Which would be much more impressive.
    This was on Newsnight a couple of days ago; the researcher said their machine had 1TB of RAM.

    That's confirmed on page 18 of their paper: http://arxiv.org/PS_cache/astro-ph/pdf/0504/0504097.pdf [arxiv.org]
    The calculation was performed on 512 processors of an IBM p690 parallel computer at the Computing Centre of the Max-Planck Society in Garching, Germany. It utilised almost all the 1 TB of physically distributed memory available. It required about 350000 processor hours of CPU time, or 28 days of wall-clock time.
    The mean sustained floating point performance (as measured by hardware counters) was about 0.2 TFlops, so the total number of floating point operations carried out was of order 5x10^17.
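
    The run statistics quoted above are internally consistent; a quick check using only the figures from that paragraph:

        processors = 512
        cpu_hours = 350_000                  # "about 350000 processor hours"
        sustained = 0.2e12                   # 0.2 TFlop/s aggregate sustained rate

        wall_days = cpu_hours / processors / 24
        total_flops = sustained * wall_days * 24 * 3600
        print(f"wall-clock days: {wall_days:.1f}")     # ~28.5, i.e. "28 days"
        print(f"total flops:     {total_flops:.1e}")   # ~4.9e17, of order 5x10^17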
  • by Vellmont ( 569020 ) on Friday June 03, 2005 @09:48PM (#12720022) Homepage
    The uncertainty principle makes this an impossibility. Even if you could somehow simulate everything you could never get the exact initial conditions of even one particle. See http://en.wikipedia.org/wiki/Uncertainty_principle [wikipedia.org]
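
    The bound in question is Δx·Δp ≥ ħ/2; a tiny numeric illustration (the position uncertainty below is an arbitrary example value, not anything from the simulation):

        HBAR = 1.054_571_8e-34               # reduced Planck constant, J*s

        delta_x = 1e-10                      # suppose position pinned down to ~1 angstrom
        min_delta_p = HBAR / (2 * delta_x)   # smallest allowed momentum uncertainty
        print(f"{min_delta_p:.2e} kg*m/s")   # ~5.3e-25 -- never exactly zero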

  • by stinkbomb ( 238228 ) on Friday June 03, 2005 @10:59PM (#12720380)
    The real amount of RAM used is ~1 TB according to the Max Planck Institute: http://www.mpa-garching.mpg.de/mpa/research/current_research/hl2004-8/hl2004-8-en-print.html [mpa-garching.mpg.de]
