Space Science

Cluster Cracks Jupiter's Moons 14

YourHero writes: "Using the 256-CPU/500MHz Velocity I Cluster (64 Dell PowerEdge boxes installed Aug 1999), Cornell researchers have explained some of the wild moon orbits around Jupiter by running a 1-billion-year simulation. Abstract, abstract (presented at the AAS Division for Planetary Sciences), and news coverage [Spaceflight Now]. I saw no information on run time, but Aug 1999-present represents an upper limit somewhat short of 1 billion years."
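Long-term orbit simulations like this one typically rely on a symplectic integrator such as leapfrog, whose energy error stays bounded over enormous numbers of steps. As a toy illustration only (not the researchers' actual code), here is a minimal kick-drift-kick leapfrog integration of a single test moon around a central body, in units where GM = 1:

```python
import math

GM = 1.0  # central body's gravitational parameter, in toy units

def accel(x, y):
    """Acceleration from an inverse-square central force at (x, y)."""
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

def integrate(x, y, vx, vy, dt, steps):
    """Leapfrog (kick-drift-kick) integration for the given number of steps."""
    ax, ay = accel(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax          # half kick
        vy += 0.5 * dt * ay
        x += dt * vx                 # drift
        y += dt * vy
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax          # half kick
        vy += 0.5 * dt * ay
    return x, y, vx, vy

# Circular orbit at r = 1: orbital speed is 1, period is 2*pi.
# After a full period the radius should still be very close to 1.
dt = 0.001
x, y, vx, vy = integrate(1.0, 0.0, 0.0, 1.0, dt, int(2 * math.pi / dt))
print(math.hypot(x, y))  # radius stays close to 1.0
```

The real simulation would track many bodies and mutual perturbations, but the bounded-error property of this integrator class is what makes billion-year runs meaningful at all.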
This discussion has been archived. No new comments can be posted.

  • The Cornell Theory Center runs Win2k!

    However, they do seem to provide access to anyone with a hard problem to solve. I can't find anything further about this on their website [cornell.edu].

    Anyways, did the simulator simply match reality here? Or was there something more to this?

    • I wonder how much of the cluster's processor time is wasted on the overhead for 64 copies of Win2k. It can't be that much, but I'd imagine a more efficient OS could speed things up a bit.

      Anyways, did the simulator simply match reality here? Or was there something more to this?

      The simulator provided an explanation of an odd aspect of reality, namely, that no small moons with eccentric orbits could be found in a certain large range of orbits around Jupiter.

    • Yeah, I had to read that twice to believe it. I wonder what their reasoning for Win2K was. Is there some sort of package for clustering Win2K boxes?

      Somehow I think some sort of Beowulf cluster would have gotten it done faster, and with fewer blue screens :).

      It would be nice to find a good page detailing why they went with MS. Perhaps it's a simple reason... funding. I wouldn't be surprised if MS chipped in some money...
    • it was a deal (Score:2, Informative)

      The Cornell Theory Center (CTC) and Microsoft Corporation announced today the launch of expanded services for the finance industry through CTC's new role as a Windows Cluster Solutions Provider.
      (from their page)

      I see now

    • Looking at the URL provided ( the link [cornell.edu] ), it looks like there is a very strong Microsoft focus in many of their projects. Maybe Microsoft is providing funding for some of their research?

      At the end of the day the results are probably more important than the platform on which the calculations were done.

  • I saw no information on run time

    Quote from the second paragraph: In a three-month computing marathon, the Velocity I cluster at the Cornell Theory Center was able to mimic cosmic conditions over eons that would cause physical perturbations in the moons of Jupiter.

    See? It's not THAT hard to find it when you actually read the article!
  • by QuantumFTL ( 197300 ) on Tuesday December 04, 2001 @02:00AM (#2652856)
    I'm an undergrad at Cornell, currently assisting with some magnetohydrodynamics research (parallelizing a FORTRAN program written by a bunch of Russian scientists... yay...). I'll probably be using the cluster at the end of the year, and I must say I'm rather impressed with it. It's not as cool as some that Cornell has had in the past; however, they are very willing to let many individuals use it, and I find it great that I get to be involved with supercomputing at an undergraduate level. (Someday I hope to be a computational physicist.)

    Also, slightly off topic, I've been hired to create, install, and maintain a cluster of six Linux boxes connected with 100Base-T Ethernet, and I have about $3500 to spend on it. Anyone have any suggestions about what to use? I'm thinking 1.3-1.4 GHz Athlon processors, a small drive, 256-512 MB of RAM...

    Also, can anyone tell me why using Red Hat would be a bad idea? It'll just be running FORTRAN programs non-stop, with a bit of SSH and SCP, nothing kernel-intensive. In the Space Sciences Building where I work, we use Red Hat and Solaris, and I don't really see much point in using a "better" distribution like Debian (as I have never really used it much before).

    Cheers,
    Justin
    • Sure, use RedHat.


      Debian's greatest strength is that it's easily kept up to date, with new and debugged packages, etc. I prefer it for personal use and on servers that I need to keep up to date for security fixes.


      But since what you're running is computational work, on a static cluster, using consistent tools, you could use anything. The kernel is still just the kernel; what makes it RedHat or Debian is the rest of the packages, which you're not using anyway.


      And most of all, you already have experience with RedHat.


      Bob-

      • Hey, thanks a bunch! That makes a lot of sense, and I was going to slap on RedHat for the first few months anyway. Security isn't a big issue for us at the moment (it's going to be so heavily firewalled that maybe only a few IP addresses are even going to be allowed access to it in any way).

        Any ideas about hardware? We don't need monitors or peripherals, but does it matter what kind of Ethernet card I get (so long as it's 100Base-T and has a working driver under Linux)?

        Thanks,
        Justin
        • You ask about hardware. Personally, if I had to buy new, I'd do the oldest trick in PC fabrication: parts. Get a no-name mini-tower case, cheap video, a motherboard with an Ethernet card on board (Intel, 3Com, whatever), a 1GHz AMD Athlon, an IDE CD drive to boot from initially, 256MB of RAM, and a 4-gig hard drive.

          No floppy, no frills, no useless mouse/keyboard/monitor packages to store in a closet.

          The only real requirement is that they must not require a keyboard be connected on a reboot. I admit my own server has a dusty keyboard on top of it just in case it reboots, which it hasn't done since I upgraded the kernel (check netcraft) 450 days ago.

          You can get them cheap; nothing is cutting edge. RedHat should detect and use the Ethernet and video "cards" without a problem. No games, no X Windows, no need for massive disk drives. I expect a 1-gig drive would do the job, but with the price of disks you might not *find* a 1-gig disk any more.

          I fully expect you could build these for less than $350 a copy. Maybe you can get a bulk order discount from your local parts house, too.

          If you want to experiment, try a 2-CPU motherboard in one box to play with, and see just how much speed you can get out of it. Then just add it to your cluster. You have very little invested in "proprietary hardware" if you do decide to upgrade/replace anything in the future, too.

          I love clusters, especially because you can use any old PC that can run Linux. Of course, the administrative overhead of adding a node might drain more than a K5-133MHz system would add to your composite throughput, but it means hardware takes a looooong time to become obsolete. Good thing in an educational environment.

          Bob-
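For what it's worth, the kind of job farming described in the thread above (independent FORTRAN runs pushed out to a handful of nodes over SSH) can be sketched in a few lines. The node names and the run command below are hypothetical stand-ins; a real dispatcher would replace the print with subprocess.run(["ssh", node, cmd]).

```python
from itertools import cycle

# Hypothetical node names for a six-box cluster; adjust to taste.
NODES = [f"node{i:02d}" for i in range(1, 7)]

def assign(jobs, nodes):
    """Round-robin each independent job onto a node."""
    return list(zip(cycle(nodes), jobs))

# Hypothetical run command, standing in for whatever FORTRAN
# binary the cluster actually executes.
jobs = [f"./run_model --case {n}" for n in range(8)]

schedule = assign(jobs, NODES)
for node, cmd in schedule:
    # A real dispatcher would do: subprocess.run(["ssh", node, cmd])
    print(f"ssh {node} '{cmd}'")
```

Since the runs are independent, nothing fancier than SSH and a cron job or shell loop is needed; MPI only becomes necessary when a single run has to span nodes.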
