Science Software Linux

NASA To Get 10,240 Node Itanium 2 Linux Cluster

starwindsurfer writes "US space agency NASA is to get a massive supercomputing boost to help get its shuttle missions back in action after the 2003 shuttle disaster. Project Columbia, a collaboration with two technology giants, will mean NASA's computing power will be ramped up by 10 times to do complex simulations."

  • Tax payer. (Score:4, Interesting)

    by BrookHarty ( 9119 ) on Monday August 09, 2004 @12:45PM (#9921148) Journal
    I'm rather mad at this idea: the system costs more than an Opteron system, costs more to run (heat and power), and is slower. But at least it runs Linux.

    Also, why is the BBC the first to report on NASA's new supercomputer?

  • by geomon ( 78680 ) on Monday August 09, 2004 @12:49PM (#9921182) Homepage Journal
    Moderators must be having a bad day. I've seen several other attempts at humor moderated 'offtopic'.

    I wonder if this is a Monday phenomenon? I wonder what the distribution of 'Funny' moderation is through the week.

  • by cbreaker ( 561297 ) on Monday August 09, 2004 @12:50PM (#9921192) Journal
    ... or a very good writer.

    "They can also be modelled over a time period of weeks or months instead of over just a few days."

    Ohh sweet, so then what used to take days now takes months?

    And at one point in the article, it says "20 nodes" and then at another part it says "512 nodes." So like, what is it?

    You know what, I don't even care.
  • by iamdrscience ( 541136 ) on Monday August 09, 2004 @01:04PM (#9921328) Homepage
    Yes, that's why I said that Virginia Tech bought them. The point was that they were cheapest because Apple gave them a huge price break, presumably for the promotion it gave them (i.e. "Holy shit! Our computers are so fast and awesome that they're using them in supercomputer clusters!").

    You'll notice that no large clusters have been built out of G5s since, and that's because nobody else is going to get price breaks significant enough to make it the cheapest solution.
  • by GGardner ( 97375 ) on Monday August 09, 2004 @01:11PM (#9921392)
    Instead of saying "In my best engineering and technical judgement, based on years of training and experience, I think this is a bad idea", the engineers can say "Our really expensive computer thinks this is a bad idea".
  • by noselasd ( 594905 ) on Monday August 09, 2004 @01:48PM (#9921764)
    You know, the actual error was some engineers assuming specs from other engineers were in pounds when they were really in newtons (1 pound-force ≈ 4.45 newtons).
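A toy C sketch of that mix-up: nothing here comes from NASA or the thread, and the constant, variable names, and the 10.0 figure are made up for illustration. The point is only that a value supplied in pound-force but read as if it were newtons comes out low by a factor of about 4.45.

```c
/* Toy sketch of the unit mix-up described in the comment above; nothing here
 * is NASA code, and every name and number is made up for illustration. */
#include <stdio.h>

#define LBF_TO_N 4.448  /* 1 pound-force is roughly 4.45 newtons */

int main(void) {
    /* Suppose one team reports a thruster impulse in pound-force seconds... */
    double reported_impulse_lbf_s = 10.0;

    /* ...and the consuming code expects newton-seconds. */
    double correct_N_s = reported_impulse_lbf_s * LBF_TO_N; /* converted properly */
    double misread_N_s = reported_impulse_lbf_s;            /* the bug: no conversion */

    printf("correct: %.2f N*s\n", correct_N_s);
    printf("misread: %.2f N*s (low by a factor of %.2f)\n",
           misread_N_s, correct_N_s / misread_N_s);
    return 0;
}
```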
  • Re:Irony emulator (Score:5, Interesting)

    by kenaaker ( 774785 ) on Monday August 09, 2004 @01:49PM (#9921771)
    I worked on the space shuttle simulator (lo, these many years ago), and the shuttle computers are derivatives of the computers that IBM originally used in the B-52s. They were called AP-101s, and if I remember correctly they were Harvard architecture systems with separate instruction and data memories. I think they had 128K (32 bit?) words for instructions and 64K (16 bit?) words for data.

    The simulator originally ran on IBM System/360 Model 75s (serial numbers 1, 4, and 5). When I was working on it, the simulator was running on an IBM 3033 (370 architecture) machine running MVS, with a hardware interface that attached 3 AP-101s to the system I/O channels. The shuttle hardware outside of the AP-101s, and the environment, were modelled in the 3033, even including the "slosh dynamics" of the fuel in the external tank. The simulator was written in 370 Assembler with macros for the programming control structures.

    One of the funniest things about running the simulator came out of the major failure tests. The simulator had a distinct "abend" (abnormal end) that indicated that the vehicle's position was below the surface of the earth.
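A minimal sketch, in C rather than the simulator's 370 Assembler, of the kind of sanity check that abend implies; every name and value below is hypothetical rather than taken from the real simulator.

```c
/* A minimal sketch of a "vehicle is below the surface of the earth" check,
 * in the spirit of the abend described above. All names and values here are
 * hypothetical, not from the real AP-101/3033 simulator. */
#include <stdio.h>
#include <stdlib.h>

struct vehicle_state {
    double altitude_m;   /* altitude above the reference surface, in meters */
};

static void check_vehicle_state(const struct vehicle_state *s) {
    if (s->altitude_m < 0.0) {
        /* The real simulator raised a distinct abend code; here we just abort. */
        fprintf(stderr, "ABEND: vehicle position is %.1f m below the surface\n",
                -s->altitude_m);
        exit(EXIT_FAILURE);
    }
}

int main(void) {
    /* A failed-landing scenario: the integrator has driven altitude negative. */
    struct vehicle_state s = { .altitude_m = -12.5 };
    check_vehicle_state(&s);
    return 0;
}
```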

  • by nboscia ( 91058 ) on Monday August 09, 2004 @03:54PM (#9923036)
    The issue has become finding space to put the machines. As it is, the pre-existing supercomputers are being moved to other rooms and there is barely enough space to accommodate the 10K-processor system. Many supercomputing facilities have a similar problem; there are not many rooms with the environmental controls needed to run such massive systems.

    As for the comment on making it 11x faster - the other systems serve a different purpose (customer base and funding source)... and they were moved to another location to make room for Columbia.

    On a cool note, it looks like they are filming the building of this system so we can see one of those time-lapse videos.
  • by Hugh-know-who ( 716929 ) on Monday August 09, 2004 @04:28PM (#9923330)
    The Columbia disaster was not due to a lack of computing power, but rather to a culture of denial. The failure of mid-level and senior management to listen to their people prevented any action being taken until it was too late. In a way, this mirrors the broader American culture of the late 20th and early 21st centuries, typified by a complete refusal of individuals, particularly but not exclusively individuals in powerful positions, to take responsibility for their own actions, inactions, or failures of any kind.
