NASA Unplugs Its Last Mainframe

coondoggie writes "It's somewhat hard to imagine that NASA doesn't need the computing power of an IBM mainframe any more, but NASA's CIO posted on her blog today that at the end of the month, the Big Iron will be no more at the space agency. NASA CIO Linda Cureton wrote: 'This month marks the end of an era in NASA computing. Marshall Space Flight Center powered down NASA's last mainframe, the IBM Z9 Mainframe.'"
  • by tysonedwards ( 969693 ) on Sunday February 12, 2012 @03:44PM (#39012731)
    It's also that $730,000 a year in ongoing maintenance for a Z9 is not really all that practical, especially considering that newer deployments based on GPGPUs have far lower operating costs and provide higher performance than five-year-old big iron.
  • TFA (Score:5, Informative)

    by ldapboy ( 946366 ) on Sunday February 12, 2012 @03:46PM (#39012751)
    The cited page is a copy/paste of Linda Cureton's blog post. It's lame and uncool to copy someone's article wholesale without a link, don't you think, even if they are paid with taxpayer $$? Here's the original article: http://blogs.nasa.gov/cm/blog/NASA-CIO-Blog/posts/post_1329017818806.html [nasa.gov]
  • by History's Coming To ( 1059484 ) on Sunday February 12, 2012 @03:48PM (#39012761) Journal
    The reason big, old-fashioned mainframes are still in use in many places is simply the cost of moving away from them. They're generally running custom code, custom databases, custom hardware... the sheer cost of redoing everything is the big problem, not that modern hardware has any trouble doing the job.

    Clusters of PS3s make a perfectly serviceable supercomputer, but if your existing solution still works...
  • by deoxyribonucleose ( 993319 ) on Sunday February 12, 2012 @03:51PM (#39012781)

    I've seen mainframes used at insurance companies and banks, but the rest of the world seems to favour the cloud ways of Elastic Compute Cloud (EC2) and whatnot.

    I've heard mainframes have high IO throughput, but how do the equivalent cloud solutions compare, especially on scalability?

    Thanks.

    Latency. Confidentiality. Reliability. But most of all: sunk costs and proprietary software embodying key business knowledge. Replacing mainframes requires a large enterprise to undertake not only major software procurement or development (or both, as in ERP), but also business process re-engineering... none of which is particularly fun or cheap, or in itself something that helps capture greater market share.

  • by bws111 ( 1216812 ) on Sunday February 12, 2012 @03:54PM (#39012805)

    GPGPUs do not replace mainframes, unless the mainframe in question is being used for the wrong reasons.

    GPGPUs excel at very fast computation while being cheap.

    Mainframes excel at very high transaction rates (lots of I/O), incredible reliability (five 9s), and security.

    GPGPUs are used in scientific (number-crunching) work; mainframes are used for business.

  • Re:Why stop there??? (Score:5, Informative)

    by Sir_Sri ( 199544 ) on Sunday February 12, 2012 @03:55PM (#39012817)

    Scientists aren't business people either. The intricacies of managing the main contractors, the infrastructure base, and the diplomatic exchanges that go with all the other space programmes in the world are best left to people who aren't scientists.

    NASA was always about more than just shuttles and manned spaceflight. Those are, generally, relatively poor investments for the science you get out of them. Great PR, and broadly inspirational, but relatively inefficient actual science. NASA does communications satellites, telescopes, materials science, weather, the weather of the sun, satellite management for all of those things, fundamental aeronautics research, etc. There's a lot more going on than just pure science, and far more than the troll's misguided view that it's all about manned spaceflight. And, like anything, there's a legitimate desire to use the programme to show off expertise and build relationships internationally.

  • by Junta ( 36770 ) on Sunday February 12, 2012 @03:58PM (#39012841)

    I've heard mainframes have high IO throughput, but how do the equivalent cloud solutions compare, especially on scalability?

    Depends on the problem.

    For a relatively naively constructed algorithm, IO will be measurably worse on any 'cloud' platform popular today, and severely worse than on a mainframe. However, if you understand how to make your application scale (assuming it theoretically can), you can *in aggregate* match the mainframe's IO advantage at a much lower acquisition cost (though depending on who you talk to, the more fudge-friendly 'TCO' metric may or may not follow). A rough sketch of that aggregate-IO idea follows at the end of this comment. The trick is that for many applications, the perceived risk and cost of reaching that understanding is higher than just continuing to go with the flow of an IBM mainframe. Of course, some moderately broad problem areas are getting tooling that makes that sort of scaling easier without too much extra thought. On the other hand, there are problem areas for which no one has constructed a 'proper' approach that would negate the need for mainframe-like architecture.

    With respect to the word 'cloud', the overwhelming majority of 'clouds' covered in tech news are EC2 and EC2 workalikes, where IO is not particularly optimized. There are also various companies championing a departmental server or two with a few virtual machines on it as a 'cloud', further diluting the term and usually delivering terrible IO characteristics even with overpriced storage architectures. On the other hand, there are some projects claiming 'cloud' that include arbitration of bare-metal execution and can reasonably compare with a 'boring' private x86 scale-out solution, but very few people care.
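
    To make the "in aggregate" point concrete, here is a rough, purely illustrative sketch (Python; the shard paths and worker count are hypothetical, and real storage has to be able to serve the shards in parallel for the arithmetic to hold):

    ```python
    # Purely illustrative: approximate a big machine's IO throughput *in aggregate*
    # by sharding a scan across many cheap workers. Each worker is slower than big
    # iron on its own; the aggregate is not, if the storage keeps up.
    from concurrent.futures import ProcessPoolExecutor

    SHARDS = [f"/data/work/part-{i:04d}.log" for i in range(64)]  # hypothetical paths

    def scan_shard(path: str) -> int:
        """One worker's independent IO: count the records in a single shard."""
        count = 0
        with open(path, "rb") as f:
            for _ in f:
                count += 1
        return count

    def scan_all(shards, workers: int = 16) -> int:
        # Aggregate throughput scales roughly with the number of workers.
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return sum(pool.map(scan_shard, shards))

    if __name__ == "__main__":
        print(scan_all(SHARDS), "records scanned")
    ```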

  • by sjames ( 1099 ) on Sunday February 12, 2012 @04:06PM (#39012887) Homepage Journal

    Mainframes aren't about computational performance, they're about reliability and, to a lesser degree (these days), I/O performance. If you want computational performance, you go with a cluster, or perhaps a cluster of GPUs (depending on the nature of the problem).

    Mainframes are about reliability. When your app absolutely positively must run 24/7, a mainframe is a reasonable consideration. We can get about 90% of that with multiple failover servers and other similar strategies. Where that's good enough, we go that way because of the vastly lower prices. However, if the 90% solution just isn't good enough, mainframe it is.
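
    To put rough numbers on that gap, here is some back-of-the-envelope arithmetic (illustrative only, not anyone's measured uptime; the "five 9s" figure is the one quoted upthread for mainframes, and the parent's "90%" is read loosely as an availability level):

    ```python
    # Back-of-the-envelope downtime per year at different availability levels.
    MINUTES_PER_YEAR = 365 * 24 * 60

    for availability in (0.90, 0.99, 0.999, 0.9999, 0.99999):
        downtime_min = MINUTES_PER_YEAR * (1 - availability)
        print(f"{availability * 100:.3f}% up -> ~{downtime_min:,.0f} minutes of downtime/year")
    ```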

  • by story645 ( 1278106 ) <story645@gmail.com> on Sunday February 12, 2012 @04:06PM (#39012903) Journal

    My mom keeps telling me that UPS is one of the world's largest users of DB2, a statement backed up in this article [theregister.co.uk]. They're not switching off their mainframes, for the same reason financial institutions don't: after pouring lots of money into alternatives, they found that mainframes have better performance.

  • by azgard ( 461476 ) on Sunday February 12, 2012 @04:13PM (#39012973)

    Yes: http://www.hercules-390.org/ [hercules-390.org]

    But IBM won't allow you to run z/OS (the operating system usually used) on it.

  • by tysonedwards ( 969693 ) on Sunday February 12, 2012 @05:02PM (#39013329)

    In the article, they mentioned that 700k was for the maintenance contract, and 30k was for the power.

    They also stated that they were keeping it around for a few projects that were slated to be terminated but hadn't been yet, and that they had no desire to migrate those services to standard servers.

  • by Anonymous Coward on Sunday February 12, 2012 @05:55PM (#39013765)

    You don't have to IPL (Initial Program Load - reboot) every LPAR (Logical Partition - like a virtual machine) in the sysplex (cluster) at the same time...

  • by compro01 ( 777531 ) on Sunday February 12, 2012 @06:07PM (#39013865)

    LPAR is a Logical PARtition, a section of hardware in a mainframe dedicated to one operating system instance. Basically a form of virtualization.

    A sysplex is a SYStems comPLEX. Basically a cluster of mainframes.

    Basically, you reboot a single LPAR containing one specific thing rather than the entire physical system or cluster.

    It's basically like having multiple independent servers for each thing, but more reliable and more flexible.
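
    A loose analogy in script form, purely illustrative (the LPAR names and steps below are made up, and real sysplex management is nothing like this simple): re-IPL one LPAR at a time while the rest keep serving, so the service as a whole stays up.

    ```python
    # Loose analogy only: reboot ("IPL") one LPAR at a time while the rest of
    # the sysplex keeps processing work.
    import time

    SYSPLEX = ["LPAR01", "LPAR02", "LPAR03", "LPAR04"]

    def drain(lpar):
        print(f"draining work away from {lpar}")

    def ipl(lpar):
        print(f"re-IPLing {lpar}")  # roughly: rebooting one virtual machine
        time.sleep(0.1)             # stand-in for the actual reload

    def rolling_maintenance(members):
        # At any moment all but one member are still serving,
        # which is the point of partitioning the box this way.
        for lpar in members:
            drain(lpar)
            ipl(lpar)
            print(f"{lpar} is back in the sysplex")

    if __name__ == "__main__":
        rolling_maintenance(SYSPLEX)
    ```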

  • by Shinobi ( 19308 ) on Sunday February 12, 2012 @06:27PM (#39013987)

    As much as I use and abuse VMware, it's not yet comparable to IBM mainframe-class virtualization.

  • by webnut77 ( 1326189 ) on Sunday February 12, 2012 @06:35PM (#39014051)

    Also, the mainframes in those days were much bigger iron than the one pictured in the article. You could fit five z9s into the space of a single S/370.

    You could literally step inside an IBM 3090.

  • by dissy ( 172727 ) on Sunday February 12, 2012 @08:57PM (#39014963)

    Kinda like the HP Blade server we have running ESX here at work? It costs a lot less than a Z9 as well :).

    Kinda, sorta, a lil :}
    To be fair, there are many features in a mainframe that the HP blade server can't match.

    Now don't get me wrong, I'm not saying the blade server is a pile of crap or anything. It's pretty damn awesome in fact.

    But mainframes have some hardware redundancy features geared more towards ensuring that data gets from one place to another without error.
    ECC, if not CRC, is used on nearly every path data can take through the system.
    CPUs can be configured in a dual or tri-redundant state, where three procs do the same task and compare answers at the end. Data can be redundant in memory too, with the same odd-man-out verification on reads (a rough software sketch of that voting idea follows at the end of this comment).

    An HP blade server can emulate some of this in software, but it is much slower than doing it in hardware, and even then it won't necessarily catch every data path.

    The closest I think you can get is having two or three instances of the same virtual machine processing the same data, on different CPU blades and memory, and ultimately writing their results to different SANs, each on its own bus.
    Then you need at least two manager VMs to feed the data in and compare the results, both against the instances below them and against each other.
    Those two would also be responsible for figuring out which system below them is producing disagreeing results, ideally narrowing it down to the specific piece of hardware, in order to raise an alert and hopefully disable the failed hardware at a higher level in ESX. As well as emailing you to buy a replacement part, of course.

    If the two managers disagree with each other, all you can hope for then is a graceful termination of processing, hopefully with a detailed reason why.

    You can get pretty close to a similar effect, in that you will not end up with bad data because some step in the processing chain went bad in hardware.
    But you will likely have processing downtime once something does go bad, depending on how many resources you are willing to throw at what amounts to the same work.

    For the niche cases that need that level of data protection, mainframes do pretty much all of that transparently, and, as I mentioned, a lot of it in hardware, which speeds the overall job up.
    Handling failed hardware transparently to the running job, and even to the user, means the system disables the hardware flagged as bad and carries on using other resources, with no intervention on an admin's part.

    Of course, most of what you are paying for is the IBM support, which is pretty much required.
    One can get such support from HP too, of course, at a price. But its being optional is nice for those of us who don't really need it.
    Email alerts of failed hardware are plenty for me to work a replacement into this budget or the next.
    If I had an active HP rep, it isn't much effort to hit forward on an email, after all ;}
    But on an IBM system like this, the mainframe sends IBM the message directly. Depending on your support contract, an IBM rep shows up at your door either within a couple of hours or within a day or two, replacement part in hand, ready to do the swap under your supervision.

    With the big iron, the price is definitely about what you get. Thankfully for the wallet, many tasks don't need that level of redundancy, so HP does quite well in the lower-end market, which is of course larger too.

    Mainframes serve special niches, and when those details become important for the task at hand, IBM has to make up for the low volume with higher prices. But it's not all for naught; you still get your money's worth.
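
    The odd-man-out voting mentioned above boils down to something like this in plain software terms (illustrative only; the real machines do this in hardware along the data paths, and nothing below is how a z9 actually implements it):

    ```python
    # Illustrative only: "odd man out" majority voting across redundant units.
    from collections import Counter

    def vote(results):
        """Return (accepted_value, indices_of_disagreeing_units).

        results: the outputs of the same computation run on independent units.
        Raises if there is no majority (the "two managers disagree" case),
        at which point all you can do is stop processing gracefully.
        """
        value, count = Counter(results).most_common(1)[0]
        if count <= len(results) // 2:
            raise RuntimeError("no majority -- halt processing gracefully")
        suspects = [i for i, r in enumerate(results) if r != value]
        return value, suspects

    if __name__ == "__main__":
        # Unit 2 returns a corrupted result; voting masks it and flags the unit.
        answer, bad_units = vote([42, 42, 41])
        print(f"accepted {answer}; flag unit(s) {bad_units} for replacement")
    ```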

  • by baegucb ( 18706 ) on Sunday February 12, 2012 @10:14PM (#39015301)

    I just put our IT department into a spreadsheet and counted the mainframe people. At a guess, there are 50,000 people employed in our organization. My IT department has about 325 people. Of those, 34 are dedicated to the mainframe 24x7x365. The other 300 or so support mostly MS products and customers (public and internal). The network people are probably a couple of dozen, and there are maybe a dozen or so UNIX/Linux people. So no, mainframes are not overstaffed. And we moved database clusters off servers and onto the mainframe Linux LPARs because of the huge increase in speed due to IO.

  • by AK Marc ( 707885 ) on Monday February 13, 2012 @04:06AM (#39017027)
    PHP + SQL beats SAP in almost all instances. SAP's selling point (such as it is) is that it merges the front end and the back end, so you don't need to know two separate programs to support it. It's instant integration. The front end and the database are inextricably linked. Someone thought that was a good idea, but in practice it makes things worse: worse performance, worse security, harder support, higher cost.
