NASA Unplugs Its Last Mainframe

coondoggie writes "It's somewhat hard to imagine that NASA doesn't need the computing power of an IBM mainframe any more, but NASA's CIO posted on her blog today that at the end of the month, the Big Iron will be no more at the space agency. NASA CIO Linda Cureton wrote: 'This month marks the end of an era in NASA computing. Marshall Space Flight Center powered down NASA's last mainframe, the IBM Z9 Mainframe.'"
  • by Anonymous Coward on Sunday February 12, 2012 @03:35PM (#39012653)

    Pardon my youth and naïveté.

    I've seen mainframes used at insurance companies and banks, but the rest of the world seems to favour the cloud ways of Elastic Cloud and whatnot.

    I've heard mainframes have high I/O throughput, but how do the equivalent cloud solutions compare, especially on scalability?

    Thanks.

  • by Animats ( 122034 ) on Sunday February 12, 2012 @03:41PM (#39012703) Homepage

    NASA still has a big data center in Slidell, Louisiana. They're hiring. [jobamatic.com] With the mainframes gone, one would expect they'd close down Slidell, but no. Instead, they're building a big museum and PR center [infinitysc...center.org] there.

    NASA seems to spend money at a relatively constant rate, independent of whether they're flying anything.

  • by wisebabo ( 638845 ) on Sunday February 12, 2012 @04:00PM (#39012855) Journal

    I mean, it's possible to run your old Commodore 64 or TRS-80 (or even Apple II?) software in a software emulator of these machines, and it's (mostly?) legal to do so. (BTW, anyone know of an Apple II emulator which will run the game "Epoch"?)

    So are there software emulators for an IBM 360 or a VAX out there? Can I run them on my iPad? There might be some interesting software to play with: despite the primitive hardware, these systems helped send men to the Moon, defend the U.S. against nuclear attack, and run the IRS. (Getting this code might be a bit of a problem!)

    Even if there isn't a software emulator DIRECTLY for a mainframe to run on my iPad, what about one that'll run on a Pentium-class PC? Then is it practical to run THAT in emulation mode on my iPad?

  • by raburton ( 1281780 ) on Sunday February 12, 2012 @04:12PM (#39012959) Homepage

    Yes, there are mainframe emulators. And if you compare processing power they come out quite well, but as others have pointed out, mainframes aren't supercomputers and don't claim to be. If you just want something that can run your mainframe code, that's great. What an emulator won't give you is any of the things that people actually want a mainframe for (see other posts for details).

    It's a bit disappointing to see so many people on Slashdot wondering what the purpose of a mainframe is. It shows that so many "geeks" have a very limited knowledge of IT in the real world.

  • by interval1066 ( 668936 ) on Sunday February 12, 2012 @04:13PM (#39012975) Journal
    Yes; http://www.turbohercules.com/ [turbohercules.com]. Near and dear to my heart is the CDC Cyber series, on which I had to run my COBOL assignments at uni; http://members.iinet.net.au/~tom-hunter/ [iinet.net.au]. Anyone else do assignments on PDP-11/70s as I did? http://www.dbit.com/ [dbit.com]. Christ, I bet there's an emulator for every platform and architecture that's ever existed.
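    Hercules, linked above, is driven by a plain-text configuration file. A minimal sketch, with hypothetical file names (the keywords are standard Hercules directives, but check the current documentation before relying on them):

        # hercules.cnf -- minimal sketch; the device file names are made up
        ARCHMODE  z/Arch        # architecture to emulate (S/370, ESA/390, or z/Arch)
        MAINSIZE  1024          # emulated main storage, in MB
        NUMCPU    1
        CNSLPORT  3270          # port for tn3270 console connections
        # device statements: devnum devtype filename
        000C  3505  jobs.jcl    # card reader, fed from a plain text file
        000E  1403  printer.txt # line printer output
        0120  3390  work.120    # emulated DASD volume image

    Start it with "hercules -f hercules.cnf" and point a tn3270 client at the console port; you then IPL an operating system from an emulated device (IBM's licensing generally limits hobbyists to public-domain versions such as MVS 3.8j).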
  • by Anonymous Coward on Sunday February 12, 2012 @04:20PM (#39013047)

    The JSC mainframe systems used to build and support the shuttle flight software were shut down on July 29, 2011: the DEVS, PRDS, PATS, SDFC, SDFA, and RTF1 systems.

    These systems had been used since May 6, 1981 (no, not the same computers) under a NASA contract. Photos of the servers were taken. Yes, they are just as boring as they sound.

    It was sad to see the tape silo nearly empty when it would normally hold hundreds or thousands of tapes.

    We have a support group on LinkedIn.

  • Remember the Tao... (Score:5, Interesting)

    by Anonymous Coward on Sunday February 12, 2012 @04:29PM (#39013099)

    There was once a programmer who worked upon microprocessors. "Look at how well off I am here," he said to a mainframe programmer who came to visit, "I have my own operating system and file storage device. I do not have to share my resources with anyone. The software is self-consistent and easy to use. Why do you not quit your present job and join me here?"

    The mainframe programmer then began to describe his system to his friend, saying "The mainframe sits like an ancient sage meditating in the midst of the data center. Its disk drives lie end-to-end like a great ocean of machinery. The software is as multifaceted as a diamond, and as convoluted as a primeval jungle. The programs, each unique, move through the system like a swift-flowing river. That is why I am happy where I am."

    The microcomputer programmer, upon hearing this, fell silent. But the two programmers remained friends until the end of their days.

  • by Lumpy ( 12016 ) on Sunday February 12, 2012 @04:40PM (#39013179) Homepage

    "Does payroll really require a Big Iron?"

    No, it does not. I managed the SQL databases for Comcast's company-wide accounting systems in 2006; it ran off of 4 MSSQL servers. They were monstrous servers, with 8 processors each (if you count the cores), screaming-fast RAID 50 arrays, and a buttload of RAM to deal with the pivot tables. 99% of payroll speed is the DB read and write times, not mathematical calculations.

    But you certainly do NOT need Big Iron to do a few million payroll calculations for hourly, executive, and sales people every 2 weeks. Running payroll took 6 hours on Tuesday; if something went wrong, we ran it again later that night.

  • by emt377 ( 610337 ) on Sunday February 12, 2012 @04:45PM (#39013215)

    Does payroll really require a Big Iron? I can't really imagine that keeping track of a company's financials (even measured in billions) requires $730,000 / year in number crunching ability...

    It's not just payroll, but also tracking every expense, income, capital asset, depreciation, order placed, order received, services provided, goods shipped, customer phone call received, etc. For a large company it's an AMAZING amount of data. The "old way" of doing this was to dump it all into a set of tables, then run enormously complex recursively joined queries to restructure it and generate reports, billing reminders, etc. The new way is to dump it into mapreduce, scribe feeds, or equivalent, and get cooked data out that can easily be tabulated in reports (a toy sketch appears after this comment). The data out of the distributed computation gets fed into a relatively small DB while all the raw data is just piped to some storage device for posterity.

    This computational model fits better with cloud provisioning. But you may find a room full of loaded-up 20U blade chassis isn't exactly cheap either. It's more flexible, though, and the mapreduce model of pre-cooking is more economical because it distributes the load over the quarter rather than over a few days following each quarter. Of course, horizontal scaling is vastly cheaper than vertical scaling, if the problem can be attacked that way. Even if the overhead approaches several hundred percent, or you crunch numbers in PHP (heaven forbid), it's still vastly cheaper.

    But everyone knows the distributed model is cheaper. It's just that any business that's been around more than 10 years has a large body of legacy code that already implements all the custom payroll, auditing, tax code enforcement, tax optimization, reports, etc. they need, which makes it a huge project to move a system that's already in production. Moving it would probably cost millions and is very risky, so it's easy to just pony up $700k annually and forget about it. It's also really difficult to migrate in steps.
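    A toy sketch of that "pre-cooking" pipeline, in Python; the record format and account names here are made up for illustration. Raw transactions are mapped to (key, value) pairs and reduced to per-account totals, and only the cooked totals would ever reach the small reporting DB:

        from collections import defaultdict

        # Hypothetical raw feed: one "date,account,amount" record per line.
        raw = [
            "2012-01-03,payroll,1200.00",
            "2012-01-03,shipping,87.50",
            "2012-02-14,payroll,1250.00",
        ]

        # Map: each raw record becomes a (key, value) pair.
        pairs = []
        for rec in raw:
            _, account, amount = rec.split(",")
            pairs.append((account, float(amount)))

        # Reduce: sum values per key; only these totals go to the reporting DB.
        totals = defaultdict(float)
        for account, amount in pairs:
            totals[account] += amount

        print(dict(totals))  # {'payroll': 2450.0, 'shipping': 87.5}

    In a real deployment the map and reduce steps run distributed, continuously as records arrive, which is where the load-spreading across the quarter comes from.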

  • by AK Marc ( 707885 ) on Sunday February 12, 2012 @04:56PM (#39013291)
    Maybe all payroll stuff requires suckage, but PeopleSoft and SAP are two of the worst products in existence. The city here adopted SAP to "save costs" and now has a department of 10+ "SAP programmers" just to keep basic functionality running. And PeopleSoft, at the last place I used it directly, wasn't demanding: it ran on a single low-spec server. You just couldn't use it for 5 days of the month (invoice generation, payroll, and 3 other such dedicated days). That was "acceptable" because a server that could generate those pieces in a few seconds would have cost more than the inconvenience of the application being unavailable.
  • by Shinobi ( 19308 ) on Sunday February 12, 2012 @04:59PM (#39013313)

    It's not the decimal places as such. It's the fact that, if we step through the entire stack: everything is rounded off properly (that is, no floating-point rounding errors); input comes from many concurrent sources without choking; and reliability means not just machine/OS/application uptime but also data integrity. A modern mainframe does ECC, checksumming, etc. at every stage of data handling, even in transfers to and from RAM (a toy illustration of the idea follows this comment), and with the proper configuration you can also have data encrypted in transfer at no cost in throughput or computational performance. The I/O also means it can crunch through all the conditionals listed in records much faster, including all the banking and tax rules. In terms of physical hardware, everything is redundant, with the aforementioned ECC etc., and if you are serious, you sysplex it. And that's just for payroll/employee records. Virtualization is handled pretty much transparently, seeing as IBM has done it on mainframes since the 1960s. Security of the underlying system is excellent, with superb compartmentalization, ACLs, etc., such that the Linux images you can run virtualized are less secure even in a hardened configuration.

    For financials or insurance, it's everything mentioned above, plus handling thousands of terminals/ATMs, handling transfers between accounts in real time, etc. At a bank here in Sweden, the Unix/Linux crowd has been trying to move the bank away from mainframes for over 9 years now, but the mainframe division can show, year after year, that even with the support costs it's cheaper to use the mainframe, because it's more reliable and needs less infrastructure and manpower than the bundle-of-servers approach.
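    A toy illustration of the end-to-end integrity checking described above, in Python. Real mainframes do this in hardware at every stage of the data path; the "send" and "receive" stages here are hypothetical stand-ins for any hop a record takes:

        import hashlib

        def send(record: bytes) -> tuple[bytes, str]:
            """Attach a checksum before the record leaves this stage."""
            return record, hashlib.sha256(record).hexdigest()

        def receive(record: bytes, checksum: str) -> bytes:
            """Verify on arrival, so corruption can never pass silently."""
            if hashlib.sha256(record).hexdigest() != checksum:
                raise IOError("record corrupted in transfer")
            return record

        rec, chk = send(b"ACCT 4711 CREDIT 125.00")
        assert receive(rec, chk) == b"ACCT 4711 CREDIT 125.00"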

  • by mikael ( 484 ) on Sunday February 12, 2012 @05:16PM (#39013427)

    Mainframes support virtualization: multiple versions of different operating systems running at the same time. One OS can be used for development, another for production. Hardware can be hot-plugged; you can pull out CPUs, disk drives, and memory boards, all while the system is still running. The system will have its own backup power, its own diesel generator, fuel tanks, and air conditioning. Just as much reliability and redundancy as a nuclear reactor, if not more.

  • by Forever Wondering ( 2506940 ) on Sunday February 12, 2012 @05:41PM (#39013637)
    And backward compatibility and service. Write once, run forever: you can take a binary program compiled in 1975 and it will run unchanged on the latest mainframe.

    Even if it's on a punched-card deck and you don't have card reader hardware anymore, IBM does. Its support group will transfer the card deck to whatever media your current hardware can handle.

    Also, if a mainframe ever does go down, IBM's service escalation policy is unbelievable (that's what you pay for). I remember when my datacenter's mainframe went down, circa 1975. The following numbers aren't exact, but close.

    The local rep must be onsite within a fixed period of time (e.g. 2 hours). He has [say] 4 hours to diagnose/fix the problem. If he is unable to do so, the regional hotshot is called in. If more time goes by, the national service rep and one or more of the system architects must arrive. After 24 hours, an executive vice president must be onsite and stay until the problem is resolved.

    When we had our problem, the onsite VP had the entire mainframe replaced, diverting a system scheduled to go to a new customer and airfreighting it up. Total round-trip time for complete replacement and install: 72 hours.

    Also, the mainframes in those days were much bigger iron than the one pictured in the article. You could fit five z9s into the space of a single S/370.

  • by symbolset ( 646467 ) * on Sunday February 12, 2012 @06:01PM (#39013823) Journal

    It's really not hard to configure a single rack server with 1M IOPS, 1-2 TB RAM, 40-160Gbit aggregate networking and 40-48 cores these days. They fit 4-8 per rack, storage and switching included. They don't cost as much as you might think, even with the hand-holding support contract. And they run the OpenStack "cloud" platform quite well.

  • by RadioTV ( 173312 ) on Sunday February 12, 2012 @06:59PM (#39014213)

    I watched an IBM mainframe service tech remove the jacket of his three piece suit, roll up his shirt sleeves, strike up a propane torch and re-sweat the solder joints on the copper pipe for the water cooling system of an ES/9000. I don't know what they are like these days, but 15 years ago those guys were amazing. They had to know how to repair every piece of hardware that IBM made, and how to troubleshoot every operating system.

  • by whatteaux ( 112343 ) on Sunday February 12, 2012 @07:32PM (#39014469)

    It's not the number of decimal places that's the issue. Mainframes can store and manipulate a number like 1.23 as EXACTLY 1.23, whereas a lesser machine would have to use some binary floating-point approximation (1.230000001234 or 1.229999999993241, for example) with rounding, etc.; a short demonstration follows this comment. Even the programming languages used on mainframes (mainly COBOL and PL/I, but also RPG) have specific provision for fixed-point decimal data types, whereas C and its derivatives (C++, C#, Java, Objective-C, D, etc.) are utterly clueless.

    But mainframe financial applications do relatively little actual arithmetic. Most of their time is spent moving strings and structures around - something that C and its derivatives also just can't do efficiently, if at all, whereas COBOL and PL/I do it easily and quickly.

    In mainframe software, anything that can be static is static; data is only dynamic if it absolutely has to be. This is the basis for the high efficiency of mainframe software. COBOL has no equivalent of C's malloc or C++'s new; PL/I does (of course: ALLOCATE), but it's used relatively rarely. You therefore almost NEVER see memory leaks in mainframe software.
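    The exact-decimal point is easy to demonstrate. A minimal sketch in Python, whose decimal module plays roughly the role that packed-decimal types play in COBOL and PL/I:

        from decimal import Decimal, ROUND_HALF_UP

        # Binary floating point cannot hold 1.23 exactly:
        print(f"{1.23:.20f}")    # 1.22999999999999998224

        # A decimal type stores 1.23 exactly, like a COBOL PIC 9V99 field,
        # and rounding is explicit and auditable:
        price = Decimal("1.23")
        total = (price * 3).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
        print(total)             # 3.69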

  • It isn't that hard to implement "fixed decimal point" mathematical algorithms. It can be done on any computer that has integer opcodes... which is just about every computer built since ENIAC. The rounding you are referring to comes from floating-point numbers, which are rounded the moment they are put into the data format in the first place.

    Making a "fixed point data type" in C++, C#, Java, or other more modern languages is simply creating that data type in the first place, and most of those languages have operator overloading for those data types that make the task trivial once you've implemented the type. I'm sure simply looking around can find find that already written as a library, but any programmer worthy of the title should be able to create these data types from scratch in those languages.

    C and its derivatives also move strings around very efficiently, but I'll admit the programmer interface for accessing those functions is clunky and obfuscated, while COBOL does it as a designed-in behavior of the language.

    As for static variables, the problems that come from dynamic variables have more to do with how programmers using those languages have been taught to use them: dynamic variables are considered ordinary, and developers insist upon pointer references in libraries and common interfaces. Again, it doesn't have to be done that way, but the API standards push it to be that way. COBOL comes from an earlier time when such techniques simply weren't taught.

    Traditionally, mainframes had larger bus and register sizes, which is critical for fixed-point calculations. With 64-bit registers now common in microcomputers, and even some low-end game consoles having 128- or even 256-bit registers, the real strengths of mainframes in terms of raw computing power are almost gone. About the only benefits left to mainframes are uptime and the circuit redundancy for hot-swapping components. For computing tasks that take hours or days to complete, that can be very important.
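    A minimal sketch of the fixed-point type the parent describes, in Python: the value is a plain integer count of cents, and operator overloading makes the arithmetic read naturally. A production version would need negatives, division, and an explicit rounding policy:

        class Money:
            """Fixed-point currency: an integer count of cents, so no float rounding."""

            def __init__(self, cents: int):
                self.cents = cents

            @classmethod
            def parse(cls, text: str) -> "Money":
                dollars, _, frac = text.partition(".")
                return cls(int(dollars) * 100 + int(frac.ljust(2, "0")[:2]))

            def __add__(self, other: "Money") -> "Money":
                return Money(self.cents + other.cents)

            def __mul__(self, n: int) -> "Money":
                return Money(self.cents * n)

            def __str__(self) -> str:
                return f"{self.cents // 100}.{self.cents % 100:02d}"

        print(Money.parse("1.23") + Money.parse("2.46"))  # 3.69
        print(Money.parse("19.99") * 3)                   # 59.97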

  • by bored ( 40072 ) on Sunday February 12, 2012 @11:54PM (#39015915)

    Having just got my first z114, I call BS. To me it seems that VMware is actually more advanced. At least there I don't need 30+ hours to install an OS, and migrating an image around is as easy as a couple of clicks. Plus, I don't have to work around the OS's inability to fully utilize disks larger than 54 GB.

    The list goes on and on.
