Patching Software on Another Planet

An anonymous reader writes "Sixteen years ago, the Mars Pathfinder lander touched down on Mars and began collecting data about the atmosphere and geology of the Red Planet. Its original mission was planned to last somewhere between a week and a month, but it only took a few days for software problems to crop up. The engineers responsible for the system were forced to diagnose the problem and issue a patch for a device that was millions of miles away. From the article: 'The Pathfinder's applications were scheduled by the VxWorks RTOS. Since VxWorks provides pre-emptive priority scheduling of threads, tasks were executed as threads with priorities determined by their relative urgency. The meteorological data gathering task ran as an infrequent, low priority thread, and used the information bus synchronized with mutual exclusion locks (mutexes). Other higher priority threads took precedence when necessary, including a very high priority bus management task, which also accessed the bus with mutexes. Unfortunately in this case, a long-running communications task, having higher priority than the meteorological task, but lower than the bus management task, prevented it from running. Soon, a watchdog timer noticed that the bus management task had not been executed for some time, concluded that something had gone wrong, and ordered a total system reset.'"
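For readers who haven't run into priority inversion before, the failure mode can be sketched with ordinary POSIX threads (an illustrative sketch only: Pathfinder ran VxWorks tasks, not pthreads, the task names and priorities below are invented, and the hang only reproduces on a single CPU under SCHED_FIFO, which normally requires elevated privileges):

    /* Illustrative only: a Pathfinder-style priority inversion rebuilt with
     * POSIX threads.  Build with -pthread. */
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t bus_mutex = PTHREAD_MUTEX_INITIALIZER;

    static void *meteo_task(void *arg)      /* LOW priority */
    {
        pthread_mutex_lock(&bus_mutex);     /* takes the information bus...         */
        usleep(50 * 1000);                  /* ...and is preempted while holding it */
        pthread_mutex_unlock(&bus_mutex);
        return NULL;
    }

    static void *comms_task(void *arg)      /* MEDIUM priority, long-running */
    {
        for (;;)                            /* hogs the CPU, so the LOW task never */
            ;                               /* gets to run and release the mutex   */
    }

    static void *bus_mgmt_task(void *arg)   /* HIGH priority */
    {
        pthread_mutex_lock(&bus_mutex);     /* blocks behind meteo_task forever;     */
        pthread_mutex_unlock(&bus_mutex);   /* on Pathfinder the watchdog then fired */
        puts("bus management ran");         /* never printed */
        return NULL;
    }

    static pthread_t spawn(void *(*fn)(void *), int prio)
    {
        pthread_t t;
        pthread_attr_t attr;
        struct sched_param sp = { .sched_priority = prio };

        pthread_attr_init(&attr);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        pthread_attr_setschedparam(&attr, &sp);
        pthread_create(&t, &attr, fn, NULL);
        return t;
    }

    int main(void)
    {
        pthread_t lo = spawn(meteo_task, 10);
        usleep(10 * 1000);                  /* let the LOW task grab the mutex first */
        pthread_t hi = spawn(bus_mgmt_task, 90);
        pthread_t md = spawn(comms_task, 50);

        pthread_join(hi, NULL);             /* never returns: that's the inversion */
        pthread_join(lo, NULL);
        pthread_join(md, NULL);
        return 0;
    }

With priority inheritance turned on for the mutex (PTHREAD_PRIO_INHERIT via pthread_mutexattr_setprotocol, the POSIX analogue of the VxWorks option discussed in the comments below), meteo_task would be boosted above comms_task while it holds the lock, and the high-priority task would get the bus back.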
  • by Anonymous Coward

    I didn't even read the full summary. But hasn't this priority inversion issue been reported on ... many years ago?

  • by xmas2003 ( 739875 ) * on Saturday July 06, 2013 @10:51AM (#44203389) Homepage
    From TFA: "Engineers later confessed that system resets had occurred during pre-flight tests. They put these down to a hardware glitch and returned to focusing on the mission-critical landing software"

    Very surprised by this ... even if a hardware glitch, wouldn't you want to track that down before launch? Especially since in the harsh space environment (bit flips even with hardened RAM/CPU), you want your hardware to be as reliable as possible.
    • by mlts ( 1038732 ) * on Saturday July 06, 2013 @11:13AM (#44203475)

      Devil's advocate here:

      If I had to guess: there are so many glitches to prioritize, and with a limited budget, if it isn't something that actively shuts down operations, resources are spent on other things.

      The one good thing in this equation is the watchdog circuits. Without them, the hardware can go down and never come back to life.

      It is extremely hard to get working operating systems and patch management right here on Earth [1]... much less to build systems that have to work where there is no way to walk up to the machine and re-flash a new OS via the JTAG port.

      [1]: Patch management has had issues on every OS I've used. AIX throws lppchk errors that mean force-installing LPPs, Red Hat gets RPM glitches that can force a rebuild of the database, Windows sometimes just won't install (or won't allow installing) an update from WU, and so on. Now, with this in mind, trying to patch a machine millions of miles away is very daunting for even the best of the best.

      • by girlintraining ( 1395911 ) on Saturday July 06, 2013 @11:58AM (#44203739)

        If I had to guess: there are so many glitches to prioritize, and with a limited budget, if it isn't something that actively shuts down operations, resources are spent on other things.

        Devil here: This isn't a budget problem, this is a management problem. Going all the way back to the Challenger disaster, NASA has shown a pattern of disregard for proper engineering practice. Richard Feynman chewed their ass out in Appendix F [nasa.gov] of the Challenger report to Congress, and it was so scathing that both Congress and NASA tried to kick him off the board and discard his results... prompting the entire senior engineering staff of all branches of the Shuttle project to sign a petition saying: Either publish this, or face our wrath.

        This isn't a technical problem -- this is management having shitty project management skills. If the budget is insufficient, then the project scope has to be reduced. It's just that simple. This is not the engineers' fault, nor is it the fault of the technology... this is management trying to do too much with too little.

        • by 0123456 ( 636235 )

          Bugs get prioritised, and when you have to launch this year or wait three years for the next launch window, non-critical bugs aren't going to delay a launch. A bug that causes the computer to reset and return to operation is not a critical bug for a system that's rolling over the surface of Mars at a few feet per minute.

          I remember reading that the Apollo Guidance Computer developers would randomly press the reset button while testing their software just to ensure that, if it did reset, that wouldn't cause problems.

        • Re: (Score:2, Insightful)

          by DerekLyons ( 302214 )

          This isn't a technical problem -- this is management having shitty project management skills. If the budget is insufficient, then the project scope has to be reduced. It's just that simple. This is not the engineers' fault, nor is it the fault of the technology... this is management trying to do too much with too little.

          It must be nice to live in your black and white world, but the rest of us live in the real world where engineering, budget, and schedule tradeoffs are a reality.

          • by gl4ss ( 559668 )

            This isn't a technical problem -- this is management having shitty project management skills. If the budget is insufficient, then the project scope has to be reduced. It's just that simple. This is not the engineers' fault, nor is it the fault of the technology... this is management trying to do too much with too little.

            It must be nice to live in your black and white world, but the rest of us live in the real world where engineering, budget, and schedule tradeoffs are a reality.

            well yeah.. unless you're building something to go to space. I suppose someone knew that the problem wouldn't make patching impossible.

            of course, if the benefits from the project are totally immeasurable, then it doesn't matter that much that the thing might waste the entire budget if it doesn't work.

            • It must be nice to live in your black and white world, but the rest of us live in the real world where engineering, budget, and schedule tradeoffs are a reality.

              well yeah.. unless you're building something to go to space.

              Well, no. Even things that are being built to go into space suffer from the same limitations as anything else.

        • by arth1 ( 260657 )

          Richard Feynman chewed their ass out in Appendix F of the Challenger report to Congress, and it was so scathing that both Congress and NASA tried to kick him off the board and discard his results... prompting the entire senior engineering staff of all branches of the Shuttle project to sign a petition saying: Either publish this, or face our wrath.

          I have a hard time believing that Feynman wrote it, or that it wasn't re-written by someone else before it was published. Read this (emphasis mine):

          "A more reasonable figure for the mature rockets might be 1 in 50. With special care in the selection of parts and in inspection, a figure of below 1 in 100 might be achieved but 1 in 1,000 is probably not attainable with today's technology. (Since there are two rockets on the Shuttle, these rocket failure rates must be doubled to get Shuttle failure rates from

      • by sjames ( 1099 )

        I have to wonder these days if including a BMC that CAN re-flash through JTAG remotely might have become practical. While it is extra weight and we'd need a hardened BMC, it's not like it has to have much performance as long as it runs at all. Given that it could save a mission, it could be a good trade-off.

      • Most things I've worked on have watchdog circuits or software. They're almost essential, since you will never be able to find every possible bug or account for every exceptional condition. Watchdogs are the system equivalent of customer support telling you to reboot and see if that fixes it. Here on Earth it can sometimes be amazingly difficult just to patch something ten miles away.
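        For anyone who hasn't built one, a software watchdog really is about as simple as the toy below (purely illustrative: the names are made up, and a real flight watchdog is a hardware counter that forces a reset when it isn't serviced, not a thread calling exit):

          /* Toy watchdog: the watched task must "kick" it periodically, or the
           * monitor decides the system is wedged and forces a "reset".
           * Build with -pthread. */
          #include <pthread.h>
          #include <stdatomic.h>
          #include <stdio.h>
          #include <stdlib.h>
          #include <unistd.h>

          static atomic_long silence;              /* seconds since the last kick */

          void watchdog_kick(void)                 /* called by the watched task */
          {
              atomic_store(&silence, 0);
          }

          static void *watchdog_monitor(void *arg)
          {
              const long timeout = 5;              /* seconds of silence tolerated */
              for (;;) {
                  sleep(1);
                  if (atomic_fetch_add(&silence, 1) + 1 >= timeout) {
                      fprintf(stderr, "watchdog: no sign of life, resetting\n");
                      exit(EXIT_FAILURE);          /* stand-in for a hardware reset */
                  }
              }
          }

          int main(void)
          {
              pthread_t w;
              pthread_create(&w, NULL, watchdog_monitor, NULL);

              for (int i = 0; i < 3; i++) {        /* healthy for a few seconds... */
                  sleep(1);
                  watchdog_kick();
              }
              pause();                             /* ...then wedges: watchdog fires */
              return 0;
          }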

    • There's just so much you can do. Eventually you just have to leave it and hope it works.
    • by Cammi ( 1956130 )
      Sounds like a management decision.
    • From TFA: "Engineers later confessed that system resets had occurred during pre-flight tests. They put these down to a hardware glitch and returned to focusing on the mission-critical landing software"

      Very surprised by this ... even if a hardware glitch, wouldn't you want to track that down before launch? Especially since in the harsh space environment (bit flips even with hardened RAM/CPU), you want your hardware to be as reliable as possible.

      Perhaps they were thinking about that sweet sweet mileage charge for a service call?

    • by v1 ( 525388 )

      Inter-planetary travel imposes inflexible deadlines. Planetary alignment dictates when your launch windows are, and they are frequently several years apart. Compare it with the space shuttle, for example, where you can get a launch window every day or two and have lives at risk. Project planning on an inter-planetary launch spreads out over years. If your part of the project starts getting behind, and it's not something you can fix by simply throwing more resources at it, you have to prioritize so you don

  • by Anonymous Coward

    Fixing code written by someone from a different planet.

    • by Anonymous Coward

      At my job we refer to them as Indian contractors.

      Usually subcontractors working for IBM.

  • by BitZtream ( 692029 ) on Saturday July 06, 2013 @11:13AM (#44203481)

    This problem is known as priority inversion. It's a common concern in schedulers when critical functions run in their own threads. It's something that they should have known about and tested against. Or they could have used more traditional IO approaches and let the VxWorks IO system, which already has protection against priority inversion by design, do its job.

    • by k8to ( 9046 )

      The issue was simple.

      In VxWorks, when you create pipes for IPC, you get to choose what kind of semaphore you want, because people want the shortest deadlines possible (at least classically).

      JPL selected simple mutexes, which led to the priority inversion.

      Pipes were, generally speaking, far less well exercised in the codebase at the time, and Wind River engineers explicitly advised the use of message queues which would offer the necessary functionality and would not have had the problem encountered.
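      Roughly, the options look like this in VxWorks terms (a sketch using the public semLib/msgQLib calls, not Pathfinder source; as I recall, the fix that was actually uploaded amounted to switching the existing mutex to the inversion-safe option):

        /* Sketch only -- option names are from the public VxWorks API, sizes made up. */
        #include <vxWorks.h>
        #include <semLib.h>
        #include <msgQLib.h>

        static SEM_ID   busSem;   /* mutex guarding the shared information bus */
        static MSG_Q_ID busQ;     /* alternative: hand data over via a queue   */

        void bus_init(void)
        {
            /* What reportedly flew: a priority-queued mutex with no inheritance.
             *   busSem = semMCreate(SEM_Q_PRIORITY);                              */

            /* The one-flag fix: with SEM_INVERSION_SAFE, a task holding the mutex
             * temporarily inherits the priority of its highest-priority waiter,
             * so a medium-priority task can no longer starve it.                  */
            busSem = semMCreate(SEM_Q_PRIORITY | SEM_INVERSION_SAFE);

            /* What Wind River reportedly recommended: a message queue, so no lock
             * is shared across tasks in the first place.                          */
            busQ = msgQCreate(32 /* msgs */, 64 /* bytes each */, MSG_Q_PRIORITY);
        }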

  • Do people still call this the red planet? lol
  • by Hentes ( 2461350 ) on Saturday July 06, 2013 @11:34AM (#44203617)

    Seriously? This reads like a morality tale for beginner programmers. "Remember kids, always check the settings of your mutexes!"
    Will we also have articles about NASA engineers mistyping == for = ? Everyone makes mistakes; just because it happened in a rover doesn't make it interesting.

    • The interesting part is not so much that people make mistakes, it is how they are solved.

      • by Hentes ( 2461350 )

        Not really, most bugs are easy to fix once found. Especially trivial ones like this.

        • Of course, it's the process of finding the bug, and later updating the remote device, that's interesting. I know fixing a bug is often just a few keystrokes - after spending hours or days searching for the cause.

    • Re:Boring story (Score:5, Interesting)

      by Antique Geekmeister ( 740220 ) on Saturday July 06, 2013 @11:48AM (#44203683)

      Actually, it's very interesting. It shows that even with the very extensive testing and layers of planning and managerial processes to prevent such errors, they can still creep in. And it shows that very expensive, one-off projects remain vulnerable to subtle design errors, so the tools to do field updates are _critical_.

      Note that designing for spacecraft can be a real art form: they have extremely limited computational resources, due to the inherent risks of bit errors in increasingly small modern silicon exposed to radiation and temperature changes, and you cannot simply shield the electronics: the shielding adds weight and itself becomes radioactive over time. So you often wind up using quite old but far more stable technologies. That means tools that may be considered quite obsolete by the time your design phase is complete and the device is ready for launch. And by the time it arrives _on Mars_, the technology is very obsolete indeed.

      My respect for the programmers and designers of interplanetary spacecraft is enormous: systems like Voyager and the Mars rover Spirit, which exceed their design lifespans by years, fill me with pride as an engineer that we could build so well. And the obligatory XKCD on the subject:

              http://www.xkcd.com/695/ [xkcd.com]

      • by Hentes ( 2461350 )

        Actually, it's very interesting. It shows that even with the very extensive testing and layers of planning and managerial processes to prevent such errors, they can still creep in. And it shows that very expensive, one-off projects remain vulnerable to subtle design errors, so the tools to do field updates are _critical_.

        That's true, but has been known for a while.

        Note that designing for spacecraft can be a real art form: they have extremely limited computational resources, due to the inherent risks of bit errors in increasingly small modern silicon exposed to radiation and temperature changes, and you cannot simply shield the electronics: the shielding adds weight and itself becomes radioactive over time. So you often wind up using quite old but far more stable technologies. That means tools that may be considered quite obsolete by the time your design phase is complete and the device is ready for launch. And by the time it arrives _on Mars_, the technology is very obsolete indeed.

        That is indeed an interesting topic, and I wouldn't have complained if the article talked about that. But it was just a generic description of a common error with almost no details about the actual system. I didn't say that I don't respect the engineers working on the project. Even the best minds make simple errors. It just doesn't make for a good story.

      • All the more reason that public, non-military projects like this should have everything open sourced.

        Had the hardware and software platforms both been open sourced and available to the public, they would have had a lot more hands and eyes helping to correct these issues.

        The only way we're going to get off this planet is with mutual cooperation, and I think that should start between the public sector and..well..the public.
  • Mars Code (Score:3, Interesting)

    by Anonymous Coward on Saturday July 06, 2013 @11:45AM (#44203663)

    At the USENIX "Hot Topics in System Dependability 2012" conference, Gerard Holzmann of JPL gave a fantastic talk [usenix.org] about how they developed the software for the Curiosity rover. (Spoiler: having to display a Bieber poster in your cubicle if you break the nightly build is a great motivator.)

  • Are they certain it wasn't just the person on the tech support line who suggested rebooting it?

  • This is a rambling bit of history. Move on if that's not your thing. I love reading about problems like the Pathfinder problems. Trust me - such things often happen on Earth-bound systems, too.

    Back in '79, I was working on a multiprocessing router for the ancient ARPANET. At the time the net had over sixty routers distributed across the continent. Actually we called them "imps" - well, "IMPs" - but I'll use the modern term "router." We had a lot of the same problems as Pathfinder without ever leaving the atmosphere.

    By then all ARPANET routers were remotely maintained. They all ran continuously and we did all software maintenance in Cambridge, MA. By then the basic software was really reliable. They rarely crashed on their own, and we mostly sent updates to tweak performance or to add new protocol features. Once in a while we'd have to use a "magic modem" message to restart a dead machine and to reload things. The software rarely broke so badly that we'd have to have someone on-site load up a paper tape. So remote maintenance was well established by then.

    The multiprocessor didn't run "threads"; it ran "strips." Each was a non-preemptive task designed to execute quickly enough not to monopolize the processor. If you wrote software for a Mac before OS X, you know how this works. A multi-step process might involve a sequence of strips executed one after the other.

    Debugging the multiprocessor code was a bit of a challenge because we could lock out multi-step processes in several different ways. While we could put our test router on the network for live testing, this didn't guarantee that we'd get the same traffic the software would get at other sites. For example, we had software to connect computer terminals directly to hosts through the router (the original "terminal access controllers"). This software ran at a lower priority than router-to-router packet handling. It was possible for a busy router to give all the bandwidth to the packets and essentially lock out the host traffic. Such problems might not show up until updated software was loaded into a busy site.
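    To make "strips" concrete, the scheduling loop was roughly the shape of the sketch below (invented names and stubs, nothing like the real IMP code, which was assembly), and you can see how heavy packet traffic at a busy site could lock out the host-traffic strip:

      /* Run-to-completion "strips": each call does one short, bounded unit of
       * work and returns; nothing is ever preempted.  All names are invented. */
      typedef int bool_t;

      static bool_t packets_pending(void)      { return 0; }  /* stubbed for the sketch */
      static bool_t host_jobs_pending(void)    { return 0; }

      static void forward_packets_strip(void)  { /* handle one router-to-router packet */ }
      static void host_traffic_strip(void)     { /* handle one terminal/host job       */ }
      static void housekeeping_strip(void)     { /* stats, timers, etc.                */ }

      void scheduler_loop(void)
      {
          for (;;) {
              if (packets_pending())            /* packet forwarding always goes first, */
                  forward_packets_strip();      /* so on a busy router it can           */
              else if (host_jobs_pending())     /* effectively lock out host traffic    */
                  host_traffic_strip();
              else
                  housekeeping_strip();
          }
      }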

    Uploading a patch involved assembly language. We'd generally add new code virus style. First you load the new code into some spare RAM. Once the code is loaded, we patch the working program so that it jumps to the patch the next time it executes. The patch jumps back to an appropriate spot in the program once the new code has executed. We sent the patches in a series of data packets with special addressing to talk to a "packet core" program that loaded them.
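    In modern C, the patching idea looks roughly like the sketch below (a loose analogy only: the real patches rewrote a jump instruction inside the running assembly code, whereas this uses an explicit indirection pointer so it compiles anywhere; every name is invented):

      /* "Virus-style" live patching, loosely: new code lands in spare memory and
       * a known point in the running program is redirected to it.  The function
       * pointer below stands in for the rewritten jump instruction. */
      #include <stdio.h>
      #include <string.h>

      static unsigned char spare_ram[4096];        /* where an uploaded patch lands */

      static int route_packet_v1(int dest)         /* the code that shipped */
      {
          return dest % 8;
      }

      static int route_packet_v2(int dest)         /* the uploaded fix */
      {
          return (dest % 8 == 7) ? 0 : dest % 8;   /* handles the bad case */
      }

      /* The rewritable "jump": every caller goes through this pointer. */
      static int (*route_packet)(int) = route_packet_v1;

      /* Stand-in for the "packet core" loader: copy the uploaded bytes into spare
       * RAM, then redirect execution.  (In the real thing the copied bytes ARE the
       * new code; here they are copied only for show and we point at the locally
       * compiled v2.) */
      static void apply_patch(const void *blob, size_t len)
      {
          memcpy(spare_ram, blob, len < sizeof spare_ram ? len : sizeof spare_ram);
          route_packet = route_packet_v2;
      }

      int main(void)
      {
          printf("before patch: %d\n", route_packet(15));
          apply_patch("...uploaded object code...", 26);
          printf("after patch:  %d\n", route_packet(15));
          return 0;
      }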

    The bottom line: it's the sort of challenge that kept a lot of us working as programmers for a long time. And they pop up again every time someone starts another system from scratch.

  • More like "olds," am I right? Huh? Ahhh.
    • I remember a lecturer telling exactly this story when I took a real-time systems course circa 2001...
      • by matfud ( 464184 )

        Far, far older than that. It is not a new problem, but it is a very persistent one. There are many ways to try to avoid the problem. Most do not work in practice. Priority inversion is quite tricky to deal with.

  • This is why you don't use threads for important stuff...

  • or any other Microsoft-related OS, thank God.
