
Stress-Testing Software For Deep Space 87

Posted by samzenpus
from the phone-the-help-desk dept.
kenekaplan writes "NASA has used VxWorks for several deep space missions, including Sojourner, Spirit, Opportunity and the Mars Reconnaissance Orbiter. When the space agency's Jet Propulsion Laboratory (JPL) needs to run stress tests or simulations for upgrades and fixes to the OS, Wind River's Mike Deliman gets the call. In a recent interview, Deliman, a senior member of the technical staff at Wind River, which is owned by Intel, gave a peek at the legacy technology under Curiosity's hood and recalled the emergency call he got when an earlier Mars mission hit a software snag after liftoff."
This discussion has been archived. No new comments can be posted.

  • by gadzook33 (740455) on Wednesday October 10, 2012 @09:36PM (#41615239)
    While I buy that the landing systems need an RTOS, I doubt Curiosity does. Image processing that happens with "precision"? Do x86 processors not process images precisely enough? I get the idea of being hardened to radiation but it was my understanding we have newer processors that fit the bill on this. The rest of this seems like a rationalization for using old hardware. However, as an engineer for the government it's possible I'm just old and embittered.
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Remember, too, that Curiosity has been in the works for almost a decade. They had to commit to a spec for the computers a long time ago, so it's no wonder by today's standards things seem out of date.

      • by Anonymous Coward on Wednesday October 10, 2012 @10:02PM (#41615385)

        that's why land-based projects like the SKA, which also take decades to complete, are designed with Moore's law taken into account, leading to a very funny situation in which the project starts and they start building stuff while the computers that will run the thing are still 10 years away... (and I guess everybody just hopes computers will keep up, or else...)

        Also, you must take into account that the actual instruments are built fairly early, i.e. 5 or more years before launch, since there is a LOT of testing, calibration, more testing, etc. Additionally, when the stakes are a billion-dollar project like these, you tend to leave out fancy new things and favor old, proven, well-documented tech. Just in case...
        If not, you just mount two instruments if you have the space and money: a fancy new one and the old usual thing (such is the case for Solar Orbiter, for example).

        • The classic example is the Pentium math error: imagine if you were 2 years into the mission and then discovered that the new high speed chip you put in gives incorrect floating point calculations.

      • Exactly, and imagine if we sent people there. It's amazing we sent people to the moon so many years ago!

    • by Sasayaki (1096761) on Wednesday October 10, 2012 @09:56PM (#41615355)

      My understanding is that the thinking goes like this.

      Sure, there are newer processors that claim to fit the bill. But space hasn't changed so much since the Apollo days that we need all new processors; by and large anything that needs "heavy lifting" CPU wise can be transmitted back to Earth. For unmanned probes, there's very little demand for high speed CPU tasks that can't be offloaded to Earth. And even if there was, when your latency back to your operator is about 14 minutes (with an extra 14 to receive further instructions, plus the time it takes to interpret the previous data set, determine new instructions, then program those instructions), that's a lot of down time to work on various tasks.

      The Mars rover CPUs, I imagine, spend the vast majority of their time idling.

      However... the old stuff works. It has its faults and flaws, sure, but they're extremely well known and documented. You can work around them. You have the old grognards that have been kicking around since Apollo who know every damn thing about them. They're risky, sure, but it's a managed, controlled, limited and understood risk. But new processors are *new*. You lose that element of certainty, and the CPU is the heart of a probe. You lose it, you're fucked.

      You're trusting the mission, a mission that costs billions of bucks, to a new, untested device that hasn't been field tested, hasn't got that certainty, and *you just don't need*.

      • by Grave (8234)

        That's just it - this sort of computing task cannot get by with 5 9's or 7 9's or a hundred 9's of reliability. It needs to be 100% reliable, which means that every potential hiccup, flaw, or design quirk is understood and documented to the nth degree, and thus can be worked around. It also means you can reliably simulate the hardware and throw all sorts of stress testing at it.

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        Reminds me of about 10 years ago when I was working at Motorola on cell phone base stations. We switched from VxWorks to Linux and got... nothing. No performance gains, no reliability gains. Just a free OS instead of something we had to spend money licensing.

        Of course, all the extra time spent switching and testing certainly cost a lot of money in man hours.

      • by gadzook33 (740455)
        Yeah, I agree except that wasn't really how his argument goes...and yes, old stuff works. But new stuff works too (also, new here could be 5 years old). Anyway, I'm not really (or at least overly) questioning their rationale. I've just seen too many programs where the same people have been there forever and it's easier to keep doing the same thing rather than try something new. Again, hopefully that's not the case at NASA but it's sure as hell the case at the Pentagon.
        • Yeah, I agree except that wasn't really how his argument goes...and yes, old stuff works. But new stuff works too (also, new here could be 5 years old). Anyway, I'm not really (or at least overly) questioning their rationale. I've just seen too many programs where the same people have been there forever and it's easier to keep doing the same thing rather than try something new. Again, hopefully that's not the case at NASA but it's sure as hell the case at the Pentagon.

          On the other hand, NASA really doesn't have the budget to spend working up for something new either. A processor switch means new simulators, new architectures etc. I imagine for a space probe - i.e. something you can't get at ever if it breaks down - then you go with the processor you have when you start designing it, and you pick the most reliable thing you can.

        • by sjames (1099)

          How many hours in space had the new stuff logged when the design of Curiosity was completed?

      • Re: (Score:3, Informative)

        by lordholm (649770)

        Newer missions collect too much data to transmit everything back to Earth. They typically need to do local processing of, for example, images and other data. There are also AI aspects: for the ExoMars rover (made by Europe), the onboard computer will have a virtual scientist embedded. This virtual scientist looks at the camera pictures, decides whether something is worth an extra look, and may order the rover to carry out opportunistic science. I am not sure whether this is the case with Curiosity, but I could…

    • Re: (Score:3, Informative)

      by Anonymous Coward

      And that's where you (and most people) are mistaken.

      An RTOS is not an OS that acts "quickly"; it's an OS which provides a 100% guarantee that a task will be executed within a definite time-frame, whether that needs to be 1 microsecond or 1 hour, and which provides guarantees if the task cannot be completed in that time-frame. A job neither Windows nor any flavor of Linux can achieve.

    • by TubeSteak (669689)

      First of all, there's a typo in TFA.
      They state the chip is a "RAD760" but they link to the RAD750 wikipedia page.

      Do x86 processors not process images precisely enough? I get the idea of being hardened to radiation but it was my understanding we have newer processors that fit the bill on this.

      The problem with x86 technology is that it has gotten too advanced.
      The chips have become so dense that radiation hardening is much much more difficult than it used to be.
      Increased difficulty = increased expense

      Further, I don't think you appreciate the specs of that old PowerPC chip.
      Its tolerance of 1 megarad of radiation exposure is a lot.
      You literally get what you pay for with this chip, ranging…

    • They start planning these things years and years ahead. It is not unusual to have decided on a hardware platform five years before launch. Since there's a lot at stake for these bigger missions, they usually don't take risks and put stuff up there that hasn't proven itself. Maybe some evolution like a higher clock rate or more memory, but a new processor architecture gets tried on other things that have redundancy, lower cost, or less exposure, and preferably a combination…

    • by Animats (122034) on Thursday October 11, 2012 @02:02AM (#41616503) Homepage

      I get the idea of being hardened to radiation but it was my understanding we have newer processors that fit the bill on this.

      Radiation-hardened processors are hard to get. For one thing, they're export-controlled, so if you make them in the US, you can't sell many. Atmel makes a rad-hard SPARC CPU, and they've sold 3000 of them. Nobody seems to have built a modern x86 design or even an ARM in a rad-hard technology.

      There's a basic conflict between small gate size and radiation hardness. The smaller the transistors, the more likely a stray particle can damage or switch them. So the latest small geometries aren't as suitable. Also, the more radiation-hard processes, like Silicon on Sapphire, aren't used much for high-volume products.

      As a result, rad-hard parts are an expensive niche product. It's not inherently expensive to make them, but the volume is so small that the cost per part is high.

    • The RTOS may not be needed for image processing, but I'll bet it's handy when driving, or running other mechanical aspects.
      And once you have an RTOS for those tasks, it'd be silly to add another OS for the non-time-critical tasks.

    • by jittles (1613415)
      The government LOVES old hardware. Trust me. The AH-64D uses 486 processors. You know what? They aren't the only ones, either. I used to work for a company that designed and manufactured analog and digital video surveillance systems. They are still using 486s in some of their hardware as well (key components that require an insane MTBF to comply with regulations for casinos, military installations, etc.). Why? Because it runs nice and cool compared to modern processors, and it is a tried and true processor.
      • by toolie (22684)

        The AH-64D uses 486 processors.

        You didn't even get the architecture right, much less the processor.

        • by jittles (1613415)
          Depends on which aircraft system you're talking about. I can promise you that they have at least one 486 on board. I've dealt with the aircraft for years.
    • by Lumpy (12016)

      Then as an engineer you understand why it's super stupid to have it all run off of 1 processor.

      Guidance and maneuvering is one processor/system. Science package another, imaging another, etc... When you can't get to it to press the reset button, you don't make the dumb mistakes done on consumer hardware, like the automotive industry does.

      Example: GM having 90% of the car run on the BCM, and Honda running the WHOLE car, including the engine, off of a single ECM. My AC quit working because of a faulty sensor shorting out the IO port on the ECM; the only fix is to replace the WHOLE ECM for the car at $2200.00.

      • by gadzook33 (740455)
        Yeah, I agree. In fact, I think that was the point I was trying to make (albeit unsuccessfully as it turns out).
      • > That kind of design is only done by really really dumb engineers.

        Or maybe by mid level managers?

        Hey, if this one component goes bad, (A) they MUST fix it because it controls so much, and (B) we make a boatload of money replacing it. Therefore, it's a great idea. Engineer promoted. Everyone happy.

        Oh, wait. Not everyone?
      • by mcgrew (92797) *

        Example: GM and having 90% of the car run on the BCM, and Honda running the WHOLE car including engine off of the single ECM. My AC quit working because of a faulty sensor shorting out the IO port on the ECM. only fix is to replace the WHOLE ECM for the car at $2200.00

        That kind of design is only done by really really dumb engineers.

        At one point I agreed with that sentiment. Hanlon's razor says don't assume malice when stupidity will explain, but mcgrew's razor says don't assume stupidity when greedy self-interest will.

  • by Anonymous Coward

    ''recalled the emergency call he got when an earlier Mars mission hit a software snag after liftoff."
    From TFA:

    Back when Spirit Rover landed on Mars in 2004, it experienced file system problems. I got a call on landing day while I was in Southern California. I fired up my laptop and worked with three groups who were dealing with a variety of time zones: California, Japan and Mars. Since I had a RAD6000 system on my desk running simulations, by the end of the first week we figured it out and were able to fix it.

    • by AaronW (33736) on Wednesday October 10, 2012 @10:20PM (#41615503) Homepage

      With my long experience with VxWorks this doesn't surprise me. VxWorks is not the most robust RTOS; think of it as a multi-tasking MS-DOS. The version they used has no memory protection between processes, and I have found numerous areas of VxWorks to be badly implemented or downright buggy.

      Up through version 5.3 the malloc() implementation was absolutely horrid and suffered from severe fragmentation and performance problems. On the platform I was working with, I replaced the VxWorks implementation with Doug Lea's implementation (which glibc's was based on), and our startup time dropped from an hour to 3 minutes. I was also able to easily add instrumentation so we could quickly find memory leaks or heap corruption in the field, something not possible with Wind River's implementation.

      After reading about the problems with the filesystem, I looked at the Wind River filesystem code. It was rather ugly. They map FAT on top of flash memory (not the best choice), and the corner cases (like a full filesystem) were not well handled.

      Similarly, their TCP/IP stack sucked as well. If you can drop to the T-shell through a security exploit you totally own the box (i.e. Huawei's poor security record).

      VxWorks is fine for simple applications, but for very complex applications it sucks. At least the 5.x series does not clean up after a task if it crashes, because it does not keep track of what resources a task uses. A task is basically just a thread of execution, and all memory is a shared global pool. At the time it did have one useful feature that Linux lacked: priority inheritance mutexes. These are a requirement for proper real-time performance, and I believe they are now included in Linux.

      • by Jeremi (14640) on Thursday October 11, 2012 @02:01AM (#41616501) Homepage

        Up through version 5.3 the malloc() implementation was absolutely horrid and suffered from severe fragmentation and performance problems.

        I talked to one of Curiosity's software engineers the day it landed... he mentioned that one of their coding rules was: no malloc() allowed.

        • No malloc()? Interesting; I worked on a project at NG and we had the same policy. Everything was on the stack or global. We had the chance to run with MontaVista embedded Linux, but someone higher up decided to go with "tried and true" VxWorks. I agree with a poster above about re-training costs and all that adding up... but if embedded Linux became standard with big companies, I don't think it would take too long to make up the costs of re-training and all the other stuff that goes with it.

        • by AaronW (33736)

          That is a good policy if you can do it, but in this case it was impossible. We had to use some 3rd-party software which used malloc and realloc extensively. To make matters worse, for a long time we could only get obfuscated code to support the network processor we were using, meaning that it was impossible to make changes to it. We also had to make use of it because of the dynamic nature of the software. In our case it really wasn't feasible to avoid malloc. Replacing Wind River's malloc had some huge advant…

        • Re: (Score:3, Funny)

          by Anonymous Coward

          Malloc is non-deterministic. The request for a pointer to contiguous free bytes will need to search a fragmented memory map to complete the request. The duration of the search depends upon the algorithm and the amount of fragmentation relative to the size of the request. It is worse if it must rearrange memory to accommodate the request. Thus, use of malloc() is typically avoided for time-critical code in a real-time operating system.

          • by dfries (466073)

            It is worse if it must rearrange memory to accommodate the request.

            You were going okay until here. You can't rearrange memory, malloc returns pointers, and there isn't any callback to ask for that pointer back to move it to another location.

            Byte-compiled languages like Java can rearrange memory, but you call new, not malloc, so I know you weren't talking about them. Garbage collection is a much bigger problem, especially if you think about mixing Java and real-time operations. C/C++ in realtime means fol…

      • by AmiMoJo (196126)

        You probably shouldn't be using malloc() on an embedded system like that anyway. Statically allocate everything. That way you know exactly how much memory will be consumed at any time and can budget appropriately. It also reduces the chance of a bug malloc()ing away all your memory, or of running out of stack space.

        VxWorks claims to have memory protection; chances are it is the CPU they are using that lacks an MMU to support it.

      • by Anonymous Coward

        If your objective requires an RTOS, you're probably not going to malloc(). There are edge cases, but we've successfully banished them. We don't use VxWorks, thank god, but we do use a real-memory machine instead of a virtual-memory machine. Getting young programmers to understand that is challenging, and getting CS grads, of all fucking people, to program for a real-memory machine is just fucking impossible. We make them managers instead.

      • Priority inheritance can be handled through FUTEX_PI. Issues due to lock contention can be handled by the kernel via FUTEX_LOCK_PI.
  • by Anonymous Coward
    At least one instrument [nasa.gov] running VxWorks has been flying on the ISS since 2001. I'd be surprised if it were the only one.
  • My PVR also runs VxWorks. Given that it still crashes randomly now and again, I hope they have a better version for space probes.
  • pshaw, we use RTEMS (Score:3, Informative)

    by Anonymous Coward on Wednesday October 10, 2012 @10:30PM (#41615559)

    the other big player in space RTOS: RTEMS.
    Free, open source, rtems.org.

    Has all the same problems as VxWorks.. no process memory isolation (because space flight hardware doesn't have the hardware to support it usually)....

    One thing that VxWorks has that RTEMS doesn't, and I wish it did, was dynamic loading and linking of applications. You're basically back in 1960s monolithic image days, not even with overlay loaders.

    • Why not FORTH?
      It was the go-to system for exploration satellites for years.
      I believe Voyager is running it still.
      • If a better OS came along since the start of the Voyager program, which I'm sure is true, I highly doubt that the Voyager crafts would get their disks wiped and a new OS installed, so to speak, while on their way to the edge of the solar system.
      • Re: (Score:3, Interesting)

        by Anonymous Coward

        FORTH is great. From about two dozen core instructions an entire operating environment can be built. Unfortunately, FORTH takes in-depth knowledge of not only the hardware, but also a firm grasp of the scope of the tasks that need to be performed. Most programmers today cannot handle FORTH -- imagine building your own TCP/IP stack, filesystem, and RTOS operating environment from scratch. That talent is found only in a dying breed of programmers, literally.

        For a robust, 100% reliable, radiation-hardened space environ…

  • by gQuigs (913879)

    That's what is (or was?) in newer Linksys routers, which are much less stable than the older Linux-based versions..

    http://en.wikipedia.org/wiki/VxWorks#Networking_and_communication_components [wikipedia.org]

    • by Anonymous Coward

      that's actually the fault of the device drivers and glue code.

      Unfortunately, VxWorks has a small, well-understood and deterministic core despite various suck points. That's important in many areas of control, but it has so many cons. The buggy POS crap all over the world running VxWorks is testament to that. It's like, once you consider the human factor in commanding the beast, why bother, because it's just going to be less reliable in the end. AFAIK you get to pay royalties too... This is truly one of the…

  • Didn't they used to do Linux distros back in the day?

    Yes , I know, off topic , but just asking...

    • by Anonymous Coward

      Yup. Wind River Linux. I remember using it and thinking what a /big/ pile of shit it was. It was slated as an embedded target, but the smallest they could shrink it to was a gig or so. Their support had no idea how to shrink it, and I only shaved it down by a couple of hundred meg.

      Suffice it to say, for what they were charging we were able to build our own glibc-based distro with newer, more stable components and cram it in under 200M. There was even change left over.

  • by Viol8 (599362) on Thursday October 11, 2012 @03:56AM (#41616999)

    An old PowerPC can fly a spaceship to Mars, execute a difficult landing, and now semi-autonomously drive a robot across the surface of a planet 30 million miles away, yet it's not up to the job of writing documents using the latest word processors. What's wrong with this picture?

    • I wonder what the CPU and memory load graphs would look like for a probe versus some standard desktop applications. Might explain a lot.
      • by Viol8 (599362)

        I would imagine that landing a spaceship takes a lot more CPU than reformatting some text and drawing a blinking cursor.

    • Your desktop word processing software also didn't have a licensing cost in the hundreds of thousands of dollars...
      • by mcgrew (92797) *

        Your desktop word processing software also didn't have a licensing cost in the hundreds of thousands of dollars

        It would if you were the only customer and it was only going to run on one computer. Do you have any idea how many programmers MS has and what it costs for salaries and other overhead?

  • I find the most revealing part of the interview that he publicly acknowledges his customers working on secret designs for space.
    I'm sure those customers will deny any such project exists.
