Stress-Testing Software For Deep Space
kenekaplan writes "NASA has used VxWorks for several deep space missions, including Sojourner, Spirit, Opportunity and the Mars Reconnaissance Orbiter. When the space agency's Jet Propulsion Laboratory (JPL) needs to run stress tests or simulations for upgrades and fixes to the OS, Wind River's Mike Deliman gets the call. In a recent interview, Deliman, a senior member of the technical staff at Wind River, which is owned by Intel, gave a peek at the legacy technology under Curiosity's hood and recalled the emergency call he got when an earlier Mars mission hit a software snag after liftoff."
Seems like a rationalization (Score:5, Interesting)
Re: (Score:2, Insightful)
Remember, too, that Curiosity has been in the works for almost a decade. They had to commit to a spec for the computers a long time ago, so it's no wonder by today's standards things seem out of date.
Re:Seems like a rationalization (Score:5, Informative)
That's why ground-based projects like the SKA, which also take decades to complete, are designed taking Moore's law into account, leading to a very funny situation in which the project starts and they start building stuff, but the computers that will run the thing are still 10 years away... (and I guess everybody just hopes computers will keep up, or else...)
Also, you must take into account that the actual instruments are built fairly early, i.e. 5 or more years before launch, since there is a LOT of testing, calibration, more testing, etc. Additionally, when the stakes are a billion-dollar project like these, you tend to leave the fancy new things aside and favor old, proven, well-documented tech. Just in case...
If not, you just mount two instruments, if you have the space and money: a fancy new one and the old usual thing (as is the case for Solar Orbiter, for example)
Re: (Score:3)
The classic example is the Pentium math error: imagine if you were 2 years into the mission and then discovered that the new high speed chip you put in gives incorrect floating point calculations.
Re: (Score:1)
Exactly, and imagine if we sent people there. It's amazing we sent people to the moon so many years ago!
Re: (Score:2)
It's funny how once money is put into developing something 'impractical', that other uses are found for it that lead to it being useful for everyone. Examples are many, but I'll just mention: GPS, Communication and Weather Satellites, The Internet and Xtube.
Re: (Score:2)
That "stunt", which maybe it was, resulted in driving a lot of advances in microelectronics that led directly to the cool toys we have today.
No, it didn't. ICs existed before Apollo, and the primary benefit was ramping up production and pushing for improved reliability.
We would have gotten here without the space race, but it would have taken longer.
Indeed. We might still be using Core 2s. Not a big deal in the grand scheme of technology when most i5s and i3s spend most of their time idle.
Examples are many, but I'll just mention: GPS, Communication and Weather Satellites, The Internet and Xtube.
None of which have anything to do with Apollo. It was a great achievement with the technology of its time, but the 'spinoff' arguments are just bogus.
Re:Seems like a rationalization (Score:5, Insightful)
My understanding is that the thinking goes like this.
Sure, there are newer processors that claim to fit the bill. But space hasn't changed so much since the Apollo days that we need all new processors; by and large anything that needs "heavy lifting" CPU wise can be transmitted back to Earth. For unmanned probes, there's very little demand for high speed CPU tasks that can't be offloaded to Earth. And even if there was, when your latency back to your operator is about 14 minutes (with an extra 14 to receive further instructions, plus the time it takes to interpret the previous data set, determine new instructions, then program those instructions), that's a lot of down time to work on various tasks.
The Mars rover CPUs, I imagine, spend the vast majority of their time idling.
However... the old stuff works. It has its faults and flaws, sure, but they're extremely well known and documented. You can work around them. You have the old grognards that have been kicking around since Apollo who know every damn thing about them. They're risky, sure, but it's a managed, controlled, limited and understood risk. But new processors are *new*. You lose that element of certainty, and the CPU is the heart of a probe. You lose it, you're fucked.
You're trusting the mission, a mission that costs billions of bucks, to a new, untested device that hasn't been field tested, hasn't got that certainty, and *you just don't need*.
Re: (Score:2)
That's just it - this sort of computing task cannot get by with 5 9's or 7 9's or a hundred 9's of reliability. It needs to be 100% reliable, which means that every potential hiccup, flaw, or design quirk is understood and documented to the nth degree, and thus can be worked around. It also means you can reliably simulate the hardware and throw all sorts of stress testing at it.
Re:Seems like a rationalization (Score:4, Insightful)
Re: (Score:2, Interesting)
Reminds me of about 10 years ago when I was working at Motorola on cell phone base stations. We switched from VxWorks to Linux and got... nothing. No performance gains, no reliability gains. Just a free OS instead of something we had to spend money licensing.
Of course, all the extra time spent switching and testing certainly cost a lot of money in man hours.
Re: (Score:2)
Re: (Score:2)
Yeah, I agree, except that wasn't really how his argument went... and yes, old stuff works. But new stuff works too (also, "new" here could be 5 years old). Anyway, I'm not really (or at least not overly) questioning their rationale. I've just seen too many programs where the same people have been there forever and it's easier to keep doing the same thing than to try something new. Again, hopefully that's not the case at NASA, but it's sure as hell the case at the Pentagon.
On the other hand, NASA really doesn't have the budget to spend working up for something new either. A processor switch means new simulators, new architectures etc. I imagine for a space probe - i.e. something you can't get at ever if it breaks down - then you go with the processor you have when you start designing it, and you pick the most reliable thing you can.
Re: (Score:2)
How many hours in space had the new stuff logged when the design of Curiosity was completed?
Re: (Score:3, Informative)
Newer missions collect too much data to transmit everything back to Earth. They typically need to do local processing of, for example, images and other data. There are also AI aspects: for the ExoMars rover (made by Europe), the onboard computer will have a virtual scientist embedded. This virtual scientist looks at the camera pictures, decides if something is worth an extra look, and may order the rover to carry out opportunistic science. I am not sure whether this is the case with Curiosity, but I coul
Re: (Score:3)
Not for nothing but I'm about a step away from being a hippie and I've served the government faithfully for many years. I work with some of the best and brightest and if you weren't able to cut it there, the fact that you're incredibly negative and seem like a jerk would likely only be a f
Re:Seems like a rationalization (Score:4, Informative)
Look, that guy ("Required Snark") might have been an asshole, but you didn't really acquit yourself well either in your original post. I cofounded and work for a real-time telemetry contractor. We use Android, but the Linux kernel isn't built to handle real-time applications reliably. There are too many things to handle in terms of time-safe task-switching, execution, multi-processing, and internal consistency for it to be a good RTOS. So keeping that in mind, I had to implement a real-time environment in userspace that uses root and some native code in order to collect data, send data, and operate hardware in a safe, timely manner. But this isn't the best solution, because I still have to deal with the fact that it's all just a frustrating abstraction sitting on top of a kernel that isn't at all concerned with what I'm actually trying to do, despite my best efforts to single-handedly make the necessary changes.
Your "newer processors" bit is also completely off the mark. Radiation-hardened processors lag generations behind owing to the need for extensive redesign and testing. Complicating this picture is the fact that even then, they still have varying levels of reliability and power efficiency. You don't want a processor that has a microcode architecture that makes your targeted code difficult to semantically evaluate and verify. You don't want (or need) a recent processor that hasn't had extensive real-world user testing. You want a processor in the goldilocks zone, one that you've worked with before and has a community behind it.
Keeping that all in mind, they chose a good processor, and already had an OS largely built for it based on previous missions with earlier versions of the same processor.
Re:Seems like a rationalization (Score:5, Funny)
Pedestrian crossing?
You rang?
Re: (Score:2)
TFA explains it: "VxWorks has to react immediately in order to survive while exploring Mars' surface."
It isn't hard to imagine why this would be the case. If a sensor suddenly reports a fault you might want to react extremely quickly to prevent the rover being damaged. Say a wheel jams or something like that. Since the rover can't be repaired a great deal of caution is necessary.
Re: (Score:3, Informative)
And that's where you (and most people) are mistaken.
An RTOS is not an OS that acts "quickly"; it's an OS which provides a 100% guarantee that a task will be executed within a definite time frame, whether that needs to be one microsecond or one hour, and which provides guarantees if the task cannot be completed in that time frame. A job neither Windows nor any flavor of Linux can achieve.
Re: (Score:2)
Except RTlinux.
http://en.wikipedia.org/wiki/RTLinux [wikipedia.org]
Re: (Score:3)
First of all, there's a typo in TFA.
They state the chip is a "RAD760" but they link to the RAD750 wikipedia page.
Do x86 processors not process images precisely enough? I get the idea of being hardened to radiation but it was my understanding we have newer processors that fit the bill on this.
The problem with x86 technology is that it has gotten too advanced.
The chips have become so dense that radiation hardening is much much more difficult than it used to be.
Increased difficulty = increased expense
Further, I don't think you appreciate the specs of that old PowerPC chip.
Its tolerance of 1 megarad of radiation exposure is a lot.
You literally get what you pay for with this chip, ranging
Ever seen a time table for a space mission? (Score:3)
They start planning these years and years ahead. It is not unusual to have decided on a hardware platform five years before launch. Since there's a lot at stake for these bigger missions to succeed, they usually don't take risks and put stuff up there that hasn't proven itself. Maybe some evolution like a higher clock rate or more memory or something like that, but a new processor architecture gets tried on other things that have redundancy, lower cost or less exposure, and preferably a combination
Re:Seems like a rationalization (Score:5, Informative)
I get the idea of being hardened to radiation but it was my understanding we have newer processors that fit the bill on this.
Radiation-hardened processors are hard to get. For one thing, they're export-controlled, so if you make them in the US, you can't sell many. Atmel makes a rad-hard SPARC CPU, and they've sold 3000 of them. Nobody seems to have built a modern x86 design or even an ARM in a rad-hard technology.
There's a basic conflict between small gate size and radiation hardness. The smaller the transistors, the more likely a stray particle can damage or switch them. So the latest small geometries aren't as suitable. Also, the more radiation-hard processes, like Silicon on Sapphire, aren't used much for high-volume products.
As a result, rad-hard parts are an expensive niche product. It's not inherently expensive to make them, but the volume is so small that the cost per part is high.
Re: (Score:3)
Shielding is heavy and expensive to launch (and to land softly). And for every extra mm of lead shielding you add, there's a more energetic photon just waiting to flip a bit. It ends up being cheaper to make radiation-hardened electronics than to accommodate the extra shielding.
Re: (Score:2)
The RTOS may not be needed for image processing, but I'll bet it's handy when driving, or running other mechanical aspects.
And once you have an RTOS for those tasks, it'd be silly to add another OS for the non-time-critical tasks.
Re: (Score:2)
Re: (Score:2)
The AH-64D uses 486 processors.
You didn't even get the architecture right, much less the processor.
Re: (Score:2)
Re: (Score:2)
Then as an engineer you understand why it's super stupid to have it all run off of 1 processor.
Guidance and maneuvering is one processor/system, the science package another, imaging another, etc... When you can't get to it to press the reset button, you don't make the dumb mistakes done on consumer hardware like the automotive industry does.
Example: GM having 90% of the car run on the BCM, and Honda running the WHOLE car, including the engine, off of a single ECM. My AC quit working because of a faulty sensor shorting out the IO port on the ECM. only fix is to replace the WHOLE ECM for the car at $2200.00
Re: (Score:2)
Re: (Score:2)
Or maybe by mid level managers?
Hey, if this one component goes bad, (A) they MUST fix it because it controls so much, and (B) we make a boatload of money replacing it. Therefore, it's a great idea. Engineer promoted. Everyone happy.
Oh, wait. Not everyone?
Re: (Score:2)
Example: GM and having 90% of the car run on the BCM, and Honda running the WHOLE car including engine off of the single ECM. My AC quit working because of a faulty sensor shorting out the IO port on the ECM. only fix is to replace the WHOLE ECM for the car at $2200.00
That kind of design is only done by really really dumb engineers.
At one point I agreed with that sentiment. Hanlon's razor says don't assume malice when stupidity will explain, but mcgrew's razor says don't assume stupidity when greedy self-interest will explain.
"earlier Mars mission" == MER-A Spirit (Score:2, Interesting)
''recalled the emergency call he got when an earlier Mars mission hit a software snag after liftoff."
From TFA:
Back when the Spirit rover landed on Mars in 2004, it experienced file system problems. I got a call on landing day while I was in Southern California. I fired up my laptop and worked with three groups who were dealing with a variety of time zones: California, Japan and Mars. Since I had a RAD6000 system on my desk running simulations, by the end of the first week we figured it out and were able to fix it.
Re:"earlier Mars mission" == MER-A Spirit (Score:5, Informative)
With my long experience with VxWorks this doesn't surprise me. VxWorks is not the most robust RTOS. Think of it as a multi-tasking MS-DOS. The version they used has no memory protection between processes and I have found numerous areas of VxWorks to be badly implemented or downright buggy. Up through version 5.3 the malloc() implementation was absolutely horrid and suffered from severe fragmentation and performance problems. On the platform I was working with I replaced the VxWorks implementation with Doug Lea's implementation (which glibc was based off of) and our startup time dropped from an hour to 3 minutes. I was also able to easily add instrumentation so we could quickly find memory leaks or heap corruption in the field, something not possible with Wind River's implementation. After reading about the problems with the filesystem I looked at the Wind River filesystem code. It was rather ugly. They map FAT on top of flash memory (not the best choice) and the corner cases were not well handled (like a full filesystem).
Similarly, their TCP/IP stack sucked as well. If you can drop to the T-shell through a security exploit, you totally own the box (cf. Huawei's poor security record).
VxWorks is fine for simple applications, but for very complex applications it sucks. The 5.x series, at least, does not clean up after a task if it crashes, because it does not keep track of what resources a task uses. A task is basically just a thread of execution, and all memory is a shared global pool. At the time it did have one useful feature that was lacking in Linux: priority inheritance mutexes. These are a requirement for proper real-time performance and I believe are now included in Linux.
Re:"earlier Mars mission" == MER-A Spirit (Score:5, Interesting)
Up through version 5.3 the malloc() implementation was absolutely horrid and suffered from severe fragmentation and performance problems.
I talked to one of Curiosity's software engineers the day it landed... he mentioned that one of their coding rules was: no malloc() allowed.
Re: (Score:2)
No malloc()? Interesting. I worked on a project at NG and we had the same policy: everything was on the stack or global. We had the chance to run with MontaVista embedded Linux, but someone higher up decided to go with "tried and true" VxWorks. I agree with a poster above about re-training costs and all that adding up... but if embedded Linux became standard at big companies, I don't think it would take too long to make up the costs of re-training and all the other stuff that goes with it.
Re: (Score:2)
That is a good policy if you can do it, but in this case it was impossible. We had to use some 3rd-party software which used malloc and realloc extensively. To make matters worse, for a long time we could only get obfuscated code to support the network processor we were using, meaning that it was impossible to make changes to it. We also had to make use of it because of the dynamic nature of the software. In our case it really wasn't feasible to avoid malloc. Replacing Wind River's malloc had some huge advant
Re: (Score:3, Funny)
Malloc is non-deterministic. A request for a pointer to contiguous free bytes may need to search a fragmented memory map to complete. The duration of the search depends on the algorithm and the amount of fragmentation relative to the size of the request, and it is worse if the allocator must rearrange memory to accommodate the request. Thus, use of malloc() is typically avoided in time-critical code on a real-time operating system.
Re: (Score:2)
You were going okay until here. You can't rearrange memory: malloc returns pointers, and there isn't any callback to ask for a pointer back so the block can be moved to another location.
Byte-compiled languages like Java can rearrange memory, but there you call new, not malloc, so I know you weren't talking about them. Garbage collection is a much bigger problem, especially if you think about mixing Java and real-time operations. C/C++ in realtime means fol
Re: (Score:2)
You probably shouldn't be using malloc() on an embedded system like that anyway. Statically allocate everything. That way you know exactly how much memory will be consumed at any time and can budget appropriately. It also reduces the chance of having a bug malloc() all your memory or running out of stack space.
VxWorks claims to have memory protection; chances are it is the CPU they are using that lacks an MMU to support it.
Re: (Score:1)
If your objective requires an RTOS, you're probably not going to malloc(). There are edge cases, but we've successfully banished them. We don't use VxWorks, thank god, but we do use a real-memory machine instead of a virtual-memory machine. Getting young programmers to understand that is challenging, and getting CS grads, of all fucking people, to program for a real-memory machine is just fucking impossible. We make them managers instead.
Re: (Score:2)
Re:I'm glad I don't program for NASA (Score:5, Funny)
VxWorks has a nice track record in space (Score:2, Interesting)
Re: (Score:1)
Hopefully it worked better in space than it did in WRT54G's.
My PVR (Score:2)
pshaw, we use RTEMS (Score:3, Informative)
the other big player in space RTOS: RTEMS.
Free, open source, rtems.org.
Has all the same problems as VxWorks: no process memory isolation (because space flight hardware usually doesn't have the hardware to support it)...
One thing that VxWorks has that RTEMS doesn't, and I wish it did, was dynamic loading and linking of applications. You're basically back in 1960s monolithic image days, not even with overlay loaders.
Re: (Score:2)
It was the go-to system for exploration satellites for years.
I believe Voyager is running it still.
Re: (Score:2)
Re: (Score:3, Interesting)
FORTH is great. From about two dozen core instructions an entire operating environment can be built. Unfortunately, FORTH takes in-depth knowledge of not only the hardware but also a firm grasp of the scope of the tasks that need to be performed. Most programmers today cannot handle FORTH: imagine building your own TCP/IP stack, filesystem, and RTOS operating environment from scratch. That talent is found only in a dying breed of programmers, literally.
For a robust 100% reliable radiation-hardened space environ
The same VxWorks.... (Score:2, Flamebait)
that is (or was?) in newer Linksys routers, which are much less stable than the older Linux-based versions...
http://en.wikipedia.org/wiki/VxWorks#Networking_and_communication_components [wikipedia.org]
Re: (Score:1)
that's actually the fault of the device drivers and glue code.
Unfortunately, VxWorks has a small, well-understood and deterministic core despite various suck points. That's important in many areas of control, but it has so many cons. The buggy POS crap all over the world running VxWorks is testament to that. It's like once you consider the human factor in commanding the beast, why bother, because it's just going to be less reliable in the end. AFAIK you get to pay royalties too... This is truly one of the
Re: (Score:2)
As a slight aside, I always wondered why Intel bought Wind River. One of the issues we have is that finding someone who knows the OS well is difficult, because there is no way of getting exposure to it unless you have a lot of money.
I can't help thinking Intel has missed a trick here. With the rise of the embedded hobbyist through things like the Raspberry Pi, a new generation of engineers is learning, but their experience is based around ARM and Linux, further marginalising Intel in the embedded world, which is likely to be the big growth area in the future.
If Intel was smart they would create their own hobbyist board based around an embedded Core Duo or the like and provide a free version of VxWorks to run on it. It doesn't need some of the high-end features, but it would provide early exposure to the OS as well as raising the profile of Intel in the embedded space.
Just a thought....
Whoops posted as AC
Re: (Score:1)
As a slight aside, I always wondered why Intel bought Wind River. One of the issues we have is that finding someone who knows the OS well is difficult, because there is no way of getting exposure to it unless you have a lot of money.
Intel has loads of cash and a near monopoly on processors in most major market segments. They need somewhere to grow, and PCs and servers aren't it. The big segments they are weak in are mobile (or, more generally, low power) and networking.
VxWorks is very common in networking equipment and in embedded (low power / low processing capability) systems.
I can see where WindRiver looked attractive to Intel. Of course, the risk is that they scare traditional VxWorks customers off by focusing WindRiver too he
Re: (Score:2)
Owning VxWorks also gives Intel a way to get into military designs. These are high margin, low-volume parts just like server CPUs, so it's a lucrative market for Intel to get into.
That said, they've only made half-assed commitments, offering just 7 years availability of embedded processors [intel.com] (most places do 10+ years). That works for simpler projects, but the bigger government designs may require a CPU upgrade before the finished product even ships!
And yes in the Windows desktop world it's no big deal to up
Wind River? (Score:2)
Didn't they use to do Linux distros back in the day?
Yes, I know, off topic, but just asking...
Re: (Score:1)
Yup. Wind River Linux. I remember using it and thinking what a /big/ pile of shit it was. It was slated as an embedded target, but the smallest they could shrink it was a gig or so. Their support had no idea how to shrink it, and I only shaved it down by a couple of hundred meg.
Suffice it to say, for what they were charging we were able to build our own glibc-based distro with newer, more stable components and cram it in under 200M. There was even change left over.
It demonstrates how inefficient desktop software is (Score:4, Insightful)
An old PowerPC can fly a spaceship to Mars, execute a difficult landing, and now semi-autonomously drive a robot across the surface of a planet 30 million miles away, yet it's not up to the job of writing documents using the latest word processors. What's wrong with this picture?
Re: (Score:1)
Re: (Score:2)
I would imagine that landing a spaceship takes a lot more CPU than reformatting some text and drawing a blinking cursor.
Re: (Score:2)
"Landing a space ship can be done using analogue electronics as a control system in the 60s"
I don't remember any system in the 60s where a skycrane had to hover in place, lower a lander down, release it then fly off. Or navigate using image recognition. If you know otherwise fill me in.
"more intensive than reformatting text?"
Oh please. Reformatting text algorithms were running on 8 bit home computers in the 70s!
Re: (Score:2)
Re: (Score:2)
Your desktop word processing software also didn't have a licensing cost in the hundreds of thousands of dollars
It would if you were the only customer and it was only going to run on one computer. Do you have any idea how many programmers MS has and what it costs for salaries and other overhead?
Re: (Score:2)