
Debugging The Spirit Rover

icebike writes "EE Times has a story on how the Mars rover was essentially reprogrammed from millions of miles away. 'How do you diagnose an embedded system that has rendered itself unobservable? That was the riddle a Jet Propulsion Laboratory team had to solve when the Mars rover Spirit capped a successful landing on the Martian surface with a sequence of stunning images and then, darkness.' The outcome strikes me as an extremely Lucky Hack, and the rover could just as likely have been lost forever. Are there lessons we can use here on the third rock for recovering our messed-up machines, which we manage from afar via ssh?"
  • Space Technology (Score:5, Insightful)

    by superpulpsicle ( 533373 ) on Sunday February 22, 2004 @01:49AM (#8354131)
    That's the thing that amazes me. Any technology having to do with space seems that much more advanced.

    Here on earth we can't even build cars that require no maintenance and last more than 10 years.
  • by Anonymous Coward on Sunday February 22, 2004 @01:53AM (#8354150)
    Yeah, offer to pay $800 million for a custom-built car, and you can bet it will last 90 days too.
  • by prakslash ( 681585 ) on Sunday February 22, 2004 @01:56AM (#8354166)
    Unless you are a layperson, I don't understand what the big deal is.

    If it was the hardware that got fried and they miraculously fixed that, I would understand, but this was just a software glitch.

    I routinely reboot and reprogram machines in our data center 2000 miles away from me.

    As long as all hardware components are working and there is connectivity to the machine, it doesn't matter whether the machine is a few miles away or a million miles away.

  • The proper fix... (Score:3, Insightful)

    by Dan East ( 318230 ) on Sunday February 22, 2004 @01:57AM (#8354173) Journal
    ...would have been to fix the problem before the hardware left Earth. This "bug" (or more accurately, known limitation of the filesystem) should have been discovered here on Earth if the rover had been properly tested.

    The only real bug was the inability of the system to properly handle running out of file entries (or more specifically, consuming too much RAM as the number of file entries increased); see the sketch below. However, the software should never have stressed the filesystem to that degree in the first place.

    Dan East
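    A minimal sketch of that failure mode, with hypothetical names (not the actual flight code): an in-RAM table of file entries that grows without bound eventually exhausts memory, while a bounded table fails the write gracefully instead:

        #include <stdio.h>

        #define MAX_FILE_ENTRIES 4096    /* hypothetical cap */

        struct file_entry { char name[32]; unsigned size; };

        static struct file_entry table[MAX_FILE_ENTRIES];
        static unsigned n_entries = 0;

        /* Returns 0 on success, -1 when the table is full; the caller
         * must treat -1 as "filesystem full", not crash or reboot. */
        int add_file_entry(const char *name, unsigned size)
        {
            if (n_entries >= MAX_FILE_ENTRIES) {
                fprintf(stderr, "dir table full, rejecting %s\n", name);
                return -1;
            }
            snprintf(table[n_entries].name, sizeof table[n_entries].name,
                     "%s", name);
            table[n_entries].size = size;
            n_entries++;
            return 0;
        }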
  • by beeplet ( 735701 ) <beeplet@gmail.com> on Sunday February 22, 2004 @01:57AM (#8354174) Journal
    Actually any technology making it into space is more likely to be 10 years out of date... Getting anything certified for space is a long process. The technology in space isn't more advanced, just much better documented and well-understood.
  • Hindsight (Score:5, Insightful)

    by FTL ( 112112 ) * <slashdot@neil.frase[ ]ame ['r.n' in gap]> on Sunday February 22, 2004 @01:57AM (#8354175) Homepage
    The article (I know, I know, this is Slashdot) is really good. It contains everything that is missing from traditional media: the story, the background, technical details, and follow-through.

    Granted, mainstream media have to keep their coverage dumbed down if Joe Public is going to read it. But what really bugs me is the lack of follow-up. We hear about poorly understood events as they unfold, then never hear about them later, once they are completely understood.

    A recent example is the gangway between ship and shore at the QM2's drydock. It collapsed, killing lots of people, and an investigation was launched. Why did it collapse? At the time it wasn't known. I'm sure it's known now, but there's been absolutely no follow-up.

    This article about the rover is great not so much because of the level of detail but because it reports on an event with the benefit of hindsight.

  • by Anonymous Coward on Sunday February 22, 2004 @01:58AM (#8354179)
    No, they didn't use SSH. However, a lucky hacker would have to have access to a very large radio antenna, like the one atop a volcano in Hawaii.
  • by mcbridematt ( 544099 ) on Sunday February 22, 2004 @01:59AM (#8354184) Homepage Journal
    I don't think they would bother using anything to do with TCP. Anything you send, you'll have to wait nine minutes for. Just imagine the ping times:

    Pinging mars-rover with 32 bytes of data:
    request timed out
    request timed out
    request timed out
    64 bytes from mars-rover: icmp_seq=0 ttl=64 time=1080000ms :(

    If it has anything to do with current internet protocols, it would be UDP.
  • by Billly Gates ( 198444 ) on Sunday February 22, 2004 @02:02AM (#8354202) Journal
    The Japanese started that.

    They make a lot of money from loyal customers. But I admit my 13-year-old '91 Honda Civic with 140k miles is getting on my nerves with repair costs. Would a '91 Ford Escort still be running today? I think not.

    I will buy only Toyotas and Hondas for that reason.

    It amazes me that consumers are too stupid to read Consumer Reports and buy cars on looks instead. Repair costs for things like Cadillacs and BMWs make the TCO anything but cheap! Yes, consumer products have a TCO too, and we, not just businesses, should look at that as well.
  • by Mr2cents ( 323101 ) on Sunday February 22, 2004 @02:05AM (#8354213)
    What filesystem is used? Is wear leveling being used? The directory structure is apparently stored in RAM during the day (why else would it use so much RAM?), which is a good thing for reducing wear on the flash. But what's the write-cycle limit of the flash chips? When will that number be reached?
  • by randyest ( 589159 ) on Sunday February 22, 2004 @02:09AM (#8354233) Homepage
    If you RTFA you will realize that I'm not lying in the least when I say that, effectively, they ran out of flash-based "disk" space! They forgot to delete old files when updating the programs in the flash memory (which is mounted like a filesystem, or hard disk), and the OS was failing because it wanted to use that space. So it rebooted, and still had insufficient disk space, and rebooted again . . . lather rinse repeat. There was no signal because it was stuck in a reboot loop because they ran out of disk. Wow.

    They fixed it by telling it to boot without using the flash (safe mode :) ), then used low-level (direct access) flash utilities to remove the old files. Reboot, mount, disk check / corruption repair, voila it works again.

    We have a big 1TB NetApp server where I work, and we have so much disk space that people get lazy and don't delete files or archive old projects; then they get really confused when jobs fail, not thinking of disk space until they've checked everything else. But it happens, and it's usually surprisingly hard to debug (they check a lot of other things first, sometimes even upgrading tool versions!). It's really kinda funny, in an expensive and mildly embarrassing way, that Spirit had the same problem. (One standard defense against that kind of reboot loop is sketched below.)
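    A minimal sketch of that defense, all names hypothetical: keep a boot-failure counter somewhere that survives reboots, and stop mounting the flash after a few consecutive failures:

        #define MAX_BAD_BOOTS 3

        /* Hypothetical counter in memory that survives warm reboots
         * (battery-backed RAM, EEPROM, or similar). */
        extern unsigned boot_fail_count;
        extern int mount_flash_fs(void);    /* hypothetical: 0 on success */

        void bring_up_storage(void)
        {
            if (boot_fail_count >= MAX_BAD_BOOTS) {
                /* Safe mode: run from RAM, leave the flash unmounted so
                 * ground control can repair it with low-level tools. */
                return;
            }
            boot_fail_count++;              /* assume this boot will fail... */
            if (mount_flash_fs() == 0)
                boot_fail_count = 0;        /* ...and clear the count on success */
        }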
  • Lucky Hack? (Score:5, Insightful)

    by electromaggot ( 597134 ) on Sunday February 22, 2004 @02:11AM (#8354243)
    "The outcome strikes me as an extremely Lucky Hack..."

    The outcome does not strike me as a "Lucky Hack." They made the system flexible, that flexibility got them into some trouble, and it's also what got them out of it. Anyone else agree?
  • by kfg ( 145172 ) on Sunday February 22, 2004 @02:12AM (#8354248)
    Ten years out of date, but ten years more reliable for the effort.

    Sort of like Debian.

    Cutting edge ain't always what it's cracked up to be.

    KFG
  • by dellis78741 ( 745139 ) on Sunday February 22, 2004 @02:15AM (#8354262)
    The tricky part here was that the 'hardware connectivity' depended on 'software functionality'. Try maintaining a machine a block away if the communication link requires both ends to point a satellite dish at an orbiting satellite and that pointing relies on software functioning correctly.
  • by FTL ( 112112 ) * <slashdot@neil.frase[ ]ame ['r.n' in gap]> on Sunday February 22, 2004 @02:19AM (#8354280) Homepage
    I routinely reboot and reprogram machines in our data center 2000 miles away from me.

    As long as all hardware components are working and there is connectivity to the machine, it doesn't matter whether the machine is a few miles away or a million miles away.

    There are some fundamental differences, my friend:

    • If you screw up and leave the computer unbootable, you get local tech support to check the console and fix it. NASA, on the other hand, doesn't have tech support on Mars.
    • If you hose the server, it means a day's worth of reinstallation. If NASA hoses their rover, they've just lost $300,000,000.
    • You can poke around the system and see what's wrong. NASA has a harder time, since their round-trip lag is about 20 minutes.
    • You can download core dumps; NASA was operating on the low-bandwidth antenna, which meant looking at file sizes, time stamps, and selected lines, but not full file contents.
    • You have your boss breathing down your neck (hoping for success); NASA has the international media breathing down its neck (hoping for a disaster).
  • by updog ( 608318 ) on Sunday February 22, 2004 @02:21AM (#8354287) Homepage
    There is a big difference between this and your example of forcing a controlled reboot of your remote machines.

    Spirit was in a constant reboot cycle, and the fact that they could even communicate with it long enough to bypass the problem was an accomplishment (and lucky).

    It would be more like your remote data-center machine suddenly going offline with no idea why, being unable to ssh to it, and fixing it by running through potential scenarios, figuring out that the problem could be due to mounting a certain partition, and then discovering that there's an exploit in ICMP that lets you hack the kernel so it doesn't mount that partition.

  • Lucky Hack? (Score:5, Insightful)

    by SuperKendall ( 25149 ) * on Sunday February 22, 2004 @02:21AM (#8354291)
    Your post is the only thing that strikes me as a "Lucky Hack" here. They included in the design the ability to remotely disable booting from flash and to upload new boot images; in what way is that a "hack"? All of this is just foresight in design, including as many recovery modes as they could.

    Basically, they rebooted from a recovery image (sent via radio) and then proceeded to do low-level fixes on the flash memory and then a chkdsk. If I do something similar via a recovery disk or CD, I don't get a lot of people telling me it was a "Lucky Hack" that I could boot off of CD!!!
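    In miniature, and with invented names, the commanded half of that recovery might be no more magical than a flag consulted at boot:

        /* Hypothetical ground-command opcodes arriving over the radio link. */
        enum cmd { CMD_NOOP, CMD_SKIP_FLASH_MOUNT, CMD_NORMAL_BOOT };

        static int skip_flash_mount;    /* consulted by the next boot */

        void handle_ground_command(enum cmd c)
        {
            switch (c) {
            case CMD_SKIP_FLASH_MOUNT: skip_flash_mount = 1; break;
            case CMD_NORMAL_BOOT:      skip_flash_mount = 0; break;
            default:                   break;    /* ignore unknown opcodes */
            }
        }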
  • by Anonymous Coward on Sunday February 22, 2004 @02:22AM (#8354295)
    One lesson we can learn from the Spirit problems that really and truly does directly apply on earth:

    In case of a worst-case scenario, always make sure you have physical access to the machines.
  • by amRadioHed ( 463061 ) on Sunday February 22, 2004 @02:24AM (#8354303)
    Are you forgetting that the latency when communicating with Mars averages around 1,200,000 ms? I'd say that when you have to wait 20 minutes to see the result of anything you do, you're going to have to substantially change your debugging strategy.
  • by Anonymous Coward on Sunday February 22, 2004 @02:41AM (#8354360)
    .. namely, "Do Not Use VxWorks". Use something stable instead. eCos [planetmirror.com] comes to mind. So does everyone's favorite OS these days [kernel.org], which has RTOS support. Having been a frustrated VxWorks user in the past, I'd no more entrust my mission-critical services to it than I would to Microsoft. -- TTK
  • by NymblZ ( 754889 ) on Sunday February 22, 2004 @03:00AM (#8354408) Homepage Journal
    As long as all hardware components are working and there is connectivity to the machine, it doesn't matter whether the machine is a few miles away or a million miles away.

    That's just it - consider the stress those rovers are enduring or might encounter: subzero temperatures down to -200F, out-of-the-blue (red?) sandstorms, gamma radiation, and who knows what else out there that could suddenly fsck with the systems or scramble internal data? Your average Dell rack will never have to deal with any of those things.
  • by stevenzenith ( 727403 ) on Sunday February 22, 2004 @03:02AM (#8354415) Homepage Journal
    I can't believe that this is the state of the art at NASA - no wonder Shuttles fall from the sky.
  • Yes, see my post (Score:3, Insightful)

    by SuperKendall ( 25149 ) * on Sunday February 22, 2004 @03:06AM (#8354420)
    I had pretty much the same post - the originator of the story confuses luck with skill, a mistake I find very annoying and committed all too frequently. I'll fully admit when I've been lucky, but I also want recognition for foresight when I've had some! NASA deserves at least that much respect.
  • by nsayer ( 86181 ) <`moc.ufk' `ta' `reyasn'> on Sunday February 22, 2004 @03:17AM (#8354456) Homepage
    Before doing something risky, type this:

    sleep 600 && reboot &

    Now if your risky maneuver makes the ssh session unusable, just wait ten minutes for the machine to reboot.

    This is great for fiddling with firewalls by remote control... through the firewall. :-)

    Oh... You say you're not using a POSIX-like system? That's not supported. Sorry. :-)
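    (One refinement: if the risky change works, cancel the pending reboot before the ten minutes are up, e.g. by killing the backgrounded sleep with kill %1.)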
  • Re:Oh, sure... (Score:5, Insightful)

    by JWSmythe ( 446288 ) * <jwsmythe@nospam.jwsmythe.com> on Sunday February 22, 2004 @03:25AM (#8354470) Homepage Journal
    It sounded like the same type of questions non-technical bosses always ask about technical matters.

    "We're ordering this brand new hardware that you've never tested before. Can you guarantee it will never crash?"

    "Will this database server handle the load of our brand new project?" (without an accurate growth estimate)

    "A server 2000 miles away just went down. What happened?" (no ping, no nothing) Hmmm.. Power/NIC/CPU/CPU fan/hard disks?

    It really sounds like they did some decent advance planning on those probes, but from other stories I read, they were shooting for 90 days of reliability, which in itself was a hard target. What if it turns the antenna the wrong way and loses connectivity? What if it gets hit by lightning? What if it falls in a hole? (go Beagle!)

    Sure, relate this to your web server colocated somewhere you're not. Cross your fingers, hold your breath, and hope there aren't a few fatal system failures, or a bit of human error. I've been responsible for a bit of that in the past, but at least my equipment wasn't a few million miles away.

  • by enosys ( 705759 ) on Sunday February 22, 2004 @03:34AM (#8354496) Homepage
    From the article:

    Using the low-level commands, about a thousand files and their directories -- the leftovers from the initial launch load -- were removed.

    I think that means they deleted the useless stuff they wanted to delete anyway but didn't get to delete before the crash. I also remember news about science data from before the crash being received after they got the rover working again.

    As for how critical it is: well, yeah, it seems the rover didn't need the contents of the flash file system. The operating system and other software were in the same flash memory, but I assume that any sane designer would put in some hardware write-protect interlock that's not easy to defeat accidentally.

  • by KewlPC ( 245768 ) on Sunday February 22, 2004 @03:47AM (#8354518) Homepage Journal
    You realize that missions to Mars can only be launched once every two years, right? If they miss their launch window, they've got to wait two years before they can launch again.

    You also realize that NASA did do a test mission, right? They built a test rover and put it out in a desert somewhere. They used the mission to test the hardware, test the software, and to help train the team.
  • by cookiepus ( 154655 ) on Sunday February 22, 2004 @03:47AM (#8354520) Homepage
    I'd say that when you have to wait 20 minutes to see the result of anything you do, you're going to have to substantially change your debugging strategy.

    Please! Back in the day people would write programs on paper, mail them in an envelope to a computing center somewhere, and get results weeks later.

    THAT was pressure not to fuck up.

  • What we can learn: (Score:5, Insightful)

    by sakusha ( 441986 ) on Sunday February 22, 2004 @04:14AM (#8354579)
    It appears that we still haven't learned the biggest lesson of all. I still remember, back around 1970, there was a big sign on the wall next to the IBM 370s at my university, written on a primitive pen plotter. It said:

    Computers never make mistakes, they do exactly what humans tell them to do. All "computer errors" are human errors.

  • by QuadPro ( 16532 ) on Sunday February 22, 2004 @04:15AM (#8354585) Homepage
    And what are the specifications of your network connection? Those *are* measured in base 10: 1 megabit/second = 1000 kilobits/second = 1,000,000 bits/second.

    If there was any revisionist crap here, it was defining M, G, and so on to mean base 2 (1024) in the first place. Those are standardized units!
  • by jelle ( 14827 ) on Sunday February 22, 2004 @04:38AM (#8354625) Homepage
    But at NASA, you have a local replica of the whole system sitting in the lab next door; you're on a team of professionals who, if necessary, can calculate the most probable effect of particular radiation hitting your system at a given angle, or tell you the power usage and temperature effect of the system components for a particular subroutine, or dream correct low-level assembly for the platform under study; plus, the vendor has a couple of on-line support guys sitting in chairs in the corner of your office, waiting for your activation command (which is the word "huh?")...

  • by Anonymous Coward on Sunday February 22, 2004 @05:06AM (#8354703)
    UDP would be even worse. Interplanetary transmission is difficult, so some packet loss is likely. Under UDP the packets would just disappear - it's an unreliable protocol. TCP would of course be too inefficient. I'd expect them to use a custom protocol designed for the specific application, since their situation is totally unlike anything you'll face on Earth.
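    A toy sketch of why such a custom protocol looks different from TCP (hypothetical, not NASA's actual scheme): with a 20-minute round trip you can't acknowledge packets one at a time, so you number every frame, send the whole batch, and have the receiver report back only the gaps for retransmission:

        #include <stddef.h>
        #include <stdint.h>

        #define BATCH 256                   /* hypothetical frames per pass */

        static uint8_t got[BATCH];          /* receiver: which frames arrived */

        void on_frame(uint32_t seq)
        {
            if (seq < BATCH)
                got[seq] = 1;
        }

        /* After the batch, list the missing frame numbers; the sender
         * retransmits only these on the next pass, many minutes later. */
        size_t build_nak_list(uint32_t *missing, size_t max)
        {
            size_t n = 0;
            for (uint32_t s = 0; s < BATCH && n < max; s++)
                if (!got[s])
                    missing[n++] = s;
            return n;
        }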
  • by Matrix9180 ( 734303 ) * <matrix9180@@@gmail...com> on Sunday February 22, 2004 @05:28AM (#8354740)
    Did you RTFA? The rover was rebooting over and over because it was using up all of its memory... then eventually the batteries were low, so it went into a sort of 'safe mode' where only the absolute minimum was loaded, and that's when NASA was able to communicate with it again...

    It was nothing like what you described, just a VERY well-designed system (though it would have been somewhat better had the system been able to go straight to "safe mode" after the initial critical error of running out of memory).

    Did the people with mod points RTFA? Score 5 Insightful?

    And no, I'm not new to /. ;)
  • by KewlPC ( 245768 ) on Sunday February 22, 2004 @05:45AM (#8354761) Homepage Journal
    Wind River may give JPL large discounts, but I doubt that's the only reason VxWorks is running on the MERs.

    Years ago, when JPL was designing the Mars Pathfinder mission, they asked Wind River to do an "affordable" port of VxWorks to the RAD6000 (a radiation-hardened RS/6000), and Wind River agreed. Since the computers on the two MERs are very similar to the computer on the Mars Pathfinder lander, it makes sense that they'd use the same OS they used on the MPF lander.

    I would think the fact that JPL knows VxWorks very well by now would be a major factor in deciding to use VxWorks for the MERs.
  • by Anonymous Coward on Sunday February 22, 2004 @06:15AM (#8354805)
    I believe the software was in fact working exactly as expected here, so no amount of formal verification would have helped.

    Perhaps with hindsight it might have worked a little differently, but you can't foresee every combination of events; in fact, the software worked flawlessly, allowing them to recover in exactly the way they designed it.

    If this had been sitting on a desk next to them, they probably would have sorted it out in 20 minutes, but obviously they needed to do this in a methodical and careful way, to verify what they thought the problem was, over a very slow and very delayed link that was itself being affected by the problem. Which is no doubt why it took a few days.

  • by Mal-2 ( 675116 ) on Sunday February 22, 2004 @07:01AM (#8354866) Homepage Journal
    Could this not have been said more succinctly with a simple quote? Namely:

    "What we have here, is failure to communicate."

    Mal-2
  • Hmmmm (Score:4, Insightful)

    by ziggy_zero ( 462010 ) on Sunday February 22, 2004 @07:07AM (#8354873)
    "The irony of it was that the operating system was doing exactly what we'd told it to do"

    Funny, that's how it was explained to me by my computer science teacher my freshman year in high school. He said, "The problem with computers is that they do exactly what we tell them to."
  • by roskakori ( 447739 ) on Sunday February 22, 2004 @09:23AM (#8355092)
    Maybe the guy really is from PR and doesn't know how to carefully phrase sentences targeted at a technical audience, but these also caught my eye:
    "It was recognized just after [the June 2003] launch that there were some serious shortcomings in the code that had been put into the launch load of software," said JPL data management engineer Roger Klemm.
    I know this is common within the software industry, but when it happens on a project like this, it looks like plain incompetence to me.
    Klemm said that with the leftover directories and their files removed, the system is now functioning well. But just in case, the team is working on an exception-handler routine that will more gracefully recover from an allocation failure.

    Allocation errors are the easiest to predict. Even if you don't handle them gracefully (which can often be next to impossible), most of the time you can log them; a sketch follows below. Of course, a reliable, redundant log facility is one of the most crucial components of such a system...

    Writing this from my armchair, of course, I can't really judge their competence or claim I could have done better. Still, the article makes me suspicious.
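    The point in miniature, assuming a hypothetical log_event() facility: the failure is trivial to detect at the call site, and even when graceful recovery is impossible, a log entry beats rebooting blind:

        #include <stdlib.h>

        /* Hypothetical persistent logging facility. */
        extern void log_event(const char *subsys, const char *msg);

        void *checked_malloc(size_t n)
        {
            void *p = malloc(n);
            if (p == NULL)
                log_event("mem", "allocation failure");  /* leave a trace */
            return p;    /* callers must still handle NULL */
        }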
  • by thrill12 ( 711899 ) on Sunday February 22, 2004 @10:07AM (#8355174) Journal
    Seriously, from a developer's viewpoint, that is all wrong.
    I have worked on projects with simply so much logging going on that you couldn't tell head from tail anymore. When a problem arrived, scanning the logfiles proved very cumbersome indeed. Every developer had his own stuff logged, which sometimes proved interesting and sometimes proved utter crap (no one wants to know that variable XYZ was incremented 24943 times).

    You should develop a well-thought-out logging strategy that increases logging verbosity on a problem basis, not simply log everything that happens and hope you get some useful information.
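    A minimal sketch of that strategy, with invented names: one global threshold, quiet by default, raised at runtime only while a problem is being chased:

        #include <stdarg.h>
        #include <stdio.h>

        enum { LOG_ERROR, LOG_WARN, LOG_INFO, LOG_DEBUG };

        static int log_threshold = LOG_WARN;    /* quiet by default */

        void set_log_threshold(int level) { log_threshold = level; }

        void log_at(int level, const char *fmt, ...)
        {
            va_list ap;
            if (level > log_threshold)
                return;    /* "variable XYZ is now 24944" noise dies here */
            va_start(ap, fmt);
            vfprintf(stderr, fmt, ap);
            va_end(ap);
        }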
  • by Chemisor ( 97276 ) on Sunday February 22, 2004 @11:11AM (#8355423)
    > What on earth (or on Mars) could we possibly take away from this experience?

    Rule 3: Never ignore the return value from open.
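    In code, the rule costs two lines; a trivial sketch, but it is the whole point:

        #include <fcntl.h>
        #include <stdio.h>

        int open_or_complain(const char *path)
        {
            int fd = open(path, O_RDONLY);
            if (fd < 0)
                perror(path);    /* the line whose absence costs a few sols */
            return fd;           /* caller must check for -1, not assume success */
        }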
  • by Anonymous Coward on Sunday February 22, 2004 @11:57AM (#8355640)
    Wrong. That was a human error in designing the processor.
  • by roman_mir ( 125474 ) on Sunday February 22, 2004 @11:58AM (#8355647) Homepage Journal
    Really? What about external factors like a radiation spike that kills some of your hardware - will the computer correctly do what you tell it to do then?
  • by cetialphav ( 246516 ) on Sunday February 22, 2004 @01:13PM (#8356037)
    So it's clear that their on-the-ground testing didn't catch the first bug, despite the rigorous testing you described, which makes one wonder whether they really did such rigorous testing. The grandparent is right.

    But this was probably intentional. The flight to Mars is a long one, so there is plenty of time to test while the rover is in transit. Before launch, you need to make sure that the hardware works and is reliable. Since they can upload new versions of software, they can do much of the testing after the launch. This is one of the things that allowed them to hit aggressive launch windows.

    This looks like it was less a technical failure and more a communications failure. Other rover operations depended on the utilities running to clear up flash space. When that did not happen on time, the right people were not told, and so they assumed there would be more space available.
  • by DerekLyons ( 302214 ) <fairwater@@@gmail...com> on Sunday February 22, 2004 @02:06PM (#8356337) Homepage
    But at NASA, you have a local replica of the whole system sitting in the lab next door.
    A lab that resembles the rover on Mars less than you might think. You see, the lab unit is rebooted frequently, and equally frequently has its configuration reset to test one thing or another. The rover on Mars has been running for months. (This is the difference that led to the problems they are currently debugging.)
    you're on a team of professionals who, if necessary, can calculate the most probable effect of particular radiation hitting your system at a given angle, or tell you the power usage and temperature effect of the system components for a particular subroutine,
    Of course, none of those guys has access to or experience with a system that has been exposed to severe environments, operated for months at a time, etc. *except* the two irreplaceable examples sitting on the Martian surface. Nor are they *entirely* certain of the effects of those exposures and changes, *except* by examining the two irreplaceable examples sitting on the Martian surface.

    Certainly the JPL/NASA guys are smart, experienced with other probes, and have massive resources backing them up. But they also have some heavy odds against them.
