
Reformatting a Machine 125 Million Miles Away

An anonymous reader writes: NASA's Opportunity rover has been rolling around the surface of Mars for over 10 years. It's still performing scientific observations, but the mission team has been dealing with a problem: the rover keeps rebooting. It's happened a dozen times this month, and the process is a bit more involved than rebooting a typical computer. It takes a day or two to get back into operation every time. To try to fix this, the Opportunity team is planning a tricky operation: reformatting the flash memory from 125 million miles away. "Preparations include downloading to Earth all useful data remaining in the flash memory and switching the rover to an operating mode that does not use flash memory. Also, the team is restructuring the rover's communication sessions to use a slower data rate, which may add resilience in case of a reset during these preparations." The team suspects some of the flash memory cells are simply wearing out. The reformat operation is scheduled for some time in September.

  • by Psychotria ( 953670 ) on Saturday August 30, 2014 @12:25PM (#47791119)

    You're probably joking, but the OS is VxWorks.

  • Re:Remote management (Score:5, Informative)

    by ledow ( 319597 ) on Saturday August 30, 2014 @12:31PM (#47791151) Homepage

    Not really...

    The chances are that "reformat" isn't what we think it is and includes one or more of:

    1) Rewriting cells and allowing wear-levelling and sector-replacement to take place, marking bad sectors as bad.
    2) Write-testing and manually avoiding those sectors that don't perform as expected (see the sketch after this list).
    3) Rewriting all the critical storage functions to avoid the already-known bad sectors.
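
    For item 2, write-testing is roughly the loop below. A minimal sketch in C, assuming hypothetical flash_erase/flash_write/flash_read driver hooks (not a real VxWorks or rover API), that test-patterns every sector and records the ones that fail verification:

        /* Write-test every flash sector and record the ones that fail
         * verification. flash_erase/flash_write/flash_read are hypothetical
         * low-level driver hooks, each returning 0 on success. */
        #include <stdint.h>
        #include <string.h>
        #include <stdio.h>

        #define SECTOR_SIZE   512
        #define SECTOR_COUNT  4096

        int flash_erase(uint32_t sector);
        int flash_write(uint32_t sector, const uint8_t *buf);
        int flash_read(uint32_t sector, uint8_t *buf);

        int scan_for_bad_sectors(uint8_t *bad_map)
        {
            uint8_t pattern[SECTOR_SIZE], readback[SECTOR_SIZE];
            int bad = 0;

            memset(pattern, 0xA5, sizeof pattern);       /* simple test pattern */

            for (uint32_t s = 0; s < SECTOR_COUNT; s++) {
                bad_map[s] = 0;
                if (flash_erase(s) != 0 ||
                    flash_write(s, pattern) != 0 ||
                    flash_read(s, readback) != 0 ||
                    memcmp(pattern, readback, SECTOR_SIZE) != 0) {
                    bad_map[s] = 1;      /* mark it so the filesystem avoids it */
                    bad++;
                }
            }
            printf("scan complete: %d bad sector(s)\n", bad);
            return bad;
        }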

    It's the kind of thing that anyone can play with. Not saying it isn't risky on a remote device, but BadRAM and similar patches have been in place for years, and they're a way to run Linux on machines with faulty *RAM*, not just faulty long-term storage.

    Many years ago, a bad sector on your hard drive was something you found out about with scandisk (or earlier tools); it was marked as bad and that was that. Your PC wouldn't use it, and so long as it wasn't the boot sector, no harm done. It was only the "creeping" bad sectors, where more of them kept appearing over time, that would really worry anyone.

    I imagine it's not at all difficult to keep multiple boot sectors in place if you really wanted to, but why bother? The chances of losing that one sector are billions to one. More likely, this hardware already has MUCH better fault tolerance, with multiple hardware watchdogs, firmware safeguards, and boot attempts to make sure it eventually gets back up SOMEHOW.
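
    The watchdog part is a simple pattern: the software periodically "kicks" a hardware timer, and if the kicks stop (hang, crash, stuck task) the timer expires and forces a reboot. A minimal sketch, assuming a hypothetical memory-mapped reload register; the address and magic value are invented for illustration:

        /* Main loop that keeps a hardware watchdog happy. If the loop ever
         * stalls, the watchdog times out and the hardware resets itself,
         * which is one way a remote system always gets back up somehow. */
        #include <stdint.h>

        #define WDT_RELOAD_REG  ((volatile uint32_t *)0x40001000u)  /* hypothetical address */
        #define WDT_KICK_VALUE  0xA5A5A5A5u                         /* hypothetical value  */

        extern void do_one_cycle_of_work(void);  /* application work, defined elsewhere */

        void main_loop(void)
        {
            for (;;) {
                do_one_cycle_of_work();

                /* This write must keep happening; stop kicking and the
                 * watchdog reboots the system. */
                *WDT_RELOAD_REG = WDT_KICK_VALUE;
            }
        }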

    There's a reason that even FAT stores two copies of the allocation table, that Linux ext filesystems store multiple copies of the superblock, and so on. They come from an era when the occasional bad sector was simply something you lived with, and when 20 MB of hard drive cost more than the rest of the computer, so it was better to cope with the fault than to tell people to buy a new drive. And their predecessors were (and still are) mainframes, with hardware that's that fault-tolerant in the first place anyway.
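
    The FAT case makes the redundancy concrete: the backup allocation table sits immediately after the primary, so a reader can fall back to it when the primary read fails. A rough sketch of that fallback, with read_sector() standing in for a hypothetical block-device read that returns 0 on success:

        /* Read sector 'n' of the FAT, preferring copy 0 and falling back to
         * copy 1. In the standard FAT layout the first table starts right
         * after the reserved sectors, and the second copy follows it. */
        #include <stdint.h>

        extern int read_sector(uint32_t lba, uint8_t *buf);  /* hypothetical block read */

        struct fat_layout {
            uint32_t reserved_sectors;   /* sectors before the first FAT copy */
            uint32_t sectors_per_fat;    /* size of one FAT copy, in sectors  */
        };

        int read_fat_sector(const struct fat_layout *fs, uint32_t n, uint8_t *buf)
        {
            uint32_t primary = fs->reserved_sectors + n;
            uint32_t backup  = fs->reserved_sectors + fs->sectors_per_fat + n;

            if (read_sector(primary, buf) == 0)
                return 0;                       /* primary copy is readable */
            return read_sector(backup, buf);    /* otherwise try the second copy */
        }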

    It's not at all hard to write a filesystem that can cope not only with damage, but with recurring damage. You've seen PAR files, presumably? The same could easily be done at the filesystem level (and I imagine, somewhere, it already is for some specialist niche).
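
    A toy version of that idea at the block level: keep one XOR parity block over N data blocks and you can rebuild any single lost block. Real PAR uses Reed-Solomon coding and survives multiple losses; plain XOR keeps the sketch short:

        /* XOR parity over N data blocks: parity = data[0] ^ ... ^ data[n-1].
         * Any one missing block can be rebuilt from the parity plus the rest. */
        #include <stdint.h>
        #include <stddef.h>

        #define BLOCK_SIZE 512

        void build_parity(uint8_t *const data[], size_t n, uint8_t *parity)
        {
            for (size_t b = 0; b < BLOCK_SIZE; b++) {
                uint8_t p = 0;
                for (size_t i = 0; i < n; i++)
                    p ^= data[i][b];
                parity[b] = p;
            }
        }

        /* Rebuild block 'missing' by XOR-ing the parity with every other block. */
        void rebuild_block(uint8_t *const data[], size_t n, size_t missing,
                           const uint8_t *parity, uint8_t *out)
        {
            for (size_t b = 0; b < BLOCK_SIZE; b++) {
                uint8_t p = parity[b];
                for (size_t i = 0; i < n; i++)
                    if (i != missing)
                        p ^= data[i][b];
                out[b] = p;
            }
        }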

    It's not that big a deal once they KNOW that's the problem. The biggest problem is that they only "suspect" that's the problem.
