
Curiosity Rover On Standby As NASA Addresses Computer Glitch

Posted by samzenpus
from the fixing-the-glitch dept.
alancronin writes "NASA's Mars rover Curiosity has been temporarily put into 'safe mode,' as scientists monitoring from Earth try to fix a computer glitch, the US space agency said. Scientists switched to a backup computer Thursday so that they could troubleshoot the problem, said to be linked to a glitch in the original computer's flash memory. 'We switched computers to get to a standard state from which to begin restoring routine operations,' said Richard Cook of NASA's Jet Propulsion Laboratory, the project manager for the Mars Science Laboratory Project, which built and operates Curiosity."
This discussion has been archived. No new comments can be posted.

  • by AmiMoJo (196126) * <mojoNO@SPAMworld3.net> on Sunday March 03, 2013 @03:03PM (#43062771) Homepage

    Are we talking a temporary issue that can be resolved by re-flashing the memory in question or is one of the cells damaged in some un-recoverable way? Either way there are solutions but the latter is far more serious.

    • Re: (Score:3, Funny)

      by Anonymous Coward

      Would hate to be the field service engineer for this one...

      • Call Howard Wolowitz. "Having worked on the Mars rover project, he invited his date, Dr. Stephanie Barnett, to drive it at a secret facility. Instead, the rover became stuck in a Martian ditch and he spent the rest of the night with Sheldon and Raj trying to undo the damage. When this proved unsuccessful, he decided to erase the hard drives of the facility to cover up his meddling, only to later find out that the rover had discovered the first clear signs of life on Mars. When he made it onto a new team fo
        • by icebike (68054) on Sunday March 03, 2013 @05:37PM (#43063511)

          Does every incident in the real world need a reference to a TV show?

          Are you sure you can't find an XKCD comic [xkcd.com] that would be more appropriate?

      • by TJNoffy (1255090)

        Really? I would LOVE it if the technology existed to get me there and back safely! Send me!

        • by RockDoctor (15477)

          Really? I would LOVE it if the technology existed to get me there and back safely! Send me!

          To misquote ... whoever actually wrote Star Wars ... "You are not the field service engineer we're looking for!"

          Real Mars rover field service engineers go out, fix the fault, and then terraform the planet. With nowt but a roll of duct tape and a can of WD-40.

    • by icebike (68054) on Sunday March 03, 2013 @03:18PM (#43062837)

      TFA pretty much covers this, saying they believe it is a problem in the flash memory.

      The computer problem is related to a glitch in flash memory on the A-side computer caused by corrupted memory files, Cook said. Scientists are still looking into the root cause of the corrupted memory, but it's possible the memory files were damaged by high-energy space particles called cosmic rays, which are always a danger beyond the protective atmosphere of Earth.

      They also say

      "We also want to look to see if we can make changes to software to immunize against this kind of problem in the future," Cook said.

      It seems that, since the same thing happened on one of the earlier rovers, this is something they would have done some time ago.

      They are now updating the B side computer so it can manage the mission while they work on the primary. I wonder why this is not something that is kept up to date anyway. I can see keeping B an update or two behind A to prevent a single programming error taking both of them down. But after you are satisfied with A's software load, why keep B so far back-level that transition takes so much time. And since the computers are said to be identical, why the desire to move back to A?

      • Re: (Score:2, Funny)

        by Anonymous Coward

        Because apt-get update && apt-get upgrade takes a million years with that kind of latency?

        • by icebike (68054)

          They sent the update once, didn't they?
          Wait till you are satisfied it worked, and shunt it over to computer B.

          • by BradleyUffner (103496) on Sunday March 03, 2013 @09:29PM (#43064475) Homepage

            They sent the update once, didn't they?
            Wait till you are satisfied it worked, and shunt it over to computer B.

            I'm fairly sure that they purposely keep the computers out of sync to avoid a single bug taking out both systems. If I recall, it actually has 3 computers, 2 of them have identical hardware that run different versions of the same software, and a 3rd computer based on completely different hardware running yet another software package. Each system is able to assume command of the mission and issue updates to the other systems.

      • by Kjella (173770) on Sunday March 03, 2013 @03:54PM (#43063023) Homepage

        They are now updating the B side computer so it can manage the mission while they work on the primary. I wonder why this is not something that is kept up to date anyway. I can see keeping B an update or two behind A to prevent a single programming error taking both of them down. But after you are satisfied with A's software load, why keep B so far back-level that transition takes so much time. And since the computers are said to be identical, why the desire to move back to A?

        When are you "satisfied" with software like this? Imagine something that manifests as slow corruption, or only occurs when a certain counter overflows, or whatever; you don't want to be caught in a race against time to save the system before the B computer dies too. Which is probably the reason why they want to move back to the A computer: if it can't run there, then they don't have a backup anymore. It's better for them to have a slightly reduced system with a backup than a B computer running with no backup. It does them no good to sit on the ground and say "well, we've figured out what happened and how we could have fixed it" after you've lost contact; then it's game over. You don't run in "if it breaks, we're done here" mode unless you really, absolutely must.

        • by icebike (68054)

          If your scenario were real, then why are they updating B at this point?
          Clearly they are satisfied with what A was running before the memory fritz.

        • by instagib (879544) on Sunday March 03, 2013 @05:23PM (#43063441)

          One can only hope that they have a C computer which will never be updated, and which can reset the rover to the initial state. Even if updates on A run fine for some time, experience in computing of the last decades shows that Murphy's Law is always lurking.

          • As they cannot have someone push a physical hardware reset button, they should have a simple watchdog circuit that looks for a particular radio signal (it would listen in on it) and force a reset that way. The flash routine would similarly be brutally powerful yet simple.

            And why they don't have 200 flash banks I don't know. Best 9 of 17 wins on a bit level, that kind of thing.
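The "best 9 of 17 wins on a bit level" idea is just bitwise majority voting across redundant copies. A toy sketch in Python (purely illustrative; nothing here reflects MSL's actual design):

```python
def majority_vote(copies: list[bytes]) -> bytes:
    """Recover data by taking, for each bit position, the value
    held by the majority of the redundant copies."""
    n = len(copies)
    assert n % 2 == 1, "use an odd number of copies to avoid ties"
    out = bytearray(len(copies[0]))
    for i in range(len(out)):
        for bit in range(8):
            ones = sum((c[i] >> bit) & 1 for c in copies)
            if ones > n // 2:
                out[i] |= 1 << bit
    return bytes(out)

# One copy suffers a "cosmic ray" bit flip; the other two outvote it.
good = b"\x5a\x5a"
flipped = b"\x5a\x5b"          # low bit of the second byte flipped
print(majority_vote([good, good, flipped]).hex())  # 5a5a
```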

          • I'm pretty sure the guys at NASA writing code for the Mars rover are doing it a lot more carefully than those impulsive hacks at Facebook or the iTunes team. Murphy's Law is supposed to be obliterated by engineering.

      • by Brett Buck (811747) on Sunday March 03, 2013 @04:15PM (#43063109)

        I wonder why this is not something that is kept up to date anyway. I can see keeping B an update or two behind A to prevent a single programming error taking both of them down. But after you are satisfied with A's software load, why keep B so far back-level that transition takes so much time. And since the computers are said to be identical, why the desire to move back to A?

        I can easily imagine this happening. I work on a very similar, perhaps nearly identical spacecraft (that's just a tad more critical AND expensive than this thing...) and we haven't necessarily maintained this. You underestimate the overhead associated with generating the necessary uploads.

        The reason they probably want to go back to the Prime is that their failure isolation system database is keyed to using the prime units only, and altering it to start on the "B" side and have it switch back to "A" is prohibitive, or at least easier to get around by switching back to A. This last is also something we do in the rare case of a temporary failure. There's less good justification for doing it than for leaving the backup program image alone, but having to completely retest the entire redundancy management system for a new configuration is generally avoided. If it fails hard, it doesn't really matter, since there's no Prime to switch back to.

        Brett

      • by sjames (1099)

        Several reasons come to mind. First, it's not necessary. If they ever need to switch computers they can do it then (as they are doing). It takes time and energy (from the limited supply) to do the updates.

        Next, up until they have to do that, they stay with the software that has already proven itself to be at least good enough to run the mission. That is, if something really screws up and they realize it was a change 3 revisions back, they still have a computer that was never affected by that change.

        As for

      • They are now updating the B side computer so it can manage the mission while they work on the primary. I wonder why this is not something that is kept up to date anyway. I can see keeping B an update or two behind A to prevent a single programming error taking both of them down. But after you are satisfied with A's software load, why keep B so far back-level that transition takes so much time. And since the computers are said to be identical, why the desire to move back to A?

        They're running the same flight software, but the parameters are different. (A parameter might say, for example, how far to drive between autonomous visual odometry updates, or how big the bounding boxes around the arm should be when computing ChemCam laser safety.) There are thousands of these parameters, and they're not routinely kept up to date on the non-prime side (which has historically been the B side).

        And while the computers are identical, the non-cross-strapped equipment isn't. For example, the B-s

      • by RockDoctor (15477)

        And since the computers are said to be identical, why the desire to move back to A?

        The desire, as I read it, is to have the "A" side back available as a backup. So, in the event of another fault (hypothesis: cosmic ray hit) on the "B" side, they'll be able to recover the rover's operational capability using the "A" side.

    • by tipo159 (1151047)

      Are we talking a temporary issue that can be resolved by re-flashing the memory in question or is one of the cells damaged in some un-recoverable way? Either way there are solutions but the latter is far more serious.

      Did you read the article? Most of these questions are answered there.

    • by Solandri (704621)
      Glitched memory usually isn't a problem. Other spacecraft [arstechnica.com] have had similar memory problems [nasa.gov]. Usually it's temporary. If it's permanent, the computers are programmed to map around the glitched memory or (back in the tape drive days) not use that segment of tape.

      The real danger is that such a glitch will first manifest itself by altering control or orientation instructions, breaking the spacecraft's contact with Earth. Most spacecraft are designed with a "safe mode" when this happens. If there's been
    • by tlhIngan (30335)

      Are we talking a temporary issue that can be resolved by re-flashing the memory in question or is one of the cells damaged in some un-recoverable way? Either way there are solutions but the latter is far more serious.

      It's most likely NAND flash, in which case damaged cells are a natural occurrence anyway (most have up to 2% bad blocks to begin with, even when "new"). Even if it's damaged, this is where the flash translation layer steps in, marking the block as bad and avoiding it.

      The only question is - is the rest of the a
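For the curious, the bad-block handling described above can be sketched in a few lines. This is a toy model of a flash translation layer (the class and method names are made up for illustration), not any real FTL:

```python
class TinyFTL:
    """Toy flash translation layer: logical blocks are remapped away
    from physical blocks that fail a write/verify cycle."""

    def __init__(self, num_physical: int):
        self.store = {}                    # physical block -> data
        self.bad = set()                   # physical blocks marked unusable
        self.map = {}                      # logical -> physical
        self.free = list(range(num_physical))

    def write(self, logical: int, data: bytes, verify_fails=frozenset()) -> int:
        """Write a logical block, skipping any physical block whose
        write-verify step fails (simulated via `verify_fails`)."""
        while self.free:
            phys = self.free.pop(0)
            if phys in verify_fails:       # simulated write-verify failure
                self.bad.add(phys)         # mark bad, never reuse
                continue
            self.store[phys] = data
            self.map[logical] = phys
            return phys
        raise IOError("out of spare blocks")

    def read(self, logical: int) -> bytes:
        return self.store[self.map[logical]]

ftl = TinyFTL(4)
ftl.write(0, b"telemetry", verify_fails={0})   # physical block 0 is "damaged"
print(sorted(ftl.bad))                          # [0]
print(ftl.read(0))                              # b'telemetry'
```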

  • Who actually fabs the chips and circuit boards used by NASA? I'm guessing these flash modules are not your typical SanDisk variety. Also, do they plan on wiping the flash memory from the secondary computer, mapping bad cells, and reloading the software from scratch? Hopefully these flash modules don't suffer any systemic issues.

  • How does it feel to SSH with a ~45 minute delay? Wait: why bother encrypting? Just telnet!
  • by Miletos (1289588) on Sunday March 03, 2013 @03:45PM (#43062979)
    "NOOOO! I should've selected Safe mode WITH Networking!"
  • solutions: "someone fitted a module in backwards" "Why didn't they simply do X instead of Y" Do you seriously think the people who work for NASA, the same ones who sent a group of men to the moon and back, rovers to Mars, Voyager to deep space, shuttles, space stations, et al., can be out-thought by YOU? And from the comfort of your La-Z-Boy chair, no less?! Wow. Surely you can come to respect that these men and women have given countless hours of thought, simulation, and planning to these missions. In fact, many have dedicated their lives to this engineering. Surely you can respect that those involved in this mission did not simply put the red wire where the blue one should have gone. That's sorta insulting. Peace out
  • by Celarent Darii (1561999) on Sunday March 03, 2013 @04:23PM (#43063153)

    I once had to fix a server some 6000 km away due to a corrupted disk: running pdisk and modifying fstab over ssh, and then a reboot. You just check and recheck to make sure you did it right, and hope you get a ping a few minutes later.

    Can't imagine how these guys feel. 45 min ping and it isn't like they could ask someone to go turn it off and on again.

    Good luck to the guys working on this.

    • by cultiv8 (1660093)

      ...it isn't like they could ask someone to go turn it off and on again.

      Just reboot the server [youtube.com]

    • by evilviper (135110) on Sunday March 03, 2013 @07:52PM (#43064059) Journal

      I once had to fix a server some 6000 km away due to a corrupted disk. Doing pdisk and modifying fstab over ssh and then a reboot. You just check and recheck to make sure you did it right and just hope you get a ping a few minutes later.

      It's called out-of-band management. You can bring up a server from bare metal with no working OS installed. Damn near every server out there comes with at least IPMI, and often DRACs/iLOs/RSAs with some additional features. All you need to do is give the OoBM interface an IP address (perhaps a DHCP reservation) and you're good to go.

      Even if you're running on desktop-class hardware, you can still fake OoBM pretty well with a serial port. Linux/BSD/etc. will bring up the serial port as the console as soon as the bootloader starts, if configured to do so. And if the disk has failed, or your bootloader otherwise doesn't work, hopefully your BIOS is set to PXE boot, and your pxelinux configuration will give you a serial console as soon as that kicks in. Throw in magic SysRq to allow you to reboot a system that's not responding, and you've got something reasonably close to OoBM just about free. You could also supplement this with a watchdog timer and make things even more reliable.

      But as cheap as server-class hardware is, and given the ubiquity of IPMI, it's probably not worthwhile going the cheap route.
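The serial-console fallback described above is mostly bootloader/kernel configuration. A sketch of the relevant GRUB settings on a typical Linux distro (file location, serial unit, and speed are illustrative; check your distro's documentation):

```shell
# /etc/default/grub -- route the bootloader menu and the kernel
# console to the first serial port, keeping the local VGA console too.
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
# Then regenerate the config, e.g.: update-grub  (Debian/Ubuntu)
```

With this in place a getty on ttyS0 gives you a login prompt over the serial line, and magic SysRq (if enabled) lets you force a reboot even when userland is wedged.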

      • by cthulhu11 (842924)

        All you need to do is give the OoBM interface an IP address (perhaps a DHCP reservation) and you're good to go.

        "You too can be a MILLIONAIRE, and never pay taxes! First step: get a million dollars" This is one place where server manufacturers all too often mistake rackmount servers for desktops. I demo'd an IBM x-something system last year and the thing was a non-starter. Serial console didn't work out of the box, and even the elaborate legacy-KVM steps they came up with couldn't make the thing work. Talked to Cisco about their UCS systems, and they thoroughly were incapable of understanding. "Just hook up a

        • by evilviper (135110)

          I don't know why you're having these millions of insurmountable problems with OoBM, but I can assure you, it's only you... everyone else in the world has things working just fine.

          Rack a server, plug in the power and ethernet cables, then use the front panel LCD menu to assign the static IP address for each one if you don't like DHCP, maybe set the password depending on vendor, and get the hell out of there. Or you can plug in video and keyboard to do it via the BIOS screen prompts. Done it with hundreds

          • by cthulhu11 (842924)

            I don't know why you're having these millions of insurmountable problems with OoBM,

            Um, I made no such claim.

            but I can assure you, it's only you... everyone else in the world has things working just fine.

            Wow, you know EVERYONE ELSE? Your friend count on Facebook must be just sick!

            Rack a server, plug in the power and ethernet cables, then use the front panel LCD menu to assign the static IP address for each one if you don't like DHCP, maybe set the password depending on vendor, and get the hell out of there.

            Maybe set the password? Why on earth would I want to leave it set to admin/changeme? Only front-panel IP address setting I've seen has been on old laser printers and crappy Infortrend RAID arrays. Liking or not liking DHCP is moot. Servers are not desktops, there rarely is a local server available. Recently "ip helper" router config foo can sometimes relay DHCP broadcasts to a remote server, but this

            • by evilviper (135110)

              *Sigh*... I've worked with plenty of people like you before. Acting like an asshole to try and mask your incompetence doesn't ever actually work.

              Why on earth would I want to leave it set to admin/changeme?

              I was discussing what config NEEDS to be done locally to get OoBM working. Changing the password is something you can do from the other side of the planet, once it's pingable.

              Liking or not liking DHCP is moot. Servers are not desktops, there rarely is a local server available. Recently "ip helper" route

              • by cthulhu11 (842924)

                *Sigh*... I've worked with plenty of people like you before. Acting like an asshole

                Who's the one namecalling here?

                This is so UTTERLY MORONIC. That shiny new "ip helper" stuff is something I've been using for the past 15 years, and I don't recall it being new at the time. It sure as hell is 100% "reliable" and "predictable" in every possible way

                Maybe for you, with whatever homogenous environment you have. IOS-XR bugs happen.

                I've got hundreds and hundreds of systems depending on high-availability DHCP servers every single day

                And you're telling *me* that I'm not a sysadmin? What bizarre fear do you have of static configuration?

                EVERY DELL SERVER[...]

                Dell? Seriously? Do you work in Hollywood setting up rows of servers for spy show shoots? I'll bet they're all in one room.

                What you've "seen" is a lousy measure of anything

                Because you're more important than me?

                since you're spouting nothing but ignorance and nonsense left and right.

                Oh right, I clearly am imagining the bugs in HP's iLO. And the PXE code on the shitty onboard NICs that shipped with the DL580G7.

                • by evilviper (135110)

                  You're the worst combination of ignorant, tiresome, and obtuse when it suits you... So I'll just say goodbye, after I quickly address just a few of your points...

                  use a crossover cable by mistake

                  I have to point out, this statement shows massive ignorance. Every NIC and switch that can do gigabit does auto MDIX, and will just work, whether you have a straight or xover cable.

                  Management would laugh me out of my job if I told them that I needed to fly around the world to do every system install personally.

                  It's

  • Didn't Apple just disable Flash again? Coincidence? Knew they should have turned off remote updating on the Rover's Mac Mini.
  • by chalker (718945) on Sunday March 03, 2013 @05:01PM (#43063353) Homepage

    Check out the official rover press kit for a summary of the computer design (http://mars.jpl.nasa.gov/msl/news/pdfs/MSLLanding.pdf) Page 42 in particular:

    "Curiosity has redundant main computers, or rover compute elements. Of this “A” and “B” pair, it uses one at a time, with the spare held in cold backup. Thus, at a
    given time, the rover is operating from either its “A” side or its “B” side. Most rover devices can be controlled by either side; a few components, such as the navigation camera, have side-specific redundancy themselves. The computer inside the rover — whichever side is active — also serves as the main computer for the rest of the Mars Science Laboratory spacecraft during the flight from Earth and arrival at Mars. In case the active computer resets for any reason during the critical minutes of entry, descent and landing, a software feature called “second chance” has been designed to enable the other side to promptly take control, and in most cases, finish the landing with a bare-bones version of entry, descent and landing instructions.

    Each rover compute element contains a radiation-hardened central processor with PowerPC 750 architecture: a BAE RAD 750. This processor operates at up to 200 megahertz speed, compared with 20 megahertz speed of the single RAD6000 central processor in each of the Mars rovers Spirit and Opportunity. Each of Curiosity’s redundant computers has 2 gigabytes of flash memory (about eight times as much as Spirit or Opportunity), 256 megabytes of dynamic random access memory and 256 kilobytes of electrically erasable programmable read-only memory.

    The Mars Science Laboratory flight software monitors the status and health of the spacecraft during all phases of the mission, checks for the presence of commands to execute, performs communication functions and controls spacecraft activities. The spacecraft was launched with software adequate to serve for the landing and for operations on the surface of Mars, as well as during the flight from Earth to Mars. The months after launch were used, as planned, to develop and test improved flight software versions. One upgraded version was sent to the spacecraft in May 2012 and installed onto its computers in May and June. This version includes improvements for entry, descent and landing. Another was sent to the spacecraft in June and will be installed on the rover’s computers a few days after landing, with improvements for driving the rover and using its robotic arm."

    And according to a release they issued after landing, both computers receive the same updates and are running the same software (not a version or 2 behind like others have suggested): http://mars.jpl.nasa.gov/news/whatsnew/index.cfm?FuseAction=ShowNews&NewsID=1305 [nasa.gov]

  • OK, let's assume a cosmic ray corrupted some random block of flash memory... so what? Why should that lead to failure to upload anything or enter sleep mode?

    I can only assume there is an integrity check for block-level I/O from flash, and it simply refused to load garbage rather than use it unknowingly. If it were any old PC app, this would be perfectly acceptable behavior.

    However, for ultra-expensive spacefaring things I would expect it to be designed to still try and be useful even if the southbridge caught fire.
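The "integrity check for block-level I/O" assumed above might look like a checksum wrapped around each stored block; a minimal sketch using CRC32 (illustrative only, not how the rover's flight software works):

```python
import zlib

def store_block(data: bytes) -> bytes:
    """Append a CRC32 so corruption can be detected on read-back."""
    return data + zlib.crc32(data).to_bytes(4, "big")

def load_block(raw: bytes) -> bytes:
    """Return the payload if the checksum matches; refuse to hand back garbage."""
    data, crc = raw[:-4], int.from_bytes(raw[-4:], "big")
    if zlib.crc32(data) != crc:
        raise IOError("block failed integrity check")
    return data

raw = store_block(b"science data")
load_block(raw)                                 # passes the check
corrupted = bytes([raw[0] ^ 0x01]) + raw[1:]    # simulated bit flip
# load_block(corrupted)                         # would raise IOError
```

The point of the check is exactly the behavior discussed in the thread: the system knows it read garbage, rather than silently acting on it.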

    • by Binestar (28861) on Sunday March 03, 2013 @09:18PM (#43064437) Homepage
      Yeah... did you miss the part where it went to the redundant unit and sent an error to mission control? Sheesh.
      • Yeah... did you miss the part where it went to the redundant unit and sent an error to mission control? Sheesh.

        No, I missed it. I read both articles and neither of them mentioned (a) that the rover went to a redundant anything by *itself*, or (b) that it sent an error.

        It says that they (NASA) switched it, and that they noticed the problem when the rover did not uplink or enter sleep mode when it was supposed to... what error are you talking about?

        And I think my point remains. Just because you can't read or write to an area of persistent storage, what prevents you from entering sleep mode or uplinking data? There was also no informat

    • by Chris Burke (6130) on Monday March 04, 2013 @03:44AM (#43065613) Homepage

      Ok lets assume a cosmic ray corrupted some random block of flash memory...so what? Why should that lead to failure to upload anything or enter sleep mode?

      Pretty much any fault, error, or out-of-bounds reading with any part of the rover causes it to stop whatever it is doing and wait for ground control to check it out and decide what to do. If the fault is with the computer itself, it makes sense to gracefully enter safe mode. It probably was a cosmic ray flipping a random bit, but you can't assume that when designing your fault handler.

      If it were any old PC app this would be perfectly acceptable behavior. However for ultra expensive spacefaring things I would expect it to be designed to still try and be useful even if the southbridge cought fire.

      See, I think you have that backwards. If it were a PC app it would be appropriate to just assume the error was insignificant or more likely not bother checking in the first place. If it's a more serious problem then eventually the app or OS might crash, the user will reboot, and if that doesn't work reinstall, and if not that then they'll just go get some new hardware.

      For a multi-billion rover on another planet, you don't want to just wait and see what happens. Any anomaly at all should be cause for cautious, deliberate action. Heck, the whole project is run that way.

      The rover was designed with a lot of redundancy and flexibility so that it can be useful even in the face of more serious problems, and if that turns out to be the case they'll find a way to make the rover as useful as possible. Missing a couple night's worth of downloads and delaying some activities in order to take the time to make sure they're maximizing the rover's future potential is an easy tradeoff.
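The "stop and wait for ground control" policy Chris Burke describes can be sketched as a trivial state machine. A toy illustration (the class, mode names, and fault text are all made up):

```python
from enum import Enum

class Mode(Enum):
    NOMINAL = "nominal"
    SAFE = "safe"

class RoverFaultHandler:
    """Toy model of the policy described above: any anomaly suspends
    routine activity and waits for ground control, rather than trying
    to self-diagnose the root cause on board."""

    def __init__(self):
        self.mode = Mode.NOMINAL
        self.pending_faults = []

    def report(self, subsystem: str, detail: str):
        """Record the anomaly and drop into safe mode unconditionally."""
        self.pending_faults.append((subsystem, detail))
        self.mode = Mode.SAFE

    def run_activity(self, activity):
        """Routine activities are deferred while in safe mode."""
        if self.mode is Mode.SAFE:
            return "deferred: awaiting ground control"
        return activity()

handler = RoverFaultHandler()
handler.report("flash-A", "uncorrectable read fault")
print(handler.run_activity(lambda: "drive 10 m"))
# deferred: awaiting ground control
```

The design choice mirrors the comment: the handler doesn't classify the fault (bad block vs. bus glitch vs. software bug); it just stops and preserves the evidence.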

      • Pretty much any fault, error, or out-of-bounds reading with any part of the rover causes it to stop whatever it is doing and wait for ground control to check it out and decide what to do.

        That's a great strategy. The only problem with it is that, from TFA, the indication they received was noticing the rover was not behaving the way it was supposed to. They had to look around to figure out why.

        If the fault is with the computer itself, it makes sense to gracefully enter safe mode. It probably was a cosmic ray flipping a random bit, but you can't assume that when designing your fault handler.

        You don't have to assume anything. You KNOW the block is invalid. A bad block should not cripple the computer so that it can't do anything else. There is no indication from TFA there were any other faults.

        See, I think you have that backwards. If it were a PC app it would be appropriate to just assume the error was insignificant or more likely not bother checking in the first place.

        All I do is write software and I refuse to follow this shitty advice. Every error should be checked an

        • by Chris Burke (6130)

          Thats a great strategy only problem with it is from TFA the indication they received was noticing it was not behaving the way it was supposed to be behaving. They had to look around to figure out why.

          Yes, because normal operations were suspended.

          You don't have to assume anything. You KNOW the block is invalid. A bad block should not cripple the computer so that it can't do anything else. There is no indication from TFA there were any other faults.

          No, you don't. You don't know if the block is bad, if the data bus is suffering an intermittent fault that happened to occur while that block was being read, if it's the BIST or ECC mechanisms that are faulty, or if it's a software error corrupting the data. Going from "we got a fault on reading this block" to "that block and only that block is affected, let's get on with it" with no consideration is a great way to lose a rover.

          All I do is write software and I refuse to follow this shitty advice. Every error should be checked and handled. Besides the fricking hardware does all the heavy lifting for us

          Ah, so you only allow your softw

          • Yes, because normal operations were suspended.

            Why the mystery? Why couldn't it just say that this failed?

            No, you don't. You don't know if the block is bad, if the data bus is suffering an intermittent fault that happened to occur while that block was being read, if it's the BIST or ECC mechanisms that are faulty, or if it's a software error corrupting the data. Going from "we got a fault on reading this block" to "that block and only that block is affected, let's get on with it" with no consideration is a great way to lose a rover.

            Why should the rover have to read or write to persistent storage to continue to operate?

            Ah, so you only allow your software to be run on hardware with ECC corrected RAM and ECC caches and ECC data busses... seems weird to call this a "PC app" when it's excluding most of the PC market. Unless you're doing it yourself then you're only checking for a subset of errors.

            If you're going to use this interpretation of "error", ECC is not good enough. It can fail undetected as well; same goes for cryptographic signatures. This is not a grand tour of everything and anything that can go wrong. I never asserted the system should continue if the running image was suspect. That would be madness. This is about I/O to persistent storage

    • by AC-x (735297)

      However, for ultra-expensive spacefaring things I would expect it to be designed to still try and be useful even if the southbridge caught fire.

      The computer on Curiosity is completely redundant and has switched over to the secondary computer, even if the primary computer has suffered fatal hardware failure the rover can continue to operate on the secondary. If that's not "being useful" after a failure I don't know what is!

      • The computer on Curiosity is completely redundant and has switched over to the secondary computer, even if the primary computer has suffered fatal hardware failure the rover can continue to operate on the secondary. If that's not "being useful" after a failure I don't know what is!

        This is confusing my point. You're drawing a "systems" box around both computers, while I have only drawn a box around one computer.

        Relying on hardware redundancy to fix a software problem kind of spoils the reason for hardware redundancy, doesn't it? What if B-side had been burnt to a crisp and then the same problem occurred? I'm sure they have an answer for that.

        • by AC-x (735297)

          Who said it was a software only problem? The article suggests the flash memory may have been corrupted by cosmic rays, how do you protect against that? Redundancy. How redundant did they make it? Completely redundant (2 separate computer systems).

          Plus no-one said that the A-side could never recover on its own (like what happened with Spirit), I'm sure it's just a lot easier to boot the redundant system and diagnose it from there.

          Or do you have a better idea for how they could have architected it?

          • Who said it was a software only problem?

            NASA did. From the space.com article, they said some files were corrupted, meaning the flash hardware could still be accessed.

            The article suggests the flash memory may have been corrupted by cosmic rays, how do you protect against that? Redundancy.

            There are several ways to do it using error correction techniques, at the cost of some capacity. My point is not that the flash should not have failed; it is that the system should be able to continue to function with an external I/O failure present, as long as the core system processor/northbridge is OK. An I/O error transferring data for one discrete experiment or function should not adversely affect another.
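The capacity-for-correction trade-off mentioned above can be illustrated with the classic Hamming(7,4) code, which spends 3 parity bits per 4 data bits to correct any single flipped bit (real spacecraft memory uses stronger codes; this is just the textbook example):

```python
def hamming74_encode(nibble: int) -> int:
    """Encode 4 data bits into a 7-bit Hamming codeword
    (positions 1..7, parity bits at positions 1, 2, 4)."""
    d = [(nibble >> i) & 1 for i in range(4)]        # d1..d4
    p1 = d[0] ^ d[1] ^ d[3]                          # covers positions 3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                          # covers positions 3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                          # covers positions 5,6,7
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]      # positions 1..7
    return sum(b << i for i, b in enumerate(bits))

def hamming74_decode(code: int) -> int:
    """Correct up to one flipped bit, then return the 4 data bits."""
    bits = [(code >> i) & 1 for i in range(7)]       # positions 1..7
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)            # 1-based error position
    if syndrome:
        bits[syndrome - 1] ^= 1                      # repair the flipped bit
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)

code = hamming74_encode(0b1011)
assert hamming74_decode(code ^ (1 << 4)) == 0b1011   # one bit flip, corrected
```

Seven stored bits per four data bits is a steep overhead, which is exactly why it's a trade against capacity.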

            • by AC-x (735297)

              There are several ways to do it using error correction techniques at the cost of some capacity. My point is not that flash should not have failed it is the system should be able to continue to function with external I/O failure present as long as the core system processor/northbridge is ok. An I/O error transfering data for one discrete experiment or function should not adversly effect another.

              You're acting as if the rover suddenly failed and is no longer working; the fact is, the rover is completely operational and they're currently fully diagnosing the problem to make sure they fix it.

              Plus, have you considered it's far better to fail safe than to fail catastrophically? We're talking about computer systems with zero physical access; it seems like a pretty damn good idea to me to, on detection of any sort of problem, enter a safe mode to allow the problem to be fully diagnosed instead of the

  • Noooooo! [slashdot.org]
  • E.E. / firmware guy here... Corrupted files, and so far not even a mention of a potential software problem?

    Even considering radiation out in space, it seems that it's still easier to get the hardware right than the software / firmware. I'm not jumping to any conclusions, but my first guess would be some kind of rare and unanticipated race condition, or some rarely-executed leg of the filesystem / logging software, etc.

    I'm probably cynical from having worked on lots of safety-critical systems for a w
  • What about that earth-shaking discovery Curiosity made that was supposed to go down in history? I still haven't heard about it. Here's the link: http://science.slashdot.org/story/12/11/20/1511232/what-earth-shaking-discovery-has-curiosity-made-on-mars [slashdot.org]

"Regardless of the legal speed limit, your Buick must be operated at speeds faster than 85 MPH (140kph)." -- 1987 Buick Grand National owners manual.
