Space Science Hardware

What's Inside the Mars Rovers 458

Captain Zion writes "Space.com has a story about the hardware and software of the Mars rovers Spirit and Opportunity. Basically, they're radiation-shielded, 20MHz PowerPC machines with 128MB of RAM and 256MB of flash memory, running VxWorks. I wonder if I could make a nice firewall with one of these for my home network..."
This discussion has been archived. No new comments can be posted.

  • Wait a second... (Score:5, Insightful)

    by deitel99 ( 533532 ) on Thursday January 29, 2004 @11:05AM (#8123510)
    The machines aren't as slow as the top post says... they don't run at 20MHz, they are "capable of carrying out about 20 million instructions per second". Depending on the complexity of the instructions, the processor actually runs several times faster than 20MHz.
  • by gerf ( 532474 ) on Thursday January 29, 2004 @11:06AM (#8123522) Journal

    Does a 20MHz processor really need 128MB of RAM? I mean, with a bus speed that low, you could probably write the data to flash ROM just as fast. What are the chances of using all 128MB of RAM?

    I imagine they can use all the storage they can get, since there's no hard drive. So the RAM acts as a cache for everything that is transmitted (which is a lot, actually). The flash is used for more permanent software, like the OS, commands, other files, etc. I'm amazed they can do it all with as little as they have.

  • by 4r0g ( 467711 ) on Thursday January 29, 2004 @11:07AM (#8123527)
    The memory is probably partitioned, and some banks can be shut down if they fail. Thus the 128MB. OR, you could use it as texture memory (wonder what the display adapter is like ;)
  • Redundancy Check? (Score:3, Insightful)

    by shlomo ( 594012 ) on Thursday January 29, 2004 @11:07AM (#8123528)
    I read somewhere that the Space Shuttle has six computers for redundancy checks.

    Your average plane has a triple backup system. I spoke to an engineer, and he said preflight checks are usually just making sure two of the systems are still working.

    You'd think they could at least send up some more hardware with these little critters. The extra weight would pan out when things go bad... case in point: see what they are dealing with now :)

  • by Zog The Undeniable ( 632031 ) on Thursday January 29, 2004 @11:10AM (#8123564)
    Actually, they're probably slower :-P

    Modern superscalar processors (with multiple pipelines issuing instructions in parallel) retire a lot more MIPS than their clock rate in megahertz.

  • Ouch (Score:3, Insightful)

    by savagedome ( 742194 ) on Thursday January 29, 2004 @11:13AM (#8123590)
    In addition to VxWorks' reliability, the system allows users to add software patches -- such as a glitch fix or upgrade -- without interruption while a mission is in flight. "We've always had that [feature] so you don't have to shut down, reload and restart after every patch," Blackman said, adding that some commercial desktop systems require users to reboot their computers after a patch.

    The bold emphasis is mine but that is a big Ouch for Microsoft.


    RAD6000 microprocessors are radiation-hardened versions of the PowerPC chips that powered Macintosh computers in the early 1990s
    Shouldn't Apple be using this in their commercials somehow to further boost their reliability image? I'm sure the PR people can put it in a way that non-geeks watching TV can relate to, right?

  • by snake_dad ( 311844 ) on Thursday January 29, 2004 @11:17AM (#8123623) Homepage Journal
    That way of thinking would make the cost of robotic space exploration approach that of human space exploration. Plus, the rover will not crash with loss of life in case of a minor computer failure. There is a much bigger margin for troubleshooting "in the field" than with aircraft or manned spacecraft.

    Of course, NASA did implement a measure of redundancy by sending two rovers instead of just one.

  • Re:Self-warming (Score:3, Insightful)

    by Waffle Iron ( 339739 ) on Thursday January 29, 2004 @11:33AM (#8123755)
    If they don't mind it, then let's send up those dune buggies with RTG and 18-inch wheels and cover a lot more of Mars.

    I don't know, but I'd guess that for rovers of this size and weight, it might be easier to use solar panels than to figure out how to dump all of the waste heat that an RTG would produce. These devices are highly inefficient in terms of converting thermal energy to electrical energy, so they are probably spewing kilowatts of thermal energy at all times, and there's no way to shut them off.

    That poses two immediate problems. First: with the el-cheapo airbag-based landing system used, how do you keep the rover from frying while it's wrapped up in its airbag cocoon? The only answer would seem to be a heat sink, but that would add significant weight to the spacecraft.

    The second problem would be how to radiate all of that heat during operation. The outer planet probes have spindly rods that hold the RTG a few meters away from the main body. A rover couldn't do this and maintain balance, so you would need some kind of closely attached radiator with liquid cooling and/or heat shielding to protect the main unit from the intense heat. Once again, this could end up being heavier than simple solar panels.

  • by millahtime ( 710421 ) on Thursday January 29, 2004 @11:34AM (#8123770) Homepage Journal
    The response time to an interrupt is the big thing though. You have to have a guaranteed interrupt response time. With a real-time OS you get that after every instruction. With Linux 2.4 that wasn't really there; it was remedied in 2.6, but it still isn't true real time. For machines like this, planes, space shuttles, and cars, you will find a real-time OS. Would you want your drive-by-wire response time to be 200 microseconds or 4000 microseconds? In reality that makes a difference.
  • by jandrese ( 485 ) * <kensama@vt.edu> on Thursday January 29, 2004 @11:38AM (#8123803) Homepage Journal
    Generally, no. The cache maintains a _copy_ of recently accessed main memory. Logically it is not a separate memory space; rather, it shows up as certain memory locations being faster than others when you access them. The upshot is that you still need at least as much main memory as cache space to actually use the cache.
  • by noselasd ( 594905 ) on Thursday January 29, 2004 @11:50AM (#8123923)
    No, the response time really isn't _the_ big thing. It of course has to be fast, but what matters is how reliable it is: what response times it guarantees. While e.g. Linux can sometimes have much faster response times than some RTOSes, there's still no guarantee that latency is good _all the time_. Having response times ranging randomly from, say, 0.1ms to 0.5s (which might be the case on general-purpose OSes) is a lot worse than having a *guaranteed* response time of e.g. 0.5-1.0ms.
  • by iamdrscience ( 541136 ) on Thursday January 29, 2004 @11:54AM (#8123977) Homepage
    Using Flash ROM as RAM is a baaaaaad idea. There actually is a limit to how many times it can be rewritten; you'd just never reach it with any normal use of flash (i.e. as storage space), but it's definitely within reach when you use it as RAM (or as swap space). In fact, you'll wear it out pretty quickly (it depends on the system design and activity, but I would guess maybe a few weeks for this one).
  • by The_K4 ( 627653 ) on Thursday January 29, 2004 @11:54AM (#8123980)
    As a side effect, this means that Opportunity probably has the SAME bug! :(
  • by badasscat ( 563442 ) <basscadet75@@@yahoo...com> on Thursday January 29, 2004 @12:38PM (#8124439)
    Those 601s are a very nice chip, and quite underestimated at what they can do at low clock speeds. If the RAD6000 is anywhere similar, I can understand why it was picked.

    I doubt it, because it probably had little to do with performance. I read an article a while back (tried to Google it now, couldn't find it) that said NASA is generally about ten years behind consumer PCs in terms of the CPU speeds they send into space. These things have to go through so much testing for reliability, etc. that it often takes that long to certify a CPU for spaceflight. When they do find a CPU they can certify, they'll often continue using it for years. They won't just stick the latest PPC chip in a vehicle, assuming it's the same as the last one, only faster. That's the quickest way I can think of to disaster.

    Not much CPU power is really needed for spaceflight, though, and there's not much point in taking along more than you need. We went to the Moon using slide rules. We can make it to Mars on 20MHz chips, and I'll bet the only reason we even need that much power is for the scientific experiments on the surface. Speed is not really the primary concern on space missions.
  • by Anonymous Coward on Thursday January 29, 2004 @01:19PM (#8124876)
    VxWorks!!!

    The fact that NASA put a probe on Mars using a 60's style OS architecture is an amazing tribute to the programmers who worked on it.

    - No implicit memory protection (protection domains - ha!)
    - Goofy GUI-based development environment
    - Questionable real-time performance (it works great if you shut down all services)
    - the list just goes on and on...

    Overall, I am surprised that NASA continues to use this outdated software. There are far better alternatives in the embedded real-time OS domain that would easily run in the same environment.

    And before everyone starts talking about Linux, forget it, it is too big and fat and impossible to rate for high reliability missions like this one. By the time you trimmed it down to size, you would be better writing your own OS.

    As a company, Wind River is dying, their market share is down, and people are starting to wake up to the fact that you need good programmers to write software, not goofy GUIs and outdated kernels.

    Ask anyone who programs embedded systems who has used VxWorks and you will always get the same response - "Man I hate that OS, and management wants us to use their tools, but they are useless".

    Amazing things can be done with a watchdog timer...Anyone who uses Wind River for anything remotely serious knows that.

    I just hope NASA picks something else when they send people to Mars, or else here will be the conversations:

    Astronaut: "Base, I have to hold my breath for a bit while my suit computer reboots, it must have crashed when I exhaled too much...damned OS crashed again!"
    Base: "Roger that, maybe you should reload your suit software? Or perhaps breathe slower?"
    Astronaut: "mmmmfffffff!"

    Wind River is like the Microsoft of the embedded world, except a choice still remains.
  • by DunbarTheInept ( 764 ) on Thursday January 29, 2004 @01:56PM (#8125357) Homepage
    Redundancy is useful for handling hardware errors, but this problem was in the software that keeps track of the flash filesystem. So having three computers would just mean you get the same problem three times, and your expensive redundancy would have bought you nothing. (They are having to take steps to make sure Opportunity doesn't encounter the same problem.)

  • by Cap'n Canuck ( 622106 ) on Thursday January 29, 2004 @05:36PM (#8128000)
    As great as QNX is compared to VxWorks, it
    A) Would never have been chosen as the OS of choice

    B) Will never BE chosen to replace VxWorks

    Why? It's a great company, but it's based in Ottawa (that's Canada for you Yankees), and NA$A Bucks do not flow over the border.
