Space Science Hardware

What's Inside the Mars Rovers 458

Captain Zion writes "Space.com has a story about the hardware and software of Mars Rovers Spirit and Opportunity. Basically, they're radiation-shielded, 20MHz PowerPC machines with 128Mb RAM and 256Mb of flash memory, running VxWorks. I wonder if I could make a nice firewall with one of these for my home network..."
This discussion has been archived. No new comments can be posted.

What's Inside the Mars Rovers

Comments Filter:
  • by }InFuZeD{ ( 52430 ) on Thursday January 29, 2004 @10:03AM (#8123487) Homepage
    Does a 20MHz processor really need 128MB of RAM? I mean, with a bus speed that low, you could probably write the data to flash ROM just as fast. What are the chances of using all 128MB of RAM?
  • Radiation hardness (Score:5, Interesting)

    by swordboy ( 472941 ) on Thursday January 29, 2004 @10:05AM (#8123501) Journal
    Does anyone know what the deal was with the flash memory that caused the outage? I heard something about a "solar event" that caused a problem with the flash memory that led to the outage. It was subsequently resolved by disabling the flash. If so, BAE Aerospace has a possible solution [baesystems.com] with their upcoming line of rad-hard memory.
    • It sounds more like it was actually just poor filesystem management, not a hardware failure or soft error.

      There have been some incriminating statements along the lines of never having tested the FS for long periods, having to delete extra data from the flash, etc.

    • by TwistedGreen ( 80055 ) on Thursday January 29, 2004 @10:26AM (#8123687)
      It appears to be a software error, and not hardware-related. It actually looks like it ran out of swap space and the OS crashed. This article [spaceflightnow.com] explains what they think happened, and this article [spaceflightnow.com] has more information on their recovery plans.

      A quote:
      It is now believed that the rover's flash memory had become so full of files that the craft couldn't manage all of the information stored aboard. Spirit bogged down because it didn't have enough random access memory, or RAM, to handle the current amount of files in the flash -- including data recorded during its cruise from Earth to Mars and the 18 days of operations on the red planet's surface.
      Raises some interesting questions about software reliability, I think. Did nobody think about running out of disk space?
      • by parc ( 25467 ) on Thursday January 29, 2004 @10:45AM (#8123879)
        They didn't run out of "disk space". They ran out of RAM. My guess is that when reading the "directory" structure of the flash, a data structure needed to be allocated in RAM. When the malloc for that structure failed, it counted as a hard failure and the system rebooted. Presumably, a malloc failing is a symptom of much larger problems in your system, problems that you won't be able to overcome, so the best thing is to wipe everything and start fresh.
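
        A minimal C sketch of the failure policy described above; the helper names here are invented, not taken from the flight software:

          #include <stdlib.h>
          #include <stdio.h>

          /* Hypothetical reset hook: log the failure, then spin until the
           * hardware watchdog times out and reboots the board. */
          static void fatal_reboot(const char *why)
          {
              fprintf(stderr, "fatal: %s -- waiting for watchdog reset\n", why);
              for (;;)
                  ;                      /* never returns */
          }

          /* Allocation wrapper: a NULL from malloc() is treated as an
           * unrecoverable system-level failure rather than an error the
           * caller is expected to handle. */
          void *must_alloc(size_t n)
          {
              void *p = malloc(n);
              if (p == NULL)
                  fatal_reboot("out of memory");
              return p;
          }

        Under a policy like this, every structure built while mounting the flash filesystem competes for the same RAM pool, which would be consistent with a too-full flash directory showing up as repeated reboots rather than a clean error message.
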
        • by AaronW ( 33736 ) on Thursday January 29, 2004 @12:43PM (#8125143) Homepage
          If anyone saw my earlier posts on VxWorks they would see I am not at all surprised about the problems NASA is having.

          As someone with first-hand experience with VxWorks, let me say that VxWorks' memory handling code sucks. Their malloc implementation has got to be the worst one ever designed. It fragments horribly, and when fragmented has unusable performance. A malloc call can take many milliseconds when memory gets fragmented. Our box used to crash due to the fragmentation. I replaced Wind River's code with DLMalloc and that fixed the memory issue. We went from many tens of thousands of fragments to only a few dozen. Our reliability significantly increased as well after dumping Wind River's brain-dead malloc code. BTW, glibc uses a variant of Doug Lea's malloc code, so it's been widely tested.

          Furthermore, in VxWorks there is no way to identify what process malloced a block of memory. There is no memory protection either (think DOS). If a task has a memory leak, VxWorks does not provide any method of tracking down the culprit. I had to add that support to the DL Malloc code I ported so we could find memory leaks and general memory usage by task.
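
          For illustration, a rough sketch of that kind of per-task accounting layered on top of malloc (the task-ID lookup here is a stub; a real port would ask the RTOS, e.g. via VxWorks's taskIdSelf()):

            #include <stdlib.h>
            #include <stdint.h>

            /* Stand-in for asking the RTOS which task is running; kept as a
             * stub so the sketch is self-contained. */
            static uint32_t current_task_id(void) { return 0; }

            /* Header prepended to every allocation so the owner and size of a
             * block can be recovered later when hunting for leaks.  (A real
             * version would pad this to keep the payload fully aligned and
             * would also keep per-task byte counters.) */
            typedef struct {
                uint32_t owner;   /* task that called tracked_malloc() */
                uint32_t size;    /* requested size */
            } alloc_hdr_t;

            void *tracked_malloc(size_t n)
            {
                alloc_hdr_t *h = malloc(sizeof *h + n);
                if (h == NULL)
                    return NULL;
                h->owner = current_task_id();
                h->size  = (uint32_t)n;
                return h + 1;             /* caller sees only the payload */
            }

            void tracked_free(void *p)
            {
                if (p != NULL)
                    free((alloc_hdr_t *)p - 1);
            }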

          Since malloc is critical to the working of VxWorks, if you run out of memory you are basically completely dead. Often the only way to recover is to wait for the hardware watchdog timer to kick in and reboot the system.

          If a task dies in VxWorks with a bad pointer there is no way to recover other than reboot. The OS will not clean up after a task (i.e. free memory, close files, release semaphores).

          As far as flash goes, VxWorks supports the FAT16 file system. As you know, FAT16 sucks. It only supports a limited number of files in the root directory. It's relatively easy to corrupt, and when corrupt it tends to corrupt itself even worse. There is no wear leveling support. If the FAT table or root directory gets corrupted, you're screwed. VxWorks is even worse since there aren't tools to fix a corrupt file system.

          VxWorks is not a scalable OS. The OS gets slower as the number of tasks increases. Realtime support sucks. Although it has support for things like priority inheritance to prevent priority inversion, the best guaranteed realtime latency is half the system tick period (the tick period is usually 10ms).

          Also remember that unlike open source operating systems, the source code to VxWorks is not available unless you pay some major $$$. Without the source you're basically working blind.

          VxWorks is an old RTOS, and its age is definitely showing. It is not a robust OS.

          As far as turning VxWorks into a firewall, you'll need to write all your own code. The VxWorks TCP/IP stack is an archaic, vulnerable version of the BSD stack. TCP sequence number guessing is trivial. There is no built-in support for firewalling, NAT, or anything else. I have heard many, many complaints about the VxWorks networking code. Although the box I'm working on is a router and broadband remote access server, we don't use the VxWorks TCP/IP stack much. I am sure the VxWorks stack is vulnerable to many of the current DoS attacks as well.

          Some of you might say why not use VxWorks AE, which adds memory protection. Think hacked-in memory protection far worse than Windows 95. It's slow, very buggy, and poorly supported. We tried AE and after many months we dumped it and went back to the non-AE version. Very few companies actually went with AE.

          Today there are many other choices for an RTOS that are better than VxWorks. For our next major project we're looking at both QNX and TimeSys Linux. QNX is a true microkernel design with the core being around 70K. Every driver is protected and can be restarted if it crashes. I think you can even buy a medical grade version of QNX. TimeSys Linux is also pretty cool, with excellent real-time support and all the advantages of Linux. For something like the Mars rover, QNX would be better due to the limited amount of memory and greater robustness.

          Wi
          • by Anonymous Coward
            Your comments about VxWorks show me that you don't have well-established practical Real Time OS (RTOS) experience and are trying to compare an RTOS to a server or desktop OS. That is quite a bit misleading. For example,
            consider the use of malloc()/free(). In an RTOS it does not make much sense to dynamically allocate and free memory at run time. The same thing applies to other system resources, like tasks/processes. It takes quite a bit of time, from a real-time point of view, to create new tasks, semaphores, ob
            • by AaronW ( 33736 ) on Thursday January 29, 2004 @04:20PM (#8127813) Homepage
              While I agree with you that avoiding malloc and preallocating memory is the way to go, it is not always possible. In my case, we are using various 3rd party libraries, and changing them to use static memory allocation would be prohibitive. In at least one case, the third party library source code was not available. Also, in many cases the dynamic nature of some algorithms requires dynamic memory management. You cannot statically allocate everything, especially in a limited memory environment.

              I know we're not the only ones to have been burned by Wind River's malloc. I know several major companies that also had to replace Wind River's code.

              As far as being able to dynamically replace code, VxWorks isn't alone in that. Numerous other RTOSes out there can do the same thing, including QNX. QNX even supports the concept of a hot standby process to take over if the main process dies.

              To give you an idea about how Wind River's malloc works, they keep a sorted linked list of fragments from the smallest to the largest. When you try to allocate a block, it walks the linked list until it finds a block large enough. Likewise, when you free a block it checks if it can coalesce the block with a neighboring block. It then goes through the linked list looking for a slot to insert the free block.
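
              A toy model of that size-sorted free list (not Wind River's actual code) shows why allocation cost grows linearly with the number of fragments:

                #include <stddef.h>

                /* One free fragment in a list kept sorted from smallest to largest. */
                typedef struct frag {
                    size_t       size;
                    struct frag *next;
                } frag_t;

                /* Walk the sorted list until the first fragment big enough is
                 * found, then unlink it.  With tens of thousands of fragments
                 * this walk is what turns a malloc() into a multi-millisecond
                 * operation; freeing pays a similar walk to re-insert the
                 * coalesced block at the right place. */
                frag_t *take_fragment(frag_t **head, size_t want)
                {
                    frag_t **pp = head;
                    while (*pp != NULL && (*pp)->size < want)
                        pp = &(*pp)->next;

                    frag_t *hit = *pp;
                    if (hit != NULL)
                        *pp = hit->next;
                    return hit;
                }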

              Yes, VxWorks may have been around since the 80's, but that's part of the problem too and it is showing its age. In the 80s embedded processors typically did not have MMUs. Now MMUs are quite common in the more powerful embedded processors.

              You say you can't have low latency and memory protection? QNX proves that you can. It is low latency and *very* robust. If your driver dies, no problem, restart it. Timesys Linux also has a very low latency, although not as low as QNX. Timesys also has an interesting feature where you can guarantee CPU and networking resources. I can schedule a task to be guaranteed 5.8ms of execution every 8.3ms and it will guarantee that the task gets the CPU time allotted to it with the desired resolution. This is without increasing the system tick rate (usually 10ms). Timesys can also schedule a task to be higher priority than an interrupt. I'm not as familiar with QNX's scheduler, but it's also quite flexible from what I've heard.

              As far as FAT, it is not a robust filesystem. It never has been. If the FAT gets corrupted or a directory entry gets corrupted, it's difficult to recover. Other than possibly having 2 copies of the FAT cluster table, any corruption can be difficult to repair. If the FAT table gets corrupted, which table is corrupt and which is not? If a directory entry gets corrupted, it can be impossible to fix. For flash memory, unless you are using a device with special wear-leveling, FAT is about the worst choice since any file write that changes the size of a file requires a write to the directory entry and possibly the FAT table. If the table gets corrupted and you don't run a repair operation (which often ends up leaving orphaned files as lost clusters), the file system can happily corrupt itself to death. Why do you think every time DOS/Windows9x/ME crashed it had to repair the disk with scandisk? FAT is a poorly designed file system that was originally designed for 160K floppies and scales poorly. FAT32 is an improvement, but it's still not very robust. For flash, something like Linux's journalling flash file system 2 (JFFS2) [redhat.com] is a much better fit. More information on VxWorks file system support can be found here [xs4all.nl].
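
              For illustration, here is a simplified FAT16 directory entry (field layout per the published FAT16 format; the struct and field names are made up here) and the on-media writes a single append implies:

                #include <stdint.h>

                /* Simplified 32-byte FAT16 directory entry. */
                #pragma pack(push, 1)
                typedef struct {
                    char     name[11];       /* 8.3 file name */
                    uint8_t  attr;           /* attribute flags */
                    uint8_t  reserved[10];
                    uint16_t mtime;          /* last-modified time */
                    uint16_t mdate;          /* last-modified date */
                    uint16_t first_cluster;  /* head of this file's chain in the FAT */
                    uint32_t size;           /* file length in bytes */
                } fat16_dirent_t;
                #pragma pack(pop)

                /* Appending to a file means, at minimum:
                 *   1. writing the new data cluster(s);
                 *   2. if the file grew past a cluster boundary, rewriting the
                 *      FAT sector(s) holding its cluster chain;
                 *   3. rewriting the 32-byte directory entry with the new size.
                 * Without wear leveling, steps 2 and 3 hammer the same few flash
                 * sectors over and over, and a reset between the steps leaves the
                 * FAT, the directory and the data disagreeing with each other. */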

              Basic VxWorks information can be found at http://www.slac.stanford.edu/exp/glast/flight/docs/VxWorks_2.2/vxworks/guide/ [stanford.edu].

          • As great as QNX is compared to VxWorks, it
            A) Would never have been chosen as the OS of choice

            B) Will never BE chosen to replace VxWorks

            Why? It's a great company, but it's based in Ottawa (that's Canada for you Yankees), and NA$A Bucks do not flow over the border.
      • by The_K4 ( 627653 ) on Thursday January 29, 2004 @10:54AM (#8123980)
        As a side effect, this means that Opportunity probably has the SAME bug! :(
  • by Chilltowner ( 647305 ) on Thursday January 29, 2004 @10:05AM (#8123508) Homepage Journal
    Darn. Interesting articles, but I was hoping the inside was filled with a creamy nougat center. Oh, wait. I'm thinking of a Mars bar. Never mind.
    • by QEDog ( 610238 )
      The question that we all want to know is, how do they drive it? I imagine that they have 3 identical car cockpits, with steering wheel, brakes and gas pedal. 3 different engineers drive it, voting on their actions for redundancy. If one of them dies, or goes to the bathroom, or simply starts honking like a mad man, still the other 2 could respond.
  • Wait a second... (Score:5, Insightful)

    by deitel99 ( 533532 ) on Thursday January 29, 2004 @10:05AM (#8123510)
    The machines aren't as slow as the top post says... they don't run at 20MHz, they are "capable of carrying out about 20 million instructions per second". Depending on the complexity of the instructions, the processor actually runs several times faster than 20MHz.
    • Comment removed (Score:4, Informative)

      by account_deleted ( 4530225 ) on Thursday January 29, 2004 @10:08AM (#8123542)
      Comment removed based on user account deletion
      • Re:Wait a second... (Score:3, Informative)

        by Thuktun ( 221615 )
        deitel99: The machines aren't as slow as the top post says... they don't run at 20MHz, they are "capable of carrying out about 20 million instructions per second". Depending on the complexity of the instructions, the processor actually runs several times faster than 20MHz.

        danheskett: That's an excellent point. A lot of people are thinking instruction = 1 cycle. The real world is that it's not unusual for an instruction to take 2, 4, 10, or even 100 cycles. The reality of the matter is that instructions c
    • by Zog The Undeniable ( 632031 ) on Thursday January 29, 2004 @10:10AM (#8123564)
      Actually, they're probably slower :-P

      Modern superscalar (pipelined) processors have a lot more MIPS than megahertz.

    • Although since they're PowerPC chips, of course, even if they ran at 20 MHz, they'd still be a *lot* faster than comparable Intel. You can't just go by the specs. You have to benchmark it in real world applications and think about overall system performance...
      • by Anonymous Coward
        Don't you mean you need to benchmark in out-of-this-world applications?
    • Re:Wait a second... (Score:5, Informative)

      by Rootbear ( 9274 ) on Thursday January 29, 2004 @10:32AM (#8123748) Homepage
      Actually they are running at 20MHz. I've seen several write ups which clearly state that. The RAD6000 can apparently run at up to 33MHz, with a claimed 35MIPS. The rovers are "underclocked", probably due to power budget concerns.

      Go to
      http://www.iews.na.baesystems.com/space/rad6000/rad6000.html
      and click on the rover picture to get a PDF brochure, which gives the 33MHz/35MIPS figure.

      Rootbear
      • by questamor ( 653018 ) on Thursday January 29, 2004 @10:49AM (#8123916)
        One chart I've seen of IBM's POWER chips and their derivatives had an entire section devoted to the PPC chips, and the RAD6000 wasn't included in these, but was an offshoot to the side, branching just before the PPC601.

        By all other standards, however, they seem to be closely related in time and technology to the 601, which powered the PowerMac 6100, 7100, 8100, 9150, 7200, 8200 and 7500 PPCs, as well as, I think, one of IBM's ThinkPads.

        Those 601s are a very nice chip, and quite underestimated at what they can do at low clock speeds. If the RAD6000 is anywhere similar, I can understand why it was picked.
        • Those 601s are a very nice chip, and quite underestimated at what they can do at low clock speeds. If the RAD6000 is anywhere similar, I can understand why it was picked.

          I doubt it, because it probably had little to do with performance. I read an article a while back (tried to Google it now, couldn't find it) that said NASA is generally about ten years behind consumer PC's in terms of the CPU speeds they send into space. These things have to go through so much testing for reliability, etc. that it ofte
  • by Faust7 ( 314817 ) on Thursday January 29, 2004 @10:06AM (#8123514) Homepage
    To survive the frigid Martian night, MER computers are housed in a warm electronics box heated by a combination of electric heaters, eight radioisotope heater units as well as the natural warmth from the electronics themselves.

    Just leave off the heatsinks and fans, and everything should be fine.
    • Re:Self-warming (Score:5, Interesting)

      by Cyclopedian ( 163375 ) on Thursday January 29, 2004 @10:09AM (#8123559) Journal
      To survive the frigid Martian night, MER computers are housed in a warm electronics box heated by a combination of electric heaters, eight radioisotope heater units as well as the natural warmth from the electronics themselves. [Emphasis added by me]

      If obsessed environmentalists don't like NASA sending up probes with any radioactive material ('it might blow up, ohh..'), then how did this little tidbit get by them? Do they consider it non-radioactive? If they're only concerned by radioactive propulsion systems, then I think they're a bunch of hypocrites. Radioactivity is radioactivity, whether it's propulsion or heating.

      If they don't mind it, then let's send up those dune buggies with RTG and 18-inch wheels and cover a lot more of Mars.

      -Cyc

      • Re:Self-warming (Score:5, Informative)

        by JDevers ( 83155 ) on Thursday January 29, 2004 @10:24AM (#8123678)
        If I'm not mistaken, virtually all probes have some sort of radioisotope heater...

        Radioactivity is NOT radioactivity when you are considering things like this. Saying the people who don't want nuclear-powered rockets should hate this as well or else they are hypocrites is tantamount to saying that the people who don't like oil spills should bitch about how some motor oil ALWAYS stays in the plastic container it is shipped in. Not quite the same problem. After all, these things aren't much more radioactive than a Coleman lantern wick or a smoke detector element...
      • Re:Self-warming (Score:3, Insightful)

        by Waffle Iron ( 339739 )
        If they don't mind it, then let's send up those dune buggies with RTG and 18-inch wheels and cover a lot more of Mars.

        I don't know, but I'd guess that for rovers of this size and weight, it might be easier to use solar panels than to figure out how to dump all of the waste heat that an RTG would produce. These devices are highly inefficient in terms of converting thermal energy to electrical energy, so they are probably spewing kilowatts of thermal energy at all times, and there's no way to shut them off

      • Re:Self-warming (Score:5, Informative)

        by The Fun Guy ( 21791 ) on Thursday January 29, 2004 @10:36AM (#8123795) Homepage Journal
        Radioisotope thermoelectric power units need to be hot enough to allow for electricity to be generated by thermocouples placed between the unit and the heat sink (space). A quick Google search gives 200-500 watts of power generated from multiple interleaved stacks of plutonium-238 or strontium-90, average radioactive source strength of around 50,000 curies, depending on design.

        Radioisotope heaters use much less material, as they only need enough heat to keep the warm electronics box above -40F or so. From the Environmental Impact Statement in the Federal Register ([wais.access.gpo.gov][DOCID:fr10de02-54]):

        "Each rover would employ two [calibration] instruments that use small quantities of cobalt-57 (not exceeding 350 millicuries) and curium-244 (not exceeding 50 millicuries) as instrument sources. Each rover would have up to 11 RHUs that use plutonium dioxide to provide heat to the electronics and batteries on board the rover. The radioisotope inventory of 11 RHUs would total approximately 365 curies of plutonium."

        Nothing you'd like to swallow, but still, much smaller than a radioisotope power unit.
      • Re:Self-warming (Score:3, Interesting)

        by A55M0NKEY ( 554964 )
        Someone said a while back that a house made of aerogel would need only a candle to heat it. Surely surrounding the electronics in an aerogel box would trap so much heat they would melt themselves. I understand that they have to shut the electronics off at night, so the radioisotope heaters make sense, but why all the other heaters?
    • by Zog The Undeniable ( 632031 ) on Thursday January 29, 2004 @10:12AM (#8123579)
      If only they'd used an Athlon, the planet would have been habitable in Bermuda shorts by the end of the week.
  • by Powercntrl ( 458442 ) on Thursday January 29, 2004 @10:06AM (#8123515) Homepage
    But I'd take a Linksys over a hacked Mars Rover any day... Billions cheaper, ya know.
  • Radiation Shielding (Score:4, Interesting)

    by kyknos.org ( 643709 ) on Thursday January 29, 2004 @10:06AM (#8123521) Homepage
    How is it done? Some external armor, or are even the insides of the chip different?


    • by pi eater ( 714532 ) on Thursday January 29, 2004 @10:10AM (#8123569) Homepage
      A modded alienware case perhaps?

      geeky stuff.. offensive stuff! [wabshirts.com]
    • by the real darkskye ( 723822 ) on Thursday January 29, 2004 @10:15AM (#8123601) Homepage
      The CPU is fabricated to withstand the radiation, a brief summary can be found here [nasa.gov] or by googling
    • by PhuCknuT ( 1703 )
      Both.

      They have extra shielding on the outside, and the electronics on the inside are designed to dissipate sudden charges created by radiation hits.
    • by shawnce ( 146129 )
      A lot is done with extra shielding, but often radiation-hardened chips use larger feature sizes than modern equivalents. The larger the features, the more resilient they can be to particle/energy hits. Basically, they are harder to damage permanently.
      • There's more to it than just feature size. You need the right substrate doping and circuit structures that can dissipate the charge introduced by a particle without having it course through the parts of the chip most susceptible to being fried, or perhaps just having their logic state flipped.
    • by emil ( 695 ) on Thursday January 29, 2004 @10:38AM (#8123808)

      ...as the substrate of the chip, rather than a silicon wafer, so the chip was a "sapphire" chip rather than a silicon chip (although doped silicon could then be used to form transistors, as could Gallium Arsenide or Germanium, through the regular lithographic process).

      This is the classic "Silicon On Insulator." IBM has a process of embedding a layer of glass beneath the surface of a standard silicon wafer, allowing SOI using silicon substrates. This and their work with copper set them apart from the other large silicon transistor foundries (TSMC, Intel, etc.).

      The processors on the rovers are probably SOI, but I don't know which process is used.

  • Redundancy Check? (Score:3, Insightful)

    by shlomo ( 594012 ) on Thursday January 29, 2004 @10:07AM (#8123528)
    I read somewhere that the shuttle spacecraft has 6 computers for redundancy checks.

    Your average plane has a triple backup system. I spoke to an engineer and he said preflight checks are usually just making sure two of the systems are still working.

    You'd think they could at least send up some more hardware with these little critters. The extra weight would pan out when things go bad... case in point: see what they are dealing with now :)

    • by snake_dad ( 311844 ) on Thursday January 29, 2004 @10:17AM (#8123623) Homepage Journal
      That way of thinking would make the cost of robotic space exploration approach that of human space exploration. Plus, the rover will not crash with loss of life in case of a minor computer failure. There is a much bigger margin for troubleshooting "in the field" than with aircraft or manned spacecraft.

      Of course, NASA did implement a measure of redundancy by sending two rovers instead of just one.

    • Re:Redundancy Check? (Score:5, Informative)

      by vofka ( 572268 ) on Thursday January 29, 2004 @10:29AM (#8123714) Journal
      If I recall correctly, the Shuttle has 5 GPC's (General Purpose Computers), three of which are "online" at any one time.

      The online GPC's each carry out the same set of calculations (potentially each uses code designed to do the same thing, but written by different programmers), and they compare each others results. If any single GPC is considered to be too far wrong, the offline GPC's submit their answers. The three GPC's that are in closest agreement then become the new online GPC's, and the remaining two go offline. The GPC's can reboot themselves if they are too far out of whack, if they fail in one of the "results elections", and of course when they are told to do so by the crew.
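
      A toy illustration of that kind of result voting (nothing like the real GPC code): the median of three channel outputs outvotes a single wild value, and any channel too far from the agreed answer gets flagged for removal.

        #include <math.h>
        #include <stdio.h>

        /* Median-of-three select: one faulty channel cannot drag the chosen
         * result away from the two that agree.  Channels further than `tol`
         * from the median are flagged as candidates to take offline. */
        static double vote3(const double r[3], double tol, int fault[3])
        {
            double med = fmax(fmin(r[0], r[1]),
                              fmin(fmax(r[0], r[1]), r[2]));
            for (int i = 0; i < 3; i++)
                fault[i] = fabs(r[i] - med) > tol;
            return med;
        }

        int main(void)
        {
            double results[3] = { 101.2, 101.3, 250.0 };  /* third channel is off in the weeds */
            int fault[3];
            double chosen = vote3(results, 1.0, fault);
            printf("voted result %.1f, faults: %d %d %d\n",
                   chosen, fault[0], fault[1], fault[2]);
            return 0;
        }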

      Also, whenever a GPC is sent offline by one of the others, a specific caution indicator (and potentially the master caution indicator and klaxon) is activated, and the relevant error codes are shown on one of the forward CRT's. The error codes, along with other information such as the currently running program and the current mission phase, determine the crew's actions. Actions can be as simple as disabling the master caution klaxon for the current alert, all the way to hand-checking certain results and manual GPC restarts.

      This is all from memory (from about 5 years back), so some of this may have changed recently, particularly on Atlantis with the "glass cockpit" upgrade that happened 18 months or so ago, but the general gist should be about right (and I'm sure I'll soon know if it isn't!!)
    • Redundancy is useful for handling hardware errors, but this problem was in the software that keeps track of the flash filesystem. So having three computers would just mean you get the same problem three times, and your expensive redundancy would have bought you nothing. (They are having to take steps to make sure Opportunity doesn't encounter the same problem.)

  • by blorg ( 726186 ) on Thursday January 29, 2004 @10:09AM (#8123552)
    "I wonder if I could make a nice firewall with one of these for my home network..."

    You could, but the latency would be a bitch.
  • by Cutriss ( 262920 ) on Thursday January 29, 2004 @10:09AM (#8123555) Homepage
    Basically, they're radiation-shielded, 20MHz PowerPC machines with 128Mb RAM and 256Mb of flash memory, running VxWorks.

    Mb = Megabits
    MB = Megabytes.

    The article writes out megabytes, so MB should be used, not Mb!
  • by mikesmind ( 689651 ) on Thursday January 29, 2004 @10:10AM (#8123568) Homepage
    "It's quite unusual to have a single computer for the whole mission," Scuderi said, adding that many missions tend to have redundant systems as a guard against failure.
    Now, while having two rovers is a form of redundancy, wouldn't it be wise to have some redundancy on each individual rover? I understand that there are concerns like weight and budget, but wouldn't some redundancy be a good form of risk management?
  • Ouch (Score:3, Insightful)

    by savagedome ( 742194 ) on Thursday January 29, 2004 @10:13AM (#8123590)
    In addition to VxWorks' reliability, the system allows users to add software patches -- such as a glitch fix or upgrade -- without interruption while a mission is in flight. "We've always had that [feature] so you don't have to shut down, reload and restart after every patch," Blackman said, adding that some commercial desktop systems require users to reboot their computers after a patch

    The bold emphasis is mine but that is a big Ouch for Microsoft.


    RAD6000 microprocessors are radiation-hardened versions of the PowerPC chips that powered Macintosh computers in the early 1990s
    Shouldn't Apple be using this in their commercials somehow to further boost their reliability? I am sure the marketing people can put it in a way that non-geeks watching TV can relate to, right?

    • Re:Ouch (Score:4, Informative)

      by Quasar1999 ( 520073 ) on Thursday January 29, 2004 @10:19AM (#8123646) Journal
      Ah dude... Microsoft is totally different... VxWorks is based on the fact that all memory is linearly accessible... there is no memory protection... that works great when all your apps are written in-house, and you know the timer ain't gonna trash the motor controller...

      But try that on an OS for a desktop system, and your email program just may blow up your paint program... (remember Windows 3.1's stability? Make that 10x worse)... You can't use VxWorks for the desktop as Windows is used today... it needs a lot of protection... The ease of upgrading is due to the lack of protection...
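
      A tiny, self-contained C demonstration of that point, with one flat pool of memory standing in for an unprotected VxWorks-style address space: an out-of-bounds write by one component silently lands in another component's state instead of faulting.

        #include <stdio.h>
        #include <string.h>

        /* One flat pool of RAM, carved up purely by convention: the first 16
         * bytes "belong" to the mail task, the next 4 to the motor task.
         * Nothing enforces the boundary. */
        static unsigned char flat_ram[20];

        int main(void)
        {
            int setpoint = 100;
            memcpy(flat_ram + 16, &setpoint, sizeof setpoint);  /* motor task's state */

            /* The mail task clears "its" buffer but gets the length wrong. */
            memset(flat_ram, 0, 20);                            /* should have been 16 */

            memcpy(&setpoint, flat_ram + 16, sizeof setpoint);
            printf("motor setpoint is now %d\n", setpoint);     /* prints 0: no fault, no warning */
            return 0;
        }
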
  • by Hiroto. S ( 631919 ) on Thursday January 29, 2004 @10:16AM (#8123611) Journal
    I googled across the following presentation with a little more detail.

    Flying VxWorks to Mars [colorado.edu]


  • ...read the URL to VxWorks as WinDriver.com instead of WindRiver.com?

  • by vpscolo ( 737900 ) on Thursday January 29, 2004 @10:19AM (#8123644) Homepage
    And it can still send back at 128 kbits/sec, which is faster than my connection can manage. Just waiting for it to start getting spam advertising pr0n and viagra.

    Spirit Rover: Staying up longer and harder

    Rus
  • Not 20MHz (Score:2, Informative)

    But 20 MIPS
  • I noticed something odd in the latest shots from the Rover. Just on the horizon:

    http://idisk.mac.com/charlesholt/public/DuckDodgers.jpg
  • Beagle-2 (Score:3, Interesting)

    by TheSurfer ( 560640 ) on Thursday January 29, 2004 @10:27AM (#8123698)
    An interview with one of the Beagle-2 software developers can be found here: http://linuxdevices.com/articles/AT7460495111.html
  • by bhima ( 46039 ) <Bhima@Pandava.gmail@com> on Thursday January 29, 2004 @10:28AM (#8123702) Journal
    What NASA should do is hire a Taiwanese firm to build inexpensive knock-offs of Sojourner. They already have the design; I'm sure a few bright minds could cut the chassis price down significantly; after all, we don't need all the exotic materials. I'm sure IBM still makes a PPC variant that would make a new cheap board layout easy. As far as the OS goes: of course we don't need VxWorks (nor could the project afford it); we have NetBSD!

    The profits from Slashdot alone could extend the life of HST or launch the James Webb Space Telescope early.

    I thought about the current rovers, but I think they are a bit large to be successful!

  • Mac users.. (Score:3, Funny)

    by JayPee ( 4090 ) on Thursday January 29, 2004 @10:30AM (#8123731)
    You know what's annoying about this story now that it's making the rounds?

    Mac users everywhere take this as "Oooohh.. there's a Mac inside of those things!" "There are Macs on Mars!" Bleah.

    And before you mod me down, realize that I'm an unrepentant Mac user and an Apple Authorized Service Tech.
  • by sammyo ( 166904 ) on Thursday January 29, 2004 @10:31AM (#8123746) Journal
    Shouldn't each rover be required to have a different OS? I mean this must just quash the Martian software industry.

  • by GileadGreene ( 539584 ) on Thursday January 29, 2004 @10:34AM (#8123773) Homepage
    radiation-shielded, 20MHz PowerPC machines

    No, they're not.

    The processors in MER are RAD6000 [baesystems.com]'s, which are radiation-hardened versions of the RS/6000, the predecessor to the PowerPC (see this [baesystems.com] for details). The RAD6000's younger brother, the RAD750, is indeed a rad-hardened PowerPC.

    As an aside, there is a big difference between a radiation-shielded processor and a radiation-hardened processor. Shielding implies just sticking some kind of rad-absorbent material between the processor and the environment. A rad-hardened processor is actually manufactured in a different way - different gate layout, different design rules, often different materials (Silicon-on-Insulator is popular). These things are done to minimize or prevent the effects of single-event upsets (when a bit is flipped by high-energy particles) and single-event latchups (which basically turn a couple of gates into a glorified short-to-ground). The materials changes may also improve the overall total dose tolerance of the processor. The work required for redesign is one of the reasons that space-qualified rad-hard processors lag the commercial market. The NASA Office of Logic Design [klabs.org] has some good papers on space processors available online if you're interested in learning more.

    • by addaon ( 41825 ) <addaon+slashdot@gma i l .com> on Thursday January 29, 2004 @11:26AM (#8124312)
      The RAD750, btw, is a deeply cool chip. Once it's mature enough to start using for scientific-level stuff, it will be a real revolution in what we can do. One of the limitations with Hubble was that it had so little processing power that a full data dump needed to be done even to check orientation; there was no ability to offload processing to the sat. If the 750, or something similar (not that I know of anything too similar), is up there for our next big telescope, it will make a real difference in the efficiency of how it is used.
  • by snake_dad ( 311844 ) on Thursday January 29, 2004 @10:35AM (#8123777) Homepage Journal
    While JPL scientists have expressed confidence they can overcome the problem, possibly in a matter of weeks, the hardware and software manufacturers behind the computer system are more than ready to help.

    "If they ask, we come," Blackman said, echoing the enthusiasm of BAE officials. "I think when something like this happens, the whole community responds to it."

    Maybe they should put up an image of the data, the diagrams of the systemboard, and descriptions of the symptoms, and submit it to Ask Slashdot. I'm sure there are a couple of VxWorks experts here who'd love to take a crack at this :)

  • by mnemotronic ( 586021 ) <mnemotronic.gmail@com> on Thursday January 29, 2004 @10:35AM (#8123782) Homepage Journal
    It's some OK hardware, given the source (primitive carbon-based species). We removed the antennas & cameras and turned it into a coffee table. The CPU was slower than most MDAs (martianal data assistants), but provided us with a laugh. Junior ate the instrument package before we could stop him. Naughty grzybfyx!
    Thanks.

    Sincerely, Marvin & Family
  • No, not PowerPC... (Score:3, Informative)

    by noselasd ( 594905 ) on Thursday January 29, 2004 @10:37AM (#8123796)
    No. It's a RAD6000 CPU. Not entirely your stock PowerPC chip.
    RAD6000 [baesystems.com]
  • FPGA's (Score:4, Interesting)

    by retostamm ( 91978 ) on Thursday January 29, 2004 @10:53AM (#8123969) Homepage
    There are also Xilinx FPGA's [xilinx.com] in the Rover. Cool thing because they can be reconfigured if you find a bug while the thing is in transit.

    Xilinx radiation-tolerant Virtex(TM) FPGAs are being used in the "main brain" of the rover vehicle, controlling the motors for the wheels, steering, arms, cameras and various instrumentation, enabling the vehicle to travel about the planet.

    They also controlled the pyrotechnic stuff during landing.

    [Disclaimer: I work for this great company.]

  • RAD6K (Score:5, Informative)

    by Anonymous Coward on Thursday January 29, 2004 @11:03AM (#8124069)
    I am an engineer that works with the RAD6K processor boards. A couple of observations here.

    1. The RAD6K really does run at 20 MHz. They're creakingly slow. They're spec'd to run up to 33 MHz, but the customer can get them to clock at lower speeds (I've seen them run at 12.5 MHz). The only drawback is that the PCI bus is also clocked at the same speed as the CPU. This is a mixed bag - but a slower PCI bus helps improve signal integrity and decreases power consumption.
    2. The board is PCI, but NOT CompactPCI. There is a proprietary PCI connector and a proprietary PCI backplane. You cannot plug in commercial PCI products unless you have an adapter to interface to the proprietary PCI connectors.
    3. For those who are not aware, there are three types of memory being used on the rovers. There is the SRAM (the RAD6K boards use SRAMs, not DRAMs), the EEPROM, and apparently, FLASH RAM. The EEPROM and the SRAM are on the processor board itself - there is probably more EEPROM memory in the system on another board. The EEPROM usually holds the flight code, and there are usually two copies. An original version that was launched with the spacecraft, and one patched version made possible via uplinks.
    4. I am amazed at the presence of FLASH RAMs. I am not aware of any rad-hardened FLASH RAM devices for spaceflight use. In addition to radiation hardness, the device must be made reliable with an approved QML-Q or V manufacturing flow. Radiation hardness is just icing on the cake; the key is that the device must be reliable enough to withstand temperature extremes, shock and vibration. So, I have yet to see a FLASH RAM device that can be used. I am aware of the chalcogenide-based RAMs, which essentially use the same substrate as CD-ROMs for their memory cells. These products are hard to come by right now and are a high risk because we don't have sufficient data and flight heritage. A catch-22 in flight design is that if it hasn't flown before, we don't want to fly it. But at some point, someone has to fly the first generation (someone who is willing to take a huge risk). Anyway, the FLASH RAMs on the rovers are in all likelihood upscreened commercial products. In other words, an entire lot of commercial FLASH RAM dies may have been bought in a mass buy, packaged in-house or through a vendor, and then screened for reliability at extended specifications. This is not the same as a manufacturer guaranteeing the specs by designing the part from the outset with increased reliability in mind.
    5. Radiation shielding? Minimal at best! The RAD6K shields its SRAMs by placing them on the underside of the processor board and orienting it such that particles hit the processor side of the board instead of the RAM side. There is some degree of radiation shielding for particles of sufficiently low energy. The truly high-speed particles are going to get through, and the only thing that will truly stop them is shielding whose thickness is measured in feet. That amount of shielding is too heavy for launch. The best we can do is mitigate the effects of radiation by guaranteeing devices can withstand a certain radiation dosage (measured in kRads) and design for latchup protection (latchup is a parasitic condition in which an ionizing particle impacts a transistor structure in a way that causes an SCR to be formed and a runaway current condition is initiated, leading to the device being burned out by high currents). Radiation effects in the form of SEEs (single event effects) such as bit flips can be mitigated by redundancy and voting circuits, memory scrubbing, and error checking using checksums/CRCs.
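
    A toy sketch of one of the software mitigations mentioned in point 5, scrubbing a RAM region against a checksum and a redundant copy (real flight hardware would normally lean on EDAC/ECC hardware and CRCs rather than this simple sum):

      #include <stdint.h>
      #include <stddef.h>

      /* Simple additive checksum over a block of 32-bit words. */
      static uint32_t checksum(const uint32_t *p, size_t words)
      {
          uint32_t s = 0;
          while (words--)
              s += *p++;
          return s;
      }

      typedef struct {
          uint32_t       *data;      /* working copy, exposed to upsets */
          const uint32_t *golden;    /* redundant reference copy */
          size_t          words;
          uint32_t        expected;  /* checksum recorded when last known good */
      } scrub_region_t;

      /* Run periodically from a low-priority task: catch a flipped bit before
       * it is consumed, and repair the region from the redundant copy. */
      int scrub(scrub_region_t *r)
      {
          if (checksum(r->data, r->words) == r->expected)
              return 0;                          /* region still clean */
          for (size_t i = 0; i < r->words; i++)
              r->data[i] = r->golden[i];         /* repair */
          return 1;                              /* upset detected and fixed */
      }
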
    • Re:RAD6K (Score:5, Informative)

      by demachina ( 71715 ) on Thursday January 29, 2004 @11:58AM (#8124661)
      "A catch-22 in flight design is if it hasn't flown before, we don't want to fly it. But at some point, someone has to fly the first generation (someone who is willing to take a huge risk)."

      Or you fly it as a non-mission-critical experimental payload, which is what we did back in the day when I worked on avionics. You fly it as an experimental package so it gets the stress, but if it breaks, either it's not in the mission-critical loop, or if it is in the loop you can switch back to proven hardware. I kind of assumed this would be a standard part of qualifying electronics for spaceflight as well, though it's obviously a lot more expensive. It's not feasible to test it on Mars due to the expense, but you could test it in geostationary orbit, where it will get lots of radiation and temperature extremes, as well as launch vibration and G's.
  • by SilverGiant ( 687839 ) on Thursday January 29, 2004 @11:05AM (#8124091)


    "I wonder if I could make a nice firewall with one of these for my home network..."

    1. build computer
    2. drop off on mars on way to grocery
    3. laugh in face of hacking
    4. reconsider when computer mistaken by rover for new type of rock, drilled for mineral content

  • by danwiz ( 538108 ) on Thursday January 29, 2004 @11:29AM (#8124335)

    As noted in a previous slashdot posting [slashdot.org], the software in the control room was written in Java.

    A ZDNet [com.com] article says Java made communicating between multiple software pieces very flexible and James Gosling, inventor of Java, spent considerable time helping develop the system. Sun [sun.com] also describes how the same application was used for the Pathfinder mission back in 1997.

  • by rshadowen ( 134639 ) on Thursday January 29, 2004 @12:55PM (#8125335)
    The RAD6000 is based on a POWER CPU called RSC (RIOS single chip) - it's not a PPC chip. This was a design that consolidated the 5- to 6-chip RIOS processor complex onto a single lower-performance die for low-end workstations. I worked on the development team at IBM.

    The RSC design played a key role in bringing Apple and Motorola together with IBM to create the PowerPC line of CPUs. The 601 was the first PPC and was basically a redesign of RSC. It supported both POWER and PPC architectures, although there were deviations from PPC since the architecture was actually being defined at the time we were working on the chip.

    The RAD6000 version of the design happened because IBM wanted to pursue some government contracts, so had the RSC specially qualified. Another group then took the design and performed the radiation hardening.

    After Pathfinder we had some cool IBM/Mars posters hanging around the building, but oddly enough they vanished very quickly...
