Space Technology

Computers in Space Examined

Wil Harris writes "There's an article about the computers used in space missions over at bit-tech this morning. It covers the processor types and speeds, why space stations are less powerful than the laptops that astronauts take up with them, and why tape storage is still de rigueur. An interesting and concise couple o' pages."
  • K.I.S.S. (Score:5, Insightful)

    by fembots ( 753724 ) on Friday April 22, 2005 @10:22PM (#12320012) Homepage
    The fewer components you have, the less likely you are to encounter a failure.

    I remember reading somewhere that most space missions are pre-determined and very straightforward; there's no need for difficult maneuvers like the ones you have to execute in an X-Wing.

    Having said that, there are still plenty of complicated, unexpected problems in space, but these problems have to be analysed and decisions made by people on Earth.

    I guess it's all circumstantial: I can't even operate my 2001 Toyota's electric windows if the engine's dead, but my 1989 Toyota has no such problem. So if I crashed into a river, I hope I was driving the '89, but if I'm crashing into another car, I want my '01.
    • Re:K.I.S.S. (Score:5, Funny)

      by AtariAmarok ( 451306 ) on Friday April 22, 2005 @10:26PM (#12320036)
      "The fewer components you have, the less likely you are to encounter a failure."

      Which is why the good old fashioned meteor, with one REALLY BIG moving part, is one of the most successful space vehicles ever. Ol' T-Rex can attest to its effectiveness.

    • "there's no need for difficult maneuver like one has to execute in a X-Wing." ...at least until space battles become a reality.
    • Re:K.I.S.S. (Score:5, Informative)

      by william_w_bush ( 817571 ) on Friday April 22, 2005 @10:31PM (#12320062)
      Also, older components tend to use larger signaling thresholds, which makes a big difference considering the magnetic flux caused by radiation in space is much higher than on Earth. I'd guess those tapes are also recorded with a high bias, as platters could be wiped by a decent flare, and fine-process CMOS chips could be knocked out completely by a surprisingly small charge. Even a small spike in power from a line surging or a regulator going bad could take down some hard disks, while all you have to do is rewind the tape and it's good.

      You only need enough CPU power to handle some basic tasks and send the rest down to Earth. Considering most of the software is in C or assembler, a 486 is an awful lot of power for most tasks.
    • Re:K.I.S.S. (Score:2, Insightful)

      by Fjornir ( 516960 )
      ...So if I crashed into a river, I hope I was driving the '89...

      Entirely offtopic, but do you wear a seatbelt? Can you unbuckle it when it's loaded (i.e., you're hanging upside down from it)? Get a seatbelt saw/window hammer. The combined tool is not much larger than your thumb, and it comes on a lanyard to hang from your rearview mirror. Some double as emergency flashlights.

      • Re:K.I.S.S. (Score:3, Informative)

        by the pickle ( 261584 )
        You must not have seen the Car and Driver article where they tested about five or six of those things.

        Absolutely, utterly, 100 percent useless. They couldn't break a window if you shot them out of a railgun. Seriously.

        p
        • Re:K.I.S.S. (Score:2, Interesting)

          by Fjornir ( 516960 )
          Nope. Don't read Car and Driver. Did break a window with one once. Dunno why Car and Driver would print that, but it wasn't hard.
    • Re:K.I.S.S. (Score:3, Insightful)

      by OneOver137 ( 674481 )
      The fewer components you have, the less likely you are to encounter a failure.

      True hardware box failures are taken care of by redundancies, not by limiting parts.

      I remember reading somewhere that most space missions are pre-determined and very straightforward

      Actually, the military likes to get the most mileage out of their assets and you would not believe some of the reprogramming that goes on to reconfigure the software to extend and/or modify a mission.

      but these problems have to be analysed
      • Re:K.I.S.S. (Score:3, Interesting)

        by TWX ( 665546 )
        "True hardware box failures are taken care of by redundancies,not by limiting parts."

        It's still very smart to use a small core of very expensive and high quality parts that can function entirely on their own, rather than to have a vast, interconnected system that needs most everything present and working in order to remain functional. It's kind of like Galileo and the Voyager probes, where the basic core was over-engineered the right way to withstand problems, while the external stuff was ultimately exp
    • Now please explain this simple concept to my professor, who keeps saying "Modern cell phones have 100x the processing power of the space shuttle!" I feel like I deserve a 10% refund for the class every time he tries to shock me with that statement.
  • by AtariAmarok ( 451306 ) on Friday April 22, 2005 @10:24PM (#12320024)
    "Open the pod bay doors, HAL!"

    'Nuff said (but there's something to be said for the Butlerian Jihad, and Cmdr Adama filling his battlestar with rotary phones and manual typewriters!)

    • Priorities (Score:5, Funny)

      by ravenspear ( 756059 ) on Friday April 22, 2005 @10:29PM (#12320051)
      Cmdr Adama filling his battlestar with rotary phones and manual typewriters!

      Something tells me he should focus on adding a few more medics first.
      • "Something tells me he should focus on adding a few more medics first." How about the medics of the Star Trek kind who can perform brain surgery and replace severed limbs with a few waves of a whistling chrome salt shaker?
  • Doesn't it seem very strange compared to the days when the government had supercomputers and regular people had no computers? A stark contrast indeed. Now we are... close to the same level? Does this sound realistic, or are there aces up their sleeves?
    • by Stevyn ( 691306 ) on Friday April 22, 2005 @10:51PM (#12320158)
      If you're referring to the article blurb, the article says this is because they have no need for a newer computer.

      If you're referring to the government in general not having ultra-powerful computers, it's because they don't need them. If Congress can allocate billions and billions to the war in Iraq, I think that if there were a serious need for computing power, they'd have purchased it already.
    • by Waffle Iron ( 339739 ) on Friday April 22, 2005 @11:09PM (#12320242)
      Doesn't it seem very strange compared to the days when the government had supercomputers and regular people had no computers?

      It's not all that different: The government has computers that will work reliably in outer space. The regular people don't.

    • A whole deck full of them. Browse through here [supercomputingonline.com] to get an idea. Uses range from nuclear physics and CFD to military applications, and to applications that will set off the tinfoil-beanie crowd... yeah, the .gov still has supercomputers. Heck, how do you think NOAA makes the weather forecasts that AccuWeather wants to (re)sell you?
    • by bm_luethke ( 253362 ) <`luethkeb' `at' `comcast.net'> on Saturday April 23, 2005 @01:31AM (#12320780)
      Govt never flew supercomputers into space; land-based, they still have vastly superior computing power. I doubt many of us have a multi-teraflop computer in their basement.

      Space has always been about reliability; you can't repair much when you are up there. All the heavy processing happens Earth-side. The gap has gotten *bigger* as time has gone on (look at the Top 100 or Top 500 supercomputing lists) - what we can run at home is *nothing* like what the large govt installations run. Even if you had the money, the local power board isn't going to run that kind of power into a residential zone, let alone will you have a large staff 24 hours a day to maintain your l337 system.
    • No aces up the sleeve. I used to work for a supercomputer manufacturer back in the late 80's. Supercomputers got run over by the "Attack of the Killer Micros" (as Eugene Miya used to say). You have to look at the amount of money that's being put into R&D.

      Intel is spending way more money (today) than any supercomputer manufacturer can possibly afford to. Back in the 80's it was possible (not easy, but possible) to wire up discrete components into fast processors. The Cray-1 had a 10ns clock - or 10
  • by Raul654 ( 453029 ) on Friday April 22, 2005 @10:26PM (#12320033) Homepage
    Dave Mills (inventor of NTP) told me that on the last Columbia shuttle mission, they were running some experiments with NTP in space. And, thankfully, they transmitted all their data before landing. But apparently, they were so overworked, they didn't have time to calibrate the machine properly, so sadly, the data is useless.
  • Ouch... (Score:4, Funny)

    by thegamerformelyknown ( 868463 ) on Friday April 22, 2005 @10:34PM (#12320076) Homepage
    The first manned space flight had a computer on board to control re-entry, but it was basic in the extreme - and locked so Comrade Gagarin couldn't tamper with it. An envelope with the code to unlock the computer was hidden somewhere in the capsule, and should an emergency arise, ground control would tell him where it was. Nice.

    Sounds nasty. I would at LEAST want to have some QUICK way of getting to it.

    Like a hammer :).
    • Radio waves travel at roughly 3.0 x 10^8 metres per second, and he was at most a few hundred kilometres up, so the signal delay to the capsule was only about a millisecond each way - not a long wait before he could be told where the envelope was and start figuring out how to operate the computer.
    • Unmentioned in the article is why the unlock code was hidden from him.

      The Soviets were afraid of a defection, which would be possible if he could run the navigation system himself.
    • Re:Ouch... (Score:2, Funny)

      by minkie ( 814488 )

      There's an old joke about what the airplane cockpit of the future is going to look like. It's going to have a computer, a pilot, and a dog.

      The pilot is there to feed the dog.

      The dog is there to bite the pilot if he tries to touch the computer.

  • by NanoGator ( 522640 ) on Friday April 22, 2005 @10:36PM (#12320082) Homepage Journal
    The basic gist of the article is "They don't use more than they really need". Unfortunately, this is not a complete answer.

    A company I used to work for discussed using some of their technology with NASA. One of the things they told us was that they preferred processors two or three years old because they were afraid of random bit flips caused by radiation, etc.

    (Sadly, I wasn't in on this whole conversation, so I doubt I can effectively answer some of the questions that arise. For example, I'm not sure why the processors had to be a couple of years old. I assume it had to do with shielding or something, but I really don't know. If anybody has insight on this topic, I'd really really like to be enlightened.)
    • It has something to do with the surface density of registers on the chip. On a high-performance chip there are many more registers in the same area, so a single hit can leave an irreparable string of flipped bits.
      • Comment removed (Score:5, Informative)

        by account_deleted ( 4530225 ) on Friday April 22, 2005 @10:51PM (#12320161)
        Comment removed based on user account deletion
        • Re:About bit flip (Score:3, Interesting)

          by helioquake ( 841463 ) *
          Yep, that's right. I've been using CCDs at ground level for a long time, seeing cosmic rays zap through a tiny detector even in a short integration time (a few seconds). We are also surrounded by mildly radioactive materials (some paints, rocks, etc.), which can cause radiation damage as well.

          And surely that's part of the reason I always buy ECC RAM for mission-critical stuff.
        • Re:About bit flip (Score:3, Interesting)

          by Mondoz ( 672060 )
          On board the International Space Station, they run programs checking for bit flips.
          Single Event Upsets happen occasionally, but it's difficult to tell whether they're associated with actual hardware failures or just coincidental.

          They have two networks of IBM A31p laptops: one for Command & Control of the station (the PCS) and another for situational awareness, procedure viewing, inventory tracking, office tools (Word, email, etc.) and a few other uses.
          They're not completely COTS laptops - they
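
          A rough sketch of what such a bit-flip check amounts to (this is NOT the actual station software; the golden-copy approach, the sizes and all the names are made up for illustration):

          #include <stdint.h>
          #include <stddef.h>
          #include <stdio.h>

          /* Compare a region that should never change against a reference
           * copy, report suspected single event upsets, repair from the copy. */
          static size_t scrub_region(uint32_t *live, const uint32_t *golden,
                                     size_t nwords)
          {
              size_t upsets = 0;
              for (size_t i = 0; i < nwords; i++) {
                  if (live[i] != golden[i]) {
                      upsets++;
                      printf("suspected SEU at word %zu\n", i);
                      live[i] = golden[i];       /* repair from reference copy */
                  }
              }
              return upsets;
          }

          int main(void)
          {
              uint32_t golden[4] = {0xDEADBEEF, 0x12345678, 0, 0xFFFFFFFF};
              uint32_t live[4]   = {0xDEADBEEF, 0x12345678, 0, 0xFFFFFFFF};
              live[2] ^= (1u << 7);               /* simulate one flipped bit */
              printf("%zu upset(s) found\n", scrub_region(live, golden, 4));
              return 0;
          }

          Whether a given upset correlates with a real hardware fault is exactly the hard part mentioned above; a scrub pass only tells you that a bit changed, not why.
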
        • Urban Legend... (Score:3, Insightful)

          by imsabbel ( 611519 )
          There hasn't been a single proven instance of a cosmic-ray bit flip at ground level.

          And for bit flips from other causes: the bit-failure rate per Mbit has dropped a few orders of magnitude over the last 10 or 15 years.

    • by GileadGreene ( 539584 ) on Friday April 22, 2005 @10:55PM (#12320180) Homepage
      It's not that NASA prefers processors a couple of years old. It's that NASA prefers processors which have been radiation-hardened, which makes them resistant to both single-event effects (bit flips) and to cumulative ionization-induced degradation (which gradually changes the threshold voltage of transistors due to charge build-up in gate oxides).

      Unfortunately, radiation hardening a processor involves altering the fabrication process (some processes - e.g. SOI - are more resistant to bit flips than others), inserting guard rings, adding self-checking logic, and a bunch of other changes. Doing all of this stuff takes time (and money) so space-ready processors typically lag COTS processors by a generation or two. Example: the current "hot new" rad-hard processor is the RAD750 [baesystems.com], which is a rad-hard version of the venerable PowerPC 750.

      Having said all of that, some small, risk-tolerant missions do use standard COTS processors (PowerPCs and StrongARMs are popular, as are industrial embedded processors like the Hitachi SuperH line). But you won't tend to find them in most NASA projects.

    • by Jozer99 ( 693146 )
      Radiation in space is usually not high enough to cause extensive physical damage to processors in the short term. However, some kinds of radiation can cause random electrical noise in the processor, screwing up calculations. The larger the components, and the fewer of them there are, the less likely this is to happen. Of course, like all government agencies, NASA is very cautious and prefers to wait and see how well components work. This is one of the reasons cell phones are not allowed on airplanes; the FAA h
    • A company I used to work for discussed using some of their technology with NASA. One of the things they told us was that they preferred processors two or three years old because they were afraid of random bit flips caused by radiation, etc.

      That can't be the whole reason, otherwise they could just get modern parts and use up some of those spare cycles and memory to implement more redundancy and error-correcting codes. Surely that would be a far better safeguard against random bit-flip than using old t
      • Actually, there has been some research into using "software implemented hardware fault tolerance" (SIHFT) to guard against the effects of bits flips. There's a research group at Stanford that's done a lot of work with SIHFT, and even flew a test processor on the ARGOS satellite. In addition to error-correcting codes on memory (which is standard on most spacecraft anyway) they also use two techniques to ensure that executing software is not corrupted: Control Flow Checking by Software Signatures (CFCSS), and
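
        The CFCSS details don't fit in a comment, but the simplest flavour of the general idea is easy to sketch: keep redundant copies of critical values and take a majority vote, so a single flipped bit in one copy gets outvoted. A toy example (an illustration of software redundancy in general, not the Stanford/ARGOS implementation):

        #include <stdint.h>
        #include <stdio.h>

        /* Bitwise majority vote of three copies: a bit is set in the result
         * only if it is set in at least two copies, so one corrupted copy
         * is outvoted. */
        static uint32_t vote3(uint32_t a, uint32_t b, uint32_t c)
        {
            return (a & b) | (a & c) | (b & c);
        }

        int main(void)
        {
            uint32_t copy_a = 0x1234ABCD;
            uint32_t copy_b = 0x1234ABCD;
            uint32_t copy_c = 0x1234ABCD ^ (1u << 19);  /* one copy takes a hit */
            printf("voted value: 0x%08X\n", (unsigned)vote3(copy_a, copy_b, copy_c));
            return 0;
        }
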
  • by helioquake ( 841463 ) * on Friday April 22, 2005 @10:36PM (#12320083) Journal
    Just like Cassini, the Hubble also has an on-board solid-state recorder (installed during one of the servicing missions), which replaced an old tape recorder. This has been a really nice addition, as we can store more data on the solid-state device while collecting data and dump it when the downlink becomes available. It really helps increase the efficiency of the satellite (and that's a big thing for a science mission).

    [Note that I've simplified the scheme a lot here.]

    Though several sections of the device have been damaged by radiation, or something, I hear. So even these things aren't too resilient to the harsh space environment yet. Something you future engineers should think about as a project.
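
    Operationally the pattern is plain store-and-forward. A stripped-down sketch of the buffering idea (nothing like the real flight software, and with made-up sizes):

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define FRAME_BYTES 16
    #define CAPACITY    64               /* frames the "recorder" can hold */

    static uint8_t recorder[CAPACITY][FRAME_BYTES];
    static int head = 0, count = 0;      /* oldest frame, number stored */

    /* Record a frame; if the recorder is full, the oldest frame is lost. */
    static void record_frame(const uint8_t *frame)
    {
        int slot = (head + count) % CAPACITY;
        memcpy(recorder[slot], frame, FRAME_BYTES);
        if (count < CAPACITY) count++;
        else head = (head + 1) % CAPACITY;
    }

    /* Dump everything stored once a ground contact (downlink) is available. */
    static void dump_to_downlink(void)
    {
        while (count > 0) {
            printf("downlinking frame %02x...\n", recorder[head][0]);
            head = (head + 1) % CAPACITY;
            count--;
        }
    }

    int main(void)
    {
        uint8_t frame[FRAME_BYTES] = {0};
        for (int i = 0; i < 5; i++) { frame[0] = (uint8_t)i; record_frame(frame); }
        dump_to_downlink();              /* pretend a ground pass just started */
        return 0;
    }
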
    • The article made me wonder what the attrition rate is for the laptops astronauts take with them. Have any of them been damaged while in space? Granted, most of them have only been up there for a short time, but a few have probably spent years on the ISS. Does NASA have any figures?
  • by Anonymous Coward on Friday April 22, 2005 @10:38PM (#12320094)
    Anybody remember RCA's CDP1802 [wikipedia.org], the weird little CMOS RISC-ish 8-bitter used in Voyager, Viking, and Galileo? These things have been running for decades, despite the radiation they've been subjected to.

    Now that's engineering!
  • And why is this relevant? Isn't there an atmosphere inside all manned spacecraft?
    • Disks work just fine in space (although they do need to be ruggedized a little). See for example the General Dynamics (nee Spectrum Astro) Space-Qualified RAID [gdc4s.com]. Note that these are meant for use on unmanned spacecraft.
    • HDs use gas bearings (Score:4, Informative)

      by redelm ( 54142 ) on Friday April 22, 2005 @11:23PM (#12320308) Homepage
      The heads of modern HDs ride on gas (air) bearings that keep them a controlled distance from the media platters. In a vacuum (the cases cannot hold 15 psi and would leak out) the heads would scratch the media.

    • While I could easily imagine designing a disk that could work in space, you cannot pull the old ST41201J out of your box and launch it into space. The flying-head effect requires an atmosphere between the surface of the disk and the head. Stock disks have a vent (with a filter similar to that of a filter-tip cigarette), so exposed to vacuum, the heads would crash.

      Even manned aircraft might experience low atmospheric pressure (or even total vacuum) from time to time -- I guess they could pack a

    • You'll probably have cooling problems, due to lack of convection. Low air pressure can also cause problems with cooling and head crashes. Vibration from the launch can also destroy delicate hardware, or make it malfunction.
    • And why is this relevant? Isn't there an atmosphere inside all manned spacecraft?

      Per experience working for a NASA subcontractor making (non-critical) instrumentation...

      The pressure the craft is operated at is less than standard sea-level air pressure. (I don't know how much less.) It was, though, so much less that the hard drives sent up (on the project I worked on) were failing due to the lack of air for the Bernoulli effect (the phenomenon that holds the heads up when the drive spins), along with not en

    • by cyclone96 ( 129449 ) on Saturday April 23, 2005 @03:13AM (#12321123)
      I work for NASA on the manned programs and my experience is that hard drives are a headache on long term space missions.

      The laptops onboard Space Station are primarily IBM laptops (many of which will soon be running Linux - yeah!). While the drives are easy to replace, they fail fairly often (compared to other space hardware) and new ones need to be launched. The software on the drives also becomes corrupted frequently (maybe once every few weeks), requiring the crew to waste time recopying the software from CD. While these COTS laptops and hard drives were cheap up front (almost zero development cost, custom stuff would have been tens of millions of dollars) we are paying for it now because we waste a lot of operational time fixing them.

      The Honeywell Command and Control computers (the primary flight computers onboard, which are triple redundant and manage core systems in the US segment) used to have a 300 megabyte hard drive to store flight software.

      In 2001 during a shuttle mission, hard drive problems caused ALL THREE of those computers to crash simultaneously in a massive cascading failure. While it never got a lot of press, recovering from that took several days and an effort reminiscent of Apollo 13. You can read a contemporary article on it here: http://www.space.com/missionlaunches/launches/soyuz_iss_010427.html [space.com]

      When we got the things back and did a post-mortem, it turned out that the hard drive had a design flaw where the arm was dragging across the disk during power down and scratching it, which eventually led to failure.

      They were replaced with solid state units shortly thereafter (which were already in the development pipeline). No moving parts, and much less problematic.
  • by Future Man 3000 ( 706329 ) on Friday April 22, 2005 @10:41PM (#12320118) Homepage
    I've heard that the type of computer equipment they use in studios is also of interest to space hardware designers (or maybe vice versa) -- they work towards reducing interference with other studio equipment (mostly power spikes; use solid-state storage instead of hard drives, shield everything, work towards smooth power transitions at startup/shutdown).

    They need more performance than space equipment, and space equipment has power concerns that studio equipment does not, so the equation balances.

  • by Anonymous Coward on Friday April 22, 2005 @10:44PM (#12320130)
    • w00t, it's a ThinkPad! I couldn't tell you which one, but that little red eraserhead is unmistakable.

      Question: The article makes a big deal about standard hard drives not being able to work in zero G. Why does a laptop HD work in zero G when a regular hard drive doesn't? Have I missed something?
      • It's a Thinkpad, alright. They use 'em on the Space Shuttle too - basically NASA put a stake in the ground in the mid-90's, bought a boatload of 166MHz Thinkpads, put Windows 95(!) on them and characterized the heck out of them.

        So the many faults of this platform are well understood, which is what really counts. Interesting article on this here [spaceref.com]

        (Loving my T40... er... in the abstract sense only)

  • Old tech updated? (Score:5, Informative)

    by rjhall ( 80887 ) on Friday April 22, 2005 @11:05PM (#12320229)
    Some 9 years ago I worked on some chip design for Hughes and ESA.

    Back then, we used 1.2 µm on 4" (or 6" in the new fab) wafers - and everything was built on a sapphire substrate instead of a silicon substrate to make them radiation-hard (for when they went through the Van Allen belt).

    It was dull, as every single chip had about 12 inches of paperwork from QA. Every *instance* of every chip had its own paperwork, I mean. It was also dull because they wanted tried and tested tech, not any of this new fangled sub-micron stuff.

    That was then. Can anyone let me know how much things have changed?
    • Re:Old tech updated? (Score:2, Interesting)

      by geremy ( 18495 )
      The first 8" wafer line of Honeywell's SOI VII HX5000 0.15 micron will be officially opened by the end of the month.

    • Re:Old tech updated? (Score:4, Informative)

      by F00F ( 252082 ) on Saturday April 23, 2005 @03:54AM (#12321270)
      > Some 9 years ago I worked on some chip design for Hughes

      I work at what's left of one of the old Hughes Space & Comm. digital ASIC groups, with several of the old(er) Hughes Radar fellas.

      > Back then, we used 1.2um on 4" (or 6" in the new fab) wafers -
      > and everything was built on a sapphire substrate instead of a silicon substrate...


      SOS is much less prevalent today. My first design was quarter micron Si CMOS, and many new designs are tenth and sub-tenth micron. It's becoming more difficult to convince foundries to perform space-qualified work. Thinner gate oxides, rising leakage current, clock distribution power dissipation, and shrinking feature sizes in libraries are making good standard cell ASIC development for the space environment more difficult, and the full custom stuff is pricier. Modern COTS hardware is having a more and more difficult time meeting satisfactory (particularly end-of-life) performance requirements. And, to boot, obsolescence issues are continuing to rear their ugly heads. Technology qualification is getting more expensive every year, and the spaceborne commercial telecom market was ravaged by overcapacity.

      To top it off, the hardest part is actually finding people who can write rugged, professional, reliable code (and test plans, and device specifications, and interface documents, and...) day after day.

      > It was dull, as every single chip had about 12 inches of paperwork from QA.
      > That was then. Can anyone let me know how much things have changed?


      Yeah -- we're up to about 50 inches.
  • GPC vs. embedded (Score:4, Interesting)

    by fermion ( 181285 ) on Friday April 22, 2005 @11:13PM (#12320265) Homepage Journal
    I think what most people miss is that the devices in spacecraft, hell, even most cars, are not general purpose computers (GPCs), have no need to be GPCs, and should not be GPCs. A GPC, quite frankly, is a jack of all trades: designed to do everything adequately and nothing well, and it is a single point of failure. It might be over-designed to do a few things well, but then it costs more.

    An embedded machine, OTOH, is designed to do one thing, or a very small range of things, very well, very reliably, and very efficiently. I have had the fortune of working on two space-based projects. In the first we used a single-board, Z80-based, space-hardened 'computer' to control a simple set of devices. It stored the ASM code in an EEPROM. It was more complex than we needed, as it was a standard-issue unit, but much simpler than the Apple ][ we used as the GPC.

    On the second project, 10 years later, we were not using incredibly different machines on the satellite, though the GPC was now a Wintel machine with 100x the memory and speed. But when your main concern is that things just have to work, processor speed and OS wars have little meaning.

    So these stories about how underpowered and behind the times embedded systems are just annoy me. It is just like continuous burns on sci-fi shows (kudos to Babylon 5). Perhaps meaningless power is important to the ignorant masses, but we on /. are supposed to know better. I was using a tape drive until at least '87, just because It Worked.

  • Is it just me, or does using tape storage seem horribly unreliable? I have some bad memories of backing up my 486SX's gargantuan 540 MB hard drive onto 100 MB tapes. The process often took 2-3 days because, about 50% of the time, writing to tape would fail, necessitating starting the whole tape over again. How could tape be reliable enough for space probes and other things that handle huge temperature extremes and tons of radiation?
    • It's just you.

      Or, more specifically, it's the crappy tape drive you used for backup.
    • Re:Tape?!?! (Score:5, Interesting)

      by bluGill ( 862 ) on Friday April 22, 2005 @11:49PM (#12320405)

      They are not using the junk tape drives that you were using, but quality stuff. Mainframes have always put most of their data on tape drives, and they rarely have problems.

      'Course, a mainframe tape drive can cost $30,000 each (not counting the robots that load them), so you can see why home users don't get that quality.

  • by aapold ( 753705 ) on Saturday April 23, 2005 @12:07AM (#12320465) Homepage Journal
    I always wonder why you can't overclock the hell out of chips in the cold vacuum of space...
    • There's the cold of space and there's the hot of space. Most spacecraft don't spend their time pointing in ONE direction, so the ships have to be built with some expectation that exposed parts will face both heat and cold.

      Anything exposed to the sun is going to get very hot indeed. That'd be bad for a bare CPU.

      Anything exposed to the night side (or the side where the sun ain't) is going to be cold but a CPU is still going to need a heat sink to effectively remove the heat. Empty space is not a particularly good
    • It is hard to dissipate heat in space. Even if the few atoms that hit your CPU are at very low temperatures, there aren't enough of them to carry heat away fast enough to keep it cool.
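
      For a sense of scale (a back-of-the-envelope number with assumed values: a bare 10 cm^2 die at 350 K, emissivity 0.9, radiating to deep space): P = e x sigma x A x T^4 = 0.9 x 5.67x10^-8 x 0.001 m^2 x (350 K)^4, which comes out to roughly 0.8 W. A chip dissipating tens of watts therefore needs a sizeable radiator; cold surroundings alone don't help much.
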
    • Remember that space is a vacuum. Much like a thermos bottle. Unless there's energy being transmitted to an object (like from sunlight), the hot stays hot and the cold stays cold.

      Unfortunately there's almost always sunlight to add some heat into your system.
  • by mykepredko ( 40154 ) on Saturday April 23, 2005 @12:22AM (#12320513) Homepage
    My experience with space-rated equipment isn't all that extensive or current (I was involved in failure analysis of an AP-101 memory card from STS-2 that had an intermittent failure, and had some interactions with the engineers at IBM's old FSD division, which designed the AP-101s and wrote the flight software), but the article misses one very big point that is, to me, the really fascinating aspect of spacecraft computing hardware, and I would have to challenge a number of facts in it.

    1. The shuttle launch algorithms and orbital maintenance procedures are a lot more complex than the article makes them out to be. There are several hundred parameters that are continually checked, recorded and processed from tens to hundreds of times per second to make sure the flight path is correct and all systems are operating correctly. Along with monitoring the flight path, the computers were/are largely responsible for the data displayed on the astronaut/pilot's CRT displays in the cockpit.

    2. It is my understanding that, in the early shuttle missions at least, there were multiple code loads during flight. The original AP-101s had a maximum of 256K words of 32-bit memory, which was enough for separate launch, orbit and landing images, each of which had to be loaded into the AP-101s before the next phase of flight. There have been issues with loading software or receiving and loading new software from the ground.

    3. The original AP-101s were designed for the F-15 and could be considered "state of the art" for the early 1970s in terms of processing power and memory size. They were capable of about five MIPS and had a full megabyte of battery-backed memory. They were chosen because they had been qualified for the high G-loads and temperature extremes of the fighters. While the systems used on the shuttle were of the same design as those used on the F-15 (and later the B-1B), they were inspected to much higher standards, and all failures had to be resolved to the point of having a test in place to prevent the failure from escaping the manufacturing/test processes, as well as root-cause action plans at the component supplier.

    The memory card failure that I was involved with was caused by a solder ball inside a metal RAM chip package. During the shuttle's ascent, vibration caused the solder ball to break free and intermittently touch the surface of the chip inside the package. The problem was extremely difficult to reproduce and was found by placing a microphone on the chip package and tapping the chip with the eraser end of a pencil. Chips with this solder ball defect "rang" differently than ones without this problem. After the ball was discovered and proven (by cutting open the chip package), every chip used in a shuttle AP-101 was tap tested by IBM to ensure no other solder balls were hidden inside the packages.

    4. I don't know where that picture of the "Part of the AP-101S" came from as there is no way that is flight qualified hardware for an F-15, let alone a shuttle orbiter. Wire reworks are simply not allowed in high-G, high vibration environments and it looks like the surface mount components are hand soldered into place. I think this is prototype hardware that somebody pawned off on the author.

    5. I don't understand where the idea that space systems have to be low power came from. The AP-101s were real power hogs (all their logic is bipolar) and were in fact glycol-cooled. A significant fraction of the orbiter's power generation is devoted to the computer systems (as well as the spacecraft's cooling capabilities).

    What is always interesting is looking at how the software for manned spacecraft is developed. A big joke is the Mars Climate Orbiter and the mix-up between English and metric units, but think about how often you've heard about a software failure on board the shuttle - or any manned spacecraft for that matter. In Apollo, there were none, and the software for the CM and LM computers was wire-wrapped on a bed of nails instead of being burned into
  • Determinism (Score:5, Interesting)

    by Detritus ( 11846 ) on Saturday April 23, 2005 @12:45AM (#12320602) Homepage
    One important point that the author missed was determinism. Many of these computers are used in hard real-time applications. If the tasks don't meet their deadlines, the system has failed. This requires predictable timing. That means cache is a liability, as are many of the advanced features of modern processors. You need to be able to sit down with a program listing and count how many CPU cycles it takes to execute a segment of code. If you have a 10 ms time slot to do a task, you have to be able to prove that the code can run in less than 10 ms. If you poll a set of sensors every 100 ms, you want the timing to be identical every time the code runs, to eliminate timing jitter in the measurements.

    The controller for the SSME (Space Shuttle Main Engine) uses a pair of 68000 processors. It is a very critical system. If something starts to go wrong with the engine, it has to detect the problem and shut the engine down before it progresses to a catastrophic failure. It uses two redundant processors for reliability. Each engine has its own controller.

    Old microprocessors like the 80386 and the 68000 were the last commercial processors before cache, pipelines and other trickery made timing analysis difficult or impossible. Some people have used DSPs for controllers because they still offer predictable timing.
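
    A toy illustration of what that predictability buys you (the task names and cycle counts below are invented; a real system would read a hardware timer, and its budgets would come from analysing the listing, not from stubs):

    #include <stdint.h>
    #include <stdio.h>

    #define FRAME_BUDGET_CYCLES 10000u        /* e.g. a 10 ms slot at 1 MHz */

    /* Stub tasks that report their pre-analysed worst-case cycle cost. */
    static uint32_t read_sensors(void)   { return 1800; }
    static uint32_t control_law(void)    { return 4200; }
    static uint32_t update_outputs(void) { return 1500; }

    int main(void)
    {
        for (int frame = 0; frame < 3; frame++) {   /* pretend timer ticks */
            uint32_t used = 0;
            used += read_sensors();
            used += control_law();
            used += update_outputs();
            if (used > FRAME_BUDGET_CYCLES)          /* missed deadline = fault */
                printf("frame %d OVERRUN: %u cycles\n", frame, (unsigned)used);
            else
                printf("frame %d ok: %u of %u cycles\n", frame,
                       (unsigned)used, (unsigned)FRAME_BUDGET_CYCLES);
        }
        return 0;
    }

    On a cache-less, in-order CPU those per-task numbers can actually be proven from the program listing, which is the whole point.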

  • running on an Intel Pentium 4 Extreme Edition 840 Dual Core with dual SLI GeForce 6800 Ultra 512MB and 4096MB of DDR2-533 memory to run Half-Life 2 and Doom 3 in space, and of course the latest edition of 3DMark2005. I am sure those cosmonauts could use some 3D gaming entertainment while waiting for docking. ;)
  • by MonkeyBoyo ( 630427 ) on Saturday April 23, 2005 @02:01AM (#12320870)
    I've had experience with some of the computers in older government satellites.

    Simple processors are preferred because that makes it much easier to figure out the time bounds on a subroutine. You don't want one routine to use up so much time that it keeps something else from being done. Timing information is rigorously analyzed to make sure that the system won't miss something if lots of things happen at once. Fancy modern architectural features like caches, pipeline stalls, out-of-order execution, etc. make timing analysis very difficult.

    Generally, interrupts are not used; instead, conditions are polled on a regular time slice. One reason for this is that the polled data is also down-linked in a telemetry stream for status monitoring and troubleshooting. Also, interrupts greatly complicate timing analysis.
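
    Very roughly, the shape of such a polled loop (everything here is invented for illustration; real code would read hardware status registers and hand the frame to the downlink rather than printf it):

    #include <stdint.h>
    #include <stdio.h>

    /* Pretend status registers that would be sampled every time slice. */
    static uint8_t poll_power_status(void)    { return 0x01; }
    static uint8_t poll_thermal_status(void)  { return 0x00; }
    static uint8_t poll_attitude_status(void) { return 0x03; }

    int main(void)
    {
        for (int slice = 0; slice < 4; slice++) {    /* fixed time slices */
            uint8_t frame[4];
            frame[0] = (uint8_t)slice;               /* slice counter */
            frame[1] = poll_power_status();          /* polled, no interrupts */
            frame[2] = poll_thermal_status();
            frame[3] = poll_attitude_status();

            /* ...act on the polled conditions here... */

            /* the same polled words go out in the telemetry stream */
            printf("telemetry: %02x %02x %02x %02x\n",
                   frame[0], frame[1], frame[2], frame[3]);
        }
        return 0;
    }
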
  • by olafva ( 188481 ) on Saturday April 23, 2005 @02:37AM (#12320967) Homepage
    The FPGAs on Spirit and Opportunity [xilinx.com] seem to be overlooked. NASA's new Reconfigurable Scalable Computer (RSC) Project [akspace.com] for space applications is exploring the use of FPGAs (instead of CPUs), which offer increased performance and radiation tolerance at a fraction of the power consumption.
