Science

Hubble's Computers Upgraded 97

MRcow writes "A story at ABCnews.com says the Hubble Space Telescope that was recently repaired by the crew of space shuttle Discovery is having its computer system upgraded. The new system will be 'three linked computers that run on the Intel 486 microchip.' It says older processors are used because they have to be tested for radiation and such. That makes me wonder whether the computers are going to be 'linked' and, if so, how? Maybe a Beowulf cluster on Hubble? Talk about 'geeks in space'." The processors on the Hubble are, from what I understand, being upgraded from 1980s versions. The new hard drive is going to be a whopping 10 gigs with three 486 processors. The processors and drive have to be specially shielded and made to handle heat/cold extremes.
This discussion has been archived. No new comments can be posted.

Hubble's Computers Upgraded

Comments Filter:
  • by Anonymous Coward
    Do you KNOW what they are upgrading from!? It was a RadioShack TANDY 1000TX up there!!! Why? 'Cause the darn things NEVER DIE! Even at -1 bazillion degrees... well, you get the picture.
  • What is it about pointy-haired-boss program managers choosing x86 processors for space applications? Do they expect to run windoz? Or maybe NASA can only afford development systems from garage sales these days.

    Two of the hardest things to do in a spacecraft are obtaining electricity and dissipating heat. A 486 consumes at least several watts. You can get roughly the same level of performance as a 486 from an ARM7TDMI MPU using nearly two orders of magnitude less power (and have smaller code size to boot; the T stands for Thumb code compression). If NASA has a buy-US policy, check out IBM's 40x series of PowerPC embedded control cores.

    And the Pentium? That's a huge f**king waste of silicon area and amps for an embedded control application that doesn't need to be x86 compatible. Ever heard of the StrongARM? The MIPS R5270? The PowerPC 603e? Are these guys idiots, or is Intel paying them to use the Pentium? I can hardly wait for the next Mars probe to be lost to a divide error or the F00F bug. Has anyone ever checked that the state machines in the Pentium's complex instruction decoder are fully recoverable from single event upset errors ("bush behavior" encoded)?
  • They really shouldn't be doing much computing up there at all. The emphasis in cost and design should be on the most reliable speed-of-light link between the Hubble and all the computing power on Earth. The computers down here should be doing everything possible; the computing on board the Hubble should be doing nothing more than necessary. There are no users up there. The Hubble is the most sophisticated sensor ever produced, and it should be nothing more than a remotely controlled sensor. All the emphasis should be on the link between the Hubble and the control points on Earth.
  • The last one I recall was a 50MHz 486DX2. Big heat sink, but no fan required.
    ---------------------------------------------
    To me, excessive cooling requirements for a CPU are a sign of bad design.
    ---------------------------------------------
    This is why I avoid the latest CPUs. I suspect they're just slower CPUs that are overclocked. The ones that pass the performance test are relabeled as faster CPUs; the ones that become unstable are sold as lower-speed CPUs. Notice how much cooler cheap $60 400MHz AMD K6-2s run compared to the first 200MHz CPUs? I still run a 200MHz P55 MMX Intel, and that sucker runs HOT. The aforementioned 400MHz chip's heat sink is merely warm to the touch; touching the heat sink on the 200MHz chip will leave lines on your finger. I'll wager that the 700 and 800 MHz chips being produced 2 or 3 years from now will run much cooler. Why? Because by then the design will be FIXED, unlike what we see now, which was rushed to market on the advice of marketers... not engineers.
  • by Anonymous Coward on Friday December 24, 1999 @08:04AM (#1448146)
    I work in the industry, specifically on flight software systems. We typically have to use old technology because IT'S BEEN MADE RADIATION HARD. We use 300MHz PowerPCs for early development in the labs, because they happen to be a superset of the instruction set we use on the "state of the art" flight processor, the RAD6000 from Lockheed Martin Federal Systems. This is the processor that flew on the Mars Pathfinder mission. It has a dynamically selectable clock speed (meaning we can jump from 2.5 to 33 MHz just by writing to a register). It's a pipelined RISC processor, essentially what IBM was putting in AIX workstations 6 or 8 years ago, except now it's rad hard. They cost something like $300K, but if you need the fault protection (like memory that automagically corrects for single- and multi-bit radiation errors, etc.) then you have to start forking over the bucks.

    There's some movement in the industry toward rad-tolerant (not hardened) PowerPC voting systems, but if your project is not in just the right orbit (or it goes to deep space) you need to be RAD-HARD. So it's not typically budgets, etc. that drive things like processor choice. It's what technology will survive the radiation, thermal cycles, vibration, shock, etc. The typical PC motherboard wouldn't last 10 hours in most projects from the thermal cycling alone. A lot of student spacecraft projects fly PCs, 6811/12s, etc. If you wrap enough stuff around them, they will work reasonably well most of the time, but you will be rebooting and watchdog-resetting A LOT. That's unacceptable for a probe doing nav calculations (unless you already screwed up your unit conversion, and then you're screwed anyway!).

    The sad thing is, at 20MHz, with really tight code, pipelining, and no cache misses, we might get 20 MIPS out of the RAD6K. LMFS is developing a new higher-speed PowerPC-based unit, but I think it's still a ways out. Telescopes, etc. didn't used to need much horsepower, since they were just sending telemetry, taking a picture or two, and wiggling some mirrors or doors. Nowadays, you have to do all sorts of analysis, correlation, and compression on the data before you send it down, because the speed of the network pipe is the limiting reagent.
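    The "memory that automagically corrects for single- and multi-bit radiation errors" mentioned above is EDAC (error detection and correction) hardware, which flight software typically pairs with a periodic "scrub" task. The following is only a minimal sketch of such a scrub loop in C, under the assumption of a memory controller that corrects single-bit errors on read and reports them in a status register; the edac_status()/edac_clear_status() hooks are hypothetical stand-ins, not any real flight API.

        #include <stdint.h>
        #include <stddef.h>

        #define EDAC_SINGLE_BIT_ERROR 0x1u   /* assumed status bit */

        /* Assumed hardware hooks; real names depend on the memory controller. */
        extern uint32_t edac_status(void);
        extern void     edac_clear_status(void);

        /* Walk a memory region so latent single-bit upsets get corrected and
           written back before a second bit flips in the same word. */
        static void scrub_region(volatile uint32_t *base, size_t words)
        {
            for (size_t i = 0; i < words; i++) {
                uint32_t value = base[i];    /* read: hardware corrects 1-bit errors */
                if (edac_status() & EDAC_SINGLE_BIT_ERROR) {
                    base[i] = value;         /* write the corrected value back */
                    edac_clear_status();
                }
            }
        }

    In a real system a loop like this runs continuously at low priority, so the scrub rate only has to beat the expected upset rate.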
  • Since there's no such thing as a PC with no moving parts being made anymore...

    Actually, there are. You need to look in industrial computer catalogs. There are a considerable number of (expensive) parts for situations like this, including solid-state disks.


    ...phil

  • Those 1000 pins are in high-density cable connectors. The computers are mounted in sealed boxes.


    ...phil
  • Saw a news article about it a couple of days ago, but can't find the article anymore. (Anyone?)

    Anyway, it could be using this technology; it works a bit like RAID, only for whole computers rather than just disks.

    Yes, I know it's been done before, but there's a twist... if only I could find the damn article.

    Martin
  • In addition to the radiation and reliability concerns, as we all know the newer chips are larger thermal loads. And unlike here on the ground, you can't use fans for cooling. ;-)

    Another thing I wonder about is, are the more powerful chips really necessary, anyway? In none of the satellite designs I've been involved with has processor speed been a design driver or technical limitation. Mass, cost, and power are generally the design drivers (sometimes schedule). I would imagine that a chip optimized for low mass, power, and cost (like a 486) would trade off very well against more powerful chips, especially if it is also very reliable. ;-)

  • NASA would probably have had a special Motherboard made, or at least a custom BIOS.

    :-) Spacecraft manufacturers always use computer architectures specifically suited to space use. They tend to have a lot of unusual devices to connect to the processors, sometimes more cards than a typical motherboard handles. As well, they're driven by the shake, rattle, and roll of launch, and the need to keep mass, power, and cooling requirements to a minimum.

    The attention to detail that goes into the design of a spacecraft's computer architecture is pretty amazing, too, especially given that often the computer architecture is only used (as designed) once. And all too often, we're faced with integrating things designed for multiple different architectures (imagine getting a PC modem, SGI video card, and Sun monitor integrated into a Mac chassis.)

  • by Effugas ( 2378 ) on Friday December 24, 1999 @05:45AM (#1448152) Homepage
    Failures occur, or more bluntly, sh*t happens.

    There are two ways to deal with failures: work to prevent them, or accommodate (and recognize) their presence.

    NASA generally chooses failure prevention (via "hardening" engineering that adds voting circuits, for example) until it can get a high enough success rate that the rare failed computations are an acceptable cost in the face of a vast majority of successful operations.

    One thing to think about is that, in general, the Hubble itself is not the device that requires ridiculous amounts of computational power. It's far cheaper to have your supercomputers terrestrially based than to launch them twenty thousand miles in the sky! The Hubble is rather like the equivalent of a wireless VVVWAN passive sniffer / protocol converter, exchanging photons sent from distant lands into data which is then sent to the ground based equipment for rendering.

    Do you demand that your network card have a Sixth Generation x86 chip? Same difference.

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
  • Comment removed based on user account deletion
  • by smooge ( 3938 ) on Friday December 24, 1999 @08:10AM (#1448154) Homepage

    I worked on the ALEXIS project at Los Alamos, New Mexico in 1993/94. It was an X-ray telescope and used processors I thought were positively ancient for the time. However, I found out that these were needed because the number of resets that occur from X-rays, gamma rays, and high-speed particles in space is really scary. These particles have very small wavelengths but high energies, and the smaller the circuit features, the more circuits they are going to affect. This causes anything from NO-OPs to hard damage of the circuit (which eventually renders the chip useless).

    I am pretty sure that the four 486s are doing the same thing the 6-8 gyroscopes are doing: they all do the same thing, but if they don't agree they do it over again to see if they agree the second time. If the others do agree, the wrong one is marked bad and may be shut down, depending on how many times it happens.

    The next-generation chips that will need to be used in space are the ones that are both hardened and redundant. I think it was Motorola or IBM that had a chip you could physically abuse and it would keep going, due to the amount of redundancy in it. The problem is that it ain't speedy :)

    BTW, in case people think this doesn't happen on Earth: I also learned the hard way that above 1km you get more and more problems from radiation on chips. A lot of non-repeatable memory problems we had at Los Alamos got linked to high-energy particles/waves coming in from space that usually get blocked by the atmosphere... I also remember reading a paper where similar things occur in buildings with large amounts of granite (general decay of various elements) or other stone.

    Stephen Smoogen (Physicist in Support clothing)

  • What is it with Slashdot's dweeping of Beowulf clusters? No one seems to be able to mention "new computer" without Beowulf being mentioned by someone.

    The professional comedy term for it is a 'running gag'. They've been used pretty successfully throughout comedy history, and every culture seems to have them. For example, North Carolina has had Jesse Helms as a running gag for decades now... And I hear England's royal family continues to garner laughs after centuries; I guess that joke never gets tired...

    Your Working Boy,
  • You do not moderate something down because you disagree with it; you moderate it down because it is flamebait, or a troll, or some such. Otherwise I would moderate YOU down.
  • by Accipiter ( 8228 ) on Friday December 24, 1999 @04:00AM (#1448157)
    I was thinking... NASA would probably have had a special motherboard made, or at least a custom BIOS. The classic 486 BIOS didn't support anything larger than 512 megabytes for a hard disk, and the newer 486 BIOS supported up to 2 gigabytes. A 486 system supporting 10 gigs is quite uncommon, but it can obviously be customized to work.

    (Although each computer can have a network device, and be hooked up to a standalone drive system. That's possible, and makes more sense.)

    How cool would that be to have "National Aeronautics and Space Administration" plus the NASA logo embedded in your computer's BIOS and displayed on boot! Killer.

    -- Give him Head? Be a Beacon?

  • You need to look in industrial computer catalogs
    You don't need to go that far; a Palm Pilot would qualify. And if you think that's too underpowered to fit your definition of "PC", then the Handheld PC Pros like the big HP Jornada certainly should qualify.
  • Given that a computer operating on a long-term spacecraft has to withstand extreme conditions (like -270 to +270 degrees Celsius with no heatsinks, heavy bombardment by all kinds of radiation from the solar wind, etc.), it's small wonder that NASA has only now qualified the 486 CPU for space use. Designing the radiation shielding and thermal protection, plus guaranteeing extreme reliability for years at a time, requires a lot of engineering work, and frankly, it'll be some time before NASA qualifies a Pentium II, let alone a Pentium III, for spacecraft use.
  • Well, remember that the 486 essentially uses the same instruction set as the Pentium (okay, I don't think MMX and SSE are used in a BIOS... maybe a 3D logo or voice recognition at the BIOS?!), so I think most Pentium BIOSes would run fine on normal 486s! Anyway, I think they probably use an architecture other than the PC... They probably need more reliability, so they would use their own OS and stuff!
  • Where did it say that they were using PC technology? Intel processors do not necessarily mean a PC architecture.

    And having multiple processors hooked up together doesn't necessarily mean SMP or any other kind of parallel processing you can think of. Given that it was ABC News, 'three linked computers' could mean anything. It could be some kind of single computer with multiple processors (Intel had supercomputers with hundreds of 8088s working together, after all), but this is a very specialized piece of hardware. The designers know that system FOO will peak at X cycles, BAR at Y, etc. FOO and BAZ use processor A, BAR and FRED use processor B, and C is held in reserve.

    And they might only be 'linked' in the most basic sense: if the aiming system has its own computer (distinct from the others), then it would only have to communicate with the data acquisition computers with "stop collecting" and "start collecting".

  • This scheme is used in lots of situations where actual fault tolerance, not just high availability, is required. Here's a link to a company that makes Solaris servers that use this scheme:

    http://www.resilience.com [resilience.com]
  • Hard disks have cooling and reliability problems when used at high altitudes, such as at astronomical observatories in Chile. I don't think they would last long on a spacecraft.
  • The Shuttle GPCs are not 80186s, they are IBM AP-101S computers, which replaced the IBM AP-101B computers. The IBM AP-101 is a member of the IBM 360/370 family of computers and is programmed in HAL/S.

    There are two programs, PASS (primary avionics software system) and BFS (backup flight system). PASS runs on 4 computers and BFS runs on 1 computer. They were developed by two separate groups, IBM (PASS) and Rockwell (BFS). If PASS fails due to hardware or software problems, BFS can be used to fly the Shuttle.

  • by arivanov ( 12034 ) on Friday December 24, 1999 @03:31AM (#1448165) Homepage
    Intel is bound by agreements to produce the 8086, 80186, 80286 (this one has been transferred to Harris), and 80386 for a few more years.

    AMD is still manufacturing the 486. It is also manufacturing 386 and 486 embedded derivatives with integrated RAM control, bus control, DMA controllers, and even PCMCIA controllers (something like the 80186, but with all the 486DX100 bells and whistles).
  • Anonymous Coward is absolutely right. Is his score of "0" punishment for being anonymous, or spite over cutting through all the crap? There is NO reason for more computing power on-board the Hubble, just NASA's usual triple-redundant sets of simple systems. The network is the computer, guys -- remember? All the computing muscle is at JSC and JPL, where it belongs.

    But it took five long screens of prattle about Beowulf clustering and space-based IP and 486 floating-point calcs and NASA-branded BIOSes and space-Kelvin overclocking to get to the point that the Hubble is just an appliance: in effect, a scanner with the ability to point itself very precisely in three dimensions.

    It's the world's biggest hot-synced Palm. It doesn't need Windows. It doesn't even need, perish the thought, Linux.

  • That makes me wonder if the computers are going to be "linked" and if so, how?
    IIRC, there was talk of utilising the IP protocol we all know and love to communicate between 'craft', for want of a better expression. Details can be found here [slashdot.org]. As it stands, far too many of my packets are routed to Mars for this to be a real innovation...
  • Triple redundancy is pretty much the standard for critical control systems: airliners, nuclear power plants, chemical refineries, and so on. It is pretty well tested.

  • Maybe a Beowulf cluster on Hubble?

    What is it with Slashdot's dweeping of Beowulf clusters? No one seems to be able to mention "new computer" without Beowulf being mentioned by someone.

    Could someone please explain what on earth Hubble needs a Beowulf cluster for? It's kind of pointless to drool over a Beowulf cluster without having an idea what to use it for.

    -- Abigail

  • The processors and drive have to be specially shielded and made to handle heat/cold extremes.

    Did anyone else think about how fast you could overclock your processor in space? ;)
  • Last year, Intel pretty much gave the design of the Pentium to Sandia National Labs and NASA for ruggedization for use in space: Press release [intel.com]

  • http://dailynews.yahoo.com/h/nm/19991223/sc/space_shuttle_46.html

    If you read the link above, you'll find the following quote:

    "'You have to keep in mind that we don't run Windows, we don't have disks, we don't do Internet,' John Campbell, project chief
    for the Hubble, told reporters."
  • The reason for NASA and other agencies using the Intel designed chips is that Intel is giving them the designs of the chips to modify to the required specs royalty free.
  • Just something that should be pointed out: NASA isn't using a hard disk but a solid-state system to replace the existing tape system. I assume this is because outer space is too harsh an environment for hard disk technology.
  • by iNTelligence ( 33364 ) on Friday December 24, 1999 @03:28AM (#1448175)
    The Sandia National Laboratory is currently preparing the Pentium for use in high radiation environments such as space, for more information check out the following sites
    http://www.sandia.gov/media/rhp.htm [sandia.gov]
    http://www.sandia.gov/LabNews/LN12-18-98/intel_story.htm [sandia.gov]




  • by overshoot ( 39700 ) on Friday December 24, 1999 @06:49AM (#1448176)
    Newer processors have increasingly smaller distances between the pathways on the surface of the chip. Since this distance is smaller, radiation can cause the electrical charge to "jump" across.

    Not quite. Ionizing radiation induces carrier pairs in the silicon, possibly causing conduction when it shouldn't happen. Charged particles (esp. alpha) actually deposit charge where it can cause trouble. The most susceptible circuits are memories; dynamic memories actually store data in charge states, and static memories are intentionally very low-energy. Newer processes use smaller features and lower supply voltages, which reduces the charge needed to change state.

    Unfortunately, for speed/power reasons all leading-edge processors use dynamic logic (similar to dynamic memory, except that the charges can be even smaller since the storage time is so much less.)

    There are ways to minimize these effects with shielding,

    Shielding is actually worse than useless, since cosmic rays are too penetrating to block effectively, but they do kick out secondary radiation from the shielding.

    I'd love to know whether they're using the original Intel 486 or one of the newer 486 replacements that use static logic (which is handy in really-low-power systems since it lets you start and stop the clock at any time.)
  • I'd have to say that such a comment is a little off base, given that Pathfinder used a PowerPC-based architecture. I was lucky enough to take a "special topics in engineering" course here at Caltech with all of the folks who were on the Pathfinder project: a great group of bright, driven folks.


    Joseph R. Kiniry
    http://www.cs.caltech.edu/~kiniry/
    California Institute of Technology
  • Three computers? And if they disagree, the decision is made by majority rule? Ooh, if I worked in NASA, it'd be so hard not to refer to them as Melchior, Casper, and Balthazar...teehee...
    --
    "HORSE."
  • Well, even if they weren't as vulnerable (I of course have no idea whether they are or not), they can't be used because they haven't been tested. (They're just too new to be used yet.) Given the reliability required for these operations, NASA has to make sure it will work; otherwise it gets expensive ;)

  • by Szoup ( 61508 ) on Friday December 24, 1999 @02:39AM (#1448180)
    There are (mainly) two issues as to the WHY:

    1. A time-tested system (one that has proven itself over several years of "real world" as well as experimental development use) is more acceptable to them. They don't want to send a device into space and six months later have it fail because of a processor bug that no one knew about.

    2. As the circuits on computer chips get smaller and smaller, there is a concern about the ability of solar radiation to trigger (or disable) the circuits' switches.
    -------------------------------------------
  • by GPSguy ( 62002 ) on Friday December 24, 1999 @06:44AM (#1448181) Homepage
    Harris Semiconductor has spent a lot of time and effort making the iAPx86 processor line radiation hardened. And it works. But it doesn't work nearly as well for the newer generation as the older.

    The tighter dies are more sensitive to charged particle events (Single Event Upsets, 'SEUs'), and the thinner nature of deposition makes the hardware more prone to heavy particle (permanent) events.

    Older CMOS processors and memory were less prone to both types of error because of line thickness and spacing, depth of deposition and because of the voltage levels used.

    The COSMAC 1802 is still a widely used space-borne processor!

    Rad-hard planning is a non-trivial task, especially when it's new to the designer. Trying to develop a design for a medium-altitude satellite was a challenge that a group I was once involved with was never able to satisfactorily overcome, and we lost the project because of it.

    Watching the evolution of the Shuttle's General Purpose Computers (GPCs) migrate from IBM proprietary processors to 80186s was a painful process. The software was clean-roomed: two separate groups of people developed code with identical functionality, while a third group was responsible for testing. None of the groups was supposed to interact at all, and to the best of my knowledge, they didn't even talk over beer... I suspect the Hubble ports are similar, although the rules for Manned Spaceflight code are more stringent.
  • The BIOS is only a factor up until boot time with a ``real'' OS such as Linux or FreeBSD. I had an old crappy Acer Aspire which thought it could read a 3.7GB drive but definitely could not, hosing it at every step of the way. However, once FreeBSD booted off a floppy, it was just fine. So I created a custom boot floppy and just left it in the drive. :-)

    That said, 8 GB should be your hard limit there, though I don't speak from experience, just hearsay. LBA (and the extended BIOS calls) is required to even access drives above about 8 GB, because that's where the old CHS addressing scheme tops out.
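    The size limits quoted in this subthread fall straight out of the old drive-geometry math. As a quick back-of-the-envelope check (an editorial sketch in C, not from the article), the classic BIOS+IDE intersection gives roughly 504 MiB and the CHS ceiling itself about 8.4 GB:

        /* Back-of-the-envelope CHS limits discussed above (editorial sketch). */
        #include <stdio.h>

        int main(void)
        {
            /* Classic BIOS + IDE intersection: 1024 cylinders x 16 heads x 63 sectors x 512 bytes */
            unsigned long long bios_ide = 1024ULL * 16 * 63 * 512;   /* ~528 million bytes, the "504 MB" limit */
            /* INT 13h CHS ceiling: 1024 cylinders x 255 heads x 63 sectors x 512 bytes */
            unsigned long long chs_max  = 1024ULL * 255 * 63 * 512;  /* ~8.4 billion bytes */

            printf("BIOS+IDE limit: %llu bytes\n", bios_ide);
            printf("CHS ceiling:    %llu bytes\n", chs_max);
            return 0;
        }

    Past that second figure you need LBA plus the extended BIOS calls, which is also why bypassing the BIOS with a real OS (as described above) works.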

  • Not even close ;)

    Beowulf is the hero of an Old English (Anglo-Saxon) epic. The stories of his exploits were traditionally passed along by word of mouth, and were not written down until monks got hold of them and thoroughly injected them with Catholic dogma. The surviving manuscript predates Chaucer's "The Canterbury Tales" by centuries. Reading Chaucer and the Beowulf story will give you a fairly good understanding of how English literature developed (and perhaps explain some of the method to the madness of writers like Shakespeare).

    Why it was decided to name some clustering software after this ancient character, I'm not sure. It's possible that it is explained somewhere on their website or within their documentation.

    Hope I've helped.
  • The DX-200 line of (mobile) phone switching centers uses a highly redundant, highly scalable architecture based on i486 processors. I don't know exactly where they get them, but I know they are selling the DX-200 like hotcakes.



  • "You should keep in mind we don't do Windows, we don't have disks and we don't do Internet," Hubble program director John Campbell joked.

    We don't do Internet
    Well, there goes my chance in space. Unless I can get a powerful transmitter down here that will let me read Slashdot with about a 40-second delay. Oh well, it can't be worse than my connection now.

    We don't have disks
    Disks would have to endure an incredible amount of G-force and an incredible range of cold/heat. Interesting... no wonder they are not feasible; my disks don't even last the trip from home to school.

    I'm not even going to touch the Windows issue,
    Matthew
    _____________________________________
  • Ironically, this means that the shuttle is carrying up into space components for Hubble that are much more modern than its own computer systems, which are amazingly old for similar reasons of known reliability.
  • I don't quite understand what makes a 486 more resistant to radiation.
    I'm not a science expert, but I'm sure the Hubble doesn't need a horrible amount of CPU. Why don't they just use an AMD K6-3 450MHz? Last I heard those were around $50, and I'm sure even NASA can afford that ;)

    Tyler
  • The issue here is that the CPU core actually needs to be redesigned to withstand radiation. Radiation-hardening is more than just slapping a stock i486 with enough lead to block radiation. Some of the redesign might make the die larger because of the techniques involved. I know that somewhere online I read an article about radiation-hardening the Pentium.
  • Haven't done much in the way of network programming, have you? The interconnect is almost *always* slower than what can be done at the endpoints; Hubble's downlink probably benefits a *lot* from doing compression and other processing onboard.

    Besides, you can't control a satellite with ~2-5 second latency. And should the Hubble crash whenever there's an ion storm or other disturbance that breaks communication with the ground station? Think about it: something that expensive and that far away had better have a high degree of autonomy...
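    To make the "compress before you downlink" point concrete, here is a toy run-length encoder in C. It is purely illustrative; Hubble's actual onboard processing pipeline is far more sophisticated and is not described in the article.

        #include <stddef.h>
        #include <stdint.h>

        /* Encode src[0..n) as (count, value) byte pairs into dst.
           Returns the number of bytes written; dst must hold up to 2*n bytes. */
        static size_t rle_encode(const uint8_t *src, size_t n, uint8_t *dst)
        {
            size_t out = 0;
            for (size_t i = 0; i < n; ) {
                uint8_t value = src[i];
                size_t run = 1;
                while (i + run < n && src[i + run] == value && run < 255)
                    run++;
                dst[out++] = (uint8_t)run;   /* run length, capped at 255 */
                dst[out++] = value;          /* the repeated byte */
                i += run;
            }
            return out;
        }

    Even a trivial scheme like this shrinks flat calibration frames dramatically, and every byte saved onboard is a byte that doesn't have to squeeze through the downlink.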
  • That's exactly what chipmakers do: they design a chip and then ramp the frequency up until "she can't take much more 'o this." Then it's time to design a new core: more efficient, smaller die, something like that. Coppermine, anyone?

    This is also the reason that a good old Celeron 300 is the hottest chip Intel ever made: that's the highest that manufacturing technique could go. It's pretty high, too; you can cook meat on that puppy!

    I still think it's cool how after hours of operation, my PIII-500 is much cooler than the voodoo3 main chip.

  • I read a Sun blurb somewhere that the ISS runs all control systems off two x86 Pentium laptops (primary and backup), and that all the astronauts use laptops running Windows.

    -Nick
    Who volunteers to go up there with a Mandrake disk and help those poor astronauts out.
  • The drive is a hunk of RAM (or something not so volatile), because:
    1.) No moving parts. (A service mission costs a couple of million per trip.)

    2.) Any spinning object (like a HD spindle) would impart rotational inertia on the satellite as a whole, and throw you off the star you were trying to look at.

    -Nick(my 4 canadian cents)
  • by Coldraven ( 99514 ) on Friday December 24, 1999 @02:48AM (#1448195)
    The 80486 processor has a smaller instruction set than the Pentium/MMX chips and their variants (AMD, Cyrix). The MMX extensions are optimized more for multimedia functions than for scientific calculations.

    Another issue is floating point; though Intel has come a long way from the notorious Pentium 60.876564 MHz errors, there are still enough quirks in the more advanced calculations (especially in the IBM/Cyrix copycat processors) to make their use by an orbiting telescope impractical.

    As a 486 is essentially a 386 chip with a built-in math coprocessor, its reliable calculation scale, along with the right operating command scheme, would be as close to a RISC-based system as one could get without building a new chip out of whole cloth, or using larger clusters of older Mac (and of course Motorola DragonBall) processors to achieve the same results.
  • I heard, on some reputable news source like TLC, that the Space Station is going to use, or is using, 286s. Because of the custom (and good) programming for the systems, they (the 286s) can be fast enough. Wowzers!

    P.S. - How can the Shuttle astronauts use their Pentium portables?

  • by case_igl ( 103589 ) on Friday December 24, 1999 @02:45AM (#1448197) Homepage
    In reply to a couple of comments asking why such old technology is used: if I remember correctly, it has to do with how radiation can affect the state of the processor and chipsets. Newer processors have increasingly smaller distances between the pathways on the surface of the chip. Since this distance is smaller, radiation can cause the electrical charge to "jump" across. There are ways to minimize these effects with shielding, but as far as I know this is an "add-on" that is not done by the chip manufacturers themselves. I guess this would create a lead time, i.e. a new P3 chip needs x months of work to have a good shielding design developed, and x months to test that... Couple that with the fact that this hardware is designed years before launch and you can see how things can slip.

    Remember also that comparing your desktop to something in orbit that was designed 20 years ago for a specific task isn't as easy as it might appear... Until recently, NASA custom-designed its hardware and software. So the Hubble running at 25 MHz on all custom hardware and software is going to be a lot more efficient than a 25 MHz 486 running Windows, since it's all task-oriented.

    I'm no expert of course, but I haven't seen some of our resident Super-Technical Answer Dudes (tm) post anything yet. (Probably still working on the formulas for their posts, hehe.)

    Case
  • Another reason for using old technology in space is that there is so long between the design of a project, and its approval and funding.

    When the money finally comes, nobody wants to redesign the whole project.

  • Dear Tyler: The simple matter is that space (or outer space, if you will) is a radiation environment. If you send up untested and unprotected computer components in a spacecraft of any sort, you run the real risk of having the components (CPU and/or hard drive) fried by radiation: solar wind, cosmic rays, or any of the other stray radiation particles that stream through outer space. We are protected to a very great degree by the cover of our planet's magnetic field and 100-mile-thick blanket of atmosphere. It is similar to going to sea: you have to have a boat and minimal instruments for navigation to cross the ocean, as well as some protection from the bad weather; in this case, the radiation in space.
  • The ~3 K average temperature in space would be ideal for overclocking!
    I wonder if there have been any attempts at Quake in space... Call up the Grays and let's have a LAN party!!
    NightHawk
  • Back in the day, when I worked at Bell Labs and went to Rutgers University, I knew a guy by the name of Dr. Dan Shanefield (http://www.rci.rutgers.edu/~mjohnm/shanefield.html). He has patents on a bunch of the chips used in telecom equipment back in the late 70s and early 80s. He taught a course in 'Electronic Ceramics' that many EEs and CerEs took. He explained that the lag in chip use in satellite applications is because newer chips tend to be made of plastics and composites. This makes them cheaper for consumer use and does not affect their properties for normal use, but the radiation and temperature gradients of space will destroy chips like that. The older 486 chips are ceramic-metal composites and can withstand some serious temperature gradients.

    The other factor is that the chips (and sometimes the whole solid-state component) are coated with a thin layer of diamond. This increases their resistance to radiation, and to EMP (electromagnetic pulse) in particular. A great deal of the NASA engineering specs are based on mil-spec, so they must be able to withstand attack by EM weapons. Military satellites are supposed to be able to operate after a radiation burst as powerful as a nuclear explosion.

    IMHO, I'm just glad to see that NASA is using off-the-shelf (inexpensive) components rather than custom-designed stuff.
  • by DonGenaro ( 119442 ) on Friday December 24, 1999 @04:14AM (#1448202)
    There are two problems associated with radiation. First are high-energy particles, which over time will blow through the silicon chip, damaging it. The other is charged particles depositing charge on the chip. As charge builds up in areas of the chip, it can cause bits to flip pretty much at random. The higher-end processors of today are much, much more susceptible to this because they have smaller etchings, and hence a smaller charge can cause a bit flip. And as we all know, random bit flips are BAD BAD BAD. So they institute other procedures against this (like having all gates in triplicate and then feeding them to a voting circuit).
  • by Kalgart ( 127560 ) on Friday December 24, 1999 @02:33AM (#1448203)
    The processor linking that is referred to in the article will be NASA's normal triple-redundant system.
    All three CPUs do the processing, and if one of them differs, that one is deemed faulty and dropped offline. This system is also used within the shuttle, although it has a fourth CPU due to the value and vulnerability of the shuttle's cargo.
    How this triple redundancy works in actual practice is a question for someone else to answer.
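    Since the poster leaves the "how" open: one common shape such voting takes is bitwise majority voting across the three results, with any unit that disagrees flagged for possible removal. This is only a minimal editorial sketch in C, not NASA's actual implementation.

        #include <stdint.h>
        #include <stdbool.h>

        /* For every bit position, at least two of the three inputs agree;
           the majority value wins even if one computer has a flipped bit. */
        static uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c)
        {
            return (a & b) | (a & c) | (b & c);
        }

        /* Mark any unit whose result disagrees with the voted value. A real
           system would only drop a unit after repeated, persistent faults. */
        static void tmr_check(uint32_t a, uint32_t b, uint32_t c, bool faulty[3])
        {
            uint32_t v = tmr_vote(a, b, c);
            faulty[0] = (a != v);
            faulty[1] = (b != v);
            faulty[2] = (c != v);
        }

    The nice property of the bitwise form is that the vote itself needs no comparisons or branches, which is also roughly how it is done when the triplication happens in hardware rather than software.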
  • A lot of posters are saying the previous computer was i386-based. I didn't design or build it, but the NASA page [shuttlepresskit.com] states that the previous computer was designed in the late 1970s and that it was difficult and expensive to maintain the original software. It does have an i386-based coprocessor that was added at a later date.

    The press release states that the new computer allows the use of a more modern programming language. Presumably, they are using something like VxWorks for the OS?

    Does anyone recognize the name of the old computer mentioned in the link, or know any more details about the old (or new) computer? I wish NASA would be more open with details -- the average Slashdotter knows more about his mobo than NASA is telling us about the Hubble's brains!
  • by Anonymous Cowpoop ( 128906 ) on Friday December 24, 1999 @06:37AM (#1448205)
    (I know this is a bit late in the thread, but...)
    The old processors were 386s with 0.5 MB of RAM apiece. The new 486s have 2 MB of RAM apiece. Also, I think the new processors are all together in one assembly, because an MSNBC article [msnbc.com] says there were approximately 1000 pins in the unit they had to connect.
  • But your own computer wouldn't work out in space ;) It's really hard to get those high-tech things working in really nasty environments.
  • And they don't do WINDOWS. Thank God. The processors used in space are specially graded and hardened. These chips work well outside of the average environment. Today's new machines are built for controlled environments and are much more sensitive to their surroundings. The OS for the CPUs on Hubble should be nothing less than good assembly code: much more efficient than the consumer stuff, and reliable.

Say "twenty-three-skiddoo" to logout.

Working...