
Looking At the Hardware and Software of NASA's New Horizons (imgtec.com) 76

alexvoica writes: Last week we learnt that Pluto has blue skies and water ice thanks to a series of high-resolution images provided by the New Horizons probe. But how is the probe taking these photographs and sending them back to NASA? What hardware and software systems are inside, and who built them? Luckily, the New Horizons engineering team kindly answered these questions (and many others) in a detailed interview.

Here are some fun facts from my discussion with the engineers.

The chipset: It might sound strange to some, but NASA used to be a chip maker. Before using standard MIPS or Intel CPUs for probes like New Horizons, NASA had to design custom-built processors since the commercial solutions available at the time were not designed to handle the intense workloads of space travel. Inside New Horizons we find a radiation-hardened, MIPS-based Mongoose-V processor worth $40,000 apiece and built using a grant from the Goddard Space Flight Center.

The camera: New Horizons has a multispectral 1-megapixel camera; sending a single 1200 x 900 image back to Earth takes approximately 3-4 hours.

The comms: Forget 4G LTE, New Horizons uses the very best! The probe relies on NASA's Deep Space Network (DSN) to make its long-distance calls. DSN is the largest and most sensitive scientific telecom system in the world and was also used to guide the astronauts aboard the Apollo 13 mission back to Earth. Tom Hanks and Kevin Bacon remain forever grateful.

The memory: New Horizons includes 16GB of flash memory, which provides plenty of storage space for photos and other scientific data.

The operating system: New Horizons runs on Nucleus, a popular operating system designed by Mentor Graphics. Coincidentally, Nucleus is also at the heart of the ARTIK 1 platform for IoT launched by Samsung only a few months ago.
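As a quick sanity check on the quoted 3-4 hour figure, here is a back-of-the-envelope calculation in C. The bit depth (~12 bits per pixel) and the downlink rate (on the order of 1 kbit/s at Pluto's distance) are assumptions added for illustration, not numbers taken from the article.

    /* Rough estimate of how long one 1200 x 900 image takes to downlink.
     * Assumed values (not from the article): ~12 bits per pixel and a
     * ~1 kbit/s downlink rate at Pluto range. */
    #include <stdio.h>

    int main(void)
    {
        const double width = 1200.0, height = 900.0;
        const double bits_per_pixel = 12.0;  /* assumed raw sample depth */
        const double downlink_bps = 1000.0;  /* assumed ~1 kbit/s */

        double total_bits = width * height * bits_per_pixel;
        double hours = total_bits / downlink_bps / 3600.0;

        printf("Image size:    %.1f Mbit\n", total_bits / 1e6);
        printf("Downlink time: %.1f hours\n", hours);
        /* ~13 Mbit at ~1 kbit/s works out to ~3.6 hours, which is
         * consistent with the "approximately 3-4 hours" in the summary. */
        return 0;
    }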

  • NASA was buying Intel 486 processors for the space shuttles from eBay.
  • by willworkforbeer ( 924558 ) on Thursday October 15, 2015 @11:12AM (#50735899)
    relies on NASA's Deep Space Network (DSN) to make its long-distance calls.

    Good to see the alliance with Bajor is paying dividends.
  • by Anonymous Coward

    Why not just use an off-the-shelf processor and put the PCBs in shielded, shock mounted enclosures?

    • Why not just use an off-the-shelf processor and put the PCBs in shielded, shock mounted enclosures?

      Allow us to introduce you to the government purchasing system... Before viewing prices, we suggest you put your tray tables in the upright position so that there are no obstacles when you reflexively smack yourself in the forehead.

      • Hell, I'm wondering why NASA doesn't just send up an iPhone and pay the LD charge? Or doesn't NASA trust an Irish business?
    • Re:US $40K processor (Score:5, Informative)

      by avandesande ( 143899 ) on Thursday October 15, 2015 @11:21AM (#50736001) Journal

      Because the shielding you are thinking of does nothing in space. Radiation hardening BTW has nothing to do with shielding.

      https://en.wikipedia.org/wiki/... [wikipedia.org]

    • Re:US $40K processor (Score:5, Informative)

      by Rei ( 128717 ) on Thursday October 15, 2015 @11:24AM (#50736025) Homepage

      "shielded, shock mounted enclosures" aren't going to do anything against 1+ GeV protons.

    • by Anonymous Coward

      Because then you have to launch a shielded, shock-mounted enclosure to Pluto. Which probably costs > 40k. And it would probably be less reliable.

      It's just about using the right tool for the job.

    • by sjbe ( 173966 ) on Thursday October 15, 2015 @11:31AM (#50736087)

      Why not just use an off-the-shelf processor and put the PCBs in shielded, shock mounted enclosures?

      Several reasons. 1) Weight matters. Adding shielding and enclosures adds weight and thus cost. Sometimes the cost of the component is dwarfed by the cost to launch said component. 2) Off the shelf hardware doesn't always work for applications like these for a variety of reasons. Sometimes it's fine but not always. 3) A lot of this stuff was developed a LONG time ago and has to work with a lot of legacy systems. Off the shelf solutions don't always work for some of the problems they face. 4) Once they have a proven design, it is a non-trivial task to get a new piece of hardware qualified.

      • And let's be honest, a $40k processor isn't really going to drive up the price of the project very much.
        • And let's be honest, a $40k processor isn't really going to drive up the price of the project very much.

          Hell, 15 years ago that was the price of a Silicon Graphics Octane workstation which I had on my desk. You are quite correct that while $40K is a lot of money, it isn't really that much compared to the budgets of the projects such chips get used for. The New Horizons mission cost about $700 million if memory serves. For a project that you have to get right the first time, $40K for a proven processor is kind of a steal actually.

    • Re:US $40K processor (Score:5, Interesting)

      by cnaumann ( 466328 ) on Thursday October 15, 2015 @11:39AM (#50736179)

      Radiation shielding in space is harder than you might think. You can't just add a thin lead sheet or other dense material around critical circuits and be done. When photons get above a certain energy level, they pretty much blast through anything dense. What you really need to shield against those types of high-energy photons is a _really_ thick layer of low-density shielding, say several miles of gas at about 1 atmosphere of pressure. But this is simply not practical on today's spacecraft. There are other approaches to shielding, such as layering high- and low-density materials, but in a spacecraft shielding is always limited by volume and mass constraints.

      The other option is to deal with radiation by building chips with redundancy. The idea is that if one part of the circuit gets temporarily zapped, two other parts are still functional and the majority is probably the correct answer. You also build the electronics so that if everything goes south they can reboot and recover.

      NASA knows what they are doing here!
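      A minimal sketch of the majority-vote (triple modular redundancy) idea described above, assuming three copies of the same 32-bit result. This is purely illustrative and not how any particular NASA chip implements its voting logic.

      /* Triple modular redundancy: compute the same value three times and
       * keep whatever at least two of the copies agree on, bit by bit. */
      #include <inttypes.h>
      #include <stdio.h>

      static uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c)
      {
          /* Each output bit is set if at least two of the inputs have it set. */
          return (a & b) | (a & c) | (b & c);
      }

      int main(void)
      {
          uint32_t good    = 0xCAFEF00D;
          uint32_t flipped = good ^ (1u << 7);   /* one copy hit by a bit flip */

          /* The corrupted copy is outvoted; this prints CAFEF00D. */
          printf("%08" PRIX32 "\n", tmr_vote(good, good, flipped));
          return 0;
      }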

      • by necro81 ( 917438 )
        My favorite is the "vault" they constructed for the electronics on the Juno mission to Jupiter. Because that mission regularly dips into the radiation belts around the planet, even the best rad-hardened processor would not survive. Over the mission lifetime, it'll have to survive the equivalent of 100 million dental x-rays. NASA's solution: 200 kg of titanium. (Lead would have been too soft to survive launch. Other materials, such as tungsten, are relatively difficult to work with. Titanium is a well-
    • A couple of things to consider

      1: For certain types of radiation, shielding can actually make things worse. Very high-energy particles tend to go right through matter without interacting at all, but when they *do* hit something they can create a storm of lower-energy particles. Sometimes these particles can have a higher probability of causing problems than the original high-energy particle did.
      2: Getting mass out of the gravity well is expensive. $40K may seem like a lot to us mere mortals but it's trivial co

    • by sjames ( 1099 )

      There is shielding, but to protect the hardware adequately to use regular server parts, it would be huge and heavy.

  • by Kagato ( 116051 ) on Thursday October 15, 2015 @11:14AM (#50735919)

    At the dawn of Silicon Valley, the fabs counted on NASA and military orders. For quite a long time they could count on 70+% of production going toward NASA and military contracts. Almost no one else could afford the products at the time. Eventually Intel broke that mold by making a huge bet that they could slash product costs and that a wave of volume would follow to make the price point profitable. It was a huge risk.

    • by Tablizer ( 95088 )

      That's true of much of history. Metallurgy was advanced largely in the process of making weapons; as were rockets, airplanes, (early) electronic computers, compilers, etc.

      War and porn are the two pillars of R&D, early adopters, and funding. Technology depends on wankers or things that work/look like wankers.

      The resources and vast testing behind the V2 rocket were incredible for the time. Rockets went from toys to space-capable in about 5 years.

  • That summary (Score:5, Insightful)

    by nitehawk214 ( 222219 ) on Thursday October 15, 2015 @11:17AM (#50735951)

    Seriously, what the fuck is going on there?

    The comms: Forget 4G LTE, New Horizons uses the very best!
    Tom Hanks and Kevin Bacon remain forever grateful.

    • by Tablizer ( 95088 )

      Since they were using commercial services as comparisons, I wonder why they didn't make reference to Comcast:

      sending a single 1200 x 900 image back to earth takes approximately 3-4 hours

    • I'm still thinking an array of 9 smart phones could do the job. If one gets zapped by a particle or 2, then remote reboot that phone.
      • by 0123456 ( 636235 )

        I remember a plan some years ago to use a cube of cheap processors. Most of the time you'd lose one or two to a single high-energy particle event; in the expected worst case you'd lose three, and the rest would keep running while the affected ones rebooted.

        Don't think it ever went anywhere.

        Going back to the Apollo Guidance Computer, it used checkpointing so you could reboot it at any time and it would just continue where it left off. The developers used to randomly reboot while running tests, and the boot was so fast
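        A rough sketch of that checkpoint-and-restart pattern in C. The file-based storage and the particular state structure are stand-ins for illustration only; the Apollo Guidance Computer used core memory and restart tables, not files.

        /* Checkpointing: persist enough state that a reboot can pick up
         * roughly where the program left off instead of starting over. */
        #include <stdio.h>

        struct checkpoint { long step; double accumulated; };

        static int load_checkpoint(struct checkpoint *cp)
        {
            FILE *f = fopen("state.ckpt", "rb");
            if (!f) return 0;                     /* no checkpoint yet */
            int ok = fread(cp, sizeof *cp, 1, f) == 1;
            fclose(f);
            return ok;
        }

        static void save_checkpoint(const struct checkpoint *cp)
        {
            FILE *f = fopen("state.ckpt", "wb");
            if (!f) return;
            fwrite(cp, sizeof *cp, 1, f);
            fclose(f);
        }

        int main(void)
        {
            struct checkpoint cp;
            if (!load_checkpoint(&cp)) {          /* start fresh if nothing saved */
                cp.step = 0;
                cp.accumulated = 0.0;
            }

            for (; cp.step < 1000000; cp.step++) {
                cp.accumulated += 1.0 / (cp.step + 1);
                if (cp.step % 10000 == 0)
                    save_checkpoint(&cp);         /* a reboot loses at most 10k steps */
            }
            printf("done: %f\n", cp.accumulated);
            return 0;
        }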

    • by Yunzil ( 181064 )

      Yeah, they forgot Bill Paxton.

    • The comms: Forget 4G LTE, New Horizons uses the very best!

      The fine article compares the spacecraft's systems to a few cellphones. Thus, the submitter decided to take the insightful analogy and make it funny. We can laugh at him now.

      • I am the author of the article, the submitter and the person who is replying to your comment. I guess comedy is not my strong suit?
        • by KGIII ( 973947 )

          Your guess is correct. It's not my strong suit either.

        • I am the author of the article, the submitter and the person who is replying to your comment. I guess comedy is not my strong suit?

          The article was well-written and I found only one glaring problem with it: the lack of a label on the X-axis of the CPU speed graph. Always label your axes!

          The second paragraph of the /. summary was written in a different style than were the first paragraph and the fine article. It is sloppy, colloquial, and even contains an obvious product placement. In contrast to the fine article, the /. summary was insulting to the reader. That might be fine on proletariat websites, but on a site populated by the /. d

          • Thanks for the feedback. I specifically used the words "fun" and "fact" in the second paragraph since I was trying to mix the two. I can see that it wasn't to everyone's liking, I'll try to keep it more dry and factual next time.
            • ...I can see that it wasn't to everyone's liking...

              You will have a hard time finding a more fastidious audience than on Slashdot!

  • by Anonymous Coward

    We we learnt us some good English too while we were at it?

  • by Anonymous Coward

    corporate welfare. For the price of one of those CPUs, we could provide nearly a thousand months of WIC. Of course, CONservatives would rather starve a child than give-up their welfare.

  • by kbonin ( 58917 ) on Thursday October 15, 2015 @11:26AM (#50736043)

    A decade ago I spent about two years on an embedded system running Nucleus, and spent several months of that fixing bugs in the threading primitives, including the core spin-lock mutex, which worked about 99.999% of the time under low-load conditions but whose failure rate rose rapidly with load, to about 2%. So much fun. Parts of that codebase looked like they were written by very low-skill programmers.
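    For reference, here is roughly what a correct spin lock looks like using C11 atomics. This is a generic sketch, not Nucleus code or its actual API, and a real RTOS mutex also needs priority handling and scheduler integration; the key point is that test-and-set has to be a single atomic operation rather than a separate load and store (the classic race that only shows up under load).

    #include <stdatomic.h>

    typedef struct { atomic_flag locked; } spinlock_t;
    #define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

    static void spin_lock(spinlock_t *l)
    {
        /* Atomically set the flag and learn its previous value in one step. */
        while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
            ;   /* busy-wait until the flag was previously clear */
    }

    static void spin_unlock(spinlock_t *l)
    {
        atomic_flag_clear_explicit(&l->locked, memory_order_release);
    }

    /* Usage: */
    static spinlock_t counter_lock = SPINLOCK_INIT;
    static long shared_counter;

    void increment_counter(void)
    {
        spin_lock(&counter_lock);
        shared_counter++;            /* critical section */
        spin_unlock(&counter_lock);
    }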

    • by phantomfive ( 622387 ) on Thursday October 15, 2015 @12:52PM (#50736779) Journal
      Yeah I came here to say the exact same thing.
    Nucleus is NOT the embedded OS I would use for anything serious or, really, for anything.
    There are so many other good options. Micro C OS, VxWorks, and QNX all come to mind as better options.
  • by raymorris ( 2726007 ) on Thursday October 15, 2015 @12:11PM (#50736461) Journal

    I'm curious what the development process is, especially for the software. It seems to me it would be very bad to have a bug show up while you're trying to image Pluto. It has to be reliable and bug-free, to the greatest extent possible. What techniques and processes are used to have a high degree of confidence that the code they are developing is absolutely correct? Are some of those processes applicable to other types of software development in which finding and fixing bugs is expensive, though not nearly as expensive as it is for a space probe?

    • One of the biggest techniques they use is code reuse. The project managers don't like to try anything that hasn't been used on another probe or something.
      Another technique they use is extreme simplicity. Try to avoid if statements in for loops, as an example.
      Another technique they use is redundancy.
      Another technique is restarting often (which sounds weird, but it's basically what we do every time a web request comes in... each request starts with clean state; see the sketch below).

      These techniques are discussed in the bo
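      A small sketch of the restart-often / clean-state idea, applied to a made-up control loop: each cycle rebuilds its working state from scratch, so a corrupted value cannot survive past one cycle. All function and structure names here are invented for illustration.

      #include <string.h>

      struct cycle_state {
          double target_attitude[3];
          double sensor_reading[3];
      };

      /* Hypothetical hardware interfaces, stubbed out for the sketch. */
      static void read_sensors(double out[3])          { out[0] = out[1] = out[2] = 0.0; }
      static void command_thrusters(const double e[3]) { (void)e; }

      void control_cycle(const double target[3])
      {
          struct cycle_state s;
          memset(&s, 0, sizeof s);                  /* every cycle starts clean */
          memcpy(s.target_attitude, target, sizeof s.target_attitude);

          read_sensors(s.sensor_reading);

          double err[3];
          for (int i = 0; i < 3; i++)               /* simple loop body, no branches */
              err[i] = s.target_attitude[i] - s.sensor_reading[i];

          command_thrusters(err);
      }   /* s goes out of scope: nothing carries over to the next cycle */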
    • by Anonymous Coward

      My project wasn't NASA, but NGAS. We start with high-level design requirements (must take a picture at this angle, must measure from this to this brightness linearly, must have this resolution on the surface, must measure in these particular spectra) and then break it down level by level. We have 5-6 levels of design documents, and every input and output at every level of design is very well described. By the time the design documents are finished, programming is generally very simple. We then do robust

    • A lot of it is (was?) governed by SEI CMM https://en.wikipedia.org/wiki/... [wikipedia.org]. Like all things, implemented well, it gave you a continuously improving development environment, implemented badly (at the behest of PHBs) it resulted in death by documentation and process.
    • For the Space Shuttle, this article [fastcompany.com] describes the process pretty well. Of course, the first release of the Shuttle flight software cost half a billion dollars.

    • Testing, well-specced requirements, testing, and then more testing.

      Every piece of the software is tested to exhaustion by a group separate from the one that wrote it.

      During the Rogers Commission to investigate the Challenger disaster, Richard Feynman was critical of NASA's approach to engineering. The only group he had good things to say about was the software team.

      From: http://duartes.org/gustavo/blog/post/richard-feynman-challenger-disaster-software-engineering/ [duartes.org]

      The software is checked very carefully in a bottom-up fashion. First, each new line of code is checked, then sections of code or modules with special functions are verified. The scope is increased step by step until the new changes are incorporated into a complete system and checked. This complete output is considered the final product, newly released. But completely independently there is an independent verification group, that takes an adversary attitude to the software development group, and tests and verifies the software as if it were a customer of the delivered product.

      To summarize then, the computer software checking system and attitude is of the highest quality. There appears to be no process of gradually fooling oneself while degrading standards so characteristic of the Solid Rocket Booster or Space Shuttle Main Engine safety systems. To be sure, there have been recent suggestions by management to curtail such elaborate and expensive tests as being unnecessary at this late date in Shuttle history.

      ...And that second quote pretty much answers t
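      To make the independent-verification idea concrete, here is a tiny, hypothetical example in C: the test is written against the module's specification rather than its source. clamp_rate() is an invented example, not real flight code.

      #include <assert.h>

      /* Spec under test: output equals input, limited to [-max, +max]; max > 0. */
      double clamp_rate(double rate, double max)
      {
          if (rate >  max) return  max;
          if (rate < -max) return -max;
          return rate;
      }

      int main(void)
      {
          /* Boundary and sign cases taken straight from the requirement text. */
          assert(clamp_rate( 0.0, 1.0) ==  0.0);
          assert(clamp_rate( 2.0, 1.0) ==  1.0);
          assert(clamp_rate(-2.0, 1.0) == -1.0);
          assert(clamp_rate( 1.0, 1.0) ==  1.0);
          return 0;
      }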

  • I'd like to ask what mistakes or flaws caused the "overload" problem a week before the main encounter. I would think a simulator (mirror) would catch it. It seems either they didn't use a simulator, or the simulator somehow got out of sync with the real probe. I suppose fake data may take up a different amount of storage space than the real data does on the probe, and I wonder if that was the cause of the difference.

    Also, I wonder about the emotions of the people involved when the probe locked up 1 week from The Big Game.

    • Amusingly, your comment reminded me of the opening scene with Kevin Bacon in it; the movie is titled "Animal House".
  • NASA had to design custom-built processors since the commercial solutions available at the time were not designed to handle the intense workloads of space travel.

    Workloads? I do know (I worked at JPL for six years in the late '80s and early '90s) that commercial chips were not designed to handle the harsh environment of space.

    Inside New Horizons we find a radiation-hardened, MIPS-based Mongoose-V processor worth $40,000 apiece.

    A thing is worth exactly what someone else will pay for it. To me that processor isn't worth a hill of beans, even if NASA may have paid $40K for them. Back when I was at JPL one of the rad hardened CPUs that was available, IIRC, was the NSC 32000 – a NatSemi part that was a near clone of the Motorola 68000.
