Topics: NASA, Government, Open Source

NASA To Catalog and Release Source Code For Over 1,000 Projects

An anonymous reader writes "By the end of next week, NASA will release a master catalog of over 1,000 software projects it has conducted over the years and will provide instructions on how the public can obtain copies of the source code. NASA's goal is to eventually 'host the actual software code in its own online repository, a kind of GitHub for astronauts.' This follows NASA's release of the code running the Apollo 11 Guidance Computer a few years back. Scientists not affiliated with NASA have already adapted some of NASA's software. 'In 2005, marine biologists adapted the Hubble Space Telescope's star-mapping algorithm to track and identify endangered whale sharks. That software has now been adapted to track polar bears in the Arctic and sunfish in the Galapagos Islands.' The Hubble Space Telescope's scheduling software has reportedly also been used to schedule MRIs at hospitals and as control algorithms for online dating services. The possibilities could be endless."
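
For the curious, "adapting a star-mapping algorithm to whale sharks" works because a star field and a shark's spot pattern are both just point sets, which can be matched with geometric invariants such as triangle side ratios. The sketch below is a minimal illustration of that general family of techniques, not NASA's or the biologists' actual code; the coordinates, tolerance, and structure are assumptions made up for the example.

    /* Minimal sketch: matching two point patterns (star fields, or the spot
     * pattern on a whale shark's flank) by comparing triangle side-ratio
     * signatures, which are invariant under translation, rotation and
     * uniform scaling.  Brute force, illustrative data only. */
    #include <math.h>
    #include <stdio.h>

    typedef struct { double x, y; } Point;

    static double dist(Point a, Point b) {
        return hypot(a.x - b.x, a.y - b.y);
    }

    /* Signature of a triangle: its two shorter sides divided by the longest. */
    static void signature(Point a, Point b, Point c, double sig[2]) {
        double s[3] = { dist(a, b), dist(b, c), dist(c, a) };
        for (int i = 0; i < 2; i++)                  /* sort, longest first */
            for (int j = i + 1; j < 3; j++)
                if (s[j] > s[i]) { double t = s[i]; s[i] = s[j]; s[j] = t; }
        sig[0] = s[1] / s[0];
        sig[1] = s[2] / s[0];
    }

    /* Count triangles in pattern A whose signature matches some triangle in
     * pattern B; a high count suggests the two patterns are the same object. */
    static int count_matches(const Point *A, int na, const Point *B, int nb,
                             double tol) {
        int matches = 0;
        for (int i = 0; i < na; i++)
          for (int j = i + 1; j < na; j++)
            for (int k = j + 1; k < na; k++) {
                double sa[2], sb[2];
                signature(A[i], A[j], A[k], sa);
                int found = 0;
                for (int p = 0; p < nb && !found; p++)
                  for (int q = p + 1; q < nb && !found; q++)
                    for (int r = q + 1; r < nb && !found; r++) {
                        signature(B[p], B[q], B[r], sb);
                        if (fabs(sa[0] - sb[0]) < tol && fabs(sa[1] - sb[1]) < tol)
                            found = 1;
                    }
                matches += found;
            }
        return matches;
    }

    int main(void) {
        /* B is A rotated 90 degrees, scaled by 2 and shifted, so all four
         * possible triangles should match. */
        Point A[] = { {0, 0}, {4, 1}, {2, 5}, {6, 3} };
        Point B[] = { {10, 10}, {8, 18}, {0, 14}, {4, 22} };
        printf("matching triangles: %d of 4\n", count_matches(A, 4, B, 4, 0.05));
        return 0;
    }

A production photo-identification matcher would add tolerance for missing and spurious points plus a verification stage; this only shows the invariant-matching core.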

Comments:
  • by fygment ( 444210 ) on Friday April 04, 2014 @10:46AM (#46660785)

    TFA contains links to Wired articles. Couldn't find a link to a NASA catalogue, so TFA is a 'heads up' of what is to come, yes?

    Here's the link to the DARPA catalogue: http://www.darpa.mil/OpenCatal... [darpa.mil]

  • Re:Wait... What? (Score:5, Informative)

    by Required Snark ( 1702878 ) on Friday April 04, 2014 @10:51AM (#46660807)
    Factually incorrect: "Plus they are a heck a lot more reliable then they were 20+ years ago."

    Over twenty years ago there were computers whose hardware and software were designed to work together. At least two of these systems had extra tag bits in memory that defined what each memory word contained. Specifically, I am talking about Symbolics Lisp Machines and the Burroughs Large Systems that natively ran Algol. I worked on both of these systems, and they were intrinsically more reliable than any systems I know of today.

    Because of the tagged memory, they had hardware protection against a large class of errors that current systems encounter all the time. It was possible to find the bugs and eliminate them so they did not recur. It also protected against undetected errors, which are a true nightmare. (A toy model of the tagged-memory idea is sketched after the comments below.)

    Having hardware and software designed at the same time results in a better product. This is even more significant when the system is designed to run a specific high-level language. Everything has fewer bugs.

    Heck, Cray machines had ECC memory: SECDED, Single Error Correction, Double Error Detection. They needed it because memory was not as reliable as it is today, yet now you are lucky to get even a parity bit. All that computation is going on, and no one has a clue whether the results are silently corrupted. (A toy SECDED example is also sketched after the comments below.)

    As an industry we have gone backwards. That's not an opinion, it's an observation.
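
Regarding the tagged-memory point in the comment above, here is a toy software model of the idea, a sketch only: the tag names, word layout, and trap behavior are illustrative assumptions, not the actual Symbolics or Burroughs designs. The point is simply that when every word carries a tag, a mismatched operation traps immediately instead of silently producing garbage.

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    /* Each memory word carries a tag describing what it holds; the "hardware"
     * (here, the helper functions) checks the tag on every operation. */
    typedef enum { TAG_INT, TAG_POINTER, TAG_INSTRUCTION } Tag;

    typedef struct {
        Tag      tag;   /* checked on every access */
        uint64_t bits;  /* the payload */
    } Word;

    static void trap(const char *why) {
        fprintf(stderr, "hardware trap: %s\n", why);
        exit(1);
    }

    /* Arithmetic is only defined on integer-tagged words. */
    static Word add(Word a, Word b) {
        if (a.tag != TAG_INT || b.tag != TAG_INT)
            trap("ADD on non-integer word");
        return (Word){ TAG_INT, a.bits + b.bits };
    }

    /* Dereferencing is only defined on pointer-tagged words. */
    static Word load(const Word *memory, Word addr) {
        if (addr.tag != TAG_POINTER)
            trap("LOAD through non-pointer word");
        return memory[addr.bits];
    }

    int main(void) {
        Word memory[4] = {
            { TAG_INT, 41 },
            { TAG_INT, 1 },
            { TAG_INSTRUCTION, 0xDEADBEEF },
            { TAG_POINTER, 0 },          /* points at memory[0] */
        };

        Word sum = add(memory[0], memory[1]);          /* fine: 41 + 1 */
        printf("sum = %llu\n", (unsigned long long)sum.bits);

        Word target = load(memory, memory[3]);         /* fine: pointer deref */
        printf("loaded = %llu\n", (unsigned long long)target.bits);

        add(memory[0], memory[2]);   /* integer + instruction: trapped at once
                                        instead of corrupting data silently */
        return 0;
    }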
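
And to make the SECDED point concrete, below is a toy Hamming(8,4) single-error-correcting, double-error-detecting code, again a sketch under simplifying assumptions: real ECC memory protects 64-bit words with 8 check bits, but the behavior, repairing one flipped bit and at least reporting two, is the same.

    #include <stdio.h>
    #include <stdint.h>

    static int bit(uint8_t w, int i) { return (w >> i) & 1; }

    /* Encode 4 data bits into an 8-bit code word.
     * Positions: 0 = overall parity, 1,2,4 = Hamming parity, 3,5,6,7 = data. */
    static uint8_t secded_encode(uint8_t data) {
        int d1 = bit(data, 0), d2 = bit(data, 1), d3 = bit(data, 2), d4 = bit(data, 3);
        int p1 = d1 ^ d2 ^ d4;   /* covers positions 3,5,7 */
        int p2 = d1 ^ d3 ^ d4;   /* covers positions 3,6,7 */
        int p4 = d2 ^ d3 ^ d4;   /* covers positions 5,6,7 */
        uint8_t cw = (uint8_t)((p1 << 1) | (p2 << 2) | (d1 << 3) |
                               (p4 << 4) | (d2 << 5) | (d3 << 6) | (d4 << 7));
        int p0 = 0;              /* overall parity over the other seven bits */
        for (int i = 1; i < 8; i++) p0 ^= bit(cw, i);
        return (uint8_t)(cw | p0);
    }

    /* Decode: 0 = clean, 1 = single error corrected, 2 = double error detected
     * (data unreliable).  *data receives the recovered 4 data bits. */
    static int secded_decode(uint8_t cw, uint8_t *data) {
        int s1 = bit(cw, 1) ^ bit(cw, 3) ^ bit(cw, 5) ^ bit(cw, 7);
        int s2 = bit(cw, 2) ^ bit(cw, 3) ^ bit(cw, 6) ^ bit(cw, 7);
        int s4 = bit(cw, 4) ^ bit(cw, 5) ^ bit(cw, 6) ^ bit(cw, 7);
        int syndrome = s1 | (s2 << 1) | (s4 << 2);  /* position of a single flip */
        int overall = 0;
        for (int i = 0; i < 8; i++) overall ^= bit(cw, i);

        int status = 0;
        if (syndrome != 0 && overall != 0) {        /* one flip: repair it */
            cw ^= (uint8_t)(1 << syndrome);
            status = 1;
        } else if (syndrome == 0 && overall != 0) { /* flip in the parity bit itself */
            cw ^= 1;
            status = 1;
        } else if (syndrome != 0 && overall == 0) { /* two flips: detect only */
            status = 2;
        }
        *data = (uint8_t)(bit(cw, 3) | (bit(cw, 5) << 1) |
                          (bit(cw, 6) << 2) | (bit(cw, 7) << 3));
        return status;
    }

    int main(void) {
        uint8_t out, cw = secded_encode(0xB);                  /* store 1011 */
        int status;

        status = secded_decode(cw, &out);
        printf("clean:     status %d, data 0x%X\n", status, (unsigned)out);

        status = secded_decode(cw ^ (1 << 5), &out);           /* one bit flips */
        printf("one flip:  status %d, data 0x%X\n", status, (unsigned)out);

        status = secded_decode(cw ^ (1 << 5) ^ (1 << 2), &out); /* two bits flip */
        printf("two flips: status %d (data unreliable)\n", status);
        return 0;
    }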
