A.I. Software To Command NASA Mission

EccentricAnomaly writes: "NASA's Three Corner Sat mission will use artificial intelligence to command three formation-flying spacecraft. JPL has a press release here. And there's more information about the A.I. software here. The software is apparently the next generation of the software used for Deep Space One."
  • Interested readers are recommended to read Steven Minton's papers Solving large-scale constraint satisfaction and scheduling problems using a heuristic repair method and Minimizing conflicts: a heuristic repair method for constraint satisfaction and scheduling problems.

    Constraint satisfaction is a framework for modeling many NP-hard scheduling and planning problems. A classic example is the n-queens problem, and a famous benchmark problem is the radio link frequency assignment problem (RLFAP). I don't know if NASA's autonomous spacecraft planning algorithm is based on the constraint satisfaction model, but it's likely. (Darn, I got a NOT FOUND error when trying to download the papers...)

    Iterative repair is useful for getting a new solution based on the previous one after the environment changes. Since it doesn't solve the problem from scratch each time, it can be very fast. But one of the big problems is that convergence has not been proven, so it is possible that the algorithm will search for a solution forever in vain (although this is very unlikely). There is ongoing research on explaining the behavior of iterative repair with mathematical proofs, such as re-modeling it as a Lagrangian multiplier method.

    Although the definition of AI is controversial, as seen in the previous discussion, search has long been regarded as the underlying technique for many AI algorithms. It can't learn, and it can't "think", but it can find a solution quickly and look like it is thinking, just as a chess program does.
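    To make the iterative-repair idea concrete, here is a minimal sketch of Minton-style min-conflicts repair on n-queens (Python; illustrative only - the function names are mine, and NASA's planner is certainly far more elaborate):

    import random

    def min_conflicts_nqueens(n, max_steps=10000):
        # One queen per column; board[c] is the row of the queen in column c.
        board = [random.randrange(n) for _ in range(n)]

        def conflicts(col, row):
            # Count queens in other columns attacking square (row, col).
            return sum(1 for c in range(n) if c != col and
                       (board[c] == row or abs(board[c] - row) == abs(c - col)))

        for _ in range(max_steps):
            conflicted = [c for c in range(n) if conflicts(c, board[c]) > 0]
            if not conflicted:
                return board        # repaired: a conflict-free assignment
            col = random.choice(conflicted)
            # Repair step: move the offending queen to a minimum-conflict row.
            board[col] = min(range(n), key=lambda r: conflicts(col, r))
        return None                 # no convergence guarantee, as noted above

    Note how each pass only repairs the current assignment rather than re-solving from scratch, which is exactly why iterative repair is fast after small changes in the environment.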

  • Aah, that's the problem with Voyager 6:
    they forgot this line:

    25 avoid $WORMHOLES_LEADING_TO_THE_OTHER_SIDE_OF_THE_GALAXY

    A very critical oversight, I might add.
  • by alewando ( 854 ) on Tuesday May 29, 2001 @08:39PM (#189610)
    That's the important question to ask yourself when you consider supporting AI-controlled spaceflight: would you trust it with your own life? Would you get into a spacecraft, knowing full well that what stands between you and the harsh vacuum of space is a mere handful of transistors and the linguistic excretions of a caffeine-loaded microserf? Whether these missions themselves are manned or unmanned makes no difference, because:
    1. If it's used for unmanned flight today, then it'll be used for manned flight tomorrow. If you don't look forward to that tomorrow, then you should kill its seeds today.
    2. The safety of unmanned crafts is still critical, because reckless spaceflight is expensive, in time and money. NASA budgets are shrinking every day, so any step away from "Faster, cheaper, better" is a step in the wrong direction.

    And the problems don't stop there. What about the labor disputes that are bound to arise? A greater emphasis on computer-controlled spaceflight will inevitably result in cutbacks in personnel, as living, breathing astronauts are replaced with cyborg equivalents. And don't fool yourself by thinking they'll still need technicians to manage the cyborgs; there's a huge difference between a non-prescribing technician and a board-certified specialist surgeon, and there'll be a huge difference between a greasy fellow with a monkey wrench and today's red-blooded military men and astronauts. Besides, would the cyborgs even allow humans to tend to them? Never. It would undercut their quest for world domination, and we'd learn the secrets of the rayguns they point at our heads and pancreases from global satellites in low-earth orbit.

    No. A thousand times no! AI has its place: on my desktop, providing consumers with cheerful and prompt advice concerning spreadsheets and form letters. It is certainly not in space, where a rogue AI would have free rein to decide to change course or otherwise alter its mission. You may think it's fine to send an unregulated robotic probe off to Mars to collect samples, but you won't be laughing when that robot claims Mars in the name of cybernetics and starts broadcasting communistic Mars Free Radio signals at our precious bodily television bands.
  • Originally A.I. was a goal to achieve a "thinking machine", one that didn't require outside input

    If by "outside input" you mean preprogramming, then that's a meaningless definition. Everything is programmed, in hardware or software, through easily understood symbolic algorithms, opaque matrices, or bundles of neurons.

    as far as strictly defined by academia today, yes, you're absolutely correct, A.I. is just solving problems

    The definition is vague, but it's narrower than just solving problems. A crowbar solves problems, but few would call it AI. Originally, AI was a term to describe making people out of sand. Some core problems were identified (vision, language, planning, learning) and some techniques were invented or applied (artificial neural nets, belief networks, support vector machines). Now AI describes those core problems, even in contexts not meant to result in fully-functioning people, and also describes the technology that's often applied to those problems.

  • What makes this "AI"?

    Planning and scheduling have been considered AI problems for a long time. Pretty much anything trying to solve an AI problem is considered to be AI technology. It just may not be interesting or nontrivial technology. Many people overvalue the term "AI", thinking that anything associated with that label should be able to appreciate a fine wine or cry at an opera, but there's really no objective way to decide which techniques are nontrivial enough to warrant the stamp of pretentiousness.

  • Well, if it was /smart/ AI, you could give the machine a vested interest in the mission succeeding as well. And you could know exactly what's on the machine's agenda.

    It is possible, of course, that a human could be bribed or blackmailed into crashing a spacecraft. Or that the stress of the trip could cause mental problems leading to the same.

  • Skynet
  • My tax dollars paid for it, where is my copy?

    Someone please file a Freedom of Information Act request on this and then publish the source code...

    Force your government to comply; they won't do it otherwise.
  • When an A.I.-type problem is solved - e.g. game playing, image analysis, expert systems, etc. - then people say it was an easy problem and not really A.I.

    One of the oldest definitions of A.I. hasn't really been reached yet: the Turing Test. That would be a tele-conversion with an entity where you couldn't tell whether it was human or machine.

  • Dave... I don't think you should do that, Dave...
  • You may think it's fine to send an unregulated robotic probe off to Mars to collect samples, but you won't be laughing when that robot claims Mars in the name of cybernetics and starts broadcasting communistic Mars Free Radio signals at our precious bodily television bands.

    Sometimes, when I see the moderation on a comment, I wish there were a "moderator goal" menu when moderating. I would like to know if the person who gave this silly troll a +1 Insightful was being silly himself, trying to bring about the collapse of Slashdot, or simply smoking dried cat feces.



  • What makes this "AI"? Or to turn the question around, why aren't routers and print spoolers considered AI if this is? Artificial Intelligence is a big problem; its solution isn't hastened by using the term as a synonym for "we picked a better algorithm than you might have expected us to."

    I once read AI described as being a solution to a problem whose solution is unknown. A system would cease to be AI when the solution becomes known, even if the system hasn't changed.

    I don't recall the exact example given, but it went something like this: You have a problem that you do not know the solution to, let's say how to determine the volume of a sphere given its radius. You can measure the radius, and you can measure the volume, but you cannot determine the relationship between them. So you throw a neural network at some test measurements, and it starts spitting out correct results. AI at work! But then you study what the neural net is doing, and determine that it is simply applying the formula (4/3)*pi*r^3 to the radius. The neural net hasn't changed, but now that you know the formula, it has become just a fancy way of executing a few floating point multiplications. Is it still AI?

    I find this definition useful, even if it does suffer from a sort of Heisenbergian uncertainty principle (it might be AI, but only if you don't look at what it is actually doing). This also leads to the interesting case that a system may appear to be AI to an outside observer, but only a simple rule system to the person who programmed it.



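    The sphere example is easy to reproduce, and you don't even need a neural net: an ordinary log-log regression "discovers" the formula the same way (a sketch with made-up, noise-free measurements; a real net would just be a more opaque route to the same fit):

    import numpy as np

    # Hypothetical "measurements" of sphere radii and volumes.
    r = np.linspace(0.5, 5.0, 50)
    v = (4.0 / 3.0) * np.pi * r**3

    # Fit log(v) = a*log(r) + b: a recovers the exponent, exp(b) the constant.
    a, b = np.polyfit(np.log(r), np.log(v), 1)
    print(a)          # ~3.0
    print(np.exp(b))  # ~4.18879, i.e. 4*pi/3

    Once you look at the fitted coefficients, it's "just" (4/3)*pi*r^3 again - which is exactly the Heisenbergian point above.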
  • Steve Chien came to talk to our AI group at USC-ISI not too long ago. And while it may not come out in the press release, their system does do some learning. It maintains lots of statistics on its successes and failures, and over time learns which heuristics and which pre-programmed tasks have the best effect under different conditions. Much of the learning is handled planetside in simulations, but I believe the learning is still enabled for in-flight tweaks.

    This is some of the best AI on the planet, people.

    Anm
  • by cpeterso ( 19082 ) on Tuesday May 29, 2001 @08:25PM (#189621) Homepage

    Remote Agent, written in Common Lisp, wins NASA's Software of the Year 1999 [globalgraphics.com]. Dissatisfied, NASA management decides the entire project should be ported to C++ [tu-graz.ac.at]!!?

    Chuck Fry, one of Remote Agent's developers, says:

    "The bad news: the Remote Agent modules are all being ported to C++ anyway, as C++ is seen as more "saleable" to risk-averse flight software management. This despite the fact C++ was bumped from Deep Space One because the compiler technology wasn't mature, which the Remote Agent planner/scheduler team learned to its dismay."

  • Most ML algorithms attempt to find a function f(x) from a sampling of x and f(x) values.

    This is only if you're dealing with a two-dimensional space and you're trying to estimate a function f(x). You're simply talking about using a neural net as an interpolation algorithm. This is indeed one use of neural nets (and has proven more accurate than many interpolation/regression methods), but it is IMHO the least interesting use of NNs. People in general need to start thinking of more clever ways to encode problems for neural nets; I'm sure everyone will be quite surprised with the results.

    For example, I think you would prefer your airplane pilot to be able to fly the plane than to be able to explain aerodynamics.

    You might be getting confused here. If we created some neural net to learn how to fly airplanes, and it did so with excellent reliability and precision, we still will not have The Solution To Flying Airplanes, as in an algorithm or some differential equation that flies airplanes perfectly. Even with a very complete training set, and a network whose outputs are consistently correct, we still will never know whether we have a case of data under- or overfitting.

    When you boil it all down, neural nets are just fancy statistical analysis tools. It's like the difference between the true expected value of some phenomenon and the sample expected value. No matter how large your sample size is, your result is always just an educated guess; it lacks any true scientific rigor.

    So yeah, I hope the above is coherent and/or correct, I'm watching Simpsons as I type this so my concentration is not currently 100% optimal :)
  • One of the main problems with using machine learning is a fundamental lack of trust, and this lack of trust actually has a theoretical basis. Take a neural network that attempts to learn problem X: given a specific set of inputs, it learns to always give the correct answers. The point to emphasize here is that this type of AI only learns how to give good results for a given problem; machine learning does not provide coherent solutions to problems!

    Thus, on a fundamental level, machine learning cannot be trusted. The state of a neural network, for example, is a black box from which no real insight can be gained ... it just gives the right results (most of the time).

    Imagine a neural network whose training set is the raw molecular structure data of different diseases (input) and their cures (output). The network trains on this data until the error reduces to zero. Then you feed in the raw molecular data of some new badass disease, and as output you have the structure of a working cure! The problem goes away, even though no one really knows how the solution was found, since a neural network contains no real data to understand -- it is a black box.

    This is why scientists are so skeptical when it comes to AI techniques; it is essentially an affront to the scientific method. I personally believe that machine learning can be effectively used to improve our lives... scientists will have to find new day jobs, however. :) Seriously, though, this NASA mission is important to me because it shows that scientists are not afraid to employ machine learning techniques instead of spending a great deal of time and money trying to solve a set of problems and then develop expert systems for each little situation.
  • "My Sat has three corners, three corners has my Sat!"

    The religious implications here cannot be ignored... suddenly, I feel a craving for hamantaschen.
  • by Matt2000 ( 29624 ) on Tuesday May 29, 2001 @09:12PM (#189625) Homepage

    10 avoid $SUN
    20 avoid $PLANETS
    30 become $SMARTER THAN HUMAN CREATORS COULD HAVE DREAMED
    40 multiply
    50 goto 10
  • This is a misunderstanding of machine learning. Most ML algorithms attempt to find a function f(x) from a sampling of x and f(x) values. Usually, the sample is too small and the set of possible functions is too large to learn the correct function with absolute certainty. This is a good reason for not trusting ML algorithms.

    On the other hand, xmanix's problem seems to be that ML algorithms don't produce correct explanations. This is misguided. For example, I think you would prefer your airplane pilot to be able to fly the plane than to be able to explain aerodynamics. Of course, you might prefer both, but when you have neither, you probably want your ML algorithm to learn how to fly the plane and worry about explanations later, especially if you are on the plane.
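    The "sample too small, set of functions too large" point is easy to see in a toy case: many functions agree perfectly on a finite sample and still disagree everywhere else (a sketch, not any particular ML algorithm; the functions are contrived for illustration):

    import numpy as np

    # Five "measurements" of an unknown function -- secretly f(x) = x**2.
    xs = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
    ys = xs**2

    f = lambda x: x**2                           # one consistent hypothesis
    g = lambda x: x**2 + 5 * np.sin(np.pi * x)   # another: sin(pi*x) = 0 at every integer

    print(np.allclose(f(xs), ys), np.allclose(g(xs), ys))  # True True
    print(f(2.5), g(2.5))   # 6.25 vs 11.25: identical on the sample, far apart off it

    No amount of checking against the training sample can distinguish f from g; only more data (or prior assumptions) can.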

  • Yep, the NASA engineers had IRC up on the big screens for half a day trying to track down the rip.
  • There's a pretty neat "view from the inside" set of reports on the DS-1 project at this JPL site. [nasa.gov] One of the more amazing parts is where DS-1's star tracker quit working, so they wrote software to take the picture from the main imaging camera and search for guide stars in it.
  • They run (at the time I worked there) the world's most powerful ion beam implanter, which is about 10 meters long and sends ionized Oxygen into a chamber in which wafers are spinning through the ion beam at 200 RPM.

    I have a feeling it's no longer the world's most powerful ion beam implanter... ISAC at TRIUMF can be run at 30 keV, and the target can be floated anywhere from -29 to +60 keV, giving you a total range of 1 to 90 keV for implantation energy. We expect to be able to get Li-8 ions to implant at around 60 nm in gold at 30 keV, and gold is a heck of a lot denser than silicon. I haven't run any SRIM calculations with oxygen yet, but I'm guessing you could get way deeper than 60 nm in silicon, even with oxygen.

    As of three years ago, Motorola was a big purchaser of these wafers, and IBM was just getting into buying their own implanter.

    SOI technology is now standard in at least some of the commercial PowerPC chips used by Apple (forget which ones), so they may just be using stock chips.
  • by tbo ( 35008 ) on Tuesday May 29, 2001 @08:23PM (#189630) Journal
    3CS will utilize a PowerPC 750 flight processor.

    The PowerPC 750 is also known as the G3, BTW. PowerPCs in space? Not the first time [slashdot.org] we've heard about such a plan. Of course, SkyCorp is using Apple-built G4 systems, while 3CS is probably just using the G3 with some off-the-shelf embedded controller-style board.

    Does anybody know about the vulnerability of PowerPC chips to radiation, or how "rad-hardening" works in chips like the modified 386s the military uses? Cosmic rays tend to be very high energy, so shielding is probably not practical.

    From what I understand, there are two types of radiation-induced errors. Hard errors involve damage to the chip, and are very bad. Soft errors are a matter of erroneously flipped bits, and are only somewhat bad. Soft errors could be compensated for through good software design, error correction, etc. (a toy majority-vote sketch follows this comment), but I would think the only real defense against hard errors would be to make the wires on the chip so damn big that a few defects here and there wouldn't matter.

    I happen to work at a particle accelerator with an isotope separator and accelerator [triumf.ca], so you'd think I'd know this stuff, but I've never really thought about this particular application before. Oh well, if nobody answers me, I can always try a SRIM [ibm.com] simulation at work tomorrow (don't have a Windows box at home to run it on), to see how much shielding you'd need to stop cosmic rays, and how many defects they'd cause in silicon. If anybody wants easy karma points, download the SRIM [ibm.com] software, run a simulation on a thick silicon layer with some high-energy alpha particles (helium nuclei at, say, 1 GeV), turn on damage calculations, and report the results back here, please. I'm sure somebody will mod you up.
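    On the soft-error point above, a toy majority-vote (triple modular redundancy) scheme shows the software-only flavor of protection. Purely illustrative; real flight systems rely on hardware ECC and memory scrubbing rather than anything this naive:

    def tmr_read(copies):
        a, b, c = copies
        # Bitwise majority vote: a bit is set iff at least two copies agree on it.
        return (a & b) | (a & c) | (b & c)

    stored = [0b10110100] * 3    # three redundant copies of one byte
    stored[1] ^= 0b00001000      # a cosmic ray flips one bit in one copy

    assert tmr_read(stored) == 0b10110100   # the flipped bit is outvoted

    The cost is obvious (3x the storage plus a vote on every read), which is why real systems prefer proper error-correcting codes.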
  • I'll put my money on the AI doing a good job when they actually perfect filters back home.

    Depends on what you're asking the filter to do. I can imagine, for instance, that NASA ends up with some number of corrupt images. Or perhaps it'll catch an image with a reflection from the sun in the frame, washing the rest out.

    I would hope that NASA doesn't need a pr0n filter for its satellite cameras...

  • troll or no (beowulf cluster is imaginative relative to the usual tripe deez dayez)... BSD for science, GPL for not getting skrewed by your employer.

    the SAD part is, the implicit contract between academia and industry has been whorified; once, academia did pure research and industry and the world reaped the profits. Now, industry still reaps the profits and academia whores itself.... Not a GPL-o-phile, but at times like this...

    (sure, an exaggeration, but) Or???? (drunk, fuck yall)

    Boss of nothin. Big deal.
    Son, go get daddy's hard plastic eyes.

  • Actually, with a few mirrors, some lasers to determine distance, and decent station-keeping software for the thrusters, I suspect NASA should be able to conjure up a Long Baseline Array out of these satellites.

    IIRC (and it was a long time ago), to do this for visible wavelengths you'd need to measure the distance between the satellites down to (making this up) 150 nm.

    Still, it would be pretty nifty. Not that you could ever resolve a license plate, even if you had the resolution, without correcting for atmospheric effects, but cool all the same.
  • I'd trust this stuff [fastcompany.com] a lot more than a lot of the things [korpios.org] we have trusted [corpwatch.org] on the road [detnews.com] in the past [gknet.com].
  • ROTFL

    mod this the fuck up.
  • It maintains lots of statistics on its successes and failures, and over time learns which heuristics and which pre-programmed tasks have the best effect under different conditions.

    In my opinion, while this is without question a form of learning, it does not come close to what I have in mind. The system apparently has a pre-programmed evaluation function that it uses to gauge success or failure. A truly intelligent system should be able to come up with its own evaluation functions to suit novel phenomena. It should also create its own goals as it learns. Of course, it should not have complete freedom in doing so, because there is no telling what it would do. We want it to obey us. So there must be an initial conditioning that causes it to try to please its masters. Just one man's opinion.

    This is some of the best AI on the planet, people.

    I wish them the best and I do applaud them for their effort and ongoing success.
  • NASA software that thinks for itself and makes decisions without help from ground controllers will fly as the brains of triplet satellites in 2002. - NASA/JPL news release.

    All software systems, including the software that runs /., make decisions and "think" for themselves. The fact that this software uses its pattern recognition abilities to execute a limited number of pre-programmed decisions does not make it intelligent, IMO. It's a cool hack, but to me, unless it has HAL-like capabilities, that's all it is: a cool program. If it cannot learn from its mistakes, it's not intelligent.
  • They're just pod bay doors, Hal, sure you can close them...
    ...
    ...
  • by mgblst ( 80109 ) on Tuesday May 29, 2001 @08:17PM (#189639) Homepage
    I also hear they have a DVD copy of the AI movie by Steven Spielberg, as a message of help to all aliens, to come save us from crap Hollywood movies... but I may have heard wrong.
  • by Jah-Wren Ryel ( 80510 ) on Tuesday May 29, 2001 @08:41PM (#189640)
    NASA loves to 'commercialize' their developments. In this way, they are probably the highest profile form of corporate welfare there is. NASA does the R&D and then hands over the exclusive rights for a pittance to some commercial outfit to turn into a product.

    It may be apocryphal, but I think Beowulf just barely missed being locked up by this kind of old-school thinking. The saving grace was that most of the original Beowulf work had been done by a sub-contractor and, unlike most of the rest of the industry, NASA had not required the sub to turn over IP rights to the client (NASA) in their contract. So once Beowulf got big, the NASA administration came around wanting to lock it up and give it to one vendor, but they were foiled by their own contract, and the contractors were able to free the source for the work they had done.

    Thus leaving me free to say, "Imagine that AI running on a Beowulf cluster!"
  • Completely unfounded FUD. They aren't planning on putting some kind of HAL on the damn ship. The AI they are using is more like fuzzy logic than anything. AI is such a misnomer...
  • Yeah, sounds like NASA believing its own PR...

    It does seem to be part of the silly hyperbolic PR that goes on; the danger is that people actually believe it. Checking http://www.spaceflightnow.com [spaceflightnow.com] today, I see the story about the malfunctioning Canadarm2 - the article exclaims "..then station program managers could order the Shoulder Pitch joint be replaced with a spare in a dramatic spacewalk by shuttle astronauts...". Yawn. It's a spacewalk; it will be planned out weeks in advance, and it will be methodical and routine. It's not dramatic.

    I think the media coverage surrounding all this activity is causing more trouble than it helps by playing up this drama-queen PR hype. It seems to be pretty prevalent in US media, though, and increasingly across the rest of the world. Anyone care to comment on why this is so? Does it actually help NASA get extra funding?

  • One of the oldest definitions of A.I. hasn't really been reached yet: the Turing Test. That would be a tele-conversion with an entity where you couldn't tell whether it was human or machine.

    A machine that tries to convert you to its own religious cult? Neat!

    I think you meant something like tele-conversation though. This has been done. Problem is that this original version of the Turing Test actually turns out to be quite easy to pass. Some ELIZA-like programs have passed it, even though most people wouldn't consider them intelligent. (Sorry for the lack of references to back this up, but I'm studying AI and this is what my professor told me ;).)
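    For flavor, the entire trick of an ELIZA-style program fits in a few lines: canned reflections and a catch-all, with no understanding anywhere (a minimal sketch; the rules here are made up, and the historical ELIZA was rather more elaborate):

    import random
    import re

    # A few pattern -> response rules; the last is a catch-all.
    rules = [
        (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (r"i think (.*)", ["Do you really think {0}?"]),
        (r"(.*)", ["Tell me more.", "I see. Please go on."]),
    ]

    def reply(text):
        for pattern, responses in rules:
            m = re.match(pattern, text.lower())
            if m:
                return random.choice(responses).format(*m.groups())

    print(reply("I feel the probe is watching me"))

    That something this shallow can fool people in short conversations says more about the original test setup than about the program.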
  • Well, at least other people can see her too, which made her a major upgrade from my previous girlfriend...
  • Good morning Dave
  • while(humanUnwillingnessToCompleteMission==true && missionStatus=="incomplete")
    homicide(Dave);

    (sorry, don't know BASIC : )

  • We all know how well real intelligence has been doing with missions.
  • The only unusual thing about this is that it's a low earth orbit project. JPL's planetary probes have had considerable autonomy for decades. They have to; the speed of light lag is too long for simple remote control. Earth satellites, though, are traditionally slaves to ground control.
  • by StaticEngine ( 135635 ) on Tuesday May 29, 2001 @10:44PM (#189649) Homepage
    Look here: Ibis Technology Corporation [ibis.com]

    I used to contract there. They make SOI (Silicon on Insulator) wafers which allow chips manufactured with them to be both low power and radiation hardened. They run (at the time I worked there) the world's most powerful ion beam implanter, which is about 10 meters long and sends ionized Oxygen into a chamber in which wafers are spinning through the ion beam at 200 RPM. The O2 embeds itself in the Silicon about 60 nm beneath the surface, and sticks there. After eight hours of this process, the wafers are removed and annealed in a giant oven, at which point the Oxygen and Silicon chemically merge to form SiO2 (silicon dioxide, also known as sand), which is a mighty fine insulator.

    This whole process causes chips printed on these wafers to have a lower energy drain through the substrate. It also has the advantage that when hit with an ionized particle, the oppositely charged electrons which would normally rush up from the substrate are blocked by the insulator, so you have a greatly reduced charge entering your gate, which has a greatly reduced chance of flipping your bit.

    As of three years ago, Motorola was a big purchaser of these wafers, and IBM was just getting into buying their own implanter. So those PPC chips may just be special spacefaring jobbies...

  • I think people will always trust a human pilot more, as he has a vested interest in the mission succeeding; machines don't.

  • ...because it will be able to make decisions about the mission on its own - what pictures to transmit back to Earth and which ones to junk. Autopilots are used to keep on doing something - like flying straight and level. Okay, so some of the more advanced autopilots can land a plane, but if anything goes wrong they still have to fall back on human input. Try flying anything manually with a couple of minutes' worth of signal delay - it would be a real pain in the ass.

    What they're doing here is (unless I'm horribly mistaken) a dramatic step beyond the typical abilities of an autopilot.


  • As a member of the University of Colorado-based hardware team for this flying rock, and having personally researched this matter, I can say that radiation should not be too much of a problem. This sat will be launched into a low orbit (lower than the ISS), leaving plenty of atmosphere to absorb most of the radiation and protecting the PPC on the industrial RPX lite SPC [embeddedplanet.com] from being damaged (of course, high-energy particles are still a problem). Additionally, circuitry is protected in heavy aluminum shielding. Because of the short design life of 3CS, hard errors should not be too much of an issue. Of great concern are soft errors in the kernel memory space in the onboard flash memory - if the kernel is damaged, the sat is useless, as it will not even be able to be issued a new copy from the ground. Our team attempted to use Reed-Solomon encoding to correct bit-flips (see the toy error-correcting sketch below), but we had no driver for writing to flash during the boot sequence.

    A good place to look for detailed information of this sat can be found here. [asu.edu]
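    For anyone curious what that kind of bit-flip protection looks like, here is a toy single-error-correcting Hamming(7,4) code (Python; purely illustrative - the Reed-Solomon codes we actually looked at correct runs of symbol errors and are considerably more involved):

    def hamming74_encode(d):    # d: 4-bit int -> 7-bit codeword
        d1, d2, d3, d4 = (d >> 3) & 1, (d >> 2) & 1, (d >> 1) & 1, d & 1
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        # Bit positions 1..7 are p1 p2 d1 p3 d2 d3 d4 (parity at powers of two).
        return (p1 << 6) | (p2 << 5) | (d1 << 4) | (p3 << 3) | (d2 << 2) | (d3 << 1) | d4

    def hamming74_decode(c):    # corrects any single flipped bit
        bits = [(c >> (7 - i)) & 1 for i in range(1, 8)]   # bits[0] = position 1
        s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]         # checks positions 1,3,5,7
        s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]         # checks positions 2,3,6,7
        s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]         # checks positions 4,5,6,7
        syndrome = (s3 << 2) | (s2 << 1) | s1              # = error position, 0 if none
        if syndrome:
            bits[syndrome - 1] ^= 1
        return (bits[2] << 3) | (bits[4] << 2) | (bits[5] << 1) | bits[6]

    word = hamming74_encode(0b1011)
    assert hamming74_decode(word ^ (1 << 3)) == 0b1011     # one flipped bit, still decodes

    The syndrome directly names the flipped position, so any single upset per codeword is repaired on read.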

    ---
    May the sacred call of the dogcow guide you down the path toward nerdvana. MOOOOOF!

  • This has been an ongoing fruitless debate for a very long time, indeed, as is likely to happen when people can't even define the label they are trying to associate with things. If you consider the words closely you will never see a reason to argue about this one again! While everyone else is arguing that something isn't AI because it isn't *ACTUALLY* intelligent, they kind of miss the whole point of the word Artificial there, don't you think?

    Now, if you can actually tell me that a cat that stops chasing a ball to avoid getting hit by a car is "one smart cat", while a robot that does the same is "just a stupid machine", then there is a good chance that they are each smarter than you are!

    I don't know if John Searle is still alive, but if he is, will someone please slap him upside the head and explain this to him!

  • There is no need for AI, unless they come up with some real intelligence first. Getting the conversions between feet and meters right would be a good start.

    --
    msm
  • by 3flp ( 172152 ) on Tuesday May 29, 2001 @09:57PM (#189655)
    while (in_space == 1)
    {
        if (abs(speed_error) > threshold)
        {
            fire_thrusters(-speed_error);
            msg_to_nasa("This is AI here. I have just corrected the speed! Send a news release.");
        }
        else
        {
            msg_to_nasa("This is AI here. Everything is OK.");
            sleep(100);
        }
    }
    msg_to_nasa("This is AI here. I have landed!");

    ... Since when do you call feedback "artificial intelligence"? Gimme a break, NASA.
  • They are probably using the Rad-Hard version, the Rad750. [sanders.com]

    We would have used it for Swift instead of the previous generation's RAD6000 if it had been available earlier. [sonoma.edu]

  • Yeah, and the ship is piloted by the blow up autopilot from the Airplane! movies.

  • You're ABSOLUTELY correct!
    Of course, if it meets up with some other alien civilisation of intelligent machine-robots who take pity on this dinky, merely semi-intelligent space probe... THEN we've got something!

    So who else thinks that V'ger is the mother of the Borg???
  • Will it be smart enough to convert from metric to U.S. standard?
  • Does anyone have a copy of this in Perl?

  • by 2nd Post! ( 213333 ) <gundbearNO@SPAMpacbell.net> on Tuesday May 29, 2001 @08:58PM (#189661) Homepage
    I'm tired of this kind of attitude. Is The Lover's Arrival giving out lessons now?

    Geek dating! [bunnyhop.com]
  • by mojo-raisin ( 223411 ) on Tuesday May 29, 2001 @08:18PM (#189662)
    What reason does NASA have for not releasing the source? It's not like they're a business - this is purely scientific. Slap a GPL (or BSD) on these babies and let everyone take a gander...
  • Someone once lamented the demise of the intelligent troll. Thankfully, they were wrong. This post is proof that that old art form lives on on Slashdot.
  • Easy:

    while ($inspace)
    {
        if (abs($speed_error) > $threshold)
        {
            &fire_thrusters(-$speed_error);
            print NASA "This is AI here. I have just corrected the speed! Send a news release.\n";
        }
        else
        {
            print NASA "This is AI here. Everything is OK.\n";
            sleep 100;    # matches the C version above
        }
    }
    print NASA "This is AI here. I have landed!\n";

  • Excuse me?

    Planning and scheduling are hard AI problems, and way, way more complex than IP routing or print spooling. (I invite you to try writing serious planning/scheduling software if you doubt me.)

    These problems are AI in the sense that (a) no known tractable engineering solution exists and (b) they are tasks that lie squarely in the province of (talented and capable) human beings.

    - Ralph

  • LOL - funny.
  • The beauty of neural nets is that the person feeding the data in need not know the exact patterns.

    You could think of this as beauty - or as a real pain in the @$$ - depending on your job with the N.Net. Often the common denominator that nets get trained to recognize is some feature that the humans controlling the training have no idea is even there. This can be very frustrating and waste a lot of time and effort.

    A story comes to mind of a US military project to train N.Nets to identify tanks in aerial images, so as to make better smart-bombs. They fed them lots of pictures with tanks and lots of pictures without tanks, and they performed beautifully. Then they took them out to try on the real thing, and they all failed miserably. It turned out that the pictures of tanks were taken in the morning and the pictures without in the afternoon, and all the N.Nets had learned was to recognize different luminosity values. DOH !!!

    But this illustrates nicely the inherent problems underlying N.Nets as a type of AI, and why there's no real intelligence going on. The conditions of training have to be perfectly tailored to the desired test situations, otherwise the net won't work as desired. But in constraining the training, and controlling the dataset to rule out all factors other than the ones to train for, you essentially remove all the "intelligent" issues from the situation and leave only mindless pattern matching for the machine. All the intelligent work has been done for the machine by its trainers, and none is left for the machine to do on its own. (A toy sketch of this failure mode follows.)
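    Here is that failure mode in miniature (synthetic data, obviously not the military system): the rule that fits the skewed training set perfectly is just a brightness threshold, and it says nothing about tanks.

    import numpy as np
    rng = np.random.default_rng(0)

    # "Tank" photos taken in the dark morning, "no tank" in the bright afternoon.
    def photo(tank, bright):
        img = rng.normal(0.7 if bright else 0.3, 0.05, size=(8, 8))
        if tank:
            img[3:5, 3:5] += 0.1   # a faint tank-shaped blob
        return img

    train = [(photo(tank=True, bright=False), 1) for _ in range(50)] + \
            [(photo(tank=False, bright=True), 0) for _ in range(50)]

    # The rule the net effectively learned: threshold on mean brightness.
    predict = lambda img: 1 if img.mean() < 0.5 else 0

    print(np.mean([predict(x) == y for x, y in train]))   # 1.0 on the training set
    print(predict(photo(tank=True, bright=True)))         # 0: a daytime tank is missed

    Perfect training accuracy, zero understanding of tanks - exactly the DOH above.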

  • by corvi42 ( 235814 ) on Wednesday May 30, 2001 @05:58AM (#189668) Homepage Journal
  • Well, unfortunately the term "Artificial Intelligence" is used interchangeably in two very different contexts, and this often causes a lot of confusion. Basically the difference is between "strong AI" and "weak AI" (or type "A" and type "B", if you read Haugeland).

    Strong AI is the "holy grail" of AI research: a truly sentient machine. The purpose behind AI work, and cognitive science in general, is to create such a machine - not for the sake of doing it, but rather because we want to understand cognition in mechanical terms. Nobody has ever achieved this goal, nor are there any theories of how to do it.

    Weak AI is what we have in computing today. This basically just means a cool algorithm or system which performs particular narrowly defined tasks in some manner that gives similar results to those an intelligent person would arrive at. This is the kind of AI used in games, expert systems, neural nets, fuzzy logic machines, micro-worlds etc.

    Personally, I much prefer Neal Stephenson's terminology for this weaker category, which he called "Pseudo-Intelligence", or PI, in "The Diamond Age". To me this has always seemed a much apter description, and it keeps the distinction between this and the theoretical AI.

  • glorified autopilot. Yee haw!


    -MR
  • The software is a dramatic shift in the way mission operations are conducted. Typically all science data, good or bad, is sent back to Earth. This software will have the ability to make real-time decisions based on the images it acquires and send back only those that it deems important. This will eliminate the need for scientists to preview thousands of low-priority images and let them focus on only the high-priority data transmitted back.

    Well, that sounds an awful lot like a content filter. The fact that it's also one for pictures should make the task of censoring/filtering out images exponentially tougher. I'll put my money on the AI doing a good job when they actually perfect filters back home.

  • I kinda like the idea of "intelligent" space probes, but I thought that most of the satellites/probes in space today relied on input to provide a decent output. After all, what this little probe does is get some input from its surroundings and then try to avoid it in a decent way.
    Is every single one of the probes we use today controlled from the ground? That sounds pretty unadvanced to me; it is certainly not a second too late to invent a probe that is able to fly on its own.


    However, it is not very strange that it has taken some time to "invent" an intelligent space probe, as it requires quite a bit of power to work. And as everybody knows, space is full of radiation, and chips don't adjust too well to radiation. You need a pretty powerful computer to calculate distances, analyse images and control every instrument on board.

    Anybody tried using the G3 in space yet? I seem to recall reading something about a plan to use a bunch of them, but I can't recall if anybody has done it yet.


    Sorry about my bad English, but it is not my native language.
  • Did I read that right, Haley Joel Osment is going to pilot the spacecraft??

  • The safety of unmanned crafts is still critical, because reckless spaceflight is expensive, in time and money.

    Yeah, seriously, the guys at NASA should really think twice before porting that AI code over from those Lucasfilm state-of-the-art space flight simulators.
  • A.I., as defined by academia, is the use of a computer to solve problems. The study of AI is basically just the study of using the computer to solve the harder problems. Learning algorithms can be used in AI, but stuff as simple as pathfinding is AI too.
  • ... and you're not?
  • No. It's a glorified pilot. As far as I know, an autopilot just keeps the plane on a predetermined course - essentially following an invisible track. These satellites have no track, no predetermined course. Instead, they have general objectives that they then fulfil by keeping a certain orbit and certain configuration.

    For anyone who read Marvin Minsky's stuff about how useless robot building is, I would point out that things like this are the end products of all that robot building.

  • Good post, but some things I disagree with.

    The state of a neural network, for example, is a black box from which no real insight can be gained...

    Not so. The weights of the nodes inside a neural network can be, and are, analyzed to determine the features of the input that the network uses to produce a result. I think this is known as cluster analysis - not my specialty.

    Imagine a neural network whose training set is the raw molecular structure data of different diseases (input) and their cures (output). The network trains on this data until the error reduces to zero...

    Cool idea, but not possible, at least by today's knowledge.

    For one thing, neural nets do not produce output data of this complexity. A neural network has rigidly defined inputs and rigidly defined outputs. I can't see how you could constrain the possible structure of a molecule sufficiently for it to be used as the output of a neural net.

    Secondly, a neural net will not work on any possible set of data you feed into it. There have to be common patterns in the data that the net can draw out. The beauty of neural nets is that the person feeding the data in need not know the exact patterns. If these patterns are not present (and in lots of cases, they are not) the neural net will not converge.

    Scientists are right not to trust their jobs to neural nets, because they'll never have to - neural nets are suitable for a particular domain of problems. Like every individual machine learning and AI technique, they are not a magic bullet. Neither are expert systems (a completely different technique). AI may dream of walking, talking, learning robots that can learn anything and solve any problem, but what AI is practically about is things like this: any computing problem that has to deal with uncertainty or a lack of definition. They will assist scientists, not replace them.

    --
    Spam the secret service - include the words terrorism and sedition in every email!
  • "The software is apparently the next generation of the software used for Deep Space One."

    Folks, we stand at the edge of a new frontier. Just think - this is the software which will eventually lead to the technology that makes Deep Space Nine [onestop.net] possible!

    Invisible Agent
  • Open the pod bay doors HAL.

    I can't do that Dave.

    OPEN THE POD BAY DOORS HAL.

    I can't do that Dave.

    Wasn't there something earlier about how this hasn't happened? Will they launch the thing this year? All hail Kubrick!


  • "The goals of the 3CS mission include the demonstration of stereo imaging, formation flying and innovative command and data handling."

    Looking forward to that huge array, like one giant CCD that can read a license plate on an alien planet.
  • Planning and scheduling are hard AI problems and way way more complex than IP routing or print spooling

    These are all the same problem, if you consider them in their full glory; a really good print spooler that respects job priorities, printer statuses, patterns of use, etc., might wind up printing three lower-priority jobs over one high-priority job, or split a high-priority job between printers (and precede each piece with a cover sheet giving directions on how to re-assemble the job). Or it could detect a paper-out condition and move the rest of the job to another printer. Or, in fact, make use of any sort of fancy planning/scheduling trick you care to think of. It's all the same problem. (A toy greedy spooler is sketched at the end of this comment.)

    And yes, it is a hard (NP complete [maths.org], runs into the frame problem [hps.elte.hu], etc.) problem. But that doesn't mean that any attack on it is AI! This is a classic (and too often revisited) logic error. You have a goal (AI). If you could obtain that goal, it would have some consequence (you could presumably write good scheduling software). You obtain the consequent, and announce that you have reached the goal!

    To see that this is fallacious, consider: You have a goal (go to Scotland); it would have a consequence (you could try haggis [www.rnw.nl]). So then someone in your home town feeds you haggis, and you mistakenly announce that you have been to Scotland!

    These problems are AI in the sense that (a) no known tractable engineering solution exists and (b) they are tasks that lie squarely in the province of (talented and capable) human beings.

    This statement argues for my side, since they evidently have (a) found a tractable engineering solution and (b) it doesn't involve sending talented and capable human beings with each mission. Therefore it must not be AI.

    --MarkusQ

    P.S. (I invite you to try writing serious planning/scheduling software if you doubt me.)

    I'm afraid I didn't wait for your invitation. Been there, done that. Many times.
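    P.P.S. To give the flavor: a toy spooler that respects priorities and printer status needs nothing smarter than a greedy rule, which is rather my point (a sketch; names and statuses are made up):

    import heapq

    jobs = []                                   # min-heap of (priority, name); 0 = most urgent
    for prio, name in [(2, "memo"), (0, "payroll"), (1, "poster")]:
        heapq.heappush(jobs, (prio, name))

    printers = {"laser1": "ready", "laser2": "paper_out"}

    while jobs:
        prio, name = heapq.heappop(jobs)
        ready = [p for p, s in printers.items() if s == "ready"]
        if not ready:
            heapq.heappush(jobs, (prio, name))  # hold the job until a printer recovers
            break
        print(f"printing {name} (priority {prio}) on {ready[0]}")

    Nobody would call that AI, yet it is an attack on the same scheduling problem.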

  • by MarkusQ ( 450076 ) on Tuesday May 29, 2001 @08:43PM (#189682) Journal
    *sigh* First games designers, now JPL.

    What makes this "AI"? Or to turn the question around, why aren't routers and print spoolers considered AI if this is? Artificial Intelligence is a big problem; its solution isn't hastened by using the term as a synonym for "we picked a better algorithm than you might have expected us to."

    *grump*

    -- MarkusQ

  • Yep, you are right. My only argument is that I, and others like me, believe that the definition of intelligence has been wrongly redefined by a lot of A.I. experts so as to make their own work seem more credible. Originally A.I. was a goal to achieve a "thinking machine", one that didn't require outside input (a previous but seemingly lost definition of intelligence), but as far as strictly defined by academia today, yes, you're absolutely correct, A.I. is just solving problems. Why we don't just call it that, well, I can't say it's beyond me - obviously NASA and id Software know that A.I. sounds a lot cooler than "Computer generated opponents can solve problems!" or "New spacecraft computers can examine data to solve problems!"
  • by palindromic ( 451110 ) on Tuesday May 29, 2001 @08:49PM (#189684) Journal
    I wish people would stop referring to semi-dynamic code as "artificial intelligence." Anyone who knows the definition of A.I. knows that environmental modifiers that cue preset responses don't qualify as intelligence. Dave would've passed the Turing test, I think.
  • Can deep space probes ever be possible without AI? Looking into history, we find that right from the primitive space exploration days, AI has come to have an increasing place in space probes, especially deep space probes. And the AI presence has only increased. Man's exploration into space will continue to expand, and he cannot expect to go along on each and every flight into the outer world. Hence the need for AI. It is all trash that AI shall in any way control man. We should have engineers who shall engineer AI in the proper way, and if that engineering goes awry, sound reason permits me to say that it is not AI that is destroying man, but man himself digging his grave. It is another matter that AI engineering is one of the most difficult tasks ever. But as man expands his conquest of the universe, unmanned flights and AI will be his weapons in this age.
  • They don't want anyone to know the spacecraft commanding language until after the spacecraft is done with its mission... ya know, to protect the spacecraft from mischief. Generally NASA software is public domain.
  • What -- do you mean to tell me that ELIZA isn't real?

    After all these years...
  • They want to blame the AI for using Imperial units instead of metric when it crashes and burns.
  • Unfortunately, NASA's funding is a bit low, a fact I am all too familiar with considering my fiancée works for a NASA contractor (NASA has had to renege on 3 contracts in her department in just the past 2 months). Commercializing potentially useful software and hardware is a large part of what allows NASA to continue its work and research.
  • I love to see articles like this hit the news. Too many people are familiar with Terminator-style AI and Star Wars-style AI, which is quite a long way off. While the prospects are interesting, this selfsame prospect places a stigma on AI among a lot of other scientists. I hope more articles like these surface over the coming years to answer the question millions of scientists are asking: "What does AI really do for us?"
