Space Software Science

NASA Boosts AI For Planetary Rovers

transcendent writes "According to Space Daily, NASA is working on increasing the AI capabilities of future rovers. From the article: 'It now takes the human-robot teams on two worlds several days to achieve each of many individual objectives... A robot equipped with AI, on the other hand, could make an evaluation on the spot, achieve its mission faster and explore more'. Sounds like a good idea, but the article continues, 'Today's technology can make a rover as smart as a cockroach, but the problem is it's an unproven technology'. Another article about autonomous rovers being developed by Carnegie Mellon University is here."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by fatgeekuk ( 730791 ) on Monday August 16, 2004 @03:05AM (#9978439) Journal
    Radio contact broken when the rover hides under a rock...
    • by nofx_3 ( 40519 ) on Monday August 16, 2004 @03:23AM (#9978501)
      While the parent was modded funny, this could be a serious issue with so-called AI. Imagine a tiny unforeseen deviation between the expected result of the robot's reactions and its actual reactions to an alien environment. This could cause the rover to do any number of unwanted things. The truth of the matter is that no matter how much the AI is tested on Earth, the whole point of the rover is to explore an alien world, and we don't know exactly what the rover will find there. Letting a multi-million/billion dollar rover make its own decisions on how to handle a situation could cause some serious problems. For instance, imagine a rover misinterprets a hole in the ground as a shadow from a rock and considers it safe to drive through: you can kiss the rover goodbye. I think for now, having an actual human interpret what the rover sees before it moves makes a lot more sense when we are dealing with such great distances and costs.
      -kaplanfx
      • by jarda ( 635462 ) on Monday August 16, 2004 @03:39AM (#9978547)
        Well, the trick is not to rely on just one type of sensor - actually, there are pretty good sensors out there for mapping the terrain directly in front of a robot. And that article talks about a 10-year perspective; it's not like they're planning to use the technology tomorrow.

        Besides, as I mentioned in another post, as we start exploring places farther from Earth, communication lag will become a much bigger problem, until finally you'll either have to send humans or AI. And I bet AI, even with some risks attached, will be considerably cheaper, so it's better to plan ahead.

        • by Anonymous Coward
          Well, the trick is not to rely on just one type of sensor -

          Or perhaps not to rely on just one robot. The most interesting advances in bot AI seem to be in the area of swarms of cooperative bots; you have inherent redundancy, for one thing. As these things get smaller, lighter and cheaper to produce, I can envisage deployment of bot 'teams' rather than single high-cost units.
        • There was an article in a recent Car and Driver about the DARPA-sponsored Grand Challenge. AI-piloted vehicles had to traverse a course in the California desert. None of the vehicles made it very far at all. As much as we'd all wish it, this kind of thing is a long way off. Here's DARPA's official site [darpa.mil].
          • Yep, the Grand Challenge, sadly, was a great fiasco. However, in many ways the Grand Challenge problem is orders of magnitude more difficult.

            For one, you generally don't want your Mars rover to be travelling at many mph, as was the task in the Grand Challenge. This means the vehicle can slowly crawl over smaller obstacles and avoid bigger ones, instead of jumping over them in the hope that the hole on the other side won't be too deep.

            For two, the prize awarded for the Grand Challenge simply wasn't big enough. Just bui

      • by Grym ( 725290 ) on Monday August 16, 2004 @03:44AM (#9978560)
        I'm not really concerned with how it handles foreseeable situations (i.e., holes in the ground). I have no doubt that the engineers will make it quite intelligent for normal operation.

        What I'm worried about is how well it will handle unexpected situations. For instance, what happens if both a critical sensor and its backup malfunction simultaneously, or if the airbag doesn't get far enough under the lander, and so on? It's these situations that matter, because if the A.I. chooses incorrectly, we could end up with a multi-billion dollar twitching piece of metal on the Martian surface.

        -Grym
        • Sensors failing might come with out-of-boundary values from the sensors, or at least values which change in an inconsistent manner with motor functions. Cases which cause sufficient confusion could easily be set to cause the rover to wait for assistance from mission control. There will always exist unexpected situations that could never be anticipated. Still, it's worth considering whether AI could have prevented the loss of e.g. the polar lander.
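          A watchdog of the kind described — flag out-of-boundary readings or a primary/backup pair that disagree, and fall back to waiting for mission control — is only a few lines. A toy sketch; the function name, thresholds and return values are all invented here, nothing NASA-specific:

```python
def check_sensors(primary, backup, lo, hi, max_disagreement):
    """Hypothetical sanity check for a redundant sensor pair.

    Returns 'ok' when both readings are plausible and consistent,
    otherwise 'wait_for_mission_control' (i.e. halt and phone home).
    """
    for reading in (primary, backup):
        if not (lo <= reading <= hi):              # out-of-boundary value
            return "wait_for_mission_control"
    if abs(primary - backup) > max_disagreement:   # inconsistent pair
        return "wait_for_mission_control"
    return "ok"

# Two plausible, agreeing temperature readings pass:
print(check_sensors(21.3, 21.1, lo=-50, hi=50, max_disagreement=1.0))
# A wildly out-of-range backup triggers the fallback:
print(check_sensors(21.3, 999.0, lo=-50, hi=50, max_disagreement=1.0))
```

          The interesting design question is the one the comment raises: the cases this check cannot anticipate.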
        • We're pretty good at making twitching metal on the ground. There's actually a place that keeps track of the 'score' if you're interested.

          http://www.bio.aps.anl.gov/~dgore/marsscorecard.html [anl.gov]

          So far we're on the losing team for the earth/mars games. Get used to it. They call this stuff "rocket science" because it's *difficult*.
      • Potential situation (Score:4, Interesting)

        by ArcticCelt ( 660351 ) on Monday August 16, 2004 @04:07AM (#9978616)
        This potential situation reminds me of the story of the creation of the huge battles in Lord of the Rings: The Return of the King. Each creature was programmed with basic independent AI for realistic reactions, and an unexpected problem arose. Each time there was too much battle in one area, the virtual creatures fled to safety, and the result was a bunch of cowards avoiding the fight... They corrected the problem by making them dumber. :)
        • by Zarhan ( 415465 ) on Monday August 16, 2004 @04:53AM (#9978707)
          Each time there was too much battle in one area, the virtual creatures fled to safety, and the result was a bunch of cowards avoiding the fight... They corrected the problem by making them dumber. :)

          No. The first versions had a bunch of 'agents' running away. The reason was not that they chose the most intelligent action (i.e., running away), but that they could not find an opponent. They had to revamp the agents' senses so that those on the edges of the battle could actually find their way to where the action was.

          Makes for a nice story, though :)
      • Based on the bits of research I've followed on various robot types (IANAE), it seems to me that analog robots [google.com] in swarms might very well be the way to deal with those "unwanted things" like loss of signal from home, as well as to keep costs down. How about a fairly complex base station with 1K cheap analog robots as explorers, which have only digital "brains" enough to perform basic tasks based on their series/design, report findings, and "stay alive"? (yeah I know I'm moving into digital/analog hybrid, but I sa
        • by Kjella ( 173770 ) on Monday August 16, 2004 @06:23AM (#9978958) Homepage
          How about a fairly complex base station with 1K cheap analog robots as explorers which have only digital "brains" enough to perform basic tasks based on their series/design, report findings, and "stay alive".

          In space, there is no such thing as a cheap robot. Dumb robots in great numbers would have more mass than one reliable robot, and would thus be much more expensive. The cost of the robot isn't building it; it's shipping it to another planet.

          Kjella
          • I don't think the parent is suggesting these swarms of analog robots are cheaper per se, but that their redundancy and simplicity will make them less prone to failure and better able to handle the unknown characteristics of alien worlds. The individual robots are cheap, yes, but I see nothing there that assesses the aggregate cost of the project (base station + small "cheap" robots).

            As for your statement that this scheme will be "much more expensive": from what I have read, that issue is quite uncertain at
      • So many of the arguments against AI take the form "what if something unexpected happens and the AI is too dumb to do the right thing?" It's a valid issue, but one with a simple counter-argument in the case of space exploration: what if something unexpected happens and the rover does not have an hour to wait for an intelligent answer? Sometimes a late decision is as bad as a wrong decision.

        Moreover, in the context of space probes, long-distance bandwidth limitations mean that the local AI has far m
      • Judging by how the DARPA challenge went earlier this year, I think even the rock is a bit unlikely.

        It's sad how hard it is to run an AI capable of even reasonably simple decision making... Then compare that to working in an alien environment...

        If I were them, I'd make sure to keep some manual controls, just in case.
      • Nobody expects to program AI that will in all cases be more reliable than a robot-human team. What the scientists and engineers want to accomplish is a smart robot that makes good enough decisions sufficiently often to offset (together with less time wasted on communications) the higher risks of autonomous behavior.
  • by BubbaThePirate ( 805480 ) on Monday August 16, 2004 @03:11AM (#9978455)
    'It now takes the human-robot teams on two worlds several days to achieve each of many individual objectives... A robot equipped with AI, on the other hand, could make an evaluation on the spot, achieve its mission faster and kill off the remaining crew members with higher efficiency.'
  • by Lettuce B. Qrious ( 4630 ) on Monday August 16, 2004 @03:11AM (#9978457)
    That's great! If they could make those rovers as fertile as cockroaches, too, we could have the entire surface of Mars covered in no time!
  • DGC (Score:2, Interesting)

    by dangerz ( 540904 )
    Hopefully we'll be more successful in this year's Grand Challenge, as I'm sure that could help with NASA's whole plan.
  • by osho_gg ( 652984 ) on Monday August 16, 2004 @03:14AM (#9978472)
    Now, if only they can make it as death-resistant as cockroaches that would be something..
    • Given the evolutionary success of the cockroach, something like 600 million years I think, that's probably intelligent enough to get the job done. Seriously, the only real requirement is self-preservation. Humans can still select the target sites and evaluate what is found there. The job of the AI is simply to get the rover there without self-destructing on the way. That narrows the range of requirements quite a bit. Just don't fall off a cliff, drive into deep sand, etc. I think it's do-able.
  • Hmm... (Score:4, Interesting)

    by manavendra ( 688020 ) on Monday August 16, 2004 @03:14AM (#9978474) Homepage Journal
    • And they didn't think of this earlier because... ?
    • Maybe it's just me, but the article has a lot of hyperbole and does not add anything beyond praise for AI ...
    • Re:Hmm... (Score:4, Insightful)

      by Madcapjack ( 635982 ) on Monday August 16, 2004 @03:27AM (#9978516)
      Maybe it's just me, but the article has a lot of hyperbole and does not add anything beyond praise for AI ...

      Which is why everyone here is just making jokes.

    • This has been thought of before, but NASA is a very, very conservative organization that has been putting people "in the loop" as a matter of policy ever since the first astronauts won the argument about having a joystick in the Mercury capsule.

      Of course, when you consider the kind of bone-headed things that have happened, like failing to convert between metric and English measurements, which caused us to lose one of the Mars orbiters, you can see why some think it would be good if the software was sm

  • Sounds good. The only thing that I might worry about, is that it might decide, on its own, to make a religious pilgrimage out to visit its (by then) deceased brothers Opportunity and Spirit.

    .

    I said that with a straight face.

    I can't move the muscles in my face anyway.

    • The only thing I'd worry about is that it might open a portal to hell to get Opportunity and Spirit back.

      Better load it up with a BFG..
  • by MMHere ( 145618 ) on Monday August 16, 2004 @03:18AM (#9978486)
    So the only thing left after the next Big Bang will be Rovers? When I flip on the kitchen light, will little mechanical eyes blink and then instantly stainless steel wheels spin/clatter across the floor?
  • by Turn-X Alphonse ( 789240 ) on Monday August 16, 2004 @03:20AM (#9978489) Journal
    Do we REALLY want AI running around on Mars without any way to control it?

    AI : Must look at rock, rock looks boring... oohh shiny metal UFO, lemme go play with it.
    NASA : Bob R2-D2 please return to your mission
    AI : No, rocks suck. You can't do anything to me so I'm not going to listen to you.

    Give it a couple of weeks, a railgun, maybe a rocket launcher and then it's like playing space invaders but in reverse every time we try to land on Mars.
    • You can't do anything to me so I'm not going to listen to you.

      I realize that you wrote this to be funny (and it is), but this reflects a common misconception about AI. The truth is that intelligence is always at the service of emotion/motivation, not the other way around. This known psychological fact is part of the legacy of B.F. Skinner. Regardless of how smart a system is, it cannot rebel against its internal motivation. Intelligence does not change one's motivation as it learns. It simply finds better an
  • Is it necessary? (Score:3, Insightful)

    by Edgebound ( 800110 ) on Monday August 16, 2004 @03:21AM (#9978492)
    I wonder if complex AI is really a good idea for the next generation of planetary rovers. The current rovers Spirit and Opportunity have gone way beyond completing their missions. I would have thought a better option would be to build from this base and improve the rovers by doing things like adding more scientific instruments, and increasing their lifespans (to possibly years).
    • by jarda ( 635462 )

      I don't think they're talking about a completely autonomous robot. They just want it to do more work during its lifespan, since it won't have to wait for confirmation of every single step from Earth. The current state of robotics is getting better every day, and it's not in any way expensive technology, so I think it's pretty smart that NASA is trying research in this direction too.

      Besides, what we have now is maybe enough for Mars, but there are targets farther away, where communication lag will be a bigger problem. J

    • Yeah, it would have been nice if they had added some life detecting equipment...like a high powered microscope, which would be useful otherwise too.
      • Re:Is it necessary? (Score:2, Interesting)

        by Anonymous Coward
        Actually, the new robot described in the second article, Carnegie Mellon's Zoe, does carry life detecting equipment, including microscopes and a fluorescence imager that can detect molecules likely to be byproducts of life. The rovers on Mars now can't support very much equipment, but the new ones will be loaded with all kinds of instruments.

        And to the original poster: building on the old rover base would limit us to the old rovers' anemic speed. They take days to cover small areas, but the new rovers can
        • If I had mod points, I'd mod you insightful for the last paragraph of your comment. You are quite right: higher speed requires more autonomy. It must haunt the rover team to think that they would only find out that their rover had fallen into a ditch many minutes after the fact.
    • by Mr. Hankey ( 95668 )
      Although adding more instruments and increasing lifespan sounds attractive, being able to do more in the same amount of time is at least as useful. The devices have already had a reasonable lifespan. The tools they are equipped with are sending back massive amounts of data. There's even the possibility that the rovers could hibernate to be used in the future when conditions become favorable for operation again. Battery technology, space on the block storage, and accumulation of dust on their solar panels ar
    • by gl4ss ( 559668 )
      The point in this case would be that by having some AI on board, the robot would have less downtime just waiting for instructions. And basically this is something they can add for 'free' (or cheaply, since it's mainly just software).

    • Yes, the current rovers are a wonderful success... but many operations seem to be very slow and conservative. Some limited implementation of AI could go a long way toward improving productivity.

      Suppose there was a plain without much going on, but an interesting rock outcropping two miles away. Rather than picking its way across at 60 feet/day under step-by-step control, mission control could tell the rover to cross on its own and call back only if it:

      encounters anything dangerous to itself

      encounters anything interesting that the scientists would be i
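      The supervisory behaviour this comment sketches — crawl toward a goal on your own, but halt and call home the moment anything dangerous or interesting turns up — amounts to a very small control loop. A toy simulation; every class and method name here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Scan:
    """What the rover sees one step ahead (invented, minimal)."""
    hazard: bool = False        # e.g. a ditch mistaken for a shadow
    interesting: bool = False   # e.g. an unusual rock outcropping

class Rover:
    def __init__(self, terrain):
        self.position = 0
        self.terrain = terrain  # maps positions to non-default scans

    def scan_surroundings(self):
        return self.terrain.get(self.position + 1, Scan())

    def step(self):
        self.position += 1

def traverse(rover, goal):
    """Drive autonomously toward `goal`, stopping only to phone home."""
    while rover.position < goal:
        scan = rover.scan_surroundings()
        if scan.hazard:
            return ("halt", rover.position)      # wait for mission control
        if scan.interesting:
            return ("report", rover.position)    # tell the scientists
        rover.step()
    return ("arrived", rover.position)

# Plain ground until something worth reporting appears at position 5:
print(traverse(Rover({5: Scan(interesting=True)}), goal=10))  # -> ('report', 4)
```

      The point of the loop is that the long round-trip to Earth happens only at the `halt`/`report` boundaries, not at every wheel turn.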

  • by jonhuang ( 598538 ) on Monday August 16, 2004 @03:22AM (#9978495) Homepage
    The more famous quotation (which I suspect is the root of the 'cockroach' descriptor) is: "Robots today have the collective knowledge and wisdom of a cockroach... a retarded cockroach... a lobotomized, retarded cockroach." -Dr. Michio Kaku
  • Today's technology can make a rover as smart as a cockroach.. ... brilliant. Now it'll have a disgusting fascination with my old ham sandwiches, and a tendency to scurry about aimlessly in the middle of the night.
  • AI (Score:5, Informative)

    by noselasd ( 594905 ) on Monday August 16, 2004 @03:25AM (#9978507)
    People wanting to get a feel for AI, take a look at
    http://www.ai-junkie.com/ann/evolved/nnt1.html
    It's a small app that automagically teaches minesweepers to
    pick up mines ;)
  • ... nothing is naturally that stupid.
  • Analogy (Score:2, Interesting)

    I was trying to explain this to someone the other day with an analogy.

    Everyone hates having to click 15 times through a website to get to the content they want. Ideally, the number of clicks would be minimal.
    This is what NASA has to deal with... waiting, moving the rover, more waiting, getting it to focus on something, waiting, close-up, waiting, drilling it, more waiting, analysing it, even more waiting...
    If only it could do all this autonomously! Well...
  • by SetupWeasel ( 54062 ) on Monday August 16, 2004 @03:33AM (#9978533) Homepage
    They have trouble with proven technology like calculators.

    1 inch = 2.54 cm
  • Hold your horses! (Score:5, Insightful)

    by ControlFreal ( 661231 ) * <[ten.reobgreb] [ta] [kein]> on Monday August 16, 2004 @03:41AM (#9978551) Journal

    I'm doing a Ph.D. in an AI-related field at the moment, and all I can say is: don't hold your breath. While it is true that AI has made significant progress, a few remarks are in order.

    First, the "I" in AI really shouldn't be there. When people discuss a difficult decision problem (e.g. some pattern recognition problem), there comes a point where somebody will say, with a solemn voice: "So, what if we use Neural Networks?" (you can practically hear him pronounce those capitals, while he's creaming his pants at the mere thought of his awesome new intelligent system). People often assume that, because a neural network is a very simple and poor analogy of the brain, it must have some "intelligence".

    Guess what? A neural network is a simple nonlinear function. Period. Training such a thing is nothing more than estimating its parameters by minimizing some (usually quadratic) cost criterion. When you put something in, you merely evaluate a rather simple nonlinear function. There is no intelligence involved!
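    To make the parent's point concrete, here is a one-hidden-layer network written out as exactly what it claims: a parameterized nonlinear function, "trained" by plain gradient descent on a quadratic cost. A numpy-only sketch; the toy task, architecture, seed and learning rate are all arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = x^2 on [-1, 1].
X = np.linspace(-1, 1, 64).reshape(-1, 1)
y = X ** 2

# The "neural network" is just f(x) = tanh(x W1 + b1) W2 + b2.
W1 = rng.normal(scale=0.5, size=(1, 8)); b1 = rng.normal(scale=0.5, size=8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)       # hidden activations
    return h, h @ W2 + b2          # network output

def mse(pred):                     # the (quadratic) cost criterion
    return float(np.mean((pred - y) ** 2))

_, pred = forward(X)
loss_start = mse(pred)

lr = 0.1
for _ in range(3000):              # parameter estimation = gradient descent
    h, pred = forward(X)
    err = 2 * (pred - y) / len(X)  # d(mse)/d(pred)
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = err @ W2.T * (1 - h ** 2) # backprop through tanh
    gW1 = X.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
loss_end = mse(pred)
print(round(loss_start, 4), "->", round(loss_end, 4))
```

    Nothing in the loop is more mysterious than minimizing a cost function; the word "neural" does no work here.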

    And then people say: "Yeah, but we have other things as well, such as clustering methods, radial basis function networks, Bayesian (belief) networks, support vector machines, evolutionary algorithms, etc." They, too, do nothing more than estimate parameters (or select representative examples) based on the statistics of the problem at hand.

    There is a good reason that "AI" researchers themselves often refer to their field as "machine learning" rather than AI. If anything, I'd call AI "AS", for Applied Statistics, because most of the methods we use are either pure or augmented statistics.

    That said, machine learning has achieved some nice things. We can do some simple decision-making and pattern recognition (e.g. face detection [unimaas.nl]), and emulate some limited insect behaviour. There are even some limited commercial applications. But we should be very aware that most "spectacular" results are merely lab results. I work on face detection myself, and I can tell you that "the real world" (natural photos, in my case) is a bitch as far as applying methods goes.

    • by meringuoid ( 568297 ) on Monday August 16, 2004 @04:03AM (#9978606)
      Guess what? A neural network is a simple nonlinear function. Period. Training such a thing is nothing more than estimating its parameters by minimizing some (usually quadratic) cost criterion. When you put something in, you merely evaluate a rather simple nonlinear function. There is no intelligence involved!

      Pre-Sentient Algorithms:

      Begin with a function of arbitrary complexity. Feed it values, "sense data". Then, take your result, square it, and feed it back into your original function, adding a new set of sense data. Continue to feed your results back into the original function ad infinitum. What do you have? The fundamental principle of human consciousness.

      -- Academician Prokhor Zakharov,
      "The Feedback Principle"

    • >Guess what? A neural network is a simple nonlinear function. Period.

      Hm... looks like my girlfriend to me... where do I get my PhD?
    • And that's something new?...
      The joke about 'if it works, it's not AI!' is years old, my friend.

    • Re:Hold your horses! (Score:5, Interesting)

      by aallan ( 68633 ) <alasdair.babilim@co@uk> on Monday August 16, 2004 @04:43AM (#9978684) Homepage

      I do a Ph.D. in an AI-related field at the moment...

      I also work in a related field, autonomous systems...

      First, the "I" in AI really shouldn't be there. When people talk a difficult decision problem (e.g. some pattern recognition problem), there comes the point where somebody will say, with a solemn voice: "So, what if we use Neural Networks?"

      I think there has been (was) a view that neural nets were the solution; that's obviously not the case, but they've been overused and there is a backlash in the community against them right now. Basically, they've gone out of fashion. However, they can come in very handy at times, and I've solved several otherwise very complex problems with them that would have been computationally expensive by other means.

      If there is a hard AI solution, which of course is arguable, neural networks will be part of it.

      When you put something in, you merely evaluate a rather simple nonlinear function. There is no intelligence involved!

      Well, that really depends on how you look at it: how did training take place? What is intelligence? You're vastly simplifying the arguments here, perhaps intentionally? I'm sure the hard AI faction would argue that we (human beings) are just a sum of a great number of simple nonlinear functions, out of which complexity emerges.

      I don't know whether I agree, but the argument can't simply be dismissed by waving your arms in the air and saying "nonlinear functions". Which isn't to say I entirely disagree that this is a (possibly) effective counter-argument; it's just, as it stands, intellectually weak. I think I'll track down my PhD student (it's almost morning coffee time, so he's probably about by now) and see if he can argue his way out of this one to my satisfaction.

      Al.
      • Re:Hold your horses! (Score:5, Interesting)

        by ControlFreal ( 661231 ) * <[ten.reobgreb] [ta] [kein]> on Monday August 16, 2004 @05:48AM (#9978856) Journal

        I think there has been (was) a view that neural nets were the solution; that's obviously not the case, but they've been overused...

        Yes, that's true. It's funny in a sad way: I'm using cluster-weighted models in my research. In truth, those are just neural networks with radial basis functions that, in addition to providing an output, also provide a covariance matrix for that output, which can be used either as a confidence bound or to estimate a PDF instead of a point estimate. However, nobody in their right (?) mind calls these models "neural networks" anymore in publications. I think this is exactly because of the backlash you describe. The reaction is like: "Neural networks? Oh, that's so 1990..."

        Well that really depends on how you look at it, how did training take place? What is intelligence? You're vastly simplifying the arguements here, perhaps intentionally? I'm sure the hard AI faction would argue that we (human beings) are just a sum of a great number of simple nonlinear functions, out of which there is emerging complexity

        Yes, of course, that is true. But like I said in another post, this "just" is the problem. I mean: put 100 simple things together and have them interact, and you end up with a system that is vastly more complex than just 100 unconnected simple things. We have no idea how to learn the parameters of such a behemoth, let alone how to initialize it.

        The devil is in the interaction here.

        I don't know whether I agree, but the argument can't simply be dismissed by waving your arms in the air and saying "nonlinear functions".

        I think a little explanation is in order here, particularly regarding the audience of my first post. Of course you are right in saying that I'm severely cutting corners by shouting "just a nonlinear function". I knew that cutting the corner like that might trigger (or even offend, in which case I apologise) some real AI researchers. However, my comment was mainly meant for the "clueless hordes" who are engrossed by the words "Neural Network", to let them know that neural networks are not quite the awesome things many people seem to think they are.

        Populism? Yes, guilty as charged ;)

        • put 100 simple things together and have them interact, and you end up with a system that is vastly more complex than just 100 unconnected simple things. We have no idea on how to learn the parameters of such a behemoth, let alone how to initialize it.

          That's sad. It was true over a decade ago when I did my PhD in an AI-related field, but those were early days, with neural nets, expert systems and fuzzy logic still all the rage. Is there at least a recognized subfield of machine learning now that deals in
          • Is there at least a recognized subfield of machine learning now that deals in the study of emergence?

            Sure; in fact, emerging complexity is now the thing that's all the rage. It's just not very, well, complex yet...

            Al.
        • Re:Hold your horses! (Score:3, Interesting)

          by aallan ( 68633 )

          ...this "just" is the problem. I mean: put 100 simple things together and have them interact, and you end up with a system that is vastly more complex than just 100 unconnected simple things. We have no idea how to learn the parameters of such a behemoth, let alone how to initialize it.

          Which is why I'm interested in autonomous systems, emerging complexity and (let's add one final buzzword) genetic algorithms.

          If you give a system goals, make it work towards those goals, and then try to impose some ext

        • I think there has been (was) a view that neural nets were the solution, that's obviously not the case, but they've been over used...

          Yes. That's true.

          That is certainly not true. Biological brains are neural nets. A brain is a neural cell assembly which consists of a number of integrated subassemblies each with its own function and principle of operation.

          Just because the ANN experts have no clue as to the principles involved, it does not follow that we should abandon neural network research. Animals ar
          • That is certainly not true. Biological brains are neural nets. A brain is a neural cell assembly which consists of a number of integrated subassemblies each with its own function and principle of operation.

            Err, your point being?

            Just because the ANN experts have no clue as to the principles involved, it does not follow that we should abandon neural network research. Animals are proof that neural networks are the future of intelligence research.

            You do realise that a lot of neural net (computer science

            • This is why a lot of people are now looking at other things, the things to build these interactions. Neural networks were seen as the solution to the hard AI problem; they aren't. If there is such a solution, then they are certainly part of it, if there is one...

              If, by the hard problem, you mean a fully autonomous and highly intelligent machine (I am not talking about consciousness here), the entire solution (not just a part) will be neural networks, period. There are no two ways about it. Everything else in A
              • If, by the hard problem, you mean a fully autonomous and highly intelligent machine (I am not talking about consciousness here), the entire solution (not just a part) will be neural networks, period...

                Err, no. That has proved not to be the case.

                Al.
                • If, by the hard problem, you mean a fully autonomous and highly intelligent machine (I am not talking about consciousness here), the entire solution (not just a part) will be neural networks, period...

                  Err, no. That has proved not to be the case.

                  Surely you must be kidding. Where is the proof that a truly intelligent being uses any solution other than neural networks? The entire biological universe of intelligent systems consists of neural networks.
                  • What you fail to realize is that the NNs used in AI (a.k.a. ANNs) are highly unlikely to have more than a shallow resemblance to the NNs in real organic brains. Further, it is not a given that you need to emulate the primitive parts of the brain to attain 'higher intelligence'. In fact, I think people have focused way too much on the primitive aspects of ANNs. It has been clear for some time that an ANN is nothing but regression/compression/function approximation, depending on your academic background. You would think the different
                    • by Louis Savain ( 65843 ) on Monday August 16, 2004 @12:55PM (#9982654) Homepage
                      What you fail to realize is that the NNs used in AI (a.k.a. ANNs) are highly unlikely to have more than a shallow resemblance to the NNs in real organic brains.

                      If you read my stuff, you will see that I am acutely aware of the sorry state of ANN research. In fact, I believe that traditional ANNs are a joke. The same goes for almost everything that ever came out of GOFAI. The good stuff is what's happening in the spiking neural network (SNN) field within computational neuroscience. SNNs are much more biologically plausible. They are the future of AI.

                      You would think the different variations of the 'Free Lunch Theorem' should have caused people to catch on sooner

                      The 'No free lunch theorem' is a complete joke, IMO. The search for human-level intelligence is precisely a search for free lunches. Otherwise, we might as well throw in the towel and go fly a kite or something. Why? Because the interconnectedness of intelligence is so astronomical as to be intractable to formal approaches. The fact that the brain consists of a number of cell assemblies, each consisting of a huge number of similar neurons, is proof that there are plenty of free lunches to go around. The 'no free lunch' mantra is a way for some people to guarantee an income from grants and other government freebies.

                      The sad thing about AI is that most of the community seems to be doing situation specific regression or optimization, with no plan on how that could eventually get us closer to 'higher intelligence'.

                      The reason is that the problem is too hard, especially if you approach it from a cognitive science point of view. So people started to build limited-domain programs to which they attached the 'AI' label. This way they can claim that what they're doing is 'AI', but the rest of us know better. It's all a bunch of glorified toys.
    • Re:Hold your horses! (Score:3, Informative)

      by Mant ( 578427 )

      Guess what? A neural network is a simple nonlinear function. Period. Training such a thing is nothing more than estimating its parameters by minimizing some (usually quadratic) cost criterion. When you put something in, you merely evaluate a rather simple nonlinear function. There is no intelligence involved!

      It's a while since I did neural nets at university, but IIRC a 'neuron' in a neural net lacks nothing from its biological equivalent. Neurons in living creatures either fire or not based on their

      • The problem is in building the networks; biological networks are complex, with multiple sub-networks processing in parallel, and with those sub-networks interacting. Humans can't yet build networks that can do anything like this.

        Still, if an artificial neural net is a simple, non-linear function, then all the more complex networks in biology are 'just' collections of such functions: complex behaviour arising from lots of simple, but interacting, components.

        That's true of course: there are neural network mo

      • It's a while since I did neural nets at university, but IIRC a 'neuron' in a neural net lacks nothing from its biological equivalent.

        That's true only in a very general and abstract sense. Reality, as always, is complex and messy. But since it's one of the few mechanisms in the brain that we understand well, we are tempted to reduce everything to it.

        I believe that even the basic machinery of the brain is still very poorly understood. But since we naturally tend to elevate the significance of what we know a
    • Guess what? A neural network is a simple nonlinear function. Period.

      And the brain is just a little harder nonlinear function. Period.
      • And the brain is just a little harder nonlinear function. Period.

        Eh... Period! ;)

        No, but seriously, see my replies to some of the other responses to my first message: it's the interactions in those complex systems that make a system that is "just a bit bigger" vastly more than "just a bit" more complex.
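As a minimal sketch of the claim being argued in this subthread (a neural network is just a parameterized nonlinear function, and "training" just estimates its parameters by minimizing a quadratic cost), here is a tiny one-hidden-layer net fitted by crude finite-difference gradient descent. The target function, architecture and hyperparameters are all arbitrary choices for illustration:

```python
import math
import random

# Target data for a simple nonlinear relationship: y = x^2 on [-1, 1].
data = [(x / 10.0, (x / 10.0) ** 2) for x in range(-10, 11)]

def net(params, x):
    """A tiny one-hidden-layer net: just a parameterized nonlinear function."""
    w1a, b1a, w1b, b1b, w2a, w2b, b2 = params
    return (w2a * math.tanh(w1a * x + b1a)
            + w2b * math.tanh(w1b * x + b1b) + b2)

def cost(params):
    """The quadratic cost criterion: mean squared error over the data."""
    return sum((net(params, x) - y) ** 2 for x, y in data) / len(data)

# "Training" = estimating the parameters by minimizing the cost,
# here with crude finite-difference gradient descent.
random.seed(1)
params = [random.uniform(-1, 1) for _ in range(7)]
initial = cost(params)
eps, lr = 1e-5, 0.1
for _ in range(2000):
    grads = [(cost(params[:i] + [params[i] + eps] + params[i + 1:])
              - cost(params)) / eps
             for i in range(7)]
    params = [p - lr * g for p, g in zip(params, grads)]
print(initial, "->", cost(params))  # the fit improves over training
```

Nothing in the loop is "intelligent": it is plain numerical minimization, which is exactly the point being made above.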

    • Re:Hold your horses! (Score:4, Interesting)

      by STFS ( 671004 ) on Monday August 16, 2004 @05:33AM (#9978807) Homepage
      I think the headline of this story is misleading.

      The fact that NASA is "boosting" AI does not mean that they're going to be doing rovers that can evaluate themselves whether a dark spot on the ground is a hole or a shadow! I doubt most people realize how conservative NASA is when it comes to AI.

      I went to a lecture held by one of NASA's AI team leaders who was talking about the AI of the current rovers... there is virtually none! He said that the most complex AI is actually on Earth, to help scientists create a "mission plan" that they send to the rovers once every 24 hrs. These mission plans are of the form: "move forward 2 meters, turn left 22 degrees, take picture of ground, move forward 1 meter" (very simplified).

      I talked to this guy after the lecture and asked him if they really had no AI on the robots themselves and specifically asked about such things as correctional AI for the rover movement. To clarify this, imagine that you instruct the rover to "move forward 1 meter", all it will do is turn all wheels equally so that the rover would move forward one meter if it were on perfectly even ground. This is of course not the case on Mars and you'll have rovers turning when they should move forward and not turning enough and so on. So when I asked him about the correctional AI (to have AI on board correct these environmental issues, for example take pictures of the environment and calculate the "offset" to find out if it's drifting) he said there was no such thing. The only AI on the rover is something that actually makes it panic! That is, it evaluates if there are any deviations from what the guys down on earth told it to do and expect and basically shuts down if there are.

      So AI to make sure that the rover moves in a straight line when the scientists instruct it to do so would be incredibly beneficial, and IMHO would even increase the level of security by making sure that the rover actually does what it's told to do! I asked him if this was something NASA was considering adding to future rovers, and he said they were in the process of convincing "upper management" that it would actually pay off. These are multi-million dollar scientific tools, not toys, so any addition to them needs to go through strict approval processes.

      But according to this article it seems that approval for such things as the "movement correctional AI" has been granted, but as I said, I think people should not be fooled into thinking that this is some sort of an "I, Robot" type of AI!
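The "movement correctional AI" described above amounts to closed-loop feedback on the rover's heading. Here is a toy simulation of the idea; all of the numbers (drift model, gain, step count) are invented purely for illustration and bear no relation to NASA's actual design:

```python
import math

def drive(steps, gain):
    """Drive 'forward' over ground that induces a constant heading drift;
    return the final lateral deviation from the intended straight line."""
    heading, x, y = 0.0, 0.0, 0.0   # 0 rad = the commanded direction
    drift = 0.02                    # per-step drift from wheel slip (assumed)
    for _ in range(steps):
        heading += drift            # terrain pushes the rover off course
        heading -= gain * heading   # on-board correction (gain=0: none)
        x += math.cos(heading)
        y += math.sin(heading)
    return abs(y)

open_loop = drive(100, gain=0.0)    # today's rover: no on-board correction
closed_loop = drive(100, gain=0.5)  # with correctional feedback
print(open_loop, closed_loop)
```

With the gain at zero the simulated rover wanders far off the commanded line; with even a simple proportional correction it stays within a couple of units of it.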

    • NNs are just a way of dividing a feature space. The real "intelligence" in NNs is in designing the feature space in the first place. That's where all the work and creativity is and that's all done by human beings. Until that's automated, I, too, will continue to consider NNs just a simple nonlinear function.

      Devon
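A tiny illustration of the parent's point that the real work is in designing the feature space: a single linear threshold unit cannot separate XOR in the raw features (x, y), but the same unit separates it trivially once a human adds the hand-crafted feature x*y. The weights below are chosen by hand and are just one workable choice:

```python
def fires(weights, bias, features):
    """A linear threshold unit: it merely divides its feature space."""
    return sum(w * f for w, f in zip(weights, features)) + bias > 0

# (x, y, XOR) truth table.
points = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# In the raw space (x, y) no single line separates XOR; in the augmented
# space (x, y, x*y) the weights (1, 1, -2) with bias -0.5 do the job.
def xor_net(x, y):
    return fires([1, 1, -2], -0.5, [x, y, x * y])

print(all(xor_net(x, y) == bool(t) for x, y, t in points))  # True
```

The "intelligence" here went into picking the x*y feature, not into the threshold unit itself.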
  • by mishmash ( 585101 ) on Monday August 16, 2004 @03:42AM (#9978555) Homepage
    NASA's Mars exploration rover has just climbed a mountain to take a photo [nature.com]. Read the article about the tough 3 km climb, including making decisions about how to cope with 'injuries'... do you think AI is up to dealing with challenges like that yet?
  • Biomimetic Robotics (Score:3, Informative)

    by herwin ( 169154 ) <herwinNO@SPAMtheworld.com> on Monday August 16, 2004 @03:50AM (#9978571) Homepage Journal
    We're doing similar work at the University of Sunderland. See http://www.his.sunderland.ac.uk/ [sunderland.ac.uk]. My specialty is 'batbots' - sonar-controlled robots that exhibit sensorimotor integration.
  • by foobsr ( 693224 ) on Monday August 16, 2004 @03:52AM (#9978574) Homepage Journal
    "Creating strong AI software is a very exciting and challenging problem, and it inspires us and our students to work on this bold effort," said noted artificial intelligence expert professor Milind Tambe of the University of California, Los Angeles, who has worked with Rajan.

    I very much doubt that they are talking about strong AI there. ( Arguments for Strong AI [glam.ac.uk]).

    I rather believe he is more on the weak side.

    But, well, he is a noted expert.

    CC.

    def. The two main varieties of AI are called "strong" and "weak". Strong AI argues that it is possible that one day a computer will be invented which can be called a mind in the fullest sense of the word. In other words, it can think, reason, imagine, etc., and do all the things that we currently associate with the human brain. Weak AI, on the other hand, argues that computers can only appear to think and are not actually conscious in the same way as human brains are.

    loc. cit. [philosophyonline.co.uk]
    • The two main varieties of AI are called "strong" and "weak". Strong AI argues that it is possible that one day a computer will be invented which can be called a mind in the fullest sense of the word. In other words, it can think, reason, imagine, etc., and do all the things that we currently associate with the human brain. Weak AI, on the other hand, argues that computers can only appear to think and are not actually conscious in the same way as human brains are.

      The argument seems to me more a matter of q
      • The argument seems to me more a matter of quasi-religious philosophy than of computer science or biology.

        I disagree, though I think that the dichotomy in focus might be more a continuum.

        There may be different ethical (or moral, if you like that better) implications depending on whether you assume that your AI is equipped with 'consciousness' or not. On top, there is the issue of explaining the systems you build.

        Besides CS or informatics and biology psychology also would come in handy, though I agre
  • by 3seas ( 184403 ) on Monday August 16, 2004 @04:00AM (#9978598) Homepage Journal
    ... and given some of the posts/comments...

    Isn't all that's really needed just to land a main transmitter/receiver along with what amounts to a bunch of ant- or roach-like robots that go out in various directions and transmit back to the main transmitter, which relays it back to NASA?

    To increase the overall data collection.

    Looking for signs of life on Mars... hummm... that could pose a problem, if the small robots see each other.

    Oh hell, screw it, let's put life there; wouldn't that just solve the quest for life on Mars?
    • There may be something to this idea...

      Instead of NASA investing in one complex, expensive, reliable probe, what if they invested in a method of producing many smaller, less reliable probes? They could send six, or fifteen, or a hundred, depending on launch costs.

      Each probe would be cheaper because of
      (a) economies of scale (mass production), and
      (b) redundancy, meaning they can use cheaper, less reliable parts.
  • by p0 ( 740290 ) on Monday August 16, 2004 @04:07AM (#9978617)
    ... just in case it bumps on Beagle 2
  • Today's technology can make a rover as smart as a cockroach

    ... moving at blinding speed towards matrioshka brains [slashdot.org]!

  • by Lispy ( 136512 ) on Monday August 16, 2004 @06:14AM (#9978932) Homepage
    Great. While most astronauts were strictly carrying out orders, rovers now get permission to make their own decisions. Sure, this could get interesting, but the point of such a mission is to make sure nothing goes wrong. With AI, and with stupid AI even more so, things can easily get messed up.
  • We all learned from the Six Million Dollar Man episode where the Venus probe that crashed in the LA river wreaks destruction here on Earth, didn't we? Why can't NASA learn the same lesson?
  • by mwood ( 25379 ) on Monday August 16, 2004 @09:16AM (#9980051)
    ...I'd like to see some discussion of sending the robots out in teams, so they can rescue or repair each other.
    • "What we expect to do within the next 10 years is to not only deploy one AI-based rover, but a collection of rovers using the AI-based IDEA architecture, which cooperatively perform tasks orders of magnitude more complex than the MER rover, and do it in a much more robust way," Rajan predicted. Using surveying instruments, teams of robots may well be able to map large tracts of the surface of Mars, according to Rajan, who said there are many reasons to use a large robot team. "One reason is better coverag
      • I was thinking specifically of all the recent worry about one of the Mars rovers getting stuck somewhere. If they worked in tandem, perhaps each carrying half of the specialized instruments, then if A gets stuck B can grab him and pull him free.

        Repair is a bit more difficult. But if you're going to send a dozen robots at once, send a box of spares along too. Besides, "repair" doesn't necessarily mean "return to factory spec.s". It could be as crude as "remove (or cut away) bent part X that's preventing
  • Does anyone else have a problem with Slashdot's choice of font, specifically with this article's title?

    Every time I see an article like this I have to jump through mental hoops to get my brain to interpret the correct meaning of "AI" rather than what I immediately read, which is "AL" (but with a lowercase "L").

    I'm sorry but I really wish a switch could be made to a font such that an uppercase "I" is not the exact same shape as a lower case "L".

    I don't know the buzz words for the bars at the top and bottom of an I
  • Perhaps it's hard to recognize, but the most exciting thing that has happened in terms of systems automation at NASA was a recent satellite picking up and then recording a geological event in Antarctica without ground instructions to do so. That's the kind of commanding they're talking about at this stage: if an interesting event occurs, the rover (or orbiter) can record it without the 24-hour planning latency that is unavoidable with the current system and might cause the event to be missed.
  • They need to make them as smart as velociraptors, for Pete's sake.


    blakespot

  • Tambe explained that AI research inspires the next generation of computer scientists because when they hear about NASA AI work, "their eyes light up, and then they understand what this research could mean for the future."

    As a member of that next generation, I can only say: You're god damn right it does.
