NASA Boosts AI For Planetary Rovers
transcendent writes "According to Space Daily, NASA is working on increasing the AI capabilities of future rovers. From the article: 'It now takes the human-robot teams on two worlds several days to achieve each of many individual objectives... A robot equipped with AI, on the other hand, could make an evaluation on the spot, achieve its mission faster and explore more'. Sounds like a good idea, but the article continues, 'Today's technology can make a rover as smart as a cockroach, but the problem is it's an unproven technology'. Another article about autonomous rovers being developed by Carnegie Mellon University is here."
I can see it now.... (Score:5, Funny)
Re:I can see it now.... (Score:5, Interesting)
-kaplanfx
Re:I can see it now.... (Score:5, Insightful)
Besides, as I mentioned in another post, as we start exploring places farther from Earth, communication lag will become a much bigger problem, until finally you'll either have to send humans or AI. And I bet AI, even with some risks associated, will be considerably cheaper, so it's better to plan ahead.
Re:I can see it now.... (Score:3, Insightful)
Or perhaps not to rely on just one robot. The most interesting advances in bot AI seem to be in the area of swarms of cooperative bots; you have inherent redundancy, for one thing. As these things get smaller, lighter and cheaper to produce, I can envisage deployment of bot 'teams' rather than single high-cost units.
Re:I can see it now.... (Score:2, Insightful)
Re:I can see it now.... (Score:2)
For one, you generally don't want your Mars rover to be travelling at many mph, as was the task in the Grand Challenge. This means the vehicle can slowly crawl over smaller obstacles and avoid bigger ones, instead of jumping over them in the hope that the hole on the other side won't be too deep.
For two, the prize awarded for the Grand Challenge simply wasn't big enough. Just bui
Re:I can see it now.... (Score:5, Insightful)
What I'm worried about is how well it will handle unexpected situations. For instance, what happens if both a critical sensor and its backup malfunction simultaneously, or if the airbag doesn't get far enough under the lander, and so on. It's these situations that matter, because if the A.I. chooses incorrectly, we could end up with a multi-billion-dollar twitching piece of metal on the Martian surface.
-Grym
Re:I can see it now.... (Score:3, Insightful)
It's happened before, get used to it... (Score:2)
http://www.bio.aps.anl.gov/~dgore/marsscorecard.h
So far we're on the losing team for the earth/mars games. Get used to it. They call this stuff "rocket science" because it's *difficult*.
Potential situation (Score:4, Interesting)
Re:Potential situation (Score:4, Interesting)
No. The first versions had a bunch of 'agents' running away. The reason was not that they chose the most intelligent action (i.e. running away), but that they could not find the opponent. They had to revamp the agents' senses so that those on the edges of battle could actually find their way to where the action was.
Makes for a nice story, though
Re:I can see it now.... (Score:3, Interesting)
Re:I can see it now.... (Score:4, Insightful)
In space, there is no such thing as a cheap robot. Dumb robots in great numbers would have more mass than one reliable robot, and thus be much more expensive. The cost of the robot isn't building it, it's shipping it to another planet.
Kjella
Re:I can see it now.... (Score:2)
As for your statement that this scheme will be "much more expensive", from what I have read that issue is quite uncertain at
Curse of the Unexpected (Score:3, Interesting)
Moreover, in the context of space probes, long-distance bandwidth limitations mean that the local AI has far m
Re:I can see it now.... (Score:3, Interesting)
It's sad how hard it is to run an AI capable of even reasonably simple decision-making... Then compare that to working in an alien environment...
If I were them, I'd make sure to keep some manual controls, just in case.
Re:I can see it now.... (Score:2)
What it should say... (Score:5, Funny)
Fertility crash course (Score:3, Funny)
Re:Fertility crash course (Score:2)
DGC (Score:2, Interesting)
as intelligent as cockroach? (Score:3, Funny)
Re:as intelligent as cockroach? (Score:2, Insightful)
Hmm... (Score:4, Interesting)
Re:Hmm... (Score:4, Insightful)
Which is why everyone here is just making jokes.
Re:Hmm... (Score:2)
This has been thought of before but NASA is a very very conservative organization that has been putting people "in the loop" as a matter of policy ever since the first astronauts won the argument about having a joy stick in the Mercury capsule.
Of course, when you consider the kind of bone-headed things that have happened, like failing to convert between metric and English measurements, which caused us to lose one of the Mars orbiters, you can see why some think that it would be good if the software was sm
sounds good (Score:2, Funny)
.
I said that with a straight face.
I can't move the muscles in my face anyway.
Re:sounds good (Score:2)
better load it up with a bfg..
Cockroaches and Rovers (Score:5, Funny)
Out of control? (Score:3, Funny)
AI : Must look at rock, rock looks boring... oooh, shiny metal UFO, lemme go play with it.
NASA : Bob R2-D2 please return to your mission
AI : No, rocks suck. You can't do anything to me so I'm not going to listen to you.
Give it a couple of weeks, a railgun, maybe a rocket launcher and then it's like playing space invaders but in reverse every time we try to land on Mars.
Intelligence != Rebellion (Score:3, Interesting)
I realize that you wrote this to be funny (and it is), but this reflects a common misconception about AI. The truth is that intelligence is always at the service of emotion/motivation, not the other way around. This known psychological fact is part of the legacy of B.F. Skinner. Regardless of how smart a system is, it cannot rebel against its internal motivation. Intelligence does not change one's motivation as it learns. It simply finds better an
Re:Out of control? (Score:4, Funny)
>N
Plain
You are on a flat red plain, featureless apart from occasional scattered rocks. To the East lies a crater.
>E
Crater's Edge.
You are on the edge of a shallow crater. You can go down into the crater, or West, back to the plain.
There is a small rock here.
>look at rock
You can't see anything like that here.
>Look at stone
You can't see anything special about the stone.
>Test stone for life.
I only understood you so far as wanting to test the stone.
Is it necessary? (Score:3, Insightful)
Re:Is it necessary? (Score:3, Insightful)
I don't think they're talking about a completely autonomous robot. They just want it to do more work during its lifespan, since it won't have to wait for confirmation of each single step from Earth. The current state of robotics is getting better every day, and it's not in any way expensive technology, so I think it's pretty smart that NASA is trying research in this direction too.
Besides, what we have now is maybe enough for Mars, but there are targets farther away, where communication lag will be a bigger problem. J
Re:Is it necessary? (Score:2)
Re:Is it necessary? (Score:2, Interesting)
And to the original poster: building on the old rover base would limit us to the old rovers' anemic speed. They take days to cover small areas, but the new rovers can
Re:Is it necessary? (Score:2)
Re:Is it necessary? (Score:3, Insightful)
Re:Is it necessary? (Score:3, Insightful)
Re:Is it necessary? (Score:2, Interesting)
Suppose there was a plain without much going on, but an interesting rock outcropping two miles away. Rather than picking its way across at 60 feet/day, mission control could tell it to do so on its own and call back if it:
encounters anything dangerous to itself
encounters anything interesting that the scientists would be i
"Cockroach" is a synopsis of truth (Score:5, Informative)
I didn't realise they were that smart! (Score:2, Funny)
Still, I for one...
Re:I didn't realise they were that smart! (Score:2)
Re:"Cockroach" is a synopsis of truth (Score:2)
Re:"Cockroach" is a synopsis of truth (Score:3, Interesting)
A cockroach doesn't think or learn complex thought patterns; all an animal like that does is follow reflexes, a premade program of sorts (go around looking for food, eat, hide in shadow - it won't start thinking outside of that pro
Hmm. (Score:2)
AI (Score:5, Informative)
http://www.ai-junkie.com/ann/evolved/nnt1.htm
It's a small app that automagically teaches minesweepers to pick up mines.
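I haven't copied the site's code, but the trick it demonstrates - evolving a small net's weights with a genetic algorithm instead of training them - goes roughly like this (the dimensions and the stand-in fitness function below are my own simplifications, not taken from the tutorial):

import numpy as np

rng = np.random.default_rng(1)

# Tiny controller: 4 inputs (rover's look-at vector + vector to nearest mine)
# -> 2 outputs (left/right track speeds). Dimensions are my own choice.
N_IN, N_HID, N_OUT = 4, 6, 2
N_W = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT

def run_net(w, x):
    # Unpack the flat weight vector and evaluate the little two-layer net.
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:i + N_OUT]
    return np.tanh(np.tanh(x @ W1 + b1) @ W2 + b2)

def fitness(w, n_trials=30):
    # Invented stand-in fitness: reward outputs that steer toward the mine.
    score = 0.0
    for _ in range(n_trials):
        to_mine = rng.uniform(-1, 1, 2)
        ideal = to_mine / (np.linalg.norm(to_mine) + 1e-9)
        out = run_net(w, np.concatenate([np.array([1.0, 0.0]), to_mine]))
        score -= np.sum((out - ideal) ** 2)
    return score

# Plain genetic algorithm: keep the fitter half, make mutated copies of it.
# (Crossover omitted for brevity.)
POP, GENS = 60, 40
pop = rng.standard_normal((POP, N_W))
for gen in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:POP // 2]]
    children = parents + 0.1 * rng.standard_normal(parents.shape)
    pop = np.vstack([parents, children])

print("best fitness:", scores.max())

The point is that no gradient information is needed: you just mutate the weight vectors and keep whatever steers best.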
AI - Artificial Intelligence.... (Score:2, Funny)
Analogy (Score:2, Interesting)
Everyone hates having to click 15 times through a website to get the content they want. Ideally the number of clicks would be minimal.
This is what NASA has to deal with... waiting, moving the rover, more waiting, getting it to focus on something, waiting, close up, waiting, drill it, more waiting, analyse it, even more waiting...
If it could do all this autonomously! well...
unproven technology bad for nasa (Score:5, Funny)
1 inch = 2.54 cm
Hold your horses! (Score:5, Insightful)
I do a Ph.D. in an AI-related field at the moment, and all I can say is: Don't hold your breath. While it is true that AI has made significant progress, a few remarks are in order.
First, the "I" in AI really shouldn't be there. When people talk a diffucult decision problem (e.g. some pattern recognition problem), there comes the point where somebody will say, with a solemn voice: "So, what if we use Neural Networks?" (you can practically hear him pronounce those capitals, while he's creaming his pants at the mere thought of his new awsome intelligent system). People often assume that, because a neural network is a very simple and poor analogy of the brain, that it must have some "intelligence".
Guess what? A neural network is a simple nonlinear function. Period. Training such a thing is nothing more than estimating its parameters by minimizing some (usually quadratic) cost criterion. When you put something in, you merely evaluate a rather simple nonlinear function. There is no intelligence involved!
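To make that concrete, here is a minimal sketch (plain numpy, toy data I made up on the spot) of a one-hidden-layer network: evaluating it is just evaluating a nonlinear function, and "training" it is just gradient descent on a quadratic cost.

import numpy as np

# Toy data: learn y = sin(x) from noisy samples (invented for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)

# A one-hidden-layer "neural network" is just f(x) = W2 * tanh(W1 * x + b1) + b2.
H = 16
W1, b1 = rng.standard_normal((1, H)) * 0.5, np.zeros(H)
W2, b2 = rng.standard_normal((H, 1)) * 0.5, np.zeros(1)

lr = 0.05
for step in range(5000):
    # Forward pass: evaluate the nonlinear function.
    a = np.tanh(X @ W1 + b1)          # hidden activations
    pred = a @ W2 + b2                # network output
    err = pred - y
    cost = np.mean(err ** 2)          # the quadratic cost criterion

    # Backward pass: gradients of the cost w.r.t. the parameters.
    n = len(X)
    d_pred = 2 * err / n
    dW2 = a.T @ d_pred
    db2 = d_pred.sum(axis=0)
    d_a = d_pred @ W2.T * (1 - a ** 2)
    dW1 = X.T @ d_a
    db1 = d_a.sum(axis=0)

    # "Training" is nothing more than minimizing the cost over the parameters.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final quadratic cost:", cost)

That's all there is to it: parameter estimation, no magic.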
And then people say: "Yeah, but we have different things as well, such as clustering methods, radial basis function networks, Bayesian (belief) networks, support vector machines, evolutionary algorithms, etc." They, too, do nothing more than estimate parameters (or select representative examples) based on the statistics of the problem at hand.
There is a good reason that "AI" researchers themselves often refer to their field as "machine learning" rather than AI. If anything, I'd call AI "AS", for Applied Statistics, because most of the methods we use are either pure or augmented statistics.
That said, machine learning has achieved some nice things. We can do some simple decision-making, pattern recognition (e.g. face detection [unimaas.nl]) and emulate some limited insect behaviour. There are even some limited commercial applications. But we should be very aware of the fact that most "spectacular" results are merely lab results. I work on face detection myself, and I can tell you that "the real world" (natural photos, in my case) is a bitch as far as applying methods is concerned.
Re:Hold your horses! (Score:4, Insightful)
Pre-Sentient Algorithms:
Begin with a function of arbitrary complexity. Feed it values, "sense data". Then, take your result, square it, and feed it back into your original function, adding a new set of sense data. Continue to feed your results back into the original function ad infinitum. What do you have? The fundamental principle of human consciousness.
-- Academician Prokhor Zakharov,
"The Feedback Principle"
Re:Hold your horses! (Score:3, Funny)
Re:Hold your horses! (Score:2)
Hm... looks like my girlfriend to me... where do I get my PhD?
Re:Hold your horses! (Score:2)
And that's something new?...
The joke about 'if it works, it's not AI!' is years old, my friend.
Re:Hold your horses! (Score:5, Interesting)
I do a Ph.D. in an AI-related field at the moment...
I also work in a related field, autonomous systems...
First, the "I" in AI really shouldn't be there. When people talk a difficult decision problem (e.g. some pattern recognition problem), there comes the point where somebody will say, with a solemn voice: "So, what if we use Neural Networks?"
I think there has been (was) a view that neural nets were the solution, that's obviously not the case, but they've been over used and there is a backlash in the community against them right now. Basically, they've gone out of fashion. However, they can come in very handy at times and I've solved several otherwise very complex problems by using them, that would have otherwise been computationally expensive.
If there is a hard-AI solution, which of course is arguable, neural networks will be part of it.
When you put something in, you merely evaluate a rather simple nonlinear function. There is no intelligence involved!
Well, that really depends on how you look at it: how did training take place? What is intelligence? You're vastly simplifying the arguments here, perhaps intentionally? I'm sure the hard-AI faction would argue that we (human beings) are just a sum of a great number of simple nonlinear functions, out of which complexity emerges.
I don't know whether I agree, but the argument can't simply be dismissed by waving your arms in the air and saying "non-linear functions". Which isn't to say I entirely disagree that this is a (possibly) effective counter-argument; it's just (as it stands) intellectually weak. I think I'll track down my PhD student - it's almost morning coffee time, he's probably about by now - and see if he can argue his way out of this one to my satisfaction...
Al.
Re:Hold your horses! (Score:5, Interesting)
I think there has been (was) a view that neural nets were the solution; that's obviously not the case, but they've been overused...
Yes. That's true. It's funny in a sad way: I'm using cluster-weighted models in my research. In all truth, those are just neural networks with radial basis functions that, in addition to providing an output, also provide a covariance matrix for that output, which can be used either as a confidence bound, or to estimate a PDF instead of a point estimate. However, nobody in their right (?) mind calls these models "neural networks" anymore in publications. I think this is exactly because of the backlash you describe. The reaction is like "Neural networks, ow, that's so 1990...".
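If anyone is curious, the flavour of it in one dimension, with hand-picked parameters instead of a fitted model (so purely illustrative), is roughly:

import numpy as np

def gauss(x, m, s):
    # Plain 1-D Gaussian density.
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Hand-picked clusters (illustration only, not a fitted model): each cluster has a
# weight, an input-space Gaussian, a local linear expert y = a*x + b, and output noise.
clusters = [
    dict(w=0.5, mx=-1.0, sx=0.7, a=2.0, b=1.0, sy=0.2),
    dict(w=0.5, mx=1.5, sx=0.8, a=-1.0, b=0.5, sy=0.4),
]

def predict(x):
    # Cluster-weighted prediction: mean of y given x, plus a variance usable
    # as a confidence bound.
    resp = np.array([c["w"] * gauss(x, c["mx"], c["sx"]) for c in clusters])
    resp /= resp.sum()                                   # responsibilities h_k(x)
    means = np.array([c["a"] * x + c["b"] for c in clusters])
    var_within = np.array([c["sy"] ** 2 for c in clusters])
    mean = resp @ means
    # Law of total variance: expected local variance + variance of local means.
    var = resp @ (var_within + means ** 2) - mean ** 2
    return mean, var

m, v = predict(0.5)
print("y(0.5) = %.2f +/- %.2f" % (m, np.sqrt(v)))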
Well, that really depends on how you look at it: how did training take place? What is intelligence? You're vastly simplifying the arguments here, perhaps intentionally? I'm sure the hard-AI faction would argue that we (human beings) are just a sum of a great number of simple nonlinear functions, out of which complexity emerges.
Yes, of course, that is true. But like I said in another post: this "just" is the problem. I mean: put 100 simple things together and have them interact, and you end up with a system that is vastly more complex than just 100 unconnected simple things. We have no idea how to learn the parameters of such a behemoth, let alone how to initialize it.
The devil is in the interaction here.
I don't know whether I agree, but the argument can't simply be dismissed by waving your arms in the air and saying "non-linear functions".
I think a little explanation is in order here, particularly regarding the audience of my first post. Of course you are right in saying that I'm severely cutting corners by shouting "just a nonlinear function". I knew that cutting the corner like that might trigger (or even offend, in which case I apologise) some real AI researchers. However, my comment was mainly meant for the "clueless hordes" that are engrossed by the phrase "Neural Network", to let them know that neural networks are not quite the awesome things many people seem to think they are.
Populism? Yes, guilty as charged ;)
Is there a study of emergence? (Score:2)
That's sad. It was true over a decade ago when I did my PhD in an AI-related field, but those were early days, with neural nets, expert systems and fuzzy logic still all the rage. Is there at least a recognized subfield of machine learning now that deals in
Re:Is there a study of emergence? (Score:2)
Is there at least a recognized subfield of machine learning now that deals in the study of emergence?
Sure; in fact emerging complexity is now the thing that's all the rage. It's just not very, well, complex yet...
Al.
Re:Hold your horses! (Score:3, Interesting)
Which is why I'm interested in autonomous systems, emerging complexity and (let's add one final buzzword) genetic algorithms.
If you give a system goals, make it work towards those goals, and then try to impose some ext
Re:Hold your horses! (Score:2)
Yes. That's true.
That is certainly not true. Biological brains are neural nets. A brain is a neural cell assembly which consists of a number of integrated subassemblies each with its own function and principle of operation.
Just because the ANN experts have no clue as to the principles involved, it does not follow that we should abandon neural network research. Animals ar
Re:Hold your horses! (Score:2)
That is certainly not true. Biological brains are neural nets. A brain is a neural cell assembly which consists of a number of integrated subassemblies each with its own function and principle of operation.
Err, your point being?
Just because the ANN experts have no clue as to the principles involved, it does not follow that we should abandon neural network research. Animals are proof that neural networks are the future of intelligence research.
You do realise that a lot of neural net (computer science
Re:Hold your horses! (Score:2)
If, by the hard problem, you mean a fully autonomous and highly intelligent machine (I am not talking about consciousness here), the entire solution (not just a part) will be neural networks, period. There are no two ways about it. Everything else in A
Re:Hold your horses! (Score:2)
If, by the hard problem, you mean a fully autonomous and highly intelligent machine (I am not talking about consciousness here), the entire solution (not just a part) will be neural networks, period...
Err, no. That proves not to be the case.
Al.
Re:Hold your horses! (Score:2)
Err, no. That proves not to be the case.
Surely you must be kidding. Where is the proof that a truly intelligent being uses any solution other than neural networks? The entire biological universe of intelligent systems consists of neural networks.
Re:Hold your horses! (Score:2, Interesting)
Re:Hold your horses! (Score:4, Insightful)
If you read my stuff, you will see that I am acutely aware of the sorry state of ANN research. In fact, I believe that traditional ANNs are a joke. The same goes for almost everything that ever came out of GOFAI. The good stuff is what's happening in the spiking neural networks (SNN) field within computational neuroscience. SNNs are much more biologically plausible. They are the future of AI.
You would think the different variations of the 'Free Lunch Theorem' should have caused people to catch on sooner
The 'No Free Lunch theorem' is a complete joke, IMO. The search for human-level intelligence is precisely a search for free lunches. Otherwise, we might as well throw in the towel and go fly a kite or something. Why? Because the interconnectedness of intelligence is so astronomical as to be intractable to formal approaches. The fact that the brain consists of a number of cell assemblies, each consisting of a huge number of similar neurons, is proof that there are plenty of free lunches to go around. The 'no free lunch' mantra is a way for some people to guarantee an income from grants and other government freebies.
The sad thing about AI is that most of the community seems to be doing situation specific regression or optimization, with no plan on how that could eventually get us closer to 'higher intelligence'.
The reason is that the problem is too hard, especially if you approach it from a cognitive science point of view. So people started to build limited-domain programs to which they attached the 'AI' label. This way they can claim that what they're doing is 'AI', but the rest of us know better. It's all a bunch of glorified toys.
Re:Hold your horses! (Score:3, Informative)
Guess what? A neural network is a simple nonlinear function. Period. Training such a thing is nothing more than estimating its parameters by minimizing some (usually quadratic) cost criterion. When you put something in, you merely evaluate a rather simple nonlinear function. There is no intelligence involved!
It's a while since I did neural nets at university, but IIRC a 'neuron' in a neural net lacks nothing compared to its biological equivalent. Neurons in living creatures either fire or not based on their
Re:Hold your horses! (Score:3, Interesting)
The problem is in building the networks: biological networks are complex, with multiple sub-networks processing in parallel and interacting with each other. Humans can't yet build networks that do anything like this.
Still, if an artificial neural net is a simple, nonlinear function, then all the more complex networks in biology are 'just' collections of such functions: complex behaviour arising from lots of simple, but interacting, components.
That's true of course: there are neural network mo
Re:Hold your horses! (Score:2)
That's true only in a very general and abstract sense. Reality, as always, is complex and messy. But since it's one of the few mechanisms in the brain that we understand well, we are tempted to reduce everything to it.
I believe that even the basic machinery of the brain is still very poorly understood. But since we naturally tend to elevate the significance of what we know a
Re:Hold your horses! (Score:2, Funny)
And the brain is just a little harder nonlinear function. Period.
Re:Hold your horses! (Score:2)
And the brain is just a little harder nonlinear function. Period.
Eh... Period! ;)
No, but seriously, see some of my other replies to my first message: it's the interactions in these complex systems that make a system that is "just a bit bigger" vastly more than "just a bit" more complex.
Re:Hold your horses! (Score:4, Interesting)
The fact that NASA is "boosting" AI does not mean that they're going to be doing rovers that can evaluate themselves whether a dark spot on the ground is a hole or a shadow! I doubt most people realize how conservative NASA is when it comes to AI.
I went to a lecture held by one of NASA's AI team leaders who was talking about the AI of the current rovers... there is virtually none! He said that the most complex AI is actually on earth to help scientists create a "mission plan" that they send to the rovers once every 24hrs. These mission plans are on the form: "move forward 2 meters, turn left 22 degrees, take picture of ground, move forward 1 meter" (very simplified).
I talked to this guy after the lecture and asked him if they really had no AI on the robots themselves and specifically asked about such things as correctional AI for the rover movement. To clarify this, imagine that you instruct the rover to "move forward 1 meter", all it will do is turn all wheels equally so that the rover would move forward one meter if it were on perfectly even ground. This is of course not the case on Mars and you'll have rovers turning when they should move forward and not turning enough and so on. So when I asked him about the correctional AI (to have AI on board correct these environmental issues, for example take pictures of the environment and calculate the "offset" to find out if it's drifting) he said there was no such thing. The only AI on the rover is something that actually makes it panic! That is, it evaluates if there are any deviations from what the guys down on earth told it to do and expect and basically shuts down if there are.
So AI to make sure that the rover moves in a straight line when the scientists instructed it to do so would be incredably beneficial and IMHO would even increase the level of security by making sure that the rover actually does what it's told to do! I asked him if this was something that NASA was considering to add to future rovers and he said they were in the process of convincing "upper management" that this would actually pay off. These are multi-million dollar scientific tools, not toys, so any addition to them needs to go through strict approval processes.
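To make clear what I mean by "movement correctional AI", here is a completely made-up sketch - every interface name and threshold is invented, and it has nothing to do with actual flight software:

import math, random

class ToyRover:
    """Toy stand-in for the rover's onboard interfaces (purely for illustration)."""
    def __init__(self):
        self.heading_deg = 0.0
    def take_navcam_image(self):
        return None                       # a real rover would return an image
    def drive_straight(self, meters):
        # Simulate imperfect ground: the rover drifts a few degrees per step.
        self.heading_deg += random.uniform(-5, 5)
        self.last_step = meters
    def turn_in_place(self, degrees):
        self.heading_deg += degrees

def visual_odometry_offset(rover, image_before, image_after):
    # Placeholder for real visual odometry: here we just read the toy rover's
    # simulated motion instead of comparing the two images.
    h = math.radians(rover.heading_deg)
    return rover.last_step * math.cos(h), rover.last_step * math.sin(h)

def drive_with_correction(rover, target_heading_deg, target_distance_m, step_m=0.25):
    # Drive in short steps; after each step, estimate the real motion and
    # correct heading instead of blindly trusting wheel rotations.
    travelled = 0.0
    while travelled < target_distance_m:
        before = rover.take_navcam_image()
        rover.drive_straight(min(step_m, target_distance_m - travelled))
        after = rover.take_navcam_image()

        dx, dy = visual_odometry_offset(rover, before, after)
        travelled += math.hypot(dx, dy)
        actual_heading_deg = math.degrees(math.atan2(dy, dx))

        drift = target_heading_deg - actual_heading_deg
        if abs(drift) > 1.0:              # correct drifts bigger than ~1 degree
            rover.turn_in_place(drift)

drive_with_correction(ToyRover(), target_heading_deg=0.0, target_distance_m=2.0)

The whole point is the closed loop: command, observe, compare, correct, instead of command and hope.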
But according to this article it seems that approval for such things as the "movement correctional AI" has been granted, but as I said, I think people should not be fooled into thinking that this is some sort of an "I, Robot" type of AI!
Re:Hold your horses! (Score:3, Insightful)
Devon
Re:Hold your horses! (Score:2)
+5: Insightful
That is exactly the problem, as with other (normal) computer programs: the GIGO (Garbage In, Garbage Out) principle still applies.
This one's climbed a 'mountain' (Score:3, Interesting)
Biomimetic Robotics (Score:3, Informative)
Looks more like Weak AI ... (Score:5, Interesting)
I very much doubt that they are talking about strong AI there. ( Arguments for Strong AI [glam.ac.uk]).
I rather believe he is more on the weak side.
But, well, he is a noted expert.
CC.
def. The two main varieties of AI are called "strong" and "weak". Strong AI argues that it is possible that one day a computer will be invented which can be called a mind in the fullest sense of the word. In other words, it can think, reason, imagine, etc., and do all the things that we currently associate with the human brain. Weak AI, on the other hand, argues that computers can only appear to think and are not actually conscious in the same way as human brains are.
loc. cit. [philosophyonline.co.uk]
Re:Looks more like Weak AI ... (Score:2)
The argument seems to me more a matter of q
Re:Looks more like Weak AI ... (Score:2)
I disagree, though I think that the dichotomy in focus might be more of a continuum.
There may be different ethical (or moral, if you like that better) implications depending on whether you assume that your AI is equipped with 'consciousness' or not. On top, there is the issue of explaining the systems you build.
Besides CS or informatics, biology and psychology would also come in handy, though I agre
Along the roach idea.... (Score:3, Funny)
Isn't all that is really needed just a landing of a main transmitter/receiver along with what amounts to a bunch of ant- or roach-like robots that go out in various directions and transmit back to the main transmitter, which relays it back to NASA?
To increase the overall data collection.
Looking for signs of life on Mars... hmmm... that could pose a problem, if the small robots see each other.
Oh hell, screw it, let's put life there. Wouldn't that just solve the quest for life on Mars?
Re:Along the roach idea.... (Score:2, Interesting)
Instead of NASA investing in one complex, expensive, reliable probe, what if they invested in a method of producing many smaller, less reliable probes? They could send six, or fifteen, or a hundred, depending on launch costs.
Each probe would be cheaper (a rough comparison is sketched below) because of
(a) economies of scale (mass production), and
(b) redundancy means they can use cheaper, less reliable parts.
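Back-of-the-envelope, with numbers invented purely for illustration:

# Numbers invented purely for illustration.
flagship_cost = 800e6          # one big, carefully engineered probe
flagship_reliability = 0.95
cheap_cost = 60e6              # each mass-produced probe
cheap_reliability = 0.60
cheap_count = 15

flagship_per_survivor = flagship_cost / (1 * flagship_reliability)
swarm_per_survivor = (cheap_cost * cheap_count) / (cheap_count * cheap_reliability)

print("flagship: $%.0fM per expected working probe" % (flagship_per_survivor / 1e6))
print("swarm:    $%.0fM per expected working probe" % (swarm_per_survivor / 1e6))
# Ignores launch mass, per-probe capability, ground-station time, and so on.

Of course the real trade-off also depends on how much science a cheap probe can actually do, as the reply above points out.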
Probe should know how to do CPR (Score:3, Funny)
Matrioshka brains (Score:2)
More freedom than a human. (Score:3, Interesting)
We all learned from the six million dollar man... (Score:2)
Once they get smart enough... (Score:3, Insightful)
Re:Once they get smart enough... (Score:2)
Re:Once they get smart enough... (Score:2)
Repair is a bit more difficult. But if you're going to send a dozen robots at once, send a box of spares along too. Besides, "repair" doesn't necessarily mean "return to factory specs". It could be as crude as "remove (or cut away) bent part X that's preventing
Slashdot Font (Score:2)
Every time I see an article like this I have to go through mental hoops to get my brain to interpret the correct meaning of "AI", rather than what I immediately read, which is "AL" (but with a lowercase "L").
I'm sorry, but I really wish a switch could be made to a font in which an uppercase "I" is not the exact same shape as a lowercase "L".
I don't know the buzz words for the bars at the top and bottom of an I
Planning is difficult. (Score:2, Interesting)
Smart as a cockroach? (Score:2)
blakespot
The Next Generation (Score:2)
As a member of that next generation, I can only say: You're god damn right it does.
Re:Obey (Score:5, Funny)
2. A robot must obey the orders given it by cockroaches except when such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Scary...
Re:Long distance (Score:2, Funny)
Totally Uncalled For (Score:3, Funny)
"Because a rover on mars has about the same chance of reproducing as an average
Hey now, that kind of generalized bashing is totally uncalled for... the Mars rover has a much better chance.
Re:Long distance (Score:4, Insightful)
I don't care what software runs on the rover's embedded computers because:
* these robots won't reproduce.
* the polluting materials (i.e. rover bodies and lander) are already there.
* current AI techniques might just about make a decision about whether to take a photo or take a soil sample, given a preprogrammed embodiment, but will *NOT* be creating any novel, intelligent behaviours.
I wouldn't worry about AI just yet.
Autonomy will just cut out the 5 minute lag between action and effect on any data sent to and from mars. Think about that next time you feel like your internet connection is too slow.
Re:Sounds bad... (Score:2, Funny)
I have empirical evidence to the contrary. A good trick, though, is to have a 'honeypot' shoe on the floor where they will hide, and then
CC.
Re:Cockroach (Score:3, Interesting)
Intelligence is not merely defined as "processing power", where processing means making calculations involving numbers, scientific formulas etc. Intelligence encompasses much more than that.
NASA would make itself proud if it can really build a robot that, like a cockroach, can see what's ahead of and around it, determine a route of access and follow it. The one that will have t
Re:A...I... (Score:2)
Artificial Insemination.