A.I. Software To Command NASA Mission
EccentricAnomaly writes: "NASA's Three Corner Sat mission will use artificial intelligence to command three formation-flying spacecraft. JPL has a press release here. And there's more information about the A.I. software here. The software is apparently the next generation of the software used for Deep Space One."
More on iterative repair... (Score:1)
Constraint satisfaction is a framework for modeling many NP-hard scheduling and planning problems. A classic example is the n-queens problem, and a famous benchmark is the radio link frequency assignment problem (RLFAP). I don't know whether NASA's autonomous spacecraft planning algorithm is based on the constraint satisfaction model, but it's likely. (darn, I got a NOT FOUND error when trying to download the papers...)
Iterative repair is useful for getting a new solution, based on the previous one, after the environment changes. Since it doesn't solve the problem from scratch again, it can be very fast. But one of the big problems is that convergence has not been proven, so it's possible (although very unlikely) that the algorithm will search for a solution forever, in vain. There is ongoing research on explaining the behavior of iterative repair with mathematical proofs, such as re-modeling it as a Lagrangian multiplier method.
Although the definition of AI is controversial, as seen in the previous discussion, search has long been regarded as the underlying technique for many AI algorithms. It can't learn, and it can't "think", but it can find a solution quickly and look like it's thinking, much as a chess program does.
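For the curious, here's roughly what iterative repair looks like on n-queens - a minimal Python sketch of the min-conflicts idea (my own toy code, not NASA's; their planner repairs whole activity schedules, not queens):

    import random

    def min_conflicts_nqueens(n, max_steps=100000):
        # One queen per column; state[c] = row of the queen in column c.
        state = [random.randrange(n) for _ in range(n)]

        def conflicts(col, row):
            # Queens in other columns attacking square (row, col).
            return sum(1 for c in range(n) if c != col and
                       (state[c] == row or abs(state[c] - row) == abs(c - col)))

        for _ in range(max_steps):
            conflicted = [c for c in range(n) if conflicts(c, state[c]) > 0]
            if not conflicted:
                return state  # every constraint satisfied
            col = random.choice(conflicted)
            # The "repair": move one offending queen to its least-conflicted row.
            state[col] = min(range(n), key=lambda r: conflicts(col, r))
        return None  # gave up; as noted above, convergence isn't guaranteed

Note the last line: there's no proof the loop ever terminates, which is exactly the convergence worry mentioned above.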
Re:Not so hard. (Score:2)
they forgot this line:
25 avoid $WORMHOLES_LEADING_TO_THE_OTHER_SIDE_OF_THE_GALAX
Very critical oversight I might add.
Would you trust it with your own life? (Score:5)
And the problems don't stop there. What about the labor disputes that are bound to arise? A greater emphasis on computer-controlled spaceflight will inevitably result in cutbacks in personnel, as living breathing astronauts are replaced with cyborg equivalents. And don't fool yourself by thinking they'll still need technicians to manage the cyborgs; there's a huge difference between a non-prescribing technician and a board-certified specialist surgeon, and there'll be a huge difference between a greasy fellow with a monkey wrench and today's red-blooded military men and astronauts. Besides, would the cyborgs even allow humans to tend to them? Never. It would undercut their quest for world domination and we'd learn the secrets of the rayguns they point at our heads and pancreases from global satellites in low-earth orbit.
No. A thousand times no! AI has its place: on my desktop, providing consumers with cheerful and prompt advice concerning spreadsheets and form letters. It is certainly not in space, where a rogue AI would have free rein to decide to change course or otherwise alter its mission. You may think it's fine to send an unregulated robotic probe off to Mars to collect samples, but you won't be laughing when that robot claims Mars in the name of cybernetics and starts broadcasting communistic Mars Free Radio signals at our precious bodily television bands.
Re:You're wrong - You're right (Score:1)
If by "outside input" you mean preprogramming, then that's a meaningless definition. Everything is programmed, in hardware or software, through easily understood symbolic algorithms, opaque matrices, or bundles of neurons.
The definition is vague, but it's narrower than just solving problems. A crowbar solves problems, but few would call it AI. Originally, AI was a term to describe making people out of sand. Some core problems were identified (vision, language, planning, learning) and some techniques were invented or applied (artificial neural nets, belief networks, support vector machines). Now AI describes those core problems, even in contexts not meant to result in fully-functioning people, and also describes the technology that's often applied to those problems.
Re:AI = Lemon fresh scent (Score:2)
Planning and scheduling have been considered AI problems for a long time. Pretty much anything trying to solve an AI problem is considered to be AI technology. It just may not be interesting or nontrivial technology. Many people overvalue the term "AI", thinking that anything associated with that label should be able to appreciate a fine wine or cry at an opera, but there's really no objective way to decide which techniques are nontrivial enough to warrant the stamp of pretentiousness.
Re:Would you trust it with your own life? (Score:1)
It's possible, of course, that a human could be bribed or blackmailed into crashing a spacecraft. Or that the stress of the trip would cause mental problems leading to the same.
Two words... (Score:2)
Why is this closed source? (Score:1)
Someone please file a Freedom of Information Act request on this and then publish the source code...
Force your government to comply; they won't do it otherwise.
receding definition of A.I. (Score:2)
One of the oldest definitions of A.I. hasn't really been reached yet: the Turing Test. That would be a tele-conversion with an entity where you couldn't tell whether it was human or machine.
An interesting year for such things... (Score:2)
Re:Would you trust it with your own life? (Score:1)
Sometimes, when I see the moderation on a comment, I wish there was a "moderator goal" menu when performing moderation. I would like to know if the person who gave this silly troll a +1 insightful was being silly himself, trying to bring about the collapse of slashdot, or simply smoking dried cat feces.
--
Re: AI = Lemon fresh scent (Score:2)
I once read AI described as being a solution to a problem whose solution is unknown. A system would cease to be AI when the solution becomes known, even if the system hasn't changed.
I don't recall the exact example given, but it went something like this: You have a problem that you do not know the solution to, let's say how to determine the volume of a sphere given its radius. You can measure the radius, and you can measure the volume, but you cannot determine the relationship between them. So you throw a neural network at some test measurements, and it starts spitting out correct results. AI at work! But then you study what the neural net is doing, and determine that it is simply applying the formula (4/3)pi*r^3 to the radius. The neural net hasn't changed, but now that you know the formula it has become just a fancy way of executing a few floating point multiplications. Is it still AI?
I find this definition useful, even if it does suffer from a sort of Heisenbergian uncertainty principle (it might be AI, but only if you don't look at what it is actually doing). This also leads to the interesting case that a system may appear to be AI to an outside observer, but only a simple rule system to the person who programmed it.
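To see the flavor of it, here's a toy version of exactly that sphere setup (Python with scikit-learn; all the numbers and names are my own illustrative choices, and the fit will only be approximate):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    r = rng.uniform(0.1, 5.0, size=(500, 1))            # "measured" radii
    v = (4.0 / 3.0) * np.pi * r.ravel() ** 3            # "measured" volumes

    # A small net with no knowledge of the formula baked in.
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                       random_state=0).fit(r, v)

    r_test = np.array([[1.0], [2.0], [3.0]])
    print(net.predict(r_test))                          # the net's answers
    print((4.0 / 3.0) * np.pi * r_test.ravel() ** 3)    # the hidden formula

The two rows come out close, yet nothing in the trained weights says "(4/3)pi*r^3" anywhere - which is the point of the story.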
--
Re:If It Cannot Learn, It's Not Intelligent (Score:1)
This is some of the best AI on the planet, people.
Anm
Lisp success story turns C++ horror story? (Score:3)
Remote Agent, written in Common Lisp, wins NASA's Software of the Year 1999 [globalgraphics.com]. Dissatisfied, NASA management decides the entire project should be ported to C++ [tu-graz.ac.at] !!?
Chuck Fry, one of Remote Agent's developers, says:
"The bad news: the Remote Agent modules are all being ported to C++ anyway, as C++ is seen as more "saleable" to risk-averse flight software management. This despite the fact C++ was bumped from Deep Space One because the compiler technology wasn't mature, which the Remote Agent planner/scheduler team learned to its dismay."
Re:Good to see a greater level of trust for AI. (Score:2)
This is only if you're dealing with a two-dimensional space, and you're trying to estimate a function f(x). You're simply talking about using a neural net as an interpolation algorithm. This is indeed one use of neural nets (and has proven more accurate than many interpolation/regression methods), but it is IMHO the least interesting use of NNs. People in general need to start thinking of more clever ways to encode problems for neural nets; I'm sure everyone will be quite surprised with the results.
For example, I think you would prefer your airplane pilot to be able to fly the plane rather than to be able to explain aerodynamics.
You might be getting confused here. If we created some neural net to learn how to fly airplanes, and it did so with excellent reliability and precision, we still would not have The Solution To Flying Airplanes, as in an algorithm or some differential equation that flies airplanes perfectly. Even with a very complete training set, and a network whose outputs are consistently correct, we can never know whether we have a case of under- or overfitting.
When you boil it all down, neural nets are just fancy statistical analysis tools. It's like the difference between the true expected value of some phenomenon and the sample expected value: no matter how large your sample size is, your result is always just an educated guess; it lacks any true scientific rigor.
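Case in point, you can watch the under/overfitting problem happen with nothing fancier than polynomial fits (Python/numpy; the sine-plus-noise data is my own toy example):

    import numpy as np

    rng = np.random.default_rng(1)
    x_train = np.linspace(0, 1, 12)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, x_train.size)
    x_test = np.linspace(0, 1, 100)
    y_test = np.sin(2 * np.pi * x_test)

    for degree in (1, 3, 11):
        coeffs = np.polyfit(x_train, y_train, degree)
        train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree:2d}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")

The degree-11 fit nails every training point and falls apart in between, and nothing in the training error warns you.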
So yeah, I hope the above is coherent and/or correct; I'm watching Simpsons as I type this, so my concentration is not currently 100% optimal.
Good to see a greater level of trust for AI. (Score:4)
Thus, on a fundamental level, machine learning cannot be trusted. The state of a neural network, for example, is a black box from which no real insight can be gained.
Imagine a neural network whose training set is the raw molecular structure data of different diseases (input) and their cures (output). The network trains on this data until the error reduces to zero. Then you feed in the raw molecular data of some new badass disease, and as output you have the structure of a working cure! The problem goes away, even though no one really knows how the solution was found, since a neural network contains no real data to understand -- it is a black box.
This is why scientists are so skeptical when it comes to AI techniques: it is essentially an affront to the scientific method. I personally believe that machine learning can be effectively used to improve our lives.
Sing along... (Score:1)
The religious implications here cannot be ignored... suddenly, I feel a craving for hamantaschen.
Not so hard. (Score:5)
10 avoid $SUN
20 avoid $PLANETS
30 become $SMARTER THAN HUMAN CREATORS COULD HAVE DREAMED
40 multiply
50 goto 10
Re:Good to see a greater level of trust for AI. (Score:2)
On the other hand, xmanix's problem seems to be that ML algorithms don't produce correct explanations. This is misguided. For example, I think you would prefer your airplane pilot to be able to fly the plane rather than to be able to explain aerodynamics. Of course, you might prefer both, but when you have neither, you probably want your ML algorithm to learn how to fly the plane and worry about explanations later, especially if you are on the plane.
Re:AI aboard (Score:1)
DS-1 information (Score:2)
Re:PowerPCs in Space (Score:2)
I have a feeling it's no longer the world's most powerful ion beam implanter... ISAC at TRIUMF can be run at 30keV, and the target can be floated anywhere from -29 to +60 keV, giving you a total range of 1 to 90 keV for implantation energy. We expect to be able to get (Li-8) ions to implant at around 60nm in gold at 30keV, which is a heck of a lot more dense than silicon. I haven't run any SRIM calculations with oxygen yet, but I'm guessing you could get way deeper than 60nm in silicon, even with oxygen.
As of three years ago, Motorola was a big purchaser of these wafers, and IBM was just getting into buying their own implanter.
SOI technology is now standard in at least some of the commercial PowerPC chips used by Apple (forget which ones), so they may just be using stock chips.
PowerPCs in Space (Score:4)
The PowerPC 750 is also known as the G3, BTW. PowerPCs in space? Not the first time [slashdot.org] we've heard about such a plan. Of course, SkyCorp is using Apple-built G4 systems, while 3CS is probably just using the G3 with some off-the-shelf embedded controller-style board.
Does anybody know about the vulnerability of PowerPC chips to radiation, or how "rad-hardening" works in chips like the modified 386s the military uses? Cosmic rays tend to be very high energy, so shielding is probably not practical.
From what I understand, there are two types of radiation-induced errors. Hard errors involve damage to the chip, and are very bad. Soft errors are a matter of erroneously flipped bits, and are only somewhat bad. Soft errors could be compensated for through good software design, error correction, etc., but I would think the only real defense against hard errors would be to make the wires on the chip so damn big that a few defects here and there wouldn't matter.
I happen to work at a particle accelerator with an isotope separator and accelerator [triumf.ca], so you'd think I'd know this stuff, but I've never really thought about this particular application before. Oh well, if nobody answers me, I can always try a SRIM [ibm.com] simulation at work tomorrow (don't have a Windows box at home to run it on), to see how much shielding you'd need to stop cosmic rays, and how many defects they'd cause in silicon. If anybody wants easy karma points, download the SRIM [ibm.com] software, run a simulation on a thick silicon layer with some high-energy alpha particles (helium nuclei at, say, 1 GeV), turn on damage calculations, and report the results back here, please. I'm sure somebody will mod you up.
Re:We all know at least one certain part will fail (Score:1)
I'll put my money on the AI doing a good job when they actually perfect filters back home.
Depends on what you're asking the filter to do. I can imagine, for instance, that NASA ends up with some number of corrupt images. Or perhaps it'll catch an image with a reflection from the sun in the frame, washing the rest out.
I would hope that NASA doesn't need a pr0n filter for its satellite cameras...
Re:Where's the source? (Score:2)
The SAD part is, the implicit contract between academia and industry has been whorified; once, academia did pure research while industry and the world reaped the profits. Now, industry still reaps the profits and academia whores itself.... Not a GPL-o-phile, but at times like this...
(sure, an exaggeration, but) Or???? (drunk, fuck yall)
Boss of nothin. Big deal.
Son, go get daddy's hard plastic eyes.
Re:Prelude to a space telescope? (Score:2)
IIRC (and it was a long time ago) to do this for visible wavelengths, you'd need to measure the distance between the satellites down to (making this up) 150nm. (For what it's worth, visible light is around 400-700nm, so a quarter-wavelength really is in that ballpark.)
Still, it would be pretty nifty. Not that you could ever resolve a licence plate, even if you had the resolution, without correcting for atmospheric effects, but cool all the same.
You bet I would. (Score:1)
Re:Not so hard. (Score:1)
mod this the fuck up.
Re:If It Cannot Learn, It's Not Intelligent (Score:1)
In my opinion, while this is without question a form of learning, it does not come close to what I have in mind. The system apparently has a pre-programmed evaluation function that it uses to gauge success or failure. A true intelligent system should be able to come up with its own evaluation functions to suit novel phenomena. It should also create its own goals as it learns. Of course, it should not have complete freedom in doing so because there is no telling what it will do. We want it to obey us. So there must be an initial conditioning that causes it to try to please its masters. Just one man's opinion.
This is some of the best AI on the planet, people.
I wish them the best and I do applaud them for their effort and ongoing success.
If It Cannot Learn, It's Not Intelligent (Score:4)
All software systems including the software that runs
I'm sorry, Dave, I can't do that! (Score:2)
...
...
AI aboard (Score:3)
Re:Where's the source? (Score:5)
It may be apocryphal, but I think Beowulf just barely missed being locked up by this kind of old-school thinking. The saving grace was that most of the original Beowulf work had been done by a sub-contractor and, unlike most of the rest of the industry, NASA had not required that the sub turn over IP rights to the client (NASA) in their contract. So once Beowulf got big, the NASA administration came around wanting to lock it up and give it to one vendor, but they were foiled by their own contract, and the contractors were able to free the source for the work they had done.
Thus leaving me free to say, "Imagine that AI running on a Beowulf cluster!"
FUD (Score:1)
sounds like NASA falling for it own hype (Score:1)
Yeah, sounds like NASA believing its own PR...
It does seem to be part of the silly hyperbolic PR that goes on; the danger is that people actually believe it. Checking http://www.spaceflightnow.com [spaceflightnow.com] today, I see the story about the malfunctioning Canadarm2. The article exclaims "...then station program managers could order the Shoulder Pitch joint be replaced with a spare in a dramatic spacewalk by shuttle astronauts...". Yawn. It's a spacewalk; it will be planned out weeks in advance, and it will be methodical and routine. It's not dramatic.
I think the media surrounding all the activity going on is causing more trouble than helping by playing all this drama-queen PR hype act. Seems to be pretty prevalent in US media, though, and increasingly across the rest of the world. Anyone care to comment on why this is so? Does it actually help NASA get extra funding?
Re:receding definition of A.I. (Score:2)
A machine that tries to convert you to its own religious cult? Neat!
I think you meant something like tele-conversation, though. That has been done. The problem is that this original version of the Turing Test actually turns out to be quite easy to pass. Some ELIZA-like programs have passed it, even though most people wouldn't consider them intelligent. (Sorry for the lack of references to back this up, but I'm studying AI and this is what my professor told me.)
Re:receding definition of A.I. (Score:2)
HAL (Score:1)
You forgot: (Score:2)
homicide(Dave);
(sorry, don't know BASIC : )
About time. (Score:1)
Ground vs. local (Score:2)
Re:PowerPCs in Space (Score:4)
I used to contract there. They make SOI (Silicon on Insulator) wafers, which allow chips manufactured with them to be both low power and radiation hardened. They run (or did at the time I worked there) the world's most powerful ion beam implanter, which is about 10 meters long and sends ionized oxygen into a chamber in which wafers spin through the ion beam at 200 RPM. The oxygen embeds itself in the silicon about 60nm beneath the surface and sticks there. After eight hours of this process, the wafers are removed and annealed in a giant oven, at which point the oxygen and silicon chemically merge to form SiO2 (silicon dioxide, also known as sand), which is a mighty fine insulator.
This whole process causes chips printed on these wafers to have a lower energy drain through the substrate. It also has the advantage that when hit with an ionized particle, the oppositely charged electrons which would normally rush up from the substrate are blocked by the insulator, so you have a greatly reduced charge entering your gate, which has a greatly reduced chance of flipping your bit.
As of three years ago, Motorola was a big purchaser of these wafers, and IBM was just getting into buying their own implanter. So those PPC chips may just be special spacefaring jobbies...
Re:Would you trust it with your own life? (Score:1)
Not an "autopilot" (Score:1)
...because it will be able to make decisions about the mission on its own - what pictures to transmit back to Earth and which ones to junk. Autopilots are used to keep on doing something - like flying straight and level. Okay, so some of the more advanced autopilots can land a plane, but if anything goes wrong they still have to fall back on human input. Try flying anything manually with a couple of minutes' worth of signal delay - it would be a real pain in the ass.
What they're doing here is (unless I'm horribly mistaken) a dramatic step beyond the typical abilities of an autopilot.
Re:PowerPCs in Space (Score:5)
As a member of the University of Colorado based hardware team for this flying rock, and having personally researched this matter, I can say that radiation should not be too much of a problem. This sat will be launched into a low orbit (lower than the ISS), leaving plenty of atmosphere to absorb most of the radiation and protecting the PPC on the industrial RPX lite SPC [embeddedplanet.com] from being damaged (of course, high-energy particles are still a problem). Additionally, the circuitry is protected by heavy aluminum shielding. Because of the short design life of 3CS, hard errors should not be too much of an issue. Of great concern are soft errors in the kernel memory space in the onboard flash memory - if the kernel is damaged, the sat is useless, as it will not even be able to be issued a new copy from the ground. Our team attempted to use Reed-Solomon encoding to correct bit-flips, but we had no driver for writing to flash during the boot sequence.
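For readers who haven't met error-correcting codes: Reed-Solomon is heavier machinery than fits in a comment, but the core idea - redundant parity bits that let you locate and repair a flipped bit - shows up in this little Hamming(7,4) sketch (Python, purely illustrative; not anything 3CS actually flies):

    def hamming74_encode(d):
        # d: 4 data bits -> 7-bit codeword with 3 parity bits.
        p1 = d[0] ^ d[1] ^ d[3]
        p2 = d[0] ^ d[2] ^ d[3]
        p3 = d[1] ^ d[2] ^ d[3]
        return [p1, p2, d[0], p3, d[1], d[2], d[3]]

    def hamming74_decode(c):
        # Recompute parity; the syndrome is the 1-based error position.
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        pos = s1 + 2 * s2 + 4 * s3
        if pos:                    # a single bit flip occurred
            c = list(c)
            c[pos - 1] ^= 1        # repair it
        return [c[2], c[4], c[5], c[6]]

    word = hamming74_encode([1, 0, 1, 1])
    word[4] ^= 1                   # cosmic ray flips a bit
    assert hamming74_decode(word) == [1, 0, 1, 1]

Any single flipped bit in the 7-bit word gets found and fixed; Reed-Solomon generalizes this to whole corrupted byte runs, which is why it suits flash.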
A good place to look for detailed information of this sat can be found here. [asu.edu]
---
May the sacred call of the dogcow guide you down the path toward nerdvana. MOOOOOF!
Someone tell Searle the A stands for "Artificial"! (Score:1)
This has been an ongoing fruitless debate for a very long time, indeed, as is likely to happen when people can't even define the label they are trying to associate with things. If you consider the words closely you will never see a reason to argue about this one again! While everyone else is arguing that something isn't AI because it isn't *ACTUALLY* intelligent, they kind of miss the whole point of the word Artificial there, don't you think?
Now, if you can actually tell me that a cat that stops chasing a ball to avoid getting hit by a car is "one smart cat", and a robot that does the same is "just a stupid machine", then that is because there is a good chance that they are each smarter than you are!
I don't know if John Searle is still alive, but if he is, will someone please slap him upside the head and explain this to him!
No need for AI (Score:1)
--
msm
I have the AI source code: (Score:3)
while (in_space)
{
    if (abs(speed_error) > threshold)
    {
        fire_thrusters(-speed_error);
        msg_to_nasa("This is AI here. I have just corrected the speed! Send a news release.");
    }
    else
    {
        msg_to_nasa("This is AI here. Everything is OK.");
        sleep(100);
    }
}
msg_to_nasa("This is AI here. I have landed!");
... Since when do you call feedback "artificial intelligence"? Gimme a break, NASA.
Re:PowerPCs in Space (Score:1)
We would have used it for Swift instead of the previous generation's RAD6000 if it had been available earlier. [sonoma.edu]
Re:AI aboard (Score:1)
-----
Re:If It Cannot Learn, It's Not Intelligent (Score:1)
Of course, if it meets up with some other alien civilisation of intelligent machine-robots who take pity on this dinky, merely semi-intelligent space probe... THEN we've got something!
So who else thinks that V'ger is the mother of the Borg???
but... (Score:2)
Re:I have the AI source code: (Score:1)
Sigh (Score:3)
Geek dating! [bunnyhop.com]
Where's the source? (Score:4)
a beautiful post. (Score:1)
Re:I have the AI source code: (Score:1)
while ($inspace)
{
    if (abs($speed_error) > $threshold)
    {
        &fire_thrusters(-$speed_error);
        print NASA "This is AI here. I have just corrected the speed! Send a news release.\n";
    }
    else
    {
        print NASA "This is AI here. Everything is OK.\n";
    }
}
print NASA "This is AI here. I have landed!\n";
Re:AI = Lemon fresh scent (Score:1)
Planning and scheduling are hard AI problems and way way more complex than IP routing or print spooling (I invite you to try writing serious planning/scheduling software if you doubt me.)
These problems are AI in the sense that (a) no known tractable engineering solution exists and (b) they are tasks that lie squarely in the province of (talented and capable) human beings.
- Ralph
Re:Sigh (Score:1)
Re:Good to see a greater level of trust for AI. (Score:2)
You could think of this as beauty - or as a real pain in the @$$ - depending on your job with the N.Net. Often the common denominator that nets get trained to recognize is some feature that the humans controlling the training have no idea is even there. This can be very frustrating and waste a lot of time and effort.
A story comes to mind of a US military project to train N.Nets to identify tanks from aerial images so as to make better smart-bombs. They fed them lots of pictures with tanks, and lots of pictures without tanks, and they performed beautifully - then took them out to try on the real thing and they all failed miserably. It turned out that the pictures of tanks were taken in the morning and the pictures without in the afternoon, and all the N.Nets had learned was to recognize different luminosity values. DOH !!!
But this illustrates nicely the inherent problems underlying N.Nets as a type of AI, and why there's no real intelligence going on. The conditions of training have to be perfectly tailored to the desired resulting test situations, otherwise the net won't work as desired. But in constraining the training, and controlling the dataset to rule out all factors other than the ones to train for, you essentially remove all the "intelligent" issues from the situation, and leave only mindless pattern matching for the machine. All the intelligent work has been done for the machine by its trainers - and none is left for the machine to do on its own.
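That tank story is easy to reproduce in miniature (Python with scikit-learn; the "photos" are synthetic arrays of my own invention, and plain logistic regression stands in for the net):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)

    def photos(n, tank, morning):
        # 8x8 "photos" flattened to 64 pixels: a tank adds a faint blob,
        # morning light brightens every pixel.
        x = rng.normal(0, 1, (n, 64))
        if tank:
            x[:, 27:29] += 1.0
        if morning:
            x += 2.0
        return x

    # Training set: every tank photo happens to be a morning photo.
    X_train = np.vstack([photos(200, tank=True, morning=True),
                         photos(200, tank=False, morning=False)])
    y_train = np.array([1] * 200 + [0] * 200)

    # Field test: lighting no longer correlates with tanks.
    X_test = np.vstack([photos(200, tank=True, morning=False),
                        photos(200, tank=False, morning=True)])
    y_test = np.array([1] * 200 + [0] * 200)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("training accuracy:", clf.score(X_train, y_train))  # looks great
    print("field accuracy:   ", clf.score(X_test, y_test))    # DOH !!!

The brightness shift is a far stronger signal than the faint tank blob, so the model learns the lighting and fails in the field, exactly like the story.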
Re:AI = Lemon fresh scent (Score:3)
Strong AI is the "holy grail" of AI research: a truly sentient machine. The purpose behind AI work, and cognitive science in general, is to create such a machine - not for the sake of doing it, but rather because we want to understand cognition in a mechanical manner. Nobody has ever achieved this goal, nor are there any theories of how to do it.
Weak AI is what we have in computing today. This basically just means a cool algorithm or system which performs particular narrowly defined tasks in some manner that gives similar results to those an intelligent person would arrive at. This is the kind of AI used in games, expert systems, neural nets, fuzzy logic machines, micro-worlds etc.
Personally I much prefer Neal Stephenson's terminology for this weaker category - which he called "Pseudo-Intelligence" or PI in "Diamond Age". To me this has always seemed a much apter description, and keeps the distinction between this and the theoretical AI.
But thats just a.... (Score:1)
-MR
We all know at least one certain part will fail (Score:2)
Well, that sounds an awful lot like a content filter. The fact that it's also one for pictures should make the task of censoring/filtering out images exponentially tougher. I'll put my money on the AI doing a good job when they actually perfect filters back home.
Not too impressive (Score:1)
Aren't all of the probes we use today controlled from the ground? Sounds pretty unadvanced to me; it is certainly not a second too late to invent a probe that is able to fly on its own.
However, it is not very strange that it has taken some time to "invent" an intelligent space probe, as it requires quite a bit of power to work. And as everybody knows, space is full of radiation, and chips don't adjust too well to radiation. You need a pretty powerful computer to calculate distances, analyse images, and control every instrument on board.
Anybody tried using the G3 in space yet? I seem to recall reading something about a plan to use a bunch of them, but I can't recall if anybody has done it yet.
Sorry about my bad English, but it is not my native language.
Wait a sec (Score:1)
--------------------------------
Re:Would you trust it with your own life? (Score:1)
You're wrong (Score:1)
Re:But thats just a.... (Score:1)
Re:But thats just a.... (Score:2)
No. It's a glorified pilot. As far as I know, an autopilot just keeps the plane on a predetermined course - essentially following an invisible track. These satellites have no track, no predetermined course. Instead, they have general objectives that they then fulfil by keeping a certain orbit and a certain configuration. For anyone who read Marvin Minsky's stuff about how useless robot building is, I would point out that things like this are the end products of all that robot building.
Re:Good to see a greater level of trust for AI. (Score:3)
Good post, but some things I disagree with.
The state of a neural network, for example, is a black box from which no real insight can be gained...
Not so. The weights of the nodes inside a neural network can be, and are, analyzed to determine the features of the input that the network uses to produce a result. I think this is known as cluster analysis - not my specialty.
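For example, here's the simplest version of that kind of inspection (a Python/scikit-learn sketch of my own; coefs_ is sklearn's attribute holding the layer weight matrices):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    X = rng.normal(0, 1, (1000, 5))   # five input features...
    y = 3.0 * X[:, 1]                 # ...but only feature 1 matters

    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                       random_state=0).fit(X, y)

    # Mean absolute first-layer weight per input feature:
    print(np.abs(net.coefs_[0]).mean(axis=1))   # feature 1 should dominate

It's crude next to real cluster analysis, but even this much tells you which inputs the net is actually looking at.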
Imagine a neural network whose training set is the raw molecular structure data of different diseases (input) and their cures (output). The network trains on this data until the error reduces to zero...
Cool idea, but not possible, at least by today's knowledge.
For one thing, neural nets do not produce output data of this complexity. A neural network has rigidly defined inputs and rigidly defined outputs. I can't see how you could constrain the possible structure of a molecule sufficiently for it to be used as the output of a neural net.
Secondly, a neural net will not work on any possible set of data you feed into it. There have to be common patterns in the data that the net can draw out. The beauty of neural nets is that the person feeding the data in need not know the exact patterns. If these patterns are not present (and in lots of cases, they are not) the neural net will not converge.
Scientists are right to not trust their jobs to neural nets, because they'll never have to - neural nets are suitable for a particular domain of problem. Like every individual machine learning and AI technique, it is not a magic bullet. Neither are expert systems (a completely different system). AI may dream of walking, talking, learning robots that can learn anything and solve any problem, but what AI is practically about is things like this - any computing problem that has to deal with uncertainty or a lack of definition. They will assist scientists, not replace them.
--Spam the secret service - include the words terrorism and sedition in every email!
Just think of the future! (Score:2)
Folks, we stand at the edge of a new frontier. Just think - this is the software which will eventually lead to the technology that makes Deep Space Nine [onestop.net] possible!
Invisible Agent
2001 (Score:1)
Open the pod bay doors HAL.
I can't do that Dave.
OPEN THE POD BAY DOORS HAL.
I can't do that Dave.
Wasn't there something earlier about how this hasn't happened? Will they launch the thing this year? All hail Kubrick!
Prelude to a space telescope? (Score:2)
"The goals of the 3CS mission include the demonstration of stereo imaging, formation flying and innovative command and data handling."
Looking forward to that huge array, like one giant CCD that can read a license plate on an alien planet.
Re:AI = Lemon fresh scent (Score:1)
These are all the same problem, if you consider them in their full glory; a really good print spooler that respects job priorities, printer statuses, patterns of use, etc., might wind up printing three lower-priority jobs over one high-priority job, or split a high-priority job between printers (and precede each piece with a cover sheet giving directions on how to reassemble the job). Or it could detect a paper-out condition and move the rest of the job to another printer. Or, in fact, make use of any sort of fancy planning/scheduling trick you care to think of. It's all the same problem.
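To make that concrete, here's the barest skeleton of such a spooler (Python; every name and status here is invented for illustration):

    import heapq

    def dispatch(jobs, printers):
        # jobs: (priority, job_id) pairs; printers: {name: status}.
        # Highest priority first (heapq is a min-heap, so negate).
        queue = [(-priority, job_id) for priority, job_id in jobs]
        heapq.heapify(queue)
        plan = []
        while queue:
            _, job_id = heapq.heappop(queue)
            # Respect printer status: skip anything down or out of paper.
            ready = [name for name, status in printers.items()
                     if status == "ready"]
            # Naive choice. A serious spooler would weigh load, split jobs,
            # add cover sheets, reroute on paper-out mid-job... and suddenly
            # you're doing planning and scheduling.
            plan.append((job_id, ready[0] if ready else None))
        return plan

    print(dispatch([(1, "memo"), (9, "payroll"), (5, "report")],
                   {"lw1": "ready", "lw2": "paper_out"}))

Every comment in that sketch marks a place where the "simple" spooler shades into the general scheduling problem.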
And yes, it is a hard (NP-complete [maths.org], runs into the frame problem [hps.elte.hu], etc.) problem. But that doesn't mean that any attack on it is AI! This is a classic (and too often revisited) logic error. You have a goal (AI). If you could obtain that goal, it would have some consequence (you could presumably write good scheduling software). You obtain the consequent, and announce that you have reached the goal!
To see that this is fallacious, consider: you have a goal (go to Scotland); it would have a consequence (you could try haggis [www.rnw.nl]). So then someone in your home town feeds you haggis, and you mistakenly announce that you have been to Scotland!
These problems are AI in the sense that (a) no known tractable engineering solution exists and (b) they are tasks that lie squarely in the province of (talented and capable) human beings.
This statement argues for my side, since they evidently have (a) found a tractable engineering solution and (b) it doesn't involve sending talented and capable human beings with each mission. Therefore it must not be AI.
--MarkusQ
P.S. (I invite you to try writing serious planning/scheduling software if you doubt me.)
I'm afraid I didn't wait for your invitation. Been there, done that. Many times.
AI = Lemon fresh scent (Score:5)
What makes this "AI"? Or to turn the question around, why aren't routers and print spoolers considered AI if this is? Artificial Intelligence is a big problem; its solution isn't hastened by using the term as a synonym for "we picked a better algorithm than you might have expected us to."
*grump*
-- MarkusQ
Re:You're wrong - You're right (Score:2)
A.I. doesn't really exist yet (Score:4)
Re:Would you trust it with your own life? (Score:1)
Re:Where's the source? (Score:1)
Re:receding definition of A.I. (Score:1)
After all these years...
The Real Reason for using AI! (Score:2)
Re:Where's the source? (Score:1)
A refreshing bit of REAL AI (Score:1)