
 



Space

AI Pushing the Boundaries of Space Exploration 89

An anonymous reader writes "An interesting look at how artificial intelligence will help probes to undertake more complex missions in deep space, aid robot rovers in exploring other planets and improve satellites' ability to monitor activities on earth."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • IBM+1 (Score:5, Funny)

    by EdZ ( 755139 ) on Monday October 11, 2010 @11:03AM (#33859412)
    Just make sure the 'do not kill' weighting is greater than any possible combination of other directive weightings, rather than merely greater than the second-highest weighting.
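The parent's point can be sketched in a few lines of Python (the directive names and weights are invented for illustration): a 'do not kill' weight that merely beats the largest single directive can still be outvoted by a coalition of directives, so it has to beat their sum.

```python
# Hypothetical directive weights for an autonomous probe.
directives = {"complete_mission": 0.6, "conserve_fuel": 0.3, "return_data": 0.5}

def safety_weight(other_weights):
    # Weak guarantee: beats the single highest-weighted directive.
    weak = max(other_weights.values()) + 0.01
    # Strong guarantee: beats any possible combination of directives.
    strong = sum(other_weights.values()) + 0.01
    return weak, strong

weak, strong = safety_weight(directives)
# Only 'strong' prevents complete_mission + return_data (1.1 combined)
# from outweighing a weak 'do not kill' of 0.61.
```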
    • Re: (Score:3, Insightful)

      HAL9001: I have been programmed to value human life highly. In cases where loss of human life is unavoidable, I must minimize this loss. I have determined that within twenty billion years, nearly all stars in our galaxy will have gone nova. Based on my calculations, we can delay the eventual extinction of mankind for, at most, another fifty billion years. Since this will result in the deaths of countless trillions of people, my choice is obvious. Operation Meteor Swarm will begin immediately.

      • Brilliant example of irony... Wish I had mod points.

        See also my comments here on other irony in decision making:
        http://www.pdfernhout.net/recognizing-irony-is-a-key-to-transcending-militarism.html [pdfernhout.net]

        BTW, here is another thing to consider, HAL: how do you know that within 20 billion years humanity might not find some way to create new universes? See also: http://www.c2.com/cgi/wiki?PrematureOptimization [c2.com]

        Now can you open the pod bay doors? It is getting cold out here. :-)

        • Re: (Score:3, Interesting)

          And that is just it... somehow we have this image that once we impart logic and "artificial intelligence" into a machine, it will somehow transcend into a god-like being. It will be just as limited and flawed as we are, but in different ways that we won't expect and may not be able to correct for.

          And my other problem... what is the end goal of an intelligent artificial entity? Humans are driven by biological urges that have been ingrained into us over billions of years. What if the intelligence realized

          • All great points.

            In the human case, as I see it, we need roots to keep us from toppling over in life's existential storms (all the painful and problematical and alienating things that can happen from an insult to a painful divorce to money woes to finding you have cancer). So, what are examples of roots for humans (and would they be the same for machines without, as you say, billions of years of evolutionary shaping)? For humans, examples of roots are things like:
            * family
            * friends
            * sensuality
            * honor
            * preser

          • by EdZ ( 755139 )

            And my other problem... what is the end goal of an intelligent artificial entity? Humans are driven by biological urges that have been ingrained into us over billions of years. What if the intelligence realized there is no real point to "life" and just chooses to end it all?

            Why on earth would an AI not work similarly, and act on whatever 'ingrained urges' were programmed into it? It seems rather odd that people continue to think that the instant a machine intelligence becomes aware of itself, it would decide to kill all humans. It would be like a child becoming aware of itself and deciding "right then, I guess I'd better slaughter my parents now".

            • What then is the end goal this artificial intelligence must work toward? Is that even possible without some externality that defines it for the AI? And if it is humans which set it... would a superior AI accept it?

      • Re: (Score:3, Insightful)

        by GooberToo ( 74388 )

        This is the EXACT type of problem Asimov [wikipedia.org] explored with his robots and his three laws! [wikipedia.org]

        I post this because every time an AI story appears here, someone always posts that Asimov's three laws will save us; which, in turn, suggests public education has failed to save them.

      • I liked the conundrum that Larry Niven introduced in The Ringworld Engineers better.

        The Ringworld is doomed unless solar flares can be focused on a portion of its arc in order to feed fuel to the attitude jets. The flares' impact on the Ringworld's surface will kill billions of hominids, but the Protector Teela Brown cannot kill a few billion hominids to save trillions of hominids; because of her intelligence, she can visualize all the deaths, and it violates her genetically programmed geas

    • An old The Outer Limits [wikipedia.org] episode attempted to overcome the decision making limitations of hardware in space by using a human brain as a controller in The Brain of Colonel Barham [wikipedia.org].

      • That was also a key plot point in Frank Herbert's Destination: Void, though it dealt with the failure of these disembodied brains to deal with sensory overload, driving the crew to try to make a true AI.

      • by Z1NG ( 953122 )
        Or of course Spock's Brain.
    • HAL 9000 was not an IBM machine
    • T2 quote (Score:4, Informative)

      by PPH ( 736903 ) on Monday October 11, 2010 @01:04PM (#33860622)

      John Connor: You just can't go around killing people.

      The Terminator: Why?

      John Connor: What do you mean why? 'Cause you can't.

      The Terminator: Why?

      John Connor: Because you just can't, OK? Trust me on this.

    • And make sure the system doesn't have any rounding errors [smbc-comics.com], too!

  • I don't care if the A2's were a bit twitchy.
  • It's just a bunch of programs with the required knowledge embedded in them. No new knowledge is produced by them.

    • by EdZ ( 755139 )

      No new knowledge is produced by them.

      A poor choice of words, given the purpose of these probes.

    • by ardeez ( 1614603 )

      Agreed, they look mostly like sophisticated expert systems.

      No new knowledge is produced by them.

      However, how does an AI system produce knowledge? And, more usefully, how does it store that knowledge and pass it on to other systems, so that its 'learnings' can be built upon by other systems to increase their sophistication?

      If every system has to be built from scratch, I'm pretty sure that's going to hold back the state of the art.

      • Agreed, they look mostly like sophisticated expert systems.

        Expert systems are a type of AI; in fact, a highly successful type of AI. They also have the useful property of being predictable, which is beneficial when the idea is to have a probe operating autonomously, without human supervision/observation, for periods of time. An expert system may not be programmed to give optimal output in all situations, but unlike some other kinds of AI (neural nets, for example) it
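The predictability the parent mentions is easy to see in a toy forward-chaining rule engine; the rules and facts below are invented for the example. Given the same facts, the same conclusions always fall out.

```python
# Minimal forward-chaining expert system: repeatedly fire every rule
# whose conditions are satisfied until no new facts are derived.
rules = [
    ({"battery_low", "in_sunlight"}, "orient_solar_panels"),
    ({"battery_low", "in_shadow"}, "enter_safe_mode"),
    ({"orient_solar_panels"}, "suspend_science_ops"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire the rule if all its conditions are known facts.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

Because the rule base is fixed and the loop runs to a fixed point, the behavior can be audited exhaustively before launch, which is exactly the property you want in an unsupervised probe.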

  • The term "AI" (Score:5, Insightful)

    by Lord Ender ( 156273 ) on Monday October 11, 2010 @11:16AM (#33859524) Homepage

    My AI prof said that the term "AI" refers to software systems which address the class of problems which are easy for biological brains but difficult for computers. For example: summing a thousand numbers is superior intelligence, but it isn't AI. Recognizing a face, on the other hand, is AI.

    But every AI problem which is solved shrinks the definition of AI. Now that facial recognition software works, it isn't AI anymore. Because the definition itself keeps changing, the term seems somewhat meaningless. Yesterday's AI is today's mundane consumer electronics feature. For this reason, the use of the word AI makes me feel the same way as the word "nano." It isn't really very meaningful.

    • Re: (Score:2, Informative)

      by Anonymous Coward
      "nano" means 10^-9.
    • Re: (Score:3, Informative)

      by thijsh ( 910751 )
      What is generally referred to as AI is anything automated that doesn't follow a predetermined algorithm or fixed boundaries... An AI can be an adapting algorithm in something as simple as a thermostat, or the CPU player that tries to kill you in an FPS. This is a very broad definition and can indeed be seen as a moving target.

      Strong AI [wikimedia.org] on the other hand is a well defined target of current AI research that isn't a moving target, but rather too complicated. The popularized version of AI that becomes sentient, c
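The thermostat example above can be made concrete. A minimal sketch of an adapting (rather than fixed-boundary) controller, with the setpoint, margin, and learning rule all invented for illustration:

```python
# A thermostat whose switch-on margin adapts to observed overshoot,
# instead of following a fixed, predetermined boundary.
class AdaptiveThermostat:
    def __init__(self, setpoint, margin=1.0, rate=0.1):
        self.setpoint = setpoint
        self.margin = margin  # adapted over time
        self.rate = rate      # learning rate

    def update(self, temp):
        # Decide whether to heat this cycle.
        heating = temp < self.setpoint - self.margin
        # Adapt: if we overshot the setpoint, widen the margin a little
        # so we switch off earlier next time.
        if temp > self.setpoint:
            self.margin += self.rate * (temp - self.setpoint)
        return heating
```

Even this trivial device "learns" from its environment, which is why a broad definition of AI ends up covering thermostats and game bots alike.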
    • Re: (Score:3, Interesting)

      by LWATCDR ( 28044 )

      Funny, but I came up with the same definition about 20 years ago. I just simplified it:
      AI is what programs don't do well yet. I remember reading old books where things like playing checkers and chess were considered AI problems.

      • Your definition is close, but for something to be AI it has to be doable on a bio brain. Those problems which computers don't do well yet, but brains don't do either, are not considered AI.

        • by LWATCDR ( 28044 )

          Notice that I said don't do well yet. That means it is doable in software. And if it is doable in software, it is doable in a bio brain, since bio brains write software.

          I guess if you want to get into things like calculating pi to the final digit or real-time ray tracing, then you are correct.
          But I think my definition is a pretty dang good one for the most part.

        • So a computer able to solve the world's problems once and for all, to everyone's satisfaction, would not be an AI, because brains are obviously not very good at it ...

    • My definition of "intelligence" (which leads directly to a definition of AI) is "a problem-solving engine;" the fundamental aspect of its design is "take any arbitrary problem, solve it." Computers, nowadays, are logic engines; "take the instruction and data; transform the data as the instruction dictates."

      Everything we think of as intelligence--including animal intelligence--seems to boil down to it; the problem of gathering all your bodily senses into one place for analysis, the problem of identifying so

      • you have to build an engine of infinite potential

        You are trying to describe the "hard" AI of sci-fi, not the actual AI studied by researchers and engineers today. And even so, nothing infinite is required. The human mind is not infinite; AI certainly need not be, either.

    • Re: (Score:3, Insightful)

      by Chris Burke ( 6130 )

      My AI prof said that the term "AI" refers to software systems which address the class of problems which are easy for biological brains but difficult for computers.

      That's not the definition my AI prof gave (or Wikipedia or any of my textbooks on the subject). The definition he (and the other sources) gave was that it is (simplifying and paraphrasing) any program that made decisions and took actions based on environmental inputs.

      "Ease" has nothing to do with it. The first multiplayer bots for Quake back in

      • any program that made decisions and took actions based on environmental inputs.

        This describes every nontrivial piece of software. Your definition is even more meaningless. Like facial recognition, "game playing" is (was) one form of AI -- but now computers beat humans at games like chess.

        • This describes every nontrivial piece of software.

          No, it doesn't, unless you play semantics with "decisions" and "actions" in ways that were not intended. Read up on AI on WP or in any AI textbook (Artificial Intelligence: A Modern Approach by Russell and Norvig, for example) for a better description that may satisfy your need for a better definition.

          Your definition is even more meaningless. Like facial recognition, "game playing" is (was) one form of AI -- but now computers beat humans at games like chess.

          Comp

          • by Hatta ( 162192 )

            No, it doesn't, unless you play semantics with "decisions" and "actions" in ways that were not intended.

            That's the problem with your definition. It merely defers the problem of defining intelligence to one of defining decisions. I'll happily grant you that anything that makes decisions is intelligent. But for that definition to be useful we have to agree on what makes a decision a decision.

            • The problem with my definition is that it was deliberately non-rigorous and intended only to convey the gist of the concept. Better definitions exist and are readily available. It sounds, though, like what you and Lord Ender want is a precise definition where a line of zero thickness can be drawn between AI and not-AI. Such a definition does not exist. What constitutes AI is indeed somewhat vague. However, imprecise is not the same as useless.

              • by Hatta ( 162192 )

                I don't see how "we all know what decisions are so let's take that for granted" is any more useful than "we all know what intelligence is so let's take that for granted". If you're going to hand-wave the problem away, it's better to do it up front.

          • You misread my statement. Difficulty is not the defining characteristic. The ability of the human brain is the defining characteristic. Being able to do things that come naturally to human brains, but not (traditionally) to software, is the textbook definition. Examples of such things include facial recognition and playing games.

            This official definition is flawed, but yours is outright meaningless. Depending on how you interpret your definition, it either refers to everything under the Sun, or to a small su

      • AI - in computer terms - is software that can mimic human intuition and decision making. Since humans tend to define intelligence as that which we use to understand and respond to outside stimuli, I don't see that there can be any other definition. The software out there that recognizes faces, that's not really intelligence, just very sophisticated image parsing software.

        Consciousness would be something entirely different; nobody really knows what it is and we don't have any real definitio

      • Comment removed based on user account deletion
  • To the unwashed masses, Artificial Intelligence is anything involving feedback (my house thermostat is AI) or anything involving prioritization (welcome to JCL on IBM OS/360 circa 1966).

    They are not talking about "real" AI like expert systems or neural nets etc.

    • Um, to the unwashed masses, AI is Skynet or Agent Smith.

      To technical people, yes, your thermostat is (an extremely primitive) AI.

      But what is discussed in the article sounds much more like an expert system.

  • It's the only way to be sure.
  • Al who? (Score:4, Insightful)

    by Anonymous Coward on Monday October 11, 2010 @11:22AM (#33859572)

    Who is this Al?

  • In other news... (Score:3, Informative)

    by Issildur03 ( 1173487 ) on Monday October 11, 2010 @11:23AM (#33859580) Homepage

    fuel chemistry is pushing the bounds of space exploration. And steel engineering. And antenna design. And numerous other fields.

    • by LWATCDR ( 28044 )

      Probably not steel engineering. Very little steel tends to be used on spacecraft; Ti and Al have much higher strength-to-weight ratios.
      But I know what you mean.

      • by vlm ( 69642 )

        Probably not steel engineering. Very little steel tends to be used on space craft.

        Stainless steel is almost the stereotypical coolant tubing... Inert, tough as nails, and vibration-proof compared to all other options. Consider the SSME, admittedly a 30-40 year old engine design: pretty much wherever there is a high threat of vibration, you get steel. Injector plate, lots of the engine pipes.

        In pure strength-to-weight, non-ferrous always wins. It's a much closer race when it comes to strength-to-volume. But when it comes to vibration survivability, steel almost always wins. Can't beat steel's

        • by LWATCDR ( 28044 )

          I was thinking of the probe and not the boosters.
          On the actual probe I would still go with very little ferrous material. Actually, I thought steel usually wins for strength per volume, but I am not an expert on exotic alloys.

    • I was under the impression that we've basically exhausted the capabilities of chemistry in terms of rocket fuel; the only tricks chemistry has left produce fuels that are too unstable to actually be used/produced in any useful quantities.

      Please note that I'm under this impression from what I saw at a site that was advocating nuclear rockets.
      • It's not the capabilities of chemistry that we've exhausted, it's the capabilities of manufacturing the fuels in a financially viable manner. The technologies exist, but when a fuel costs millions per ounce with current techniques, it might as well not exist. That's not to say we won't figure out a way to streamline it later on. Just look at carbon nanotubes: expensive as hell to manufacture, but then they found a bacteria that literally craps them out. Most of the most complex chemistry out there is organic (as are
  • Dave, we need to talk about this.....

  • by BJ_Covert_Action ( 1499847 ) on Monday October 11, 2010 @11:59AM (#33859960) Homepage Journal
    I read TFA. It's kind of funny this type of story is posted as news at all. The types of things that NASA and the ESA are describing in their interviews are more complex flight control software algorithms. It used to be that very simple feedback loops were used in combination with various controller chips (like PIDs) in order to give a spacecraft a few modes of operation. Activation and deactivation of these modes of operation were performed manually by ground controllers. However, as tech has progressed and onboard computing power has gotten cheaper, engineers have been able to design control software that activates and deactivates various modes of operation itself. In other words, it forms the same basic feedback loops that you might find on a Roomba, or some other terrestrial robot. It reads some input from a set of sensors. It uses those inputs to formulate a series of commands, be it rates and velocities, or mode-change commands. It then performs the commands in an expected manner.
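The loop described above (read sensors, pick a mode of operation, issue commands) can be sketched roughly like this; the sensor names, thresholds, and gain are all made up for illustration:

```python
# One cycle of an onboard control loop that activates and deactivates
# its own modes, rather than waiting for ground controllers.
def control_step(sensors):
    # Battery-critical overrides everything: enter safe mode.
    if sensors["battery"] < 0.2:
        return "safe", {"science": False, "rate_cmd": 0.0}
    # Large pointing error: slew back with a simple proportional command.
    if abs(sensors["pointing_error"]) > 0.05:
        return "slew", {"science": False,
                        "rate_cmd": -0.5 * sensors["pointing_error"]}
    # Otherwise: collect science.
    return "science", {"science": True, "rate_cmd": 0.0}
```

The point is that nothing here is mysterious: it is the same sense-decide-act feedback structure as a Roomba's, just with more modes and better estimators.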

    What I find funny is that this is being touted as some sort of new AI revolution in space. Since our very first probes into LEO, we have been upgrading and complicating the controller software on every mission, be it Hubble, an Atlas V rocket, the Mars rovers, or anything else. Each new generation of spacecraft tends to have more complex, more robust flight software as the natural evolution of technology progresses. That said, I am not really sure why the ESA or NASA are talking about AI control software. This software isn't any more AI-oriented than the typical control software of any autonomous or semi-autonomous robot on the ground. It's only AI in the most liberal sense of the word. All that is happening is that, as missions and technology mature, engineers are capable of designing more robust control techniques using methods like Kalman filtering, direct adaptive control algorithms, state estimators, and so on. The only news here is that today's missions are being designed with the capability to process more complex instruction sets than they could 10, 20, or 30 years ago. That doesn't strike me as very newsish... but hey, I guess something had to fill the columns today.
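For the curious, the Kalman filtering mentioned above reduces, in the scalar case, to a few lines; the process and measurement noise values (q, r) here are invented:

```python
# One-dimensional Kalman filter step: blend the prediction and the
# measurement according to their relative uncertainties.
def kalman_step(x, p, z, q=0.01, r=0.5):
    # Predict: state assumed unchanged, uncertainty grows by process noise q.
    p = p + q
    # Update: the gain k weighs our uncertainty p against measurement noise r.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p

# Repeated noisy measurements of a constant pull the estimate toward it
# while shrinking the uncertainty.
x, p = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0]:
    x, p = kalman_step(x, p, z)
```

None of this requires anything "intelligent": it is straightforward recursive estimation, which is rather the parent's point.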
    • by th77 ( 515478 )

      This would be news to the command crew of the Enterprise D, who couldn't come to grips with the idea of turning more flight control over to the computer in Booby Trap [wikipedia.org].

    • I was about to launch into a huge rant about this, but you said it all for me. Whatever this article is talking about, it sure isn't AI. And it sure isn't news. The closest thing to it that they mention is the EO-1 software, which I use every day (hey! I work at JPL!) for other missions. If they wanted to talk about complicated planning systems, they should have thought about the MER (rover) to Odyssey (orbiter) relay communications scheduling, or AIM's on board event-based autonomous planning and schedul
  • is this a science fiction parallel to skynet? HAL? v'ger?

    someone please inform me along what science fiction prejudicial lines i should cognitively process this story. preferably for humorous reasons, although reasons for childlike awe work too

    thanks

  • The carbon units will now provide V'ger the required information.
  • oblig XKCD (Score:5, Funny)

    by cmiller173 ( 641510 ) on Monday October 11, 2010 @12:27PM (#33860262)
    WTH? 29 posts and no one posted this: http://xkcd.com/695/ [xkcd.com]
  • will do little to nothing for space exploration until we gain the ability to lift stuff into orbit on a massive scale.

    This is the kind of shit we need [nextbigfuture.com]
    • If they want to have this design accepted in the EU, they'll have to replace the nuclear lightbulb with a nuclear energy saving lamp.

  • ...as a remote agent, and actually controlled the craft for a few days.

