A Peek Inside DARPA's Current Projects
dthomas731 writes to tell us that Computerworld has a brief article on some of DARPA's current projects. From the article: "Later in the program, Holland says, PAL will be able to 'automatically watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon.' At that point, perhaps DARPA's PAL could be renamed HAL, for Hearing Assistant That Learns. The original HAL, in the film 2001: A Space Odyssey, tells the astronauts how it knows they're plotting to disconnect it: 'Dave, although you took thorough precautions in the pod against my hearing you, I could see your lips move.'"
Not "Strong" AI (Score:5, Interesting)
The difference, as Searle would say, between Strong (humanlike) AI and Weak (software-widget-like) AI is a difference of type, not scale.
Come on you Tin-foil Hat wearers... (Score:3, Interesting)
Anyone care to guess what they plan to use that little gadget for?
Re:The Real Issue (Score:2, Interesting)
[HAL] The only mistakes a computer makes is due to human error [/HAL]
I don't think computers are capable of making mistakes, because they are incapable of thinking; they can process and store, but this does not entail thought. Define thought for me.
Thought -- 1. to have a conscious mind, to some extent of reasoning, remembering experiences, making rational decisions, etc.
2. to employ one's mind rationally and objectively in evaluating or dealing with a given situation
I guess what we're looking for in thought is self-awareness. My computer science teacher once told me that a computer is basically on the same level of intelligence as a cockroach: it evaluates in positives and negatives, 1s and 0s.
Please feel free to blow me out of the water here =)
I'm Scared dave (Score:2, Interesting)
Not what HAL stood for (Score:2, Interesting)
[P]erhaps DARPA's PAL could be renamed HAL, for Hearing Assistant That Learns.
Perhaps, but that's not what the original HAL stood for. HAL was short for Heuristically programmed ALgorithmic computer. Arthur C. Clarke had to put that into one of his books in the series (2010, IIRC) because lots of people thought he had derived it by doing a rot25 on IBM (shifting each letter back by one).
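The coincidence Clarke was denying is easy to check: rot25 just shifts each uppercase letter back by one place in the alphabet, which maps IBM onto HAL (and rot1 maps it back). A minimal sketch, with an illustrative `rot` helper:

```python
def rot(s, n):
    """Caesar-shift uppercase letters by n positions, wrapping A..Z."""
    return "".join(chr((ord(c) - ord("A") + n) % 26 + ord("A")) for c in s)

print(rot("IBM", 25))  # -> HAL
print(rot("HAL", 1))   # -> IBM
```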
Re:Come on you Tin-foil Hat wearers... (Score:2, Interesting)
Knowledge (Score:3, Interesting)
Dennett calls us self-created selves. Any AI more than superficially like a human would be the same.
vaporware and PR (Score:4, Interesting)
Later in the program, Holland says, PAL will be able to "automatically watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon."
PAL's role here is not clear. The 'easier' task would be to monitor the body language of the two speakers and, by lining up a list of tasks with observations of their head movements, correctly predict which points in the conversation were the ones where someone made an "agreement" gesture.
The much, much more difficult task would be to actually read lips. There are only certain properties of phonemes you can deduce from how the lips and jaw move; many, many other features of speech are lost. Only when you supply the machine with a limited set of words in a limited topic domain do you get good performance; otherwise, you're grasping at straws. And then taking out most of the speech signal? Please.
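The underdetermination described above is easy to illustrate: phonemes that differ only in voicing or nasality (like /p/, /b/, /m/) look identical on the lips, so distinct words collapse onto the same visual sequence. A toy sketch under that assumption (the viseme table and helper are illustrative, not a real phonetic inventory):

```python
# Toy viseme table: many distinct phonemes share one visual appearance,
# which is why pure lip-reading is underdetermined without context.
VISEMES = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "t": "alveolar", "d": "alveolar", "n": "alveolar",
    "s": "alveolar", "z": "alveolar",
    "k": "velar", "g": "velar",
}

def visually_confusable(word_a, word_b):
    """True if two (toy) phoneme strings map to the same viseme sequence."""
    to_visemes = lambda w: [VISEMES.get(p, p) for p in w]
    return to_visemes(word_a) == to_visemes(word_b)

print(visually_confusable("pat", "bat"))  # -> True: /p/ and /b/ look alike
print(visually_confusable("pat", "sat"))  # -> False: bilabial vs. alveolar
```

A real lip reader faces this ambiguity at every segment, which is why performance only holds up when the vocabulary and topic domain are restricted.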
But no, DARPA is cool and will save all the translators in Iraq (by 2009, well before the war ends). PR and vaporware win the day!
Allow me to be skeptical (Score:2, Interesting)
Sure. You will first have to solve the problem of understanding natural language in an open-ended environment - something that computers are not very good at yet.
Quite frankly, AI people have been promising this kind of stuff for some 40 years now, and they have so far been unable to deliver. When is PAL going to be able to do what Holland aspires to? Not any time soon - most likely not within the next 20 years.
AI people, please stop announcing such pie in the sky projects.
Clarification (Score:3, Interesting)
It's like we build ever more elaborate visual perception analogues, but they backend into databases that only ask for enumerations of objects discriminated. I don't care how competent the visual system is, it's never going to achieve sentience because it's not part of a whole agent that travels around (in some sense), processes the answers it's getting from the visual system in a multimodal way related to the agent's goals, edits those goals based on new information and so on. It can't just see, it has to look, and it can't just look because someone typed in a domain name, it has to look for a reason and the reason has to be a reason in the sense of being the result of a decision or discrimination, not just an action with a physical cause.
It would seem that the easiest way to allow for all that is to build something that really moves around in the real world. In short, building a robot with all the appropriate competencies might be really hard, but it's still the most tractable way to achieve Strong AI.
Re:The Real Issue (Score:2, Interesting)