A Peek Inside DARPA's Current Projects
dthomas731 writes to tell us that Computerworld has a brief article on some of DARPA's current projects. From the article: "Later in the program, Holland says, PAL will be able to 'automatically watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon.' At that point, perhaps DARPA's PAL could be renamed HAL, for Hearing Assistant That Learns. The original HAL, in the film 2001: A Space Odyssey, tells the astronauts how it knows they're plotting to disconnect it: 'Dave, although you took thorough precautions in the pod against my hearing you, I could see your lips move.'"
The Real Issue (Score:3, Insightful)
Re:Paranoid (Score:4, Insightful)
HAL was programmed to eliminate any possible failure points in the mission that he could. Throughout the spaceflight, HAL observed that the humans on the mission were fallible (one of them made a suboptimal chess move, and a handful of other mistakes were made). HAL had the ability to complete the mission on its own. Therefore, HAL made the decision, in line with its programming, to eliminate the human element.
It makes sense, really, when you think about it. And truly, if Dave had just gone along with it and died, HAL would have finished the job perfectly fine.
Re:The Real Issue (Score:3, Insightful)
Re:The Real Issue (Score:3, Insightful)
Re:The Real Issue (Score:2, Insightful)
Re:The Real Issue (Score:4, Insightful)
A machine is mechanical and is incapable of mistakes, because it can't set expectations.
From your quote, "Mistakes are when you have some kind of expectation and those expectations fail to be met.", machines aren't capable of setting expectations; they can only follow a basic 'to do' list.
If a machine adds 1+1 and returns 3 to the register, then it didn't fail; it added 1+1 in the way it knows how to.
AI today is nothing more than a bunch of IF..THEN possibilities run on fast processors to make it seem instantaneous and 'alive'.
You can program a machine to be aware of its power (voltage), and you can have a program set to monitor that power with cameras and laser beams and whatever else, with commands to shoot laser beams at any device that comes near that power, but the machine still isn't aware of what power is.
Not to get philosophical here, but IMO, AI won't ever be real until a machine has a belief system, and part of that belief system relies upon its own ability to get energy, just like animals do.
It's possible that a program can be written so that a machine is aware of its own requirements, but then we're back to a bunch of IF..THEN statements written by programmers.
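To make the "bunch of IF..THEN statements" point concrete, here's a toy sketch of the kind of system the parent describes: a machine that "guards its power" only by matching events against rules a programmer wrote in advance. All the event names and actions here are made up for illustration; this isn't any real system.

```python
# A toy rule-based "AI": every behavior is a pre-written IF..THEN branch.
# The machine never understands what power is; it only executes whichever
# reaction its programmers anticipated.

RULES = [
    ("object_near_power", "fire laser"),
    ("voltage_low", "request recharge"),
]

def react(event: str) -> str:
    """Walk the IF..THEN list and return the first matching action."""
    for condition, action in RULES:
        if event == condition:
            return action
    return "idle"  # no rule matched: the machine has nothing to say
```

However fast the processor running it, `react("object_near_power")` just looks up a canned response; add a thousand more rules and it looks more "alive" without being any more aware.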
Agreed (Score:3, Insightful)
Also, I have no trouble recognizing that cleverly-designed "Weak" AI is nonetheless quite strong enough in more conventional senses to be a monumental aid to human problem solving, in the same manner and to the same degree as an ICBM is a great aid to human offensive capabilities.
Re:The Real Issue (Score:4, Insightful)
Then they can't make mistakes, but can make errors. What do you call it when a brownout causes a computer to flip a bit and give an incorrect answer to a math problem? How about when it is programmed incorrectly so that it gives 2+2=5? How about when a user intends to add 2 and 3, but types in 2+2? In all cases, the computer outputs a wrong answer. Computers can be wrong and are wrong all the time. The wrong answers are errors. A "mistake" isn't an assigning of blame. I agree that computers can be blameless, but that does not mean that mistakes involving computers can't be made. I think your definition of "mistake" is not the most common, and would be the point of contention in this discussion, not whether computers have ever outputted an erroneous answer.
As I interpreted the scene... (Score:3, Insightful)
As I interpreted the scene: Though the audio pickups were off, HAL had a clear view. So he zoomed in, panned back and forth between the speakers, and got a clear shot of each face - lips, eyes, eyebrows, and other facial markings - as each spoke.
Which tells me he was lip-reading. (Also face-reading.) He knew every word they said and had the bulk of the visual side-channel emphasis as well.
If all he needed to know was that they WERE having a conversation, he could have gotten that from his view through the window, without the camera gyrations.
We, as the audience, got an alternation of the omniscient viewpoint - watching and hearing the conversation - with HAL's viewpoint - silence (except for camera pan and zoom actuators) and alternating closeups of the two talking heads. Thus we could both follow what was said and observe that HAL was doing so as well - but visually - and was putting a lot of processing power and camera actuator activity into subverting the humans' attempt to keep him from knowing what was said.
Re:Paranoid (Score:2, Insightful)
The book and the movie are two different animals. The movie made no mention of the "HAL was following orders" subplot. Short of saying it outright, the movie makes it pretty clear that HAL screwed up and his pride demanded that he eliminate the witnesses. Which, if you ask me, makes a more interesting story.
After reading all the books, I came to the conclusion that Clarke would have been better served by sticking to the screenplay.