
A Peek Inside DARPA's Current Projects

dthomas731 writes to tell us that Computerworld has a brief article on some of DARPA's current projects. From the article: "Later in the program, Holland says, PAL will be able to 'automatically watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon.' At that point, perhaps DARPA's PAL could be renamed HAL, for Hearing Assistant That Learns. The original HAL, in the film 2001: A Space Odyssey, tells the astronauts how it knows they're plotting to disconnect it: 'Dave, although you took thorough precautions in the pod against my hearing you, I could see your lips move.'"
  • The Real Issue (Score:3, Insightful)

    by Atlantic Wall ( 847508 ) on Monday January 22, 2007 @02:50PM (#17713536)
    The real issue is some idiot programming a computer to defend itself against someone just trying to turn it off. A computer should be programmed to know it can make mistakes.
  • Re:Paranoid (Score:4, Insightful)

    by Gorm the DBA ( 581373 ) on Monday January 22, 2007 @03:00PM (#17713724) Journal
    Actually, it was over long before that. HAL was just following his programming, perfectly.

    HAL was programmed to eliminate any possible failure points in the mission that he could. Over the course of the spaceflight, HAL observed that the humans on the mission were fallible (one of them made a suboptimal chess move, and a handful of other mistakes were made). HAL had the ability to complete the mission on his own. Therefore, HAL made the decision, in line with his programming, to eliminate the human element.

    It makes sense, really, when you think about it. And truly, if Dave had just gone along with it and died, HAL would have finished the job perfectly fine.

  • Re:The Real Issue (Score:3, Insightful)

    by exp(pi*sqrt(163)) ( 613870 ) on Monday January 22, 2007 @03:03PM (#17713774) Journal
    Computers shouldn't be programmed to know they can make mistakes. They should observe themselves making mistakes and learn that they can. Sheesh. Next you'll be suggesting that children should be programmed from birth to believe whatever it's convenient for their parents to have them believe...
  • Re:The Real Issue (Score:3, Insightful)

    by exp(pi*sqrt(163)) ( 613870 ) on Monday January 22, 2007 @03:09PM (#17713852) Journal
    What does conscious thought have to do with mistakes? Mistakes are when you have some kind of expectation and that expectation fails to be met. If I buy some accounting software and the figures don't add up correctly, then the software has made a mistake (which in turn was probably caused by a mistake made by a developer). If you make the definition of 'mistake' hinge on concepts like 'conscious mind' and other tricky philosophical ideas, it's a wonder you can get anything done in your day. (A toy expectation-check example follows this comment.)

    Quoting the parent: "My computer science teacher once said to me that a computer is basically on the same level of intelligence as a cockroach. It evaluates in positives and negatives, 1s and 0s."

    You have digital cockroaches in your part of the world?
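    A minimal sketch of the "unmet expectation" definition above (the ledger numbers are made up): the books are expected to balance, and a mistake is flagged purely by comparing expectation against outcome, with no appeal to a conscious mind anywhere:

        # Toy ledger: entries should sum to zero if the books balance.
        # Numbers are invented; the point is only that "mistake" here
        # means an expectation (balance == 0) that failed to hold.
        entries = [120.00, -45.50, -74.00]
        expected_balance = 0.00
        actual_balance = sum(entries)

        if abs(actual_balance - expected_balance) > 1e-9:
            print(f"mistake: expected {expected_balance}, got {actual_balance}")
        else:
            print("books balance; no mistake observed")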
  • Re:The Real Issue (Score:2, Insightful)

    by Eagleartoo ( 849045 ) <{moc.liamtoh} {ta} {renrut_nella}> on Monday January 22, 2007 @03:23PM (#17714010) Journal
    /. is probably the reason I DON'T get anything done during the day =). I was somewhat responding to "teaching computers that they make mistakes." But if you were going to do that, they would have to evaluate whether what they do is a mistake or not, and they would begin to evaluate their purpose. If they are instructed that they make mistakes ("You are a computer; you make mistakes," the man told the computer) and they "learn" that they make mistakes, would it not follow that they start making mistakes intentionally? I mean, they've been told they make mistakes, so in essence they are fulfilling their purpose, which gets me into the whole original sin argument ---- THIS IS WHY I NEVER GET ANYTHING DONE!!!!!
  • Re:The Real Issue (Score:4, Insightful)

    by Ucklak ( 755284 ) on Monday January 22, 2007 @04:22PM (#17714776)
    Conscious thought has everything to do with mistakes.

    A machine is mechanical and is incapable of mistakes, as it can't set expectations.

    From your quote, "Mistakes are when you have some kind of expectation and those expectations fail to be met": machines aren't capable of setting expectations, only of following a basic 'to do' list.

    If a machine adds 1+1 and returns 3 to the register, then it didn't fail; it added 1+1 in the way it knows how to.

    AI today is nothing more than a bunch of IF..THEN possibilities run on fast processors to make it seem instantaneous and 'alive'. (A sketch of that kind of rule chaining follows this comment.)

    You can program a machine to be aware of its power (voltage), and you can set a program to monitor that power with cameras and laser beams and whatever else, with commands to shoot laser beams at any device that comes near that power, but the machine still isn't aware of what power is.

    Not to get philosophical here, but IMO, AI won't ever be real until a machine has a belief system, and part of that belief system relies upon its own ability to get energy, just like animals do.

    It's possible that a program can be written so that a machine is aware of its own requirements, but then we're back to a bunch of IF..THEN statements written by programmers.
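    To make the "bunch of IF..THEN possibilities" point concrete, here is a minimal sketch of that pattern. The power-monitoring scenario and every rule in it are invented for illustration; note that nothing in the code models what power is, it only maps conditions to actions:

        # A toy forward-chaining rule list: condition -> action pairs.
        # This is the IF..THEN pattern the comment describes, nothing more.
        RULES = [
            (lambda s: s["voltage"] < 11.0, "switch to backup battery"),
            (lambda s: s["intruder_near_supply"], "fire laser at intruder"),
            (lambda s: s["battery_pct"] < 20, "request recharge"),
        ]

        def decide(state):
            """Fire every rule whose condition holds; return the actions."""
            return [action for condition, action in RULES if condition(state)]

        state = {"voltage": 10.5, "intruder_near_supply": True, "battery_pct": 80}
        print(decide(state))
        # ['switch to backup battery', 'fire laser at intruder']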
  • Agreed (Score:3, Insightful)

    by hypermanng ( 155858 ) on Monday January 22, 2007 @05:04PM (#17715226) Homepage
    I didn't mean to imply that only Strong AI is militarily useful. In fact, I would say that Strong AI is *not* useful, if one thinks about the ethics of forcing anything sentient to go to war in one's place.

    Also, I have no trouble recognizing that cleverly designed "Weak" AI is nonetheless quite strong enough, in more conventional senses, to be a monumental aid to human problem solving, in the same manner and to the same degree as an ICBM is a great aid to human offensive capabilities.
  • Re:The Real Issue (Score:4, Insightful)

    by AK Marc ( 707885 ) on Monday January 22, 2007 @05:41PM (#17715712)
    Quoting the parent: "I don't think computers are capable of making mistakes, because they are incapable of thinking; they can process and store, but this does not entail thought."

    Then they can't make mistakes, but they can make errors. What do you call it when a brownout causes a computer to flip a bit and give an incorrect answer to a math problem? How about when it is programmed incorrectly, so that it gives 2+2=5? How about when a user intends to add 2 and 3, but types in 2+2? In all three cases, the computer outputs a wrong answer. Computers can be wrong, and are wrong all the time; the wrong answers are errors. A "mistake" isn't an assigning of blame. I agree that computers can be blameless, but that does not mean that mistakes involving computers can't be made. I think your definition of "mistake" is not the most common one, and that definition, not whether computers have ever output an erroneous answer, would be the point of contention in this discussion. (A toy bit-flip demonstration follows this comment.)
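    To make the brownout example concrete (a toy illustration, not a model of real hardware faults): flipping the low bit of a correctly computed 4 yields 5, so the machine outputs 2+2=5 even though no step of the program logic was mistaken:

        result = 2 + 2           # correct arithmetic: 4 (binary 100)
        corrupted = result ^ 1   # one flipped low bit: 5 (binary 101)
        print(f"2 + 2 = {corrupted}")  # an error, though nothing "made a mistake"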
  • by Ungrounded Lightning ( 62228 ) on Monday January 22, 2007 @08:11PM (#17717492) Journal
    Except that HAL was paranoid. That the astronauts had a conversation they tried to hide from HAL was more than enough. The actual content of the conversation was immaterial.

    As I interpreted the scene: though the audio pickups were off, HAL had a clear view. So he zoomed in, panned back and forth between the speakers, and got a clear shot of each face (lips, eyes, eyebrows, and other facial cues) as each spoke.

    Which tells me he was lip-reading (and face-reading). He knew every word they said, and picked up the bulk of the visual side-channel emphasis as well.

    If all he needed to know was that they WERE having a conversation, he could have gotten that from his view through the window, without the camera gyrations.

    We, as the audience, got an alternation of the omniscient viewpoint (watching and hearing the conversation) with HAL's viewpoint: silence, except for the camera's pan and zoom actuators, and alternating closeups of the two talking heads. Thus we could both follow what was said and observe that HAL was following it too, but visually, and was putting a lot of processing power and camera-actuator activity into subverting the humans' attempt to keep him from knowing what was said.
  • Re:Paranoid (Score:2, Insightful)

    by tcc3 ( 958644 ) on Monday January 22, 2007 @09:36PM (#17718376)
    Yeah, but that's if you go by the book.

    The book and the movie are two different animals. The movie makes no mention of the "HAL was following orders" subplot. Short of saying it outright, the movie makes it pretty clear that HAL screwed up and his pride demanded that he eliminate the witnesses. Which, if you ask me, makes for a more interesting story.

    After reading all the books, I came to the conclusion that Clarke would have been better served by sticking to the screenplay.

"I've seen it. It's rubbish." -- Marvin the Paranoid Android

Working...