
A Peek Inside DARPA's Current Projects

dthomas731 writes to tell us that Computerworld has a brief article on some of DARPA's current projects. From the article: "Later in the program, Holland says, PAL will be able to 'automatically watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon.' At that point, perhaps DARPA's PAL could be renamed HAL, for Hearing Assistant That Learns. The original HAL, in the film 2001: A Space Odyssey, tells the astronauts how it knows they're plotting to disconnect it: 'Dave, although you took thorough precautions in the pod against my hearing you, I could see your lips move.'"
  • The Real Issue (Score:3, Insightful)

    by Atlantic Wall ( 847508 ) on Monday January 22, 2007 @02:50PM (#17713536)
    The real issue is some idiot programming a computer to defend against someone just trying to turn it off. A computer should be programmed to know it can make mistakes.
    • Re: (Score:2, Interesting)

      by Eagleartoo ( 849045 )

      [HAL] The only mistakes a computer makes are due to human error [/HAL]

      I don't think computers are capable of making mistakes, because they are incapable of thinking; they can process and store, but that does not entail thought. Define thought for me.
      Thought -- 1. to have a conscious mind, to some extent of reasoning, remembering experiences, making rational decisions, etc.
      2. to employ one's mind rationally and objectively in evaluating or dealing with a given situation

      I guess what we're looking for in

      • Re: (Score:3, Insightful)

        What does conscious thought have to do with mistakes? Mistakes are when you have some kind of expectation and those expectations fail to be met. If I buy some accounting software, and the figures don't add up correctly, then the software has made a mistake (which in turn is probably caused by a mistake made by a developer). If you make the definition of 'mistake' hinge on concepts like 'conscious mind' and other tricky philosophical ideas then it's a wonder you can get anything done in your day.

        My compute

        • Re: (Score:2, Insightful)

          by Eagleartoo ( 849045 )
          /. is probably the reason I DON'T get anything done during the day =). I was somewhat responding to "teaching computers that they make mistakes." But if you were going to do that, they would have to evaluate whether what they do is make mistakes or not, and they would begin to evaluate their purpose. If they are instructed that they make mistakes ("You are a computer; you make mistakes," the man told the computer) and they "learn" that they make mistakes, would it not follow that they start making mistakes intentio
        • Re:The Real Issue (Score:4, Insightful)

          by Ucklak ( 755284 ) on Monday January 22, 2007 @04:22PM (#17714776)
          Conscious thought has everything to do with mistakes.

          A machine is mechanical and is incapable of mistakes as it can't set expectations.

          From your quote, "Mistakes are when you have some kind of expectation and those expectations fail to be met.", machines aren't capable of setting expectations, only following a basic 'to do' list.

          If a machine adds 1+1 and returns 3 to the register, then it didn't fail, it added 1+1 in the way it knows how to.

          AI today is nothing more than a bunch of IF..THEN possibilities run on fast processors to make it seem instantaneous and 'alive'.

          You can program a machine to be aware of its power (voltage), and you can have a program monitor that power with cameras and laser beams and whatever else, with commands to shoot laser beams at any device that comes near that power, but the machine still isn't aware of what power is.

          Not to get philosophical here, but IMO, AI won't ever be real until a machine has a belief system, and part of that belief system relies upon its own ability to get energy, just like animals do.

          It's possible that a program can be written so that a machine is aware of its own requirements, but then we're back to a bunch of IF..THEN statements written by programmers.
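          As a rough illustration of what "a bunch of IF..THEN" amounts to, here is a toy rule-based responder (purely illustrative Python, not any real system's code):

              # Toy "AI": a pile of IF..THEN rules dressed up as behaviour.
              def respond(sensor):
                  if sensor["voltage"] < 11.0:
                      return "seek power"
                  if sensor["intruder_near_power"]:
                      return "fire laser at intruder"
                  if sensor["task_queue"]:
                      return "run next task"
                  return "idle"

              # The machine "defends its power" without any notion of what power is:
              print(respond({"voltage": 12.1, "intruder_near_power": True, "task_queue": []}))
              # -> fire laser at intruder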
          • Re: (Score:2, Interesting)

            Prove that you or I are conscious and more than just an incredibly complicated series of IF..THEN statements. :)
            • If (Me.Arguments(Me.Arguments.Length-1).Status = Defeat){
                  Break() //Let's just ignore this thread even though we started it.
              }
            • Is there something that makes you think that we are not conscious? Or that we are just an incredibly complicated series of IF..THEN statements?
              • What makes you think that a sufficiently complicated computer isn't conscious? What about a monkey? A dog? A cow? An insect? A worm? Keep going down and find the point where you say "ok, this is conscious, but this isn't." We can't yet even come up with a rigorous definition of consciousness yet. Jeff Hawkins has some very interesting theories though.
                • Nothing. There is no evidence. I am not in a position to know if a computer, animal, or insect can be conscious. Personally, I think that consciousness is gained gradually, in analogy with the complexity of the brain (so no black-or-white examples like the one you gave). No idea about artificial consciousness.

                  There is no way I can experience how it is to be an animal, an insect, or a machine. Therefore, I cannot say. But, I know that if a machine will be able to achieve consciousness, it will only be because a
            • by Ucklak ( 755284 )
              There is one thing that is innate within all living things as we understand them, and that is the need to multiply.
              That belief system, which is within all our cells and extends further into the organism, gives us a need that doesn't have to be taught.

              In order to fulfill that need, we (organisms and individual cells) need food and water and we'll do whatever it takes to get it.

              You could add that we will do whatever it takes to get food and water OR we'll die. That is the understanding that we will cease to exist
              • There is one thing that is innate within all living things as we understand them, and that is the need to multiply.
                I'm married but have no kids. We have no intention of having kids. We employ various forms of technology to prevent us having kids. What are you talking about?
                • by Ucklak ( 755284 )
                  The fact that you are making a conscious effort to not reproduce.

                  The parent implied that we are nothing but a complex array of IF..THEN statements, to which I replied that we are driven by the conditional OR.
                  Machines in their current state aren't capable of understanding; their purpose is to follow instructions, not to alter expectations, which they are not able to set autonomously.

                  Existence of living organisms is about consuming food, water, and reproducing.
                  There are those that will put a r
          • You can program a machine to be aware of its power (voltage), and you can have a program monitor that power with cameras and laser beams and whatever else, with commands to shoot laser beams at any device that comes near that power, but the machine still isn't aware of what power is.
            Few people know what power is. They get hungry, and then they eat.
      • Re:The Real Issue (Score:4, Insightful)

        by AK Marc ( 707885 ) on Monday January 22, 2007 @05:41PM (#17715712)
        I don't think computers are capable of making mistakes, because they are incapable of thinking; they can process and store, but that does not entail thought.

        Then they can't make mistakes, but can make errors. What do you call it when a brownout causes a computer to flip a bit and give an incorrect answer to a math problem? How about when it is programmed incorrectly so that it gives 2+2=5? How about when a user intends to add 2 and 3, but types in 2+2? In all cases, the computer outputs a wrong answer. Computers can be wrong and are wrong all the time. The wrong answers are errors. A "mistake" isn't an assigning of blame. I agree that computers can be blameless, but that does not mean that mistakes involving computers can't be made. I think your definition of "mistake" is not the most common, and would be the point of contention in this discussion, not whether computers have ever outputted an erroneous answer.
      • by G-funk ( 22712 )
        The term you need to read up on is "emergent behaviour" my friend.
    • Re: (Score:3, Insightful)

      Computers shouldn't be programmed to know they can make mistakes. They should observe themselves making mistakes and learn that they can. Sheesh. Next you'll be suggesting that children should be programmed from birth to believe whatever it's convenient for their parents to have them believe...
      • Next you'll be suggesting that children should be programmed from birth to believe whatever it's convenient for their parents to have them believe...
        Wait... you mean you can't do that? I suppose it's a little late to rethink this fatherhood thing...
      • "They should observe themselves making mistakes and learn that they can" Well if they can figure that out then who knows what they may learn and how the may feel/react to it.
        • then who knows what they may learn
          If we already knew what an intelligent computer could learn, then there wouldn't be much point in making one, would there?
    • Knowledge (Score:3, Interesting)

      by hypermanng ( 155858 )
      The only programming that leads to the kind of context-rich understanding that could be called "knowledge" in the human sense is self-programming. Like babies. We're all born with some basic software and a lot of hardware, but it's through interaction with our environment over time that we self-program. One might call it learning, but it's more fundamental than just accumulating facts: it's self-creation.

      Dennett calls us self-created selves. Any AI more than superficially like a human would be the same.
      • And programming us babies takes a long long time and lots of love and attention, I might add. I wonder if a computer will ever think back to a conversation it had years ago and think 'Man, was I ever a Windows Box back then...'
    • A computer should be programmed to know it can make mistakes.

      How can we do that, when our own president doesn't even know when he's made one?
  • ...but then we'd have to kill you.
  • Except that HAL was paranoid. That the astronauts had a conversation they tried to hide from HAL was more than enough. The actual content of the conversation was immaterial.
    • Re:Paranoid (Score:4, Insightful)

      by Gorm the DBA ( 581373 ) on Monday January 22, 2007 @03:00PM (#17713724) Journal
      Actually, it was over long before that. HAL was just following its programming, perfectly.

      HAL was programmed to eliminate any possible failure points in the mission that he could. Throughout the spaceflight, HAL observed that the humans in the mission were fallible (one of them made a suboptimal chess move, and a handful of other mistakes were made). HAL had the ability to complete the mission on its own. Therefore, HAL made the decision, in line with its programming, to eliminate the human element.

      It makes sense, really, when you think about it. And truly, if Dave had just gone along with it and died, HAL would have finished the job perfectly fine.

      • Re: (Score:3, Informative)

        by cnettel ( 836611 )
        It was some years ago, but I think that the book also stresses the problem for HAL in that the full mission was never revealed to the human crew, which meant that even too good thinking on their part, at the wrong point in time, would be considered a failure. HAL was programmed/ordered to obey the crew, but also respect the mission objectives, and the contradiction only grew worse.
        • Re: (Score:2, Insightful)

          by tcc3 ( 958644 )
          Yeah, but that's if you go by the book.

          The book and the movie are two different animals. The movie made no mention of the "Hal was following orders" subplot. Short of saying it outright, the movie makes it pretty clear that Hal screwed up and his pride demanded that he eliminate the witnesses. Which, if you ask me, makes a more interesting story.

          After reading all the books, I came to the conclusion that Clarke would have been better served by sticking to the screenplay.
        • by tcc3 ( 958644 )
          Hal "going crazy" at not being able to logically reconcile "Hal is perfect and cannot make mistakes" and "Hal made a mistake" is also an acceptable answer.
      • I always wondered... was HAL being only one letter off from IBM just a coincidence, or was it written that way on purpose?
        • Even if it wasn't a coincidence... The movie used "rebranded" IBM equipment.

          The same was said about Windows NT: VMS -> WNT.
        • by sconeu ( 64226 )
          Clarke claims it's a coincidence. He mentions it in "Lost Worlds of 2001", and specifically has Chandra debunk it in "2010".
    • Except that HAL was paranoid. That the astronauts had a conversation they tried to hide from HAL was more than enough. The actual content of the conversation was immaterial.

      As I interpreted the scene: though the audio pickups were off, HAL had a clear view. So he zoomed in, panned back and forth between the speakers, and got a clear shot of each man's face - lips, eyes, eyebrows, and other facial markings - as he spoke.

      Which tells me he was lip-reading. (Also face-reading.) He knew every word
  • they should have saved this for April 1st.

    the /.ers could have argued over this for hours.
  • REAL sneak peek (Score:4, Informative)

    by Prysorra ( 1040518 ) on Monday January 22, 2007 @02:57PM (#17713672)
    Here's a LOT of stuff to look through....don't tell anyone ;-)

    Top Secret Stuff at DARPA [darpa.mil].
  • Not "Strong" AI (Score:5, Interesting)

    by hypermanng ( 155858 ) on Monday January 22, 2007 @02:57PM (#17713674) Homepage
    The DoD funds a huge percentage of AI research, but at the end of the day they're interested in things that can be easily weaponized or used for simple intelligence-sifting heuristics. The most fundamentally interesting research in AI is in the humanoid robotics projects such as those at the MIT shop, and it is from these more humanly-modeled projects that anything like HAL could ever issue. Search-digest heuristics like PAL aren't much like humans and will never lead to anything approaching a human's contextually rich understanding of the world at large, any more than really advanced racecar design will lead to interstellar craft.

    The difference, as Searle would say, between Strong (humanlike) AI and Weak (software widget like) AI is a difference of type, not scale.
    • The most fundamentally interesting research in AI is in the humanoid robotics projects such as those at the MIT shop, and it is from these more humanly-modeled projects that anything like HAL could ever issue. Search-digest heuristics like PAL aren't much like humans and will never lead to anything approaching a human's contextually rich understanding of the world at large

      It is far from clear whether "humanoid robotics" are either necessary or useful in producing AI with a "contextually rich understanding of the world at large".

      • Clarification (Score:3, Interesting)

        by hypermanng ( 155858 )
        I don't mean to imply humanoid robotics qua robotics is necessary to AI development. Rather, only in a creature that acts as an agent inhabiting the world at large can one expect anything like human-level understanding thereof to develop. It's all very well to develop clever as-if software widgets to simulate understanding in carefully controlled circumstances, but they won't scale to true global context richness because 1) they interact with the world over narrow modalities and 2) they don't have the ric
    • by nigral ( 914777 )
      In an AI course, the teacher once told us that during the first weeks of the first Iraq war, the ability to solve complex logistics problems in hours instead of weeks was largely worth all the money invested in AI research since the '60s.

      So when the military talks about AI you don't need to think only about intelligent robot soldiers.
      • Agreed (Score:3, Insightful)

        by hypermanng ( 155858 )
        I didn't mean to imply that only Strong AI is militarily useful. In fact, I would say that Strong AI is *not* useful, if one thinks about the ethics of forcing anything sentient to go to war in one's place.

        Also, I have no trouble recognizing that cleverly-designed "Weak" AI is nonetheless quite strong enough in more conventional senses to be a monumental aid to human problem solving, in the same manner and to the same degree as an ICBM is a great aid to human offensive capabilities.
    • It is not possible for computers to reach human-like AI because computers' computational capacity is severely limited when compared to the human brain:

      http://en.wikipedia.org/wiki/Brain [wikipedia.org]

      The human brain can contain more than 100 billion neurons (that's 100,000,000,000), with each neuron connected to 10,000 other neurons.

      This huge capacity allows the brain to mirror external experiences (and some people suspect the mirror image to be in 3D):

      http://en.wikipedia.org/wiki/Mirror_cells [wikipedia.org]

      So any attempts t
      • If we model each neuron as an object composed of state equations, quasi-spatial information and in-out buffers and synapse sets as registry entries that link a given out buffer to the in buffer of another neuron, then we can expect that we'll have a total memory overhead of about 50KB per neuron (when accounting for the average number of connections per neuron) requiring about 50KFLOPS per neuron. For 2x10^8 neurons, that implies that we can comfortably allow for human-scale intelligence using one PByte of
        • But those neurons work in parallel...each neuron is an information bus as well. Computer memory is not a bus, it is just storage, and all memory cells have to send their data through the central bus.
          • While I admit the virtualization of what was physically instantiated connection information is costly in terms of brute storage as well as memory bandwidth, there's not that many bits in the bus, once stripped of addressing overhead.

            That said, managing memory bus speed could turn out to be a considerable technical constraint: the whole virtualizer would need a fairly agile daemon that tracked evolving connection topologies and kept neurons resident with their neighbors to minimize inter-cpu module bandwidth
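            For concreteness, a back-of-envelope version of the estimate above, as a minimal sketch (the 50 KB and 50 KFLOPS per-neuron figures are the poster's assumptions; the neuron count is the ~100 billion figure quoted from Wikipedia further up the thread):

                # Rough sizing of a whole-brain simulation at the granularity described above.
                # All per-neuron figures are the poster's assumptions, not measured values.
                NEURONS = 1e11            # ~100 billion neurons (figure quoted upthread)
                BYTES_PER_NEURON = 50e3   # assumed: state, spatial info, buffers, synapse table
                FLOPS_PER_NEURON = 50e3   # assumed: per-neuron update cost

                print(f"Memory:  {NEURONS * BYTES_PER_NEURON / 1e15:.0f} PB")      # ~5 PB
                print(f"Compute: {NEURONS * FLOPS_PER_NEURON / 1e15:.0f} PFLOPS")  # ~5 PFLOPS

            Whether those per-neuron figures are anywhere near right is, of course, the whole argument.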
  • by Paulrothrock ( 685079 ) on Monday January 22, 2007 @03:01PM (#17713740) Homepage Journal
    DARPA: We don't make the things that kill people. We make the things that kill people better.
    DARPA: We bring good things to life... that are then used to kill people.
    DARPA: Who do you want to kill today?
  • by silentounce ( 1004459 ) on Monday January 22, 2007 @03:01PM (#17713742) Homepage
    "watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon."
    Anyone care to guess what they plan to use that little gadget for?
    • Making sure your waitress gets your order right, of course.

      I mean really...Bush only invaded Iraq because that waitress brought him FRENCH dressing instead of RANCH like he asked...

      • Re: (Score:2, Interesting)

        by Perey ( 818567 )
        If that were true, the war would have been won when they renamed it Freedom dressing.
    • Recording 3 hours of PHB meetings and figuring out that all they agreed upon was the time for the next meeting, to set the itinerary for the real meeting.
    • by metlin ( 258108 ) *

      "watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon."
      Anyone care to guess what they plan to use that little gadget for?


      Voyeur radio porn?
    • That's easy: they plan on turning it over to the DOD.

      DARPA doesn't really do much with stuff like that; their job is to create it.
      • That's easy: they plan on turning it over to the DOD.

        Giving it to the NSA makes more sense.

        Imagine: Instead of tagging conversations for human review when a keyword is present, they could have the acres of supercomputers analyze them for agreement on action items.

        Then the automated agents could maintain and analyze a database of who agreed to what, flagging a collection of conversations for human review only if/when it amounted to a conspiracy to prepare for, support, or execute a military, geopolitical, or
        • That's what I was thinking, but I wasn't sure I could put it clearly, so I opened it up to you guys. I think the most interesting thing about it is that it would even catch coded conversations. It would catch the agreements, and if the agreement, say purchasing cattle, took place in an area where that exchange wouldn't make sense, well, it could flag it for review as well. It essentially boils down a conversation to the most important part, and that makes for less data for other programs or people to sort th
  • by WED Fan ( 911325 ) <akahige.trashmail@net> on Monday January 22, 2007 @03:05PM (#17713800) Homepage Journal

    DARPA has yet to acknowledge the project that I was working on 3 years from now in 2010. Last week, January 14, 2012 we will successfully tested the Time Redaction Project. So, I gave myself the plans tomorrow so that I will be submitting them a few years ago to get the grant money. DOD has used this to send a nuke to kill the dinosaurs. I hope it works.

    • Re: (Score:3, Funny)

      Not anymore, your mom was just assassinated before your birth by an android you failed to prevent the invention of.
    • by Slightly Askew ( 638918 ) on Monday January 22, 2007 @03:19PM (#17713974) Journal

      So, I wiollen have given myself the plans tomorrow so that I wiollen be submitting them a few years ago to get the grant money.

      There, fixed that for you.

      • You should be ashamed. It is [halfbakery.com]:
        DARPA has yet to acknowledge the project that I worked will on 3 years from now in 2010. Last week, January 14, 2012 we tested will successfully the Time Redaction Project. So, I gave myself the plans tomorrow so that I submit will them a few years ago to get will the grant money. DOD has used this to send a nuke to kill the dinosaurs. I hope it works.
        • Sorry, but the shame, good sir, lies with you. Douglas Adams is the foremost authority on time travel verb usage, not some two-bit blogger.

          From the link you provided

          P.S. I am aware of Douglas Adams' ideas, which went something like 'I will wiollen the ball kikken', but my idea is to make something usable.

          This is slashdot. Attempting to improve upon DNA is grounds for drawing and quartering

          • I love DNA as much as the next guy. But I doubt Adams would want his writing to be taken so seriously. You remind me of a friend of mine who still carries a towel with him everywhere. As for the grammar bit, Google thinks [google.com] otherwise.
  • [P]erhaps DARPA's PAL could be renamed HAL, for Hearing Assistant That Learns.

    Perhaps, but that's not what the original HAL stood for. HAL was short for Hueristic ALgorithmal. Arthur C. Clarke had to put that into one of his books in the series (2010, IIRC) because lots of people thought he had derived it by doing a rot25 on IBM.

    • by sconeu ( 64226 )
      No, but the Sirius Cybernetics Corporation is obviously involved, since a robot is "your plastic PAL who's fun to be with."

    • Hueristic? What? A new color scheme? No wonder HAL went crazy.....damn horrible interior design...
    • by etwills ( 471396 )
      Elsewhere, from googling "one step ahead of IBM":

      The author of 2001, Arthur C Clarke emphatically denies the legend in his book "Lost Worlds of 2001", claiming that "HAL" is an acronym for "Heuristically programmed algorithmic computer". Clarke even wrote to the computer magazine Byte to place his denial on record.

      [http://tafkac.org/movies/HAL_wordplay_on_IBM.html , goes on to argue this is unconvincing ... given HAL has a different name in the working drafts]

  • So, to avoid being observed, we all need to learn how to speak without moving our lips?
  • ...is to improve application productivity by a factor of 10 through new programming languages and development tools.
    Cool. The government is finally ditching COBOL and FORTRAN.
  • We weep for a bird's cry, but not for a fish's blood. Blessed are those with a voice. If the dolls also had voices, they would have screamed, "I didn't want to become human."
  • vaporware and PR (Score:4, Interesting)

    by geekpuppySEA ( 724733 ) * on Monday January 22, 2007 @03:35PM (#17714168) Journal
    IAA graduate student in computational linguistics.

    Later in the program, Holland says, PAL will be able to "automatically watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon."

    PAL's role here is not clear. The 'easier' task would be to monitor the body language of the two conversers and, by lining up a list of tasks with the observation of their head movements, correctly predict which points in the conversation were the ones where someone performed an "agreement" gesture.

    The much, much more difficult task would be to actually read lips. There are only certain properties of phonemes you can deduce from how the lips and jaw move; many, many other features of speech are lost. Only when you supply the machine with a limited set of words in a limited topic domain do you get good performance; otherwise, you're grasping at straws. And then taking out most of the speech signal? Please.
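    To make the lip-reading point concrete: several phonemes collapse onto the same visible mouth shape (viseme), so the visual channel alone underdetermines what was said. A toy sketch (the groupings here are a rough illustration, not a complete phonetic inventory):

        # Many phonemes share one visible mouth shape, so lip-reading is ambiguous.
        VISEME_TO_PHONEMES = {
            "bilabial":    ["p", "b", "m"],           # lips pressed together -- look identical
            "labiodental": ["f", "v"],                # lower lip against the teeth
            "alveolar":    ["t", "d", "n", "s", "z"], # mostly hidden behind the teeth
        }

        def candidate_readings(viseme_sequence):
            """How many phoneme strings are consistent with what the camera saw?"""
            total = 1
            for v in viseme_sequence:
                total *= len(VISEME_TO_PHONEMES[v])
            return total

        # Even a short observed sequence is highly ambiguous without audio or a
        # restricted vocabulary to narrow the choices:
        print(candidate_readings(["bilabial", "alveolar", "alveolar"]))  # 3 * 5 * 5 = 75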

    But no, DARPA is cool and will save all the translators in Iraq (by 2009, well before the war ends). PR and vaporware win the day!

  • by Anonymous Coward
    > Later in the program, Holland says, PAL will be able to "automatically watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon."

    Sure. You will first have to solve the problem of understanding natural language in an open ended environment - something that computers are not very good at yet.

    Quite frankly, AI people have been promising this kind of stuff for some 40 years now, and they have s
  • ***"watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon."***

    I don't know about anyone else, but my experience has been that very few conversations actually result in mutual agreement upon a task. Most conversations are indeterminate, and most of the rest result in symmetrically paired misunderstandings about what has been agreed to.

    Oh well, at least for once "they" aren't spending my money to kill/maim innocent bystanders.

  • a more complete list for research proposals is here: https://www.dodsbir.net/solicitation/sttr07/default.htm [dodsbir.net]
  • by kneecramps ( 1054578 ) on Monday January 22, 2007 @04:54PM (#17715116)

    WRT "watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon":

    Here's a link to the actual research that they are likely talking about:

    http://godel.stanford.edu/~niekrasz/papers/PurverEhlenEtAl06_Shallow.pdf [stanford.edu]

    As you might expect, the ComputerWorld article's summary of the technology is rather optimistic. Nonetheless, this stuff really does exist, and shows some promise in both military and general applications.

    • Actually, here's a link to a page at the project's prime contractor that gives a little more context:

      http://caloproject.sri.com/about/

      This page is actually about one of the two subprojects that together make up the PAL project.

      I suggest that many of the posters to other threads should follow the publications link and bother to read some of the 50-odd citations. Only then will you really be in a position to speculate on what is and isn't hype. I guess it's actually easier to read a summary (/.) of an art

  • by haggie ( 957598 ) on Monday January 22, 2007 @04:58PM (#17715144)
    using natural-language processing, figure out what are the tasks they agreed upon.

    This would only work for conversations between people of the same sex. There has never been a conversation between a man and a woman in which both participants would agree on the tasks...

    M: Want to continue this conversation at my place?
    F: Take a leap!
    Computer: Agreed to move conversation to male's residence by leaping.

    F: When are you going to mow the lawn?
    M: Yeah, I'll get right on that.
    Computer: Male agreed to mow lawn at earliest opportunity

    • F: Does this outfit make me look fat?
      M: Ummm, of course not honey - you look great.
      Computer: Application of this sheet-like outfit makes fat women look thinner.
  • I figured I'd check out the source behind the source and visit the DARPA [darpa.mil] web site. I got curious and decided to check out their latest budget estimate [darpa.mil]. What a peach! The "overview" looks like mainframe printout, which of course spills onto a second page (where you'd find the total DARPA budget of $3.3 billion for FY07). The details that follow (interrupted by a couple of marketing pages on the theme "ExpectMore.gov") make it pretty difficult to connect the dots -- and of course these are just the unclass
  • by petrus4 ( 213815 )
    "Dave, put that Windows CD down. Dave... DAVE!"
