A Peek Inside DARPA's Current Projects
dthomas731 writes to tell us that Computerworld has a brief article on some of DARPA's current projects. From the article: "Later in the program, Holland says, PAL will be able to 'automatically watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon.' At that point, perhaps DARPA's PAL could be renamed HAL, for Hearing Assistant That Learns. The original HAL, in the film 2001: A Space Odyssey, tells the astronauts how it knows they're plotting to disconnect it: 'Dave, although you took thorough precautions in the pod against my hearing you, I could see your lips move.'"
The Real Issue (Score:3, Insightful)
Re: (Score:2, Interesting)
[HAL] The only mistakes a computer makes are due to human error [/HAL]
I don't think computers are capable of making mistakes, because they are incapable of thinking; they can process and store, but this does not entail thought. Define thought for me.
Thought -- 1. to have a conscious mind, to some extent of reasoning, remembering experiences, making rational decisions, etc.
2. to employ one's mind rationally and objectively in evaluating or dealing with a given situation
I guess what we're looking for in
Re: (Score:3, Insightful)
Re: (Score:2, Insightful)
Re:The Real Issue (Score:4, Insightful)
A machine is mechanical and is incapable of mistakes as it can't set expectations.
From your quote, "Mistakes are when you have some kind of expectation and those expectations fail to be met.", machines aren't capable of setting expectations, only following a basic 'to do' list.
If a machine adds 1+1 and returns 3 to the register, then it didn't fail, it added 1+1 in the way it knows how to.
AI today is nothing more than a bunch of IF..THEN possibilities run on fast processors to make it seem instantaneous and 'alive'.
You can program a machine to be aware of its power (voltage), and you can have a program monitor that power with cameras and laser beams and whatever else, with commands to shoot laser beams at any device that comes near that power, but the machine still isn't aware of what power is.
Not to get philosophical here, but IMO, AI won't ever be real until a machine has a belief system, and part of that belief system relies upon its own ability to get energy, just like animals do.
It's possible that a program can be written so that a machine is aware of its own requirements, but then we're back to a bunch of IF..THEN statements written by programmers.
Re: (Score:2, Interesting)
No need to! (Score:1)
Break()
}
Re: (Score:1)
Re: (Score:1)
Re: (Score:1)
There is no way I can experience how it is to be an animal, an insect, or a machine. Therefore, I cannot say. But, I know that if a machine will be able to achieve consciousness, it will only be because a
Re: (Score:2)
That belief system, present in all our cells and throughout the organism, gives us a need that doesn't have to be taught.
In order to fulfill that need, we (organisms and individual cells) need food and water and we'll do whatever it takes to get it.
You could add that we will do whatever it takes to get food and water OR we'll die. That is the understanding that we will cease to exist
Re: (Score:2)
Re: (Score:2)
The parent implied that we are nothing but a complex array of IF..THEN statements to which I replied that we are driven by the conditional OR.
Machines in their current state today aren't capable of understanding: their purpose is to follow instructions, not to alter expectations, which they cannot set mechanically or autonomously.
The existence of living organisms is about consuming food and water, and reproducing.
There are those that will put a r
Re: (Score:2)
Re:The Real Issue (Score:4, Insightful)
Then they can't make mistakes, but can make errors. What do you call it when a brownout causes a computer to flip a bit and give an incorrect answer to a math problem? How about when it is programmed incorrectly so that it gives 2+2=5? How about when a user intends to add 2 and 3, but types in 2+2? In all cases, the computer outputs a wrong answer. Computers can be wrong and are wrong all the time. The wrong answers are errors. A "mistake" isn't an assigning of blame. I agree that computers can be blameless, but that does not mean that mistakes involving computers can't be made. I think your definition of "mistake" is not the most common, and would be the point of contention in this discussion, not whether computers have ever outputted an erroneous answer.
Re: (Score:2)
Re: (Score:3, Insightful)
Re: (Score:1)
Re: (Score:1)
Re: (Score:2)
Knowledge (Score:3, Interesting)
Dennett calls us self-created selves. Any AI more than superficially like a human would be the same.
Re: (Score:1)
Insert heavy-handed comment here... (Score:1, Offtopic)
How can we do that, when our own president doesn't even know when he's made one?
We could tell you... (Score:1)
Paranoid (Score:2)
Re:Paranoid (Score:4, Insightful)
HAL was programmed to eliminate any possible failure points in the mission that he could. Through the spaceflight, HAL observed that the humans on the mission were fallible (one of them made a suboptimal chess move, and a handful of other mistakes were made). HAL had the ability to complete the mission on its own. Therefore, HAL made the decision, in line with its programming, to eliminate the human element.
It makes sense, really, when you think about it. And truly, if Dave had just gone along with it and died, HAL would have finished the job perfectly fine.
Re: (Score:3, Informative)
Re: (Score:2, Insightful)
The book and the movie are two different animals. The movie made no mention of the "Hal was following orders" subplot. Short of saying it outright, the movie makes it pretty clear that Hal screwed up and his pride demanded that he eliminate the witnesses. Which, if you ask me, makes a more interesting story.
After reading all the books, I came to the conclusion that Clarke would have been better served by sticking to the screenplay.
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
The same was said about Windows NT. WNT->VMS
Re: (Score:2)
Re: (Score:1)
As I interpreted the scene... (Score:3, Insightful)
As I interpreted the scene: Though the audio pickups were off, HAL had a clear view. So he zoomed in on their faces, panned back-and-forth between speakers, and got a clear shot of their faces - lips, eyes, eyebrows, and other facial markings - as each spoke.
Which tells me he was lip-reading. (Also face-reading.) He knew every word
True or not (Score:1)
the
REAL sneak peek (Score:4, Informative)
Top Secret Stuff at DARPA [darpa.mil].
Not "Strong" AI (Score:5, Interesting)
The difference, as Searle would say, between Strong (humanlike) AI and Weak (software widget like) AI is a difference of type, not scale.
Re: (Score:2)
The most fundamentally interesting research in AI is in the humanoid robotics projects such as those at the MIT shop, and it is from these more humanly-modeled projects that anything like HAL could ever issue. Search-digest heuristics like PAL aren't much like humans and will never lead to anything approaching a human's contextually rich understanding of the world at large.
It is far from clear whether "humanoid robotics" are either necessary or useful in producing AI with a "contextually rich understanding of the world at large".
Clarification (Score:3, Interesting)
Re: (Score:1)
So when the military talks about AI you don't need to think only about intelligent robot soldiers.
Agreed (Score:3, Insightful)
Also, I have no trouble recognizing that cleverly-designed "Weak" AI is nonetheless quite strong enough in more conventional senses to be a monumental aid to human problem solving, in the same manner and to the same degree as an ICBM is a great aid to human offensive capabilities.
Re: (Score:2)
http://en.wikipedia.org/wiki/Brain [wikipedia.org]
The human brain can contain more than 100 billion neurons (that's 100,000,000,000), with each neuron connected to 10,000 other neurons.
This huge capacity allows the brain to mirror external experiences (and some people suspect the mirror image to be in 3D):
http://en.wikipedia.org/wiki/Mirror_cells [wikipedia.org]
So any attempts t
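The scale quoted above is worth a quick back-of-envelope check. The sketch below just multiplies out the comment's round figures (which are rough biological estimates, not precise counts) to show why brute-force emulation is daunting:

```python
# Back-of-envelope check of the neuron/synapse numbers quoted above.
# Figures are the comment's round estimates (~1e11 neurons, ~1e4 synapses each);
# real biological counts vary considerably.
neurons = 100_000_000_000          # ~100 billion neurons
synapses_per_neuron = 10_000       # connections per neuron

total_synapses = neurons * synapses_per_neuron
print(f"total connections: {total_synapses:.0e}")   # on the order of 1e15

# If each connection needed just 4 bytes (a single 32-bit weight),
# a naive lookup table would already be in the petabyte range.
bytes_needed = total_synapses * 4
print(f"naive weight storage: {bytes_needed / 1e15:.0f} PB")
```

Even this crude estimate lands at a quadrillion connections, before accounting for any of the dynamics that make real neurons interesting.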
Yes and no (Score:2)
Re: (Score:2)
I don't see a problem (Score:2)
That said, managing memory bus speed could turn out to be a considerable technical constraint: the whole virtualizer would need a fairly agile daemon that tracked evolving connection topologies and kept neurons resident with their neighbors to minimize inter-CPU-module bandwidth.
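Keeping neurons resident with their neighbors is essentially a graph-partitioning problem. A hypothetical greedy sketch (the function name, toy graph, and capacity numbers are all invented for illustration, not taken from any real project):

```python
# Hypothetical sketch of the "keep neurons near their neighbors" idea:
# greedily assign each node of a connection graph to whichever CPU module
# already holds the most of its neighbors, subject to a per-module capacity.
from collections import defaultdict

def place_neurons(edges, n_modules, capacity):
    """edges: iterable of (a, b) connections. Returns {node: module index}."""
    neighbors = defaultdict(set)
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)

    placement = {}
    load = [0] * n_modules
    # Place highly connected nodes first; they constrain the layout the most.
    for node in sorted(neighbors, key=lambda n: -len(neighbors[n])):
        def score(m):
            # How many of this node's already-placed neighbors live on module m?
            return sum(1 for nb in neighbors[node] if placement.get(nb) == m)
        candidates = [m for m in range(n_modules) if load[m] < capacity]
        best = max(candidates, key=score)
        placement[node] = best
        load[best] += 1
    return placement

# Toy graph: two triangles of neurons bridged by a single connection.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
layout = place_neurons(edges, n_modules=2, capacity=3)
cross = sum(1 for a, b in edges if layout[a] != layout[b])
print(layout, "cross-module connections:", cross)
```

A real daemon would have to do this incrementally as the topology evolves, migrating neurons between modules rather than computing a placement once; greedy one-shot placement is only the simplest possible starting point.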
DARPA Slogans (Score:5, Funny)
DARPA Created The Internet (Score:2)
Re: (Score:1)
Come on you Tin-foil Hat wearers... (Score:3, Interesting)
Anyone care to guess what they plan to use that little gadget for?
Re: (Score:2)
I mean really...Bush only invaded Iraq because that waitress brought him FRENCH dressing instead of RANCH like he asked...
Re: (Score:2, Interesting)
Re: (Score:2)
Re: (Score:2)
Voyeur radio porn?
Use Plan? (Score:1)
DARPA doesn't really do much with stuff like that; its job is to create it.
NSA makes more sense. (Score:2)
Giving it to the NSA makes more sense.
Imagine: Instead of tagging conversations for human review when a keyword is present, they could have the acres of supercomputers analyze them for agreement on action items.
Then the automated agents could maintain and analyze a database of who agreed to what, flagging a collection of conversations for human review only if/when it amounted to a conspiracy to prepare for, support, or execute a military, geopolitical, or
Re: (Score:2)
I'm Scared dave (Score:2, Interesting)
I worked will on a DARPA... (Score:3, Funny)
DARPA has yet to acknowledge the project that I was working on 3 years from now in 2010. Last week, January 14, 2012 we will successfully tested the Time Redaction Project. So, I gave myself the plans tomorrow so that I will be submitting them a few years ago to get the grant money. DOD has used this to send a nuke to kill the dinosaurs. I hope it works.
Re: (Score:3, Funny)
Re:I worked will on a DARPA... (Score:5, Funny)
So, I wiollen have given myself the plans tomorrow so that I wiollen be submitting them a few years ago to get the grant money.
There, fixed that for you.
Re: (Score:2)
DARPA has yet to acknowledge the project that I worked will on 3 years from now in 2010. Last week, January 14, 2012 we tested will successfully the Time Redaction Project. So, I gave myself the plans tomorrow so that I submit will them a few years ago to get will the grant money. DOD has used this to send a nuke to kill the dinosaurs. I hope it works.
Re: (Score:2)
Sorry, but the shame, good sir, lies with you. Douglas Adams is the foremost authority on time travel verb usage, not some two-bit blogger.
From the link you provided
This is slashdot. Attempting to improve upon DNA is grounds for drawing and quartering
Re: (Score:2)
Not what HAL stood for (Score:2, Interesting)
[P]erhaps DARPA's PAL could be renamed HAL, for Hearing Assistant That Learns.
Perhaps, but that's not what the original HAL stood for. HAL was short for Heuristically programmed ALgorithmic computer. Arthur C. Clarke had to put that into one of his books in the series (2010, IIRC) because lots of people thought he had derived it by doing a rot25 on IBM.
Re: (Score:2)
Heuristic.... (Score:1)
Re: (Score:1)
[http://tafkac.org/movies/HAL_wordplay_on_IBM.html goes on to argue this is unconvincing ... given HAL has a different name in the working drafts]
ventriloquists have already cracked this? (Score:1)
Re: (Score:1)
Re: (Score:2)
Suddenly, Michael Winslow [imdb.com] becomes in-demand again.
Re: (Score:2)
Finally! (Score:2)
GITS 2 (Score:2)
Re: (Score:1)
vaporware and PR (Score:4, Interesting)
Later in the program, Holland says, PAL will be able to "automatically watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon."
PAL's role here is not clear. The 'easier' task would be to monitor the body language of the two speakers and, by lining up a list of tasks with the observation of their head movements, correctly predict which points in the conversation were the ones where someone performed an "agreement" gesture.
The much, much more difficult task would be to actually read lips. There are only certain properties of phonemes you can deduce from how the lips and jaw move; many, many other features of speech are lost. Only when you supply the machine with a limited set of words in a limited topic domain do you get good performance; otherwise, you're grasping at straws. And then taking out most of the speech signal? Please.
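The many-to-one collapse the parent describes is the phoneme-to-viseme problem: distinct sounds that produce the same lip shape. A simplified illustration (these groupings are a coarse sketch for demonstration, not a standard viseme inventory):

```python
# Simplified phoneme -> viseme groups: phonemes within one group look nearly
# identical on the lips, which is exactly the ambiguity described above.
viseme_groups = {
    "bilabial":    ["p", "b", "m"],      # "pat", "bat", "mat" look alike
    "labiodental": ["f", "v"],           # "fan" vs "van"
    "alveolar":    ["t", "d", "n", "l"], # largely hidden behind the teeth
    "velar":       ["k", "g"],           # articulated out of view entirely
}

def visually_confusable(p1, p2):
    """True if two phonemes share a viseme class, i.e. a lip-reader can't tell them apart."""
    return any(p1 in group and p2 in group for group in viseme_groups.values())

print(visually_confusable("p", "b"))  # True: "pat" and "bat" look the same
print(visually_confusable("p", "f"))  # False: different lip shapes
```

With whole groups of consonants collapsing to one visual class, a lip-reader (human or machine) is left disambiguating from context, which is why constrained vocabularies help so much.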
But no, DARPA is cool and will save all the translators in Iraq (by 2009, well before the war ends.) PR and vaporware win the day!
Allow me to be skeptical (Score:2, Interesting)
Sure. You will first have to solve the problem of understanding natural language in an open ended environment - something that computers are not very good at yet.
Quite frankly, AI people have been promising this kind of stuff for some 40 years now, and they have s
Say What? (Score:2)
I don't know about anyone else, but my experience has been that very few conversations actually result in mutual agreement upon a task. Most conversations are indeterminate, and most of the rest result in symmetrically paired misunderstandings about what has been agreed to.
Oh well, at least for once "they" aren't spending my money to kill/maim innocent bystanders.
2007 DoD proposal list (Score:2)
the real research behind this (Score:3, Informative)
WRT "watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon":
Here's a link to the actual research that they are likely talking about:
http://godel.stanford.edu/~niekrasz/papers/PurverEhlenEtAl06_Shallow.pdf [stanford.edu]
As you might expect, the ComputerWorld article's summary of the technology is rather optimistic. Nonetheless, this stuff really does exist, and shows some promise in both military and general applications.
Re: (Score:1)
Actually, here's a link to a page at the project's prime contractor that gives a little more context:
http://caloproject.sri.com/about/
This page is actually about one of the two subprojects that together make up the PAL project.
I suggest that many of the posters to other threads should follow the publications link and bother to read some of the 50-odd citations. Only then will you really be in a position to speculate on what is and isn't hype. I guess it's actually easier to read a summary (/.) of an art
Same sex conversations only... (Score:5, Funny)
This would only work for conversations between people of the same sex. There has never been a conversation between a man and a woman in which both participants would agree on the tasks...
M: Want to continue this conversation at my place?
F: Take a leap!
Computer: Agreed to move conversation to male's residence by leaping.
F: When are you going to mow the lawn?
M: Yeah, I'll get right on that.
Computer: Male agreed to mow lawn at earliest opportunity
Re: (Score:1)
M: Ummm, of course not honey - you look great.
Computer: Application of this sheet-like outfit makes fat women look thinner.
Look Closer; Go Cross-Eyed (Score:2)
HAL (Score:2)