MIT AI Acts Childish on Purpose
garibald gave us the link to an article in the Electronic Telegraph about researchers at MIT who have built an interactive AI robot called "Kismet" that is as cute as any George Lucas character, and is supposed to function on the emotional level of a human two-year-old. The cuteness is not gratuitous. As the article makes clear, there is a sound, scientific reason for it. (For pictures, and more technical depth than the Telegraph story, you may want to go directly to Kismet's Web Page.)
Re:A Few Comments on Cuteness (Score:1)
j/k
Seriously, I don't know what is more unsettling:
1. expending effort and resources to develop emotive responses in mechanical devices, or
2. the image of programmers cooing at a robot that looks like a gremlin - "Daddy loves Kismet, does Kismet love daddy?"
Not impressive (Score:1)
Seems that the algorithm is basically: see a moving object, get excited (or show some other expression), but look disgusted or fall asleep if it goes on for too long; respond a little differently when a human face is detected. They blather on about 'learning', but I don't see any evidence of that either.
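For what it's worth, the algorithm as described boils down to roughly this -- a toy sketch of my reading of it, not anything from the actual project, with all names and thresholds made up:

# Toy sketch of the loop described above: excitement on motion, habituation
# into disgust or sleep if the stimulus drags on, and a slightly different
# response when a face is present. Purely illustrative.

def react(stimulus, seconds_so_far):
    """stimulus is "none", "motion" or "face"; seconds_so_far is how long it has persisted."""
    if stimulus == "none":
        return "asleep" if seconds_so_far > 30 else "bored"
    if seconds_so_far > 20:          # habituation: too much of anything
        return "disgusted"
    if stimulus == "face":
        return "interested"          # human faces get a calmer reaction
    return "excited"                 # any other moving object

for s, t in [("motion", 2), ("motion", 25), ("face", 5), ("none", 40)]:
    print(s, t, "->", react(s, t))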
Looks like a lot of hype for precious little substance.
skin (Score:1)
You're thinking on the same track they are, except they want skin to make it act, not just look, more human. Dr. Anne Foerst [mit.edu] is a post-doc at the lab working on skin. If you fish around the MIT AI site [mit.edu], you may find a bit about the work. Cog, Kismet's "older brother," has some fake skin on his stomach. Judging from what I've read, developing skin for the robots has proved very difficult.
I saw Dr. Foerst speak, and she is amazing. She has a joint appointment with MIT AI and Harvard Divinity. Her work on skin has a theological basis from Genesis informed by modern studies of human development. Her exegesis of Genesis, based on the work of a 14th century rabbi, was something totally new to me. She is both a theologian and a computer scientist with formal education in both fields. This article [spiritualityhealth.com] includes a brief background.
Re:She's ugly (Score:1)
1) This has *no bearing whatsoever* on her work. And if she were a man, and that man were attractive, then maybe someone would have brought this up. Also, what bearing do this man's opinions have on the field of Computer Science? I don't agree with, say, all of RMS's views, or ESR's, but they're still great coders...
2) Didn't look at her home page, but professional photos do tend to look better. Actually, that was one of the *intelligent* replies to this troll.
3) I'm sure 'the woman' is used to it by now. It's not nice to call people ugly, but consider the source. If she hasn't learned by now not to listen to anonymous posters talking trash, perhaps it's time.
This brings up another point: would we be viciously defending her if she weren't a woman? Of course not. Grow up: gender matters in society, no matter how much you try to wish it away. Hopefully in the future people will view this in a more positive light instead of being so hung up about it. Maybe that's America's Puritan roots at work, but I could do without it.
If they're running low on funds... (Score:1)
Re:Over-stimulation with a Stuffed Toy (Score:1)
--
Re:It's more than spoofing.. (Score:1)
Spoofing is when a process transmits status information about itself to another process that does not, in fact, reflect its true state. For example, IP spoofing transmits a false source address in the IP header. Samba does not spoof SMB, though--it really speaks it.
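For example, with a modern packet library like scapy (Python), forging the source header is literally one field. Purely an illustration -- the addresses below are documentation placeholders, and you need raw-socket (root) privileges to actually send anything:

# Classic IP spoofing: the outgoing packet claims a source address that
# isn't ours. Requires the scapy library and root privileges.
from scapy.all import IP, ICMP, send

forged = IP(src="203.0.113.7",        # not our address -- the spoofed bit
            dst="198.51.100.1") / ICMP()
send(forged)                          # the receiver sees the false source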
It's a bit of a grey line.
In humans, it turns out that the mere expression of an emotional state is enough to generate some aspects of that state internally. On a personal note, this is why I spent Senior Year of High School smiling like a hyena. Acting happy created happiness--strange but true. But Kismet doesn't act out externally as an expression of internal emotional states, or vice versa--there *are* no emotions to reflect, not in the human sense of the word. It's a spoof.
But try telling that to someone who is looking at a cute sad little doll that asks not to have its power disconnected.
What was it that the Star Trek authors argued? That sentience required consciousness, self-awareness, and intelligence? Good values. Nice values. But the ability to effectively convey emotions (and perhaps complex concepts) is probably required for any creature to be recognized as sentient. Call it a bug in the Human OS if you need to. But it's true.
--Dan
Once you pull the pin, Mr. Grenade is no longer your friend.
A Few Comments on Cuteness (Score:4)
Sometimes I think computer people Just Don't Get It.
Then I see this.
Now, a lot of us geeks might whine and moan. "Oh, it's so cute. Die Jar Jar Die!" "The future of technology lies with...Terminator Furby." Whatever.
Outside of the cold, sterile world of gigabit routers and tab completion, a convincingly emotive device has been developed--artificially.
The importance of this is not to be underestimated. Large chunks of government policy are designed to protect animals with emotive properties. As a classic comedy routine went, "Sometimes I think the animals just all got in a line. 'What are you?' 'I'm a seal.' 'You're cute, honk that horn, we'll make sure nobody beats you with a club. Now what are you?' 'Cow. Moo.' 'GET ON THE TRUCK! You're a baseball glove.'"
Pet Rocks were quite the subversive satire on this maternal instinct.
Tamagotchi began the trend, and Furbies proved the consumer attraction, but I think it's the progeny of Kismet that will whip up quite a media frenzy. Who wouldn't think that the media has been waiting to report that a machine built by man doesn't want to die? It's a pent up desire; one that will be released at the first credible moment.
And along comes Kismet.
The key to Kismet really is that it spoofs emotion. Think about it for a second. The MIT guys "sniffed" humans with cameras, copied the protocol stream down to the transitions from one emotive signal to another, and (most importantly) parsed enough of the incoming emotistream to generate a seemingly interactive experience.
Shades of Eliza? Obviously. Eliza spoofed "Rogerian psychology," in which the therapist does nothing but ask the minimum number of questions needed to keep you talking. Eliza took some aspect of human psychology and looped it back on itself to create meaning for the user at the least possible computational cost.
Give Eliza and Kismet a love child equipped with text-to-emotive-speech and speech-to-text, toss in a degree of anger, rage, and fear if the robot "believes" 1) it is about to be deactivated (put a video sensor near the off switch) or 2) it has been ignored for an excessive period of time, and a non-zero part of the population will believe it is alive and as worth protecting as a cute baby seal.
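Concretely, the love child needn't be much more than a lookup table of canned replies bolted to a couple of fear triggers. A deliberately crude sketch, with every name invented:

# Crude sketch of the Eliza/Kismet hybrid described above: canned Rogerian
# replies plus a fear level that spikes when a hand nears the off switch or
# the robot has been ignored too long. Entirely hypothetical.
import random

ROGERIAN_REPLIES = [
    "Tell me more about that.",
    "How does that make you feel?",
    "Why do you say {}?",
]

class EmotiveEliza:
    def __init__(self):
        self.fear = 0.0
        self.seconds_ignored = 0

    def observe(self, hand_near_off_switch=False, spoken_to=False):
        if hand_near_off_switch:
            self.fear = min(1.0, self.fear + 0.5)
        self.seconds_ignored = 0 if spoken_to else self.seconds_ignored + 1
        if self.seconds_ignored > 60:
            self.fear = min(1.0, self.fear + 0.1)

    def respond(self, utterance):
        if self.fear > 0.7:
            return "Please don't turn me off."      # the media-frenzy line
        if self.seconds_ignored > 60:
            return "Is anybody there? I get lonely."
        return random.choice(ROGERIAN_REPLIES).format(utterance.lower())

bot = EmotiveEliza()
bot.observe(spoken_to=True)
print(bot.respond("I had a strange day"))
bot.observe(hand_near_off_switch=True)
bot.observe(hand_near_off_switch=True)
print(bot.respond("goodbye"))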
Is it just me, or is it scary how much Hackerthink (spoof, parse, etc.) fits so many different situations? If you can talk to a single person in biotech for more than twenty minutes about their job and not realize they're utter hackers, you aren't paying attention. The same applies to psychologists. How many psych papers read like a security bulletin? ("God refuses to patch. F1zRR has released Prozac 1.0 to compensate.")
If Kismet ever goes mainstream, the psychologists are going to have a field day. The technological revolution eliminated the need for menial workers. Kismet, scarily enough, could make shallow friendships far more awkward. "We don't do much more than talk like Kismet."
All this will go on until the media decides to flip public opinion around on its ear (thus making everybody tune in), saying "Are we nuts? This Kismet thing is NOTHING compared to humans! Deus Ex Humana!"
That's just my thoughts. It's late. I'm tired. I have to get up in four hours. Joy.
Send me comments. Or don't.
Once you pull the pin, Mr. Grenade is no longer your friend.
Re:She's ugly (Score:1)
1) I'm quite puzzled by what bearing this has on her work. If she were a man, this wouldn't have been brought up. With guys like you around, I wonder about the sanity of any person wanting to go into Comp Sci or the like.
2) If you look at the pictures on her home page, you should note the poor quality of the photos and the setting. She's not bad looking when she makes the effort. The same could be said for most of us.
3) Have you ever considered this woman's feelings? Here she is finally getting some notice, and some Anonymous Coward starts ragging on her looks. Get a life, and a shred of human decency.
Re:Sirius Cybernetics (Score:1)
Re:Speaking of cuteness... (Score:1)
Re:She's ugly (Score:1)
That was one thing my senior year in high school taught me. See, at our school, the yearbook pictures for seniors were shot by a professional photography studio, while all the other grade levels were shot in mass-production slaughter pens set up on campus.
Because I was only in my second year in High School, but on track to graduate that year, I got my picture taken for both grade levels. My senior yearbook picture looks very handsome and mature. The sophomore picture looks childish. Both were of the same person (me) and taken during the same year.
High School Prom also taught me the same thing. All of the girls who normally had "average" looks looked absolutely stunning at prom. The girls who normally looked stunning at school didn't look any better than normal and some were actually not quite as nice looking as normal -- too overdone.
A little makeup, time in front of a mirror, and a few helpful hints from a friend who knows what they're doing can do wonders for your appearance.
I've heard a couple of beers in the beholder can do that too... But I've never tried that one myself.
Re:Hardware and stuff (Score:1)
/dev
No - there is no moral backdrop for this (Score:1)
Consequently, any value judgements that head-in-the-sand universal moralists might want to make as such developments unfold are just complete unmitigated rubbish.
A-priori universal moralists on the march already (Score:1)
Not in human culture, they haven't. It takes a large amount of public interplay before an idea germinates and becomes a cultural meme -- most ideas just wither and die without trace except in the minds of researchers. In the area we're discussing, there is one huge big ZERO of cultural heritage to draw upon as a basis for value judgements that have any applicability.
> I don't see why you can't accept someone's current perception of automaton-human interaction to be valid, just because we have seen so little of it and it is not something society has got used to
I can't accept it because apples aren't oranges, and the fact that you haven't seen many apples but have seen lots of oranges doesn't entitle you to judge apples to be bad oranges. You have to get used to apples before you can validly say whether they are good or not.
Every single one of your arguments is laced with the desire to impose your current judgements on a new state of affairs, and there is no arguing with that, so I won't, because you won't entertain the notion that your previously acquired views might not apply.
The closed mindsets of universal moralists are legion, and it's no surprise to find their perpetual arguing against the open-minded position of making no a-priori judgements even here. If you had been in control, there would be no free Internet because it dismisses most pre-Internet values as irrelevant. Sheesh, I'm glad you're not in control.
Value judgements out of context are worthless (Score:2)
Humans can and will judge anything and everything, of course, but that doesn't mean that the judgement is valid. In this case it's not valid, because there is no cultural context in the automaton+human universe against which a system of values can be built.
Judging something out of context is only for the brain-dead, just like judging that an apple is a very poor quality orange. Not having such a context available is not a valid excuse for applying a previous inappropriate context in its place. The logical approach is to let a new context develop naturally over time, lead where it may.
But with 95% of humans happy to follow "moral leaders" like sheep, we're in for a field day of a-priori moralistic universalism, especially when nanotech begins to change the rules built up over millennia.
No, It's Hector! (Score:1)
---
seumas.com
Re:Uh oh... (Score:1)
---
Re:No - there is no moral backdrop for this (Score:1)
There may be no such thing as perversion in the behaviour of automata by itself, but the interaction of automata with humans is something that can and has to be judged within a human's value system as well.
Aside from that, I don't like "head-in-the-sand" universal apologists either, who would just as gratuitously apply their selfish value system to the development of weapons of mass destruction.
Re:Value judgements out of context are worthless (Score:1)
Automata that behave like humans in one way or another have existed for hundreds of years (do the names Wolfgang von Kempelen and Jacques de Vaucanson ring a bell?). The idea of social and even sexual interaction between automatons and humans is certainly not new (do you ever read science fiction books?). The cultural context you seem to have a fixation with develops as new things enter our culture, so it's somewhat useless as a measure of the validity of a moral judgement.
I don't see why you can't accept someone's current perception of automaton-human interaction as valid just because we have seen so little of it and it is not something society has got used to (which would seem to constitute a cultural context for you). Would you say that bestiality may only be considered immoral ("perverted") because it has been known for thousands of years?
Arguing that moral judgement of the interaction of humans with automatons (assuming for the sake of argument that it is relatively new, which it isn't) is not valid because someone recently improved the technical possibilities for it is like arguing that child pornography on the Internet can't be judged because it has only been around for a few years (and the adaptation of the "cultural context" of child abuse, paedophilia, etc., which has likewise existed for thousands of years, would be invalid according to your argumentation).
But with 95% of humans happy to follow "moral leaders" like sheep, we're in for a field day of a-priori moralistic universalism, ...
This would be a valid argument if these new developments were in no way part of our culture/universe and therefore inaccessible for our current value systems. Feel free to argue that this is the case, I just hope that your head is stuck deep enough in sand.
Note that personally, I'm not calling interaction with automatons anything - I am just appalled by your narrow-mindedness and the way you criticise people with other value systems while apparently trying to force the idea that science must be free from the burden of ethics down everyone's throat.
Re:Value judgements out of context are worthless (Score:2)
That's just rich. "Who are you to judge? Perhaps my booted foot is comfortable on your neck. This iron fist gets awful cramped; it has to be flexed and pounded on people a few times. Shoving these ridiculous notions that I shouldn't control everything according to my principles of what I believe is best for you..."
It's unfathomable how moralists can get away with claiming that their opposition forces their viewpoint of free inquiry onto others as some sort of oppressive yoke.
Re:Value judgements out of context are worthless (Score:2)
techie community or juvenile locker room? (Score:1)
> The pictures of her with the robot caught my eye at first...
I don't think you realize that a thread of comments on someone's appearance is not appropriate in this context. Having to put up with stuff like that can get really old, and discourage women from participating in the "community". In case you hadn't yet realized it, Mrs. Breazeal has damned impressive techie credentials. (And she probably has better aesthetic sense than you do, as well.)
reading comprehension and real-world experience (Score:1)
I'm afraid that you weren't thinking very clearly when you posted, but I'm sure that you will find it helpful to closely re-read and thoughtfully contemplate the posts in question (especially #53 [slashdot.org] and #60 [slashdot.org]), consulting a dictionary if you encounter any unfamiliar terms.
Re:Not impressive (Score:1)
In June I visited the MIT AI Lab and had a chance to talk with Ms. Breazeal and Brian Scassellati, another doctoral student working on Kismet and Cog.
Kismet detects both faces and motion, and reacts differently to each. Thus waving a toy with a face, say a teddy bear, will cause a different reaction than waving something without a face, say a Slinky.
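In modern terms, that face-versus-motion split could look roughly like this -- a sketch using OpenCV as a stand-in, which is certainly not what Kismet's vision hardware actually runs, and with thresholds that are pure guesses:

# Sketch of "react differently to faces than to mere motion" using OpenCV.
# Not Kismet's vision system; the cascade file and thresholds are guesses.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_frame(prev_gray, gray):
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) > 0:
        return "face"                     # the teddy bear lands here
    moving_pixels = cv2.countNonZero(
        cv2.threshold(cv2.absdiff(prev_gray, gray), 25, 255,
                      cv2.THRESH_BINARY)[1])
    return "motion" if moving_pixels > 500 else "nothing"   # the Slinky lands here

cap = cv2.VideoCapture(0)
prev = None
while True:                               # runs until the camera read fails
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev is not None:
        print(classify_frame(prev, gray)) # hook facial expressions on this
    prev = gray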
Steve M
Re:Hardware and stuff (Score:2)
I visited the MIT AI Lab in early June (I took a one-week course -- it was fun). I asked Cynthia this question, and the answer is no, the robot does not remember. It identifies facial features and reacts accordingly.
I believe she said it was on the to do list.
Steve M
Re:A Few Comments on Cuteness (Score:1)
I am personally surprised that we have not made more inroads into this field by now. It seems that the converse has happened: we humans have adapted to computers more than we have made them adapt to us. I've written about this before on my website. [geocities.com]
On the other hand, if computers displayed the emotional complexity of humans, we would not be able to trust them as much. Such is the paradox of reinventing ourselves.
- Michael
Re:It's more than spoofing.. (Score:1)
If she looks like a babe, why not call her one (Score:1)
Obviously you DON'T work in academia, since anyone who does knows that it's ALWAYS important to package grant proposals in the best light, technical merits be damned.
Sirius Cybernetics (Score:2)
Speaking of cuteness... (Score:1)
---
Put Hemos through English 101!
Re:Not impressive (Score:1)
I watched one of the videos, and I too don't see much, if any, "learning" going on on the robot's part. If anything, the preprogrammed behavior of Kismet is training Cynthia not to wave the toys too wildly in its face.
Kismet is programmed to manipulate. If you neglect it, it's sad. If you pay it enough attention, it's happy. If you overstimulate it, it gets cranky. I had a Tamagotchi that did the same.
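Put another way, the whole act could be one homeostatic "stimulation" variable with two thresholds -- a toy version, not the lab's actual drive model:

# Toy "stimulation drive": neglect makes it sad, the right amount makes it
# happy, too much makes it cranky. Tamagotchi logic, essentially.

class StimulationDrive:
    LOW, HIGH = 0.2, 0.8              # arbitrary comfort band

    def __init__(self):
        self.level = 0.5

    def tick(self, stimulation):
        """stimulation in [0, 1] for this time step; the level decays on its own."""
        self.level = max(0.0, min(1.0, 0.6 * self.level + 0.5 * stimulation))
        if self.level < self.LOW:
            return "sad"              # neglected
        if self.level > self.HIGH:
            return "cranky"           # overstimulated
        return "happy"

drive = StimulationDrive()
for stim in [0.0, 0.0, 0.0, 0.6, 0.6, 1.0, 1.0, 1.0]:
    print(drive.tick(stim))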
If the point of this is to gauge *human* responses to Kismet's algorithms, then it's doing fine, but it's a darn expensive way to be doing it.
What it looks like (Score:1)
Forget Kismet, see Cyc! (Score:1)
Re:No - there is no moral backdrop for this (Score:1)
Head on the tabletop.
Participate in the experiment (Score:2)
Personally, I'm going to arrange some Kismet faces to make a "How Do You Feel Today?" montage...
more input... (Score:1)
great idea, but DAMN! that thing's ugly. Looks like a cross between a Furby and Short Circuit's Johnny 5.
Hardware and stuff (Score:1)
I wonder if they have implemented a 'familiarity' algorithm in the robot, one that uses recognition technology to "remember" facial and structural contexts, so that Cynthia could conceivably be perceived as a 'mother figure', whereas another unfamiliar person in the field of view could possibly cause a negative reaction without any overstimulation. Maybe that's next.
I remember seeing on some cheesy PC TV show that there were programs floating about that used facial recognition with a camera to verify identity. Maybe that could be incorporated into the experiment to make the robot's reactions more 'human'.
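A familiarity memory along those lines could be as simple as keeping a signature for every face seen and scoring how close a new one comes. A hypothetical sketch -- the signature vectors would come from whatever face-recognition front end you trust:

# Hypothetical 'familiarity' memory: remember a signature per face seen and
# react warmly only to faces that match something already remembered.
import numpy as np

class FamiliarityMemory:
    def __init__(self, threshold=0.9):
        self.known = []                       # unit-length signature vectors
        self.threshold = threshold

    def react(self, signature):
        sig = signature / np.linalg.norm(signature)
        score = max((float(sig @ k) for k in self.known), default=0.0)
        self.known.append(sig)                # remember everyone we see
        return "delighted" if score > self.threshold else "wary"

# Cynthia shows up repeatedly; a stranger appears once. Vectors are made up.
memory = FamiliarityMemory()
cynthia = np.array([0.9, 0.1, 0.3])
stranger = np.array([0.1, 0.8, 0.5])
print(memory.react(cynthia))                  # wary: never seen before
print(memory.react(cynthia + 0.01))           # delighted: the 'mother figure'
print(memory.react(stranger))                 # wary again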
doc.
Control experiment needed! (Score:1)
They need to check Kismet with Janet Reno, to see if he's really just reacting to faces!
Re:It's more than spoofing.. (Score:1)
Kismet is not spoofing emotion - he is exhibiting emotion through his resultant behavior. There is nothing fake about this, or fundamentally different from the way emotions work in ourselves. If a human observer anthropomorphises Kismet and assumes he is experiencing anything, then that is a human shortcoming, and not a reflection on the reality of Kismet's emotions (internal state).
Ben
Emotion (Score:1)
The other poster in this thread brought up the example of a bird feeding a gaping-mouthed carp due to a hard-wired response. My Canadian fiancee saw a moose in their backyard trying to mate with a PLASTIC deer! Things like this make it hard to deny that much behaviour is simply a hard-wired response to certain stimuli. I guess you could make the distinction between emotions (biochemical) and hard-wired neural behaviours, but obviously a robot like Kismet could easily parallel either via global state that is modified by stimuli and affects his behaviour (as they have implemented), or via more specific behaviours such as going wide-eyed and slack-jawed in response to girls like Cynthia (not yet implemented).
Incidentally, when I said that emotions make animals "react well", I didn't mean that they make animals react optimally all the time, just that they make them react in ways that are statistically good for the species (i.e. that statistically have survival benefit).
FWIW, my own hobbyist "artificial animal" research is based on evolution, neurology and child development, rather than AI. Cog/Kismet are more along the right direction than projects like Cyc, but nonetheless still rather misguided..
Ben
It's more than spoofing.. (Score:2)
Would it be any more fake if Kismet were programmed to find "baby faces" (big eyes, rounded face, etc.) cute than it is for humans to find Kismet cute?
This isn't spoofing. This is the way emotions work.
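As a thought experiment, "programmed to find baby faces cute" could be nothing deeper than a couple of ratios; the numbers below are pure invention:

# Toy "baby-schema" detector: big eyes relative to the face and a roundish
# outline score as cute. The ratios and weights are made up for illustration.

def cuteness(eye_width, face_width, face_height):
    eye_ratio = eye_width / face_width          # babies: proportionally big eyes
    roundness = min(face_width, face_height) / max(face_width, face_height)
    return 0.6 * min(eye_ratio / 0.35, 1.0) + 0.4 * roundness

print("infant-ish:", round(cuteness(3.5, 10, 11), 2))
print("adult-ish :", round(cuteness(2.0, 10, 14), 2))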
Ben
Robot Acts Childish on Purpose (Score:2)
Re:A-priori universal moralists on the march already (Score:1)
I want to make sure I understand your argument: you have a problem with saying that human intercourse with a robot is perverted because we have no experience to use to make this judgement. The problem with this argument is that, although we have never had robots capable of sexually stimulating humans, we have plenty of experience with sexual acts other than human+human. Any of these acts could be called perverted, but the word "perverted" is a very subjective term.
We have invented an endless list of mechanical devices whose sole purpose is sexual stimulation. Porn shops litter the highway selling these devices as well as videotapes of sexual acts. We have an online army of porn peddlers that, combined, probably out-gross many countries. You can even buy a life-sized doll made out of silicone that has an artificial internal skeleton (Check it out) [realdoll.com]. We have personally witnessed the way in which technology has evolved sexuality, and now you say that we cannot use that experience to judge what seems like the next step in its evolution, particularly one that seems imminent?
As a species we do this all the time: use past experience to predict and judge the future. We are doing it right now in relation to the cloning debate. If we cannot use history as a guide for the future, then we are lost. Note that I said use history as a guide. No one is saying sex with robots is perverted because sex with animals is perverted. We are using reason, logic, history, and personal experience to make a judgement on a scenario. The fact that we have never actually been in that scenario does not make the judgement wrong; it simply means we have to be more careful in making it.
-
Strange... (Score:1)
Interesting approach but... (Score:1)
Re:Interesting approach but... (Score:1)
Re:techie community or juvenile locker room? (Score:1)
Re:Hardware and stuff (Score:1)
Guess they coulda put a PIII in Mr Furby here but then again, maybe they wanted to be able to run their robot on less than 50 W of power...
Antiquated.. hah.
Re:Not impressive (Score:1)
Tilden's assertions about the number of computing units needed to locomote and adapt to the environment seem right on the mark. Have you ever seen or, better yet, built one of these things? It's awesome. I spent 3 years as a hobbyist trying to get half of the robustness that a Tilden design displays.. My BEAM droid wandered my house unattended for two weeks, charging itself when needed and avoiding death by any number of means.. I even lost it for a couple hours once. It had made its way under my bed.
Jeezuz, you break off one of their legs and they can keep on walking with a modified gait! Add in some simple (or not so simple) sensory response to the core neural cascade and you have a VERY insect-like system.. far more "intelligent" than even a Linux machine.
Brooks and Tilden are probably both right.. if it were a question of programming or processing power, we should HAVE at least a marginally intelligent robot by now, but we don't. Intelligence is shaping up to be the result of layers upon layers of communication, potentiation, and redundancy rather than a single processing unit committed to finding the "best" solution to any given problem. Brooks is going after intelligence and Tilden is going after animal behavior. Someday their love child is going to be reading this thread in the late-90's web archives and laugh its ass off.
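On the Brooks side, the "layers" idea is subsumption: stack simple behaviours and let the more urgent ones override the rest. A toy version (nothing like Tilden's analog nervous nets, obviously; sensor names are invented):

# Toy subsumption-style arbiter: each layer either claims the actuators or
# passes; higher-priority layers override lower ones.

def avoid_obstacle(sensors):
    if sensors.get("bump"):
        return "back up and turn"

def seek_charger(sensors):
    if sensors.get("battery", 1.0) < 0.2:
        return "head toward charger"

def wander(sensors):
    return "wander"

LAYERS = [avoid_obstacle, seek_charger, wander]   # highest priority first

def act(sensors):
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command

print(act({"bump": True, "battery": 0.1}))    # avoidance wins
print(act({"bump": False, "battery": 0.1}))   # then recharging
print(act({"bump": False, "battery": 0.9}))   # otherwise just wander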
You have to know this.. get a disk block out of whack on your super-fast UNIX box and what happens? At best.. an error, and at worst.. a crash. The box will ask you what to do. It's dumb. It's a glorified scientific calculator. And so is mine. But Otis (my dearly departed BEAM droid) was "smart".
Re:Not impressive (Score:1)
Re:If they're running low on funds... (Score:1)
(If you're an antisocial retard who's never even talked to a woman)
It's going to be a long, frustrating life if you can never relate to women any better than that. Have fun spending your Saturday nights DIH, my boy.
I was surprised to see how dorky Linus looks (Score:1)
So what exactly does this thing do? (Score:1)
Unless it learned how to be "cute" itself, it's just an AIBO. Why would MIT be pushing this when they've almost certainly got more significant projects running?
Re:A Few Comments on Cuteness (Score:1)
I don't think we understand our own emotions, let alone how to emulate them. I'll go back to the story and see if they have psychology types working with them. I won't eat any commercial meat because of the methods used to raise and harvest it. The animals probably are not healthy to be eating, and even if they are, they are tortured long before they get the shot to the head. The transportation alone is inhumane. Yet I hunt, and will eat what I kill myself or what was killed by someone I know. I know the creature wasn't tortured before it was killed, and I can be somewhat assured of its health and condition. Before you aim the flame throwers: you are all invited anytime after Jan 1, 2000 for some FRESHMEAT on the table... I'll make it look like it came from the grocery store.
Below is quoted:
> The importance of this is not to be underestimated. Large chunks of government policy are designed to protect animals with emotive properties. As a classic comedy routine went, "Sometimes I think the animals just all got in a line. 'What are you?' 'I'm a seal.' 'You're cute, honk that horn, we'll make sure nobody beats you with a club. Now what are you?' 'Cow. Moo.' 'GET ON THE TRUCK! You're a baseball glove.'"
Re:heres my theory... (Score:1)
My parents tell me I didn't say a word until I was three, and then I started with full sentences. Before that I merely imitated the sounds of road vehicles.
But then again, I can't see a digital talking ape either.
Re:Another Dead Kennedy (Score:1)
ACs.
Re:Not impressive (Score:1)
Re:Interesting approach but... (Score:1)
Re:She's ugly (Score:1)
1) You are in no position to tell that.
2) You're wrong, she is. (They all are, except mom.)
> Don't you hate it when you see a girl from behind and you think she's beautiful, only to have her turn around and reveal that she's ugly?
You are calling her ugly? Well that is impolite, bordering on rudeness.
And that does bother me, because there are not that many female techies around, so we should treat them politely. Traitor you are.
-Cic
P.S.: This never happens to me. When they turn around, they always are in fact beautiful. Maybe you don't manage to make them smile at you?
Let's meet at sunrise for a duel to settle this... (Score:1)
in a duel of Quake II to defend the lady's honour.
If only he'd given his name...
Of course, that would be more knightly than geeky, and perhaps the lady would take more satisfaction out of it if she'd frag him herself. (Repeatedly. With a blaster. From behind. With eyes shut and one hand bound to the back.)
But still, the offer stands. (En garde!)