Microsoft Research Uses Kinect To Translate Between Spoken and Sign Languages

An anonymous reader writes in with a neat project Microsoft is working on to translate sign language with a Kinect. "Microsoft Research is now using the Kinect to bridge the gap between folks who don't speak the same language, whether they can hear or not. The Kinect Sign Language Translator is a research prototype that can translate sign language into spoken language and vice versa. The best part? It does it all in real time."
  • by Press2ToContinue ( 2424598 ) * on Wednesday October 30, 2013 @08:30PM (#45287207)

    Kinect translation with new autocorrect-for-ASL: "I really like your tits"

    Thanks Microsoft!

    • This little joke actually raises an interesting topic, as sign language usually does not follow the same grammar as spoken/written language. Sign languages can differ from country to country or from region to region, similar to dialects. The hands aren't the only things used to convey meanings either: facial expressions or body movements are used too.

      An example using the line above could be to sign the words "breasts", "you" and "like", in that order, with a facial expression adding the meaning "very much".

  • How do you sign: Dear aunt, let's set so double the killer delete select all?

  • There are a lot of contextual clues necessary to understand sign language. Most conversations would seem "faux caveman"-like to an outsider: a lot of Noun Verb Noun going on...

    I'm going to have to watch the video from another machine, but I'm more interested in the bumper at the bottom that has real-time English/Chinese translation in your own voice...

    • They (I assume the same group) demoed the real-time English/Chinese translation in your own voice last year. It was really impressive, and the results were surprisingly good.

      I do wonder how it deals with phonemes that are present in one language but not in another; maybe there's a "training process" you have to do initially to make sure it has enough recorded samples to get full coverage of the target language.
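
      As a toy illustration of that coverage question, a sketch like the following shows the basic bookkeeping such a training process would need. The phoneme inventories here are tiny, made-up subsets for illustration only; real inventories are far larger.

      ```python
      # Toy sketch: which target-language phonemes still lack a recorded
      # sample from the speaker? The inventories below are invented,
      # heavily simplified subsets, not real phoneme sets.
      recorded = {"θ", "ð", "ɹ", "æ", "ʃ"}   # sounds the speaker has recorded
      target = {"ʂ", "tɕ", "ɕ", "ɹ", "ʃ"}    # sounds the target language needs

      missing = target - recorded            # plain set difference
      print(f"Need training samples for: {missing}")  # {'ʂ', 'tɕ', 'ɕ'}
      ```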

  • Hardly revolutionary (Score:4, Interesting)

    by afidel ( 530433 ) on Wednesday October 30, 2013 @08:48PM (#45287327)

    We were doing real-time ASL translation to text using the webcam on the Indy2 workstation back in 1997. The success rate was about 85%, and most of the misses were from occlusion (hidden-object) problems, which the Kinect does nothing to help with.

    • And for my master's thesis in 2000 I worked on a thing called TESSA [uea.ac.uk]. Specifically, my part was the compression algorithm that reduced the data points for the sign-language half from 15 dimensions (one dimension for each joint in the fingers and thumb) down to 3 dimensions, since computing power back then was too limited to deal with any more in real time. In the end I used a combination of hidden Markov models and active shape models to do this (see the sketch below for the general idea). It was really interesting stuff.
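
      For a rough feel of that kind of dimensionality reduction, here is a minimal sketch assuming 15 joint-angle values per frame. It uses plain PCA (via SVD) as a simpler stand-in for the HMM/active-shape-model combination described above; the function name and the simulated data are invented for illustration.

      ```python
      # Illustrative stand-in only: the original work combined hidden Markov
      # models and active shape models; plain PCA is used here just to show
      # the idea of compressing 15 joint-angle dimensions down to 3.
      import numpy as np

      def reduce_joint_angles(frames, n_components=3):
          """Project frames of 15 joint angles onto their top principal components.

          frames: (n_frames, 15) array, one row of joint angles per frame.
          Returns an (n_frames, n_components) array of compressed coordinates.
          """
          centered = frames - frames.mean(axis=0)    # remove the mean pose
          # SVD of the centered data; rows of vt are the principal directions
          _, _, vt = np.linalg.svd(centered, full_matrices=False)
          return centered @ vt[:n_components].T      # project onto top components

      # Example: 100 frames of simulated joint-angle data
      rng = np.random.default_rng(0)
      frames = rng.normal(size=(100, 15))
      print(reduce_joint_angles(frames).shape)       # (100, 3)
      ```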
    • And these people are NOT doing real-time translation. They're having people sign each sign in stilted isolation.

      As

      Though

      They

      Talked

      Like

      This

  • As far as ways to communicate online go, I'm not sure how useful a tool this would be. I can definitely see how this could easily become the best way to learn sign language, though, if paired with Rosetta Stone-like tutoring software. My wife has been planning to learn sign language soon; I'm sure she'd love to have something like this as a learning aid.
    • It would probably be invaluable for video chats/conferences (or even in-person discussions) with a mix of Deaf people and hearing ones who don't know the specific sign language in use, as it would let the two groups interact directly without requiring a translator. That would be great for private, sensitive conversations between two individuals and for relatives meeting for the first time, and it might have a good impact on Deaf people's employability.

  • I tried to tackle the same thing a couple of months back using OpenCV and a smartphone. Before starting I consulted with people who knew sign language, and the problem is not so much recognizing hand gestures as facial expressions. They say that most of the conversation happens with a given look, a frown, or the movement of the lips.

    Needless to say, I thought the problem was too hard to solve on a smartphone, so I postponed the project. I don't think the Kinect can do a much better job at picking up those facial expressions, either.
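
    For anyone curious, here is a minimal sketch of the "easy half" of that approach: isolating a hand with OpenCV via a rough skin-color threshold and drawing its contour. The HSV range is a guess that depends heavily on lighting, and this deliberately ignores the facial-expression problem described above.

    ```python
    # Minimal hand-isolation sketch (OpenCV 4): skin-color threshold plus
    # contour detection. This is the easy part; it does NOT touch the hard
    # part, reading facial expressions.
    import cv2

    cap = cv2.VideoCapture(0)                      # default camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Rough skin-tone range in HSV; lighting-dependent, tune per setup
        mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            hand = max(contours, key=cv2.contourArea)  # biggest blob ~ the hand
            cv2.drawContours(frame, [hand], -1, (0, 255, 0), 2)
        cv2.imshow("hand", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
    ```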
  • There is a lot of research being done with the Kinect by BS and master's students, mostly around physiotherapy. This is one of those creative applications that makes everybody say, after hearing about it, "damn, why didn't I think of that." Very creative use of the Kinect.

  • Looking at the video in the article, it seems that "in real time" means "at about 1/4 of the speed of regular signing".

    Imagine. Having. To. Speak. Like. Kirk.

  • by hakioawa ( 127597 ) on Thursday October 31, 2013 @10:29AM (#45290275)

    Disclaimer: yes, I work for Microsoft. No, not on these projects.

    This was demoed live in front of 30K MSFT employees at our annual company meeting. It nearly brought me to tears. Yes, I can see through demoware, and yes, it's highly imperfect, but honestly it was the single most impressive use of technology I've ever seen. It was both novel and simple. It combined hardware, algorithms, user experience, and cloud scale. I don't know if it will ever go anywhere, though I expect that it will.

    The key point here is that these are off-the-shelf components: Kinect and gesture APIs combined with machine translation and text-to-speech. It's important that these are all, or nearly all, public production APIs. Such a system 10 years ago, even if possible, would never have made it to market because of the tiny user base. Today we can build such apps for the 0.01% of the population that need Mandarin Sign Language translated to English, and it can be done cost-effectively. That is the point: technology being used to address real problems for underserved communities. So yes, maybe people researched automated sign language recognition years ago, but bringing it to market and enabling a scenario for real people is a wholly different beast.
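
    To make the "off-the-shelf components wired together" point concrete, here is a hypothetical sketch of the composition. None of these functions are real Microsoft APIs; they are invented, canned stand-ins for the stages described above (gesture capture and sign recognition, machine translation, text-to-speech).

    ```python
    # Hypothetical pipeline sketch: every function below is an invented
    # stand-in, not a real API, showing only how the stages compose.

    def recognize_signs(skeleton_frames):
        """Stand-in for Kinect gesture recognition: frames -> sign 'gloss' tokens."""
        return ["YOU", "NAME", "WHAT"]        # canned output for the sketch

    def translate(gloss_tokens):
        """Stand-in for a machine-translation service: glosses -> a sentence."""
        canned = {("YOU", "NAME", "WHAT"): "What is your name?"}
        return canned.get(tuple(gloss_tokens), " ".join(gloss_tokens))

    def speak(text):
        """Stand-in for a text-to-speech engine."""
        print(f"[TTS] {text}")

    def pipeline(skeleton_frames):
        # Each stage is independently replaceable; that modularity is the point.
        speak(translate(recognize_signs(skeleton_frames)))

    pipeline(skeleton_frames=[])              # prints: [TTS] What is your name?
    ```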

  • I knew it! The Federation runs on Microsoft technology. Guess that explains all the exploding consoles and transporter accidents.

  • The ASL interpreters I know do a lot of on-call work for medical, mental health and educational purposes. One thing they mentioned is ASL has regional dialects.
