A new research prototype from Microsoft Research Asia, the Chinese Academy of Sciences, and Beijing Union University uses Kinect technology to translate sign language into spoken language, and spoken language into sign language, in real time. It also translates from one sign language to another (such as between American Sign Language and Chinese Sign Language) and helps people who can hear communicate with people who are deaf or hard of hearing.
The sign language translator builds on computational and sensor technology, specifically Kinect's ability to capture visual and articulation data simultaneously. Machine learning and pattern recognition enable the tool to interpret the meaning of the different gestures captured by the Kinect device. By reducing communication barriers and facilitating social interactions, this tool has the potential to improve the quality of life for people who are deaf or hard of hearing.
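The article does not describe the system's actual algorithms, but one common pattern-recognition approach to gestures captured as skeletal trajectories (such as the joint positions Kinect provides) is template matching with dynamic time warping (DTW): compare an observed hand trajectory against stored example gestures and pick the closest label. The sketch below is purely illustrative; the gesture labels, templates, and 2D coordinates are hypothetical, not drawn from the prototype.

```python
# Illustrative sketch only: DTW-based nearest-template gesture matching.
# All labels, templates, and coordinates are hypothetical examples.
import math

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two (x, y) trajectories."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])  # Euclidean point cost
            # Extend the cheapest of the three admissible alignments.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def classify(trajectory, templates):
    """Return the label of the template closest to the trajectory."""
    return min(templates, key=lambda label: dtw_distance(trajectory, templates[label]))

# Hypothetical templates: a rightward sweep vs. an upward sweep.
templates = {
    "hello": [(0, 0), (1, 0), (2, 0), (3, 0)],
    "thanks": [(0, 0), (0, 1), (0, 2), (0, 3)],
}
captured = [(0, 0), (1.1, 0.1), (2.0, -0.1), (2.9, 0.0)]
print(classify(captured, templates))  # prints "hello"
```

DTW tolerates differences in signing speed by allowing the two trajectories to align non-linearly in time, which is why it is a popular baseline for trajectory-based gesture recognition; a production system would combine many joints, 3D depth data, and a learned statistical model rather than a handful of templates.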