The technology behind the cooking rat in Ratatouille and the dancing penguins in Happy Feet could help bridge stubborn academic gaps between deaf and hearing students. Researchers are using computer-animation techniques, such as motion capture, to build lifelike computer avatars that can reliably and naturally translate written and spoken words into sign language, whether American Sign Language or that of another country.
English and ASL are fundamentally different languages, said computer scientist Matthew Huenerfauth, director of the Linguistic and Assistive Technologies Laboratory at the Rochester Institute of Technology, and translation between them “is just as hard as translating English to Chinese.” Programming avatars to perform that translation is much harder. Not only is ASL grammar different from English, but sign language also depends heavily on facial expressions, gaze changes, body positions, and interactions with the physical space around the signer to make and modify meaning. It’s translation in three dimensions.
About three-quarters of deaf and hard-of-hearing students in America are mainstreamed, learning alongside hearing students in schools and classes where sign-language interpreters are often in short supply. On average, deaf students graduate from high school reading English, a second language to them, at a fourth-grade level, according to a report out of Gallaudet University, the premier university for deaf students. That reading deficit slows their learning in every other subject. It also limits the usefulness of closed captioning for multimedia course material.
Read more: Slate