Students at New York University have developed an app that uses AR and machine learning to translate sign language into spoken words, and vice versa. Unfortunately, many deaf people still feel that they live fairly isolated lives, with everyday tasks such as making appointments or going out often massively complicated by the lack of general knowledge of sign language among the hearing population.

The ARSL app uses computer vision models to translate sign language into spoken language, and the other way around, in real time. The app prototype has gained funding as part of the Verizon Connected Futures challenge, which was set up to create a pipeline between NYC universities and the company to promote next-generation technology.
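The team hasn't published implementation details, but a real-time sign-to-speech pipeline of this kind typically chains together frame capture, a hand-tracking model, a recognition model, and text-to-speech. The sketch below is purely illustrative and is not the ARSL code: it assumes MediaPipe for hand landmarks and pyttsx3 for speech output, and `classify_sign` is a hypothetical stand-in for whatever trained model does the actual recognition.

```python
# Illustrative sketch only; not the ARSL team's actual implementation.
# Assumes: mediapipe for hand landmarks, opencv-python for webcam capture,
# pyttsx3 for offline text-to-speech. classify_sign() is a hypothetical
# placeholder for a trained sign-recognition model.
import cv2
import mediapipe as mp
import pyttsx3

def classify_sign(hand_landmarks):
    """Hypothetical placeholder: map detected hand landmarks to a word."""
    return None  # a real recognition model would go here

hands = mp.solutions.hands.Hands(max_num_hands=2)
speech = pyttsx3.init()
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures frames in BGR.
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        word = classify_sign(result.multi_hand_landmarks)
        if word:
            speech.say(word)        # speak the recognised sign aloud
            speech.runAndWait()
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
```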

The ARSL team is headed by Zhongheng Li, who was inspired by his experience of growing up with two deaf parents. Speaking of his mother, who moved to the US from Hong Kong, Li noted her frustration at the fact that there was no universal sign language.

You can see the team’s presentation video here.

The ARSL app is currently limited to a single use case: making an appointment at a medical clinic. The task of expanding its functionality is a complex one. Sign language is an intricate system of movement and facial expressions, with words often coming in a different order than they would in English. Add in the individual subtleties that each person brings to signing, and a machine attempting to learn and translate it has a great deal to handle.
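One common way to cope with those temporal and grammatical subtleties is to classify whole sequences of landmark frames rather than single poses, so that ordering, speed, and signer-to-signer variation become part of the input signal. The snippet below is a generic sketch of that idea in PyTorch, not the ARSL architecture; the feature and label sizes are placeholder assumptions.

```python
# Generic sketch of sequence-level sign classification; not ARSL's model.
# Assumed shapes: each frame is a flattened landmark vector
# (e.g. 21 hand points x 3 coords = 63 features); labels are sign glosses.
import torch
import torch.nn as nn

class SignSequenceClassifier(nn.Module):
    def __init__(self, num_features=63, hidden_size=128, num_signs=100):
        super().__init__()
        # The LSTM consumes the whole gesture as a time series, so word
        # order and per-signer timing differences are modelled directly.
        self.lstm = nn.LSTM(num_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_signs)

    def forward(self, frames):             # frames: (batch, time, features)
        _, (last_hidden, _) = self.lstm(frames)
        return self.head(last_hidden[-1])  # logits over candidate signs

model = SignSequenceClassifier()
dummy_clip = torch.randn(1, 30, 63)        # 30 frames of landmark features
print(model(dummy_clip).shape)             # torch.Size([1, 100])
```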

Source: VIRTUAL REALITY NEWS