Abhishek Singh used Google's TensorFlow image-recognition algorithms to convert sign language into speech and to transcribe Alexa's responses on screen for deaf users.
Voice assistants like Amazon's Alexa are making life more convenient for those who can hear, but what about the deaf community? Abhishek Singh, well known for creating an AR version of Super Mario Bros., has developed a solution.
Singh identified two problems with devices such as Alexa. First, voice activation works poorly with unclear speech input, a barrier for both deaf people and those with speech difficulties. Second, devices like Alexa reply out loud, which deaf users cannot hear.
To bridge these gaps, Singh turned to TensorFlow, Google's open-source machine learning platform, which lets developers use such algorithms without coding them from scratch. Singh taught his system to recognize American Sign Language (ASL) by recording video of himself performing each sign. By repeating this process many times per sign, he trained an image-recognition neural network in the software. Once trained, the system lets Singh perform a sign into a laptop webcam; the algorithm detects it and displays the corresponding text on screen. The laptop then voices that command with text-to-speech software, which activates Alexa, and the user can read along as Alexa's response is transcribed onto the screen.
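Singh has not yet published his code, but the loop he describes (classify a webcam frame, show the recognized word, and voice it aloud so Alexa can hear it) can be sketched roughly as follows. This is a minimal illustration, not Singh's implementation: the model file, label list, and confidence threshold are all hypothetical, and the pyttsx3 library stands in for whatever text-to-speech software he used.

```python
# A rough sketch of the recognize-then-speak loop, assuming a
# hypothetical pre-trained Keras classifier ("asl_sign_classifier.h5")
# and a hypothetical label list; neither comes from Singh's project.
import cv2                 # webcam capture and display
import numpy as np
import pyttsx3             # offline text-to-speech
import tensorflow as tf

SIGN_LABELS = ["weather", "time", "music", "lights"]  # hypothetical vocabulary
CONFIDENCE_THRESHOLD = 0.9

model = tf.keras.models.load_model("asl_sign_classifier.h5")
tts = pyttsx3.init()
camera = cv2.VideoCapture(0)

while True:
    ok, frame = camera.read()
    if not ok:
        break

    # Resize and normalize the frame to the model's expected input shape.
    batch = cv2.resize(frame, (224, 224))[np.newaxis] / 255.0
    probabilities = model.predict(batch, verbose=0)[0]
    best = int(np.argmax(probabilities))

    # Act only on confident predictions to avoid speaking spurious words.
    if probabilities[best] >= CONFIDENCE_THRESHOLD:
        word = SIGN_LABELS[best]
        cv2.putText(frame, word, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                    1.0, (0, 255, 0), 2)   # show the recognized sign on screen
        tts.say(word)                      # voice it aloud so Alexa can hear it
        tts.runAndWait()

    cv2.imshow("ASL to Alexa", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

camera.release()
cv2.destroyAllWindows()
```

Transcribing Alexa's spoken reply back to the screen, the final step of Singh's pipeline, would require a separate speech-to-text stage and is omitted here.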
Singh admits his method is a roundabout workaround rather than a direct fix. But by drawing attention to the issue, he hopes developers will be more mindful of inclusivity when building assistive technology. Singh plans to release an open-source version of the software soon, so that other developers can build on his work.