
Siri, why can’t you understand what I’m saying?!

Speech and voice impairments are more common than you might think. According to the National Institute on Deafness and Other Communication Disorders (NIDCD), more than three million Americans stutter, and nearly 1 in 12 children has a disorder related to voice, speech, language, or swallowing.

So what does this mean for the smartphone in your pocket? If you have a speech impairment, it often means Siri simply can't understand you. And with the explosion of voice-controlled technology like the Amazon Echo, Google Home, and the upcoming Apple HomePod, people with speech impairments risk being left behind. Think of those cool commercials where Alec Baldwin commands Alexa to order him new socks, and the excitement of picturing what your own personal Echo assistant could do for you. Unfortunately, not everyone gets that opportunity.

And it doesn’t just apply to people with speech impairments. Heavily accented English speakers face the same challenges with voice-controlled technology: Siri misunderstands a thick Scottish accent nearly as often as she misunderstands a stutter. Voice recognition is a tricky, complex business, and it will only move further into the spotlight as we advance down the path to hands-free everything. Picture a home you can command to turn off the lights, lock the doors, adjust the thermostat, and chill the wine… all without leaving the couch, just by talking. Or a self-driving car that whisks you away after you simply speak the address as you climb in.

How will people with impairments (or heavy accents) manage? Luckily, an innovative startup based in Tel Aviv called Voiceitt has chosen to tackle this problem. According to this article, the application has users read short, useful sentences such as “I’m hungry” or “Open the door,” and the software records and learns the user’s particular pronunciation. Once the learning sequence is complete, the application can turn the user’s statements into normalized speech, output as either audio or text. Voice-controlled technology can then recognize and understand that normalized speech, letting users order new socks from Alexa while lying in bed.
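To make the idea concrete, here is a minimal sketch of that learn-then-normalize loop. Voiceitt’s actual models are not public, so everything here is an assumption: the `PhraseNormalizer` class, its method names, and the use of simple fuzzy string matching (Python’s standard-library `difflib`) as a stand-in for real acoustic modeling. It only illustrates the workflow the article describes: pair a user’s individual pronunciations with normalized phrases during a learning phase, then map new utterances to the closest learned phrase.

```python
from difflib import get_close_matches

class PhraseNormalizer:
    """Hypothetical sketch of a per-user phrase normalizer.

    Real systems learn from audio; here we use text stand-ins for
    the user's recorded pronunciations to show the flow only.
    """

    def __init__(self):
        # Maps the user's own pronunciation -> the normalized phrase.
        self.learned = {}

    def learn(self, user_pronunciation, normalized_phrase):
        # "Learning sequence": record how this user says a phrase.
        self.learned[user_pronunciation] = normalized_phrase

    def normalize(self, utterance):
        # Find the closest learned pronunciation; return its
        # normalized phrase, or None if nothing is close enough.
        matches = get_close_matches(utterance, self.learned, n=1, cutoff=0.6)
        return self.learned[matches[0]] if matches else None

normalizer = PhraseNormalizer()
normalizer.learn("ah hungy", "I'm hungry")
normalizer.learn("opa da doh", "Open the door")
print(normalizer.normalize("ah hungee"))  # closest match -> "I'm hungry"
```

The normalized text could then be handed to Siri or Alexa as-is, which is the bridging role the article describes.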

The ultimate goal is to integrate Voiceitt’s unique accessibility features with all the major technology platforms, such as Google, Apple, IBM, and Microsoft. Supplying the technology as an OEM (original equipment manufacturer) component would skip the middle step entirely, letting users speak directly to Siri or Google, which would already know how to recognize the user’s speech patterns and understand them.
