For speech recognition systems, a change in accent can be confusing. Words spoken under the influence of local languages sound different, and a typical HomePod-style device can misinterpret English spoken with an Asian accent, or even something as native as a thick Irish accent.
Deep learning algorithms are the workhorses behind these devices, but the data on which their models are trained may have been acquired from a single region where accent variation is negligible. In India, for instance, though Hindi is widely spoken, the accent of a Bengali speaking Hindi stands in stark contrast to that of a Malayali. Collecting data that captures the intricacies of such native-language influences can be tricky and tedious.
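One way to surface such accent bias before collecting new data is to measure recognition error separately for each accent group. A minimal sketch, computing word error rate (WER) per group; the accent labels and transcripts here are invented purely for illustration:

```python
# Sketch: measuring per-accent error disparity in an ASR system.
# All sample transcripts and accent labels below are hypothetical.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(ref)

# Hypothetical (reference, ASR output) pairs grouped by speaker accent.
samples = {
    "bengali":  [("turn on the light", "turn on the light"),
                 ("set an alarm for six", "set an alarm for sax")],
    "malayali": [("play some music", "play some music"),
                 ("what is the weather", "what is the water")],
}

for accent, pairs in samples.items():
    avg = sum(wer(r, h) for r, h in pairs) / len(pairs)
    print(f"{accent}: WER = {avg:.2f}")
```

A large gap in per-group WER is a quick signal that the training data under-represents certain accents.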
To address these shortcomings of speech recognition systems, researchers at Stanford came up with a novel architecture, and their experimental results show that the model performs well in identifying such accent differences.
Read more: Analytics India