University of Virginia cognitive scientist Per Sederberg has a fun experiment you can try at home. Take out your smartphone and, using a voice assistant such as Google's, say the word "octopus" as slowly as you can.
Your device will struggle to repeat back what you just said. It might supply a nonsensical response, or it might give you something close but still off, like "toe pus." Gross!
The point, Sederberg said, is that when it comes to processing auditory signals the way humans and other animals do, current artificial intelligence remains a bit hard of hearing, despite all of the computing power that heavyweights such as Google, DeepMind, IBM and Microsoft have dedicated to the task.
The outcomes range from comical and mildly frustrating to downright alienating for people with speech impairments.
But using recent breakthroughs in neuroscience as a model, collaborative research at UVA has made it possible to convert existing AI neural networks into technology that can truly hear us, no matter the pace at which we speak.
The deep learning tool is called SITHCon, and by generalizing its input, it can understand words spoken at speeds different from those the network was trained on.
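As a rough intuition for how speed-invariance can work, here is a minimal sketch, not the SITHCon implementation: if a system represents time on a logarithmic axis, then playing a word faster or slower merely shifts its pattern along that axis, and a shift-invariant readout such as max-pooling reports the same features either way. The `log_sampled` and `word` functions below are invented for illustration.

```python
import numpy as np

# Hypothetical illustration, not the SITHCon code: on a logarithmic time
# axis, speeding a signal up or slowing it down becomes a simple shift,
# so a shift-invariant readout (e.g., max-pooling) gives the same answer
# for slow and fast versions of the same word.

def log_sampled(signal, speed, n_taps=41):
    """Sample signal(t * speed) at log-spaced lags t."""
    taps = np.logspace(0, 4, n_taps)  # lags 1..10,000, evenly spaced in log-time
    return signal(taps * speed)

def word(t):
    # A toy "word": a bump of activity centered at lag 100.
    return np.exp(-(np.log10(t / 100.0)) ** 2)

normal = log_sampled(word, speed=1.0)
fast = log_sampled(word, speed=10.0)  # the same word, ten times faster

# On the log axis, the fast version is the normal one shifted ten taps
# toward shorter lags; a max over taps is therefore unchanged.
print(np.allclose(fast[:-10], normal[10:]))   # True
print(np.isclose(fast.max(), normal.max()))   # True
```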