We propose a new end-to-end architecture for automatic speech recognition that extends the “listen, attend and spell” (LAS) paradigm. While the main network is trained to predict words, a secondary speller network is optimized to predict word spellings from inner representations of the main network (e.g., word embeddings or context vectors from the attention module). We show that this joint training improves the word error rate of a word-based system and enables additional tasks, such as out-of-vocabulary word detection and recovery. Experiments are conducted on the LibriSpeech dataset, which consists of 1000 hours of read speech.
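The sketch below illustrates the kind of joint objective the abstract describes: a word-level prediction loss combined with an auxiliary spelling loss computed by a secondary speller network from the main network's inner representations. It is a minimal, hypothetical PyTorch example; all module names, tensor shapes, vocabulary sizes, and the mixing weight `spell_weight` are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch (assumed PyTorch setup) of joint word + spelling training.
# Shapes, sizes, and the speller design are illustrative assumptions only.
import torch
import torch.nn as nn

VOCAB_WORDS = 10_000   # assumed word vocabulary size
VOCAB_CHARS = 30       # assumed character vocabulary size
HIDDEN = 256           # assumed size of the main network's inner representation
MAX_SPELL = 16         # assumed maximum word length in characters


class Speller(nn.Module):
    """Secondary network: decodes a word's spelling from an inner
    representation of the main network (e.g. an attention context vector)."""

    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB_CHARS)

    def forward(self, word_repr):
        # word_repr: (batch, HIDDEN); feed it at every character step.
        steps = word_repr.unsqueeze(1).expand(-1, MAX_SPELL, -1)
        hidden, _ = self.rnn(steps)
        return self.out(hidden)          # (batch, MAX_SPELL, VOCAB_CHARS)


# Toy batch; in practice these come from the LAS-style main network.
batch = 8
context_vectors = torch.randn(batch, HIDDEN)           # inner representations
word_logits = nn.Linear(HIDDEN, VOCAB_WORDS)(context_vectors)
word_targets = torch.randint(0, VOCAB_WORDS, (batch,))
spell_targets = torch.randint(0, VOCAB_CHARS, (batch, MAX_SPELL))

speller = Speller()
spell_logits = speller(context_vectors)

# Main word-level loss plus auxiliary spelling loss, optimized jointly.
word_loss = nn.functional.cross_entropy(word_logits, word_targets)
spell_loss = nn.functional.cross_entropy(
    spell_logits.reshape(-1, VOCAB_CHARS), spell_targets.reshape(-1)
)
spell_weight = 0.5                                     # assumed mixing weight
joint_loss = word_loss + spell_weight * spell_loss
joint_loss.backward()
```

Because the speller reads the same representations the word predictor uses, the auxiliary spelling loss acts as a regularizer on those representations; the same speller head could also, in principle, be queried to recover spellings for out-of-vocabulary words, which is the kind of additional task the abstract mentions.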
               