Published in 2021 in "IEEE Signal Processing Letters"
DOI: 10.1109/lsp.2020.3044547
Abstract: Very deep transformers outperform conventional bi-directional long short-term memory networks for automatic speech recognition (ASR) by a significant margin. However, being autoregressive models, their computational complexity is still a prohibitive factor in their deployment into…
Keywords: autoregressive transformer; speech recognition; non-autoregressive