Other (International): End-to-End ASR and Audio Segmentation with Non-autoregressive Insertion-based Model
Yuya Fujita, Shinji Watanabe (Johns Hopkins Univ.), Motoi Omachi
End-to-end models are suitable for realizing practical automatic speech recognition (ASR) systems that run on devices with limited computing capability. Because such models consist of a single neural network, techniques such as quantization and compression that reduce the model footprint are easy to apply. One important issue when realizing such an ASR system is the segmentation of input audio. The ideal scenario is to integrate audio segmentation and ASR into a single model that achieves low latency and high accuracy. In addition, reducing the real-time factor of decoding is important for low latency. The non-autoregressive Transformer is a promising approach because it can decode an N-length token sequence in fewer than N iterations. We propose a method that realizes audio segmentation and non-autoregressive decoding with a single model: an insertion-based model jointly trained with a connectionist temporal classification (CTC) loss. The CTC part is used for segmentation; the insertion-based non-autoregressive decoder then generates a hypothesis that considers the full context within each segment, decoding an N-length token sequence in approximately log(N) iterations. Experimental results on unsegmented evaluation sets of a Japanese dataset show that the method achieves better accuracy than a CTC baseline.