Publications

Workshop (International) An Exploration of Self-Supervised Pretrained Representations for End-to-End Speech Recognition

Xuankai Chang (Carnegie Mellon University), Takashi Maekaku, Pengcheng Guo (Northwestern Polytechnical University), Jing Shi (Institute of Automation, Chinese Academy of Sciences), Yen-Ju Lu (Academia Sinica), Aswin Shanmugam Subramanian (Johns Hopkins University), Tianzi Wang (Johns Hopkins University), Shu-wen Yang (National Taiwan University), Yu Tsao (Academia Sinica), Hung-yi Lee (National Taiwan University), Shinji Watanabe (Carnegie Mellon University)

IEEE Automatic Speech Recognition and Understanding Workshop 2021 (ASRU 2021)

2021.12.15

Self-supervised pretraining on speech data has made substantial progress: high-fidelity representations of the speech signal are learned from large amounts of untranscribed data and show promising performance. Recently, several works have focused on evaluating the quality of self-supervised pretrained representations across various tasks without domain restriction, e.g., SUPERB. However, such evaluations do not provide a comprehensive comparison across many ASR benchmark corpora. In this paper, we focus on the general application of pretrained speech representations to advanced end-to-end automatic speech recognition (E2E-ASR) models. We select several pretrained speech representations and present experimental results on various open-source and publicly available corpora for E2E-ASR. Without any modification to the back-end model architectures or training strategy, some of the experiments with pretrained representations, e.g., WSJ and WSJ0-2mix with HuBERT, reach or outperform the current state-of-the-art (SOTA) recognition performance. Moreover, we explore further scenarios in which the pretrained representations are effective, such as cross-language and overlapped speech. The scripts, configurations, and trained models have been released in ESPnet so that the community can reproduce our experiments and improve upon them.
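The core idea is to keep the E2E-ASR back-end unchanged and only swap the conventional front-end features (e.g., log-Mel filterbanks) for pretrained representations such as HuBERT. The following is a minimal sketch of that feature-extraction step, not the released ESPnet recipes; the torchaudio HuBERT bundle and the file name sample.wav are illustrative assumptions.

```python
import torch
import torchaudio

# Load a pretrained HuBERT model from torchaudio's model zoo
# (an illustrative stand-in for the representations studied in the paper).
bundle = torchaudio.pipelines.HUBERT_BASE
model = bundle.get_model().eval()

# Load a waveform and resample it to the rate the pretrained model expects.
waveform, sr = torchaudio.load("sample.wav")  # hypothetical input file
if sr != bundle.sample_rate:
    waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)

# Extract frame-level representations; extract_features returns a list of
# per-transformer-layer tensors of shape (batch, frames, feature_dim).
with torch.inference_mode():
    features, _ = model.extract_features(waveform)

# These features would be fed to the ASR encoder in place of filterbanks.
print(features[-1].shape)
```

In the paper's setup, such representations feed the existing E2E-ASR encoder directly, which is why no change to the back-end architecture or training strategy is needed.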