Large-Scale Pre-Training of End-to-End Multi-Talker ASR for Meeting Transcription with Single Distant Microphone

Cited by: 10
Authors
Kanda, Naoyuki [1 ]
Ye, Guoli [1 ]
Wu, Yu [2 ]
Gaur, Yashesh [1 ]
Wang, Xiaofei [1 ]
Meng, Zhong [1 ]
Chen, Zhuo [1 ]
Yoshioka, Takuya [1 ]
Affiliations
[1] Microsoft Cloud AI, Redmond, WA 98052 USA
[2] Microsoft Research Asia, Beijing, China
Keywords
multi-talker speech recognition; speaker counting; serialized output training; speaker diarization; speech; convolution
DOI
10.21437/Interspeech.2021-102
Chinese Library Classification: R36 [Pathology]; R76 [Otorhinolaryngology]
Subject classification codes: 100104; 100213
Abstract
Transcribing meetings containing overlapped speech with only a single distant microphone (SDM) has been one of the most challenging problems for automatic speech recognition (ASR). While various approaches have been proposed, all previous studies on the monaural overlapped speech recognition problem were based on either simulated data or small-scale real data. In this paper, we extensively investigate a two-step approach in which we first pre-train a serialized output training (SOT)-based multi-talker ASR model on large-scale simulated data and then fine-tune the model with a small amount of real meeting data. Experiments are conducted by using 75 thousand (K) hours of our internal single-talker recordings to simulate a total of 900K hours of multi-talker audio segments for supervised pre-training. After fine-tuning on the 70 hours of AMI-SDM training data, our SOT ASR model achieves a word error rate (WER) of 21.2% on the AMI-SDM evaluation set while automatically counting the speakers in each test segment. This result is not only significantly better than the previous state-of-the-art WER of 36.4% obtained with oracle utterance boundary information, but also better than the result of a similarly fine-tuned single-talker ASR model applied to beamformed audio.
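For illustration, the following is a minimal sketch (not taken from the paper) of the simulation step described in the abstract: single-talker utterances are mixed with random offsets to form overlapped multi-talker segments, and the reference transcript is serialized in order of each speaker's first emission with a speaker-change token, in the style of serialized output training (SOT). The function name simulate_segment, the token string <sc>, and all parameter choices are illustrative assumptions, not the authors' actual pipeline.

import random
import numpy as np

SC_TOKEN = "<sc>"  # hypothetical speaker-change symbol inserted between talkers

def simulate_segment(utterances, sample_rate=16000, max_delay_s=5.0):
    """Mix single-talker utterances into one overlapped multi-talker segment.

    utterances: list of (waveform ndarray, transcript str) pairs, each from a
                different speaker.
    Returns the mixed waveform and the SOT-style serialized transcript.
    """
    # Draw a random start offset for each utterance so that the utterances
    # partially overlap, mimicking conversational speech.
    offsets = [int(random.uniform(0.0, max_delay_s) * sample_rate)
               for _ in utterances]
    total_len = max(off + len(wav)
                    for off, (wav, _) in zip(offsets, utterances))
    mixture = np.zeros(total_len, dtype=np.float32)
    for off, (wav, _) in zip(offsets, utterances):
        mixture[off:off + len(wav)] += wav

    # Serialize the references in order of each speaker's start time
    # (first-in-first-out), joined by the speaker-change token.
    order = sorted(range(len(utterances)), key=lambda i: offsets[i])
    transcript = f" {SC_TOKEN} ".join(utterances[i][1] for i in order)
    return mixture, transcript

Pairs of (mixture, transcript) produced this way would serve as the supervised pre-training examples before fine-tuning on real meeting recordings such as AMI-SDM.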
Pages: 3430-3434
Number of pages: 5