Audio-visual speech recognition based on joint training with audio-visual speech enhancement for robust speech recognition

Cited by: 4
Authors
Hwang, Jung-Wook [1 ]
Park, Jeongkyun [2 ]
Park, Rae-Hong [1 ,3 ]
Park, Hyung-Min [1 ]
Affiliations
[1] Sogang Univ, Dept Elect Engn, Seoul 04107, South Korea
[2] Sogang Univ, Dept Artificial Intelligence, Seoul 04107, South Korea
[3] Sogang Univ, ICT Convergence Disaster Safety Res Inst, Seoul 04107, South Korea
Funding
National Research Foundation, Singapore;
Keywords
Audio-visual speech recognition; Audio-visual speech enhancement; Deep learning; Joint training; Conformer; Robust speech recognition; DEREVERBERATION; NOISE;
DOI
10.1016/j.apacoust.2023.109478
Chinese Library Classification
O42 [Acoustics];
Discipline codes
070206; 082403;
Abstract
Visual features are attractive cues for robust automatic speech recognition (ASR). In particular, in acoustically unfavorable environments, speech recognition performance can be improved by combining audio with visual information obtained from the speaker's face rather than using audio alone. For this reason, various audio-visual speech recognition (AVSR) models have recently been studied. However, experimental results for these AVSR models show that the information important for speech recognition is concentrated mainly in the audio signal, while visual information mainly serves to enhance robustness when the audio is corrupted in noisy environments. Consequently, the recognition performance of conventional AVSR models in noisy environments can be improved only to a limited degree. Unlike conventional AVSR models that directly use the input audio-visual information as it is, in this paper we propose an AVSR model that first performs audio-visual speech enhancement (AVSE) to enhance the target speech based on audio-visual information, and then uses both the audio enhanced by the AVSE and visual information such as the speaker's lips or face. In particular, we propose a deep AVSR model trained end-to-end as a single model by integrating a conformer-based AVSR model with hybrid decoding and an AVSE model based on the U-net with recurrent neural network (RNN) attention (RA). Experimental results on the LRS2-BBC and LRS3-TED datasets demonstrate that the AVSE model effectively suppresses corrupting noise and the AVSR model successfully achieves noise robustness. In particular, the proposed jointly trained model integrating the AVSE and AVSR stages into one model showed better recognition performance than the other compared methods. © 2023 Elsevier Ltd. All rights reserved.
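The end-to-end joint training described in the abstract, where the AVSE front-end and the AVSR back-end are optimized together as one model, amounts to back-propagating a single combined objective through both stages. A minimal sketch, assuming a weighted sum of an MSE enhancement loss and a negative log-likelihood recognition loss; the weight `alpha` and the exact loss forms are illustrative assumptions and are not taken from the paper:

```python
# Hypothetical sketch of a joint AVSE+AVSR training objective.
# The real model uses a U-net/RA enhancement stage and a conformer-based
# recognizer with hybrid decoding; here both losses are simple stand-ins.

def enhancement_loss(enhanced, clean):
    """Mean squared error between enhanced and clean spectra (assumed)."""
    return sum((e - c) ** 2 for e, c in zip(enhanced, clean)) / len(clean)

def recognition_loss(log_probs, target):
    """Negative log-likelihood of the target tokens, an assumed stand-in
    for the hybrid CTC/attention loss of the conformer-based AVSR stage."""
    return -sum(log_probs[t] for t in target) / len(target)

def joint_loss(enhanced, clean, log_probs, target, alpha=0.3):
    """Single objective back-propagated through both stages in joint training."""
    return alpha * enhancement_loss(enhanced, clean) \
        + (1.0 - alpha) * recognition_loss(log_probs, target)

# Toy numbers just to show how the two terms combine:
loss = joint_loss(
    enhanced=[0.9, 0.1], clean=[1.0, 0.0],
    log_probs={"a": -0.1, "b": -2.0}, target=["a", "b"],
    alpha=0.3,
)
print(round(loss, 4))
```

Training the two stages under one loss, rather than pre-training the enhancer separately, lets the enhancement stage learn to preserve exactly the cues the recognizer needs.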
Pages: 8
Related papers
50 records
  • [21] Connectionism based audio-visual speech recognition method. Che N, Zhu Y-M, Zhao J, Sun L, Shi L-J, Zeng X-W. Journal of Jilin University (Engineering and Technology Edition), 2024, 54(10): 2984-2993
  • [22] Audio-visual modeling for bimodal speech recognition. Kaynak MN, Zhi Q, Cheok AD, Sengupta K, Chung KC. 2001 IEEE International Conference on Systems, Man, and Cybernetics, 2002: 181-186
  • [23] Bimodal fusion in audio-visual speech recognition. Zhang XZ, Mersereau RM, Clements M. 2002 International Conference on Image Processing, Vol. I, Proceedings, 2002: 964-967
  • [24] Robust self-supervised audio-visual speech recognition. Shi B, Hsu W-N, Mohamed A. INTERSPEECH 2022, 2022: 2118-2122
  • [25] Audio-visual deep learning for noise robust speech recognition. Huang J, Kingsbury B. 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013: 7596-7599
  • [26] Speech enhancement and recognition in meetings with an audio-visual sensor array. Maganti HK, Gatica-Perez D, McCowan I. IEEE Transactions on Audio, Speech, and Language Processing, 2007, 15(8): 2257-2269
  • [27] Audio-visual speech codecs: rethinking audio-visual speech enhancement by re-synthesis. Yang K, Markovic D, Krenn S, Agrawal V, Richard A. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022: 8217-8227
  • [28] Lite audio-visual speech enhancement. Chuang S-Y, Tsao Y, Lo C-C, Wang H-M. INTERSPEECH 2020, 2020: 1131-1135
  • [29] Audio-visual enhancement of speech in noise. Girin L, Schwartz J-L, Feng G. Journal of the Acoustical Society of America, 2001, 109(6): 3007-3020
  • [30] AV2AV: direct audio-visual speech to audio-visual speech translation with unified audio-visual speech representation. Choi J, Park SJ, Kim M, Ro YM. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024: 27315-27327