Audio-Visual Speech Enhancement Using Multimodal Deep Convolutional Neural Networks

Cited by: 152

Authors
Hou, Jen-Cheng [1]
Wang, Syu-Siang [2]
Lai, Ying-Hui [3]
Tsao, Yu [1]
Chang, Hsiu-Wen [4]
Wang, Hsin-Min [5]
Affiliations
[1] Acad Sinica, Res Ctr Informat Technol Innovat, Taipei 11529, Taiwan
[2] Natl Taiwan Univ, Grad Inst Commun Engn, Taipei 10617, Taiwan
[3] Natl Yang Ming Univ, Dept Biomed Engn, Taipei 112, Taiwan
[4] Mackay Med Coll, Dept Audiol & Speech Language Pathol, New Taipei 252, Taiwan
[5] Acad Sinica, Inst Informat Sci, Taipei 11529, Taiwan
Keywords
Audio-visual systems; deep convolutional neural networks; multimodal learning; speech enhancement; VOICE ACTIVITY DETECTION; NOISE-REDUCTION; SOURCE SEPARATION; DENOISING AUTOENCODER; RECOGNITION; ALGORITHMS; INTELLIGIBILITY;
DOI
10.1109/TETCI.2017.2784878
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Speech enhancement (SE) aims to reduce noise in speech signals. Most SE techniques focus only on audio information. In this paper, inspired by multimodal learning, which utilizes data from different modalities, and by the recent success of convolutional neural networks (CNNs) in SE, we propose an audio-visual deep CNN (AVDCNN) SE model that incorporates audio and visual streams into a unified network. We also propose a multitask learning framework for reconstructing audio and visual signals at the output layer. More precisely, the proposed AVDCNN model is structured as an audio-visual encoder-decoder network, in which audio and visual data are first processed by individual CNNs and then fused into a joint network to generate enhanced speech (the primary task) and reconstructed images (the secondary task) at the output layer. The model is trained in an end-to-end manner, and its parameters are jointly learned through backpropagation. We evaluate the enhanced speech using five instrumental criteria. Results show that the AVDCNN model yields notably superior performance compared with an audio-only CNN-based SE model and two conventional SE approaches, confirming the effectiveness of integrating visual information into the SE process. The AVDCNN model also outperforms an existing audio-visual SE model, confirming its ability to effectively combine audio and visual information in SE.
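To make the described architecture concrete, below is a minimal PyTorch sketch of an audio-visual network with per-modality CNN encoders, a fusion stage, and two output heads trained with a joint multitask loss. All layer widths, kernel sizes, and input formats (257-bin spectral frames, 64x64 grayscale mouth-region images, the 0.1 loss weight) are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AVDCNN(nn.Module):
    """Minimal sketch of an audio-visual encoder-decoder SE network."""
    def __init__(self, n_freq=257):
        super().__init__()
        # Audio stream: 1-D convolutions over frames of a noisy spectrogram
        # (treating the n_freq bins as input channels; an assumed input format).
        self.audio_enc = nn.Sequential(
            nn.Conv1d(n_freq, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(256, 128, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Visual stream: 2-D convolutions over 64x64 grayscale mouth-region frames.
        self.visual_enc = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # -> 32 * 4 * 4 = 512 features
        )
        # Fusion network: fully connected layers over the concatenated features.
        self.fusion = nn.Sequential(
            nn.Linear(128 + 512, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
        )
        # Two heads: enhanced speech (primary task), reconstructed image (secondary task).
        self.speech_head = nn.Linear(512, n_freq)
        self.image_head = nn.Linear(512, 64 * 64)

    def forward(self, noisy_spec, mouth_img):
        # noisy_spec: (batch, n_freq, frames); mouth_img: (batch * frames, 1, 64, 64)
        a = self.audio_enc(noisy_spec)              # (batch, 128, frames)
        a = a.permute(0, 2, 1).reshape(-1, 128)     # one feature vector per frame
        v = self.visual_enc(mouth_img)              # (batch * frames, 512)
        h = self.fusion(torch.cat([a, v], dim=1))   # joint audio-visual representation
        return self.speech_head(h), self.image_head(h)

# Toy end-to-end training step: both heads share gradients via a joint loss.
model = AVDCNN()
noisy = torch.randn(2, 257, 10)            # 2 utterances, 10 spectral frames each
mouths = torch.randn(20, 1, 64, 64)        # one mouth image per frame
clean, frames = torch.randn(20, 257), torch.randn(20, 64 * 64)
enh_speech, rec_image = model(noisy, mouths)
loss = F.mse_loss(enh_speech, clean) + 0.1 * F.mse_loss(rec_image, frames)
loss.backward()                            # joint learning through backpropagation
```

The image-reconstruction term weighted at 0.1 is a placeholder: in a multitask setup of this kind, that weight trades off the secondary reconstruction objective against the primary enhancement objective, which matches the abstract's primary/secondary task framing.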
Pages: 117-128
Page count: 12
Related Papers (showing items 41-50 of 50)
  • [41] Mixture of Inference Networks for VAE-Based Audio-Visual Speech Enhancement
    Sadeghi, Mostafa
    Alameda-Pineda, Xavier
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2021, 69 (69) : 1899 - 1909
  • [42] AVSE CHALLENGE: AUDIO-VISUAL SPEECH ENHANCEMENT CHALLENGE
    Blanco, Andrea Lorena Aldana
    Valentini-Botinhao, Cassia
    Klejch, Ondrej
    Gogate, Mandar
    Dashtipour, Kia
    Hussain, Amir
    Bell, Peter
    2022 IEEE SPOKEN LANGUAGE TECHNOLOGY WORKSHOP, SLT, 2022, : 465 - 471
  • [43] Edged based Audio-Visual Speech enhancement demonstrator
    Chen, Song
    Gogate, Mandar
    Dashtipour, Kia
    Kirton-Wingate, Jasper
    Hussain, Adeel
    Doctor, Faiyaz
    Arslan, Tughrul
    Hussain, Amir
    INTERSPEECH 2024, 2024, : 2032 - 2033
  • [44] Inventory-Based Audio-Visual Speech Enhancement
    Kolossa, Dorothea
    Nickel, Robert
    Zeiler, Steffen
    Martin, Rainer
    13TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2012 (INTERSPEECH 2012), VOLS 1-3, 2012, : 586 - 589
  • [45] Enhancing Quality and Accuracy of Speech Recognition System by Using Multimodal Audio-Visual Speech signal
    El Maghraby, Eslam E.
    Gody, Amr M.
    Farouk, M. Hesham
    ICENCO 2016 - 2016 12TH INTERNATIONAL COMPUTER ENGINEERING CONFERENCE (ICENCO) - BOUNDLESS SMART SOCIETIES, 2016, : 219 - 229
  • [46] Audio-Visual Speech Recognition System Using Recurrent Neural Network
    Goh, Yeh-Huann
    Lau, Kai-Xian
    Lee, Yoon-Ket
    PROCEEDINGS OF THE 2019 4TH INTERNATIONAL CONFERENCE ON INFORMATION TECHNOLOGY (INCIT): ENCOMPASSING INTELLIGENT TECHNOLOGY AND INNOVATION TOWARDS THE NEW ERA OF HUMAN LIFE, 2019, : 38 - 43
  • [47] Lip landmark-based audio-visual speech enhancement with multimodal feature fusion network
    Li, Yangke
    Zhang, Xinman
    NEUROCOMPUTING, 2023, 549
  • [48] Audio-Visual Emotion Recognition Using a Hybrid Deep Convolutional Neural Network based on Census Transform
    Cornejo, Jadisha Yarif Ramirez
    Pedrini, Helio
    2019 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC), 2019, : 3396 - 3402
  • [49] Multimodal Sparse Transformer Network for Audio-Visual Speech Recognition
    Song, Qiya
    Sun, Bin
    Li, Shutao
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (12) : 10028 - 10038
  • [50] A Robust Audio-visual Speech Recognition Using Audio-visual Voice Activity Detection
    Tamura, Satoshi
    Ishikawa, Masato
    Hashiba, Takashi
    Takeuchi, Shin'ichi
    Hayamizu, Satoru
    11TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2010 (INTERSPEECH 2010), VOLS 3 AND 4, 2010, : 2702 - +