MAMGAN: Multiscale attention metric GAN for monaural speech enhancement in the time domain

Cited by: 9
Authors
Guo, Huimin [1 ,2 ]
Jian, Haifang [1 ]
Wang, Yequan [3 ]
Wang, Hongchang [1 ,2 ]
Zhao, Xiaofan [3 ]
Zhu, Wenqi [4 ]
Cheng, Qinghua [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Semicond, Lab Solid State Optoelect Informat Technol, Beijing 100083, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
[3] Beijing Acad Artificial Intelligence, Beijing 100089, Peoples R China
[4] North China Univ Technol, Sch Informat Sci & Technol, Beijing 100144, Peoples R China
Keywords
Speech enhancement; Time domain; Multiscale attention; Attention metric discriminator; Recurrent neural network; Self-attention; U-Net; Noise
DOI
10.1016/j.apacoust.2023.109385
Chinese Library Classification (CLC)
O42 [Acoustics]
Subject Classification Codes
070206; 082403
Abstract
In the speech enhancement (SE) task, the mismatch between the objective function used to train the SE model and the evaluation metric leads to low quality in the generated speech. Although existing studies have attempted to use a metric discriminator to learn a surrogate for the evaluation metric from data and thereby guide generator updates, the metric discriminator's simple structure cannot closely approximate the evaluation metric's function, which limits SE performance. This paper proposes a multiscale attention metric generative adversarial network (MAMGAN) to resolve this problem. In the metric discriminator, an attention mechanism is introduced to emphasize meaningful features along the spatial and channel directions, avoiding the feature loss caused by direct average pooling; this approximates the computation of the evaluation metric more closely and further improves SE performance. In addition, motivated by the effectiveness of the self-attention mechanism in capturing long-term dependencies, we construct a multiscale attention module (MSAM). It fully considers multiple representations of the signal and can therefore better model the features of long sequences. Ablation experiments verify the effectiveness of the attention metric discriminator and the MSAM. Quantitative analysis on the Voice Bank + DEMAND dataset shows that MAMGAN outperforms various time-domain SE methods, achieving a perceptual evaluation of speech quality (PESQ) score of 3.30.
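To make the metric-learning idea described above concrete, the following is a minimal PyTorch sketch of a MetricGAN-style attention metric discriminator: it regresses the normalized PESQ score of an (enhanced, clean) waveform pair, and channel plus spatial attention (written here in a CBAM-like form) re-weight the feature map before pooling, so that plain average pooling does not wash out informative regions, which is the failure mode the abstract describes. All class names, layer sizes, and the exact attention formulation are illustrative assumptions rather than the paper's actual architecture.

# Minimal sketch of a MetricGAN-style objective with attention pooling.
# Module names, sizes, and the CBAM-like attention are assumptions,
# not the MAMGAN authors' exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Re-weights channels so informative ones dominate the pooled score."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                        # x: (B, C, T)
        avg = self.mlp(x.mean(dim=2))             # (B, C)
        mx = self.mlp(x.amax(dim=2))              # (B, C)
        return x * torch.sigmoid(avg + mx).unsqueeze(-1)

class SpatialAttention(nn.Module):
    """Highlights time steps that matter for the metric estimate."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv1d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                        # x: (B, C, T)
        stats = torch.cat([x.mean(dim=1, keepdim=True),
                           x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(stats))

class AttentionMetricDiscriminator(nn.Module):
    """Predicts a normalized PESQ score for an (enhanced, clean) pair."""
    def __init__(self, channels=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(2, channels, 16, stride=4), nn.PReLU(),
            nn.Conv1d(channels, channels, 16, stride=4), nn.PReLU(),
        )
        self.channel_att = ChannelAttention(channels)
        self.spatial_att = SpatialAttention()
        self.head = nn.Linear(channels, 1)

    def forward(self, enhanced, clean):          # each: (B, T) waveforms
        x = torch.stack([enhanced, clean], dim=1)  # (B, 2, T)
        x = self.encoder(x)
        x = self.spatial_att(self.channel_att(x))
        x = x.mean(dim=2)                        # pool only after attention
        return torch.sigmoid(self.head(x))       # score in (0, 1)

def discriminator_loss(D, clean, enhanced, pesq_norm):
    """L2 regression toward true normalized PESQ (shape (B, 1));
    a (clean, clean) pair targets the maximum score of 1."""
    return (F.mse_loss(D(clean, clean), torch.ones_like(pesq_norm)) +
            F.mse_loss(D(enhanced.detach(), clean), pesq_norm))

def generator_metric_loss(D, clean, enhanced):
    """Pushes the generator's output toward the surrogate metric's maximum."""
    score = D(enhanced, clean)
    return F.mse_loss(score, torch.ones_like(score))

Under this scheme the discriminator becomes a differentiable surrogate of PESQ, and generator_metric_loss drives the enhanced speech toward the surrogate's maximum score of 1; the attention pooling is what lets the surrogate track the true metric more closely than a plain average-pooled discriminator.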
Pages: 11