MAMGAN: Multiscale attention metric GAN for monaural speech enhancement in the time domain

Cited: 9
Authors
Guo, Huimin [1 ,2 ]
Jian, Haifang [1 ]
Wang, Yequan [3 ]
Wang, Hongchang [1 ,2 ]
Zhao, Xiaofan [3 ]
Zhu, Wenqi [4 ]
Cheng, Qinghua [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Semicond, Lab Solid State Optoelect Informat Technol, Beijing 100083, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
[3] Beijing Acad Artificial Intelligence, Beijing 100089, Peoples R China
[4] North China Univ Technol, Sch Informat Sci & Technol, Beijing 100144, Peoples R China
Keywords
Speech enhancement; Time domain; Multiscale attention; Attention metric discriminator; RECURRENT NEURAL-NETWORK; SELF-ATTENTION; U-NET; NOISE;
DOI
10.1016/j.apacoust.2023.109385
CLC number
O42 [Acoustics];
Subject classification codes
070206; 082403
Abstract
In the speech enhancement (SE) task, the mismatch between the objective function used to train the SE model and the evaluation metric leads to low quality of the generated speech. Although existing studies have attempted to use a metric discriminator to learn a surrogate for the evaluation metric from data and guide generator updates, the metric discriminator's simple structure cannot closely approximate the evaluation metric, which limits SE performance. This paper proposes a multiscale attention metric generative adversarial network (MAMGAN) to resolve this problem. In the metric discriminator, an attention mechanism is introduced to emphasize meaningful features along the spatial and channel directions, avoiding the feature loss caused by direct average pooling; this better approximates the computation of the evaluation metric and further improves SE performance. In addition, motivated by the effectiveness of self-attention in capturing long-term dependencies, we construct a multiscale attention module (MSAM) that fully exploits multiple representations of the signal and thus better models the features of long sequences. Ablation experiments verify the effectiveness of the attention metric discriminator and the MSAM. Quantitative analysis on the Voice Bank + DEMAND dataset shows that MAMGAN outperforms various time-domain SE methods, achieving a perceptual evaluation of speech quality (PESQ) score of 3.30.
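To make the two ideas in the abstract concrete, the sketch below is a minimal PyTorch illustration, not the authors' released code: `AttentionPooling` shows attention-weighted pooling along the channel and spatial (time) directions in place of direct average pooling, as a metric discriminator might use, and `MultiscaleSelfAttention` shows self-attention applied at several temporal resolutions and fused, in the spirit of the MSAM. All module names, layer sizes, reduction ratios, and scale factors here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionPooling(nn.Module):
    """Channel- and spatial-attention weighting before pooling, instead of a
    direct global average (illustrative stand-in for the attention metric
    discriminator's pooling stage)."""

    def __init__(self, channels: int):
        super().__init__()
        # Channel attention: squeeze over time, excite over channels.
        self.channel_gate = nn.Sequential(
            nn.Linear(channels, channels // 4),
            nn.ReLU(),
            nn.Linear(channels // 4, channels),
            nn.Sigmoid(),
        )
        # Spatial (temporal) attention: one gate per time step.
        self.spatial_gate = nn.Sequential(
            nn.Conv1d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        c = self.channel_gate(x.mean(dim=2))   # (batch, channels)
        x = x * c.unsqueeze(-1)                # re-weight channels
        s = self.spatial_gate(x)               # (batch, 1, time)
        x = x * s                              # re-weight time steps
        return x.mean(dim=2)                   # pooled embedding, (batch, channels)


class MultiscaleSelfAttention(nn.Module):
    """Self-attention applied at several temporal resolutions and fused back,
    a rough sketch in the spirit of the multiscale attention module (MSAM)."""

    def __init__(self, channels: int, scales=(1, 2, 4), heads: int = 4):
        super().__init__()
        self.scales = scales
        self.attn = nn.ModuleList(
            nn.MultiheadAttention(channels, heads, batch_first=True) for _ in scales
        )
        self.fuse = nn.Conv1d(channels * len(scales), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        outputs = []
        for scale, attn in zip(self.scales, self.attn):
            y = F.avg_pool1d(x, scale) if scale > 1 else x   # coarser time resolution
            y = y.transpose(1, 2)                            # (batch, time', channels)
            y, _ = attn(y, y, y)                             # self-attention at this scale
            y = y.transpose(1, 2)
            if scale > 1:                                    # restore the original length
                y = F.interpolate(y, size=x.shape[-1], mode="nearest")
            outputs.append(y)
        return self.fuse(torch.cat(outputs, dim=1))          # fuse scales back to `channels`


# Quick shape check with random features (batch of 2, 64 channels, 16000 time steps).
if __name__ == "__main__":
    feats = torch.randn(2, 64, 16000)
    print(MultiscaleSelfAttention(64)(feats).shape)   # torch.Size([2, 64, 16000])
    print(AttentionPooling(64)(feats).shape)          # torch.Size([2, 64])
```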
Pages: 11
Related papers
50 records in total
  • [41] Exploring Multi-Stage GAN with Self-Attention for Speech Enhancement
    Asiedu Asante, Bismark Kweku
    Broni-Bediako, Clifford
    Imamura, Hiroki
    APPLIED SCIENCES-BASEL, 2023, 13 (16):
  • [42] Harmonic beamformers for speech enhancement and dereverberation in the time domain
    Jensen, J. R.
    Karimian-Azari, S.
    Christensen, M. G.
    Benesty, J.
    SPEECH COMMUNICATION, 2020, 116 : 1 - 11
  • [43] A New Framework for Supervised Speech Enhancement in the Time Domain
    Pandey, Ashutosh
    Wang, Deliang
    19TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2018), VOLS 1-6: SPEECH RESEARCH FOR EMERGING MARKETS IN MULTILINGUAL SOCIETIES, 2018, : 1136 - 1140
  • [44] Visually Assisted Time-Domain Speech Enhancement
    Ideli, Elham
    Sharpe, Bruce
    Bajic, Ivan V.
    Vaughan, Rodney G.
    2019 7TH IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (IEEE GLOBALSIP), 2019,
  • [45] Neural speech enhancement in the time-frequency domain
    Volkmer, M
    2003 IEEE XIII WORKSHOP ON NEURAL NETWORKS FOR SIGNAL PROCESSING - NNSP'03, 2003, : 617 - 626
  • [46] DBT-Net: Dual-Branch Federative Magnitude and Phase Estimation With Attention-in-Attention Transformer for Monaural Speech Enhancement
    Yu, Guochen
    Li, Andong
    Wang, Hui
    Wang, Yutian
    Ke, Yuxuan
    Zheng, Chengshi
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2022, 30 : 2629 - 2644
  • [47] Convolutional gated recurrent unit networks based real-time monaural speech enhancement
    Vanambathina, Sunny Dayal
    Anumola, Vaishnavi
    Tejasree, Ponnapalli
    Divya, R.
    Manaswini, B.
    Multimedia Tools and Applications, 2023, 82 : 45717 - 45732
  • [48] Monaural speech enhancement using U-net fused with multi-head self-attention
    Fan, Junyi
    Yang, Jibin
    Zhang, Xiongwei
    Zheng, Changyan
    Chinese Journal of Acoustics, 2023, 42 (01) : 98 - 118
  • [49] Monaural speech enhancement using U-net fused with multi-head self-attention
    Fan, Junyi
    Yang, Jibin
    Zhang, Xiongwei
    Zheng, Changyan
    Shengxue Xuebao/Acta Acustica, 2022, 47 (06): 703 - 716
  • [50] Convolutional gated recurrent unit networks based real-time monaural speech enhancement
    Vanambathina, Sunny Dayal
    Anumola, Vaishnavi
    Tejasree, Ponnapalli
    Divya, R.
    Manaswini, B.
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (29) : 45717 - 45732