Multi-View Attention Transfer for Efficient Speech Enhancement

Citations: 3
Authors:
Shin, Wooseok [1 ]
Park, Hyun Joon [1 ]
Kim, Jin Sob [1 ]
Lee, Byung Hoon [1 ]
Han, Sung Won [1 ]
Affiliations:
[1] Korea Univ, Sch Ind & Management Engn, Seoul, South Korea
Keywords:
speech enhancement; multi-view knowledge distillation; feature distillation; time domain; low complexity;
DOI:
10.21437/Interspeech.2022-10251
Chinese Library Classification:
O42 [Acoustics];
Discipline Classification Codes:
070206; 082403;
Abstract:
Recent deep learning models have achieved high performance in speech enhancement; however, it remains challenging to obtain a fast, low-complexity model without significant performance degradation. Previous knowledge distillation studies on speech enhancement could not solve this problem because their output-distillation methods are, in several respects, ill-suited to the speech enhancement task. In this study, we propose multi-view attention transfer (MV-AT), a feature-based distillation method, to obtain efficient speech enhancement models in the time domain. Based on a multi-view feature extraction model, MV-AT transfers the multi-view knowledge of a teacher network to a student network without additional parameters. Experimental results show that the proposed method consistently improved the performance of student models of various sizes on the Valentini and deep noise suppression (DNS) datasets. MANNER-S-8.1GF with our proposed method, a lightweight model for efficient deployment, used 15.4x fewer parameters and 4.71x fewer floating-point operations (FLOPs) than the baseline model while achieving comparable performance.
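The core distillation idea can be sketched as follows. In feature-based attention transfer, each intermediate feature map is collapsed into a normalized attention map, and the student is trained to match the teacher's maps at several layers ("views"); because the maps are derived from existing activations, the loss adds no learnable parameters. This is a minimal sketch in the style of standard attention transfer, assuming attention maps are formed by averaging squared activations over channels; the paper's exact multi-view construction and loss weighting may differ.

```python
import numpy as np

def attention_map(feat, eps=1e-8):
    """Collapse a (channels, time) feature map into a normalized 1-D
    attention map over time by averaging squared activations across
    channels, then scaling to unit L2 norm. Channel counts may differ
    between teacher and student, since the channel axis is reduced."""
    a = np.mean(feat ** 2, axis=0)           # (time,)
    return a / (np.linalg.norm(a) + eps)     # unit L2 norm

def at_loss(teacher_feats, student_feats):
    """Sum of squared L2 distances between matched teacher/student
    attention maps taken from several layers (the "views")."""
    return sum(
        float(np.sum((attention_map(t) - attention_map(s)) ** 2))
        for t, s in zip(teacher_feats, student_feats)
    )

# Toy example: a wide teacher (64 channels) and a narrow student
# (32 channels) produce comparable attention maps over 100 time steps.
rng = np.random.default_rng(0)
t_feats = [rng.standard_normal((64, 100)) for _ in range(3)]
s_feats = [rng.standard_normal((32, 100)) for _ in range(3)]
loss = at_loss(t_feats, s_feats)
```

In practice this distillation term would be added to the student's enhancement loss during training; matching only normalized attention maps (rather than raw features) is what lets teacher and student differ in width.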
Pages: 1198-1202 (5 pages)