Multi-View Attention Transfer for Efficient Speech Enhancement

Cited by: 3
Authors
Shin, Wooseok [1 ]
Park, Hyun Joon [1 ]
Kim, Jin Sob [1 ]
Lee, Byung Hoon [1 ]
Han, Sung Won [1 ]
Affiliations
[1] Korea Univ, Sch Ind & Management Engn, Seoul, South Korea
Source
INTERSPEECH 2022
Keywords
speech enhancement; multi-view knowledge distillation; feature distillation; time domain; low complexity
DOI
10.21437/Interspeech.2022-10251
Chinese Library Classification (CLC)
O42 [Acoustics]
Subject classification codes
070206; 082403
Abstract
Recent deep learning models have achieved high performance in speech enhancement; however, obtaining a fast, low-complexity model without significant performance degradation remains challenging. Previous knowledge distillation studies on speech enhancement could not solve this problem because their output-distillation methods do not fit the speech enhancement task in some respects. In this study, we propose multi-view attention transfer (MV-AT), a feature-based distillation method, to obtain efficient speech enhancement models in the time domain. Based on a multi-view feature extraction model, MV-AT transfers the multi-view knowledge of the teacher network to the student network without additional parameters. Experimental results show that the proposed method consistently improves the performance of student models of various sizes on the Valentini and deep noise suppression (DNS) datasets. With the proposed method, MANNER-S-8.1GF, a lightweight model for efficient deployment, requires 15.4x fewer parameters and 4.71x fewer floating-point operations (FLOPs) than the baseline model while achieving similar performance.
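The abstract describes MV-AT only at a high level, so the sketch below is not the paper's algorithm. It is a generic attention-transfer-style, feature-based distillation loss in PyTorch, intended only to illustrate the kind of parameter-free teacher-to-student feature matching the abstract refers to. The function names, the (batch, channels, time) tensor layout, and the treatment of "views" as a list of matched teacher/student feature maps are assumptions made here for illustration, not definitions taken from the paper.

import torch
import torch.nn.functional as F


def attention_map(feat: torch.Tensor) -> torch.Tensor:
    # feat: (batch, channels, time) feature map from a time-domain model.
    # Collapse the channel axis by summing squared activations, then
    # L2-normalize so teacher and student maps are scale-comparable.
    att = feat.pow(2).sum(dim=1)            # (batch, time)
    return F.normalize(att, p=2.0, dim=1)


def mv_at_loss(teacher_feats, student_feats):
    # teacher_feats / student_feats: lists of feature maps taken from
    # matching stages ("views" in this sketch) of the two networks.
    # The loss is the mean L2 distance between their attention maps;
    # it introduces no extra learnable parameters.
    loss = torch.zeros((), device=student_feats[0].device)
    for t, s in zip(teacher_feats, student_feats):
        loss = loss + F.mse_loss(attention_map(s), attention_map(t.detach()))
    return loss / len(teacher_feats)

In practice such a term would be added to the student's enhancement loss with a weighting factor; the actual views, stages, and weighting used by MV-AT are specified in the paper, not here.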
Pages: 1198-1202
Page count: 5