Small Is Beautiful: Compressing Deep Neural Networks for Partial Domain Adaptation

Cited by: 0
Authors
Ma, Yuzhe [1 ]
Yao, Xufeng [2 ]
Chen, Ran [2 ]
Li, Ruiyu [3 ]
Shen, Xiaoyong [3 ]
Yu, Bei [2 ]
Affiliations
[1] Hong Kong Univ Sci & Technol Guangzhou, Microelect Thrust, Guangzhou 511400, Peoples R China
[2] Chinese Univ Hong Kong, Dept Comp Sci & Engn, Hong Kong, Peoples R China
[3] SmartMore Corp Ltd, Hong Kong, Peoples R China
Keywords
Training; Computational modeling; Task analysis; Adaptation models; Deep learning; Taylor series; Supervised learning; neural network compression; transfer learning
DOI
10.1109/TNNLS.2022.3194533
CLC classification
TP18 (Theory of Artificial Intelligence)
Discipline codes
081104; 0812; 0835; 1405
Abstract
Domain adaptation is a promising way to ease the costly data-labeling process in the era of deep learning (DL). A practical setting is partial domain adaptation (PDA), where the label space of the target domain is a subset of that of the source domain. Although existing methods achieve appealing performance on PDA tasks, deep PDA models very likely carry redundant computation, since the target task is only a subtask of the original problem. In this work, PDA and model compression are seamlessly integrated into a unified training process. The cross-domain distribution divergence is reduced by minimizing a soft-weighted maximum mean discrepancy (SWMMD), which is differentiable and acts as a regularizer during network training. To compress the overparameterized model, we use gradient statistics to identify and prune redundant channels based on the corresponding scaling factors in batch normalization (BN) layers. Experimental results demonstrate that our method achieves classification performance comparable to state-of-the-art methods on various PDA tasks, with a significant reduction in model size and computation overhead.
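The two ingredients named in the abstract can be illustrated with a minimal NumPy sketch. The paper's actual per-class soft-weighting scheme, kernel choice, and gradient-statistic criterion are defined in the full text, which is not reproduced here, so the per-sample weight vector `w`, the single-bandwidth Gaussian kernel, and the `keep_ratio` heuristic below are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Pairwise RBF kernel matrix between the rows of x and y."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def soft_weighted_mmd(src, tgt, w, sigma=1.0):
    """Squared MMD between a soft-weighted source sample and a uniformly
    weighted target sample; down-weighting source samples from classes
    absent in the target is the idea behind SWMMD."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                      # normalize source weights
    m = len(tgt)
    k_ss = gaussian_kernel(src, src, sigma)
    k_tt = gaussian_kernel(tgt, tgt, sigma)
    k_st = gaussian_kernel(src, tgt, sigma)
    return float(w @ k_ss @ w + k_tt.sum() / m**2 - 2.0 * (w @ k_st).sum() / m)

def prune_by_bn_scale(gamma, keep_ratio=0.5):
    """Keep the channels whose BN scaling factors |gamma| are largest;
    the remaining channels are candidates for pruning."""
    gamma = np.asarray(gamma, dtype=float)
    k = max(1, int(round(len(gamma) * keep_ratio)))
    order = np.argsort(-np.abs(gamma))   # indices sorted by descending |gamma|
    return np.sort(order[:k])
```

With uniform weights and identical source/target samples the discrepancy is zero, and it grows as the two distributions drift apart, which is what makes it usable as a differentiable regularizer during training.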
Pages: 3575-3585
Page count: 11