Knowledge Distillation in Fourier Frequency Domain for Dense Prediction

Cited by: 0
Authors
Shi, Min [1 ]
Zheng, Chengkun [1 ]
Yi, Qingming [1 ]
Weng, Jian [1 ,2 ]
Luo, Aiwen [1 ]
Institutions
[1] Jinan Univ, Coll Informat Sci & Technol, Dept Elect Engn, Guangzhou 510632, Peoples R China
[2] Jinan Univ, Coll Cyber Secur, Guangzhou 510632, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Frequency-domain analysis; Feature extraction; Semantics; Knowledge engineering; Detectors; Accuracy; Technological innovation; Object detection; Head; Discrete Fourier transforms; Dense prediction; Fourier transform; knowledge distillation; object detection; semantic segmentation;
DOI
10.1109/LSP.2024.3515795
Chinese Library Classification (CLC)
TM [Electrical technology]; TN [Electronic technology, communication technology]
Discipline Codes
0808; 0809
Abstract
Knowledge distillation has been widely used to enhance student-network performance on dense prediction tasks. Most previous knowledge distillation methods focus on valuable regions of the feature map in the spatial domain, ignoring the semantic information in the frequency domain. This work explores effective information representation of feature maps in the frequency domain and proposes a novel distillation method operating in the Fourier domain. The approach enhances the student's amplitude representation and transmits both original feature knowledge and global pixel relations. Experiments on object detection and semantic segmentation tasks, covering both homogeneous and heterogeneous distillation, demonstrate significant improvements for the student network. For instance, a ResNet50-RepPoints detector and a ResNet18-PSPNet segmenter achieve 4.2% AP and 5.01% mIoU improvements on the COCO2017 and Cityscapes datasets, respectively.
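The abstract does not give the paper's exact loss formulation, but the core idea of matching amplitude spectra in the Fourier domain can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `fourier_amplitude_distillation_loss` is hypothetical, features are assumed to be arrays with the last two axes as spatial dimensions, and a simple mean-squared error between amplitude spectra stands in for whatever weighting or normalization the paper actually uses.

```python
import numpy as np

def fourier_amplitude_distillation_loss(student_feat, teacher_feat):
    """Hypothetical sketch of a Fourier-domain amplitude distillation loss.

    Applies a 2-D FFT over the spatial axes of student and teacher feature
    maps, takes the amplitude (magnitude) of each spectrum, and penalizes
    the mean squared difference. The amplitude spectrum is a global summary
    of the feature map, so matching it transfers frequency-domain structure
    rather than per-pixel values.
    """
    # 2-D FFT over the last two (spatial) axes; leading axes (batch,
    # channel) are transformed independently.
    fs = np.fft.fft2(np.asarray(student_feat, dtype=float), axes=(-2, -1))
    ft = np.fft.fft2(np.asarray(teacher_feat, dtype=float), axes=(-2, -1))
    # Amplitude spectra: |F| discards phase, keeping global magnitude info.
    amp_s, amp_t = np.abs(fs), np.abs(ft)
    # Simple MSE between amplitude spectra as the distillation signal.
    return float(np.mean((amp_s - amp_t) ** 2))
```

In practice such a term would be added to the task loss (and possibly combined with a spatial-domain feature loss), with the teacher's features detached from the gradient computation.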
Pages: 296-300
Page count: 5