Knowledge Distillation in Fourier Frequency Domain for Dense Prediction

Citations: 0
Authors
Shi, Min [1]
Zheng, Chengkun [1]
Yi, Qingming [1]
Weng, Jian [1,2]
Luo, Aiwen [1]
Affiliations
[1] Jinan Univ, Coll Informat Sci & Technol, Dept Elect Engn, Guangzhou 510632, Peoples R China
[2] Jinan Univ, Coll Cyber Secur, Guangzhou 510632, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Frequency-domain analysis; Feature extraction; Semantics; Knowledge engineering; Detectors; Accuracy; Technological innovation; Object detection; Head; Discrete Fourier transforms; Dense prediction; Fourier transform; knowledge distillation; object detection; semantic segmentation;
DOI
10.1109/LSP.2024.3515795
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
Knowledge distillation has been widely used to enhance student network performance for dense prediction tasks. Most previous knowledge distillation methods focus on valuable regions of the feature map in the spatial domain, ignoring the semantic information in the frequency domain. This work explores effective information representation of feature maps in the frequency domain and proposes a novel distillation method in the Fourier domain. The approach enhances the student's amplitude representation and transmits both original feature knowledge and global pixel relations. Experiments on object detection and semantic segmentation tasks, covering both homogeneous and heterogeneous distillation, demonstrate significant improvements for the student network. For instance, the ResNet50-RepPoints detector and the ResNet18-PspNet segmenter achieve improvements of 4.2% AP and 5.01% mIoU on the COCO2017 and Cityscapes datasets, respectively.
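To make the frequency-domain idea in the abstract concrete, the sketch below computes a simple amplitude-matching distillation loss between teacher and student feature maps via a 2-D FFT. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: the function name fourier_amplitude_distillation_loss, the L1 matching criterion, and the orthonormal FFT normalization are illustrative choices, and the paper's actual loss terms and weighting may differ.

import torch
import torch.nn.functional as F

def fourier_amplitude_distillation_loss(feat_s, feat_t):
    # Hypothetical sketch, not the authors' code.
    # feat_s, feat_t: (N, C, H, W) feature maps from the student and teacher backbones.
    # 2-D FFT over the spatial dimensions of every channel.
    fft_s = torch.fft.fft2(feat_s, norm="ortho")
    fft_t = torch.fft.fft2(feat_t, norm="ortho")
    # Amplitude spectra summarize the global frequency content of the features.
    amp_s = torch.abs(fft_s)
    amp_t = torch.abs(fft_t)
    # Simple L1 match of amplitude spectra; the paper's exact formulation may differ.
    return F.l1_loss(amp_s, amp_t)

# Usage example: distill a teacher feature map into a student feature map of the same shape.
student_feat = torch.randn(2, 256, 64, 64, requires_grad=True)
teacher_feat = torch.randn(2, 256, 64, 64)
loss = fourier_amplitude_distillation_loss(student_feat, teacher_feat.detach())
loss.backward()

In practice, such a term would be scaled by a weighting hyperparameter and added to the student's task loss (detection or segmentation), alongside any spatial-domain distillation terms.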
Pages: 296-300
Page count: 5