Knowledge distillation for object detection with diffusion model

Cited by: 0
Authors
Zhang, Yi [1 ]
Long, Junzong [1 ]
Li, Chunrui [1 ]
Affiliations
[1] Sichuan Univ, Dept Comp Sci, Chengdu, Peoples R China
Keywords
Object detection; Knowledge distillation; Diffusion model; Noise prediction;
DOI
10.1016/j.neucom.2025.130019
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Knowledge distillation is a method that transfers information from a larger network (the teacher) to a smaller network (the student), so that the student inherits the strong performance of the teacher while keeping its computational complexity relatively low. Knowledge distillation has been widely applied in object detection to counter the rapid growth of model size, and in this paper we propose an object detector based on knowledge distillation. However, directly mimicking the teacher's features often fails to achieve the desired results because of the extra noise in the features extracted by the student, which causes significant inconsistency between the two feature sets and may even weaken the student. To address this issue, we utilize a diffusion model to remove this noise, narrowing the gap between the features extracted by the teacher and the student and thereby improving the student's performance. Furthermore, we develop a noise matching module that matches the noise level in the student features during the denoising process. Extensive experiments on COCO and Pascal VOC validate the effectiveness of the proposed method: it achieves 40.0% mAP and 81.63% mAP respectively while maintaining a frame rate of 27.3 FPS, demonstrating the superiority of our model in both accuracy and speed.
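The core idea in the abstract can be illustrated with a minimal numerical sketch: treat the student feature as a noisy version of the teacher feature, run one diffusion-style denoising step, and measure the feature gap before and after. This is not the paper's implementation; `predict_noise` stands in for the learned diffusion network (here it cheats with the known noise purely for illustration), and `sigma` plays the role of the noise level that the paper's noise matching module would estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for flattened feature maps of teacher and student.
teacher_feat = rng.normal(size=256)
noise = rng.normal(size=256)
sigma = 0.5  # assumed noise level present in the student feature
student_feat = teacher_feat + sigma * noise

def predict_noise(x, sigma):
    """Placeholder for the learned diffusion noise predictor.
    For illustration we return the true noise; a real model would
    estimate it from x and the matched noise level sigma."""
    return noise

def denoise(x, sigma):
    # One deterministic denoising step: x0_hat = x - sigma * eps_hat
    return x - sigma * predict_noise(x, sigma)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

gap_raw = mse(student_feat, teacher_feat)                 # gap before denoising
gap_denoised = mse(denoise(student_feat, sigma), teacher_feat)  # gap after
print(gap_denoised < gap_raw)  # denoising narrows the teacher-student gap
```

In the actual method the denoised student feature, rather than the raw one, would then be matched to the teacher feature by the distillation loss.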
Pages: 11