Knowledge distillation for object detection with diffusion model

Cited by: 0
Authors
Zhang, Yi [1 ]
Long, Junzong [1 ]
Li, Chunrui [1 ]
Affiliations
[1] Sichuan University, Department of Computer Science, Chengdu, People's Republic of China
Keywords
Object detection; Knowledge distillation; Diffusion model; Noise prediction
DOI
10.1016/j.neucom.2025.130019
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Knowledge distillation is a method that transfers information from a larger network (the teacher) to a smaller network (the student), so that the student inherits the strong performance of the teacher while keeping its computational cost relatively low. Knowledge distillation has been widely applied to object detection to counter the rapid growth of model size. In this paper, we propose an object detector based on a knowledge distillation method. Directly mimicking the teacher's features often fails to achieve the desired results because of the extra noise in the features extracted by the student, which causes significant teacher-student inconsistency and may even weaken the capability of the student. To address this issue, we utilize a diffusion model to remove this noise and thereby narrow the gap between the features extracted by the teacher and the student, improving the performance of the student. Furthermore, we develop a noise matching module that matches the noise level in the student features during the denoising process. Extensive experiments conducted on COCO and Pascal VOC validate the effectiveness of the proposed method, which achieves 40.0% mAP and 81.63% mAP respectively while maintaining a frame rate of 27.3 FPS, demonstrating the superiority of our model in both accuracy and speed.
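The abstract's core idea — treating the student's feature as a noisy version of the teacher's feature, denoising it with a diffusion-style step, and only then applying a feature-mimicking loss — can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the function names (`denoise_step`, `distill_loss`), the use of a single fixed noise level `alpha_bar_t`, and the stand-in `noise_predictor` callable are all assumptions; it uses the standard diffusion identity x_t = sqrt(ᾱ_t)·x_0 + sqrt(1−ᾱ_t)·ε to recover a clean-feature estimate.

```python
import numpy as np

def denoise_step(x_t, eps_hat, alpha_bar_t):
    # Recover the clean-feature estimate x0 from a noisy feature x_t and
    # predicted noise eps_hat, by inverting the diffusion forward process
    # x_t = sqrt(a)*x0 + sqrt(1-a)*eps (deterministic, DDIM-style).
    return (x_t - np.sqrt(1.0 - alpha_bar_t) * eps_hat) / np.sqrt(alpha_bar_t)

def distill_loss(teacher_feat, student_feat, noise_predictor, alpha_bar_t):
    # Feature-mimicking loss computed AFTER denoising the student feature,
    # rather than matching the raw (noisy) student feature to the teacher.
    eps_hat = noise_predictor(student_feat)   # stand-in noise-prediction net
    denoised = denoise_step(student_feat, eps_hat, alpha_bar_t)
    return float(np.mean((denoised - teacher_feat) ** 2))
```

In this sketch, the paper's noise matching module would correspond to estimating `alpha_bar_t` (the noise level present in the student feature) instead of fixing it by hand; here it is left as a scalar argument to keep the example self-contained.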
Pages: 11