Foreground separation knowledge distillation for object detection

Cited by: 0
Authors
Li, Chao [1 ]
Liu, Rugui [1 ]
Quan, Zhe [1 ]
Hu, Pengpeng [2 ]
Sun, Jun [1 ]
Affiliations
[1] Jiangnan Univ, Sch Artificial Intelligence & Comp Sci, Wuxi, Jiangsu, Peoples R China
[2] Coventry Univ, Ctr Computat Sci & Math Modelling, Coventry, England
Funding
National Natural Science Foundation of China
Keywords
Knowledge distillation; Object detection; Foreground separation; Channel feature;
DOI
10.7717/peerj-cs.2485
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In recent years, deep learning models have become the predominant approach to computer vision tasks, but the large computation and storage requirements of many models make them difficult to deploy on resource-constrained devices. Knowledge distillation (KD) is a widely used approach to model compression. However, when applied to object detection, existing KD methods either distill the feature map directly or separate the foreground from the background with a simple binary mask in order to align the attention of the teacher and student models. These methods either ignore noise entirely or fail to remove it thoroughly, which limits the accuracy of the student model. To address this issue, we propose a foreground separation distillation (FSD) method in this paper. FSD enables student models to distinguish foreground from background using Gaussian heatmaps, reducing irrelevant information in the learning process. In addition, FSD extracts channel features by converting the spatial feature maps into probabilistic form, so that the knowledge in each channel of a well-trained teacher is fully exploited. Experimental results demonstrate that a YOLOX detector enhanced with our distillation method achieves superior performance on both the fall detection and VOC2007 datasets. For example, YOLOX with FSD reaches 73.1% mean average precision (mAP) on the Fall Detection dataset, 1.6% higher than the baseline. The code of FSD is accessible via https://doi.org/10.5281/zenodo.13829676.
Pages: 22
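The abstract describes two ingredients: a Gaussian-heatmap foreground mask and a channel-wise probabilistic view of the feature maps. The sketch below is a minimal, hypothetical PyTorch rendering of those two ideas, not the authors' released code (which is available at the Zenodo DOI above); the function names `gaussian_heatmap` and `fsd_like_loss`, the heatmap spread heuristic, and the weights `w_fg`, `w_ch` and temperature `tau` are illustrative assumptions.

```python
# A minimal, hypothetical sketch of the two ideas in the abstract:
# (1) a Gaussian-heatmap foreground mask and (2) channel-wise probabilistic
# feature matching. NOT the authors' released implementation; the heatmap
# heuristic, loss weights, and temperature are illustrative assumptions.
import torch
import torch.nn.functional as F


def gaussian_heatmap(boxes, feat_h, feat_w, stride, device="cpu"):
    """Render a 2-D Gaussian at each ground-truth box centre (image coords);
    the per-pixel maximum over boxes serves as a soft foreground mask."""
    ys = torch.arange(feat_h, device=device).float().view(-1, 1)
    xs = torch.arange(feat_w, device=device).float().view(1, -1)
    heat = torch.zeros(feat_h, feat_w, device=device)
    for box in boxes:
        x1, y1, x2, y2 = [float(v) for v in box]
        cx, cy = (x1 + x2) / 2 / stride, (y1 + y2) / 2 / stride
        # Spread proportional to box size -- a common heuristic, assumed here.
        sx = max((x2 - x1) / stride / 6.0, 1.0)
        sy = max((y2 - y1) / stride / 6.0, 1.0)
        g = torch.exp(-((xs - cx) ** 2 / (2 * sx ** 2) + (ys - cy) ** 2 / (2 * sy ** 2)))
        heat = torch.maximum(heat, g)
    return heat  # (feat_h, feat_w), values in [0, 1]


def fsd_like_loss(feat_s, feat_t, heat, tau=1.0, w_fg=1.0, w_ch=1.0):
    """Foreground-weighted feature imitation plus channel-wise distillation.
    feat_s, feat_t: (N, C, H, W) student / teacher features of the same shape."""
    n, c, h, w = feat_t.shape
    # 1) Foreground separation: weight the per-pixel squared error by the
    #    heatmap so background responses contribute little to the loss.
    mask = heat.view(1, 1, h, w)
    fg_loss = (mask * (feat_s - feat_t) ** 2).sum() / (mask.sum() * c).clamp(min=1e-6)
    # 2) Channel feature: turn each channel's spatial map into a distribution
    #    (softmax over H*W) and match student to teacher with KL divergence.
    p_t = F.softmax(feat_t.flatten(2) / tau, dim=-1)
    log_p_s = F.log_softmax(feat_s.flatten(2) / tau, dim=-1)
    ch_loss = F.kl_div(log_p_s, p_t, reduction="batchmean") * tau ** 2
    return w_fg * fg_loss + w_ch * ch_loss


if __name__ == "__main__":
    student = torch.randn(2, 64, 40, 40, requires_grad=True)
    teacher = torch.randn(2, 64, 40, 40)
    heat = gaussian_heatmap([(32, 48, 96, 160)], 40, 40, stride=8)
    loss = fsd_like_loss(student, teacher, heat)
    loss.backward()
```

In practice such a loss would be added to the detector's usual training objective at one or more feature levels; how FSD combines and weights these terms is detailed in the paper itself.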