Prediction-Guided Distillation for Dense Object Detection

Cited by: 12
Authors
Yang, Chenhongyi [1 ]
Ochal, Mateusz [2 ,3 ]
Storkey, Amos [2 ]
Crowley, Elliot J. [1 ]
Affiliations
[1] Univ Edinburgh, Sch Engn, Edinburgh, Midlothian, Scotland
[2] Univ Edinburgh, Sch Informat, Edinburgh, Midlothian, Scotland
[3] Heriot Watt Univ, Sch Engn & Phys Sci, Edinburgh, Midlothian, Scotland
Source
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Dense object detection; Knowledge distillation;
DOI
10.1007/978-3-031-20077-9_8
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Real-world object detection models should be cheap and accurate. Knowledge distillation (KD) can boost the accuracy of a small, cheap detection model by leveraging useful information from a larger teacher model. However, a key challenge is identifying the most informative features produced by the teacher for distillation. In this work, we show that only a very small fraction of features within a ground-truth bounding box are responsible for a teacher's high detection performance. Based on this, we propose Prediction-Guided Distillation (PGD), which focuses distillation on these key predictive regions of the teacher and yields considerable gains in performance over many existing KD baselines. In addition, we propose an adaptive weighting scheme over the key regions to smooth out their influence and achieve even better performance. Our proposed approach outperforms current state-of-the-art KD baselines on a variety of advanced one-stage detection architectures. Specifically, on the COCO dataset, our method achieves between +3.1% and +4.6% AP improvement using ResNet-101 and ResNet-50 as the teacher and student backbones, respectively. On the CrowdHuman dataset, we achieve +3.2% and +2.0% improvements in MR and AP, also using these backbones. Our code is available at https://github.com/ChenhongyiYang/PGD.
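The abstract's core idea can be illustrated with a minimal sketch: score every teacher feature location inside a ground-truth box by its prediction quality, keep only the top-k most predictive locations, and distill student features there with adaptively smoothed weights. This is a hypothetical NumPy illustration of the general technique, not the authors' exact PGD implementation; the function name, the softmax weighting, and the MSE imitation loss are assumptions for the sketch.

```python
import numpy as np

def prediction_guided_distill_loss(t_feat, s_feat, quality, gt_mask, k=4):
    """Sketch of a prediction-guided feature-distillation loss.

    t_feat, s_feat : (C, H, W) teacher / student feature maps.
    quality        : (H, W) teacher prediction-quality scores per location
                     (e.g. classification score combined with box IoU).
    gt_mask        : (H, W) boolean mask of locations inside any
                     ground-truth box.
    k              : number of top-quality locations to distill.
    """
    # Only locations inside ground-truth boxes are candidates.
    masked_quality = np.where(gt_mask, quality, -np.inf)

    # Pick the k most predictive teacher locations.
    flat = masked_quality.ravel()
    topk_idx = np.argsort(flat)[-k:]
    topk_q = flat[topk_idx]

    # Adaptive weights: a softmax over the selected qualities, so stronger
    # predictions contribute more but no single location dominates.
    w = np.exp(topk_q - topk_q.max())
    w /= w.sum()

    # Feature-imitation (MSE) loss restricted to the selected locations.
    C = t_feat.shape[0]
    t_sel = t_feat.reshape(C, -1)[:, topk_idx]   # (C, k)
    s_sel = s_feat.reshape(C, -1)[:, topk_idx]   # (C, k)
    per_loc = ((t_sel - s_sel) ** 2).mean(axis=0)  # (k,)
    return float((w * per_loc).sum())
```

A student whose features already match the teacher at the selected locations incurs zero loss; mismatches at high-quality locations are penalized most, which is the sense in which the distillation signal concentrates on the teacher's key predictive regions.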
Pages: 123-138
Number of pages: 16
Related Papers
50 items total
  • [21] Regional filtering distillation for object detection
    Wu, Pingfan
    Zhang, Jiayu
    Sun, Han
    Liu, Ningzhong
    MACHINE VISION AND APPLICATIONS, 2024, 35 (02)
  • [22] Structural Knowledge Distillation for Object Detection
    de Rijk, Philip
    Schneider, Lukas
    Cordts, Marius
    Gavrila, Dariu M.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [23] General Instance Distillation for Object Detection
    Dai, Xing
    Jiang, Zeren
    Wu, Zhao
    Bao, Yiping
    Wang, Zhicheng
    Liu, Si
    Zhou, Erjin
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 7838 - 7847
  • [25] Consistency- and dependence-guided knowledge distillation for object detection in remote sensing images
    Chen, Yixia
    Lin, Mingwei
    He, Zhu
    Polat, Kemal
    Alhudhaif, Adi
    Alenezi, Fayadh
    EXPERT SYSTEMS WITH APPLICATIONS, 2023, 229
  • [26] Knowledge Distillation in Fourier Frequency Domain for Dense Prediction
    Shi, Min
    Zheng, Chengkun
    Yi, Qingming
    Weng, Jian
    Luo, Aiwen
    IEEE SIGNAL PROCESSING LETTERS, 2025, 32 : 296 - 300
  • [27] Channel-wise Knowledge Distillation for Dense Prediction
    Shu, Changyong
    Liu, Yifan
    Gao, Jianfei
    Yan, Zheng
    Shen, Chunhua
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 5291 - 5300
  • [28] Focal Loss for Dense Object Detection
    Lin, Tsung-Yi
    Goyal, Priya
    Girshick, Ross
    He, Kaiming
    Dollar, Piotr
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2020, 42 (02) : 318 - 327
  • [29] Dense Receptive Field for Object Detection
    Yao, Yongqiang
    Dong, Yuan
    Huang, Zesang
    Bai, Hongliang
    2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 1815 - 1820
  • [30] Focal Loss for Dense Object Detection
    Lin, Tsung-Yi
    Goyal, Priya
    Girshick, Ross
    He, Kaiming
    Dollar, Piotr
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 2999 - 3007