Relation Knowledge Distillation by Auxiliary Learning for Object Detection

Cited by: 0
|
Authors
Wang, Hao [1 ]
Jia, Tong [1 ]
Wang, Qilong [2 ]
Zuo, Wangmeng [3 ]
Affiliations
[1] Northeastern Univ, Coll Informat Sci & Engn, Shenyang 110819, Peoples R China
[2] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300072, Peoples R China
[3] Harbin Inst Technol, Sch Comp Sci & Technol, Harbin 150001, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Task analysis; Predictive models; Object detection; Location awareness; Adaptation models; Accuracy; Head; knowledge distillation; relation information; auxiliary learning;
DOI
10.1109/TIP.2024.3445740
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Balancing the trade-off between accuracy and speed, i.e., obtaining higher performance without sacrificing inference time, is a challenging topic in object detection. Knowledge distillation, a model compression technique, offers a feasible way to address this efficiency-effectiveness issue by transferring dark knowledge from a sophisticated teacher detector to a simple student one. Despite offering promising ways to harmonize accuracy and speed, current knowledge distillation methods for object detection still suffer from two limitations. First, most methods are inherited or adapted from frameworks designed for image classification and operate in an implicit manner, imitating or constraining intermediate-layer features or output predictions between the teacher and student models, while paying little attention to the intrinsic relevance between the classification and localization predictions in object detection. Second, these methods fail to investigate the relationship between the detection and distillation tasks in the distillation pipeline, training the whole network by simply combining the losses of the two tasks with hand-crafted weighting parameters. To address these issues, we propose a novel Relation Knowledge Distillation by Auxiliary Learning for Object Detection (ReAL) method. Specifically, we first design a prediction relation distillation module that makes the student model directly mimic the output predictions of the teacher, and construct self- and mutual-relation distillation losses to excavate the relation information between the teacher and student models. Moreover, to better delve into the relationship between the different tasks in the distillation pipeline, we introduce auxiliary learning into knowledge distillation for object detection and develop a dynamic weight adaptation strategy. By regarding detection as the primary task and distillation as the auxiliary task in an auxiliary learning framework, we dynamically adjust and regularize the corresponding loss weights during training. Experiments on the MS COCO dataset with various teacher-student detector combinations show that the proposed ReAL achieves clear improvements across different distillation configurations while performing favorably against state-of-the-art methods.
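The abstract describes the two components only at a high level. The following minimal PyTorch sketch shows one plausible reading of them; it is not the paper's actual formulation. The helper names (relation_matrix, relation_distillation_loss, auxiliary_weight), the cosine-similarity relation matrices, and the gradient-cosine weighting rule are all illustrative assumptions; in particular, the weighting rule follows a common auxiliary-learning heuristic (down-weighting the auxiliary loss when its gradient conflicts with the primary-task gradient) rather than ReAL's published strategy.

import torch
import torch.nn.functional as F

def relation_matrix(a, b):
    """Pairwise cosine similarities between two prediction sets.
    a: (N, C), b: (M, C) -> (N, M) relation matrix (assumed form)."""
    return F.normalize(a, dim=1) @ F.normalize(b, dim=1).t()

def relation_distillation_loss(student_preds, teacher_preds):
    """Illustrative self- and mutual-relation distillation losses.
    Self relation: structure among each model's own predictions.
    Mutual relation: cross-model structure between student and teacher."""
    r_teacher = relation_matrix(teacher_preds, teacher_preds).detach()
    r_student = relation_matrix(student_preds, student_preds)
    self_loss = F.mse_loss(r_student, r_teacher)
    r_mutual = relation_matrix(student_preds, teacher_preds)
    mutual_loss = F.mse_loss(r_mutual, r_teacher)
    return self_loss + mutual_loss

def auxiliary_weight(primary_loss, aux_loss, shared_params):
    """Dynamic weight for the auxiliary (distillation) loss via gradient
    cosine similarity -- a standard auxiliary-learning heuristic used here
    as a stand-in; the paper's own adaptation rule may differ."""
    g_p = torch.autograd.grad(primary_loss, shared_params, retain_graph=True)
    g_a = torch.autograd.grad(aux_loss, shared_params, retain_graph=True)
    gp = torch.cat([g.reshape(-1) for g in g_p])
    ga = torch.cat([g.reshape(-1) for g in g_a])
    cos = F.cosine_similarity(gp, ga, dim=0)
    # Suppress the distillation signal when its gradient points against
    # the detection gradient; pass it through otherwise.
    return torch.clamp(cos, min=0.0).detach()

# Usage sketch (hypothetical names): detection is the primary task,
# distillation the auxiliary one.
# det_loss = detector_loss(student_outputs, targets)                   # primary
# kd_loss = relation_distillation_loss(student_preds, teacher_preds)   # auxiliary
# w = auxiliary_weight(det_loss, kd_loss, list(student.parameters()))
# (det_loss + w * kd_loss).backward()

Treating the weight as a detached scalar keeps the adaptation from feeding back into the gradients themselves; only the relative contribution of the distillation loss changes per step, which matches the abstract's description of dynamically adjusting and regularizing the task loss weights during training.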
Pages: 4796-4810
Page count: 15