Relation Knowledge Distillation by Auxiliary Learning for Object Detection

Cited: 0
Authors
Wang, Hao [1 ]
Jia, Tong [1 ]
Wang, Qilong [2 ]
Zuo, Wangmeng [3 ]
Affiliations
[1] Northeastern Univ, Coll Informat Sci & Engn, Shenyang 110819, Peoples R China
[2] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300072, Peoples R China
[3] Harbin Inst Technol, Sch Comp Sci & Technol, Harbin 150001, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation
Keywords
Task analysis; Predictive models; Object detection; Location awareness; Adaptation models; Accuracy; Head; knowledge distillation; relation information; auxiliary learning
DOI
10.1109/TIP.2024.3445740
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Balancing the trade-off between accuracy and speed, i.e., obtaining higher performance without sacrificing inference time, is a challenging topic in object detection. Knowledge distillation, a model compression technique, offers a feasible way to handle this efficiency-effectiveness issue by transferring dark knowledge from a sophisticated teacher detector to a simple student detector. Despite providing promising ways to harmonize accuracy and speed, current knowledge distillation methods for object detection still suffer from two limitations. First, most methods are inherited or borrowed from frameworks designed for image classification, and operate implicitly by forcing the student to imitate the teacher's intermediate-layer features or output predictions, while paying little attention to the intrinsic relevance between the classification and localization predictions in object detection. Second, these methods fail to investigate the relationship between the detection and distillation tasks in the distillation pipeline, and train the whole network by simply combining the losses of these two tasks with hand-crafted weighting parameters. To address these issues, we propose a novel Relation Knowledge Distillation by Auxiliary Learning for Object Detection (ReAL) method. Specifically, we first design a prediction relation distillation module that makes the student model directly mimic the output predictions of the teacher, and construct self- and mutual-relation distillation losses to excavate the relation information between the teacher and student models. Moreover, to better delve into the relationship between the different tasks in the distillation pipeline, we introduce auxiliary learning into knowledge distillation for object detection and develop a dynamic weight adaptation strategy. By regarding detection as the primary task and distillation as the auxiliary task in the auxiliary learning framework, we dynamically adjust and regularize the corresponding loss weights during training. Experiments on the MS COCO dataset with various teacher-student detector combinations show that the proposed ReAL achieves clear improvements under different distillation configurations while performing favorably against state-of-the-art methods.
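The abstract names two mechanisms without giving formulas: self- and mutual-relation distillation losses over teacher and student predictions, and a dynamic weight adaptation strategy borrowed from auxiliary learning. The PyTorch sketch below is a minimal, hypothetical rendering of both ideas; the function names, the identity target for the mutual-relation term, and the gradient-cosine weighting rule are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def relation_matrix(preds):
    """Pairwise cosine-similarity matrix over N prediction vectors of shape (N, D)."""
    z = F.normalize(preds, dim=1)
    return z @ z.t()


def relation_distillation_loss(student_preds, teacher_preds):
    """Hypothetical self- and mutual-relation distillation terms.

    student_preds / teacher_preds: (N, D) per-proposal prediction vectors
    (e.g., class logits concatenated with box regressions), assumed to be
    aligned one-to-one between the two detectors.
    """
    # Direct prediction mimicry: the student imitates the teacher's outputs.
    mimic = F.mse_loss(student_preds, teacher_preds)

    # Self-relation: the student's internal prediction-to-prediction
    # structure should match the teacher's.
    self_rel = F.mse_loss(relation_matrix(student_preds),
                          relation_matrix(teacher_preds))

    # Mutual relation: cross-model similarities, pushed toward the identity
    # so each student prediction aligns with its own teacher counterpart
    # (an illustrative choice of target, not taken from the paper).
    cross = F.normalize(student_preds, dim=1) @ F.normalize(teacher_preds, dim=1).t()
    mutual_rel = F.mse_loss(cross, torch.eye(cross.size(0), device=cross.device))

    return mimic + self_rel + mutual_rel


def dynamic_aux_weight(primary_loss, aux_loss, shared_params):
    """One plausible dynamic weighting rule: scale the auxiliary
    (distillation) loss by the cosine similarity between its gradient and
    the primary (detection) gradient on shared parameters, clipped at zero
    so conflicting auxiliary gradients are suppressed. This is a known
    auxiliary-learning heuristic; the paper's actual strategy may differ.
    Assumes both losses depend on every tensor in shared_params."""
    g_p = torch.autograd.grad(primary_loss, shared_params, retain_graph=True)
    g_a = torch.autograd.grad(aux_loss, shared_params, retain_graph=True)
    gp = torch.cat([g.flatten() for g in g_p])
    ga = torch.cat([g.flatten() for g in g_a])
    cos = F.cosine_similarity(gp, ga, dim=0)
    return cos.clamp(min=0.0).detach()


# Usage inside a training step (hypothetical names):
#   det_loss = detection_losses(student, batch)               # primary task
#   kd_loss  = relation_distillation_loss(s_preds, t_preds)   # auxiliary task
#   w = dynamic_aux_weight(det_loss, kd_loss, list(student_backbone.parameters()))
#   (det_loss + w * kd_loss).backward()
```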
Pages: 4796-4810
Page count: 15