Active Object Detection with Knowledge Aggregation and Distillation from Large Models

Cited by: 2
Authors: Yang, Dejie [1]; Liu, Yang [1]
Affiliations: [1] Peking Univ, Wangxuan Inst Comp Technol, Beijing, Peoples R China
Funding: National Natural Science Foundation of China
DOI: 10.1109/CVPR52733.2024.01573
CLC number: TP18 [Theory of Artificial Intelligence]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Accurately detecting active objects undergoing state changes is essential for comprehending human interactions and facilitating decision-making. Existing methods for active object detection (AOD) rely primarily on the visual appearance of objects within the input, such as changes in size and shape and relationships with hands. However, these visual changes can be subtle, posing challenges, particularly in scenarios with multiple distracting no-change instances of the same category. We observe that state changes are often the result of an interaction performed upon the object, and thus propose to use informed priors about plausible object-related interactions (including semantics and visual appearance) to provide more reliable cues for AOD. Specifically, we propose a knowledge aggregation procedure to integrate these informed priors into oracle queries within a teacher decoder, offering more object-affordance commonsense for locating the active object. To streamline inference and reduce extra knowledge inputs, we propose a knowledge distillation approach that encourages a student decoder to mimic the detection capability of the teacher decoder's oracle queries by replicating its predictions and attention. Our framework achieves state-of-the-art performance on four datasets, namely Ego4D, Epic-Kitchens, MECCANO, and 100DOH, demonstrating the effectiveness of our approach in improving AOD. The code and models are available at https://github.com/idejie/KAD.git.
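The distillation step described above trains the student decoder to replicate both the teacher's predictions and its attention maps. A minimal sketch of such a two-term mimicry objective is shown below; the function names, the choice of mean-squared-error for both terms, and the weights `w_pred`/`w_attn` are illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch of a prediction + attention distillation objective.
# All names (mse, distillation_loss, w_pred, w_attn) are hypothetical.

def mse(a, b):
    """Mean squared error between two equal-length flat sequences."""
    assert len(a) == len(b), "sequences must align element-wise"
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def distillation_loss(student_pred, teacher_pred,
                      student_attn, teacher_attn,
                      w_pred=1.0, w_attn=1.0):
    """Student mimics the teacher's predictions and attention maps.

    student_pred / teacher_pred: flattened detection outputs (e.g. box
    logits); student_attn / teacher_attn: flattened attention weights.
    """
    pred_term = mse(student_pred, teacher_pred)   # replicate predictions
    attn_term = mse(student_attn, teacher_attn)   # replicate attention
    return w_pred * pred_term + w_attn * attn_term
```

At inference time only the student decoder would be run, so the extra knowledge inputs feeding the teacher's oracle queries are no longer needed.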
Pages: 16624-16633 (10 pages)