Dynamic relevance learning for few-shot object detection

Cited: 0
Authors
Liu, Weijie [1 ]
Cai, Xiaojie [1 ]
Wang, Chong [1 ]
Li, Haohe [1 ]
Yu, Shenghao [1 ]
Affiliations
[1] Ningbo Univ, Fac Elect Engn & Comp Sci, Ningbo 315000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Few-shot object detection; Meta R-CNN; Graph convolutional networks; Dynamic relevance learning;
DOI
10.1007/s11760-024-03774-1
CLC Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
Expensive bounding-box annotations have limited the development of object detection. It is therefore necessary to focus on the more challenging task of few-shot object detection, which requires the detector to recognize objects of novel classes from only a few training samples. Many popular methods that adopt a meta-learning-style training scheme, such as the Meta R-CNN series, have achieved promising performance. However, the support data is used only as class attention to guide the detection of each query image, and the relevance between support and query remains unexploited. Moreover, many recent works treat the support data and query images as independent branches without considering the relationship between them. To address this issue, we propose a dynamic relevance learning model, which exploits the relationship between all support images and the Regions of Interest (RoIs) on the query images to construct a dynamic graph convolutional network (GCN). By adjusting the prediction distribution of the base detector with the output of this GCN, the proposed model serves as a hard auxiliary classification task, which implicitly guides the detector toward improved class representations. Comprehensive experiments on the Pascal VOC and MS-COCO datasets achieve competitive results, demonstrating the model's effectiveness in learning more generalized features. Our code is available at https://github.com/liuweijie19980216/DRL-for-FSOD.
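The core mechanism the abstract describes (a dynamic graph built over support-class features and query RoI features, whose GCN output drives an auxiliary classification task that adjusts the base detector's prediction distribution) can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical illustration under assumed names, layer sizes, and a similarity-based adjacency; it is not the authors' implementation, which is available at the repository linked above.

```python
# Minimal sketch of the idea summarized in the abstract: nodes are
# support-class prototypes plus query RoI features, the adjacency is
# recomputed dynamically from feature similarity, and a two-layer GCN
# feeds an auxiliary classifier over the RoI nodes. All names and sizes
# are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicRelevanceGCN(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.gcn1 = nn.Linear(feat_dim, feat_dim)
        self.gcn2 = nn.Linear(feat_dim, feat_dim)
        self.aux_cls = nn.Linear(feat_dim, num_classes)  # auxiliary classifier

    def forward(self, support_feats: torch.Tensor, roi_feats: torch.Tensor):
        # support_feats: (C, D), one prototype per support class.
        # roi_feats:     (R, D), RoI features from the query image.
        nodes = torch.cat([support_feats, roi_feats], dim=0)  # (C+R, D)
        # Dynamic adjacency: row-normalized cosine similarity, recomputed
        # from the current features on every forward pass.
        normed = F.normalize(nodes, dim=1)
        adj = F.softmax(normed @ normed.t(), dim=1)
        h = F.relu(self.gcn1(adj @ nodes))
        h = F.relu(self.gcn2(adj @ h))
        # Auxiliary logits for the RoI nodes only; during training these
        # would be supervised with RoI labels and combined with the base
        # detector's classification loss to adjust its predictions.
        return self.aux_cls(h[support_feats.size(0):])  # (R, num_classes)


# Usage sketch with dummy shapes (15 classes, 128-dim features):
if __name__ == "__main__":
    model = DynamicRelevanceGCN(feat_dim=128, num_classes=15)
    support = torch.randn(15, 128)     # one prototype per class
    rois = torch.randn(32, 128)        # 32 RoIs from a query image
    print(model(support, rois).shape)  # torch.Size([32, 15])
```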
Pages: 10
Related Papers
50 records in total
  • [21] Meta-Learning-Based Incremental Few-Shot Object Detection
    Cheng, Meng
    Wang, Hanli
    Long, Yu
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (04) : 2158 - 2169
  • [22] Multi-Object Detection and Tracking Based on Few-Shot Learning
    Luo, Da-Peng
    Du, Guo-Qing
    Zeng, Zhi-Peng
    Wei, Long-Sheng
    Gao, Chang-Xin
    Cheng, Ying
    Xiao, Fei
    Luo, Chen
    TIEN TZU HSUEH PAO/ACTA ELECTRONICA SINICA, 2021, 49 (01) : 183 - 191
  • [23] Few-Shot Object Detection: A Comprehensive Survey
    Koehler, Mona
    Eisenbach, Markus
    Gross, Horst-Michael
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (09) : 11958 - 11978
  • [24] Meta-RCNN: Meta Learning for Few-Shot Object Detection
    Wu, Xiongwei
    Sahoo, Doyen
    Hoi, Steven
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 1679 - 1687
  • [25] Industrial few-shot fractal object detection
    Huang, Haoran
    Luo, Xiaochuan
    Yang, Chen
    NEURAL COMPUTING AND APPLICATIONS, 2023, 35 : 21055 - 21069
  • [26] Learning General and Specific Embedding with Transformer for Few-Shot Object Detection
    Zhang, Xu
    Chen, Zhe
    Zhang, Jing
    Liu, Tongliang
    Tao, Dacheng
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2025, 133 (02) : 968 - 984
  • [27] Transformation Invariant Few-Shot Object Detection
    Li, Aoxue
    Li, Zhenguo
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 3093 - 3101
  • [29] Few-shot Object Detection as a Semi-supervised Learning Problem
    Bailer, Werner
    Fassold, Hannes
    19TH INTERNATIONAL CONFERENCE ON CONTENT-BASED MULTIMEDIA INDEXING, CBMI 2022, 2022, : 131 - 135
  • [30] Adaptive Multi-task Learning for Few-Shot Object Detection
    Ren, Yan
    Li, Yanling
    Kong, Adams Wai-Kin
    COMPUTER VISION-ECCV 2024, PT VII, 2025, 15065 : 297 - 314