Dynamic relevance learning for few-shot object detection

Cited by: 0
Authors
Liu, Weijie [1 ]
Cai, Xiaojie [1 ]
Wang, Chong [1 ]
Li, Haohe [1 ]
Yu, Shenghao [1 ]
Affiliations
[1] Ningbo Univ, Fac Elect Engn & Comp Sci, Ningbo 315000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Few-shot object detection; Meta R-CNN; Graph convolutional networks; Dynamic relevance learning;
DOI
10.1007/s11760-024-03774-1
Chinese Library Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808 ; 0809 ;
Abstract
Expensive bounding-box annotations have limited the development of the object detection task, making it necessary to focus on the more challenging task of few-shot object detection, which requires a detector to recognize objects of novel classes from only a few training samples. Many popular methods that adopt a meta-learning-style training scheme, such as the Meta R-CNN series, have achieved promising performance. However, the support data are used only as class attention to guide the detection of the query images each time, and their relevance to each other remains unexploited. Moreover, many recent works treat the support data and query images as independent branches without considering the relationship between them. To address this issue, we propose a dynamic relevance learning model, which exploits the relationships between all support images and the Regions of Interest (RoIs) on the query images to construct a dynamic graph convolutional network (GCN). By adjusting the prediction distribution of the base detector with the output of this GCN, the proposed model serves as a hard auxiliary classification task that implicitly guides the detector toward better class representations. Comprehensive experiments on the Pascal VOC and MS-COCO datasets achieve competitive results, demonstrating the effectiveness of the model in learning more generalized features. Our code is available at https://github.com/liuweijie19980216/DRL-for-FSOD.
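The sketch below illustrates, in plain PyTorch, one plausible reading of the idea described in the abstract: a graph is built over support class features and query RoI features, a graph-convolution step propagates relevance between them, and the resulting auxiliary class scores are blended with the base detector's prediction distribution. The layer names, feature sizes, and the exact blending rule are assumptions made for clarity; the authors' repository (https://github.com/liuweijie19980216/DRL-for-FSOD) is the authoritative implementation.

```python
# Minimal, illustrative sketch of a dynamic-relevance GCN; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicRelevanceGCN(nn.Module):
    """Builds a dynamic graph over support class features and query RoI features,
    then uses one graph-convolution step to produce auxiliary class scores."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.edge_proj = nn.Linear(feat_dim, feat_dim, bias=False)  # learns edge affinities
        self.node_update = nn.Linear(feat_dim, feat_dim)            # GCN node transform
        self.aux_classifier = nn.Linear(feat_dim, num_classes)      # auxiliary classifier

    def forward(self, support_feats: torch.Tensor, roi_feats: torch.Tensor):
        # support_feats: (C, D), one prototype per class; roi_feats: (R, D) query RoIs.
        nodes = torch.cat([support_feats, roi_feats], dim=0)        # (C+R, D) graph nodes

        # Dynamic adjacency: pairwise affinities between projected node features,
        # row-normalized so each node aggregates a convex mix of its neighbours.
        proj = self.edge_proj(nodes)
        adj = F.softmax(proj @ proj.t() / proj.shape[-1] ** 0.5, dim=-1)

        # One graph-convolution step: aggregate neighbours, transform, activate.
        nodes = F.relu(self.node_update(adj @ nodes))

        # Auxiliary classification over the RoI nodes only.
        roi_nodes = nodes[support_feats.shape[0]:]
        return self.aux_classifier(roi_nodes)                       # (R, num_classes)


def adjust_detector_logits(det_logits, aux_logits, alpha=0.5):
    """Blend the base detector's class distribution with the GCN's auxiliary
    distribution (one possible interpretation of 'adjusting the prediction
    distribution'; the paper may combine them differently)."""
    return (1 - alpha) * det_logits.softmax(-1) + alpha * aux_logits.softmax(-1)


if __name__ == "__main__":
    gcn = DynamicRelevanceGCN(feat_dim=256, num_classes=20)
    support = torch.randn(20, 256)   # 20 class prototypes from the support set
    rois = torch.randn(128, 256)     # 128 RoI features from a query image
    aux = gcn(support, rois)
    blended = adjust_detector_logits(torch.randn(128, 20), aux)
    print(aux.shape, blended.shape)  # torch.Size([128, 20]) torch.Size([128, 20])
```

In this reading, the auxiliary branch never replaces the detector's classifier; it only re-weights its output, so the detector is pushed to learn class representations that remain consistent with the support set.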
Pages: 10
Related papers
50 records in total
  • [31] Mixed Supervision for Instance Learning in Object Detection with Few-shot Annotation
    Zhong, Yi
    Wang, Chengyao
    Li, Shiyong
    Zhou, Zhu
    Wang, Yaowei
    Zheng, Wei-Shi
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022,
  • [32] Few-Shot Object Detection with Weight Imprinting
    Dingtian Yan
    Jitao Huang
    Hai Sun
    Fuqiang Ding
    Cognitive Computation, 2023, 15 : 1725 - 1735
  • [33] Few-Shot Object Detection with Foundation Models
    Han, Guangxing
    Lim, Ser-Nam
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 28608 - 28618
  • [34] Few-Shot Object Detection in Unseen Domains
    Guirguis, Karim
    Eskandar, George
    Kayser, Matthias
    Yang, Bin
    Beyerer, Juergen
    2022 16TH INTERNATIONAL CONFERENCE ON SIGNAL-IMAGE TECHNOLOGY & INTERNET-BASED SYSTEMS, SITIS, 2022, : 98 - 107
  • [35] Few-Shot Learning with Novelty Detection
    Bjerge, Kim
    Bodesheim, Paul
    Karstoft, Henrik
    DEEP LEARNING THEORY AND APPLICATIONS, PT I, DELTA 2024, 2024, 2171 : 340 - 363
  • [36] IMPROVING FEW-SHOT OBJECT DETECTION WITH OBJECT PART PROPOSALS
    Chevalley, Arthur
    Tomoiaga, Ciprian
    Detyniecki, Marcin
    Russwurm, Marc
    Tuia, Devis
    IGARSS 2023 - 2023 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, 2023, : 6502 - 6505
  • [37] Multiscale Dynamic Attention and Hierarchical Spatial Aggregation for Few-Shot Object Detection
    An, Yining
    Song, Chunlin
    APPLIED SCIENCES-BASEL, 2025, 15 (03):
  • [38] Sampling-invariant fully metric learning for few-shot object detection
    Leng, Jiaxu
    Chen, Taiyue
    Gao, Xinbo
    Mo, Mengjingcheng
    Yu, Yongtao
    Zhang, Yan
    NEUROCOMPUTING, 2022, 511 : 54 - 66
  • [39] Few-Shot Class-Incremental Learning for Classification and Object Detection: A Survey
    Zhang, Jinghua
    Liu, Li
    Silven, Olli
    Pietikainen, Matti
    Hu, Dewen
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2025, 47 (04) : 2924 - 2945
  • [40] Dynamic Knowledge Path Learning for Few-Shot Learning
    Li, Jingzhu
    Yin, Zhe
    Yang, Xu
    Jiao, Jianbin
    Ding, Ye
    BIG DATA MINING AND ANALYTICS, 2025, 8 (02): : 479 - 495