Breaking Immutable: Information-Coupled Prototype Elaboration for Few-Shot Object Detection

Cited by: 0
Authors
Lu, Xiaonan [1 ,2 ,3 ,4 ]
Diao, Wenhui [1 ,2 ,3 ,4 ]
Mao, Yongqiang [1 ,2 ,3 ,4 ]
Li, Junxi [1 ,2 ,3 ,4 ]
Wang, Peijin [1 ,2 ]
Sun, Xian [1 ,2 ,3 ,4 ]
Fu, Kun [1 ,2 ,3 ,4 ]
Affiliations
[1] Chinese Acad Sci, Aerosp Informat Res Inst, Beijing, Peoples R China
[2] Aerosp Informat Res Inst, Key Lab Network Informat Syst Technol NIST, Beijing, Peoples R China
[3] Univ Chinese Acad Sci, Beijing, Peoples R China
[4] Univ Chinese Acad Sci, Sch Elect Elect & Commun Engn, Beijing, Peoples R China
Funding
National Key R&D Program of China;
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Few-shot object detection, which expects detectors to recognize novel classes from only a few instances, has made conspicuous progress. However, the prototypes extracted by existing meta-learning-based methods still suffer from insufficient representative information and lack awareness of query images, so they cannot be adaptively tailored to different query images. First, only the support images are involved in extracting prototypes, leaving scarce perceptual information about the query images. Second, all pixels of all support images are treated equally when features are aggregated into prototype vectors, so salient objects are overwhelmed by the cluttered background. In this paper, we propose an Information-Coupled Prototype Elaboration (ICPE) method to generate specific and representative prototypes for each query image. Concretely, a conditional information coupling module is introduced to couple information from the query branch into the support branch, strengthening the query-perceptual information in support features. Besides, we design a prototype dynamic aggregation module that dynamically adjusts intra-image and inter-image aggregation weights to highlight the salient information useful for detecting query images. Experimental results on both Pascal VOC and MS COCO demonstrate that our method achieves state-of-the-art performance in almost all settings. Code will be available at: https://github.com/lxn96/ICPE.
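The dynamic aggregation idea in the abstract — weighting support pixels within each image and then weighting the support images against each other, both conditioned on the query — can be sketched as follows. This is an illustrative NumPy sketch based only on the abstract: the actual ICPE module operates on learned deep features with trained weight predictors, and all names here (`dynamic_prototype`, `support_feats`, `query_feat`) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_prototype(support_feats, query_feat):
    """Aggregate support features into a query-conditioned prototype.

    support_feats: (N, P, C) -- N support images, P pixels each, C channels.
    query_feat:    (C,)      -- pooled feature of the query image.
    """
    # Intra-image weights: score each support pixel by similarity to the
    # query, so salient object pixels outweigh cluttered background.
    pixel_scores = support_feats @ query_feat                     # (N, P)
    intra_w = softmax(pixel_scores, axis=1)                       # (N, P)
    per_image = (intra_w[..., None] * support_feats).sum(axis=1)  # (N, C)

    # Inter-image weights: support images more relevant to this query
    # contribute more to the final prototype.
    image_scores = per_image @ query_feat                         # (N,)
    inter_w = softmax(image_scores, axis=0)                       # (N,)
    return (inter_w[:, None] * per_image).sum(axis=0)             # (C,)
```

Because both weight sets depend on `query_feat`, a different query image yields a different prototype from the same support set — the "breaking immutable" property the title refers to.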
Pages: 1844-1852 (9 pages)
Related Papers (50 in total)
  • [21] Industrial few-shot fractal object detection
    Huang, Haoran
    Luo, Xiaochuan
    Yang, Chen
    NEURAL COMPUTING & APPLICATIONS, 2023, 35 (28): 21055-21069
  • [22] Hallucination Improves Few-Shot Object Detection
    Zhang, Weilin
    Wang, Yu-Xiong
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021: 13003-13012
  • [23] A Closer Look at Few-Shot Object Detection
    Liu, Yuhao
    Dong, Le
    He, Tengyang
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT VIII, 2024, 14432: 430-447
  • [24] Few-Shot Object Detection: A Comprehensive Survey
    Koehler, Mona
    Eisenbach, Markus
    Gross, Horst-Michael
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (09): 11958-11978
  • [26] Transformation Invariant Few-Shot Object Detection
    Li, Aoxue
    Li, Zhenguo
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021: 3093-3101
  • [27] Discriminative Prototype Learning for Few-Shot Object Detection in Remote-Sensing Images
    Guo, Manke
    You, Yanan
    Liu, Fang
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61: 1-13
  • [28] Few-Shot Object Detection with Foundation Models
    Han, Guangxing
    Lim, Ser-Nam
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024: 28608-28618
  • [29] Few-Shot Object Detection with Weight Imprinting
    Dingtian Yan
    Jitao Huang
    Hai Sun
    Fuqiang Ding
    COGNITIVE COMPUTATION, 2023, 15: 1725-1735
  • [30] Few-Shot Object Detection in Unseen Domains
    Guirguis, Karim
    Eskandar, George
    Kayser, Matthias
    Yang, Bin
    Beyerer, Juergen
    2022 16TH INTERNATIONAL CONFERENCE ON SIGNAL-IMAGE TECHNOLOGY & INTERNET-BASED SYSTEMS, SITIS, 2022: 98-107