Transductive Information Maximization For Few-Shot Learning

Times Cited: 0
Authors
Boudiaf, Malik [1 ]
Masud, Ziko Imtiaz [1 ]
Rony, Jerome [1 ]
Dolz, Jose [1 ]
Piantanida, Pablo [2 ]
Ben Ayed, Ismail [1 ]
Affiliations
[1] ETS Montreal, Montreal, PQ, Canada
[2] Univ Paris Saclay, Cent Supelec CNRS, Gif Sur Yvette, France
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We introduce Transductive Information Maximization (TIM) for few-shot learning. Our method maximizes the mutual information between the query features and their label predictions for a given few-shot task, in conjunction with a supervision loss based on the support set. Furthermore, we propose a new alternating-direction solver for our mutual-information loss, which substantially speeds up transductive-inference convergence over gradient-based optimization, while yielding similar accuracy. TIM inference is modular: it can be used on top of any base-training feature extractor. Following standard transductive few-shot settings, our comprehensive experiments demonstrate that TIM outperforms state-of-the-art methods significantly across various datasets and networks, even when used on top of a fixed feature extractor trained with simple cross-entropy on the base classes, without resorting to complex meta-learning schemes. It consistently brings between 2% and 5% improvement in accuracy over the best-performing method, not only on all the well-established few-shot benchmarks but also in more challenging scenarios, with domain shifts and larger numbers of classes.
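The objective summarized in the abstract can be illustrated with a short sketch. The following is a minimal, hedged illustration (not the authors' released code) of a TIM-style loss: a cross-entropy supervision term on the labelled support set plus a mutual-information term on the query predictions, where the mutual information is estimated as the marginal entropy of the predicted label distribution minus the mean conditional entropy. The tensor shapes, the weight `lam`, and the function name `tim_loss` are illustrative assumptions; the paper additionally proposes an alternating-direction solver rather than relying solely on gradient-based minimization of this loss.

```python
import torch
import torch.nn.functional as F

def tim_loss(support_logits, support_labels, query_logits, lam=0.1):
    """Illustrative TIM-style objective (assumed shapes: logits are
    [n_samples, n_classes], labels are integer class indices).

    Minimizing this loss maximizes I(X_q; Y_q) = H(Y_q) - H(Y_q | X_q)
    on the query set while fitting the labelled support set."""
    # Supervision term: standard cross-entropy on the support set.
    ce = F.cross_entropy(support_logits, support_labels)

    # Soft label predictions for the unlabelled query samples.
    q_probs = query_logits.softmax(dim=-1)

    # Conditional entropy H(Y_q | X_q): pushes each query prediction
    # toward a confident (low-entropy) assignment.
    cond_ent = -(q_probs * torch.log(q_probs + 1e-12)).sum(dim=-1).mean()

    # Marginal entropy H(Y_q): pushes the class-marginal of the predictions
    # toward a balanced (high-entropy) distribution over the task's classes.
    marginal = q_probs.mean(dim=0)
    marg_ent = -(marginal * torch.log(marginal + 1e-12)).sum()

    # Maximizing mutual information = maximizing (marg_ent - cond_ent),
    # i.e. minimizing (cond_ent - marg_ent), weighted by lam (assumed value).
    return ce + lam * (cond_ent - marg_ent)
```

In the transductive setting described above, such a loss would be minimized at inference time over the classifier parameters only, with the base-trained feature extractor kept fixed, which matches the modular usage the abstract describes.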
Pages: 13
Related Papers
50 records in total
  • [31] Feature Reconstruction-guided Transductive Few-Shot Learning with Distribution Statistics Optimization
    Sun, Zhe
    Wang, Mingyang
    Ran, Xiangchen
    Guo, Pengfei
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 270
  • [32] Few-Shot Learning Through an Information Retrieval Lens
    Triantafillou, Eleni
    Zemel, Richard
    Urtasun, Raquel
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [33] Transductive Learning for Textual Few-Shot Classification in API-based Embedding Models
    Colombo, Pierre
    Pellegrain, Victor
    Boudiaf, Malik
    Storchan, Victor
    Tami, Myriam
    Ben Ayed, Ismail
    Hudelot, Celine
    Piantanida, Pablo
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 4214 - 4231
  • [34] Attribute-guided Dynamic Routing Graph Network for Transductive Few-shot Learning
    Chen, Chaofan
    Yang, Xiaoshan
    Yan, Ming
    Xu, Changsheng
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 6259 - 6268
  • [35] Transductive meta-learning with enhanced feature ensemble for few-shot semantic segmentation
    Karimi, Amin
    Poullis, Charalambos
    SCIENTIFIC REPORTS, 14
  • [36] STTMC: A Few-Shot Spatial Temporal Transductive Modulation Classifier
    Shi, Yunhao
    Xu, Hua
    Qi, Zisen
    Zhang, Yue
    Wang, Dan
    Jiang, Lei
    IEEE TRANSACTIONS ON MACHINE LEARNING IN COMMUNICATIONS AND NETWORKING, 2024, 2 : 546 - 559
  • [37] Temporal Transductive Inference for Few-Shot Video Object Segmentation
    Siam, Mennatullah
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2025,
  • [38] Feature Transductive Distribution Optimization for Few-Shot Image Classification
    Liu, Qing
    Tang, Xianlun
    Wang, Ying
    Li, Xingchen
    Jiang, Xinyan
    Li, Weisheng
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2025, 35 (03) : 2230 - 2243
  • [39] Transductive Graph-Attention Network for Few-shot Classification
    Pan, Lili
    Liu, Weifeng
    2022 16TH IEEE INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING (ICSP2022), VOL 1, 2022, : 190 - 195
  • [40] Capturing the few-shot class distribution: Transductive distribution optimization
    Liu, Xinyue
    Liu, Ligang
    Liu, Han
    Zhang, Xiaotong
    PATTERN RECOGNITION, 2023, 138