DOMAIN-AGNOSTIC META-LEARNING FOR CROSS-DOMAIN FEW-SHOT CLASSIFICATION

Cited by: 1
Authors
Lee, Wei-Yu [1]
Wang, Jheng-Yu [1]
Wang, Yu-Chiang Frank [2]
Affiliations
[1] MOXA Inc., Technology & Research Corporate Division, Taipei, Taiwan
[2] National Taiwan University, Department of Electrical Engineering, Taipei, Taiwan
Keywords
Meta-learning; Few-shot classification
DOI
10.1109/ICASSP43922.2022.9746025
Chinese Library Classification (CLC)
O42 [Acoustics]
Discipline codes
070206; 082403
Abstract
Few-shot classification requires one to classify instances of novel classes given only a few examples of each class. Although promising meta-learning methods have been proposed recently, there is no guarantee that existing solutions generalize to novel classes from an unseen domain. In this paper, we tackle the challenging task of cross-domain few-shot classification and propose the Domain-Agnostic Meta-Learning (DAML) algorithm. DAML, serving as an optimization strategy, learns to adapt the model to novel classes in both seen and unseen domains using data sampled from multiple domains with desirable task settings. In our experiments, we apply DAML to three popular metric-based models under cross-domain settings. Experiments on several benchmarks (mini-ImageNet, CUB, Cars, Places, Plantae and META-DATASET) show that DAML significantly improves the generalization ability of learning models and addresses cross-domain few-shot classification with promising results.
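As a rough illustration of the optimization strategy described in the abstract, the sketch below shows a first-order, MAML-style training loop whose few-shot episodes are sampled from multiple source domains, so that the shared initialization is pushed toward solutions that adapt well in any of them. This is a minimal PyTorch sketch under our own assumptions; `domains` and its `sample_task` episode sampler are hypothetical placeholders, not the authors' released implementation of DAML.

    # Minimal first-order MAML-style loop over multiple source domains.
    # `sample_task` is a hypothetical episode sampler returning support
    # and query tensors; this is an illustrative sketch, not DAML itself.
    import copy
    import random
    import torch
    import torch.nn.functional as F

    def inner_adapt(model, support_x, support_y, lr_inner=0.01, steps=5):
        """Inner loop: return a task-adapted copy of the model."""
        adapted = copy.deepcopy(model)
        for _ in range(steps):
            loss = F.cross_entropy(adapted(support_x), support_y)
            grads = torch.autograd.grad(loss, tuple(adapted.parameters()))
            with torch.no_grad():  # plain SGD step on the copy's weights
                for p, g in zip(adapted.parameters(), grads):
                    p -= lr_inner * g
        return adapted

    def meta_train_step(model, meta_opt, domains, tasks_per_step=4):
        """Outer loop: average query losses over tasks from several domains."""
        meta_opt.zero_grad()
        for _ in range(tasks_per_step):
            sx, sy, qx, qy = random.choice(domains).sample_task()
            adapted = inner_adapt(model, sx, sy)
            q_loss = F.cross_entropy(adapted(qx), qy)
            # First-order approximation: gradients of the query loss w.r.t.
            # the adapted copy are applied to the shared initialization.
            grads = torch.autograd.grad(q_loss, tuple(adapted.parameters()))
            for p, g in zip(model.parameters(), grads):
                g = g.detach() / tasks_per_step
                p.grad = g if p.grad is None else p.grad + g
        meta_opt.step()  # update the domain-agnostic initialization

In use, `model` would be the backbone of a metric-based few-shot classifier and `domains` a list of episode samplers over the source datasets (e.g., mini-ImageNet, CUB, Cars, Places, Plantae); mixing domains within each outer-loop step, rather than training on one domain at a time, is what discourages the initialization from overfitting any single domain.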
Pages: 1715-1719
Page count: 5
Related papers
50 items in total
  • [21] Meta-Learning Adversarial Domain Adaptation Network for Few-Shot Text Classification. Han, ChengCheng; Fan, Zeqiu; Zhang, Dongxiang; Qiu, Minghui; Gao, Ming; Zhou, Aoying. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 2021: 1664-1673.
  • [22] Powering Finetuning in Few-Shot Learning: Domain-Agnostic Bias Reduction with Selected Sampling. Tao, Ran; Zhang, Han; Zheng, Yutong; Savvides, Marios. Thirty-Sixth AAAI Conference on Artificial Intelligence / Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence / Twelfth Symposium on Educational Advances in Artificial Intelligence, 2022: 8467-8475.
  • [23] Adaptive Domain-Adversarial Few-Shot Learning for Cross-Domain Hyperspectral Image Classification. Ye, Zhen; Wang, Jie; Liu, Huan; Zhang, Yu; Li, Wei. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61.
  • [24] Dual Graph Cross-Domain Few-Shot Learning for Hyperspectral Image Classification. Zhang, Yuxiang; Li, Wei; Zhang, Mengmeng; Tao, Ran. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 3573-3577.
  • [25] Experiments in Cross-Domain Few-Shot Learning for Image Classification: Extended Abstract. Wang, Hongyu; Fraser, Huon; Gouk, Henry; Frank, Eibe; Pfahringer, Bernhard; Mayo, Michael; Holmes, Geoff. ECML PKDD Workshop on Meta-Knowledge Transfer, Vol. 191, 2022: 81-83.
  • [26] SAR Image Classification Using Few-Shot Cross-Domain Transfer Learning. Rostami, Mohammad; Kolouri, Soheil; Eaton, Eric; Kim, Kyungnam. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2019), 2019: 907-915.
  • [27] Few-Shot Learning With Prototype Rectification for Cross-Domain Hyperspectral Image Classification. Qin, Anyong; Yuan, Chaoqi; Li, Qiang; Luo, Xiaoliu; Yang, Feng; Song, Tiecheng; Gao, Chenqiang. IEEE Transactions on Geoscience and Remote Sensing, 2024, 62.
  • [28] Domain Mapping Network for Remote Sensing Cross-Domain Few-Shot Classification. Lu, Xiaoqiang; Gong, Tengfei; Zheng, Xiangtao. IEEE Transactions on Geoscience and Remote Sensing, 2024, 62: 1-11.
  • [29] StyleAdv: Meta Style Adversarial Training for Cross-Domain Few-Shot Learning. Fu, Yuqian; Xie, Yu; Fu, Yanwei; Jiang, Yu-Gang. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 24575-24584.
  • [30] An Adversarial Meta-Training Framework for Cross-Domain Few-Shot Learning. Tian, Pinzhuo; Xie, Shaorong. IEEE Transactions on Multimedia, 2023, 25: 6881-6891.