TeSLA: Test-Time Self-Learning With Automatic Adversarial Augmentation

Cited: 9
Authors
Tomar, Devavrat [1 ]
Vray, Guillaume [1 ]
Bozorgtabar, Behzad [1 ,2 ]
Thiran, Jean-Philippe [1 ,2 ]
Affiliations
[1] Ecole Polytech Fed Lausanne, Lausanne, Switzerland
[2] CHU Vaudois, Lausanne, Switzerland
Keywords
SEGMENTATION
DOI
10.1109/CVPR52729.2023.01948
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Most recent test-time adaptation methods focus only on classification tasks, use specialized network architectures, destroy model calibration, or rely on lightweight information from the source domain. To tackle these issues, this paper proposes a novel Test-time Self-Learning method with automatic Adversarial augmentation, dubbed TeSLA, for adapting a pre-trained source model to unlabeled streaming test data. In contrast to conventional self-learning methods based on cross-entropy, we introduce a new test-time loss function through an implicitly tight connection with the mutual information and online knowledge distillation. Furthermore, we propose a learnable, efficient adversarial augmentation module that further enhances online knowledge distillation by simulating high-entropy augmented images. Our method achieves state-of-the-art classification and segmentation results on several benchmarks and types of domain shifts, particularly on challenging measurement shifts of medical images. TeSLA also benefits from several desirable properties compared to competing methods in terms of calibration, uncertainty metrics, insensitivity to model architectures, and source training strategies, all supported by extensive ablations. Our code and models are available at https://github.com/devavratTomar/TeSLA.
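The abstract describes two ingredients: a test-time self-learning loss tied to mutual information and online knowledge distillation from a teacher model, and an augmentation module that produces high-entropy views of the test images. As a rough illustration only, the PyTorch sketch below shows the generic shape of such a test-time loop; the toy classifier, the `augment` placeholder, and the unweighted loss combination are assumptions for demonstration and do not reproduce the authors' implementation (see the linked repository for that).

```python
# Illustrative sketch only -- NOT the authors' TeSLA implementation.
# Generic test-time self-learning loop: an EMA teacher produces soft
# pseudo-labels, and the student is trained on augmented views with a
# distillation term plus a mutual-information surrogate.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def ema_update(teacher, student, momentum=0.99):
    """Exponential moving average of student weights into the teacher."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(momentum).add_(s, alpha=1.0 - momentum)

def mutual_info_surrogate(probs, eps=1e-8):
    """H(E[p]) - E[H(p)] is a standard mutual-information estimate;
    return its negative so minimizing the loss maximizes it
    (confident per-sample predictions, diverse batch marginal)."""
    marginal = probs.mean(dim=0)
    h_marginal = -(marginal * (marginal + eps).log()).sum()
    h_cond = -(probs * (probs + eps).log()).sum(dim=1).mean()
    return h_cond - h_marginal

def augment(x):
    """Hypothetical stand-in for the learnable adversarial augmentation."""
    return x + 0.05 * torch.randn_like(x)

# Toy stand-in for a pre-trained source model (10-class classifier).
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

for step in range(3):  # streaming, unlabeled test batches
    x = torch.randn(16, 3, 32, 32)  # random stand-in for a test batch
    with torch.no_grad():
        pseudo = F.softmax(teacher(x), dim=1)      # teacher soft pseudo-labels
    probs = F.softmax(student(augment(x)), dim=1)  # student on augmented view
    distill = F.kl_div((probs + 1e-8).log(), pseudo, reduction="batchmean")
    loss = distill + mutual_info_surrogate(probs)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)  # online knowledge distillation target
```

In the paper itself, the augmentation module is additionally trained adversarially to raise the entropy of the student's predictions, and the loss is derived from a tight connection with mutual information rather than the plain surrogate used above.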
Pages: 20341–20350
Page count: 10
Related Papers (items [21]–[30] of 50)
  • [21] Diverse Data Augmentation with Diffusions for Effective Test-time Prompt Tuning
    Feng, Chun-Mei
    Yu, Kai
    Liu, Yong
    Khan, Salman
    Zuo, Wangmeng
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 2704 - 2714
  • [22] Greedy Policy Search: A Simple Baseline for Learnable Test-Time Augmentation
    Molchanov, Dmitry
    Lyzhov, Alexander
    Molchanova, Yuliya
    Ashukha, Arsenii
    Vetrov, Dmitry
    CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE (UAI 2020), 2020, 124 : 1308 - 1317
  • [23] Deep learning with test-time augmentation for radial endobronchial ultrasound image differentiation: a multicentre verification study
    Yu, Kai-Lun
    Tseng, Yi-Shiuan
    Yang, Han-Ching
    Liu, Chia-Jung
    Kuo, Po-Chih
    Lee, Meng-Rui
    Huang, Chun-Da
    Kuo, Lu-Cheng
    Wang, Jann-Yuan
    Ho, Chao-Chi
    Shih, Jin-Yuan
    Yu, Chong-Jen
    BMJ OPEN RESPIRATORY RESEARCH, 2023, 10 (01)
  • [24] STTA: enhanced text classification via selective test-time augmentation
    Xiong H.
    Zhang X.
    Yang L.
    Xiang Y.
    Zhang Y.
    PeerJ Computer Science, 2023, 9
  • [25] Training- and Test-Time Data Augmentation for Hyperspectral Image Segmentation
    Nalepa, Jakub
    Myller, Michal
    Kawulok, Michal
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2020, 17 (02) : 292 - 296
  • [26] Efficient improvement of classification accuracy via selective test-time augmentation
    Son, Jongwook
    Kang, Seokho
    INFORMATION SCIENCES, 2023, 642
  • [27] Boosting anomaly detection using unsupervised diverse test-time augmentation
    Cohen, Seffi
    Goldshlager, Niv
    Rokach, Lior
    Shapira, Bracha
    INFORMATION SCIENCES, 2023, 626 : 821 - 836
  • [28] Improving Medical Image Segmentation Using Test-Time Augmentation with MedSAM
    Nazzal, Wasfieh
    Thurnhofer-Hemsi, Karl
    Lopez-Rubio, Ezequiel
    MATHEMATICS, 2024, 12 (24)
  • [29] Robustness test-time augmentation via learnable aggregation and anomaly detection
    Xiong H.
    Yang L.
    Fang G.
    Li J.
    Xiang Y.
    Zhang Y.
    Journal of Intelligent and Fuzzy Systems, 2024, 46 (04): 8783 - 8798
  • [30] Quantifying Object Detection Uncertainty in Autonomous Driving with Test-Time Augmentation
    Magalhaes, Rui
    Bernardino, Alexandre
    2023 IEEE INTELLIGENT VEHICLES SYMPOSIUM, IV, 2023