Gradient-Aware Domain-Invariant Learning for Domain Generalization

Cited by: 0
Authors
Feng Hou [1 ]
Yao Zhang [2 ]
Yang Liu [3 ]
Jin Yuan [1 ]
Cheng Zhong [2 ]
Yang Zhang [4 ]
Zhongchao Shi [3 ]
Jianping Fan [3 ]
Zhiqiang He [3 ]
Affiliations
[1] Chinese Academy of Sciences, Institute of Computing Technology
[2] University of Chinese Academy of Sciences
[3] AI Lab
[4] Lenovo Research
[5] Southeast University
[6] Lenovo Ltd
Keywords
Domain shift; Domain generalization; Domain-invariant parameters; Sparse
DOI
10.1007/s00530-024-01613-4
Abstract
In realistic scenarios, the effectiveness of Deep Neural Networks is hindered by domain shift, where discrepancies between training (source) and testing (target) domains lead to poor generalization on previously unseen data. The Domain Generalization (DG) paradigm addresses this challenge by developing a general model that relies solely on source domains, aiming for robust performance in unknown domains. Although prior augmentation-based methods have made progress by introducing more diversity into the known distributions, DG still suffers from overfitting due to limited domain-specific information. Therefore, unlike prior DG methods that treat all parameters equally, we propose a Gradient-Aware Domain-Invariant Learning mechanism that adaptively recognizes and emphasizes domain-invariant parameters. Specifically, two novel modules, Domain Decoupling and Combination and Domain-Invariance-Guided Backpropagation (DIGB), are introduced: the former generates contrastive samples that share the same domain-invariant features, and the latter selectively prioritizes parameters whose optimization directions agree across contrastive sample pairs, thereby enhancing domain robustness. Additionally, a sparse variant of DIGB achieves a trade-off between performance and efficiency. Extensive experiments on various domain generalization benchmarks demonstrate that the proposed method achieves state-of-the-art performance with strong generalization capabilities.
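The core idea of DIGB, as described in the abstract, can be sketched as follows. This is a minimal illustrative reconstruction from the abstract alone, not the authors' implementation: parameters whose gradients agree in sign across a contrastive pair (samples sharing the same domain-invariant features) are treated as domain-invariant and updated fully, while conflicting directions are damped. The names `digb_update` and `damping` are hypothetical.

```python
def digb_update(params, grads_a, grads_b, lr=0.1, damping=0.1):
    """Update each scalar parameter using the averaged gradient of a
    contrastive sample pair; parameters whose two gradients conflict in
    sign are down-weighted by `damping` (illustrative sketch only)."""
    updated = []
    for p, ga, gb in zip(params, grads_a, grads_b):
        avg = 0.5 * (ga + gb)
        # Same sign across the pair -> treated as a domain-invariant direction.
        weight = 1.0 if ga * gb > 0 else damping
        updated.append(p - lr * weight * avg)
    return updated

params = [1.0, -0.5, 2.0]
grads_a = [0.4, -0.2, 0.3]   # gradients from sample A
grads_b = [0.6, 0.2, 0.1]    # gradients from its contrastive counterpart B
print(digb_update(params, grads_a, grads_b))
```

In a real network the same sign-agreement test would be applied elementwise to per-parameter gradient tensors from the two forward/backward passes; the sparse variant mentioned in the abstract would presumably restrict this comparison to a subset of parameters to reduce overhead.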
Related Papers
50 results in total
  • [1] Gradient-aware domain-invariant learning for domain generalization
    Hou, Feng
    Zhang, Yao
    Liu, Yang
    Yuan, Jin
    Zhong, Cheng
    Zhang, Yang
    Shi, Zhongchao
    Fan, Jianping
    He, Zhiqiang
    MULTIMEDIA SYSTEMS, 2025, 31 (01)
  • [2] Domain-Invariant Feature Learning for Domain Adaptation
    Tu, Ching-Ting
    Lin, Hsiau-Wen
    Lin, Hwei Jen
    Tokuyama, Yoshimasa
    Chu, Chia-Hung
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2023, 37 (03)
  • [3] Learning Domain-Invariant Representations from Text for Domain Generalization
    Zhang, Huihuang
    Hu, Haigen
    Chen, Qi
    Zhou, Qianwei
    Jiang, Mingfeng
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT VIII, 2024, 14432 : 118 - 129
  • [4] ATTENTIVE ADVERSARIAL LEARNING FOR DOMAIN-INVARIANT TRAINING
    Meng, Zhong
    Li, Jinyu
    Gong, Yifan
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 6740 - 6744
  • [5] LEARNING DOMAIN-INVARIANT TRANSFORMATION FOR SPEAKER VERIFICATION
    Zhang, Hanyi
    Wang, Longbiao
    Lee, Kong Aik
    Liu, Meng
    Dang, Jianwu
    Chen, Hui
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 7177 - 7181
  • [6] Learning Domain-Invariant Representations of Histological Images
    Lafarge, Maxime W.
    Pluim, Josien P. W.
    Eppenhof, Koen A. J.
    Veta, Mitko
    FRONTIERS IN MEDICINE, 2019, 6
  • [7] Learning Domain-Invariant and Discriminative Features for Homogeneous Unsupervised Domain Adaptation
    Zhang, Yun
    Wang, Nianbin
    Cai, Shaobin
    CHINESE JOURNAL OF ELECTRONICS, 2020, 29 (06): 1119 - 1125
  • [8] Learning Domain-Invariant Subspace Using Domain Features and Independence Maximization
    Yan, Ke
    Kou, Lu
    Zhang, David
    IEEE TRANSACTIONS ON CYBERNETICS, 2018, 48 (01) : 288 - 299
  • [9] A Bit More Bayesian: Domain-Invariant Learning with Uncertainty
    Xiao, Zehao
    Shen, Jiayi
    Zhen, Xiantong
    Shao, Ling
    Snoek, Cees G. M.
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [10] On Learning Domain-Invariant Representations for Transfer Learning with Multiple Sources
    Trung Phung
    Trung Le
    Long Vuong
    Toan Tran
    Anh Tran
    Bui, Hung
    Dinh Phung
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34