Exploring Latent Transferability of Feature Components

Cited by: 0
Authors
Wang, Zhengshan [1 ]
Chen, Long [1 ]
He, Juan [1 ]
Yang, Linyao [2 ,3 ]
Wang, Fei-Yue [3 ]
Affiliations
[1] Univ Macau, Fac Sci & Technol, Dept Comp & Informat Sci, Macau 999078, Peoples R China
[2] Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China
[3] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
Keywords
Unsupervised domain adaptation; Feature disentanglement; Adversarial learning; Dynamic learning
DOI
10.1016/j.patcog.2024.111184
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Feature disentanglement techniques have been widely employed to separate transferable (domain-invariant) features from non-transferable (domain-specific) features in Unsupervised Domain Adaptation (UDA). However, due to the complex interplay among high-dimensional features, the separated "non-transferable" features may still be partially informative. Suppressing or disregarding them, as is common in previous methods, can overlook their inherent transferability. In this work, we introduce two concepts, Partially Transferable Class Features and Partially Transferable Domain Features (PTCF and PTDF), and propose a succinct feature disentanglement technique. Unlike prior works, we do not seek to thoroughly peel off the non-transferable features, as this is challenging in practice. Instead, we adopt a two-stage strategy consisting of rough feature disentanglement and dynamic adjustment. We name our model ELT because it can systematically Explore Latent Transferability of feature components. ELT automatically evaluates the transferability of internal feature components, dynamically giving more attention to features with high transferability and less to features with low transferability, effectively mitigating negative transfer. Extensive experimental results demonstrate its effectiveness. The code and supplementary file will be available at https://github.com/njtjmc/ELT.
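The dynamic adjustment stage described in the abstract lends itself to a gating formulation. Below is a minimal, hypothetical sketch (not the authors' released code; the module and parameter names are assumptions) of re-weighting disentangled feature components by a learned per-component transferability score, so that components judged more transferable receive more attention downstream.

```python
# Minimal sketch of transferability-based re-weighting of feature components.
# All names (TransferabilityGate, num_components, component_dim) are hypothetical.
import torch
import torch.nn as nn

class TransferabilityGate(nn.Module):
    """Scores each feature component and re-weights it by that score."""
    def __init__(self, num_components: int, component_dim: int):
        super().__init__()
        # Produces one scalar transferability score per component.
        self.scorer = nn.Sequential(
            nn.Linear(component_dim, component_dim // 2),
            nn.ReLU(),
            nn.Linear(component_dim // 2, 1),
        )

    def forward(self, components: torch.Tensor) -> torch.Tensor:
        # components: (batch, num_components, component_dim)
        scores = self.scorer(components).squeeze(-1)      # (batch, num_components)
        weights = torch.softmax(scores, dim=-1)           # higher weight = more transferable
        reweighted = components * weights.unsqueeze(-1)   # attend more to transferable parts
        return reweighted.flatten(start_dim=1)            # fused feature for the classifier

# Usage: after a rough disentanglement step yields, e.g., 4 components of 256 dims each
gate = TransferabilityGate(num_components=4, component_dim=256)
fused = gate(torch.randn(8, 4, 256))   # shape: (8, 1024)
```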
Pages: 12
Related papers (50 in total)
  • [1] Exploring Slow Feature Analysis for Extracting Generative Latent Factors
    Menne, Max
    Schueler, Merlin
    Wiskott, Laurenz
    PROCEEDINGS OF THE 10TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION APPLICATIONS AND METHODS (ICPRAM), 2021, : 120 - 131
  • [2] Discriminative feature alignment: Improving transferability of unsupervised domain adaptation by Gaussian-guided latent alignment
    Wang, Jing
    Chen, Jiahong
    Lin, Jianzhe
    Sigal, Leonid
    Silva, Clarence W. de
    PATTERN RECOGNITION, 2021, 116
  • [3] Exploring Transferability on Adversarial Attacks
    Alvarez, Enrique
    Alvarez, Rafael
    Cazorla, Miguel
    IEEE ACCESS, 2023, 11 : 105545 - 105556
  • [4] Exploring the transferability of safety performance functions
    Farid, Ahmed
    Abdel-Aty, Mohamed
    Lee, Jaeyoung
    Eluru, Naveen
    Wang, Jung-Han
    ACCIDENT ANALYSIS AND PREVENTION, 2016, 94 : 143 - 152
  • [5] Boosting Adversarial Transferability Through Intermediate Feature
    He, Chenghai
    Li, Xiaoqian
    Zhang, Xiaohang
    Zhang, Kai
    Li, Hailing
    Xiong, Gang
    Li, Xuan
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT V, 2023, 14258 : 28 - 39
  • [6] IMPROVING ADVERSARIAL TRANSFERABILITY VIA FEATURE TRANSLATION
    Kim, Yoonji
    Cho, Seungju
    Byun, Junyoung
    Kwon, Myung-Joon
    Kim, Changick
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 3359 - 3363
  • [7] Enhancing the Transferability of Adversarial Examples with Feature Transformation
    Xu, Hao-Qi
    Hu, Cong
    Yin, He-Feng
    MATHEMATICS, 2022, 10 (16)
  • [8] Latent Feature Lasso
    Yen, Ian E. H.
    Lee, Wei-Cheng
    Chang, Sung-En
    Suggala, Arun S.
    Lin, Shou-De
    Ravikumar, Pradeep
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 70, 2017, 70
  • [9] Formalizing and Exploring the Transferability of Inclusive Design Rules
    Sangelkar, Shraddha
    McAdams, Daniel A.
    JOURNAL OF MECHANICAL DESIGN, 2013, 135 (09)