Counterfactual Intervention Feature Transfer for Visible-Infrared Person Re-identification

Cited: 21
Authors
Li, Xulin [1 ,2 ]
Lu, Yan [1 ,2 ]
Liu, Bin [1 ,2 ]
Liu, Yating [3 ]
Yin, Guojun [1 ,2 ]
Chu, Qi [1 ,2 ]
Huang, Jinyang [1 ,2 ]
Zhu, Feng [4 ]
Zhao, Rui [4 ,5 ]
Yu, Nenghai [1 ,2 ]
Affiliations
[1] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei, Peoples R China
[2] Chinese Acad Sci, Key Lab Electromagnet Space Informat, Beijing, Peoples R China
[3] Univ Sci & Technol China, Sch Data Sci, Hefei, Peoples R China
[4] SenseTime Res, Hong Kong, Peoples R China
[5] Shanghai Jiao Tong Univ, Qing Yuan Res Inst, Shanghai, Peoples R China
Keywords
Person re-identification; Counterfactual; Cross-modality;
DOI
10.1007/978-3-031-19809-0_22
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Graph-based models have recently achieved great success in person re-identification: they first compute a graph topology (pairwise affinities) among different people and then pass information along it to obtain stronger features. However, we find that existing graph-based methods for visible-infrared person re-identification (VI-ReID) generalize poorly because of two issues: 1) the train-test modality balance gap, a property of the VI-ReID task, where the amount of data from the two modalities is balanced during training but extremely unbalanced at inference, which degrades the generalization of graph-based VI-ReID methods; and 2) a sub-optimal topology structure caused by learning the graph module end-to-end, where the joint learning of backbone features and graph features weakens the learning of the graph topology, so the topology does not generalize well at inference. In this paper, we propose a Counterfactual Intervention Feature Transfer (CIFT) method to tackle these problems. Specifically, Homogeneous and Heterogeneous Feature Transfer (H²FT) is designed to reduce the train-test modality balance gap through two independent, well-designed types of graph modules and an unbalanced-scenario simulation. In addition, Counterfactual Relation Intervention (CRI) uses counterfactual intervention and causal-effect tools to highlight the role of the topology structure throughout training, which makes the learned graph topology more reliable. Extensive experiments on standard VI-ReID benchmarks demonstrate that CIFT outperforms state-of-the-art methods under various settings.
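The two ingredients in the abstract are easier to picture with a small sketch. The following is a minimal, hypothetical PyTorch illustration (not the authors' released code) of a graph feature-transfer layer and a counterfactual-intervention-style loss: the learned affinity matrix is replaced by an uninformative uniform graph, and the identity loss is applied to the difference between the factual and intervened predictions, so that the graph topology itself is forced to carry discriminative information. The names GraphFeatureTransfer and counterfactual_effect_loss, and the choice of a uniform affinity as the intervention, are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphFeatureTransfer(nn.Module):
    """Hypothetical sketch: compute pairwise affinities (graph topology) among
    person features and pass messages along them to refine each feature."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def affinity(self, feats):
        # feats: (N, dim) backbone features of N person images
        q, k = self.query(feats), self.key(feats)
        return F.softmax(q @ k.t() / feats.size(1) ** 0.5, dim=-1)  # (N, N)

    def forward(self, feats, affinity=None):
        # Use the learned topology unless an intervened affinity is supplied.
        a = self.affinity(feats) if affinity is None else affinity
        return feats + a @ self.value(feats)  # message passing with a residual


def counterfactual_effect_loss(graph, classifier, feats, labels):
    """Supervise the *effect* of the learned topology: the factual prediction
    minus a counterfactual prediction obtained by intervening on the affinity
    (here, an uninformative uniform graph; an assumption for illustration)."""
    factual = classifier(graph(feats))
    n = feats.size(0)
    uniform = torch.full((n, n), 1.0 / n, device=feats.device)
    counterfactual = classifier(graph(feats, affinity=uniform))
    # The total effect attributable to the topology should still predict identity.
    return F.cross_entropy(factual - counterfactual, labels)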
Pages: 381 - 398
Number of Pages: 18
Related Papers
50 records in total
  • [21] Wang, Yiming; Chen, Xiaolong; Chai, Yi; Xu, Kaixiong; Jiang, Yutao; Liu, Bowen. Visible-infrared person re-identification with complementary feature fusion and identity consistency learning. INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2025, 16 (01): 703-719.
  • [22] Liu, Huilin; Wu, Yuhao; Tang, Zihan; Li, Xiaolong; Su, Shuzhi; Liang, Xingzhu; Zhang, Pengfei. Multi-granularity enhanced feature learning for visible-infrared person re-identification. JOURNAL OF SUPERCOMPUTING, 2025, 81 (01).
  • [23] Sun, Jia; Li, Yanfeng; Chen, Houjin; Peng, Yahui; Zhu, Jinlei. Visible-infrared person re-identification model based on feature consistency and modal indistinguishability. MACHINE VISION AND APPLICATIONS, 2023, 34 (01).
  • [24] Cheng, Yunzhou; Xiao, Guoqiang; Tang, Xiaoqin; Ma, Wenzhuo; Gou, Xinye. Two-phase feature fusion network for visible-infrared person re-identification. 2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021: 1149-1153.
  • [25] Zhang, Qiang; Lai, Changzhou; Liu, Jianan; Huang, Nianchang; Han, Jungong. FMCNet: Feature-level modality compensation for visible-infrared person re-identification. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022: 7339-7348.
  • [26] Guo, Jifeng; Pang, Zhiqi. Image-text feature learning for unsupervised visible-infrared person re-identification. IMAGE AND VISION COMPUTING, 2025, 158.
  • [27] Yu, Hao; Cheng, Xu; Peng, Wei; Liu, Weihao; Zhao, Guoying. Modality unifying network for visible-infrared person re-identification. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023: 11151-11161.
  • [28] Liu, Jianan; Wang, Jialiang; Huang, Nianchang; Zhang, Qiang; Han, Jungong. Revisiting modality-specific feature compensation for visible-infrared person re-identification. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (10): 7226-7240.
  • [30] Zhang, Guoqing; Wang, Zhun Zhun; Wang, Hairui; Zhou, Jieqiong; Zheng, Yuhui. Progressive discrepancy elimination for visible-infrared person re-identification. NEUROCOMPUTING, 2024, 607.