Visible-infrared person re-identification with complementary feature fusion and identity consistency learning

Cited: 0
Authors
Wang, Yiming [1 ]
Chen, Xiaolong [1 ]
Chai, Yi [1 ]
Xu, Kaixiong [1 ]
Jiang, Yutao [1 ]
Liu, Bowen [2 ]
Affiliations
[1] Chongqing Univ, Sch Automat, Chongqing 400044, Peoples R China
[2] Chongqing Univ Sci & Technol, Sch Intelligent Technol & Engn, Chongqing 401331, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Cross-modality; Person re-identification; Feature fusion; Collaborative adversarial mechanism; PREDICTION; NETWORK;
DOI
10.1007/s13042-024-02282-5
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Dual-mode 24/7 monitoring systems continuously capture visible and infrared images of real scenes. However, differences in color and texture between these cross-modality images pose challenges for visible-infrared person re-identification (ReID). Current methods rely on modality-shared feature learning or on modality-specific information compensation based on style transfer, but modality differences often cause an inevitable loss of valuable feature information during training. To address this issue, a complementary feature fusion and identity consistency learning (CFF-ICL) method is proposed. On the one hand, a multiple feature fusion mechanism based on cross attention encourages the features extracted by the two groups of networks from the same-modality image to exhibit a more pronounced complementary relationship, improving the comprehensiveness of the feature information. On the other hand, a collaborative adversarial mechanism between dual discriminators and the feature extraction network removes modality differences and thereby establishes identity consistency between visible and infrared images. Experimental results on the SYSU-MM01 and RegDB datasets verify the method's effectiveness and superiority.
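The cross-attention fusion described in the abstract can be sketched in a minimal form. This is a hypothetical illustration, not the authors' implementation: the module name `CrossAttentionFusion`, the feature dimensions, and the residual-plus-projection design are assumptions; the paper only specifies that two branch networks' features attend to each other so that each branch complements the other.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Hypothetical sketch of cross-attention feature fusion between
    two branch networks processing the same-modality image, so that
    their features form a complementary relationship."""

    def __init__(self, dim=256, heads=4):
        super().__init__()
        # One attention head set per direction: A attends to B, B attends to A.
        self.attn_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_b = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, feat_a, feat_b):
        # Branch A queries branch B's features, and vice versa.
        a2b, _ = self.attn_a(feat_a, feat_b, feat_b)
        b2a, _ = self.attn_b(feat_b, feat_a, feat_a)
        # Residual add keeps each branch's own information,
        # concatenation and projection merge the two complementary views.
        fused = torch.cat([feat_a + a2b, feat_b + b2a], dim=-1)
        return self.proj(fused)

fusion = CrossAttentionFusion(dim=256, heads=4)
fa = torch.randn(8, 6, 256)   # batch of 8, 6 part tokens, 256-d features
fb = torch.randn(8, 6, 256)
out = fusion(fa, fb)
print(out.shape)  # torch.Size([8, 6, 256])
```

The fused output keeps the input token layout, so it can feed directly into a standard ReID head; the adversarial identity-consistency stage with dual discriminators would operate on such fused features but is beyond this sketch.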
Pages: 703-719
Page count: 17
Related Papers
50 records in total
  • [31] Fine-grained Learning for Visible-Infrared Person Re-identification
    Qi, Mengzan
    Chan, Sixian
    Hang, Chen
    Zhang, Guixu
    Li, Zhi
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 2417 - 2422
  • [32] Learning with Twin Noisy Labels for Visible-Infrared Person Re-Identification
    Yang, Mouxing
    Huang, Zhenyu
    Hu, Peng
    Li, Taihao
    Lv, Jiancheng
    Peng, Xi
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 14288 - 14297
  • [33] Attributes Based Visible-Infrared Person Re-identification
    Zheng, Aihua
    Feng, Mengya
    Pan, Peng
    Jiang, Bo
    Luo, Bin
    PATTERN RECOGNITION AND COMPUTER VISION, PT I, PRCV 2022, 2022, 13534 : 254 - 266
  • [34] Minimizing Maximum Feature Space Deviation for Visible-Infrared Person Re-Identification
    Wu, Zhixiong
    Wen, Tingxi
    APPLIED SCIENCES-BASEL, 2022, 12 (17):
  • [35] Joint Modal Alignment and Feature Enhancement for Visible-Infrared Person Re-Identification
    Lin, Ronghui
    Wang, Rong
    Zhang, Wenjing
    Wu, Ao
    Bi, Yihan
    SENSORS, 2023, 23 (11)
  • [36] Feature-Level Compensation and Alignment for Visible-Infrared Person Re-Identification
    Dong, Husheng
    Lu, Ping
    Yang, Yuanfeng
    Sun, Xun
    IET COMPUTER VISION, 2025, 19 (01)
  • [37] Multi-Scale Dynamic Fusion for Visible-Infrared Person Re-Identification
    Wang, Shen
    Wang, Yu
    Qiao, Renjie
    Wu, Kejun
    Lin, Chia-Wen
    Cai, Chengtao
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2025, 21 (03)
  • [38] Interaction and Alignment for Visible-Infrared Person Re-Identification
    Gong, Jiahao
    Zhao, Sanyuan
    Lam, Kin-Man
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 2253 - 2259
  • [39] BiFFN: Bi-Frequency Guided Feature Fusion Network for Visible-Infrared Person Re-Identification
    Cao, Xingyu
    Ding, Pengxin
    Li, Jie
    Chen, Mei
    SENSORS, 2025, 25 (05)
  • [40] MGFNet: A Multi-granularity Feature Fusion and Mining Network for Visible-Infrared Person Re-identification
    Xu, BaiSheng
    Ye, HaoHui
    Wu, Wei
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT V, 2024, 14451 : 15 - 28