A triple-path global-local feature complementary network for visible-infrared person re-identification

Cited by: 0
Authors
Guo, Jiangtao [1 ]
Ye, Yanfang [1 ]
Du, Haishun [1 ]
Hao, Xinxin [1 ]
Affiliations
[1] Henan Univ, Sch Artificial Intelligence, Zhengzhou 450046, Peoples R China
Keywords
Visible-infrared person re-identification; Local comprehensive discriminative features; Weighted regularization center triplet loss
DOI
10.1007/s11760-023-02789-4
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Code
0808; 0809;
Abstract
Cross-modality visible-infrared person re-identification (VI-ReID) aims to match visible and infrared images of pedestrians captured by different cameras. Most existing VI-ReID methods learn global features of pedestrians from the original image subspace. However, such methods are susceptible to background clutter and do not explicitly handle the discrepancy between the two modalities. In addition, some local-based person re-identification methods extract local features of pedestrians by slicing pedestrian feature maps, but most of them simply concatenate these local features to obtain the final local representation, ignoring the relative importance of the individual local features. To this end, we propose a triple-path global-local feature complementary network (TGLFC-Net). Specifically, we introduce intermediate-modality images to weaken the impact of the modality discrepancy and thus obtain robust global features of pedestrians. Moreover, we design a local comprehensive discriminative feature mining module, which improves the network's ability to mine locally comprehensive discriminative features of pedestrians by performing dynamic weighted fusion of local features. Since the final pedestrian representations incorporate both the robust global features and the local comprehensive discriminative features, they have stronger robustness and discriminative capability. In addition, we design a weighted regularization center triplet loss, which not only eliminates the negative impact of anomalous triplets but also reduces the computational complexity of the network. Experimental results on the RegDB and SYSU-MM01 datasets demonstrate that TGLFC-Net achieves satisfactory VI-ReID performance; in particular, it achieves 92.36% Rank-1 accuracy and 80.32% mAP on the RegDB dataset.
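Two mechanisms in the abstract lend themselves to a short illustration: fusing sliced local (part) features with dynamically predicted weights rather than plain concatenation, and a triplet loss computed over per-identity feature centers so that samples are compared against a handful of centers instead of all other samples. The PyTorch sketch below is not the authors' implementation; it assumes horizontal-stripe slicing with one scalar weight per stripe and shows only a plain center-triplet variant without the paper's weighted regularization term. The names WeightedPartFusion and center_triplet_loss are illustrative.

```python
# Minimal sketch (not the authors' code) of dynamic weighted fusion of part
# features and a center-based triplet loss, under the assumptions stated above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightedPartFusion(nn.Module):
    """Split a backbone feature map into horizontal stripes, pool each stripe,
    and fuse the stripe features with weights predicted from the stripes
    themselves (dynamic weighted fusion instead of concatenation)."""

    def __init__(self, channels: int, num_parts: int = 4):
        super().__init__()
        self.num_parts = num_parts
        # One scalar weight per stripe, predicted from its pooled feature.
        self.weight_head = nn.Linear(channels, 1)

    def forward(self, feat_map: torch.Tensor) -> torch.Tensor:
        # feat_map: (B, C, H, W) from the backbone.
        stripes = feat_map.chunk(self.num_parts, dim=2)                      # split along height
        parts = [F.adaptive_avg_pool2d(s, 1).flatten(1) for s in stripes]    # (B, C) per stripe
        parts = torch.stack(parts, dim=1)                                    # (B, P, C)
        weights = torch.softmax(self.weight_head(parts), dim=1)              # (B, P, 1)
        return (weights * parts).sum(dim=1)                                  # (B, C) fused local feature


def center_triplet_loss(features: torch.Tensor,
                        labels: torch.Tensor,
                        margin: float = 0.3) -> torch.Tensor:
    """Triplet loss over per-identity centers: each sample is pulled toward its
    own class center and pushed away from the nearest other-class center.
    Assumes each batch contains at least two identities."""
    classes = labels.unique()
    centers = torch.stack([features[labels == c].mean(dim=0) for c in classes])   # (K, D)
    dist = torch.cdist(features, centers)                                          # (B, K)
    pos_mask = labels.unsqueeze(1) == classes.unsqueeze(0)                         # (B, K), one True per row
    d_pos = dist[pos_mask]                                                          # distance to own center
    d_neg = dist.masked_fill(pos_mask, float('inf')).min(dim=1).values              # hardest other center
    return F.relu(d_pos - d_neg + margin).mean()
```

In a full model of this kind, the fused local feature would typically be combined with the global feature (e.g., from the intermediate-modality path) before the identification and triplet losses are applied.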
Pages: 911-921
Page count: 11
Related Papers
50 records in total
  • [1] A triple-path global–local feature complementary network for visible-infrared person re-identification
    Jiangtao Guo
    Yanfang Ye
    Haishun Du
    Xinxin Hao
    Signal, Image and Video Processing, 2024, 18 : 911 - 921
  • [2] Dual-Path Imbalanced Feature Compensation Network for Visible-Infrared Person Re-Identification
    Cheng, Xu
    Wang, Zichun
    Jiang, Yan
    Liu, Xingyu
    Yu, Hao
    Shi, Jingang
    Yu, Zitong
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2025, 21 (01)
  • [3] Visible-Infrared Person Re-Identification via Global Feature Constraints Led by Local Features
    Wang, Jin
    Jiang, Kaiwei
    Zhang, Tianqi
    Gu, Xiang
    Liu, Guoqing
    Lu, Xin
    ELECTRONICS, 2022, 11 (17)
  • [4] Visible-infrared person re-identification with complementary feature fusion and identity consistency learning
    Wang, Yiming
    Chen, Xiaolong
    Chai, Yi
    Xu, Kaixiong
    Jiang, Yutao
    Liu, Bowen
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2025, 16 (01) : 703 - 719
  • [5] Identity Feature Disentanglement for Visible-Infrared Person Re-Identification
    Chen, Xiumei
    Zheng, Xiangtao
    Lu, Xiaoqiang
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2023, 19 (06)
  • [6] Modality Unifying Network for Visible-Infrared Person Re-Identification
    Yu, Hao
    Cheng, Xu
    Peng, Wei
    Liu, Weihao
    Zhao, Guoying
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 11151 - 11161
  • [7] Attention-enhanced feature mapping network for visible-infrared person re-identification
    Liu, Shuaiyi
    Han, Ke
    MACHINE VISION AND APPLICATIONS, 2025, 36 (02)
  • [8] TWO-PHASE FEATURE FUSION NETWORK FOR VISIBLE-INFRARED PERSON RE-IDENTIFICATION
    Cheng, Yunzhou
    Xiao, Guoqiang
    Tang, Xiaoqin
    Ma, Wenzhuo
    Gou, Xinye
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 1149 - 1153
  • [9] Occluded Visible-Infrared Person Re-Identification
    Feng, Yujian
    Ji, Yimu
    Wu, Fei
    Gao, Guangwei
    Gao, Yang
    Liu, Tianliang
    Liu, Shangdong
    Jing, Xiao-Yuan
    Luo, Jiebo
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 1401 - 1413
  • [10] Stronger Heterogeneous Feature Learning for Visible-Infrared Person Re-Identification
    Hao Wang
    Xiaojun Bi
    Changdong Yu
    Neural Processing Letters, 56