A triple-path global-local feature complementary network for visible-infrared person re-identification

Citations: 0
Authors
Guo, Jiangtao [1 ]
Ye, Yanfang [1 ]
Du, Haishun [1 ]
Hao, Xinxin [1 ]
Affiliations
[1] Henan Univ, Sch Artificial Intelligence, Zhengzhou 450046, Peoples R China
Keywords
Visible-infrared person re-identification; Local comprehensive discriminative features; Weighted regularization center triplet loss;
DOI
10.1007/s11760-023-02789-4
Chinese Library Classification
TM [Electrical engineering]; TN [Electronic and communication technology];
Discipline codes
0808 ; 0809 ;
Abstract
Cross-modality visible-infrared person re-identification (VI-ReID) aims to match visible and infrared images of pedestrians from different cameras. Most existing VI-ReID methods learn global features of pedestrians from the original image subspace. However, they are not only susceptible to background clutter, but also do not explicitly handle the discrepancy between the two modalities. In addition, some local-based person re-identification methods extract the local features of pedestrians by slicing pedestrian feature maps. However, most of them simply concatenate these local features to obtain the final local features of pedestrians, ignoring the importance of each local feature. To this end, we propose a triple-path global-local feature complementary network (TGLFC-Net). Specifically, we introduce intermediate modality images to weaken the impact of modality discrepancy and thus obtain robust global features of pedestrians. Moreover, we design a local comprehensive discriminative feature mining module, which improves the network's capability of mining the local comprehensive discriminative features of pedestrians by performing dynamic weighted fusion of local features. Since the final representations of pedestrians incorporate both the robust global features and the local comprehensive discriminative features, they have stronger robustness and discriminative capability. In addition, we design a weighted regularization center triplet loss, which not only eliminates the negative impact of anomalous triplets, but also reduces the computational complexity of the network. Experimental results on the RegDB and SYSU-MM01 datasets demonstrate that TGLFC-Net achieves satisfactory VI-ReID performance. In particular, it achieves 92.36% Rank-1 accuracy and 80.32% mAP on the RegDB dataset.
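The abstract describes a dynamic weighted fusion of sliced local features in place of plain concatenation, but does not specify the module's internals. As an illustration only, here is a minimal NumPy sketch under assumed details: horizontal slicing of the feature map, global average pooling per part, and softmax-normalized importance weights computed from a scoring vector (`score_w` is a hypothetical stand-in for a learned parameter, and `weighted_local_fusion` is not a name from the paper).

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def weighted_local_fusion(feature_map, num_parts=6, score_w=None):
    """Slice a pedestrian feature map (C, H, W) into horizontal parts,
    pool each part, and fuse the part features with importance weights
    instead of simple concatenation."""
    C, H, W = feature_map.shape
    parts = np.array_split(feature_map, num_parts, axis=1)   # split along height
    # global-average-pool each part -> (num_parts, C)
    local_feats = np.stack([p.mean(axis=(1, 2)) for p in parts])
    if score_w is None:
        score_w = np.ones(C) / C          # stand-in for a learned scoring vector
    scores = local_feats @ score_w        # one importance score per part
    weights = softmax(scores)             # normalize scores into fusion weights
    fused = (weights[:, None] * local_feats).sum(axis=0)     # weighted sum -> (C,)
    return fused, weights

# Toy usage: a random 256-channel feature map of spatial size 24x8.
rng = np.random.default_rng(0)
fmap = rng.standard_normal((256, 24, 8))
fused, w = weighted_local_fusion(fmap)
```

The key contrast with concatenation-based local methods is that the fused vector stays at channel dimension C while the weights let informative body parts dominate; how the actual module parameterizes and trains those weights is specified in the paper itself.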
Pages: 911-921
Page count: 11
Related papers
50 records in total
  • [31] Learning dual attention enhancement feature for visible-infrared person re-identification
    Zhang, Guoqing
    Zhang, Yinyin
    Zhang, Hongwei
    Chen, Yuhao
    Zheng, Yuhui
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2024, 99
  • [32] Pure Detail Feature Extraction Network for Visible-Infrared Re-Identification
    Cui, Jiaao
    Chan, Sixian
    Mu, Pan
    Tang, Tinglong
    Zhou, Xiaolong
    INTELLIGENT AUTOMATION AND SOFT COMPUTING, 2023, 37 (02): 2263 - 2277
  • [33] HCFN: Hierarchical cross-modal shared feature network for visible-infrared person re-identification
    Li, Yueying
    Zhang, Huaxiang
    Liu, Li
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2022, 89
  • [34] BiFFN: Bi-Frequency Guided Feature Fusion Network for Visible-Infrared Person Re-Identification
    Cao, Xingyu
    Ding, Pengxin
    Li, Jie
    Chen, Mei
    SENSORS, 2025, 25 (05)
  • [35] Visible-Infrared Person Re-Identification With Modality-Specific Memory Network
    Li, Yulin
    Zhang, Tianzhu
    Liu, Xiang
    Tian, Qi
    Zhang, Yongdong
    Wu, Feng
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 7165 - 7178
  • [36] MGFNet: A Multi-granularity Feature Fusion and Mining Network for Visible-Infrared Person Re-identification
    Xu, BaiSheng
    Ye, HaoHui
    Wu, Wei
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT V, 2024, 14451 : 15 - 28
  • [37] Modality-perceptive harmonization network for visible-infrared person re-identification
    Zuo, Xutao
    Peng, Jinjia
    Cheng, Tianhang
    Wang, Huibing
    INFORMATION FUSION, 2025, 118
  • [38] Diverse-Feature Collaborative Progressive Learning for Visible-Infrared Person Re-Identification
    Chan, Sixian
    Meng, Weihao
    Bai, Cong
    Hu, Jie
    Chen, Shenyong
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20 (05) : 7754 - 7763
  • [39] Multi-granularity enhanced feature learning for visible-infrared person re-identification
    Liu, Huilin
    Wu, Yuhao
    Tang, Zihan
    Li, Xiaolong
    Su, Shuzhi
    Liang, Xingzhu
    Zhang, Pengfei
    JOURNAL OF SUPERCOMPUTING, 2025, 81 (01)
  • [40] Visible-infrared person re-identification model based on feature consistency and modal indistinguishability
    Sun, Jia
    Li, Yanfeng
    Chen, Houjin
    Peng, Yahui
    Zhu, Jinlei
    MACHINE VISION AND APPLICATIONS, 2023, 34 (01)