Diffusion Augmentation and Pose Generation Based Pre-Training Method for Robust Visible-Infrared Person Re-Identification

Cited: 1
Authors
Sun, Rui [1 ]
Huang, Guoxi [2 ]
Xie, Ruirui [2 ]
Wang, Xuebin [2 ]
Chen, Long [2 ]
Affiliations
[1] Hefei Univ Technol, Sch Comp & Informat, Anhui Prov Key Lab Ind Safety & Emergency Technol, Key Lab Knowledge Engn Big Data,Minist Educ, Hefei 230009, Peoples R China
[2] Hefei Univ Technol, Sch Comp & Informat, Anhui Prov Key Lab Ind Safety & Emergency Technol, Hefei 230009, Anhui, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Person re-identification; visible-infrared; self-supervised; corruption robustness; pre-training
DOI
10.1109/LSP.2024.3466792
Chinese Library Classification
TM [Electrotechnics]; TN [Electronics and Communication Technology]
Discipline Codes
0808; 0809
Abstract
Cross-Modal Visible-Infrared Person Re-identification (VI-REID) is a vital application for building all-time surveillance systems. However, current VI-REID models exhibit significant performance deterioration in noisy environments. Existing algorithms attempt to mitigate this challenge during the fine-tuning stage. We contend that, in contrast to fine-tuning, the pre-training phase can more effectively exploit the attributes of extensive unlabeled data, thereby facilitating the development of a robust VI-REID model. Therefore, in this paper, we propose a pre-training method for VI-REID based on Diffusion Augmentation and Pose Generation (DAPG), aiming to enhance the robustness and recognition rate of VI-REID models under corrupted imaging conditions. Multiple transfer experiments on the SYSU-MM01 and RegDB datasets demonstrate that our method outperforms existing self-supervised methods.
Pages: 2670-2674 (5 pages)
Related Papers
50 records in total
  • [41] Counterfactual Intervention Feature Transfer for Visible-Infrared Person Re-identification
    Li, Xulin
    Lu, Yan
    Liu, Bin
    Liu, Yating
    Yin, Guojun
    Chu, Qi
    Huang, Jinyang
    Zhu, Feng
    Zhao, Rui
    Yu, Nenghai
    COMPUTER VISION, ECCV 2022, PT XXVI, 2022, 13686 : 381 - 398
  • [42] Learning with Twin Noisy Labels for Visible-Infrared Person Re-Identification
    Yang, Mouxing
    Huang, Zhenyu
    Hu, Peng
    Li, Taihao
    Lv, Jiancheng
    Peng, Xi
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 14288 - 14297
  • [43] Visible-Infrared Person Re-Identification via Partially Interactive Collaboration
    Zheng, Xiangtao
    Chen, Xiumei
    Lu, Xiaoqiang
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 6951 - 6963
  • [44] Feature Fusion and Center Aggregation for Visible-Infrared Person Re-Identification
    Wang, Xianju
    Chen, Cuiqun
    Zhu, Yong
    Chen, Shuguang
    IEEE ACCESS, 2022, 10 : 30949 - 30958
  • [45] Bidirectional modality information interaction for Visible-Infrared Person Re-identification
    Yang, Xi
    Liu, Huanling
    Wang, Nannan
    Gao, Xinbo
    PATTERN RECOGNITION, 2025, 161
  • [46] Auxiliary Representation Guided Network for Visible-Infrared Person Re-Identification
    Qi, Mengzan
    Chan, Sixian
    Hang, Chen
    Zhang, Guixu
    Zeng, Tieyong
    Li, Zhi
    IEEE TRANSACTIONS ON MULTIMEDIA, 2025, 27 : 340 - 355
  • [47] Frequency domain adaptive framework for visible-infrared person re-identification
    Wang, Jiangcheng
    Li, Yize
    Tao, Xuefeng
    Kong, Jun
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024, : 2553 - 2566
  • [48] Partial Enhancement and Channel Aggregation for Visible-Infrared Person Re-Identification
    Jing, Weiwei
    Li, Zhonghua
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2025, E108D (01) : 82 - 91
  • [49] A visible-infrared person re-identification method based on meta-graph isomerization aggregation module
    Shan, Chongrui
    Zhang, Baohua
    Gu, Yu
    Li, Jianjun
    Zhang, Ming
    Wang, Jingyu
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2024, 104
  • [50] Margin-Based Modal Adaptive Learning for Visible-Infrared Person Re-Identification
    Zhao, Qianqian
    Wu, Hanxiao
    Zhu, Jianqing
    SENSORS, 2023, 23 (03)