Staged encoder training for cross-camera person re-identification

Cited by: 0
Authors
Zhi Xu
Jiawei Yang
Yuxuan Liu
Longyang Zhao
Jiajia Liu
Affiliations
[1] Guilin University of Electronic Technology,School of Computer Information and Security
[2] Guilin University of Electronic Technology,School of Mechanical and Electrical Engineering
[3] Civil Aviation Flight University of China,Institute of Electronic and Electrical Engineering
Keywords
Camera variation; Contrastive learning; Unsupervised; Person re-identification
DOI: Not available
Abstract
As a cross-camera retrieval problem, person re-identification (ReID) suffers from image style variations caused by differences in camera parameters, lighting, and other factors, which seriously degrade recognition accuracy. To address this problem, this paper proposes a two-stage contrastive learning method that gradually reduces the impact of camera variations. In the first stage, we train an encoder for each camera using only images from that camera, so that each encoder recognizes images from its own camera well without being affected by camera variations. In the second stage, we encode the same image with all trained encoders to generate a new combination code that is robust against camera variations. We also use the Cross-Camera Encouragement distance (Lin et al., in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020), whose strengths complement those of the combined encoding, to further mitigate the impact of camera variations. Our method achieves high accuracy on several commonly used person ReID datasets; for example, on Market-1501 it achieves 90.8% rank-1 accuracy and 85.2% mAP, outperforming recent unsupervised works by more than 12% mAP. Code is available at https://github.com/yjwyuanwu/SET.
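The abstract only sketches the two-stage design, so the following PyTorch snippet illustrates one way the pipeline could look. It is not the authors' implementation (see the linked repository for that): the ResNet-50 backbone, the InfoNCE-style per-camera loss, the six-camera setting, and concatenation as the "combination code" are assumptions made purely for illustration, and the Cross-Camera Encouragement distance of Lin et al. (2020) is omitted.

```python
# Minimal sketch of the two-stage idea from the abstract (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class CameraEncoder(nn.Module):
    """One encoder per camera: a ResNet-50 trunk followed by an embedding head."""

    def __init__(self, dim: int = 256):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)
        backbone.fc = nn.Identity()          # keep the 2048-d pooled feature
        self.backbone = backbone
        self.head = nn.Linear(2048, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.head(self.backbone(x)), dim=1)


def info_nce(anchor: torch.Tensor, positive: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """Contrastive loss between two augmented views of the same batch (assumed loss)."""
    logits = anchor @ positive.t() / tau                      # (B, B) similarity matrix
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)


# ---- Stage 1: train one encoder per camera, using only that camera's images ----
num_cameras = 6                                               # Market-1501 has 6 cameras
encoders = [CameraEncoder() for _ in range(num_cameras)]

def stage1_step(encoder: CameraEncoder, view1, view2, optimizer) -> float:
    """One contrastive update on two augmentations of a single-camera batch."""
    loss = info_nce(encoder(view1), encoder(view2))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# ---- Stage 2: build a camera-robust combination code from all trained encoders ----
@torch.no_grad()
def combination_code(image_batch: torch.Tensor) -> torch.Tensor:
    """Encode the same images with every per-camera encoder and concatenate the
    normalized embeddings (one possible realisation of the 'combination code')."""
    codes = [enc.eval()(image_batch) for enc in encoders]
    return F.normalize(torch.cat(codes, dim=1), dim=1)
```

In this sketch, retrieval would compare combination codes (e.g., by cosine similarity); the paper additionally re-weights such distances with the Cross-Camera Encouragement term, which is not reproduced here.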
Pages: 2323-2331
Number of pages: 8
Related papers (50 in total)
  • [21] Gallery based k-reciprocal-like re-ranking for heavy cross-camera discrepancy in person re-identification
    Liu, Haijun; Cheng, Jian
    NEUROCOMPUTING, 2019, 333: 64-75
  • [22] Association Loss and Self-Discovery Cross-Camera Anchors Detection for Unsupervised Video-Based Person Re-Identification
    Yuan, Xiuhuan; Han, Hua; Huang, Li
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2021, 35 (14)
  • [23] Camera Style Adaptation for Person Re-identification
    Zhong, Zhun; Zheng, Liang; Zheng, Zhedong; Li, Shaozi; Yang, Yi
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018: CP99-CP99
  • [24] Revisiting Person Re-Identification by Camera Selection
    Peng, Yi-Xing; Li, Yuanxun; Zheng, Wei-Shi
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (05): 2692-2708
  • [25] Person Re-identification on Heterogeneous Camera Network
    Zhuo, Jiaxuan; Zhu, Junyong; Lai, Jianhuang; Xie, Xiaohua
    COMPUTER VISION, PT III, 2017, 773: 280-291
  • [26] Cross Dataset Person Re-identification
    Hu, Yang; Yi, Dong; Liao, Shengcai; Lei, Zhen; Li, Stan Z.
    COMPUTER VISION - ACCV 2014 WORKSHOPS, PT III, 2015, 9010: 650-664
  • [27] Inter-Intra Camera Identity Learning for Person Re-Identification with Training in Single Camera
    Zhang, Guoqing; Luo, Zhiyuan; Lin, Weisi; Xuan, Jing
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023: 2429-2434
  • [28] Learning Cross Camera Invariant Features with CCSC Loss for Person Re-identification
    Zhao, Zhiwei; Liu, Bin; Li, Weihai; Yu, Nenghai
    IMAGE AND GRAPHICS, ICIG 2019, PT I, 2019, 11901: 429-441
  • [29] Cross-Modality Person Re-Identification with Generative Adversarial Training
    Dai, Pingyang; Ji, Rongrong; Wang, Haibin; Wu, Qiong; Huang, Yuyu
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018: 677-683
  • [30] Distance based Training for Cross-Modality Person Re-Identification
    Tekeli, Nihat; Can, Ahmet Burak
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019: 4540-4549