Learning to disentangle scenes for person re-identification

Cited by: 26
Authors
Zang, Xianghao [1 ]
Li, Ge [1 ]
Gao, Wei [1 ]
Shu, Xiujun [2 ]
Affiliations
[1] Peking Univ, Sch Elect & Comp Engn, Shenzhen 518055, Peoples R China
[2] Peng Cheng Lab, Shenzhen 518034, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Person re-identification; Divide-and-conquer; Multi-branch network;
DOI
10.1016/j.imavis.2021.104330
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
There are many challenging problems in the person re-identification (ReID) task, such as occlusion and scale variation. Existing works usually try to solve them by employing a one-branch network. This one-branch network needs to be robust to various challenging problems, which makes it overburdened. This paper proposes to divide-and-conquer the ReID task. For this purpose, we employ several self-supervision operations to simulate different challenging problems and handle each challenging problem using a different network. Concretely, we use the random erasing operation and propose a novel random scaling operation to generate new images with controllable characteristics. A general multi-branch network, including one master branch and two servant branches, is introduced to handle different scenes. These branches learn collaboratively and achieve different perceptive abilities. In this way, the complex scenes in the ReID task are effectively disentangled, and the burden of each branch is relieved. The results from extensive experiments demonstrate that the proposed method achieves state-of-the-art performance on three ReID benchmarks and two occluded ReID benchmarks. The ablation study also shows that the proposed scheme and operations significantly improve the performance in various scenes. (c) 2021 Elsevier B.V. All rights reserved.
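The abstract describes two self-supervision operations: random erasing (simulating occlusion) and a random scaling operation (simulating scale variation). The paper's exact formulations are not reproduced in this record, so the following is only a minimal illustrative sketch of what such augmentations typically look like; all function names, parameters, and defaults here are assumptions, not the authors' implementation. Images are represented as plain lists of lists for self-containment.

```python
import random

def random_erasing(img, prob=0.5, min_frac=0.1, max_frac=0.4, fill=0):
    """Overwrite a random rectangle with a fill value to simulate occlusion.

    A generic sketch of the random-erasing idea; parameter names and
    ranges are illustrative assumptions, not the paper's settings.
    """
    if random.random() > prob:
        return img
    h, w = len(img), len(img[0])
    eh = max(1, int(h * random.uniform(min_frac, max_frac)))
    ew = max(1, int(w * random.uniform(min_frac, max_frac)))
    top = random.randint(0, h - eh)
    left = random.randint(0, w - ew)
    out = [row[:] for row in img]          # copy so the input is untouched
    for r in range(top, top + eh):
        for c in range(left, left + ew):
            out[r][c] = fill
    return out

def random_scaling(img, prob=0.5, min_scale=0.5, pad=0):
    """Shrink the image by a random factor and pad back to the original
    size, simulating scale variation.

    A hypothetical reading of the paper's random-scaling operation,
    using nearest-neighbour downsampling and centre padding.
    """
    if random.random() > prob:
        return img
    h, w = len(img), len(img[0])
    s = random.uniform(min_scale, 1.0)
    nh, nw = max(1, int(h * s)), max(1, int(w * s))
    small = [[img[int(r * h / nh)][int(c * w / nw)] for c in range(nw)]
             for r in range(nh)]
    out = [[pad] * w for _ in range(h)]    # padded canvas at original size
    top, left = (h - nh) // 2, (w - nw) // 2
    for r in range(nh):
        for c in range(nw):
            out[top + r][left + c] = small[r][c]
    return out
```

In a multi-branch setup such as the one described, each branch would be fed a differently augmented view (original, erased, rescaled), so that no single branch has to be robust to every scene at once.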
Pages: 12
Related Papers
50 records
  • [31] Deep Parts Similarity Learning for Person Re-Identification
    Gomez-Silva, Maria Jose
    Armingol, Jose Maria
    de la Escalera, Arturo
    PROCEEDINGS OF THE 13TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISIGRAPP 2018), VOL 5: VISAPP, 2018, : 419 - 428
  • [32] Domain generalized federated learning for Person Re-identification
    Liu, Fangyi
    Ye, Mang
    Du, Bo
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2024, 241
  • [33] Style Interleaved Learning for Generalizable Person Re-Identification
    Tan, Wentao
    Ding, Changxing
    Wang, Pengfei
    Gong, Mingming
    Jia, Kui
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 1600 - 1612
  • [34] SIMILARITY LEARNING WITH LISTWISE RANKING FOR PERSON RE-IDENTIFICATION
    Chen, Yiqiang
    Duffner, Stefan
    Stoian, Andrei
    Dufour, Jean-Yves
    Baskurt, Atilla
    2018 25TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2018, : 843 - 847
  • [35] Person re-identification using selective transformation learning
    Amin, Fazail
    Mondal, Arijit
    Mathew, Jimson
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (25) : 38993 - 39013
  • [36] Joint dictionary and metric learning for person re-identification
    Zhou, Qin
    Zheng, Shibao
    Ling, Haibin
    Su, Hang
    Wu, Shuang
    PATTERN RECOGNITION, 2017, 72 : 196 - 206
  • [37] Person re-identification based on metric learning: a survey
    Guofeng Zou
    Guixia Fu
    Xiang Peng
    Yue Liu
    Mingliang Gao
    Zheng Liu
    Multimedia Tools and Applications, 2021, 80 : 26855 - 26888
  • [38] Survey on person re-identification based on deep learning
    Wang, Kejun
    Wang, Haolin
    Liu, Meichen
    Xing, Xianglei
    Han, Tian
    CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY, 2018, 3 (04) : 219 - 227
  • [39] On the Exploration of Joint Attribute Learning for Person Re-identification
    Roth, Joseph
    Liu, Xiaoming
    COMPUTER VISION - ACCV 2014, PT I, 2015, 9003 : 673 - 688
  • [40] Deep Learning for Person Re-Identification: A Survey and Outlook
    Ye, Mang
    Shen, Jianbing
    Lin, Gaojie
    Xiang, Tao
    Shao, Ling
    Hoi, Steven C. H.
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (06) : 2872 - 2893