HOReID: Deep High-Order Mapping Enhances Pose Alignment for Person Re-Identification

Cited by: 31
Authors
Wang, Pingyu [1 ]
Zhao, Zhicheng [1 ]
Su, Fei [1 ]
Zu, Xingyu [1 ]
Boulgouris, Nikolaos V. [2]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Artificial Intelligence, Beijing Key Lab Network Syst & Network Culture, Beijing 100876, Peoples R China
[2] Brunel Univ, Dept Elect & Comp Engn, Uxbridge UB8 3PH, Middx, England
关键词
Feature extraction; Semantics; Annotations; Pose estimation; Training; Testing; Task analysis; Person re-identification; pose alignment; high-order mapping; convolutional neural networks; NETWORK;
DOI
10.1109/TIP.2021.3055952
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Despite the remarkable progress in recent years, person Re-Identification (ReID) approaches frequently fail in cases where the semantic body parts are misaligned between the detected human boxes. To mitigate such cases, we propose a novel High-Order ReID (HOReID) framework that enables semantic pose alignment by aggregating the fine-grained part details of multilevel feature maps. HOReID applies a high-order mapping to multilevel feature similarities in order to emphasize the difference in similarity between aligned and misaligned part pairs in two person images. Since the similarities of misaligned part pairs are reduced, HOReID enhances the pose-robustness of the learned features. We show that our method derives from an intuitive and interpretable motivation and elegantly reduces the misalignment problem without using any prior knowledge from human pose annotations or pose estimation networks. We demonstrate the effectiveness of the proposed HOReID both theoretically and experimentally, achieving superior performance over state-of-the-art methods on four large-scale person ReID datasets.
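The following is a minimal sketch of the high-order mapping idea described in the abstract, assuming an elementwise power map over part-to-part cosine similarities; the paper's actual formulation may differ, and the function name high_order_similarity, the order parameter, and the part-feature shapes are illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn.functional as F

def high_order_similarity(parts_a, parts_b, order=3):
    # parts_a, parts_b: (P, D) tensors holding P part features per image
    # (shapes and the power map are assumptions, not the authors' code).
    a = F.normalize(parts_a, dim=1)   # L2-normalize each part feature
    b = F.normalize(parts_b, dim=1)
    sim = (a @ b.t()).clamp(min=0)    # (P, P) cosine similarities in [0, 1]
    # Elementwise power map: aligned pairs (similarity near 1) are preserved
    # while misaligned pairs (lower similarity) shrink toward 0, widening
    # the gap between aligned and misaligned part pairs.
    return sim ** order

# Example: 6 body parts with 256-dim features per image.
pa, pb = torch.randn(6, 256), torch.randn(6, 256)
scores = high_order_similarity(pa, pb)   # (6, 6) high-order similarity map

With order=3, a similarity of 0.9 maps to about 0.73 while 0.5 maps to 0.125, so the ratio between an aligned and a misaligned pair grows from 1.8 to roughly 5.8, which is the sense in which such a mapping emphasizes aligned part pairs.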
Pages: 2908-2922
Number of pages: 15
Related Papers
50 items in total
  • [1] Mixed High-Order Attention Network for Person Re-Identification
    Chen, Binghui
    Deng, Weihong
    Hu, Jiani
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 371 - 381
  • [2] PAII: A Pose Alignment Network with Information Interaction for Person Re-identification
    Lyu, Chunyan
    Xu, Tong
    Ning, Wu
    Cheng, Qi
    Wang, Kejun
    Wang, Chenhui
    NEURAL PROCESSING LETTERS, 2023, 55 (02) : 1455 - 1477
  • [3] Pose-Guided Feature Alignment for Occluded Person Re-Identification
    Miao, Jiaxu
    Wu, Yu
    Liu, Ping
    Ding, Yuhang
    Yang, Yi
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 542 - 551
  • [4] Pose-Invariant Embedding for Deep Person Re-Identification
    Zheng, Liang
    Huang, Yujia
    Lu, Huchuan
    Yang, Yi
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2019, 28 (09) : 4500 - 4509
  • [5] Pose Transferrable Person Re-Identification
    Liu, Jinxian
    Ni, Bingbing
    Yan, Yichao
    Zhou, Peng
    Cheng, Shuo
    Hu, Jianguo
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 4099 - 4108
  • [6] Person re-identification by pose priors
    Bak, Slawomir
    Martins, Filipe
    Bremond, Francois
    IMAGE PROCESSING: ALGORITHMS AND SYSTEMS XIII, 2015, 9399
  • [7] GAReID: Grouped and Attentive High-Order Representation Learning for Person Re-Identification
    Wang, Pingyu
    Su, Fei
    Zhao, Zhicheng
    Zhao, Yanyun
    Boulgouris, Nikolaos V.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022
  • [8] Multi-view Based Pose Alignment Method for Person Re-identification
    Zhang, Yulei
    Zhao, Qingjie
    Li, You
    PROCEEDINGS OF 2019 CHINESE INTELLIGENT AUTOMATION CONFERENCE, 2020, 586 : 439 - 447