Pose-guided self and external attention feature matching and aggregation network for person re-identification

Cited by: 1
|
Authors
Yao, Junping [1]
Yang, Zebin [1]
Li, Xiaojun [1]
Guo, Yi [1]
Affiliations
[1] Xian High Tech Res Inst, Xian 710025, Shaanxi, Peoples R China
Keywords
Pose-guided person re-identification; External attention; Feature matching and aggregation; Pose estimation; REIDENTIFICATION;
DOI
10.1016/j.displa.2023.102567
Chinese Library Classification (CLC)
TP3 [computing technology, computer technology]
Discipline code
0812
Abstract
Traditional methods that use pose information for person re-identification (ReID) often focus only on the relationships between different parts of the same sample and ignore the correlation between the poses of different samples, which hinders further improvement in recognition accuracy. In view of this, this paper proposes a pose-guided self and external attention feature matching and aggregation network, which contains a visual context self and external attention module and a pose-guided feature matching and aggregation module. The network divides global features into local features without strict spatial feature alignment and enables the model to focus on pose correlations between different samples. It therefore not only learns pose-related identity features, but also avoids interference from samples with large pose changes. The visual context self and external attention module performs feature embedding with the transformer-based image classification model ViT and a human pose estimation model, and extracts encoder features and local features rich in pose information by adding an external attention mechanism. The pose-guided feature matching and aggregation module obtains a learnable part semantic view through a transformer decoder and pose heat maps; this view is then matched and aggregated with the local features to adaptively learn pose-related features and enhance the robustness of the model to background clutter and occlusion. Experiments are conducted on three datasets: Market-1501, DukeMTMC-reID, and MSMT17. The Rank-1 accuracies are 95.3%, 90.1%, and 82.5%, respectively, and the mAP values are 88.3%, 81.1%, and 64.1%, respectively. In conclusion, the method introduces external attention into the ReID task, allowing the model to obtain more accurate results by extracting pose-related identity features and avoiding interference from samples with large pose changes.
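The abstract's external attention module relies on learnable memory units that are shared across all samples, which is what lets correlations between the poses of different samples be captured. Below is a minimal PyTorch sketch of a generic external attention layer in that spirit; the class name `ExternalAttention`, the memory size `mem_size`, and the usage shapes are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class ExternalAttention(nn.Module):
    """Generic external attention: two small learnable memory units
    (key and value) shared by every sample, so the attention map
    implicitly reflects correlations across the whole dataset."""
    def __init__(self, dim: int, mem_size: int = 64):
        super().__init__()
        self.mk = nn.Linear(dim, mem_size, bias=False)   # external key memory M_k
        self.mv = nn.Linear(mem_size, dim, bias=False)   # external value memory M_v

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim), e.g. ViT patch tokens
        attn = self.mk(x)                                 # (batch, tokens, mem_size)
        attn = attn.softmax(dim=1)                        # normalize over tokens
        attn = attn / (attn.sum(dim=2, keepdim=True) + 1e-9)  # double normalization
        return self.mv(attn)                              # (batch, tokens, dim)

if __name__ == "__main__":
    tokens = torch.randn(2, 196, 768)                    # hypothetical ViT token features
    out = ExternalAttention(dim=768)(tokens)
    print(out.shape)                                      # torch.Size([2, 196, 768])
```

The key design point is that, unlike self-attention, the memories M_k and M_v do not depend on the current input, so features from different images are projected against the same learned basis.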
Pages: 8