A Semantic-Aware Attention and Visual Shielding Network for Cloth-Changing Person Re-Identification

Cited by: 2
Authors
Gao, Zan [1 ,2 ]
Wei, Hongwei [1 ]
Guan, Weili [3 ]
Nie, Jie [4 ]
Wang, Meng [5 ]
Chen, Shengyong [2 ]
Affiliations
[1] Qilu Univ Technol, Shandong Artificial Intelligence Inst, Shandong Acad Sci, Jinan 250014, Peoples R China
[2] Tianjin Univ Technol, Minist Educ, Key Lab Comp Vis & Syst, Tianjin 300384, Peoples R China
[3] Monash Univ, Fac Informat Technol, Clayton, Vic 3800, Australia
[4] Ocean Univ China, Coll Informat Sci & Engn, Qingdao 266100, Peoples R China
[5] Hefei Univ Technol, Sch Comp Sci & Informat Engn, Hefei 230009, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Feature extraction; Visualization; Semantics; Task analysis; Clothing; Pedestrians; Shape; Cloth-changing person re-identification (ReID); human semantic attention (HSA); semantic-aware; visual clothes shielding (VCS);
DOI
10.1109/TNNLS.2023.3329384
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Cloth-changing person re-identification (ReID) is a newly emerging research topic that aims to retrieve pedestrians whose clothes have changed. Since human appearance varies greatly across different clothes, it is very difficult for existing approaches to extract discriminative and robust feature representations. Current works mainly focus on body shape or contour sketches, but human semantic information and the potential consistency of pedestrian features before and after changing clothes are not fully explored or are ignored. To solve these issues, in this work, a novel semantic-aware attention and visual shielding network for cloth-changing person ReID (abbreviated as SAVS) is proposed, whose key idea is to shield clues related to clothing appearance and focus only on visual semantic information that is insensitive to view/posture changes. Specifically, a visual semantic encoder is first employed to locate the human body and clothing regions based on human semantic segmentation information. Then, a human semantic attention (HSA) module is proposed to highlight the human semantic information and reweight the visual feature map. In addition, a visual clothes shielding (VCS) module is designed to extract a more robust feature representation for the cloth-changing task by covering the clothing regions and focusing the model on visual semantic information unrelated to clothes. Most importantly, these two modules are jointly explored in an end-to-end unified framework. Extensive experiments demonstrate that the proposed method significantly outperforms state-of-the-art methods and extracts more robust features for cloth-changing persons. Compared with the multibiometric unified network (MBUNet, published in TIP 2023), this method achieves improvements of 17.5% (30.9%) and 8.5% (10.4%) on the LTCC and Celeb-reID datasets in terms of mean average precision (mAP) (rank-1), respectively.
When compared with the Swin Transformer (Swin-T), the improvements reach 28.6% (17.3%), 22.5% (10.0%), 19.5% (10.2%), and 8.6% (10.1%) on the PRCC, LTCC, Celeb-reID, and NKUP datasets in terms of rank-1 (mAP), respectively.
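The HSA and VCS modules described in the abstract can be pictured as two mask-driven operations on a feature map: attention reweighting that boosts human-body regions, and shielding that zeroes out clothing regions. The following NumPy sketch illustrates that idea only; the function names, the additive weighting scheme, and the hard masking are assumptions for illustration, not the paper's actual implementation (which learns these operations end-to-end within a deep network).

```python
import numpy as np

def hsa_reweight(feat, human_mask, alpha=1.0):
    """Hypothetical human-semantic-attention step: amplify responses
    inside human-body regions of the feature map.
    feat: (C, H, W) feature map; human_mask: (H, W), 1 on human pixels."""
    weight = 1.0 + alpha * human_mask          # emphasise human regions
    return feat * weight[None, :, :]           # broadcast over channels

def vcs_shield(feat, clothes_mask):
    """Hypothetical visual-clothes-shielding step: zero out clothing
    regions so the features ignore clothes appearance.
    clothes_mask: (H, W), 1 on clothing pixels."""
    return feat * (1.0 - clothes_mask)[None, :, :]

# Toy example on a 4-channel, 3x3 feature map.
feat = np.ones((4, 3, 3))
human = np.zeros((3, 3)); human[1, :] = 1      # middle row is "body"
clothes = np.zeros((3, 3)); clothes[1, 1] = 1  # centre pixel is "clothes"
out = vcs_shield(hsa_reweight(feat, human), clothes)
# Body pixels are doubled, the clothes pixel is suppressed to zero,
# and background pixels pass through unchanged.
```

In the paper both masks come from a human semantic segmentation model, and the reweighting is learned jointly with the backbone rather than being a fixed elementwise product.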
Pages: 1243-1257 (15 pages)
Related papers
50 items in total
  • [31] Cloth-Changing Person Re-Identification With Invariant Feature Parsing for UAVs Applications
    Xiong, Mingfu
    Yang, Xinxin
    Chen, Hanmei
    Aly, Wael Hosny Fouad
    Altameem, Abdullah
    Saudagar, Abdul Khader Jilani
    Mumtaz, Shahid
    Muhammad, Khan
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (09) : 12448 - 12457
  • [32] Robust Fine-Grained Learning for Cloth-Changing Person Re-Identification
    Yin, Qingze
    Ding, Guodong
    Zhang, Tongpo
    Gong, Yumei
    MATHEMATICS, 2025, 13 (03)
  • [33] Cloth-changing person re-identification paradigm based on domain augmentation and adaptation
    Peixu Z.
    Guanyu H.
    Xinyu Y.
    Xi'an Dianzi Keji Daxue Xuebao/Journal of Xidian University, 2023, 50 (05): : 87 - 94
  • [34] Joint feature augmentation and posture label for cloth-changing person re-identification
    Jiang, Liman
    Zhang, Canlong
    Wu, Lei
    Li, Zhixin
    Wang, Zhiwen
    Wei, Chunrong
    MULTIMEDIA SYSTEMS, 2025, 31 (02)
  • [35] Good is Bad: Causality Inspired Cloth-debiasing for Cloth-changing Person Re-identification
    Yang, Zhengwei
    Lin, Meng
    Zhong, Xian
    Wu, Yu
    Wang, Zheng
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 1472 - 1481
  • [36] Temporal Transformation Network Based On Scale Sequences for Cloth-Changing Person Re-Identification in Video Datasets
    Zhu, Xiaoke
    Zhao, Bo
    Dong, Zhiwei
    Dong, Lingyun
    Li, Danyang
    2023 9th International Conference on Computer and Communications, ICCC 2023, 2023, : 1821 - 1825
  • [37] Exploring Fine-Grained Representation and Recomposition for Cloth-Changing Person Re-Identification
    Wang, Qizao
    Qian, Xuelin
    Li, Bin
    Xue, Xiangyang
    Fu, Yanwei
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 6280 - 6292
  • [38] An In-depth Exploration of Person Re-identification and Gait Recognition in Cloth-Changing Conditions
    Li, Weijia
    Hou, Saihui
    Zhang, Chunjie
    Cao, Chunshui
    Liu, Xu
    Huang, Yongzhen
    Zhao, Yao
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 13824 - 13833
  • [39] Unified Stability and Plasticity for Lifelong Person Re-Identification in Cloth-Changing and Cloth-Consistent Scenarios
    Yan, Yuming
    Yu, Huimin
    Wang, Yubin
    Song, Shuyi
    Huang, Weihu
    Jin, Juncan
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (10) : 9166 - 9180
  • [40] Vision transformer-based robust learning for cloth-changing person re-identification
    Xue, Chen
    Deng, Zhongliang
    Yang, Wangwang
    Hu, Enwen
    Zhang, Yao
    Wang, Shuo
    Wang, Yiming
    APPLIED SOFT COMPUTING, 2024, 163