A Semantic-Aware Attention and Visual Shielding Network for Cloth-Changing Person Re-Identification

Cited by: 2
Authors
Gao, Zan [1 ,2 ]
Wei, Hongwei [1 ]
Guan, Weili [3 ]
Nie, Jie [4 ]
Wang, Meng [5 ]
Chen, Shengyong [2 ]
Affiliations
[1] Qilu Univ Technol, Shandong Artificial Intelligence Inst, Shandong Acad Sci, Jinan 250014, Peoples R China
[2] Tianjin Univ Technol, Minist Educ, Key Lab Comp Vis & Syst, Tianjin 300384, Peoples R China
[3] Monash Univ, Fac Informat Technol, Clayton, Vic 3800, Australia
[4] Ocean Univ China, Coll Informat Sci & Engn, Qingdao 266100, Peoples R China
[5] Hefei Univ Technol, Sch Comp Sci & Informat Engn, Hefei 230009, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Feature extraction; Visualization; Semantics; Task analysis; Clothing; Pedestrians; Shape; Cloth-changing person re-identification (ReID); human semantic attention (HSA); semantic-aware; visual clothes shielding (VCS);
DOI
10.1109/TNNLS.2023.3329384
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Cloth-changing person re-identification (ReID) is a newly emerging research topic that aims to retrieve pedestrians whose clothes have changed. Since human appearance varies greatly across different clothes, it is very difficult for existing approaches to extract discriminative and robust feature representations. Current works mainly focus on body shape or contour sketches, but the human semantic information and the potential consistency of pedestrian features before and after changing clothes are not fully explored or are ignored. To solve these issues, in this work, a novel semantic-aware attention and visual shielding network for cloth-changing person ReID (abbreviated as SAVS) is proposed, whose key idea is to shield clues related to the appearance of clothes and focus only on visual semantic information that is insensitive to view/posture changes. Specifically, a visual semantic encoder is first employed to locate the human body and clothing regions based on human semantic segmentation information. Then, a human semantic attention (HSA) module is proposed to highlight the human semantic information and reweight the visual feature map. In addition, a visual clothes shielding (VCS) module is designed to extract a more robust feature representation for the cloth-changing task by covering the clothing regions and focusing the model on the visual semantic information unrelated to clothes. Most importantly, these two modules are jointly explored in an end-to-end unified framework. Extensive experiments demonstrate that the proposed method significantly outperforms state-of-the-art methods and extracts more robust features for cloth-changing persons. Compared with the multibiometric unified network (MBUNet) (published in TIP 2023), this method achieves improvements of 17.5% (30.9%) and 8.5% (10.4%) on the LTCC and Celeb-reID datasets in terms of mean average precision (mAP) (rank-1), respectively.
When compared with the Swin Transformer (Swin-T), the improvements reach 28.6% (17.3%), 22.5% (10.0%), 19.5% (10.2%), and 8.6% (10.1%) on the PRCC, LTCC, Celeb-reID, and NKUP datasets in terms of rank-1 (mAP), respectively.
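The HSA and VCS operations described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the parsing label ids, the simple additive attention weight, and the hard zero-masking are all assumptions made for illustration; the actual modules operate on learned feature maps inside an end-to-end network.

```python
import numpy as np

# Hypothetical label ids from a human-parsing model (assumption, not from the paper).
BACKGROUND, HEAD, UPPER_CLOTHES, ARMS, PANTS, LEGS = 0, 1, 2, 3, 4, 5
CLOTHES_IDS = [UPPER_CLOTHES, PANTS]   # regions to shield
BODY_IDS = [HEAD, ARMS, LEGS]          # clothes-insensitive regions to highlight

def hsa_reweight(feat, parsing, body_gain=1.0):
    """Human semantic attention (sketch): upweight features on body pixels.

    feat:    (C, H, W) visual feature map
    parsing: (H, W) semantic segmentation label map
    """
    body = np.isin(parsing, BODY_IDS).astype(feat.dtype)  # (H, W) body mask
    attn = 1.0 + body_gain * body                         # body pixels weighted up
    return feat * attn[None, :, :]                        # broadcast over channels

def vcs_shield(feat, parsing):
    """Visual clothes shielding (sketch): zero out clothing-region features."""
    keep = (~np.isin(parsing, CLOTHES_IDS)).astype(feat.dtype)
    return feat * keep[None, :, :]
```

In this toy form, VCS discards clothing-region responses outright while HSA doubles the weight of body-region responses, so the downstream representation is dominated by clothes-insensitive semantics.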
Pages: 1243-1257 (15 pages)