A Semantic-Aware Attention and Visual Shielding Network for Cloth-Changing Person Re-Identification

Cited by: 2
Authors
Gao, Zan [1 ,2 ]
Wei, Hongwei [1 ]
Guan, Weili [3 ]
Nie, Jie [4 ]
Wang, Meng [5 ]
Chen, Shengyong [2 ]
Affiliations
[1] Qilu Univ Technol, Shandong Artificial Intelligence Inst, Shandong Acad Sci, Jinan 250014, Peoples R China
[2] Tianjin Univ Technol, Minist Educ, Key Lab Comp Vis & Syst, Tianjin 300384, Peoples R China
[3] Monash Univ, Fac Informat Technol, Clayton, Vic 3800, Australia
[4] Ocean Univ China, Coll Informat Sci & Engn, Qingdao 266100, Peoples R China
[5] Hefei Univ Technol, Sch Comp Sci & Informat Engn, Hefei 230009, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Feature extraction; Visualization; Semantics; Task analysis; Clothing; Pedestrians; Shape; Cloth-changing person re-identification (ReID); human semantic attention (HSA); semantic-aware; visual clothes shielding (VCS);
DOI
10.1109/TNNLS.2023.3329384
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Cloth-changing person re-identification (ReID) is a newly emerging research topic that aims to retrieve pedestrians whose clothes have changed. Since human appearance varies greatly under different clothes, it is very difficult for existing approaches to extract discriminative and robust feature representations. Current works mainly focus on body shape or contour sketches, while human semantic information and the potential consistency of pedestrian features before and after changing clothes are not fully explored or are ignored. To solve these issues, a novel semantic-aware attention and visual shielding network for cloth-changing person ReID (abbreviated as SAVS) is proposed, whose key idea is to shield clues related to the appearance of clothes and focus only on visual semantic information that is insensitive to view/posture changes. Specifically, a visual semantic encoder is first employed to locate the human body and clothing regions based on human semantic segmentation information. Then, a human semantic attention (HSA) module is proposed to highlight the human semantic information and reweight the visual feature map. In addition, a visual clothes shielding (VCS) module is designed to extract a more robust feature representation for the cloth-changing task by covering the clothing regions and focusing the model on the visual semantic information unrelated to clothes. Most importantly, these two modules are jointly explored in an end-to-end unified framework. Extensive experiments demonstrate that the proposed method significantly outperforms state-of-the-art methods and extracts more robust features for cloth-changing persons. Compared with the multibiometric unified network (MBUNet) (published in TIP 2023), this method achieves improvements of 17.5% (30.9%) and 8.5% (10.4%) on the LTCC and Celeb-reID datasets in terms of mean average precision (mAP) (rank-1), respectively.
When compared with the Swin Transformer (Swin-T), the improvements reach 28.6% (17.3%), 22.5% (10.0%), 19.5% (10.2%), and 8.6% (10.1%) on the PRCC, LTCC, Celeb-reID, and NKUP datasets in terms of rank-1 (mAP), respectively.
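The two modules described in the abstract can be illustrated with a minimal NumPy sketch. This is a hypothetical toy illustration, not the authors' implementation: the parsing labels (0 = background, 1 = body, 2 = clothing), the `gain` parameter, and both function names are assumptions chosen for the example; HSA is approximated as additive reweighting of body pixels and VCS as zeroing clothing pixels in a CHW feature map.

```python
import numpy as np

def hsa_reweight(feat, parsing, body_labels=(1, 2), gain=1.0):
    """Human-semantic attention (sketch): boost responses on human-body pixels."""
    attn = np.isin(parsing, body_labels).astype(feat.dtype)  # H x W binary map
    return feat * (1.0 + gain * attn)  # broadcasts over the channel axis

def vcs_shield(feat, parsing, clothing_label=2):
    """Visual clothes shielding (sketch): zero out clothing regions."""
    keep = (parsing != clothing_label).astype(feat.dtype)  # 0 on clothes
    return feat * keep

# Toy CHW feature map (C=4) and a 3x3 parsing mask:
# column 0 = background, column 1 = body, column 2 = clothing.
feat = np.ones((4, 3, 3))
parsing = np.array([[0, 1, 2],
                    [0, 1, 2],
                    [0, 1, 2]])

hsa_out = hsa_reweight(feat, parsing)   # body/clothing pixels doubled
vcs_out = vcs_shield(hsa_out, parsing)  # clothing pixels then suppressed
```

After both steps, only the non-clothing body regions carry amplified responses, which mirrors the paper's stated goal of focusing on clothes-insensitive visual semantics.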
Pages: 1243-1257
Page count: 15
Related papers
50 items in total
  • [21] Masked Attribute Description Embedding for Cloth-Changing Person Re-Identification
    Peng, Chunlei
    Wang, Boyu
    Liu, Decheng
    Wang, Nannan
    Hu, Ruimin
    Gao, Xinbo
    IEEE TRANSACTIONS ON MULTIMEDIA, 2025, 27 : 1475 - 1485
  • [22] A Relation-aware Cloth-Changing Person Re-identification Framework Based on Clothing Template
    Su, Chenshuang
    Zou, Mingdong
    Zhou, Yujie
    Zhu, Xiaoke
    Liang, Wenjuan
    Yuan, Caihong
    2023 IEEE INTERNATIONAL CONFERENCES ON INTERNET OF THINGS (ITHINGS), IEEE GREEN COMPUTING AND COMMUNICATIONS (GREENCOM), IEEE CYBER, PHYSICAL AND SOCIAL COMPUTING (CPSCOM), IEEE SMART DATA (SMARTDATA) AND IEEE CONGRESS ON CYBERMATICS (CYBERMATICS), 2024, : 444 - 451
  • [23] Multiple Information Prompt Learning for Cloth-Changing Person Re-Identification
    Wei, Shengxun
    Gao, Zan
    Ma, Chunjie
    Zhao, Yibo
    Guan, Weili
    Chen, Shengyong
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2025, 34 : 801 - 815
  • [24] Dual Level Adaptive Weighting for Cloth-Changing Person Re-Identification
    Liu, Fangyi
    Ye, Mang
    Du, Bo
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 5075 - 5086
  • [25] Adaptive transformer with Pyramid Fusion for cloth-changing Person Re-Identification
    Zhang, Guoqing
    Zhou, Jieqiong
    Zheng, Yuhui
    Martin, Gaven
    Wang, Ruili
    PATTERN RECOGNITION, 2025, 163
  • [26] Semantic-Aware Occlusion-Robust Network for Occluded Person Re-Identification
    Zhang, Xiaokang
    Yan, Yan
    Xue, Jing-Hao
    Hua, Yang
    Wang, Hanzi
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31 (07) : 2764 - 2778
  • [27] Patching Your Clothes: Semantic-Aware Learning for Cloth-Changed Person Re-Identification
    Jia, Xuemei
    Zhong, Xian
    Ye, Mang
    Liu, Wenxuan
    Huang, Wenxin
    Zhao, Shilei
    MULTIMEDIA MODELING, MMM 2022, PT II, 2022, 13142 : 121 - 133
  • [28] IDENTITY-SENSITIVE KNOWLEDGE PROPAGATION FOR CLOTH-CHANGING PERSON RE-IDENTIFICATION
    Wu, Jianbing
    Liu, Hong
    Shi, Wei
    Tang, Hao
    Guo, Jingwen
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 1016 - 1020
  • [29] Occluded Cloth-Changing Person Re-Identification via Occlusion-aware Appearance and Shape Reasoning
    Nguyen, Vuong D.
    Mantini, Pranav
    Shah, Shishir K.
    2024 IEEE INTERNATIONAL CONFERENCE ON ADVANCED VIDEO AND SIGNAL BASED SURVEILLANCE, AVSS 2024, 2024,
  • [30] Multi-Stage Adversarial Learning for Cloth-Changing Person Re-Identification
    Wang, Chaoyue
    Gan, Litian
    Lin, Shuaijun
    Liu, Weijie
    Xia, Tian
    Huang, Guohao
    2024 5TH INTERNATIONAL CONFERENCE ON COMPUTER ENGINEERING AND APPLICATION, ICCEA 2024, 2024, : 1304 - 1310