Masked Attribute Description Embedding for Cloth-Changing Person Re-Identification

Cited by: 0
Authors
Peng, Chunlei [1 ,2 ]
Wang, Boyu [1 ,2 ]
Liu, Decheng [1 ,2 ]
Wang, Nannan [1 ]
Hu, Ruimin [3 ]
Gao, Xinbo [4 ]
Affiliations
[1] Xidian Univ, Sch Cyber Engn, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[2] Minist Educ, Key Lab Artificial Intelligence, Shanghai 200240, Peoples R China
[3] Xidian Univ, Sch Cyber Engn, Xian 710071, Peoples R China
[4] Chongqing Univ Posts & Telecommun, Chongqing Key Lab Image Cognit, Chongqing 400065, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Image color analysis; Shape; Three-dimensional displays; Skeleton; Training; Pedestrians; Visualization; Solid modeling; Interference; Attribute description; cloth-changing re-identification; person re-identification; transformer;
DOI
10.1109/TMM.2024.3521730
Chinese Library Classification
TP [Automation and computer technology];
Discipline classification code
0812;
Abstract
Cloth-changing person re-identification (CC-ReID) aims to match persons who change clothes over long periods. The key challenge in CC-ReID is to extract clothing-irrelevant features, such as face, hairstyle, body shape, and gait. Current research mainly focuses on modeling body shape using multi-modal biological features (such as silhouettes and sketches), but it does not fully leverage the personal description information hidden in the original RGB image. Considering that certain attribute descriptions remain unchanged after a change of clothes, we propose a Masked Attribute Description Embedding (MADE) method that unifies personal visual appearance and attribute description for CC-ReID. Variable clothing-sensitive information, such as color and type, is difficult to model effectively. To address this, we mask the clothing type and color information (upper-body type, upper-body color, lower-body type, and lower-body color) in the personal attribute description extracted by an attribute detection model. The masked attribute description is then concatenated and embedded into Transformer blocks at various levels, fusing it with the low-level to high-level features of the image. This approach compels the model to discard clothing information. Experiments are conducted on several CC-ReID benchmarks, including PRCC, LTCC, Celeb-reID-light, and LaST. Results demonstrate that MADE effectively utilizes attribute descriptions, enhances cloth-changing person re-identification performance, and compares favorably with state-of-the-art methods.
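The abstract gives no implementation details, so the following is a minimal PyTorch-style sketch of the two ideas it describes: masking the four cloth-sensitive attribute fields produced by an attribute detector, and re-attaching the embedded description to the image tokens at every Transformer block so it is fused with low- through high-level features. All names, shapes, and hyperparameters below are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

# Hypothetical attribute keys; the abstract names upper/lower body type and
# color as the cloth-sensitive fields to be masked.
CLOTH_SENSITIVE_KEYS = {"upper_body_type", "upper_body_color",
                        "lower_body_type", "lower_body_color"}

def mask_cloth_attributes(attributes, mask_token="[MASK]"):
    """Replace cloth-sensitive attribute values with a mask token while
    keeping cloth-unrelated attributes (e.g. gender, hair) unchanged."""
    return {k: (mask_token if k in CLOTH_SENSITIVE_KEYS else v)
            for k, v in attributes.items()}

class MaskedAttributeFusion(nn.Module):
    """Toy fusion module: embed the masked attribute tokens and re-attach them
    to the image patch tokens before every Transformer block, so the text
    description is fused with low- through high-level visual features."""
    def __init__(self, vocab_size=1000, dim=256, depth=4, heads=4):
        super().__init__()
        self.attr_embed = nn.Embedding(vocab_size, dim)
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
            for _ in range(depth))

    def forward(self, patch_tokens, attr_ids):
        # patch_tokens: (B, N, dim) image features; attr_ids: (B, A) token ids
        attr_tokens = self.attr_embed(attr_ids)                   # (B, A, dim)
        x, n = patch_tokens, patch_tokens.size(1)
        for blk in self.blocks:
            # concatenate attribute tokens at every level, then drop them again
            x = blk(torch.cat([x, attr_tokens], dim=1))[:, :n]
        return x.mean(dim=1)                                      # pooled identity feature

if __name__ == "__main__":
    attrs = {"gender": "female", "hair": "long",
             "upper_body_type": "jacket", "upper_body_color": "red",
             "lower_body_type": "jeans", "lower_body_color": "blue"}
    print(mask_cloth_attributes(attrs))
    model = MaskedAttributeFusion()
    feat = model(torch.randn(2, 196, 256), torch.randint(0, 1000, (2, 6)))
    print(feat.shape)  # torch.Size([2, 256])
```

Because the cloth-sensitive values are replaced before embedding, the attribute tokens carry only clothing-independent cues, which is the mechanism the abstract credits with forcing the model to discard clothing information.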
Pages: 1475-1485
Number of pages: 11
Related papers
50 items in total
  • [31] Face and body-shape integration model for cloth-changing person re-identification
    Agbodike, Obinna
    Zhang, Weijin
    Chen, Jenhui
    Wang, Lei
    IMAGE AND VISION COMPUTING, 2023, 140
  • [32] DCR-ReID: Deep Component Reconstruction for Cloth-Changing Person Re-Identification
    Cui, Zhenyu
    Zhou, Jiahuan
    Peng, Yuxin
    Zhang, Shiliang
    Wang, Yaowei
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (08) : 4415 - 4428
  • [33] IRANet: Identity-relevance aware representation for cloth-changing person re-identification
    Shi, Wei
    Liu, Hong
    Liu, Mengyuan
    IMAGE AND VISION COMPUTING, 2022, 117
  • [34] Cloth-Changing Person Re-identification from A Single Image with Gait Prediction and Regularization
    Jin, Xin
    He, Tianyu
    Zheng, Kecheng
    Yin, Zhiheng
    Shen, Xu
    Huang, Zhen
    Feng, Ruoyu
    Huang, Jianqiang
    Chen, Zhibo
    Hua, Xian-Sheng
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 14258 - 14267
  • [35] Exploring Shape Embedding for Cloth-Changing Person Re-Identification via 2D-3D Correspondences
    Wang, Yubin
    Yu, Huimin
    Yan, Yuming
    Song, Shuyi
    Liu, Biyang
    Lu, Yichong
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 7121 - 7130
  • [36] A Semantic-Aware Attention and Visual Shielding Network for Cloth-Changing Person Re-Identification
    Gao, Zan
    Wei, Hongwei
    Guan, Weili
    Nie, Jie
    Wang, Meng
    Chen, Shengyong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025, 36 (01) : 1243 - 1257
  • [38] Occlusion-aware appearance and shape learning for occluded cloth-changing person re-identification
    Nguyen, Vuong D.
    Mantini, Pranav
    Shah, Shishir K.
    PATTERN ANALYSIS AND APPLICATIONS, 2025, 28 (02)
  • [39] A Relation-aware Cloth-Changing Person Re-identification Framework Based on Clothing Template
    Su, Chenshuang
    Zou, Mingdong
    Zhou, Yujie
    Zhu, Xiaoke
    Liang, Wenjuan
    Yuan, Caihong
    2023 IEEE INTERNATIONAL CONFERENCES ON INTERNET OF THINGS (ITHINGS), IEEE GREEN COMPUTING AND COMMUNICATIONS (GREENCOM), IEEE CYBER, PHYSICAL AND SOCIAL COMPUTING (CPSCOM), IEEE SMART DATA (SMARTDATA), AND IEEE CONGRESS ON CYBERMATICS (CYBERMATICS), 2024, : 444 - 451
  • [40] Robust auxiliary modality is beneficial for video-based cloth-changing person re-identification
    Chen, Youming
    Tuo, Ting
    Guo, Lijun
    Zhang, Rong
    Wang, Yirui
    Gao, Shangce
    IMAGE AND VISION COMPUTING, 2025, 154