Incorporating texture and silhouette for video-based person re-identification

Cited: 2
|
Authors
Bai, Shutao [1 ,2 ]
Chang, Hong [1 ,2 ]
Ma, Bingpeng [2 ]
Affiliations
[1] Chinese Acad Sci, CAS, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
Keywords
Silhouette; Relational modeling; Decomposition;
DOI
10.1016/j.patcog.2024.110759
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Silhouette is an effective modality for video-based person re-identification (ReID), since it contains features (e.g., stature and gait) complementary to the RGB modality. However, recent silhouette-assisted methods have neither fully explored the spatial-temporal relations within each modality nor considered cross-modal complementarity during fusion. To address these two issues, we propose a Complete Relational Framework with two key components. The first, the Spatial-Temporal Relational Module (STRM), explores spatiotemporal relations: it decomposes the video's spatiotemporal context into local/fine-grained and global/semantic aspects, modeling them sequentially to enhance the representation of each modality. The second, the Modality-Channel Relational Module (MCRM), explores the complementarity between RGB and silhouette videos: it aligns the two modalities semantically and multiplies them to capture complementary interrelations. With these two modules focusing on intra- and cross-modal relationships, our method achieves superior results across multiple benchmarks with minimal additional parameters and FLOPs. Code and models are available at https://github.com/baist/crf.
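The abstract describes MCRM as aligning the two modalities semantically and then multiplying them to capture complementary interrelations. The paper's actual module is not reproduced here; the following is a minimal, hedged sketch of that multiplicative cross-modal fusion idea on toy per-channel features, with all names (`align`, `fuse`) illustrative rather than taken from the released code.

```python
# Hedged sketch of multiplicative cross-modal fusion (in the spirit of MCRM).
# All function names and the scalar "alignment" are illustrative assumptions,
# not the authors' implementation.

def align(feats, scale):
    """Toy stand-in for semantic alignment: rescale one modality's
    per-channel responses toward the other's range."""
    return [f * scale for f in feats]

def fuse(rgb, sil):
    """Element-wise multiplication: a channel stays active only when BOTH
    modalities respond, modeling complementary interrelations."""
    assert len(rgb) == len(sil)
    return [r * s for r, s in zip(rgb, sil)]

rgb_feats = [0.5, 1.0, 0.0, 2.0]   # toy per-channel RGB responses
sil_feats = [1.0, 0.5, 3.0, 0.0]   # toy per-channel silhouette responses
fused = fuse(rgb_feats, align(sil_feats, 1.0))
```

The multiplicative form suppresses channels where either modality is silent (e.g., appearance cues absent from the silhouette), which is one plausible reading of how "complementary interrelations" are captured; the real module operates on aligned feature maps rather than scalar lists.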
Pages: 13
Related papers (50 total)
  • [21] Temporally Aligned Pooling Representation for Video-Based Person Re-identification
    Gao, Changxin
    Wang, Jin
    Liu, Leyuan
    Yu, Jin-Gang
    Sang, Nong
    2016 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2016, : 4284 - 4288
  • [22] Diverse part attentive network for video-based person re-identification
    Shu, Xiujun
    Li, Ge
    Wei, Longhui
    Zhong, Jia-Xing
    Zang, Xianghao
    Zhang, Shiliang
    Wang, Yaowei
    Liang, Yongsheng
    Tian, Qi
    PATTERN RECOGNITION LETTERS, 2021, 149 : 17 - 23
  • [23] Diversity Regularized Spatiotemporal Attention for Video-based Person Re-identification
    Li, Shuang
    Bak, Slawomir
    Carr, Peter
    Wang, Xiaogang
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 369 - 378
  • [24] Multiscale Aligned Spatial-Temporal Interaction for Video-Based Person Re-Identification
    Ran, Zhidan
    Wei, Xuan
    Liu, Wei
    Lu, Xiaobo
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (09) : 8536 - 8546
  • [25] Convolutional Temporal Attention Model for Video-Based Person Re-identification
    Rahman, Tanzila
    Rochan, Mrigank
    Wang, Yang
    2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2019, : 1102 - 1107
  • [26] A Duplex Spatiotemporal Filtering Network for Video-based Person Re-identification
    Zheng, Chong
    Wei, Ping
    Zheng, Nanning
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 7551 - 7557
  • [27] Learning Compact Appearance Representation for Video-Based Person Re-Identification
    Zhang, Wei
    Hu, Shengnan
    Liu, Kan
    Zha, Zhengjun
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2019, 29 (08) : 2442 - 2452
  • [28] Learning Bidirectional Temporal Cues for Video-Based Person Re-Identification
    Zhang, Wei
    Yu, Xiaodong
    He, Xuanyu
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2018, 28 (10) : 2768 - 2776
  • [29] Video-Based Person Re-Identification Using Unsupervised Tracklet Matching
    Riachy, Chirine
    Khelifi, Fouad
    Bouridane, Ahmed
    IEEE ACCESS, 2019, 7 : 20596 - 20606
  • [30] Sequences consistency feature learning for video-based person re-identification
    Zhao, Kai
    Cheng, Deqiang
    Kou, Qiqi
    Li, Jiahan
    Liu, Ruihang
    ELECTRONICS LETTERS, 2022, 58 (04) : 142 - 144