Real-time Video-based Person Re-identification Surveillance with Light-weight Deep Convolutional Networks

Cited by: 5
Authors
Wang, Chien-Yao [1 ]
Chen, Ping-Yang [2 ]
Chen, Ming-Chiao [3 ]
Hsieh, Jun-Wei [2 ]
Liao, Hong-Yuan Mark [1 ]
Affiliations
[1] Acad Sinica, Inst Informat Sci, Taipei, Taiwan
[2] Natl Taiwan Ocean Univ, Dept Comp Sci & Engn, Keelung, Taiwan
[3] Natl Taitung Univ, Dept Comp Sci & Informat Engn, Taitung, Taiwan
Keywords
DOI
10.1109/avss.2019.8909855
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Numbers
081104; 0812; 0835; 1405;
Abstract
Most of today's person re-ID systems focus on accuracy and ignore efficiency. In real-world surveillance systems, however, efficiency is often the primary concern, so for a person re-ID system the ability to perform identification in real time is the most important consideration. In this study, we implement a real-time, multi-camera, video-based person re-ID system on the NVIDIA Jetson TX2 platform. The system can be used in settings that require strong privacy and immediate monitoring. By applying YOLOv3-tiny-based light-weight strategies together with person re-ID technology, it reduces computation by 46%, shrinks the model size by 39.9%, and increases computing speed by 21%, while also improving pedestrian detection accuracy. In addition, the proposed person re-ID example mining and training method improves the model's performance and enhances its robustness on cross-domain data. Our system also supports a pipeline formed by connecting multiple edge computing devices in series, and can run at up to 18 fps on a 1920x1080 surveillance video stream.
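To make the processing flow described in the abstract concrete, the sketch below shows a minimal per-frame detect-then-re-identify loop in Python: a lightweight detector proposes person boxes, each crop is embedded, and the embedding is matched against a gallery of known identities by cosine similarity. This is an illustrative sketch only; the names (Gallery, process_frame, detect(), embed()) and the matching threshold are assumptions for exposition, not the authors' released implementation.

# Rough sketch of a per-frame detect-then-re-identify loop (illustrative only).
# The `detector` and `embedder` objects are hypothetical stand-ins for the
# paper's YOLOv3-tiny-based pedestrian detector and light-weight re-ID network.
import numpy as np

class Gallery:
    """Keeps one embedding per known identity; matches queries by cosine similarity."""
    def __init__(self, threshold=0.6):           # threshold value is an assumption
        self.ids, self.feats = [], []
        self.threshold = threshold

    def add(self, person_id, feat):
        # Store an L2-normalized copy so dot products become cosine similarities.
        self.ids.append(person_id)
        self.feats.append(feat / (np.linalg.norm(feat) + 1e-9))

    def match(self, feat):
        if not self.feats:
            return None
        feat = feat / (np.linalg.norm(feat) + 1e-9)
        sims = np.stack(self.feats) @ feat        # similarity to every gallery identity
        best = int(np.argmax(sims))
        return self.ids[best] if sims[best] >= self.threshold else None

def process_frame(frame, detector, embedder, gallery):
    """Detect persons in one frame, embed each crop, and look up its identity."""
    results = []
    for (x1, y1, x2, y2) in detector.detect(frame):   # assumed to return pixel boxes
        crop = frame[y1:y2, x1:x2]
        feat = embedder.embed(crop)                    # assumed to return a 1-D feature vector
        results.append(((x1, y1, x2, y2), gallery.match(feat)))
    return results

In the described system, the detector role would presumably be filled by the YOLOv3-tiny-based pedestrian detector and the embedder by the light-weight re-ID network, with one such loop per camera stream on each Jetson TX2 in the serially connected edge pipeline.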
Pages: 8
Related Papers
50 records in total
  • [41] Temporal Extension Topology Learning for Video-Based Person Re-identification
    Ning, Jiaqi
    Li, Fei
    Liu, Rujie
    Takeuchi, Shun
    Suzuki, Genta
    COMPUTER VISION - ACCV 2022 WORKSHOPS, 2023, 13848 : 213 - 225
  • [42] TEMPORALLY ALIGNED POOLING REPRESENTATION FOR VIDEO-BASED PERSON RE-IDENTIFICATION
    Gao, Changxin
    Wang, Jin
    Liu, Leyuan
    Yu, Jin-Gang
    Sang, Nong
    2016 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2016, : 4284 - 4288
  • [43] Diverse part attentive network for video-based person re-identification
    Shu, Xiujun
    Li, Ge
    Wei, Longhui
    Zhong, Jia-Xing
    Zang, Xianghao
    Zhang, Shiliang
    Wang, Yaowei
    Liang, Yongsheng
    Tian, Qi
    PATTERN RECOGNITION LETTERS, 2021, 149 : 17 - 23
  • [44] Diversity Regularized Spatiotemporal Attention for Video-based Person Re-identification
    Li, Shuang
    Bak, Slawomir
    Carr, Peter
    Wang, Xiaogang
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 369 - 378
  • [45] Multiscale Aligned Spatial-Temporal Interaction for Video-Based Person Re-Identification
    Ran, Zhidan
    Wei, Xuan
    Liu, Wei
    Lu, Xiaobo
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (09) : 8536 - 8546
  • [46] A Duplex Spatiotemporal Filtering Network for Video-based Person Re-identification
    Zheng, Chong
    Wei, Ping
    Zheng, Nanning
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 7551 - 7557
  • [47] Learning Compact Appearance Representation for Video-Based Person Re-Identification
    Zhang, Wei
    Hu, Shengnan
    Liu, Kan
    Zha, Zhengjun
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2019, 29 (08) : 2442 - 2452
  • [48] Learning Bidirectional Temporal Cues for Video-Based Person Re-Identification
    Zhang, Wei
    Yu, Xiaodong
    He, Xuanyu
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2018, 28 (10) : 2768 - 2776
  • [49] Video-Based Person Re-Identification Using Unsupervised Tracklet Matching
    Riachy, Chirine
    Khelifi, Fouad
    Bouridane, Ahmed
    IEEE ACCESS, 2019, 7 : 20596 - 20606
  • [50] Sequences consistency feature learning for video-based person re-identification
    Zhao, Kai
    Cheng, Deqiang
    Kou, Qiqi
    Li, Jiahan
    Liu, Ruihang
    ELECTRONICS LETTERS, 2022, 58 (04) : 142 - 144