Thermal Gait Dataset for Deep Learning-Oriented Gait Recognition

Cited by: 0
Authors
Youssef, Fatma [1 ]
El-Mahdy, Ahmed [1 ,2 ]
Ogawa, Tetsuji [3 ]
Gomaa, Walid [2 ,4 ]
Affiliations
[1] Egypt Japan Univ Sci & Technol, Dept Comp Sci & Engn, Alexandria, Egypt
[2] Alexandria Univ, Fac Engn, Alexandria, Egypt
[3] Waseda Univ, Dept Commun & Comp Engn, Tokyo, Japan
[4] Egypt Japan Univ Sci & Technol, Cyber Phys Syst Lab, Alexandria, Egypt
Keywords
thermal imagery; human gait; convolutional neural networks; vision transformers; gender recognition; person verification;
DOI
10.1109/IJCNN54540.2023.10191513
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This study constructed a thermal dataset of human gait, captured in diverse environments, suitable for building and evaluating sophisticated deep learning models (e.g., vision transformers) for gait recognition. Gait is a behavioral biometric that can identify a person without requiring the person's cooperation, making it well suited to security and surveillance applications. For such purposes, it is desirable to recognize a person in darkness or other poor lighting conditions, where thermal imagery is advantageous over visible-light imagery. Despite the importance of nighttime person identification, available thermal gait datasets captured in the dark are scarce. This study therefore collected a relatively large set of thermal gait data in both indoor and outdoor environments with several walking styles, e.g., walking normally, walking while carrying a bag, and walking fast. The dataset was applied to multiple gait recognition tasks, such as gender classification and person verification, using legacy convolutional neural networks (CNNs) and modern vision transformers (ViTs). Experiments on this dataset revealed an effective training method for person verification, the effectiveness of ViTs for gait recognition, and the robustness of the models to differences in walking style; these results suggest that the developed dataset enables a wide range of gait recognition studies with state-of-the-art deep learning models.
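Person verification of the kind described in the abstract is typically posed as an open-set comparison: a CNN or ViT backbone maps each gait sequence to an embedding vector, and two embeddings are declared the same person if their similarity exceeds a threshold. The following is only an illustrative sketch of that decision step, not the paper's protocol; the function names, the use of cosine similarity, and the threshold value are all assumptions, and in practice the threshold would be tuned on a validation split (e.g., at the equal-error-rate point).

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (plain Python lists)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(probe_emb, gallery_emb, threshold=0.8):
    """Accept the identity claim if the probe and gallery embeddings are
    similar enough. The 0.8 threshold is hypothetical; a real system tunes
    it on held-out data to trade off false accepts vs. false rejects."""
    return cosine_similarity(probe_emb, gallery_emb) >= threshold
```

In such a setup the backbone producing the embeddings can be swapped (CNN vs. ViT) without changing the verification logic, which is one reason embedding-plus-threshold pipelines are a common baseline for comparing architectures.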
Pages: 8