Learning Viewpoint-Invariant Features for LiDAR-Based Gait Recognition

Cited by: 0
Authors
Ahn, Jeongho [1 ]
Nakashima, Kazuto [2 ]
Yoshino, Koki [1 ]
Iwashita, Yumi [3 ]
Kurazume, Ryo [2 ]
Affiliations
[1] Kyushu Univ, Grad Sch Informat Sci & Elect Engn, Fukuoka 8190395, Japan
[2] Kyushu Univ, Fac Informat Sci & Elect Engn, Fukuoka 8190395, Japan
[3] CALTECH, Jet Prop Lab, Pasadena, CA 91125 USA
Funding
Japan Science and Technology Agency (JST);
Keywords
Gait recognition; 3D point cloud; LiDAR; convolutional neural networks; attention mechanism;
DOI
10.1109/ACCESS.2023.3333037
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Gait recognition is a biometric identification method based on individual walking patterns. This modality is applied in a wide range of applications, such as criminal investigations and identification systems, since it can be performed at a long distance and requires no cooperation from the subjects of interest. In general, cameras are used for gait recognition systems, and previous studies have utilized depth information captured by RGB-D cameras, such as Microsoft Kinect. In recent years, multi-layer LiDAR sensors, which can obtain range images of a target at distances of over 100 m in real time, have attracted significant attention in the fields of autonomous mobile robots and self-driving vehicles. Compared with general cameras, however, LiDAR sensors have rarely been used for biometrics because of the low point cloud densities captured at long distances. In this study, we focus on improving the robustness of gait recognition using LiDAR sensors under confounding conditions, specifically addressing the challenges posed by viewing angles and measurement distances. First, our recognition model employs two spatial resolution scales to enhance robustness to varying point cloud densities. In addition, the method learns gait features from two invariant viewpoints (i.e., left-side and back views) generated by estimating the walking direction. Furthermore, we propose a novel attention block that adaptively recalibrates channel-wise weights to fuse the features from the aforementioned resolutions and viewpoints. Comprehensive experiments conducted on our dataset demonstrate that our model outperforms existing methods, particularly in cross-view and cross-distance challenges and in practical scenarios.
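The channel-wise recalibration described in the abstract can be pictured as a squeeze-and-excitation-style attention applied to concatenated branch features. The sketch below is not the authors' implementation; it is a minimal, hypothetical PyTorch example (the class name FusionAttention, the reduction ratio, and all tensor shapes are assumptions) showing how features from two branches, e.g., two resolutions or two viewpoints, could be concatenated and reweighted channel by channel before fusion.

# Minimal sketch (not the authors' code) of a channel-wise recalibration block,
# assuming a squeeze-and-excitation-style design: features from two branches are
# concatenated, per-channel weights are predicted from global context, and the
# concatenated tensor is rescaled before further fusion. Names are hypothetical.
import torch
import torch.nn as nn


class FusionAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global context per channel
        self.mlp = nn.Sequential(                     # excitation: channel-wise weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a, feat_b: (N, C, H, W) feature maps from two branches
        x = torch.cat([feat_a, feat_b], dim=1)        # (N, 2C, H, W)
        n, c, _, _ = x.shape
        w = self.mlp(self.pool(x).view(n, c))         # (N, 2C) channel weights in (0, 1)
        return x * w.view(n, c, 1, 1)                 # recalibrated, fused features


if __name__ == "__main__":
    block = FusionAttention(channels=128)             # two 64-channel branches
    a = torch.randn(2, 64, 16, 11)
    b = torch.randn(2, 64, 16, 11)
    print(block(a, b).shape)                          # torch.Size([2, 128, 16, 11])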
Pages: 129749 - 129762
Page count: 14