Learning Viewpoint-Invariant Features for LiDAR-Based Gait Recognition

Cited: 0
Authors
Ahn, Jeongho [1 ]
Nakashima, Kazuto [2 ]
Yoshino, Koki [1 ]
Iwashita, Yumi [3 ]
Kurazume, Ryo [2 ]
Affiliations
[1] Kyushu Univ, Grad Sch Informat Sci & Elect Engn, Fukuoka 8190395, Japan
[2] Kyushu Univ, Fac Informat Sci & Elect Engn, Fukuoka 8190395, Japan
[3] CALTECH, Jet Prop Lab, Pasadena, CA 91125 USA
Funding
Japan Science and Technology Agency (JST);
Keywords
Gait recognition; 3D point cloud; LiDAR; convolutional neural networks; attention mechanism;
DOI
10.1109/ACCESS.2023.3333037
Chinese Library Classification (CLC)
TP [Automation Technology and Computer Technology];
Discipline code
0812 ;
Abstract
Gait recognition is a biometric identification method based on individual walking patterns. This modality is applied in a wide range of applications, such as criminal investigations and identification systems, since it can be performed at a long distance and requires no cooperation from the subject. In general, cameras are used for gait recognition systems, and previous studies have utilized depth information captured by RGB-D cameras, such as the Microsoft Kinect. In recent years, multi-layer LiDAR sensors, which can obtain range images of a target at ranges of over 100 m in real time, have attracted significant attention in the fields of autonomous mobile robots and self-driving vehicles. Compared with general cameras, however, LiDAR sensors have rarely been used for biometrics because of the low point cloud densities captured at long distances. In this study, we focus on improving the robustness of gait recognition using LiDAR sensors under confounding conditions, specifically addressing the challenges posed by viewing angles and measurement distances. First, our recognition model employs two spatial resolution scales to enhance robustness to varying point cloud densities. In addition, the method learns gait features from two invariant viewpoints (i.e., left-side and back views) generated by estimating the walking direction. Furthermore, we propose a novel attention block that adaptively recalibrates channel-wise weights to fuse the features from the aforementioned resolutions and viewpoints. Comprehensive experiments conducted on our dataset demonstrate that our model outperforms existing methods, particularly in cross-view and cross-distance challenges as well as practical scenarios.
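The channel-wise recalibration described in the abstract can be illustrated with a squeeze-and-excitation-style fusion step. The sketch below is an assumption about the general mechanism (global pooling, a bottleneck MLP, and sigmoid gating), not the paper's exact block; the function name `channel_attention_fuse` and the weight shapes are hypothetical.

```python
import numpy as np

def channel_attention_fuse(feat_a, feat_b, w1, w2):
    """Fuse two feature maps with channel-wise recalibration.

    A minimal squeeze-and-excitation-style sketch, assuming:
      feat_a, feat_b: (C, N) features from two branches
                      (e.g. two resolutions or two viewpoints).
      w1: (C_r, 2C) and w2: (2C, C_r) bottleneck MLP weights,
          where C_r = 2C // r for some reduction ratio r.
    """
    x = np.concatenate([feat_a, feat_b], axis=0)   # stack branches -> (2C, N)
    squeeze = x.mean(axis=1)                       # global average pool -> (2C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)         # ReLU bottleneck -> (C_r,)
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gates in (0, 1) -> (2C,)
    return x * scale[:, None]                      # per-channel reweighted features
```

The gating lets the network emphasize whichever branch (resolution or viewpoint) carries more discriminative information for the current input, which matches the adaptive fusion role the abstract attributes to the proposed attention block.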
Pages: 129749 - 129762
Page count: 14