An end-to-end gait recognition system for covariate conditions using custom kernel CNN

Cited by: 1
Authors
Ali, Babar [1 ]
Bukhari, Maryam [1 ]
Maqsood, Muazzam [1 ]
Moon, Jihoon [2 ]
Hwang, Eenjun [3 ]
Rho, Seungmin [4 ]
Affiliations
[1] COMSATS Univ Islamabad, Dept Comp Sci, Attock Campus, Islamabad, Pakistan
[2] Soonchunhyang Univ, Dept AI & Big Data, Asan 31538, South Korea
[3] Korea Univ, Sch Elect Engn, Seoul 02841, South Korea
[4] Chung Ang Univ, Dept Ind Secur, Seoul 06974, South Korea
Keywords
Gait recognition; Covariate factors; Deep learning; Convolutional neural networks; Custom kernel CNN; NEURAL-NETWORKS; IDENTIFICATION;
DOI
10.1016/j.heliyon.2024.e32934
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences];
Subject Classification Codes
07 ; 0710 ; 09 ;
Abstract
Gait recognition is the identification of individuals based on how they walk. It can identify a person of interest without their cooperation, making it well suited for surveillance at a distance. Computer-aided silhouette-based gait analysis is frequently employed owing to its efficiency and effectiveness. However, covariate conditions strongly affect recognition because they conceal essential features that help identify individuals from their walking style. To address this issue, we propose a novel deep-learning framework that handles covariates in gait by proposing the regions subject to covariate conditions; features extracted from those regions are neglected so that the model's performance remains effective with custom kernels. The proposed technique separates static and dynamic regions of interest, where the static regions contain the covariates, and features are then learnt from the dynamic regions unaffected by covariates to recognize individuals effectively. Features are extracted using three customized kernels, and the results are concatenated to produce a fused feature map. A CNN then learns and extracts features from the proposed regions to recognize an individual. The suggested approach is an end-to-end system that eliminates the need for manual region proposal and feature extraction, improving gait-based identification in real-world scenarios. Experiments were performed on the publicly available CASIA-A and CASIA-C datasets. The findings indicate that subjects carrying bags yielded 90% accuracy and subjects wearing coats yielded 58% accuracy. Likewise, recognizing individuals at different walking speeds also produced excellent results, with an accuracy of 94% for fast and 96% for slow walking patterns, an improvement over previous deep learning methods.
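The abstract describes three hand-crafted (custom) kernels applied to silhouette regions unaffected by covariates, with the branch outputs concatenated into a fused feature map that a CNN then classifies. The following is a minimal PyTorch sketch of that idea only; the kernel values (edge-like filters and a local average), the layer sizes, and the input shape are illustrative assumptions and are not specified in the abstract.

```python
# Minimal sketch of a custom-kernel CNN for silhouette-based gait recognition.
# Assumptions (not from the paper): kernel values, backbone topology, input size.
import torch
import torch.nn as nn


class CustomKernelBranch(nn.Module):
    """One convolution branch whose weights are a fixed, hand-crafted kernel."""

    def __init__(self, kernel: torch.Tensor):
        super().__init__()
        k = kernel.shape[-1]
        self.conv = nn.Conv2d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        with torch.no_grad():
            self.conv.weight.copy_(kernel.view(1, 1, k, k))
        self.conv.weight.requires_grad_(False)  # keep the custom kernel fixed

    def forward(self, x):
        return self.conv(x)


class GaitCustomKernelCNN(nn.Module):
    """Three custom-kernel branches -> concatenated (fused) feature map -> CNN."""

    def __init__(self, num_subjects: int):
        super().__init__()
        # Hypothetical hand-crafted kernels used as placeholders.
        k1 = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])  # vertical edges
        k2 = k1.t().contiguous()                                          # horizontal edges
        k3 = torch.full((3, 3), 1.0 / 9.0)                                # local average
        self.branches = nn.ModuleList(CustomKernelBranch(k) for k in (k1, k2, k3))
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(32 * 4 * 4, num_subjects),
        )

    def forward(self, silhouette):
        # silhouette: (B, 1, H, W) with covariate-affected (static) regions masked out
        fused = torch.cat([b(silhouette) for b in self.branches], dim=1)
        return self.backbone(fused)


# Usage on a dummy batch of binary silhouettes.
model = GaitCustomKernelCNN(num_subjects=20)
logits = model(torch.rand(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 20])
```

Freezing the custom kernels and concatenating their outputs channel-wise is one straightforward way to realize a "fused feature map" before the learnable CNN layers; the paper's actual design may differ.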
Pages: 13
Related Papers
50 records in total
  • [41] End-to-End Dynamic Gesture Recognition Using MmWave Radar
    Ali, Anum
    Parida, Priyabrata
    Va, Vutha
    Ni, Saifeng
    Nguyen, Khuong Nhat
    Ng, Boon Loong
    Zhang, Jianzhong Charlie
    IEEE ACCESS, 2022, 10 : 88692 - 88706
  • [42] End-to-End Spontaneous Speech Recognition Using Hesitation Labeling
    Horii, Koharu
    Fukuda, Meiko
    Ohta, Kengo
    Nishimura, Ryota
    Ogawa, Atsunori
    Kitaoka, Norihide
    2021 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2021, : 1077 - 1081
  • [43] Low Latency Speech Recognition using End-to-End Prefetching
    Chang, Shuo-Yiin
    Li, Bo
    Rybach, David
    He, Yanzhang
    Li, Wei
    Sainath, Tara
    Strohman, Trevor
    INTERSPEECH 2020, 2020, : 1962 - 1966
  • [44] End-to-End Spontaneous Speech Recognition Using Disfluency Labeling
    Horii, Koharu
    Fukuda, Meiko
    Ohta, Kengo
    Nishimura, Ryota
    Ogawa, Atsunori
    Kitaoka, Norihide
    INTERSPEECH 2022, 2022, : 4108 - 4112
  • [45] End-to-end Accented Speech Recognition
    Viglino, Thibault
    Motlicek, Petr
    Cernak, Milos
    INTERSPEECH 2019, 2019, : 2140 - 2144
  • [46] Multichannel End-to-end Speech Recognition
    Ochiai, Tsubasa
    Watanabe, Shinji
    Hori, Takaaki
    Hershey, John R.
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 70, 2017, 70
  • [47] END-TO-END AUDIOVISUAL SPEECH RECOGNITION
    Petridis, Stavros
    Stafylakis, Themos
    Ma, Pingchuan
    Cai, Feipeng
    Tzimiropoulos, Georgios
    Pantic, Maja
    2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 6548 - 6552
  • [48] End-to-End Speech Recognition in Russian
    Markovnikov, Nikita
    Kipyatkova, Irina
    Lyakso, Elena
    SPEECH AND COMPUTER (SPECOM 2018), 2018, 11096 : 377 - 386
  • [49] END-TO-END MULTIMODAL SPEECH RECOGNITION
    Palaskar, Shruti
    Sanabria, Ramon
    Metze, Florian
    2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 5774 - 5778
  • [50] Overview of end-to-end speech recognition
    Wang, Song
    Li, Guanyu
    2018 INTERNATIONAL SYMPOSIUM ON POWER ELECTRONICS AND CONTROL ENGINEERING (ISPECE 2018), 2019, 1187