Leveraging spatio-temporal features using graph neural networks for human activity recognition

Cited by: 4
Authors:
Raj, M. S. Subodh [1 ]
George, Sudhish N. [1 ]
Raja, Kiran [2 ]
Affiliations:
[1] Natl Inst Technol Calicut, Dept Elect & Commun Engn, Calicut, Kerala, India
[2] Norwegian Univ Sci & Technol, Dept Comp Sci, Gjovik, Norway
Keywords:
Covariance descriptor; Graph neural network; Human activity; Subspace clustering
DOI:
10.1016/j.patcog.2024.110301
CLC number:
TP18 [Artificial intelligence theory]
Subject classification codes:
081104 ; 0812 ; 0835 ; 1405 ;
Abstract:
Unsupervised human activity recognition (HAR) algorithms working on motion capture (mocap) data often use spatial information while neglecting the activity-specific information contained in the temporal sequences. In this work, we propose a new unsupervised algorithm for HAR from mocap data that leverages both the spatial and the temporal information embedded in activity sequences. To this end, we employ a shallow graph neural network (GNN) comprising a graph convolutional network and a gated recurrent unit to aggregate the spatial and temporal features of the mocap sequences, respectively. Moreover, we encode the transformations of the human body through log-regularized kernel covariance descriptors linked to the trajectory movement maps of mocap frames. These descriptors are then fused with the GNN features for downstream activity recognition tasks. Finally, HAR is performed by a new unsupervised algorithm using a neighborhood Laplacian regularizer and a normalized dictionary learning approach. The generalizability of the proposed model is validated by training the GNN on one public dataset and testing it on the remaining datasets. The performance of the proposed model is evaluated on six publicly available human mocap datasets. Compared to existing approaches, the proposed model consistently improves activity recognition by 12%-30% across the different datasets.
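The pipeline described above can be illustrated with a simplified numpy sketch. This is not the authors' implementation: the function names are hypothetical, the plain log-covariance below stands in for the paper's log-regularized *kernel* covariance descriptor, and a mean over time stands in for the gated recurrent unit, so only the overall spatial-aggregation + temporal-pooling + descriptor-fusion structure is shown.

```python
import numpy as np

def log_cov_descriptor(frames, eps=1e-3):
    """Simplified log-regularized covariance descriptor (sketch).

    frames: (T, d) array of per-frame features. Returns the matrix
    logarithm of the eps-regularized covariance, a symmetric (d, d)
    matrix (the log-Euclidean mapping of an SPD matrix).
    """
    X = frames - frames.mean(axis=0, keepdims=True)  # center the features
    C = X.T @ X / max(len(frames) - 1, 1)            # sample covariance
    C += eps * np.eye(C.shape[0])                    # regularize -> SPD
    w, V = np.linalg.eigh(C)                         # eigendecomposition
    return (V * np.log(w)) @ V.T                     # V diag(log w) V^T

def gcn_layer(H, A_hat, W):
    """One Kipf-style graph-convolution step: aggregate joint features
    over the normalized skeleton adjacency, then linear map + ReLU."""
    return np.maximum(A_hat @ H @ W, 0.0)

# Toy end-to-end pass on a synthetic mocap-like sequence.
rng = np.random.default_rng(0)
T, n, f = 40, 5, 3                       # frames, joints, features/joint
seq = rng.normal(size=(T, n, f))

# Chain "skeleton" with self-loops, symmetrically normalized.
A = np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))

W = rng.normal(size=(f, 4))              # learnable weights (random here)
spatial = np.stack([gcn_layer(seq[t], A_hat, W) for t in range(T)])
temporal = spatial.mean(axis=0).ravel()  # GRU stand-in: temporal pooling

# Fuse GNN features with the vectorized covariance descriptor.
desc = log_cov_descriptor(seq.reshape(T, n * f))
fused = np.concatenate([temporal, desc[np.triu_indices(n * f)]])
```

The fused vector would then feed the downstream unsupervised clustering stage (neighborhood Laplacian regularization and dictionary learning), which is beyond the scope of this sketch.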
Pages: 15
Related papers (50 records in total):
  • [1] Efficient human activity recognition with spatio-temporal spiking neural networks
    Li, Yuhang
    Yin, Ruokai
    Kim, Youngeun
    Panda, Priyadarshini
    FRONTIERS IN NEUROSCIENCE, 2023, 17
  • [2] Human Action Recognition by Learning Spatio-Temporal Features With Deep Neural Networks
    Wang, Lei
    Xu, Yangyang
    Cheng, Jun
    Xia, Haiying
    Yin, Jianqin
    Wu, Jiaji
    IEEE ACCESS, 2018, 6 : 17913 - 17922
  • [3] Abnormal Activity Recognition Using Spatio-Temporal Features
    Chathuramali, K. G. Manosha
    Ramasinghe, Sameera
    Rodrigo, Ranga
    2014 7TH INTERNATIONAL CONFERENCE ON INFORMATION AND AUTOMATION FOR SUSTAINABILITY (ICIAFS), 2014,
  • [4] Graph-based approach for human action recognition using spatio-temporal features
    Ben Aoun, Najib
    Mejdoub, Mahmoud
    Ben Amar, Chokri
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2014, 25 (02) : 329 - 338
  • [5] Explainable Spatio-Temporal Graph Neural Networks
    Tang, Jiabin
    Xia, Lianghao
    Huang, Chao
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 2432 - 2441
  • [6] Spatio-temporal hand gesture recognition using neural networks
    Lin, DT
    IEEE WORLD CONGRESS ON COMPUTATIONAL INTELLIGENCE, 1998, : 1794 - 1798
  • [7] Human Interaction Recognition Using Improved Spatio-Temporal Features
    Sivarathinabala, M.
    Abirami, S.
    PROCEEDINGS OF 3RD INTERNATIONAL CONFERENCE ON ADVANCED COMPUTING, NETWORKING AND INFORMATICS (ICACNI 2015), VOL 1, 2016, 43 : 191 - 199
  • [8] Skeleton-based action recognition using spatio-temporal features with convolutional neural networks
    Rostami, Zahra
    Afrasiabi, Mahlagha
    Khotanlou, Hassan
    2017 IEEE 4TH INTERNATIONAL CONFERENCE ON KNOWLEDGE-BASED ENGINEERING AND INNOVATION (KBEI), 2017, : 583 - 587
  • [9] Misbehavior detection with spatio-temporal graph neural networks
    Yuce, Mehmet Fatih
    Erturk, Mehmet Ali
    Aydin, Muhammed Ali
    COMPUTERS & ELECTRICAL ENGINEERING, 2024, 116
  • [10] Traffic Forecasting with Spatio-Temporal Graph Neural Networks
    Shah, Shehal
    Doshi, Prince
    Mangle, Shlok
    Tawde, Prachi
    Sawant, Vinaya
    ARTIFICIAL INTELLIGENCE AND KNOWLEDGE PROCESSING, AIKP 2024, 2025, 2228 : 183 - 197