Hybrid features for skeleton-based action recognition based on network fusion

Cited by: 4
Authors
Chen, Zhangmeng [1 ,2 ]
Pan, Junjun [1 ,2 ]
Yang, Xiaosong [3 ]
Qin, Hong [4 ]
Affiliations
[1] Beihang Univ, State Key Lab Virtual Real Technol & Syst, Beijing, Peoples R China
[2] Peng Cheng Lab, Shenzhen, Peoples R China
[3] Bournemouth Univ, Fac Media & Commun, Poole, Dorset, England
[4] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation; US National Science Foundation; National Key R&D Program of China;
Keywords
action recognition; CNN; human skeleton; hybrid features; LSTM; multistream neural network;
DOI
10.1002/cav.1952
Chinese Library Classification (CLC)
TP31 [Computer Software];
Discipline classification codes
081202 ; 0835 ;
Abstract
In recent years, skeleton-based human action recognition has attracted significant attention from researchers and practitioners in graphics, vision, animation, and virtual environments. The most fundamental issue is how to learn an effective and accurate representation from spatiotemporal action sequences, and this article addresses that challenge. In particular, we design a novel hybrid feature extraction method based on the construction of multistream networks and their organic fusion. First, we train a convolutional neural network (CNN) model to learn CNN-based features, with the raw skeleton coordinates and their temporal differences serving as input signals; an attention mechanism is injected into the CNN model to assign larger weights to the more informative parts of the signal. Then, we employ a long short-term memory (LSTM) network to capture long-term temporal features from the action sequences. Finally, we generate hybrid features by fusing the CNN and LSTM streams, and we classify action types from these hybrid features. Extensive experiments on several large-scale, publicly available databases yield promising results that demonstrate the efficacy and effectiveness of the proposed framework.
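The record does not include code, but the pipeline described in the abstract can be summarized in a short sketch. Below is a minimal, hypothetical PyTorch implementation of the two-stream idea: a CNN stream over raw coordinates stacked with their frame-to-frame differences, a squeeze-and-excitation-style channel attention (one common way to realize the attention the abstract mentions), an LSTM stream over the raw sequence, and concatenation-based fusion. All layer sizes, the attention design, and the fusion scheme are assumptions for illustration, not the authors' exact architecture.

# Hypothetical sketch of the two-stream network described in the abstract.
# Layer sizes, the channel-attention design, and concatenation-based fusion
# are assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style attention that re-weights CNN channels."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # squeeze: global average pool
            nn.Conv2d(channels, channels // reduction, 1), # excitation bottleneck
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                  # per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.fc(x)                              # emphasize informative channels

class HybridActionNet(nn.Module):
    """CNN stream (coords + temporal differences) fused with an LSTM stream."""
    def __init__(self, num_joints=25, coord_dim=3, num_classes=60, hidden=128):
        super().__init__()
        # CNN stream: the input is treated as a 2-channel "image" of shape
        # (2, T, num_joints * coord_dim): raw coordinates and their
        # temporal differences stacked along the channel axis.
        self.cnn = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(inplace=True),
            ChannelAttention(32),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            ChannelAttention(64),
            nn.AdaptiveAvgPool2d(1),
        )
        # LSTM stream: consumes the flattened joint coordinates per frame.
        self.lstm = nn.LSTM(num_joints * coord_dim, hidden, batch_first=True)
        # Fusion + classifier over the concatenated hybrid feature.
        self.classifier = nn.Linear(64 + hidden, num_classes)

    def forward(self, x):
        # x: (batch, T, num_joints * coord_dim) raw skeleton sequence
        diff = x[:, 1:] - x[:, :-1]                        # temporal differences
        diff = torch.cat([torch.zeros_like(x[:, :1]), diff], dim=1)
        cnn_in = torch.stack([x, diff], dim=1)             # (batch, 2, T, J*C)
        cnn_feat = self.cnn(cnn_in).flatten(1)             # (batch, 64)
        _, (h, _) = self.lstm(x)                           # final hidden state
        lstm_feat = h[-1]                                  # (batch, hidden)
        hybrid = torch.cat([cnn_feat, lstm_feat], dim=1)   # fused hybrid feature
        return self.classifier(hybrid)

# Example: a batch of 4 sequences, 30 frames, 25 joints with 3D coordinates,
# classified into an NTU RGB+D-style 60-class label space.
model = HybridActionNet()
logits = model(torch.randn(4, 30, 25 * 3))
print(logits.shape)  # torch.Size([4, 60])

Concatenation is the simplest fusion choice; the "organic fusion" in the paper may combine the streams differently (for example, by weighted score fusion), which this sketch does not attempt to reproduce.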
Pages: 11
Related papers
50 records in total
  • [41] Improved semantic-guided network for skeleton-based action recognition
    Mansouri, Amine
    Bakir, Toufik
    Elzaar, Abdellah
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2024, 104
  • [42] HMANet: Hyperbolic Manifold Aware Network for Skeleton-Based Action Recognition
    Chen, Jinghong
    Zhao, Chong
    Wang, Qicong
    Meng, Hongying
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2023, 15 (02) : 602 - 614
  • [43] Enhanced decoupling graph convolution network for skeleton-based action recognition
    Gu, Yue
    Yu, Qiang
    Xue, Wanli
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (29) : 73289 - 73304
  • [44] Temporal Refinement Graph Convolutional Network for Skeleton-Based Action Recognition
    Zhuang, T.
    Qin, Z.
    Ding, Y.
    Deng, F.
    Chen, L.
    Qin, Z.
    Choo, K.-K. R.
    IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2024, 5 (04) : 1586 - 1598
  • [45] EchoGCN: An Echo Graph Convolutional Network for Skeleton-Based Action Recognition
    Qian, Weiwen
    Huang, Qian
    Li, Chang
    Chen, Zhongqi
    Mao, Yingchi
    LECTURE NOTES IN COMPUTER SCIENCE : 245 - 261
  • [46] SpatioTemporal focus for skeleton-based action recognition
    Wu, Liyu
    Zhang, Can
    Zou, Yuexian
    PATTERN RECOGNITION, 2023, 136
  • [47] Pyramidal Graph Convolutional Network for Skeleton-Based Human Action Recognition
    Li, Fanjia
    Zhu, Aichun
    Liu, Zhongyu
    Huo, Yu
    Xu, Yonggang
    Hua, Gang
    IEEE SENSORS JOURNAL, 2021, 21 (14) : 16183 - 16191
  • [48] An efficient self-attention network for skeleton-based action recognition
    Qin, Xiaofei
    Cai, Rui
    Yu, Jiabin
    He, Changxiang
    Zhang, Xuedian
    SCIENTIFIC REPORTS, 2022, 12 (01)
  • [50] Spatiotemporal Graph Autoencoder Network for Skeleton-Based Human Action Recognition
    Abduljalil, Hosam
    Elhayek, Ahmed
    Marish Ali, Abdullah
    Alsolami, Fawaz
    AI, 2024, 5 (03) : 1695 - 1708