An Attention-Aware Model for Human Action Recognition on Tree-Based Skeleton Sequences

Cited by: 1
Authors
Ding, Runwei [1 ]
Liu, Chang [1 ]
Liu, Hong [1 ,2 ]
Affiliations
[1] Peking Univ, Shenzhen Grad Sch, Shenzhen, Peoples R China
[2] Peking Univ, Key Lab Machine Percept, Beijing, Peoples R China
Source
SOCIAL ROBOTICS, ICSR 2018 | 2018, Vol. 11357
Funding
National Natural Science Foundation of China;
Keywords
Human action recognition; Skeleton; Attention-aware model; Tri-directional Tree Traversal Map (TTTM);
DOI
10.1007/978-3-030-05204-1_56
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Skeleton-based human action recognition (HAR) has attracted considerable research attention because of its robustness to variations in location and appearance. However, most existing methods treat the whole skeleton as a fixed pattern and do not consider that different skeleton joints contribute differently to recognizing an action. In this paper, a novel CNN-based attention-aware network is proposed. First, to describe the semantic meaning of skeletons and learn the discriminative joints over time, an attention-generating network named Global Attention Network (GAN) is proposed to produce attention masks. Then, to encode the spatial structure of skeleton sequences, we design a Tri-directional Tree Traversal Map (TTTM), a tree-based traversal rule that represents the skeleton structure, as a convolution unit of the main network. Finally, the GAN and the main network are cascaded into a single network that is trained end-to-end. Experiments show that the TTTM and GAN complement each other, and the whole network improves clearly over the state of the art, e.g., it achieves classification accuracies of 83.6% and 89.5% on the NTU-RGBD cross-view (CV) and cross-subject (CS) benchmarks, outperforming the compared methods.
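The structural idea behind the TTTM, serializing the skeleton's kinematic tree into a joint visiting order so that neighboring rows of the resulting map correspond to physically connected joints, can be illustrated with a short sketch. The following Python snippet is a minimal illustration of a tree-traversal map, not the paper's exact TTTM construction: the toy skeleton tree, joint indices, and function names are assumptions made for this example.

```python
import numpy as np

# Hypothetical skeleton tree: joint index -> list of child joint indices.
# A real NTU-RGBD skeleton has 25 joints; this toy tree is for illustration only.
SKELETON_TREE = {
    0: [1, 4, 7],   # spine -> neck, left hip, right hip
    1: [2, 3],      # neck -> left shoulder, right shoulder
    2: [], 3: [],
    4: [5], 5: [6], 6: [],
    7: [8], 8: [9], 9: [],
}

def tree_traversal_order(tree, root=0):
    """Depth-first traversal that revisits a joint when backtracking,
    so the visiting order preserves parent-child adjacency both ways."""
    order = []
    def visit(j):
        order.append(j)
        for child in tree[j]:
            visit(child)
            order.append(j)  # revisit the parent after each subtree
    visit(root)
    return order

def traversal_map(frames, order):
    """Stack traversal-ordered 3D joint coordinates over time into a
    2D map (joints-in-order x frames x 3) that a CNN can consume."""
    # frames: array of shape (T, num_joints, 3) holding 3D joint positions
    return frames[:, order, :].transpose(1, 0, 2)

if __name__ == "__main__":
    T, J = 16, 10
    frames = np.random.rand(T, J, 3).astype(np.float32)
    order = tree_traversal_order(SKELETON_TREE)
    m = traversal_map(frames, order)
    print(order)    # e.g. [0, 1, 2, 1, 3, 1, 0, 4, 5, 6, 5, 4, 0, ...]
    print(m.shape)  # (len(order), T, 3)
```

In the paper's design, the attention masks produced by the GAN would then re-weight such a map before the main network's convolutions consume it; that fusion step is specific to the paper and omitted from this sketch.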
Pages: 569-579
Number of pages: 11
Related Papers
50 records in total
  • [21] Attention-Aware Age-Agnostic Visual Place Recognition
    Wang, Ziqi
    Li, Jiahui
    Khademi, Seyran
    van Gemert, Jan
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019: 1437-1446
  • [22] ARFace: Attention-Aware and Regularization for Face Recognition With Reinforcement Learning
    Zhang, Liping
    Sun, Linjun
    Yu, Lina
    Dong, Xiaoli
    Chen, Jinchao
    Cai, Weiwei
    Wang, Chen
    Ning, Xin
IEEE TRANSACTIONS ON BIOMETRICS, BEHAVIOR, AND IDENTITY SCIENCE, 2022, 4(1): 30-42
  • [23] A Deep Attention Model for Action Recognition from Skeleton Data
    Gao, Yanbo
    Li, Chuankun
    Li, Shuai
    Cai, Xun
    Ye, Mao
    Yuan, Hui
APPLIED SCIENCES-BASEL, 2022, 12(4)
  • [24] Action Recognition Based on Multi-Level Topological Channel Attention of Human Skeleton
    Hu, Kai
    Shen, Chaowen
    Wang, Tianyan
    Shen, Shuai
    Cai, Chengxue
    Huang, Huaming
    Xia, Min
SENSORS, 2023, 23(24)
  • [25] Human Action Recognition Based on Skeleton Features
    Gao, Yi
    Wu, Haitao
    Wu, Xinmeng
    Li, Zilin
    Zhao, Xiaofan
COMPUTER SCIENCE AND INFORMATION SYSTEMS, 2023, 20(1): 537-550
  • [26] Human action recognition based on skeleton splitting
    Yoon, Sang Min
    Kuijper, Arjan
EXPERT SYSTEMS WITH APPLICATIONS, 2013, 40(17): 6848-6855
  • [27] Insight on Attention Modules for Skeleton-Based Action Recognition
    Jiang, Quanyan
    Wu, Xiaojun
    Kittler, Josef
PATTERN RECOGNITION AND COMPUTER VISION, PT I, 2021, 13019: 242-255
  • [28] Memory Attention Networks for Skeleton-based Action Recognition
    Xie, Chunyu
    Li, Ce
    Zhang, Baochang
    Chen, Chen
    Han, Jungong
    Liu, Jianzhuang
PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018: 1639-1645
  • [29] Human Skeleton Tree Recurrent Neural Network with Joint Relative Motion Feature for Skeleton Based Action Recognition
    Wei, Shenghua
    Song, Yonghong
    Zhang, Yuanlin
2017 24TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2017: 91-95
  • [30] Memory Attention Networks for Skeleton-Based Action Recognition
    Li, Ce
    Xie, Chunyu
    Zhang, Baochang
    Han, Jungong
    Zhen, Xiantong
    Chen, Jie
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33(9): 4800-4814