Spatio-temporal segments attention for skeleton-based action recognition

Cited by: 19
Authors
Qiu, Helei [1 ]
Hou, Biao [1 ]
Ren, Bo [1 ]
Zhang, Xiaohua [1 ]
Affiliations
[1] Xidian Univ, Sch Artificial Intelligence, Xian 710071, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Action recognition; Skeleton; Self-attention; Spatio-temporal joints; Feature aggregation; NETWORKS;
DOI
10.1016/j.neucom.2022.10.084
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Capturing the dependencies between joints is critical in skeleton-based action recognition. However, existing methods cannot effectively capture the correlations of different joints across frames, even though such correlations are highly informative: different body parts (such as the arms and legs in a "long jump") move together across adjacent frames. To address this issue, a novel spatio-temporal segments attention method is proposed. The skeleton sequence is divided into several segments, and the consecutive frames contained in each segment are encoded. An intra-segment self-attention module is then proposed to capture the relationships among different joints in consecutive frames. In addition, an inter-segment action attention module is introduced to capture the relationships between segments, enhancing the ability to distinguish similar actions. Compared with state-of-the-art methods, our method achieves better performance on two large-scale datasets. (c) 2022 Elsevier B.V. All rights reserved.
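The abstract describes a two-level attention design: intra-segment self-attention over all joints of a few consecutive frames, followed by inter-segment attention over segment-level descriptors. Below is a minimal, hypothetical PyTorch sketch of that idea only; the module names, tensor shapes, mean pooling, and hyperparameters are assumptions made for illustration and are not the authors' implementation.

import torch
import torch.nn as nn

class IntraSegmentAttention(nn.Module):
    """Self-attention over all joints within one temporal segment (sketch)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch * num_segments, seg_len * num_joints, dim)
        out, _ = self.attn(x, x, x)
        return self.norm(x + out)  # residual connection

class InterSegmentAttention(nn.Module):
    """Attention across segment descriptors to relate segments to each other (sketch)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        # s: (batch, num_segments, dim) -- one pooled descriptor per segment
        out, _ = self.attn(s, s, s)
        return self.norm(s + out)

class STSegmentsSketch(nn.Module):
    """Toy pipeline: embed joints -> intra-segment attention -> pool -> inter-segment attention."""
    def __init__(self, in_ch: int = 3, dim: int = 64, seg_len: int = 4, num_classes: int = 60):
        super().__init__()
        self.seg_len = seg_len
        self.embed = nn.Linear(in_ch, dim)     # per-joint coordinate embedding (assumed)
        self.intra = IntraSegmentAttention(dim)
        self.inter = InterSegmentAttention(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, joints, in_ch); frames assumed divisible by seg_len
        b, t, v, _ = x.shape
        s = t // self.seg_len
        x = self.embed(x)                            # (b, t, v, dim)
        x = x.view(b * s, self.seg_len * v, -1)      # flatten each segment into joint tokens
        x = self.intra(x)                            # joint relations within a segment
        seg_feat = x.mean(dim=1).view(b, s, -1)      # pool to one descriptor per segment
        seg_feat = self.inter(seg_feat)              # relations between segments
        return self.head(seg_feat.mean(dim=1))       # sequence-level class logits

if __name__ == "__main__":
    model = STSegmentsSketch()
    clip = torch.randn(2, 16, 25, 3)   # e.g. 16 frames, 25 NTU-style joints, 3-D coordinates
    print(model(clip).shape)           # torch.Size([2, 60])

Flattening each segment into seg_len * num_joints tokens is what lets attention relate a joint in one frame to a different joint in a neighbouring frame, which is the cross-frame correlation the abstract emphasises.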
Pages: 30-38
Number of pages: 9
Related Papers
50 records in total
  • [31] Spatio-Temporal Motion Topology Aware Graph Convolutional Network for Skeleton-Based Action Recognition
    Ma, Ji
    Liu, Wei
    Ding, Linlin
    Luo, Hao
    WEB INFORMATION SYSTEMS AND APPLICATIONS, WISA 2024, 2024, 14883 : 549 - 560
  • [32] Position-aware spatio-temporal graph convolutional networks for skeleton-based action recognition
    Yang, Ping
    Wang, Qin
    Chen, Hao
    Wu, Zizhao
    IET COMPUTER VISION, 2023, 17 (07) : 844 - 854
  • [33] Efficient Spatio-Temporal Contrastive Learning for Skeleton-Based 3-D Action Recognition
    Gao, Xuehao
    Yang, Yang
    Zhang, Yimeng
    Li, Maosen
    Yu, Jin-Gang
    Du, Shaoyi
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 405 - 417
  • [34] Action Recognition With Spatio-Temporal Visual Attention on Skeleton Image Sequences
    Yang, Zhengyuan
    Li, Yuncheng
    Yang, Jianchao
    Luo, Jiebo
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2019, 29 (08) : 2405 - 2415
  • [35] Skeleton-based action recognition based on spatio-temporal adaptive graph convolutional neural-network
    Cao Y.
    Liu C.
    Huang Z.
    Sheng Y.
    Huazhong Keji Daxue Xuebao (Ziran Kexue Ban)/Journal of Huazhong University of Science and Technology (Natural Science Edition), 2020, 48 (11): : 5 - 10
  • [36] Spatio-Temporal Graph Convolution for Skeleton Based Action Recognition
    Li, Chaolong
    Cui, Zhen
    Zheng, Wenming
    Xu, Chunyan
    Yang, Jian
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 3482 - 3489
  • [37] Robust Skeleton-based Action Recognition through Hierarchical Aggregation of Local and Global Spatio-temporal Features
    Ren, J.
    Napoleon, R.
    Andre, B.
    Chris, S.
    Liu, M.
    Ma, J.
    2018 15TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION, ROBOTICS AND VISION (ICARCV), 2018, : 901 - 906
  • [38] Two-stream spatio-temporal GCN-transformer networks for skeleton-based action recognition
    Chen, Dong
    Chen, Mingdong
    Wu, Peisong
    Wu, Mengtao
    Zhang, Tao
    Li, Chuanqi
    SCIENTIFIC REPORTS, 2025, 15 (01):
  • [39] SPATIO-TEMPORAL MULTI-SCALE SOFT QUANTIZATION LEARNING FOR SKELETON-BASED HUMAN ACTION RECOGNITION
    Yang, Jianyu
    Zhu, Chen
    Yuan, Junsong
    2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2019, : 1078 - 1083
  • [40] Multi-scale spatio-temporal network for skeleton-based gait recognition
    He, Dongzhi
    Xue, Yongle
    Li, Yunyu
    Sun, Zhijie
    Xiao, Xingmei
    Wang, Jin
    AI COMMUNICATIONS, 2023, 36 (04) : 297 - 310