Spatio-temporal segments attention for skeleton-based action recognition

Cited by: 19
Authors
Qiu, Helei [1 ]
Hou, Biao [1 ]
Ren, Bo [1 ]
Zhang, Xiaohua [1 ]
Affiliations
[1] Xidian Univ, Sch Artificial Intelligence, Xian 710071, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Action recognition; Skeleton; Self-attention; Spatio-temporal joints; Feature aggregation; NETWORKS;
DOI
10.1016/j.neucom.2022.10.084
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Capturing the dependencies between joints is critical in skeleton-based action recognition. However, existing methods cannot effectively capture the correlations of different joints between frames, which is important because different body parts (such as the arms and legs in "long jump") move together across adjacent frames. Focusing on this issue, a novel spatio-temporal segments attention method is proposed. The skeleton sequence is divided into several segments, and the consecutive frames contained in each segment are encoded. An intra-segment self-attention module is then proposed to capture the relationships between different joints in consecutive frames. In addition, an inter-segment action attention module is introduced to capture the relationships between segments and thus enhance the ability to distinguish similar actions. Compared with state-of-the-art methods, our method achieves better performance on two large-scale datasets. (c) 2022 Elsevier B.V. All rights reserved.
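To make the two-stage design concrete, the following is a minimal PyTorch sketch of the pipeline the abstract describes: split the sequence into segments of consecutive frames, run self-attention over the joint tokens inside each segment, then relate pooled segment features with a second attention stage. All names, shapes, and hyperparameters here (IntraSegmentAttention, InterSegmentAttention, the segment length, the mean-pooling step) are illustrative assumptions, not the authors' actual implementation.

import torch
import torch.nn as nn

class IntraSegmentAttention(nn.Module):
    # Self-attention over all joint tokens of the consecutive frames in one
    # segment, so joints in *different* frames can attend to each other.
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):              # x: (B*S, L*J, C)
        out, _ = self.attn(x, x, x)
        return self.norm(x + out)      # residual + layer norm

class InterSegmentAttention(nn.Module):
    # Attention across pooled segment features to relate segments over time.
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, z):              # z: (B, S, C)
        out, _ = self.attn(z, z, z)
        return self.norm(z + out)

B, T, J, C = 2, 64, 25, 64             # batch, frames, joints, channels
L = 4                                  # frames per segment (assumed)
S = T // L                             # number of segments
x = torch.randn(B, T, J, C)            # stand-in for encoded joint features

# 1) divide the sequence into S segments of L consecutive frames each
seg = x.view(B, S, L, J, C).reshape(B * S, L * J, C)
# 2) intra-segment self-attention over the L*J joint tokens of each segment
seg = IntraSegmentAttention(C)(seg)
# 3) pool each segment to a single feature, then inter-segment attention
z = seg.view(B, S, L * J, C).mean(dim=2)
z = InterSegmentAttention(C)(z)
logits = nn.Linear(C, 60)(z.mean(dim=1))  # e.g., 60 action classes
print(logits.shape)                        # torch.Size([2, 60])

Treating every joint of every frame in a segment as one token is what lets the intra-segment stage model cross-frame joint correlations directly, rather than factoring attention into separate spatial and temporal passes.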
Pages: 30 - 38
Page count: 9
Related Papers
(50 items in total)
  • [21] Glimpse and Zoom: Spatio-Temporal Focused Dynamic Network for Skeleton-Based Action Recognition
    Zhao, Zhifu
    Chen, Ziwei
    Li, Jianan
    Wang, Xiaotian
    Xie, Xuemei
    Huang, Lei
    Zhang, Wanxin
    Shi, Guangming
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (07) : 5616 - 5629
  • [22] Skeleton-Based Action Recognition Using Spatio-Temporal LSTM Network with Trust Gates
    Liu, Jun
    Shahroudy, Amir
    Xu, Dong
    Kot, Alex C.
    Wang, Gang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2018, 40 (12) : 3007 - 3021
  • [23] Skeleton-based action recognition using spatio-temporal features with convolutional neural networks
    Rostami, Zahra
    Afrasiabi, Mahlagha
    Khotanlou, Hassan
    2017 IEEE 4TH INTERNATIONAL CONFERENCE ON KNOWLEDGE-BASED ENGINEERING AND INNOVATION (KBEI), 2017, : 583 - 587
  • [24] Skeleton-based Human Action Recognition Using Spatio-Temporal Geometry (ICCAS 2019)
    Ryu, Hanna
    Kim, Seong-heum
    Hwang, Youngbae
    2019 19TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS 2019), 2019, : 329 - 332
  • [25] Lightweight Multiscale Spatio-Temporal Graph Convolutional Network for Skeleton-Based Action Recognition
    Zheng, Zhiyun
    Yuan, Qilong
    Zhang, Huaizhu
    Wang, Yizhou
    Wang, Junfeng
    BIG DATA MINING AND ANALYTICS, 2025, 8 (02) : 310 - 325
  • [26] PROGRESSIVE SPATIO-TEMPORAL GRAPH CONVOLUTIONAL NETWORK FOR SKELETON-BASED HUMAN ACTION RECOGNITION
    Heidari, Negar
    Iosifidis, Alexandros
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 3220 - 3224
  • [27] SKELETON ACTION RECOGNITION BASED ON SPATIO-TEMPORAL FEATURES
    Huang, Qian
    Xie, Mengting
    Li, Xing
    Wang, Shuaichen
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 3284 - 3288
  • [28] Skeleton-based action recognition using sparse spatio-temporal GCN with edge effective resistance
    Ahmad, Tasweer
    Jin, Lianwen
    Lin, Luojun
    Tang, GuoZhi
    NEUROCOMPUTING, 2021, 423 : 389 - 398
  • [29] Learning Multi-Granular Spatio-Temporal Graph Network for Skeleton-based Action Recognition
    Chen, Tailin
    Zhou, Desen
    Wang, Jian
    Wang, Shidong
    Guan, Yu
    He, Xuming
    Ding, Errui
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 4334 - 4342
  • [30] Global Spatio-Temporal Deformable Network for Skeleton-Based Gesture Recognition
    Shi, D.
    Lin, H.
    Liu, Y.
    Zhang, X.
    Dianzi Keji Daxue Xuebao/Journal of the University of Electronic Science and Technology of China, 2024, 53 (01) : 60 - 66