PART AWARE GRAPH CONVOLUTION NETWORK WITH TEMPORAL ENHANCEMENT FOR SKELETON-BASED ACTION RECOGNITION

Cited by: 1
Authors
Huang, Qian [1 ,2 ,3 ]
Nie, Yunqing [1 ,2 ,3 ]
Li, Xing [1 ,2 ]
Yang, Tianjin [1 ,2 ]
Affiliations
[1] Hohai Univ, Minist Water Resources, Key Lab Water Big Data Technol, Nanjing, Peoples R China
[2] Hohai Univ, Sch Comp & Informat, Nanjing, Peoples R China
[3] Nanjing Huiying Elect Technol Corp, Nanjing, Peoples R China
Keywords
Human action recognition; Skeleton; Graph convolution network;
DOI
10.1109/ICIP49359.2023.10222714
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, skeleton-based human action recognition has attracted broad research interest, and methods based on graph convolution networks have demonstrated excellent performance. However, effectively extracting discriminative spatio-temporal information remains an essential problem. To address it, we propose a novel part-aware graph convolution network with temporal enhancement, which adaptively evaluates the activity level of each body part in an action sequence and enhances the extraction of temporal information. Since body parts move over a greater range than individual joints during an action sequence, we manually divide the body into five major parts and generate a skeleton sequence with different attention weights using a part-based attention module. A temporal enhancement module is then used to model actions of different durations. Experiments show that our method achieves state-of-the-art performance.
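The part-based attention idea from the abstract can be illustrated with a minimal numpy sketch. Note this is an assumption-laden illustration, not the paper's method: the five-part joint grouping below (NTU RGB+D-style 25 joints), the motion-magnitude activity score, and the softmax reweighting are all hypothetical stand-ins for the paper's learned attention module.

```python
import numpy as np

# Hypothetical 5-part partition of a 25-joint NTU-style skeleton.
# The indices are illustrative; the paper's exact grouping is not given here.
PARTS = {
    "torso":     [0, 1, 2, 3, 20],
    "left_arm":  [4, 5, 6, 7, 21, 22],
    "right_arm": [8, 9, 10, 11, 23, 24],
    "left_leg":  [12, 13, 14, 15],
    "right_leg": [16, 17, 18, 19],
}

def part_attention(seq):
    """Reweight a skeleton sequence by per-part activity.

    seq: (T, V, C) array of T frames, V joints, C coordinates.
    Activity of a part = mean magnitude of frame-to-frame joint motion;
    a softmax over parts gives attention weights that scale the joints.
    Returns the reweighted sequence and the per-part weights.
    """
    motion = np.linalg.norm(np.diff(seq, axis=0), axis=-1)   # (T-1, V)
    scores = np.array([motion[:, idx].mean() for idx in PARTS.values()])
    weights = np.exp(scores) / np.exp(scores).sum()          # softmax over parts
    out = seq.astype(float).copy()
    for w, idx in zip(weights, PARTS.values()):
        # Scale by w * 5 so uniform weights leave the sequence unchanged.
        out[:, idx, :] *= w * len(PARTS)
    return out, dict(zip(PARTS, weights))
```

In the actual network the weights would be produced by a trainable attention branch rather than a hand-crafted motion statistic, but the sketch shows the key design choice: attention is computed per body part, not per joint, because parts exhibit a larger and more discriminative range of motion.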
Pages: 3255 - 3259 (5 pages)