Semantics-Assisted Training Graph Convolution Network for Skeleton-Based Action Recognition

Cited by: 0
Authors
Hu, Huangshui [1 ]
Cao, Yu [1 ]
Fang, Yue [1 ]
Meng, Zhiqiang [1 ]
Affiliations
[1] College of Computer Science and Engineering, Changchun University of Technology, Changchun 130012, China
Keywords
Classification (of information) - Joints (anatomy) - Network coding - Network theory (graphs);
DOI
10.3390/s25061841
Abstract
Skeleton-based action recognition networks often focus on extracting features such as joints from samples while neglecting the semantic relationships inherent in actions, which also carry valuable information. To exploit this semantic information, this paper proposes a semantics-assisted training graph convolution network (SAT-GCN). The features output by the skeleton encoder are divided into four parts and contrasted with the text features generated by a text encoder, and the resulting contrastive loss guides the training of the overall network. This approach improves recognition accuracy while reducing the number of model parameters. In addition, angle features are incorporated into the skeleton model input to aid in distinguishing similar actions. Finally, a multi-feature skeleton encoder is designed to separately extract joint, bone, and angle features, which are then integrated through feature fusion. The fused features pass through three graph convolution blocks before being fed into fully connected layers for classification. Extensive experiments were conducted on three large-scale datasets, NTU RGB+D 60, NTU RGB+D 120, and NW-UCLA, to validate the performance of the proposed model. The results show that SAT-GCN outperforms existing methods in terms of both accuracy and number of parameters. © 2025 by the authors.
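To make the training scheme described in the abstract more concrete, the sketch below shows one plausible PyTorch formulation of the semantics-assisted contrastive objective: the skeleton-encoder output is split into four parts, each part is projected into the text-embedding space, and an InfoNCE-style cross-entropy aligns it with the text feature of the ground-truth class. The class name SemanticContrastiveLoss, the feature dimensions, the per-part linear projections, and the exact loss form are assumptions for illustration only; the paper's implementation may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticContrastiveLoss(nn.Module):
    """Contrast four skeleton-feature parts with class-level text embeddings (hypothetical sketch)."""

    def __init__(self, feat_dim=256, text_dim=512, temperature=0.07):
        super().__init__()
        self.temperature = temperature
        # One projection per skeleton-feature part into the text embedding space
        # (assumed design; the paper may align the two spaces differently).
        self.projections = nn.ModuleList(
            [nn.Linear(feat_dim, text_dim) for _ in range(4)]
        )

    def forward(self, skel_feat, text_feat, labels):
        # skel_feat: (N, 4 * feat_dim) skeleton-encoder output, split into four parts below
        # text_feat: (C, text_dim) text-encoder embeddings, one per action class
        # labels:    (N,) ground-truth class indices
        parts = torch.chunk(skel_feat, 4, dim=-1)
        text_feat = F.normalize(text_feat, dim=-1)
        loss = skel_feat.new_zeros(())
        for proj, part in zip(self.projections, parts):
            z = F.normalize(proj(part), dim=-1)               # (N, text_dim)
            logits = z @ text_feat.t() / self.temperature     # similarity to every class description
            loss = loss + F.cross_entropy(logits, labels)     # pull each part toward its class text
        return loss / 4.0


if __name__ == "__main__":
    # Toy shapes only; real inputs would come from the skeleton and text encoders.
    N, C, feat_dim, text_dim = 8, 60, 256, 512
    skel = torch.randn(N, 4 * feat_dim)
    text = torch.randn(C, text_dim)          # e.g. embeddings of the 60 action-class names
    y = torch.randint(0, C, (N,))
    criterion = SemanticContrastiveLoss(feat_dim, text_dim)
    print(criterion(skel, text, y))          # auxiliary loss, added to the classification loss during training

In this reading, the contrastive term acts as an auxiliary objective that regularizes the skeleton encoder with class semantics at training time, which is consistent with the abstract's claim of improved accuracy without extra inference-time parameters.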
Related Papers (50 in total)
  • [31] Glimpse and focus: Global and local-scale graph convolution network for skeleton-based action recognition
    Gao, Xuehao
    Du, Shaoyi
    Yang, Yang
    NEURAL NETWORKS, 2023, 167 : 551 - 558
  • [32] An Efficient Graph Convolution Network for Skeleton-Based Dynamic Hand Gesture Recognition
    Peng, Sheng-Hui
    Tsai, Pei-Hsuan
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2023, 15 (04) : 2179 - 2189
  • [33] Spatial adaptive graph convolutional network for skeleton-based action recognition
    Zhu, Qilin
    Deng, Hongmin
    APPLIED INTELLIGENCE, 2023, 53 (14) : 17796 - 17808
  • [34] Relation Selective Graph Convolutional Network for Skeleton-Based Action Recognition
    Yang, Wenjie
    Zhang, Jianlin
    Cai, Jingju
    Xu, Zhiyong
    SYMMETRY-BASEL, 2021, 13 (12):
  • [35] EARLY FUSION GRAPH CONVOLUTIONAL NETWORK FOR SKELETON-BASED ACTION RECOGNITION
    Zhao, Xiaoxue
    Liu, Cuiwei
    Shi, Xiangbin
    2021 IEEE 31ST INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2021,
  • [36] Selective directed graph convolutional network for skeleton-based action recognition
    Ke, Chengyuan
    Liu, Sheng
    Feng, Yuan
    Chen, Shengyong
    PATTERN RECOGNITION LETTERS, 2025, 190 : 141 - 146
  • [37] Hierarchical Aggregated Graph Neural Network for Skeleton-Based Action Recognition
    Geng, Pei
    Lu, Xuequan
    Li, Wanqing
    Lyu, Lei
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 11003 - 11017
  • [38] Scale Adaptive Graph Convolutional Network for Skeleton-Based Action Recognition
    Wang X.
    Zhong Y.
    Jin L.
    Xiao Y.
    Tianjin Daxue Xuebao (Ziran Kexue yu Gongcheng Jishu Ban)/Journal of Tianjin University Science and Technology, 2022, 55 (03): : 306 - 312
  • [39] Feature reconstruction graph convolutional network for skeleton-based action recognition
    Huang, Junhao
    Wang, Ziming
    Peng, Jian
    Huang, Feihu
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2023, 126
  • [40] Temporal Refinement Graph Convolutional Network for Skeleton-Based Action Recognition
    Zhuang T.
    Qin Z.
    Ding Y.
    Deng F.
    Chen L.
    Qin Z.
    Raymond Choo K.-K.
    IEEE Transactions on Artificial Intelligence, 2024, 5 (04): : 1586 - 1598