Enhanced Adjacency Matrix-Based Lightweight Graph Convolution Network for Action Recognition

Cited by: 5
Authors
Zhang, Daqing [1 ]
Deng, Hongmin [1 ]
Zhi, Yong [1 ]
Affiliations
[1] Sichuan Univ, Sch Elect & Informat Engn, Chengdu 610064, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
action recognition; skeleton data; CA-EAMGCN; feature selection; combinatorial attention; MOTION;
DOI
10.3390/s23146397
Chinese Library Classification
O65 [Analytical Chemistry];
Discipline Classification Codes
070302 ; 081704 ;
Abstract
Graph convolutional networks (GCNs), which extend convolutional neural networks (CNNs) to non-Euclidean structures, have been used to advance skeleton-based human action recognition research and have made substantial progress. However, some challenges remain in constructing recognition models based on GCNs. In this paper, we propose an enhanced adjacency matrix-based graph convolutional network with a combinatorial attention mechanism (CA-EAMGCN) for skeleton-based action recognition. Firstly, an enhanced adjacency matrix is constructed to expand the model's receptive field over global node features. Secondly, a feature selection fusion module (FSFM) is designed to provide an optimal fusion ratio for the model's multiple input features. Finally, a combinatorial attention mechanism is devised: our spatial-temporal (ST) attention module and limb attention module (LAM) are integrated into a multi-input branch and the mainstream network of the proposed model, respectively. Extensive experiments on three large-scale datasets, namely the NTU RGB+D 60, NTU RGB+D 120 and UAV-Human datasets, show that the proposed model satisfies both the light-weight and recognition-accuracy requirements, demonstrating the effectiveness of our method.
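The core idea of the enhanced adjacency matrix can be illustrated with a minimal sketch (this is not the authors' code; the function and variable names are hypothetical, and the trainable global mask stands in for whatever enhancement term CA-EAMGCN actually learns). A plain skeleton adjacency matrix only propagates features between physically connected joints; adding a dense learnable term lets every node aggregate information from every other node:

```python
# Minimal sketch of a graph convolution layer whose adjacency matrix is
# "enhanced" by a trainable global term, so each joint can aggregate
# features from all joints, not only its skeletal neighbours.
# All names here are illustrative, not from the paper's implementation.

def matmul(a, b):
    # naive matrix multiply for small demos
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def gcn_layer(x, adj, global_mask, weight):
    n = len(adj)
    # enhanced adjacency: skeleton links plus a (hypothetically learned)
    # fully connected mask that captures long-range joint dependencies
    enhanced = [[adj[i][j] + global_mask[i][j] for j in range(n)]
                for i in range(n)]
    # row-normalise so each node averages over its enhanced neighbourhood
    normed = [[v / (sum(row) or 1.0) for v in row] for row in enhanced]
    # standard GCN propagation: A_hat @ X @ W
    return matmul(matmul(normed, x), weight)

# toy example: 3 joints, 2-dim features, identity feature transform
adj = [[1, 1, 0], [1, 1, 1], [0, 1, 1]]   # skeleton links with self-loops
mask = [[0.1] * 3 for _ in range(3)]      # stand-in for a learned mask
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
w = [[1.0, 0.0], [0.0, 1.0]]
out = gcn_layer(x, adj, mask, w)          # 3 x 2 output features
```

Note that joint 0 and joint 2 are not linked in `adj`, yet joint 0's output still receives a small contribution from joint 2 through the mask, which is the receptive-field expansion the abstract describes.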
Pages: 20
Related Papers
50 records
  • [31] Multi-Scale Adaptive Graph Convolution Network for Skeleton-Based Action Recognition
    Hu, Huangshui
    Fang, Yue
    Han, Mei
    Qi, Xingshuo
    IEEE ACCESS, 2024, 12 : 16868 - 16880
  • [32] Auxiliary Task Graph Convolution Network: A Skeleton-Based Action Recognition for Practical Use
    Cho, Junsu
    Kim, Seungwon
    Oh, Chi-Min
    Park, Jeong-Min
    APPLIED SCIENCES-BASEL, 2025, 15 (01):
  • [33] Dual-Stream Structured Graph Convolution Network for Skeleton-Based Action Recognition
    Xu, Chunyan
    Liu, Rong
    Zhang, Tong
    Cui, Zhen
    Yang, Jian
    Hu, Chunlong
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2021, 17 (04)
  • [34] Dual-Excitation Spatial-Temporal Graph Convolution Network for Skeleton-Based Action Recognition
    Lu, Jian
    Huang, Tingting
    Zhao, Bo
    Chen, Xiaogai
    Zhou, Jian
    Zhang, Kaibing
    IEEE SENSORS JOURNAL, 2024, 24 (06) : 8184 - 8196
  • [35] Semantics-Assisted Training Graph Convolution Network for Skeleton-Based Action Recognition
    Hu, Huangshui
    Cao, Yu
    Fang, Yue
    Meng, Zhiqiang
    SENSORS, 2025, 25 (06)
  • [36] SelfGCN: Graph Convolution Network With Self-Attention for Skeleton-Based Action Recognition
    Wu, Zhize
    Sun, Pengpeng
    Chen, Xin
    Tang, Keke
    Xu, Tong
    Zou, Le
    Wang, Xiaofeng
    Tan, Ming
    Cheng, Fan
    Weise, Thomas
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 4391 - 4403
  • [37] VGAResNet: A Unified Visibility Graph Adjacency Matrix-Based Residual Network for Chronic Obstructive Pulmonary Disease Detection Using Lung Sounds
    Roy, Arka
    Thakur, Arushi
    Satija, Udit
    IEEE SENSORS LETTERS, 2023, 7 (11)
  • [38] An Action Recognition Method Based on Deformable Convolution Network
    Dong, Xu
    Tan, Li
    Zhou, Lina
    Song, Yanyan
    2020 4TH INTERNATIONAL CONFERENCE ON CONTROL ENGINEERING AND ARTIFICIAL INTELLIGENCE (CCEAI 2020), 2020, 1487
  • [39] A Lightweight Architecture Attentional Shift Graph Convolutional Network for Skeleton-Based Action Recognition
    Li, Xianshan
    Kang, Jingwen
    Yang, Yang
    Zhao, Fengda
    INTERNATIONAL JOURNAL OF COMPUTERS COMMUNICATIONS & CONTROL, 2023, 18 (03)
  • [40] Action recognition method based on multi-stream attention-enhanced recursive graph convolution
    Wang, Huaijun
    Bai, Bingqian
    Li, Junhuai
    Ke, Hui
    Xiang, Wei
    APPLIED INTELLIGENCE, 2024, 54 (20) : 10133 - 10147