Enhanced Adjacency Matrix-Based Lightweight Graph Convolution Network for Action Recognition

Cited by: 5
Authors
Zhang, Daqing [1 ]
Deng, Hongmin [1 ]
Zhi, Yong [1 ]
Affiliations
[1] Sichuan Univ, Sch Elect & Informat Engn, Chengdu 610064, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
action recognition; skeleton data; CA-EAMGCN; feature selection; combinatorial attention; MOTION;
DOI
10.3390/s23146397
CLC Number
O65 [Analytical Chemistry];
Discipline Codes
070302 ; 081704 ;
Abstract
Graph convolutional networks (GCNs), which extend convolutional neural networks (CNNs) to non-Euclidean structures, have been utilized to advance skeleton-based human action recognition research and have made substantial progress in doing so. However, some challenges remain in constructing recognition models based on GCNs. In this paper, we propose an enhanced adjacency matrix-based graph convolutional network with a combinatorial attention mechanism (CA-EAMGCN) for skeleton-based action recognition. Firstly, an enhanced adjacency matrix is constructed to expand the model's receptive field over global node features. Secondly, a feature selection fusion module (FSFM) is designed to provide an optimal fusion ratio for the model's multiple input features. Finally, a combinatorial attention mechanism is devised: our spatial-temporal (ST) attention module and limb attention module (LAM) are integrated into a multi-input branch and the mainstream network of the proposed model, respectively. Extensive experiments on three large-scale datasets, namely the NTU RGB+D 60, NTU RGB+D 120 and UAV-Human datasets, show that the proposed model satisfies the requirements of both light weight and recognition accuracy, demonstrating the effectiveness of our method.
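The abstract builds on the standard skeleton-GCN operation: per-joint features are aggregated through a normalized adjacency matrix of the skeleton graph and then linearly projected. The paper's enhanced adjacency matrix, FSFM, and attention modules are not specified in this record, so the following is only a minimal NumPy sketch of a vanilla graph convolution over a hypothetical 4-joint chain; the function names, shapes, and toy skeleton are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize an adjacency matrix with self-loops:
    A_hat = D^{-1/2} (A + I) D^{-1/2}, the usual GCN preprocessing."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(X, A_norm, W):
    """One graph convolution: aggregate neighbor features via the
    normalized adjacency, project with W, then apply ReLU."""
    return np.maximum(A_norm @ X @ W, 0.0)

# Toy skeleton: 4 joints in a chain (e.g. hip-knee-ankle-foot).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).standard_normal((4, 3))   # 3-D coords per joint
W = np.random.default_rng(1).standard_normal((3, 8))   # learnable projection

H = gcn_layer(X, normalize_adjacency(A), W)
print(H.shape)  # (4, 8): an 8-dim feature per joint
```

An "enhanced" adjacency matrix, as described in the abstract, would replace `A` with a learned or densified matrix so each joint aggregates information beyond its physical neighbors.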
Pages: 20
Related Papers (50 in total)
  • [1] Campus violence action recognition based on lightweight graph convolution network
    Li Qi
    Deng Yao-hui
    Wang Jiao
    CHINESE JOURNAL OF LIQUID CRYSTALS AND DISPLAYS, 2022, 37 (04) : 530 - 538
  • [2] Enhanced decoupling graph convolution network for skeleton-based action recognition
    Gu, Yue
    Yu, Qiang
    Xue, Wanli
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (29) : 73289 - 73304
  • [3] Spatial-Temporal Graph Convolutional Networks for Action Recognition with Adjacency Matrix Generation Network
    Niu, Junyu
    Yang, Rong
    Guan, Wang
    Xie, Zijie
    Proceedings - 2021 2nd International Conference on Electronics, Communications and Information Technology, CECIT 2021, 2021, : 1150 - 1154
  • [4] IMViT: Adjacency Matrix-Based Lightweight Plain Vision Transformer
    Chen, Qihao
    Yan, Yunfeng
    Wang, Xianbo
    Peng, Jishen
    IEEE ACCESS, 2025, 13 : 18535 - 18545
  • [5] An adaptive adjacency matrix-based graph convolutional recurrent network for air quality prediction
    Chen, Quanchao
    Ding, Ruyan
    Mo, Xinyue
    Li, Huan
    Xie, Linxuan
    Yang, Jiayu
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [6] Temporal-enhanced graph convolution network for skeleton-based action recognition
    Xie, Yulai
    Zhang, Yang
    Ren, Fang
    IET COMPUTER VISION, 2022, 16 (03) : 266 - 279
  • [7] PAIRWISE ADJACENCY MATRIX ON SPATIAL TEMPORAL GRAPH CONVOLUTION NETWORK FOR SKELETON-BASED TWO-PERSON INTERACTION RECOGNITION
    Yang, Chao-Lung
    Setyoko, Aji
    Tampubolon, Hendrik
    Hua, Kai-Lung
    2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 2166 - 2170
  • [8] Application of an Adaptive Adjacency Matrix-Based Graph Convolutional Neural Network in Taxi Demand Forecasting
    Xu, Jian-You
    Zhang, Shuo
    Wu, Chin-Chia
    Lin, Win-Chin
    Yuan, Qing-Li
    MATHEMATICS, 2022, 10 (19)
  • [9] Node Connection Strength Matrix-Based Graph Convolution Network for Traffic Flow Prediction
    Chen, Jian
    Wang, Wei
    Yu, Keping
    Hu, Xiping
    Cai, Ming
    Guizani, Mohsen
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (09) : 12063 - 12074
  • [10] Action recognition algorithm based on skeleton graph with multiple features and improved adjacency matrix
    Zhang, Shanqing
    Jiao, Shuheng
    Chen, Yujie
    Xu, Jiayi
    IET IMAGE PROCESSING, 2024, 18 (13) : 4250 - 4262