Exploring the Application of Neural Networks in the Learning and Optimization of Sports Skills Training

Times Cited: 0
Authors
Liu, Dazheng [1]
Affiliations
[1] Yangzhou Polytech Inst, Yangzhou 225127, Jiangsu, Peoples R China
Keywords
Deep neural network; action recognition; 2D pose prediction; pose estimation; sports skill training; attention mechanism; POSE ESTIMATION; RECOGNITION;
DOI
10.14569/IJACSA.2024.0150956
CLC Number
TP301 [Theory and Methods]
Subject Classification Code
081202
Abstract
Sports skills training is a crucial component of sports education, contributing significantly to the development of athletic ability and overall physical literacy. Neural networks offer a way to optimize traditional training methods, which are often inefficient and rely on subjective assessment. This paper develops methods for sports action recognition and athlete pose estimation and prediction based on deep neural networks. Given the complexity and rapid changes of sports skills, we propose HICNN-PSTA, a multi-task framework for jointly recognizing sports actions and estimating human poses. The method leverages the complementary strengths of convolution and involution operators in computing channel-wise and spatial information to extract sports-skill features, and uses a decoupled multi-head attention mechanism to fully capture spatio-temporal information. Furthermore, to predict human poses accurately and thereby help prevent potential sports injuries, the paper introduces MS-GCN, a prediction model based on a multi-scale graph. It exploits the constraints among human-body keypoints and parts, dividing the 2D human pose into different levels and substantially improving the modeling of human pose sequences. The proposed algorithms are thoroughly validated on a basketball skills dataset and compared with several state-of-the-art algorithms. Experimental results demonstrate the effectiveness of the proposed methods for sports action recognition and human pose estimation and prediction. This research advances the application of deep neural networks in sports training and provides a useful reference for related studies.
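To make concrete how convolution and involution operators might divide channel-wise and spatial computation as the abstract describes, the following is a minimal, hypothetical PyTorch sketch. It is not the authors' HICNN-PSTA implementation; the layer sizes, group count, reduction ratio, and fusion-by-concatenation strategy are illustrative assumptions only.

    # Minimal sketch (assumed design, not the paper's code): a block that runs a
    # convolution branch (spatially shared kernel, mixes channels) in parallel with
    # an involution branch (location-specific kernel, shared across the channels of
    # a group) and fuses the two feature maps.
    import torch
    import torch.nn as nn

    class Involution2d(nn.Module):
        """Involution: a k*k kernel is generated per spatial location and shared
        across the channels of each group (spatial-specific, channel-agnostic)."""
        def __init__(self, channels, kernel_size=3, groups=4, reduction=4):
            super().__init__()
            self.k, self.g, self.c = kernel_size, groups, channels
            # Small bottleneck that predicts a k*k kernel per location and group.
            self.kernel_gen = nn.Sequential(
                nn.Conv2d(channels, channels // reduction, 1),
                nn.BatchNorm2d(channels // reduction),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, kernel_size * kernel_size * groups, 1),
            )
            self.unfold = nn.Unfold(kernel_size, padding=kernel_size // 2)

        def forward(self, x):
            b, c, h, w = x.shape
            kernels = self.kernel_gen(x).view(b, self.g, self.k * self.k, h, w)
            patches = self.unfold(x).view(b, self.g, c // self.g, self.k * self.k, h, w)
            out = (kernels.unsqueeze(2) * patches).sum(dim=3)  # weighted sum over each k*k window
            return out.view(b, c, h, w)

    class ConvInvolutionBlock(nn.Module):
        """Parallel convolution (channel mixing) and involution (spatial mixing) branches."""
        def __init__(self, channels):
            super().__init__()
            self.conv_branch = nn.Conv2d(channels, channels, 3, padding=1)
            self.inv_branch = Involution2d(channels)
            self.fuse = nn.Conv2d(2 * channels, channels, 1)  # 1x1 conv merges the two branches

        def forward(self, x):
            return self.fuse(torch.cat([self.conv_branch(x), self.inv_branch(x)], dim=1))

    if __name__ == "__main__":
        feats = torch.randn(2, 64, 56, 56)           # e.g. frame-level feature maps
        print(ConvInvolutionBlock(64)(feats).shape)  # torch.Size([2, 64, 56, 56])

In this sketch the convolution branch applies one learned kernel everywhere (well suited to mixing channel information), while the involution branch generates a different kernel at each location and reuses it across the channels of a group (well suited to modeling spatial variation), mirroring the channel/spatial division of labor the abstract attributes to the two operators.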
Pages: 547-554
Number of pages: 8
Related Papers
50 records in total
  • [41] Application of Artificial Neural Networks for Filtration Optimization
    Griffiths, K. A.
    Andrews, R. C.
    JOURNAL OF ENVIRONMENTAL ENGINEERING-ASCE, 2011, 137(11): 1040-1047
  • [42] APPLICATION OF NEURAL NETWORKS FOR ROUTE TECHNOLOGY OPTIMIZATION
    Tulupov, D. V.
    Romaschev, A. N.
    OBRABOTKA METALLOV-METAL WORKING AND MATERIAL SCIENCE, 2008, (01): 24-25
  • [44] Teaching Learning Based Optimization for Neural Networks Learning Enhancement
    Satapathy, Suresh Chandra
    Naik, Anima
    Parvathi, K.
    SWARM, EVOLUTIONARY, AND MEMETIC COMPUTING (SEMCCO 2012), 2012, 7677: 761+
  • [45] Proposal and Investigation of a Distributed Learning Strategy for training of Neural Networks in Earth Observation Application Scenarios
    Valente, Francesco
    Lavacca, Francesco G.
    Fiori, Tiziana
    Eramo, Vincenzo
    PROCEEDINGS OF 2024 IEEE/IFIP NETWORK OPERATIONS AND MANAGEMENT SYMPOSIUM, NOMS 2024, 2024
  • [46] Supervised learning in spiking neural networks with FORCE training
    Nicola, Wilten
    Clopath, Claudia
    NATURE COMMUNICATIONS, 2017, 8
  • [48] Constrained Training of Recurrent Neural Networks for Automata Learning
    Aichernig, Bernhard K.
    Koenig, Sandra
    Mateis, Cristinel
    Pferscher, Andrea
    Schmidt, Dominik
    Tappler, Martin
    SOFTWARE ENGINEERING AND FORMAL METHODS, SEFM 2022, 2022, 13550: 155-172
  • [49] Toward Training Recurrent Neural Networks for Lifelong Learning
    Sodhani, Shagun
    Chandar, Sarath
    Bengio, Yoshua
    NEURAL COMPUTATION, 2020, 32(1): 1-35
  • [50] Training Spiking Neural Networks with Local Tandem Learning
    Yang, Qu
    Wu, Jibin
    Zhang, Malu
    Chua, Yansong
    Wang, Xinchao
    Li, Haizhou
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022