Hybrid lightweight deep-learning model for sensor-fusion basketball shooting-posture recognition

Cited by: 15
Authors
Fan, Jingjin [1 ]
Bi, Shuoben [2 ]
Xu, Ruizhuang [2 ]
Wang, Luye [2 ]
Zhang, Li [1 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Res Inst Hist Sci & Technol, Nanjing 210044, Peoples R China
[2] Nanjing Univ Informat Sci & Technol, Sch Geog Sci, Nanjing 210044, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Attention mechanism; Basketball shooting posture recognition; Gated recurrent unit; Sensor fusion; SqueezeNet; Lightweight deep-learning model;
DOI
10.1016/j.measurement.2021.110595
Chinese Library Classification
T [Industrial Technology]
Discipline classification code
08
Abstract
Shooting-posture recognition is an important area within the domain of basketball technical-movement recognition. This paper proposes the squeeze convolutional gated attention (SCGA) deep-learning model for identifying various sensor-fusion basketball shooting postures. The model combines the lightweight SqueezeNet deep-learning model for spatial feature extraction, a gated recurrent unit for time-series feature extraction, and an attention mechanism for feature-weighting calculation. The SCGA model was trained and tested on datasets covering 10 types of sensor-fusion basketball shooting postures: the intra-test achieved an average precision of 98.79%, an average recall of 98.85%, and a Kappa value of 0.9868, while the inter-test achieved an average precision of 94.06%, an average recall of 94.57%, and a Kappa value of 0.9389. These results illustrate the potential of the proposed model for recognizing various sensor-fusion basketball shooting postures and provide a reference for sports technical-movement recognition.
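The abstract describes a three-stage pipeline: SqueezeNet-style convolutions for spatial features, a gated recurrent unit (GRU) for time-series features, and an attention layer that weights the GRU outputs before classification. The sketch below illustrates that kind of architecture in PyTorch; the channel counts, window shape (sensor readings reshaped into single-channel 2D maps), hidden size, and the `Fire`/`SCGASketch` class names are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch of an SCGA-style network: Fire (SqueezeNet) blocks for spatial
# features, a GRU for temporal features, and soft-attention pooling for feature
# weighting. Shapes and the 10-class output are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Fire(nn.Module):
    """SqueezeNet Fire module: 1x1 squeeze followed by 1x1/3x3 expand convs."""
    def __init__(self, in_ch, squeeze_ch, expand_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)
        self.expand3 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1)

    def forward(self, x):
        x = F.relu(self.squeeze(x))
        return torch.cat([F.relu(self.expand1(x)), F.relu(self.expand3(x))], dim=1)


class SCGASketch(nn.Module):
    """Hypothetical spatial CNN -> GRU -> attention -> classifier pipeline."""
    def __init__(self, n_classes=10, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                       # per-frame spatial features
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            Fire(16, 8, 16),                            # -> 32 channels
            nn.AdaptiveAvgPool2d(1),                    # -> (B*T, 32, 1, 1)
        )
        self.gru = nn.GRU(32, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)                # scores each time step
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                               # x: (B, T, H, W) sensor maps
        b, t, h, w = x.shape
        f = self.cnn(x.reshape(b * t, 1, h, w)).reshape(b, t, 32)
        seq, _ = self.gru(f)                            # (B, T, hidden)
        w_attn = torch.softmax(self.attn(seq), dim=1)   # (B, T, 1) attention weights
        context = (w_attn * seq).sum(dim=1)             # weighted temporal pooling
        return self.fc(context)                         # posture-class logits


if __name__ == "__main__":
    logits = SCGASketch()(torch.randn(2, 20, 8, 8))     # 2 windows, 20 frames, 8x8 maps
    print(logits.shape)                                 # torch.Size([2, 10])
```

Running the example feeds two random sensor windows through the sketch and prints `torch.Size([2, 10])`, i.e. one logit per assumed shooting-posture class for each window; the reported precision, recall, and Kappa values in the abstract would then be computed from the predicted classes on held-out data.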
Pages: 15
Related papers
50 records in total
  • [41] A multi-channel hybrid deep learning framework for multi-sensor fusion enabled human activity recognition
    Zhang, Lei
    Yu, Jingwei
    Gao, Zhenyu
    Ni, Qin
    ALEXANDRIA ENGINEERING JOURNAL, 2024, 91 : 472 - 485
  • [42] A Lightweight Model for Feature Points Recognition of Tool Path Based on Deep Learning
    Chen, Shuo-Peng
    Ma, Hong-Yu
    Shen, Li-Yong
    Yuan, Chun-Ming
    COMPUTER-AIDED DESIGN AND COMPUTER GRAPHICS, CAD/GRAPHICS 2023, 2024, 14250 : 45 - 59
  • [43] Automatic recognition and classification of field insects based on lightweight deep learning model
    Yuan Z.-M.
    Yuan H.-J.
    Yan Y.-X.
    Li Q.
    Liu S.-Q.
    Tan S.-Q.
    Jilin Daxue Xuebao (Gongxueban)/Journal of Jilin University (Engineering and Technology Edition), 2021, 51 (03): : 1131 - 1139
  • [44] An Efficient and Lightweight Deep Learning Model for Human Activity Recognition Using Smartphones
    Ankita
    Rani, Shalli
    Babbar, Himanshi
    Coleman, Sonya
    Singh, Aman
    Aljahdali, Hani Moaiteq
    SENSORS, 2021, 21 (11)
  • [45] A novel Deep-Learning model for Human Activity Recognition based on Continuous Wavelet Transform
    Pavliuk, Olena
    Mishchuk, Myroslav
    5TH INTERNATIONAL CONFERENCE ON INFORMATICS & DATA-DRIVEN MEDICINE, IDDM 2022, 2022, 3302
  • [46] KeypointNet: An Efficient Deep Learning Model with Multi-View Recognition Capability for Sitting Posture Recognition
    Cao, Zheng
    Wu, Xuan
    Wu, Chunguo
    Jiao, Shuyang
    Xiao, Yubin
    Zhang, Yu
    Zhou, You
    ELECTRONICS, 2025, 14 (04):
  • [47] Electromagnetic Wave Absorption in the Human Head: A Virtual Sensor Based on a Deep-Learning Model
    Di Barba, Paolo
    Januszkiewicz, Lukasz
    Kawecki, Jaroslaw
    Mognaschi, Maria Evelina
    SENSORS, 2023, 23 (06)
  • [48] A new hybrid deep learning model for human action recognition
    Jaouedi, Neziha
    Boujnah, Noureddine
    Bouhlel, Salim
    JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES, 2020, 32 (04) : 447 - 453
  • [49] LAGNet: A Hybrid Deep Learning Model for Automatic Modulation Recognition
    Li, Zhuo
    Lu, Guangyue
    Li, Yuxin
    Zhou, Hao
    Li, Huan
    2024 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE, WCNC 2024, 2024,
  • [50] Speech emotion recognition using feature fusion: a hybrid approach to deep learning
    Khan, Waleed Akram
    ul Qudous, Hamad
    Farhan, Asma Ahmad
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (31) : 75557 - 75584