Badminton video action recognition based on time network

Cited: 1
|
Authors
Zhi, Juncai [1 ]
Sun, Zijie [1 ]
Zhang, Ruijie [1 ,2 ]
Zhao, Zhouxiang [2 ]
Affiliations
[1] Tangshan Normal Univ, Dept Phys Educ, Tangshan 063000, Hebei, Peoples R China
[2] Jeonju Univ, Grad Sch, Jeonju, South Korea
Keywords
Time segmentation network; video action recognition; action recognition; ALGORITHM;
DOI
10.3233/JCM-226889
Chinese Library Classification (CLC)
T [Industrial Technology];
Subject Classification Code
08;
Abstract
With the continuous development of artificial intelligence research, computer vision has shifted from traditional methods based on "feature engineering" to deep learning-based "network engineering" methods, which automatically extract and classify features using deep neural networks. Traditional methods based on hand-crafted features are computationally expensive and are usually applied to simple research problems, which makes them ill-suited to feature extraction from large-scale data. Deep learning-based methods greatly reduce the burden of feature design by learning features from large-scale data and have been successfully applied to many visual recognition tasks. Video action recognition has likewise shifted from traditional hand-crafted-feature methods to deep learning-based methods, with the focus on building more effective deep neural network models. A survey of the related literature shows that research on temporal segment networks for football and basketball video action recognition is relatively rich, whereas badminton has received little attention; this study of badminton video action recognition based on a temporal segment network therefore enriches the existing results and provides a reference for follow-up research. This paper introduces a lightweight attention mechanism into the temporal segment network, forming an attention-based temporal segment network, and trains the neural network to obtain a classifier of badminton stroke actions that predicts four common stroke types: forehand stroke, backhand stroke, overhead stroke, and pick (lift). The experimental results show that the recognition recall and accuracy for each stroke type exceed 86%, with average recall and accuracy of 91.2% and 91.6%, respectively, indicating that the temporal segment network-based method approaches the level of human judgment and can effectively perform badminton stroke recognition from video.
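The abstract describes the method only at a high level: a lightweight attention module is inserted into a temporal segment network (TSN), and the network is trained as a four-class stroke classifier over frames sampled from video segments. The sketch below is a minimal illustration of that kind of architecture, not the authors' implementation; the ResNet-18 backbone, the squeeze-and-excitation-style attention block, the number of segments, and the input resolution are all assumptions made for the example.

```python
# Illustrative sketch of an attention-augmented temporal segment network (TSN)
# for four-class badminton stroke recognition. Backbone, attention design,
# segment count, and input size are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
import torchvision.models as models


class ChannelAttention(nn.Module):
    """Lightweight squeeze-and-excitation style channel attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # squeeze: global spatial average
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                 # per-channel attention weights
        )

    def forward(self, x):
        return x * self.gate(x)                           # reweight feature channels


class AttentionTSN(nn.Module):
    """TSN-style model: score each sampled segment with a shared 2D CNN plus
    attention, then average the segment scores (segmental consensus)."""
    def __init__(self, num_classes: int = 4, num_segments: int = 3):
        super().__init__()
        self.num_segments = num_segments
        backbone = models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.attention = ChannelAttention(512)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(512, num_classes)

    def forward(self, x):
        # x: (batch, num_segments, 3, H, W) -- one sampled frame per segment
        b, k, c, h, w = x.shape
        x = x.reshape(b * k, c, h, w)
        feats = self.attention(self.features(x))          # per-frame features
        logits = self.classifier(self.pool(feats).flatten(1))
        return logits.view(b, k, -1).mean(dim=1)          # average over segments


if __name__ == "__main__":
    # Four stroke classes: forehand, backhand, overhead, pick (lift)
    model = AttentionTSN(num_classes=4, num_segments=3)
    clips = torch.randn(2, 3, 3, 224, 224)                # dummy batch of 2 clips
    print(model(clips).shape)                             # torch.Size([2, 4])
```

Per-class recall and accuracy as reported in the abstract would then be computed from the predicted class against the ground-truth stroke label on a held-out test set.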
Pages: 2739-2752
Number of pages: 14
Related Papers
50 records in total
  • [21] Video Test-Time Adaptation for Action Recognition
    Lin, Wei
    Mirza, Muhammad Jehanzeb
    Kozinski, Mateusz
    Possegger, Horst
    Kuehne, Hilde
    Bischof, Horst
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 22952 - 22961
  • [22] Real Time Action Recognition from Video Footage
    Apon, Tasnim Sakib
    Chowdhury, Mushfiqul Islam
    Reza, Zubair
    Datta, Arpita
    Hasan, Syeda Tanjina
    Alam, Golam Rabiul
    2021 3RD INTERNATIONAL CONFERENCE ON SUSTAINABLE TECHNOLOGIES FOR INDUSTRY 4.0 (STI), 2021,
  • [23] Convolutional Neural Network-Based Video Super-Resolution for Action Recognition
    Zhang, Haochen
    Liu, Dong
    Xiong, Zhiwei
    PROCEEDINGS 2018 13TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE & GESTURE RECOGNITION (FG 2018), 2018, : 746 - 750
  • [24] DC3D: A Video Action Recognition Network Based on Dense Connection
    Mu, Xiaofang
    Liu, Zhenyu
    Liu, Jiaji
    Li, Hao
    Li, Yue
    Li, Yikun
    2022 TENTH INTERNATIONAL CONFERENCE ON ADVANCED CLOUD AND BIG DATA, CBD, 2022, : 133 - 138
  • [25] FENet: An Efficient Feature Excitation Network for Video-based Human Action Recognition
    Zhang, Zhan
    Jin, Yi
    Feng, Songhe
    Li, Yidong
    Wang, Tao
    Tian, Hui
    2022 16TH IEEE INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING (ICSP2022), VOL 1, 2022, : 540 - 544
  • [26] Context-Aware Memory Attention Network for Video-Based Action Recognition
    Koh, Thean Chun
    Yeo, Chai Kiat
    Vaitesswar, U. S.
    Jing, Xuan
    2022 IEEE 14TH IMAGE, VIDEO, AND MULTIDIMENSIONAL SIGNAL PROCESSING WORKSHOP (IVMSP), 2022,
  • [27] Action Recognition Based on a Selective Sampling Strategy for Real-time Video Surveillance
    Zhang, Bo
    Zhang, Hong
    Yuan, Ding
    SEVENTH INTERNATIONAL CONFERENCE ON GRAPHIC AND IMAGE PROCESSING (ICGIP 2015), 2015, 9817
  • [28] Sparse coding-based space-time video representation for action recognition
    Yinghua Fu
    Tao Zhang
    Wenjin Wang
    Multimedia Tools and Applications, 2017, 76 : 12645 - 12658
  • [29] TWO-PATHWAY TRANSFORMER NETWORK FOR VIDEO ACTION RECOGNITION
    Jiang, Bo
    Yu, Jiahong
    Zhou, Lei
    Wu, Kailin
    Yang, Yang
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 1089 - 1093
  • [30] Manet: motion-aware network for video action recognition
    Li, Xiaoyang
    Yang, Wenzhu
    Wang, Kanglin
    Wang, Tiebiao
    Zhang, Chen
    COMPLEX & INTELLIGENT SYSTEMS, 2025, 11 (03)