An Improved High-Resolution Network-Based Method for Yoga-Pose Estimation

Times Cited: 1
Authors
Li, Jianrong [1 ]
Zhang, Dandan [1 ]
Shi, Lei [1 ]
Ke, Ting [1 ]
Zhang, Chuanlei [1 ]
Affiliations
[1] Tianjin Univ Sci & Technol, Coll Artificial Intelligence, Tianjin 300453, Peoples R China
Source
APPLIED SCIENCES-BASEL, 2023, Vol. 13, Issue 15
Keywords
human pose estimation; attention mechanism; high-resolution networks; feature pyramids
DOI
10.3390/app13158912
CLC Number
O6 [Chemistry]
Subject Classification Code
0703
Abstract
This paper proposes SEPAM_HRNet, a high-resolution pose-estimation model that incorporates a squeeze-and-excitation and pixel-attention-mask (SEPAM) module. The SEPAM module integrates feature pyramid extraction, channel attention, and pixel-attention masks, which improves model performance. The model is constructed by replacing ordinary convolutions with the plug-and-play SEPAM module, yielding the SEPAMneck and SEPAMblock modules. To evaluate the model, the YOGA2022 yoga-pose teaching dataset is presented. It comprises 15,350 images of five participants performing ten basic yoga poses: Warrior I Pose, Warrior II Pose, Bridge Pose, Downward Dog Pose, Flat Pose, Inclined Plank Pose, Seated Pose, Triangle Pose, Phantom Chair Pose, and Goddess Pose. The YOGA2022 dataset serves as a benchmark for evaluating the accuracy of human pose-estimation models. Experimental results show that, under the same image resolution and environment configuration, SEPAM_HRNet predicts human keypoints more accurately than other state-of-the-art human pose-estimation models on both the Common Objects in Context (COCO) validation set and the YOGA2022 validation set. These findings demonstrate the superior performance of SEPAM_HRNet.
Pages: 18
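
The SEPAM module described in the abstract combines channel attention (squeeze-and-excitation) with a pixel-attention mask and is used as a drop-in replacement for ordinary convolutions. Below is a minimal PyTorch sketch of such a block under plausible assumptions; the class names (SqueezeExcitation, PixelAttentionMask, SEPAMConv), the reduction ratio, and the order of the two attention stages are illustrative and not taken from the paper.

```python
# Minimal sketch of a squeeze-and-excitation + pixel-attention-mask block.
# Hypothetical layout; the published SEPAM module may be wired differently.
import torch
import torch.nn as nn


class SqueezeExcitation(nn.Module):
    """Channel attention: squeeze (global average pool), then excite (two FC layers)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight each channel


class PixelAttentionMask(nn.Module):
    """Spatial attention: a 1-channel sigmoid mask that reweights every pixel."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = torch.sigmoid(self.conv(x))  # shape (B, 1, H, W)
        return x * mask


class SEPAMConv(nn.Module):
    """Drop-in replacement for an ordinary 3x3 convolution (illustrative)."""
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        self.se = SqueezeExcitation(out_channels)
        self.pam = PixelAttentionMask(out_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.relu(self.bn(self.conv(x)))
        x = self.se(x)   # channel attention
        x = self.pam(x)  # pixel-attention mask
        return x


if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 48)   # a COCO-style 4:3 feature map
    y = SEPAMConv(32, 64)(x)
    print(y.shape)                   # torch.Size([1, 64, 64, 48])
```

In SEPAM_HRNet, blocks of this kind would replace the standard convolutions inside HRNet's bottleneck and basic blocks, producing what the abstract calls the SEPAMneck and SEPAMblock modules; the exact placement in the published model may differ from this sketch.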