Segmentation Guided Attention Networks for Human Pose Estimation

Cited by: 0
Authors
Tang, Jingfan [1 ]
Lu, Jipeng [1 ]
Zhang, Xuefeng [2 ,3 ]
Zhao, Fang [4 ]
Affiliations
[1] Hangzhou Dianzi Univ, Coll Comp, Hangzhou 310018, Peoples R China
[2] Ningbo Univ, Coll Sci & Technol, Lab Intelligent Home Appliances, Ningbo 315300, Peoples R China
[3] Ningbo Univ, Coll Sci & Technol, Sch Informat Engn, Ningbo 315300, Peoples R China
[4] Zhejiang Shuren Univ, Coll Informat Sci & Technol, Hangzhou 310015, Peoples R China
Keywords
human pose estimation; segmentation guided attention; spatial attention maps; deep learning; accuracy improvement;
DOI
10.18280/ts.410522
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Human pose estimation is an important and widely studied task in computer vision. One of its difficulties is that models are vulnerable to complex backgrounds when making predictions. In this paper, we propose a segmentation-guided deep high-resolution network. A conceptually simple but computationally efficient segmentation guided module generates segmentation maps, and the resulting segmentation map is used as a spatial attention map in the feature extraction stage. Since the skeletal keypoint regions serve as the foreground of the segmentation map, the model attends more strongly to keypoint regions, which effectively reduces the influence of complex backgrounds on the prediction results. Unlike a traditional spatial attention mechanism, the segmentation guided module provides a spatial attention map informed by prior knowledge. To verify the effectiveness of our method, we conducted a series of comparison experiments on the MPII human pose dataset and the COCO2017 keypoint detection dataset. Compared with HRNet, our model improves accuracy on the COCO2017 dataset by up to 3%. The experimental results show that this segmentation guidance mechanism is effective in improving accuracy.
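The abstract describes the mechanism only at a high level, so the following is a minimal PyTorch sketch of what a segmentation-guided spatial attention step could look like. It is not the authors' implementation: the module name SegmentationGuidedAttention, the structure of the segmentation head, and the example feature-map sizes are assumptions made for illustration; only the idea of predicting a foreground (keypoint-region) map and reusing it as a spatial attention map over the backbone features comes from the abstract.

# Minimal sketch (not the paper's code): a hypothetical module that predicts a
# single-channel segmentation map from backbone features and multiplies it back
# into the features as a spatial attention map.
import torch
import torch.nn as nn

class SegmentationGuidedAttention(nn.Module):
    def __init__(self, in_channels: int):
        super().__init__()
        # Lightweight head that predicts a foreground (keypoint-region) map.
        self.seg_head = nn.Sequential(
            nn.Conv2d(in_channels, in_channels // 2, kernel_size=3, padding=1),
            nn.BatchNorm2d(in_channels // 2),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels // 2, 1, kernel_size=1),
        )

    def forward(self, features: torch.Tensor):
        # Segmentation logits -> probabilities in [0, 1].
        seg_map = torch.sigmoid(self.seg_head(features))
        # Use the segmentation map as spatial attention: foreground (keypoint
        # regions) is emphasized, background is suppressed.
        attended = features * seg_map
        return attended, seg_map

if __name__ == "__main__":
    x = torch.randn(2, 32, 64, 48)           # e.g. one HRNet-style feature map
    module = SegmentationGuidedAttention(32)
    out, seg = module(x)
    print(out.shape, seg.shape)               # [2, 32, 64, 48], [2, 1, 64, 48]

In an HRNet-style backbone such a step would presumably be applied within the feature extraction stage, with the predicted map supervised against a foreground mask derived from the keypoint annotations; those details are not specified in the abstract.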
Pages: 2485-2493
Number of pages: 9
Related Papers
50 records in total
  • [31] Newell, Alejandro; Yang, Kaiyu; Deng, Jia. Stacked Hourglass Networks for Human Pose Estimation. COMPUTER VISION - ECCV 2016, PT VIII, 2016, 9912: 483-499.
  • [32] Pei, Yuchen; Zhao, Fenqiang; Zhong, Tao; Ma, Laifa; Liao, Lufan; Wu, Zhengwang; Wang, Li; Zhang, He; Wang, Lisheng; Li, Gang. PETS-Nets: Joint Pose Estimation and Tissue Segmentation of Fetal Brains Using Anatomy-Guided Networks. IEEE TRANSACTIONS ON MEDICAL IMAGING, 2024, 43 (03): 1006-1017.
  • [33] Lu, Huchuan; Shao, Xinqing; Xiao, Yi. Pose Estimation With Segmentation Consistency. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2013, 22 (10): 4040-4048.
  • [34] Zhou, Shuren; Duan, Xinlan; Zhou, Jiarui. Human pose estimation based on frequency domain and attention module. NEUROCOMPUTING, 2024, 604.
  • [35] Zhang, Junjie; Yang, Haojie; Deng, Yancong. Enhanced Human Pose Estimation with Attention-Augmented HRNet. 6TH INTERNATIONAL CONFERENCE ON IMAGE PROCESSING AND MACHINE VISION, IPMV 2024, 2024: 88-93.
  • [36] Wu, Chengpeng; Tan, Guangxing; Chen, Haifeng; Li, Chunyu. Lightweight and Efficient Human Pose Estimation Fusing Transformer and Attention. Computer Engineering and Applications, 2024, 60 (22): 197-208.
  • [37] Tran, Tien-Dat; Vo, Xuan-Thuy; Nguyen, Duy-Linh; Jo, Kang-Hyun. Efficient Spatial-Attention Module for Human Pose Estimation. FRONTIERS OF COMPUTER VISION, IW-FCV 2021, 2021, 1405: 242-250.
  • [38] Jiang, Chunling; Zeng, Bi; Yao, Zhuangze; Deng, Bin. Human Pose Estimation Fusing Weight Adaptive Loss and Attention. Computer Engineering and Applications, 2023, 59 (18): 145-153.
  • [39] Ai, Baole; Zhou, Yu; Yu, Yao; Du, Sidan. Human Pose Estimation using Deep Structure Guided Learning. 2017 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2017), 2017: 1224-1231.
  • [40] Nie, Xuecheng; Feng, Jiashi; Xing, Junliang; Xiao, Shengtao; Yan, Shuicheng. Hierarchical Contextual Refinement Networks for Human Pose Estimation. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2019, 28 (02): 924-936.