CoNPL: Consistency training framework with noise-aware pseudo labeling for dense pose estimation

Cited: 0
Authors
Wen, Jiaxiao [1 ]
Chu, Tao [1 ]
Sun, Junyao [1 ]
Liu, Qiong [1 ]
Affiliations
[1] South China Univ Technol, Sch Software Engn, Guangzhou 510006, Peoples R China
Keywords
Dense pose estimation; Pseudo labeling; Consistency training; Data augmentation
DOI
10.1016/j.imavis.2024.105170
CLC number
TP18 [Theory of artificial intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Dense pose estimation is hindered by the scarcity of precise pixel-level IUV labels, which are costly to annotate. Existing methods attempt to overcome this by regularizing model outputs or interpolating pseudo labels. However, conventional geometric transformations often fall short, and pseudo labels may introduce unwanted noise, leaving inaccurate estimations difficult to rectify. We introduce a novel Consistency training framework with Noise-aware Pseudo Labeling (CoNPL) to tackle the problem of learning from unlabeled pixels. CoNPL applies both weak and strong augmentations in a shared model to enhance robustness against aggressive transformations. To address noisy pseudo labels, CoNPL integrates a Noise-aware Pseudo Labeling (NPL) module, which consists of a Noise-Aware Module (NAM) and a Noise-Resistant Learning (NRL) module. NAM identifies misclassifications and incorrect UV coordinates using binary classification and regression, while NRL dynamically adjusts loss weights to filter out uncertain samples, thereby stabilizing learning from pseudo labels. Our method demonstrates a +2.0% improvement in AP on the DensePose-COCO benchmark across different networks, achieving state-of-the-art performance. On the UltraPose and DensePose-Chimps benchmarks, our method also demonstrates +2.7% and +3.0% improvements in AP, respectively.
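The abstract's core idea — filtering and re-weighting per-pixel consistency losses by a noise-aware confidence score — can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the threshold-and-weight scheme, and the L2 stand-in for the UV regression loss are all illustrative assumptions based only on the mechanism the abstract describes.

```python
import numpy as np

def noise_aware_weights(noise_scores, threshold=0.5):
    """Down-weight pixels flagged as unreliable by a noise-aware head.

    noise_scores: per-pixel probability (0..1) that the pseudo label is
    clean, e.g. the sigmoid output of a binary noise classifier (a
    hypothetical stand-in for the paper's NAM). Pixels scoring below
    `threshold` are filtered out entirely; the rest keep their score
    as a soft loss weight, mimicking the NRL-style dynamic weighting.
    """
    scores = np.asarray(noise_scores, dtype=float)
    return np.where(scores >= threshold, scores, 0.0)

def consistency_loss(pred_strong, pseudo_weak, noise_scores, threshold=0.5):
    """Weighted L2 consistency between predictions on the strongly
    augmented view and pseudo labels from the weakly augmented view
    (a generic proxy for the paper's UV regression objective)."""
    w = noise_aware_weights(noise_scores, threshold)
    per_pixel = (np.asarray(pred_strong) - np.asarray(pseudo_weak)) ** 2
    denom = w.sum()
    return float((w * per_pixel).sum() / denom) if denom > 0 else 0.0
```

Under this scheme, a pixel whose pseudo label looks noisy (score below the threshold) contributes nothing to the gradient, while confident pixels contribute in proportion to their score — the filtering behavior the abstract attributes to NRL.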
Pages: 12
Related papers
50 records
  • [1] NAT: Noise-Aware Training for Robust Neural Sequence Labeling
    Namysl, Marcin
    Behnke, Sven
    Koehler, Joachim
    58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2020), 2020, : 1501 - 1517
  • [2] Noise-Aware Quantum Amplitude Estimation
    Herbert, Steven
    Williams, Ifan
    Guichard, Roland
    Ng, Darren
    IEEE TRANSACTIONS ON QUANTUM ENGINEERING, 2024, 5
  • [3] Noise-Aware Framework for Robust Visual Tracking
    Li, Shengjie
    Zhao, Shuai
    Cheng, Bo
    Chen, Junliang
    IEEE TRANSACTIONS ON CYBERNETICS, 2022, 52 (02) : 1179 - 1192
  • [4] Absolute 3D Human Pose Estimation Using Noise-Aware Radial Distance Predictions
    Chang, Inho
    Park, Min-Gyu
    Kim, Je Woo
    Yoon, Ju Hong
    SYMMETRY-BASEL, 2023, 15 (01):
  • [5] Cleaning training-datasets with noise-aware algorithms
    Escalante, H. Jair
    SEVENTH MEXICAN INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE, PROCEEDINGS, 2006, : 151 - 158
  • [6] Social emotion classification based on noise-aware training
    Li, Xin
    Rao, Yanghui
    Xie, Haoran
    Liu, Xuebo
    Wong, Tak-Lam
    Wang, Fu Lee
    DATA & KNOWLEDGE ENGINEERING, 2019, 123
  • [7] QuantumNAT: Quantum Noise-Aware Training with Noise Injection, Quantization and Normalization
    Wang, Hanrui
    Gu, Jiaqi
    Ding, Yongshan
    Li, Zirui
    Chong, Frederic T.
    Pan, David Z.
    Han, Song
    PROCEEDINGS OF THE 59TH ACM/IEEE DESIGN AUTOMATION CONFERENCE, DAC 2022, 2022, : 1 - 6
  • [8] An Occlusion and Noise-Aware Stereo Framework Based on Light Field Imaging for Robust Disparity Estimation
    Yang, Da
    Cui, Zhenglong
    Sheng, Hao
    Chen, Rongshan
    Cong, Ruixuan
    Wang, Shuai
    Xiong, Zhang
    IEEE TRANSACTIONS ON COMPUTERS, 2024, 73 (03) : 764 - 777
  • [9] Noise-aware Local Model Training Mechanism for Federated Learning
    Zhang, Jinghui
    Lv, Dingyang
    Dai, Qiangsheng
    Xin, Fa
    Dong, Fang
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2023, 14 (04)
  • [10] POSTFILTERING USING AN ADVERSARIAL DENOISING AUTOENCODER WITH NOISE-AWARE TRAINING
    Tawara, Naohiro
    Tanabe, Hikari
    Kobayashi, Tetsunori
    Fujieda, Masaru
    Katagiri, Kazuhiro
    Yazu, Takashi
    Ogawa, Tetsuji
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 3282 - 3286