Deep Learning From Multiple Noisy Annotators as a Union

Cited: 14
Authors
Wei, Hongxin [1]
Xie, Renchunzi [1]
Feng, Lei [2]
Han, Bo [3]
An, Bo [1]
Affiliations
[1] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
[2] Chongqing Univ, Coll Comp Sci, Chongqing 400044, Peoples R China
[3] Hong Kong Baptist Univ, Dept Comp Sci, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China; National Research Foundation of Singapore
Keywords
Training; deep learning; labeling; noise measurement; neural networks; standards; learning systems; annotators; crowdsourcing; noisy labels; transition matrix; classification
DOI
10.1109/TNNLS.2022.3168696
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Crowdsourcing is a popular solution for large-scale data annotation. So far, various end-to-end deep learning methods have been proposed to improve the practical performance of learning from crowds. Despite their practical effectiveness, most of them share two major limitations: they do not guarantee learning consistency, and they are computationally inefficient. In this article, we propose a novel method named UnionNet, which is not only theoretically consistent but also experimentally effective and efficient. Specifically, unlike existing methods that either fit the label given by each annotator independently or fuse all the labels into a single reliable one, we concatenate the one-hot encoded vectors of the crowdsourced labels provided by all the annotators, which treats all the labeling information as a union and coordinates multiple annotators. In this way, we can directly train an end-to-end deep neural network by maximizing the likelihood of this union with only a parametric transition matrix. We theoretically prove the learning consistency and experimentally show the effectiveness and efficiency of the proposed method.
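The union idea described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (UnionNet trains a deep network end to end with the transition matrix as a learnable parameter); it only shows, under the assumption of a single shared transition matrix `T[true, observed]`, how concatenated one-hot annotator labels form a "union" target and how its likelihood marginalizes over the latent true class. The function names `one_hot`, `union_vector`, and `union_likelihood` are illustrative.

```python
import numpy as np

def one_hot(label, num_classes):
    # One-hot encode a single annotator's label.
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

def union_vector(annotator_labels, num_classes):
    # Concatenate every annotator's one-hot label into one "union" target,
    # keeping all labeling information instead of fusing it into one label.
    return np.concatenate([one_hot(l, num_classes) for l in annotator_labels])

def union_likelihood(class_posterior, transition, annotator_labels):
    # p(union | x) = sum_y p(y | x) * prod_r T[y, y_r]:
    # marginalize the latent true class y, with a shared transition
    # matrix T[true, observed] modeling annotator noise.
    likelihood = 0.0
    for y, p_y in enumerate(class_posterior):
        term = p_y
        for label in annotator_labels:
            term *= transition[y, label]
        likelihood += term
    return likelihood

# Example: 3 classes, two annotators who both report class 0.
posterior = np.array([0.7, 0.2, 0.1])      # model's p(y | x)
T = np.full((3, 3), 0.1)
np.fill_diagonal(T, 0.8)                    # annotators are 80% accurate
print(union_vector([0, 2], 3))              # concatenated one-hot labels
print(union_likelihood(posterior, T, [0, 0]))
```

In training, the negative log of this likelihood would serve as the loss, so the network and the transition matrix are optimized jointly rather than first aggregating labels and then fitting a classifier.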
Pages: 10552-10562
Page count: 11
Related Papers
50 records
  • [31] Sequence labeling with multiple annotators
    Rodrigues, Filipe
    Pereira, Francisco
    Ribeiro, Bernardete
    Machine Learning, 2014, 95: 165-181
  • [32] Cosmological constraints from noisy convergence maps through deep learning
    Fluri, Janis
    Kacprzak, Tomasz
    Refregier, Alexandre
    Amara, Adam
    Lucchi, Aurelien
    Hofmann, Thomas
    Physical Review D, 2018, 98 (12)
  • [33] Deep Learning from Noisy Labels with Some Adjustments of a Recent Method
    Fazekas, Istvan
    Forian, Laszlo
    Barta, Attila
    Infocommunications Journal, 2023, 15: 9-12
  • [34] Regression Learning with Multiple Noisy Oracles
    Ristovski, Kosta
    Das, Debasish
    Ouzienko, Vladimir
    Guo, Yuhong
    Obradovic, Zoran
    ECAI 2010 - 19th European Conference on Artificial Intelligence, 2010, 215: 445-450
  • [35] Neural architectures for aggregating sequence labels from multiple annotators
    Li, Maolin
    Ananiadou, Sophia
    Neurocomputing, 2022, 514: 539-550
  • [36] Learning from Multiple Noisy Annotations via Trustable Data Mixture
    Wang, Ruohan
    Chen, Fangping
    Guan, Changyu
    Xue, Cong
    Ji, Xiang
    Li, Wang
    Advanced Intelligent Computing Technology and Applications, Pt II, ICIC 2024, 2024, 14876: 428-437
  • [37] Deep Reinforcement Learning Autoencoder with Noisy Feedback
    Goutay, Mathieu
    Aoudia, Faycal Ait
    Hoydis, Jakob
    17th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt 2019), 2019: 346-351
  • [38] Deep learning for location prediction on noisy trajectories
    Kandhare, Pravinkumar Gangadharrao
    Nakhmani, Arie
    Sirakov, Nikolay Metodiev
    Pattern Analysis and Applications, 2023, 26 (01): 107-122
  • [39] A Convergence Path to Deep Learning on Noisy Labels
    Liu, Defu
    Tsang, Ivor W.
    Yang, Guowu
    IEEE Transactions on Neural Networks and Learning Systems, 2024, 35 (04): 5170-5182