To Aggregate or Not? Learning with Separate Noisy Labels

Cited by: 10
Authors
Wei, Jiaheng [1 ]
Zhu, Zhaowei [1 ]
Luo, Tianyi [2 ]
Amid, Ehsan [3 ]
Kumar, Abhishek [3 ]
Liu, Yang [1 ]
Affiliations
[1] Univ Calif Santa Cruz, Santa Cruz, CA 95064 USA
[2] Amazon Search Sci & AI, Palo Alto, CA USA
[3] Google Res, Brain Team, Mountain View, CA USA
Funding
US National Science Foundation
Keywords
Crowdsourcing; Label Aggregation; Label Noise; Human Annotation; LOWER BOUNDS; MODELS;
DOI
10.1145/3580305.3599522
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Raw training data often comes with separate noisy labels collected from multiple imperfect annotators (e.g., via crowdsourcing). A typical approach is to first aggregate these separate labels into a single one and then apply standard training methods; the literature has also studied effective aggregation approaches extensively. This paper revisits this choice and aims to answer the question of whether one should aggregate separate noisy labels into single ones or use them separately as given. We theoretically analyze the performance of both approaches under the empirical risk minimization framework for a number of popular loss functions, including ones designed specifically for the problem of learning with noisy labels. Our theorems conclude that label separation is preferred over label aggregation when the noise rates are high or the number of labelers/annotations is insufficient. Extensive empirical results validate our conclusions.
Pages: 2523-2535 (13 pages)
Related Papers
50 records
  • [21] Variational Rectification Inference for Learning with Noisy Labels
    Sun, Haoliang
    Wei, Qi
    Feng, Lei
    Hu, Yupeng
    Liu, Fan
    Fan, Hehe
    Yin, Yilong
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2025, 133 (02) : 652 - 671
  • [22] Dimensionality-Driven Learning with Noisy Labels
    Ma, Xingjun
    Wang, Yisen
    Houle, Michael E.
    Zhou, Shuo
    Erfani, Sarah M.
    Xia, Shu-Tao
    Wijewickrema, Sudanthi
    Bailey, James
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [23] Learning With Auxiliary Less-Noisy Labels
    Duan, Yunyan
    Wu, Ou
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2017, 28 (07) : 1716 - 1721
  • [24] Few-shot Learning with Noisy Labels
    Liang, Kevin J.
    Rangrej, Samrudhdhi B.
    Petrovic, Vladan
    Hassner, Tal
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 9079 - 9088
  • [25] A Convergence Path to Deep Learning on Noisy Labels
    Liu, Defu
    Tsang, Ivor W.
    Yang, Guowu
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (04) : 5170 - 5182
  • [26] Penalty based robust learning with noisy labels
    Kong, Kyeongbo
    Lee, Junggi
    Kwak, Youngchul
    Cho, Young-Rae
    Kim, Seong-Eun
    Song, Woo-Jin
    NEUROCOMPUTING, 2022, 489 : 112 - 127
  • [27] Joint Optimization Framework for Learning with Noisy Labels
    Tanaka, Daiki
    Ikami, Daiki
    Yamasaki, Toshihiko
    Aizawa, Kiyoharu
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 5552 - 5560
  • [28] Adaptive Learning for Dynamic Features and Noisy Labels
    Gu, Shilin
    Xu, Chao
    Hu, Dewen
    Hou, Chenping
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2025, 47 (02) : 1219 - 1237
  • [29] Learning with Noisy Labels via Sparse Regularization
    Zhou, Xiong
    Liu, Xianming
    Wang, Chenyang
    Zhai, Deming
    Jiang, Junjun
    Ji, Xiangyang
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 72 - 81
  • [30] Decoding class dynamics in learning with noisy labels
    Tatjer, Albert
    Nagarajan, Bhalaji
    Marques, Ricardo
    Radeva, Petia
    PATTERN RECOGNITION LETTERS, 2024, 184 : 239 - 245