Transferring Annotator- and Instance-Dependent Transition Matrix for Learning From Crowds

Times Cited: 0
Authors
Li, Shikun [1 ,2 ]
Xia, Xiaobo [3 ]
Deng, Jiankang [4 ]
Ge, Shiming [1 ,2 ]
Liu, Tongliang [3 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, Beijing 100095, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing 100049, Peoples R China
[3] Univ Sydney, Fac Engn, Sydney AI Ctr, Sch Comp Sci, Darlington, NSW 2008, Australia
[4] Imperial Coll London, Dept Comp, London SW7 2BX, England
Funding
Australian Research Council;
Keywords
Noise; Annotations; Noise measurement; Knowledge transfer; Data models; Sparse matrices; Estimation; Learning from crowds; label-noise learning; noise transition matrix; knowledge transfer;
DOI
10.1109/TPAMI.2024.3388209
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning from crowds refers to the setting in which the annotations of training data are obtained from crowd-sourcing services. Multiple annotators each complete their own small portion of the annotations, and annotator-dependent labeling mistakes occur frequently. Modeling the label-noise generation process with a noise transition matrix is a powerful tool for tackling label noise. In real-world crowd-sourcing scenarios, noise transition matrices are both annotator- and instance-dependent. However, due to the high complexity of annotator- and instance-dependent transition matrices (AIDTM), annotation sparsity, i.e., each annotator labels only a tiny fraction of the instances, makes modeling AIDTM very challenging. Without prior knowledge, existing works simplify the problem by assuming the transition matrix is instance-independent or by using simple parametric forms, which sacrifices modeling generality. Motivated by this, we target the more realistic problem of estimating general AIDTM in practice. To preserve modeling generality, we parameterize AIDTM with deep neural networks. To alleviate the modeling challenge, we assume that every annotator shares its noise pattern with similar annotators, and estimate AIDTM via knowledge transfer. We therefore first model the mixture of noise patterns over all annotators, and then transfer this model to individual annotators. Furthermore, since the transfer from the mixture of noise patterns to individuals may cause two annotators with very different noise generation processes to perturb each other, we employ knowledge transfer between identified neighboring annotators to calibrate the modeling. Theoretical analyses demonstrate that both the knowledge transfer from the global model to individuals and the knowledge transfer between neighboring individuals effectively help mitigate the challenge of modeling general AIDTM. Experiments confirm the superiority of the proposed approach on synthetic and real-world crowd-sourcing data.
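The pipeline the abstract describes (parameterize the instance-dependent transition matrix T(x) with a deep network, fit one global model on the mixture of all annotators' noisy labels, warm-start each annotator's model from it, then calibrate between neighboring annotators) can be sketched compactly. The following PyTorch sketch is purely illustrative: the names (TransitionNet, noisy_posterior, fit), the architecture, the use of a separately trained classifier for P(clean label | x), and the L2 neighbor penalty are all assumptions, not the authors' implementation.

```python
# A minimal, assumption-laden sketch of the pipeline described above;
# NOT the paper's code. All names and training details are hypothetical.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransitionNet(nn.Module):
    """Maps an instance feature x to a C x C row-stochastic matrix T(x),
    where T(x)[i, j] approximates P(noisy label = j | true label = i, x)."""
    def __init__(self, feat_dim: int, num_classes: int, hidden: int = 128):
        super().__init__()
        self.num_classes = num_classes
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes * num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.net(x).view(-1, self.num_classes, self.num_classes)
        return F.softmax(logits, dim=-1)  # softmax per row: rows sum to 1

def noisy_posterior(T: torch.Tensor, clean_prob: torch.Tensor) -> torch.Tensor:
    """P(noisy | x) = P(clean | x)^T T(x). clean_prob: (B, C); T: (B, C, C)."""
    return torch.bmm(clean_prob.unsqueeze(1), T).squeeze(1)

def fit(net, feats, labels, clean_prob, epochs=10, lr=1e-3,
        neighbors=(), lam=1e-2):
    """Fit T(x) by maximum likelihood of the observed noisy labels.
    `clean_prob` is assumed to come from a separately trained classifier.
    `neighbors` (networks of similar annotators) adds an L2 pull toward
    their parameters, a stand-in for the abstract's neighbor calibration."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        T = net(feats)
        loss = F.nll_loss(torch.log(noisy_posterior(T, clean_prob) + 1e-8),
                          labels)
        for ref in neighbors:  # optional neighbor calibration
            loss = loss + lam * sum(((p - q.detach()) ** 2).sum()
                                    for p, q in zip(net.parameters(),
                                                    ref.parameters()))
        opt.zero_grad(); loss.backward(); opt.step()
    return net

# Stage 1 (global): one network fit on ALL annotators' labels models the
# mixture of noise patterns.
#   global_net = fit(TransitionNet(D, C), feats, labels, clean_prob)
# Stage 2 (individual): warm-start each annotator's network from the global
# one (global-to-individual transfer), fine-tune on that annotator's sparse
# labels, and calibrate against identified neighboring annotators.
#   net_a = fit(copy.deepcopy(global_net), feats_a, labels_a, prob_a,
#               neighbors=[net_b, net_c])
```

Warm-starting each per-annotator network from the global one instantiates the global-to-individual transfer, so each annotator's sparse labels only need to fine-tune a shared noise model rather than learn one from scratch; the L2 pull toward neighbors' parameters is one simple way to keep annotators with very different noise patterns from perturbing each other.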
Pages: 7377-7391
Page count: 15