An improved sample selection framework for learning with noisy labels

Cited by: 0
|
Authors
Zhang, Qian [1 ]
Zhu, Yi [1 ]
Yang, Ming [2 ]
Jin, Ge [1 ]
Zhu, Yingwen [1 ]
Lu, Yanjun [1 ]
Zou, Yu [1 ,3 ]
Chen, Qiu [4 ]
Affiliations
[1] Jiangsu Open Univ, Sch Informat Technol, Nanjing, Jiangsu, Peoples R China
[2] Nanjing Normal Univ, Sch Comp & Elect Informat, Nanjing, Jiangsu, Peoples R China
[3] Nanjing Univ Informat Sci & Technol, Sch Artificial Intelligence, Sch Future Technol, Nanjing, Jiangsu, Peoples R China
[4] Kogakuin Univ, Grad Sch Engn, Dept Elect Engn & Elect, Tokyo, Japan
Source
PLOS ONE | 2024, Vol. 19, Issue 12
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1371/journal.pone.0309841
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Classification Codes
07 ; 0710 ; 09 ;
Abstract
Deep neural networks have powerful memory capabilities, yet they frequently overfit to noisy labels, degrading classification and generalization performance. To address this issue, sample selection methods that filter out potentially clean labels have been proposed. However, there is a significant size gap between the filtered, possibly clean subset and the unlabeled subset, which becomes particularly pronounced at high noise rates. Consequently, label-free samples are underutilized in sample selection methods, leaving room for performance improvement. This study introduces an enhanced sample selection framework with an oversampling strategy (SOS) to overcome this limitation. The framework leverages the valuable information contained in label-free instances by combining the oversampling strategy with state-of-the-art sample selection methods. We validate the effectiveness of SOS through extensive experiments conducted on both synthetic noisy datasets and real-world datasets such as CIFAR, WebVision, and Clothing1M. The source code for SOS will be made available at https://github.com/LanXiaoPang613/SOS.
Pages: 37
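As a rough illustration of the idea described in the abstract above, the sketch below shows one common way a sample selection step can be paired with oversampling: per-sample losses are fit with a two-component Gaussian mixture (a DivideMix-style small-loss split), and the smaller "probably clean" subset is then oversampled so it can be drawn at the same rate as the label-free subset during semi-supervised training. This is a hypothetical sketch for intuition only; the function name, threshold, and oversampling direction are illustrative assumptions, not the authors' released implementation (see the GitHub link above).

```python
# Hypothetical sketch (not the SOS release): GMM-based small-loss selection
# followed by oversampling of the smaller clean subset.
import numpy as np
from sklearn.mixture import GaussianMixture

def select_and_oversample(losses, clean_threshold=0.5, rng=None):
    """Split sample indices by clean-probability, then oversample the clean set.

    losses: 1-D array of per-sample cross-entropy losses from a warm-up model.
    Returns (clean_idx_oversampled, noisy_idx).
    """
    rng = np.random.default_rng() if rng is None else rng
    losses = np.asarray(losses, dtype=np.float64).reshape(-1, 1)
    # Normalize losses to [0, 1] before fitting the mixture.
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-12)

    # Two components: low-loss (probably clean) vs. high-loss (probably noisy).
    gmm = GaussianMixture(n_components=2, max_iter=100, reg_covar=5e-4)
    gmm.fit(losses)
    clean_component = int(np.argmin(gmm.means_.ravel()))
    p_clean = gmm.predict_proba(losses)[:, clean_component]

    clean_idx = np.where(p_clean > clean_threshold)[0]
    noisy_idx = np.where(p_clean <= clean_threshold)[0]

    # Oversample the clean subset (with replacement) to close the size gap
    # with the label-free subset, so both are consumed at a comparable rate.
    if len(clean_idx) > 0 and len(noisy_idx) > len(clean_idx):
        extra = rng.choice(clean_idx, size=len(noisy_idx) - len(clean_idx),
                           replace=True)
        clean_idx = np.concatenate([clean_idx, extra])
    return clean_idx, noisy_idx
```

In practice the two index sets would feed the labeled and unlabeled branches of a semi-supervised learner; the oversampling simply rebalances how often each branch is sampled, which is the gap the abstract highlights at high noise rates.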