Unlabeled data selection for active learning in image classification

Cited by: 0
Authors
Xiongquan Li
Xukang Wang
Xuhesheng Chen
Yao Lu
Hongpeng Fu
Ying Cheng Wu
Affiliations
[1] Kunming University of Science and Technology, Faculty of Information Engineering and Automation
[2] Sage IT Consulting Group
[3] The University of North Carolina at Chapel Hill
[4] University of Bristol
[5] Khoury College of Computer Sciences
[6] Northeastern University
[7] University of Washington
Abstract
Active Learning has emerged as a viable solution for addressing the challenge of labeling extensive amounts of data in data-intensive applications such as computer vision and neural machine translation. The main objective of Active Learning is to automatically identify a subset of unlabeled data samples for annotation, based on an acquisition function that assesses the value of each sample for model training. In computer vision, image classification is a crucial task that typically requires a substantial training dataset. This paper introduces novel selection methods within the Active Learning framework that identify informative images from unlabeled datasets while minimizing the amount of labeled training data required. The proposed methods, namely Similarity-based Selection, Prediction Probability-based Selection, and Competence-based Active Learning, are evaluated through extensive experiments on the CIFAR-10 and CIFAR-100 datasets. The results show that the proposed methods outperform both random selection and conventional selection techniques, underscoring their effectiveness in enhancing the Active Learning process for image classification.
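Of the three methods named in the abstract, Prediction Probability-based Selection is the most self-describing: the current model's class probabilities over the unlabeled pool are turned into per-sample informativeness scores, and the highest-scoring samples are sent for annotation. The sketch below is purely illustrative and is not the paper's implementation; the entropy scoring rule and the names entropy_acquisition and select_batch are assumptions, since the abstract does not specify the acquisition function.

    import numpy as np

    def entropy_acquisition(probs: np.ndarray) -> np.ndarray:
        # Score each unlabeled sample by predictive entropy.
        # probs: (n_samples, n_classes) softmax outputs of the current model.
        # Higher score = the model is less certain = more informative to label.
        eps = 1e-12  # guard against log(0)
        return -np.sum(probs * np.log(probs + eps), axis=1)

    def select_batch(probs: np.ndarray, batch_size: int) -> np.ndarray:
        # Return indices of the batch_size highest-scoring pool samples.
        scores = entropy_acquisition(probs)
        return np.argsort(scores)[-batch_size:]

    # Toy pool: 5 samples, 3 classes.
    pool_probs = np.array([
        [0.98, 0.01, 0.01],  # confident prediction, low entropy
        [0.34, 0.33, 0.33],  # near-uniform, high entropy
        [0.60, 0.30, 0.10],
        [0.50, 0.49, 0.01],
        [0.90, 0.05, 0.05],
    ])
    print(select_batch(pool_probs, batch_size=2))  # picks the two most uncertain samples

Under this scoring rule, near-uniform predictions receive the highest scores, so each acquired batch is drawn from the samples the classifier is currently least certain about.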
Related papers
50 items in total
  • [21] Feature Selection for Unlabeled Data
    Chen, Chien-Hsing
    ADVANCES IN SWARM INTELLIGENCE, PT II, 2011, 6729 : 269 - 274
  • [22] Adaptive Active Learning for Image Classification
    Li, Xin
    Guo, Yuhong
    2013 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2013, : 859 - 866
  • [23] Deep Active Learning for Image Classification
    Ranganathan, Hiranmayi
    Venkateswara, Hemanth
    Chakraborty, Shayok
    Panchanathan, Sethuraman
    2017 24TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2017, : 3934 - 3938
  • [24] Improving One-Class Classification of Remote Sensing Data by Using Active Learning: A Case Study of Positive and Unlabeled Learning
    Sun, Y.
    Li, P.
    Beijing Daxue Xuebao (Ziran Kexue Ban)/Acta Scientiarum Naturalium Universitatis Pekinensis, 2020, 56 (01): 155 - 163
  • [25] Ensemble learning with active data selection for semi-supervised pattern classification
    Wang, Shihai
    Chen, Ke
    2007 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-6, 2007, : 355 - 360
  • [26] Web Page Classification Using Relational Learning Algorithm and Unlabeled Data
    Li, Yanjuan
    Guo, Maozu
    JOURNAL OF COMPUTERS, 2011, 6 (03) : 474 - 479
  • [27] A Dynamic Centroid Text Classification Approach by Learning from Unlabeled Data
    Jiang, Cuicui
    Zhu, Dingju
    Jiang, Qingshan
    PROCEEDINGS OF 3RD INTERNATIONAL CONFERENCE ON MULTIMEDIA TECHNOLOGY (ICMT-13), 2013, 84 : 1420 - 1429
  • [28] Learning with a Generative Adversarial Network from a Positive Unlabeled Dataset for Image Classification
    Chiaroni, F.
    Rahal, M-C.
    Hueber, N.
    Dufaux, F.
    2018 25TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2018, : 1368 - 1372
  • [29] Evaluation of Active Learning Techniques on Medical Image Classification with Unbalanced Data Distributions
    Chong, Quok Zong
    Knottenbelt, William J.
    Bhatia, Kanwal K.
    DEEP GENERATIVE MODELS, AND DATA AUGMENTATION, LABELLING, AND IMPERFECTIONS, 2021, 13003 : 235 - 242
  • [30] Active learning for deep object detection by fully exploiting unlabeled data
    Tan, Feixiang
    Zheng, Guansheng
    CONNECTION SCIENCE, 2023, 35 (01)