Self-paced multi-label co-training

Cited: 4
Authors
Gong, Yanlu [1 ]
Wu, Quanwang [1 ]
Zhou, Mengchu [2 ]
Wen, Junhao [3 ]
Affiliations
[1] Chongqing Univ, Coll Comp Sci, Chongqing 400030, Peoples R China
[2] New Jersey Inst Technol, Dept Elect & Comp Engn, Newark, NJ 07102 USA
[3] Chongqing Univ, Sch Big Data & Software Engn, Chongqing 401331, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Co-training; Label rectification; Multi-label classification; Self-paced learning; Semi-supervised learning; LEARNING APPROACH; QUALITY; MODEL;
DOI
10.1016/j.ins.2022.11.153
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Multi-label learning aims to solve classification problems where instances are associated with a set of labels. In reality, it is generally easy to acquire unlabeled data but expensive or time-consuming to label them, and this situation becomes more serious in multi-label learning as an instance needs to be annotated with several labels. Hence, semi-supervised multi-label learning approaches have emerged, as they are able to exploit unlabeled data to help train predictive models. This work proposes a novel approach called Self-paced Multi-label Co-Training (SMCT). It leverages the well-known co-training paradigm to iteratively train two classifiers on two views of a dataset and communicate one classifier's predictions on unlabeled data to augment the other's training set. As pseudo labels may be false in iterative training, self-paced learning is integrated into SMCT to rectify false pseudo labels and avoid error accumulation. Concretely, the multi-label co-training model in SMCT is formulated as an optimization problem by introducing latent weight variables for unlabeled instances. It is then solved via an alternating convex optimization algorithm. Experimental evaluations are carried out on six benchmark multi-label datasets with three metrics. The results demonstrate that SMCT is highly competitive in each setting when compared with five state-of-the-art methods. (c) 2022 Elsevier Inc. All rights reserved.
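The abstract compresses the whole pipeline into a few sentences, so a toy rendering may help. The following is a minimal sketch, not the authors' SMCT: scikit-learn's OneVsRestClassifier stands in for the per-view multi-label classifiers, a hard confidence threshold `lam` (relaxed each round) stands in for the latent weight variables and the alternating optimization, and cross-view agreement stands in for label rectification. All function and parameter names (`self_paced_co_training`, `lam_decay`, ...) are illustrative.

```python
# Hedged sketch of self-paced multi-label co-training.
# Assumptions: Xl1/Xl2 are labeled instances under two views, Y is a binary
# label-indicator matrix, Xu1/Xu2 are the unlabeled pool under the two views.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

def self_paced_co_training(Xl1, Xl2, Y, Xu1, Xu2,
                           rounds=10, lam=0.9, lam_decay=0.05):
    """Train two view-specific multi-label classifiers that exchange
    confident pseudo-labels; `lam` is a self-paced confidence threshold
    relaxed each round so harder instances enter training later."""
    f1 = OneVsRestClassifier(LogisticRegression(max_iter=1000))
    f2 = OneVsRestClassifier(LogisticRegression(max_iter=1000))
    X1, X2, Yt = Xl1.copy(), Xl2.copy(), Y.copy()
    for _ in range(rounds):
        f1.fit(X1, Yt)
        f2.fit(X2, Yt)
        if len(Xu1) == 0:
            break
        # Per-label probabilities on the unlabeled pool, one view each.
        P1 = f1.predict_proba(Xu1)   # shape: (n_unlabeled, n_labels)
        P2 = f2.predict_proba(Xu2)
        # Self-paced selection: an instance is admitted only if its least
        # confident label, in both views, is still above the threshold.
        conf = np.minimum(np.abs(P1 - 0.5), np.abs(P2 - 0.5)).min(axis=1)
        keep = conf >= (lam - 0.5)
        if keep.any():
            # Pseudo-labels averaged across views: a crude stand-in for
            # the paper's label rectification step.
            pseudo = ((P1[keep] + P2[keep]) / 2 > 0.5).astype(int)
            X1 = np.vstack([X1, Xu1[keep]])
            X2 = np.vstack([X2, Xu2[keep]])
            Yt = np.vstack([Yt, pseudo])
            Xu1, Xu2 = Xu1[~keep], Xu2[~keep]
        lam = max(0.5, lam - lam_decay)  # admit harder instances over time
    return f1, f2
```

The decaying threshold mirrors the self-paced "easy examples first" schedule: early rounds pseudo-label only instances both views agree on confidently, which limits the error accumulation the abstract warns about.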
Pages: 269-281
Page count: 13