Co-training Based Attribute Reduction for Partially Labeled Data

Cited by: 1
Authors
Zhang, Wei [1 ,2 ]
Miao, Duoqian [1 ,3 ]
Gao, Can [4 ]
Yue, Xiaodong [5 ]
Affiliations
[1] Tongji Univ, Sch Elect & Informat Engn, Shanghai 201804, Peoples R China
[2] Shanghai Univ Elect Power, Sch Comp Sci & Technol, Shanghai 200090, Peoples R China
[3] Tongji Univ, Minist Educ, Key Lab Embedded Syst & Serv Comp, Shanghai 201804, Peoples R China
[4] Zoomlion Heavy Ind Sci & Technol Dev Co Ltd, Changsha 410013, Peoples R China
[5] Shanghai Univ, Sch Engn & Comp Sci, Shanghai 200444, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Rough Sets; Co-training; Incremental Attribute Reduction; Partially Labeled Data; Semi-supervised Learning
DOI
10.1007/978-3-319-11740-9_8
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Rough set theory is an effective supervised learning model for labeled data. In practice, however, problems often involve both labeled and unlabeled data. This paper studies attribute reduction for partially labeled data and proposes a novel semi-supervised attribute reduction algorithm based on co-training, which exploits the unlabeled data to improve the quality of attribute reducts derived from the few labeled data. The algorithm first computes two diverse reducts of the labeled data and uses them to train two base classifiers, which are then co-trained iteratively. In each round, the base classifiers label unlabeled data for each other, enlarging the labeled set, so that higher-quality reducts can be computed from the enlarged labeled data and used to construct base classifiers of higher performance. Experimental results on UCI data sets show that the proposed algorithm improves the quality of reducts.
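The abstract describes the co-training reduction loop only at a high level. Below is a minimal Python sketch of that loop, assuming NumPy and scikit-learn; the compute_reduct stand-in (a random attribute subset here, where the paper would compute a rough-set reduct of the decision table), the k-NN base classifiers, and the max-probability confidence rule for selecting pseudo-labeled examples are all illustrative assumptions, not the authors' implementation.

```python
# Sketch of co-training based attribute reduction for partially labeled
# data. compute_reduct is a hypothetical stand-in for a rough-set
# attribute reduction routine; the paper's actual reduct computation,
# confidence measure, and stopping condition may differ.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def compute_reduct(X, y, seed):
    """Stand-in reduct: a random half of the attributes (illustrative only)."""
    rng = np.random.default_rng(seed)
    n_attrs = X.shape[1]
    return sorted(rng.choice(n_attrs, size=max(1, n_attrs // 2), replace=False))

def co_train_reduction(X_lab, y_lab, X_unlab, rounds=5, per_round=5):
    labeled_X, labeled_y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    for r in range(rounds):
        # 1. Two diverse reducts of the current labeled data.
        red1 = compute_reduct(labeled_X, labeled_y, seed=2 * r)
        red2 = compute_reduct(labeled_X, labeled_y, seed=2 * r + 1)
        # 2. One base classifier per reduct (k-NN is an assumed choice).
        clf1 = KNeighborsClassifier(n_neighbors=3).fit(labeled_X[:, red1], labeled_y)
        clf2 = KNeighborsClassifier(n_neighbors=3).fit(labeled_X[:, red2], labeled_y)
        # 3. Each classifier pseudo-labels its most confident unlabeled
        #    examples, enlarging the labeled set for both.
        for clf, red in ((clf1, red1), (clf2, red2)):
            if len(pool) == 0:
                break
            proba = clf.predict_proba(pool[:, red])
            top = np.argsort(proba.max(axis=1))[::-1][:per_round]
            pseudo = clf.classes_[proba[top].argmax(axis=1)]
            labeled_X = np.vstack([labeled_X, pool[top]])
            labeled_y = np.concatenate([labeled_y, pseudo])
            pool = np.delete(pool, top, axis=0)
    # 4. A final reduct computed from the enlarged labeled data.
    return compute_reduct(labeled_X, labeled_y, seed=999)

# Toy usage: 10 labeled and 50 unlabeled rows over 8 attributes.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
y = (X[:, 0] > 0).astype(int)
print(co_train_reduction(X[:10], y[:10], X[10:]))
```

Selecting only the most confident pseudo-labels in each round limits the noise each classifier feeds to the other, which is the usual rationale in co-training; the paper's actual selection criterion may differ.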
Pages: 77-88
Number of pages: 12
Related Papers
50 items in total
  • [21] When does Co-training Work in Real Data?
    Ling, Charles X.
    Du, Jun
    Zhou, Zhi-Hua
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PROCEEDINGS, 2009, 5476 : 596 - +
  • [22] Automatic Image Annotation based on Co-Training
    Li, Zhixin
    Lin, Lan
    Zhang, Canlong
    Ma, Huifang
    Zhao, Weizhong
2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019
  • [23] Edge Intelligence based Co-training of CNN
    Xie, Feiyi
    Xu, Aidong
    Jiang, Yixin
    Chen, Songlin
    Liao, Runfa
    Wen, Hong
    14TH INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND EDUCATION (ICCSE 2019), 2019, : 830 - 834
  • [24] A Spectral Unmixing Method Based on Co-Training
    Pang, Qingyu
    Yu, Jing
    Sun, Weidong
    IMAGE AND GRAPHICS (ICIG 2017), PT II, 2017, 10667 : 570 - 579
  • [25] Bayesian Co-Training
    Yu, Shipeng
    Krishnapuram, Balaji
    Rosales, Romer
    Rao, R. Bharat
    JOURNAL OF MACHINE LEARNING RESEARCH, 2011, 12 : 2649 - 2680
  • [26] ROBUST CO-TRAINING
    Sun, Shiliang
    Jin, Feng
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2011, 25 (07) : 1113 - 1126
  • [27] Robust Medical Image Classification From Noisy Labeled Data With Global and Local Representation Guided Co-Training
    Xue, Cheng
    Yu, Lequan
    Chen, Pengfei
    Dou, Qi
    Heng, Pheng-Ann
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2022, 41 (06) : 1371 - 1382
  • [28] Attribute Selection for Partially Labeled Categorical Data By Rough Set Approach
    Dai, Jianhua
    Hu, Qinghua
    Zhang, Jinghong
    Hu, Hu
    Zheng, Nenggan
    IEEE TRANSACTIONS ON CYBERNETICS, 2017, 47 (09) : 2460 - 2471
  • [29] Supervised training of adaptive systems with partially labeled data
    Erdogmus, D
    Rao, YN
    Principe, JC
    2005 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOLS 1-5: SPEECH PROCESSING, 2005, : 321 - 324
  • [30] Co-training for Policy Learning
    Song, Jialin
    Lanka, Ravi
    Yue, Yisong
    Ono, Masahiro
    35TH UNCERTAINTY IN ARTIFICIAL INTELLIGENCE CONFERENCE (UAI 2019), 2020, 115 : 1191 - 1201