Co-training Based Attribute Reduction for Partially Labeled Data

Cited by: 1
Authors
Zhang, Wei [1 ,2 ]
Miao, Duoqian [1 ,3 ]
Gao, Can [4 ]
Yue, Xiaodong [5 ]
Affiliations
[1] Tongji Univ, Sch Elect & Informat Engn, Shanghai 201804, Peoples R China
[2] Shanghai Univ Elect Power, Sch Comp Sci & Technol, Shanghai 200090, Peoples R China
[3] Tongji Univ, Minist Educ, Key Lab Embedded Syst & Serv Comp, Shanghai 201804, Peoples R China
[4] Zoomlion Heavy Ind Sci & Technol Dev Co Ltd, Changsha 410013, Peoples R China
[5] Shanghai Univ, Sch Engn & Comp Sci, Shanghai 200444, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Rough Sets; Co-training; Incremental Attribute Reduction; Partially Labeled Data; Semi-supervised learning;
DOI
10.1007/978-3-319-11740-9_8
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Rough set theory is an effective supervised learning model for labeled data. However, practical problems often involve both labeled and unlabeled data. In this paper, the problem of attribute reduction for partially labeled data is studied. A novel semi-supervised attribute reduction algorithm based on co-training is proposed, which capitalizes on unlabeled data to improve the quality of the attribute reducts computed from the few labeled data. The algorithm first computes two diverse reducts of the labeled data, employs them to train its two base classifiers, and then co-trains the base classifiers iteratively. In each round, the base classifiers learn from each other on the unlabeled data and enlarge the labeled data, so that reducts of better quality can be computed from the enlarged labeled data and used to construct base classifiers of higher performance. Experimental results on UCI data sets show that the proposed algorithm can improve the quality of reducts.
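The co-training loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy 1-nearest-neighbour base classifier, the agreement-based pseudo-labeling rule, and all function names are assumptions; the paper's actual base classifiers are built from rough-set attribute reducts and its confidence criterion may differ.

```python
import math

def one_nn_predict(train, query):
    """Toy base classifier: 1-nearest-neighbour over (features, label) pairs.
    Stands in for the reduct-based classifiers of the paper."""
    best = min(train, key=lambda sample: math.dist(sample[0], query))
    return best[1]

def project(sample, attrs):
    """Restrict a full-attribute sample to the attribute subset (reduct)."""
    return tuple(sample[i] for i in attrs)

def co_train(labeled, unlabeled, reduct1, reduct2, rounds=3):
    """Co-training sketch: each round, both classifiers predict labels for
    the unlabeled samples on their own attribute subsets; samples on which
    they agree are pseudo-labeled and moved into the labeled set, enlarging
    the data from which better reducts/classifiers could then be derived."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        if not unlabeled:
            break
        # Each base classifier sees the labeled data through its own reduct.
        view1 = [(project(x, reduct1), y) for x, y in labeled]
        view2 = [(project(x, reduct2), y) for x, y in labeled]
        still_unlabeled = []
        for x in unlabeled:
            y1 = one_nn_predict(view1, project(x, reduct1))
            y2 = one_nn_predict(view2, project(x, reduct2))
            if y1 == y2:                 # classifiers agree: pseudo-label
                labeled.append((x, y1))
            else:                        # disagree: keep for a later round
                still_unlabeled.append(x)
        unlabeled = still_unlabeled
    return labeled
```

In the paper itself the two views come from two diverse reducts of the same labeled data, and after each enlargement the reducts are recomputed; the sketch fixes the reducts up front for brevity.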
Pages: 77-88 (12 pages)