Semi-supervised Relation Extraction via Incremental Meta Self-Training

Cited: 0
Authors
Hu, Xuming [1 ]
Zhang, Chenwei [2 ]
Ma, Fukun [1 ]
Liu, Chenyao [1 ]
Wen, Lijie [1 ]
Yu, Philip S. [1 ,3 ]
Affiliations
[1] Tsinghua Univ, Beijing, Peoples R China
[2] Amazon, Bellevue, WA 98004 USA
[3] Univ Illinois, Chicago, IL USA
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
To reduce the human effort required to obtain large-scale annotations, Semi-Supervised Relation Extraction methods aim to leverage unlabeled data in addition to learning from limited samples. Existing self-training methods suffer from the gradual drift problem, where noisy pseudo labels on unlabeled data are incorporated during training. To alleviate the noise in pseudo labels, we propose a method called MetaSRE, in which a Relation Label Generation Network produces quality assessments of pseudo labels by (meta) learning from the successful and failed attempts of the Relation Classification Network as an additional meta-objective. To reduce the influence of noisy pseudo labels, MetaSRE adopts a pseudo label selection and exploitation scheme that assesses pseudo label quality on unlabeled samples and exploits only high-quality pseudo labels in a self-training fashion to incrementally augment the labeled samples, improving both robustness and accuracy. Experimental results on two public datasets demonstrate the effectiveness of the proposed approach. Source code is available(1).
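For illustration, below is a minimal sketch of the incremental self-training loop described in the abstract, assuming a generic feature-vector setting. The classifier, the confidence-based quality score, the quality_threshold parameter, and the function name self_train are hypothetical stand-ins, not the authors' MetaSRE implementation; in particular, MetaSRE meta-learns a separate Relation Label Generation Network to assess pseudo label quality rather than relying on raw classifier confidence as done here.

# Illustrative sketch only: a simple incremental self-training loop with
# pseudo-label quality filtering. Names and thresholds are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_labeled, y_labeled, X_unlabeled, rounds=5, quality_threshold=0.9):
    """Incrementally augment the labeled set with high-quality pseudo labels."""
    X_aug, y_aug = X_labeled.copy(), y_labeled.copy()
    remaining = X_unlabeled.copy()
    for _ in range(rounds):
        if len(remaining) == 0:
            break
        # Stand-in for the Relation Classification Network: predicts pseudo labels.
        clf = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
        probs = clf.predict_proba(remaining)
        pseudo = probs.argmax(axis=1)
        # Stand-in for pseudo-label quality assessment: plain model confidence.
        # MetaSRE instead learns this assessment with a meta-objective.
        quality = probs.max(axis=1)
        keep = quality >= quality_threshold
        if not keep.any():
            break
        # Exploit only high-quality pseudo labels; defer the rest to later rounds.
        X_aug = np.vstack([X_aug, remaining[keep]])
        y_aug = np.concatenate([y_aug, clf.classes_[pseudo[keep]]])
        remaining = remaining[~keep]
    return LogisticRegression(max_iter=1000).fit(X_aug, y_aug)

The design point mirrored here is that only pseudo labels judged to be high quality are added to the labeled set in each round, while the remaining unlabeled samples are deferred to later rounds, which limits the gradual drift caused by noisy pseudo labels.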
Pages: 487 - 496
Page count: 10