Reranking and Self-Training for Parser Adaptation

Citations: 0
Authors
McClosky, David [1 ]
Charniak, Eugene [1 ]
Johnson, Mark [1 ]
Affiliations
[1] Brown Univ, BLLIP, Providence, RI 02912 USA
Keywords
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Statistical parsers trained and tested on the Penn Wall Street Journal (WSJ) treebank have shown vast improvements over the last 10 years. Much of this improvement, however, is based upon an ever-increasing number of features to be trained on (typically) the WSJ treebank data. This has led to concern that such parsers may be too finely tuned to this corpus at the expense of portability to other genres. Such worries have merit. The standard "Charniak parser" checks in at a labeled precision-recall f-measure of 89.7% on the Penn WSJ test set, but only 82.9% on the test set from the Brown treebank corpus. This paper should allay these fears. In particular, we show that the reranking parser described in Charniak and Johnson (2005) improves performance of the parser on Brown to 85.2%. Furthermore, use of the self-training techniques described in McClosky et al. (2006) raises this to 87.8% (an error reduction of 28%), again without any use of labeled Brown data. This is remarkable since training the parser and reranker on labeled Brown data achieves only 88.4%.
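The self-training procedure the abstract refers to (train on labeled data, automatically label unlabeled data with the trained model, then retrain on the union) can be illustrated with a minimal sketch. This is not the paper's implementation: a toy unigram classifier stands in for the Charniak parser, and all names below are illustrative.

```python
# Minimal sketch of a self-training loop, with a toy unigram
# classifier standing in for the statistical parser. Illustrative only.
from collections import Counter, defaultdict

def train(examples):
    """Count word occurrences per label (toy stand-in for parser training)."""
    counts = defaultdict(Counter)
    for words, label in examples:
        counts[label].update(words)
    return counts

def predict(model, words):
    """Pick the label whose training counts best cover the input words."""
    return max(model, key=lambda label: sum(model[label][w] for w in words))

def self_train(labeled, unlabeled):
    """Train, auto-label the unlabeled pool, then retrain on the union."""
    model = train(labeled)
    auto = [(words, predict(model, words)) for words in unlabeled]
    return train(labeled + auto)

labeled = [(["stocks", "rose"], "finance"), (["dog", "barked"], "fiction")]
unlabeled = [["stocks", "fell"], ["cat", "dog"]]
model = self_train(labeled, unlabeled)
print(predict(model, ["stocks"]))  # → finance
```

The key property exploited in the paper is that the automatically labeled data (here, the `unlabeled` pool) improves the retrained model on a new domain without requiring any hand-labeled data from that domain.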
Pages: 337 - 344
Page count: 8
Related Papers
50 results in total
  • [21] Unsupervised Video Domain Adaptation with Masked Pre-Training and Collaborative Self-Training
    Reddy, Arun
    Paul, William
    Rivera, Corban
    Shah, Ketul
    de Melo, Celso M.
    Chellappa, Rama
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 18919 - 18929
  • [22] Self-Training Based Adversarial Domain Adaptation for Radio Signal Recognition
    Liang, Zhi
    Xie, Jian
    Yang, Xin
    Tao, Mingliang
    Wang, Ling
    IEEE COMMUNICATIONS LETTERS, 2022, 26 (11) : 2646 - 2650
  • [23] Machine Reading Comprehension Framework Based on Self-Training for Domain Adaptation
    Lee, Hyeon-Gu
    Jang, Youngjin
    Kim, Harksoo
    IEEE Access, 2021, 9 : 21279 - 21285
  • [24] Self-training ABS
    Akhmetshin, A.M.
    Avtomobil'naya Promyshlennost, 2001, (06): : 34 - 36
  • [25] Automatic adaptation of object detectors to new domains using self-training
    RoyChowdhury, Aruni
    Chakrabarty, Prithvijit
    Singh, Ashish
    Jin, SouYoung
    Jiang, Huaizu
    Cao, Liangliang
    Learned-Miller, Erik
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 780 - 790
  • [26] Self-training: A survey
    Amini, Massih-Reza
    Feofanov, Vasilii
    Pauletto, Loic
    Hadjadj, Lies
    Devijver, Emilie
    Maximov, Yury
    NEUROCOMPUTING, 2025, 616
  • [27] Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation
    Marsden, Robert A.
    Bartler, Alexander
    Doebler, Mario
    Yang, Bin
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [28] Energy-based Self-Training and Normalization for Unsupervised Domain Adaptation
    Herath, Samitha
    Fernando, Basura
    Abbasnejad, Ehsan
    Hayat, Munawar
    Khadivi, Shahram
    Harandi, Mehrtash
    Rezatofighi, Hamid
    Haffari, Gholamreza
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 11619 - 11628
  • [30] Unsupervised Adaptation of Question Answering Systems via Generative Self-training
    Rennie, Steven J.
    Marcheret, Etienne
    Mallinar, Neil
    Nahamoo, David
    Goel, Vaibhava
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 1148 - 1157