Calibrating Structured Output Predictors for Natural Language Processing

Cited by: 0
Authors
Jagannatha, Abhyuday [1 ]
Yu, Hong [1 ,2 ]
Affiliations
[1] Univ Massachusetts, Coll Informat & Comp Sci, Amherst, MA 01003 USA
[2] Univ Massachusetts Lowell, Dept Comp Sci, Lowell, MA USA
Source
58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2020) | 2020年
Funding
US National Institutes of Health (NIH);
Keywords
DOI
None available
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We address the problem of calibrating prediction confidence for output entities of interest in natural language processing (NLP) applications. It is important that NLP applications such as named entity recognition and question answering produce calibrated confidence scores for their predictions, especially if the applications are to be deployed in a safety-critical domain such as healthcare. However, the output space of such structured prediction models is often too large to adapt binary or multi-class calibration methods directly. In this study, we propose a general calibration scheme for output entities of interest in neural-network-based structured prediction models. Our proposed method can be used with any binary-class calibration scheme and any neural network model. Additionally, we show that our calibration method can also serve as an uncertainty-aware, entity-specific decoding step that improves the performance of the underlying model at no additional training cost and with no extra data requirements. Our method outperforms current calibration techniques for named entity recognition, part-of-speech tagging, and question answering, and our decoding step improves model performance across several tasks and benchmark datasets. Our method also improves calibration and model performance in out-of-domain test scenarios.
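The abstract's core idea, reducing calibration of a structured predictor to a binary calibration problem over predicted output entities (is this predicted entity correct or not?), can be illustrated with a minimal sketch. This is not the authors' implementation: it uses histogram binning, one standard binary calibration method, with illustrative function names and toy data.

```python
def fit_histogram_binning(scores, correct, n_bins=10):
    """Fit histogram binning on held-out data: each bin maps to the
    empirical accuracy of predictions whose confidence fell in it."""
    bins = [[] for _ in range(n_bins)]
    for s, c in zip(scores, correct):
        bins[min(int(s * n_bins), n_bins - 1)].append(c)
    # Empty bins fall back to the bin midpoint (identity calibration).
    return [sum(b) / len(b) if b else (i + 0.5) / n_bins
            for i, b in enumerate(bins)]

def calibrate(score, bin_values):
    """Replace a raw confidence with its bin's empirical accuracy."""
    n_bins = len(bin_values)
    return bin_values[min(int(score * n_bins), n_bins - 1)]

# Toy held-out data: an overconfident NER model whose ~0.95-confidence
# entity predictions are correct only half the time.
raw_scores = [0.95, 0.92, 0.97, 0.91, 0.96, 0.94]
is_correct = [1, 0, 1, 0, 1, 0]
bin_values = fit_histogram_binning(raw_scores, is_correct)
print(calibrate(0.93, bin_values))  # → 0.5
```

In the paper's setting, the binary labels would come from matching each predicted entity span against the gold annotation on a held-out set; any binary calibrator (Platt scaling, isotonic regression) could replace the binning step here.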
Pages: 2078 - 2092
Page count: 15