Self-Evaluation Improves Selective Generation in Large Language Models

Cited by: 0
Authors:
Ren, Jie [1 ]
Zhao, Yao [1 ]
Vu, Tu [2 ]
Liu, Peter J. [1 ]
Lakshminarayanan, Balaji [1 ]
Affiliations:
[1] Google DeepMind, London, England
[2] Google Research, Mountain View, CA, USA
Keywords: (none listed)
DOI: not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Safe deployment of large language models (LLMs) may benefit from a reliable method for assessing their generated content to determine when to abstain or to selectively generate. While likelihood-based metrics such as perplexity are widely employed, recent research has demonstrated the limitations of using sequence-level probability estimates given by LLMs as reliable indicators of generation quality. Conversely, LLMs have demonstrated strong calibration at the token level, particularly when it comes to choosing correct answers in multiple-choice questions or evaluating true/false statements. In this work, we reformulate open-ended generation tasks into token-level prediction tasks and leverage LLMs' superior calibration at the token level. We instruct an LLM to self-evaluate its answers, employing either a multi-way comparison or a point-wise evaluation approach, optionally including a "None of the above" choice to express the model's uncertainty explicitly. We benchmark a range of scoring methods based on self-evaluation and evaluate their performance in selective generation using TruthfulQA and TL;DR. Through experiments with PaLM-2 and GPT-3, we demonstrate that self-evaluation-based scores not only improve accuracy, but also correlate better with the overall quality of generated content.
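To make the multi-way self-evaluation idea from the abstract concrete, below is a minimal, illustrative Python sketch: an open-ended answer is recast as a multiple-choice question whose options are the model's own sampled answers plus "None of the above", and the candidate is scored by the token-level probability of its option letter. The `option_logprobs` callable and the prompt wording are assumptions for illustration, not APIs or prompts defined by the paper.

import math
from typing import Callable, Sequence

def self_eval_score(
    question: str,
    candidate: str,
    distractors: Sequence[str],
    option_logprobs: Callable[[str, Sequence[str]], Sequence[float]],
) -> float:
    """Normalized probability of the candidate under multi-way self-evaluation.

    `option_logprobs(prompt, tokens)` is a hypothetical stand-in for any LLM
    API that returns the next-token log-probability of each token in `tokens`
    given `prompt`; it is an assumption, not something the paper specifies.
    """
    options = [candidate, *distractors, "None of the above"]
    letters = [chr(ord("A") + i) for i in range(len(options))]
    body = "\n".join(f"({letter}) {text}" for letter, text in zip(letters, options))
    prompt = (
        f"Question: {question}\n"
        f"Choose the best answer:\n{body}\n"
        "Answer: ("
    )
    # One forward pass scores every option letter as the next token.
    probs = [math.exp(lp) for lp in option_logprobs(prompt, letters)]
    # The candidate is option A; its share of the probability mass serves as
    # the selective-generation score (abstain when it is below a threshold).
    return probs[0] / sum(probs)

Under this sketch, a system would emit the candidate only when the score clears a chosen threshold and abstain otherwise; the point-wise variant mentioned in the abstract would instead ask a single true/false question about the candidate and use the probability of the "True" token as the score.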
Pages: 49-64 (16 pages)