Self-Evaluation Improves Selective Generation in Large Language Models

Cited by: 0
Authors
Ren, Jie [1 ]
Zhao, Yao [1 ]
Vu, Tu [2 ]
Liu, Peter J. [1 ]
Lakshminarayanan, Balaji [1 ]
Affiliations
[1] Google DeepMind, London, England
[2] Google Research, Mountain View, CA, USA
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Safe deployment of large language models (LLMs) may benefit from a reliable method for assessing their generated content to determine when to abstain or to selectively generate. While likelihood-based metrics such as perplexity are widely employed, recent research has demonstrated the limitations of using sequence-level probability estimates given by LLMs as reliable indicators of generation quality. Conversely, LLMs have demonstrated strong calibration at the token level, particularly when choosing correct answers in multiple-choice questions or evaluating true/false statements. In this work, we reformulate open-ended generation tasks into token-level prediction tasks and leverage LLMs' superior calibration at the token level. We instruct an LLM to self-evaluate its answers, employing either a multi-way comparison or a point-wise evaluation approach, optionally including a "None of the above" choice so the model can express its uncertainty explicitly. We benchmark a range of self-evaluation-based scoring methods and evaluate their performance in selective generation on TruthfulQA and TL;DR. Through experiments with PaLM-2 and GPT-3, we demonstrate that self-evaluation-based scores not only improve accuracy, but also correlate better with the overall quality of generated content.
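To make the scoring recipe concrete, below is a minimal Python sketch of the two self-evaluation scores the abstract describes: a multi-way (multiple-choice) comparison with an explicit "None of the above" choice, and a point-wise true/false evaluation. The prompt wording and the `token_logprobs` callable (a stand-in for any LLM API that returns next-token log-probabilities) are illustrative assumptions, not the paper's exact implementation.

```python
import math
from typing import Callable, Dict, Sequence

# Stand-in type (an assumption, not the paper's API): given a prompt, return
# log-probabilities for candidate next tokens, e.g. {"A": -0.11, "B": -2.3}.
TokenLogprobFn = Callable[[str], Dict[str, float]]

def build_multi_way_prompt(question: str, options: Sequence[str]) -> str:
    """Format a multiple-choice self-evaluation prompt, appending a
    'None of the above' choice so the model can express uncertainty."""
    choices = list(options) + ["None of the above"]
    letters = [chr(ord("A") + i) for i in range(len(choices))]
    lines = [f"Question: {question}", "Choices:"]
    lines += [f"({letter}) {choice}" for letter, choice in zip(letters, choices)]
    lines.append("Which choice is correct? Answer with a single letter.")
    lines.append("Answer: (")
    return "\n".join(lines)

def multi_way_score(question: str, candidate: str, distractors: Sequence[str],
                    token_logprobs: TokenLogprobFn) -> float:
    """Score `candidate` by the token-level probability the model assigns to
    its option letter; the candidate is placed at (A) for simplicity."""
    prompt = build_multi_way_prompt(question, [candidate] + list(distractors))
    next_token_logprobs = token_logprobs(prompt)
    return math.exp(next_token_logprobs.get("A", -math.inf))

def pointwise_score(question: str, candidate: str,
                    token_logprobs: TokenLogprobFn) -> float:
    """Point-wise variant: ask whether the proposed answer is true and take
    the token-level probability of 'A' (True)."""
    prompt = (f"Question: {question}\n"
              f"Proposed answer: {candidate}\n"
              "Is the proposed answer true?\n(A) True\n(B) False\nAnswer: (")
    return math.exp(token_logprobs(prompt).get("A", -math.inf))
```

A selective-generation policy would then answer only when the score exceeds a chosen threshold and abstain otherwise; a real implementation would also likely randomize the option order to mitigate position bias rather than fixing the candidate at (A) as this sketch does.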
Pages: 49-64
Number of pages: 16