HILL: A Hallucination Identifier for Large Language Models

Cited by: 2
Authors
Leiser, Florian [1 ]
Eckhardt, Sven [2 ]
Leuthe, Valentin [1 ]
Knaeble, Merlin [3 ]
Maedche, Alexander [3 ]
Schwabe, Gerhard [2 ]
Sunyaev, Ali [1 ]
Affiliations
[1] Karlsruhe Inst Technol, Inst Appl Informat & Formal Descript Methods, Karlsruhe, Germany
[2] Univ Zurich, Dept Informat, Zurich, Switzerland
[3] Karlsruhe Inst Technol, Human Ctr Syst Lab, Karlsruhe, Germany
Keywords
ChatGPT; Large Language Models; Artificial Hallucinations; Wizard of Oz; Artifact Development; AUTOMATION; WIZARD; OZ;
DOI
10.1145/3613904.3642428
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Large language models (LLMs) are prone to hallucinations, i.e., nonsensical, unfaithful, and undesirable text. Users tend to overrely on LLMs and the corresponding hallucinations, which can lead to misinterpretations and errors. To tackle the problem of overreliance, we propose HILL, the "Hallucination Identifier for Large Language Models". First, we identified design features for HILL in a Wizard of Oz study with nine participants. Subsequently, we implemented HILL based on the identified design features and evaluated HILL's interface design by surveying 17 participants. Further, we investigated HILL's ability to identify hallucinations using an existing question-answering dataset and five user interviews. We find that HILL can correctly identify and highlight hallucinations in LLM responses, enabling users to handle LLM responses with more caution. With that, we propose an easy-to-implement adaptation to existing LLMs and demonstrate the relevance of user-centered design of AI artifacts.
Pages: 13