Hallucination Reduction and Optimization for Large Language Model-Based Autonomous Driving

Cited: 0
Authors
Wang, Jue [1 ]
Affiliations
[1] Johns Hopkins Univ, Whiting Sch Engn, Baltimore, MD 21218 USA
Source
SYMMETRY-BASEL | 2024, Vol. 16, Issue 09
Keywords
autonomous driving; large language models; hallucination reduction;
DOI
10.3390/sym16091196
CLC Classification Number
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Science];
Subject Classification Number
07 ; 0710 ; 09 ;
Abstract
Large language models (LLMs) are widely integrated into autonomous driving systems to enhance operational intelligence and responsiveness and to improve the overall performance of self-driving vehicles. Despite these advances, LLMs still struggle with hallucinations, in which the model misinterprets the environment or generates spurious content for downstream use, and with heavy computational overhead that confines them to non-real-time operation. Both problems must be solved to make autonomous driving as safe and efficient as possible. Motivated by these limitations, this work focuses on the symmetrical trade-off between hallucination reduction and computational optimization, leading to a framework that addresses the two jointly. The framework aims to establish a symmetric mapping between the real and virtual worlds, minimizing hallucinations while keeping computational resource consumption reasonable. For autonomous driving tasks, we use multimodal LLMs that combine a Vision Transformer (ViT) image encoder with a GPT-2 decoder, with reference responses generated by OpenAI's GPT-4. Our hallucination reduction and optimization framework leverages iterative refinement loops, reinforcement learning from human feedback (RLHF), and symmetric performance metrics, e.g., BLEU, ROUGE, and CIDEr similarity scores between machine-generated answers and human reference answers. This ensures that gains in model accuracy are not achieved at the cost of increased computational overhead. Experimental results show twofold gains: a 30% reduction in decision-making error rate and a 25% improvement in processing efficiency across diverse driving scenarios. This symmetrical approach not only reduces hallucination but also better aligns the virtual and real-world representations.
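The abstract scores machine-generated answers against human references with n-gram similarity metrics such as BLEU. As an illustration only (not the paper's implementation), a minimal sentence-level BLEU can be sketched in pure Python; the whitespace tokenization and add-one smoothing used here are assumptions of this sketch, not details from the paper:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram precisions
    multiplied by a brevity penalty that punishes short candidates."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = ngrams(cand, n)
        ref_counts = ngrams(ref, n)
        # clip each candidate n-gram count by its count in the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        # add-one smoothing so one empty n-gram order does not zero the score
        precisions.append((overlap + 1) / (total + 1))
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

Libraries such as NLTK or sacrebleu provide corpus-level BLEU with more careful smoothing; this sketch only shows the clipped-precision and brevity-penalty structure that BLEU-style metrics are built from.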
Pages: 20
Related Papers
50 items in total
  • [31] InstOptima: Evolutionary Multi-objective Instruction Optimization via Large Language Model-based Instruction Operators
    Yang, Heng
    Li, Ke
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 13593 - 13602
  • [32] Model-based tracking for autonomous arrays
    Porter, MB
    Hursky, P
    Tiemann, CO
    Stevenson, M
    OCEANS 2001 MTS/IEEE: AN OCEAN ODYSSEY, VOLS 1-4, CONFERENCE PROCEEDINGS, 2001, : 786 - 792
  • [33] Large Language Model-based Test Case Generation for GP Agents
    Jorgensen, Steven
    Nadizar, Giorgia
    Pietropolli, Gloria
    Manzoni, Luca
    Medvet, Eric
    O'Reilly, Una-May
    Hemberg, Erik
    PROCEEDINGS OF THE 2024 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE, GECCO 2024, 2024, : 914 - 923
  • [34] Characterizing the Confidence of Large Language Model-Based Automatic Evaluation Metrics
    Stureborg, Rickard
    Alikaniotis, Dimitris
    Suhara, Yoshi
    PROCEEDINGS OF THE 18TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 2: SHORT PAPERS, 2024, : 76 - 89
  • [35] LUNA: A Model-Based Universal Analysis Framework for Large Language Models
    Song, Da
    Xie, Xuan
    Song, Jiayang
    Zhu, Derui
    Huang, Yuheng
    Juefei-Xu, Felix
    Ma, Lei
    IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, 2024, 50 (07) : 1921 - 1948
  • [36] KNOWLEDGE MANAGEMENT TOOL: A LARGE LANGUAGE MODEL-BASED SEARCH ENGINE
    Gao, W.
    Merrill, C.
    Texeira, B. C.
    Weissmueller, N.
    Gao, C.
    Bao, Y.
    Anstatt, D.
    VALUE IN HEALTH, 2024, 27 (06) : S387 - S387
  • [37] Improving Text Classification with Large Language Model-Based Data Augmentation
    Zhao, Huanhuan
    Chen, Haihua
    Ruggles, Thomas A.
    Feng, Yunhe
    Singh, Debjani
    Yoon, Hong-Jun
    ELECTRONICS, 2024, 13 (13)
  • [38] Large Language Model-Based Responses to Patients' In-Basket Messages
    Small, William R.
    Wiesenfeld, Batia
    Brandfield-Harvey, Beatrix
    Jonassen, Zoe
    Mandal, Soumik
    Stevens, Elizabeth R.
    Major, Vincent J.
    Lostraglio, Erin
    Szerencsy, Adam
    Jones, Simon
    Aphinyanaphongs, Yindalon
    Johnson, Stephen B.
    Nov, Oded
    Mann, Devin
    JAMA NETWORK OPEN, 2024, 7 (07)
  • [39] Sequential Model-Based Optimization for Natural Language Processing Data Pipeline Selection and Optimization
    Arntong, Piyadanai
    Pongpech, Worapol Alex
    INTELLIGENT INFORMATION AND DATABASE SYSTEMS, ACIIDS 2021, 2021, 12672 : 303 - 313
  • [40] A large language model-based building operation and maintenance information query
    Li, Yan
    Ji, Minxuan
    Chen, Junyu
    Wei, Xin
    Gu, Xiaojun
    Tang, Juemin
    ENERGY AND BUILDINGS, 2025, 334