A General Approach to Website Question Answering with Large Language Models

Cited by: 0

Authors
Ding, Yilang [1 ]
Nie, Jiawei [1 ]
Wu, Di [1 ]
Liu, Chang [1 ]
Affiliation
[1] Emory Univ, Atlanta, GA 30322 USA
DOI
10.1109/SOUTHEASTCON52093.2024.10500166
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Language Models (LMs), in their most basic form, behave like any other machine learning model: they produce interpolations and extrapolations based on their training distribution. Although recent models such as OpenAI's GPT-4 have demonstrated unprecedented capabilities in absorbing the copious volumes of information in their training data, their ability to consistently reproduce factual information remains unproven. Additionally, LMs on their own cannot keep up to date with real-world data without frequent fine-tuning. These drawbacks render base LMs unserviceable in Question Answering scenarios where they must respond to queries about volatile information. Retrieval Augmented Generation (RAG) and Tool Learning [1] were proposed as solutions to these problems, and with the development and adoption of associated libraries, the aforementioned problems can be greatly mitigated. In this paper, we propose a general approach to website Question Answering that integrates the zero-shot decision-making capabilities of LMs with the RAG capabilities of LangChain, and that can be kept up to date with dynamic information without the need for constant fine-tuning.
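The RAG workflow the abstract describes (retrieve relevant website content, then augment the LM prompt with it) can be illustrated with a minimal, self-contained sketch. Everything below is an assumption for illustration, not the paper's implementation: the fixed-size word chunking, the toy bag-of-words scoring, and the helper names (`chunk`, `retrieve`, `build_prompt`) are hypothetical, whereas the actual system uses LangChain with neural embeddings.

```python
import math
from collections import Counter

def chunk(text, size=50):
    """Split scraped page text into fixed-size word chunks (the 'documents')."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Toy bag-of-words 'embedding'; real RAG pipelines use neural embeddings."""
    return Counter(w.strip(".,?!").lower() for w in text.split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Rank chunks by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query, chunks, k=2):
    """Assemble the augmented prompt that would be sent to the LM."""
    context = "\n".join(retrieve(query, chunks, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Because retrieval happens at query time over freshly scraped page text, the answer stays current without retraining the model, which is the core advantage over a fine-tuned base LM.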
Pages: 894-896
Page count: 3
Related Papers
50 records total
  • [41] Prompting Large Language Models with Knowledge-Injection for Knowledge-Based Visual Question Answering
    Hu, Zhongjian
    Yang, Peng
    Liu, Fengyuan
    Meng, Yuan
    Liu, Xingyu
    BIG DATA MINING AND ANALYTICS, 2024, 7 (03): 843 - 857
  • [42] Interpretable Long-Form Legal Question Answering with Retrieval-Augmented Large Language Models
    Louis, Antoine
    van Dijck, Gijs
    Spanakis, Gerasimos
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2024, 38 (20): 22266 - 22275
  • [43] AutoTQA: Towards Autonomous Tabular Question Answering through Multi-Agent Large Language Models
    Zhu, Jun-Peng
    Cai, Peng
    Xu, Kai
    Li, Li
    Sun, Yishen
    Zhou, Shuai
    Su, Haihuang
    Tang, Liu
    Liu, Qi
    PROCEEDINGS OF THE VLDB ENDOWMENT, 2024, 17 (12): 3920 - 3933
  • [44] Leveraging Retrieval-Augmented Generation for Reliable Medical Question Answering Using Large Language Models
    Kharitonova, Ksenia
    Perez-Fernandez, David
    Gutierrez-Hernando, Javier
    Gutierrez-Fandino, Asier
    Callejas, Zoraida
    Griol, David
    HYBRID ARTIFICIAL INTELLIGENT SYSTEMS, PT II, HAIS 2024, 2025, 14858 : 141 - 153
  • [45] On the Question of Authorship in Large Language Models
    Soos, Carlin
    Haroutunian, Levon
    KNOWLEDGE ORGANIZATION, 2024, 51 (02): 83 - 95
  • [46] Retrieving-to-Answer: Zero-Shot Video Question Answering with Frozen Large Language Models
    Pan, Junting
    Lin, Ziyi
    Ge, Yuying
    Zhu, Xiatian
    Zhang, Renrui
    Wang, Yi
    Qiao, Yu
    Li, Hongsheng
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 272 - 283
  • [47] QuASIt: A Cognitive Inspired Approach to Question Answering for the Italian Language
    Pipitone, Arianna
    Tirone, Giuseppe
    Pirrone, Roberto
    AI*IA 2016: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2016, 10037 : 464 - 476
  • [48] ISD-QA: Iterative Distillation of Commonsense Knowledge from General Language Models for Unsupervised Question Answering
    Ramamurthy, Priyadharsini
    Aakur, Sathyanarayanan N.
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 1229 - 1235
  • [50] How Can We Know When Language Models Know? On the Calibration of Language Models for Question Answering
    Jiang, Zhengbao
    Araki, Jun
    Ding, Haibo
    Neubig, Graham
    TRANSACTIONS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 2021, 9 (09) : 962 - 977