Evaluating the Adaptability of Large Language Models for Knowledge-aware Question and Answering

Cited by: 0
Authors
Thakkar, Jay [1 ]
Kolekar, Suresh [1 ]
Gite, Shilpa [1 ,2 ]
Pradhan, Biswajeet [3 ]
Alamri, Abdullah [4 ]
Affiliations
[1] Symbiosis Int Deemed Univ, Symbiosis Ctr Appl AI SCAAI, Pune 412115, India
[2] Symbiosis Int Deemed Univ, Symbiosis Inst Technol, Artificial Intelligence & Machine Learning Dept, Pune 412115, India
[3] Univ Technol Sydney, Fac Engn & Informat Technol, Ctr Adv Modelling & Geospatial Informat Syst CAMGI, Sch Civil & Environm Engn, Sydney, NSW, Australia
[4] King Saud Univ, Coll Sci, Dept Geol & Geophys, Riyadh, Saudi Arabia
Keywords
large language models; abstractive summarization; knowledge-aware summarization; personalized summarization; quality
DOI
10.2478/ijssis-2024-0021
CLC Classification Number
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Subject Classification Number
0808; 0809
Abstract
Large language models (LLMs) have transformed open-domain abstractive summarization, delivering coherent and precise summaries. However, their adaptability to user knowledge levels is largely unexplored. This study investigates LLMs' efficacy in tailoring summaries to user familiarity. We assess various LLM architectures across different familiarity settings using metrics like linguistic complexity and reading grade levels. Findings expose current capabilities and constraints in knowledge-aware summarization, paving the way for personalized systems. We analyze LLM performance across three familiarity levels: none, basic awareness, and complete familiarity. Utilizing established readability metrics, we gauge summary complexity. Results indicate LLMs can adjust summaries to some extent based on user familiarity. Yet, challenges persist in accurately assessing user knowledge and crafting informative, comprehensible summaries. We highlight areas for enhancement, including improved user knowledge modeling and domain-specific integration. This research informs the advancement of adaptive summarization systems, offering insights for future development.
Pages: 20
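The abstract describes gauging summary complexity with established readability metrics and reading grade levels across three familiarity settings. As a minimal illustration (not drawn from the paper), the Python sketch below computes the Flesch-Kincaid Grade Level for candidate summaries at each familiarity level; the example summaries and the vowel-group syllable heuristic are assumptions made for this sketch.

```python
# Illustrative sketch only -- not the authors' evaluation code.
# Flesch-Kincaid Grade Level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
import re


def count_syllables(word: str) -> int:
    """Rough syllable estimate: count contiguous vowel groups (at least 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def flesch_kincaid_grade(text: str) -> float:
    """Grade level of a text; higher values indicate harder reading."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59


# Hypothetical summaries of one topic, conditioned on the three familiarity levels.
summaries = {
    "none": "The heart pumps blood through the body so every part gets oxygen.",
    "basic awareness": "Cardiac output, the product of stroke volume and heart rate, "
                       "determines how well organs are perfused.",
    "complete familiarity": "Frank-Starling dynamics couple end-diastolic volume to "
                            "stroke volume, modulating cardiac output under preload changes.",
}
for level, summary in summaries.items():
    print(f"{level:>20}: grade level {flesch_kincaid_grade(summary):.1f}")
```

In practice, a validated readability implementation and human judgments would accompany such a heuristic; the sketch only shows how grade-level scores can be compared across familiarity-conditioned summaries.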