Optimizing Airline Review Sentiment Analysis: A Comparative Analysis of LLaMA and BERT Models through Fine-Tuning and Few-Shot Learning

Cited by: 0
Authors
Roumeliotis, Konstantinos I. [1]
Tselikas, Nikolaos D. [2]
Nasiopoulos, Dimitrios K. [3]
Affiliations
[1] Univ Peloponnese, Dept Digital Syst, Sparta 23100, Greece
[2] Univ Peloponnese, Dept Informat & Telecommun, Tripoli 22131, Greece
[3] Agr Univ Athens, Sch Appl Econ & Social Sci, Dept Agribusiness & Supply Chain Management, Athens 11855, Greece
Source
CMC-COMPUTERS MATERIALS & CONTINUA | 2025, Vol. 82, No. 02
Keywords
Sentiment classification; review sentiment analysis; user-generated content; domain adaptation; customer satisfaction; LLaMA model; BERT model; airline reviews; LLM classification; fine-tuning; SERVICE QUALITY
DOI
10.32604/cmc.2025.059567
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
In the rapidly evolving landscape of natural language processing (NLP) and sentiment analysis, improving the accuracy and efficiency of sentiment classification models is crucial. This paper investigates the performance of two advanced models, the Large Language Model (LLM) LLaMA and the NLP model BERT, in the context of airline review sentiment analysis. Through fine-tuning, domain adaptation, and the application of few-shot learning, the study addresses the subtleties of sentiment expression in airline-related text data. Employing predictive modeling and comparative analysis, the research evaluates the effectiveness of Large Language Model Meta AI (LLaMA) and Bidirectional Encoder Representations from Transformers (BERT) in capturing sentiment intricacies. Fine-tuning, including domain adaptation, enhances the models' performance in sentiment classification tasks. Additionally, the study explores the potential of few-shot learning to improve model generalization using minimal annotated data for targeted sentiment analysis. By conducting experiments on a diverse airline review dataset, the research quantifies the impact of fine-tuning, domain adaptation, and few-shot learning on model performance. The findings offer valuable insights for industries aiming to predict recommendations and enhance customer satisfaction through a deeper understanding of sentiment in user-generated content (UGC). This research contributes to refining sentiment analysis models, ultimately fostering improved customer satisfaction in the airline industry.
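The record itself contains no code. As a minimal illustration of the few-shot setup the abstract describes (supplying a handful of labeled examples to guide an LLM such as LLaMA), the sketch below assembles a few-shot classification prompt for an airline review. The example reviews, labels, and the `build_prompt` helper are invented for illustration and are not taken from the paper's dataset or methodology.

```python
# Hypothetical few-shot prompt builder for airline-review sentiment
# classification, in the spirit of the approach the abstract describes.
# The labeled examples below are invented placeholders, not paper data.

FEW_SHOT_EXAMPLES = [
    ("The crew was friendly and boarding was quick.", "positive"),
    ("My luggage was lost and nobody at the desk helped.", "negative"),
]

def build_prompt(review: str, examples=FEW_SHOT_EXAMPLES) -> str:
    """Concatenate labeled demonstrations and the target review into
    a single prompt string for an instruction-following LLM."""
    lines = ["Classify the sentiment of each airline review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The unlabeled target review goes last; the model completes the label.
    lines.append(f"Review: {review}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_prompt("The flight was delayed for five hours with no updates.")
print(prompt)
```

In a real pipeline, this prompt would be sent to the model and the generated completion parsed into a label; the paper additionally compares this few-shot route against full fine-tuning of LLaMA and BERT.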
Pages: 2769-2792 (24 pages)
Related Papers
50 records in total
  • [1] Adaptive fine-tuning strategy for few-shot learning
    Zhuang, Xinkai
    Shao, Mingwen
    Gao, Wei
    Yang, Jianxin
    JOURNAL OF ELECTRONIC IMAGING, 2022, 31 (06)
  • [2] RIFF: Learning to Rephrase Inputs for Few-shot Fine-tuning of Language Models
    Najafi, Saeed
    Fyshe, Alona
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 1447 - 1466
  • [3] An Empirical Evaluation of the Zero-Shot, Few-Shot, and Traditional Fine-Tuning Based Pretrained Language Models for Sentiment Analysis in Software Engineering
    Shafikuzzaman, Md
    Islam, Md Rakibul
    Rolli, Alex C.
    Akhter, Sharmin
    Seliya, Naeem
    IEEE ACCESS, 2024, 12 : 109714 - 109734
  • [4] Few-Shot Fine-Tuning SOTA Summarization Models for Medical Dialogues
    Navarro, David Fraile
    Dras, Mark
    Berkovsky, Shlomo
    NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES: PROCEEDINGS OF THE STUDENT RESEARCH WORKSHOP, 2022, : 254 - 266
  • [5] Transfer Learning for Sentiment Analysis Using BERT Based Supervised Fine-Tuning
    Prottasha, Nusrat Jahan
    Sami, Abdullah As
    Kowsher, Md
    Murad, Saydul Akbar
    Bairagi, Anupam Kumar
    Masud, Mehedi
    Baz, Mohammed
    SENSORS, 2022, 22 (11)
  • [6] Fine-tuning XLNet for Amazon review sentiment analysis: A comparative evaluation of transformer models
    Shetty, Amrithkala M.
    Manjaiah, D. H.
    Aljunid, Mohammed Fadhel
    ETRI JOURNAL, 2025
  • [7] Pathologies of Pre-trained Language Models in Few-shot Fine-tuning
    Chen, Hanjie
    Zheng, Guoqing
    Awadallah, Ahmed Hassan
    Ji, Yangfeng
    PROCEEDINGS OF THE THIRD WORKSHOP ON INSIGHTS FROM NEGATIVE RESULTS IN NLP (INSIGHTS 2022), 2022, : 144 - 153
  • [8] Exploring Few-Shot Fine-Tuning Strategies for Models of Visually Grounded Speech
    Miller, Tyler
    Harwath, David
    INTERSPEECH 2022, 2022, : 1416 - 1420
  • [9] Fine-Tuning of CLIP in Few-Shot Scenarios via Supervised Contrastive Learning
    Luo, Jing
    Wu, Guangxing
    Liu, Hongmei
    Wang, Ruixuan
    PATTERN RECOGNITION AND COMPUTER VISION, PT III, PRCV 2024, 2025, 15033 : 104 - 117
  • [10] UserAdapter: Few-Shot User Learning in Sentiment Analysis
    Zhong, Wanjun
    Tang, Duyu
    Wang, Jiahai
    Yin, Jian
    Duan, Nan
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021, : 1484 - 1488