Optimizing Airline Review Sentiment Analysis: A Comparative Analysis of LLaMA and BERT Models through Fine-Tuning and Few-Shot Learning

Cited by: 0
Authors
Roumeliotis, Konstantinos I. [1 ]
Tselikas, Nikolaos D. [2 ]
Nasiopoulos, Dimitrios K. [3 ]
Affiliations
[1] Univ Peloponnese, Dept Digital Syst, Sparta 23100, Greece
[2] Univ Peloponnese, Dept Informat & Telecommun, Tripoli 22131, Greece
[3] Agr Univ Athens, Sch Appl Econ & Social Sci, Dept Agribusiness & Supply Chain Management, Athens 11855, Greece
Source
CMC-COMPUTERS MATERIALS & CONTINUA | 2025, Vol. 82, No. 02
Keywords
Sentiment classification; review sentiment analysis; user-generated content; domain adaptation; customer satisfaction; LLaMA model; BERT model; airline reviews; LLM classification; fine-tuning; SERVICE QUALITY
DOI
10.32604/cmc.2025.059567
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
In the rapidly evolving landscape of natural language processing (NLP) and sentiment analysis, improving the accuracy and efficiency of sentiment classification models is crucial. This paper investigates the performance of two advanced models, the Large Language Model (LLM) LLaMA and the NLP model BERT, in the context of airline review sentiment analysis. Through fine-tuning, domain adaptation, and the application of few-shot learning, the study addresses the subtleties of sentiment expression in airline-related text data. Employing predictive modeling and comparative analysis, the research evaluates the effectiveness of Large Language Model Meta AI (LLaMA) and Bidirectional Encoder Representations from Transformers (BERT) in capturing sentiment intricacies. Fine-tuning, including domain adaptation, enhances the models' performance on sentiment classification tasks. Additionally, the study explores the potential of few-shot learning to improve model generalization using minimal annotated data for targeted sentiment analysis. By conducting experiments on a diverse airline review dataset, the research quantifies the impact of fine-tuning, domain adaptation, and few-shot learning on model performance, providing valuable insights for industries aiming to predict recommendations and enhance customer satisfaction through a deeper understanding of sentiment in user-generated content (UGC). This research contributes to refining sentiment analysis models, ultimately fostering improved customer satisfaction in the airline industry.
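To make the fine-tuning approach described in the abstract concrete, the following minimal Python sketch (an assumed setup, not the authors' code) fine-tunes bert-base-uncased for binary recommend/not-recommend classification of airline reviews using the Hugging Face transformers and datasets libraries. The sample reviews, labels, output path, and hyperparameters are illustrative assumptions.

# Minimal sketch (assumed setup, not the authors' code): fine-tune BERT
# for binary sentiment classification of airline reviews.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

# Illustrative data; label 1 = recommend, 0 = do not recommend.
data = Dataset.from_dict({
    "text": ["Smooth boarding and a friendly crew, would fly again.",
             "Two-hour delay and no updates at the gate."],
    "label": [1, 0],
})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=128),
    batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Hyperparameters are common defaults, not values from the paper.
args = TrainingArguments(output_dir="bert-airline-sentiment",
                         num_train_epochs=3,
                         per_device_train_batch_size=16,
                         learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=data).train()

The few-shot learning setup explored for LLaMA requires no gradient updates: a handful of labeled reviews are placed directly in the prompt and an instruction-tuned model completes the label for a new review. The prompt wording below is an assumption, not the paper's template.

# Few-shot prompting sketch: labeled examples go straight into the prompt.
FEW_SHOT = """Classify each airline review as Positive or Negative.

Review: Smooth boarding and a friendly crew, would fly again.
Sentiment: Positive

Review: Two-hour delay and no updates at the gate.
Sentiment: Negative

Review: {review}
Sentiment:"""

print(FEW_SHOT.format(review="Lost my luggage on both legs of the trip."))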
Pages: 2769-2792
Number of pages: 24
Related Papers
50 records in total
  • [41] Fine-Tuning BERT for Multi-Label Sentiment Analysis in Unbalanced Code-Switching Text
    Tang, Tiancheng
    Tang, Xinhuai
    Yuan, Tianyi
    IEEE ACCESS, 2020, 8(08): 193248-193256
  • [42] Pushing the Limit of Fine-Tuning for Few-Shot Learning: Where Feature Reusing Meets Cross-Scale Attention
    Chen, Ying-Yu
    Hsieh, Jun-Wei
    Li, Xin
    Chang, Ming-Ching
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38, NO 10, 2024: 11434-11442
  • [43] Fine-Tuning of Distil-BERT for Continual Learning in Text Classification: An Experimental Analysis
    Shah, Sahar
    Manzoni, Sara Lucia
    Zaman, Farooq
    Es Sabery, Fatima
    Epifania, Francesco
    Zoppis, Italo Francesco
    IEEE ACCESS, 2024, 12: 104964-104982
  • [44] LM-BFF-MS: Improving Few-Shot Fine-tuning of Language Models based on Multiple Soft Demonstration Memory
    Park, Eunhwan
    Jeon, Donghyeon
    Kim, Seonhoon
    Kang, Inho
    Na, Seung-Hoon
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 2 (SHORT PAPERS), 2022: 310-317
  • [45] Fine-tuning Pre-trained Language Models for Few-shot Intent Detection: Supervised Pre-training and Isotropization
    Zhang, Haode
    Liang, Haowen
    Zhang, Yuwei
    Zhan, Liming
    Wu, Xiao-Ming
    Lu, Xiaolei
    Lam, Albert Y. S.
    NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022: 532-542
  • [46] Toward Better Generalization of Cross-Domain Few-Shot Classification in Tibetan Character With Contrastive Learning and Meta Fine-Tuning
    Bao, Xun
    Wang, Weilan
    Wang, Xiaojuan
    Zhao, Guanzhong
    Li, Huarui
    Liu, Meiling
    IEEE ACCESS, 2024, 12: 134439-134452
  • [47] Optimizing Fine-Tuning in Quantized Language Models: An In-Depth Analysis of Key Variables
    Shen, Ao
    Lai, Zhiquan
    Li, Dongsheng
    Hu, Xiaoyu
    CMC-COMPUTERS MATERIALS & CONTINUA, 2025, 82(01): 307-325
  • [48] A multi-granularity in-context learning method for few-shot Named Entity Recognition via Knowledgeable Parameters Fine-tuning
    Zhao, Qihui
    Gao, Tianhan
    Guo, Nan
    INFORMATION PROCESSING & MANAGEMENT, 2025, 62(04)
  • [49] Advancing Sentiment Analysis in Serbian Literature: A Zero and Few-Shot Learning Approach Using the Mistral Model
    Nesic, Milica Ikonic
    Skoric, Mihailo
    Stankovic, Ranka
    Rujevic, Biljana
    Petalinkar, Sasa
    PROCEEDINGS OF THE SIXTH INTERNATIONAL CONFERENCE COMPUTATIONAL LINGUISTICS IN BULGARIA, CLIB 2024, 2024: 58-70
  • [50] LLaMA-Reviewer: Advancing Code Review Automation with Large Language Models through Parameter-Efficient Fine-Tuning
    Lu, Junyi
    Yu, Lei
    Li, Xiaojia
    Yang, Li
    Zuo, Chun
    2023 IEEE 34TH INTERNATIONAL SYMPOSIUM ON SOFTWARE RELIABILITY ENGINEERING, ISSRE, 2023: 647-658