Optimizing Airline Review Sentiment Analysis: A Comparative Analysis of LLaMA and BERT Models through Fine-Tuning and Few-Shot Learning

Cited by: 0
Authors
Roumeliotis, Konstantinos I. [1 ]
Tselikas, Nikolaos D. [2 ]
Nasiopoulos, Dimitrios K. [3 ]
Affiliations
[1] Univ Peloponnese, Dept Digital Syst, Sparta 23100, Greece
[2] Univ Peloponnese, Dept Informat & Telecommun, Tripoli 22131, Greece
[3] Agr Univ Athens, Sch Appl Econ & Social Sci, Dept Agribusiness & Supply Chain Management, Athens 11855, Greece
Source
CMC-COMPUTERS MATERIALS & CONTINUA | 2025, Vol. 82, No. 02
Keywords
Sentiment classification; review sentiment analysis; user-generated content; domain adaptation; customer satisfaction; LLaMA model; BERT model; airline reviews; LLM classification; fine-tuning; SERVICE QUALITY;
DOI
10.32604/cmc.2025.059567
CLC Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
In the rapidly evolving landscape of natural language processing (NLP) and sentiment analysis, improving the accuracy and efficiency of sentiment classification models is crucial. This paper investigates the performance of two advanced models, the Large Language Model (LLM) LLaMA and the NLP model BERT, in the context of airline review sentiment analysis. Through fine-tuning, domain adaptation, and the application of few-shot learning, the study addresses the subtleties of sentiment expressions in airline-related text data. Employing predictive modeling and comparative analysis, the research evaluates the effectiveness of Large Language Model Meta AI (LLaMA) and Bidirectional Encoder Representations from Transformers (BERT) in capturing sentiment intricacies. Fine-tuning, including domain adaptation, enhances the models' performance in sentiment classification tasks. Additionally, the study explores the potential of few-shot learning to improve model generalization using minimal annotated data for targeted sentiment analysis. By conducting experiments on a diverse airline review dataset, the research quantifies the impact of fine-tuning, domain adaptation, and few-shot learning on model performance, providing valuable insights for industries aiming to predict recommendations and enhance customer satisfaction through a deeper understanding of sentiment in user-generated content (UGC). This research contributes to refining sentiment analysis models, ultimately fostering improved customer satisfaction in the airline industry.
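The fine-tuning pattern the abstract describes — a pretrained encoder with a small classification head trained on a handful of annotated reviews (the few-shot setting) — can be sketched in miniature. This is a hypothetical illustration only: a toy bag-of-words encoder stands in for BERT or LLaMA, and the vocabulary, reviews, and labels are invented for the example, not drawn from the paper's dataset.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy vocabulary; a real setup would use the pretrained model's tokenizer.
VOCAB = {"flight": 0, "delayed": 1, "comfortable": 2, "rude": 3,
         "excellent": 4, "lost": 5, "crew": 6, "seat": 7}

def encode(text: str) -> torch.Tensor:
    """Bag-of-words stand-in for a transformer encoder's pooled output."""
    vec = torch.zeros(len(VOCAB))
    for tok in text.lower().split():
        if tok in VOCAB:
            vec[VOCAB[tok]] += 1.0
    return vec

# Few-shot training set: four annotated reviews (1 = positive, 0 = negative).
reviews = ["comfortable seat excellent crew",
           "flight delayed rude crew",
           "excellent flight comfortable",
           "lost seat delayed"]
labels = torch.tensor([1, 0, 1, 0])
X = torch.stack([encode(r) for r in reviews])

# Classification head fine-tuned on top of the (frozen) encoder features.
head = nn.Linear(len(VOCAB), 2)
opt = torch.optim.Adam(head.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(head(X), labels)
    loss.backward()
    opt.step()

# Classify an unseen review with the fine-tuned head.
pred = head(encode("excellent comfortable crew")).argmax().item()
print("predicted sentiment:", "positive" if pred == 1 else "negative")
```

In the paper's actual setting, the frozen encoder would be a pretrained BERT or LLaMA checkpoint and fine-tuning would also update (some of) its weights; the sketch only shows the head-on-features structure and the few-shot data regime.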
Pages: 2769-2792
Page count: 24