Optimizing Airline Review Sentiment Analysis: A Comparative Analysis of LLaMA and BERT Models through Fine-Tuning and Few-Shot Learning

Cited: 0
Authors
Roumeliotis, Konstantinos I. [1 ]
Tselikas, Nikolaos D. [2 ]
Nasiopoulos, Dimitrios K. [3 ]
Affiliations
[1] Univ Peloponnese, Dept Digital Syst, Sparta 23100, Greece
[2] Univ Peloponnese, Dept Informat & Telecommun, Tripoli 22131, Greece
[3] Agr Univ Athens, Sch Appl Econ & Social Sci, Dept Agribusiness & Supply Chain Management, Athens 11855, Greece
Source
CMC-COMPUTERS MATERIALS & CONTINUA, 2025, Vol. 82, No. 2
Keywords
Sentiment classification; review sentiment analysis; user-generated content; domain adaptation; customer satisfaction; LLaMA model; BERT model; airline reviews; LLM classification; fine-tuning; SERVICE QUALITY
DOI
10.32604/cmc.2025.059567
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
In the rapidly evolving landscape of natural language processing (NLP) and sentiment analysis, improving the accuracy and efficiency of sentiment classification models is crucial. This paper investigates the performance of two advanced models, the Large Language Model Meta AI (LLaMA) and Bidirectional Encoder Representations from Transformers (BERT), in the context of airline review sentiment analysis. Through fine-tuning, domain adaptation, and few-shot learning, the study addresses the subtleties of sentiment expression in airline-related text. Employing predictive modeling and comparative analysis, the research evaluates how effectively each model captures sentiment intricacies. Fine-tuning, including domain adaptation, enhances the models' performance on sentiment classification tasks, and the study further explores the potential of few-shot learning to improve model generalization from minimal annotated data for targeted sentiment analysis. By conducting experiments on a diverse airline review dataset, the research quantifies the impact of fine-tuning, domain adaptation, and few-shot learning on model performance, offering insights for industries that aim to predict recommendations and enhance customer satisfaction through a deeper understanding of sentiment in user-generated content (UGC). This research contributes to refining sentiment analysis models, ultimately fostering improved customer satisfaction in the airline industry.
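As a minimal sketch of the few-shot setup the abstract describes — not the authors' actual prompts or dataset — a handful of labeled airline reviews can be packed into a single prompt for an instruction-following LLM such as LLaMA, which then labels an unseen review. The function name `build_few_shot_prompt` and the sample reviews below are hypothetical:

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot sentiment-classification prompt from
    (review, label) pairs plus one unlabeled query review."""
    parts = ["Classify each airline review as positive or negative."]
    for review, label in examples:
        parts.append(f"Review: {review}\nSentiment: {label}")
    # The prompt ends mid-pattern so the model completes the label.
    parts.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(parts)

# Illustrative shots; a real experiment would sample them from the dataset.
shots = [
    ("The crew was friendly and boarding was quick.", "positive"),
    ("My flight was delayed four hours with no updates.", "negative"),
]
prompt = build_few_shot_prompt(shots, "Legroom was cramped and the food was cold.")
```

The resulting string would be sent to the model's completion endpoint; the number of shots is the lever that few-shot learning tunes against fine-tuning's full gradient updates.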
Pages: 2769-2792 (24 pages)