BERT Fine-Tuning for Sentiment Analysis on Indonesian Mobile Apps Reviews

Cited by: 10
Authors
Nugroho, Kuncahyo Setyo [1 ]
Sukmadewa, Anantha Yullian [1 ]
Wuswilahaken, Haftittah Dw [1 ]
Bachtiar, Fitra Abdurrachman [1 ]
Yudistira, Novanto [1 ]
Affiliations
[1] Brawijaya Univ, Fac Comp Sci, Dept Informat Engn, Malang, Indonesia
Keywords
BERT fine-tuning; Sentiment analysis; Apps review;
DOI
10.1145/3479645.3479679
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
User reviews play an essential role in the success of a mobile app. Reviews in textual form are unstructured data, which makes them highly complex to process for sentiment analysis. Previous approaches have often ignored the context of reviews, and the relatively small amount of data makes models prone to overfitting. BERT has been introduced as a transfer-learning approach whose pre-trained models provide better contextual representations. This study examines the effectiveness of fine-tuning BERT for sentiment analysis using two different pre-trained models: a multilingual model and a model pre-trained only on Indonesian. The dataset consists of Indonesian user reviews of the ten best apps of 2020 on Google Play. We also perform hyper-parameter tuning to find the optimal trained model. Two training-data labeling approaches, score-based and lexicon-based, were tested to determine the effectiveness of the model. The experimental results show that the model pre-trained on Indonesian has better average accuracy on lexicon-based data, achieving a highest accuracy of 84% with 25 epochs and 24 minutes of training time.
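The two labeling approaches the abstract compares can be sketched in a few lines. This is an illustrative sketch only: the star-rating thresholds and the tiny Indonesian lexicon below are assumptions for demonstration, not the paper's actual resources.

```python
def score_based_label(star_rating: int) -> str:
    """Label a review from its 1-5 star rating (assumed thresholds)."""
    if star_rating >= 4:
        return "positive"
    if star_rating <= 2:
        return "negative"
    return "neutral"


# Toy Indonesian sentiment lexicon; entries are hypothetical examples.
POSITIVE_WORDS = {"bagus", "mantap", "keren", "suka"}
NEGATIVE_WORDS = {"buruk", "jelek", "lambat", "error"}


def lexicon_based_label(review_text: str) -> str:
    """Label a review by counting positive vs. negative lexicon hits."""
    tokens = review_text.lower().split()
    pos = sum(t in POSITIVE_WORDS for t in tokens)
    neg = sum(t in NEGATIVE_WORDS for t in tokens)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"


print(score_based_label(5))                             # -> positive
print(lexicon_based_label("aplikasi bagus dan keren"))  # -> positive
```

Either function produces the sentiment labels on which a BERT classifier is then fine-tuned; the paper's finding is that the Indonesian-specific pre-trained model performs best on the lexicon-labeled data.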
Pages: 258-264
Page count: 7
Related papers (50 records total)
  • [1] A BERT Fine-tuning Model for Targeted Sentiment Analysis of Chinese Online Course Reviews
    Zhang, Huibing
    Dong, Junchao
    Min, Liang
    Bi, Peng
    INTERNATIONAL JOURNAL ON ARTIFICIAL INTELLIGENCE TOOLS, 2020, 29 (7-8)
  • [2] Transfer Learning for Sentiment Analysis Using BERT Based Supervised Fine-Tuning
    Prottasha, Nusrat Jahan
    Sami, Abdullah As
    Kowsher, Md
    Murad, Saydul Akbar
    Bairagi, Anupam Kumar
    Masud, Mehedi
    Baz, Mohammed
    SENSORS, 2022, 22 (11)
  • [3] EEBERT: An Emoji-Enhanced BERT Fine-Tuning on Amazon Product Reviews for Text Sentiment Classification
    Narejo, Komal Rani
    Zan, Hongying
    Dharmani, Kheem Parkash
    Zhou, Lijuan
    Alahmadi, Tahani Jaser
    Assam, Muhammad
    Sehito, Nabila
    Ghadi, Yazeed Yasin
    IEEE ACCESS, 2024, 12 : 131954 - 131967
  • [4] Prompt-Oriented Fine-Tuning Dual Bert for Aspect-Based Sentiment Analysis
    Yin, Wen
    Xu, Yi
    Liu, Cencen
    Zheng, Dezhang
    Wang, Qi
    Liu, Chuanjie
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PART X, 2023, 14263 : 505 - 517
  • [5] Sense-aware BERT and Multi-task Fine-tuning for Multimodal Sentiment Analysis
    Fang, Lingyong
    Liu, Gongshen
    Zhang, Ru
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [6] Fine-Tuning BERT for Multi-Label Sentiment Analysis in Unbalanced Code-Switching Text
    Tang, Tiancheng
    Tang, Xinhuai
    Yuan, Tianyi
    IEEE ACCESS, 2020, 8 (08) : 193248 - 193256
  • [7] Fine-Tuning of Word Embeddings for Semantic Sentiment Analysis
    Atzeni, Mattia
    Recupero, Diego Reforgiato
    SEMANTIC WEB CHALLENGES, SEMWEBEVAL 2018, 2018, 927 : 140 - 150
  • [8] Transfer fine-tuning of BERT with phrasal paraphrases
    Arase, Yuki
    Tsujii, Junichi
    COMPUTER SPEECH AND LANGUAGE, 2021, 66
  • [9] Energy and Carbon Considerations of Fine-Tuning BERT
    Wang, Xiaorong
    Na, Clara
    Strubell, Emma
    Friedler, Sorelle A.
    Luccioni, Sasha
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 9058 - 9069
  • [10] SPEECH RECOGNITION BY SIMPLY FINE-TUNING BERT
    Huang, Wen-Chin
    Wu, Chia-Hua
    Luo, Shang-Bao
    Chen, Kuan-Yu
    Wang, Hsin-Min
    Toda, Tomoki
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 7343 - 7347