Aspect-Based API Review Classification: How Far Can Pre-Trained Transformer Model Go?

Cited by: 17
Authors
Yang, Chengran [1 ]
Xu, Bowen [1 ]
Khan, Junaed Younus [2 ]
Uddin, Gias [2 ]
Han, Donggyun [1 ]
Yang, Zhou [1 ]
Lo, David [1 ]
Affiliations
[1] Singapore Management Univ, Sch Comp & Informat Syst, Singapore, Singapore
[2] Univ Calgary, Dept Elect & Comp Engn, Calgary, AB, Canada
Funding
Natural Sciences and Engineering Research Council of Canada
Keywords
software mining; natural language processing; multi-label classification; pre-trained models;
DOI
10.1109/SANER53432.2022.00054
Chinese Library Classification
TP31 [Computer Software]
Discipline Classification Codes
081202; 0835
Abstract
APIs (Application Programming Interfaces) are reusable software libraries and serve as building blocks for modern rapid software development. Previous research shows that programmers frequently share and search for reviews of APIs on mainstream software question-and-answer (Q&A) platforms such as Stack Overflow, which motivates researchers to design tasks and approaches for processing API reviews automatically. Among these tasks, classifying API reviews into different aspects (e.g., performance or security), known as aspect-based API review classification, is of great importance. The current state-of-the-art (SOTA) solution to this task is based on a traditional machine learning algorithm. Inspired by the success of pre-trained models on many software engineering tasks, this study fine-tunes six pre-trained models for the aspect-based API review classification task and compares them with the current SOTA solution on an API review benchmark collected by Uddin et al. The investigated models include four models pre-trained on natural language corpora (BERT, RoBERTa, ALBERT and XLNet), BERTOverflow, which is pre-trained on a text corpus extracted from Stack Overflow posts, and CosSensBERT, which is designed to handle imbalanced data. The results show that all six fine-tuned models outperform the traditional machine learning-based tool; more specifically, the improvement in F1-score ranges from 21.0% to 30.2%. We also find that BERTOverflow, despite being pre-trained on the Stack Overflow corpus, does not perform better than BERT. CosSensBERT likewise does not outperform BERT in terms of F1-score, but it is still worth considering because it achieves better MCC and AUC.
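The setup described in the abstract, fine-tuning a pre-trained transformer with a multi-label classification head on labelled API review sentences, can be sketched as below. This is a minimal illustration and not the authors' implementation: it assumes the Hugging Face Transformers library, and the aspect names and the example sentence are illustrative placeholders (the exact aspect set follows Uddin et al.'s benchmark).

```python
# Minimal sketch (not the authors' code): fine-tuning BERT for multi-label
# aspect-based API review classification with Hugging Face Transformers.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative aspect labels; the exact set is defined by Uddin et al.'s benchmark.
ASPECTS = ["Performance", "Security", "Usability", "Documentation",
           "Compatibility", "Portability", "Bug", "Community", "Legal",
           "OnlySentiment", "Other"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(ASPECTS),
    problem_type="multi_label_classification",  # uses a sigmoid + BCE loss per aspect
)

# One hypothetical labelled review sentence touching both Performance and Usability.
texts = ["The new HTTP client is fast, but its builder API is confusing."]
labels = torch.zeros(len(texts), len(ASPECTS))
labels[0, ASPECTS.index("Performance")] = 1.0
labels[0, ASPECTS.index("Usability")] = 1.0

batch = tokenizer(texts, padding=True, truncation=True, max_length=128,
                  return_tensors="pt")
outputs = model(**batch, labels=labels)  # forward pass returns loss and logits
outputs.loss.backward()                  # one fine-tuning gradient step (optimizer omitted)

# At inference time, each aspect is predicted independently via a sigmoid threshold.
probs = torch.sigmoid(outputs.logits)
predicted = [ASPECTS[i] for i in (probs[0] > 0.5).nonzero().flatten().tolist()]
print(predicted)
```

The task is multi-label rather than multi-class because a single review sentence can discuss several aspects at once, so each aspect is scored independently and thresholded rather than chosen via a softmax over mutually exclusive classes.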
Pages: 385-395
Number of pages: 11