Aspect-Based API Review Classification: How Far Can Pre-Trained Transformer Model Go?

Cited by: 17
Authors
Yang, Chengran [1 ]
Xu, Bowen [1 ]
Khan, Junaed Younus [2 ]
Uddin, Gias [2 ]
Han, Donggyun [1 ]
Yang, Zhou [1 ]
Lo, David [1 ]
Affiliations
[1] Singapore Management Univ, Sch Comp & Informat Syst, Singapore, Singapore
[2] Univ Calgary, Dept Elect & Comp Engn, Calgary, AB, Canada
Funding
Natural Sciences and Engineering Research Council of Canada
Keywords
software mining; natural language processing; multi-label classification; pre-trained models;
DOI
10.1109/SANER53432.2022.00054
Chinese Library Classification (CLC): TP31 [Computer Software]
Discipline codes: 081202; 0835
Abstract
APIs (Application Programming Interfaces) are reusable software libraries and are building blocks for modern rapid software development. Previous research shows that programmers frequently share and search for reviews of APIs on mainstream software question-and-answer (Q&A) platforms such as Stack Overflow, which motivates researchers to design tasks and approaches for processing API reviews automatically. Among these tasks, classifying API reviews into different aspects (e.g., performance or security), called aspect-based API review classification, is of great importance. The current state-of-the-art (SOTA) solution to this task is based on a traditional machine learning algorithm. Inspired by the great success achieved by pre-trained models on many software engineering tasks, this study fine-tunes six pre-trained models for the aspect-based API review classification task and compares them with the current SOTA solution on an API review benchmark collected by Uddin et al. The investigated models include four models (BERT, RoBERTa, ALBERT and XLNet) pre-trained on natural language corpora, BERTOverflow, which is pre-trained on a text corpus extracted from Stack Overflow posts, and CosSensBERT, which is designed for handling imbalanced data. The results show that all six fine-tuned models outperform the traditional machine learning-based tool; more specifically, the improvement in F1-score ranges from 21.0% to 30.2%. We also find that BERTOverflow, despite being pre-trained on a Stack Overflow corpus, does not perform better than BERT. CosSensBERT likewise does not exceed BERT in terms of F1, but it is still worth considering, as it achieves better performance on MCC and AUC.
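The abstract describes fine-tuning pre-trained transformers for multi-label, aspect-based classification of API review sentences. A minimal sketch of such a setup with the Hugging Face Transformers library follows; the aspect list, example reviews, label assignments, and hyper-parameters are illustrative assumptions, not the paper's benchmark or exact configuration.

    # Sketch: fine-tuning BERT for aspect-based (multi-label) API review classification.
    # Aspect names, reviews, labels, and hyper-parameters are illustrative placeholders.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    ASPECTS = ["performance", "security", "usability", "documentation"]  # illustrative subset

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased",
        num_labels=len(ASPECTS),
        problem_type="multi_label_classification",  # per-aspect sigmoid + BCE loss
    )

    # Toy training data: a single review may touch several aspects at once.
    reviews = [
        "This API is fast but the docs are confusing.",
        "Watch out, the default settings are insecure.",
    ]
    labels = torch.tensor([
        [1.0, 0.0, 0.0, 1.0],   # performance + documentation
        [0.0, 1.0, 0.0, 0.0],   # security
    ])

    enc = tokenizer(reviews, padding=True, truncation=True, max_length=128, return_tensors="pt")
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    model.train()
    for _ in range(3):  # a few passes over the toy batch
        out = model(**enc, labels=labels)
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    # Inference: independent probability per aspect, thresholded at 0.5.
    model.eval()
    with torch.no_grad():
        probs = torch.sigmoid(model(**enc).logits)
    predictions = (probs > 0.5).int()
    print(dict(zip(ASPECTS, predictions[0].tolist())))

Swapping the checkpoint name (e.g., roberta-base, albert-base-v2, xlnet-base-cased, or a BERTOverflow checkpoint) gives the kind of model comparison the abstract reports; the paper evaluates with F1, MCC, and AUC, which can be computed per aspect from the thresholded predictions.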
Pages: 385-395
Page count: 11