ATTSUM: A Deep Attention-Based Summarization Model for Bug Report Title Generation

Cited by: 9
Authors
Ma, Xiaoxue [1 ]
Keung, Jacky Wai [1 ]
Yu, Xiao [2 ,3 ]
Zou, Huiqi [1 ]
Zhang, Jingyu [1 ]
Li, Yishu
Affiliations
[1] City Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
[2] Wuhan Univ Technol, Sanya Sci & Educ Innovat Pk, Sanya 572024, Peoples R China
[3] Wuhan Univ Technol, Sch Comp Sci & Artificial Intelligence, Wuhan 430062, Peoples R China
Keywords
Computer bugs; Decoding; Transformers; Semantics; Training; Software; Vocabulary; Bug reports; deep learning; text summarization; title generation; transformers;
DOI
10.1109/TR.2023.3236404
CLC Classification Number
TP3 [Computing Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Concise and precise bug report titles help software developers quickly capture the highlights of a bug report. Unfortunately, bug reporters commonly fail to create high-quality titles. Recent long short-term memory (LSTM)-based sequence-to-sequence models such as iTAPE have been proposed to generate bug report titles automatically, but the text representation method and the LSTM employed in such models struggle to capture accurate semantic information and to model global dependencies among tokens effectively. This article proposes a deep attention-based summarization model (i.e., AttSum) to generate high-quality bug report titles. Specifically, AttSum adopts the encoder-decoder framework: it uses the robustly optimized bidirectional-encoder-representations-from-transformers approach (RoBERTa) to encode bug report bodies and better capture contextual semantic information, a stacked transformer decoder to generate titles automatically, and a copy mechanism to handle the rare-token problem. To validate the effectiveness of AttSum, we conduct automatic and manual evaluations on 333,563 "<body, title>" pairs of bug reports and perform a practical analysis of its ability to improve low-quality titles. The results show that AttSum outperforms the state-of-the-art baselines by a substantial margin, both on automatic evaluation metrics (e.g., by 3.4%-58.8% and 7.7%-42.3% in terms of recall-oriented understudy for gisting evaluation (ROUGE) F1 and bilingual evaluation understudy (BLEU), respectively) and on three human evaluation modalities (e.g., by 1.9%-57.5%). Moreover, we analyze the impact of the training data size on AttSum, and the results suggest that our approach is robust and generates much better titles.
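The architecture described in the abstract (a RoBERTa encoder, a stacked transformer decoder, and a copy mechanism) can be sketched compactly. The snippet below is a minimal, illustrative PyTorch sketch, not the authors' released implementation: the class names (AttSumSketch, CopyDecoder), the roberta-base checkpoint, the pointer-generator-style copy gate, all hyperparameters, and the toy input strings are assumptions made for illustration; the paper's actual decoder configuration and copy mechanism may differ.

```python
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizerFast

class CopyDecoder(nn.Module):
    """Stacked transformer decoder with a pointer-generator-style copy gate."""
    def __init__(self, d_model=768, vocab_size=50265, num_layers=6, nhead=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.generate = nn.Linear(d_model, vocab_size)  # vocabulary distribution
        self.copy_gate = nn.Linear(d_model, 1)          # p_gen: generate vs. copy

    def forward(self, tgt_ids, memory, src_ids):
        t = tgt_ids.size(1)
        # Standard causal mask so each position attends only to earlier tokens.
        causal = torch.triu(
            torch.ones(t, t, dtype=torch.bool, device=tgt_ids.device), diagonal=1)
        h = self.decoder(self.embed(tgt_ids), memory, tgt_mask=causal)  # (B, T, d)
        gen_probs = torch.softmax(self.generate(h), dim=-1)             # (B, T, V)
        # Attention of decoder states over encoder states -> copy distribution,
        # which lets rare source tokens be copied directly into the title.
        attn = torch.softmax(h @ memory.transpose(1, 2), dim=-1)        # (B, T, S)
        copy_probs = torch.zeros_like(gen_probs).scatter_add(
            -1, src_ids.unsqueeze(1).expand(-1, t, -1), attn)
        p_gen = torch.sigmoid(self.copy_gate(h))
        return p_gen * gen_probs + (1.0 - p_gen) * copy_probs

class AttSumSketch(nn.Module):
    """RoBERTa encoder over the bug report body + copy-augmented decoder."""
    def __init__(self):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        self.decoder = CopyDecoder(vocab_size=self.encoder.config.vocab_size)

    def forward(self, src_ids, src_mask, tgt_ids):
        memory = self.encoder(input_ids=src_ids,
                              attention_mask=src_mask).last_hidden_state
        return self.decoder(tgt_ids, memory, src_ids)

# Toy usage: score a candidate title against a bug report body (untrained weights).
tok = RobertaTokenizerFast.from_pretrained("roberta-base")
body = tok("App crashes when clicking Save on the settings page.", return_tensors="pt")
title = tok("App crashes on Save", return_tensors="pt")
model = AttSumSketch()
probs = model(body.input_ids, body.attention_mask, title.input_ids)
print(probs.shape)  # (1, title_length, vocab_size)
```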
Pages: 1663-1677
Number of pages: 15
Related Papers
50 records in total
  • [41] Enhancements of Attention-Based Bidirectional LSTM for Hybrid Automatic Text Summarization
    Jiang, Jiawen
    Zhang, Haiyang
    Dai, Chenxu
    Zhao, Qingjuan
    Feng, Hao
    Ji, Zhanlin
    Ganchev, Ivan
    IEEE ACCESS, 2021, 9 : 123660 - 123671
  • [42] Attention-Based Graph Summarization for Large-Scale Information Retrieval
    Shabani, Nasrin
    Beheshti, Amin
    Jolfaei, Alireza
    Wu, Jia
    Haghighi, Venus
    Najafabadi, Maryam Khanian
    Foo, Jin
    IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, 2024, 70 (03) : 6224 - 6235
  • [43] Attention-Based Abnormal-Aware Fusion Network for Radiology Report Generation
    Xie, Xiancheng
    Xiong, Yun
    Yu, Philip S.
    Li, Kangan
    Zhang, Suhua
    Zhu, Yangyong
    DATABASE SYSTEMS FOR ADVANCED APPLICATIONS, 2019, 11448 : 448 - 452
  • [44] Attention-based Deep Learning for Visual Servoing
    Wang, Bo
    Li, Yuan
    2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020, : 4388 - 4393
  • [45] Deep Attention-Based Imbalanced Image Classification
    Wang, Lituan
    Zhang, Lei
    Qi, Xiaofeng
    Yi, Zhang
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (08) : 3320 - 3330
  • [46] Attention-Based Ensemble for Deep Metric Learning
    Kim, Wonsik
    Goyal, Bhavya
    Chawla, Kunal
    Lee, Jungmin
    Kwon, Keunjoo
    COMPUTER VISION - ECCV 2018, PT I, 2018, 11205 : 760 - 777
  • [47] Attention-based Deep Multiple Instance Learning
    Ilse, Maximilian
    Tomczak, Jakub M.
    Welling, Max
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [48] Automatic Generation and Evaluation of Chinese Classical Poetry with Attention-Based Deep Neural Network
    Zhao, Jianli
    Lee, Hyo Jong
    APPLIED SCIENCES-BASEL, 2022, 12 (13):
  • [49] An Attention-based Deep Network for CTR Prediction
    Zhang, Hailong
    Yan, Jinyao
    Zhang, Yuan
ICMLC 2020: 2020 12TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND COMPUTING, 2020, : 1 - 5
  • [50] An Attention-Based Deep Learning Model for Phase-Resolved Wave Prediction
    Chen, Jialun
    Gunawan, David
    Taylor, Paul H.
    Chen, Yunzhuo
    Milne, Ian A.
    Zhao, Wenhua
    JOURNAL OF OFFSHORE MECHANICS AND ARCTIC ENGINEERING-TRANSACTIONS OF THE ASME, 2025, 147 (02):