Automated Feature Document Review via Interpretable Deep Learning

Cited by: 0
Authors
Ye, Ming [1 ]
Chen, Yuanfan [1 ]
Zhang, Xin [2 ]
He, Jinning [1 ]
Cao, Jicheng [1 ]
Liu, Dong [1 ]
Gao, Jing [1 ]
Dai, Hailiang [1 ]
Cheng, Shengyu [1 ]
Affiliations
[1] ZTE Corp, Ctr Res Inst, Shenzhen, Peoples R China
[2] Peking Univ, Sch Comp Sci, Beijing, Peoples R China
Keywords
Feature Documents; Agile Methodology; Neural Networks; Interpretable Deep Learning;
DOI
10.1109/ICSE-Companion58688.2023.00101
CLC Classification
TP31 [Computer Software];
Subject Classification
081202; 0835;
Abstract
A feature in the agile methodology is a function of a product that delivers business value and meets stakeholders' requirements. Developers compile and store the content of features in a structured feature document. Feature documents play a critical role in controlling software development at a macro level. It is therefore important to ensure the quality of feature documents so that defects are not introduced at the outset. Manual review is an effective activity to ensure quality, but it is human-intensive and challenging. In this paper, we propose a feature document review tool to automate the process of manual review (quality classification, and suggestion generation) based on neural networks and interpretable deep learning. Our goal is to reduce human effort in reviewing feature documents and to prompt authors to craft better feature documents. We have evaluated our tool on a real industrial project from ZTE Corporation. The results show that our quality classification model achieved 75.6% precision and 94.4% recall for poor quality feature documents. For the suggestion generation model, about 70% of the poor quality feature documents could be improved to the qualified level in three rounds of revision based on the suggestions. User feedback shows that our tool helps users save an average of 15.9% of their time.
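The abstract reports precision and recall for the "poor quality" class of the quality classification model. As a minimal illustration of how such metrics are computed (using hypothetical labels, not the authors' data or pipeline):

```python
# Sketch of the precision/recall evaluation described in the abstract.
# "poor" is treated as the positive class; the labels below are
# hypothetical and only illustrate the metric definitions.

def precision_recall(y_true, y_pred, positive="poor"):
    """Return (precision, recall) for the given positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical review outcomes for ten feature documents.
truth = ["poor", "poor", "qualified", "poor", "qualified",
         "qualified", "poor", "poor", "qualified", "qualified"]
preds = ["poor", "poor", "poor", "poor", "qualified",
         "qualified", "poor", "qualified", "qualified", "qualified"]

p, r = precision_recall(truth, preds)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.80 recall=0.80
```

A high recall on the "poor" class, as reported in the paper, means few defective documents slip past the reviewer unflagged, at the cost of some false alarms.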
Pages: 351-354
Page count: 4
Related Papers (50 total)
  • [1] Deep Natural Language Feature Learning for Interpretable Prediction
    Urrutia, Felipe
    Buc, Cristian
    Barriere, Valentin
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 3736 - 3763
  • [2] Feature Analysis Network: An Interpretable Idea in Deep Learning
    Li, Xinyu
    Gao, Xiaoguang
    Wang, Qianglong
    Wang, Chenfeng
    Li, Bo
    Wan, Kaifang
    COGNITIVE COMPUTATION, 2024, 16 (03) : 803 - 826
  • [3] Degradation stage classification via interpretable feature learning
    Alfeo, Antonio L.
    Cimino, Mario G. C. A.
    Vaglini, Gigliola
    JOURNAL OF MANUFACTURING SYSTEMS, 2022, 62 : 972 - 983
  • [5] Sparse Neural Additive Model: Interpretable Deep Learning with Feature Selection via Group Sparsity
    Xu, Shiyun
    Bu, Zhiqi
    Chaudhari, Pratik
    Barnett, Ian J.
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, ECML PKDD 2023, PT III, 2023, 14171 : 343 - 359
  • [6] Industry return prediction via interpretable deep learning
    Zografopoulos, Lazaros
    Iannino, Maria Chiara
    Psaradellis, Ioannis
    Sermpinis, Georgios
    EUROPEAN JOURNAL OF OPERATIONAL RESEARCH, 2025, 321 (01) : 257 - 268
  • [7] Automated and Interpretable Deep Learning For Carotid Plaque Analysis Using Ultrasound
    Bhatt, Nitish
    Nedadur, Rashmi
    Warren, Blair
    Mafeld, Sebastian
    Raju, Sneha
    Fish, Jason E.
    Wang, Bo
    Howe, Kathryn L.
    JOURNAL OF VASCULAR SURGERY, 2023, 78 (04) : E82 - E83
  • [8] Dynamic and Interpretable State Representation for Deep Reinforcement Learning in Automated Driving
    Hejase, Bilal
    Yurtsever, Ekim
    Han, Teawon
    Singh, Baljeet
    Filev, Dimitar P.
    Tseng, H. Eric
    Ozguner, Umit
    IFAC PAPERSONLINE, 2022, 55 (24): : 129 - 134
  • [9] Automated detection of glaucoma using retinal images with interpretable deep learning
    Mehta, Parmita
    Lee, Aaron Y.
    Wen, Joanne
    Bannit, Michael R.
    Chen, Philip P.
    Bojikian, Karine D.
    Petersen, Christine
    Egan, Catherine A.
    Lee, Su-In
    Balazinska, Magdalena
    Rokem, Ariel
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2020, 61 (07)
  • [10] Dr.Deep: Interpretable Evaluation of Patient Health Status via Clinical Feature's Context Learning
    Ma L.
    Zhang C.
    Jiao X.
    Wang Y.
    Tang W.
    Zhao J.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2021, 58 (12): : 2645 - 2659