Model-based diversity-driven learn-to-rank test case prioritization

Cited: 0
Authors
Shu, Ting [1 ]
He, Zhanxiang [1 ]
Yin, Xuesong [2 ]
Ding, Zuohua [1 ]
Zhou, Mengchu [3 ]
Affiliations
[1] Zhejiang Sci Tech Univ, Phys Dept, Hangzhou 310018, Peoples R China
[2] Hangzhou Dianzi Univ, Sch Media & Design, Hangzhou 310018, Peoples R China
[3] New Jersey Inst Technol, Dept Elect & Comp Engn, Newark, NJ 07102 USA
Funding
National Natural Science Foundation of China;
Keywords
Model based testing; Similarity metric; Machine learning; Test case prioritization; FEATURE-SELECTION; SEQUENCE; SOFTWARE; CONTEXT; SET;
DOI
10.1016/j.eswa.2024.124768
CLC classification
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Model-based test case prioritization (TCP) using similarity metrics has proved effective in software testing. However, the utility of a given similarity metric varies across test scenarios, hindering universal effectiveness and performance optimization. To tackle this problem, we propose a diversity-driven learn-to-rank model-based TCP approach, named DLTCP, to optimize early fault detection. Our method first employs the whale optimization algorithm to search for a suitable set of similarity metrics from a pool of existing candidates; this search determines which metrics should be used. Test cases are then prioritized according to each selected metric, and the resulting rankings serve as the training data for DLTCP. Finally, the proposed method uses random forest to train a ranking model for test case prioritization, thereby fusing multiple similarity metrics to improve TCP performance. We conduct extensive experiments to evaluate our method using the average percentage of faults detected (APFD) as the evaluation metric. The experimental results show that DLTCP achieves an average APFD of 0.953 on seven classic benchmark models, which is 11.37% higher than that of state-of-the-art algorithms. It can effectively select a set of similarity metrics for fusion, demonstrating competitive performance in early fault detection.
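The APFD measure used in the evaluation has a standard closed form: for a suite of n tests and m faults, APFD = 1 - (Σ TF_i)/(n·m) + 1/(2n), where TF_i is the 1-based position of the first test in the ordering that detects fault i. A minimal sketch in Python, assuming a simple boolean fault matrix (the function name and data encoding are illustrative, not from the paper):

```python
def apfd(order, fault_matrix):
    """Average Percentage of Faults Detected for a test ordering.

    order: list of test indices in execution order.
    fault_matrix: fault_matrix[t][f] is True if test t detects fault f.
    """
    n = len(order)            # number of test cases
    m = len(fault_matrix[0])  # number of faults
    total = 0
    for f in range(m):
        # 1-based position of the first test in `order` that detects fault f
        tf = next(i + 1 for i, t in enumerate(order) if fault_matrix[t][f])
        total += tf
    return 1 - total / (n * m) + 1 / (2 * n)

# Toy example: 4 tests, 3 faults.
faults = [
    [True,  False, False],  # test 0 detects fault 0
    [False, True,  True],   # test 1 detects faults 1 and 2
    [False, False, True],   # test 2 detects fault 2
    [True,  True,  False],  # test 3 detects faults 0 and 1
]
print(apfd([1, 3, 0, 2], faults))  # -> 0.7916666666666667
print(apfd([2, 0, 3, 1], faults))  # -> 0.625
```

Orderings that reveal faults earlier score closer to 1, which is why the reported average of 0.953 indicates strong early fault detection.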
Pages: 20
Related papers
50 in total
  • [1] A Learn-to-Rank Method for Model-Based Regression Test Case Prioritization
    Huang, Yechao
    Shu, Ting
    Ding, Zuohua
    IEEE ACCESS, 2021, 9 : 16365 - 16382
  • [2] Model-based regression test case prioritization
    Panigrahi C.R.
    Mall R.
    Communications in Computer and Information Science, 2010, 54 : 380 - 385
  • [3] Model-Based Regression Test Case Prioritization
    Panigrahi, Chhabi Rani
    Mall, Rajib
    INFORMATION SYSTEMS, TECHNOLOGY AND MANAGEMENT, PROCEEDINGS, 2010, 54 : 380 - 385
  • [4] A Platform for Diversity-Driven Test Amplification
    Kessel, Marcus
    Atkinson, Colin
    PROCEEDINGS OF THE 10TH ACM SIGSOFT INTERNATIONAL WORKSHOP ON AUTOMATING TEST CASE DESIGN, SELECTION, AND EVALUATION (A-TEST '19), 2019, : 35 - 41
  • [5] Diversity-driven unit test generation
    Kessel, Marcus
    Atkinson, Colin
    JOURNAL OF SYSTEMS AND SOFTWARE, 2022, 193
  • [6] DTester: Diversity-Driven Test Case Generation for Web Applications
    Wu, Shumei
    Chang, Zexing
    Zhang, Zhanwen
    Li, Zheng
    Liu, Yong
    INTERNATIONAL JOURNAL OF SOFTWARE ENGINEERING AND KNOWLEDGE ENGINEERING, 2024, 34 (02) : 357 - 390
  • [7] Model-Based Test Case Prioritization Using ACO: A review
    Sharma, Sonia
    Singh, Ajmer
    2016 FOURTH INTERNATIONAL CONFERENCE ON PARALLEL, DISTRIBUTED AND GRID COMPUTING (PDGC), 2016, : 177 - 181
  • [8] Enhanced Adaptive Random Test Case Prioritization for Model-based Test Suites
    Pospisil, Tomas
    Sobotka, Jan
    Novak, Jiri
    ACTA POLYTECHNICA HUNGARICA, 2020, 17 (07) : 125 - 144
  • [9] Model-based test case generation and prioritization: a systematic literature review
    Mohd-Shafie, Muhammad Luqman
    Kadir, Wan Mohd Nasir Wan
    Lichter, Horst
    Khatibsyarbini, Muhammad
    Isa, Mohd Adham
    SOFTWARE AND SYSTEMS MODELING, 2022, 21 (02): : 717 - 753
  • [10] Test case prioritization techniques for model-based testing: a replicated study
    João Felipe S. Ouriques
    Emanuela G. Cartaxo
    Patrícia D. L. Machado
    Software Quality Journal, 2018, 26 : 1451 - 1482