PerfRanker: Prioritization of Performance Regression Tests for Collection-Intensive Software

Cited by: 32
Authors
Mostafa, Shaikh [1 ]
Wang, Xiaoyin [1 ]
Xie, Tao [2 ]
Affiliations
[1] Univ Texas San Antonio, San Antonio, TX 78249 USA
[2] Univ Illinois, Champaign, IL USA
Funding
National Science Foundation (USA)
Keywords
Performance; Regression Testing; Test Prioritization; EFFICIENT; PRECISE;
DOI
10.1145/3092703.3092725
CLC classification
TP31 [Computer Software]
Discipline codes
081202; 0835
Abstract
Performance regression testing is important but time- and resource-consuming. Developers need to detect performance regressions as early as possible to reduce their negative impact and fixing cost; however, conducting performance regression testing frequently (e.g., after each commit) is prohibitively expensive. To address this issue, in this paper we propose PerfRanker, the first approach to prioritizing test cases in performance regression testing for collection-intensive software, a common type of modern software that makes heavy use of collections. Our test prioritization is based on performance impact analysis, which estimates the performance impact of a given code revision on a given test execution. The evaluation shows that our approach ranks the top 3 test cases whose performance is most affected within the top 30% to 37% of prioritized test cases, compared with the top 65% to 79% for three baseline approaches.
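The core idea described in the abstract, ranking test cases so that those most affected by a revision run first, can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the function names and the impact scores below are hypothetical, standing in for the output of PerfRanker's performance impact analysis.

```python
# Hypothetical sketch of impact-based test prioritization: given an
# estimated performance-impact score per test (e.g., predicted change in
# execution cost caused by a code revision), run tests in descending
# order of estimated impact so regressions surface early.

def prioritize(tests, impact_estimate):
    """Return tests sorted by descending estimated performance impact."""
    return sorted(tests, key=impact_estimate, reverse=True)

# Toy impact estimates (assumed values, not from the paper).
estimates = {"test_a": 0.05, "test_b": 0.40, "test_c": 0.12}
ranked = prioritize(estimates, estimates.get)
print(ranked)  # ['test_b', 'test_c', 'test_a']
```

Under such a ranking, the evaluation metric in the abstract corresponds to how far down the prioritized list one must go to cover the tests whose performance actually regressed most.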
Pages: 23-34 (12 pages)