µOpTime: Statically Reducing the Execution Time of Microbenchmark Suites Using Stability Metrics
Software Engineering (cs.SE)
Distributed, Parallel, and Cluster Computing (cs.DC)
DOI: 10.1145/3715322
Publication Date: 2025-01-29T15:45:49Z
ABSTRACT
Performance regressions have a tremendous impact on the quality of software. One way to catch regressions before they reach production is to execute performance tests before deployment, e.g., using microbenchmarks, which measure performance at the subroutine level. In projects with many microbenchmarks, this can take several hours because each microbenchmark is executed repeatedly to obtain accurate results, disqualifying microbenchmarks from frequent use in CI/CD pipelines. We propose µOpTime, a static approach that reduces the execution time of a microbenchmark suite by configuring the number of repetitions for each microbenchmark. Based on the results of a previous full run of the suite, µOpTime uses statistical stability metrics to determine the minimal number of (measurement) repetitions that still leads to accurate results. We evaluate µOpTime in an experimental study on 14 open-source projects written in two programming languages, using five stability metrics. Our results show that (i) µOpTime reduces the total suite execution time (measurement phase) by up to 95.83% (Go) and 94.17% (Java), (ii) the choice of stability metric depends on the project and programming language, (iii) microbenchmark warmup phases have to be considered for Java projects (potentially leading to higher reductions), and (iv) µOpTime can be used to reliably detect performance regressions in CI/CD pipelines.
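The abstract's core idea is a prefix search over a previous full run: for each microbenchmark, keep the smallest number of measurement repetitions whose results are still statistically stable. Below is a minimal sketch of that selection in Go, assuming the coefficient of variation as the stability metric and a fixed threshold; the function names, the threshold value, and the prefix-based selection rule are illustrative assumptions, not the paper's exact algorithm.

```go
package main

import (
	"fmt"
	"math"
)

// cv returns the coefficient of variation (sample standard deviation
// divided by the mean) of the given samples.
func cv(samples []float64) float64 {
	n := float64(len(samples))
	var sum float64
	for _, s := range samples {
		sum += s
	}
	mean := sum / n
	var sq float64
	for _, s := range samples {
		sq += (s - mean) * (s - mean)
	}
	return math.Sqrt(sq/(n-1)) / mean
}

// minRepetitions returns the smallest prefix length r >= 2 of the
// per-repetition means from a previous full run whose coefficient of
// variation is at most the threshold, falling back to the full
// repetition count when no shorter prefix is stable enough.
func minRepetitions(repMeans []float64, threshold float64) int {
	for r := 2; r <= len(repMeans); r++ {
		if cv(repMeans[:r]) <= threshold {
			return r
		}
	}
	return len(repMeans)
}

func main() {
	// Hypothetical per-repetition mean latencies (ns/op) from a
	// previous full run of one microbenchmark with 10 repetitions.
	repMeans := []float64{105, 102, 104, 103, 104, 103, 105, 104, 103, 104}
	fmt.Println("repetitions to keep:", minRepetitions(repMeans, 0.02))
	// Prints: repetitions to keep: 3
}
```

Applied per microbenchmark, such a rule shrinks stable benchmarks to a few repetitions while leaving noisy ones at their full count, which is how a suite-wide reduction in measurement time can coexist with accurate results.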