TASKers: A Whole-System Generator for Benchmarking Real-Time-System Analyses
Benchmarking
Implementation
Control flow
DOI:
10.4230/oasics.wcet.2018.6
Publication Date:
2018-01-01
AUTHORS (5)
ABSTRACT
Implementation-based benchmarking of timing and schedulability analyses requires system code that can be executed on real hardware and that has defined properties, for example, known worst-case execution times (WCETs) of tasks. Traditional approaches for creating benchmarks with such characteristics often result in implementations that do not resemble real-world systems, either because work is only simulated by means of busy waiting, or because tasks have no control-flow dependencies between each other. In this paper, we address this problem with TASKers, a generator that constructs realistic benchmark systems with predefined properties. To achieve this, TASKers composes patterns of real-world programs to generate tasks that produce known outputs and exhibit preconfigured WCETs when executed with certain inputs. Using this knowledge during the generation process, TASKers is able to specifically introduce inter-task control-flow dependencies by mapping the output of one task to the input of another.
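To make the idea from the abstract concrete, the following minimal C sketch (not taken from the paper; all names and values are illustrative assumptions) shows two generated tasks whose execution paths depend on their inputs, and how routing the output of one task into the input of the next creates the kind of inter-task control-flow dependency TASKers introduces by construction.

```c
#include <stdint.h>

/* Hypothetical generated task: the executed path (and thus the execution
 * time) is selected by the input value, so the worst-case path can be
 * triggered with an input known at generation time. */
static uint32_t task_a(uint32_t input)
{
    uint32_t acc = 0;
    if (input & 1u) {
        /* longer path: taken for odd inputs (worst case by construction) */
        for (uint32_t i = 0; i < 64u; ++i)
            acc += (input >> (i % 32u)) & 1u;
    } else {
        /* shorter path */
        acc = input >> 1;
    }
    return acc; /* output value is known for any given input */
}

/* Second generated task: its control flow depends on its input, which is
 * fed from task_a's output, creating an inter-task dependency. */
static uint32_t task_b(uint32_t input)
{
    return (input > 16u) ? input - 16u : input * 2u;
}

int main(void)
{
    uint32_t out_a = task_a(7u);    /* known input -> known output and path */
    uint32_t out_b = task_b(out_a); /* output of one task mapped to input of another */
    return (int)(out_b & 0xFFu);
}
```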