Robust Probabilistic Model Checking with Continuous Reward Domains

FOS: Computer and information sciences. Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Formal Languages and Automata Theory (cs.FL)
DOI: 10.48550/arXiv.2502.04530 Publication Date: 2025-02-06
ABSTRACT
Probabilistic model checking traditionally verifies properties on the expected value of a measure of interest. This restriction may fail to capture the quality of service of a significant proportion of a system's runs, especially when the probability distribution of the measure of interest is poorly represented by its expected value due to heavy-tail behaviors or multiple modalities. Recent works inspired by distributional reinforcement learning use discrete histograms to approximate integer reward distributions, but they struggle with continuous reward spaces and present challenges in balancing accuracy and scalability. We propose a novel method for handling both continuous and discrete reward distributions in Discrete Time Markov Chains using moment matching with Erlang mixtures. By analytically deriving higher-order moments through Moment Generating Functions, our method approximates the reward distribution with theoretically bounded error while preserving key statistical properties of the true distribution. This detailed distributional insight enables the formulation of robust properties based on the entire reward distribution function, rather than restricting to its expected value. We include a theoretical foundation ensuring bounded approximation errors, along with an experimental evaluation demonstrating the method's accuracy and scalability on practical model-checking problems.
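To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of the two ingredients named in the abstract: extracting higher-order moments from a Moment Generating Function by symbolic differentiation, and fitting an Erlang approximation to those moments. The target MGF, its parameters, and the single-component fit are illustrative assumptions; the paper fits Erlang mixtures, which generalize this integer-shape method-of-moments step.

```python
import sympy as sp

t = sp.symbols('t')

# Hypothetical stand-in for a DTMC reward distribution: a Gamma(shape, rate)
# with MGF M(t) = (1 - t/rate)^(-shape), valid for t < rate.
shape_true, rate_true = 3.4, 2.0
M = (1 - t / rate_true) ** (-shape_true)

# (1) Raw moments are derivatives of the MGF evaluated at t = 0.
m1 = float(sp.diff(M, t, 1).subs(t, 0))   # E[X]
m2 = float(sp.diff(M, t, 2).subs(t, 0))   # E[X^2]
var = m2 - m1 ** 2

# (2) Method of moments for a single Erlang(k, lam):
#     mean = k / lam, variance = k / lam^2, with k a positive integer.
# Rounding k introduces error; Erlang *mixtures* (as in the paper) are what
# drive the approximation error below a chosen bound.
k = max(1, round(m1 ** 2 / var))
lam = k / m1

print(f"target : mean={m1:.4f}, var={var:.4f}")
print(f"Erlang : k={k}, rate={lam:.4f}, mean={k / lam:.4f}, var={k / lam ** 2:.4f}")
```

Running the sketch shows the fitted Erlang reproducing the target mean exactly while the variance carries the integer-shape rounding error that mixture components are meant to absorb.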