Distributionally Robust Statistical Verification with Imprecise Neural Networks

DOI: 10.48550/arxiv.2308.14815
Publication Date: 2023-01-01
ABSTRACT
A particularly challenging problem in AI safety is providing guarantees on the behavior of high-dimensional autonomous systems. Verification approaches centered around reachability analysis fail to scale, and purely statistical approaches are constrained by the distributional assumptions made about the sampling process. Instead, we pose a distributionally robust version of the statistical verification problem for black-box systems, where our performance guarantees hold over a large family of distributions. This paper proposes a novel approach based on a combination of active learning, uncertainty quantification, and neural network verification. A central piece of it is an ensemble technique called Imprecise Neural Networks, which provides the uncertainty estimates that guide active learning. The active learning uses an exhaustive neural-network verification tool, Sherlock, to collect samples. An evaluation on multiple physical simulators in the openAI gym Mujoco environments, with reinforcement-learned controllers, demonstrates that the approach can provide useful and scalable guarantees for high-dimensional systems.
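
A minimal sketch of the central idea (not the authors' implementation): an Imprecise Neural Network can be viewed as an ensemble whose per-input prediction is the interval spanned by its members, and the width of that interval serves as the uncertainty signal that drives active learning. The class names, the toy 1-D regression target, and the pool-based argmax query below are illustrative assumptions; in the paper, sample collection is driven by the Sherlock verifier rather than a simple pool search.

# Hypothetical sketch: ensemble-as-interval predictor guiding active learning.
import torch
import torch.nn as nn


def make_member() -> nn.Module:
    """One ensemble member; a small MLP regressor."""
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))


class ImpreciseEnsemble:
    """Independently trained members; predicts a [min, max] interval per input."""

    def __init__(self, n_members: int = 5):
        self.members = [make_member() for _ in range(n_members)]

    def fit(self, x: torch.Tensor, y: torch.Tensor, epochs: int = 200) -> None:
        for net in self.members:
            opt = torch.optim.Adam(net.parameters(), lr=1e-2)
            for _ in range(epochs):
                opt.zero_grad()
                loss = nn.functional.mse_loss(net(x), y)
                loss.backward()
                opt.step()

    @torch.no_grad()
    def interval(self, x: torch.Tensor):
        """Lower/upper envelope of the members' predictions."""
        preds = torch.stack([net(x) for net in self.members])  # (M, N, 1)
        return preds.min(dim=0).values, preds.max(dim=0).values


if __name__ == "__main__":
    torch.manual_seed(0)
    target = lambda x: torch.sin(3.0 * x)            # stand-in black-box system
    x_pool = torch.linspace(-2.0, 2.0, 200).unsqueeze(1)
    x_train = torch.tensor([[-1.5], [0.0], [1.5]])
    y_train = target(x_train)

    for step in range(5):
        ens = ImpreciseEnsemble()
        ens.fit(x_train, y_train)
        lo, hi = ens.interval(x_pool)
        width = (hi - lo).squeeze()
        x_new = x_pool[width.argmax()].unsqueeze(0)  # most "imprecise" input
        x_train = torch.cat([x_train, x_new])
        y_train = torch.cat([y_train, target(x_new)])
        print(f"step {step}: queried x={x_new.item():+.2f}, "
              f"max interval width={width.max().item():.3f}")

Each round retrains the ensemble, queries the black box at the input where the members disagree most, and repeats, so labeling effort concentrates where the surrogate model is least certain.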