Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond

Robustness
DOI: 10.48550/arxiv.2002.12920 Publication Date: 2020-01-01
ABSTRACT
Linear relaxation based perturbation analysis (LiRPA) for neural networks, which computes provable linear bounds of output neurons given a certain amount of input perturbation, has become a core component in robustness verification and certified defense. The majority of LiRPA-based methods focus on simple feed-forward networks and need particular manual derivations and implementations when extended to other architectures. In this paper, we develop an automatic framework to enable perturbation analysis on any neural network structure, by generalizing existing LiRPA algorithms such as CROWN to operate on general computational graphs. The flexibility, differentiability and ease of use of our framework allow us to obtain state-of-the-art results on LiRPA-based certified defense on fairly complicated networks like DenseNet, ResNeXt and Transformer that are not supported by prior works. Our framework also enables loss fusion, a technique that significantly reduces the computational complexity of LiRPA for certified defense. For the first time, we demonstrate LiRPA-based certified defense on Tiny ImageNet and Downscaled ImageNet, where previous approaches cannot scale due to the relatively large number of classes. Our work also yields an open-source library for the community to apply LiRPA to areas beyond certified defense without much LiRPA expertise, e.g., creating a neural network with a probably flat optimization landscape by applying LiRPA to network parameters. The open-source library is available at https://github.com/KaidiXu/auto_LiRPA.
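The abstract refers to the auto_LiRPA library linked above. As a rough illustration of the workflow it describes (wrap an arbitrary PyTorch computational graph, attach an L-infinity perturbation to the input, and obtain provable lower/upper bounds on every output neuron), here is a minimal sketch based on the usage pattern in the linked repository's README. The class and method names (BoundedModule, BoundedTensor, PerturbationLpNorm, compute_bounds) follow that README and may differ across versions; the small CNN, input shape, and eps value are purely illustrative.

```python
# Minimal sketch of output bounding with auto_LiRPA (names follow the
# repository README; exact signatures may vary between versions).
import torch
import torch.nn as nn
from auto_LiRPA import BoundedModule, BoundedTensor, PerturbationLpNorm

# Any PyTorch model expressed as a computational graph; a toy CNN here.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 32 * 32, 10),
)

# A batch of clean inputs used to trace the computational graph.
x = torch.randn(8, 3, 32, 32)
bounded_model = BoundedModule(model, x)

# Attach an L-infinity perturbation of radius eps = 8/255 to the input.
ptb = PerturbationLpNorm(norm=float("inf"), eps=8 / 255)
bounded_x = BoundedTensor(x, ptb)

# Provable lower/upper bounds on each output neuron via backward
# (CROWN-style) bound propagation generalized to the whole graph.
lb, ub = bounded_model.compute_bounds(x=(bounded_x,), method="backward")
print(lb.shape, ub.shape)  # both (8, 10): one bound per class logit
```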