A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks
Robustness
Deep Neural Networks
DOI: 10.1145/3587936
Publication Date: 2023-03-15
AUTHORS (7)
ABSTRACT
Deep Neural Networks (DNNs) are widely used for computer vision tasks. However, it has been shown that deep models are vulnerable to adversarial attacks, i.e., their performance drops when imperceptible perturbations are made to the original inputs, which may further degrade downstream visual tasks or introduce new problems such as data and privacy security. Hence, metrics for evaluating the robustness of deep models against adversarial attacks are desired. However, previous metrics were mainly proposed for shallow networks on small-scale datasets. Although the Cross Lipschitz Extreme Value for nEtwork Robustness (CLEVER) metric scales to large-scale datasets (e.g., the ImageNet dataset), it is computationally expensive and its performance relies on a tractable number of samples. In this paper, we propose the Adversarial Converging Time Score (ACTS), an attack-dependent metric that quantifies the adversarial robustness of a DNN on a specific input. Our key observation is that local neighborhoods on a DNN's output surface have different shapes for different inputs, so an attack requires a different converging time to reach an adversarial sample for each input. Based on this geometric meaning, ACTS measures the converging time as its robustness metric. We validate the effectiveness and generalization of ACTS against different adversarial attacks on the large-scale ImageNet dataset using state-of-the-art deep networks. Extensive experiments show that our ACTS metric is more efficient and effective than the previous CLEVER metric.
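The paper defines ACTS geometrically; as a rough illustration of the "converging time" idea only, the sketch below approximates it by counting the iterations an FGSM-style attack needs before the model's top-1 prediction flips. Every name here (acts_proxy, step_size, max_steps) is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def acts_proxy(model, x, label, step_size=1e-3, max_steps=100):
    """Count signed-gradient attack steps until the prediction on x flips.

    A rough proxy for ACTS's "converging time": larger values suggest the
    attack needs longer to reach an adversarial sample, i.e. the input is
    locally more robust. Returns max_steps if the prediction never flips.
    x is a single image tensor (C, H, W); label is the ground-truth class.
    """
    model.eval()
    x_adv = x.clone().detach()
    for step in range(1, max_steps + 1):
        x_adv.requires_grad_(True)
        logits = model(x_adv.unsqueeze(0))  # add batch dimension
        loss = F.cross_entropy(logits, torch.tensor([label]))
        grad, = torch.autograd.grad(loss, x_adv)
        # One signed-gradient ascent step on the loss surface (FGSM-style).
        x_adv = (x_adv + step_size * grad.sign()).detach()
        with torch.no_grad():
            if model(x_adv.unsqueeze(0)).argmax(1).item() != label:
                return step  # converged to an adversarial sample
    return max_steps
```

Counting steps mirrors the abstract's intuition: an input whose local output-surface neighborhood lets the attack converge quickly gets a low score (low robustness), while an input that resists many steps gets a high one.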