Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks

Keywords: Transferability, Overfitting, Scale invariance
DOI: 10.48550/arXiv.1908.06281 Publication Date: 2019-08
ABSTRACT
Deep learning models are vulnerable to adversarial examples crafted by applying human-imperceptible perturbations on benign inputs. However, under the black-box setting, most existing adversaries often exhibit poor transferability when attacking other defense models. In this work, from the perspective of regarding adversarial example generation as an optimization process, we propose two new methods to improve the transferability of adversarial examples, namely the Nesterov Iterative Fast Gradient Sign Method (NI-FGSM) and the Scale-Invariant attack Method (SIM). NI-FGSM aims to adapt Nesterov accelerated gradient into iterative attacks so as to effectively look ahead and improve the transferability of adversarial examples. SIM is based on our discovery of the scale-invariant property of deep learning models, which we leverage to optimize the adversarial perturbations over scale copies of the input images so as to avoid "overfitting" the white-box model being attacked and generate more transferable adversarial examples. NI-FGSM and SIM can be naturally integrated to build a robust gradient-based attack against defense models. Empirical results on the ImageNet dataset demonstrate that our attack methods exhibit higher transferability and achieve higher attack success rates than state-of-the-art gradient-based attacks.
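The combined attack admits a compact implementation. Below is a minimal PyTorch sketch, assuming a classifier over inputs in [0, 1]; the function and parameter names (si_ni_fgsm, eps, mu, m) are illustrative assumptions, and details such as the L1 gradient normalization follow common MI-FGSM-style implementations rather than the authors' released code.

```python
# Minimal sketch of SI-NI-FGSM as described in the abstract: Nesterov
# "look-ahead" momentum combined with gradient averaging over scaled
# copies of the input. Names and defaults are illustrative assumptions.
import torch
import torch.nn as nn

def si_ni_fgsm(model, x, y, eps=16 / 255, steps=10, mu=1.0, m=5):
    """Craft adversarial examples for a batch x with true labels y."""
    alpha = eps / steps                      # per-iteration step size
    g = torch.zeros_like(x)                  # accumulated momentum
    x_adv = x.clone().detach()
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(steps):
        # Nesterov look-ahead: evaluate gradients at the anticipated point.
        x_nes = (x_adv + alpha * mu * g).detach().requires_grad_(True)

        # Scale-invariant gradient: average over scaled copies x / 2^i.
        grad = torch.zeros_like(x)
        for i in range(m):
            loss = loss_fn(model(x_nes / (2 ** i)), y)
            grad = grad + torch.autograd.grad(loss, x_nes)[0]
        grad = grad / m

        # Momentum accumulation with an L1-normalized gradient,
        # followed by a signed step (as in MI-FGSM).
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv + alpha * g.sign()

        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```

In this sketch, m controls how many scale copies contribute to the averaged gradient and mu is the momentum decay factor; setting m=1 and mu=0 would reduce the loop to a plain iterative FGSM baseline.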