Selection of Source Images Heavily Influences the Effectiveness of Adversarial Attacks

Keywords: Transferability, Benchmarking, Robustness, Deep Neural Networks
DOI: 10.48550/arxiv.2106.07141
Publication Date: 2021-01-01
ABSTRACT
Although the adoption rate of deep neural networks (DNNs) has tremendously increased in recent years, a solution for their vulnerability against adversarial examples has not yet been found. As a result, substantial research efforts are dedicated to fixing this weakness, with many studies typically using a subset of source images to generate adversarial examples, treating every image in this subset as equal. We demonstrate that, in fact, not every source image is equally suited for this kind of assessment. To do so, we devise a large-scale model-to-model transferability scenario in which we meticulously analyze the properties of adversarial examples generated from every suitable source image in ImageNet, making use of three of the most frequently deployed attacks. In this scenario, which involves seven distinct DNN models, including the recently proposed vision transformers, we reveal that it is possible to have a difference of up to $12.5\%$ in transferability success, $1.01$ in average $L_2$ perturbation, and $0.03$ ($8/255$) in average $L_{\infty}$ perturbation when $1,000$ source images are sampled randomly among all suitable candidates. We then take one of the first steps in evaluating the robustness of images used to create adversarial examples by proposing a number of simple but effective methods to identify unsuitable source images, thus making it possible to mitigate extreme cases in experimentation and to support high-quality benchmarking.
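
As a concrete illustration of the measurement setup the abstract describes, the sketch below crafts adversarial examples on one model and records model-to-model transfer success together with the per-image $L_2$ and $L_{\infty}$ perturbation on a second model. This is a minimal sketch under assumptions, not the paper's code: FGSM stands in for the three attacks used in the study, ResNet-50 and VGG-16 stand in for two of the seven evaluated models, and a random batch stands in for the $1,000$ sampled ImageNet source images.

# Minimal sketch (assumptions noted in comments; not the authors' code).
# Requires torch and torchvision >= 0.13 for the weights= argument.
import torch
import torchvision.models as models

def fgsm(model, x, y, eps=8/255):
    """One-step L-inf attack (FGSM); eps=8/255 is an assumed budget."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

@torch.no_grad()
def transfer_stats(target_model, x, y, x_adv):
    """Transfer success on the target model plus L2 / L-inf norms."""
    delta = (x_adv - x).flatten(1)
    fooled = target_model(x_adv).argmax(1) != y  # misclassified on target
    return {
        "transfer_success": fooled.float().mean().item(),
        "avg_l2": delta.norm(p=2, dim=1).mean().item(),
        "avg_linf": delta.abs().max(dim=1).values.mean().item(),
    }

if __name__ == "__main__":
    # Source/target model choices are assumptions for illustration.
    src = models.resnet50(weights="IMAGENET1K_V1").eval()
    tgt = models.vgg16(weights="IMAGENET1K_V1").eval()
    # Stand-in batch for randomly sampled ImageNet source images.
    x = torch.rand(8, 3, 224, 224)
    y = src(x).argmax(1)  # use source-model predictions as labels
    x_adv = fgsm(src, x, y)
    print(transfer_stats(tgt, x, y, x_adv))

Repeating this measurement over many independent random draws of source images would expose the sampling-induced spread in transfer success and perturbation size that the paper quantifies.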