Boosting Adversarial Attacks with Momentum
Robustness
Boosting
Deep Neural Networks
Benchmark
Black box
DOI:
10.48550/arxiv.1710.06081
Publication Date:
2017-10-17
AUTHORS (7)
ABSTRACT
Deep neural networks are vulnerable to adversarial examples, which poses security concerns on these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most existing adversarial attacks can only fool a black-box model with a low success rate. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating the momentum term into the iterative process for attacks, our methods can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. With this method, we won the first places in the NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.
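The momentum iterative attack described in the abstract admits a compact implementation. Below is a minimal PyTorch sketch of one plausible non-targeted L_inf variant: each step L1-normalizes the current gradient, accumulates it into a velocity vector with decay factor mu, and moves along the sign of the accumulated gradient. The function name mi_fgsm, the hyperparameter defaults, and the assumed input shape (N, C, H, W) with pixels in [0, 1] are illustrative assumptions, not the authors' reference code.

import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=16 / 255, steps=10, mu=1.0):
    """Sketch of a momentum iterative attack under an L_inf budget eps."""
    alpha = eps / steps                  # per-step size
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)              # accumulated (momentum) gradient
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # L1-normalize the gradient so the momentum accumulation is
        # scale-invariant across iterations (small constant avoids /0).
        g = mu * g + grad / (grad.abs().sum(dim=(1, 2, 3), keepdim=True) + 1e-12)
        # Gradient-ascent step on the loss, then project back into the
        # eps-ball around x and the valid pixel range [0, 1].
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv

For the ensemble attacks mentioned in the abstract, the call model(x_adv) can be replaced with an average of several models' logits before the loss is computed, so a single accumulated gradient drives the attack against all ensemble members at once.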