Improving Black-box Adversarial Attacks with a Transfer-based Prior
DOI:
10.48550/arxiv.1906.06919
Publication Date:
2019-01-01
AUTHORS (5)
ABSTRACT
We consider the black-box adversarial setting, where the adversary has to generate perturbations without access to the target model to compute gradients. Previous methods tried to approximate the gradient either by using the transfer gradient of a surrogate white-box model, or based on query feedback. However, these methods often suffer from low attack success rates or poor query efficiency, since it is non-trivial to estimate the gradient in a high-dimensional space with limited information. To address these problems, we propose a prior-guided random gradient-free (P-RGF) method to improve black-box adversarial attacks, which takes advantage of a transfer-based prior and query information simultaneously. The transfer-based prior given by the surrogate model is appropriately integrated into our algorithm with an optimal coefficient derived by a theoretical analysis. Extensive experiments demonstrate that our method requires much fewer queries and achieves higher attack success rates compared with alternative state-of-the-art methods.
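The abstract describes estimating the gradient from queries while biasing the search with a transfer-based prior. A minimal sketch of this idea, assuming a random gradient-free (RGF) finite-difference estimator whose probe directions are tilted toward a surrogate gradient; the function name, parameters (`q`, `sigma`, `lam`), and the fixed prior weight are illustrative assumptions, not the paper's exact algorithm (which derives the coefficient optimally):

```python
import numpy as np

def prgf_gradient_estimate(loss, x, prior, q=10, sigma=1e-4, lam=0.5):
    """Sketch of a prior-guided random gradient-free (RGF) gradient estimate.

    loss  : black-box scalar loss, accessed only through queries
    x     : current input as a 1-D array
    prior : transfer gradient from a surrogate white-box model (the prior)
    q     : number of queries spent on this estimate
    sigma : finite-difference step size
    lam   : weight on the prior direction (0 = plain RGF; illustrative, the
            paper instead derives an optimal coefficient theoretically)
    """
    v = prior / (np.linalg.norm(prior) + 1e-12)    # unit prior direction
    f0 = loss(x)
    grad = np.zeros_like(x)
    for _ in range(q):
        xi = np.random.randn(*x.shape)
        xi -= xi.dot(v) * v                        # keep only the part orthogonal to v
        xi /= (np.linalg.norm(xi) + 1e-12)
        # probe direction biased toward the transfer-based prior
        u = np.sqrt(lam) * v + np.sqrt(1.0 - lam) * xi
        # finite-difference directional derivative along u
        grad += (loss(x + sigma * u) - f0) / sigma * u
    return grad / q
```

With a good prior, the biased probes keep the estimate well aligned with the true gradient while spending far fewer queries than unbiased random sampling, which is the trade-off the paper quantifies.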