Enhancing the Transferability of Targeted Attacks with Adversarial Perturbation Transform
DOI: 10.3390/electronics12183895
Publication Date: 2023-09-15T07:50:10Z
ABSTRACT
The transferability of adversarial examples has been proven to be a potent tool for successful attacks on target models, even in challenging black-box environments. However, the majority of current research focuses on non-targeted attacks, making it arduous to enhance targeted transferability using traditional methods. This paper identifies a crucial issue in existing gradient iteration algorithms: they generate perturbations in a fixed manner. These perturbations have a detrimental impact on subsequent computations, resulting in instability of the update direction after momentum accumulation. Consequently, targeted transferability is negatively affected. To overcome this issue, we propose an approach called Adversarial Perturbation Transform (APT), which introduces a transformation at each iteration. APT randomly samples clean patches from the original image and replaces the corresponding regions of the iterative output image. The transformed image is then used to compute the next momentum. In addition, APT can seamlessly integrate with other gradient-based algorithms while incurring minimal additional computational overhead. Experimental results demonstrate that APT significantly enhances targeted transferability when combined with existing gradient-based attacks, and our approach achieves this improvement while maintaining efficiency.
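To make the described procedure concrete, the sketch below shows how a per-iteration patch replacement could be combined with a targeted momentum iterative attack. This is a minimal illustration based only on the abstract, assuming a PyTorch-style MI-FGSM loop; function and parameter names such as apt_transform, patch_size, and num_patches are illustrative and not taken from the paper.

```python
# Minimal sketch of the APT idea, assuming a PyTorch-style targeted
# momentum iterative attack. Hyperparameter names and values are
# illustrative assumptions, not the paper's settings.
import torch
import torch.nn.functional as F


def apt_transform(x_adv, x_clean, patch_size=32, num_patches=4):
    """Randomly copy clean patches from the original image back into the
    current adversarial image before the gradient is computed."""
    x_t = x_adv.clone()
    _, _, h, w = x_t.shape
    for _ in range(num_patches):
        top = torch.randint(0, h - patch_size + 1, (1,)).item()
        left = torch.randint(0, w - patch_size + 1, (1,)).item()
        x_t[:, :, top:top + patch_size, left:left + patch_size] = \
            x_clean[:, :, top:top + patch_size, left:left + patch_size]
    return x_t


def targeted_mi_fgsm_with_apt(model, x_clean, y_target,
                              eps=16 / 255, steps=10, mu=1.0):
    """Targeted momentum iterative attack with APT applied each step."""
    alpha = eps / steps
    x_adv = x_clean.clone()
    g = torch.zeros_like(x_clean)
    for _ in range(steps):
        # APT: the gradient is taken on the patch-replaced image.
        x_t = apt_transform(x_adv, x_clean).requires_grad_(True)
        loss = F.cross_entropy(model(x_t), y_target)
        grad = torch.autograd.grad(loss, x_t)[0]
        # Momentum accumulation on the normalized gradient.
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        # Targeted attack: step toward the target class (descend the loss).
        x_adv = x_adv - alpha * g.sign()
        x_adv = x_clean + (x_adv - x_clean).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```

In this sketch the patch replacement only affects the image used for the gradient computation, while the accumulated perturbation itself is kept intact, which matches the abstract's description of using the transformed image to compute the next momentum.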